-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.12 (GNU/Linux)
 
 iQEcBAABAgAGBQJP0qm4AAoJEHm+PkMAQRiG62QIAJRNJFyVB0ZrsMPgdwLnlX4O
 5I86H7GaYXoOK/KMb2s5h4KiFggIODnyEkZi+/39tJOgGo0KrMcDlsh0owB1Iggw
 LE6iyze9I1z9wQze0+SXe7VAcvUYvsx2vgpOKvoNi97Qgn3B6onL+SAi5U+NAqJl
 0NdKmveEd42UIm7JfChHlxl8bm8YB+WcU38OkMGpRpJ/Moz9EbSjYVQg3oHrzJjy
 duiX6SD/OV4m5yCcXXmu+f41pN+SG7xENJ5r4enyi2ZF8mAyVz2goIyL2bA0AJX2
 +GbpD1sxUHkZ6yPg4tf2bmJOj0PkfZNAi8YpFxZDlP4y1pKuCTEDTBp8O2id43w=
 =Jyn8
 -----END PGP SIGNATURE-----

Merge tag 'v3.5-rc2' into perf/core

Merge in Linux 3.5-rc2 - to pick up fixes.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Ingo Molnar  2012-06-11 10:51:35 +02:00
Parents: 7eb9ba5ed3 cfaf025112
Commit:  c3e228d59b
111 changed files: 1634 additions and 866 deletions

@@ -0,0 +1,93 @@
+Pinctrl-based I2C Bus Mux
+
+This binding describes an I2C bus multiplexer that uses pin multiplexing to
+route the I2C signals, and represents the pin multiplexing configuration
+using the pinctrl device tree bindings.
+
+                                 +-----+  +-----+
+                                 | dev |  | dev |
+    +------------------------+   +-----+  +-----+
+    | SoC                    |      |        |
+    |                   /----|------+--------+
+    |   +---+   +------+     | child bus A, on first set of pins
+    |   |I2C|---|Pinmux|     |
+    |   +---+   +------+     | child bus B, on second set of pins
+    |                   \----|------+--------+--------+
+    |                        |      |        |        |
+    +------------------------+   +-----+  +-----+  +-----+
+                                 | dev |  | dev |  | dev |
+                                 +-----+  +-----+  +-----+
+
+Required properties:
+- compatible: i2c-mux-pinctrl
+- i2c-parent: The phandle of the I2C bus that this multiplexer's master-side
+  port is connected to.
+
+Also required are:
+
+* Standard pinctrl properties that specify the pin mux state for each child
+  bus. See ../pinctrl/pinctrl-bindings.txt.
+
+* Standard I2C mux properties. See mux.txt in this directory.
+
+* I2C child bus nodes. See mux.txt in this directory.
+
+For each named state defined in the pinctrl-names property, an I2C child bus
+will be created. I2C child bus numbers are assigned based on the index into
+the pinctrl-names property.
+
+The only exception is that no bus will be created for a state named "idle". If
+such a state is defined, it must be the last entry in pinctrl-names. For
+example:
+
+  pinctrl-names = "ddc", "pta", "idle"  ->  ddc = bus 0, pta = bus 1
+  pinctrl-names = "ddc", "idle", "pta"  ->  Invalid ("idle" not last)
+  pinctrl-names = "idle", "ddc", "pta"  ->  Invalid ("idle" not last)
+
+Whenever an access is made to a device on a child bus, the relevant pinctrl
+state will be programmed into hardware.
+
+If an idle state is defined, whenever an access is not being made to a device
+on a child bus, the idle pinctrl state will be programmed into hardware.
+
+If an idle state is not defined, the most recently used pinctrl state will be
+left programmed into hardware whenever no access is being made of a device on
+a child bus.
+
+Example:
+
+	i2cmux {
+		compatible = "i2c-mux-pinctrl";
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		i2c-parent = <&i2c1>;
+
+		pinctrl-names = "ddc", "pta", "idle";
+		pinctrl-0 = <&state_i2cmux_ddc>;
+		pinctrl-1 = <&state_i2cmux_pta>;
+		pinctrl-2 = <&state_i2cmux_idle>;
+
+		i2c@0 {
+			reg = <0>;
+			#address-cells = <1>;
+			#size-cells = <0>;
+
+			eeprom {
+				compatible = "eeprom";
+				reg = <0x50>;
+			};
+		};
+
+		i2c@1 {
+			reg = <1>;
+			#address-cells = <1>;
+			#size-cells = <0>;
+
+			eeprom {
+				compatible = "eeprom";
+				reg = <0x50>;
+			};
+		};
+	};

@@ -1077,7 +1077,7 @@ F: drivers/media/video/s5p-fimc/
 ARM/SAMSUNG S5P SERIES Multi Format Codec (MFC) SUPPORT
 M: Kyungmin Park <kyungmin.park@samsung.com>
 M: Kamil Debski <k.debski@samsung.com>
 M: Jeongtae Park <jtp.park@samsung.com>
 L: linux-arm-kernel@lists.infradead.org
 L: linux-media@vger.kernel.org
 S: Maintained
@@ -1743,10 +1743,10 @@ F: include/linux/can/platform/
 CAPABILITIES
 M: Serge Hallyn <serge.hallyn@canonical.com>
 L: linux-security-module@vger.kernel.org
 S: Supported
 F: include/linux/capability.h
 F: security/capability.c
 F: security/commoncap.c
 F: kernel/capability.c

 CELL BROADBAND ENGINE ARCHITECTURE
@@ -2146,11 +2146,11 @@ S: Orphan
 F: drivers/net/wan/pc300*

 CYTTSP TOUCHSCREEN DRIVER
 M: Javier Martinez Canillas <javier@dowhile0.org>
 L: linux-input@vger.kernel.org
 S: Maintained
 F: drivers/input/touchscreen/cyttsp*
 F: include/linux/input/cyttsp.h

 DAMA SLAVE for AX.25
 M: Joerg Reuter <jreuter@yaina.de>
@@ -2270,7 +2270,7 @@ F: include/linux/device-mapper.h
 F: include/linux/dm-*.h

 DIOLAN U2C-12 I2C DRIVER
-M: Guenter Roeck <guenter.roeck@ericsson.com>
+M: Guenter Roeck <linux@roeck-us.net>
 L: linux-i2c@vger.kernel.org
 S: Maintained
 F: drivers/i2c/busses/i2c-diolan-u2c.c
@@ -3145,7 +3145,7 @@ F: drivers/tty/hvc/
 HARDWARE MONITORING
 M: Jean Delvare <khali@linux-fr.org>
-M: Guenter Roeck <guenter.roeck@ericsson.com>
+M: Guenter Roeck <linux@roeck-us.net>
 L: lm-sensors@lm-sensors.org
 W: http://www.lm-sensors.org/
 T: quilt kernel.org/pub/linux/kernel/people/jdelvare/linux-2.6/jdelvare-hwmon/
@@ -4103,6 +4103,8 @@ F: drivers/scsi/53c700*
 LED SUBSYSTEM
 M: Bryan Wu <bryan.wu@canonical.com>
 M: Richard Purdie <rpurdie@rpsys.net>
+L: linux-leds@vger.kernel.org
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/cooloney/linux-leds.git
 S: Maintained
 F: drivers/leds/
 F: include/linux/leds.h
@@ -4418,6 +4420,13 @@ S: Orphan
 F: drivers/video/matrox/matroxfb_*
 F: include/linux/matroxfb.h

+MAX16065 HARDWARE MONITOR DRIVER
+M: Guenter Roeck <linux@roeck-us.net>
+L: lm-sensors@lm-sensors.org
+S: Maintained
+F: Documentation/hwmon/max16065
+F: drivers/hwmon/max16065.c
+
 MAX6650 HARDWARE MONITOR AND FAN CONTROLLER DRIVER
 M: "Hans J. Koch" <hjk@hansjkoch.de>
 L: lm-sensors@lm-sensors.org
@@ -5156,7 +5165,7 @@ F: drivers/leds/leds-pca9532.c
 F: include/linux/leds-pca9532.h

 PCA9541 I2C BUS MASTER SELECTOR DRIVER
-M: Guenter Roeck <guenter.roeck@ericsson.com>
+M: Guenter Roeck <linux@roeck-us.net>
 L: linux-i2c@vger.kernel.org
 S: Maintained
 F: drivers/i2c/muxes/i2c-mux-pca9541.c
@@ -5176,7 +5185,7 @@ S: Maintained
 F: drivers/firmware/pcdp.*

 PCI ERROR RECOVERY
 M: Linas Vepstas <linasvepstas@gmail.com>
 L: linux-pci@vger.kernel.org
 S: Supported
 F: Documentation/PCI/pci-error-recovery.txt
@@ -5306,7 +5315,7 @@ F: drivers/video/fb-puv3.c
 F: drivers/rtc/rtc-puv3.c

 PMBUS HARDWARE MONITORING DRIVERS
-M: Guenter Roeck <guenter.roeck@ericsson.com>
+M: Guenter Roeck <linux@roeck-us.net>
 L: lm-sensors@lm-sensors.org
 W: http://www.lm-sensors.org/
 W: http://www.roeck-us.net/linux/drivers/
@@ -7298,11 +7307,11 @@ F: Documentation/DocBook/uio-howto.tmpl
 F: drivers/uio/
 F: include/linux/uio*.h

-UTIL-LINUX-NG PACKAGE
+UTIL-LINUX PACKAGE
 M: Karel Zak <kzak@redhat.com>
-L: util-linux-ng@vger.kernel.org
-W: http://kernel.org/~kzak/util-linux-ng/
-T: git git://git.kernel.org/pub/scm/utils/util-linux-ng/util-linux-ng.git
+L: util-linux@vger.kernel.org
+W: http://en.wikipedia.org/wiki/Util-linux
+T: git git://git.kernel.org/pub/scm/utils/util-linux/util-linux.git
 S: Maintained

 UVESAFB DRIVER

@@ -1,7 +1,7 @@
 VERSION = 3
 PATCHLEVEL = 5
 SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc2
 NAME = Saber-toothed Squirrel

 # *DOCUMENTATION*

@@ -7,7 +7,6 @@ config ARM
	select HAVE_IDE if PCI || ISA || PCMCIA
	select HAVE_DMA_ATTRS
	select HAVE_DMA_CONTIGUOUS if (CPU_V6 || CPU_V6K || CPU_V7)
-	select CMA if (CPU_V6 || CPU_V6K || CPU_V7)
	select HAVE_MEMBLOCK
	select RTC_LIB
	select SYS_SUPPORTS_APM_EMULATION

@@ -268,10 +268,8 @@ static int __init consistent_init(void)
	unsigned long base = consistent_base;
	unsigned long num_ptes = (CONSISTENT_END - base) >> PMD_SHIFT;

-#ifndef CONFIG_ARM_DMA_USE_IOMMU
-	if (cpu_architecture() >= CPU_ARCH_ARMv6)
+	if (IS_ENABLED(CONFIG_CMA) && !IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU))
		return 0;
-#endif

	consistent_pte = kmalloc(num_ptes * sizeof(pte_t), GFP_KERNEL);
	if (!consistent_pte) {
@@ -342,7 +340,7 @@ static int __init coherent_init(void)
	struct page *page;
	void *ptr;

-	if (cpu_architecture() < CPU_ARCH_ARMv6)
+	if (!IS_ENABLED(CONFIG_CMA))
		return 0;

	ptr = __alloc_from_contiguous(NULL, size, prot, &page);
@@ -704,7 +702,7 @@ static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
	if (arch_is_coherent() || nommu())
		addr = __alloc_simple_buffer(dev, size, gfp, &page);
-	else if (cpu_architecture() < CPU_ARCH_ARMv6)
+	else if (!IS_ENABLED(CONFIG_CMA))
		addr = __alloc_remap_buffer(dev, size, gfp, prot, &page, caller);
	else if (gfp & GFP_ATOMIC)
		addr = __alloc_from_pool(dev, size, &page, caller);
@@ -773,7 +771,7 @@ void arm_dma_free(struct device *dev, size_t size, void *cpu_addr,
	if (arch_is_coherent() || nommu()) {
		__dma_free_buffer(page, size);
-	} else if (cpu_architecture() < CPU_ARCH_ARMv6) {
+	} else if (!IS_ENABLED(CONFIG_CMA)) {
		__dma_free_remap(cpu_addr, size);
		__dma_free_buffer(page, size);
	} else {

@@ -21,6 +21,7 @@ KBUILD_DEFCONFIG := default_defconfig
 NM = sh $(srctree)/arch/parisc/nm
 CHECKFLAGS += -D__hppa__=1
+LIBGCC = $(shell $(CC) $(KBUILD_CFLAGS) -print-libgcc-file-name)

 MACHINE := $(shell uname -m)
 ifeq ($(MACHINE),parisc*)
@@ -79,7 +80,7 @@ kernel-y := mm/ kernel/ math-emu/
 kernel-$(CONFIG_HPUX) += hpux/

 core-y += $(addprefix arch/parisc/, $(kernel-y))
-libs-y += arch/parisc/lib/ `$(CC) -print-libgcc-file-name`
+libs-y += arch/parisc/lib/ $(LIBGCC)

 drivers-$(CONFIG_OPROFILE) += arch/parisc/oprofile/

@@ -1,3 +1,4 @@
 include include/asm-generic/Kbuild.asm

 header-y += pdc.h
+generic-y += word-at-a-time.h

@@ -1,6 +1,8 @@
 #ifndef _PARISC_BUG_H
 #define _PARISC_BUG_H

+#include <linux/kernel.h>	/* for BUGFLAG_TAINT */
+
 /*
  * Tell the user there is some problem.
  * The offending file and line are encoded in the __bug_table section.

@@ -176,8 +176,8 @@ int module_frob_arch_sections(Elf32_Ehdr *hdr,
 static inline int entry_matches(struct ppc_plt_entry *entry, Elf32_Addr val)
 {
-	if (entry->jump[0] == 0x3d600000 + ((val + 0x8000) >> 16)
-	    && entry->jump[1] == 0x396b0000 + (val & 0xffff))
+	if (entry->jump[0] == 0x3d800000 + ((val + 0x8000) >> 16)
+	    && entry->jump[1] == 0x398c0000 + (val & 0xffff))
		return 1;
	return 0;
 }
@@ -204,10 +204,9 @@ static uint32_t do_plt_call(void *location,
		entry++;
	}

-	/* Stolen from Paul Mackerras as well... */
-	entry->jump[0] = 0x3d600000+((val+0x8000)>>16);	/* lis r11,sym@ha */
-	entry->jump[1] = 0x396b0000 + (val&0xffff);	/* addi r11,r11,sym@l*/
-	entry->jump[2] = 0x7d6903a6;			/* mtctr r11 */
+	entry->jump[0] = 0x3d800000+((val+0x8000)>>16);	/* lis r12,sym@ha */
+	entry->jump[1] = 0x398c0000 + (val&0xffff);	/* addi r12,r12,sym@l*/
+	entry->jump[2] = 0x7d8903a6;			/* mtctr r12 */
	entry->jump[3] = 0x4e800420;			/* bctr */

	DEBUGP("Initialized plt for 0x%x at %p\n", val, entry);

@@ -475,6 +475,7 @@ void timer_interrupt(struct pt_regs * regs)
	struct pt_regs *old_regs;
	u64 *next_tb = &__get_cpu_var(decrementers_next_tb);
	struct clock_event_device *evt = &__get_cpu_var(decrementers);
+	u64 now;

	/* Ensure a positive value is written to the decrementer, or else
	 * some CPUs will continue to take decrementer exceptions.
@@ -509,9 +510,16 @@ void timer_interrupt(struct pt_regs * regs)
		irq_work_run();
	}

-	*next_tb = ~(u64)0;
-	if (evt->event_handler)
-		evt->event_handler(evt);
+	now = get_tb_or_rtc();
+	if (now >= *next_tb) {
+		*next_tb = ~(u64)0;
+		if (evt->event_handler)
+			evt->event_handler(evt);
+	} else {
+		now = *next_tb - now;
+		if (now <= DECREMENTER_MAX)
+			set_dec((int)now);
+	}

 #ifdef CONFIG_PPC64
	/* collect purr register values often, for accurate calculations */

@@ -91,11 +91,6 @@ extern void smp_nap(void);
 /* Enable interrupts racelessly and nap forever: helper for cpu_idle(). */
 extern void _cpu_idle(void);

-/* Switch boot idle thread to a freshly-allocated stack and free old stack. */
-extern void cpu_idle_on_new_stack(struct thread_info *old_ti,
-				  unsigned long new_sp,
-				  unsigned long new_ss10);
-
 #else /* __ASSEMBLY__ */

 /*

@@ -68,20 +68,6 @@ STD_ENTRY(KBacktraceIterator_init_current)
	jrp lr   /* keep backtracer happy */
	STD_ENDPROC(KBacktraceIterator_init_current)

-/*
- * Reset our stack to r1/r2 (sp and ksp0+cpu respectively), then
- * free the old stack (passed in r0) and re-invoke cpu_idle().
- * We update sp and ksp0 simultaneously to avoid backtracer warnings.
- */
-STD_ENTRY(cpu_idle_on_new_stack)
-	{
-	 move sp, r1
-	 mtspr SPR_SYSTEM_SAVE_K_0, r2
-	}
-	jal free_thread_info
-	j cpu_idle
-	STD_ENDPROC(cpu_idle_on_new_stack)
-
 /* Loop forever on a nap during SMP boot. */
 STD_ENTRY(smp_nap)
	nap

@@ -29,6 +29,7 @@
 #include <linux/smp.h>
 #include <linux/timex.h>
 #include <linux/hugetlb.h>
+#include <linux/start_kernel.h>
 #include <asm/setup.h>
 #include <asm/sections.h>
 #include <asm/cacheflush.h>

@@ -94,10 +94,10 @@ bs_die:
	.section ".bsdata", "a"
 bugger_off_msg:
-	.ascii	"Direct booting from floppy is no longer supported.\r\n"
-	.ascii	"Please use a boot loader program instead.\r\n"
+	.ascii	"Direct floppy boot is not supported. "
+	.ascii	"Use a boot loader program instead.\r\n"
	.ascii	"\n"
-	.ascii	"Remove disk and press any key to reboot . . .\r\n"
+	.ascii	"Remove disk and press any key to reboot ...\r\n"
	.byte	0

 #ifdef CONFIG_EFI_STUB
@@ -111,7 +111,7 @@ coff_header:
 #else
	.word	0x8664				# x86-64
 #endif
-	.word	2				# nr_sections
+	.word	3				# nr_sections
	.long	0				# TimeDateStamp
	.long	0				# PointerToSymbolTable
	.long	1				# NumberOfSymbols
@@ -158,8 +158,8 @@ extra_header_fields:
 #else
	.quad	0				# ImageBase
 #endif
-	.long	0x1000				# SectionAlignment
-	.long	0x200				# FileAlignment
+	.long	0x20				# SectionAlignment
+	.long	0x20				# FileAlignment
	.word	0				# MajorOperatingSystemVersion
	.word	0				# MinorOperatingSystemVersion
	.word	0				# MajorImageVersion
@@ -200,8 +200,10 @@ extra_header_fields:
	# Section table
 section_table:
-	.ascii	".text"
-	.byte	0
+	#
+	# The offset & size fields are filled in by build.c.
+	#
+	.ascii	".setup"
	.byte	0
	.byte	0
	.long	0
@@ -217,9 +219,8 @@ section_table:
	#
	# The EFI application loader requires a relocation section
-	# because EFI applications must be relocatable. But since
-	# we don't need the loader to fixup any relocs for us, we
-	# just create an empty (zero-length) .reloc section header.
+	# because EFI applications must be relocatable. The .reloc
+	# offset & size fields are filled in by build.c.
	#
	.ascii	".reloc"
	.byte	0
@@ -233,6 +234,25 @@ section_table:
	.word	0				# NumberOfRelocations
	.word	0				# NumberOfLineNumbers
	.long	0x42100040			# Characteristics (section flags)
+
+	#
+	# The offset & size fields are filled in by build.c.
+	#
+	.ascii	".text"
+	.byte	0
+	.byte	0
+	.byte	0
+	.long	0
+	.long	0x0				# startup_{32,64}
+	.long	0				# Size of initialized data
+						# on disk
+	.long	0x0				# startup_{32,64}
+	.long	0				# PointerToRelocations
+	.long	0				# PointerToLineNumbers
+	.word	0				# NumberOfRelocations
+	.word	0				# NumberOfLineNumbers
+	.long	0x60500020			# Characteristics (section flags)
+
 #endif /* CONFIG_EFI_STUB */

	# Kernel attributes; used by setup. This is part 1 of the

@@ -50,6 +50,8 @@ typedef unsigned int u32;
 u8 buf[SETUP_SECT_MAX*512];
 int is_big_kernel;

+#define PECOFF_RELOC_RESERVE 0x20
+
 /*----------------------------------------------------------------------*/

 static const u32 crctab32[] = {
@@ -133,11 +135,103 @@ static void usage(void)
	die("Usage: build setup system [> image]");
 }

+#ifdef CONFIG_EFI_STUB
+
+static void update_pecoff_section_header(char *section_name, u32 offset, u32 size)
+{
+	unsigned int pe_header;
+	unsigned short num_sections;
+	u8 *section;
+
+	pe_header = get_unaligned_le32(&buf[0x3c]);
+	num_sections = get_unaligned_le16(&buf[pe_header + 6]);
+
+#ifdef CONFIG_X86_32
+	section = &buf[pe_header + 0xa8];
+#else
+	section = &buf[pe_header + 0xb8];
+#endif
+
+	while (num_sections > 0) {
+		if (strncmp((char*)section, section_name, 8) == 0) {
+			/* section header size field */
+			put_unaligned_le32(size, section + 0x8);
+
+			/* section header vma field */
+			put_unaligned_le32(offset, section + 0xc);
+
+			/* section header 'size of initialised data' field */
+			put_unaligned_le32(size, section + 0x10);
+
+			/* section header 'file offset' field */
+			put_unaligned_le32(offset, section + 0x14);
+
+			break;
+		}
+		section += 0x28;
+		num_sections--;
+	}
+}
+
+static void update_pecoff_setup_and_reloc(unsigned int size)
+{
+	u32 setup_offset = 0x200;
+	u32 reloc_offset = size - PECOFF_RELOC_RESERVE;
+	u32 setup_size = reloc_offset - setup_offset;
+
+	update_pecoff_section_header(".setup", setup_offset, setup_size);
+	update_pecoff_section_header(".reloc", reloc_offset, PECOFF_RELOC_RESERVE);
+
+	/*
+	 * Modify .reloc section contents with a single entry. The
+	 * relocation is applied to offset 10 of the relocation section.
+	 */
+	put_unaligned_le32(reloc_offset + 10, &buf[reloc_offset]);
+	put_unaligned_le32(10, &buf[reloc_offset + 4]);
+}
+
+static void update_pecoff_text(unsigned int text_start, unsigned int file_sz)
+{
+	unsigned int pe_header;
+	unsigned int text_sz = file_sz - text_start;
+
+	pe_header = get_unaligned_le32(&buf[0x3c]);
+
+	/* Size of image */
+	put_unaligned_le32(file_sz, &buf[pe_header + 0x50]);
+
+	/*
+	 * Size of code: Subtract the size of the first sector (512 bytes)
+	 * which includes the header.
+	 */
+	put_unaligned_le32(file_sz - 512, &buf[pe_header + 0x1c]);
+
+#ifdef CONFIG_X86_32
+	/*
+	 * Address of entry point.
+	 *
+	 * The EFI stub entry point is +16 bytes from the start of
+	 * the .text section.
+	 */
+	put_unaligned_le32(text_start + 16, &buf[pe_header + 0x28]);
+#else
+	/*
+	 * Address of entry point. startup_32 is at the beginning and
+	 * the 64-bit entry point (startup_64) is always 512 bytes
+	 * after. The EFI stub entry point is 16 bytes after that, as
+	 * the first instruction allows legacy loaders to jump over
+	 * the EFI stub initialisation
+	 */
+	put_unaligned_le32(text_start + 528, &buf[pe_header + 0x28]);
+#endif /* CONFIG_X86_32 */
+
+	update_pecoff_section_header(".text", text_start, text_sz);
+}
+
+#endif /* CONFIG_EFI_STUB */
+
 int main(int argc, char ** argv)
 {
-#ifdef CONFIG_EFI_STUB
-	unsigned int file_sz, pe_header;
-#endif
	unsigned int i, sz, setup_sectors;
	int c;
	u32 sys_size;
@@ -163,6 +257,12 @@ int main(int argc, char ** argv)
		die("Boot block hasn't got boot flag (0xAA55)");
	fclose(file);

+#ifdef CONFIG_EFI_STUB
+	/* Reserve 0x20 bytes for .reloc section */
+	memset(buf+c, 0, PECOFF_RELOC_RESERVE);
+	c += PECOFF_RELOC_RESERVE;
+#endif
+
	/* Pad unused space with zeros */
	setup_sectors = (c + 511) / 512;
	if (setup_sectors < SETUP_SECT_MIN)
@@ -170,6 +270,10 @@ int main(int argc, char ** argv)
	i = setup_sectors*512;
	memset(buf+c, 0, i-c);

+#ifdef CONFIG_EFI_STUB
+	update_pecoff_setup_and_reloc(i);
+#endif
+
	/* Set the default root device */
	put_unaligned_le16(DEFAULT_ROOT_DEV, &buf[508]);
@@ -194,66 +298,8 @@ int main(int argc, char ** argv)
	put_unaligned_le32(sys_size, &buf[0x1f4]);

 #ifdef CONFIG_EFI_STUB
-	file_sz = sz + i + ((sys_size * 16) - sz);
-
-	pe_header = get_unaligned_le32(&buf[0x3c]);
-
-	/* Size of image */
-	put_unaligned_le32(file_sz, &buf[pe_header + 0x50]);
-
-	/*
-	 * Subtract the size of the first section (512 bytes) which
-	 * includes the header and .reloc section. The remaining size
-	 * is that of the .text section.
-	 */
-	file_sz -= 512;
-
-	/* Size of code */
-	put_unaligned_le32(file_sz, &buf[pe_header + 0x1c]);
-
-#ifdef CONFIG_X86_32
-	/*
-	 * Address of entry point.
-	 *
-	 * The EFI stub entry point is +16 bytes from the start of
-	 * the .text section.
-	 */
-	put_unaligned_le32(i + 16, &buf[pe_header + 0x28]);
-
-	/* .text size */
-	put_unaligned_le32(file_sz, &buf[pe_header + 0xb0]);
-
-	/* .text vma */
-	put_unaligned_le32(0x200, &buf[pe_header + 0xb4]);
-
-	/* .text size of initialised data */
-	put_unaligned_le32(file_sz, &buf[pe_header + 0xb8]);
-
-	/* .text file offset */
-	put_unaligned_le32(0x200, &buf[pe_header + 0xbc]);
-#else
-	/*
-	 * Address of entry point. startup_32 is at the beginning and
-	 * the 64-bit entry point (startup_64) is always 512 bytes
-	 * after. The EFI stub entry point is 16 bytes after that, as
-	 * the first instruction allows legacy loaders to jump over
-	 * the EFI stub initialisation
-	 */
-	put_unaligned_le32(i + 528, &buf[pe_header + 0x28]);
-
-	/* .text size */
-	put_unaligned_le32(file_sz, &buf[pe_header + 0xc0]);
-
-	/* .text vma */
-	put_unaligned_le32(0x200, &buf[pe_header + 0xc4]);
-
-	/* .text size of initialised data */
-	put_unaligned_le32(file_sz, &buf[pe_header + 0xc8]);
-
-	/* .text file offset */
-	put_unaligned_le32(0x200, &buf[pe_header + 0xcc]);
-#endif /* CONFIG_X86_32 */
+	update_pecoff_text(setup_sectors * 512, sz + i + ((sys_size * 16) - sz));
 #endif /* CONFIG_EFI_STUB */

	crc = partial_crc32(buf, i, crc);
	if (fwrite(buf, 1, i, stdout) != i)

@@ -54,6 +54,20 @@ struct nmiaction {
	__register_nmi_handler((t), &fn##_na);	\
 })

+/*
+ * For special handlers that register/unregister in the
+ * init section only. This should be considered rare.
+ */
+#define register_nmi_handler_initonly(t, fn, fg, n)	\
+({							\
+	static struct nmiaction fn##_na __initdata = {	\
+		.handler = (fn),			\
+		.name = (n),				\
+		.flags = (fg),				\
+	};						\
+	__register_nmi_handler((t), &fn##_na);		\
+})
+
 int __register_nmi_handler(unsigned int, struct nmiaction *);
 void unregister_nmi_handler(unsigned int, const char *);

@@ -149,7 +149,6 @@
 /* 4 bits of software ack period */
 #define UV2_ACK_MASK			0x7UL
 #define UV2_ACK_UNITS_SHFT		3
-#define UV2_LEG_SHFT UV2H_LB_BAU_MISC_CONTROL_USE_LEGACY_DESCRIPTOR_FORMATS_SHFT
 #define UV2_EXT_SHFT UV2H_LB_BAU_MISC_CONTROL_ENABLE_EXTENDED_SB_STATUS_SHFT

 /*

@@ -20,7 +20,6 @@
 #include <linux/bitops.h>
 #include <linux/ioport.h>
 #include <linux/suspend.h>
-#include <linux/kmemleak.h>
 #include <asm/e820.h>
 #include <asm/io.h>
 #include <asm/iommu.h>
@@ -95,11 +94,6 @@ static u32 __init allocate_aperture(void)
		return 0;
	}
	memblock_reserve(addr, aper_size);
-	/*
-	 * Kmemleak should not scan this block as it may not be mapped via the
-	 * kernel direct mapping.
-	 */
-	kmemleak_ignore(phys_to_virt(addr));
	printk(KERN_INFO "Mapping aperture over %d KB of RAM @ %lx\n",
			aper_size >> 10, addr);
	insert_aperture_resource((u32)addr, aper_size);

@@ -1195,7 +1195,7 @@ static void __clear_irq_vector(int irq, struct irq_cfg *cfg)
	BUG_ON(!cfg->vector);

	vector = cfg->vector;
-	for_each_cpu_and(cpu, cfg->domain, cpu_online_mask)
+	for_each_cpu(cpu, cfg->domain)
		per_cpu(vector_irq, cpu)[vector] = -1;

	cfg->vector = 0;
@@ -1203,7 +1203,7 @@ static void __clear_irq_vector(int irq, struct irq_cfg *cfg)
	if (likely(!cfg->move_in_progress))
		return;
-	for_each_cpu_and(cpu, cfg->old_domain, cpu_online_mask) {
+	for_each_cpu(cpu, cfg->old_domain) {
		for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS;
								vector++) {
			if (per_cpu(vector_irq, cpu)[vector] != irq)

@@ -1274,7 +1274,7 @@ static void mce_timer_fn(unsigned long data)
	 */
	iv = __this_cpu_read(mce_next_interval);
	if (mce_notify_irq())
-		iv = max(iv, (unsigned long) HZ/100);
+		iv = max(iv / 2, (unsigned long) HZ/100);
	else
		iv = min(iv * 2, round_jiffies_relative(check_interval * HZ));
	__this_cpu_write(mce_next_interval, iv);
@@ -1557,7 +1557,7 @@ static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c)
 static void __mcheck_cpu_init_timer(void)
 {
	struct timer_list *t = &__get_cpu_var(mce_timer);
-	unsigned long iv = __this_cpu_read(mce_next_interval);
+	unsigned long iv = check_interval * HZ;

	setup_timer(t, mce_timer_fn, smp_processor_id());

@@ -42,7 +42,7 @@ static int __init nmi_unk_cb(unsigned int val, struct pt_regs *regs)
 static void __init init_nmi_testsuite(void)
 {
	/* trap all the unknown NMIs we may generate */
-	register_nmi_handler(NMI_UNKNOWN, nmi_unk_cb, 0, "nmi_selftest_unk");
+	register_nmi_handler_initonly(NMI_UNKNOWN, nmi_unk_cb, 0, "nmi_selftest_unk");
 }

 static void __init cleanup_nmi_testsuite(void)
@@ -64,7 +64,7 @@ static void __init test_nmi_ipi(struct cpumask *mask)
 {
	unsigned long timeout;

-	if (register_nmi_handler(NMI_LOCAL, test_nmi_ipi_callback,
+	if (register_nmi_handler_initonly(NMI_LOCAL, test_nmi_ipi_callback,
				 NMI_FLAG_FIRST, "nmi_selftest")) {
		nmi_fail = FAILURE;
		return;

@@ -639,9 +639,11 @@ void native_machine_shutdown(void)
	set_cpus_allowed_ptr(current, cpumask_of(reboot_cpu_id));

	/*
-	 * O.K Now that I'm on the appropriate processor,
-	 * stop all of the others.
+	 * O.K Now that I'm on the appropriate processor, stop all of the
+	 * others. Also disable the local irq to not receive the per-cpu
+	 * timer interrupt which may trigger scheduler's load balance.
	 */
+	local_irq_disable();
	stop_other_cpus();
 #endif

@@ -382,6 +382,15 @@ void __cpuinit set_cpu_sibling_map(int cpu)
		if ((i == cpu) || (has_mc && match_llc(c, o)))
			link_mask(llc_shared, cpu, i);

+	}
+
+	/*
+	 * This needs a separate iteration over the cpus because we rely on all
+	 * cpu_sibling_mask links to be set-up.
+	 */
+	for_each_cpu(i, cpu_sibling_setup_mask) {
+		o = &cpu_data(i);
+
		if ((i == cpu) || (has_mc && match_mc(c, o))) {
			link_mask(core, cpu, i);

@@ -62,7 +62,8 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
			extra += PMD_SIZE;
 #endif
		/* The first 2/4M doesn't use large pages. */
-		extra += mr->end - mr->start;
+		if (mr->start < PMD_SIZE)
+			extra += mr->end - mr->start;

		ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
	} else

@@ -176,6 +176,8 @@ acpi_numa_memory_affinity_init(struct acpi_srat_mem_affinity *ma)
		return;
	}

+	node_set(node, numa_nodes_parsed);
+
	printk(KERN_INFO "SRAT: Node %u PXM %u [mem %#010Lx-%#010Lx]\n",
	       node, pxm,
	       (unsigned long long) start, (unsigned long long) end - 1);

@@ -782,7 +782,7 @@ BLOCKING_NOTIFIER_HEAD(intel_scu_notifier);
 EXPORT_SYMBOL_GPL(intel_scu_notifier);

 /* Called by IPC driver */
-void intel_scu_devices_create(void)
+void __devinit intel_scu_devices_create(void)
 {
	int i;

@@ -1295,7 +1295,6 @@ static void __init enable_timeouts(void)
		 */
		mmr_image |= (1L << SOFTACK_MSHIFT);
		if (is_uv2_hub()) {
-			mmr_image &= ~(1L << UV2_LEG_SHFT);
			mmr_image |= (1L << UV2_EXT_SHFT);
		}
		write_mmr_misc_control(pnode, mmr_image);

@@ -208,7 +208,7 @@ config ACPI_IPMI

 config ACPI_HOTPLUG_CPU
	bool
-	depends on ACPI_PROCESSOR && HOTPLUG_CPU
+	depends on EXPERIMENTAL && ACPI_PROCESSOR && HOTPLUG_CPU
	select ACPI_CONTAINER
	default y

@@ -643,11 +643,19 @@ static int acpi_battery_update(struct acpi_battery *battery)

 static void acpi_battery_refresh(struct acpi_battery *battery)
 {
+	int power_unit;
+
	if (!battery->bat.dev)
		return;

+	power_unit = battery->power_unit;
+
	acpi_battery_get_info(battery);
-	/* The battery may have changed its reporting units. */
+
+	if (power_unit == battery->power_unit)
+		return;
+
+	/* The battery has changed its reporting units. */
	sysfs_remove_battery(battery);
	sysfs_add_battery(battery);
 }

@@ -333,6 +333,7 @@ static int acpi_processor_get_performance_states(struct acpi_processor *pr)
	struct acpi_buffer state = { 0, NULL };
	union acpi_object *pss = NULL;
	int i;
+	int last_invalid = -1;

	status = acpi_evaluate_object(pr->handle, "_PSS", NULL, &buffer);
@@ -394,14 +395,33 @@ static int acpi_processor_get_performance_states(struct acpi_processor *pr)
		    ((u32)(px->core_frequency * 1000) !=
		     (px->core_frequency * 1000))) {
			printk(KERN_ERR FW_BUG PREFIX
-			       "Invalid BIOS _PSS frequency: 0x%llx MHz\n",
-			       px->core_frequency);
-			result = -EFAULT;
-			kfree(pr->performance->states);
-			goto end;
+			       "Invalid BIOS _PSS frequency found for processor %d: 0x%llx MHz\n",
+			       pr->id, px->core_frequency);
+			if (last_invalid == -1)
+				last_invalid = i;
+		} else {
+			if (last_invalid != -1) {
+				/*
+				 * Copy this valid entry over last_invalid entry
+				 */
+				memcpy(&(pr->performance->states[last_invalid]),
+				       px, sizeof(struct acpi_processor_px));
+				++last_invalid;
+			}
		}
	}

+	if (last_invalid == 0) {
+		printk(KERN_ERR FW_BUG PREFIX
+		       "No valid BIOS _PSS frequency found for processor %d\n", pr->id);
+		result = -EFAULT;
+		kfree(pr->performance->states);
+		pr->performance->states = NULL;
+	}
+
+	if (last_invalid > 0)
+		pr->performance->state_count = last_invalid;
+
 end:
	kfree(buffer.pointer);

@@ -1687,10 +1687,6 @@ static int acpi_video_bus_add(struct acpi_device *device)
	set_bit(KEY_BRIGHTNESS_ZERO, input->keybit);
	set_bit(KEY_DISPLAY_OFF, input->keybit);

-	error = input_register_device(input);
-	if (error)
-		goto err_stop_video;
-
	printk(KERN_INFO PREFIX "%s [%s] (multi-head: %s rom: %s post: %s)\n",
	       ACPI_VIDEO_DEVICE_NAME, acpi_device_bid(device),
	       video->flags.multihead ? "yes" : "no",
@@ -1701,12 +1697,16 @@ static int acpi_video_bus_add(struct acpi_device *device)
	video->pm_nb.priority = 0;
	error = register_pm_notifier(&video->pm_nb);
	if (error)
-		goto err_unregister_input_dev;
+		goto err_stop_video;
+
+	error = input_register_device(input);
+	if (error)
+		goto err_unregister_pm_notifier;

	return 0;

-err_unregister_input_dev:
-	input_unregister_device(input);
+err_unregister_pm_notifier:
+	unregister_pm_notifier(&video->pm_nb);
 err_stop_video:
	acpi_video_bus_stop_devices(video);
 err_free_input_dev:
@@ -1743,9 +1743,18 @@ static int acpi_video_bus_remove(struct acpi_device *device, int type)
	return 0;
 }

+static int __init is_i740(struct pci_dev *dev)
+{
+	if (dev->device == 0x00D1)
+		return 1;
+	if (dev->device == 0x7000)
+		return 1;
+	return 0;
+}
+
 static int __init intel_opregion_present(void)
 {
-#if defined(CONFIG_DRM_I915) || defined(CONFIG_DRM_I915_MODULE)
+	int opregion = 0;
	struct pci_dev *dev = NULL;
	u32 address;

@@ -1754,13 +1763,15 @@ static int __init intel_opregion_present(void)
			continue;
		if (dev->vendor != PCI_VENDOR_ID_INTEL)
			continue;
+
+		/* We don't want to poke around undefined i740 registers */
+		if (is_i740(dev))
+			continue;
+
		pci_read_config_dword(dev, 0xfc, &address);
		if (!address)
			continue;
-		return 1;
+		opregion = 1;
	}
-#endif
-	return 0;
+	return opregion;
 }

 int acpi_video_register(void)

@@ -898,6 +898,7 @@ static struct pci_device_id agp_intel_pci_table[] = {
	ID(PCI_DEVICE_ID_INTEL_B43_HB),
	ID(PCI_DEVICE_ID_INTEL_B43_1_HB),
	ID(PCI_DEVICE_ID_INTEL_IRONLAKE_D_HB),
+	ID(PCI_DEVICE_ID_INTEL_IRONLAKE_D2_HB),
	ID(PCI_DEVICE_ID_INTEL_IRONLAKE_M_HB),
	ID(PCI_DEVICE_ID_INTEL_IRONLAKE_MA_HB),
	ID(PCI_DEVICE_ID_INTEL_IRONLAKE_MC2_HB),

@@ -212,6 +212,7 @@
 #define PCI_DEVICE_ID_INTEL_G41_HB 0x2E30
 #define PCI_DEVICE_ID_INTEL_G41_IG 0x2E32
 #define PCI_DEVICE_ID_INTEL_IRONLAKE_D_HB 0x0040
+#define PCI_DEVICE_ID_INTEL_IRONLAKE_D2_HB 0x0069
 #define PCI_DEVICE_ID_INTEL_IRONLAKE_D_IG 0x0042
 #define PCI_DEVICE_ID_INTEL_IRONLAKE_M_HB 0x0044
 #define PCI_DEVICE_ID_INTEL_IRONLAKE_MA_HB 0x0062

@@ -244,8 +244,8 @@ static const struct file_operations exynos_drm_driver_fops = {
 };

 static struct drm_driver exynos_drm_driver = {
-	.driver_features	= DRIVER_HAVE_IRQ | DRIVER_BUS_PLATFORM |
-				  DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME,
+	.driver_features	= DRIVER_HAVE_IRQ | DRIVER_MODESET |
+				  DRIVER_GEM | DRIVER_PRIME,
	.load			= exynos_drm_load,
	.unload			= exynos_drm_unload,
	.open			= exynos_drm_open,

@@ -172,19 +172,12 @@ static void exynos_drm_encoder_commit(struct drm_encoder *encoder)
		manager_ops->commit(manager->dev);
 }

-static struct drm_crtc *
-exynos_drm_encoder_get_crtc(struct drm_encoder *encoder)
-{
-	return encoder->crtc;
-}
-
 static struct drm_encoder_helper_funcs exynos_encoder_helper_funcs = {
	.dpms		= exynos_drm_encoder_dpms,
	.mode_fixup	= exynos_drm_encoder_mode_fixup,
	.mode_set	= exynos_drm_encoder_mode_set,
	.prepare	= exynos_drm_encoder_prepare,
	.commit		= exynos_drm_encoder_commit,
-	.get_crtc	= exynos_drm_encoder_get_crtc,
 };

 static void exynos_drm_encoder_destroy(struct drm_encoder *encoder)

@@ -51,11 +51,22 @@ struct exynos_drm_fb {
 static void exynos_drm_fb_destroy(struct drm_framebuffer *fb)
 {
	struct exynos_drm_fb *exynos_fb = to_exynos_fb(fb);
+	unsigned int i;

	DRM_DEBUG_KMS("%s\n", __FILE__);

	drm_framebuffer_cleanup(fb);

+	for (i = 0; i < ARRAY_SIZE(exynos_fb->exynos_gem_obj); i++) {
+		struct drm_gem_object *obj;
+
+		if (exynos_fb->exynos_gem_obj[i] == NULL)
+			continue;
+
+		obj = &exynos_fb->exynos_gem_obj[i]->base;
+		drm_gem_object_unreference_unlocked(obj);
+	}
+
	kfree(exynos_fb);
	exynos_fb = NULL;
 }
@@ -134,11 +145,11 @@ exynos_user_fb_create(struct drm_device *dev, struct drm_file *file_priv,
		return ERR_PTR(-ENOENT);
	}

-	drm_gem_object_unreference_unlocked(obj);
-
	fb = exynos_drm_framebuffer_init(dev, mode_cmd, obj);
-	if (IS_ERR(fb))
+	if (IS_ERR(fb)) {
+		drm_gem_object_unreference_unlocked(obj);
		return fb;
+	}

	exynos_fb = to_exynos_fb(fb);
	nr = exynos_drm_format_num_buffers(fb->pixel_format);
@@ -152,8 +163,6 @@ exynos_user_fb_create(struct drm_device *dev, struct drm_file *file_priv,
			return ERR_PTR(-ENOENT);
		}

-		drm_gem_object_unreference_unlocked(obj);
-
		exynos_fb->exynos_gem_obj[i] = to_exynos_gem_obj(obj);
	}

@@ -31,10 +31,10 @@
 static inline int exynos_drm_format_num_buffers(uint32_t format)
 {
	switch (format) {
-	case DRM_FORMAT_NV12M:
+	case DRM_FORMAT_NV12:
	case DRM_FORMAT_NV12MT:
		return 2;
-	case DRM_FORMAT_YUV420M:
+	case DRM_FORMAT_YUV420:
		return 3;
	default:
		return 1;

@@ -689,7 +689,6 @@ int exynos_drm_gem_dumb_map_offset(struct drm_file *file_priv,
				   struct drm_device *dev, uint32_t handle,
				   uint64_t *offset)
 {
-	struct exynos_drm_gem_obj *exynos_gem_obj;
	struct drm_gem_object *obj;
	int ret = 0;

@@ -710,15 +709,13 @@ int exynos_drm_gem_dumb_map_offset(struct drm_file *file_priv,
		goto unlock;
	}

-	exynos_gem_obj = to_exynos_gem_obj(obj);
-
-	if (!exynos_gem_obj->base.map_list.map) {
-		ret = drm_gem_create_mmap_offset(&exynos_gem_obj->base);
+	if (!obj->map_list.map) {
+		ret = drm_gem_create_mmap_offset(obj);
		if (ret)
			goto out;
	}

-	*offset = (u64)exynos_gem_obj->base.map_list.hash.key << PAGE_SHIFT;
+	*offset = (u64)obj->map_list.hash.key << PAGE_SHIFT;
	DRM_DEBUG_KMS("offset = 0x%lx\n", (unsigned long)*offset);

 out:

@@ -365,7 +365,7 @@ static void vp_video_buffer(struct mixer_context *ctx, int win)
	switch (win_data->pixel_format) {
	case DRM_FORMAT_NV12MT:
		tiled_mode = true;
-	case DRM_FORMAT_NV12M:
+	case DRM_FORMAT_NV12:
		crcb_mode = false;
		buf_num = 2;
		break;
@@ -601,18 +601,20 @@ static void mixer_win_reset(struct mixer_context *ctx)
	mixer_reg_write(res, MXR_BG_COLOR2, 0x008080);

	/* setting graphical layers */
	val = MXR_GRP_CFG_COLOR_KEY_DISABLE; /* no blank key */
	val |= MXR_GRP_CFG_WIN_BLEND_EN;
-	val |= MXR_GRP_CFG_BLEND_PRE_MUL;
-	val |= MXR_GRP_CFG_PIXEL_BLEND_EN;
	val |= MXR_GRP_CFG_ALPHA_VAL(0xff); /* non-transparent alpha */

	/* the same configuration for both layers */
	mixer_reg_write(res, MXR_GRAPHIC_CFG(0), val);
+
+	val |= MXR_GRP_CFG_BLEND_PRE_MUL;
+	val |= MXR_GRP_CFG_PIXEL_BLEND_EN;
	mixer_reg_write(res, MXR_GRAPHIC_CFG(1), val);

+	/* setting video layers */
+	val = MXR_GRP_CFG_ALPHA_VAL(0);
+	mixer_reg_write(res, MXR_VIDEO_CFG, val);
+
	/* configuration of Video Processor Registers */
	vp_win_reset(ctx);
	vp_default_filter(res);

@@ -233,6 +233,7 @@ static const struct intel_device_info intel_sandybridge_d_info = {
	.has_blt_ring = 1,
	.has_llc = 1,
	.has_pch_split = 1,
+	.has_force_wake = 1,
 };

 static const struct intel_device_info intel_sandybridge_m_info = {
@@ -243,6 +244,7 @@ static const struct intel_device_info intel_sandybridge_m_info = {
	.has_blt_ring = 1,
	.has_llc = 1,
	.has_pch_split = 1,
+	.has_force_wake = 1,
 };

 static const struct intel_device_info intel_ivybridge_d_info = {
@@ -252,6 +254,7 @@ static const struct intel_device_info intel_ivybridge_d_info = {
	.has_blt_ring = 1,
	.has_llc = 1,
	.has_pch_split = 1,
+	.has_force_wake = 1,
 };

 static const struct intel_device_info intel_ivybridge_m_info = {
@@ -262,6 +265,7 @@ static const struct intel_device_info intel_ivybridge_m_info = {
	.has_blt_ring = 1,
	.has_llc = 1,
	.has_pch_split = 1,
+	.has_force_wake = 1,
 };

 static const struct intel_device_info intel_valleyview_m_info = {
@@ -289,6 +293,7 @@ static const struct intel_device_info intel_haswell_d_info = {
	.has_blt_ring = 1,
	.has_llc = 1,
	.has_pch_split = 1,
+	.has_force_wake = 1,
 };

 static const struct intel_device_info intel_haswell_m_info = {
@@ -298,6 +303,7 @@ static const struct intel_device_info intel_haswell_m_info = {
	.has_blt_ring = 1,
	.has_llc = 1,
	.has_pch_split = 1,
+	.has_force_wake = 1,
 };

 static const struct pci_device_id pciidlist[] = {	/* aka */
@@ -1139,10 +1145,9 @@ MODULE_LICENSE("GPL and additional rights");

 /* We give fast paths for the really cool registers */
 #define NEEDS_FORCE_WAKE(dev_priv, reg) \
-	(((dev_priv)->info->gen >= 6) && \
+	((HAS_FORCE_WAKE((dev_priv)->dev)) && \
	 ((reg) < 0x40000) && \
-	 ((reg) != FORCEWAKE)) && \
-	 (!IS_VALLEYVIEW((dev_priv)->dev))
+	 ((reg) != FORCEWAKE))

 #define __i915_read(x, y) \
 u##x i915_read##x(struct drm_i915_private *dev_priv, u32 reg) { \

@@ -285,6 +285,7 @@ struct intel_device_info {
	u8 is_ivybridge:1;
	u8 is_valleyview:1;
	u8 has_pch_split:1;
+	u8 has_force_wake:1;
	u8 is_haswell:1;
	u8 has_fbc:1;
	u8 has_pipe_cxsr:1;
@@ -1101,6 +1102,8 @@ struct drm_i915_file_private {
 #define HAS_PCH_CPT(dev) (INTEL_PCH_TYPE(dev) == PCH_CPT)
 #define HAS_PCH_IBX(dev) (INTEL_PCH_TYPE(dev) == PCH_IBX)

+#define HAS_FORCE_WAKE(dev) (INTEL_INFO(dev)->has_force_wake)
+
 #include "i915_trace.h"

 /**

@@ -510,7 +510,7 @@ out:
	return ret;
 }

-static void pch_irq_handler(struct drm_device *dev, u32 pch_iir)
+static void ibx_irq_handler(struct drm_device *dev, u32 pch_iir)
 {
	drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
	int pipe;
@@ -550,6 +550,35 @@ static void ibx_irq_handler(struct drm_device *dev, u32 pch_iir)
		DRM_DEBUG_DRIVER("PCH transcoder A underrun interrupt\n");
 }

+static void cpt_irq_handler(struct drm_device *dev, u32 pch_iir)
+{
+	drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
+	int pipe;
+
+	if (pch_iir & SDE_AUDIO_POWER_MASK_CPT)
+		DRM_DEBUG_DRIVER("PCH audio power change on port %d\n",
+				 (pch_iir & SDE_AUDIO_POWER_MASK_CPT) >>
+				 SDE_AUDIO_POWER_SHIFT_CPT);
+
+	if (pch_iir & SDE_AUX_MASK_CPT)
+		DRM_DEBUG_DRIVER("AUX channel interrupt\n");
+
+	if (pch_iir & SDE_GMBUS_CPT)
+		DRM_DEBUG_DRIVER("PCH GMBUS interrupt\n");
+
+	if (pch_iir & SDE_AUDIO_CP_REQ_CPT)
+		DRM_DEBUG_DRIVER("Audio CP request interrupt\n");
+
+	if (pch_iir & SDE_AUDIO_CP_CHG_CPT)
+		DRM_DEBUG_DRIVER("Audio CP change interrupt\n");
+
+	if (pch_iir & SDE_FDI_MASK_CPT)
+		for_each_pipe(pipe)
+			DRM_DEBUG_DRIVER("  pipe %c FDI IIR: 0x%08x\n",
+					 pipe_name(pipe),
+					 I915_READ(FDI_RX_IIR(pipe)));
+}
+
 static irqreturn_t ivybridge_irq_handler(DRM_IRQ_ARGS)
 {
	struct drm_device *dev = (struct drm_device *) arg;
@@ -591,7 +620,7 @@ static irqreturn_t ivybridge_irq_handler(DRM_IRQ_ARGS)
		if (pch_iir & SDE_HOTPLUG_MASK_CPT)
			queue_work(dev_priv->wq, &dev_priv->hotplug_work);
-		pch_irq_handler(dev, pch_iir);
+		cpt_irq_handler(dev, pch_iir);

		/* clear PCH hotplug event before clear CPU irq */
		I915_WRITE(SDEIIR, pch_iir);
@@ -684,7 +713,10 @@ static irqreturn_t ironlake_irq_handler(DRM_IRQ_ARGS)
	if (de_iir & DE_PCH_EVENT) {
		if (pch_iir & hotplug_mask)
			queue_work(dev_priv->wq, &dev_priv->hotplug_work);
-		pch_irq_handler(dev, pch_iir);
+		if (HAS_PCH_CPT(dev))
+			cpt_irq_handler(dev, pch_iir);
+		else
+			ibx_irq_handler(dev, pch_iir);
	}

	if (de_iir & DE_PCU_EVENT) {

@@ -210,6 +210,14 @@
 #define MI_DISPLAY_FLIP		MI_INSTR(0x14, 2)
 #define MI_DISPLAY_FLIP_I915	MI_INSTR(0x14, 1)
 #define   MI_DISPLAY_FLIP_PLANE(n) ((n) << 20)
+/* IVB has funny definitions for which plane to flip. */
+#define   MI_DISPLAY_FLIP_IVB_PLANE_A  (0 << 19)
+#define   MI_DISPLAY_FLIP_IVB_PLANE_B  (1 << 19)
+#define   MI_DISPLAY_FLIP_IVB_SPRITE_A (2 << 19)
+#define   MI_DISPLAY_FLIP_IVB_SPRITE_B (3 << 19)
+#define   MI_DISPLAY_FLIP_IVB_PLANE_C  (4 << 19)
+#define   MI_DISPLAY_FLIP_IVB_SPRITE_C (5 << 19)
 #define MI_SET_CONTEXT		MI_INSTR(0x18, 0)
 #define   MI_MM_SPACE_GTT	(1<<8)
 #define   MI_MM_SPACE_PHYSICAL	(0<<8)
@@ -3313,7 +3321,7 @@
 /* PCH */

-/* south display engine interrupt */
+/* south display engine interrupt: IBX */
 #define SDE_AUDIO_POWER_D	(1 << 27)
 #define SDE_AUDIO_POWER_C	(1 << 26)
 #define SDE_AUDIO_POWER_B	(1 << 25)
@@ -3349,15 +3357,44 @@
 #define SDE_TRANSA_CRC_ERR	(1 << 1)
 #define SDE_TRANSA_FIFO_UNDER	(1 << 0)
 #define SDE_TRANS_MASK		(0x3f)
-/* CPT */
-#define SDE_CRT_HOTPLUG_CPT	(1 << 19)
+
+/* south display engine interrupt: CPT/PPT */
+#define SDE_AUDIO_POWER_D_CPT	(1 << 31)
+#define SDE_AUDIO_POWER_C_CPT	(1 << 30)
+#define SDE_AUDIO_POWER_B_CPT	(1 << 29)
+#define SDE_AUDIO_POWER_SHIFT_CPT 29
+#define SDE_AUDIO_POWER_MASK_CPT (7 << 29)
+#define SDE_AUXD_CPT		(1 << 27)
+#define SDE_AUXC_CPT		(1 << 26)
+#define SDE_AUXB_CPT		(1 << 25)
+#define SDE_AUX_MASK_CPT	(7 << 25)
 #define SDE_PORTD_HOTPLUG_CPT	(1 << 23)
 #define SDE_PORTC_HOTPLUG_CPT	(1 << 22)
 #define SDE_PORTB_HOTPLUG_CPT	(1 << 21)
+#define SDE_CRT_HOTPLUG_CPT	(1 << 19)
 #define SDE_HOTPLUG_MASK_CPT	(SDE_CRT_HOTPLUG_CPT |		\
				 SDE_PORTD_HOTPLUG_CPT |	\
				 SDE_PORTC_HOTPLUG_CPT |	\
				 SDE_PORTB_HOTPLUG_CPT)
+#define SDE_GMBUS_CPT		(1 << 17)
+#define SDE_AUDIO_CP_REQ_C_CPT	(1 << 10)
+#define SDE_AUDIO_CP_CHG_C_CPT	(1 << 9)
+#define SDE_FDI_RXC_CPT		(1 << 8)
+#define SDE_AUDIO_CP_REQ_B_CPT	(1 << 6)
+#define SDE_AUDIO_CP_CHG_B_CPT	(1 << 5)
+#define SDE_FDI_RXB_CPT		(1 << 4)
+#define SDE_AUDIO_CP_REQ_A_CPT	(1 << 2)
+#define SDE_AUDIO_CP_CHG_A_CPT	(1 << 1)
+#define SDE_FDI_RXA_CPT		(1 << 0)
+#define SDE_AUDIO_CP_REQ_CPT	(SDE_AUDIO_CP_REQ_C_CPT | \
				 SDE_AUDIO_CP_REQ_B_CPT | \
				 SDE_AUDIO_CP_REQ_A_CPT)
+#define SDE_AUDIO_CP_CHG_CPT	(SDE_AUDIO_CP_CHG_C_CPT | \
				 SDE_AUDIO_CP_CHG_B_CPT | \
				 SDE_AUDIO_CP_CHG_A_CPT)
+#define SDE_FDI_MASK_CPT	(SDE_FDI_RXC_CPT | \
				 SDE_FDI_RXB_CPT | \
				 SDE_FDI_RXA_CPT)

 #define SDEISR  0xc4000
 #define SDEIMR  0xc4004

@@ -6158,17 +6158,34 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
	struct drm_i915_private *dev_priv = dev->dev_private;
	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
	struct intel_ring_buffer *ring = &dev_priv->ring[BCS];
+	uint32_t plane_bit = 0;
	int ret;

	ret = intel_pin_and_fence_fb_obj(dev, obj, ring);
	if (ret)
		goto err;

+	switch(intel_crtc->plane) {
+	case PLANE_A:
+		plane_bit = MI_DISPLAY_FLIP_IVB_PLANE_A;
+		break;
+	case PLANE_B:
+		plane_bit = MI_DISPLAY_FLIP_IVB_PLANE_B;
+		break;
+	case PLANE_C:
+		plane_bit = MI_DISPLAY_FLIP_IVB_PLANE_C;
+		break;
+	default:
+		WARN_ONCE(1, "unknown plane in flip command\n");
+		ret = -ENODEV;
+		goto err;
+	}
+
	ret = intel_ring_begin(ring, 4);
	if (ret)
		goto err_unpin;

-	intel_ring_emit(ring, MI_DISPLAY_FLIP_I915 | (intel_crtc->plane << 19));
+	intel_ring_emit(ring, MI_DISPLAY_FLIP_I915 | plane_bit);
	intel_ring_emit(ring, (fb->pitches[0] | obj->tiling_mode));
	intel_ring_emit(ring, (obj->gtt_offset));
	intel_ring_emit(ring, (MI_NOOP));

@@ -266,10 +266,15 @@ u32 intel_ring_get_active_head(struct intel_ring_buffer *ring)
static int init_ring_common(struct intel_ring_buffer *ring)
{
-drm_i915_private_t *dev_priv = ring->dev->dev_private;
+struct drm_device *dev = ring->dev;
+drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_i915_gem_object *obj = ring->obj;
+int ret = 0;
u32 head;
+if (HAS_FORCE_WAKE(dev))
+gen6_gt_force_wake_get(dev_priv);
/* Stop the ring if it's running. */
I915_WRITE_CTL(ring, 0);
I915_WRITE_HEAD(ring, 0);
@@ -317,7 +322,8 @@ static int init_ring_common(struct intel_ring_buffer *ring)
I915_READ_HEAD(ring),
I915_READ_TAIL(ring),
I915_READ_START(ring));
-return -EIO;
+ret = -EIO;
+goto out;
}
if (!drm_core_check_feature(ring->dev, DRIVER_MODESET))
@@ -326,9 +332,14 @@ static int init_ring_common(struct intel_ring_buffer *ring)
ring->head = I915_READ_HEAD(ring);
ring->tail = I915_READ_TAIL(ring) & TAIL_ADDR;
ring->space = ring_space(ring);
+ring->last_retired_head = -1;
}
-return 0;
+out:
+if (HAS_FORCE_WAKE(dev))
+gen6_gt_force_wake_put(dev_priv);
+return ret;
}
static int
@@ -987,6 +998,10 @@ static int intel_init_ring_buffer(struct drm_device *dev,
if (ret)
goto err_unref;
+ret = i915_gem_object_set_to_gtt_domain(obj, true);
+if (ret)
+goto err_unpin;
ring->virtual_start = ioremap_wc(dev->agp->base + obj->gtt_offset,
ring->size);
if (ring->virtual_start == NULL) {
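The init_ring_common() rework above brackets the ring-register writes with gen6_gt_force_wake_get()/gen6_gt_force_wake_put() and converts the early return -EIO into a goto, so the forcewake reference is dropped on every exit path. A small self-contained sketch of that acquire/cleanup idiom (all names below are stand-ins, not taken from the driver):

#include <stdio.h>

/* Stand-ins for the driver's forcewake and hardware-init hooks. */
struct dev_ctx { int has_force_wake; int fw_refs; };

static void fw_get(struct dev_ctx *c) { c->fw_refs++; }
static void fw_put(struct dev_ctx *c) { c->fw_refs--; }
static int hw_init(struct dev_ctx *c) { (void)c; return -1; /* simulate failure */ }

static int init_ring(struct dev_ctx *c)
{
	int ret = 0;

	if (c->has_force_wake)
		fw_get(c);	/* keep the hardware awake while poking registers */

	ret = hw_init(c);
	if (ret)
		goto out;	/* funnel every exit through the cleanup label */

	/* ... more register setup would go here ... */
out:
	if (c->has_force_wake)
		fw_put(c);	/* reference dropped on success and failure alike */
	return ret;
}

int main(void)
{
	struct dev_ctx c = { 1, 0 };
	printf("ret=%d fw_refs=%d\n", init_ring(&c), c.fw_refs);	/* fw_refs stays 0 */
	return 0;
}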


@@ -460,15 +460,28 @@ static void cayman_gpu_init(struct radeon_device *rdev)
rdev->config.cayman.max_pipes_per_simd = 4;
rdev->config.cayman.max_tile_pipes = 2;
if ((rdev->pdev->device == 0x9900) ||
-(rdev->pdev->device == 0x9901)) {
+(rdev->pdev->device == 0x9901) ||
+(rdev->pdev->device == 0x9905) ||
+(rdev->pdev->device == 0x9906) ||
+(rdev->pdev->device == 0x9907) ||
+(rdev->pdev->device == 0x9908) ||
+(rdev->pdev->device == 0x9909) ||
+(rdev->pdev->device == 0x9910) ||
+(rdev->pdev->device == 0x9917)) {
rdev->config.cayman.max_simds_per_se = 6;
rdev->config.cayman.max_backends_per_se = 2;
} else if ((rdev->pdev->device == 0x9903) ||
-(rdev->pdev->device == 0x9904)) {
+(rdev->pdev->device == 0x9904) ||
+(rdev->pdev->device == 0x990A) ||
+(rdev->pdev->device == 0x9913) ||
+(rdev->pdev->device == 0x9918)) {
rdev->config.cayman.max_simds_per_se = 4;
rdev->config.cayman.max_backends_per_se = 2;
-} else if ((rdev->pdev->device == 0x9990) ||
-(rdev->pdev->device == 0x9991)) {
+} else if ((rdev->pdev->device == 0x9919) ||
+(rdev->pdev->device == 0x9990) ||
+(rdev->pdev->device == 0x9991) ||
+(rdev->pdev->device == 0x9994) ||
+(rdev->pdev->device == 0x99A0)) {
rdev->config.cayman.max_simds_per_se = 3;
rdev->config.cayman.max_backends_per_se = 1;
} else {


@@ -2426,6 +2426,12 @@ int r600_startup(struct radeon_device *rdev)
if (r)
return r;
+r = r600_audio_init(rdev);
+if (r) {
+DRM_ERROR("radeon: audio init failed\n");
+return r;
+}
return 0;
}
@@ -2462,12 +2468,6 @@ int r600_resume(struct radeon_device *rdev)
return r;
}
-r = r600_audio_init(rdev);
-if (r) {
-DRM_ERROR("radeon: audio resume failed\n");
-return r;
-}
return r;
}
@@ -2577,9 +2577,6 @@ int r600_init(struct radeon_device *rdev)
rdev->accel_working = false;
}
-r = r600_audio_init(rdev);
-if (r)
-return r; /* TODO error handling */
return 0;
}


@@ -192,6 +192,7 @@ void r600_audio_set_clock(struct drm_encoder *encoder, int clock)
struct radeon_device *rdev = dev->dev_private;
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
+struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc);
int base_rate = 48000;
switch (radeon_encoder->encoder_id) {
@@ -217,8 +218,8 @@ void r600_audio_set_clock(struct drm_encoder *encoder, int clock)
WREG32(EVERGREEN_AUDIO_PLL1_DIV, clock * 10);
WREG32(EVERGREEN_AUDIO_PLL1_UNK, 0x00000071);
-/* Some magic trigger or src sel? */
-WREG32_P(0x5ac, 0x01, ~0x77);
+/* Select DTO source */
+WREG32(0x5ac, radeon_crtc->crtc_id);
} else {
switch (dig->dig_encoder) {
case 0:


@@ -348,7 +348,6 @@ void r600_hdmi_setmode(struct drm_encoder *encoder, struct drm_display_mode *mode)
WREG32(HDMI0_AUDIO_PACKET_CONTROL + offset,
HDMI0_AUDIO_SAMPLE_SEND | /* send audio packets */
HDMI0_AUDIO_DELAY_EN(1) | /* default audio delay */
-HDMI0_AUDIO_SEND_MAX_PACKETS | /* send NULL packets if no audio is available */
HDMI0_AUDIO_PACKETS_PER_LINE(3) | /* should be suffient for all audio modes and small enough for all hblanks */
HDMI0_60958_CS_UPDATE); /* allow 60958 channel status fields to be updated */
}


@@ -1374,9 +1374,9 @@ struct cayman_asic {
struct si_asic {
unsigned max_shader_engines;
-unsigned max_pipes_per_simd;
unsigned max_tile_pipes;
-unsigned max_simds_per_se;
+unsigned max_cu_per_sh;
+unsigned max_sh_per_se;
unsigned max_backends_per_se;
unsigned max_texture_channel_caches;
unsigned max_gprs;
@@ -1387,7 +1387,6 @@ struct si_asic {
unsigned sc_hiz_tile_fifo_size;
unsigned sc_earlyz_tile_fifo_size;
-unsigned num_shader_engines;
unsigned num_tile_pipes;
unsigned num_backends_per_se;
unsigned backend_disable_mask_per_asic;


@@ -476,12 +476,18 @@ int radeon_vm_bo_add(struct radeon_device *rdev,
mutex_lock(&vm->mutex);
if (last_pfn > vm->last_pfn) {
-/* grow va space 32M by 32M */
-unsigned align = ((32 << 20) >> 12) - 1;
+/* release mutex and lock in right order */
+mutex_unlock(&vm->mutex);
radeon_mutex_lock(&rdev->cs_mutex);
-radeon_vm_unbind_locked(rdev, vm);
+mutex_lock(&vm->mutex);
+/* and check again */
+if (last_pfn > vm->last_pfn) {
+/* grow va space 32M by 32M */
+unsigned align = ((32 << 20) >> 12) - 1;
+radeon_vm_unbind_locked(rdev, vm);
+vm->last_pfn = (last_pfn + align) & ~align;
+}
radeon_mutex_unlock(&rdev->cs_mutex);
-vm->last_pfn = (last_pfn + align) & ~align;
}
head = &vm->va;
last_offset = 0;
@@ -595,8 +601,8 @@ int radeon_vm_bo_rmv(struct radeon_device *rdev,
if (bo_va == NULL)
return 0;
-mutex_lock(&vm->mutex);
radeon_mutex_lock(&rdev->cs_mutex);
+mutex_lock(&vm->mutex);
radeon_vm_bo_update_pte(rdev, vm, bo, NULL);
radeon_mutex_unlock(&rdev->cs_mutex);
list_del(&bo_va->vm_list);
@@ -641,9 +647,8 @@ void radeon_vm_fini(struct radeon_device *rdev, struct radeon_vm *vm)
struct radeon_bo_va *bo_va, *tmp;
int r;
-mutex_lock(&vm->mutex);
radeon_mutex_lock(&rdev->cs_mutex);
+mutex_lock(&vm->mutex);
radeon_vm_unbind_locked(rdev, vm);
radeon_mutex_unlock(&rdev->cs_mutex);


@@ -273,7 +273,7 @@ int radeon_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
break;
case RADEON_INFO_MAX_PIPES:
if (rdev->family >= CHIP_TAHITI)
-value = rdev->config.si.max_pipes_per_simd;
+value = rdev->config.si.max_cu_per_sh;
else if (rdev->family >= CHIP_CAYMAN)
value = rdev->config.cayman.max_pipes_per_simd;
else if (rdev->family >= CHIP_CEDAR)


@@ -908,12 +908,6 @@ static int rs600_startup(struct radeon_device *rdev)
return r;
}
-r = r600_audio_init(rdev);
-if (r) {
-dev_err(rdev->dev, "failed initializing audio\n");
-return r;
-}
r = radeon_ib_pool_start(rdev);
if (r)
return r;
@@ -922,6 +916,12 @@ static int rs600_startup(struct radeon_device *rdev)
if (r)
return r;
+r = r600_audio_init(rdev);
+if (r) {
+dev_err(rdev->dev, "failed initializing audio\n");
+return r;
+}
return 0;
}


@@ -637,12 +637,6 @@ static int rs690_startup(struct radeon_device *rdev)
return r;
}
-r = r600_audio_init(rdev);
-if (r) {
-dev_err(rdev->dev, "failed initializing audio\n");
-return r;
-}
r = radeon_ib_pool_start(rdev);
if (r)
return r;
@@ -651,6 +645,12 @@ static int rs690_startup(struct radeon_device *rdev)
if (r)
return r;
+r = r600_audio_init(rdev);
+if (r) {
+dev_err(rdev->dev, "failed initializing audio\n");
+return r;
+}
return 0;
}


@@ -956,6 +956,12 @@ static int rv770_startup(struct radeon_device *rdev)
if (r)
return r;
+r = r600_audio_init(rdev);
+if (r) {
+DRM_ERROR("radeon: audio init failed\n");
+return r;
+}
return 0;
}
@@ -978,12 +984,6 @@ int rv770_resume(struct radeon_device *rdev)
return r;
}
-r = r600_audio_init(rdev);
-if (r) {
-dev_err(rdev->dev, "radeon: audio init failed\n");
-return r;
-}
return r;
}
@@ -1092,12 +1092,6 @@ int rv770_init(struct radeon_device *rdev)
rdev->accel_working = false;
}
-r = r600_audio_init(rdev);
-if (r) {
-dev_err(rdev->dev, "radeon: audio init failed\n");
-return r;
-}
return 0;
}


@@ -867,200 +867,6 @@ void dce6_bandwidth_update(struct radeon_device *rdev)
/*
 * Core functions
 */
static u32 si_get_tile_pipe_to_backend_map(struct radeon_device *rdev,
u32 num_tile_pipes,
u32 num_backends_per_asic,
u32 *backend_disable_mask_per_asic,
u32 num_shader_engines)
{
u32 backend_map = 0;
u32 enabled_backends_mask = 0;
u32 enabled_backends_count = 0;
u32 num_backends_per_se;
u32 cur_pipe;
u32 swizzle_pipe[SI_MAX_PIPES];
u32 cur_backend = 0;
u32 i;
bool force_no_swizzle;
/* force legal values */
if (num_tile_pipes < 1)
num_tile_pipes = 1;
if (num_tile_pipes > rdev->config.si.max_tile_pipes)
num_tile_pipes = rdev->config.si.max_tile_pipes;
if (num_shader_engines < 1)
num_shader_engines = 1;
if (num_shader_engines > rdev->config.si.max_shader_engines)
num_shader_engines = rdev->config.si.max_shader_engines;
if (num_backends_per_asic < num_shader_engines)
num_backends_per_asic = num_shader_engines;
if (num_backends_per_asic > (rdev->config.si.max_backends_per_se * num_shader_engines))
num_backends_per_asic = rdev->config.si.max_backends_per_se * num_shader_engines;
/* make sure we have the same number of backends per se */
num_backends_per_asic = ALIGN(num_backends_per_asic, num_shader_engines);
/* set up the number of backends per se */
num_backends_per_se = num_backends_per_asic / num_shader_engines;
if (num_backends_per_se > rdev->config.si.max_backends_per_se) {
num_backends_per_se = rdev->config.si.max_backends_per_se;
num_backends_per_asic = num_backends_per_se * num_shader_engines;
}
/* create enable mask and count for enabled backends */
for (i = 0; i < SI_MAX_BACKENDS; ++i) {
if (((*backend_disable_mask_per_asic >> i) & 1) == 0) {
enabled_backends_mask |= (1 << i);
++enabled_backends_count;
}
if (enabled_backends_count == num_backends_per_asic)
break;
}
/* force the backends mask to match the current number of backends */
if (enabled_backends_count != num_backends_per_asic) {
u32 this_backend_enabled;
u32 shader_engine;
u32 backend_per_se;
enabled_backends_mask = 0;
enabled_backends_count = 0;
*backend_disable_mask_per_asic = SI_MAX_BACKENDS_MASK;
for (i = 0; i < SI_MAX_BACKENDS; ++i) {
/* calc the current se */
shader_engine = i / rdev->config.si.max_backends_per_se;
/* calc the backend per se */
backend_per_se = i % rdev->config.si.max_backends_per_se;
/* default to not enabled */
this_backend_enabled = 0;
if ((shader_engine < num_shader_engines) &&
(backend_per_se < num_backends_per_se))
this_backend_enabled = 1;
if (this_backend_enabled) {
enabled_backends_mask |= (1 << i);
*backend_disable_mask_per_asic &= ~(1 << i);
++enabled_backends_count;
}
}
}
memset((uint8_t *)&swizzle_pipe[0], 0, sizeof(u32) * SI_MAX_PIPES);
switch (rdev->family) {
case CHIP_TAHITI:
case CHIP_PITCAIRN:
case CHIP_VERDE:
force_no_swizzle = true;
break;
default:
force_no_swizzle = false;
break;
}
if (force_no_swizzle) {
bool last_backend_enabled = false;
force_no_swizzle = false;
for (i = 0; i < SI_MAX_BACKENDS; ++i) {
if (((enabled_backends_mask >> i) & 1) == 1) {
if (last_backend_enabled)
force_no_swizzle = true;
last_backend_enabled = true;
} else
last_backend_enabled = false;
}
}
switch (num_tile_pipes) {
case 1:
case 3:
case 5:
case 7:
DRM_ERROR("odd number of pipes!\n");
break;
case 2:
swizzle_pipe[0] = 0;
swizzle_pipe[1] = 1;
break;
case 4:
if (force_no_swizzle) {
swizzle_pipe[0] = 0;
swizzle_pipe[1] = 1;
swizzle_pipe[2] = 2;
swizzle_pipe[3] = 3;
} else {
swizzle_pipe[0] = 0;
swizzle_pipe[1] = 2;
swizzle_pipe[2] = 1;
swizzle_pipe[3] = 3;
}
break;
case 6:
if (force_no_swizzle) {
swizzle_pipe[0] = 0;
swizzle_pipe[1] = 1;
swizzle_pipe[2] = 2;
swizzle_pipe[3] = 3;
swizzle_pipe[4] = 4;
swizzle_pipe[5] = 5;
} else {
swizzle_pipe[0] = 0;
swizzle_pipe[1] = 2;
swizzle_pipe[2] = 4;
swizzle_pipe[3] = 1;
swizzle_pipe[4] = 3;
swizzle_pipe[5] = 5;
}
break;
case 8:
if (force_no_swizzle) {
swizzle_pipe[0] = 0;
swizzle_pipe[1] = 1;
swizzle_pipe[2] = 2;
swizzle_pipe[3] = 3;
swizzle_pipe[4] = 4;
swizzle_pipe[5] = 5;
swizzle_pipe[6] = 6;
swizzle_pipe[7] = 7;
} else {
swizzle_pipe[0] = 0;
swizzle_pipe[1] = 2;
swizzle_pipe[2] = 4;
swizzle_pipe[3] = 6;
swizzle_pipe[4] = 1;
swizzle_pipe[5] = 3;
swizzle_pipe[6] = 5;
swizzle_pipe[7] = 7;
}
break;
}
for (cur_pipe = 0; cur_pipe < num_tile_pipes; ++cur_pipe) {
while (((1 << cur_backend) & enabled_backends_mask) == 0)
cur_backend = (cur_backend + 1) % SI_MAX_BACKENDS;
backend_map |= (((cur_backend & 0xf) << (swizzle_pipe[cur_pipe] * 4)));
cur_backend = (cur_backend + 1) % SI_MAX_BACKENDS;
}
return backend_map;
}
static u32 si_get_disable_mask_per_asic(struct radeon_device *rdev,
u32 disable_mask_per_se,
u32 max_disable_mask_per_se,
u32 num_shader_engines)
{
u32 disable_field_width_per_se = r600_count_pipe_bits(disable_mask_per_se);
u32 disable_mask_per_asic = disable_mask_per_se & max_disable_mask_per_se;
if (num_shader_engines == 1)
return disable_mask_per_asic;
else if (num_shader_engines == 2)
return disable_mask_per_asic | (disable_mask_per_asic << disable_field_width_per_se);
else
return 0xffffffff;
}
static void si_tiling_mode_table_init(struct radeon_device *rdev)
{
const u32 num_tile_mode_states = 32;
@@ -1562,18 +1368,151 @@ static void si_tiling_mode_table_init(struct radeon_device *rdev)
DRM_ERROR("unknown asic: 0x%x\n", rdev->family);
}
static void si_select_se_sh(struct radeon_device *rdev,
u32 se_num, u32 sh_num)
{
u32 data = INSTANCE_BROADCAST_WRITES;
if ((se_num == 0xffffffff) && (sh_num == 0xffffffff))
data = SH_BROADCAST_WRITES | SE_BROADCAST_WRITES;
else if (se_num == 0xffffffff)
data |= SE_BROADCAST_WRITES | SH_INDEX(sh_num);
else if (sh_num == 0xffffffff)
data |= SH_BROADCAST_WRITES | SE_INDEX(se_num);
else
data |= SH_INDEX(sh_num) | SE_INDEX(se_num);
WREG32(GRBM_GFX_INDEX, data);
}
static u32 si_create_bitmask(u32 bit_width)
{
u32 i, mask = 0;
for (i = 0; i < bit_width; i++) {
mask <<= 1;
mask |= 1;
}
return mask;
}
static u32 si_get_cu_enabled(struct radeon_device *rdev, u32 cu_per_sh)
{
u32 data, mask;
data = RREG32(CC_GC_SHADER_ARRAY_CONFIG);
if (data & 1)
data &= INACTIVE_CUS_MASK;
else
data = 0;
data |= RREG32(GC_USER_SHADER_ARRAY_CONFIG);
data >>= INACTIVE_CUS_SHIFT;
mask = si_create_bitmask(cu_per_sh);
return ~data & mask;
}
static void si_setup_spi(struct radeon_device *rdev,
u32 se_num, u32 sh_per_se,
u32 cu_per_sh)
{
int i, j, k;
u32 data, mask, active_cu;
for (i = 0; i < se_num; i++) {
for (j = 0; j < sh_per_se; j++) {
si_select_se_sh(rdev, i, j);
data = RREG32(SPI_STATIC_THREAD_MGMT_3);
active_cu = si_get_cu_enabled(rdev, cu_per_sh);
mask = 1;
for (k = 0; k < 16; k++) {
mask <<= k;
if (active_cu & mask) {
data &= ~mask;
WREG32(SPI_STATIC_THREAD_MGMT_3, data);
break;
}
}
}
}
si_select_se_sh(rdev, 0xffffffff, 0xffffffff);
}
static u32 si_get_rb_disabled(struct radeon_device *rdev,
u32 max_rb_num, u32 se_num,
u32 sh_per_se)
{
u32 data, mask;
data = RREG32(CC_RB_BACKEND_DISABLE);
if (data & 1)
data &= BACKEND_DISABLE_MASK;
else
data = 0;
data |= RREG32(GC_USER_RB_BACKEND_DISABLE);
data >>= BACKEND_DISABLE_SHIFT;
mask = si_create_bitmask(max_rb_num / se_num / sh_per_se);
return data & mask;
}
static void si_setup_rb(struct radeon_device *rdev,
u32 se_num, u32 sh_per_se,
u32 max_rb_num)
{
int i, j;
u32 data, mask;
u32 disabled_rbs = 0;
u32 enabled_rbs = 0;
for (i = 0; i < se_num; i++) {
for (j = 0; j < sh_per_se; j++) {
si_select_se_sh(rdev, i, j);
data = si_get_rb_disabled(rdev, max_rb_num, se_num, sh_per_se);
disabled_rbs |= data << ((i * sh_per_se + j) * TAHITI_RB_BITMAP_WIDTH_PER_SH);
}
}
si_select_se_sh(rdev, 0xffffffff, 0xffffffff);
mask = 1;
for (i = 0; i < max_rb_num; i++) {
if (!(disabled_rbs & mask))
enabled_rbs |= mask;
mask <<= 1;
}
for (i = 0; i < se_num; i++) {
si_select_se_sh(rdev, i, 0xffffffff);
data = 0;
for (j = 0; j < sh_per_se; j++) {
switch (enabled_rbs & 3) {
case 1:
data |= (RASTER_CONFIG_RB_MAP_0 << (i * sh_per_se + j) * 2);
break;
case 2:
data |= (RASTER_CONFIG_RB_MAP_3 << (i * sh_per_se + j) * 2);
break;
case 3:
default:
data |= (RASTER_CONFIG_RB_MAP_2 << (i * sh_per_se + j) * 2);
break;
}
enabled_rbs >>= 2;
}
WREG32(PA_SC_RASTER_CONFIG, data);
}
si_select_se_sh(rdev, 0xffffffff, 0xffffffff);
}
static void si_gpu_init(struct radeon_device *rdev)
{
-u32 cc_rb_backend_disable = 0;
-u32 cc_gc_shader_array_config;
u32 gb_addr_config = 0;
u32 mc_shared_chmap, mc_arb_ramcfg;
-u32 gb_backend_map;
-u32 cgts_tcc_disable;
u32 sx_debug_1;
-u32 gc_user_shader_array_config;
-u32 gc_user_rb_backend_disable;
-u32 cgts_user_tcc_disable;
u32 hdp_host_path_cntl;
u32 tmp;
int i, j;
@@ -1581,9 +1520,9 @@ static void si_gpu_init(struct radeon_device *rdev)
switch (rdev->family) {
case CHIP_TAHITI:
rdev->config.si.max_shader_engines = 2;
-rdev->config.si.max_pipes_per_simd = 4;
rdev->config.si.max_tile_pipes = 12;
-rdev->config.si.max_simds_per_se = 8;
+rdev->config.si.max_cu_per_sh = 8;
+rdev->config.si.max_sh_per_se = 2;
rdev->config.si.max_backends_per_se = 4;
rdev->config.si.max_texture_channel_caches = 12;
rdev->config.si.max_gprs = 256;
@@ -1594,12 +1533,13 @@ static void si_gpu_init(struct radeon_device *rdev)
rdev->config.si.sc_prim_fifo_size_backend = 0x100;
rdev->config.si.sc_hiz_tile_fifo_size = 0x30;
rdev->config.si.sc_earlyz_tile_fifo_size = 0x130;
+gb_addr_config = TAHITI_GB_ADDR_CONFIG_GOLDEN;
break;
case CHIP_PITCAIRN:
rdev->config.si.max_shader_engines = 2;
-rdev->config.si.max_pipes_per_simd = 4;
rdev->config.si.max_tile_pipes = 8;
-rdev->config.si.max_simds_per_se = 5;
+rdev->config.si.max_cu_per_sh = 5;
+rdev->config.si.max_sh_per_se = 2;
rdev->config.si.max_backends_per_se = 4;
rdev->config.si.max_texture_channel_caches = 8;
rdev->config.si.max_gprs = 256;
@@ -1610,13 +1550,14 @@ static void si_gpu_init(struct radeon_device *rdev)
rdev->config.si.sc_prim_fifo_size_backend = 0x100;
rdev->config.si.sc_hiz_tile_fifo_size = 0x30;
rdev->config.si.sc_earlyz_tile_fifo_size = 0x130;
+gb_addr_config = TAHITI_GB_ADDR_CONFIG_GOLDEN;
break;
case CHIP_VERDE:
default:
rdev->config.si.max_shader_engines = 1;
-rdev->config.si.max_pipes_per_simd = 4;
rdev->config.si.max_tile_pipes = 4;
-rdev->config.si.max_simds_per_se = 2;
+rdev->config.si.max_cu_per_sh = 2;
+rdev->config.si.max_sh_per_se = 2;
rdev->config.si.max_backends_per_se = 4;
rdev->config.si.max_texture_channel_caches = 4;
rdev->config.si.max_gprs = 256;
@@ -1627,6 +1568,7 @@ static void si_gpu_init(struct radeon_device *rdev)
rdev->config.si.sc_prim_fifo_size_backend = 0x40;
rdev->config.si.sc_hiz_tile_fifo_size = 0x30;
rdev->config.si.sc_earlyz_tile_fifo_size = 0x130;
+gb_addr_config = VERDE_GB_ADDR_CONFIG_GOLDEN;
break;
}
@@ -1648,31 +1590,7 @@ static void si_gpu_init(struct radeon_device *rdev)
mc_shared_chmap = RREG32(MC_SHARED_CHMAP);
mc_arb_ramcfg = RREG32(MC_ARB_RAMCFG);
cc_rb_backend_disable = RREG32(CC_RB_BACKEND_DISABLE);
cc_gc_shader_array_config = RREG32(CC_GC_SHADER_ARRAY_CONFIG);
cgts_tcc_disable = 0xffff0000;
for (i = 0; i < rdev->config.si.max_texture_channel_caches; i++)
cgts_tcc_disable &= ~(1 << (16 + i));
gc_user_rb_backend_disable = RREG32(GC_USER_RB_BACKEND_DISABLE);
gc_user_shader_array_config = RREG32(GC_USER_SHADER_ARRAY_CONFIG);
cgts_user_tcc_disable = RREG32(CGTS_USER_TCC_DISABLE);
rdev->config.si.num_shader_engines = rdev->config.si.max_shader_engines;
rdev->config.si.num_tile_pipes = rdev->config.si.max_tile_pipes;
tmp = ((~gc_user_rb_backend_disable) & BACKEND_DISABLE_MASK) >> BACKEND_DISABLE_SHIFT;
rdev->config.si.num_backends_per_se = r600_count_pipe_bits(tmp);
tmp = (gc_user_rb_backend_disable & BACKEND_DISABLE_MASK) >> BACKEND_DISABLE_SHIFT;
rdev->config.si.backend_disable_mask_per_asic =
si_get_disable_mask_per_asic(rdev, tmp, SI_MAX_BACKENDS_PER_SE_MASK,
rdev->config.si.num_shader_engines);
rdev->config.si.backend_map =
si_get_tile_pipe_to_backend_map(rdev, rdev->config.si.num_tile_pipes,
rdev->config.si.num_backends_per_se *
rdev->config.si.num_shader_engines,
&rdev->config.si.backend_disable_mask_per_asic,
rdev->config.si.num_shader_engines);
tmp = ((~cgts_user_tcc_disable) & TCC_DISABLE_MASK) >> TCC_DISABLE_SHIFT;
rdev->config.si.num_texture_channel_caches = r600_count_pipe_bits(tmp);
rdev->config.si.mem_max_burst_length_bytes = 256;
tmp = (mc_arb_ramcfg & NOOFCOLS_MASK) >> NOOFCOLS_SHIFT;
rdev->config.si.mem_row_size_in_kb = (4 * (1 << (8 + tmp))) / 1024;
@@ -1683,55 +1601,8 @@ static void si_gpu_init(struct radeon_device *rdev)
rdev->config.si.num_gpus = 1;
rdev->config.si.multi_gpu_tile_size = 64;
-gb_addr_config = 0;
-switch (rdev->config.si.num_tile_pipes) {
+/* fix up row size */
+gb_addr_config &= ~ROW_SIZE_MASK;
case 1:
gb_addr_config |= NUM_PIPES(0);
break;
case 2:
gb_addr_config |= NUM_PIPES(1);
break;
case 4:
gb_addr_config |= NUM_PIPES(2);
break;
case 8:
default:
gb_addr_config |= NUM_PIPES(3);
break;
}
tmp = (rdev->config.si.mem_max_burst_length_bytes / 256) - 1;
gb_addr_config |= PIPE_INTERLEAVE_SIZE(tmp);
gb_addr_config |= NUM_SHADER_ENGINES(rdev->config.si.num_shader_engines - 1);
tmp = (rdev->config.si.shader_engine_tile_size / 16) - 1;
gb_addr_config |= SHADER_ENGINE_TILE_SIZE(tmp);
switch (rdev->config.si.num_gpus) {
case 1:
default:
gb_addr_config |= NUM_GPUS(0);
break;
case 2:
gb_addr_config |= NUM_GPUS(1);
break;
case 4:
gb_addr_config |= NUM_GPUS(2);
break;
}
switch (rdev->config.si.multi_gpu_tile_size) {
case 16:
gb_addr_config |= MULTI_GPU_TILE_SIZE(0);
break;
case 32:
default:
gb_addr_config |= MULTI_GPU_TILE_SIZE(1);
break;
case 64:
gb_addr_config |= MULTI_GPU_TILE_SIZE(2);
break;
case 128:
gb_addr_config |= MULTI_GPU_TILE_SIZE(3);
break;
}
switch (rdev->config.si.mem_row_size_in_kb) {
case 1:
default:
@@ -1745,26 +1616,6 @@ static void si_gpu_init(struct radeon_device *rdev)
break;
}
tmp = (gb_addr_config & NUM_PIPES_MASK) >> NUM_PIPES_SHIFT;
rdev->config.si.num_tile_pipes = (1 << tmp);
tmp = (gb_addr_config & PIPE_INTERLEAVE_SIZE_MASK) >> PIPE_INTERLEAVE_SIZE_SHIFT;
rdev->config.si.mem_max_burst_length_bytes = (tmp + 1) * 256;
tmp = (gb_addr_config & NUM_SHADER_ENGINES_MASK) >> NUM_SHADER_ENGINES_SHIFT;
rdev->config.si.num_shader_engines = tmp + 1;
tmp = (gb_addr_config & NUM_GPUS_MASK) >> NUM_GPUS_SHIFT;
rdev->config.si.num_gpus = tmp + 1;
tmp = (gb_addr_config & MULTI_GPU_TILE_SIZE_MASK) >> MULTI_GPU_TILE_SIZE_SHIFT;
rdev->config.si.multi_gpu_tile_size = 1 << tmp;
tmp = (gb_addr_config & ROW_SIZE_MASK) >> ROW_SIZE_SHIFT;
rdev->config.si.mem_row_size_in_kb = 1 << tmp;
gb_backend_map =
si_get_tile_pipe_to_backend_map(rdev, rdev->config.si.num_tile_pipes,
rdev->config.si.num_backends_per_se *
rdev->config.si.num_shader_engines,
&rdev->config.si.backend_disable_mask_per_asic,
rdev->config.si.num_shader_engines);
/* setup tiling info dword. gb_addr_config is not adequate since it does
 * not have bank info, so create a custom tiling dword.
 * bits 3:0 num_pipes
@@ -1789,34 +1640,30 @@ static void si_gpu_init(struct radeon_device *rdev)
rdev->config.si.tile_config |= (3 << 0);
break;
}
-rdev->config.si.tile_config |=
-((mc_arb_ramcfg & NOOFBANK_MASK) >> NOOFBANK_SHIFT) << 4;
+if ((mc_arb_ramcfg & NOOFBANK_MASK) >> NOOFBANK_SHIFT)
+rdev->config.si.tile_config |= 1 << 4;
+else
+rdev->config.si.tile_config |= 0 << 4;
rdev->config.si.tile_config |=
((gb_addr_config & PIPE_INTERLEAVE_SIZE_MASK) >> PIPE_INTERLEAVE_SIZE_SHIFT) << 8;
rdev->config.si.tile_config |=
((gb_addr_config & ROW_SIZE_MASK) >> ROW_SIZE_SHIFT) << 12;
-rdev->config.si.backend_map = gb_backend_map;
WREG32(GB_ADDR_CONFIG, gb_addr_config);
WREG32(DMIF_ADDR_CONFIG, gb_addr_config);
WREG32(HDP_ADDR_CONFIG, gb_addr_config);
/* primary versions */
WREG32(CC_RB_BACKEND_DISABLE, cc_rb_backend_disable);
WREG32(CC_SYS_RB_BACKEND_DISABLE, cc_rb_backend_disable);
WREG32(CC_GC_SHADER_ARRAY_CONFIG, cc_gc_shader_array_config);
WREG32(CGTS_TCC_DISABLE, cgts_tcc_disable);
/* user versions */
WREG32(GC_USER_RB_BACKEND_DISABLE, cc_rb_backend_disable);
WREG32(GC_USER_SYS_RB_BACKEND_DISABLE, cc_rb_backend_disable);
WREG32(GC_USER_SHADER_ARRAY_CONFIG, cc_gc_shader_array_config);
WREG32(CGTS_USER_TCC_DISABLE, cgts_tcc_disable);
si_tiling_mode_table_init(rdev);
si_setup_rb(rdev, rdev->config.si.max_shader_engines,
rdev->config.si.max_sh_per_se,
rdev->config.si.max_backends_per_se);
si_setup_spi(rdev, rdev->config.si.max_shader_engines,
rdev->config.si.max_sh_per_se,
rdev->config.si.max_cu_per_sh);
/* set HW defaults for 3D engine */
WREG32(CP_QUEUE_THRESHOLDS, (ROQ_IB1_START(0x16) |
ROQ_IB2_START(0x2b)));
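Of the helpers introduced above, si_create_bitmask(n) simply builds a mask with the n low bits set (si_create_bitmask(5) == 0x1f, for example). si_get_cu_enabled() and si_get_rb_disabled() use such masks to clip the inactive-CU and disabled-backend fields read from the harvest registers to the per-shader-array limits, and si_setup_spi()/si_setup_rb() then program SPI_STATIC_THREAD_MGMT_3 and PA_SC_RASTER_CONFIG from the result.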


@@ -24,6 +24,11 @@
#ifndef SI_H
#define SI_H
#define TAHITI_RB_BITMAP_WIDTH_PER_SH 2
#define TAHITI_GB_ADDR_CONFIG_GOLDEN 0x12011003
#define VERDE_GB_ADDR_CONFIG_GOLDEN 0x12010002
#define CG_MULT_THERMAL_STATUS 0x714
#define ASIC_MAX_TEMP(x) ((x) << 0)
#define ASIC_MAX_TEMP_MASK 0x000001ff
@@ -408,6 +413,12 @@
#define SOFT_RESET_IA (1 << 15)
#define GRBM_GFX_INDEX 0x802C
#define INSTANCE_INDEX(x) ((x) << 0)
#define SH_INDEX(x) ((x) << 8)
#define SE_INDEX(x) ((x) << 16)
#define SH_BROADCAST_WRITES (1 << 29)
#define INSTANCE_BROADCAST_WRITES (1 << 30)
#define SE_BROADCAST_WRITES (1 << 31)
#define GRBM_INT_CNTL 0x8060
# define RDERR_INT_ENABLE (1 << 0)
@@ -480,6 +491,8 @@
#define VGT_TF_MEMORY_BASE 0x89B8
#define CC_GC_SHADER_ARRAY_CONFIG 0x89bc
#define INACTIVE_CUS_MASK 0xFFFF0000
#define INACTIVE_CUS_SHIFT 16
#define GC_USER_SHADER_ARRAY_CONFIG 0x89c0
#define PA_CL_ENHANCE 0x8A14
@@ -688,6 +701,12 @@
#define RLC_MC_CNTL 0xC344
#define RLC_UCODE_CNTL 0xC348
#define PA_SC_RASTER_CONFIG 0x28350
# define RASTER_CONFIG_RB_MAP_0 0
# define RASTER_CONFIG_RB_MAP_1 1
# define RASTER_CONFIG_RB_MAP_2 2
# define RASTER_CONFIG_RB_MAP_3 3
#define VGT_EVENT_INITIATOR 0x28a90
# define SAMPLE_STREAMOUTSTATS1 (1 << 0)
# define SAMPLE_STREAMOUTSTATS2 (2 << 0)


@@ -37,4 +37,16 @@ config I2C_MUX_PCA954x
This driver can also be built as a module. If so, the module
will be called i2c-mux-pca954x.
config I2C_MUX_PINCTRL
tristate "pinctrl-based I2C multiplexer"
depends on PINCTRL
help
If you say yes to this option, support will be included for an I2C
multiplexer that uses the pinctrl subsystem, i.e. pin multiplexing.
This is useful for SoCs whose I2C module's signals can be routed to
different sets of pins at run-time.
This driver can also be built as a module. If so, the module will be
called pinctrl-i2cmux.
endmenu
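The help text above describes routing an SoC's I2C signals to different pin groups at run time; the new driver added below does this through the pinctrl consumer API. A rough illustration of that lookup/select pattern follows (a sketch only; the probe function and the "busA" state name are invented, not part of this commit):

#include <linux/err.h>
#include <linux/pinctrl/consumer.h>
#include <linux/platform_device.h>

/* Hypothetical probe fragment: resolve a named pin state once, then
 * re-route the pins by selecting it. */
static int example_probe(struct platform_device *pdev)
{
	struct pinctrl *p;
	struct pinctrl_state *bus_a;

	p = devm_pinctrl_get(&pdev->dev);
	if (IS_ERR(p))
		return PTR_ERR(p);

	/* "busA" is an assumed entry in the node's pinctrl-names */
	bus_a = pinctrl_lookup_state(p, "busA");
	if (IS_ERR(bus_a))
		return PTR_ERR(bus_a);

	/* program the routing; the mux driver below issues the same call
	 * from its select/deselect callbacks */
	return pinctrl_select_state(p, bus_a);
}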


@@ -4,5 +4,6 @@
obj-$(CONFIG_I2C_MUX_GPIO) += i2c-mux-gpio.o
obj-$(CONFIG_I2C_MUX_PCA9541) += i2c-mux-pca9541.o
obj-$(CONFIG_I2C_MUX_PCA954x) += i2c-mux-pca954x.o
obj-$(CONFIG_I2C_MUX_PINCTRL) += i2c-mux-pinctrl.o
ccflags-$(CONFIG_I2C_DEBUG_BUS) := -DDEBUG


@@ -0,0 +1,279 @@
/*
* I2C multiplexer using pinctrl API
*
* Copyright (c) 2012, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/i2c.h>
#include <linux/i2c-mux.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/of_i2c.h>
#include <linux/pinctrl/consumer.h>
#include <linux/i2c-mux-pinctrl.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
struct i2c_mux_pinctrl {
struct device *dev;
struct i2c_mux_pinctrl_platform_data *pdata;
struct pinctrl *pinctrl;
struct pinctrl_state **states;
struct pinctrl_state *state_idle;
struct i2c_adapter *parent;
struct i2c_adapter **busses;
};
static int i2c_mux_pinctrl_select(struct i2c_adapter *adap, void *data,
u32 chan)
{
struct i2c_mux_pinctrl *mux = data;
return pinctrl_select_state(mux->pinctrl, mux->states[chan]);
}
static int i2c_mux_pinctrl_deselect(struct i2c_adapter *adap, void *data,
u32 chan)
{
struct i2c_mux_pinctrl *mux = data;
return pinctrl_select_state(mux->pinctrl, mux->state_idle);
}
#ifdef CONFIG_OF
static int i2c_mux_pinctrl_parse_dt(struct i2c_mux_pinctrl *mux,
struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
int num_names, i, ret;
struct device_node *adapter_np;
struct i2c_adapter *adapter;
if (!np)
return 0;
mux->pdata = devm_kzalloc(&pdev->dev, sizeof(*mux->pdata), GFP_KERNEL);
if (!mux->pdata) {
dev_err(mux->dev,
"Cannot allocate i2c_mux_pinctrl_platform_data\n");
return -ENOMEM;
}
num_names = of_property_count_strings(np, "pinctrl-names");
if (num_names < 0) {
dev_err(mux->dev, "Cannot parse pinctrl-names: %d\n",
num_names);
return num_names;
}
mux->pdata->pinctrl_states = devm_kzalloc(&pdev->dev,
sizeof(*mux->pdata->pinctrl_states) * num_names,
GFP_KERNEL);
if (!mux->pdata->pinctrl_states) {
dev_err(mux->dev, "Cannot allocate pinctrl_states\n");
return -ENOMEM;
}
for (i = 0; i < num_names; i++) {
ret = of_property_read_string_index(np, "pinctrl-names", i,
&mux->pdata->pinctrl_states[mux->pdata->bus_count]);
if (ret < 0) {
dev_err(mux->dev, "Cannot parse pinctrl-names: %d\n",
ret);
return ret;
}
if (!strcmp(mux->pdata->pinctrl_states[mux->pdata->bus_count],
"idle")) {
if (i != num_names - 1) {
dev_err(mux->dev, "idle state must be last\n");
return -EINVAL;
}
mux->pdata->pinctrl_state_idle = "idle";
} else {
mux->pdata->bus_count++;
}
}
adapter_np = of_parse_phandle(np, "i2c-parent", 0);
if (!adapter_np) {
dev_err(mux->dev, "Cannot parse i2c-parent\n");
return -ENODEV;
}
adapter = of_find_i2c_adapter_by_node(adapter_np);
if (!adapter) {
dev_err(mux->dev, "Cannot find parent bus\n");
return -ENODEV;
}
mux->pdata->parent_bus_num = i2c_adapter_id(adapter);
put_device(&adapter->dev);
return 0;
}
#else
static inline int i2c_mux_pinctrl_parse_dt(struct i2c_mux_pinctrl *mux,
struct platform_device *pdev)
{
return 0;
}
#endif
static int __devinit i2c_mux_pinctrl_probe(struct platform_device *pdev)
{
struct i2c_mux_pinctrl *mux;
int (*deselect)(struct i2c_adapter *, void *, u32);
int i, ret;
mux = devm_kzalloc(&pdev->dev, sizeof(*mux), GFP_KERNEL);
if (!mux) {
dev_err(&pdev->dev, "Cannot allocate i2c_mux_pinctrl\n");
ret = -ENOMEM;
goto err;
}
platform_set_drvdata(pdev, mux);
mux->dev = &pdev->dev;
mux->pdata = pdev->dev.platform_data;
if (!mux->pdata) {
ret = i2c_mux_pinctrl_parse_dt(mux, pdev);
if (ret < 0)
goto err;
}
if (!mux->pdata) {
dev_err(&pdev->dev, "Missing platform data\n");
ret = -ENODEV;
goto err;
}
mux->states = devm_kzalloc(&pdev->dev,
sizeof(*mux->states) * mux->pdata->bus_count,
GFP_KERNEL);
if (!mux->states) {
dev_err(&pdev->dev, "Cannot allocate states\n");
ret = -ENOMEM;
goto err;
}
mux->busses = devm_kzalloc(&pdev->dev,
sizeof(mux->busses) * mux->pdata->bus_count,
GFP_KERNEL);
if (!mux->busses) {
dev_err(&pdev->dev, "Cannot allocate busses\n");
ret = -ENOMEM;
goto err;
}
mux->pinctrl = devm_pinctrl_get(&pdev->dev);
if (IS_ERR(mux->pinctrl)) {
ret = PTR_ERR(mux->pinctrl);
dev_err(&pdev->dev, "Cannot get pinctrl: %d\n", ret);
goto err;
}
for (i = 0; i < mux->pdata->bus_count; i++) {
mux->states[i] = pinctrl_lookup_state(mux->pinctrl,
mux->pdata->pinctrl_states[i]);
if (IS_ERR(mux->states[i])) {
ret = PTR_ERR(mux->states[i]);
dev_err(&pdev->dev,
"Cannot look up pinctrl state %s: %d\n",
mux->pdata->pinctrl_states[i], ret);
goto err;
}
}
if (mux->pdata->pinctrl_state_idle) {
mux->state_idle = pinctrl_lookup_state(mux->pinctrl,
mux->pdata->pinctrl_state_idle);
if (IS_ERR(mux->state_idle)) {
ret = PTR_ERR(mux->state_idle);
dev_err(&pdev->dev,
"Cannot look up pinctrl state %s: %d\n",
mux->pdata->pinctrl_state_idle, ret);
goto err;
}
deselect = i2c_mux_pinctrl_deselect;
} else {
deselect = NULL;
}
mux->parent = i2c_get_adapter(mux->pdata->parent_bus_num);
if (!mux->parent) {
dev_err(&pdev->dev, "Parent adapter (%d) not found\n",
mux->pdata->parent_bus_num);
ret = -ENODEV;
goto err;
}
for (i = 0; i < mux->pdata->bus_count; i++) {
u32 bus = mux->pdata->base_bus_num ?
(mux->pdata->base_bus_num + i) : 0;
mux->busses[i] = i2c_add_mux_adapter(mux->parent, &pdev->dev,
mux, bus, i,
i2c_mux_pinctrl_select,
deselect);
if (!mux->busses[i]) {
ret = -ENODEV;
dev_err(&pdev->dev, "Failed to add adapter %d\n", i);
goto err_del_adapter;
}
}
return 0;
err_del_adapter:
for (; i > 0; i--)
i2c_del_mux_adapter(mux->busses[i - 1]);
i2c_put_adapter(mux->parent);
err:
return ret;
}
static int __devexit i2c_mux_pinctrl_remove(struct platform_device *pdev)
{
struct i2c_mux_pinctrl *mux = platform_get_drvdata(pdev);
int i;
for (i = 0; i < mux->pdata->bus_count; i++)
i2c_del_mux_adapter(mux->busses[i]);
i2c_put_adapter(mux->parent);
return 0;
}
#ifdef CONFIG_OF
static const struct of_device_id i2c_mux_pinctrl_of_match[] __devinitconst = {
{ .compatible = "i2c-mux-pinctrl", },
{},
};
MODULE_DEVICE_TABLE(of, i2c_mux_pinctrl_of_match);
#endif
static struct platform_driver i2c_mux_pinctrl_driver = {
.driver = {
.name = "i2c-mux-pinctrl",
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(i2c_mux_pinctrl_of_match),
},
.probe = i2c_mux_pinctrl_probe,
.remove = __devexit_p(i2c_mux_pinctrl_remove),
};
module_platform_driver(i2c_mux_pinctrl_driver);
MODULE_DESCRIPTION("pinctrl-based I2C multiplexer driver");
MODULE_AUTHOR("Stephen Warren <swarren@nvidia.com>");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:i2c-mux-pinctrl");


@@ -1593,6 +1593,10 @@ static int import_ep(struct c4iw_ep *ep, __be32 peer_ip, struct dst_entry *dst,
struct net_device *pdev;
pdev = ip_dev_find(&init_net, peer_ip);
+if (!pdev) {
+err = -ENODEV;
+goto out;
+}
ep->l2t = cxgb4_l2t_get(cdev->rdev.lldi.l2t,
n, pdev, 0);
if (!ep->l2t)


@@ -140,7 +140,7 @@ static int mlx4_ib_query_device(struct ib_device *ibdev,
props->max_mr_size = ~0ull;
props->page_size_cap = dev->dev->caps.page_size_cap;
props->max_qp = dev->dev->caps.num_qps - dev->dev->caps.reserved_qps;
-props->max_qp_wr = dev->dev->caps.max_wqes;
+props->max_qp_wr = dev->dev->caps.max_wqes - MLX4_IB_SQ_MAX_SPARE;
props->max_sge = min(dev->dev->caps.max_sq_sg,
dev->dev->caps.max_rq_sg);
props->max_cq = dev->dev->caps.num_cqs - dev->dev->caps.reserved_cqs;
@@ -1084,12 +1084,9 @@ static void mlx4_ib_alloc_eqs(struct mlx4_dev *dev, struct mlx4_ib_dev *ibdev)
int total_eqs = 0;
int i, j, eq;
-/* Init eq table */
-ibdev->eq_table = NULL;
-ibdev->eq_added = 0;
-/* Legacy mode? */
-if (dev->caps.comp_pool == 0)
+/* Legacy mode or comp_pool is not large enough */
+if (dev->caps.comp_pool == 0 ||
+dev->caps.num_ports > dev->caps.comp_pool)
return;
eq_per_port = rounddown_pow_of_two(dev->caps.comp_pool/
@@ -1135,7 +1132,10 @@ static void mlx4_ib_alloc_eqs(struct mlx4_dev *dev, struct mlx4_ib_dev *ibdev)
static void mlx4_ib_free_eqs(struct mlx4_dev *dev, struct mlx4_ib_dev *ibdev)
{
int i;
-int total_eqs;
+/* no additional eqs were added */
+if (!ibdev->eq_table)
+return;
/* Reset the advertised EQ number */
ibdev->ib_dev.num_comp_vectors = dev->caps.num_comp_vectors;
@@ -1148,12 +1148,7 @@ static void mlx4_ib_free_eqs(struct mlx4_dev *dev, struct mlx4_ib_dev *ibdev)
mlx4_release_eq(dev, ibdev->eq_table[i]);
}
-total_eqs = dev->caps.num_comp_vectors + ibdev->eq_added;
-memset(ibdev->eq_table, 0, total_eqs * sizeof(int));
kfree(ibdev->eq_table);
-ibdev->eq_table = NULL;
-ibdev->eq_added = 0;
}
static void *mlx4_ib_add(struct mlx4_dev *dev)


@@ -44,6 +44,14 @@
#include <linux/mlx4/device.h>
#include <linux/mlx4/doorbell.h>
+enum {
+MLX4_IB_SQ_MIN_WQE_SHIFT = 6,
+MLX4_IB_MAX_HEADROOM = 2048
+};
+#define MLX4_IB_SQ_HEADROOM(shift) ((MLX4_IB_MAX_HEADROOM >> (shift)) + 1)
+#define MLX4_IB_SQ_MAX_SPARE (MLX4_IB_SQ_HEADROOM(MLX4_IB_SQ_MIN_WQE_SHIFT))
struct mlx4_ib_ucontext {
struct ib_ucontext ibucontext;
struct mlx4_uar uar;
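For reference, the spare-WQE arithmetic above works out as follows: with MLX4_IB_SQ_MIN_WQE_SHIFT = 6 and MLX4_IB_MAX_HEADROOM = 2048, MLX4_IB_SQ_HEADROOM(6) = (2048 >> 6) + 1 = 33, so MLX4_IB_SQ_MAX_SPARE reserves 33 work-queue entries. The query and size-check paths in this commit subtract that value from caps.max_wqes so callers can never request a queue whose headroom the hardware cannot honour.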


@@ -310,8 +310,8 @@ static int set_rq_size(struct mlx4_ib_dev *dev, struct ib_qp_cap *cap,
int is_user, int has_rq, struct mlx4_ib_qp *qp)
{
/* Sanity check RQ size before proceeding */
-if (cap->max_recv_wr > dev->dev->caps.max_wqes ||
-cap->max_recv_sge > dev->dev->caps.max_rq_sg)
+if (cap->max_recv_wr > dev->dev->caps.max_wqes - MLX4_IB_SQ_MAX_SPARE ||
+cap->max_recv_sge > min(dev->dev->caps.max_sq_sg, dev->dev->caps.max_rq_sg))
return -EINVAL;
if (!has_rq) {
@@ -329,8 +329,17 @@ static int set_rq_size(struct mlx4_ib_dev *dev, struct ib_qp_cap *cap,
qp->rq.wqe_shift = ilog2(qp->rq.max_gs * sizeof (struct mlx4_wqe_data_seg));
}
-cap->max_recv_wr = qp->rq.max_post = qp->rq.wqe_cnt;
-cap->max_recv_sge = qp->rq.max_gs;
+/* leave userspace return values as they were, so as not to break ABI */
+if (is_user) {
+cap->max_recv_wr = qp->rq.max_post = qp->rq.wqe_cnt;
+cap->max_recv_sge = qp->rq.max_gs;
+} else {
+cap->max_recv_wr = qp->rq.max_post =
+min(dev->dev->caps.max_wqes - MLX4_IB_SQ_MAX_SPARE, qp->rq.wqe_cnt);
+cap->max_recv_sge = min(qp->rq.max_gs,
+min(dev->dev->caps.max_sq_sg,
+dev->dev->caps.max_rq_sg));
+}
return 0;
}
@@ -341,8 +350,8 @@ static int set_kernel_sq_size(struct mlx4_ib_dev *dev, struct ib_qp_cap *cap,
int s;
/* Sanity check SQ size before proceeding */
-if (cap->max_send_wr > dev->dev->caps.max_wqes ||
-cap->max_send_sge > dev->dev->caps.max_sq_sg ||
+if (cap->max_send_wr > (dev->dev->caps.max_wqes - MLX4_IB_SQ_MAX_SPARE) ||
+cap->max_send_sge > min(dev->dev->caps.max_sq_sg, dev->dev->caps.max_rq_sg) ||
cap->max_inline_data + send_wqe_overhead(type, qp->flags) +
sizeof (struct mlx4_wqe_inline_seg) > dev->dev->caps.max_sq_desc_sz)
return -EINVAL;


@@ -231,7 +231,6 @@ struct ocrdma_qp_hwq_info {
u32 entry_size;
u32 max_cnt;
u32 max_wqe_idx;
-u32 free_delta;
u16 dbid; /* qid, where to ring the doorbell. */
u32 len;
dma_addr_t pa;


@@ -101,8 +101,6 @@ struct ocrdma_create_qp_uresp {
u32 rsvd1;
u32 num_wqe_allocated;
u32 num_rqe_allocated;
-u32 free_wqe_delta;
-u32 free_rqe_delta;
u32 db_sq_offset;
u32 db_rq_offset;
u32 db_shift;
@@ -126,8 +124,7 @@ struct ocrdma_create_srq_uresp {
u32 db_rq_offset;
u32 db_shift;
-u32 free_rqe_delta;
-u32 rsvd2;
+u64 rsvd2;
u64 rsvd3;
} __packed;


@@ -732,7 +732,7 @@ static void ocrdma_dispatch_ibevent(struct ocrdma_dev *dev,
break;
case OCRDMA_SRQ_LIMIT_EVENT:
ib_evt.element.srq = &qp->srq->ibsrq;
-ib_evt.event = IB_EVENT_QP_LAST_WQE_REACHED;
+ib_evt.event = IB_EVENT_SRQ_LIMIT_REACHED;
srq_event = 1;
qp_event = 0;
break;
@@ -1990,19 +1990,12 @@ static void ocrdma_get_create_qp_rsp(struct ocrdma_create_qp_rsp *rsp,
max_wqe_allocated = 1 << max_wqe_allocated;
max_rqe_allocated = 1 << ((u16)rsp->max_wqe_rqe);
-if (qp->dev->nic_info.dev_family == OCRDMA_GEN2_FAMILY) {
-qp->sq.free_delta = 0;
-qp->rq.free_delta = 1;
-} else
-qp->sq.free_delta = 1;
qp->sq.max_cnt = max_wqe_allocated;
qp->sq.max_wqe_idx = max_wqe_allocated - 1;
if (!attrs->srq) {
qp->rq.max_cnt = max_rqe_allocated;
qp->rq.max_wqe_idx = max_rqe_allocated - 1;
-qp->rq.free_delta = 1;
}
}


@@ -26,7 +26,6 @@
*******************************************************************/
#include <linux/module.h>
-#include <linux/version.h>
#include <linux/idr.h>
#include <rdma/ib_verbs.h>
#include <rdma/ib_user_verbs.h>


@@ -940,8 +940,6 @@ static int ocrdma_copy_qp_uresp(struct ocrdma_qp *qp,
uresp.db_rq_offset = OCRDMA_DB_RQ_OFFSET;
uresp.db_shift = 16;
}
-uresp.free_wqe_delta = qp->sq.free_delta;
-uresp.free_rqe_delta = qp->rq.free_delta;
if (qp->dpp_enabled) {
uresp.dpp_credit = dpp_credit_lmt;
@@ -1307,8 +1305,6 @@ static int ocrdma_hwq_free_cnt(struct ocrdma_qp_hwq_info *q)
free_cnt = (q->max_cnt - q->head) + q->tail;
else
free_cnt = q->tail - q->head;
-if (q->free_delta)
-free_cnt -= q->free_delta;
return free_cnt;
}
@@ -1501,7 +1497,6 @@ static int ocrdma_copy_srq_uresp(struct ocrdma_srq *srq, struct ib_udata *udata)
(srq->pd->id * srq->dev->nic_info.db_page_size);
uresp.db_page_size = srq->dev->nic_info.db_page_size;
uresp.num_rqe_allocated = srq->rq.max_cnt;
-uresp.free_rqe_delta = 1;
if (srq->dev->nic_info.dev_family == OCRDMA_GEN2_FAMILY) {
uresp.db_rq_offset = OCRDMA_DB_GEN2_RQ1_OFFSET;
uresp.db_shift = 24;


@@ -28,7 +28,6 @@
#ifndef __OCRDMA_VERBS_H__
#define __OCRDMA_VERBS_H__
-#include <linux/version.h>
int ocrdma_post_send(struct ib_qp *, struct ib_send_wr *,
struct ib_send_wr **bad_wr);
int ocrdma_post_recv(struct ib_qp *, struct ib_recv_wr *,


@@ -547,26 +547,12 @@ static void iommu_poll_events(struct amd_iommu *iommu)
spin_unlock_irqrestore(&iommu->lock, flags);
}
-static void iommu_handle_ppr_entry(struct amd_iommu *iommu, u32 head)
+static void iommu_handle_ppr_entry(struct amd_iommu *iommu, u64 *raw)
{
struct amd_iommu_fault fault;
-volatile u64 *raw;
-int i;
INC_STATS_COUNTER(pri_requests);
-raw = (u64 *)(iommu->ppr_log + head);
-/*
- * Hardware bug: Interrupt may arrive before the entry is written to
- * memory. If this happens we need to wait for the entry to arrive.
- */
-for (i = 0; i < LOOP_TIMEOUT; ++i) {
-if (PPR_REQ_TYPE(raw[0]) != 0)
-break;
-udelay(1);
-}
if (PPR_REQ_TYPE(raw[0]) != PPR_REQ_FAULT) {
pr_err_ratelimited("AMD-Vi: Unknown PPR request received\n");
return;
@@ -578,12 +564,6 @@ static void iommu_handle_ppr_entry(struct amd_iommu *iommu, u32 head)
fault.tag = PPR_TAG(raw[0]);
fault.flags = PPR_FLAGS(raw[0]);
-/*
- * To detect the hardware bug we need to clear the entry
- * to back to zero.
- */
-raw[0] = raw[1] = 0;
atomic_notifier_call_chain(&ppr_notifier, 0, &fault);
}
@@ -595,25 +575,62 @@ static void iommu_poll_ppr_log(struct amd_iommu *iommu)
if (iommu->ppr_log == NULL)
return;
-/* enable ppr interrupts again */
-writel(MMIO_STATUS_PPR_INT_MASK, iommu->mmio_base + MMIO_STATUS_OFFSET);
spin_lock_irqsave(&iommu->lock, flags);
head = readl(iommu->mmio_base + MMIO_PPR_HEAD_OFFSET);
tail = readl(iommu->mmio_base + MMIO_PPR_TAIL_OFFSET);
while (head != tail) {
+volatile u64 *raw;
+u64 entry[2];
+int i;
-/* Handle PPR entry */
-iommu_handle_ppr_entry(iommu, head);
+raw = (u64 *)(iommu->ppr_log + head);
-/* Update and refresh ring-buffer state*/
+/*
+ * Hardware bug: Interrupt may arrive before the entry is
+ * written to memory. If this happens we need to wait for the
+ * entry to arrive.
+ */
+for (i = 0; i < LOOP_TIMEOUT; ++i) {
+if (PPR_REQ_TYPE(raw[0]) != 0)
+break;
+udelay(1);
+}
+/* Avoid memcpy function-call overhead */
+entry[0] = raw[0];
+entry[1] = raw[1];
+/*
+ * To detect the hardware bug we need to clear the entry
+ * back to zero.
+ */
+raw[0] = raw[1] = 0UL;
+/* Update head pointer of hardware ring-buffer */
head = (head + PPR_ENTRY_SIZE) % PPR_LOG_SIZE;
writel(head, iommu->mmio_base + MMIO_PPR_HEAD_OFFSET);
+/*
+ * Release iommu->lock because ppr-handling might need to
+ * re-aquire it
+ */
+spin_unlock_irqrestore(&iommu->lock, flags);
+/* Handle PPR entry */
+iommu_handle_ppr_entry(iommu, entry);
+spin_lock_irqsave(&iommu->lock, flags);
+/* Refresh ring-buffer information */
+head = readl(iommu->mmio_base + MMIO_PPR_HEAD_OFFSET);
tail = readl(iommu->mmio_base + MMIO_PPR_TAIL_OFFSET);
}
+/* enable ppr interrupts again */
+writel(MMIO_STATUS_PPR_INT_MASK, iommu->mmio_base + MMIO_STATUS_OFFSET);
spin_unlock_irqrestore(&iommu->lock, flags);
}
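The reworked poll loop above copies each log entry out of the ring, clears the slot, advances the hardware head pointer, and only then drops iommu->lock around the handler, re-reading head and tail afterwards because new entries may have arrived meanwhile. A stripped-down sketch of that drain pattern (the ring layout and helper names are invented for illustration):

#include <linux/spinlock.h>
#include <linux/types.h>

/* Hypothetical ring-drain skeleton: the handler may sleep or take other
 * locks, so each entry is copied out and the lock dropped around it. */
struct ring {
	spinlock_t lock;
	u32 head, tail;
	u64 slots[64][2];
};

extern void handle_entry(u64 entry[2]);	/* assumed consumer callback */
extern u32 read_hw_tail(struct ring *r);	/* assumed MMIO read */

static void drain_ring(struct ring *r)
{
	unsigned long flags;

	spin_lock_irqsave(&r->lock, flags);
	while (r->head != r->tail) {
		u64 entry[2];

		/* copy out and clear the slot while still under the lock */
		entry[0] = r->slots[r->head][0];
		entry[1] = r->slots[r->head][1];
		r->slots[r->head][0] = r->slots[r->head][1] = 0;
		r->head = (r->head + 1) % 64;

		/* drop the lock: the handler may need to re-acquire it */
		spin_unlock_irqrestore(&r->lock, flags);
		handle_entry(entry);
		spin_lock_irqsave(&r->lock, flags);

		/* refresh the producer position before deciding to stop */
		r->tail = read_hw_tail(r);
	}
	spin_unlock_irqrestore(&r->lock, flags);
}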


@@ -1029,6 +1029,9 @@ static int __init init_iommu_one(struct amd_iommu *iommu, struct ivhd_header *h)
if (!iommu->dev)
return 1;
+iommu->root_pdev = pci_get_bus_and_slot(iommu->dev->bus->number,
+PCI_DEVFN(0, 0));
iommu->cap_ptr = h->cap_ptr;
iommu->pci_seg = h->pci_seg;
iommu->mmio_phys = h->mmio_phys;
@@ -1323,20 +1326,16 @@ static void iommu_apply_resume_quirks(struct amd_iommu *iommu)
{
int i, j;
u32 ioc_feature_control;
-struct pci_dev *pdev = NULL;
+struct pci_dev *pdev = iommu->root_pdev;
/* RD890 BIOSes may not have completely reconfigured the iommu */
-if (!is_rd890_iommu(iommu->dev))
+if (!is_rd890_iommu(iommu->dev) || !pdev)
return;
/*
 * First, we need to ensure that the iommu is enabled. This is
 * controlled by a register in the northbridge
 */
-pdev = pci_get_bus_and_slot(iommu->dev->bus->number, PCI_DEVFN(0, 0));
-if (!pdev)
-return;
/* Select Northbridge indirect register 0x75 and enable writing */
pci_write_config_dword(pdev, 0x60, 0x75 | (1 << 7));
@@ -1346,8 +1345,6 @@ static void iommu_apply_resume_quirks(struct amd_iommu *iommu)
if (!(ioc_feature_control & 0x1))
pci_write_config_dword(pdev, 0x64, ioc_feature_control | 1);
-pci_dev_put(pdev);
/* Restore the iommu BAR */
pci_write_config_dword(iommu->dev, iommu->cap_ptr + 4,
iommu->stored_addr_lo);


@@ -481,6 +481,9 @@ struct amd_iommu {
/* Pointer to PCI device of this IOMMU */
struct pci_dev *dev;
+/* Cache pdev to root device for resume quirks */
+struct pci_dev *root_pdev;
/* physical address of MMIO space */
u64 mmio_phys;
/* virtual address of MMIO space */


@@ -2550,6 +2550,7 @@ static struct r1conf *setup_conf(struct mddev *mddev)
err = -EINVAL;
spin_lock_init(&conf->device_lock);
rdev_for_each(rdev, mddev) {
+struct request_queue *q;
int disk_idx = rdev->raid_disk;
if (disk_idx >= mddev->raid_disks
|| disk_idx < 0)
@@ -2562,6 +2563,9 @@ static struct r1conf *setup_conf(struct mddev *mddev)
if (disk->rdev)
goto abort;
disk->rdev = rdev;
+q = bdev_get_queue(rdev->bdev);
+if (q->merge_bvec_fn)
+mddev->merge_check_needed = 1;
disk->head_position = 0;
}


@@ -3475,6 +3475,7 @@ static int run(struct mddev *mddev)
rdev_for_each(rdev, mddev) {
long long diff;
+struct request_queue *q;
disk_idx = rdev->raid_disk;
if (disk_idx < 0)
@@ -3493,6 +3494,9 @@ static int run(struct mddev *mddev)
goto out_free_conf;
disk->rdev = rdev;
}
+q = bdev_get_queue(rdev->bdev);
+if (q->merge_bvec_fn)
+mddev->merge_check_needed = 1;
diff = (rdev->new_data_offset - rdev->data_offset);
if (!mddev->reshape_backwards)
diff = -diff;


@@ -264,6 +264,9 @@ static struct dentry *dfs_rootdir;
  */
 int ubi_debugfs_init(void)
 {
+    if (!IS_ENABLED(DEBUG_FS))
+        return 0;
+
     dfs_rootdir = debugfs_create_dir("ubi", NULL);
     if (IS_ERR_OR_NULL(dfs_rootdir)) {
         int err = dfs_rootdir ? -ENODEV : PTR_ERR(dfs_rootdir);
@@ -281,7 +284,8 @@ int ubi_debugfs_init(void)
  */
 void ubi_debugfs_exit(void)
 {
-    debugfs_remove(dfs_rootdir);
+    if (IS_ENABLED(DEBUG_FS))
+        debugfs_remove(dfs_rootdir);
 }

 /* Read an UBI debugfs file */
@@ -403,6 +407,9 @@ int ubi_debugfs_init_dev(struct ubi_device *ubi)
     struct dentry *dent;
     struct ubi_debug_info *d = ubi->dbg;

+    if (!IS_ENABLED(DEBUG_FS))
+        return 0;
+
     n = snprintf(d->dfs_dir_name, UBI_DFS_DIR_LEN + 1, UBI_DFS_DIR_NAME,
                  ubi->ubi_num);
     if (n == UBI_DFS_DIR_LEN) {
@@ -470,5 +477,6 @@ out:
  */
 void ubi_debugfs_exit_dev(struct ubi_device *ubi)
 {
-    debugfs_remove_recursive(ubi->dbg->dfs_dir);
+    if (IS_ENABLED(DEBUG_FS))
+        debugfs_remove_recursive(ubi->dbg->dfs_dir);
 }
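Editor's note: the guard compiles the debugfs hooks down to no-ops when debugfs is not configured. One caveat worth flagging: IS_ENABLED() expects the full CONFIG_ symbol, and the bare DEBUG_FS spelling used in these hunks always evaluates to 0 (a follow-up patch corrected it to CONFIG_DEBUG_FS). A minimal userspace replica of the macro trick, slightly simplified from include/linux/kconfig.h, shows how a defined-to-1 symbol smuggles an extra argument past the preprocessor:

#include <stdio.h>

/* A defined CONFIG_FOO expands to 1, which pastes into __ARG_PLACEHOLDER_1
 * and sneaks a "0," in front of the later arguments; an undefined symbol
 * pastes into junk and the 0 fallback is selected instead. */
#define __ARG_PLACEHOLDER_1 0,
#define config_enabled(cfg)            _config_enabled(cfg)
#define _config_enabled(value)         __config_enabled(__ARG_PLACEHOLDER_##value)
#define __config_enabled(arg1_or_junk) ___config_enabled(arg1_or_junk 1, 0, 0)
#define ___config_enabled(__ignored, val, ...) val
#define IS_ENABLED(option)             config_enabled(option)

#define CONFIG_DEBUG_FS 1   /* comment this out to watch the guard flip to 0 */

int main(void)
{
    if (!IS_ENABLED(CONFIG_DEBUG_FS)) {
        puts("debugfs compiled out: init/exit become no-ops");
        return 0;
    }
    puts("debugfs enabled: would create /sys/kernel/debug entries");
    return 0;
}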


@@ -1262,11 +1262,11 @@ int ubi_wl_flush(struct ubi_device *ubi, int vol_id, int lnum)
     dbg_wl("flush pending work for LEB %d:%d (%d pending works)",
            vol_id, lnum, ubi->works_count);

-    down_write(&ubi->work_sem);
     while (found) {
         struct ubi_work *wrk;
         found = 0;

+        down_read(&ubi->work_sem);
         spin_lock(&ubi->wl_lock);
         list_for_each_entry(wrk, &ubi->works, list) {
             if ((vol_id == UBI_ALL || wrk->vol_id == vol_id) &&
@@ -1277,18 +1277,27 @@ int ubi_wl_flush(struct ubi_device *ubi, int vol_id, int lnum)
                 spin_unlock(&ubi->wl_lock);
                 err = wrk->func(ubi, wrk, 0);
-                if (err)
-                    goto out;
+                if (err) {
+                    up_read(&ubi->work_sem);
+                    return err;
+                }
+
                 spin_lock(&ubi->wl_lock);
                 found = 1;
                 break;
             }
         }
         spin_unlock(&ubi->wl_lock);
+        up_read(&ubi->work_sem);
     }

-out:
+    /*
+     * Make sure all the works which have been done in parallel are
+     * finished.
+     */
+    down_write(&ubi->work_sem);
     up_write(&ubi->work_sem);
+
     return err;
 }
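Editor's note: the rework takes work_sem shared around each work item the flusher runs itself, then takes it exclusively once at the end so that works still executing in other threads are waited out. A small pthreads sketch of the same barrier idiom (hypothetical names, not UBI code) is below; the write lock cannot be granted until every in-flight reader has released, so the acquire/release pair acts as a "wait for stragglers" barrier:

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t work_sem = PTHREAD_RWLOCK_INITIALIZER;

/* Each worker holds the lock shared while it executes one work item,
 * so any number of workers can make progress in parallel. */
static void *worker(void *arg)
{
    (void)arg;
    pthread_rwlock_rdlock(&work_sem);
    /* ... process one queued work item ... */
    pthread_rwlock_unlock(&work_sem);
    return NULL;
}

/* After draining its own view of the queue (under read locks, like the
 * loop in ubi_wl_flush()), the flusher grabs the lock exclusively once:
 * this only succeeds after all in-flight readers have finished. */
static void flush(void)
{
    pthread_rwlock_wrlock(&work_sem);
    pthread_rwlock_unlock(&work_sem);
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    flush();
    pthread_join(t, NULL);
    puts("all in-flight work finished");
    return 0;
}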


@@ -697,10 +697,10 @@ static int mlx4_common_set_port(struct mlx4_dev *dev, int slave, u32 in_mod,
     if (slave != dev->caps.function)
         memset(inbox->buf, 0, 256);
     if (dev->flags & MLX4_FLAG_OLD_PORT_CMDS) {
-        *(u8 *) inbox->buf = !!reset_qkey_viols << 6;
+        *(u8 *) inbox->buf |= !!reset_qkey_viols << 6;
         ((__be32 *) inbox->buf)[2] = agg_cap_mask;
     } else {
-        ((u8 *) inbox->buf)[3] = !!reset_qkey_viols;
+        ((u8 *) inbox->buf)[3] |= !!reset_qkey_viols;
         ((__be32 *) inbox->buf)[1] = agg_cap_mask;
     }


@@ -5,7 +5,7 @@
  *
  * (C) 2009 - Peter Feuerer     peter (a) piie.net
  *                              http://piie.net
- *      2009 Borislav Petkov <petkovbb@gmail.com>
+ *      2009 Borislav Petkov    bp (a) alien8.de
  *
  * Inspired by and many thanks to:
  *  o acerfand   - Rachel Greenham


@@ -587,14 +587,14 @@ static void sbp_management_request_logout(
 {
     struct sbp_tport *tport = agent->tport;
     struct sbp_tpg *tpg = tport->tpg;
-    int login_id;
+    int id;
     struct sbp_login_descriptor *login;

-    login_id = LOGOUT_ORB_LOGIN_ID(be32_to_cpu(req->orb.misc));
+    id = LOGOUT_ORB_LOGIN_ID(be32_to_cpu(req->orb.misc));

-    login = sbp_login_find_by_id(tpg, login_id);
+    login = sbp_login_find_by_id(tpg, id);
     if (!login) {
-        pr_warn("cannot find login: %d\n", login_id);
+        pr_warn("cannot find login: %d\n", id);

         req->status.status = cpu_to_be32(
             STATUS_BLOCK_RESP(STATUS_RESP_REQUEST_COMPLETE) |


@@ -133,16 +133,11 @@ static struct se_device *fd_create_virtdevice(
         ret = PTR_ERR(dev_p);
         goto fail;
     }
-    /* O_DIRECT too? */
-    flags = O_RDWR | O_CREAT | O_LARGEFILE;
     /*
-     * If fd_buffered_io=1 has not been set explicitly (the default),
-     * use O_SYNC to force FILEIO writes to disk.
+     * Use O_DSYNC by default instead of O_SYNC to forgo syncing
+     * of pure timestamp updates.
      */
-    if (!(fd_dev->fbd_flags & FDBD_USE_BUFFERED_IO))
-        flags |= O_SYNC;
+    flags = O_RDWR | O_CREAT | O_LARGEFILE | O_DSYNC;

     file = filp_open(dev_p, flags, 0600);
     if (IS_ERR(file)) {
@@ -380,23 +375,6 @@ static void fd_emulate_sync_cache(struct se_cmd *cmd)
     }
 }

-static void fd_emulate_write_fua(struct se_cmd *cmd)
-{
-    struct se_device *dev = cmd->se_dev;
-    struct fd_dev *fd_dev = dev->dev_ptr;
-    loff_t start = cmd->t_task_lba *
-        dev->se_sub_dev->se_dev_attrib.block_size;
-    loff_t end = start + cmd->data_length;
-    int ret;
-
-    pr_debug("FILEIO: FUA WRITE LBA: %llu, bytes: %u\n",
-        cmd->t_task_lba, cmd->data_length);
-
-    ret = vfs_fsync_range(fd_dev->fd_file, start, end, 1);
-    if (ret != 0)
-        pr_err("FILEIO: vfs_fsync_range() failed: %d\n", ret);
-}
-
 static int fd_execute_cmd(struct se_cmd *cmd, struct scatterlist *sgl,
         u32 sgl_nents, enum dma_data_direction data_direction)
 {
@@ -411,19 +389,21 @@ static int fd_execute_cmd(struct se_cmd *cmd, struct scatterlist *sgl,
         ret = fd_do_readv(cmd, sgl, sgl_nents);
     } else {
         ret = fd_do_writev(cmd, sgl, sgl_nents);
+        /*
+         * Perform implict vfs_fsync_range() for fd_do_writev() ops
+         * for SCSI WRITEs with Forced Unit Access (FUA) set.
+         * Allow this to happen independent of WCE=0 setting.
+         */
         if (ret > 0 &&
-            dev->se_sub_dev->se_dev_attrib.emulate_write_cache > 0 &&
             dev->se_sub_dev->se_dev_attrib.emulate_fua_write > 0 &&
             (cmd->se_cmd_flags & SCF_FUA)) {
-            /*
-             * We might need to be a bit smarter here
-             * and return some sense data to let the initiator
-             * know the FUA WRITE cache sync failed..?
-             */
-            fd_emulate_write_fua(cmd);
-        }
+            struct fd_dev *fd_dev = dev->dev_ptr;
+            loff_t start = cmd->t_task_lba *
+                dev->se_sub_dev->se_dev_attrib.block_size;
+            loff_t end = start + cmd->data_length;

+            vfs_fsync_range(fd_dev->fd_file, start, end, 1);
+        }
     }

     if (ret < 0) {
@@ -442,7 +422,6 @@ enum {
 static match_table_t tokens = {
     {Opt_fd_dev_name, "fd_dev_name=%s"},
     {Opt_fd_dev_size, "fd_dev_size=%s"},
-    {Opt_fd_buffered_io, "fd_buffered_io=%d"},
     {Opt_err, NULL}
 };

@@ -454,7 +433,7 @@ static ssize_t fd_set_configfs_dev_params(
     struct fd_dev *fd_dev = se_dev->se_dev_su_ptr;
     char *orig, *ptr, *arg_p, *opts;
     substring_t args[MAX_OPT_ARGS];
-    int ret = 0, arg, token;
+    int ret = 0, token;

     opts = kstrdup(page, GFP_KERNEL);
     if (!opts)
@@ -498,19 +477,6 @@ static ssize_t fd_set_configfs_dev_params(
                 " bytes\n", fd_dev->fd_dev_size);
             fd_dev->fbd_flags |= FBDF_HAS_SIZE;
             break;
-        case Opt_fd_buffered_io:
-            match_int(args, &arg);
-            if (arg != 1) {
-                pr_err("bogus fd_buffered_io=%d value\n", arg);
-                ret = -EINVAL;
-                goto out;
-            }
-
-            pr_debug("FILEIO: Using buffered I/O"
-                " operations for struct fd_dev\n");
-
-            fd_dev->fbd_flags |= FDBD_USE_BUFFERED_IO;
-            break;
         default:
             break;
         }
@@ -542,10 +508,8 @@ static ssize_t fd_show_configfs_dev_params(
     ssize_t bl = 0;

     bl = sprintf(b + bl, "TCM FILEIO ID: %u", fd_dev->fd_dev_id);
-    bl += sprintf(b + bl, "        File: %s  Size: %llu  Mode: %s\n",
-        fd_dev->fd_dev_name, fd_dev->fd_dev_size,
-        (fd_dev->fbd_flags & FDBD_USE_BUFFERED_IO) ?
-        "Buffered" : "Synchronous");
+    bl += sprintf(b + bl, "        File: %s  Size: %llu  Mode: O_DSYNC\n",
+        fd_dev->fd_dev_name, fd_dev->fd_dev_size);
+
     return bl;
 }
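Editor's note on the semantics being swapped in: O_DSYNC makes every write() synchronous for the file data but skips pure metadata such as timestamp updates, while the FUA path syncs only the written byte range. A userspace sketch of both halves (the path and transfer size are illustrative, not from the patch):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* O_DSYNC: write() returns only once the data (not timestamps and
     * other pure metadata) has reached stable storage - the trade this
     * patch makes versus the old O_SYNC default. */
    int fd = open("/tmp/fileio-demo.bin", O_RDWR | O_CREAT | O_DSYNC, 0600);
    if (fd < 0) { perror("open"); return 1; }

    const char buf[512] = "payload";
    if (pwrite(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf))
        perror("pwrite");

    /* For FUA-style per-command flushes the kernel side uses
     * vfs_fsync_range(); fdatasync() is the closest userspace analogue
     * (sync_file_range() exists too but gives weaker guarantees). */
    if (fdatasync(fd) != 0)
        perror("fdatasync");

    close(fd);
    return 0;
}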


@@ -14,7 +14,6 @@
 #define FBDF_HAS_PATH       0x01
 #define FBDF_HAS_SIZE       0x02
-#define FDBD_USE_BUFFERED_IO    0x04

 struct fd_dev {
     u32     fbd_flags;


@@ -683,6 +683,8 @@ EXPORT_SYMBOL(dget_parent);
 /**
  * d_find_alias - grab a hashed alias of inode
  * @inode: inode in question
+ * @want_discon:  flag, used by d_splice_alias, to request
+ *          that only a DISCONNECTED alias be returned.
  *
  * If inode has a hashed alias, or is a directory and has any alias,
  * acquire the reference to alias and return it. Otherwise return NULL.
@@ -691,9 +693,10 @@ EXPORT_SYMBOL(dget_parent);
  * of a filesystem.
  *
  * If the inode has an IS_ROOT, DCACHE_DISCONNECTED alias, then prefer
- * any other hashed alias over that.
+ * any other hashed alias over that one unless @want_discon is set,
+ * in which case only return an IS_ROOT, DCACHE_DISCONNECTED alias.
  */
-static struct dentry *__d_find_alias(struct inode *inode)
+static struct dentry *__d_find_alias(struct inode *inode, int want_discon)
 {
     struct dentry *alias, *discon_alias;

@@ -705,7 +708,7 @@ again:
             if (IS_ROOT(alias) &&
                 (alias->d_flags & DCACHE_DISCONNECTED)) {
                 discon_alias = alias;
-            } else {
+            } else if (!want_discon) {
                 __dget_dlock(alias);
                 spin_unlock(&alias->d_lock);
                 return alias;
@@ -736,7 +739,7 @@ struct dentry *d_find_alias(struct inode *inode)

     if (!list_empty(&inode->i_dentry)) {
         spin_lock(&inode->i_lock);
-        de = __d_find_alias(inode);
+        de = __d_find_alias(inode, 0);
         spin_unlock(&inode->i_lock);
     }
     return de;
@@ -1647,8 +1650,9 @@ struct dentry *d_splice_alias(struct inode *inode, struct dentry *dentry)

     if (inode && S_ISDIR(inode->i_mode)) {
         spin_lock(&inode->i_lock);
-        new = __d_find_any_alias(inode);
+        new = __d_find_alias(inode, 1);
         if (new) {
+            BUG_ON(!(new->d_flags & DCACHE_DISCONNECTED));
             spin_unlock(&inode->i_lock);
             security_d_instantiate(new, inode);
             d_move(new, dentry);
@@ -2478,7 +2482,7 @@ struct dentry *d_materialise_unique(struct dentry *dentry, struct inode *inode)
         struct dentry *alias;

         /* Does an aliased dentry already exist? */
-        alias = __d_find_alias(inode);
+        alias = __d_find_alias(inode, 0);
         if (alias) {
             actual = alias;
             write_seqlock(&rename_lock);


@@ -90,8 +90,8 @@ unsigned ext4_num_overhead_clusters(struct super_block *sb,
      * unusual file system layouts.
      */
     if (ext4_block_in_group(sb, ext4_block_bitmap(sb, gdp), block_group)) {
-        block_cluster = EXT4_B2C(sbi, (start -
-                           ext4_block_bitmap(sb, gdp)));
+        block_cluster = EXT4_B2C(sbi,
+                     ext4_block_bitmap(sb, gdp) - start);
         if (block_cluster < num_clusters)
             block_cluster = -1;
         else if (block_cluster == num_clusters) {
@@ -102,7 +102,7 @@ unsigned ext4_num_overhead_clusters(struct super_block *sb,
     if (ext4_block_in_group(sb, ext4_inode_bitmap(sb, gdp), block_group)) {
         inode_cluster = EXT4_B2C(sbi,
-                     start - ext4_inode_bitmap(sb, gdp));
+                     ext4_inode_bitmap(sb, gdp) - start);
         if (inode_cluster < num_clusters)
             inode_cluster = -1;
         else if (inode_cluster == num_clusters) {
@@ -114,7 +114,7 @@ unsigned ext4_num_overhead_clusters(struct super_block *sb,
     itbl_blk = ext4_inode_table(sb, gdp);
     for (i = 0; i < sbi->s_itb_per_group; i++) {
         if (ext4_block_in_group(sb, itbl_blk + i, block_group)) {
-            c = EXT4_B2C(sbi, start - itbl_blk + i);
+            c = EXT4_B2C(sbi, itbl_blk + i - start);
             if ((c < num_clusters) || (c == inode_cluster) ||
                 (c == block_cluster) || (c == itbl_cluster))
                 continue;
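Editor's note: all three hunks fix the same reversed subtraction. start is the first block of the group and the bitmap/table blocks sit at or after it, so start - X underflows the unsigned arithmetic, while X - start is the intended in-group offset. A toy calculation with made-up numbers (16 blocks per cluster is a hypothetical ratio, not from the patch):

#include <stdio.h>
#include <stdint.h>

/* EXT4_B2C() maps a block delta to a cluster delta by shifting. */
#define CLUSTER_BITS 4
#define B2C(delta) ((delta) >> CLUSTER_BITS)

int main(void)
{
    uint64_t start = 1024;          /* first block of the group */
    uint64_t block_bitmap = 1027;   /* bitmap lives 3 blocks into the group */

    /* buggy order: start - bitmap wraps to a huge unsigned value */
    printf("start - bitmap -> cluster %llu (nonsense)\n",
           (unsigned long long)B2C(start - block_bitmap));

    /* fixed order: bitmap - start is the offset inside the group */
    printf("bitmap - start -> cluster %llu\n",
           (unsigned long long)B2C(block_bitmap - start));
    return 0;
}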


@@ -123,7 +123,6 @@ long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
             else
                 ext4_clear_inode_flag(inode, i);
         }
-        ei->i_flags = flags;

         ext4_set_inode_flags(inode);
         inode->i_ctime = ext4_current_time(inode);


@@ -2918,6 +2918,9 @@ int dbg_debugfs_init_fs(struct ubifs_info *c)
     struct dentry *dent;
     struct ubifs_debug_info *d = c->dbg;

+    if (!IS_ENABLED(DEBUG_FS))
+        return 0;
+
     n = snprintf(d->dfs_dir_name, UBIFS_DFS_DIR_LEN + 1, UBIFS_DFS_DIR_NAME,
              c->vi.ubi_num, c->vi.vol_id);
     if (n == UBIFS_DFS_DIR_LEN) {
@@ -3010,7 +3013,8 @@ out:
  */
 void dbg_debugfs_exit_fs(struct ubifs_info *c)
 {
-    debugfs_remove_recursive(c->dbg->dfs_dir);
+    if (IS_ENABLED(DEBUG_FS))
+        debugfs_remove_recursive(c->dbg->dfs_dir);
 }

 struct ubifs_global_debug_info ubifs_dbg;

@@ -3095,6 +3099,9 @@ int dbg_debugfs_init(void)
     const char *fname;
     struct dentry *dent;

+    if (!IS_ENABLED(DEBUG_FS))
+        return 0;
+
     fname = "ubifs";
     dent = debugfs_create_dir(fname, NULL);
     if (IS_ERR_OR_NULL(dent))
@@ -3159,7 +3166,8 @@ out:
  */
 void dbg_debugfs_exit(void)
 {
-    debugfs_remove_recursive(dfs_rootdir);
+    if (IS_ENABLED(DEBUG_FS))
+        debugfs_remove_recursive(dfs_rootdir);
 }

 /**


@@ -440,8 +440,8 @@ static inline int acpi_pm_device_sleep_wake(struct device *dev, bool enable)

 #else   /* CONFIG_ACPI */

-static int register_acpi_bus_type(struct acpi_bus_type *bus) { return 0; }
-static int unregister_acpi_bus_type(struct acpi_bus_type *bus) { return 0; }
+static inline int register_acpi_bus_type(void *bus) { return 0; }
+static inline int unregister_acpi_bus_type(void *bus) { return 0; }

 #endif  /* CONFIG_ACPI */


@@ -181,6 +181,7 @@
     {0x1002, 0x6747, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
     {0x1002, 0x6748, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
     {0x1002, 0x6749, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
+    {0x1002, 0x674A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
     {0x1002, 0x6750, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
     {0x1002, 0x6751, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
     {0x1002, 0x6758, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TURKS|RADEON_NEW_MEMMAP}, \
@@ -198,6 +199,7 @@
     {0x1002, 0x6767, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAICOS|RADEON_NEW_MEMMAP}, \
     {0x1002, 0x6768, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAICOS|RADEON_NEW_MEMMAP}, \
     {0x1002, 0x6770, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAICOS|RADEON_NEW_MEMMAP}, \
+    {0x1002, 0x6771, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAICOS|RADEON_NEW_MEMMAP}, \
     {0x1002, 0x6772, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAICOS|RADEON_NEW_MEMMAP}, \
     {0x1002, 0x6778, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAICOS|RADEON_NEW_MEMMAP}, \
     {0x1002, 0x6779, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CAICOS|RADEON_NEW_MEMMAP}, \
@@ -229,10 +231,11 @@
     {0x1002, 0x6827, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
     {0x1002, 0x6828, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_NEW_MEMMAP}, \
     {0x1002, 0x6829, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_NEW_MEMMAP}, \
+    {0x1002, 0x682B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
     {0x1002, 0x682D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
     {0x1002, 0x682F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
-    {0x1002, 0x6830, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_NEW_MEMMAP}, \
-    {0x1002, 0x6831, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_NEW_MEMMAP}, \
+    {0x1002, 0x6830, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+    {0x1002, 0x6831, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
     {0x1002, 0x6837, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_NEW_MEMMAP}, \
     {0x1002, 0x6838, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_NEW_MEMMAP}, \
     {0x1002, 0x6839, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_NEW_MEMMAP}, \
@@ -531,6 +534,7 @@
     {0x1002, 0x9645, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO2|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
     {0x1002, 0x9647, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP},\
     {0x1002, 0x9648, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP},\
+    {0x1002, 0x9649, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP},\
     {0x1002, 0x964a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
     {0x1002, 0x964b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
     {0x1002, 0x964c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_SUMO|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
@@ -550,6 +554,7 @@
     {0x1002, 0x9807, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_PALM|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
     {0x1002, 0x9808, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_PALM|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
     {0x1002, 0x9809, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_PALM|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+    {0x1002, 0x980A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_PALM|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
     {0x1002, 0x9900, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
     {0x1002, 0x9901, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
     {0x1002, 0x9903, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
@@ -561,11 +566,19 @@
     {0x1002, 0x9909, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
     {0x1002, 0x990A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
     {0x1002, 0x990F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+    {0x1002, 0x9910, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+    {0x1002, 0x9913, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+    {0x1002, 0x9917, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+    {0x1002, 0x9918, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+    {0x1002, 0x9919, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
     {0x1002, 0x9990, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
     {0x1002, 0x9991, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
     {0x1002, 0x9992, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
     {0x1002, 0x9993, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
     {0x1002, 0x9994, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+    {0x1002, 0x99A0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+    {0x1002, 0x99A2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+    {0x1002, 0x99A4, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
     {0, 0, 0}

 #define r128_PCI_IDS \


@@ -64,6 +64,7 @@ struct drm_exynos_gem_map_off {
  * A structure for mapping buffer.
  *
  * @handle: a handle to gem object created.
+ * @pad: just padding to be 64-bit aligned.
  * @size: memory size to be mapped.
  * @mapped: having user virtual address mmaped.
  *  - this variable would be filled by exynos gem module
@@ -72,7 +73,8 @@ struct drm_exynos_gem_map_off {
  */
 struct drm_exynos_gem_mmap {
     unsigned int handle;
-    unsigned int size;
+    unsigned int pad;
+    uint64_t size;
     uint64_t mapped;
 };
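Editor's note: this change is about ioctl ABI. With an odd number of 32-bit members in front, a following u64 lands at different offsets in 32-bit userspace and a 64-bit kernel; explicit padding plus widening size makes the layout identical everywhere. A quick check of the fixed layout (the struct name here is illustrative):

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct gem_mmap_v2 {
    unsigned int handle;
    unsigned int pad;    /* explicit padding keeps the u64s 8-byte aligned */
    uint64_t size;
    uint64_t mapped;
};

int main(void)
{
    /* sizeof and the offset of 'mapped' now agree on 32- and 64-bit ABIs */
    printf("sizeof=%zu offsetof(mapped)=%zu\n",
           sizeof(struct gem_mmap_v2), offsetof(struct gem_mmap_v2, mapped));
    return 0;
}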


@@ -0,0 +1,41 @@
+/*
+ * i2c-mux-pinctrl platform data
+ *
+ * Copyright (c) 2012, NVIDIA CORPORATION.  All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _LINUX_I2C_MUX_PINCTRL_H
+#define _LINUX_I2C_MUX_PINCTRL_H
+
+/**
+ * struct i2c_mux_pinctrl_platform_data - Platform data for i2c-mux-pinctrl
+ * @parent_bus_num: Parent I2C bus number
+ * @base_bus_num: Base I2C bus number for the child busses. 0 for dynamic.
+ * @bus_count: Number of child busses. Also the number of elements in
+ *  @pinctrl_states
+ * @pinctrl_states: The names of the pinctrl state to select for each child bus
+ * @pinctrl_state_idle: The pinctrl state to select when no child bus is being
+ *  accessed. If NULL, the most recently used pinctrl state will be left
+ *  selected.
+ */
+struct i2c_mux_pinctrl_platform_data {
+    int parent_bus_num;
+    int base_bus_num;
+    int bus_count;
+    const char **pinctrl_states;
+    const char *pinctrl_state_idle;
+};
+
+#endif
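Editor's note: on non-DT platforms a board file would hand this structure to the i2c-mux-pinctrl device. A hedged sketch follows (bus numbers and state names are invented, and the struct body is restated so the snippet stands alone); it mirrors the DT binding's pinctrl-names = "ddc", "pta", "idle" example, with one child bus per non-idle state:

#include <stdio.h>

struct i2c_mux_pinctrl_platform_data {
    int parent_bus_num;
    int base_bus_num;
    int bus_count;
    const char **pinctrl_states;
    const char *pinctrl_state_idle;
};

static const char *mux_states[] = { "ddc", "pta" };

static struct i2c_mux_pinctrl_platform_data mux_pdata = {
    .parent_bus_num     = 1,
    .base_bus_num       = 0,    /* 0 = assign child bus numbers dynamically */
    .bus_count          = 2,
    .pinctrl_states     = mux_states,
    .pinctrl_state_idle = "idle",
};

int main(void)
{
    printf("%d child busses off i2c-%d\n",
           mux_pdata.bus_count, mux_pdata.parent_bus_num);
    return 0;
}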


@@ -128,7 +128,7 @@ struct kparam_array
  * The ops can have NULL set or get functions.
  */
 #define module_param_cb(name, ops, arg, perm)                   \
-    __module_param_call(MODULE_PARAM_PREFIX, name, ops, arg, perm, 0)
+    __module_param_call(MODULE_PARAM_PREFIX, name, ops, arg, perm, -1)

 /**
  * <level>_param_cb - general callback for a module/cmdline parameter
@@ -192,7 +192,7 @@ struct kparam_array
     { (void *)set, (void *)get };                   \
     __module_param_call(MODULE_PARAM_PREFIX,                \
                 name, &__param_ops_##name, arg,         \
-                (perm) + sizeof(__check_old_set_param(set))*0, 0)
+                (perm) + sizeof(__check_old_set_param(set))*0, -1)

 /* We don't get oldget: it's often a new-style param_get_uint, etc. */
 static inline int
@@ -272,7 +272,7 @@ static inline void __kernel_param_unlock(void)
  */
 #define core_param(name, var, type, perm)               \
     param_check_##type(name, &(var));               \
-    __module_param_call("", name, &param_ops_##type, &var, perm, 0)
+    __module_param_call("", name, &param_ops_##type, &var, perm, -1)
 #endif /* !MODULE */

 /**
@@ -290,7 +290,7 @@ static inline void __kernel_param_unlock(void)
         = { len, string };                  \
     __module_param_call(MODULE_PARAM_PREFIX, name,          \
                 &param_ops_string,              \
-                .str = &__param_string_##name, perm, 0);    \
+                .str = &__param_string_##name, perm, -1);   \
     __MODULE_PARM_TYPE(name, "string")

 /**
@@ -432,7 +432,7 @@ extern int param_set_bint(const char *val, const struct kernel_param *kp);
     __module_param_call(MODULE_PARAM_PREFIX, name,          \
                 &param_array_ops,               \
                 .arr = &__param_arr_##name,         \
-                perm, 0);                   \
+                perm, -1);                  \
     __MODULE_PARM_TYPE(name, "array of " #type)

 extern struct kernel_param_ops param_array_ops;


@@ -127,8 +127,8 @@
 #define PR_SET_PTRACER 0x59616d61
 # define PR_SET_PTRACER_ANY ((unsigned long)-1)

-#define PR_SET_CHILD_SUBREAPER 36
-#define PR_GET_CHILD_SUBREAPER 37
+#define PR_SET_CHILD_SUBREAPER  36
+#define PR_GET_CHILD_SUBREAPER  37

 /*
  * If no_new_privs is set, then operations that grant new privileges (i.e.
@@ -142,7 +142,9 @@
  * asking selinux for a specific new context (e.g. with runcon) will result
  * in execve returning -EPERM.
  */
-#define PR_SET_NO_NEW_PRIVS 38
-#define PR_GET_NO_NEW_PRIVS 39
+#define PR_SET_NO_NEW_PRIVS     38
+#define PR_GET_NO_NEW_PRIVS     39
+
+#define PR_GET_TID_ADDRESS      40

 #endif /* _LINUX_PRCTL_H */
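Editor's note: PR_GET_TID_ADDRESS reads back the clear_child_tid pointer that set_tid_address(2) installs, added for the checkpoint/restore effort. A minimal probe, assuming a kernel built with CONFIG_CHECKPOINT_RESTORE (the call returns EINVAL otherwise):

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_GET_TID_ADDRESS
#define PR_GET_TID_ADDRESS 40   /* value introduced by this patch */
#endif

int main(void)
{
    int *tid_addr = NULL;

    /* the kernel writes the saved clear_child_tid pointer to arg2 */
    if (prctl(PR_GET_TID_ADDRESS, &tid_addr, 0, 0, 0) != 0) {
        perror("prctl(PR_GET_TID_ADDRESS)");
        return 1;
    }
    printf("clear_child_tid = %p\n", (void *)tid_addr);
    return 0;
}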


@@ -368,8 +368,11 @@ radix_tree_next_slot(void **slot, struct radix_tree_iter *iter, unsigned flags)
             iter->index++;
             if (likely(*slot))
                 return slot;
-            if (flags & RADIX_TREE_ITER_CONTIG)
+            if (flags & RADIX_TREE_ITER_CONTIG) {
+                /* forbid switching to the next chunk */
+                iter->next_index = 0;
                 break;
+            }
         }
     }
     return NULL;
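Editor's note on the subtlety: breaking out of the inner loop only ends the current chunk; without zeroing next_index, the caller's outer loop would fetch the next chunk and silently resume past the hole, violating the contiguous-iteration contract. A toy model with plain arrays instead of a radix tree:

#include <stdio.h>

struct iter { unsigned next_index; };

static int visit_chunk(const int *chunk, unsigned n, struct iter *it, int contig)
{
    for (unsigned i = 0; i < n; i++) {
        if (chunk[i]) {
            printf("slot -> %d\n", chunk[i]);
        } else if (contig) {
            it->next_index = 0; /* forbid switching to the next chunk */
            return 0;
        }
    }
    return 1;
}

int main(void)
{
    int chunk1[] = { 1, 2, 0, 4 };  /* hole at the third slot */
    int chunk2[] = { 5, 6, 7, 8 };
    struct iter it = { .next_index = 4 };

    /* the outer loop consults next_index before grabbing another chunk */
    if (visit_chunk(chunk1, 4, &it, 1) && it.next_index)
        visit_chunk(chunk2, 4, &it, 1); /* correctly skipped after the hole */
    return 0;
}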


@@ -439,6 +439,7 @@ extern int get_dumpable(struct mm_struct *mm);
                     /* leave room for more dump flags */
 #define MMF_VM_MERGEABLE    16  /* KSM may merge identical pages */
 #define MMF_VM_HUGEPAGE     17  /* set when VM_HUGEPAGE is set on vma */
+#define MMF_EXE_FILE_CHANGED    18  /* see prctl_set_mm_exe_file() */

 #define MMF_INIT_MASK       (MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK)

@@ -876,6 +877,8 @@ struct sched_group_power {
      * Number of busy cpus in this group.
      */
     atomic_t nr_busy_cpus;
+
+    unsigned long cpumask[0]; /* iteration mask */
 };

 struct sched_group {
@@ -900,6 +903,15 @@ static inline struct cpumask *sched_group_cpus(struct sched_group *sg)
     return to_cpumask(sg->cpumask);
 }

+/*
+ * cpumask masking which cpus in the group are allowed to iterate up the domain
+ * tree.
+ */
+static inline struct cpumask *sched_group_mask(struct sched_group *sg)
+{
+    return to_cpumask(sg->sgp->cpumask);
+}
+
 /**
  * group_first_cpu - Returns the first cpu in the cpumask of a sched_group.
  * @group: The group whose first cpu is to be returned.


@@ -508,7 +508,7 @@ asmlinkage void __init start_kernel(void)
     parse_early_param();
     parse_args("Booting kernel", static_command_line, __start___param,
            __stop___param - __start___param,
-           0, 0, &unknown_bootoption);
+           -1, -1, &unknown_bootoption);

     jump_label_init();

@@ -755,13 +755,8 @@ static void __init do_initcalls(void)
 {
     int level;

-    for (level = 0; level < ARRAY_SIZE(initcall_levels) - 1; level++) {
-        pr_info("initlevel:%d=%s, %d registered initcalls\n",
-            level, initcall_level_names[level],
-            (int) (initcall_levels[level+1]
-                - initcall_levels[level]));
+    for (level = 0; level < ARRAY_SIZE(initcall_levels) - 1; level++)
         do_initcall_level(level);
-    }
 }

 /*


@@ -393,6 +393,16 @@ static int shm_fsync(struct file *file, loff_t start, loff_t end, int datasync)
     return sfd->file->f_op->fsync(sfd->file, start, end, datasync);
 }

+static long shm_fallocate(struct file *file, int mode, loff_t offset,
+              loff_t len)
+{
+    struct shm_file_data *sfd = shm_file_data(file);
+
+    if (!sfd->file->f_op->fallocate)
+        return -EOPNOTSUPP;
+    return sfd->file->f_op->fallocate(file, mode, offset, len);
+}
+
 static unsigned long shm_get_unmapped_area(struct file *file,
     unsigned long addr, unsigned long len, unsigned long pgoff,
     unsigned long flags)
@@ -410,6 +420,7 @@ static const struct file_operations shm_file_operations = {
     .get_unmapped_area  = shm_get_unmapped_area,
 #endif
     .llseek     = noop_llseek,
+    .fallocate  = shm_fallocate,
 };

 static const struct file_operations shm_file_operations_huge = {
@@ -418,6 +429,7 @@ static const struct file_operations shm_file_operations_huge = {
     .release    = shm_release,
     .get_unmapped_area  = shm_get_unmapped_area,
     .llseek     = noop_llseek,
+    .fallocate  = shm_fallocate,
 };

 int is_file_shm_hugepages(struct file *file)
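Editor's note: forwarding .fallocate through the shm wrapper lets hole-punching reach the tmpfs file backing a SysV segment; the user-visible effect is that madvise(MADV_REMOVE) now works on shm mappings. A hedged demo (the 1 MiB segment size is arbitrary, and older kernels will report the madvise failure rather than punch the hole):

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/mman.h>
#include <sys/shm.h>

int main(void)
{
    int id = shmget(IPC_PRIVATE, 1 << 20, IPC_CREAT | 0600);
    if (id < 0) { perror("shmget"); return 1; }

    char *p = shmat(id, NULL, 0);
    if (p == (void *)-1) { perror("shmat"); return 1; }

    memset(p, 0xff, 1 << 20);               /* fault in the backing pages */
    if (madvise(p, 1 << 20, MADV_REMOVE))   /* punch the range back out */
        perror("madvise(MADV_REMOVE)");
    else
        puts("hole punched via the forwarded fallocate");

    shmdt(p);
    shmctl(id, IPC_RMID, NULL);
    return 0;
}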


@@ -896,10 +896,13 @@ static void cgroup_diput(struct dentry *dentry, struct inode *inode)
         mutex_unlock(&cgroup_mutex);

         /*
-         * Drop the active superblock reference that we took when we
-         * created the cgroup
+         * We want to drop the active superblock reference from the
+         * cgroup creation after all the dentry refs are gone -
+         * kill_sb gets mighty unhappy otherwise.  Mark
+         * dentry->d_fsdata with cgroup_diput() to tell
+         * cgroup_d_release() to call deactivate_super().
          */
-        deactivate_super(cgrp->root->sb);
+        dentry->d_fsdata = cgroup_diput;

         /*
          * if we're getting rid of the cgroup, refcount should ensure
@@ -925,6 +928,13 @@ static int cgroup_delete(const struct dentry *d)
     return 1;
 }

+static void cgroup_d_release(struct dentry *dentry)
+{
+    /* did cgroup_diput() tell me to deactivate super? */
+    if (dentry->d_fsdata == cgroup_diput)
+        deactivate_super(dentry->d_sb);
+}
+
 static void remove_dir(struct dentry *d)
 {
     struct dentry *parent = dget(d->d_parent);
@@ -1532,6 +1542,7 @@ static int cgroup_get_rootdir(struct super_block *sb)
     static const struct dentry_operations cgroup_dops = {
         .d_iput = cgroup_diput,
         .d_delete = cgroup_delete,
+        .d_release = cgroup_d_release,
     };

     struct inode *inode =
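Editor's note: the pattern deserves spelling out. d_iput may run while dentry references remain, so it only tags the dentry; d_release, the very last callback, performs the teardown. A toy version of the tag-and-defer idiom (names invented; storing the function pointer itself as the tag is exactly what the kernel does here, though the function-pointer-to-void* cast is a POSIX/GCC extension rather than strict ISO C):

#include <stdio.h>

struct dentry_like {
    void *fsdata;
};

/* first callback: not allowed to tear down yet, so just leave a tag */
static void diput(struct dentry_like *d)
{
    d->fsdata = (void *)diput;
}

/* last callback: check the tag and do the deferred work */
static void release(struct dentry_like *d)
{
    if (d->fsdata == (void *)diput)
        puts("deactivate_super() equivalent runs here, after all refs are gone");
}

int main(void)
{
    struct dentry_like d = { 0 };
    diput(&d);
    release(&d);
    return 0;
}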


@@ -5556,15 +5556,20 @@ static cpumask_var_t sched_domains_tmpmask; /* sched_domains_mutex */

 #ifdef CONFIG_SCHED_DEBUG

-static __read_mostly int sched_domain_debug_enabled;
+static __read_mostly int sched_debug_enabled;

-static int __init sched_domain_debug_setup(char *str)
+static int __init sched_debug_setup(char *str)
 {
-    sched_domain_debug_enabled = 1;
+    sched_debug_enabled = 1;

     return 0;
 }
-early_param("sched_debug", sched_domain_debug_setup);
+early_param("sched_debug", sched_debug_setup);
+
+static inline bool sched_debug(void)
+{
+    return sched_debug_enabled;
+}

 static int sched_domain_debug_one(struct sched_domain *sd, int cpu, int level,
                   struct cpumask *groupmask)
@@ -5604,7 +5609,12 @@ static int sched_domain_debug_one(struct sched_domain *sd, int cpu, int level,
             break;
         }

-        if (!group->sgp->power) {
+        /*
+         * Even though we initialize ->power to something semi-sane,
+         * we leave power_orig unset. This allows us to detect if
+         * domain iteration is still funny without causing /0 traps.
+         */
+        if (!group->sgp->power_orig) {
             printk(KERN_CONT "\n");
             printk(KERN_ERR "ERROR: domain->cpu_power not "
                     "set\n");
@@ -5652,7 +5662,7 @@ static void sched_domain_debug(struct sched_domain *sd, int cpu)
 {
     int level = 0;

-    if (!sched_domain_debug_enabled)
+    if (!sched_debug_enabled)
         return;

     if (!sd) {
@@ -5673,6 +5683,10 @@ static void sched_domain_debug(struct sched_domain *sd, int cpu)
 }
 #else /* !CONFIG_SCHED_DEBUG */
 # define sched_domain_debug(sd, cpu) do { } while (0)
+static inline bool sched_debug(void)
+{
+    return false;
+}
 #endif /* CONFIG_SCHED_DEBUG */

 static int sd_degenerate(struct sched_domain *sd)
@@ -5994,6 +6008,44 @@ struct sched_domain_topology_level {
     struct sd_data      data;
 };

+/*
+ * Build an iteration mask that can exclude certain CPUs from the upwards
+ * domain traversal.
+ *
+ * Asymmetric node setups can result in situations where the domain tree is of
+ * unequal depth, make sure to skip domains that already cover the entire
+ * range.
+ *
+ * In that case build_sched_domains() will have terminated the iteration early
+ * and our sibling sd spans will be empty. Domains should always include the
+ * cpu they're built on, so check that.
+ *
+ */
+static void build_group_mask(struct sched_domain *sd, struct sched_group *sg)
+{
+    const struct cpumask *span = sched_domain_span(sd);
+    struct sd_data *sdd = sd->private;
+    struct sched_domain *sibling;
+    int i;
+
+    for_each_cpu(i, span) {
+        sibling = *per_cpu_ptr(sdd->sd, i);
+        if (!cpumask_test_cpu(i, sched_domain_span(sibling)))
+            continue;
+
+        cpumask_set_cpu(i, sched_group_mask(sg));
+    }
+}
+
+/*
+ * Return the canonical balance cpu for this group, this is the first cpu
+ * of this group that's also in the iteration mask.
+ */
+int group_balance_cpu(struct sched_group *sg)
+{
+    return cpumask_first_and(sched_group_cpus(sg), sched_group_mask(sg));
+}
+
 static int
 build_overlap_sched_groups(struct sched_domain *sd, int cpu)
 {
@@ -6012,6 +6064,12 @@ build_overlap_sched_groups(struct sched_domain *sd, int cpu)
         if (cpumask_test_cpu(i, covered))
             continue;

+        child = *per_cpu_ptr(sdd->sd, i);
+
+        /* See the comment near build_group_mask(). */
+        if (!cpumask_test_cpu(i, sched_domain_span(child)))
+            continue;
+
         sg = kzalloc_node(sizeof(struct sched_group) + cpumask_size(),
                   GFP_KERNEL, cpu_to_node(cpu));

@@ -6019,8 +6077,6 @@ build_overlap_sched_groups(struct sched_domain *sd, int cpu)
             goto fail;

         sg_span = sched_group_cpus(sg);
-
-        child = *per_cpu_ptr(sdd->sd, i);
         if (child->child) {
             child = child->child;
             cpumask_copy(sg_span, sched_domain_span(child));
@@ -6030,13 +6086,24 @@ build_overlap_sched_groups(struct sched_domain *sd, int cpu)
         cpumask_or(covered, covered, sg_span);

         sg->sgp = *per_cpu_ptr(sdd->sgp, i);
-        atomic_inc(&sg->sgp->ref);
+        if (atomic_inc_return(&sg->sgp->ref) == 1)
+            build_group_mask(sd, sg);
+
+        /*
+         * Initialize sgp->power such that even if we mess up the
+         * domains and no possible iteration will get us here, we won't
+         * die on a /0 trap.
+         */
+        sg->sgp->power = SCHED_POWER_SCALE * cpumask_weight(sg_span);

+        /*
+         * Make sure the first group of this domain contains the
+         * canonical balance cpu. Otherwise the sched_domain iteration
+         * breaks. See update_sg_lb_stats().
+         */
         if ((!groups && cpumask_test_cpu(cpu, sg_span)) ||
-            cpumask_first(sg_span) == cpu) {
-            WARN_ON_ONCE(!cpumask_test_cpu(cpu, sg_span));
+            group_balance_cpu(sg) == cpu)
             groups = sg;
-        }

         if (!first)
             first = sg;
@@ -6109,6 +6176,7 @@ build_sched_groups(struct sched_domain *sd, int cpu)

         cpumask_clear(sched_group_cpus(sg));
         sg->sgp->power = 0;
+        cpumask_setall(sched_group_mask(sg));

         for_each_cpu(j, span) {
             if (get_group(j, sdd, NULL) != group)
@@ -6150,7 +6218,7 @@ static void init_sched_groups_power(int cpu, struct sched_domain *sd)
         sg = sg->next;
     } while (sg != sd->groups);

-    if (cpu != group_first_cpu(sg))
+    if (cpu != group_balance_cpu(sg))
         return;

     update_group_power(sd, cpu);
@@ -6200,11 +6268,8 @@ int sched_domain_level_max;

 static int __init setup_relax_domain_level(char *str)
 {
-    unsigned long val;
-
-    val = simple_strtoul(str, NULL, 0);
-    if (val < sched_domain_level_max)
-        default_relax_domain_level = val;
+    if (kstrtoint(str, 0, &default_relax_domain_level))
+        pr_warn("Unable to set relax_domain_level\n");

     return 1;
 }
@@ -6314,14 +6379,13 @@ static struct sched_domain_topology_level *sched_domain_topology = default_topol
 #ifdef CONFIG_NUMA

 static int sched_domains_numa_levels;
-static int sched_domains_numa_scale;
 static int *sched_domains_numa_distance;
 static struct cpumask ***sched_domains_numa_masks;
 static int sched_domains_curr_level;

 static inline int sd_local_flags(int level)
 {
-    if (sched_domains_numa_distance[level] > REMOTE_DISTANCE)
+    if (sched_domains_numa_distance[level] > RECLAIM_DISTANCE)
         return 0;

     return SD_BALANCE_EXEC | SD_BALANCE_FORK | SD_WAKE_AFFINE;
@@ -6379,6 +6443,42 @@ static const struct cpumask *sd_numa_mask(int cpu)
     return sched_domains_numa_masks[sched_domains_curr_level][cpu_to_node(cpu)];
 }

+static void sched_numa_warn(const char *str)
+{
+    static int done = false;
+    int i,j;
+
+    if (done)
+        return;
+
+    done = true;
+
+    printk(KERN_WARNING "ERROR: %s\n\n", str);
+
+    for (i = 0; i < nr_node_ids; i++) {
+        printk(KERN_WARNING "  ");
+        for (j = 0; j < nr_node_ids; j++)
+            printk(KERN_CONT "%02d ", node_distance(i,j));
+        printk(KERN_CONT "\n");
+    }
+    printk(KERN_WARNING "\n");
+}
+
+static bool find_numa_distance(int distance)
+{
+    int i;
+
+    if (distance == node_distance(0, 0))
+        return true;
+
+    for (i = 0; i < sched_domains_numa_levels; i++) {
+        if (sched_domains_numa_distance[i] == distance)
+            return true;
+    }
+
+    return false;
+}
+
 static void sched_init_numa(void)
 {
     int next_distance, curr_distance = node_distance(0, 0);
@@ -6386,7 +6486,6 @@ static void sched_init_numa(void)
     int level = 0;
     int i, j, k;

-    sched_domains_numa_scale = curr_distance;
     sched_domains_numa_distance = kzalloc(sizeof(int) * nr_node_ids, GFP_KERNEL);
     if (!sched_domains_numa_distance)
         return;
@@ -6397,23 +6496,41 @@ static void sched_init_numa(void)
      *
      * Assumes node_distance(0,j) includes all distances in
      * node_distance(i,j) in order to avoid cubic time.
-     *
-     * XXX: could be optimized to O(n log n) by using sort()
      */
     next_distance = curr_distance;
     for (i = 0; i < nr_node_ids; i++) {
         for (j = 0; j < nr_node_ids; j++) {
-            int distance = node_distance(0, j);
-
-            if (distance > curr_distance &&
-                (distance < next_distance ||
-                 next_distance == curr_distance))
-                next_distance = distance;
+            for (k = 0; k < nr_node_ids; k++) {
+                int distance = node_distance(i, k);
+
+                if (distance > curr_distance &&
+                    (distance < next_distance ||
+                     next_distance == curr_distance))
+                    next_distance = distance;
+
+                /*
+                 * While not a strong assumption it would be nice to know
+                 * about cases where if node A is connected to B, B is not
+                 * equally connected to A.
+                 */
+                if (sched_debug() && node_distance(k, i) != distance)
+                    sched_numa_warn("Node-distance not symmetric");
+
+                if (sched_debug() && i && !find_numa_distance(distance))
+                    sched_numa_warn("Node-0 not representative");
+            }
+            if (next_distance != curr_distance) {
+                sched_domains_numa_distance[level++] = next_distance;
+                sched_domains_numa_levels = level;
+                curr_distance = next_distance;
+            } else break;
         }
-        if (next_distance != curr_distance) {
-            sched_domains_numa_distance[level++] = next_distance;
-            sched_domains_numa_levels = level;
-            curr_distance = next_distance;
-        } else break;
+
+        /*
+         * In case of sched_debug() we verify the above assumption.
+         */
+        if (!sched_debug())
+            break;
     }
     /*
      * 'level' contains the number of unique distances, excluding the
@@ -6525,7 +6642,7 @@ static int __sdt_alloc(const struct cpumask *cpu_map)

             *per_cpu_ptr(sdd->sg, j) = sg;

-            sgp = kzalloc_node(sizeof(struct sched_group_power),
+            sgp = kzalloc_node(sizeof(struct sched_group_power) + cpumask_size(),
                     GFP_KERNEL, cpu_to_node(j));
             if (!sgp)
                 return -ENOMEM;
@@ -6578,7 +6695,6 @@ struct sched_domain *build_sched_domain(struct sched_domain_topology_level *tl,
     if (!sd)
         return child;

-    set_domain_attribute(sd, attr);
     cpumask_and(sched_domain_span(sd), cpu_map, tl->mask(cpu));
     if (child) {
         sd->level = child->level + 1;
@@ -6586,6 +6702,7 @@ struct sched_domain *build_sched_domain(struct sched_domain_topology_level *tl,
         child->parent = sd;
     }
     sd->child = child;
+    set_domain_attribute(sd, attr);

     return sd;
 }
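Editor's note: group_balance_cpu() differs from group_first_cpu() exactly when the new iteration mask excludes the group's low CPUs. A userspace analogue with 64-bit masks (the CPU numbering is hypothetical):

#include <stdio.h>
#include <stdint.h>

/* Analogue of cpumask_first_and(): the canonical balance CPU is the
 * lowest set bit in (group cpus & iteration mask). */
static int first_and(uint64_t a, uint64_t b)
{
    uint64_t both = a & b;
    return both ? __builtin_ctzll(both) : -1;
}

int main(void)
{
    uint64_t group_cpus = 0xF0; /* group spans CPUs 4-7 */
    uint64_t iter_mask  = 0xC0; /* only CPUs 6-7 may iterate up the tree */

    printf("group_first_cpu   -> %d\n", __builtin_ctzll(group_cpus));   /* 4 */
    printf("group_balance_cpu -> %d\n", first_and(group_cpus, iter_mask)); /* 6 */
    return 0;
}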


@@ -3602,7 +3602,7 @@ void update_group_power(struct sched_domain *sd, int cpu)
         } while (group != child->groups);
     }

-    sdg->sgp->power = power;
+    sdg->sgp->power_orig = sdg->sgp->power = power;
 }

 /*
@@ -3632,7 +3632,7 @@ fix_small_capacity(struct sched_domain *sd, struct sched_group *group)

 /**
  * update_sg_lb_stats - Update sched_group's statistics for load balancing.
- * @sd: The sched_domain whose statistics are to be updated.
+ * @env: The load balancing environment.
  * @group: sched_group whose statistics are to be updated.
  * @load_idx: Load index of sched_domain of this_cpu for load calc.
  * @local_group: Does group contain this_cpu.
@@ -3652,7 +3652,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
     int i;

     if (local_group)
-        balance_cpu = group_first_cpu(group);
+        balance_cpu = group_balance_cpu(group);

     /* Tally up the load of all CPUs in the group */
     max_cpu_load = 0;
@@ -3667,7 +3667,8 @@ static inline void update_sg_lb_stats(struct lb_env *env,

         /* Bias balancing toward cpus of our domain */
         if (local_group) {
-            if (idle_cpu(i) && !first_idle_cpu) {
+            if (idle_cpu(i) && !first_idle_cpu &&
+                cpumask_test_cpu(i, sched_group_mask(group))) {
                 first_idle_cpu = 1;
                 balance_cpu = i;
             }
@@ -3741,11 +3742,10 @@ static inline void update_sg_lb_stats(struct lb_env *env,

 /**
  * update_sd_pick_busiest - return 1 on busiest group
- * @sd: sched_domain whose statistics are to be checked
+ * @env: The load balancing environment.
  * @sds: sched_domain statistics
  * @sg: sched_group candidate to be checked for being the busiest
  * @sgs: sched_group statistics
- * @this_cpu: the current cpu
  *
  * Determine if @sg is a busier group than the previously selected
  * busiest group.
@@ -3783,9 +3783,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,

 /**
  * update_sd_lb_stats - Update sched_domain's statistics for load balancing.
- * @sd: sched_domain whose statistics are to be updated.
- * @this_cpu: Cpu for which load balance is currently performed.
- * @idle: Idle status of this_cpu
+ * @env: The load balancing environment.
  * @cpus: Set of cpus considered for load balancing.
  * @balance: Should we balance.
  * @sds: variable to hold the statistics for this sched_domain.
@@ -3874,10 +3872,8 @@ static inline void update_sd_lb_stats(struct lb_env *env,
  * Returns 1 when packing is required and a task should be moved to
  * this CPU.  The amount of the imbalance is returned in *imbalance.
  *
- * @sd: The sched_domain whose packing is to be checked.
+ * @env: The load balancing environment.
  * @sds: Statistics of the sched_domain which is to be packed
- * @this_cpu: The cpu at whose sched_domain we're performing load-balance.
- * @imbalance: returns amount of imbalanced due to packing.
  */
 static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
 {
@@ -3903,9 +3899,8 @@ static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
  * fix_small_imbalance - Calculate the minor imbalance that exists
  *          amongst the groups of a sched_domain, during
  *          load balancing.
+ * @env: The load balancing environment.
  * @sds: Statistics of the sched_domain whose imbalance is to be calculated.
- * @this_cpu: The cpu at whose sched_domain we're performing load-balance.
- * @imbalance: Variable to store the imbalance.
  */
 static inline
 void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
@@ -4048,11 +4043,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
  * Also calculates the amount of weighted load which should be moved
  * to restore balance.
  *
- * @sd: The sched_domain whose busiest group is to be returned.
- * @this_cpu: The cpu for which load balancing is currently being performed.
- * @imbalance: Variable which stores amount of weighted load which should
- *      be moved to restore balance/put a group to idle.
- * @idle: The idle status of this_cpu.
+ * @env: The load balancing environment.
  * @cpus: The set of CPUs under consideration for load-balancing.
  * @balance: Pointer to a variable indicating if this_cpu
  *  is the appropriate cpu to perform load balancing at this_level.


@@ -1562,7 +1562,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
                  task_running(rq, task) ||
                  !task->on_rq)) {
-            raw_spin_unlock(&lowest_rq->lock);
+            double_unlock_balance(rq, lowest_rq);
             lowest_rq = NULL;
             break;
         }

Некоторые файлы не были показаны из-за слишком большого количества измененных файлов Показать больше