Merge branch 'devel' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace into tracing/urgent

Ingo Molnar 2008-11-11 09:16:20 +01:00
Parents: 072ba49838 4143c5cb36
Commit: 45b86a96f1
228 changed files: 4407 additions and 1462 deletions

View file

@ -21,11 +21,14 @@ This driver is known to work with the following cards:
* SA E200
* SA E200i
* SA E500
* SA P700m
* SA P212
* SA P410
* SA P410i
* SA P411
* SA P812
* SA P712m
* SA P711m
Detecting drive failures:
-------------------------

View file

@ -213,4 +213,29 @@ TkRat (GUI)
Works. Use "Insert file..." or external editor.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Gmail (Web GUI)
If you just have to use Gmail to send patches, it CAN be made to work. It
requires a bit of external help, though.
The first problem is that Gmail converts tabs to spaces. This will
totally break your patches. To prevent this, you have to use a different
editor. There is a firefox extension called "ViewSourceWith"
(https://addons.mozilla.org/en-US/firefox/addon/394) which allows you to
edit any text box in the editor of your choice. Configure it to launch
your favorite editor. When you want to send a patch, use this technique.
Once you have crafted your message + patch, save and exit the editor,
which should reload the Gmail edit box. GMAIL WILL PRESERVE THE TABS.
Hoorah. Apparently you can cut-n-paste literal tabs, but Gmail will
convert those to spaces upon sending!
The second problem is that Gmail converts tabs to spaces on replies. If
you reply to a patch, don't expect to be able to apply it as a patch.
The last problem is that Gmail will base64-encode any message that has a
non-ASCII character. That includes things like European names. Be aware.
Gmail is not convenient for lkml patches, but CAN be made to work.
###

View file

@ -8,6 +8,12 @@ if you want to format from within Linux.
VFAT MOUNT OPTIONS
----------------------------------------------------------------------
uid=### -- Set the owner of all files on this filesystem.
The default is the uid of the current process.
gid=### -- Set the group of all files on this filesystem.
The default is the gid of the current process.
umask=### -- The permission mask (for files and directories, see umask(1)).
The default is the umask of the current process.
@ -36,7 +42,7 @@ codepage=### -- Sets the codepage number for converting to shortname
characters on FAT filesystem.
By default, FAT_DEFAULT_CODEPAGE setting is used.
iocharset=name -- Character set to use for converting between the
iocharset=<name> -- Character set to use for converting between the
encoding used for user-visible filenames and 16 bit
Unicode characters. Long filenames are stored on disk
in Unicode format, but Unix for the most part doesn't
@ -86,6 +92,8 @@ check=s|r|n -- Case sensitivity checking setting.
r: relaxed, case insensitive
n: normal, default setting, currently case insensitive
nocase -- This was deprecated for vfat. Use shortname=win95 instead.
shortname=lower|win95|winnt|mixed
-- Shortname display/create setting.
lower: convert to lowercase for display,
@ -99,11 +107,31 @@ shortname=lower|win95|winnt|mixed
tz=UTC -- Interpret timestamps as UTC rather than local time.
This option disables the conversion of timestamps
between local time (as used by Windows on FAT) and UTC
(which Linux uses internally). This is particuluarly
(which Linux uses internally). This is particularly
useful when mounting devices (like digital cameras)
that are set to UTC in order to avoid the pitfalls of
local time.
showexec -- If set, the execute permission bits of the file will be
allowed only if the extension part of the name is .EXE,
.COM, or .BAT. Not set by default.
debug -- Can be set, but unused by the current implementation.
sys_immutable -- If set, ATTR_SYS attribute on FAT is handled as
IMMUTABLE flag on Linux. Not set by default.
flush -- If set, the filesystem will try to flush to disk
earlier than normal. Not set by default.
rodir -- FAT has the ATTR_RO (read-only) attribute. But on Windows,
the ATTR_RO of a directory is actually just ignored,
and is used only by applications as a flag. E.g. it's set
for the customized folder.
If you want to use ATTR_RO as read-only flag even for
the directory, set this option.
<bool>: 0,1,yes,no,true,false
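
These options are passed to the kernel as the data string of mount(2). A
minimal sketch, not part of this commit, using a hypothetical device and
mount point:

    /* Mount a vfat volume with some of the options documented above. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* uid/gid own every file; rodir honors ATTR_RO on directories;
         * tz=UTC skips the local-time conversion (handy for cameras). */
        const char *opts = "uid=1000,gid=1000,shortname=mixed,rodir,tz=UTC";

        if (mount("/dev/sdb1", "/mnt/camera", "vfat", 0, opts) < 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }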
TODO

View file

@ -995,13 +995,15 @@ and is between 256 and 4096 characters. It is defined in the file
Format:
<cpu number>,...,<cpu number>
or
<cpu number>-<cpu number> (must be a positive range in ascending order)
<cpu number>-<cpu number>
(must be a positive range in ascending order)
or a mixture
<cpu number>,...,<cpu number>-<cpu number>
This option can be used to specify one or more CPUs
to isolate from the general SMP balancing and scheduling
algorithms. The only way to move a process onto or off
an "isolated" CPU is via the CPU affinity syscalls.
algorithms. You can move a process onto or off an
"isolated" CPU via the CPU affinity syscalls or cpuset.
<cpu number> begins at 0 and the maximum value is
"number of CPUs in system - 1".
@ -1470,8 +1472,6 @@ and is between 256 and 4096 characters. It is defined in the file
Valid arguments: on, off
Default: on
noirqbalance [X86-32,SMP,KNL] Disable kernel irq balancing
noirqdebug [X86-32] Disables the code which attempts to detect and
disable unhandled interrupt sources.

View file

@ -721,7 +721,7 @@ W: http://sourceforge.net/projects/acpi4asus
W: http://xf.iksaif.net/acpi4asus
S: Maintained
ASYNCHRONOUS TRANSFERS/TRANSFORMS API
ASYNCHRONOUS TRANSFERS/TRANSFORMS (IOAT) API
P: Dan Williams
M: dan.j.williams@intel.com
P: Maciej Sosnowski

View file

@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 28
EXTRAVERSION = -rc3
EXTRAVERSION = -rc4
NAME = Killer Bat of Doom
# *DOCUMENTATION*

View file

@ -21,7 +21,7 @@ config OPROFILE_IBS
Instruction-Based Sampling (IBS) is a new profiling
technique that provides rich, precise program performance
information. IBS is introduced by AMD Family10h processors
(AMD Opteron Quad-Core processor “Barcelona”) to overcome
(AMD Opteron Quad-Core processor "Barcelona") to overcome
the limitations of conventional performance counter
sampling.

View file

@ -44,10 +44,10 @@
* The module space lives between the addresses given by TASK_SIZE
* and PAGE_OFFSET - it must be within 32MB of the kernel text.
*/
#define MODULE_END (PAGE_OFFSET)
#define MODULE_START (MODULE_END - 16*1048576)
#define MODULES_END (PAGE_OFFSET)
#define MODULES_VADDR (MODULES_END - 16*1048576)
#if TASK_SIZE > MODULE_START
#if TASK_SIZE > MODULES_VADDR
#error Top of user space clashes with start of module space
#endif
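
Worked example, not part of this commit: with the common ARM PAGE_OFFSET of
0xC0000000 (3G/1G split), the renamed macros resolve to

    #define MODULES_END   (PAGE_OFFSET)                /* 0xC0000000 */
    #define MODULES_VADDR (MODULES_END - 16*1048576)   /* 0xBF000000 */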
@ -56,7 +56,7 @@
* Since we use sections to map it, this macro replaces the physical address
* with its virtual address while keeping offset from the base section.
*/
#define XIP_VIRT_ADDR(physaddr) (MODULE_START + ((physaddr) & 0x000fffff))
#define XIP_VIRT_ADDR(physaddr) (MODULES_VADDR + ((physaddr) & 0x000fffff))
/*
* Allow 16MB-aligned ioremap pages
@ -94,8 +94,8 @@
/*
* The module can be at any place in ram in nommu mode.
*/
#define MODULE_END (END_MEM)
#define MODULE_START (PHYS_OFFSET)
#define MODULES_END (END_MEM)
#define MODULES_VADDR (PHYS_OFFSET)
#endif /* !CONFIG_MMU */

View file

@ -42,6 +42,10 @@
#define CR_U (1 << 22) /* Unaligned access operation */
#define CR_XP (1 << 23) /* Extended page tables */
#define CR_VE (1 << 24) /* Vectored interrupts */
#define CR_EE (1 << 25) /* Exception (Big) Endian */
#define CR_TRE (1 << 28) /* TEX remap enable */
#define CR_AFE (1 << 29) /* Access flag enable */
#define CR_TE (1 << 30) /* Thumb exception enable */
/*
* This is used to ensure the compiler did actually allocate the register we

View file

@ -21,12 +21,16 @@ int elf_check_arch(const struct elf32_hdr *x)
eflags = x->e_flags;
if ((eflags & EF_ARM_EABI_MASK) == EF_ARM_EABI_UNKNOWN) {
unsigned int flt_fmt;
/* APCS26 is only allowed if the CPU supports it */
if ((eflags & EF_ARM_APCS_26) && !(elf_hwcap & HWCAP_26BIT))
return 0;
flt_fmt = eflags & (EF_ARM_VFP_FLOAT | EF_ARM_SOFT_FLOAT);
/* VFP requires the supporting code */
if ((eflags & EF_ARM_VFP_FLOAT) && !(elf_hwcap & HWCAP_VFP))
if (flt_fmt == EF_ARM_VFP_FLOAT && !(elf_hwcap & HWCAP_VFP))
return 0;
}
return 1;

View file

@ -26,12 +26,12 @@
/*
* The XIP kernel text is mapped in the module area for modules and
* some other stuff to work without any indirect relocations.
* MODULE_START is redefined here and not in asm/memory.h to avoid
* MODULES_VADDR is redefined here and not in asm/memory.h to avoid
* recompiling the whole kernel when CONFIG_XIP_KERNEL is turned on/off.
*/
extern void _etext;
#undef MODULE_START
#define MODULE_START (((unsigned long)&_etext + ~PGDIR_MASK) & PGDIR_MASK)
#undef MODULES_VADDR
#define MODULES_VADDR (((unsigned long)&_etext + ~PGDIR_MASK) & PGDIR_MASK)
#endif
#ifdef CONFIG_MMU
@ -43,7 +43,7 @@ void *module_alloc(unsigned long size)
if (!size)
return NULL;
area = __get_vm_area(size, VM_ALLOC, MODULE_START, MODULE_END);
area = __get_vm_area(size, VM_ALLOC, MODULES_VADDR, MODULES_END);
if (!area)
return NULL;

View file

@ -429,18 +429,16 @@ void __init gpmc_init(void)
gpmc_l3_clk = clk_get(NULL, ck);
if (IS_ERR(gpmc_l3_clk)) {
printk(KERN_ERR "Could not get GPMC clock %s\n", ck);
return -ENODEV;
BUG();
}
gpmc_base = ioremap(l, SZ_4K);
if (!gpmc_base) {
clk_put(gpmc_l3_clk);
printk(KERN_ERR "Could not get GPMC register memory\n");
return -ENOMEM;
BUG();
}
BUG_ON(IS_ERR(gpmc_l3_clk));
l = gpmc_read_reg(GPMC_REVISION);
printk(KERN_INFO "GPMC revision %d.%d\n", (l >> 4) & 0x0f, l & 0x0f);
/* Set smart idle mode and automatic L3 clock gating */

View file

@ -98,7 +98,7 @@ static void xsc3_l2_inv_range(unsigned long start, unsigned long end)
/*
* Clean and invalidate partial last cache line.
*/
if (end & (CACHE_LINE_SIZE - 1)) {
if (start < end && (end & (CACHE_LINE_SIZE - 1))) {
xsc3_l2_clean_pa(end & ~(CACHE_LINE_SIZE - 1));
xsc3_l2_inv_pa(end & ~(CACHE_LINE_SIZE - 1));
end &= ~(CACHE_LINE_SIZE - 1);
@ -107,7 +107,7 @@ static void xsc3_l2_inv_range(unsigned long start, unsigned long end)
/*
* Invalidate all full cache lines between 'start' and 'end'.
*/
while (start != end) {
while (start < end) {
xsc3_l2_inv_pa(start);
start += CACHE_LINE_SIZE;
}
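
The loop-condition fix, as a standalone sketch (not part of this commit,
addresses hypothetical): for a range smaller than one cache line, rounding
end down can leave it below start, so only a "start < end" test terminates:

    #include <stdio.h>
    #define CACHE_LINE_SIZE 32

    int main(void)
    {
        /* hypothetical range contained in a single cache line */
        unsigned long start = 0x1010, end = 0x1018;

        /* the partial-last-line step rounds end down a line... */
        if (end & (CACHE_LINE_SIZE - 1))
            end &= ~(CACHE_LINE_SIZE - 1);

        /* ...leaving end (0x1000) below start (0x1010): a
         * "while (start != end)" loop would run away from here */
        printf("start=%#lx end=%#lx\n", start, end);
        return 0;
    }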

View file

@ -180,20 +180,20 @@ void adjust_cr(unsigned long mask, unsigned long set)
#endif
#define PROT_PTE_DEVICE L_PTE_PRESENT|L_PTE_YOUNG|L_PTE_DIRTY|L_PTE_WRITE
#define PROT_SECT_DEVICE PMD_TYPE_SECT|PMD_SECT_XN|PMD_SECT_AP_WRITE
#define PROT_SECT_DEVICE PMD_TYPE_SECT|PMD_SECT_AP_WRITE
static struct mem_type mem_types[] = {
[MT_DEVICE] = { /* Strongly ordered / ARMv6 shared device */
.prot_pte = PROT_PTE_DEVICE | L_PTE_MT_DEV_SHARED |
L_PTE_SHARED,
.prot_l1 = PMD_TYPE_TABLE,
.prot_sect = PROT_SECT_DEVICE | PMD_SECT_UNCACHED,
.prot_sect = PROT_SECT_DEVICE | PMD_SECT_S,
.domain = DOMAIN_IO,
},
[MT_DEVICE_NONSHARED] = { /* ARMv6 non-shared device */
.prot_pte = PROT_PTE_DEVICE | L_PTE_MT_DEV_NONSHARED,
.prot_l1 = PMD_TYPE_TABLE,
.prot_sect = PROT_SECT_DEVICE | PMD_SECT_TEX(2),
.prot_sect = PROT_SECT_DEVICE,
.domain = DOMAIN_IO,
},
[MT_DEVICE_CACHED] = { /* ioremap_cached */
@ -205,7 +205,7 @@ static struct mem_type mem_types[] = {
[MT_DEVICE_WC] = { /* ioremap_wc */
.prot_pte = PROT_PTE_DEVICE | L_PTE_MT_DEV_WC,
.prot_l1 = PMD_TYPE_TABLE,
.prot_sect = PROT_SECT_DEVICE | PMD_SECT_BUFFERABLE,
.prot_sect = PROT_SECT_DEVICE,
.domain = DOMAIN_IO,
},
[MT_CACHECLEAN] = {
@ -273,22 +273,23 @@ static void __init build_mem_type_table(void)
#endif
/*
* On non-Xscale3 ARMv5-and-older systems, use CB=01
* (Uncached/Buffered) for ioremap_wc() mappings. On XScale3
* and ARMv6+, use TEXCB=00100 mappings (Inner/Outer Uncacheable
* in xsc3 parlance, Uncached Normal in ARMv6 parlance).
* Strip out features not present on earlier architectures.
* Pre-ARMv5 CPUs don't have TEX bits. Pre-ARMv6 CPUs or those
* without extended page tables don't have the 'Shared' bit.
*/
if (cpu_is_xsc3() || cpu_arch >= CPU_ARCH_ARMv6) {
mem_types[MT_DEVICE_WC].prot_sect |= PMD_SECT_TEX(1);
mem_types[MT_DEVICE_WC].prot_sect &= ~PMD_SECT_BUFFERABLE;
}
if (cpu_arch < CPU_ARCH_ARMv5)
for (i = 0; i < ARRAY_SIZE(mem_types); i++)
mem_types[i].prot_sect &= ~PMD_SECT_TEX(7);
if ((cpu_arch < CPU_ARCH_ARMv6 || !(cr & CR_XP)) && !cpu_is_xsc3())
for (i = 0; i < ARRAY_SIZE(mem_types); i++)
mem_types[i].prot_sect &= ~PMD_SECT_S;
/*
* ARMv5 and lower, bit 4 must be set for page tables.
* (was: cache "update-able on write" bit on ARM610)
* However, Xscale cores require this bit to be cleared.
* ARMv5 and lower, bit 4 must be set for page tables (was: cache
* "update-able on write" bit on ARM610). However, Xscale and
* Xscale3 require this bit to be cleared.
*/
if (cpu_is_xscale()) {
if (cpu_is_xscale() || cpu_is_xsc3()) {
for (i = 0; i < ARRAY_SIZE(mem_types); i++) {
mem_types[i].prot_sect &= ~PMD_BIT4;
mem_types[i].prot_l1 &= ~PMD_BIT4;
@ -302,6 +303,64 @@ static void __init build_mem_type_table(void)
}
}
/*
* Mark the device areas according to the CPU/architecture.
*/
if (cpu_is_xsc3() || (cpu_arch >= CPU_ARCH_ARMv6 && (cr & CR_XP))) {
if (!cpu_is_xsc3()) {
/*
* Mark device regions on ARMv6+ as execute-never
* to prevent speculative instruction fetches.
*/
mem_types[MT_DEVICE].prot_sect |= PMD_SECT_XN;
mem_types[MT_DEVICE_NONSHARED].prot_sect |= PMD_SECT_XN;
mem_types[MT_DEVICE_CACHED].prot_sect |= PMD_SECT_XN;
mem_types[MT_DEVICE_WC].prot_sect |= PMD_SECT_XN;
}
if (cpu_arch >= CPU_ARCH_ARMv7 && (cr & CR_TRE)) {
/*
* For ARMv7 with TEX remapping,
* - shared device is SXCB=1100
* - nonshared device is SXCB=0100
* - write combine device mem is SXCB=0001
* (Uncached Normal memory)
*/
mem_types[MT_DEVICE].prot_sect |= PMD_SECT_TEX(1);
mem_types[MT_DEVICE_NONSHARED].prot_sect |= PMD_SECT_TEX(1);
mem_types[MT_DEVICE_WC].prot_sect |= PMD_SECT_BUFFERABLE;
} else if (cpu_is_xsc3()) {
/*
* For Xscale3,
* - shared device is TEXCB=00101
* - nonshared device is TEXCB=01000
* - write combine device mem is TEXCB=00100
* (Inner/Outer Uncacheable in xsc3 parlance)
*/
mem_types[MT_DEVICE].prot_sect |= PMD_SECT_TEX(1) | PMD_SECT_BUFFERED;
mem_types[MT_DEVICE_NONSHARED].prot_sect |= PMD_SECT_TEX(2);
mem_types[MT_DEVICE_WC].prot_sect |= PMD_SECT_TEX(1);
} else {
/*
* For ARMv6 and ARMv7 without TEX remapping,
* - shared device is TEXCB=00001
* - nonshared device is TEXCB=01000
* - write combine device mem is TEXCB=00100
* (Uncached Normal in ARMv6 parlance).
*/
mem_types[MT_DEVICE].prot_sect |= PMD_SECT_BUFFERED;
mem_types[MT_DEVICE_NONSHARED].prot_sect |= PMD_SECT_TEX(2);
mem_types[MT_DEVICE_WC].prot_sect |= PMD_SECT_TEX(1);
}
} else {
/*
* On others, write combining is "Uncached/Buffered"
*/
mem_types[MT_DEVICE_WC].prot_sect |= PMD_SECT_BUFFERABLE;
}
/*
* Now deal with the memory-type mappings
*/
cp = &cache_policies[cachepolicy];
vecs_pgprot = kern_pgprot = user_pgprot = cp->pte;
@ -317,12 +376,8 @@ static void __init build_mem_type_table(void)
* Enable CPU-specific coherency if supported.
* (Only available on XSC3 at the moment.)
*/
if (arch_is_coherent()) {
if (cpu_is_xsc3()) {
mem_types[MT_MEMORY].prot_sect |= PMD_SECT_S;
mem_types[MT_MEMORY].prot_pte |= L_PTE_SHARED;
}
}
if (arch_is_coherent() && cpu_is_xsc3())
mem_types[MT_MEMORY].prot_sect |= PMD_SECT_S;
/*
* ARMv6 and above have extended page tables.
@ -336,11 +391,6 @@ static void __init build_mem_type_table(void)
mem_types[MT_MINICLEAN].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE;
mem_types[MT_CACHECLEAN].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE;
/*
* Mark the device area as "shared device"
*/
mem_types[MT_DEVICE].prot_sect |= PMD_SECT_BUFFERED;
#ifdef CONFIG_SMP
/*
* Mark memory with the "shared" attribute for SMP systems
@ -360,9 +410,6 @@ static void __init build_mem_type_table(void)
mem_types[MT_LOW_VECTORS].prot_pte |= vecs_pgprot;
mem_types[MT_HIGH_VECTORS].prot_pte |= vecs_pgprot;
if (cpu_arch < CPU_ARCH_ARMv5)
mem_types[MT_MINICLEAN].prot_sect &= ~PMD_SECT_TEX(1);
pgprot_user = __pgprot(L_PTE_PRESENT | L_PTE_YOUNG | user_pgprot);
pgprot_kernel = __pgprot(L_PTE_PRESENT | L_PTE_YOUNG |
L_PTE_DIRTY | L_PTE_WRITE |
@ -654,7 +701,7 @@ static inline void prepare_page_table(struct meminfo *mi)
/*
* Clear out all the mappings below the kernel image.
*/
for (addr = 0; addr < MODULE_START; addr += PGDIR_SIZE)
for (addr = 0; addr < MODULES_VADDR; addr += PGDIR_SIZE)
pmd_clear(pmd_off_k(addr));
#ifdef CONFIG_XIP_KERNEL
@ -766,7 +813,7 @@ static void __init devicemaps_init(struct machine_desc *mdesc)
*/
#ifdef CONFIG_XIP_KERNEL
map.pfn = __phys_to_pfn(CONFIG_XIP_PHYS_ADDR & SECTION_MASK);
map.virtual = MODULE_START;
map.virtual = MODULES_VADDR;
map.length = ((unsigned long)&_etext - map.virtual + ~SECTION_MASK) & SECTION_MASK;
map.type = MT_ROM;
create_mapping(&map);

View file

@ -115,7 +115,7 @@ ENTRY(cpu_v7_set_pte_ext)
orr r3, r3, r2
orr r3, r3, #PTE_EXT_AP0 | 2
tst r2, #1 << 4
tst r1, #1 << 4
orrne r3, r3, #PTE_EXT_TEX(1)
tst r1, #L_PTE_WRITE
@ -192,11 +192,11 @@ __v7_setup:
mov pc, lr @ return to head.S:__ret
ENDPROC(__v7_setup)
/*
* V X F I D LR
* .... ...E PUI. .T.T 4RVI ZFRS BLDP WCAM
* rrrr rrrx xxx0 0101 xxxx xxxx x111 xxxx < forced
* 0 110 0011 1.00 .111 1101 < we want
/* AT
* TFR EV X F I D LR
* .EEE ..EE PUI. .T.T 4RVI ZFRS BLDP WCAM
* rxxx rrxx xxx0 0101 xxxx xxxx x111 xxxx < forced
* 1 0 110 0011 1.00 .111 1101 < we want
*/
.type v7_crval, #object
v7_crval:

View file

@ -428,23 +428,23 @@ static int clk_debugfs_register_one(struct clk *c)
if (c->id != 0)
sprintf(p, ":%d", c->id);
d = debugfs_create_dir(s, pa ? pa->dent : clk_debugfs_root);
if (IS_ERR(d))
return PTR_ERR(d);
if (!d)
return -ENOMEM;
c->dent = d;
d = debugfs_create_u8("usecount", S_IRUGO, c->dent, (u8 *)&c->usecount);
if (IS_ERR(d)) {
err = PTR_ERR(d);
if (!d) {
err = -ENOMEM;
goto err_out;
}
d = debugfs_create_u32("rate", S_IRUGO, c->dent, (u32 *)&c->rate);
if (IS_ERR(d)) {
err = PTR_ERR(d);
if (!d) {
err = -ENOMEM;
goto err_out;
}
d = debugfs_create_x32("flags", S_IRUGO, c->dent, (u32 *)&c->flags);
if (IS_ERR(d)) {
err = PTR_ERR(d);
if (!d) {
err = -ENOMEM;
goto err_out;
}
return 0;
@ -483,8 +483,8 @@ static int __init clk_debugfs_init(void)
int err;
d = debugfs_create_dir("clock", NULL);
if (IS_ERR(d))
return PTR_ERR(d);
if (!d)
return -ENOMEM;
clk_debugfs_root = d;
list_for_each_entry(c, &clocks, node) {
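
The pattern being fixed: in this kernel, debugfs_create_dir() and the
debugfs_create_u8/u32/x32() helpers report failure by returning NULL, not an
ERR_PTR(), so the old IS_ERR()/PTR_ERR() tests could never trigger. A minimal
sketch of the corrected handling, with hypothetical names:

    #include <linux/debugfs.h>
    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/stat.h>

    static struct dentry *myclk_root;
    static u32 myclk_rate;

    static int __init myclk_debugfs_init(void)
    {
        struct dentry *d;

        d = debugfs_create_dir("myclk", NULL);
        if (!d)                 /* NULL on failure, never an ERR_PTR */
            return -ENOMEM;
        myclk_root = d;

        if (!debugfs_create_u32("rate", S_IRUGO, myclk_root, &myclk_rate))
            return -ENOMEM;
        return 0;
    }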

View file

@ -65,7 +65,8 @@
#include <mach/omap34xx.h>
#endif
#define INTCPS_SIR_IRQ_OFFSET 0x0040 /* Active interrupt number */
#define INTCPS_SIR_IRQ_OFFSET 0x0040 /* Active interrupt offset */
#define ACTIVEIRQ_MASK 0x7f /* Active interrupt bits */
.macro disable_fiq
.endm
@ -88,6 +89,7 @@
cmp \irqnr, #0x0
2222:
ldrne \irqnr, [\base, #INTCPS_SIR_IRQ_OFFSET]
and \irqnr, \irqnr, #ACTIVEIRQ_MASK /* Clear spurious bits */
.endm

View file

@ -372,7 +372,7 @@
/* External TWL4030 gpio interrupts are optional */
#define TWL4030_GPIO_IRQ_BASE TWL4030_PWR_IRQ_END
#ifdef CONFIG_TWL4030_GPIO
#ifdef CONFIG_GPIO_TWL4030
#define TWL4030_GPIO_NR_IRQS 18
#else
#define TWL4030_GPIO_NR_IRQS 0

View file

@ -148,6 +148,7 @@ config IA64_GENERIC
select ACPI_NUMA
select SWIOTLB
select PCI_MSI
select DMAR
help
This selects the system type of your hardware. A "generic" kernel
will run on any supported IA-64 system. However, if you configure
@ -585,7 +586,7 @@ source "fs/Kconfig.binfmt"
endmenu
menu "Power management and ACPI"
menu "Power management and ACPI options"
source "kernel/power/Kconfig"
@ -641,6 +642,8 @@ source "net/Kconfig"
source "drivers/Kconfig"
source "arch/ia64/hp/sim/Kconfig"
config MSPEC
tristate "Memory special operations driver"
depends on IA64
@ -652,6 +655,12 @@ config MSPEC
source "fs/Kconfig"
source "arch/ia64/Kconfig.debug"
source "security/Kconfig"
source "crypto/Kconfig"
source "arch/ia64/kvm/Kconfig"
source "lib/Kconfig"
@ -678,11 +687,3 @@ config IRQ_PER_CPU
config IOMMU_HELPER
def_bool (IA64_HP_ZX1 || IA64_HP_ZX1_SWIOTLB || IA64_GENERIC || SWIOTLB)
source "arch/ia64/hp/sim/Kconfig"
source "arch/ia64/Kconfig.debug"
source "security/Kconfig"
source "crypto/Kconfig"

View file

@ -13,19 +13,12 @@
*/
#include <linux/device.h>
#include <linux/swiotlb.h>
#include <asm/machvec.h>
/* swiotlb declarations & definitions: */
extern int swiotlb_late_init_with_default_size (size_t size);
extern ia64_mv_dma_alloc_coherent swiotlb_alloc_coherent;
extern ia64_mv_dma_free_coherent swiotlb_free_coherent;
extern ia64_mv_dma_map_single_attrs swiotlb_map_single_attrs;
extern ia64_mv_dma_unmap_single_attrs swiotlb_unmap_single_attrs;
extern ia64_mv_dma_map_sg_attrs swiotlb_map_sg_attrs;
extern ia64_mv_dma_unmap_sg_attrs swiotlb_unmap_sg_attrs;
extern ia64_mv_dma_supported swiotlb_dma_supported;
extern ia64_mv_dma_mapping_error swiotlb_dma_mapping_error;
/* hwiommu declarations & definitions: */

View file

@ -434,28 +434,4 @@ extern void memset_io(volatile void __iomem *s, int c, long n);
# endif /* __KERNEL__ */
/*
* Enabling BIO_VMERGE_BOUNDARY forces us to turn off I/O MMU bypassing. It is said that
* BIO-level virtual merging can give up to 4% performance boost (not verified for ia64).
* On the other hand, we know that I/O MMU bypassing gives ~8% performance improvement on
* SPECweb-like workloads on zx1-based machines. Thus, for now we favor I/O MMU bypassing
* over BIO-level virtual merging.
*/
extern unsigned long ia64_max_iommu_merge_mask;
#if 1
#define BIO_VMERGE_BOUNDARY 0
#else
/*
* It makes no sense at all to have this BIO_VMERGE_BOUNDARY macro here. Should be
* replaced by dma_merge_mask() or something of that sort. Note: the only way
* BIO_VMERGE_BOUNDARY is used is to mask off bits. Effectively, our definition gets
* expanded into:
*
* addr & ((ia64_max_iommu_merge_mask + 1) - 1) == (addr & ia64_max_iommu_vmerge_mask)
*
* which is precisely what we want.
*/
#define BIO_VMERGE_BOUNDARY (ia64_max_iommu_merge_mask + 1)
#endif
#endif /* _ASM_IA64_IO_H */

View file

@ -11,6 +11,7 @@
#define _ASM_IA64_MACHVEC_H
#include <linux/types.h>
#include <linux/swiotlb.h>
/* forward declarations: */
struct device;
@ -297,27 +298,6 @@ extern void machvec_init_from_cmdline(const char *cmdline);
# error Unknown configuration. Update arch/ia64/include/asm/machvec.h.
# endif /* CONFIG_IA64_GENERIC */
/*
* Declare default routines which aren't declared anywhere else:
*/
extern ia64_mv_dma_init swiotlb_init;
extern ia64_mv_dma_alloc_coherent swiotlb_alloc_coherent;
extern ia64_mv_dma_free_coherent swiotlb_free_coherent;
extern ia64_mv_dma_map_single swiotlb_map_single;
extern ia64_mv_dma_map_single_attrs swiotlb_map_single_attrs;
extern ia64_mv_dma_unmap_single swiotlb_unmap_single;
extern ia64_mv_dma_unmap_single_attrs swiotlb_unmap_single_attrs;
extern ia64_mv_dma_map_sg swiotlb_map_sg;
extern ia64_mv_dma_map_sg_attrs swiotlb_map_sg_attrs;
extern ia64_mv_dma_unmap_sg swiotlb_unmap_sg;
extern ia64_mv_dma_unmap_sg_attrs swiotlb_unmap_sg_attrs;
extern ia64_mv_dma_sync_single_for_cpu swiotlb_sync_single_for_cpu;
extern ia64_mv_dma_sync_sg_for_cpu swiotlb_sync_sg_for_cpu;
extern ia64_mv_dma_sync_single_for_device swiotlb_sync_single_for_device;
extern ia64_mv_dma_sync_sg_for_device swiotlb_sync_sg_for_device;
extern ia64_mv_dma_mapping_error swiotlb_dma_mapping_error;
extern ia64_mv_dma_supported swiotlb_dma_supported;
/*
* Define default versions so we can extend machvec for new platforms without having
* to update the machvec files for all existing platforms.

View file

@ -48,7 +48,6 @@ extern int reserve_elfcorehdr(unsigned long *start, unsigned long *end);
*/
#define GRANULEROUNDDOWN(n) ((n) & ~(IA64_GRANULE_SIZE-1))
#define GRANULEROUNDUP(n) (((n)+IA64_GRANULE_SIZE-1) & ~(IA64_GRANULE_SIZE-1))
#define ORDERROUNDDOWN(n) ((n) & ~((PAGE_SIZE<<MAX_ORDER)-1))
#ifdef CONFIG_NUMA
extern void call_pernode_memory (unsigned long start, unsigned long len, void *func);

View file

@ -337,11 +337,24 @@ typedef struct sal_log_record_header {
#define sal_log_severity_fatal 1
#define sal_log_severity_corrected 2
/*
* Error Recovery Info (ERI) bit decode. From SAL Spec section B.2.2 Table B-3
* Error Section Error_Recovery_Info Field Definition.
*/
#define ERI_NOT_VALID 0x0 /* Error Recovery Field is not valid */
#define ERI_NOT_ACCESSIBLE 0x30 /* Resource not accessible */
#define ERI_CONTAINMENT_WARN 0x22 /* Corrupt data propagated */
#define ERI_UNCORRECTED_ERROR 0x20 /* Uncorrected error */
#define ERI_COMPONENT_RESET 0x24 /* Component must be reset */
#define ERI_CORR_ERROR_LOG 0x21 /* Corrected error, needs logging */
#define ERI_CORR_ERROR_THRESH 0x29 /* Corrected error threshold exceeded */
/* Definition of log section header structures */
typedef struct sal_log_sec_header {
efi_guid_t guid; /* Unique Section ID */
sal_log_revision_t revision; /* Major and Minor revision of Section */
u16 reserved;
u8 error_recovery_info; /* Platform error recovery status */
u8 reserved;
u32 len; /* Section length */
} sal_log_section_hdr_t;

View file

@ -90,6 +90,8 @@
#define SN_SAL_SET_CPU_NUMBER 0x02000068
#define SN_SAL_KERNEL_LAUNCH_EVENT 0x02000069
#define SN_SAL_WATCHLIST_ALLOC 0x02000070
#define SN_SAL_WATCHLIST_FREE 0x02000071
/*
* Service-specific constants
@ -1185,4 +1187,47 @@ ia64_sn_kernel_launch_event(void)
SAL_CALL_NOLOCK(rv, SN_SAL_KERNEL_LAUNCH_EVENT, 0, 0, 0, 0, 0, 0, 0);
return rv.status;
}
union sn_watchlist_u {
u64 val;
struct {
u64 blade : 16,
size : 32,
filler : 16;
};
};
static inline int
sn_mq_watchlist_alloc(int blade, void *mq, unsigned int mq_size,
unsigned long *intr_mmr_offset)
{
struct ia64_sal_retval rv;
unsigned long addr;
union sn_watchlist_u size_blade;
int watchlist;
addr = (unsigned long)mq;
size_blade.size = mq_size;
size_blade.blade = blade;
/*
* bios returns watchlist number or negative error number.
*/
ia64_sal_oemcall_nolock(&rv, SN_SAL_WATCHLIST_ALLOC, addr,
size_blade.val, (u64)intr_mmr_offset,
(u64)&watchlist, 0, 0, 0);
if (rv.status < 0)
return rv.status;
return watchlist;
}
static inline int
sn_mq_watchlist_free(int blade, int watchlist_num)
{
struct ia64_sal_retval rv;
ia64_sal_oemcall_nolock(&rv, SN_SAL_WATCHLIST_FREE, blade,
watchlist_num, 0, 0, 0, 0, 0);
return rv.status;
}
#endif /* _ASM_IA64_SN_SN_SAL_H */

View file

@ -678,6 +678,30 @@ static int __init acpi_parse_fadt(struct acpi_table_header *table)
return 0;
}
int __init early_acpi_boot_init(void)
{
int ret;
/*
* do a partial walk of MADT to determine how many CPUs
* we have including offline CPUs
*/
if (acpi_table_parse(ACPI_SIG_MADT, acpi_parse_madt)) {
printk(KERN_ERR PREFIX "Can't find MADT\n");
return 0;
}
ret = acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_SAPIC,
acpi_parse_lsapic, NR_CPUS);
if (ret < 1)
printk(KERN_ERR PREFIX
"Error parsing MADT - no LAPIC entries\n");
return 0;
}
int __init acpi_boot_init(void)
{
@ -701,11 +725,6 @@ int __init acpi_boot_init(void)
printk(KERN_ERR PREFIX
"Error parsing LAPIC address override entry\n");
if (acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_SAPIC, acpi_parse_lsapic, NR_CPUS)
< 1)
printk(KERN_ERR PREFIX
"Error parsing MADT - no LAPIC entries\n");
if (acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_APIC_NMI, acpi_parse_lapic_nmi, 0)
< 0)
printk(KERN_ERR PREFIX "Error parsing LAPIC NMI entry\n");

View file

@ -12,13 +12,11 @@
#include <asm/machvec.h>
#include <linux/dma-mapping.h>
#include <asm/machvec.h>
#include <asm/system.h>
#ifdef CONFIG_DMAR
#include <linux/kernel.h>
#include <linux/string.h>
#include <asm/page.h>
#include <asm/iommu.h>

View file

@ -359,7 +359,7 @@ reserve_memory (void)
}
#endif
#ifdef CONFIG_CRASH_KERNEL
#ifdef CONFIG_CRASH_DUMP
if (reserve_elfcorehdr(&rsvd_region[n].start,
&rsvd_region[n].end) == 0)
n++;
@ -561,8 +561,12 @@ setup_arch (char **cmdline_p)
#ifdef CONFIG_ACPI
/* Initialize the ACPI boot-time table parser */
acpi_table_init();
early_acpi_boot_init();
# ifdef CONFIG_ACPI_NUMA
acpi_numa_init();
#ifdef CONFIG_ACPI_HOTPLUG_CPU
prefill_possible_map();
#endif
per_cpu_scan_finalize((cpus_weight(early_cpu_possible_map) == 0 ?
32 : cpus_weight(early_cpu_possible_map)),
additional_cpus > 0 ? additional_cpus : 0);
@ -853,9 +857,6 @@ void __init
setup_per_cpu_areas (void)
{
/* start_kernel() requires this... */
#ifdef CONFIG_ACPI_HOTPLUG_CPU
prefill_possible_map();
#endif
}
/*

View file

@ -635,7 +635,6 @@ static __init int count_node_pages(unsigned long start, unsigned long len, int n
(min(end, __pa(MAX_DMA_ADDRESS)) - start) >>PAGE_SHIFT;
#endif
start = GRANULEROUNDDOWN(start);
start = ORDERROUNDDOWN(start);
end = GRANULEROUNDUP(end);
mem_data[node].max_pfn = max(mem_data[node].max_pfn,
end >> PAGE_SHIFT);

View file

@ -19,6 +19,12 @@ EXPORT_PER_CPU_SYMBOL_GPL(__uv_hub_info);
#ifdef CONFIG_IA64_SGI_UV
int sn_prom_type;
long sn_partition_id;
EXPORT_SYMBOL(sn_partition_id);
long sn_coherency_id;
EXPORT_SYMBOL_GPL(sn_coherency_id);
long sn_region_size;
EXPORT_SYMBOL(sn_region_size);
#endif
struct redir_addr {

View file

@ -16,6 +16,7 @@
#include <linux/kexec.h>
#include <linux/crash_dump.h>
#include <asm/kexec.h>
#include <asm/reg.h>
#include <asm/io.h>
#include <asm/prom.h>

View file

@ -189,7 +189,6 @@ struct pci_controller * __devinit init_phb_dynamic(struct device_node *dn)
{
struct pci_controller *phb;
int primary;
struct pci_bus *b;
primary = list_empty(&hose_list);
phb = pcibios_alloc_controller(dn);

View file

@ -1494,7 +1494,7 @@ config HAVE_ARCH_EARLY_PFN_TO_NID
def_bool X86_64
depends on NUMA
menu "Power management options"
menu "Power management and ACPI options"
depends on !X86_VOYAGER
config ARCH_HIBERNATION_HEADER

View file

@ -101,30 +101,22 @@
#define LAST_VM86_IRQ 15
#define invalid_vm86_irq(irq) ((irq) < 3 || (irq) > 15)
#ifdef CONFIG_X86_64
#if defined(CONFIG_X86_IO_APIC) && !defined(CONFIG_X86_VOYAGER)
# if NR_CPUS < MAX_IO_APICS
# define NR_IRQS (NR_VECTORS + (32 * NR_CPUS))
# else
# define NR_IRQS (NR_VECTORS + (32 * MAX_IO_APICS))
# endif
#elif !defined(CONFIG_X86_VOYAGER)
# if defined(CONFIG_X86_IO_APIC) || defined(CONFIG_PARAVIRT) || defined(CONFIG_X86_VISWS)
# define NR_IRQS 224
# else /* IO_APIC || PARAVIRT */
# define NR_IRQS 16
# endif
#else /* !VISWS && !VOYAGER */
#elif defined(CONFIG_X86_VOYAGER)
# define NR_IRQS 224
#endif /* VISWS */
#else /* IO_APIC || VOYAGER */
# define NR_IRQS 16
#endif
/* Voyager specific defines */
/* These define the CPIs we use in linux */

View file

@ -108,9 +108,7 @@ static __always_inline unsigned long long __native_read_tsc(void)
{
DECLARE_ARGS(val, low, high);
rdtsc_barrier();
asm volatile("rdtsc" : EAX_EDX_RET(val, low, high));
rdtsc_barrier();
return EAX_EDX_VAL(val, low, high);
}

View file

@ -154,7 +154,7 @@ extern unsigned long node_remap_size[];
#endif
/* sched_domains SD_NODE_INIT for NUMAQ machines */
/* sched_domains SD_NODE_INIT for NUMA machines */
#define SD_NODE_INIT (struct sched_domain) { \
.min_interval = 8, \
.max_interval = 32, \
@ -169,8 +169,9 @@ extern unsigned long node_remap_size[];
.flags = SD_LOAD_BALANCE \
| SD_BALANCE_EXEC \
| SD_BALANCE_FORK \
| SD_SERIALIZE \
| SD_WAKE_BALANCE, \
| SD_WAKE_AFFINE \
| SD_WAKE_BALANCE \
| SD_SERIALIZE, \
.last_balance = jiffies, \
.balance_interval = 1, \
}

View file

@ -34,6 +34,8 @@ static inline cycles_t get_cycles(void)
static __always_inline cycles_t vget_cycles(void)
{
cycles_t cycles;
/*
* We only do VDSOs on TSC capable CPUs, so this shouldn't
* access boot_cpu_data (which is not VDSO-safe):
@ -42,7 +44,11 @@ static __always_inline cycles_t vget_cycles(void)
if (!cpu_has_tsc)
return 0;
#endif
return (cycles_t)__native_read_tsc();
rdtsc_barrier();
cycles = (cycles_t)__native_read_tsc();
rdtsc_barrier();
return cycles;
}
extern void tsc_init(void);

View file

@ -520,6 +520,7 @@ extern void voyager_restart(void);
extern void voyager_cat_power_off(void);
extern void voyager_cat_do_common_interrupt(void);
extern void voyager_handle_nmi(void);
extern void voyager_smp_intr_init(void);
/* Commands for the following are */
#define VOYAGER_PSI_READ 0
#define VOYAGER_PSI_WRITE 1

View file

@ -50,7 +50,7 @@ static int dma_ops_unity_map(struct dma_ops_domain *dma_dom,
/* returns !0 if the IOMMU is caching non-present entries in its TLB */
static int iommu_has_npcache(struct amd_iommu *iommu)
{
return iommu->cap & IOMMU_CAP_NPCACHE;
return iommu->cap & (1UL << IOMMU_CAP_NPCACHE);
}
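
The subtlety here: IOMMU_CAP_NPCACHE is a bit position, not a bit mask, so
it has to be shifted before the AND. A standalone illustration, with the
capability value treated as hypothetical (the real one lives in
amd_iommu_types.h):

    #include <stdio.h>

    #define IOMMU_CAP_NPCACHE 26    /* hypothetical bit position */

    int main(void)
    {
        unsigned long cap = 1UL << IOMMU_CAP_NPCACHE;

        /* wrong: tests bits 1, 3 and 4 (26 == 0b11010), not bit 26 */
        printf("broken: %d\n", !!(cap & IOMMU_CAP_NPCACHE));

        /* right: tests bit 26 itself */
        printf("fixed:  %d\n", !!(cap & (1UL << IOMMU_CAP_NPCACHE)));
        return 0;
    }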
/****************************************************************************
@ -536,6 +536,9 @@ static void dma_ops_free_addresses(struct dma_ops_domain *dom,
{
address >>= PAGE_SHIFT;
iommu_area_free(dom->bitmap, address, pages);
if (address + pages >= dom->next_bit)
dom->need_flush = true;
}
/****************************************************************************
@ -992,8 +995,10 @@ static void __unmap_single(struct amd_iommu *iommu,
dma_ops_free_addresses(dma_dom, dma_addr, pages);
if (amd_iommu_unmap_flush)
if (amd_iommu_unmap_flush || dma_dom->need_flush) {
iommu_flush_pages(iommu, dma_dom->domain.id, dma_addr, size);
dma_dom->need_flush = false;
}
}
/*

View file

@ -3611,6 +3611,8 @@ int __init probe_nr_irqs(void)
/* something wrong ? */
if (nr < nr_min)
nr = nr_min;
if (WARN_ON(nr > NR_IRQS))
nr = NR_IRQS;
return nr;
}

View file

@ -29,11 +29,7 @@ EXPORT_SYMBOL(pm_power_off);
static const struct desc_ptr no_idt = {};
static int reboot_mode;
/*
* Keyboard reset and triple fault may result in INIT, not RESET, which
* doesn't work when we're in vmx root mode. Try ACPI first.
*/
enum reboot_type reboot_type = BOOT_ACPI;
enum reboot_type reboot_type = BOOT_KBD;
int reboot_force;
#if defined(CONFIG_X86_32) && defined(CONFIG_SMP)

View file

@ -154,6 +154,12 @@ void native_flush_tlb_others(const cpumask_t *cpumaskp, struct mm_struct *mm,
flush_mm = mm;
flush_va = va;
cpus_or(flush_cpumask, cpumask, flush_cpumask);
/*
* Make the above memory operations globally visible before
* sending the IPI.
*/
smp_mb();
/*
* We have to send the IPI only to
* CPUs affected.

View file

@ -182,6 +182,11 @@ void native_flush_tlb_others(const cpumask_t *cpumaskp, struct mm_struct *mm,
f->flush_va = va;
cpus_or(f->flush_cpumask, cpumask, f->flush_cpumask);
/*
* Make the above memory operations globally visible before
* sending the IPI.
*/
smp_mb();
/*
* We have to send the IPI only to
* CPUs affected.

View file

@ -55,7 +55,7 @@ u64 native_sched_clock(void)
rdtscll(this_offset);
/* return the value in ns */
return cycles_2_ns(this_offset);
return __cycles_2_ns(this_offset);
}
/* We need to define a real function for sched_clock, to override the
@ -813,10 +813,6 @@ void __init tsc_init(void)
cpu_khz = calibrate_cpu();
#endif
lpj = ((u64)tsc_khz * 1000);
do_div(lpj, HZ);
lpj_fine = lpj;
printk("Detected %lu.%03lu MHz processor.\n",
(unsigned long)cpu_khz / 1000,
(unsigned long)cpu_khz % 1000);
@ -836,6 +832,10 @@ void __init tsc_init(void)
/* now allow native_sched_clock() to use rdtsc */
tsc_disabled = 0;
lpj = ((u64)tsc_khz * 1000);
do_div(lpj, HZ);
lpj_fine = lpj;
use_tsc_delay();
/* Check and install the TSC clocksource */
dmi_check_system(bad_tsc_dmi_table);

View file

@ -27,7 +27,7 @@ static struct irqaction irq2 = {
void __init intr_init_hook(void)
{
#ifdef CONFIG_SMP
smp_intr_init();
voyager_smp_intr_init();
#endif
setup_irq(2, &irq2);

View file

@ -1258,7 +1258,7 @@ static void handle_vic_irq(unsigned int irq, struct irq_desc *desc)
#define QIC_SET_GATE(cpi, vector) \
set_intr_gate((cpi) + QIC_DEFAULT_CPI_BASE, (vector))
void __init smp_intr_init(void)
void __init voyager_smp_intr_init(void)
{
int i;

View file

@ -67,18 +67,18 @@ static void split_page_count(int level)
void arch_report_meminfo(struct seq_file *m)
{
seq_printf(m, "DirectMap4k: %8lu kB\n",
seq_printf(m, "DirectMap4k: %8lu kB\n",
direct_pages_count[PG_LEVEL_4K] << 2);
#if defined(CONFIG_X86_64) || defined(CONFIG_X86_PAE)
seq_printf(m, "DirectMap2M: %8lu kB\n",
seq_printf(m, "DirectMap2M: %8lu kB\n",
direct_pages_count[PG_LEVEL_2M] << 11);
#else
seq_printf(m, "DirectMap4M: %8lu kB\n",
seq_printf(m, "DirectMap4M: %8lu kB\n",
direct_pages_count[PG_LEVEL_2M] << 12);
#endif
#ifdef CONFIG_X86_64
if (direct_gbpages)
seq_printf(m, "DirectMap1G: %8lu kB\n",
seq_printf(m, "DirectMap1G: %8lu kB\n",
direct_pages_count[PG_LEVEL_1G] << 20);
#endif
}

View file

@ -27,8 +27,7 @@ static int num_counters = 2;
static int counter_width = 32;
#define CTR_IS_RESERVED(msrs, c) (msrs->counters[(c)].addr ? 1 : 0)
#define CTR_READ(l, h, msrs, c) do {rdmsr(msrs->counters[(c)].addr, (l), (h)); } while (0)
#define CTR_OVERFLOWED(n) (!((n) & (1U<<(counter_width-1))))
#define CTR_OVERFLOWED(n) (!((n) & (1ULL<<(counter_width-1))))
#define CTRL_IS_RESERVED(msrs, c) (msrs->controls[(c)].addr ? 1 : 0)
#define CTRL_READ(l, h, msrs, c) do {rdmsr((msrs->controls[(c)].addr), (l), (h)); } while (0)
@ -124,14 +123,14 @@ static void ppro_setup_ctrs(struct op_msrs const * const msrs)
static int ppro_check_ctrs(struct pt_regs * const regs,
struct op_msrs const * const msrs)
{
unsigned int low, high;
u64 val;
int i;
for (i = 0 ; i < num_counters; ++i) {
if (!reset_value[i])
continue;
CTR_READ(low, high, msrs, i);
if (CTR_OVERFLOWED(low)) {
rdmsrl(msrs->counters[i].addr, val);
if (CTR_OVERFLOWED(val)) {
oprofile_add_sample(regs, i);
wrmsrl(msrs->counters[i].addr, -reset_value[i]);
}

View file

@ -863,15 +863,16 @@ static void xen_alloc_ptpage(struct mm_struct *mm, unsigned long pfn, unsigned l
if (PagePinned(virt_to_page(mm->pgd))) {
SetPagePinned(page);
vm_unmap_aliases();
if (!PageHighMem(page)) {
make_lowmem_page_readonly(__va(PFN_PHYS((unsigned long)pfn)));
if (level == PT_PTE && USE_SPLIT_PTLOCKS)
pin_pagetable_pfn(MMUEXT_PIN_L1_TABLE, pfn);
} else
} else {
/* make sure there are no stray mappings of
this page */
kmap_flush_unused();
vm_unmap_aliases();
}
}
}

View file

@ -850,13 +850,16 @@ static int xen_pin_page(struct mm_struct *mm, struct page *page,
read-only, and can be pinned. */
static void __xen_pgd_pin(struct mm_struct *mm, pgd_t *pgd)
{
vm_unmap_aliases();
xen_mc_batch();
if (xen_pgd_walk(mm, xen_pin_page, USER_LIMIT)) {
/* re-enable interrupts for kmap_flush_unused */
if (xen_pgd_walk(mm, xen_pin_page, USER_LIMIT)) {
/* re-enable interrupts for flushing */
xen_mc_issue(0);
kmap_flush_unused();
vm_unmap_aliases();
xen_mc_batch();
}
@ -874,7 +877,7 @@ static void __xen_pgd_pin(struct mm_struct *mm, pgd_t *pgd)
#else /* CONFIG_X86_32 */
#ifdef CONFIG_X86_PAE
/* Need to make sure unshared kernel PMD is pinnable */
xen_pin_page(mm, virt_to_page(pgd_page(pgd[pgd_index(TASK_SIZE)])),
xen_pin_page(mm, pgd_page(pgd[pgd_index(TASK_SIZE)]),
PT_PMD);
#endif
xen_do_pin(MMUEXT_PIN_L3_TABLE, PFN_DOWN(__pa(pgd)));
@ -991,7 +994,7 @@ static void __xen_pgd_unpin(struct mm_struct *mm, pgd_t *pgd)
#ifdef CONFIG_X86_PAE
/* Need to make sure unshared kernel PMD is unpinned */
xen_unpin_page(mm, virt_to_page(pgd_page(pgd[pgd_index(TASK_SIZE)])),
xen_unpin_page(mm, pgd_page(pgd[pgd_index(TASK_SIZE)]),
PT_PMD);
#endif

View file

@ -1770,8 +1770,6 @@ static void end_that_request_last(struct request *req, int error)
{
struct gendisk *disk = req->rq_disk;
blk_delete_timer(req);
if (blk_rq_tagged(req))
blk_queue_end_tag(req->q, req);
@ -1781,6 +1779,8 @@ static void end_that_request_last(struct request *req, int error)
if (unlikely(laptop_mode) && blk_fs_request(req))
laptop_io_completion();
blk_delete_timer(req);
/*
* Account IO completion. bar_rq isn't accounted as a normal
* IO on queueing nor completion. Accounting the containing

View file

@ -222,27 +222,6 @@ new_segment:
}
EXPORT_SYMBOL(blk_rq_map_sg);
static inline int ll_new_mergeable(struct request_queue *q,
struct request *req,
struct bio *bio)
{
int nr_phys_segs = bio_phys_segments(q, bio);
if (req->nr_phys_segments + nr_phys_segs > q->max_phys_segments) {
req->cmd_flags |= REQ_NOMERGE;
if (req == q->last_merge)
q->last_merge = NULL;
return 0;
}
/*
* A hw segment is just getting larger, bump just the phys
* counter.
*/
req->nr_phys_segments += nr_phys_segs;
return 1;
}
static inline int ll_new_hw_segment(struct request_queue *q,
struct request *req,
struct bio *bio)

View file

@ -75,14 +75,7 @@ void blk_delete_timer(struct request *req)
{
struct request_queue *q = req->q;
/*
* Nothing to detach
*/
if (!q->rq_timed_out_fn || !req->deadline)
return;
list_del_init(&req->timeout_list);
if (list_empty(&q->timeout_list))
del_timer(&q->timeout);
}
@ -142,7 +135,7 @@ void blk_rq_timed_out_timer(unsigned long data)
}
if (next_set && !list_empty(&q->timeout_list))
mod_timer(&q->timeout, round_jiffies(next));
mod_timer(&q->timeout, round_jiffies_up(next));
spin_unlock_irqrestore(q->queue_lock, flags);
}
@ -198,17 +191,10 @@ void blk_add_timer(struct request *req)
/*
* If the timer isn't already pending or this timeout is earlier
* than an existing one, modify the timer. Round to next nearest
* than an existing one, modify the timer. Round up to next nearest
* second.
*/
expiry = round_jiffies(req->deadline);
/*
* We use ->deadline == 0 to detect whether a timer was added or
* not, so just increase to next jiffy for that specific case
*/
if (unlikely(!req->deadline))
req->deadline = 1;
expiry = round_jiffies_up(req->deadline);
if (!timer_pending(&q->timeout) ||
time_before(expiry, q->timeout.expires))

View file

@ -773,12 +773,6 @@ struct request *elv_next_request(struct request_queue *q)
*/
rq->cmd_flags |= REQ_STARTED;
blk_add_trace_rq(q, rq, BLK_TA_ISSUE);
/*
* We are now handing the request to the hardware,
* add the timeout handler
*/
blk_add_timer(rq);
}
if (!q->boundary_rq || q->boundary_rq == rq) {
@ -850,6 +844,12 @@ void elv_dequeue_request(struct request_queue *q, struct request *rq)
*/
if (blk_account_rq(rq))
q->in_flight++;
/*
* We are now handing the request to the hardware, add the
* timeout handler.
*/
blk_add_timer(rq);
}
EXPORT_SYMBOL(elv_dequeue_request);

View file

@ -1712,6 +1712,8 @@ unsigned ata_exec_internal_sg(struct ata_device *dev,
else
tag = 0;
if (test_and_set_bit(tag, &ap->qc_allocated))
BUG();
qc = __ata_qc_from_tag(ap, tag);
qc->tag = tag;
@ -4024,6 +4026,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
/* Weird ATAPI devices */
{ "TORiSAN DVD-ROM DRD-N216", NULL, ATA_HORKAGE_MAX_SEC_128 },
{ "QUANTUM DAT DAT72-000", NULL, ATA_HORKAGE_ATAPI_MOD16_DMA },
/* Devices we expect to fail diagnostics */
@ -4444,7 +4447,8 @@ int atapi_check_dma(struct ata_queued_cmd *qc)
/* Don't allow DMA if it isn't multiple of 16 bytes. Quite a
* few ATAPI devices choke on such DMA requests.
*/
if (unlikely(qc->nbytes & 15))
if (!(qc->dev->horkage & ATA_HORKAGE_ATAPI_MOD16_DMA) &&
unlikely(qc->nbytes & 15))
return 1;
if (ap->ops->check_atapi_dma)
@ -4560,6 +4564,37 @@ void swap_buf_le16(u16 *buf, unsigned int buf_words)
#endif /* __BIG_ENDIAN */
}
/**
* ata_qc_new - Request an available ATA command, for queueing
* @ap: Port associated with device @dev
* @dev: Device from whom we request an available command structure
*
* LOCKING:
* None.
*/
static struct ata_queued_cmd *ata_qc_new(struct ata_port *ap)
{
struct ata_queued_cmd *qc = NULL;
unsigned int i;
/* no command while frozen */
if (unlikely(ap->pflags & ATA_PFLAG_FROZEN))
return NULL;
/* the last tag is reserved for internal command. */
for (i = 0; i < ATA_MAX_QUEUE - 1; i++)
if (!test_and_set_bit(i, &ap->qc_allocated)) {
qc = __ata_qc_from_tag(ap, i);
break;
}
if (qc)
qc->tag = i;
return qc;
}
/**
* ata_qc_new_init - Request an available ATA command, and initialize it
* @dev: Device from whom we request an available command structure
@ -4569,20 +4604,16 @@ void swap_buf_le16(u16 *buf, unsigned int buf_words)
* None.
*/
struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev, int tag)
struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev)
{
struct ata_port *ap = dev->link->ap;
struct ata_queued_cmd *qc;
if (unlikely(ap->pflags & ATA_PFLAG_FROZEN))
return NULL;
qc = __ata_qc_from_tag(ap, tag);
qc = ata_qc_new(ap);
if (qc) {
qc->scsicmd = NULL;
qc->ap = ap;
qc->dev = dev;
qc->tag = tag;
ata_qc_reinit(qc);
}
@ -4590,6 +4621,31 @@ struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev, int tag)
return qc;
}
/**
* ata_qc_free - free unused ata_queued_cmd
* @qc: Command to complete
*
* Designed to free unused ata_queued_cmd object
* in case something prevents using it.
*
* LOCKING:
* spin_lock_irqsave(host lock)
*/
void ata_qc_free(struct ata_queued_cmd *qc)
{
struct ata_port *ap = qc->ap;
unsigned int tag;
WARN_ON(qc == NULL); /* ata_qc_from_tag _might_ return NULL */
qc->flags = 0;
tag = qc->tag;
if (likely(ata_tag_valid(tag))) {
qc->tag = ATA_TAG_POISON;
clear_bit(tag, &ap->qc_allocated);
}
}
void __ata_qc_complete(struct ata_queued_cmd *qc)
{
struct ata_port *ap = qc->ap;
@ -5934,7 +5990,7 @@ static void ata_port_detach(struct ata_port *ap)
* to us. Restore SControl and disable all existing devices.
*/
__ata_port_for_each_link(link, ap) {
sata_scr_write(link, SCR_CONTROL, link->saved_scontrol);
sata_scr_write(link, SCR_CONTROL, link->saved_scontrol & 0xff0);
ata_link_for_each_dev(dev, link)
ata_dev_disable(dev);
}

View file

@ -190,7 +190,7 @@ static ssize_t ata_scsi_park_show(struct device *device,
struct ata_port *ap;
struct ata_link *link;
struct ata_device *dev;
unsigned long flags;
unsigned long flags, now;
unsigned int uninitialized_var(msecs);
int rc = 0;
@ -208,10 +208,11 @@ static ssize_t ata_scsi_park_show(struct device *device,
}
link = dev->link;
now = jiffies;
if (ap->pflags & ATA_PFLAG_EH_IN_PROGRESS &&
link->eh_context.unloaded_mask & (1 << dev->devno) &&
time_after(dev->unpark_deadline, jiffies))
msecs = jiffies_to_msecs(dev->unpark_deadline - jiffies);
time_after(dev->unpark_deadline, now))
msecs = jiffies_to_msecs(dev->unpark_deadline - now);
else
msecs = 0;
@ -708,11 +709,7 @@ static struct ata_queued_cmd *ata_scsi_qc_new(struct ata_device *dev,
{
struct ata_queued_cmd *qc;
if (cmd->request->tag != -1)
qc = ata_qc_new_init(dev, cmd->request->tag);
else
qc = ata_qc_new_init(dev, 0);
qc = ata_qc_new_init(dev);
if (qc) {
qc->scsicmd = cmd;
qc->scsidone = done;
@ -1107,17 +1104,7 @@ static int ata_scsi_dev_config(struct scsi_device *sdev,
depth = min(sdev->host->can_queue, ata_id_queue_depth(dev->id));
depth = min(ATA_MAX_QUEUE - 1, depth);
/*
* If this device is behind a port multiplier, we have
* to share the tag map between all devices on that PMP.
* Set up the shared tag map here and we get automatic.
*/
if (dev->link->ap->pmp_link)
scsi_init_shared_tag_map(sdev->host, ATA_MAX_QUEUE - 1);
scsi_set_tag_type(sdev, MSG_SIMPLE_TAG);
scsi_activate_tcq(sdev, depth);
scsi_adjust_queue_depth(sdev, MSG_SIMPLE_TAG, depth);
}
return 0;
@ -1957,11 +1944,6 @@ static unsigned int ata_scsiop_inq_std(struct ata_scsi_args *args, u8 *rbuf)
hdr[1] |= (1 << 7);
memcpy(rbuf, hdr, sizeof(hdr));
/* if ncq, set tags supported */
if (ata_id_has_ncq(args->id))
rbuf[7] |= (1 << 1);
memcpy(&rbuf[8], "ATA ", 8);
ata_id_string(args->id, &rbuf[16], ATA_ID_PROD, 16);
ata_id_string(args->id, &rbuf[32], ATA_ID_FW_REV, 4);

View file

@ -74,7 +74,7 @@ extern struct ata_link *ata_dev_phys_link(struct ata_device *dev);
extern void ata_force_cbl(struct ata_port *ap);
extern u64 ata_tf_to_lba(const struct ata_taskfile *tf);
extern u64 ata_tf_to_lba48(const struct ata_taskfile *tf);
extern struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev, int tag);
extern struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev);
extern int ata_build_rw_tf(struct ata_taskfile *tf, struct ata_device *dev,
u64 block, u32 n_block, unsigned int tf_flags,
unsigned int tag);
@ -103,6 +103,7 @@ extern int ata_dev_configure(struct ata_device *dev);
extern int sata_down_spd_limit(struct ata_link *link);
extern int ata_down_xfermask_limit(struct ata_device *dev, unsigned int sel);
extern void ata_sg_clean(struct ata_queued_cmd *qc);
extern void ata_qc_free(struct ata_queued_cmd *qc);
extern void ata_qc_issue(struct ata_queued_cmd *qc);
extern void __ata_qc_complete(struct ata_queued_cmd *qc);
extern int atapi_check_dma(struct ata_queued_cmd *qc);
@ -118,22 +119,6 @@ extern struct ata_port *ata_port_alloc(struct ata_host *host);
extern void ata_dev_enable_pm(struct ata_device *dev, enum link_pm policy);
extern void ata_lpm_schedule(struct ata_port *ap, enum link_pm);
/**
* ata_qc_free - free unused ata_queued_cmd
* @qc: Command to complete
*
* Designed to free unused ata_queued_cmd object
* in case something prevents using it.
*
* LOCKING:
* spin_lock_irqsave(host lock)
*/
static inline void ata_qc_free(struct ata_queued_cmd *qc)
{
qc->flags = 0;
qc->tag = ATA_TAG_POISON;
}
/* libata-acpi.c */
#ifdef CONFIG_ATA_ACPI
extern void ata_acpi_associate_sata_port(struct ata_port *ap);

View file

@ -307,10 +307,10 @@ static int nv_scr_write(struct ata_link *link, unsigned int sc_reg, u32 val);
static void nv_nf2_freeze(struct ata_port *ap);
static void nv_nf2_thaw(struct ata_port *ap);
static int nv_nf2_hardreset(struct ata_link *link, unsigned int *class,
unsigned long deadline);
static void nv_ck804_freeze(struct ata_port *ap);
static void nv_ck804_thaw(struct ata_port *ap);
static int nv_hardreset(struct ata_link *link, unsigned int *class,
unsigned long deadline);
static int nv_adma_slave_config(struct scsi_device *sdev);
static int nv_adma_check_atapi_dma(struct ata_queued_cmd *qc);
static void nv_adma_qc_prep(struct ata_queued_cmd *qc);
@ -405,17 +405,8 @@ static struct scsi_host_template nv_swncq_sht = {
.slave_configure = nv_swncq_slave_config,
};
/* OSDL bz3352 reports that some nv controllers can't determine device
* signature reliably and nv_hardreset is implemented to work around
* the problem. This was reported on nf3 and it's unclear whether any
* other controllers are affected. However, the workaround has been
* applied to all variants and there isn't much to gain by trying to
* find out exactly which ones are affected at this point especially
* because NV has moved over to ahci for newer controllers.
*/
static struct ata_port_operations nv_common_ops = {
.inherits = &ata_bmdma_port_ops,
.hardreset = nv_hardreset,
.scr_read = nv_scr_read,
.scr_write = nv_scr_write,
};
@ -429,12 +420,22 @@ static struct ata_port_operations nv_generic_ops = {
.hardreset = ATA_OP_NULL,
};
/* OSDL bz3352 reports that nf2/3 controllers can't determine device
* signature reliably. Also, the following thread reports detection
* failure on cold boot with the standard debouncing timing.
*
* http://thread.gmane.org/gmane.linux.ide/34098
*
* Debounce with hotplug timing and request follow-up SRST.
*/
static struct ata_port_operations nv_nf2_ops = {
.inherits = &nv_common_ops,
.freeze = nv_nf2_freeze,
.thaw = nv_nf2_thaw,
.hardreset = nv_nf2_hardreset,
};
/* CK804 finally gets hardreset right */
static struct ata_port_operations nv_ck804_ops = {
.inherits = &nv_common_ops,
.freeze = nv_ck804_freeze,
@ -443,7 +444,7 @@ static struct ata_port_operations nv_ck804_ops = {
};
static struct ata_port_operations nv_adma_ops = {
.inherits = &nv_common_ops,
.inherits = &nv_ck804_ops,
.check_atapi_dma = nv_adma_check_atapi_dma,
.sff_tf_read = nv_adma_tf_read,
@ -467,7 +468,7 @@ static struct ata_port_operations nv_adma_ops = {
};
static struct ata_port_operations nv_swncq_ops = {
.inherits = &nv_common_ops,
.inherits = &nv_generic_ops,
.qc_defer = ata_std_qc_defer,
.qc_prep = nv_swncq_qc_prep,
@ -1553,6 +1554,17 @@ static void nv_nf2_thaw(struct ata_port *ap)
iowrite8(mask, scr_addr + NV_INT_ENABLE);
}
static int nv_nf2_hardreset(struct ata_link *link, unsigned int *class,
unsigned long deadline)
{
bool online;
int rc;
rc = sata_link_hardreset(link, sata_deb_timing_hotplug, deadline,
&online, NULL);
return online ? -EAGAIN : rc;
}
static void nv_ck804_freeze(struct ata_port *ap)
{
void __iomem *mmio_base = ap->host->iomap[NV_MMIO_BAR];
@ -1605,21 +1617,6 @@ static void nv_mcp55_thaw(struct ata_port *ap)
ata_sff_thaw(ap);
}
static int nv_hardreset(struct ata_link *link, unsigned int *class,
unsigned long deadline)
{
int rc;
/* SATA hardreset fails to retrieve proper device signature on
* some controllers. Request follow up SRST. For more info,
* see http://bugzilla.kernel.org/show_bug.cgi?id=3352
*/
rc = sata_sff_hardreset(link, class, deadline);
if (rc)
return rc;
return -EAGAIN;
}
static void nv_adma_error_handler(struct ata_port *ap)
{
struct nv_adma_port_priv *pp = ap->private_data;

View file

@ -153,6 +153,10 @@ static void pdc_freeze(struct ata_port *ap);
static void pdc_sata_freeze(struct ata_port *ap);
static void pdc_thaw(struct ata_port *ap);
static void pdc_sata_thaw(struct ata_port *ap);
static int pdc_pata_softreset(struct ata_link *link, unsigned int *class,
unsigned long deadline);
static int pdc_sata_hardreset(struct ata_link *link, unsigned int *class,
unsigned long deadline);
static void pdc_error_handler(struct ata_port *ap);
static void pdc_post_internal_cmd(struct ata_queued_cmd *qc);
static int pdc_pata_cable_detect(struct ata_port *ap);
@ -186,6 +190,7 @@ static struct ata_port_operations pdc_sata_ops = {
.scr_read = pdc_sata_scr_read,
.scr_write = pdc_sata_scr_write,
.port_start = pdc_sata_port_start,
.hardreset = pdc_sata_hardreset,
};
/* First-generation chips need a more restrictive ->check_atapi_dma op */
@ -200,6 +205,7 @@ static struct ata_port_operations pdc_pata_ops = {
.freeze = pdc_freeze,
.thaw = pdc_thaw,
.port_start = pdc_common_port_start,
.softreset = pdc_pata_softreset,
};
static const struct ata_port_info pdc_port_info[] = {
@ -693,6 +699,20 @@ static void pdc_sata_thaw(struct ata_port *ap)
readl(host_mmio + hotplug_offset); /* flush */
}
static int pdc_pata_softreset(struct ata_link *link, unsigned int *class,
unsigned long deadline)
{
pdc_reset_port(link->ap);
return ata_sff_softreset(link, class, deadline);
}
static int pdc_sata_hardreset(struct ata_link *link, unsigned int *class,
unsigned long deadline)
{
pdc_reset_port(link->ap);
return sata_sff_hardreset(link, class, deadline);
}
static void pdc_error_handler(struct ata_port *ap)
{
if (!(ap->pflags & ATA_PFLAG_FROZEN))

View file

@ -602,8 +602,10 @@ static int svia_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
rc = vt8251_prepare_host(pdev, &host);
break;
default:
return -EINVAL;
rc = -EINVAL;
}
if (rc)
return rc;
svia_configure(pdev);

View file

@ -96,6 +96,8 @@ static const struct pci_device_id cciss_pci_device_id[] = {
{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x3245},
{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x3247},
{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x3249},
{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x324A},
{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x324B},
{PCI_VENDOR_ID_HP, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
PCI_CLASS_STORAGE_RAID << 8, 0xffff << 8, 0},
{0,}
@ -133,6 +135,8 @@ static struct board_type products[] = {
{0x3245103C, "Smart Array P410i", &SA5_access},
{0x3247103C, "Smart Array P411", &SA5_access},
{0x3249103C, "Smart Array P812", &SA5_access},
{0x324A103C, "Smart Array P712m", &SA5_access},
{0x324B103C, "Smart Array P711m", &SA5_access},
{0xFFFF103C, "Unknown Smart Array", &SA5_access},
};
@ -1366,6 +1370,7 @@ static void cciss_add_disk(ctlr_info_t *h, struct gendisk *disk,
disk->first_minor = drv_index << NWD_SHIFT;
disk->fops = &cciss_fops;
disk->private_data = &h->drv[drv_index];
disk->driverfs_dev = &h->pdev->dev;
/* Set up queue information */
blk_queue_bounce_limit(disk->queue, h->pdev->dma_mask);
@ -3404,7 +3409,8 @@ static int __devinit cciss_init_one(struct pci_dev *pdev,
int i;
int j = 0;
int rc;
int dac;
int dac, return_code;
InquiryData_struct *inq_buff = NULL;
i = alloc_cciss_hba();
if (i < 0)
@ -3510,6 +3516,25 @@ static int __devinit cciss_init_one(struct pci_dev *pdev,
/* Turn the interrupts on so we can service requests */
hba[i]->access.set_intr_mask(hba[i], CCISS_INTR_ON);
/* Get the firmware version */
inq_buff = kzalloc(sizeof(InquiryData_struct), GFP_KERNEL);
if (inq_buff == NULL) {
printk(KERN_ERR "cciss: out of memory\n");
goto clean4;
}
return_code = sendcmd_withirq(CISS_INQUIRY, i, inq_buff,
sizeof(InquiryData_struct), 0, 0 , 0, TYPE_CMD);
if (return_code == IO_OK) {
hba[i]->firm_ver[0] = inq_buff->data_byte[32];
hba[i]->firm_ver[1] = inq_buff->data_byte[33];
hba[i]->firm_ver[2] = inq_buff->data_byte[34];
hba[i]->firm_ver[3] = inq_buff->data_byte[35];
} else { /* send command failed */
printk(KERN_WARNING "cciss: unable to determine firmware"
" version of controller\n");
}
cciss_procinit(i);
hba[i]->cciss_max_sectors = 2048;
@ -3520,6 +3545,7 @@ static int __devinit cciss_init_one(struct pci_dev *pdev,
return 1;
clean4:
kfree(inq_buff);
#ifdef CONFIG_CISS_SCSI_TAPE
kfree(hba[i]->scsi_rejects.complete);
#endif

View file

@ -567,7 +567,12 @@ static int __init cpqarray_init(void)
num_cntlrs_reg++;
}
return(num_cntlrs_reg);
if (num_cntlrs_reg)
return 0;
else {
pci_unregister_driver(&cpqarray_pci_driver);
return -ENODEV;
}
}
/* Function to find the first free pointer into our hba[] array */


@@ -1644,7 +1644,10 @@ static void reset_terminal(struct vc_data *vc, int do_clear)
vc->vc_tab_stop[1] =
vc->vc_tab_stop[2] =
vc->vc_tab_stop[3] =
vc->vc_tab_stop[4] = 0x01010101;
vc->vc_tab_stop[4] =
vc->vc_tab_stop[5] =
vc->vc_tab_stop[6] =
vc->vc_tab_stop[7] = 0x01010101;
vc->vc_bell_pitch = DEFAULT_BELL_PITCH;
vc->vc_bell_duration = DEFAULT_BELL_DURATION;
@@ -1935,7 +1938,10 @@ static void do_con_trol(struct tty_struct *tty, struct vc_data *vc, int c)
vc->vc_tab_stop[1] =
vc->vc_tab_stop[2] =
vc->vc_tab_stop[3] =
vc->vc_tab_stop[4] = 0;
vc->vc_tab_stop[4] =
vc->vc_tab_stop[5] =
vc->vc_tab_stop[6] =
vc->vc_tab_stop[7] = 0;
}
return;
case 'm':
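
The tab-stop hunks above are easier to read knowing the layout: the console keeps one bit per column, so the five 32-bit words the old code wrote cover only 160 columns, while all eight words cover the full 256-column bitmap; 0x01010101 sets bit 0 of every byte, i.e. a stop every eighth column. A small userspace sketch of the same bit arithmetic (illustrative only, not the vt.c code):

#include <stdio.h>

/* One bit per column, eight 32-bit words = 256 columns.
 * 0x01010101 sets bit 0 of every byte: stops at 0, 8, 16, 24, ...
 */
static unsigned int tab_stop[8] = {
	0x01010101, 0x01010101, 0x01010101, 0x01010101,
	0x01010101, 0x01010101, 0x01010101, 0x01010101,
};

static int is_tab_stop(unsigned int col)
{
	return (tab_stop[col >> 5] >> (col & 31)) & 1;
}

int main(void)
{
	unsigned int col;

	for (col = 0; col < 256; col++)
		if (is_tab_stop(col))
			printf("%u\n", col);	/* prints 0, 8, ..., 248 */
	return 0;
}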


@@ -65,12 +65,14 @@ static void cpuidle_idle_call(void)
return;
}
#if 0
/* shows regressions, re-enable for 2.6.29 */
/*
* run any timers that can be run now, at this point
* before calculating the idle duration etc.
*/
hrtimer_peek_ahead_timers();
#endif
/* ask the governor for the next state */
next_state = cpuidle_curr_governor->select(dev);
if (need_resched())


@@ -587,8 +587,7 @@ static void create_units(struct fw_device *device)
unit->device.bus = &fw_bus_type;
unit->device.type = &fw_unit_type;
unit->device.parent = &device->device;
snprintf(unit->device.bus_id, sizeof(unit->device.bus_id),
"%s.%d", device->device.bus_id, i++);
dev_set_name(&unit->device, "%s.%d", dev_name(&device->device), i++);
init_fw_attribute_group(&unit->device,
fw_unit_attributes,
@@ -711,8 +710,7 @@ static void fw_device_init(struct work_struct *work)
device->device.type = &fw_device_type;
device->device.parent = device->card->device;
device->device.devt = MKDEV(fw_cdev_major, minor);
snprintf(device->device.bus_id, sizeof(device->device.bus_id),
"fw%d", minor);
dev_set_name(&device->device, "fw%d", minor);
init_fw_attribute_group(&device->device,
fw_device_attributes,
@@ -741,13 +739,13 @@ static void fw_device_init(struct work_struct *work)
if (device->config_rom_retries)
fw_notify("created device %s: GUID %08x%08x, S%d00, "
"%d config ROM retries\n",
device->device.bus_id,
dev_name(&device->device),
device->config_rom[3], device->config_rom[4],
1 << device->max_speed,
device->config_rom_retries);
else
fw_notify("created device %s: GUID %08x%08x, S%d00\n",
device->device.bus_id,
dev_name(&device->device),
device->config_rom[3], device->config_rom[4],
1 << device->max_speed);
device->config_rom_retries = 0;
@@ -883,12 +881,12 @@ static void fw_device_refresh(struct work_struct *work)
FW_DEVICE_RUNNING) == FW_DEVICE_SHUTDOWN)
goto gone;
fw_notify("refreshed device %s\n", device->device.bus_id);
fw_notify("refreshed device %s\n", dev_name(&device->device));
device->config_rom_retries = 0;
goto out;
give_up:
fw_notify("giving up on refresh of device %s\n", device->device.bus_id);
fw_notify("giving up on refresh of device %s\n", dev_name(&device->device));
gone:
atomic_set(&device->state, FW_DEVICE_SHUTDOWN);
fw_device_shutdown(work);
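
The bus_id conversions in this and several later hunks share one motive: stop writing into the fixed-size bus_id[] array with snprintf() and go through the dev_set_name()/dev_name() accessors instead, which is what let the driver core move the name to dynamically sized storage. A rough userspace model of the accessor pair (stand-in code, not the real driver-core implementation):

#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for struct device: the name lives behind an accessor
 * instead of a fixed-size array, so callers can neither overflow it
 * nor depend on its storage.
 */
struct device {
	char *name;
};

static int dev_set_name(struct device *dev, const char *fmt, ...)
{
	va_list ap;
	int n;

	va_start(ap, fmt);
	n = vsnprintf(NULL, 0, fmt, ap);	/* measure first */
	va_end(ap);
	if (n < 0)
		return -1;

	free(dev->name);
	dev->name = malloc((size_t)n + 1);
	if (!dev->name)
		return -1;

	va_start(ap, fmt);
	vsnprintf(dev->name, (size_t)n + 1, fmt, ap);
	va_end(ap);
	return 0;
}

static const char *dev_name(const struct device *dev)
{
	return dev->name;
}

int main(void)
{
	struct device d = { NULL };

	dev_set_name(&d, "fw%d", 3);
	printf("%s\n", dev_name(&d));		/* "fw3" */
	free(d.name);
	return 0;
}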


@@ -2468,7 +2468,7 @@ pci_probe(struct pci_dev *dev, const struct pci_device_id *ent)
goto fail_self_id;
fw_notify("Added fw-ohci device %s, OHCI version %x.%x\n",
dev->dev.bus_id, version >> 16, version & 0xff);
dev_name(&dev->dev), version >> 16, version & 0xff);
return 0;
fail_self_id:


@@ -1135,7 +1135,7 @@ static int sbp2_probe(struct device *dev)
tgt->unit = unit;
kref_init(&tgt->kref);
INIT_LIST_HEAD(&tgt->lu_list);
tgt->bus_id = unit->device.bus_id;
tgt->bus_id = dev_name(&unit->device);
tgt->guid = (u64)device->config_rom[3] << 32 | device->config_rom[4];
if (fw_device_enable_phys_dma(device) < 0)


@@ -81,9 +81,9 @@ static void dmi_table(u8 *buf, int len, int num,
const struct dmi_header *dm = (const struct dmi_header *)data;
/*
* We want to know the total length (formated area and strings)
* before decoding to make sure we won't run off the table in
* dmi_decode or dmi_string
* We want to know the total length (formatted area and
* strings) before decoding to make sure we won't run off the
* table in dmi_decode or dmi_string
*/
data += dm->length;
while ((data - buf < len - 1) && (data[0] || data[1]))


@@ -116,6 +116,18 @@ static const char* temperature_sensors_sets[][36] = {
/* Set 9: Macbook Pro 3,1 (Santa Rosa) */
{ "TALP", "TB0T", "TC0D", "TC0P", "TG0D", "TG0H", "TTF0", "TW0P",
"Th0H", "Th1H", "Th2H", "Tm0P", "Ts0P", NULL },
/* Set 10: iMac 5,1 */
{ "TA0P", "TC0D", "TC0P", "TG0D", "TH0P", "TO0P", "Tm0P", NULL },
/* Set 11: Macbook 5,1 */
{ "TB0T", "TB1T", "TB2T", "TB3T", "TC0D", "TC0P", "TN0D", "TN0P",
"TTF0", "Th0H", "Th1H", "ThFH", "Ts0P", "Ts0S", NULL },
/* Set 12: Macbook Pro 5,1 */
{ "TB0T", "TB1T", "TB2T", "TB3T", "TC0D", "TC0F", "TC0P", "TG0D",
"TG0F", "TG0H", "TG0P", "TG0T", "TG1H", "TN0D", "TN0P", "TTF0",
"Th2H", "Tm0P", "Ts0P", "Ts0S", NULL },
/* Set 13: iMac 8,1 */
{ "TA0P", "TC0D", "TC0H", "TC0P", "TG0D", "TG0H", "TG0P", "TH0P",
"TL0P", "TO0P", "TW0P", "Tm0P", "Tp0P", NULL },
};
/* List of keys used to read/write fan speeds */
@@ -1276,6 +1288,14 @@ static __initdata struct dmi_match_data applesmc_dmi_data[] = {
{ .accelerometer = 1, .light = 1, .temperature_set = 8 },
/* MacBook Pro 3: accelerometer, backlight and temperature set 9 */
{ .accelerometer = 1, .light = 1, .temperature_set = 9 },
/* iMac 5: light sensor only, temperature set 10 */
{ .accelerometer = 0, .light = 0, .temperature_set = 10 },
/* MacBook 5: accelerometer, backlight and temperature set 11 */
{ .accelerometer = 1, .light = 1, .temperature_set = 11 },
/* MacBook Pro 5: accelerometer, backlight and temperature set 12 */
{ .accelerometer = 1, .light = 1, .temperature_set = 12 },
/* iMac 8: light sensor only, temperature set 13 */
{ .accelerometer = 0, .light = 0, .temperature_set = 13 },
};
/* Note that DMI_MATCH(...,"MacBook") will match "MacBookPro1,1".
@@ -1285,6 +1305,10 @@ static __initdata struct dmi_system_id applesmc_whitelist[] = {
DMI_MATCH(DMI_BOARD_VENDOR, "Apple"),
DMI_MATCH(DMI_PRODUCT_NAME, "MacBookAir") },
&applesmc_dmi_data[7]},
{ applesmc_dmi_match, "Apple MacBook Pro 5", {
DMI_MATCH(DMI_BOARD_VENDOR, "Apple"),
DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro5") },
&applesmc_dmi_data[12]},
{ applesmc_dmi_match, "Apple MacBook Pro 4", {
DMI_MATCH(DMI_BOARD_VENDOR, "Apple"),
DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro4") },
@@ -1305,6 +1329,10 @@ static __initdata struct dmi_system_id applesmc_whitelist[] = {
DMI_MATCH(DMI_BOARD_VENDOR,"Apple"),
DMI_MATCH(DMI_PRODUCT_NAME,"MacBook3") },
&applesmc_dmi_data[6]},
{ applesmc_dmi_match, "Apple MacBook 5", {
DMI_MATCH(DMI_BOARD_VENDOR, "Apple"),
DMI_MATCH(DMI_PRODUCT_NAME, "MacBook5") },
&applesmc_dmi_data[11]},
{ applesmc_dmi_match, "Apple MacBook", {
DMI_MATCH(DMI_BOARD_VENDOR,"Apple"),
DMI_MATCH(DMI_PRODUCT_NAME,"MacBook") },
@@ -1317,6 +1345,14 @@ static __initdata struct dmi_system_id applesmc_whitelist[] = {
DMI_MATCH(DMI_BOARD_VENDOR,"Apple"),
DMI_MATCH(DMI_PRODUCT_NAME,"MacPro2") },
&applesmc_dmi_data[4]},
{ applesmc_dmi_match, "Apple iMac 8", {
DMI_MATCH(DMI_BOARD_VENDOR, "Apple"),
DMI_MATCH(DMI_PRODUCT_NAME, "iMac8") },
&applesmc_dmi_data[13]},
{ applesmc_dmi_match, "Apple iMac 5", {
DMI_MATCH(DMI_BOARD_VENDOR, "Apple"),
DMI_MATCH(DMI_PRODUCT_NAME, "iMac5") },
&applesmc_dmi_data[10]},
{ applesmc_dmi_match, "Apple iMac", {
DMI_MATCH(DMI_BOARD_VENDOR,"Apple"),
DMI_MATCH(DMI_PRODUCT_NAME,"iMac") },


@@ -1270,8 +1270,14 @@ static int dv1394_mmap(struct file *file, struct vm_area_struct *vma)
struct video_card *video = file_to_video_card(file);
int retval = -EINVAL;
/* serialize mmap */
mutex_lock(&video->mtx);
/*
* We cannot use the blocking variant mutex_lock here because .mmap
* is called with mmap_sem held, while .ioctl, .read, .write acquire
* video->mtx and subsequently call copy_to/from_user which will
* grab mmap_sem in case of a page fault.
*/
if (!mutex_trylock(&video->mtx))
return -EAGAIN;
if ( ! video_card_initialized(video) ) {
retval = do_dv1394_init_default(video);
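
The comment in this hunk describes an ABBA inversion: .mmap is entered with mmap_sem already held and would take video->mtx, while the ioctl/read/write paths hold video->mtx and can take mmap_sem inside a copy_to_user()/copy_from_user() page fault. Backing off with -EAGAIN breaks the cycle; the raw1394 hunks further down apply the same trylock treatment. A toy pthread model of the two lock orders (hypothetical names, build with -lpthread):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

/* A = mmap_sem, B = the driver mutex.  The ioctl path takes B, then
 * A on a page fault.  The mmap path enters with A already held, so
 * blocking on B here could deadlock; trylock + -EAGAIN backs off.
 */
static pthread_mutex_t mmap_sem = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t driver_mtx = PTHREAD_MUTEX_INITIALIZER;

static int mmap_handler(void)
{
	if (pthread_mutex_trylock(&driver_mtx) != 0)
		return -EAGAIN;
	/* ... set up the mapping ... */
	pthread_mutex_unlock(&driver_mtx);
	return 0;
}

int main(void)
{
	pthread_mutex_lock(&mmap_sem);	/* the VM holds this around .mmap */
	printf("mmap_handler() = %d\n", mmap_handler());
	pthread_mutex_unlock(&mmap_sem);
	return 0;
}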


@@ -155,11 +155,11 @@ struct hpsb_host *hpsb_alloc_host(struct hpsb_host_driver *drv, size_t extra,
memcpy(&h->device, &nodemgr_dev_template_host, sizeof(h->device));
h->device.parent = dev;
set_dev_node(&h->device, dev_to_node(dev));
snprintf(h->device.bus_id, BUS_ID_SIZE, "fw-host%d", h->id);
dev_set_name(&h->device, "fw-host%d", h->id);
h->host_dev.parent = &h->device;
h->host_dev.class = &hpsb_host_class;
snprintf(h->host_dev.bus_id, BUS_ID_SIZE, "fw-host%d", h->id);
dev_set_name(&h->host_dev, "fw-host%d", h->id);
if (device_register(&h->device))
goto fail;


@@ -826,13 +826,11 @@ static struct node_entry *nodemgr_create_node(octlet_t guid,
memcpy(&ne->device, &nodemgr_dev_template_ne,
sizeof(ne->device));
ne->device.parent = &host->device;
snprintf(ne->device.bus_id, BUS_ID_SIZE, "%016Lx",
(unsigned long long)(ne->guid));
dev_set_name(&ne->device, "%016Lx", (unsigned long long)(ne->guid));
ne->node_dev.parent = &ne->device;
ne->node_dev.class = &nodemgr_ne_class;
snprintf(ne->node_dev.bus_id, BUS_ID_SIZE, "%016Lx",
(unsigned long long)(ne->guid));
dev_set_name(&ne->node_dev, "%016Lx", (unsigned long long)(ne->guid));
if (device_register(&ne->device))
goto fail_devreg;
@@ -932,13 +930,11 @@ static void nodemgr_register_device(struct node_entry *ne,
ud->device.parent = parent;
snprintf(ud->device.bus_id, BUS_ID_SIZE, "%s-%u",
ne->device.bus_id, ud->id);
dev_set_name(&ud->device, "%s-%u", dev_name(&ne->device), ud->id);
ud->unit_dev.parent = &ud->device;
ud->unit_dev.class = &nodemgr_ud_class;
snprintf(ud->unit_dev.bus_id, BUS_ID_SIZE, "%s-%u",
ne->device.bus_id, ud->id);
dev_set_name(&ud->unit_dev, "%s-%u", dev_name(&ne->device), ud->id);
if (device_register(&ud->device))
goto fail_devreg;
@@ -953,7 +949,7 @@ static void nodemgr_register_device(struct node_entry *ne,
fail_classdevreg:
device_unregister(&ud->device);
fail_devreg:
HPSB_ERR("Failed to create unit %s", ud->device.bus_id);
HPSB_ERR("Failed to create unit %s", dev_name(&ud->device));
}


@@ -2268,7 +2268,8 @@ static ssize_t raw1394_write(struct file *file, const char __user * buffer,
return -EFAULT;
}
mutex_lock(&fi->state_mutex);
if (!mutex_trylock(&fi->state_mutex))
return -EAGAIN;
switch (fi->state) {
case opened:
@@ -2548,7 +2549,8 @@ static int raw1394_mmap(struct file *file, struct vm_area_struct *vma)
struct file_info *fi = file->private_data;
int ret;
mutex_lock(&fi->state_mutex);
if (!mutex_trylock(&fi->state_mutex))
return -EAGAIN;
if (fi->iso_state == RAW1394_ISO_INACTIVE)
ret = -EINVAL;
@@ -2669,7 +2671,8 @@ static long raw1394_ioctl(struct file *file, unsigned int cmd,
break;
}
mutex_lock(&fi->state_mutex);
if (!mutex_trylock(&fi->state_mutex))
return -EAGAIN;
switch (fi->iso_state) {
case RAW1394_ISO_INACTIVE:


@@ -148,6 +148,8 @@ static linear_conf_t *linear_conf(mddev_t *mddev, int raid_disks)
min_sectors = conf->array_sectors;
sector_div(min_sectors, PAGE_SIZE/sizeof(struct dev_info *));
if (min_sectors == 0)
min_sectors = 1;
/* min_sectors is the minimum spacing that will fit the hash
* table in one PAGE. This may be much smaller than needed.


@@ -3884,7 +3884,6 @@ static int do_md_stop(mddev_t * mddev, int mode, int is_open)
if (mode == 0) {
mdk_rdev_t *rdev;
struct list_head *tmp;
struct block_device *bdev;
printk(KERN_INFO "md: %s stopped.\n", mdname(mddev));
@@ -3941,11 +3940,6 @@ static int do_md_stop(mddev_t * mddev, int mode, int is_open)
mddev->degraded = 0;
mddev->barriers_work = 0;
mddev->safemode = 0;
bdev = bdget_disk(mddev->gendisk, 0);
if (bdev) {
blkdev_ioctl(bdev, 0, BLKRRPART, 0);
bdput(bdev);
}
kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
} else if (mddev->pers)


@@ -1137,7 +1137,7 @@ static int raid10_add_disk(mddev_t *mddev, mdk_rdev_t *rdev)
if (!enough(conf))
return -EINVAL;
if (rdev->raid_disk)
if (rdev->raid_disk >= 0)
first = last = rdev->raid_disk;
if (rdev->saved_raid_disk >= 0 &&


@@ -77,12 +77,6 @@ MODULE_VERSION(my_VERSION);
* Fusion MPT LAN private structures
*/
struct NAA_Hosed {
u16 NAA;
u8 ieee[FC_ALEN];
struct NAA_Hosed *next;
};
struct BufferControl {
struct sk_buff *skb;
dma_addr_t dma;
@@ -159,11 +153,6 @@ static u8 LanCtx = MPT_MAX_PROTOCOL_DRIVERS;
static u32 max_buckets_out = 127;
static u32 tx_max_out_p = 127 - 16;
#ifdef QLOGIC_NAA_WORKAROUND
static struct NAA_Hosed *mpt_bad_naa = NULL;
DEFINE_RWLOCK(bad_naa_lock);
#endif
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
/**
* lan_reply - Handle all data sent from the hardware.
@@ -780,30 +769,6 @@ mpt_lan_sdu_send (struct sk_buff *skb, struct net_device *dev)
// ctx, skb, skb->data));
mac = skb_mac_header(skb);
#ifdef QLOGIC_NAA_WORKAROUND
{
struct NAA_Hosed *nh;
/* Munge the NAA for Tx packets to QLogic boards, which don't follow
RFC 2625. The longer I look at this, the more my opinion of Qlogic
drops. */
read_lock_irq(&bad_naa_lock);
for (nh = mpt_bad_naa; nh != NULL; nh=nh->next) {
if ((nh->ieee[0] == mac[0]) &&
(nh->ieee[1] == mac[1]) &&
(nh->ieee[2] == mac[2]) &&
(nh->ieee[3] == mac[3]) &&
(nh->ieee[4] == mac[4]) &&
(nh->ieee[5] == mac[5])) {
cur_naa = nh->NAA;
dlprintk ((KERN_INFO "mptlan/sdu_send: using NAA value "
"= %04x.\n", cur_naa));
break;
}
}
read_unlock_irq(&bad_naa_lock);
}
#endif
pTrans->TransactionDetails[0] = cpu_to_le32((cur_naa << 16) |
(mac[0] << 8) |
@@ -1572,79 +1537,6 @@ mpt_lan_type_trans(struct sk_buff *skb, struct net_device *dev)
fcllc = (struct fcllc *)skb->data;
#ifdef QLOGIC_NAA_WORKAROUND
{
u16 source_naa = fch->stype, found = 0;
/* Workaround for QLogic not following RFC 2625 in regards to the NAA
value. */
if ((source_naa & 0xF000) == 0)
source_naa = swab16(source_naa);
if (fcllc->ethertype == htons(ETH_P_ARP))
dlprintk ((KERN_INFO "mptlan/type_trans: got arp req/rep w/ naa of "
"%04x.\n", source_naa));
if ((fcllc->ethertype == htons(ETH_P_ARP)) &&
((source_naa >> 12) != MPT_LAN_NAA_RFC2625)){
struct NAA_Hosed *nh, *prevnh;
int i;
dlprintk ((KERN_INFO "mptlan/type_trans: ARP Req/Rep from "
"system with non-RFC 2625 NAA value (%04x).\n",
source_naa));
write_lock_irq(&bad_naa_lock);
for (prevnh = nh = mpt_bad_naa; nh != NULL;
prevnh=nh, nh=nh->next) {
if ((nh->ieee[0] == fch->saddr[0]) &&
(nh->ieee[1] == fch->saddr[1]) &&
(nh->ieee[2] == fch->saddr[2]) &&
(nh->ieee[3] == fch->saddr[3]) &&
(nh->ieee[4] == fch->saddr[4]) &&
(nh->ieee[5] == fch->saddr[5])) {
found = 1;
dlprintk ((KERN_INFO "mptlan/type_trans: ARP Re"
"q/Rep w/ bad NAA from system already"
" in DB.\n"));
break;
}
}
if ((!found) && (nh == NULL)) {
nh = kmalloc(sizeof(struct NAA_Hosed), GFP_KERNEL);
dlprintk ((KERN_INFO "mptlan/type_trans: ARP Req/Rep w/"
" bad NAA from system not yet in DB.\n"));
if (nh != NULL) {
nh->next = NULL;
if (!mpt_bad_naa)
mpt_bad_naa = nh;
if (prevnh)
prevnh->next = nh;
nh->NAA = source_naa; /* Set the S_NAA value. */
for (i = 0; i < FC_ALEN; i++)
nh->ieee[i] = fch->saddr[i];
dlprintk ((KERN_INFO "Got ARP from %02x:%02x:%02x:%02x:"
"%02x:%02x with non-compliant S_NAA value.\n",
fch->saddr[0], fch->saddr[1], fch->saddr[2],
fch->saddr[3], fch->saddr[4],fch->saddr[5]));
} else {
printk (KERN_ERR "mptlan/type_trans: Unable to"
" kmalloc a NAA_Hosed struct.\n");
}
} else if (!found) {
printk (KERN_ERR "mptlan/type_trans: found not"
" set, but nh isn't null. Evil "
"funkiness abounds.\n");
}
write_unlock_irq(&bad_naa_lock);
}
}
#endif
/* Strip the SNAP header from ARP packets since we don't
* pass them through to the 802.2/SNAP layers.


@@ -216,8 +216,7 @@ int mmc_add_card(struct mmc_card *card)
int ret;
const char *type;
snprintf(card->dev.bus_id, sizeof(card->dev.bus_id),
"%s:%04x", mmc_hostname(card->host), card->rca);
dev_set_name(&card->dev, "%s:%04x", mmc_hostname(card->host), card->rca);
switch (card->type) {
case MMC_TYPE_MMC:


@@ -280,7 +280,11 @@ void mmc_set_data_timeout(struct mmc_data *data, const struct mmc_card *card)
(card->host->ios.clock / 1000);
if (data->flags & MMC_DATA_WRITE)
limit_us = 250000;
/*
* The limit is really 250 ms, but that is
* insufficient for some crappy cards.
*/
limit_us = 300000;
else
limit_us = 100000;
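
For context, the timeout being clamped here is derived from the card's CSD parameters (a nanosecond part plus a clock-count part) and then capped per transfer direction; the hunk raises the write cap because the nominal 250 ms spec limit is not enough for some cards. A simplified model of the arithmetic (illustrative names, not the real struct mmc_data layout):

#include <stdio.h>

/* timeout_ns and timeout_clks come from the card's CSD; clock_hz is
 * the bus clock.  Names here are illustrative.
 */
static unsigned int data_timeout_us(unsigned int timeout_ns,
				    unsigned int timeout_clks,
				    unsigned int clock_hz, int is_write)
{
	unsigned int us = timeout_ns / 1000;

	us += timeout_clks * 1000 / (clock_hz / 1000);

	/* 250 ms is the nominal limit for writes, but some cards need
	 * more, hence the 300 ms cap; reads are capped at 100 ms.
	 */
	if (is_write && us > 300000)
		us = 300000;
	else if (!is_write && us > 100000)
		us = 100000;
	return us;
}

int main(void)
{
	/* e.g. 80 ms + 1000 clocks at 25 MHz -> 80040 us */
	printf("%u us\n", data_timeout_us(80000000, 1000, 25000000, 1));
	return 0;
}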


@@ -73,8 +73,7 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
if (err)
goto free;
snprintf(host->class_dev.bus_id, BUS_ID_SIZE,
"mmc%d", host->index);
dev_set_name(&host->class_dev, "mmc%d", host->index);
host->parent = dev;
host->class_dev.parent = dev;
@@ -121,7 +120,7 @@ int mmc_add_host(struct mmc_host *host)
WARN_ON((host->caps & MMC_CAP_SDIO_IRQ) &&
!host->ops->enable_sdio_irq);
led_trigger_register_simple(host->class_dev.bus_id, &host->led);
led_trigger_register_simple(dev_name(&host->class_dev), &host->led);
err = device_add(&host->class_dev);
if (err)


@@ -239,8 +239,7 @@ int sdio_add_func(struct sdio_func *func)
{
int ret;
snprintf(func->dev.bus_id, sizeof(func->dev.bus_id),
"%s:%d", mmc_card_id(func->card), func->num);
dev_set_name(&func->dev, "%s:%d", mmc_card_id(func->card), func->num);
ret = device_add(&func->dev);
if (ret == 0)


@@ -1348,7 +1348,7 @@ static int mmc_spi_probe(struct spi_device *spi)
goto fail_add_host;
dev_info(&spi->dev, "SD/MMC host %s%s%s%s%s\n",
mmc->class_dev.bus_id,
dev_name(&mmc->class_dev),
host->dma_dev ? "" : ", no DMA",
(host->pdata && host->pdata->get_ro)
? "" : ", no WP",


@@ -1733,7 +1733,7 @@ int sdhci_add_host(struct sdhci_host *host)
mmc_add_host(mmc);
printk(KERN_INFO "%s: SDHCI controller on %s [%s] using %s%s\n",
mmc_hostname(mmc), host->hw_name, mmc_dev(mmc)->bus_id,
mmc_hostname(mmc), host->hw_name, dev_name(mmc_dev(mmc)),
(host->flags & SDHCI_USE_ADMA)?"A":"",
(host->flags & SDHCI_USE_DMA)?"DMA":"PIO");


@@ -632,7 +632,7 @@ static void tifm_sd_request(struct mmc_host *mmc, struct mmc_request *mrq)
if (host->req) {
printk(KERN_ERR "%s : unfinished request detected\n",
sock->dev.bus_id);
dev_name(&sock->dev));
mrq->cmd->error = -ETIMEDOUT;
goto err_out;
}
@@ -672,7 +672,7 @@ static void tifm_sd_request(struct mmc_host *mmc, struct mmc_request *mrq)
? PCI_DMA_TODEVICE
: PCI_DMA_FROMDEVICE)) {
printk(KERN_ERR "%s : scatterlist map failed\n",
sock->dev.bus_id);
dev_name(&sock->dev));
mrq->cmd->error = -ENOMEM;
goto err_out;
}
@@ -684,7 +684,7 @@ static void tifm_sd_request(struct mmc_host *mmc, struct mmc_request *mrq)
: PCI_DMA_FROMDEVICE);
if (host->sg_len < 1) {
printk(KERN_ERR "%s : scatterlist map failed\n",
sock->dev.bus_id);
dev_name(&sock->dev));
tifm_unmap_sg(sock, &host->bounce_buf, 1,
r_data->flags & MMC_DATA_WRITE
? PCI_DMA_TODEVICE
@@ -748,7 +748,7 @@ static void tifm_sd_end_cmd(unsigned long data)
if (!mrq) {
printk(KERN_ERR " %s : no request to complete?\n",
sock->dev.bus_id);
dev_name(&sock->dev));
spin_unlock_irqrestore(&sock->lock, flags);
return;
}
@@ -789,7 +789,7 @@ static void tifm_sd_abort(unsigned long data)
printk(KERN_ERR
"%s : card failed to respond for a long period of time "
"(%x, %x)\n",
host->dev->dev.bus_id, host->req->cmd->opcode, host->cmd_flags);
dev_name(&host->dev->dev), host->req->cmd->opcode, host->cmd_flags);
tifm_eject(host->dev);
}
@@ -906,7 +906,7 @@ static int tifm_sd_initialize_host(struct tifm_sd *host)
if (rc) {
printk(KERN_ERR "%s : controller failed to reset\n",
sock->dev.bus_id);
dev_name(&sock->dev));
return -ENODEV;
}
@@ -933,7 +933,7 @@ static int tifm_sd_initialize_host(struct tifm_sd *host)
if (rc) {
printk(KERN_ERR
"%s : card not ready - probe failed on initialization\n",
sock->dev.bus_id);
dev_name(&sock->dev));
return -ENODEV;
}
@@ -954,7 +954,7 @@ static int tifm_sd_probe(struct tifm_dev *sock)
if (!(TIFM_SOCK_STATE_OCCUPIED
& readl(sock->addr + SOCK_PRESENT_STATE))) {
printk(KERN_WARNING "%s : card gone, unexpectedly\n",
sock->dev.bus_id);
dev_name(&sock->dev));
return rc;
}


@@ -406,19 +406,6 @@ struct mtd_info *cfi_cmdset_0002(struct map_info *map, int primary)
/* Set the default CFI lock/unlock addresses */
cfi->addr_unlock1 = 0x555;
cfi->addr_unlock2 = 0x2aa;
/* Modify the unlock address if we are in compatibility mode */
if ( /* x16 in x8 mode */
((cfi->device_type == CFI_DEVICETYPE_X8) &&
(cfi->cfiq->InterfaceDesc ==
CFI_INTERFACE_X8_BY_X16_ASYNC)) ||
/* x32 in x16 mode */
((cfi->device_type == CFI_DEVICETYPE_X16) &&
(cfi->cfiq->InterfaceDesc ==
CFI_INTERFACE_X16_BY_X32_ASYNC)))
{
cfi->addr_unlock1 = 0xaaa;
cfi->addr_unlock2 = 0x555;
}
} /* CFI mode */
else if (cfi->cfi_mode == CFI_MODE_JEDEC) {


@@ -1808,9 +1808,7 @@ static inline u32 jedec_read_mfr(struct map_info *map, uint32_t base,
* several first banks can contain 0x7f instead of actual ID
*/
do {
uint32_t ofs = cfi_build_cmd_addr(0 + (bank << 8),
cfi_interleave(cfi),
cfi->device_type);
uint32_t ofs = cfi_build_cmd_addr(0 + (bank << 8), map, cfi);
mask = (1 << (cfi->device_type * 8)) - 1;
result = map_read(map, base + ofs);
bank++;
@@ -1824,7 +1822,7 @@ static inline u32 jedec_read_id(struct map_info *map, uint32_t base,
{
map_word result;
unsigned long mask;
u32 ofs = cfi_build_cmd_addr(1, cfi_interleave(cfi), cfi->device_type);
u32 ofs = cfi_build_cmd_addr(1, map, cfi);
mask = (1 << (cfi->device_type * 8)) -1;
result = map_read(map, base + ofs);
return result.x[0] & mask;
@@ -2067,8 +2065,8 @@ static int jedec_probe_chip(struct map_info *map, __u32 base,
}
/* Ensure the unlock addresses we try stay inside the map */
probe_offset1 = cfi_build_cmd_addr(cfi->addr_unlock1, cfi_interleave(cfi), cfi->device_type);
probe_offset2 = cfi_build_cmd_addr(cfi->addr_unlock2, cfi_interleave(cfi), cfi->device_type);
probe_offset1 = cfi_build_cmd_addr(cfi->addr_unlock1, map, cfi);
probe_offset2 = cfi_build_cmd_addr(cfi->addr_unlock2, map, cfi);
if ( ((base + probe_offset1 + map_bankwidth(map)) >= map->size) ||
((base + probe_offset2 + map_bankwidth(map)) >= map->size))
goto retry;


@@ -38,7 +38,6 @@
#include <asm/arch/gpmc.h>
#include <asm/arch/onenand.h>
#include <asm/arch/gpio.h>
#include <asm/arch/gpmc.h>
#include <asm/arch/pm.h>
#include <linux/dma-mapping.h>


@@ -2010,9 +2010,13 @@ config IGB_LRO
If in doubt, say N.
config IGB_DCA
bool "Enable DCA"
bool "Direct Cache Access (DCA) Support"
default y
depends on IGB && DCA && !(IGB=y && DCA=m)
---help---
Say Y here if you want to use Direct Cache Access (DCA) in the
driver. DCA is a method for warming the CPU cache before data
is used, with the intent of lessening the impact of cache misses.
source "drivers/net/ixp2000/Kconfig"
@@ -2437,9 +2441,13 @@ config IXGBE
will be called ixgbe.
config IXGBE_DCA
bool
bool "Direct Cache Access (DCA) Support"
default y
depends on IXGBE && DCA && !(IXGBE=y && DCA=m)
---help---
Say Y here if you want to use Direct Cache Access (DCA) in the
driver. DCA is a method for warming the CPU cache before data
is used, with the intent of lessening the impact of cache misses.
config IXGB
tristate "Intel(R) PRO/10GbE support"
@@ -2489,9 +2497,13 @@ config MYRI10GE
will be called myri10ge.
config MYRI10GE_DCA
bool
bool "Direct Cache Access (DCA) Support"
default y
depends on MYRI10GE && DCA && !(MYRI10GE=y && DCA=m)
---help---
Say Y here if you want to use Direct Cache Access (DCA) in the
driver. DCA is a method for warming the CPU cache before data
is used, with the intent of lessening the impact of cache misses.
config NETXEN_NIC
tristate "NetXen Multi port (1/10) Gigabit Ethernet NIC"


@@ -46,7 +46,6 @@
#include <linux/vmalloc.h>
#include <linux/pagemap.h>
#include <linux/tcp.h>
#include <linux/mii.h>
#include <linux/ethtool.h>
#include <linux/if_vlan.h>
#include <linux/workqueue.h>


@@ -564,14 +564,15 @@ static const struct arb_line write_arb_addr[NUM_WR_Q-1] = {
static void bnx2x_init_pxp(struct bnx2x *bp)
{
u16 devctl;
int r_order, w_order;
u32 val, i;
pci_read_config_word(bp->pdev,
bp->pcie_cap + PCI_EXP_DEVCTL, (u16 *)&val);
DP(NETIF_MSG_HW, "read 0x%x from devctl\n", (u16)val);
w_order = ((val & PCI_EXP_DEVCTL_PAYLOAD) >> 5);
r_order = ((val & PCI_EXP_DEVCTL_READRQ) >> 12);
bp->pcie_cap + PCI_EXP_DEVCTL, &devctl);
DP(NETIF_MSG_HW, "read 0x%x from devctl\n", devctl);
w_order = ((devctl & PCI_EXP_DEVCTL_PAYLOAD) >> 5);
r_order = ((devctl & PCI_EXP_DEVCTL_READRQ) >> 12);
if (r_order > MAX_RD_ORD) {
DP(NETIF_MSG_HW, "read order of %d order adjusted to %d\n",
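
The devctl change above is a type fix: reading a 16-bit config word through (u16 *)&val into a 32-bit variable fills only half of val, and which half depends on endianness, so masks applied to the full 32-bit value can see stale bits on big-endian machines. A userspace illustration of the hazard (deliberately showing the broken cast):

#include <stdint.h>
#include <stdio.h>

/* Stand-in for a 16-bit PCI config-space read. */
static void read_config_word(uint16_t *out)
{
	*out = 0x2830;
}

int main(void)
{
	uint32_t val = 0xdeadbeef;		/* stale contents */
	uint16_t devctl;

	read_config_word((uint16_t *)&val);	/* broken: half of val untouched */
	printf("via cast: 0x%08x\n", val);

	read_config_word(&devctl);		/* fixed: a real 16-bit variable */
	printf("devctl:   0x%04x\n", devctl);
	return 0;
}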


@@ -59,8 +59,8 @@
#include "bnx2x.h"
#include "bnx2x_init.h"
#define DRV_MODULE_VERSION "1.45.22"
#define DRV_MODULE_RELDATE "2008/09/09"
#define DRV_MODULE_VERSION "1.45.23"
#define DRV_MODULE_RELDATE "2008/11/03"
#define BNX2X_BC_VER 0x040200
/* Time in jiffies before concluding the transmitter is hung */
@@ -6481,6 +6481,7 @@ load_int_disable:
bnx2x_free_irq(bp);
load_error:
bnx2x_free_mem(bp);
bp->port.pmf = 0;
/* TBD we really need to reset the chip
if we want to recover from this */
@@ -6791,6 +6792,7 @@ unload_error:
/* Report UNLOAD_DONE to MCP */
if (!BP_NOMCP(bp))
bnx2x_fw_command(bp, DRV_MSG_CODE_UNLOAD_DONE);
bp->port.pmf = 0;
/* Free SKBs, SGEs, TPA pool and driver internals */
bnx2x_free_skbs(bp);
@@ -10204,8 +10206,6 @@ static int __devinit bnx2x_init_one(struct pci_dev *pdev,
return -ENOMEM;
}
netif_carrier_off(dev);
bp = netdev_priv(dev);
bp->msglevel = debug;
@@ -10229,6 +10229,8 @@ static int __devinit bnx2x_init_one(struct pci_dev *pdev,
goto init_one_exit;
}
netif_carrier_off(dev);
bp->common.name = board_info[ent->driver_data].name;
printk(KERN_INFO "%s: %s (%c%d) PCI-E x%d %s found at mem %lx,"
" IRQ %d, ", dev->name, bp->common.name,


@@ -1099,7 +1099,9 @@ static int __devinit fs_enet_probe(struct of_device *ofdev,
ndev->stop = fs_enet_close;
ndev->get_stats = fs_enet_get_stats;
ndev->set_multicast_list = fs_set_multicast_list;
#ifdef CONFIG_NET_POLL_CONTROLLER
ndev->poll_controller = fs_enet_netpoll;
#endif
if (fpi->use_napi)
netif_napi_add(ndev, &fep->napi, fs_enet_rx_napi,
fpi->napi_weight);
@@ -1209,7 +1211,7 @@ static void __exit fs_cleanup(void)
static void fs_enet_netpoll(struct net_device *dev)
{
disable_irq(dev->irq);
fs_enet_interrupt(dev->irq, dev, NULL);
fs_enet_interrupt(dev->irq, dev);
enable_irq(dev->irq);
}
#endif


@@ -1066,9 +1066,12 @@ static int smi_wait_ready(struct mv643xx_eth_shared_private *msp)
return 0;
}
if (!wait_event_timeout(msp->smi_busy_wait, smi_is_done(msp),
msecs_to_jiffies(100)))
return -ETIMEDOUT;
if (!smi_is_done(msp)) {
wait_event_timeout(msp->smi_busy_wait, smi_is_done(msp),
msecs_to_jiffies(100));
if (!smi_is_done(msp))
return -ETIMEDOUT;
}
return 0;
}
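
The rewritten wait above has the usual check / sleep-wait / re-check shape: the return value of wait_event_timeout() can report a timeout even when the condition became true at the last instant, so the final verdict is taken from the condition itself, and the fast path avoids sleeping when the SMI transaction is already done. Schematically (toy code, with poll_done() standing in for smi_is_done()):

#include <stdio.h>

static int hw_done = 1;		/* pretend hardware status bit */

static int poll_done(void)
{
	return hw_done;
}

static int smi_wait_ready(void)
{
	if (poll_done())
		return 0;	/* fast path: nothing to wait for */

	/* the sleep-wait goes here (wait_event_timeout() in the
	 * driver); its return value is deliberately ignored ...
	 */

	if (!poll_done())	/* ... and the condition is re-checked */
		return -1;	/* -ETIMEDOUT */
	return 0;
}

int main(void)
{
	printf("%d\n", smi_wait_ready());	/* 0: already done */
	return 0;
}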


@@ -8667,7 +8667,6 @@ static void __devinit niu_device_announce(struct niu *np)
static int __devinit niu_pci_init_one(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
unsigned long niureg_base, niureg_len;
union niu_parent_id parent_id;
struct net_device *dev;
struct niu *np;
@@ -8758,10 +8757,7 @@ static int __devinit niu_pci_init_one(struct pci_dev *pdev,
dev->features |= (NETIF_F_SG | NETIF_F_HW_CSUM);
niureg_base = pci_resource_start(pdev, 0);
niureg_len = pci_resource_len(pdev, 0);
np->regs = ioremap_nocache(niureg_base, niureg_len);
np->regs = pci_ioremap_bar(pdev, 0);
if (!np->regs) {
dev_err(&pdev->dev, PFX "Cannot map device registers, "
"aborting.\n");


@@ -499,7 +499,7 @@ static void smc911x_hardware_send_pkt(struct net_device *dev)
#else
SMC_PUSH_DATA(lp, buf, len);
dev->trans_start = jiffies;
dev_kfree_skb(skb);
dev_kfree_skb_irq(skb);
#endif
if (!lp->tx_throttle) {
netif_wake_queue(dev);


@@ -2060,6 +2060,7 @@ static int smc_request_attrib(struct platform_device *pdev,
struct net_device *ndev)
{
struct resource * res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "smc91x-attrib");
struct smc_local *lp __maybe_unused = netdev_priv(ndev);
if (!res)
return 0;
@@ -2074,6 +2075,7 @@ static void smc_release_attrib(struct platform_device *pdev,
struct net_device *ndev)
{
struct resource * res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "smc91x-attrib");
struct smc_local *lp __maybe_unused = netdev_priv(ndev);
if (res)
release_mem_region(res->start, ATTRIB_SIZE);


@@ -37,7 +37,6 @@
#include <asm/irq.h>
#include <asm/uaccess.h>
#include <asm/types.h>
#include <asm/uaccess.h>
#include "ucc_geth.h"
#include "ucc_geth_mii.h"


@@ -2942,8 +2942,10 @@ static void ath5k_configure_filter(struct ieee80211_hw *hw,
sc->opmode != NL80211_IFTYPE_MESH_POINT &&
test_bit(ATH_STAT_PROMISC, sc->status))
rfilt |= AR5K_RX_FILTER_PROM;
if (sc->opmode == NL80211_IFTYPE_ADHOC)
if (sc->opmode == NL80211_IFTYPE_STATION ||
sc->opmode == NL80211_IFTYPE_ADHOC) {
rfilt |= AR5K_RX_FILTER_BEACON;
}
/* Set filters */
ath5k_hw_set_rx_filter(ah,rfilt);


@@ -531,10 +531,10 @@ static int ath5k_hw_proc_5210_rx_status(struct ath5k_hw *ah,
AR5K_5210_RX_DESC_STATUS0_RECEIVE_SIGNAL);
rs->rs_rate = AR5K_REG_MS(rx_status->rx_status_0,
AR5K_5210_RX_DESC_STATUS0_RECEIVE_RATE);
rs->rs_antenna = rx_status->rx_status_0 &
AR5K_5210_RX_DESC_STATUS0_RECEIVE_ANTENNA;
rs->rs_more = rx_status->rx_status_0 &
AR5K_5210_RX_DESC_STATUS0_MORE;
rs->rs_antenna = AR5K_REG_MS(rx_status->rx_status_0,
AR5K_5210_RX_DESC_STATUS0_RECEIVE_ANTENNA);
rs->rs_more = !!(rx_status->rx_status_0 &
AR5K_5210_RX_DESC_STATUS0_MORE);
/* TODO: this timestamp is 13 bit, later on we assume 15 bit */
rs->rs_tstamp = AR5K_REG_MS(rx_status->rx_status_1,
AR5K_5210_RX_DESC_STATUS1_RECEIVE_TIMESTAMP);
@@ -607,10 +607,10 @@ static int ath5k_hw_proc_5212_rx_status(struct ath5k_hw *ah,
AR5K_5212_RX_DESC_STATUS0_RECEIVE_SIGNAL);
rs->rs_rate = AR5K_REG_MS(rx_status->rx_status_0,
AR5K_5212_RX_DESC_STATUS0_RECEIVE_RATE);
rs->rs_antenna = rx_status->rx_status_0 &
AR5K_5212_RX_DESC_STATUS0_RECEIVE_ANTENNA;
rs->rs_more = rx_status->rx_status_0 &
AR5K_5212_RX_DESC_STATUS0_MORE;
rs->rs_antenna = AR5K_REG_MS(rx_status->rx_status_0,
AR5K_5212_RX_DESC_STATUS0_RECEIVE_ANTENNA);
rs->rs_more = !!(rx_status->rx_status_0 &
AR5K_5212_RX_DESC_STATUS0_MORE);
rs->rs_tstamp = AR5K_REG_MS(rx_status->rx_status_1,
AR5K_5212_RX_DESC_STATUS1_RECEIVE_TIMESTAMP);
rs->rs_status = 0;
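
Both rx_status fixes above swap a bare AND for AR5K_REG_MS(), which masks the field and then shifts it down to bit 0 — without the shift, rs_antenna stored the field still scaled up by its bit position — and use !! so rs_more holds 0/1 rather than a raw mask value. The two idioms in miniature (made-up field layout, not the real 5210/5212 descriptor bits):

#include <stdio.h>

/* Made-up layout: a 4-bit antenna field at bit 20, a flag at bit 12. */
#define ANTENNA_MASK	0x00f00000u
#define ANTENNA_SHIFT	20
#define MORE_FLAG	0x00001000u

/* Mask-and-shift, the AR5K_REG_MS() idiom. */
#define REG_MS(val, field)	(((val) & field##_MASK) >> field##_SHIFT)

int main(void)
{
	unsigned int rx_status = (3u << ANTENNA_SHIFT) | MORE_FLAG;

	printf("bare AND:   0x%08x\n", rx_status & ANTENNA_MASK); /* 0x00300000 */
	printf("mask+shift: %u\n", REG_MS(rx_status, ANTENNA));   /* 3 */
	printf("!!flag:     %u\n", !!(rx_status & MORE_FLAG));    /* 1 */
	return 0;
}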


@@ -3252,7 +3252,11 @@ static void iwl4965_mac_update_tkip_key(struct ieee80211_hw *hw,
return;
}
iwl_scan_cancel_timeout(priv, 100);
if (iwl_scan_cancel(priv)) {
/* cancel scan failed, just live w/ bad key and rely
briefly on SW decryption */
return;
}
key_flags |= (STA_KEY_FLG_TKIP | STA_KEY_FLG_MAP_KEY_MSK);
key_flags |= cpu_to_le16(keyconf->keyidx << STA_KEY_FLG_KEYID_POS);
