Jeff Garzik 2005-10-28 12:29:23 -04:00
Parent 972c26bdd6 5fadd053d9
Commit 7a9f8f93d2
544 changed files: 15719 additions and 9798 deletions

30
.gitignore vendored Normal file
View file

@ -0,0 +1,30 @@
#
# NOTE! Don't add files that are generated in specific
# subdirectories here. Add them in the ".gitignore" file
# in that subdirectory instead.
#
# Normal rules
#
.*
*.o
*.a
*.s
*.ko
*.mod.c
#
# Top-level generic files
#
vmlinux*
System.map
Module.symvers
#
# Generated include files
#
include/asm
include/config
include/linux/autoconf.h
include/linux/compile.h
include/linux/version.h

View file

@ -906,9 +906,20 @@ Aside:
4. The I/O scheduler
I/O schedulers are now per queue. They should be runtime switchable and modular
but aren't yet. Jens has most bits to do this, but the sysfs implementation is
missing.
I/O scheduler, a.k.a. elevator, is implemented in two layers. Generic dispatch
queue and specific I/O schedulers. Unless stated otherwise, elevator is used
to refer to both parts and I/O scheduler to specific I/O schedulers.
Block layer implements generic dispatch queue in ll_rw_blk.c and elevator.c.
The generic dispatch queue is responsible for properly ordering barrier
requests, requeueing, handling non-fs requests and all other subtleties.
Specific I/O schedulers are responsible for ordering normal filesystem
requests. They can also choose to delay certain requests to improve
throughput or whatever purpose. As the plural form indicates, there are
multiple I/O schedulers. They can be built as modules but at least one should
be built inside the kernel. Each queue can choose a different one and can also
change to another one dynamically.
A block layer call to the i/o scheduler follows the convention elv_xxx(). This
calls elevator_xxx_fn in the elevator switch (drivers/block/elevator.c). Oh,
@ -921,44 +932,36 @@ keeping work.
The functions an elevator may implement are: (* are mandatory)
elevator_merge_fn called to query requests for merge with a bio
elevator_merge_req_fn " " " with another request
elevator_merge_req_fn called when two requests get merged. The one
which gets merged into the other one will never
be seen by the I/O scheduler again. IOW, after
being merged, the request is gone.
elevator_merged_fn called when a request in the scheduler has been
involved in a merge. It is used in the deadline
scheduler for example, to reposition the request
if its sorting order has changed.
*elevator_next_req_fn returns the next scheduled request, or NULL
if there are none (or none are ready).
elevator_dispatch_fn fills the dispatch queue with ready requests.
I/O schedulers are free to postpone requests by
not filling the dispatch queue unless @force
is non-zero. Once dispatched, I/O schedulers
are not allowed to manipulate the requests -
they belong to generic dispatch queue.
*elevator_add_req_fn called to add a new request into the scheduler
elevator_add_req_fn called to add a new request into the scheduler
elevator_queue_empty_fn returns true if the merge queue is empty.
Drivers shouldn't use this, but rather check
if elv_next_request is NULL (without losing the
request if one exists!)
elevator_remove_req_fn This is called when a driver claims ownership of
the target request - it now belongs to the
driver. It must not be modified or merged.
Drivers must not lose the request! A subsequent
call of elevator_next_req_fn must return the
_next_ request.
elevator_requeue_req_fn called to add a request to the scheduler. This
is used when the request has already been
returned by elv_next_request, but hasn't
completed. If this is not implemented then
elevator_add_req_fn is called instead.
elevator_former_req_fn
elevator_latter_req_fn These return the request before or after the
one specified in disk sort order. Used by the
block layer to find merge possibilities.
elevator_completed_req_fn called when a request is completed. This might
come about due to being merged with another or
when the device completes the request.
elevator_completed_req_fn called when a request is completed.
elevator_may_queue_fn returns true if the scheduler wants to allow the
current context to queue a new request even if
@ -967,13 +970,33 @@ elevator_may_queue_fn returns true if the scheduler wants to allow the
elevator_set_req_fn
elevator_put_req_fn Must be used to allocate and free any elevator
specific storate for a request.
specific storage for a request.
elevator_activate_req_fn Called when device driver first sees a request.
I/O schedulers can use this callback to
determine when actual execution of a request
starts.
elevator_deactivate_req_fn Called when device driver decides to delay
a request by requeueing it.
elevator_init_fn
elevator_exit_fn Allocate and free any elevator specific storage
for a queue.
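To make the callback table above concrete, here is a minimal sketch of how a
trivial scheduler could wire two of these hooks into an elevator_type and
register itself. It is modeled loosely on the no-op scheduler of this kernel
generation; the helper elv_dispatch_add_tail() and the module boilerplate are
assumptions based on the generic dispatch queue described earlier, not code
taken from this patch.

	#include <linux/blkdev.h>
	#include <linux/elevator.h>
	#include <linux/module.h>

	/* Sketch only: every request goes straight to the generic dispatch queue. */
	static void sketch_add_request(request_queue_t *q, struct request *rq)
	{
		elv_dispatch_add_tail(q, rq);	/* hand the request over immediately */
	}

	static int sketch_dispatch(request_queue_t *q, int force)
	{
		return 0;			/* nothing is held back, nothing to move */
	}

	static struct elevator_type sketch_iosched = {
		.ops = {
			.elevator_add_req_fn	= sketch_add_request,
			.elevator_dispatch_fn	= sketch_dispatch,
		},
		.elevator_name	= "sketch",
		.elevator_owner	= THIS_MODULE,
	};

	static int __init sketch_init(void)
	{
		elv_register(&sketch_iosched);
		return 0;
	}

	static void __exit sketch_exit(void)
	{
		elv_unregister(&sketch_iosched);
	}

	module_init(sketch_init);
	module_exit(sketch_exit);
	MODULE_LICENSE("GPL");

Once registered this way, a scheduler can be picked as the boot-time default
with the elevator= parameter documented in kernel-parameters.txt below.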
4.2 I/O scheduler implementation
4.2 Request flows seen by I/O schedulers
All requests seen by I/O schedulers strictly follow one of the following three
flows.
set_req_fn ->

 i.   add_req_fn -> (merged_fn ->)* -> dispatch_fn -> activate_req_fn ->
      (deactivate_req_fn -> activate_req_fn ->)* -> completed_req_fn
 ii.  add_req_fn -> (merged_fn ->)* -> merge_req_fn
 iii. [none]

 -> put_req_fn
4.3 I/O scheduler implementation
The generic i/o scheduler algorithm attempts to sort/merge/batch requests for
optimal disk scan and request servicing performance (based on generic
principles and device capabilities), optimized for:
@ -993,18 +1016,7 @@ request in sort order to prevent binary tree lookups.
This arrangement is not a generic block layer characteristic however, so
elevators may implement queues as they please.
ii. Last merge hint
The last merge hint is part of the generic queue layer. I/O schedulers must do
some management on it. For the most part, the most important thing is to make
sure q->last_merge is cleared (set to NULL) when the request on it is no longer
a candidate for merging (for example if it has been sent to the driver).
The last merge performed is cached as a hint for the subsequent request. If
sequential data is being submitted, the hint is used to perform merges without
any scanning. This is not sufficient when there are multiple processes doing
I/O though, so a "merge hash" is used by some schedulers.
iii. Merge hash
ii. Merge hash
AS and deadline use a hash table indexed by the last sector of a request. This
enables merging code to quickly look up "back merge" candidates, even when
multiple I/O streams are being performed at once on one disk.
@ -1013,28 +1025,7 @@ multiple I/O streams are being performed at once on one disk.
are far less common than "back merges" due to the nature of most I/O patterns.
Front merges are handled by the binary trees in AS and deadline schedulers.
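As an illustration of such a hash, here is a short sketch of the back-merge
lookup idea. The key choice (the sector just past the end of a request)
mirrors what the deadline scheduler of this period uses; every identifier
below is illustrative rather than taken from the patch.

	#include <linux/blkdev.h>
	#include <linux/list.h>
	#include <linux/hash.h>

	#define SKETCH_HASH_BITS	7
	#define SKETCH_HASH_ENTRIES	(1 << SKETCH_HASH_BITS)

	/* a bio that starts exactly at this sector can be back-merged into rq */
	#define rq_back_merge_key(rq)	((rq)->sector + (rq)->nr_sectors)

	/* per-request bookkeeping kept privately by the (hypothetical) scheduler */
	struct sketch_rq {
		struct request	*rq;
		struct list_head hash_node;
	};

	static struct list_head merge_hash[SKETCH_HASH_ENTRIES];

	static void sketch_hash_init(void)
	{
		int i;

		for (i = 0; i < SKETCH_HASH_ENTRIES; i++)
			INIT_LIST_HEAD(&merge_hash[i]);
	}

	static void sketch_hash_add(struct sketch_rq *srq)
	{
		unsigned long key = rq_back_merge_key(srq->rq);

		list_add(&srq->hash_node,
			 &merge_hash[hash_long(key, SKETCH_HASH_BITS)]);
	}

	static struct request *sketch_find_back_merge(sector_t bio_start)
	{
		struct list_head *bucket;
		struct sketch_rq *srq;

		bucket = &merge_hash[hash_long(bio_start, SKETCH_HASH_BITS)];
		list_for_each_entry(srq, bucket, hash_node)
			if (rq_back_merge_key(srq->rq) == bio_start)
				return srq->rq;	/* rq ends where the new bio begins */
		return NULL;
	}

A request must of course be removed from (or rehashed in) the table whenever
it grows, is merged away, or is dispatched - exactly the bookkeeping the
merged_fn/merge_req_fn callbacks above exist for.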
iv. Handling barrier cases
A request with flags REQ_HARDBARRIER or REQ_SOFTBARRIER must not be ordered
around. That is, they must be processed after all older requests, and before
any newer ones. This includes merges!
In AS and deadline schedulers, barriers have the effect of flushing the reorder
queue. The performance cost of this will vary from nothing to a lot depending
on i/o patterns and device characteristics. Obviously they won't improve
performance, so their use should be kept to a minimum.
v. Handling insertion position directives
A request may be inserted with a position directive. The directives are one of
ELEVATOR_INSERT_BACK, ELEVATOR_INSERT_FRONT, ELEVATOR_INSERT_SORT.
ELEVATOR_INSERT_SORT is a general directive for non-barrier requests.
ELEVATOR_INSERT_BACK is used to insert a barrier to the back of the queue.
ELEVATOR_INSERT_FRONT is used to insert a barrier to the front of the queue, and
overrides the ordering requested by any previous barriers. In practice this is
harmless and required, because it is used for SCSI requeueing. This does not
require flushing the reorder queue, so does not impose a performance penalty.
vi. Plugging the queue to batch requests in anticipation of opportunities for
iii. Plugging the queue to batch requests in anticipation of opportunities for
merge/sort optimizations
This is just the same as in 2.4 so far, though per-device unplugging
@ -1069,7 +1060,7 @@ Aside:
blk_kick_queue() to unplug a specific queue (right away ?)
or optionally, all queues, is in the plan.
4.3 I/O contexts
4.4 I/O contexts
I/O contexts provide a dynamically allocated per process data area. They may
be used in I/O schedulers, and in the block layer (could be used for IO stats,
priorities for example). See *io_context in drivers/block/ll_rw_blk.c, and

View file

@ -17,7 +17,7 @@ are specified on the kernel command line with the module name plus
usbcore.blinkenlights=1
The text in square brackets at the beginning of the description state the
The text in square brackets at the beginning of the description states the
restrictions on the kernel for the said kernel parameter to be valid. The
restrictions referred to are that the relevant option is valid if:
@ -71,7 +71,7 @@ restrictions referred to are that the relevant option is valid if:
SERIAL Serial support is enabled.
SMP The kernel is an SMP kernel.
SPARC Sparc architecture is enabled.
SWSUSP Software suspension is enabled.
SWSUSP Software suspend is enabled.
TS Appropriate touchscreen support is enabled.
USB USB support is enabled.
USBHID USB Human Interface Device support is enabled.
@ -106,7 +106,7 @@ running once the system is up.
See also Documentation/scsi/ncr53c7xx.txt.
acpi= [HW,ACPI] Advanced Configuration and Power Interface
Format: { force | off | ht | strict }
Format: { force | off | ht | strict | noirq }
force -- enable ACPI if default was off
off -- disable ACPI if default was on
noirq -- do not use ACPI for IRQ routing
@ -123,16 +123,19 @@ running once the system is up.
acpi_sci= [HW,ACPI] ACPI System Control Interrupt trigger mode
Format: { level | edge | high | low }
acpi_irq_balance [HW,ACPI] ACPI will balance active IRQs
acpi_irq_balance [HW,ACPI]
ACPI will balance active IRQs
default in APIC mode
acpi_irq_nobalance [HW,ACPI] ACPI will not move active IRQs (default)
acpi_irq_nobalance [HW,ACPI]
ACPI will not move active IRQs (default)
default in PIC mode
acpi_irq_pci= [HW,ACPI] If irq_balance, Clear listed IRQs for use by PCI
acpi_irq_pci= [HW,ACPI] If irq_balance, clear listed IRQs for
use by PCI
Format: <irq>,<irq>...
acpi_irq_isa= [HW,ACPI] If irq_balance, Mark listed IRQs used by ISA
acpi_irq_isa= [HW,ACPI] If irq_balance, mark listed IRQs used by ISA
Format: <irq>,<irq>...
acpi_osi= [HW,ACPI] empty param disables _OSI
@ -145,14 +148,14 @@ running once the system is up.
acpi_dbg_layer= [HW,ACPI]
Format: <int>
Each bit of the <int> indicates an acpi debug layer,
Each bit of the <int> indicates an ACPI debug layer,
1: enable, 0: disable. It is useful for boot time
debugging. After system has booted up, it can be set
via /proc/acpi/debug_layer.
acpi_dbg_level= [HW,ACPI]
Format: <int>
Each bit of the <int> indicates an acpi debug level,
Each bit of the <int> indicates an ACPI debug level,
1: enable, 0: disable. It is useful for boot time
debugging. After system has booted up, it can be set
via /proc/acpi/debug_level.
@ -161,12 +164,13 @@ running once the system is up.
acpi_generic_hotkey [HW,ACPI]
Allow consolidated generic hotkey driver to
over-ride platform specific driver.
override platform specific driver.
See also Documentation/acpi-hotkey.txt.
enable_timer_pin_1 [i386,x86-64]
Enable PIN 1 of APIC timer
Can be useful to work around chipset bugs (in particular on some ATI chipsets)
Can be useful to work around chipset bugs
(in particular on some ATI chipsets).
The kernel tries to set a reasonable default.
disable_timer_pin_1 [i386,x86-64]
@ -205,10 +209,6 @@ running once the system is up.
aic79xx= [HW,SCSI]
See Documentation/scsi/aic79xx.txt.
AM53C974= [HW,SCSI]
Format: <host-scsi-id>,<target-scsi-id>,<max-rate>,<max-offset>
See also header of drivers/scsi/AM53C974.c.
amijoy.map= [HW,JOY] Amiga joystick support
Map of devices attached to JOY0DAT and JOY1DAT
Format: <a>,<b>
@ -219,7 +219,8 @@ running once the system is up.
connected to one of 16 gameports
Format: <type1>,<type2>,..<type16>
apc= [HW,SPARC] Power management functions (SPARCstation-4/5 + deriv.)
apc= [HW,SPARC]
Power management functions (SPARCstation-4/5 + deriv.)
Format: noidle
Disable APC CPU standby support. SPARCstation-Fox does
not play well with APC CPU idle - disable it if you have
@ -251,7 +252,7 @@ running once the system is up.
atkbd.reset= [HW] Reset keyboard during initialization
atkbd.set= [HW] Select keyboard code set
Format: <int> (2 = AT (default) 3 = PS/2)
Format: <int> (2 = AT (default), 3 = PS/2)
atkbd.scroll= [HW] Enable scroll wheel on MS Office and similar
keyboards
@ -259,8 +260,8 @@ running once the system is up.
atkbd.softraw= [HW] Choose between synthetic and real raw mode
Format: <bool> (0 = real, 1 = synthetic (default))
atkbd.softrepeat=
[HW] Use software keyboard repeat
atkbd.softrepeat= [HW]
Use software keyboard repeat
autotest [IA64]
@ -277,11 +278,13 @@ running once the system is up.
Format: <io>,<mode>
See header of drivers/net/hamradio/baycom_par.c.
baycom_ser_fdx= [HW,AX25] BayCom Serial Port AX.25 Modem (Full Duplex Mode)
baycom_ser_fdx= [HW,AX25]
BayCom Serial Port AX.25 Modem (Full Duplex Mode)
Format: <io>,<irq>,<mode>[,<baud>]
See header of drivers/net/hamradio/baycom_ser_fdx.c.
baycom_ser_hdx= [HW,AX25] BayCom Serial Port AX.25 Modem (Half Duplex Mode)
baycom_ser_hdx= [HW,AX25]
BayCom Serial Port AX.25 Modem (Half Duplex Mode)
Format: <io>,<irq>,<mode>
See header of drivers/net/hamradio/baycom_ser_hdx.c.
@ -292,7 +295,8 @@ running once the system is up.
blkmtd_count=
bttv.card= [HW,V4L] bttv (bt848 + bt878 based grabber cards)
bttv.radio= Most important insmod options are available as kernel args too.
bttv.radio= Most important insmod options are available as
kernel args too.
bttv.pll= See Documentation/video4linux/bttv/Insmod-options
bttv.tuner= and Documentation/video4linux/bttv/CARDLIST
@ -318,15 +322,17 @@ running once the system is up.
checkreqprot [SELINUX] Set initial checkreqprot flag value.
Format: { "0" | "1" }
See security/selinux/Kconfig help text.
0 -- check protection applied by kernel (includes any implied execute protection).
0 -- check protection applied by kernel (includes
any implied execute protection).
1 -- check protection requested by application.
Default value is set via a kernel config option.
Value can be changed at runtime via /selinux/checkreqprot.
Value can be changed at runtime via
/selinux/checkreqprot.
clock= [BUGS=IA-32,HW] gettimeofday timesource override.
Forces specified timesource (if available) to be used
when calculating gettimeofday(). If specified timesource
is not available, it defaults to PIT.
when calculating gettimeofday(). If specified
timesource is not available, it defaults to PIT.
Format: { pit | tsc | cyclone | pmtmr }
hpet= [IA-32,HPET] option to disable HPET and use PIT.
@ -336,12 +342,14 @@ running once the system is up.
Format: { auto | [<io>,][<irq>] }
com20020= [HW,NET] ARCnet - COM20020 chipset
Format: <io>[,<irq>[,<nodeID>[,<backplane>[,<ckp>[,<timeout>]]]]]
Format:
<io>[,<irq>[,<nodeID>[,<backplane>[,<ckp>[,<timeout>]]]]]
com90io= [HW,NET] ARCnet - COM90xx chipset (IO-mapped buffers)
Format: <io>[,<irq>]
com90xx= [HW,NET] ARCnet - COM90xx chipset (memory-mapped buffers)
com90xx= [HW,NET]
ARCnet - COM90xx chipset (memory-mapped buffers)
Format: <io>[,<irq>[,<memstart>]]
condev= [HW,S390] console device
@ -367,7 +375,8 @@ running once the system is up.
options are the same as for ttyS, above.
cpcihp_generic= [HW,PCI] Generic port I/O CompactPCI driver
Format: <first_slot>,<last_slot>,<port>,<enum_bit>[,<debug>]
Format:
<first_slot>,<last_slot>,<port>,<enum_bit>[,<debug>]
cpia_pp= [HW,PPT]
Format: { parport<nr> | auto | none }
@ -428,7 +437,7 @@ running once the system is up.
earlyprintk=vga
earlyprintk=serial[,ttySn[,baudrate]]
Append ,keep to not disable it when the real console
Append ",keep" to not disable it when the real console
takes over.
Only vga or serial at a time, not both.
@ -463,11 +472,12 @@ running once the system is up.
elevator= [IOSCHED]
Format: {"as" | "cfq" | "deadline" | "noop"}
See Documentation/block/as-iosched.txt
and Documentation/block/deadline-iosched.txt for details.
See Documentation/block/as-iosched.txt and
Documentation/block/deadline-iosched.txt for details.
elfcorehdr= [IA-32]
Specifies physical address of start of kernel core image
elf header.
Specifies physical address of start of kernel core
image elf header.
See Documentation/kdump.txt for details.
enforcing [SELINUX] Set initial enforcing status.
@ -532,6 +542,7 @@ running once the system is up.
hashdist= [KNL,NUMA] Large hashes allocated during boot
are distributed across NUMA nodes. Defaults on
for IA-64, off otherwise.
Format: 0 | 1 (for off | on)
hcl= [IA-64] SGI's Hardware Graph compatibility layer
@ -661,9 +672,9 @@ running once the system is up.
"number of CPUs in system - 1".
This option is the preferred way to isolate CPUs. The
alternative - manually setting the CPU mask of all tasks
in the system can cause problems and suboptimal load
balancer performance.
alternative -- manually setting the CPU mask of all
tasks in the system -- can cause problems and
suboptimal load balancer performance.
isp16= [HW,CD]
Format: <io>,<irq>,<dma>,<setup>
@ -680,13 +691,14 @@ running once the system is up.
l2cr= [PPC]
lapic [IA-32,APIC] Enable the local APIC even if BIOS disabled it.
lapic [IA-32,APIC] Enable the local APIC even if BIOS
disabled it.
lasi= [HW,SCSI] PARISC LASI driver for the 53c700 chip
Format: addr:<io>,irq:<irq>
llsc*= [IA64]
See function print_params() in arch/ia64/sn/kernel/llsc4.c.
llsc*= [IA64] See function print_params() in
arch/ia64/sn/kernel/llsc4.c.
load_ramdisk= [RAM] List of ramdisks to load from floppy
See Documentation/ramdisk.txt.
@ -713,8 +725,9 @@ running once the system is up.
7 (KERN_DEBUG) debug-level messages
log_buf_len=n Sets the size of the printk ring buffer, in bytes.
Format is n, nk, nM. n must be a power of two. The
default is set in kernel config.
Format: { n | nk | nM }
n must be a power of two. The default size
is set in the kernel config file.
lp=0 [LP] Specify parallel ports to use, e.g,
lp=port[,port...] lp=none,parport0 (lp0 not configured, lp1 uses
@ -750,18 +763,18 @@ running once the system is up.
ltpc= [NET]
Format: <io>,<irq>,<dma>
mac5380= [HW,SCSI]
Format: <can_queue>,<cmd_per_lun>,<sg_tablesize>,<hostid>,<use_tags>
mac5380= [HW,SCSI] Format:
<can_queue>,<cmd_per_lun>,<sg_tablesize>,<hostid>,<use_tags>
mac53c9x= [HW,SCSI]
Format: <num_esps>,<disconnect>,<nosync>,<can_queue>,<cmd_per_lun>,<sg_tablesize>,<hostid>,<use_tags>
mac53c9x= [HW,SCSI] Format:
<num_esps>,<disconnect>,<nosync>,<can_queue>,<cmd_per_lun>,<sg_tablesize>,<hostid>,<use_tags>
machvec= [IA64]
Force the use of a particular machine-vector (machvec) in a generic
kernel. Example: machvec=hpzx1_swiotlb
machvec= [IA64] Force the use of a particular machine-vector
(machvec) in a generic kernel.
Example: machvec=hpzx1_swiotlb
mad16= [HW,OSS]
Format: <io>,<irq>,<dma>,<dma16>,<mpu_io>,<mpu_irq>,<joystick>
mad16= [HW,OSS] Format:
<io>,<irq>,<dma>,<dma16>,<mpu_io>,<mpu_irq>,<joystick>
maui= [HW,OSS]
Format: <io>,<irq>
@ -776,11 +789,11 @@ running once the system is up.
max_addr=[KMG] [KNL,BOOT,ia64] All physical memory greater than or
equal to this physical address is ignored.
max_luns= [SCSI] Maximum number of LUNs to probe
max_luns= [SCSI] Maximum number of LUNs to probe.
Should be between 1 and 2^32-1.
max_report_luns=
[SCSI] Maximum number of LUNs received
[SCSI] Maximum number of LUNs received.
Should be between 1 and 16384.
mca-pentium [BUGS=IA-32]
@ -851,15 +864,15 @@ running once the system is up.
MTD_Partition= [MTD]
Format: <name>,<region-number>,<size>,<offset>
MTD_Region= [MTD]
Format: <name>,<region-number>[,<base>,<size>,<buswidth>,<altbuswidth>]
MTD_Region= [MTD] Format:
<name>,<region-number>[,<base>,<size>,<buswidth>,<altbuswidth>]
mtdparts= [MTD]
See drivers/mtd/cmdline.c.
mtouchusb.raw_coordinates=
[HW] Make the MicroTouch USB driver use raw coordinates ('y', default)
or cooked coordinates ('n')
[HW] Make the MicroTouch USB driver use raw coordinates
('y', default) or cooked coordinates ('n')
n2= [NET] SDL Inc. RISCom/N2 synchronous serial card
@ -880,6 +893,8 @@ running once the system is up.
Format: <irq>,<io>,<mem_start>,<mem_end>,<name>
Note that mem_start is often overloaded to mean
something different and driver-specific.
This usage is only documented in each driver source
file if at all.
nfsaddrs= [NFS]
See Documentation/nfsroot.txt.
@ -948,7 +963,8 @@ running once the system is up.
noresidual [PPC] Don't use residual data on PReP machines.
noresume [SWSUSP] Disables resume and restore original swap space.
noresume [SWSUSP] Disables resume and restores original swap
space.
no-scroll [VGA] Disables scrollback.
This is required for the Braillex ib80-piezo Braille
@ -972,8 +988,8 @@ running once the system is up.
opl3sa= [HW,OSS]
Format: <io>,<irq>,<dma>,<dma2>,<mpu_io>,<mpu_irq>
opl3sa2= [HW,OSS]
Format: <io>,<irq>,<dma>,<dma2>,<mss_io>,<mpu_io>,<ymode>,<loopback>[,<isapnp>,<multiple]
opl3sa2= [HW,OSS] Format:
<io>,<irq>,<dma>,<dma2>,<mss_io>,<mpu_io>,<ymode>,<loopback>[,<isapnp>,<multiple]
oprofile.timer= [HW]
Use timer interrupt instead of performance counters
@ -995,33 +1011,30 @@ running once the system is up.
0 for XT, 1 for AT (default is AT).
Format: <mode>
parport=0 [HW,PPT] Specify parallel ports. 0 disables.
parport=auto Use 'auto' to force the driver to use
parport=0xBBB[,IRQ[,DMA]] any IRQ/DMA settings detected (the
default is to ignore detected IRQ/DMA
settings because of possible
conflicts). You can specify the base
address, IRQ, and DMA settings; IRQ and
DMA should be numbers, or 'auto' (for
using detected settings on that
particular port), or 'nofifo' (to avoid
using a FIFO even if it is detected).
Parallel ports are assigned in the
order they are specified on the command
line, starting with parport0.
parport= [HW,PPT] Specify parallel ports. 0 disables.
Format: { 0 | auto | 0xBBB[,IRQ[,DMA]] }
Use 'auto' to force the driver to use any
IRQ/DMA settings detected (the default is to
ignore detected IRQ/DMA settings because of
possible conflicts). You can specify the base
address, IRQ, and DMA settings; IRQ and DMA
should be numbers, or 'auto' (for using detected
settings on that particular port), or 'nofifo'
(to avoid using a FIFO even if it is detected).
Parallel ports are assigned in the order they
are specified on the command line, starting
with parport0.
parport_init_mode=
[HW,PPT] Configure VIA parallel port to
operate in specific mode. This is
necessary on Pegasos computer where
firmware has no options for setting up
parallel port mode and sets it to
spp. Currently this function knows
686a and 8231 chips.
parport_init_mode= [HW,PPT]
Configure VIA parallel port to operate in
a specific mode. This is necessary on Pegasos
computer where firmware has no options for setting
up parallel port mode and sets it to spp.
Currently this function knows 686a and 8231 chips.
Format: [spp|ps2|epp|ecp|ecpepp]
pas2= [HW,OSS]
Format: <io>,<irq>,<dma>,<dma16>,<sb_io>,<sb_irq>,<sb_dma>,<sb_dma16>
pas2= [HW,OSS] Format:
<io>,<irq>,<dma>,<dma16>,<sb_io>,<sb_irq>,<sb_dma>,<sb_dma16>
pas16= [HW,SCSI]
See header of drivers/scsi/pas16.c.
@ -1041,55 +1054,58 @@ running once the system is up.
hardware access methods are allowed. Use this
if you experience crashes upon bootup and you
suspect they are caused by the BIOS.
conf1 [IA-32] Force use of PCI Configuration Mechanism 1.
conf2 [IA-32] Force use of PCI Configuration Mechanism 2.
conf1 [IA-32] Force use of PCI Configuration
Mechanism 1.
conf2 [IA-32] Force use of PCI Configuration
Mechanism 2.
nosort [IA-32] Don't sort PCI devices according to
order given by the PCI BIOS. This sorting is done
to get a device order compatible with older kernels.
order given by the PCI BIOS. This sorting is
done to get a device order compatible with
older kernels.
biosirq [IA-32] Use PCI BIOS calls to get the interrupt
routing table. These calls are known to be buggy
on several machines and they hang the machine when used,
but on other computers it's the only way to get the
interrupt routing table. Try this option if the kernel
is unable to allocate IRQs or discover secondary PCI
buses on your motherboard.
on several machines and they hang the machine
when used, but on other computers it's the only
way to get the interrupt routing table. Try
this option if the kernel is unable to allocate
IRQs or discover secondary PCI buses on your
motherboard.
rom [IA-32] Assign address space to expansion ROMs.
Use with caution as certain devices share address
decoders between ROMs and other resources.
irqmask=0xMMMM [IA-32] Set a bit mask of IRQs allowed to be assigned
automatically to PCI devices. You can make the kernel
exclude IRQs of your ISA cards this way.
Use with caution as certain devices share
address decoders between ROMs and other
resources.
irqmask=0xMMMM [IA-32] Set a bit mask of IRQs allowed to be
assigned automatically to PCI devices. You can
make the kernel exclude IRQs of your ISA cards
this way.
pirqaddr=0xAAAAA [IA-32] Specify the physical address
of the PIRQ table (normally generated
by the BIOS) if it is outside the
F0000h-100000h range.
lastbus=N [IA-32] Scan all buses till bus #N. Can be useful
if the kernel is unable to find your secondary buses
and you want to tell it explicitly which ones they are.
lastbus=N [IA-32] Scan all buses thru bus #N. Can be
useful if the kernel is unable to find your
secondary buses and you want to tell it
explicitly which ones they are.
assign-busses [IA-32] Always assign all PCI bus
numbers ourselves, overriding
whatever the firmware may have
done.
usepirqmask [IA-32] Honor the possible IRQ mask
stored in the BIOS $PIR table. This is
needed on some systems with broken
BIOSes, notably some HP Pavilion N5400
and Omnibook XE3 notebooks. This will
have no effect if ACPI IRQ routing is
enabled.
whatever the firmware may have done.
usepirqmask [IA-32] Honor the possible IRQ mask stored
in the BIOS $PIR table. This is needed on
some systems with broken BIOSes, notably
some HP Pavilion N5400 and Omnibook XE3
notebooks. This will have no effect if ACPI
IRQ routing is enabled.
noacpi [IA-32] Do not use ACPI for IRQ routing
or for PCI scanning.
routeirq Do IRQ routing for all PCI devices.
This is normally done in pci_enable_device(),
so this option is a temporary workaround
for broken drivers that don't call it.
firmware [ARM] Do not re-enumerate the bus but
instead just use the configuration
from the bootloader. This is currently
used on IXP2000 systems where the
bus has to be configured a certain way
for adjunct CPUs.
firmware [ARM] Do not re-enumerate the bus but instead
just use the configuration from the
bootloader. This is currently used on
IXP2000 systems where the bus has to be
configured a certain way for adjunct CPUs.
pcmv= [HW,PCMCIA] BadgePAD 4
@ -1130,14 +1146,15 @@ running once the system is up.
Ranges are in pairs (I/O port base and size).
pnp_reserve_mem=
[ISAPNP] Exclude memory regions for the autoconfiguration
[ISAPNP] Exclude memory regions for the
autoconfiguration.
Ranges are in pairs (memory base and size).
profile= [KNL] Enable kernel profiling via /proc/profile
{ schedule | <number> }
(param: schedule - profile schedule points}
(param: profile step/bucket size as a power of 2 for
statistical time based profiling)
Format: [schedule,]<number>
Param: "schedule" - profile schedule points.
Param: <number> - step/bucket size as a power of 2 for
statistical time based profiling.
processor.max_cstate= [HW,ACPI]
Limit processor to maximum C-state
@ -1148,20 +1165,21 @@ running once the system is up.
See Documentation/ramdisk.txt.
psmouse.proto= [HW,MOUSE] Highest PS2 mouse protocol extension to
probe for (bare|imps|exps|lifebook|any).
probe for; one of (bare|imps|exps|lifebook|any).
psmouse.rate= [HW,MOUSE] Set desired mouse report rate, in reports
per second.
psmouse.resetafter=
[HW,MOUSE] Try to reset the device after so many bad packets
psmouse.resetafter= [HW,MOUSE]
Try to reset the device after so many bad packets
(0 = never).
psmouse.resolution=
[HW,MOUSE] Set desired mouse resolution, in dpi.
psmouse.smartscroll=
[HW,MOUSE] Controls Logitech smartscroll autorepeat,
[HW,MOUSE] Controls Logitech smartscroll autorepeat.
0 = disabled, 1 = enabled (default).
pss= [HW,OSS] Personal Sound System (ECHO ESC614)
Format: <io>,<mss_io>,<mss_irq>,<mss_dma>,<mpu_io>,<mpu_irq>
Format:
<io>,<mss_io>,<mss_irq>,<mss_dma>,<mpu_io>,<mpu_irq>
pt. [PARIDE]
See Documentation/paride.txt.
@ -1176,8 +1194,7 @@ running once the system is up.
ramdisk= [RAM] Sizes of RAM disks in kilobytes [deprecated]
See Documentation/ramdisk.txt.
ramdisk_blocksize=
[RAM]
ramdisk_blocksize= [RAM]
See Documentation/ramdisk.txt.
ramdisk_size= [RAM] Sizes of RAM disks in kilobytes
@ -1195,7 +1212,8 @@ running once the system is up.
reserve= [KNL,BUGS] Force the kernel to ignore some iomem area
resume= [SWSUSP] Specify the partition device for software suspension
resume= [SWSUSP]
Specify the partition device for software suspend
rhash_entries= [KNL,NET]
Set number of hash buckets for route cache
@ -1258,8 +1276,7 @@ running once the system is up.
serialnumber [BUGS=IA-32]
sg_def_reserved_size=
[SCSI]
sg_def_reserved_size= [SCSI]
sgalaxy= [HW,OSS]
Format: <io>,<irq>,<dma>,<dma2>,<sgbase>
@ -1479,14 +1496,16 @@ running once the system is up.
tp720= [HW,PS2]
trix= [HW,OSS] MediaTrix AudioTrix Pro
Format: <io>,<irq>,<dma>,<dma2>,<sb_io>,<sb_irq>,<sb_dma>,<mpu_io>,<mpu_irq>
Format:
<io>,<irq>,<dma>,<dma2>,<sb_io>,<sb_irq>,<sb_dma>,<mpu_io>,<mpu_irq>
tsdev.xres= [TS] Horizontal screen resolution.
tsdev.yres= [TS] Vertical screen resolution.
turbografx.map[2|3]=
[HW,JOY] TurboGraFX parallel port interface
Format: <port#>,<js1>,<js2>,<js3>,<js4>,<js5>,<js6>,<js7>
turbografx.map[2|3]= [HW,JOY]
TurboGraFX parallel port interface
Format:
<port#>,<js1>,<js2>,<js3>,<js4>,<js5>,<js6>,<js7>
See also Documentation/input/joystick-parport.txt
u14-34f= [HW,SCSI] UltraStor 14F/34F SCSI host adapter
@ -1507,12 +1526,13 @@ running once the system is up.
See Documentation/fb/modedb.txt.
vga= [BOOT,IA-32] Select a particular video mode
See Documentation/i386/boot.txt and Documentation/svga.txt.
See Documentation/i386/boot.txt and
Documentation/svga.txt.
Use vga=ask for menu.
This is actually a boot loader parameter; the value is
passed to the kernel using a special protocol.
vmalloc=nn[KMG] [KNL,BOOT] forces the vmalloc area to have an exact
vmalloc=nn[KMG] [KNL,BOOT] Forces the vmalloc area to have an exact
size of <nn>. This can be used to increase the
minimum size (128MB on x86). It can also be used to
decrease the size and leave more room for directly
@ -1538,21 +1558,25 @@ running once the system is up.
xd_geo= See header of drivers/block/xd.c.
xirc2ps_cs= [NET,PCMCIA]
Format: <irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
Format:
<irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
______________________________________________________________________
Changelog:
The last known update (for 2.4.0) - the changelog was not kept before.
2000-06-?? Mr. Unknown
The last known update (for 2.4.0) - the changelog was not kept before.
2002-11-24 Petr Baudis <pasky@ucw.cz>
Randy Dunlap <randy.dunlap@verizon.net>
Update for 2.5.49, description for most of the options introduced,
references to other documentation (C files, READMEs, ..), added S390,
PPC, SPARC, MTD, ALSA and OSS category. Minor corrections and
reformatting.
2002-11-24 Petr Baudis <pasky@ucw.cz>
Randy Dunlap <randy.dunlap@verizon.net>
2005-10-19 Randy Dunlap <rdunlap@xenotime.net>
Lots of typos, whitespace, some reformatting.
TODO:

View file

@ -777,7 +777,7 @@ doing so is the same as described in the "Configuring Multiple Bonds
Manually" section, below.
NOTE: It has been observed that some Red Hat supplied kernels
are apparently unable to rename modules at load time (the "-obonding1"
are apparently unable to rename modules at load time (the "-o bond1"
part). Attempts to pass that option to modprobe will produce an
"Operation not permitted" error. This has been reported on some
Fedora Core kernels, and has been seen on RHEL 4 as well. On kernels
@ -883,7 +883,8 @@ the above does not work, and the second bonding instance never sees
its options. In that case, the second options line can be substituted
as follows:
install bonding1 /sbin/modprobe bonding -obond1 mode=balance-alb miimon=50
install bond1 /sbin/modprobe --ignore-install bonding -o bond1 \
mode=balance-alb miimon=50
This may be repeated any number of times, specifying a new and
unique name in place of bond1 for each subsequent instance.
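For illustration only, a complete /etc/modprobe.conf fragment that configures
two bonds with different options might then look like the following; the
device names and option values are examples, not part of this change:

	alias bond0 bonding
	options bond0 mode=balance-rr miimon=100

	install bond1 /sbin/modprobe --ignore-install bonding -o bond1 \
		mode=balance-alb miimon=50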

View file

@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 14
EXTRAVERSION =-rc4
EXTRAVERSION =
NAME=Affluent Albatross
# *DOCUMENTATION*
@ -334,7 +334,7 @@ KALLSYMS = scripts/kallsyms
PERL = perl
CHECK = sparse
CHECKFLAGS := -D__linux__ -Dlinux -D__STDC__ -Dunix -D__unix__ $(CF)
CHECKFLAGS := -D__linux__ -Dlinux -D__STDC__ -Dunix -D__unix__ -Wbitwise $(CF)
MODFLAGS = -DMODULE
CFLAGS_MODULE = $(MODFLAGS)
AFLAGS_MODULE = $(MODFLAGS)
@ -372,7 +372,7 @@ export MODVERDIR := $(if $(KBUILD_EXTMOD),$(firstword $(KBUILD_EXTMOD))/).tmp_ve
# Files to ignore in find ... statements
RCS_FIND_IGNORE := \( -name SCCS -o -name BitKeeper -o -name .svn -o -name CVS -o -name .pc -o -name .hg \) -prune -o
RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn --exclude CVS --exclude .pc --exclude .hg
export RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn --exclude CVS --exclude .pc --exclude .hg
# ===========================================================================
# Rules shared between *config targets and build targets

View file

@ -154,7 +154,7 @@ pci_dma_supported(struct pci_dev *hwdev, dma_addr_t mask)
void *
dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, int gfp)
dma_addr_t *dma_handle, gfp_t gfp)
{
void *ret;

View file

@ -397,7 +397,7 @@ pci_alloc_consistent(struct pci_dev *pdev, size_t size, dma_addr_t *dma_addrp)
{
void *cpu_addr;
long order = get_order(size);
int gfp = GFP_ATOMIC;
gfp_t gfp = GFP_ATOMIC;
try_again:
cpu_addr = (void *)__get_free_pages(gfp, order);

View file

@ -67,7 +67,7 @@ static void impd1_setvco(struct clk *clk, struct icst525_vco vco)
}
writel(0, impd1->base + IMPD1_LOCK);
#if DEBUG
#ifdef DEBUG
vco.v = val & 0x1ff;
vco.r = (val >> 9) & 0x7f;
vco.s = (val >> 16) & 7;
@ -427,17 +427,18 @@ static int impd1_probe(struct lm_device *dev)
return ret;
}
static int impd1_remove_one(struct device *dev, void *data)
{
device_unregister(dev);
return 0;
}
static void impd1_remove(struct lm_device *dev)
{
struct impd1_module *impd1 = lm_get_drvdata(dev);
struct list_head *l, *n;
int i;
list_for_each_safe(l, n, &dev->dev.children) {
struct device *d = list_to_dev(l);
device_unregister(d);
}
device_for_each_child(&dev->dev, NULL, impd1_remove_one);
for (i = 0; i < ARRAY_SIZE(impd1->vcos); i++)
clk_unregister(&impd1->vcos[i]);

View file

@ -488,6 +488,7 @@ static int is_pxafb_device(struct device * dev, void * data)
unsigned long spitz_get_hsync_len(void)
{
#ifdef CONFIG_FB_PXA
if (!spitz_pxafb_dev) {
spitz_pxafb_dev = bus_find_device(&platform_bus_type, NULL, NULL, is_pxafb_device);
if (!spitz_pxafb_dev)
@ -496,6 +497,7 @@ unsigned long spitz_get_hsync_len(void)
if (!get_hsync_time)
get_hsync_time = symbol_get(pxafb_get_hsync_time);
if (!get_hsync_time)
#endif
return 0;
return pxafb_get_hsync_time(spitz_pxafb_dev);

View file

@ -250,6 +250,25 @@ void __init pxa_set_i2c_info(struct i2c_pxa_platform_data *info)
i2c_device.dev.platform_data = info;
}
static struct resource i2s_resources[] = {
{
.start = 0x40400000,
.end = 0x40400083,
.flags = IORESOURCE_MEM,
}, {
.start = IRQ_I2S,
.end = IRQ_I2S,
.flags = IORESOURCE_IRQ,
},
};
static struct platform_device i2s_device = {
.name = "pxa2xx-i2s",
.id = -1,
.resource = i2s_resources,
.num_resources = ARRAY_SIZE(i2s_resources),
};
static struct platform_device *devices[] __initdata = {
&pxamci_device,
&udc_device,
@ -258,6 +277,7 @@ static struct platform_device *devices[] __initdata = {
&btuart_device,
&stuart_device,
&i2c_device,
&i2s_device,
};
static int __init pxa_init(void)

View file

@ -98,7 +98,10 @@ struct clk *clk_get(struct device *dev, const char *id)
struct clk *clk = ERR_PTR(-ENOENT);
int idno;
idno = (dev == NULL) ? -1 : to_platform_device(dev)->id;
if (dev == NULL || dev->bus != &platform_bus_type)
idno = -1;
else
idno = to_platform_device(dev)->id;
down(&clocks_sem);

View file

@ -307,9 +307,9 @@ static void bast_nand_select(struct s3c2410_nand_set *set, int slot)
}
static struct s3c2410_platform_nand bast_nand_info = {
.tacls = 40,
.twrph0 = 80,
.twrph1 = 80,
.tacls = 30,
.twrph0 = 60,
.twrph1 = 60,
.nr_sets = ARRAY_SIZE(bast_nand_sets),
.sets = bast_nand_sets,
.select_chip = bast_nand_select,

View file

@ -75,7 +75,7 @@ static struct vm_region consistent_head = {
};
static struct vm_region *
vm_region_alloc(struct vm_region *head, size_t size, int gfp)
vm_region_alloc(struct vm_region *head, size_t size, gfp_t gfp)
{
unsigned long addr = head->vm_start, end = head->vm_end - size;
unsigned long flags;
@ -133,7 +133,7 @@ static struct vm_region *vm_region_find(struct vm_region *head, unsigned long ad
#endif
static void *
__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, int gfp,
__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, gfp_t gfp,
pgprot_t prot)
{
struct page *page;
@ -251,7 +251,7 @@ __dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, int gfp,
* virtual and bus address for that space.
*/
void *
dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *handle, int gfp)
dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *handle, gfp_t gfp)
{
return __dma_alloc(dev, size, handle, gfp,
pgprot_noncached(pgprot_kernel));
@ -263,7 +263,7 @@ EXPORT_SYMBOL(dma_alloc_coherent);
* dma_alloc_coherent above.
*/
void *
dma_alloc_writecombine(struct device *dev, size_t size, dma_addr_t *handle, int gfp)
dma_alloc_writecombine(struct device *dev, size_t size, dma_addr_t *handle, gfp_t gfp)
{
return __dma_alloc(dev, size, handle, gfp,
pgprot_writecombine(pgprot_kernel));

View file

@ -55,7 +55,14 @@ ENTRY(cpu_v6_proc_init)
mov pc, lr
ENTRY(cpu_v6_proc_fin)
mov pc, lr
stmfd sp!, {lr}
cpsid if @ disable interrupts
bl v6_flush_kern_cache_all
mrc p15, 0, r0, c1, c0, 0 @ ctrl register
bic r0, r0, #0x1000 @ ...i............
bic r0, r0, #0x0006 @ .............ca.
mcr p15, 0, r0, c1, c0, 0 @ disable caches
ldmfd sp!, {pc}
/*
* cpu_v6_reset(loc)

View file

@ -33,7 +33,7 @@ struct dma_alloc_record {
static DEFINE_SPINLOCK(dma_alloc_lock);
static LIST_HEAD(dma_alloc_list);
void *dma_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_handle, int gfp)
void *dma_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_handle, gfp_t gfp)
{
struct dma_alloc_record *new;
struct list_head *this = &dma_alloc_list;

View file

@ -17,7 +17,7 @@
#include <linux/highmem.h>
#include <asm/io.h>
void *dma_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_handle, int gfp)
void *dma_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_handle, gfp_t gfp)
{
void *ret;

View file

@ -81,7 +81,7 @@ static int map_page(unsigned long va, unsigned long pa, pgprot_t prot)
* portions of the kernel with single large page TLB entries, and
* still get unique uncached pages for consistent DMA.
*/
void *consistent_alloc(int gfp, size_t size, dma_addr_t *dma_handle)
void *consistent_alloc(gfp_t gfp, size_t size, dma_addr_t *dma_handle)
{
struct vm_struct *area;
unsigned long page, va, pa;

View file

@ -44,7 +44,7 @@
#define PFX "powernow-k8: "
#define BFX PFX "BIOS error: "
#define VERSION "version 1.50.3"
#define VERSION "version 1.50.4"
#include "powernow-k8.h"
/* serialize freq changes */
@ -111,8 +111,8 @@ static int query_current_values_with_pending_wait(struct powernow_k8_data *data)
u32 i = 0;
do {
if (i++ > 0x1000000) {
printk(KERN_ERR PFX "detected change pending stuck\n");
if (i++ > 10000) {
dprintk("detected change pending stuck\n");
return 1;
}
rdmsr(MSR_FIDVID_STATUS, lo, hi);
@ -159,6 +159,7 @@ static int write_new_fid(struct powernow_k8_data *data, u32 fid)
{
u32 lo;
u32 savevid = data->currvid;
u32 i = 0;
if ((fid & INVALID_FID_MASK) || (data->currvid & INVALID_VID_MASK)) {
printk(KERN_ERR PFX "internal error - overflow on fid write\n");
@ -170,10 +171,13 @@ static int write_new_fid(struct powernow_k8_data *data, u32 fid)
dprintk("writing fid 0x%x, lo 0x%x, hi 0x%x\n",
fid, lo, data->plllock * PLL_LOCK_CONVERSION);
do {
wrmsr(MSR_FIDVID_CTL, lo, data->plllock * PLL_LOCK_CONVERSION);
if (query_current_values_with_pending_wait(data))
if (i++ > 100) {
printk(KERN_ERR PFX "internal error - pending bit very stuck - no further pstate changes possible\n");
return 1;
}
} while (query_current_values_with_pending_wait(data));
count_off_irt(data);
@ -197,6 +201,7 @@ static int write_new_vid(struct powernow_k8_data *data, u32 vid)
{
u32 lo;
u32 savefid = data->currfid;
int i = 0;
if ((data->currfid & INVALID_FID_MASK) || (vid & INVALID_VID_MASK)) {
printk(KERN_ERR PFX "internal error - overflow on vid write\n");
@ -208,10 +213,13 @@ static int write_new_vid(struct powernow_k8_data *data, u32 vid)
dprintk("writing vid 0x%x, lo 0x%x, hi 0x%x\n",
vid, lo, STOP_GRANT_5NS);
do {
wrmsr(MSR_FIDVID_CTL, lo, STOP_GRANT_5NS);
if (query_current_values_with_pending_wait(data))
if (i++ > 100) {
printk(KERN_ERR PFX "internal error - pending bit very stuck - no further pstate changes possible\n");
return 1;
}
} while (query_current_values_with_pending_wait(data));
if (savefid != data->currfid) {
printk(KERN_ERR PFX "fid changed on vid trans, old 0x%x new 0x%x\n",

Просмотреть файл

@ -71,7 +71,7 @@ hwsw_init (void)
}
void *
hwsw_alloc_coherent (struct device *dev, size_t size, dma_addr_t *dma_handle, int flags)
hwsw_alloc_coherent (struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flags)
{
if (use_swiotlb(dev))
return swiotlb_alloc_coherent(dev, size, dma_handle, flags);

View file

@ -1076,7 +1076,7 @@ void sba_unmap_single(struct device *dev, dma_addr_t iova, size_t size, int dir)
* See Documentation/DMA-mapping.txt
*/
void *
sba_alloc_coherent (struct device *dev, size_t size, dma_addr_t *dma_handle, int flags)
sba_alloc_coherent (struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flags)
{
struct ioc *ioc;
void *addr;

View file

@ -123,8 +123,8 @@ swiotlb_init_with_default_size (size_t default_size)
/*
* Get IO TLB memory from the low pages
*/
io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs *
(1 << IO_TLB_SHIFT));
io_tlb_start = alloc_bootmem_low_pages_limit(io_tlb_nslabs *
(1 << IO_TLB_SHIFT), 0x100000000);
if (!io_tlb_start)
panic("Cannot allocate SWIOTLB buffer");
io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
@ -314,7 +314,7 @@ sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
void *
swiotlb_alloc_coherent(struct device *hwdev, size_t size,
dma_addr_t *dma_handle, int flags)
dma_addr_t *dma_handle, gfp_t flags)
{
unsigned long dev_addr;
void *ret;

View file

@ -939,7 +939,7 @@ xpc_map_bte_errors(bte_result_t error)
static inline void *
xpc_kmalloc_cacheline_aligned(size_t size, int flags, void **base)
xpc_kmalloc_cacheline_aligned(size_t size, gfp_t flags, void **base)
{
/* see if kmalloc will give us cachline aligned memory by default */
*base = kmalloc(size, flags);

View file

@ -75,7 +75,7 @@ EXPORT_SYMBOL(sn_dma_set_mask);
* more information.
*/
void *sn_dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t * dma_handle, int flags)
dma_addr_t * dma_handle, gfp_t flags)
{
void *cpuaddr;
unsigned long phys_addr;

View file

@ -18,7 +18,7 @@
#include <asm/io.h>
void *dma_alloc_noncoherent(struct device *dev, size_t size,
dma_addr_t * dma_handle, int gfp)
dma_addr_t * dma_handle, gfp_t gfp)
{
void *ret;
/* ignore region specifiers */
@ -39,7 +39,7 @@ void *dma_alloc_noncoherent(struct device *dev, size_t size,
EXPORT_SYMBOL(dma_alloc_noncoherent);
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t * dma_handle, int gfp)
dma_addr_t * dma_handle, gfp_t gfp)
__attribute__((alias("dma_alloc_noncoherent")));
EXPORT_SYMBOL(dma_alloc_coherent);

View file

@ -22,7 +22,7 @@
pdev_to_baddr(to_pci_dev(dev), (addr))
void *dma_alloc_noncoherent(struct device *dev, size_t size,
dma_addr_t * dma_handle, int gfp)
dma_addr_t * dma_handle, gfp_t gfp)
{
void *ret;
@ -44,7 +44,7 @@ void *dma_alloc_noncoherent(struct device *dev, size_t size,
EXPORT_SYMBOL(dma_alloc_noncoherent);
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t * dma_handle, int gfp)
dma_addr_t * dma_handle, gfp_t gfp)
__attribute__((alias("dma_alloc_noncoherent")));
EXPORT_SYMBOL(dma_alloc_coherent);

View file

@ -37,7 +37,7 @@
#define RAM_OFFSET_MASK 0x3fffffff
void *dma_alloc_noncoherent(struct device *dev, size_t size,
dma_addr_t * dma_handle, int gfp)
dma_addr_t * dma_handle, gfp_t gfp)
{
void *ret;
/* ignore region specifiers */
@ -61,7 +61,7 @@ void *dma_alloc_noncoherent(struct device *dev, size_t size,
EXPORT_SYMBOL(dma_alloc_noncoherent);
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t * dma_handle, int gfp)
dma_addr_t * dma_handle, gfp_t gfp)
{
void *ret;

View file

@ -24,7 +24,7 @@
*/
void *dma_alloc_noncoherent(struct device *dev, size_t size,
dma_addr_t * dma_handle, int gfp)
dma_addr_t * dma_handle, gfp_t gfp)
{
void *ret;
/* ignore region specifiers */
@ -45,7 +45,7 @@ void *dma_alloc_noncoherent(struct device *dev, size_t size,
EXPORT_SYMBOL(dma_alloc_noncoherent);
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t * dma_handle, int gfp)
dma_addr_t * dma_handle, gfp_t gfp)
{
void *ret;

View file

@ -349,7 +349,7 @@ pcxl_dma_init(void)
__initcall(pcxl_dma_init);
static void * pa11_dma_alloc_consistent (struct device *dev, size_t size, dma_addr_t *dma_handle, int flag)
static void * pa11_dma_alloc_consistent (struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flag)
{
unsigned long vaddr;
unsigned long paddr;
@ -502,13 +502,13 @@ struct hppa_dma_ops pcxl_dma_ops = {
};
static void *fail_alloc_consistent(struct device *dev, size_t size,
dma_addr_t *dma_handle, int flag)
dma_addr_t *dma_handle, gfp_t flag)
{
return NULL;
}
static void *pa11_dma_alloc_noncoherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, int flag)
dma_addr_t *dma_handle, gfp_t flag)
{
void *addr = NULL;

View file

@ -78,7 +78,7 @@ typedef struct {
const char *name2;
void (*open)(void);
void (*release)(void);
void *(*dma_alloc)(unsigned int, int);
void *(*dma_alloc)(unsigned int, gfp_t);
void (*dma_free)(void *, unsigned int);
int (*irqinit)(void);
#ifdef MODULE

View file

@ -318,7 +318,7 @@ struct cs_sound_settings {
static struct cs_sound_settings sound;
static void *CS_Alloc(unsigned int size, int flags);
static void *CS_Alloc(unsigned int size, gfp_t flags);
static void CS_Free(void *ptr, unsigned int size);
static int CS_IrqInit(void);
#ifdef MODULE
@ -959,7 +959,7 @@ static TRANS transCSNormalRead = {
/*** Low level stuff *********************************************************/
static void *CS_Alloc(unsigned int size, int flags)
static void *CS_Alloc(unsigned int size, gfp_t flags)
{
int order;

View file

@ -115,7 +115,7 @@ static struct vm_region consistent_head = {
};
static struct vm_region *
vm_region_alloc(struct vm_region *head, size_t size, int gfp)
vm_region_alloc(struct vm_region *head, size_t size, gfp_t gfp)
{
unsigned long addr = head->vm_start, end = head->vm_end - size;
unsigned long flags;
@ -173,7 +173,7 @@ static struct vm_region *vm_region_find(struct vm_region *head, unsigned long ad
* virtual and bus address for that space.
*/
void *
__dma_alloc_coherent(size_t size, dma_addr_t *handle, int gfp)
__dma_alloc_coherent(size_t size, dma_addr_t *handle, gfp_t gfp)
{
struct page *page;
struct vm_region *c;

View file

@ -114,9 +114,9 @@ struct page *pte_alloc_one(struct mm_struct *mm, unsigned long address)
struct page *ptepage;
#ifdef CONFIG_HIGHPTE
int flags = GFP_KERNEL | __GFP_HIGHMEM | __GFP_REPEAT;
gfp_t flags = GFP_KERNEL | __GFP_HIGHMEM | __GFP_REPEAT;
#else
int flags = GFP_KERNEL | __GFP_REPEAT;
gfp_t flags = GFP_KERNEL | __GFP_REPEAT;
#endif
ptepage = alloc_pages(flags, 0);

View file

@ -1,17 +1,17 @@
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.13-rc6
# Mon Aug 8 14:12:19 2005
# Linux kernel version: 2.6.14-rc4
# Thu Oct 20 08:29:10 2005
#
CONFIG_64BIT=y
CONFIG_MMU=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_HAVE_DEC_LOCK=y
CONFIG_EARLY_PRINTK=y
CONFIG_COMPAT=y
CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_FORCE_MAX_ZONEORDER=13
#
@ -26,6 +26,7 @@ CONFIG_INIT_ENV_ARG_LIMIT=32
# General setup
#
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
# CONFIG_POSIX_MQUEUE is not set
@ -36,6 +37,7 @@ CONFIG_HOTPLUG=y
CONFIG_KOBJECT_UEVENT=y
# CONFIG_IKCONFIG is not set
# CONFIG_CPUSETS is not set
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_EMBEDDED is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
@ -95,6 +97,7 @@ CONFIG_FLATMEM_MANUAL=y
# CONFIG_SPARSEMEM_MANUAL is not set
CONFIG_FLATMEM=y
CONFIG_FLAT_NODE_MEM_MAP=y
# CONFIG_SPARSEMEM_STATIC is not set
# CONFIG_NUMA is not set
CONFIG_SCHED_SMT=y
CONFIG_PREEMPT_NONE=y
@ -110,17 +113,18 @@ CONFIG_PPC_RTAS=y
CONFIG_RTAS_PROC=y
CONFIG_RTAS_FLASH=y
CONFIG_SECCOMP=y
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
CONFIG_PROC_DEVICETREE=y
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ISA_DMA_API=y
#
# General setup
# Bus Options
#
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
CONFIG_PCI_LEGACY_PROC=y
CONFIG_PCI_NAMES=y
# CONFIG_PCI_DEBUG is not set
#
@ -132,8 +136,6 @@ CONFIG_PCI_NAMES=y
# PCI Hotplug Support
#
# CONFIG_HOTPLUG_PCI is not set
CONFIG_PROC_DEVICETREE=y
# CONFIG_CMDLINE_BOOL is not set
#
# Networking
@ -163,8 +165,8 @@ CONFIG_SYN_COOKIES=y
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
CONFIG_INET_TUNNEL=y
CONFIG_IP_TCPDIAG=y
CONFIG_IP_TCPDIAG_IPV6=y
CONFIG_INET_DIAG=y
CONFIG_INET_TCP_DIAG=y
# CONFIG_TCP_CONG_ADVANCED is not set
CONFIG_TCP_CONG_BIC=y
@ -181,6 +183,7 @@ CONFIG_INET6_TUNNEL=m
CONFIG_IPV6_TUNNEL=m
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_DEBUG is not set
# CONFIG_NETFILTER_NETLINK is not set
#
# IP: Netfilter Configuration
@ -188,11 +191,14 @@ CONFIG_NETFILTER=y
CONFIG_IP_NF_CONNTRACK=y
# CONFIG_IP_NF_CT_ACCT is not set
# CONFIG_IP_NF_CONNTRACK_MARK is not set
# CONFIG_IP_NF_CONNTRACK_EVENTS is not set
CONFIG_IP_NF_CT_PROTO_SCTP=y
CONFIG_IP_NF_FTP=m
CONFIG_IP_NF_IRC=m
# CONFIG_IP_NF_NETBIOS_NS is not set
CONFIG_IP_NF_TFTP=m
CONFIG_IP_NF_AMANDA=m
# CONFIG_IP_NF_PPTP is not set
CONFIG_IP_NF_QUEUE=m
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_MATCH_LIMIT=m
@ -216,13 +222,16 @@ CONFIG_IP_NF_MATCH_OWNER=m
CONFIG_IP_NF_MATCH_ADDRTYPE=m
CONFIG_IP_NF_MATCH_REALM=m
CONFIG_IP_NF_MATCH_SCTP=m
# CONFIG_IP_NF_MATCH_DCCP is not set
CONFIG_IP_NF_MATCH_COMMENT=m
CONFIG_IP_NF_MATCH_HASHLIMIT=m
CONFIG_IP_NF_MATCH_STRING=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_TARGET_LOG=m
CONFIG_IP_NF_TARGET_ULOG=m
CONFIG_IP_NF_TARGET_TCPMSS=m
CONFIG_IP_NF_TARGET_NFQUEUE=m
CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_NAT_NEEDED=y
CONFIG_IP_NF_TARGET_MASQUERADE=m
@ -240,6 +249,7 @@ CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_DSCP=m
CONFIG_IP_NF_TARGET_MARK=m
CONFIG_IP_NF_TARGET_CLASSIFY=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_IP_NF_RAW=m
CONFIG_IP_NF_TARGET_NOTRACK=m
CONFIG_IP_NF_ARPTABLES=m
@ -251,6 +261,12 @@ CONFIG_IP_NF_ARP_MANGLE=m
#
# CONFIG_IP6_NF_QUEUE is not set
# CONFIG_IP6_NF_IPTABLES is not set
# CONFIG_IP6_NF_TARGET_NFQUEUE is not set
#
# DCCP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_DCCP is not set
#
# SCTP Configuration (EXPERIMENTAL)
@ -278,6 +294,7 @@ CONFIG_NET_CLS_ROUTE=y
# CONFIG_HAMRADIO is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_IEEE80211 is not set
#
# Device Drivers
@ -291,6 +308,11 @@ CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
# CONFIG_DEBUG_DRIVER is not set
#
# Connector - unified userspace <-> kernelspace linker
#
# CONFIG_CONNECTOR is not set
#
# Memory Technology Devices (MTD)
#
@ -322,7 +344,6 @@ CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=131072
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_CDROM_PKTCDVD is not set
#
@ -395,6 +416,7 @@ CONFIG_IDEDMA_AUTO=y
#
# SCSI device support
#
# CONFIG_RAID_ATTRS is not set
# CONFIG_SCSI is not set
#
@ -435,6 +457,11 @@ CONFIG_NETDEVICES=y
#
# CONFIG_ARCNET is not set
#
# PHY device support
#
# CONFIG_PHYLIB is not set
#
# Ethernet (10 or 100Mbit)
#
@ -442,6 +469,7 @@ CONFIG_NET_ETHERNET=y
CONFIG_MII=y
# CONFIG_HAPPYMEAL is not set
# CONFIG_SUNGEM is not set
# CONFIG_CASSINI is not set
# CONFIG_NET_VENDOR_3COM is not set
#
@ -462,15 +490,18 @@ CONFIG_E1000=m
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
# CONFIG_R8169 is not set
# CONFIG_SIS190 is not set
CONFIG_SKGE=m
# CONFIG_SK98LIN is not set
# CONFIG_TIGON3 is not set
# CONFIG_BNX2 is not set
# CONFIG_SPIDER_NET is not set
# CONFIG_MV643XX_ETH is not set
#
# Ethernet (10000 Mbit)
#
# CONFIG_CHELSIO_T1 is not set
# CONFIG_IXGB is not set
# CONFIG_S2IO is not set
@ -552,6 +583,7 @@ CONFIG_HW_CONSOLE=y
CONFIG_SERIAL_NONSTANDARD=y
# CONFIG_ROCKETPORT is not set
# CONFIG_CYCLADES is not set
# CONFIG_DIGIEPCA is not set
# CONFIG_MOXA_SMARTIO is not set
# CONFIG_ISI is not set
# CONFIG_SYNCLINK is not set
@ -642,7 +674,6 @@ CONFIG_I2C_ALGOBIT=y
# CONFIG_I2C_I801 is not set
# CONFIG_I2C_I810 is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_ISA is not set
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_PARPORT_LIGHT is not set
# CONFIG_I2C_PROSAVAGE is not set
@ -656,7 +687,6 @@ CONFIG_I2C_ALGOBIT=y
# CONFIG_I2C_VIAPRO is not set
# CONFIG_I2C_VOODOO3 is not set
# CONFIG_I2C_PCA_ISA is not set
# CONFIG_I2C_SENSOR is not set
#
# Miscellaneous I2C Chip support
@ -683,11 +713,16 @@ CONFIG_I2C_ALGOBIT=y
# Hardware Monitoring support
#
# CONFIG_HWMON is not set
# CONFIG_HWMON_VID is not set
#
# Misc devices
#
#
# Multimedia Capabilities Port drivers
#
#
# Multimedia devices
#
@ -756,10 +791,6 @@ CONFIG_FS_MBCACHE=y
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
CONFIG_FS_POSIX_ACL=y
#
# XFS support
#
# CONFIG_XFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
@ -768,6 +799,7 @@ CONFIG_INOTIFY=y
CONFIG_DNOTIFY=y
# CONFIG_AUTOFS_FS is not set
# CONFIG_AUTOFS4_FS is not set
# CONFIG_FUSE_FS is not set
#
# CD-ROM/DVD Filesystems
@ -794,13 +826,11 @@ CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_SYSFS=y
# CONFIG_DEVPTS_FS_XATTR is not set
CONFIG_TMPFS=y
CONFIG_TMPFS_XATTR=y
# CONFIG_TMPFS_SECURITY is not set
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_RAMFS=y
# CONFIG_RELAYFS_FS is not set
#
# Miscellaneous filesystems
@ -846,6 +876,7 @@ CONFIG_SUNRPC=m
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
# CONFIG_9P_FS is not set
#
# Partition Types
@ -923,6 +954,7 @@ CONFIG_NLS_ISO8859_15=m
CONFIG_DEBUG_KERNEL=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_LOG_BUF_SHIFT=15
CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_SCHEDSTATS is not set
# CONFIG_DEBUG_SLAB is not set
# CONFIG_DEBUG_SPINLOCK is not set
@ -981,7 +1013,12 @@ CONFIG_CRYPTO_DEFLATE=m
# Library routines
#
# CONFIG_CRC_CCITT is not set
# CONFIG_CRC16 is not set
CONFIG_CRC32=y
# CONFIG_LIBCRC32C is not set
CONFIG_ZLIB_INFLATE=m
CONFIG_ZLIB_DEFLATE=m
CONFIG_TEXTSEARCH=y
CONFIG_TEXTSEARCH_KMP=m
CONFIG_TEXTSEARCH_BM=m
CONFIG_TEXTSEARCH_FSM=m

View file

@ -1,17 +1,17 @@
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.13-rc6
# Mon Aug 8 14:16:59 2005
# Linux kernel version: 2.6.14-rc4
# Thu Oct 20 08:30:23 2005
#
CONFIG_64BIT=y
CONFIG_MMU=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_HAVE_DEC_LOCK=y
CONFIG_EARLY_PRINTK=y
CONFIG_COMPAT=y
CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_FORCE_MAX_ZONEORDER=13
#
@ -26,6 +26,7 @@ CONFIG_INIT_ENV_ARG_LIMIT=32
# General setup
#
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
@ -37,6 +38,7 @@ CONFIG_KOBJECT_UEVENT=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_CPUSETS is not set
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_EMBEDDED is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
@ -97,6 +99,7 @@ CONFIG_FLATMEM_MANUAL=y
# CONFIG_SPARSEMEM_MANUAL is not set
CONFIG_FLATMEM=y
CONFIG_FLAT_NODE_MEM_MAP=y
# CONFIG_SPARSEMEM_STATIC is not set
# CONFIG_NUMA is not set
# CONFIG_SCHED_SMT is not set
CONFIG_PREEMPT_NONE=y
@ -109,19 +112,20 @@ CONFIG_HZ_250=y
CONFIG_HZ=250
CONFIG_GENERIC_HARDIRQS=y
CONFIG_SECCOMP=y
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
# CONFIG_HOTPLUG_CPU is not set
CONFIG_PROC_DEVICETREE=y
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ISA_DMA_API=y
#
# General setup
# Bus Options
#
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
CONFIG_PCI_LEGACY_PROC=y
CONFIG_PCI_NAMES=y
# CONFIG_PCI_DEBUG is not set
# CONFIG_HOTPLUG_CPU is not set
#
# PCCARD (PCMCIA/CardBus) support
@ -132,8 +136,6 @@ CONFIG_PCI_NAMES=y
# PCI Hotplug Support
#
# CONFIG_HOTPLUG_PCI is not set
CONFIG_PROC_DEVICETREE=y
# CONFIG_CMDLINE_BOOL is not set
#
# Networking
@ -163,8 +165,8 @@ CONFIG_INET_AH=m
CONFIG_INET_ESP=m
CONFIG_INET_IPCOMP=m
CONFIG_INET_TUNNEL=y
CONFIG_IP_TCPDIAG=m
# CONFIG_IP_TCPDIAG_IPV6 is not set
CONFIG_INET_DIAG=y
CONFIG_INET_TCP_DIAG=y
# CONFIG_TCP_CONG_ADVANCED is not set
CONFIG_TCP_CONG_BIC=y
@ -175,6 +177,7 @@ CONFIG_TCP_CONG_BIC=y
# CONFIG_IPV6 is not set
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_DEBUG is not set
# CONFIG_NETFILTER_NETLINK is not set
#
# IP: Netfilter Configuration
@ -182,11 +185,14 @@ CONFIG_NETFILTER=y
CONFIG_IP_NF_CONNTRACK=m
CONFIG_IP_NF_CT_ACCT=y
CONFIG_IP_NF_CONNTRACK_MARK=y
CONFIG_IP_NF_CONNTRACK_EVENTS=y
CONFIG_IP_NF_CT_PROTO_SCTP=m
CONFIG_IP_NF_FTP=m
CONFIG_IP_NF_IRC=m
# CONFIG_IP_NF_NETBIOS_NS is not set
CONFIG_IP_NF_TFTP=m
CONFIG_IP_NF_AMANDA=m
# CONFIG_IP_NF_PPTP is not set
CONFIG_IP_NF_QUEUE=m
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_MATCH_LIMIT=m
@ -210,14 +216,18 @@ CONFIG_IP_NF_MATCH_OWNER=m
CONFIG_IP_NF_MATCH_ADDRTYPE=m
CONFIG_IP_NF_MATCH_REALM=m
CONFIG_IP_NF_MATCH_SCTP=m
# CONFIG_IP_NF_MATCH_DCCP is not set
CONFIG_IP_NF_MATCH_COMMENT=m
CONFIG_IP_NF_MATCH_CONNMARK=m
CONFIG_IP_NF_MATCH_CONNBYTES=m
CONFIG_IP_NF_MATCH_HASHLIMIT=m
CONFIG_IP_NF_MATCH_STRING=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_TARGET_LOG=m
CONFIG_IP_NF_TARGET_ULOG=m
CONFIG_IP_NF_TARGET_TCPMSS=m
CONFIG_IP_NF_TARGET_NFQUEUE=m
CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_NAT_NEEDED=y
CONFIG_IP_NF_TARGET_MASQUERADE=m
@ -235,6 +245,7 @@ CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_DSCP=m
CONFIG_IP_NF_TARGET_MARK=m
CONFIG_IP_NF_TARGET_CLASSIFY=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_IP_NF_TARGET_CONNMARK=m
CONFIG_IP_NF_TARGET_CLUSTERIP=m
CONFIG_IP_NF_RAW=m
@ -243,6 +254,11 @@ CONFIG_IP_NF_ARPTABLES=m
CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
#
# DCCP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_DCCP is not set
#
# SCTP Configuration (EXPERIMENTAL)
#
@ -270,6 +286,7 @@ CONFIG_NET_CLS_ROUTE=y
# CONFIG_HAMRADIO is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_IEEE80211 is not set
#
# Device Drivers
@ -283,6 +300,11 @@ CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
# CONFIG_DEBUG_DRIVER is not set
#
# Connector - unified userspace <-> kernelspace linker
#
# CONFIG_CONNECTOR is not set
#
# Memory Technology Devices (MTD)
#
@ -315,7 +337,6 @@ CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=65536
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_CDROM_PKTCDVD=m
CONFIG_CDROM_PKTCDVD_BUFFERS=8
# CONFIG_CDROM_PKTCDVD_WCACHE is not set
@ -395,6 +416,7 @@ CONFIG_IDEDMA_AUTO=y
#
# SCSI device support
#
# CONFIG_RAID_ATTRS is not set
CONFIG_SCSI=y
CONFIG_SCSI_PROC_FS=y
@ -422,6 +444,7 @@ CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_SPI_ATTRS=y
# CONFIG_SCSI_FC_ATTRS is not set
# CONFIG_SCSI_ISCSI_ATTRS is not set
# CONFIG_SCSI_SAS_ATTRS is not set
#
# SCSI low-level drivers
@ -435,10 +458,12 @@ CONFIG_SCSI_SPI_ATTRS=y
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
# CONFIG_MEGARAID_SAS is not set
CONFIG_SCSI_SATA=y
# CONFIG_SCSI_SATA_AHCI is not set
CONFIG_SCSI_SATA_SVW=y
# CONFIG_SCSI_ATA_PIIX is not set
# CONFIG_SCSI_SATA_MV is not set
# CONFIG_SCSI_SATA_NV is not set
# CONFIG_SCSI_SATA_PROMISE is not set
# CONFIG_SCSI_SATA_QSTOR is not set
@ -498,6 +523,7 @@ CONFIG_DM_ZERO=m
# CONFIG_FUSION is not set
# CONFIG_FUSION_SPI is not set
# CONFIG_FUSION_FC is not set
# CONFIG_FUSION_SAS is not set
#
# IEEE 1394 (FireWire) support
@ -540,7 +566,6 @@ CONFIG_IEEE1394_RAWIO=y
#
CONFIG_ADB_PMU=y
CONFIG_PMAC_SMU=y
# CONFIG_PMAC_BACKLIGHT is not set
CONFIG_THERM_PM72=y
#
@ -557,6 +582,11 @@ CONFIG_TUN=m
#
# CONFIG_ARCNET is not set
#
# PHY device support
#
# CONFIG_PHYLIB is not set
#
# Ethernet (10 or 100Mbit)
#
@ -564,6 +594,7 @@ CONFIG_NET_ETHERNET=y
CONFIG_MII=y
# CONFIG_HAPPYMEAL is not set
CONFIG_SUNGEM=y
# CONFIG_CASSINI is not set
# CONFIG_NET_VENDOR_3COM is not set
#
@ -585,6 +616,7 @@ CONFIG_E1000=y
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
# CONFIG_R8169 is not set
# CONFIG_SIS190 is not set
# CONFIG_SKGE is not set
# CONFIG_SK98LIN is not set
CONFIG_TIGON3=m
@ -594,6 +626,7 @@ CONFIG_TIGON3=m
#
# Ethernet (10000 Mbit)
#
# CONFIG_CHELSIO_T1 is not set
# CONFIG_IXGB is not set
# CONFIG_S2IO is not set
@ -760,8 +793,8 @@ CONFIG_I2C_ALGOBIT=y
# CONFIG_I2C_I801 is not set
# CONFIG_I2C_I810 is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_ISA is not set
CONFIG_I2C_KEYWEST=y
CONFIG_I2C_PMAC_SMU=y
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_PARPORT_LIGHT is not set
# CONFIG_I2C_PROSAVAGE is not set
@ -775,7 +808,6 @@ CONFIG_I2C_KEYWEST=y
# CONFIG_I2C_VIAPRO is not set
# CONFIG_I2C_VOODOO3 is not set
# CONFIG_I2C_PCA_ISA is not set
# CONFIG_I2C_SENSOR is not set
#
# Miscellaneous I2C Chip support
@ -802,11 +834,16 @@ CONFIG_I2C_KEYWEST=y
# Hardware Monitoring support
#
# CONFIG_HWMON is not set
# CONFIG_HWMON_VID is not set
#
# Misc devices
#
#
# Multimedia Capabilities Port drivers
#
#
# Multimedia devices
#
@ -856,6 +893,7 @@ CONFIG_FB_RADEON_I2C=y
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_CYBLA is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_VIRTUAL is not set
@ -937,6 +975,7 @@ CONFIG_USB_STORAGE_DPCM=y
CONFIG_USB_STORAGE_SDDR09=y
CONFIG_USB_STORAGE_SDDR55=y
CONFIG_USB_STORAGE_JUMPSHOT=y
# CONFIG_USB_STORAGE_ONETOUCH is not set
#
# USB Input Devices
@ -956,9 +995,11 @@ CONFIG_USB_HIDDEV=y
# CONFIG_USB_MTOUCH is not set
# CONFIG_USB_ITMTOUCH is not set
# CONFIG_USB_EGALAX is not set
# CONFIG_USB_YEALINK is not set
# CONFIG_USB_XPAD is not set
# CONFIG_USB_ATI_REMOTE is not set
# CONFIG_USB_KEYSPAN_REMOTE is not set
# CONFIG_USB_APPLETOUCH is not set
#
# USB Imaging devices
@ -983,30 +1024,14 @@ CONFIG_USB_KAWETH=m
CONFIG_USB_PEGASUS=m
CONFIG_USB_RTL8150=m
CONFIG_USB_USBNET=m
#
# USB Host-to-Host Cables
#
CONFIG_USB_ALI_M5632=y
CONFIG_USB_AN2720=y
CONFIG_USB_BELKIN=y
CONFIG_USB_GENESYS=y
CONFIG_USB_NET1080=y
CONFIG_USB_PL2301=y
CONFIG_USB_KC2190=y
#
# Intelligent USB Devices/Gadgets
#
CONFIG_USB_ARMLINUX=y
CONFIG_USB_EPSON2888=y
CONFIG_USB_ZAURUS=y
CONFIG_USB_CDCETHER=y
#
# USB Network Adapters
#
CONFIG_USB_AX8817X=y
# CONFIG_USB_NET_AX8817X is not set
CONFIG_USB_NET_CDCETHER=m
# CONFIG_USB_NET_GL620A is not set
# CONFIG_USB_NET_NET1080 is not set
# CONFIG_USB_NET_PLUSB is not set
# CONFIG_USB_NET_RNDIS_HOST is not set
# CONFIG_USB_NET_CDC_SUBSET is not set
# CONFIG_USB_NET_ZAURUS is not set
CONFIG_USB_MON=y
#
@ -1124,16 +1149,12 @@ CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
# CONFIG_JFS_FS is not set
CONFIG_FS_POSIX_ACL=y
#
# XFS support
#
CONFIG_XFS_FS=m
CONFIG_XFS_EXPORT=y
# CONFIG_XFS_RT is not set
# CONFIG_XFS_QUOTA is not set
CONFIG_XFS_SECURITY=y
CONFIG_XFS_POSIX_ACL=y
# CONFIG_XFS_RT is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_INOTIFY=y
@ -1141,6 +1162,7 @@ CONFIG_INOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_AUTOFS_FS=m
# CONFIG_AUTOFS4_FS is not set
# CONFIG_FUSE_FS is not set
#
# CD-ROM/DVD Filesystems
@ -1168,14 +1190,11 @@ CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_SYSFS=y
CONFIG_DEVPTS_FS_XATTR=y
# CONFIG_DEVPTS_FS_SECURITY is not set
CONFIG_TMPFS=y
CONFIG_TMPFS_XATTR=y
CONFIG_TMPFS_SECURITY=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_RAMFS=y
# CONFIG_RELAYFS_FS is not set
#
# Miscellaneous filesystems
@ -1225,6 +1244,7 @@ CONFIG_CIFS=m
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
# CONFIG_9P_FS is not set
#
# Partition Types
@ -1303,6 +1323,7 @@ CONFIG_OPROFILE=y
CONFIG_DEBUG_KERNEL=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_LOG_BUF_SHIFT=17
CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_SCHEDSTATS is not set
# CONFIG_DEBUG_SLAB is not set
# CONFIG_DEBUG_SPINLOCK is not set
@ -1360,7 +1381,12 @@ CONFIG_CRYPTO_TEST=m
# Library routines
#
CONFIG_CRC_CCITT=m
# CONFIG_CRC16 is not set
CONFIG_CRC32=y
CONFIG_LIBCRC32C=m
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=m
CONFIG_TEXTSEARCH=y
CONFIG_TEXTSEARCH_KMP=m
CONFIG_TEXTSEARCH_BM=m
CONFIG_TEXTSEARCH_FSM=m

View file

@ -1,17 +1,17 @@
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.13-rc6
# Mon Aug 8 14:17:02 2005
# Linux kernel version: 2.6.14-rc4
# Thu Oct 20 08:30:56 2005
#
CONFIG_64BIT=y
CONFIG_MMU=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_HAVE_DEC_LOCK=y
CONFIG_EARLY_PRINTK=y
CONFIG_COMPAT=y
CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_FORCE_MAX_ZONEORDER=13
#
@ -26,6 +26,7 @@ CONFIG_INIT_ENV_ARG_LIMIT=32
# General setup
#
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
@ -38,6 +39,7 @@ CONFIG_KOBJECT_UEVENT=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_CPUSETS is not set
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_EMBEDDED is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
@ -88,6 +90,7 @@ CONFIG_FLATMEM_MANUAL=y
# CONFIG_SPARSEMEM_MANUAL is not set
CONFIG_FLATMEM=y
CONFIG_FLAT_NODE_MEM_MAP=y
# CONFIG_SPARSEMEM_STATIC is not set
# CONFIG_NUMA is not set
# CONFIG_SCHED_SMT is not set
CONFIG_PREEMPT_NONE=y
@ -101,17 +104,16 @@ CONFIG_HZ=250
CONFIG_GENERIC_HARDIRQS=y
CONFIG_LPARCFG=y
CONFIG_SECCOMP=y
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
CONFIG_ISA_DMA_API=y
#
# General setup
# Bus Options
#
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
CONFIG_PCI_LEGACY_PROC=y
CONFIG_PCI_NAMES=y
# CONFIG_PCI_DEBUG is not set
#
@ -152,8 +154,8 @@ CONFIG_INET_AH=m
CONFIG_INET_ESP=m
CONFIG_INET_IPCOMP=m
CONFIG_INET_TUNNEL=y
CONFIG_IP_TCPDIAG=m
# CONFIG_IP_TCPDIAG_IPV6 is not set
CONFIG_INET_DIAG=y
CONFIG_INET_TCP_DIAG=y
# CONFIG_TCP_CONG_ADVANCED is not set
CONFIG_TCP_CONG_BIC=y
@ -164,6 +166,7 @@ CONFIG_TCP_CONG_BIC=y
# CONFIG_IPV6 is not set
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_DEBUG is not set
# CONFIG_NETFILTER_NETLINK is not set
#
# IP: Netfilter Configuration
@ -171,11 +174,14 @@ CONFIG_NETFILTER=y
CONFIG_IP_NF_CONNTRACK=m
CONFIG_IP_NF_CT_ACCT=y
CONFIG_IP_NF_CONNTRACK_MARK=y
CONFIG_IP_NF_CONNTRACK_EVENTS=y
CONFIG_IP_NF_CT_PROTO_SCTP=m
CONFIG_IP_NF_FTP=m
CONFIG_IP_NF_IRC=m
# CONFIG_IP_NF_NETBIOS_NS is not set
CONFIG_IP_NF_TFTP=m
CONFIG_IP_NF_AMANDA=m
# CONFIG_IP_NF_PPTP is not set
CONFIG_IP_NF_QUEUE=m
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_MATCH_LIMIT=m
@ -199,14 +205,18 @@ CONFIG_IP_NF_MATCH_OWNER=m
CONFIG_IP_NF_MATCH_ADDRTYPE=m
CONFIG_IP_NF_MATCH_REALM=m
CONFIG_IP_NF_MATCH_SCTP=m
# CONFIG_IP_NF_MATCH_DCCP is not set
CONFIG_IP_NF_MATCH_COMMENT=m
CONFIG_IP_NF_MATCH_CONNMARK=m
CONFIG_IP_NF_MATCH_CONNBYTES=m
CONFIG_IP_NF_MATCH_HASHLIMIT=m
CONFIG_IP_NF_MATCH_STRING=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_TARGET_LOG=m
CONFIG_IP_NF_TARGET_ULOG=m
CONFIG_IP_NF_TARGET_TCPMSS=m
CONFIG_IP_NF_TARGET_NFQUEUE=m
CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_NAT_NEEDED=y
CONFIG_IP_NF_TARGET_MASQUERADE=m
@ -224,6 +234,7 @@ CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_DSCP=m
CONFIG_IP_NF_TARGET_MARK=m
CONFIG_IP_NF_TARGET_CLASSIFY=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_IP_NF_TARGET_CONNMARK=m
CONFIG_IP_NF_TARGET_CLUSTERIP=m
CONFIG_IP_NF_RAW=m
@ -232,6 +243,11 @@ CONFIG_IP_NF_ARPTABLES=m
CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
#
# DCCP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_DCCP is not set
#
# SCTP Configuration (EXPERIMENTAL)
#
@ -259,6 +275,7 @@ CONFIG_NET_CLS_ROUTE=y
# CONFIG_HAMRADIO is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_IEEE80211 is not set
#
# Device Drivers
@ -272,6 +289,11 @@ CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=m
# CONFIG_DEBUG_DRIVER is not set
#
# Connector - unified userspace <-> kernelspace linker
#
# CONFIG_CONNECTOR is not set
#
# Memory Technology Devices (MTD)
#
@ -303,7 +325,6 @@ CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=65536
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_CDROM_PKTCDVD is not set
#
@ -323,6 +344,7 @@ CONFIG_IOSCHED_CFQ=y
#
# SCSI device support
#
# CONFIG_RAID_ATTRS is not set
CONFIG_SCSI=y
CONFIG_SCSI_PROC_FS=y
@ -350,6 +372,7 @@ CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_SPI_ATTRS=y
CONFIG_SCSI_FC_ATTRS=y
# CONFIG_SCSI_ISCSI_ATTRS is not set
# CONFIG_SCSI_SAS_ATTRS is not set
#
# SCSI low-level drivers
@ -363,6 +386,7 @@ CONFIG_SCSI_FC_ATTRS=y
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
# CONFIG_MEGARAID_SAS is not set
# CONFIG_SCSI_SATA is not set
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_DMX3191D is not set
@ -415,6 +439,7 @@ CONFIG_DM_ZERO=m
# CONFIG_FUSION is not set
# CONFIG_FUSION_SPI is not set
# CONFIG_FUSION_FC is not set
# CONFIG_FUSION_SAS is not set
#
# IEEE 1394 (FireWire) support
@ -444,6 +469,11 @@ CONFIG_TUN=m
#
# CONFIG_ARCNET is not set
#
# PHY device support
#
# CONFIG_PHYLIB is not set
#
# Ethernet (10 or 100Mbit)
#
@ -451,6 +481,7 @@ CONFIG_NET_ETHERNET=y
CONFIG_MII=y
# CONFIG_HAPPYMEAL is not set
# CONFIG_SUNGEM is not set
# CONFIG_CASSINI is not set
# CONFIG_NET_VENDOR_3COM is not set
#
@ -489,6 +520,7 @@ CONFIG_E1000=m
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
# CONFIG_R8169 is not set
# CONFIG_SIS190 is not set
# CONFIG_SKGE is not set
# CONFIG_SK98LIN is not set
# CONFIG_VIA_VELOCITY is not set
@ -498,6 +530,7 @@ CONFIG_E1000=m
#
# Ethernet (10000 Mbit)
#
# CONFIG_CHELSIO_T1 is not set
# CONFIG_IXGB is not set
# CONFIG_S2IO is not set
@ -632,7 +665,6 @@ CONFIG_MAX_RAW_DEVS=256
# I2C support
#
# CONFIG_I2C is not set
# CONFIG_I2C_SENSOR is not set
#
# Dallas's 1-wire bus
@ -643,11 +675,16 @@ CONFIG_MAX_RAW_DEVS=256
# Hardware Monitoring support
#
# CONFIG_HWMON is not set
# CONFIG_HWMON_VID is not set
#
# Misc devices
#
#
# Multimedia Capabilities Port drivers
#
#
# Multimedia devices
#
@ -722,16 +759,12 @@ CONFIG_JFS_SECURITY=y
# CONFIG_JFS_DEBUG is not set
# CONFIG_JFS_STATISTICS is not set
CONFIG_FS_POSIX_ACL=y
#
# XFS support
#
CONFIG_XFS_FS=m
CONFIG_XFS_EXPORT=y
# CONFIG_XFS_RT is not set
# CONFIG_XFS_QUOTA is not set
CONFIG_XFS_SECURITY=y
CONFIG_XFS_POSIX_ACL=y
# CONFIG_XFS_RT is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_INOTIFY=y
@ -739,6 +772,7 @@ CONFIG_INOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_AUTOFS_FS=m
# CONFIG_AUTOFS4_FS is not set
# CONFIG_FUSE_FS is not set
#
# CD-ROM/DVD Filesystems
@ -766,14 +800,11 @@ CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_SYSFS=y
CONFIG_DEVPTS_FS_XATTR=y
CONFIG_DEVPTS_FS_SECURITY=y
CONFIG_TMPFS=y
CONFIG_TMPFS_XATTR=y
CONFIG_TMPFS_SECURITY=y
# CONFIG_HUGETLBFS is not set
# CONFIG_HUGETLB_PAGE is not set
CONFIG_RAMFS=y
# CONFIG_RELAYFS_FS is not set
#
# Miscellaneous filesystems
@ -824,6 +855,7 @@ CONFIG_CIFS_POSIX=y
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
# CONFIG_9P_FS is not set
#
# Partition Types
@ -897,6 +929,7 @@ CONFIG_OPROFILE=y
CONFIG_DEBUG_KERNEL=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_LOG_BUF_SHIFT=17
CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_SCHEDSTATS is not set
# CONFIG_DEBUG_SLAB is not set
# CONFIG_DEBUG_SPINLOCK is not set
@ -954,7 +987,12 @@ CONFIG_CRYPTO_TEST=m
# Library routines
#
CONFIG_CRC_CCITT=m
# CONFIG_CRC16 is not set
CONFIG_CRC32=y
CONFIG_LIBCRC32C=m
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=m
CONFIG_TEXTSEARCH=y
CONFIG_TEXTSEARCH_KMP=m
CONFIG_TEXTSEARCH_BM=m
CONFIG_TEXTSEARCH_FSM=m

View file

@ -1,17 +1,17 @@
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.13-rc6
# Mon Aug 8 14:17:04 2005
# Linux kernel version: 2.6.14-rc4
# Thu Oct 20 08:31:24 2005
#
CONFIG_64BIT=y
CONFIG_MMU=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_HAVE_DEC_LOCK=y
CONFIG_EARLY_PRINTK=y
CONFIG_COMPAT=y
CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_FORCE_MAX_ZONEORDER=13
#
@ -26,6 +26,7 @@ CONFIG_INIT_ENV_ARG_LIMIT=32
# General setup
#
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
@ -37,6 +38,7 @@ CONFIG_KOBJECT_UEVENT=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_CPUSETS is not set
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_EMBEDDED is not set
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
@ -97,6 +99,7 @@ CONFIG_FLATMEM_MANUAL=y
# CONFIG_SPARSEMEM_MANUAL is not set
CONFIG_FLATMEM=y
CONFIG_FLAT_NODE_MEM_MAP=y
# CONFIG_SPARSEMEM_STATIC is not set
# CONFIG_NUMA is not set
# CONFIG_SCHED_SMT is not set
CONFIG_PREEMPT_NONE=y
@ -109,17 +112,18 @@ CONFIG_HZ_250=y
CONFIG_HZ=250
CONFIG_GENERIC_HARDIRQS=y
CONFIG_SECCOMP=y
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
CONFIG_PROC_DEVICETREE=y
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ISA_DMA_API=y
#
# General setup
# Bus Options
#
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
CONFIG_PCI_LEGACY_PROC=y
CONFIG_PCI_NAMES=y
# CONFIG_PCI_DEBUG is not set
#
@ -131,8 +135,6 @@ CONFIG_PCI_NAMES=y
# PCI Hotplug Support
#
# CONFIG_HOTPLUG_PCI is not set
CONFIG_PROC_DEVICETREE=y
# CONFIG_CMDLINE_BOOL is not set
#
# Networking
@ -163,13 +165,18 @@ CONFIG_IP_PNP_DHCP=y
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_TUNNEL is not set
CONFIG_IP_TCPDIAG=y
# CONFIG_IP_TCPDIAG_IPV6 is not set
CONFIG_INET_DIAG=y
CONFIG_INET_TCP_DIAG=y
# CONFIG_TCP_CONG_ADVANCED is not set
CONFIG_TCP_CONG_BIC=y
# CONFIG_IPV6 is not set
# CONFIG_NETFILTER is not set
#
# DCCP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_DCCP is not set
#
# SCTP Configuration (EXPERIMENTAL)
#
@ -196,6 +203,7 @@ CONFIG_TCP_CONG_BIC=y
# CONFIG_HAMRADIO is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_IEEE80211 is not set
#
# Device Drivers
@ -209,6 +217,11 @@ CONFIG_PREVENT_FIRMWARE_BUILD=y
# CONFIG_FW_LOADER is not set
# CONFIG_DEBUG_DRIVER is not set
#
# Connector - unified userspace <-> kernelspace linker
#
# CONFIG_CONNECTOR is not set
#
# Memory Technology Devices (MTD)
#
@ -240,7 +253,6 @@ CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=8192
# CONFIG_BLK_DEV_INITRD is not set
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_CDROM_PKTCDVD is not set
#
@ -313,6 +325,7 @@ CONFIG_IDEDMA_AUTO=y
#
# SCSI device support
#
# CONFIG_RAID_ATTRS is not set
# CONFIG_SCSI is not set
#
@ -353,6 +366,11 @@ CONFIG_NETDEVICES=y
#
# CONFIG_ARCNET is not set
#
# PHY device support
#
# CONFIG_PHYLIB is not set
#
# Ethernet (10 or 100Mbit)
#
@ -360,6 +378,7 @@ CONFIG_NET_ETHERNET=y
CONFIG_MII=y
# CONFIG_HAPPYMEAL is not set
# CONFIG_SUNGEM is not set
# CONFIG_CASSINI is not set
# CONFIG_NET_VENDOR_3COM is not set
#
@ -398,6 +417,7 @@ CONFIG_E1000=y
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
# CONFIG_R8169 is not set
# CONFIG_SIS190 is not set
# CONFIG_SKGE is not set
# CONFIG_SK98LIN is not set
# CONFIG_VIA_VELOCITY is not set
@ -408,6 +428,7 @@ CONFIG_E1000=y
#
# Ethernet (10000 Mbit)
#
# CONFIG_CHELSIO_T1 is not set
# CONFIG_IXGB is not set
# CONFIG_S2IO is not set
@ -553,7 +574,6 @@ CONFIG_I2C_AMD8111=y
# CONFIG_I2C_I801 is not set
# CONFIG_I2C_I810 is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_ISA is not set
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_PARPORT_LIGHT is not set
# CONFIG_I2C_PROSAVAGE is not set
@ -567,7 +587,6 @@ CONFIG_I2C_AMD8111=y
# CONFIG_I2C_VIAPRO is not set
# CONFIG_I2C_VOODOO3 is not set
# CONFIG_I2C_PCA_ISA is not set
# CONFIG_I2C_SENSOR is not set
#
# Miscellaneous I2C Chip support
@ -594,11 +613,16 @@ CONFIG_I2C_AMD8111=y
# Hardware Monitoring support
#
# CONFIG_HWMON is not set
# CONFIG_HWMON_VID is not set
#
# Misc devices
#
#
# Multimedia Capabilities Port drivers
#
#
# Multimedia devices
#
@ -681,9 +705,11 @@ CONFIG_USB_HIDINPUT=y
# CONFIG_USB_MTOUCH is not set
# CONFIG_USB_ITMTOUCH is not set
# CONFIG_USB_EGALAX is not set
# CONFIG_USB_YEALINK is not set
# CONFIG_USB_XPAD is not set
# CONFIG_USB_ATI_REMOTE is not set
# CONFIG_USB_KEYSPAN_REMOTE is not set
# CONFIG_USB_APPLETOUCH is not set
#
# USB Imaging devices
@ -814,10 +840,6 @@ CONFIG_JBD=y
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
CONFIG_FS_POSIX_ACL=y
#
# XFS support
#
# CONFIG_XFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
@ -826,6 +848,7 @@ CONFIG_INOTIFY=y
CONFIG_DNOTIFY=y
# CONFIG_AUTOFS_FS is not set
# CONFIG_AUTOFS4_FS is not set
# CONFIG_FUSE_FS is not set
#
# CD-ROM/DVD Filesystems
@ -849,14 +872,11 @@ CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_SYSFS=y
CONFIG_DEVPTS_FS_XATTR=y
# CONFIG_DEVPTS_FS_SECURITY is not set
CONFIG_TMPFS=y
CONFIG_TMPFS_XATTR=y
CONFIG_TMPFS_SECURITY=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_RAMFS=y
# CONFIG_RELAYFS_FS is not set
#
# Miscellaneous filesystems
@ -898,6 +918,7 @@ CONFIG_RPCSEC_GSS_KRB5=y
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
# CONFIG_9P_FS is not set
#
# Partition Types
@ -975,6 +996,7 @@ CONFIG_NLS_UTF8=y
CONFIG_DEBUG_KERNEL=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_LOG_BUF_SHIFT=17
CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_SCHEDSTATS is not set
CONFIG_DEBUG_SLAB=y
# CONFIG_DEBUG_SPINLOCK is not set
@ -1034,6 +1056,7 @@ CONFIG_CRYPTO_DES=y
# Library routines
#
CONFIG_CRC_CCITT=y
# CONFIG_CRC16 is not set
CONFIG_CRC32=y
# CONFIG_LIBCRC32C is not set
CONFIG_ZLIB_INFLATE=y

View file

@ -1,17 +1,17 @@
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.13-rc6
# Mon Aug 8 14:17:07 2005
# Linux kernel version: 2.6.14-rc4
# Thu Oct 20 08:32:17 2005
#
CONFIG_64BIT=y
CONFIG_MMU=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_HAVE_DEC_LOCK=y
CONFIG_EARLY_PRINTK=y
CONFIG_COMPAT=y
CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_FORCE_MAX_ZONEORDER=13
#
@ -26,6 +26,7 @@ CONFIG_INIT_ENV_ARG_LIMIT=32
# General setup
#
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
@ -38,6 +39,7 @@ CONFIG_KOBJECT_UEVENT=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_CPUSETS=y
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_EMBEDDED is not set
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
@ -104,6 +106,7 @@ CONFIG_DISCONTIGMEM_MANUAL=y
CONFIG_DISCONTIGMEM=y
CONFIG_FLAT_NODE_MEM_MAP=y
CONFIG_NEED_MULTIPLE_NODES=y
# CONFIG_SPARSEMEM_STATIC is not set
CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID=y
CONFIG_NODES_SPAN_OTHER_NODES=y
CONFIG_NUMA=y
@ -124,19 +127,20 @@ CONFIG_RTAS_FLASH=m
CONFIG_SCANLOG=m
CONFIG_LPARCFG=y
CONFIG_SECCOMP=y
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
CONFIG_HOTPLUG_CPU=y
CONFIG_PROC_DEVICETREE=y
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ISA_DMA_API=y
#
# General setup
# Bus Options
#
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
CONFIG_PCI_LEGACY_PROC=y
CONFIG_PCI_NAMES=y
# CONFIG_PCI_DEBUG is not set
CONFIG_HOTPLUG_CPU=y
#
# PCCARD (PCMCIA/CardBus) support
@ -152,8 +156,6 @@ CONFIG_HOTPLUG_PCI=m
# CONFIG_HOTPLUG_PCI_SHPC is not set
CONFIG_HOTPLUG_PCI_RPA=m
CONFIG_HOTPLUG_PCI_RPA_DLPAR=m
CONFIG_PROC_DEVICETREE=y
# CONFIG_CMDLINE_BOOL is not set
#
# Networking
@ -183,8 +185,8 @@ CONFIG_INET_AH=m
CONFIG_INET_ESP=m
CONFIG_INET_IPCOMP=m
CONFIG_INET_TUNNEL=y
CONFIG_IP_TCPDIAG=m
# CONFIG_IP_TCPDIAG_IPV6 is not set
CONFIG_INET_DIAG=y
CONFIG_INET_TCP_DIAG=y
# CONFIG_TCP_CONG_ADVANCED is not set
CONFIG_TCP_CONG_BIC=y
@ -195,6 +197,9 @@ CONFIG_TCP_CONG_BIC=y
# CONFIG_IPV6 is not set
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_DEBUG is not set
CONFIG_NETFILTER_NETLINK=y
CONFIG_NETFILTER_NETLINK_QUEUE=m
CONFIG_NETFILTER_NETLINK_LOG=m
#
# IP: Netfilter Configuration
@ -202,11 +207,15 @@ CONFIG_NETFILTER=y
CONFIG_IP_NF_CONNTRACK=m
CONFIG_IP_NF_CT_ACCT=y
CONFIG_IP_NF_CONNTRACK_MARK=y
CONFIG_IP_NF_CONNTRACK_EVENTS=y
CONFIG_IP_NF_CONNTRACK_NETLINK=m
CONFIG_IP_NF_CT_PROTO_SCTP=m
CONFIG_IP_NF_FTP=m
CONFIG_IP_NF_IRC=m
# CONFIG_IP_NF_NETBIOS_NS is not set
CONFIG_IP_NF_TFTP=m
CONFIG_IP_NF_AMANDA=m
# CONFIG_IP_NF_PPTP is not set
CONFIG_IP_NF_QUEUE=m
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_MATCH_LIMIT=m
@ -230,14 +239,18 @@ CONFIG_IP_NF_MATCH_OWNER=m
CONFIG_IP_NF_MATCH_ADDRTYPE=m
CONFIG_IP_NF_MATCH_REALM=m
CONFIG_IP_NF_MATCH_SCTP=m
# CONFIG_IP_NF_MATCH_DCCP is not set
CONFIG_IP_NF_MATCH_COMMENT=m
CONFIG_IP_NF_MATCH_CONNMARK=m
CONFIG_IP_NF_MATCH_CONNBYTES=m
CONFIG_IP_NF_MATCH_HASHLIMIT=m
CONFIG_IP_NF_MATCH_STRING=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_TARGET_LOG=m
CONFIG_IP_NF_TARGET_ULOG=m
CONFIG_IP_NF_TARGET_TCPMSS=m
CONFIG_IP_NF_TARGET_NFQUEUE=m
CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_NAT_NEEDED=y
CONFIG_IP_NF_TARGET_MASQUERADE=m
@ -255,6 +268,7 @@ CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_DSCP=m
CONFIG_IP_NF_TARGET_MARK=m
CONFIG_IP_NF_TARGET_CLASSIFY=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_IP_NF_TARGET_CONNMARK=m
CONFIG_IP_NF_TARGET_CLUSTERIP=m
CONFIG_IP_NF_RAW=m
@ -263,6 +277,11 @@ CONFIG_IP_NF_ARPTABLES=m
CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
#
# DCCP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_DCCP is not set
#
# SCTP Configuration (EXPERIMENTAL)
#
@ -290,6 +309,7 @@ CONFIG_NET_CLS_ROUTE=y
# CONFIG_HAMRADIO is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_IEEE80211 is not set
#
# Device Drivers
@ -303,6 +323,11 @@ CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
# CONFIG_DEBUG_DRIVER is not set
#
# Connector - unified userspace <-> kernelspace linker
#
# CONFIG_CONNECTOR is not set
#
# Memory Technology Devices (MTD)
#
@ -342,7 +367,6 @@ CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=65536
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_CDROM_PKTCDVD is not set
#
@ -416,6 +440,7 @@ CONFIG_IDEDMA_AUTO=y
#
# SCSI device support
#
# CONFIG_RAID_ATTRS is not set
CONFIG_SCSI=y
CONFIG_SCSI_PROC_FS=y
@ -443,6 +468,7 @@ CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_SPI_ATTRS=y
CONFIG_SCSI_FC_ATTRS=y
CONFIG_SCSI_ISCSI_ATTRS=m
# CONFIG_SCSI_SAS_ATTRS is not set
#
# SCSI low-level drivers
@ -456,6 +482,7 @@ CONFIG_SCSI_ISCSI_ATTRS=m
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
# CONFIG_MEGARAID_SAS is not set
# CONFIG_SCSI_SATA is not set
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_DMX3191D is not set
@ -517,6 +544,7 @@ CONFIG_DM_MULTIPATH_EMC=m
# CONFIG_FUSION is not set
# CONFIG_FUSION_SPI is not set
# CONFIG_FUSION_FC is not set
# CONFIG_FUSION_SAS is not set
#
# IEEE 1394 (FireWire) support
@ -546,6 +574,11 @@ CONFIG_TUN=m
#
# CONFIG_ARCNET is not set
#
# PHY device support
#
# CONFIG_PHYLIB is not set
#
# Ethernet (10 or 100Mbit)
#
@ -553,6 +586,7 @@ CONFIG_NET_ETHERNET=y
CONFIG_MII=y
# CONFIG_HAPPYMEAL is not set
# CONFIG_SUNGEM is not set
# CONFIG_CASSINI is not set
CONFIG_NET_VENDOR_3COM=y
CONFIG_VORTEX=y
# CONFIG_TYPHOON is not set
@ -581,6 +615,7 @@ CONFIG_E100=y
# CONFIG_EPIC100 is not set
# CONFIG_SUNDANCE is not set
# CONFIG_VIA_RHINE is not set
# CONFIG_NET_POCKET is not set
#
# Ethernet (1000 Mbit)
@ -594,6 +629,7 @@ CONFIG_E1000=y
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
# CONFIG_R8169 is not set
# CONFIG_SIS190 is not set
# CONFIG_SKGE is not set
# CONFIG_SK98LIN is not set
# CONFIG_VIA_VELOCITY is not set
@ -604,6 +640,7 @@ CONFIG_TIGON3=y
#
# Ethernet (10000 Mbit)
#
# CONFIG_CHELSIO_T1 is not set
CONFIG_IXGB=m
# CONFIG_IXGB_NAPI is not set
CONFIG_S2IO=m
@ -789,7 +826,6 @@ CONFIG_I2C_ALGOBIT=y
# CONFIG_I2C_I801 is not set
# CONFIG_I2C_I810 is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_ISA is not set
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_PARPORT is not set
# CONFIG_I2C_PARPORT_LIGHT is not set
@ -804,7 +840,6 @@ CONFIG_I2C_ALGOBIT=y
# CONFIG_I2C_VIAPRO is not set
# CONFIG_I2C_VOODOO3 is not set
# CONFIG_I2C_PCA_ISA is not set
# CONFIG_I2C_SENSOR is not set
#
# Miscellaneous I2C Chip support
@ -831,11 +866,16 @@ CONFIG_I2C_ALGOBIT=y
# Hardware Monitoring support
#
# CONFIG_HWMON is not set
# CONFIG_HWMON_VID is not set
#
# Misc devices
#
#
# Multimedia Capabilities Port drivers
#
#
# Multimedia devices
#
@ -885,6 +925,7 @@ CONFIG_FB_RADEON_I2C=y
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_CYBLA is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_VIRTUAL is not set
@ -982,9 +1023,11 @@ CONFIG_USB_HIDDEV=y
# CONFIG_USB_MTOUCH is not set
# CONFIG_USB_ITMTOUCH is not set
# CONFIG_USB_EGALAX is not set
# CONFIG_USB_YEALINK is not set
# CONFIG_USB_XPAD is not set
# CONFIG_USB_ATI_REMOTE is not set
# CONFIG_USB_KEYSPAN_REMOTE is not set
# CONFIG_USB_APPLETOUCH is not set
#
# USB Imaging devices
@ -1057,7 +1100,8 @@ CONFIG_USB_MON=y
# InfiniBand support
#
CONFIG_INFINIBAND=m
CONFIG_INFINIBAND_USER_VERBS=m
# CONFIG_INFINIBAND_USER_MAD is not set
# CONFIG_INFINIBAND_USER_ACCESS is not set
CONFIG_INFINIBAND_MTHCA=m
# CONFIG_INFINIBAND_MTHCA_DEBUG is not set
CONFIG_INFINIBAND_IPOIB=m
@ -1095,16 +1139,12 @@ CONFIG_JFS_SECURITY=y
# CONFIG_JFS_DEBUG is not set
# CONFIG_JFS_STATISTICS is not set
CONFIG_FS_POSIX_ACL=y
#
# XFS support
#
CONFIG_XFS_FS=m
CONFIG_XFS_EXPORT=y
# CONFIG_XFS_RT is not set
# CONFIG_XFS_QUOTA is not set
CONFIG_XFS_SECURITY=y
CONFIG_XFS_POSIX_ACL=y
# CONFIG_XFS_RT is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_INOTIFY=y
@ -1112,6 +1152,7 @@ CONFIG_INOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_AUTOFS_FS=m
# CONFIG_AUTOFS4_FS is not set
# CONFIG_FUSE_FS is not set
#
# CD-ROM/DVD Filesystems
@ -1139,14 +1180,11 @@ CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_SYSFS=y
CONFIG_DEVPTS_FS_XATTR=y
CONFIG_DEVPTS_FS_SECURITY=y
CONFIG_TMPFS=y
CONFIG_TMPFS_XATTR=y
CONFIG_TMPFS_SECURITY=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_RAMFS=y
# CONFIG_RELAYFS_FS is not set
#
# Miscellaneous filesystems
@ -1197,6 +1235,7 @@ CONFIG_CIFS_POSIX=y
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
# CONFIG_9P_FS is not set
#
# Partition Types
@ -1261,6 +1300,7 @@ CONFIG_OPROFILE=y
CONFIG_DEBUG_KERNEL=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_LOG_BUF_SHIFT=17
CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_SCHEDSTATS is not set
# CONFIG_DEBUG_SLAB is not set
# CONFIG_DEBUG_SPINLOCK is not set
@ -1320,7 +1360,12 @@ CONFIG_CRYPTO_TEST=m
# Library routines
#
CONFIG_CRC_CCITT=m
# CONFIG_CRC16 is not set
CONFIG_CRC32=y
CONFIG_LIBCRC32C=m
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=m
CONFIG_TEXTSEARCH=y
CONFIG_TEXTSEARCH_KMP=m
CONFIG_TEXTSEARCH_BM=m
CONFIG_TEXTSEARCH_FSM=m

View file

@ -1,17 +1,17 @@
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.13-rc6
# Mon Aug 8 14:16:54 2005
# Linux kernel version: 2.6.14-rc4
# Thu Oct 20 08:28:33 2005
#
CONFIG_64BIT=y
CONFIG_MMU=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_HAVE_DEC_LOCK=y
CONFIG_EARLY_PRINTK=y
CONFIG_COMPAT=y
CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_FORCE_MAX_ZONEORDER=13
#
@ -26,6 +26,7 @@ CONFIG_INIT_ENV_ARG_LIMIT=32
# General setup
#
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
@ -37,6 +38,7 @@ CONFIG_KOBJECT_UEVENT=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_CPUSETS=y
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_EMBEDDED is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
@ -106,6 +108,7 @@ CONFIG_DISCONTIGMEM_MANUAL=y
CONFIG_DISCONTIGMEM=y
CONFIG_FLAT_NODE_MEM_MAP=y
CONFIG_NEED_MULTIPLE_NODES=y
# CONFIG_SPARSEMEM_STATIC is not set
CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID=y
CONFIG_NODES_SPAN_OTHER_NODES=y
# CONFIG_NUMA is not set
@ -126,19 +129,20 @@ CONFIG_RTAS_FLASH=m
CONFIG_SCANLOG=m
CONFIG_LPARCFG=y
CONFIG_SECCOMP=y
CONFIG_BINFMT_ELF=y
CONFIG_BINFMT_MISC=m
CONFIG_HOTPLUG_CPU=y
CONFIG_PROC_DEVICETREE=y
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ISA_DMA_API=y
#
# General setup
# Bus Options
#
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_BINFMT_ELF=y
CONFIG_BINFMT_MISC=m
# CONFIG_PCI_LEGACY_PROC is not set
# CONFIG_PCI_NAMES is not set
# CONFIG_PCI_DEBUG is not set
CONFIG_HOTPLUG_CPU=y
#
# PCCARD (PCMCIA/CardBus) support
@ -154,8 +158,6 @@ CONFIG_HOTPLUG_PCI=m
# CONFIG_HOTPLUG_PCI_SHPC is not set
CONFIG_HOTPLUG_PCI_RPA=m
CONFIG_HOTPLUG_PCI_RPA_DLPAR=m
CONFIG_PROC_DEVICETREE=y
# CONFIG_CMDLINE_BOOL is not set
#
# Networking
@ -185,8 +187,8 @@ CONFIG_INET_AH=m
CONFIG_INET_ESP=m
CONFIG_INET_IPCOMP=m
CONFIG_INET_TUNNEL=y
# CONFIG_IP_TCPDIAG is not set
# CONFIG_IP_TCPDIAG_IPV6 is not set
CONFIG_INET_DIAG=y
CONFIG_INET_TCP_DIAG=y
# CONFIG_TCP_CONG_ADVANCED is not set
CONFIG_TCP_CONG_BIC=y
@ -197,6 +199,9 @@ CONFIG_TCP_CONG_BIC=y
# CONFIG_IPV6 is not set
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_DEBUG is not set
CONFIG_NETFILTER_NETLINK=y
CONFIG_NETFILTER_NETLINK_QUEUE=m
CONFIG_NETFILTER_NETLINK_LOG=m
#
# IP: Netfilter Configuration
@ -204,11 +209,15 @@ CONFIG_NETFILTER=y
CONFIG_IP_NF_CONNTRACK=m
CONFIG_IP_NF_CT_ACCT=y
CONFIG_IP_NF_CONNTRACK_MARK=y
CONFIG_IP_NF_CONNTRACK_EVENTS=y
CONFIG_IP_NF_CONNTRACK_NETLINK=m
CONFIG_IP_NF_CT_PROTO_SCTP=m
CONFIG_IP_NF_FTP=m
CONFIG_IP_NF_IRC=m
# CONFIG_IP_NF_NETBIOS_NS is not set
CONFIG_IP_NF_TFTP=m
CONFIG_IP_NF_AMANDA=m
# CONFIG_IP_NF_PPTP is not set
CONFIG_IP_NF_QUEUE=m
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_MATCH_LIMIT=m
@ -232,14 +241,18 @@ CONFIG_IP_NF_MATCH_OWNER=m
CONFIG_IP_NF_MATCH_ADDRTYPE=m
CONFIG_IP_NF_MATCH_REALM=m
CONFIG_IP_NF_MATCH_SCTP=m
CONFIG_IP_NF_MATCH_DCCP=m
CONFIG_IP_NF_MATCH_COMMENT=m
CONFIG_IP_NF_MATCH_CONNMARK=m
CONFIG_IP_NF_MATCH_CONNBYTES=m
CONFIG_IP_NF_MATCH_HASHLIMIT=m
CONFIG_IP_NF_MATCH_STRING=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_TARGET_LOG=m
CONFIG_IP_NF_TARGET_ULOG=m
CONFIG_IP_NF_TARGET_TCPMSS=m
CONFIG_IP_NF_TARGET_NFQUEUE=m
CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_NAT_NEEDED=y
CONFIG_IP_NF_TARGET_MASQUERADE=m
@ -257,6 +270,7 @@ CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_DSCP=m
CONFIG_IP_NF_TARGET_MARK=m
CONFIG_IP_NF_TARGET_CLASSIFY=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_IP_NF_TARGET_CONNMARK=m
CONFIG_IP_NF_TARGET_CLUSTERIP=m
CONFIG_IP_NF_RAW=m
@ -265,6 +279,11 @@ CONFIG_IP_NF_ARPTABLES=m
CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
#
# DCCP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_DCCP is not set
#
# SCTP Configuration (EXPERIMENTAL)
#
@ -292,6 +311,7 @@ CONFIG_NET_CLS_ROUTE=y
# CONFIG_HAMRADIO is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_IEEE80211 is not set
#
# Device Drivers
@ -305,6 +325,11 @@ CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
# CONFIG_DEBUG_DRIVER is not set
#
# Connector - unified userspace <-> kernelspace linker
#
# CONFIG_CONNECTOR is not set
#
# Memory Technology Devices (MTD)
#
@ -344,7 +369,6 @@ CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=65536
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_CDROM_PKTCDVD is not set
#
@ -422,6 +446,7 @@ CONFIG_IDEDMA_AUTO=y
#
# SCSI device support
#
# CONFIG_RAID_ATTRS is not set
CONFIG_SCSI=y
CONFIG_SCSI_PROC_FS=y
@ -449,6 +474,7 @@ CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_SPI_ATTRS=y
CONFIG_SCSI_FC_ATTRS=y
CONFIG_SCSI_ISCSI_ATTRS=m
# CONFIG_SCSI_SAS_ATTRS is not set
#
# SCSI low-level drivers
@ -462,10 +488,12 @@ CONFIG_SCSI_ISCSI_ATTRS=m
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
# CONFIG_MEGARAID_SAS is not set
CONFIG_SCSI_SATA=y
# CONFIG_SCSI_SATA_AHCI is not set
CONFIG_SCSI_SATA_SVW=y
# CONFIG_SCSI_ATA_PIIX is not set
# CONFIG_SCSI_SATA_MV is not set
# CONFIG_SCSI_SATA_NV is not set
# CONFIG_SCSI_SATA_PROMISE is not set
# CONFIG_SCSI_SATA_QSTOR is not set
@ -535,6 +563,7 @@ CONFIG_DM_MULTIPATH_EMC=m
# CONFIG_FUSION is not set
# CONFIG_FUSION_SPI is not set
# CONFIG_FUSION_FC is not set
# CONFIG_FUSION_SAS is not set
#
# IEEE 1394 (FireWire) support
@ -578,7 +607,6 @@ CONFIG_IEEE1394_AMDTP=m
#
CONFIG_ADB_PMU=y
CONFIG_PMAC_SMU=y
# CONFIG_PMAC_BACKLIGHT is not set
CONFIG_THERM_PM72=y
#
@ -595,6 +623,11 @@ CONFIG_TUN=m
#
# CONFIG_ARCNET is not set
#
# PHY device support
#
# CONFIG_PHYLIB is not set
#
# Ethernet (10 or 100Mbit)
#
@ -602,6 +635,7 @@ CONFIG_NET_ETHERNET=y
CONFIG_MII=y
# CONFIG_HAPPYMEAL is not set
CONFIG_SUNGEM=y
# CONFIG_CASSINI is not set
CONFIG_NET_VENDOR_3COM=y
CONFIG_VORTEX=y
# CONFIG_TYPHOON is not set
@ -630,6 +664,7 @@ CONFIG_E100=y
# CONFIG_EPIC100 is not set
# CONFIG_SUNDANCE is not set
# CONFIG_VIA_RHINE is not set
# CONFIG_NET_POCKET is not set
#
# Ethernet (1000 Mbit)
@ -643,16 +678,19 @@ CONFIG_E1000=y
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
# CONFIG_R8169 is not set
# CONFIG_SIS190 is not set
# CONFIG_SKGE is not set
# CONFIG_SK98LIN is not set
# CONFIG_VIA_VELOCITY is not set
CONFIG_TIGON3=y
# CONFIG_BNX2 is not set
# CONFIG_SPIDER_NET is not set
# CONFIG_MV643XX_ETH is not set
#
# Ethernet (10000 Mbit)
#
# CONFIG_CHELSIO_T1 is not set
CONFIG_IXGB=m
# CONFIG_IXGB_NAPI is not set
# CONFIG_S2IO is not set
@ -838,8 +876,8 @@ CONFIG_I2C_AMD8111=y
# CONFIG_I2C_I801 is not set
# CONFIG_I2C_I810 is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_ISA is not set
CONFIG_I2C_KEYWEST=y
CONFIG_I2C_PMAC_SMU=y
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_PARPORT is not set
# CONFIG_I2C_PARPORT_LIGHT is not set
@ -854,7 +892,6 @@ CONFIG_I2C_KEYWEST=y
# CONFIG_I2C_VIAPRO is not set
# CONFIG_I2C_VOODOO3 is not set
# CONFIG_I2C_PCA_ISA is not set
# CONFIG_I2C_SENSOR is not set
#
# Miscellaneous I2C Chip support
@ -881,11 +918,16 @@ CONFIG_I2C_KEYWEST=y
# Hardware Monitoring support
#
# CONFIG_HWMON is not set
# CONFIG_HWMON_VID is not set
#
# Misc devices
#
#
# Multimedia Capabilities Port drivers
#
#
# Multimedia devices
#
@ -939,6 +981,7 @@ CONFIG_FB_RADEON_I2C=y
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_CYBLA is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_VIRTUAL is not set
@ -1020,6 +1063,7 @@ CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_SDDR09 is not set
# CONFIG_USB_STORAGE_SDDR55 is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
# CONFIG_USB_STORAGE_ONETOUCH is not set
#
# USB Input Devices
@ -1036,9 +1080,11 @@ CONFIG_USB_HIDDEV=y
# CONFIG_USB_MTOUCH is not set
# CONFIG_USB_ITMTOUCH is not set
# CONFIG_USB_EGALAX is not set
# CONFIG_USB_YEALINK is not set
# CONFIG_USB_XPAD is not set
# CONFIG_USB_ATI_REMOTE is not set
# CONFIG_USB_KEYSPAN_REMOTE is not set
# CONFIG_USB_APPLETOUCH is not set
#
# USB Imaging devices
@ -1111,7 +1157,8 @@ CONFIG_USB_PEGASUS=y
# InfiniBand support
#
CONFIG_INFINIBAND=m
CONFIG_INFINIBAND_USER_VERBS=m
# CONFIG_INFINIBAND_USER_MAD is not set
# CONFIG_INFINIBAND_USER_ACCESS is not set
CONFIG_INFINIBAND_MTHCA=m
# CONFIG_INFINIBAND_MTHCA_DEBUG is not set
CONFIG_INFINIBAND_IPOIB=m
@ -1149,16 +1196,12 @@ CONFIG_JFS_SECURITY=y
# CONFIG_JFS_DEBUG is not set
# CONFIG_JFS_STATISTICS is not set
CONFIG_FS_POSIX_ACL=y
#
# XFS support
#
CONFIG_XFS_FS=m
CONFIG_XFS_EXPORT=y
# CONFIG_XFS_RT is not set
# CONFIG_XFS_QUOTA is not set
CONFIG_XFS_SECURITY=y
CONFIG_XFS_POSIX_ACL=y
# CONFIG_XFS_RT is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_INOTIFY=y
@ -1166,6 +1209,7 @@ CONFIG_INOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_AUTOFS_FS=y
# CONFIG_AUTOFS4_FS is not set
# CONFIG_FUSE_FS is not set
#
# CD-ROM/DVD Filesystems
@ -1192,14 +1236,11 @@ CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_SYSFS=y
CONFIG_DEVPTS_FS_XATTR=y
CONFIG_DEVPTS_FS_SECURITY=y
CONFIG_TMPFS=y
CONFIG_TMPFS_XATTR=y
CONFIG_TMPFS_SECURITY=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_RAMFS=y
# CONFIG_RELAYFS_FS is not set
#
# Miscellaneous filesystems
@ -1250,6 +1291,7 @@ CONFIG_CIFS_POSIX=y
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
# CONFIG_9P_FS is not set
#
# Partition Types
@ -1328,6 +1370,7 @@ CONFIG_OPROFILE=y
CONFIG_DEBUG_KERNEL=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_LOG_BUF_SHIFT=17
CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_SCHEDSTATS is not set
# CONFIG_DEBUG_SLAB is not set
# CONFIG_DEBUG_SPINLOCK is not set
@ -1387,7 +1430,12 @@ CONFIG_CRYPTO_TEST=m
# Library routines
#
CONFIG_CRC_CCITT=m
# CONFIG_CRC16 is not set
CONFIG_CRC32=y
CONFIG_LIBCRC32C=m
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=m
CONFIG_TEXTSEARCH=y
CONFIG_TEXTSEARCH_KMP=m
CONFIG_TEXTSEARCH_BM=m
CONFIG_TEXTSEARCH_FSM=m

View file

@ -66,7 +66,7 @@ static long iSeries_hpte_insert(unsigned long hpte_group, unsigned long va,
}
if (slot < 0) { /* MSB set means secondary group */
vflags |= HPTE_V_VALID;
vflags |= HPTE_V_SECONDARY;
secondary = 1;
slot &= 0x7fffffffffffffff;
}

View file

@ -506,8 +506,8 @@ struct mpic * __init mpic_alloc(unsigned long phys_addr,
mpic->senses_count = senses_count;
/* Map the global registers */
mpic->gregs = ioremap(phys_addr + MPIC_GREG_BASE, 0x1000);
mpic->tmregs = mpic->gregs + (MPIC_TIMER_BASE >> 2);
mpic->gregs = ioremap(phys_addr + MPIC_GREG_BASE, 0x2000);
mpic->tmregs = mpic->gregs + ((MPIC_TIMER_BASE - MPIC_GREG_BASE) >> 2);
BUG_ON(mpic->gregs == NULL);
/* Reset */

View file

@ -870,7 +870,7 @@ void div128_by_32( unsigned long dividend_high, unsigned long dividend_low,
rb = ((ra + b) - (x * divisor)) << 32;
y = (rb + c)/divisor;
rc = ((rb + b) - (y * divisor)) << 32;
rc = ((rb + c) - (y * divisor)) << 32;
z = (rc + d)/divisor;

View file

@ -109,7 +109,7 @@ __do_get_xsec:
lwz r6,(CFG_TB_TO_XS+4)(r9)
mulhwu r4,r7,r5
mulhwu r6,r7,r6
mullw r6,r7,r5
mullw r0,r7,r5
addc r6,r6,r0
/* At this point, we have the scaled xsec value in r4 + XER:CA

View file

@ -799,8 +799,7 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long ea,
if (cpus_equal(vma->vm_mm->cpu_vm_mask, tmp))
local = 1;
__hash_page(ea, pte_val(pte) & (_PAGE_USER|_PAGE_RW), vsid, ptep,
0x300, local);
__hash_page(ea, 0, vsid, ptep, 0x300, local);
local_irq_restore(flags);
}

View file

@ -23,7 +23,7 @@ extern void init_rts7751r2d_IRQ(void);
extern void *rts7751r2d_ioremap(unsigned long, unsigned long);
extern int rts7751r2d_irq_demux(int irq);
extern void *voyagergx_consistent_alloc(struct device *, size_t, dma_addr_t *, int);
extern void *voyagergx_consistent_alloc(struct device *, size_t, dma_addr_t *, gfp_t);
extern int voyagergx_consistent_free(struct device *, size_t, void *, dma_addr_t);
/*

View file

@ -31,7 +31,7 @@ static LIST_HEAD(voya_alloc_list);
#define OHCI_SRAM_SIZE 0x10000
void *voyagergx_consistent_alloc(struct device *dev, size_t size,
dma_addr_t *handle, int flag)
dma_addr_t *handle, gfp_t flag)
{
struct list_head *list = &voya_alloc_list;
struct voya_alloc_entry *entry;

View file

@ -33,7 +33,7 @@
static int gapspci_dma_used = 0;
void *dreamcast_consistent_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, int flag)
dma_addr_t *dma_handle, gfp_t flag)
{
unsigned long buf;

View file

@ -11,7 +11,7 @@
#include <linux/dma-mapping.h>
#include <asm/io.h>
void *consistent_alloc(int gfp, size_t size, dma_addr_t *handle)
void *consistent_alloc(gfp_t gfp, size_t size, dma_addr_t *handle)
{
struct page *page, *end, *free;
void *ret;

View file

@ -49,7 +49,7 @@ IPPROTO_EGP, IPPROTO_PUP, IPPROTO_UDP, IPPROTO_IDP, IPPROTO_RAW,
#else
extern void * mykmalloc(size_t s, int gfp);
extern void * mykmalloc(size_t s, gfp_t gfp);
extern void mykfree(void *);
#endif

View file

@ -39,7 +39,7 @@ static char * page = NULL ;
#else
void * mykmalloc(size_t s, int gfp)
void * mykmalloc(size_t s, gfp_t gfp)
{
static char * page;
static size_t free;

View file

@ -4,7 +4,7 @@
#include <kern_constants.h>
#define TASK_DEBUGREGS(task) ((unsigned long *) &(((char *) (task))[HOST_TASK_DEBUGREGS]))
#ifdef CONFIG_MODE_TT
#ifdef UML_CONFIG_MODE_TT
#define TASK_EXTERN_PID(task) *((int *) &(((char *) (task))[HOST_TASK_EXTERN_PID]))
#endif

View file

@ -183,10 +183,6 @@ struct syscall_args {
case RBP: val = UPT_RBP(regs); break; \
case ORIG_RAX: val = UPT_ORIG_RAX(regs); break; \
case CS: val = UPT_CS(regs); break; \
case DS: val = UPT_DS(regs); break; \
case ES: val = UPT_ES(regs); break; \
case FS: val = UPT_FS(regs); break; \
case GS: val = UPT_GS(regs); break; \
case EFLAGS: val = UPT_EFLAGS(regs); break; \
default : \
panic("Bad register in UPT_REG : %d\n", reg); \

View file

@ -3,7 +3,7 @@
#include <kern_constants.h>
#ifdef CONFIG_MODE_TT
#ifdef UML_CONFIG_MODE_TT
#define TASK_EXTERN_PID(task) *((int *) &(((char *) (task))[HOST_TASK_EXTERN_PID]))
#endif

View file

@ -252,7 +252,7 @@ void paging_init(void)
#endif
}
struct page *arch_validate(struct page *page, int mask, int order)
struct page *arch_validate(struct page *page, gfp_t mask, int order)
{
unsigned long addr, zero = 0;
int i;

View file

@ -80,7 +80,7 @@ void free_stack(unsigned long stack, int order)
unsigned long alloc_stack(int order, int atomic)
{
unsigned long page;
int flags = GFP_KERNEL;
gfp_t flags = GFP_KERNEL;
if (atomic)
flags = GFP_ATOMIC;

View file

@ -187,7 +187,7 @@ static void flush_gart(struct device *dev)
/* Allocate DMA memory on node near device */
noinline
static void *dma_alloc_pages(struct device *dev, unsigned gfp, unsigned order)
static void *dma_alloc_pages(struct device *dev, gfp_t gfp, unsigned order)
{
struct page *page;
int node;
@ -204,7 +204,7 @@ static void *dma_alloc_pages(struct device *dev, unsigned gfp, unsigned order)
*/
void *
dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle,
unsigned gfp)
gfp_t gfp)
{
void *memory;
unsigned long dma_mask = 0;

View file

@ -24,7 +24,7 @@ EXPORT_SYMBOL(iommu_sac_force);
*/
void *dma_alloc_coherent(struct device *hwdev, size_t size,
dma_addr_t *dma_handle, unsigned gfp)
dma_addr_t *dma_handle, gfp_t gfp)
{
void *ret;
u64 mask;

View file

@ -29,7 +29,7 @@
*/
void *
dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *handle, int gfp)
dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *handle, gfp_t gfp)
{
void *ret;

View file

@ -96,7 +96,7 @@ struct acpi_find_pci_root {
static acpi_status
do_root_bridge_busnr_callback(struct acpi_resource *resource, void *data)
{
int *busnr = (int *)data;
unsigned long *busnr = (unsigned long *)data;
struct acpi_resource_address64 address;
if (resource->id != ACPI_RSTYPE_ADDRESS16 &&
@ -115,13 +115,13 @@ do_root_bridge_busnr_callback(struct acpi_resource *resource, void *data)
static int get_root_bridge_busnr(acpi_handle handle)
{
acpi_status status;
int bus, bbn;
unsigned long bus, bbn;
struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer);
status = acpi_evaluate_integer(handle, METHOD_NAME__BBN, NULL,
(unsigned long *)&bbn);
&bbn);
if (status == AE_NOT_FOUND) {
/* Assume bus = 0 */
printk(KERN_INFO PREFIX
@ -153,7 +153,7 @@ static int get_root_bridge_busnr(acpi_handle handle)
}
exit:
acpi_os_free(buffer.pointer);
return bbn;
return (int)bbn;
}
static acpi_status

View file

@ -98,7 +98,6 @@ struct as_data {
struct as_rq *next_arq[2]; /* next in sort order */
sector_t last_sector[2]; /* last REQ_SYNC & REQ_ASYNC sectors */
struct list_head *dispatch; /* driver dispatch queue */
struct list_head *hash; /* request hash */
unsigned long exit_prob; /* probability a task will exit while
@ -239,6 +238,25 @@ static struct io_context *as_get_io_context(void)
return ioc;
}
static void as_put_io_context(struct as_rq *arq)
{
struct as_io_context *aic;
if (unlikely(!arq->io_context))
return;
aic = arq->io_context->aic;
if (arq->is_sync == REQ_SYNC && aic) {
spin_lock(&aic->lock);
set_bit(AS_TASK_IORUNNING, &aic->state);
aic->last_end_request = jiffies;
spin_unlock(&aic->lock);
}
put_io_context(arq->io_context);
}
/*
* the back merge hash support functions
*/
@ -261,14 +279,6 @@ static inline void as_del_arq_hash(struct as_rq *arq)
__as_del_arq_hash(arq);
}
static void as_remove_merge_hints(request_queue_t *q, struct as_rq *arq)
{
as_del_arq_hash(arq);
if (q->last_merge == arq->request)
q->last_merge = NULL;
}
static void as_add_arq_hash(struct as_data *ad, struct as_rq *arq)
{
struct request *rq = arq->request;
@ -312,7 +322,7 @@ static struct request *as_find_arq_hash(struct as_data *ad, sector_t offset)
BUG_ON(!arq->on_hash);
if (!rq_mergeable(__rq)) {
as_remove_merge_hints(ad->q, arq);
as_del_arq_hash(arq);
continue;
}
@ -950,23 +960,12 @@ static void as_completed_request(request_queue_t *q, struct request *rq)
WARN_ON(!list_empty(&rq->queuelist));
if (arq->state == AS_RQ_PRESCHED) {
WARN_ON(arq->io_context);
goto out;
}
if (arq->state == AS_RQ_MERGED)
goto out_ioc;
if (arq->state != AS_RQ_REMOVED) {
printk("arq->state %d\n", arq->state);
WARN_ON(1);
goto out;
}
if (!blk_fs_request(rq))
goto out;
if (ad->changed_batch && ad->nr_dispatched == 1) {
kblockd_schedule_work(&ad->antic_work);
ad->changed_batch = 0;
@ -1001,21 +1000,7 @@ static void as_completed_request(request_queue_t *q, struct request *rq)
}
}
out_ioc:
if (!arq->io_context)
goto out;
if (arq->is_sync == REQ_SYNC) {
struct as_io_context *aic = arq->io_context->aic;
if (aic) {
spin_lock(&aic->lock);
set_bit(AS_TASK_IORUNNING, &aic->state);
aic->last_end_request = jiffies;
spin_unlock(&aic->lock);
}
}
put_io_context(arq->io_context);
as_put_io_context(arq);
out:
arq->state = AS_RQ_POSTSCHED;
}
@ -1047,72 +1032,10 @@ static void as_remove_queued_request(request_queue_t *q, struct request *rq)
ad->next_arq[data_dir] = as_find_next_arq(ad, arq);
list_del_init(&arq->fifo);
as_remove_merge_hints(q, arq);
as_del_arq_hash(arq);
as_del_arq_rb(ad, arq);
}
/*
* as_remove_dispatched_request is called to remove a request which has gone
* to the dispatch list.
*/
static void as_remove_dispatched_request(request_queue_t *q, struct request *rq)
{
struct as_rq *arq = RQ_DATA(rq);
struct as_io_context *aic;
if (!arq) {
WARN_ON(1);
return;
}
WARN_ON(arq->state != AS_RQ_DISPATCHED);
WARN_ON(ON_RB(&arq->rb_node));
if (arq->io_context && arq->io_context->aic) {
aic = arq->io_context->aic;
if (aic) {
WARN_ON(!atomic_read(&aic->nr_dispatched));
atomic_dec(&aic->nr_dispatched);
}
}
}
/*
* as_remove_request is called when a driver has finished with a request.
* This should be only called for dispatched requests, but for some reason
* a POWER4 box running hwscan it does not.
*/
static void as_remove_request(request_queue_t *q, struct request *rq)
{
struct as_rq *arq = RQ_DATA(rq);
if (unlikely(arq->state == AS_RQ_NEW))
goto out;
if (ON_RB(&arq->rb_node)) {
if (arq->state != AS_RQ_QUEUED) {
printk("arq->state %d\n", arq->state);
WARN_ON(1);
goto out;
}
/*
* We'll lose the aliased request(s) here. I don't think this
* will ever happen, but if it does, hopefully someone will
* report it.
*/
WARN_ON(!list_empty(&rq->queuelist));
as_remove_queued_request(q, rq);
} else {
if (arq->state != AS_RQ_DISPATCHED) {
printk("arq->state %d\n", arq->state);
WARN_ON(1);
goto out;
}
as_remove_dispatched_request(q, rq);
}
out:
arq->state = AS_RQ_REMOVED;
}
/*
* as_fifo_expired returns 0 if there are no expired reads on the fifo,
* 1 otherwise. It is ratelimited so that we only perform the check once per
@ -1165,7 +1088,6 @@ static inline int as_batch_expired(struct as_data *ad)
static void as_move_to_dispatch(struct as_data *ad, struct as_rq *arq)
{
struct request *rq = arq->request;
struct list_head *insert;
const int data_dir = arq->is_sync;
BUG_ON(!ON_RB(&arq->rb_node));
@ -1198,13 +1120,13 @@ static void as_move_to_dispatch(struct as_data *ad, struct as_rq *arq)
/*
* take it off the sort and fifo list, add to dispatch queue
*/
insert = ad->dispatch->prev;
while (!list_empty(&rq->queuelist)) {
struct request *__rq = list_entry_rq(rq->queuelist.next);
struct as_rq *__arq = RQ_DATA(__rq);
list_move_tail(&__rq->queuelist, ad->dispatch);
list_del(&__rq->queuelist);
elv_dispatch_add_tail(ad->q, __rq);
if (__arq->io_context && __arq->io_context->aic)
atomic_inc(&__arq->io_context->aic->nr_dispatched);
@ -1218,7 +1140,8 @@ static void as_move_to_dispatch(struct as_data *ad, struct as_rq *arq)
as_remove_queued_request(ad->q, rq);
WARN_ON(arq->state != AS_RQ_QUEUED);
list_add(&rq->queuelist, insert);
elv_dispatch_sort(ad->q, rq);
arq->state = AS_RQ_DISPATCHED;
if (arq->io_context && arq->io_context->aic)
atomic_inc(&arq->io_context->aic->nr_dispatched);
@ -1230,12 +1153,42 @@ static void as_move_to_dispatch(struct as_data *ad, struct as_rq *arq)
* read/write expire, batch expire, etc, and moves it to the dispatch
* queue. Returns 1 if a request was found, 0 otherwise.
*/
static int as_dispatch_request(struct as_data *ad)
static int as_dispatch_request(request_queue_t *q, int force)
{
struct as_data *ad = q->elevator->elevator_data;
struct as_rq *arq;
const int reads = !list_empty(&ad->fifo_list[REQ_SYNC]);
const int writes = !list_empty(&ad->fifo_list[REQ_ASYNC]);
if (unlikely(force)) {
/*
* Forced dispatch, accounting is useless. Reset
* accounting states and dump fifo_lists. Note that
* batch_data_dir is reset to REQ_SYNC to avoid
* screwing write batch accounting as write batch
* accounting occurs on W->R transition.
*/
int dispatched = 0;
ad->batch_data_dir = REQ_SYNC;
ad->changed_batch = 0;
ad->new_batch = 0;
while (ad->next_arq[REQ_SYNC]) {
as_move_to_dispatch(ad, ad->next_arq[REQ_SYNC]);
dispatched++;
}
ad->last_check_fifo[REQ_SYNC] = jiffies;
while (ad->next_arq[REQ_ASYNC]) {
as_move_to_dispatch(ad, ad->next_arq[REQ_ASYNC]);
dispatched++;
}
ad->last_check_fifo[REQ_ASYNC] = jiffies;
return dispatched;
}
/* Signal that the write batch was uncontended, so we can't time it */
if (ad->batch_data_dir == REQ_ASYNC && !reads) {
if (ad->current_write_count == 0 || !writes)
@ -1359,20 +1312,6 @@ fifo_expired:
return 1;
}
static struct request *as_next_request(request_queue_t *q)
{
struct as_data *ad = q->elevator->elevator_data;
struct request *rq = NULL;
/*
* if there are still requests on the dispatch queue, grab the first
*/
if (!list_empty(ad->dispatch) || as_dispatch_request(ad))
rq = list_entry_rq(ad->dispatch->next);
return rq;
}
/*
* Add arq to a list behind alias
*/
@ -1404,17 +1343,25 @@ as_add_aliased_request(struct as_data *ad, struct as_rq *arq, struct as_rq *alia
/*
* Don't want to have to handle merges.
*/
as_remove_merge_hints(ad->q, arq);
as_del_arq_hash(arq);
}
/*
* add arq to rbtree and fifo
*/
static void as_add_request(struct as_data *ad, struct as_rq *arq)
static void as_add_request(request_queue_t *q, struct request *rq)
{
struct as_data *ad = q->elevator->elevator_data;
struct as_rq *arq = RQ_DATA(rq);
struct as_rq *alias;
int data_dir;
if (arq->state != AS_RQ_PRESCHED) {
printk("arq->state: %d\n", arq->state);
WARN_ON(1);
}
arq->state = AS_RQ_NEW;
if (rq_data_dir(arq->request) == READ
|| current->flags&PF_SYNCWRITE)
arq->is_sync = 1;
@ -1437,12 +1384,8 @@ static void as_add_request(struct as_data *ad, struct as_rq *arq)
arq->expires = jiffies + ad->fifo_expire[data_dir];
list_add_tail(&arq->fifo, &ad->fifo_list[data_dir]);
if (rq_mergeable(arq->request)) {
if (rq_mergeable(arq->request))
as_add_arq_hash(ad, arq);
if (!ad->q->last_merge)
ad->q->last_merge = arq->request;
}
as_update_arq(ad, arq); /* keep state machine up to date */
} else {
@ -1463,97 +1406,25 @@ static void as_add_request(struct as_data *ad, struct as_rq *arq)
arq->state = AS_RQ_QUEUED;
}
static void as_deactivate_request(request_queue_t *q, struct request *rq)
static void as_activate_request(request_queue_t *q, struct request *rq)
{
struct as_data *ad = q->elevator->elevator_data;
struct as_rq *arq = RQ_DATA(rq);
if (arq) {
if (arq->state == AS_RQ_REMOVED) {
WARN_ON(arq->state != AS_RQ_DISPATCHED);
arq->state = AS_RQ_REMOVED;
if (arq->io_context && arq->io_context->aic)
atomic_dec(&arq->io_context->aic->nr_dispatched);
}
static void as_deactivate_request(request_queue_t *q, struct request *rq)
{
struct as_rq *arq = RQ_DATA(rq);
WARN_ON(arq->state != AS_RQ_REMOVED);
arq->state = AS_RQ_DISPATCHED;
if (arq->io_context && arq->io_context->aic)
atomic_inc(&arq->io_context->aic->nr_dispatched);
}
} else
WARN_ON(blk_fs_request(rq)
&& (!(rq->flags & (REQ_HARDBARRIER|REQ_SOFTBARRIER))) );
/* Stop anticipating - let this request get through */
as_antic_stop(ad);
}
/*
* requeue the request. The request has not been completed, nor is it a
* new request, so don't touch accounting.
*/
static void as_requeue_request(request_queue_t *q, struct request *rq)
{
as_deactivate_request(q, rq);
list_add(&rq->queuelist, &q->queue_head);
}
/*
* Account a request that is inserted directly onto the dispatch queue.
* arq->io_context->aic->nr_dispatched should not need to be incremented
* because only new requests should come through here: requeues go through
* our explicit requeue handler.
*/
static void as_account_queued_request(struct as_data *ad, struct request *rq)
{
if (blk_fs_request(rq)) {
struct as_rq *arq = RQ_DATA(rq);
arq->state = AS_RQ_DISPATCHED;
ad->nr_dispatched++;
}
}
static void
as_insert_request(request_queue_t *q, struct request *rq, int where)
{
struct as_data *ad = q->elevator->elevator_data;
struct as_rq *arq = RQ_DATA(rq);
if (arq) {
if (arq->state != AS_RQ_PRESCHED) {
printk("arq->state: %d\n", arq->state);
WARN_ON(1);
}
arq->state = AS_RQ_NEW;
}
/* barriers must flush the reorder queue */
if (unlikely(rq->flags & (REQ_SOFTBARRIER | REQ_HARDBARRIER)
&& where == ELEVATOR_INSERT_SORT)) {
WARN_ON(1);
where = ELEVATOR_INSERT_BACK;
}
switch (where) {
case ELEVATOR_INSERT_BACK:
while (ad->next_arq[REQ_SYNC])
as_move_to_dispatch(ad, ad->next_arq[REQ_SYNC]);
while (ad->next_arq[REQ_ASYNC])
as_move_to_dispatch(ad, ad->next_arq[REQ_ASYNC]);
list_add_tail(&rq->queuelist, ad->dispatch);
as_account_queued_request(ad, rq);
as_antic_stop(ad);
break;
case ELEVATOR_INSERT_FRONT:
list_add(&rq->queuelist, ad->dispatch);
as_account_queued_request(ad, rq);
as_antic_stop(ad);
break;
case ELEVATOR_INSERT_SORT:
BUG_ON(!blk_fs_request(rq));
as_add_request(ad, arq);
break;
default:
BUG();
return;
}
}
/*
* as_queue_empty tells us if there are requests left in the device. It may
@ -1565,12 +1436,8 @@ static int as_queue_empty(request_queue_t *q)
{
struct as_data *ad = q->elevator->elevator_data;
if (!list_empty(&ad->fifo_list[REQ_ASYNC])
|| !list_empty(&ad->fifo_list[REQ_SYNC])
|| !list_empty(ad->dispatch))
return 0;
return 1;
return list_empty(&ad->fifo_list[REQ_ASYNC])
&& list_empty(&ad->fifo_list[REQ_SYNC]);
}
static struct request *
@ -1607,15 +1474,6 @@ as_merge(request_queue_t *q, struct request **req, struct bio *bio)
struct request *__rq;
int ret;
/*
* try last_merge to avoid going to hash
*/
ret = elv_try_last_merge(q, bio);
if (ret != ELEVATOR_NO_MERGE) {
__rq = q->last_merge;
goto out_insert;
}
/*
* see if the merge hash can satisfy a back merge
*/
@ -1644,9 +1502,6 @@ as_merge(request_queue_t *q, struct request **req, struct bio *bio)
return ELEVATOR_NO_MERGE;
out:
if (rq_mergeable(__rq))
q->last_merge = __rq;
out_insert:
if (ret) {
if (rq_mergeable(__rq))
as_hot_arq_hash(ad, RQ_DATA(__rq));
@ -1693,9 +1548,6 @@ static void as_merged_request(request_queue_t *q, struct request *req)
* behind the disk head. We currently don't bother adjusting.
*/
}
if (arq->on_hash)
q->last_merge = req;
}
static void
@ -1763,6 +1615,7 @@ as_merged_requests(request_queue_t *q, struct request *req,
* kill knowledge of next, this one is a goner
*/
as_remove_queued_request(q, next);
as_put_io_context(anext);
anext->state = AS_RQ_MERGED;
}
@ -1782,7 +1635,7 @@ static void as_work_handler(void *data)
unsigned long flags;
spin_lock_irqsave(q->queue_lock, flags);
if (as_next_request(q))
if (!as_queue_empty(q))
q->request_fn(q);
spin_unlock_irqrestore(q->queue_lock, flags);
}
@ -1797,7 +1650,9 @@ static void as_put_request(request_queue_t *q, struct request *rq)
return;
}
if (arq->state != AS_RQ_POSTSCHED && arq->state != AS_RQ_PRESCHED) {
if (unlikely(arq->state != AS_RQ_POSTSCHED &&
arq->state != AS_RQ_PRESCHED &&
arq->state != AS_RQ_MERGED)) {
printk("arq->state %d\n", arq->state);
WARN_ON(1);
}
@ -1807,7 +1662,7 @@ static void as_put_request(request_queue_t *q, struct request *rq)
}
static int as_set_request(request_queue_t *q, struct request *rq,
struct bio *bio, int gfp_mask)
struct bio *bio, gfp_t gfp_mask)
{
struct as_data *ad = q->elevator->elevator_data;
struct as_rq *arq = mempool_alloc(ad->arq_pool, gfp_mask);
@ -1907,7 +1762,6 @@ static int as_init_queue(request_queue_t *q, elevator_t *e)
INIT_LIST_HEAD(&ad->fifo_list[REQ_ASYNC]);
ad->sort_list[REQ_SYNC] = RB_ROOT;
ad->sort_list[REQ_ASYNC] = RB_ROOT;
ad->dispatch = &q->queue_head;
ad->fifo_expire[REQ_SYNC] = default_read_expire;
ad->fifo_expire[REQ_ASYNC] = default_write_expire;
ad->antic_expire = default_antic_expire;
@ -2072,10 +1926,9 @@ static struct elevator_type iosched_as = {
.elevator_merge_fn = as_merge,
.elevator_merged_fn = as_merged_request,
.elevator_merge_req_fn = as_merged_requests,
.elevator_next_req_fn = as_next_request,
.elevator_add_req_fn = as_insert_request,
.elevator_remove_req_fn = as_remove_request,
.elevator_requeue_req_fn = as_requeue_request,
.elevator_dispatch_fn = as_dispatch_request,
.elevator_add_req_fn = as_add_request,
.elevator_activate_req_fn = as_activate_request,
.elevator_deactivate_req_fn = as_deactivate_request,
.elevator_queue_empty_fn = as_queue_empty,
.elevator_completed_req_fn = as_completed_request,


@ -84,7 +84,6 @@ static int cfq_max_depth = 2;
(node)->rb_left = NULL; \
} while (0)
#define RB_CLEAR_ROOT(root) ((root)->rb_node = NULL)
#define ON_RB(node) ((node)->rb_color != RB_NONE)
#define rb_entry_crq(node) rb_entry((node), struct cfq_rq, rb_node)
#define rq_rb_key(rq) (rq)->sector
@ -271,10 +270,7 @@ CFQ_CFQQ_FNS(expired);
#undef CFQ_CFQQ_FNS
enum cfq_rq_state_flags {
CFQ_CRQ_FLAG_in_flight = 0,
CFQ_CRQ_FLAG_in_driver,
CFQ_CRQ_FLAG_is_sync,
CFQ_CRQ_FLAG_requeued,
CFQ_CRQ_FLAG_is_sync = 0,
};
#define CFQ_CRQ_FNS(name) \
@ -291,14 +287,11 @@ static inline int cfq_crq_##name(const struct cfq_rq *crq) \
return (crq->crq_flags & (1 << CFQ_CRQ_FLAG_##name)) != 0; \
}
CFQ_CRQ_FNS(in_flight);
CFQ_CRQ_FNS(in_driver);
CFQ_CRQ_FNS(is_sync);
CFQ_CRQ_FNS(requeued);
#undef CFQ_CRQ_FNS
static struct cfq_queue *cfq_find_cfq_hash(struct cfq_data *, unsigned int, unsigned short);
static void cfq_dispatch_sort(request_queue_t *, struct cfq_rq *);
static void cfq_dispatch_insert(request_queue_t *, struct cfq_rq *);
static void cfq_put_cfqd(struct cfq_data *cfqd);
#define process_sync(tsk) ((tsk)->flags & PF_SYNCWRITE)
@ -311,14 +304,6 @@ static inline void cfq_del_crq_hash(struct cfq_rq *crq)
hlist_del_init(&crq->hash);
}
static void cfq_remove_merge_hints(request_queue_t *q, struct cfq_rq *crq)
{
cfq_del_crq_hash(crq);
if (q->last_merge == crq->request)
q->last_merge = NULL;
}
static inline void cfq_add_crq_hash(struct cfq_data *cfqd, struct cfq_rq *crq)
{
const int hash_idx = CFQ_MHASH_FN(rq_hash_key(crq->request));
@ -347,18 +332,13 @@ static struct request *cfq_find_rq_hash(struct cfq_data *cfqd, sector_t offset)
return NULL;
}
static inline int cfq_pending_requests(struct cfq_data *cfqd)
{
return !list_empty(&cfqd->queue->queue_head) || cfqd->busy_queues;
}
/*
* scheduler run of queue, if there are requests pending and no one in the
* driver that will restart queueing
*/
static inline void cfq_schedule_dispatch(struct cfq_data *cfqd)
{
if (!cfqd->rq_in_driver && cfq_pending_requests(cfqd))
if (!cfqd->rq_in_driver && cfqd->busy_queues)
kblockd_schedule_work(&cfqd->unplug_work);
}
@ -366,7 +346,7 @@ static int cfq_queue_empty(request_queue_t *q)
{
struct cfq_data *cfqd = q->elevator->elevator_data;
return !cfq_pending_requests(cfqd);
return !cfqd->busy_queues;
}
/*
@ -386,11 +366,6 @@ cfq_choose_req(struct cfq_data *cfqd, struct cfq_rq *crq1, struct cfq_rq *crq2)
if (crq2 == NULL)
return crq1;
if (cfq_crq_requeued(crq1) && !cfq_crq_requeued(crq2))
return crq1;
else if (cfq_crq_requeued(crq2) && !cfq_crq_requeued(crq1))
return crq2;
if (cfq_crq_is_sync(crq1) && !cfq_crq_is_sync(crq2))
return crq1;
else if (cfq_crq_is_sync(crq2) && !cfq_crq_is_sync(crq1))
@ -461,10 +436,7 @@ cfq_find_next_crq(struct cfq_data *cfqd, struct cfq_queue *cfqq,
struct cfq_rq *crq_next = NULL, *crq_prev = NULL;
struct rb_node *rbnext, *rbprev;
rbnext = NULL;
if (ON_RB(&last->rb_node))
rbnext = rb_next(&last->rb_node);
if (!rbnext) {
if (!(rbnext = rb_next(&last->rb_node))) {
rbnext = rb_first(&cfqq->sort_list);
if (rbnext == &last->rb_node)
rbnext = NULL;
@ -545,13 +517,13 @@ static void cfq_resort_rr_list(struct cfq_queue *cfqq, int preempted)
* the pending list according to last request service
*/
static inline void
cfq_add_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq, int requeue)
cfq_add_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
{
BUG_ON(cfq_cfqq_on_rr(cfqq));
cfq_mark_cfqq_on_rr(cfqq);
cfqd->busy_queues++;
cfq_resort_rr_list(cfqq, requeue);
cfq_resort_rr_list(cfqq, 0);
}
static inline void
@ -571,8 +543,6 @@ cfq_del_cfqq_rr(struct cfq_data *cfqd, struct cfq_queue *cfqq)
static inline void cfq_del_crq_rb(struct cfq_rq *crq)
{
struct cfq_queue *cfqq = crq->cfq_queue;
if (ON_RB(&crq->rb_node)) {
struct cfq_data *cfqd = cfqq->cfqd;
const int sync = cfq_crq_is_sync(crq);
@ -587,7 +557,6 @@ static inline void cfq_del_crq_rb(struct cfq_rq *crq)
if (cfq_cfqq_on_rr(cfqq) && RB_EMPTY(&cfqq->sort_list))
cfq_del_cfqq_rr(cfqd, cfqq);
}
}
static struct cfq_rq *
__cfq_add_crq_rb(struct cfq_rq *crq)
@ -627,12 +596,12 @@ static void cfq_add_crq_rb(struct cfq_rq *crq)
* if that happens, put the alias on the dispatch list
*/
while ((__alias = __cfq_add_crq_rb(crq)) != NULL)
cfq_dispatch_sort(cfqd->queue, __alias);
cfq_dispatch_insert(cfqd->queue, __alias);
rb_insert_color(&crq->rb_node, &cfqq->sort_list);
if (!cfq_cfqq_on_rr(cfqq))
cfq_add_cfqq_rr(cfqd, cfqq, cfq_crq_requeued(crq));
cfq_add_cfqq_rr(cfqd, cfqq);
/*
* check if this request is a better next-serve candidate
@ -643,10 +612,8 @@ static void cfq_add_crq_rb(struct cfq_rq *crq)
static inline void
cfq_reposition_crq_rb(struct cfq_queue *cfqq, struct cfq_rq *crq)
{
if (ON_RB(&crq->rb_node)) {
rb_erase(&crq->rb_node, &cfqq->sort_list);
cfqq->queued[cfq_crq_is_sync(crq)]--;
}
cfq_add_crq_rb(crq);
}
@ -676,49 +643,28 @@ out:
return NULL;
}
static void cfq_activate_request(request_queue_t *q, struct request *rq)
{
struct cfq_data *cfqd = q->elevator->elevator_data;
cfqd->rq_in_driver++;
}
static void cfq_deactivate_request(request_queue_t *q, struct request *rq)
{
struct cfq_data *cfqd = q->elevator->elevator_data;
struct cfq_rq *crq = RQ_DATA(rq);
if (crq) {
struct cfq_queue *cfqq = crq->cfq_queue;
if (cfq_crq_in_driver(crq)) {
cfq_clear_crq_in_driver(crq);
WARN_ON(!cfqd->rq_in_driver);
cfqd->rq_in_driver--;
}
if (cfq_crq_in_flight(crq)) {
const int sync = cfq_crq_is_sync(crq);
cfq_clear_crq_in_flight(crq);
WARN_ON(!cfqq->on_dispatch[sync]);
cfqq->on_dispatch[sync]--;
}
cfq_mark_crq_requeued(crq);
}
}
/*
* make sure the service time gets corrected on reissue of this request
*/
static void cfq_requeue_request(request_queue_t *q, struct request *rq)
{
cfq_deactivate_request(q, rq);
list_add(&rq->queuelist, &q->queue_head);
}
static void cfq_remove_request(request_queue_t *q, struct request *rq)
static void cfq_remove_request(struct request *rq)
{
struct cfq_rq *crq = RQ_DATA(rq);
if (crq) {
list_del_init(&rq->queuelist);
cfq_del_crq_rb(crq);
cfq_remove_merge_hints(q, crq);
}
cfq_del_crq_hash(crq);
}
static int
@ -728,12 +674,6 @@ cfq_merge(request_queue_t *q, struct request **req, struct bio *bio)
struct request *__rq;
int ret;
ret = elv_try_last_merge(q, bio);
if (ret != ELEVATOR_NO_MERGE) {
__rq = q->last_merge;
goto out_insert;
}
__rq = cfq_find_rq_hash(cfqd, bio->bi_sector);
if (__rq && elv_rq_merge_ok(__rq, bio)) {
ret = ELEVATOR_BACK_MERGE;
@ -748,8 +688,6 @@ cfq_merge(request_queue_t *q, struct request **req, struct bio *bio)
return ELEVATOR_NO_MERGE;
out:
q->last_merge = __rq;
out_insert:
*req = __rq;
return ret;
}
@ -762,14 +700,12 @@ static void cfq_merged_request(request_queue_t *q, struct request *req)
cfq_del_crq_hash(crq);
cfq_add_crq_hash(cfqd, crq);
if (ON_RB(&crq->rb_node) && (rq_rb_key(req) != crq->rb_key)) {
if (rq_rb_key(req) != crq->rb_key) {
struct cfq_queue *cfqq = crq->cfq_queue;
cfq_update_next_crq(crq);
cfq_reposition_crq_rb(cfqq, crq);
}
q->last_merge = req;
}
static void
@ -785,7 +721,7 @@ cfq_merged_requests(request_queue_t *q, struct request *rq,
time_before(next->start_time, rq->start_time))
list_move(&rq->queuelist, &next->queuelist);
cfq_remove_request(q, next);
cfq_remove_request(next);
}
static inline void
@ -992,53 +928,15 @@ static int cfq_arm_slice_timer(struct cfq_data *cfqd, struct cfq_queue *cfqq)
return 1;
}
/*
* we dispatch cfqd->cfq_quantum requests in total from the rr_list queues,
* this function sector sorts the selected request to minimize seeks. we start
* at cfqd->last_sector, not 0.
*/
static void cfq_dispatch_sort(request_queue_t *q, struct cfq_rq *crq)
static void cfq_dispatch_insert(request_queue_t *q, struct cfq_rq *crq)
{
struct cfq_data *cfqd = q->elevator->elevator_data;
struct cfq_queue *cfqq = crq->cfq_queue;
struct list_head *head = &q->queue_head, *entry = head;
struct request *__rq;
sector_t last;
list_del(&crq->request->queuelist);
last = cfqd->last_sector;
list_for_each_entry_reverse(__rq, head, queuelist) {
struct cfq_rq *__crq = RQ_DATA(__rq);
if (blk_barrier_rq(__rq))
break;
if (!blk_fs_request(__rq))
break;
if (cfq_crq_requeued(__crq))
break;
if (__rq->sector <= crq->request->sector)
break;
if (__rq->sector > last && crq->request->sector < last) {
last = crq->request->sector + crq->request->nr_sectors;
break;
}
entry = &__rq->queuelist;
}
cfqd->last_sector = last;
cfqq->next_crq = cfq_find_next_crq(cfqd, cfqq, crq);
cfq_del_crq_rb(crq);
cfq_remove_merge_hints(q, crq);
cfq_mark_crq_in_flight(crq);
cfq_clear_crq_requeued(crq);
cfq_remove_request(crq->request);
cfqq->on_dispatch[cfq_crq_is_sync(crq)]++;
list_add_tail(&crq->request->queuelist, entry);
elv_dispatch_sort(q, crq->request);
}
/*
@ -1159,7 +1057,7 @@ __cfq_dispatch_requests(struct cfq_data *cfqd, struct cfq_queue *cfqq,
/*
* finally, insert request into driver dispatch list
*/
cfq_dispatch_sort(cfqd->queue, crq);
cfq_dispatch_insert(cfqd->queue, crq);
cfqd->dispatch_slice++;
dispatched++;
@ -1194,7 +1092,7 @@ __cfq_dispatch_requests(struct cfq_data *cfqd, struct cfq_queue *cfqq,
}
static int
cfq_dispatch_requests(request_queue_t *q, int max_dispatch, int force)
cfq_dispatch_requests(request_queue_t *q, int force)
{
struct cfq_data *cfqd = q->elevator->elevator_data;
struct cfq_queue *cfqq;
@ -1204,12 +1102,25 @@ cfq_dispatch_requests(request_queue_t *q, int max_dispatch, int force)
cfqq = cfq_select_queue(cfqd, force);
if (cfqq) {
int max_dispatch;
/*
* if idle window is disabled, allow queue buildup
*/
if (!cfq_cfqq_idle_window(cfqq) &&
cfqd->rq_in_driver >= cfqd->cfq_max_depth)
return 0;
cfq_clear_cfqq_must_dispatch(cfqq);
cfq_clear_cfqq_wait_request(cfqq);
del_timer(&cfqd->idle_slice_timer);
if (!force) {
max_dispatch = cfqd->cfq_quantum;
if (cfq_class_idle(cfqq))
max_dispatch = 1;
} else
max_dispatch = INT_MAX;
return __cfq_dispatch_requests(cfqd, cfqq, max_dispatch);
}
@ -1217,93 +1128,6 @@ cfq_dispatch_requests(request_queue_t *q, int max_dispatch, int force)
return 0;
}
static inline void cfq_account_dispatch(struct cfq_rq *crq)
{
struct cfq_queue *cfqq = crq->cfq_queue;
struct cfq_data *cfqd = cfqq->cfqd;
if (unlikely(!blk_fs_request(crq->request)))
return;
/*
* accounted bit is necessary since some drivers will call
* elv_next_request() many times for the same request (eg ide)
*/
if (cfq_crq_in_driver(crq))
return;
cfq_mark_crq_in_driver(crq);
cfqd->rq_in_driver++;
}
static inline void
cfq_account_completion(struct cfq_queue *cfqq, struct cfq_rq *crq)
{
struct cfq_data *cfqd = cfqq->cfqd;
unsigned long now;
if (!cfq_crq_in_driver(crq))
return;
now = jiffies;
WARN_ON(!cfqd->rq_in_driver);
cfqd->rq_in_driver--;
if (!cfq_class_idle(cfqq))
cfqd->last_end_request = now;
if (!cfq_cfqq_dispatched(cfqq)) {
if (cfq_cfqq_on_rr(cfqq)) {
cfqq->service_last = now;
cfq_resort_rr_list(cfqq, 0);
}
if (cfq_cfqq_expired(cfqq)) {
__cfq_slice_expired(cfqd, cfqq, 0);
cfq_schedule_dispatch(cfqd);
}
}
if (cfq_crq_is_sync(crq))
crq->io_context->last_end_request = now;
}
static struct request *cfq_next_request(request_queue_t *q)
{
struct cfq_data *cfqd = q->elevator->elevator_data;
struct request *rq;
if (!list_empty(&q->queue_head)) {
struct cfq_rq *crq;
dispatch:
rq = list_entry_rq(q->queue_head.next);
crq = RQ_DATA(rq);
if (crq) {
struct cfq_queue *cfqq = crq->cfq_queue;
/*
* if idle window is disabled, allow queue buildup
*/
if (!cfq_crq_in_driver(crq) &&
!cfq_cfqq_idle_window(cfqq) &&
!blk_barrier_rq(rq) &&
cfqd->rq_in_driver >= cfqd->cfq_max_depth)
return NULL;
cfq_remove_merge_hints(q, crq);
cfq_account_dispatch(crq);
}
return rq;
}
if (cfq_dispatch_requests(q, cfqd->cfq_quantum, 0))
goto dispatch;
return NULL;
}
/*
* task holds one reference to the queue, dropped when task exits. each crq
* in-flight on this queue also holds a reference, dropped when crq is freed.
@ -1422,7 +1246,7 @@ static void cfq_exit_io_context(struct cfq_io_context *cic)
}
static struct cfq_io_context *
cfq_alloc_io_context(struct cfq_data *cfqd, int gfp_mask)
cfq_alloc_io_context(struct cfq_data *cfqd, gfp_t gfp_mask)
{
struct cfq_io_context *cic = kmem_cache_alloc(cfq_ioc_pool, gfp_mask);
@ -1517,7 +1341,7 @@ static int cfq_ioc_set_ioprio(struct io_context *ioc, unsigned int ioprio)
static struct cfq_queue *
cfq_get_queue(struct cfq_data *cfqd, unsigned int key, unsigned short ioprio,
int gfp_mask)
gfp_t gfp_mask)
{
const int hashval = hash_long(key, CFQ_QHASH_SHIFT);
struct cfq_queue *cfqq, *new_cfqq = NULL;
@ -1578,7 +1402,7 @@ out:
* cfqq, so we don't need to worry about it disappearing
*/
static struct cfq_io_context *
cfq_get_io_context(struct cfq_data *cfqd, pid_t pid, int gfp_mask)
cfq_get_io_context(struct cfq_data *cfqd, pid_t pid, gfp_t gfp_mask)
{
struct io_context *ioc = NULL;
struct cfq_io_context *cic;
@ -1816,8 +1640,9 @@ cfq_crq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
}
}
static void cfq_enqueue(struct cfq_data *cfqd, struct request *rq)
static void cfq_insert_request(request_queue_t *q, struct request *rq)
{
struct cfq_data *cfqd = q->elevator->elevator_data;
struct cfq_rq *crq = RQ_DATA(rq);
struct cfq_queue *cfqq = crq->cfq_queue;
@ -1827,66 +1652,43 @@ static void cfq_enqueue(struct cfq_data *cfqd, struct request *rq)
list_add_tail(&rq->queuelist, &cfqq->fifo);
if (rq_mergeable(rq)) {
if (rq_mergeable(rq))
cfq_add_crq_hash(cfqd, crq);
if (!cfqd->queue->last_merge)
cfqd->queue->last_merge = rq;
}
cfq_crq_enqueued(cfqd, cfqq, crq);
}
static void
cfq_insert_request(request_queue_t *q, struct request *rq, int where)
{
struct cfq_data *cfqd = q->elevator->elevator_data;
switch (where) {
case ELEVATOR_INSERT_BACK:
while (cfq_dispatch_requests(q, INT_MAX, 1))
;
list_add_tail(&rq->queuelist, &q->queue_head);
/*
* If we were idling with pending requests on
* inactive cfqqs, force dispatching will
* remove the idle timer and the queue won't
* be kicked by __make_request() afterward.
* Kick it here.
*/
cfq_schedule_dispatch(cfqd);
break;
case ELEVATOR_INSERT_FRONT:
list_add(&rq->queuelist, &q->queue_head);
break;
case ELEVATOR_INSERT_SORT:
BUG_ON(!blk_fs_request(rq));
cfq_enqueue(cfqd, rq);
break;
default:
printk("%s: bad insert point %d\n", __FUNCTION__,where);
return;
}
}
static void cfq_completed_request(request_queue_t *q, struct request *rq)
{
struct cfq_rq *crq = RQ_DATA(rq);
struct cfq_queue *cfqq;
if (unlikely(!blk_fs_request(rq)))
return;
cfqq = crq->cfq_queue;
if (cfq_crq_in_flight(crq)) {
struct cfq_queue *cfqq = crq->cfq_queue;
struct cfq_data *cfqd = cfqq->cfqd;
const int sync = cfq_crq_is_sync(crq);
unsigned long now;
now = jiffies;
WARN_ON(!cfqd->rq_in_driver);
WARN_ON(!cfqq->on_dispatch[sync]);
cfqd->rq_in_driver--;
cfqq->on_dispatch[sync]--;
if (!cfq_class_idle(cfqq))
cfqd->last_end_request = now;
if (!cfq_cfqq_dispatched(cfqq)) {
if (cfq_cfqq_on_rr(cfqq)) {
cfqq->service_last = now;
cfq_resort_rr_list(cfqq, 0);
}
if (cfq_cfqq_expired(cfqq)) {
__cfq_slice_expired(cfqd, cfqq, 0);
cfq_schedule_dispatch(cfqd);
}
}
cfq_account_completion(cfqq, crq);
if (cfq_crq_is_sync(crq))
crq->io_context->last_end_request = now;
}
static struct request *
@ -2075,7 +1877,7 @@ static void cfq_put_request(request_queue_t *q, struct request *rq)
*/
static int
cfq_set_request(request_queue_t *q, struct request *rq, struct bio *bio,
int gfp_mask)
gfp_t gfp_mask)
{
struct cfq_data *cfqd = q->elevator->elevator_data;
struct task_struct *tsk = current;
@ -2118,9 +1920,6 @@ cfq_set_request(request_queue_t *q, struct request *rq, struct bio *bio,
INIT_HLIST_NODE(&crq->hash);
crq->cfq_queue = cfqq;
crq->io_context = cic;
cfq_clear_crq_in_flight(crq);
cfq_clear_crq_in_driver(crq);
cfq_clear_crq_requeued(crq);
if (rw == READ || process_sync(tsk))
cfq_mark_crq_is_sync(crq);
@ -2201,7 +2000,7 @@ static void cfq_idle_slice_timer(unsigned long data)
* only expire and reinvoke request handler, if there are
* other queues with pending requests
*/
if (!cfq_pending_requests(cfqd)) {
if (!cfqd->busy_queues) {
cfqd->idle_slice_timer.expires = min(now + cfqd->cfq_slice_idle, cfqq->slice_end);
add_timer(&cfqd->idle_slice_timer);
goto out_cont;
@ -2576,10 +2375,9 @@ static struct elevator_type iosched_cfq = {
.elevator_merge_fn = cfq_merge,
.elevator_merged_fn = cfq_merged_request,
.elevator_merge_req_fn = cfq_merged_requests,
.elevator_next_req_fn = cfq_next_request,
.elevator_dispatch_fn = cfq_dispatch_requests,
.elevator_add_req_fn = cfq_insert_request,
.elevator_remove_req_fn = cfq_remove_request,
.elevator_requeue_req_fn = cfq_requeue_request,
.elevator_activate_req_fn = cfq_activate_request,
.elevator_deactivate_req_fn = cfq_deactivate_request,
.elevator_queue_empty_fn = cfq_queue_empty,
.elevator_completed_req_fn = cfq_completed_request,


@ -50,7 +50,6 @@ struct deadline_data {
* next in sort order. read, write or both are NULL
*/
struct deadline_rq *next_drq[2];
struct list_head *dispatch; /* driver dispatch queue */
struct list_head *hash; /* request hash */
unsigned int batching; /* number of sequential requests made */
sector_t last_sector; /* head position */
@ -113,15 +112,6 @@ static inline void deadline_del_drq_hash(struct deadline_rq *drq)
__deadline_del_drq_hash(drq);
}
static void
deadline_remove_merge_hints(request_queue_t *q, struct deadline_rq *drq)
{
deadline_del_drq_hash(drq);
if (q->last_merge == drq->request)
q->last_merge = NULL;
}
static inline void
deadline_add_drq_hash(struct deadline_data *dd, struct deadline_rq *drq)
{
@ -239,11 +229,10 @@ deadline_del_drq_rb(struct deadline_data *dd, struct deadline_rq *drq)
dd->next_drq[data_dir] = rb_entry_drq(rbnext);
}
if (ON_RB(&drq->rb_node)) {
BUG_ON(!ON_RB(&drq->rb_node));
rb_erase(&drq->rb_node, DRQ_RB_ROOT(dd, drq));
RB_CLEAR(&drq->rb_node);
}
}
static struct request *
deadline_find_drq_rb(struct deadline_data *dd, sector_t sector, int data_dir)
@ -286,7 +275,7 @@ deadline_find_first_drq(struct deadline_data *dd, int data_dir)
/*
* add drq to rbtree and fifo
*/
static inline void
static void
deadline_add_request(struct request_queue *q, struct request *rq)
{
struct deadline_data *dd = q->elevator->elevator_data;
@ -301,12 +290,8 @@ deadline_add_request(struct request_queue *q, struct request *rq)
drq->expires = jiffies + dd->fifo_expire[data_dir];
list_add_tail(&drq->fifo, &dd->fifo_list[data_dir]);
if (rq_mergeable(rq)) {
if (rq_mergeable(rq))
deadline_add_drq_hash(dd, drq);
if (!q->last_merge)
q->last_merge = rq;
}
}
/*
@ -315,14 +300,11 @@ deadline_add_request(struct request_queue *q, struct request *rq)
static void deadline_remove_request(request_queue_t *q, struct request *rq)
{
struct deadline_rq *drq = RQ_DATA(rq);
if (drq) {
struct deadline_data *dd = q->elevator->elevator_data;
list_del_init(&drq->fifo);
deadline_remove_merge_hints(q, drq);
deadline_del_drq_rb(dd, drq);
}
deadline_del_drq_hash(drq);
}
static int
@ -332,15 +314,6 @@ deadline_merge(request_queue_t *q, struct request **req, struct bio *bio)
struct request *__rq;
int ret;
/*
* try last_merge to avoid going to hash
*/
ret = elv_try_last_merge(q, bio);
if (ret != ELEVATOR_NO_MERGE) {
__rq = q->last_merge;
goto out_insert;
}
/*
* see if the merge hash can satisfy a back merge
*/
@ -373,8 +346,6 @@ deadline_merge(request_queue_t *q, struct request **req, struct bio *bio)
return ELEVATOR_NO_MERGE;
out:
q->last_merge = __rq;
out_insert:
if (ret)
deadline_hot_drq_hash(dd, RQ_DATA(__rq));
*req = __rq;
@ -399,8 +370,6 @@ static void deadline_merged_request(request_queue_t *q, struct request *req)
deadline_del_drq_rb(dd, drq);
deadline_add_drq_rb(dd, drq);
}
q->last_merge = req;
}
static void
@ -452,7 +421,7 @@ deadline_move_to_dispatch(struct deadline_data *dd, struct deadline_rq *drq)
request_queue_t *q = drq->request->q;
deadline_remove_request(q, drq->request);
list_add_tail(&drq->request->queuelist, dd->dispatch);
elv_dispatch_add_tail(q, drq->request);
}
/*
@ -502,8 +471,9 @@ static inline int deadline_check_fifo(struct deadline_data *dd, int ddir)
* deadline_dispatch_requests selects the best request according to
* read/write expire, fifo_batch, etc
*/
static int deadline_dispatch_requests(struct deadline_data *dd)
static int deadline_dispatch_requests(request_queue_t *q, int force)
{
struct deadline_data *dd = q->elevator->elevator_data;
const int reads = !list_empty(&dd->fifo_list[READ]);
const int writes = !list_empty(&dd->fifo_list[WRITE]);
struct deadline_rq *drq;
@ -597,65 +567,12 @@ dispatch_request:
return 1;
}
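
deadline_dispatch_requests() picks between the sorted lists and the FIFOs based on whether the oldest request has blown its deadline, which comes down to a single time_after() test on the FIFO head. A standalone sketch of that check, assuming time_t seconds in place of jiffies and a bare struct in place of struct deadline_rq:

#include <stdio.h>
#include <time.h>

/* toy request with an absolute deadline, like drq->expires */
struct toy_drq {
	long sector;
	time_t expires;
};

/* model of deadline_check_fifo(): has the oldest request waited too long? */
static int toy_fifo_expired(const struct toy_drq *fifo_head, time_t now)
{
	return fifo_head && now >= fifo_head->expires;
}

int main(void)
{
	time_t now = time(NULL);
	struct toy_drq read_head  = { 128, now - 1 };	/* already past its deadline */
	struct toy_drq write_head = { 512, now + 5 };	/* still within write_expire */

	printf("reads expired:  %d\n", toy_fifo_expired(&read_head, now));
	printf("writes expired: %d\n", toy_fifo_expired(&write_head, now));
	return 0;
}
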
static struct request *deadline_next_request(request_queue_t *q)
{
struct deadline_data *dd = q->elevator->elevator_data;
struct request *rq;
/*
* if there are still requests on the dispatch queue, grab the first one
*/
if (!list_empty(dd->dispatch)) {
dispatch:
rq = list_entry_rq(dd->dispatch->next);
return rq;
}
if (deadline_dispatch_requests(dd))
goto dispatch;
return NULL;
}
static void
deadline_insert_request(request_queue_t *q, struct request *rq, int where)
{
struct deadline_data *dd = q->elevator->elevator_data;
/* barriers must flush the reorder queue */
if (unlikely(rq->flags & (REQ_SOFTBARRIER | REQ_HARDBARRIER)
&& where == ELEVATOR_INSERT_SORT))
where = ELEVATOR_INSERT_BACK;
switch (where) {
case ELEVATOR_INSERT_BACK:
while (deadline_dispatch_requests(dd))
;
list_add_tail(&rq->queuelist, dd->dispatch);
break;
case ELEVATOR_INSERT_FRONT:
list_add(&rq->queuelist, dd->dispatch);
break;
case ELEVATOR_INSERT_SORT:
BUG_ON(!blk_fs_request(rq));
deadline_add_request(q, rq);
break;
default:
printk("%s: bad insert point %d\n", __FUNCTION__,where);
return;
}
}
static int deadline_queue_empty(request_queue_t *q)
{
struct deadline_data *dd = q->elevator->elevator_data;
if (!list_empty(&dd->fifo_list[WRITE])
|| !list_empty(&dd->fifo_list[READ])
|| !list_empty(dd->dispatch))
return 0;
return 1;
return list_empty(&dd->fifo_list[WRITE])
&& list_empty(&dd->fifo_list[READ]);
}
static struct request *
@ -733,7 +650,6 @@ static int deadline_init_queue(request_queue_t *q, elevator_t *e)
INIT_LIST_HEAD(&dd->fifo_list[WRITE]);
dd->sort_list[READ] = RB_ROOT;
dd->sort_list[WRITE] = RB_ROOT;
dd->dispatch = &q->queue_head;
dd->fifo_expire[READ] = read_expire;
dd->fifo_expire[WRITE] = write_expire;
dd->writes_starved = writes_starved;
@ -748,15 +664,13 @@ static void deadline_put_request(request_queue_t *q, struct request *rq)
struct deadline_data *dd = q->elevator->elevator_data;
struct deadline_rq *drq = RQ_DATA(rq);
if (drq) {
mempool_free(drq, dd->drq_pool);
rq->elevator_private = NULL;
}
}
static int
deadline_set_request(request_queue_t *q, struct request *rq, struct bio *bio,
int gfp_mask)
gfp_t gfp_mask)
{
struct deadline_data *dd = q->elevator->elevator_data;
struct deadline_rq *drq;
@ -917,9 +831,8 @@ static struct elevator_type iosched_deadline = {
.elevator_merge_fn = deadline_merge,
.elevator_merged_fn = deadline_merged_request,
.elevator_merge_req_fn = deadline_merged_requests,
.elevator_next_req_fn = deadline_next_request,
.elevator_add_req_fn = deadline_insert_request,
.elevator_remove_req_fn = deadline_remove_request,
.elevator_dispatch_fn = deadline_dispatch_requests,
.elevator_add_req_fn = deadline_add_request,
.elevator_queue_empty_fn = deadline_queue_empty,
.elevator_former_req_fn = deadline_former_request,
.elevator_latter_req_fn = deadline_latter_request,


@ -34,6 +34,7 @@
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/compiler.h>
#include <linux/delay.h>
#include <asm/uaccess.h>
@ -83,21 +84,11 @@ inline int elv_try_merge(struct request *__rq, struct bio *bio)
}
EXPORT_SYMBOL(elv_try_merge);
inline int elv_try_last_merge(request_queue_t *q, struct bio *bio)
{
if (q->last_merge)
return elv_try_merge(q->last_merge, bio);
return ELEVATOR_NO_MERGE;
}
EXPORT_SYMBOL(elv_try_last_merge);
static struct elevator_type *elevator_find(const char *name)
{
struct elevator_type *e = NULL;
struct list_head *entry;
spin_lock_irq(&elv_list_lock);
list_for_each(entry, &elv_list) {
struct elevator_type *__e;
@ -108,7 +99,6 @@ static struct elevator_type *elevator_find(const char *name)
break;
}
}
spin_unlock_irq(&elv_list_lock);
return e;
}
@ -120,12 +110,15 @@ static void elevator_put(struct elevator_type *e)
static struct elevator_type *elevator_get(const char *name)
{
struct elevator_type *e = elevator_find(name);
struct elevator_type *e;
if (!e)
return NULL;
if (!try_module_get(e->elevator_owner))
return NULL;
spin_lock_irq(&elv_list_lock);
e = elevator_find(name);
if (e && !try_module_get(e->elevator_owner))
e = NULL;
spin_unlock_irq(&elv_list_lock);
return e;
}
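
The reworked elevator_get() does the name lookup and the reference grab inside one elv_list_lock critical section, so the type can no longer vanish between elevator_find() and try_module_get(). A simplified userspace model of that find-and-pin pattern, using a pthread mutex and a plain counter for the module refcount:

#include <pthread.h>
#include <stddef.h>
#include <string.h>

struct toy_elevator_type {
	const char *name;
	int refcount;			/* stands in for the module refcount */
	struct toy_elevator_type *next;
};

static pthread_mutex_t toy_list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct toy_elevator_type *toy_list;

/* lookup only; caller must hold toy_list_lock */
static struct toy_elevator_type *toy_find(const char *name)
{
	struct toy_elevator_type *e;

	for (e = toy_list; e; e = e->next)
		if (!strcmp(e->name, name))
			return e;
	return NULL;
}

/* model of elevator_get(): find and pin in one critical section */
static struct toy_elevator_type *toy_get(const char *name)
{
	struct toy_elevator_type *e;

	pthread_mutex_lock(&toy_list_lock);
	e = toy_find(name);
	if (e)
		e->refcount++;		/* try_module_get() in the real code */
	pthread_mutex_unlock(&toy_list_lock);
	return e;
}

int main(void)
{
	struct toy_elevator_type noop = { "noop", 0, NULL };

	toy_list = &noop;
	return toy_get("noop") ? 0 : 1;
}
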
@ -139,8 +132,6 @@ static int elevator_attach(request_queue_t *q, struct elevator_type *e,
eq->ops = &e->ops;
eq->elevator_type = e;
INIT_LIST_HEAD(&q->queue_head);
q->last_merge = NULL;
q->elevator = eq;
if (eq->ops->elevator_init_fn)
@ -153,11 +144,15 @@ static char chosen_elevator[16];
static void elevator_setup_default(void)
{
struct elevator_type *e;
/*
* check if default is set and exists
*/
if (chosen_elevator[0] && elevator_find(chosen_elevator))
if (chosen_elevator[0] && (e = elevator_get(chosen_elevator))) {
elevator_put(e);
return;
}
#if defined(CONFIG_IOSCHED_AS)
strcpy(chosen_elevator, "anticipatory");
@ -186,6 +181,11 @@ int elevator_init(request_queue_t *q, char *name)
struct elevator_queue *eq;
int ret = 0;
INIT_LIST_HEAD(&q->queue_head);
q->last_merge = NULL;
q->end_sector = 0;
q->boundary_rq = NULL;
elevator_setup_default();
if (!name)
@ -220,9 +220,52 @@ void elevator_exit(elevator_t *e)
kfree(e);
}
/*
 * Insert rq into dispatch queue of q, sort-inserted relative to the
 * current dispatch boundary.  Queue lock must be held on entry.  To
 * append to the tail instead, use elv_dispatch_add_tail().  To be used
 * by specific elevators.
 */
void elv_dispatch_sort(request_queue_t *q, struct request *rq)
{
sector_t boundary;
struct list_head *entry;
if (q->last_merge == rq)
q->last_merge = NULL;
boundary = q->end_sector;
list_for_each_prev(entry, &q->queue_head) {
struct request *pos = list_entry_rq(entry);
if (pos->flags & (REQ_SOFTBARRIER|REQ_HARDBARRIER|REQ_STARTED))
break;
if (rq->sector >= boundary) {
if (pos->sector < boundary)
continue;
} else {
if (pos->sector >= boundary)
break;
}
if (rq->sector >= pos->sector)
break;
}
list_add(&rq->queuelist, entry);
}
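
elv_dispatch_sort() keeps the dispatch queue in one-way elevator order around q->end_sector: requests at or past the boundary dispatch first in ascending sector order, followed by the ones that wrapped below it. A standalone sketch of the same ordering, assuming a plain array instead of queue_head and ignoring the barrier/REQ_STARTED early exits:

#include <stdio.h>

/*
 * one-way elevator order relative to the boundary: sectors at or past it
 * dispatch first (ascending), sectors below it follow (ascending)
 */
static int toy_before(unsigned long a, unsigned long b, unsigned long boundary)
{
	int wrapped_a = a < boundary;
	int wrapped_b = b < boundary;

	if (wrapped_a != wrapped_b)
		return wrapped_a < wrapped_b;	/* non-wrapped group first */
	return a < b;				/* ascending within a group */
}

/* insert a sector into a dispatch array kept in the order above */
static void toy_dispatch_sort(unsigned long *disp, int *n,
			      unsigned long sector, unsigned long boundary)
{
	int i = *n;

	while (i > 0 && toy_before(sector, disp[i - 1], boundary)) {
		disp[i] = disp[i - 1];		/* shift the tail entry right */
		i--;
	}
	disp[i] = sector;
	(*n)++;
}

int main(void)
{
	unsigned long disp[8];
	unsigned long in[] = { 1500, 200, 1200, 700 };
	int n = 0, i;

	for (i = 0; i < 4; i++)
		toy_dispatch_sort(disp, &n, in[i], 1000);
	for (i = 0; i < n; i++)
		printf("%lu ", disp[i]);	/* prints: 1200 1500 200 700 */
	printf("\n");
	return 0;
}
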
int elv_merge(request_queue_t *q, struct request **req, struct bio *bio)
{
elevator_t *e = q->elevator;
int ret;
if (q->last_merge) {
ret = elv_try_merge(q->last_merge, bio);
if (ret != ELEVATOR_NO_MERGE) {
*req = q->last_merge;
return ret;
}
}
if (e->ops->elevator_merge_fn)
return e->ops->elevator_merge_fn(q, req, bio);
@ -236,6 +279,8 @@ void elv_merged_request(request_queue_t *q, struct request *rq)
if (e->ops->elevator_merged_fn)
e->ops->elevator_merged_fn(q, rq);
q->last_merge = rq;
}
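
These hunks move last_merge handling into the generic layer: elv_merge() probes the cached request before asking the scheduler, and elv_merged_request() refreshes the cache after a successful merge. A reduced sketch of that one-entry merge cache, assuming a trivial back-merge rule (the bio starts where the request ends) in place of elv_try_merge():

#include <stdio.h>
#include <stddef.h>

struct toy_rq {
	unsigned long sector;		/* first sector of the request */
	unsigned long nr_sectors;	/* current length */
};

struct toy_queue {
	struct toy_rq *last_merge;	/* one-entry cache, like q->last_merge */
};

/* toy merge rule: a bio can be back-merged if it starts where rq ends */
static int toy_try_merge(const struct toy_rq *rq, unsigned long bio_sector)
{
	return rq->sector + rq->nr_sectors == bio_sector;
}

/* model of elv_merge(): probe the cache before any expensive lookup */
static struct toy_rq *toy_elv_merge(struct toy_queue *q, unsigned long bio_sector)
{
	if (q->last_merge && toy_try_merge(q->last_merge, bio_sector))
		return q->last_merge;
	/* the real code now falls back to the scheduler's own lookup */
	return NULL;
}

/* model of elv_merged_request(): remember the request we just grew */
static void toy_elv_merged(struct toy_queue *q, struct toy_rq *rq,
			   unsigned long bio_sectors)
{
	rq->nr_sectors += bio_sectors;
	q->last_merge = rq;
}

int main(void)
{
	struct toy_rq rq = { 100, 8 };
	struct toy_queue q = { &rq };
	struct toy_rq *hit = toy_elv_merge(&q, 108);

	if (hit)
		toy_elv_merged(&q, hit, 8);
	printf("request now %lu sectors long\n", rq.nr_sectors);	/* 16 */
	return 0;
}
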
void elv_merge_requests(request_queue_t *q, struct request *rq,
@ -243,20 +288,13 @@ void elv_merge_requests(request_queue_t *q, struct request *rq,
{
elevator_t *e = q->elevator;
if (q->last_merge == next)
q->last_merge = NULL;
if (e->ops->elevator_merge_req_fn)
e->ops->elevator_merge_req_fn(q, rq, next);
q->last_merge = rq;
}
/*
* For careful internal use by the block layer. Essentially the same as
* a requeue in that it tells the io scheduler that this request is not
* active in the driver or hardware anymore, but we don't want the request
* added back to the scheduler. Function is not exported.
*/
void elv_deactivate_request(request_queue_t *q, struct request *rq)
void elv_requeue_request(request_queue_t *q, struct request *rq)
{
elevator_t *e = q->elevator;
@ -264,18 +302,13 @@ void elv_deactivate_request(request_queue_t *q, struct request *rq)
* it already went through dequeue, we need to decrement the
* in_flight count again
*/
if (blk_account_rq(rq))
if (blk_account_rq(rq)) {
q->in_flight--;
rq->flags &= ~REQ_STARTED;
if (e->ops->elevator_deactivate_req_fn)
if (blk_sorted_rq(rq) && e->ops->elevator_deactivate_req_fn)
e->ops->elevator_deactivate_req_fn(q, rq);
}
void elv_requeue_request(request_queue_t *q, struct request *rq)
{
elv_deactivate_request(q, rq);
rq->flags &= ~REQ_STARTED;
/*
* if this is the flush, requeue the original instead and drop the flush
@ -285,31 +318,27 @@ void elv_requeue_request(request_queue_t *q, struct request *rq)
rq = rq->end_io_data;
}
/*
* the request is prepped and may have some resources allocated.
* allowing unprepped requests to pass this one may cause resource
* deadlock. turn on softbarrier.
*/
rq->flags |= REQ_SOFTBARRIER;
/*
* if iosched has an explicit requeue hook, then use that. otherwise
* just put the request at the front of the queue
*/
if (q->elevator->ops->elevator_requeue_req_fn)
q->elevator->ops->elevator_requeue_req_fn(q, rq);
else
__elv_add_request(q, rq, ELEVATOR_INSERT_FRONT, 0);
}
void __elv_add_request(request_queue_t *q, struct request *rq, int where,
int plug)
{
if (rq->flags & (REQ_SOFTBARRIER | REQ_HARDBARRIER)) {
/*
* barriers implicitly indicate back insertion
*/
if (rq->flags & (REQ_SOFTBARRIER | REQ_HARDBARRIER) &&
where == ELEVATOR_INSERT_SORT)
if (where == ELEVATOR_INSERT_SORT)
where = ELEVATOR_INSERT_BACK;
/*
* this request is scheduling boundary, update end_sector
*/
if (blk_fs_request(rq)) {
q->end_sector = rq_end_sector(rq);
q->boundary_rq = rq;
}
} else if (!(rq->flags & REQ_ELVPRIV) && where == ELEVATOR_INSERT_SORT)
where = ELEVATOR_INSERT_BACK;
if (plug)
@ -317,8 +346,46 @@ void __elv_add_request(request_queue_t *q, struct request *rq, int where,
rq->q = q;
if (!test_bit(QUEUE_FLAG_DRAIN, &q->queue_flags)) {
q->elevator->ops->elevator_add_req_fn(q, rq, where);
switch (where) {
case ELEVATOR_INSERT_FRONT:
rq->flags |= REQ_SOFTBARRIER;
list_add(&rq->queuelist, &q->queue_head);
break;
case ELEVATOR_INSERT_BACK:
rq->flags |= REQ_SOFTBARRIER;
while (q->elevator->ops->elevator_dispatch_fn(q, 1))
;
list_add_tail(&rq->queuelist, &q->queue_head);
/*
* We kick the queue here for the following reasons.
* - The elevator might have returned NULL previously
* to delay requests and returned them now. As the
* queue wasn't empty before this request, ll_rw_blk
* won't run the queue on return, resulting in hang.
* - Usually, back inserted requests won't be merged
* with anything. There's no point in delaying queue
* processing.
*/
blk_remove_plug(q);
q->request_fn(q);
break;
case ELEVATOR_INSERT_SORT:
BUG_ON(!blk_fs_request(rq));
rq->flags |= REQ_SORTED;
q->elevator->ops->elevator_add_req_fn(q, rq);
if (q->last_merge == NULL && rq_mergeable(rq))
q->last_merge = rq;
break;
default:
printk(KERN_ERR "%s: bad insertion point %d\n",
__FUNCTION__, where);
BUG();
}
if (blk_queue_plugged(q)) {
int nrq = q->rq.count[READ] + q->rq.count[WRITE]
@ -327,13 +394,6 @@ void __elv_add_request(request_queue_t *q, struct request *rq, int where,
if (nrq >= q->unplug_thresh)
__generic_unplug_device(q);
}
} else
/*
* if drain is set, store the request "locally". when the drain
* is finished, the requests will be handed ordered to the io
* scheduler
*/
list_add_tail(&rq->queuelist, &q->drain_list);
}
void elv_add_request(request_queue_t *q, struct request *rq, int where,
@ -348,13 +408,19 @@ void elv_add_request(request_queue_t *q, struct request *rq, int where,
static inline struct request *__elv_next_request(request_queue_t *q)
{
struct request *rq = q->elevator->ops->elevator_next_req_fn(q);
struct request *rq;
if (unlikely(list_empty(&q->queue_head) &&
!q->elevator->ops->elevator_dispatch_fn(q, 0)))
return NULL;
rq = list_entry_rq(q->queue_head.next);
/*
* if this is a barrier write and the device has to issue a
* flush sequence to support it, check how far we are
*/
if (rq && blk_fs_request(rq) && blk_barrier_rq(rq)) {
if (blk_fs_request(rq) && blk_barrier_rq(rq)) {
BUG_ON(q->ordered == QUEUE_ORDERED_NONE);
if (q->ordered == QUEUE_ORDERED_FLUSH &&
@ -371,15 +437,30 @@ struct request *elv_next_request(request_queue_t *q)
int ret;
while ((rq = __elv_next_request(q)) != NULL) {
if (!(rq->flags & REQ_STARTED)) {
elevator_t *e = q->elevator;
/*
* just mark as started even if we don't start it, a request
* that has been delayed should not be passed by new incoming
* requests
* This is the first time the device driver
* sees this request (possibly after
* requeueing). Notify IO scheduler.
*/
if (blk_sorted_rq(rq) &&
e->ops->elevator_activate_req_fn)
e->ops->elevator_activate_req_fn(q, rq);
/*
* just mark as started even if we don't start
* it, a request that has been delayed should
* not be passed by new incoming requests
*/
rq->flags |= REQ_STARTED;
}
if (rq == q->last_merge)
q->last_merge = NULL;
if (!q->boundary_rq || q->boundary_rq == rq) {
q->end_sector = rq_end_sector(rq);
q->boundary_rq = NULL;
}
if ((rq->flags & REQ_DONTPREP) || !q->prep_rq_fn)
break;
@ -391,9 +472,9 @@ struct request *elv_next_request(request_queue_t *q)
/*
* the request may have been (partially) prepped.
* we need to keep this request in the front to
* avoid resource deadlock. turn on softbarrier.
* avoid resource deadlock. REQ_STARTED will
* prevent other fs requests from passing this one.
*/
rq->flags |= REQ_SOFTBARRIER;
rq = NULL;
break;
} else if (ret == BLKPREP_KILL) {
@ -416,42 +497,32 @@ struct request *elv_next_request(request_queue_t *q)
return rq;
}
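
elv_next_request() now notifies the scheduler the first time a driver picks a request up (elevator_activate_req_fn, then REQ_STARTED), and elv_requeue_request() reverses that when the request is handed back. A reduced model of the pairing, with counters standing in for the activate/deactivate hooks and the blk_sorted_rq()/blk_account_rq() checks dropped:

#include <stdio.h>

#define TOY_STARTED 0x1			/* stands in for REQ_STARTED */

struct toy_rq {
	unsigned int flags;
};

struct toy_queue {
	int in_flight;			/* requests the driver currently owns */
	int activated;			/* elevator_activate_req_fn calls */
	int deactivated;		/* elevator_deactivate_req_fn calls */
};

/* model of the REQ_STARTED block in elv_next_request() */
static void toy_driver_sees(struct toy_queue *q, struct toy_rq *rq)
{
	if (!(rq->flags & TOY_STARTED)) {
		q->activated++;		/* first hand-off: tell the scheduler */
		rq->flags |= TOY_STARTED;
	}
}

/* model of elv_requeue_request(): give the request back untouched */
static void toy_requeue(struct toy_queue *q, struct toy_rq *rq)
{
	q->in_flight--;			/* it is no longer in the driver */
	q->deactivated++;		/* scheduler may account it again later */
	rq->flags &= ~TOY_STARTED;
}

int main(void)
{
	struct toy_queue q = { 0, 0, 0 };
	struct toy_rq rq = { 0 };

	toy_driver_sees(&q, &rq);
	q.in_flight++;			/* blkdev_dequeue_request() side */
	toy_requeue(&q, &rq);
	toy_driver_sees(&q, &rq);	/* re-issue: activates again */
	printf("activated=%d deactivated=%d in_flight=%d\n",
	       q.activated, q.deactivated, q.in_flight);
	return 0;
}
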
void elv_remove_request(request_queue_t *q, struct request *rq)
void elv_dequeue_request(request_queue_t *q, struct request *rq)
{
elevator_t *e = q->elevator;
BUG_ON(list_empty(&rq->queuelist));
list_del_init(&rq->queuelist);
/*
* the time frame between a request being removed from the lists
* and to it is freed is accounted as io that is in progress at
* the driver side. note that we only account requests that the
* driver has seen (REQ_STARTED set), to avoid false accounting
* for request-request merges
* the driver side.
*/
if (blk_account_rq(rq))
q->in_flight++;
/*
* the main clearing point for q->last_merge is on retrieval of
* request by driver (it calls elv_next_request()), but it _can_
* also happen here if a request is added to the queue but later
* deleted without ever being given to driver (merged with another
* request).
*/
if (rq == q->last_merge)
q->last_merge = NULL;
if (e->ops->elevator_remove_req_fn)
e->ops->elevator_remove_req_fn(q, rq);
}
int elv_queue_empty(request_queue_t *q)
{
elevator_t *e = q->elevator;
if (!list_empty(&q->queue_head))
return 0;
if (e->ops->elevator_queue_empty_fn)
return e->ops->elevator_queue_empty_fn(q);
return list_empty(&q->queue_head);
return 1;
}
struct request *elv_latter_request(request_queue_t *q, struct request *rq)
@ -487,7 +558,7 @@ struct request *elv_former_request(request_queue_t *q, struct request *rq)
}
int elv_set_request(request_queue_t *q, struct request *rq, struct bio *bio,
int gfp_mask)
gfp_t gfp_mask)
{
elevator_t *e = q->elevator;
@ -523,12 +594,12 @@ void elv_completed_request(request_queue_t *q, struct request *rq)
/*
* request is released from the driver, io must be done
*/
if (blk_account_rq(rq))
if (blk_account_rq(rq)) {
q->in_flight--;
if (e->ops->elevator_completed_req_fn)
if (blk_sorted_rq(rq) && e->ops->elevator_completed_req_fn)
e->ops->elevator_completed_req_fn(q, rq);
}
}
int elv_register_queue(struct request_queue *q)
{
@ -555,10 +626,9 @@ void elv_unregister_queue(struct request_queue *q)
int elv_register(struct elevator_type *e)
{
spin_lock_irq(&elv_list_lock);
if (elevator_find(e->elevator_name))
BUG();
spin_lock_irq(&elv_list_lock);
list_add_tail(&e->list, &elv_list);
spin_unlock_irq(&elv_list_lock);
@ -582,25 +652,36 @@ EXPORT_SYMBOL_GPL(elv_unregister);
* switch to new_e io scheduler. be careful not to introduce deadlocks -
* we don't free the old io scheduler, before we have allocated what we
* need for the new one. this way we have a chance of going back to the old
* one, if the new one fails init for some reason. we also do an intermediate
* switch to noop to ensure safety with stack-allocated requests, since they
* don't originate from the block layer allocator. noop is safe here, because
* it never needs to touch the elevator itself for completion events. DRAIN
* flags will make sure we don't touch it for additions either.
* one, if the new one fails init for some reason.
*/
static void elevator_switch(request_queue_t *q, struct elevator_type *new_e)
{
elevator_t *e = kmalloc(sizeof(elevator_t), GFP_KERNEL);
struct elevator_type *noop_elevator = NULL;
elevator_t *old_elevator;
elevator_t *old_elevator, *e;
/*
* Allocate new elevator
*/
e = kmalloc(sizeof(elevator_t), GFP_KERNEL);
if (!e)
goto error;
/*
* first step, drain requests from the block freelist
* Turn on BYPASS and drain all requests w/ elevator private data
*/
blk_wait_queue_drained(q, 0);
spin_lock_irq(q->queue_lock);
set_bit(QUEUE_FLAG_ELVSWITCH, &q->queue_flags);
while (q->elevator->ops->elevator_dispatch_fn(q, 1))
;
while (q->rq.elvpriv) {
spin_unlock_irq(q->queue_lock);
msleep(10);
spin_lock_irq(q->queue_lock);
}
spin_unlock_irq(q->queue_lock);
/*
* unregister old elevator data
@ -608,18 +689,6 @@ static void elevator_switch(request_queue_t *q, struct elevator_type *new_e)
elv_unregister_queue(q);
old_elevator = q->elevator;
/*
* next step, switch to noop since it uses no private rq structures
* and doesn't allocate any memory for anything. then wait for any
* non-fs requests in-flight
*/
noop_elevator = elevator_get("noop");
spin_lock_irq(q->queue_lock);
elevator_attach(q, noop_elevator, e);
spin_unlock_irq(q->queue_lock);
blk_wait_queue_drained(q, 1);
/*
* attach and start new elevator
*/
@ -630,11 +699,10 @@ static void elevator_switch(request_queue_t *q, struct elevator_type *new_e)
goto fail_register;
/*
* finally exit old elevator and start queue again
* finally exit old elevator and turn off BYPASS.
*/
elevator_exit(old_elevator);
blk_finish_queue_drain(q);
elevator_put(noop_elevator);
clear_bit(QUEUE_FLAG_ELVSWITCH, &q->queue_flags);
return;
fail_register:
@ -643,13 +711,13 @@ fail_register:
* one again (along with re-adding the sysfs dir)
*/
elevator_exit(e);
e = NULL;
fail:
q->elevator = old_elevator;
elv_register_queue(q);
blk_finish_queue_drain(q);
clear_bit(QUEUE_FLAG_ELVSWITCH, &q->queue_flags);
kfree(e);
error:
if (noop_elevator)
elevator_put(noop_elevator);
elevator_put(new_e);
printk(KERN_ERR "elevator: switch to %s failed\n",new_e->elevator_name);
}
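
The switch comment above boils down to: build and initialise the new elevator completely before the old one is torn down, so a failure can fall back to the old instance. A small sketch of that allocate-before-free ordering, with malloc'd private data standing in for the scheduler state and the BYPASS flag and request draining left out:

#include <stdlib.h>
#include <string.h>

/* toy "elevator instance": just a named blob of private data */
struct toy_elevator {
	char name[16];
	void *private;
};

static int toy_elevator_init(struct toy_elevator *e, const char *name)
{
	e->private = malloc(128);	/* whatever the scheduler needs */
	if (!e->private)
		return -1;
	strncpy(e->name, name, sizeof(e->name) - 1);
	e->name[sizeof(e->name) - 1] = '\0';
	return 0;
}

static void toy_elevator_exit(struct toy_elevator *e)
{
	free(e->private);
	free(e);
}

/*
 * model of elevator_switch(): set up the new instance first, so a failure
 * leaves the old elevator running and untouched
 */
static int toy_switch(struct toy_elevator **cur, const char *new_name)
{
	struct toy_elevator *old = *cur;
	struct toy_elevator *e = malloc(sizeof(*e));

	if (!e)
		return -1;		/* old elevator untouched */
	if (toy_elevator_init(e, new_name)) {
		free(e);
		return -1;		/* still running the old one */
	}
	*cur = e;			/* attach new ... */
	toy_elevator_exit(old);		/* ... only now drop the old one */
	return 0;
}

int main(void)
{
	struct toy_elevator *cur = malloc(sizeof(*cur));
	int ret;

	if (!cur)
		return 1;
	if (toy_elevator_init(cur, "as")) {
		free(cur);
		return 1;
	}
	ret = toy_switch(&cur, "cfq");
	toy_elevator_exit(cur);
	return ret ? 1 : 0;
}
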
@ -701,11 +769,12 @@ ssize_t elv_iosched_show(request_queue_t *q, char *name)
return len;
}
EXPORT_SYMBOL(elv_dispatch_sort);
EXPORT_SYMBOL(elv_add_request);
EXPORT_SYMBOL(__elv_add_request);
EXPORT_SYMBOL(elv_requeue_request);
EXPORT_SYMBOL(elv_next_request);
EXPORT_SYMBOL(elv_remove_request);
EXPORT_SYMBOL(elv_dequeue_request);
EXPORT_SYMBOL(elv_queue_empty);
EXPORT_SYMBOL(elv_completed_request);
EXPORT_SYMBOL(elevator_exit);


@ -263,8 +263,6 @@ void blk_queue_make_request(request_queue_t * q, make_request_fn * mfn)
blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);
blk_queue_activity_fn(q, NULL, NULL);
INIT_LIST_HEAD(&q->drain_list);
}
EXPORT_SYMBOL(blk_queue_make_request);
@ -353,6 +351,8 @@ static void blk_pre_flush_end_io(struct request *flush_rq)
struct request *rq = flush_rq->end_io_data;
request_queue_t *q = rq->q;
elv_completed_request(q, flush_rq);
rq->flags |= REQ_BAR_PREFLUSH;
if (!flush_rq->errors)
@ -369,6 +369,8 @@ static void blk_post_flush_end_io(struct request *flush_rq)
struct request *rq = flush_rq->end_io_data;
request_queue_t *q = rq->q;
elv_completed_request(q, flush_rq);
rq->flags |= REQ_BAR_POSTFLUSH;
q->end_flush_fn(q, flush_rq);
@ -408,8 +410,6 @@ struct request *blk_start_pre_flush(request_queue_t *q, struct request *rq)
if (!list_empty(&rq->queuelist))
blkdev_dequeue_request(rq);
elv_deactivate_request(q, rq);
flush_rq->end_io_data = rq;
flush_rq->end_io = blk_pre_flush_end_io;
@ -1040,6 +1040,7 @@ EXPORT_SYMBOL(blk_queue_invalidate_tags);
static char *rq_flags[] = {
"REQ_RW",
"REQ_FAILFAST",
"REQ_SORTED",
"REQ_SOFTBARRIER",
"REQ_HARDBARRIER",
"REQ_CMD",
@ -1047,6 +1048,7 @@ static char *rq_flags[] = {
"REQ_STARTED",
"REQ_DONTPREP",
"REQ_QUEUED",
"REQ_ELVPRIV",
"REQ_PC",
"REQ_BLOCK_PC",
"REQ_SENSE",
@ -1637,9 +1639,9 @@ static int blk_init_free_list(request_queue_t *q)
rl->count[READ] = rl->count[WRITE] = 0;
rl->starved[READ] = rl->starved[WRITE] = 0;
rl->elvpriv = 0;
init_waitqueue_head(&rl->wait[READ]);
init_waitqueue_head(&rl->wait[WRITE]);
init_waitqueue_head(&rl->drain);
rl->rq_pool = mempool_create_node(BLKDEV_MIN_RQ, mempool_alloc_slab,
mempool_free_slab, request_cachep, q->node);
@ -1652,13 +1654,13 @@ static int blk_init_free_list(request_queue_t *q)
static int __make_request(request_queue_t *, struct bio *);
request_queue_t *blk_alloc_queue(int gfp_mask)
request_queue_t *blk_alloc_queue(gfp_t gfp_mask)
{
return blk_alloc_queue_node(gfp_mask, -1);
}
EXPORT_SYMBOL(blk_alloc_queue);
request_queue_t *blk_alloc_queue_node(int gfp_mask, int node_id)
request_queue_t *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
{
request_queue_t *q;
@ -1782,12 +1784,14 @@ EXPORT_SYMBOL(blk_get_queue);
static inline void blk_free_request(request_queue_t *q, struct request *rq)
{
if (rq->flags & REQ_ELVPRIV)
elv_put_request(q, rq);
mempool_free(rq, q->rq.rq_pool);
}
static inline struct request *
blk_alloc_request(request_queue_t *q, int rw, struct bio *bio, int gfp_mask)
blk_alloc_request(request_queue_t *q, int rw, struct bio *bio,
int priv, gfp_t gfp_mask)
{
struct request *rq = mempool_alloc(q->rq.rq_pool, gfp_mask);
@ -1800,12 +1804,16 @@ blk_alloc_request(request_queue_t *q, int rw, struct bio *bio, int gfp_mask)
*/
rq->flags = rw;
if (!elv_set_request(q, rq, bio, gfp_mask))
return rq;
if (priv) {
if (unlikely(elv_set_request(q, rq, bio, gfp_mask))) {
mempool_free(rq, q->rq.rq_pool);
return NULL;
}
rq->flags |= REQ_ELVPRIV;
}
return rq;
}
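
blk_alloc_request() now asks the elevator for private data only when priv is set and tags such requests with REQ_ELVPRIV, so blk_free_request() and freed_request() know whether there is anything to hand back. A reduced model of that pairing, with a plain bit flag and malloc'd data in place of REQ_ELVPRIV and elv_set_request()/elv_put_request():

#include <stdlib.h>

#define TOY_ELVPRIV 0x1			/* stands in for REQ_ELVPRIV */

struct toy_rq {
	unsigned int flags;
	void *elevator_private;
};

/* model of blk_alloc_request(): only touch the elevator when priv is set */
static struct toy_rq *toy_alloc_request(int priv)
{
	struct toy_rq *rq = calloc(1, sizeof(*rq));

	if (!rq)
		return NULL;
	if (priv) {
		rq->elevator_private = malloc(64);	/* elv_set_request() */
		if (!rq->elevator_private) {
			free(rq);
			return NULL;
		}
		rq->flags |= TOY_ELVPRIV;
	}
	return rq;
}

/* model of blk_free_request(): only elevator-owned requests need elv_put */
static void toy_free_request(struct toy_rq *rq)
{
	if (rq->flags & TOY_ELVPRIV)
		free(rq->elevator_private);		/* elv_put_request() */
	free(rq);
}

int main(void)
{
	struct toy_rq *rq = toy_alloc_request(1);

	if (rq)
		toy_free_request(rq);
	return 0;
}
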
/*
* ioc_batching returns true if the ioc is a valid batching request and
@ -1860,22 +1868,18 @@ static void __freed_request(request_queue_t *q, int rw)
* A request has just been released. Account for it, update the full and
* congestion status, wake up any waiters. Called under q->queue_lock.
*/
static void freed_request(request_queue_t *q, int rw)
static void freed_request(request_queue_t *q, int rw, int priv)
{
struct request_list *rl = &q->rq;
rl->count[rw]--;
if (priv)
rl->elvpriv--;
__freed_request(q, rw);
if (unlikely(rl->starved[rw ^ 1]))
__freed_request(q, rw ^ 1);
if (!rl->count[READ] && !rl->count[WRITE]) {
smp_mb();
if (unlikely(waitqueue_active(&rl->drain)))
wake_up(&rl->drain);
}
}
#define blkdev_free_rq(list) list_entry((list)->next, struct request, queuelist)
@ -1885,14 +1889,12 @@ static void freed_request(request_queue_t *q, int rw)
* Returns !NULL on success, with queue_lock *not held*.
*/
static struct request *get_request(request_queue_t *q, int rw, struct bio *bio,
int gfp_mask)
gfp_t gfp_mask)
{
struct request *rq = NULL;
struct request_list *rl = &q->rq;
struct io_context *ioc = current_io_context(GFP_ATOMIC);
if (unlikely(test_bit(QUEUE_FLAG_DRAIN, &q->queue_flags)))
goto out;
int priv;
if (rl->count[rw]+1 >= q->nr_requests) {
/*
@ -1937,9 +1939,14 @@ get_rq:
rl->starved[rw] = 0;
if (rl->count[rw] >= queue_congestion_on_threshold(q))
set_queue_congested(q, rw);
priv = !test_bit(QUEUE_FLAG_ELVSWITCH, &q->queue_flags);
if (priv)
rl->elvpriv++;
spin_unlock_irq(q->queue_lock);
rq = blk_alloc_request(q, rw, bio, gfp_mask);
rq = blk_alloc_request(q, rw, bio, priv, gfp_mask);
if (!rq) {
/*
* Allocation failed presumably due to memory. Undo anything
@ -1949,7 +1956,7 @@ get_rq:
* wait queue, but this is pretty rare.
*/
spin_lock_irq(q->queue_lock);
freed_request(q, rw);
freed_request(q, rw, priv);
/*
* in the very unlikely event that allocation failed and no
@ -2019,7 +2026,7 @@ static struct request *get_request_wait(request_queue_t *q, int rw,
return rq;
}
struct request *blk_get_request(request_queue_t *q, int rw, int gfp_mask)
struct request *blk_get_request(request_queue_t *q, int rw, gfp_t gfp_mask)
{
struct request *rq;
@ -2251,7 +2258,7 @@ EXPORT_SYMBOL(blk_rq_unmap_user);
* @gfp_mask: memory allocation flags
*/
int blk_rq_map_kern(request_queue_t *q, struct request *rq, void *kbuf,
unsigned int len, unsigned int gfp_mask)
unsigned int len, gfp_t gfp_mask)
{
struct bio *bio;
@ -2433,13 +2440,15 @@ void disk_round_stats(struct gendisk *disk)
{
unsigned long now = jiffies;
if (now == disk->stamp)
return;
if (disk->in_flight) {
__disk_stat_add(disk, time_in_queue,
disk->in_flight * (now - disk->stamp));
__disk_stat_add(disk, io_ticks, (now - disk->stamp));
}
disk->stamp = now;
if (disk->in_flight)
__disk_stat_add(disk, io_ticks, (now - disk->stamp_idle));
disk->stamp_idle = now;
}
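
The reworked disk_round_stats() charges in_flight * (now - stamp) ticks to time_in_queue and the same wall interval to io_ticks whenever requests were outstanding, then advances the stamp. A quick standalone model of that accounting, assuming integer ticks for jiffies:

#include <stdio.h>

struct toy_disk {
	unsigned long stamp;		/* last time the stats were folded in */
	unsigned int in_flight;		/* requests currently outstanding */
	unsigned long time_in_queue;	/* summed per-request wait, in ticks */
	unsigned long io_ticks;		/* ticks with at least one request */
};

/* model of disk_round_stats(): fold the interval since stamp into the counters */
static void toy_round_stats(struct toy_disk *d, unsigned long now)
{
	if (now == d->stamp)
		return;
	if (d->in_flight) {
		d->time_in_queue += d->in_flight * (now - d->stamp);
		d->io_ticks += now - d->stamp;
	}
	d->stamp = now;
}

int main(void)
{
	struct toy_disk d = { 0, 2, 0, 0 };

	toy_round_stats(&d, 10);	/* 2 requests in flight for 10 ticks */
	printf("time_in_queue=%lu io_ticks=%lu\n", d.time_in_queue, d.io_ticks);
	return 0;			/* prints time_in_queue=20 io_ticks=10 */
}
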
/*
@ -2454,6 +2463,8 @@ static void __blk_put_request(request_queue_t *q, struct request *req)
if (unlikely(--req->ref_count))
return;
elv_completed_request(q, req);
req->rq_status = RQ_INACTIVE;
req->rl = NULL;
@ -2463,26 +2474,25 @@ static void __blk_put_request(request_queue_t *q, struct request *req)
*/
if (rl) {
int rw = rq_data_dir(req);
elv_completed_request(q, req);
int priv = req->flags & REQ_ELVPRIV;
BUG_ON(!list_empty(&req->queuelist));
blk_free_request(q, req);
freed_request(q, rw);
freed_request(q, rw, priv);
}
}
void blk_put_request(struct request *req)
{
/*
* if req->rl isn't set, this request didnt originate from the
* block layer, so it's safe to just disregard it
*/
if (req->rl) {
unsigned long flags;
request_queue_t *q = req->q;
/*
* Gee, IDE calls in w/ NULL q. Fix IDE and remove the
* following if (q) test.
*/
if (q) {
spin_lock_irqsave(q->queue_lock, flags);
__blk_put_request(q, req);
spin_unlock_irqrestore(q->queue_lock, flags);
@ -2797,97 +2807,6 @@ static inline void blk_partition_remap(struct bio *bio)
}
}
void blk_finish_queue_drain(request_queue_t *q)
{
struct request_list *rl = &q->rq;
struct request *rq;
int requeued = 0;
spin_lock_irq(q->queue_lock);
clear_bit(QUEUE_FLAG_DRAIN, &q->queue_flags);
while (!list_empty(&q->drain_list)) {
rq = list_entry_rq(q->drain_list.next);
list_del_init(&rq->queuelist);
elv_requeue_request(q, rq);
requeued++;
}
if (requeued)
q->request_fn(q);
spin_unlock_irq(q->queue_lock);
wake_up(&rl->wait[0]);
wake_up(&rl->wait[1]);
wake_up(&rl->drain);
}
static int wait_drain(request_queue_t *q, struct request_list *rl, int dispatch)
{
int wait = rl->count[READ] + rl->count[WRITE];
if (dispatch)
wait += !list_empty(&q->queue_head);
return wait;
}
/*
* We rely on the fact that only requests allocated through blk_alloc_request()
* have io scheduler private data structures associated with them. Any other
* type of request (allocated on stack or through kmalloc()) should not go
* to the io scheduler core, but be attached to the queue head instead.
*/
void blk_wait_queue_drained(request_queue_t *q, int wait_dispatch)
{
struct request_list *rl = &q->rq;
DEFINE_WAIT(wait);
spin_lock_irq(q->queue_lock);
set_bit(QUEUE_FLAG_DRAIN, &q->queue_flags);
while (wait_drain(q, rl, wait_dispatch)) {
prepare_to_wait(&rl->drain, &wait, TASK_UNINTERRUPTIBLE);
if (wait_drain(q, rl, wait_dispatch)) {
__generic_unplug_device(q);
spin_unlock_irq(q->queue_lock);
io_schedule();
spin_lock_irq(q->queue_lock);
}
finish_wait(&rl->drain, &wait);
}
spin_unlock_irq(q->queue_lock);
}
/*
* block waiting for the io scheduler being started again.
*/
static inline void block_wait_queue_running(request_queue_t *q)
{
DEFINE_WAIT(wait);
while (unlikely(test_bit(QUEUE_FLAG_DRAIN, &q->queue_flags))) {
struct request_list *rl = &q->rq;
prepare_to_wait_exclusive(&rl->drain, &wait,
TASK_UNINTERRUPTIBLE);
/*
* re-check the condition. avoids using prepare_to_wait()
* in the fast path (queue is running)
*/
if (test_bit(QUEUE_FLAG_DRAIN, &q->queue_flags))
io_schedule();
finish_wait(&rl->drain, &wait);
}
}
static void handle_bad_sector(struct bio *bio)
{
char b[BDEVNAME_SIZE];
@ -2983,8 +2902,6 @@ end_io:
if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)))
goto end_io;
block_wait_queue_running(q);
/*
* If this device has partitions, remap block n
* of partition p to block n+start(p) of the disk.
@ -3393,7 +3310,7 @@ void exit_io_context(void)
* but since the current task itself holds a reference, the context can be
* used in general code, so long as it stays within `current` context.
*/
struct io_context *current_io_context(int gfp_flags)
struct io_context *current_io_context(gfp_t gfp_flags)
{
struct task_struct *tsk = current;
struct io_context *ret;
@ -3424,7 +3341,7 @@ EXPORT_SYMBOL(current_io_context);
*
* This is always called in the context of the task which submitted the I/O.
*/
struct io_context *get_io_context(int gfp_flags)
struct io_context *get_io_context(gfp_t gfp_flags)
{
struct io_context *ret;
ret = current_io_context(gfp_flags);


@ -881,7 +881,7 @@ loop_init_xfer(struct loop_device *lo, struct loop_func_table *xfer,
static int loop_clr_fd(struct loop_device *lo, struct block_device *bdev)
{
struct file *filp = lo->lo_backing_file;
int gfp = lo->old_gfp_mask;
gfp_t gfp = lo->old_gfp_mask;
if (lo->lo_state != Lo_bound)
return -ENXIO;


@ -7,57 +7,19 @@
#include <linux/module.h>
#include <linux/init.h>
/*
* See if we can find a request that this buffer can be coalesced with.
*/
static int elevator_noop_merge(request_queue_t *q, struct request **req,
struct bio *bio)
static void elevator_noop_add_request(request_queue_t *q, struct request *rq)
{
int ret;
ret = elv_try_last_merge(q, bio);
if (ret != ELEVATOR_NO_MERGE)
*req = q->last_merge;
return ret;
elv_dispatch_add_tail(q, rq);
}
static void elevator_noop_merge_requests(request_queue_t *q, struct request *req,
struct request *next)
static int elevator_noop_dispatch(request_queue_t *q, int force)
{
list_del_init(&next->queuelist);
}
static void elevator_noop_add_request(request_queue_t *q, struct request *rq,
int where)
{
if (where == ELEVATOR_INSERT_FRONT)
list_add(&rq->queuelist, &q->queue_head);
else
list_add_tail(&rq->queuelist, &q->queue_head);
/*
* new merges must not precede this barrier
*/
if (rq->flags & REQ_HARDBARRIER)
q->last_merge = NULL;
else if (!q->last_merge)
q->last_merge = rq;
}
static struct request *elevator_noop_next_request(request_queue_t *q)
{
if (!list_empty(&q->queue_head))
return list_entry_rq(q->queue_head.next);
return NULL;
return 0;
}
static struct elevator_type elevator_noop = {
.ops = {
.elevator_merge_fn = elevator_noop_merge,
.elevator_merge_req_fn = elevator_noop_merge_requests,
.elevator_next_req_fn = elevator_noop_next_request,
.elevator_dispatch_fn = elevator_noop_dispatch,
.elevator_add_req_fn = elevator_noop_add_request,
},
.elevator_name = "noop",


@ -348,7 +348,7 @@ static int rd_open(struct inode *inode, struct file *filp)
struct block_device *bdev = inode->i_bdev;
struct address_space *mapping;
unsigned bsize;
int gfp_mask;
gfp_t gfp_mask;
inode = igrab(bdev->bd_inode);
rd_bdev[unit] = bdev;

drivers/char/.gitignore vendored Normal file

@ -0,0 +1,3 @@
consolemap_deftbl.c
defkeymap.c


@ -148,7 +148,8 @@ static __inline__ struct page *drm_do_vm_shm_nopage(struct vm_area_struct *vma,
offset = address - vma->vm_start;
i = (unsigned long)map->handle + offset;
page = vmalloc_to_page((void *)i);
page = (map->type == _DRM_CONSISTENT) ?
virt_to_page((void *)i) : vmalloc_to_page((void *)i);
if (!page)
return NOPAGE_OOM;
get_page(page);


@ -437,7 +437,7 @@ static int mga_do_agp_dma_bootstrap(drm_device_t * dev,
drm_mga_dma_bootstrap_t * dma_bs)
{
drm_mga_private_t * const dev_priv = (drm_mga_private_t *) dev->dev_private;
const unsigned int warp_size = mga_warp_microcode_size(dev_priv);
unsigned int warp_size = mga_warp_microcode_size(dev_priv);
int err;
unsigned offset;
const unsigned secondary_size = dma_bs->secondary_bin_count
@ -499,6 +499,12 @@ static int mga_do_agp_dma_bootstrap(drm_device_t * dev,
return err;
}
/* Make drm_addbufs happy by not trying to create a mapping for less
* than a page.
*/
if (warp_size < PAGE_SIZE)
warp_size = PAGE_SIZE;
offset = 0;
err = drm_addmap( dev, offset, warp_size,
_DRM_AGP, _DRM_READ_ONLY, & dev_priv->warp );
@ -587,7 +593,7 @@ static int mga_do_pci_dma_bootstrap(drm_device_t * dev,
drm_mga_dma_bootstrap_t * dma_bs)
{
drm_mga_private_t * const dev_priv = (drm_mga_private_t *) dev->dev_private;
const unsigned int warp_size = mga_warp_microcode_size(dev_priv);
unsigned int warp_size = mga_warp_microcode_size(dev_priv);
unsigned int primary_size;
unsigned int bin_count;
int err;
@ -599,6 +605,12 @@ static int mga_do_pci_dma_bootstrap(drm_device_t * dev,
return DRM_ERR(EFAULT);
}
/* Make drm_addbufs happy by not trying to create a mapping for less
* than a page.
*/
if (warp_size < PAGE_SIZE)
warp_size = PAGE_SIZE;
/* The proper alignment is 0x100 for this mapping */
err = drm_addmap(dev, 0, warp_size, _DRM_CONSISTENT,
_DRM_READ_ONLY, &dev_priv->warp);
@ -812,6 +824,10 @@ static int mga_do_init_dma( drm_device_t *dev, drm_mga_init_t *init )
}
if (! dev_priv->used_new_dma_init) {
dev_priv->dma_access = MGA_PAGPXFER;
dev_priv->wagp_enable = MGA_WAGP_ENABLE;
dev_priv->status = drm_core_findmap(dev, init->status_offset);
if (!dev_priv->status) {
DRM_ERROR("failed to find status page!\n");
@ -928,7 +944,7 @@ static int mga_do_cleanup_dma( drm_device_t *dev )
drm_mga_private_t *dev_priv = dev->dev_private;
if ((dev_priv->warp != NULL)
&& (dev_priv->mmio->type != _DRM_CONSISTENT))
&& (dev_priv->warp->type != _DRM_CONSISTENT))
drm_core_ioremapfree(dev_priv->warp, dev);
if ((dev_priv->primary != NULL)

View file

@ -227,7 +227,7 @@ static inline u32 _MGA_READ(u32 *addr)
#define MGA_EMIT_STATE( dev_priv, dirty ) \
do { \
if ( (dirty) & ~MGA_UPLOAD_CLIPRECTS ) { \
if ( dev_priv->chipset == MGA_CARD_TYPE_G400 ) { \
if ( dev_priv->chipset >= MGA_CARD_TYPE_G400 ) { \
mga_g400_emit_state( dev_priv ); \
} else { \
mga_g200_emit_state( dev_priv ); \

View file

@ -53,7 +53,7 @@ static void mga_emit_clip_rect( drm_mga_private_t *dev_priv,
/* Force reset of DWGCTL on G400 (eliminates clip disable bit).
*/
if (dev_priv->chipset == MGA_CARD_TYPE_G400) {
if (dev_priv->chipset >= MGA_CARD_TYPE_G400) {
DMA_BLOCK(MGA_DWGCTL, ctx->dwgctl,
MGA_LEN + MGA_EXEC, 0x80000000,
MGA_DWGCTL, ctx->dwgctl,

View file

@ -1136,7 +1136,7 @@ static void radeon_cp_init_ring_buffer( drm_device_t *dev,
} else
#endif
ring_start = (dev_priv->cp_ring->offset
- dev->sg->handle
- (unsigned long)dev->sg->virtual
+ dev_priv->gart_vm_start);
RADEON_WRITE( RADEON_CP_RB_BASE, ring_start );
@ -1164,7 +1164,8 @@ static void radeon_cp_init_ring_buffer( drm_device_t *dev,
drm_sg_mem_t *entry = dev->sg;
unsigned long tmp_ofs, page_ofs;
tmp_ofs = dev_priv->ring_rptr->offset - dev->sg->handle;
tmp_ofs = dev_priv->ring_rptr->offset -
(unsigned long)dev->sg->virtual;
page_ofs = tmp_ofs >> PAGE_SHIFT;
RADEON_WRITE( RADEON_CP_RB_RPTR_ADDR,
@ -1491,7 +1492,7 @@ static int radeon_do_init_cp( drm_device_t *dev, drm_radeon_init_t *init )
else
#endif
dev_priv->gart_buffers_offset = (dev->agp_buffer_map->offset
- dev->sg->handle
- (unsigned long)dev->sg->virtual
+ dev_priv->gart_vm_start);
DRM_DEBUG( "dev_priv->gart_size %d\n",

View file

@ -62,7 +62,7 @@
static inline unsigned char *alloc_buf(void)
{
unsigned int prio = in_interrupt() ? GFP_ATOMIC : GFP_KERNEL;
gfp_t prio = in_interrupt() ? GFP_ATOMIC : GFP_KERNEL;
if (PAGE_SIZE != N_TTY_BUF_SIZE)
return kmalloc(N_TTY_BUF_SIZE, prio);

View file

@ -315,9 +315,9 @@ static void dbs_check_cpu(int cpu)
policy = this_dbs_info->cur_policy;
if ( init_flag == 0 ) {
for ( /* NULL */; init_flag < NR_CPUS; init_flag++ ) {
dbs_info = &per_cpu(cpu_dbs_info, init_flag);
requested_freq[cpu] = dbs_info->cur_policy->cur;
for_each_online_cpu(j) {
dbs_info = &per_cpu(cpu_dbs_info, j);
requested_freq[j] = dbs_info->cur_policy->cur;
}
init_flag = 1;
}

View file

@ -1630,7 +1630,7 @@ static void ether1394_complete_cb(void *__ptask)
/* Transmit a packet (called by kernel) */
static int ether1394_tx (struct sk_buff *skb, struct net_device *dev)
{
int kmflags = in_interrupt() ? GFP_ATOMIC : GFP_KERNEL;
gfp_t kmflags = in_interrupt() ? GFP_ATOMIC : GFP_KERNEL;
struct eth1394hdr *eth;
struct eth1394_priv *priv = netdev_priv(dev);
int proto;

View file

@ -2283,8 +2283,9 @@ static void ohci_schedule_iso_tasklets(struct ti_ohci *ohci,
{
struct ohci1394_iso_tasklet *t;
unsigned long mask;
unsigned long flags;
spin_lock(&ohci->iso_tasklet_list_lock);
spin_lock_irqsave(&ohci->iso_tasklet_list_lock, flags);
list_for_each_entry(t, &ohci->iso_tasklet_list, link) {
mask = 1 << t->context;
@ -2295,8 +2296,7 @@ static void ohci_schedule_iso_tasklets(struct ti_ohci *ohci,
tasklet_schedule(&t->tasklet);
}
spin_unlock(&ohci->iso_tasklet_list_lock);
spin_unlock_irqrestore(&ohci->iso_tasklet_list_lock, flags);
}
static irqreturn_t ohci_irq_handler(int irq, void *dev_id,

View file

@ -412,6 +412,7 @@ static void fcp_request(struct hpsb_host *host, int nodeid, int direction,
static ssize_t raw1394_read(struct file *file, char __user * buffer,
size_t count, loff_t * offset_is_ignored)
{
unsigned long flags;
struct file_info *fi = (struct file_info *)file->private_data;
struct list_head *lh;
struct pending_request *req;
@ -435,10 +436,10 @@ static ssize_t raw1394_read(struct file *file, char __user * buffer,
}
}
spin_lock_irq(&fi->reqlists_lock);
spin_lock_irqsave(&fi->reqlists_lock, flags);
lh = fi->req_complete.next;
list_del(lh);
spin_unlock_irq(&fi->reqlists_lock);
spin_unlock_irqrestore(&fi->reqlists_lock, flags);
req = list_entry(lh, struct pending_request, list);
@ -486,6 +487,7 @@ static int state_opened(struct file_info *fi, struct pending_request *req)
static int state_initialized(struct file_info *fi, struct pending_request *req)
{
unsigned long flags;
struct host_info *hi;
struct raw1394_khost_list *khl;
@ -499,7 +501,7 @@ static int state_initialized(struct file_info *fi, struct pending_request *req)
switch (req->req.type) {
case RAW1394_REQ_LIST_CARDS:
spin_lock_irq(&host_info_lock);
spin_lock_irqsave(&host_info_lock, flags);
khl = kmalloc(sizeof(struct raw1394_khost_list) * host_count,
SLAB_ATOMIC);
@ -513,7 +515,7 @@ static int state_initialized(struct file_info *fi, struct pending_request *req)
khl++;
}
}
spin_unlock_irq(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, flags);
if (khl != NULL) {
req->req.error = RAW1394_ERROR_NONE;
@ -528,7 +530,7 @@ static int state_initialized(struct file_info *fi, struct pending_request *req)
break;
case RAW1394_REQ_SET_CARD:
spin_lock_irq(&host_info_lock);
spin_lock_irqsave(&host_info_lock, flags);
if (req->req.misc < host_count) {
list_for_each_entry(hi, &host_info_list, list) {
if (!req->req.misc--)
@ -550,7 +552,7 @@ static int state_initialized(struct file_info *fi, struct pending_request *req)
} else {
req->req.error = RAW1394_ERROR_INVALID_ARG;
}
spin_unlock_irq(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, flags);
req->req.length = 0;
break;
@ -569,7 +571,6 @@ static void handle_iso_listen(struct file_info *fi, struct pending_request *req)
{
int channel = req->req.misc;
spin_lock_irq(&host_info_lock);
if ((channel > 63) || (channel < -64)) {
req->req.error = RAW1394_ERROR_INVALID_ARG;
} else if (channel >= 0) {
@ -601,7 +602,6 @@ static void handle_iso_listen(struct file_info *fi, struct pending_request *req)
req->req.length = 0;
queue_complete_req(req);
spin_unlock_irq(&host_info_lock);
}
static void handle_fcp_listen(struct file_info *fi, struct pending_request *req)
@ -627,6 +627,7 @@ static void handle_fcp_listen(struct file_info *fi, struct pending_request *req)
static int handle_async_request(struct file_info *fi,
struct pending_request *req, int node)
{
unsigned long flags;
struct hpsb_packet *packet = NULL;
u64 addr = req->req.address & 0xffffffffffffULL;
@ -761,9 +762,9 @@ static int handle_async_request(struct file_info *fi,
hpsb_set_packet_complete_task(packet,
(void (*)(void *))queue_complete_cb, req);
spin_lock_irq(&fi->reqlists_lock);
spin_lock_irqsave(&fi->reqlists_lock, flags);
list_add_tail(&req->list, &fi->req_pending);
spin_unlock_irq(&fi->reqlists_lock);
spin_unlock_irqrestore(&fi->reqlists_lock, flags);
packet->generation = req->req.generation;
@ -779,6 +780,7 @@ static int handle_async_request(struct file_info *fi,
static int handle_iso_send(struct file_info *fi, struct pending_request *req,
int channel)
{
unsigned long flags;
struct hpsb_packet *packet;
packet = hpsb_make_isopacket(fi->host, req->req.length, channel & 0x3f,
@ -804,9 +806,9 @@ static int handle_iso_send(struct file_info *fi, struct pending_request *req,
(void (*)(void *))queue_complete_req,
req);
spin_lock_irq(&fi->reqlists_lock);
spin_lock_irqsave(&fi->reqlists_lock, flags);
list_add_tail(&req->list, &fi->req_pending);
spin_unlock_irq(&fi->reqlists_lock);
spin_unlock_irqrestore(&fi->reqlists_lock, flags);
/* Update the generation of the packet just before sending. */
packet->generation = req->req.generation;
@ -821,6 +823,7 @@ static int handle_iso_send(struct file_info *fi, struct pending_request *req,
static int handle_async_send(struct file_info *fi, struct pending_request *req)
{
unsigned long flags;
struct hpsb_packet *packet;
int header_length = req->req.misc & 0xffff;
int expect_response = req->req.misc >> 16;
@ -867,9 +870,9 @@ static int handle_async_send(struct file_info *fi, struct pending_request *req)
hpsb_set_packet_complete_task(packet,
(void (*)(void *))queue_complete_cb, req);
spin_lock_irq(&fi->reqlists_lock);
spin_lock_irqsave(&fi->reqlists_lock, flags);
list_add_tail(&req->list, &fi->req_pending);
spin_unlock_irq(&fi->reqlists_lock);
spin_unlock_irqrestore(&fi->reqlists_lock, flags);
/* Update the generation of the packet just before sending. */
packet->generation = req->req.generation;
@ -885,6 +888,7 @@ static int handle_async_send(struct file_info *fi, struct pending_request *req)
static int arm_read(struct hpsb_host *host, int nodeid, quadlet_t * buffer,
u64 addr, size_t length, u16 flags)
{
unsigned long irqflags;
struct pending_request *req;
struct host_info *hi;
struct file_info *fi = NULL;
@ -899,7 +903,7 @@ static int arm_read(struct hpsb_host *host, int nodeid, quadlet_t * buffer,
"addr: %4.4x %8.8x length: %Zu", nodeid,
(u16) ((addr >> 32) & 0xFFFF), (u32) (addr & 0xFFFFFFFF),
length);
spin_lock(&host_info_lock);
spin_lock_irqsave(&host_info_lock, irqflags);
hi = find_host_info(host); /* search address-entry */
if (hi != NULL) {
list_for_each_entry(fi, &hi->file_info_list, list) {
@ -924,7 +928,7 @@ static int arm_read(struct hpsb_host *host, int nodeid, quadlet_t * buffer,
if (!found) {
printk(KERN_ERR "raw1394: arm_read FAILED addr_entry not found"
" -> rcode_address_error\n");
spin_unlock(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, irqflags);
return (RCODE_ADDRESS_ERROR);
} else {
DBGMSG("arm_read addr_entry FOUND");
@ -954,7 +958,7 @@ static int arm_read(struct hpsb_host *host, int nodeid, quadlet_t * buffer,
req = __alloc_pending_request(SLAB_ATOMIC);
if (!req) {
DBGMSG("arm_read -> rcode_conflict_error");
spin_unlock(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, irqflags);
return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected.
The request may be retried */
}
@ -974,7 +978,7 @@ static int arm_read(struct hpsb_host *host, int nodeid, quadlet_t * buffer,
if (!(req->data)) {
free_pending_request(req);
DBGMSG("arm_read -> rcode_conflict_error");
spin_unlock(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, irqflags);
return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected.
The request may be retried */
}
@ -1031,13 +1035,14 @@ static int arm_read(struct hpsb_host *host, int nodeid, quadlet_t * buffer,
sizeof(struct arm_request));
queue_complete_req(req);
}
spin_unlock(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, irqflags);
return (rcode);
}
static int arm_write(struct hpsb_host *host, int nodeid, int destid,
quadlet_t * data, u64 addr, size_t length, u16 flags)
{
unsigned long irqflags;
struct pending_request *req;
struct host_info *hi;
struct file_info *fi = NULL;
@ -1052,7 +1057,7 @@ static int arm_write(struct hpsb_host *host, int nodeid, int destid,
"addr: %4.4x %8.8x length: %Zu", nodeid,
(u16) ((addr >> 32) & 0xFFFF), (u32) (addr & 0xFFFFFFFF),
length);
spin_lock(&host_info_lock);
spin_lock_irqsave(&host_info_lock, irqflags);
hi = find_host_info(host); /* search address-entry */
if (hi != NULL) {
list_for_each_entry(fi, &hi->file_info_list, list) {
@ -1077,7 +1082,7 @@ static int arm_write(struct hpsb_host *host, int nodeid, int destid,
if (!found) {
printk(KERN_ERR "raw1394: arm_write FAILED addr_entry not found"
" -> rcode_address_error\n");
spin_unlock(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, irqflags);
return (RCODE_ADDRESS_ERROR);
} else {
DBGMSG("arm_write addr_entry FOUND");
@ -1106,7 +1111,7 @@ static int arm_write(struct hpsb_host *host, int nodeid, int destid,
req = __alloc_pending_request(SLAB_ATOMIC);
if (!req) {
DBGMSG("arm_write -> rcode_conflict_error");
spin_unlock(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, irqflags);
return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected.
The request may be retried */
}
@ -1118,7 +1123,7 @@ static int arm_write(struct hpsb_host *host, int nodeid, int destid,
if (!(req->data)) {
free_pending_request(req);
DBGMSG("arm_write -> rcode_conflict_error");
spin_unlock(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, irqflags);
return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected.
The request may be retried */
}
@ -1165,7 +1170,7 @@ static int arm_write(struct hpsb_host *host, int nodeid, int destid,
sizeof(struct arm_request));
queue_complete_req(req);
}
spin_unlock(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, irqflags);
return (rcode);
}
@ -1173,6 +1178,7 @@ static int arm_lock(struct hpsb_host *host, int nodeid, quadlet_t * store,
u64 addr, quadlet_t data, quadlet_t arg, int ext_tcode,
u16 flags)
{
unsigned long irqflags;
struct pending_request *req;
struct host_info *hi;
struct file_info *fi = NULL;
@ -1198,7 +1204,7 @@ static int arm_lock(struct hpsb_host *host, int nodeid, quadlet_t * store,
(u32) (addr & 0xFFFFFFFF), ext_tcode & 0xFF,
be32_to_cpu(data), be32_to_cpu(arg));
}
spin_lock(&host_info_lock);
spin_lock_irqsave(&host_info_lock, irqflags);
hi = find_host_info(host); /* search address-entry */
if (hi != NULL) {
list_for_each_entry(fi, &hi->file_info_list, list) {
@ -1224,7 +1230,7 @@ static int arm_lock(struct hpsb_host *host, int nodeid, quadlet_t * store,
if (!found) {
printk(KERN_ERR "raw1394: arm_lock FAILED addr_entry not found"
" -> rcode_address_error\n");
spin_unlock(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, irqflags);
return (RCODE_ADDRESS_ERROR);
} else {
DBGMSG("arm_lock addr_entry FOUND");
@ -1307,7 +1313,7 @@ static int arm_lock(struct hpsb_host *host, int nodeid, quadlet_t * store,
req = __alloc_pending_request(SLAB_ATOMIC);
if (!req) {
DBGMSG("arm_lock -> rcode_conflict_error");
spin_unlock(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, irqflags);
return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected.
The request may be retried */
}
@ -1316,7 +1322,7 @@ static int arm_lock(struct hpsb_host *host, int nodeid, quadlet_t * store,
if (!(req->data)) {
free_pending_request(req);
DBGMSG("arm_lock -> rcode_conflict_error");
spin_unlock(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, irqflags);
return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected.
The request may be retried */
}
@ -1382,7 +1388,7 @@ static int arm_lock(struct hpsb_host *host, int nodeid, quadlet_t * store,
sizeof(struct arm_response) + 2 * sizeof(*store));
queue_complete_req(req);
}
spin_unlock(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, irqflags);
return (rcode);
}
@ -1390,6 +1396,7 @@ static int arm_lock64(struct hpsb_host *host, int nodeid, octlet_t * store,
u64 addr, octlet_t data, octlet_t arg, int ext_tcode,
u16 flags)
{
unsigned long irqflags;
struct pending_request *req;
struct host_info *hi;
struct file_info *fi = NULL;
@ -1422,7 +1429,7 @@ static int arm_lock64(struct hpsb_host *host, int nodeid, octlet_t * store,
(u32) ((be64_to_cpu(arg) >> 32) & 0xFFFFFFFF),
(u32) (be64_to_cpu(arg) & 0xFFFFFFFF));
}
spin_lock(&host_info_lock);
spin_lock_irqsave(&host_info_lock, irqflags);
hi = find_host_info(host); /* search addressentry in file_info's for host */
if (hi != NULL) {
list_for_each_entry(fi, &hi->file_info_list, list) {
@ -1449,7 +1456,7 @@ static int arm_lock64(struct hpsb_host *host, int nodeid, octlet_t * store,
printk(KERN_ERR
"raw1394: arm_lock64 FAILED addr_entry not found"
" -> rcode_address_error\n");
spin_unlock(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, irqflags);
return (RCODE_ADDRESS_ERROR);
} else {
DBGMSG("arm_lock64 addr_entry FOUND");
@ -1533,7 +1540,7 @@ static int arm_lock64(struct hpsb_host *host, int nodeid, octlet_t * store,
DBGMSG("arm_lock64 -> entering notification-section");
req = __alloc_pending_request(SLAB_ATOMIC);
if (!req) {
spin_unlock(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, irqflags);
DBGMSG("arm_lock64 -> rcode_conflict_error");
return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected.
The request may be retried */
@ -1542,7 +1549,7 @@ static int arm_lock64(struct hpsb_host *host, int nodeid, octlet_t * store,
req->data = kmalloc(size, SLAB_ATOMIC);
if (!(req->data)) {
free_pending_request(req);
spin_unlock(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, irqflags);
DBGMSG("arm_lock64 -> rcode_conflict_error");
return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected.
The request may be retried */
@ -1609,7 +1616,7 @@ static int arm_lock64(struct hpsb_host *host, int nodeid, octlet_t * store,
sizeof(struct arm_response) + 2 * sizeof(*store));
queue_complete_req(req);
}
spin_unlock(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, irqflags);
return (rcode);
}
@ -1980,6 +1987,7 @@ static int write_phypacket(struct file_info *fi, struct pending_request *req)
struct hpsb_packet *packet = NULL;
int retval = 0;
quadlet_t data;
unsigned long flags;
data = be32_to_cpu((u32) req->req.sendb);
DBGMSG("write_phypacket called - quadlet 0x%8.8x ", data);
@ -1990,9 +1998,9 @@ static int write_phypacket(struct file_info *fi, struct pending_request *req)
req->packet = packet;
hpsb_set_packet_complete_task(packet,
(void (*)(void *))queue_complete_cb, req);
spin_lock_irq(&fi->reqlists_lock);
spin_lock_irqsave(&fi->reqlists_lock, flags);
list_add_tail(&req->list, &fi->req_pending);
spin_unlock_irq(&fi->reqlists_lock);
spin_unlock_irqrestore(&fi->reqlists_lock, flags);
packet->generation = req->req.generation;
retval = hpsb_send_packet(packet);
DBGMSG("write_phypacket send_packet called => retval: %d ", retval);
@ -2659,14 +2667,15 @@ static unsigned int raw1394_poll(struct file *file, poll_table * pt)
{
struct file_info *fi = file->private_data;
unsigned int mask = POLLOUT | POLLWRNORM;
unsigned long flags;
poll_wait(file, &fi->poll_wait_complete, pt);
spin_lock_irq(&fi->reqlists_lock);
spin_lock_irqsave(&fi->reqlists_lock, flags);
if (!list_empty(&fi->req_complete)) {
mask |= POLLIN | POLLRDNORM;
}
spin_unlock_irq(&fi->reqlists_lock);
spin_unlock_irqrestore(&fi->reqlists_lock, flags);
return mask;
}
@ -2710,6 +2719,7 @@ static int raw1394_release(struct inode *inode, struct file *file)
struct arm_addr *arm_addr = NULL;
int another_host;
int csr_mod = 0;
unsigned long flags;
if (fi->iso_state != RAW1394_ISO_INACTIVE)
raw1394_iso_shutdown(fi);
@ -2720,13 +2730,11 @@ static int raw1394_release(struct inode *inode, struct file *file)
}
}
spin_lock_irq(&host_info_lock);
spin_lock_irqsave(&host_info_lock, flags);
fi->listen_channels = 0;
spin_unlock_irq(&host_info_lock);
fail = 0;
/* set address-entries invalid */
spin_lock_irq(&host_info_lock);
while (!list_empty(&fi->addr_list)) {
another_host = 0;
@ -2777,14 +2785,14 @@ static int raw1394_release(struct inode *inode, struct file *file)
vfree(addr->addr_space_buffer);
kfree(addr);
} /* while */
spin_unlock_irq(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, flags);
if (fail > 0) {
printk(KERN_ERR "raw1394: during addr_list-release "
"error(s) occurred \n");
}
while (!done) {
spin_lock_irq(&fi->reqlists_lock);
spin_lock_irqsave(&fi->reqlists_lock, flags);
while (!list_empty(&fi->req_complete)) {
lh = fi->req_complete.next;
@ -2798,7 +2806,7 @@ static int raw1394_release(struct inode *inode, struct file *file)
if (list_empty(&fi->req_pending))
done = 1;
spin_unlock_irq(&fi->reqlists_lock);
spin_unlock_irqrestore(&fi->reqlists_lock, flags);
if (!done)
down_interruptible(&fi->complete_sem);
@ -2828,9 +2836,9 @@ static int raw1394_release(struct inode *inode, struct file *file)
fi->host->id);
if (fi->state == connected) {
spin_lock_irq(&host_info_lock);
spin_lock_irqsave(&host_info_lock, flags);
list_del(&fi->list);
spin_unlock_irq(&host_info_lock);
spin_unlock_irqrestore(&host_info_lock, flags);
put_device(&fi->host->device);
}
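
A recurring change in the raw1394 hunks above replaces spin_lock_irq()/spin_unlock_irq() (and bare spin_lock()) with the irqsave/irqrestore variants, so the code neither re-enables interrupts behind a caller that had them disabled nor takes the lock unprotected when entered from interrupt context. A minimal sketch of the pattern, with invented names:

        #include <linux/spinlock.h>

        static DEFINE_SPINLOCK(example_lock);        /* hypothetical lock */

        static void example_touch_shared_state(void)
        {
                unsigned long flags;

                /* save the caller's interrupt state and take the lock */
                spin_lock_irqsave(&example_lock, flags);

                /* ... manipulate data also reached from interrupt context ... */

                /* restore exactly the interrupt state we found */
                spin_unlock_irqrestore(&example_lock, flags);
        }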

View file

@ -524,7 +524,7 @@ void mthca_cmd_use_polling(struct mthca_dev *dev)
}
struct mthca_mailbox *mthca_alloc_mailbox(struct mthca_dev *dev,
unsigned int gfp_mask)
gfp_t gfp_mask)
{
struct mthca_mailbox *mailbox;

View file

@ -248,7 +248,7 @@ void mthca_cmd_event(struct mthca_dev *dev, u16 token,
u8 status, u64 out_param);
struct mthca_mailbox *mthca_alloc_mailbox(struct mthca_dev *dev,
unsigned int gfp_mask);
gfp_t gfp_mask);
void mthca_free_mailbox(struct mthca_dev *dev, struct mthca_mailbox *mailbox);
int mthca_SYS_EN(struct mthca_dev *dev, u8 *status);

View file

@ -396,20 +396,21 @@ static irqreturn_t mthca_tavor_interrupt(int irq, void *dev_ptr, struct pt_regs
writel(dev->eq_table.clr_mask, dev->eq_table.clr_int);
ecr = readl(dev->eq_regs.tavor.ecr_base + 4);
if (ecr) {
if (!ecr)
return IRQ_NONE;
writel(ecr, dev->eq_regs.tavor.ecr_base +
MTHCA_ECR_CLR_BASE - MTHCA_ECR_BASE + 4);
for (i = 0; i < MTHCA_NUM_EQ; ++i)
if (ecr & dev->eq_table.eq[i].eqn_mask &&
mthca_eq_int(dev, &dev->eq_table.eq[i])) {
if (ecr & dev->eq_table.eq[i].eqn_mask) {
if (mthca_eq_int(dev, &dev->eq_table.eq[i]))
tavor_set_eq_ci(dev, &dev->eq_table.eq[i],
dev->eq_table.eq[i].cons_index);
tavor_eq_req_not(dev, dev->eq_table.eq[i].eqn);
}
}
return IRQ_RETVAL(ecr);
return IRQ_HANDLED;
}
static irqreturn_t mthca_tavor_msi_x_interrupt(int irq, void *eq_ptr,

View file

@ -82,7 +82,7 @@ void mthca_free_icm(struct mthca_dev *dev, struct mthca_icm *icm)
}
struct mthca_icm *mthca_alloc_icm(struct mthca_dev *dev, int npages,
unsigned int gfp_mask)
gfp_t gfp_mask)
{
struct mthca_icm *icm;
struct mthca_icm_chunk *chunk = NULL;

View file

@ -77,7 +77,7 @@ struct mthca_icm_iter {
struct mthca_dev;
struct mthca_icm *mthca_alloc_icm(struct mthca_dev *dev, int npages,
unsigned int gfp_mask);
gfp_t gfp_mask);
void mthca_free_icm(struct mthca_dev *dev, struct mthca_icm *icm);
struct mthca_icm_table *mthca_alloc_icm_table(struct mthca_dev *dev,

View file

@ -91,7 +91,7 @@ int bitmap_active(struct bitmap *bitmap)
#define WRITE_POOL_SIZE 256
/* mempool for queueing pending writes on the bitmap file */
static void *write_pool_alloc(unsigned int gfp_flags, void *data)
static void *write_pool_alloc(gfp_t gfp_flags, void *data)
{
return kmalloc(sizeof(struct page_list), gfp_flags);
}

View file

@ -331,7 +331,7 @@ crypt_alloc_buffer(struct crypt_config *cc, unsigned int size,
{
struct bio *bio;
unsigned int nr_iovecs = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
int gfp_mask = GFP_NOIO | __GFP_HIGHMEM;
gfp_t gfp_mask = GFP_NOIO | __GFP_HIGHMEM;
unsigned int i;
/*

View file

@ -3063,6 +3063,7 @@ static int md_thread(void * arg)
* many dirty RAID5 blocks.
*/
allow_signal(SIGKILL);
complete(thread->event);
while (!kthread_should_stop()) {
void (*run)(mddev_t *);
@ -3111,7 +3112,7 @@ mdk_thread_t *md_register_thread(void (*run) (mddev_t *), mddev_t *mddev,
thread->mddev = mddev;
thread->name = name;
thread->timeout = MAX_SCHEDULE_TIMEOUT;
thread->tsk = kthread_run(md_thread, thread, mdname(thread->mddev));
thread->tsk = kthread_run(md_thread, thread, name, mdname(thread->mddev));
if (IS_ERR(thread->tsk)) {
kfree(thread);
return NULL;
@ -3567,8 +3568,10 @@ static void md_do_sync(mddev_t *mddev)
mddev->curr_resync = 2;
try_again:
if (signal_pending(current)) {
if (signal_pending(current) ||
kthread_should_stop()) {
flush_signals(current);
set_bit(MD_RECOVERY_INTR, &mddev->recovery);
goto skip;
}
ITERATE_MDDEV(mddev2,tmp) {
@ -3588,8 +3591,9 @@ static void md_do_sync(mddev_t *mddev)
*/
continue;
prepare_to_wait(&resync_wait, &wq, TASK_INTERRUPTIBLE);
if (!signal_pending(current)
&& mddev2->curr_resync >= mddev->curr_resync) {
if (!signal_pending(current) &&
!kthread_should_stop() &&
mddev2->curr_resync >= mddev->curr_resync) {
printk(KERN_INFO "md: delaying resync of %s"
" until %s has finished resync (they"
" share one or more physical units)\n",
@ -3695,7 +3699,7 @@ static void md_do_sync(mddev_t *mddev)
}
if (signal_pending(current)) {
if (signal_pending(current) || kthread_should_stop()) {
/*
* got a signal, exit.
*/
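
The md hunks above move md_thread onto the kthread API (note the extra printf-style name argument now passed to kthread_run()) and teach the resync loop to stop on kthread_should_stop() as well as on a pending signal. A minimal, hypothetical kthread skeleton showing those calls:

        #include <linux/kthread.h>
        #include <linux/delay.h>

        static int example_threadfn(void *data)
        {
                /* run until someone calls kthread_stop() on this task */
                while (!kthread_should_stop()) {
                        /* ... one unit of work ... */
                        msleep_interruptible(1000);
                }
                return 0;
        }

        /*
         * starting and stopping it (names are illustrative):
         *
         *      tsk = kthread_run(example_threadfn, NULL, "example/%d", 0);
         *      ...
         *      kthread_stop(tsk);
         */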

View file

@ -262,7 +262,6 @@ config VIDEO_SAA7134_DVB
depends on VIDEO_SAA7134 && DVB_CORE
select VIDEO_BUF_DVB
select DVB_MT352
select DVB_CX22702
select DVB_TDA1004X
---help---
This adds support for DVB cards based on the

View file

@ -257,7 +257,7 @@ static void mptsas_print_device_pg0(SasDevicePage0_t *pg0)
printk("SAS Address=0x%llX\n", le64_to_cpu(sas_address));
printk("Target ID=0x%X\n", pg0->TargetID);
printk("Bus=0x%X\n", pg0->Bus);
printk("PhyNum=0x%X\n", pg0->PhyNum);
printk("Parent Phy Num=0x%X\n", pg0->PhyNum);
printk("Access Status=0x%X\n", le16_to_cpu(pg0->AccessStatus));
printk("Device Info=0x%X\n", le32_to_cpu(pg0->DeviceInfo));
printk("Flags=0x%X\n", le16_to_cpu(pg0->Flags));
@ -270,7 +270,7 @@ static void mptsas_print_expander_pg1(SasExpanderPage1_t *pg1)
printk("---- SAS EXPANDER PAGE 1 ------------\n");
printk("Physical Port=0x%X\n", pg1->PhysicalPort);
printk("PHY Identifier=0x%X\n", pg1->Phy);
printk("PHY Identifier=0x%X\n", pg1->PhyIdentifier);
printk("Negotiated Link Rate=0x%X\n", pg1->NegotiatedLinkRate);
printk("Programmed Link Rate=0x%X\n", pg1->ProgrammedLinkRate);
printk("Hardware Link Rate=0x%X\n", pg1->HwLinkRate);
@ -604,7 +604,7 @@ mptsas_sas_expander_pg1(MPT_ADAPTER *ioc, struct mptsas_phyinfo *phy_info,
mptsas_print_expander_pg1(buffer);
/* save config data */
phy_info->phy_id = buffer->Phy;
phy_info->phy_id = buffer->PhyIdentifier;
phy_info->port_id = buffer->PhysicalPort;
phy_info->negotiated_link_rate = buffer->NegotiatedLinkRate;
phy_info->programmed_link_rate = buffer->ProgrammedLinkRate;
@ -825,6 +825,8 @@ mptsas_probe_hba_phys(MPT_ADAPTER *ioc, int *index)
mptsas_sas_device_pg0(ioc, &port_info->phy_info[i].identify,
(MPI_SAS_DEVICE_PGAD_FORM_GET_NEXT_HANDLE <<
MPI_SAS_DEVICE_PGAD_FORM_SHIFT), handle);
port_info->phy_info[i].identify.phy_id =
port_info->phy_info[i].phy_id;
handle = port_info->phy_info[i].identify.handle;
if (port_info->phy_info[i].attached.handle) {
@ -881,6 +883,8 @@ mptsas_probe_expander_phys(MPT_ADAPTER *ioc, u32 *handle, int *index)
(MPI_SAS_DEVICE_PGAD_FORM_HANDLE <<
MPI_SAS_DEVICE_PGAD_FORM_SHIFT),
port_info->phy_info[i].identify.handle);
port_info->phy_info[i].identify.phy_id =
port_info->phy_info[i].phy_id;
}
if (port_info->phy_info[i].attached.handle) {

View file

@ -1027,8 +1027,7 @@ static void cp_reset_hw (struct cp_private *cp)
if (!(cpr8(Cmd) & CmdReset))
return;
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout(10);
schedule_timeout_uninterruptible(10);
}
printk(KERN_ERR "%s: hardware reset timeout\n", cp->dev->name);
@ -1575,6 +1574,7 @@ static struct ethtool_ops cp_ethtool_ops = {
.set_wol = cp_set_wol,
.get_strings = cp_get_strings,
.get_ethtool_stats = cp_get_ethtool_stats,
.get_perm_addr = ethtool_op_get_perm_addr,
};
static int cp_ioctl (struct net_device *dev, struct ifreq *rq, int cmd)
@ -1773,6 +1773,7 @@ static int cp_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
for (i = 0; i < 3; i++)
((u16 *) (dev->dev_addr))[i] =
le16_to_cpu (read_eeprom (regs, i + 7, addr_len));
memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
dev->open = cp_open;
dev->stop = cp_close;
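
The 8139cp hunk above, like the 8139too, b44, e100 and e1000 hunks further down, wires up ethtool permanent-address reporting: the probe path copies the EEPROM MAC into dev->perm_addr and the ethtool_ops table points .get_perm_addr at the generic ethtool_op_get_perm_addr helper. A hypothetical skeleton of those two steps:

        #include <linux/netdevice.h>
        #include <linux/etherdevice.h>
        #include <linux/ethtool.h>
        #include <linux/string.h>

        static struct ethtool_ops example_ethtool_ops = {
                /* generic helper: copies dev->perm_addr back to user space */
                .get_perm_addr  = ethtool_op_get_perm_addr,
        };

        /* invented probe helper; a real driver reads the MAC from its EEPROM */
        static void example_set_mac(struct net_device *dev, const u8 *eeprom_mac)
        {
                memcpy(dev->dev_addr, eeprom_mac, ETH_ALEN);
                /* record the factory address before anything can change it */
                memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
                dev->ethtool_ops = &example_ethtool_ops;
        }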

View file

@ -552,7 +552,8 @@ const static struct {
{ "RTL-8100B/8139D",
HW_REVID(1, 1, 1, 0, 1, 0, 1),
HasLWake,
HasHltClk /* XXX undocumented? */
| HasLWake,
},
{ "RTL-8101",
@ -970,6 +971,7 @@ static int __devinit rtl8139_init_one (struct pci_dev *pdev,
for (i = 0; i < 3; i++)
((u16 *) (dev->dev_addr))[i] =
le16_to_cpu (read_eeprom (ioaddr, i + 7, addr_len));
memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
/* The Rtl8139-specific entries in the device structure. */
dev->open = rtl8139_open;
@ -2465,6 +2467,7 @@ static struct ethtool_ops rtl8139_ethtool_ops = {
.get_strings = rtl8139_get_strings,
.get_stats_count = rtl8139_get_stats_count,
.get_ethtool_stats = rtl8139_get_ethtool_stats,
.get_perm_addr = ethtool_op_get_perm_addr,
};
static int netdev_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)

View file

@ -475,6 +475,14 @@ config SGI_IOC3_ETH_HW_TX_CSUM
the moment only acceleration of IPv4 is supported. This option
enables offloading for checksums on transmit. If unsure, say Y.
config MIPS_SIM_NET
tristate "MIPS simulator Network device (EXPERIMENTAL)"
depends on NETDEVICES && MIPS_SIM && EXPERIMENTAL
help
The MIPSNET device is a simple Ethernet network device which is
emulated by the MIPS Simulator.
If you are not using a MIPSsim or are unsure, say N.
config SGI_O2MACE_ETH
tristate "SGI O2 MACE Fast Ethernet support"
depends on NET_ETHERNET && SGI_IP32=y
@ -2083,6 +2091,7 @@ config SPIDER_NET
config GIANFAR
tristate "Gianfar Ethernet"
depends on 85xx || 83xx
select PHYLIB
help
This driver supports the Gigabit TSEC on the MPC85xx
family of chips, and the FEC on the 8540
@ -2243,6 +2252,20 @@ config ISERIES_VETH
tristate "iSeries Virtual Ethernet driver support"
depends on PPC_ISERIES
config RIONET
tristate "RapidIO Ethernet over messaging driver support"
depends on NETDEVICES && RAPIDIO
config RIONET_TX_SIZE
int "Number of outbound queue entries"
depends on RIONET
default "128"
config RIONET_RX_SIZE
int "Number of inbound queue entries"
depends on RIONET
default "128"
config FDDI
bool "FDDI driver support"
depends on (PCI || EISA)

View file

@ -13,7 +13,7 @@ obj-$(CONFIG_CHELSIO_T1) += chelsio/
obj-$(CONFIG_BONDING) += bonding/
obj-$(CONFIG_GIANFAR) += gianfar_driver.o
gianfar_driver-objs := gianfar.o gianfar_ethtool.o gianfar_phy.o
gianfar_driver-objs := gianfar.o gianfar_ethtool.o gianfar_mii.o
#
# link order important here
@ -64,6 +64,7 @@ obj-$(CONFIG_SKFP) += skfp/
obj-$(CONFIG_VIA_RHINE) += via-rhine.o
obj-$(CONFIG_VIA_VELOCITY) += via-velocity.o
obj-$(CONFIG_ADAPTEC_STARFIRE) += starfire.o
obj-$(CONFIG_RIONET) += rionet.o
#
# end link order section
@ -166,6 +167,7 @@ obj-$(CONFIG_EQUALIZER) += eql.o
obj-$(CONFIG_MIPS_JAZZ_SONIC) += jazzsonic.o
obj-$(CONFIG_MIPS_GT96100ETH) += gt96100eth.o
obj-$(CONFIG_MIPS_AU1X00_ENET) += au1000_eth.o
obj-$(CONFIG_MIPS_SIM_NET) += mipsnet.o
obj-$(CONFIG_SGI_IOC3_ETH) += ioc3-eth.o
obj-$(CONFIG_DECLANCE) += declance.o
obj-$(CONFIG_ATARILANCE) += atarilance.o

View file

@ -151,13 +151,6 @@ struct au1000_private *au_macs[NUM_ETH_INTERFACES];
SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full | \
SUPPORTED_Autoneg
static char *phy_link[] =
{ "unknown",
"10Base2", "10BaseT",
"AUI",
"100BaseT", "100BaseTX", "100BaseFX"
};
int bcm_5201_init(struct net_device *dev, int phy_addr)
{
s16 data;
@ -785,6 +778,7 @@ static struct mii_chip_info {
{"Broadcom BCM5201 10/100 BaseT PHY",0x0040,0x6212, &bcm_5201_ops,0},
{"Broadcom BCM5221 10/100 BaseT PHY",0x0040,0x61e4, &bcm_5201_ops,0},
{"Broadcom BCM5222 10/100 BaseT PHY",0x0040,0x6322, &bcm_5201_ops,1},
{"NS DP83847 PHY", 0x2000, 0x5c30, &bcm_5201_ops ,0},
{"AMD 79C901 HomePNA PHY",0x0000,0x35c8, &am79c901_ops,0},
{"AMD 79C874 10/100 BaseT PHY",0x0022,0x561b, &am79c874_ops,0},
{"LSI 80227 10/100 BaseT PHY",0x0016,0xf840, &lsi_80227_ops,0},
@ -1045,7 +1039,7 @@ found:
#endif
if (aup->mii->chip_info == NULL) {
printk(KERN_ERR "%s: Au1x No MII transceivers found!\n",
printk(KERN_ERR "%s: Au1x No known MII transceivers found!\n",
dev->name);
return -1;
}
@ -1546,6 +1540,9 @@ au1000_probe(u32 ioaddr, int irq, int port_num)
printk(KERN_ERR "%s: out of memory\n", dev->name);
goto err_out;
}
aup->mii->next = NULL;
aup->mii->chip_info = NULL;
aup->mii->status = 0;
aup->mii->mii_control_reg = 0;
aup->mii->mii_data_reg = 0;

View file

@ -106,6 +106,29 @@ static int b44_poll(struct net_device *dev, int *budget);
static void b44_poll_controller(struct net_device *dev);
#endif
static int dma_desc_align_mask;
static int dma_desc_sync_size;
static inline void b44_sync_dma_desc_for_device(struct pci_dev *pdev,
dma_addr_t dma_base,
unsigned long offset,
enum dma_data_direction dir)
{
dma_sync_single_range_for_device(&pdev->dev, dma_base,
offset & dma_desc_align_mask,
dma_desc_sync_size, dir);
}
static inline void b44_sync_dma_desc_for_cpu(struct pci_dev *pdev,
dma_addr_t dma_base,
unsigned long offset,
enum dma_data_direction dir)
{
dma_sync_single_range_for_cpu(&pdev->dev, dma_base,
offset & dma_desc_align_mask,
dma_desc_sync_size, dir);
}
static inline unsigned long br32(const struct b44 *bp, unsigned long reg)
{
return readl(bp->regs + reg);
@ -668,6 +691,11 @@ static int b44_alloc_rx_skb(struct b44 *bp, int src_idx, u32 dest_idx_unmasked)
dp->ctrl = cpu_to_le32(ctrl);
dp->addr = cpu_to_le32((u32) mapping + bp->rx_offset + bp->dma_offset);
if (bp->flags & B44_FLAG_RX_RING_HACK)
b44_sync_dma_desc_for_device(bp->pdev, bp->rx_ring_dma,
dest_idx * sizeof(dp),
DMA_BIDIRECTIONAL);
return RX_PKT_BUF_SZ;
}
@ -692,6 +720,11 @@ static void b44_recycle_rx(struct b44 *bp, int src_idx, u32 dest_idx_unmasked)
pci_unmap_addr_set(dest_map, mapping,
pci_unmap_addr(src_map, mapping));
if (bp->flags & B44_FLAG_RX_RING_HACK)
b44_sync_dma_desc_for_cpu(bp->pdev, bp->rx_ring_dma,
src_idx * sizeof(src_desc),
DMA_BIDIRECTIONAL);
ctrl = src_desc->ctrl;
if (dest_idx == (B44_RX_RING_SIZE - 1))
ctrl |= cpu_to_le32(DESC_CTRL_EOT);
@ -700,8 +733,14 @@ static void b44_recycle_rx(struct b44 *bp, int src_idx, u32 dest_idx_unmasked)
dest_desc->ctrl = ctrl;
dest_desc->addr = src_desc->addr;
src_map->skb = NULL;
if (bp->flags & B44_FLAG_RX_RING_HACK)
b44_sync_dma_desc_for_device(bp->pdev, bp->rx_ring_dma,
dest_idx * sizeof(dest_desc),
DMA_BIDIRECTIONAL);
pci_dma_sync_single_for_device(bp->pdev, src_desc->addr,
RX_PKT_BUF_SZ,
PCI_DMA_FROMDEVICE);
@ -959,6 +998,11 @@ static int b44_start_xmit(struct sk_buff *skb, struct net_device *dev)
bp->tx_ring[entry].ctrl = cpu_to_le32(ctrl);
bp->tx_ring[entry].addr = cpu_to_le32((u32) mapping+bp->dma_offset);
if (bp->flags & B44_FLAG_TX_RING_HACK)
b44_sync_dma_desc_for_device(bp->pdev, bp->tx_ring_dma,
entry * sizeof(bp->tx_ring[0]),
DMA_TO_DEVICE);
entry = NEXT_TX(entry);
bp->tx_prod = entry;
@ -1064,6 +1108,16 @@ static void b44_init_rings(struct b44 *bp)
memset(bp->rx_ring, 0, B44_RX_RING_BYTES);
memset(bp->tx_ring, 0, B44_TX_RING_BYTES);
if (bp->flags & B44_FLAG_RX_RING_HACK)
dma_sync_single_for_device(&bp->pdev->dev, bp->rx_ring_dma,
DMA_TABLE_BYTES,
PCI_DMA_BIDIRECTIONAL);
if (bp->flags & B44_FLAG_TX_RING_HACK)
dma_sync_single_for_device(&bp->pdev->dev, bp->tx_ring_dma,
DMA_TABLE_BYTES,
PCI_DMA_TODEVICE);
for (i = 0; i < bp->rx_pending; i++) {
if (b44_alloc_rx_skb(bp, -1, i) < 0)
break;
@ -1085,14 +1139,28 @@ static void b44_free_consistent(struct b44 *bp)
bp->tx_buffers = NULL;
}
if (bp->rx_ring) {
if (bp->flags & B44_FLAG_RX_RING_HACK) {
dma_unmap_single(&bp->pdev->dev, bp->rx_ring_dma,
DMA_TABLE_BYTES,
DMA_BIDIRECTIONAL);
kfree(bp->rx_ring);
} else
pci_free_consistent(bp->pdev, DMA_TABLE_BYTES,
bp->rx_ring, bp->rx_ring_dma);
bp->rx_ring = NULL;
bp->flags &= ~B44_FLAG_RX_RING_HACK;
}
if (bp->tx_ring) {
if (bp->flags & B44_FLAG_TX_RING_HACK) {
dma_unmap_single(&bp->pdev->dev, bp->tx_ring_dma,
DMA_TABLE_BYTES,
DMA_TO_DEVICE);
kfree(bp->tx_ring);
} else
pci_free_consistent(bp->pdev, DMA_TABLE_BYTES,
bp->tx_ring, bp->tx_ring_dma);
bp->tx_ring = NULL;
bp->flags &= ~B44_FLAG_TX_RING_HACK;
}
}
@ -1118,12 +1186,56 @@ static int b44_alloc_consistent(struct b44 *bp)
size = DMA_TABLE_BYTES;
bp->rx_ring = pci_alloc_consistent(bp->pdev, size, &bp->rx_ring_dma);
if (!bp->rx_ring)
if (!bp->rx_ring) {
/* Allocation may have failed due to pci_alloc_consistent
insisting on use of GFP_DMA, which is more restrictive
than necessary... */
struct dma_desc *rx_ring;
dma_addr_t rx_ring_dma;
if (!(rx_ring = (struct dma_desc *)kmalloc(size, GFP_KERNEL)))
goto out_err;
bp->tx_ring = pci_alloc_consistent(bp->pdev, size, &bp->tx_ring_dma);
if (!bp->tx_ring)
memset(rx_ring, 0, size);
rx_ring_dma = dma_map_single(&bp->pdev->dev, rx_ring,
DMA_TABLE_BYTES,
DMA_BIDIRECTIONAL);
if (rx_ring_dma + size > B44_DMA_MASK) {
kfree(rx_ring);
goto out_err;
}
bp->rx_ring = rx_ring;
bp->rx_ring_dma = rx_ring_dma;
bp->flags |= B44_FLAG_RX_RING_HACK;
}
bp->tx_ring = pci_alloc_consistent(bp->pdev, size, &bp->tx_ring_dma);
if (!bp->tx_ring) {
/* Allocation may have failed due to pci_alloc_consistent
insisting on use of GFP_DMA, which is more restrictive
than necessary... */
struct dma_desc *tx_ring;
dma_addr_t tx_ring_dma;
if (!(tx_ring = (struct dma_desc *)kmalloc(size, GFP_KERNEL)))
goto out_err;
memset(tx_ring, 0, size);
tx_ring_dma = dma_map_single(&bp->pdev->dev, tx_ring,
DMA_TABLE_BYTES,
DMA_TO_DEVICE);
if (tx_ring_dma + size > B44_DMA_MASK) {
kfree(tx_ring);
goto out_err;
}
bp->tx_ring = tx_ring;
bp->tx_ring_dma = tx_ring_dma;
bp->flags |= B44_FLAG_TX_RING_HACK;
}
return 0;
@ -1676,6 +1788,7 @@ static struct ethtool_ops b44_ethtool_ops = {
.set_pauseparam = b44_set_pauseparam,
.get_msglevel = b44_get_msglevel,
.set_msglevel = b44_set_msglevel,
.get_perm_addr = ethtool_op_get_perm_addr,
};
static int b44_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
@ -1718,6 +1831,7 @@ static int __devinit b44_get_invariants(struct b44 *bp)
bp->dev->dev_addr[3] = eeprom[80];
bp->dev->dev_addr[4] = eeprom[83];
bp->dev->dev_addr[5] = eeprom[82];
memcpy(bp->dev->perm_addr, bp->dev->dev_addr, bp->dev->addr_len);
bp->phy_addr = eeprom[90] & 0x1f;
@ -1971,6 +2085,12 @@ static struct pci_driver b44_driver = {
static int __init b44_init(void)
{
unsigned int dma_desc_align_size = dma_get_cache_alignment();
/* Setup parameters for syncing RX/TX DMA descriptors */
dma_desc_align_mask = ~(dma_desc_align_size - 1);
dma_desc_sync_size = max(dma_desc_align_size, sizeof(struct dma_desc));
return pci_module_init(&b44_driver);
}
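
As the comments in the b44 hunks above note, pci_alloc_consistent() may fail because it insists on GFP_DMA memory even though the device's DMA mask could be satisfied from ordinary memory; the driver then falls back to kmalloc() plus a streaming dma_map_single() mapping, verifies the result still fits the mask, and sets a *_RING_HACK flag so teardown and descriptor writes use the matching unmap/sync calls. A condensed, hypothetical sketch of that fallback (error paths trimmed, names and the mask value invented):

        #include <linux/pci.h>
        #include <linux/dma-mapping.h>
        #include <linux/slab.h>
        #include <linux/string.h>

        #define EXAMPLE_DMA_MASK 0x3fffffff        /* placeholder device limit */

        static void *example_alloc_ring(struct pci_dev *pdev, size_t size,
                                        dma_addr_t *dma, int *is_hack)
        {
                void *ring = pci_alloc_consistent(pdev, size, dma);

                if (ring) {
                        *is_hack = 0;
                        return ring;
                }

                /* coherent allocation failed; try ordinary memory instead */
                ring = kmalloc(size, GFP_KERNEL);
                if (!ring)
                        return NULL;
                memset(ring, 0, size);

                *dma = dma_map_single(&pdev->dev, ring, size, DMA_BIDIRECTIONAL);
                if (*dma + size > EXAMPLE_DMA_MASK) {
                        /* device cannot reach this buffer; give up */
                        dma_unmap_single(&pdev->dev, *dma, size, DMA_BIDIRECTIONAL);
                        kfree(ring);
                        return NULL;
                }

                *is_hack = 1;        /* caller must dma_unmap_single() on teardown */
                return ring;
        }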

View file

@ -400,6 +400,8 @@ struct b44 {
#define B44_FLAG_ADV_100HALF 0x04000000
#define B44_FLAG_ADV_100FULL 0x08000000
#define B44_FLAG_INTERNAL_PHY 0x10000000
#define B44_FLAG_RX_RING_HACK 0x20000000
#define B44_FLAG_TX_RING_HACK 0x40000000
u32 rx_offset;

View file

@ -4241,6 +4241,43 @@ out:
return 0;
}
static void bond_activebackup_xmit_copy(struct sk_buff *skb,
struct bonding *bond,
struct slave *slave)
{
struct sk_buff *skb2 = skb_copy(skb, GFP_ATOMIC);
struct ethhdr *eth_data;
u8 *hwaddr;
int res;
if (!skb2) {
printk(KERN_ERR DRV_NAME ": Error: "
"bond_activebackup_xmit_copy(): skb_copy() failed\n");
return;
}
skb2->mac.raw = (unsigned char *)skb2->data;
eth_data = eth_hdr(skb2);
/* Pick an appropriate source MAC address
* -- use slave's perm MAC addr, unless used by bond
* -- otherwise, borrow active slave's perm MAC addr
* since that will not be used
*/
hwaddr = slave->perm_hwaddr;
if (!memcmp(eth_data->h_source, hwaddr, ETH_ALEN))
hwaddr = bond->curr_active_slave->perm_hwaddr;
/* Set source MAC address appropriately */
memcpy(eth_data->h_source, hwaddr, ETH_ALEN);
res = bond_dev_queue_xmit(bond, skb2, slave->dev);
if (res)
dev_kfree_skb(skb2);
return;
}
/*
* in active-backup mode, we know that bond->curr_active_slave is always valid if
* the bond has a usable interface.
@ -4257,10 +4294,26 @@ static int bond_xmit_activebackup(struct sk_buff *skb, struct net_device *bond_d
goto out;
}
if (bond->curr_active_slave) { /* one usable interface */
res = bond_dev_queue_xmit(bond, skb, bond->curr_active_slave->dev);
if (!bond->curr_active_slave)
goto out;
/* Xmit IGMP frames on all slaves to ensure rapid fail-over
for multicast traffic on snooping switches */
if (skb->protocol == __constant_htons(ETH_P_IP) &&
skb->nh.iph->protocol == IPPROTO_IGMP) {
struct slave *slave, *active_slave;
int i;
active_slave = bond->curr_active_slave;
bond_for_each_slave_from_to(bond, slave, i, active_slave->next,
active_slave->prev)
if (IS_UP(slave->dev) &&
(slave->link == BOND_LINK_UP))
bond_activebackup_xmit_copy(skb, bond, slave);
}
res = bond_dev_queue_xmit(bond, skb, bond->curr_active_slave->dev);
out:
if (res) {
/* no suitable interface, frame not sent */

View file

@ -489,7 +489,7 @@ static int cas_page_free(struct cas *cp, cas_page_t *page)
/* local page allocation routines for the receive buffers. jumbo pages
* require at least 8K contiguous and 8K aligned buffers.
*/
static cas_page_t *cas_page_alloc(struct cas *cp, const int flags)
static cas_page_t *cas_page_alloc(struct cas *cp, const gfp_t flags)
{
cas_page_t *page;
@ -561,7 +561,7 @@ static void cas_spare_free(struct cas *cp)
}
/* replenish spares if needed */
static void cas_spare_recover(struct cas *cp, const int flags)
static void cas_spare_recover(struct cas *cp, const gfp_t flags)
{
struct list_head list, *elem, *tmp;
int needed, i;

View file

@ -5,7 +5,7 @@
*
* adopted from sunlance.c by Richard van den Berg
*
* Copyright (C) 2002, 2003 Maciej W. Rozycki
* Copyright (C) 2002, 2003, 2005 Maciej W. Rozycki
*
* additional sources:
* - PMAD-AA TURBOchannel Ethernet Module Functional Specification,
@ -57,13 +57,15 @@
#include <linux/string.h>
#include <asm/addrspace.h>
#include <asm/system.h>
#include <asm/dec/interrupts.h>
#include <asm/dec/ioasic.h>
#include <asm/dec/ioasic_addrs.h>
#include <asm/dec/kn01.h>
#include <asm/dec/machtype.h>
#include <asm/dec/system.h>
#include <asm/dec/tc.h>
#include <asm/system.h>
static char version[] __devinitdata =
"declance.c: v0.009 by Linux MIPS DECstation task force\n";
@ -79,10 +81,6 @@ MODULE_LICENSE("GPL");
#define PMAD_LANCE 2
#define PMAX_LANCE 3
#ifndef CONFIG_TC
unsigned long system_base;
unsigned long dmaptr;
#endif
#define LE_CSR0 0
#define LE_CSR1 1
@ -237,7 +235,7 @@ struct lance_init_block {
/*
* This works *only* for the ring descriptors
*/
#define LANCE_ADDR(x) (PHYSADDR(x) >> 1)
#define LANCE_ADDR(x) (CPHYSADDR(x) >> 1)
struct lance_private {
struct net_device *next;
@ -697,12 +695,13 @@ out:
spin_unlock(&lp->lock);
}
static void lance_dma_merr_int(const int irq, void *dev_id,
static irqreturn_t lance_dma_merr_int(const int irq, void *dev_id,
struct pt_regs *regs)
{
struct net_device *dev = (struct net_device *) dev_id;
printk("%s: DMA error\n", dev->name);
return IRQ_HANDLED;
}
static irqreturn_t
@ -1026,10 +1025,6 @@ static int __init dec_lance_init(const int type, const int slot)
unsigned long esar_base;
unsigned char *esar;
#ifndef CONFIG_TC
system_base = KN01_LANCE_BASE;
#endif
if (dec_lance_debug && version_printed++ == 0)
printk(version);
@ -1062,16 +1057,16 @@ static int __init dec_lance_init(const int type, const int slot)
switch (type) {
#ifdef CONFIG_TC
case ASIC_LANCE:
dev->base_addr = system_base + IOASIC_LANCE;
dev->base_addr = CKSEG1ADDR(dec_kn_slot_base + IOASIC_LANCE);
/* buffer space for the on-board LANCE shared memory */
/*
* FIXME: ugly hack!
*/
dev->mem_start = KSEG1ADDR(0x00020000);
dev->mem_start = CKSEG1ADDR(0x00020000);
dev->mem_end = dev->mem_start + 0x00020000;
dev->irq = dec_interrupt[DEC_IRQ_LANCE];
esar_base = system_base + IOASIC_ESAR;
esar_base = CKSEG1ADDR(dec_kn_slot_base + IOASIC_ESAR);
/* Workaround crash with booting KN04 2.1k from Disk */
memset((void *)dev->mem_start, 0,
@ -1101,14 +1096,14 @@ static int __init dec_lance_init(const int type, const int slot)
/* Setup I/O ASIC LANCE DMA. */
lp->dma_irq = dec_interrupt[DEC_IRQ_LANCE_MERR];
ioasic_write(IO_REG_LANCE_DMA_P,
PHYSADDR(dev->mem_start) << 3);
CPHYSADDR(dev->mem_start) << 3);
break;
case PMAD_LANCE:
claim_tc_card(slot);
dev->mem_start = get_tc_base_addr(slot);
dev->mem_start = CKSEG1ADDR(get_tc_base_addr(slot));
dev->base_addr = dev->mem_start + 0x100000;
dev->irq = get_tc_irq_nr(slot);
esar_base = dev->mem_start + 0x1c0002;
@ -1137,9 +1132,9 @@ static int __init dec_lance_init(const int type, const int slot)
case PMAX_LANCE:
dev->irq = dec_interrupt[DEC_IRQ_LANCE];
dev->base_addr = KN01_LANCE_BASE;
dev->mem_start = KN01_LANCE_BASE + 0x01000000;
esar_base = KN01_RTC_BASE + 1;
dev->base_addr = CKSEG1ADDR(KN01_SLOT_BASE + KN01_LANCE);
dev->mem_start = CKSEG1ADDR(KN01_SLOT_BASE + KN01_LANCE_MEM);
esar_base = CKSEG1ADDR(KN01_SLOT_BASE + KN01_ESAR + 1);
lp->dma_irq = -1;
/*

View file

@ -2201,6 +2201,7 @@ static struct ethtool_ops e100_ethtool_ops = {
.phys_id = e100_phys_id,
.get_stats_count = e100_get_stats_count,
.get_ethtool_stats = e100_get_ethtool_stats,
.get_perm_addr = ethtool_op_get_perm_addr,
};
static int e100_do_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
@ -2351,7 +2352,8 @@ static int __devinit e100_probe(struct pci_dev *pdev,
e100_phy_init(nic);
memcpy(netdev->dev_addr, nic->eeprom, ETH_ALEN);
if(!is_valid_ether_addr(netdev->dev_addr)) {
memcpy(netdev->perm_addr, nic->eeprom, ETH_ALEN);
if(!is_valid_ether_addr(netdev->perm_addr)) {
DPRINTK(PROBE, ERR, "Invalid MAC address from "
"EEPROM, aborting.\n");
err = -EAGAIN;

View file

@ -72,6 +72,10 @@
#include <linux/mii.h>
#include <linux/ethtool.h>
#include <linux/if_vlan.h>
#ifdef CONFIG_E1000_MQ
#include <linux/cpu.h>
#include <linux/smp.h>
#endif
#define BAR_0 0
#define BAR_1 1
@ -165,10 +169,33 @@ struct e1000_buffer {
uint16_t next_to_watch;
};
struct e1000_ps_page { struct page *ps_page[MAX_PS_BUFFERS]; };
struct e1000_ps_page_dma { uint64_t ps_page_dma[MAX_PS_BUFFERS]; };
struct e1000_ps_page { struct page *ps_page[PS_PAGE_BUFFERS]; };
struct e1000_ps_page_dma { uint64_t ps_page_dma[PS_PAGE_BUFFERS]; };
struct e1000_desc_ring {
struct e1000_tx_ring {
/* pointer to the descriptor ring memory */
void *desc;
/* physical address of the descriptor ring */
dma_addr_t dma;
/* length of descriptor ring in bytes */
unsigned int size;
/* number of descriptors in the ring */
unsigned int count;
/* next descriptor to associate a buffer with */
unsigned int next_to_use;
/* next descriptor to check for DD status bit */
unsigned int next_to_clean;
/* array of buffer information structs */
struct e1000_buffer *buffer_info;
struct e1000_buffer previous_buffer_info;
spinlock_t tx_lock;
uint16_t tdh;
uint16_t tdt;
uint64_t pkt;
};
struct e1000_rx_ring {
/* pointer to the descriptor ring memory */
void *desc;
/* physical address of the descriptor ring */
@ -186,6 +213,10 @@ struct e1000_desc_ring {
/* arrays of page information for packet split */
struct e1000_ps_page *ps_page;
struct e1000_ps_page_dma *ps_page_dma;
uint16_t rdh;
uint16_t rdt;
uint64_t pkt;
};
#define E1000_DESC_UNUSED(R) \
@ -227,9 +258,10 @@ struct e1000_adapter {
unsigned long led_status;
/* TX */
struct e1000_desc_ring tx_ring;
struct e1000_buffer previous_buffer_info;
spinlock_t tx_lock;
struct e1000_tx_ring *tx_ring; /* One per active queue */
#ifdef CONFIG_E1000_MQ
struct e1000_tx_ring **cpu_tx_ring; /* per-cpu */
#endif
uint32_t txd_cmd;
uint32_t tx_int_delay;
uint32_t tx_abs_int_delay;
@ -246,19 +278,33 @@ struct e1000_adapter {
/* RX */
#ifdef CONFIG_E1000_NAPI
boolean_t (*clean_rx) (struct e1000_adapter *adapter, int *work_done,
int work_to_do);
boolean_t (*clean_rx) (struct e1000_adapter *adapter,
struct e1000_rx_ring *rx_ring,
int *work_done, int work_to_do);
#else
boolean_t (*clean_rx) (struct e1000_adapter *adapter);
boolean_t (*clean_rx) (struct e1000_adapter *adapter,
struct e1000_rx_ring *rx_ring);
#endif
void (*alloc_rx_buf) (struct e1000_adapter *adapter);
struct e1000_desc_ring rx_ring;
void (*alloc_rx_buf) (struct e1000_adapter *adapter,
struct e1000_rx_ring *rx_ring);
struct e1000_rx_ring *rx_ring; /* One per active queue */
#ifdef CONFIG_E1000_NAPI
struct net_device *polling_netdev; /* One per active queue */
#endif
#ifdef CONFIG_E1000_MQ
struct net_device **cpu_netdev; /* per-cpu */
struct call_async_data_struct rx_sched_call_data;
int cpu_for_queue[4];
#endif
int num_queues;
uint64_t hw_csum_err;
uint64_t hw_csum_good;
uint64_t rx_hdr_split;
uint32_t rx_int_delay;
uint32_t rx_abs_int_delay;
boolean_t rx_csum;
boolean_t rx_ps;
unsigned int rx_ps_pages;
uint32_t gorcl;
uint64_t gorcl_old;
uint16_t rx_ps_bsize0;
@ -278,8 +324,8 @@ struct e1000_adapter {
struct e1000_phy_stats phy_stats;
uint32_t test_icr;
struct e1000_desc_ring test_tx_ring;
struct e1000_desc_ring test_rx_ring;
struct e1000_tx_ring test_tx_ring;
struct e1000_rx_ring test_rx_ring;
int msg_enable;

View file

@ -39,10 +39,10 @@ extern int e1000_up(struct e1000_adapter *adapter);
extern void e1000_down(struct e1000_adapter *adapter);
extern void e1000_reset(struct e1000_adapter *adapter);
extern int e1000_set_spd_dplx(struct e1000_adapter *adapter, uint16_t spddplx);
extern int e1000_setup_rx_resources(struct e1000_adapter *adapter);
extern int e1000_setup_tx_resources(struct e1000_adapter *adapter);
extern void e1000_free_rx_resources(struct e1000_adapter *adapter);
extern void e1000_free_tx_resources(struct e1000_adapter *adapter);
extern int e1000_setup_all_rx_resources(struct e1000_adapter *adapter);
extern int e1000_setup_all_tx_resources(struct e1000_adapter *adapter);
extern void e1000_free_all_rx_resources(struct e1000_adapter *adapter);
extern void e1000_free_all_tx_resources(struct e1000_adapter *adapter);
extern void e1000_update_stats(struct e1000_adapter *adapter);
struct e1000_stats {
@ -91,7 +91,8 @@ static const struct e1000_stats e1000_gstrings_stats[] = {
{ "tx_flow_control_xoff", E1000_STAT(stats.xofftxc) },
{ "rx_long_byte_count", E1000_STAT(stats.gorcl) },
{ "rx_csum_offload_good", E1000_STAT(hw_csum_good) },
{ "rx_csum_offload_errors", E1000_STAT(hw_csum_err) }
{ "rx_csum_offload_errors", E1000_STAT(hw_csum_err) },
{ "rx_header_split", E1000_STAT(rx_hdr_split) },
};
#define E1000_STATS_LEN \
sizeof(e1000_gstrings_stats) / sizeof(struct e1000_stats)
@ -546,8 +547,10 @@ e1000_set_eeprom(struct net_device *netdev,
ret_val = e1000_write_eeprom(hw, first_word,
last_word - first_word + 1, eeprom_buff);
/* Update the checksum over the first part of the EEPROM if needed */
if((ret_val == 0) && first_word <= EEPROM_CHECKSUM_REG)
/* Update the checksum over the first part of the EEPROM if needed
* and flush shadow RAM for 82573 controllers */
if((ret_val == 0) && ((first_word <= EEPROM_CHECKSUM_REG) ||
(hw->mac_type == e1000_82573)))
e1000_update_eeprom_checksum(hw);
kfree(eeprom_buff);
@ -576,8 +579,8 @@ e1000_get_ringparam(struct net_device *netdev,
{
struct e1000_adapter *adapter = netdev_priv(netdev);
e1000_mac_type mac_type = adapter->hw.mac_type;
struct e1000_desc_ring *txdr = &adapter->tx_ring;
struct e1000_desc_ring *rxdr = &adapter->rx_ring;
struct e1000_tx_ring *txdr = adapter->tx_ring;
struct e1000_rx_ring *rxdr = adapter->rx_ring;
ring->rx_max_pending = (mac_type < e1000_82544) ? E1000_MAX_RXD :
E1000_MAX_82544_RXD;
@ -597,20 +600,40 @@ e1000_set_ringparam(struct net_device *netdev,
{
struct e1000_adapter *adapter = netdev_priv(netdev);
e1000_mac_type mac_type = adapter->hw.mac_type;
struct e1000_desc_ring *txdr = &adapter->tx_ring;
struct e1000_desc_ring *rxdr = &adapter->rx_ring;
struct e1000_desc_ring tx_old, tx_new, rx_old, rx_new;
int err;
struct e1000_tx_ring *txdr, *tx_old, *tx_new;
struct e1000_rx_ring *rxdr, *rx_old, *rx_new;
int i, err, tx_ring_size, rx_ring_size;
tx_ring_size = sizeof(struct e1000_tx_ring) * adapter->num_queues;
rx_ring_size = sizeof(struct e1000_rx_ring) * adapter->num_queues;
if (netif_running(adapter->netdev))
e1000_down(adapter);
tx_old = adapter->tx_ring;
rx_old = adapter->rx_ring;
adapter->tx_ring = kmalloc(tx_ring_size, GFP_KERNEL);
if (!adapter->tx_ring) {
err = -ENOMEM;
goto err_setup_rx;
}
memset(adapter->tx_ring, 0, tx_ring_size);
adapter->rx_ring = kmalloc(rx_ring_size, GFP_KERNEL);
if (!adapter->rx_ring) {
kfree(adapter->tx_ring);
err = -ENOMEM;
goto err_setup_rx;
}
memset(adapter->rx_ring, 0, rx_ring_size);
txdr = adapter->tx_ring;
rxdr = adapter->rx_ring;
if((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
return -EINVAL;
if(netif_running(adapter->netdev))
e1000_down(adapter);
rxdr->count = max(ring->rx_pending,(uint32_t)E1000_MIN_RXD);
rxdr->count = min(rxdr->count,(uint32_t)(mac_type < e1000_82544 ?
E1000_MAX_RXD : E1000_MAX_82544_RXD));
@ -621,11 +644,16 @@ e1000_set_ringparam(struct net_device *netdev,
E1000_MAX_TXD : E1000_MAX_82544_TXD));
E1000_ROUNDUP(txdr->count, REQ_TX_DESCRIPTOR_MULTIPLE);
for (i = 0; i < adapter->num_queues; i++) {
txdr[i].count = txdr->count;
rxdr[i].count = rxdr->count;
}
if(netif_running(adapter->netdev)) {
/* Try to get new resources before deleting old */
if((err = e1000_setup_rx_resources(adapter)))
if ((err = e1000_setup_all_rx_resources(adapter)))
goto err_setup_rx;
if((err = e1000_setup_tx_resources(adapter)))
if ((err = e1000_setup_all_tx_resources(adapter)))
goto err_setup_tx;
/* save the new, restore the old in order to free it,
@ -635,8 +663,10 @@ e1000_set_ringparam(struct net_device *netdev,
tx_new = adapter->tx_ring;
adapter->rx_ring = rx_old;
adapter->tx_ring = tx_old;
e1000_free_rx_resources(adapter);
e1000_free_tx_resources(adapter);
e1000_free_all_rx_resources(adapter);
e1000_free_all_tx_resources(adapter);
kfree(tx_old);
kfree(rx_old);
adapter->rx_ring = rx_new;
adapter->tx_ring = tx_new;
if((err = e1000_up(adapter)))
@ -645,7 +675,7 @@ e1000_set_ringparam(struct net_device *netdev,
return 0;
err_setup_tx:
e1000_free_rx_resources(adapter);
e1000_free_all_rx_resources(adapter);
err_setup_rx:
adapter->rx_ring = rx_old;
adapter->tx_ring = tx_old;
@ -696,6 +726,11 @@ e1000_reg_test(struct e1000_adapter *adapter, uint64_t *data)
* Some bits that get toggled are ignored.
*/
switch (adapter->hw.mac_type) {
/* there are several bits on newer hardware that are r/w */
case e1000_82571:
case e1000_82572:
toggle = 0x7FFFF3FF;
break;
case e1000_82573:
toggle = 0x7FFFF033;
break;
@ -898,8 +933,8 @@ e1000_intr_test(struct e1000_adapter *adapter, uint64_t *data)
static void
e1000_free_desc_rings(struct e1000_adapter *adapter)
{
struct e1000_desc_ring *txdr = &adapter->test_tx_ring;
struct e1000_desc_ring *rxdr = &adapter->test_rx_ring;
struct e1000_tx_ring *txdr = &adapter->test_tx_ring;
struct e1000_rx_ring *rxdr = &adapter->test_rx_ring;
struct pci_dev *pdev = adapter->pdev;
int i;
@ -941,8 +976,8 @@ e1000_free_desc_rings(struct e1000_adapter *adapter)
static int
e1000_setup_desc_rings(struct e1000_adapter *adapter)
{
struct e1000_desc_ring *txdr = &adapter->test_tx_ring;
struct e1000_desc_ring *rxdr = &adapter->test_rx_ring;
struct e1000_tx_ring *txdr = &adapter->test_tx_ring;
struct e1000_rx_ring *rxdr = &adapter->test_rx_ring;
struct pci_dev *pdev = adapter->pdev;
uint32_t rctl;
int size, i, ret_val;
@ -1245,6 +1280,8 @@ e1000_set_phy_loopback(struct e1000_adapter *adapter)
case e1000_82541_rev_2:
case e1000_82547:
case e1000_82547_rev_2:
case e1000_82571:
case e1000_82572:
case e1000_82573:
return e1000_integrated_phy_loopback(adapter);
break;
@ -1340,8 +1377,8 @@ e1000_check_lbtest_frame(struct sk_buff *skb, unsigned int frame_size)
static int
e1000_run_loopback_test(struct e1000_adapter *adapter)
{
struct e1000_desc_ring *txdr = &adapter->test_tx_ring;
struct e1000_desc_ring *rxdr = &adapter->test_rx_ring;
struct e1000_tx_ring *txdr = &adapter->test_tx_ring;
struct e1000_rx_ring *rxdr = &adapter->test_rx_ring;
struct pci_dev *pdev = adapter->pdev;
int i, j, k, l, lc, good_cnt, ret_val=0;
unsigned long time;
@ -1509,6 +1546,7 @@ e1000_diag_test(struct net_device *netdev,
data[2] = 0;
data[3] = 0;
}
msleep_interruptible(4 * 1000);
}
static void
@ -1625,7 +1663,7 @@ e1000_phys_id(struct net_device *netdev, uint32_t data)
if(!data || data > (uint32_t)(MAX_SCHEDULE_TIMEOUT / HZ))
data = (uint32_t)(MAX_SCHEDULE_TIMEOUT / HZ);
if(adapter->hw.mac_type < e1000_82573) {
if(adapter->hw.mac_type < e1000_82571) {
if(!adapter->blink_timer.function) {
init_timer(&adapter->blink_timer);
adapter->blink_timer.function = e1000_led_blink_callback;
@ -1739,6 +1777,7 @@ struct ethtool_ops e1000_ethtool_ops = {
.phys_id = e1000_phys_id,
.get_stats_count = e1000_get_stats_count,
.get_ethtool_stats = e1000_get_ethtool_stats,
.get_perm_addr = ethtool_op_get_perm_addr,
};
void e1000_set_ethtool_ops(struct net_device *netdev)

Some files were not shown because too many files have changed in this diff.