Merge branch 'misc' into for-linus

James Bottomley 2014-01-22 09:57:27 -08:00
Parents: dcaf9aed99 3ce438df10
Commit: 4b1a9a5e40
83 changed files with 4639 additions and 17723 deletions

View file

@@ -42,8 +42,6 @@ aic79xx.txt
- Adaptec Ultra320 SCSI host adapters
aic7xxx.txt
- info on driver for Adaptec controllers
aic7xxx_old.txt
- info on driver for Adaptec controllers, old generation
arcmsr_spec.txt
- ARECA FIRMWARE SPEC (for IOP331 adapter)
dc395x.txt

View file

@@ -1,511 +0,0 @@
AIC7xxx Driver for Linux
Introduction
----------------------------
The AIC7xxx SCSI driver adds support for Adaptec (http://www.adaptec.com)
SCSI controllers and chipsets. Major portions of the driver and driver
development are shared between Linux and FreeBSD. Support for the
AIC-7xxx chipsets has been in the default Linux kernel since approximately
linux-1.1.x and has been fairly stable since linux-1.2.x; the chipsets are
also supported in FreeBSD 2.1.0 or later.
Supported cards/chipsets
----------------------------
Adaptec Cards
----------------------------
AHA-274x
AHA-274xT
AHA-2842
AHA-2910B
AHA-2920C
AHA-2930
AHA-2930U
AHA-2930CU
AHA-2930U2
AHA-2940
AHA-2940W
AHA-2940U
AHA-2940UW
AHA-2940UW-PRO
AHA-2940AU
AHA-2940U2W
AHA-2940U2
AHA-2940U2B
AHA-2940U2BOEM
AHA-2944D
AHA-2944WD
AHA-2944UD
AHA-2944UWD
AHA-2950U2
AHA-2950U2W
AHA-2950U2B
AHA-29160M
AHA-3940
AHA-3940U
AHA-3940W
AHA-3940UW
AHA-3940AUW
AHA-3940U2W
AHA-3950U2B
AHA-3950U2D
AHA-3960D
AHA-39160M
AHA-3985
AHA-3985U
AHA-3985W
AHA-3985UW
Motherboard Chipsets
----------------------------
AIC-777x
AIC-785x
AIC-786x
AIC-787x
AIC-788x
AIC-789x
AIC-3860
Bus Types
----------------------------
W - Wide SCSI, SCSI-3, 16bit bus, 68pin connector, will also support
SCSI-1/SCSI-2 50pin devices, transfer rates up to 20MB/s.
U - Ultra SCSI, transfer rates up to 40MB/s.
U2- Ultra 2 SCSI, transfer rates up to 80MB/s.
D - Differential SCSI.
T - Twin Channel SCSI. Up to 14 SCSI devices.
AHA-274x - EISA SCSI controller
AHA-284x - VLB SCSI controller
AHA-29xx - PCI SCSI controller
AHA-394x - PCI controllers with two separate SCSI controllers on-board.
AHA-398x - PCI RAID controllers with three separate SCSI controllers
on-board.
Not Supported Devices
------------------------------
Adaptec Cards
----------------------------
AHA-2920 (Only the cards that use the Future Domain chipset are not
supported; any 2920 cards based on Adaptec AIC chipsets,
such as the 2920C, are supported)
AAA-13x Raid Adapters
AAA-113x Raid Port Card
Motherboard Chipsets
----------------------------
AIC-7810
Bus Types
----------------------------
R - Raid Port busses are not supported.
The hardware RAID devices sold by Adaptec are *NOT* supported by this
driver (and will people please stop emailing me about them, they are
a totally separate beast from the bare SCSI controllers and this driver
cannot be retrofitted in any sane manner to support the hardware RAID
features on those cards - Doug Ledford).
People
------------------------------
Justin T Gibbs gibbs@plutotech.com
(BSD Driver Author)
Dan Eischen deischen@iworks.InterWorks.org
(Original Linux Driver Co-maintainer)
Dean Gehnert deang@teleport.com
(Original Linux FTP/patch maintainer)
Jess Johnson jester@frenzy.com
(AIC7xxx FAQ author)
Doug Ledford dledford@redhat.com
(Current Linux aic7xxx-5.x.x Driver/Patch/FTP maintainer)
Special thanks go to John Aycock (aycock@cpsc.ucalgary.ca), the original
author of the driver. John has since retired from the project. Thanks
again for all his work!
Mailing list
------------------------------
There is a mailing list available for users who want to track development
and converse with other users and developers. This list is for both
FreeBSD and Linux support of the AIC7xxx chipsets.
To subscribe to the AIC7xxx mailing list send mail to the list server,
with "subscribe AIC7xxx" in the body (no Subject: required):
To: majordomo@FreeBSD.ORG
---
subscribe AIC7xxx
To unsubscribe from the list, send mail to the list server with:
To: majordomo@FreeBSD.ORG
---
unsubscribe AIC7xxx
Send regular messages and replies to: AIC7xxx@FreeBSD.ORG
Boot Command line options
------------------------------
"aic7xxx=no_reset" - Eliminate the SCSI bus reset during startup.
Some SCSI devices need the initial reset that this option disables
in order to work. If you have problems at bootup, please make sure
you aren't using this option.
"aic7xxx=reverse_scan" - Certain PCI motherboards scan for devices at
bootup by scanning from the highest numbered PCI device to the
lowest numbered PCI device, others do just the opposite and scan
from lowest to highest numbered PCI device. There is no reliable
way to autodetect this ordering. So, we default to the most common
order, which is lowest to highest. Then, in case your motherboard
scans from highest to lowest, we have this option. If your BIOS
finds the drives on controller A before controller B but the linux
kernel finds your drives on controller B before A, then you should
use this option.
"aic7xxx=extended" - Force the driver to detect extended drive translation
on your controller. This helps those people who have cards without
a SEEPROM make sure that linux and all other operating systems think
the same way about your hard drives.
"aic7xxx=scbram" - Some cards have external SCB RAM that can be used to
give the card more hardware SCB slots. This allows the driver to use
that SCB RAM. Without this option, the driver won't touch the SCB
RAM because it is known to cause problems on a few cards out there
(such as 3985 class cards).
"aic7xxx=irq_trigger:x" - Replace x with either 0 or 1 to force the kernel
to use the correct IRQ type for your card. This only applies to EISA
based controllers. On these controllers, 0 is for Edge triggered
interrupts, and 1 is for Level triggered interrupts. If you aren't
sure or don't know which IRQ trigger type your EISA card uses, then
let the kernel autodetect the trigger type.
"aic7xxx=verbose" - This option can be used in one of two ways. If you
simply specify aic7xxx=verbose, then the kernel will automatically
pick the default set of verbose messages for you to see.
Alternatively, you can specify the command as
"aic7xxx=verbose:0xXXXX" where the X entries are replaced with
hexadecimal digits. This option is a bit field type option. For
a full listing of the available options, search for the
#define VERBOSE_xxxxxx lines in the aic7xxx.c file. If you want
verbose messages, then it is recommended that you simply use the
aic7xxx=verbose variant of this command.
"aic7xxx=pci_parity:x" - This option controls whether or not the driver
enables PCI parity error checking on the PCI bus. By default, this
checking is disabled. To enable the checks, simply specify pci_parity
with no value afterwards. To reverse the parity from even to odd,
supply any number other than 0 or 255. In short:
pci_parity - Even parity checking (even is the normal PCI parity)
pci_parity:x - Where x > 0, Odd parity checking
pci_parity:0 - No check (default)
NOTE: In order to get Even PCI parity checking, you must use the
version of the option that does not include the : and a number at
the end (unless you want to enter exactly 2^32 - 1 as the number).
"aic7xxx=no_probe" - This option will disable the probing for any VLB
based 2842 controllers and any EISA based controllers. This is
needed on certain newer motherboards where the normal EISA I/O ranges
have been claimed by other PCI devices. Probing on those machines
will often result in the machine crashing or spontaneously rebooting
during startup. Examples of machines that need this are the
Dell PowerEdge 6300 machines.
"aic7xxx=seltime:2" - This option controls how long the card waits
during a device selection sequence for the device to respond.
The original SCSI spec says that this "should be" 256ms. This
is generally not required with modern devices. However, some
very old SCSI I devices need the full 256ms. Most modern devices
can run fine with only 64ms. The default for this option is
64ms. If you need to change this option, then use the following
table to set the proper value in the example above:
0 - 256ms
1 - 128ms
2 - 64ms
3 - 32ms
"aic7xxx=panic_on_abort" - This option is for debugging and will cause
the driver to panic the linux kernel and freeze the system the first
time the driver's abort or reset routines are called. This is most
helpful when some problem causes infinite reset loops that scroll too
fast to see. By using this option, you can write down what the errors
actually are and send that information to me so it can be fixed.
"aic7xxx=dump_card" - This option will print out the *entire* set of
configuration registers on the card during the init sequence. This
is a debugging aid used to see exactly what state the card is in
when we finally finish our initialization routines. If you don't
have documentation on the chipsets, this will do you absolutely
no good unless you are simply trying to write all the information
down in order to send it to me.
"aic7xxx=dump_sequencer" - This is the same as the above options except
that instead of dumping the register contents on the card, this
option dumps the contents of the sequencer program RAM. This gives
the ability to verify that the instructions downloaded to the
card's sequencer are indeed what they are supposed to be. Again,
unless you have documentation to tell you how to interpret these
numbers, then it is totally useless.
"aic7xxx=override_term:0xffffffff" - This option is used to force the
termination on your SCSI controllers to a particular setting. This
is a bit mask variable that applies for up to 8 aic7xxx SCSI channels.
Each channel gets 4 bits, divided as follows:
bit 3 2 1 0
| | | Enable/Disable Single Ended Low Byte Termination
| | En/Disable Single Ended High Byte Termination
| En/Disable Low Byte LVD Termination
En/Disable High Byte LVD Termination
The upper 2 bits that deal with LVD termination only apply to Ultra2
controllers. Furthermore, due to the current Ultra2 controller
designs, these bits are tied together such that setting either bit
enables both low and high byte LVD termination. It is not possible
to only set high or low byte LVD termination in this manner. This is
an artifact of the BIOS definition on Ultra2 controllers. For other
controllers, the only important bits are the two lowest bits. Setting
the higher bits on non-Ultra2 controllers has no effect. A few
examples of how to use this option:
Enable low and high byte termination on a non-ultra2 controller that
is the first aic7xxx controller (the correct bits are 0011),
aic7xxx=override_term:0x3
Enable all termination on the third aic7xxx controller, high byte
termination on the second aic7xxx controller, and low and high byte
SE termination on the first aic7xxx controller
(bits are 1111 0010 0011),
aic7xxx=override_term:0xf23
No attempt has been made to make this option non-cryptic. It really
shouldn't be used except in dire circumstances, and if that happens,
I'm probably going to be telling you what to set this to anyway :)
"aic7xxx=stpwlev:0xffffffff" - This option is used to control the STPWLEV
bit in the DEVCONFIG PCI register. Currently, this is one of the
very few registers that we have absolutely *no* way of detecting
what the variable should be. It depends entirely on how the chipset
and external terminators were coupled by the card/motherboard maker.
Further, a chip reset (at power up) always sets this bit to 0. If
there is no BIOS to run on the chipset/card (such as with a 2910C
or a motherboard controller with the BIOS totally disabled) then
the variable may not get set properly. Of course, if the proper
setting was 0, then that's what it would be after the reset, but if
the proper setting is actually 1.....you get the picture. Now, since
we can't detect this at all, I've added this option to force the
setting. If you have a BIOS on your controller then you should never
need to use this option. However, if you are having lots of SCSI
reset problems and can't seem to get them knocked out, this may help.
Here's a test to know for certain if you need this option. Make
a boot floppy that you can use to boot your computer up and that
will detect the aic7xxx controller. Next, power down your computer.
While it's down, unplug all SCSI cables from your Adaptec SCSI
controller. Boot the system back up to the Adaptec EZ-SCSI BIOS
and then make sure that termination is enabled on your adapter (if
you have an Adaptec BIOS of course). Next, boot up the floppy you
made and wait for it to detect the aic7xxx controller. If the kernel
finds the controller fine, says scsi : x hosts and then tries to
detect your devices like normal, up to the point where it fails to
mount your root file system and panics, then you're fine. If, on
the other hand, the system goes into an infinite reset loop, then
you need to use this option and/or the previous option to force the
proper termination settings on your controller. If this happens,
then you next need to figure out what your settings should be.
To find the correct settings, power your machine back down, connect
back up the SCSI cables, and boot back into your machine like normal.
However, boot with the aic7xxx=verbose:0x39 option. Record the
initial DEVCONFIG values for each of your aic7xxx controllers as
they are listed, and also record what the machine is detecting as
the proper termination on your controllers. NOTE: the order in
which the initial DEVCONFIG values are printed out is not guaranteed
to be the same order as the SCSI controllers are registered. The
above option and this option both work on the order of the SCSI
controllers as they are registered, so make sure you match the right
DEVCONFIG values with the right controllers if you have more than
one aic7xxx controller.
Once you have the detected termination settings and the initial
DEVCONFIG values for each controller, then figure out what the
termination on each of the controllers *should* be. Hopefully, that
part is correct, but it could possibly be wrong if there is
bogus cable detection logic on your controller or something similar.
If all the controllers have the correct termination settings, then
don't set the aic7xxx=override_term variable at all, leave it alone.
Next, on any controllers that go into an infinite reset loop when
you unplug all the SCSI cables, get the starting DEVCONFIG value.
If the initial DEVCONFIG value is divisible by 2, then the correct
setting for that controller is 0. If it's an odd number, then
the correct setting for that controller is 1. For any other
controllers that didn't have an infinite reset problem, then reverse
the above options. If DEVCONFIG was even, then the correct setting
is 1, if not then the correct setting is 0.
Now that you know what the correct setting was for each controller,
we need to encode that into the aic7xxx=stpwlev:0x... variable.
This variable is a bit field encoded variable. Bit 0 is for the first
aic7xxx controller, bit 1 for the next, etc. Put all these bits
together and you get a number. For example, if the third aic7xxx
needed a 1, but the second and first both needed a 0, then the bits
would be 100 in binary. This then translates to 0x04. You would
therefore set aic7xxx=stpwlev:0x04. This is fairly standard binary
to hexadecimal conversions here. If you aren't up to speed on the
binary->hex conversion then send an email to the aic7xxx mailing
list and someone can help you out.
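Purely as an illustration of the two bit-field encodings above (one 4-bit
nibble per channel for override_term, one bit per controller for stpwlev),
here is a small stand-alone C sketch that reproduces the worked examples.
It is not part of the driver, and all helper names are made up.

#include <stdio.h>

/* Sketch only: pack one 4-bit termination nibble per aic7xxx channel
 * (bit 0 = SE low byte, bit 1 = SE high byte, bits 2-3 = LVD low/high). */
static unsigned int pack_override_term(const unsigned int *nibbles, int n)
{
	unsigned int mask = 0;
	int i;

	for (i = 0; i < n; i++)
		mask |= (nibbles[i] & 0xf) << (4 * i);
	return mask;
}

/* Sketch only: pack one STPWLEV bit per controller, bit 0 = first controller. */
static unsigned int pack_stpwlev(const int *stpwlev, int n)
{
	unsigned int mask = 0;
	int i;

	for (i = 0; i < n; i++)
		if (stpwlev[i])
			mask |= 1u << i;
	return mask;
}

int main(void)
{
	unsigned int term[] = { 0x3, 0x2, 0xf };	/* doc example -> 0xf23 */
	int stpw[] = { 0, 0, 1 };			/* doc example -> 0x04 */

	printf("aic7xxx=override_term:0x%x\n", pack_override_term(term, 3));
	printf("aic7xxx=stpwlev:0x%x\n", pack_stpwlev(stpw, 3));
	return 0;
}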
"aic7xxx=tag_info:{{8,8..},{8,8..},..}" - This option is used to disable
or enable Tagged Command Queueing (TCQ) on specific devices. As of
driver version 5.1.11, TCQ is now either on or off by default
according to the setting you choose during the make config process.
In order to en/disable TCQ for certain devices at boot time, a user
may use this boot param. The driver will then parse this message out
and en/disable the specific device entries that are present based upon
the value given. The param line is parsed in the following manner:
{ - first instance indicates the start of this parameter values
second instance is the start of entries for a particular
device entry
} - end the entries for a particular host adapter, or end the entire
set of parameter entries
, - move to next entry. Inside of a set of device entries, this
moves us to the next device on the list. Outside of device
entries, this moves us to the next host adapter
. - Same effect as , but is safe to use with insmod.
x - the number to enter into the array at this position.
0 = Enable tagged queueing on this device and use the default
queue depth
1-254 = Enable tagged queueing on this device and use this
number as the queue depth
255 = Disable tagged queueing on this device.
Note: anything above 32 for an actual queue depth is wasteful
and not recommended.
A few examples of how this can be used:
tag_info:{{8,12,,0,,255,4}}
This line will only affect the first aic7xxx card registered. It
will set scsi id 0 to a queue depth of 8, id 1 to 12, leave id 2
at the default, set id 3 to tagged queueing enabled and use the
default queue depth, id 4 default, id 5 disabled, and id 6 to 4.
Any not specified entries stay at the default value, repeated
commas with no value specified will simply increment to the next id
without changing anything for the missing values.
tag_info:{,,,{,,,255}}
First, second, and third adapters at default values. Fourth
adapter, id 3 is disabled. Notice that leading commas simply
increment what the first number affects, and there is no need
for trailing commas. When you close out an adapter, or the
entire entry, anything not explicitly set stays at the default
value.
A final note on this option. The scanner I used for this isn't
perfect or highly robust. If you mess the line up, the worst that
should happen is that the line will get ignored. If you don't
close out the entire entry with the final bracket, then any other
aic7xxx options after this will get ignored. So, in general, be
sure of what you are entering, and after you have it right, just
add it to the lilo.conf file so there won't be any mistakes. As
a means of checking this parser, the entire tag_info array for
each card is now printed out in the /proc/scsi/aic7xxx/x file. You
can use that to verify that your options were parsed correctly.
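For readers who want to sanity-check a tag_info line before putting it in
lilo.conf, the following stand-alone C sketch walks the '{', '}', ',' and '.'
grammar described above and fills a per-host array the same way. It is an
illustration only, not the driver's actual parser.

#include <ctype.h>
#include <stdio.h>

#define MAX_HOSTS	8
#define MAX_TARGETS	16

/* 0 = default depth, 1-254 = explicit depth, 255 = TCQ disabled */
static int tags[MAX_HOSTS][MAX_TARGETS];

static void parse_tag_info(const char *s)
{
	int host = 0, target = 0, depth = 0, val = -1;

	for (; *s; s++) {
		if (isdigit((unsigned char)*s)) {
			val = (val < 0 ? 0 : val) * 10 + (*s - '0');
		} else if (*s == '{') {
			if (depth)		/* inner brace: new device list */
				target = 0;
			depth++;
		} else if (*s == '}') {
			if (depth == 2 && val >= 0 &&
			    host < MAX_HOSTS && target < MAX_TARGETS)
				tags[host][target] = val;
			depth--;
			val = -1;
		} else if (*s == ',' || *s == '.') {
			if (depth == 2) {	/* inside a device list */
				if (val >= 0 &&
				    host < MAX_HOSTS && target < MAX_TARGETS)
					tags[host][target] = val;
				target++;
			} else {		/* between host adapters */
				host++;
			}
			val = -1;
		}
	}
}

int main(void)
{
	int i;

	parse_tag_info("{{8,12,,0,,255,4}}");	/* first example above */
	for (i = 0; i < 8; i++)
		printf("%d ", tags[0][i]);
	printf("\n");				/* prints: 8 12 0 0 0 255 4 0 */
	return 0;
}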
Boot command line options may be combined to form the proper set of options
a user might need. For example, the following is valid:
aic7xxx=verbose,extended,irq_trigger:1
The only requirement is that individual options be separated by a comma or
a period on the command line.
Module Loading command options
------------------------------
When loading the aic7xxx driver as a module, the exact same options are
available to the user. However, the syntax to specify the options changes
slightly. For insmod, you need to wrap the aic7xxx= argument in quotes
and replace all ',' with '.'. So, for example, a valid insmod line
would be:
insmod aic7xxx aic7xxx='verbose.irq_trigger:1.extended'
This line should result in the *exact* same behaviour as if you typed
it in at the lilo prompt and the driver was compiled into the kernel
instead of being a module. The reason for the single quote is so that
the shell won't try to interpret anything in the line, such as {.
Insmod assumes any option starting with a letter instead of a number
is a character string (which is what we want) and by switching all of
the commas to periods, insmod won't interpret this as more than one
string and write junk into our binary image. I consider it a bug in
the insmod program that even if you wrap your string in quotes (quotes
that pass the shell mind you and that insmod sees) it still treats
a comma inside of those quotes as starting a new variable, resulting
in memory scribbles if you don't switch the commas to periods.
Kernel Compile options
------------------------------
The various kernel compile time options for this driver are now fairly
well documented in the file drivers/scsi/Kconfig. In order to
see this documentation, you need to use one of the advanced configuration
programs (menuconfig and xconfig). If you are using the "make menuconfig"
method of configuring your kernel, then you would simply highlight the
option in question and hit the ? key. If you are using the "make xconfig"
method of configuring your kernel, then simply click on the help button
next to the option you have questions about. The help information from
the Configure.help file will then get automatically displayed.
/proc support
------------------------------
The /proc support for the AIC7xxx can be found in the /proc/scsi/aic7xxx/
directory. That directory contains a file for each SCSI controller in
the system. Each file presents the current configuration and transfer
statistics (enabled with #define in aic7xxx.c) for each controller.
Thanks to Michael Neuffer for his upper-level SCSI help, and
Matthew Jacob for statistics support.
Debugging the driver
------------------------------
Should you have problems with this driver, and would like some help in
getting them solved, there are a couple debugging items built into
the driver to facilitate getting the needed information from the system.
In general, I need a complete description of the problem, with as many
logs as possible concerning what happens. To help with this, there is
a command option aic7xxx=panic_on_abort. This option, when set, forces
the driver to panic the kernel on the first SCSI abort issued by the
mid level SCSI code. If your system is going into reset loops and you
can't read the screen, then this is what you need. Not only will it
stop the system, but it also prints out a large amount of state
information in the process. Second, if you specify the option
"aic7xxx=verbose:0x1ffff", the system will print out *SOOOO* much
information as it runs that you won't be able to see anything.
However, this can actually be very useful if your machine simply
locks up when trying to boot, since it will pin-point what was last
happening (in regards to the aic7xxx driver) immediately prior to
the lockup. This is really only useful if your machine simply can
not boot up successfully. If you can get your machine to run, then
this will produce far too much information.
FTP sites
------------------------------
ftp://ftp.redhat.com/pub/aic/
- Out of date. I used to keep stuff here, but too many people
complained about having a hard time getting into Red Hat's ftp
server. So use the web site below instead.
ftp://ftp.pcnet.com/users/eischen/Linux/
- Dan Eischen's driver distribution area
ftp://ekf2.vsb.cz/pub/linux/kernel/aic7xxx/ftp.teleport.com/
- European Linux mirror of Teleport site
Web sites
------------------------------
http://people.redhat.com/dledford/
- My web site, also the primary aic7xxx site with several related
pages.
Dean W. Gehnert
deang@teleport.com
$Revision: 3.0 $
Modified by Doug Ledford 1998-2000

View file

@@ -42,20 +42,14 @@ discussion.
Once LLDD gets hold of a scmd, either the LLDD will complete the
command by calling scsi_done callback passed from midlayer when
invoking hostt->queuecommand() or SCSI midlayer will time it out.
invoking hostt->queuecommand() or the block layer will time it out.
[1-2-1] Completing a scmd w/ scsi_done
For all non-EH commands, scsi_done() is the completion callback. It
does the following.
1. Delete timeout timer. If it fails, it means that timeout timer
has expired and is going to finish the command. Just return.
2. Link scmd to per-cpu scsi_done_q using scmd->en_entry
3. Raise SCSI_SOFTIRQ
just calls blk_complete_request() to delete the block layer timer and
raise SCSI_SOFTIRQ
SCSI_SOFTIRQ handler scsi_softirq calls scsi_decide_disposition() to
determine what to do with the command. scsi_decide_disposition()
@@ -64,10 +58,12 @@ with the command.
- SUCCESS
scsi_finish_command() is invoked for the command. The
function does some maintenance choirs and notify completion by
calling scmd->done() callback, which, for fs requests, would
be HLD completion callback - sd:sd_rw_intr, sr:rw_intr,
st:st_intr.
function does some maintenance chores and then calls
scsi_io_completion() to finish the I/O.
scsi_io_completion() then notifies the block layer on
the completed request by calling blk_end_request and
friends or figures out what to do with the remainder
of the data in case of an error.
- NEEDS_RETRY
- ADD_TO_MLQUEUE
@@ -86,33 +82,45 @@ function
1. invokes optional hostt->eh_timed_out() callback. Return value can
be one of
- EH_HANDLED
This indicates that eh_timed_out() dealt with the timeout. The
scmd is passed to __scsi_done() and thus linked into per-cpu
scsi_done_q. Normal command completion described in [1-2-1]
follows.
- BLK_EH_HANDLED
This indicates that eh_timed_out() dealt with the timeout.
The command is passed back to the block layer and completed
via __blk_complete_request().
- EH_RESET_TIMER
*NOTE* After returning BLK_EH_HANDLED the SCSI layer is
assumed to be finished with the command, and no other
functions from the SCSI layer will be called. So this
should typically only be returned if the eh_timed_out()
handler raced with normal completion.
- BLK_EH_RESET_TIMER
This indicates that more time is required to finish the
command. Timer is restarted. This action is counted as a
retry and only allowed scmd->allowed + 1(!) times. Once the
limit is reached, action for EH_NOT_HANDLED is taken instead.
limit is reached, action for BLK_EH_NOT_HANDLED is taken instead.
*NOTE* This action is racy as the LLDD could finish the scmd
after the timeout has expired but before it's added back. In
such cases, scsi_done() would think that timeout has occurred
and return without doing anything. We lose completion and the
command will time out again.
- EH_NOT_HANDLED
This is the same as when eh_timed_out() callback doesn't exist.
- BLK_EH_NOT_HANDLED
eh_timed_out() callback did not handle the command.
Step #2 is taken.
2. If the host supports asynchronous completion (as indicated by the
no_async_abort setting in the host template) scsi_abort_command()
is invoked to schedule an asynchronous abort. If that fails
Step #3 is taken.
2. scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD) is invoked for the
command. See [1-3] for more information.
[1-3] Asynchronous command aborts
[1-3] How EH takes over
After a timeout occurs a command abort is scheduled from
scsi_abort_command(). If the abort is successful the command
will either be retried (if the number of retries is not exhausted)
or terminated with DID_TIME_OUT.
Otherwise scsi_eh_scmd_add() is invoked for the command.
See [1-4] for more information.
[1-4] How EH takes over
scmds enter EH via scsi_eh_scmd_add(), which does the following.
@@ -320,7 +328,8 @@ scmd->allowed.
<<scsi_eh_abort_cmds>>
This action is taken for each timed out command.
This action is taken for each timed out command when
no_async_abort is enabled in the host template.
hostt->eh_abort_handler() is invoked for each scmd. The
handler returns SUCCESS if it has succeeded to make LLDD and
all related hardware forget about the scmd.
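To make the new BLK_EH_* return values described above concrete, here is a
minimal sketch of what an LLDD eh_timed_out() implementation following those
rules might look like. The my_* helpers are stubs invented for illustration;
nothing here is taken from this commit.

#include <linux/blkdev.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

/* Hypothetical driver-internal helpers, stubbed out for illustration. */
static bool my_hw_owns_cmd(struct scsi_cmnd *scmd)		{ return true; }
static bool my_hw_needs_more_time(struct scsi_cmnd *scmd)	{ return false; }

static enum blk_eh_timer_return my_eh_timed_out(struct scsi_cmnd *scmd)
{
	if (!my_hw_owns_cmd(scmd))
		/* Raced with normal completion; the SCSI layer is done with scmd. */
		return BLK_EH_HANDLED;

	if (my_hw_needs_more_time(scmd))
		/* Restart the block layer timer; counts against scmd->allowed. */
		return BLK_EH_RESET_TIMER;

	/* Fall through to step #2: asynchronous abort (or full EH). */
	return BLK_EH_NOT_HANDLED;
}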

View file

@@ -882,8 +882,11 @@ Details:
*
* Calling context: kernel thread
*
* Notes: Invoked from scsi_eh thread. No other commands will be
* queued on current host during eh.
* Notes: If 'no_async_abort' is defined this callback
* will be invoked from scsi_eh thread. No other commands
* will then be queued on current host during eh.
* Otherwise it will be called whenever scsi_times_out()
* is called due to a command timeout.
*
* Optionally defined in: LLD
**/
@@ -1257,6 +1260,8 @@ of interest:
address space
use_clustering - 1=>SCSI commands in mid level's queue can be merged,
0=>disallow SCSI command merging
no_async_abort - 1=>Asynchronous aborts are not supported
0=>Timed-out commands will be aborted asynchronously
hostt - pointer to driver's struct scsi_host_template from which
this struct Scsi_Host instance was spawned
hostt->proc_name - name of LLD. This is the driver name that sysfs uses
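As a rough illustration of the no_async_abort flag documented above, a host
template that opts out of asynchronous aborts might be wired up roughly as
below. Every my_* name is a placeholder and the sketch is not taken from this
commit; only the .no_async_abort and .eh_abort_handler wiring is the point.

#include <linux/module.h>
#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

/* Stub LLDD entry points, present only so the template is complete. */
static int my_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
{
	return 0;
}

static int my_eh_abort_handler(struct scsi_cmnd *scmd)
{
	return SUCCESS;
}

static struct scsi_host_template my_sht = {
	.module           = THIS_MODULE,
	.name             = "my_hba",
	.queuecommand     = my_queuecommand,
	.eh_abort_handler = my_eh_abort_handler,	/* runs from the scsi_eh thread */
	.no_async_abort   = 1,	/* timed-out commands go through EH, not async abort */
	.this_id          = -1,
};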

View file

@@ -484,7 +484,6 @@ M: Hannes Reinecke <hare@suse.de>
L: linux-scsi@vger.kernel.org
S: Maintained
F: drivers/scsi/aic7xxx/
F: drivers/scsi/aic7xxx_old/
AIMSLAB FM RADIO RECEIVER DRIVER
M: Hans Verkuil <hverkuil@xs4all.nl>
@@ -6901,8 +6900,7 @@ S: Maintained
F: drivers/scsi/qla1280.[ch]
QLOGIC QLA2XXX FC-SCSI DRIVER
M: Andrew Vasquez <andrew.vasquez@qlogic.com>
M: linux-driver@qlogic.com
M: qla2xxx-upstream@qlogic.com
L: linux-scsi@vger.kernel.org
S: Supported
F: Documentation/scsi/LICENSE.qla2xxx
@@ -7456,8 +7454,9 @@ F: include/scsi/srp.h
SCSI SG DRIVER
M: Doug Gilbert <dgilbert@interlog.com>
L: linux-scsi@vger.kernel.org
W: http://www.torque.net/sg
W: http://sg.danny.cz/sg
S: Maintained
F: Documentation/scsi/scsi-generic.txt
F: drivers/scsi/sg.c
F: include/scsi/sg.h

View file

@@ -499,47 +499,6 @@ config SCSI_AACRAID
source "drivers/scsi/aic7xxx/Kconfig.aic7xxx"
config SCSI_AIC7XXX_OLD
tristate "Adaptec AIC7xxx support (old driver)"
depends on (ISA || EISA || PCI ) && SCSI
help
WARNING This driver is an older aic7xxx driver and is no longer
under active development. Adaptec, Inc. is writing a new driver to
take the place of this one, and it is recommended that whenever
possible, people should use the new Adaptec written driver instead
of this one. This driver will eventually be phased out entirely.
This is support for the various aic7xxx based Adaptec SCSI
controllers. These include the 274x EISA cards; 284x VLB cards;
2902, 2910, 293x, 294x, 394x, 3985 and several other PCI and
motherboard based SCSI controllers from Adaptec. It does not support
the AAA-13x RAID controllers from Adaptec, nor will it likely ever
support them. It does not support the 2920 cards from Adaptec that
use the Future Domain SCSI controller chip. For those cards, you
need the "Future Domain 16xx SCSI support" driver.
In general, if the controller is based on an Adaptec SCSI controller
chip from the aic777x series or the aic78xx series, this driver
should work. The only exception is the 7810 which is specifically
not supported (that's the RAID controller chip on the AAA-13x
cards).
Note that the AHA2920 SCSI host adapter is *not* supported by this
driver; choose "Future Domain 16xx SCSI support" instead if you have
one of those.
Information on the configuration options for this controller can be
found by checking the help file for each of the available
configuration options. You should read
<file:Documentation/scsi/aic7xxx_old.txt> at a minimum before
contacting the maintainer with any questions. The SCSI-HOWTO,
available from <http://www.tldp.org/docs.html#howto>, can also
be of great help.
To compile this driver as a module, choose M here: the
module will be called aic7xxx_old.
source "drivers/scsi/aic7xxx/Kconfig.aic79xx"
source "drivers/scsi/aic94xx/Kconfig"
source "drivers/scsi/mvsas/Kconfig"

View file

@@ -70,7 +70,6 @@ obj-$(CONFIG_SCSI_AHA1740) += aha1740.o
obj-$(CONFIG_SCSI_AIC7XXX) += aic7xxx/
obj-$(CONFIG_SCSI_AIC79XX) += aic7xxx/
obj-$(CONFIG_SCSI_AACRAID) += aacraid/
obj-$(CONFIG_SCSI_AIC7XXX_OLD) += aic7xxx_old.o
obj-$(CONFIG_SCSI_AIC94XX) += aic94xx/
obj-$(CONFIG_SCSI_PM8001) += pm8001/
obj-$(CONFIG_SCSI_ISCI) += isci/

Diff not shown because of its large size.

View file

@@ -1,28 +0,0 @@
/*+M*************************************************************************
* Adaptec AIC7xxx device driver for Linux.
*
* Copyright (c) 1994 John Aycock
* The University of Calgary Department of Computer Science.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2, or (at your option)
* any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; see the file COPYING. If not, write to
* the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
*
* $Id: aic7xxx.h,v 3.2 1996/07/23 03:37:26 deang Exp $
*-M*************************************************************************/
#ifndef _aic7xxx_h
#define _aic7xxx_h
#define AIC7XXX_H_VERSION "5.2.0"
#endif /* _aic7xxx_h */

Diff not shown because of its large size.

Diff not shown because of its large size.

View file

@@ -1,270 +0,0 @@
/*+M*************************************************************************
* Adaptec AIC7xxx device driver proc support for Linux.
*
* Copyright (c) 1995, 1996 Dean W. Gehnert
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2, or (at your option)
* any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; see the file COPYING. If not, write to
* the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
*
* ----------------------------------------------------------------
* o Modified from the EATA-DMA /proc support.
* o Additional support for device block statistics provided by
* Matthew Jacob.
* o Correction of overflow by Heinz Mauelshagen
* o Additional corrections by Doug Ledford
*
* Dean W. Gehnert, deang@teleport.com, 05/01/96
*
* $Id: aic7xxx_proc.c,v 4.1 1997/06/97 08:23:42 deang Exp $
*-M*************************************************************************/
#define HDRB \
" 0 - 4K 4 - 16K 16 - 64K 64 - 256K 256K - 1M 1M+"
/*+F*************************************************************************
* Function:
* aic7xxx_show_info
*
* Description:
* Return information to handle /proc support for the driver.
*-F*************************************************************************/
int
aic7xxx_show_info(struct seq_file *m, struct Scsi_Host *HBAptr)
{
struct aic7xxx_host *p;
struct aic_dev_data *aic_dev;
struct scsi_device *sdptr;
unsigned char i;
unsigned char tindex;
for(p=first_aic7xxx; p && p->host != HBAptr; p=p->next)
;
if (!p)
{
seq_printf(m, "Can't find adapter for host number %d\n", HBAptr->host_no);
return 0;
}
p = (struct aic7xxx_host *) HBAptr->hostdata;
seq_printf(m, "Adaptec AIC7xxx driver version: ");
seq_printf(m, "%s/", AIC7XXX_C_VERSION);
seq_printf(m, "%s", AIC7XXX_H_VERSION);
seq_printf(m, "\n");
seq_printf(m, "Adapter Configuration:\n");
seq_printf(m, " SCSI Adapter: %s\n",
board_names[p->board_name_index]);
if (p->flags & AHC_TWIN)
seq_printf(m, " Twin Channel Controller ");
else
{
char *channel = "";
char *ultra = "";
char *wide = "Narrow ";
if (p->flags & AHC_MULTI_CHANNEL)
{
channel = " Channel A";
if (p->flags & (AHC_CHNLB|AHC_CHNLC))
channel = (p->flags & AHC_CHNLB) ? " Channel B" : " Channel C";
}
if (p->features & AHC_WIDE)
wide = "Wide ";
if (p->features & AHC_ULTRA3)
{
switch(p->chip & AHC_CHIPID_MASK)
{
case AHC_AIC7892:
case AHC_AIC7899:
ultra = "Ultra-160/m LVD/SE ";
break;
default:
ultra = "Ultra-3 LVD/SE ";
break;
}
}
else if (p->features & AHC_ULTRA2)
ultra = "Ultra-2 LVD/SE ";
else if (p->features & AHC_ULTRA)
ultra = "Ultra ";
seq_printf(m, " %s%sController%s ",
ultra, wide, channel);
}
switch(p->chip & ~AHC_CHIPID_MASK)
{
case AHC_VL:
seq_printf(m, "at VLB slot %d\n", p->pci_device_fn);
break;
case AHC_EISA:
seq_printf(m, "at EISA slot %d\n", p->pci_device_fn);
break;
default:
seq_printf(m, "at PCI %d/%d/%d\n", p->pci_bus,
PCI_SLOT(p->pci_device_fn), PCI_FUNC(p->pci_device_fn));
break;
}
if( !(p->maddr) )
{
seq_printf(m, " Programmed I/O Base: %lx\n", p->base);
}
else
{
seq_printf(m, " PCI MMAPed I/O Base: 0x%lx\n", p->mbase);
}
if( (p->chip & (AHC_VL | AHC_EISA)) )
{
seq_printf(m, " BIOS Memory Address: 0x%08x\n", p->bios_address);
}
seq_printf(m, " Adapter SEEPROM Config: %s\n",
(p->flags & AHC_SEEPROM_FOUND) ? "SEEPROM found and used." :
((p->flags & AHC_USEDEFAULTS) ? "SEEPROM not found, using defaults." :
"SEEPROM not found, using leftover BIOS values.") );
seq_printf(m, " Adaptec SCSI BIOS: %s\n",
(p->flags & AHC_BIOS_ENABLED) ? "Enabled" : "Disabled");
seq_printf(m, " IRQ: %d\n", HBAptr->irq);
seq_printf(m, " SCBs: Active %d, Max Active %d,\n",
p->activescbs, p->max_activescbs);
seq_printf(m, " Allocated %d, HW %d, "
"Page %d\n", p->scb_data->numscbs, p->scb_data->maxhscbs,
p->scb_data->maxscbs);
if (p->flags & AHC_EXTERNAL_SRAM)
seq_printf(m, " Using External SCB SRAM\n");
seq_printf(m, " Interrupts: %ld", p->isr_count);
if (p->chip & AHC_EISA)
{
seq_printf(m, " %s\n",
(p->pause & IRQMS) ? "(Level Sensitive)" : "(Edge Triggered)");
}
else
{
seq_printf(m, "\n");
}
seq_printf(m, " BIOS Control Word: 0x%04x\n",
p->bios_control);
seq_printf(m, " Adapter Control Word: 0x%04x\n",
p->adapter_control);
seq_printf(m, " Extended Translation: %sabled\n",
(p->flags & AHC_EXTEND_TRANS_A) ? "En" : "Dis");
seq_printf(m, "Disconnect Enable Flags: 0x%04x\n", p->discenable);
if (p->features & (AHC_ULTRA | AHC_ULTRA2))
{
seq_printf(m, " Ultra Enable Flags: 0x%04x\n", p->ultraenb);
}
seq_printf(m, "Default Tag Queue Depth: %d\n", aic7xxx_default_queue_depth);
seq_printf(m, " Tagged Queue By Device array for aic7xxx host "
"instance %d:\n", p->instance);
seq_printf(m, " {");
for(i=0; i < (MAX_TARGETS - 1); i++)
seq_printf(m, "%d,",aic7xxx_tag_info[p->instance].tag_commands[i]);
seq_printf(m, "%d}\n",aic7xxx_tag_info[p->instance].tag_commands[i]);
seq_printf(m, "\n");
seq_printf(m, "Statistics:\n\n");
list_for_each_entry(aic_dev, &p->aic_devs, list)
{
sdptr = aic_dev->SDptr;
tindex = sdptr->channel << 3 | sdptr->id;
seq_printf(m, "(scsi%d:%d:%d:%d)\n",
p->host_no, sdptr->channel, sdptr->id, sdptr->lun);
seq_printf(m, " Device using %s/%s",
(aic_dev->cur.width == MSG_EXT_WDTR_BUS_16_BIT) ?
"Wide" : "Narrow",
(aic_dev->cur.offset != 0) ?
"Sync transfers at " : "Async transfers.\n" );
if (aic_dev->cur.offset != 0)
{
struct aic7xxx_syncrate *sync_rate;
unsigned char options = aic_dev->cur.options;
int period = aic_dev->cur.period;
int rate = (aic_dev->cur.width ==
MSG_EXT_WDTR_BUS_16_BIT) ? 1 : 0;
sync_rate = aic7xxx_find_syncrate(p, &period, 0, &options);
if (sync_rate != NULL)
{
seq_printf(m, "%s MByte/sec, offset %d\n",
sync_rate->rate[rate],
aic_dev->cur.offset );
}
else
{
seq_printf(m, "3.3 MByte/sec, offset %d\n",
aic_dev->cur.offset );
}
}
seq_printf(m, " Transinfo settings: ");
seq_printf(m, "current(%d/%d/%d/%d), ",
aic_dev->cur.period,
aic_dev->cur.offset,
aic_dev->cur.width,
aic_dev->cur.options);
seq_printf(m, "goal(%d/%d/%d/%d), ",
aic_dev->goal.period,
aic_dev->goal.offset,
aic_dev->goal.width,
aic_dev->goal.options);
seq_printf(m, "user(%d/%d/%d/%d)\n",
p->user[tindex].period,
p->user[tindex].offset,
p->user[tindex].width,
p->user[tindex].options);
if(sdptr->simple_tags)
{
seq_printf(m, " Tagged Command Queueing Enabled, Ordered Tags %s, Depth %d/%d\n", sdptr->ordered_tags ? "Enabled" : "Disabled", sdptr->queue_depth, aic_dev->max_q_depth);
}
if(aic_dev->barrier_total)
seq_printf(m, " Total transfers %ld:\n (%ld/%ld/%ld/%ld reads/writes/REQ_BARRIER/Ordered Tags)\n",
aic_dev->r_total+aic_dev->w_total, aic_dev->r_total, aic_dev->w_total,
aic_dev->barrier_total, aic_dev->ordered_total);
else
seq_printf(m, " Total transfers %ld:\n (%ld/%ld reads/writes)\n",
aic_dev->r_total+aic_dev->w_total, aic_dev->r_total, aic_dev->w_total);
seq_printf(m, "%s\n", HDRB);
seq_printf(m, " Reads:");
for (i = 0; i < ARRAY_SIZE(aic_dev->r_bins); i++)
{
seq_printf(m, " %10ld", aic_dev->r_bins[i]);
}
seq_printf(m, "\n");
seq_printf(m, " Writes:");
for (i = 0; i < ARRAY_SIZE(aic_dev->w_bins); i++)
{
seq_printf(m, " %10ld", aic_dev->w_bins[i]);
}
seq_printf(m, "\n");
seq_printf(m, "\n\n");
}
return 0;
}
/*
* Overrides for Emacs so that we follow Linus's tabbing style.
* Emacs will notice this stuff at the end of the file and automatically
* adjust the settings for this buffer only. This must remain at the end
* of the file.
* ---------------------------------------------------------------------------
* Local variables:
* c-indent-level: 2
* c-brace-imaginary-offset: 0
* c-brace-offset: -2
* c-argdecl-indent: 2
* c-label-offset: -2
* c-continued-statement-offset: 2
* c-continued-brace-offset: 0
* indent-tabs-mode: nil
* tab-width: 8
* End:
*/

View file

@@ -1,629 +0,0 @@
/*
* DO NOT EDIT - This file is automatically generated.
*/
#define SCSISEQ 0x00
#define TEMODE 0x80
#define ENSELO 0x40
#define ENSELI 0x20
#define ENRSELI 0x10
#define ENAUTOATNO 0x08
#define ENAUTOATNI 0x04
#define ENAUTOATNP 0x02
#define SCSIRSTO 0x01
#define SXFRCTL0 0x01
#define DFON 0x80
#define DFPEXP 0x40
#define FAST20 0x20
#define CLRSTCNT 0x10
#define SPIOEN 0x08
#define SCAMEN 0x04
#define CLRCHN 0x02
#define SXFRCTL1 0x02
#define BITBUCKET 0x80
#define SWRAPEN 0x40
#define ENSPCHK 0x20
#define STIMESEL 0x18
#define ENSTIMER 0x04
#define ACTNEGEN 0x02
#define STPWEN 0x01
#define SCSISIGO 0x03
#define CDO 0x80
#define IOO 0x40
#define MSGO 0x20
#define ATNO 0x10
#define SELO 0x08
#define BSYO 0x04
#define REQO 0x02
#define ACKO 0x01
#define SCSISIGI 0x03
#define ATNI 0x10
#define SELI 0x08
#define BSYI 0x04
#define REQI 0x02
#define ACKI 0x01
#define SCSIRATE 0x04
#define WIDEXFER 0x80
#define SXFR_ULTRA2 0x7f
#define SXFR 0x70
#define SOFS 0x0f
#define SCSIID 0x05
#define SCSIOFFSET 0x05
#define SOFS_ULTRA2 0x7f
#define SCSIDATL 0x06
#define SCSIDATH 0x07
#define STCNT 0x08
#define OPTIONMODE 0x08
#define AUTORATEEN 0x80
#define AUTOACKEN 0x40
#define ATNMGMNTEN 0x20
#define BUSFREEREV 0x10
#define EXPPHASEDIS 0x08
#define SCSIDATL_IMGEN 0x04
#define AUTO_MSGOUT_DE 0x02
#define DIS_MSGIN_DUALEDGE 0x01
#define CLRSINT0 0x0b
#define CLRSELDO 0x40
#define CLRSELDI 0x20
#define CLRSELINGO 0x10
#define CLRSWRAP 0x08
#define CLRSPIORDY 0x02
#define SSTAT0 0x0b
#define TARGET 0x80
#define SELDO 0x40
#define SELDI 0x20
#define SELINGO 0x10
#define IOERR 0x08
#define SWRAP 0x08
#define SDONE 0x04
#define SPIORDY 0x02
#define DMADONE 0x01
#define CLRSINT1 0x0c
#define CLRSELTIMEO 0x80
#define CLRATNO 0x40
#define CLRSCSIRSTI 0x20
#define CLRBUSFREE 0x08
#define CLRSCSIPERR 0x04
#define CLRPHASECHG 0x02
#define CLRREQINIT 0x01
#define SSTAT1 0x0c
#define SELTO 0x80
#define ATNTARG 0x40
#define SCSIRSTI 0x20
#define PHASEMIS 0x10
#define BUSFREE 0x08
#define SCSIPERR 0x04
#define PHASECHG 0x02
#define REQINIT 0x01
#define SSTAT2 0x0d
#define OVERRUN 0x80
#define SHVALID 0x40
#define WIDE_RES 0x20
#define SFCNT 0x1f
#define EXP_ACTIVE 0x10
#define CRCVALERR 0x08
#define CRCENDERR 0x04
#define CRCREQERR 0x02
#define DUAL_EDGE_ERROR 0x01
#define SSTAT3 0x0e
#define SCSICNT 0xf0
#define OFFCNT 0x0f
#define SCSIID_ULTRA2 0x0f
#define OID 0x0f
#define SIMODE0 0x10
#define ENSELDO 0x40
#define ENSELDI 0x20
#define ENSELINGO 0x10
#define ENIOERR 0x08
#define ENSWRAP 0x08
#define ENSDONE 0x04
#define ENSPIORDY 0x02
#define ENDMADONE 0x01
#define SIMODE1 0x11
#define ENSELTIMO 0x80
#define ENATNTARG 0x40
#define ENSCSIRST 0x20
#define ENPHASEMIS 0x10
#define ENBUSFREE 0x08
#define ENSCSIPERR 0x04
#define ENPHASECHG 0x02
#define ENREQINIT 0x01
#define SCSIBUSL 0x12
#define SCSIBUSH 0x13
#define SHADDR 0x14
#define SELTIMER 0x18
#define STAGE6 0x20
#define STAGE5 0x10
#define STAGE4 0x08
#define STAGE3 0x04
#define STAGE2 0x02
#define STAGE1 0x01
#define SELID 0x19
#define SELID_MASK 0xf0
#define ONEBIT 0x08
#define SPIOCAP 0x1b
#define SOFT1 0x80
#define SOFT0 0x40
#define SOFTCMDEN 0x20
#define HAS_BRDCTL 0x10
#define SEEPROM 0x08
#define EEPROM 0x04
#define ROM 0x02
#define SSPIOCPS 0x01
#define BRDCTL 0x1d
#define BRDDAT7 0x80
#define BRDDAT6 0x40
#define BRDDAT5 0x20
#define BRDDAT4 0x10
#define BRDSTB 0x10
#define BRDCS 0x08
#define BRDDAT3 0x08
#define BRDDAT2 0x04
#define BRDRW 0x04
#define BRDRW_ULTRA2 0x02
#define BRDCTL1 0x02
#define BRDSTB_ULTRA2 0x01
#define BRDCTL0 0x01
#define SEECTL 0x1e
#define EXTARBACK 0x80
#define EXTARBREQ 0x40
#define SEEMS 0x20
#define SEERDY 0x10
#define SEECS 0x08
#define SEECK 0x04
#define SEEDO 0x02
#define SEEDI 0x01
#define SBLKCTL 0x1f
#define DIAGLEDEN 0x80
#define DIAGLEDON 0x40
#define AUTOFLUSHDIS 0x20
#define ENAB40 0x08
#define ENAB20 0x04
#define SELWIDE 0x02
#define XCVR 0x01
#define SRAM_BASE 0x20
#define TARG_SCSIRATE 0x20
#define ULTRA_ENB 0x30
#define DISC_DSB 0x32
#define MSG_OUT 0x34
#define DMAPARAMS 0x35
#define PRELOADEN 0x80
#define WIDEODD 0x40
#define SCSIEN 0x20
#define SDMAENACK 0x10
#define SDMAEN 0x10
#define HDMAEN 0x08
#define HDMAENACK 0x08
#define DIRECTION 0x04
#define FIFOFLUSH 0x02
#define FIFORESET 0x01
#define SEQ_FLAGS 0x36
#define IDENTIFY_SEEN 0x80
#define SCBPTR_VALID 0x20
#define DPHASE 0x10
#define AMTARGET 0x08
#define WIDE_BUS 0x02
#define TWIN_BUS 0x01
#define SAVED_TCL 0x37
#define SG_COUNT 0x38
#define SG_NEXT 0x39
#define LASTPHASE 0x3d
#define P_MESGIN 0xe0
#define PHASE_MASK 0xe0
#define P_STATUS 0xc0
#define P_MESGOUT 0xa0
#define P_COMMAND 0x80
#define CDI 0x80
#define IOI 0x40
#define P_DATAIN 0x40
#define MSGI 0x20
#define P_BUSFREE 0x01
#define P_DATAOUT 0x00
#define WAITING_SCBH 0x3e
#define DISCONNECTED_SCBH 0x3f
#define FREE_SCBH 0x40
#define HSCB_ADDR 0x41
#define SCBID_ADDR 0x45
#define TMODE_CMDADDR 0x49
#define KERNEL_QINPOS 0x4d
#define QINPOS 0x4e
#define QOUTPOS 0x4f
#define TMODE_CMDADDR_NEXT 0x50
#define ARG_1 0x51
#define RETURN_1 0x51
#define SEND_MSG 0x80
#define SEND_SENSE 0x40
#define SEND_REJ 0x20
#define MSGOUT_PHASEMIS 0x10
#define ARG_2 0x52
#define RETURN_2 0x52
#define LAST_MSG 0x53
#define PREFETCH_CNT 0x54
#define SCSICONF 0x5a
#define TERM_ENB 0x80
#define RESET_SCSI 0x40
#define HWSCSIID 0x0f
#define HSCSIID 0x07
#define HOSTCONF 0x5d
#define HA_274_BIOSCTRL 0x5f
#define BIOSMODE 0x30
#define BIOSDISABLED 0x30
#define CHANNEL_B_PRIMARY 0x08
#define SEQCTL 0x60
#define PERRORDIS 0x80
#define PAUSEDIS 0x40
#define FAILDIS 0x20
#define FASTMODE 0x10
#define BRKADRINTEN 0x08
#define STEP 0x04
#define SEQRESET 0x02
#define LOADRAM 0x01
#define SEQRAM 0x61
#define SEQADDR0 0x62
#define SEQADDR1 0x63
#define SEQADDR1_MASK 0x01
#define ACCUM 0x64
#define SINDEX 0x65
#define DINDEX 0x66
#define ALLONES 0x69
#define ALLZEROS 0x6a
#define NONE 0x6a
#define FLAGS 0x6b
#define ZERO 0x02
#define CARRY 0x01
#define SINDIR 0x6c
#define DINDIR 0x6d
#define FUNCTION1 0x6e
#define STACK 0x6f
#define TARG_OFFSET 0x70
#define BCTL 0x84
#define ACE 0x08
#define ENABLE 0x01
#define DSCOMMAND0 0x84
#define INTSCBRAMSEL 0x08
#define RAMPS 0x04
#define USCBSIZE32 0x02
#define CIOPARCKEN 0x01
#define DSCOMMAND 0x84
#define CACHETHEN 0x80
#define DPARCKEN 0x40
#define MPARCKEN 0x20
#define EXTREQLCK 0x10
#define BUSTIME 0x85
#define BOFF 0xf0
#define BON 0x0f
#define BUSSPD 0x86
#define DFTHRSH 0xc0
#define STBOFF 0x38
#define STBON 0x07
#define DSPCISTATUS 0x86
#define DFTHRSH_100 0xc0
#define HCNTRL 0x87
#define POWRDN 0x40
#define SWINT 0x10
#define IRQMS 0x08
#define PAUSE 0x04
#define INTEN 0x02
#define CHIPRST 0x01
#define CHIPRSTACK 0x01
#define HADDR 0x88
#define HCNT 0x8c
#define SCBPTR 0x90
#define INTSTAT 0x91
#define SEQINT_MASK 0xf1
#define DATA_OVERRUN 0xe1
#define MSGIN_PHASEMIS 0xd1
#define TRACEPOINT2 0xc1
#define SEQ_SG_FIXUP 0xb1
#define AWAITING_MSG 0xa1
#define RESIDUAL 0x81
#define BAD_STATUS 0x71
#define REJECT_MSG 0x61
#define WIDE_RESIDUE 0x51
#define EXTENDED_MSG 0x41
#define NO_MATCH 0x31
#define NO_IDENT 0x21
#define SEND_REJECT 0x11
#define INT_PEND 0x0f
#define BRKADRINT 0x08
#define SCSIINT 0x04
#define CMDCMPLT 0x02
#define BAD_PHASE 0x01
#define SEQINT 0x01
#define CLRINT 0x92
#define CLRPARERR 0x10
#define CLRBRKADRINT 0x08
#define CLRSCSIINT 0x04
#define CLRCMDINT 0x02
#define CLRSEQINT 0x01
#define ERROR 0x92
#define CIOPARERR 0x80
#define PCIERRSTAT 0x40
#define MPARERR 0x20
#define DPARERR 0x10
#define SQPARERR 0x08
#define ILLOPCODE 0x04
#define DSCTMOUT 0x02
#define ILLSADDR 0x02
#define ILLHADDR 0x01
#define DFCNTRL 0x93
#define DFSTATUS 0x94
#define PRELOAD_AVAIL 0x80
#define DWORDEMP 0x20
#define MREQPEND 0x10
#define HDONE 0x08
#define DFTHRESH 0x04
#define FIFOFULL 0x02
#define FIFOEMP 0x01
#define DFDAT 0x99
#define SCBCNT 0x9a
#define SCBAUTO 0x80
#define SCBCNT_MASK 0x1f
#define QINFIFO 0x9b
#define QINCNT 0x9c
#define SCSIDATL_IMG 0x9c
#define QOUTFIFO 0x9d
#define CRCCONTROL1 0x9d
#define CRCONSEEN 0x80
#define CRCVALCHKEN 0x40
#define CRCENDCHKEN 0x20
#define CRCREQCHKEN 0x10
#define TARGCRCENDEN 0x08
#define TARGCRCCNTEN 0x04
#define SCSIPHASE 0x9e
#define SP_STATUS 0x20
#define SP_COMMAND 0x10
#define SP_MSG_IN 0x08
#define SP_MSG_OUT 0x04
#define SP_DATA_IN 0x02
#define SP_DATA_OUT 0x01
#define QOUTCNT 0x9e
#define SFUNCT 0x9f
#define ALT_MODE 0x80
#define SCB_CONTROL 0xa0
#define MK_MESSAGE 0x80
#define DISCENB 0x40
#define TAG_ENB 0x20
#define DISCONNECTED 0x04
#define SCB_TAG_TYPE 0x03
#define SCB_BASE 0xa0
#define SCB_TCL 0xa1
#define TID 0xf0
#define SELBUSB 0x08
#define LID 0x07
#define SCB_TARGET_STATUS 0xa2
#define SCB_SGCOUNT 0xa3
#define SCB_SGPTR 0xa4
#define SCB_RESID_SGCNT 0xa8
#define SCB_RESID_DCNT 0xa9
#define SCB_DATAPTR 0xac
#define SCB_DATACNT 0xb0
#define SCB_CMDPTR 0xb4
#define SCB_CMDLEN 0xb8
#define SCB_TAG 0xb9
#define SCB_NEXT 0xba
#define SCB_PREV 0xbb
#define SCB_BUSYTARGETS 0xbc
#define SEECTL_2840 0xc0
#define CS_2840 0x04
#define CK_2840 0x02
#define DO_2840 0x01
#define STATUS_2840 0xc1
#define EEPROM_TF 0x80
#define BIOS_SEL 0x60
#define ADSEL 0x1e
#define DI_2840 0x01
#define CCHADDR 0xe0
#define CCHCNT 0xe8
#define CCSGRAM 0xe9
#define CCSGADDR 0xea
#define CCSGCTL 0xeb
#define CCSGDONE 0x80
#define CCSGEN 0x08
#define FLAG 0x02
#define CCSGRESET 0x01
#define CCSCBRAM 0xec
#define CCSCBADDR 0xed
#define CCSCBCTL 0xee
#define CCSCBDONE 0x80
#define ARRDONE 0x40
#define CCARREN 0x10
#define CCSCBEN 0x08
#define CCSCBDIR 0x04
#define CCSCBRESET 0x01
#define CCSCBCNT 0xef
#define CCSCBPTR 0xf1
#define HNSCB_QOFF 0xf4
#define HESCB_QOFF 0xf5
#define SNSCB_QOFF 0xf6
#define SESCB_QOFF 0xf7
#define SDSCB_QOFF 0xf8
#define QOFF_CTLSTA 0xfa
#define ESTABLISH_SCB_AVAIL 0x80
#define SCB_AVAIL 0x40
#define SNSCB_ROLLOVER 0x20
#define SDSCB_ROLLOVER 0x10
#define SESCB_ROLLOVER 0x08
#define SCB_QSIZE 0x07
#define SCB_QSIZE_256 0x06
#define DFF_THRSH 0xfb
#define WR_DFTHRSH 0x70
#define WR_DFTHRSH_MAX 0x70
#define WR_DFTHRSH_90 0x60
#define WR_DFTHRSH_85 0x50
#define WR_DFTHRSH_75 0x40
#define WR_DFTHRSH_63 0x30
#define WR_DFTHRSH_50 0x20
#define WR_DFTHRSH_25 0x10
#define RD_DFTHRSH_MAX 0x07
#define RD_DFTHRSH 0x07
#define RD_DFTHRSH_90 0x06
#define RD_DFTHRSH_85 0x05
#define RD_DFTHRSH_75 0x04
#define RD_DFTHRSH_63 0x03
#define RD_DFTHRSH_50 0x02
#define RD_DFTHRSH_25 0x01
#define WR_DFTHRSH_MIN 0x00
#define RD_DFTHRSH_MIN 0x00
#define SG_CACHEPTR 0xfc
#define SG_USER_DATA 0xfc
#define LAST_SEG 0x02
#define LAST_SEG_DONE 0x01
#define CMD_GROUP2_BYTE_DELTA 0xfa
#define MAX_OFFSET_8BIT 0x0f
#define BUS_16_BIT 0x01
#define QINFIFO_OFFSET 0x02
#define CMD_GROUP5_BYTE_DELTA 0x0b
#define CMD_GROUP_CODE_SHIFT 0x05
#define MAX_OFFSET_ULTRA2 0x7f
#define MAX_OFFSET_16BIT 0x08
#define BUS_8_BIT 0x00
#define QOUTFIFO_OFFSET 0x01
#define UNTAGGEDSCB_OFFSET 0x00
#define CCSGRAM_MAXSEGS 0x10
#define SCB_LIST_NULL 0xff
#define SG_SIZEOF 0x08
#define CMD_GROUP4_BYTE_DELTA 0x04
#define CMD_GROUP0_BYTE_DELTA 0xfc
#define HOST_MSG 0xff
#define BUS_32_BIT 0x02
#define CCSGADDR_MAX 0x80
/* Downloaded Constant Definitions */
#define TMODE_NUMCMDS 0x00

View file

@@ -1,817 +0,0 @@
/*
* DO NOT EDIT - This file is automatically generated.
*/
static unsigned char seqprog[] = {
0xff, 0x6a, 0x06, 0x08,
0x7f, 0x02, 0x04, 0x08,
0x12, 0x6a, 0x00, 0x00,
0xff, 0x6a, 0xd6, 0x09,
0xff, 0x6a, 0xdc, 0x09,
0x00, 0x65, 0xca, 0x58,
0xf7, 0x01, 0x02, 0x08,
0xff, 0x4e, 0xc8, 0x08,
0xbf, 0x60, 0xc0, 0x08,
0x60, 0x0b, 0x86, 0x68,
0x40, 0x00, 0x0c, 0x68,
0x08, 0x1f, 0x3e, 0x10,
0x60, 0x0b, 0x86, 0x68,
0x40, 0x00, 0x0c, 0x68,
0x08, 0x1f, 0x3e, 0x10,
0xff, 0x3e, 0x48, 0x60,
0x40, 0xfa, 0x10, 0x78,
0xff, 0xf6, 0xd4, 0x08,
0x01, 0x4e, 0x9c, 0x18,
0x40, 0x60, 0xc0, 0x00,
0x00, 0x4d, 0x10, 0x70,
0x01, 0x4e, 0x9c, 0x18,
0xbf, 0x60, 0xc0, 0x08,
0x00, 0x6a, 0x86, 0x5c,
0xff, 0x4e, 0xc8, 0x18,
0x02, 0x6a, 0x70, 0x5b,
0xff, 0x52, 0x20, 0x09,
0x0d, 0x6a, 0x6a, 0x00,
0x00, 0x52, 0xe6, 0x5b,
0x03, 0xb0, 0x52, 0x31,
0xff, 0xb0, 0x52, 0x09,
0xff, 0xb1, 0x54, 0x09,
0xff, 0xb2, 0x56, 0x09,
0xff, 0xa3, 0x50, 0x09,
0xff, 0x3e, 0x74, 0x09,
0xff, 0x90, 0x7c, 0x08,
0xff, 0x3e, 0x20, 0x09,
0x00, 0x65, 0x4e, 0x58,
0x00, 0x65, 0x0c, 0x40,
0xf7, 0x1f, 0xca, 0x08,
0x08, 0xa1, 0xc8, 0x08,
0x00, 0x65, 0xca, 0x00,
0xff, 0x65, 0x3e, 0x08,
0xf0, 0xa1, 0xc8, 0x08,
0x0f, 0x0f, 0x1e, 0x08,
0x00, 0x0f, 0x1e, 0x00,
0xf0, 0xa1, 0xc8, 0x08,
0x0f, 0x05, 0x0a, 0x08,
0x00, 0x05, 0x0a, 0x00,
0xff, 0x6a, 0x0c, 0x08,
0x5a, 0x6a, 0x00, 0x04,
0x12, 0x65, 0x02, 0x00,
0x31, 0x6a, 0xca, 0x00,
0x80, 0x37, 0x6e, 0x68,
0xff, 0x65, 0xca, 0x18,
0xff, 0x37, 0xdc, 0x08,
0xff, 0x6e, 0xc8, 0x08,
0x00, 0x6c, 0x76, 0x78,
0x20, 0x01, 0x02, 0x00,
0x4c, 0x37, 0xc8, 0x28,
0x08, 0x1f, 0x7e, 0x78,
0x08, 0x37, 0x6e, 0x00,
0x08, 0x64, 0xc8, 0x00,
0x70, 0x64, 0xca, 0x18,
0xff, 0x6c, 0x0a, 0x08,
0x20, 0x64, 0xca, 0x18,
0xff, 0x6c, 0x08, 0x0c,
0x40, 0x0b, 0x96, 0x68,
0x20, 0x6a, 0x16, 0x00,
0xf0, 0x19, 0x6e, 0x08,
0x08, 0x6a, 0x18, 0x00,
0x08, 0x11, 0x22, 0x00,
0x08, 0x6a, 0x66, 0x58,
0x08, 0x6a, 0x68, 0x00,
0x00, 0x65, 0xaa, 0x40,
0x12, 0x6a, 0x00, 0x00,
0x40, 0x6a, 0x16, 0x00,
0xff, 0x3e, 0x20, 0x09,
0xff, 0xba, 0x7c, 0x08,
0xff, 0xa1, 0x6e, 0x08,
0x08, 0x6a, 0x18, 0x00,
0x08, 0x11, 0x22, 0x00,
0x08, 0x6a, 0x66, 0x58,
0x80, 0x6a, 0x68, 0x00,
0x80, 0x36, 0x6c, 0x00,
0x00, 0x65, 0xba, 0x5b,
0xff, 0x3d, 0xc8, 0x08,
0xbf, 0x64, 0xe2, 0x78,
0x80, 0x64, 0xc8, 0x71,
0xa0, 0x64, 0xf8, 0x71,
0xc0, 0x64, 0xf0, 0x71,
0xe0, 0x64, 0x38, 0x72,
0x01, 0x6a, 0x22, 0x01,
0x00, 0x65, 0xaa, 0x40,
0xf7, 0x11, 0x22, 0x08,
0x00, 0x65, 0xca, 0x58,
0xff, 0x06, 0xd4, 0x08,
0xf7, 0x01, 0x02, 0x08,
0x09, 0x0c, 0xc4, 0x78,
0x08, 0x0c, 0x0c, 0x68,
0x01, 0x6a, 0x22, 0x01,
0xff, 0x6a, 0x26, 0x09,
0x02, 0x6a, 0x08, 0x30,
0xff, 0x6a, 0x08, 0x08,
0xdf, 0x01, 0x02, 0x08,
0x01, 0x6a, 0x7a, 0x00,
0xff, 0x6a, 0x6c, 0x0c,
0x04, 0x14, 0x10, 0x31,
0x03, 0xa9, 0x18, 0x31,
0x03, 0xa9, 0x10, 0x30,
0x08, 0x6a, 0xcc, 0x00,
0xa9, 0x6a, 0xd0, 0x5b,
0x00, 0x65, 0x02, 0x41,
0xa8, 0x6a, 0x6a, 0x00,
0x79, 0x6a, 0x6a, 0x00,
0x40, 0x3d, 0xea, 0x68,
0x04, 0x35, 0x6a, 0x00,
0x00, 0x65, 0x2a, 0x5b,
0x80, 0x6a, 0xd4, 0x01,
0x10, 0x36, 0xd6, 0x68,
0x10, 0x36, 0x6c, 0x00,
0x07, 0xac, 0x10, 0x31,
0x05, 0xa3, 0x70, 0x30,
0x03, 0x8c, 0x10, 0x30,
0x88, 0x6a, 0xcc, 0x00,
0xac, 0x6a, 0xc8, 0x5b,
0x00, 0x65, 0xc2, 0x5b,
0x38, 0x6a, 0xcc, 0x00,
0xa3, 0x6a, 0xcc, 0x5b,
0xff, 0x38, 0x12, 0x69,
0x80, 0x02, 0x04, 0x00,
0xe7, 0x35, 0x6a, 0x08,
0x03, 0x69, 0x18, 0x31,
0x03, 0x69, 0x10, 0x30,
0xff, 0x6a, 0x10, 0x00,
0xff, 0x6a, 0x12, 0x00,
0xff, 0x6a, 0x14, 0x00,
0x22, 0x38, 0xc8, 0x28,
0x01, 0x38, 0x1c, 0x61,
0x02, 0x64, 0xc8, 0x00,
0x01, 0x38, 0x1c, 0x61,
0xbf, 0x35, 0x6a, 0x08,
0xff, 0x64, 0xf8, 0x09,
0xff, 0x35, 0x26, 0x09,
0x80, 0x02, 0xa4, 0x69,
0x10, 0x0c, 0x7a, 0x69,
0x80, 0x94, 0x22, 0x79,
0x00, 0x35, 0x0a, 0x5b,
0x80, 0x02, 0xa4, 0x69,
0xff, 0x65, 0x94, 0x79,
0x01, 0x38, 0x70, 0x71,
0xff, 0x38, 0x70, 0x18,
0xff, 0x38, 0x94, 0x79,
0x80, 0xea, 0x4a, 0x61,
0xef, 0x38, 0xc8, 0x18,
0x80, 0x6a, 0xc8, 0x00,
0x00, 0x65, 0x3c, 0x49,
0x33, 0x38, 0xc8, 0x28,
0xff, 0x64, 0xd0, 0x09,
0x04, 0x39, 0xc0, 0x31,
0x09, 0x6a, 0xd6, 0x01,
0x80, 0xeb, 0x42, 0x79,
0xf7, 0xeb, 0xd6, 0x09,
0x08, 0xeb, 0x46, 0x69,
0x01, 0x6a, 0xd6, 0x01,
0x08, 0xe9, 0x10, 0x31,
0x03, 0x8c, 0x10, 0x30,
0xff, 0x38, 0x70, 0x18,
0x88, 0x6a, 0xcc, 0x00,
0x39, 0x6a, 0xce, 0x5b,
0x08, 0x6a, 0x18, 0x01,
0xff, 0x6a, 0x1a, 0x09,
0xff, 0x6a, 0x1c, 0x09,
0x0d, 0x93, 0x26, 0x01,
0x00, 0x65, 0x78, 0x5c,
0x88, 0x6a, 0xcc, 0x00,
0x00, 0x65, 0x6a, 0x5c,
0x00, 0x65, 0xc2, 0x5b,
0xff, 0x6a, 0xc8, 0x08,
0x08, 0x39, 0x72, 0x18,
0x00, 0x3a, 0x74, 0x20,
0x00, 0x65, 0x02, 0x41,
0x01, 0x0c, 0x6c, 0x79,
0x10, 0x0c, 0x02, 0x79,
0x10, 0x0c, 0x7a, 0x69,
0x01, 0xfc, 0x70, 0x79,
0xff, 0x6a, 0x70, 0x08,
0x01, 0x0c, 0x76, 0x79,
0x10, 0x0c, 0x02, 0x79,
0x00, 0x65, 0xae, 0x59,
0x01, 0xfc, 0x94, 0x69,
0x40, 0x0d, 0x84, 0x69,
0xb1, 0x6a, 0x22, 0x01,
0x00, 0x65, 0x94, 0x41,
0x2e, 0xfc, 0xa2, 0x28,
0x3f, 0x38, 0xc8, 0x08,
0x00, 0x51, 0x94, 0x71,
0xff, 0x6a, 0xc8, 0x08,
0xf8, 0x39, 0x72, 0x18,
0xff, 0x3a, 0x74, 0x20,
0x01, 0x38, 0x70, 0x18,
0x00, 0x65, 0x86, 0x41,
0x03, 0x08, 0x52, 0x31,
0xff, 0x38, 0x50, 0x09,
0x12, 0x01, 0x02, 0x00,
0xff, 0x08, 0x52, 0x09,
0xff, 0x09, 0x54, 0x09,
0xff, 0x0a, 0x56, 0x09,
0xff, 0x38, 0x50, 0x09,
0x00, 0x65, 0xaa, 0x40,
0x10, 0x0c, 0xa4, 0x79,
0x00, 0x65, 0xae, 0x59,
0x7f, 0x02, 0x04, 0x08,
0xe1, 0x6a, 0x22, 0x01,
0x00, 0x65, 0xaa, 0x40,
0x04, 0x93, 0xc2, 0x69,
0xdf, 0x93, 0x26, 0x09,
0x20, 0x93, 0xb2, 0x69,
0x02, 0x93, 0x26, 0x01,
0x01, 0x94, 0xb6, 0x79,
0x01, 0x94, 0xb6, 0x79,
0x01, 0x94, 0xb6, 0x79,
0x01, 0x94, 0xb6, 0x79,
0x01, 0x94, 0xb6, 0x79,
0x10, 0x94, 0xc0, 0x69,
0xd7, 0x93, 0x26, 0x09,
0x28, 0x93, 0xc4, 0x69,
0xff, 0x6a, 0xd4, 0x0c,
0x00, 0x65, 0x2a, 0x5b,
0x05, 0xb4, 0x10, 0x31,
0x02, 0x6a, 0x1a, 0x31,
0x03, 0x8c, 0x10, 0x30,
0x88, 0x6a, 0xcc, 0x00,
0xb4, 0x6a, 0xcc, 0x5b,
0xff, 0x6a, 0x1a, 0x09,
0xff, 0x6a, 0x1c, 0x09,
0x00, 0x65, 0xc2, 0x5b,
0x3d, 0x6a, 0x0a, 0x5b,
0xac, 0x6a, 0x26, 0x01,
0x04, 0x0b, 0xde, 0x69,
0x04, 0x0b, 0xe4, 0x69,
0x10, 0x0c, 0xe0, 0x79,
0x02, 0x03, 0xe8, 0x79,
0x11, 0x0c, 0xe4, 0x79,
0xd7, 0x93, 0x26, 0x09,
0x28, 0x93, 0xea, 0x69,
0x12, 0x01, 0x02, 0x00,
0x00, 0x65, 0xaa, 0x40,
0x00, 0x65, 0x2a, 0x5b,
0xff, 0x06, 0x44, 0x09,
0x00, 0x65, 0xaa, 0x40,
0x10, 0x3d, 0x06, 0x00,
0xff, 0x34, 0xca, 0x08,
0x80, 0x65, 0x1c, 0x62,
0x0f, 0xa1, 0xca, 0x08,
0x07, 0xa1, 0xca, 0x08,
0x40, 0xa0, 0xc8, 0x08,
0x00, 0x65, 0xca, 0x00,
0x80, 0x65, 0xca, 0x00,
0x80, 0xa0, 0x0c, 0x7a,
0xff, 0x65, 0x0c, 0x08,
0x00, 0x65, 0x1e, 0x42,
0x20, 0xa0, 0x24, 0x7a,
0xff, 0x65, 0x0c, 0x08,
0x00, 0x65, 0xba, 0x5b,
0xa0, 0x3d, 0x2c, 0x62,
0x23, 0xa0, 0x0c, 0x08,
0x00, 0x65, 0xba, 0x5b,
0xa0, 0x3d, 0x2c, 0x62,
0x00, 0xb9, 0x24, 0x42,
0xff, 0x65, 0x24, 0x62,
0xa1, 0x6a, 0x22, 0x01,
0xff, 0x6a, 0xd4, 0x08,
0x10, 0x51, 0x2c, 0x72,
0x40, 0x6a, 0x18, 0x00,
0xff, 0x65, 0x0c, 0x08,
0x00, 0x65, 0xba, 0x5b,
0xa0, 0x3d, 0xf6, 0x71,
0x40, 0x6a, 0x18, 0x00,
0xff, 0x34, 0xa6, 0x08,
0x80, 0x34, 0x34, 0x62,
0x7f, 0xa0, 0x40, 0x09,
0x08, 0x6a, 0x68, 0x00,
0x00, 0x65, 0xaa, 0x40,
0x64, 0x6a, 0x00, 0x5b,
0x80, 0x64, 0xaa, 0x6a,
0x04, 0x64, 0x8c, 0x72,
0x02, 0x64, 0x92, 0x72,
0x00, 0x6a, 0x54, 0x72,
0x03, 0x64, 0xa6, 0x72,
0x01, 0x64, 0x88, 0x72,
0x07, 0x64, 0xe8, 0x72,
0x08, 0x64, 0x50, 0x72,
0x23, 0x64, 0xec, 0x72,
0x11, 0x6a, 0x22, 0x01,
0x07, 0x6a, 0xf2, 0x5a,
0xff, 0x06, 0xd4, 0x08,
0x00, 0x65, 0xaa, 0x40,
0xff, 0xa8, 0x58, 0x6a,
0xff, 0xa2, 0x70, 0x7a,
0x01, 0x6a, 0x6a, 0x00,
0x00, 0xb9, 0xe6, 0x5b,
0xff, 0xa2, 0x70, 0x7a,
0x71, 0x6a, 0x22, 0x01,
0xff, 0x6a, 0xd4, 0x08,
0x40, 0x51, 0x70, 0x62,
0x0d, 0x6a, 0x6a, 0x00,
0x00, 0xb9, 0xe6, 0x5b,
0xff, 0x3e, 0x74, 0x09,
0xff, 0x90, 0x7c, 0x08,
0x00, 0x65, 0x4e, 0x58,
0x00, 0x65, 0xbc, 0x40,
0x20, 0xa0, 0x78, 0x6a,
0xff, 0x37, 0xc8, 0x08,
0x00, 0x6a, 0x90, 0x5b,
0xff, 0x6a, 0xa6, 0x5b,
0xff, 0xf8, 0xc8, 0x08,
0xff, 0x4f, 0xc8, 0x08,
0x01, 0x6a, 0x90, 0x5b,
0x00, 0xb9, 0xa6, 0x5b,
0x01, 0x4f, 0x9e, 0x18,
0x02, 0x6a, 0x22, 0x01,
0x00, 0x65, 0x80, 0x5c,
0x00, 0x65, 0xbc, 0x40,
0x41, 0x6a, 0x22, 0x01,
0x00, 0x65, 0xaa, 0x40,
0x04, 0xa0, 0x40, 0x01,
0x00, 0x65, 0x98, 0x5c,
0x00, 0x65, 0xbc, 0x40,
0x10, 0x36, 0x50, 0x7a,
0x05, 0x38, 0x46, 0x31,
0x04, 0x14, 0x58, 0x31,
0x03, 0xa9, 0x60, 0x31,
0xa3, 0x6a, 0xcc, 0x00,
0x38, 0x6a, 0xcc, 0x5b,
0xac, 0x6a, 0xcc, 0x00,
0x14, 0x6a, 0xce, 0x5b,
0xa9, 0x6a, 0xd0, 0x5b,
0x00, 0x65, 0x50, 0x42,
0xef, 0x36, 0x6c, 0x08,
0x00, 0x65, 0x50, 0x42,
0x0f, 0x64, 0xc8, 0x08,
0x07, 0x64, 0xc8, 0x08,
0x00, 0x37, 0x6e, 0x00,
0xff, 0x6a, 0xa4, 0x00,
0x00, 0x65, 0x60, 0x5b,
0xff, 0x51, 0xbc, 0x72,
0x20, 0x36, 0xc6, 0x7a,
0x00, 0x90, 0x4e, 0x5b,
0x00, 0x65, 0xc8, 0x42,
0xff, 0x06, 0xd4, 0x08,
0x00, 0x65, 0xba, 0x5b,
0xe0, 0x3d, 0xe2, 0x62,
0x20, 0x12, 0xe2, 0x62,
0x51, 0x6a, 0xf6, 0x5a,
0x00, 0x65, 0x48, 0x5b,
0xff, 0x37, 0xc8, 0x08,
0x00, 0xa1, 0xda, 0x62,
0x04, 0xa0, 0xda, 0x7a,
0xfb, 0xa0, 0x40, 0x09,
0x80, 0x36, 0x6c, 0x00,
0x80, 0xa0, 0x50, 0x7a,
0x7f, 0xa0, 0x40, 0x09,
0xff, 0x6a, 0xf2, 0x5a,
0x00, 0x65, 0x50, 0x42,
0x04, 0xa0, 0xe0, 0x7a,
0x00, 0x65, 0x98, 0x5c,
0x00, 0x65, 0xe2, 0x42,
0x00, 0x65, 0x80, 0x5c,
0x31, 0x6a, 0x22, 0x01,
0x0c, 0x6a, 0xf2, 0x5a,
0x00, 0x65, 0x50, 0x42,
0x61, 0x6a, 0x22, 0x01,
0x00, 0x65, 0x50, 0x42,
0x51, 0x6a, 0xf6, 0x5a,
0x51, 0x6a, 0x22, 0x01,
0x00, 0x65, 0x50, 0x42,
0x10, 0x3d, 0x06, 0x00,
0xff, 0x65, 0x68, 0x0c,
0xff, 0x06, 0xd4, 0x08,
0x01, 0x0c, 0xf8, 0x7a,
0x04, 0x0c, 0xfa, 0x6a,
0xe0, 0x03, 0x7a, 0x08,
0xe0, 0x3d, 0x06, 0x63,
0xff, 0x65, 0xcc, 0x08,
0xff, 0x12, 0xda, 0x0c,
0xff, 0x06, 0xd4, 0x0c,
0xd1, 0x6a, 0x22, 0x01,
0x00, 0x65, 0xaa, 0x40,
0xff, 0x65, 0x26, 0x09,
0x01, 0x0b, 0x1a, 0x6b,
0x10, 0x0c, 0x0c, 0x7b,
0x04, 0x0b, 0x14, 0x6b,
0xff, 0x6a, 0xca, 0x08,
0x04, 0x93, 0x18, 0x6b,
0x01, 0x94, 0x16, 0x7b,
0x10, 0x94, 0x18, 0x6b,
0x80, 0x3d, 0x1e, 0x73,
0x0f, 0x04, 0x22, 0x6b,
0x02, 0x03, 0x22, 0x7b,
0x11, 0x0c, 0x1e, 0x7b,
0xc7, 0x93, 0x26, 0x09,
0xff, 0x99, 0xd4, 0x08,
0x38, 0x93, 0x24, 0x6b,
0xff, 0x6a, 0xd4, 0x0c,
0x80, 0x36, 0x28, 0x6b,
0x21, 0x6a, 0x22, 0x05,
0xff, 0x65, 0x20, 0x09,
0xff, 0x51, 0x36, 0x63,
0xff, 0x37, 0xc8, 0x08,
0xa1, 0x6a, 0x42, 0x43,
0xff, 0x51, 0xc8, 0x08,
0xb9, 0x6a, 0x42, 0x43,
0xff, 0x90, 0xa4, 0x08,
0xff, 0xba, 0x46, 0x73,
0xff, 0xba, 0x20, 0x09,
0xff, 0x65, 0xca, 0x18,
0x00, 0x6c, 0x3a, 0x63,
0xff, 0x90, 0xca, 0x0c,
0xff, 0x6a, 0xca, 0x04,
0x20, 0x36, 0x5a, 0x7b,
0x00, 0x90, 0x2e, 0x5b,
0xff, 0x65, 0x5a, 0x73,
0xff, 0x52, 0x58, 0x73,
0xff, 0xba, 0xcc, 0x08,
0xff, 0x52, 0x20, 0x09,
0xff, 0x66, 0x74, 0x09,
0xff, 0x65, 0x20, 0x0d,
0xff, 0xba, 0x7e, 0x0c,
0x00, 0x6a, 0x86, 0x5c,
0x0d, 0x6a, 0x6a, 0x00,
0x00, 0x51, 0xe6, 0x43,
0xff, 0x3f, 0xb4, 0x73,
0xff, 0x6a, 0xa2, 0x00,
0x00, 0x3f, 0x2e, 0x5b,
0xff, 0x65, 0xb4, 0x73,
0x20, 0x36, 0x6c, 0x00,
0x20, 0xa0, 0x6e, 0x6b,
0xff, 0xb9, 0xa2, 0x0c,
0xff, 0x6a, 0xa2, 0x04,
0xff, 0x65, 0xa4, 0x08,
0xe0, 0x6a, 0xcc, 0x00,
0x45, 0x6a, 0xda, 0x5b,
0x01, 0x6a, 0xd0, 0x01,
0x09, 0x6a, 0xd6, 0x01,
0x80, 0xeb, 0x7a, 0x7b,
0x01, 0x6a, 0xd6, 0x01,
0x01, 0xe9, 0xa4, 0x34,
0x88, 0x6a, 0xcc, 0x00,
0x45, 0x6a, 0xda, 0x5b,
0x01, 0x6a, 0x18, 0x01,
0xff, 0x6a, 0x1a, 0x09,
0xff, 0x6a, 0x1c, 0x09,
0x0d, 0x6a, 0x26, 0x01,
0x00, 0x65, 0x78, 0x5c,
0xff, 0x99, 0xa4, 0x0c,
0xff, 0x65, 0xa4, 0x08,
0xe0, 0x6a, 0xcc, 0x00,
0x45, 0x6a, 0xda, 0x5b,
0x01, 0x6a, 0xd0, 0x01,
0x01, 0x6a, 0xdc, 0x05,
0x88, 0x6a, 0xcc, 0x00,
0x45, 0x6a, 0xda, 0x5b,
0x01, 0x6a, 0x18, 0x01,
0xff, 0x6a, 0x1a, 0x09,
0xff, 0x6a, 0x1c, 0x09,
0x01, 0x6a, 0x26, 0x05,
0x01, 0x65, 0xd8, 0x31,
0x09, 0xee, 0xdc, 0x01,
0x80, 0xee, 0xaa, 0x7b,
0xff, 0x6a, 0xdc, 0x0d,
0xff, 0x65, 0x32, 0x09,
0x0a, 0x93, 0x26, 0x01,
0x00, 0x65, 0x78, 0x44,
0xff, 0x37, 0xc8, 0x08,
0x00, 0x6a, 0x70, 0x5b,
0xff, 0x52, 0xa2, 0x0c,
0x01, 0x0c, 0xba, 0x7b,
0x04, 0x0c, 0xba, 0x6b,
0xe0, 0x03, 0x06, 0x08,
0xe0, 0x03, 0x7a, 0x0c,
0xff, 0x8c, 0x10, 0x08,
0xff, 0x8d, 0x12, 0x08,
0xff, 0x8e, 0x14, 0x0c,
0xff, 0x6c, 0xda, 0x08,
0xff, 0x6c, 0xda, 0x08,
0xff, 0x6c, 0xda, 0x08,
0xff, 0x6c, 0xda, 0x08,
0xff, 0x6c, 0xda, 0x08,
0xff, 0x6c, 0xda, 0x08,
0xff, 0x6c, 0xda, 0x0c,
0x3d, 0x64, 0xa4, 0x28,
0x55, 0x64, 0xc8, 0x28,
0x00, 0x6c, 0xda, 0x18,
0xff, 0x52, 0xc8, 0x08,
0x00, 0x6c, 0xda, 0x20,
0xff, 0x6a, 0xc8, 0x08,
0x00, 0x6c, 0xda, 0x20,
0x00, 0x6c, 0xda, 0x24,
0xff, 0x65, 0xc8, 0x08,
0xe0, 0x6a, 0xcc, 0x00,
0x41, 0x6a, 0xd6, 0x5b,
0xff, 0x90, 0xe2, 0x09,
0x20, 0x6a, 0xd0, 0x01,
0x04, 0x35, 0xf8, 0x7b,
0x1d, 0x6a, 0xdc, 0x01,
0xdc, 0xee, 0xf4, 0x63,
0x00, 0x65, 0x0e, 0x44,
0x01, 0x6a, 0xdc, 0x01,
0x20, 0xa0, 0xd8, 0x31,
0x09, 0xee, 0xdc, 0x01,
0x80, 0xee, 0xfe, 0x7b,
0x11, 0x6a, 0xdc, 0x01,
0x50, 0xee, 0x02, 0x64,
0x20, 0x6a, 0xd0, 0x01,
0x09, 0x6a, 0xdc, 0x01,
0x88, 0xee, 0x08, 0x64,
0x19, 0x6a, 0xdc, 0x01,
0xd8, 0xee, 0x0c, 0x64,
0xff, 0x6a, 0xdc, 0x09,
0x18, 0xee, 0x10, 0x6c,
0xff, 0x6a, 0xd4, 0x0c,
0x88, 0x6a, 0xcc, 0x00,
0x41, 0x6a, 0xd6, 0x5b,
0x20, 0x6a, 0x18, 0x01,
0xff, 0x6a, 0x1a, 0x09,
0xff, 0x6a, 0x1c, 0x09,
0xff, 0x35, 0x26, 0x09,
0x04, 0x35, 0x3c, 0x6c,
0xa0, 0x6a, 0xca, 0x00,
0x20, 0x65, 0xc8, 0x18,
0xff, 0x6c, 0x32, 0x09,
0xff, 0x6c, 0x32, 0x09,
0xff, 0x6c, 0x32, 0x09,
0xff, 0x6c, 0x32, 0x09,
0xff, 0x6c, 0x32, 0x09,
0xff, 0x6c, 0x32, 0x09,
0xff, 0x6c, 0x32, 0x09,
0xff, 0x6c, 0x32, 0x09,
0x00, 0x65, 0x26, 0x64,
0x0a, 0x93, 0x26, 0x01,
0x00, 0x65, 0x78, 0x44,
0xa0, 0x6a, 0xcc, 0x00,
0xe8, 0x6a, 0xc8, 0x00,
0x01, 0x94, 0x40, 0x6c,
0x10, 0x94, 0x42, 0x6c,
0x08, 0x94, 0x54, 0x6c,
0x08, 0x94, 0x54, 0x6c,
0x08, 0x94, 0x54, 0x6c,
0x00, 0x65, 0x68, 0x5c,
0x08, 0x64, 0xc8, 0x18,
0x00, 0x8c, 0xca, 0x18,
0x00, 0x65, 0x4a, 0x4c,
0x00, 0x65, 0x40, 0x44,
0xf7, 0x93, 0x26, 0x09,
0x08, 0x93, 0x56, 0x6c,
0x00, 0x65, 0x68, 0x5c,
0x08, 0x64, 0xc8, 0x18,
0x08, 0x64, 0x58, 0x64,
0xff, 0x6a, 0xd4, 0x0c,
0x00, 0x65, 0x78, 0x5c,
0x00, 0x65, 0x68, 0x5c,
0x00, 0x65, 0x68, 0x5c,
0x00, 0x65, 0x68, 0x5c,
0xff, 0x99, 0xda, 0x08,
0xff, 0x99, 0xda, 0x08,
0xff, 0x99, 0xda, 0x08,
0xff, 0x99, 0xda, 0x08,
0xff, 0x99, 0xda, 0x08,
0xff, 0x99, 0xda, 0x08,
0xff, 0x99, 0xda, 0x08,
0xff, 0x99, 0xda, 0x0c,
0x08, 0x94, 0x78, 0x7c,
0xf7, 0x93, 0x26, 0x09,
0x08, 0x93, 0x7c, 0x6c,
0xff, 0x6a, 0xd4, 0x0c,
0xff, 0x40, 0x74, 0x09,
0xff, 0x90, 0x80, 0x08,
0xff, 0x6a, 0x72, 0x05,
0xff, 0x40, 0x94, 0x64,
0xff, 0x3f, 0x8c, 0x64,
0xff, 0x6a, 0xca, 0x04,
0xff, 0x3f, 0x20, 0x09,
0x01, 0x6a, 0x6a, 0x00,
0x00, 0xb9, 0xe6, 0x5b,
0xff, 0xba, 0x7e, 0x0c,
0xff, 0x40, 0x20, 0x09,
0xff, 0xba, 0x80, 0x0c,
0xff, 0x3f, 0x74, 0x09,
0xff, 0x90, 0x7e, 0x0c,
};
static int aic7xxx_patch15_func(struct aic7xxx_host *p);
static int
aic7xxx_patch15_func(struct aic7xxx_host *p)
{
return ((p->bugs & AHC_BUG_SCBCHAN_UPLOAD) != 0);
}
static int aic7xxx_patch14_func(struct aic7xxx_host *p);
static int
aic7xxx_patch14_func(struct aic7xxx_host *p)
{
return ((p->bugs & AHC_BUG_PCI_2_1_RETRY) != 0);
}
static int aic7xxx_patch13_func(struct aic7xxx_host *p);
static int
aic7xxx_patch13_func(struct aic7xxx_host *p)
{
return ((p->features & AHC_WIDE) != 0);
}
static int aic7xxx_patch12_func(struct aic7xxx_host *p);
static int
aic7xxx_patch12_func(struct aic7xxx_host *p)
{
return ((p->bugs & AHC_BUG_AUTOFLUSH) != 0);
}
static int aic7xxx_patch11_func(struct aic7xxx_host *p);
static int
aic7xxx_patch11_func(struct aic7xxx_host *p)
{
return ((p->features & AHC_ULTRA2) == 0);
}
static int aic7xxx_patch10_func(struct aic7xxx_host *p);
static int
aic7xxx_patch10_func(struct aic7xxx_host *p)
{
return ((p->features & AHC_CMD_CHAN) == 0);
}
static int aic7xxx_patch9_func(struct aic7xxx_host *p);
static int
aic7xxx_patch9_func(struct aic7xxx_host *p)
{
return ((p->chip & AHC_CHIPID_MASK) == AHC_AIC7895);
}
static int aic7xxx_patch8_func(struct aic7xxx_host *p);
static int
aic7xxx_patch8_func(struct aic7xxx_host *p)
{
return ((p->features & AHC_ULTRA) != 0);
}
static int aic7xxx_patch7_func(struct aic7xxx_host *p);
static int
aic7xxx_patch7_func(struct aic7xxx_host *p)
{
return ((p->features & AHC_ULTRA2) != 0);
}
static int aic7xxx_patch6_func(struct aic7xxx_host *p);
static int
aic7xxx_patch6_func(struct aic7xxx_host *p)
{
return ((p->flags & AHC_PAGESCBS) == 0);
}
static int aic7xxx_patch5_func(struct aic7xxx_host *p);
static int
aic7xxx_patch5_func(struct aic7xxx_host *p)
{
return ((p->flags & AHC_PAGESCBS) != 0);
}
static int aic7xxx_patch4_func(struct aic7xxx_host *p);
static int
aic7xxx_patch4_func(struct aic7xxx_host *p)
{
return ((p->features & AHC_QUEUE_REGS) != 0);
}
static int aic7xxx_patch3_func(struct aic7xxx_host *p);
static int
aic7xxx_patch3_func(struct aic7xxx_host *p)
{
return ((p->features & AHC_TWIN) != 0);
}
static int aic7xxx_patch2_func(struct aic7xxx_host *p);
static int
aic7xxx_patch2_func(struct aic7xxx_host *p)
{
return ((p->features & AHC_QUEUE_REGS) == 0);
}
static int aic7xxx_patch1_func(struct aic7xxx_host *p);
static int
aic7xxx_patch1_func(struct aic7xxx_host *p)
{
return ((p->features & AHC_CMD_CHAN) != 0);
}
static int aic7xxx_patch0_func(struct aic7xxx_host *p);
static int
aic7xxx_patch0_func(struct aic7xxx_host *p)
{
return (0);
}
struct sequencer_patch {
int (*patch_func)(struct aic7xxx_host *);
unsigned int begin :10,
skip_instr :10,
skip_patch :12;
} sequencer_patches[] = {
{ aic7xxx_patch1_func, 3, 2, 1 },
{ aic7xxx_patch2_func, 7, 1, 1 },
{ aic7xxx_patch2_func, 8, 1, 1 },
{ aic7xxx_patch3_func, 11, 4, 1 },
{ aic7xxx_patch4_func, 16, 3, 2 },
{ aic7xxx_patch0_func, 19, 4, 1 },
{ aic7xxx_patch5_func, 23, 1, 1 },
{ aic7xxx_patch6_func, 26, 1, 1 },
{ aic7xxx_patch1_func, 29, 1, 2 },
{ aic7xxx_patch0_func, 30, 3, 1 },
{ aic7xxx_patch3_func, 39, 4, 1 },
{ aic7xxx_patch7_func, 43, 3, 2 },
{ aic7xxx_patch0_func, 46, 3, 1 },
{ aic7xxx_patch8_func, 52, 7, 1 },
{ aic7xxx_patch3_func, 60, 3, 1 },
{ aic7xxx_patch7_func, 63, 2, 1 },
{ aic7xxx_patch7_func, 102, 1, 2 },
{ aic7xxx_patch0_func, 103, 2, 1 },
{ aic7xxx_patch7_func, 107, 2, 1 },
{ aic7xxx_patch9_func, 109, 1, 1 },
{ aic7xxx_patch10_func, 110, 2, 1 },
{ aic7xxx_patch7_func, 113, 1, 2 },
{ aic7xxx_patch0_func, 114, 1, 1 },
{ aic7xxx_patch1_func, 118, 1, 1 },
{ aic7xxx_patch1_func, 121, 3, 3 },
{ aic7xxx_patch11_func, 123, 1, 1 },
{ aic7xxx_patch0_func, 124, 5, 1 },
{ aic7xxx_patch7_func, 132, 1, 1 },
{ aic7xxx_patch9_func, 133, 1, 1 },
{ aic7xxx_patch10_func, 134, 3, 1 },
{ aic7xxx_patch7_func, 137, 3, 2 },
{ aic7xxx_patch0_func, 140, 2, 1 },
{ aic7xxx_patch7_func, 142, 5, 2 },
{ aic7xxx_patch0_func, 147, 3, 1 },
{ aic7xxx_patch7_func, 150, 1, 2 },
{ aic7xxx_patch0_func, 151, 2, 1 },
{ aic7xxx_patch1_func, 153, 15, 4 },
{ aic7xxx_patch11_func, 166, 1, 2 },
{ aic7xxx_patch0_func, 167, 1, 1 },
{ aic7xxx_patch0_func, 168, 10, 1 },
{ aic7xxx_patch7_func, 181, 1, 2 },
{ aic7xxx_patch0_func, 182, 2, 1 },
{ aic7xxx_patch7_func, 184, 18, 1 },
{ aic7xxx_patch1_func, 202, 3, 3 },
{ aic7xxx_patch7_func, 204, 1, 1 },
{ aic7xxx_patch0_func, 205, 4, 1 },
{ aic7xxx_patch7_func, 210, 2, 1 },
{ aic7xxx_patch7_func, 215, 13, 3 },
{ aic7xxx_patch12_func, 218, 1, 1 },
{ aic7xxx_patch12_func, 219, 4, 1 },
{ aic7xxx_patch1_func, 229, 3, 3 },
{ aic7xxx_patch11_func, 231, 1, 1 },
{ aic7xxx_patch0_func, 232, 5, 1 },
{ aic7xxx_patch11_func, 237, 1, 2 },
{ aic7xxx_patch0_func, 238, 9, 1 },
{ aic7xxx_patch13_func, 254, 1, 2 },
{ aic7xxx_patch0_func, 255, 1, 1 },
{ aic7xxx_patch4_func, 316, 1, 2 },
{ aic7xxx_patch0_func, 317, 1, 1 },
{ aic7xxx_patch2_func, 320, 1, 1 },
{ aic7xxx_patch1_func, 330, 3, 2 },
{ aic7xxx_patch0_func, 333, 5, 1 },
{ aic7xxx_patch13_func, 341, 1, 2 },
{ aic7xxx_patch0_func, 342, 1, 1 },
{ aic7xxx_patch5_func, 347, 1, 1 },
{ aic7xxx_patch11_func, 389, 15, 2 },
{ aic7xxx_patch14_func, 402, 1, 1 },
{ aic7xxx_patch1_func, 441, 7, 2 },
{ aic7xxx_patch0_func, 448, 8, 1 },
{ aic7xxx_patch1_func, 457, 4, 2 },
{ aic7xxx_patch0_func, 461, 6, 1 },
{ aic7xxx_patch1_func, 467, 4, 2 },
{ aic7xxx_patch0_func, 471, 3, 1 },
{ aic7xxx_patch10_func, 481, 10, 1 },
{ aic7xxx_patch1_func, 500, 22, 5 },
{ aic7xxx_patch11_func, 508, 4, 1 },
{ aic7xxx_patch7_func, 512, 7, 3 },
{ aic7xxx_patch15_func, 512, 5, 2 },
{ aic7xxx_patch0_func, 517, 2, 1 },
{ aic7xxx_patch10_func, 522, 50, 3 },
{ aic7xxx_patch14_func, 543, 17, 2 },
{ aic7xxx_patch0_func, 560, 4, 1 },
{ aic7xxx_patch10_func, 572, 4, 1 },
{ aic7xxx_patch5_func, 576, 2, 1 },
{ aic7xxx_patch5_func, 579, 9, 1 },
};
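
The begin/skip_instr/skip_patch fields drive conditional inclusion of sequencer instructions at download time. Below is a minimal sketch of how a loader could consume this table while walking the instruction array above; the helper name, calling convention and ARRAY_SIZE usage are illustrative assumptions, not the driver's actual download routine.

/*
 * Illustrative sketch only: the caller starts with *start_patch pointing at
 * sequencer_patches[0] and *skip_addr set to 0, and calls this once per
 * instruction index; a return of 0 means "do not download this instruction
 * for the current configuration".
 */
static int
check_patch(struct aic7xxx_host *p, struct sequencer_patch **start_patch,
            unsigned int start_instr, unsigned int *skip_addr)
{
        struct sequencer_patch *cur_patch = *start_patch;
        struct sequencer_patch *last_patch =
                &sequencer_patches[ARRAY_SIZE(sequencer_patches)];

        while (cur_patch < last_patch && start_instr == cur_patch->begin) {
                if (cur_patch->patch_func(p) == 0) {
                        /* Predicate false: reject skip_instr instructions and
                         * step over this patch and its alternatives. */
                        *skip_addr = start_instr + cur_patch->skip_instr;
                        cur_patch += cur_patch->skip_patch;
                } else {
                        /* Predicate true: keep the code, look at the next patch. */
                        cur_patch++;
                }
        }
        *start_patch = cur_patch;
        return start_instr >= *skip_addr;
}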

View file

@ -1,49 +0,0 @@
/* Messages (1 byte) */ /* I/T (M)andatory or (O)ptional */
#define MSG_CMDCOMPLETE 0x00 /* M/M */
#define MSG_EXTENDED 0x01 /* O/O */
#define MSG_SAVEDATAPOINTER 0x02 /* O/O */
#define MSG_RESTOREPOINTERS 0x03 /* O/O */
#define MSG_DISCONNECT 0x04 /* O/O */
#define MSG_INITIATOR_DET_ERR 0x05 /* M/M */
#define MSG_ABORT 0x06 /* O/M */
#define MSG_MESSAGE_REJECT 0x07 /* M/M */
#define MSG_NOOP 0x08 /* M/M */
#define MSG_PARITY_ERROR 0x09 /* M/M */
#define MSG_LINK_CMD_COMPLETE 0x0a /* O/O */
#define MSG_LINK_CMD_COMPLETEF 0x0b /* O/O */
#define MSG_BUS_DEV_RESET 0x0c /* O/M */
#define MSG_ABORT_TAG 0x0d /* O/O */
#define MSG_CLEAR_QUEUE 0x0e /* O/O */
#define MSG_INIT_RECOVERY 0x0f /* O/O */
#define MSG_REL_RECOVERY 0x10 /* O/O */
#define MSG_TERM_IO_PROC 0x11 /* O/O */
/* Messages (2 byte) */
#define MSG_SIMPLE_Q_TAG 0x20 /* O/O */
#define MSG_HEAD_OF_Q_TAG 0x21 /* O/O */
#define MSG_ORDERED_Q_TAG 0x22 /* O/O */
#define MSG_IGN_WIDE_RESIDUE 0x23 /* O/O */
/* Identify message */ /* M/M */
#define MSG_IDENTIFYFLAG 0x80
#define MSG_IDENTIFY_DISCFLAG 0x40
#define MSG_IDENTIFY(lun, disc) (((disc) ? 0xc0 : MSG_IDENTIFYFLAG) | (lun))
#define MSG_ISIDENTIFY(m) ((m) & MSG_IDENTIFYFLAG)
/* Extended messages (opcode and length) */
#define MSG_EXT_SDTR 0x01
#define MSG_EXT_SDTR_LEN 0x03
#define MSG_EXT_WDTR 0x03
#define MSG_EXT_WDTR_LEN 0x02
#define MSG_EXT_WDTR_BUS_8_BIT 0x00
#define MSG_EXT_WDTR_BUS_16_BIT 0x01
#define MSG_EXT_WDTR_BUS_32_BIT 0x02
#define MSG_EXT_PPR 0x04
#define MSG_EXT_PPR_LEN 0x06
#define MSG_EXT_PPR_OPTION_ST 0x00
#define MSG_EXT_PPR_OPTION_DT_CRC 0x02
#define MSG_EXT_PPR_OPTION_DT_UNITS 0x03
#define MSG_EXT_PPR_OPTION_DT_CRC_QUICK 0x04
#define MSG_EXT_PPR_OPTION_DT_UNITS_QUICK 0x05
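
As a hedged illustration of how these defines combine on the wire, the fragment below assembles an IDENTIFY byte followed by a synchronous transfer (SDTR) extended message; the helper name and the period/offset values are example assumptions, not taken from the driver.

/* Illustration only: build IDENTIFY + SDTR using the defines above. */
static void build_identify_sdtr(unsigned char msg[6], unsigned int lun)
{
        msg[0] = MSG_IDENTIFY(lun, 1);  /* IDENTIFY, disconnect privilege granted */
        msg[1] = MSG_EXTENDED;          /* extended message follows */
        msg[2] = MSG_EXT_SDTR_LEN;      /* three bytes of SDTR payload */
        msg[3] = MSG_EXT_SDTR;
        msg[4] = 25;                    /* transfer period factor: 25 * 4ns = 100ns */
        msg[5] = 8;                     /* REQ/ACK offset */
}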

View file

@ -1,135 +0,0 @@
/*
* Instruction formats for the sequencer program downloaded to
* Aic7xxx SCSI host adapters
*
* Copyright (c) 1997, 1998 Justin T. Gibbs.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification, immediately at the beginning of the file.
* 2. The name of the author may not be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* Where this Software is combined with software released under the terms of
* the GNU General Public License ("GPL") and the terms of the GPL would require the
* combined work to also be released under the terms of the GPL, the terms
* and conditions of this License will apply in addition to those of the
* GPL with the exception of any terms or conditions of this License that
* conflict with, or are expressly prohibited by, the GPL.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
* ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $Id: sequencer.h,v 1.3 1997/09/27 19:37:31 gibbs Exp $
*/
#ifdef __LITTLE_ENDIAN_BITFIELD
struct ins_format1 {
unsigned int
immediate : 8,
source : 9,
destination : 9,
ret : 1,
opcode : 4,
parity : 1;
};
struct ins_format2 {
unsigned int
shift_control : 8,
source : 9,
destination : 9,
ret : 1,
opcode : 4,
parity : 1;
};
struct ins_format3 {
unsigned int
immediate : 8,
source : 9,
address : 10,
opcode : 4,
parity : 1;
};
#elif defined(__BIG_ENDIAN_BITFIELD)
struct ins_format1 {
unsigned int
parity : 1,
opcode : 4,
ret : 1,
destination : 9,
source : 9,
immediate : 8;
};
struct ins_format2 {
unsigned int
parity : 1,
opcode : 4,
ret : 1,
destination : 9,
source : 9,
shift_control : 8;
};
struct ins_format3 {
unsigned int
parity : 1,
opcode : 4,
address : 10,
source : 9,
immediate : 8;
};
#endif
union ins_formats {
struct ins_format1 format1;
struct ins_format2 format2;
struct ins_format3 format3;
unsigned char bytes[4];
unsigned int integer;
};
struct instruction {
union ins_formats format;
unsigned int srcline;
struct symbol *patch_label;
struct {
struct instruction *stqe_next;
} links;
};
#define AIC_OP_OR 0x0
#define AIC_OP_AND 0x1
#define AIC_OP_XOR 0x2
#define AIC_OP_ADD 0x3
#define AIC_OP_ADC 0x4
#define AIC_OP_ROL 0x5
#define AIC_OP_BMOV 0x6
#define AIC_OP_JMP 0x8
#define AIC_OP_JC 0x9
#define AIC_OP_JNC 0xa
#define AIC_OP_CALL 0xb
#define AIC_OP_JNE 0xc
#define AIC_OP_JNZ 0xd
#define AIC_OP_JE 0xe
#define AIC_OP_JZ 0xf
/* Pseudo Ops */
#define AIC_OP_SHL 0x10
#define AIC_OP_SHR 0x20
#define AIC_OP_ROR 0x30
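
A hedged sketch of how a 32-bit assembled instruction word can be cracked into its format-3 (jump/call) fields; it assumes the definitions above are in scope with the appropriate endian bitfield macro set by the build, and that the word is already in host byte order.

#include <stdio.h>

/* Illustration only: decode one format-3 word (branch instructions use this layout). */
static void decode_format3(unsigned int word)
{
        union ins_formats ins;

        ins.integer = word;
        printf("opcode=0x%x address=0x%03x source=0x%03x immediate=0x%02x\n",
               ins.format3.opcode,      /* e.g. AIC_OP_JMP (0x8) or AIC_OP_CALL (0xb) */
               ins.format3.address,     /* 10-bit jump/call target */
               ins.format3.source,
               ins.format3.immediate);
}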

View file

@ -541,10 +541,8 @@ static int be2iscsi_get_if_param(struct beiscsi_hba *phba,
ip_type = BE2_IPV6;
len = mgmt_get_if_info(phba, ip_type, &if_info);
if (len) {
kfree(if_info);
if (len)
return len;
}
switch (param) {
case ISCSI_NET_PARAM_IPV4_ADDR:
@ -569,7 +567,7 @@ static int be2iscsi_get_if_param(struct beiscsi_hba *phba,
break;
case ISCSI_NET_PARAM_VLAN_ID:
if (if_info->vlan_priority == BEISCSI_VLAN_DISABLE)
return -EINVAL;
len = -EINVAL;
else
len = sprintf(buf, "%d\n",
(if_info->vlan_priority &
@ -577,7 +575,7 @@ static int be2iscsi_get_if_param(struct beiscsi_hba *phba,
break;
case ISCSI_NET_PARAM_VLAN_PRIORITY:
if (if_info->vlan_priority == BEISCSI_VLAN_DISABLE)
return -EINVAL;
len = -EINVAL;
else
len = sprintf(buf, "%d\n",
((if_info->vlan_priority >> 13) &

View file

@ -1367,10 +1367,6 @@ bfa_faa_query(struct bfa_s *bfa, struct bfa_faa_attr_s *attr,
struct bfa_iocfc_s *iocfc = &bfa->iocfc;
bfa_status_t status;
iocfc->faa_args.faa_attr = attr;
iocfc->faa_args.faa_cb.faa_cbfn = cbfn;
iocfc->faa_args.faa_cb.faa_cbarg = cbarg;
status = bfa_faa_validate_request(bfa);
if (status != BFA_STATUS_OK)
return status;
@ -1378,6 +1374,10 @@ bfa_faa_query(struct bfa_s *bfa, struct bfa_faa_attr_s *attr,
if (iocfc->faa_args.busy == BFA_TRUE)
return BFA_STATUS_DEVBUSY;
iocfc->faa_args.faa_attr = attr;
iocfc->faa_args.faa_cb.faa_cbfn = cbfn;
iocfc->faa_args.faa_cb.faa_cbarg = cbarg;
iocfc->faa_args.busy = BFA_TRUE;
memset(&faa_attr_req, 0, sizeof(struct bfi_faa_query_s));
bfi_h2i_set(faa_attr_req.mh, BFI_MC_IOCFC,

View file

@ -132,6 +132,7 @@ enum bfa_status {
BFA_STATUS_ETIMER = 5, /* Timer expired - Retry, if persists,
* contact support */
BFA_STATUS_EPROTOCOL = 6, /* Protocol error */
BFA_STATUS_BADFLASH = 9, /* Flash is bad */
BFA_STATUS_SFP_UNSUPP = 10, /* Unsupported SFP - Replace SFP */
BFA_STATUS_UNKNOWN_VFID = 11, /* VF_ID not found */
BFA_STATUS_DATACORRUPTED = 12, /* Diag returned data corrupted */

View file

@ -1026,7 +1026,7 @@ struct fc_alpabm_s {
#define FC_ED_TOV 2
#define FC_REC_TOV (FC_ED_TOV + 1)
#define FC_RA_TOV 10
#define FC_ELS_TOV ((2 * FC_RA_TOV) + 1)
#define FC_ELS_TOV (2 * FC_RA_TOV)
#define FC_FCCT_TOV (3 * FC_RA_TOV)
/*

View file

@ -773,7 +773,20 @@ bfa_fcs_lport_uf_recv(struct bfa_fcs_lport_s *lport,
bfa_trc(lport->fcs, fchs->type);
if (!bfa_fcs_lport_is_online(lport)) {
bfa_stats(lport, uf_recv_drops);
/*
* In direct attach topology, it is possible to get a PLOGI
* before the lport is online due to port feature
* (QoS/Trunk/FEC/CR), so send an LS_RJT
*/
if ((fchs->type == FC_TYPE_ELS) &&
(els_cmd->els_code == FC_ELS_PLOGI)) {
bfa_fcs_lport_send_ls_rjt(lport, fchs,
FC_LS_RJT_RSN_UNABLE_TO_PERF_CMD,
FC_LS_RJT_EXP_NO_ADDL_INFO);
bfa_stats(lport, plogi_rcvd);
} else
bfa_stats(lport, uf_recv_drops);
return;
}

View file

@ -21,6 +21,7 @@
#include "bfi_reg.h"
#include "bfa_defs.h"
#include "bfa_defs_svc.h"
#include "bfi.h"
BFA_TRC_FILE(CNA, IOC);
@ -45,6 +46,14 @@ BFA_TRC_FILE(CNA, IOC);
#define BFA_DBG_FWTRC_OFF(_fn) (BFI_IOC_TRC_OFF + BFA_DBG_FWTRC_LEN * (_fn))
#define bfa_ioc_state_disabled(__sm) \
(((__sm) == BFI_IOC_UNINIT) || \
((__sm) == BFI_IOC_INITING) || \
((__sm) == BFI_IOC_HWINIT) || \
((__sm) == BFI_IOC_DISABLED) || \
((__sm) == BFI_IOC_FAIL) || \
((__sm) == BFI_IOC_CFG_DISABLED))
/*
* Asic specific macros : see bfa_hw_cb.c and bfa_hw_ct.c for details.
*/
@ -102,6 +111,12 @@ static void bfa_ioc_disable_comp(struct bfa_ioc_s *ioc);
static void bfa_ioc_lpu_stop(struct bfa_ioc_s *ioc);
static void bfa_ioc_fail_notify(struct bfa_ioc_s *ioc);
static void bfa_ioc_pf_fwmismatch(struct bfa_ioc_s *ioc);
static enum bfi_ioc_img_ver_cmp_e bfa_ioc_fw_ver_patch_cmp(
struct bfi_ioc_image_hdr_s *base_fwhdr,
struct bfi_ioc_image_hdr_s *fwhdr_to_cmp);
static enum bfi_ioc_img_ver_cmp_e bfa_ioc_flash_fwver_cmp(
struct bfa_ioc_s *ioc,
struct bfi_ioc_image_hdr_s *base_fwhdr);
/*
* IOC state machine definitions/declarations
@ -1454,28 +1469,42 @@ bfa_ioc_fwver_get(struct bfa_ioc_s *ioc, struct bfi_ioc_image_hdr_s *fwhdr)
}
/*
* Returns TRUE if same.
* Returns TRUE if driver is willing to work with current smem f/w version.
*/
bfa_boolean_t
bfa_ioc_fwver_cmp(struct bfa_ioc_s *ioc, struct bfi_ioc_image_hdr_s *fwhdr)
bfa_ioc_fwver_cmp(struct bfa_ioc_s *ioc,
struct bfi_ioc_image_hdr_s *smem_fwhdr)
{
struct bfi_ioc_image_hdr_s *drv_fwhdr;
int i;
enum bfi_ioc_img_ver_cmp_e smem_flash_cmp, drv_smem_cmp;
drv_fwhdr = (struct bfi_ioc_image_hdr_s *)
bfa_cb_image_get_chunk(bfa_ioc_asic_gen(ioc), 0);
for (i = 0; i < BFI_IOC_MD5SUM_SZ; i++) {
if (fwhdr->md5sum[i] != cpu_to_le32(drv_fwhdr->md5sum[i])) {
bfa_trc(ioc, i);
bfa_trc(ioc, fwhdr->md5sum[i]);
bfa_trc(ioc, drv_fwhdr->md5sum[i]);
return BFA_FALSE;
}
/*
* If smem is incompatible or old, driver should not work with it.
*/
drv_smem_cmp = bfa_ioc_fw_ver_patch_cmp(drv_fwhdr, smem_fwhdr);
if (drv_smem_cmp == BFI_IOC_IMG_VER_INCOMP ||
drv_smem_cmp == BFI_IOC_IMG_VER_OLD) {
return BFA_FALSE;
}
bfa_trc(ioc, fwhdr->md5sum[0]);
return BFA_TRUE;
/*
* If flash has a better f/w than smem, do not work with smem.
* If the smem f/w equals the flash f/w, work with it (at this point it is neither old nor incompatible).
* If flash is old or incompatible, work with smem only if the smem f/w equals the driver f/w.
*/
smem_flash_cmp = bfa_ioc_flash_fwver_cmp(ioc, smem_fwhdr);
if (smem_flash_cmp == BFI_IOC_IMG_VER_BETTER) {
return BFA_FALSE;
} else if (smem_flash_cmp == BFI_IOC_IMG_VER_SAME) {
return BFA_TRUE;
} else {
return (drv_smem_cmp == BFI_IOC_IMG_VER_SAME) ?
BFA_TRUE : BFA_FALSE;
}
}
/*
@ -1485,17 +1514,9 @@ bfa_ioc_fwver_cmp(struct bfa_ioc_s *ioc, struct bfi_ioc_image_hdr_s *fwhdr)
static bfa_boolean_t
bfa_ioc_fwver_valid(struct bfa_ioc_s *ioc, u32 boot_env)
{
struct bfi_ioc_image_hdr_s fwhdr, *drv_fwhdr;
struct bfi_ioc_image_hdr_s fwhdr;
bfa_ioc_fwver_get(ioc, &fwhdr);
drv_fwhdr = (struct bfi_ioc_image_hdr_s *)
bfa_cb_image_get_chunk(bfa_ioc_asic_gen(ioc), 0);
if (fwhdr.signature != cpu_to_le32(drv_fwhdr->signature)) {
bfa_trc(ioc, fwhdr.signature);
bfa_trc(ioc, drv_fwhdr->signature);
return BFA_FALSE;
}
if (swab32(fwhdr.bootenv) != boot_env) {
bfa_trc(ioc, fwhdr.bootenv);
@ -1506,6 +1527,168 @@ bfa_ioc_fwver_valid(struct bfa_ioc_s *ioc, u32 boot_env)
return bfa_ioc_fwver_cmp(ioc, &fwhdr);
}
static bfa_boolean_t
bfa_ioc_fwver_md5_check(struct bfi_ioc_image_hdr_s *fwhdr_1,
struct bfi_ioc_image_hdr_s *fwhdr_2)
{
int i;
for (i = 0; i < BFI_IOC_MD5SUM_SZ; i++)
if (fwhdr_1->md5sum[i] != fwhdr_2->md5sum[i])
return BFA_FALSE;
return BFA_TRUE;
}
/*
* Returns TRUE if major, minor and maintenance versions are the same.
* If the patch versions are also the same, the MD5 checksums must match as well.
*/
static bfa_boolean_t
bfa_ioc_fw_ver_compatible(struct bfi_ioc_image_hdr_s *drv_fwhdr,
struct bfi_ioc_image_hdr_s *fwhdr_to_cmp)
{
if (drv_fwhdr->signature != fwhdr_to_cmp->signature)
return BFA_FALSE;
if (drv_fwhdr->fwver.major != fwhdr_to_cmp->fwver.major)
return BFA_FALSE;
if (drv_fwhdr->fwver.minor != fwhdr_to_cmp->fwver.minor)
return BFA_FALSE;
if (drv_fwhdr->fwver.maint != fwhdr_to_cmp->fwver.maint)
return BFA_FALSE;
if (drv_fwhdr->fwver.patch == fwhdr_to_cmp->fwver.patch &&
drv_fwhdr->fwver.phase == fwhdr_to_cmp->fwver.phase &&
drv_fwhdr->fwver.build == fwhdr_to_cmp->fwver.build) {
return bfa_ioc_fwver_md5_check(drv_fwhdr, fwhdr_to_cmp);
}
return BFA_TRUE;
}
static bfa_boolean_t
bfa_ioc_flash_fwver_valid(struct bfi_ioc_image_hdr_s *flash_fwhdr)
{
if (flash_fwhdr->fwver.major == 0 || flash_fwhdr->fwver.major == 0xFF)
return BFA_FALSE;
return BFA_TRUE;
}
static bfa_boolean_t fwhdr_is_ga(struct bfi_ioc_image_hdr_s *fwhdr)
{
if (fwhdr->fwver.phase == 0 &&
fwhdr->fwver.build == 0)
return BFA_TRUE;
return BFA_FALSE;
}
/*
* Compares the two image headers and returns whether fwhdr_to_cmp is incompatible with, older than, the same as, or better than base_fwhdr.
*/
static enum bfi_ioc_img_ver_cmp_e
bfa_ioc_fw_ver_patch_cmp(struct bfi_ioc_image_hdr_s *base_fwhdr,
struct bfi_ioc_image_hdr_s *fwhdr_to_cmp)
{
if (bfa_ioc_fw_ver_compatible(base_fwhdr, fwhdr_to_cmp) == BFA_FALSE)
return BFI_IOC_IMG_VER_INCOMP;
if (fwhdr_to_cmp->fwver.patch > base_fwhdr->fwver.patch)
return BFI_IOC_IMG_VER_BETTER;
else if (fwhdr_to_cmp->fwver.patch < base_fwhdr->fwver.patch)
return BFI_IOC_IMG_VER_OLD;
/*
* GA takes priority over internal builds of the same patch stream.
* At this point major minor maint and patch numbers are same.
*/
if (fwhdr_is_ga(base_fwhdr) == BFA_TRUE) {
if (fwhdr_is_ga(fwhdr_to_cmp))
return BFI_IOC_IMG_VER_SAME;
else
return BFI_IOC_IMG_VER_OLD;
} else {
if (fwhdr_is_ga(fwhdr_to_cmp))
return BFI_IOC_IMG_VER_BETTER;
}
if (fwhdr_to_cmp->fwver.phase > base_fwhdr->fwver.phase)
return BFI_IOC_IMG_VER_BETTER;
else if (fwhdr_to_cmp->fwver.phase < base_fwhdr->fwver.phase)
return BFI_IOC_IMG_VER_OLD;
if (fwhdr_to_cmp->fwver.build > base_fwhdr->fwver.build)
return BFI_IOC_IMG_VER_BETTER;
else if (fwhdr_to_cmp->fwver.build < base_fwhdr->fwver.build)
return BFI_IOC_IMG_VER_OLD;
/*
* All Version Numbers are equal.
* Md5 check to be done as a part of compatibility check.
*/
return BFI_IOC_IMG_VER_SAME;
}
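
Below is a hedged user-space model of the precedence implemented above (compatibility first, then patch number, then GA over internal builds, then phase, then build). It uses plain ints rather than the driver's structures and exists purely to illustrate the ordering.

#include <stdio.h>

struct ver { int patch, phase, build; };  /* major/minor/maint assumed equal */

static int is_ga(struct ver v) { return v.phase == 0 && v.build == 0; }

static const char *cmp(struct ver base, struct ver other)
{
        if (other.patch != base.patch)
                return other.patch > base.patch ? "BETTER" : "OLD";
        if (is_ga(base))
                return is_ga(other) ? "SAME" : "OLD";
        if (is_ga(other))
                return "BETTER";
        if (other.phase != base.phase)
                return other.phase > base.phase ? "BETTER" : "OLD";
        if (other.build != base.build)
                return other.build > base.build ? "BETTER" : "OLD";
        return "SAME";
}

int main(void)
{
        struct ver ga = { 0, 0, 0 }, internal = { 0, 1, 17 }, patched = { 1, 0, 0 };

        printf("%s\n", cmp(ga, internal)); /* OLD: GA outranks an internal build */
        printf("%s\n", cmp(ga, patched));  /* BETTER: a higher patch level wins  */
        return 0;
}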
#define BFA_FLASH_PART_FWIMG_ADDR 0x100000 /* fw image address */
bfa_status_t
bfa_ioc_flash_img_get_chnk(struct bfa_ioc_s *ioc, u32 off,
u32 *fwimg)
{
return bfa_flash_raw_read(ioc->pcidev.pci_bar_kva,
BFA_FLASH_PART_FWIMG_ADDR + (off * sizeof(u32)),
(char *)fwimg, BFI_FLASH_CHUNK_SZ);
}
static enum bfi_ioc_img_ver_cmp_e
bfa_ioc_flash_fwver_cmp(struct bfa_ioc_s *ioc,
struct bfi_ioc_image_hdr_s *base_fwhdr)
{
struct bfi_ioc_image_hdr_s *flash_fwhdr;
bfa_status_t status;
u32 fwimg[BFI_FLASH_CHUNK_SZ_WORDS];
status = bfa_ioc_flash_img_get_chnk(ioc, 0, fwimg);
if (status != BFA_STATUS_OK)
return BFI_IOC_IMG_VER_INCOMP;
flash_fwhdr = (struct bfi_ioc_image_hdr_s *) fwimg;
if (bfa_ioc_flash_fwver_valid(flash_fwhdr) == BFA_TRUE)
return bfa_ioc_fw_ver_patch_cmp(base_fwhdr, flash_fwhdr);
else
return BFI_IOC_IMG_VER_INCOMP;
}
/*
* Invalidate fwver signature
*/
bfa_status_t
bfa_ioc_fwsig_invalidate(struct bfa_ioc_s *ioc)
{
u32 pgnum, pgoff;
u32 loff = 0;
enum bfi_ioc_state ioc_fwstate;
ioc_fwstate = bfa_ioc_get_cur_ioc_fwstate(ioc);
if (!bfa_ioc_state_disabled(ioc_fwstate))
return BFA_STATUS_ADAPTER_ENABLED;
pgnum = PSS_SMEM_PGNUM(ioc->ioc_regs.smem_pg0, loff);
pgoff = PSS_SMEM_PGOFF(loff);
writel(pgnum, ioc->ioc_regs.host_page_num_fn);
bfa_mem_write(ioc->ioc_regs.smem_page_start, loff, BFA_IOC_FW_INV_SIGN);
return BFA_STATUS_OK;
}
/*
* Conditionally flush any pending message from firmware at start.
*/
@ -1544,8 +1727,8 @@ bfa_ioc_hwinit(struct bfa_ioc_s *ioc, bfa_boolean_t force)
BFA_FALSE : bfa_ioc_fwver_valid(ioc, boot_env);
if (!fwvalid) {
bfa_ioc_boot(ioc, boot_type, boot_env);
bfa_ioc_poll_fwinit(ioc);
if (bfa_ioc_boot(ioc, boot_type, boot_env) == BFA_STATUS_OK)
bfa_ioc_poll_fwinit(ioc);
return;
}
@ -1580,8 +1763,8 @@ bfa_ioc_hwinit(struct bfa_ioc_s *ioc, bfa_boolean_t force)
/*
* Initialize the h/w for any other states.
*/
bfa_ioc_boot(ioc, boot_type, boot_env);
bfa_ioc_poll_fwinit(ioc);
if (bfa_ioc_boot(ioc, boot_type, boot_env) == BFA_STATUS_OK)
bfa_ioc_poll_fwinit(ioc);
}
static void
@ -1684,7 +1867,7 @@ bfa_ioc_hb_monitor(struct bfa_ioc_s *ioc)
/*
* Initiate a full firmware download.
*/
static void
static bfa_status_t
bfa_ioc_download_fw(struct bfa_ioc_s *ioc, u32 boot_type,
u32 boot_env)
{
@ -1694,28 +1877,60 @@ bfa_ioc_download_fw(struct bfa_ioc_s *ioc, u32 boot_type,
u32 chunkno = 0;
u32 i;
u32 asicmode;
u32 fwimg_size;
u32 fwimg_buf[BFI_FLASH_CHUNK_SZ_WORDS];
bfa_status_t status;
if (boot_env == BFI_FWBOOT_ENV_OS &&
boot_type == BFI_FWBOOT_TYPE_FLASH) {
fwimg_size = BFI_FLASH_IMAGE_SZ/sizeof(u32);
status = bfa_ioc_flash_img_get_chnk(ioc,
BFA_IOC_FLASH_CHUNK_ADDR(chunkno), fwimg_buf);
if (status != BFA_STATUS_OK)
return status;
fwimg = fwimg_buf;
} else {
fwimg_size = bfa_cb_image_get_size(bfa_ioc_asic_gen(ioc));
fwimg = bfa_cb_image_get_chunk(bfa_ioc_asic_gen(ioc),
BFA_IOC_FLASH_CHUNK_ADDR(chunkno));
}
bfa_trc(ioc, fwimg_size);
bfa_trc(ioc, bfa_cb_image_get_size(bfa_ioc_asic_gen(ioc)));
fwimg = bfa_cb_image_get_chunk(bfa_ioc_asic_gen(ioc), chunkno);
pgnum = PSS_SMEM_PGNUM(ioc->ioc_regs.smem_pg0, loff);
pgoff = PSS_SMEM_PGOFF(loff);
writel(pgnum, ioc->ioc_regs.host_page_num_fn);
for (i = 0; i < bfa_cb_image_get_size(bfa_ioc_asic_gen(ioc)); i++) {
for (i = 0; i < fwimg_size; i++) {
if (BFA_IOC_FLASH_CHUNK_NO(i) != chunkno) {
chunkno = BFA_IOC_FLASH_CHUNK_NO(i);
fwimg = bfa_cb_image_get_chunk(bfa_ioc_asic_gen(ioc),
if (boot_env == BFI_FWBOOT_ENV_OS &&
boot_type == BFI_FWBOOT_TYPE_FLASH) {
status = bfa_ioc_flash_img_get_chnk(ioc,
BFA_IOC_FLASH_CHUNK_ADDR(chunkno),
fwimg_buf);
if (status != BFA_STATUS_OK)
return status;
fwimg = fwimg_buf;
} else {
fwimg = bfa_cb_image_get_chunk(
bfa_ioc_asic_gen(ioc),
BFA_IOC_FLASH_CHUNK_ADDR(chunkno));
}
}
/*
* write smem
*/
bfa_mem_write(ioc->ioc_regs.smem_page_start, loff,
cpu_to_le32(fwimg[BFA_IOC_FLASH_OFFSET_IN_CHUNK(i)]));
fwimg[BFA_IOC_FLASH_OFFSET_IN_CHUNK(i)]);
loff += sizeof(u32);
@ -1733,8 +1948,12 @@ bfa_ioc_download_fw(struct bfa_ioc_s *ioc, u32 boot_type,
ioc->ioc_regs.host_page_num_fn);
/*
* Set boot type and device mode at the end.
* Set boot type, env and device mode at the end.
*/
if (boot_env == BFI_FWBOOT_ENV_OS &&
boot_type == BFI_FWBOOT_TYPE_FLASH) {
boot_type = BFI_FWBOOT_TYPE_NORMAL;
}
asicmode = BFI_FWBOOT_DEVMODE(ioc->asic_gen, ioc->asic_mode,
ioc->port0_mode, ioc->port1_mode);
bfa_mem_write(ioc->ioc_regs.smem_page_start, BFI_FWBOOT_DEVMODE_OFF,
@ -1743,6 +1962,7 @@ bfa_ioc_download_fw(struct bfa_ioc_s *ioc, u32 boot_type,
swab32(boot_type));
bfa_mem_write(ioc->ioc_regs.smem_page_start, BFI_FWBOOT_ENV_OFF,
swab32(boot_env));
return BFA_STATUS_OK;
}
@ -2002,13 +2222,30 @@ bfa_ioc_pll_init(struct bfa_ioc_s *ioc)
* Interface used by diag module to do firmware boot with memory test
* as the entry vector.
*/
void
bfa_status_t
bfa_ioc_boot(struct bfa_ioc_s *ioc, u32 boot_type, u32 boot_env)
{
struct bfi_ioc_image_hdr_s *drv_fwhdr;
bfa_status_t status;
bfa_ioc_stats(ioc, ioc_boots);
if (bfa_ioc_pll_init(ioc) != BFA_STATUS_OK)
return;
return BFA_STATUS_FAILED;
if (boot_env == BFI_FWBOOT_ENV_OS &&
boot_type == BFI_FWBOOT_TYPE_NORMAL) {
drv_fwhdr = (struct bfi_ioc_image_hdr_s *)
bfa_cb_image_get_chunk(bfa_ioc_asic_gen(ioc), 0);
/*
* Work with Flash iff flash f/w is better than driver f/w.
* Otherwise push the driver's firmware.
*/
if (bfa_ioc_flash_fwver_cmp(ioc, drv_fwhdr) ==
BFI_IOC_IMG_VER_BETTER)
boot_type = BFI_FWBOOT_TYPE_FLASH;
}
/*
* Initialize IOC state of all functions on a chip reset.
@ -2022,8 +2259,14 @@ bfa_ioc_boot(struct bfa_ioc_s *ioc, u32 boot_type, u32 boot_env)
}
bfa_ioc_msgflush(ioc);
bfa_ioc_download_fw(ioc, boot_type, boot_env);
bfa_ioc_lpu_start(ioc);
status = bfa_ioc_download_fw(ioc, boot_type, boot_env);
if (status == BFA_STATUS_OK)
bfa_ioc_lpu_start(ioc);
else {
WARN_ON(boot_type == BFI_FWBOOT_TYPE_MEMTEST);
bfa_iocpf_timeout(ioc);
}
return status;
}
/*
@ -2419,14 +2662,6 @@ bfa_ioc_fw_mismatch(struct bfa_ioc_s *ioc)
bfa_fsm_cmp_state(&ioc->iocpf, bfa_iocpf_sm_mismatch);
}
#define bfa_ioc_state_disabled(__sm) \
(((__sm) == BFI_IOC_UNINIT) || \
((__sm) == BFI_IOC_INITING) || \
((__sm) == BFI_IOC_HWINIT) || \
((__sm) == BFI_IOC_DISABLED) || \
((__sm) == BFI_IOC_FAIL) || \
((__sm) == BFI_IOC_CFG_DISABLED))
/*
* Check if adapter is disabled -- both IOCs should be in a disabled
* state.
@ -6423,3 +6658,407 @@ bfa_fru_intr(void *fruarg, struct bfi_mbmsg_s *msg)
WARN_ON(1);
}
}
/*
* register definitions
*/
#define FLI_CMD_REG 0x0001d000
#define FLI_RDDATA_REG 0x0001d010
#define FLI_ADDR_REG 0x0001d004
#define FLI_DEV_STATUS_REG 0x0001d014
#define BFA_FLASH_FIFO_SIZE 128 /* fifo size */
#define BFA_FLASH_CHECK_MAX 10000 /* max # of status check */
#define BFA_FLASH_BLOCKING_OP_MAX 1000000 /* max # of blocking op check */
#define BFA_FLASH_WIP_MASK 0x01 /* write in progress bit mask */
enum bfa_flash_cmd {
BFA_FLASH_FAST_READ = 0x0b, /* fast read */
BFA_FLASH_READ_STATUS = 0x05, /* read status */
};
/**
* @brief hardware error definition
*/
enum bfa_flash_err {
BFA_FLASH_NOT_PRESENT = -1, /*!< flash not present */
BFA_FLASH_UNINIT = -2, /*!< flash not initialized */
BFA_FLASH_BAD = -3, /*!< flash bad */
BFA_FLASH_BUSY = -4, /*!< flash busy */
BFA_FLASH_ERR_CMD_ACT = -5, /*!< command active never cleared */
BFA_FLASH_ERR_FIFO_CNT = -6, /*!< fifo count never cleared */
BFA_FLASH_ERR_WIP = -7, /*!< write-in-progress never cleared */
BFA_FLASH_ERR_TIMEOUT = -8, /*!< fli timeout */
BFA_FLASH_ERR_LEN = -9, /*!< invalid length */
};
/**
* @brief flash command register data structure
*/
union bfa_flash_cmd_reg_u {
struct {
#ifdef __BIG_ENDIAN
u32 act:1;
u32 rsv:1;
u32 write_cnt:9;
u32 read_cnt:9;
u32 addr_cnt:4;
u32 cmd:8;
#else
u32 cmd:8;
u32 addr_cnt:4;
u32 read_cnt:9;
u32 write_cnt:9;
u32 rsv:1;
u32 act:1;
#endif
} r;
u32 i;
};
/**
* @brief flash device status register data structure
*/
union bfa_flash_dev_status_reg_u {
struct {
#ifdef __BIG_ENDIAN
u32 rsv:21;
u32 fifo_cnt:6;
u32 busy:1;
u32 init_status:1;
u32 present:1;
u32 bad:1;
u32 good:1;
#else
u32 good:1;
u32 bad:1;
u32 present:1;
u32 init_status:1;
u32 busy:1;
u32 fifo_cnt:6;
u32 rsv:21;
#endif
} r;
u32 i;
};
/**
* @brief flash address register data structure
*/
union bfa_flash_addr_reg_u {
struct {
#ifdef __BIG_ENDIAN
u32 addr:24;
u32 dummy:8;
#else
u32 dummy:8;
u32 addr:24;
#endif
} r;
u32 i;
};
/**
* dg flash_raw_private Flash raw private functions
*/
static void
bfa_flash_set_cmd(void __iomem *pci_bar, u8 wr_cnt,
u8 rd_cnt, u8 ad_cnt, u8 op)
{
union bfa_flash_cmd_reg_u cmd;
cmd.i = 0;
cmd.r.act = 1;
cmd.r.write_cnt = wr_cnt;
cmd.r.read_cnt = rd_cnt;
cmd.r.addr_cnt = ad_cnt;
cmd.r.cmd = op;
writel(cmd.i, (pci_bar + FLI_CMD_REG));
}
static void
bfa_flash_set_addr(void __iomem *pci_bar, u32 address)
{
union bfa_flash_addr_reg_u addr;
addr.r.addr = address & 0x00ffffff;
addr.r.dummy = 0;
writel(addr.i, (pci_bar + FLI_ADDR_REG));
}
static int
bfa_flash_cmd_act_check(void __iomem *pci_bar)
{
union bfa_flash_cmd_reg_u cmd;
cmd.i = readl(pci_bar + FLI_CMD_REG);
if (cmd.r.act)
return BFA_FLASH_ERR_CMD_ACT;
return 0;
}
/**
* @brief
* Flush FLI data fifo.
*
* @param[in] pci_bar - pci bar address
*
* Return 0 on success, negative error number on error.
*/
static u32
bfa_flash_fifo_flush(void __iomem *pci_bar)
{
u32 i;
u32 t;
union bfa_flash_dev_status_reg_u dev_status;
dev_status.i = readl(pci_bar + FLI_DEV_STATUS_REG);
if (!dev_status.r.fifo_cnt)
return 0;
/* fifo counter in terms of words */
for (i = 0; i < dev_status.r.fifo_cnt; i++)
t = readl(pci_bar + FLI_RDDATA_REG);
/*
* Check the device status. It may take some time.
*/
for (i = 0; i < BFA_FLASH_CHECK_MAX; i++) {
dev_status.i = readl(pci_bar + FLI_DEV_STATUS_REG);
if (!dev_status.r.fifo_cnt)
break;
}
if (dev_status.r.fifo_cnt)
return BFA_FLASH_ERR_FIFO_CNT;
return 0;
}
/**
* @brief
* Read flash status.
*
* @param[in] pci_bar - pci bar address
*
* Return 0 on success, negative error number on error.
*/
static u32
bfa_flash_status_read(void __iomem *pci_bar)
{
union bfa_flash_dev_status_reg_u dev_status;
u32 status;
u32 ret_status;
int i;
status = bfa_flash_fifo_flush(pci_bar);
if (status < 0)
return status;
bfa_flash_set_cmd(pci_bar, 0, 4, 0, BFA_FLASH_READ_STATUS);
for (i = 0; i < BFA_FLASH_CHECK_MAX; i++) {
status = bfa_flash_cmd_act_check(pci_bar);
if (!status)
break;
}
if (status)
return status;
dev_status.i = readl(pci_bar + FLI_DEV_STATUS_REG);
if (!dev_status.r.fifo_cnt)
return BFA_FLASH_BUSY;
ret_status = readl(pci_bar + FLI_RDDATA_REG);
ret_status >>= 24;
status = bfa_flash_fifo_flush(pci_bar);
if (status < 0)
return status;
return ret_status;
}
/**
* @brief
* Start flash read operation.
*
* @param[in] pci_bar - pci bar address
* @param[in] offset - flash address offset
* @param[in] len - read data length
* @param[in] buf - read data buffer
*
* Return 0 on success, negative error number on error.
*/
static u32
bfa_flash_read_start(void __iomem *pci_bar, u32 offset, u32 len,
char *buf)
{
u32 status;
/*
* len must be a multiple of 4 and must not exceed the fifo size
*/
if (len == 0 || len > BFA_FLASH_FIFO_SIZE || (len & 0x03) != 0)
return BFA_FLASH_ERR_LEN;
/*
* check status
*/
status = bfa_flash_status_read(pci_bar);
if (status == BFA_FLASH_BUSY)
status = bfa_flash_status_read(pci_bar);
if (status < 0)
return status;
/*
* check if write-in-progress bit is cleared
*/
if (status & BFA_FLASH_WIP_MASK)
return BFA_FLASH_ERR_WIP;
bfa_flash_set_addr(pci_bar, offset);
bfa_flash_set_cmd(pci_bar, 0, (u8)len, 4, BFA_FLASH_FAST_READ);
return 0;
}
/**
* @brief
* Check flash read operation.
*
* @param[in] pci_bar - pci bar address
*
* Return flash device status, 1 if busy, 0 if not.
*/
static u32
bfa_flash_read_check(void __iomem *pci_bar)
{
if (bfa_flash_cmd_act_check(pci_bar))
return 1;
return 0;
}
/**
* @brief
* End flash read operation.
*
* @param[in] pci_bar - pci bar address
* @param[in] len - read data length
* @param[in] buf - read data buffer
*
*/
static void
bfa_flash_read_end(void __iomem *pci_bar, u32 len, char *buf)
{
u32 i;
/*
* read data fifo up to 32 words
*/
for (i = 0; i < len; i += 4) {
u32 w = readl(pci_bar + FLI_RDDATA_REG);
*((u32 *) (buf + i)) = swab32(w);
}
bfa_flash_fifo_flush(pci_bar);
}
/**
* @brief
* Perform flash raw read.
*
* @param[in] pci_bar - pci bar address
* @param[in] offset - flash partition address offset
* @param[in] buf - read data buffer
* @param[in] len - read data length
*
* Return status.
*/
#define FLASH_BLOCKING_OP_MAX 500
#define FLASH_SEM_LOCK_REG 0x18820
static int
bfa_raw_sem_get(void __iomem *bar)
{
int locked;
locked = readl((bar + FLASH_SEM_LOCK_REG));
return !locked;
}
bfa_status_t
bfa_flash_sem_get(void __iomem *bar)
{
u32 n = FLASH_BLOCKING_OP_MAX;
while (!bfa_raw_sem_get(bar)) {
if (--n <= 0)
return BFA_STATUS_BADFLASH;
udelay(10000);
}
return BFA_STATUS_OK;
}
void
bfa_flash_sem_put(void __iomem *bar)
{
writel(0, (bar + FLASH_SEM_LOCK_REG));
}
bfa_status_t
bfa_flash_raw_read(void __iomem *pci_bar, u32 offset, char *buf,
u32 len)
{
u32 n, status;
u32 off, l, s, residue, fifo_sz;
residue = len;
off = 0;
fifo_sz = BFA_FLASH_FIFO_SIZE;
status = bfa_flash_sem_get(pci_bar);
if (status != BFA_STATUS_OK)
return status;
while (residue) {
s = offset + off;
n = s / fifo_sz;
l = (n + 1) * fifo_sz - s;
if (l > residue)
l = residue;
status = bfa_flash_read_start(pci_bar, offset + off, l,
&buf[off]);
if (status < 0) {
bfa_flash_sem_put(pci_bar);
return BFA_STATUS_FAILED;
}
n = BFA_FLASH_BLOCKING_OP_MAX;
while (bfa_flash_read_check(pci_bar)) {
if (--n <= 0) {
bfa_flash_sem_put(pci_bar);
return BFA_STATUS_FAILED;
}
}
bfa_flash_read_end(pci_bar, l, &buf[off]);
residue -= l;
off += l;
}
bfa_flash_sem_put(pci_bar);
return BFA_STATUS_OK;
}
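
A hedged standalone illustration of how the loop above splits a read into FIFO-aligned chunks: each sub-read ends on a 128-byte FIFO boundary, so an unaligned start is trimmed first. The offset and length are example numbers only.

#include <stdio.h>

int main(void)
{
        unsigned int fifo_sz = 128, offset = 120, len = 300; /* example values */
        unsigned int off = 0, residue = len;

        while (residue) {
                unsigned int s = offset + off;
                unsigned int l = (s / fifo_sz + 1) * fifo_sz - s; /* bytes up to the boundary */

                if (l > residue)
                        l = residue;
                printf("read %u bytes at flash offset %u\n", l, s); /* 8, 128, 128, 36 */
                residue -= l;
                off += l;
        }
        return 0;
}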

View file

@ -515,6 +515,8 @@ void bfa_flash_attach(struct bfa_flash_s *flash, struct bfa_ioc_s *ioc,
void *dev, struct bfa_trc_mod_s *trcmod, bfa_boolean_t mincfg);
void bfa_flash_memclaim(struct bfa_flash_s *flash,
u8 *dm_kva, u64 dm_pa, bfa_boolean_t mincfg);
bfa_status_t bfa_flash_raw_read(void __iomem *pci_bar_kva,
u32 offset, char *buf, u32 len);
/*
* DIAG module specific
@ -888,7 +890,7 @@ void bfa_ioc_enable(struct bfa_ioc_s *ioc);
void bfa_ioc_disable(struct bfa_ioc_s *ioc);
bfa_boolean_t bfa_ioc_intx_claim(struct bfa_ioc_s *ioc);
void bfa_ioc_boot(struct bfa_ioc_s *ioc, u32 boot_type,
bfa_status_t bfa_ioc_boot(struct bfa_ioc_s *ioc, u32 boot_type,
u32 boot_env);
void bfa_ioc_isr(struct bfa_ioc_s *ioc, struct bfi_mbmsg_s *msg);
void bfa_ioc_error_isr(struct bfa_ioc_s *ioc);
@ -919,6 +921,7 @@ bfa_status_t bfa_ioc_debug_fwtrc(struct bfa_ioc_s *ioc, void *trcdata,
int *trclen);
bfa_status_t bfa_ioc_debug_fwcore(struct bfa_ioc_s *ioc, void *buf,
u32 *offset, int *buflen);
bfa_status_t bfa_ioc_fwsig_invalidate(struct bfa_ioc_s *ioc);
bfa_boolean_t bfa_ioc_sem_get(void __iomem *sem_reg);
void bfa_ioc_fwver_get(struct bfa_ioc_s *ioc,
struct bfi_ioc_image_hdr_s *fwhdr);
@ -956,6 +959,8 @@ bfa_status_t bfa_ablk_optrom_en(struct bfa_ablk_s *ablk,
bfa_status_t bfa_ablk_optrom_dis(struct bfa_ablk_s *ablk,
bfa_ablk_cbfn_t cbfn, void *cbarg);
bfa_status_t bfa_ioc_flash_img_get_chnk(struct bfa_ioc_s *ioc, u32 off,
u32 *fwimg);
/*
* bfa mfg wwn API functions
*/

View file

@ -81,6 +81,29 @@ bfa_ioc_set_cb_hwif(struct bfa_ioc_s *ioc)
static bfa_boolean_t
bfa_ioc_cb_firmware_lock(struct bfa_ioc_s *ioc)
{
enum bfi_ioc_state alt_fwstate, cur_fwstate;
struct bfi_ioc_image_hdr_s fwhdr;
cur_fwstate = bfa_ioc_cb_get_cur_ioc_fwstate(ioc);
bfa_trc(ioc, cur_fwstate);
alt_fwstate = bfa_ioc_cb_get_alt_ioc_fwstate(ioc);
bfa_trc(ioc, alt_fwstate);
/*
* Uninit implies this is the only driver as of now.
*/
if (cur_fwstate == BFI_IOC_UNINIT)
return BFA_TRUE;
/*
* Check if another driver with a different firmware is active
*/
bfa_ioc_fwver_get(ioc, &fwhdr);
if (!bfa_ioc_fwver_cmp(ioc, &fwhdr) &&
alt_fwstate != BFI_IOC_DISABLED) {
bfa_trc(ioc, alt_fwstate);
return BFA_FALSE;
}
return BFA_TRUE;
}

View file

@ -6758,7 +6758,7 @@ bfa_dport_scn(struct bfa_dport_s *dport, struct bfi_diag_dport_scn_s *msg)
dport->rp_pwwn = msg->info.teststart.pwwn;
dport->rp_nwwn = msg->info.teststart.nwwn;
dport->lpcnt = cpu_to_be32(msg->info.teststart.numfrm);
bfa_dport_result_start(dport, BFA_DPORT_OPMODE_AUTO);
bfa_dport_result_start(dport, msg->info.teststart.mode);
break;
case BFI_DPORT_SCN_SUBTESTSTART:

View file

@ -63,9 +63,9 @@ int max_rport_logins = BFA_FCS_MAX_RPORT_LOGINS;
u32 bfi_image_cb_size, bfi_image_ct_size, bfi_image_ct2_size;
u32 *bfi_image_cb, *bfi_image_ct, *bfi_image_ct2;
#define BFAD_FW_FILE_CB "cbfw-3.2.1.1.bin"
#define BFAD_FW_FILE_CT "ctfw-3.2.1.1.bin"
#define BFAD_FW_FILE_CT2 "ct2fw-3.2.1.1.bin"
#define BFAD_FW_FILE_CB "cbfw-3.2.3.0.bin"
#define BFAD_FW_FILE_CT "ctfw-3.2.3.0.bin"
#define BFAD_FW_FILE_CT2 "ct2fw-3.2.3.0.bin"
static u32 *bfad_load_fwimg(struct pci_dev *pdev);
static void bfad_free_fwimg(void);
@ -204,6 +204,7 @@ static void
bfad_sm_created(struct bfad_s *bfad, enum bfad_sm_event event)
{
unsigned long flags;
bfa_status_t ret;
bfa_trc(bfad, event);
@ -217,7 +218,7 @@ bfad_sm_created(struct bfad_s *bfad, enum bfad_sm_event event)
if (bfad_setup_intr(bfad)) {
printk(KERN_WARNING "bfad%d: bfad_setup_intr failed\n",
bfad->inst_no);
bfa_sm_send_event(bfad, BFAD_E_INTR_INIT_FAILED);
bfa_sm_send_event(bfad, BFAD_E_INIT_FAILED);
break;
}
@ -242,8 +243,26 @@ bfad_sm_created(struct bfad_s *bfad, enum bfad_sm_event event)
printk(KERN_WARNING
"bfa %s: bfa init failed\n",
bfad->pci_name);
spin_lock_irqsave(&bfad->bfad_lock, flags);
bfa_fcs_init(&bfad->bfa_fcs);
spin_unlock_irqrestore(&bfad->bfad_lock, flags);
ret = bfad_cfg_pport(bfad, BFA_LPORT_ROLE_FCP_IM);
if (ret != BFA_STATUS_OK) {
init_completion(&bfad->comp);
spin_lock_irqsave(&bfad->bfad_lock, flags);
bfad->pport.flags |= BFAD_PORT_DELETE;
bfa_fcs_exit(&bfad->bfa_fcs);
spin_unlock_irqrestore(&bfad->bfad_lock, flags);
wait_for_completion(&bfad->comp);
bfa_sm_send_event(bfad, BFAD_E_INIT_FAILED);
break;
}
bfad->bfad_flags |= BFAD_HAL_INIT_FAIL;
bfa_sm_send_event(bfad, BFAD_E_INIT_FAILED);
bfa_sm_send_event(bfad, BFAD_E_HAL_INIT_FAILED);
}
break;
@ -273,12 +292,14 @@ bfad_sm_initializing(struct bfad_s *bfad, enum bfad_sm_event event)
spin_unlock_irqrestore(&bfad->bfad_lock, flags);
retval = bfad_start_ops(bfad);
if (retval != BFA_STATUS_OK)
if (retval != BFA_STATUS_OK) {
bfa_sm_set_state(bfad, bfad_sm_failed);
break;
}
bfa_sm_set_state(bfad, bfad_sm_operational);
break;
case BFAD_E_INTR_INIT_FAILED:
case BFAD_E_INIT_FAILED:
bfa_sm_set_state(bfad, bfad_sm_uninit);
kthread_stop(bfad->bfad_tsk);
spin_lock_irqsave(&bfad->bfad_lock, flags);
@ -286,7 +307,7 @@ bfad_sm_initializing(struct bfad_s *bfad, enum bfad_sm_event event)
spin_unlock_irqrestore(&bfad->bfad_lock, flags);
break;
case BFAD_E_INIT_FAILED:
case BFAD_E_HAL_INIT_FAILED:
bfa_sm_set_state(bfad, bfad_sm_failed);
break;
default:
@ -310,13 +331,8 @@ bfad_sm_failed(struct bfad_s *bfad, enum bfad_sm_event event)
break;
case BFAD_E_STOP:
if (bfad->bfad_flags & BFAD_CFG_PPORT_DONE)
bfad_uncfg_pport(bfad);
if (bfad->bfad_flags & BFAD_FC4_PROBE_DONE) {
bfad_im_probe_undo(bfad);
bfad->bfad_flags &= ~BFAD_FC4_PROBE_DONE;
}
bfad_stop(bfad);
bfa_sm_set_state(bfad, bfad_sm_fcs_exit);
bfa_sm_send_event(bfad, BFAD_E_FCS_EXIT_COMP);
break;
case BFAD_E_EXIT_COMP:
@ -824,7 +840,7 @@ bfad_drv_init(struct bfad_s *bfad)
printk(KERN_WARNING
"Not enough memory to attach all Brocade HBA ports, %s",
"System may need more memory.\n");
goto out_hal_mem_alloc_failure;
return BFA_STATUS_FAILED;
}
bfad->bfa.trcmod = bfad->trcmod;
@ -841,31 +857,11 @@ bfad_drv_init(struct bfad_s *bfad)
bfad->bfa_fcs.trcmod = bfad->trcmod;
bfa_fcs_attach(&bfad->bfa_fcs, &bfad->bfa, bfad, BFA_FALSE);
bfad->bfa_fcs.fdmi_enabled = fdmi_enable;
bfa_fcs_init(&bfad->bfa_fcs);
spin_unlock_irqrestore(&bfad->bfad_lock, flags);
bfad->bfad_flags |= BFAD_DRV_INIT_DONE;
/* configure base port */
rc = bfad_cfg_pport(bfad, BFA_LPORT_ROLE_FCP_IM);
if (rc != BFA_STATUS_OK)
goto out_cfg_pport_fail;
return BFA_STATUS_OK;
out_cfg_pport_fail:
/* fcs exit - on cfg pport failure */
spin_lock_irqsave(&bfad->bfad_lock, flags);
init_completion(&bfad->comp);
bfad->pport.flags |= BFAD_PORT_DELETE;
bfa_fcs_exit(&bfad->bfa_fcs);
spin_unlock_irqrestore(&bfad->bfad_lock, flags);
wait_for_completion(&bfad->comp);
/* bfa detach - free hal memory */
bfa_detach(&bfad->bfa);
bfad_hal_mem_release(bfad);
out_hal_mem_alloc_failure:
return BFA_STATUS_FAILED;
}
void
@ -1009,13 +1005,19 @@ bfad_start_ops(struct bfad_s *bfad) {
/* FCS driver info init */
spin_lock_irqsave(&bfad->bfad_lock, flags);
bfa_fcs_driver_info_init(&bfad->bfa_fcs, &driver_info);
if (bfad->bfad_flags & BFAD_CFG_PPORT_DONE)
bfa_fcs_update_cfg(&bfad->bfa_fcs);
else
bfa_fcs_init(&bfad->bfa_fcs);
spin_unlock_irqrestore(&bfad->bfad_lock, flags);
/*
* FCS update cfg - reset the pwwn/nwwn of fabric base logical port
* with values learned during bfa_init firmware GETATTR REQ.
*/
bfa_fcs_update_cfg(&bfad->bfa_fcs);
if (!(bfad->bfad_flags & BFAD_CFG_PPORT_DONE)) {
retval = bfad_cfg_pport(bfad, BFA_LPORT_ROLE_FCP_IM);
if (retval != BFA_STATUS_OK)
return BFA_STATUS_FAILED;
}
/* Setup fc host fixed attribute if the lk supports */
bfad_fc_host_init(bfad->pport.im_port);
@ -1026,10 +1028,6 @@ bfad_start_ops(struct bfad_s *bfad) {
printk(KERN_WARNING "bfad_im_probe failed\n");
if (bfa_sm_cmp_state(bfad, bfad_sm_initializing))
bfa_sm_set_state(bfad, bfad_sm_failed);
bfad_im_probe_undo(bfad);
bfad->bfad_flags &= ~BFAD_FC4_PROBE_DONE;
bfad_uncfg_pport(bfad);
bfad_stop(bfad);
return BFA_STATUS_FAILED;
} else
bfad->bfad_flags |= BFAD_FC4_PROBE_DONE;
@ -1399,7 +1397,6 @@ bfad_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
return 0;
out_bfad_sm_failure:
bfa_detach(&bfad->bfa);
bfad_hal_mem_release(bfad);
out_drv_init_failure:
/* Remove the debugfs node for this bfad */
@ -1534,7 +1531,7 @@ restart_bfa(struct bfad_s *bfad)
if (bfad_setup_intr(bfad)) {
dev_printk(KERN_WARNING, &pdev->dev,
"%s: bfad_setup_intr failed\n", bfad->pci_name);
bfa_sm_send_event(bfad, BFAD_E_INTR_INIT_FAILED);
bfa_sm_send_event(bfad, BFAD_E_INIT_FAILED);
return -1;
}

View file

@ -228,6 +228,18 @@ bfad_iocmd_iocfc_get_attr(struct bfad_s *bfad, void *cmd)
return 0;
}
int
bfad_iocmd_ioc_fw_sig_inv(struct bfad_s *bfad, void *cmd)
{
struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd;
unsigned long flags;
spin_lock_irqsave(&bfad->bfad_lock, flags);
iocmd->status = bfa_ioc_fwsig_invalidate(&bfad->bfa.ioc);
spin_unlock_irqrestore(&bfad->bfad_lock, flags);
return 0;
}
int
bfad_iocmd_iocfc_set_intr(struct bfad_s *bfad, void *cmd)
{
@ -2893,6 +2905,9 @@ bfad_iocmd_handler(struct bfad_s *bfad, unsigned int cmd, void *iocmd,
case IOCMD_IOC_PCIFN_CFG:
rc = bfad_iocmd_ioc_get_pcifn_cfg(bfad, iocmd);
break;
case IOCMD_IOC_FW_SIG_INV:
rc = bfad_iocmd_ioc_fw_sig_inv(bfad, iocmd);
break;
case IOCMD_PCIFN_CREATE:
rc = bfad_iocmd_pcifn_create(bfad, iocmd);
break;

View file

@ -34,6 +34,7 @@ enum {
IOCMD_IOC_RESET_FWSTATS,
IOCMD_IOC_SET_ADAPTER_NAME,
IOCMD_IOC_SET_PORT_NAME,
IOCMD_IOC_FW_SIG_INV,
IOCMD_IOCFC_GET_ATTR,
IOCMD_IOCFC_SET_INTR,
IOCMD_PORT_ENABLE,

View file

@ -57,7 +57,7 @@
#ifdef BFA_DRIVER_VERSION
#define BFAD_DRIVER_VERSION BFA_DRIVER_VERSION
#else
#define BFAD_DRIVER_VERSION "3.2.21.1"
#define BFAD_DRIVER_VERSION "3.2.23.0"
#endif
#define BFAD_PROTO_NAME FCPI_NAME
@ -240,8 +240,8 @@ enum bfad_sm_event {
BFAD_E_KTHREAD_CREATE_FAILED = 2,
BFAD_E_INIT = 3,
BFAD_E_INIT_SUCCESS = 4,
BFAD_E_INIT_FAILED = 5,
BFAD_E_INTR_INIT_FAILED = 6,
BFAD_E_HAL_INIT_FAILED = 5,
BFAD_E_INIT_FAILED = 6,
BFAD_E_FCS_EXIT_COMP = 7,
BFAD_E_EXIT_COMP = 8,
BFAD_E_STOP = 9

View file

@ -46,6 +46,7 @@
*/
#define BFI_FLASH_CHUNK_SZ 256 /* Flash chunk size */
#define BFI_FLASH_CHUNK_SZ_WORDS (BFI_FLASH_CHUNK_SZ/sizeof(u32))
#define BFI_FLASH_IMAGE_SZ 0x100000
/*
* Msg header common to all msgs
@ -324,7 +325,29 @@ struct bfi_ioc_getattr_reply_s {
#define BFI_IOC_TRC_ENTS 256
#define BFI_IOC_FW_SIGNATURE (0xbfadbfad)
#define BFA_IOC_FW_INV_SIGN (0xdeaddead)
#define BFI_IOC_MD5SUM_SZ 4
struct bfi_ioc_fwver_s {
#ifdef __BIG_ENDIAN
uint8_t patch;
uint8_t maint;
uint8_t minor;
uint8_t major;
uint8_t rsvd[2];
uint8_t build;
uint8_t phase;
#else
uint8_t major;
uint8_t minor;
uint8_t maint;
uint8_t patch;
uint8_t phase;
uint8_t build;
uint8_t rsvd[2];
#endif
};
struct bfi_ioc_image_hdr_s {
u32 signature; /* constant signature */
u8 asic_gen; /* asic generation */
@ -333,10 +356,18 @@ struct bfi_ioc_image_hdr_s {
u8 port1_mode; /* device mode for port 1 */
u32 exec; /* exec vector */
u32 bootenv; /* firmware boot env */
u32 rsvd_b[4];
u32 rsvd_b[2];
struct bfi_ioc_fwver_s fwver;
u32 md5sum[BFI_IOC_MD5SUM_SZ];
};
enum bfi_ioc_img_ver_cmp_e {
BFI_IOC_IMG_VER_INCOMP,
BFI_IOC_IMG_VER_OLD,
BFI_IOC_IMG_VER_SAME,
BFI_IOC_IMG_VER_BETTER
};
#define BFI_FWBOOT_DEVMODE_OFF 4
#define BFI_FWBOOT_TYPE_OFF 8
#define BFI_FWBOOT_ENV_OFF 12
@ -346,6 +377,12 @@ struct bfi_ioc_image_hdr_s {
((u32)(__p0_mode)) << 8 | \
((u32)(__p1_mode)))
enum bfi_fwboot_type {
BFI_FWBOOT_TYPE_NORMAL = 0,
BFI_FWBOOT_TYPE_FLASH = 1,
BFI_FWBOOT_TYPE_MEMTEST = 2,
};
#define BFI_FWBOOT_TYPE_NORMAL 0
#define BFI_FWBOOT_TYPE_MEMTEST 2
#define BFI_FWBOOT_ENV_OS 0
@ -1107,7 +1144,8 @@ struct bfi_diag_dport_scn_teststart_s {
wwn_t pwwn; /* switch port wwn. 8 bytes */
wwn_t nwwn; /* switch node wwn. 8 bytes */
u8 type; /* bfa_diag_dport_test_type_e */
u8 rsvd[3];
u8 mode; /* bfa_diag_dport_test_opmode */
u8 rsvd[2];
u32 numfrm; /* from switch uint in 1M */
};

View file

@ -169,6 +169,7 @@ void scsi_remove_host(struct Scsi_Host *shost)
spin_unlock_irqrestore(shost->host_lock, flags);
scsi_autopm_get_host(shost);
flush_workqueue(shost->tmf_work_q);
scsi_forget_host(shost);
mutex_unlock(&shost->scan_mutex);
scsi_proc_host_rm(shost);
@ -294,6 +295,8 @@ static void scsi_host_dev_release(struct device *dev)
scsi_proc_hostdir_rm(shost->hostt);
if (shost->tmf_work_q)
destroy_workqueue(shost->tmf_work_q);
if (shost->ehandler)
kthread_stop(shost->ehandler);
if (shost->work_q)
@ -316,11 +319,11 @@ static void scsi_host_dev_release(struct device *dev)
kfree(shost);
}
static unsigned int shost_eh_deadline;
static int shost_eh_deadline = -1;
module_param_named(eh_deadline, shost_eh_deadline, uint, S_IRUGO|S_IWUSR);
module_param_named(eh_deadline, shost_eh_deadline, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(eh_deadline,
"SCSI EH timeout in seconds (should be between 1 and 2^32-1)");
"SCSI EH timeout in seconds (should be between 0 and 2^31-1)");
static struct device_type scsi_host_type = {
.name = "scsi_host",
@ -360,7 +363,6 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize)
INIT_LIST_HEAD(&shost->eh_cmd_q);
INIT_LIST_HEAD(&shost->starved_list);
init_waitqueue_head(&shost->host_wait);
mutex_init(&shost->scan_mutex);
/*
@ -394,9 +396,18 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize)
shost->unchecked_isa_dma = sht->unchecked_isa_dma;
shost->use_clustering = sht->use_clustering;
shost->ordered_tag = sht->ordered_tag;
shost->eh_deadline = shost_eh_deadline * HZ;
shost->no_write_same = sht->no_write_same;
if (shost_eh_deadline == -1)
shost->eh_deadline = -1;
else if ((ulong) shost_eh_deadline * HZ > INT_MAX) {
shost_printk(KERN_WARNING, shost,
"eh_deadline %u too large, setting to %u\n",
shost_eh_deadline, INT_MAX / HZ);
shost->eh_deadline = INT_MAX;
} else
shost->eh_deadline = shost_eh_deadline * HZ;
if (sht->supported_mode == MODE_UNKNOWN)
/* means we didn't set it ... default to INITIATOR */
shost->active_mode = MODE_INITIATOR;
@ -444,9 +455,19 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize)
goto fail_kfree;
}
shost->tmf_work_q = alloc_workqueue("scsi_tmf_%d",
WQ_UNBOUND | WQ_MEM_RECLAIM,
1, shost->host_no);
if (!shost->tmf_work_q) {
printk(KERN_WARNING "scsi%d: failed to create tmf workq\n",
shost->host_no);
goto fail_kthread;
}
scsi_proc_hostdir_add(shost->hostt);
return shost;
fail_kthread:
kthread_stop(shost->ehandler);
fail_kfree:
kfree(shost);
return NULL;

Просмотреть файл

@ -29,7 +29,6 @@
#include <linux/delay.h>
#include <linux/fs.h>
#include <linux/timer.h>
#include <linux/seq_file.h>
#include <linux/init.h>
#include <linux/spinlock.h>
#include <linux/compat.h>
@ -96,7 +95,6 @@ static const struct pci_device_id hpsa_pci_device_id[] = {
{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSF, 0x103C, 0x3351},
{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSF, 0x103C, 0x3352},
{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSF, 0x103C, 0x3353},
{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSF, 0x103C, 0x334D},
{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSF, 0x103C, 0x3354},
{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSF, 0x103C, 0x3355},
{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSF, 0x103C, 0x3356},
@ -143,7 +141,6 @@ static struct board_type products[] = {
{0x3351103C, "Smart Array P420", &SA5_access},
{0x3352103C, "Smart Array P421", &SA5_access},
{0x3353103C, "Smart Array P822", &SA5_access},
{0x334D103C, "Smart Array P822se", &SA5_access},
{0x3354103C, "Smart Array P420i", &SA5_access},
{0x3355103C, "Smart Array P220i", &SA5_access},
{0x3356103C, "Smart Array P721m", &SA5_access},
@ -171,10 +168,6 @@ static struct board_type products[] = {
static int number_of_controllers;
static struct list_head hpsa_ctlr_list = LIST_HEAD_INIT(hpsa_ctlr_list);
static spinlock_t lockup_detector_lock;
static struct task_struct *hpsa_lockup_detector;
static irqreturn_t do_hpsa_intr_intx(int irq, void *dev_id);
static irqreturn_t do_hpsa_intr_msi(int irq, void *dev_id);
static int hpsa_ioctl(struct scsi_device *dev, int cmd, void *arg);
@ -1248,10 +1241,8 @@ static void complete_scsi_command(struct CommandList *cp)
}
if (ei->ScsiStatus == SAM_STAT_CHECK_CONDITION) {
if (check_for_unit_attention(h, cp)) {
cmd->result = DID_SOFT_ERROR << 16;
if (check_for_unit_attention(h, cp))
break;
}
if (sense_key == ILLEGAL_REQUEST) {
/*
* SCSI REPORT_LUNS is commonly unsupported on
@ -1783,6 +1774,7 @@ static unsigned char *ext_target_model[] = {
"MSA2312",
"MSA2324",
"P2000 G3 SAS",
"MSA 2040 SAS",
NULL,
};
@ -3171,7 +3163,7 @@ static int hpsa_big_passthru_ioctl(struct ctlr_info *h, void __user *argp)
hpsa_pci_unmap(h->pdev, c, i,
PCI_DMA_BIDIRECTIONAL);
status = -ENOMEM;
goto cleanup1;
goto cleanup0;
}
c->SG[i].Addr.lower = temp64.val32.lower;
c->SG[i].Addr.upper = temp64.val32.upper;
@ -3187,24 +3179,23 @@ static int hpsa_big_passthru_ioctl(struct ctlr_info *h, void __user *argp)
/* Copy the error information out */
memcpy(&ioc->error_info, c->err_info, sizeof(ioc->error_info));
if (copy_to_user(argp, ioc, sizeof(*ioc))) {
cmd_special_free(h, c);
status = -EFAULT;
goto cleanup1;
goto cleanup0;
}
if (ioc->Request.Type.Direction == XFER_READ && ioc->buf_size > 0) {
/* Copy the data out of the buffer we created */
BYTE __user *ptr = ioc->buf;
for (i = 0; i < sg_used; i++) {
if (copy_to_user(ptr, buff[i], buff_size[i])) {
cmd_special_free(h, c);
status = -EFAULT;
goto cleanup1;
goto cleanup0;
}
ptr += buff_size[i];
}
}
cmd_special_free(h, c);
status = 0;
cleanup0:
cmd_special_free(h, c);
cleanup1:
if (buff) {
for (i = 0; i < sg_used; i++)
@ -3223,6 +3214,36 @@ static void check_ioctl_unit_attention(struct ctlr_info *h,
c->err_info->ScsiStatus != SAM_STAT_CHECK_CONDITION)
(void) check_for_unit_attention(h, c);
}
static int increment_passthru_count(struct ctlr_info *h)
{
unsigned long flags;
spin_lock_irqsave(&h->passthru_count_lock, flags);
if (h->passthru_count >= HPSA_MAX_CONCURRENT_PASSTHRUS) {
spin_unlock_irqrestore(&h->passthru_count_lock, flags);
return -1;
}
h->passthru_count++;
spin_unlock_irqrestore(&h->passthru_count_lock, flags);
return 0;
}
static void decrement_passthru_count(struct ctlr_info *h)
{
unsigned long flags;
spin_lock_irqsave(&h->passthru_count_lock, flags);
if (h->passthru_count <= 0) {
spin_unlock_irqrestore(&h->passthru_count_lock, flags);
/* not expecting to get here. */
dev_warn(&h->pdev->dev, "Bug detected, passthru_count seems to be incorrect.\n");
return;
}
h->passthru_count--;
spin_unlock_irqrestore(&h->passthru_count_lock, flags);
}
/*
* ioctl
*/
@ -3230,6 +3251,7 @@ static int hpsa_ioctl(struct scsi_device *dev, int cmd, void *arg)
{
struct ctlr_info *h;
void __user *argp = (void __user *)arg;
int rc;
h = sdev_to_hba(dev);
@ -3244,9 +3266,17 @@ static int hpsa_ioctl(struct scsi_device *dev, int cmd, void *arg)
case CCISS_GETDRIVVER:
return hpsa_getdrivver_ioctl(h, argp);
case CCISS_PASSTHRU:
return hpsa_passthru_ioctl(h, argp);
if (increment_passthru_count(h))
return -EAGAIN;
rc = hpsa_passthru_ioctl(h, argp);
decrement_passthru_count(h);
return rc;
case CCISS_BIG_PASSTHRU:
return hpsa_big_passthru_ioctl(h, argp);
if (increment_passthru_count(h))
return -EAGAIN;
rc = hpsa_big_passthru_ioctl(h, argp);
decrement_passthru_count(h);
return rc;
default:
return -ENOTTY;
}
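As an aside, the cap enforced by increment_passthru_count()/decrement_passthru_count() could equally be expressed with an atomic counter; the sketch below only illustrates the same invariant with made-up names (HPSA_MAX_CONCURRENT_PASSTHRUS is the constant added to hpsa.h further down) and is not what hpsa.c does:

#include <linux/atomic.h>
#include <linux/errno.h>

static atomic_t passthru_users = ATOMIC_INIT(0);

static int try_get_passthru_slot(void)
{
	/* reserve a slot; back out if the cap has already been reached */
	if (atomic_inc_return(&passthru_users) > HPSA_MAX_CONCURRENT_PASSTHRUS) {
		atomic_dec(&passthru_users);
		return -EAGAIN;
	}
	return 0;
}

static void put_passthru_slot(void)
{
	atomic_dec(&passthru_users);
}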
@ -3445,9 +3475,11 @@ static void start_io(struct ctlr_info *h)
c = list_entry(h->reqQ.next, struct CommandList, list);
/* can't do anything if fifo is full */
if ((h->access.fifo_full(h))) {
h->fifo_recently_full = 1;
dev_warn(&h->pdev->dev, "fifo full\n");
break;
}
h->fifo_recently_full = 0;
/* Get the first entry from the Request Q */
removeQ(c);
@ -3501,15 +3533,41 @@ static inline int bad_tag(struct ctlr_info *h, u32 tag_index,
static inline void finish_cmd(struct CommandList *c)
{
unsigned long flags;
int io_may_be_stalled = 0;
struct ctlr_info *h = c->h;
spin_lock_irqsave(&c->h->lock, flags);
spin_lock_irqsave(&h->lock, flags);
removeQ(c);
spin_unlock_irqrestore(&c->h->lock, flags);
/*
* Check for possibly stalled i/o.
*
* If a fifo_full condition is encountered, requests will back up
* in h->reqQ. This queue is only emptied out by start_io which is
* only called when a new i/o request comes in. If no i/o's are
* forthcoming, the i/o's in h->reqQ can get stuck. So we call
* start_io from here if we detect such a danger.
*
* Normally, we shouldn't hit this case, but pounding on the
* CCISS_PASSTHRU ioctl can provoke it. Only call start_io if
* commands_outstanding is low. We want to avoid calling
* start_io from here as much as possible, and especially don't
* want to get into a cycle where we call start_io every time
* through here.
*/
if (unlikely(h->fifo_recently_full) &&
h->commands_outstanding < 5)
io_may_be_stalled = 1;
spin_unlock_irqrestore(&h->lock, flags);
dial_up_lockup_detection_on_fw_flash_complete(c->h, c);
if (likely(c->cmd_type == CMD_SCSI))
complete_scsi_command(c);
else if (c->cmd_type == CMD_IOCTL_PEND)
complete(c->waiting);
if (unlikely(io_may_be_stalled))
start_io(h);
}
static inline u32 hpsa_tag_contains_index(u32 tag)
@ -3785,6 +3843,13 @@ static int hpsa_controller_hard_reset(struct pci_dev *pdev,
*/
dev_info(&pdev->dev, "using doorbell to reset controller\n");
writel(use_doorbell, vaddr + SA5_DOORBELL);
/* PMC hardware guys tell us we need a 5 second delay after
* doorbell reset and before any attempt to talk to the board
* at all to ensure that this actually works and doesn't fall
* over in some weird corner cases.
*/
msleep(5000);
} else { /* Try to do it the PCI power state way */
/* Quoting from the Open CISS Specification: "The Power
@ -3981,16 +4046,6 @@ static int hpsa_kdump_hard_reset_controller(struct pci_dev *pdev)
need a little pause here */
msleep(HPSA_POST_RESET_PAUSE_MSECS);
/* Wait for board to become not ready, then ready. */
dev_info(&pdev->dev, "Waiting for board to reset.\n");
rc = hpsa_wait_for_board_state(pdev, vaddr, BOARD_NOT_READY);
if (rc) {
dev_warn(&pdev->dev,
"failed waiting for board to reset."
" Will try soft reset.\n");
rc = -ENOTSUPP; /* Not expected, but try soft reset later */
goto unmap_cfgtable;
}
rc = hpsa_wait_for_board_state(pdev, vaddr, BOARD_READY);
if (rc) {
dev_warn(&pdev->dev,
@ -4308,16 +4363,17 @@ static inline bool hpsa_CISS_signature_present(struct ctlr_info *h)
return true;
}
/* Need to enable prefetch in the SCSI core for 6400 in x86 */
static inline void hpsa_enable_scsi_prefetch(struct ctlr_info *h)
static inline void hpsa_set_driver_support_bits(struct ctlr_info *h)
{
#ifdef CONFIG_X86
u32 prefetch;
u32 driver_support;
prefetch = readl(&(h->cfgtable->SCSI_Prefetch));
prefetch |= 0x100;
writel(prefetch, &(h->cfgtable->SCSI_Prefetch));
#ifdef CONFIG_X86
/* Need to enable prefetch in the SCSI core for 6400 in x86 */
driver_support = readl(&(h->cfgtable->driver_support));
driver_support |= ENABLE_SCSI_PREFETCH;
#endif
driver_support |= ENABLE_UNIT_ATTN;
writel(driver_support, &(h->cfgtable->driver_support));
}
/* Disable DMA prefetch for the P600. Otherwise an ASIC bug may result
@ -4427,7 +4483,7 @@ static int hpsa_pci_init(struct ctlr_info *h)
err = -ENODEV;
goto err_out_free_res;
}
hpsa_enable_scsi_prefetch(h);
hpsa_set_driver_support_bits(h);
hpsa_p600_dma_prefetch_quirk(h);
err = hpsa_enter_simple_mode(h);
if (err)
@ -4638,16 +4694,6 @@ static void hpsa_undo_allocations_after_kdump_soft_reset(struct ctlr_info *h)
kfree(h);
}
static void remove_ctlr_from_lockup_detector_list(struct ctlr_info *h)
{
assert_spin_locked(&lockup_detector_lock);
if (!hpsa_lockup_detector)
return;
if (h->lockup_detected)
return; /* already stopped the lockup detector */
list_del(&h->lockup_list);
}
/* Called when controller lockup detected. */
static void fail_all_cmds_on_list(struct ctlr_info *h, struct list_head *list)
{
@ -4666,8 +4712,6 @@ static void controller_lockup_detected(struct ctlr_info *h)
{
unsigned long flags;
assert_spin_locked(&lockup_detector_lock);
remove_ctlr_from_lockup_detector_list(h);
h->access.set_intr_mask(h, HPSA_INTR_OFF);
spin_lock_irqsave(&h->lock, flags);
h->lockup_detected = readl(h->vaddr + SA5_SCRATCHPAD_OFFSET);
@ -4687,7 +4731,6 @@ static void detect_controller_lockup(struct ctlr_info *h)
u32 heartbeat;
unsigned long flags;
assert_spin_locked(&lockup_detector_lock);
now = get_jiffies_64();
/* If we've received an interrupt recently, we're ok. */
if (time_after64(h->last_intr_timestamp +
@ -4717,68 +4760,22 @@ static void detect_controller_lockup(struct ctlr_info *h)
h->last_heartbeat_timestamp = now;
}
static int detect_controller_lockup_thread(void *notused)
{
struct ctlr_info *h;
unsigned long flags;
while (1) {
struct list_head *this, *tmp;
schedule_timeout_interruptible(HEARTBEAT_SAMPLE_INTERVAL);
if (kthread_should_stop())
break;
spin_lock_irqsave(&lockup_detector_lock, flags);
list_for_each_safe(this, tmp, &hpsa_ctlr_list) {
h = list_entry(this, struct ctlr_info, lockup_list);
detect_controller_lockup(h);
}
spin_unlock_irqrestore(&lockup_detector_lock, flags);
}
return 0;
}
static void add_ctlr_to_lockup_detector_list(struct ctlr_info *h)
static void hpsa_monitor_ctlr_worker(struct work_struct *work)
{
unsigned long flags;
h->heartbeat_sample_interval = HEARTBEAT_SAMPLE_INTERVAL;
spin_lock_irqsave(&lockup_detector_lock, flags);
list_add_tail(&h->lockup_list, &hpsa_ctlr_list);
spin_unlock_irqrestore(&lockup_detector_lock, flags);
}
static void start_controller_lockup_detector(struct ctlr_info *h)
{
/* Start the lockup detector thread if not already started */
if (!hpsa_lockup_detector) {
spin_lock_init(&lockup_detector_lock);
hpsa_lockup_detector =
kthread_run(detect_controller_lockup_thread,
NULL, HPSA);
}
if (!hpsa_lockup_detector) {
dev_warn(&h->pdev->dev,
"Could not start lockup detector thread\n");
struct ctlr_info *h = container_of(to_delayed_work(work),
struct ctlr_info, monitor_ctlr_work);
detect_controller_lockup(h);
if (h->lockup_detected)
return;
spin_lock_irqsave(&h->lock, flags);
if (h->remove_in_progress) {
spin_unlock_irqrestore(&h->lock, flags);
return;
}
add_ctlr_to_lockup_detector_list(h);
}
static void stop_controller_lockup_detector(struct ctlr_info *h)
{
unsigned long flags;
spin_lock_irqsave(&lockup_detector_lock, flags);
remove_ctlr_from_lockup_detector_list(h);
/* If the list of ctlr's to monitor is empty, stop the thread */
if (list_empty(&hpsa_ctlr_list)) {
spin_unlock_irqrestore(&lockup_detector_lock, flags);
kthread_stop(hpsa_lockup_detector);
spin_lock_irqsave(&lockup_detector_lock, flags);
hpsa_lockup_detector = NULL;
}
spin_unlock_irqrestore(&lockup_detector_lock, flags);
schedule_delayed_work(&h->monitor_ctlr_work,
h->heartbeat_sample_interval);
spin_unlock_irqrestore(&h->lock, flags);
}
static int hpsa_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
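The hunks above and below replace the driver-global lockup-detector kthread with a per-controller, self-rescheduling delayed work item. A minimal sketch of that pattern, using illustrative names and an illustrative interval rather than the hpsa ones:

#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

struct my_ctlr {
	struct delayed_work monitor_work;
	bool remove_in_progress;
};

static void my_monitor_worker(struct work_struct *work)
{
	struct my_ctlr *c = container_of(to_delayed_work(work),
					 struct my_ctlr, monitor_work);

	/* ... poll the hardware for signs of lockup here ... */

	if (!c->remove_in_progress)
		/* re-arm: the work item keeps itself scheduled until teardown */
		schedule_delayed_work(&c->monitor_work, 30 * HZ);
}

/* probe:  INIT_DELAYED_WORK(&c->monitor_work, my_monitor_worker);
 *         schedule_delayed_work(&c->monitor_work, 30 * HZ);
 * remove: set c->remove_in_progress under the appropriate lock, then
 *         cancel_delayed_work(&c->monitor_work), as hpsa_remove_one does below.
 */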
@ -4822,6 +4819,7 @@ reinit_after_soft_reset:
INIT_LIST_HEAD(&h->reqQ);
spin_lock_init(&h->lock);
spin_lock_init(&h->scan_lock);
spin_lock_init(&h->passthru_count_lock);
rc = hpsa_pci_init(h);
if (rc != 0)
goto clean1;
@ -4925,7 +4923,12 @@ reinit_after_soft_reset:
hpsa_hba_inquiry(h);
hpsa_register_scsi(h); /* hook ourselves into SCSI subsystem */
start_controller_lockup_detector(h);
/* Monitor the controller for firmware lockups */
h->heartbeat_sample_interval = HEARTBEAT_SAMPLE_INTERVAL;
INIT_DELAYED_WORK(&h->monitor_ctlr_work, hpsa_monitor_ctlr_worker);
schedule_delayed_work(&h->monitor_ctlr_work,
h->heartbeat_sample_interval);
return 0;
clean4:
@ -4942,6 +4945,15 @@ static void hpsa_flush_cache(struct ctlr_info *h)
{
char *flush_buf;
struct CommandList *c;
unsigned long flags;
/* Don't bother trying to flush the cache if locked up */
spin_lock_irqsave(&h->lock, flags);
if (unlikely(h->lockup_detected)) {
spin_unlock_irqrestore(&h->lock, flags);
return;
}
spin_unlock_irqrestore(&h->lock, flags);
flush_buf = kzalloc(4, GFP_KERNEL);
if (!flush_buf)
@ -4991,13 +5003,20 @@ static void hpsa_free_device_info(struct ctlr_info *h)
static void hpsa_remove_one(struct pci_dev *pdev)
{
struct ctlr_info *h;
unsigned long flags;
if (pci_get_drvdata(pdev) == NULL) {
dev_err(&pdev->dev, "unable to remove device\n");
return;
}
h = pci_get_drvdata(pdev);
stop_controller_lockup_detector(h);
/* Get rid of any controller monitoring work items */
spin_lock_irqsave(&h->lock, flags);
h->remove_in_progress = 1;
cancel_delayed_work(&h->monitor_ctlr_work);
spin_unlock_irqrestore(&h->lock, flags);
hpsa_unregister_scsi(h); /* unhook from SCSI subsystem */
hpsa_shutdown(pdev);
iounmap(h->vaddr);

Просмотреть файл

@ -114,6 +114,11 @@ struct ctlr_info {
struct TransTable_struct *transtable;
unsigned long transMethod;
/* cap concurrent passthrus at some reasonable maximum */
#define HPSA_MAX_CONCURRENT_PASSTHRUS (20)
spinlock_t passthru_count_lock; /* protects passthru_count */
int passthru_count;
/*
* Performant mode completion buffers
*/
@ -130,7 +135,9 @@ struct ctlr_info {
u32 heartbeat_sample_interval;
atomic_t firmware_flash_in_progress;
u32 lockup_detected;
struct list_head lockup_list;
struct delayed_work monitor_ctlr_work;
int remove_in_progress;
u32 fifo_recently_full;
/* Address of h->q[x] is passed to intr handler to know which queue */
u8 q[MAX_REPLY_QUEUES];
u32 TMFSupportFlags; /* cache what task mgmt funcs are supported. */

Просмотреть файл

@ -356,7 +356,9 @@ struct CfgTable {
u32 TransMethodOffset;
u8 ServerName[16];
u32 HeartBeat;
u32 SCSI_Prefetch;
u32 driver_support;
#define ENABLE_SCSI_PREFETCH 0x100
#define ENABLE_UNIT_ATTN 0x01
u32 MaxScatterGatherElements;
u32 MaxLogicalUnits;
u32 MaxPhysicalDevices;

Просмотреть файл

@ -220,7 +220,7 @@ module_param_named(max_devs, ipr_max_devs, int, 0);
MODULE_PARM_DESC(max_devs, "Specify the maximum number of physical devices. "
"[Default=" __stringify(IPR_DEFAULT_SIS64_DEVS) "]");
module_param_named(number_of_msix, ipr_number_of_msix, int, 0);
MODULE_PARM_DESC(number_of_msix, "Specify the number of MSIX interrupts to use on capable adapters (1 - 5). (default:2)");
MODULE_PARM_DESC(number_of_msix, "Specify the number of MSIX interrupts to use on capable adapters (1 - 16). (default:2)");
MODULE_LICENSE("GPL");
MODULE_VERSION(IPR_DRIVER_VERSION);

Просмотреть файл

@ -301,7 +301,7 @@ IPR_PCII_NO_HOST_RRQ | IPR_PCII_IOARRIN_LOST | IPR_PCII_MMIO_ERROR)
* Dump literals
*/
#define IPR_FMT2_MAX_IOA_DUMP_SIZE (4 * 1024 * 1024)
#define IPR_FMT3_MAX_IOA_DUMP_SIZE (32 * 1024 * 1024)
#define IPR_FMT3_MAX_IOA_DUMP_SIZE (80 * 1024 * 1024)
#define IPR_FMT2_NUM_SDT_ENTRIES 511
#define IPR_FMT3_NUM_SDT_ENTRIES 0xFFF
#define IPR_FMT2_MAX_NUM_DUMP_PAGES ((IPR_FMT2_MAX_IOA_DUMP_SIZE / PAGE_SIZE) + 1)
@ -311,7 +311,7 @@ IPR_PCII_NO_HOST_RRQ | IPR_PCII_IOARRIN_LOST | IPR_PCII_MMIO_ERROR)
* Misc literals
*/
#define IPR_NUM_IOADL_ENTRIES IPR_MAX_SGLIST
#define IPR_MAX_MSIX_VECTORS 0x5
#define IPR_MAX_MSIX_VECTORS 0x10
#define IPR_MAX_HRRQ_NUM 0x10
#define IPR_INIT_HRRQ 0x0

Просмотреть файл

@ -2945,6 +2945,7 @@ void iscsi_conn_teardown(struct iscsi_cls_conn *cls_conn)
free_pages((unsigned long) conn->data,
get_order(ISCSI_DEF_MAX_RECV_SEG_LEN));
kfree(conn->persistent_address);
kfree(conn->local_ipaddr);
kfifo_in(&session->cmdpool.queue, (void*)&conn->login_task,
sizeof(void*));
if (session->leadconn == conn)
@ -3269,6 +3270,8 @@ int iscsi_set_param(struct iscsi_cls_conn *cls_conn,
sscanf(buf, "%d", &val);
session->discovery_sess = !!val;
break;
case ISCSI_PARAM_LOCAL_IPADDR:
return iscsi_switch_str_param(&conn->local_ipaddr, buf);
default:
return -ENOSYS;
}
@ -3542,6 +3545,9 @@ int iscsi_conn_get_param(struct iscsi_cls_conn *cls_conn,
case ISCSI_PARAM_TCP_RECV_WSF:
len = sprintf(buf, "%u\n", conn->tcp_recv_wsf);
break;
case ISCSI_PARAM_LOCAL_IPADDR:
len = sprintf(buf, "%s\n", conn->local_ipaddr);
break;
default:
return -ENOSYS;
}

Просмотреть файл

@ -4001,7 +4001,7 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
goto debug_failed;
}
} else
phba->debug_dumpHBASlim = NULL;
phba->debug_dumpHostSlim = NULL;
/* Setup dumpData */
snprintf(name, sizeof(name), "dumpData");

Просмотреть файл

@ -260,6 +260,8 @@ int __init macscsi_detect(struct scsi_host_template * tpnt)
/* Once we support multiple 5380s (e.g. DuoDock) we'll do
something different here */
instance = scsi_register (tpnt, sizeof(struct NCR5380_hostdata));
if (instance == NULL)
return 0;
if (macintosh_config->ident == MAC_MODEL_IIFX) {
mac_scsi_regp = via1+0x8000;

Просмотреть файл

@ -1527,7 +1527,6 @@ struct megasas_instance {
u32 *reply_queue;
dma_addr_t reply_queue_h;
unsigned long base_addr;
struct megasas_register_set __iomem *reg_set;
u32 *reply_post_host_index_addr[MR_MAX_MSIX_REG_ARRAY];
struct megasas_pd_list pd_list[MEGASAS_MAX_PD];

Просмотреть файл

@ -3615,6 +3615,7 @@ static int megasas_init_fw(struct megasas_instance *instance)
u32 max_sectors_1;
u32 max_sectors_2;
u32 tmp_sectors, msix_enable, scratch_pad_2;
resource_size_t base_addr;
struct megasas_register_set __iomem *reg_set;
struct megasas_ctrl_info *ctrl_info;
unsigned long bar_list;
@ -3623,14 +3624,14 @@ static int megasas_init_fw(struct megasas_instance *instance)
/* Find first memory bar */
bar_list = pci_select_bars(instance->pdev, IORESOURCE_MEM);
instance->bar = find_first_bit(&bar_list, sizeof(unsigned long));
instance->base_addr = pci_resource_start(instance->pdev, instance->bar);
if (pci_request_selected_regions(instance->pdev, instance->bar,
"megasas: LSI")) {
printk(KERN_DEBUG "megasas: IO memory region busy!\n");
return -EBUSY;
}
instance->reg_set = ioremap_nocache(instance->base_addr, 8192);
base_addr = pci_resource_start(instance->pdev, instance->bar);
instance->reg_set = ioremap_nocache(base_addr, 8192);
if (!instance->reg_set) {
printk(KERN_DEBUG "megasas: Failed to map IO mem\n");

Просмотреть файл

@ -2502,7 +2502,7 @@ qla1280_mailbox_command(struct scsi_qla_host *ha, uint8_t mr, uint16_t *mb)
/* Issue set host interrupt command. */
/* set up a timer just in case we're really jammed */
init_timer(&timer);
init_timer_on_stack(&timer);
timer.expires = jiffies + 20*HZ;
timer.data = (unsigned long)ha;
timer.function = qla1280_mailbox_timeout;
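An on-stack timer has to use the _on_stack variants so that CONFIG_DEBUG_OBJECTS can track its lifetime, and is normally paired with destroy_timer_on_stack() before the function returns (that call, if present, is outside this hunk). A minimal sketch with made-up names:

#include <linux/timer.h>
#include <linux/jiffies.h>

static void example_timeout(unsigned long data)
{
	/* timeout handling would go here */
}

static void wait_with_stack_timer(void)
{
	struct timer_list timer;

	init_timer_on_stack(&timer);	/* tracked by debugobjects */
	timer.expires = jiffies + 20 * HZ;
	timer.data = 0;
	timer.function = example_timeout;
	add_timer(&timer);

	/* ... wait for the event or the timeout ... */

	del_timer_sync(&timer);
	destroy_timer_on_stack(&timer);	/* pairs with init_timer_on_stack() */
}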

Просмотреть файл

@ -862,7 +862,7 @@ qla2x00_alloc_sysfs_attr(scsi_qla_host_t *vha)
}
void
qla2x00_free_sysfs_attr(scsi_qla_host_t *vha)
qla2x00_free_sysfs_attr(scsi_qla_host_t *vha, bool stop_beacon)
{
struct Scsi_Host *host = vha->host;
struct sysfs_entry *iter;
@ -880,7 +880,7 @@ qla2x00_free_sysfs_attr(scsi_qla_host_t *vha)
iter->attr);
}
if (ha->beacon_blink_led == 1)
if (stop_beacon && ha->beacon_blink_led == 1)
ha->isp_ops->beacon_off(vha);
}
@ -890,7 +890,7 @@ static ssize_t
qla2x00_drvr_version_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
return snprintf(buf, PAGE_SIZE, "%s\n", qla2x00_version_str);
return scnprintf(buf, PAGE_SIZE, "%s\n", qla2x00_version_str);
}
static ssize_t
@ -901,7 +901,7 @@ qla2x00_fw_version_show(struct device *dev,
struct qla_hw_data *ha = vha->hw;
char fw_str[128];
return snprintf(buf, PAGE_SIZE, "%s\n",
return scnprintf(buf, PAGE_SIZE, "%s\n",
ha->isp_ops->fw_version_str(vha, fw_str));
}
@ -914,15 +914,15 @@ qla2x00_serial_num_show(struct device *dev, struct device_attribute *attr,
uint32_t sn;
if (IS_QLAFX00(vha->hw)) {
return snprintf(buf, PAGE_SIZE, "%s\n",
return scnprintf(buf, PAGE_SIZE, "%s\n",
vha->hw->mr.serial_num);
} else if (IS_FWI2_CAPABLE(ha)) {
qla2xxx_get_vpd_field(vha, "SN", buf, PAGE_SIZE);
return snprintf(buf, PAGE_SIZE, "%s\n", buf);
qla2xxx_get_vpd_field(vha, "SN", buf, PAGE_SIZE - 1);
return strlen(strcat(buf, "\n"));
}
sn = ((ha->serial0 & 0x1f) << 16) | (ha->serial2 << 8) | ha->serial1;
return snprintf(buf, PAGE_SIZE, "%c%05d\n", 'A' + sn / 100000,
return scnprintf(buf, PAGE_SIZE, "%c%05d\n", 'A' + sn / 100000,
sn % 100000);
}
@ -931,7 +931,7 @@ qla2x00_isp_name_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
return snprintf(buf, PAGE_SIZE, "ISP%04X\n", vha->hw->pdev->device);
return scnprintf(buf, PAGE_SIZE, "ISP%04X\n", vha->hw->pdev->device);
}
static ssize_t
@ -942,10 +942,10 @@ qla2x00_isp_id_show(struct device *dev, struct device_attribute *attr,
struct qla_hw_data *ha = vha->hw;
if (IS_QLAFX00(vha->hw))
return snprintf(buf, PAGE_SIZE, "%s\n",
return scnprintf(buf, PAGE_SIZE, "%s\n",
vha->hw->mr.hw_version);
return snprintf(buf, PAGE_SIZE, "%04x %04x %04x %04x\n",
return scnprintf(buf, PAGE_SIZE, "%04x %04x %04x %04x\n",
ha->product_id[0], ha->product_id[1], ha->product_id[2],
ha->product_id[3]);
}
@ -956,11 +956,7 @@ qla2x00_model_name_show(struct device *dev, struct device_attribute *attr,
{
scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
if (IS_QLAFX00(vha->hw))
return snprintf(buf, PAGE_SIZE, "%s\n",
vha->hw->mr.product_name);
return snprintf(buf, PAGE_SIZE, "%s\n", vha->hw->model_number);
return scnprintf(buf, PAGE_SIZE, "%s\n", vha->hw->model_number);
}
static ssize_t
@ -968,7 +964,7 @@ qla2x00_model_desc_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
return snprintf(buf, PAGE_SIZE, "%s\n",
return scnprintf(buf, PAGE_SIZE, "%s\n",
vha->hw->model_desc ? vha->hw->model_desc : "");
}
@ -979,7 +975,7 @@ qla2x00_pci_info_show(struct device *dev, struct device_attribute *attr,
scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
char pci_info[30];
return snprintf(buf, PAGE_SIZE, "%s\n",
return scnprintf(buf, PAGE_SIZE, "%s\n",
vha->hw->isp_ops->pci_info_str(vha, pci_info));
}
@ -994,29 +990,29 @@ qla2x00_link_state_show(struct device *dev, struct device_attribute *attr,
if (atomic_read(&vha->loop_state) == LOOP_DOWN ||
atomic_read(&vha->loop_state) == LOOP_DEAD ||
vha->device_flags & DFLG_NO_CABLE)
len = snprintf(buf, PAGE_SIZE, "Link Down\n");
len = scnprintf(buf, PAGE_SIZE, "Link Down\n");
else if (atomic_read(&vha->loop_state) != LOOP_READY ||
qla2x00_reset_active(vha))
len = snprintf(buf, PAGE_SIZE, "Unknown Link State\n");
len = scnprintf(buf, PAGE_SIZE, "Unknown Link State\n");
else {
len = snprintf(buf, PAGE_SIZE, "Link Up - ");
len = scnprintf(buf, PAGE_SIZE, "Link Up - ");
switch (ha->current_topology) {
case ISP_CFG_NL:
len += snprintf(buf + len, PAGE_SIZE-len, "Loop\n");
len += scnprintf(buf + len, PAGE_SIZE-len, "Loop\n");
break;
case ISP_CFG_FL:
len += snprintf(buf + len, PAGE_SIZE-len, "FL_Port\n");
len += scnprintf(buf + len, PAGE_SIZE-len, "FL_Port\n");
break;
case ISP_CFG_N:
len += snprintf(buf + len, PAGE_SIZE-len,
len += scnprintf(buf + len, PAGE_SIZE-len,
"N_Port to N_Port\n");
break;
case ISP_CFG_F:
len += snprintf(buf + len, PAGE_SIZE-len, "F_Port\n");
len += scnprintf(buf + len, PAGE_SIZE-len, "F_Port\n");
break;
default:
len += snprintf(buf + len, PAGE_SIZE-len, "Loop\n");
len += scnprintf(buf + len, PAGE_SIZE-len, "Loop\n");
break;
}
}
@ -1032,10 +1028,10 @@ qla2x00_zio_show(struct device *dev, struct device_attribute *attr,
switch (vha->hw->zio_mode) {
case QLA_ZIO_MODE_6:
len += snprintf(buf + len, PAGE_SIZE-len, "Mode 6\n");
len += scnprintf(buf + len, PAGE_SIZE-len, "Mode 6\n");
break;
case QLA_ZIO_DISABLED:
len += snprintf(buf + len, PAGE_SIZE-len, "Disabled\n");
len += scnprintf(buf + len, PAGE_SIZE-len, "Disabled\n");
break;
}
return len;
@ -1075,7 +1071,7 @@ qla2x00_zio_timer_show(struct device *dev, struct device_attribute *attr,
{
scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
return snprintf(buf, PAGE_SIZE, "%d us\n", vha->hw->zio_timer * 100);
return scnprintf(buf, PAGE_SIZE, "%d us\n", vha->hw->zio_timer * 100);
}
static ssize_t
@ -1105,9 +1101,9 @@ qla2x00_beacon_show(struct device *dev, struct device_attribute *attr,
int len = 0;
if (vha->hw->beacon_blink_led)
len += snprintf(buf + len, PAGE_SIZE-len, "Enabled\n");
len += scnprintf(buf + len, PAGE_SIZE-len, "Enabled\n");
else
len += snprintf(buf + len, PAGE_SIZE-len, "Disabled\n");
len += scnprintf(buf + len, PAGE_SIZE-len, "Disabled\n");
return len;
}
@ -1149,7 +1145,7 @@ qla2x00_optrom_bios_version_show(struct device *dev,
{
scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
struct qla_hw_data *ha = vha->hw;
return snprintf(buf, PAGE_SIZE, "%d.%02d\n", ha->bios_revision[1],
return scnprintf(buf, PAGE_SIZE, "%d.%02d\n", ha->bios_revision[1],
ha->bios_revision[0]);
}
@ -1159,7 +1155,7 @@ qla2x00_optrom_efi_version_show(struct device *dev,
{
scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
struct qla_hw_data *ha = vha->hw;
return snprintf(buf, PAGE_SIZE, "%d.%02d\n", ha->efi_revision[1],
return scnprintf(buf, PAGE_SIZE, "%d.%02d\n", ha->efi_revision[1],
ha->efi_revision[0]);
}
@ -1169,7 +1165,7 @@ qla2x00_optrom_fcode_version_show(struct device *dev,
{
scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
struct qla_hw_data *ha = vha->hw;
return snprintf(buf, PAGE_SIZE, "%d.%02d\n", ha->fcode_revision[1],
return scnprintf(buf, PAGE_SIZE, "%d.%02d\n", ha->fcode_revision[1],
ha->fcode_revision[0]);
}
@ -1179,7 +1175,7 @@ qla2x00_optrom_fw_version_show(struct device *dev,
{
scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
struct qla_hw_data *ha = vha->hw;
return snprintf(buf, PAGE_SIZE, "%d.%02d.%02d %d\n",
return scnprintf(buf, PAGE_SIZE, "%d.%02d.%02d %d\n",
ha->fw_revision[0], ha->fw_revision[1], ha->fw_revision[2],
ha->fw_revision[3]);
}
@ -1192,9 +1188,9 @@ qla2x00_optrom_gold_fw_version_show(struct device *dev,
struct qla_hw_data *ha = vha->hw;
if (!IS_QLA81XX(ha) && !IS_QLA83XX(ha))
return snprintf(buf, PAGE_SIZE, "\n");
return scnprintf(buf, PAGE_SIZE, "\n");
return snprintf(buf, PAGE_SIZE, "%d.%02d.%02d (%d)\n",
return scnprintf(buf, PAGE_SIZE, "%d.%02d.%02d (%d)\n",
ha->gold_fw_version[0], ha->gold_fw_version[1],
ha->gold_fw_version[2], ha->gold_fw_version[3]);
}
@ -1204,7 +1200,7 @@ qla2x00_total_isp_aborts_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
return snprintf(buf, PAGE_SIZE, "%d\n",
return scnprintf(buf, PAGE_SIZE, "%d\n",
vha->qla_stats.total_isp_aborts);
}
@ -1218,16 +1214,16 @@ qla24xx_84xx_fw_version_show(struct device *dev,
struct qla_hw_data *ha = vha->hw;
if (!IS_QLA84XX(ha))
return snprintf(buf, PAGE_SIZE, "\n");
return scnprintf(buf, PAGE_SIZE, "\n");
if (ha->cs84xx->op_fw_version == 0)
rval = qla84xx_verify_chip(vha, status);
if ((rval == QLA_SUCCESS) && (status[0] == 0))
return snprintf(buf, PAGE_SIZE, "%u\n",
return scnprintf(buf, PAGE_SIZE, "%u\n",
(uint32_t)ha->cs84xx->op_fw_version);
return snprintf(buf, PAGE_SIZE, "\n");
return scnprintf(buf, PAGE_SIZE, "\n");
}
static ssize_t
@ -1238,9 +1234,9 @@ qla2x00_mpi_version_show(struct device *dev, struct device_attribute *attr,
struct qla_hw_data *ha = vha->hw;
if (!IS_QLA81XX(ha) && !IS_QLA8031(ha) && !IS_QLA8044(ha))
return snprintf(buf, PAGE_SIZE, "\n");
return scnprintf(buf, PAGE_SIZE, "\n");
return snprintf(buf, PAGE_SIZE, "%d.%02d.%02d (%x)\n",
return scnprintf(buf, PAGE_SIZE, "%d.%02d.%02d (%x)\n",
ha->mpi_version[0], ha->mpi_version[1], ha->mpi_version[2],
ha->mpi_capabilities);
}
@ -1253,9 +1249,9 @@ qla2x00_phy_version_show(struct device *dev, struct device_attribute *attr,
struct qla_hw_data *ha = vha->hw;
if (!IS_QLA81XX(ha) && !IS_QLA8031(ha))
return snprintf(buf, PAGE_SIZE, "\n");
return scnprintf(buf, PAGE_SIZE, "\n");
return snprintf(buf, PAGE_SIZE, "%d.%02d.%02d\n",
return scnprintf(buf, PAGE_SIZE, "%d.%02d.%02d\n",
ha->phy_version[0], ha->phy_version[1], ha->phy_version[2]);
}
@ -1266,7 +1262,7 @@ qla2x00_flash_block_size_show(struct device *dev,
scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
struct qla_hw_data *ha = vha->hw;
return snprintf(buf, PAGE_SIZE, "0x%x\n", ha->fdt_block_size);
return scnprintf(buf, PAGE_SIZE, "0x%x\n", ha->fdt_block_size);
}
static ssize_t
@ -1276,9 +1272,9 @@ qla2x00_vlan_id_show(struct device *dev, struct device_attribute *attr,
scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
if (!IS_CNA_CAPABLE(vha->hw))
return snprintf(buf, PAGE_SIZE, "\n");
return scnprintf(buf, PAGE_SIZE, "\n");
return snprintf(buf, PAGE_SIZE, "%d\n", vha->fcoe_vlan_id);
return scnprintf(buf, PAGE_SIZE, "%d\n", vha->fcoe_vlan_id);
}
static ssize_t
@ -1288,9 +1284,9 @@ qla2x00_vn_port_mac_address_show(struct device *dev,
scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
if (!IS_CNA_CAPABLE(vha->hw))
return snprintf(buf, PAGE_SIZE, "\n");
return scnprintf(buf, PAGE_SIZE, "\n");
return snprintf(buf, PAGE_SIZE, "%pMR\n", vha->fcoe_vn_port_mac);
return scnprintf(buf, PAGE_SIZE, "%pMR\n", vha->fcoe_vn_port_mac);
}
static ssize_t
@ -1299,7 +1295,7 @@ qla2x00_fabric_param_show(struct device *dev, struct device_attribute *attr,
{
scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
return snprintf(buf, PAGE_SIZE, "%d\n", vha->hw->switch_cap);
return scnprintf(buf, PAGE_SIZE, "%d\n", vha->hw->switch_cap);
}
static ssize_t
@ -1320,10 +1316,10 @@ qla2x00_thermal_temp_show(struct device *dev,
}
if (qla2x00_get_thermal_temp(vha, &temp) == QLA_SUCCESS)
return snprintf(buf, PAGE_SIZE, "%d\n", temp);
return scnprintf(buf, PAGE_SIZE, "%d\n", temp);
done:
return snprintf(buf, PAGE_SIZE, "\n");
return scnprintf(buf, PAGE_SIZE, "\n");
}
static ssize_t
@ -1337,7 +1333,7 @@ qla2x00_fw_state_show(struct device *dev, struct device_attribute *attr,
if (IS_QLAFX00(vha->hw)) {
pstate = qlafx00_fw_state_show(dev, attr, buf);
return snprintf(buf, PAGE_SIZE, "0x%x\n", pstate);
return scnprintf(buf, PAGE_SIZE, "0x%x\n", pstate);
}
if (qla2x00_reset_active(vha))
@ -1348,7 +1344,7 @@ qla2x00_fw_state_show(struct device *dev, struct device_attribute *attr,
if (rval != QLA_SUCCESS)
memset(state, -1, sizeof(state));
return snprintf(buf, PAGE_SIZE, "0x%x 0x%x 0x%x 0x%x 0x%x\n", state[0],
return scnprintf(buf, PAGE_SIZE, "0x%x 0x%x 0x%x 0x%x 0x%x\n", state[0],
state[1], state[2], state[3], state[4]);
}
@ -1359,9 +1355,9 @@ qla2x00_diag_requests_show(struct device *dev,
scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
if (!IS_BIDI_CAPABLE(vha->hw))
return snprintf(buf, PAGE_SIZE, "\n");
return scnprintf(buf, PAGE_SIZE, "\n");
return snprintf(buf, PAGE_SIZE, "%llu\n", vha->bidi_stats.io_count);
return scnprintf(buf, PAGE_SIZE, "%llu\n", vha->bidi_stats.io_count);
}
static ssize_t
@ -1371,9 +1367,9 @@ qla2x00_diag_megabytes_show(struct device *dev,
scsi_qla_host_t *vha = shost_priv(class_to_shost(dev));
if (!IS_BIDI_CAPABLE(vha->hw))
return snprintf(buf, PAGE_SIZE, "\n");
return scnprintf(buf, PAGE_SIZE, "\n");
return snprintf(buf, PAGE_SIZE, "%llu\n",
return scnprintf(buf, PAGE_SIZE, "%llu\n",
vha->bidi_stats.transfer_bytes >> 20);
}
@ -1392,7 +1388,7 @@ qla2x00_fw_dump_size_show(struct device *dev, struct device_attribute *attr,
else
size = ha->fw_dump_len;
return snprintf(buf, PAGE_SIZE, "%d\n", size);
return scnprintf(buf, PAGE_SIZE, "%d\n", size);
}
static DEVICE_ATTR(driver_version, S_IRUGO, qla2x00_drvr_version_show, NULL);
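The sweep from snprintf() to scnprintf() above matters because snprintf() returns the length the output would have had, which can exceed the buffer size, whereas scnprintf() returns the number of bytes actually written (excluding the trailing NUL) - the value a sysfs show() callback is expected to return. A generic illustration, not taken from qla_attr.c:

#include <linux/device.h>
#include <linux/kernel.h>

static ssize_t example_show(struct device *dev, struct device_attribute *attr,
			    char *buf)
{
	/* never claims more than PAGE_SIZE - 1 bytes, even if output is truncated */
	return scnprintf(buf, PAGE_SIZE, "%s\n", "example value");
}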

Просмотреть файл

@ -2021,6 +2021,46 @@ done:
return rval;
}
static int
qla26xx_serdes_op(struct fc_bsg_job *bsg_job)
{
struct Scsi_Host *host = bsg_job->shost;
scsi_qla_host_t *vha = shost_priv(host);
int rval = 0;
struct qla_serdes_reg sr;
memset(&sr, 0, sizeof(sr));
sg_copy_to_buffer(bsg_job->request_payload.sg_list,
bsg_job->request_payload.sg_cnt, &sr, sizeof(sr));
switch (sr.cmd) {
case INT_SC_SERDES_WRITE_REG:
rval = qla2x00_write_serdes_word(vha, sr.addr, sr.val);
bsg_job->reply->reply_payload_rcv_len = 0;
break;
case INT_SC_SERDES_READ_REG:
rval = qla2x00_read_serdes_word(vha, sr.addr, &sr.val);
sg_copy_from_buffer(bsg_job->reply_payload.sg_list,
bsg_job->reply_payload.sg_cnt, &sr, sizeof(sr));
bsg_job->reply->reply_payload_rcv_len = sizeof(sr);
break;
default:
ql_log(ql_log_warn, vha, 0x708c,
"Unknown serdes cmd %x.\n", sr.cmd);
rval = -EDOM;
break;
}
bsg_job->reply->reply_data.vendor_reply.vendor_rsp[0] =
rval ? EXT_STATUS_MAILBOX : 0;
bsg_job->reply_len = sizeof(struct fc_bsg_reply);
bsg_job->reply->result = DID_OK << 16;
bsg_job->job_done(bsg_job);
return 0;
}
static int
qla2x00_process_vendor_specific(struct fc_bsg_job *bsg_job)
{
@ -2069,6 +2109,10 @@ qla2x00_process_vendor_specific(struct fc_bsg_job *bsg_job)
case QL_VND_FX00_MGMT_CMD:
return qlafx00_mgmt_cmd(bsg_job);
case QL_VND_SERDES_OP:
return qla26xx_serdes_op(bsg_job);
default:
return -ENOSYS;
}

Просмотреть файл

@ -23,6 +23,7 @@
#define QL_VND_WRITE_I2C 0x10
#define QL_VND_READ_I2C 0x11
#define QL_VND_FX00_MGMT_CMD 0x12
#define QL_VND_SERDES_OP 0x13
/* BSG Vendor specific subcode returns */
#define EXT_STATUS_OK 0
@ -212,4 +213,16 @@ struct qla_i2c_access {
uint8_t buffer[0x40];
} __packed;
/* 26xx serdes register interface */
/* serdes reg commands */
#define INT_SC_SERDES_READ_REG 1
#define INT_SC_SERDES_WRITE_REG 2
struct qla_serdes_reg {
uint16_t cmd;
uint16_t addr;
uint16_t val;
} __packed;
#endif

Просмотреть файл

@ -11,8 +11,9 @@
* ----------------------------------------------------------------------
* | Level | Last Value Used | Holes |
* ----------------------------------------------------------------------
* | Module Init and Probe | 0x0159 | 0x4b,0xba,0xfa |
* | Mailbox commands | 0x1181 | 0x111a-0x111b |
* | Module Init and Probe | 0x015b | 0x4b,0xba,0xfa |
* | | | 0x015a |
* | Mailbox commands | 0x1187 | 0x111a-0x111b |
* | | | 0x1155-0x1158 |
* | | | 0x1018-0x1019 |
* | | | 0x1115-0x1116 |
@ -26,7 +27,7 @@
* | | | 0x302d,0x3033 |
* | | | 0x3036,0x3038 |
* | | | 0x303a |
* | DPC Thread | 0x4022 | 0x4002,0x4013 |
* | DPC Thread | 0x4023 | 0x4002,0x4013 |
* | Async Events | 0x5087 | 0x502b-0x502f |
* | | | 0x5047,0x5052 |
* | | | 0x5084,0x5075 |

Просмотреть файл

@ -862,7 +862,6 @@ struct mbx_cmd_32 {
*/
#define MBC_LOAD_RAM 1 /* Load RAM. */
#define MBC_EXECUTE_FIRMWARE 2 /* Execute firmware. */
#define MBC_WRITE_RAM_WORD 4 /* Write RAM word. */
#define MBC_READ_RAM_WORD 5 /* Read RAM word. */
#define MBC_MAILBOX_REGISTER_TEST 6 /* Wrap incoming mailboxes */
#define MBC_VERIFY_CHECKSUM 7 /* Verify checksum. */
@ -937,6 +936,8 @@ struct mbx_cmd_32 {
/*
* ISP24xx mailbox commands
*/
#define MBC_WRITE_SERDES 0x3 /* Write serdes word. */
#define MBC_READ_SERDES 0x4 /* Read serdes word. */
#define MBC_SERDES_PARAMS 0x10 /* Serdes Tx Parameters. */
#define MBC_GET_IOCB_STATUS 0x12 /* Get IOCB status command. */
#define MBC_PORT_PARAMS 0x1A /* Port iDMA Parameters. */
@ -2734,7 +2735,6 @@ struct req_que {
srb_t **outstanding_cmds;
uint32_t current_outstanding_cmd;
uint16_t num_outstanding_cmds;
#define MAX_Q_DEPTH 32
int max_q_depth;
dma_addr_t dma_fx00;
@ -3302,12 +3302,7 @@ struct qla_hw_data {
struct work_struct nic_core_reset;
struct work_struct idc_state_handler;
struct work_struct nic_core_unrecoverable;
#define HOST_QUEUE_RAMPDOWN_INTERVAL (60 * HZ)
#define HOST_QUEUE_RAMPUP_INTERVAL (30 * HZ)
unsigned long host_last_rampdown_time;
unsigned long host_last_rampup_time;
int cfg_lun_q_depth;
struct work_struct board_disable;
struct mr_data_fx00 mr;
@ -3372,12 +3367,11 @@ typedef struct scsi_qla_host {
#define MPI_RESET_NEEDED 19 /* Initiate MPI FW reset */
#define ISP_QUIESCE_NEEDED 20 /* Driver need some quiescence */
#define SCR_PENDING 21 /* SCR in target mode */
#define HOST_RAMP_DOWN_QUEUE_DEPTH 22
#define HOST_RAMP_UP_QUEUE_DEPTH 23
#define PORT_UPDATE_NEEDED 24
#define FX00_RESET_RECOVERY 25
#define FX00_TARGET_SCAN 26
#define FX00_CRITEMP_RECOVERY 27
#define PORT_UPDATE_NEEDED 22
#define FX00_RESET_RECOVERY 23
#define FX00_TARGET_SCAN 24
#define FX00_CRITEMP_RECOVERY 25
#define FX00_HOST_INFO_RESEND 26
uint32_t device_flags;
#define SWITCH_FOUND BIT_0

Просмотреть файл

@ -98,7 +98,6 @@ extern int qlport_down_retry;
extern int ql2xplogiabsentdevice;
extern int ql2xloginretrycount;
extern int ql2xfdmienable;
extern int ql2xmaxqdepth;
extern int ql2xallocfwdump;
extern int ql2xextended_error_logging;
extern int ql2xiidmaenable;
@ -160,6 +159,9 @@ extern int qla83xx_clear_drv_presence(scsi_qla_host_t *vha);
extern int __qla83xx_clear_drv_presence(scsi_qla_host_t *vha);
extern int qla2x00_post_uevent_work(struct scsi_qla_host *, u32);
extern int qla2x00_post_uevent_work(struct scsi_qla_host *, u32);
extern void qla2x00_disable_board_on_pci_error(struct work_struct *);
/*
* Global Functions in qla_mid.c source file.
*/
@ -338,6 +340,11 @@ qla2x00_eh_wait_for_pending_commands(scsi_qla_host_t *, unsigned int,
extern int
qla2x00_system_error(scsi_qla_host_t *);
extern int
qla2x00_write_serdes_word(scsi_qla_host_t *, uint16_t, uint16_t);
extern int
qla2x00_read_serdes_word(scsi_qla_host_t *, uint16_t, uint16_t *);
extern int
qla2x00_set_serdes_params(scsi_qla_host_t *, uint16_t, uint16_t, uint16_t);
@ -455,6 +462,7 @@ extern uint8_t *qla25xx_read_nvram_data(scsi_qla_host_t *, uint8_t *, uint32_t,
extern int qla25xx_write_nvram_data(scsi_qla_host_t *, uint8_t *, uint32_t,
uint32_t);
extern int qla2x00_is_a_vp_did(scsi_qla_host_t *, uint32_t);
bool qla2x00_check_reg_for_disconnect(scsi_qla_host_t *, uint32_t);
extern int qla2x00_beacon_on(struct scsi_qla_host *);
extern int qla2x00_beacon_off(struct scsi_qla_host *);
@ -541,10 +549,9 @@ struct fc_function_template;
extern struct fc_function_template qla2xxx_transport_functions;
extern struct fc_function_template qla2xxx_transport_vport_functions;
extern void qla2x00_alloc_sysfs_attr(scsi_qla_host_t *);
extern void qla2x00_free_sysfs_attr(scsi_qla_host_t *);
extern void qla2x00_free_sysfs_attr(scsi_qla_host_t *, bool);
extern void qla2x00_init_host_attr(scsi_qla_host_t *);
extern void qla2x00_alloc_sysfs_attr(scsi_qla_host_t *);
extern void qla2x00_free_sysfs_attr(scsi_qla_host_t *);
extern int qla2x00_loopback_test(scsi_qla_host_t *, struct msg_echo_lb *, uint16_t *);
extern int qla2x00_echo_test(scsi_qla_host_t *,
struct msg_echo_lb *, uint16_t *);
@ -725,7 +732,7 @@ extern inline void qla8044_set_qsnt_ready(struct scsi_qla_host *vha);
extern inline void qla8044_need_reset_handler(struct scsi_qla_host *vha);
extern int qla8044_device_state_handler(struct scsi_qla_host *vha);
extern void qla8044_clear_qsnt_ready(struct scsi_qla_host *vha);
extern void qla8044_clear_drv_active(struct scsi_qla_host *vha);
extern void qla8044_clear_drv_active(struct qla_hw_data *);
void qla8044_get_minidump(struct scsi_qla_host *vha);
int qla8044_collect_md_data(struct scsi_qla_host *vha);
extern int qla8044_md_get_template(scsi_qla_host_t *);

Просмотреть файл

@ -1694,6 +1694,8 @@ enable_82xx_npiv:
if (!fw_major_version && ql2xallocfwdump
&& !(IS_P3P_TYPE(ha)))
qla2x00_alloc_fw_dump(vha);
} else {
goto failed;
}
} else {
ql_log(ql_log_fatal, vha, 0x00cd,

Просмотреть файл

@ -260,25 +260,6 @@ qla2x00_gid_list_size(struct qla_hw_data *ha)
return sizeof(struct gid_list_info) * ha->max_fibre_devices;
}
static inline void
qla2x00_do_host_ramp_up(scsi_qla_host_t *vha)
{
if (vha->hw->cfg_lun_q_depth >= ql2xmaxqdepth)
return;
/* Wait at least HOST_QUEUE_RAMPDOWN_INTERVAL before ramping up */
if (time_before(jiffies, (vha->hw->host_last_rampdown_time +
HOST_QUEUE_RAMPDOWN_INTERVAL)))
return;
/* Wait at least HOST_QUEUE_RAMPUP_INTERVAL between each ramp up */
if (time_before(jiffies, (vha->hw->host_last_rampup_time +
HOST_QUEUE_RAMPUP_INTERVAL)))
return;
set_bit(HOST_RAMP_UP_QUEUE_DEPTH, &vha->dpc_flags);
}
static inline void
qla2x00_handle_mbx_completion(struct qla_hw_data *ha, int status)
{

Просмотреть файл

@ -56,6 +56,16 @@ qla2100_intr_handler(int irq, void *dev_id)
vha = pci_get_drvdata(ha->pdev);
for (iter = 50; iter--; ) {
hccr = RD_REG_WORD(&reg->hccr);
/* Check for PCI disconnection */
if (hccr == 0xffff) {
/*
* Schedule this on the default system workqueue so that
* all the adapter workqueues and the DPC thread can be
* shut down cleanly.
*/
schedule_work(&ha->board_disable);
break;
}
if (hccr & HCCR_RISC_PAUSE) {
if (pci_channel_offline(ha->pdev))
break;
@ -110,6 +120,22 @@ qla2100_intr_handler(int irq, void *dev_id)
return (IRQ_HANDLED);
}
bool
qla2x00_check_reg_for_disconnect(scsi_qla_host_t *vha, uint32_t reg)
{
/* Check for PCI disconnection */
if (reg == 0xffffffff) {
/*
* Schedule this on the default system workqueue so that all the
* adapter workqueues and the DPC thread can be shut down
* cleanly.
*/
schedule_work(&vha->hw->board_disable);
return true;
} else
return false;
}
/**
* qla2300_intr_handler() - Process interrupts for the ISP23xx and ISP63xx.
* @irq:
@ -148,11 +174,14 @@ qla2300_intr_handler(int irq, void *dev_id)
vha = pci_get_drvdata(ha->pdev);
for (iter = 50; iter--; ) {
stat = RD_REG_DWORD(&reg->u.isp2300.host_status);
if (qla2x00_check_reg_for_disconnect(vha, stat))
break;
if (stat & HSR_RISC_PAUSED) {
if (unlikely(pci_channel_offline(ha->pdev)))
break;
hccr = RD_REG_WORD(&reg->hccr);
if (hccr & (BIT_15 | BIT_13 | BIT_11 | BIT_8))
ql_log(ql_log_warn, vha, 0x5026,
"Parity error -- HCCR=%x, Dumping "
@ -269,11 +298,18 @@ qla81xx_idc_event(scsi_qla_host_t *vha, uint16_t aen, uint16_t descr)
{ "Complete", "Request Notification", "Time Extension" };
int rval;
struct device_reg_24xx __iomem *reg24 = &vha->hw->iobase->isp24;
struct device_reg_82xx __iomem *reg82 = &vha->hw->iobase->isp82;
uint16_t __iomem *wptr;
uint16_t cnt, timeout, mb[QLA_IDC_ACK_REGS];
/* Seed data -- mailbox1 -> mailbox7. */
wptr = (uint16_t __iomem *)&reg24->mailbox1;
if (IS_QLA81XX(vha->hw) || IS_QLA83XX(vha->hw))
wptr = (uint16_t __iomem *)&reg24->mailbox1;
else if (IS_QLA8044(vha->hw))
wptr = (uint16_t __iomem *)&reg82->mailbox_out[1];
else
return;
for (cnt = 0; cnt < QLA_IDC_ACK_REGS; cnt++, wptr++)
mb[cnt] = RD_REG_WORD(wptr);
@ -287,7 +323,7 @@ qla81xx_idc_event(scsi_qla_host_t *vha, uint16_t aen, uint16_t descr)
case MBA_IDC_COMPLETE:
if (mb[1] >> 15) {
vha->hw->flags.idc_compl_status = 1;
if (vha->hw->notify_dcbx_comp)
if (vha->hw->notify_dcbx_comp && !vha->vp_idx)
complete(&vha->hw->dcbx_comp);
}
break;
@ -758,7 +794,7 @@ skip_rio:
ql_dbg(ql_dbg_async, vha, 0x500d,
"DCBX Completed -- %04x %04x %04x.\n",
mb[1], mb[2], mb[3]);
if (ha->notify_dcbx_comp)
if (ha->notify_dcbx_comp && !vha->vp_idx)
complete(&ha->dcbx_comp);
} else
@ -1032,7 +1068,7 @@ skip_rio:
}
}
case MBA_IDC_COMPLETE:
if (ha->notify_lb_portup_comp)
if (ha->notify_lb_portup_comp && !vha->vp_idx)
complete(&ha->lb_portup_comp);
/* Fallthru */
case MBA_IDC_TIME_EXT:
@ -1991,7 +2027,6 @@ qla2x00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt)
/* Fast path completion. */
if (comp_status == CS_COMPLETE && scsi_status == 0) {
qla2x00_do_host_ramp_up(vha);
qla2x00_process_completed_request(vha, req, handle);
return;
@ -2250,9 +2285,6 @@ out:
cp->cmnd, scsi_bufflen(cp), rsp_info_len,
resid_len, fw_resid_len);
if (!res)
qla2x00_do_host_ramp_up(vha);
if (rsp->status_srb == NULL)
sp->done(ha, sp, res);
}
@ -2575,6 +2607,8 @@ qla24xx_intr_handler(int irq, void *dev_id)
vha = pci_get_drvdata(ha->pdev);
for (iter = 50; iter--; ) {
stat = RD_REG_DWORD(&reg->host_status);
if (qla2x00_check_reg_for_disconnect(vha, stat))
break;
if (stat & HSRX_RISC_PAUSED) {
if (unlikely(pci_channel_offline(ha->pdev)))
break;
@ -2644,6 +2678,7 @@ qla24xx_msix_rsp_q(int irq, void *dev_id)
struct device_reg_24xx __iomem *reg;
struct scsi_qla_host *vha;
unsigned long flags;
uint32_t stat = 0;
rsp = (struct rsp_que *) dev_id;
if (!rsp) {
@ -2657,11 +2692,19 @@ qla24xx_msix_rsp_q(int irq, void *dev_id)
spin_lock_irqsave(&ha->hardware_lock, flags);
vha = pci_get_drvdata(ha->pdev);
/*
* Use the host_status register to check for PCI disconnection before
* we process the response queue.
*/
stat = RD_REG_DWORD(&reg->host_status);
if (qla2x00_check_reg_for_disconnect(vha, stat))
goto out;
qla24xx_process_response_queue(vha, rsp);
if (!ha->flags.disable_msix_handshake) {
WRT_REG_DWORD(&reg->hccr, HCCRX_CLR_RISC_INT);
RD_REG_DWORD_RELAXED(&reg->hccr);
}
out:
spin_unlock_irqrestore(&ha->hardware_lock, flags);
return IRQ_HANDLED;
@ -2671,9 +2714,11 @@ static irqreturn_t
qla25xx_msix_rsp_q(int irq, void *dev_id)
{
struct qla_hw_data *ha;
scsi_qla_host_t *vha;
struct rsp_que *rsp;
struct device_reg_24xx __iomem *reg;
unsigned long flags;
uint32_t hccr = 0;
rsp = (struct rsp_que *) dev_id;
if (!rsp) {
@ -2682,17 +2727,21 @@ qla25xx_msix_rsp_q(int irq, void *dev_id)
return IRQ_NONE;
}
ha = rsp->hw;
vha = pci_get_drvdata(ha->pdev);
/* Clear the interrupt, if enabled, for this response queue */
if (!ha->flags.disable_msix_handshake) {
reg = &ha->iobase->isp24;
spin_lock_irqsave(&ha->hardware_lock, flags);
WRT_REG_DWORD(&reg->hccr, HCCRX_CLR_RISC_INT);
RD_REG_DWORD_RELAXED(&reg->hccr);
hccr = RD_REG_DWORD_RELAXED(&reg->hccr);
spin_unlock_irqrestore(&ha->hardware_lock, flags);
}
if (qla2x00_check_reg_for_disconnect(vha, hccr))
goto out;
queue_work_on((int) (rsp->id - 1), ha->wq, &rsp->q_work);
out:
return IRQ_HANDLED;
}
@ -2723,6 +2772,8 @@ qla24xx_msix_default(int irq, void *dev_id)
vha = pci_get_drvdata(ha->pdev);
do {
stat = RD_REG_DWORD(&reg->host_status);
if (qla2x00_check_reg_for_disconnect(vha, stat))
break;
if (stat & HSRX_RISC_PAUSED) {
if (unlikely(pci_channel_offline(ha->pdev)))
break;
@ -2937,7 +2988,7 @@ msix_out:
int
qla2x00_request_irqs(struct qla_hw_data *ha, struct rsp_que *rsp)
{
int ret;
int ret = QLA_FUNCTION_FAILED;
device_reg_t __iomem *reg = ha->iobase;
scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev);
@ -2971,10 +3022,12 @@ qla2x00_request_irqs(struct qla_hw_data *ha, struct rsp_que *rsp)
ha->chip_revision, ha->fw_attributes);
goto clear_risc_ints;
}
ql_log(ql_log_info, vha, 0x0037,
"MSI-X Falling back-to MSI mode -%d.\n", ret);
skip_msix:
ql_log(ql_log_info, vha, 0x0037,
"Falling back-to MSI mode -%d.\n", ret);
if (!IS_QLA24XX(ha) && !IS_QLA2532(ha) && !IS_QLA8432(ha) &&
!IS_QLA8001(ha) && !IS_P3P_TYPE(ha) && !IS_QLAFX00(ha))
goto skip_msi;
@ -2986,14 +3039,13 @@ skip_msix:
ha->flags.msi_enabled = 1;
} else
ql_log(ql_log_warn, vha, 0x0039,
"MSI-X; Falling back-to INTa mode -- %d.\n", ret);
"Falling back-to INTa mode -- %d.\n", ret);
skip_msi:
/* Skip INTx on ISP82xx. */
if (!ha->flags.msi_enabled && IS_QLA82XX(ha))
return QLA_FUNCTION_FAILED;
skip_msi:
ret = request_irq(ha->pdev->irq, ha->isp_ops->intr_handler,
ha->flags.msi_enabled ? 0 : IRQF_SHARED,
QLA2XXX_DRIVER_NAME, rsp);

Просмотреть файл

@ -468,7 +468,7 @@ qla2x00_execute_fw(scsi_qla_host_t *vha, uint32_t risc_addr)
mcp->mb[1] = MSW(risc_addr);
mcp->mb[2] = LSW(risc_addr);
mcp->mb[3] = 0;
if (IS_QLA81XX(ha) || IS_QLA83XX(ha)) {
if (IS_QLA25XX(ha) || IS_QLA81XX(ha) || IS_QLA83XX(ha)) {
struct nvram_81xx *nv = ha->nvram;
mcp->mb[4] = (nv->enhanced_features &
EXTENDED_BB_CREDITS);
@ -1214,7 +1214,7 @@ qla2x00_init_firmware(scsi_qla_host_t *vha, uint16_t size)
mcp->mb[6] = MSW(MSD(ha->init_cb_dma));
mcp->mb[7] = LSW(MSD(ha->init_cb_dma));
mcp->out_mb = MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0;
if ((IS_QLA81XX(ha) || IS_QLA83XX(ha)) && ha->ex_init_cb->ex_version) {
if (ha->ex_init_cb && ha->ex_init_cb->ex_version) {
mcp->mb[1] = BIT_0;
mcp->mb[10] = MSW(ha->ex_init_cb_dma);
mcp->mb[11] = LSW(ha->ex_init_cb_dma);
@ -2800,6 +2800,75 @@ qla2x00_system_error(scsi_qla_host_t *vha)
return rval;
}
int
qla2x00_write_serdes_word(scsi_qla_host_t *vha, uint16_t addr, uint16_t data)
{
int rval;
mbx_cmd_t mc;
mbx_cmd_t *mcp = &mc;
if (!IS_QLA2031(vha->hw))
return QLA_FUNCTION_FAILED;
ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1182,
"Entered %s.\n", __func__);
mcp->mb[0] = MBC_WRITE_SERDES;
mcp->mb[1] = addr;
mcp->mb[2] = data & 0xff;
mcp->mb[3] = 0;
mcp->out_mb = MBX_3|MBX_2|MBX_1|MBX_0;
mcp->in_mb = MBX_0;
mcp->tov = MBX_TOV_SECONDS;
mcp->flags = 0;
rval = qla2x00_mailbox_command(vha, mcp);
if (rval != QLA_SUCCESS) {
ql_dbg(ql_dbg_mbx, vha, 0x1183,
"Failed=%x mb[0]=%x.\n", rval, mcp->mb[0]);
} else {
ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1184,
"Done %s.\n", __func__);
}
return rval;
}
int
qla2x00_read_serdes_word(scsi_qla_host_t *vha, uint16_t addr, uint16_t *data)
{
int rval;
mbx_cmd_t mc;
mbx_cmd_t *mcp = &mc;
if (!IS_QLA2031(vha->hw))
return QLA_FUNCTION_FAILED;
ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1185,
"Entered %s.\n", __func__);
mcp->mb[0] = MBC_READ_SERDES;
mcp->mb[1] = addr;
mcp->mb[3] = 0;
mcp->out_mb = MBX_3|MBX_1|MBX_0;
mcp->in_mb = MBX_1|MBX_0;
mcp->tov = MBX_TOV_SECONDS;
mcp->flags = 0;
rval = qla2x00_mailbox_command(vha, mcp);
*data = mcp->mb[1] & 0xff;
if (rval != QLA_SUCCESS) {
ql_dbg(ql_dbg_mbx, vha, 0x1186,
"Failed=%x mb[0]=%x.\n", rval, mcp->mb[0]);
} else {
ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1187,
"Done %s.\n", __func__);
}
return rval;
}
/**
* qla2x00_set_serdes_params() -
* @ha: HA context

Просмотреть файл

@ -1610,6 +1610,22 @@ qlafx00_timer_routine(scsi_qla_host_t *vha)
ha->mr.fw_critemp_timer_tick--;
}
}
if (ha->mr.host_info_resend) {
/*
* Incomplete host info might be sent to firmware
* during system boot - info should be resent
*/
if (ha->mr.hinfo_resend_timer_tick == 0) {
ha->mr.host_info_resend = false;
set_bit(FX00_HOST_INFO_RESEND, &vha->dpc_flags);
ha->mr.hinfo_resend_timer_tick =
QLAFX00_HINFO_RESEND_INTERVAL;
qla2xxx_wake_dpc(vha);
} else {
ha->mr.hinfo_resend_timer_tick--;
}
}
}
/*
@ -1867,6 +1883,7 @@ qlafx00_fx_disc(scsi_qla_host_t *vha, fc_port_t *fcport, uint16_t fx_type)
goto done_free_sp;
}
break;
case FXDISC_ABORT_IOCTL:
default:
break;
}
@ -1888,6 +1905,8 @@ qlafx00_fx_disc(scsi_qla_host_t *vha, fc_port_t *fcport, uint16_t fx_type)
p_sysid->sysname, SYSNAME_LENGTH);
strncpy(phost_info->nodename,
p_sysid->nodename, NODENAME_LENGTH);
if (!strcmp(phost_info->nodename, "(none)"))
ha->mr.host_info_resend = true;
strncpy(phost_info->release,
p_sysid->release, RELEASE_LENGTH);
strncpy(phost_info->version,
@ -1948,8 +1967,8 @@ qlafx00_fx_disc(scsi_qla_host_t *vha, fc_port_t *fcport, uint16_t fx_type)
if (fx_type == FXDISC_GET_CONFIG_INFO) {
struct config_info_data *pinfo =
(struct config_info_data *) fdisc->u.fxiocb.rsp_addr;
memcpy(&vha->hw->mr.product_name, pinfo->product_name,
sizeof(vha->hw->mr.product_name));
strcpy(vha->hw->model_number, pinfo->model_num);
strcpy(vha->hw->model_desc, pinfo->model_description);
memcpy(&vha->hw->mr.symbolic_name, pinfo->symbolic_name,
sizeof(vha->hw->mr.symbolic_name));
memcpy(&vha->hw->mr.serial_num, pinfo->serial_num,
@ -1993,7 +2012,11 @@ qlafx00_fx_disc(scsi_qla_host_t *vha, fc_port_t *fcport, uint16_t fx_type)
ql_dump_buffer(ql_dbg_init + ql_dbg_buffer, vha, 0x0146,
(uint8_t *)pinfo, 16);
memcpy(vha->hw->gid_list, pinfo, QLAFX00_TGT_NODE_LIST_SIZE);
}
} else if (fx_type == FXDISC_ABORT_IOCTL)
fdisc->u.fxiocb.result =
(fdisc->u.fxiocb.result == cpu_to_le32(0x68)) ?
cpu_to_le32(QLA_SUCCESS) : cpu_to_le32(QLA_FUNCTION_FAILED);
rval = le32_to_cpu(fdisc->u.fxiocb.result);
done_unmap_dma:
@ -2092,6 +2115,10 @@ qlafx00_abort_command(srb_t *sp)
/* Command not found. */
return QLA_FUNCTION_FAILED;
}
if (sp->type == SRB_FXIOCB_DCMD)
return qlafx00_fx_disc(vha, &vha->hw->mr.fcport,
FXDISC_ABORT_IOCTL);
return qlafx00_async_abt_cmd(sp);
}
@ -2419,7 +2446,6 @@ qlafx00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt)
/* Fast path completion. */
if (comp_status == CS_COMPLETE && scsi_status == 0) {
qla2x00_do_host_ramp_up(vha);
qla2x00_process_completed_request(vha, req, handle);
return;
}
@ -2630,9 +2656,6 @@ check_scsi_status:
rsp_info_len, resid_len, fw_resid_len, sense_len,
par_sense_len, rsp_info_len);
if (!res)
qla2x00_do_host_ramp_up(vha);
if (rsp->status_srb == NULL)
sp->done(ha, sp, res);
}
@ -3021,6 +3044,8 @@ qlafx00_intr_handler(int irq, void *dev_id)
vha = pci_get_drvdata(ha->pdev);
for (iter = 50; iter--; clr_intr = 0) {
stat = QLAFX00_RD_INTR_REG(ha);
if (qla2x00_check_reg_for_disconnect(vha, stat))
break;
if ((stat & QLAFX00_HST_INT_STS_BITS) == 0)
break;

Просмотреть файл

@ -304,7 +304,9 @@ struct register_host_info {
#define QLAFX00_TGT_NODE_LIST_SIZE (sizeof(uint32_t) * 32)
struct config_info_data {
uint8_t product_name[256];
uint8_t model_num[16];
uint8_t model_description[80];
uint8_t reserved0[160];
uint8_t symbolic_name[64];
uint8_t serial_num[32];
uint8_t hw_version[16];
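A quick sanity check (simple arithmetic, not stated in the patch): the three fields that replace product_name[256] occupy 16 + 80 + 160 = 256 bytes, so the offsets of the firmware-defined members that follow are unchanged.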
@ -343,6 +345,7 @@ struct config_info_data {
#define FXDISC_GET_TGT_NODE_INFO 0x80
#define FXDISC_GET_TGT_NODE_LIST 0x81
#define FXDISC_REG_HOST_INFO 0x99
#define FXDISC_ABORT_IOCTL 0xff
#define QLAFX00_HBA_ICNTRL_REG 0x20B08
#define QLAFX00_ICR_ENB_MASK 0x80000000
@ -490,7 +493,6 @@ struct qla_mt_iocb_rsp_fx00 {
#define FX00_DEF_RATOV 10
struct mr_data_fx00 {
uint8_t product_name[256];
uint8_t symbolic_name[64];
uint8_t serial_num[32];
uint8_t hw_version[16];
@ -511,6 +513,8 @@ struct mr_data_fx00 {
uint32_t old_aenmbx0_state;
uint32_t critical_temperature;
bool extended_io_enabled;
bool host_info_resend;
uint8_t hinfo_resend_timer_tick;
};
#define QLAFX00_EXTENDED_IO_EN_MASK 0x20
@ -537,7 +541,11 @@ struct mr_data_fx00 {
#define QLAFX00_RESET_INTERVAL 120 /* number of seconds */
#define QLAFX00_MAX_RESET_INTERVAL 600 /* number of seconds */
#define QLAFX00_CRITEMP_INTERVAL 60 /* number of seconds */
#define QLAFX00_HINFO_RESEND_INTERVAL 60 /* number of seconds */
#define QLAFX00_CRITEMP_THRSHLD 80 /* Celsius degrees */
/* Max concurrent IOs that can be queued */
#define QLAFX00_MAX_CANQUEUE 1024
#endif

Просмотреть файл

@ -2096,6 +2096,7 @@ qla82xx_msix_default(int irq, void *dev_id)
int status = 0;
unsigned long flags;
uint32_t stat = 0;
uint32_t host_int = 0;
uint16_t mb[4];
rsp = (struct rsp_que *) dev_id;
@ -2111,7 +2112,10 @@ qla82xx_msix_default(int irq, void *dev_id)
spin_lock_irqsave(&ha->hardware_lock, flags);
vha = pci_get_drvdata(ha->pdev);
do {
if (RD_REG_DWORD(&reg->host_int)) {
host_int = RD_REG_DWORD(&reg->host_int);
if (qla2x00_check_reg_for_disconnect(vha, host_int))
break;
if (host_int) {
stat = RD_REG_DWORD(&reg->host_status);
switch (stat & 0xff) {
@ -2156,6 +2160,7 @@ qla82xx_msix_rsp_q(int irq, void *dev_id)
struct rsp_que *rsp;
struct device_reg_82xx __iomem *reg;
unsigned long flags;
uint32_t host_int = 0;
rsp = (struct rsp_que *) dev_id;
if (!rsp) {
@ -2168,8 +2173,12 @@ qla82xx_msix_rsp_q(int irq, void *dev_id)
reg = &ha->iobase->isp82;
spin_lock_irqsave(&ha->hardware_lock, flags);
vha = pci_get_drvdata(ha->pdev);
host_int = RD_REG_DWORD(&reg->host_int);
if (qla2x00_check_reg_for_disconnect(vha, host_int))
goto out;
qla24xx_process_response_queue(vha, rsp);
WRT_REG_DWORD(&reg->host_int, 0);
out:
spin_unlock_irqrestore(&ha->hardware_lock, flags);
return IRQ_HANDLED;
}
@ -2183,6 +2192,7 @@ qla82xx_poll(int irq, void *dev_id)
struct device_reg_82xx __iomem *reg;
int status = 0;
uint32_t stat;
uint32_t host_int = 0;
uint16_t mb[4];
unsigned long flags;
@ -2198,7 +2208,10 @@ qla82xx_poll(int irq, void *dev_id)
spin_lock_irqsave(&ha->hardware_lock, flags);
vha = pci_get_drvdata(ha->pdev);
if (RD_REG_DWORD(&reg->host_int)) {
host_int = RD_REG_DWORD(&reg->host_int);
if (qla2x00_check_reg_for_disconnect(vha, host_int))
goto out;
if (host_int) {
stat = RD_REG_DWORD(&reg->host_status);
switch (stat & 0xff) {
case 0x1:
@ -2224,8 +2237,9 @@ qla82xx_poll(int irq, void *dev_id)
stat * 0xff);
break;
}
WRT_REG_DWORD(&reg->host_int, 0);
}
WRT_REG_DWORD(&reg->host_int, 0);
out:
spin_unlock_irqrestore(&ha->hardware_lock, flags);
}
@ -3003,7 +3017,7 @@ qla8xxx_dev_failed_handler(scsi_qla_host_t *vha)
qla82xx_clear_drv_active(ha);
qla82xx_idc_unlock(ha);
} else if (IS_QLA8044(ha)) {
qla8044_clear_drv_active(vha);
qla8044_clear_drv_active(ha);
qla8044_idc_unlock(ha);
}

Просмотреть файл

@ -1257,10 +1257,10 @@ exit_start_fw:
}
void
qla8044_clear_drv_active(struct scsi_qla_host *vha)
qla8044_clear_drv_active(struct qla_hw_data *ha)
{
uint32_t drv_active;
struct qla_hw_data *ha = vha->hw;
struct scsi_qla_host *vha = pci_get_drvdata(ha->pdev);
drv_active = qla8044_rd_direct(vha, QLA8044_CRB_DRV_ACTIVE_INDEX);
drv_active &= ~(1 << (ha->portnum));
@ -1324,7 +1324,7 @@ qla8044_device_bootstrap(struct scsi_qla_host *vha)
if (rval != QLA_SUCCESS) {
ql_log(ql_log_info, vha, 0xb0b3,
"%s: HW State: FAILED\n", __func__);
qla8044_clear_drv_active(vha);
qla8044_clear_drv_active(ha);
qla8044_wr_direct(vha, QLA8044_CRB_DEV_STATE_INDEX,
QLA8XXX_DEV_FAILED);
return rval;
@ -1555,6 +1555,15 @@ qla8044_need_reset_handler(struct scsi_qla_host *vha)
qla8044_idc_lock(ha);
}
drv_state = qla8044_rd_direct(vha,
QLA8044_CRB_DRV_STATE_INDEX);
drv_active = qla8044_rd_direct(vha,
QLA8044_CRB_DRV_ACTIVE_INDEX);
ql_log(ql_log_info, vha, 0xb0c5,
"%s(%ld): drv_state = 0x%x, drv_active = 0x%x\n",
__func__, vha->host_no, drv_state, drv_active);
if (!ha->flags.nic_core_reset_owner) {
ql_dbg(ql_dbg_p3p, vha, 0xb0c3,
"%s(%ld): reset acknowledged\n",
@ -1580,23 +1589,15 @@ qla8044_need_reset_handler(struct scsi_qla_host *vha)
dev_state = qla8044_rd_direct(vha,
QLA8044_CRB_DEV_STATE_INDEX);
} while (dev_state == QLA8XXX_DEV_NEED_RESET);
} while (((drv_state & drv_active) != drv_active) &&
(dev_state == QLA8XXX_DEV_NEED_RESET));
} else {
qla8044_set_rst_ready(vha);
/* wait for 10 seconds for reset ack from all functions */
reset_timeout = jiffies + (ha->fcoe_reset_timeout * HZ);
drv_state = qla8044_rd_direct(vha,
QLA8044_CRB_DRV_STATE_INDEX);
drv_active = qla8044_rd_direct(vha,
QLA8044_CRB_DRV_ACTIVE_INDEX);
ql_log(ql_log_info, vha, 0xb0c5,
"%s(%ld): drv_state = 0x%x, drv_active = 0x%x\n",
__func__, vha->host_no, drv_state, drv_active);
while (drv_state != drv_active) {
while ((drv_state & drv_active) != drv_active) {
if (time_after_eq(jiffies, reset_timeout)) {
ql_log(ql_log_info, vha, 0xb0c6,
"%s: RESET TIMEOUT!"
@ -1736,7 +1737,7 @@ qla8044_update_idc_reg(struct scsi_qla_host *vha)
rval = qla8044_set_idc_ver(vha);
if (rval == QLA_FUNCTION_FAILED)
qla8044_clear_drv_active(vha);
qla8044_clear_drv_active(ha);
qla8044_idc_unlock(ha);
exit_update_idc_reg:
@ -1859,7 +1860,7 @@ qla8044_device_state_handler(struct scsi_qla_host *vha)
goto exit;
case QLA8XXX_DEV_COLD:
rval = qla8044_device_bootstrap(vha);
goto exit;
break;
case QLA8XXX_DEV_INITIALIZING:
qla8044_idc_unlock(ha);
msleep(1000);


@ -110,7 +110,8 @@ MODULE_PARM_DESC(ql2xfdmienable,
"Enables FDMI registrations. "
"0 - no FDMI. Default is 1 - perform FDMI.");
int ql2xmaxqdepth = MAX_Q_DEPTH;
#define MAX_Q_DEPTH 32
static int ql2xmaxqdepth = MAX_Q_DEPTH;
module_param(ql2xmaxqdepth, int, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(ql2xmaxqdepth,
"Maximum queue depth to set for each LUN. "
@ -728,10 +729,8 @@ qla2xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
}
sp = qla2x00_get_sp(vha, fcport, GFP_ATOMIC);
if (!sp) {
set_bit(HOST_RAMP_DOWN_QUEUE_DEPTH, &vha->dpc_flags);
if (!sp)
goto qc24_host_busy;
}
sp->u.scmd.cmd = cmd;
sp->type = SRB_SCSI_CMD;
@ -744,7 +743,6 @@ qla2xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
if (rval != QLA_SUCCESS) {
ql_dbg(ql_dbg_io + ql_dbg_verbose, vha, 0x3013,
"Start scsi failed rval=%d for cmd=%p.\n", rval, cmd);
set_bit(HOST_RAMP_DOWN_QUEUE_DEPTH, &vha->dpc_flags);
goto qc24_host_busy_free_sp;
}
@ -1474,81 +1472,6 @@ qla2x00_change_queue_type(struct scsi_device *sdev, int tag_type)
return tag_type;
}
static void
qla2x00_host_ramp_down_queuedepth(scsi_qla_host_t *vha)
{
scsi_qla_host_t *vp;
struct Scsi_Host *shost;
struct scsi_device *sdev;
struct qla_hw_data *ha = vha->hw;
unsigned long flags;
ha->host_last_rampdown_time = jiffies;
if (ha->cfg_lun_q_depth <= vha->host->cmd_per_lun)
return;
if ((ha->cfg_lun_q_depth / 2) < vha->host->cmd_per_lun)
ha->cfg_lun_q_depth = vha->host->cmd_per_lun;
else
ha->cfg_lun_q_depth = ha->cfg_lun_q_depth / 2;
/*
* Geometrically ramp down the queue depth for all devices on this
* adapter
*/
spin_lock_irqsave(&ha->vport_slock, flags);
list_for_each_entry(vp, &ha->vp_list, list) {
shost = vp->host;
shost_for_each_device(sdev, shost) {
if (sdev->queue_depth > shost->cmd_per_lun) {
if (sdev->queue_depth < ha->cfg_lun_q_depth)
continue;
ql_dbg(ql_dbg_io, vp, 0x3031,
"%ld:%d:%d: Ramping down queue depth to %d",
vp->host_no, sdev->id, sdev->lun,
ha->cfg_lun_q_depth);
qla2x00_change_queue_depth(sdev,
ha->cfg_lun_q_depth, SCSI_QDEPTH_DEFAULT);
}
}
}
spin_unlock_irqrestore(&ha->vport_slock, flags);
return;
}
static void
qla2x00_host_ramp_up_queuedepth(scsi_qla_host_t *vha)
{
scsi_qla_host_t *vp;
struct Scsi_Host *shost;
struct scsi_device *sdev;
struct qla_hw_data *ha = vha->hw;
unsigned long flags;
ha->host_last_rampup_time = jiffies;
ha->cfg_lun_q_depth++;
/*
* Linearly ramp up the queue depth for all devices on this
* adapter
*/
spin_lock_irqsave(&ha->vport_slock, flags);
list_for_each_entry(vp, &ha->vp_list, list) {
shost = vp->host;
shost_for_each_device(sdev, shost) {
if (sdev->queue_depth > ha->cfg_lun_q_depth)
continue;
qla2x00_change_queue_depth(sdev, ha->cfg_lun_q_depth,
SCSI_QDEPTH_RAMP_UP);
}
}
spin_unlock_irqrestore(&ha->vport_slock, flags);
return;
}
/**
* qla2x00_config_dma_addressing() - Configure OS DMA addressing method.
* @ha: HA context
@ -2424,7 +2347,6 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
ha->init_cb_size = sizeof(init_cb_t);
ha->link_data_rate = PORT_SPEED_UNKNOWN;
ha->optrom_size = OPTROM_SIZE_2300;
ha->cfg_lun_q_depth = ql2xmaxqdepth;
/* Assign ISP specific operations. */
if (IS_QLA2100(ha)) {
@ -2573,6 +2495,8 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
ha->mr.fw_reset_timer_tick = QLAFX00_RESET_INTERVAL;
ha->mr.fw_critemp_timer_tick = QLAFX00_CRITEMP_INTERVAL;
ha->mr.fw_hbt_en = 1;
ha->mr.host_info_resend = false;
ha->mr.hinfo_resend_timer_tick = QLAFX00_HINFO_RESEND_INTERVAL;
}
ql_dbg_pci(ql_dbg_init, pdev, 0x001e,
@ -2638,7 +2562,7 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
host = base_vha->host;
base_vha->req = req;
if (IS_QLAFX00(ha))
host->can_queue = 1024;
host->can_queue = QLAFX00_MAX_CANQUEUE;
else
host->can_queue = req->length + 128;
if (IS_QLA2XXX_MIDTYPE(ha))
@ -2816,6 +2740,8 @@ que_init:
*/
qla2xxx_wake_dpc(base_vha);
INIT_WORK(&ha->board_disable, qla2x00_disable_board_on_pci_error);
if (IS_QLA8031(ha) || IS_MCTP_CAPABLE(ha)) {
sprintf(wq_name, "qla2xxx_%lu_dpc_lp_wq", base_vha->host_no);
ha->dpc_lp_wq = create_singlethread_workqueue(wq_name);
@ -2955,7 +2881,7 @@ probe_hw_failed:
}
if (IS_QLA8044(ha)) {
qla8044_idc_lock(ha);
qla8044_clear_drv_active(base_vha);
qla8044_clear_drv_active(ha);
qla8044_idc_unlock(ha);
}
iospace_config_failed:
@ -2979,22 +2905,6 @@ probe_out:
return ret;
}
static void
qla2x00_stop_dpc_thread(scsi_qla_host_t *vha)
{
struct qla_hw_data *ha = vha->hw;
struct task_struct *t = ha->dpc_thread;
if (ha->dpc_thread == NULL)
return;
/*
* qla2xxx_wake_dpc checks for ->dpc_thread
* so we need to zero it out.
*/
ha->dpc_thread = NULL;
kthread_stop(t);
}
static void
qla2x00_shutdown(struct pci_dev *pdev)
{
@ -3038,29 +2948,14 @@ qla2x00_shutdown(struct pci_dev *pdev)
qla2x00_free_fw_dump(ha);
}
/* Deletes all the virtual ports for a given ha */
static void
qla2x00_remove_one(struct pci_dev *pdev)
qla2x00_delete_all_vps(struct qla_hw_data *ha, scsi_qla_host_t *base_vha)
{
scsi_qla_host_t *base_vha, *vha;
struct qla_hw_data *ha;
struct Scsi_Host *scsi_host;
scsi_qla_host_t *vha;
unsigned long flags;
/*
* If the PCI device is disabled that means that probe failed and any
* resources should have been cleaned up on probe exit.
*/
if (!atomic_read(&pdev->enable_cnt))
return;
base_vha = pci_get_drvdata(pdev);
ha = base_vha->hw;
ha->flags.host_shutting_down = 1;
set_bit(UNLOADING, &base_vha->dpc_flags);
if (IS_QLAFX00(ha))
qlafx00_driver_shutdown(base_vha, 20);
mutex_lock(&ha->vport_lock);
while (ha->cur_vport_count) {
spin_lock_irqsave(&ha->vport_slock, flags);
@ -3068,7 +2963,7 @@ qla2x00_remove_one(struct pci_dev *pdev)
BUG_ON(base_vha->list.next == &ha->vp_list);
/* This assumes first entry in ha->vp_list is always base vha */
vha = list_first_entry(&base_vha->list, scsi_qla_host_t, list);
scsi_host_get(vha->host);
scsi_host = scsi_host_get(vha->host);
spin_unlock_irqrestore(&ha->vport_slock, flags);
mutex_unlock(&ha->vport_lock);
@ -3079,27 +2974,12 @@ qla2x00_remove_one(struct pci_dev *pdev)
mutex_lock(&ha->vport_lock);
}
mutex_unlock(&ha->vport_lock);
}
if (IS_QLA8031(ha)) {
ql_dbg(ql_dbg_p3p, base_vha, 0xb07e,
"Clearing fcoe driver presence.\n");
if (qla83xx_clear_drv_presence(base_vha) != QLA_SUCCESS)
ql_dbg(ql_dbg_p3p, base_vha, 0xb079,
"Error while clearing DRV-Presence.\n");
}
qla2x00_abort_all_cmds(base_vha, DID_NO_CONNECT << 16);
qla2x00_dfs_remove(base_vha);
qla84xx_put_chip(base_vha);
/* Disable timer */
if (base_vha->timer_active)
qla2x00_stop_timer(base_vha);
base_vha->flags.online = 0;
/* Stops all deferred work threads */
static void
qla2x00_destroy_deferred_work(struct qla_hw_data *ha)
{
/* Flush the work queue and remove it */
if (ha->wq) {
flush_workqueue(ha->wq);
@ -3133,27 +3013,12 @@ qla2x00_remove_one(struct pci_dev *pdev)
ha->dpc_thread = NULL;
kthread_stop(t);
}
qlt_remove_target(ha, base_vha);
}
qla2x00_free_sysfs_attr(base_vha);
fc_remove_host(base_vha->host);
scsi_remove_host(base_vha->host);
qla2x00_free_device(base_vha);
scsi_host_put(base_vha->host);
if (IS_QLA8044(ha)) {
qla8044_idc_lock(ha);
qla8044_clear_drv_active(base_vha);
qla8044_idc_unlock(ha);
}
static void
qla2x00_unmap_iobases(struct qla_hw_data *ha)
{
if (IS_QLA82XX(ha)) {
qla82xx_idc_lock(ha);
qla82xx_clear_drv_active(ha);
qla82xx_idc_unlock(ha);
iounmap((device_reg_t __iomem *)ha->nx_pcibase);
if (!ql2xdbwr)
@ -3171,6 +3036,84 @@ qla2x00_remove_one(struct pci_dev *pdev)
if (IS_QLA83XX(ha) && ha->msixbase)
iounmap(ha->msixbase);
}
}
static void
qla2x00_clear_drv_active(scsi_qla_host_t *vha)
{
struct qla_hw_data *ha = vha->hw;
if (IS_QLA8044(ha)) {
qla8044_idc_lock(ha);
qla8044_clear_drv_active(ha);
qla8044_idc_unlock(ha);
} else if (IS_QLA82XX(ha)) {
qla82xx_idc_lock(ha);
qla82xx_clear_drv_active(ha);
qla82xx_idc_unlock(ha);
}
}
static void
qla2x00_remove_one(struct pci_dev *pdev)
{
scsi_qla_host_t *base_vha;
struct qla_hw_data *ha;
/*
* If the PCI device is disabled that means that probe failed and any
* resources should have been cleaned up on probe exit.
*/
if (!atomic_read(&pdev->enable_cnt))
return;
base_vha = pci_get_drvdata(pdev);
ha = base_vha->hw;
set_bit(UNLOADING, &base_vha->dpc_flags);
if (IS_QLAFX00(ha))
qlafx00_driver_shutdown(base_vha, 20);
qla2x00_delete_all_vps(ha, base_vha);
if (IS_QLA8031(ha)) {
ql_dbg(ql_dbg_p3p, base_vha, 0xb07e,
"Clearing fcoe driver presence.\n");
if (qla83xx_clear_drv_presence(base_vha) != QLA_SUCCESS)
ql_dbg(ql_dbg_p3p, base_vha, 0xb079,
"Error while clearing DRV-Presence.\n");
}
qla2x00_abort_all_cmds(base_vha, DID_NO_CONNECT << 16);
qla2x00_dfs_remove(base_vha);
qla84xx_put_chip(base_vha);
/* Disable timer */
if (base_vha->timer_active)
qla2x00_stop_timer(base_vha);
base_vha->flags.online = 0;
qla2x00_destroy_deferred_work(ha);
qlt_remove_target(ha, base_vha);
qla2x00_free_sysfs_attr(base_vha, true);
fc_remove_host(base_vha->host);
scsi_remove_host(base_vha->host);
qla2x00_free_device(base_vha);
scsi_host_put(base_vha->host);
qla2x00_clear_drv_active(base_vha);
qla2x00_unmap_iobases(ha);
pci_release_selected_regions(ha->pdev, ha->bars);
kfree(ha);
@ -3192,9 +3135,8 @@ qla2x00_free_device(scsi_qla_host_t *vha)
if (vha->timer_active)
qla2x00_stop_timer(vha);
qla2x00_stop_dpc_thread(vha);
qla25xx_delete_queues(vha);
if (ha->flags.fce_enabled)
qla2x00_disable_fce_trace(vha, NULL, NULL);
@ -4731,6 +4673,66 @@ exit:
return rval;
}
void
qla2x00_disable_board_on_pci_error(struct work_struct *work)
{
struct qla_hw_data *ha = container_of(work, struct qla_hw_data,
board_disable);
struct pci_dev *pdev = ha->pdev;
scsi_qla_host_t *base_vha = pci_get_drvdata(ha->pdev);
ql_log(ql_log_warn, base_vha, 0x015b,
"Disabling adapter.\n");
set_bit(UNLOADING, &base_vha->dpc_flags);
qla2x00_delete_all_vps(ha, base_vha);
qla2x00_abort_all_cmds(base_vha, DID_NO_CONNECT << 16);
qla2x00_dfs_remove(base_vha);
qla84xx_put_chip(base_vha);
if (base_vha->timer_active)
qla2x00_stop_timer(base_vha);
base_vha->flags.online = 0;
qla2x00_destroy_deferred_work(ha);
/*
* Do not try to stop beacon blink as it will issue a mailbox
* command.
*/
qla2x00_free_sysfs_attr(base_vha, false);
fc_remove_host(base_vha->host);
scsi_remove_host(base_vha->host);
base_vha->flags.init_done = 0;
qla25xx_delete_queues(base_vha);
qla2x00_free_irqs(base_vha);
qla2x00_free_fcports(base_vha);
qla2x00_mem_free(ha);
qla82xx_md_free(base_vha);
qla2x00_free_queues(ha);
scsi_host_put(base_vha->host);
qla2x00_unmap_iobases(ha);
pci_release_selected_regions(ha->pdev, ha->bars);
kfree(ha);
ha = NULL;
pci_disable_pcie_error_reporting(pdev);
pci_disable_device(pdev);
pci_set_drvdata(pdev, NULL);
}
/**************************************************************************
* qla2x00_do_dpc
* This kernel thread is a task that is scheduled by the interrupt handler
@ -4863,6 +4865,14 @@ qla2x00_do_dpc(void *data)
ql_dbg(ql_dbg_dpc, base_vha, 0x401f,
"ISPFx00 Target Scan End\n");
}
if (test_and_clear_bit(FX00_HOST_INFO_RESEND,
&base_vha->dpc_flags)) {
ql_dbg(ql_dbg_dpc, base_vha, 0x4023,
"ISPFx00 Host Info resend scheduled\n");
qlafx00_fx_disc(base_vha,
&base_vha->hw->mr.fcport,
FXDISC_REG_HOST_INFO);
}
}
if (test_and_clear_bit(ISP_ABORT_NEEDED,
@ -4990,17 +5000,6 @@ loop_resync_check:
qla2xxx_flash_npiv_conf(base_vha);
}
if (test_and_clear_bit(HOST_RAMP_DOWN_QUEUE_DEPTH,
&base_vha->dpc_flags)) {
/* Prevents simultaneous ramp up and down */
clear_bit(HOST_RAMP_UP_QUEUE_DEPTH,
&base_vha->dpc_flags);
qla2x00_host_ramp_down_queuedepth(base_vha);
}
if (test_and_clear_bit(HOST_RAMP_UP_QUEUE_DEPTH,
&base_vha->dpc_flags))
qla2x00_host_ramp_up_queuedepth(base_vha);
intr_on_check:
if (!ha->interrupts_on)
ha->isp_ops->enable_intrs(ha);
@ -5095,9 +5094,20 @@ qla2x00_timer(scsi_qla_host_t *vha)
return;
}
/* Hardware read to raise pending EEH errors during mailbox waits. */
if (!pci_channel_offline(ha->pdev))
/*
* Hardware read to raise pending EEH errors during mailbox waits. If
* the read returns -1 then disable the board.
*/
if (!pci_channel_offline(ha->pdev)) {
pci_read_config_word(ha->pdev, PCI_VENDOR_ID, &w);
if (w == 0xffff)
/*
* Schedule this on the default system workqueue so that
* all the adapter workqueues and the DPC thread can be
* shutdown cleanly.
*/
schedule_work(&ha->board_disable);
}
/* Make sure qla82xx_watchdog is run only for physical port */
if (!vha->vp_idx && IS_P3P_TYPE(ha)) {
@ -5182,7 +5192,6 @@ qla2x00_timer(scsi_qla_host_t *vha)
"Loop down - seconds remaining %d.\n",
atomic_read(&vha->loop_down_timer));
}
/* Check if beacon LED needs to be blinked for physical host only */
if (!vha->vp_idx && (ha->beacon_blink_led == 1)) {
/* There is no beacon_blink function for ISP82xx */
@ -5206,9 +5215,7 @@ qla2x00_timer(scsi_qla_host_t *vha)
test_bit(ISP_UNRECOVERABLE, &vha->dpc_flags) ||
test_bit(FCOE_CTX_RESET_NEEDED, &vha->dpc_flags) ||
test_bit(VP_DPC_NEEDED, &vha->dpc_flags) ||
test_bit(RELOGIN_NEEDED, &vha->dpc_flags) ||
test_bit(HOST_RAMP_DOWN_QUEUE_DEPTH, &vha->dpc_flags) ||
test_bit(HOST_RAMP_UP_QUEUE_DEPTH, &vha->dpc_flags))) {
test_bit(RELOGIN_NEEDED, &vha->dpc_flags))) {
ql_dbg(ql_dbg_timer, vha, 0x600b,
"isp_abort_needed=%d loop_resync_needed=%d "
"fcport_update_needed=%d start_dpc=%d "
@ -5221,15 +5228,12 @@ qla2x00_timer(scsi_qla_host_t *vha)
ql_dbg(ql_dbg_timer, vha, 0x600c,
"beacon_blink_needed=%d isp_unrecoverable=%d "
"fcoe_ctx_reset_needed=%d vp_dpc_needed=%d "
"relogin_needed=%d, host_ramp_down_needed=%d "
"host_ramp_up_needed=%d.\n",
"relogin_needed=%d.\n",
test_bit(BEACON_BLINK_NEEDED, &vha->dpc_flags),
test_bit(ISP_UNRECOVERABLE, &vha->dpc_flags),
test_bit(FCOE_CTX_RESET_NEEDED, &vha->dpc_flags),
test_bit(VP_DPC_NEEDED, &vha->dpc_flags),
test_bit(RELOGIN_NEEDED, &vha->dpc_flags),
test_bit(HOST_RAMP_UP_QUEUE_DEPTH, &vha->dpc_flags),
test_bit(HOST_RAMP_DOWN_QUEUE_DEPTH, &vha->dpc_flags));
test_bit(RELOGIN_NEEDED, &vha->dpc_flags));
qla2xxx_wake_dpc(vha);
}


@ -7,7 +7,7 @@
/*
* Driver version
*/
#define QLA2XXX_VERSION "8.06.00.08-k"
#define QLA2XXX_VERSION "8.06.00.12-k"
#define QLA_DRIVER_MAJOR_VER 8
#define QLA_DRIVER_MINOR_VER 6


@ -446,6 +446,363 @@ leave:
return rval;
}
static void ql4xxx_execute_diag_cmd(struct bsg_job *bsg_job)
{
struct Scsi_Host *host = iscsi_job_to_shost(bsg_job);
struct scsi_qla_host *ha = to_qla_host(host);
struct iscsi_bsg_request *bsg_req = bsg_job->request;
struct iscsi_bsg_reply *bsg_reply = bsg_job->reply;
uint8_t *rsp_ptr = NULL;
uint32_t mbox_cmd[MBOX_REG_COUNT];
uint32_t mbox_sts[MBOX_REG_COUNT];
int status = QLA_ERROR;
DEBUG2(ql4_printk(KERN_INFO, ha, "%s: in\n", __func__));
if (test_bit(DPC_RESET_HA, &ha->dpc_flags)) {
ql4_printk(KERN_INFO, ha, "%s: Adapter reset in progress. Invalid Request\n",
__func__);
bsg_reply->result = DID_ERROR << 16;
goto exit_diag_mem_test;
}
bsg_reply->reply_payload_rcv_len = 0;
memcpy(mbox_cmd, &bsg_req->rqst_data.h_vendor.vendor_cmd[1],
sizeof(uint32_t) * MBOX_REG_COUNT);
DEBUG2(ql4_printk(KERN_INFO, ha,
"%s: mbox_cmd: %08X %08X %08X %08X %08X %08X %08X %08X\n",
__func__, mbox_cmd[0], mbox_cmd[1], mbox_cmd[2],
mbox_cmd[3], mbox_cmd[4], mbox_cmd[5], mbox_cmd[6],
mbox_cmd[7]));
status = qla4xxx_mailbox_command(ha, MBOX_REG_COUNT, 8, &mbox_cmd[0],
&mbox_sts[0]);
DEBUG2(ql4_printk(KERN_INFO, ha,
"%s: mbox_sts: %08X %08X %08X %08X %08X %08X %08X %08X\n",
__func__, mbox_sts[0], mbox_sts[1], mbox_sts[2],
mbox_sts[3], mbox_sts[4], mbox_sts[5], mbox_sts[6],
mbox_sts[7]));
if (status == QLA_SUCCESS)
bsg_reply->result = DID_OK << 16;
else
bsg_reply->result = DID_ERROR << 16;
/* Send mbox_sts to application */
bsg_job->reply_len = sizeof(struct iscsi_bsg_reply) + sizeof(mbox_sts);
rsp_ptr = ((uint8_t *)bsg_reply) + sizeof(struct iscsi_bsg_reply);
memcpy(rsp_ptr, mbox_sts, sizeof(mbox_sts));
exit_diag_mem_test:
DEBUG2(ql4_printk(KERN_INFO, ha,
"%s: bsg_reply->result = x%x, status = %s\n",
__func__, bsg_reply->result, STATUS(status)));
bsg_job_done(bsg_job, bsg_reply->result,
bsg_reply->reply_payload_rcv_len);
}
static int qla4_83xx_wait_for_loopback_config_comp(struct scsi_qla_host *ha,
int wait_for_link)
{
int status = QLA_SUCCESS;
if (!wait_for_completion_timeout(&ha->idc_comp, (IDC_COMP_TOV * HZ))) {
ql4_printk(KERN_INFO, ha, "%s: IDC Complete notification not received, Waiting for another %d timeout",
__func__, ha->idc_extend_tmo);
if (ha->idc_extend_tmo) {
if (!wait_for_completion_timeout(&ha->idc_comp,
(ha->idc_extend_tmo * HZ))) {
ha->notify_idc_comp = 0;
ha->notify_link_up_comp = 0;
ql4_printk(KERN_WARNING, ha, "%s: IDC Complete notification not received",
__func__);
status = QLA_ERROR;
goto exit_wait;
} else {
DEBUG2(ql4_printk(KERN_INFO, ha,
"%s: IDC Complete notification received\n",
__func__));
}
}
} else {
DEBUG2(ql4_printk(KERN_INFO, ha,
"%s: IDC Complete notification received\n",
__func__));
}
ha->notify_idc_comp = 0;
if (wait_for_link) {
if (!wait_for_completion_timeout(&ha->link_up_comp,
(IDC_COMP_TOV * HZ))) {
ha->notify_link_up_comp = 0;
ql4_printk(KERN_WARNING, ha, "%s: LINK UP notification not received",
__func__);
status = QLA_ERROR;
goto exit_wait;
} else {
DEBUG2(ql4_printk(KERN_INFO, ha,
"%s: LINK UP notification received\n",
__func__));
}
ha->notify_link_up_comp = 0;
}
exit_wait:
return status;
}
static int qla4_83xx_pre_loopback_config(struct scsi_qla_host *ha,
uint32_t *mbox_cmd)
{
uint32_t config = 0;
int status = QLA_SUCCESS;
DEBUG2(ql4_printk(KERN_INFO, ha, "%s: in\n", __func__));
status = qla4_83xx_get_port_config(ha, &config);
if (status != QLA_SUCCESS)
goto exit_pre_loopback_config;
DEBUG2(ql4_printk(KERN_INFO, ha, "%s: Default port config=%08X\n",
__func__, config));
if ((config & ENABLE_INTERNAL_LOOPBACK) ||
(config & ENABLE_EXTERNAL_LOOPBACK)) {
ql4_printk(KERN_INFO, ha, "%s: Loopback diagnostics already in progress. Invalid requiest\n",
__func__);
goto exit_pre_loopback_config;
}
if (mbox_cmd[1] == QL_DIAG_CMD_TEST_INT_LOOPBACK)
config |= ENABLE_INTERNAL_LOOPBACK;
if (mbox_cmd[1] == QL_DIAG_CMD_TEST_EXT_LOOPBACK)
config |= ENABLE_EXTERNAL_LOOPBACK;
config &= ~ENABLE_DCBX;
DEBUG2(ql4_printk(KERN_INFO, ha, "%s: New port config=%08X\n",
__func__, config));
ha->notify_idc_comp = 1;
ha->notify_link_up_comp = 1;
/* get the link state */
qla4xxx_get_firmware_state(ha);
status = qla4_83xx_set_port_config(ha, &config);
if (status != QLA_SUCCESS) {
ha->notify_idc_comp = 0;
ha->notify_link_up_comp = 0;
goto exit_pre_loopback_config;
}
exit_pre_loopback_config:
DEBUG2(ql4_printk(KERN_INFO, ha, "%s: status = %s\n", __func__,
STATUS(status)));
return status;
}
static int qla4_83xx_post_loopback_config(struct scsi_qla_host *ha,
uint32_t *mbox_cmd)
{
int status = QLA_SUCCESS;
uint32_t config = 0;
DEBUG2(ql4_printk(KERN_INFO, ha, "%s: in\n", __func__));
status = qla4_83xx_get_port_config(ha, &config);
if (status != QLA_SUCCESS)
goto exit_post_loopback_config;
DEBUG2(ql4_printk(KERN_INFO, ha, "%s: port config=%08X\n", __func__,
config));
if (mbox_cmd[1] == QL_DIAG_CMD_TEST_INT_LOOPBACK)
config &= ~ENABLE_INTERNAL_LOOPBACK;
else if (mbox_cmd[1] == QL_DIAG_CMD_TEST_EXT_LOOPBACK)
config &= ~ENABLE_EXTERNAL_LOOPBACK;
config |= ENABLE_DCBX;
DEBUG2(ql4_printk(KERN_INFO, ha,
"%s: Restore default port config=%08X\n", __func__,
config));
ha->notify_idc_comp = 1;
if (ha->addl_fw_state & FW_ADDSTATE_LINK_UP)
ha->notify_link_up_comp = 1;
status = qla4_83xx_set_port_config(ha, &config);
if (status != QLA_SUCCESS) {
ql4_printk(KERN_INFO, ha, "%s: Scheduling adapter reset\n",
__func__);
set_bit(DPC_RESET_HA, &ha->dpc_flags);
clear_bit(AF_LOOPBACK, &ha->flags);
goto exit_post_loopback_config;
}
exit_post_loopback_config:
DEBUG2(ql4_printk(KERN_INFO, ha, "%s: status = %s\n", __func__,
STATUS(status)));
return status;
}
static void qla4xxx_execute_diag_loopback_cmd(struct bsg_job *bsg_job)
{
struct Scsi_Host *host = iscsi_job_to_shost(bsg_job);
struct scsi_qla_host *ha = to_qla_host(host);
struct iscsi_bsg_request *bsg_req = bsg_job->request;
struct iscsi_bsg_reply *bsg_reply = bsg_job->reply;
uint8_t *rsp_ptr = NULL;
uint32_t mbox_cmd[MBOX_REG_COUNT];
uint32_t mbox_sts[MBOX_REG_COUNT];
int wait_for_link = 1;
int status = QLA_ERROR;
DEBUG2(ql4_printk(KERN_INFO, ha, "%s: in\n", __func__));
bsg_reply->reply_payload_rcv_len = 0;
if (test_bit(AF_LOOPBACK, &ha->flags)) {
ql4_printk(KERN_INFO, ha, "%s: Loopback Diagnostics already in progress. Invalid Request\n",
__func__);
bsg_reply->result = DID_ERROR << 16;
goto exit_loopback_cmd;
}
if (test_bit(DPC_RESET_HA, &ha->dpc_flags)) {
ql4_printk(KERN_INFO, ha, "%s: Adapter reset in progress. Invalid Request\n",
__func__);
bsg_reply->result = DID_ERROR << 16;
goto exit_loopback_cmd;
}
memcpy(mbox_cmd, &bsg_req->rqst_data.h_vendor.vendor_cmd[1],
sizeof(uint32_t) * MBOX_REG_COUNT);
if (is_qla8032(ha) || is_qla8042(ha)) {
status = qla4_83xx_pre_loopback_config(ha, mbox_cmd);
if (status != QLA_SUCCESS) {
bsg_reply->result = DID_ERROR << 16;
goto exit_loopback_cmd;
}
status = qla4_83xx_wait_for_loopback_config_comp(ha,
wait_for_link);
if (status != QLA_SUCCESS) {
bsg_reply->result = DID_TIME_OUT << 16;
goto restore;
}
}
DEBUG2(ql4_printk(KERN_INFO, ha,
"%s: mbox_cmd: %08X %08X %08X %08X %08X %08X %08X %08X\n",
__func__, mbox_cmd[0], mbox_cmd[1], mbox_cmd[2],
mbox_cmd[3], mbox_cmd[4], mbox_cmd[5], mbox_cmd[6],
mbox_cmd[7]));
status = qla4xxx_mailbox_command(ha, MBOX_REG_COUNT, 8, &mbox_cmd[0],
&mbox_sts[0]);
if (status == QLA_SUCCESS)
bsg_reply->result = DID_OK << 16;
else
bsg_reply->result = DID_ERROR << 16;
DEBUG2(ql4_printk(KERN_INFO, ha,
"%s: mbox_sts: %08X %08X %08X %08X %08X %08X %08X %08X\n",
__func__, mbox_sts[0], mbox_sts[1], mbox_sts[2],
mbox_sts[3], mbox_sts[4], mbox_sts[5], mbox_sts[6],
mbox_sts[7]));
/* Send mbox_sts to application */
bsg_job->reply_len = sizeof(struct iscsi_bsg_reply) + sizeof(mbox_sts);
rsp_ptr = ((uint8_t *)bsg_reply) + sizeof(struct iscsi_bsg_reply);
memcpy(rsp_ptr, mbox_sts, sizeof(mbox_sts));
restore:
if (is_qla8032(ha) || is_qla8042(ha)) {
status = qla4_83xx_post_loopback_config(ha, mbox_cmd);
if (status != QLA_SUCCESS) {
bsg_reply->result = DID_ERROR << 16;
goto exit_loopback_cmd;
}
/* for pre_loopback_config() wait for LINK UP only
* if PHY LINK is UP */
if (!(ha->addl_fw_state & FW_ADDSTATE_LINK_UP))
wait_for_link = 0;
status = qla4_83xx_wait_for_loopback_config_comp(ha,
wait_for_link);
if (status != QLA_SUCCESS) {
bsg_reply->result = DID_TIME_OUT << 16;
goto exit_loopback_cmd;
}
}
exit_loopback_cmd:
DEBUG2(ql4_printk(KERN_INFO, ha,
"%s: bsg_reply->result = x%x, status = %s\n",
__func__, bsg_reply->result, STATUS(status)));
bsg_job_done(bsg_job, bsg_reply->result,
bsg_reply->reply_payload_rcv_len);
}
static int qla4xxx_execute_diag_test(struct bsg_job *bsg_job)
{
struct Scsi_Host *host = iscsi_job_to_shost(bsg_job);
struct scsi_qla_host *ha = to_qla_host(host);
struct iscsi_bsg_request *bsg_req = bsg_job->request;
uint32_t diag_cmd;
int rval = -EINVAL;
DEBUG2(ql4_printk(KERN_INFO, ha, "%s: in\n", __func__));
diag_cmd = bsg_req->rqst_data.h_vendor.vendor_cmd[1];
if (diag_cmd == MBOX_CMD_DIAG_TEST) {
switch (bsg_req->rqst_data.h_vendor.vendor_cmd[2]) {
case QL_DIAG_CMD_TEST_DDR_SIZE:
case QL_DIAG_CMD_TEST_DDR_RW:
case QL_DIAG_CMD_TEST_ONCHIP_MEM_RW:
case QL_DIAG_CMD_TEST_NVRAM:
case QL_DIAG_CMD_TEST_FLASH_ROM:
case QL_DIAG_CMD_TEST_DMA_XFER:
case QL_DIAG_CMD_SELF_DDR_RW:
case QL_DIAG_CMD_SELF_ONCHIP_MEM_RW:
/* Execute diag test for adapter RAM/FLASH */
ql4xxx_execute_diag_cmd(bsg_job);
/* Always return success as we want to send bsg_reply
* to Application */
rval = QLA_SUCCESS;
break;
case QL_DIAG_CMD_TEST_INT_LOOPBACK:
case QL_DIAG_CMD_TEST_EXT_LOOPBACK:
/* Execute diag test for Network */
qla4xxx_execute_diag_loopback_cmd(bsg_job);
/* Always return success as we want to send bsg_reply
* to Application */
rval = QLA_SUCCESS;
break;
default:
ql4_printk(KERN_ERR, ha, "%s: Invalid diag test: 0x%x\n",
__func__,
bsg_req->rqst_data.h_vendor.vendor_cmd[2]);
}
} else if ((diag_cmd == MBOX_CMD_SET_LED_CONFIG) ||
(diag_cmd == MBOX_CMD_GET_LED_CONFIG)) {
ql4xxx_execute_diag_cmd(bsg_job);
rval = QLA_SUCCESS;
} else {
ql4_printk(KERN_ERR, ha, "%s: Invalid diag cmd: 0x%x\n",
__func__, diag_cmd);
}
return rval;
}
/**
* qla4xxx_process_vendor_specific - handle vendor specific bsg request
* @job: iscsi_bsg_job to handle
@ -479,6 +836,9 @@ int qla4xxx_process_vendor_specific(struct bsg_job *bsg_job)
case QLISCSI_VND_GET_ACB:
return qla4xxx_bsg_get_acb(bsg_job);
case QLISCSI_VND_DIAG_TEST:
return qla4xxx_execute_diag_test(bsg_job);
default:
ql4_printk(KERN_ERR, ha, "%s: invalid BSG vendor command: "
"0x%x\n", __func__, bsg_req->msgcode);


@ -15,5 +15,18 @@
#define QLISCSI_VND_UPDATE_NVRAM 5
#define QLISCSI_VND_RESTORE_DEFAULTS 6
#define QLISCSI_VND_GET_ACB 7
#define QLISCSI_VND_DIAG_TEST 8
/* QLISCSI_VND_DIAG_CMD sub code */
#define QL_DIAG_CMD_TEST_DDR_SIZE 0x2
#define QL_DIAG_CMD_TEST_DDR_RW 0x3
#define QL_DIAG_CMD_TEST_ONCHIP_MEM_RW 0x4
#define QL_DIAG_CMD_TEST_NVRAM 0x5 /* Only ISP4XXX */
#define QL_DIAG_CMD_TEST_FLASH_ROM 0x6
#define QL_DIAG_CMD_TEST_INT_LOOPBACK 0x7
#define QL_DIAG_CMD_TEST_EXT_LOOPBACK 0x8
#define QL_DIAG_CMD_TEST_DMA_XFER 0x9 /* Only ISP4XXX */
#define QL_DIAG_CMD_SELF_DDR_RW 0xC
#define QL_DIAG_CMD_SELF_ONCHIP_MEM_RW 0xD
#endif


@ -73,6 +73,7 @@
#define QLA_SUCCESS 0
#define QLA_ERROR 1
#define STATUS(status) status == QLA_ERROR ? "FAILED" : "SUCCEEDED"
/*
* Data bit definitions
@ -179,6 +180,10 @@
n &= ~v; \
}
#define OP_STATE(o, f, p) { \
p = (o & f) ? "enable" : "disable"; \
}
/*
* Retry & Timeout Values
*/
@ -206,6 +211,8 @@
#define MAX_RESET_HA_RETRIES 2
#define FW_ALIVE_WAIT_TOV 3
#define IDC_EXTEND_TOV 8
#define IDC_COMP_TOV 5
#define LINK_UP_COMP_TOV 30
#define CMD_SP(Cmnd) ((Cmnd)->SCp.ptr)
@ -476,6 +483,34 @@ struct ipaddress_config {
uint16_t eth_mtu_size;
uint16_t ipv4_port;
uint16_t ipv6_port;
uint8_t control;
uint16_t ipv6_tcp_options;
uint8_t tcp_wsf;
uint8_t ipv6_tcp_wsf;
uint8_t ipv4_tos;
uint8_t ipv4_cache_id;
uint8_t ipv6_cache_id;
uint8_t ipv4_alt_cid_len;
uint8_t ipv4_alt_cid[11];
uint8_t ipv4_vid_len;
uint8_t ipv4_vid[11];
uint8_t ipv4_ttl;
uint16_t ipv6_flow_lbl;
uint8_t ipv6_traffic_class;
uint8_t ipv6_hop_limit;
uint32_t ipv6_nd_reach_time;
uint32_t ipv6_nd_rexmit_timer;
uint32_t ipv6_nd_stale_timeout;
uint8_t ipv6_dup_addr_detect_count;
uint32_t ipv6_gw_advrt_mtu;
uint16_t def_timeout;
uint8_t abort_timer;
uint16_t iscsi_options;
uint16_t iscsi_max_pdu_size;
uint16_t iscsi_first_burst_len;
uint16_t iscsi_max_outstnd_r2t;
uint16_t iscsi_max_burst_len;
uint8_t iscsi_name[224];
};
#define QL4_CHAP_MAX_NAME_LEN 256
@ -790,6 +825,11 @@ struct scsi_qla_host {
uint32_t pf_bit;
struct qla4_83xx_idc_information idc_info;
struct addr_ctrl_blk *saved_acb;
int notify_idc_comp;
int notify_link_up_comp;
int idc_extend_tmo;
struct completion idc_comp;
struct completion link_up_comp;
};
struct ql4_task_data {


@ -410,6 +410,7 @@ struct qla_flt_region {
#define DDB_DS_LOGIN_IN_PROCESS 0x07
#define MBOX_CMD_GET_FW_STATE 0x0069
#define MBOX_CMD_GET_INIT_FW_CTRL_BLOCK_DEFAULTS 0x006A
#define MBOX_CMD_DIAG_TEST 0x0075
#define MBOX_CMD_GET_SYS_INFO 0x0078
#define MBOX_CMD_GET_NVRAM 0x0078 /* For 40xx */
#define MBOX_CMD_SET_NVRAM 0x0079 /* For 40xx */
@ -425,8 +426,17 @@ struct qla_flt_region {
#define MBOX_CMD_GET_IP_ADDR_STATE 0x0091
#define MBOX_CMD_SEND_IPV6_ROUTER_SOL 0x0092
#define MBOX_CMD_GET_DB_ENTRY_CURRENT_IP_ADDR 0x0093
#define MBOX_CMD_SET_PORT_CONFIG 0x0122
#define MBOX_CMD_GET_PORT_CONFIG 0x0123
#define MBOX_CMD_SET_LED_CONFIG 0x0125
#define MBOX_CMD_GET_LED_CONFIG 0x0126
#define MBOX_CMD_MINIDUMP 0x0129
/* Port Config */
#define ENABLE_INTERNAL_LOOPBACK 0x04
#define ENABLE_EXTERNAL_LOOPBACK 0x08
#define ENABLE_DCBX 0x10
/* Minidump subcommand */
#define MINIDUMP_GET_SIZE_SUBCOMMAND 0x00
#define MINIDUMP_GET_TMPLT_SUBCOMMAND 0x01
@ -535,10 +545,6 @@ struct qla_flt_region {
#define FLASH_OPT_COMMIT 2
#define FLASH_OPT_RMW_COMMIT 3
/* Loopback type */
#define ENABLE_INTERNAL_LOOPBACK 0x04
#define ENABLE_EXTERNAL_LOOPBACK 0x08
/* generic defines to enable/disable params */
#define QL4_PARAM_DISABLE 0
#define QL4_PARAM_ENABLE 1
@ -551,6 +557,7 @@ struct addr_ctrl_blk {
#define IFCB_VER_MIN 0x01
#define IFCB_VER_MAX 0x02
uint8_t control; /* 01 */
#define CTRLOPT_NEW_CONN_DISABLE 0x0002
uint16_t fw_options; /* 02-03 */
#define FWOPT_HEARTBEAT_ENABLE 0x1000
@ -582,11 +589,40 @@ struct addr_ctrl_blk {
uint32_t shdwreg_addr_hi; /* 2C-2F */
uint16_t iscsi_opts; /* 30-31 */
#define ISCSIOPTS_HEADER_DIGEST_EN 0x2000
#define ISCSIOPTS_DATA_DIGEST_EN 0x1000
#define ISCSIOPTS_IMMEDIATE_DATA_EN 0x0800
#define ISCSIOPTS_INITIAL_R2T_EN 0x0400
#define ISCSIOPTS_DATA_SEQ_INORDER_EN 0x0200
#define ISCSIOPTS_DATA_PDU_INORDER_EN 0x0100
#define ISCSIOPTS_CHAP_AUTH_EN 0x0080
#define ISCSIOPTS_SNACK_EN 0x0040
#define ISCSIOPTS_DISCOVERY_LOGOUT_EN 0x0020
#define ISCSIOPTS_BIDI_CHAP_EN 0x0010
#define ISCSIOPTS_DISCOVERY_AUTH_EN 0x0008
#define ISCSIOPTS_STRICT_LOGIN_COMP_EN 0x0004
#define ISCSIOPTS_ERL 0x0003
uint16_t ipv4_tcp_opts; /* 32-33 */
#define TCPOPT_DELAYED_ACK_DISABLE 0x8000
#define TCPOPT_DHCP_ENABLE 0x0200
#define TCPOPT_DNS_SERVER_IP_EN 0x0100
#define TCPOPT_SLP_DA_INFO_EN 0x0080
#define TCPOPT_NAGLE_ALGO_DISABLE 0x0020
#define TCPOPT_WINDOW_SCALE_DISABLE 0x0010
#define TCPOPT_TIMER_SCALE 0x000E
#define TCPOPT_TIMESTAMP_ENABLE 0x0001
uint16_t ipv4_ip_opts; /* 34-35 */
#define IPOPT_IPV4_PROTOCOL_ENABLE 0x8000
#define IPOPT_IPV4_TOS_EN 0x4000
#define IPOPT_VLAN_TAGGING_ENABLE 0x2000
#define IPOPT_GRAT_ARP_EN 0x1000
#define IPOPT_ALT_CID_EN 0x0800
#define IPOPT_REQ_VID_EN 0x0400
#define IPOPT_USE_VID_EN 0x0200
#define IPOPT_LEARN_IQN_EN 0x0100
#define IPOPT_FRAGMENTATION_DISABLE 0x0010
#define IPOPT_IN_FORWARD_EN 0x0008
#define IPOPT_ARP_REDIRECT_EN 0x0004
uint16_t iscsi_max_pdu_size; /* 36-37 */
uint8_t ipv4_tos; /* 38 */
@ -637,15 +673,24 @@ struct addr_ctrl_blk {
uint32_t cookie; /* 200-203 */
uint16_t ipv6_port; /* 204-205 */
uint16_t ipv6_opts; /* 206-207 */
#define IPV6_OPT_IPV6_PROTOCOL_ENABLE 0x8000
#define IPV6_OPT_VLAN_TAGGING_ENABLE 0x2000
#define IPV6_OPT_IPV6_PROTOCOL_ENABLE 0x8000
#define IPV6_OPT_VLAN_TAGGING_ENABLE 0x2000
#define IPV6_OPT_GRAT_NEIGHBOR_ADV_EN 0x1000
#define IPV6_OPT_REDIRECT_EN 0x0004
uint16_t ipv6_addtl_opts; /* 208-209 */
#define IPV6_ADDOPT_IGNORE_ICMP_ECHO_REQ 0x0040
#define IPV6_ADDOPT_MLD_EN 0x0004
#define IPV6_ADDOPT_NEIGHBOR_DISCOVERY_ADDR_ENABLE 0x0002 /* Pri ACB
Only */
#define IPV6_ADDOPT_AUTOCONFIG_LINK_LOCAL_ADDR 0x0001
uint16_t ipv6_tcp_opts; /* 20A-20B */
#define IPV6_TCPOPT_DELAYED_ACK_DISABLE 0x8000
#define IPV6_TCPOPT_NAGLE_ALGO_DISABLE 0x0020
#define IPV6_TCPOPT_WINDOW_SCALE_DISABLE 0x0010
#define IPV6_TCPOPT_TIMER_SCALE 0x000E
#define IPV6_TCPOPT_TIMESTAMP_EN 0x0001
uint8_t ipv6_tcp_wsf; /* 20C */
uint16_t ipv6_flow_lbl; /* 20D-20F */
uint8_t ipv6_dflt_rtr_addr[16]; /* 210-21F */
@ -1252,7 +1297,88 @@ struct response {
};
struct ql_iscsi_stats {
uint8_t reserved1[656]; /* 0000-028F */
uint64_t mac_tx_frames; /* 0000–0007 */
uint64_t mac_tx_bytes; /* 0008–000F */
uint64_t mac_tx_multicast_frames; /* 0010–0017 */
uint64_t mac_tx_broadcast_frames; /* 0018–001F */
uint64_t mac_tx_pause_frames; /* 0020–0027 */
uint64_t mac_tx_control_frames; /* 0028–002F */
uint64_t mac_tx_deferral; /* 0030–0037 */
uint64_t mac_tx_excess_deferral; /* 0038–003F */
uint64_t mac_tx_late_collision; /* 0040–0047 */
uint64_t mac_tx_abort; /* 0048–004F */
uint64_t mac_tx_single_collision; /* 0050–0057 */
uint64_t mac_tx_multiple_collision; /* 0058–005F */
uint64_t mac_tx_collision; /* 0060–0067 */
uint64_t mac_tx_frames_dropped; /* 0068–006F */
uint64_t mac_tx_jumbo_frames; /* 0070–0077 */
uint64_t mac_rx_frames; /* 0078–007F */
uint64_t mac_rx_bytes; /* 0080–0087 */
uint64_t mac_rx_unknown_control_frames; /* 0088–008F */
uint64_t mac_rx_pause_frames; /* 0090–0097 */
uint64_t mac_rx_control_frames; /* 0098–009F */
uint64_t mac_rx_dribble; /* 00A0–00A7 */
uint64_t mac_rx_frame_length_error; /* 00A8–00AF */
uint64_t mac_rx_jabber; /* 00B0–00B7 */
uint64_t mac_rx_carrier_sense_error; /* 00B8–00BF */
uint64_t mac_rx_frame_discarded; /* 00C0–00C7 */
uint64_t mac_rx_frames_dropped; /* 00C8–00CF */
uint64_t mac_crc_error; /* 00D0–00D7 */
uint64_t mac_encoding_error; /* 00D8–00DF */
uint64_t mac_rx_length_error_large; /* 00E0–00E7 */
uint64_t mac_rx_length_error_small; /* 00E8–00EF */
uint64_t mac_rx_multicast_frames; /* 00F0–00F7 */
uint64_t mac_rx_broadcast_frames; /* 00F8–00FF */
uint64_t ip_tx_packets; /* 0100–0107 */
uint64_t ip_tx_bytes; /* 0108–010F */
uint64_t ip_tx_fragments; /* 0110–0117 */
uint64_t ip_rx_packets; /* 0118–011F */
uint64_t ip_rx_bytes; /* 0120–0127 */
uint64_t ip_rx_fragments; /* 0128–012F */
uint64_t ip_datagram_reassembly; /* 0130–0137 */
uint64_t ip_invalid_address_error; /* 0138–013F */
uint64_t ip_error_packets; /* 0140–0147 */
uint64_t ip_fragrx_overlap; /* 0148–014F */
uint64_t ip_fragrx_outoforder; /* 0150–0157 */
uint64_t ip_datagram_reassembly_timeout; /* 0158–015F */
uint64_t ipv6_tx_packets; /* 0160–0167 */
uint64_t ipv6_tx_bytes; /* 0168–016F */
uint64_t ipv6_tx_fragments; /* 0170–0177 */
uint64_t ipv6_rx_packets; /* 0178–017F */
uint64_t ipv6_rx_bytes; /* 0180–0187 */
uint64_t ipv6_rx_fragments; /* 0188–018F */
uint64_t ipv6_datagram_reassembly; /* 0190–0197 */
uint64_t ipv6_invalid_address_error; /* 0198–019F */
uint64_t ipv6_error_packets; /* 01A0–01A7 */
uint64_t ipv6_fragrx_overlap; /* 01A8–01AF */
uint64_t ipv6_fragrx_outoforder; /* 01B0–01B7 */
uint64_t ipv6_datagram_reassembly_timeout; /* 01B8–01BF */
uint64_t tcp_tx_segments; /* 01C0–01C7 */
uint64_t tcp_tx_bytes; /* 01C8–01CF */
uint64_t tcp_rx_segments; /* 01D0–01D7 */
uint64_t tcp_rx_byte; /* 01D8–01DF */
uint64_t tcp_duplicate_ack_retx; /* 01E0–01E7 */
uint64_t tcp_retx_timer_expired; /* 01E8–01EF */
uint64_t tcp_rx_duplicate_ack; /* 01F0–01F7 */
uint64_t tcp_rx_pure_ackr; /* 01F8–01FF */
uint64_t tcp_tx_delayed_ack; /* 0200–0207 */
uint64_t tcp_tx_pure_ack; /* 0208–020F */
uint64_t tcp_rx_segment_error; /* 0210–0217 */
uint64_t tcp_rx_segment_outoforder; /* 0218–021F */
uint64_t tcp_rx_window_probe; /* 0220–0227 */
uint64_t tcp_rx_window_update; /* 0228–022F */
uint64_t tcp_tx_window_probe_persist; /* 0230–0237 */
uint64_t ecc_error_correction; /* 0238–023F */
uint64_t iscsi_pdu_tx; /* 0240-0247 */
uint64_t iscsi_data_bytes_tx; /* 0248-024F */
uint64_t iscsi_pdu_rx; /* 0250-0257 */
uint64_t iscsi_data_bytes_rx; /* 0258-025F */
uint64_t iscsi_io_completed; /* 0260-0267 */
uint64_t iscsi_unexpected_io_rx; /* 0268-026F */
uint64_t iscsi_format_error; /* 0270-0277 */
uint64_t iscsi_hdr_digest_error; /* 0278-027F */
uint64_t iscsi_data_digest_error; /* 0280-0287 */
uint64_t iscsi_sequence_error; /* 0288-028F */
uint32_t tx_cmd_pdu; /* 0290-0293 */
uint32_t tx_resp_pdu; /* 0294-0297 */
uint32_t rx_cmd_pdu; /* 0298-029B */

Просмотреть файл

@ -276,6 +276,9 @@ int qla4xxx_get_acb(struct scsi_qla_host *ha, dma_addr_t acb_dma,
int qla4_84xx_config_acb(struct scsi_qla_host *ha, int acb_config);
int qla4_83xx_ms_mem_write_128b(struct scsi_qla_host *ha,
uint64_t addr, uint32_t *data, uint32_t count);
uint8_t qla4xxx_set_ipaddr_state(uint8_t fw_ipaddr_state);
int qla4_83xx_get_port_config(struct scsi_qla_host *ha, uint32_t *config);
int qla4_83xx_set_port_config(struct scsi_qla_host *ha, uint32_t *config);
extern int ql4xextended_error_logging;
extern int ql4xdontresethba;


@ -606,6 +606,36 @@ static int qla4_83xx_loopback_in_progress(struct scsi_qla_host *ha)
return rval;
}
static void qla4xxx_update_ipaddr_state(struct scsi_qla_host *ha,
uint32_t ipaddr_idx,
uint32_t ipaddr_fw_state)
{
uint8_t ipaddr_state;
uint8_t ip_idx;
ip_idx = ipaddr_idx & 0xF;
ipaddr_state = qla4xxx_set_ipaddr_state((uint8_t)ipaddr_fw_state);
switch (ip_idx) {
case 0:
ha->ip_config.ipv4_addr_state = ipaddr_state;
break;
case 1:
ha->ip_config.ipv6_link_local_state = ipaddr_state;
break;
case 2:
ha->ip_config.ipv6_addr0_state = ipaddr_state;
break;
case 3:
ha->ip_config.ipv6_addr1_state = ipaddr_state;
break;
default:
ql4_printk(KERN_INFO, ha, "%s: Invalid IPADDR index %d\n",
__func__, ip_idx);
}
}
/**
* qla4xxx_isr_decode_mailbox - decodes mailbox status
* @ha: Pointer to host adapter structure.
@ -620,6 +650,7 @@ static void qla4xxx_isr_decode_mailbox(struct scsi_qla_host * ha,
int i;
uint32_t mbox_sts[MBOX_AEN_REG_COUNT];
__le32 __iomem *mailbox_out;
uint32_t opcode = 0;
if (is_qla8032(ha) || is_qla8042(ha))
mailbox_out = &ha->qla4_83xx_reg->mailbox_out[0];
@ -698,6 +729,11 @@ static void qla4xxx_isr_decode_mailbox(struct scsi_qla_host * ha,
qla4xxx_post_aen_work(ha, ISCSI_EVENT_LINKUP,
sizeof(mbox_sts),
(uint8_t *) mbox_sts);
if ((is_qla8032(ha) || is_qla8042(ha)) &&
ha->notify_link_up_comp)
complete(&ha->link_up_comp);
break;
case MBOX_ASTS_LINK_DOWN:
@ -741,6 +777,8 @@ static void qla4xxx_isr_decode_mailbox(struct scsi_qla_host * ha,
"mbox_sts[3]=%04x\n", ha->host_no, mbox_sts[0],
mbox_sts[2], mbox_sts[3]);
qla4xxx_update_ipaddr_state(ha, mbox_sts[5],
mbox_sts[3]);
/* mbox_sts[2] = Old ACB state
* mbox_sts[3] = new ACB state */
if ((mbox_sts[3] == ACB_STATE_VALID) &&
@ -841,8 +879,6 @@ static void qla4xxx_isr_decode_mailbox(struct scsi_qla_host * ha,
break;
case MBOX_ASTS_IDC_REQUEST_NOTIFICATION:
{
uint32_t opcode;
if (is_qla8032(ha) || is_qla8042(ha)) {
DEBUG2(ql4_printk(KERN_INFO, ha,
"scsi%ld: AEN %04x, mbox_sts[1]=%08x, mbox_sts[2]=%08x, mbox_sts[3]=%08x, mbox_sts[4]=%08x\n",
@ -862,7 +898,6 @@ static void qla4xxx_isr_decode_mailbox(struct scsi_qla_host * ha,
}
}
break;
}
case MBOX_ASTS_IDC_COMPLETE:
if (is_qla8032(ha) || is_qla8042(ha)) {
@ -875,6 +910,14 @@ static void qla4xxx_isr_decode_mailbox(struct scsi_qla_host * ha,
"scsi:%ld: AEN %04x IDC Complete notification\n",
ha->host_no, mbox_sts[0]));
opcode = mbox_sts[1] >> 16;
if (ha->notify_idc_comp)
complete(&ha->idc_comp);
if ((opcode == MBOX_CMD_SET_PORT_CONFIG) ||
(opcode == MBOX_CMD_PORT_RESET))
ha->idc_info.info2 = mbox_sts[3];
if (qla4_83xx_loopback_in_progress(ha)) {
set_bit(AF_LOOPBACK, &ha->flags);
} else {
@ -907,6 +950,8 @@ static void qla4xxx_isr_decode_mailbox(struct scsi_qla_host * ha,
DEBUG2(ql4_printk(KERN_INFO, ha,
"scsi%ld: AEN %04x Received IDC Extend Timeout notification\n",
ha->host_no, mbox_sts[0]));
/* new IDC timeout */
ha->idc_extend_tmo = mbox_sts[1];
break;
case MBOX_ASTS_INITIALIZATION_FAILED:

Просмотреть файл

@ -418,6 +418,38 @@ qla4xxx_get_ifcb(struct scsi_qla_host *ha, uint32_t *mbox_cmd,
return QLA_SUCCESS;
}
uint8_t qla4xxx_set_ipaddr_state(uint8_t fw_ipaddr_state)
{
uint8_t ipaddr_state;
switch (fw_ipaddr_state) {
case IP_ADDRSTATE_UNCONFIGURED:
ipaddr_state = ISCSI_IPDDRESS_STATE_UNCONFIGURED;
break;
case IP_ADDRSTATE_INVALID:
ipaddr_state = ISCSI_IPDDRESS_STATE_INVALID;
break;
case IP_ADDRSTATE_ACQUIRING:
ipaddr_state = ISCSI_IPDDRESS_STATE_ACQUIRING;
break;
case IP_ADDRSTATE_TENTATIVE:
ipaddr_state = ISCSI_IPDDRESS_STATE_TENTATIVE;
break;
case IP_ADDRSTATE_DEPRICATED:
ipaddr_state = ISCSI_IPDDRESS_STATE_DEPRECATED;
break;
case IP_ADDRSTATE_PREFERRED:
ipaddr_state = ISCSI_IPDDRESS_STATE_VALID;
break;
case IP_ADDRSTATE_DISABLING:
ipaddr_state = ISCSI_IPDDRESS_STATE_DISABLING;
break;
default:
ipaddr_state = ISCSI_IPDDRESS_STATE_UNCONFIGURED;
}
return ipaddr_state;
}
static void
qla4xxx_update_local_ip(struct scsi_qla_host *ha,
struct addr_ctrl_blk *init_fw_cb)
@ -425,7 +457,7 @@ qla4xxx_update_local_ip(struct scsi_qla_host *ha,
ha->ip_config.tcp_options = le16_to_cpu(init_fw_cb->ipv4_tcp_opts);
ha->ip_config.ipv4_options = le16_to_cpu(init_fw_cb->ipv4_ip_opts);
ha->ip_config.ipv4_addr_state =
le16_to_cpu(init_fw_cb->ipv4_addr_state);
qla4xxx_set_ipaddr_state(init_fw_cb->ipv4_addr_state);
ha->ip_config.eth_mtu_size =
le16_to_cpu(init_fw_cb->eth_mtu_size);
ha->ip_config.ipv4_port = le16_to_cpu(init_fw_cb->ipv4_port);
@ -434,6 +466,8 @@ qla4xxx_update_local_ip(struct scsi_qla_host *ha,
ha->ip_config.ipv6_options = le16_to_cpu(init_fw_cb->ipv6_opts);
ha->ip_config.ipv6_addl_options =
le16_to_cpu(init_fw_cb->ipv6_addtl_opts);
ha->ip_config.ipv6_tcp_options =
le16_to_cpu(init_fw_cb->ipv6_tcp_opts);
}
/* Save IPv4 Address Info */
@ -448,17 +482,65 @@ qla4xxx_update_local_ip(struct scsi_qla_host *ha,
sizeof(init_fw_cb->ipv4_gw_addr)));
ha->ip_config.ipv4_vlan_tag = be16_to_cpu(init_fw_cb->ipv4_vlan_tag);
ha->ip_config.control = init_fw_cb->control;
ha->ip_config.tcp_wsf = init_fw_cb->ipv4_tcp_wsf;
ha->ip_config.ipv4_tos = init_fw_cb->ipv4_tos;
ha->ip_config.ipv4_cache_id = init_fw_cb->ipv4_cacheid;
ha->ip_config.ipv4_alt_cid_len = init_fw_cb->ipv4_dhcp_alt_cid_len;
memcpy(ha->ip_config.ipv4_alt_cid, init_fw_cb->ipv4_dhcp_alt_cid,
min(sizeof(ha->ip_config.ipv4_alt_cid),
sizeof(init_fw_cb->ipv4_dhcp_alt_cid)));
ha->ip_config.ipv4_vid_len = init_fw_cb->ipv4_dhcp_vid_len;
memcpy(ha->ip_config.ipv4_vid, init_fw_cb->ipv4_dhcp_vid,
min(sizeof(ha->ip_config.ipv4_vid),
sizeof(init_fw_cb->ipv4_dhcp_vid)));
ha->ip_config.ipv4_ttl = init_fw_cb->ipv4_ttl;
ha->ip_config.def_timeout = le16_to_cpu(init_fw_cb->def_timeout);
ha->ip_config.abort_timer = init_fw_cb->abort_timer;
ha->ip_config.iscsi_options = le16_to_cpu(init_fw_cb->iscsi_opts);
ha->ip_config.iscsi_max_pdu_size =
le16_to_cpu(init_fw_cb->iscsi_max_pdu_size);
ha->ip_config.iscsi_first_burst_len =
le16_to_cpu(init_fw_cb->iscsi_fburst_len);
ha->ip_config.iscsi_max_outstnd_r2t =
le16_to_cpu(init_fw_cb->iscsi_max_outstnd_r2t);
ha->ip_config.iscsi_max_burst_len =
le16_to_cpu(init_fw_cb->iscsi_max_burst_len);
memcpy(ha->ip_config.iscsi_name, init_fw_cb->iscsi_name,
min(sizeof(ha->ip_config.iscsi_name),
sizeof(init_fw_cb->iscsi_name)));
if (is_ipv6_enabled(ha)) {
/* Save IPv6 Address */
ha->ip_config.ipv6_link_local_state =
le16_to_cpu(init_fw_cb->ipv6_lnk_lcl_addr_state);
qla4xxx_set_ipaddr_state(init_fw_cb->ipv6_lnk_lcl_addr_state);
ha->ip_config.ipv6_addr0_state =
le16_to_cpu(init_fw_cb->ipv6_addr0_state);
qla4xxx_set_ipaddr_state(init_fw_cb->ipv6_addr0_state);
ha->ip_config.ipv6_addr1_state =
le16_to_cpu(init_fw_cb->ipv6_addr1_state);
ha->ip_config.ipv6_default_router_state =
le16_to_cpu(init_fw_cb->ipv6_dflt_rtr_state);
qla4xxx_set_ipaddr_state(init_fw_cb->ipv6_addr1_state);
switch (le16_to_cpu(init_fw_cb->ipv6_dflt_rtr_state)) {
case IPV6_RTRSTATE_UNKNOWN:
ha->ip_config.ipv6_default_router_state =
ISCSI_ROUTER_STATE_UNKNOWN;
break;
case IPV6_RTRSTATE_MANUAL:
ha->ip_config.ipv6_default_router_state =
ISCSI_ROUTER_STATE_MANUAL;
break;
case IPV6_RTRSTATE_ADVERTISED:
ha->ip_config.ipv6_default_router_state =
ISCSI_ROUTER_STATE_ADVERTISED;
break;
case IPV6_RTRSTATE_STALE:
ha->ip_config.ipv6_default_router_state =
ISCSI_ROUTER_STATE_STALE;
break;
default:
ha->ip_config.ipv6_default_router_state =
ISCSI_ROUTER_STATE_UNKNOWN;
}
ha->ip_config.ipv6_link_local_addr.in6_u.u6_addr8[0] = 0xFE;
ha->ip_config.ipv6_link_local_addr.in6_u.u6_addr8[1] = 0x80;
@ -479,6 +561,23 @@ qla4xxx_update_local_ip(struct scsi_qla_host *ha,
ha->ip_config.ipv6_vlan_tag =
be16_to_cpu(init_fw_cb->ipv6_vlan_tag);
ha->ip_config.ipv6_port = le16_to_cpu(init_fw_cb->ipv6_port);
ha->ip_config.ipv6_cache_id = init_fw_cb->ipv6_cache_id;
ha->ip_config.ipv6_flow_lbl =
le16_to_cpu(init_fw_cb->ipv6_flow_lbl);
ha->ip_config.ipv6_traffic_class =
init_fw_cb->ipv6_traffic_class;
ha->ip_config.ipv6_hop_limit = init_fw_cb->ipv6_hop_limit;
ha->ip_config.ipv6_nd_reach_time =
le32_to_cpu(init_fw_cb->ipv6_nd_reach_time);
ha->ip_config.ipv6_nd_rexmit_timer =
le32_to_cpu(init_fw_cb->ipv6_nd_rexmit_timer);
ha->ip_config.ipv6_nd_stale_timeout =
le32_to_cpu(init_fw_cb->ipv6_nd_stale_timeout);
ha->ip_config.ipv6_dup_addr_detect_count =
init_fw_cb->ipv6_dup_addr_detect_count;
ha->ip_config.ipv6_gw_advrt_mtu =
le32_to_cpu(init_fw_cb->ipv6_gw_advrt_mtu);
ha->ip_config.ipv6_tcp_wsf = init_fw_cb->ipv6_tcp_wsf;
}
}
@ -2317,3 +2416,46 @@ exit_config_acb:
rval == QLA_SUCCESS ? "SUCCEEDED" : "FAILED"));
return rval;
}
int qla4_83xx_get_port_config(struct scsi_qla_host *ha, uint32_t *config)
{
uint32_t mbox_cmd[MBOX_REG_COUNT];
uint32_t mbox_sts[MBOX_REG_COUNT];
int status;
memset(&mbox_cmd, 0, sizeof(mbox_cmd));
memset(&mbox_sts, 0, sizeof(mbox_sts));
mbox_cmd[0] = MBOX_CMD_GET_PORT_CONFIG;
status = qla4xxx_mailbox_command(ha, MBOX_REG_COUNT, MBOX_REG_COUNT,
mbox_cmd, mbox_sts);
if (status == QLA_SUCCESS)
*config = mbox_sts[1];
else
ql4_printk(KERN_ERR, ha, "%s: failed status %04X\n", __func__,
mbox_sts[0]);
return status;
}
int qla4_83xx_set_port_config(struct scsi_qla_host *ha, uint32_t *config)
{
uint32_t mbox_cmd[MBOX_REG_COUNT];
uint32_t mbox_sts[MBOX_REG_COUNT];
int status;
memset(&mbox_cmd, 0, sizeof(mbox_cmd));
memset(&mbox_sts, 0, sizeof(mbox_sts));
mbox_cmd[0] = MBOX_CMD_SET_PORT_CONFIG;
mbox_cmd[1] = *config;
status = qla4xxx_mailbox_command(ha, MBOX_REG_COUNT, MBOX_REG_COUNT,
mbox_cmd, mbox_sts);
if (status != QLA_SUCCESS)
ql4_printk(KERN_ERR, ha, "%s: failed status %04X\n", __func__,
mbox_sts[0]);
return status;
}

(Diff for this file not shown because of its large size.)


@ -5,4 +5,4 @@
* See LICENSE.qla4xxx for copyright and licensing details.
*/
#define QLA4XXX_DRIVER_VERSION "5.04.00-k1"
#define QLA4XXX_DRIVER_VERSION "5.04.00-k3"


@ -297,6 +297,7 @@ struct scsi_cmnd *scsi_get_command(struct scsi_device *dev, gfp_t gfp_mask)
cmd->device = dev;
INIT_LIST_HEAD(&cmd->list);
INIT_DELAYED_WORK(&cmd->abort_work, scmd_eh_abort_handler);
spin_lock_irqsave(&dev->list_lock, flags);
list_add_tail(&cmd->list, &dev->cmd_list);
spin_unlock_irqrestore(&dev->list_lock, flags);
@ -353,6 +354,8 @@ void scsi_put_command(struct scsi_cmnd *cmd)
list_del_init(&cmd->list);
spin_unlock_irqrestore(&cmd->device->list_lock, flags);
cancel_delayed_work(&cmd->abort_work);
__scsi_put_command(cmd->device->host, cmd, &sdev->sdev_gendev);
}
EXPORT_SYMBOL(scsi_put_command);
@ -742,15 +745,13 @@ int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
}
/**
* scsi_done - Enqueue the finished SCSI command into the done queue.
* scsi_done - Invoke completion on finished SCSI command.
* @cmd: The SCSI Command for which a low-level device driver (LLDD) gives
* ownership back to SCSI Core -- i.e. the LLDD has finished with it.
*
* Description: This function is the mid-level's (SCSI Core) interrupt routine,
* which regains ownership of the SCSI command (de facto) from a LLDD, and
* enqueues the command to the done queue for further processing.
*
* This is the producer of the done queue who enqueues at the tail.
* calls blk_complete_request() for further processing.
*
* This function is interrupt context safe.
*/


@ -2873,13 +2873,13 @@ static int scsi_debug_show_info(struct seq_file *m, struct Scsi_Host *host)
return 0;
}
static ssize_t sdebug_delay_show(struct device_driver * ddp, char * buf)
static ssize_t delay_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_delay);
}
static ssize_t sdebug_delay_store(struct device_driver * ddp,
const char * buf, size_t count)
static ssize_t delay_store(struct device_driver *ddp, const char *buf,
size_t count)
{
int delay;
char work[20];
@ -2892,16 +2892,15 @@ static ssize_t sdebug_delay_store(struct device_driver * ddp,
}
return -EINVAL;
}
DRIVER_ATTR(delay, S_IRUGO | S_IWUSR, sdebug_delay_show,
sdebug_delay_store);
static DRIVER_ATTR_RW(delay);
static ssize_t sdebug_opts_show(struct device_driver * ddp, char * buf)
static ssize_t opts_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "0x%x\n", scsi_debug_opts);
}
static ssize_t sdebug_opts_store(struct device_driver * ddp,
const char * buf, size_t count)
static ssize_t opts_store(struct device_driver *ddp, const char *buf,
size_t count)
{
int opts;
char work[20];
@ -2921,15 +2920,14 @@ opts_done:
scsi_debug_cmnd_count = 0;
return count;
}
DRIVER_ATTR(opts, S_IRUGO | S_IWUSR, sdebug_opts_show,
sdebug_opts_store);
static DRIVER_ATTR_RW(opts);
static ssize_t sdebug_ptype_show(struct device_driver * ddp, char * buf)
static ssize_t ptype_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_ptype);
}
static ssize_t sdebug_ptype_store(struct device_driver * ddp,
const char * buf, size_t count)
static ssize_t ptype_store(struct device_driver *ddp, const char *buf,
size_t count)
{
int n;
@ -2939,14 +2937,14 @@ static ssize_t sdebug_ptype_store(struct device_driver * ddp,
}
return -EINVAL;
}
DRIVER_ATTR(ptype, S_IRUGO | S_IWUSR, sdebug_ptype_show, sdebug_ptype_store);
static DRIVER_ATTR_RW(ptype);
static ssize_t sdebug_dsense_show(struct device_driver * ddp, char * buf)
static ssize_t dsense_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_dsense);
}
static ssize_t sdebug_dsense_store(struct device_driver * ddp,
const char * buf, size_t count)
static ssize_t dsense_store(struct device_driver *ddp, const char *buf,
size_t count)
{
int n;
@ -2956,15 +2954,14 @@ static ssize_t sdebug_dsense_store(struct device_driver * ddp,
}
return -EINVAL;
}
DRIVER_ATTR(dsense, S_IRUGO | S_IWUSR, sdebug_dsense_show,
sdebug_dsense_store);
static DRIVER_ATTR_RW(dsense);
static ssize_t sdebug_fake_rw_show(struct device_driver * ddp, char * buf)
static ssize_t fake_rw_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_fake_rw);
}
static ssize_t sdebug_fake_rw_store(struct device_driver * ddp,
const char * buf, size_t count)
static ssize_t fake_rw_store(struct device_driver *ddp, const char *buf,
size_t count)
{
int n;
@ -2974,15 +2971,14 @@ static ssize_t sdebug_fake_rw_store(struct device_driver * ddp,
}
return -EINVAL;
}
DRIVER_ATTR(fake_rw, S_IRUGO | S_IWUSR, sdebug_fake_rw_show,
sdebug_fake_rw_store);
static DRIVER_ATTR_RW(fake_rw);
static ssize_t sdebug_no_lun_0_show(struct device_driver * ddp, char * buf)
static ssize_t no_lun_0_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_no_lun_0);
}
static ssize_t sdebug_no_lun_0_store(struct device_driver * ddp,
const char * buf, size_t count)
static ssize_t no_lun_0_store(struct device_driver *ddp, const char *buf,
size_t count)
{
int n;
@ -2992,15 +2988,14 @@ static ssize_t sdebug_no_lun_0_store(struct device_driver * ddp,
}
return -EINVAL;
}
DRIVER_ATTR(no_lun_0, S_IRUGO | S_IWUSR, sdebug_no_lun_0_show,
sdebug_no_lun_0_store);
static DRIVER_ATTR_RW(no_lun_0);
static ssize_t sdebug_num_tgts_show(struct device_driver * ddp, char * buf)
static ssize_t num_tgts_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_num_tgts);
}
static ssize_t sdebug_num_tgts_store(struct device_driver * ddp,
const char * buf, size_t count)
static ssize_t num_tgts_store(struct device_driver *ddp, const char *buf,
size_t count)
{
int n;
@ -3011,27 +3006,26 @@ static ssize_t sdebug_num_tgts_store(struct device_driver * ddp,
}
return -EINVAL;
}
DRIVER_ATTR(num_tgts, S_IRUGO | S_IWUSR, sdebug_num_tgts_show,
sdebug_num_tgts_store);
static DRIVER_ATTR_RW(num_tgts);
static ssize_t sdebug_dev_size_mb_show(struct device_driver * ddp, char * buf)
static ssize_t dev_size_mb_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_dev_size_mb);
}
DRIVER_ATTR(dev_size_mb, S_IRUGO, sdebug_dev_size_mb_show, NULL);
static DRIVER_ATTR_RO(dev_size_mb);
static ssize_t sdebug_num_parts_show(struct device_driver * ddp, char * buf)
static ssize_t num_parts_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_num_parts);
}
DRIVER_ATTR(num_parts, S_IRUGO, sdebug_num_parts_show, NULL);
static DRIVER_ATTR_RO(num_parts);
static ssize_t sdebug_every_nth_show(struct device_driver * ddp, char * buf)
static ssize_t every_nth_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_every_nth);
}
static ssize_t sdebug_every_nth_store(struct device_driver * ddp,
const char * buf, size_t count)
static ssize_t every_nth_store(struct device_driver *ddp, const char *buf,
size_t count)
{
int nth;
@ -3042,15 +3036,14 @@ static ssize_t sdebug_every_nth_store(struct device_driver * ddp,
}
return -EINVAL;
}
DRIVER_ATTR(every_nth, S_IRUGO | S_IWUSR, sdebug_every_nth_show,
sdebug_every_nth_store);
static DRIVER_ATTR_RW(every_nth);
static ssize_t sdebug_max_luns_show(struct device_driver * ddp, char * buf)
static ssize_t max_luns_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_max_luns);
}
static ssize_t sdebug_max_luns_store(struct device_driver * ddp,
const char * buf, size_t count)
static ssize_t max_luns_store(struct device_driver *ddp, const char *buf,
size_t count)
{
int n;
@ -3061,15 +3054,14 @@ static ssize_t sdebug_max_luns_store(struct device_driver * ddp,
}
return -EINVAL;
}
DRIVER_ATTR(max_luns, S_IRUGO | S_IWUSR, sdebug_max_luns_show,
sdebug_max_luns_store);
static DRIVER_ATTR_RW(max_luns);
static ssize_t sdebug_max_queue_show(struct device_driver * ddp, char * buf)
static ssize_t max_queue_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_max_queue);
}
static ssize_t sdebug_max_queue_store(struct device_driver * ddp,
const char * buf, size_t count)
static ssize_t max_queue_store(struct device_driver *ddp, const char *buf,
size_t count)
{
int n;
@ -3080,27 +3072,26 @@ static ssize_t sdebug_max_queue_store(struct device_driver * ddp,
}
return -EINVAL;
}
DRIVER_ATTR(max_queue, S_IRUGO | S_IWUSR, sdebug_max_queue_show,
sdebug_max_queue_store);
static DRIVER_ATTR_RW(max_queue);
static ssize_t sdebug_no_uld_show(struct device_driver * ddp, char * buf)
static ssize_t no_uld_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_no_uld);
}
DRIVER_ATTR(no_uld, S_IRUGO, sdebug_no_uld_show, NULL);
static DRIVER_ATTR_RO(no_uld);
static ssize_t sdebug_scsi_level_show(struct device_driver * ddp, char * buf)
static ssize_t scsi_level_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_scsi_level);
}
DRIVER_ATTR(scsi_level, S_IRUGO, sdebug_scsi_level_show, NULL);
static DRIVER_ATTR_RO(scsi_level);
static ssize_t sdebug_virtual_gb_show(struct device_driver * ddp, char * buf)
static ssize_t virtual_gb_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_virtual_gb);
}
static ssize_t sdebug_virtual_gb_store(struct device_driver * ddp,
const char * buf, size_t count)
static ssize_t virtual_gb_store(struct device_driver *ddp, const char *buf,
size_t count)
{
int n;
@ -3113,16 +3104,15 @@ static ssize_t sdebug_virtual_gb_store(struct device_driver * ddp,
}
return -EINVAL;
}
DRIVER_ATTR(virtual_gb, S_IRUGO | S_IWUSR, sdebug_virtual_gb_show,
sdebug_virtual_gb_store);
static DRIVER_ATTR_RW(virtual_gb);
static ssize_t sdebug_add_host_show(struct device_driver * ddp, char * buf)
static ssize_t add_host_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_add_host);
}
static ssize_t sdebug_add_host_store(struct device_driver * ddp,
const char * buf, size_t count)
static ssize_t add_host_store(struct device_driver *ddp, const char *buf,
size_t count)
{
int delta_hosts;
@ -3139,16 +3129,14 @@ static ssize_t sdebug_add_host_store(struct device_driver * ddp,
}
return count;
}
DRIVER_ATTR(add_host, S_IRUGO | S_IWUSR, sdebug_add_host_show,
sdebug_add_host_store);
static DRIVER_ATTR_RW(add_host);
static ssize_t sdebug_vpd_use_hostno_show(struct device_driver * ddp,
char * buf)
static ssize_t vpd_use_hostno_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_vpd_use_hostno);
}
static ssize_t sdebug_vpd_use_hostno_store(struct device_driver * ddp,
const char * buf, size_t count)
static ssize_t vpd_use_hostno_store(struct device_driver *ddp, const char *buf,
size_t count)
{
int n;
@ -3158,40 +3146,39 @@ static ssize_t sdebug_vpd_use_hostno_store(struct device_driver * ddp,
}
return -EINVAL;
}
DRIVER_ATTR(vpd_use_hostno, S_IRUGO | S_IWUSR, sdebug_vpd_use_hostno_show,
sdebug_vpd_use_hostno_store);
static DRIVER_ATTR_RW(vpd_use_hostno);
static ssize_t sdebug_sector_size_show(struct device_driver * ddp, char * buf)
static ssize_t sector_size_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%u\n", scsi_debug_sector_size);
}
DRIVER_ATTR(sector_size, S_IRUGO, sdebug_sector_size_show, NULL);
static DRIVER_ATTR_RO(sector_size);
static ssize_t sdebug_dix_show(struct device_driver *ddp, char *buf)
static ssize_t dix_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_dix);
}
DRIVER_ATTR(dix, S_IRUGO, sdebug_dix_show, NULL);
static DRIVER_ATTR_RO(dix);
static ssize_t sdebug_dif_show(struct device_driver *ddp, char *buf)
static ssize_t dif_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_dif);
}
DRIVER_ATTR(dif, S_IRUGO, sdebug_dif_show, NULL);
static DRIVER_ATTR_RO(dif);
static ssize_t sdebug_guard_show(struct device_driver *ddp, char *buf)
static ssize_t guard_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%u\n", scsi_debug_guard);
}
DRIVER_ATTR(guard, S_IRUGO, sdebug_guard_show, NULL);
static DRIVER_ATTR_RO(guard);
static ssize_t sdebug_ato_show(struct device_driver *ddp, char *buf)
static ssize_t ato_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_ato);
}
DRIVER_ATTR(ato, S_IRUGO, sdebug_ato_show, NULL);
static DRIVER_ATTR_RO(ato);
static ssize_t sdebug_map_show(struct device_driver *ddp, char *buf)
static ssize_t map_show(struct device_driver *ddp, char *buf)
{
ssize_t count;
@ -3206,15 +3193,14 @@ static ssize_t sdebug_map_show(struct device_driver *ddp, char *buf)
return count;
}
DRIVER_ATTR(map, S_IRUGO, sdebug_map_show, NULL);
static DRIVER_ATTR_RO(map);
static ssize_t sdebug_removable_show(struct device_driver *ddp,
char *buf)
static ssize_t removable_show(struct device_driver *ddp, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_removable ? 1 : 0);
}
static ssize_t sdebug_removable_store(struct device_driver *ddp,
const char *buf, size_t count)
static ssize_t removable_store(struct device_driver *ddp, const char *buf,
size_t count)
{
int n;
@ -3224,74 +3210,43 @@ static ssize_t sdebug_removable_store(struct device_driver *ddp,
}
return -EINVAL;
}
DRIVER_ATTR(removable, S_IRUGO | S_IWUSR, sdebug_removable_show,
sdebug_removable_store);
static DRIVER_ATTR_RW(removable);
/* Note: The following function creates attribute files in the
/* Note: The following array creates attribute files in the
/sys/bus/pseudo/drivers/scsi_debug directory. The advantage of these
files (over those found in the /sys/module/scsi_debug/parameters
directory) is that auxiliary actions can be triggered when an attribute
is changed. For example see: sdebug_add_host_store() above.
*/
static int do_create_driverfs_files(void)
{
int ret;
ret = driver_create_file(&sdebug_driverfs_driver, &driver_attr_add_host);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_delay);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_dev_size_mb);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_dsense);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_every_nth);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_fake_rw);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_max_luns);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_max_queue);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_no_lun_0);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_no_uld);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_num_parts);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_num_tgts);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_ptype);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_opts);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_removable);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_scsi_level);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_virtual_gb);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_vpd_use_hostno);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_sector_size);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_dix);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_dif);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_guard);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_ato);
ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_map);
return ret;
}
static void do_remove_driverfs_files(void)
{
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_map);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_ato);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_guard);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_dif);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_dix);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_sector_size);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_vpd_use_hostno);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_virtual_gb);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_scsi_level);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_opts);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_ptype);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_removable);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_num_tgts);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_num_parts);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_no_uld);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_no_lun_0);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_max_queue);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_max_luns);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_fake_rw);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_every_nth);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_dsense);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_dev_size_mb);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_delay);
driver_remove_file(&sdebug_driverfs_driver, &driver_attr_add_host);
}
static struct attribute *sdebug_drv_attrs[] = {
&driver_attr_delay.attr,
&driver_attr_opts.attr,
&driver_attr_ptype.attr,
&driver_attr_dsense.attr,
&driver_attr_fake_rw.attr,
&driver_attr_no_lun_0.attr,
&driver_attr_num_tgts.attr,
&driver_attr_dev_size_mb.attr,
&driver_attr_num_parts.attr,
&driver_attr_every_nth.attr,
&driver_attr_max_luns.attr,
&driver_attr_max_queue.attr,
&driver_attr_no_uld.attr,
&driver_attr_scsi_level.attr,
&driver_attr_virtual_gb.attr,
&driver_attr_add_host.attr,
&driver_attr_vpd_use_hostno.attr,
&driver_attr_sector_size.attr,
&driver_attr_dix.attr,
&driver_attr_dif.attr,
&driver_attr_guard.attr,
&driver_attr_ato.attr,
&driver_attr_map.attr,
&driver_attr_removable.attr,
NULL,
};
ATTRIBUTE_GROUPS(sdebug_drv);
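For readers unfamiliar with the driver-core helpers this conversion leans on, here is a minimal sketch of the pattern: DRIVER_ATTR_RO()/DRIVER_ATTR_RW() generate the driver_attr_<name> objects, an attribute array plus ATTRIBUTE_GROUPS() produces the <prefix>_groups pointer, and pointing the bus at that pointer lets the driver core create and remove the sysfs files automatically, which is what makes the hand-rolled do_create_driverfs_files()/do_remove_driverfs_files() pairs removed above unnecessary. All "example" names below are hypothetical.

#include <linux/device.h>
#include <linux/kernel.h>

static int example_param;	/* hypothetical module state */

static ssize_t param_show(struct device_driver *ddp, char *buf)
{
	return scnprintf(buf, PAGE_SIZE, "%d\n", example_param);
}

static ssize_t param_store(struct device_driver *ddp, const char *buf,
			   size_t count)
{
	if (kstrtoint(buf, 10, &example_param))
		return -EINVAL;
	return count;
}
static DRIVER_ATTR_RW(param);	/* defines struct driver_attribute driver_attr_param */

static struct attribute *example_drv_attrs[] = {
	&driver_attr_param.attr,
	NULL,
};
ATTRIBUTE_GROUPS(example_drv);	/* defines example_drv_groups */

/* Pointing a (hypothetical) bus at the groups makes the driver core create
 * the files on driver_register() and remove them on unregister:
 *
 *	static struct bus_type example_bus = {
 *		.name       = "example",
 *		.drv_groups = example_drv_groups,
 *	};
 */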
struct device *pseudo_primary;
@ -3456,12 +3411,6 @@ static int __init scsi_debug_init(void)
ret);
goto bus_unreg;
}
ret = do_create_driverfs_files();
if (ret < 0) {
printk(KERN_WARNING "scsi_debug: driver_create_file error: %d\n",
ret);
goto del_files;
}
init_all_queued();
@ -3482,9 +3431,6 @@ static int __init scsi_debug_init(void)
}
return 0;
del_files:
do_remove_driverfs_files();
driver_unregister(&sdebug_driverfs_driver);
bus_unreg:
bus_unregister(&pseudo_lld_bus);
dev_unreg:
@ -3506,7 +3452,6 @@ static void __exit scsi_debug_exit(void)
stop_all_queued();
for (; k; k--)
sdebug_remove_adapter();
do_remove_driverfs_files();
driver_unregister(&sdebug_driverfs_driver);
bus_unregister(&pseudo_lld_bus);
root_device_unregister(pseudo_primary);
@ -4096,4 +4041,5 @@ static struct bus_type pseudo_lld_bus = {
.match = pseudo_lld_bus_match,
.probe = sdebug_driver_probe,
.remove = sdebug_driver_remove,
.drv_groups = sdebug_drv_groups,
};

View file

@ -53,6 +53,8 @@ static void scsi_eh_done(struct scsi_cmnd *scmd);
#define HOST_RESET_SETTLE_TIME (10)
static int scsi_eh_try_stu(struct scsi_cmnd *scmd);
static int scsi_try_to_abort_cmd(struct scsi_host_template *,
struct scsi_cmnd *);
/* called with shost->host_lock held */
void scsi_eh_wakeup(struct Scsi_Host *shost)
@ -89,16 +91,137 @@ EXPORT_SYMBOL_GPL(scsi_schedule_eh);
static int scsi_host_eh_past_deadline(struct Scsi_Host *shost)
{
if (!shost->last_reset || !shost->eh_deadline)
if (!shost->last_reset || shost->eh_deadline == -1)
return 0;
if (time_before(jiffies,
shost->last_reset + shost->eh_deadline))
/*
* 32bit accesses are guaranteed to be atomic
* (on all supported architectures), so instead
* of using a spinlock we can just as well double-check
* if eh_deadline has been set to 'off' during the
* time_before call.
*/
if (time_before(jiffies, shost->last_reset + shost->eh_deadline) &&
shost->eh_deadline > -1)
return 0;
return 1;
}
/**
* scmd_eh_abort_handler - Handle command aborts
* @work: command to be aborted.
*/
void
scmd_eh_abort_handler(struct work_struct *work)
{
struct scsi_cmnd *scmd =
container_of(work, struct scsi_cmnd, abort_work.work);
struct scsi_device *sdev = scmd->device;
int rtn;
if (scsi_host_eh_past_deadline(sdev->host)) {
SCSI_LOG_ERROR_RECOVERY(3,
scmd_printk(KERN_INFO, scmd,
"scmd %p eh timeout, not aborting\n",
scmd));
} else {
SCSI_LOG_ERROR_RECOVERY(3,
scmd_printk(KERN_INFO, scmd,
"aborting command %p\n", scmd));
rtn = scsi_try_to_abort_cmd(sdev->host->hostt, scmd);
if (rtn == SUCCESS) {
scmd->result |= DID_TIME_OUT << 16;
if (scsi_host_eh_past_deadline(sdev->host)) {
SCSI_LOG_ERROR_RECOVERY(3,
scmd_printk(KERN_INFO, scmd,
"scmd %p eh timeout, "
"not retrying aborted "
"command\n", scmd));
} else if (!scsi_noretry_cmd(scmd) &&
(++scmd->retries <= scmd->allowed)) {
SCSI_LOG_ERROR_RECOVERY(3,
scmd_printk(KERN_WARNING, scmd,
"scmd %p retry "
"aborted command\n", scmd));
scsi_queue_insert(scmd, SCSI_MLQUEUE_EH_RETRY);
return;
} else {
SCSI_LOG_ERROR_RECOVERY(3,
scmd_printk(KERN_WARNING, scmd,
"scmd %p finish "
"aborted command\n", scmd));
scsi_finish_command(scmd);
return;
}
} else {
SCSI_LOG_ERROR_RECOVERY(3,
scmd_printk(KERN_INFO, scmd,
"scmd %p abort failed, rtn %d\n",
scmd, rtn));
}
}
if (!scsi_eh_scmd_add(scmd, 0)) {
SCSI_LOG_ERROR_RECOVERY(3,
scmd_printk(KERN_WARNING, scmd,
"scmd %p terminate "
"aborted command\n", scmd));
scmd->result |= DID_TIME_OUT << 16;
scsi_finish_command(scmd);
}
}
/**
* scsi_abort_command - schedule a command abort
* @scmd: scmd to abort.
*
* We only need to abort commands after a command timeout
*/
static int
scsi_abort_command(struct scsi_cmnd *scmd)
{
struct scsi_device *sdev = scmd->device;
struct Scsi_Host *shost = sdev->host;
unsigned long flags;
if (scmd->eh_eflags & SCSI_EH_ABORT_SCHEDULED) {
/*
* Retry after abort failed, escalate to next level.
*/
SCSI_LOG_ERROR_RECOVERY(3,
scmd_printk(KERN_INFO, scmd,
"scmd %p previous abort failed\n", scmd));
cancel_delayed_work(&scmd->abort_work);
return FAILED;
}
/*
* Do not try a command abort if
* SCSI EH has already started.
*/
spin_lock_irqsave(shost->host_lock, flags);
if (scsi_host_in_recovery(shost)) {
spin_unlock_irqrestore(shost->host_lock, flags);
SCSI_LOG_ERROR_RECOVERY(3,
scmd_printk(KERN_INFO, scmd,
"scmd %p not aborting, host in recovery\n",
scmd));
return FAILED;
}
if (shost->eh_deadline != -1 && !shost->last_reset)
shost->last_reset = jiffies;
spin_unlock_irqrestore(shost->host_lock, flags);
scmd->eh_eflags |= SCSI_EH_ABORT_SCHEDULED;
SCSI_LOG_ERROR_RECOVERY(3,
scmd_printk(KERN_INFO, scmd,
"scmd %p abort scheduled\n", scmd));
queue_delayed_work(shost->tmf_work_q, &scmd->abort_work, HZ / 100);
return SUCCESS;
}
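The asynchronous abort scheduled here is only attempted when the low-level driver has not opted out: scsi_times_out() below checks host->hostt->no_async_abort before calling scsi_abort_command(). A minimal, hypothetical host-template sketch of that opt-out:

#include <scsi/scsi_host.h>

/* Hypothetical LLD template: keep the old "escalate straight to EH" behaviour
 * by declining the asynchronous command aborts introduced above. */
static struct scsi_host_template example_sht = {
	.name		= "example-hba",	/* hypothetical driver name */
	.no_async_abort	= 1,	/* scsi_times_out() will skip scsi_abort_command() */
};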
/**
* scsi_eh_scmd_add - add scsi cmd to error handling.
* @scmd: scmd to run eh on.
@ -121,10 +244,12 @@ int scsi_eh_scmd_add(struct scsi_cmnd *scmd, int eh_flag)
if (scsi_host_set_state(shost, SHOST_CANCEL_RECOVERY))
goto out_unlock;
if (shost->eh_deadline && !shost->last_reset)
if (shost->eh_deadline != -1 && !shost->last_reset)
shost->last_reset = jiffies;
ret = 1;
if (scmd->eh_eflags & SCSI_EH_ABORT_SCHEDULED)
eh_flag &= ~SCSI_EH_CANCEL_CMD;
scmd->eh_eflags |= eh_flag;
list_add_tail(&scmd->eh_entry, &shost->eh_cmd_q);
shost->host_failed++;
@ -153,7 +278,7 @@ enum blk_eh_timer_return scsi_times_out(struct request *req)
trace_scsi_dispatch_cmd_timeout(scmd);
scsi_log_completion(scmd, TIMEOUT_ERROR);
if (host->eh_deadline && !host->last_reset)
if (host->eh_deadline != -1 && !host->last_reset)
host->last_reset = jiffies;
if (host->transportt->eh_timed_out)
@ -161,6 +286,10 @@ enum blk_eh_timer_return scsi_times_out(struct request *req)
else if (host->hostt->eh_timed_out)
rtn = host->hostt->eh_timed_out(scmd);
if (rtn == BLK_EH_NOT_HANDLED && !host->hostt->no_async_abort)
if (scsi_abort_command(scmd) == SUCCESS)
return BLK_EH_NOT_HANDLED;
scmd->result |= DID_TIME_OUT << 16;
if (unlikely(rtn == BLK_EH_NOT_HANDLED &&
@ -941,12 +1070,6 @@ retry:
scsi_eh_restore_cmnd(scmd, &ses);
if (scmd->request->cmd_type != REQ_TYPE_BLOCK_PC) {
struct scsi_driver *sdrv = scsi_cmd_to_driver(scmd);
if (sdrv->eh_action)
rtn = sdrv->eh_action(scmd, cmnd, cmnd_size, rtn);
}
return rtn;
}
@ -964,6 +1087,16 @@ static int scsi_request_sense(struct scsi_cmnd *scmd)
return scsi_send_eh_cmnd(scmd, NULL, 0, scmd->device->eh_timeout, ~0);
}
static int scsi_eh_action(struct scsi_cmnd *scmd, int rtn)
{
if (scmd->request->cmd_type != REQ_TYPE_BLOCK_PC) {
struct scsi_driver *sdrv = scsi_cmd_to_driver(scmd);
if (sdrv->eh_action)
rtn = sdrv->eh_action(scmd, rtn);
}
return rtn;
}
/**
* scsi_eh_finish_cmd - Handle a cmd that eh is finished with.
* @scmd: Original SCSI cmd that eh has finished.
@ -1010,7 +1143,6 @@ int scsi_eh_get_sense(struct list_head *work_q,
struct scsi_cmnd *scmd, *next;
struct Scsi_Host *shost;
int rtn;
unsigned long flags;
list_for_each_entry_safe(scmd, next, work_q, eh_entry) {
if ((scmd->eh_eflags & SCSI_EH_CANCEL_CMD) ||
@ -1018,16 +1150,13 @@ int scsi_eh_get_sense(struct list_head *work_q,
continue;
shost = scmd->device->host;
spin_lock_irqsave(shost->host_lock, flags);
if (scsi_host_eh_past_deadline(shost)) {
spin_unlock_irqrestore(shost->host_lock, flags);
SCSI_LOG_ERROR_RECOVERY(3,
shost_printk(KERN_INFO, shost,
"skip %s, past eh deadline\n",
__func__));
break;
}
spin_unlock_irqrestore(shost->host_lock, flags);
SCSI_LOG_ERROR_RECOVERY(2, scmd_printk(KERN_INFO, scmd,
"%s: requesting sense\n",
current->comm));
@ -1113,26 +1242,21 @@ static int scsi_eh_test_devices(struct list_head *cmd_list,
struct scsi_cmnd *scmd, *next;
struct scsi_device *sdev;
int finish_cmds;
unsigned long flags;
while (!list_empty(cmd_list)) {
scmd = list_entry(cmd_list->next, struct scsi_cmnd, eh_entry);
sdev = scmd->device;
if (!try_stu) {
spin_lock_irqsave(sdev->host->host_lock, flags);
if (scsi_host_eh_past_deadline(sdev->host)) {
/* Push items back onto work_q */
list_splice_init(cmd_list, work_q);
spin_unlock_irqrestore(sdev->host->host_lock,
flags);
SCSI_LOG_ERROR_RECOVERY(3,
shost_printk(KERN_INFO, sdev->host,
"skip %s, past eh deadline",
__func__));
break;
}
spin_unlock_irqrestore(sdev->host->host_lock, flags);
}
finish_cmds = !scsi_device_online(scmd->device) ||
@ -1142,7 +1266,9 @@ static int scsi_eh_test_devices(struct list_head *cmd_list,
list_for_each_entry_safe(scmd, next, cmd_list, eh_entry)
if (scmd->device == sdev) {
if (finish_cmds)
if (finish_cmds &&
(try_stu ||
scsi_eh_action(scmd, SUCCESS) == SUCCESS))
scsi_eh_finish_cmd(scmd, done_q);
else
list_move_tail(&scmd->eh_entry, work_q);
@ -1171,15 +1297,12 @@ static int scsi_eh_abort_cmds(struct list_head *work_q,
LIST_HEAD(check_list);
int rtn;
struct Scsi_Host *shost;
unsigned long flags;
list_for_each_entry_safe(scmd, next, work_q, eh_entry) {
if (!(scmd->eh_eflags & SCSI_EH_CANCEL_CMD))
continue;
shost = scmd->device->host;
spin_lock_irqsave(shost->host_lock, flags);
if (scsi_host_eh_past_deadline(shost)) {
spin_unlock_irqrestore(shost->host_lock, flags);
list_splice_init(&check_list, work_q);
SCSI_LOG_ERROR_RECOVERY(3,
shost_printk(KERN_INFO, shost,
@ -1187,7 +1310,6 @@ static int scsi_eh_abort_cmds(struct list_head *work_q,
__func__));
return list_empty(work_q);
}
spin_unlock_irqrestore(shost->host_lock, flags);
SCSI_LOG_ERROR_RECOVERY(3, printk("%s: aborting cmd:"
"0x%p\n", current->comm,
scmd));
@ -1251,19 +1373,15 @@ static int scsi_eh_stu(struct Scsi_Host *shost,
{
struct scsi_cmnd *scmd, *stu_scmd, *next;
struct scsi_device *sdev;
unsigned long flags;
shost_for_each_device(sdev, shost) {
spin_lock_irqsave(shost->host_lock, flags);
if (scsi_host_eh_past_deadline(shost)) {
spin_unlock_irqrestore(shost->host_lock, flags);
SCSI_LOG_ERROR_RECOVERY(3,
shost_printk(KERN_INFO, shost,
"skip %s, past eh deadline\n",
__func__));
break;
}
spin_unlock_irqrestore(shost->host_lock, flags);
stu_scmd = NULL;
list_for_each_entry(scmd, work_q, eh_entry)
if (scmd->device == sdev && SCSI_SENSE_VALID(scmd) &&
@ -1283,7 +1401,8 @@ static int scsi_eh_stu(struct Scsi_Host *shost,
!scsi_eh_tur(stu_scmd)) {
list_for_each_entry_safe(scmd, next,
work_q, eh_entry) {
if (scmd->device == sdev)
if (scmd->device == sdev &&
scsi_eh_action(scmd, SUCCESS) == SUCCESS)
scsi_eh_finish_cmd(scmd, done_q);
}
}
@ -1316,20 +1435,16 @@ static int scsi_eh_bus_device_reset(struct Scsi_Host *shost,
{
struct scsi_cmnd *scmd, *bdr_scmd, *next;
struct scsi_device *sdev;
unsigned long flags;
int rtn;
shost_for_each_device(sdev, shost) {
spin_lock_irqsave(shost->host_lock, flags);
if (scsi_host_eh_past_deadline(shost)) {
spin_unlock_irqrestore(shost->host_lock, flags);
SCSI_LOG_ERROR_RECOVERY(3,
shost_printk(KERN_INFO, shost,
"skip %s, past eh deadline\n",
__func__));
break;
}
spin_unlock_irqrestore(shost->host_lock, flags);
bdr_scmd = NULL;
list_for_each_entry(scmd, work_q, eh_entry)
if (scmd->device == sdev) {
@ -1350,7 +1465,8 @@ static int scsi_eh_bus_device_reset(struct Scsi_Host *shost,
!scsi_eh_tur(bdr_scmd)) {
list_for_each_entry_safe(scmd, next,
work_q, eh_entry) {
if (scmd->device == sdev)
if (scmd->device == sdev &&
scsi_eh_action(scmd, rtn) != FAILED)
scsi_eh_finish_cmd(scmd,
done_q);
}
@ -1389,11 +1505,8 @@ static int scsi_eh_target_reset(struct Scsi_Host *shost,
struct scsi_cmnd *next, *scmd;
int rtn;
unsigned int id;
unsigned long flags;
spin_lock_irqsave(shost->host_lock, flags);
if (scsi_host_eh_past_deadline(shost)) {
spin_unlock_irqrestore(shost->host_lock, flags);
/* push back on work queue for further processing */
list_splice_init(&check_list, work_q);
list_splice_init(&tmp_list, work_q);
@ -1403,7 +1516,6 @@ static int scsi_eh_target_reset(struct Scsi_Host *shost,
__func__));
return list_empty(work_q);
}
spin_unlock_irqrestore(shost->host_lock, flags);
scmd = list_entry(tmp_list.next, struct scsi_cmnd, eh_entry);
id = scmd_id(scmd);
@ -1448,7 +1560,6 @@ static int scsi_eh_bus_reset(struct Scsi_Host *shost,
LIST_HEAD(check_list);
unsigned int channel;
int rtn;
unsigned long flags;
/*
* we really want to loop over the various channels, and do this on
@ -1458,9 +1569,7 @@ static int scsi_eh_bus_reset(struct Scsi_Host *shost,
*/
for (channel = 0; channel <= shost->max_channel; channel++) {
spin_lock_irqsave(shost->host_lock, flags);
if (scsi_host_eh_past_deadline(shost)) {
spin_unlock_irqrestore(shost->host_lock, flags);
list_splice_init(&check_list, work_q);
SCSI_LOG_ERROR_RECOVERY(3,
shost_printk(KERN_INFO, shost,
@ -1468,7 +1577,6 @@ static int scsi_eh_bus_reset(struct Scsi_Host *shost,
__func__));
return list_empty(work_q);
}
spin_unlock_irqrestore(shost->host_lock, flags);
chan_scmd = NULL;
list_for_each_entry(scmd, work_q, eh_entry) {
@ -1569,7 +1677,7 @@ static void scsi_eh_offline_sdevs(struct list_head *work_q,
}
/**
* scsi_noretry_cmd - determinte if command should be failed fast
* scsi_noretry_cmd - determine if command should be failed fast
* @scmd: SCSI cmd to examine.
*/
int scsi_noretry_cmd(struct scsi_cmnd *scmd)
@ -1577,6 +1685,8 @@ int scsi_noretry_cmd(struct scsi_cmnd *scmd)
switch (host_byte(scmd->result)) {
case DID_OK:
break;
case DID_TIME_OUT:
goto check_type;
case DID_BUS_BUSY:
return (scmd->request->cmd_flags & REQ_FAILFAST_TRANSPORT);
case DID_PARITY:
@ -1590,18 +1700,19 @@ int scsi_noretry_cmd(struct scsi_cmnd *scmd)
return (scmd->request->cmd_flags & REQ_FAILFAST_DRIVER);
}
switch (status_byte(scmd->result)) {
case CHECK_CONDITION:
/*
* assume caller has checked sense and determinted
* the check condition was retryable.
*/
if (scmd->request->cmd_flags & REQ_FAILFAST_DEV ||
scmd->request->cmd_type == REQ_TYPE_BLOCK_PC)
return 1;
}
if (status_byte(scmd->result) != CHECK_CONDITION)
return 0;
return 0;
check_type:
/*
* assume caller has checked sense and determined
* the check condition was retryable.
*/
if (scmd->request->cmd_flags & REQ_FAILFAST_DEV ||
scmd->request->cmd_type == REQ_TYPE_BLOCK_PC)
return 1;
else
return 0;
}
/**
@ -1651,9 +1762,13 @@ int scsi_decide_disposition(struct scsi_cmnd *scmd)
* looks good. drop through, and check the next byte.
*/
break;
case DID_ABORT:
if (scmd->eh_eflags & SCSI_EH_ABORT_SCHEDULED) {
scmd->result |= DID_TIME_OUT << 16;
return SUCCESS;
}
case DID_NO_CONNECT:
case DID_BAD_TARGET:
case DID_ABORT:
/*
* note - this means that we just report the status back
* to the top level driver, not that we actually think
@ -1999,7 +2114,7 @@ static void scsi_unjam_host(struct Scsi_Host *shost)
scsi_eh_ready_devs(shost, &eh_work_q, &eh_done_q);
spin_lock_irqsave(shost->host_lock, flags);
if (shost->eh_deadline)
if (shost->eh_deadline != -1)
shost->last_reset = 0;
spin_unlock_irqrestore(shost->host_lock, flags);
scsi_eh_flush_done_q(&eh_done_q);

View file

@ -16,6 +16,8 @@
#include "scsi_priv.h"
#ifdef CONFIG_PM_SLEEP
static int scsi_dev_type_suspend(struct device *dev, int (*cb)(struct device *))
{
int err;
@ -43,8 +45,6 @@ static int scsi_dev_type_resume(struct device *dev, int (*cb)(struct device *))
return err;
}
#ifdef CONFIG_PM_SLEEP
static int
scsi_bus_suspend_common(struct device *dev, int (*cb)(struct device *))
{
@ -145,38 +145,22 @@ static int scsi_bus_restore(struct device *dev)
#ifdef CONFIG_PM_RUNTIME
static int sdev_blk_runtime_suspend(struct scsi_device *sdev,
int (*cb)(struct device *))
static int sdev_runtime_suspend(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
struct scsi_device *sdev = to_scsi_device(dev);
int err;
err = blk_pre_runtime_suspend(sdev->request_queue);
if (err)
return err;
if (cb)
err = cb(&sdev->sdev_gendev);
if (pm && pm->runtime_suspend)
err = pm->runtime_suspend(dev);
blk_post_runtime_suspend(sdev->request_queue, err);
return err;
}
static int sdev_runtime_suspend(struct device *dev)
{
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
int (*cb)(struct device *) = pm ? pm->runtime_suspend : NULL;
struct scsi_device *sdev = to_scsi_device(dev);
int err;
if (sdev->request_queue->dev)
return sdev_blk_runtime_suspend(sdev, cb);
err = scsi_dev_type_suspend(dev, cb);
if (err == -EAGAIN)
pm_schedule_suspend(dev, jiffies_to_msecs(
round_jiffies_up_relative(HZ/10)));
return err;
}
static int scsi_runtime_suspend(struct device *dev)
{
int err = 0;
@ -190,29 +174,18 @@ static int scsi_runtime_suspend(struct device *dev)
return err;
}
static int sdev_blk_runtime_resume(struct scsi_device *sdev,
int (*cb)(struct device *))
{
int err = 0;
blk_pre_runtime_resume(sdev->request_queue);
if (cb)
err = cb(&sdev->sdev_gendev);
blk_post_runtime_resume(sdev->request_queue, err);
return err;
}
static int sdev_runtime_resume(struct device *dev)
{
struct scsi_device *sdev = to_scsi_device(dev);
const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
int (*cb)(struct device *) = pm ? pm->runtime_resume : NULL;
int err = 0;
if (sdev->request_queue->dev)
return sdev_blk_runtime_resume(sdev, cb);
else
return scsi_dev_type_resume(dev, cb);
blk_pre_runtime_resume(sdev->request_queue);
if (pm && pm->runtime_resume)
err = pm->runtime_resume(dev);
blk_post_runtime_resume(sdev->request_queue, err);
return err;
}
static int scsi_runtime_resume(struct device *dev)
@ -235,14 +208,11 @@ static int scsi_runtime_idle(struct device *dev)
/* Insert hooks here for targets, hosts, and transport classes */
if (scsi_is_sdev_device(dev)) {
struct scsi_device *sdev = to_scsi_device(dev);
if (sdev->request_queue->dev) {
pm_runtime_mark_last_busy(dev);
pm_runtime_autosuspend(dev);
return -EBUSY;
}
pm_runtime_mark_last_busy(dev);
pm_runtime_autosuspend(dev);
return -EBUSY;
}
return 0;
}

View file

@ -19,6 +19,7 @@ struct scsi_nl_hdr;
* Scsi Error Handler Flags
*/
#define SCSI_EH_CANCEL_CMD 0x0001 /* Cancel this cmd */
#define SCSI_EH_ABORT_SCHEDULED 0x0002 /* Abort has been scheduled */
#define SCSI_SENSE_VALID(scmd) \
(((scmd)->sense_buffer[0] & 0x70) == 0x70)
@ -66,6 +67,7 @@ extern int __init scsi_init_devinfo(void);
extern void scsi_exit_devinfo(void);
/* scsi_error.c */
extern void scmd_eh_abort_handler(struct work_struct *work);
extern enum blk_eh_timer_return scsi_times_out(struct request *req);
extern int scsi_error_handler(void *host);
extern int scsi_decide_disposition(struct scsi_cmnd *cmd);

View file

@ -287,7 +287,9 @@ show_shost_eh_deadline(struct device *dev,
{
struct Scsi_Host *shost = class_to_shost(dev);
return sprintf(buf, "%d\n", shost->eh_deadline / HZ);
if (shost->eh_deadline == -1)
return snprintf(buf, strlen("off") + 2, "off\n");
return sprintf(buf, "%u\n", shost->eh_deadline / HZ);
}
static ssize_t
@ -296,22 +298,34 @@ store_shost_eh_deadline(struct device *dev, struct device_attribute *attr,
{
struct Scsi_Host *shost = class_to_shost(dev);
int ret = -EINVAL;
int deadline;
unsigned long flags;
unsigned long deadline, flags;
if (shost->transportt && shost->transportt->eh_strategy_handler)
return ret;
if (sscanf(buf, "%d\n", &deadline) == 1) {
spin_lock_irqsave(shost->host_lock, flags);
if (scsi_host_in_recovery(shost))
ret = -EBUSY;
else {
shost->eh_deadline = deadline * HZ;
ret = count;
}
spin_unlock_irqrestore(shost->host_lock, flags);
if (!strncmp(buf, "off", strlen("off")))
deadline = -1;
else {
ret = kstrtoul(buf, 10, &deadline);
if (ret)
return ret;
if (deadline * HZ > UINT_MAX)
return -EINVAL;
}
spin_lock_irqsave(shost->host_lock, flags);
if (scsi_host_in_recovery(shost))
ret = -EBUSY;
else {
if (deadline == -1)
shost->eh_deadline = -1;
else
shost->eh_deadline = deadline * HZ;
ret = count;
}
spin_unlock_irqrestore(shost->host_lock, flags);
return ret;
}
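From userspace the attribute now accepts either a number of seconds or the literal string "off". A hedged usage sketch follows; the host0 path is hypothetical and the location assumes the usual /sys/class/scsi_host layout for this attribute.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical host instance; adjust hostN to the adapter in question. */
	int fd = open("/sys/class/scsi_host/host0/eh_deadline", O_WRONLY);

	if (fd < 0) {
		perror("open eh_deadline");
		return 1;
	}
	/* "off" disables the deadline; a decimal value sets it in seconds. */
	if (write(fd, "off", 3) != 3)
		perror("write eh_deadline");
	close(fd);
	return 0;
}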

View file

@ -305,20 +305,71 @@ show_##type##_##name(struct device *dev, struct device_attribute *attr, \
iscsi_iface_attr_show(type, name, ISCSI_NET_PARAM, param) \
static ISCSI_IFACE_ATTR(type, name, S_IRUGO, show_##type##_##name, NULL);
/* generic read only ipvi4 attribute */
#define iscsi_iface_attr(type, name, param) \
iscsi_iface_attr_show(type, name, ISCSI_IFACE_PARAM, param) \
static ISCSI_IFACE_ATTR(type, name, S_IRUGO, show_##type##_##name, NULL);
/* generic read only ipv4 attribute */
iscsi_iface_net_attr(ipv4_iface, ipaddress, ISCSI_NET_PARAM_IPV4_ADDR);
iscsi_iface_net_attr(ipv4_iface, gateway, ISCSI_NET_PARAM_IPV4_GW);
iscsi_iface_net_attr(ipv4_iface, subnet, ISCSI_NET_PARAM_IPV4_SUBNET);
iscsi_iface_net_attr(ipv4_iface, bootproto, ISCSI_NET_PARAM_IPV4_BOOTPROTO);
iscsi_iface_net_attr(ipv4_iface, dhcp_dns_address_en,
ISCSI_NET_PARAM_IPV4_DHCP_DNS_ADDR_EN);
iscsi_iface_net_attr(ipv4_iface, dhcp_slp_da_info_en,
ISCSI_NET_PARAM_IPV4_DHCP_SLP_DA_EN);
iscsi_iface_net_attr(ipv4_iface, tos_en, ISCSI_NET_PARAM_IPV4_TOS_EN);
iscsi_iface_net_attr(ipv4_iface, tos, ISCSI_NET_PARAM_IPV4_TOS);
iscsi_iface_net_attr(ipv4_iface, grat_arp_en,
ISCSI_NET_PARAM_IPV4_GRAT_ARP_EN);
iscsi_iface_net_attr(ipv4_iface, dhcp_alt_client_id_en,
ISCSI_NET_PARAM_IPV4_DHCP_ALT_CLIENT_ID_EN);
iscsi_iface_net_attr(ipv4_iface, dhcp_alt_client_id,
ISCSI_NET_PARAM_IPV4_DHCP_ALT_CLIENT_ID);
iscsi_iface_net_attr(ipv4_iface, dhcp_req_vendor_id_en,
ISCSI_NET_PARAM_IPV4_DHCP_REQ_VENDOR_ID_EN);
iscsi_iface_net_attr(ipv4_iface, dhcp_use_vendor_id_en,
ISCSI_NET_PARAM_IPV4_DHCP_USE_VENDOR_ID_EN);
iscsi_iface_net_attr(ipv4_iface, dhcp_vendor_id,
ISCSI_NET_PARAM_IPV4_DHCP_VENDOR_ID);
iscsi_iface_net_attr(ipv4_iface, dhcp_learn_iqn_en,
ISCSI_NET_PARAM_IPV4_DHCP_LEARN_IQN_EN);
iscsi_iface_net_attr(ipv4_iface, fragment_disable,
ISCSI_NET_PARAM_IPV4_FRAGMENT_DISABLE);
iscsi_iface_net_attr(ipv4_iface, incoming_forwarding_en,
ISCSI_NET_PARAM_IPV4_IN_FORWARD_EN);
iscsi_iface_net_attr(ipv4_iface, ttl, ISCSI_NET_PARAM_IPV4_TTL);
/* generic read only ipv6 attribute */
iscsi_iface_net_attr(ipv6_iface, ipaddress, ISCSI_NET_PARAM_IPV6_ADDR);
iscsi_iface_net_attr(ipv6_iface, link_local_addr, ISCSI_NET_PARAM_IPV6_LINKLOCAL);
iscsi_iface_net_attr(ipv6_iface, link_local_addr,
ISCSI_NET_PARAM_IPV6_LINKLOCAL);
iscsi_iface_net_attr(ipv6_iface, router_addr, ISCSI_NET_PARAM_IPV6_ROUTER);
iscsi_iface_net_attr(ipv6_iface, ipaddr_autocfg,
ISCSI_NET_PARAM_IPV6_ADDR_AUTOCFG);
iscsi_iface_net_attr(ipv6_iface, link_local_autocfg,
ISCSI_NET_PARAM_IPV6_LINKLOCAL_AUTOCFG);
iscsi_iface_net_attr(ipv6_iface, link_local_state,
ISCSI_NET_PARAM_IPV6_LINKLOCAL_STATE);
iscsi_iface_net_attr(ipv6_iface, router_state,
ISCSI_NET_PARAM_IPV6_ROUTER_STATE);
iscsi_iface_net_attr(ipv6_iface, grat_neighbor_adv_en,
ISCSI_NET_PARAM_IPV6_GRAT_NEIGHBOR_ADV_EN);
iscsi_iface_net_attr(ipv6_iface, mld_en, ISCSI_NET_PARAM_IPV6_MLD_EN);
iscsi_iface_net_attr(ipv6_iface, flow_label, ISCSI_NET_PARAM_IPV6_FLOW_LABEL);
iscsi_iface_net_attr(ipv6_iface, traffic_class,
ISCSI_NET_PARAM_IPV6_TRAFFIC_CLASS);
iscsi_iface_net_attr(ipv6_iface, hop_limit, ISCSI_NET_PARAM_IPV6_HOP_LIMIT);
iscsi_iface_net_attr(ipv6_iface, nd_reachable_tmo,
ISCSI_NET_PARAM_IPV6_ND_REACHABLE_TMO);
iscsi_iface_net_attr(ipv6_iface, nd_rexmit_time,
ISCSI_NET_PARAM_IPV6_ND_REXMIT_TIME);
iscsi_iface_net_attr(ipv6_iface, nd_stale_tmo,
ISCSI_NET_PARAM_IPV6_ND_STALE_TMO);
iscsi_iface_net_attr(ipv6_iface, dup_addr_detect_cnt,
ISCSI_NET_PARAM_IPV6_DUP_ADDR_DETECT_CNT);
iscsi_iface_net_attr(ipv6_iface, router_adv_link_mtu,
ISCSI_NET_PARAM_IPV6_RTR_ADV_LINK_MTU);
/* common read only iface attribute */
iscsi_iface_net_attr(iface, enabled, ISCSI_NET_PARAM_IFACE_ENABLE);
@ -327,6 +378,40 @@ iscsi_iface_net_attr(iface, vlan_priority, ISCSI_NET_PARAM_VLAN_PRIORITY);
iscsi_iface_net_attr(iface, vlan_enabled, ISCSI_NET_PARAM_VLAN_ENABLED);
iscsi_iface_net_attr(iface, mtu, ISCSI_NET_PARAM_MTU);
iscsi_iface_net_attr(iface, port, ISCSI_NET_PARAM_PORT);
iscsi_iface_net_attr(iface, ipaddress_state, ISCSI_NET_PARAM_IPADDR_STATE);
iscsi_iface_net_attr(iface, delayed_ack_en, ISCSI_NET_PARAM_DELAYED_ACK_EN);
iscsi_iface_net_attr(iface, tcp_nagle_disable,
ISCSI_NET_PARAM_TCP_NAGLE_DISABLE);
iscsi_iface_net_attr(iface, tcp_wsf_disable, ISCSI_NET_PARAM_TCP_WSF_DISABLE);
iscsi_iface_net_attr(iface, tcp_wsf, ISCSI_NET_PARAM_TCP_WSF);
iscsi_iface_net_attr(iface, tcp_timer_scale, ISCSI_NET_PARAM_TCP_TIMER_SCALE);
iscsi_iface_net_attr(iface, tcp_timestamp_en, ISCSI_NET_PARAM_TCP_TIMESTAMP_EN);
iscsi_iface_net_attr(iface, cache_id, ISCSI_NET_PARAM_CACHE_ID);
iscsi_iface_net_attr(iface, redirect_en, ISCSI_NET_PARAM_REDIRECT_EN);
/* common iscsi specific settings attributes */
iscsi_iface_attr(iface, def_taskmgmt_tmo, ISCSI_IFACE_PARAM_DEF_TASKMGMT_TMO);
iscsi_iface_attr(iface, header_digest, ISCSI_IFACE_PARAM_HDRDGST_EN);
iscsi_iface_attr(iface, data_digest, ISCSI_IFACE_PARAM_DATADGST_EN);
iscsi_iface_attr(iface, immediate_data, ISCSI_IFACE_PARAM_IMM_DATA_EN);
iscsi_iface_attr(iface, initial_r2t, ISCSI_IFACE_PARAM_INITIAL_R2T_EN);
iscsi_iface_attr(iface, data_seq_in_order,
ISCSI_IFACE_PARAM_DATASEQ_INORDER_EN);
iscsi_iface_attr(iface, data_pdu_in_order, ISCSI_IFACE_PARAM_PDU_INORDER_EN);
iscsi_iface_attr(iface, erl, ISCSI_IFACE_PARAM_ERL);
iscsi_iface_attr(iface, max_recv_dlength, ISCSI_IFACE_PARAM_MAX_RECV_DLENGTH);
iscsi_iface_attr(iface, first_burst_len, ISCSI_IFACE_PARAM_FIRST_BURST);
iscsi_iface_attr(iface, max_outstanding_r2t, ISCSI_IFACE_PARAM_MAX_R2T);
iscsi_iface_attr(iface, max_burst_len, ISCSI_IFACE_PARAM_MAX_BURST);
iscsi_iface_attr(iface, chap_auth, ISCSI_IFACE_PARAM_CHAP_AUTH_EN);
iscsi_iface_attr(iface, bidi_chap, ISCSI_IFACE_PARAM_BIDI_CHAP_EN);
iscsi_iface_attr(iface, discovery_auth_optional,
ISCSI_IFACE_PARAM_DISCOVERY_AUTH_OPTIONAL);
iscsi_iface_attr(iface, discovery_logout,
ISCSI_IFACE_PARAM_DISCOVERY_LOGOUT_EN);
iscsi_iface_attr(iface, strict_login_comp_en,
ISCSI_IFACE_PARAM_STRICT_LOGIN_COMP_EN);
iscsi_iface_attr(iface, initiator_name, ISCSI_IFACE_PARAM_INITIATOR_NAME);
static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
struct attribute *attr, int i)
@ -335,6 +420,7 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
struct iscsi_iface *iface = iscsi_dev_to_iface(dev);
struct iscsi_transport *t = iface->transport;
int param;
int param_type;
if (attr == &dev_attr_iface_enabled.attr)
param = ISCSI_NET_PARAM_IFACE_ENABLE;
@ -348,6 +434,60 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
param = ISCSI_NET_PARAM_MTU;
else if (attr == &dev_attr_iface_port.attr)
param = ISCSI_NET_PARAM_PORT;
else if (attr == &dev_attr_iface_ipaddress_state.attr)
param = ISCSI_NET_PARAM_IPADDR_STATE;
else if (attr == &dev_attr_iface_delayed_ack_en.attr)
param = ISCSI_NET_PARAM_DELAYED_ACK_EN;
else if (attr == &dev_attr_iface_tcp_nagle_disable.attr)
param = ISCSI_NET_PARAM_TCP_NAGLE_DISABLE;
else if (attr == &dev_attr_iface_tcp_wsf_disable.attr)
param = ISCSI_NET_PARAM_TCP_WSF_DISABLE;
else if (attr == &dev_attr_iface_tcp_wsf.attr)
param = ISCSI_NET_PARAM_TCP_WSF;
else if (attr == &dev_attr_iface_tcp_timer_scale.attr)
param = ISCSI_NET_PARAM_TCP_TIMER_SCALE;
else if (attr == &dev_attr_iface_tcp_timestamp_en.attr)
param = ISCSI_NET_PARAM_TCP_TIMESTAMP_EN;
else if (attr == &dev_attr_iface_cache_id.attr)
param = ISCSI_NET_PARAM_CACHE_ID;
else if (attr == &dev_attr_iface_redirect_en.attr)
param = ISCSI_NET_PARAM_REDIRECT_EN;
else if (attr == &dev_attr_iface_def_taskmgmt_tmo.attr)
param = ISCSI_IFACE_PARAM_DEF_TASKMGMT_TMO;
else if (attr == &dev_attr_iface_header_digest.attr)
param = ISCSI_IFACE_PARAM_HDRDGST_EN;
else if (attr == &dev_attr_iface_data_digest.attr)
param = ISCSI_IFACE_PARAM_DATADGST_EN;
else if (attr == &dev_attr_iface_immediate_data.attr)
param = ISCSI_IFACE_PARAM_IMM_DATA_EN;
else if (attr == &dev_attr_iface_initial_r2t.attr)
param = ISCSI_IFACE_PARAM_INITIAL_R2T_EN;
else if (attr == &dev_attr_iface_data_seq_in_order.attr)
param = ISCSI_IFACE_PARAM_DATASEQ_INORDER_EN;
else if (attr == &dev_attr_iface_data_pdu_in_order.attr)
param = ISCSI_IFACE_PARAM_PDU_INORDER_EN;
else if (attr == &dev_attr_iface_erl.attr)
param = ISCSI_IFACE_PARAM_ERL;
else if (attr == &dev_attr_iface_max_recv_dlength.attr)
param = ISCSI_IFACE_PARAM_MAX_RECV_DLENGTH;
else if (attr == &dev_attr_iface_first_burst_len.attr)
param = ISCSI_IFACE_PARAM_FIRST_BURST;
else if (attr == &dev_attr_iface_max_outstanding_r2t.attr)
param = ISCSI_IFACE_PARAM_MAX_R2T;
else if (attr == &dev_attr_iface_max_burst_len.attr)
param = ISCSI_IFACE_PARAM_MAX_BURST;
else if (attr == &dev_attr_iface_chap_auth.attr)
param = ISCSI_IFACE_PARAM_CHAP_AUTH_EN;
else if (attr == &dev_attr_iface_bidi_chap.attr)
param = ISCSI_IFACE_PARAM_BIDI_CHAP_EN;
else if (attr == &dev_attr_iface_discovery_auth_optional.attr)
param = ISCSI_IFACE_PARAM_DISCOVERY_AUTH_OPTIONAL;
else if (attr == &dev_attr_iface_discovery_logout.attr)
param = ISCSI_IFACE_PARAM_DISCOVERY_LOGOUT_EN;
else if (attr == &dev_attr_iface_strict_login_comp_en.attr)
param = ISCSI_IFACE_PARAM_STRICT_LOGIN_COMP_EN;
else if (attr == &dev_attr_iface_initiator_name.attr)
param = ISCSI_IFACE_PARAM_INITIATOR_NAME;
else if (iface->iface_type == ISCSI_IFACE_TYPE_IPV4) {
if (attr == &dev_attr_ipv4_iface_ipaddress.attr)
param = ISCSI_NET_PARAM_IPV4_ADDR;
@ -357,6 +497,42 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
param = ISCSI_NET_PARAM_IPV4_SUBNET;
else if (attr == &dev_attr_ipv4_iface_bootproto.attr)
param = ISCSI_NET_PARAM_IPV4_BOOTPROTO;
else if (attr ==
&dev_attr_ipv4_iface_dhcp_dns_address_en.attr)
param = ISCSI_NET_PARAM_IPV4_DHCP_DNS_ADDR_EN;
else if (attr ==
&dev_attr_ipv4_iface_dhcp_slp_da_info_en.attr)
param = ISCSI_NET_PARAM_IPV4_DHCP_SLP_DA_EN;
else if (attr == &dev_attr_ipv4_iface_tos_en.attr)
param = ISCSI_NET_PARAM_IPV4_TOS_EN;
else if (attr == &dev_attr_ipv4_iface_tos.attr)
param = ISCSI_NET_PARAM_IPV4_TOS;
else if (attr == &dev_attr_ipv4_iface_grat_arp_en.attr)
param = ISCSI_NET_PARAM_IPV4_GRAT_ARP_EN;
else if (attr ==
&dev_attr_ipv4_iface_dhcp_alt_client_id_en.attr)
param = ISCSI_NET_PARAM_IPV4_DHCP_ALT_CLIENT_ID_EN;
else if (attr == &dev_attr_ipv4_iface_dhcp_alt_client_id.attr)
param = ISCSI_NET_PARAM_IPV4_DHCP_ALT_CLIENT_ID;
else if (attr ==
&dev_attr_ipv4_iface_dhcp_req_vendor_id_en.attr)
param = ISCSI_NET_PARAM_IPV4_DHCP_REQ_VENDOR_ID_EN;
else if (attr ==
&dev_attr_ipv4_iface_dhcp_use_vendor_id_en.attr)
param = ISCSI_NET_PARAM_IPV4_DHCP_USE_VENDOR_ID_EN;
else if (attr == &dev_attr_ipv4_iface_dhcp_vendor_id.attr)
param = ISCSI_NET_PARAM_IPV4_DHCP_VENDOR_ID;
else if (attr ==
&dev_attr_ipv4_iface_dhcp_learn_iqn_en.attr)
param = ISCSI_NET_PARAM_IPV4_DHCP_LEARN_IQN_EN;
else if (attr ==
&dev_attr_ipv4_iface_fragment_disable.attr)
param = ISCSI_NET_PARAM_IPV4_FRAGMENT_DISABLE;
else if (attr ==
&dev_attr_ipv4_iface_incoming_forwarding_en.attr)
param = ISCSI_NET_PARAM_IPV4_IN_FORWARD_EN;
else if (attr == &dev_attr_ipv4_iface_ttl.attr)
param = ISCSI_NET_PARAM_IPV4_TTL;
else
return 0;
} else if (iface->iface_type == ISCSI_IFACE_TYPE_IPV6) {
@ -370,6 +546,31 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
param = ISCSI_NET_PARAM_IPV6_ADDR_AUTOCFG;
else if (attr == &dev_attr_ipv6_iface_link_local_autocfg.attr)
param = ISCSI_NET_PARAM_IPV6_LINKLOCAL_AUTOCFG;
else if (attr == &dev_attr_ipv6_iface_link_local_state.attr)
param = ISCSI_NET_PARAM_IPV6_LINKLOCAL_STATE;
else if (attr == &dev_attr_ipv6_iface_router_state.attr)
param = ISCSI_NET_PARAM_IPV6_ROUTER_STATE;
else if (attr ==
&dev_attr_ipv6_iface_grat_neighbor_adv_en.attr)
param = ISCSI_NET_PARAM_IPV6_GRAT_NEIGHBOR_ADV_EN;
else if (attr == &dev_attr_ipv6_iface_mld_en.attr)
param = ISCSI_NET_PARAM_IPV6_MLD_EN;
else if (attr == &dev_attr_ipv6_iface_flow_label.attr)
param = ISCSI_NET_PARAM_IPV6_FLOW_LABEL;
else if (attr == &dev_attr_ipv6_iface_traffic_class.attr)
param = ISCSI_NET_PARAM_IPV6_TRAFFIC_CLASS;
else if (attr == &dev_attr_ipv6_iface_hop_limit.attr)
param = ISCSI_NET_PARAM_IPV6_HOP_LIMIT;
else if (attr == &dev_attr_ipv6_iface_nd_reachable_tmo.attr)
param = ISCSI_NET_PARAM_IPV6_ND_REACHABLE_TMO;
else if (attr == &dev_attr_ipv6_iface_nd_rexmit_time.attr)
param = ISCSI_NET_PARAM_IPV6_ND_REXMIT_TIME;
else if (attr == &dev_attr_ipv6_iface_nd_stale_tmo.attr)
param = ISCSI_NET_PARAM_IPV6_ND_STALE_TMO;
else if (attr == &dev_attr_ipv6_iface_dup_addr_detect_cnt.attr)
param = ISCSI_NET_PARAM_IPV6_DUP_ADDR_DETECT_CNT;
else if (attr == &dev_attr_ipv6_iface_router_adv_link_mtu.attr)
param = ISCSI_NET_PARAM_IPV6_RTR_ADV_LINK_MTU;
else
return 0;
} else {
@ -377,7 +578,32 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
return 0;
}
return t->attr_is_visible(ISCSI_NET_PARAM, param);
switch (param) {
case ISCSI_IFACE_PARAM_DEF_TASKMGMT_TMO:
case ISCSI_IFACE_PARAM_HDRDGST_EN:
case ISCSI_IFACE_PARAM_DATADGST_EN:
case ISCSI_IFACE_PARAM_IMM_DATA_EN:
case ISCSI_IFACE_PARAM_INITIAL_R2T_EN:
case ISCSI_IFACE_PARAM_DATASEQ_INORDER_EN:
case ISCSI_IFACE_PARAM_PDU_INORDER_EN:
case ISCSI_IFACE_PARAM_ERL:
case ISCSI_IFACE_PARAM_MAX_RECV_DLENGTH:
case ISCSI_IFACE_PARAM_FIRST_BURST:
case ISCSI_IFACE_PARAM_MAX_R2T:
case ISCSI_IFACE_PARAM_MAX_BURST:
case ISCSI_IFACE_PARAM_CHAP_AUTH_EN:
case ISCSI_IFACE_PARAM_BIDI_CHAP_EN:
case ISCSI_IFACE_PARAM_DISCOVERY_AUTH_OPTIONAL:
case ISCSI_IFACE_PARAM_DISCOVERY_LOGOUT_EN:
case ISCSI_IFACE_PARAM_STRICT_LOGIN_COMP_EN:
case ISCSI_IFACE_PARAM_INITIATOR_NAME:
param_type = ISCSI_IFACE_PARAM;
break;
default:
param_type = ISCSI_NET_PARAM;
}
return t->attr_is_visible(param_type, param);
}
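On the transport side this means ->attr_is_visible() is now invoked with two possible param_type values for iface attributes. A hypothetical LLD-side sketch of handling both (parameter choices are illustrative only):

#include <linux/stat.h>
#include <scsi/scsi_transport_iscsi.h>
#include <scsi/iscsi_if.h>

/* Hypothetical callback: expose all network params read-only, but only a
 * couple of the new iSCSI iface params. Wired up via
 * .attr_is_visible = example_attr_is_visible in struct iscsi_transport. */
static umode_t example_attr_is_visible(int param_type, int param)
{
	switch (param_type) {
	case ISCSI_NET_PARAM:
		return S_IRUGO;
	case ISCSI_IFACE_PARAM:
		switch (param) {
		case ISCSI_IFACE_PARAM_INITIATOR_NAME:
		case ISCSI_IFACE_PARAM_HDRDGST_EN:
			return S_IRUGO;
		default:
			return 0;	/* hide params this adapter cannot report */
		}
	}
	return 0;
}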
static struct attribute *iscsi_iface_attrs[] = {
@ -396,6 +622,59 @@ static struct attribute *iscsi_iface_attrs[] = {
&dev_attr_ipv6_iface_link_local_autocfg.attr,
&dev_attr_iface_mtu.attr,
&dev_attr_iface_port.attr,
&dev_attr_iface_ipaddress_state.attr,
&dev_attr_iface_delayed_ack_en.attr,
&dev_attr_iface_tcp_nagle_disable.attr,
&dev_attr_iface_tcp_wsf_disable.attr,
&dev_attr_iface_tcp_wsf.attr,
&dev_attr_iface_tcp_timer_scale.attr,
&dev_attr_iface_tcp_timestamp_en.attr,
&dev_attr_iface_cache_id.attr,
&dev_attr_iface_redirect_en.attr,
&dev_attr_iface_def_taskmgmt_tmo.attr,
&dev_attr_iface_header_digest.attr,
&dev_attr_iface_data_digest.attr,
&dev_attr_iface_immediate_data.attr,
&dev_attr_iface_initial_r2t.attr,
&dev_attr_iface_data_seq_in_order.attr,
&dev_attr_iface_data_pdu_in_order.attr,
&dev_attr_iface_erl.attr,
&dev_attr_iface_max_recv_dlength.attr,
&dev_attr_iface_first_burst_len.attr,
&dev_attr_iface_max_outstanding_r2t.attr,
&dev_attr_iface_max_burst_len.attr,
&dev_attr_iface_chap_auth.attr,
&dev_attr_iface_bidi_chap.attr,
&dev_attr_iface_discovery_auth_optional.attr,
&dev_attr_iface_discovery_logout.attr,
&dev_attr_iface_strict_login_comp_en.attr,
&dev_attr_iface_initiator_name.attr,
&dev_attr_ipv4_iface_dhcp_dns_address_en.attr,
&dev_attr_ipv4_iface_dhcp_slp_da_info_en.attr,
&dev_attr_ipv4_iface_tos_en.attr,
&dev_attr_ipv4_iface_tos.attr,
&dev_attr_ipv4_iface_grat_arp_en.attr,
&dev_attr_ipv4_iface_dhcp_alt_client_id_en.attr,
&dev_attr_ipv4_iface_dhcp_alt_client_id.attr,
&dev_attr_ipv4_iface_dhcp_req_vendor_id_en.attr,
&dev_attr_ipv4_iface_dhcp_use_vendor_id_en.attr,
&dev_attr_ipv4_iface_dhcp_vendor_id.attr,
&dev_attr_ipv4_iface_dhcp_learn_iqn_en.attr,
&dev_attr_ipv4_iface_fragment_disable.attr,
&dev_attr_ipv4_iface_incoming_forwarding_en.attr,
&dev_attr_ipv4_iface_ttl.attr,
&dev_attr_ipv6_iface_link_local_state.attr,
&dev_attr_ipv6_iface_router_state.attr,
&dev_attr_ipv6_iface_grat_neighbor_adv_en.attr,
&dev_attr_ipv6_iface_mld_en.attr,
&dev_attr_ipv6_iface_flow_label.attr,
&dev_attr_ipv6_iface_traffic_class.attr,
&dev_attr_ipv6_iface_hop_limit.attr,
&dev_attr_ipv6_iface_nd_reachable_tmo.attr,
&dev_attr_ipv6_iface_nd_rexmit_time.attr,
&dev_attr_ipv6_iface_nd_stale_tmo.attr,
&dev_attr_ipv6_iface_dup_addr_detect_cnt.attr,
&dev_attr_ipv6_iface_router_adv_link_mtu.attr,
NULL,
};
@ -404,6 +683,61 @@ static struct attribute_group iscsi_iface_group = {
.is_visible = iscsi_iface_attr_is_visible,
};
/* convert iscsi_ipaddress_state values to ascii string name */
static const struct {
enum iscsi_ipaddress_state value;
char *name;
} iscsi_ipaddress_state_names[] = {
{ISCSI_IPDDRESS_STATE_UNCONFIGURED, "Unconfigured" },
{ISCSI_IPDDRESS_STATE_ACQUIRING, "Acquiring" },
{ISCSI_IPDDRESS_STATE_TENTATIVE, "Tentative" },
{ISCSI_IPDDRESS_STATE_VALID, "Valid" },
{ISCSI_IPDDRESS_STATE_DISABLING, "Disabling" },
{ISCSI_IPDDRESS_STATE_INVALID, "Invalid" },
{ISCSI_IPDDRESS_STATE_DEPRECATED, "Deprecated" },
};
char *iscsi_get_ipaddress_state_name(enum iscsi_ipaddress_state port_state)
{
int i;
char *state = NULL;
for (i = 0; i < ARRAY_SIZE(iscsi_ipaddress_state_names); i++) {
if (iscsi_ipaddress_state_names[i].value == port_state) {
state = iscsi_ipaddress_state_names[i].name;
break;
}
}
return state;
}
EXPORT_SYMBOL_GPL(iscsi_get_ipaddress_state_name);
/* convert iscsi_router_state values to ascii string name */
static const struct {
enum iscsi_router_state value;
char *name;
} iscsi_router_state_names[] = {
{ISCSI_ROUTER_STATE_UNKNOWN, "Unknown" },
{ISCSI_ROUTER_STATE_ADVERTISED, "Advertised" },
{ISCSI_ROUTER_STATE_MANUAL, "Manual" },
{ISCSI_ROUTER_STATE_STALE, "Stale" },
};
char *iscsi_get_router_state_name(enum iscsi_router_state router_state)
{
int i;
char *state = NULL;
for (i = 0; i < ARRAY_SIZE(iscsi_router_state_names); i++) {
if (iscsi_router_state_names[i].value == router_state) {
state = iscsi_router_state_names[i].name;
break;
}
}
return state;
}
EXPORT_SYMBOL_GPL(iscsi_get_router_state_name);
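A hedged sketch of how a low-level driver might use these exported helpers when producing the textual value for the new state attributes; the buffer and length handling here is illustrative and not taken from this patch.

#include <scsi/scsi_transport_iscsi.h>
#include <scsi/iscsi_if.h>

/* Hypothetical: translate firmware state codes into the sysfs strings. */
static int example_fill_state_strings(char *buf, size_t len,
				      enum iscsi_ipaddress_state addr_state,
				      enum iscsi_router_state rtr_state)
{
	const char *addr = iscsi_get_ipaddress_state_name(addr_state);
	const char *rtr  = iscsi_get_router_state_name(rtr_state);

	/* The helpers return NULL for values they do not know about. */
	return snprintf(buf, len, "addr: %s, router: %s\n",
			addr ? addr : "Unknown", rtr ? rtr : "Unknown");
}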
struct iscsi_iface *
iscsi_create_iface(struct Scsi_Host *shost, struct iscsi_transport *transport,
uint32_t iface_type, uint32_t iface_num, int dd_size)
@ -3081,6 +3415,73 @@ exit_logout_sid:
return err;
}
static int
iscsi_get_host_stats(struct iscsi_transport *transport, struct nlmsghdr *nlh)
{
struct iscsi_uevent *ev = nlmsg_data(nlh);
struct Scsi_Host *shost = NULL;
struct iscsi_internal *priv;
struct sk_buff *skbhost_stats;
struct nlmsghdr *nlhhost_stats;
struct iscsi_uevent *evhost_stats;
int host_stats_size = 0;
int len, err = 0;
char *buf;
if (!transport->get_host_stats)
return -EINVAL;
priv = iscsi_if_transport_lookup(transport);
if (!priv)
return -EINVAL;
host_stats_size = sizeof(struct iscsi_offload_host_stats);
len = nlmsg_total_size(sizeof(*ev) + host_stats_size);
shost = scsi_host_lookup(ev->u.get_host_stats.host_no);
if (!shost) {
pr_err("%s: failed. Cound not find host no %u\n",
__func__, ev->u.get_host_stats.host_no);
return -ENODEV;
}
do {
int actual_size;
skbhost_stats = alloc_skb(len, GFP_KERNEL);
if (!skbhost_stats) {
pr_err("cannot deliver host stats: OOM\n");
err = -ENOMEM;
goto exit_host_stats;
}
nlhhost_stats = __nlmsg_put(skbhost_stats, 0, 0, 0,
(len - sizeof(*nlhhost_stats)), 0);
evhost_stats = nlmsg_data(nlhhost_stats);
memset(evhost_stats, 0, sizeof(*evhost_stats));
evhost_stats->transport_handle = iscsi_handle(transport);
evhost_stats->type = nlh->nlmsg_type;
evhost_stats->u.get_host_stats.host_no =
ev->u.get_host_stats.host_no;
buf = (char *)((char *)evhost_stats + sizeof(*evhost_stats));
memset(buf, 0, host_stats_size);
err = transport->get_host_stats(shost, buf, host_stats_size);
actual_size = nlmsg_total_size(sizeof(*ev) + host_stats_size);
skb_trim(skbhost_stats, NLMSG_ALIGN(actual_size));
nlhhost_stats->nlmsg_len = actual_size;
err = iscsi_multicast_skb(skbhost_stats, ISCSI_NL_GRP_ISCSID,
GFP_KERNEL);
} while (err < 0 && err != -ECONNREFUSED);
exit_host_stats:
scsi_host_put(shost);
return err;
}
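The ->get_host_stats() callback this handler drives is expected to fill the caller-supplied buffer with a struct iscsi_offload_host_stats (defined further down in iscsi_if.h). A hypothetical LLD-side sketch:

#include <scsi/iscsi_if.h>
#include <scsi/scsi_host.h>

/* Hypothetical adapter callback; real drivers copy counters from firmware. */
static int example_get_host_stats(struct Scsi_Host *shost, char *buf, int len)
{
	struct iscsi_offload_host_stats *stats =
			(struct iscsi_offload_host_stats *)buf;

	if (len < sizeof(*stats))
		return -ENOSPC;

	stats->mactx_frames = 0;	/* fill from hardware counters */
	stats->macrx_frames = 0;
	stats->iptx_packets = 0;
	stats->iprx_packets = 0;
	return 0;
}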
static int
iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
{
@ -3260,6 +3661,9 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
err = iscsi_set_chap(transport, ev,
nlmsg_attrlen(nlh, sizeof(*ev)));
break;
case ISCSI_UEVENT_GET_HOST_STATS:
err = iscsi_get_host_stats(transport, nlh);
break;
default:
err = -ENOSYS;
break;
@ -3368,6 +3772,7 @@ iscsi_conn_attr(ipv6_flow_label, ISCSI_PARAM_IPV6_FLOW_LABEL);
iscsi_conn_attr(is_fw_assigned_ipv6, ISCSI_PARAM_IS_FW_ASSIGNED_IPV6);
iscsi_conn_attr(tcp_xmit_wsf, ISCSI_PARAM_TCP_XMIT_WSF);
iscsi_conn_attr(tcp_recv_wsf, ISCSI_PARAM_TCP_RECV_WSF);
iscsi_conn_attr(local_ipaddr, ISCSI_PARAM_LOCAL_IPADDR);
#define iscsi_conn_ep_attr_show(param) \
@ -3437,6 +3842,7 @@ static struct attribute *iscsi_conn_attrs[] = {
&dev_attr_conn_is_fw_assigned_ipv6.attr,
&dev_attr_conn_tcp_xmit_wsf.attr,
&dev_attr_conn_tcp_recv_wsf.attr,
&dev_attr_conn_local_ipaddr.attr,
NULL,
};
@ -3506,6 +3912,8 @@ static umode_t iscsi_conn_attr_is_visible(struct kobject *kobj,
param = ISCSI_PARAM_TCP_XMIT_WSF;
else if (attr == &dev_attr_conn_tcp_recv_wsf.attr)
param = ISCSI_PARAM_TCP_RECV_WSF;
else if (attr == &dev_attr_conn_local_ipaddr.attr)
param = ISCSI_PARAM_LOCAL_IPADDR;
else {
WARN_ONCE(1, "Invalid conn attr");
return 0;

View file

@ -110,7 +110,7 @@ static int sd_suspend_runtime(struct device *);
static int sd_resume(struct device *);
static void sd_rescan(struct device *);
static int sd_done(struct scsi_cmnd *);
static int sd_eh_action(struct scsi_cmnd *, unsigned char *, int, int);
static int sd_eh_action(struct scsi_cmnd *, int);
static void sd_read_capacity(struct scsi_disk *sdkp, unsigned char *buffer);
static void scsi_disk_release(struct device *cdev);
static void sd_print_sense_hdr(struct scsi_disk *, struct scsi_sense_hdr *);
@ -1551,23 +1551,23 @@ static const struct block_device_operations sd_fops = {
/**
* sd_eh_action - error handling callback
* @scmd: sd-issued command that has failed
* @eh_cmnd: The command that was sent during error handling
* @eh_cmnd_len: Length of eh_cmnd in bytes
* @eh_disp: The recovery disposition suggested by the midlayer
*
* This function is called by the SCSI midlayer upon completion of
* an error handling command (TEST UNIT READY, START STOP UNIT,
* etc.) The command sent to the device by the error handler is
* stored in eh_cmnd. The result of sending the eh command is
* passed in eh_disp.
* This function is called by the SCSI midlayer upon completion of an
* error test command (currently TEST UNIT READY). The result of sending
* the eh command is passed in eh_disp. We're looking for devices that
fail medium access commands but are OK with non-access commands like
TEST UNIT READY (and so would wrongly appear to have recovered)
**/
static int sd_eh_action(struct scsi_cmnd *scmd, unsigned char *eh_cmnd,
int eh_cmnd_len, int eh_disp)
static int sd_eh_action(struct scsi_cmnd *scmd, int eh_disp)
{
struct scsi_disk *sdkp = scsi_disk(scmd->request->rq_disk);
if (!scsi_device_online(scmd->device) ||
!scsi_medium_access_command(scmd))
!scsi_medium_access_command(scmd) ||
host_byte(scmd->result) != DID_TIME_OUT ||
eh_disp != SUCCESS)
return eh_disp;
/*
@ -1577,9 +1577,7 @@ static int sd_eh_action(struct scsi_cmnd *scmd, unsigned char *eh_cmnd,
* process of recovering or has it suffered an internal failure
* that prevents access to the storage medium.
*/
if (host_byte(scmd->result) == DID_TIME_OUT && eh_disp == SUCCESS &&
eh_cmnd_len && eh_cmnd[0] == TEST_UNIT_READY)
sdkp->medium_access_timed_out++;
sdkp->medium_access_timed_out++;
/*
* If the device keeps failing read/write commands but TEST UNIT
@ -1628,7 +1626,7 @@ static unsigned int sd_completed_bytes(struct scsi_cmnd *scmd)
end_lba <<= 1;
} else {
/* be careful ... don't want any overflows */
u64 factor = scmd->device->sector_size / 512;
unsigned int factor = scmd->device->sector_size / 512;
do_div(start_lba, factor);
do_div(end_lba, factor);
}
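For context on the simplified sd_eh_action() signature above: the midlayer's new scsi_eh_action() helper calls it through struct scsi_driver, so the upper-level driver only has to publish the callback. A sketch of that wiring, abbreviated and not the full sd template:

#include <scsi/scsi_driver.h>

static int sd_eh_action(struct scsi_cmnd *scmd, int eh_disp);	/* as above */

/* Sketch only: the real sd_template also sets .owner, .rescan, .done, etc. */
static struct scsi_driver example_sd_template = {
	.gendrv = {
		.name	= "sd-example",		/* hypothetical name */
	},
	.eh_action	= sd_eh_action,
};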

View file

@ -161,14 +161,10 @@ static inline struct scsi_cd *scsi_cd_get(struct gendisk *disk)
goto out;
cd = scsi_cd(disk);
kref_get(&cd->kref);
if (scsi_device_get(cd->device))
goto out_put;
if (!scsi_autopm_get_device(cd->device))
goto out;
out_put:
kref_put(&cd->kref, sr_kref_release);
cd = NULL;
if (scsi_device_get(cd->device)) {
kref_put(&cd->kref, sr_kref_release);
cd = NULL;
}
out:
mutex_unlock(&sr_ref_mutex);
return cd;
@ -180,7 +176,6 @@ static void scsi_cd_put(struct scsi_cd *cd)
mutex_lock(&sr_ref_mutex);
kref_put(&cd->kref, sr_kref_release);
scsi_autopm_put_device(sdev);
scsi_device_put(sdev);
mutex_unlock(&sr_ref_mutex);
}
@ -558,8 +553,6 @@ static int sr_block_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
void __user *argp = (void __user *)arg;
int ret;
scsi_autopm_get_device(cd->device);
mutex_lock(&sr_mutex);
/*
@ -591,7 +584,6 @@ static int sr_block_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
out:
mutex_unlock(&sr_mutex);
scsi_autopm_put_device(cd->device);
return ret;
}
@ -599,17 +591,11 @@ static unsigned int sr_block_check_events(struct gendisk *disk,
unsigned int clearing)
{
struct scsi_cd *cd = scsi_cd(disk);
unsigned int ret;
if (atomic_read(&cd->device->disk_events_disable_depth) == 0) {
scsi_autopm_get_device(cd->device);
ret = cdrom_check_events(&cd->cdi, clearing);
scsi_autopm_put_device(cd->device);
} else {
ret = 0;
}
if (atomic_read(&cd->device->disk_events_disable_depth))
return 0;
return ret;
return cdrom_check_events(&cd->cdi, clearing);
}
static int sr_block_revalidate_disk(struct gendisk *disk)
@ -617,8 +603,6 @@ static int sr_block_revalidate_disk(struct gendisk *disk)
struct scsi_cd *cd = scsi_cd(disk);
struct scsi_sense_hdr sshdr;
scsi_autopm_get_device(cd->device);
/* if the unit is not ready, nothing more to do */
if (scsi_test_unit_ready(cd->device, SR_TIMEOUT, MAX_RETRIES, &sshdr))
goto out;
@ -626,7 +610,6 @@ static int sr_block_revalidate_disk(struct gendisk *disk)
sr_cd_check(&cd->cdi);
get_sectorsize(cd);
out:
scsi_autopm_put_device(cd->device);
return 0;
}
@ -747,6 +730,12 @@ static int sr_probe(struct device *dev)
if (register_cdrom(&cd->cdi))
goto fail_put;
/*
* Initialize block layer runtime PM support before the
* periodic event checking request gets started in add_disk.
*/
blk_pm_runtime_init(sdev->request_queue, dev);
dev_set_drvdata(dev, cd);
disk->flags |= GENHD_FL_REMOVABLE;
add_disk(disk);

View file

@ -3719,7 +3719,7 @@ static struct st_buffer *new_tape_buffer(int need_dma, int max_sg)
static int enlarge_buffer(struct st_buffer * STbuffer, int new_size, int need_dma)
{
int segs, nbr, max_segs, b_size, order, got;
int segs, max_segs, b_size, order, got;
gfp_t priority;
if (new_size <= STbuffer->buffer_size)
@ -3729,9 +3729,6 @@ static int enlarge_buffer(struct st_buffer * STbuffer, int new_size, int need_dm
normalize_buffer(STbuffer); /* Avoid extra segment */
max_segs = STbuffer->use_sg;
nbr = max_segs - STbuffer->frp_segs;
if (nbr <= 0)
return 0;
priority = GFP_KERNEL | __GFP_NOWARN;
if (need_dma)

View file

@ -70,6 +70,7 @@ enum iscsi_uevent_e {
ISCSI_UEVENT_LOGOUT_FLASHNODE = UEVENT_BASE + 29,
ISCSI_UEVENT_LOGOUT_FLASHNODE_SID = UEVENT_BASE + 30,
ISCSI_UEVENT_SET_CHAP = UEVENT_BASE + 31,
ISCSI_UEVENT_GET_HOST_STATS = UEVENT_BASE + 32,
/* up events */
ISCSI_KEVENT_RECV_PDU = KEVENT_BASE + 1,
@ -242,6 +243,9 @@ struct iscsi_uevent {
uint32_t host_no;
uint32_t sid;
} logout_flashnode_sid;
struct msg_get_host_stats {
uint32_t host_no;
} get_host_stats;
} u;
union {
/* messages k -> u */
@ -311,6 +315,7 @@ enum iscsi_param_type {
ISCSI_NET_PARAM, /* iscsi_net_param */
ISCSI_FLASHNODE_PARAM, /* iscsi_flashnode_param */
ISCSI_CHAP_PARAM, /* iscsi_chap_param */
ISCSI_IFACE_PARAM, /* iscsi_iface_param */
};
/* structure for minimalist usecase */
@ -383,28 +388,106 @@ struct iscsi_path {
#define ISCSI_VLAN_DISABLE 0x01
#define ISCSI_VLAN_ENABLE 0x02
/* iscsi generic enable/disable setting for various features */
#define ISCSI_NET_PARAM_DISABLE 0x01
#define ISCSI_NET_PARAM_ENABLE 0x02
/* iSCSI network params */
enum iscsi_net_param {
ISCSI_NET_PARAM_IPV4_ADDR = 1,
ISCSI_NET_PARAM_IPV4_SUBNET = 2,
ISCSI_NET_PARAM_IPV4_GW = 3,
ISCSI_NET_PARAM_IPV4_BOOTPROTO = 4,
ISCSI_NET_PARAM_MAC = 5,
ISCSI_NET_PARAM_IPV6_LINKLOCAL = 6,
ISCSI_NET_PARAM_IPV6_ADDR = 7,
ISCSI_NET_PARAM_IPV6_ROUTER = 8,
ISCSI_NET_PARAM_IPV6_ADDR_AUTOCFG = 9,
ISCSI_NET_PARAM_IPV6_LINKLOCAL_AUTOCFG = 10,
ISCSI_NET_PARAM_IPV6_ROUTER_AUTOCFG = 11,
ISCSI_NET_PARAM_IFACE_ENABLE = 12,
ISCSI_NET_PARAM_VLAN_ID = 13,
ISCSI_NET_PARAM_VLAN_PRIORITY = 14,
ISCSI_NET_PARAM_VLAN_ENABLED = 15,
ISCSI_NET_PARAM_VLAN_TAG = 16,
ISCSI_NET_PARAM_IFACE_TYPE = 17,
ISCSI_NET_PARAM_IFACE_NAME = 18,
ISCSI_NET_PARAM_MTU = 19,
ISCSI_NET_PARAM_PORT = 20,
ISCSI_NET_PARAM_IPV4_SUBNET,
ISCSI_NET_PARAM_IPV4_GW,
ISCSI_NET_PARAM_IPV4_BOOTPROTO,
ISCSI_NET_PARAM_MAC,
ISCSI_NET_PARAM_IPV6_LINKLOCAL,
ISCSI_NET_PARAM_IPV6_ADDR,
ISCSI_NET_PARAM_IPV6_ROUTER,
ISCSI_NET_PARAM_IPV6_ADDR_AUTOCFG,
ISCSI_NET_PARAM_IPV6_LINKLOCAL_AUTOCFG,
ISCSI_NET_PARAM_IPV6_ROUTER_AUTOCFG,
ISCSI_NET_PARAM_IFACE_ENABLE,
ISCSI_NET_PARAM_VLAN_ID,
ISCSI_NET_PARAM_VLAN_PRIORITY,
ISCSI_NET_PARAM_VLAN_ENABLED,
ISCSI_NET_PARAM_VLAN_TAG,
ISCSI_NET_PARAM_IFACE_TYPE,
ISCSI_NET_PARAM_IFACE_NAME,
ISCSI_NET_PARAM_MTU,
ISCSI_NET_PARAM_PORT,
ISCSI_NET_PARAM_IPADDR_STATE,
ISCSI_NET_PARAM_IPV6_LINKLOCAL_STATE,
ISCSI_NET_PARAM_IPV6_ROUTER_STATE,
ISCSI_NET_PARAM_DELAYED_ACK_EN,
ISCSI_NET_PARAM_TCP_NAGLE_DISABLE,
ISCSI_NET_PARAM_TCP_WSF_DISABLE,
ISCSI_NET_PARAM_TCP_WSF,
ISCSI_NET_PARAM_TCP_TIMER_SCALE,
ISCSI_NET_PARAM_TCP_TIMESTAMP_EN,
ISCSI_NET_PARAM_CACHE_ID,
ISCSI_NET_PARAM_IPV4_DHCP_DNS_ADDR_EN,
ISCSI_NET_PARAM_IPV4_DHCP_SLP_DA_EN,
ISCSI_NET_PARAM_IPV4_TOS_EN,
ISCSI_NET_PARAM_IPV4_TOS,
ISCSI_NET_PARAM_IPV4_GRAT_ARP_EN,
ISCSI_NET_PARAM_IPV4_DHCP_ALT_CLIENT_ID_EN,
ISCSI_NET_PARAM_IPV4_DHCP_ALT_CLIENT_ID,
ISCSI_NET_PARAM_IPV4_DHCP_REQ_VENDOR_ID_EN,
ISCSI_NET_PARAM_IPV4_DHCP_USE_VENDOR_ID_EN,
ISCSI_NET_PARAM_IPV4_DHCP_VENDOR_ID,
ISCSI_NET_PARAM_IPV4_DHCP_LEARN_IQN_EN,
ISCSI_NET_PARAM_IPV4_FRAGMENT_DISABLE,
ISCSI_NET_PARAM_IPV4_IN_FORWARD_EN,
ISCSI_NET_PARAM_IPV4_TTL,
ISCSI_NET_PARAM_IPV6_GRAT_NEIGHBOR_ADV_EN,
ISCSI_NET_PARAM_IPV6_MLD_EN,
ISCSI_NET_PARAM_IPV6_FLOW_LABEL,
ISCSI_NET_PARAM_IPV6_TRAFFIC_CLASS,
ISCSI_NET_PARAM_IPV6_HOP_LIMIT,
ISCSI_NET_PARAM_IPV6_ND_REACHABLE_TMO,
ISCSI_NET_PARAM_IPV6_ND_REXMIT_TIME,
ISCSI_NET_PARAM_IPV6_ND_STALE_TMO,
ISCSI_NET_PARAM_IPV6_DUP_ADDR_DETECT_CNT,
ISCSI_NET_PARAM_IPV6_RTR_ADV_LINK_MTU,
ISCSI_NET_PARAM_REDIRECT_EN,
};
enum iscsi_ipaddress_state {
ISCSI_IPDDRESS_STATE_UNCONFIGURED,
ISCSI_IPDDRESS_STATE_ACQUIRING,
ISCSI_IPDDRESS_STATE_TENTATIVE,
ISCSI_IPDDRESS_STATE_VALID,
ISCSI_IPDDRESS_STATE_DISABLING,
ISCSI_IPDDRESS_STATE_INVALID,
ISCSI_IPDDRESS_STATE_DEPRECATED,
};
enum iscsi_router_state {
ISCSI_ROUTER_STATE_UNKNOWN,
ISCSI_ROUTER_STATE_ADVERTISED,
ISCSI_ROUTER_STATE_MANUAL,
ISCSI_ROUTER_STATE_STALE,
};
/* iSCSI specific settings params for iface */
enum iscsi_iface_param {
ISCSI_IFACE_PARAM_DEF_TASKMGMT_TMO,
ISCSI_IFACE_PARAM_HDRDGST_EN,
ISCSI_IFACE_PARAM_DATADGST_EN,
ISCSI_IFACE_PARAM_IMM_DATA_EN,
ISCSI_IFACE_PARAM_INITIAL_R2T_EN,
ISCSI_IFACE_PARAM_DATASEQ_INORDER_EN,
ISCSI_IFACE_PARAM_PDU_INORDER_EN,
ISCSI_IFACE_PARAM_ERL,
ISCSI_IFACE_PARAM_MAX_RECV_DLENGTH,
ISCSI_IFACE_PARAM_FIRST_BURST,
ISCSI_IFACE_PARAM_MAX_R2T,
ISCSI_IFACE_PARAM_MAX_BURST,
ISCSI_IFACE_PARAM_CHAP_AUTH_EN,
ISCSI_IFACE_PARAM_BIDI_CHAP_EN,
ISCSI_IFACE_PARAM_DISCOVERY_AUTH_OPTIONAL,
ISCSI_IFACE_PARAM_DISCOVERY_LOGOUT_EN,
ISCSI_IFACE_PARAM_STRICT_LOGIN_COMP_EN,
ISCSI_IFACE_PARAM_INITIATOR_NAME,
};
enum iscsi_conn_state {
@@ -535,6 +618,7 @@ enum iscsi_param {
ISCSI_PARAM_DISCOVERY_PARENT_IDX,
ISCSI_PARAM_DISCOVERY_PARENT_TYPE,
ISCSI_PARAM_LOCAL_IPADDR,
/* must always be last */
ISCSI_PARAM_MAX,
};
@@ -766,4 +850,112 @@ struct iscsi_chap_rec {
uint8_t password_length;
};
#define ISCSI_HOST_STATS_CUSTOM_MAX 32
#define ISCSI_HOST_STATS_CUSTOM_DESC_MAX 64
struct iscsi_host_stats_custom {
char desc[ISCSI_HOST_STATS_CUSTOM_DESC_MAX];
uint64_t value;
};
/* struct iscsi_offload_host_stats: Host statistics,
* Include statistics for MAC, IP, TCP & iSCSI.
*/
struct iscsi_offload_host_stats {
/* MAC */
uint64_t mactx_frames;
uint64_t mactx_bytes;
uint64_t mactx_multicast_frames;
uint64_t mactx_broadcast_frames;
uint64_t mactx_pause_frames;
uint64_t mactx_control_frames;
uint64_t mactx_deferral;
uint64_t mactx_excess_deferral;
uint64_t mactx_late_collision;
uint64_t mactx_abort;
uint64_t mactx_single_collision;
uint64_t mactx_multiple_collision;
uint64_t mactx_collision;
uint64_t mactx_frames_dropped;
uint64_t mactx_jumbo_frames;
uint64_t macrx_frames;
uint64_t macrx_bytes;
uint64_t macrx_unknown_control_frames;
uint64_t macrx_pause_frames;
uint64_t macrx_control_frames;
uint64_t macrx_dribble;
uint64_t macrx_frame_length_error;
uint64_t macrx_jabber;
uint64_t macrx_carrier_sense_error;
uint64_t macrx_frame_discarded;
uint64_t macrx_frames_dropped;
uint64_t mac_crc_error;
uint64_t mac_encoding_error;
uint64_t macrx_length_error_large;
uint64_t macrx_length_error_small;
uint64_t macrx_multicast_frames;
uint64_t macrx_broadcast_frames;
/* IP */
uint64_t iptx_packets;
uint64_t iptx_bytes;
uint64_t iptx_fragments;
uint64_t iprx_packets;
uint64_t iprx_bytes;
uint64_t iprx_fragments;
uint64_t ip_datagram_reassembly;
uint64_t ip_invalid_address_error;
uint64_t ip_error_packets;
uint64_t ip_fragrx_overlap;
uint64_t ip_fragrx_outoforder;
uint64_t ip_datagram_reassembly_timeout;
uint64_t ipv6tx_packets;
uint64_t ipv6tx_bytes;
uint64_t ipv6tx_fragments;
uint64_t ipv6rx_packets;
uint64_t ipv6rx_bytes;
uint64_t ipv6rx_fragments;
uint64_t ipv6_datagram_reassembly;
uint64_t ipv6_invalid_address_error;
uint64_t ipv6_error_packets;
uint64_t ipv6_fragrx_overlap;
uint64_t ipv6_fragrx_outoforder;
uint64_t ipv6_datagram_reassembly_timeout;
/* TCP */
uint64_t tcptx_segments;
uint64_t tcptx_bytes;
uint64_t tcprx_segments;
uint64_t tcprx_byte;
uint64_t tcp_duplicate_ack_retx;
uint64_t tcp_retx_timer_expired;
uint64_t tcprx_duplicate_ack;
uint64_t tcprx_pure_ackr;
uint64_t tcptx_delayed_ack;
uint64_t tcptx_pure_ack;
uint64_t tcprx_segment_error;
uint64_t tcprx_segment_outoforder;
uint64_t tcprx_window_probe;
uint64_t tcprx_window_update;
uint64_t tcptx_window_probe_persist;
/* ECC */
uint64_t ecc_error_correction;
/* iSCSI */
uint64_t iscsi_pdu_tx;
uint64_t iscsi_data_bytes_tx;
uint64_t iscsi_pdu_rx;
uint64_t iscsi_data_bytes_rx;
uint64_t iscsi_io_completed;
uint64_t iscsi_unexpected_io_rx;
uint64_t iscsi_format_error;
uint64_t iscsi_hdr_digest_error;
uint64_t iscsi_data_digest_error;
uint64_t iscsi_sequence_error;
/*
* iSCSI Custom Host Statistics support, i.e. Transport could
* extend existing host statistics with its own specific statistics
* up to ISCSI_HOST_STATS_CUSTOM_MAX
*/
uint32_t custom_length;
struct iscsi_host_stats_custom custom[0]
__aligned(sizeof(uint64_t));
};
#endif
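
Because iscsi_offload_host_stats ends in a flexible custom[] array, the buffer has to be sized for custom_length extra entries, capped at ISCSI_HOST_STATS_CUSTOM_MAX. A minimal sketch of a transport filling a few of the fixed counters plus one vendor-specific entry; the counter name and zeroed values are placeholders, not any driver's real statistics:

#include <linux/string.h>
#include <scsi/iscsi_if.h>

/* Sketch: populate the new host-statistics structure, including one
 * optional vendor-specific "custom" counter. */
static void fill_host_stats(struct iscsi_offload_host_stats *stats)
{
        stats->mactx_frames = 0;        /* read from adapter firmware */
        stats->macrx_frames = 0;
        stats->iscsi_pdu_tx = 0;
        stats->iscsi_pdu_rx = 0;

        stats->custom_length = 1;       /* <= ISCSI_HOST_STATS_CUSTOM_MAX */
        strlcpy(stats->custom[0].desc, "fw_dropped_pdus",
                ISCSI_HOST_STATS_CUSTOM_DESC_MAX);
        stats->custom[0].value = 0;
}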


@@ -231,6 +231,7 @@ struct iscsi_conn {
uint8_t ipv6_traffic_class;
uint8_t ipv6_flow_label;
uint8_t is_fw_assigned_ipv6;
char *local_ipaddr;
/* MIB-statistics */
uint64_t txdata_octets;
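
The new local_ipaddr string gives the connection a place to record the initiator-side address that the ISCSI_PARAM_LOCAL_IPADDR attribute added earlier reports. A hedged sketch of the read-out side; show_local_ipaddr() is an illustrative helper, not libiscsi's own code:

#include <linux/errno.h>
#include <linux/kernel.h>
#include <scsi/iscsi_if.h>

/* Sketch: format the connection's local IP address for the new
 * ISCSI_PARAM_LOCAL_IPADDR attribute. */
static int show_local_ipaddr(enum iscsi_param param,
                             const char *local_ipaddr, char *buf)
{
        switch (param) {
        case ISCSI_PARAM_LOCAL_IPADDR:
                return sprintf(buf, "%s\n", local_ipaddr ? local_ipaddr : "");
        default:
                return -ENOSYS;
        }
}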


@@ -55,6 +55,7 @@ struct scsi_cmnd {
struct scsi_device *device;
struct list_head list; /* scsi_cmnd participates in queue lists */
struct list_head eh_entry; /* entry for the host eh_cmd_q */
struct delayed_work abort_work;
int eh_eflags; /* Used by error handler */
/*
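
The abort_work added here is meant to be queued on a per-host workqueue (the tmf_work_q added to Scsi_Host further below) so a command abort can run asynchronously rather than waiting for the error-handler thread. A minimal sketch of that pattern; my_abort_fn() and the HZ/100 delay are illustrative, not the midlayer's exact code:

#include <linux/jiffies.h>
#include <linux/workqueue.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

/* Sketch: run an abort for a timed-out command from workqueue context. */
static void my_abort_fn(struct work_struct *work)
{
        struct scsi_cmnd *scmd =
                container_of(work, struct scsi_cmnd, abort_work.work);

        /* issue the LLD/transport abort for scmd here */
        (void)scmd;
}

static void schedule_async_abort(struct scsi_cmnd *scmd)
{
        INIT_DELAYED_WORK(&scmd->abort_work, my_abort_fn);
        queue_delayed_work(scmd->device->host->tmf_work_q,
                           &scmd->abort_work, HZ / 100);
}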


@@ -16,7 +16,7 @@ struct scsi_driver {
void (*rescan)(struct device *);
int (*done)(struct scsi_cmnd *);
int (*eh_action)(struct scsi_cmnd *, unsigned char *, int, int);
int (*eh_action)(struct scsi_cmnd *, int);
};
#define to_scsi_driver(drv) \
container_of((drv), struct scsi_driver, gendrv)
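
The eh_action hook now takes only the command and the proposed error-handler disposition; the old eh_cmnd buffer and length arguments are gone. A sketch of the new shape as an upper-level driver would implement it; my_eh_action() is illustrative, not sd's or sr's actual handler:

#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_driver.h>

/* Sketch: inspect the failed command and keep or override the midlayer's
 * proposed disposition (SUCCESS, NEEDS_RETRY, ...). */
static int my_eh_action(struct scsi_cmnd *scmd, int eh_disp)
{
        return eh_disp;
}

static struct scsi_driver my_driver_template = {
        .eh_action = my_eh_action,
};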


@@ -478,6 +478,11 @@ struct scsi_host_template {
/* True if the controller does not support WRITE SAME */
unsigned no_write_same:1;
/*
* True if asynchronous aborts are not supported
*/
unsigned no_async_abort:1;
/*
* Countdown for host blocking with no commands outstanding.
*/
@@ -689,6 +694,11 @@ struct Scsi_Host {
char work_q_name[20];
struct workqueue_struct *work_q;
/*
* Task management function work queue
*/
struct workqueue_struct *tmf_work_q;
/*
* Host has rejected a command because it was busy.
*/
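
The new tmf_work_q gives each host a context for task-management work such as the asynchronous aborts queued through scsi_cmnd.abort_work. One plausible setup is a single-threaded unbound workqueue allocated when the host is added; the name format and flags here are illustrative:

#include <linux/errno.h>
#include <linux/workqueue.h>
#include <scsi/scsi_host.h>

/* Sketch: allocate the per-host TMF workqueue at host-add time. */
static int my_alloc_tmf_queue(struct Scsi_Host *shost)
{
        shost->tmf_work_q = alloc_workqueue("scsi_tmf_%d", WQ_UNBOUND, 1,
                                            shost->host_no);
        return shost->tmf_work_q ? 0 : -ENOMEM;
}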


@@ -166,6 +166,7 @@ struct iscsi_transport {
int (*logout_flashnode) (struct iscsi_bus_flash_session *fnode_sess,
struct iscsi_bus_flash_conn *fnode_conn);
int (*logout_flashnode_sid) (struct iscsi_cls_session *cls_sess);
int (*get_host_stats) (struct Scsi_Host *shost, char *buf, int len);
};
/*
@@ -478,4 +479,7 @@ iscsi_find_flashnode_sess(struct Scsi_Host *shost, void *data,
extern struct device *
iscsi_find_flashnode_conn(struct iscsi_bus_flash_session *fnode_sess);
extern char *
iscsi_get_ipaddress_state_name(enum iscsi_ipaddress_state port_state);
extern char *iscsi_get_router_state_name(enum iscsi_router_state router_state);
#endif
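
The get_host_stats hook hands the transport a buffer supplied by the iSCSI netlink layer to fill with a struct iscsi_offload_host_stats, and the two state-name helpers turn the new address and router state enums into printable strings. A hedged sketch of a transport-side implementation; the helper name and zeroed counters are illustrative:

#include <linux/errno.h>
#include <linux/string.h>
#include <scsi/iscsi_if.h>
#include <scsi/scsi_host.h>

/* Sketch: fill the caller-supplied buffer with host statistics. */
static int my_get_host_stats(struct Scsi_Host *shost, char *buf, int len)
{
        struct iscsi_offload_host_stats *stats =
                (struct iscsi_offload_host_stats *)buf;

        if (len < (int)sizeof(*stats))
                return -ENOSPC;

        memset(stats, 0, sizeof(*stats));
        stats->iscsi_pdu_tx = 0;        /* read from adapter firmware */
        stats->iscsi_pdu_rx = 0;
        stats->custom_length = 0;       /* no vendor-specific counters */
        return 0;
}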