SCSI misc on 20200129

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
 "This series is slightly unusual because it includes Arnd's compat
  ioctl tree here:

    1c46a2cf2d Merge tag 'block-ioctl-cleanup-5.6' into 5.6/scsi-queue

  Excluding Arnd's changes, this is mostly an update of the usual
  drivers: megaraid_sas, mpt3sas, qla2xxx, ufs, lpfc, hisi_sas. There
  are a couple of core and base updates around error propagation and
  atomicity in the attribute container base we use for the SCSI
  transport classes. The rest is minor changes and updates"

Signed-off-by: James E.J. Bottomley <jejb@linux.ibm.com>

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (149 commits)
  scsi: hisi_sas: Rename hisi_sas_cq.pci_irq_mask
  scsi: hisi_sas: Add prints for v3 hw interrupt converge and automatic affinity
  scsi: hisi_sas: Modify the file permissions of trigger_dump to write only
  scsi: hisi_sas: Replace magic number when handle channel interrupt
  scsi: hisi_sas: replace spin_lock_irqsave/spin_unlock_restore with spin_lock/spin_unlock
  scsi: hisi_sas: use threaded irq to process CQ interrupts
  scsi: ufs: Use UFS device indicated maximum LU number
  scsi: ufs: Add max_lu_supported in struct ufs_dev_info
  scsi: ufs: Delete is_init_prefetch from struct ufs_hba
  scsi: ufs: Inline two functions into their callers
  scsi: ufs: Move ufshcd_get_max_pwr_mode() to ufshcd_device_params_init()
  scsi: ufs: Split ufshcd_probe_hba() based on its called flow
  scsi: ufs: Delete struct ufs_dev_desc
  scsi: ufs: Fix ufshcd_probe_hba() reture value in case ufshcd_scsi_add_wlus() fails
  scsi: ufs-mediatek: enable low-power mode for hibern8 state
  scsi: ufs: export some functions for vendor usage
  scsi: ufs-mediatek: add dbg_register_dump implementation
  scsi: qla2xxx: Fix a NULL pointer dereference in an error path
  scsi: qla1280: Make checking for 64bit support consistent
  scsi: megaraid_sas: Update driver version to 07.713.01.00-rc1
  ...
Commit 33c84e89ab
@@ -40,6 +40,7 @@ Core utilities
   gcc-plugins
   symbol-namespaces
   padata
   ioctl

Interfaces for kernel debugging
@@ -0,0 +1,253 @@
======================
ioctl based interfaces
======================

ioctl() is the most common way for applications to interface
with device drivers. It is flexible and easily extended by adding new
commands and can be passed through character devices, block devices as
well as sockets and other special file descriptors.

However, it is also very easy to get ioctl command definitions wrong,
and hard to fix them later without breaking existing applications,
so this documentation tries to help developers get it right.

Command number definitions
==========================

The command number, or request number, is the second argument passed to
the ioctl system call. While this can be any 32-bit number that uniquely
identifies an action for a particular driver, there are a number of
conventions around defining them.

``include/uapi/asm-generic/ioctl.h`` provides four macros for defining
ioctl commands that follow modern conventions: ``_IO``, ``_IOR``,
``_IOW``, and ``_IOWR``. These should be used for all new commands,
with the correct parameters:

_IO/_IOR/_IOW/_IOWR
   The macro name specifies how the argument will be used. It may be a
   pointer to data to be passed into the kernel (_IOW), out of the kernel
   (_IOR), or both (_IOWR). _IO can indicate either commands with no
   argument or those passing an integer value instead of a pointer.
   It is recommended to only use _IO for commands without arguments,
   and use pointers for passing data.

type
   An 8-bit number, often a character literal, specific to a subsystem
   or driver, and listed in :doc:`../userspace-api/ioctl/ioctl-number`.

nr
   An 8-bit number identifying the specific command, unique for a given
   value of 'type'.

data_type
   The name of the data type pointed to by the argument. The command number
   encodes the ``sizeof(data_type)`` value in a 13-bit or 14-bit integer,
   leading to a limit of 8191 bytes for the maximum size of the argument.
   Note: do not pass ``sizeof(data_type)`` into _IOR/_IOW/_IOWR, as that
   will lead to encoding sizeof(sizeof(data_type)), i.e. sizeof(size_t).
   _IO does not have a data_type parameter.

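As an illustration only (the ``'F'`` type letter, the ``FOO_*`` names and
``struct foo_config`` below are hypothetical, not an existing UAPI), a
header following these conventions could look like::

	/* hypothetical include/uapi/linux/foo.h */
	#include <linux/ioctl.h>
	#include <linux/types.h>

	struct foo_config {
		__u64 buffer;		/* user pointer, passed as __u64 */
		__u32 flags;
		__u32 reserved;		/* explicit padding, must be zero */
	};

	/* the 'F' type letter would need an entry in ioctl-number.rst */
	#define FOO_RESET	_IO('F', 0x00)				/* no argument */
	#define FOO_GET_CONFIG	_IOR('F', 0x01, struct foo_config)	/* copied out */
	#define FOO_SET_CONFIG	_IOW('F', 0x02, struct foo_config)	/* copied in */
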
Interface versions
==================

Some subsystems use version numbers in data structures to overload
commands with different interpretations of the argument.

This is generally a bad idea, since changes to existing commands tend
to break existing applications.

A better approach is to add a new ioctl command with a new number. The
old command still needs to be implemented in the kernel for compatibility,
but this can be a wrapper around the new implementation.

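A minimal sketch of that approach (all ``foo_*`` names and numbers here are
hypothetical) keeps a single implementation and lets the old command copy
out only the original, shorter structure::

	struct foo_info  { __u32 size; };			/* original layout */
	struct foo_info2 { __u32 size; __u32 flags; };		/* extended layout */

	#define FOO_GET_INFO	_IOR('F', 0x10, struct foo_info)
	#define FOO_GET_INFO2	_IOR('F', 0x11, struct foo_info2)

	static long foo_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
	{
		struct foo_info2 info2 = { .size = 4096, .flags = 0 };

		switch (cmd) {
		case FOO_GET_INFO2:		/* the new interface */
			return copy_to_user((void __user *)arg, &info2,
					    sizeof(info2)) ? -EFAULT : 0;
		case FOO_GET_INFO:		/* old command, wraps the new data */
			return copy_to_user((void __user *)arg, &info2,
					    sizeof(struct foo_info)) ? -EFAULT : 0;
		default:
			return -ENOTTY;
		}
	}
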
Return code
===========

ioctl commands can return negative error codes as documented in errno(3);
these get turned into errno values in user space. On success, the return
code should be zero. It is also possible but not recommended to return
a positive 'long' value.

When the ioctl callback is called with an unknown command number, the
handler returns either -ENOTTY or -ENOIOCTLCMD, which also results in
-ENOTTY being returned from the system call. Some subsystems return
-ENOSYS or -EINVAL here for historic reasons, but this is wrong.

Prior to Linux 5.5, compat_ioctl handlers were required to return
-ENOIOCTLCMD in order to use the fallback conversion into native
commands. As all subsystems are now responsible for handling compat
mode themselves, this is no longer needed, but it may be important to
consider when backporting bug fixes to older kernels.

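Putting the return conventions together (the ``FOO_RESET`` command, the
``struct foo_device`` type and the ``foo_reset_hw()`` helper are again
hypothetical), a handler typically ends up looking like this::

	static long foo_char_ioctl(struct file *file, unsigned int cmd,
				   unsigned long arg)
	{
		struct foo_device *foo = file->private_data;

		switch (cmd) {
		case FOO_RESET:
			if (!capable(CAP_SYS_ADMIN))
				return -EPERM;		/* negative error code */
			foo_reset_hw(foo);
			return 0;			/* zero on success */
		default:
			return -ENOTTY;			/* not -EINVAL or -ENOSYS */
		}
	}
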
Timestamps
==========

Traditionally, timestamps and timeout values are passed as ``struct
timespec`` or ``struct timeval``, but these are problematic because of
incompatible definitions of these structures in user space after the
move to 64-bit time_t.

The ``struct __kernel_timespec`` type can be used instead to be embedded
in other data structures when separate second/nanosecond values are
desired, or passed to user space directly. This is still not ideal though,
as the structure matches neither the kernel's timespec64 nor the user
space timespec exactly. The get_timespec64() and put_timespec64() helper
functions can be used to ensure that the layout remains compatible with
user space and the padding is treated correctly.

As it is cheap to convert seconds to nanoseconds, but the opposite
requires an expensive 64-bit division, a simple __u64 nanosecond value
can be simpler and more efficient.

Timeout values and timestamps should ideally use CLOCK_MONOTONIC time,
as returned by ktime_get_ns() or ktime_get_ts64(). Unlike
CLOCK_REALTIME, this makes the timestamps immune from jumping backwards
or forwards due to leap second adjustments and clock_settime() calls.

ktime_get_real_ns() can be used for CLOCK_REALTIME timestamps that
need to be persistent across a reboot or between multiple machines.

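As a sketch of the ``__u64`` nanosecond approach (``struct foo_event`` is a
hypothetical structure, not part of any existing interface)::

	struct foo_event {
		__u64 timestamp_ns;	/* CLOCK_MONOTONIC, in nanoseconds */
		__u32 type;
		__u32 reserved;		/* explicit padding, must be zero */
	};

	static void foo_fill_event(struct foo_event *ev, u32 type)
	{
		ev->timestamp_ns = ktime_get_ns();	/* no 64-bit division needed */
		ev->type = type;
		ev->reserved = 0;
	}
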
32-bit compat mode
==================

In order to support 32-bit user space running on a 64-bit machine, each
subsystem or driver that implements an ioctl callback handler must also
implement the corresponding compat_ioctl handler.

As long as all the rules for data structures are followed, this is as
easy as setting the .compat_ioctl pointer to a helper function such as
compat_ptr_ioctl() or blkdev_compat_ptr_ioctl().

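For a character device whose commands all take pointer arguments, this is
just one extra line in the file operations (the ``foo_*`` callbacks are
hypothetical)::

	static const struct file_operations foo_fops = {
		.owner		= THIS_MODULE,
		.open		= foo_open,
		.unlocked_ioctl	= foo_ioctl,
		.compat_ioctl	= compat_ptr_ioctl,	/* every command takes a pointer */
	};
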
compat_ptr()
------------

On the s390 architecture, 31-bit user space has ambiguous representations
for data pointers, with the upper bit being ignored. When running such
a process in compat mode, the compat_ptr() helper must be used to
clear the upper bit of a compat_uptr_t and turn it into a valid 64-bit
pointer. On other architectures, this macro only performs a cast to a
``void __user *`` pointer.

In a compat_ioctl() callback, the last argument is an unsigned long,
which can be interpreted as either a pointer or a scalar depending on
the command. If it is a scalar, then compat_ptr() must not be used, to
ensure that the 64-bit kernel behaves the same way as a 32-bit kernel
for arguments with the upper bit set.

The compat_ptr_ioctl() helper can be used in place of a custom
compat_ioctl file operation for drivers that only take arguments that
are pointers to compatible data structures.

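Where a driver mixes pointer and scalar arguments, a hand-written
compat_ioctl callback applies compat_ptr() selectively (the command names
are hypothetical)::

	static long foo_compat_ioctl(struct file *file, unsigned int cmd,
				     unsigned long arg)
	{
		switch (cmd) {
		case FOO_SET_CONFIG:		/* argument is a pointer */
			return foo_ioctl(file, cmd,
					 (unsigned long)compat_ptr(arg));
		case FOO_SET_TIMEOUT:		/* argument is a scalar */
			return foo_ioctl(file, cmd, arg);
		default:
			return -ENOTTY;
		}
	}
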
Structure layout
----------------

Compatible data structures have the same layout on all architectures,
avoiding all problematic members:

* ``long`` and ``unsigned long`` are the size of a register, so
  they can be either 32-bit or 64-bit wide and cannot be used in portable
  data structures. Fixed-length replacements are ``__s32``, ``__u32``,
  ``__s64`` and ``__u64``.

* Pointers have the same problem, in addition to requiring the
  use of compat_ptr(). The best workaround is to use ``__u64``
  in place of pointers, which requires a cast to ``uintptr_t`` in user
  space, and the use of u64_to_user_ptr() in the kernel to convert
  it back into a user pointer.

* On the x86-32 (i386) architecture, the alignment of 64-bit variables
  is only 32-bit, but they are naturally aligned on most other
  architectures including x86-64. This means a structure like::

	struct foo {
		__u32 a;
		__u64 b;
		__u32 c;
	};

  has four bytes of padding between a and b on x86-64, plus another four
  bytes of padding at the end, but no padding on i386, and it needs a
  compat_ioctl conversion handler to translate between the two formats.

  To avoid this problem, all structures should have their members
  naturally aligned, or explicit reserved fields added in place of the
  implicit padding. The ``pahole`` tool can be used for checking the
  alignment (a fixed variant of ``struct foo`` is sketched after this
  list).

* On ARM OABI user space, structures are padded to multiples of 32-bit,
  making some structs incompatible with modern EABI kernels if they
  do not end on a 32-bit boundary.

* On the m68k architecture, struct members are not guaranteed to have an
  alignment greater than 16-bit, which is a problem when relying on
  implicit padding.

* Bitfields and enums generally work as one would expect them to,
  but some properties of them are implementation-defined, so it is better
  to avoid them completely in ioctl interfaces.

* ``char`` members can be either signed or unsigned, depending on
  the architecture, so the __u8 and __s8 types should be used for 8-bit
  integer values, though char arrays are clearer for fixed-length strings.

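A padded variant of the ``struct foo`` example above keeps the layout
identical on i386, x86-64 and other architectures (whether reserved fields
are acceptable is of course a design decision for the interface in
question)::

	struct foo_fixed {
		__u32 a;
		__u32 reserved1;	/* was implicit padding, must be zero */
		__u64 b;
		__u32 c;
		__u32 reserved2;	/* keeps sizeof() the same everywhere */
	};
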
Information leaks
=================

Uninitialized data must not be copied back to user space, as this can
cause an information leak, which can be used to defeat kernel address
space layout randomization (KASLR), helping in an attack.

For this reason (and for compat support) it is best to avoid any
implicit padding in data structures. Where there is implicit padding
in an existing structure, kernel drivers must be careful to fully
initialize an instance of the structure before copying it to user
space. This is usually done by calling memset() before assigning to
individual members.

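A minimal sketch of that pattern (``struct foo_status`` and the surrounding
driver are hypothetical)::

	static long foo_get_status(struct foo_device *foo, void __user *argp)
	{
		struct foo_status st;

		memset(&st, 0, sizeof(st));	/* clears padding holes too */
		st.state = foo->state;
		st.errors = foo->errors;

		if (copy_to_user(argp, &st, sizeof(st)))
			return -EFAULT;
		return 0;
	}
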
Subsystem abstractions
======================

While some device drivers implement their own ioctl function, most
subsystems implement the same command for multiple drivers. Ideally the
subsystem has an .ioctl() handler that copies the arguments from and
to user space, passing them into subsystem specific callback functions
through normal kernel pointers (a minimal dispatcher of this kind is
sketched after the list below).

This helps in various ways:

* Applications written for one driver are more likely to work for
  another one in the same subsystem if there are no subtle differences
  in the user space ABI.

* The complexity of user space access and data structure layout is done
  in one place, reducing the potential for implementation bugs.

* It is more likely to be reviewed by experienced developers
  that can spot problems in the interface when the ioctl is shared
  between multiple drivers than when it is only used in a single driver.

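Such a dispatcher might look roughly like this (the subsystem, its
``struct foo_ops`` callbacks and the command are hypothetical)::

	struct foo_ops {
		int (*set_config)(struct foo_device *foo, struct foo_config *cfg);
	};

	static long foo_subsys_ioctl(struct file *file, unsigned int cmd,
				     unsigned long arg)
	{
		struct foo_device *foo = file->private_data;
		void __user *argp = (void __user *)arg;
		struct foo_config cfg;

		switch (cmd) {
		case FOO_SET_CONFIG:
			if (copy_from_user(&cfg, argp, sizeof(cfg)))
				return -EFAULT;
			/* drivers only ever see a kernel pointer */
			return foo->ops->set_config(foo, &cfg);
		default:
			return -ENOTTY;
		}
	}
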
Alternatives to ioctl
=====================

There are many cases in which ioctl is not the best solution for a
problem. Alternatives include:

* System calls are a better choice for a system-wide feature that
  is not tied to a physical device or constrained by the file system
  permissions of a character device node.

* netlink is the preferred way of configuring any network related
  objects through sockets.

* debugfs is used for ad-hoc interfaces for debugging functionality
  that does not need to be exposed as a stable interface to applications.

* sysfs is a good way to expose the state of an in-kernel object
  that is not tied to a file descriptor.

* configfs can be used for more complex configuration than sysfs.

* A custom file system can provide extra flexibility with a simple
  user interface but adds a lot of complexity to the implementation.

@ -4,6 +4,9 @@
|
|||
*/
|
||||
#ifndef __ASM_COMPAT_H
|
||||
#define __ASM_COMPAT_H
|
||||
|
||||
#include <asm-generic/compat.h>
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
|
||||
/*
|
||||
|
@ -13,8 +16,6 @@
|
|||
#include <linux/sched.h>
|
||||
#include <linux/sched/task_stack.h>
|
||||
|
||||
#include <asm-generic/compat.h>
|
||||
|
||||
#define COMPAT_USER_HZ 100
|
||||
#ifdef __AARCH64EB__
|
||||
#define COMPAT_UTS_MACHINE "armv8b\0\0"
|
||||
|
@ -113,23 +114,6 @@ typedef u32 compat_sigset_word;
|
|||
|
||||
#define COMPAT_OFF_T_MAX 0x7fffffff
|
||||
|
||||
/*
|
||||
* A pointer passed in from user mode. This should not
|
||||
* be used for syscall parameters, just declare them
|
||||
* as pointers because the syscall entry code will have
|
||||
* appropriately converted them already.
|
||||
*/
|
||||
|
||||
static inline void __user *compat_ptr(compat_uptr_t uptr)
|
||||
{
|
||||
return (void __user *)(unsigned long)uptr;
|
||||
}
|
||||
|
||||
static inline compat_uptr_t ptr_to_compat(void __user *uptr)
|
||||
{
|
||||
return (u32)(unsigned long)uptr;
|
||||
}
|
||||
|
||||
#define compat_user_stack_pointer() (user_stack_pointer(task_pt_regs(current)))
|
||||
#define COMPAT_MINSIGSTKSZ 2048
|
||||
|
||||
|
|
|
@ -100,24 +100,6 @@ typedef u32 compat_sigset_word;
|
|||
|
||||
#define COMPAT_OFF_T_MAX 0x7fffffff
|
||||
|
||||
/*
|
||||
* A pointer passed in from user mode. This should not
|
||||
* be used for syscall parameters, just declare them
|
||||
* as pointers because the syscall entry code will have
|
||||
* appropriately converted them already.
|
||||
*/
|
||||
|
||||
static inline void __user *compat_ptr(compat_uptr_t uptr)
|
||||
{
|
||||
/* cast to a __user pointer via "unsigned long" makes sparse happy */
|
||||
return (void __user *)(unsigned long)(long)uptr;
|
||||
}
|
||||
|
||||
static inline compat_uptr_t ptr_to_compat(void __user *uptr)
|
||||
{
|
||||
return (u32)(unsigned long)uptr;
|
||||
}
|
||||
|
||||
static inline void __user *arch_compat_alloc_user_space(long len)
|
||||
{
|
||||
struct pt_regs *regs = (struct pt_regs *)
|
||||
|
|
|
@ -173,23 +173,6 @@ struct compat_shmid64_ds {
|
|||
#define COMPAT_ELF_NGREG 80
|
||||
typedef compat_ulong_t compat_elf_gregset_t[COMPAT_ELF_NGREG];
|
||||
|
||||
/*
|
||||
* A pointer passed in from user mode. This should not
|
||||
* be used for syscall parameters, just declare them
|
||||
* as pointers because the syscall entry code will have
|
||||
* appropriately converted them already.
|
||||
*/
|
||||
|
||||
static inline void __user *compat_ptr(compat_uptr_t uptr)
|
||||
{
|
||||
return (void __user *)(unsigned long)uptr;
|
||||
}
|
||||
|
||||
static inline compat_uptr_t ptr_to_compat(void __user *uptr)
|
||||
{
|
||||
return (u32)(unsigned long)uptr;
|
||||
}
|
||||
|
||||
static __inline__ void __user *arch_compat_alloc_user_space(long len)
|
||||
{
|
||||
struct pt_regs *regs = &current->thread.regs;
|
||||
|
|
|
@ -96,23 +96,6 @@ typedef u32 compat_sigset_word;
|
|||
|
||||
#define COMPAT_OFF_T_MAX 0x7fffffff
|
||||
|
||||
/*
|
||||
* A pointer passed in from user mode. This should not
|
||||
* be used for syscall parameters, just declare them
|
||||
* as pointers because the syscall entry code will have
|
||||
* appropriately converted them already.
|
||||
*/
|
||||
|
||||
static inline void __user *compat_ptr(compat_uptr_t uptr)
|
||||
{
|
||||
return (void __user *)(unsigned long)uptr;
|
||||
}
|
||||
|
||||
static inline compat_uptr_t ptr_to_compat(void __user *uptr)
|
||||
{
|
||||
return (u32)(unsigned long)uptr;
|
||||
}
|
||||
|
||||
static inline void __user *arch_compat_alloc_user_space(long len)
|
||||
{
|
||||
struct pt_regs *regs = current->thread.regs;
|
||||
|
|
|
@ -9,7 +9,7 @@
|
|||
#include <linux/sched.h>
|
||||
#include <asm/processor.h>
|
||||
#include <linux/uaccess.h>
|
||||
#include <asm/compat.h>
|
||||
#include <linux/compat.h>
|
||||
#include <asm/oprofile_impl.h>
|
||||
|
||||
#define STACK_SP(STACK) *(STACK)
|
||||
|
|
|
@ -177,11 +177,7 @@ static inline void __user *compat_ptr(compat_uptr_t uptr)
|
|||
{
|
||||
return (void __user *)(unsigned long)(uptr & 0x7fffffffUL);
|
||||
}
|
||||
|
||||
static inline compat_uptr_t ptr_to_compat(void __user *uptr)
|
||||
{
|
||||
return (u32)(unsigned long)uptr;
|
||||
}
|
||||
#define compat_ptr(uptr) compat_ptr(uptr)
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
|
||||
|
|
|
@ -125,23 +125,6 @@ typedef u32 compat_sigset_word;
|
|||
|
||||
#define COMPAT_OFF_T_MAX 0x7fffffff
|
||||
|
||||
/*
|
||||
* A pointer passed in from user mode. This should not
|
||||
* be used for syscall parameters, just declare them
|
||||
* as pointers because the syscall entry code will have
|
||||
* appropriately converted them already.
|
||||
*/
|
||||
|
||||
static inline void __user *compat_ptr(compat_uptr_t uptr)
|
||||
{
|
||||
return (void __user *)(unsigned long)uptr;
|
||||
}
|
||||
|
||||
static inline compat_uptr_t ptr_to_compat(void __user *uptr)
|
||||
{
|
||||
return (u32)(unsigned long)uptr;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
static inline void __user *arch_compat_alloc_user_space(long len)
|
||||
{
|
||||
|
|
|
@ -113,6 +113,7 @@ static const struct block_device_operations ubd_blops = {
|
|||
.open = ubd_open,
|
||||
.release = ubd_release,
|
||||
.ioctl = ubd_ioctl,
|
||||
.compat_ioctl = blkdev_compat_ptr_ioctl,
|
||||
.getgeo = ubd_getgeo,
|
||||
};
|
||||
|
||||
|
|
|
@ -177,23 +177,6 @@ typedef struct user_regs_struct compat_elf_gregset_t;
|
|||
(!!(task_pt_regs(current)->orig_ax & __X32_SYSCALL_BIT))
|
||||
#endif
|
||||
|
||||
/*
|
||||
* A pointer passed in from user mode. This should not
|
||||
* be used for syscall parameters, just declare them
|
||||
* as pointers because the syscall entry code will have
|
||||
* appropriately converted them already.
|
||||
*/
|
||||
|
||||
static inline void __user *compat_ptr(compat_uptr_t uptr)
|
||||
{
|
||||
return (void __user *)(unsigned long)uptr;
|
||||
}
|
||||
|
||||
static inline compat_uptr_t ptr_to_compat(void __user *uptr)
|
||||
{
|
||||
return (u32)(unsigned long)uptr;
|
||||
}
|
||||
|
||||
static inline void __user *arch_compat_alloc_user_space(long len)
|
||||
{
|
||||
compat_uptr_t sp;
|
||||
|
|
|
@ -25,7 +25,6 @@ obj-$(CONFIG_MQ_IOSCHED_KYBER) += kyber-iosched.o
|
|||
bfq-y := bfq-iosched.o bfq-wf2q.o bfq-cgroup.o
|
||||
obj-$(CONFIG_IOSCHED_BFQ) += bfq.o
|
||||
|
||||
obj-$(CONFIG_BLOCK_COMPAT) += compat_ioctl.o
|
||||
obj-$(CONFIG_BLK_CMDLINE_PARSER) += cmdline-parser.o
|
||||
obj-$(CONFIG_BLK_DEV_INTEGRITY) += bio-integrity.o blk-integrity.o
|
||||
obj-$(CONFIG_BLK_DEV_INTEGRITY_T10) += t10-pi.o
|
||||
|
|
|
@ -382,6 +382,7 @@ static const struct file_operations bsg_fops = {
|
|||
.open = bsg_open,
|
||||
.release = bsg_release,
|
||||
.unlocked_ioctl = bsg_ioctl,
|
||||
.compat_ioctl = compat_ptr_ioctl,
|
||||
.owner = THIS_MODULE,
|
||||
.llseek = default_llseek,
|
||||
};
|
||||
|
|
|
@ -1,427 +0,0 @@
|
|||
// SPDX-License-Identifier: GPL-2.0
|
||||
#include <linux/blkdev.h>
|
||||
#include <linux/blkpg.h>
|
||||
#include <linux/blktrace_api.h>
|
||||
#include <linux/cdrom.h>
|
||||
#include <linux/compat.h>
|
||||
#include <linux/elevator.h>
|
||||
#include <linux/hdreg.h>
|
||||
#include <linux/pr.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/syscalls.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/uaccess.h>
|
||||
|
||||
static int compat_put_ushort(unsigned long arg, unsigned short val)
|
||||
{
|
||||
return put_user(val, (unsigned short __user *)compat_ptr(arg));
|
||||
}
|
||||
|
||||
static int compat_put_int(unsigned long arg, int val)
|
||||
{
|
||||
return put_user(val, (compat_int_t __user *)compat_ptr(arg));
|
||||
}
|
||||
|
||||
static int compat_put_uint(unsigned long arg, unsigned int val)
|
||||
{
|
||||
return put_user(val, (compat_uint_t __user *)compat_ptr(arg));
|
||||
}
|
||||
|
||||
static int compat_put_long(unsigned long arg, long val)
|
||||
{
|
||||
return put_user(val, (compat_long_t __user *)compat_ptr(arg));
|
||||
}
|
||||
|
||||
static int compat_put_ulong(unsigned long arg, compat_ulong_t val)
|
||||
{
|
||||
return put_user(val, (compat_ulong_t __user *)compat_ptr(arg));
|
||||
}
|
||||
|
||||
static int compat_put_u64(unsigned long arg, u64 val)
|
||||
{
|
||||
return put_user(val, (compat_u64 __user *)compat_ptr(arg));
|
||||
}
|
||||
|
||||
struct compat_hd_geometry {
|
||||
unsigned char heads;
|
||||
unsigned char sectors;
|
||||
unsigned short cylinders;
|
||||
u32 start;
|
||||
};
|
||||
|
||||
static int compat_hdio_getgeo(struct gendisk *disk, struct block_device *bdev,
|
||||
struct compat_hd_geometry __user *ugeo)
|
||||
{
|
||||
struct hd_geometry geo;
|
||||
int ret;
|
||||
|
||||
if (!ugeo)
|
||||
return -EINVAL;
|
||||
if (!disk->fops->getgeo)
|
||||
return -ENOTTY;
|
||||
|
||||
memset(&geo, 0, sizeof(geo));
|
||||
/*
|
||||
* We need to set the startsect first, the driver may
|
||||
* want to override it.
|
||||
*/
|
||||
geo.start = get_start_sect(bdev);
|
||||
ret = disk->fops->getgeo(bdev, &geo);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = copy_to_user(ugeo, &geo, 4);
|
||||
ret |= put_user(geo.start, &ugeo->start);
|
||||
if (ret)
|
||||
ret = -EFAULT;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int compat_hdio_ioctl(struct block_device *bdev, fmode_t mode,
|
||||
unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
unsigned long __user *p;
|
||||
int error;
|
||||
|
||||
p = compat_alloc_user_space(sizeof(unsigned long));
|
||||
error = __blkdev_driver_ioctl(bdev, mode,
|
||||
cmd, (unsigned long)p);
|
||||
if (error == 0) {
|
||||
unsigned int __user *uvp = compat_ptr(arg);
|
||||
unsigned long v;
|
||||
if (get_user(v, p) || put_user(v, uvp))
|
||||
error = -EFAULT;
|
||||
}
|
||||
return error;
|
||||
}
|
||||
|
||||
struct compat_cdrom_read_audio {
|
||||
union cdrom_addr addr;
|
||||
u8 addr_format;
|
||||
compat_int_t nframes;
|
||||
compat_caddr_t buf;
|
||||
};
|
||||
|
||||
struct compat_cdrom_generic_command {
|
||||
unsigned char cmd[CDROM_PACKET_SIZE];
|
||||
compat_caddr_t buffer;
|
||||
compat_uint_t buflen;
|
||||
compat_int_t stat;
|
||||
compat_caddr_t sense;
|
||||
unsigned char data_direction;
|
||||
compat_int_t quiet;
|
||||
compat_int_t timeout;
|
||||
compat_caddr_t reserved[1];
|
||||
};
|
||||
|
||||
static int compat_cdrom_read_audio(struct block_device *bdev, fmode_t mode,
|
||||
unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
struct cdrom_read_audio __user *cdread_audio;
|
||||
struct compat_cdrom_read_audio __user *cdread_audio32;
|
||||
__u32 data;
|
||||
void __user *datap;
|
||||
|
||||
cdread_audio = compat_alloc_user_space(sizeof(*cdread_audio));
|
||||
cdread_audio32 = compat_ptr(arg);
|
||||
|
||||
if (copy_in_user(&cdread_audio->addr,
|
||||
&cdread_audio32->addr,
|
||||
(sizeof(*cdread_audio32) -
|
||||
sizeof(compat_caddr_t))))
|
||||
return -EFAULT;
|
||||
|
||||
if (get_user(data, &cdread_audio32->buf))
|
||||
return -EFAULT;
|
||||
datap = compat_ptr(data);
|
||||
if (put_user(datap, &cdread_audio->buf))
|
||||
return -EFAULT;
|
||||
|
||||
return __blkdev_driver_ioctl(bdev, mode, cmd,
|
||||
(unsigned long)cdread_audio);
|
||||
}
|
||||
|
||||
static int compat_cdrom_generic_command(struct block_device *bdev, fmode_t mode,
|
||||
unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
struct cdrom_generic_command __user *cgc;
|
||||
struct compat_cdrom_generic_command __user *cgc32;
|
||||
u32 data;
|
||||
unsigned char dir;
|
||||
int itmp;
|
||||
|
||||
cgc = compat_alloc_user_space(sizeof(*cgc));
|
||||
cgc32 = compat_ptr(arg);
|
||||
|
||||
if (copy_in_user(&cgc->cmd, &cgc32->cmd, sizeof(cgc->cmd)) ||
|
||||
get_user(data, &cgc32->buffer) ||
|
||||
put_user(compat_ptr(data), &cgc->buffer) ||
|
||||
copy_in_user(&cgc->buflen, &cgc32->buflen,
|
||||
(sizeof(unsigned int) + sizeof(int))) ||
|
||||
get_user(data, &cgc32->sense) ||
|
||||
put_user(compat_ptr(data), &cgc->sense) ||
|
||||
get_user(dir, &cgc32->data_direction) ||
|
||||
put_user(dir, &cgc->data_direction) ||
|
||||
get_user(itmp, &cgc32->quiet) ||
|
||||
put_user(itmp, &cgc->quiet) ||
|
||||
get_user(itmp, &cgc32->timeout) ||
|
||||
put_user(itmp, &cgc->timeout) ||
|
||||
get_user(data, &cgc32->reserved[0]) ||
|
||||
put_user(compat_ptr(data), &cgc->reserved[0]))
|
||||
return -EFAULT;
|
||||
|
||||
return __blkdev_driver_ioctl(bdev, mode, cmd, (unsigned long)cgc);
|
||||
}
|
||||
|
||||
struct compat_blkpg_ioctl_arg {
|
||||
compat_int_t op;
|
||||
compat_int_t flags;
|
||||
compat_int_t datalen;
|
||||
compat_caddr_t data;
|
||||
};
|
||||
|
||||
static int compat_blkpg_ioctl(struct block_device *bdev, fmode_t mode,
|
||||
unsigned int cmd, struct compat_blkpg_ioctl_arg __user *ua32)
|
||||
{
|
||||
struct blkpg_ioctl_arg __user *a = compat_alloc_user_space(sizeof(*a));
|
||||
compat_caddr_t udata;
|
||||
compat_int_t n;
|
||||
int err;
|
||||
|
||||
err = get_user(n, &ua32->op);
|
||||
err |= put_user(n, &a->op);
|
||||
err |= get_user(n, &ua32->flags);
|
||||
err |= put_user(n, &a->flags);
|
||||
err |= get_user(n, &ua32->datalen);
|
||||
err |= put_user(n, &a->datalen);
|
||||
err |= get_user(udata, &ua32->data);
|
||||
err |= put_user(compat_ptr(udata), &a->data);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
return blkdev_ioctl(bdev, mode, cmd, (unsigned long)a);
|
||||
}
|
||||
|
||||
#define BLKBSZGET_32 _IOR(0x12, 112, int)
|
||||
#define BLKBSZSET_32 _IOW(0x12, 113, int)
|
||||
#define BLKGETSIZE64_32 _IOR(0x12, 114, int)
|
||||
|
||||
static int compat_blkdev_driver_ioctl(struct block_device *bdev, fmode_t mode,
|
||||
unsigned cmd, unsigned long arg)
|
||||
{
|
||||
switch (cmd) {
|
||||
case HDIO_GET_UNMASKINTR:
|
||||
case HDIO_GET_MULTCOUNT:
|
||||
case HDIO_GET_KEEPSETTINGS:
|
||||
case HDIO_GET_32BIT:
|
||||
case HDIO_GET_NOWERR:
|
||||
case HDIO_GET_DMA:
|
||||
case HDIO_GET_NICE:
|
||||
case HDIO_GET_WCACHE:
|
||||
case HDIO_GET_ACOUSTIC:
|
||||
case HDIO_GET_ADDRESS:
|
||||
case HDIO_GET_BUSSTATE:
|
||||
return compat_hdio_ioctl(bdev, mode, cmd, arg);
|
||||
case CDROMREADAUDIO:
|
||||
return compat_cdrom_read_audio(bdev, mode, cmd, arg);
|
||||
case CDROM_SEND_PACKET:
|
||||
return compat_cdrom_generic_command(bdev, mode, cmd, arg);
|
||||
|
||||
/*
|
||||
* No handler required for the ones below, we just need to
|
||||
* convert arg to a 64 bit pointer.
|
||||
*/
|
||||
case BLKSECTSET:
|
||||
/*
|
||||
* 0x03 -- HD/IDE ioctl's used by hdparm and friends.
|
||||
* Some need translations, these do not.
|
||||
*/
|
||||
case HDIO_GET_IDENTITY:
|
||||
case HDIO_DRIVE_TASK:
|
||||
case HDIO_DRIVE_CMD:
|
||||
/* 0x330 is reserved -- it used to be HDIO_GETGEO_BIG */
|
||||
case 0x330:
|
||||
/* CDROM stuff */
|
||||
case CDROMPAUSE:
|
||||
case CDROMRESUME:
|
||||
case CDROMPLAYMSF:
|
||||
case CDROMPLAYTRKIND:
|
||||
case CDROMREADTOCHDR:
|
||||
case CDROMREADTOCENTRY:
|
||||
case CDROMSTOP:
|
||||
case CDROMSTART:
|
||||
case CDROMEJECT:
|
||||
case CDROMVOLCTRL:
|
||||
case CDROMSUBCHNL:
|
||||
case CDROMMULTISESSION:
|
||||
case CDROM_GET_MCN:
|
||||
case CDROMRESET:
|
||||
case CDROMVOLREAD:
|
||||
case CDROMSEEK:
|
||||
case CDROMPLAYBLK:
|
||||
case CDROMCLOSETRAY:
|
||||
case CDROM_DISC_STATUS:
|
||||
case CDROM_CHANGER_NSLOTS:
|
||||
case CDROM_GET_CAPABILITY:
|
||||
/* Ignore cdrom.h about these next 5 ioctls, they absolutely do
|
||||
* not take a struct cdrom_read, instead they take a struct cdrom_msf
|
||||
* which is compatible.
|
||||
*/
|
||||
case CDROMREADMODE2:
|
||||
case CDROMREADMODE1:
|
||||
case CDROMREADRAW:
|
||||
case CDROMREADCOOKED:
|
||||
case CDROMREADALL:
|
||||
/* DVD ioctls */
|
||||
case DVD_READ_STRUCT:
|
||||
case DVD_WRITE_STRUCT:
|
||||
case DVD_AUTH:
|
||||
arg = (unsigned long)compat_ptr(arg);
|
||||
/* These intepret arg as an unsigned long, not as a pointer,
|
||||
* so we must not do compat_ptr() conversion. */
|
||||
case HDIO_SET_MULTCOUNT:
|
||||
case HDIO_SET_UNMASKINTR:
|
||||
case HDIO_SET_KEEPSETTINGS:
|
||||
case HDIO_SET_32BIT:
|
||||
case HDIO_SET_NOWERR:
|
||||
case HDIO_SET_DMA:
|
||||
case HDIO_SET_PIO_MODE:
|
||||
case HDIO_SET_NICE:
|
||||
case HDIO_SET_WCACHE:
|
||||
case HDIO_SET_ACOUSTIC:
|
||||
case HDIO_SET_BUSSTATE:
|
||||
case HDIO_SET_ADDRESS:
|
||||
case CDROMEJECT_SW:
|
||||
case CDROM_SET_OPTIONS:
|
||||
case CDROM_CLEAR_OPTIONS:
|
||||
case CDROM_SELECT_SPEED:
|
||||
case CDROM_SELECT_DISC:
|
||||
case CDROM_MEDIA_CHANGED:
|
||||
case CDROM_DRIVE_STATUS:
|
||||
case CDROM_LOCKDOOR:
|
||||
case CDROM_DEBUG:
|
||||
break;
|
||||
default:
|
||||
/* unknown ioctl number */
|
||||
return -ENOIOCTLCMD;
|
||||
}
|
||||
|
||||
return __blkdev_driver_ioctl(bdev, mode, cmd, arg);
|
||||
}
|
||||
|
||||
/* Most of the generic ioctls are handled in the normal fallback path.
|
||||
This assumes the blkdev's low level compat_ioctl always returns
|
||||
ENOIOCTLCMD for unknown ioctls. */
|
||||
long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
|
||||
{
|
||||
int ret = -ENOIOCTLCMD;
|
||||
struct inode *inode = file->f_mapping->host;
|
||||
struct block_device *bdev = inode->i_bdev;
|
||||
struct gendisk *disk = bdev->bd_disk;
|
||||
fmode_t mode = file->f_mode;
|
||||
loff_t size;
|
||||
unsigned int max_sectors;
|
||||
|
||||
/*
|
||||
* O_NDELAY can be altered using fcntl(.., F_SETFL, ..), so we have
|
||||
* to updated it before every ioctl.
|
||||
*/
|
||||
if (file->f_flags & O_NDELAY)
|
||||
mode |= FMODE_NDELAY;
|
||||
else
|
||||
mode &= ~FMODE_NDELAY;
|
||||
|
||||
switch (cmd) {
|
||||
case HDIO_GETGEO:
|
||||
return compat_hdio_getgeo(disk, bdev, compat_ptr(arg));
|
||||
case BLKPBSZGET:
|
||||
return compat_put_uint(arg, bdev_physical_block_size(bdev));
|
||||
case BLKIOMIN:
|
||||
return compat_put_uint(arg, bdev_io_min(bdev));
|
||||
case BLKIOOPT:
|
||||
return compat_put_uint(arg, bdev_io_opt(bdev));
|
||||
case BLKALIGNOFF:
|
||||
return compat_put_int(arg, bdev_alignment_offset(bdev));
|
||||
case BLKDISCARDZEROES:
|
||||
return compat_put_uint(arg, 0);
|
||||
case BLKFLSBUF:
|
||||
case BLKROSET:
|
||||
case BLKDISCARD:
|
||||
case BLKSECDISCARD:
|
||||
case BLKZEROOUT:
|
||||
/*
|
||||
* the ones below are implemented in blkdev_locked_ioctl,
|
||||
* but we call blkdev_ioctl, which gets the lock for us
|
||||
*/
|
||||
case BLKRRPART:
|
||||
case BLKREPORTZONE:
|
||||
case BLKRESETZONE:
|
||||
case BLKOPENZONE:
|
||||
case BLKCLOSEZONE:
|
||||
case BLKFINISHZONE:
|
||||
case BLKGETZONESZ:
|
||||
case BLKGETNRZONES:
|
||||
return blkdev_ioctl(bdev, mode, cmd,
|
||||
(unsigned long)compat_ptr(arg));
|
||||
case BLKBSZSET_32:
|
||||
return blkdev_ioctl(bdev, mode, BLKBSZSET,
|
||||
(unsigned long)compat_ptr(arg));
|
||||
case BLKPG:
|
||||
return compat_blkpg_ioctl(bdev, mode, cmd, compat_ptr(arg));
|
||||
case BLKRAGET:
|
||||
case BLKFRAGET:
|
||||
if (!arg)
|
||||
return -EINVAL;
|
||||
return compat_put_long(arg,
|
||||
(bdev->bd_bdi->ra_pages * PAGE_SIZE) / 512);
|
||||
case BLKROGET: /* compatible */
|
||||
return compat_put_int(arg, bdev_read_only(bdev) != 0);
|
||||
case BLKBSZGET_32: /* get the logical block size (cf. BLKSSZGET) */
|
||||
return compat_put_int(arg, block_size(bdev));
|
||||
case BLKSSZGET: /* get block device hardware sector size */
|
||||
return compat_put_int(arg, bdev_logical_block_size(bdev));
|
||||
case BLKSECTGET:
|
||||
max_sectors = min_t(unsigned int, USHRT_MAX,
|
||||
queue_max_sectors(bdev_get_queue(bdev)));
|
||||
return compat_put_ushort(arg, max_sectors);
|
||||
case BLKROTATIONAL:
|
||||
return compat_put_ushort(arg,
|
||||
!blk_queue_nonrot(bdev_get_queue(bdev)));
|
||||
case BLKRASET: /* compatible, but no compat_ptr (!) */
|
||||
case BLKFRASET:
|
||||
if (!capable(CAP_SYS_ADMIN))
|
||||
return -EACCES;
|
||||
bdev->bd_bdi->ra_pages = (arg * 512) / PAGE_SIZE;
|
||||
return 0;
|
||||
case BLKGETSIZE:
|
||||
size = i_size_read(bdev->bd_inode);
|
||||
if ((size >> 9) > ~0UL)
|
||||
return -EFBIG;
|
||||
return compat_put_ulong(arg, size >> 9);
|
||||
|
||||
case BLKGETSIZE64_32:
|
||||
return compat_put_u64(arg, i_size_read(bdev->bd_inode));
|
||||
|
||||
case BLKTRACESETUP32:
|
||||
case BLKTRACESTART: /* compatible */
|
||||
case BLKTRACESTOP: /* compatible */
|
||||
case BLKTRACETEARDOWN: /* compatible */
|
||||
ret = blk_trace_ioctl(bdev, cmd, compat_ptr(arg));
|
||||
return ret;
|
||||
case IOC_PR_REGISTER:
|
||||
case IOC_PR_RESERVE:
|
||||
case IOC_PR_RELEASE:
|
||||
case IOC_PR_PREEMPT:
|
||||
case IOC_PR_PREEMPT_ABORT:
|
||||
case IOC_PR_CLEAR:
|
||||
return blkdev_ioctl(bdev, mode, cmd,
|
||||
(unsigned long)compat_ptr(arg));
|
||||
default:
|
||||
if (disk->fops->compat_ioctl)
|
||||
ret = disk->fops->compat_ioctl(bdev, mode, cmd, arg);
|
||||
if (ret == -ENOIOCTLCMD)
|
||||
ret = compat_blkdev_driver_ioctl(bdev, mode, cmd, arg);
|
||||
return ret;
|
||||
}
|
||||
}
|
block/ioctl.c (321 changed lines)
|
@ -1,5 +1,6 @@
|
|||
// SPDX-License-Identifier: GPL-2.0
|
||||
#include <linux/capability.h>
|
||||
#include <linux/compat.h>
|
||||
#include <linux/blkdev.h>
|
||||
#include <linux/export.h>
|
||||
#include <linux/gfp.h>
|
||||
|
@ -11,12 +12,12 @@
|
|||
#include <linux/pr.h>
|
||||
#include <linux/uaccess.h>
|
||||
|
||||
static int blkpg_ioctl(struct block_device *bdev, struct blkpg_ioctl_arg __user *arg)
|
||||
static int blkpg_do_ioctl(struct block_device *bdev,
|
||||
struct blkpg_partition __user *upart, int op)
|
||||
{
|
||||
struct block_device *bdevp;
|
||||
struct gendisk *disk;
|
||||
struct hd_struct *part, *lpart;
|
||||
struct blkpg_ioctl_arg a;
|
||||
struct blkpg_partition p;
|
||||
struct disk_part_iter piter;
|
||||
long long start, length;
|
||||
|
@ -24,9 +25,7 @@ static int blkpg_ioctl(struct block_device *bdev, struct blkpg_ioctl_arg __user
|
|||
|
||||
if (!capable(CAP_SYS_ADMIN))
|
||||
return -EACCES;
|
||||
if (copy_from_user(&a, arg, sizeof(struct blkpg_ioctl_arg)))
|
||||
return -EFAULT;
|
||||
if (copy_from_user(&p, a.data, sizeof(struct blkpg_partition)))
|
||||
if (copy_from_user(&p, upart, sizeof(struct blkpg_partition)))
|
||||
return -EFAULT;
|
||||
disk = bdev->bd_disk;
|
||||
if (bdev != bdev->bd_contains)
|
||||
|
@ -34,7 +33,7 @@ static int blkpg_ioctl(struct block_device *bdev, struct blkpg_ioctl_arg __user
|
|||
partno = p.pno;
|
||||
if (partno <= 0)
|
||||
return -EINVAL;
|
||||
switch (a.op) {
|
||||
switch (op) {
|
||||
case BLKPG_ADD_PARTITION:
|
||||
start = p.start >> 9;
|
||||
length = p.length >> 9;
|
||||
|
@ -155,6 +154,39 @@ static int blkpg_ioctl(struct block_device *bdev, struct blkpg_ioctl_arg __user
|
|||
}
|
||||
}
|
||||
|
||||
static int blkpg_ioctl(struct block_device *bdev,
|
||||
struct blkpg_ioctl_arg __user *arg)
|
||||
{
|
||||
struct blkpg_partition __user *udata;
|
||||
int op;
|
||||
|
||||
if (get_user(op, &arg->op) || get_user(udata, &arg->data))
|
||||
return -EFAULT;
|
||||
|
||||
return blkpg_do_ioctl(bdev, udata, op);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
struct compat_blkpg_ioctl_arg {
|
||||
compat_int_t op;
|
||||
compat_int_t flags;
|
||||
compat_int_t datalen;
|
||||
compat_caddr_t data;
|
||||
};
|
||||
|
||||
static int compat_blkpg_ioctl(struct block_device *bdev,
|
||||
struct compat_blkpg_ioctl_arg __user *arg)
|
||||
{
|
||||
compat_caddr_t udata;
|
||||
int op;
|
||||
|
||||
if (get_user(op, &arg->op) || get_user(udata, &arg->data))
|
||||
return -EFAULT;
|
||||
|
||||
return blkpg_do_ioctl(bdev, compat_ptr(udata), op);
|
||||
}
|
||||
#endif
|
||||
|
||||
static int blkdev_reread_part(struct block_device *bdev)
|
||||
{
|
||||
int ret;
|
||||
|
@ -238,36 +270,48 @@ static int blk_ioctl_zeroout(struct block_device *bdev, fmode_t mode,
|
|||
BLKDEV_ZERO_NOUNMAP);
|
||||
}
|
||||
|
||||
static int put_ushort(unsigned long arg, unsigned short val)
|
||||
static int put_ushort(unsigned short __user *argp, unsigned short val)
|
||||
{
|
||||
return put_user(val, (unsigned short __user *)arg);
|
||||
return put_user(val, argp);
|
||||
}
|
||||
|
||||
static int put_int(unsigned long arg, int val)
|
||||
static int put_int(int __user *argp, int val)
|
||||
{
|
||||
return put_user(val, (int __user *)arg);
|
||||
return put_user(val, argp);
|
||||
}
|
||||
|
||||
static int put_uint(unsigned long arg, unsigned int val)
|
||||
static int put_uint(unsigned int __user *argp, unsigned int val)
|
||||
{
|
||||
return put_user(val, (unsigned int __user *)arg);
|
||||
return put_user(val, argp);
|
||||
}
|
||||
|
||||
static int put_long(unsigned long arg, long val)
|
||||
static int put_long(long __user *argp, long val)
|
||||
{
|
||||
return put_user(val, (long __user *)arg);
|
||||
return put_user(val, argp);
|
||||
}
|
||||
|
||||
static int put_ulong(unsigned long arg, unsigned long val)
|
||||
static int put_ulong(unsigned long __user *argp, unsigned long val)
|
||||
{
|
||||
return put_user(val, (unsigned long __user *)arg);
|
||||
return put_user(val, argp);
|
||||
}
|
||||
|
||||
static int put_u64(unsigned long arg, u64 val)
|
||||
static int put_u64(u64 __user *argp, u64 val)
|
||||
{
|
||||
return put_user(val, (u64 __user *)arg);
|
||||
return put_user(val, argp);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
static int compat_put_long(compat_long_t *argp, long val)
|
||||
{
|
||||
return put_user(val, argp);
|
||||
}
|
||||
|
||||
static int compat_put_ulong(compat_ulong_t *argp, compat_ulong_t val)
|
||||
{
|
||||
return put_user(val, argp);
|
||||
}
|
||||
#endif
|
||||
|
||||
int __blkdev_driver_ioctl(struct block_device *bdev, fmode_t mode,
|
||||
unsigned cmd, unsigned long arg)
|
||||
{
|
||||
|
@ -285,6 +329,26 @@ int __blkdev_driver_ioctl(struct block_device *bdev, fmode_t mode,
|
|||
*/
|
||||
EXPORT_SYMBOL_GPL(__blkdev_driver_ioctl);
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
/*
|
||||
* This is the equivalent of compat_ptr_ioctl(), to be used by block
|
||||
* drivers that implement only commands that are completely compatible
|
||||
* between 32-bit and 64-bit user space
|
||||
*/
|
||||
int blkdev_compat_ptr_ioctl(struct block_device *bdev, fmode_t mode,
|
||||
unsigned cmd, unsigned long arg)
|
||||
{
|
||||
struct gendisk *disk = bdev->bd_disk;
|
||||
|
||||
if (disk->fops->ioctl)
|
||||
return disk->fops->ioctl(bdev, mode, cmd,
|
||||
(unsigned long)compat_ptr(arg));
|
||||
|
||||
return -ENOIOCTLCMD;
|
||||
}
|
||||
EXPORT_SYMBOL(blkdev_compat_ptr_ioctl);
|
||||
#endif
|
||||
|
||||
static int blkdev_pr_register(struct block_device *bdev,
|
||||
struct pr_registration __user *arg)
|
||||
{
|
||||
|
@ -455,6 +519,45 @@ static int blkdev_getgeo(struct block_device *bdev,
|
|||
return 0;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
struct compat_hd_geometry {
|
||||
unsigned char heads;
|
||||
unsigned char sectors;
|
||||
unsigned short cylinders;
|
||||
u32 start;
|
||||
};
|
||||
|
||||
static int compat_hdio_getgeo(struct block_device *bdev,
|
||||
struct compat_hd_geometry __user *ugeo)
|
||||
{
|
||||
struct gendisk *disk = bdev->bd_disk;
|
||||
struct hd_geometry geo;
|
||||
int ret;
|
||||
|
||||
if (!ugeo)
|
||||
return -EINVAL;
|
||||
if (!disk->fops->getgeo)
|
||||
return -ENOTTY;
|
||||
|
||||
memset(&geo, 0, sizeof(geo));
|
||||
/*
|
||||
* We need to set the startsect first, the driver may
|
||||
* want to override it.
|
||||
*/
|
||||
geo.start = get_start_sect(bdev);
|
||||
ret = disk->fops->getgeo(bdev, &geo);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = copy_to_user(ugeo, &geo, 4);
|
||||
ret |= put_user(geo.start, &ugeo->start);
|
||||
if (ret)
|
||||
ret = -EFAULT;
|
||||
|
||||
return ret;
|
||||
}
|
||||
#endif
|
||||
|
||||
/* set the logical block size */
|
||||
static int blkdev_bszset(struct block_device *bdev, fmode_t mode,
|
||||
int __user *argp)
|
||||
|
@ -481,13 +584,13 @@ static int blkdev_bszset(struct block_device *bdev, fmode_t mode,
|
|||
}
|
||||
|
||||
/*
|
||||
* always keep this in sync with compat_blkdev_ioctl()
|
||||
* Common commands that are handled the same way on native and compat
|
||||
* user space. Note the separate arg/argp parameters that are needed
|
||||
* to deal with the compat_ptr() conversion.
|
||||
*/
|
||||
int blkdev_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
|
||||
unsigned long arg)
|
||||
static int blkdev_common_ioctl(struct block_device *bdev, fmode_t mode,
|
||||
unsigned cmd, unsigned long arg, void __user *argp)
|
||||
{
|
||||
void __user *argp = (void __user *)arg;
|
||||
loff_t size;
|
||||
unsigned int max_sectors;
|
||||
|
||||
switch (cmd) {
|
||||
|
@ -510,60 +613,39 @@ int blkdev_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
|
|||
case BLKFINISHZONE:
|
||||
return blkdev_zone_mgmt_ioctl(bdev, mode, cmd, arg);
|
||||
case BLKGETZONESZ:
|
||||
return put_uint(arg, bdev_zone_sectors(bdev));
|
||||
return put_uint(argp, bdev_zone_sectors(bdev));
|
||||
case BLKGETNRZONES:
|
||||
return put_uint(arg, blkdev_nr_zones(bdev->bd_disk));
|
||||
case HDIO_GETGEO:
|
||||
return blkdev_getgeo(bdev, argp);
|
||||
case BLKRAGET:
|
||||
case BLKFRAGET:
|
||||
if (!arg)
|
||||
return -EINVAL;
|
||||
return put_long(arg, (bdev->bd_bdi->ra_pages*PAGE_SIZE) / 512);
|
||||
return put_uint(argp, blkdev_nr_zones(bdev->bd_disk));
|
||||
case BLKROGET:
|
||||
return put_int(arg, bdev_read_only(bdev) != 0);
|
||||
case BLKBSZGET: /* get block device soft block size (cf. BLKSSZGET) */
|
||||
return put_int(arg, block_size(bdev));
|
||||
return put_int(argp, bdev_read_only(bdev) != 0);
|
||||
case BLKSSZGET: /* get block device logical block size */
|
||||
return put_int(arg, bdev_logical_block_size(bdev));
|
||||
return put_int(argp, bdev_logical_block_size(bdev));
|
||||
case BLKPBSZGET: /* get block device physical block size */
|
||||
return put_uint(arg, bdev_physical_block_size(bdev));
|
||||
return put_uint(argp, bdev_physical_block_size(bdev));
|
||||
case BLKIOMIN:
|
||||
return put_uint(arg, bdev_io_min(bdev));
|
||||
return put_uint(argp, bdev_io_min(bdev));
|
||||
case BLKIOOPT:
|
||||
return put_uint(arg, bdev_io_opt(bdev));
|
||||
return put_uint(argp, bdev_io_opt(bdev));
|
||||
case BLKALIGNOFF:
|
||||
return put_int(arg, bdev_alignment_offset(bdev));
|
||||
return put_int(argp, bdev_alignment_offset(bdev));
|
||||
case BLKDISCARDZEROES:
|
||||
return put_uint(arg, 0);
|
||||
return put_uint(argp, 0);
|
||||
case BLKSECTGET:
|
||||
max_sectors = min_t(unsigned int, USHRT_MAX,
|
||||
queue_max_sectors(bdev_get_queue(bdev)));
|
||||
return put_ushort(arg, max_sectors);
|
||||
return put_ushort(argp, max_sectors);
|
||||
case BLKROTATIONAL:
|
||||
return put_ushort(arg, !blk_queue_nonrot(bdev_get_queue(bdev)));
|
||||
return put_ushort(argp, !blk_queue_nonrot(bdev_get_queue(bdev)));
|
||||
case BLKRASET:
|
||||
case BLKFRASET:
|
||||
if(!capable(CAP_SYS_ADMIN))
|
||||
return -EACCES;
|
||||
bdev->bd_bdi->ra_pages = (arg * 512) / PAGE_SIZE;
|
||||
return 0;
|
||||
case BLKBSZSET:
|
||||
return blkdev_bszset(bdev, mode, argp);
|
||||
case BLKPG:
|
||||
return blkpg_ioctl(bdev, argp);
|
||||
case BLKRRPART:
|
||||
return blkdev_reread_part(bdev);
|
||||
case BLKGETSIZE:
|
||||
size = i_size_read(bdev->bd_inode);
|
||||
if ((size >> 9) > ~0UL)
|
||||
return -EFBIG;
|
||||
return put_ulong(arg, size >> 9);
|
||||
case BLKGETSIZE64:
|
||||
return put_u64(arg, i_size_read(bdev->bd_inode));
|
||||
case BLKTRACESTART:
|
||||
case BLKTRACESTOP:
|
||||
case BLKTRACESETUP:
|
||||
case BLKTRACETEARDOWN:
|
||||
return blk_trace_ioctl(bdev, cmd, argp);
|
||||
case IOC_PR_REGISTER:
|
||||
|
@ -579,7 +661,132 @@ int blkdev_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
|
|||
case IOC_PR_CLEAR:
|
||||
return blkdev_pr_clear(bdev, argp);
|
||||
default:
|
||||
return __blkdev_driver_ioctl(bdev, mode, cmd, arg);
|
||||
return -ENOIOCTLCMD;
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(blkdev_ioctl);
|
||||
|
||||
/*
|
||||
* Always keep this in sync with compat_blkdev_ioctl()
|
||||
* to handle all incompatible commands in both functions.
|
||||
*
|
||||
* New commands must be compatible and go into blkdev_common_ioctl
|
||||
*/
|
||||
int blkdev_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
|
||||
unsigned long arg)
|
||||
{
|
||||
int ret;
|
||||
loff_t size;
|
||||
void __user *argp = (void __user *)arg;
|
||||
|
||||
switch (cmd) {
|
||||
/* These need separate implementations for the data structure */
|
||||
case HDIO_GETGEO:
|
||||
return blkdev_getgeo(bdev, argp);
|
||||
case BLKPG:
|
||||
return blkpg_ioctl(bdev, argp);
|
||||
|
||||
/* Compat mode returns 32-bit data instead of 'long' */
|
||||
case BLKRAGET:
|
||||
case BLKFRAGET:
|
||||
if (!argp)
|
||||
return -EINVAL;
|
||||
return put_long(argp, (bdev->bd_bdi->ra_pages*PAGE_SIZE) / 512);
|
||||
case BLKGETSIZE:
|
||||
size = i_size_read(bdev->bd_inode);
|
||||
if ((size >> 9) > ~0UL)
|
||||
return -EFBIG;
|
||||
return put_ulong(argp, size >> 9);
|
||||
|
||||
/* The data is compatible, but the command number is different */
|
||||
case BLKBSZGET: /* get block device soft block size (cf. BLKSSZGET) */
|
||||
return put_int(argp, block_size(bdev));
|
||||
case BLKBSZSET:
|
||||
return blkdev_bszset(bdev, mode, argp);
|
||||
case BLKGETSIZE64:
|
||||
return put_u64(argp, i_size_read(bdev->bd_inode));
|
||||
|
||||
/* Incompatible alignment on i386 */
|
||||
case BLKTRACESETUP:
|
||||
return blk_trace_ioctl(bdev, cmd, argp);
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
||||
ret = blkdev_common_ioctl(bdev, mode, cmd, arg, argp);
|
||||
if (ret == -ENOIOCTLCMD)
|
||||
return __blkdev_driver_ioctl(bdev, mode, cmd, arg);
|
||||
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(blkdev_ioctl); /* for /dev/raw */
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
|
||||
#define BLKBSZGET_32 _IOR(0x12, 112, int)
|
||||
#define BLKBSZSET_32 _IOW(0x12, 113, int)
|
||||
#define BLKGETSIZE64_32 _IOR(0x12, 114, int)
|
||||
|
||||
/* Most of the generic ioctls are handled in the normal fallback path.
|
||||
This assumes the blkdev's low level compat_ioctl always returns
|
||||
ENOIOCTLCMD for unknown ioctls. */
|
||||
long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
|
||||
{
|
||||
int ret;
|
||||
void __user *argp = compat_ptr(arg);
|
||||
struct inode *inode = file->f_mapping->host;
|
||||
struct block_device *bdev = inode->i_bdev;
|
||||
struct gendisk *disk = bdev->bd_disk;
|
||||
fmode_t mode = file->f_mode;
|
||||
loff_t size;
|
||||
|
||||
/*
|
||||
* O_NDELAY can be altered using fcntl(.., F_SETFL, ..), so we have
|
||||
* to updated it before every ioctl.
|
||||
*/
|
||||
if (file->f_flags & O_NDELAY)
|
||||
mode |= FMODE_NDELAY;
|
||||
else
|
||||
mode &= ~FMODE_NDELAY;
|
||||
|
||||
switch (cmd) {
|
||||
/* These need separate implementations for the data structure */
|
||||
case HDIO_GETGEO:
|
||||
return compat_hdio_getgeo(bdev, argp);
|
||||
case BLKPG:
|
||||
return compat_blkpg_ioctl(bdev, argp);
|
||||
|
||||
/* Compat mode returns 32-bit data instead of 'long' */
|
||||
case BLKRAGET:
|
||||
case BLKFRAGET:
|
||||
if (!argp)
|
||||
return -EINVAL;
|
||||
return compat_put_long(argp,
|
||||
(bdev->bd_bdi->ra_pages * PAGE_SIZE) / 512);
|
||||
case BLKGETSIZE:
|
||||
size = i_size_read(bdev->bd_inode);
|
||||
if ((size >> 9) > ~0UL)
|
||||
return -EFBIG;
|
||||
return compat_put_ulong(argp, size >> 9);
|
||||
|
||||
/* The data is compatible, but the command number is different */
|
||||
case BLKBSZGET_32: /* get the logical block size (cf. BLKSSZGET) */
|
||||
return put_int(argp, bdev_logical_block_size(bdev));
|
||||
case BLKBSZSET_32:
|
||||
return blkdev_bszset(bdev, mode, argp);
|
||||
case BLKGETSIZE64_32:
|
||||
return put_u64(argp, i_size_read(bdev->bd_inode));
|
||||
|
||||
/* Incompatible alignment on i386 */
|
||||
case BLKTRACESETUP32:
|
||||
return blk_trace_ioctl(bdev, cmd, argp);
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
||||
ret = blkdev_common_ioctl(bdev, mode, cmd, arg, argp);
|
||||
if (ret == -ENOIOCTLCMD && disk->fops->compat_ioctl)
|
||||
ret = disk->fops->compat_ioctl(bdev, mode, cmd, arg);
|
||||
|
||||
return ret;
|
||||
}
|
||||
#endif
|
||||
|
|
|
@ -20,6 +20,7 @@
|
|||
#include <scsi/scsi.h>
|
||||
#include <scsi/scsi_ioctl.h>
|
||||
#include <scsi/scsi_cmnd.h>
|
||||
#include <scsi/sg.h>
|
||||
|
||||
struct blk_cmd_filter {
|
||||
unsigned long read_ok[BLK_SCSI_CMD_PER_LONG];
|
||||
|
@ -550,34 +551,6 @@ static inline int blk_send_start_stop(struct request_queue *q,
|
|||
return __blk_send_generic(q, bd_disk, GPCMD_START_STOP_UNIT, data);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
struct compat_sg_io_hdr {
|
||||
compat_int_t interface_id; /* [i] 'S' for SCSI generic (required) */
|
||||
compat_int_t dxfer_direction; /* [i] data transfer direction */
|
||||
unsigned char cmd_len; /* [i] SCSI command length ( <= 16 bytes) */
|
||||
unsigned char mx_sb_len; /* [i] max length to write to sbp */
|
||||
unsigned short iovec_count; /* [i] 0 implies no scatter gather */
|
||||
compat_uint_t dxfer_len; /* [i] byte count of data transfer */
|
||||
compat_uint_t dxferp; /* [i], [*io] points to data transfer memory
|
||||
or scatter gather list */
|
||||
compat_uptr_t cmdp; /* [i], [*i] points to command to perform */
|
||||
compat_uptr_t sbp; /* [i], [*o] points to sense_buffer memory */
|
||||
compat_uint_t timeout; /* [i] MAX_UINT->no timeout (unit: millisec) */
|
||||
compat_uint_t flags; /* [i] 0 -> default, see SG_FLAG... */
|
||||
compat_int_t pack_id; /* [i->o] unused internally (normally) */
|
||||
compat_uptr_t usr_ptr; /* [i->o] unused internally */
|
||||
unsigned char status; /* [o] scsi status */
|
||||
unsigned char masked_status; /* [o] shifted, masked scsi status */
|
||||
unsigned char msg_status; /* [o] messaging level data (optional) */
|
||||
unsigned char sb_len_wr; /* [o] byte count actually written to sbp */
|
||||
unsigned short host_status; /* [o] errors from host adapter */
|
||||
unsigned short driver_status; /* [o] errors from software driver */
|
||||
compat_int_t resid; /* [o] dxfer_len - actual_transferred */
|
||||
compat_uint_t duration; /* [o] time taken by cmd (unit: millisec) */
|
||||
compat_uint_t info; /* [o] auxiliary information */
|
||||
};
|
||||
#endif
|
||||
|
||||
int put_sg_io_hdr(const struct sg_io_hdr *hdr, void __user *argp)
|
||||
{
|
||||
#ifdef CONFIG_COMPAT
|
||||
|
@ -666,6 +639,136 @@ int get_sg_io_hdr(struct sg_io_hdr *hdr, const void __user *argp)
|
|||
}
|
||||
EXPORT_SYMBOL(get_sg_io_hdr);
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
struct compat_cdrom_generic_command {
|
||||
unsigned char cmd[CDROM_PACKET_SIZE];
|
||||
compat_caddr_t buffer;
|
||||
compat_uint_t buflen;
|
||||
compat_int_t stat;
|
||||
compat_caddr_t sense;
|
||||
unsigned char data_direction;
|
||||
compat_int_t quiet;
|
||||
compat_int_t timeout;
|
||||
compat_caddr_t reserved[1];
|
||||
};
|
||||
#endif
|
||||
|
||||
static int scsi_get_cdrom_generic_arg(struct cdrom_generic_command *cgc,
|
||||
const void __user *arg)
|
||||
{
|
||||
#ifdef CONFIG_COMPAT
|
||||
if (in_compat_syscall()) {
|
||||
struct compat_cdrom_generic_command cgc32;
|
||||
|
||||
if (copy_from_user(&cgc32, arg, sizeof(cgc32)))
|
||||
return -EFAULT;
|
||||
|
||||
*cgc = (struct cdrom_generic_command) {
|
||||
.buffer = compat_ptr(cgc32.buffer),
|
||||
.buflen = cgc32.buflen,
|
||||
.stat = cgc32.stat,
|
||||
.sense = compat_ptr(cgc32.sense),
|
||||
.data_direction = cgc32.data_direction,
|
||||
.quiet = cgc32.quiet,
|
||||
.timeout = cgc32.timeout,
|
||||
.reserved[0] = compat_ptr(cgc32.reserved[0]),
|
||||
};
|
||||
memcpy(&cgc->cmd, &cgc32.cmd, CDROM_PACKET_SIZE);
|
||||
return 0;
|
||||
}
|
||||
#endif
|
||||
if (copy_from_user(cgc, arg, sizeof(*cgc)))
|
||||
return -EFAULT;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int scsi_put_cdrom_generic_arg(const struct cdrom_generic_command *cgc,
|
||||
void __user *arg)
|
||||
{
|
||||
#ifdef CONFIG_COMPAT
|
||||
if (in_compat_syscall()) {
|
||||
struct compat_cdrom_generic_command cgc32 = {
|
||||
.buffer = (uintptr_t)(cgc->buffer),
|
||||
.buflen = cgc->buflen,
|
||||
.stat = cgc->stat,
|
||||
.sense = (uintptr_t)(cgc->sense),
|
||||
.data_direction = cgc->data_direction,
|
||||
.quiet = cgc->quiet,
|
||||
.timeout = cgc->timeout,
|
||||
.reserved[0] = (uintptr_t)(cgc->reserved[0]),
|
||||
};
|
||||
memcpy(&cgc32.cmd, &cgc->cmd, CDROM_PACKET_SIZE);
|
||||
|
||||
if (copy_to_user(arg, &cgc32, sizeof(cgc32)))
|
||||
return -EFAULT;
|
||||
|
||||
return 0;
|
||||
}
|
||||
#endif
|
||||
if (copy_to_user(arg, cgc, sizeof(*cgc)))
|
||||
return -EFAULT;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int scsi_cdrom_send_packet(struct request_queue *q,
|
||||
struct gendisk *bd_disk,
|
||||
fmode_t mode, void __user *arg)
|
||||
{
|
||||
struct cdrom_generic_command cgc;
|
||||
struct sg_io_hdr hdr;
|
||||
int err;
|
||||
|
||||
err = scsi_get_cdrom_generic_arg(&cgc, arg);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
cgc.timeout = clock_t_to_jiffies(cgc.timeout);
|
||||
memset(&hdr, 0, sizeof(hdr));
|
||||
hdr.interface_id = 'S';
|
||||
hdr.cmd_len = sizeof(cgc.cmd);
|
||||
hdr.dxfer_len = cgc.buflen;
|
||||
switch (cgc.data_direction) {
|
||||
case CGC_DATA_UNKNOWN:
|
||||
hdr.dxfer_direction = SG_DXFER_UNKNOWN;
|
||||
break;
|
||||
case CGC_DATA_WRITE:
|
||||
hdr.dxfer_direction = SG_DXFER_TO_DEV;
|
||||
break;
|
||||
case CGC_DATA_READ:
|
||||
hdr.dxfer_direction = SG_DXFER_FROM_DEV;
|
||||
break;
|
||||
case CGC_DATA_NONE:
|
||||
hdr.dxfer_direction = SG_DXFER_NONE;
|
||||
break;
|
||||
default:
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
hdr.dxferp = cgc.buffer;
|
||||
hdr.sbp = cgc.sense;
|
||||
if (hdr.sbp)
|
||||
hdr.mx_sb_len = sizeof(struct request_sense);
|
||||
hdr.timeout = jiffies_to_msecs(cgc.timeout);
|
||||
hdr.cmdp = ((struct cdrom_generic_command __user*) arg)->cmd;
|
||||
hdr.cmd_len = sizeof(cgc.cmd);
|
||||
|
||||
err = sg_io(q, bd_disk, &hdr, mode);
|
||||
if (err == -EFAULT)
|
||||
return -EFAULT;
|
||||
|
||||
if (hdr.status)
|
||||
return -EIO;
|
||||
|
||||
cgc.stat = err;
|
||||
cgc.buflen = hdr.resid;
|
||||
if (scsi_put_cdrom_generic_arg(&cgc, arg))
|
||||
return -EFAULT;
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
int scsi_cmd_ioctl(struct request_queue *q, struct gendisk *bd_disk, fmode_t mode,
|
||||
unsigned int cmd, void __user *arg)
|
||||
{
|
||||
|
@ -716,60 +819,9 @@ int scsi_cmd_ioctl(struct request_queue *q, struct gendisk *bd_disk, fmode_t mod
|
|||
err = -EFAULT;
|
||||
break;
|
||||
}
|
||||
case CDROM_SEND_PACKET: {
|
||||
struct cdrom_generic_command cgc;
|
||||
struct sg_io_hdr hdr;
|
||||
|
||||
err = -EFAULT;
|
||||
if (copy_from_user(&cgc, arg, sizeof(cgc)))
|
||||
break;
|
||||
cgc.timeout = clock_t_to_jiffies(cgc.timeout);
|
||||
memset(&hdr, 0, sizeof(hdr));
|
||||
hdr.interface_id = 'S';
|
||||
hdr.cmd_len = sizeof(cgc.cmd);
|
||||
hdr.dxfer_len = cgc.buflen;
|
||||
err = 0;
|
||||
switch (cgc.data_direction) {
|
||||
case CGC_DATA_UNKNOWN:
|
||||
hdr.dxfer_direction = SG_DXFER_UNKNOWN;
|
||||
break;
|
||||
case CGC_DATA_WRITE:
|
||||
hdr.dxfer_direction = SG_DXFER_TO_DEV;
|
||||
break;
|
||||
case CGC_DATA_READ:
|
||||
hdr.dxfer_direction = SG_DXFER_FROM_DEV;
|
||||
break;
|
||||
case CGC_DATA_NONE:
|
||||
hdr.dxfer_direction = SG_DXFER_NONE;
|
||||
break;
|
||||
default:
|
||||
err = -EINVAL;
|
||||
}
|
||||
if (err)
|
||||
break;
|
||||
|
||||
hdr.dxferp = cgc.buffer;
|
||||
hdr.sbp = cgc.sense;
|
||||
if (hdr.sbp)
|
||||
hdr.mx_sb_len = sizeof(struct request_sense);
|
||||
hdr.timeout = jiffies_to_msecs(cgc.timeout);
|
||||
hdr.cmdp = ((struct cdrom_generic_command __user*) arg)->cmd;
|
||||
hdr.cmd_len = sizeof(cgc.cmd);
|
||||
|
||||
err = sg_io(q, bd_disk, &hdr, mode);
|
||||
if (err == -EFAULT)
|
||||
break;
|
||||
|
||||
if (hdr.status)
|
||||
err = -EIO;
|
||||
|
||||
cgc.stat = err;
|
||||
cgc.buflen = hdr.resid;
|
||||
if (copy_to_user(arg, &cgc, sizeof(cgc)))
|
||||
err = -EFAULT;
|
||||
|
||||
case CDROM_SEND_PACKET:
|
||||
err = scsi_cdrom_send_packet(q, bd_disk, mode, arg);
|
||||
break;
|
||||
}
|
||||
|
||||
/*
|
||||
* old junk scsi send command ioctl
|
||||
|
|
|
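
The scsi_get_cdrom_generic_arg()/scsi_put_cdrom_generic_arg() pair above is the template this series uses whenever a structure's layout differs between 32-bit and 64-bit ABIs: declare a compat_* mirror of the structure built from compat_caddr_t/compat_int_t, detect the caller with in_compat_syscall(), and widen pointers with compat_ptr(). A condensed sketch of the same recipe, with invented struct and field names purely for illustration:

#include <linux/compat.h>
#include <linux/uaccess.h>

/* Hypothetical native layout: a user pointer plus a length. */
struct example_cmd {
	void __user	*buf;
	int		len;
};

#ifdef CONFIG_COMPAT
/* 32-bit layout of the same structure: the pointer shrinks to 32 bits. */
struct compat_example_cmd {
	compat_caddr_t	buf;
	compat_int_t	len;
};
#endif

static int example_get_arg(struct example_cmd *cmd, const void __user *arg)
{
#ifdef CONFIG_COMPAT
	if (in_compat_syscall()) {
		struct compat_example_cmd c32;

		if (copy_from_user(&c32, arg, sizeof(c32)))
			return -EFAULT;

		cmd->buf = compat_ptr(c32.buf);	/* widen the 32-bit pointer */
		cmd->len = c32.len;
		return 0;
	}
#endif
	return copy_from_user(cmd, arg, sizeof(*cmd)) ? -EFAULT : 0;
}
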
@ -17,6 +17,7 @@
|
|||
* - http://www.t13.org/
|
||||
*/
|
||||
|
||||
#include <linux/compat.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/blkdev.h>
|
||||
|
@ -761,6 +762,10 @@ static int ata_ioc32(struct ata_port *ap)
|
|||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* This handles both native and compat commands, so anything added
|
||||
* here must have a compatible argument, or check in_compat_syscall()
|
||||
*/
|
||||
int ata_sas_scsi_ioctl(struct ata_port *ap, struct scsi_device *scsidev,
|
||||
unsigned int cmd, void __user *arg)
|
||||
{
|
||||
|
@ -773,6 +778,10 @@ int ata_sas_scsi_ioctl(struct ata_port *ap, struct scsi_device *scsidev,
|
|||
spin_lock_irqsave(ap->lock, flags);
|
||||
val = ata_ioc32(ap);
|
||||
spin_unlock_irqrestore(ap->lock, flags);
|
||||
#ifdef CONFIG_COMPAT
|
||||
if (in_compat_syscall())
|
||||
return put_user(val, (compat_ulong_t __user *)arg);
|
||||
#endif
|
||||
return put_user(val, (unsigned long __user *)arg);
|
||||
|
||||
case HDIO_SET_32BIT:
|
||||
|
|
|
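
The HDIO_GET_32BIT hunk above shows the other recurring idiom: when the native ioctl stores its result through a 'long __user *', a 32-bit caller only provides room for a 32-bit slot, so the compat path must use the narrower type. A minimal sketch of that idiom in isolation; the helper name is invented, though ide-ioctls.c later in this series adds a very similar put_user_long():

#include <linux/compat.h>
#include <linux/uaccess.h>

/*
 * Hypothetical helper: store a long-sized result where a native caller
 * passed a 'long __user *' but a 32-bit caller passed a 32-bit slot.
 */
static int example_put_user_long(long val, unsigned long arg)
{
	if (in_compat_syscall())
		return put_user(val, (compat_long_t __user *)compat_ptr(arg));

	return put_user(val, (long __user *)arg);
}
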
@ -236,6 +236,109 @@ attribute_container_remove_device(struct device *dev,
|
|||
mutex_unlock(&attribute_container_mutex);
|
||||
}
|
||||
|
||||
static int
|
||||
do_attribute_container_device_trigger_safe(struct device *dev,
|
||||
struct attribute_container *cont,
|
||||
int (*fn)(struct attribute_container *,
|
||||
struct device *, struct device *),
|
||||
int (*undo)(struct attribute_container *,
|
||||
struct device *, struct device *))
|
||||
{
|
||||
int ret;
|
||||
struct internal_container *ic, *failed;
|
||||
struct klist_iter iter;
|
||||
|
||||
if (attribute_container_no_classdevs(cont))
|
||||
return fn(cont, dev, NULL);
|
||||
|
||||
klist_for_each_entry(ic, &cont->containers, node, &iter) {
|
||||
if (dev == ic->classdev.parent) {
|
||||
ret = fn(cont, dev, &ic->classdev);
|
||||
if (ret) {
|
||||
failed = ic;
|
||||
klist_iter_exit(&iter);
|
||||
goto fail;
|
||||
}
|
||||
}
|
||||
}
|
||||
return 0;
|
||||
|
||||
fail:
|
||||
if (!undo)
|
||||
return ret;
|
||||
|
||||
/* Attempt to undo the work partially done. */
|
||||
klist_for_each_entry(ic, &cont->containers, node, &iter) {
|
||||
if (ic == failed) {
|
||||
klist_iter_exit(&iter);
|
||||
break;
|
||||
}
|
||||
if (dev == ic->classdev.parent)
|
||||
undo(cont, dev, &ic->classdev);
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* attribute_container_device_trigger_safe - execute a trigger for each
|
||||
* matching classdev or fail all of them.
|
||||
*
|
||||
* @dev: The generic device to run the trigger for
|
||||
 * @fn: the function to execute for each classdev.
 * @undo: a function to undo the work previously done in case of error
 *
 * This function is a safe version of
 * attribute_container_device_trigger. It stops on the first error and
 * undoes the partial work already done on the previous classdevs. It
 * is guaranteed that either all of them succeed or none of them do.
|
||||
*/
|
||||
int
|
||||
attribute_container_device_trigger_safe(struct device *dev,
|
||||
int (*fn)(struct attribute_container *,
|
||||
struct device *,
|
||||
struct device *),
|
||||
int (*undo)(struct attribute_container *,
|
||||
struct device *,
|
||||
struct device *))
|
||||
{
|
||||
struct attribute_container *cont, *failed = NULL;
|
||||
int ret = 0;
|
||||
|
||||
mutex_lock(&attribute_container_mutex);
|
||||
|
||||
list_for_each_entry(cont, &attribute_container_list, node) {
|
||||
|
||||
if (!cont->match(cont, dev))
|
||||
continue;
|
||||
|
||||
ret = do_attribute_container_device_trigger_safe(dev, cont,
|
||||
fn, undo);
|
||||
if (ret) {
|
||||
failed = cont;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (ret && !WARN_ON(!undo)) {
|
||||
list_for_each_entry(cont, &attribute_container_list, node) {
|
||||
|
||||
if (failed == cont)
|
||||
break;
|
||||
|
||||
if (!cont->match(cont, dev))
|
||||
continue;
|
||||
|
||||
do_attribute_container_device_trigger_safe(dev, cont,
|
||||
undo, NULL);
|
||||
}
|
||||
}
|
||||
|
||||
mutex_unlock(&attribute_container_mutex);
|
||||
return ret;
|
||||
|
||||
}
|
||||
|
||||
/**
|
||||
* attribute_container_device_trigger - execute a trigger for each matching classdev
|
||||
*
|
||||
|
|
|
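
For context, a sketch of how a subsystem might use the new all-or-nothing trigger; the callback names here are hypothetical, and the real user added by this series is transport_add_device() in the next hunk:

#include <linux/attribute_container.h>
#include <linux/device.h>

static int example_add_classdev(struct attribute_container *cont,
				struct device *dev, struct device *classdev)
{
	/* Set up per-classdev state; a nonzero return aborts the trigger. */
	return 0;
}

static int example_remove_classdev(struct attribute_container *cont,
				   struct device *dev, struct device *classdev)
{
	/* Undo whatever example_add_classdev() did for this classdev. */
	return 0;
}

static int example_attach(struct device *dev)
{
	/* Either every matching classdev is set up, or none of them is. */
	return attribute_container_device_trigger_safe(dev,
						       example_add_classdev,
						       example_remove_classdev);
}
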
@ -30,6 +30,10 @@
|
|||
#include <linux/attribute_container.h>
|
||||
#include <linux/transport_class.h>
|
||||
|
||||
static int transport_remove_classdev(struct attribute_container *cont,
|
||||
struct device *dev,
|
||||
struct device *classdev);
|
||||
|
||||
/**
|
||||
* transport_class_register - register an initial transport class
|
||||
*
|
||||
|
@ -172,10 +176,11 @@ static int transport_add_class_device(struct attribute_container *cont,
|
|||
* routine is simply a trigger point used to add the device to the
|
||||
* system and register attributes for it.
|
||||
*/
|
||||
|
||||
void transport_add_device(struct device *dev)
|
||||
int transport_add_device(struct device *dev)
|
||||
{
|
||||
attribute_container_device_trigger(dev, transport_add_class_device);
|
||||
return attribute_container_device_trigger_safe(dev,
|
||||
transport_add_class_device,
|
||||
transport_remove_classdev);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(transport_add_device);
|
||||
|
||||
|
|
|
@ -329,6 +329,7 @@ static const struct block_device_operations aoe_bdops = {
|
|||
.open = aoeblk_open,
|
||||
.release = aoeblk_release,
|
||||
.ioctl = aoeblk_ioctl,
|
||||
.compat_ioctl = blkdev_compat_ptr_ioctl,
|
||||
.getgeo = aoeblk_getgeo,
|
||||
.owner = THIS_MODULE,
|
||||
};
|
||||
|
|
|
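
The aoe hunk above, and several of the hunks that follow, wire up the new blkdev_compat_ptr_ioctl() helper, which converts the argument with compat_ptr() and calls the driver's native ->ioctl(). A sketch of what a block driver needs when all of its ioctl commands take pointer (or no) arguments; the example_* names are invented:

#include <linux/blkdev.h>

static int example_ioctl(struct block_device *bdev, fmode_t mode,
			 unsigned int cmd, unsigned long arg)
{
	/* Every command here takes either no argument or a pointer one. */
	return -ENOTTY;
}

static const struct block_device_operations example_fops = {
	.owner		= THIS_MODULE,
	.ioctl		= example_ioctl,
	/* compat callers only need their 32-bit pointer widened */
	.compat_ioctl	= blkdev_compat_ptr_ioctl,
};
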
@ -3879,6 +3879,9 @@ static int fd_compat_ioctl(struct block_device *bdev, fmode_t mode, unsigned int
|
|||
{
|
||||
int drive = (long)bdev->bd_disk->private_data;
|
||||
switch (cmd) {
|
||||
case CDROMEJECT: /* CD-ROM eject */
|
||||
case 0x6470: /* SunOS floppy eject */
|
||||
|
||||
case FDMSGON:
|
||||
case FDMSGOFF:
|
||||
case FDSETEMSGTRESH:
|
||||
|
|
|
@ -275,6 +275,9 @@ static const struct block_device_operations pcd_bdops = {
|
|||
.open = pcd_block_open,
|
||||
.release = pcd_block_release,
|
||||
.ioctl = pcd_block_ioctl,
|
||||
#ifdef CONFIG_COMPAT
|
||||
.compat_ioctl = blkdev_compat_ptr_ioctl,
|
||||
#endif
|
||||
.check_events = pcd_block_check_events,
|
||||
};
|
||||
|
||||
|
|
|
@ -874,6 +874,7 @@ static const struct block_device_operations pd_fops = {
|
|||
.open = pd_open,
|
||||
.release = pd_release,
|
||||
.ioctl = pd_ioctl,
|
||||
.compat_ioctl = pd_ioctl,
|
||||
.getgeo = pd_getgeo,
|
||||
.check_events = pd_check_events,
|
||||
.revalidate_disk= pd_revalidate
|
||||
|
|
|
@ -276,6 +276,7 @@ static const struct block_device_operations pf_fops = {
|
|||
.open = pf_open,
|
||||
.release = pf_release,
|
||||
.ioctl = pf_ioctl,
|
||||
.compat_ioctl = pf_ioctl,
|
||||
.getgeo = pf_getgeo,
|
||||
.check_events = pf_check_events,
|
||||
};
|
||||
|
|
|
@ -2663,28 +2663,6 @@ static int pkt_ioctl(struct block_device *bdev, fmode_t mode, unsigned int cmd,
|
|||
return ret;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
static int pkt_compat_ioctl(struct block_device *bdev, fmode_t mode, unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
switch (cmd) {
|
||||
/* compatible */
|
||||
case CDROMEJECT:
|
||||
case CDROMMULTISESSION:
|
||||
case CDROMREADTOCENTRY:
|
||||
case SCSI_IOCTL_SEND_COMMAND:
|
||||
return pkt_ioctl(bdev, mode, cmd, (unsigned long)compat_ptr(arg));
|
||||
|
||||
|
||||
/* FIXME: no handler so far */
|
||||
case CDROM_LAST_WRITTEN:
|
||||
/* handled in compat_blkdev_driver_ioctl */
|
||||
case CDROM_SEND_PACKET:
|
||||
default:
|
||||
return -ENOIOCTLCMD;
|
||||
}
|
||||
}
|
||||
#endif
|
||||
|
||||
static unsigned int pkt_check_events(struct gendisk *disk,
|
||||
unsigned int clearing)
|
||||
{
|
||||
|
@ -2706,9 +2684,7 @@ static const struct block_device_operations pktcdvd_ops = {
|
|||
.open = pkt_open,
|
||||
.release = pkt_close,
|
||||
.ioctl = pkt_ioctl,
|
||||
#ifdef CONFIG_COMPAT
|
||||
.compat_ioctl = pkt_compat_ioctl,
|
||||
#endif
|
||||
.compat_ioctl = blkdev_compat_ptr_ioctl,
|
||||
.check_events = pkt_check_events,
|
||||
};
|
||||
|
||||
|
|
|
@ -171,6 +171,7 @@ static const struct block_device_operations vdc_fops = {
|
|||
.owner = THIS_MODULE,
|
||||
.getgeo = vdc_getgeo,
|
||||
.ioctl = vdc_ioctl,
|
||||
.compat_ioctl = blkdev_compat_ptr_ioctl,
|
||||
};
|
||||
|
||||
static void vdc_blk_queue_start(struct vdc_port *port)
|
||||
|
|
|
@ -405,6 +405,9 @@ static int virtblk_getgeo(struct block_device *bd, struct hd_geometry *geo)
|
|||
|
||||
static const struct block_device_operations virtblk_fops = {
|
||||
.ioctl = virtblk_ioctl,
|
||||
#ifdef CONFIG_COMPAT
|
||||
.compat_ioctl = blkdev_compat_ptr_ioctl,
|
||||
#endif
|
||||
.owner = THIS_MODULE,
|
||||
.getgeo = virtblk_getgeo,
|
||||
};
|
||||
|
|
|
@ -2632,6 +2632,7 @@ static const struct block_device_operations xlvbd_block_fops =
|
|||
.release = blkif_release,
|
||||
.getgeo = blkif_getgeo,
|
||||
.ioctl = blkif_ioctl,
|
||||
.compat_ioctl = blkdev_compat_ptr_ioctl,
|
||||
};
|
||||
|
||||
|
||||
|
|
|
@ -3017,9 +3017,31 @@ static noinline int mmc_ioctl_cdrom_read_audio(struct cdrom_device_info *cdi,
|
|||
struct cdrom_read_audio ra;
|
||||
int lba;
|
||||
|
||||
if (copy_from_user(&ra, (struct cdrom_read_audio __user *)arg,
|
||||
sizeof(ra)))
|
||||
return -EFAULT;
|
||||
#ifdef CONFIG_COMPAT
|
||||
if (in_compat_syscall()) {
|
||||
struct compat_cdrom_read_audio {
|
||||
union cdrom_addr addr;
|
||||
u8 addr_format;
|
||||
compat_int_t nframes;
|
||||
compat_caddr_t buf;
|
||||
} ra32;
|
||||
|
||||
if (copy_from_user(&ra32, arg, sizeof(ra32)))
|
||||
return -EFAULT;
|
||||
|
||||
ra = (struct cdrom_read_audio) {
|
||||
.addr = ra32.addr,
|
||||
.addr_format = ra32.addr_format,
|
||||
.nframes = ra32.nframes,
|
||||
.buf = compat_ptr(ra32.buf),
|
||||
};
|
||||
} else
|
||||
#endif
|
||||
{
|
||||
if (copy_from_user(&ra, (struct cdrom_read_audio __user *)arg,
|
||||
sizeof(ra)))
|
||||
return -EFAULT;
|
||||
}
|
||||
|
||||
if (ra.addr_format == CDROM_MSF)
|
||||
lba = msf_to_lba(ra.addr.msf.minute,
|
||||
|
@ -3271,9 +3293,10 @@ static noinline int mmc_ioctl_cdrom_last_written(struct cdrom_device_info *cdi,
|
|||
ret = cdrom_get_last_written(cdi, &last);
|
||||
if (ret)
|
||||
return ret;
|
||||
if (copy_to_user((long __user *)arg, &last, sizeof(last)))
|
||||
return -EFAULT;
|
||||
return 0;
|
||||
if (in_compat_syscall())
|
||||
return put_user(last, (__s32 __user *)arg);
|
||||
|
||||
return put_user(last, (long __user *)arg);
|
||||
}
|
||||
|
||||
static int mmc_ioctl(struct cdrom_device_info *cdi, unsigned int cmd,
|
||||
|
|
|
@ -518,6 +518,9 @@ static const struct block_device_operations gdrom_bdops = {
|
|||
.release = gdrom_bdops_release,
|
||||
.check_events = gdrom_bdops_check_events,
|
||||
.ioctl = gdrom_bdops_ioctl,
|
||||
#ifdef CONFIG_COMPAT
|
||||
.compat_ioctl = blkdev_compat_ptr_ioctl,
|
||||
#endif
|
||||
};
|
||||
|
||||
static irqreturn_t gdrom_command_interrupt(int irq, void *dev_id)
|
||||
|
|
|
@ -25,6 +25,7 @@
|
|||
|
||||
#define IDECD_VERSION "5.00"
|
||||
|
||||
#include <linux/compat.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/kernel.h>
|
||||
|
@ -1710,6 +1711,41 @@ static int idecd_ioctl(struct block_device *bdev, fmode_t mode,
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int idecd_locked_compat_ioctl(struct block_device *bdev, fmode_t mode,
|
||||
unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
struct cdrom_info *info = ide_drv_g(bdev->bd_disk, cdrom_info);
|
||||
void __user *argp = compat_ptr(arg);
|
||||
int err;
|
||||
|
||||
switch (cmd) {
|
||||
case CDROMSETSPINDOWN:
|
||||
return idecd_set_spindown(&info->devinfo, (unsigned long)argp);
|
||||
case CDROMGETSPINDOWN:
|
||||
return idecd_get_spindown(&info->devinfo, (unsigned long)argp);
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
||||
err = generic_ide_ioctl(info->drive, bdev, cmd, arg);
|
||||
if (err == -EINVAL)
|
||||
err = cdrom_ioctl(&info->devinfo, bdev, mode, cmd,
|
||||
(unsigned long)argp);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static int idecd_compat_ioctl(struct block_device *bdev, fmode_t mode,
|
||||
unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
int ret;
|
||||
|
||||
mutex_lock(&ide_cd_mutex);
|
||||
ret = idecd_locked_compat_ioctl(bdev, mode, cmd, arg);
|
||||
mutex_unlock(&ide_cd_mutex);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static unsigned int idecd_check_events(struct gendisk *disk,
|
||||
unsigned int clearing)
|
||||
|
@ -1732,6 +1768,8 @@ static const struct block_device_operations idecd_ops = {
|
|||
.open = idecd_open,
|
||||
.release = idecd_release,
|
||||
.ioctl = idecd_ioctl,
|
||||
.compat_ioctl = IS_ENABLED(CONFIG_COMPAT) ?
|
||||
idecd_compat_ioctl : NULL,
|
||||
.check_events = idecd_check_events,
|
||||
.revalidate_disk = idecd_revalidate_disk
|
||||
};
|
||||
|
|
|
@ -794,4 +794,5 @@ const struct ide_disk_ops ide_ata_disk_ops = {
|
|||
.set_doorlock = ide_disk_set_doorlock,
|
||||
.do_request = ide_do_rw_disk,
|
||||
.ioctl = ide_disk_ioctl,
|
||||
.compat_ioctl = ide_disk_ioctl,
|
||||
};
|
||||
|
|
|
@ -19,6 +19,7 @@
|
|||
#include <linux/types.h>
|
||||
#include <linux/string.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/compat.h>
|
||||
#include <linux/delay.h>
|
||||
#include <linux/timer.h>
|
||||
#include <linux/mm.h>
|
||||
|
@ -546,4 +547,7 @@ const struct ide_disk_ops ide_atapi_disk_ops = {
|
|||
.set_doorlock = ide_set_media_lock,
|
||||
.do_request = ide_floppy_do_request,
|
||||
.ioctl = ide_floppy_ioctl,
|
||||
#ifdef CONFIG_COMPAT
|
||||
.compat_ioctl = ide_floppy_compat_ioctl,
|
||||
#endif
|
||||
};
|
||||
|
|
|
@ -26,6 +26,8 @@ void ide_floppy_create_read_capacity_cmd(struct ide_atapi_pc *);
|
|||
/* ide-floppy_ioctl.c */
|
||||
int ide_floppy_ioctl(ide_drive_t *, struct block_device *, fmode_t,
|
||||
unsigned int, unsigned long);
|
||||
int ide_floppy_compat_ioctl(ide_drive_t *, struct block_device *, fmode_t,
|
||||
unsigned int, unsigned long);
|
||||
|
||||
#ifdef CONFIG_IDE_PROC_FS
|
||||
/* ide-floppy_proc.c */
|
||||
|
|
|
@ -5,6 +5,7 @@
|
|||
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/ide.h>
|
||||
#include <linux/compat.h>
|
||||
#include <linux/cdrom.h>
|
||||
#include <linux/mutex.h>
|
||||
|
||||
|
@ -302,3 +303,37 @@ out:
|
|||
mutex_unlock(&ide_floppy_ioctl_mutex);
|
||||
return err;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
int ide_floppy_compat_ioctl(ide_drive_t *drive, struct block_device *bdev,
|
||||
fmode_t mode, unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
struct ide_atapi_pc pc;
|
||||
void __user *argp = compat_ptr(arg);
|
||||
int err;
|
||||
|
||||
mutex_lock(&ide_floppy_ioctl_mutex);
|
||||
if (cmd == CDROMEJECT || cmd == CDROM_LOCKDOOR) {
|
||||
err = ide_floppy_lockdoor(drive, &pc, arg, cmd);
|
||||
goto out;
|
||||
}
|
||||
|
||||
err = ide_floppy_format_ioctl(drive, &pc, mode, cmd, argp);
|
||||
if (err != -ENOTTY)
|
||||
goto out;
|
||||
|
||||
/*
|
||||
* skip SCSI_IOCTL_SEND_COMMAND (deprecated)
|
||||
* and CDROM_SEND_PACKET (legacy) ioctls
|
||||
*/
|
||||
if (cmd != CDROM_SEND_PACKET && cmd != SCSI_IOCTL_SEND_COMMAND)
|
||||
err = scsi_cmd_blk_ioctl(bdev, mode, cmd, argp);
|
||||
|
||||
if (err == -ENOTTY)
|
||||
err = generic_ide_ioctl(drive, bdev, cmd, arg);
|
||||
|
||||
out:
|
||||
mutex_unlock(&ide_floppy_ioctl_mutex);
|
||||
return err;
|
||||
}
|
||||
#endif
|
||||
|
|
|
@ -341,11 +341,28 @@ static int ide_gd_ioctl(struct block_device *bdev, fmode_t mode,
|
|||
return drive->disk_ops->ioctl(drive, bdev, mode, cmd, arg);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
static int ide_gd_compat_ioctl(struct block_device *bdev, fmode_t mode,
|
||||
unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
struct ide_disk_obj *idkp = ide_drv_g(bdev->bd_disk, ide_disk_obj);
|
||||
ide_drive_t *drive = idkp->drive;
|
||||
|
||||
if (!drive->disk_ops->compat_ioctl)
|
||||
return -ENOIOCTLCMD;
|
||||
|
||||
return drive->disk_ops->compat_ioctl(drive, bdev, mode, cmd, arg);
|
||||
}
|
||||
#endif
|
||||
|
||||
static const struct block_device_operations ide_gd_ops = {
|
||||
.owner = THIS_MODULE,
|
||||
.open = ide_gd_unlocked_open,
|
||||
.release = ide_gd_release,
|
||||
.ioctl = ide_gd_ioctl,
|
||||
#ifdef CONFIG_COMPAT
|
||||
.compat_ioctl = ide_gd_compat_ioctl,
|
||||
#endif
|
||||
.getgeo = ide_gd_getgeo,
|
||||
.check_events = ide_gd_check_events,
|
||||
.unlock_native_capacity = ide_gd_unlock_native_capacity,
|
||||
|
|
|
@ -3,11 +3,20 @@
|
|||
* IDE ioctls handling.
|
||||
*/
|
||||
|
||||
#include <linux/compat.h>
|
||||
#include <linux/export.h>
|
||||
#include <linux/hdreg.h>
|
||||
#include <linux/ide.h>
|
||||
#include <linux/slab.h>
|
||||
|
||||
static int put_user_long(long val, unsigned long arg)
|
||||
{
|
||||
if (in_compat_syscall())
|
||||
return put_user(val, (compat_long_t __user *)compat_ptr(arg));
|
||||
|
||||
return put_user(val, (long __user *)arg);
|
||||
}
|
||||
|
||||
static const struct ide_ioctl_devset ide_ioctl_settings[] = {
|
||||
{ HDIO_GET_32BIT, HDIO_SET_32BIT, &ide_devset_io_32bit },
|
||||
{ HDIO_GET_KEEPSETTINGS, HDIO_SET_KEEPSETTINGS, &ide_devset_keepsettings },
|
||||
|
@ -37,7 +46,7 @@ read_val:
|
|||
mutex_lock(&ide_setting_mtx);
|
||||
err = ds->get(drive);
|
||||
mutex_unlock(&ide_setting_mtx);
|
||||
return err >= 0 ? put_user(err, (long __user *)arg) : err;
|
||||
return err >= 0 ? put_user_long(err, arg) : err;
|
||||
|
||||
set_val:
|
||||
if (bdev != bdev->bd_contains)
|
||||
|
@ -56,7 +65,7 @@ set_val:
|
|||
EXPORT_SYMBOL_GPL(ide_setting_ioctl);
|
||||
|
||||
static int ide_get_identity_ioctl(ide_drive_t *drive, unsigned int cmd,
|
||||
unsigned long arg)
|
||||
void __user *argp)
|
||||
{
|
||||
u16 *id = NULL;
|
||||
int size = (cmd == HDIO_GET_IDENTITY) ? (ATA_ID_WORDS * 2) : 142;
|
||||
|
@ -77,7 +86,7 @@ static int ide_get_identity_ioctl(ide_drive_t *drive, unsigned int cmd,
|
|||
memcpy(id, drive->id, size);
|
||||
ata_id_to_hd_driveid(id);
|
||||
|
||||
if (copy_to_user((void __user *)arg, id, size))
|
||||
if (copy_to_user(argp, id, size))
|
||||
rc = -EFAULT;
|
||||
|
||||
kfree(id);
|
||||
|
@ -87,10 +96,10 @@ out:
|
|||
|
||||
static int ide_get_nice_ioctl(ide_drive_t *drive, unsigned long arg)
|
||||
{
|
||||
return put_user((!!(drive->dev_flags & IDE_DFLAG_DSC_OVERLAP)
|
||||
return put_user_long((!!(drive->dev_flags & IDE_DFLAG_DSC_OVERLAP)
|
||||
<< IDE_NICE_DSC_OVERLAP) |
|
||||
(!!(drive->dev_flags & IDE_DFLAG_NICE1)
|
||||
<< IDE_NICE_1), (long __user *)arg);
|
||||
<< IDE_NICE_1), arg);
|
||||
}
|
||||
|
||||
static int ide_set_nice_ioctl(ide_drive_t *drive, unsigned long arg)
|
||||
|
@ -115,7 +124,7 @@ static int ide_set_nice_ioctl(ide_drive_t *drive, unsigned long arg)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int ide_cmd_ioctl(ide_drive_t *drive, unsigned long arg)
|
||||
static int ide_cmd_ioctl(ide_drive_t *drive, void __user *argp)
|
||||
{
|
||||
u8 *buf = NULL;
|
||||
int bufsize = 0, err = 0;
|
||||
|
@ -123,7 +132,7 @@ static int ide_cmd_ioctl(ide_drive_t *drive, unsigned long arg)
|
|||
struct ide_cmd cmd;
|
||||
struct ide_taskfile *tf = &cmd.tf;
|
||||
|
||||
if (NULL == (void *) arg) {
|
||||
if (NULL == argp) {
|
||||
struct request *rq;
|
||||
|
||||
rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, 0);
|
||||
|
@ -135,7 +144,7 @@ static int ide_cmd_ioctl(ide_drive_t *drive, unsigned long arg)
|
|||
return err;
|
||||
}
|
||||
|
||||
if (copy_from_user(args, (void __user *)arg, 4))
|
||||
if (copy_from_user(args, argp, 4))
|
||||
return -EFAULT;
|
||||
|
||||
memset(&cmd, 0, sizeof(cmd));
|
||||
|
@ -181,19 +190,18 @@ static int ide_cmd_ioctl(ide_drive_t *drive, unsigned long arg)
|
|||
args[1] = tf->error;
|
||||
args[2] = tf->nsect;
|
||||
abort:
|
||||
if (copy_to_user((void __user *)arg, &args, 4))
|
||||
if (copy_to_user(argp, &args, 4))
|
||||
err = -EFAULT;
|
||||
if (buf) {
|
||||
if (copy_to_user((void __user *)(arg + 4), buf, bufsize))
|
||||
if (copy_to_user((argp + 4), buf, bufsize))
|
||||
err = -EFAULT;
|
||||
kfree(buf);
|
||||
}
|
||||
return err;
|
||||
}
|
||||
|
||||
static int ide_task_ioctl(ide_drive_t *drive, unsigned long arg)
|
||||
static int ide_task_ioctl(ide_drive_t *drive, void __user *p)
|
||||
{
|
||||
void __user *p = (void __user *)arg;
|
||||
int err = 0;
|
||||
u8 args[7];
|
||||
struct ide_cmd cmd;
|
||||
|
@ -237,6 +245,10 @@ int generic_ide_ioctl(ide_drive_t *drive, struct block_device *bdev,
|
|||
unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
int err;
|
||||
void __user *argp = (void __user *)arg;
|
||||
|
||||
if (in_compat_syscall())
|
||||
argp = compat_ptr(arg);
|
||||
|
||||
err = ide_setting_ioctl(drive, bdev, cmd, arg, ide_ioctl_settings);
|
||||
if (err != -EOPNOTSUPP)
|
||||
|
@ -247,7 +259,7 @@ int generic_ide_ioctl(ide_drive_t *drive, struct block_device *bdev,
|
|||
case HDIO_GET_IDENTITY:
|
||||
if (bdev != bdev->bd_contains)
|
||||
return -EINVAL;
|
||||
return ide_get_identity_ioctl(drive, cmd, arg);
|
||||
return ide_get_identity_ioctl(drive, cmd, argp);
|
||||
case HDIO_GET_NICE:
|
||||
return ide_get_nice_ioctl(drive, arg);
|
||||
case HDIO_SET_NICE:
|
||||
|
@ -258,6 +270,9 @@ int generic_ide_ioctl(ide_drive_t *drive, struct block_device *bdev,
|
|||
case HDIO_DRIVE_TASKFILE:
|
||||
if (!capable(CAP_SYS_ADMIN) || !capable(CAP_SYS_RAWIO))
|
||||
return -EACCES;
|
||||
/* missing compat handler for HDIO_DRIVE_TASKFILE */
|
||||
if (in_compat_syscall())
|
||||
return -ENOTTY;
|
||||
if (drive->media == ide_disk)
|
||||
return ide_taskfile_ioctl(drive, arg);
|
||||
return -ENOMSG;
|
||||
|
@ -265,11 +280,11 @@ int generic_ide_ioctl(ide_drive_t *drive, struct block_device *bdev,
|
|||
case HDIO_DRIVE_CMD:
|
||||
if (!capable(CAP_SYS_RAWIO))
|
||||
return -EACCES;
|
||||
return ide_cmd_ioctl(drive, arg);
|
||||
return ide_cmd_ioctl(drive, argp);
|
||||
case HDIO_DRIVE_TASK:
|
||||
if (!capable(CAP_SYS_RAWIO))
|
||||
return -EACCES;
|
||||
return ide_task_ioctl(drive, arg);
|
||||
return ide_task_ioctl(drive, argp);
|
||||
case HDIO_DRIVE_RESET:
|
||||
if (!capable(CAP_SYS_ADMIN))
|
||||
return -EACCES;
|
||||
|
@ -277,7 +292,7 @@ int generic_ide_ioctl(ide_drive_t *drive, struct block_device *bdev,
|
|||
case HDIO_GET_BUSSTATE:
|
||||
if (!capable(CAP_SYS_ADMIN))
|
||||
return -EACCES;
|
||||
if (put_user(BUSSTATE_ON, (long __user *)arg))
|
||||
if (put_user_long(BUSSTATE_ON, arg))
|
||||
return -EFAULT;
|
||||
return 0;
|
||||
case HDIO_SET_BUSSTATE:
|
||||
|
|
|
@ -1945,11 +1945,22 @@ static int idetape_ioctl(struct block_device *bdev, fmode_t mode,
|
|||
return err;
|
||||
}
|
||||
|
||||
static int idetape_compat_ioctl(struct block_device *bdev, fmode_t mode,
|
||||
unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
if (cmd == 0x0340 || cmd == 0x0350)
|
||||
arg = (unsigned long)compat_ptr(arg);
|
||||
|
||||
return idetape_ioctl(bdev, mode, cmd, arg);
|
||||
}
|
||||
|
||||
static const struct block_device_operations idetape_block_ops = {
|
||||
.owner = THIS_MODULE,
|
||||
.open = idetape_open,
|
||||
.release = idetape_release,
|
||||
.ioctl = idetape_ioctl,
|
||||
.compat_ioctl = IS_ENABLED(CONFIG_COMPAT) ?
|
||||
idetape_compat_ioctl : NULL,
|
||||
};
|
||||
|
||||
static int ide_tape_probe(ide_drive_t *drive)
|
||||
|
|
|
@ -134,7 +134,7 @@ static char *blogic_cmd_failure_reason;
|
|||
static void blogic_announce_drvr(struct blogic_adapter *adapter)
|
||||
{
|
||||
blogic_announce("***** BusLogic SCSI Driver Version " blogic_drvr_version " of " blogic_drvr_date " *****\n", adapter);
|
||||
blogic_announce("Copyright 1995-1998 by Leonard N. Zubkoff " "<lnz@dandelion.com>\n", adapter);
|
||||
blogic_announce("Copyright 1995-1998 by Leonard N. Zubkoff <lnz@dandelion.com>\n", adapter);
|
||||
}
|
||||
|
||||
|
||||
|
@ -440,7 +440,7 @@ static int blogic_cmd(struct blogic_adapter *adapter, enum blogic_opcode opcode,
|
|||
goto done;
|
||||
}
|
||||
if (blogic_global_options.trace_config)
|
||||
blogic_notice("blogic_cmd(%02X) Status = %02X: " "(Modify I/O Address)\n", adapter, opcode, statusreg.all);
|
||||
blogic_notice("blogic_cmd(%02X) Status = %02X: (Modify I/O Address)\n", adapter, opcode, statusreg.all);
|
||||
result = 0;
|
||||
goto done;
|
||||
}
|
||||
|
@ -716,23 +716,23 @@ static int __init blogic_init_mm_probeinfo(struct blogic_adapter *adapter)
|
|||
pci_addr = base_addr1 = pci_resource_start(pci_device, 1);
|
||||
|
||||
if (pci_resource_flags(pci_device, 0) & IORESOURCE_MEM) {
|
||||
blogic_err("BusLogic: Base Address0 0x%X not I/O for " "MultiMaster Host Adapter\n", NULL, base_addr0);
|
||||
blogic_err("at PCI Bus %d Device %d I/O Address 0x%X\n", NULL, bus, device, io_addr);
|
||||
blogic_err("BusLogic: Base Address0 0x%lX not I/O for MultiMaster Host Adapter\n", NULL, base_addr0);
|
||||
blogic_err("at PCI Bus %d Device %d I/O Address 0x%lX\n", NULL, bus, device, io_addr);
|
||||
continue;
|
||||
}
|
||||
if (pci_resource_flags(pci_device, 1) & IORESOURCE_IO) {
|
||||
blogic_err("BusLogic: Base Address1 0x%X not Memory for " "MultiMaster Host Adapter\n", NULL, base_addr1);
|
||||
blogic_err("at PCI Bus %d Device %d PCI Address 0x%X\n", NULL, bus, device, pci_addr);
|
||||
blogic_err("BusLogic: Base Address1 0x%lX not Memory for MultiMaster Host Adapter\n", NULL, base_addr1);
|
||||
blogic_err("at PCI Bus %d Device %d PCI Address 0x%lX\n", NULL, bus, device, pci_addr);
|
||||
continue;
|
||||
}
|
||||
if (irq_ch == 0) {
|
||||
blogic_err("BusLogic: IRQ Channel %d invalid for " "MultiMaster Host Adapter\n", NULL, irq_ch);
|
||||
blogic_err("at PCI Bus %d Device %d I/O Address 0x%X\n", NULL, bus, device, io_addr);
|
||||
blogic_err("BusLogic: IRQ Channel %d invalid for MultiMaster Host Adapter\n", NULL, irq_ch);
|
||||
blogic_err("at PCI Bus %d Device %d I/O Address 0x%lX\n", NULL, bus, device, io_addr);
|
||||
continue;
|
||||
}
|
||||
if (blogic_global_options.trace_probe) {
|
||||
blogic_notice("BusLogic: PCI MultiMaster Host Adapter " "detected at\n", NULL);
|
||||
blogic_notice("BusLogic: PCI Bus %d Device %d I/O Address " "0x%X PCI Address 0x%X\n", NULL, bus, device, io_addr, pci_addr);
|
||||
blogic_notice("BusLogic: PCI MultiMaster Host Adapter detected at\n", NULL);
|
||||
blogic_notice("BusLogic: PCI Bus %d Device %d I/O Address 0x%lX PCI Address 0x%lX\n", NULL, bus, device, io_addr, pci_addr);
|
||||
}
|
||||
/*
|
||||
Issue the Inquire PCI Host Adapter Information command to determine
|
||||
|
@ -818,7 +818,7 @@ static int __init blogic_init_mm_probeinfo(struct blogic_adapter *adapter)
|
|||
nonpr_mmcount++;
|
||||
mmcount++;
|
||||
} else
|
||||
blogic_warn("BusLogic: Too many Host Adapters " "detected\n", NULL);
|
||||
blogic_warn("BusLogic: Too many Host Adapters detected\n", NULL);
|
||||
}
|
||||
/*
|
||||
If the AutoSCSI "Use Bus And Device # For PCI Scanning Seq."
|
||||
|
@ -956,23 +956,23 @@ static int __init blogic_init_fp_probeinfo(struct blogic_adapter *adapter)
|
|||
pci_addr = base_addr1 = pci_resource_start(pci_device, 1);
|
||||
#ifdef CONFIG_SCSI_FLASHPOINT
|
||||
if (pci_resource_flags(pci_device, 0) & IORESOURCE_MEM) {
|
||||
blogic_err("BusLogic: Base Address0 0x%X not I/O for " "FlashPoint Host Adapter\n", NULL, base_addr0);
|
||||
blogic_err("at PCI Bus %d Device %d I/O Address 0x%X\n", NULL, bus, device, io_addr);
|
||||
blogic_err("BusLogic: Base Address0 0x%lX not I/O for FlashPoint Host Adapter\n", NULL, base_addr0);
|
||||
blogic_err("at PCI Bus %d Device %d I/O Address 0x%lX\n", NULL, bus, device, io_addr);
|
||||
continue;
|
||||
}
|
||||
if (pci_resource_flags(pci_device, 1) & IORESOURCE_IO) {
|
||||
blogic_err("BusLogic: Base Address1 0x%X not Memory for " "FlashPoint Host Adapter\n", NULL, base_addr1);
|
||||
blogic_err("at PCI Bus %d Device %d PCI Address 0x%X\n", NULL, bus, device, pci_addr);
|
||||
blogic_err("BusLogic: Base Address1 0x%lX not Memory for FlashPoint Host Adapter\n", NULL, base_addr1);
|
||||
blogic_err("at PCI Bus %d Device %d PCI Address 0x%lX\n", NULL, bus, device, pci_addr);
|
||||
continue;
|
||||
}
|
||||
if (irq_ch == 0) {
|
||||
blogic_err("BusLogic: IRQ Channel %d invalid for " "FlashPoint Host Adapter\n", NULL, irq_ch);
|
||||
blogic_err("at PCI Bus %d Device %d I/O Address 0x%X\n", NULL, bus, device, io_addr);
|
||||
blogic_err("BusLogic: IRQ Channel %d invalid for FlashPoint Host Adapter\n", NULL, irq_ch);
|
||||
blogic_err("at PCI Bus %d Device %d I/O Address 0x%lX\n", NULL, bus, device, io_addr);
|
||||
continue;
|
||||
}
|
||||
if (blogic_global_options.trace_probe) {
|
||||
blogic_notice("BusLogic: FlashPoint Host Adapter " "detected at\n", NULL);
|
||||
blogic_notice("BusLogic: PCI Bus %d Device %d I/O Address " "0x%X PCI Address 0x%X\n", NULL, bus, device, io_addr, pci_addr);
|
||||
blogic_notice("BusLogic: FlashPoint Host Adapter detected at\n", NULL);
|
||||
blogic_notice("BusLogic: PCI Bus %d Device %d I/O Address 0x%lX PCI Address 0x%lX\n", NULL, bus, device, io_addr, pci_addr);
|
||||
}
|
||||
if (blogic_probeinfo_count < BLOGIC_MAX_ADAPTERS) {
|
||||
struct blogic_probeinfo *probeinfo =
|
||||
|
@ -987,11 +987,11 @@ static int __init blogic_init_fp_probeinfo(struct blogic_adapter *adapter)
|
|||
probeinfo->pci_device = pci_dev_get(pci_device);
|
||||
fpcount++;
|
||||
} else
|
||||
blogic_warn("BusLogic: Too many Host Adapters " "detected\n", NULL);
|
||||
blogic_warn("BusLogic: Too many Host Adapters detected\n", NULL);
|
||||
#else
|
||||
blogic_err("BusLogic: FlashPoint Host Adapter detected at " "PCI Bus %d Device %d\n", NULL, bus, device);
|
||||
blogic_err("BusLogic: I/O Address 0x%X PCI Address 0x%X, irq %d, " "but FlashPoint\n", NULL, io_addr, pci_addr, irq_ch);
|
||||
blogic_err("BusLogic: support was omitted in this kernel " "configuration.\n", NULL);
|
||||
blogic_err("BusLogic: FlashPoint Host Adapter detected at PCI Bus %d Device %d\n", NULL, bus, device);
|
||||
blogic_err("BusLogic: I/O Address 0x%lX PCI Address 0x%lX, irq %d, but FlashPoint\n", NULL, io_addr, pci_addr, irq_ch);
|
||||
blogic_err("BusLogic: support was omitted in this kernel configuration.\n", NULL);
|
||||
#endif
|
||||
}
|
||||
/*
|
||||
|
@ -1099,9 +1099,9 @@ static bool blogic_failure(struct blogic_adapter *adapter, char *msg)
|
|||
if (adapter->adapter_bus_type == BLOGIC_PCI_BUS) {
|
||||
blogic_err("While configuring BusLogic PCI Host Adapter at\n",
|
||||
adapter);
|
||||
blogic_err("Bus %d Device %d I/O Address 0x%X PCI Address 0x%X:\n", adapter, adapter->bus, adapter->dev, adapter->io_addr, adapter->pci_addr);
|
||||
blogic_err("Bus %d Device %d I/O Address 0x%lX PCI Address 0x%lX:\n", adapter, adapter->bus, adapter->dev, adapter->io_addr, adapter->pci_addr);
|
||||
} else
|
||||
blogic_err("While configuring BusLogic Host Adapter at " "I/O Address 0x%X:\n", adapter, adapter->io_addr);
|
||||
blogic_err("While configuring BusLogic Host Adapter at I/O Address 0x%lX:\n", adapter, adapter->io_addr);
|
||||
blogic_err("%s FAILED - DETACHING\n", adapter, msg);
|
||||
if (blogic_cmd_failure_reason != NULL)
|
||||
blogic_err("ADDITIONAL FAILURE INFO - %s\n", adapter,
|
||||
|
@ -1129,13 +1129,13 @@ static bool __init blogic_probe(struct blogic_adapter *adapter)
|
|||
fpinfo->present = false;
|
||||
if (!(FlashPoint_ProbeHostAdapter(fpinfo) == 0 &&
|
||||
fpinfo->present)) {
|
||||
blogic_err("BusLogic: FlashPoint Host Adapter detected at " "PCI Bus %d Device %d\n", adapter, adapter->bus, adapter->dev);
|
||||
blogic_err("BusLogic: I/O Address 0x%X PCI Address 0x%X, " "but FlashPoint\n", adapter, adapter->io_addr, adapter->pci_addr);
|
||||
blogic_err("BusLogic: FlashPoint Host Adapter detected at PCI Bus %d Device %d\n", adapter, adapter->bus, adapter->dev);
|
||||
blogic_err("BusLogic: I/O Address 0x%lX PCI Address 0x%lX, but FlashPoint\n", adapter, adapter->io_addr, adapter->pci_addr);
|
||||
blogic_err("BusLogic: Probe Function failed to validate it.\n", adapter);
|
||||
return false;
|
||||
}
|
||||
if (blogic_global_options.trace_probe)
|
||||
blogic_notice("BusLogic_Probe(0x%X): FlashPoint Found\n", adapter, adapter->io_addr);
|
||||
blogic_notice("BusLogic_Probe(0x%lX): FlashPoint Found\n", adapter, adapter->io_addr);
|
||||
/*
|
||||
Indicate the Host Adapter Probe completed successfully.
|
||||
*/
|
||||
|
@ -1152,7 +1152,7 @@ static bool __init blogic_probe(struct blogic_adapter *adapter)
|
|||
intreg.all = blogic_rdint(adapter);
|
||||
georeg.all = blogic_rdgeom(adapter);
|
||||
if (blogic_global_options.trace_probe)
|
||||
blogic_notice("BusLogic_Probe(0x%X): Status 0x%02X, Interrupt 0x%02X, " "Geometry 0x%02X\n", adapter, adapter->io_addr, statusreg.all, intreg.all, georeg.all);
|
||||
blogic_notice("BusLogic_Probe(0x%lX): Status 0x%02X, Interrupt 0x%02X, Geometry 0x%02X\n", adapter, adapter->io_addr, statusreg.all, intreg.all, georeg.all);
|
||||
if (statusreg.all == 0 || statusreg.sr.diag_active ||
|
||||
statusreg.sr.cmd_param_busy || statusreg.sr.rsvd ||
|
||||
statusreg.sr.cmd_invalid || intreg.ir.rsvd != 0)
|
||||
|
@ -1231,7 +1231,7 @@ static bool blogic_hwreset(struct blogic_adapter *adapter, bool hard_reset)
|
|||
udelay(100);
|
||||
}
|
||||
if (blogic_global_options.trace_hw_reset)
|
||||
blogic_notice("BusLogic_HardwareReset(0x%X): Diagnostic Active, " "Status 0x%02X\n", adapter, adapter->io_addr, statusreg.all);
|
||||
blogic_notice("BusLogic_HardwareReset(0x%lX): Diagnostic Active, Status 0x%02X\n", adapter, adapter->io_addr, statusreg.all);
|
||||
if (timeout < 0)
|
||||
return false;
|
||||
/*
|
||||
|
@ -1251,7 +1251,7 @@ static bool blogic_hwreset(struct blogic_adapter *adapter, bool hard_reset)
|
|||
udelay(100);
|
||||
}
|
||||
if (blogic_global_options.trace_hw_reset)
|
||||
blogic_notice("BusLogic_HardwareReset(0x%X): Diagnostic Completed, " "Status 0x%02X\n", adapter, adapter->io_addr, statusreg.all);
|
||||
blogic_notice("BusLogic_HardwareReset(0x%lX): Diagnostic Completed, Status 0x%02X\n", adapter, adapter->io_addr, statusreg.all);
|
||||
if (timeout < 0)
|
||||
return false;
|
||||
/*
|
||||
|
@ -1267,7 +1267,7 @@ static bool blogic_hwreset(struct blogic_adapter *adapter, bool hard_reset)
|
|||
udelay(100);
|
||||
}
|
||||
if (blogic_global_options.trace_hw_reset)
|
||||
blogic_notice("BusLogic_HardwareReset(0x%X): Host Adapter Ready, " "Status 0x%02X\n", adapter, adapter->io_addr, statusreg.all);
|
||||
blogic_notice("BusLogic_HardwareReset(0x%lX): Host Adapter Ready, Status 0x%02X\n", adapter, adapter->io_addr, statusreg.all);
|
||||
if (timeout < 0)
|
||||
return false;
|
||||
/*
|
||||
|
@ -1323,7 +1323,7 @@ static bool __init blogic_checkadapter(struct blogic_adapter *adapter)
|
|||
Provide tracing information if requested and return.
|
||||
*/
|
||||
if (blogic_global_options.trace_probe)
|
||||
blogic_notice("BusLogic_Check(0x%X): MultiMaster %s\n", adapter,
|
||||
blogic_notice("BusLogic_Check(0x%lX): MultiMaster %s\n", adapter,
|
||||
adapter->io_addr,
|
||||
(result ? "Found" : "Not Found"));
|
||||
return result;
|
||||
|
@ -1836,7 +1836,7 @@ static bool __init blogic_reportconfig(struct blogic_adapter *adapter)
|
|||
int tgt_id;
|
||||
|
||||
blogic_info("Configuring BusLogic Model %s %s%s%s%s SCSI Host Adapter\n", adapter, adapter->model, blogic_adapter_busnames[adapter->adapter_bus_type], (adapter->wide ? " Wide" : ""), (adapter->differential ? " Differential" : ""), (adapter->ultra ? " Ultra" : ""));
|
||||
blogic_info(" Firmware Version: %s, I/O Address: 0x%X, " "IRQ Channel: %d/%s\n", adapter, adapter->fw_ver, adapter->io_addr, adapter->irq_ch, (adapter->level_int ? "Level" : "Edge"));
|
||||
blogic_info(" Firmware Version: %s, I/O Address: 0x%lX, IRQ Channel: %d/%s\n", adapter, adapter->fw_ver, adapter->io_addr, adapter->irq_ch, (adapter->level_int ? "Level" : "Edge"));
|
||||
if (adapter->adapter_bus_type != BLOGIC_PCI_BUS) {
|
||||
blogic_info(" DMA Channel: ", adapter);
|
||||
if (adapter->dma_ch > 0)
|
||||
|
@ -1844,7 +1844,7 @@ static bool __init blogic_reportconfig(struct blogic_adapter *adapter)
|
|||
else
|
||||
blogic_info("None, ", adapter);
|
||||
if (adapter->bios_addr > 0)
|
||||
blogic_info("BIOS Address: 0x%X, ", adapter,
|
||||
blogic_info("BIOS Address: 0x%lX, ", adapter,
|
||||
adapter->bios_addr);
|
||||
else
|
||||
blogic_info("BIOS Address: None, ", adapter);
|
||||
|
@ -1852,7 +1852,7 @@ static bool __init blogic_reportconfig(struct blogic_adapter *adapter)
|
|||
blogic_info(" PCI Bus: %d, Device: %d, Address: ", adapter,
|
||||
adapter->bus, adapter->dev);
|
||||
if (adapter->pci_addr > 0)
|
||||
blogic_info("0x%X, ", adapter, adapter->pci_addr);
|
||||
blogic_info("0x%lX, ", adapter, adapter->pci_addr);
|
||||
else
|
||||
blogic_info("Unassigned, ", adapter);
|
||||
}
|
||||
|
@ -1932,10 +1932,10 @@ static bool __init blogic_reportconfig(struct blogic_adapter *adapter)
|
|||
blogic_info(" Disconnect/Reconnect: %s, Tagged Queuing: %s\n", adapter,
|
||||
discon_msg, tagq_msg);
|
||||
if (blogic_multimaster_type(adapter)) {
|
||||
blogic_info(" Scatter/Gather Limit: %d of %d segments, " "Mailboxes: %d\n", adapter, adapter->drvr_sglimit, adapter->adapter_sglimit, adapter->mbox_count);
|
||||
blogic_info(" Driver Queue Depth: %d, " "Host Adapter Queue Depth: %d\n", adapter, adapter->drvr_qdepth, adapter->adapter_qdepth);
|
||||
blogic_info(" Scatter/Gather Limit: %d of %d segments, Mailboxes: %d\n", adapter, adapter->drvr_sglimit, adapter->adapter_sglimit, adapter->mbox_count);
|
||||
blogic_info(" Driver Queue Depth: %d, Host Adapter Queue Depth: %d\n", adapter, adapter->drvr_qdepth, adapter->adapter_qdepth);
|
||||
} else
|
||||
blogic_info(" Driver Queue Depth: %d, " "Scatter/Gather Limit: %d segments\n", adapter, adapter->drvr_qdepth, adapter->drvr_sglimit);
|
||||
blogic_info(" Driver Queue Depth: %d, Scatter/Gather Limit: %d segments\n", adapter, adapter->drvr_qdepth, adapter->drvr_sglimit);
|
||||
blogic_info(" Tagged Queue Depth: ", adapter);
|
||||
common_tagq_depth = true;
|
||||
for (tgt_id = 1; tgt_id < adapter->maxdev; tgt_id++)
|
||||
|
@ -2717,7 +2717,7 @@ static void blogic_scan_inbox(struct blogic_adapter *adapter)
|
|||
then there is most likely a bug in
|
||||
the Host Adapter firmware.
|
||||
*/
|
||||
blogic_warn("Illegal CCB #%ld status %d in " "Incoming Mailbox\n", adapter, ccb->serial, ccb->status);
|
||||
blogic_warn("Illegal CCB #%ld status %d in Incoming Mailbox\n", adapter, ccb->serial, ccb->status);
|
||||
}
|
||||
}
|
||||
next_inbox->comp_code = BLOGIC_INBOX_FREE;
|
||||
|
@ -2752,7 +2752,7 @@ static void blogic_process_ccbs(struct blogic_adapter *adapter)
|
|||
if (ccb->opcode == BLOGIC_BDR) {
|
||||
int tgt_id = ccb->tgt_id;
|
||||
|
||||
blogic_warn("Bus Device Reset CCB #%ld to Target " "%d Completed\n", adapter, ccb->serial, tgt_id);
|
||||
blogic_warn("Bus Device Reset CCB #%ld to Target %d Completed\n", adapter, ccb->serial, tgt_id);
|
||||
blogic_inc_count(&adapter->tgt_stats[tgt_id].bdr_done);
|
||||
adapter->tgt_flags[tgt_id].tagq_active = false;
|
||||
adapter->cmds_since_rst[tgt_id] = 0;
|
||||
|
@ -2829,7 +2829,7 @@ static void blogic_process_ccbs(struct blogic_adapter *adapter)
|
|||
if (blogic_global_options.trace_err) {
|
||||
int i;
|
||||
blogic_notice("CCB #%ld Target %d: Result %X Host "
|
||||
"Adapter Status %02X " "Target Status %02X\n", adapter, ccb->serial, ccb->tgt_id, command->result, ccb->adapter_status, ccb->tgt_status);
|
||||
"Adapter Status %02X Target Status %02X\n", adapter, ccb->serial, ccb->tgt_id, command->result, ccb->adapter_status, ccb->tgt_status);
|
||||
blogic_notice("CDB ", adapter);
|
||||
for (i = 0; i < ccb->cdblen; i++)
|
||||
blogic_notice(" %02X", adapter, ccb->cdb[i]);
|
||||
|
@ -3203,12 +3203,12 @@ static int blogic_qcmd_lck(struct scsi_cmnd *command,
|
|||
*/
|
||||
if (!blogic_write_outbox(adapter, BLOGIC_MBOX_START, ccb)) {
|
||||
spin_unlock_irq(adapter->scsi_host->host_lock);
|
||||
blogic_warn("Unable to write Outgoing Mailbox - " "Pausing for 1 second\n", adapter);
|
||||
blogic_warn("Unable to write Outgoing Mailbox - Pausing for 1 second\n", adapter);
|
||||
blogic_delay(1);
|
||||
spin_lock_irq(adapter->scsi_host->host_lock);
|
||||
if (!blogic_write_outbox(adapter, BLOGIC_MBOX_START,
|
||||
ccb)) {
|
||||
blogic_warn("Still unable to write Outgoing Mailbox - " "Host Adapter Dead?\n", adapter);
|
||||
blogic_warn("Still unable to write Outgoing Mailbox - Host Adapter Dead?\n", adapter);
|
||||
blogic_dealloc_ccb(ccb, 1);
|
||||
command->result = DID_ERROR << 16;
|
||||
command->scsi_done(command);
|
||||
|
@ -3443,8 +3443,8 @@ static int blogic_diskparam(struct scsi_device *sdev, struct block_device *dev,
|
|||
if (diskparam->cylinders != saved_cyl)
|
||||
blogic_warn("Adopting Geometry %d/%d from Partition Table\n", adapter, diskparam->heads, diskparam->sectors);
|
||||
} else if (part_end_head > 0 || part_end_sector > 0) {
|
||||
blogic_warn("Warning: Partition Table appears to " "have Geometry %d/%d which is\n", adapter, part_end_head + 1, part_end_sector);
|
||||
blogic_warn("not compatible with current BusLogic " "Host Adapter Geometry %d/%d\n", adapter, diskparam->heads, diskparam->sectors);
|
||||
blogic_warn("Warning: Partition Table appears to have Geometry %d/%d which is\n", adapter, part_end_head + 1, part_end_sector);
|
||||
blogic_warn("not compatible with current BusLogic Host Adapter Geometry %d/%d\n", adapter, diskparam->heads, diskparam->sectors);
|
||||
}
|
||||
}
|
||||
kfree(buf);
|
||||
|
@ -3689,7 +3689,7 @@ static int __init blogic_parseopts(char *options)
|
|||
blogic_probe_options.probe134 = true;
|
||||
break;
|
||||
default:
|
||||
blogic_err("BusLogic: Invalid Driver Options " "(invalid I/O Address 0x%X)\n", NULL, io_addr);
|
||||
blogic_err("BusLogic: Invalid Driver Options (invalid I/O Address 0x%lX)\n", NULL, io_addr);
|
||||
return 0;
|
||||
}
|
||||
} else if (blogic_parse(&options, "NoProbeISA"))
|
||||
|
@ -3710,7 +3710,7 @@ static int __init blogic_parseopts(char *options)
|
|||
for (tgt_id = 0; tgt_id < BLOGIC_MAXDEV; tgt_id++) {
|
||||
unsigned short qdepth = simple_strtoul(options, &options, 0);
|
||||
if (qdepth > BLOGIC_MAX_TAG_DEPTH) {
|
||||
blogic_err("BusLogic: Invalid Driver Options " "(invalid Queue Depth %d)\n", NULL, qdepth);
|
||||
blogic_err("BusLogic: Invalid Driver Options (invalid Queue Depth %d)\n", NULL, qdepth);
|
||||
return 0;
|
||||
}
|
||||
drvr_opts->qdepth[tgt_id] = qdepth;
|
||||
|
@ -3719,12 +3719,12 @@ static int __init blogic_parseopts(char *options)
|
|||
else if (*options == ']')
|
||||
break;
|
||||
else {
|
||||
blogic_err("BusLogic: Invalid Driver Options " "(',' or ']' expected at '%s')\n", NULL, options);
|
||||
blogic_err("BusLogic: Invalid Driver Options (',' or ']' expected at '%s')\n", NULL, options);
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
if (*options != ']') {
|
||||
blogic_err("BusLogic: Invalid Driver Options " "(']' expected at '%s')\n", NULL, options);
|
||||
blogic_err("BusLogic: Invalid Driver Options (']' expected at '%s')\n", NULL, options);
|
||||
return 0;
|
||||
} else
|
||||
options++;
|
||||
|
@ -3732,7 +3732,7 @@ static int __init blogic_parseopts(char *options)
|
|||
unsigned short qdepth = simple_strtoul(options, &options, 0);
|
||||
if (qdepth == 0 ||
|
||||
qdepth > BLOGIC_MAX_TAG_DEPTH) {
|
||||
blogic_err("BusLogic: Invalid Driver Options " "(invalid Queue Depth %d)\n", NULL, qdepth);
|
||||
blogic_err("BusLogic: Invalid Driver Options (invalid Queue Depth %d)\n", NULL, qdepth);
|
||||
return 0;
|
||||
}
|
||||
drvr_opts->common_qdepth = qdepth;
|
||||
|
@ -3778,7 +3778,7 @@ static int __init blogic_parseopts(char *options)
|
|||
unsigned short bus_settle_time =
|
||||
simple_strtoul(options, &options, 0);
|
||||
if (bus_settle_time > 5 * 60) {
|
||||
blogic_err("BusLogic: Invalid Driver Options " "(invalid Bus Settle Time %d)\n", NULL, bus_settle_time);
|
||||
blogic_err("BusLogic: Invalid Driver Options (invalid Bus Settle Time %d)\n", NULL, bus_settle_time);
|
||||
return 0;
|
||||
}
|
||||
drvr_opts->bus_settle_time = bus_settle_time;
|
||||
|
@ -3803,14 +3803,14 @@ static int __init blogic_parseopts(char *options)
|
|||
if (*options == ',')
|
||||
options++;
|
||||
else if (*options != ';' && *options != '\0') {
|
||||
blogic_err("BusLogic: Unexpected Driver Option '%s' " "ignored\n", NULL, options);
|
||||
blogic_err("BusLogic: Unexpected Driver Option '%s' ignored\n", NULL, options);
|
||||
*options = '\0';
|
||||
}
|
||||
}
|
||||
if (!(blogic_drvr_options_count == 0 ||
|
||||
blogic_probeinfo_count == 0 ||
|
||||
blogic_drvr_options_count == blogic_probeinfo_count)) {
|
||||
blogic_err("BusLogic: Invalid Driver Options " "(all or no I/O Addresses must be specified)\n", NULL);
|
||||
blogic_err("BusLogic: Invalid Driver Options (all or no I/O Addresses must be specified)\n", NULL);
|
||||
return 0;
|
||||
}
|
||||
/*
|
||||
|
@ -3864,7 +3864,7 @@ static int __init blogic_setup(char *str)
|
|||
(void) get_options(str, ARRAY_SIZE(ints), ints);
|
||||
|
||||
if (ints[0] != 0) {
|
||||
blogic_err("BusLogic: Obsolete Command Line Entry " "Format Ignored\n", NULL);
|
||||
blogic_err("BusLogic: Obsolete Command Line Entry Format Ignored\n", NULL);
|
||||
return 0;
|
||||
}
|
||||
if (str == NULL || *str == '\0')
|
||||
|
|
|
@ -2314,7 +2314,7 @@ ahc_find_syncrate(struct ahc_softc *ahc, u_int *period,
|
|||
* At some speeds, we only support
|
||||
* ST transfers.
|
||||
*/
|
||||
if ((syncrate->sxfr_u2 & ST_SXFR) != 0)
|
||||
if ((syncrate->sxfr_u2 & ST_SXFR) != 0)
|
||||
*ppr_options &= ~MSG_EXT_PPR_DT_REQ;
|
||||
break;
|
||||
}
|
||||
|
|
|
@ -54,6 +54,9 @@ static struct scsi_host_template aic94xx_sht = {
|
|||
.eh_target_reset_handler = sas_eh_target_reset_handler,
|
||||
.target_destroy = sas_target_destroy,
|
||||
.ioctl = sas_ioctl,
|
||||
#ifdef CONFIG_COMPAT
|
||||
.compat_ioctl = sas_ioctl,
|
||||
#endif
|
||||
.track_queue_depth = 1,
|
||||
};
|
||||
|
||||
|
|
|
@ -872,6 +872,10 @@ static long ch_ioctl_compat(struct file * file,
|
|||
unsigned int cmd, unsigned long arg)
|
||||
{
|
||||
scsi_changer *ch = file->private_data;
|
||||
int retval = scsi_ioctl_block_when_processing_errors(ch->device, cmd,
|
||||
file->f_flags & O_NDELAY);
|
||||
if (retval)
|
||||
return retval;
|
||||
|
||||
switch (cmd) {
|
||||
case CHIOGPARAMS:
|
||||
|
@ -883,7 +887,7 @@ static long ch_ioctl_compat(struct file * file,
|
|||
case CHIOINITELEM:
|
||||
case CHIOSVOLTAG:
|
||||
/* compatible */
|
||||
return ch_ioctl(file, cmd, arg);
|
||||
return ch_ioctl(file, cmd, (unsigned long)compat_ptr(arg));
|
||||
case CHIOGSTATUS32:
|
||||
{
|
||||
struct changer_element_status32 ces32;
|
||||
|
@ -898,8 +902,7 @@ static long ch_ioctl_compat(struct file * file,
|
|||
return ch_gstatus(ch, ces32.ces_type, data);
|
||||
}
|
||||
default:
|
||||
// return scsi_ioctl_compat(ch->device, cmd, (void*)arg);
|
||||
return -ENOIOCTLCMD;
|
||||
return scsi_compat_ioctl(ch->device, cmd, compat_ptr(arg));
|
||||
|
||||
}
|
||||
}
|
||||
|
|
|
@ -1383,7 +1383,7 @@ csio_device_reset(struct device *dev,
|
|||
return -EINVAL;
|
||||
|
||||
/* Delete NPIV lnodes */
|
||||
csio_lnodes_exit(hw, 1);
|
||||
csio_lnodes_exit(hw, 1);
|
||||
|
||||
/* Block upper IOs */
|
||||
csio_lnodes_block_request(hw);
|
||||
|
|
|
@ -243,8 +243,6 @@ static void esp_set_all_config3(struct esp *esp, u8 val)
|
|||
/* Reset the ESP chip, _not_ the SCSI bus. */
|
||||
static void esp_reset_esp(struct esp *esp)
|
||||
{
|
||||
u8 family_code, version;
|
||||
|
||||
/* Now reset the ESP chip */
|
||||
scsi_esp_cmd(esp, ESP_CMD_RC);
|
||||
scsi_esp_cmd(esp, ESP_CMD_NULL | ESP_CMD_DMA);
|
||||
|
@ -257,14 +255,19 @@ static void esp_reset_esp(struct esp *esp)
|
|||
*/
|
||||
esp->max_period = ((35 * esp->ccycle) / 1000);
|
||||
if (esp->rev == FAST) {
|
||||
version = esp_read8(ESP_UID);
|
||||
family_code = (version & 0xf8) >> 3;
|
||||
if (family_code == 0x02)
|
||||
u8 family_code = ESP_FAMILY(esp_read8(ESP_UID));
|
||||
|
||||
if (family_code == ESP_UID_F236) {
|
||||
esp->rev = FAS236;
|
||||
else if (family_code == 0x0a)
|
||||
} else if (family_code == ESP_UID_HME) {
|
||||
esp->rev = FASHME; /* Version is usually '5'. */
|
||||
else
|
||||
} else if (family_code == ESP_UID_FSC) {
|
||||
esp->rev = FSC;
|
||||
/* Enable Active Negation */
|
||||
esp_write8(ESP_CONFIG4_RADE, ESP_CFG4);
|
||||
} else {
|
||||
esp->rev = FAS100A;
|
||||
}
|
||||
esp->min_period = ((4 * esp->ccycle) / 1000);
|
||||
} else {
|
||||
esp->min_period = ((5 * esp->ccycle) / 1000);
|
||||
|
@ -308,7 +311,7 @@ static void esp_reset_esp(struct esp *esp)
|
|||
|
||||
case FAS236:
|
||||
case PCSCSI:
|
||||
/* Fast 236, AM53c974 or HME */
|
||||
case FSC:
|
||||
esp_write8(esp->config2, ESP_CFG2);
|
||||
if (esp->rev == FASHME) {
|
||||
u8 cfg3 = esp->target[0].esp_config3;
|
||||
|
@ -2373,10 +2376,11 @@ static const char *esp_chip_names[] = {
|
|||
"ESP100A",
|
||||
"ESP236",
|
||||
"FAS236",
|
||||
"AM53C974",
|
||||
"53CF9x-2",
|
||||
"FAS100A",
|
||||
"FAST",
|
||||
"FASHME",
|
||||
"AM53C974",
|
||||
};
|
||||
|
||||
static struct scsi_transport_template *esp_transport_template;
|
||||
|
|
|
@ -78,12 +78,14 @@
|
|||
#define ESP_CONFIG3_IMS 0x80 /* ID msg chk'ng (esp/fas236) */
|
||||
#define ESP_CONFIG3_OBPUSH 0x80 /* Push odd-byte to dma (hme) */
|
||||
|
||||
/* ESP config register 4 read-write, found only on am53c974 chips */
|
||||
#define ESP_CONFIG4_RADE 0x04 /* Active negation */
|
||||
#define ESP_CONFIG4_RAE 0x08 /* Active negation on REQ and ACK */
|
||||
#define ESP_CONFIG4_PWD 0x20 /* Reduced power feature */
|
||||
#define ESP_CONFIG4_GE0 0x40 /* Glitch eater bit 0 */
|
||||
#define ESP_CONFIG4_GE1 0x80 /* Glitch eater bit 1 */
|
||||
/* ESP config register 4 read-write */
|
||||
#define ESP_CONFIG4_BBTE 0x01 /* Back-to-back transfers (fsc) */
|
||||
#define ESP_CONGIG4_TEST 0x02 /* Transfer counter test mode (fsc) */
|
||||
#define ESP_CONFIG4_RADE 0x04 /* Active negation (am53c974/fsc) */
|
||||
#define ESP_CONFIG4_RAE 0x08 /* Act. negation REQ/ACK (am53c974) */
|
||||
#define ESP_CONFIG4_PWD 0x20 /* Reduced power feature (am53c974) */
|
||||
#define ESP_CONFIG4_GE0 0x40 /* Glitch eater bit 0 (am53c974) */
|
||||
#define ESP_CONFIG4_GE1 0x80 /* Glitch eater bit 1 (am53c974) */
|
||||
|
||||
#define ESP_CONFIG_GE_12NS (0)
|
||||
#define ESP_CONFIG_GE_25NS (ESP_CONFIG_GE1)
|
||||
|
@ -209,10 +211,15 @@
|
|||
#define ESP_TEST_TS 0x04 /* Tristate test mode */
|
||||
|
||||
/* ESP unique ID register read-only, found on fas236+fas100a only */
|
||||
#define ESP_UID_FAM 0xf8 /* ESP family bitmask */
|
||||
|
||||
#define ESP_FAMILY(uid) (((uid) & ESP_UID_FAM) >> 3)
|
||||
|
||||
/* Values for the ESP family bits */
|
||||
#define ESP_UID_F100A 0x00 /* ESP FAS100A */
|
||||
#define ESP_UID_F236 0x02 /* ESP FAS236 */
|
||||
#define ESP_UID_REV 0x07 /* ESP revision */
|
||||
#define ESP_UID_FAM 0xf8 /* ESP family */
|
||||
#define ESP_UID_HME 0x0a /* FAS HME */
|
||||
#define ESP_UID_FSC 0x14 /* NCR/Symbios Logic 53CF9x-2 */
|
||||
|
||||
/* ESP fifo flags register read-only */
|
||||
/* Note that the following implies a 16 byte FIFO on the ESP. */
|
||||
|
@ -257,15 +264,17 @@ struct esp_cmd_priv {
|
|||
};
|
||||
#define ESP_CMD_PRIV(CMD) ((struct esp_cmd_priv *)(&(CMD)->SCp))
|
||||
|
||||
/* NOTE: this enum is ordered based on chip features! */
|
||||
enum esp_rev {
|
||||
ESP100 = 0x00, /* NCR53C90 - very broken */
|
||||
ESP100A = 0x01, /* NCR53C90A */
|
||||
ESP236 = 0x02,
|
||||
FAS236 = 0x03,
|
||||
FAS100A = 0x04,
|
||||
FAST = 0x05,
|
||||
FASHME = 0x06,
|
||||
PCSCSI = 0x07, /* AM53c974 */
|
||||
ESP100, /* NCR53C90 - very broken */
|
||||
ESP100A, /* NCR53C90A */
|
||||
ESP236,
|
||||
FAS236,
|
||||
PCSCSI, /* AM53c974 */
|
||||
FSC, /* NCR/Symbios Logic 53CF9x-2 */
|
||||
FAS100A,
|
||||
FAST,
|
||||
FASHME,
|
||||
};
|
||||
|
||||
struct esp_cmd_entry {
|
||||
|
|
|
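
The reordered esp_rev enum above drops the explicit values and, per the new NOTE, is sorted by chip capability, so (as a rough illustration, not a quote of the driver) feature tests can use range comparisons against it instead of enumerating every chip:

/* Illustrative only: 'fast' here means FAS236 and everything ordered
 * after it in the esp_rev enum (PCSCSI, FSC, FAS100A, FAST, FASHME). */
static inline bool example_esp_is_fast(const struct esp *esp)
{
	return esp->rev >= FAS236;
}
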
@ -180,10 +180,10 @@ struct hisi_sas_port {
|
|||
|
||||
struct hisi_sas_cq {
|
||||
struct hisi_hba *hisi_hba;
|
||||
const struct cpumask *pci_irq_mask;
|
||||
struct tasklet_struct tasklet;
|
||||
const struct cpumask *irq_mask;
|
||||
int rd_point;
|
||||
int id;
|
||||
int irq_no;
|
||||
};
|
||||
|
||||
struct hisi_sas_dq {
|
||||
|
@ -627,7 +627,7 @@ extern void hisi_sas_slot_task_free(struct hisi_hba *hisi_hba,
|
|||
extern void hisi_sas_init_mem(struct hisi_hba *hisi_hba);
|
||||
extern void hisi_sas_rst_work_handler(struct work_struct *work);
|
||||
extern void hisi_sas_sync_rst_work_handler(struct work_struct *work);
|
||||
extern void hisi_sas_kill_tasklets(struct hisi_hba *hisi_hba);
|
||||
extern void hisi_sas_sync_irqs(struct hisi_hba *hisi_hba);
|
||||
extern void hisi_sas_phy_oob_ready(struct hisi_hba *hisi_hba, int phy_no);
|
||||
extern bool hisi_sas_notify_phy_event(struct hisi_sas_phy *phy,
|
||||
enum hisi_sas_phy_event event);
|
||||
|
|
|
@ -163,13 +163,11 @@ static void hisi_sas_slot_index_clear(struct hisi_hba *hisi_hba, int slot_idx)
|
|||
|
||||
static void hisi_sas_slot_index_free(struct hisi_hba *hisi_hba, int slot_idx)
|
||||
{
|
||||
unsigned long flags;
|
||||
|
||||
if (hisi_hba->hw->slot_index_alloc ||
|
||||
slot_idx >= HISI_SAS_UNRESERVED_IPTT) {
|
||||
spin_lock_irqsave(&hisi_hba->lock, flags);
|
||||
spin_lock(&hisi_hba->lock);
|
||||
hisi_sas_slot_index_clear(hisi_hba, slot_idx);
|
||||
spin_unlock_irqrestore(&hisi_hba->lock, flags);
|
||||
spin_unlock(&hisi_hba->lock);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -185,12 +183,11 @@ static int hisi_sas_slot_index_alloc(struct hisi_hba *hisi_hba,
|
|||
{
|
||||
int index;
|
||||
void *bitmap = hisi_hba->slot_index_tags;
|
||||
unsigned long flags;
|
||||
|
||||
if (scsi_cmnd)
|
||||
return scsi_cmnd->request->tag;
|
||||
|
||||
spin_lock_irqsave(&hisi_hba->lock, flags);
|
||||
spin_lock(&hisi_hba->lock);
|
||||
index = find_next_zero_bit(bitmap, hisi_hba->slot_index_count,
|
||||
hisi_hba->last_slot_index + 1);
|
||||
if (index >= hisi_hba->slot_index_count) {
|
||||
|
@ -198,13 +195,13 @@ static int hisi_sas_slot_index_alloc(struct hisi_hba *hisi_hba,
|
|||
hisi_hba->slot_index_count,
|
||||
HISI_SAS_UNRESERVED_IPTT);
|
||||
if (index >= hisi_hba->slot_index_count) {
|
||||
spin_unlock_irqrestore(&hisi_hba->lock, flags);
|
||||
spin_unlock(&hisi_hba->lock);
|
||||
return -SAS_QUEUE_FULL;
|
||||
}
|
||||
}
|
||||
hisi_sas_slot_index_set(hisi_hba, index);
|
||||
hisi_hba->last_slot_index = index;
|
||||
spin_unlock_irqrestore(&hisi_hba->lock, flags);
|
||||
spin_unlock(&hisi_hba->lock);
|
||||
|
||||
return index;
|
||||
}
|
||||
|
@@ -220,7 +217,6 @@ static void hisi_sas_slot_index_init(struct hisi_hba *hisi_hba)
 void hisi_sas_slot_task_free(struct hisi_hba *hisi_hba, struct sas_task *task,
 			     struct hisi_sas_slot *slot)
 {
-	unsigned long flags;
 	int device_id = slot->device_id;
 	struct hisi_sas_device *sas_dev = &hisi_hba->devices[device_id];

@@ -247,9 +243,9 @@ void hisi_sas_slot_task_free(struct hisi_hba *hisi_hba, struct sas_task *task,
 		}
 	}

-	spin_lock_irqsave(&sas_dev->lock, flags);
+	spin_lock(&sas_dev->lock);
 	list_del_init(&slot->entry);
-	spin_unlock_irqrestore(&sas_dev->lock, flags);
+	spin_unlock(&sas_dev->lock);

 	memset(slot, 0, offsetof(struct hisi_sas_slot, buf));

@@ -489,14 +485,14 @@ static int hisi_sas_task_prep(struct sas_task *task,
 	slot_idx = rc;
 	slot = &hisi_hba->slot_info[slot_idx];

-	spin_lock_irqsave(&dq->lock, flags);
+	spin_lock(&dq->lock);
 	wr_q_index = dq->wr_point;
 	dq->wr_point = (dq->wr_point + 1) % HISI_SAS_QUEUE_SLOTS;
 	list_add_tail(&slot->delivery, &dq->list);
-	spin_unlock_irqrestore(&dq->lock, flags);
-	spin_lock_irqsave(&sas_dev->lock, flags);
+	spin_unlock(&dq->lock);
+	spin_lock(&sas_dev->lock);
 	list_add_tail(&slot->entry, &sas_dev->list);
-	spin_unlock_irqrestore(&sas_dev->lock, flags);
+	spin_unlock(&sas_dev->lock);

 	dlvry_queue = dq->id;
 	dlvry_queue_slot = wr_q_index;
@@ -562,7 +558,6 @@ static int hisi_sas_task_exec(struct sas_task *task, gfp_t gfp_flags,
 {
 	u32 rc;
 	u32 pass = 0;
-	unsigned long flags;
 	struct hisi_hba *hisi_hba;
 	struct device *dev;
 	struct domain_device *device = task->dev;

@@ -606,9 +601,9 @@ static int hisi_sas_task_exec(struct sas_task *task, gfp_t gfp_flags,
 		dev_err(dev, "task exec: failed[%d]!\n", rc);

 	if (likely(pass)) {
-		spin_lock_irqsave(&dq->lock, flags);
+		spin_lock(&dq->lock);
 		hisi_hba->hw->start_delivery(dq);
-		spin_unlock_irqrestore(&dq->lock, flags);
+		spin_unlock(&dq->lock);
 	}

 	return rc;
@@ -659,12 +654,11 @@ static struct hisi_sas_device *hisi_sas_alloc_dev(struct domain_device *device)
 {
 	struct hisi_hba *hisi_hba = dev_to_hisi_hba(device);
 	struct hisi_sas_device *sas_dev = NULL;
-	unsigned long flags;
 	int last = hisi_hba->last_dev_id;
 	int first = (hisi_hba->last_dev_id + 1) % HISI_SAS_MAX_DEVICES;
 	int i;

-	spin_lock_irqsave(&hisi_hba->lock, flags);
+	spin_lock(&hisi_hba->lock);
 	for (i = first; i != last; i %= HISI_SAS_MAX_DEVICES) {
 		if (hisi_hba->devices[i].dev_type == SAS_PHY_UNUSED) {
 			int queue = i % hisi_hba->queue_count;

@@ -684,7 +678,7 @@ static struct hisi_sas_device *hisi_sas_alloc_dev(struct domain_device *device)
 		i++;
 	}
 	hisi_hba->last_dev_id = i;
-	spin_unlock_irqrestore(&hisi_hba->lock, flags);
+	spin_unlock(&hisi_hba->lock);

 	return sas_dev;
 }
@@ -1233,10 +1227,10 @@ static int hisi_sas_exec_internal_tmf_task(struct domain_device *device,
 			struct hisi_sas_cq *cq =
 			       &hisi_hba->cq[slot->dlvry_queue];
 			/*
-			 * flush tasklet to avoid free'ing task
+			 * sync irq to avoid free'ing task
 			 * before using task in IO completion
 			 */
-			tasklet_kill(&cq->tasklet);
+			synchronize_irq(cq->irq_no);
 			slot->task = NULL;
 		}

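
With the completion path moved off a tasklet, quiescing it changes from tasklet_kill() to synchronize_irq(), which waits for both the hard and threaded halves of the handler to finish before the task can be dropped. A hedged sketch of that teardown ordering; the demo_* names are placeholders, not driver symbols:

	#include <linux/interrupt.h>
	#include <linux/slab.h>

	struct demo_ctx {
		int irq_no;
		void *resource;
	};

	static void demo_teardown(struct demo_ctx *ctx)
	{
		/*
		 * There is no tasklet to kill any more; synchronize_irq()
		 * returns only after any in-flight hard or threaded handler
		 * for this line has completed, so the resource can be freed
		 * without racing against completion work.
		 */
		synchronize_irq(ctx->irq_no);
		kfree(ctx->resource);
		ctx->resource = NULL;
	}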
@@ -1626,11 +1620,11 @@ static int hisi_sas_abort_task(struct sas_task *task)

 		if (slot) {
 			/*
-			 * flush tasklet to avoid free'ing task
+			 * sync irq to avoid free'ing task
 			 * before using task in IO completion
 			 */
 			cq = &hisi_hba->cq[slot->dlvry_queue];
-			tasklet_kill(&cq->tasklet);
+			synchronize_irq(cq->irq_no);
 		}
 		spin_unlock_irqrestore(&task->task_state_lock, flags);
 		rc = TMF_RESP_FUNC_COMPLETE;

@@ -1694,10 +1688,10 @@ static int hisi_sas_abort_task(struct sas_task *task)
 		if (((rc < 0) || (rc == TMF_RESP_FUNC_FAILED)) &&
 					task->lldd_task) {
 			/*
-			 * flush tasklet to avoid free'ing task
+			 * sync irq to avoid free'ing task
 			 * before using task in IO completion
 			 */
-			tasklet_kill(&cq->tasklet);
+			synchronize_irq(cq->irq_no);
 			slot->task = NULL;
 		}
 	}
@@ -1965,14 +1959,14 @@ hisi_sas_internal_abort_task_exec(struct hisi_hba *hisi_hba, int device_id,
 	slot_idx = rc;
 	slot = &hisi_hba->slot_info[slot_idx];

-	spin_lock_irqsave(&dq->lock, flags);
+	spin_lock(&dq->lock);
 	wr_q_index = dq->wr_point;
 	dq->wr_point = (dq->wr_point + 1) % HISI_SAS_QUEUE_SLOTS;
 	list_add_tail(&slot->delivery, &dq->list);
-	spin_unlock_irqrestore(&dq->lock, flags);
-	spin_lock_irqsave(&sas_dev->lock, flags);
+	spin_unlock(&dq->lock);
+	spin_lock(&sas_dev->lock);
 	list_add_tail(&slot->entry, &sas_dev->list);
-	spin_unlock_irqrestore(&sas_dev->lock, flags);
+	spin_unlock(&sas_dev->lock);

 	dlvry_queue = dq->id;
 	dlvry_queue_slot = wr_q_index;

@@ -2001,9 +1995,9 @@ hisi_sas_internal_abort_task_exec(struct hisi_hba *hisi_hba, int device_id,
 	spin_unlock_irqrestore(&task->task_state_lock, flags);
 	WRITE_ONCE(slot->ready, 1);
 	/* send abort command to the chip */
-	spin_lock_irqsave(&dq->lock, flags);
+	spin_lock(&dq->lock);
 	hisi_hba->hw->start_delivery(dq);
-	spin_unlock_irqrestore(&dq->lock, flags);
+	spin_unlock(&dq->lock);

 	return 0;
@@ -2076,10 +2070,10 @@ _hisi_sas_internal_task_abort(struct hisi_hba *hisi_hba,
 			struct hisi_sas_cq *cq =
 			       &hisi_hba->cq[slot->dlvry_queue];
 			/*
-			 * flush tasklet to avoid free'ing task
+			 * sync irq to avoid free'ing task
 			 * before using task in IO completion
 			 */
-			tasklet_kill(&cq->tasklet);
+			synchronize_irq(cq->irq_no);
 			slot->task = NULL;
 		}
 		dev_err(dev, "internal task abort: timeout and not done.\n");

@@ -2131,7 +2125,7 @@ hisi_sas_internal_task_abort(struct hisi_hba *hisi_hba,
 	case HISI_SAS_INT_ABT_DEV:
 		for (i = 0; i < hisi_hba->cq_nvecs; i++) {
 			struct hisi_sas_cq *cq = &hisi_hba->cq[i];
-			const struct cpumask *mask = cq->pci_irq_mask;
+			const struct cpumask *mask = cq->irq_mask;

 			if (mask && !cpumask_intersects(cpu_online_mask, mask))
 				continue;
@@ -2225,17 +2219,17 @@ void hisi_sas_phy_down(struct hisi_hba *hisi_hba, int phy_no, int rdy)
 }
 EXPORT_SYMBOL_GPL(hisi_sas_phy_down);

-void hisi_sas_kill_tasklets(struct hisi_hba *hisi_hba)
+void hisi_sas_sync_irqs(struct hisi_hba *hisi_hba)
 {
 	int i;

 	for (i = 0; i < hisi_hba->cq_nvecs; i++) {
 		struct hisi_sas_cq *cq = &hisi_hba->cq[i];

-		tasklet_kill(&cq->tasklet);
+		synchronize_irq(cq->irq_no);
 	}
 }
-EXPORT_SYMBOL_GPL(hisi_sas_kill_tasklets);
+EXPORT_SYMBOL_GPL(hisi_sas_sync_irqs);

 int hisi_sas_host_reset(struct Scsi_Host *shost, int reset_type)
 {

@@ -3936,7 +3930,7 @@ void hisi_sas_debugfs_init(struct hisi_hba *hisi_hba)

 	hisi_hba->debugfs_dir = debugfs_create_dir(dev_name(dev),
 						   hisi_sas_debugfs_dir);
-	debugfs_create_file("trigger_dump", 0600,
+	debugfs_create_file("trigger_dump", 0200,
 			    hisi_hba->debugfs_dir,
 			    hisi_hba,
 			    &hisi_sas_debugfs_trigger_dump_fops);
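
The trigger_dump entry is a pure write-side trigger, so its mode loses the read bits (0600 becomes 0200). A small, hypothetical debugfs trigger of the same shape, with demo_* names standing in for real driver objects:

	#include <linux/debugfs.h>
	#include <linux/fs.h>
	#include <linux/module.h>

	static ssize_t demo_trigger_write(struct file *file, const char __user *buf,
					  size_t count, loff_t *ppos)
	{
		/* Any write fires the action; the written data is ignored. */
		pr_info("demo: dump triggered\n");
		return count;
	}

	static const struct file_operations demo_trigger_fops = {
		.owner = THIS_MODULE,
		.open  = simple_open,
		.write = demo_trigger_write,
	};

	static struct dentry *demo_dir;

	static int __init demo_init(void)
	{
		demo_dir = debugfs_create_dir("demo", NULL);
		/* 0200: write-only for root, mirroring the trigger_dump change. */
		debugfs_create_file("trigger", 0200, demo_dir, NULL,
				    &demo_trigger_fops);
		return 0;
	}

	static void __exit demo_exit(void)
	{
		debugfs_remove_recursive(demo_dir);
	}

	module_init(demo_init);
	module_exit(demo_exit);
	MODULE_LICENSE("GPL");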
@@ -1772,6 +1772,9 @@ static struct scsi_host_template sht_v1_hw = {
 	.eh_target_reset_handler = sas_eh_target_reset_handler,
 	.target_destroy		= sas_target_destroy,
 	.ioctl			= sas_ioctl,
+#ifdef CONFIG_COMPAT
+	.compat_ioctl		= sas_ioctl,
+#endif
 	.shost_attrs		= host_attrs_v1_hw,
 	.host_reset		= hisi_sas_host_reset,
 };
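
This host-template hunk, repeated for the v2 and v3 templates further down and for several other drivers in this series, wires the existing ioctl handler up as the compat handler as well. For an ordinary character device whose commands move only fixed-size data, an equivalent hookup could look like the sketch below; DEMO_GET_VALUE and the demo_* names are invented for illustration, and a real driver would encode the command with the _IOR()/_IOW() macros:

	#include <linux/fs.h>
	#include <linux/uaccess.h>
	#include <linux/module.h>

	#define DEMO_GET_VALUE	0x4401	/* hypothetical command number */

	static long demo_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
	{
		u32 value = 42;

		switch (cmd) {
		case DEMO_GET_VALUE:
			/* Only a fixed-size u32 crosses the boundary, so the
			 * layout is identical for 32-bit and 64-bit callers. */
			if (copy_to_user((void __user *)arg, &value, sizeof(value)))
				return -EFAULT;
			return 0;
		default:
			return -ENOTTY;
		}
	}

	static const struct file_operations demo_fops = {
		.owner		= THIS_MODULE,
		.unlocked_ioctl	= demo_ioctl,
	#ifdef CONFIG_COMPAT
		/* Generic wrapper that converts the compat user pointer and
		 * forwards to the native handler. */
		.compat_ioctl	= compat_ptr_ioctl,
	#endif
	};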
@@ -773,7 +773,6 @@ slot_index_alloc_quirk_v2_hw(struct hisi_hba *hisi_hba,
 	struct hisi_sas_device *sas_dev = device->lldd_dev;
 	int sata_idx = sas_dev->sata_idx;
 	int start, end;
-	unsigned long flags;

 	if (!sata_dev) {
 		/*

@@ -797,12 +796,12 @@ slot_index_alloc_quirk_v2_hw(struct hisi_hba *hisi_hba,
 		end = 64 * (sata_idx + 2);
 	}

-	spin_lock_irqsave(&hisi_hba->lock, flags);
+	spin_lock(&hisi_hba->lock);
 	while (1) {
 		start = find_next_zero_bit(bitmap,
 					hisi_hba->slot_index_count, start);
 		if (start >= end) {
-			spin_unlock_irqrestore(&hisi_hba->lock, flags);
+			spin_unlock(&hisi_hba->lock);
 			return -SAS_QUEUE_FULL;
 		}
 		/*

@@ -814,7 +813,7 @@ slot_index_alloc_quirk_v2_hw(struct hisi_hba *hisi_hba,
 	}

 	set_bit(start, bitmap);
-	spin_unlock_irqrestore(&hisi_hba->lock, flags);
+	spin_unlock(&hisi_hba->lock);
 	return start;
 }

@@ -843,9 +842,8 @@ hisi_sas_device *alloc_dev_quirk_v2_hw(struct domain_device *device)
 	struct hisi_sas_device *sas_dev = NULL;
 	int i, sata_dev = dev_is_sata(device);
 	int sata_idx = -1;
-	unsigned long flags;

-	spin_lock_irqsave(&hisi_hba->lock, flags);
+	spin_lock(&hisi_hba->lock);

 	if (sata_dev)
 		if (!sata_index_alloc_v2_hw(hisi_hba, &sata_idx))

@@ -876,7 +874,7 @@ hisi_sas_device *alloc_dev_quirk_v2_hw(struct domain_device *device)
 	}

 out:
-	spin_unlock_irqrestore(&hisi_hba->lock, flags);
+	spin_unlock(&hisi_hba->lock);

 	return sas_dev;
 }
@@ -3111,9 +3109,9 @@ static irqreturn_t fatal_axi_int_v2_hw(int irq_no, void *p)
 	return IRQ_HANDLED;
 }

-static void cq_tasklet_v2_hw(unsigned long val)
+static irqreturn_t cq_thread_v2_hw(int irq_no, void *p)
 {
-	struct hisi_sas_cq *cq = (struct hisi_sas_cq *)val;
+	struct hisi_sas_cq *cq = p;
 	struct hisi_hba *hisi_hba = cq->hisi_hba;
 	struct hisi_sas_slot *slot;
 	struct hisi_sas_itct *itct;

@@ -3181,6 +3179,8 @@ static void cq_tasklet_v2_hw(unsigned long val)
 	/* update rd_point */
 	cq->rd_point = rd_point;
 	hisi_sas_write32(hisi_hba, COMPL_Q_0_RD_PTR + (0x14 * queue), rd_point);
+
+	return IRQ_HANDLED;
 }

 static irqreturn_t cq_interrupt_v2_hw(int irq_no, void *p)

@@ -3191,9 +3191,7 @@ static irqreturn_t cq_interrupt_v2_hw(int irq_no, void *p)

 	hisi_sas_write32(hisi_hba, OQ_INT_SRC, 1 << queue);

-	tasklet_schedule(&cq->tasklet);
-
-	return IRQ_HANDLED;
+	return IRQ_WAKE_THREAD;
 }

 static irqreturn_t sata_int_v2_hw(int irq_no, void *p)

@@ -3360,18 +3358,18 @@ static int interrupt_init_v2_hw(struct hisi_hba *hisi_hba)

 	for (queue_no = 0; queue_no < hisi_hba->queue_count; queue_no++) {
 		struct hisi_sas_cq *cq = &hisi_hba->cq[queue_no];
-		struct tasklet_struct *t = &cq->tasklet;

-		irq = irq_map[queue_no + 96];
-		rc = devm_request_irq(dev, irq, cq_interrupt_v2_hw, 0,
-				      DRV_NAME " cq", cq);
+		cq->irq_no = irq_map[queue_no + 96];
+		rc = devm_request_threaded_irq(dev, cq->irq_no,
+					       cq_interrupt_v2_hw,
+					       cq_thread_v2_hw, IRQF_ONESHOT,
+					       DRV_NAME " cq", cq);
 		if (rc) {
 			dev_err(dev, "irq init: could not request cq interrupt %d, rc=%d\n",
 				irq, rc);
 			rc = -ENOENT;
 			goto err_out;
 		}
-		tasklet_init(t, cq_tasklet_v2_hw, (unsigned long)cq);
 	}

 	hisi_hba->cq_nvecs = hisi_hba->queue_count;
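
The v2 hunks above replace the per-queue tasklet with a threaded interrupt: the hard handler acknowledges the source and returns IRQ_WAKE_THREAD, and the completion-queue walk runs in the IRQ thread. A stripped-down sketch of that split, with demo_* names as placeholders rather than driver symbols:

	#include <linux/interrupt.h>
	#include <linux/device.h>
	#include <linux/io.h>

	struct demo_queue {
		int irq_no;
		void __iomem *regs;
	};

	/* Hard handler: runs in interrupt context, just acks and defers. */
	static irqreturn_t demo_hard_irq(int irq, void *p)
	{
		struct demo_queue *q = p;

		writel(1, q->regs);		/* ack the interrupt source */
		return IRQ_WAKE_THREAD;		/* wake the threaded half */
	}

	/* Threaded handler: runs in a kernel thread and may take longer. */
	static irqreturn_t demo_thread_irq(int irq, void *p)
	{
		/* drain the completion queue here */
		return IRQ_HANDLED;
	}

	static int demo_setup_irq(struct device *dev, struct demo_queue *q)
	{
		/*
		 * IRQF_ONESHOT keeps the line masked until the thread finishes,
		 * which is required when the hard handler does not fully
		 * silence the device; devm_* ties the lifetime to the device.
		 */
		return devm_request_threaded_irq(dev, q->irq_no, demo_hard_irq,
						 demo_thread_irq, IRQF_ONESHOT,
						 "demo queue", q);
	}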
@@ -3432,7 +3430,6 @@ static int soft_reset_v2_hw(struct hisi_hba *hisi_hba)

 	interrupt_disable_v2_hw(hisi_hba);
 	hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE, 0x0);
-	hisi_sas_kill_tasklets(hisi_hba);

 	hisi_sas_stop_phys(hisi_hba);

@@ -3551,6 +3548,9 @@ static struct scsi_host_template sht_v2_hw = {
 	.eh_target_reset_handler = sas_eh_target_reset_handler,
 	.target_destroy		= sas_target_destroy,
 	.ioctl			= sas_ioctl,
+#ifdef CONFIG_COMPAT
+	.compat_ioctl		= sas_ioctl,
+#endif
 	.shost_attrs		= host_attrs_v2_hw,
 	.host_reset		= hisi_sas_host_reset,
 };

@@ -3603,11 +3603,6 @@ static int hisi_sas_v2_probe(struct platform_device *pdev)

 static int hisi_sas_v2_remove(struct platform_device *pdev)
 {
-	struct sas_ha_struct *sha = platform_get_drvdata(pdev);
-	struct hisi_hba *hisi_hba = sha->lldd_ha;
-
-	hisi_sas_kill_tasklets(hisi_hba);
-
 	return hisi_sas_remove(pdev);
 }

@@ -495,6 +495,13 @@ struct hisi_sas_err_record_v3 {
 #define BASE_VECTORS_V3_HW	16
 #define MIN_AFFINE_VECTORS_V3_HW	(BASE_VECTORS_V3_HW + 1)

+#define CHNL_INT_STS_MSK	0xeeeeeeee
+#define CHNL_INT_STS_PHY_MSK	0xe
+#define CHNL_INT_STS_INT0_MSK	BIT(1)
+#define CHNL_INT_STS_INT1_MSK	BIT(2)
+#define CHNL_INT_STS_INT2_MSK	BIT(3)
+#define CHNL_WIDTH		4
+
 enum {
 	DSM_FUNC_ERR_HANDLE_MSI = 0,
 };

@@ -1819,19 +1826,19 @@ static irqreturn_t int_chnl_int_v3_hw(int irq_no, void *p)
 	int phy_no = 0;

 	irq_msk = hisi_sas_read32(hisi_hba, CHNL_INT_STATUS)
-				& 0xeeeeeeee;
+				& CHNL_INT_STS_MSK;

 	while (irq_msk) {
-		if (irq_msk & (2 << (phy_no * 4)))
+		if (irq_msk & (CHNL_INT_STS_INT0_MSK << (phy_no * CHNL_WIDTH)))
 			handle_chl_int0_v3_hw(hisi_hba, phy_no);

-		if (irq_msk & (4 << (phy_no * 4)))
+		if (irq_msk & (CHNL_INT_STS_INT1_MSK << (phy_no * CHNL_WIDTH)))
 			handle_chl_int1_v3_hw(hisi_hba, phy_no);

-		if (irq_msk & (8 << (phy_no * 4)))
+		if (irq_msk & (CHNL_INT_STS_INT2_MSK << (phy_no * CHNL_WIDTH)))
 			handle_chl_int2_v3_hw(hisi_hba, phy_no);

-		irq_msk &= ~(0xe << (phy_no * 4));
+		irq_msk &= ~(CHNL_INT_STS_PHY_MSK << (phy_no * CHNL_WIDTH));
 		phy_no++;
 	}

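
The new defines replace the magic channel-interrupt numbers, and the shift by phy_no * CHNL_WIDTH selects one 4-bit field per phy in the status word. A small standalone C illustration of the same decode loop; the macro names and the register value here are invented for the example:

	#include <stdint.h>
	#include <stdio.h>

	#define CHNL_WIDTH        4           /* status bits per channel        */
	#define CHNL_STS_PHY_MSK  0xeu        /* bits 1..3 of a channel field   */
	#define CHNL_STS_INT0_MSK (1u << 1)
	#define CHNL_STS_INT1_MSK (1u << 2)
	#define CHNL_STS_INT2_MSK (1u << 3)

	static void decode_channel_status(uint32_t irq_msk)
	{
		int chnl = 0;

		while (irq_msk) {
			if (irq_msk & (CHNL_STS_INT0_MSK << (chnl * CHNL_WIDTH)))
				printf("channel %d: int0 pending\n", chnl);
			if (irq_msk & (CHNL_STS_INT1_MSK << (chnl * CHNL_WIDTH)))
				printf("channel %d: int1 pending\n", chnl);
			if (irq_msk & (CHNL_STS_INT2_MSK << (chnl * CHNL_WIDTH)))
				printf("channel %d: int2 pending\n", chnl);

			/* clear this channel's bits and move to the next one */
			irq_msk &= ~(CHNL_STS_PHY_MSK << (chnl * CHNL_WIDTH));
			chnl++;
		}
	}

	int main(void)
	{
		decode_channel_status(0x00e2u);	/* example: ch0 int0, ch1 all three */
		return 0;
	}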
@@ -2299,9 +2306,9 @@ out:
 	return sts;
 }

-static void cq_tasklet_v3_hw(unsigned long val)
+static irqreturn_t cq_thread_v3_hw(int irq_no, void *p)
 {
-	struct hisi_sas_cq *cq = (struct hisi_sas_cq *)val;
+	struct hisi_sas_cq *cq = p;
 	struct hisi_hba *hisi_hba = cq->hisi_hba;
 	struct hisi_sas_slot *slot;
 	struct hisi_sas_complete_v3_hdr *complete_queue;

@@ -2338,6 +2345,8 @@ static void cq_tasklet_v3_hw(unsigned long val)
 	/* update rd_point */
 	cq->rd_point = rd_point;
 	hisi_sas_write32(hisi_hba, COMPL_Q_0_RD_PTR + (0x14 * queue), rd_point);
+
+	return IRQ_HANDLED;
 }

 static irqreturn_t cq_interrupt_v3_hw(int irq_no, void *p)

@@ -2348,9 +2357,7 @@ static irqreturn_t cq_interrupt_v3_hw(int irq_no, void *p)

 	hisi_sas_write32(hisi_hba, OQ_INT_SRC, 1 << queue);

-	tasklet_schedule(&cq->tasklet);
-
-	return IRQ_HANDLED;
+	return IRQ_WAKE_THREAD;
 }

 static void setup_reply_map_v3_hw(struct hisi_hba *hisi_hba, int nvecs)

@@ -2365,7 +2372,7 @@ static void setup_reply_map_v3_hw(struct hisi_hba *hisi_hba, int nvecs)
 						BASE_VECTORS_V3_HW);
 		if (!mask)
 			goto fallback;
-		cq->pci_irq_mask = mask;
+		cq->irq_mask = mask;
 		for_each_cpu(cpu, mask)
 			hisi_hba->reply_map[cpu] = queue;
 	}
@@ -2389,6 +2396,8 @@ static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
 			.pre_vectors = BASE_VECTORS_V3_HW,
 		};

+		dev_info(dev, "Enable MSI auto-affinity\n");
+
 		min_msi = MIN_AFFINE_VECTORS_V3_HW;

 		hisi_hba->reply_map = devm_kcalloc(dev, nr_cpu_ids,

@@ -2441,15 +2450,20 @@ static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
 		goto free_irq_vectors;
 	}

-	/* Init tasklets for cq only */
+	if (hisi_sas_intr_conv)
+		dev_info(dev, "Enable interrupt converge\n");
+
 	for (i = 0; i < hisi_hba->cq_nvecs; i++) {
 		struct hisi_sas_cq *cq = &hisi_hba->cq[i];
-		struct tasklet_struct *t = &cq->tasklet;
 		int nr = hisi_sas_intr_conv ? 16 : 16 + i;
-		unsigned long irqflags = hisi_sas_intr_conv ? IRQF_SHARED : 0;
-
-		rc = devm_request_irq(dev, pci_irq_vector(pdev, nr),
-				      cq_interrupt_v3_hw, irqflags,
+		unsigned long irqflags = hisi_sas_intr_conv ? IRQF_SHARED :
+							      IRQF_ONESHOT;
+
+		cq->irq_no = pci_irq_vector(pdev, nr);
+		rc = devm_request_threaded_irq(dev, cq->irq_no,
+					       cq_interrupt_v3_hw,
+					       cq_thread_v3_hw,
+					       irqflags,
 					       DRV_NAME " cq", cq);
 		if (rc) {
 			dev_err(dev, "could not request cq%d interrupt, rc=%d\n",

@@ -2457,8 +2471,6 @@ static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
 			rc = -ENOENT;
 			goto free_irq_vectors;
 		}
-
-		tasklet_init(t, cq_tasklet_v3_hw, (unsigned long)cq);
 	}

 	return 0;
@@ -2534,7 +2546,6 @@ static int disable_host_v3_hw(struct hisi_hba *hisi_hba)

 	interrupt_disable_v3_hw(hisi_hba);
 	hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE, 0x0);
-	hisi_sas_kill_tasklets(hisi_hba);

 	hisi_sas_stop_phys(hisi_hba);

@@ -2910,7 +2921,7 @@ static void debugfs_snapshot_prepare_v3_hw(struct hisi_hba *hisi_hba)

 	wait_cmds_complete_timeout_v3_hw(hisi_hba, 100, 5000);

-	hisi_sas_kill_tasklets(hisi_hba);
+	hisi_sas_sync_irqs(hisi_hba);
 }

 static void debugfs_snapshot_restore_v3_hw(struct hisi_hba *hisi_hba)

@@ -3075,6 +3086,9 @@ static struct scsi_host_template sht_v3_hw = {
 	.eh_target_reset_handler = sas_eh_target_reset_handler,
 	.target_destroy		= sas_target_destroy,
 	.ioctl			= sas_ioctl,
+#ifdef CONFIG_COMPAT
+	.compat_ioctl		= sas_ioctl,
+#endif
 	.shost_attrs		= host_attrs_v3_hw,
 	.tag_alloc_policy	= BLK_TAG_ALLOC_RR,
 	.host_reset		= hisi_sas_host_reset,

@@ -3309,7 +3323,6 @@ static void hisi_sas_v3_remove(struct pci_dev *pdev)
 	sas_remove_host(sha->core.shost);

 	hisi_sas_v3_destroy_irqs(pdev, hisi_hba);
-	hisi_sas_kill_tasklets(hisi_hba);
 	pci_release_regions(pdev);
 	pci_disable_device(pdev);
 	hisi_sas_free(hisi_hba);
@ -1877,7 +1877,6 @@ static void ibmvscsis_send_messages(struct scsi_info *vscsi)
|
|||
*/
|
||||
struct viosrp_crq *crq = (struct viosrp_crq *)&msg_hi;
|
||||
struct ibmvscsis_cmd *cmd, *nxt;
|
||||
struct iu_entry *iue;
|
||||
long rc = ADAPT_SUCCESS;
|
||||
bool retry = false;
|
||||
|
||||
|
@ -1931,8 +1930,6 @@ static void ibmvscsis_send_messages(struct scsi_info *vscsi)
|
|||
*/
|
||||
vscsi->credit += 1;
|
||||
} else {
|
||||
iue = cmd->iue;
|
||||
|
||||
crq->valid = VALID_CMD_RESP_EL;
|
||||
crq->format = cmd->rsp.format;
|
||||
|
||||
|
@ -3796,7 +3793,6 @@ static int ibmvscsis_queue_data_in(struct se_cmd *se_cmd)
|
|||
se_cmd);
|
||||
struct iu_entry *iue = cmd->iue;
|
||||
struct scsi_info *vscsi = cmd->adapter;
|
||||
char *sd;
|
||||
uint len = 0;
|
||||
int rc;
|
||||
|
||||
|
@ -3804,7 +3800,6 @@ static int ibmvscsis_queue_data_in(struct se_cmd *se_cmd)
|
|||
1);
|
||||
if (rc) {
|
||||
dev_err(&vscsi->dev, "srp_transfer_data failed: %d\n", rc);
|
||||
sd = se_cmd->sense_buffer;
|
||||
se_cmd->scsi_sense_length = 18;
|
||||
memset(se_cmd->sense_buffer, 0, se_cmd->scsi_sense_length);
|
||||
/* Logical Unit Communication Time-out asc/ascq = 0x0801 */
|
||||
|
|
|
@ -1640,7 +1640,7 @@ static int initio_state_6(struct initio_host * host)
|
|||
*
|
||||
*/
|
||||
|
||||
int initio_state_7(struct initio_host * host)
|
||||
static int initio_state_7(struct initio_host * host)
|
||||
{
|
||||
int cnt, i;
|
||||
|
||||
|
|
|
@ -6727,6 +6727,9 @@ static struct scsi_host_template driver_template = {
|
|||
.name = "IPR",
|
||||
.info = ipr_ioa_info,
|
||||
.ioctl = ipr_ioctl,
|
||||
#ifdef CONFIG_COMPAT
|
||||
.compat_ioctl = ipr_ioctl,
|
||||
#endif
|
||||
.queuecommand = ipr_queuecommand,
|
||||
.eh_abort_handler = ipr_eh_abort,
|
||||
.eh_device_reset_handler = ipr_eh_dev_reset,
|
||||
|
|
|
@ -168,6 +168,9 @@ static struct scsi_host_template isci_sht = {
|
|||
.eh_target_reset_handler = sas_eh_target_reset_handler,
|
||||
.target_destroy = sas_target_destroy,
|
||||
.ioctl = sas_ioctl,
|
||||
#ifdef CONFIG_COMPAT
|
||||
.compat_ioctl = sas_ioctl,
|
||||
#endif
|
||||
.shost_attrs = isci_host_attrs,
|
||||
.track_queue_depth = 1,
|
||||
};
|
||||
|
|
|
@ -887,6 +887,10 @@ free_host:
|
|||
static void iscsi_sw_tcp_session_destroy(struct iscsi_cls_session *cls_session)
|
||||
{
|
||||
struct Scsi_Host *shost = iscsi_session_to_shost(cls_session);
|
||||
struct iscsi_session *session = cls_session->dd_data;
|
||||
|
||||
if (WARN_ON_ONCE(session->leadconn))
|
||||
return;
|
||||
|
||||
iscsi_tcp_r2tpool_free(cls_session->dd_data);
|
||||
iscsi_session_teardown(cls_session);
|
||||
|
|
|
@@ -137,7 +137,7 @@ static void sas_ata_task_done(struct sas_task *task)
 	} else {
 		ac = sas_to_ata_err(stat);
 		if (ac) {
-			pr_warn("%s: SAS error %x\n", __func__, stat->stat);
+			pr_warn("%s: SAS error 0x%x\n", __func__, stat->stat);
 			/* We saw a SAS error. Send a vague error. */
 			if (!link->sactive) {
 				qc->err_mask = ac;

@@ -179,7 +179,7 @@ int sas_notify_lldd_dev_found(struct domain_device *dev)

 	res = i->dft->lldd_dev_found(dev);
 	if (res) {
-		pr_warn("driver on host %s cannot handle device %llx, error:%d\n",
+		pr_warn("driver on host %s cannot handle device %016llx, error:%d\n",
 			dev_name(sas_ha->dev),
 			SAS_ADDR(dev->sas_addr), res);
 	}

@@ -500,7 +500,7 @@ static int sas_ex_general(struct domain_device *dev)
 		ex_assign_report_general(dev, rg_resp);

 		if (dev->ex_dev.configuring) {
-			pr_debug("RG: ex %llx self-configuring...\n",
+			pr_debug("RG: ex %016llx self-configuring...\n",
 				 SAS_ADDR(dev->sas_addr));
 			schedule_timeout_interruptible(5*HZ);
 		} else

@@ -881,7 +881,7 @@ static struct domain_device *sas_ex_discover_end_dev(

 	res = sas_discover_end_dev(child);
 	if (res) {
-		pr_notice("sas_discover_end_dev() for device %16llx at %016llx:%02d returned 0x%x\n",
+		pr_notice("sas_discover_end_dev() for device %016llx at %016llx:%02d returned 0x%x\n",
 			  SAS_ADDR(child->sas_addr),
 			  SAS_ADDR(parent->sas_addr), phy_id, res);
 		goto out_list_del;

@@ -107,7 +107,7 @@ static inline void sas_smp_host_handler(struct bsg_job *job,

 static inline void sas_fail_probe(struct domain_device *dev, const char *func, int err)
 {
-	pr_warn("%s: for %s device %16llx returned %d\n",
+	pr_warn("%s: for %s device %016llx returned %d\n",
 		func, dev->parent ? "exp-attached" :
 				    "direct-attached",
 		SAS_ADDR(dev->sas_addr), err);

@@ -165,7 +165,7 @@ static void sas_form_port(struct asd_sas_phy *phy)
 	}
 	sas_port_add_phy(port->port, phy->phy);

-	pr_debug("%s added to %s, phy_mask:0x%x (%16llx)\n",
+	pr_debug("%s added to %s, phy_mask:0x%x (%016llx)\n",
 		 dev_name(&phy->phy->dev), dev_name(&port->port->dev),
 		 port->phy_mask,
 		 SAS_ADDR(port->attached_sas_addr));

@@ -330,7 +330,7 @@ static int sas_recover_lu(struct domain_device *dev, struct scsi_cmnd *cmd)

 	int_to_scsilun(cmd->device->lun, &lun);

-	pr_notice("eh: device %llx LUN %llx has the task\n",
+	pr_notice("eh: device %016llx LUN 0x%llx has the task\n",
 		  SAS_ADDR(dev->sas_addr),
 		  cmd->device->lun);

@@ -615,7 +615,7 @@ static void sas_eh_handle_sas_errors(struct Scsi_Host *shost, struct list_head *
 reset:
 			tmf_resp = sas_recover_lu(task->dev, cmd);
 			if (tmf_resp == TMF_RESP_FUNC_COMPLETE) {
-				pr_notice("dev %016llx LU %llx is recovered\n",
+				pr_notice("dev %016llx LU 0x%llx is recovered\n",
 					  SAS_ADDR(task->dev),
 					  cmd->device->lun);
 				sas_eh_finish_cmd(cmd);

@@ -666,7 +666,7 @@ static void sas_eh_handle_sas_errors(struct Scsi_Host *shost, struct list_head *
 			 * of effort could recover from errors.  Quite
 			 * possibly the HA just disappeared.
 			 */
-			pr_err("error from device %llx, LUN %llx couldn't be recovered in any way\n",
+			pr_err("error from device %016llx, LUN 0x%llx couldn't be recovered in any way\n",
 			       SAS_ADDR(task->dev->sas_addr),
 			       cmd->device->lun);

@@ -851,7 +851,7 @@ int sas_slave_configure(struct scsi_device *scsi_dev)
 	if (scsi_dev->tagged_supported) {
 		scsi_change_queue_depth(scsi_dev, SAS_DEF_QD);
 	} else {
-		pr_notice("device %llx, LUN %llx doesn't support TCQ\n",
+		pr_notice("device %016llx, LUN 0x%llx doesn't support TCQ\n",
 			  SAS_ADDR(dev->sas_addr), scsi_dev->lun);
 		scsi_change_queue_depth(scsi_dev, 1);
 	}

@@ -27,7 +27,7 @@ void sas_ssp_task_response(struct device *dev, struct sas_task *task,
 		memcpy(tstat->buf, iu->sense_data, tstat->buf_valid_size);

 		if (iu->status != SAM_STAT_CHECK_CONDITION)
-			dev_warn(dev, "dev %llx sent sense data, but stat(%x) is not CHECK CONDITION\n",
+			dev_warn(dev, "dev %016llx sent sense data, but stat(0x%x) is not CHECK CONDITION\n",
				 SAS_ADDR(task->dev->sas_addr), iu->status);
 	}
 	else
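
The libsas hunks above are a single cosmetic change: SAS addresses are printed as fixed 16-digit, zero-padded hex (%016llx), and bare hex values gain an explicit 0x prefix, so log lines align and cannot be misread as decimal. A tiny userspace comparison of the two formats:

	#include <stdio.h>
	#include <inttypes.h>

	int main(void)
	{
		uint64_t sas_addr = 0x5000c50012ab34cdULL;
		unsigned int stat = 0x2;

		/* Old style: width varies with the value, and "2" is ambiguous. */
		printf("dev %llx stat(%x)\n",
		       (unsigned long long)sas_addr, stat);

		/* New style: fixed 16-digit address, explicit hex prefix. */
		printf("dev %016llx stat(0x%x)\n",
		       (unsigned long long)sas_addr, stat);
		return 0;
	}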
@ -1223,6 +1223,8 @@ struct lpfc_hba {
|
|||
#define LPFC_POLL_HB 1 /* slowpath heartbeat */
|
||||
#define LPFC_POLL_FASTPATH 0 /* called from fastpath */
|
||||
#define LPFC_POLL_SLOWPATH 1 /* called from slowpath */
|
||||
|
||||
char os_host_name[MAXHOSTNAMELEN];
|
||||
};
|
||||
|
||||
static inline struct Scsi_Host *
|
||||
|
|
|
@ -4123,14 +4123,13 @@ lpfc_topology_store(struct device *dev, struct device_attribute *attr,
|
|||
/*
|
||||
* The 'topology' is not a configurable parameter if :
|
||||
* - persistent topology enabled
|
||||
* - G7 adapters
|
||||
* - G6 with no private loop support
|
||||
* - G7/G6 with no private loop support
|
||||
*/
|
||||
|
||||
if (((phba->hba_flag & HBA_PERSISTENT_TOPO) ||
|
||||
if ((phba->hba_flag & HBA_PERSISTENT_TOPO ||
|
||||
(!phba->sli4_hba.pc_sli4_params.pls &&
|
||||
phba->pcidev->device == PCI_DEVICE_ID_LANCER_G6_FC) ||
|
||||
phba->pcidev->device == PCI_DEVICE_ID_LANCER_G7_FC) &&
|
||||
(phba->pcidev->device == PCI_DEVICE_ID_LANCER_G6_FC ||
|
||||
phba->pcidev->device == PCI_DEVICE_ID_LANCER_G7_FC))) &&
|
||||
val == 4) {
|
||||
lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
|
||||
"3114 Loop mode not supported\n");
|
||||
|
|
|
@ -180,7 +180,7 @@ int lpfc_issue_gidft(struct lpfc_vport *vport);
|
|||
int lpfc_get_gidft_type(struct lpfc_vport *vport, struct lpfc_iocbq *iocbq);
|
||||
int lpfc_ns_cmd(struct lpfc_vport *, int, uint8_t, uint32_t);
|
||||
int lpfc_fdmi_cmd(struct lpfc_vport *, struct lpfc_nodelist *, int, uint32_t);
|
||||
void lpfc_fdmi_num_disc_check(struct lpfc_vport *);
|
||||
void lpfc_fdmi_change_check(struct lpfc_vport *vport);
|
||||
void lpfc_delayed_disc_tmo(struct timer_list *);
|
||||
void lpfc_delayed_disc_timeout_handler(struct lpfc_vport *);
|
||||
|
||||
|
|
|
@@ -1493,33 +1493,35 @@ int
 lpfc_vport_symbolic_node_name(struct lpfc_vport *vport, char *symbol,
 			      size_t size)
 {
-	char fwrev[FW_REV_STR_SIZE];
-	int n;
+	char fwrev[FW_REV_STR_SIZE] = {0};
+	char tmp[MAXHOSTNAMELEN] = {0};

-	lpfc_decode_firmware_rev(vport->phba, fwrev, 0);
+	memset(symbol, 0, size);

-	n = scnprintf(symbol, size, "Emulex %s", vport->phba->ModelName);
-	if (size < n)
-		return n;
+	scnprintf(tmp, sizeof(tmp), "Emulex %s", vport->phba->ModelName);
+	if (strlcat(symbol, tmp, size) >= size)
+		goto buffer_done;

-	n += scnprintf(symbol + n, size - n, " FV%s", fwrev);
-	if (size < n)
-		return n;
+	lpfc_decode_firmware_rev(vport->phba, fwrev, 0);
+	scnprintf(tmp, sizeof(tmp), " FV%s", fwrev);
+	if (strlcat(symbol, tmp, size) >= size)
+		goto buffer_done;

-	n += scnprintf(symbol + n, size - n, " DV%s.",
-		      lpfc_release_version);
-	if (size < n)
-		return n;
+	scnprintf(tmp, sizeof(tmp), " DV%s", lpfc_release_version);
+	if (strlcat(symbol, tmp, size) >= size)
+		goto buffer_done;

-	n += scnprintf(symbol + n, size - n, " HN:%s.",
-		      init_utsname()->nodename);
-	if (size < n)
-		return n;
+	scnprintf(tmp, sizeof(tmp), " HN:%s", vport->phba->os_host_name);
+	if (strlcat(symbol, tmp, size) >= size)
+		goto buffer_done;

 	/* Note :- OS name is "Linux" */
-	n += scnprintf(symbol + n, size - n, " OS:%s",
-		      init_utsname()->sysname);
-	return n;
+	scnprintf(tmp, sizeof(tmp), " OS:%s", init_utsname()->sysname);
+	strlcat(symbol, tmp, size);
+
+buffer_done:
+	return strnlen(symbol, size);
 }

 static uint32_t
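
The rewrite above builds each fragment in a bounded scratch buffer and appends it with strlcat(), stopping as soon as the destination would overflow, instead of tracking a running offset by hand. The same pattern in a self-contained form; demo_build_banner() is a hypothetical helper, not an lpfc function:

	#include <linux/kernel.h>
	#include <linux/string.h>

	/* Build "Model FVx.y DVz" into symbol, truncating safely at size. */
	static size_t demo_build_banner(char *symbol, size_t size,
					const char *model, const char *fwrev,
					const char *drvver)
	{
		char tmp[64] = {0};

		memset(symbol, 0, size);

		scnprintf(tmp, sizeof(tmp), "%s", model);
		if (strlcat(symbol, tmp, size) >= size)
			goto done;	/* destination already full */

		scnprintf(tmp, sizeof(tmp), " FV%s", fwrev);
		if (strlcat(symbol, tmp, size) >= size)
			goto done;

		scnprintf(tmp, sizeof(tmp), " DV%s", drvver);
		strlcat(symbol, tmp, size);
	done:
		return strnlen(symbol, size);
	}

strlcat() returns the length it tried to create, so a return value of size or more signals truncation and ends the build early.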
@ -1998,14 +2000,16 @@ lpfc_cmpl_ct_disc_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
|
|||
|
||||
|
||||
/**
|
||||
* lpfc_fdmi_num_disc_check - Check how many mapped NPorts we are connected to
|
||||
* lpfc_fdmi_change_check - Check for changed FDMI parameters
|
||||
* @vport: pointer to a host virtual N_Port data structure.
|
||||
*
|
||||
* Called from hbeat timeout routine to check if the number of discovered
|
||||
* ports has changed. If so, re-register thar port Attribute.
|
||||
* Check how many mapped NPorts we are connected to
|
||||
* Check if our hostname changed
|
||||
* Called from hbeat timeout routine to check if any FDMI parameters
|
||||
* changed. If so, re-register those Attributes.
|
||||
*/
|
||||
void
|
||||
lpfc_fdmi_num_disc_check(struct lpfc_vport *vport)
|
||||
lpfc_fdmi_change_check(struct lpfc_vport *vport)
|
||||
{
|
||||
struct lpfc_hba *phba = vport->phba;
|
||||
struct lpfc_nodelist *ndlp;
|
||||
|
@ -2018,17 +2022,41 @@ lpfc_fdmi_num_disc_check(struct lpfc_vport *vport)
|
|||
if (!(vport->fc_flag & FC_FABRIC))
|
||||
return;
|
||||
|
||||
ndlp = lpfc_findnode_did(vport, FDMI_DID);
|
||||
if (!ndlp || !NLP_CHK_NODE_ACT(ndlp))
|
||||
return;
|
||||
|
||||
/* Check if system hostname changed */
|
||||
if (strcmp(phba->os_host_name, init_utsname()->nodename)) {
|
||||
memset(phba->os_host_name, 0, sizeof(phba->os_host_name));
|
||||
scnprintf(phba->os_host_name, sizeof(phba->os_host_name), "%s",
|
||||
init_utsname()->nodename);
|
||||
lpfc_ns_cmd(vport, SLI_CTNS_RSNN_NN, 0, 0);
|
||||
|
||||
/* Since this effects multiple HBA and PORT attributes, we need
|
||||
* de-register and go thru the whole FDMI registration cycle.
|
||||
* DHBA -> DPRT -> RHBA -> RPA (physical port)
|
||||
* DPRT -> RPRT (vports)
|
||||
*/
|
||||
if (vport->port_type == LPFC_PHYSICAL_PORT)
|
||||
lpfc_fdmi_cmd(vport, ndlp, SLI_MGMT_DHBA, 0);
|
||||
else
|
||||
lpfc_fdmi_cmd(vport, ndlp, SLI_MGMT_DPRT, 0);
|
||||
|
||||
/* Since this code path registers all the port attributes
|
||||
* we can just return without further checking.
|
||||
*/
|
||||
return;
|
||||
}
|
||||
|
||||
if (!(vport->fdmi_port_mask & LPFC_FDMI_PORT_ATTR_num_disc))
|
||||
return;
|
||||
|
||||
/* Check if the number of mapped NPorts changed */
|
||||
cnt = lpfc_find_map_node(vport);
|
||||
if (cnt == vport->fdmi_num_disc)
|
||||
return;
|
||||
|
||||
ndlp = lpfc_findnode_did(vport, FDMI_DID);
|
||||
if (!ndlp || !NLP_CHK_NODE_ACT(ndlp))
|
||||
return;
|
||||
|
||||
if (vport->port_type == LPFC_PHYSICAL_PORT) {
|
||||
lpfc_fdmi_cmd(vport, ndlp, SLI_MGMT_RPA,
|
||||
LPFC_FDMI_PORT_ATTR_num_disc);
|
||||
|
@ -2616,8 +2644,8 @@ lpfc_fdmi_port_attr_host_name(struct lpfc_vport *vport,
|
|||
ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
|
||||
memset(ae, 0, 256);
|
||||
|
||||
snprintf(ae->un.AttrString, sizeof(ae->un.AttrString), "%s",
|
||||
init_utsname()->nodename);
|
||||
scnprintf(ae->un.AttrString, sizeof(ae->un.AttrString), "%s",
|
||||
vport->phba->os_host_name);
|
||||
|
||||
len = strnlen(ae->un.AttrString, sizeof(ae->un.AttrString));
|
||||
len += (len & 3) ? (4 - (len & 3)) : 4;
|
||||
|
|
|
@ -2085,6 +2085,8 @@ static int lpfc_debugfs_ras_log_data(struct lpfc_hba *phba,
|
|||
int copied = 0;
|
||||
struct lpfc_dmabuf *dmabuf, *next;
|
||||
|
||||
memset(buffer, 0, size);
|
||||
|
||||
spin_lock_irq(&phba->hbalock);
|
||||
if (phba->ras_fwlog.state != ACTIVE) {
|
||||
spin_unlock_irq(&phba->hbalock);
|
||||
|
@ -2094,10 +2096,15 @@ static int lpfc_debugfs_ras_log_data(struct lpfc_hba *phba,
|
|||
|
||||
list_for_each_entry_safe(dmabuf, next,
|
||||
&phba->ras_fwlog.fwlog_buff_list, list) {
|
||||
/* Check if copying will go over size and a '\0' char */
|
||||
if ((copied + LPFC_RAS_MAX_ENTRY_SIZE) >= (size - 1)) {
|
||||
memcpy(buffer + copied, dmabuf->virt,
|
||||
size - copied - 1);
|
||||
copied += size - copied - 1;
|
||||
break;
|
||||
}
|
||||
memcpy(buffer + copied, dmabuf->virt, LPFC_RAS_MAX_ENTRY_SIZE);
|
||||
copied += LPFC_RAS_MAX_ENTRY_SIZE;
|
||||
if (size > copied)
|
||||
break;
|
||||
}
|
||||
return copied;
|
||||
}
|
||||
|
|
|
@ -28,6 +28,7 @@
|
|||
#include <linux/kthread.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/lockdep.h>
|
||||
#include <linux/utsname.h>
|
||||
|
||||
#include <scsi/scsi.h>
|
||||
#include <scsi/scsi_device.h>
|
||||
|
@ -3315,6 +3316,10 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, struct lpfc_mbx_read_top *la)
|
|||
lpfc_sli4_clear_fcf_rr_bmask(phba);
|
||||
}
|
||||
|
||||
/* Prepare for LINK up registrations */
|
||||
memset(phba->os_host_name, 0, sizeof(phba->os_host_name));
|
||||
scnprintf(phba->os_host_name, sizeof(phba->os_host_name), "%s",
|
||||
init_utsname()->nodename);
|
||||
return;
|
||||
out:
|
||||
lpfc_vport_set_state(vport, FC_VPORT_FAILED);
|
||||
|
|
|
@ -3925,6 +3925,9 @@ struct lpfc_mbx_wr_object {
|
|||
#define LPFC_CHANGE_STATUS_FW_RESET 0x02
|
||||
#define LPFC_CHANGE_STATUS_PORT_MIGRATION 0x04
|
||||
#define LPFC_CHANGE_STATUS_PCI_RESET 0x05
|
||||
#define lpfc_wr_object_csf_SHIFT 8
|
||||
#define lpfc_wr_object_csf_MASK 0x00000001
|
||||
#define lpfc_wr_object_csf_WORD word5
|
||||
} response;
|
||||
} u;
|
||||
};
|
||||
|
|
|
@ -1362,7 +1362,7 @@ lpfc_hb_timeout_handler(struct lpfc_hba *phba)
|
|||
if (vports != NULL)
|
||||
for (i = 0; i <= phba->max_vports && vports[i] != NULL; i++) {
|
||||
lpfc_rcv_seq_check_edtov(vports[i]);
|
||||
lpfc_fdmi_num_disc_check(vports[i]);
|
||||
lpfc_fdmi_change_check(vports[i]);
|
||||
}
|
||||
lpfc_destroy_vport_work_array(phba, vports);
|
||||
|
||||
|
@ -8320,14 +8320,6 @@ lpfc_map_topology(struct lpfc_hba *phba, struct lpfc_mbx_read_config *rd_config)
|
|||
phba->hba_flag |= HBA_PERSISTENT_TOPO;
|
||||
switch (phba->pcidev->device) {
|
||||
case PCI_DEVICE_ID_LANCER_G7_FC:
|
||||
if (tf || (pt == LINK_FLAGS_LOOP)) {
|
||||
/* Invalid values from FW - use driver params */
|
||||
phba->hba_flag &= ~HBA_PERSISTENT_TOPO;
|
||||
} else {
|
||||
/* Prism only supports PT2PT topology */
|
||||
phba->cfg_topology = FLAGS_TOPOLOGY_MODE_PT_PT;
|
||||
}
|
||||
break;
|
||||
case PCI_DEVICE_ID_LANCER_G6_FC:
|
||||
if (!tf) {
|
||||
phba->cfg_topology = ((pt == LINK_FLAGS_LOOP)
|
||||
|
@ -10449,6 +10441,8 @@ lpfc_sli4_pci_mem_unset(struct lpfc_hba *phba)
|
|||
case LPFC_SLI_INTF_IF_TYPE_6:
|
||||
iounmap(phba->sli4_hba.drbl_regs_memmap_p);
|
||||
iounmap(phba->sli4_hba.conf_regs_memmap_p);
|
||||
if (phba->sli4_hba.dpp_regs_memmap_p)
|
||||
iounmap(phba->sli4_hba.dpp_regs_memmap_p);
|
||||
break;
|
||||
case LPFC_SLI_INTF_IF_TYPE_1:
|
||||
default:
|
||||
|
|
|
@ -308,7 +308,7 @@ lpfc_defer_pt2pt_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *link_mbox)
|
|||
mb->mbxStatus);
|
||||
mempool_free(login_mbox, phba->mbox_mem_pool);
|
||||
mempool_free(link_mbox, phba->mbox_mem_pool);
|
||||
lpfc_sli_release_iocbq(phba, save_iocb);
|
||||
kfree(save_iocb);
|
||||
return;
|
||||
}
|
||||
|
||||
|
@ -325,7 +325,61 @@ lpfc_defer_pt2pt_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *link_mbox)
|
|||
}
|
||||
|
||||
mempool_free(link_mbox, phba->mbox_mem_pool);
|
||||
lpfc_sli_release_iocbq(phba, save_iocb);
|
||||
kfree(save_iocb);
|
||||
}
|
||||
|
||||
/**
|
||||
* lpfc_defer_tgt_acc - Progress SLI4 target rcv PLOGI handler
|
||||
* @phba: Pointer to HBA context object.
|
||||
* @pmb: Pointer to mailbox object.
|
||||
*
|
||||
* This function provides the unreg rpi mailbox completion handler for a tgt.
|
||||
* The routine frees the memory resources associated with the completed
|
||||
* mailbox command and transmits the ELS ACC.
|
||||
*
|
||||
* This routine is only called if we are SLI4, acting in target
|
||||
* mode and the remote NPort issues the PLOGI after link up.
|
||||
**/
|
||||
static void
|
||||
lpfc_defer_acc_rsp(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
|
||||
{
|
||||
struct lpfc_vport *vport = pmb->vport;
|
||||
struct lpfc_nodelist *ndlp = pmb->ctx_ndlp;
|
||||
LPFC_MBOXQ_t *mbox = pmb->context3;
|
||||
struct lpfc_iocbq *piocb = NULL;
|
||||
int rc;
|
||||
|
||||
if (mbox) {
|
||||
pmb->context3 = NULL;
|
||||
piocb = mbox->context3;
|
||||
mbox->context3 = NULL;
|
||||
}
|
||||
|
||||
/*
|
||||
* Complete the unreg rpi mbx request, and update flags.
|
||||
* This will also restart any deferred events.
|
||||
*/
|
||||
lpfc_nlp_get(ndlp);
|
||||
lpfc_sli4_unreg_rpi_cmpl_clr(phba, pmb);
|
||||
|
||||
if (!piocb) {
|
||||
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY | LOG_ELS,
|
||||
"4578 PLOGI ACC fail\n");
|
||||
if (mbox)
|
||||
mempool_free(mbox, phba->mbox_mem_pool);
|
||||
goto out;
|
||||
}
|
||||
|
||||
rc = lpfc_els_rsp_acc(vport, ELS_CMD_PLOGI, piocb, ndlp, mbox);
|
||||
if (rc) {
|
||||
lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY | LOG_ELS,
|
||||
"4579 PLOGI ACC fail %x\n", rc);
|
||||
if (mbox)
|
||||
mempool_free(mbox, phba->mbox_mem_pool);
|
||||
}
|
||||
kfree(piocb);
|
||||
out:
|
||||
lpfc_nlp_put(ndlp);
|
||||
}
|
||||
|
||||
static int
|
||||
|
@ -345,6 +399,7 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
|
|||
struct lpfc_iocbq *save_iocb;
|
||||
struct ls_rjt stat;
|
||||
uint32_t vid, flag;
|
||||
u16 rpi;
|
||||
int rc, defer_acc;
|
||||
|
||||
memset(&stat, 0, sizeof (struct ls_rjt));
|
||||
|
@ -488,7 +543,7 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
|
|||
link_mbox->vport = vport;
|
||||
link_mbox->ctx_ndlp = ndlp;
|
||||
|
||||
save_iocb = lpfc_sli_get_iocbq(phba);
|
||||
save_iocb = kzalloc(sizeof(*save_iocb), GFP_KERNEL);
|
||||
if (!save_iocb)
|
||||
goto out;
|
||||
/* Save info from cmd IOCB used in rsp */
|
||||
|
@ -513,7 +568,36 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
|
|||
goto out;
|
||||
|
||||
/* Registering an existing RPI behaves differently for SLI3 vs SLI4 */
|
||||
if (phba->sli_rev == LPFC_SLI_REV4)
|
||||
if (phba->nvmet_support && !defer_acc) {
|
||||
link_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
|
||||
if (!link_mbox)
|
||||
goto out;
|
||||
|
||||
/* As unique identifiers such as iotag would be overwritten
|
||||
* with those from the cmdiocb, allocate separate temporary
|
||||
* storage for the copy.
|
||||
*/
|
||||
save_iocb = kzalloc(sizeof(*save_iocb), GFP_KERNEL);
|
||||
if (!save_iocb)
|
||||
goto out;
|
||||
|
||||
/* Unreg RPI is required for SLI4. */
|
||||
rpi = phba->sli4_hba.rpi_ids[ndlp->nlp_rpi];
|
||||
lpfc_unreg_login(phba, vport->vpi, rpi, link_mbox);
|
||||
link_mbox->vport = vport;
|
||||
link_mbox->ctx_ndlp = ndlp;
|
||||
link_mbox->mbox_cmpl = lpfc_defer_acc_rsp;
|
||||
|
||||
if (((ndlp->nlp_DID & Fabric_DID_MASK) != Fabric_DID_MASK) &&
|
||||
(!(vport->fc_flag & FC_OFFLINE_MODE)))
|
||||
ndlp->nlp_flag |= NLP_UNREG_INP;
|
||||
|
||||
/* Save info from cmd IOCB used in rsp */
|
||||
memcpy(save_iocb, cmdiocb, sizeof(*save_iocb));
|
||||
|
||||
/* Delay sending ACC till unreg RPI completes. */
|
||||
defer_acc = 1;
|
||||
} else if (phba->sli_rev == LPFC_SLI_REV4)
|
||||
lpfc_unreg_rpi(vport, ndlp);
|
||||
|
||||
rc = lpfc_reg_rpi(phba, vport->vpi, icmd->un.rcvels.remoteID,
|
||||
|
@ -553,6 +637,9 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
|
|||
if ((vport->port_type == LPFC_NPIV_PORT &&
|
||||
vport->cfg_restrict_login)) {
|
||||
|
||||
/* no deferred ACC */
|
||||
kfree(save_iocb);
|
||||
|
||||
/* In order to preserve RPIs, we want to cleanup
|
||||
* the default RPI the firmware created to rcv
|
||||
* this ELS request. The only way to do this is
|
||||
|
@ -571,8 +658,12 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
|
|||
}
|
||||
if (defer_acc) {
|
||||
/* So the order here should be:
|
||||
* Issue CONFIG_LINK mbox
|
||||
* CONFIG_LINK cmpl
|
||||
* SLI3 pt2pt
|
||||
* Issue CONFIG_LINK mbox
|
||||
* CONFIG_LINK cmpl
|
||||
* SLI4 tgt
|
||||
* Issue UNREG RPI mbx
|
||||
* UNREG RPI cmpl
|
||||
* Issue PLOGI ACC
|
||||
* PLOGI ACC cmpl
|
||||
* Issue REG_LOGIN mbox
|
||||
|
@ -596,10 +687,9 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
|
|||
out:
|
||||
if (defer_acc)
|
||||
lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY,
|
||||
"4577 pt2pt discovery failure: %p %p %p\n",
|
||||
"4577 discovery failure: %p %p %p\n",
|
||||
save_iocb, link_mbox, login_mbox);
|
||||
if (save_iocb)
|
||||
lpfc_sli_release_iocbq(phba, save_iocb);
|
||||
kfree(save_iocb);
|
||||
if (link_mbox)
|
||||
mempool_free(link_mbox, phba->mbox_mem_pool);
|
||||
if (login_mbox)
|
||||
|
|
|
@ -481,7 +481,7 @@ lpfc_sli4_vport_delete_fcp_xri_aborted(struct lpfc_vport *vport)
|
|||
spin_lock(&qp->abts_io_buf_list_lock);
|
||||
list_for_each_entry_safe(psb, next_psb,
|
||||
&qp->lpfc_abts_io_buf_list, list) {
|
||||
if (psb->cur_iocbq.iocb_flag == LPFC_IO_NVME)
|
||||
if (psb->cur_iocbq.iocb_flag & LPFC_IO_NVME)
|
||||
continue;
|
||||
|
||||
if (psb->rdata && psb->rdata->pnode &&
|
||||
|
@ -528,7 +528,7 @@ lpfc_sli4_io_xri_aborted(struct lpfc_hba *phba,
|
|||
list_del_init(&psb->list);
|
||||
psb->flags &= ~LPFC_SBUF_XBUSY;
|
||||
psb->status = IOSTAT_SUCCESS;
|
||||
if (psb->cur_iocbq.iocb_flag == LPFC_IO_NVME) {
|
||||
if (psb->cur_iocbq.iocb_flag & LPFC_IO_NVME) {
|
||||
qp->abts_nvme_io_bufs--;
|
||||
spin_unlock(&qp->abts_io_buf_list_lock);
|
||||
spin_unlock_irqrestore(&phba->hbalock, iflag);
|
||||
|
|
|
@ -4918,8 +4918,17 @@ static int
|
|||
lpfc_sli4_rb_setup(struct lpfc_hba *phba)
|
||||
{
|
||||
phba->hbq_in_use = 1;
|
||||
phba->hbqs[LPFC_ELS_HBQ].entry_count =
|
||||
lpfc_hbq_defs[LPFC_ELS_HBQ]->entry_count;
|
||||
/**
|
||||
* Specific case when the MDS diagnostics is enabled and supported.
|
||||
* The receive buffer count is truncated to manage the incoming
|
||||
* traffic.
|
||||
**/
|
||||
if (phba->cfg_enable_mds_diags && phba->mds_diags_support)
|
||||
phba->hbqs[LPFC_ELS_HBQ].entry_count =
|
||||
lpfc_hbq_defs[LPFC_ELS_HBQ]->entry_count >> 1;
|
||||
else
|
||||
phba->hbqs[LPFC_ELS_HBQ].entry_count =
|
||||
lpfc_hbq_defs[LPFC_ELS_HBQ]->entry_count;
|
||||
phba->hbq_count = 1;
|
||||
lpfc_sli_hbqbuf_init_hbqs(phba, LPFC_ELS_HBQ);
|
||||
/* Initially populate or replenish the HBQs */
|
||||
|
@ -19449,7 +19458,7 @@ lpfc_wr_object(struct lpfc_hba *phba, struct list_head *dmabuf_list,
|
|||
struct lpfc_mbx_wr_object *wr_object;
|
||||
LPFC_MBOXQ_t *mbox;
|
||||
int rc = 0, i = 0;
|
||||
uint32_t shdr_status, shdr_add_status, shdr_change_status;
|
||||
uint32_t shdr_status, shdr_add_status, shdr_change_status, shdr_csf;
|
||||
uint32_t mbox_tmo;
|
||||
struct lpfc_dmabuf *dmabuf;
|
||||
uint32_t written = 0;
|
||||
|
@ -19506,6 +19515,16 @@ lpfc_wr_object(struct lpfc_hba *phba, struct list_head *dmabuf_list,
|
|||
if (check_change_status) {
|
||||
shdr_change_status = bf_get(lpfc_wr_object_change_status,
|
||||
&wr_object->u.response);
|
||||
|
||||
if (shdr_change_status == LPFC_CHANGE_STATUS_FW_RESET ||
|
||||
shdr_change_status == LPFC_CHANGE_STATUS_PORT_MIGRATION) {
|
||||
shdr_csf = bf_get(lpfc_wr_object_csf,
|
||||
&wr_object->u.response);
|
||||
if (shdr_csf)
|
||||
shdr_change_status =
|
||||
LPFC_CHANGE_STATUS_PCI_RESET;
|
||||
}
|
||||
|
||||
switch (shdr_change_status) {
|
||||
case (LPFC_CHANGE_STATUS_PHYS_DEV_RESET):
|
||||
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
|
||||
|
|
|
@ -20,7 +20,7 @@
|
|||
* included with this package. *
|
||||
*******************************************************************/
|
||||
|
||||
#define LPFC_DRIVER_VERSION "12.6.0.2"
|
||||
#define LPFC_DRIVER_VERSION "12.6.0.3"
|
||||
#define LPFC_DRIVER_NAME "lpfc"
|
||||
|
||||
/* Used for SLI 2/3 */
|
||||
|
|
|
@@ -21,8 +21,8 @@
 /*
  * MegaRAID SAS Driver meta data
  */
-#define MEGASAS_VERSION				"07.710.50.00-rc1"
-#define MEGASAS_RELDATE				"June 28, 2019"
+#define MEGASAS_VERSION				"07.713.01.00-rc1"
+#define MEGASAS_RELDATE				"Dec 27, 2019"

 #define MEGASAS_MSIX_NAME_LEN			32

@@ -2233,9 +2233,9 @@ enum MR_PD_TYPE {

 /* JBOD Queue depth definitions */
 #define MEGASAS_SATA_QD	32
-#define MEGASAS_SAS_QD	64
+#define MEGASAS_SAS_QD	256
 #define MEGASAS_DEFAULT_PD_QD	64
-#define MEGASAS_NVME_QD		32
+#define MEGASAS_NVME_QD		64

 #define MR_DEFAULT_NVME_PAGE_SIZE	4096
 #define MR_DEFAULT_NVME_PAGE_SHIFT	12

@@ -2640,10 +2640,11 @@ enum MEGASAS_OCR_CAUSE {
 };

 enum DCMD_RETURN_STATUS {
-	DCMD_SUCCESS    = 0,
-	DCMD_TIMEOUT    = 1,
-	DCMD_FAILED     = 2,
-	DCMD_NOT_FIRED  = 3,
+	DCMD_SUCCESS	= 0x00,
+	DCMD_TIMEOUT	= 0x01,
+	DCMD_FAILED	= 0x02,
+	DCMD_BUSY	= 0x03,
+	DCMD_INIT	= 0xff,
 };

 u8
@ -1099,7 +1099,7 @@ megasas_issue_polled(struct megasas_instance *instance, struct megasas_cmd *cmd)
|
|||
if (atomic_read(&instance->adprecovery) == MEGASAS_HW_CRITICAL_ERROR) {
|
||||
dev_err(&instance->pdev->dev, "Failed from %s %d\n",
|
||||
__func__, __LINE__);
|
||||
return DCMD_NOT_FIRED;
|
||||
return DCMD_INIT;
|
||||
}
|
||||
|
||||
instance->instancet->issue_dcmd(instance, cmd);
|
||||
|
@ -1123,19 +1123,19 @@ megasas_issue_blocked_cmd(struct megasas_instance *instance,
|
|||
struct megasas_cmd *cmd, int timeout)
|
||||
{
|
||||
int ret = 0;
|
||||
cmd->cmd_status_drv = MFI_STAT_INVALID_STATUS;
|
||||
cmd->cmd_status_drv = DCMD_INIT;
|
||||
|
||||
if (atomic_read(&instance->adprecovery) == MEGASAS_HW_CRITICAL_ERROR) {
|
||||
dev_err(&instance->pdev->dev, "Failed from %s %d\n",
|
||||
__func__, __LINE__);
|
||||
return DCMD_NOT_FIRED;
|
||||
return DCMD_INIT;
|
||||
}
|
||||
|
||||
instance->instancet->issue_dcmd(instance, cmd);
|
||||
|
||||
if (timeout) {
|
||||
ret = wait_event_timeout(instance->int_cmd_wait_q,
|
||||
cmd->cmd_status_drv != MFI_STAT_INVALID_STATUS, timeout * HZ);
|
||||
cmd->cmd_status_drv != DCMD_INIT, timeout * HZ);
|
||||
if (!ret) {
|
||||
dev_err(&instance->pdev->dev,
|
||||
"DCMD(opcode: 0x%x) is timed out, func:%s\n",
|
||||
|
@ -1144,10 +1144,9 @@ megasas_issue_blocked_cmd(struct megasas_instance *instance,
|
|||
}
|
||||
} else
|
||||
wait_event(instance->int_cmd_wait_q,
|
||||
cmd->cmd_status_drv != MFI_STAT_INVALID_STATUS);
|
||||
cmd->cmd_status_drv != DCMD_INIT);
|
||||
|
||||
return (cmd->cmd_status_drv == MFI_STAT_OK) ?
|
||||
DCMD_SUCCESS : DCMD_FAILED;
|
||||
return cmd->cmd_status_drv;
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -1190,19 +1189,19 @@ megasas_issue_blocked_abort_cmd(struct megasas_instance *instance,
|
|||
cpu_to_le32(upper_32_bits(cmd_to_abort->frame_phys_addr));
|
||||
|
||||
cmd->sync_cmd = 1;
|
||||
cmd->cmd_status_drv = MFI_STAT_INVALID_STATUS;
|
||||
cmd->cmd_status_drv = DCMD_INIT;
|
||||
|
||||
if (atomic_read(&instance->adprecovery) == MEGASAS_HW_CRITICAL_ERROR) {
|
||||
dev_err(&instance->pdev->dev, "Failed from %s %d\n",
|
||||
__func__, __LINE__);
|
||||
return DCMD_NOT_FIRED;
|
||||
return DCMD_INIT;
|
||||
}
|
||||
|
||||
instance->instancet->issue_dcmd(instance, cmd);
|
||||
|
||||
if (timeout) {
|
||||
ret = wait_event_timeout(instance->abort_cmd_wait_q,
|
||||
cmd->cmd_status_drv != MFI_STAT_INVALID_STATUS, timeout * HZ);
|
||||
cmd->cmd_status_drv != DCMD_INIT, timeout * HZ);
|
||||
if (!ret) {
|
||||
opcode = cmd_to_abort->frame->dcmd.opcode;
|
||||
dev_err(&instance->pdev->dev,
|
||||
|
@ -1212,13 +1211,12 @@ megasas_issue_blocked_abort_cmd(struct megasas_instance *instance,
|
|||
}
|
||||
} else
|
||||
wait_event(instance->abort_cmd_wait_q,
|
||||
cmd->cmd_status_drv != MFI_STAT_INVALID_STATUS);
|
||||
cmd->cmd_status_drv != DCMD_INIT);
|
||||
|
||||
cmd->sync_cmd = 0;
|
||||
|
||||
megasas_return_cmd(instance, cmd);
|
||||
return (cmd->cmd_status_drv == MFI_STAT_OK) ?
|
||||
DCMD_SUCCESS : DCMD_FAILED;
|
||||
return cmd->cmd_status_drv;
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -1887,6 +1885,10 @@ void megasas_set_dynamic_target_properties(struct scsi_device *sdev,
|
|||
|
||||
mr_device_priv_data->is_tm_capable =
|
||||
raid->capability.tmCapable;
|
||||
|
||||
if (!raid->flags.isEPD)
|
||||
sdev->no_write_same = 1;
|
||||
|
||||
} else if (instance->use_seqnum_jbod_fp) {
|
||||
pd_index = (sdev->channel * MEGASAS_MAX_DEV_PER_CHANNEL) +
|
||||
sdev->id;
|
||||
|
@ -2150,6 +2152,12 @@ static void megasas_complete_outstanding_ioctls(struct megasas_instance *instanc
|
|||
|
||||
void megaraid_sas_kill_hba(struct megasas_instance *instance)
|
||||
{
|
||||
if (atomic_read(&instance->adprecovery) == MEGASAS_HW_CRITICAL_ERROR) {
|
||||
dev_warn(&instance->pdev->dev,
|
||||
"Adapter already dead, skipping kill HBA\n");
|
||||
return;
|
||||
}
|
||||
|
||||
/* Set critical error to block I/O & ioctls in case caller didn't */
|
||||
atomic_set(&instance->adprecovery, MEGASAS_HW_CRITICAL_ERROR);
|
||||
/* Wait 1 second to ensure IO or ioctls in build have posted */
|
||||
|
@ -2726,7 +2734,7 @@ static int megasas_wait_for_outstanding(struct megasas_instance *instance)
|
|||
"reset queue\n",
|
||||
reset_cmd);
|
||||
|
||||
reset_cmd->cmd_status_drv = MFI_STAT_INVALID_STATUS;
|
||||
reset_cmd->cmd_status_drv = DCMD_INIT;
|
||||
instance->instancet->fire_cmd(instance,
|
||||
reset_cmd->frame_phys_addr,
|
||||
0, instance->reg_set);
|
||||
|
@ -3416,7 +3424,6 @@ static struct scsi_host_template megasas_template = {
|
|||
.bios_param = megasas_bios_param,
|
||||
.change_queue_depth = scsi_change_queue_depth,
|
||||
.max_segment_size = 0xffffffff,
|
||||
.no_write_same = 1,
|
||||
};
|
||||
|
||||
/**
|
||||
|
@ -3432,7 +3439,11 @@ static void
|
|||
megasas_complete_int_cmd(struct megasas_instance *instance,
|
||||
struct megasas_cmd *cmd)
|
||||
{
|
||||
cmd->cmd_status_drv = cmd->frame->io.cmd_status;
|
||||
if (cmd->cmd_status_drv == DCMD_INIT)
|
||||
cmd->cmd_status_drv =
|
||||
(cmd->frame->io.cmd_status == MFI_STAT_OK) ?
|
||||
DCMD_SUCCESS : DCMD_FAILED;
|
||||
|
||||
wake_up(&instance->int_cmd_wait_q);
|
||||
}
|
||||
|
||||
|
@ -3451,7 +3462,7 @@ megasas_complete_abort(struct megasas_instance *instance,
|
|||
{
|
||||
if (cmd->sync_cmd) {
|
||||
cmd->sync_cmd = 0;
|
||||
cmd->cmd_status_drv = 0;
|
||||
cmd->cmd_status_drv = DCMD_SUCCESS;
|
||||
wake_up(&instance->abort_cmd_wait_q);
|
||||
}
|
||||
}
|
||||
|
@ -3727,7 +3738,7 @@ megasas_issue_pending_cmds_again(struct megasas_instance *instance)
|
|||
dev_notice(&instance->pdev->dev, "%p synchronous cmd"
|
||||
"on the internal reset queue,"
|
||||
"issue it again.\n", cmd);
|
||||
cmd->cmd_status_drv = MFI_STAT_INVALID_STATUS;
|
||||
cmd->cmd_status_drv = DCMD_INIT;
|
||||
instance->instancet->fire_cmd(instance,
|
||||
cmd->frame_phys_addr,
|
||||
0, instance->reg_set);
|
||||
|
@ -4392,7 +4403,8 @@ dcmd_timeout_ocr_possible(struct megasas_instance *instance) {
|
|||
if (instance->adapter_type == MFI_SERIES)
|
||||
return KILL_ADAPTER;
|
||||
else if (instance->unload ||
|
||||
test_bit(MEGASAS_FUSION_IN_RESET, &instance->reset_flags))
|
||||
test_bit(MEGASAS_FUSION_OCR_NOT_POSSIBLE,
|
||||
&instance->reset_flags))
|
||||
return IGNORE_TIMEOUT;
|
||||
else
|
||||
return INITIATE_OCR;
|
||||
|
@ -7593,6 +7605,7 @@ megasas_resume(struct pci_dev *pdev)
|
|||
struct Scsi_Host *host;
|
||||
struct megasas_instance *instance;
|
||||
int irq_flags = PCI_IRQ_LEGACY;
|
||||
u32 status_reg;
|
||||
|
||||
instance = pci_get_drvdata(pdev);
|
||||
|
||||
|
@ -7620,9 +7633,35 @@ megasas_resume(struct pci_dev *pdev)
|
|||
/*
|
||||
* We expect the FW state to be READY
|
||||
*/
|
||||
if (megasas_transition_to_ready(instance, 0))
|
||||
goto fail_ready_state;
|
||||
|
||||
if (megasas_transition_to_ready(instance, 0)) {
|
||||
dev_info(&instance->pdev->dev,
|
||||
"Failed to transition controller to ready from %s!\n",
|
||||
__func__);
|
||||
if (instance->adapter_type != MFI_SERIES) {
|
||||
status_reg =
|
||||
instance->instancet->read_fw_status_reg(instance);
|
||||
if (!(status_reg & MFI_RESET_ADAPTER) ||
|
||||
((megasas_adp_reset_wait_for_ready
|
||||
(instance, true, 0)) == FAILED))
|
||||
goto fail_ready_state;
|
||||
} else {
|
||||
atomic_set(&instance->fw_reset_no_pci_access, 1);
|
||||
instance->instancet->adp_reset
|
||||
(instance, instance->reg_set);
|
||||
atomic_set(&instance->fw_reset_no_pci_access, 0);
|
||||
|
||||
/* waiting for about 30 seconds before retry */
|
||||
ssleep(30);
|
||||
|
||||
if (megasas_transition_to_ready(instance, 0))
|
||||
goto fail_ready_state;
|
||||
}
|
||||
|
||||
dev_info(&instance->pdev->dev,
|
||||
"FW restarted successfully from %s!\n",
|
||||
__func__);
|
||||
}
|
||||
if (megasas_set_dma_mask(instance))
|
||||
goto fail_set_dma_mask;
|
||||
|
||||
|
@ -8036,6 +8075,7 @@ megasas_mgmt_fw_ioctl(struct megasas_instance *instance,
|
|||
dma_addr_t sense_handle;
|
||||
unsigned long *sense_ptr;
|
||||
u32 opcode = 0;
|
||||
int ret = DCMD_SUCCESS;
|
||||
|
||||
memset(kbuff_arr, 0, sizeof(kbuff_arr));
|
||||
|
||||
|
@ -8176,13 +8216,18 @@ megasas_mgmt_fw_ioctl(struct megasas_instance *instance,
|
|||
* cmd to the SCSI mid-layer
|
||||
*/
|
||||
cmd->sync_cmd = 1;
|
||||
if (megasas_issue_blocked_cmd(instance, cmd, 0) == DCMD_NOT_FIRED) {
|
||||
|
||||
ret = megasas_issue_blocked_cmd(instance, cmd, 0);
|
||||
switch (ret) {
|
||||
case DCMD_INIT:
|
||||
case DCMD_BUSY:
|
||||
cmd->sync_cmd = 0;
|
||||
dev_err(&instance->pdev->dev,
|
||||
"return -EBUSY from %s %d cmd 0x%x opcode 0x%x cmd->cmd_status_drv 0x%x\n",
|
||||
__func__, __LINE__, cmd->frame->hdr.cmd, opcode,
|
||||
cmd->cmd_status_drv);
|
||||
return -EBUSY;
|
||||
__func__, __LINE__, cmd->frame->hdr.cmd, opcode,
|
||||
cmd->cmd_status_drv);
|
||||
error = -EBUSY;
|
||||
goto out;
|
||||
}
|
||||
|
||||
cmd->sync_cmd = 0;
|
||||
|
|
|
@@ -364,6 +364,35 @@ megasas_fusion_update_can_queue(struct megasas_instance *instance, int fw_boot_c
 		instance->max_fw_cmds = instance->max_fw_cmds-1;
 	}
 }
+
+static inline void
+megasas_get_msix_index(struct megasas_instance *instance,
+		       struct scsi_cmnd *scmd,
+		       struct megasas_cmd_fusion *cmd,
+		       u8 data_arms)
+{
+	int sdev_busy;
+
+	/* nr_hw_queue = 1 for MegaRAID */
+	struct blk_mq_hw_ctx *hctx =
+		scmd->device->request_queue->queue_hw_ctx[0];
+
+	sdev_busy = atomic_read(&hctx->nr_active);
+
+	if (instance->perf_mode == MR_BALANCED_PERF_MODE &&
+	    sdev_busy > (data_arms * MR_DEVICE_HIGH_IOPS_DEPTH))
+		cmd->request_desc->SCSIIO.MSIxIndex =
+			mega_mod64((atomic64_add_return(1, &instance->high_iops_outstanding) /
+					MR_HIGH_IOPS_BATCH_COUNT), instance->low_latency_index_start);
+	else if (instance->msix_load_balance)
+		cmd->request_desc->SCSIIO.MSIxIndex =
+			(mega_mod64(atomic64_add_return(1, &instance->total_io_count),
+				instance->msix_vectors));
+	else
+		cmd->request_desc->SCSIIO.MSIxIndex =
+			instance->reply_map[raw_smp_processor_id()];
+}
+
 /**
  * megasas_free_cmds_fusion - Free all the cmds in the free cmd pool
  * @instance: Adapter soft state
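
The new megasas_get_msix_index() centralises the queue-selection policy that was previously duplicated in the LD and system-PD build paths: busy devices are spread in batches over the dedicated high-IOPS vectors, otherwise the I/O stays on the vector mapped to the submitting CPU. The sketch below restates only that policy, with invented names and thresholds rather than the driver's real fields:

	#include <stdatomic.h>
	#include <stdint.h>

	#define HIGH_IOPS_DEPTH	8	/* outstanding-I/O threshold per device */
	#define BATCH_COUNT	16	/* keep this many I/Os on one queue     */

	static _Atomic uint64_t high_iops_counter;

	/* Queues [0, low_latency_start) are "high IOPS"; the rest are low latency. */
	static int pick_queue(int sdev_busy, int low_latency_start,
			      int this_cpu_queue)
	{
		if (sdev_busy > HIGH_IOPS_DEPTH) {
			/* Busy device: spread batches round-robin over the
			 * dedicated high-IOPS queues. */
			uint64_t n = atomic_fetch_add(&high_iops_counter, 1);

			return (int)((n / BATCH_COUNT) % low_latency_start);
		}

		/* Otherwise stay on the queue mapped to the submitting CPU. */
		return this_cpu_queue;
	}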
@@ -1312,7 +1341,9 @@ megasas_sync_pd_seq_num(struct megasas_instance *instance, bool pend) {
}

if (ret == DCMD_TIMEOUT)
megaraid_sas_kill_hba(instance);
dev_warn(&instance->pdev->dev,
"%s DCMD timed out, continue without JBOD sequence map\n",
__func__);

if (ret == DCMD_SUCCESS)
instance->pd_seq_map_id++;

@@ -1394,7 +1425,9 @@ megasas_get_ld_map_info(struct megasas_instance *instance)
ret = megasas_issue_polled(instance, cmd);

if (ret == DCMD_TIMEOUT)
megaraid_sas_kill_hba(instance);
dev_warn(&instance->pdev->dev,
"%s DCMD timed out, RAID map is disabled\n",
__func__);

megasas_return_cmd(instance, cmd);

@@ -2825,19 +2858,7 @@ megasas_build_ldio_fusion(struct megasas_instance *instance,
fp_possible = (io_info.fpOkForIo > 0) ? true : false;
}

if ((instance->perf_mode == MR_BALANCED_PERF_MODE) &&
atomic_read(&scp->device->device_busy) >
(io_info.data_arms * MR_DEVICE_HIGH_IOPS_DEPTH))
cmd->request_desc->SCSIIO.MSIxIndex =
mega_mod64((atomic64_add_return(1, &instance->high_iops_outstanding) /
MR_HIGH_IOPS_BATCH_COUNT), instance->low_latency_index_start);
else if (instance->msix_load_balance)
cmd->request_desc->SCSIIO.MSIxIndex =
(mega_mod64(atomic64_add_return(1, &instance->total_io_count),
instance->msix_vectors));
else
cmd->request_desc->SCSIIO.MSIxIndex =
instance->reply_map[raw_smp_processor_id()];
megasas_get_msix_index(instance, scp, cmd, io_info.data_arms);

if (instance->adapter_type >= VENTURA_SERIES) {
/* FP for Optimal raid level 1.

@@ -3158,18 +3179,7 @@ megasas_build_syspd_fusion(struct megasas_instance *instance,

cmd->request_desc->SCSIIO.DevHandle = io_request->DevHandle;

if ((instance->perf_mode == MR_BALANCED_PERF_MODE) &&
atomic_read(&scmd->device->device_busy) > MR_DEVICE_HIGH_IOPS_DEPTH)
cmd->request_desc->SCSIIO.MSIxIndex =
mega_mod64((atomic64_add_return(1, &instance->high_iops_outstanding) /
MR_HIGH_IOPS_BATCH_COUNT), instance->low_latency_index_start);
else if (instance->msix_load_balance)
cmd->request_desc->SCSIIO.MSIxIndex =
(mega_mod64(atomic64_add_return(1, &instance->total_io_count),
instance->msix_vectors));
else
cmd->request_desc->SCSIIO.MSIxIndex =
instance->reply_map[raw_smp_processor_id()];
megasas_get_msix_index(instance, scmd, cmd, 1);

if (!fp_possible) {
/* system pd firmware path */

@@ -4219,7 +4229,8 @@ void megasas_reset_reply_desc(struct megasas_instance *instance)
* megasas_refire_mgmt_cmd : Re-fire management commands
* @instance: Controller's soft instance
*/
static void megasas_refire_mgmt_cmd(struct megasas_instance *instance)
void megasas_refire_mgmt_cmd(struct megasas_instance *instance,
bool return_ioctl)
{
int j;
struct megasas_cmd_fusion *cmd_fusion;

@@ -4283,6 +4294,16 @@ static void megasas_refire_mgmt_cmd(struct megasas_instance *instance)
break;
}

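/* On the final reset attempt (return_ioctl set), pending sync ioctl
 * commands other than aborts are completed with DCMD_BUSY instead of
 * being re-fired.
 */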
if (return_ioctl && cmd_mfi->sync_cmd &&
cmd_mfi->frame->hdr.cmd != MFI_CMD_ABORT) {
dev_err(&instance->pdev->dev,
"return -EBUSY from %s %d cmd 0x%x opcode 0x%x\n",
__func__, __LINE__, cmd_mfi->frame->hdr.cmd,
le32_to_cpu(cmd_mfi->frame->dcmd.opcode));
cmd_mfi->cmd_status_drv = DCMD_BUSY;
result = COMPLETE_CMD;
}

switch (result) {
case REFIRE_CMD:
megasas_fire_cmd_fusion(instance, req_desc);

@ -4297,6 +4318,37 @@ static void megasas_refire_mgmt_cmd(struct megasas_instance *instance)
|
|||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* megasas_return_polled_cmds: Return polled mode commands back to the pool
|
||||
* before initiating an OCR.
|
||||
* @instance: Controller's soft instance
|
||||
*/
|
||||
static void
|
||||
megasas_return_polled_cmds(struct megasas_instance *instance)
|
||||
{
|
||||
int i;
|
||||
struct megasas_cmd_fusion *cmd_fusion;
|
||||
struct fusion_context *fusion;
|
||||
struct megasas_cmd *cmd_mfi;
|
||||
|
||||
fusion = instance->ctrl_context;
|
||||
|
||||
for (i = instance->max_scsi_cmds; i < instance->max_fw_cmds; i++) {
|
||||
cmd_fusion = fusion->cmd_list[i];
|
||||
cmd_mfi = instance->cmd_list[cmd_fusion->sync_cmd_idx];
|
||||
|
||||
if (cmd_mfi->flags & DRV_DCMD_POLLED_MODE) {
|
||||
if (megasas_dbg_lvl & OCR_DEBUG)
|
||||
dev_info(&instance->pdev->dev,
|
||||
"%s %d return cmd 0x%x opcode 0x%x\n",
|
||||
__func__, __LINE__, cmd_mfi->frame->hdr.cmd,
|
||||
le32_to_cpu(cmd_mfi->frame->dcmd.opcode));
|
||||
cmd_mfi->flags &= ~DRV_DCMD_POLLED_MODE;
|
||||
megasas_return_cmd(instance, cmd_mfi);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* megasas_track_scsiio : Track SCSI IOs outstanding to a SCSI device
|
||||
* @instance: per adapter struct
|
||||
|
@ -4847,6 +4899,7 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int reason)
|
|||
if (instance->requestorId && !instance->skip_heartbeat_timer_del)
|
||||
del_timer_sync(&instance->sriov_heartbeat_timer);
|
||||
set_bit(MEGASAS_FUSION_IN_RESET, &instance->reset_flags);
|
||||
set_bit(MEGASAS_FUSION_OCR_NOT_POSSIBLE, &instance->reset_flags);
|
||||
atomic_set(&instance->adprecovery, MEGASAS_ADPRESET_SM_POLLING);
|
||||
instance->instancet->disable_intr(instance);
|
||||
megasas_sync_irqs((unsigned long)instance);
|
||||
|
@ -4951,7 +5004,9 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int reason)
|
|||
goto kill_hba;
|
||||
}
|
||||
|
||||
megasas_refire_mgmt_cmd(instance);
|
||||
megasas_refire_mgmt_cmd(instance,
|
||||
(i == (MEGASAS_FUSION_MAX_RESET_TRIES - 1)
|
||||
? 1 : 0));
|
||||
|
||||
/* Reset load balance info */
|
||||
if (fusion->load_balance_info)
|
||||
|
@ -4959,8 +5014,16 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int reason)
|
|||
(sizeof(struct LD_LOAD_BALANCE_INFO) *
|
||||
MAX_LOGICAL_DRIVES_EXT));
|
||||
|
||||
if (!megasas_get_map_info(instance))
|
||||
if (!megasas_get_map_info(instance)) {
|
||||
megasas_sync_map_info(instance);
|
||||
} else {
|
||||
/*
|
||||
* Return pending polled mode cmds before
|
||||
* retrying OCR
|
||||
*/
|
||||
megasas_return_polled_cmds(instance);
|
||||
continue;
|
||||
}
|
||||
|
||||
megasas_setup_jbod_map(instance);
|
||||
|
||||
|
@ -4987,6 +5050,15 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int reason)
|
|||
megasas_set_dynamic_target_properties(sdev, is_target_prop);
|
||||
}
|
||||
|
||||
status_reg = instance->instancet->read_fw_status_reg
|
||||
(instance);
|
||||
abs_state = status_reg & MFI_STATE_MASK;
|
||||
if (abs_state != MFI_STATE_OPERATIONAL) {
|
||||
dev_info(&instance->pdev->dev,
|
||||
"Adapter is not OPERATIONAL, state 0x%x for scsi:%d\n",
|
||||
abs_state, instance->host->host_no);
|
||||
goto out;
|
||||
}
|
||||
atomic_set(&instance->adprecovery, MEGASAS_HBA_OPERATIONAL);
|
||||
|
||||
dev_info(&instance->pdev->dev,
|
||||
|
@ -5046,7 +5118,7 @@ kill_hba:
|
|||
instance->skip_heartbeat_timer_del = 1;
|
||||
retval = FAILED;
|
||||
out:
|
||||
clear_bit(MEGASAS_FUSION_IN_RESET, &instance->reset_flags);
|
||||
clear_bit(MEGASAS_FUSION_OCR_NOT_POSSIBLE, &instance->reset_flags);
|
||||
mutex_unlock(&instance->reset_mutex);
|
||||
return retval;
|
||||
}
|
||||
|
|
|
@@ -89,6 +89,7 @@ enum MR_RAID_FLAGS_IO_SUB_TYPE {

#define MEGASAS_FP_CMD_LEN 16
#define MEGASAS_FUSION_IN_RESET 0
#define MEGASAS_FUSION_OCR_NOT_POSSIBLE 1
#define RAID_1_PEER_CMDS 2
#define JBOD_MAPS_COUNT 2
#define MEGASAS_REDUCE_QD_COUNT 64

@@ -864,9 +865,20 @@ struct MR_LD_RAID {
u8 regTypeReqOnRead;
__le16 seqNum;

struct {
u32 ldSyncRequired:1;
u32 reserved:31;
struct {
#ifndef MFI_BIG_ENDIAN
u32 ldSyncRequired:1;
u32 regTypeReqOnReadIsValid:1;
u32 isEPD:1;
u32 enableSLDOnAllRWIOs:1;
u32 reserved:28;
#else
u32 reserved:28;
u32 enableSLDOnAllRWIOs:1;
u32 isEPD:1;
u32 regTypeReqOnReadIsValid:1;
u32 ldSyncRequired:1;
#endif
} flags;

u8 LUN[8]; /* 0x24 8 byte LUN field used for SCSI IO's */

@ -122,6 +122,9 @@
|
|||
* 08-28-18 02.00.53 Bumped MPI2_HEADER_VERSION_UNIT.
|
||||
* Added MPI2_IOCSTATUS_FAILURE
|
||||
* 12-17-18 02.00.54 Bumped MPI2_HEADER_VERSION_UNIT
|
||||
* 06-24-19 02.00.55 Bumped MPI2_HEADER_VERSION_UNIT
|
||||
* 08-01-19 02.00.56 Bumped MPI2_HEADER_VERSION_UNIT
|
||||
* 10-02-19 02.00.57 Bumped MPI2_HEADER_VERSION_UNIT
|
||||
* --------------------------------------------------------------------------
|
||||
*/
|
||||
|
||||
|
@@ -162,7 +165,7 @@


/* Unit and Dev versioning for this MPI header set */
#define MPI2_HEADER_VERSION_UNIT (0x36)
#define MPI2_HEADER_VERSION_UNIT (0x39)
#define MPI2_HEADER_VERSION_DEV (0x00)
#define MPI2_HEADER_VERSION_UNIT_MASK (0xFF00)
#define MPI2_HEADER_VERSION_UNIT_SHIFT (8)

@@ -181,6 +184,7 @@
#define MPI2_IOC_STATE_READY (0x10000000)
#define MPI2_IOC_STATE_OPERATIONAL (0x20000000)
#define MPI2_IOC_STATE_FAULT (0x40000000)
#define MPI2_IOC_STATE_COREDUMP (0x50000000)

#define MPI2_IOC_STATE_MASK (0xF0000000)
#define MPI2_IOC_STATE_SHIFT (28)

@ -249,6 +249,8 @@
|
|||
* 08-28-18 02.00.46 Added NVMs Write Cache flag to IOUnitPage1
|
||||
* Added DMDReport Delay Time defines to PCIeIOUnitPage1
|
||||
* 12-17-18 02.00.47 Swap locations of Slotx2 and Slotx4 in ManPage 7.
|
||||
* 08-01-19 02.00.49 Add MPI26_MANPAGE7_FLAG_X2_X4_SLOT_INFO_VALID
|
||||
* Add MPI26_IOUNITPAGE1_NVME_WRCACHE_SHIFT
|
||||
*/
|
||||
|
||||
#ifndef MPI2_CNFG_H
|
||||
|
@ -891,6 +893,8 @@ typedef struct _MPI2_CONFIG_PAGE_MAN_7 {
|
|||
#define MPI2_MANPAGE7_FLAG_EVENTREPLAY_SLOT_ORDER (0x00000002)
|
||||
#define MPI2_MANPAGE7_FLAG_USE_SLOT_INFO (0x00000001)
|
||||
|
||||
#define MPI26_MANPAGE7_FLAG_CONN_LANE_USE_PINOUT (0x00000020)
|
||||
#define MPI26_MANPAGE7_FLAG_X2_X4_SLOT_INFO_VALID (0x00000010)
|
||||
|
||||
/*
|
||||
*Generic structure to use for product-specific manufacturing pages
|
||||
|
@@ -962,9 +966,10 @@ typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_1 {

/* IO Unit Page 1 Flags defines */
#define MPI26_IOUNITPAGE1_NVME_WRCACHE_MASK (0x00030000)
#define MPI26_IOUNITPAGE1_NVME_WRCACHE_ENABLE (0x00000000)
#define MPI26_IOUNITPAGE1_NVME_WRCACHE_DISABLE (0x00010000)
#define MPI26_IOUNITPAGE1_NVME_WRCACHE_NO_CHANGE (0x00020000)
#define MPI26_IOUNITPAGE1_NVME_WRCACHE_SHIFT (16)
#define MPI26_IOUNITPAGE1_NVME_WRCACHE_NO_CHANGE (0x00000000)
#define MPI26_IOUNITPAGE1_NVME_WRCACHE_ENABLE (0x00010000)
#define MPI26_IOUNITPAGE1_NVME_WRCACHE_DISABLE (0x00020000)
#define MPI2_IOUNITPAGE1_ATA_SECURITY_FREEZE_LOCK (0x00004000)
#define MPI25_IOUNITPAGE1_NEW_DEVICE_FAST_PATH_DISABLE (0x00002000)
#define MPI25_IOUNITPAGE1_DISABLE_FAST_PATH (0x00001000)

@ -3931,7 +3936,13 @@ typedef struct _MPI26_CONFIG_PAGE_PCIEDEV_2 {
|
|||
U32 MaximumDataTransferSize; /*0x0C */
|
||||
U32 Capabilities; /*0x10 */
|
||||
U16 NOIOB; /* 0x14 */
|
||||
U16 Reserved2; /* 0x16 */
|
||||
U16 ShutdownLatency; /* 0x16 */
|
||||
U16 VendorID; /* 0x18 */
|
||||
U16 DeviceID; /* 0x1A */
|
||||
U16 SubsystemVendorID; /* 0x1C */
|
||||
U16 SubsystemID; /* 0x1E */
|
||||
U8 RevisionID; /* 0x20 */
|
||||
U8 Reserved21[3]; /* 0x21 */
|
||||
} MPI26_CONFIG_PAGE_PCIEDEV_2, *PTR_MPI26_CONFIG_PAGE_PCIEDEV_2,
|
||||
Mpi26PCIeDevicePage2_t, *pMpi26PCIeDevicePage2_t;
|
||||
|
||||
|
|
|
@ -19,6 +19,10 @@
|
|||
* 09-07-18 02.06.03 Added MPI26_EVENT_PCIE_TOPO_PI_16_LANES
|
||||
* 12-17-18 02.06.04 Addd MPI2_EXT_IMAGE_TYPE_PBLP
|
||||
* Shorten some defines to be compatible with DOS
|
||||
* 06-24-19 02.06.05 Whitespace adjustments to help with identifier
|
||||
* checking tool.
|
||||
* 10-02-19 02.06.06 Added MPI26_IMAGE_HEADER_SIG1_COREDUMP
|
||||
* Added MPI2_FLASH_REGION_COREDUMP
|
||||
*/
|
||||
#ifndef MPI2_IMAGE_H
|
||||
#define MPI2_IMAGE_H
|
||||
|
@ -213,6 +217,8 @@ typedef struct _MPI26_COMPONENT_IMAGE_HEADER {
|
|||
#define MPI26_IMAGE_HEADER_SIG1_NVDATA (0x5444564E)
|
||||
#define MPI26_IMAGE_HEADER_SIG1_GAS_GAUGE (0x20534147)
|
||||
#define MPI26_IMAGE_HEADER_SIG1_PBLP (0x504C4250)
|
||||
/* little-endian "DUMP" */
|
||||
#define MPI26_IMAGE_HEADER_SIG1_COREDUMP (0x504D5544)
|
||||
|
||||
/**** Definitions for Signature2 field ****/
|
||||
#define MPI26_IMAGE_HEADER_SIGNATURE2_VALUE (0x50584546)
|
||||
|
@ -359,6 +365,7 @@ typedef struct _MPI2_FLASH_LAYOUT_DATA {
|
|||
#define MPI2_FLASH_REGION_MR_NVDATA (0x14)
|
||||
#define MPI2_FLASH_REGION_CPLD (0x15)
|
||||
#define MPI2_FLASH_REGION_PSOC (0x16)
|
||||
#define MPI2_FLASH_REGION_COREDUMP (0x17)
|
||||
|
||||
/*ImageRevision */
|
||||
#define MPI2_FLASH_LAYOUT_IMAGE_REVISION (0x00)
|
||||
|
|
|
@ -175,6 +175,10 @@
|
|||
* Moved FW image definitions ionto new mpi2_image,h
|
||||
* 08-14-18 02.00.36 Fixed definition of MPI2_FW_DOWNLOAD_ITYPE_PSOC (0x16)
|
||||
* 09-07-18 02.00.37 Added MPI26_EVENT_PCIE_TOPO_PI_16_LANES
|
||||
* 10-02-19 02.00.38 Added MPI26_IOCINIT_CFGFLAGS_COREDUMP_ENABLE
|
||||
* Added MPI26_IOCFACTS_CAPABILITY_COREDUMP_ENABLED
|
||||
* Added MPI2_FW_DOWNLOAD_ITYPE_COREDUMP
|
||||
* Added MPI2_FW_UPLOAD_ITYPE_COREDUMP
|
||||
* --------------------------------------------------------------------------
|
||||
*/
|
||||
|
||||
|
@ -248,6 +252,7 @@ typedef struct _MPI2_IOC_INIT_REQUEST {
|
|||
|
||||
/*ConfigurationFlags */
|
||||
#define MPI26_IOCINIT_CFGFLAGS_NVME_SGL_FORMAT (0x0001)
|
||||
#define MPI26_IOCINIT_CFGFLAGS_COREDUMP_ENABLE (0x0002)
|
||||
|
||||
/*minimum depth for a Reply Descriptor Post Queue */
|
||||
#define MPI2_RDPQ_DEPTH_MIN (16)
|
||||
|
@ -377,6 +382,7 @@ typedef struct _MPI2_IOC_FACTS_REPLY {
|
|||
/*ProductID field uses MPI2_FW_HEADER_PID_ */
|
||||
|
||||
/*IOCCapabilities */
|
||||
#define MPI26_IOCFACTS_CAPABILITY_COREDUMP_ENABLED (0x00200000)
|
||||
#define MPI26_IOCFACTS_CAPABILITY_PCIE_SRIOV (0x00100000)
|
||||
#define MPI26_IOCFACTS_CAPABILITY_ATOMIC_REQ (0x00080000)
|
||||
#define MPI2_IOCFACTS_CAPABILITY_RDPQ_ARRAY_CAPABLE (0x00040000)
|
||||
|
@ -1458,8 +1464,8 @@ typedef struct _MPI2_FW_DOWNLOAD_REQUEST {
|
|||
/*MPI v2.6 and newer */
|
||||
#define MPI2_FW_DOWNLOAD_ITYPE_CPLD (0x15)
|
||||
#define MPI2_FW_DOWNLOAD_ITYPE_PSOC (0x16)
|
||||
#define MPI2_FW_DOWNLOAD_ITYPE_COREDUMP (0x17)
|
||||
#define MPI2_FW_DOWNLOAD_ITYPE_MIN_PRODUCT_SPECIFIC (0xF0)
|
||||
#define MPI2_FW_DOWNLOAD_ITYPE_TERMINATE (0xFF)
|
||||
|
||||
/*MPI v2.0 FWDownload TransactionContext Element */
|
||||
typedef struct _MPI2_FW_DOWNLOAD_TCSGE {
|
||||
|
|
|
@ -123,8 +123,15 @@ enum mpt3sas_perf_mode {
|
|||
MPT_PERF_MODE_LATENCY = 2,
|
||||
};
|
||||
|
||||
static int
|
||||
_base_wait_on_iocstate(struct MPT3SAS_ADAPTER *ioc,
|
||||
u32 ioc_state, int timeout);
|
||||
static int
|
||||
_base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc);
|
||||
static void
|
||||
_base_mask_interrupts(struct MPT3SAS_ADAPTER *ioc);
|
||||
static void
|
||||
_base_clear_outstanding_commands(struct MPT3SAS_ADAPTER *ioc);
|
||||
|
||||
/**
|
||||
* mpt3sas_base_check_cmd_timeout - Function
|
||||
|
@ -609,7 +616,8 @@ _base_fault_reset_work(struct work_struct *work)
|
|||
|
||||
|
||||
spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
|
||||
if (ioc->shost_recovery || ioc->pci_error_recovery)
|
||||
if ((ioc->shost_recovery && (ioc->ioc_coredump_loop == 0)) ||
|
||||
ioc->pci_error_recovery)
|
||||
goto rearm_timer;
|
||||
spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
|
||||
|
||||
|
@ -656,20 +664,64 @@ _base_fault_reset_work(struct work_struct *work)
|
|||
return; /* don't rearm timer */
|
||||
}
|
||||
|
||||
ioc->non_operational_loop = 0;
|
||||
if ((doorbell & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_COREDUMP) {
|
||||
u8 timeout = (ioc->manu_pg11.CoreDumpTOSec) ?
|
||||
ioc->manu_pg11.CoreDumpTOSec :
|
||||
MPT3SAS_DEFAULT_COREDUMP_TIMEOUT_SECONDS;
|
||||
|
||||
timeout /= (FAULT_POLLING_INTERVAL/1000);
|
||||
|
||||
if (ioc->ioc_coredump_loop == 0) {
|
||||
mpt3sas_print_coredump_info(ioc,
|
||||
doorbell & MPI2_DOORBELL_DATA_MASK);
|
||||
/* do not accept any IOs and disable the interrupts */
|
||||
spin_lock_irqsave(
|
||||
&ioc->ioc_reset_in_progress_lock, flags);
|
||||
ioc->shost_recovery = 1;
|
||||
spin_unlock_irqrestore(
|
||||
&ioc->ioc_reset_in_progress_lock, flags);
|
||||
_base_mask_interrupts(ioc);
|
||||
_base_clear_outstanding_commands(ioc);
|
||||
}
|
||||
|
||||
ioc_info(ioc, "%s: CoreDump loop %d.",
|
||||
__func__, ioc->ioc_coredump_loop);
|
||||
|
||||
/* Wait until CoreDump completes or times out */
|
||||
if (ioc->ioc_coredump_loop++ < timeout) {
|
||||
spin_lock_irqsave(
|
||||
&ioc->ioc_reset_in_progress_lock, flags);
|
||||
goto rearm_timer;
|
||||
}
|
||||
}
|
||||
|
||||
if (ioc->ioc_coredump_loop) {
|
||||
if ((doorbell & MPI2_IOC_STATE_MASK) != MPI2_IOC_STATE_COREDUMP)
|
||||
ioc_err(ioc, "%s: CoreDump completed. LoopCount: %d",
|
||||
__func__, ioc->ioc_coredump_loop);
|
||||
else
|
||||
ioc_err(ioc, "%s: CoreDump Timed out. LoopCount: %d",
|
||||
__func__, ioc->ioc_coredump_loop);
|
||||
ioc->ioc_coredump_loop = MPT3SAS_COREDUMP_LOOP_DONE;
|
||||
}
|
||||
ioc->non_operational_loop = 0;
|
||||
if ((doorbell & MPI2_IOC_STATE_MASK) != MPI2_IOC_STATE_OPERATIONAL) {
|
||||
rc = mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
|
||||
ioc_warn(ioc, "%s: hard reset: %s\n",
|
||||
__func__, rc == 0 ? "success" : "failed");
|
||||
doorbell = mpt3sas_base_get_iocstate(ioc, 0);
|
||||
if ((doorbell & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT)
|
||||
mpt3sas_base_fault_info(ioc, doorbell &
|
||||
if ((doorbell & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) {
|
||||
mpt3sas_print_fault_code(ioc, doorbell &
|
||||
MPI2_DOORBELL_DATA_MASK);
|
||||
} else if ((doorbell & MPI2_IOC_STATE_MASK) ==
|
||||
MPI2_IOC_STATE_COREDUMP)
|
||||
mpt3sas_print_coredump_info(ioc, doorbell &
|
||||
MPI2_DOORBELL_DATA_MASK);
|
||||
if (rc && (doorbell & MPI2_IOC_STATE_MASK) !=
|
||||
MPI2_IOC_STATE_OPERATIONAL)
|
||||
return; /* don't rearm timer */
|
||||
}
|
||||
ioc->ioc_coredump_loop = 0;
|
||||
|
||||
spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
|
||||
rearm_timer:
|
||||
|
@ -748,6 +800,49 @@ mpt3sas_base_fault_info(struct MPT3SAS_ADAPTER *ioc , u16 fault_code)
|
|||
ioc_err(ioc, "fault_state(0x%04x)!\n", fault_code);
|
||||
}
|
||||
|
||||
/**
|
||||
* mpt3sas_base_coredump_info - verbose translation of firmware CoreDump state
|
||||
* @ioc: per adapter object
|
||||
* @fault_code: fault code
|
||||
*
|
||||
* Return nothing.
|
||||
*/
|
||||
void
|
||||
mpt3sas_base_coredump_info(struct MPT3SAS_ADAPTER *ioc, u16 fault_code)
|
||||
{
|
||||
ioc_err(ioc, "coredump_state(0x%04x)!\n", fault_code);
|
||||
}
|
||||
|
||||
/**
|
||||
* mpt3sas_base_wait_for_coredump_completion - Wait until coredump
|
||||
* completes or times out
|
||||
* @ioc: per adapter object
|
||||
*
|
||||
* Returns 0 for success, non-zero for failure.
|
||||
*/
|
||||
int
|
||||
mpt3sas_base_wait_for_coredump_completion(struct MPT3SAS_ADAPTER *ioc,
|
||||
const char *caller)
|
||||
{
|
||||
u8 timeout = (ioc->manu_pg11.CoreDumpTOSec) ?
|
||||
ioc->manu_pg11.CoreDumpTOSec :
|
||||
MPT3SAS_DEFAULT_COREDUMP_TIMEOUT_SECONDS;
|
||||
|
||||
int ioc_state = _base_wait_on_iocstate(ioc, MPI2_IOC_STATE_FAULT,
|
||||
timeout);
|
||||
|
||||
if (ioc_state)
|
||||
ioc_err(ioc,
|
||||
"%s: CoreDump timed out. (ioc_state=0x%x)\n",
|
||||
caller, ioc_state);
|
||||
else
|
||||
ioc_info(ioc,
|
||||
"%s: CoreDump completed. (ioc_state=0x%x)\n",
|
||||
caller, ioc_state);
|
||||
|
||||
return ioc_state;
|
||||
}
|
||||
|
||||
/**
|
||||
* mpt3sas_halt_firmware - halt's mpt controller firmware
|
||||
* @ioc: per adapter object
|
||||
|
@ -768,9 +863,14 @@ mpt3sas_halt_firmware(struct MPT3SAS_ADAPTER *ioc)
|
|||
dump_stack();
|
||||
|
||||
doorbell = ioc->base_readl(&ioc->chip->Doorbell);
|
||||
if ((doorbell & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT)
|
||||
mpt3sas_base_fault_info(ioc , doorbell);
|
||||
else {
|
||||
if ((doorbell & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) {
|
||||
mpt3sas_print_fault_code(ioc, doorbell &
|
||||
MPI2_DOORBELL_DATA_MASK);
|
||||
} else if ((doorbell & MPI2_IOC_STATE_MASK) ==
|
||||
MPI2_IOC_STATE_COREDUMP) {
|
||||
mpt3sas_print_coredump_info(ioc, doorbell &
|
||||
MPI2_DOORBELL_DATA_MASK);
|
||||
} else {
|
||||
writel(0xC0FFEE00, &ioc->chip->Doorbell);
|
||||
ioc_err(ioc, "Firmware is halted due to command timeout\n");
|
||||
}
|
||||
|
@ -3103,6 +3203,8 @@ _base_enable_msix(struct MPT3SAS_ADAPTER *ioc)
|
|||
*/
|
||||
if (!ioc->combined_reply_queue &&
|
||||
ioc->hba_mpi_version_belonged != MPI2_VERSION) {
|
||||
ioc_info(ioc,
|
||||
"combined ReplyQueue is off, Enabling msix load balance\n");
|
||||
ioc->msix_load_balance = true;
|
||||
}
|
||||
|
||||
|
@ -3115,9 +3217,7 @@ _base_enable_msix(struct MPT3SAS_ADAPTER *ioc)
|
|||
|
||||
r = _base_alloc_irq_vectors(ioc);
|
||||
if (r < 0) {
|
||||
dfailprintk(ioc,
|
||||
ioc_info(ioc, "pci_alloc_irq_vectors failed (r=%d) !!!\n",
|
||||
r));
|
||||
ioc_info(ioc, "pci_alloc_irq_vectors failed (r=%d) !!!\n", r);
|
||||
goto try_ioapic;
|
||||
}
|
||||
|
||||
|
@ -3206,9 +3306,15 @@ _base_check_for_fault_and_issue_reset(struct MPT3SAS_ADAPTER *ioc)
|
|||
dhsprintk(ioc, pr_info("%s: ioc_state(0x%08x)\n", __func__, ioc_state));
|
||||
|
||||
if ((ioc_state & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) {
|
||||
mpt3sas_base_fault_info(ioc, ioc_state &
|
||||
mpt3sas_print_fault_code(ioc, ioc_state &
|
||||
MPI2_DOORBELL_DATA_MASK);
|
||||
rc = _base_diag_reset(ioc);
|
||||
} else if ((ioc_state & MPI2_IOC_STATE_MASK) ==
|
||||
MPI2_IOC_STATE_COREDUMP) {
|
||||
mpt3sas_print_coredump_info(ioc, ioc_state &
|
||||
MPI2_DOORBELL_DATA_MASK);
|
||||
mpt3sas_base_wait_for_coredump_completion(ioc, __func__);
|
||||
rc = _base_diag_reset(ioc);
|
||||
}
|
||||
|
||||
return rc;
|
||||
|
@ -3279,7 +3385,8 @@ mpt3sas_base_map_resources(struct MPT3SAS_ADAPTER *ioc)
|
|||
}
|
||||
|
||||
if (ioc->chip == NULL) {
|
||||
ioc_err(ioc, "unable to map adapter memory! or resource not found\n");
|
||||
ioc_err(ioc,
|
||||
"unable to map adapter memory! or resource not found\n");
|
||||
r = -EINVAL;
|
||||
goto out_fail;
|
||||
}
|
||||
|
@ -3318,8 +3425,8 @@ mpt3sas_base_map_resources(struct MPT3SAS_ADAPTER *ioc)
|
|||
ioc->combined_reply_index_count,
|
||||
sizeof(resource_size_t *), GFP_KERNEL);
|
||||
if (!ioc->replyPostRegisterIndex) {
|
||||
dfailprintk(ioc,
|
||||
ioc_warn(ioc, "allocation for reply Post Register Index failed!!!\n"));
|
||||
ioc_err(ioc,
|
||||
"allocation for replyPostRegisterIndex failed!\n");
|
||||
r = -ENOMEM;
|
||||
goto out_fail;
|
||||
}
|
||||
|
@ -3466,6 +3573,22 @@ _base_get_msix_index(struct MPT3SAS_ADAPTER *ioc,
|
|||
return ioc->cpu_msix_table[raw_smp_processor_id()];
|
||||
}
|
||||
|
||||
/**
|
||||
* _base_sdev_nr_inflight_request -get number of inflight requests
|
||||
* of a request queue.
|
||||
* @q: request_queue object
|
||||
*
|
||||
* returns number of inflight request of a request queue.
|
||||
*/
|
||||
inline unsigned long
|
||||
_base_sdev_nr_inflight_request(struct request_queue *q)
|
||||
{
|
||||
struct blk_mq_hw_ctx *hctx = q->queue_hw_ctx[0];
|
||||
|
||||
return atomic_read(&hctx->nr_active);
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* _base_get_high_iops_msix_index - get the msix index of
|
||||
* high iops queues
|
||||
|
@ -3485,7 +3608,7 @@ _base_get_high_iops_msix_index(struct MPT3SAS_ADAPTER *ioc,
|
|||
* reply queues in terms of batch count 16 when outstanding
|
||||
* IOs on the target device is >=8.
|
||||
*/
|
||||
if (atomic_read(&scmd->device->device_busy) >
|
||||
if (_base_sdev_nr_inflight_request(scmd->device->request_queue) >
|
||||
MPT3SAS_DEVICE_HIGH_IOPS_DEPTH)
|
||||
return base_mod64((
|
||||
atomic64_add_return(1, &ioc->high_iops_outstanding) /
|
||||
|
@ -4264,7 +4387,8 @@ _base_display_fwpkg_version(struct MPT3SAS_ADAPTER *ioc)
|
|||
fwpkg_data = dma_alloc_coherent(&ioc->pdev->dev, data_length,
|
||||
&fwpkg_data_dma, GFP_KERNEL);
|
||||
if (!fwpkg_data) {
|
||||
ioc_err(ioc, "failure at %s:%d/%s()!\n",
|
||||
ioc_err(ioc,
|
||||
"Memory allocation for fwpkg data failed at %s:%d/%s()!\n",
|
||||
__FILE__, __LINE__, __func__);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
@ -4994,12 +5118,13 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc)
|
|||
ioc->reply_free_queue_depth = ioc->hba_queue_depth + 64;
|
||||
}
|
||||
|
||||
dinitprintk(ioc,
|
||||
ioc_info(ioc, "scatter gather: sge_in_main_msg(%d), sge_per_chain(%d), sge_per_io(%d), chains_per_io(%d)\n",
|
||||
ioc->max_sges_in_main_message,
|
||||
ioc->max_sges_in_chain_message,
|
||||
ioc->shost->sg_tablesize,
|
||||
ioc->chains_needed_per_io));
|
||||
ioc_info(ioc,
|
||||
"scatter gather: sge_in_main_msg(%d), sge_per_chain(%d), "
|
||||
"sge_per_io(%d), chains_per_io(%d)\n",
|
||||
ioc->max_sges_in_main_message,
|
||||
ioc->max_sges_in_chain_message,
|
||||
ioc->shost->sg_tablesize,
|
||||
ioc->chains_needed_per_io);
|
||||
|
||||
/* reply post queue, 16 byte align */
|
||||
reply_post_free_sz = ioc->reply_post_queue_depth *
|
||||
|
@ -5109,15 +5234,13 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc)
|
|||
ioc->internal_dma = ioc->hi_priority_dma + (ioc->hi_priority_depth *
|
||||
ioc->request_sz);
|
||||
|
||||
dinitprintk(ioc,
|
||||
ioc_info(ioc, "request pool(0x%p): depth(%d), frame_size(%d), pool_size(%d kB)\n",
|
||||
ioc->request, ioc->hba_queue_depth,
|
||||
ioc->request_sz,
|
||||
(ioc->hba_queue_depth * ioc->request_sz) / 1024));
|
||||
ioc_info(ioc,
|
||||
"request pool(0x%p) - dma(0x%llx): "
|
||||
"depth(%d), frame_size(%d), pool_size(%d kB)\n",
|
||||
ioc->request, (unsigned long long) ioc->request_dma,
|
||||
ioc->hba_queue_depth, ioc->request_sz,
|
||||
(ioc->hba_queue_depth * ioc->request_sz) / 1024);
|
||||
|
||||
dinitprintk(ioc,
|
||||
ioc_info(ioc, "request pool: dma(0x%llx)\n",
|
||||
(unsigned long long)ioc->request_dma));
|
||||
total_sz += sz;
|
||||
|
||||
dinitprintk(ioc,
|
||||
|
@ -5302,13 +5425,12 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc)
|
|||
goto out;
|
||||
}
|
||||
}
|
||||
dinitprintk(ioc,
|
||||
ioc_info(ioc, "sense pool(0x%p): depth(%d), element_size(%d), pool_size(%d kB)\n",
|
||||
ioc->sense, ioc->scsiio_depth,
|
||||
SCSI_SENSE_BUFFERSIZE, sz / 1024));
|
||||
dinitprintk(ioc,
|
||||
ioc_info(ioc, "sense_dma(0x%llx)\n",
|
||||
(unsigned long long)ioc->sense_dma));
|
||||
ioc_info(ioc,
|
||||
"sense pool(0x%p)- dma(0x%llx): depth(%d),"
|
||||
"element_size(%d), pool_size(%d kB)\n",
|
||||
ioc->sense, (unsigned long long)ioc->sense_dma, ioc->scsiio_depth,
|
||||
SCSI_SENSE_BUFFERSIZE, sz / 1024);
|
||||
|
||||
total_sz += sz;
|
||||
|
||||
/* reply pool, 4 byte align */
|
||||
|
@ -5386,12 +5508,10 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc)
|
|||
ioc_err(ioc, "config page: dma_pool_alloc failed\n");
|
||||
goto out;
|
||||
}
|
||||
dinitprintk(ioc,
|
||||
ioc_info(ioc, "config page(0x%p): size(%d)\n",
|
||||
ioc->config_page, ioc->config_page_sz));
|
||||
dinitprintk(ioc,
|
||||
ioc_info(ioc, "config_page_dma(0x%llx)\n",
|
||||
(unsigned long long)ioc->config_page_dma));
|
||||
|
||||
ioc_info(ioc, "config page(0x%p) - dma(0x%llx): size(%d)\n",
|
||||
ioc->config_page, (unsigned long long)ioc->config_page_dma,
|
||||
ioc->config_page_sz);
|
||||
total_sz += ioc->config_page_sz;
|
||||
|
||||
ioc_info(ioc, "Allocated physical memory: size(%d kB)\n",
|
||||
|
@ -5446,6 +5566,8 @@ _base_wait_on_iocstate(struct MPT3SAS_ADAPTER *ioc, u32 ioc_state, int timeout)
|
|||
return 0;
|
||||
if (count && current_state == MPI2_IOC_STATE_FAULT)
|
||||
break;
|
||||
if (count && current_state == MPI2_IOC_STATE_COREDUMP)
|
||||
break;
|
||||
|
||||
usleep_range(1000, 1500);
|
||||
count++;
|
||||
|
@ -5547,7 +5669,12 @@ _base_wait_for_doorbell_ack(struct MPT3SAS_ADAPTER *ioc, int timeout)
|
|||
doorbell = ioc->base_readl(&ioc->chip->Doorbell);
|
||||
if ((doorbell & MPI2_IOC_STATE_MASK) ==
|
||||
MPI2_IOC_STATE_FAULT) {
|
||||
mpt3sas_base_fault_info(ioc , doorbell);
|
||||
mpt3sas_print_fault_code(ioc, doorbell);
|
||||
return -EFAULT;
|
||||
}
|
||||
if ((doorbell & MPI2_IOC_STATE_MASK) ==
|
||||
MPI2_IOC_STATE_COREDUMP) {
|
||||
mpt3sas_print_coredump_info(ioc, doorbell);
|
||||
return -EFAULT;
|
||||
}
|
||||
} else if (int_status == 0xFFFFFFFF)
|
||||
|
@ -5609,6 +5736,7 @@ _base_send_ioc_reset(struct MPT3SAS_ADAPTER *ioc, u8 reset_type, int timeout)
|
|||
{
|
||||
u32 ioc_state;
|
||||
int r = 0;
|
||||
unsigned long flags;
|
||||
|
||||
if (reset_type != MPI2_FUNCTION_IOC_MESSAGE_UNIT_RESET) {
|
||||
ioc_err(ioc, "%s: unknown reset_type\n", __func__);
|
||||
|
@ -5627,6 +5755,7 @@ _base_send_ioc_reset(struct MPT3SAS_ADAPTER *ioc, u8 reset_type, int timeout)
|
|||
r = -EFAULT;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ioc_state = _base_wait_on_iocstate(ioc, MPI2_IOC_STATE_READY, timeout);
|
||||
if (ioc_state) {
|
||||
ioc_err(ioc, "%s: failed going to ready state (ioc_state=0x%x)\n",
|
||||
|
@ -5635,6 +5764,26 @@ _base_send_ioc_reset(struct MPT3SAS_ADAPTER *ioc, u8 reset_type, int timeout)
|
|||
goto out;
|
||||
}
|
||||
out:
|
||||
if (r != 0) {
|
||||
ioc_state = mpt3sas_base_get_iocstate(ioc, 0);
|
||||
spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
|
||||
/*
|
||||
* Wait for IOC state CoreDump to clear only during
|
||||
* HBA initialization & release time.
|
||||
*/
|
||||
if ((ioc_state & MPI2_IOC_STATE_MASK) ==
|
||||
MPI2_IOC_STATE_COREDUMP && (ioc->is_driver_loading == 1 ||
|
||||
ioc->fault_reset_work_q == NULL)) {
|
||||
spin_unlock_irqrestore(
|
||||
&ioc->ioc_reset_in_progress_lock, flags);
|
||||
mpt3sas_print_coredump_info(ioc, ioc_state);
|
||||
mpt3sas_base_wait_for_coredump_completion(ioc,
|
||||
__func__);
|
||||
spin_lock_irqsave(
|
||||
&ioc->ioc_reset_in_progress_lock, flags);
|
||||
}
|
||||
spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
|
||||
}
|
||||
ioc_info(ioc, "message unit reset: %s\n",
|
||||
r == 0 ? "SUCCESS" : "FAILED");
|
||||
return r;
|
||||
|
@ -5782,7 +5931,7 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
|
|||
mfp = (__le32 *)reply;
|
||||
pr_info("\toffset:data\n");
|
||||
for (i = 0; i < reply_bytes/4; i++)
|
||||
pr_info("\t[0x%02x]:%08x\n", i*4,
|
||||
ioc_info(ioc, "\t[0x%02x]:%08x\n", i*4,
|
||||
le32_to_cpu(mfp[i]));
|
||||
}
|
||||
return 0;
|
||||
|
@ -5850,10 +5999,9 @@ mpt3sas_base_sas_iounit_control(struct MPT3SAS_ADAPTER *ioc,
|
|||
ioc->ioc_link_reset_in_progress)
|
||||
ioc->ioc_link_reset_in_progress = 0;
|
||||
if (!(ioc->base_cmds.status & MPT3_CMD_COMPLETE)) {
|
||||
issue_reset =
|
||||
mpt3sas_base_check_cmd_timeout(ioc,
|
||||
ioc->base_cmds.status, mpi_request,
|
||||
sizeof(Mpi2SasIoUnitControlRequest_t)/4);
|
||||
mpt3sas_check_cmd_timeout(ioc, ioc->base_cmds.status,
|
||||
mpi_request, sizeof(Mpi2SasIoUnitControlRequest_t)/4,
|
||||
issue_reset);
|
||||
goto issue_host_reset;
|
||||
}
|
||||
if (ioc->base_cmds.status & MPT3_CMD_REPLY_VALID)
|
||||
|
@ -5926,10 +6074,9 @@ mpt3sas_base_scsi_enclosure_processor(struct MPT3SAS_ADAPTER *ioc,
|
|||
wait_for_completion_timeout(&ioc->base_cmds.done,
|
||||
msecs_to_jiffies(10000));
|
||||
if (!(ioc->base_cmds.status & MPT3_CMD_COMPLETE)) {
|
||||
issue_reset =
|
||||
mpt3sas_base_check_cmd_timeout(ioc,
|
||||
ioc->base_cmds.status, mpi_request,
|
||||
sizeof(Mpi2SepRequest_t)/4);
|
||||
mpt3sas_check_cmd_timeout(ioc,
|
||||
ioc->base_cmds.status, mpi_request,
|
||||
sizeof(Mpi2SepRequest_t)/4, issue_reset);
|
||||
goto issue_host_reset;
|
||||
}
|
||||
if (ioc->base_cmds.status & MPT3_CMD_REPLY_VALID)
|
||||
|
@ -6028,9 +6175,15 @@ _base_wait_for_iocstate(struct MPT3SAS_ADAPTER *ioc, int timeout)
|
|||
}
|
||||
|
||||
if ((ioc_state & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) {
|
||||
mpt3sas_base_fault_info(ioc, ioc_state &
|
||||
mpt3sas_print_fault_code(ioc, ioc_state &
|
||||
MPI2_DOORBELL_DATA_MASK);
|
||||
goto issue_diag_reset;
|
||||
} else if ((ioc_state & MPI2_IOC_STATE_MASK) ==
|
||||
MPI2_IOC_STATE_COREDUMP) {
|
||||
ioc_info(ioc,
|
||||
"%s: Skipping the diag reset here. (ioc_state=0x%x)\n",
|
||||
__func__, ioc_state);
|
||||
return -EFAULT;
|
||||
}
|
||||
|
||||
ioc_state = _base_wait_on_iocstate(ioc, MPI2_IOC_STATE_READY, timeout);
|
||||
|
@ -6209,6 +6362,12 @@ _base_send_ioc_init(struct MPT3SAS_ADAPTER *ioc)
|
|||
cpu_to_le64((u64)ioc->reply_post[0].reply_post_free_dma);
|
||||
}
|
||||
|
||||
/*
|
||||
* Set the flag to enable CoreDump state feature in IOC firmware.
|
||||
*/
|
||||
mpi_request.ConfigurationFlags |=
|
||||
cpu_to_le16(MPI26_IOCINIT_CFGFLAGS_COREDUMP_ENABLE);
|
||||
|
||||
/* This time stamp specifies number of milliseconds
|
||||
* since epoch ~ midnight January 1, 1970.
|
||||
*/
|
||||
|
@ -6220,9 +6379,9 @@ _base_send_ioc_init(struct MPT3SAS_ADAPTER *ioc)
|
|||
int i;
|
||||
|
||||
mfp = (__le32 *)&mpi_request;
|
||||
pr_info("\toffset:data\n");
|
||||
ioc_info(ioc, "\toffset:data\n");
|
||||
for (i = 0; i < sizeof(Mpi2IOCInitRequest_t)/4; i++)
|
||||
pr_info("\t[0x%02x]:%08x\n", i*4,
|
||||
ioc_info(ioc, "\t[0x%02x]:%08x\n", i*4,
|
||||
le32_to_cpu(mfp[i]));
|
||||
}
|
||||
|
||||
|
@ -6592,8 +6751,11 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc)
|
|||
/* wait 100 msec */
|
||||
msleep(100);
|
||||
|
||||
if (count++ > 20)
|
||||
if (count++ > 20) {
|
||||
ioc_info(ioc,
|
||||
"Stop writing magic sequence after 20 retries\n");
|
||||
goto out;
|
||||
}
|
||||
|
||||
host_diagnostic = ioc->base_readl(&ioc->chip->HostDiagnostic);
|
||||
drsprintk(ioc,
|
||||
|
@ -6617,8 +6779,11 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc)
|
|||
|
||||
host_diagnostic = ioc->base_readl(&ioc->chip->HostDiagnostic);
|
||||
|
||||
if (host_diagnostic == 0xFFFFFFFF)
|
||||
if (host_diagnostic == 0xFFFFFFFF) {
|
||||
ioc_info(ioc,
|
||||
"Invalid host diagnostic register value\n");
|
||||
goto out;
|
||||
}
|
||||
if (!(host_diagnostic & MPI2_DIAG_RESET_ADAPTER))
|
||||
break;
|
||||
|
||||
|
@ -6705,16 +6870,33 @@ _base_make_ioc_ready(struct MPT3SAS_ADAPTER *ioc, enum reset_type type)
|
|||
return 0;
|
||||
|
||||
if (ioc_state & MPI2_DOORBELL_USED) {
|
||||
dhsprintk(ioc, ioc_info(ioc, "unexpected doorbell active!\n"));
|
||||
ioc_info(ioc, "unexpected doorbell active!\n");
|
||||
goto issue_diag_reset;
|
||||
}
|
||||
|
||||
if ((ioc_state & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) {
|
||||
mpt3sas_base_fault_info(ioc, ioc_state &
|
||||
mpt3sas_print_fault_code(ioc, ioc_state &
|
||||
MPI2_DOORBELL_DATA_MASK);
|
||||
goto issue_diag_reset;
|
||||
}
|
||||
|
||||
if ((ioc_state & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_COREDUMP) {
|
||||
/*
|
||||
* if host reset is invoked while watch dog thread is waiting
|
||||
* for IOC state to be changed to Fault state then driver has
|
||||
* to wait here for CoreDump state to clear otherwise reset
|
||||
* will be issued to the FW and FW move the IOC state to
|
||||
* reset state without copying the FW logs to coredump region.
|
||||
*/
|
||||
if (ioc->ioc_coredump_loop != MPT3SAS_COREDUMP_LOOP_DONE) {
|
||||
mpt3sas_print_coredump_info(ioc, ioc_state &
|
||||
MPI2_DOORBELL_DATA_MASK);
|
||||
mpt3sas_base_wait_for_coredump_completion(ioc,
|
||||
__func__);
|
||||
}
|
||||
goto issue_diag_reset;
|
||||
}
|
||||
|
||||
if (type == FORCE_BIG_HAMMER)
|
||||
goto issue_diag_reset;
|
||||
|
||||
|
@ -6958,8 +7140,7 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
|
|||
ioc->cpu_msix_table = kzalloc(ioc->cpu_msix_table_sz, GFP_KERNEL);
|
||||
ioc->reply_queue_count = 1;
|
||||
if (!ioc->cpu_msix_table) {
|
||||
dfailprintk(ioc,
|
||||
ioc_info(ioc, "allocation for cpu_msix_table failed!!!\n"));
|
||||
ioc_info(ioc, "Allocation for cpu_msix_table failed!!!\n");
|
||||
r = -ENOMEM;
|
||||
goto out_free_resources;
|
||||
}
|
||||
|
@ -6968,8 +7149,7 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
|
|||
ioc->reply_post_host_index = kcalloc(ioc->cpu_msix_table_sz,
|
||||
sizeof(resource_size_t *), GFP_KERNEL);
|
||||
if (!ioc->reply_post_host_index) {
|
||||
dfailprintk(ioc,
|
||||
ioc_info(ioc, "allocation for reply_post_host_index failed!!!\n"));
|
||||
ioc_info(ioc, "Allocation for reply_post_host_index failed!!!\n");
|
||||
r = -ENOMEM;
|
||||
goto out_free_resources;
|
||||
}
|
||||
|
@ -7195,6 +7375,7 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
|
|||
sizeof(struct mpt3sas_facts));
|
||||
|
||||
ioc->non_operational_loop = 0;
|
||||
ioc->ioc_coredump_loop = 0;
|
||||
ioc->got_task_abort_from_ioctl = 0;
|
||||
return 0;
|
||||
|
||||
|
@ -7276,14 +7457,14 @@ static void _base_pre_reset_handler(struct MPT3SAS_ADAPTER *ioc)
|
|||
}
|
||||
|
||||
/**
|
||||
* _base_after_reset_handler - after reset handler
|
||||
* _base_clear_outstanding_mpt_commands - clears outstanding mpt commands
|
||||
* @ioc: per adapter object
|
||||
*/
|
||||
static void _base_after_reset_handler(struct MPT3SAS_ADAPTER *ioc)
|
||||
static void
|
||||
_base_clear_outstanding_mpt_commands(struct MPT3SAS_ADAPTER *ioc)
|
||||
{
|
||||
mpt3sas_scsih_after_reset_handler(ioc);
|
||||
mpt3sas_ctl_after_reset_handler(ioc);
|
||||
dtmprintk(ioc, ioc_info(ioc, "%s: MPT3_IOC_AFTER_RESET\n", __func__));
|
||||
dtmprintk(ioc,
|
||||
ioc_info(ioc, "%s: clear outstanding mpt cmds\n", __func__));
|
||||
if (ioc->transport_cmds.status & MPT3_CMD_PENDING) {
|
||||
ioc->transport_cmds.status |= MPT3_CMD_RESET;
|
||||
mpt3sas_base_free_smid(ioc, ioc->transport_cmds.smid);
|
||||
|
@ -7316,6 +7497,17 @@ static void _base_after_reset_handler(struct MPT3SAS_ADAPTER *ioc)
|
|||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* _base_clear_outstanding_commands - clear all outstanding commands
|
||||
* @ioc: per adapter object
|
||||
*/
|
||||
static void _base_clear_outstanding_commands(struct MPT3SAS_ADAPTER *ioc)
|
||||
{
|
||||
mpt3sas_scsih_clear_outstanding_scsi_tm_commands(ioc);
|
||||
mpt3sas_ctl_clear_outstanding_ioctls(ioc);
|
||||
_base_clear_outstanding_mpt_commands(ioc);
|
||||
}
|
||||
|
||||
/**
|
||||
* _base_reset_done_handler - reset done handler
|
||||
* @ioc: per adapter object
|
||||
|
@ -7474,7 +7666,9 @@ mpt3sas_base_hard_reset_handler(struct MPT3SAS_ADAPTER *ioc,
|
|||
MPT3_DIAG_BUFFER_IS_RELEASED))) {
|
||||
is_trigger = 1;
|
||||
ioc_state = mpt3sas_base_get_iocstate(ioc, 0);
|
||||
if ((ioc_state & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT)
|
||||
if ((ioc_state & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT ||
|
||||
(ioc_state & MPI2_IOC_STATE_MASK) ==
|
||||
MPI2_IOC_STATE_COREDUMP)
|
||||
is_fault = 1;
|
||||
}
|
||||
_base_pre_reset_handler(ioc);
|
||||
|
@ -7483,7 +7677,7 @@ mpt3sas_base_hard_reset_handler(struct MPT3SAS_ADAPTER *ioc,
|
|||
r = _base_make_ioc_ready(ioc, type);
|
||||
if (r)
|
||||
goto out;
|
||||
_base_after_reset_handler(ioc);
|
||||
_base_clear_outstanding_commands(ioc);
|
||||
|
||||
/* If this hard reset is called while port enable is active, then
|
||||
* there is no reason to call make_ioc_operational
|
||||
|
@ -7514,9 +7708,7 @@ mpt3sas_base_hard_reset_handler(struct MPT3SAS_ADAPTER *ioc,
|
|||
_base_reset_done_handler(ioc);
|
||||
|
||||
out:
|
||||
dtmprintk(ioc,
|
||||
ioc_info(ioc, "%s: %s\n",
|
||||
__func__, r == 0 ? "SUCCESS" : "FAILED"));
|
||||
ioc_info(ioc, "%s: %s\n", __func__, r == 0 ? "SUCCESS" : "FAILED");
|
||||
|
||||
spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
|
||||
ioc->shost_recovery = 0;
|
||||
|
|
|
@ -76,8 +76,8 @@
|
|||
#define MPT3SAS_DRIVER_NAME "mpt3sas"
|
||||
#define MPT3SAS_AUTHOR "Avago Technologies <MPT-FusionLinux.pdl@avagotech.com>"
|
||||
#define MPT3SAS_DESCRIPTION "LSI MPT Fusion SAS 3.0 Device Driver"
|
||||
#define MPT3SAS_DRIVER_VERSION "32.100.00.00"
|
||||
#define MPT3SAS_MAJOR_VERSION 32
|
||||
#define MPT3SAS_DRIVER_VERSION "33.100.00.00"
|
||||
#define MPT3SAS_MAJOR_VERSION 33
|
||||
#define MPT3SAS_MINOR_VERSION 100
|
||||
#define MPT3SAS_BUILD_VERSION 0
|
||||
#define MPT3SAS_RELEASE_VERSION 00
|
||||
|
@ -90,6 +90,10 @@
|
|||
#define MPT2SAS_BUILD_VERSION 0
|
||||
#define MPT2SAS_RELEASE_VERSION 00
|
||||
|
||||
/* CoreDump: Default timeout */
|
||||
#define MPT3SAS_DEFAULT_COREDUMP_TIMEOUT_SECONDS (15) /*15 seconds*/
|
||||
#define MPT3SAS_COREDUMP_LOOP_DONE (0xFF)
|
||||
|
||||
/*
|
||||
* Set MPT3SAS_SG_DEPTH value based on user input.
|
||||
*/
|
||||
|
@ -140,6 +144,7 @@
|
|||
#define MAX_CHAIN_ELEMT_SZ 16
|
||||
#define DEFAULT_NUM_FWCHAIN_ELEMTS 8
|
||||
|
||||
#define IO_UNIT_CONTROL_SHUTDOWN_TIMEOUT 6
|
||||
#define FW_IMG_HDR_READ_TIMEOUT 15
|
||||
|
||||
#define IOC_OPERATIONAL_WAIT_COUNT 10
|
||||
|
@ -398,7 +403,10 @@ struct Mpi2ManufacturingPage11_t {
|
|||
u8 HostTraceBufferFlags; /* 4Fh */
|
||||
u16 HostTraceBufferMaxSizeKB; /* 50h */
|
||||
u16 HostTraceBufferMinSizeKB; /* 52h */
|
||||
__le32 Reserved10[2]; /* 54h - 5Bh */
|
||||
u8 CoreDumpTOSec; /* 54h */
|
||||
u8 Reserved8; /* 55h */
|
||||
u16 Reserved9; /* 56h */
|
||||
__le32 Reserved10; /* 58h */
|
||||
};
|
||||
|
||||
/**
|
||||
|
@ -589,6 +597,7 @@ static inline void sas_device_put(struct _sas_device *s)
|
|||
* @connector_name: ASCII value of the Connector's name
|
||||
* @serial_number: pointer of serial number string allocated runtime
|
||||
* @access_status: Device's Access Status
|
||||
* @shutdown_latency: NVMe device's RTD3 Entry Latency
|
||||
* @refcount: reference count for deletion
|
||||
*/
|
||||
struct _pcie_device {
|
||||
|
@ -611,6 +620,7 @@ struct _pcie_device {
|
|||
u8 *serial_number;
|
||||
u8 reset_timeout;
|
||||
u8 access_status;
|
||||
u16 shutdown_latency;
|
||||
struct kref refcount;
|
||||
};
|
||||
/**
|
||||
|
@ -1045,6 +1055,7 @@ typedef void (*MPT3SAS_FLUSH_RUNNING_CMDS)(struct MPT3SAS_ADAPTER *ioc);
|
|||
* @cpu_msix_table: table for mapping cpus to msix index
|
||||
* @cpu_msix_table_sz: table size
|
||||
* @total_io_cnt: Gives total IO count, used to load balance the interrupts
|
||||
* @ioc_coredump_loop: will have non-zero value when FW is in CoreDump state
|
||||
* @high_iops_outstanding: used to load balance the interrupts
|
||||
* within high iops reply queues
|
||||
* @msix_load_balance: Enables load balancing of interrupts across
|
||||
|
@ -1073,6 +1084,10 @@ typedef void (*MPT3SAS_FLUSH_RUNNING_CMDS)(struct MPT3SAS_ADAPTER *ioc);
|
|||
* @event_context: unique id for each logged event
|
||||
* @event_log: event log pointer
|
||||
* @event_masks: events that are masked
|
||||
* @max_shutdown_latency: timeout value for NVMe shutdown operation,
|
||||
* which is equal that NVMe drive's RTD3 Entry Latency
|
||||
* which has reported maximum RTD3 Entry Latency value
|
||||
* among attached NVMe drives.
|
||||
* @facts: static facts data
|
||||
* @prev_fw_facts: previous fw facts data
|
||||
* @pfacts: static port facts data
|
||||
|
@ -1231,6 +1246,7 @@ struct MPT3SAS_ADAPTER {
|
|||
u32 ioc_reset_count;
|
||||
MPT3SAS_FLUSH_RUNNING_CMDS schedule_dead_ioc_flush_running_cmds;
|
||||
u32 non_operational_loop;
|
||||
u8 ioc_coredump_loop;
|
||||
atomic64_t total_io_cnt;
|
||||
atomic64_t high_iops_outstanding;
|
||||
bool msix_load_balance;
|
||||
|
@ -1283,7 +1299,7 @@ struct MPT3SAS_ADAPTER {
|
|||
|
||||
u8 tm_custom_handling;
|
||||
u8 nvme_abort_timeout;
|
||||
|
||||
u16 max_shutdown_latency;
|
||||
|
||||
/* static config pages */
|
||||
struct mpt3sas_facts facts;
|
||||
|
@ -1531,6 +1547,17 @@ void *mpt3sas_base_get_reply_virt_addr(struct MPT3SAS_ADAPTER *ioc,
|
|||
u32 mpt3sas_base_get_iocstate(struct MPT3SAS_ADAPTER *ioc, int cooked);
|
||||
|
||||
void mpt3sas_base_fault_info(struct MPT3SAS_ADAPTER *ioc , u16 fault_code);
|
||||
#define mpt3sas_print_fault_code(ioc, fault_code) \
|
||||
do { pr_err("%s fault info from func: %s\n", ioc->name, __func__); \
|
||||
mpt3sas_base_fault_info(ioc, fault_code); } while (0)
|
||||
|
||||
void mpt3sas_base_coredump_info(struct MPT3SAS_ADAPTER *ioc, u16 fault_code);
|
||||
#define mpt3sas_print_coredump_info(ioc, fault_code) \
|
||||
do { pr_err("%s fault info from func: %s\n", ioc->name, __func__); \
|
||||
mpt3sas_base_coredump_info(ioc, fault_code); } while (0)
|
||||
|
||||
int mpt3sas_base_wait_for_coredump_completion(struct MPT3SAS_ADAPTER *ioc,
|
||||
const char *caller);
|
||||
int mpt3sas_base_sas_iounit_control(struct MPT3SAS_ADAPTER *ioc,
|
||||
Mpi2SasIoUnitControlReply_t *mpi_reply,
|
||||
Mpi2SasIoUnitControlRequest_t *mpi_request);
|
||||
|
@ -1552,6 +1579,11 @@ mpt3sas_wait_for_commands_to_complete(struct MPT3SAS_ADAPTER *ioc);
|
|||
|
||||
u8 mpt3sas_base_check_cmd_timeout(struct MPT3SAS_ADAPTER *ioc,
|
||||
u8 status, void *mpi_request, int sz);
|
||||
#define mpt3sas_check_cmd_timeout(ioc, status, mpi_request, sz, issue_reset) \
|
||||
do { ioc_err(ioc, "In func: %s\n", __func__); \
|
||||
issue_reset = mpt3sas_base_check_cmd_timeout(ioc, \
|
||||
status, mpi_request, sz); } while (0)
|
||||
|
||||
int mpt3sas_wait_for_ioc(struct MPT3SAS_ADAPTER *ioc, int wait_count);
|
||||
|
||||
/* scsih shared API */
|
||||
|
@ -1560,7 +1592,8 @@ struct scsi_cmnd *mpt3sas_scsih_scsi_lookup_get(struct MPT3SAS_ADAPTER *ioc,
|
|||
u8 mpt3sas_scsih_event_callback(struct MPT3SAS_ADAPTER *ioc, u8 msix_index,
|
||||
u32 reply);
|
||||
void mpt3sas_scsih_pre_reset_handler(struct MPT3SAS_ADAPTER *ioc);
|
||||
void mpt3sas_scsih_after_reset_handler(struct MPT3SAS_ADAPTER *ioc);
|
||||
void mpt3sas_scsih_clear_outstanding_scsi_tm_commands(
|
||||
struct MPT3SAS_ADAPTER *ioc);
|
||||
void mpt3sas_scsih_reset_done_handler(struct MPT3SAS_ADAPTER *ioc);
|
||||
|
||||
int mpt3sas_scsih_issue_tm(struct MPT3SAS_ADAPTER *ioc, u16 handle, u64 lun,
|
||||
|
@ -1694,7 +1727,7 @@ void mpt3sas_ctl_exit(ushort hbas_to_enumerate);
|
|||
u8 mpt3sas_ctl_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index,
|
||||
u32 reply);
|
||||
void mpt3sas_ctl_pre_reset_handler(struct MPT3SAS_ADAPTER *ioc);
|
||||
void mpt3sas_ctl_after_reset_handler(struct MPT3SAS_ADAPTER *ioc);
|
||||
void mpt3sas_ctl_clear_outstanding_ioctls(struct MPT3SAS_ADAPTER *ioc);
|
||||
void mpt3sas_ctl_reset_done_handler(struct MPT3SAS_ADAPTER *ioc);
|
||||
u8 mpt3sas_ctl_event_callback(struct MPT3SAS_ADAPTER *ioc,
|
||||
u8 msix_index, u32 reply);
|
||||
|
|
|
@ -101,9 +101,6 @@ _config_display_some_debug(struct MPT3SAS_ADAPTER *ioc, u16 smid,
|
|||
Mpi2ConfigRequest_t *mpi_request;
|
||||
char *desc = NULL;
|
||||
|
||||
if (!(ioc->logging_level & MPT_DEBUG_CONFIG))
|
||||
return;
|
||||
|
||||
mpi_request = mpt3sas_base_get_msg_frame(ioc, smid);
|
||||
switch (mpi_request->Header.PageType & MPI2_CONFIG_PAGETYPE_MASK) {
|
||||
case MPI2_CONFIG_PAGETYPE_IO_UNIT:
|
||||
|
@ -269,7 +266,8 @@ mpt3sas_config_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index,
|
|||
mpi_reply->MsgLength*4);
|
||||
}
|
||||
ioc->config_cmds.status &= ~MPT3_CMD_PENDING;
|
||||
_config_display_some_debug(ioc, smid, "config_done", mpi_reply);
|
||||
if (ioc->logging_level & MPT_DEBUG_CONFIG)
|
||||
_config_display_some_debug(ioc, smid, "config_done", mpi_reply);
|
||||
ioc->config_cmds.smid = USHRT_MAX;
|
||||
complete(&ioc->config_cmds.done);
|
||||
return 1;
|
||||
|
@ -305,6 +303,7 @@ _config_request(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigRequest_t
|
|||
u8 retry_count, issue_host_reset = 0;
|
||||
struct config_request mem;
|
||||
u32 ioc_status = UINT_MAX;
|
||||
u8 issue_reset = 0;
|
||||
|
||||
mutex_lock(&ioc->config_cmds.mutex);
|
||||
if (ioc->config_cmds.status != MPT3_CMD_NOT_USED) {
|
||||
|
@ -378,14 +377,18 @@ _config_request(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigRequest_t
|
|||
config_request = mpt3sas_base_get_msg_frame(ioc, smid);
|
||||
ioc->config_cmds.smid = smid;
|
||||
memcpy(config_request, mpi_request, sizeof(Mpi2ConfigRequest_t));
|
||||
_config_display_some_debug(ioc, smid, "config_request", NULL);
|
||||
if (ioc->logging_level & MPT_DEBUG_CONFIG)
|
||||
_config_display_some_debug(ioc, smid, "config_request", NULL);
|
||||
init_completion(&ioc->config_cmds.done);
|
||||
ioc->put_smid_default(ioc, smid);
|
||||
wait_for_completion_timeout(&ioc->config_cmds.done, timeout*HZ);
|
||||
if (!(ioc->config_cmds.status & MPT3_CMD_COMPLETE)) {
|
||||
mpt3sas_base_check_cmd_timeout(ioc,
|
||||
ioc->config_cmds.status, mpi_request,
|
||||
sizeof(Mpi2ConfigRequest_t)/4);
|
||||
if (!(ioc->logging_level & MPT_DEBUG_CONFIG))
|
||||
_config_display_some_debug(ioc,
|
||||
smid, "config_request", NULL);
|
||||
mpt3sas_check_cmd_timeout(ioc,
|
||||
ioc->config_cmds.status, mpi_request,
|
||||
sizeof(Mpi2ConfigRequest_t)/4, issue_reset);
|
||||
retry_count++;
|
||||
if (ioc->config_cmds.smid == smid)
|
||||
mpt3sas_base_free_smid(ioc, smid);
|
||||
|
@ -404,8 +407,11 @@ _config_request(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigRequest_t
|
|||
/* Reply Frame Sanity Checks to workaround FW issues */
|
||||
if ((mpi_request->Header.PageType & 0xF) !=
|
||||
(mpi_reply->Header.PageType & 0xF)) {
|
||||
if (!(ioc->logging_level & MPT_DEBUG_CONFIG))
|
||||
_config_display_some_debug(ioc,
|
||||
smid, "config_request", NULL);
|
||||
_debug_dump_mf(mpi_request, ioc->request_sz/4);
|
||||
_debug_dump_reply(mpi_reply, ioc->request_sz/4);
|
||||
_debug_dump_reply(mpi_reply, ioc->reply_sz/4);
|
||||
panic("%s: %s: Firmware BUG: mpi_reply mismatch: Requested PageType(0x%02x) Reply PageType(0x%02x)\n",
|
||||
ioc->name, __func__,
|
||||
mpi_request->Header.PageType & 0xF,
|
||||
|
@ -415,8 +421,11 @@ _config_request(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigRequest_t
|
|||
if (((mpi_request->Header.PageType & 0xF) ==
|
||||
MPI2_CONFIG_PAGETYPE_EXTENDED) &&
|
||||
mpi_request->ExtPageType != mpi_reply->ExtPageType) {
|
||||
if (!(ioc->logging_level & MPT_DEBUG_CONFIG))
|
||||
_config_display_some_debug(ioc,
|
||||
smid, "config_request", NULL);
|
||||
_debug_dump_mf(mpi_request, ioc->request_sz/4);
|
||||
_debug_dump_reply(mpi_reply, ioc->request_sz/4);
|
||||
_debug_dump_reply(mpi_reply, ioc->reply_sz/4);
|
||||
panic("%s: %s: Firmware BUG: mpi_reply mismatch: Requested ExtPageType(0x%02x) Reply ExtPageType(0x%02x)\n",
|
||||
ioc->name, __func__,
|
||||
mpi_request->ExtPageType,
|
||||
|
@ -439,8 +448,11 @@ _config_request(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigRequest_t
|
|||
if (p) {
|
||||
if ((mpi_request->Header.PageType & 0xF) !=
|
||||
(p[3] & 0xF)) {
|
||||
if (!(ioc->logging_level & MPT_DEBUG_CONFIG))
|
||||
_config_display_some_debug(ioc,
|
||||
smid, "config_request", NULL);
|
||||
_debug_dump_mf(mpi_request, ioc->request_sz/4);
|
||||
_debug_dump_reply(mpi_reply, ioc->request_sz/4);
|
||||
_debug_dump_reply(mpi_reply, ioc->reply_sz/4);
|
||||
_debug_dump_config(p, min_t(u16, mem.sz,
|
||||
config_page_sz)/4);
|
||||
panic("%s: %s: Firmware BUG: config page mismatch: Requested PageType(0x%02x) Reply PageType(0x%02x)\n",
|
||||
|
@ -452,8 +464,11 @@ _config_request(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigRequest_t
|
|||
if (((mpi_request->Header.PageType & 0xF) ==
|
||||
MPI2_CONFIG_PAGETYPE_EXTENDED) &&
|
||||
(mpi_request->ExtPageType != p[6])) {
|
||||
if (!(ioc->logging_level & MPT_DEBUG_CONFIG))
|
||||
_config_display_some_debug(ioc,
|
||||
smid, "config_request", NULL);
|
||||
_debug_dump_mf(mpi_request, ioc->request_sz/4);
|
||||
_debug_dump_reply(mpi_reply, ioc->request_sz/4);
|
||||
_debug_dump_reply(mpi_reply, ioc->reply_sz/4);
|
||||
_debug_dump_config(p, min_t(u16, mem.sz,
|
||||
config_page_sz)/4);
|
||||
panic("%s: %s: Firmware BUG: config page mismatch: Requested ExtPageType(0x%02x) Reply ExtPageType(0x%02x)\n",
|
||||
|
|
|
@ -180,6 +180,12 @@ _ctl_display_some_debug(struct MPT3SAS_ADAPTER *ioc, u16 smid,
|
|||
case MPI2_FUNCTION_SMP_PASSTHROUGH:
|
||||
desc = "smp_passthrough";
|
||||
break;
|
||||
case MPI2_FUNCTION_TOOLBOX:
|
||||
desc = "toolbox";
|
||||
break;
|
||||
case MPI2_FUNCTION_NVME_ENCAPSULATED:
|
||||
desc = "nvme_encapsulated";
|
||||
break;
|
||||
}
|
||||
|
||||
if (!desc)
|
||||
|
@ -478,14 +484,15 @@ void mpt3sas_ctl_pre_reset_handler(struct MPT3SAS_ADAPTER *ioc)
|
|||
}
|
||||
|
||||
/**
|
||||
* mpt3sas_ctl_reset_handler - reset callback handler (for ctl)
|
||||
* mpt3sas_ctl_reset_handler - clears outstanding ioctl cmd.
|
||||
* @ioc: per adapter object
|
||||
*
|
||||
* The handler for doing any required cleanup or initialization.
|
||||
*/
|
||||
void mpt3sas_ctl_after_reset_handler(struct MPT3SAS_ADAPTER *ioc)
|
||||
void mpt3sas_ctl_clear_outstanding_ioctls(struct MPT3SAS_ADAPTER *ioc)
|
||||
{
|
||||
dtmprintk(ioc, ioc_info(ioc, "%s: MPT3_IOC_AFTER_RESET\n", __func__));
|
||||
dtmprintk(ioc,
|
||||
ioc_info(ioc, "%s: clear outstanding ioctl cmd\n", __func__));
|
||||
if (ioc->ctl_cmds.status & MPT3_CMD_PENDING) {
|
||||
ioc->ctl_cmds.status |= MPT3_CMD_RESET;
|
||||
mpt3sas_base_free_smid(ioc, ioc->ctl_cmds.smid);
|
||||
|
@ -1021,10 +1028,9 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
|
|||
ioc->ignore_loginfos = 0;
|
||||
}
|
||||
if (!(ioc->ctl_cmds.status & MPT3_CMD_COMPLETE)) {
|
||||
issue_reset =
|
||||
mpt3sas_base_check_cmd_timeout(ioc,
|
||||
ioc->ctl_cmds.status, mpi_request,
|
||||
karg.data_sge_offset);
|
||||
mpt3sas_check_cmd_timeout(ioc,
|
||||
ioc->ctl_cmds.status, mpi_request,
|
||||
karg.data_sge_offset, issue_reset);
|
||||
goto issue_host_reset;
|
||||
}
|
||||
|
||||
|
@ -1325,7 +1331,8 @@ _ctl_do_reset(struct MPT3SAS_ADAPTER *ioc, void __user *arg)
|
|||
__func__));
|
||||
|
||||
retval = mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
|
||||
ioc_info(ioc, "host reset: %s\n", ((!retval) ? "SUCCESS" : "FAILED"));
|
||||
ioc_info(ioc,
|
||||
"Ioctl: host reset: %s\n", ((!retval) ? "SUCCESS" : "FAILED"));
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -1733,10 +1740,9 @@ _ctl_diag_register_2(struct MPT3SAS_ADAPTER *ioc,
|
|||
MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);
|
||||
|
||||
if (!(ioc->ctl_cmds.status & MPT3_CMD_COMPLETE)) {
|
||||
issue_reset =
|
||||
mpt3sas_base_check_cmd_timeout(ioc,
|
||||
ioc->ctl_cmds.status, mpi_request,
|
||||
sizeof(Mpi2DiagBufferPostRequest_t)/4);
|
||||
mpt3sas_check_cmd_timeout(ioc,
|
||||
ioc->ctl_cmds.status, mpi_request,
|
||||
sizeof(Mpi2DiagBufferPostRequest_t)/4, issue_reset);
|
||||
goto issue_host_reset;
|
||||
}
|
||||
|
||||
|
@ -2108,6 +2114,7 @@ mpt3sas_send_diag_release(struct MPT3SAS_ADAPTER *ioc, u8 buffer_type,
|
|||
u16 ioc_status;
|
||||
u32 ioc_state;
|
||||
int rc;
|
||||
u8 reset_needed = 0;
|
||||
|
||||
dctlprintk(ioc, ioc_info(ioc, "%s\n",
|
||||
__func__));
|
||||
|
@ -2115,6 +2122,7 @@ mpt3sas_send_diag_release(struct MPT3SAS_ADAPTER *ioc, u8 buffer_type,
|
|||
rc = 0;
|
||||
*issue_reset = 0;
|
||||
|
||||
|
||||
ioc_state = mpt3sas_base_get_iocstate(ioc, 1);
|
||||
if (ioc_state != MPI2_IOC_STATE_OPERATIONAL) {
|
||||
if (ioc->diag_buffer_status[buffer_type] &
|
||||
|
@ -2157,9 +2165,10 @@ mpt3sas_send_diag_release(struct MPT3SAS_ADAPTER *ioc, u8 buffer_type,
|
|||
MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);
|
||||
|
||||
if (!(ioc->ctl_cmds.status & MPT3_CMD_COMPLETE)) {
|
||||
*issue_reset = mpt3sas_base_check_cmd_timeout(ioc,
|
||||
ioc->ctl_cmds.status, mpi_request,
|
||||
sizeof(Mpi2DiagReleaseRequest_t)/4);
|
||||
mpt3sas_check_cmd_timeout(ioc,
|
||||
ioc->ctl_cmds.status, mpi_request,
|
||||
sizeof(Mpi2DiagReleaseRequest_t)/4, reset_needed);
|
||||
*issue_reset = reset_needed;
|
||||
rc = -EFAULT;
|
||||
goto out;
|
||||
}
|
||||
|
@ -2417,10 +2426,9 @@ _ctl_diag_read_buffer(struct MPT3SAS_ADAPTER *ioc, void __user *arg)
|
|||
MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);
|
||||
|
||||
if (!(ioc->ctl_cmds.status & MPT3_CMD_COMPLETE)) {
|
||||
issue_reset =
|
||||
mpt3sas_base_check_cmd_timeout(ioc,
|
||||
ioc->ctl_cmds.status, mpi_request,
|
||||
sizeof(Mpi2DiagBufferPostRequest_t)/4);
|
||||
mpt3sas_check_cmd_timeout(ioc,
|
||||
ioc->ctl_cmds.status, mpi_request,
|
||||
sizeof(Mpi2DiagBufferPostRequest_t)/4, issue_reset);
|
||||
goto issue_host_reset;
|
||||
}
|
||||
|
||||
|
|
|
@@ -1049,6 +1049,34 @@ mpt3sas_get_pdev_by_handle(struct MPT3SAS_ADAPTER *ioc, u16 handle)
return pcie_device;
}

/**
 * _scsih_set_nvme_max_shutdown_latency - Update max_shutdown_latency.
 * @ioc: per adapter object
 * Context: This function will acquire ioc->pcie_device_lock
 *
 * Update ioc->max_shutdown_latency to that NVMe drives RTD3 Entry Latency
 * which has reported maximum among all available NVMe drives.
 * Minimum max_shutdown_latency will be six seconds.
 */
static void
_scsih_set_nvme_max_shutdown_latency(struct MPT3SAS_ADAPTER *ioc)
{
struct _pcie_device *pcie_device;
unsigned long flags;
u16 shutdown_latency = IO_UNIT_CONTROL_SHUTDOWN_TIMEOUT;

spin_lock_irqsave(&ioc->pcie_device_lock, flags);
list_for_each_entry(pcie_device, &ioc->pcie_device_list, list) {
if (pcie_device->shutdown_latency) {
if (shutdown_latency < pcie_device->shutdown_latency)
shutdown_latency =
pcie_device->shutdown_latency;
}
}
ioc->max_shutdown_latency = shutdown_latency;
spin_unlock_irqrestore(&ioc->pcie_device_lock, flags);
}

/**
 * _scsih_pcie_device_remove - remove pcie_device from list.
 * @ioc: per adapter object
@@ -1063,6 +1091,7 @@ _scsih_pcie_device_remove(struct MPT3SAS_ADAPTER *ioc,
{
unsigned long flags;
int was_on_pcie_device_list = 0;
u8 update_latency = 0;

if (!pcie_device)
return;

@@ -1082,11 +1111,21 @@ _scsih_pcie_device_remove(struct MPT3SAS_ADAPTER *ioc,
list_del_init(&pcie_device->list);
was_on_pcie_device_list = 1;
}
if (pcie_device->shutdown_latency == ioc->max_shutdown_latency)
update_latency = 1;
spin_unlock_irqrestore(&ioc->pcie_device_lock, flags);
if (was_on_pcie_device_list) {
kfree(pcie_device->serial_number);
pcie_device_put(pcie_device);
}

/*
 * This device's RTD3 Entry Latency matches IOC's
 * max_shutdown_latency. Recalculate IOC's max_shutdown_latency
 * from the available drives as current drive is getting removed.
 */
if (update_latency)
_scsih_set_nvme_max_shutdown_latency(ioc);
}

@@ -1101,6 +1140,7 @@ _scsih_pcie_device_remove_by_handle(struct MPT3SAS_ADAPTER *ioc, u16 handle)
struct _pcie_device *pcie_device;
unsigned long flags;
int was_on_pcie_device_list = 0;
u8 update_latency = 0;

if (ioc->shost_recovery)
return;

@@ -1113,12 +1153,22 @@ _scsih_pcie_device_remove_by_handle(struct MPT3SAS_ADAPTER *ioc, u16 handle)
was_on_pcie_device_list = 1;
pcie_device_put(pcie_device);
}
if (pcie_device->shutdown_latency == ioc->max_shutdown_latency)
update_latency = 1;
}
spin_unlock_irqrestore(&ioc->pcie_device_lock, flags);
if (was_on_pcie_device_list) {
_scsih_pcie_device_remove_from_sml(ioc, pcie_device);
pcie_device_put(pcie_device);
}

/*
 * This device's RTD3 Entry Latency matches IOC's
 * max_shutdown_latency. Recalculate IOC's max_shutdown_latency
 * from the available drives as current drive is getting removed.
 */
if (update_latency)
_scsih_set_nvme_max_shutdown_latency(ioc);
}

/**
@@ -1554,7 +1604,12 @@ scsih_change_queue_depth(struct scsi_device *sdev, int qdepth)
max_depth = 1;
if (qdepth > max_depth)
qdepth = max_depth;
return scsi_change_queue_depth(sdev, qdepth);
scsi_change_queue_depth(sdev, qdepth);
sdev_printk(KERN_INFO, sdev,
"qdepth(%d), tagged(%d), scsi_level(%d), cmd_que(%d)\n",
sdev->queue_depth, sdev->tagged_supported,
sdev->scsi_level, ((sdev->inquiry[7] & 2) >> 1));
return sdev->queue_depth;
}

/**

@@ -2673,6 +2728,7 @@ mpt3sas_scsih_issue_tm(struct MPT3SAS_ADAPTER *ioc, u16 handle, u64 lun,
u16 smid = 0;
u32 ioc_state;
int rc;
u8 issue_reset = 0;

lockdep_assert_held(&ioc->tm_cmds.mutex);

@@ -2695,7 +2751,13 @@ mpt3sas_scsih_issue_tm(struct MPT3SAS_ADAPTER *ioc, u16 handle, u64 lun,
}

if ((ioc_state & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) {
mpt3sas_base_fault_info(ioc, ioc_state &
mpt3sas_print_fault_code(ioc, ioc_state &
MPI2_DOORBELL_DATA_MASK);
rc = mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
return (!rc) ? SUCCESS : FAILED;
} else if ((ioc_state & MPI2_IOC_STATE_MASK) ==
MPI2_IOC_STATE_COREDUMP) {
mpt3sas_print_coredump_info(ioc, ioc_state &
MPI2_DOORBELL_DATA_MASK);
rc = mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
return (!rc) ? SUCCESS : FAILED;

@@ -2726,9 +2788,10 @@ mpt3sas_scsih_issue_tm(struct MPT3SAS_ADAPTER *ioc, u16 handle, u64 lun,
ioc->put_smid_hi_priority(ioc, smid, msix_task);
wait_for_completion_timeout(&ioc->tm_cmds.done, timeout*HZ);
if (!(ioc->tm_cmds.status & MPT3_CMD_COMPLETE)) {
if (mpt3sas_base_check_cmd_timeout(ioc,
ioc->tm_cmds.status, mpi_request,
sizeof(Mpi2SCSITaskManagementRequest_t)/4)) {
mpt3sas_check_cmd_timeout(ioc,
ioc->tm_cmds.status, mpi_request,
sizeof(Mpi2SCSITaskManagementRequest_t)/4, issue_reset);
if (issue_reset) {
rc = mpt3sas_base_hard_reset_handler(ioc,
FORCE_BIG_HAMMER);
rc = (!rc) ? SUCCESS : FAILED;
@@ -2875,15 +2938,17 @@ scsih_abort(struct scsi_cmnd *scmd)

u8 timeout = 30;
struct _pcie_device *pcie_device = NULL;
sdev_printk(KERN_INFO, scmd->device,
"attempting task abort! scmd(%p)\n", scmd);
sdev_printk(KERN_INFO, scmd->device, "attempting task abort!"
"scmd(0x%p), outstanding for %u ms & timeout %u ms\n",
scmd, jiffies_to_msecs(jiffies - scmd->jiffies_at_alloc),
(scmd->request->timeout / HZ) * 1000);
_scsih_tm_display_info(ioc, scmd);

sas_device_priv_data = scmd->device->hostdata;
if (!sas_device_priv_data || !sas_device_priv_data->sas_target ||
ioc->remove_host) {
sdev_printk(KERN_INFO, scmd->device,
"device been deleted! scmd(%p)\n", scmd);
"device been deleted! scmd(0x%p)\n", scmd);
scmd->result = DID_NO_CONNECT << 16;
scmd->scsi_done(scmd);
r = SUCCESS;

@@ -2892,6 +2957,8 @@ scsih_abort(struct scsi_cmnd *scmd)

/* check for completed command */
if (st == NULL || st->cb_idx == 0xFF) {
sdev_printk(KERN_INFO, scmd->device, "No reference found at "
"driver, assuming scmd(0x%p) might have completed\n", scmd);
scmd->result = DID_RESET << 16;
r = SUCCESS;
goto out;

@@ -2920,7 +2987,7 @@ scsih_abort(struct scsi_cmnd *scmd)
if (r == SUCCESS && st->cb_idx != 0xFF)
r = FAILED;
out:
sdev_printk(KERN_INFO, scmd->device, "task abort: %s scmd(%p)\n",
sdev_printk(KERN_INFO, scmd->device, "task abort: %s scmd(0x%p)\n",
((r == SUCCESS) ? "SUCCESS" : "FAILED"), scmd);
if (pcie_device)
pcie_device_put(pcie_device);

@@ -2949,14 +3016,14 @@ scsih_dev_reset(struct scsi_cmnd *scmd)
struct MPT3SAS_TARGET *target_priv_data = starget->hostdata;

sdev_printk(KERN_INFO, scmd->device,
"attempting device reset! scmd(%p)\n", scmd);
"attempting device reset! scmd(0x%p)\n", scmd);
_scsih_tm_display_info(ioc, scmd);

sas_device_priv_data = scmd->device->hostdata;
if (!sas_device_priv_data || !sas_device_priv_data->sas_target ||
ioc->remove_host) {
sdev_printk(KERN_INFO, scmd->device,
"device been deleted! scmd(%p)\n", scmd);
"device been deleted! scmd(0x%p)\n", scmd);
scmd->result = DID_NO_CONNECT << 16;
scmd->scsi_done(scmd);
r = SUCCESS;

@@ -2996,7 +3063,7 @@ scsih_dev_reset(struct scsi_cmnd *scmd)
if (r == SUCCESS && atomic_read(&scmd->device->device_busy))
r = FAILED;
out:
sdev_printk(KERN_INFO, scmd->device, "device reset: %s scmd(%p)\n",
sdev_printk(KERN_INFO, scmd->device, "device reset: %s scmd(0x%p)\n",
((r == SUCCESS) ? "SUCCESS" : "FAILED"), scmd);

if (sas_device)

@@ -3027,15 +3094,15 @@ scsih_target_reset(struct scsi_cmnd *scmd)
struct scsi_target *starget = scmd->device->sdev_target;
struct MPT3SAS_TARGET *target_priv_data = starget->hostdata;

starget_printk(KERN_INFO, starget, "attempting target reset! scmd(%p)\n",
scmd);
starget_printk(KERN_INFO, starget,
"attempting target reset! scmd(0x%p)\n", scmd);
_scsih_tm_display_info(ioc, scmd);

sas_device_priv_data = scmd->device->hostdata;
if (!sas_device_priv_data || !sas_device_priv_data->sas_target ||
ioc->remove_host) {
starget_printk(KERN_INFO, starget, "target been deleted! scmd(%p)\n",
scmd);
starget_printk(KERN_INFO, starget,
"target been deleted! scmd(0x%p)\n", scmd);
scmd->result = DID_NO_CONNECT << 16;
scmd->scsi_done(scmd);
r = SUCCESS;

@@ -3074,7 +3141,7 @@ scsih_target_reset(struct scsi_cmnd *scmd)
if (r == SUCCESS && atomic_read(&starget->target_busy))
r = FAILED;
out:
starget_printk(KERN_INFO, starget, "target reset: %s scmd(%p)\n",
starget_printk(KERN_INFO, starget, "target reset: %s scmd(0x%p)\n",
((r == SUCCESS) ? "SUCCESS" : "FAILED"), scmd);

if (sas_device)

@@ -3097,7 +3164,7 @@ scsih_host_reset(struct scsi_cmnd *scmd)
struct MPT3SAS_ADAPTER *ioc = shost_priv(scmd->device->host);
int r, retval;

ioc_info(ioc, "attempting host reset! scmd(%p)\n", scmd);
ioc_info(ioc, "attempting host reset! scmd(0x%p)\n", scmd);
scsi_print_command(scmd);

if (ioc->is_driver_loading || ioc->remove_host) {

@@ -3109,7 +3176,7 @@ scsih_host_reset(struct scsi_cmnd *scmd)
retval = mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
r = (retval < 0) ? FAILED : SUCCESS;
out:
ioc_info(ioc, "host reset: %s scmd(%p)\n",
ioc_info(ioc, "host reset: %s scmd(0x%p)\n",
r == SUCCESS ? "SUCCESS" : "FAILED", scmd);

return r;
@@ -4475,6 +4542,7 @@ static void
_scsih_temp_threshold_events(struct MPT3SAS_ADAPTER *ioc,
Mpi2EventDataTemperature_t *event_data)
{
u32 doorbell;
if (ioc->temp_sensors_count >= event_data->SensorNum) {
ioc_err(ioc, "Temperature Threshold flags %s%s%s%s exceeded for Sensor: %d !!!\n",
le16_to_cpu(event_data->Status) & 0x1 ? "0 " : " ",

@@ -4484,6 +4552,18 @@ _scsih_temp_threshold_events(struct MPT3SAS_ADAPTER *ioc,
event_data->SensorNum);
ioc_err(ioc, "Current Temp In Celsius: %d\n",
event_data->CurrentTemperature);
if (ioc->hba_mpi_version_belonged != MPI2_VERSION) {
doorbell = mpt3sas_base_get_iocstate(ioc, 0);
if ((doorbell & MPI2_IOC_STATE_MASK) ==
MPI2_IOC_STATE_FAULT) {
mpt3sas_print_fault_code(ioc,
doorbell & MPI2_DOORBELL_DATA_MASK);
} else if ((doorbell & MPI2_IOC_STATE_MASK) ==
MPI2_IOC_STATE_COREDUMP) {
mpt3sas_print_coredump_info(ioc,
doorbell & MPI2_DOORBELL_DATA_MASK);
}
}
}
}

@@ -6933,6 +7013,16 @@ _scsih_pcie_add_device(struct MPT3SAS_ADAPTER *ioc, u16 handle)
le32_to_cpu(pcie_device_pg0.DeviceInfo)))) {
pcie_device->nvme_mdts =
le32_to_cpu(pcie_device_pg2.MaximumDataTransferSize);
pcie_device->shutdown_latency =
le16_to_cpu(pcie_device_pg2.ShutdownLatency);
/*
 * Set IOC's max_shutdown_latency to drive's RTD3 Entry Latency
 * if drive's RTD3 Entry Latency is greater then IOC's
 * max_shutdown_latency.
 */
if (pcie_device->shutdown_latency > ioc->max_shutdown_latency)
ioc->max_shutdown_latency =
pcie_device->shutdown_latency;
if (pcie_device_pg2.ControllerResetTO)
pcie_device->reset_timeout =
pcie_device_pg2.ControllerResetTO;
@@ -7669,10 +7759,9 @@ _scsih_ir_fastpath(struct MPT3SAS_ADAPTER *ioc, u16 handle, u8 phys_disk_num)
wait_for_completion_timeout(&ioc->scsih_cmds.done, 10*HZ);

if (!(ioc->scsih_cmds.status & MPT3_CMD_COMPLETE)) {
issue_reset =
mpt3sas_base_check_cmd_timeout(ioc,
ioc->scsih_cmds.status, mpi_request,
sizeof(Mpi2RaidActionRequest_t)/4);
mpt3sas_check_cmd_timeout(ioc,
ioc->scsih_cmds.status, mpi_request,
sizeof(Mpi2RaidActionRequest_t)/4, issue_reset);
rc = -EFAULT;
goto out;
}

@@ -9272,15 +9361,17 @@ void mpt3sas_scsih_pre_reset_handler(struct MPT3SAS_ADAPTER *ioc)
}

/**
 * mpt3sas_scsih_after_reset_handler - reset callback handler (for scsih)
 * mpt3sas_scsih_clear_outstanding_scsi_tm_commands - clears outstanding
 * scsi & tm cmds.
 * @ioc: per adapter object
 *
 * The handler for doing any required cleanup or initialization.
 */
void
mpt3sas_scsih_after_reset_handler(struct MPT3SAS_ADAPTER *ioc)
mpt3sas_scsih_clear_outstanding_scsi_tm_commands(struct MPT3SAS_ADAPTER *ioc)
{
dtmprintk(ioc, ioc_info(ioc, "%s: MPT3_IOC_AFTER_RESET\n", __func__));
dtmprintk(ioc,
ioc_info(ioc, "%s: clear outstanding scsi & tm cmds\n", __func__));
if (ioc->scsih_cmds.status & MPT3_CMD_PENDING) {
ioc->scsih_cmds.status |= MPT3_CMD_RESET;
mpt3sas_base_free_smid(ioc, ioc->scsih_cmds.smid);

@@ -9357,6 +9448,7 @@ _mpt3sas_fw_work(struct MPT3SAS_ADAPTER *ioc, struct fw_event_work *fw_event)
}
_scsih_remove_unresponding_devices(ioc);
_scsih_scan_for_devices_after_reset(ioc);
_scsih_set_nvme_max_shutdown_latency(ioc);
break;
case MPT3SAS_PORT_ENABLE_COMPLETE:
ioc->start_scan = 0;
@@ -9659,6 +9751,75 @@ _scsih_expander_node_remove(struct MPT3SAS_ADAPTER *ioc,
kfree(sas_expander);
}

/**
 * _scsih_nvme_shutdown - NVMe shutdown notification
 * @ioc: per adapter object
 *
 * Sending IoUnitControl request with shutdown operation code to alert IOC that
 * the host system is shutting down so that IOC can issue NVMe shutdown to
 * NVMe drives attached to it.
 */
static void
_scsih_nvme_shutdown(struct MPT3SAS_ADAPTER *ioc)
{
Mpi26IoUnitControlRequest_t *mpi_request;
Mpi26IoUnitControlReply_t *mpi_reply;
u16 smid;

/* are there any NVMe devices ? */
if (list_empty(&ioc->pcie_device_list))
return;

mutex_lock(&ioc->scsih_cmds.mutex);

if (ioc->scsih_cmds.status != MPT3_CMD_NOT_USED) {
ioc_err(ioc, "%s: scsih_cmd in use\n", __func__);
goto out;
}

ioc->scsih_cmds.status = MPT3_CMD_PENDING;

smid = mpt3sas_base_get_smid(ioc, ioc->scsih_cb_idx);
if (!smid) {
ioc_err(ioc,
"%s: failed obtaining a smid\n", __func__);
ioc->scsih_cmds.status = MPT3_CMD_NOT_USED;
goto out;
}

mpi_request = mpt3sas_base_get_msg_frame(ioc, smid);
ioc->scsih_cmds.smid = smid;
memset(mpi_request, 0, sizeof(Mpi26IoUnitControlRequest_t));
mpi_request->Function = MPI2_FUNCTION_IO_UNIT_CONTROL;
mpi_request->Operation = MPI26_CTRL_OP_SHUTDOWN;

init_completion(&ioc->scsih_cmds.done);
ioc->put_smid_default(ioc, smid);
/* Wait for max_shutdown_latency seconds */
ioc_info(ioc,
"Io Unit Control shutdown (sending), Shutdown latency %d sec\n",
ioc->max_shutdown_latency);
wait_for_completion_timeout(&ioc->scsih_cmds.done,
ioc->max_shutdown_latency*HZ);

if (!(ioc->scsih_cmds.status & MPT3_CMD_COMPLETE)) {
ioc_err(ioc, "%s: timeout\n", __func__);
goto out;
}

if (ioc->scsih_cmds.status & MPT3_CMD_REPLY_VALID) {
mpi_reply = ioc->scsih_cmds.reply;
ioc_info(ioc, "Io Unit Control shutdown (complete):"
"ioc_status(0x%04x), loginfo(0x%08x)\n",
le16_to_cpu(mpi_reply->IOCStatus),
le32_to_cpu(mpi_reply->IOCLogInfo));
}
out:
ioc->scsih_cmds.status = MPT3_CMD_NOT_USED;
mutex_unlock(&ioc->scsih_cmds.mutex);
}

/**
 * _scsih_ir_shutdown - IR shutdown notification
 * @ioc: per adapter object

@@ -9851,6 +10012,7 @@ scsih_shutdown(struct pci_dev *pdev)
&ioc->ioc_pg1_copy);

_scsih_ir_shutdown(ioc);
_scsih_nvme_shutdown(ioc);
mpt3sas_base_detach(ioc);
}

@@ -10533,6 +10695,8 @@ _scsih_probe(struct pci_dev *pdev, const struct pci_device_id *id)
ioc->tm_sas_control_cb_idx = tm_sas_control_cb_idx;
ioc->logging_level = logging_level;
ioc->schedule_dead_ioc_flush_running_cmds = &_scsih_flush_running_cmds;
/* Host waits for minimum of six seconds */
ioc->max_shutdown_latency = IO_UNIT_CONTROL_SHUTDOWN_TIMEOUT;
/*
 * Enable MEMORY MOVE support flag.
 */

@@ -10681,6 +10845,7 @@ scsih_suspend(struct pci_dev *pdev, pm_message_t state)
mpt3sas_base_stop_watchdog(ioc);
flush_scheduled_work();
scsi_block_requests(shost);
_scsih_nvme_shutdown(ioc);
device_state = pci_choose_state(pdev, state);
ioc_info(ioc, "pdev=0x%p, slot=%s, entering operating state [D%d]\n",
pdev, pci_name(pdev), device_state);

@@ -10715,7 +10880,7 @@ scsih_resume(struct pci_dev *pdev)
r = mpt3sas_base_map_resources(ioc);
if (r)
return r;

ioc_info(ioc, "Issuing Hard Reset as part of OS Resume\n");
mpt3sas_base_hard_reset_handler(ioc, SOFT_RESET);
scsi_unblock_requests(shost);
mpt3sas_base_start_watchdog(ioc);

@@ -10784,6 +10949,7 @@ scsih_pci_slot_reset(struct pci_dev *pdev)
if (rc)
return PCI_ERS_RESULT_DISCONNECT;

ioc_info(ioc, "Issuing Hard Reset as part of PCI Slot Reset\n");
rc = mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);

ioc_warn(ioc, "hard reset: %s\n",

@@ -719,11 +719,10 @@ mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc, u16 handle,
sas_device_put(sas_device);
}

if ((ioc->logging_level & MPT_DEBUG_TRANSPORT))
dev_printk(KERN_INFO, &rphy->dev,
"add: handle(0x%04x), sas_addr(0x%016llx)\n",
handle, (unsigned long long)
mpt3sas_port->remote_identify.sas_address);
dev_info(&rphy->dev,
"add: handle(0x%04x), sas_addr(0x%016llx)\n", handle,
(unsigned long long)mpt3sas_port->remote_identify.sas_address);

mpt3sas_port->rphy = rphy;
spin_lock_irqsave(&ioc->sas_node_lock, flags);
list_add_tail(&mpt3sas_port->port_list, &sas_node->sas_port_list);

@@ -813,6 +812,8 @@ mpt3sas_transport_port_remove(struct MPT3SAS_ADAPTER *ioc, u64 sas_address,
}
if (!ioc->remove_host)
sas_port_delete(mpt3sas_port->port);
ioc_info(ioc, "%s: removed: sas_addr(0x%016llx)\n",
__func__, (unsigned long long)sas_address);
kfree(mpt3sas_port);
}

@@ -47,6 +47,9 @@ static struct scsi_host_template mvs_sht = {
.eh_target_reset_handler = sas_eh_target_reset_handler,
.target_destroy = sas_target_destroy,
.ioctl = sas_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = sas_ioctl,
#endif
.shost_attrs = mvst_host_attrs,
.track_queue_depth = 1,
};

@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0
 *
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Linux Driver for Mylex DAC960/AcceleRAID/eXtremeRAID PCI RAID Controllers
 *
 * Copyright 2017 Hannes Reinecke, SUSE Linux GmbH <hare@suse.com>

@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0
 *
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Linux Driver for Mylex DAC960/AcceleRAID/eXtremeRAID PCI RAID Controllers
 *
 * This driver supports the newer, SCSI-based firmware interface only.

@@ -101,6 +101,9 @@ static struct scsi_host_template pm8001_sht = {
.eh_target_reset_handler = sas_eh_target_reset_handler,
.target_destroy = sas_target_destroy,
.ioctl = sas_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = sas_ioctl,
#endif
.shost_attrs = pm8001_host_attrs,
.track_queue_depth = 1,
};

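Both libsas-based host templates above (mvs_sht and pm8001_sht) gain the same wiring: the existing sas_ioctl() handler is reused as the 32-bit compat entry point when CONFIG_COMPAT is set, in line with the compat ioctl cleanup this merge pulls in. A minimal sketch of that pattern on a hypothetical template (example_sas_sht is illustrative, not a driver in the tree):

	#include <scsi/scsi_host.h>
	#include <scsi/libsas.h>	/* sas_ioctl() */

	static struct scsi_host_template example_sas_sht = {
		.module		= THIS_MODULE,
		.name		= "example-libsas-hba",
		.ioctl		= sas_ioctl,
	#ifdef CONFIG_COMPAT
		/* the SAS management ioctls are compat-clean, so the native
		 * handler can serve 32-bit userspace directly */
		.compat_ioctl	= sas_ioctl,
	#endif
	};
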
@@ -1699,6 +1699,16 @@ qla1280_load_firmware_pio(struct scsi_qla_host *ha)
return err;
}

#ifdef QLA_64BIT_PTR
#define LOAD_CMD MBC_LOAD_RAM_A64_ROM
#define DUMP_CMD MBC_DUMP_RAM_A64_ROM
#define CMD_ARGS (BIT_7 | BIT_6 | BIT_4 | BIT_3 | BIT_2 | BIT_1 | BIT_0)
#else
#define LOAD_CMD MBC_LOAD_RAM
#define DUMP_CMD MBC_DUMP_RAM
#define CMD_ARGS (BIT_4 | BIT_3 | BIT_2 | BIT_1 | BIT_0)
#endif

#define DUMP_IT_BACK 0 /* for debug of RISC loading */
static int
qla1280_load_firmware_dma(struct scsi_qla_host *ha)

@@ -1748,7 +1758,7 @@ qla1280_load_firmware_dma(struct scsi_qla_host *ha)
for(i = 0; i < cnt; i++)
((__le16 *)ha->request_ring)[i] = fw_data[i];

mb[0] = MBC_LOAD_RAM;
mb[0] = LOAD_CMD;
mb[1] = risc_address;
mb[4] = cnt;
mb[3] = ha->request_dma & 0xffff;

@@ -1759,8 +1769,7 @@ qla1280_load_firmware_dma(struct scsi_qla_host *ha)
__func__, mb[0],
(void *)(long)ha->request_dma,
mb[6], mb[7], mb[2], mb[3]);
err = qla1280_mailbox_command(ha, BIT_4 | BIT_3 | BIT_2 |
BIT_1 | BIT_0, mb);
err = qla1280_mailbox_command(ha, CMD_ARGS, mb);
if (err) {
printk(KERN_ERR "scsi(%li): Failed to load partial "
"segment of f\n", ha->host_no);

@@ -1768,7 +1777,7 @@ qla1280_load_firmware_dma(struct scsi_qla_host *ha)
}

#if DUMP_IT_BACK
mb[0] = MBC_DUMP_RAM;
mb[0] = DUMP_CMD;
mb[1] = risc_address;
mb[4] = cnt;
mb[3] = p_tbuf & 0xffff;

@@ -1776,8 +1785,7 @@ qla1280_load_firmware_dma(struct scsi_qla_host *ha)
mb[7] = upper_32_bits(p_tbuf) & 0xffff;
mb[6] = upper_32_bits(p_tbuf) >> 16;

err = qla1280_mailbox_command(ha, BIT_4 | BIT_3 | BIT_2 |
BIT_1 | BIT_0, mb);
err = qla1280_mailbox_command(ha, CMD_ARGS, mb);
if (err) {
printk(KERN_ERR
"Failed to dump partial segment of f/w\n");

@@ -277,6 +277,8 @@ struct device_reg {
#define MBC_MAILBOX_REGISTER_TEST 6 /* Wrap incoming mailboxes */
#define MBC_VERIFY_CHECKSUM 7 /* Verify checksum */
#define MBC_ABOUT_FIRMWARE 8 /* Get firmware revision */
#define MBC_LOAD_RAM_A64_ROM 9 /* Load RAM 64bit ROM version */
#define MBC_DUMP_RAM_A64_ROM 0x0a /* Dump RAM 64bit ROM version */
#define MBC_INIT_REQUEST_QUEUE 0x10 /* Initialize request queue */
#define MBC_INIT_RESPONSE_QUEUE 0x11 /* Initialize response queue */
#define MBC_EXECUTE_IOCB 0x12 /* Execute IOCB command */

@@ -54,7 +54,8 @@ void qla2x00_bsg_sp_free(srb_t *sp)
if (sp->type == SRB_CT_CMD ||
sp->type == SRB_FXIOCB_BCMD ||
sp->type == SRB_ELS_CMD_HST)
kfree(sp->fcport);
qla2x00_free_fcport(sp->fcport);

qla2x00_rel_sp(sp);
}

@@ -405,7 +406,7 @@ done_unmap_sg:

done_free_fcport:
if (bsg_request->msgcode == FC_BSG_RPT_ELS)
kfree(fcport);
qla2x00_free_fcport(fcport);
done:
return rval;
}

@@ -545,7 +546,7 @@ qla2x00_process_ct(struct bsg_job *bsg_job)
return rval;

done_free_fcport:
kfree(fcport);
qla2x00_free_fcport(fcport);
done_unmap_sg:
dma_unmap_sg(&ha->pdev->dev, bsg_job->request_payload.sg_list,
bsg_job->request_payload.sg_cnt, DMA_TO_DEVICE);

@@ -796,7 +797,7 @@ qla2x00_process_loopback(struct bsg_job *bsg_job)

if (atomic_read(&vha->loop_state) == LOOP_READY &&
(ha->current_topology == ISP_CFG_F ||
(le32_to_cpu(*(uint32_t *)req_data) == ELS_OPCODE_BYTE &&
(get_unaligned_le32(req_data) == ELS_OPCODE_BYTE &&
req_data_len == MAX_ELS_FRAME_PAYLOAD)) &&
elreq.options == EXTERNAL_LOOPBACK) {
type = "FC_BSG_HST_VENDOR_ECHO_DIAG";

@@ -2049,7 +2050,7 @@ qlafx00_mgmt_cmd(struct bsg_job *bsg_job)
return rval;

done_free_fcport:
kfree(fcport);
qla2x00_free_fcport(fcport);

done_unmap_rsp_sg:
if (piocb_rqst->flags & SRB_FXDISC_RESP_DMA_VALID)

@@ -18,7 +18,7 @@
 * | Device Discovery             | 0x2134     | 0x210e-0x2116 |
 * |                              |            | 0x211a        |
 * |                              |            | 0x211c-0x2128 |
 * |                              |            | 0x212a-0x2130 |
 * |                              |            | 0x212a-0x2134 |
 * | Queue Command and IO tracing | 0x3074     | 0x300b        |
 * |                              |            | 0x3027-0x3028 |
 * |                              |            | 0x303d-0x3041 |

@@ -2402,6 +2402,7 @@ typedef struct fc_port {
unsigned int scan_needed:1;
unsigned int n2n_flag:1;
unsigned int explicit_logout:1;
unsigned int prli_pend_timer:1;

struct completion nvme_del_done;
uint32_t nvme_prli_service_param;

@@ -2428,6 +2429,7 @@ typedef struct fc_port {
struct work_struct free_work;
struct work_struct reg_work;
uint64_t jiffies_at_registration;
unsigned long prli_expired;
struct qlt_plogi_ack_t *plogi_link[QLT_PLOGI_LINK_MAX];

uint16_t tgt_id;

@@ -2464,6 +2466,7 @@ typedef struct fc_port {
struct qla_tgt_sess *tgt_session;
struct ct_sns_desc ct_desc;
enum discovery_state disc_state;
atomic_t shadow_disc_state;
enum discovery_state next_disc_state;
enum login_state fw_login_state;
unsigned long dm_login_expire;

@@ -2510,6 +2513,19 @@ struct event_arg {

extern const char *const port_state_str[5];

static const char * const port_dstate_str[] = {
"DELETED",
"GNN_ID",
"GNL",
"LOGIN_PEND",
"LOGIN_FAILED",
"GPDB",
"UPD_FCPORT",
"LOGIN_COMPLETE",
"ADISC",
"DELETE_PEND"
};

/*
 * FC port flags.
 */

@@ -3263,7 +3279,6 @@ enum qla_work_type {
QLA_EVT_IDC_ACK,
QLA_EVT_ASYNC_LOGIN,
QLA_EVT_ASYNC_LOGOUT,
QLA_EVT_ASYNC_LOGOUT_DONE,
QLA_EVT_ASYNC_ADISC,
QLA_EVT_UEVENT,
QLA_EVT_AENFX,

@@ -3953,7 +3968,7 @@ struct qla_hw_data {
void *sfp_data;
dma_addr_t sfp_data_dma;

void *flt;
struct qla_flt_header *flt;
dma_addr_t flt_dma;

#define XGMAC_DATA_SIZE 4096

@@ -4845,6 +4860,9 @@ struct sff_8247_a0 {
(ha->fc4_type_priority == FC4_PRIORITY_NVME)) || \
NVME_ONLY_TARGET(fcport)) \

#define PRLI_PHASE(_cls) \
((_cls == DSC_LS_PRLI_PEND) || (_cls == DSC_LS_PRLI_COMP))

#include "qla_target.h"
#include "qla_gbl.h"
#include "qla_dbg.h"

@@ -1354,12 +1354,12 @@ struct vp_rpt_id_entry_24xx {
uint8_t port_id[3];
uint8_t format;
union {
struct {
struct _f0 {
/* format 0 loop */
uint8_t vp_idx_map[16];
uint8_t reserved_4[32];
} f0;
struct {
struct _f1 {
/* format 1 fabric */
uint8_t vpstat1_subcode; /* vp_status=1 subcode */
uint8_t flags;

@@ -1381,21 +1381,22 @@ struct vp_rpt_id_entry_24xx {
uint16_t bbcr;
uint8_t reserved_5[6];
} f1;
struct { /* format 2: N2N direct connect */
uint8_t vpstat1_subcode;
uint8_t flags;
uint16_t rsv6;
uint8_t rsv2[12];
struct _f2 { /* format 2: N2N direct connect */
uint8_t vpstat1_subcode;
uint8_t flags;
uint16_t fip_flags;
uint8_t rsv2[12];

uint8_t ls_rjt_vendor;
uint8_t ls_rjt_explanation;
uint8_t ls_rjt_reason;
uint8_t rsv3[5];
uint8_t ls_rjt_vendor;
uint8_t ls_rjt_explanation;
uint8_t ls_rjt_reason;
uint8_t rsv3[5];

uint8_t port_name[8];
uint8_t node_name[8];
uint8_t remote_nport_id[4];
uint32_t reserved_5;
uint8_t port_name[8];
uint8_t node_name[8];
uint16_t bbcr;
uint8_t reserved_5[2];
uint8_t remote_nport_id[4];
} f2;
} u;
};

@@ -1470,13 +1471,6 @@ struct qla_flt_location {
uint16_t checksum;
};

struct qla_flt_header {
uint16_t version;
uint16_t length;
uint16_t checksum;
uint16_t unused;
};

#define FLT_REG_FW 0x01
#define FLT_REG_BOOT_CODE 0x07
#define FLT_REG_VPD_0 0x14

@@ -1537,6 +1531,14 @@ struct qla_flt_region {
uint32_t end;
};

struct qla_flt_header {
uint16_t version;
uint16_t length;
uint16_t checksum;
uint16_t unused;
struct qla_flt_region region[0];
};

#define FLT_REGION_SIZE 16
#define FLT_MAX_REGIONS 0xFF
#define FLT_REGIONS_SIZE (FLT_REGION_SIZE * FLT_MAX_REGIONS)

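The qla_flt_header change above moves the struct below qla_flt_region and appends a zero-length region[0] member, so the FLT header and the region table that follows it can be treated as one buffer. Assuming the length field counts the region table in bytes (which is how the surrounding driver code appears to use it), walking the table then looks roughly like this sketch (walk_flt_regions() is illustrative, not driver code):

	/* illustrative only: iterate the FLT regions that follow the header,
	 * assuming ->length is the size of the region table in bytes */
	static void walk_flt_regions(struct qla_flt_header *flt)
	{
		uint16_t cnt = le16_to_cpu(flt->length) / sizeof(struct qla_flt_region);
		struct qla_flt_region *region = flt->region;
		uint16_t i;

		for (i = 0; i < cnt; i++, region++)
			pr_info("FLT region %#x: start %#x end %#x\n",
				le16_to_cpu(region->code),
				le32_to_cpu(region->start),
				le32_to_cpu(region->end));
	}
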
@@ -72,14 +72,13 @@ extern int qla2x00_async_adisc(struct scsi_qla_host *, fc_port_t *,
extern int qla2x00_async_tm_cmd(fc_port_t *, uint32_t, uint32_t, uint32_t);
extern void qla2x00_async_login_done(struct scsi_qla_host *, fc_port_t *,
uint16_t *);
extern void qla2x00_async_logout_done(struct scsi_qla_host *, fc_port_t *,
uint16_t *);
struct qla_work_evt *qla2x00_alloc_work(struct scsi_qla_host *,
enum qla_work_type);
extern int qla24xx_async_gnl(struct scsi_qla_host *, fc_port_t *);
int qla2x00_post_work(struct scsi_qla_host *vha, struct qla_work_evt *e);
extern void *qla2x00_alloc_iocbs_ready(struct qla_qpair *, srb_t *);
extern int qla24xx_update_fcport_fcp_prio(scsi_qla_host_t *, fc_port_t *);
extern int qla24xx_async_abort_cmd(srb_t *, bool);

extern void qla2x00_set_fcport_state(fc_port_t *fcport, int state);
extern fc_port_t *

@@ -182,8 +181,6 @@ extern int qla2x00_post_async_login_work(struct scsi_qla_host *, fc_port_t *,
uint16_t *);
extern int qla2x00_post_async_logout_work(struct scsi_qla_host *, fc_port_t *,
uint16_t *);
extern int qla2x00_post_async_logout_done_work(struct scsi_qla_host *,
fc_port_t *, uint16_t *);
extern int qla2x00_post_async_adisc_work(struct scsi_qla_host *, fc_port_t *,
uint16_t *);
extern int qla2x00_post_async_adisc_done_work(struct scsi_qla_host *,

@@ -201,6 +198,7 @@ extern void qla2x00_free_host(struct scsi_qla_host *);
extern void qla2x00_relogin(struct scsi_qla_host *);
extern void qla2x00_do_work(struct scsi_qla_host *);
extern void qla2x00_free_fcports(struct scsi_qla_host *);
extern void qla2x00_free_fcport(fc_port_t *);

extern void qla83xx_schedule_work(scsi_qla_host_t *, int);
extern void qla83xx_service_idc_aen(struct work_struct *);

@@ -253,8 +251,9 @@ extern scsi_qla_host_t *qla24xx_create_vhost(struct fc_vport *);
extern void qla2x00_sp_free_dma(srb_t *sp);
extern char *qla2x00_get_fw_version_str(struct scsi_qla_host *, char *);

extern void qla2x00_mark_device_lost(scsi_qla_host_t *, fc_port_t *, int, int);
extern void qla2x00_mark_all_devices_lost(scsi_qla_host_t *, int);
extern void qla2x00_mark_device_lost(scsi_qla_host_t *, fc_port_t *, int);
extern void qla2x00_mark_all_devices_lost(scsi_qla_host_t *);
extern int qla24xx_async_abort_cmd(srb_t *, bool);

extern struct fw_blob *qla2x00_request_firmware(scsi_qla_host_t *);

@@ -2963,7 +2963,6 @@ int qla24xx_post_gpsc_work(struct scsi_qla_host *vha, fc_port_t *fcport)
return QLA_FUNCTION_FAILED;

e->u.fcport.fcport = fcport;
fcport->flags |= FCF_ASYNC_ACTIVE;
return qla2x00_post_work(vha, e);
}

@@ -3097,9 +3096,7 @@ int qla24xx_async_gpsc(scsi_qla_host_t *vha, fc_port_t *fcport)

done_free_sp:
sp->free(sp);
fcport->flags &= ~FCF_ASYNC_SENT;
done:
fcport->flags &= ~FCF_ASYNC_ACTIVE;
return rval;
}

@@ -4290,7 +4287,7 @@ int qla24xx_async_gnnid(scsi_qla_host_t *vha, fc_port_t *fcport)
if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT))
return rval;

fcport->disc_state = DSC_GNN_ID;
qla2x00_set_fcport_disc_state(fcport, DSC_GNN_ID);
sp = qla2x00_get_sp(vha, fcport, GFP_ATOMIC);
if (!sp)
goto done;

@@ -4464,7 +4461,6 @@ int qla24xx_async_gfpnid(scsi_qla_host_t *vha, fc_port_t *fcport)

done_free_sp:
sp->free(sp);
fcport->flags &= ~FCF_ASYNC_SENT;
done:
return rval;
}