This series consists of the usual driver updates (ufs, qla2xxx,
 smartpqi, target, zfcp, fnic, mpt3sas, ibmvfc) plus a load of
 cleanups, a major power management rework and a load of assorted minor
 updates.  There are a few core updates (formatting fixes being the big
 one) but nothing major this cycle.
 
 Signed-off-by: James E.J. Bottomley <jejb@linux.ibm.com>
 -----BEGIN PGP SIGNATURE-----
 
 iJwEABMIAEQWIQTnYEDbdso9F2cI+arnQslM7pishQUCX9o0KSYcamFtZXMuYm90
 dG9tbGV5QGhhbnNlbnBhcnRuZXJzaGlwLmNvbQAKCRDnQslM7pishbOZAP9D5NTN
 J7dJUo2MIMy84YBu+d9ag7yLlNiRWVY2yw5vHwD/Z7JjAVLwz/tzmyjU9//o2J6w
 hwhOv6Uto89gLCWSEz8=
 =KUPT
 -----END PGP SIGNATURE-----

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
 "This consists of the usual driver updates (ufs, qla2xxx, smartpqi,
  target, zfcp, fnic, mpt3sas, ibmvfc) plus a load of cleanups, a major
  power management rework and a load of assorted minor updates.

  There are a few core updates (formatting fixes being the big one) but
  nothing major this cycle"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (279 commits)
  scsi: mpt3sas: Update driver version to 36.100.00.00
  scsi: mpt3sas: Handle trigger page after firmware update
  scsi: mpt3sas: Add persistent MPI trigger page
  scsi: mpt3sas: Add persistent SCSI sense trigger page
  scsi: mpt3sas: Add persistent Event trigger page
  scsi: mpt3sas: Add persistent Master trigger page
  scsi: mpt3sas: Add persistent trigger pages support
  scsi: mpt3sas: Sync time periodically between driver and firmware
  scsi: qla2xxx: Update version to 10.02.00.104-k
  scsi: qla2xxx: Fix device loss on 4G and older HBAs
  scsi: qla2xxx: If fcport is undergoing deletion complete I/O with retry
  scsi: qla2xxx: Fix the call trace for flush workqueue
  scsi: qla2xxx: Fix flash update in 28XX adapters on big endian machines
  scsi: qla2xxx: Handle aborts correctly for port undergoing deletion
  scsi: qla2xxx: Fix N2N and NVMe connect retry failure
  scsi: qla2xxx: Fix FW initialization error on big endian machines
  scsi: qla2xxx: Fix crash during driver load on big endian machines
  scsi: qla2xxx: Fix compilation issue in PPC systems
  scsi: qla2xxx: Don't check for fw_started while posting NVMe command
  scsi: qla2xxx: Tear down session if FW say it is down
  ...
Linus Torvalds, 2020-12-16 13:34:31 -08:00
Parents: 69f637c335 be1b500212
Commit: 60f7c503d9
197 changed files with 11837 additions and 8069 deletions


@@ -0,0 +1,23 @@
+What:		/sys/class/fc_host/hostX/statistics/fpin_cn_yyy
+Date:		July 2020
+Contact:	Shyam Sundar <ssundar@marvell.com>
+Description:
+		These files contain the number of congestion notification
+		events recorded by the F_Port, reported using fabric
+		performance impact notification (FPIN) event.
+
+What:		/sys/class/fc_host/hostX/statistics/fpin_li_yyy
+Date:		July 2020
+Contact:	Shyam Sundar <ssundar@marvell.com>
+Description:
+		These files contain the number of link integrity error
+		events recorded by the F_Port/Nx_Port, reported using fabric
+		performance impact notification (FPIN) event.
+
+What:		/sys/class/fc_host/hostX/statistics/fpin_dn_yyy
+Date:		July 2020
+Contact:	Shyam Sundar <ssundar@marvell.com>
+Description:
+		These files contain the number of delivery related errors
+		recorded by the F_Port/Nx_Port, reported using fabric
+		performance impact notification (FPIN) event.


@@ -0,0 +1,23 @@
+What:		/sys/class/fc_remote_ports/rport-X:Y-Z/statistics/fpin_cn_yyy
+Date:		July 2020
+Contact:	Shyam Sundar <ssundar@marvell.com>
+Description:
+		These files contain the number of congestion notification
+		events recorded by the F_Port/Nx_Port, reported using fabric
+		performance impact notification (FPIN) event.
+
+What:		/sys/class/fc_remote_ports/rport-X:Y-Z/statistics/fpin_li_yyy
+Date:		July 2020
+Contact:	Shyam Sundar <ssundar@marvell.com>
+Description:
+		These files contain the number of link integrity error
+		events recorded by the F_Port/Nx_Port, reported using fabric
+		performance impact notification (FPIN) event.
+
+What:		/sys/class/fc_remote_ports/rport-X:Y-Z/statistics/fpin_dn_yyy
+Date:		July 2020
+Contact:	Shyam Sundar <ssundar@marvell.com>
+Description:
+		These files contain the number of delivery related errors
+		recorded by the F_Port/Nx_Port, reported using fabric
+		performance impact notification (FPIN) event.


@@ -15780,6 +15780,15 @@ F:	Documentation/scsi/st.rst
 F:	drivers/scsi/st.*
 F:	drivers/scsi/st_*.h
 
+SCSI TARGET CORE USER DRIVER
+M:	Bodo Stroesser <bostroesser@gmail.com>
+L:	linux-scsi@vger.kernel.org
+L:	target-devel@vger.kernel.org
+S:	Supported
+F:	Documentation/target/tcmu-design.rst
+F:	drivers/target/target_core_user.c
+F:	include/uapi/linux/target_core_user.h
+
 SCSI TARGET SUBSYSTEM
 M:	"Martin K. Petersen" <martin.petersen@oracle.com>
 L:	linux-scsi@vger.kernel.org


@@ -1404,7 +1404,7 @@ bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *list,
 			break;
 		default:
 			errors++;
-			blk_mq_end_request(rq, BLK_STS_IOERR);
+			blk_mq_end_request(rq, ret);
 		}
 	} while (!list_empty(list));
 out:


@@ -1528,12 +1528,12 @@ isert_check_pi_status(struct se_cmd *se_cmd, struct ib_mr *sig_mr)
 	}
 	sec_offset_err = mr_status.sig_err.sig_err_offset;
 	do_div(sec_offset_err, block_size);
-	se_cmd->bad_sector = sec_offset_err + se_cmd->t_task_lba;
+	se_cmd->sense_info = sec_offset_err + se_cmd->t_task_lba;
 	isert_err("PI error found type %d at sector 0x%llx "
 		  "expected 0x%x vs actual 0x%x\n",
 		  mr_status.sig_err.err_type,
-		  (unsigned long long)se_cmd->bad_sector,
+		  (unsigned long long)se_cmd->sense_info,
 		  mr_status.sig_err.expected,
 		  mr_status.sig_err.actual);
 	ret = 1;
@@ -2471,7 +2471,7 @@ isert_wait4cmds(struct iscsi_conn *conn)
 	isert_info("iscsi_conn %p\n", conn);
 
 	if (conn->sess) {
-		target_sess_cmd_list_set_waiting(conn->sess->se_sess);
+		target_stop_session(conn->sess->se_sess);
 		target_wait_for_sess_cmds(conn->sess->se_sess);
 	}
 }


@@ -2085,7 +2085,7 @@ static void srpt_release_channel_work(struct work_struct *w)
 	se_sess = ch->sess;
 	BUG_ON(!se_sess);
 
-	target_sess_cmd_list_set_waiting(se_sess);
+	target_stop_session(se_sess);
 	target_wait_for_sess_cmds(se_sess);
 	target_remove_session(se_sess);
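
Both hunks above make the same substitution: target_sess_cmd_list_set_waiting() is gone, and fabric drivers call target_stop_session() instead, part of this cycle's target-core session API cleanup. A minimal sketch of the resulting teardown sequence, assuming only the target core fabric header (the function name is illustrative):

	#include <target/target_core_fabric.h>

	/* Sketch: quiesce and release a target session.
	 * target_stop_session() stops new command submission and pushes
	 * outstanding commands toward completion;
	 * target_wait_for_sess_cmds() blocks until they have drained.
	 */
	static void example_release_session(struct se_session *se_sess)
	{
		target_stop_session(se_sess);
		target_wait_for_sess_cmds(se_sess);
		target_remove_session(se_sess);
	}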


@@ -57,7 +57,7 @@
 #include <linux/kdev_t.h>
 #include <linux/blkdev.h>
 #include <linux/delay.h>
-#include <linux/interrupt.h>	/* needed for in_interrupt() proto */
+#include <linux/interrupt.h>
 #include <linux/dma-mapping.h>
 #include <linux/kthread.h>
 #include <scsi/scsi_host.h>
@@ -473,7 +473,6 @@ mpt_turbo_reply(MPT_ADAPTER *ioc, u32 pa)
 		mpt_free_msg_frame(ioc, mf);
 		mb();
 		return;
-		break;
 	}
 	mr = (MPT_FRAME_HDR *) CAST_U32_TO_PTR(pa);
 	break;
@@ -6336,7 +6335,6 @@ SendEventAck(MPT_ADAPTER *ioc, EventNotificationReply_t *evnp)
  *	Page header is updated.
  *
  *	Returns 0 for success
- *	-EPERM if not allowed due to ISR context
  *	-EAGAIN if no msg frames currently available
  *	-EFAULT for non-successful reply or no reply (timeout)
  */
@@ -6354,19 +6352,10 @@ mpt_config(MPT_ADAPTER *ioc, CONFIGPARMS *pCfg)
 	u8		 page_type = 0, extend_page;
 	unsigned long	 timeleft;
 	unsigned long	 flags;
-	int		 in_isr;
 	u8		 issue_hard_reset = 0;
 	u8		 retry_count = 0;
 
-	/*	Prevent calling wait_event() (below), if caller happens
-	 *	to be in ISR context, because that is fatal!
-	 */
-	in_isr = in_interrupt();
-	if (in_isr) {
-		dcprintk(ioc, printk(MYIOC_s_WARN_FMT "Config request not allowed in ISR context!\n",
-				ioc->name));
-		return -EPERM;
-	}
+	might_sleep();
 
 	/* don't send a config page during diag reset */
 	spin_lock_irqsave(&ioc->taskmgmt_lock, flags);
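
The mpt_config() hunk above shows a pattern that recurs throughout this series: runtime in_interrupt() guards are removed, and functions that must be able to sleep declare it with might_sleep() instead. A small sketch of the idiom (the function is hypothetical):

	#include <linux/kernel.h>

	/* Sketch: a request function that waits for a reply. might_sleep()
	 * documents the context requirement and, with
	 * CONFIG_DEBUG_ATOMIC_SLEEP, warns immediately when called from
	 * atomic context, rather than silently failing the way the old
	 * in_interrupt() check did, which could hide broken callers.
	 */
	static int send_config_request(void)
	{
		might_sleep();

		/* ... allocate a message frame, post it, and
		 * wait_event() for the reply ...
		 */
		return 0;
	}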


@@ -50,7 +50,7 @@
 #include <linux/kdev_t.h>
 #include <linux/blkdev.h>
 #include <linux/delay.h>	/* for mdelay */
-#include <linux/interrupt.h>	/* needed for in_interrupt() proto */
+#include <linux/interrupt.h>
 #include <linux/reboot.h>	/* notifier code */
 #include <linux/workqueue.h>
 #include <linux/sort.h>


@@ -289,6 +289,7 @@ mptsas_add_fw_event(MPT_ADAPTER *ioc, struct fw_event_work *fw_event,
 
 	spin_lock_irqsave(&ioc->fw_event_lock, flags);
 	list_add_tail(&fw_event->list, &ioc->fw_event_list);
+	fw_event->users = 1;
 	INIT_DELAYED_WORK(&fw_event->work, mptsas_firmware_event_work);
 	devtprintk(ioc, printk(MYIOC_s_DEBUG_FMT "%s: add (fw_event=0x%p)"
 		"on cpuid %d\n", ioc->name, __func__,
@@ -314,6 +315,15 @@ mptsas_requeue_fw_event(MPT_ADAPTER *ioc, struct fw_event_work *fw_event,
 	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
 }
 
+static void __mptsas_free_fw_event(MPT_ADAPTER *ioc,
+				   struct fw_event_work *fw_event)
+{
+	devtprintk(ioc, printk(MYIOC_s_DEBUG_FMT "%s: kfree (fw_event=0x%p)\n",
+	    ioc->name, __func__, fw_event));
+	list_del(&fw_event->list);
+	kfree(fw_event);
+}
+
 /* free memory associated to a sas firmware event */
 static void
 mptsas_free_fw_event(MPT_ADAPTER *ioc, struct fw_event_work *fw_event)
@@ -321,10 +331,9 @@ mptsas_free_fw_event(MPT_ADAPTER *ioc, struct fw_event_work *fw_event)
 	unsigned long flags;
 
 	spin_lock_irqsave(&ioc->fw_event_lock, flags);
-	devtprintk(ioc, printk(MYIOC_s_DEBUG_FMT "%s: kfree (fw_event=0x%p)\n",
-	    ioc->name, __func__, fw_event));
-	list_del(&fw_event->list);
-	kfree(fw_event);
+	fw_event->users--;
+	if (!fw_event->users)
+		__mptsas_free_fw_event(ioc, fw_event);
 	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
 }
@@ -333,9 +342,10 @@ mptsas_free_fw_event(MPT_ADAPTER *ioc, struct fw_event_work *fw_event)
 static void
 mptsas_cleanup_fw_event_q(MPT_ADAPTER *ioc)
 {
-	struct fw_event_work *fw_event, *next;
+	struct fw_event_work *fw_event;
 	struct mptsas_target_reset_event *target_reset_list, *n;
 	MPT_SCSI_HOST	*hd = shost_priv(ioc->sh);
+	unsigned long flags;
 
 	/* flush the target_reset_list */
 	if (!list_empty(&hd->target_reset_list)) {
@@ -350,14 +360,29 @@ mptsas_cleanup_fw_event_q(MPT_ADAPTER *ioc)
 		}
 	}
 
-	if (list_empty(&ioc->fw_event_list) ||
-	    !ioc->fw_event_q || in_interrupt())
+	if (list_empty(&ioc->fw_event_list) || !ioc->fw_event_q)
 		return;
 
-	list_for_each_entry_safe(fw_event, next, &ioc->fw_event_list, list) {
-		if (cancel_delayed_work(&fw_event->work))
-			mptsas_free_fw_event(ioc, fw_event);
+	spin_lock_irqsave(&ioc->fw_event_lock, flags);
+
+	while (!list_empty(&ioc->fw_event_list)) {
+		bool canceled = false;
+
+		fw_event = list_first_entry(&ioc->fw_event_list,
+					    struct fw_event_work, list);
+		fw_event->users++;
+		spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
+		if (cancel_delayed_work_sync(&fw_event->work))
+			canceled = true;
+
+		spin_lock_irqsave(&ioc->fw_event_lock, flags);
+		if (canceled)
+			fw_event->users--;
+		fw_event->users--;
+		WARN_ON_ONCE(fw_event->users);
+		__mptsas_free_fw_event(ioc, fw_event);
 	}
+	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
 }
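
The mptsas rework above exists because cancel_delayed_work_sync() may sleep and so cannot run under fw_event_lock, yet dropping the lock used to allow the event to be freed underneath the canceller. The fix is a reference count on each event. A generic sketch of the pattern under those assumptions (all names here are illustrative, not mptsas code):

	#include <linux/list.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>
	#include <linux/workqueue.h>

	struct event {
		struct list_head list;
		struct delayed_work work;
		int users;		/* protected by event_lock */
	};

	static LIST_HEAD(event_list);
	static DEFINE_SPINLOCK(event_lock);

	/* Events are queued with users = 1 (the list's reference); the
	 * work handler drops that reference when it finishes.
	 */
	static void event_put_locked(struct event *ev)
	{
		if (!--ev->users) {
			list_del(&ev->list);
			kfree(ev);
		}
	}

	static void flush_events(void)
	{
		unsigned long flags;

		spin_lock_irqsave(&event_lock, flags);
		while (!list_empty(&event_list)) {
			struct event *ev = list_first_entry(&event_list,
							    struct event, list);
			bool canceled;

			ev->users++;	/* pin it across the unlock */
			spin_unlock_irqrestore(&event_lock, flags);

			/* May sleep; safe now that the lock is dropped. */
			canceled = cancel_delayed_work_sync(&ev->work);

			spin_lock_irqsave(&event_lock, flags);
			if (canceled)	/* work never ran: reclaim its ref */
				ev->users--;
			event_put_locked(ev);	/* drop the pin; frees at zero */
		}
		spin_unlock_irqrestore(&event_lock, flags);
	}

While an event is on the list its count is nonzero, so pinning under the lock is always safe, and whichever path drops the last reference does the list_del() and kfree().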


@@ -107,6 +107,7 @@ struct mptsas_hotplug_event {
 struct fw_event_work {
 	struct list_head	list;
 	struct delayed_work	work;
+	int			users;
 	MPT_ADAPTER	*ioc;
 	u32			event;
 	u8			retries;


@@ -52,7 +52,7 @@
 #include <linux/kdev_t.h>
 #include <linux/blkdev.h>
 #include <linux/delay.h>	/* for mdelay */
-#include <linux/interrupt.h>	/* needed for in_interrupt() proto */
+#include <linux/interrupt.h>
 #include <linux/reboot.h>	/* notifier code */
 #include <linux/workqueue.h>


@@ -52,7 +52,7 @@
 #include <linux/kdev_t.h>
 #include <linux/blkdev.h>
 #include <linux/delay.h>	/* for mdelay */
-#include <linux/interrupt.h>	/* needed for in_interrupt() proto */
+#include <linux/interrupt.h>
 #include <linux/reboot.h>	/* notifier code */
 #include <linux/workqueue.h>
 #include <linux/raid_class.h>


@@ -292,6 +292,14 @@ static void _zfcp_status_read_scheduler(struct work_struct *work)
 					     stat_work));
 }
 
+static void zfcp_version_change_lost_work(struct work_struct *work)
+{
+	struct zfcp_adapter *adapter = container_of(work, struct zfcp_adapter,
+						    version_change_lost_work);
+
+	zfcp_fsf_exchange_config_data_sync(adapter->qdio, NULL);
+}
+
 static void zfcp_print_sl(struct seq_file *m, struct service_level *sl)
 {
 	struct zfcp_adapter *adapter =
@@ -353,6 +361,8 @@ struct zfcp_adapter *zfcp_adapter_enqueue(struct ccw_device *ccw_device)
 	INIT_WORK(&adapter->stat_work, _zfcp_status_read_scheduler);
 	INIT_DELAYED_WORK(&adapter->scan_work, zfcp_fc_scan_ports);
 	INIT_WORK(&adapter->ns_up_work, zfcp_fc_sym_name_update);
+	INIT_WORK(&adapter->version_change_lost_work,
+		  zfcp_version_change_lost_work);
 
 	adapter->next_port_scan = jiffies;
@@ -429,6 +439,7 @@ void zfcp_adapter_unregister(struct zfcp_adapter *adapter)
 	cancel_delayed_work_sync(&adapter->scan_work);
 	cancel_work_sync(&adapter->stat_work);
 	cancel_work_sync(&adapter->ns_up_work);
+	cancel_work_sync(&adapter->version_change_lost_work);
 	zfcp_destroy_adapter_work_queue(adapter);
 
 	zfcp_fc_wka_ports_force_offline(adapter->gs);
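
The zfcp_aux.c hunks wire the new work item through the standard lifecycle: INIT_WORK() when the adapter is set up, queue_work() from the status-read handler (see the zfcp_fsf.c hunk further down), and cancel_work_sync() before the adapter is unregistered. A compact sketch of that discipline for a work item embedded in a long-lived object (names are illustrative):

	#include <linux/printk.h>
	#include <linux/workqueue.h>

	struct adapter_obj {
		struct work_struct refresh_work;
	};

	static void adapter_refresh_work(struct work_struct *work)
	{
		struct adapter_obj *a = container_of(work, struct adapter_obj,
						     refresh_work);

		/* re-read configuration for 'a' in process context */
		pr_debug("refreshing %p\n", a);
	}

	static void adapter_obj_init(struct adapter_obj *a)
	{
		/* before the first queue_work() */
		INIT_WORK(&a->refresh_work, adapter_refresh_work);
	}

	static void adapter_obj_unregister(struct adapter_obj *a)
	{
		/* before 'a' can be freed */
		cancel_work_sync(&a->refresh_work);
	}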


@@ -200,6 +200,7 @@ struct zfcp_adapter {
 	struct zfcp_fc_events events;
 	unsigned long		next_port_scan;
 	struct zfcp_diag_adapter	*diagnostics;
+	struct work_struct	version_change_lost_work;
 };
 
 struct zfcp_port {


@@ -20,8 +20,6 @@ extern struct zfcp_port *zfcp_get_port_by_wwpn(struct zfcp_adapter *, u64);
 extern struct zfcp_adapter *zfcp_adapter_enqueue(struct ccw_device *);
 extern struct zfcp_port *zfcp_port_enqueue(struct zfcp_adapter *, u64, u32,
 					   u32);
-extern void zfcp_sg_free_table(struct scatterlist *, int);
-extern int zfcp_sg_setup_table(struct scatterlist *, int);
 extern void zfcp_adapter_release(struct kref *);
 extern void zfcp_adapter_unregister(struct zfcp_adapter *);


@@ -242,6 +242,19 @@ static void zfcp_fsf_status_read_link_down(struct zfcp_fsf_req *req)
 	}
 }
 
+static void
+zfcp_fsf_status_read_version_change(struct zfcp_adapter *adapter,
+				    struct fsf_status_read_buffer *sr_buf)
+{
+	if (sr_buf->status_subtype == FSF_STATUS_READ_SUB_LIC_CHANGE) {
+		u32 version = sr_buf->payload.version_change.current_version;
+
+		WRITE_ONCE(adapter->fsf_lic_version, version);
+		snprintf(fc_host_firmware_version(adapter->scsi_host),
+			 FC_VERSION_STRING_SIZE, "%#08x", version);
+	}
+}
+
 static void zfcp_fsf_status_read_handler(struct zfcp_fsf_req *req)
 {
 	struct zfcp_adapter *adapter = req->adapter;
@@ -296,10 +309,16 @@ static void zfcp_fsf_status_read_handler(struct zfcp_fsf_req *req)
 	case FSF_STATUS_READ_NOTIFICATION_LOST:
 		if (sr_buf->status_subtype & FSF_STATUS_READ_SUB_INCOMING_ELS)
 			zfcp_fc_conditional_port_scan(adapter);
+		if (sr_buf->status_subtype & FSF_STATUS_READ_SUB_VERSION_CHANGE)
+			queue_work(adapter->work_queue,
+				   &adapter->version_change_lost_work);
 		break;
 	case FSF_STATUS_READ_FEATURE_UPDATE_ALERT:
 		adapter->adapter_features = sr_buf->payload.word[0];
 		break;
+	case FSF_STATUS_READ_VERSION_CHANGE:
+		zfcp_fsf_status_read_version_change(adapter, sr_buf);
+		break;
 	}
 
 	mempool_free(virt_to_page(sr_buf), adapter->pool.sr_data);


@@ -134,6 +134,7 @@
 #define FSF_STATUS_READ_LINK_UP			0x00000006
 #define FSF_STATUS_READ_NOTIFICATION_LOST	0x00000009
 #define FSF_STATUS_READ_FEATURE_UPDATE_ALERT	0x0000000C
+#define FSF_STATUS_READ_VERSION_CHANGE		0x0000000D
 
 /* status subtypes for link down */
 #define FSF_STATUS_READ_SUB_NO_PHYSICAL_LINK	0x00000000
@@ -142,6 +143,10 @@
 
 /* status subtypes for unsolicited status notification lost */
 #define FSF_STATUS_READ_SUB_INCOMING_ELS	0x00000001
+#define FSF_STATUS_READ_SUB_VERSION_CHANGE	0x00000100
+
+/* status subtypes for version change */
+#define FSF_STATUS_READ_SUB_LIC_CHANGE		0x00000001
 
 /* topologie that is detected by the adapter */
 #define FSF_TOPO_P2P				0x00000001
@@ -226,6 +231,11 @@ struct fsf_link_down_info {
 	u8 vendor_specific_code;
 } __attribute__ ((packed));
 
+struct fsf_version_change {
+	u32 current_version;
+	u32 previous_version;
+} __packed;
+
 struct fsf_status_read_buffer {
 	u32 status_type;
 	u32 status_subtype;
@@ -242,6 +252,7 @@ struct fsf_status_read_buffer {
 		u32 word[FSF_STATUS_READ_PAYLOAD_SIZE/sizeof(u32)];
 		struct fsf_link_down_info link_down_info;
 		struct fsf_bit_error_payload bit_error;
+		struct fsf_version_change version_change;
 	} payload;
 } __attribute__ ((packed));


@@ -10,6 +10,7 @@
 #define KMSG_COMPONENT "zfcp"
 #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
 
+#include <linux/lockdep.h>
 #include <linux/slab.h>
 #include <linux/module.h>
 #include "zfcp_ext.h"
@@ -131,6 +132,33 @@ static void zfcp_qdio_int_resp(struct ccw_device *cdev, unsigned int qdio_err,
 		zfcp_erp_adapter_reopen(qdio->adapter, 0, "qdires2");
 }
 
+static void zfcp_qdio_irq_tasklet(struct tasklet_struct *tasklet)
+{
+	struct zfcp_qdio *qdio = from_tasklet(qdio, tasklet, irq_tasklet);
+	struct ccw_device *cdev = qdio->adapter->ccw_device;
+	unsigned int start, error;
+	int completed;
+
+	/* Check the Response Queue, and kick off the Request Queue tasklet: */
+	completed = qdio_get_next_buffers(cdev, 0, &start, &error);
+	if (completed < 0)
+		return;
+	if (completed > 0)
+		zfcp_qdio_int_resp(cdev, error, 0, start, completed,
+				   (unsigned long) qdio);
+
+	if (qdio_start_irq(cdev))
+		/* More work pending: */
+		tasklet_schedule(&qdio->irq_tasklet);
+}
+
+static void zfcp_qdio_poll(struct ccw_device *cdev, unsigned long data)
+{
+	struct zfcp_qdio *qdio = (struct zfcp_qdio *) data;
+
+	tasklet_schedule(&qdio->irq_tasklet);
+}
+
 static struct qdio_buffer_element *
 zfcp_qdio_sbal_chain(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req)
 {
@@ -256,6 +284,13 @@ int zfcp_qdio_send(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req)
 	int retval;
 	u8 sbal_number = q_req->sbal_number;
 
+	/*
+	 * This should actually be a spin_lock_bh(stat_lock), to protect against
+	 * zfcp_qdio_int_req() in tasklet context.
+	 * But we can't do so (and are safe), as we always get called with IRQs
+	 * disabled by spin_lock_irq[save](req_q_lock).
+	 */
+	lockdep_assert_irqs_disabled();
 	spin_lock(&qdio->stat_lock);
 	zfcp_qdio_account(qdio);
 	spin_unlock(&qdio->stat_lock);
@@ -332,6 +367,8 @@ void zfcp_qdio_close(struct zfcp_qdio *qdio)
 	wake_up(&qdio->req_q_wq);
 
+	tasklet_disable(&qdio->irq_tasklet);
+	qdio_stop_irq(adapter->ccw_device);
 	qdio_shutdown(adapter->ccw_device, QDIO_FLAG_CLEANUP_USING_CLEAR);
 
 	/* cleanup used outbound sbals */
@@ -387,6 +424,7 @@ int zfcp_qdio_open(struct zfcp_qdio *qdio)
 	init_data.no_output_qs = 1;
 	init_data.input_handler = zfcp_qdio_int_resp;
 	init_data.output_handler = zfcp_qdio_int_req;
+	init_data.irq_poll = zfcp_qdio_poll;
 	init_data.int_parm = (unsigned long) qdio;
 	init_data.input_sbal_addr_array = input_sbals;
 	init_data.output_sbal_addr_array = output_sbals;
@@ -433,6 +471,11 @@ int zfcp_qdio_open(struct zfcp_qdio *qdio)
 	atomic_set(&qdio->req_q_free, QDIO_MAX_BUFFERS_PER_Q);
 	atomic_or(ZFCP_STATUS_ADAPTER_QDIOUP, &qdio->adapter->status);
 
+	/* Enable processing for QDIO interrupts: */
+	tasklet_enable(&qdio->irq_tasklet);
+	/* This results in a qdio_start_irq(): */
+	tasklet_schedule(&qdio->irq_tasklet);
+
 	zfcp_qdio_shost_update(adapter, qdio);
 
 	return 0;
@@ -450,6 +493,8 @@ void zfcp_qdio_destroy(struct zfcp_qdio *qdio)
 	if (!qdio)
 		return;
 
+	tasklet_kill(&qdio->irq_tasklet);
+
 	if (qdio->adapter->ccw_device)
 		qdio_free(qdio->adapter->ccw_device);
@@ -475,6 +520,8 @@ int zfcp_qdio_setup(struct zfcp_adapter *adapter)
 	spin_lock_init(&qdio->req_q_lock);
 	spin_lock_init(&qdio->stat_lock);
+	tasklet_setup(&qdio->irq_tasklet, zfcp_qdio_irq_tasklet);
+	tasklet_disable(&qdio->irq_tasklet);
 
 	adapter->qdio = qdio;
 	return 0;
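
The qdio hunks above move response processing out of hard-IRQ context into a tasklet: the new irq_poll callback only schedules the tasklet, and the tasklet drains completed buffers and re-arms the interrupt via qdio_start_irq(). Worth noting is the enable/disable discipline, sketched below with illustrative names: set up disabled, enable and kick once the queues are open, disable on close, kill on destroy.

	#include <linux/interrupt.h>

	struct poll_ctx {
		struct tasklet_struct poll_tasklet;
	};

	static void poll_ctx_tasklet(struct tasklet_struct *t)
	{
		struct poll_ctx *c = from_tasklet(c, t, poll_tasklet);

		/* drain completions for 'c'; if the hardware reports more
		 * work when re-arming the interrupt, tasklet_schedule()
		 * again, as the zfcp tasklet above does
		 */
	}

	static void poll_ctx_setup(struct poll_ctx *c)
	{
		tasklet_setup(&c->poll_tasklet, poll_ctx_tasklet);
		tasklet_disable(&c->poll_tasklet);	/* queues not ready yet */
	}

	static void poll_ctx_open(struct poll_ctx *c)
	{
		tasklet_enable(&c->poll_tasklet);
		tasklet_schedule(&c->poll_tasklet);	/* process anything pending */
	}

	static void poll_ctx_close(struct poll_ctx *c)
	{
		tasklet_disable(&c->poll_tasklet);	/* block further runs */
	}

	static void poll_ctx_destroy(struct poll_ctx *c)
	{
		tasklet_kill(&c->poll_tasklet);		/* wait out a final run */
	}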


@@ -10,6 +10,7 @@
 #ifndef ZFCP_QDIO_H
 #define ZFCP_QDIO_H
 
+#include <linux/interrupt.h>
 #include <asm/qdio.h>
 
 #define ZFCP_QDIO_SBALE_LEN	PAGE_SIZE
@@ -44,6 +45,7 @@ struct zfcp_qdio {
 	u64			req_q_util;
 	atomic_t		req_q_full;
 	wait_queue_head_t	req_q_wq;
+	struct tasklet_struct	irq_tasklet;
 	struct zfcp_adapter	*adapter;
 	u16			max_sbale_per_sbal;
 	u16			max_sbale_per_req;


@@ -2191,10 +2191,10 @@ static void twa_remove(struct pci_dev *pdev)
 	twa_device_extension_count--;
 } /* End twa_remove() */
 
-#ifdef CONFIG_PM
 /* This function is called on PCI suspend */
-static int twa_suspend(struct pci_dev *pdev, pm_message_t state)
+static int __maybe_unused twa_suspend(struct device *dev)
 {
+	struct pci_dev *pdev = to_pci_dev(dev);
 	struct Scsi_Host *host = pci_get_drvdata(pdev);
 	TW_Device_Extension *tw_dev = (TW_Device_Extension *)host->hostdata;
 
@@ -2214,32 +2214,19 @@ static int twa_suspend(struct pci_dev *pdev, pm_message_t state)
 	}
 	TW_CLEAR_ALL_INTERRUPTS(tw_dev);
 
-	pci_save_state(pdev);
-	pci_disable_device(pdev);
-	pci_set_power_state(pdev, pci_choose_state(pdev, state));
-
 	return 0;
 } /* End twa_suspend() */
 
 /* This function is called on PCI resume */
-static int twa_resume(struct pci_dev *pdev)
+static int __maybe_unused twa_resume(struct device *dev)
 {
 	int retval = 0;
+	struct pci_dev *pdev = to_pci_dev(dev);
 	struct Scsi_Host *host = pci_get_drvdata(pdev);
 	TW_Device_Extension *tw_dev = (TW_Device_Extension *)host->hostdata;
 
 	printk(KERN_WARNING "3w-9xxx: Resuming host %d.\n", tw_dev->host->host_no);
-	pci_set_power_state(pdev, PCI_D0);
-	pci_enable_wake(pdev, PCI_D0, 0);
-	pci_restore_state(pdev);
-
-	retval = pci_enable_device(pdev);
-	if (retval) {
-		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x39, "Enable device failed during resume");
-		return retval;
-	}
 
-	pci_set_master(pdev);
 	pci_try_set_mwi(pdev);
 
 	retval = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
@@ -2277,11 +2264,9 @@ static int twa_resume(struct pci_dev *pdev)
 
 out_disable_device:
 	scsi_remove_host(host);
-	pci_disable_device(pdev);
 
 	return retval;
 } /* End twa_resume() */
-#endif
 
 /* PCI Devices supported by this driver */
 static struct pci_device_id twa_pci_tbl[] = {
@@ -2297,16 +2282,15 @@ static struct pci_device_id twa_pci_tbl[] = {
 };
 MODULE_DEVICE_TABLE(pci, twa_pci_tbl);
 
+static SIMPLE_DEV_PM_OPS(twa_pm_ops, twa_suspend, twa_resume);
+
 /* pci_driver initializer */
 static struct pci_driver twa_driver = {
 	.name		= "3w-9xxx",
 	.id_table	= twa_pci_tbl,
 	.probe		= twa_probe,
 	.remove		= twa_remove,
-#ifdef CONFIG_PM
-	.suspend	= twa_suspend,
-	.resume		= twa_resume,
-#endif
+	.driver.pm	= &twa_pm_ops,
 	.shutdown	= twa_shutdown
 };
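
This is the first of several legacy PM conversions in the series (3w-sas and aacraid below follow the same recipe): the pci_driver .suspend/.resume hooks and their CONFIG_PM ifdeffery are replaced by dev_pm_ops. Under the generic PM path the PCI core saves and restores config space and handles the D-state transitions itself, so the per-driver pci_save_state()/pci_set_power_state()/pci_enable_device()/pci_set_master() boilerplate disappears. A minimal sketch of the converted shape, using a hypothetical driver:

	#include <linux/pci.h>
	#include <linux/pm.h>

	static int __maybe_unused foo_suspend(struct device *dev)
	{
		/* Quiesce the device only; the PCI core performs
		 * pci_save_state() and the power-state change around
		 * this callback.
		 */
		return 0;
	}

	static int __maybe_unused foo_resume(struct device *dev)
	{
		struct pci_dev *pdev = to_pci_dev(dev);

		/* Config space is already restored and the device enabled;
		 * only device-specific re-initialization remains.
		 */
		pci_try_set_mwi(pdev);
		return 0;
	}

	/* Expands to an empty ops structure when CONFIG_PM_SLEEP is off,
	 * which is why the callbacks are marked __maybe_unused instead
	 * of being wrapped in #ifdef.
	 */
	static SIMPLE_DEV_PM_OPS(foo_pm_ops, foo_suspend, foo_resume);

	static struct pci_driver foo_driver = {
		.name		= "foo",
		/* .id_table, .probe, .remove, ... */
		.driver.pm	= &foo_pm_ops,
	};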


@@ -1756,11 +1756,10 @@ static void twl_remove(struct pci_dev *pdev)
 	twl_device_extension_count--;
 } /* End twl_remove() */
 
-#ifdef CONFIG_PM
 /* This function is called on PCI suspend */
-static int twl_suspend(struct pci_dev *pdev, pm_message_t state)
+static int __maybe_unused twl_suspend(struct device *dev)
 {
-	struct Scsi_Host *host = pci_get_drvdata(pdev);
+	struct Scsi_Host *host = dev_get_drvdata(dev);
 	TW_Device_Extension *tw_dev = (TW_Device_Extension *)host->hostdata;
 
 	printk(KERN_WARNING "3w-sas: Suspending host %d.\n", tw_dev->host->host_no);
@@ -1779,32 +1778,18 @@ static int twl_suspend(struct pci_dev *pdev, pm_message_t state)
 	/* Clear doorbell interrupt */
 	TWL_CLEAR_DB_INTERRUPT(tw_dev);
 
-	pci_save_state(pdev);
-	pci_disable_device(pdev);
-	pci_set_power_state(pdev, pci_choose_state(pdev, state));
-
 	return 0;
 } /* End twl_suspend() */
 
 /* This function is called on PCI resume */
-static int twl_resume(struct pci_dev *pdev)
+static int __maybe_unused twl_resume(struct device *dev)
 {
 	int retval = 0;
+	struct pci_dev *pdev = to_pci_dev(dev);
 	struct Scsi_Host *host = pci_get_drvdata(pdev);
 	TW_Device_Extension *tw_dev = (TW_Device_Extension *)host->hostdata;
 
 	printk(KERN_WARNING "3w-sas: Resuming host %d.\n", tw_dev->host->host_no);
-	pci_set_power_state(pdev, PCI_D0);
-	pci_enable_wake(pdev, PCI_D0, 0);
-	pci_restore_state(pdev);
-
-	retval = pci_enable_device(pdev);
-	if (retval) {
-		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x24, "Enable device failed during resume");
-		return retval;
-	}
 
-	pci_set_master(pdev);
 	pci_try_set_mwi(pdev);
 
 	retval = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
@@ -1842,11 +1827,9 @@ static int twl_resume(struct pci_dev *pdev)
 
 out_disable_device:
 	scsi_remove_host(host);
-	pci_disable_device(pdev);
 
 	return retval;
 } /* End twl_resume() */
-#endif
 
 /* PCI Devices supported by this driver */
 static struct pci_device_id twl_pci_tbl[] = {
@@ -1855,16 +1838,15 @@ static struct pci_device_id twl_pci_tbl[] = {
 };
 MODULE_DEVICE_TABLE(pci, twl_pci_tbl);
 
+static SIMPLE_DEV_PM_OPS(twl_pm_ops, twl_suspend, twl_resume);
+
 /* pci_driver initializer */
 static struct pci_driver twl_driver = {
 	.name		= "3w-sas",
 	.id_table	= twl_pci_tbl,
 	.probe		= twl_probe,
 	.remove		= twl_remove,
-#ifdef CONFIG_PM
-	.suspend	= twl_suspend,
-	.resume		= twl_resume,
-#endif
+	.driver.pm	= &twl_pm_ops,
	.shutdown	= twl_shutdown
 };


@@ -132,7 +132,7 @@
 static unsigned int disconnect_mask = ~0;
 module_param(disconnect_mask, int, 0444);
 
-static int do_abort(struct Scsi_Host *);
+static int do_abort(struct Scsi_Host *, unsigned int);
 static void do_reset(struct Scsi_Host *);
 static void bus_reset_cleanup(struct Scsi_Host *);
@@ -197,7 +197,7 @@ static inline void set_resid_from_SCp(struct scsi_cmnd *cmd)
  * @reg2: Second 5380 register to poll
  * @bit2: Second bitmask to check
  * @val2: Second expected value
- * @wait: Time-out in jiffies
+ * @wait: Time-out in jiffies, 0 if sleeping is not allowed
  *
  * Polls the chip in a reasonably efficient manner waiting for an
 * event to occur. After a short quick poll we begin to yield the CPU
@@ -223,7 +223,7 @@ static int NCR5380_poll_politely2(struct NCR5380_hostdata *hostdata,
 		cpu_relax();
 	} while (n--);
 
-	if (irqs_disabled() || in_interrupt())
+	if (!wait)
 		return -ETIMEDOUT;
 
 	/* Repeatedly sleep for 1 ms until deadline */
@@ -486,7 +486,7 @@ static int NCR5380_maybe_reset_bus(struct Scsi_Host *instance)
 		break;
 	case 2:
 		shost_printk(KERN_ERR, instance, "bus busy, attempting abort\n");
-		do_abort(instance);
+		do_abort(instance, 1);
 		break;
 	case 4:
 		shost_printk(KERN_ERR, instance, "bus busy, attempting reset\n");
@@ -580,11 +580,14 @@ static int NCR5380_queue_command(struct Scsi_Host *instance,
 	cmd->result = 0;
 
-	if (!NCR5380_acquire_dma_irq(instance))
-		return SCSI_MLQUEUE_HOST_BUSY;
-
 	spin_lock_irqsave(&hostdata->lock, flags);
 
+	if (!NCR5380_acquire_dma_irq(instance)) {
+		spin_unlock_irqrestore(&hostdata->lock, flags);
+		return SCSI_MLQUEUE_HOST_BUSY;
+	}
+
 	/*
 	 * Insert the cmd into the issue queue. Note that REQUEST SENSE
 	 * commands are added to the head of the queue since any command will
@@ -722,7 +725,6 @@ static void NCR5380_main(struct work_struct *work)
 
 			if (!NCR5380_select(instance, cmd)) {
 				dsprintk(NDEBUG_MAIN, instance, "main: select complete\n");
-				maybe_release_dma_irq(instance);
 			} else {
 				dsprintk(NDEBUG_MAIN | NDEBUG_QUEUES, instance,
 					 "main: select failed, returning %p to queue\n", cmd);
@@ -734,8 +736,10 @@ static void NCR5380_main(struct work_struct *work)
 			NCR5380_information_transfer(instance);
 			done = 0;
 		}
-		if (!hostdata->connected)
+		if (!hostdata->connected) {
 			NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask);
+			maybe_release_dma_irq(instance);
+		}
 		spin_unlock_irq(&hostdata->lock);
 		if (!done)
 			cond_resched();
@@ -818,7 +822,7 @@ static void NCR5380_dma_complete(struct Scsi_Host *instance)
 		if (toPIO > 0) {
 			dsprintk(NDEBUG_DMA, instance,
 				 "Doing %d byte PIO to 0x%p\n", cnt, *data);
-			NCR5380_transfer_pio(instance, &p, &cnt, data);
+			NCR5380_transfer_pio(instance, &p, &cnt, data, 0);
 			*count -= toPIO - cnt;
 		}
 	}
@@ -1185,7 +1189,7 @@ static bool NCR5380_select(struct Scsi_Host *instance, struct scsi_cmnd *cmd)
 		goto out;
 	}
 	if (!hostdata->selecting) {
-		do_abort(instance);
+		do_abort(instance, 0);
 		return false;
 	}
@@ -1196,7 +1200,7 @@ static bool NCR5380_select(struct Scsi_Host *instance, struct scsi_cmnd *cmd)
 	len = 1;
 	data = tmp;
 	phase = PHASE_MSGOUT;
-	NCR5380_transfer_pio(instance, &phase, &len, &data);
+	NCR5380_transfer_pio(instance, &phase, &len, &data, 0);
 	if (len) {
 		NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
 		cmd->result = DID_ERROR << 16;
@@ -1234,7 +1238,8 @@ out:
 *
 * Inputs : instance - instance of driver, *phase - pointer to
 * what phase is expected, *count - pointer to number of
- * bytes to transfer, **data - pointer to data pointer.
+ * bytes to transfer, **data - pointer to data pointer,
+ * can_sleep - 1 or 0 when sleeping is permitted or not, respectively.
 *
 * Returns : -1 when different phase is entered without transferring
 * maximum number of bytes, 0 if all bytes are transferred or exit
@@ -1253,7 +1258,7 @@ out:
 
 static int NCR5380_transfer_pio(struct Scsi_Host *instance,
 				unsigned char *phase, int *count,
-				unsigned char **data)
+				unsigned char **data, unsigned int can_sleep)
 {
 	struct NCR5380_hostdata *hostdata = shost_priv(instance);
 	unsigned char p = *phase, tmp;
@@ -1274,7 +1279,8 @@ static int NCR5380_transfer_pio(struct Scsi_Host *instance,
 		 * valid
 		 */
 
-		if (NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, SR_REQ, HZ) < 0)
+		if (NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, SR_REQ,
+					  HZ * can_sleep) < 0)
 			break;
 
 		dsprintk(NDEBUG_HANDSHAKE, instance, "REQ asserted\n");
@@ -1320,7 +1326,7 @@ static int NCR5380_transfer_pio(struct Scsi_Host *instance,
 		}
 
 		if (NCR5380_poll_politely(hostdata,
-					  STATUS_REG, SR_REQ, 0, 5 * HZ) < 0)
+					  STATUS_REG, SR_REQ, 0, 5 * HZ * can_sleep) < 0)
 			break;
 
 		dsprintk(NDEBUG_HANDSHAKE, instance, "REQ negated, handshake complete\n");
@@ -1395,11 +1401,12 @@ static void do_reset(struct Scsi_Host *instance)
 * do_abort - abort the currently established nexus by going to
 *            MESSAGE OUT phase and sending an ABORT message.
 * @instance: relevant scsi host instance
+ * @can_sleep: 1 or 0 when sleeping is permitted or not, respectively
 *
 * Returns 0 on success, negative error code on failure.
 */
 
-static int do_abort(struct Scsi_Host *instance)
+static int do_abort(struct Scsi_Host *instance, unsigned int can_sleep)
 {
 	struct NCR5380_hostdata *hostdata = shost_priv(instance);
 	unsigned char *msgptr, phase, tmp;
@@ -1419,7 +1426,8 @@ static int do_abort(struct Scsi_Host *instance, unsigned int can_sleep)
 	 * the target sees, so we just handshake.
 	 */
 
-	rc = NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, SR_REQ, 10 * HZ);
+	rc = NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, SR_REQ,
+				   10 * HZ * can_sleep);
 	if (rc < 0)
 		goto out;
@@ -1430,7 +1438,8 @@ static int do_abort(struct Scsi_Host *instance, unsigned int can_sleep)
 	if (tmp != PHASE_MSGOUT) {
 		NCR5380_write(INITIATOR_COMMAND_REG,
 			      ICR_BASE | ICR_ASSERT_ATN | ICR_ASSERT_ACK);
-		rc = NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, 0, 3 * HZ);
+		rc = NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, 0,
+					   3 * HZ * can_sleep);
 		if (rc < 0)
 			goto out;
 		NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE | ICR_ASSERT_ATN);
@@ -1440,7 +1449,7 @@ static int do_abort(struct Scsi_Host *instance, unsigned int can_sleep)
 	msgptr = &tmp;
 	len = 1;
 	phase = PHASE_MSGOUT;
-	NCR5380_transfer_pio(instance, &phase, &len, &msgptr);
+	NCR5380_transfer_pio(instance, &phase, &len, &msgptr, can_sleep);
 	if (len)
 		rc = -ENXIO;
@@ -1619,12 +1628,12 @@ static int NCR5380_transfer_dma(struct Scsi_Host *instance,
 		 */
 
 		if (NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
-					  BASR_DRQ, BASR_DRQ, HZ) < 0) {
+					  BASR_DRQ, BASR_DRQ, 0) < 0) {
 			result = -1;
 			shost_printk(KERN_ERR, instance, "PDMA read: DRQ timeout\n");
 		}
 		if (NCR5380_poll_politely(hostdata, STATUS_REG,
-					  SR_REQ, 0, HZ) < 0) {
+					  SR_REQ, 0, 0) < 0) {
 			result = -1;
 			shost_printk(KERN_ERR, instance, "PDMA read: !REQ timeout\n");
 		}
@@ -1636,7 +1645,7 @@ static int NCR5380_transfer_dma(struct Scsi_Host *instance,
 		 */
 		if (NCR5380_poll_politely2(hostdata,
 					   BUS_AND_STATUS_REG, BASR_DRQ, BASR_DRQ,
-					   BUS_AND_STATUS_REG, BASR_PHASE_MATCH, 0, HZ) < 0) {
+					   BUS_AND_STATUS_REG, BASR_PHASE_MATCH, 0, 0) < 0) {
 			result = -1;
 			shost_printk(KERN_ERR, instance, "PDMA write: DRQ and phase timeout\n");
 		}
@@ -1733,7 +1742,7 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
 #if (NDEBUG & NDEBUG_NO_DATAOUT)
 				shost_printk(KERN_DEBUG, instance, "NDEBUG_NO_DATAOUT set, attempted DATAOUT aborted\n");
 				sink = 1;
-				do_abort(instance);
+				do_abort(instance, 0);
 				cmd->result = DID_ERROR << 16;
 				complete_cmd(instance, cmd);
 				hostdata->connected = NULL;
@@ -1789,7 +1798,8 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
 						    NCR5380_PIO_CHUNK_SIZE);
 					len = transfersize;
 					NCR5380_transfer_pio(instance, &phase, &len,
-							     (unsigned char **)&cmd->SCp.ptr);
+							     (unsigned char **)&cmd->SCp.ptr,
+							     0);
 					cmd->SCp.this_residual -= transfersize - len;
 				}
 #ifdef CONFIG_SUN3
@@ -1800,7 +1810,7 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
 			case PHASE_MSGIN:
 				len = 1;
 				data = &tmp;
-				NCR5380_transfer_pio(instance, &phase, &len, &data);
+				NCR5380_transfer_pio(instance, &phase, &len, &data, 0);
 				cmd->SCp.Message = tmp;
 
 				switch (tmp) {
@@ -1841,7 +1851,6 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
 					 */
 					NCR5380_write(TARGET_COMMAND_REG, 0);
 
-					maybe_release_dma_irq(instance);
 					return;
 				case MESSAGE_REJECT:
 					/* Accept message by clearing ACK */
@@ -1907,7 +1916,7 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
 					len = 2;
 					data = extended_msg + 1;
 					phase = PHASE_MSGIN;
-					NCR5380_transfer_pio(instance, &phase, &len, &data);
+					NCR5380_transfer_pio(instance, &phase, &len, &data, 1);
 					dsprintk(NDEBUG_EXTENDED, instance, "length %d, code 0x%02x\n",
 						 (int)extended_msg[1],
 						 (int)extended_msg[2]);
@@ -1920,7 +1929,7 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
 						data = extended_msg + 3;
 						phase = PHASE_MSGIN;
 
-						NCR5380_transfer_pio(instance, &phase, &len, &data);
+						NCR5380_transfer_pio(instance, &phase, &len, &data, 1);
 						dsprintk(NDEBUG_EXTENDED, instance, "message received, residual %d\n",
 							 len);
@@ -1967,13 +1976,12 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
 				len = 1;
 				data = &msgout;
 				hostdata->last_message = msgout;
-				NCR5380_transfer_pio(instance, &phase, &len, &data);
+				NCR5380_transfer_pio(instance, &phase, &len, &data, 0);
 				if (msgout == ABORT) {
 					hostdata->connected = NULL;
 					hostdata->busy[scmd_id(cmd)] &= ~(1 << cmd->device->lun);
 					cmd->result = DID_ERROR << 16;
 					complete_cmd(instance, cmd);
-					maybe_release_dma_irq(instance);
 					return;
 				}
 				msgout = NOP;
@@ -1986,12 +1994,12 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
 				 * PSEUDO-DMA architecture we should probably
 				 * use the dma transfer function.
 				 */
-				NCR5380_transfer_pio(instance, &phase, &len, &data);
+				NCR5380_transfer_pio(instance, &phase, &len, &data, 0);
 				break;
 			case PHASE_STATIN:
 				len = 1;
 				data = &tmp;
-				NCR5380_transfer_pio(instance, &phase, &len, &data);
+				NCR5380_transfer_pio(instance, &phase, &len, &data, 0);
 				cmd->SCp.Status = tmp;
 				break;
 			default:
@@ -2050,7 +2058,7 @@ static void NCR5380_reselect(struct Scsi_Host *instance)
 	NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE | ICR_ASSERT_BSY);
 
 	if (NCR5380_poll_politely(hostdata,
-				  STATUS_REG, SR_SEL, 0, 2 * HZ) < 0) {
+				  STATUS_REG, SR_SEL, 0, 0) < 0) {
 		shost_printk(KERN_ERR, instance, "reselect: !SEL timeout\n");
 		NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE);
 		return;
@@ -2062,12 +2070,12 @@ static void NCR5380_reselect(struct Scsi_Host *instance)
 	 */
 
 	if (NCR5380_poll_politely(hostdata,
-				  STATUS_REG, SR_REQ, SR_REQ, 2 * HZ) < 0) {
+				  STATUS_REG, SR_REQ, SR_REQ, 0) < 0) {
 		if ((NCR5380_read(STATUS_REG) & (SR_BSY | SR_SEL)) == 0)
 			/* BUS FREE phase */
 			return;
 		shost_printk(KERN_ERR, instance, "reselect: REQ timeout\n");
-		do_abort(instance);
+		do_abort(instance, 0);
 		return;
 	}
@@ -2083,10 +2091,10 @@ static void NCR5380_reselect(struct Scsi_Host *instance)
 		unsigned char *data = msg;
 		unsigned char phase = PHASE_MSGIN;
 
-		NCR5380_transfer_pio(instance, &phase, &len, &data);
+		NCR5380_transfer_pio(instance, &phase, &len, &data, 0);
 
 		if (len) {
-			do_abort(instance);
+			do_abort(instance, 0);
 			return;
 		}
 	}
@@ -2096,7 +2104,7 @@ static void NCR5380_reselect(struct Scsi_Host *instance)
 		shost_printk(KERN_ERR, instance, "expecting IDENTIFY message, got ");
 		spi_print_msg(msg);
 		printk("\n");
-		do_abort(instance);
+		do_abort(instance, 0);
 		return;
 	}
 	lun = msg[0] & 0x07;
@@ -2136,7 +2144,7 @@ static void NCR5380_reselect(struct Scsi_Host *instance)
 	 * Since we have an established nexus that we can't do anything
 	 * with, we must abort it.
 	 */
-	if (do_abort(instance) == 0)
+	if (do_abort(instance, 0) == 0)
 		hostdata->busy[target] &= ~(1 << lun);
 
 	return;
@@ -2283,7 +2291,7 @@ static int NCR5380_abort(struct scsi_cmnd *cmd)
 		dsprintk(NDEBUG_ABORT, instance, "abort: cmd %p is connected\n", cmd);
 		hostdata->connected = NULL;
 		hostdata->dma_len = 0;
-		if (do_abort(instance) < 0) {
+		if (do_abort(instance, 0) < 0) {
 			set_host_byte(cmd, DID_ERROR);
 			complete_cmd(instance, cmd);
 			result = FAILED;
@@ -2309,7 +2317,6 @@ out:
 	}
 
 	queue_work(hostdata->work_q, &hostdata->main_task);
-	maybe_release_dma_irq(instance);
 	spin_unlock_irqrestore(&hostdata->lock, flags);
 
 	return result;
@@ -2365,7 +2372,6 @@ static void bus_reset_cleanup(struct Scsi_Host *instance)
 	hostdata->dma_len = 0;
 
 	queue_work(hostdata->work_q, &hostdata->main_task);
-	maybe_release_dma_irq(instance);
 }
 
 /**
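
The thread running through all of these NCR5380 hunks is the replacement of context sniffing (irqs_disabled() || in_interrupt()) with an explicit contract: NCR5380_poll_politely() now treats a timeout of 0 as "busy-wait only, never sleep", and callers encode their context in a can_sleep flag, so an expression like HZ * can_sleep collapses to 0 in atomic context. A simplified sketch of the polling helper under that convention (the real function takes the hostdata and a register index rather than a raw address):

	#include <linux/delay.h>
	#include <linux/errno.h>
	#include <linux/io.h>
	#include <linux/jiffies.h>
	#include <linux/kernel.h>
	#include <linux/types.h>

	static int poll_politely(void __iomem *reg, u8 mask, u8 val,
				 unsigned long wait_jiffies)
	{
		unsigned long deadline = jiffies + wait_jiffies;
		int n = 500;			/* quick spin first */

		do {
			if ((readb(reg) & mask) == val)
				return 0;
			cpu_relax();
		} while (n--);

		if (!wait_jiffies)		/* atomic caller: polling only */
			return -ETIMEDOUT;

		while (time_is_after_jiffies(deadline)) {
			msleep(1);		/* politely yield the CPU */
			if ((readb(reg) & mask) == val)
				return 0;
		}
		return -ETIMEDOUT;
	}

	/* A caller that may run in atomic context passes its context along:
	 *
	 *	poll_politely(reg, SR_REQ, SR_REQ, HZ * can_sleep);
	 */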


@@ -277,7 +277,8 @@ static const char *NCR5380_info(struct Scsi_Host *instance);
 static void NCR5380_reselect(struct Scsi_Host *instance);
 static bool NCR5380_select(struct Scsi_Host *, struct scsi_cmnd *);
 static int NCR5380_transfer_dma(struct Scsi_Host *instance, unsigned char *phase, int *count, unsigned char **data);
-static int NCR5380_transfer_pio(struct Scsi_Host *instance, unsigned char *phase, int *count, unsigned char **data);
+static int NCR5380_transfer_pio(struct Scsi_Host *instance, unsigned char *phase, int *count, unsigned char **data,
+				unsigned int can_sleep);
 static int NCR5380_poll_politely2(struct NCR5380_hostdata *,
 				  unsigned int, u8, u8,
 				  unsigned int, u8, u8, unsigned long);


@@ -25,6 +25,7 @@
 #include <linux/completion.h>
 #include <linux/dma-mapping.h>
 #include <linux/blkdev.h>
+#include <linux/compat.h>
 #include <linux/delay.h> /* ssleep prototype */
 #include <linux/kthread.h>
 #include <linux/uaccess.h>
@@ -226,6 +227,12 @@ static int open_getadapter_fib(struct aac_dev * dev, void __user *arg)
 	return status;
 }
 
+struct compat_fib_ioctl {
+	u32 fibctx;
+	s32 wait;
+	compat_uptr_t fib;
+};
+
 /**
  *	next_getadapter_fib	-	get the next fib
  *	@dev: adapter to use
@@ -243,8 +250,19 @@ static int next_getadapter_fib(struct aac_dev * dev, void __user *arg)
 	struct list_head * entry;
 	unsigned long flags;
 
-	if(copy_from_user((void *)&f, arg, sizeof(struct fib_ioctl)))
-		return -EFAULT;
+	if (in_compat_syscall()) {
+		struct compat_fib_ioctl cf;
+
+		if (copy_from_user(&cf, arg, sizeof(struct compat_fib_ioctl)))
+			return -EFAULT;
+
+		f.fibctx = cf.fibctx;
+		f.wait = cf.wait;
+		f.fib = compat_ptr(cf.fib);
+	} else {
+		if (copy_from_user(&f, arg, sizeof(struct fib_ioctl)))
+			return -EFAULT;
+	}
 	/*
 	 *	Verify that the HANDLE passed in was a valid AdapterFibContext
 	 *
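
This hunk, together with the linit.c hunk below that deletes aac_compat_do_ioctl(), follows the kernel-wide move away from compat_alloc_user_space(): instead of rewriting the argument on the user stack, the native ioctl handler checks in_compat_syscall() and copies in a compat-layout struct directly, widening the 32-bit user pointer with compat_ptr(). A condensed sketch of the idiom with illustrative struct names:

	#include <linux/compat.h>
	#include <linux/uaccess.h>

	struct ioctl_arg {
		u32 ctx;
		s32 wait;
		char __user *buf;	/* native-width pointer */
	};

	struct ioctl_arg32 {
		u32 ctx;
		s32 wait;
		compat_uptr_t buf;	/* 32-bit user pointer */
	};

	static int fetch_ioctl_arg(struct ioctl_arg *a, void __user *uarg)
	{
		if (in_compat_syscall()) {
			struct ioctl_arg32 a32;

			if (copy_from_user(&a32, uarg, sizeof(a32)))
				return -EFAULT;
			a->ctx = a32.ctx;
			a->wait = a32.wait;
			a->buf = compat_ptr(a32.buf);	/* widen the pointer */
		} else {
			if (copy_from_user(a, uarg, sizeof(*a)))
				return -EFAULT;
		}
		return 0;
	}

With this in place the same unlocked_ioctl entry point can serve as the compat_ioctl handler too, which is exactly what the fops and host-template hunks below do.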


@@ -1448,6 +1448,7 @@ retry_next:
 			break;
 		}
 		scsi_rescan_device(&device->sdev_gendev);
+		break;
 
 	default:
 		break;


@ -1182,63 +1182,6 @@ static long aac_cfg_ioctl(struct file *file,
return aac_do_ioctl(aac, cmd, (void __user *)arg); return aac_do_ioctl(aac, cmd, (void __user *)arg);
} }
#ifdef CONFIG_COMPAT
static long aac_compat_do_ioctl(struct aac_dev *dev, unsigned cmd, unsigned long arg)
{
long ret;
switch (cmd) {
case FSACTL_MINIPORT_REV_CHECK:
case FSACTL_SENDFIB:
case FSACTL_OPEN_GET_ADAPTER_FIB:
case FSACTL_CLOSE_GET_ADAPTER_FIB:
case FSACTL_SEND_RAW_SRB:
case FSACTL_GET_PCI_INFO:
case FSACTL_QUERY_DISK:
case FSACTL_DELETE_DISK:
-	case FSACTL_FORCE_DELETE_DISK:
-	case FSACTL_GET_CONTAINERS:
-	case FSACTL_SEND_LARGE_FIB:
-		ret = aac_do_ioctl(dev, cmd, (void __user *)arg);
-		break;
-
-	case FSACTL_GET_NEXT_ADAPTER_FIB: {
-		struct fib_ioctl __user *f;
-
-		f = compat_alloc_user_space(sizeof(*f));
-		ret = 0;
-		if (clear_user(f, sizeof(*f)))
-			ret = -EFAULT;
-		if (copy_in_user(f, (void __user *)arg, sizeof(struct fib_ioctl) - sizeof(u32)))
-			ret = -EFAULT;
-		if (!ret)
-			ret = aac_do_ioctl(dev, cmd, f);
-		break;
-	}
-
-	default:
-		ret = -ENOIOCTLCMD;
-		break;
-	}
-	return ret;
-}
-
-static int aac_compat_ioctl(struct scsi_device *sdev, unsigned int cmd,
-			    void __user *arg)
-{
-	struct aac_dev *dev = (struct aac_dev *)sdev->host->hostdata;
-
-	if (!capable(CAP_SYS_RAWIO))
-		return -EPERM;
-
-	return aac_compat_do_ioctl(dev, cmd, (unsigned long)arg);
-}
-
-static long aac_compat_cfg_ioctl(struct file *file, unsigned cmd, unsigned long arg)
-{
-	if (!capable(CAP_SYS_RAWIO))
-		return -EPERM;
-	return aac_compat_do_ioctl(file->private_data, cmd, arg);
-}
-#endif

 static ssize_t aac_show_model(struct device *device,
 			      struct device_attribute *attr, char *buf)
 {
@@ -1523,7 +1466,7 @@ static const struct file_operations aac_cfg_fops = {
 	.owner		= THIS_MODULE,
 	.unlocked_ioctl	= aac_cfg_ioctl,
 #ifdef CONFIG_COMPAT
-	.compat_ioctl	= aac_compat_cfg_ioctl,
+	.compat_ioctl	= aac_cfg_ioctl,
 #endif
 	.open		= aac_cfg_open,
 	.llseek		= noop_llseek,
@@ -1536,7 +1479,7 @@ static struct scsi_host_template aac_driver_template = {
 	.info				= aac_info,
 	.ioctl				= aac_ioctl,
 #ifdef CONFIG_COMPAT
-	.compat_ioctl			= aac_compat_ioctl,
+	.compat_ioctl			= aac_ioctl,
 #endif
 	.queuecommand			= aac_queuecommand,
 	.bios_param			= aac_biosparm,
@@ -1910,11 +1853,9 @@ error_iounmap:
 }

-#if (defined(CONFIG_PM))
-static int aac_suspend(struct pci_dev *pdev, pm_message_t state)
+static int __maybe_unused aac_suspend(struct device *dev)
 {
-	struct Scsi_Host *shost = pci_get_drvdata(pdev);
+	struct Scsi_Host *shost = dev_get_drvdata(dev);
 	struct aac_dev *aac = (struct aac_dev *)shost->hostdata;

 	scsi_host_block(shost);
@@ -1923,29 +1864,14 @@ static int aac_suspend(struct pci_dev *pdev, pm_message_t state)
 	aac_release_resources(aac);

-	pci_set_drvdata(pdev, shost);
-	pci_save_state(pdev);
-	pci_disable_device(pdev);
-	pci_set_power_state(pdev, pci_choose_state(pdev, state));
-
 	return 0;
 }

-static int aac_resume(struct pci_dev *pdev)
+static int __maybe_unused aac_resume(struct device *dev)
 {
-	struct Scsi_Host *shost = pci_get_drvdata(pdev);
+	struct Scsi_Host *shost = dev_get_drvdata(dev);
 	struct aac_dev *aac = (struct aac_dev *)shost->hostdata;
-	int r;
-
-	pci_set_power_state(pdev, PCI_D0);
-	pci_enable_wake(pdev, PCI_D0, 0);
-	pci_restore_state(pdev);
-	r = pci_enable_device(pdev);
-
-	if (r)
-		goto fail_device;

-	pci_set_master(pdev);
 	if (aac_acquire_resources(aac))
 		goto fail_device;
 	/*
@@ -1960,10 +1886,8 @@ static int aac_resume(struct pci_dev *pdev)
 fail_device:
 	printk(KERN_INFO "%s%d: resume failed.\n", aac->name, aac->id);
 	scsi_host_put(shost);
-	pci_disable_device(pdev);
 	return -ENODEV;
 }
-#endif

 static void aac_shutdown(struct pci_dev *dev)
 {
@@ -2108,15 +2032,14 @@ static struct pci_error_handlers aac_pci_err_handler = {
 	.resume			= aac_pci_resume,
 };

+static SIMPLE_DEV_PM_OPS(aac_pm_ops, aac_suspend, aac_resume);
+
 static struct pci_driver aac_pci_driver = {
 	.name		= AAC_DRIVERNAME,
 	.id_table	= aac_pci_tbl,
 	.probe		= aac_probe_one,
 	.remove		= aac_remove_one,
-#if (defined(CONFIG_PM))
-	.suspend	= aac_suspend,
-	.resume		= aac_resume,
-#endif
+	.driver.pm	= &aac_pm_ops,
 	.shutdown	= aac_shutdown,
 	.err_handler	= &aac_pci_err_handler,
 };
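The aacraid conversion above is the template for a power-management rework that repeats throughout this series (aic79xx, aic7xxx, arcmsr, esas2r and others): legacy PCI .suspend/.resume hooks, which had to call pci_save_state(), pci_set_power_state() and friends by hand, become struct dev_pm_ops callbacks, and the PCI core takes over the generic state handling. The sketch below restates the pattern in isolation; the foo_* names are hypothetical, not any driver's real API.

/*
 * Minimal sketch of the conversion pattern, assuming a hypothetical
 * "foo" PCI driver. With dev_pm_ops, the PCI core performs
 * pci_save_state()/pci_set_power_state() on suspend and
 * pci_enable_device()/pci_set_master() on resume, so the driver only
 * quiesces and restarts its own hardware.
 */
static int __maybe_unused foo_suspend(struct device *dev)
{
	struct foo_hw *hw = dev_get_drvdata(dev);

	foo_quiesce(hw);	/* driver-specific teardown only */
	return 0;
}

static int __maybe_unused foo_resume(struct device *dev)
{
	struct foo_hw *hw = dev_get_drvdata(dev);

	return foo_restart(hw);	/* driver-specific re-init only */
}

/* Generates a const struct dev_pm_ops wired up for system sleep */
static SIMPLE_DEV_PM_OPS(foo_pm_ops, foo_suspend, foo_resume);

static struct pci_driver foo_pci_driver = {
	.name		= "foo",
	.probe		= foo_probe,
	.remove		= foo_remove,
	.driver.pm	= &foo_pm_ops,	/* replaces .suspend/.resume */
};

The __maybe_unused annotation is what lets the old #ifdef CONFIG_PM guards go away: when CONFIG_PM_SLEEP is off, SET_SYSTEM_SLEEP_PM_OPS drops the references and the compiler quietly discards the functions instead of warning about them.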


@@ -2876,15 +2876,15 @@ static int asc_get_eeprom_string(ushort *serialnum, uchar *cp)
 static void asc_prt_asc_board_eeprom(struct seq_file *m, struct Scsi_Host *shost)
 {
 	struct asc_board *boardp = shost_priv(shost);
-	ASC_DVC_VAR *asc_dvc_varp;
 	ASCEEP_CONFIG *ep;
 	int i;
-#ifdef CONFIG_ISA
-	int isa_dma_speed[] = { 10, 8, 7, 6, 5, 4, 3, 2 };
-#endif /* CONFIG_ISA */
 	uchar serialstr[13];
+#ifdef CONFIG_ISA
+	ASC_DVC_VAR *asc_dvc_varp;
+	int isa_dma_speed[] = { 10, 8, 7, 6, 5, 4, 3, 2 };

 	asc_dvc_varp = &boardp->dvc_var.asc_dvc_var;
+#endif /* CONFIG_ISA */
 	ep = &boardp->eep_config.asc_eep;

 	seq_printf(m,
@@ -3171,7 +3171,6 @@ static void asc_prt_adv_board_eeprom(struct seq_file *m, struct Scsi_Host *shost
 static void asc_prt_driver_conf(struct seq_file *m, struct Scsi_Host *shost)
 {
 	struct asc_board *boardp = shost_priv(shost);
-	int chip_scsi_id;

 	seq_printf(m,
 		   "\nLinux Driver Configuration and Information for AdvanSys SCSI Host %d:\n",
@@ -3197,12 +3196,6 @@ static void asc_prt_driver_conf(struct seq_file *m, struct Scsi_Host *shost)
 		   boardp->asc_n_io_port);

 	seq_printf(m, " io_port 0x%lx\n", shost->io_port);
-
-	if (ASC_NARROW_BOARD(boardp)) {
-		chip_scsi_id = boardp->dvc_cfg.asc_dvc_cfg.chip_scsi_id;
-	} else {
-		chip_scsi_id = boardp->dvc_var.adv_dvc_var.chip_scsi_id;
-	}
 }

 /*
@@ -6111,7 +6104,6 @@ static int AdvISR(ADV_DVC_VAR *asc_dvc)
 {
 	AdvPortAddr iop_base;
 	uchar int_stat;
-	ushort target_bit;
 	ADV_CARR_T *free_carrp;
 	__le32 irq_next_vpa;
 	ADV_SCSI_REQ_Q *scsiq;
@@ -6198,8 +6190,6 @@ static int AdvISR(ADV_DVC_VAR *asc_dvc)
 		asc_dvc->carr_freelist = free_carrp;
 		asc_dvc->carr_pending_cnt--;

-		target_bit = ADV_TID_TO_TIDMASK(scsiq->target_id);
-
 		/*
 		 * Clear request microcode control flag.
 		 */


@@ -152,6 +152,7 @@ static int aha1740_makecode(unchar *sense, unchar *status)
 			retval = DID_ERROR;	/* It's an Overrun */
 			/* If not overrun, assume underrun and
 			 * ignore it! */
+			break;
 		case 0x00: /* No info, assume no error, should
 			    * not occur */
 			break;


@@ -1330,10 +1330,8 @@ const struct ahd_pci_identity *ahd_find_pci_device(ahd_dev_softc_t);
 int	ahd_pci_config(struct ahd_softc *,
 		       const struct ahd_pci_identity *);
 int	ahd_pci_test_register_access(struct ahd_softc *);
-#ifdef CONFIG_PM
-void	ahd_pci_suspend(struct ahd_softc *);
-void	ahd_pci_resume(struct ahd_softc *);
-#endif
+void __maybe_unused ahd_pci_suspend(struct ahd_softc *);
+void __maybe_unused ahd_pci_resume(struct ahd_softc *);

 /************************** SCB and SCB queue management **********************/
 void	ahd_qinfifo_requeue_tail(struct ahd_softc *ahd,
@@ -1344,10 +1342,8 @@ struct ahd_softc *ahd_alloc(void *platform_arg, char *name);
 int	ahd_softc_init(struct ahd_softc *);
 void	ahd_controller_info(struct ahd_softc *ahd, char *buf);
 int	ahd_init(struct ahd_softc *ahd);
-#ifdef CONFIG_PM
-int	ahd_suspend(struct ahd_softc *ahd);
-void	ahd_resume(struct ahd_softc *ahd);
-#endif
+int __maybe_unused ahd_suspend(struct ahd_softc *ahd);
+void __maybe_unused ahd_resume(struct ahd_softc *ahd);
 int	ahd_default_config(struct ahd_softc *ahd);
 int	ahd_parse_vpddata(struct ahd_softc *ahd,
 			  struct vpd_config *vpd);


@@ -6130,6 +6130,7 @@ ahd_free(struct ahd_softc *ahd)
 		fallthrough;
 	case 2:
 		ahd_dma_tag_destroy(ahd, ahd->shared_data_dmat);
+		break;
 	case 1:
 		break;
 	case 0:
@@ -6542,8 +6543,8 @@ ahd_fini_scbdata(struct ahd_softc *ahd)
 			kfree(hscb_map);
 		}
 		ahd_dma_tag_destroy(ahd, scb_data->hscb_dmat);
-		/* FALLTHROUGH */
 	}
+		fallthrough;
 	case 4:
 	case 3:
 	case 2:
@@ -7866,11 +7867,9 @@ ahd_pause_and_flushwork(struct ahd_softc *ahd)
 	ahd->flags &= ~AHD_ALL_INTERRUPTS;
 }

-#ifdef CONFIG_PM
-int
+int __maybe_unused
 ahd_suspend(struct ahd_softc *ahd)
 {
 	ahd_pause_and_flushwork(ahd);

 	if (LIST_FIRST(&ahd->pending_scbs) != NULL) {
@@ -7881,15 +7880,13 @@ ahd_suspend(struct ahd_softc *ahd)
 	return (0);
 }

-void
+void __maybe_unused
 ahd_resume(struct ahd_softc *ahd)
 {
 	ahd_reset(ahd, /*reinit*/TRUE);
 	ahd_intr_enable(ahd, TRUE);
 	ahd_restart(ahd);
 }
-#endif

 /************************** Busy Target Table *********************************/
 /*
@@ -8911,6 +8908,7 @@ ahd_handle_scsi_status(struct ahd_softc *ahd, struct scb *scb)
 			break;
 		case SIU_PFC_ILLEGAL_REQUEST:
 			printk("Illegal request\n");
+			break;
 		default:
 			break;
 		}


@@ -2140,7 +2140,6 @@ ahd_linux_queue_abort_cmd(struct scsi_cmnd *cmd)
 	u_int  saved_scbptr;
 	u_int  active_scbptr;
 	u_int  last_phase;
-	u_int  saved_scsiid;
 	u_int  cdb_byte;
 	int    retval = SUCCESS;
 	int    was_paused;
@@ -2254,7 +2253,7 @@ ahd_linux_queue_abort_cmd(struct scsi_cmnd *cmd)
 	 * passed in command.  That command is currently active on the
 	 * bus or is in the disconnected state.
 	 */
-	saved_scsiid = ahd_inb(ahd, SAVED_SCSIID);
+	ahd_inb(ahd, SAVED_SCSIID);
 	if (last_phase != P_BUSFREE
 	 && SCB_GET_TAG(pending_scb) == active_scbptr) {


@@ -74,11 +74,10 @@ static const struct pci_device_id ahd_linux_pci_id_table[] = {
 MODULE_DEVICE_TABLE(pci, ahd_linux_pci_id_table);

-#ifdef CONFIG_PM
-static int
-ahd_linux_pci_dev_suspend(struct pci_dev *pdev, pm_message_t mesg)
+static int __maybe_unused
+ahd_linux_pci_dev_suspend(struct device *dev)
 {
-	struct ahd_softc *ahd = pci_get_drvdata(pdev);
+	struct ahd_softc *ahd = dev_get_drvdata(dev);
 	int rc;

 	if ((rc = ahd_suspend(ahd)))
@@ -86,39 +85,20 @@ ahd_linux_pci_dev_suspend(struct pci_dev *pdev, pm_message_t mesg)
 	ahd_pci_suspend(ahd);

-	pci_save_state(pdev);
-	pci_disable_device(pdev);
-
-	if (mesg.event & PM_EVENT_SLEEP)
-		pci_set_power_state(pdev, PCI_D3hot);
-
 	return rc;
 }

-static int
-ahd_linux_pci_dev_resume(struct pci_dev *pdev)
+static int __maybe_unused
+ahd_linux_pci_dev_resume(struct device *dev)
 {
-	struct ahd_softc *ahd = pci_get_drvdata(pdev);
-	int rc;
-
-	pci_set_power_state(pdev, PCI_D0);
-	pci_restore_state(pdev);
-
-	if ((rc = pci_enable_device(pdev))) {
-		dev_printk(KERN_ERR, &pdev->dev,
-			   "failed to enable device after resume (%d)\n", rc);
-		return rc;
-	}
-
-	pci_set_master(pdev);
+	struct ahd_softc *ahd = dev_get_drvdata(dev);

 	ahd_pci_resume(ahd);
-
 	ahd_resume(ahd);

-	return rc;
+	return 0;
 }
-#endif

 static void
 ahd_linux_pci_dev_remove(struct pci_dev *pdev)
@@ -224,13 +204,14 @@ ahd_linux_pci_dev_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	return (0);
 }

+static SIMPLE_DEV_PM_OPS(ahd_linux_pci_dev_pm_ops,
+			 ahd_linux_pci_dev_suspend,
+			 ahd_linux_pci_dev_resume);
+
 static struct pci_driver aic79xx_pci_driver = {
 	.name		= "aic79xx",
 	.probe		= ahd_linux_pci_dev_probe,
-#ifdef CONFIG_PM
-	.suspend	= ahd_linux_pci_dev_suspend,
-	.resume		= ahd_linux_pci_dev_resume,
-#endif
+	.driver.pm	= &ahd_linux_pci_dev_pm_ops,
 	.remove		= ahd_linux_pci_dev_remove,
 	.id_table	= ahd_linux_pci_id_table
 };


@@ -377,8 +377,7 @@ ahd_pci_config(struct ahd_softc *ahd, const struct ahd_pci_identity *entry)
 	return ahd_pci_map_int(ahd);
 }

-#ifdef CONFIG_PM
-void
+void __maybe_unused
 ahd_pci_suspend(struct ahd_softc *ahd)
 {
 	/*
@@ -394,7 +393,7 @@ ahd_pci_suspend(struct ahd_softc *ahd)

 }

-void
+void __maybe_unused
 ahd_pci_resume(struct ahd_softc *ahd)
 {
 	ahd_pci_write_config(ahd->dev_softc, DEVCONFIG,
@@ -404,7 +403,6 @@ ahd_pci_resume(struct ahd_softc *ahd)
 	ahd_pci_write_config(ahd->dev_softc, CSIZE_LATTIME,
 			     ahd->suspend_state.pci_state.csize_lattime, /*bytes*/1);
 }
-#endif

 /*
  * Perform some simple tests that should catch situations where


@@ -1134,9 +1134,7 @@ const struct ahc_pci_identity *ahc_find_pci_device(ahc_dev_softc_t);
 int	ahc_pci_config(struct ahc_softc *,
 		       const struct ahc_pci_identity *);
 int	ahc_pci_test_register_access(struct ahc_softc *);
-#ifdef CONFIG_PM
-void	ahc_pci_resume(struct ahc_softc *ahc);
-#endif
+void __maybe_unused ahc_pci_resume(struct ahc_softc *ahc);

 /*************************** EISA/VL Front End ********************************/
 struct aic7770_identity *aic7770_find_device(uint32_t);
@@ -1160,10 +1158,8 @@ int ahc_chip_init(struct ahc_softc *ahc);
 int	ahc_init(struct ahc_softc *ahc);
 void	ahc_intr_enable(struct ahc_softc *ahc, int enable);
 void	ahc_pause_and_flushwork(struct ahc_softc *ahc);
-#ifdef CONFIG_PM
-int	ahc_suspend(struct ahc_softc *ahc);
-int	ahc_resume(struct ahc_softc *ahc);
-#endif
+int __maybe_unused ahc_suspend(struct ahc_softc *ahc);
+int __maybe_unused ahc_resume(struct ahc_softc *ahc);
 void	ahc_set_unit(struct ahc_softc *, int);
 void	ahc_set_name(struct ahc_softc *, char *);
 void	ahc_free(struct ahc_softc *ahc);


@@ -4478,6 +4478,7 @@ ahc_free(struct ahc_softc *ahc)
 		fallthrough;
 	case 2:
 		ahc_dma_tag_destroy(ahc, ahc->shared_data_dmat);
+		fallthrough;
 	case 1:
 		break;
 	case 0:
@@ -5590,8 +5591,7 @@ ahc_pause_and_flushwork(struct ahc_softc *ahc)
 	ahc->flags &= ~AHC_ALL_INTERRUPTS;
 }

-#ifdef CONFIG_PM
-int
+int __maybe_unused
 ahc_suspend(struct ahc_softc *ahc)
 {
@@ -5617,7 +5617,7 @@ ahc_suspend(struct ahc_softc *ahc)
 	return (0);
 }

-int
+int __maybe_unused
 ahc_resume(struct ahc_softc *ahc)
 {
@@ -5626,7 +5626,6 @@ ahc_resume(struct ahc_softc *ahc)
 	ahc_restart(ahc);
 	return (0);
 }
-#endif
 /************************** Busy Target Table *********************************/
 /*
  * Return the untagged transaction id for a given target/channel lun.
@@ -5867,9 +5866,8 @@ ahc_search_qinfifo(struct ahc_softc *ahc, int target, char channel,
 				if ((scb->flags & SCB_ACTIVE) == 0)
 					printk("Inactive SCB in qinfifo\n");
 				ahc_done(ahc, scb);
-				/* FALLTHROUGH */
 			}
+			fallthrough;
 		case SEARCH_REMOVE:
 			break;
 		case SEARCH_COUNT:


@@ -121,47 +121,23 @@ static const struct pci_device_id ahc_linux_pci_id_table[] = {
 MODULE_DEVICE_TABLE(pci, ahc_linux_pci_id_table);

-#ifdef CONFIG_PM
-static int
-ahc_linux_pci_dev_suspend(struct pci_dev *pdev, pm_message_t mesg)
+static int __maybe_unused
+ahc_linux_pci_dev_suspend(struct device *dev)
 {
-	struct ahc_softc *ahc = pci_get_drvdata(pdev);
-	int rc;
+	struct ahc_softc *ahc = dev_get_drvdata(dev);

-	if ((rc = ahc_suspend(ahc)))
-		return rc;
-
-	pci_save_state(pdev);
-	pci_disable_device(pdev);
-
-	if (mesg.event & PM_EVENT_SLEEP)
-		pci_set_power_state(pdev, PCI_D3hot);
-
-	return rc;
+	return ahc_suspend(ahc);
 }

-static int
-ahc_linux_pci_dev_resume(struct pci_dev *pdev)
+static int __maybe_unused
+ahc_linux_pci_dev_resume(struct device *dev)
 {
-	struct ahc_softc *ahc = pci_get_drvdata(pdev);
-	int rc;
-
-	pci_set_power_state(pdev, PCI_D0);
-	pci_restore_state(pdev);
-
-	if ((rc = pci_enable_device(pdev))) {
-		dev_printk(KERN_ERR, &pdev->dev,
-			   "failed to enable device after resume (%d)\n", rc);
-		return rc;
-	}
-
-	pci_set_master(pdev);
+	struct ahc_softc *ahc = dev_get_drvdata(dev);

 	ahc_pci_resume(ahc);

 	return (ahc_resume(ahc));
 }
-#endif

 static void
 ahc_linux_pci_dev_remove(struct pci_dev *pdev)
@@ -319,14 +295,14 @@ ahc_pci_write_config(ahc_dev_softc_t pci, int reg, uint32_t value, int width)
 	}
 }

+static SIMPLE_DEV_PM_OPS(ahc_linux_pci_dev_pm_ops,
+			 ahc_linux_pci_dev_suspend,
+			 ahc_linux_pci_dev_resume);
+
 static struct pci_driver aic7xxx_pci_driver = {
 	.name		= "aic7xxx",
 	.probe		= ahc_linux_pci_dev_probe,
-#ifdef CONFIG_PM
-	.suspend	= ahc_linux_pci_dev_suspend,
-	.resume		= ahc_linux_pci_dev_resume,
-#endif
+	.driver.pm	= &ahc_linux_pci_dev_pm_ops,
 	.remove		= ahc_linux_pci_dev_remove,
 	.id_table	= ahc_linux_pci_id_table
 };


@@ -2008,8 +2008,7 @@ ahc_pci_chip_init(struct ahc_softc *ahc)
 	return (ahc_chip_init(ahc));
 }

-#ifdef CONFIG_PM
-void
+void __maybe_unused
 ahc_pci_resume(struct ahc_softc *ahc)
 {
 	/*
@@ -2040,7 +2039,6 @@ ahc_pci_resume(struct ahc_softc *ahc)
 		ahc_release_seeprom(&sd);
 	}
 }
-#endif

 static int
 ahc_aic785X_setup(struct ahc_softc *ahc)


@@ -721,6 +721,7 @@ static void set_speed_mask(u8 *speed_mask, struct asd_phy_desc *pd)
 		fallthrough;
 	case SAS_LINK_RATE_3_0_GBPS:
 		*speed_mask |= SAS_SPEED_15_DIS;
+		fallthrough;
 	default:
 	case SAS_LINK_RATE_1_5_GBPS:
 		/* nothing to do */
@@ -739,6 +740,7 @@ static void set_speed_mask(u8 *speed_mask, struct asd_phy_desc *pd)
 	switch (pd->min_sata_lrate) {
 	case SAS_LINK_RATE_3_0_GBPS:
 		*speed_mask |= SATA_SPEED_15_DIS;
+		fallthrough;
 	default:
 	case SAS_LINK_RATE_1_5_GBPS:
 		/* nothing to do */


@@ -269,7 +269,6 @@ Again:
 	case TA_I_T_NEXUS_LOSS:
 		opcode = dl->status_block[0];
 		goto Again;
-		break;
 	case TF_INV_CONN_HANDLE:
 		ts->resp = SAS_TASK_UNDELIVERED;
 		ts->stat = SAS_DEVICE_UNKNOWN;
@@ -316,6 +315,7 @@ Again:
 		break;
 	case SAS_PROTOCOL_SSP:
 		asd_unbuild_ssp_ascb(ascb);
+		break;
 	default:
 		break;
 	}
@@ -610,6 +610,7 @@ out_err_unmap:
 			break;
 		case SAS_PROTOCOL_SSP:
 			asd_unbuild_ssp_ascb(a);
+			break;
 		default:
 			break;
 		}
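A large share of the hunks in this pull are switch-statement hygiene for -Wimplicit-fallthrough: either a genuinely missing break; is added (as in the two SAS_PROTOCOL_SSP cases above), or a /* fall through */ comment becomes the fallthrough; pseudo-keyword, which the compiler can actually verify. A small self-contained sketch of the idiom; the enum values and helpers are hypothetical:

/* fallthrough is defined in include/linux/compiler_attributes.h and
 * expands to __attribute__((__fallthrough__)) where supported. */
static void handle_event(struct foo_dev *dev, enum foo_event event)
{
	switch (event) {
	case EV_RESET:
		stop_timers(dev);
		fallthrough;	/* deliberate: a reset also redoes timeout work */
	case EV_TIMEOUT:
		requeue_io(dev);
		break;		/* without this, EV_DONE handling would run too */
	case EV_DONE:
		complete_io(dev);
		break;
	}
}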


@@ -83,6 +83,7 @@ struct device_attribute;
 #define PCI_DEVICE_ID_ARECA_1886	0x188A
 #define ARCMSR_HOURS			(1000 * 60 * 60 * 4)
 #define ARCMSR_MINUTES			(1000 * 60 * 60)
+#define ARCMSR_DEFAULT_TIMEOUT		90
 /*
 **********************************************************************************
 **


@@ -99,6 +99,10 @@ static int set_date_time = 0;
 module_param(set_date_time, int, S_IRUGO);
 MODULE_PARM_DESC(set_date_time, " send date, time to iop(0 ~ 1), set_date_time=1(enable), default(=0) is disable");

+static int cmd_timeout = ARCMSR_DEFAULT_TIMEOUT;
+module_param(cmd_timeout, int, S_IRUGO);
+MODULE_PARM_DESC(cmd_timeout, " scsi cmd timeout(0 ~ 120 sec.), default is 90");
+
 #define	ARCMSR_SLEEPTIME	10
 #define	ARCMSR_RETRYCOUNT	12
@@ -113,8 +117,8 @@ static int arcmsr_bios_param(struct scsi_device *sdev,
 static int arcmsr_queue_command(struct Scsi_Host *h, struct scsi_cmnd *cmd);
 static int arcmsr_probe(struct pci_dev *pdev,
 				const struct pci_device_id *id);
-static int arcmsr_suspend(struct pci_dev *pdev, pm_message_t state);
-static int arcmsr_resume(struct pci_dev *pdev);
+static int __maybe_unused arcmsr_suspend(struct device *dev);
+static int __maybe_unused arcmsr_resume(struct device *dev);
 static void arcmsr_remove(struct pci_dev *pdev);
 static void arcmsr_shutdown(struct pci_dev *pdev);
 static void arcmsr_iop_init(struct AdapterControlBlock *acb);
@@ -140,6 +144,7 @@ static irqreturn_t arcmsr_interrupt(struct AdapterControlBlock *acb);
 static void arcmsr_free_irq(struct pci_dev *, struct AdapterControlBlock *);
 static void arcmsr_wait_firmware_ready(struct AdapterControlBlock *acb);
 static void arcmsr_set_iop_datetime(struct timer_list *);
+static int arcmsr_slave_config(struct scsi_device *sdev);
 static int arcmsr_adjust_disk_queue_depth(struct scsi_device *sdev, int queue_depth)
 {
 	if (queue_depth > ARCMSR_MAX_CMD_PERLUN)
@@ -155,6 +160,7 @@ static struct scsi_host_template arcmsr_scsi_host_template = {
 	.eh_abort_handler	= arcmsr_abort,
 	.eh_bus_reset_handler	= arcmsr_bus_reset,
 	.bios_param		= arcmsr_bios_param,
+	.slave_configure	= arcmsr_slave_config,
 	.change_queue_depth	= arcmsr_adjust_disk_queue_depth,
 	.can_queue		= ARCMSR_DEFAULT_OUTSTANDING_CMD,
 	.this_id		= ARCMSR_SCSI_INITIATOR_ID,
@@ -216,13 +222,14 @@ static struct pci_device_id arcmsr_device_id_table[] = {
 };
 MODULE_DEVICE_TABLE(pci, arcmsr_device_id_table);

+static SIMPLE_DEV_PM_OPS(arcmsr_pm_ops, arcmsr_suspend, arcmsr_resume);
+
 static struct pci_driver arcmsr_pci_driver = {
 	.name			= "arcmsr",
 	.id_table		= arcmsr_device_id_table,
 	.probe			= arcmsr_probe,
 	.remove			= arcmsr_remove,
-	.suspend		= arcmsr_suspend,
-	.resume			= arcmsr_resume,
+	.driver.pm		= &arcmsr_pm_ops,
 	.shutdown		= arcmsr_shutdown,
 };
 /*
@@ -1126,8 +1133,9 @@ static void arcmsr_free_irq(struct pci_dev *pdev,
 	pci_free_irq_vectors(pdev);
 }

-static int arcmsr_suspend(struct pci_dev *pdev, pm_message_t state)
+static int __maybe_unused arcmsr_suspend(struct device *dev)
 {
+	struct pci_dev *pdev = to_pci_dev(dev);
 	struct Scsi_Host *host = pci_get_drvdata(pdev);
 	struct AdapterControlBlock *acb =
 		(struct AdapterControlBlock *)host->hostdata;
@@ -1140,29 +1148,18 @@ static int arcmsr_suspend(struct pci_dev *pdev, pm_message_t state)
 	flush_work(&acb->arcmsr_do_message_isr_bh);
 	arcmsr_stop_adapter_bgrb(acb);
 	arcmsr_flush_adapter_cache(acb);
-	pci_set_drvdata(pdev, host);
-	pci_save_state(pdev);
-	pci_disable_device(pdev);
-	pci_set_power_state(pdev, pci_choose_state(pdev, state));
 	return 0;
 }

-static int arcmsr_resume(struct pci_dev *pdev)
+static int __maybe_unused arcmsr_resume(struct device *dev)
 {
+	struct pci_dev *pdev = to_pci_dev(dev);
 	struct Scsi_Host *host = pci_get_drvdata(pdev);
 	struct AdapterControlBlock *acb =
 		(struct AdapterControlBlock *)host->hostdata;

-	pci_set_power_state(pdev, PCI_D0);
-	pci_enable_wake(pdev, PCI_D0, 0);
-	pci_restore_state(pdev);
-	if (pci_enable_device(pdev)) {
-		pr_warn("%s: pci_enable_device error\n", __func__);
-		return -ENODEV;
-	}
 	if (arcmsr_set_dma_mask(acb))
 		goto controller_unregister;
-	pci_set_master(pdev);
 	if (arcmsr_request_irq(pdev, acb) == FAILED)
 		goto controller_stop;
 	switch (acb->adapter_type) {
@@ -1207,9 +1204,7 @@ controller_unregister:
 	if (acb->adapter_type == ACB_ADAPTER_TYPE_F)
 		arcmsr_free_io_queue(acb);
 	arcmsr_unmap_pciregion(acb);
-	pci_release_regions(pdev);
 	scsi_host_put(host);
-	pci_disable_device(pdev);
 	return -ENODEV;
 }
@@ -3156,10 +3151,12 @@ message_out:
 static struct CommandControlBlock *arcmsr_get_freeccb(struct AdapterControlBlock *acb)
 {
-	struct list_head *head = &acb->ccb_free_list;
+	struct list_head *head;
 	struct CommandControlBlock *ccb = NULL;
 	unsigned long flags;
+
 	spin_lock_irqsave(&acb->ccblist_lock, flags);
+	head = &acb->ccb_free_list;
 	if (!list_empty(head)) {
 		ccb = list_entry(head->next, struct CommandControlBlock, list);
 		list_del_init(&ccb->list);
@@ -3193,11 +3190,11 @@ static void arcmsr_handle_virtual_command(struct AdapterControlBlock *acb,
 		/* ISO, ECMA, & ANSI versions */
 		inqdata[4] = 31;
 		/* length of additional data */
-		strncpy(&inqdata[8], "Areca   ", 8);
+		memcpy(&inqdata[8], "Areca   ", 8);
 		/* Vendor Identification */
-		strncpy(&inqdata[16], "RAID controller ", 16);
+		memcpy(&inqdata[16], "RAID controller ", 16);
 		/* Product Identification */
-		strncpy(&inqdata[32], "R001", 4); /* Product Revision */
+		memcpy(&inqdata[32], "R001", 4); /* Product Revision */

 		sg = scsi_sglist(cmd);
 		buffer = kmap_atomic(sg_page(sg)) + sg->offset;
@@ -3256,6 +3253,16 @@ static int arcmsr_queue_command_lck(struct scsi_cmnd *cmd,

 static DEF_SCSI_QCMD(arcmsr_queue_command)

+static int arcmsr_slave_config(struct scsi_device *sdev)
+{
+	unsigned int	dev_timeout;
+
+	dev_timeout = sdev->request_queue->rq_timeout;
+	if ((cmd_timeout > 0) && ((cmd_timeout * HZ) > dev_timeout))
+		blk_queue_rq_timeout(sdev->request_queue, cmd_timeout * HZ);
+	return 0;
+}
+
 static void arcmsr_get_adapter_config(struct AdapterControlBlock *pACB, uint32_t *rwbuffer)
 {
 	int count;
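The new cmd_timeout parameter and arcmsr_slave_config() hook above only ever lengthen the block layer's per-request timeout: slave_configure() runs once per discovered device, and the hook bumps rq_timeout when the configured value (in seconds) exceeds what the queue already uses. Restated in isolation with comments, under that reading and with a hypothetical function name:

/* Sketch of the timeout-override pattern added above. rq_timeout is
 * kept in jiffies, so the module parameter (seconds) is scaled by HZ;
 * an already larger timeout is never shrunk. */
static int example_slave_config(struct scsi_device *sdev)
{
	unsigned int cur = sdev->request_queue->rq_timeout;	/* jiffies */

	if (cmd_timeout > 0 && cmd_timeout * HZ > cur)
		blk_queue_rq_timeout(sdev->request_queue, cmd_timeout * HZ);
	return 0;
}

Loading the driver as, for example, modprobe arcmsr cmd_timeout=120 would then raise commands to a two-minute timeout on queues configured for less.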


@@ -376,15 +376,11 @@ static int falcon_get_lock(struct Scsi_Host *instance)
 	if (IS_A_TT())
 		return 1;

-	if (stdma_is_locked_by(scsi_falcon_intr) &&
-	    instance->hostt->can_queue > 1)
+	if (stdma_is_locked_by(scsi_falcon_intr))
 		return 1;

-	if (in_interrupt())
-		return stdma_try_lock(scsi_falcon_intr, instance);
-
-	stdma_lock(scsi_falcon_intr, instance);
-	return 1;
+	/* stdma_lock() may sleep which means it can't be used here */
+	return stdma_try_lock(scsi_falcon_intr, instance);
 }

 #ifndef MODULE


@@ -164,7 +164,7 @@ DEVICE_ATTR(beiscsi_active_session_count, S_IRUGO,
 	    beiscsi_active_session_disp, NULL);
 DEVICE_ATTR(beiscsi_free_session_count, S_IRUGO,
 	    beiscsi_free_session_disp, NULL);
-struct device_attribute *beiscsi_attrs[] = {
+static struct device_attribute *beiscsi_attrs[] = {
 	&dev_attr_beiscsi_log_enable,
 	&dev_attr_beiscsi_drvr_ver,
 	&dev_attr_beiscsi_adapter_family,


@@ -1244,18 +1244,14 @@ beiscsi_adap_family_disp(struct device *dev, struct device_attribute *attr,
 	case OC_DEVICE_ID2:
 		return snprintf(buf, PAGE_SIZE,
 			"Obsolete/Unsupported BE2 Adapter Family\n");
-		break;
 	case BE_DEVICE_ID2:
 	case OC_DEVICE_ID3:
 		return snprintf(buf, PAGE_SIZE, "BE3-R Adapter Family\n");
-		break;
 	case OC_SKH_ID1:
 		return snprintf(buf, PAGE_SIZE, "Skyhawk-R Adapter Family\n");
-		break;
 	default:
 		return snprintf(buf, PAGE_SIZE,
 			"Unknown Adapter Family: 0x%x\n", dev_id);
-		break;
 	}
 }


@@ -5671,7 +5671,7 @@ bfa_fcs_lport_scn_process_rscn(struct bfa_fcs_lport_s *port,
 			bfa_fcs_lport_ms_fabric_rscn(port);
 			break;
 		}
-		/* !!!!!!!!! Fall Through !!!!!!!!!!!!! */
+		fallthrough;

 	case FC_RSCN_FORMAT_AREA:
 	case FC_RSCN_FORMAT_DOMAIN:


@@ -387,7 +387,7 @@ bfa_ioc_sm_getattr(struct bfa_ioc_s *ioc, enum ioc_event event)
 	case IOC_E_PFFAILED:
 	case IOC_E_HWERROR:
 		bfa_ioc_timer_stop(ioc);
-		/* !!! fall through !!! */
+		fallthrough;
 	case IOC_E_TIMEOUT:
 		ioc->cbfn->enable_cbfn(ioc->bfa, BFA_STATUS_IOC_FAILURE);
 		bfa_fsm_set_state(ioc, bfa_ioc_sm_fail);
@@ -437,7 +437,7 @@ bfa_ioc_sm_op(struct bfa_ioc_s *ioc, enum ioc_event event)
 	case IOC_E_PFFAILED:
 	case IOC_E_HWERROR:
 		bfa_hb_timer_stop(ioc);
-		/* !!! fall through !!! */
+		fallthrough;
 	case IOC_E_HBFAIL:
 		if (ioc->iocpf.auto_recover)
 			bfa_fsm_set_state(ioc, bfa_ioc_sm_fail_retry);
@@ -3299,6 +3299,7 @@ bfa_ablk_isr(void *cbarg, struct bfi_mbmsg_s *msg)
 	case BFI_ABLK_I2H_PORT_CONFIG:
 		/* update config port mode */
 		ablk->ioc->port_mode_cfg = rsp->port_mode;
+		break;

 	case BFI_ABLK_I2H_PF_DELETE:
 	case BFI_ABLK_I2H_PF_UPDATE:
@@ -5871,6 +5872,7 @@ bfa_dconf_sm_uninit(struct bfa_dconf_mod_s *dconf, enum bfa_dconf_event event)
 		break;
 	case BFA_DCONF_SM_EXIT:
 		bfa_fsm_send_event(&dconf->bfa->iocfc, IOCFC_E_DCONF_DONE);
+		break;
 	case BFA_DCONF_SM_IOCDISABLE:
 	case BFA_DCONF_SM_WR:
 	case BFA_DCONF_SM_FLASH_COMP:


@@ -51,7 +51,6 @@
 #include <scsi/scsi_tcq.h>
 #include <scsi/libfc.h>
 #include <scsi/libfcoe.h>
-#include <scsi/fc_encode.h>
 #include <scsi/scsi_transport.h>
 #include <scsi/scsi_transport_fc.h>
 #include <scsi/fc/fc_fip.h>


@@ -2088,7 +2088,7 @@ static int __bnx2fc_disable(struct fcoe_ctlr *ctlr)
 {
 	struct bnx2fc_interface *interface = fcoe_ctlr_priv(ctlr);

-	if (interface->enabled == true) {
+	if (interface->enabled) {
 		if (!ctlr->lp) {
 			pr_err(PFX "__bnx2fc_disable: lport not found\n");
 			return -ENODEV;
@@ -2186,7 +2186,7 @@ static int __bnx2fc_enable(struct fcoe_ctlr *ctlr)
 	struct cnic_fc_npiv_tbl *npiv_tbl;
 	struct fc_lport *lport;

-	if (interface->enabled == false) {
+	if (!interface->enabled) {
 		if (!ctlr->lp) {
 			pr_err(PFX "__bnx2fc_enable: lport not found\n");
 			return -ENODEV;
@@ -2277,7 +2277,7 @@ static int bnx2fc_ctlr_enabled(struct fcoe_ctlr_device *cdev)
 	case FCOE_CTLR_UNUSED:
 	default:
 		return -ENOTSUPP;
-	};
+	}
 }

 enum bnx2fc_create_link_state {


@@ -770,7 +770,6 @@ static void bnx2fc_process_unsol_compl(struct bnx2fc_rport *tgt, u16 wqe)
 			} else
 				printk(KERN_ERR PFX "SRR in progress\n");
 			goto ret_err_rqe;
-			break;
 		default:
 			break;
 		}


@@ -830,6 +830,7 @@ csio_wr_destroy_queues(struct csio_hw *hw, bool cmd)
 			if (flq_idx != -1)
 				csio_q_flid(hw, flq_idx) = CSIO_MAX_QID;
 		}
+		break;
 	default:
 		break;
 	}


@@ -1356,7 +1356,7 @@ void selection_timeout_missed(unsigned long ptr)
 static u8 start_scsi(struct AdapterCtlBlk* acb, struct DeviceCtlBlk* dcb,
 		struct ScsiReqBlk* srb)
 {
-	u16 s_stat2, return_code;
+	u16 __maybe_unused s_stat2, return_code;
 	u8 s_stat, scsicommand, i, identify_message;
 	u8 *ptr;
 	dprintkdbg(DBG_0, "start_scsi: (0x%p) <%02i-%i> srb=%p\n",
@@ -2397,7 +2397,6 @@ static void data_io_transfer(struct AdapterCtlBlk *acb,
 	}
 #endif				/* DC395x_LASTPIO */
 	else {		/* xfer pad */
-		u8 data = 0, data2 = 0;
 		if (srb->sg_count) {
 			srb->adapter_status = H_OVER_UNDER_RUN;
 			srb->status |= OVER_RUN;
@@ -2412,8 +2411,8 @@ static void data_io_transfer(struct AdapterCtlBlk *acb,
 			DC395x_write8(acb, TRM_S1040_SCSI_CONFIG2,
 				      CFG2_WIDEFIFO);
 			if (io_dir & DMACMD_DIR) {
-				data = DC395x_read8(acb, TRM_S1040_SCSI_FIFO);
-				data2 = DC395x_read8(acb, TRM_S1040_SCSI_FIFO);
+				DC395x_read8(acb, TRM_S1040_SCSI_FIFO);
+				DC395x_read8(acb, TRM_S1040_SCSI_FIFO);
 			} else {
 				/* Danger, Robinson: If you find KGs
 				 * scattered over the wide disk, the driver
@@ -2427,7 +2426,7 @@ static void data_io_transfer(struct AdapterCtlBlk *acb,
 			/* Danger, Robinson: If you find a collection of Ks on your disk
 			 * something broke :-( */
 			if (io_dir & DMACMD_DIR)
-				data = DC395x_read8(acb, TRM_S1040_SCSI_FIFO);
+				DC395x_read8(acb, TRM_S1040_SCSI_FIFO);
 			else
 				DC395x_write8(acb, TRM_S1040_SCSI_FIFO, 'K');
 		}
@@ -2989,7 +2988,6 @@ static void reselect(struct AdapterCtlBlk *acb)
 	struct ScsiReqBlk *srb = NULL;
 	u16 rsel_tar_lun_id;
 	u8 id, lun;
-	u8 arblostflag = 0;
 	dprintkdbg(DBG_0, "reselect: acb=%p\n", acb);

 	clear_fifo(acb, "reselect");
@@ -3011,7 +3009,6 @@ static void reselect(struct AdapterCtlBlk *acb)
 				srb->cmd, dcb->target_id,
 				dcb->target_lun, rsel_tar_lun_id,
 				DC395x_read16(acb, TRM_S1040_SCSI_STATUS));
-			arblostflag = 1;
 			/*srb->state |= SRB_DISCONNECT; */

 			srb->state = SRB_READY;
@@ -3042,7 +3039,7 @@ static void reselect(struct AdapterCtlBlk *acb)
 			"disconnection? <%02i-%i>\n",
 			dcb->target_id, dcb->target_lun);

-		if (dcb->sync_mode & EN_TAG_QUEUEING /*&& !arblostflag */) {
+		if (dcb->sync_mode & EN_TAG_QUEUEING) {
 			srb = acb->tmp_srb;
 			dcb->active_srb = srb;
 		} else {
@@ -3390,11 +3387,9 @@ static void doing_srb_done(struct AdapterCtlBlk *acb, u8 did_flag,
 		struct scsi_cmnd *p;

 		list_for_each_entry_safe(srb, tmp, &dcb->srb_going_list, list) {
-			enum dma_data_direction dir;
 			int result;

 			p = srb->cmd;
-			dir = p->sc_data_direction;
 			result = MK_RES(0, did_flag, 0, 0);
 			printk("G:%p(%02i-%i) ", p,
 			       p->device->id, (u8)p->device->lun);


@@ -408,12 +408,20 @@ static char print_alua_state(unsigned char state)
 static int alua_check_sense(struct scsi_device *sdev,
 			    struct scsi_sense_hdr *sense_hdr)
 {
+	struct alua_dh_data *h = sdev->handler_data;
+	struct alua_port_group *pg;
+
 	switch (sense_hdr->sense_key) {
 	case NOT_READY:
 		if (sense_hdr->asc == 0x04 && sense_hdr->ascq == 0x0a) {
 			/*
 			 * LUN Not Accessible - ALUA state transition
 			 */
+			rcu_read_lock();
+			pg = rcu_dereference(h->pg);
+			if (pg)
+				pg->state = SCSI_ACCESS_STATE_TRANSITIONING;
+			rcu_read_unlock();
 			alua_check(sdev, false);
 			return NEEDS_RETRY;
 		}
@@ -1092,7 +1100,7 @@ static blk_status_t alua_prep_fn(struct scsi_device *sdev, struct request *req)
 	case SCSI_ACCESS_STATE_LBA:
 		return BLK_STS_OK;
 	case SCSI_ACCESS_STATE_TRANSITIONING:
-		return BLK_STS_RESOURCE;
+		return BLK_STS_AGAIN;
 	default:
 		req->rq_flags |= RQF_QUIET;
 		return BLK_STS_IOERR;
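The alua_prep_fn change swaps BLK_STS_RESOURCE, which makes the block layer hold and requeue the request until the ALUA transition ends, for BLK_STS_AGAIN, which completes the request with an error mapping to -EAGAIN so a non-blocking submitter can fail fast and retry (for instance on another path). One place the difference is visible from user space is RWF_NOWAIT direct I/O; a hedged sketch, with a placeholder device path:

/* User-space illustration only: EAGAIN is what BLK_STS_AGAIN surfaces
 * as for a submitter that asked not to wait. */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/dev/sda"; /* placeholder */
	void *buf;
	struct iovec iov;
	int fd = open(path, O_RDONLY | O_DIRECT);

	if (fd < 0 || posix_memalign(&buf, 4096, 4096))
		return 1;
	iov.iov_base = buf;
	iov.iov_len = 4096;

	/* RWF_NOWAIT: fail fast instead of blocking inside the kernel */
	if (preadv2(fd, &iov, 1, 0, RWF_NOWAIT) < 0 && errno == EAGAIN)
		fprintf(stderr, "path busy or transitioning, retry later\n");

	free(buf);
	close(fd);
	return 0;
}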


@@ -996,8 +996,9 @@ void esas2r_adapter_tasklet(unsigned long context);
 irqreturn_t esas2r_interrupt(int irq, void *dev_id);
 irqreturn_t esas2r_msi_interrupt(int irq, void *dev_id);
 void esas2r_kickoff_timer(struct esas2r_adapter *a);
-int esas2r_suspend(struct pci_dev *pcid, pm_message_t state);
-int esas2r_resume(struct pci_dev *pcid);
+
+extern const struct dev_pm_ops esas2r_pm_ops;
+
 void esas2r_fw_event_off(struct esas2r_adapter *a);
 void esas2r_fw_event_on(struct esas2r_adapter *a);
 bool esas2r_nvram_write(struct esas2r_adapter *a, struct esas2r_request *rq,


@@ -1031,8 +1031,9 @@ static u32 esas2r_disc_get_phys_addr(struct esas2r_sg_context *sgc, u64 *addr)
 {
 	struct esas2r_adapter *a = sgc->adapter;

-	if (sgc->length > ESAS2R_DISC_BUF_LEN)
+	if (sgc->length > ESAS2R_DISC_BUF_LEN) {
 		esas2r_bugon();
+	}

 	*addr = a->uncached_phys
 		+ (u64)((u8 *)a->disc_buffer - a->uncached);


@@ -412,10 +412,11 @@ int esas2r_init_adapter(struct Scsi_Host *host, struct pci_dev *pcid,
 	esas2r_disable_chip_interrupts(a);
 	esas2r_check_adapter(a);

-	if (!esas2r_init_adapter_hw(a, true))
+	if (!esas2r_init_adapter_hw(a, true)) {
 		esas2r_log(ESAS2R_LOG_CRIT, "failed to initialize hardware!");
-	else
+	} else {
 		esas2r_debug("esas2r_init_adapter ok");
+	}

 	esas2r_claim_interrupts(a);
@@ -640,53 +641,27 @@ void esas2r_kill_adapter(int i)
 	}
 }

-int esas2r_suspend(struct pci_dev *pdev, pm_message_t state)
+static int __maybe_unused esas2r_suspend(struct device *dev)
 {
-	struct Scsi_Host *host = pci_get_drvdata(pdev);
-	u32 device_state;
+	struct Scsi_Host *host = dev_get_drvdata(dev);
 	struct esas2r_adapter *a = (struct esas2r_adapter *)host->hostdata;

-	esas2r_log_dev(ESAS2R_LOG_INFO, &(pdev->dev), "suspending adapter()");
+	esas2r_log_dev(ESAS2R_LOG_INFO, dev, "suspending adapter()");
 	if (!a)
 		return -ENODEV;

 	esas2r_adapter_power_down(a, 1);
-	device_state = pci_choose_state(pdev, state);
-	esas2r_log_dev(ESAS2R_LOG_INFO, &(pdev->dev),
-		       "pci_save_state() called");
-	pci_save_state(pdev);
-	esas2r_log_dev(ESAS2R_LOG_INFO, &(pdev->dev),
-		       "pci_disable_device() called");
-	pci_disable_device(pdev);
-	esas2r_log_dev(ESAS2R_LOG_INFO, &(pdev->dev),
-		       "pci_set_power_state() called");
-	pci_set_power_state(pdev, device_state);
-	esas2r_log_dev(ESAS2R_LOG_INFO, &(pdev->dev), "esas2r_suspend(): 0");
+	esas2r_log_dev(ESAS2R_LOG_INFO, dev, "esas2r_suspend(): 0");
 	return 0;
 }

-int esas2r_resume(struct pci_dev *pdev)
+static int __maybe_unused esas2r_resume(struct device *dev)
 {
-	struct Scsi_Host *host = pci_get_drvdata(pdev);
+	struct Scsi_Host *host = dev_get_drvdata(dev);
 	struct esas2r_adapter *a = (struct esas2r_adapter *)host->hostdata;
-	int rez;
+	int rez = 0;

-	esas2r_log_dev(ESAS2R_LOG_INFO, &(pdev->dev), "resuming adapter()");
-	esas2r_log_dev(ESAS2R_LOG_INFO, &(pdev->dev),
-		       "pci_set_power_state(PCI_D0) "
-		       "called");
-	pci_set_power_state(pdev, PCI_D0);
-	esas2r_log_dev(ESAS2R_LOG_INFO, &(pdev->dev),
-		       "pci_enable_wake(PCI_D0, 0) "
-		       "called");
-	pci_enable_wake(pdev, PCI_D0, 0);
-	esas2r_log_dev(ESAS2R_LOG_INFO, &(pdev->dev),
-		       "pci_restore_state() called");
-	pci_restore_state(pdev);
-	esas2r_log_dev(ESAS2R_LOG_INFO, &(pdev->dev),
-		       "pci_enable_device() called");
-	rez = pci_enable_device(pdev);
-	pci_set_master(pdev);
+	esas2r_log_dev(ESAS2R_LOG_INFO, dev, "resuming adapter()");

 	if (!a) {
 		rez = -ENODEV;
@@ -730,11 +705,13 @@ int esas2r_resume(struct pci_dev *pdev)
 	}

 error_exit:
-	esas2r_log_dev(ESAS2R_LOG_CRIT, &(pdev->dev), "esas2r_resume(): %d",
+	esas2r_log_dev(ESAS2R_LOG_CRIT, dev, "esas2r_resume(): %d",
 		       rez);
 	return rez;
 }

+SIMPLE_DEV_PM_OPS(esas2r_pm_ops, esas2r_suspend, esas2r_resume);
+
 bool esas2r_set_degraded_mode(struct esas2r_adapter *a, char *error_str)
 {
 	set_bit(AF_DEGRADED_MODE, &a->flags);
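esas2r now exports a single dev_pm_ops table instead of two PCI-specific entry points. For orientation, SIMPLE_DEV_PM_OPS is only a thin initializer; the definition below is quoted approximately from include/linux/pm.h of kernels in this era (verify against the tree you build):

/* Approximate definition, for illustration: */
#define SIMPLE_DEV_PM_OPS(name, suspend_fn, resume_fn) \
const struct dev_pm_ops name = { \
	SET_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn) \
}

/* With CONFIG_PM_SLEEP set, the esas2r line above is therefore roughly: */
const struct dev_pm_ops esas2r_pm_ops = {
	.suspend	= esas2r_suspend,
	.resume		= esas2r_resume,
	.freeze		= esas2r_suspend,
	.thaw		= esas2r_resume,
	.poweroff	= esas2r_suspend,
	.restore	= esas2r_resume,
};

Without CONFIG_PM_SLEEP the table is empty, which is why the callbacks carry __maybe_unused rather than #ifdef guards.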


@@ -688,8 +688,9 @@ static void esas2r_doorbell_interrupt(struct esas2r_adapter *a, u32 doorbell)
 		esas2r_local_reset_adapter(a);
 	}

-	if (!(doorbell & DRBL_FORCE_INT))
+	if (!(doorbell & DRBL_FORCE_INT)) {
 		esas2r_trace_exit();
+	}
 }

 void esas2r_force_interrupt(struct esas2r_adapter *a)
@@ -862,10 +863,11 @@ void esas2r_send_reset_ae(struct esas2r_adapter *a, bool pwr_mgt)
 	ae.byflags = 0;
 	ae.bylength = (u8)sizeof(struct atto_vda_ae_hdr);

-	if (pwr_mgt)
+	if (pwr_mgt) {
 		esas2r_hdebug("*** sending power management AE ***");
-	else
+	} else {
 		esas2r_hdebug("*** sending reset AE ***");
+	}

 	esas2r_queue_fw_event(a, fw_event_vda_ae, &ae,
 			      sizeof(union atto_vda_ae));


@@ -346,8 +346,7 @@ static struct pci_driver
 	.id_table	= esas2r_pci_table,
 	.probe		= esas2r_probe,
 	.remove		= esas2r_remove,
-	.suspend	= esas2r_suspend,
-	.resume		= esas2r_resume,
+	.driver.pm	= &esas2r_pm_ops,
 };

 static int esas2r_probe(struct pci_dev *pcid,
@@ -894,15 +893,11 @@ static void complete_task_management_request(struct esas2r_adapter *a,
 	esas2r_free_request(a, rq);
 }

-/**
+/*
  * Searches the specified queue for the specified queue for the command
  * to abort.
  *
- * @param [in] a
- * @param [in] abort_request
- * @param [in] cmd
- * t
- * @return 0 on failure, 1 if command was not found, 2 if command was found
+ * Return 0 on failure, 1 if command was not found, 2 if command was found
  */
 static int esas2r_check_active_queue(struct esas2r_adapter *a,
 				     struct esas2r_request **abort_request,


@@ -1894,7 +1894,6 @@ static int fcoe_device_notification(struct notifier_block *notifier,
 		mutex_unlock(&fcoe_config_mutex);
 		fcoe_ctlr_device_delete(fcoe_ctlr_to_ctlr_dev(ctlr));
 		goto out;
-		break;
 	case NETDEV_FEAT_CHANGE:
 		fcoe_netdev_features_change(lport, netdev);
 		break;
@@ -2024,7 +2023,7 @@ static int fcoe_ctlr_enabled(struct fcoe_ctlr_device *cdev)
 	case FCOE_CTLR_UNUSED:
 	default:
 		return -ENOTSUPP;
-	};
+	}
 }

 /**


@@ -312,7 +312,7 @@ static ssize_t store_ctlr_mode(struct device *dev,
 	default:
 		LIBFCOE_SYSFS_DBG(ctlr, "Mode change not supported.\n");
 		return -ENOTSUPP;
-	};
+	}
 }

 static FCOE_DEVICE_ATTR(ctlr, mode, S_IRUGO | S_IWUSR,
@@ -346,7 +346,7 @@ static ssize_t store_ctlr_enabled(struct device *dev,
 		break;
 	case FCOE_CTLR_UNUSED:
 		return -ENOTSUPP;
-	};
+	}

 	rc = ctlr->f->set_fcoe_ctlr_enabled(ctlr);
 	if (rc)


@@ -39,7 +39,7 @@
 #define DRV_NAME		"fnic"
 #define DRV_DESCRIPTION		"Cisco FCoE HBA Driver"
-#define DRV_VERSION		"1.6.0.47"
+#define DRV_VERSION		"1.6.0.53"
 #define PFX			DRV_NAME ": "
 #define DFX			DRV_NAME "%d: "
@@ -245,6 +245,7 @@ struct fnic {
 	u32	vlan_hw_insert:1;	/* let hw insert the tag */
 	u32	in_remove:1;		/* fnic device in removal */
 	u32	stop_rx_link_events:1;	/* stop proc. rx frames, link events */
+	u32	link_events:1;		/* set when we get any link event*/

 	struct completion *remove_wait; /* device remove thread blocks */


@@ -56,6 +56,8 @@ void fnic_handle_link(struct work_struct *work)

 	spin_lock_irqsave(&fnic->fnic_lock, flags);

+	fnic->link_events = 1;      /* less work to just set everytime*/
+
 	if (fnic->stop_rx_link_events) {
 		spin_unlock_irqrestore(&fnic->fnic_lock, flags);
 		return;
@@ -73,7 +75,7 @@ void fnic_handle_link(struct work_struct *work)
 	atomic64_set(&fnic->fnic_stats.misc_stats.current_port_speed,
 		     new_port_speed);
 	if (old_port_speed != new_port_speed)
-		shost_printk(KERN_INFO, fnic->lport->host,
+		FNIC_MAIN_DBG(KERN_INFO, fnic->lport->host,
 			     "Current vnic speed set to : %llu\n",
 			     new_port_speed);
@@ -1349,7 +1351,7 @@ void fnic_handle_fip_timer(struct fnic *fnic)
 	}

 	vlan = list_first_entry(&fnic->vlans, struct fcoe_vlan, list);
-	shost_printk(KERN_DEBUG, fnic->lport->host,
+	FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host,
 		     "fip_timer: vlan %d state %d sol_count %d\n",
 		     vlan->vid, vlan->state, vlan->sol_count);
 	switch (vlan->state) {
@@ -1372,7 +1374,7 @@ void fnic_handle_fip_timer(struct fnic *fnic)
 		 * no response on this vlan, remove from the list.
 		 * Try the next vlan
 		 */
-		shost_printk(KERN_INFO, fnic->lport->host,
+		FNIC_FCS_DBG(KERN_INFO, fnic->lport->host,
 			     "Dequeue this VLAN ID %d from list\n",
 			     vlan->vid);
 		list_del(&vlan->list);
@@ -1382,7 +1384,7 @@ void fnic_handle_fip_timer(struct fnic *fnic)
 			/* we exhausted all vlans, restart vlan disc */
 			spin_unlock_irqrestore(&fnic->vlans_lock,
 					       flags);
-			shost_printk(KERN_INFO, fnic->lport->host,
+			FNIC_FCS_DBG(KERN_INFO, fnic->lport->host,
 				     "fip_timer: vlan list empty, "
 				     "trigger vlan disc\n");
 			fnic_event_enq(fnic, FNIC_EVT_START_VLAN_DISC);


@@ -580,6 +580,8 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	fnic->lport = lp;
 	fnic->ctlr.lp = lp;

+	fnic->link_events = 0;
+
 	snprintf(fnic->name, sizeof(fnic->name) - 1, "%s%d", DRV_NAME,
 		 host->host_no);
@@ -740,6 +742,7 @@ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	for (i = 0; i < FNIC_IO_LOCKS; i++)
 		spin_lock_init(&fnic->io_req_lock[i]);

+	err = -ENOMEM;
 	fnic->io_req_pool = mempool_create_slab_pool(2, fnic_io_req_cache);
 	if (!fnic->io_req_pool)
 		goto err_out_free_resources;


@@ -921,10 +921,11 @@ static void fnic_fcpio_icmnd_cmpl_handler(struct fnic *fnic,
 	case FCPIO_SUCCESS:
 		sc->result = (DID_OK << 16) | icmnd_cmpl->scsi_status;
 		xfer_len = scsi_bufflen(sc);
-		scsi_set_resid(sc, icmnd_cmpl->residual);

-		if (icmnd_cmpl->flags & FCPIO_ICMND_CMPL_RESID_UNDER)
+		if (icmnd_cmpl->flags & FCPIO_ICMND_CMPL_RESID_UNDER) {
 			xfer_len -= icmnd_cmpl->residual;
+			scsi_set_resid(sc, icmnd_cmpl->residual);
+		}

 		if (icmnd_cmpl->scsi_status == SAM_STAT_CHECK_CONDITION)
 			atomic64_inc(&fnic_stats->misc_stats.check_condition);
@@ -1734,15 +1735,14 @@ void fnic_terminate_rport_io(struct fc_rport *rport)
 			continue;
 		}

-		cmd_rport = starget_to_rport(scsi_target(sc->device));
-		if (rport != cmd_rport) {
+		io_req = (struct fnic_io_req *)CMD_SP(sc);
+		if (!io_req) {
 			spin_unlock_irqrestore(io_lock, flags);
 			continue;
 		}

-		io_req = (struct fnic_io_req *)CMD_SP(sc);
-
-		if (!io_req || rport != cmd_rport) {
+		cmd_rport = starget_to_rport(scsi_target(sc->device));
+		if (rport != cmd_rport) {
 			spin_unlock_irqrestore(io_lock, flags);
 			continue;
 		}
@@ -2673,7 +2673,8 @@ void fnic_scsi_abort_io(struct fc_lport *lp)
 	/* Issue firmware reset for fnic, wait for reset to complete */
 retry_fw_reset:
 	spin_lock_irqsave(&fnic->fnic_lock, flags);
-	if (unlikely(fnic->state == FNIC_IN_FC_TRANS_ETH_MODE)) {
+	if (unlikely(fnic->state == FNIC_IN_FC_TRANS_ETH_MODE) &&
+		     fnic->link_events) {
 		/* fw reset is in progress, poll for its completion */
 		spin_unlock_irqrestore(&fnic->fnic_lock, flags);
 		schedule_timeout(msecs_to_jiffies(100));


@@ -529,14 +529,14 @@ static inline int generic_NCR5380_precv(struct NCR5380_hostdata *hostdata,
 		if (start == len - 128) {
 			/* Ignore End of DMA interrupt for the final buffer */
 			if (NCR5380_poll_politely(hostdata, hostdata->c400_ctl_status,
-			                          CSR_HOST_BUF_NOT_RDY, 0, HZ / 64) < 0)
+			                          CSR_HOST_BUF_NOT_RDY, 0, 0) < 0)
 				break;
 		} else {
 			if (NCR5380_poll_politely2(hostdata, hostdata->c400_ctl_status,
 			                           CSR_HOST_BUF_NOT_RDY, 0,
 			                           hostdata->c400_ctl_status,
 			                           CSR_GATED_53C80_IRQ,
-			                           CSR_GATED_53C80_IRQ, HZ / 64) < 0 ||
+			                           CSR_GATED_53C80_IRQ, 0) < 0 ||
 			    NCR5380_read(hostdata->c400_ctl_status) & CSR_HOST_BUF_NOT_RDY)
 				break;
 		}
@@ -565,7 +565,7 @@ static inline int generic_NCR5380_precv(struct NCR5380_hostdata *hostdata,
 	if (residual == 0 && NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
 	                                           BASR_END_DMA_TRANSFER,
 	                                           BASR_END_DMA_TRANSFER,
-	                                           HZ / 64) < 0)
+	                                           0) < 0)
 		scmd_printk(KERN_ERR, hostdata->connected, "%s: End of DMA timeout\n",
 		            __func__);
@@ -597,7 +597,7 @@ static inline int generic_NCR5380_psend(struct NCR5380_hostdata *hostdata,
 		                           CSR_HOST_BUF_NOT_RDY, 0,
 		                           hostdata->c400_ctl_status,
 		                           CSR_GATED_53C80_IRQ,
-		                           CSR_GATED_53C80_IRQ, HZ / 64) < 0 ||
+		                           CSR_GATED_53C80_IRQ, 0) < 0 ||
 		    NCR5380_read(hostdata->c400_ctl_status) & CSR_HOST_BUF_NOT_RDY) {
 			/* Both 128 B buffers are in use */
 			if (start >= 128)
@@ -644,13 +644,13 @@ static inline int generic_NCR5380_psend(struct NCR5380_hostdata *hostdata,
 	if (residual == 0) {
 		if (NCR5380_poll_politely(hostdata, TARGET_COMMAND_REG,
 		                          TCR_LAST_BYTE_SENT, TCR_LAST_BYTE_SENT,
-		                          HZ / 64) < 0)
+		                          0) < 0)
 			scmd_printk(KERN_ERR, hostdata->connected,
 			            "%s: Last Byte Sent timeout\n", __func__);

 		if (NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG,
 		                          BASR_END_DMA_TRANSFER, BASR_END_DMA_TRANSFER,
-		                          HZ / 64) < 0)
+		                          0) < 0)
 			scmd_printk(KERN_ERR, hostdata->connected, "%s: End of DMA timeout\n",
 			            __func__);
 	}


@@ -243,24 +243,6 @@ struct hisi_sas_slot {
 	u16 idx;
 };
 
-#define HISI_SAS_DEBUGFS_REG(x) {#x, x}
-
-struct hisi_sas_debugfs_reg_lu {
-	char *name;
-	int off;
-};
-
-struct hisi_sas_debugfs_reg {
-	const struct hisi_sas_debugfs_reg_lu *lu;
-	int count;
-	int base_off;
-	union {
-		u32 (*read_global_reg)(struct hisi_hba *hisi_hba, u32 off);
-		u32 (*read_port_reg)(struct hisi_hba *hisi_hba, int port,
-				     u32 off);
-	};
-};
-
 struct hisi_sas_iost_itct_cache {
 	u32 data[HISI_SAS_IOST_ITCT_CACHE_DW_SZ];
 };
@@ -350,15 +332,8 @@ struct hisi_sas_hw {
 			     int delay_ms, int timeout_ms);
 	void (*snapshot_prepare)(struct hisi_hba *hisi_hba);
 	void (*snapshot_restore)(struct hisi_hba *hisi_hba);
-	int (*set_bist)(struct hisi_hba *hisi_hba, bool enable);
-	void (*read_iost_itct_cache)(struct hisi_hba *hisi_hba,
-				     enum hisi_sas_debugfs_cache_type type,
-				     u32 *cache);
 	int complete_hdr_size;
 	struct scsi_host_template *sht;
-
-	const struct hisi_sas_debugfs_reg *debugfs_reg_array[DEBUGFS_REGS_NUM];
-	const struct hisi_sas_debugfs_reg *debugfs_reg_port;
 };
 
 #define HISI_SAS_MAX_DEBUGFS_DUMP (50)
@@ -673,7 +648,4 @@ extern void hisi_sas_release_tasks(struct hisi_hba *hisi_hba);
 extern u8 hisi_sas_get_prog_phy_linkrate_mask(enum sas_linkrate max);
 extern void hisi_sas_controller_reset_prepare(struct hisi_hba *hisi_hba);
 extern void hisi_sas_controller_reset_done(struct hisi_hba *hisi_hba);
-extern void hisi_sas_debugfs_init(struct hisi_hba *hisi_hba);
-extern void hisi_sas_debugfs_exit(struct hisi_hba *hisi_hba);
-extern void hisi_sas_debugfs_work_handler(struct work_struct *work);
 #endif
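The structures being moved out of this header implement a common debugfs idiom: a lookup table of (register name, offset) pairs plus a read callback, so one dump loop can print every register generically. A minimal self-contained sketch of the idiom, with invented register names purely for illustration:

    #include <stdio.h>

    struct reg_lu {
            const char *name;
            int off;
    };

    #define REG(x) { #x, x }        /* same trick as HISI_SAS_DEBUGFS_REG() */

    enum { CTRL = 0x0, STATUS = 0x4, IRQ_MASK = 0x8 };

    static const struct reg_lu regs[] = { REG(CTRL), REG(STATUS), REG(IRQ_MASK) };

    /* stand-in for a hardware read callback */
    static unsigned int read_reg(int off) { return 0xdead0000u | (unsigned)off; }

    int main(void)
    {
            for (size_t i = 0; i < sizeof(regs) / sizeof(regs[0]); i++)
                    printf("%-8s @0x%02x = 0x%08x\n",
                           regs[i].name, regs[i].off, read_reg(regs[i].off));
            return 0;
    }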

[diff not shown: file too large to display]

[diff not shown: file too large to display]

@@ -3881,8 +3881,6 @@ static unsigned char hpsa_volume_offline(struct ctlr_info *h,
 	u8 sense_key, asc, ascq;
 	int sense_len;
 	int rc, ldstat = 0;
-	u16 cmd_status;
-	u8 scsi_status;
 #define ASC_LUN_NOT_READY 0x04
 #define ASCQ_LUN_NOT_READY_FORMAT_IN_PROGRESS 0x04
 #define ASCQ_LUN_NOT_READY_INITIALIZING_CMD_REQ 0x02
@@ -3902,8 +3900,6 @@ static unsigned char hpsa_volume_offline(struct ctlr_info *h,
 	else
 		sense_len = c->err_info->SenseLen;
 	decode_sense_data(sense, sense_len, &sense_key, &asc, &ascq);
-	cmd_status = c->err_info->CommandStatus;
-	scsi_status = c->err_info->ScsiStatus;
 	cmd_free(h, c);
 
 	/* Determine the reason for not ready state */
@@ -4351,7 +4347,7 @@ static void hpsa_update_scsi_devices(struct ctlr_info *h)
 	u32 ndev_allocated = 0;
 	struct hpsa_scsi_dev_t **currentsd, *this_device, *tmpdevice;
 	int ncurrent = 0;
-	int i, n_ext_target_devs, ndevs_to_allocate;
+	int i, ndevs_to_allocate;
 	int raid_ctlr_position;
 	bool physical_device;
 	DECLARE_BITMAP(lunzerobits, MAX_EXT_TARGETS);
@@ -4416,7 +4412,6 @@ static void hpsa_update_scsi_devices(struct ctlr_info *h)
 	raid_ctlr_position = nphysicals + nlogicals;
 
 	/* adjust our table of devices */
-	n_ext_target_devs = 0;
 	for (i = 0; i < nphysicals + nlogicals + 1; i++) {
 		u8 *lunaddrbytes, is_OBDR = 0;
 		int rc = 0;
@@ -4580,7 +4575,7 @@ static int hpsa_scatter_gather(struct ctlr_info *h,
 		struct scsi_cmnd *cmd)
 {
 	struct scatterlist *sg;
-	int use_sg, i, sg_limit, chained, last_sg;
+	int use_sg, i, sg_limit, chained;
 	struct SGDescriptor *curr_sg;
 
 	BUG_ON(scsi_sg_count(cmd) > h->maxsgentries);
@@ -4602,7 +4597,6 @@ static int hpsa_scatter_gather(struct ctlr_info *h,
 	curr_sg = cp->SG;
 	chained = use_sg > h->max_cmd_sg_entries;
 	sg_limit = chained ? h->max_cmd_sg_entries - 1 : use_sg;
-	last_sg = scsi_sg_count(cmd) - 1;
 	scsi_for_each_sg(cmd, sg, sg_limit, i) {
 		hpsa_set_sg_descriptor(curr_sg, sg);
 		curr_sg++;
@@ -7442,7 +7436,6 @@ static int find_PCI_BAR_index(struct pci_dev *pdev, unsigned long pci_bar_addr)
 			dev_warn(&pdev->dev,
 				 "base address is invalid\n");
 			return -1;
-			break;
 		}
 	}
 	if (offset == pci_bar_addr - PCI_BASE_ADDRESS_0)
@@ -8636,7 +8629,7 @@ static struct ctlr_info *hpda_alloc_ctlr_info(void)
 
 static int hpsa_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
-	int dac, rc;
+	int rc;
 	struct ctlr_info *h;
 	int try_soft_reset = 0;
 	unsigned long flags;
@@ -8712,13 +8705,9 @@ reinit_after_soft_reset:
 
 	/* configure PCI DMA stuff */
 	rc = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
-	if (rc == 0) {
-		dac = 1;
-	} else {
+	if (rc != 0) {
 		rc = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
-		if (rc == 0) {
-			dac = 0;
-		} else {
+		if (rc != 0) {
 			dev_err(&pdev->dev, "no suitable DMA available\n");
 			goto clean3;	/* shost, pci, lu, aer/h */
 		}
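The hunk above keeps the standard PCI DMA-mask fallback while dropping the now-unused dac flag: try a 64-bit mask first, fall back to 32-bit, and fail the probe only if both are rejected. A hedged sketch of the idiom as it might appear in some probe path (using the combined dma_set_mask_and_coherent() helper for brevity; the error handling here is a placeholder, not hpsa's actual cleanup chain):

    /* Sketch: 64-bit-first DMA mask negotiation in a PCI probe path. */
    static int example_setup_dma(struct pci_dev *pdev)
    {
            int rc;

            rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
            if (rc) {
                    rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
                    if (rc) {
                            dev_err(&pdev->dev, "no suitable DMA mask\n");
                            return rc;      /* caller unwinds its own state */
                    }
            }
            return 0;
    }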
@@ -9092,25 +9081,27 @@ static void hpsa_remove_one(struct pci_dev *pdev)
 	hpda_free_ctlr_info(h);				/* init_one 1 */
 }
 
-static int hpsa_suspend(__attribute__((unused)) struct pci_dev *pdev,
-	__attribute__((unused)) pm_message_t state)
+static int __maybe_unused hpsa_suspend(
+	__attribute__((unused)) struct device *dev)
 {
 	return -ENOSYS;
 }
 
-static int hpsa_resume(__attribute__((unused)) struct pci_dev *pdev)
+static int __maybe_unused hpsa_resume
+	(__attribute__((unused)) struct device *dev)
 {
 	return -ENOSYS;
 }
 
+static SIMPLE_DEV_PM_OPS(hpsa_pm_ops, hpsa_suspend, hpsa_resume);
+
 static struct pci_driver hpsa_pci_driver = {
 	.name = HPSA,
 	.probe = hpsa_init_one,
 	.remove = hpsa_remove_one,
 	.id_table = hpsa_pci_device_id,	/* id_table */
 	.shutdown = hpsa_shutdown,
-	.suspend = hpsa_suspend,
-	.resume = hpsa_resume,
+	.driver.pm = &hpsa_pm_ops,
 };
 
 /* Fill in bucket_map[], given nsgs (the max number of
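This hunk is part of the series-wide power-management rework: the legacy PCI .suspend/.resume callbacks (which take a pci_dev and a pm_message_t) are replaced with generic dev_pm_ops, letting the PCI core handle config-space save/restore and power-state transitions. The isci hunks further down make the same conversion. A hedged sketch of the converted shape for some hypothetical driver (names illustrative, not hpsa's):

    static int mydrv_suspend(struct device *dev)
    {
            /* quiesce hardware only; no pci_save_state()/pci_set_power_state()
             * here, the PCI core does that for dev_pm_ops-based drivers */
            return 0;
    }

    static int mydrv_resume(struct device *dev)
    {
            /* re-init hardware; the PCI core has already restored config space */
            return 0;
    }

    static SIMPLE_DEV_PM_OPS(mydrv_pm_ops, mydrv_suspend, mydrv_resume);

    static struct pci_driver mydrv_driver = {
            .name       = "mydrv",
            .driver.pm  = &mydrv_pm_ops,
            /* .probe, .remove, .id_table elided */
    };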
@@ -9299,10 +9290,9 @@ static int hpsa_enter_performant_mode(struct ctlr_info *h, u32 trans_support)
 	} else if (trans_support & CFGTBL_Trans_io_accel2) {
 		u64 cfg_offset, cfg_base_addr_index;
 		u32 bft2_offset, cfg_base_addr;
-		int rc;
 
-		rc = hpsa_find_cfg_addrs(h->pdev, h->vaddr, &cfg_base_addr,
-					 &cfg_base_addr_index, &cfg_offset);
+		hpsa_find_cfg_addrs(h->pdev, h->vaddr, &cfg_base_addr,
+				    &cfg_base_addr_index, &cfg_offset);
 		BUILD_BUG_ON(offsetof(struct io_accel2_cmd, sg) != 64);
 		bft2[15] = h->ioaccel_maxsg + HPSA_IOACCEL2_HEADER_SZ;
 		calc_bucket_map(bft2, ARRAY_SIZE(bft2), h->ioaccel_maxsg,


@@ -758,7 +758,6 @@ static void hptiop_finish_scsi_req(struct hptiop_hba *hba, u32 tag,
 		scp->result = SAM_STAT_CHECK_CONDITION;
 		memcpy(scp->sense_buffer, &req->sg_list, SCSI_SENSE_BUFFERSIZE);
 		goto skip_resid;
-		break;
 
 	default:
 		scp->result = DRIVER_INVALID << 24 | DID_ABORT << 16;


@@ -138,6 +138,31 @@ static void ibmvfc_tgt_move_login(struct ibmvfc_target *);
 
 static const char *unknown_error = "unknown error";
 
+static int ibmvfc_check_caps(struct ibmvfc_host *vhost, unsigned long cap_flags)
+{
+	u64 host_caps = be64_to_cpu(vhost->login_buf->resp.capabilities);
+
+	return (host_caps & cap_flags) ? 1 : 0;
+}
+
+static struct ibmvfc_fcp_cmd_iu *ibmvfc_get_fcp_iu(struct ibmvfc_host *vhost,
+						   struct ibmvfc_cmd *vfc_cmd)
+{
+	if (ibmvfc_check_caps(vhost, IBMVFC_HANDLE_VF_WWPN))
+		return &vfc_cmd->v2.iu;
+	else
+		return &vfc_cmd->v1.iu;
+}
+
+static struct ibmvfc_fcp_rsp *ibmvfc_get_fcp_rsp(struct ibmvfc_host *vhost,
+						 struct ibmvfc_cmd *vfc_cmd)
+{
+	if (ibmvfc_check_caps(vhost, IBMVFC_HANDLE_VF_WWPN))
+		return &vfc_cmd->v2.rsp;
+	else
+		return &vfc_cmd->v1.rsp;
+}
+
 #ifdef CONFIG_SCSI_IBMVFC_TRACE
 /**
  * ibmvfc_trc_start - Log a start trace entry
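These helpers exist because the ibmvfc.h changes later in this series give struct ibmvfc_cmd two alternative payload layouts (v1 and v2) in a union; which one applies depends on whether the firmware advertised IBMVFC_HANDLE_VF_WWPN at NPIV login. Centralizing that choice keeps every call site layout-agnostic. A hedged, self-contained sketch of the same pattern (types deliberately simplified, not the driver's real definitions):

    #include <stdio.h>

    /* Simplified stand-ins for two on-the-wire layouts. */
    struct cmd {
            int caps;                       /* capability bits from login */
            union {
                    struct { int iu; } v1;
                    struct { long pad; int iu; } v2;   /* v2 shifts the IU */
            };
    };
    #define CAP_V2 0x40

    static int *get_iu(struct cmd *c)
    {
            return (c->caps & CAP_V2) ? &c->v2.iu : &c->v1.iu;
    }

    int main(void)
    {
            struct cmd c = { .caps = CAP_V2 };
            *get_iu(&c) = 42;               /* callers never test caps themselves */
            printf("%d\n", c.v2.iu);
            return 0;
    }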
@@ -149,6 +174,7 @@ static void ibmvfc_trc_start(struct ibmvfc_event *evt)
 	struct ibmvfc_host *vhost = evt->vhost;
 	struct ibmvfc_cmd *vfc_cmd = &evt->iu.cmd;
 	struct ibmvfc_mad_common *mad = &evt->iu.mad_common;
+	struct ibmvfc_fcp_cmd_iu *iu = ibmvfc_get_fcp_iu(vhost, vfc_cmd);
 	struct ibmvfc_trace_entry *entry;
 
 	entry = &vhost->trace[vhost->trace_index++];
@@ -159,11 +185,11 @@ static void ibmvfc_trc_start(struct ibmvfc_event *evt)
 	switch (entry->fmt) {
 	case IBMVFC_CMD_FORMAT:
-		entry->op_code = vfc_cmd->iu.cdb[0];
+		entry->op_code = iu->cdb[0];
 		entry->scsi_id = be64_to_cpu(vfc_cmd->tgt_scsi_id);
-		entry->lun = scsilun_to_int(&vfc_cmd->iu.lun);
-		entry->tmf_flags = vfc_cmd->iu.tmf_flags;
-		entry->u.start.xfer_len = be32_to_cpu(vfc_cmd->iu.xfer_len);
+		entry->lun = scsilun_to_int(&iu->lun);
+		entry->tmf_flags = iu->tmf_flags;
+		entry->u.start.xfer_len = be32_to_cpu(iu->xfer_len);
 		break;
 	case IBMVFC_MAD_FORMAT:
 		entry->op_code = be32_to_cpu(mad->opcode);
@@ -183,6 +209,8 @@ static void ibmvfc_trc_end(struct ibmvfc_event *evt)
 	struct ibmvfc_host *vhost = evt->vhost;
 	struct ibmvfc_cmd *vfc_cmd = &evt->xfer_iu->cmd;
 	struct ibmvfc_mad_common *mad = &evt->xfer_iu->mad_common;
+	struct ibmvfc_fcp_cmd_iu *iu = ibmvfc_get_fcp_iu(vhost, vfc_cmd);
+	struct ibmvfc_fcp_rsp *rsp = ibmvfc_get_fcp_rsp(vhost, vfc_cmd);
 	struct ibmvfc_trace_entry *entry = &vhost->trace[vhost->trace_index++];
 
 	entry->evt = evt;
@@ -192,15 +220,15 @@ static void ibmvfc_trc_end(struct ibmvfc_event *evt)
 	switch (entry->fmt) {
 	case IBMVFC_CMD_FORMAT:
-		entry->op_code = vfc_cmd->iu.cdb[0];
+		entry->op_code = iu->cdb[0];
 		entry->scsi_id = be64_to_cpu(vfc_cmd->tgt_scsi_id);
-		entry->lun = scsilun_to_int(&vfc_cmd->iu.lun);
-		entry->tmf_flags = vfc_cmd->iu.tmf_flags;
+		entry->lun = scsilun_to_int(&iu->lun);
+		entry->tmf_flags = iu->tmf_flags;
 		entry->u.end.status = be16_to_cpu(vfc_cmd->status);
 		entry->u.end.error = be16_to_cpu(vfc_cmd->error);
-		entry->u.end.fcp_rsp_flags = vfc_cmd->rsp.flags;
-		entry->u.end.rsp_code = vfc_cmd->rsp.data.info.rsp_code;
-		entry->u.end.scsi_status = vfc_cmd->rsp.scsi_status;
+		entry->u.end.fcp_rsp_flags = rsp->flags;
+		entry->u.end.rsp_code = rsp->data.info.rsp_code;
+		entry->u.end.scsi_status = rsp->scsi_status;
 		break;
 	case IBMVFC_MAD_FORMAT:
 		entry->op_code = be32_to_cpu(mad->opcode);
@@ -260,10 +288,10 @@ static const char *ibmvfc_get_cmd_error(u16 status, u16 error)
  * Return value:
  *	SCSI result value to return for completed command
  **/
-static int ibmvfc_get_err_result(struct ibmvfc_cmd *vfc_cmd)
+static int ibmvfc_get_err_result(struct ibmvfc_host *vhost, struct ibmvfc_cmd *vfc_cmd)
 {
 	int err;
-	struct ibmvfc_fcp_rsp *rsp = &vfc_cmd->rsp;
+	struct ibmvfc_fcp_rsp *rsp = ibmvfc_get_fcp_rsp(vhost, vfc_cmd);
 	int fc_rsp_len = be32_to_cpu(rsp->fcp_rsp_len);
 
 	if ((rsp->flags & FCP_RSP_LEN_VALID) &&
@@ -1227,7 +1255,7 @@ static void ibmvfc_set_login_info(struct ibmvfc_host *vhost)
 		login_info->flags |= cpu_to_be16(IBMVFC_CLIENT_MIGRATED);
 
 	login_info->max_cmds = cpu_to_be32(max_requests + IBMVFC_NUM_INTERNAL_REQ);
-	login_info->capabilities = cpu_to_be64(IBMVFC_CAN_MIGRATE);
+	login_info->capabilities = cpu_to_be64(IBMVFC_CAN_MIGRATE | IBMVFC_CAN_SEND_VF_WWPN);
 	login_info->async.va = cpu_to_be64(vhost->async_crq.msg_token);
 	login_info->async.len = cpu_to_be32(vhost->async_crq.size * sizeof(*vhost->async_crq.msgs));
 	strncpy(login_info->partition_name, vhost->partition_name, IBMVFC_MAX_NAME);
@@ -1378,6 +1406,7 @@ static int ibmvfc_map_sg_data(struct scsi_cmnd *scmd,
 	int sg_mapped;
 	struct srp_direct_buf *data = &vfc_cmd->ioba;
 	struct ibmvfc_host *vhost = dev_get_drvdata(dev);
+	struct ibmvfc_fcp_cmd_iu *iu = ibmvfc_get_fcp_iu(evt->vhost, vfc_cmd);
 
 	if (cls3_error)
 		vfc_cmd->flags |= cpu_to_be16(IBMVFC_CLASS_3_ERR);
@@ -1394,10 +1423,10 @@ static int ibmvfc_map_sg_data(struct scsi_cmnd *scmd,
 
 	if (scmd->sc_data_direction == DMA_TO_DEVICE) {
 		vfc_cmd->flags |= cpu_to_be16(IBMVFC_WRITE);
-		vfc_cmd->iu.add_cdb_len |= IBMVFC_WRDATA;
+		iu->add_cdb_len |= IBMVFC_WRDATA;
 	} else {
 		vfc_cmd->flags |= cpu_to_be16(IBMVFC_READ);
-		vfc_cmd->iu.add_cdb_len |= IBMVFC_RDDATA;
+		iu->add_cdb_len |= IBMVFC_RDDATA;
 	}
 
 	if (sg_mapped == 1) {
@@ -1516,7 +1545,7 @@ static void ibmvfc_log_error(struct ibmvfc_event *evt)
 {
 	struct ibmvfc_cmd *vfc_cmd = &evt->xfer_iu->cmd;
 	struct ibmvfc_host *vhost = evt->vhost;
-	struct ibmvfc_fcp_rsp *rsp = &vfc_cmd->rsp;
+	struct ibmvfc_fcp_rsp *rsp = ibmvfc_get_fcp_rsp(vhost, vfc_cmd);
 	struct scsi_cmnd *cmnd = evt->cmnd;
 	const char *err = unknown_error;
 	int index = ibmvfc_get_err_index(be16_to_cpu(vfc_cmd->status), be16_to_cpu(vfc_cmd->error));
@@ -1570,7 +1599,7 @@ static void ibmvfc_relogin(struct scsi_device *sdev)
 static void ibmvfc_scsi_done(struct ibmvfc_event *evt)
 {
 	struct ibmvfc_cmd *vfc_cmd = &evt->xfer_iu->cmd;
-	struct ibmvfc_fcp_rsp *rsp = &vfc_cmd->rsp;
+	struct ibmvfc_fcp_rsp *rsp = ibmvfc_get_fcp_rsp(evt->vhost, vfc_cmd);
 	struct scsi_cmnd *cmnd = evt->cmnd;
 	u32 rsp_len = 0;
 	u32 sense_len = be32_to_cpu(rsp->fcp_sense_len);
@@ -1584,7 +1613,7 @@ static void ibmvfc_scsi_done(struct ibmvfc_event *evt)
 		scsi_set_resid(cmnd, 0);
 
 	if (vfc_cmd->status) {
-		cmnd->result = ibmvfc_get_err_result(vfc_cmd);
+		cmnd->result = ibmvfc_get_err_result(evt->vhost, vfc_cmd);
 
 		if (rsp->flags & FCP_RSP_LEN_VALID)
 			rsp_len = be32_to_cpu(rsp->fcp_rsp_len);
@@ -1646,6 +1675,33 @@ static inline int ibmvfc_host_chkready(struct ibmvfc_host *vhost)
 	return result;
 }
 
+static struct ibmvfc_cmd *ibmvfc_init_vfc_cmd(struct ibmvfc_event *evt, struct scsi_device *sdev)
+{
+	struct fc_rport *rport = starget_to_rport(scsi_target(sdev));
+	struct ibmvfc_host *vhost = evt->vhost;
+	struct ibmvfc_cmd *vfc_cmd = &evt->iu.cmd;
+	struct ibmvfc_fcp_cmd_iu *iu = ibmvfc_get_fcp_iu(vhost, vfc_cmd);
+	struct ibmvfc_fcp_rsp *rsp = ibmvfc_get_fcp_rsp(vhost, vfc_cmd);
+	size_t offset;
+
+	memset(vfc_cmd, 0, sizeof(*vfc_cmd));
+	if (ibmvfc_check_caps(vhost, IBMVFC_HANDLE_VF_WWPN)) {
+		offset = offsetof(struct ibmvfc_cmd, v2.rsp);
+		vfc_cmd->target_wwpn = cpu_to_be64(rport->port_name);
+	} else
+		offset = offsetof(struct ibmvfc_cmd, v1.rsp);
+	vfc_cmd->resp.va = cpu_to_be64(be64_to_cpu(evt->crq.ioba) + offset);
+	vfc_cmd->resp.len = cpu_to_be32(sizeof(*rsp));
+	vfc_cmd->frame_type = cpu_to_be32(IBMVFC_SCSI_FCP_TYPE);
+	vfc_cmd->payload_len = cpu_to_be32(sizeof(*iu));
+	vfc_cmd->resp_len = cpu_to_be32(sizeof(*rsp));
+	vfc_cmd->cancel_key = cpu_to_be32((unsigned long)sdev->hostdata);
+	vfc_cmd->tgt_scsi_id = cpu_to_be64(rport->port_id);
+	int_to_scsilun(sdev->lun, &iu->lun);
+
+	return vfc_cmd;
+}
+
 /**
  * ibmvfc_queuecommand - The queuecommand function of the scsi template
  * @cmnd: struct scsi_cmnd to be executed
@@ -1660,6 +1716,7 @@ static int ibmvfc_queuecommand_lck(struct scsi_cmnd *cmnd,
 	struct ibmvfc_host *vhost = shost_priv(cmnd->device->host);
 	struct fc_rport *rport = starget_to_rport(scsi_target(cmnd->device));
 	struct ibmvfc_cmd *vfc_cmd;
+	struct ibmvfc_fcp_cmd_iu *iu;
 	struct ibmvfc_event *evt;
 	int rc;
 
@@ -1675,24 +1732,20 @@ static int ibmvfc_queuecommand_lck(struct scsi_cmnd *cmnd,
 	ibmvfc_init_event(evt, ibmvfc_scsi_done, IBMVFC_CMD_FORMAT);
 	evt->cmnd = cmnd;
 	cmnd->scsi_done = done;
-	vfc_cmd = &evt->iu.cmd;
-	memset(vfc_cmd, 0, sizeof(*vfc_cmd));
-	vfc_cmd->resp.va = cpu_to_be64(be64_to_cpu(evt->crq.ioba) + offsetof(struct ibmvfc_cmd, rsp));
-	vfc_cmd->resp.len = cpu_to_be32(sizeof(vfc_cmd->rsp));
-	vfc_cmd->frame_type = cpu_to_be32(IBMVFC_SCSI_FCP_TYPE);
-	vfc_cmd->payload_len = cpu_to_be32(sizeof(vfc_cmd->iu));
-	vfc_cmd->resp_len = cpu_to_be32(sizeof(vfc_cmd->rsp));
-	vfc_cmd->cancel_key = cpu_to_be32((unsigned long)cmnd->device->hostdata);
-	vfc_cmd->tgt_scsi_id = cpu_to_be64(rport->port_id);
-	vfc_cmd->iu.xfer_len = cpu_to_be32(scsi_bufflen(cmnd));
-	int_to_scsilun(cmnd->device->lun, &vfc_cmd->iu.lun);
-	memcpy(vfc_cmd->iu.cdb, cmnd->cmnd, cmnd->cmd_len);
+
+	vfc_cmd = ibmvfc_init_vfc_cmd(evt, cmnd->device);
+	iu = ibmvfc_get_fcp_iu(vhost, vfc_cmd);
+
+	iu->xfer_len = cpu_to_be32(scsi_bufflen(cmnd));
+	memcpy(iu->cdb, cmnd->cmnd, cmnd->cmd_len);
+
 	if (cmnd->flags & SCMD_TAGGED) {
 		vfc_cmd->task_tag = cpu_to_be64(cmnd->tag);
-		vfc_cmd->iu.pri_task_attr = IBMVFC_SIMPLE_TASK;
+		iu->pri_task_attr = IBMVFC_SIMPLE_TASK;
 	}
 
+	vfc_cmd->correlation = cpu_to_be64(evt);
+
 	if (likely(!(rc = ibmvfc_map_sg_data(cmnd, evt, vfc_cmd, vhost->dev))))
 		return ibmvfc_send_event(evt, vhost, 0);
@@ -2016,7 +2069,8 @@ static int ibmvfc_reset_device(struct scsi_device *sdev, int type, char *desc)
 	struct ibmvfc_cmd *tmf;
 	struct ibmvfc_event *evt = NULL;
 	union ibmvfc_iu rsp_iu;
-	struct ibmvfc_fcp_rsp *fc_rsp = &rsp_iu.cmd.rsp;
+	struct ibmvfc_fcp_cmd_iu *iu;
+	struct ibmvfc_fcp_rsp *fc_rsp = ibmvfc_get_fcp_rsp(vhost, &rsp_iu.cmd);
 	int rsp_rc = -EBUSY;
 	unsigned long flags;
 	int rsp_code = 0;
@@ -2025,19 +2079,13 @@ static int ibmvfc_reset_device(struct scsi_device *sdev, int type, char *desc)
 	if (vhost->state == IBMVFC_ACTIVE) {
 		evt = ibmvfc_get_event(vhost);
 		ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_CMD_FORMAT);
+		tmf = ibmvfc_init_vfc_cmd(evt, sdev);
+		iu = ibmvfc_get_fcp_iu(vhost, tmf);
 
-		tmf = &evt->iu.cmd;
-		memset(tmf, 0, sizeof(*tmf));
-		tmf->resp.va = cpu_to_be64(be64_to_cpu(evt->crq.ioba) + offsetof(struct ibmvfc_cmd, rsp));
-		tmf->resp.len = cpu_to_be32(sizeof(tmf->rsp));
-		tmf->frame_type = cpu_to_be32(IBMVFC_SCSI_FCP_TYPE);
-		tmf->payload_len = cpu_to_be32(sizeof(tmf->iu));
-		tmf->resp_len = cpu_to_be32(sizeof(tmf->rsp));
-		tmf->cancel_key = cpu_to_be32((unsigned long)sdev->hostdata);
-		tmf->tgt_scsi_id = cpu_to_be64(rport->port_id);
-		int_to_scsilun(sdev->lun, &tmf->iu.lun);
 		tmf->flags = cpu_to_be16((IBMVFC_NO_MEM_DESC | IBMVFC_TMF));
-		tmf->iu.tmf_flags = type;
+		if (ibmvfc_check_caps(vhost, IBMVFC_HANDLE_VF_WWPN))
+			tmf->target_wwpn = cpu_to_be64(rport->port_name);
+		iu->tmf_flags = type;
 		evt->sync_iu = &rsp_iu;
 
 		init_completion(&evt->comp);
@@ -2055,7 +2103,7 @@ static int ibmvfc_reset_device(struct scsi_device *sdev, int type, char *desc)
 	wait_for_completion(&evt->comp);
 
 	if (rsp_iu.cmd.status)
-		rsp_code = ibmvfc_get_err_result(&rsp_iu.cmd);
+		rsp_code = ibmvfc_get_err_result(vhost, &rsp_iu.cmd);
 
 	if (rsp_code) {
 		if (fc_rsp->flags & FCP_RSP_LEN_VALID)
@@ -2227,12 +2275,17 @@ static int ibmvfc_cancel_all(struct scsi_device *sdev, int type)
 
 	tmf = &evt->iu.tmf;
 	memset(tmf, 0, sizeof(*tmf));
-	tmf->common.version = cpu_to_be32(1);
+	if (ibmvfc_check_caps(vhost, IBMVFC_HANDLE_VF_WWPN)) {
+		tmf->common.version = cpu_to_be32(2);
+		tmf->target_wwpn = cpu_to_be64(rport->port_name);
+	} else {
+		tmf->common.version = cpu_to_be32(1);
+	}
 	tmf->common.opcode = cpu_to_be32(IBMVFC_TMF_MAD);
 	tmf->common.length = cpu_to_be16(sizeof(*tmf));
 	tmf->scsi_id = cpu_to_be64(rport->port_id);
 	int_to_scsilun(sdev->lun, &tmf->lun);
-	if (!(be64_to_cpu(vhost->login_buf->resp.capabilities) & IBMVFC_CAN_SUPPRESS_ABTS))
+	if (!ibmvfc_check_caps(vhost, IBMVFC_CAN_SUPPRESS_ABTS))
 		type &= ~IBMVFC_TMF_SUPPRESS_ABTS;
 	if (vhost->state == IBMVFC_ACTIVE)
 		tmf->flags = cpu_to_be32((type | IBMVFC_TMF_LUA_VALID));
@@ -2331,7 +2384,8 @@ static int ibmvfc_abort_task_set(struct scsi_device *sdev)
 	struct ibmvfc_cmd *tmf;
 	struct ibmvfc_event *evt, *found_evt;
 	union ibmvfc_iu rsp_iu;
-	struct ibmvfc_fcp_rsp *fc_rsp = &rsp_iu.cmd.rsp;
+	struct ibmvfc_fcp_cmd_iu *iu;
+	struct ibmvfc_fcp_rsp *fc_rsp = ibmvfc_get_fcp_rsp(vhost, &rsp_iu.cmd);
 	int rc, rsp_rc = -EBUSY;
 	unsigned long flags, timeout = IBMVFC_ABORT_TIMEOUT;
 	int rsp_code = 0;
@@ -2355,21 +2409,17 @@ static int ibmvfc_abort_task_set(struct scsi_device *sdev)
 	if (vhost->state == IBMVFC_ACTIVE) {
 		evt = ibmvfc_get_event(vhost);
 		ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_CMD_FORMAT);
+		tmf = ibmvfc_init_vfc_cmd(evt, sdev);
+		iu = ibmvfc_get_fcp_iu(vhost, tmf);
 
-		tmf = &evt->iu.cmd;
-		memset(tmf, 0, sizeof(*tmf));
-		tmf->resp.va = cpu_to_be64(be64_to_cpu(evt->crq.ioba) + offsetof(struct ibmvfc_cmd, rsp));
-		tmf->resp.len = cpu_to_be32(sizeof(tmf->rsp));
-		tmf->frame_type = cpu_to_be32(IBMVFC_SCSI_FCP_TYPE);
-		tmf->payload_len = cpu_to_be32(sizeof(tmf->iu));
-		tmf->resp_len = cpu_to_be32(sizeof(tmf->rsp));
-		tmf->cancel_key = cpu_to_be32((unsigned long)sdev->hostdata);
-		tmf->tgt_scsi_id = cpu_to_be64(rport->port_id);
-		int_to_scsilun(sdev->lun, &tmf->iu.lun);
+		if (ibmvfc_check_caps(vhost, IBMVFC_HANDLE_VF_WWPN))
+			tmf->target_wwpn = cpu_to_be64(rport->port_name);
+		iu->tmf_flags = IBMVFC_ABORT_TASK_SET;
 		tmf->flags = cpu_to_be16((IBMVFC_NO_MEM_DESC | IBMVFC_TMF));
-		tmf->iu.tmf_flags = IBMVFC_ABORT_TASK_SET;
 		evt->sync_iu = &rsp_iu;
+		tmf->correlation = cpu_to_be64(evt);
 
 		init_completion(&evt->comp);
 		rsp_rc = ibmvfc_send_event(evt, vhost, default_timeout);
 	}
@@ -2414,7 +2464,7 @@ static int ibmvfc_abort_task_set(struct scsi_device *sdev)
 	}
 
 	if (rsp_iu.cmd.status)
-		rsp_code = ibmvfc_get_err_result(&rsp_iu.cmd);
+		rsp_code = ibmvfc_get_err_result(vhost, &rsp_iu.cmd);
 
 	if (rsp_code) {
 		if (fc_rsp->flags & FCP_RSP_LEN_VALID)
@@ -3025,7 +3075,7 @@ static ssize_t ibmvfc_show_host_npiv_version(struct device *dev,
 {
 	struct Scsi_Host *shost = class_to_shost(dev);
 	struct ibmvfc_host *vhost = shost_priv(shost);
-	return snprintf(buf, PAGE_SIZE, "%d\n", vhost->login_buf->resp.version);
+	return snprintf(buf, PAGE_SIZE, "%d\n", be32_to_cpu(vhost->login_buf->resp.version));
 }
 
 static ssize_t ibmvfc_show_host_capabilities(struct device *dev,
@@ -3033,7 +3083,7 @@ static ssize_t ibmvfc_show_host_capabilities(struct device *dev,
 {
 	struct Scsi_Host *shost = class_to_shost(dev);
 	struct ibmvfc_host *vhost = shost_priv(shost);
-	return snprintf(buf, PAGE_SIZE, "%llx\n", vhost->login_buf->resp.capabilities);
+	return snprintf(buf, PAGE_SIZE, "%llx\n", be64_to_cpu(vhost->login_buf->resp.capabilities));
 }
 
 /**
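The two sysfs fixes just above matter on little-endian hosts: the hypervisor interface stores these fields big-endian (__be32/__be64), so printing them raw shows byte-swapped values everywhere except on big-endian machines. A hedged user-space illustration of the failure mode, using ntohl() as a stand-in for be32_to_cpu():

    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>

    int main(void)
    {
            uint32_t wire = htonl(2);  /* "version 2" laid out big-endian */

            printf("raw      : %u\n", wire);        /* 33554432 on little-endian */
            printf("converted: %u\n", ntohl(wire)); /* 2 on every host */
            return 0;
    }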
@@ -3445,7 +3495,12 @@ static void ibmvfc_tgt_send_prli(struct ibmvfc_target *tgt)
 	evt->tgt = tgt;
 	prli = &evt->iu.prli;
 	memset(prli, 0, sizeof(*prli));
-	prli->common.version = cpu_to_be32(1);
+	if (ibmvfc_check_caps(vhost, IBMVFC_HANDLE_VF_WWPN)) {
+		prli->common.version = cpu_to_be32(2);
+		prli->target_wwpn = cpu_to_be64(tgt->wwpn);
+	} else {
+		prli->common.version = cpu_to_be32(1);
+	}
 	prli->common.opcode = cpu_to_be32(IBMVFC_PROCESS_LOGIN);
 	prli->common.length = cpu_to_be16(sizeof(*prli));
 	prli->scsi_id = cpu_to_be64(tgt->scsi_id);
@@ -3548,7 +3603,12 @@ static void ibmvfc_tgt_send_plogi(struct ibmvfc_target *tgt)
 	evt->tgt = tgt;
 	plogi = &evt->iu.plogi;
 	memset(plogi, 0, sizeof(*plogi));
-	plogi->common.version = cpu_to_be32(1);
+	if (ibmvfc_check_caps(vhost, IBMVFC_HANDLE_VF_WWPN)) {
+		plogi->common.version = cpu_to_be32(2);
+		plogi->target_wwpn = cpu_to_be64(tgt->wwpn);
+	} else {
+		plogi->common.version = cpu_to_be32(1);
+	}
 	plogi->common.opcode = cpu_to_be32(IBMVFC_PORT_LOGIN);
 	plogi->common.length = cpu_to_be16(sizeof(*plogi));
 	plogi->scsi_id = cpu_to_be64(tgt->scsi_id);
@@ -3948,7 +4008,12 @@ static void ibmvfc_adisc_timeout(struct timer_list *t)
 	evt->tgt = tgt;
 	tmf = &evt->iu.tmf;
 	memset(tmf, 0, sizeof(*tmf));
-	tmf->common.version = cpu_to_be32(1);
+	if (ibmvfc_check_caps(vhost, IBMVFC_HANDLE_VF_WWPN)) {
+		tmf->common.version = cpu_to_be32(2);
+		tmf->target_wwpn = cpu_to_be64(tgt->wwpn);
+	} else {
+		tmf->common.version = cpu_to_be32(1);
+	}
 	tmf->common.opcode = cpu_to_be32(IBMVFC_TMF_MAD);
 	tmf->common.length = cpu_to_be16(sizeof(*tmf));
 	tmf->scsi_id = cpu_to_be64(tgt->scsi_id);
@@ -4391,7 +4456,7 @@ static void ibmvfc_npiv_login(struct ibmvfc_host *vhost)
 		ibmvfc_dbg(vhost, "Sent NPIV login\n");
 	else
 		ibmvfc_link_down(vhost, IBMVFC_LINK_DEAD);
-};
+}
 
 /**
  * ibmvfc_npiv_logout_done - Completion handler for NPIV Logout


@@ -54,6 +54,7 @@
 
 #define IBMVFC_MAD_SUCCESS		0x00
 #define IBMVFC_MAD_NOT_SUPPORTED	0xF1
+#define IBMVFC_MAD_VERSION_NOT_SUPP	0xF2
 #define IBMVFC_MAD_FAILED		0xF7
 #define IBMVFC_MAD_DRIVER_FAILED	0xEE
 #define IBMVFC_MAD_CRQ_ERROR		0xEF
@@ -168,6 +169,8 @@ struct ibmvfc_npiv_login {
 #define IBMVFC_CAN_MIGRATE		0x01
 #define IBMVFC_CAN_USE_CHANNELS		0x02
 #define IBMVFC_CAN_HANDLE_FPIN		0x04
+#define IBMVFC_CAN_USE_MAD_VERSION	0x08
+#define IBMVFC_CAN_SEND_VF_WWPN		0x10
 	__be64 node_name;
 	struct srp_direct_buf async;
 	u8 partition_name[IBMVFC_MAX_NAME];
@@ -211,7 +214,9 @@ struct ibmvfc_npiv_login_resp {
 	__be64 capabilities;
#define IBMVFC_CAN_FLUSH_ON_HALT	0x08
#define IBMVFC_CAN_SUPPRESS_ABTS	0x10
-#define IBMVFC_CAN_SUPPORT_CHANNELS	0x20
+#define IBMVFC_MAD_VERSION_CAP		0x20
+#define IBMVFC_HANDLE_VF_WWPN		0x40
+#define IBMVFC_CAN_SUPPORT_CHANNELS	0x80
 	__be32 max_cmds;
 	__be32 scsi_id_sz;
 	__be64 max_dma_len;
@@ -293,6 +298,7 @@ struct ibmvfc_port_login {
 	__be32 reserved2;
 	struct ibmvfc_service_parms service_parms;
 	struct ibmvfc_service_parms service_parms_change;
+	__be64 target_wwpn;
 	__be64 reserved3[2];
 } __packed __aligned(8);
 
@@ -344,6 +350,7 @@ struct ibmvfc_process_login {
 	__be16 status;
 	__be16 error;			/* also fc_reason */
 	__be32 reserved2;
+	__be64 target_wwpn;
 	__be64 reserved3[2];
 } __packed __aligned(8);
 
@@ -378,6 +385,8 @@ struct ibmvfc_tmf {
 	__be32 cancel_key;
 	__be32 my_cancel_key;
 	__be32 pad;
+	__be64 target_wwpn;
+	__be64 task_tag;
 	__be64 reserved[2];
 } __packed __aligned(8);
 
@@ -474,9 +483,19 @@ struct ibmvfc_cmd {
 	__be64 correlation;
 	__be64 tgt_scsi_id;
 	__be64 tag;
-	__be64 reserved3[2];
-	struct ibmvfc_fcp_cmd_iu iu;
-	struct ibmvfc_fcp_rsp rsp;
+	__be64 target_wwpn;
+	__be64 reserved3;
+	union {
+		struct {
+			struct ibmvfc_fcp_cmd_iu iu;
+			struct ibmvfc_fcp_rsp rsp;
+		} v1;
+		struct {
+			__be64 reserved4;
+			struct ibmvfc_fcp_cmd_iu iu;
+			struct ibmvfc_fcp_rsp rsp;
+		} v2;
+	};
 } __packed __aligned(8);
 
 struct ibmvfc_passthru_fc_iu {
@@ -503,6 +522,7 @@ struct ibmvfc_passthru_iu {
 	__be64 correlation;
 	__be64 scsi_id;
 	__be64 tag;
+	__be64 target_wwpn;
 	__be64 reserved2[2];
 } __packed __aligned(8);


@@ -9487,7 +9487,6 @@ static pci_ers_result_t ipr_pci_error_detected(struct pci_dev *pdev,
 	case pci_channel_io_perm_failure:
 		ipr_pci_perm_failure(pdev);
 		return PCI_ERS_RESULT_DISCONNECT;
-		break;
 	default:
 		break;
 	}


@@ -715,10 +715,6 @@ static int isci_suspend(struct device *dev)
 		isci_host_deinit(ihost);
 	}
 
-	pci_save_state(pdev);
-	pci_disable_device(pdev);
-	pci_set_power_state(pdev, PCI_D3hot);
-
 	return 0;
 }
 
@@ -726,19 +722,7 @@ static int isci_resume(struct device *dev)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
 	struct isci_host *ihost;
-	int rc, i;
-
-	pci_set_power_state(pdev, PCI_D0);
-	pci_restore_state(pdev);
-
-	rc = pcim_enable_device(pdev);
-	if (rc) {
-		dev_err(&pdev->dev,
-			"enabling device failure after resume(%d)\n", rc);
-		return rc;
-	}
-
-	pci_set_master(pdev);
+	int i;
 
 	for_each_isci_host(i, ihost, pdev) {
 		sas_prep_resume_ha(&ihost->sas_ha);


@@ -753,7 +753,6 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code)
 		default:
 			phy_event_warn(iphy, state, event_code);
 			return SCI_FAILURE;
-			break;
 		}
 		return SCI_SUCCESS;
 	case SCI_PHY_SUB_AWAIT_IAF_UF:
@@ -958,7 +957,6 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code)
 		default:
 			phy_event_warn(iphy, state, event_code);
 			return SCI_FAILURE_INVALID_STATE;
-			break;
 		}
 		return SCI_SUCCESS;
 	default:


@@ -180,7 +180,7 @@ static void iscsi_sw_tcp_state_change(struct sock *sk)
 }
 
 /**
- * iscsi_write_space - Called when more output buffer space is available
+ * iscsi_sw_tcp_write_space - Called when more output buffer space is available
  * @sk: socket space is available for
  **/
 static void iscsi_sw_tcp_write_space(struct sock *sk)
@@ -353,7 +353,7 @@ error:
 }
 
 /**
- * iscsi_tcp_xmit_qlen - return the number of bytes queued for xmit
+ * iscsi_sw_tcp_xmit_qlen - return the number of bytes queued for xmit
  * @conn: iscsi connection
  */
 static inline int iscsi_sw_tcp_xmit_qlen(struct iscsi_conn *conn)


@@ -15,7 +15,7 @@
 #include <scsi/fc/fc_ns.h>
 #include <scsi/fc/fc_els.h>
 #include <scsi/libfc.h>
-#include <scsi/fc_encode.h>
+#include "fc_encode.h"
 #include "fc_libfc.h"
 
 /**


@@ -9,6 +9,7 @@
 #define _FC_ENCODE_H_
 #include <asm/unaligned.h>
 #include <linux/utsname.h>
+#include <scsi/fc/fc_ms.h>
 
 /*
  * F_CTL values for simple requests and responses.
@@ -39,35 +40,6 @@ struct fc_ct_req {
 	} payload;
 };
 
-static inline void __fc_fill_fc_hdr(struct fc_frame_header *fh,
-				    enum fc_rctl r_ctl,
-				    u32 did, u32 sid, enum fc_fh_type type,
-				    u32 f_ctl, u32 parm_offset)
-{
-	WARN_ON(r_ctl == 0);
-	fh->fh_r_ctl = r_ctl;
-	hton24(fh->fh_d_id, did);
-	hton24(fh->fh_s_id, sid);
-	fh->fh_type = type;
-	hton24(fh->fh_f_ctl, f_ctl);
-	fh->fh_cs_ctl = 0;
-	fh->fh_df_ctl = 0;
-	fh->fh_parm_offset = htonl(parm_offset);
-}
-
-/**
- * fill FC header fields in specified fc_frame
- */
-static inline void fc_fill_fc_hdr(struct fc_frame *fp, enum fc_rctl r_ctl,
-				  u32 did, u32 sid, enum fc_fh_type type,
-				  u32 f_ctl, u32 parm_offset)
-{
-	struct fc_frame_header *fh;
-
-	fh = fc_frame_header_get(fp);
-	__fc_fill_fc_hdr(fh, r_ctl, did, sid, type, f_ctl, parm_offset);
-}
-
 /**
  * fc_adisc_fill() - Fill in adisc request frame
  * @lport: local port.
@@ -191,6 +163,14 @@ static inline int fc_ct_ns_fill(struct fc_lport *lport,
 	return 0;
 }
 
+static inline void fc_ct_ms_fill_attr(struct fc_fdmi_attr_entry *entry,
+				      const char *in, size_t len)
+{
+	int copied = strscpy(entry->value, in, len);
+
+	if (copied > 0)
+		memset(entry->value, copied, len - copied);
+}
+
 /**
  * fc_ct_ms_fill() - Fill in a mgmt service request frame
  * @lport: local port.
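The new fc_ct_ms_fill_attr() helper replaces the open-coded strncpy() calls in the hunks below. Unlike strncpy(), strscpy() always NUL-terminates and returns the number of bytes copied, or -E2BIG on truncation, which the helper uses to decide how to pad the rest of the fixed-width FDMI attribute field. A hedged user-space illustration of strscpy()'s return-value contract (a stand-in, since the real function lives in the kernel's <linux/string.h>):

    #include <stdio.h>
    #include <string.h>

    /* User-space stand-in mimicking the kernel's strscpy() contract. */
    static long strscpy_like(char *dst, const char *src, size_t size)
    {
            size_t len = strnlen(src, size);

            if (len == size) {              /* would not fit */
                    memcpy(dst, src, size - 1);
                    dst[size - 1] = '\0';   /* still NUL-terminated */
                    return -7;              /* kernel returns -E2BIG */
            }
            memcpy(dst, src, len + 1);
            return (long)len;               /* bytes copied, NUL excluded */
    }

    int main(void)
    {
            char buf[8];

            printf("%ld\n", strscpy_like(buf, "short", sizeof(buf)));          /* 5 */
            printf("%ld\n", strscpy_like(buf, "far too long", sizeof(buf)));   /* -7 */
            return 0;
    }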
@@ -260,7 +240,7 @@ static inline int fc_ct_ms_fill(struct fc_lport *lport,
 		put_unaligned_be16(FC_FDMI_HBA_ATTR_MANUFACTURER,
 				   &entry->type);
 		put_unaligned_be16(len, &entry->len);
-		strncpy((char *)&entry->value,
+		fc_ct_ms_fill_attr(entry,
 			fc_host_manufacturer(lport->host),
 			FC_FDMI_HBA_ATTR_MANUFACTURER_LEN);
 
@@ -272,7 +252,7 @@ static inline int fc_ct_ms_fill(struct fc_lport *lport,
 		put_unaligned_be16(FC_FDMI_HBA_ATTR_SERIALNUMBER,
 				   &entry->type);
 		put_unaligned_be16(len, &entry->len);
-		strncpy((char *)&entry->value,
+		fc_ct_ms_fill_attr(entry,
 			fc_host_serial_number(lport->host),
 			FC_FDMI_HBA_ATTR_SERIALNUMBER_LEN);
 
@@ -284,7 +264,7 @@ static inline int fc_ct_ms_fill(struct fc_lport *lport,
 		put_unaligned_be16(FC_FDMI_HBA_ATTR_MODEL,
 				   &entry->type);
 		put_unaligned_be16(len, &entry->len);
-		strncpy((char *)&entry->value,
+		fc_ct_ms_fill_attr(entry,
 			fc_host_model(lport->host),
 			FC_FDMI_HBA_ATTR_MODEL_LEN);
 
@@ -296,7 +276,7 @@ static inline int fc_ct_ms_fill(struct fc_lport *lport,
 		put_unaligned_be16(FC_FDMI_HBA_ATTR_MODELDESCRIPTION,
 				   &entry->type);
 		put_unaligned_be16(len, &entry->len);
-		strncpy((char *)&entry->value,
+		fc_ct_ms_fill_attr(entry,
 			fc_host_model_description(lport->host),
 			FC_FDMI_HBA_ATTR_MODELDESCR_LEN);
 
@@ -308,7 +288,7 @@ static inline int fc_ct_ms_fill(struct fc_lport *lport,
 		put_unaligned_be16(FC_FDMI_HBA_ATTR_HARDWAREVERSION,
 				   &entry->type);
 		put_unaligned_be16(len, &entry->len);
-		strncpy((char *)&entry->value,
+		fc_ct_ms_fill_attr(entry,
 			fc_host_hardware_version(lport->host),
 			FC_FDMI_HBA_ATTR_HARDWAREVERSION_LEN);
 
@@ -320,7 +300,7 @@ static inline int fc_ct_ms_fill(struct fc_lport *lport,
 		put_unaligned_be16(FC_FDMI_HBA_ATTR_DRIVERVERSION,
 				   &entry->type);
 		put_unaligned_be16(len, &entry->len);
-		strncpy((char *)&entry->value,
+		fc_ct_ms_fill_attr(entry,
 			fc_host_driver_version(lport->host),
 			FC_FDMI_HBA_ATTR_DRIVERVERSION_LEN);
 
@@ -332,7 +312,7 @@ static inline int fc_ct_ms_fill(struct fc_lport *lport,
 		put_unaligned_be16(FC_FDMI_HBA_ATTR_OPTIONROMVERSION,
 				   &entry->type);
 		put_unaligned_be16(len, &entry->len);
-		strncpy((char *)&entry->value,
+		fc_ct_ms_fill_attr(entry,
 			fc_host_optionrom_version(lport->host),
 			FC_FDMI_HBA_ATTR_OPTIONROMVERSION_LEN);
 
@@ -344,7 +324,7 @@ static inline int fc_ct_ms_fill(struct fc_lport *lport,
 		put_unaligned_be16(FC_FDMI_HBA_ATTR_FIRMWAREVERSION,
 				   &entry->type);
 		put_unaligned_be16(len, &entry->len);
-		strncpy((char *)&entry->value,
+		fc_ct_ms_fill_attr(entry,
 			fc_host_firmware_version(lport->host),
 			FC_FDMI_HBA_ATTR_FIRMWAREVERSION_LEN);
 
@@ -439,7 +419,7 @@ static inline int fc_ct_ms_fill(struct fc_lport *lport,
 				   &entry->type);
 		put_unaligned_be16(len, &entry->len);
 		/* Use the sysfs device name */
-		strncpy((char *)&entry->value,
+		fc_ct_ms_fill_attr(entry,
 			dev_name(&lport->host->shost_gendev),
 			strnlen(dev_name(&lport->host->shost_gendev),
 				FC_FDMI_PORT_ATTR_HOSTNAME_LEN));
@@ -453,12 +433,12 @@ static inline int fc_ct_ms_fill(struct fc_lport *lport,
 				   &entry->type);
 		put_unaligned_be16(len, &entry->len);
 		if (strlen(fc_host_system_hostname(lport->host)))
-			strncpy((char *)&entry->value,
+			fc_ct_ms_fill_attr(entry,
 				fc_host_system_hostname(lport->host),
 				strnlen(fc_host_system_hostname(lport->host),
 					FC_FDMI_PORT_ATTR_HOSTNAME_LEN));
 		else
-			strncpy((char *)&entry->value,
+			fc_ct_ms_fill_attr(entry,
 				init_utsname()->nodename,
 				FC_FDMI_PORT_ATTR_HOSTNAME_LEN);
 		break;


@@ -20,7 +20,6 @@
 #include <scsi/fc/fc_fc2.h>
 
 #include <scsi/libfc.h>
-#include <scsi/fc_encode.h>
 
 #include "fc_libfc.h"
 
@@ -272,7 +271,7 @@ static void fc_exch_setup_hdr(struct fc_exch *ep, struct fc_frame *fp,
 	if (f_ctl & FC_FC_END_SEQ) {
 		fr_eof(fp) = FC_EOF_T;
-		if (fc_sof_needs_ack(ep->class))
+		if (fc_sof_needs_ack((enum fc_sof)ep->class))
 			fr_eof(fp) = FC_EOF_N;
 		/*
 		 * From F_CTL.


@@ -26,8 +26,8 @@
 #include <scsi/fc/fc_fc2.h>
 
 #include <scsi/libfc.h>
-#include <scsi/fc_encode.h>
 
+#include "fc_encode.h"
 #include "fc_libfc.h"
 
 static struct kmem_cache *scsi_pkt_cachep;


@@ -12,8 +12,8 @@
 #include <linux/module.h>
 
 #include <scsi/libfc.h>
-#include <scsi/fc_encode.h>
 
+#include "fc_encode.h"
 #include "fc_libfc.h"
 
 MODULE_AUTHOR("Open-FCoE.org");


@@ -84,9 +84,9 @@
 #include <scsi/fc/fc_gs.h>
 
 #include <scsi/libfc.h>
-#include <scsi/fc_encode.h>
 #include <linux/scatterlist.h>
 
+#include "fc_encode.h"
 #include "fc_libfc.h"
 
 /* Fabric IDs to use for point-to-point mode, chosen on whims. */


@@ -58,8 +58,8 @@
 #include <asm/unaligned.h>
 
 #include <scsi/libfc.h>
-#include <scsi/fc_encode.h>
 
+#include "fc_encode.h"
 #include "fc_libfc.h"
 
 static struct workqueue_struct *rport_event_queue;


@@ -780,7 +780,7 @@ int iscsi_conn_send_pdu(struct iscsi_cls_conn *cls_conn, struct iscsi_hdr *hdr,
 EXPORT_SYMBOL_GPL(iscsi_conn_send_pdu);
 
 /**
- * iscsi_cmd_rsp - SCSI Command Response processing
+ * iscsi_scsi_cmd_rsp - SCSI Command Response processing
  * @conn: iscsi connection
  * @hdr: iscsi header
  * @task: scsi command task


@@ -664,11 +664,18 @@ struct lpfc_hba {
 	void (*lpfc_scsi_prep_cmnd)
 		(struct lpfc_vport *, struct lpfc_io_buf *,
 		 struct lpfc_nodelist *);
+	int (*lpfc_scsi_prep_cmnd_buf)
+		(struct lpfc_vport *vport,
+		 struct lpfc_io_buf *lpfc_cmd,
+		 uint8_t tmo);
 
 	/* IOCB interface function jump table entries */
 	int (*__lpfc_sli_issue_iocb)
 		(struct lpfc_hba *, uint32_t,
 		 struct lpfc_iocbq *, uint32_t);
+	int (*__lpfc_sli_issue_fcp_io)
+		(struct lpfc_hba *phba, uint32_t ring_number,
+		 struct lpfc_iocbq *piocb, uint32_t flag);
 	void (*__lpfc_sli_release_iocbq)(struct lpfc_hba *,
 			 struct lpfc_iocbq *);
 	int (*lpfc_hba_down_post)(struct lpfc_hba *phba);
@@ -744,7 +751,8 @@ struct lpfc_hba {
#define LS_NPIV_FAB_SUPPORTED 0x2	/* Fabric supports NPIV */
#define LS_IGNORE_ERATT       0x4	/* intr handler should ignore ERATT */
#define LS_MDS_LINK_DOWN      0x8	/* MDS Diagnostics Link Down */
 #define LS_MDS_LOOPBACK      0x10	/* MDS Diagnostics Link Up (Loopback) */
+#define LS_CT_VEN_RPA        0x20	/* Vendor RPA sent to switch */
 
 	uint32_t hba_flag;	/* hba generic flags */
#define HBA_ERATT_HANDLED	0x1 /* This flag is set when eratt handled */
@@ -753,7 +761,7 @@ struct lpfc_hba {
#define HBA_SP_QUEUE_EVT	0x8 /* Slow-path qevt posted to worker thread*/
#define HBA_POST_RECEIVE_BUFFER 0x10 /* Rcv buffers need to be posted */
#define HBA_PERSISTENT_TOPO	0x20 /* Persistent topology support in hba */
-#define ELS_XRI_ABORT_EVENT	0x40
+#define ELS_XRI_ABORT_EVENT	0x40 /* ELS_XRI abort event was queued */
#define ASYNC_EVENT		0x80
#define LINK_DISABLED		0x100 /* Link disabled by user */
#define FCF_TS_INPROG           0x200 /* FCF table scan in progress */
@@ -922,6 +930,7 @@ struct lpfc_hba {
#define LPFC_ENABLE_NVME 2
#define LPFC_ENABLE_BOTH 3
 	uint32_t cfg_enable_pbde;
+	uint32_t cfg_enable_mi;
 	struct nvmet_fc_target_port *targetport;
 	lpfc_vpd_t vpd;		/* vital product data */
 
@@ -1129,8 +1138,6 @@ struct lpfc_hba {
 	uint8_t hb_outstanding;
 	struct timer_list rrq_tmr;
 	enum hba_temp_state over_temp_state;
-	/* ndlp reference management */
-	spinlock_t ndlp_lock;
 	/*
 	 * Following bit will be set for all buffer tags which are not
 	 * associated with any HBQ.


@@ -57,10 +57,6 @@
 #define LPFC_MIN_DEVLOSS_TMO	1
 #define LPFC_MAX_DEVLOSS_TMO	255
 
-#define LPFC_DEF_MRQ_POST	512
-#define LPFC_MIN_MRQ_POST	512
-#define LPFC_MAX_MRQ_POST	2048
-
 /*
  * Write key size should be multiple of 4. If write key is changed
  * make sure that library write key is also changed.
@@ -372,11 +368,11 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
 
 	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
 		nrport = NULL;
-		spin_lock(&vport->phba->hbalock);
+		spin_lock(&ndlp->lock);
 		rport = lpfc_ndlp_get_nrport(ndlp);
 		if (rport)
 			nrport = rport->remoteport;
-		spin_unlock(&vport->phba->hbalock);
+		spin_unlock(&ndlp->lock);
 		if (!nrport)
 			continue;
 
@@ -1505,6 +1501,7 @@ lpfc_sli4_pdev_status_reg_wait(struct lpfc_hba *phba)
 /**
  * lpfc_sli4_pdev_reg_request - Request physical dev to perform a register acc
  * @phba: lpfc_hba pointer.
+ * @opcode: The sli4 config command opcode.
  *
  * Description:
  * Request SLI4 interface type-2 device to perform a physical register set
@@ -2288,7 +2285,7 @@ lpfc_enable_bbcr_set(struct lpfc_hba *phba, uint val)
 	return -EINVAL;
 }
 
-/**
+/*
  * lpfc_param_show - Return a cfg attribute value in decimal
  *
  * Description:
@@ -2314,7 +2311,7 @@ lpfc_##attr##_show(struct device *dev, struct device_attribute *attr, \
 			phba->cfg_##attr);\
 }
 
-/**
+/*
  * lpfc_param_hex_show - Return a cfg attribute value in hex
 *
 * Description:
@@ -2342,7 +2339,7 @@ lpfc_##attr##_show(struct device *dev, struct device_attribute *attr, \
 			phba->cfg_##attr);\
 }
 
-/**
+/*
 * lpfc_param_init - Initializes a cfg attribute
 *
 * Description:
@@ -2376,7 +2373,7 @@ lpfc_##attr##_init(struct lpfc_hba *phba, uint val) \
 	return -EINVAL;\
 }
 
-/**
+/*
 * lpfc_param_set - Set a cfg attribute value
 *
 * Description:
@@ -2413,7 +2410,7 @@ lpfc_##attr##_set(struct lpfc_hba *phba, uint val) \
 	return -EINVAL;\
 }
 
-/**
+/*
 * lpfc_param_store - Set a vport attribute value
 *
 * Description:
@@ -2453,7 +2450,7 @@ lpfc_##attr##_store(struct device *dev, struct device_attribute *attr, \
 	return -EINVAL;\
 }
 
-/**
+/*
 * lpfc_vport_param_show - Return decimal formatted cfg attribute value
 *
 * Description:
@@ -2477,7 +2474,7 @@ lpfc_##attr##_show(struct device *dev, struct device_attribute *attr, \
 	return scnprintf(buf, PAGE_SIZE, "%d\n", vport->cfg_##attr);\
 }
 
-/**
+/*
 * lpfc_vport_param_hex_show - Return hex formatted attribute value
 *
 * Description:
@@ -2502,7 +2499,7 @@ lpfc_##attr##_show(struct device *dev, struct device_attribute *attr, \
 	return scnprintf(buf, PAGE_SIZE, "%#x\n", vport->cfg_##attr);\
 }
 
-/**
+/*
 * lpfc_vport_param_init - Initialize a vport cfg attribute
 *
 * Description:
@@ -2535,7 +2532,7 @@ lpfc_##attr##_init(struct lpfc_vport *vport, uint val) \
 	return -EINVAL;\
 }
 
-/**
+/*
 * lpfc_vport_param_set - Set a vport cfg attribute
 *
 * Description:
@@ -2571,7 +2568,7 @@ lpfc_##attr##_set(struct lpfc_vport *vport, uint val) \
 	return -EINVAL;\
 }
 
-/**
+/*
 * lpfc_vport_param_store - Set a vport attribute
 *
 * Description:
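The run of /** to /* demotions above belongs to the formatting cleanup called out in the pull message: a comment opening with /** is claimed by scripts/kernel-doc and must document one named function with @param lines and an optional Return: section, which these macro-template comments cannot satisfy, so they become ordinary comments. A hedged example of a well-formed kernel-doc block for a hypothetical function (names invented for illustration):

    /**
     * frob_widget - apply a frobnication level to a widget
     * @w: widget to update
     * @level: frobnication level, 0 disables
     *
     * Return: 0 on success or a negative errno.
     */
    int frob_widget(struct widget *w, int level);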
@ -2774,7 +2771,7 @@ lpfc_soft_wwpn_show(struct device *dev, struct device_attribute *attr,
/** /**
* lpfc_soft_wwpn_store - Set the ww port name of the adapter * lpfc_soft_wwpn_store - Set the ww port name of the adapter
* @dev class device that is converted into a Scsi_host. * @dev: class device that is converted into a Scsi_host.
* @attr: device attribute, not used. * @attr: device attribute, not used.
* @buf: contains the wwpn in hexadecimal. * @buf: contains the wwpn in hexadecimal.
* @count: number of wwpn bytes in buf * @count: number of wwpn bytes in buf
@ -2871,7 +2868,8 @@ lpfc_soft_wwnn_show(struct device *dev, struct device_attribute *attr,
/** /**
* lpfc_soft_wwnn_store - sets the ww node name of the adapter * lpfc_soft_wwnn_store - sets the ww node name of the adapter
* @cdev: class device that is converted into a Scsi_host. * @dev: class device that is converted into a Scsi_host.
* @attr: device attribute, not used.
* @buf: contains the ww node name in hexadecimal. * @buf: contains the ww node name in hexadecimal.
* @count: number of wwnn bytes in buf. * @count: number of wwnn bytes in buf.
* *
@ -3207,9 +3205,11 @@ static DEVICE_ATTR(lpfc_xlane_lun_status, S_IRUGO,
* lpfc_oas_lun_state_set - enable or disable a lun for Optimized Access Storage * lpfc_oas_lun_state_set - enable or disable a lun for Optimized Access Storage
* (OAS) operations. * (OAS) operations.
* @phba: lpfc_hba pointer. * @phba: lpfc_hba pointer.
* @ndlp: pointer to fcp target node. * @vpt_wwpn: wwpn of the vport associated with the returned lun
* @tgt_wwpn: wwpn of the target associated with the returned lun
* @lun: the fc lun for setting oas state. * @lun: the fc lun for setting oas state.
* @oas_state: the oas state to be set to the lun. * @oas_state: the oas state to be set to the lun.
* @pri: priority
* *
* Returns: * Returns:
* SUCCESS : 0 * SUCCESS : 0
@ -3247,6 +3247,7 @@ lpfc_oas_lun_state_set(struct lpfc_hba *phba, uint8_t vpt_wwpn[],
* @vpt_wwpn: wwpn of the vport associated with the returned lun * @vpt_wwpn: wwpn of the vport associated with the returned lun
* @tgt_wwpn: wwpn of the target associated with the returned lun * @tgt_wwpn: wwpn of the target associated with the returned lun
* @lun_status: status of the lun returned lun * @lun_status: status of the lun returned lun
* @lun_pri: priority of the lun returned lun
* *
* Returns the first or next lun enabled for OAS operations for the vport/target * Returns the first or next lun enabled for OAS operations for the vport/target
* specified. If a lun is found, its vport wwpn, target wwpn and status is * specified. If a lun is found, its vport wwpn, target wwpn and status is
@ -3285,6 +3286,7 @@ lpfc_oas_lun_get_next(struct lpfc_hba *phba, uint8_t vpt_wwpn[],
* @tgt_wwpn: target wwpn by reference. * @tgt_wwpn: target wwpn by reference.
* @lun: the fc lun for setting oas state. * @lun: the fc lun for setting oas state.
* @oas_state: the oas state to be set to the oas_lun. * @oas_state: the oas state to be set to the oas_lun.
* @pri: priority
* *
* This routine enables (OAS_LUN_ENABLE) or disables (OAS_LUN_DISABLE) * This routine enables (OAS_LUN_ENABLE) or disables (OAS_LUN_DISABLE)
* a lun for OAS operations. * a lun for OAS operations.
@ -3359,6 +3361,7 @@ lpfc_oas_lun_show(struct device *dev, struct device_attribute *attr,
* @dev: class device that is converted into a Scsi_host. * @dev: class device that is converted into a Scsi_host.
* @attr: device attribute, not used. * @attr: device attribute, not used.
* @buf: buffer for passing information. * @buf: buffer for passing information.
* @count: size of the formatting string
* *
* This function sets the OAS state for lun. Before this function is called, * This function sets the OAS state for lun. Before this function is called,
* the vport wwpn, target wwpn, and oas state need to be set. * the vport wwpn, target wwpn, and oas state need to be set.
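
Most of the kernel-doc hunks above share one shape: add the missing @param lines and make every name match the function signature, so the kernel-doc parser stops warning. For reference, a well-formed header for a sysfs store callback looks like the sketch below; example_store and its body are invented for illustration and are not lpfc code.

/**
 * example_store - Set an example attribute of the adapter
 * @dev: class device that is converted into a Scsi_host.
 * @attr: device attribute, not used.
 * @buf: buffer containing the new value.
 * @count: number of bytes in @buf.
 *
 * Returns: @count on success, negative errno otherwise.
 **/
static ssize_t
example_store(struct device *dev, struct device_attribute *attr,
	      const char *buf, size_t count)
{
	/* a real callback would parse and apply @buf here */
	return count;
}
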
@ -3631,16 +3634,14 @@ lpfc_update_rport_devloss_tmo(struct lpfc_vport *vport)
shost = lpfc_shost_from_vport(vport); shost = lpfc_shost_from_vport(vport);
spin_lock_irq(shost->host_lock); spin_lock_irq(shost->host_lock);
list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) { list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
if (!NLP_CHK_NODE_ACT(ndlp))
continue;
if (ndlp->rport) if (ndlp->rport)
ndlp->rport->dev_loss_tmo = vport->cfg_devloss_tmo; ndlp->rport->dev_loss_tmo = vport->cfg_devloss_tmo;
#if (IS_ENABLED(CONFIG_NVME_FC)) #if (IS_ENABLED(CONFIG_NVME_FC))
spin_lock(&vport->phba->hbalock); spin_lock(&ndlp->lock);
rport = lpfc_ndlp_get_nrport(ndlp); rport = lpfc_ndlp_get_nrport(ndlp);
if (rport) if (rport)
remoteport = rport->remoteport; remoteport = rport->remoteport;
spin_unlock(&vport->phba->hbalock); spin_unlock(&ndlp->lock);
if (rport && remoteport) if (rport && remoteport)
nvme_fc_set_remoteport_devloss(remoteport, nvme_fc_set_remoteport_devloss(remoteport,
vport->cfg_devloss_tmo); vport->cfg_devloss_tmo);
@ -3820,7 +3821,7 @@ lpfc_vport_param_init(tgt_queue_depth, LPFC_MAX_TGT_QDEPTH,
/** /**
* lpfc_tgt_queue_depth_store: Sets an attribute value. * lpfc_tgt_queue_depth_store: Sets an attribute value.
* @phba: pointer the the adapter structure. * @vport: lpfc vport structure pointer.
* @val: integer attribute value. * @val: integer attribute value.
* *
* Description: Sets the parameter to the new value. * Description: Sets the parameter to the new value.
@ -4005,8 +4006,10 @@ LPFC_ATTR(topology, 0, 0, 6,
/** /**
* lpfc_topology_set - Set the adapters topology field * lpfc_topology_set - Set the adapters topology field
* @phba: lpfc_hba pointer. * @dev: class device that is converted into a scsi_host.
* @val: topology value. * @attr: device attribute, not used.
* @buf: buffer for passing information.
* @count: size of the data buffer.
* *
* Description: * Description:
* If val is in a valid range then set the adapter's topology field and * If val is in a valid range then set the adapter's topology field and
@ -4125,6 +4128,7 @@ static DEVICE_ATTR_RO(lpfc_static_vport);
/** /**
* lpfc_stat_data_ctrl_store - write call back for lpfc_stat_data_ctrl sysfs file * lpfc_stat_data_ctrl_store - write call back for lpfc_stat_data_ctrl sysfs file
* @dev: Pointer to class device. * @dev: Pointer to class device.
* @attr: Unused.
* @buf: Data buffer. * @buf: Data buffer.
* @count: Size of the data buffer. * @count: Size of the data buffer.
* *
@ -4288,7 +4292,8 @@ lpfc_stat_data_ctrl_store(struct device *dev, struct device_attribute *attr,
/** /**
* lpfc_stat_data_ctrl_show - Read function for lpfc_stat_data_ctrl sysfs file * lpfc_stat_data_ctrl_show - Read function for lpfc_stat_data_ctrl sysfs file
* @dev: Pointer to class device object. * @dev: Pointer to class device.
* @attr: Unused.
* @buf: Data buffer. * @buf: Data buffer.
* *
* This function is the read call back function for * This function is the read call back function for
@ -4367,7 +4372,7 @@ static DEVICE_ATTR_RW(lpfc_stat_data_ctrl);
* @filp: sysfs file * @filp: sysfs file
* @kobj: Pointer to the kernel object * @kobj: Pointer to the kernel object
* @bin_attr: Attribute object * @bin_attr: Attribute object
* @buff: Buffer pointer * @buf: Buffer pointer
* @off: File offset * @off: File offset
* @count: Buffer size * @count: Buffer size
* *
@ -4397,7 +4402,7 @@ sysfs_drvr_stat_data_read(struct file *filp, struct kobject *kobj,
spin_lock_irq(shost->host_lock); spin_lock_irq(shost->host_lock);
list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) { list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
if (!NLP_CHK_NODE_ACT(ndlp) || !ndlp->lat_data) if (!ndlp->lat_data)
continue; continue;
if (nport_index > 0) { if (nport_index > 0) {
@ -4454,8 +4459,10 @@ static struct bin_attribute sysfs_drvr_stat_data_attr = {
*/ */
/** /**
* lpfc_link_speed_set - Set the adapters link speed * lpfc_link_speed_set - Set the adapters link speed
* @phba: lpfc_hba pointer. * @dev: Pointer to class device.
* @val: link speed value. * @attr: Unused.
* @buf: Data buffer.
* @count: Size of the data buffer.
* *
* Description: * Description:
* If val is in a valid range then set the adapter's link speed field and * If val is in a valid range then set the adapter's link speed field and
@ -5463,8 +5470,6 @@ lpfc_max_scsicmpl_time_set(struct lpfc_vport *vport, int val)
spin_lock_irq(shost->host_lock); spin_lock_irq(shost->host_lock);
list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp) { list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp) {
if (!NLP_CHK_NODE_ACT(ndlp))
continue;
if (ndlp->nlp_state == NLP_STE_UNUSED_NODE) if (ndlp->nlp_state == NLP_STE_UNUSED_NODE)
continue; continue;
ndlp->cmd_qdepth = vport->cfg_tgt_queue_depth; ndlp->cmd_qdepth = vport->cfg_tgt_queue_depth;
@ -6138,6 +6143,14 @@ LPFC_BBCR_ATTR_RW(enable_bbcr, 1, 0, 1, "Enable BBC Recovery");
*/ */
LPFC_ATTR_RW(enable_dpp, 1, 0, 1, "Enable Direct Packet Push"); LPFC_ATTR_RW(enable_dpp, 1, 0, 1, "Enable Direct Packet Push");
/*
* lpfc_enable_mi: Enable FDMI MIB
* 0 = disabled
* 1 = enabled (default)
* Value range is [0,1].
*/
LPFC_ATTR_R(enable_mi, 1, 0, 1, "Enable MI");
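
The LPFC_ATTR_R() line above generates a read-only module parameter plus a range-checked init helper that lpfc_get_cfgparam() calls later in this series. As a rough analogue of the range check, a sketch; the names and return values here are assumptions, not the macro's actual expansion:

#include <errno.h>

static int example_enable_mi = 1;		/* 1 = enabled (default) */

/* Illustrative <param>_init helper: accept the value only if it lies
 * in the documented [0,1] range, else keep the default. */
static int example_enable_mi_init(int val)
{
	if (val >= 0 && val <= 1) {
		example_enable_mi = val;
		return 0;
	}
	return -EINVAL;
}
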
struct device_attribute *lpfc_hba_attrs[] = { struct device_attribute *lpfc_hba_attrs[] = {
&dev_attr_nvme_info, &dev_attr_nvme_info,
&dev_attr_scsi_stat, &dev_attr_scsi_stat,
@ -6255,6 +6268,7 @@ struct device_attribute *lpfc_hba_attrs[] = {
&dev_attr_lpfc_ras_fwlog_func, &dev_attr_lpfc_ras_fwlog_func,
&dev_attr_lpfc_enable_bbcr, &dev_attr_lpfc_enable_bbcr,
&dev_attr_lpfc_enable_dpp, &dev_attr_lpfc_enable_dpp,
&dev_attr_lpfc_enable_mi,
NULL, NULL,
}; };
@ -6964,8 +6978,7 @@ lpfc_get_node_by_target(struct scsi_target *starget)
spin_lock_irq(shost->host_lock); spin_lock_irq(shost->host_lock);
/* Search for this, mapped, target ID */ /* Search for this, mapped, target ID */
list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) { list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
if (NLP_CHK_NODE_ACT(ndlp) && if (ndlp->nlp_state == NLP_STE_MAPPED_NODE &&
ndlp->nlp_state == NLP_STE_MAPPED_NODE &&
starget->id == ndlp->nlp_sid) { starget->id == ndlp->nlp_sid) {
spin_unlock_irq(shost->host_lock); spin_unlock_irq(shost->host_lock);
return ndlp; return ndlp;
@ -7040,7 +7053,7 @@ lpfc_set_rport_loss_tmo(struct fc_rport *rport, uint32_t timeout)
else else
rport->dev_loss_tmo = 1; rport->dev_loss_tmo = 1;
if (!ndlp || !NLP_CHK_NODE_ACT(ndlp)) { if (!ndlp) {
dev_info(&rport->dev, "Cannot find remote node to " dev_info(&rport->dev, "Cannot find remote node to "
"set rport dev loss tmo, port_id x%x\n", "set rport dev loss tmo, port_id x%x\n",
rport->port_id); rport->port_id);
@ -7056,7 +7069,7 @@ lpfc_set_rport_loss_tmo(struct fc_rport *rport, uint32_t timeout)
#endif #endif
} }
/** /*
* lpfc_rport_show_function - Return rport target information * lpfc_rport_show_function - Return rport target information
* *
* Description: * Description:
@ -7105,6 +7118,7 @@ lpfc_set_vport_symbolic_name(struct fc_vport *fc_vport)
/** /**
* lpfc_hba_log_verbose_init - Set hba's log verbose level * lpfc_hba_log_verbose_init - Set hba's log verbose level
* @phba: Pointer to lpfc_hba struct. * @phba: Pointer to lpfc_hba struct.
* @verbose: Verbose level to set.
* *
* This function is called by the lpfc_get_cfgparam() routine to set the * This function is called by the lpfc_get_cfgparam() routine to set the
* module lpfc_log_verbose into the @phba cfg_log_verbose for use with * module lpfc_log_verbose into the @phba cfg_log_verbose for use with
@ -7359,6 +7373,7 @@ lpfc_get_cfgparam(struct lpfc_hba *phba)
lpfc_irq_chann_init(phba, lpfc_irq_chann); lpfc_irq_chann_init(phba, lpfc_irq_chann);
lpfc_enable_bbcr_init(phba, lpfc_enable_bbcr); lpfc_enable_bbcr_init(phba, lpfc_enable_bbcr);
lpfc_enable_dpp_init(phba, lpfc_enable_dpp); lpfc_enable_dpp_init(phba, lpfc_enable_dpp);
lpfc_enable_mi_init(phba, lpfc_enable_mi);
if (phba->sli_rev != LPFC_SLI_REV4) { if (phba->sli_rev != LPFC_SLI_REV4) {
/* NVME only supported on SLI4 */ /* NVME only supported on SLI4 */


@ -1,7 +1,7 @@
/******************************************************************* /*******************************************************************
* This file is part of the Emulex Linux Device Driver for * * This file is part of the Emulex Linux Device Driver for *
* Fibre Channel Host Bus Adapters. * * Fibre Channel Host Bus Adapters. *
* Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term * * Copyright (C) 2017-2020 Broadcom. All Rights Reserved. The term *
* Broadcom refers to Broadcom Inc. and/or its subsidiaries. * * Broadcom refers to Broadcom Inc. and/or its subsidiaries. *
* Copyright (C) 2009-2015 Emulex. All rights reserved. * * Copyright (C) 2009-2015 Emulex. All rights reserved. *
* EMULEX and SLI are trademarks of Emulex. * * EMULEX and SLI are trademarks of Emulex. *
@ -329,7 +329,7 @@ lpfc_bsg_send_mgmt_cmd_cmp(struct lpfc_hba *phba,
spin_unlock_irqrestore(&phba->hbalock, flags); spin_unlock_irqrestore(&phba->hbalock, flags);
iocb = &dd_data->context_un.iocb; iocb = &dd_data->context_un.iocb;
ndlp = iocb->ndlp; ndlp = iocb->cmdiocbq->context_un.ndlp;
rmp = iocb->rmp; rmp = iocb->rmp;
cmp = cmdiocbq->context2; cmp = cmdiocbq->context2;
bmp = cmdiocbq->context3; bmp = cmdiocbq->context3;
@ -366,8 +366,8 @@ lpfc_bsg_send_mgmt_cmd_cmp(struct lpfc_hba *phba,
lpfc_free_bsg_buffers(phba, rmp); lpfc_free_bsg_buffers(phba, rmp);
lpfc_mbuf_free(phba, bmp->virt, bmp->phys); lpfc_mbuf_free(phba, bmp->virt, bmp->phys);
kfree(bmp); kfree(bmp);
lpfc_sli_release_iocbq(phba, cmdiocbq);
lpfc_nlp_put(ndlp); lpfc_nlp_put(ndlp);
lpfc_sli_release_iocbq(phba, cmdiocbq);
kfree(dd_data); kfree(dd_data);
/* Complete the job if the job is still active */ /* Complete the job if the job is still active */
@ -408,6 +408,9 @@ lpfc_bsg_send_mgmt_cmd(struct bsg_job *job)
/* in case no data is transferred */ /* in case no data is transferred */
bsg_reply->reply_payload_rcv_len = 0; bsg_reply->reply_payload_rcv_len = 0;
if (ndlp->nlp_flag & NLP_ELS_SND_MASK)
return -ENODEV;
/* allocate our bsg tracking structure */ /* allocate our bsg tracking structure */
dd_data = kmalloc(sizeof(struct bsg_job_data), GFP_KERNEL); dd_data = kmalloc(sizeof(struct bsg_job_data), GFP_KERNEL);
if (!dd_data) { if (!dd_data) {
@ -417,20 +420,10 @@ lpfc_bsg_send_mgmt_cmd(struct bsg_job *job)
goto no_dd_data; goto no_dd_data;
} }
if (!lpfc_nlp_get(ndlp)) {
rc = -ENODEV;
goto no_ndlp;
}
if (ndlp->nlp_flag & NLP_ELS_SND_MASK) {
rc = -ENODEV;
goto free_ndlp;
}
cmdiocbq = lpfc_sli_get_iocbq(phba); cmdiocbq = lpfc_sli_get_iocbq(phba);
if (!cmdiocbq) { if (!cmdiocbq) {
rc = -ENOMEM; rc = -ENOMEM;
goto free_ndlp; goto free_dd;
} }
cmd = &cmdiocbq->iocb; cmd = &cmdiocbq->iocb;
@ -496,11 +489,10 @@ lpfc_bsg_send_mgmt_cmd(struct bsg_job *job)
cmdiocbq->context1 = dd_data; cmdiocbq->context1 = dd_data;
cmdiocbq->context2 = cmp; cmdiocbq->context2 = cmp;
cmdiocbq->context3 = bmp; cmdiocbq->context3 = bmp;
cmdiocbq->context_un.ndlp = ndlp;
dd_data->type = TYPE_IOCB; dd_data->type = TYPE_IOCB;
dd_data->set_job = job; dd_data->set_job = job;
dd_data->context_un.iocb.cmdiocbq = cmdiocbq; dd_data->context_un.iocb.cmdiocbq = cmdiocbq;
dd_data->context_un.iocb.ndlp = ndlp;
dd_data->context_un.iocb.rmp = rmp; dd_data->context_un.iocb.rmp = rmp;
job->dd_data = dd_data; job->dd_data = dd_data;
@ -514,8 +506,13 @@ lpfc_bsg_send_mgmt_cmd(struct bsg_job *job)
readl(phba->HCregaddr); /* flush */ readl(phba->HCregaddr); /* flush */
} }
iocb_stat = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, cmdiocbq, 0); cmdiocbq->context_un.ndlp = lpfc_nlp_get(ndlp);
if (!cmdiocbq->context_un.ndlp) {
rc = -ENODEV;
goto free_rmp;
}
iocb_stat = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, cmdiocbq, 0);
if (iocb_stat == IOCB_SUCCESS) { if (iocb_stat == IOCB_SUCCESS) {
spin_lock_irqsave(&phba->hbalock, flags); spin_lock_irqsave(&phba->hbalock, flags);
/* make sure the I/O had not been completed yet */ /* make sure the I/O had not been completed yet */
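
This hunk shows the reference pattern that recurs throughout the series: take the node reference with lpfc_nlp_get() only once setup can no longer fail, store it in the iocb, and drop it on the failure path; after a successful issue the completion handler owns the reference. A self-contained sketch of the idiom, with generic names standing in for the lpfc calls:

struct obj { int refcount; };

/* obj_get()/obj_put() stand in for lpfc_nlp_get()/lpfc_nlp_put();
 * every name here is illustrative, not driver API. */
static struct obj *obj_get(struct obj *o)
{
	if (!o || o->refcount <= 0)
		return NULL;		/* node already being torn down */
	o->refcount++;
	return o;
}

static void obj_put(struct obj *o)
{
	if (o)
		o->refcount--;
}

static int issue_with_ref(struct obj *o, int (*submit)(struct obj *))
{
	struct obj *held = obj_get(o);	/* taken just before the issue */

	if (!held)
		return -1;		/* -ENODEV in the driver */
	if (submit(held) != 0) {
		obj_put(held);		/* issue failed: drop our ref */
		return -1;
	}
	return 0;			/* completion handler drops it */
}
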
@ -532,7 +529,7 @@ lpfc_bsg_send_mgmt_cmd(struct bsg_job *job)
} }
/* iocb failed so cleanup */ /* iocb failed so cleanup */
job->dd_data = NULL; lpfc_nlp_put(ndlp);
free_rmp: free_rmp:
lpfc_free_bsg_buffers(phba, rmp); lpfc_free_bsg_buffers(phba, rmp);
@ -544,9 +541,7 @@ free_bmp:
kfree(bmp); kfree(bmp);
free_cmdiocbq: free_cmdiocbq:
lpfc_sli_release_iocbq(phba, cmdiocbq); lpfc_sli_release_iocbq(phba, cmdiocbq);
free_ndlp: free_dd:
lpfc_nlp_put(ndlp);
no_ndlp:
kfree(dd_data); kfree(dd_data);
no_dd_data: no_dd_data:
/* make error code available to userspace */ /* make error code available to userspace */
@ -640,8 +635,9 @@ lpfc_bsg_rport_els_cmp(struct lpfc_hba *phba,
} }
} }
lpfc_nlp_put(ndlp);
lpfc_els_free_iocb(phba, cmdiocbq); lpfc_els_free_iocb(phba, cmdiocbq);
lpfc_nlp_put(ndlp);
kfree(dd_data); kfree(dd_data);
/* Complete the job if the job is still active */ /* Complete the job if the job is still active */
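
Note the ordering swap in this hunk: the ELS iocb is now freed before the node reference is dropped. One plausible reading is that the iocb can still reference the node until it is freed, so the reference must outlive it; a tiny stand-alone sketch of that ordering (node_put and the types are stand-ins, not lpfc API):

#include <stdlib.h>

struct node { int refcount; };
struct container { struct node *n; };

static void node_put(struct node *n)
{
	n->refcount--;
}

/* Illustrative only: release the container before dropping the node
 * reference it embeds, so nothing freed still points at a dead node. */
static void complete_job(struct container *c)
{
	struct node *n = c->n;

	free(c);	/* c may reference n up to this point */
	node_put(n);	/* safe to let the node go away now */
}
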
@ -718,15 +714,14 @@ lpfc_bsg_rport_els(struct bsg_job *job)
goto release_ndlp; goto release_ndlp;
} }
rpi = ndlp->nlp_rpi;
/* Transfer the request payload to allocated command dma buffer */ /* Transfer the request payload to allocated command dma buffer */
sg_copy_to_buffer(job->request_payload.sg_list, sg_copy_to_buffer(job->request_payload.sg_list,
job->request_payload.sg_cnt, job->request_payload.sg_cnt,
((struct lpfc_dmabuf *)cmdiocbq->context2)->virt, ((struct lpfc_dmabuf *)cmdiocbq->context2)->virt,
cmdsize); cmdsize);
rpi = ndlp->nlp_rpi;
if (phba->sli_rev == LPFC_SLI_REV4) if (phba->sli_rev == LPFC_SLI_REV4)
cmdiocbq->iocb.ulpContext = phba->sli4_hba.rpi_ids[rpi]; cmdiocbq->iocb.ulpContext = phba->sli4_hba.rpi_ids[rpi];
else else
@ -752,8 +747,13 @@ lpfc_bsg_rport_els(struct bsg_job *job)
readl(phba->HCregaddr); /* flush */ readl(phba->HCregaddr); /* flush */
} }
rc = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, cmdiocbq, 0); cmdiocbq->context1 = lpfc_nlp_get(ndlp);
if (!cmdiocbq->context1) {
rc = -EIO;
goto linkdown_err;
}
rc = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, cmdiocbq, 0);
if (rc == IOCB_SUCCESS) { if (rc == IOCB_SUCCESS) {
spin_lock_irqsave(&phba->hbalock, flags); spin_lock_irqsave(&phba->hbalock, flags);
/* make sure the I/O had not been completed/released */ /* make sure the I/O had not been completed/released */
@ -769,11 +769,9 @@ lpfc_bsg_rport_els(struct bsg_job *job)
rc = -EIO; rc = -EIO;
} }
/* iocb failed so cleanup */ /* I/O issue failed. Cleanup resources. */
job->dd_data = NULL;
linkdown_err: linkdown_err:
cmdiocbq->context1 = ndlp;
lpfc_els_free_iocb(phba, cmdiocbq); lpfc_els_free_iocb(phba, cmdiocbq);
release_ndlp: release_ndlp:
@ -902,11 +900,8 @@ diag_cmd_data_free(struct lpfc_hba *phba, struct lpfc_dmabufext *mlist)
return 0; return 0;
} }
/** /*
* lpfc_bsg_ct_unsol_event - process an unsolicited CT command * lpfc_bsg_ct_unsol_event - process an unsolicited CT command
* @phba:
* @pring:
* @piocbq:
* *
* This function is called when an unsolicited CT command is received. It * This function is called when an unsolicited CT command is received. It
* forwards the event to any processes registered to receive CT events. * forwards the event to any processes registered to receive CT events.
@ -939,28 +934,9 @@ lpfc_bsg_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
INIT_LIST_HEAD(&head); INIT_LIST_HEAD(&head);
list_add_tail(&head, &piocbq->list); list_add_tail(&head, &piocbq->list);
if (piocbq->iocb.ulpBdeCount == 0 || ct_req = (struct lpfc_sli_ct_request *)bdeBuf1;
piocbq->iocb.un.cont64[0].tus.f.bdeSize == 0)
goto error_ct_unsol_exit;
if (phba->link_state == LPFC_HBA_ERROR ||
(!(phba->sli.sli_flag & LPFC_SLI_ACTIVE)))
goto error_ct_unsol_exit;
if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED)
dmabuf = bdeBuf1;
else {
dma_addr = getPaddr(piocbq->iocb.un.cont64[0].addrHigh,
piocbq->iocb.un.cont64[0].addrLow);
dmabuf = lpfc_sli_ringpostbuf_get(phba, pring, dma_addr);
}
if (dmabuf == NULL)
goto error_ct_unsol_exit;
ct_req = (struct lpfc_sli_ct_request *)dmabuf->virt;
evt_req_id = ct_req->FsType; evt_req_id = ct_req->FsType;
cmd = ct_req->CommandResponse.bits.CmdRsp; cmd = ct_req->CommandResponse.bits.CmdRsp;
if (!(phba->sli3_options & LPFC_SLI3_HBQ_ENABLED))
lpfc_sli_ringpostbuf_put(phba, pring, dmabuf);
spin_lock_irqsave(&phba->ct_ev_lock, flags); spin_lock_irqsave(&phba->ct_ev_lock, flags);
list_for_each_entry(evt, &phba->ct_ev_waiters, node) { list_for_each_entry(evt, &phba->ct_ev_waiters, node) {
@ -1474,7 +1450,8 @@ lpfc_issue_ct_rsp_cmp(struct lpfc_hba *phba,
* @phba: Pointer to HBA context object. * @phba: Pointer to HBA context object.
* @job: Pointer to the job object. * @job: Pointer to the job object.
* @tag: tag index value into the ports context exchange array. * @tag: tag index value into the ports context exchange array.
* @bmp: Pointer to a dma buffer descriptor. * @cmp: Pointer to a cmp dma buffer descriptor.
* @bmp: Pointer to a bmp dma buffer descriptor.
* @num_entry: Number of entries in the bde. * @num_entry: Number of entries in the bde.
**/ **/
static int static int
@ -1490,6 +1467,15 @@ lpfc_issue_ct_rsp(struct lpfc_hba *phba, struct bsg_job *job, uint32_t tag,
unsigned long flags; unsigned long flags;
uint32_t creg_val; uint32_t creg_val;
ndlp = lpfc_findnode_did(phba->pport, phba->ct_ctx[tag].SID);
if (!ndlp) {
lpfc_printf_log(phba, KERN_WARNING, LOG_ELS,
"2721 ndlp null for oxid %x SID %x\n",
phba->ct_ctx[tag].rxid,
phba->ct_ctx[tag].SID);
return IOCB_ERROR;
}
/* allocate our bsg tracking structure */ /* allocate our bsg tracking structure */
dd_data = kmalloc(sizeof(struct bsg_job_data), GFP_KERNEL); dd_data = kmalloc(sizeof(struct bsg_job_data), GFP_KERNEL);
if (!dd_data) { if (!dd_data) {
@ -1540,12 +1526,6 @@ lpfc_issue_ct_rsp(struct lpfc_hba *phba, struct bsg_job *job, uint32_t tag,
goto issue_ct_rsp_exit; goto issue_ct_rsp_exit;
} }
/* Check if the ndlp is active */
if (!ndlp || !NLP_CHK_NODE_ACT(ndlp)) {
rc = IOCB_ERROR;
goto issue_ct_rsp_exit;
}
/* get a reference count so the ndlp doesn't go away while /* get a reference count so the ndlp doesn't go away while
* we respond * we respond
*/ */
@ -1580,7 +1560,11 @@ lpfc_issue_ct_rsp(struct lpfc_hba *phba, struct bsg_job *job, uint32_t tag,
dd_data->type = TYPE_IOCB; dd_data->type = TYPE_IOCB;
dd_data->set_job = job; dd_data->set_job = job;
dd_data->context_un.iocb.cmdiocbq = ctiocb; dd_data->context_un.iocb.cmdiocbq = ctiocb;
dd_data->context_un.iocb.ndlp = ndlp; dd_data->context_un.iocb.ndlp = lpfc_nlp_get(ndlp);
if (!dd_data->context_un.iocb.ndlp) {
rc = -IOCB_ERROR;
goto issue_ct_rsp_exit;
}
dd_data->context_un.iocb.rmp = NULL; dd_data->context_un.iocb.rmp = NULL;
job->dd_data = dd_data; job->dd_data = dd_data;
@ -1595,7 +1579,6 @@ lpfc_issue_ct_rsp(struct lpfc_hba *phba, struct bsg_job *job, uint32_t tag,
} }
rc = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, ctiocb, 0); rc = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, ctiocb, 0);
if (rc == IOCB_SUCCESS) { if (rc == IOCB_SUCCESS) {
spin_lock_irqsave(&phba->hbalock, flags); spin_lock_irqsave(&phba->hbalock, flags);
/* make sure the I/O had not been completed/released */ /* make sure the I/O had not been completed/released */
@ -1609,6 +1592,7 @@ lpfc_issue_ct_rsp(struct lpfc_hba *phba, struct bsg_job *job, uint32_t tag,
/* iocb failed so cleanup */ /* iocb failed so cleanup */
job->dd_data = NULL; job->dd_data = NULL;
lpfc_nlp_put(ndlp);
issue_ct_rsp_exit: issue_ct_rsp_exit:
lpfc_sli_release_iocbq(phba, ctiocb); lpfc_sli_release_iocbq(phba, ctiocb);
@ -3535,6 +3519,7 @@ static int lpfc_bsg_check_cmd_access(struct lpfc_hba *phba,
mb->mbxCommand); mb->mbxCommand);
return -EPERM; return -EPERM;
} }
break;
case MBX_WRITE_NV: case MBX_WRITE_NV:
case MBX_WRITE_VPARMS: case MBX_WRITE_VPARMS:
case MBX_LOAD_SM: case MBX_LOAD_SM:
@ -3886,9 +3871,9 @@ lpfc_bsg_sli_cfg_dma_desc_setup(struct lpfc_hba *phba, enum nemb_type nemb_tp,
/** /**
* lpfc_bsg_sli_cfg_mse_read_cmd_ext - sli_config non-embedded mailbox cmd read * lpfc_bsg_sli_cfg_mse_read_cmd_ext - sli_config non-embedded mailbox cmd read
* @phba: Pointer to HBA context object. * @phba: Pointer to HBA context object.
* @mb: Pointer to a BSG mailbox object. * @job: Pointer to the job object.
* @nemb_tp: Enumerate of non-embedded mailbox command type. * @nemb_tp: Enumerate of non-embedded mailbox command type.
* @dmabuff: Pointer to a DMA buffer descriptor. * @dmabuf: Pointer to a DMA buffer descriptor.
* *
* This routine performs SLI_CONFIG (0x9B) read mailbox command operation with * This routine performs SLI_CONFIG (0x9B) read mailbox command operation with
* non-embedded external buffers. * non-embedded external buffers.
@ -4075,8 +4060,9 @@ job_error:
/** /**
* lpfc_bsg_sli_cfg_write_cmd_ext - sli_config non-embedded mailbox cmd write * lpfc_bsg_sli_cfg_write_cmd_ext - sli_config non-embedded mailbox cmd write
* @phba: Pointer to HBA context object. * @phba: Pointer to HBA context object.
* @mb: Pointer to a BSG mailbox object. * @job: Pointer to the job object.
* @dmabuff: Pointer to a DMA buffer descriptor. * @nemb_tp: Enumerate of non-embedded mailbox command type.
* @dmabuf: Pointer to a DMA buffer descriptor.
* *
* This routine performs SLI_CONFIG (0x9B) write mailbox command operation with * This routine performs SLI_CONFIG (0x9B) write mailbox command operation with
* non-embedded external buffers. * non-embedded external buffers.
@ -4241,8 +4227,8 @@ job_error:
/** /**
* lpfc_bsg_handle_sli_cfg_mbox - handle sli-cfg mailbox cmd with ext buffer * lpfc_bsg_handle_sli_cfg_mbox - handle sli-cfg mailbox cmd with ext buffer
* @phba: Pointer to HBA context object. * @phba: Pointer to HBA context object.
* @mb: Pointer to a BSG mailbox object. * @job: Pointer to the job object.
* @dmabuff: Pointer to a DMA buffer descriptor. * @dmabuf: Pointer to a DMA buffer descriptor.
* *
* This routine handles SLI_CONFIG (0x9B) mailbox command with non-embedded * This routine handles SLI_CONFIG (0x9B) mailbox command with non-embedded
* external buffers, including both 0x9B with non-embedded MSEs and 0x9B * external buffers, including both 0x9B with non-embedded MSEs and 0x9B
@ -4393,7 +4379,7 @@ lpfc_bsg_mbox_ext_abort(struct lpfc_hba *phba)
/** /**
* lpfc_bsg_read_ebuf_get - get the next mailbox read external buffer * lpfc_bsg_read_ebuf_get - get the next mailbox read external buffer
* @phba: Pointer to HBA context object. * @phba: Pointer to HBA context object.
* @dmabuf: Pointer to a DMA buffer descriptor. * @job: Pointer to the job object.
* *
* This routine extracts the next mailbox read external buffer back to * This routine extracts the next mailbox read external buffer back to
* user space through BSG. * user space through BSG.
@ -4463,6 +4449,7 @@ lpfc_bsg_read_ebuf_get(struct lpfc_hba *phba, struct bsg_job *job)
/** /**
* lpfc_bsg_write_ebuf_set - set the next mailbox write external buffer * lpfc_bsg_write_ebuf_set - set the next mailbox write external buffer
* @phba: Pointer to HBA context object. * @phba: Pointer to HBA context object.
* @job: Pointer to the job object.
* @dmabuf: Pointer to a DMA buffer descriptor. * @dmabuf: Pointer to a DMA buffer descriptor.
* *
* This routine sets up the next mailbox read external buffer obtained * This routine sets up the next mailbox read external buffer obtained
@ -4588,8 +4575,8 @@ job_error:
/** /**
* lpfc_bsg_handle_sli_cfg_ebuf - handle ext buffer with sli-cfg mailbox cmd * lpfc_bsg_handle_sli_cfg_ebuf - handle ext buffer with sli-cfg mailbox cmd
* @phba: Pointer to HBA context object. * @phba: Pointer to HBA context object.
* @mb: Pointer to a BSG mailbox object. * @job: Pointer to the job object.
* @dmabuff: Pointer to a DMA buffer descriptor. * @dmabuf: Pointer to a DMA buffer descriptor.
* *
* This routine handles the external buffer with SLI_CONFIG (0x9B) mailbox * This routine handles the external buffer with SLI_CONFIG (0x9B) mailbox
* command with multiple non-embedded external buffers. * command with multiple non-embedded external buffers.
@ -4633,8 +4620,8 @@ lpfc_bsg_handle_sli_cfg_ebuf(struct lpfc_hba *phba, struct bsg_job *job,
/** /**
* lpfc_bsg_handle_sli_cfg_ext - handle sli-cfg mailbox with external buffer * lpfc_bsg_handle_sli_cfg_ext - handle sli-cfg mailbox with external buffer
* @phba: Pointer to HBA context object. * @phba: Pointer to HBA context object.
* @mb: Pointer to a BSG mailbox object. * @job: Pointer to the job object.
* @dmabuff: Pointer to a DMA buffer descriptor. * @dmabuf: Pointer to a DMA buffer descriptor.
* *
* This routine checks and handles non-embedded multi-buffer SLI_CONFIG * This routine checks and handles non-embedded multi-buffer SLI_CONFIG
* (0x9B) mailbox commands and external buffers. * (0x9B) mailbox commands and external buffers.
@ -4707,7 +4694,7 @@ sli_cfg_ext_error:
/** /**
* lpfc_bsg_issue_mbox - issues a mailbox command on behalf of an app * lpfc_bsg_issue_mbox - issues a mailbox command on behalf of an app
* @phba: Pointer to HBA context object. * @phba: Pointer to HBA context object.
* @mb: Pointer to a mailbox object. * @job: Pointer to the job object.
* @vport: Pointer to a vport object. * @vport: Pointer to a vport object.
* *
* Allocate a tracking object, mailbox command memory, get a mailbox * Allocate a tracking object, mailbox command memory, get a mailbox
@ -5935,7 +5922,7 @@ lpfc_bsg_timeout(struct bsg_job *job)
} }
} }
if (list_empty(&completions)) if (list_empty(&completions))
lpfc_sli_issue_abort_iotag(phba, pring, cmdiocb); lpfc_sli_issue_abort_iotag(phba, pring, cmdiocb, NULL);
spin_unlock_irqrestore(&phba->hbalock, flags); spin_unlock_irqrestore(&phba->hbalock, flags);
if (!list_empty(&completions)) { if (!list_empty(&completions)) {
lpfc_sli_cancel_iocbs(phba, &completions, lpfc_sli_cancel_iocbs(phba, &completions,
@ -5972,7 +5959,7 @@ lpfc_bsg_timeout(struct bsg_job *job)
} }
} }
if (list_empty(&completions)) if (list_empty(&completions))
lpfc_sli_issue_abort_iotag(phba, pring, cmdiocb); lpfc_sli_issue_abort_iotag(phba, pring, cmdiocb, NULL);
spin_unlock_irqrestore(&phba->hbalock, flags); spin_unlock_irqrestore(&phba->hbalock, flags);
if (!list_empty(&completions)) { if (!list_empty(&completions)) {
lpfc_sli_cancel_iocbs(phba, &completions, lpfc_sli_cancel_iocbs(phba, &completions,


@ -1,7 +1,7 @@
/******************************************************************* /*******************************************************************
* This file is part of the Emulex Linux Device Driver for * * This file is part of the Emulex Linux Device Driver for *
* Fibre Channel Host Bus Adapters. * * Fibre Channel Host Bus Adapters. *
* Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term * * Copyright (C) 2017-2020 Broadcom. All Rights Reserved. The term *
* Broadcom refers to Broadcom Inc. and/or its subsidiaries. * * Broadcom refers to Broadcom Inc. and/or its subsidiaries. *
* Copyright (C) 2004-2016 Emulex. All rights reserved. * * Copyright (C) 2004-2016 Emulex. All rights reserved. *
* EMULEX and SLI are trademarks of Emulex. * * EMULEX and SLI are trademarks of Emulex. *
@ -88,8 +88,6 @@ void lpfc_mbx_cmpl_reg_vfi(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_unregister_vfi_cmpl(struct lpfc_hba *, LPFC_MBOXQ_t *); void lpfc_unregister_vfi_cmpl(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_enqueue_node(struct lpfc_vport *, struct lpfc_nodelist *); void lpfc_enqueue_node(struct lpfc_vport *, struct lpfc_nodelist *);
void lpfc_dequeue_node(struct lpfc_vport *, struct lpfc_nodelist *); void lpfc_dequeue_node(struct lpfc_vport *, struct lpfc_nodelist *);
struct lpfc_nodelist *lpfc_enable_node(struct lpfc_vport *,
struct lpfc_nodelist *, int);
void lpfc_nlp_set_state(struct lpfc_vport *, struct lpfc_nodelist *, int); void lpfc_nlp_set_state(struct lpfc_vport *, struct lpfc_nodelist *, int);
void lpfc_drop_node(struct lpfc_vport *, struct lpfc_nodelist *); void lpfc_drop_node(struct lpfc_vport *, struct lpfc_nodelist *);
void lpfc_set_disctmo(struct lpfc_vport *); void lpfc_set_disctmo(struct lpfc_vport *);
@ -322,8 +320,12 @@ void lpfc_sli_def_mbox_cmpl(struct lpfc_hba *, LPFC_MBOXQ_t *);
void lpfc_sli4_unreg_rpi_cmpl_clr(struct lpfc_hba *, LPFC_MBOXQ_t *); void lpfc_sli4_unreg_rpi_cmpl_clr(struct lpfc_hba *, LPFC_MBOXQ_t *);
int lpfc_sli_issue_iocb(struct lpfc_hba *, uint32_t, int lpfc_sli_issue_iocb(struct lpfc_hba *, uint32_t,
struct lpfc_iocbq *, uint32_t); struct lpfc_iocbq *, uint32_t);
int lpfc_sli_issue_fcp_io(struct lpfc_hba *phba, uint32_t ring_number,
struct lpfc_iocbq *piocb, uint32_t flag);
int lpfc_sli4_issue_wqe(struct lpfc_hba *phba, struct lpfc_sli4_hdw_queue *qp, int lpfc_sli4_issue_wqe(struct lpfc_hba *phba, struct lpfc_sli4_hdw_queue *qp,
struct lpfc_iocbq *pwqe); struct lpfc_iocbq *pwqe);
int lpfc_sli4_issue_abort_iotag(struct lpfc_hba *phba,
struct lpfc_iocbq *cmdiocb, void *cmpl);
struct lpfc_sglq *__lpfc_clear_active_sglq(struct lpfc_hba *phba, uint16_t xri); struct lpfc_sglq *__lpfc_clear_active_sglq(struct lpfc_hba *phba, uint16_t xri);
struct lpfc_sglq *__lpfc_sli_get_nvmet_sglq(struct lpfc_hba *phba, struct lpfc_sglq *__lpfc_sli_get_nvmet_sglq(struct lpfc_hba *phba,
struct lpfc_iocbq *piocbq); struct lpfc_iocbq *piocbq);
@ -348,7 +350,7 @@ int lpfc_sli_hbqbuf_add_hbqs(struct lpfc_hba *, uint32_t);
void lpfc_sli_hbqbuf_free_all(struct lpfc_hba *); void lpfc_sli_hbqbuf_free_all(struct lpfc_hba *);
int lpfc_sli_hbq_size(void); int lpfc_sli_hbq_size(void);
int lpfc_sli_issue_abort_iotag(struct lpfc_hba *, struct lpfc_sli_ring *, int lpfc_sli_issue_abort_iotag(struct lpfc_hba *, struct lpfc_sli_ring *,
struct lpfc_iocbq *); struct lpfc_iocbq *, void *);
int lpfc_sli_sum_iocb(struct lpfc_vport *, uint16_t, uint64_t, lpfc_ctx_cmd); int lpfc_sli_sum_iocb(struct lpfc_vport *, uint16_t, uint64_t, lpfc_ctx_cmd);
int lpfc_sli_abort_iocb(struct lpfc_vport *, struct lpfc_sli_ring *, uint16_t, int lpfc_sli_abort_iocb(struct lpfc_vport *, struct lpfc_sli_ring *, uint16_t,
uint64_t, lpfc_ctx_cmd); uint64_t, lpfc_ctx_cmd);
@ -371,6 +373,8 @@ int lpfc_sli_issue_iocb_wait(struct lpfc_hba *, uint32_t,
uint32_t); uint32_t);
void lpfc_sli_abort_fcp_cmpl(struct lpfc_hba *, struct lpfc_iocbq *, void lpfc_sli_abort_fcp_cmpl(struct lpfc_hba *, struct lpfc_iocbq *,
struct lpfc_iocbq *); struct lpfc_iocbq *);
void lpfc_sli4_abort_fcp_cmpl(struct lpfc_hba *h, struct lpfc_iocbq *i,
struct lpfc_wcqe_complete *w);
void lpfc_sli_free_hbq(struct lpfc_hba *, struct hbq_dmabuf *); void lpfc_sli_free_hbq(struct lpfc_hba *, struct hbq_dmabuf *);
@ -592,11 +596,13 @@ struct lpfc_io_buf *lpfc_get_io_buf(struct lpfc_hba *phba,
void lpfc_release_io_buf(struct lpfc_hba *phba, struct lpfc_io_buf *ncmd, void lpfc_release_io_buf(struct lpfc_hba *phba, struct lpfc_io_buf *ncmd,
struct lpfc_sli4_hdw_queue *qp); struct lpfc_sli4_hdw_queue *qp);
void lpfc_io_ktime(struct lpfc_hba *phba, struct lpfc_io_buf *ncmd); void lpfc_io_ktime(struct lpfc_hba *phba, struct lpfc_io_buf *ncmd);
void lpfc_nvme_cmd_template(void); void lpfc_wqe_cmd_template(void);
void lpfc_nvmet_cmd_template(void); void lpfc_nvmet_cmd_template(void);
void lpfc_nvme_cancel_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn); void lpfc_nvme_cancel_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn);
void lpfc_nvme_prep_abort_wqe(struct lpfc_iocbq *pwqeq, u16 xritag, u8 opt);
extern int lpfc_enable_nvmet_cnt; extern int lpfc_enable_nvmet_cnt;
extern unsigned long long lpfc_enable_nvmet[]; extern unsigned long long lpfc_enable_nvmet[];
extern int lpfc_no_hba_reset_cnt; extern int lpfc_no_hba_reset_cnt;
extern unsigned long lpfc_no_hba_reset[]; extern unsigned long lpfc_no_hba_reset[];
extern union lpfc_wqe128 lpfc_iread_cmd_template;
extern union lpfc_wqe128 lpfc_iwrite_cmd_template;
extern union lpfc_wqe128 lpfc_icmnd_cmd_template;


@ -99,21 +99,265 @@ lpfc_ct_unsol_buffer(struct lpfc_hba *phba, struct lpfc_iocbq *piocbq,
lpfc_ct_ignore_hbq_buffer(phba, piocbq, mp, size); lpfc_ct_ignore_hbq_buffer(phba, piocbq, mp, size);
} }
/**
* lpfc_ct_unsol_cmpl - Completion callback function for unsol ct commands
* @phba: pointer to lpfc hba data structure.
* @cmdiocb: pointer to lpfc command iocb data structure.
* @rspiocb: pointer to lpfc response iocb data structure.
*
* This routine is the completion callback for the unsolicited CT reject
* command. The memory allocated on the reject command path is freed here.
**/
static void
lpfc_ct_unsol_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
struct lpfc_iocbq *rspiocb)
{
struct lpfc_nodelist *ndlp;
struct lpfc_dmabuf *mp, *bmp;
ndlp = (struct lpfc_nodelist *)cmdiocb->context1;
if (ndlp)
lpfc_nlp_put(ndlp);
mp = cmdiocb->context2;
bmp = cmdiocb->context3;
if (mp) {
lpfc_mbuf_free(phba, mp->virt, mp->phys);
kfree(mp);
cmdiocb->context2 = NULL;
}
if (bmp) {
lpfc_mbuf_free(phba, bmp->virt, bmp->phys);
kfree(bmp);
cmdiocb->context3 = NULL;
}
lpfc_sli_release_iocbq(phba, cmdiocb);
}
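
lpfc_ct_unsol_cmpl above also illustrates the ownership convention on these paths: whatever the submit side stashed in context1/2/3 is released exactly once, in the completion callback. A runnable user-space analogue of that convention (all names invented):

#include <stdio.h>
#include <stdlib.h>

struct request {
	void *payload;			/* plays the role of context2 (mp)  */
	void *bpl;			/* plays the role of context3 (bmp) */
	void (*done)(struct request *req);
};

/* The completion callback is the single place the buffers are freed. */
static void request_done(struct request *req)
{
	free(req->payload);
	free(req->bpl);
	free(req);
	puts("request resources released in completion");
}

int main(void)
{
	struct request *req = calloc(1, sizeof(*req));

	if (!req)
		return 1;
	req->payload = malloc(64);
	req->bpl = malloc(64);
	req->done = request_done;
	/* a submit step would go here; on success it must NOT free */
	req->done(req);			/* completion fires exactly once */
	return 0;
}
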
/**
* lpfc_ct_reject_event - Issue reject for unhandled CT MIB commands
* @ndlp: pointer to a node-list data structure.
* @ct_req: pointer to the CT request data structure.
* @rx_id: rx_id of the received unsolicited CT command
* @ox_id: ox_id of the received unsolicited CT command
*
* This routine is invoked by the lpfc_ct_handle_mibreq routine to send a
* reject response for any unhandled CT command.
**/
static void
lpfc_ct_reject_event(struct lpfc_nodelist *ndlp,
struct lpfc_sli_ct_request *ct_req,
u16 rx_id, u16 ox_id)
{
struct lpfc_vport *vport = ndlp->vport;
struct lpfc_hba *phba = vport->phba;
struct lpfc_sli_ct_request *ct_rsp;
struct lpfc_iocbq *cmdiocbq = NULL;
struct lpfc_dmabuf *bmp = NULL;
struct lpfc_dmabuf *mp = NULL;
struct ulp_bde64 *bpl;
IOCB_t *icmd;
u8 rc = 0;
/* fill in BDEs for command */
mp = kmalloc(sizeof(*mp), GFP_KERNEL);
if (!mp) {
rc = 1;
goto ct_exit;
}
mp->virt = lpfc_mbuf_alloc(phba, MEM_PRI, &mp->phys);
if (!mp->virt) {
rc = 2;
goto ct_free_mp;
}
/* Allocate buffer for Buffer ptr list */
bmp = kmalloc(sizeof(*bmp), GFP_KERNEL);
if (!bmp) {
rc = 3;
goto ct_free_mpvirt;
}
bmp->virt = lpfc_mbuf_alloc(phba, MEM_PRI, &bmp->phys);
if (!bmp->virt) {
rc = 4;
goto ct_free_bmp;
}
INIT_LIST_HEAD(&mp->list);
INIT_LIST_HEAD(&bmp->list);
bpl = (struct ulp_bde64 *)bmp->virt;
memset(bpl, 0, sizeof(struct ulp_bde64));
bpl->addrHigh = le32_to_cpu(putPaddrHigh(mp->phys));
bpl->addrLow = le32_to_cpu(putPaddrLow(mp->phys));
bpl->tus.f.bdeFlags = BUFF_TYPE_BLP_64;
bpl->tus.f.bdeSize = (LPFC_CT_PREAMBLE - 4);
bpl->tus.w = le32_to_cpu(bpl->tus.w);
ct_rsp = (struct lpfc_sli_ct_request *)mp->virt;
memset(ct_rsp, 0, sizeof(struct lpfc_sli_ct_request));
ct_rsp->RevisionId.bits.Revision = SLI_CT_REVISION;
ct_rsp->RevisionId.bits.InId = 0;
ct_rsp->FsType = ct_req->FsType;
ct_rsp->FsSubType = ct_req->FsSubType;
ct_rsp->CommandResponse.bits.Size = 0;
ct_rsp->CommandResponse.bits.CmdRsp =
cpu_to_be16(SLI_CT_RESPONSE_FS_RJT);
ct_rsp->ReasonCode = SLI_CT_REQ_NOT_SUPPORTED;
ct_rsp->Explanation = SLI_CT_NO_ADDITIONAL_EXPL;
cmdiocbq = lpfc_sli_get_iocbq(phba);
if (!cmdiocbq) {
rc = 5;
goto ct_free_bmpvirt;
}
icmd = &cmdiocbq->iocb;
icmd->un.genreq64.bdl.ulpIoTag32 = 0;
icmd->un.genreq64.bdl.addrHigh = putPaddrHigh(bmp->phys);
icmd->un.genreq64.bdl.addrLow = putPaddrLow(bmp->phys);
icmd->un.genreq64.bdl.bdeFlags = BUFF_TYPE_BLP_64;
icmd->un.genreq64.bdl.bdeSize = sizeof(struct ulp_bde64);
icmd->un.genreq64.w5.hcsw.Fctl = (LS | LA);
icmd->un.genreq64.w5.hcsw.Dfctl = 0;
icmd->un.genreq64.w5.hcsw.Rctl = FC_RCTL_DD_SOL_CTL;
icmd->un.genreq64.w5.hcsw.Type = FC_TYPE_CT;
icmd->ulpCommand = CMD_XMIT_SEQUENCE64_CX;
icmd->ulpBdeCount = 1;
icmd->ulpLe = 1;
icmd->ulpClass = CLASS3;
/* Save for completion so we can release these resources */
cmdiocbq->context1 = lpfc_nlp_get(ndlp);
cmdiocbq->context2 = (uint8_t *)mp;
cmdiocbq->context3 = (uint8_t *)bmp;
cmdiocbq->iocb_cmpl = lpfc_ct_unsol_cmpl;
icmd->ulpContext = rx_id; /* Xri / rx_id */
icmd->unsli3.rcvsli3.ox_id = ox_id;
icmd->un.ulpWord[3] =
phba->sli4_hba.rpi_ids[ndlp->nlp_rpi];
icmd->ulpTimeout = (3 * phba->fc_ratov);
cmdiocbq->retry = 0;
cmdiocbq->vport = vport;
cmdiocbq->context_un.ndlp = NULL;
cmdiocbq->drvrTimeout = icmd->ulpTimeout + LPFC_DRVR_TIMEOUT;
rc = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, cmdiocbq, 0);
if (!rc)
return;
rc = 6;
lpfc_nlp_put(ndlp);
lpfc_sli_release_iocbq(phba, cmdiocbq);
ct_free_bmpvirt:
lpfc_mbuf_free(phba, bmp->virt, bmp->phys);
ct_free_bmp:
kfree(bmp);
ct_free_mpvirt:
lpfc_mbuf_free(phba, mp->virt, mp->phys);
ct_free_mp:
kfree(mp);
ct_exit:
lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
"6440 Unsol CT: Rsp err %d Data: x%x\n",
rc, vport->fc_flag);
}
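
The error handling in lpfc_ct_reject_event is the usual reverse-order goto unwind: each allocation gets a label, and a failure jumps to the label that frees everything allocated so far. A minimal self-contained sketch, with malloc standing in for the mbuf allocators and all names illustrative:

#include <stdlib.h>

static int build_reject(void)
{
	void *mp, *mp_virt, *bmp;

	mp = malloc(32);
	if (!mp)
		goto err;
	mp_virt = malloc(32);
	if (!mp_virt)
		goto free_mp;
	bmp = malloc(32);
	if (!bmp)
		goto free_mp_virt;

	/* success: ownership passes to the issued I/O, nothing freed here */
	return 0;

free_mp_virt:
	free(mp_virt);
free_mp:
	free(mp);
err:
	return -1;
}
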
/**
* lpfc_ct_handle_mibreq - Process an unsolicited CT MIB request data buffer
* @phba: pointer to lpfc hba data structure.
* @ctiocbq: pointer to lpfc CT command iocb data structure.
*
* This routine is used for processing the IOCB associated with an unsolicited
* CT MIB request. It first determines whether there is an existing ndlp that
* matches the DID from the unsolicited IOCB. If not, it will return.
**/
static void
lpfc_ct_handle_mibreq(struct lpfc_hba *phba, struct lpfc_iocbq *ctiocbq)
{
struct lpfc_sli_ct_request *ct_req;
struct lpfc_nodelist *ndlp = NULL;
struct lpfc_vport *vport = NULL;
IOCB_t *icmd = &ctiocbq->iocb;
u32 mi_cmd, vpi;
u32 did = 0;
vpi = ctiocbq->iocb.unsli3.rcvsli3.vpi;
vport = lpfc_find_vport_by_vpid(phba, vpi);
if (!vport) {
lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
"6437 Unsol CT: VPORT NULL vpi : x%x\n",
vpi);
return;
}
did = ctiocbq->iocb.un.rcvels.remoteID;
if (icmd->ulpStatus) {
lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
"6438 Unsol CT: status:x%x/x%x did : x%x\n",
icmd->ulpStatus, icmd->un.ulpWord[4], did);
return;
}
/* Ignore traffic received during vport shutdown */
if (vport->fc_flag & FC_UNLOADING)
return;
ndlp = lpfc_findnode_did(vport, did);
if (!ndlp) {
lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
"6439 Unsol CT: NDLP Not Found for DID : x%x",
did);
return;
}
ct_req = ((struct lpfc_sli_ct_request *)
(((struct lpfc_dmabuf *)ctiocbq->context2)->virt));
mi_cmd = ct_req->CommandResponse.bits.CmdRsp;
lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
"6442 : MI Cmd : x%x Not Supported\n", mi_cmd);
lpfc_ct_reject_event(ndlp, ct_req,
ctiocbq->iocb.ulpContext,
ctiocbq->iocb.unsli3.rcvsli3.ox_id);
}
/**
* lpfc_ct_unsol_event - Process an unsolicited event from a ct sli ring
* @phba: pointer to lpfc hba data structure.
* @pring: pointer to a SLI ring.
* @ctiocbq: pointer to lpfc ct iocb data structure.
*
* This routine is used to process an unsolicited event received from a SLI
* (Service Level Interface) ring. The actual processing of the data buffer
* associated with the unsolicited event is done by invoking the appropriate
* routine after properly setting up the iocb buffer from the SLI ring on
* which the unsolicited event was received.
**/
void void
lpfc_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, lpfc_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
struct lpfc_iocbq *piocbq) struct lpfc_iocbq *ctiocbq)
{ {
struct lpfc_dmabuf *mp = NULL; struct lpfc_dmabuf *mp = NULL;
IOCB_t *icmd = &piocbq->iocb; IOCB_t *icmd = &ctiocbq->iocb;
int i; int i;
struct lpfc_iocbq *iocbq; struct lpfc_iocbq *iocbq;
dma_addr_t paddr; dma_addr_t dma_addr;
uint32_t size; uint32_t size;
struct list_head head; struct list_head head;
struct lpfc_dmabuf *bdeBuf; struct lpfc_sli_ct_request *ct_req;
struct lpfc_dmabuf *bdeBuf1 = ctiocbq->context2;
struct lpfc_dmabuf *bdeBuf2 = ctiocbq->context3;
if (lpfc_bsg_ct_unsol_event(phba, pring, piocbq) == 0) ctiocbq->context1 = NULL;
return; ctiocbq->context2 = NULL;
ctiocbq->context3 = NULL;
if (unlikely(icmd->ulpStatus == IOSTAT_NEED_BUFFER)) { if (unlikely(icmd->ulpStatus == IOSTAT_NEED_BUFFER)) {
lpfc_sli_hbqbuf_add_hbqs(phba, LPFC_ELS_HBQ); lpfc_sli_hbqbuf_add_hbqs(phba, LPFC_ELS_HBQ);
@ -127,46 +371,75 @@ lpfc_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
return; return;
} }
/* If there are no BDEs associated with this IOCB, /* If there are no BDEs associated
* there is nothing to do. * with this IOCB, there is nothing to do.
*/ */
if (icmd->ulpBdeCount == 0) if (icmd->ulpBdeCount == 0)
return; return;
if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED) {
ctiocbq->context2 = bdeBuf1;
if (icmd->ulpBdeCount == 2)
ctiocbq->context3 = bdeBuf2;
} else {
dma_addr = getPaddr(icmd->un.cont64[0].addrHigh,
icmd->un.cont64[0].addrLow);
ctiocbq->context2 = lpfc_sli_ringpostbuf_get(phba, pring,
dma_addr);
if (icmd->ulpBdeCount == 2) {
dma_addr = getPaddr(icmd->un.cont64[1].addrHigh,
icmd->un.cont64[1].addrLow);
ctiocbq->context3 = lpfc_sli_ringpostbuf_get(phba,
pring,
dma_addr);
}
}
ct_req = ((struct lpfc_sli_ct_request *)
(((struct lpfc_dmabuf *)ctiocbq->context2)->virt));
if (ct_req->FsType == SLI_CT_MANAGEMENT_SERVICE &&
ct_req->FsSubType == SLI_CT_MIB_Subtypes) {
lpfc_ct_handle_mibreq(phba, ctiocbq);
} else {
if (!lpfc_bsg_ct_unsol_event(phba, pring, ctiocbq))
return;
}
if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED) { if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED) {
INIT_LIST_HEAD(&head); INIT_LIST_HEAD(&head);
list_add_tail(&head, &piocbq->list); list_add_tail(&head, &ctiocbq->list);
list_for_each_entry(iocbq, &head, list) { list_for_each_entry(iocbq, &head, list) {
icmd = &iocbq->iocb; icmd = &iocbq->iocb;
if (icmd->ulpBdeCount == 0) if (icmd->ulpBdeCount == 0)
continue; continue;
bdeBuf = iocbq->context2; bdeBuf1 = iocbq->context2;
iocbq->context2 = NULL; iocbq->context2 = NULL;
size = icmd->un.cont64[0].tus.f.bdeSize; size = icmd->un.cont64[0].tus.f.bdeSize;
lpfc_ct_unsol_buffer(phba, piocbq, bdeBuf, size); lpfc_ct_unsol_buffer(phba, ctiocbq, bdeBuf1, size);
lpfc_in_buf_free(phba, bdeBuf); lpfc_in_buf_free(phba, bdeBuf1);
if (icmd->ulpBdeCount == 2) { if (icmd->ulpBdeCount == 2) {
bdeBuf = iocbq->context3; bdeBuf2 = iocbq->context3;
iocbq->context3 = NULL; iocbq->context3 = NULL;
size = icmd->unsli3.rcvsli3.bde2.tus.f.bdeSize; size = icmd->unsli3.rcvsli3.bde2.tus.f.bdeSize;
lpfc_ct_unsol_buffer(phba, piocbq, bdeBuf, lpfc_ct_unsol_buffer(phba, ctiocbq, bdeBuf2,
size); size);
lpfc_in_buf_free(phba, bdeBuf); lpfc_in_buf_free(phba, bdeBuf2);
} }
} }
list_del(&head); list_del(&head);
} else { } else {
INIT_LIST_HEAD(&head); INIT_LIST_HEAD(&head);
list_add_tail(&head, &piocbq->list); list_add_tail(&head, &ctiocbq->list);
list_for_each_entry(iocbq, &head, list) { list_for_each_entry(iocbq, &head, list) {
icmd = &iocbq->iocb; icmd = &iocbq->iocb;
if (icmd->ulpBdeCount == 0) if (icmd->ulpBdeCount == 0)
lpfc_ct_unsol_buffer(phba, iocbq, NULL, 0); lpfc_ct_unsol_buffer(phba, iocbq, NULL, 0);
for (i = 0; i < icmd->ulpBdeCount; i++) { for (i = 0; i < icmd->ulpBdeCount; i++) {
paddr = getPaddr(icmd->un.cont64[i].addrHigh, dma_addr = getPaddr(icmd->un.cont64[i].addrHigh,
icmd->un.cont64[i].addrLow); icmd->un.cont64[i].addrLow);
mp = lpfc_sli_ringpostbuf_get(phba, pring, mp = lpfc_sli_ringpostbuf_get(phba, pring,
paddr); dma_addr);
size = icmd->un.cont64[i].tus.f.bdeSize; size = icmd->un.cont64[i].tus.f.bdeSize;
lpfc_ct_unsol_buffer(phba, iocbq, mp, size); lpfc_ct_unsol_buffer(phba, iocbq, mp, size);
lpfc_in_buf_free(phba, mp); lpfc_in_buf_free(phba, mp);
@ -275,10 +548,8 @@ lpfc_ct_free_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *ctiocb)
{ {
struct lpfc_dmabuf *buf_ptr; struct lpfc_dmabuf *buf_ptr;
if (ctiocb->context_un.ndlp) { /* I/O job is complete so context is now invalid */
lpfc_nlp_put(ctiocb->context_un.ndlp); ctiocb->context_un.ndlp = NULL;
ctiocb->context_un.ndlp = NULL;
}
if (ctiocb->context1) { if (ctiocb->context1) {
buf_ptr = (struct lpfc_dmabuf *) ctiocb->context1; buf_ptr = (struct lpfc_dmabuf *) ctiocb->context1;
lpfc_mbuf_free(phba, buf_ptr->virt, buf_ptr->phys); lpfc_mbuf_free(phba, buf_ptr->virt, buf_ptr->phys);
@ -345,7 +616,6 @@ lpfc_gen_req(struct lpfc_vport *vport, struct lpfc_dmabuf *bmp,
/* Save for completion so we can release these resources */ /* Save for completion so we can release these resources */
geniocb->context1 = (uint8_t *) inp; geniocb->context1 = (uint8_t *) inp;
geniocb->context2 = (uint8_t *) outp; geniocb->context2 = (uint8_t *) outp;
geniocb->context_un.ndlp = lpfc_nlp_get(ndlp);
/* Fill in payload, bp points to frame payload */ /* Fill in payload, bp points to frame payload */
icmd->ulpCommand = CMD_GEN_REQUEST64_CR; icmd->ulpCommand = CMD_GEN_REQUEST64_CR;
@ -384,16 +654,21 @@ lpfc_gen_req(struct lpfc_vport *vport, struct lpfc_dmabuf *bmp,
geniocb->drvrTimeout = icmd->ulpTimeout + LPFC_DRVR_TIMEOUT; geniocb->drvrTimeout = icmd->ulpTimeout + LPFC_DRVR_TIMEOUT;
geniocb->vport = vport; geniocb->vport = vport;
geniocb->retry = retry; geniocb->retry = retry;
geniocb->context_un.ndlp = lpfc_nlp_get(ndlp);
if (!geniocb->context_un.ndlp)
goto out;
rc = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, geniocb, 0); rc = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, geniocb, 0);
if (rc == IOCB_ERROR) { if (rc == IOCB_ERROR) {
geniocb->context_un.ndlp = NULL; geniocb->context_un.ndlp = NULL;
lpfc_nlp_put(ndlp); lpfc_nlp_put(ndlp);
lpfc_sli_release_iocbq(phba, geniocb); goto out;
return 1;
} }
return 0; return 0;
out:
lpfc_sli_release_iocbq(phba, geniocb);
return 1;
} }
/* /*
@ -467,7 +742,7 @@ lpfc_prep_node_fc4type(struct lpfc_vport *vport, uint32_t Did, uint8_t fc4_type)
ndlp = lpfc_setup_disc_node(vport, Did); ndlp = lpfc_setup_disc_node(vport, Did);
if (ndlp && NLP_CHK_NODE_ACT(ndlp)) { if (ndlp) {
lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_CT, lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_CT,
"Parse GID_FTrsp: did:x%x flg:x%x x%x", "Parse GID_FTrsp: did:x%x flg:x%x x%x",
Did, ndlp->nlp_flag, vport->fc_flag); Did, ndlp->nlp_flag, vport->fc_flag);
@ -518,7 +793,7 @@ lpfc_prep_node_fc4type(struct lpfc_vport *vport, uint32_t Did, uint8_t fc4_type)
* Don't even bother to send GFF_ID. * Don't even bother to send GFF_ID.
*/ */
ndlp = lpfc_findnode_did(vport, Did); ndlp = lpfc_findnode_did(vport, Did);
if (ndlp && NLP_CHK_NODE_ACT(ndlp) && if (ndlp &&
(ndlp->nlp_type & (ndlp->nlp_type &
(NLP_FCP_TARGET | NLP_NVME_TARGET))) { (NLP_FCP_TARGET | NLP_NVME_TARGET))) {
if (fc4_type == FC_TYPE_FCP) if (fc4_type == FC_TYPE_FCP)
@ -550,7 +825,6 @@ lpfc_ns_rsp_audit_did(struct lpfc_vport *vport, uint32_t Did, uint8_t fc4_type)
{ {
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
struct lpfc_nodelist *ndlp = NULL; struct lpfc_nodelist *ndlp = NULL;
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
char *str; char *str;
if (phba->cfg_ns_query == LPFC_NS_QUERY_GID_FT) if (phba->cfg_ns_query == LPFC_NS_QUERY_GID_FT)
@ -579,12 +853,12 @@ lpfc_ns_rsp_audit_did(struct lpfc_vport *vport, uint32_t Did, uint8_t fc4_type)
if (ndlp->nlp_type != NLP_NVME_INITIATOR || if (ndlp->nlp_type != NLP_NVME_INITIATOR ||
ndlp->nlp_state != NLP_STE_UNMAPPED_NODE) ndlp->nlp_state != NLP_STE_UNMAPPED_NODE)
continue; continue;
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
if (ndlp->nlp_DID == Did) if (ndlp->nlp_DID == Did)
ndlp->nlp_flag &= ~NLP_NVMET_RECOV; ndlp->nlp_flag &= ~NLP_NVMET_RECOV;
else else
ndlp->nlp_flag |= NLP_NVMET_RECOV; ndlp->nlp_flag |= NLP_NVMET_RECOV;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
} }
} }
} }
@ -600,7 +874,6 @@ lpfc_ns_rsp(struct lpfc_vport *vport, struct lpfc_dmabuf *mp, uint8_t fc4_type,
uint32_t Did, CTentry; uint32_t Did, CTentry;
int Cnt; int Cnt;
struct list_head head; struct list_head head;
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
struct lpfc_nodelist *ndlp = NULL; struct lpfc_nodelist *ndlp = NULL;
lpfc_set_disctmo(vport); lpfc_set_disctmo(vport);
@ -646,9 +919,9 @@ lpfc_ns_rsp(struct lpfc_vport *vport, struct lpfc_dmabuf *mp, uint8_t fc4_type,
continue; continue;
lpfc_disc_state_machine(vport, ndlp, NULL, lpfc_disc_state_machine(vport, ndlp, NULL,
NLP_EVT_DEVICE_RECOVERY); NLP_EVT_DEVICE_RECOVERY);
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag &= ~NLP_NVMET_RECOV; ndlp->nlp_flag &= ~NLP_NVMET_RECOV;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
} }
} }
@ -861,8 +1134,8 @@ lpfc_cmpl_ct_cmd_gid_ft(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
lpfc_disc_start(vport); lpfc_disc_start(vport);
} }
out: out:
cmdiocb->context_un.ndlp = ndlp; /* Now restore ndlp for free */
lpfc_ct_free_iocb(phba, cmdiocb); lpfc_ct_free_iocb(phba, cmdiocb);
lpfc_nlp_put(ndlp);
return; return;
} }
@ -1068,8 +1341,8 @@ lpfc_cmpl_ct_cmd_gid_pt(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
lpfc_disc_start(vport); lpfc_disc_start(vport);
} }
out: out:
cmdiocb->context_un.ndlp = ndlp; /* Now restore ndlp for free */
lpfc_ct_free_iocb(phba, cmdiocb); lpfc_ct_free_iocb(phba, cmdiocb);
lpfc_nlp_put(ndlp);
} }
static void static void
@ -1084,7 +1357,7 @@ lpfc_cmpl_ct_cmd_gff_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
struct lpfc_sli_ct_request *CTrsp; struct lpfc_sli_ct_request *CTrsp;
int did, rc, retry; int did, rc, retry;
uint8_t fbits; uint8_t fbits;
struct lpfc_nodelist *ndlp; struct lpfc_nodelist *ndlp = NULL, *free_ndlp = NULL;
did = ((struct lpfc_sli_ct_request *) inp->virt)->un.gff.PortId; did = ((struct lpfc_sli_ct_request *) inp->virt)->un.gff.PortId;
did = be32_to_cpu(did); did = be32_to_cpu(did);
@ -1150,7 +1423,9 @@ lpfc_cmpl_ct_cmd_gff_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
cmdiocb->retry, did); cmdiocb->retry, did);
if (rc == 0) { if (rc == 0) {
/* success */ /* success */
free_ndlp = cmdiocb->context_un.ndlp;
lpfc_ct_free_iocb(phba, cmdiocb); lpfc_ct_free_iocb(phba, cmdiocb);
lpfc_nlp_put(free_ndlp);
return; return;
} }
} }
@ -1164,7 +1439,7 @@ lpfc_cmpl_ct_cmd_gff_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
/* This is a target port, unregistered port, or the GFF_ID failed */ /* This is a target port, unregistered port, or the GFF_ID failed */
ndlp = lpfc_setup_disc_node(vport, did); ndlp = lpfc_setup_disc_node(vport, did);
if (ndlp && NLP_CHK_NODE_ACT(ndlp)) { if (ndlp) {
lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
"0242 Process x%x GFF " "0242 Process x%x GFF "
"NameServer Rsp Data: x%x x%x x%x\n", "NameServer Rsp Data: x%x x%x x%x\n",
@ -1203,7 +1478,10 @@ out:
} }
lpfc_disc_start(vport); lpfc_disc_start(vport);
} }
free_ndlp = cmdiocb->context_un.ndlp;
lpfc_ct_free_iocb(phba, cmdiocb); lpfc_ct_free_iocb(phba, cmdiocb);
lpfc_nlp_put(free_ndlp);
return; return;
} }
@ -1217,7 +1495,8 @@ lpfc_cmpl_ct_cmd_gft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
struct lpfc_dmabuf *outp = (struct lpfc_dmabuf *)cmdiocb->context2; struct lpfc_dmabuf *outp = (struct lpfc_dmabuf *)cmdiocb->context2;
struct lpfc_sli_ct_request *CTrsp; struct lpfc_sli_ct_request *CTrsp;
int did; int did;
struct lpfc_nodelist *ndlp; struct lpfc_nodelist *ndlp = NULL;
struct lpfc_nodelist *ns_ndlp = NULL;
uint32_t fc4_data_0, fc4_data_1; uint32_t fc4_data_0, fc4_data_1;
did = ((struct lpfc_sli_ct_request *)inp->virt)->un.gft.PortId; did = ((struct lpfc_sli_ct_request *)inp->virt)->un.gft.PortId;
@ -1227,6 +1506,9 @@ lpfc_cmpl_ct_cmd_gft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
"GFT_ID cmpl: status:x%x/x%x did:x%x", "GFT_ID cmpl: status:x%x/x%x did:x%x",
irsp->ulpStatus, irsp->un.ulpWord[4], did); irsp->ulpStatus, irsp->un.ulpWord[4], did);
/* Preserve the nameserver node to release the reference. */
ns_ndlp = cmdiocb->context_un.ndlp;
if (irsp->ulpStatus == IOSTAT_SUCCESS) { if (irsp->ulpStatus == IOSTAT_SUCCESS) {
/* Good status, continue checking */ /* Good status, continue checking */
CTrsp = (struct lpfc_sli_ct_request *)outp->virt; CTrsp = (struct lpfc_sli_ct_request *)outp->virt;
@ -1242,6 +1524,10 @@ lpfc_cmpl_ct_cmd_gft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
(fc4_data_1 & LPFC_FC4_TYPE_BITMASK) ? (fc4_data_1 & LPFC_FC4_TYPE_BITMASK) ?
"NVME" : " "); "NVME" : " ");
/* Lookup the NPort_ID queried in the GFT_ID and find the
* driver's local node. It's an error if the driver
* doesn't have one.
*/
ndlp = lpfc_findnode_did(vport, did); ndlp = lpfc_findnode_did(vport, did);
if (ndlp) { if (ndlp) {
/* The bitmask value for FCP and NVME FCP types is /* The bitmask value for FCP and NVME FCP types is
@ -1287,6 +1573,7 @@ lpfc_cmpl_ct_cmd_gft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
"3065 GFT_ID failed x%08x\n", irsp->ulpStatus); "3065 GFT_ID failed x%08x\n", irsp->ulpStatus);
lpfc_ct_free_iocb(phba, cmdiocb); lpfc_ct_free_iocb(phba, cmdiocb);
lpfc_nlp_put(ns_ndlp);
} }
static void static void
@ -1356,8 +1643,8 @@ lpfc_cmpl_ct(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
} }
out: out:
cmdiocb->context_un.ndlp = ndlp; /* Now restore ndlp for free */
lpfc_ct_free_iocb(phba, cmdiocb); lpfc_ct_free_iocb(phba, cmdiocb);
lpfc_nlp_put(ndlp);
return; return;
} }
@ -1599,8 +1886,7 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode,
int rc = 0; int rc = 0;
ndlp = lpfc_findnode_did(vport, NameServer_DID); ndlp = lpfc_findnode_did(vport, NameServer_DID);
if (!ndlp || !NLP_CHK_NODE_ACT(ndlp) if (!ndlp || ndlp->nlp_state != NLP_STE_UNMAPPED_NODE) {
|| ndlp->nlp_state != NLP_STE_UNMAPPED_NODE) {
rc=1; rc=1;
goto ns_cmd_exit; goto ns_cmd_exit;
} }
@ -1841,11 +2127,6 @@ lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode,
} }
rc=6; rc=6;
/* Decrement ndlp reference count to release ndlp reference held
* for the failed command's callback function.
*/
lpfc_nlp_put(ndlp);
ns_cmd_free_bmpvirt: ns_cmd_free_bmpvirt:
lpfc_mbuf_free(phba, bmp->virt, bmp->phys); lpfc_mbuf_free(phba, bmp->virt, bmp->phys);
ns_cmd_free_bmp: ns_cmd_free_bmp:
@ -1882,7 +2163,7 @@ lpfc_cmpl_ct_disc_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
uint16_t fdmi_cmd = CTcmd->CommandResponse.bits.CmdRsp; uint16_t fdmi_cmd = CTcmd->CommandResponse.bits.CmdRsp;
uint16_t fdmi_rsp = CTrsp->CommandResponse.bits.CmdRsp; uint16_t fdmi_rsp = CTrsp->CommandResponse.bits.CmdRsp;
IOCB_t *irsp = &rspiocb->iocb; IOCB_t *irsp = &rspiocb->iocb;
struct lpfc_nodelist *ndlp; struct lpfc_nodelist *ndlp, *free_ndlp = NULL;
uint32_t latt, cmd, err; uint32_t latt, cmd, err;
latt = lpfc_els_chk_latt(vport); latt = lpfc_els_chk_latt(vport);
@ -1928,10 +2209,13 @@ lpfc_cmpl_ct_disc_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
be16_to_cpu(fdmi_cmd), latt, irsp->ulpStatus, be16_to_cpu(fdmi_cmd), latt, irsp->ulpStatus,
irsp->un.ulpWord[4]); irsp->un.ulpWord[4]);
} }
free_ndlp = cmdiocb->context_un.ndlp;
lpfc_ct_free_iocb(phba, cmdiocb); lpfc_ct_free_iocb(phba, cmdiocb);
lpfc_nlp_put(free_ndlp);
ndlp = lpfc_findnode_did(vport, FDMI_DID); ndlp = lpfc_findnode_did(vport, FDMI_DID);
if (!ndlp || !NLP_CHK_NODE_ACT(ndlp)) if (!ndlp)
return; return;
/* Check for a CT LS_RJT response */ /* Check for a CT LS_RJT response */
@@ -1959,6 +2243,7 @@ lpfc_cmpl_ct_disc_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 			vport->fdmi_port_mask = LPFC_FDMI1_PORT_ATTR;
 			/* Start over */
 			lpfc_fdmi_cmd(vport, ndlp, cmd, 0);
+			return;
 		}
 		if (vport->fdmi_port_mask == LPFC_FDMI2_SMART_ATTR) {
 			vport->fdmi_port_mask = LPFC_FDMI2_PORT_ATTR;

@@ -1968,12 +2253,21 @@ lpfc_cmpl_ct_disc_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 			return;

 		case SLI_MGMT_RPA:
+			/* No retry on Vendor RPA */
+			if (phba->link_flag & LS_CT_VEN_RPA) {
+				lpfc_printf_vlog(vport, KERN_ERR,
+						 LOG_DISCOVERY | LOG_ELS,
+						 "6460 VEN FDMI RPA failure\n");
+				phba->link_flag &= ~LS_CT_VEN_RPA;
+				return;
+			}
 			if (vport->fdmi_port_mask == LPFC_FDMI2_PORT_ATTR) {
 				/* Fallback to FDMI-1 */
 				vport->fdmi_hba_mask = LPFC_FDMI1_HBA_ATTR;
 				vport->fdmi_port_mask = LPFC_FDMI1_PORT_ATTR;
 				/* Start over */
 				lpfc_fdmi_cmd(vport, ndlp, SLI_MGMT_DHBA, 0);
+				return;
 			}
 			if (vport->fdmi_port_mask == LPFC_FDMI2_SMART_ATTR) {
 				vport->fdmi_port_mask = LPFC_FDMI2_PORT_ATTR;

@@ -2004,6 +2298,33 @@ lpfc_cmpl_ct_disc_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 		else
 			lpfc_fdmi_cmd(vport, ndlp, SLI_MGMT_RPRT, 0);
 		break;
+	case SLI_MGMT_RPA:
+		if (vport->port_type == LPFC_PHYSICAL_PORT &&
+		    phba->cfg_enable_mi &&
+		    phba->sli4_hba.pc_sli4_params.mi_ver > LPFC_MIB1_SUPPORT) {
+			/* mi is only for the physical port, no vports */
+			if (phba->link_flag & LS_CT_VEN_RPA) {
+				lpfc_printf_vlog(vport, KERN_INFO,
+						 LOG_DISCOVERY | LOG_ELS,
+						 "6449 VEN RPA Success\n");
+				break;
+			}
+			if (lpfc_fdmi_cmd(vport, ndlp, cmd,
+					  LPFC_FDMI_VENDOR_ATTR_mi) == 0)
+				phba->link_flag |= LS_CT_VEN_RPA;
+			lpfc_printf_vlog(vport, KERN_INFO,
+					 LOG_DISCOVERY | LOG_ELS,
+					 "6458 Send MI FDMI:%x Flag x%x\n",
+					 phba->sli4_hba.pc_sli4_params.mi_value,
+					 phba->link_flag);
+		} else {
+			lpfc_printf_vlog(vport, KERN_INFO,
+					 LOG_DISCOVERY | LOG_ELS,
+					 "6459 No FDMI VEN MI support - "
+					 "RPA Success\n");
+		}
+		break;
 	}
 	return;
 }

@@ -2033,7 +2354,7 @@ lpfc_fdmi_change_check(struct lpfc_vport *vport)
 		return;

 	ndlp = lpfc_findnode_did(vport, FDMI_DID);
-	if (!ndlp || !NLP_CHK_NODE_ACT(ndlp))
+	if (!ndlp)
 		return;

 	/* Check if system hostname changed */

@@ -2974,6 +3295,28 @@ lpfc_fdmi_smart_attr_security(struct lpfc_vport *vport,
 	return size;
 }

+static int
+lpfc_fdmi_vendor_attr_mi(struct lpfc_vport *vport,
+			 struct lpfc_fdmi_attr_def *ad)
+{
+	struct lpfc_hba *phba = vport->phba;
+	struct lpfc_fdmi_attr_entry *ae;
+	uint32_t len, size;
+	char mibrevision[16];
+
+	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+	memset(ae, 0, 256);
+	sprintf(mibrevision, "ELXE2EM:%04d",
+		phba->sli4_hba.pc_sli4_params.mi_value);
+	strncpy(ae->un.AttrString, &mibrevision[0], sizeof(ae->un.AttrString));
+	len = strnlen(ae->un.AttrString, sizeof(ae->un.AttrString));
+	len += (len & 3) ? (4 - (len & 3)) : 4;
+	size = FOURBYTES + len;
+	ad->AttrLen = cpu_to_be16(size);
+	ad->AttrType = cpu_to_be16(RPRT_VENDOR_MI);
+	return size;
+}
+
 /* RHBA attribute jump table */
 int (*lpfc_fdmi_hba_action[])
 	(struct lpfc_vport *vport, struct lpfc_fdmi_attr_def *ad) = {
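The length arithmetic in lpfc_fdmi_vendor_attr_mi() rounds the attribute string up to the next 4-byte boundary, reserving a full extra word when the length is already aligned, since FDMI attribute values travel as 32-bit words. A standalone sketch of the same computation (plain C, illustrative only):

    #include <stdio.h>
    #include <string.h>

    /* Round a string length up for a word-aligned attribute value.
     * If len is already a multiple of 4, a full extra word is reserved,
     * matching the (len & 3) ? (4 - (len & 3)) : 4 expression above.
     */
    static unsigned int fdmi_attr_pad(unsigned int len)
    {
        return len + ((len & 3) ? (4 - (len & 3)) : 4);
    }

    int main(void)
    {
        printf("%u\n", fdmi_attr_pad(strlen("ELXE2EM:0001"))); /* 12 -> 16 */
        printf("%u\n", fdmi_attr_pad(13));                     /* 13 -> 16 */
        return 0;
    }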
@@ -3025,6 +3368,7 @@ int (*lpfc_fdmi_port_action[])
 	lpfc_fdmi_smart_attr_port_info,	/* bit20  RPRT_SMART_PORT_INFO */
 	lpfc_fdmi_smart_attr_qos,	/* bit21  RPRT_SMART_QOS */
 	lpfc_fdmi_smart_attr_security,	/* bit22  RPRT_SMART_SECURITY */
+	lpfc_fdmi_vendor_attr_mi,	/* bit23  RPRT_VENDOR_MI */
 };

 /**

@@ -3056,7 +3400,7 @@ lpfc_fdmi_cmd(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 	void (*cmpl)(struct lpfc_hba *, struct lpfc_iocbq *,
 		     struct lpfc_iocbq *);

-	if (!ndlp || !NLP_CHK_NODE_ACT(ndlp))
+	if (!ndlp)
 		return 0;

 	cmpl = lpfc_cmpl_ct_disc_fdmi; /* called from discovery */

@@ -3250,12 +3594,6 @@ port_out:
 	if (!lpfc_ct_cmd(vport, mp, bmp, ndlp, cmpl, rsp_size, 0))
 		return 0;

-	/*
-	 * Decrement ndlp reference count to release ndlp reference held
-	 * for the failed command's callback function.
-	 */
-	lpfc_nlp_put(ndlp);
-
 fdmi_cmd_free_bmpvirt:
 	lpfc_mbuf_free(phba, bmp->virt, bmp->phys);
 fdmi_cmd_free_bmp:

View file

@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
+ * Copyright (C) 2017-2020 Broadcom. All Rights Reserved. The term *
  * Broadcom refers to Broadcom Inc. and/or its subsidiaries.       *
  * Copyright (C) 2007-2015 Emulex.  All rights reserved.           *
  * EMULEX and SLI are trademarks of Emulex.                        *

@@ -895,8 +895,6 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
 		if (ndlp->nlp_type & NLP_NVME_INITIATOR)
 			len += scnprintf(buf + len,
 					size - len, "NVME_INITIATOR ");
-		len += scnprintf(buf+len, size-len, "usgmap:%x ",
-			ndlp->nlp_usg_map);
 		len += scnprintf(buf+len, size-len, "refcnt:%x",
 			kref_read(&ndlp->kref));
 		if (iocnt) {

@@ -957,13 +955,13 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
 	len += scnprintf(buf + len, size - len, "\tRport List:\n");
 	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
 		/* local short-hand pointer. */
-		spin_lock(&phba->hbalock);
+		spin_lock(&ndlp->lock);
 		rport = lpfc_ndlp_get_nrport(ndlp);
 		if (rport)
 			nrport = rport->remoteport;
 		else
 			nrport = NULL;
-		spin_unlock(&phba->hbalock);
+		spin_unlock(&ndlp->lock);
 		if (!nrport)
 			continue;

@@ -3341,7 +3339,6 @@ lpfc_idiag_pcicfg_read(struct file *file, char __user *buf, size_t nbytes,
 		break;
 	case LPFC_PCI_CFG_BROWSE: /* browse all */
 		goto pcicfg_browse;
-		break;
 	default:
 		/* illegal count */
 		len = 0;

@@ -4187,6 +4184,7 @@ lpfc_idiag_que_param_check(struct lpfc_queue *q, int index, int count)
 /**
  * lpfc_idiag_queacc_read_qe - read a single entry from the given queue index
  * @pbuffer: The pointer to buffer to copy the read data into.
+ * @len: Length of the buffer.
  * @pque: The pointer to the queue to be read.
  * @index: The index into the queue entry.
  *

@@ -4381,7 +4379,7 @@ lpfc_idiag_queacc_write(struct file *file, const char __user *buf,
 			}
 		}
 		goto error_out;
-		break;
 	case LPFC_IDIAG_CQ:
 		/* MBX complete queue */
 		if (phba->sli4_hba.mbx_cq &&

@@ -4433,7 +4431,7 @@ lpfc_idiag_queacc_write(struct file *file, const char __user *buf,
 			}
 		}
 		goto error_out;
-		break;
 	case LPFC_IDIAG_MQ:
 		/* MBX work queue */
 		if (phba->sli4_hba.mbx_wq &&

@@ -4447,7 +4445,7 @@ lpfc_idiag_queacc_write(struct file *file, const char __user *buf,
 			goto pass_check;
 		}
 		goto error_out;
-		break;
 	case LPFC_IDIAG_WQ:
 		/* ELS work queue */
 		if (phba->sli4_hba.els_wq &&

@@ -4487,9 +4485,8 @@ lpfc_idiag_queacc_write(struct file *file, const char __user *buf,
 				}
 			}
 		}
 		goto error_out;
-		break;
 	case LPFC_IDIAG_RQ:
 		/* HDR queue */
 		if (phba->sli4_hba.hdr_rq &&

@@ -4514,10 +4511,8 @@ lpfc_idiag_queacc_write(struct file *file, const char __user *buf,
 			goto pass_check;
 		}
 		goto error_out;
-		break;
 	default:
 		goto error_out;
-		break;
 	}

 pass_check:

@@ -4766,7 +4761,7 @@ error_out:
  * @phba: The pointer to hba structure.
  * @pbuffer: The pointer to the buffer to copy the data to.
  * @len: The length of bytes to copied.
- * @drbregid: The id to doorbell registers.
+ * @ctlregid: The id to doorbell registers.
  *
  * Description:
  * This routine reads a control register and copies its content to the

View file

@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2017-2018 Broadcom. All Rights Reserved. The term *
+ * Copyright (C) 2017-2020 Broadcom. All Rights Reserved. The term *
  * Broadcom refers to Broadcom Inc. and/or its subsidiaries.       *
  * Copyright (C) 2004-2013 Emulex.  All rights reserved.           *
  * EMULEX and SLI are trademarks of Emulex.                        *

@@ -41,6 +41,7 @@ enum lpfc_work_type {
 	LPFC_EVT_DEV_LOSS,
 	LPFC_EVT_FASTPATH_MGMT_EVT,
 	LPFC_EVT_RESET_HBA,
+	LPFC_EVT_RECOVER_PORT
 };

 /* structure used to queue event to the discovery tasklet */

@@ -80,6 +81,9 @@ struct lpfc_nodelist {
 	struct list_head nlp_listp;
 	struct lpfc_name nlp_portname;
 	struct lpfc_name nlp_nodename;
+
+	spinlock_t	lock;			/* Node management lock */
+
 	uint32_t	nlp_flag;		/* entry flags */
 	uint32_t	nlp_DID;		/* FC D_ID of entry */
 	uint32_t	nlp_last_elscmd;	/* Last ELS cmd sent */

@@ -115,12 +119,6 @@ struct lpfc_nodelist {
 	u8		nlp_nvme_info;	/* NVME NSLER Support */
 #define NLP_NVME_NSLER		0x1	/* NVME NSLER device */

-	uint16_t	nlp_usg_map;	/* ndlp management usage bitmap */
-#define NLP_USG_NODE_ACT_BIT	0x1	/* Indicate ndlp is actively used */
-#define NLP_USG_IACT_REQ_BIT	0x2	/* Request to inactivate ndlp */
-#define NLP_USG_FREE_REQ_BIT	0x4	/* Request to invoke ndlp memory free */
-#define NLP_USG_FREE_ACK_BIT	0x8	/* Indicate ndlp memory free invoked */
-
 	struct timer_list   nlp_delayfunc;	/* Used for delayed ELS cmds */
 	struct lpfc_hba *phba;
 	struct fc_rport *rport;		/* scsi_transport_fc port structure */

@@ -128,6 +126,7 @@ struct lpfc_nodelist {
 	struct lpfc_vport *vport;
 	struct lpfc_work_evt els_retry_evt;
 	struct lpfc_work_evt dev_loss_evt;
+	struct lpfc_work_evt recovery_evt;
 	struct kref     kref;
 	atomic_t cmd_pending;
 	uint32_t cmd_qdepth;

@@ -135,13 +134,17 @@ struct lpfc_nodelist {
 	unsigned long *active_rrqs_xri_bitmap;
 	struct lpfc_scsicmd_bkt *lat_data;	/* Latency data */
 	uint32_t fc4_prli_sent;
-	uint32_t upcall_flags;
+	uint32_t fc4_xpt_flags;
 #define NLP_WAIT_FOR_UNREG	0x1
+#define SCSI_XPT_REGD		0x2
+#define NVME_XPT_REGD		0x4
+
 	uint32_t nvme_fb_size;	/* NVME target's supported byte cnt */
 #define NVME_FB_BIT_SHIFT 9	/* PRLI Rsp first burst in 512B units. */
 	uint32_t nlp_defer_did;
 };
+
 struct lpfc_node_rrq {
 	struct list_head list;
 	uint16_t xritag;

@@ -170,7 +173,7 @@ struct lpfc_node_rrq {
 #define NLP_NVMET_RECOV    0x00001000	/* NVMET auditing node for recovery. */
 #define NLP_FCP_PRLI_RJT   0x00002000	/* Rport does not support FCP PRLI. */
 #define NLP_UNREG_INP      0x00008000	/* UNREG_RPI cmd is in progress */
-#define NLP_DEFER_RM       0x00010000	/* Remove this ndlp if no longer used */
+#define NLP_DROPPED        0x00010000	/* Init ref count has been dropped */
 #define NLP_DELAY_TMO      0x00020000	/* delay timeout is running for node */
 #define NLP_NPR_2B_DISC    0x00040000	/* node is included in num_disc_nodes */
 #define NLP_RCV_PLOGI      0x00080000	/* Rcv'ed PLOGI from remote system */

@@ -189,32 +192,6 @@ struct lpfc_node_rrq {
 #define NLP_FIRSTBURST     0x40000000	/* Target supports FirstBurst */
 #define NLP_RPI_REGISTERED 0x80000000	/* nlp_rpi is valid */

-/* ndlp usage management macros */
-#define NLP_CHK_NODE_ACT(ndlp)		(((ndlp)->nlp_usg_map \
-						& NLP_USG_NODE_ACT_BIT) \
-					&& \
-					!((ndlp)->nlp_usg_map \
-						& NLP_USG_FREE_ACK_BIT))
-#define NLP_SET_NODE_ACT(ndlp)		((ndlp)->nlp_usg_map \
-						|= NLP_USG_NODE_ACT_BIT)
-#define NLP_INT_NODE_ACT(ndlp)		((ndlp)->nlp_usg_map \
-						= NLP_USG_NODE_ACT_BIT)
-#define NLP_CLR_NODE_ACT(ndlp)		((ndlp)->nlp_usg_map \
-						&= ~NLP_USG_NODE_ACT_BIT)
-#define NLP_CHK_IACT_REQ(ndlp)		((ndlp)->nlp_usg_map \
-						& NLP_USG_IACT_REQ_BIT)
-#define NLP_SET_IACT_REQ(ndlp)		((ndlp)->nlp_usg_map \
-						|= NLP_USG_IACT_REQ_BIT)
-#define NLP_CHK_FREE_REQ(ndlp)		((ndlp)->nlp_usg_map \
-						& NLP_USG_FREE_REQ_BIT)
-#define NLP_SET_FREE_REQ(ndlp)		((ndlp)->nlp_usg_map \
-						|= NLP_USG_FREE_REQ_BIT)
-#define NLP_CHK_FREE_ACK(ndlp)		((ndlp)->nlp_usg_map \
-						& NLP_USG_FREE_ACK_BIT)
-#define NLP_SET_FREE_ACK(ndlp)		((ndlp)->nlp_usg_map \
-						|= NLP_USG_FREE_ACK_BIT)
-
 /* There are 4 different double linked lists nodelist entries can reside on.
  * The Port Login (PLOGI) list and Address Discovery (ADISC) list are used
  * when Link Up discovery or Registered State Change Notification (RSCN)
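With the NLP_USG_* bitmap and its macros gone, a nodelist entry's lifetime rests entirely on the embedded kref, and guarded flag updates move to the new per-node spinlock. A minimal sketch of that model, using hypothetical helpers patterned on lpfc_nlp_get()/lpfc_nlp_put() (not driver code):

    #include <linux/kref.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    /* Hypothetical node mirroring the lpfc_nodelist lifetime model. */
    struct node {
        struct kref	kref;	/* replaces the NLP_USG_* usage bitmap */
        spinlock_t	lock;	/* replaces host_lock for nlp_flag updates */
        unsigned int	flags;
    };

    /* Take a counted reference; fails if the node is already on its way out. */
    static struct node *node_get(struct node *n)
    {
        if (n && kref_get_unless_zero(&n->kref))
            return n;
        return NULL;
    }

    static void node_release(struct kref *kref)
    {
        kfree(container_of(kref, struct node, kref));
    }

    /* Drop a reference; the final put frees the node, no bitmap handshake. */
    static void node_put(struct node *n)
    {
        if (n)
            kref_put(&n->kref, node_release);
    }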

Diff not shown because of its large size

Diff not shown because of its large size

View file

@@ -1380,6 +1380,9 @@ struct lpfc_fdmi_reg_hba {
 	struct lpfc_fdmi_reg_port_list rpl;
 };

+/******** MI MIB ********/
+#define SLI_CT_MIB_Subtypes	0x11
+
 /*
  * Register HBA Attributes (RHAT)
  */

@@ -1465,7 +1468,7 @@ struct lpfc_fdmi_reg_portattr {
 #define LPFC_FDMI2_HBA_ATTR	0x0002efff

 /*
- * Port Attrubute Types
+ * Port Attribute Types
  */
 #define RPRT_SUPPORTED_FC4_TYPES	0x1	/* 32 byte binary array */
 #define RPRT_SUPPORTED_SPEED		0x2	/* 32-bit unsigned int */

@@ -1483,6 +1486,7 @@ struct lpfc_fdmi_reg_portattr {
 #define RPRT_PORT_STATE		0x101	/* 32-bit unsigned int */
 #define RPRT_DISC_PORT		0x102	/* 32-bit unsigned int */
 #define RPRT_PORT_ID		0x103	/* 32-bit unsigned int */
+#define RPRT_VENDOR_MI		0xf047	/* vendor ascii string */
 #define RPRT_SMART_SERVICE	0xf100	/* 4 to 256 byte ASCII string */
 #define RPRT_SMART_GUID		0xf101	/* 8 byte WWNN + 8 byte WWPN */
 #define RPRT_SMART_VERSION	0xf102	/* 4 to 256 byte ASCII string */

@@ -1515,6 +1519,7 @@ struct lpfc_fdmi_reg_portattr {
 #define LPFC_FDMI_SMART_ATTR_port_info	0x00100000	/* Vendor specific */
 #define LPFC_FDMI_SMART_ATTR_qos	0x00200000	/* Vendor specific */
 #define LPFC_FDMI_SMART_ATTR_security	0x00400000	/* Vendor specific */
+#define LPFC_FDMI_VENDOR_ATTR_mi	0x00800000	/* Vendor specific */

 /* Bit mask for FDMI-1 defined PORT attributes */
 #define LPFC_FDMI1_PORT_ATTR	0x0000003f
View file

@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
+ * Copyright (C) 2017-2020 Broadcom. All Rights Reserved. The term *
  * Broadcom refers to Broadcom Inc. and/or its subsidiaries.       *
  * Copyright (C) 2009-2016 Emulex.  All rights reserved.           *
  * EMULEX and SLI are trademarks of Emulex.                        *

@@ -3506,8 +3506,14 @@ struct lpfc_sli4_parameters {
 #define cfg_max_tow_xri_MASK		0x0000ffff
 #define cfg_max_tow_xri_WORD		word20

-	uint32_t word21;		/* RESERVED */
-	uint32_t word22;		/* RESERVED */
+	uint32_t word21;
+#define cfg_mib_bde_cnt_SHIFT		16
+#define cfg_mib_bde_cnt_MASK		0x000000ff
+#define cfg_mib_bde_cnt_WORD		word21
+#define cfg_mi_ver_SHIFT		0
+#define cfg_mi_ver_MASK			0x0000ffff
+#define cfg_mi_ver_WORD			word21
+	uint32_t mib_size;
 	uint32_t word23;		/* RESERVED */

 	uint32_t word24;

@@ -4380,9 +4386,11 @@ struct wqe_common {
 #define wqe_ebde_cnt_SHIFT    0
 #define wqe_ebde_cnt_MASK     0x0000000f
 #define wqe_ebde_cnt_WORD     word10
-#define wqe_nvme_SHIFT        4
-#define wqe_nvme_MASK         0x00000001
-#define wqe_nvme_WORD         word10
+#define wqe_xchg_SHIFT        4
+#define wqe_xchg_MASK         0x00000001
+#define wqe_xchg_WORD         word10
+#define LPFC_SCSI_XCHG	      0x0
+#define LPFC_NVME_XCHG	      0x1
 #define wqe_oas_SHIFT         6
 #define wqe_oas_MASK          0x00000001
 #define wqe_oas_WORD          word10

@@ -4880,6 +4888,8 @@ struct lpfc_grp_hdr {
 #define NVME_READ_CMD		0x0
 #define FCP_COMMAND_DATA_OUT	0x1
 #define NVME_WRITE_CMD		0x1
+#define COMMAND_DATA_IN		0x0
+#define COMMAND_DATA_OUT	0x1
 #define FCP_COMMAND_TRECEIVE	0x2
 #define FCP_COMMAND_TRSP	0x3
 #define FCP_COMMAND_TSEND	0x7
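The new SHIFT/MASK/WORD triples are consumed by the driver's bf_set()/bf_get() helpers, which paste the field name onto those suffixes to pack a value into, or pull it out of, the named 32-bit word. A freestanding sketch of how such accessors behave (illustrative; the real macros live in the lpfc headers):

    #include <stdio.h>

    /* Mirrors the wqe_xchg_SHIFT/_MASK/_WORD style: a field is described
     * by its low bit and a width mask within a named 32-bit word.
     */
    #define xchg_SHIFT	4
    #define xchg_MASK	0x00000001u

    /* Pack: clear the field in the word, then or-in the new value. */
    static void bf_set32(unsigned int *word, int shift, unsigned int mask,
                         unsigned int val)
    {
        *word = (*word & ~(mask << shift)) | ((val & mask) << shift);
    }

    /* Unpack: shift the field down and mask off the rest. */
    static unsigned int bf_get32(unsigned int word, int shift,
                                 unsigned int mask)
    {
        return (word >> shift) & mask;
    }

    int main(void)
    {
        unsigned int word10 = 0;

        bf_set32(&word10, xchg_SHIFT, xchg_MASK, 1);	/* LPFC_NVME_XCHG */
        printf("xchg=%u word10=0x%08x\n",
               bf_get32(word10, xchg_SHIFT, xchg_MASK), word10);
        return 0;	/* prints: xchg=1 word10=0x00000010 */
    }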

View file

@@ -2844,28 +2844,6 @@ lpfc_cleanup(struct lpfc_vport *vport)
 	lpfc_port_link_failure(vport);

 	list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp) {
-		if (!NLP_CHK_NODE_ACT(ndlp)) {
-			ndlp = lpfc_enable_node(vport, ndlp,
-						NLP_STE_UNUSED_NODE);
-			if (!ndlp)
-				continue;
-			spin_lock_irq(&phba->ndlp_lock);
-			NLP_SET_FREE_REQ(ndlp);
-			spin_unlock_irq(&phba->ndlp_lock);
-			/* Trigger the release of the ndlp memory */
-			lpfc_nlp_put(ndlp);
-			continue;
-		}
-		spin_lock_irq(&phba->ndlp_lock);
-		if (NLP_CHK_FREE_REQ(ndlp)) {
-			/* The ndlp should not be in memory free mode already */
-			spin_unlock_irq(&phba->ndlp_lock);
-			continue;
-		} else
-			/* Indicate request for freeing ndlp memory */
-			NLP_SET_FREE_REQ(ndlp);
-		spin_unlock_irq(&phba->ndlp_lock);
 		if (vport->port_type != LPFC_PHYSICAL_PORT &&
 		    ndlp->nlp_DID == Fabric_DID) {
 			/* Just free up ndlp with Fabric_DID for vports */

@@ -2873,20 +2851,23 @@ lpfc_cleanup(struct lpfc_vport *vport)
 			continue;
 		}

-		/* take care of nodes in unused state before the state
-		 * machine taking action.
-		 */
-		if (ndlp->nlp_state == NLP_STE_UNUSED_NODE) {
+		if (ndlp->nlp_DID == Fabric_Cntl_DID &&
+		    ndlp->nlp_state == NLP_STE_UNUSED_NODE) {
 			lpfc_nlp_put(ndlp);
 			continue;
 		}

-		if (ndlp->nlp_type & NLP_FABRIC)
+		/* Fabric Ports not in UNMAPPED state are cleaned up in the
+		 * DEVICE_RM event.
+		 */
+		if (ndlp->nlp_type & NLP_FABRIC &&
+		    ndlp->nlp_state == NLP_STE_UNMAPPED_NODE)
 			lpfc_disc_state_machine(vport, ndlp, NULL,
 					NLP_EVT_DEVICE_RECOVERY);
-		lpfc_disc_state_machine(vport, ndlp, NULL,
-					NLP_EVT_DEVICE_RM);
+
+		if (!(ndlp->fc4_xpt_flags & (NVME_XPT_REGD|SCSI_XPT_REGD)))
+			lpfc_disc_state_machine(vport, ndlp, NULL,
+						NLP_EVT_DEVICE_RM);
 	}

 	/* At this point, ALL ndlp's should be gone

@@ -2901,12 +2882,13 @@ lpfc_cleanup(struct lpfc_vport *vport)
 		list_for_each_entry_safe(ndlp, next_ndlp,
 					 &vport->fc_nodes, nlp_listp) {
 			lpfc_printf_vlog(ndlp->vport, KERN_ERR,
 					 LOG_TRACE_EVENT,
 					 "0282 did:x%x ndlp:x%px "
-					 "usgmap:x%x refcnt:%d\n",
+					 "refcnt:%d xflags x%x nflag x%x\n",
 					 ndlp->nlp_DID, (void *)ndlp,
-					 ndlp->nlp_usg_map,
-					 kref_read(&ndlp->kref));
+					 kref_read(&ndlp->kref),
+					 ndlp->fc4_xpt_flags,
+					 ndlp->nlp_flag);
 		}
 		break;
 	}
@@ -3080,7 +3062,6 @@ lpfc_sli4_node_prep(struct lpfc_hba *phba)
 	struct lpfc_nodelist *ndlp, *next_ndlp;
 	struct lpfc_vport **vports;
 	int i, rpi;
-	unsigned long flags;

 	if (phba->sli_rev != LPFC_SLI_REV4)
 		return;

@@ -3096,22 +3077,18 @@ lpfc_sli4_node_prep(struct lpfc_hba *phba)
 		list_for_each_entry_safe(ndlp, next_ndlp,
 					 &vports[i]->fc_nodes,
 					 nlp_listp) {
-			if (!NLP_CHK_NODE_ACT(ndlp))
-				continue;
 			rpi = lpfc_sli4_alloc_rpi(phba);
 			if (rpi == LPFC_RPI_ALLOC_ERROR) {
-				spin_lock_irqsave(&phba->ndlp_lock, flags);
-				NLP_CLR_NODE_ACT(ndlp);
-				spin_unlock_irqrestore(&phba->ndlp_lock, flags);
+				/* TODO print log? */
 				continue;
 			}
 			ndlp->nlp_rpi = rpi;
 			lpfc_printf_vlog(ndlp->vport, KERN_INFO,
 					 LOG_NODE | LOG_DISCOVERY,
 					 "0009 Assign RPI x%x to ndlp x%px "
-					 "DID:x%06x flg:x%x map:x%x\n",
+					 "DID:x%06x flg:x%x\n",
 					 ndlp->nlp_rpi, ndlp, ndlp->nlp_DID,
-					 ndlp->nlp_flag, ndlp->nlp_usg_map);
+					 ndlp->nlp_flag);
 		}
 	}
 	lpfc_destroy_vport_work_array(phba, vports);

@@ -3510,8 +3487,7 @@ lpfc_offline_prep(struct lpfc_hba *phba, int mbx_action)
 		list_for_each_entry_safe(ndlp, next_ndlp,
 					 &vports[i]->fc_nodes,
 					 nlp_listp) {
-			if ((!NLP_CHK_NODE_ACT(ndlp)) ||
-			    ndlp->nlp_state == NLP_STE_UNUSED_NODE) {
+			if (ndlp->nlp_state == NLP_STE_UNUSED_NODE) {
 				/* Driver must assume RPI is invalid for
 				 * any unused or inactive node.
 				 */

@@ -3519,33 +3495,42 @@ lpfc_offline_prep(struct lpfc_hba *phba, int mbx_action)
 				continue;
 			}

-			if (ndlp->nlp_type & NLP_FABRIC) {
-				lpfc_disc_state_machine(vports[i], ndlp,
-					NULL, NLP_EVT_DEVICE_RECOVERY);
-				lpfc_disc_state_machine(vports[i], ndlp,
-					NULL, NLP_EVT_DEVICE_RM);
-			}
-
-			spin_lock_irq(shost->host_lock);
+			spin_lock_irq(&ndlp->lock);
 			ndlp->nlp_flag &= ~NLP_NPR_ADISC;
-			spin_unlock_irq(shost->host_lock);
+			spin_unlock_irq(&ndlp->lock);
+
 			/*
 			 * Whenever an SLI4 port goes offline, free the
 			 * RPI. Get a new RPI when the adapter port
 			 * comes back online.
 			 */
 			if (phba->sli_rev == LPFC_SLI_REV4) {
-				lpfc_printf_vlog(ndlp->vport, KERN_INFO,
+				lpfc_printf_vlog(vports[i], KERN_INFO,
 					 LOG_NODE | LOG_DISCOVERY,
 					 "0011 Free RPI x%x on "
-					 "ndlp:x%px did x%x "
-					 "usgmap:x%x\n",
+					 "ndlp: %p did x%x\n",
 					 ndlp->nlp_rpi, ndlp,
-					 ndlp->nlp_DID,
-					 ndlp->nlp_usg_map);
+					 ndlp->nlp_DID);
 				lpfc_sli4_free_rpi(phba, ndlp->nlp_rpi);
 				ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR;
 			}
 			lpfc_unreg_rpi(vports[i], ndlp);
+
+			if (ndlp->nlp_type & NLP_FABRIC) {
+				lpfc_disc_state_machine(vports[i], ndlp,
+					NULL, NLP_EVT_DEVICE_RECOVERY);
+
+				/* Don't remove the node unless the node
+				 * has been unregistered with the
+				 * transport. If so, let dev_loss
+				 * take care of the node.
+				 */
+				if (!(ndlp->fc4_xpt_flags &
+				      (NVME_XPT_REGD | SCSI_XPT_REGD)))
+					lpfc_disc_state_machine
+						(vports[i], ndlp,
+						 NULL,
+						 NLP_EVT_DEVICE_RM);
+			}
 		}
 	}
@@ -4343,16 +4328,13 @@ lpfc_create_port(struct lpfc_hba *phba, int instance, struct device *dev)
 	/* Seed physical port template */
 	memcpy(template, &lpfc_template, sizeof(*template));

-	if (use_no_reset_hba) {
+	if (use_no_reset_hba)
 		/* template is for a no reset SCSI Host */
-		template->max_sectors = 0xffff;
 		template->eh_host_reset_handler = NULL;
-	}

 	/* Template for all vports this physical port creates */
 	memcpy(&phba->vport_template, &lpfc_template,
 	       sizeof(*template));
-	phba->vport_template.max_sectors = 0xffff;
 	phba->vport_template.shost_attrs = lpfc_vport_attrs;
 	phba->vport_template.eh_bus_reset_handler = NULL;
 	phba->vport_template.eh_host_reset_handler = NULL;

@@ -5607,11 +5589,6 @@ lpfc_sli4_perform_vport_cvl(struct lpfc_vport *vport)
 		ndlp->nlp_type |= NLP_FABRIC;
 		/* Put ndlp onto node list */
 		lpfc_enqueue_node(vport, ndlp);
-	} else if (!NLP_CHK_NODE_ACT(ndlp)) {
-		/* re-setup ndlp without removing from node list */
-		ndlp = lpfc_enable_node(vport, ndlp, NLP_STE_UNUSED_NODE);
-		if (!ndlp)
-			return 0;
 	}
 	if ((phba->pport->port_state < LPFC_FLOGI) &&
 	    (phba->pport->port_state != LPFC_VPORT_FAILED))

@@ -5667,7 +5644,6 @@ lpfc_sli4_async_fip_evt(struct lpfc_hba *phba,
 	int rc;
 	struct lpfc_vport *vport;
 	struct lpfc_nodelist *ndlp;
-	struct Scsi_Host *shost;
 	int active_vlink_present;
 	struct lpfc_vport **vports;
 	int i;

@@ -5848,10 +5824,9 @@ lpfc_sli4_async_fip_evt(struct lpfc_hba *phba,
 			 */
 			mod_timer(&ndlp->nlp_delayfunc,
 				  jiffies + msecs_to_jiffies(1000));
-			shost = lpfc_shost_from_vport(vport);
-			spin_lock_irq(shost->host_lock);
+			spin_lock_irq(&ndlp->lock);
 			ndlp->nlp_flag |= NLP_DELAY_TMO;
-			spin_unlock_irq(shost->host_lock);
+			spin_unlock_irq(&ndlp->lock);
 			ndlp->nlp_last_elscmd = ELS_CMD_FDISC;
 			vport->port_state = LPFC_FDISC;
 		} else {

@@ -5958,18 +5933,21 @@ lpfc_sli4_async_grp5_evt(struct lpfc_hba *phba,
 void lpfc_sli4_async_event_proc(struct lpfc_hba *phba)
 {
 	struct lpfc_cq_event *cq_event;
+	unsigned long iflags;

 	/* First, declare the async event has been handled */
-	spin_lock_irq(&phba->hbalock);
+	spin_lock_irqsave(&phba->hbalock, iflags);
 	phba->hba_flag &= ~ASYNC_EVENT;
-	spin_unlock_irq(&phba->hbalock);
+	spin_unlock_irqrestore(&phba->hbalock, iflags);

 	/* Now, handle all the async events */
+	spin_lock_irqsave(&phba->sli4_hba.asynce_list_lock, iflags);
 	while (!list_empty(&phba->sli4_hba.sp_asynce_work_queue)) {
-		/* Get the first event from the head of the event queue */
-		spin_lock_irq(&phba->hbalock);
 		list_remove_head(&phba->sli4_hba.sp_asynce_work_queue,
 				 cq_event, struct lpfc_cq_event, list);
-		spin_unlock_irq(&phba->hbalock);
+		spin_unlock_irqrestore(&phba->sli4_hba.asynce_list_lock,
+				       iflags);
+
 		/* Process the asynchronous event */
 		switch (bf_get(lpfc_trailer_code, &cq_event->cqe.mcqe_cmpl)) {
 		case LPFC_TRAILER_CODE_LINK:

@@ -6001,9 +5979,12 @@ void lpfc_sli4_async_event_proc(struct lpfc_hba *phba)
 					&cq_event->cqe.mcqe_cmpl));
 			break;
 		}

 		/* Free the completion event processed to the free pool */
 		lpfc_sli4_cq_event_release(phba, cq_event);
+		spin_lock_irqsave(&phba->sli4_hba.asynce_list_lock, iflags);
 	}
+	spin_unlock_irqrestore(&phba->sli4_hba.asynce_list_lock, iflags);
 }

 /**
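The rewritten drain loop holds the new asynce_list_lock only long enough to detach one event, processes it unlocked, and retakes the lock before the next emptiness check. The same shape in isolation (generic names, not the driver's types):

    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct event {
        struct list_head list;
        /* payload... */
    };

    static void handle_event(struct event *ev)
    {
        /* consume ev; may take other locks safely here */
    }

    static void drain_events(struct list_head *queue, spinlock_t *lock)
    {
        struct event *ev;
        unsigned long flags;

        spin_lock_irqsave(lock, flags);
        while (!list_empty(queue)) {
            ev = list_first_entry(queue, struct event, list);
            list_del_init(&ev->list);
            /* drop the lock: the handler must not run under it */
            spin_unlock_irqrestore(lock, flags);

            handle_event(ev);

            /* retake the lock before the next emptiness check */
            spin_lock_irqsave(lock, flags);
        }
        spin_unlock_irqrestore(lock, flags);
    }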
@@ -6295,9 +6276,6 @@ lpfc_setup_driver_resource_phase1(struct lpfc_hba *phba)
 	atomic_set(&phba->dbg_log_dmping, 0);
 	spin_lock_init(&phba->hbalock);

-	/* Initialize ndlp management spinlock */
-	spin_lock_init(&phba->ndlp_lock);
-
 	/* Initialize port_list spinlock */
 	spin_lock_init(&phba->port_list_lock);
 	INIT_LIST_HEAD(&phba->port_list);

@@ -6630,6 +6608,8 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
 	/* This abort list used by worker thread */
 	spin_lock_init(&phba->sli4_hba.sgl_list_lock);
 	spin_lock_init(&phba->sli4_hba.nvmet_io_wait_lock);
+	spin_lock_init(&phba->sli4_hba.asynce_list_lock);
+	spin_lock_init(&phba->sli4_hba.els_xri_abrt_list_lock);

 	/*
 	 * Initialize driver internal slow-path work queues

@@ -6641,8 +6621,6 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
 	INIT_LIST_HEAD(&phba->sli4_hba.sp_queue_event);
 	/* Asynchronous event CQ Event work queue list */
 	INIT_LIST_HEAD(&phba->sli4_hba.sp_asynce_work_queue);
-	/* Fast-path XRI aborted CQ Event work queue list */
-	INIT_LIST_HEAD(&phba->sli4_hba.sp_fcp_xri_aborted_work_queue);
 	/* Slow-path XRI aborted CQ Event work queue list */
 	INIT_LIST_HEAD(&phba->sli4_hba.sp_els_xri_aborted_work_queue);
 	/* Receive queue CQ Event work queue list */

@@ -7196,7 +7174,6 @@ lpfc_init_api_table_setup(struct lpfc_hba *phba, uint8_t dev_grp)
 			"1431 Invalid HBA PCI-device group: 0x%x\n",
 			dev_grp);
 		return -ENODEV;
-		break;
 	}
 	return 0;
 }

@@ -10174,26 +10151,28 @@ lpfc_sli4_cq_event_release(struct lpfc_hba *phba,
 static void
 lpfc_sli4_cq_event_release_all(struct lpfc_hba *phba)
 {
-	LIST_HEAD(cqelist);
-	struct lpfc_cq_event *cqe;
+	LIST_HEAD(cq_event_list);
+	struct lpfc_cq_event *cq_event;
 	unsigned long iflags;

 	/* Retrieve all the pending WCQEs from pending WCQE lists */
-	spin_lock_irqsave(&phba->hbalock, iflags);
-	/* Pending FCP XRI abort events */
-	list_splice_init(&phba->sli4_hba.sp_fcp_xri_aborted_work_queue,
-			 &cqelist);
-	/* Pending ELS XRI abort events */
-	list_splice_init(&phba->sli4_hba.sp_els_xri_aborted_work_queue,
-			 &cqelist);
-	/* Pending async events */
-	list_splice_init(&phba->sli4_hba.sp_asynce_work_queue,
-			 &cqelist);
-	spin_unlock_irqrestore(&phba->hbalock, iflags);

-	while (!list_empty(&cqelist)) {
-		list_remove_head(&cqelist, cqe, struct lpfc_cq_event, list);
-		lpfc_sli4_cq_event_release(phba, cqe);
+	/* Pending ELS XRI abort events */
+	spin_lock_irqsave(&phba->sli4_hba.els_xri_abrt_list_lock, iflags);
+	list_splice_init(&phba->sli4_hba.sp_els_xri_aborted_work_queue,
+			 &cq_event_list);
+	spin_unlock_irqrestore(&phba->sli4_hba.els_xri_abrt_list_lock, iflags);

+	/* Pending async events */
+	spin_lock_irqsave(&phba->sli4_hba.asynce_list_lock, iflags);
+	list_splice_init(&phba->sli4_hba.sp_asynce_work_queue,
+			 &cq_event_list);
+	spin_unlock_irqrestore(&phba->sli4_hba.asynce_list_lock, iflags);

+	while (!list_empty(&cq_event_list)) {
+		list_remove_head(&cq_event_list, cq_event,
+				 struct lpfc_cq_event, list);
+		lpfc_sli4_cq_event_release(phba, cq_event);
 	}
 }

@@ -12310,6 +12289,21 @@ fcponly:
 	else
 		phba->nsler = 0;

+	/* Save PB info for use during HBA setup */
+	sli4_params->mi_ver = bf_get(cfg_mi_ver, mbx_sli4_parameters);
+	sli4_params->mib_bde_cnt = bf_get(cfg_mib_bde_cnt, mbx_sli4_parameters);
+	sli4_params->mib_size = mbx_sli4_parameters->mib_size;
+	sli4_params->mi_value = LPFC_DFLT_MIB_VAL;
+
+	/* Next we check for Vendor MIB support */
+	if (sli4_params->mi_ver && phba->cfg_enable_mi)
+		phba->cfg_fdmi_on = LPFC_FDMI_SUPPORT;
+
+	lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
+			"6461 MIB attr %d enable %d FDMI %d buf %d:%d\n",
+			sli4_params->mi_ver, phba->cfg_enable_mi,
+			sli4_params->mi_value, sli4_params->mib_bde_cnt,
+			sli4_params->mib_size);
+
 	return 0;
 }
@@ -12515,10 +12509,11 @@ lpfc_pci_remove_one_s3(struct pci_dev *pdev)
 	}
 	lpfc_destroy_vport_work_array(phba, vports);

-	/* Remove FC host and then SCSI host with the physical port */
+	/* Remove FC host with the physical port */
 	fc_remove_host(shost);
 	scsi_remove_host(shost);

+	/* Clean up all nodes, mailboxes and IOs. */
 	lpfc_cleanup(vport);

 	/*

@@ -12581,8 +12576,7 @@
 /**
  * lpfc_pci_suspend_one_s3 - PCI func to suspend SLI-3 device for power mgmnt
- * @pdev: pointer to PCI device
- * @msg: power management message
+ * @dev_d: pointer to device
  *
  * This routine is to be called from the kernel's PCI subsystem to support
  * system Power Management (PM) to device with SLI-3 interface spec. When

@@ -12600,10 +12594,10 @@ lpfc_pci_suspend_one_s3(struct pci_dev *pdev, pm_message_t msg)
  *	0 - driver suspended the device
  *	Error otherwise
 **/
-static int
-lpfc_pci_suspend_one_s3(struct pci_dev *pdev, pm_message_t msg)
+static int __maybe_unused
+lpfc_pci_suspend_one_s3(struct device *dev_d)
 {
-	struct Scsi_Host *shost = pci_get_drvdata(pdev);
+	struct Scsi_Host *shost = dev_get_drvdata(dev_d);
 	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;

 	lpfc_printf_log(phba, KERN_INFO, LOG_INIT,

@@ -12617,16 +12611,12 @@ lpfc_pci_suspend_one_s3(struct pci_dev *pdev, pm_message_t msg)
 	/* Disable interrupt from device */
 	lpfc_sli_disable_intr(phba);

-	/* Save device state to PCI config space */
-	pci_save_state(pdev);
-	pci_set_power_state(pdev, PCI_D3hot);
-
 	return 0;
 }

 /**
  * lpfc_pci_resume_one_s3 - PCI func to resume SLI-3 device for power mgmnt
- * @pdev: pointer to PCI device
+ * @dev_d: pointer to device
  *
  * This routine is to be called from the kernel's PCI subsystem to support
  * system Power Management (PM) to device with SLI-3 interface spec. When PM

@@ -12643,10 +12633,10 @@ lpfc_pci_suspend_one_s3(struct pci_dev *pdev, pm_message_t msg)
  *	0 - driver suspended the device
  *	Error otherwise
 **/
-static int
-lpfc_pci_resume_one_s3(struct pci_dev *pdev)
+static int __maybe_unused
+lpfc_pci_resume_one_s3(struct device *dev_d)
 {
-	struct Scsi_Host *shost = pci_get_drvdata(pdev);
+	struct Scsi_Host *shost = dev_get_drvdata(dev_d);
 	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
 	uint32_t intr_mode;
 	int error;

@@ -12654,19 +12644,6 @@ lpfc_pci_resume_one_s3(struct pci_dev *pdev)
 	lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
 			"0452 PCI device Power Management resume.\n");

-	/* Restore device state from PCI config space */
-	pci_set_power_state(pdev, PCI_D0);
-	pci_restore_state(pdev);
-
-	/*
-	 * As the new kernel behavior of pci_restore_state() API call clears
-	 * device saved_state flag, need to save the restored state again.
-	 */
-	pci_save_state(pdev);
-	if (pdev->is_busmaster)
-		pci_set_master(pdev);
-
 	/* Startup the kernel thread for this host adapter. */
 	phba->worker_thread = kthread_run(lpfc_do_work, phba,
 					  "lpfc_worker_%d", phba->brd_no);

@@ -13358,7 +13335,6 @@ lpfc_pci_remove_one_s4(struct pci_dev *pdev)
 	vport->load_flag |= FC_UNLOADING;
 	spin_unlock_irq(&phba->hbalock);

-	/* Free the HBA sysfs attributes */
 	lpfc_free_sysfs_attr(vport);

 	/* Release all the vports against this physical port */

@@ -13371,7 +13347,7 @@
 	}
 	lpfc_destroy_vport_work_array(phba, vports);

-	/* Remove FC host and then SCSI host with the physical port */
+	/* Remove FC host with the physical port */
 	fc_remove_host(shost);
 	scsi_remove_host(shost);

@@ -13423,8 +13399,7 @@
 /**
  * lpfc_pci_suspend_one_s4 - PCI func to suspend SLI-4 device for power mgmnt
- * @pdev: pointer to PCI device
- * @msg: power management message
+ * @dev_d: pointer to device
  *
  * This routine is called from the kernel's PCI subsystem to support system
  * Power Management (PM) to device with SLI-4 interface spec. When PM invokes

@@ -13442,10 +13417,10 @@
  *	0 - driver suspended the device
  *	Error otherwise
 **/
-static int
-lpfc_pci_suspend_one_s4(struct pci_dev *pdev, pm_message_t msg)
+static int __maybe_unused
+lpfc_pci_suspend_one_s4(struct device *dev_d)
 {
-	struct Scsi_Host *shost = pci_get_drvdata(pdev);
+	struct Scsi_Host *shost = dev_get_drvdata(dev_d);
 	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;

 	lpfc_printf_log(phba, KERN_INFO, LOG_INIT,

@@ -13460,16 +13435,12 @@ lpfc_pci_suspend_one_s4(struct pci_dev *pdev, pm_message_t msg)
 	lpfc_sli4_disable_intr(phba);
 	lpfc_sli4_queue_destroy(phba);

-	/* Save device state to PCI config space */
-	pci_save_state(pdev);
-	pci_set_power_state(pdev, PCI_D3hot);
-
 	return 0;
 }

 /**
  * lpfc_pci_resume_one_s4 - PCI func to resume SLI-4 device for power mgmnt
- * @pdev: pointer to PCI device
+ * @dev_d: pointer to device
  *
  * This routine is called from the kernel's PCI subsystem to support system
  * Power Management (PM) to device with SLI-4 interface spec. When PM invokes

@@ -13486,10 +13457,10 @@ lpfc_pci_suspend_one_s4(struct pci_dev *pdev, pm_message_t msg)
  *	0 - driver suspended the device
  *	Error otherwise
 **/
-static int
-lpfc_pci_resume_one_s4(struct pci_dev *pdev)
+static int __maybe_unused
+lpfc_pci_resume_one_s4(struct device *dev_d)
 {
-	struct Scsi_Host *shost = pci_get_drvdata(pdev);
+	struct Scsi_Host *shost = dev_get_drvdata(dev_d);
 	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
 	uint32_t intr_mode;
 	int error;

@@ -13497,19 +13468,6 @@ lpfc_pci_resume_one_s4(struct pci_dev *pdev)
 	lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
 			"0292 PCI device Power Management resume.\n");

-	/* Restore device state from PCI config space */
-	pci_set_power_state(pdev, PCI_D0);
-	pci_restore_state(pdev);
-
-	/*
-	 * As the new kernel behavior of pci_restore_state() API call clears
-	 * device saved_state flag, need to save the restored state again.
-	 */
-	pci_save_state(pdev);
-	if (pdev->is_busmaster)
-		pci_set_master(pdev);
-
 	/* Startup the kernel thread for this host adapter. */
 	phba->worker_thread = kthread_run(lpfc_do_work, phba,
 					  "lpfc_worker_%d", phba->brd_no);

@@ -13825,8 +13783,7 @@ lpfc_pci_remove_one(struct pci_dev *pdev)
 /**
  * lpfc_pci_suspend_one - lpfc PCI func to suspend dev for power management
- * @pdev: pointer to PCI device
- * @msg: power management message
+ * @dev: pointer to device
  *
  * This routine is to be registered to the kernel's PCI subsystem to support
  * system Power Management (PM). When PM invokes this method, it dispatches

@@ -13837,19 +13794,19 @@ lpfc_pci_remove_one(struct pci_dev *pdev)
  *	0 - driver suspended the device
  *	Error otherwise
 **/
-static int
-lpfc_pci_suspend_one(struct pci_dev *pdev, pm_message_t msg)
+static int __maybe_unused
+lpfc_pci_suspend_one(struct device *dev)
 {
-	struct Scsi_Host *shost = pci_get_drvdata(pdev);
+	struct Scsi_Host *shost = dev_get_drvdata(dev);
 	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
 	int rc = -ENODEV;

 	switch (phba->pci_dev_grp) {
 	case LPFC_PCI_DEV_LP:
-		rc = lpfc_pci_suspend_one_s3(pdev, msg);
+		rc = lpfc_pci_suspend_one_s3(dev);
 		break;
 	case LPFC_PCI_DEV_OC:
-		rc = lpfc_pci_suspend_one_s4(pdev, msg);
+		rc = lpfc_pci_suspend_one_s4(dev);
 		break;
 	default:
 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,

@@ -13862,7 +13819,7 @@ lpfc_pci_suspend_one(struct pci_dev *pdev, pm_message_t msg)
 /**
  * lpfc_pci_resume_one - lpfc PCI func to resume dev for power management
- * @pdev: pointer to PCI device
+ * @dev: pointer to device
  *
  * This routine is to be registered to the kernel's PCI subsystem to support
  * system Power Management (PM). When PM invokes this method, it dispatches

@@ -13873,19 +13830,19 @@ lpfc_pci_suspend_one(struct pci_dev *pdev, pm_message_t msg)
  *	0 - driver suspended the device
  *	Error otherwise
 **/
-static int
-lpfc_pci_resume_one(struct pci_dev *pdev)
+static int __maybe_unused
+lpfc_pci_resume_one(struct device *dev)
 {
-	struct Scsi_Host *shost = pci_get_drvdata(pdev);
+	struct Scsi_Host *shost = dev_get_drvdata(dev);
 	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
 	int rc = -ENODEV;

 	switch (phba->pci_dev_grp) {
 	case LPFC_PCI_DEV_LP:
-		rc = lpfc_pci_resume_one_s3(pdev);
+		rc = lpfc_pci_resume_one_s3(dev);
 		break;
 	case LPFC_PCI_DEV_OC:
-		rc = lpfc_pci_resume_one_s4(pdev);
+		rc = lpfc_pci_resume_one_s4(dev);
 		break;
 	default:
 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,

@@ -14065,14 +14022,17 @@ static const struct pci_error_handlers lpfc_err_handler = {
 	.resume = lpfc_io_resume,
 };

+static SIMPLE_DEV_PM_OPS(lpfc_pci_pm_ops_one,
+			 lpfc_pci_suspend_one,
+			 lpfc_pci_resume_one);
+
 static struct pci_driver lpfc_driver = {
 	.name		= LPFC_DRIVER_NAME,
 	.id_table	= lpfc_id_table,
 	.probe		= lpfc_pci_probe_one,
 	.remove		= lpfc_pci_remove_one,
 	.shutdown	= lpfc_pci_remove_one,
-	.suspend	= lpfc_pci_suspend_one,
-	.resume		= lpfc_pci_resume_one,
+	.driver.pm	= &lpfc_pci_pm_ops_one,
 	.err_handler    = &lpfc_err_handler,
 };
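This conversion replaces the legacy PCI .suspend/.resume hooks with generic dev_pm_ops; under generic PM the PCI core saves and restores config space and sets the device power state itself, which is why the pci_save_state()/pci_set_power_state() calls disappear from the hunks above. A minimal sketch of the same wiring for a hypothetical driver (all demo_* names invented):

    #include <linux/pci.h>
    #include <linux/pm.h>

    struct demo_priv {
        int resumed;
    };

    static void demo_quiesce(struct demo_priv *priv)
    {
        priv->resumed = 0;	/* stand-in for stopping DMA/interrupts */
    }

    static int demo_restart(struct demo_priv *priv)
    {
        priv->resumed = 1;	/* stand-in for re-enabling the hardware */
        return 0;
    }

    /* Generic PM callbacks take a struct device; no PCI power-state
     * bookkeeping is needed here because the PCI core performs it.
     */
    static int __maybe_unused demo_suspend(struct device *dev)
    {
        struct demo_priv *priv = dev_get_drvdata(dev);

        demo_quiesce(priv);
        return 0;
    }

    static int __maybe_unused demo_resume(struct device *dev)
    {
        struct demo_priv *priv = dev_get_drvdata(dev);

        return demo_restart(priv);
    }

    static SIMPLE_DEV_PM_OPS(demo_pm_ops, demo_suspend, demo_resume);

    static struct pci_driver demo_driver = {
        .name		= "demo",
        .driver.pm	= &demo_pm_ops,	/* replaces legacy .suspend/.resume */
    };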
@@ -14124,7 +14084,7 @@ lpfc_init(void)
 		fc_release_transport(lpfc_transport_template);
 		goto unregister;
 	}
-	lpfc_nvme_cmd_template();
+	lpfc_wqe_cmd_template();
 	lpfc_nvmet_cmd_template();

 	/* Initialize in case vector mapping is needed */

View file

@@ -46,6 +46,7 @@
 #define LPFC_MEM_POOL_SIZE	64	/* max elem in non-DMA safety pool */
 #define LPFC_DEVICE_DATA_POOL_SIZE 64	/* max elements in device data pool */
 #define LPFC_RRQ_POOL_SIZE	256	/* max elements in non-DMA pool */
+#define LPFC_MBX_POOL_SIZE	256	/* max elements in MBX non-DMA pool */

 int
 lpfc_mem_alloc_active_rrq_pool_s4(struct lpfc_hba *phba) {

@@ -111,8 +112,8 @@ lpfc_mem_alloc(struct lpfc_hba *phba, int align)
 		pool->current_count++;
 	}

-	phba->mbox_mem_pool = mempool_create_kmalloc_pool(LPFC_MEM_POOL_SIZE,
+	phba->mbox_mem_pool = mempool_create_kmalloc_pool(LPFC_MBX_POOL_SIZE,
 				sizeof(LPFC_MBOXQ_t));
 	if (!phba->mbox_mem_pool)
 		goto fail_free_mbuf_pool;

@@ -588,8 +589,6 @@ lpfc_sli4_rb_free(struct lpfc_hba *phba, struct hbq_dmabuf *dmab)
 * Description: Allocates a DMA-mapped receive buffer from the lpfc_hrb_pool PCI
 * pool along a non-DMA-mapped container for it.
 *
- * Notes: Not interrupt-safe. Must be called with no locks held.
- *
 * Returns:
 *   pointer to HBQ on success
 *   NULL on failure

@@ -599,7 +598,7 @@ lpfc_sli4_nvmet_alloc(struct lpfc_hba *phba)
 {
 	struct rqb_dmabuf *dma_buf;

-	dma_buf = kzalloc(sizeof(struct rqb_dmabuf), GFP_KERNEL);
+	dma_buf = kzalloc(sizeof(*dma_buf), GFP_KERNEL);
 	if (!dma_buf)
 		return NULL;

@@ -722,7 +721,6 @@ lpfc_rq_buf_free(struct lpfc_hba *phba, struct lpfc_dmabuf *mp)
 	drqe.address_hi = putPaddrHigh(rqb_entry->dbuf.phys);
 	rc = lpfc_sli4_rq_put(rqb_entry->hrq, rqb_entry->drq, &hrqe, &drqe);
 	if (rc < 0) {
-		(rqbp->rqb_free_buffer)(phba, rqb_entry);
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 				"6409 Cannot post to HRQ %d: %x %x %x "
 				"DRQ %x %x\n",
 				rqb_entry->hrq->queue_id,
 				rqb_entry->hrq->host_index,
 				rqb_entry->hrq->hba_index,
 				rqb_entry->hrq->entry_count,
 				rqb_entry->drq->host_index,
 				rqb_entry->drq->hba_index);
+		(rqbp->rqb_free_buffer)(phba, rqb_entry);
 	} else {
 		list_add_tail(&rqb_entry->hbuf.list, &rqbp->rqb_buffer_list);
 		rqbp->buffer_count++;
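The final hunk moves the rqb_free_buffer() call below the error message so the log is built from the buffer's fields before the buffer is released; previously the message read freed memory. The general shape (hypothetical types):

    #include <stdio.h>
    #include <stdlib.h>

    struct buf {
        int id;
    };

    static void fail_path(struct buf *b)
    {
        /* Log from the object's fields first... */
        fprintf(stderr, "cannot post buffer %d\n", b->id);
        /* ...and only then release it. The reverse order, as in the code
         * being fixed, reads freed memory while building the log message.
         */
        free(b);
    }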

View file

@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
+ * Copyright (C) 2017-2020 Broadcom. All Rights Reserved. The term *
  * Broadcom refers to Broadcom Inc. and/or its subsidiaries.       *
  * Copyright (C) 2004-2016 Emulex.  All rights reserved.           *
  * EMULEX and SLI are trademarks of Emulex.                        *

@@ -247,7 +247,7 @@ lpfc_els_abort(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp)
 	list_for_each_entry_safe(iocb, next_iocb, &abort_list, dlist) {
 		spin_lock_irq(&phba->hbalock);
 		list_del_init(&iocb->dlist);
-		lpfc_sli_issue_abort_iotag(phba, pring, iocb);
+		lpfc_sli_issue_abort_iotag(phba, pring, iocb, NULL);
 		spin_unlock_irq(&phba->hbalock);
 	}

@@ -357,7 +357,6 @@ lpfc_defer_acc_rsp(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 	 * Complete the unreg rpi mbx request, and update flags.
 	 * This will also restart any deferred events.
 	 */
-	lpfc_nlp_get(ndlp);
 	lpfc_sli4_unreg_rpi_cmpl_clr(phba, pmb);

 	if (!piocb) {

@@ -365,7 +364,7 @@ lpfc_defer_acc_rsp(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 			 "4578 PLOGI ACC fail\n");
 		if (mbox)
 			mempool_free(mbox, phba->mbox_mem_pool);
-		goto out;
+		return;
 	}

 	rc = lpfc_els_rsp_acc(vport, ELS_CMD_PLOGI, piocb, ndlp, mbox);

@@ -376,15 +375,12 @@ lpfc_defer_acc_rsp(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 		mempool_free(mbox, phba->mbox_mem_pool);
 	}
 	kfree(piocb);
-out:
-	lpfc_nlp_put(ndlp);
 }

 static int
 lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 	       struct lpfc_iocbq *cmdiocb)
 {
-	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
 	struct lpfc_hba    *phba = vport->phba;
 	struct lpfc_dmabuf *pcmd;
 	uint64_t nlp_portwwn = 0;

@@ -588,7 +584,10 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		rpi = phba->sli4_hba.rpi_ids[ndlp->nlp_rpi];
 		lpfc_unreg_login(phba, vport->vpi, rpi, link_mbox);
 		link_mbox->vport = vport;
-		link_mbox->ctx_ndlp = ndlp;
+		link_mbox->ctx_ndlp = lpfc_nlp_get(ndlp);
+		if (!link_mbox->ctx_ndlp)
+			goto out;
+
 		link_mbox->mbox_cmpl = lpfc_defer_acc_rsp;

 		if (((ndlp->nlp_DID & Fabric_DID_MASK) != Fabric_DID_MASK) &&
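Stashing a node pointer in the mailbox context now takes its own counted reference first and bails out if the node is already going away, so the completion side can drop exactly one reference. A sketch of the pattern, reusing the node_get()/node_put() helpers sketched earlier (all names hypothetical):

    struct mbox_cmd {
        struct node *ctx;	/* owned reference, dropped on completion */
    };

    static int demo_submit(struct mbox_cmd *mbox)
    {
        return 0;		/* stand-in for queueing the command */
    }

    static int issue_cmd(struct mbox_cmd *mbox, struct node *ndlp)
    {
        mbox->ctx = node_get(ndlp);	/* take the ref before publishing */
        if (!mbox->ctx)
            return -1;			/* node already being torn down */
        return demo_submit(mbox);
    }

    static void cmd_done(struct mbox_cmd *mbox)
    {
        node_put(mbox->ctx);		/* balances the get in issue_cmd() */
        mbox->ctx = NULL;
    }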
@@ -617,9 +616,9 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 	 * command issued in lpfc_cmpl_els_acc().
 	 */
 	login_mbox->vport = vport;
-	spin_lock_irq(shost->host_lock);
+	spin_lock_irq(&ndlp->lock);
 	ndlp->nlp_flag |= (NLP_ACC_REGLOGIN | NLP_RCV_PLOGI);
-	spin_unlock_irq(shost->host_lock);
+	spin_unlock_irq(&ndlp->lock);

 	/*
 	 * If there is an outstanding PLOGI issued, abort it before

@@ -648,9 +647,9 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		 * this ELS request. The only way to do this is
 		 * to register, then unregister the RPI.
 		 */
-		spin_lock_irq(shost->host_lock);
+		spin_lock_irq(&ndlp->lock);
 		ndlp->nlp_flag |= NLP_RM_DFLT_RPI;
-		spin_unlock_irq(shost->host_lock);
+		spin_unlock_irq(&ndlp->lock);
 		stat.un.b.lsRjtRsnCode = LSRJT_INVALID_CMD;
 		stat.un.b.lsRjtRsnCodeExp = LSEXP_NOTHING_MORE;
 		rc = lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb,

@@ -739,7 +738,6 @@ static int
 lpfc_rcv_padisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		struct lpfc_iocbq *cmdiocb)
 {
-	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
 	struct lpfc_iocbq *elsiocb;
 	struct lpfc_dmabuf *pcmd;
 	struct serv_parm *sp;

@@ -821,9 +819,9 @@ out:
 	/* 1 sec timeout */
 	mod_timer(&ndlp->nlp_delayfunc, jiffies + msecs_to_jiffies(1000));

-	spin_lock_irq(shost->host_lock);
+	spin_lock_irq(&ndlp->lock);
 	ndlp->nlp_flag |= NLP_DELAY_TMO;
-	spin_unlock_irq(shost->host_lock);
+	spin_unlock_irq(&ndlp->lock);
 	ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
 	ndlp->nlp_prev_state = ndlp->nlp_state;
 	lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);

@@ -843,9 +841,9 @@ lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 	/* Only call LOGO ACC for first LOGO, this avoids sending unnecessary
 	 * PLOGIs during LOGO storms from a device.
 	 */
-	spin_lock_irq(shost->host_lock);
+	spin_lock_irq(&ndlp->lock);
 	ndlp->nlp_flag |= NLP_LOGO_ACC;
-	spin_unlock_irq(shost->host_lock);
+	spin_unlock_irq(&ndlp->lock);
 	if (els_cmd == ELS_CMD_PRLO)
 		lpfc_els_rsp_acc(vport, ELS_CMD_PRLO, cmdiocb, ndlp, NULL);
 	else

@@ -890,9 +888,9 @@ lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 			 */
 			mod_timer(&ndlp->nlp_delayfunc,
 				  jiffies + msecs_to_jiffies(1000));
-			spin_lock_irq(shost->host_lock);
+			spin_lock_irq(&ndlp->lock);
 			ndlp->nlp_flag |= NLP_DELAY_TMO;
-			spin_unlock_irq(shost->host_lock);
+			spin_unlock_irq(&ndlp->lock);
 			ndlp->nlp_last_elscmd = ELS_CMD_FDISC;
 			vport->port_state = LPFC_FDISC;
 		} else {

@@ -908,9 +906,9 @@ lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		/* Only try to re-login if this is NOT a Fabric Node */
 		mod_timer(&ndlp->nlp_delayfunc,
 			  jiffies + msecs_to_jiffies(1000 * 1));
-		spin_lock_irq(shost->host_lock);
+		spin_lock_irq(&ndlp->lock);
 		ndlp->nlp_flag |= NLP_DELAY_TMO;
-		spin_unlock_irq(shost->host_lock);
+		spin_unlock_irq(&ndlp->lock);
 		ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
 	}

@@ -918,9 +916,9 @@ out:
 	ndlp->nlp_prev_state = ndlp->nlp_state;
 	lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);

-	spin_lock_irq(shost->host_lock);
+	spin_lock_irq(&ndlp->lock);
 	ndlp->nlp_flag &= ~NLP_NPR_ADISC;
-	spin_unlock_irq(shost->host_lock);
+	spin_unlock_irq(&ndlp->lock);
 	/* The driver has to wait until the ACC completes before it continues
 	 * processing the LOGO. The action will resume in
 	 * lpfc_cmpl_els_logo_acc routine. Since part of processing includes an

@@ -1036,12 +1034,10 @@ lpfc_rcv_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 static uint32_t
 lpfc_disc_set_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 {
-	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
-
 	if (!(ndlp->nlp_flag & NLP_RPI_REGISTERED)) {
-		spin_lock_irq(shost->host_lock);
+		spin_lock_irq(&ndlp->lock);
 		ndlp->nlp_flag &= ~NLP_NPR_ADISC;
-		spin_unlock_irq(shost->host_lock);
+		spin_unlock_irq(&ndlp->lock);
 		return 0;
 	}

@@ -1050,16 +1046,16 @@ lpfc_disc_set_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
if (vport->cfg_use_adisc && ((vport->fc_flag & FC_RSCN_MODE) || if (vport->cfg_use_adisc && ((vport->fc_flag & FC_RSCN_MODE) ||
((ndlp->nlp_fcp_info & NLP_FCP_2_DEVICE) && ((ndlp->nlp_fcp_info & NLP_FCP_2_DEVICE) &&
(ndlp->nlp_type & NLP_FCP_TARGET)))) { (ndlp->nlp_type & NLP_FCP_TARGET)))) {
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag |= NLP_NPR_ADISC; ndlp->nlp_flag |= NLP_NPR_ADISC;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
return 1; return 1;
} }
} }
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag &= ~NLP_NPR_ADISC; ndlp->nlp_flag &= ~NLP_NPR_ADISC;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
lpfc_unreg_rpi(vport, ndlp); lpfc_unreg_rpi(vport, ndlp);
return 0; return 0;
} }
@ -1104,7 +1100,11 @@ lpfc_release_rpi(struct lpfc_hba *phba, struct lpfc_vport *vport,
lpfc_unreg_login(phba, vport->vpi, rpi, pmb); lpfc_unreg_login(phba, vport->vpi, rpi, pmb);
pmb->mbox_cmpl = lpfc_sli_def_mbox_cmpl; pmb->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
pmb->vport = vport; pmb->vport = vport;
pmb->ctx_ndlp = ndlp; pmb->ctx_ndlp = lpfc_nlp_get(ndlp);
if (!pmb->ctx_ndlp) {
mempool_free(pmb, phba->mbox_mem_pool);
return;
}
if (((ndlp->nlp_DID & Fabric_DID_MASK) != Fabric_DID_MASK) && if (((ndlp->nlp_DID & Fabric_DID_MASK) != Fabric_DID_MASK) &&
(!(vport->fc_flag & FC_OFFLINE_MODE))) (!(vport->fc_flag & FC_OFFLINE_MODE)))
@ -1192,12 +1192,11 @@ static uint32_t
lpfc_rcv_logo_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_rcv_logo_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
void *arg, uint32_t evt) void *arg, uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg; struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag |= NLP_LOGO_ACC; ndlp->nlp_flag |= NLP_LOGO_ACC;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL); lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
return ndlp->nlp_state; return ndlp->nlp_state;
@ -1258,9 +1257,9 @@ lpfc_rcv_plogi_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
if (lpfc_rcv_plogi(vport, ndlp, cmdiocb) && if (lpfc_rcv_plogi(vport, ndlp, cmdiocb) &&
(ndlp->nlp_flag & NLP_NPR_2B_DISC) && (ndlp->nlp_flag & NLP_NPR_2B_DISC) &&
(vport->num_disc_nodes)) { (vport->num_disc_nodes)) {
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag &= ~NLP_NPR_2B_DISC; ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
/* Check if there are more PLOGIs to be sent */ /* Check if there are more PLOGIs to be sent */
lpfc_more_plogi(vport); lpfc_more_plogi(vport);
if (vport->num_disc_nodes == 0) { if (vport->num_disc_nodes == 0) {
@ -1310,7 +1309,6 @@ static uint32_t
lpfc_rcv_els_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_rcv_els_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
void *arg, uint32_t evt) void *arg, uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg; struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
@ -1325,9 +1323,9 @@ lpfc_rcv_els_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
/* Put ndlp in npr state set plogi timer for 1 sec */ /* Put ndlp in npr state set plogi timer for 1 sec */
mod_timer(&ndlp->nlp_delayfunc, jiffies + msecs_to_jiffies(1000 * 1)); mod_timer(&ndlp->nlp_delayfunc, jiffies + msecs_to_jiffies(1000 * 1));
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag |= NLP_DELAY_TMO; ndlp->nlp_flag |= NLP_DELAY_TMO;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
ndlp->nlp_last_elscmd = ELS_CMD_PLOGI; ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
ndlp->nlp_prev_state = NLP_STE_PLOGI_ISSUE; ndlp->nlp_prev_state = NLP_STE_PLOGI_ISSUE;
lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
@ -1342,7 +1340,6 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport,
uint32_t evt) uint32_t evt)
{ {
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
struct lpfc_iocbq *cmdiocb, *rspiocb; struct lpfc_iocbq *cmdiocb, *rspiocb;
struct lpfc_dmabuf *pcmd, *prsp, *mp; struct lpfc_dmabuf *pcmd, *prsp, *mp;
uint32_t *lp; uint32_t *lp;
@ -1488,7 +1485,11 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport,
ndlp->nlp_flag |= NLP_REG_LOGIN_SEND; ndlp->nlp_flag |= NLP_REG_LOGIN_SEND;
mbox->mbox_cmpl = lpfc_mbx_cmpl_reg_login; mbox->mbox_cmpl = lpfc_mbx_cmpl_reg_login;
} }
mbox->ctx_ndlp = lpfc_nlp_get(ndlp); mbox->ctx_ndlp = lpfc_nlp_get(ndlp);
if (!mbox->ctx_ndlp)
goto out;
mbox->vport = vport; mbox->vport = vport;
if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT) if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT)
!= MBX_NOT_FINISHED) { != MBX_NOT_FINISHED) {
@ -1537,9 +1538,6 @@ out:
ndlp->nlp_prev_state = ndlp->nlp_state; ndlp->nlp_prev_state = ndlp->nlp_state;
lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
spin_lock_irq(shost->host_lock);
ndlp->nlp_flag |= NLP_DEFER_RM;
spin_unlock_irq(shost->host_lock);
return NLP_STE_FREED_NODE; return NLP_STE_FREED_NODE;
} }
@ -1573,12 +1571,10 @@ static uint32_t
lpfc_device_rm_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_device_rm_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
void *arg, uint32_t evt) void *arg, uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
if (ndlp->nlp_flag & NLP_NPR_2B_DISC) { if (ndlp->nlp_flag & NLP_NPR_2B_DISC) {
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag |= NLP_NODEV_REMOVE; ndlp->nlp_flag |= NLP_NODEV_REMOVE;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
return ndlp->nlp_state; return ndlp->nlp_state;
} else { } else {
/* software abort outstanding PLOGI */ /* software abort outstanding PLOGI */
@ -1595,7 +1591,6 @@ lpfc_device_recov_plogi_issue(struct lpfc_vport *vport,
void *arg, void *arg,
uint32_t evt) uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
/* Don't do anything that will mess up processing of the /* Don't do anything that will mess up processing of the
@ -1609,9 +1604,9 @@ lpfc_device_recov_plogi_issue(struct lpfc_vport *vport,
ndlp->nlp_prev_state = NLP_STE_PLOGI_ISSUE; ndlp->nlp_prev_state = NLP_STE_PLOGI_ISSUE;
lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC); ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC);
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
return ndlp->nlp_state; return ndlp->nlp_state;
} }
@ -1620,7 +1615,6 @@ static uint32_t
lpfc_rcv_plogi_adisc_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_rcv_plogi_adisc_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
void *arg, uint32_t evt) void *arg, uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
struct lpfc_iocbq *cmdiocb; struct lpfc_iocbq *cmdiocb;
@ -1631,9 +1625,9 @@ lpfc_rcv_plogi_adisc_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
if (lpfc_rcv_plogi(vport, ndlp, cmdiocb)) { if (lpfc_rcv_plogi(vport, ndlp, cmdiocb)) {
if (ndlp->nlp_flag & NLP_NPR_2B_DISC) { if (ndlp->nlp_flag & NLP_NPR_2B_DISC) {
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag &= ~NLP_NPR_2B_DISC; ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
if (vport->num_disc_nodes) if (vport->num_disc_nodes)
lpfc_more_adisc(vport); lpfc_more_adisc(vport);
} }
@ -1704,7 +1698,6 @@ lpfc_cmpl_adisc_adisc_issue(struct lpfc_vport *vport,
struct lpfc_nodelist *ndlp, struct lpfc_nodelist *ndlp,
void *arg, uint32_t evt) void *arg, uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
struct lpfc_iocbq *cmdiocb, *rspiocb; struct lpfc_iocbq *cmdiocb, *rspiocb;
IOCB_t *irsp; IOCB_t *irsp;
@ -1722,9 +1715,9 @@ lpfc_cmpl_adisc_adisc_issue(struct lpfc_vport *vport,
/* 1 sec timeout */ /* 1 sec timeout */
mod_timer(&ndlp->nlp_delayfunc, mod_timer(&ndlp->nlp_delayfunc,
jiffies + msecs_to_jiffies(1000)); jiffies + msecs_to_jiffies(1000));
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag |= NLP_DELAY_TMO; ndlp->nlp_flag |= NLP_DELAY_TMO;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
ndlp->nlp_last_elscmd = ELS_CMD_PLOGI; ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
memset(&ndlp->nlp_nodename, 0, sizeof(struct lpfc_name)); memset(&ndlp->nlp_nodename, 0, sizeof(struct lpfc_name));
@ -1766,12 +1759,10 @@ static uint32_t
lpfc_device_rm_adisc_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_device_rm_adisc_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
void *arg, uint32_t evt) void *arg, uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
if (ndlp->nlp_flag & NLP_NPR_2B_DISC) { if (ndlp->nlp_flag & NLP_NPR_2B_DISC) {
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag |= NLP_NODEV_REMOVE; ndlp->nlp_flag |= NLP_NODEV_REMOVE;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
return ndlp->nlp_state; return ndlp->nlp_state;
} else { } else {
/* software abort outstanding ADISC */ /* software abort outstanding ADISC */
@ -1788,7 +1779,6 @@ lpfc_device_recov_adisc_issue(struct lpfc_vport *vport,
void *arg, void *arg,
uint32_t evt) uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
/* Don't do anything that will mess up processing of the /* Don't do anything that will mess up processing of the
@ -1802,9 +1792,9 @@ lpfc_device_recov_adisc_issue(struct lpfc_vport *vport,
ndlp->nlp_prev_state = NLP_STE_ADISC_ISSUE; ndlp->nlp_prev_state = NLP_STE_ADISC_ISSUE;
lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC); ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC);
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
lpfc_disc_set_adisc(vport, ndlp); lpfc_disc_set_adisc(vport, ndlp);
return ndlp->nlp_state; return ndlp->nlp_state;
} }
@ -1907,7 +1897,7 @@ lpfc_rcv_logo_reglogin_issue(struct lpfc_vport *vport,
/* software abort if any GID_FT is outstanding */ /* software abort if any GID_FT is outstanding */
if (vport->cfg_enable_fc4_type != LPFC_ENABLE_FCP) { if (vport->cfg_enable_fc4_type != LPFC_ENABLE_FCP) {
ns_ndlp = lpfc_findnode_did(vport, NameServer_DID); ns_ndlp = lpfc_findnode_did(vport, NameServer_DID);
if (ns_ndlp && NLP_CHK_NODE_ACT(ns_ndlp)) if (ns_ndlp)
lpfc_els_abort(phba, ns_ndlp); lpfc_els_abort(phba, ns_ndlp);
} }
@ -1946,7 +1936,6 @@ lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_vport *vport,
void *arg, void *arg,
uint32_t evt) uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
LPFC_MBOXQ_t *pmb = (LPFC_MBOXQ_t *) arg; LPFC_MBOXQ_t *pmb = (LPFC_MBOXQ_t *) arg;
MAILBOX_t *mb = &pmb->u.mb; MAILBOX_t *mb = &pmb->u.mb;
@ -1973,9 +1962,9 @@ lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_vport *vport,
/* Put ndlp in npr state set plogi timer for 1 sec */ /* Put ndlp in npr state set plogi timer for 1 sec */
mod_timer(&ndlp->nlp_delayfunc, mod_timer(&ndlp->nlp_delayfunc,
jiffies + msecs_to_jiffies(1000 * 1)); jiffies + msecs_to_jiffies(1000 * 1));
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag |= NLP_DELAY_TMO; ndlp->nlp_flag |= NLP_DELAY_TMO;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
ndlp->nlp_last_elscmd = ELS_CMD_PLOGI; ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
lpfc_issue_els_logo(vport, ndlp, 0); lpfc_issue_els_logo(vport, ndlp, 0);
@ -2058,12 +2047,10 @@ lpfc_device_rm_reglogin_issue(struct lpfc_vport *vport,
void *arg, void *arg,
uint32_t evt) uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
if (ndlp->nlp_flag & NLP_NPR_2B_DISC) { if (ndlp->nlp_flag & NLP_NPR_2B_DISC) {
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag |= NLP_NODEV_REMOVE; ndlp->nlp_flag |= NLP_NODEV_REMOVE;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
return ndlp->nlp_state; return ndlp->nlp_state;
} else { } else {
lpfc_drop_node(vport, ndlp); lpfc_drop_node(vport, ndlp);
@ -2077,8 +2064,6 @@ lpfc_device_recov_reglogin_issue(struct lpfc_vport *vport,
void *arg, void *arg,
uint32_t evt) uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
/* Don't do anything that will mess up processing of the /* Don't do anything that will mess up processing of the
* previous RSCN. * previous RSCN.
*/ */
@ -2087,7 +2072,7 @@ lpfc_device_recov_reglogin_issue(struct lpfc_vport *vport,
ndlp->nlp_prev_state = NLP_STE_REG_LOGIN_ISSUE; ndlp->nlp_prev_state = NLP_STE_REG_LOGIN_ISSUE;
lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
/* If we are a target we won't immediately transition into PRLI, /* If we are a target we won't immediately transition into PRLI,
* so if REG_LOGIN already completed we don't need to ignore it. * so if REG_LOGIN already completed we don't need to ignore it.
@ -2097,7 +2082,7 @@ lpfc_device_recov_reglogin_issue(struct lpfc_vport *vport,
ndlp->nlp_flag |= NLP_IGNR_REG_CMPL; ndlp->nlp_flag |= NLP_IGNR_REG_CMPL;
ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC); ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC);
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
lpfc_disc_set_adisc(vport, ndlp); lpfc_disc_set_adisc(vport, ndlp);
return ndlp->nlp_state; return ndlp->nlp_state;
} }
@ -2168,7 +2153,6 @@ static uint32_t
lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
void *arg, uint32_t evt) void *arg, uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
struct lpfc_iocbq *cmdiocb, *rspiocb; struct lpfc_iocbq *cmdiocb, *rspiocb;
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
IOCB_t *irsp; IOCB_t *irsp;
@ -2290,9 +2274,9 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
(vport->port_type == LPFC_NPIV_PORT) && (vport->port_type == LPFC_NPIV_PORT) &&
vport->cfg_restrict_login) { vport->cfg_restrict_login) {
out: out:
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag |= NLP_TARGET_REMOVE; ndlp->nlp_flag |= NLP_TARGET_REMOVE;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
lpfc_issue_els_logo(vport, ndlp, 0); lpfc_issue_els_logo(vport, ndlp, 0);
ndlp->nlp_prev_state = NLP_STE_PRLI_ISSUE; ndlp->nlp_prev_state = NLP_STE_PRLI_ISSUE;
@ -2344,12 +2328,10 @@ static uint32_t
lpfc_device_rm_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_device_rm_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
void *arg, uint32_t evt) void *arg, uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
if (ndlp->nlp_flag & NLP_NPR_2B_DISC) { if (ndlp->nlp_flag & NLP_NPR_2B_DISC) {
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag |= NLP_NODEV_REMOVE; ndlp->nlp_flag |= NLP_NODEV_REMOVE;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
return ndlp->nlp_state; return ndlp->nlp_state;
} else { } else {
/* software abort outstanding PLOGI */ /* software abort outstanding PLOGI */
@ -2383,7 +2365,6 @@ lpfc_device_recov_prli_issue(struct lpfc_vport *vport,
void *arg, void *arg,
uint32_t evt) uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
/* Don't do anything that will mess up processing of the /* Don't do anything that will mess up processing of the
@ -2397,9 +2378,9 @@ lpfc_device_recov_prli_issue(struct lpfc_vport *vport,
ndlp->nlp_prev_state = NLP_STE_PRLI_ISSUE; ndlp->nlp_prev_state = NLP_STE_PRLI_ISSUE;
lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC); ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC);
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
lpfc_disc_set_adisc(vport, ndlp); lpfc_disc_set_adisc(vport, ndlp);
return ndlp->nlp_state; return ndlp->nlp_state;
} }
@ -2436,12 +2417,11 @@ static uint32_t
lpfc_rcv_logo_logo_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_rcv_logo_logo_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
void *arg, uint32_t evt) void *arg, uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *)arg; struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *)arg;
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag |= NLP_LOGO_ACC; ndlp->nlp_flag |= NLP_LOGO_ACC;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL); lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
return ndlp->nlp_state; return ndlp->nlp_state;
} }
@ -2478,13 +2458,11 @@ static uint32_t
lpfc_cmpl_logo_logo_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_cmpl_logo_logo_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
void *arg, uint32_t evt) void *arg, uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
ndlp->nlp_prev_state = NLP_STE_LOGO_ISSUE; ndlp->nlp_prev_state = NLP_STE_LOGO_ISSUE;
lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC); ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC);
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
lpfc_disc_set_adisc(vport, ndlp); lpfc_disc_set_adisc(vport, ndlp);
return ndlp->nlp_state; return ndlp->nlp_state;
} }
@ -2578,14 +2556,12 @@ lpfc_device_recov_unmap_node(struct lpfc_vport *vport,
void *arg, void *arg,
uint32_t evt) uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
ndlp->nlp_prev_state = NLP_STE_UNMAPPED_NODE; ndlp->nlp_prev_state = NLP_STE_UNMAPPED_NODE;
lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC); ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC);
ndlp->nlp_fc4_type &= ~(NLP_FC4_FCP | NLP_FC4_NVME); ndlp->nlp_fc4_type &= ~(NLP_FC4_FCP | NLP_FC4_NVME);
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
lpfc_disc_set_adisc(vport, ndlp); lpfc_disc_set_adisc(vport, ndlp);
return ndlp->nlp_state; return ndlp->nlp_state;
@ -2656,14 +2632,12 @@ lpfc_device_recov_mapped_node(struct lpfc_vport *vport,
void *arg, void *arg,
uint32_t evt) uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
ndlp->nlp_prev_state = NLP_STE_MAPPED_NODE; ndlp->nlp_prev_state = NLP_STE_MAPPED_NODE;
lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC); ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC);
ndlp->nlp_fc4_type &= ~(NLP_FC4_FCP | NLP_FC4_NVME); ndlp->nlp_fc4_type &= ~(NLP_FC4_FCP | NLP_FC4_NVME);
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
lpfc_disc_set_adisc(vport, ndlp); lpfc_disc_set_adisc(vport, ndlp);
return ndlp->nlp_state; return ndlp->nlp_state;
} }
@ -2672,7 +2646,6 @@ static uint32_t
lpfc_rcv_plogi_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_rcv_plogi_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
void *arg, uint32_t evt) void *arg, uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg; struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
/* Ignore PLOGI if we have an outstanding LOGO */ /* Ignore PLOGI if we have an outstanding LOGO */
@ -2680,9 +2653,9 @@ lpfc_rcv_plogi_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
return ndlp->nlp_state; return ndlp->nlp_state;
if (lpfc_rcv_plogi(vport, ndlp, cmdiocb)) { if (lpfc_rcv_plogi(vport, ndlp, cmdiocb)) {
lpfc_cancel_retry_delay_tmo(vport, ndlp); lpfc_cancel_retry_delay_tmo(vport, ndlp);
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag &= ~(NLP_NPR_ADISC | NLP_NPR_2B_DISC); ndlp->nlp_flag &= ~(NLP_NPR_ADISC | NLP_NPR_2B_DISC);
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
} else if (!(ndlp->nlp_flag & NLP_NPR_2B_DISC)) { } else if (!(ndlp->nlp_flag & NLP_NPR_2B_DISC)) {
/* send PLOGI immediately, move to PLOGI issue state */ /* send PLOGI immediately, move to PLOGI issue state */
if (!(ndlp->nlp_flag & NLP_DELAY_TMO)) { if (!(ndlp->nlp_flag & NLP_DELAY_TMO)) {
@ -2698,7 +2671,6 @@ static uint32_t
lpfc_rcv_prli_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_rcv_prli_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
void *arg, uint32_t evt) void *arg, uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg; struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
struct ls_rjt stat; struct ls_rjt stat;
@ -2709,10 +2681,10 @@ lpfc_rcv_prli_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
if (!(ndlp->nlp_flag & NLP_DELAY_TMO)) { if (!(ndlp->nlp_flag & NLP_DELAY_TMO)) {
if (ndlp->nlp_flag & NLP_NPR_ADISC) { if (ndlp->nlp_flag & NLP_NPR_ADISC) {
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag &= ~NLP_NPR_ADISC; ndlp->nlp_flag &= ~NLP_NPR_ADISC;
ndlp->nlp_prev_state = NLP_STE_NPR_NODE; ndlp->nlp_prev_state = NLP_STE_NPR_NODE;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
lpfc_nlp_set_state(vport, ndlp, NLP_STE_ADISC_ISSUE); lpfc_nlp_set_state(vport, ndlp, NLP_STE_ADISC_ISSUE);
lpfc_issue_els_adisc(vport, ndlp, 0); lpfc_issue_els_adisc(vport, ndlp, 0);
} else { } else {
@ -2766,27 +2738,26 @@ static uint32_t
lpfc_rcv_prlo_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_rcv_prlo_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
void *arg, uint32_t evt) void *arg, uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg; struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag |= NLP_LOGO_ACC; ndlp->nlp_flag |= NLP_LOGO_ACC;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL); lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
if ((ndlp->nlp_flag & NLP_DELAY_TMO) == 0) { if ((ndlp->nlp_flag & NLP_DELAY_TMO) == 0) {
mod_timer(&ndlp->nlp_delayfunc, mod_timer(&ndlp->nlp_delayfunc,
jiffies + msecs_to_jiffies(1000 * 1)); jiffies + msecs_to_jiffies(1000 * 1));
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag |= NLP_DELAY_TMO; ndlp->nlp_flag |= NLP_DELAY_TMO;
ndlp->nlp_flag &= ~NLP_NPR_ADISC; ndlp->nlp_flag &= ~NLP_NPR_ADISC;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
ndlp->nlp_last_elscmd = ELS_CMD_PLOGI; ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
} else { } else {
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag &= ~NLP_NPR_ADISC; ndlp->nlp_flag &= ~NLP_NPR_ADISC;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
} }
return ndlp->nlp_state; return ndlp->nlp_state;
} }
@ -2797,16 +2768,12 @@ lpfc_cmpl_plogi_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
{ {
struct lpfc_iocbq *cmdiocb, *rspiocb; struct lpfc_iocbq *cmdiocb, *rspiocb;
IOCB_t *irsp; IOCB_t *irsp;
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
cmdiocb = (struct lpfc_iocbq *) arg; cmdiocb = (struct lpfc_iocbq *) arg;
rspiocb = cmdiocb->context_un.rsp_iocb; rspiocb = cmdiocb->context_un.rsp_iocb;
irsp = &rspiocb->iocb; irsp = &rspiocb->iocb;
if (irsp->ulpStatus) { if (irsp->ulpStatus) {
spin_lock_irq(shost->host_lock);
ndlp->nlp_flag |= NLP_DEFER_RM;
spin_unlock_irq(shost->host_lock);
return NLP_STE_FREED_NODE; return NLP_STE_FREED_NODE;
} }
return ndlp->nlp_state; return ndlp->nlp_state;
@ -2893,12 +2860,10 @@ static uint32_t
lpfc_device_rm_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_device_rm_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
void *arg, uint32_t evt) void *arg, uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
if (ndlp->nlp_flag & NLP_NPR_2B_DISC) { if (ndlp->nlp_flag & NLP_NPR_2B_DISC) {
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag |= NLP_NODEV_REMOVE; ndlp->nlp_flag |= NLP_NODEV_REMOVE;
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
return ndlp->nlp_state; return ndlp->nlp_state;
} }
lpfc_drop_node(vport, ndlp); lpfc_drop_node(vport, ndlp);
@ -2909,8 +2874,6 @@ static uint32_t
lpfc_device_recov_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_device_recov_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
void *arg, uint32_t evt) void *arg, uint32_t evt)
{ {
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
/* Don't do anything that will mess up processing of the /* Don't do anything that will mess up processing of the
* previous RSCN. * previous RSCN.
*/ */
@ -2918,10 +2881,10 @@ lpfc_device_recov_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
return ndlp->nlp_state; return ndlp->nlp_state;
lpfc_cancel_retry_delay_tmo(vport, ndlp); lpfc_cancel_retry_delay_tmo(vport, ndlp);
spin_lock_irq(shost->host_lock); spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC); ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC);
ndlp->nlp_fc4_type &= ~(NLP_FC4_FCP | NLP_FC4_NVME); ndlp->nlp_fc4_type &= ~(NLP_FC4_FCP | NLP_FC4_NVME);
spin_unlock_irq(shost->host_lock); spin_unlock_irq(&ndlp->lock);
return ndlp->nlp_state; return ndlp->nlp_state;
} }

Просмотреть файл

@ -1,7 +1,7 @@
/******************************************************************* /*******************************************************************
* This file is part of the Emulex Linux Device Driver for * * This file is part of the Emulex Linux Device Driver for *
* Fibre Channel Host Bus Adapters. * * Fibre Channel Host Bus Adapters. *
* Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term * * Copyright (C) 2017-2020 Broadcom. All Rights Reserved. The term *
* Broadcom refers to Broadcom Inc. and/or its subsidiaries. * * Broadcom refers to Broadcom Inc. and/or its subsidiaries. *
* Copyright (C) 2004-2016 Emulex. All rights reserved. * * Copyright (C) 2004-2016 Emulex. All rights reserved. *
* EMULEX and SLI are trademarks of Emulex. * * EMULEX and SLI are trademarks of Emulex. *
@ -62,180 +62,9 @@ lpfc_release_nvme_buf(struct lpfc_hba *, struct lpfc_io_buf *);
static struct nvme_fc_port_template lpfc_nvme_template; static struct nvme_fc_port_template lpfc_nvme_template;
static union lpfc_wqe128 lpfc_iread_cmd_template;
static union lpfc_wqe128 lpfc_iwrite_cmd_template;
static union lpfc_wqe128 lpfc_icmnd_cmd_template;
/* Setup WQE templates for NVME IOs */
void
lpfc_nvme_cmd_template(void)
{
union lpfc_wqe128 *wqe;
/* IREAD template */
wqe = &lpfc_iread_cmd_template;
memset(wqe, 0, sizeof(union lpfc_wqe128));
/* Word 0, 1, 2 - BDE is variable */
/* Word 3 - cmd_buff_len, payload_offset_len is zero */
/* Word 4 - total_xfer_len is variable */
/* Word 5 - is zero */
/* Word 6 - ctxt_tag, xri_tag is variable */
/* Word 7 */
bf_set(wqe_cmnd, &wqe->fcp_iread.wqe_com, CMD_FCP_IREAD64_WQE);
bf_set(wqe_pu, &wqe->fcp_iread.wqe_com, PARM_READ_CHECK);
bf_set(wqe_class, &wqe->fcp_iread.wqe_com, CLASS3);
bf_set(wqe_ct, &wqe->fcp_iread.wqe_com, SLI4_CT_RPI);
/* Word 8 - abort_tag is variable */
/* Word 9 - reqtag is variable */
/* Word 10 - dbde, wqes is variable */
bf_set(wqe_qosd, &wqe->fcp_iread.wqe_com, 0);
bf_set(wqe_nvme, &wqe->fcp_iread.wqe_com, 1);
bf_set(wqe_iod, &wqe->fcp_iread.wqe_com, LPFC_WQE_IOD_READ);
bf_set(wqe_lenloc, &wqe->fcp_iread.wqe_com, LPFC_WQE_LENLOC_WORD4);
bf_set(wqe_dbde, &wqe->fcp_iread.wqe_com, 0);
bf_set(wqe_wqes, &wqe->fcp_iread.wqe_com, 1);
/* Word 11 - pbde is variable */
bf_set(wqe_cmd_type, &wqe->fcp_iread.wqe_com, NVME_READ_CMD);
bf_set(wqe_cqid, &wqe->fcp_iread.wqe_com, LPFC_WQE_CQ_ID_DEFAULT);
bf_set(wqe_pbde, &wqe->fcp_iread.wqe_com, 1);
/* Word 12 - is zero */
/* Word 13, 14, 15 - PBDE is variable */
/* IWRITE template */
wqe = &lpfc_iwrite_cmd_template;
memset(wqe, 0, sizeof(union lpfc_wqe128));
/* Word 0, 1, 2 - BDE is variable */
/* Word 3 - cmd_buff_len, payload_offset_len is zero */
/* Word 4 - total_xfer_len is variable */
/* Word 5 - initial_xfer_len is variable */
/* Word 6 - ctxt_tag, xri_tag is variable */
/* Word 7 */
bf_set(wqe_cmnd, &wqe->fcp_iwrite.wqe_com, CMD_FCP_IWRITE64_WQE);
bf_set(wqe_pu, &wqe->fcp_iwrite.wqe_com, PARM_READ_CHECK);
bf_set(wqe_class, &wqe->fcp_iwrite.wqe_com, CLASS3);
bf_set(wqe_ct, &wqe->fcp_iwrite.wqe_com, SLI4_CT_RPI);
/* Word 8 - abort_tag is variable */
/* Word 9 - reqtag is variable */
/* Word 10 - dbde, wqes is variable */
bf_set(wqe_qosd, &wqe->fcp_iwrite.wqe_com, 0);
bf_set(wqe_nvme, &wqe->fcp_iwrite.wqe_com, 1);
bf_set(wqe_iod, &wqe->fcp_iwrite.wqe_com, LPFC_WQE_IOD_WRITE);
bf_set(wqe_lenloc, &wqe->fcp_iwrite.wqe_com, LPFC_WQE_LENLOC_WORD4);
bf_set(wqe_dbde, &wqe->fcp_iwrite.wqe_com, 0);
bf_set(wqe_wqes, &wqe->fcp_iwrite.wqe_com, 1);
/* Word 11 - pbde is variable */
bf_set(wqe_cmd_type, &wqe->fcp_iwrite.wqe_com, NVME_WRITE_CMD);
bf_set(wqe_cqid, &wqe->fcp_iwrite.wqe_com, LPFC_WQE_CQ_ID_DEFAULT);
bf_set(wqe_pbde, &wqe->fcp_iwrite.wqe_com, 1);
/* Word 12 - is zero */
/* Word 13, 14, 15 - PBDE is variable */
/* ICMND template */
wqe = &lpfc_icmnd_cmd_template;
memset(wqe, 0, sizeof(union lpfc_wqe128));
/* Word 0, 1, 2 - BDE is variable */
/* Word 3 - payload_offset_len is variable */
/* Word 4, 5 - is zero */
/* Word 6 - ctxt_tag, xri_tag is variable */
/* Word 7 */
bf_set(wqe_cmnd, &wqe->fcp_icmd.wqe_com, CMD_FCP_ICMND64_WQE);
bf_set(wqe_pu, &wqe->fcp_icmd.wqe_com, 0);
bf_set(wqe_class, &wqe->fcp_icmd.wqe_com, CLASS3);
bf_set(wqe_ct, &wqe->fcp_icmd.wqe_com, SLI4_CT_RPI);
/* Word 8 - abort_tag is variable */
/* Word 9 - reqtag is variable */
/* Word 10 - dbde, wqes is variable */
bf_set(wqe_qosd, &wqe->fcp_icmd.wqe_com, 1);
bf_set(wqe_nvme, &wqe->fcp_icmd.wqe_com, 1);
bf_set(wqe_iod, &wqe->fcp_icmd.wqe_com, LPFC_WQE_IOD_NONE);
bf_set(wqe_lenloc, &wqe->fcp_icmd.wqe_com, LPFC_WQE_LENLOC_NONE);
bf_set(wqe_dbde, &wqe->fcp_icmd.wqe_com, 0);
bf_set(wqe_wqes, &wqe->fcp_icmd.wqe_com, 1);
/* Word 11 */
bf_set(wqe_cmd_type, &wqe->fcp_icmd.wqe_com, FCP_COMMAND);
bf_set(wqe_cqid, &wqe->fcp_icmd.wqe_com, LPFC_WQE_CQ_ID_DEFAULT);
bf_set(wqe_pbde, &wqe->fcp_icmd.wqe_com, 0);
/* Word 12, 13, 14, 15 - is zero */
}
/**
* lpfc_nvme_prep_abort_wqe - set up 'abort' work queue entry.
* @pwqeq: Pointer to command iocb.
* @xritag: Tag that uniqely identifies the local exchange resource.
* @opt: Option bits -
* bit 0 = inhibit sending abts on the link
*
* This function is called with hbalock held.
**/
void
lpfc_nvme_prep_abort_wqe(struct lpfc_iocbq *pwqeq, u16 xritag, u8 opt)
{
union lpfc_wqe128 *wqe = &pwqeq->wqe;
/* WQEs are reused. Clear stale data and set key fields to
* zero like ia, iaab, iaar, xri_tag, and ctxt_tag.
*/
memset(wqe, 0, sizeof(*wqe));
if (opt & INHIBIT_ABORT)
bf_set(abort_cmd_ia, &wqe->abort_cmd, 1);
/* Abort specified xri tag, with the mask deliberately zeroed */
bf_set(abort_cmd_criteria, &wqe->abort_cmd, T_XRI_TAG);
bf_set(wqe_cmnd, &wqe->abort_cmd.wqe_com, CMD_ABORT_XRI_CX);
/* Abort the IO associated with this outstanding exchange ID. */
wqe->abort_cmd.wqe_com.abort_tag = xritag;
/* iotag for the wqe completion. */
bf_set(wqe_reqtag, &wqe->abort_cmd.wqe_com, pwqeq->iotag);
bf_set(wqe_qosd, &wqe->abort_cmd.wqe_com, 1);
bf_set(wqe_lenloc, &wqe->abort_cmd.wqe_com, LPFC_WQE_LENLOC_NONE);
bf_set(wqe_cmd_type, &wqe->abort_cmd.wqe_com, OTHER_COMMAND);
bf_set(wqe_wqec, &wqe->abort_cmd.wqe_com, 1);
bf_set(wqe_cqid, &wqe->abort_cmd.wqe_com, LPFC_WQE_CQ_ID_DEFAULT);
}
/** /**
* lpfc_nvme_create_queue - * lpfc_nvme_create_queue -
* @pnvme_lport: Transport localport that LS is to be issued from * @pnvme_lport: Transport localport that LS is to be issued from
* @lpfc_pnvme: Pointer to the driver's nvme instance data
* @qidx: An cpu index used to affinitize IO queues and MSIX vectors. * @qidx: An cpu index used to affinitize IO queues and MSIX vectors.
* @qsize: Size of the queue in bytes * @qsize: Size of the queue in bytes
* @handle: An opaque driver handle used in follow-up calls. * @handle: An opaque driver handle used in follow-up calls.
@ -357,39 +186,47 @@ lpfc_nvme_remoteport_delete(struct nvme_fc_remote_port *remoteport)
struct lpfc_nvme_rport *rport = remoteport->private; struct lpfc_nvme_rport *rport = remoteport->private;
struct lpfc_vport *vport; struct lpfc_vport *vport;
struct lpfc_nodelist *ndlp; struct lpfc_nodelist *ndlp;
u32 fc4_xpt_flags;
ndlp = rport->ndlp; ndlp = rport->ndlp;
if (!ndlp) if (!ndlp) {
pr_err("**** %s: NULL ndlp on rport %p remoteport %p\n",
__func__, rport, remoteport);
goto rport_err; goto rport_err;
}
vport = ndlp->vport; vport = ndlp->vport;
if (!vport) if (!vport) {
pr_err("**** %s: Null vport on ndlp %p, ste x%x rport %p\n",
__func__, ndlp, ndlp->nlp_state, rport);
goto rport_err; goto rport_err;
}
fc4_xpt_flags = NVME_XPT_REGD | SCSI_XPT_REGD;
/* Remove this rport from the lport's list - memory is owned by the /* Remove this rport from the lport's list - memory is owned by the
* transport. Remove the ndlp reference for the NVME transport before * transport. Remove the ndlp reference for the NVME transport before
* calling state machine to remove the node. * calling state machine to remove the node.
*/ */
lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC, lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
"6146 remoteport delete of remoteport x%px\n", "6146 remoteport delete of remoteport %p\n",
remoteport); remoteport);
spin_lock_irq(&vport->phba->hbalock); spin_lock_irq(&ndlp->lock);
/* The register rebind might have occurred before the delete /* The register rebind might have occurred before the delete
* downcall. Guard against this race. * downcall. Guard against this race.
*/ */
if (ndlp->upcall_flags & NLP_WAIT_FOR_UNREG) { if (ndlp->fc4_xpt_flags & NLP_WAIT_FOR_UNREG)
ndlp->nrport = NULL; ndlp->fc4_xpt_flags &= ~(NLP_WAIT_FOR_UNREG | NVME_XPT_REGD);
ndlp->upcall_flags &= ~NLP_WAIT_FOR_UNREG;
spin_unlock_irq(&vport->phba->hbalock);
/* Remove original register reference. The host transport spin_unlock_irq(&ndlp->lock);
* won't reference this rport/remoteport any further.
*/ /* On a devloss timeout event, one more put is executed provided the
lpfc_nlp_put(ndlp); * NVME and SCSI rport unregister requests are complete. If the vport
} else { * is unloading, this extra put is executed by lpfc_drop_node.
spin_unlock_irq(&vport->phba->hbalock); */
} if (!(ndlp->fc4_xpt_flags & fc4_xpt_flags))
lpfc_disc_state_machine(vport, ndlp, NULL, NLP_EVT_DEVICE_RM);
rport_err: rport_err:
return; return;
@ -567,6 +404,13 @@ lpfc_nvme_gen_req(struct lpfc_vport *vport, struct lpfc_dmabuf *bmp,
/* Save for completion so we can release these resources */ /* Save for completion so we can release these resources */
genwqe->context1 = lpfc_nlp_get(ndlp); genwqe->context1 = lpfc_nlp_get(ndlp);
if (!genwqe->context1) {
dev_warn(&phba->pcidev->dev,
"Warning: Failed node ref, not sending LS_REQ\n");
lpfc_sli_release_iocbq(phba, genwqe);
return 1;
}
genwqe->context2 = (uint8_t *)pnvme_lsreq; genwqe->context2 = (uint8_t *)pnvme_lsreq;
/* Fill in payload, bp points to frame payload */ /* Fill in payload, bp points to frame payload */
@ -654,6 +498,7 @@ lpfc_nvme_gen_req(struct lpfc_vport *vport, struct lpfc_dmabuf *bmp,
"Data: x%x x%x rc x%x\n", "Data: x%x x%x rc x%x\n",
ndlp->nlp_DID, genwqe->iotag, ndlp->nlp_DID, genwqe->iotag,
vport->port_state, rc); vport->port_state, rc);
lpfc_nlp_put(ndlp);
lpfc_sli_release_iocbq(phba, genwqe); lpfc_sli_release_iocbq(phba, genwqe);
return 1; return 1;
} }
@ -695,7 +540,7 @@ __lpfc_nvme_ls_req(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
int ret; int ret;
uint16_t ntype, nstate; uint16_t ntype, nstate;
if (!ndlp || !NLP_CHK_NODE_ACT(ndlp)) { if (!ndlp) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"6051 NVMEx LS REQ: Bad NDLP x%px, Failing " "6051 NVMEx LS REQ: Bad NDLP x%px, Failing "
"LS Req\n", "LS Req\n",
@ -787,7 +632,7 @@ __lpfc_nvme_ls_req(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
/** /**
* lpfc_nvme_ls_req - Issue an NVME Link Service request * lpfc_nvme_ls_req - Issue an NVME Link Service request
* @pnvme_lport: Transport localport that LS is to be issued from. * @pnvme_lport: Transport localport that LS is to be issued from.
* @nvme_rport: Transport remoteport that LS is to be sent to. * @pnvme_rport: Transport remoteport that LS is to be sent to.
* @pnvme_lsreq: the transport nvme_ls_req structure for the LS * @pnvme_lsreq: the transport nvme_ls_req structure for the LS
* *
* Driver registers this routine to handle any link service request * Driver registers this routine to handle any link service request
@ -881,7 +726,7 @@ __lpfc_nvme_ls_abort(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
spin_unlock(&pring->ring_lock); spin_unlock(&pring->ring_lock);
if (foundit) if (foundit)
lpfc_sli_issue_abort_iotag(phba, pring, wqe); lpfc_sli_issue_abort_iotag(phba, pring, wqe, NULL);
spin_unlock_irq(&phba->hbalock); spin_unlock_irq(&phba->hbalock);
if (foundit) if (foundit)
@ -940,7 +785,6 @@ lpfc_nvme_ls_abort(struct nvme_fc_local_port *pnvme_lport,
{ {
struct lpfc_nvme_lport *lport; struct lpfc_nvme_lport *lport;
struct lpfc_vport *vport; struct lpfc_vport *vport;
struct lpfc_hba *phba;
struct lpfc_nodelist *ndlp; struct lpfc_nodelist *ndlp;
int ret; int ret;
@ -948,7 +792,6 @@ lpfc_nvme_ls_abort(struct nvme_fc_local_port *pnvme_lport,
if (unlikely(!lport)) if (unlikely(!lport))
return; return;
vport = lport->vport; vport = lport->vport;
phba = vport->phba;
if (vport->load_flag & FC_UNLOADING) if (vport->load_flag & FC_UNLOADING)
return; return;
@ -1134,7 +977,7 @@ lpfc_nvme_io_cmd_wqe_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
* transport is still transitioning. * transport is still transitioning.
*/ */
ndlp = lpfc_ncmd->ndlp; ndlp = lpfc_ncmd->ndlp;
if (!ndlp || !NLP_CHK_NODE_ACT(ndlp)) { if (!ndlp) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"6062 Ignoring NVME cmpl. No ndlp\n"); "6062 Ignoring NVME cmpl. No ndlp\n");
goto out_err; goto out_err;
@ -1292,7 +1135,7 @@ out_err:
/** /**
* lpfc_nvme_prep_io_cmd - Issue an NVME-over-FCP IO * lpfc_nvme_prep_io_cmd - Issue an NVME-over-FCP IO
* @vport: pointer to a host virtual N_Port data structure * @vport: pointer to a host virtual N_Port data structure
* @lpfcn_cmd: Pointer to lpfc scsi command * @lpfc_ncmd: Pointer to lpfc scsi command
* @pnode: pointer to a node-list data structure * @pnode: pointer to a node-list data structure
* @cstat: pointer to the control status structure * @cstat: pointer to the control status structure
* *
@ -1316,9 +1159,6 @@ lpfc_nvme_prep_io_cmd(struct lpfc_vport *vport,
union lpfc_wqe128 *wqe = &pwqeq->wqe; union lpfc_wqe128 *wqe = &pwqeq->wqe;
uint32_t req_len; uint32_t req_len;
if (!NLP_CHK_NODE_ACT(pnode))
return -EINVAL;
/* /*
* There are three possibilities here - use scatter-gather segment, use * There are three possibilities here - use scatter-gather segment, use
* the single mapping, or neither. * the single mapping, or neither.
@ -1390,6 +1230,9 @@ lpfc_nvme_prep_io_cmd(struct lpfc_vport *vport,
/* Word 9 */ /* Word 9 */
bf_set(wqe_reqtag, &wqe->generic.wqe_com, pwqeq->iotag); bf_set(wqe_reqtag, &wqe->generic.wqe_com, pwqeq->iotag);
/* Word 10 */
bf_set(wqe_xchg, &wqe->fcp_iwrite.wqe_com, LPFC_NVME_XCHG);
/* Words 13 14 15 are for PBDE support */ /* Words 13 14 15 are for PBDE support */
pwqeq->vport = vport; pwqeq->vport = vport;
@ -1400,7 +1243,7 @@ lpfc_nvme_prep_io_cmd(struct lpfc_vport *vport,
/** /**
* lpfc_nvme_prep_io_dma - Issue an NVME-over-FCP IO * lpfc_nvme_prep_io_dma - Issue an NVME-over-FCP IO
* @vport: pointer to a host virtual N_Port data structure * @vport: pointer to a host virtual N_Port data structure
* @lpfcn_cmd: Pointer to lpfc scsi command * @lpfc_ncmd: Pointer to lpfc scsi command
* *
* Driver registers this routine as it io request handler. This * Driver registers this routine as it io request handler. This
* routine issues an fcp WQE with data from the @lpfc_nvme_fcpreq * routine issues an fcp WQE with data from the @lpfc_nvme_fcpreq
@ -1557,7 +1400,9 @@ lpfc_nvme_prep_io_dma(struct lpfc_vport *vport,
le32_to_cpu(first_data_sgl->sge_len); le32_to_cpu(first_data_sgl->sge_len);
bde->tus.f.bdeFlags = BUFF_TYPE_BDE_64; bde->tus.f.bdeFlags = BUFF_TYPE_BDE_64;
bde->tus.w = cpu_to_le32(bde->tus.w); bde->tus.w = cpu_to_le32(bde->tus.w);
/* wqe_pbde is 1 in template */
/* Word 11 */
bf_set(wqe_pbde, &wqe->generic.wqe_com, 1);
} else { } else {
memset(&wqe->words[13], 0, (sizeof(uint32_t) * 3)); memset(&wqe->words[13], 0, (sizeof(uint32_t) * 3));
bf_set(wqe_pbde, &wqe->generic.wqe_com, 0); bf_set(wqe_pbde, &wqe->generic.wqe_com, 0);
@ -1582,16 +1427,14 @@ lpfc_nvme_prep_io_dma(struct lpfc_vport *vport,
/** /**
* lpfc_nvme_fcp_io_submit - Issue an NVME-over-FCP IO * lpfc_nvme_fcp_io_submit - Issue an NVME-over-FCP IO
* @lpfc_pnvme: Pointer to the driver's nvme instance data * @pnvme_lport: Pointer to the driver's local port data
* @lpfc_nvme_lport: Pointer to the driver's local port data * @pnvme_rport: Pointer to the rport getting the @lpfc_nvme_ereq
* @lpfc_nvme_rport: Pointer to the rport getting the @lpfc_nvme_ereq
* @lpfc_nvme_fcreq: IO request from nvme fc to driver.
* @hw_queue_handle: Driver-returned handle in lpfc_nvme_create_queue * @hw_queue_handle: Driver-returned handle in lpfc_nvme_create_queue
* @pnvme_fcreq: IO request from nvme fc to driver.
* *
* Driver registers this routine as it io request handler. This * Driver registers this routine as it io request handler. This
* routine issues an fcp WQE with data from the @lpfc_nvme_fcpreq * routine issues an fcp WQE with data from the @lpfc_nvme_fcpreq
* data structure to the rport * data structure to the rport indicated in @lpfc_nvme_rport.
indicated in @lpfc_nvme_rport.
* *
* Return value : * Return value :
* 0 - Success * 0 - Success
@ -1670,7 +1513,7 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport,
* transport is still transitioning. * transport is still transitioning.
*/ */
ndlp = rport->ndlp; ndlp = rport->ndlp;
if (!ndlp || !NLP_CHK_NODE_ACT(ndlp)) { if (!ndlp) {
lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE | LOG_NVME_IOERR, lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE | LOG_NVME_IOERR,
"6053 Busy IO, ndlp not ready: rport x%px " "6053 Busy IO, ndlp not ready: rport x%px "
"ndlp x%px, DID x%06x\n", "ndlp x%px, DID x%06x\n",
@ -1688,7 +1531,7 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport,
"IO. State x%x, Type x%x Flg x%x\n", "IO. State x%x, Type x%x Flg x%x\n",
pnvme_rport->port_id, pnvme_rport->port_id,
ndlp->nlp_state, ndlp->nlp_type, ndlp->nlp_state, ndlp->nlp_type,
ndlp->upcall_flags); ndlp->fc4_xpt_flags);
atomic_inc(&lport->xmt_fcp_bad_ndlp); atomic_inc(&lport->xmt_fcp_bad_ndlp);
ret = -EBUSY; ret = -EBUSY;
goto out_fail; goto out_fail;
@ -1839,7 +1682,7 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport,
* lpfc_nvme_abort_fcreq_cmpl - Complete an NVME FCP abort request. * lpfc_nvme_abort_fcreq_cmpl - Complete an NVME FCP abort request.
* @phba: Pointer to HBA context object * @phba: Pointer to HBA context object
* @cmdiocb: Pointer to command iocb object. * @cmdiocb: Pointer to command iocb object.
* @rspiocb: Pointer to response iocb object. * @abts_cmpl: Pointer to wcqe complete object.
* *
* This is the callback function for any NVME FCP IO that was aborted. * This is the callback function for any NVME FCP IO that was aborted.
* *
@ -1865,11 +1708,10 @@ lpfc_nvme_abort_fcreq_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
/** /**
* lpfc_nvme_fcp_abort - Issue an NVME-over-FCP ABTS * lpfc_nvme_fcp_abort - Issue an NVME-over-FCP ABTS
* @lpfc_pnvme: Pointer to the driver's nvme instance data * @pnvme_lport: Pointer to the driver's local port data
* @lpfc_nvme_lport: Pointer to the driver's local port data * @pnvme_rport: Pointer to the rport getting the @lpfc_nvme_ereq
* @lpfc_nvme_rport: Pointer to the rport getting the @lpfc_nvme_ereq
* @lpfc_nvme_fcreq: IO request from nvme fc to driver.
* @hw_queue_handle: Driver-returned handle in lpfc_nvme_create_queue * @hw_queue_handle: Driver-returned handle in lpfc_nvme_create_queue
* @pnvme_fcreq: IO request from nvme fc to driver.
* *
* Driver registers this routine as its nvme request io abort handler. This * Driver registers this routine as its nvme request io abort handler. This
* routine issues an fcp Abort WQE with data from the @lpfc_nvme_fcpreq * routine issues an fcp Abort WQE with data from the @lpfc_nvme_fcpreq
@ -1890,7 +1732,6 @@ lpfc_nvme_fcp_abort(struct nvme_fc_local_port *pnvme_lport,
struct lpfc_vport *vport; struct lpfc_vport *vport;
struct lpfc_hba *phba; struct lpfc_hba *phba;
struct lpfc_io_buf *lpfc_nbuf; struct lpfc_io_buf *lpfc_nbuf;
struct lpfc_iocbq *abts_buf;
struct lpfc_iocbq *nvmereq_wqe; struct lpfc_iocbq *nvmereq_wqe;
struct lpfc_nvme_fcpreq_priv *freqpriv; struct lpfc_nvme_fcpreq_priv *freqpriv;
unsigned long flags; unsigned long flags;
@ -2001,42 +1842,23 @@ lpfc_nvme_fcp_abort(struct nvme_fc_local_port *pnvme_lport,
goto out_unlock; goto out_unlock;
} }
abts_buf = __lpfc_sli_get_iocbq(phba); ret_val = lpfc_sli4_issue_abort_iotag(phba, nvmereq_wqe,
if (!abts_buf) { lpfc_nvme_abort_fcreq_cmpl);
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"6136 No available abort wqes. Skipping "
"Abts req for nvme_fcreq x%px xri x%x\n",
pnvme_fcreq, nvmereq_wqe->sli4_xritag);
goto out_unlock;
}
/* Ready - mark outstanding as aborted by driver. */
nvmereq_wqe->iocb_flag |= LPFC_DRIVER_ABORTED;
lpfc_nvme_prep_abort_wqe(abts_buf, nvmereq_wqe->sli4_xritag, 0);
/* ABTS WQE must go to the same WQ as the WQE to be aborted */
abts_buf->iocb_flag |= LPFC_IO_NVME;
abts_buf->hba_wqidx = nvmereq_wqe->hba_wqidx;
abts_buf->vport = vport;
abts_buf->wqe_cmpl = lpfc_nvme_abort_fcreq_cmpl;
ret_val = lpfc_sli4_issue_wqe(phba, lpfc_nbuf->hdwq, abts_buf);
spin_unlock(&lpfc_nbuf->buf_lock); spin_unlock(&lpfc_nbuf->buf_lock);
spin_unlock_irqrestore(&phba->hbalock, flags); spin_unlock_irqrestore(&phba->hbalock, flags);
if (ret_val) { if (ret_val != WQE_SUCCESS) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"6137 Failed abts issue_wqe with status x%x " "6137 Failed abts issue_wqe with status x%x "
"for nvme_fcreq x%px.\n", "for nvme_fcreq x%px.\n",
ret_val, pnvme_fcreq); ret_val, pnvme_fcreq);
lpfc_sli_release_iocbq(phba, abts_buf);
return; return;
} }
lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_ABTS, lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_ABTS,
"6138 Transport Abort NVME Request Issued for " "6138 Transport Abort NVME Request Issued for "
"ox_id x%x on reqtag x%x\n", "ox_id x%x\n",
nvmereq_wqe->sli4_xritag, nvmereq_wqe->sli4_xritag);
abts_buf->iotag);
return; return;
out_unlock: out_unlock:
@ -2072,9 +1894,8 @@ static struct nvme_fc_port_template lpfc_nvme_template = {
.fcprqst_priv_sz = sizeof(struct lpfc_nvme_fcpreq_priv), .fcprqst_priv_sz = sizeof(struct lpfc_nvme_fcpreq_priv),
}; };
/** /*
* lpfc_get_nvme_buf - Get a nvme buffer from io_buf_list of the HBA * lpfc_get_nvme_buf - Get a nvme buffer from io_buf_list of the HBA
* @phba: The HBA for which this call is being executed.
* *
* This routine removes a nvme buffer from head of @hdwq io_buf_list * This routine removes a nvme buffer from head of @hdwq io_buf_list
* and returns to caller. * and returns to caller.
@ -2174,7 +1995,7 @@ lpfc_release_nvme_buf(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_ncmd)
/** /**
* lpfc_nvme_create_localport - Create/Bind an nvme localport instance. * lpfc_nvme_create_localport - Create/Bind an nvme localport instance.
- * @pvport - the lpfc_vport instance requesting a localport.
+ * @vport - the lpfc_vport instance requesting a localport.
  *
  * This routine is invoked to create an nvme localport instance to bind
  * to the nvme_fc_transport. It is called once during driver load
@@ -2280,6 +2101,8 @@ lpfc_nvme_lport_unreg_wait(struct lpfc_vport *vport,
 	int ret, i, pending = 0;
 	struct lpfc_sli_ring  *pring;
 	struct lpfc_hba  *phba = vport->phba;
+	struct lpfc_sli4_hdw_queue *qp;
+	int abts_scsi, abts_nvme;
 
 	/* Host transport has to clean up and confirm requiring an indefinite
 	 * wait. Print a message if a 10 second wait expires and renew the
@@ -2290,17 +2113,23 @@ lpfc_nvme_lport_unreg_wait(struct lpfc_vport *vport,
 		ret = wait_for_completion_timeout(lport_unreg_cmp, wait_tmo);
 		if (unlikely(!ret)) {
 			pending = 0;
+			abts_scsi = 0;
+			abts_nvme = 0;
 			for (i = 0; i < phba->cfg_hdw_queue; i++) {
-				pring = phba->sli4_hba.hdwq[i].io_wq->pring;
+				qp = &phba->sli4_hba.hdwq[i];
+				pring = qp->io_wq->pring;
 				if (!pring)
 					continue;
-				if (pring->txcmplq_cnt)
-					pending += pring->txcmplq_cnt;
+				pending += pring->txcmplq_cnt;
+				abts_scsi += qp->abts_scsi_io_bufs;
+				abts_nvme += qp->abts_nvme_io_bufs;
 			}
 			lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
 					 "6176 Lport x%px Localport x%px wait "
-					 "timed out. Pending %d. Renewing.\n",
-					 lport, vport->localport, pending);
+					 "timed out. Pending %d [%d:%d]. "
+					 "Renewing.\n",
+					 lport, vport->localport, pending,
+					 abts_scsi, abts_nvme);
 			continue;
 		}
 		break;
@@ -2313,7 +2142,7 @@ lpfc_nvme_lport_unreg_wait(struct lpfc_vport *vport,
 
 /**
  * lpfc_nvme_destroy_localport - Destroy lpfc_nvme bound to nvme transport.
- * @pnvme: pointer to lpfc nvme data structure.
+ * @vport: pointer to a host virtual N_Port data structure
  *
  * This routine is invoked to destroy all lports bound to the phba.
  * The lport memory was allocated by the nvme fc transport and is
@@ -2454,14 +2283,18 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 	else
 		rpinfo.dev_loss_tmo = vport->cfg_devloss_tmo;
 
-	spin_lock_irq(&vport->phba->hbalock);
+	spin_lock_irq(&ndlp->lock);
 	oldrport = lpfc_ndlp_get_nrport(ndlp);
 	if (oldrport) {
 		prev_ndlp = oldrport->ndlp;
-		spin_unlock_irq(&vport->phba->hbalock);
+		spin_unlock_irq(&ndlp->lock);
 	} else {
-		spin_unlock_irq(&vport->phba->hbalock);
-		lpfc_nlp_get(ndlp);
+		spin_unlock_irq(&ndlp->lock);
+		if (!lpfc_nlp_get(ndlp)) {
+			dev_warn(&vport->phba->pcidev->dev,
+				 "Warning - No node ref - exit register\n");
+			return 0;
+		}
 	}
 
 	ret = nvme_fc_register_remoteport(localport, &rpinfo, &remote_port);
@@ -2473,9 +2306,10 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 		/* Guard against an unregister/reregister
 		 * race that leaves the WAIT flag set.
 		 */
-		spin_lock_irq(&vport->phba->hbalock);
-		ndlp->upcall_flags &= ~NLP_WAIT_FOR_UNREG;
-		spin_unlock_irq(&vport->phba->hbalock);
+		spin_lock_irq(&ndlp->lock);
+		ndlp->fc4_xpt_flags &= ~NLP_WAIT_FOR_UNREG;
+		ndlp->fc4_xpt_flags |= NVME_XPT_REGD;
+		spin_unlock_irq(&ndlp->lock);
 
 		rport = remote_port->private;
 		if (oldrport) {
@@ -2483,10 +2317,10 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 			 * before dropping the ndlp ref from
 			 * register.
 			 */
-			spin_lock_irq(&vport->phba->hbalock);
+			spin_lock_irq(&ndlp->lock);
 			ndlp->nrport = NULL;
-			ndlp->upcall_flags &= ~NLP_WAIT_FOR_UNREG;
-			spin_unlock_irq(&vport->phba->hbalock);
+			ndlp->fc4_xpt_flags &= ~NLP_WAIT_FOR_UNREG;
+			spin_unlock_irq(&ndlp->lock);
 
 			rport->ndlp = NULL;
 			rport->remoteport = NULL;
@@ -2495,8 +2329,7 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 			 * reference would cause a premature cleanup.
 			 */
 			if (prev_ndlp && prev_ndlp != ndlp) {
-				if ((!NLP_CHK_NODE_ACT(prev_ndlp)) ||
-				    (!prev_ndlp->nrport))
+				if (!prev_ndlp->nrport)
 					lpfc_nlp_put(prev_ndlp);
 			}
 		}
@@ -2505,9 +2338,9 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 		rport->remoteport = remote_port;
 		rport->lport = lport;
 		rport->ndlp = ndlp;
-		spin_lock_irq(&vport->phba->hbalock);
+		spin_lock_irq(&ndlp->lock);
 		ndlp->nrport = rport;
-		spin_unlock_irq(&vport->phba->hbalock);
+		spin_unlock_irq(&ndlp->lock);
 		lpfc_printf_vlog(vport, KERN_INFO,
 				 LOG_NVME_DISC | LOG_NODE,
 				 "6022 Bind lport x%px to remoteport x%px "
@@ -2532,7 +2365,7 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 #endif
 }
 
-/**
+/*
  * lpfc_nvme_rescan_port - Check to see if we should rescan this remoteport
  *
  * If the ndlp represents an NVME Target, that we are logged into,
@@ -2546,11 +2379,11 @@ lpfc_nvme_rescan_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 	struct lpfc_nvme_rport *nrport;
 	struct nvme_fc_remote_port *remoteport = NULL;
 
-	spin_lock_irq(&vport->phba->hbalock);
+	spin_lock_irq(&ndlp->lock);
 	nrport = lpfc_ndlp_get_nrport(ndlp);
 	if (nrport)
 		remoteport = nrport->remoteport;
-	spin_unlock_irq(&vport->phba->hbalock);
+	spin_unlock_irq(&ndlp->lock);
 
 	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
 			 "6170 Rescan NPort DID x%06x type x%x "
@@ -2613,20 +2446,21 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 	if (!lport)
 		goto input_err;
 
-	spin_lock_irq(&vport->phba->hbalock);
+	spin_lock_irq(&ndlp->lock);
 	rport = lpfc_ndlp_get_nrport(ndlp);
 	if (rport)
 		remoteport = rport->remoteport;
-	spin_unlock_irq(&vport->phba->hbalock);
+	spin_unlock_irq(&ndlp->lock);
 	if (!remoteport)
 		goto input_err;
 
 	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
 			 "6033 Unreg nvme remoteport x%px, portname x%llx, "
-			 "port_id x%06x, portstate x%x port type x%x\n",
+			 "port_id x%06x, portstate x%x port type x%x "
+			 "refcnt %d\n",
 			 remoteport, remoteport->port_name,
 			 remoteport->port_id, remoteport->port_state,
-			 ndlp->nlp_type);
+			 ndlp->nlp_type, kref_read(&ndlp->kref));
 
 	/* Sanity check ndlp type. Only call for NVME ports. Don't
 	 * clear any rport state until the transport calls back.
@@ -2636,7 +2470,9 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 		/* No concern about the role change on the nvme remoteport.
 		 * The transport will update it.
 		 */
-		ndlp->upcall_flags |= NLP_WAIT_FOR_UNREG;
+		spin_lock_irq(&vport->phba->hbalock);
+		ndlp->fc4_xpt_flags |= NLP_WAIT_FOR_UNREG;
+		spin_unlock_irq(&vport->phba->hbalock);
 
 		/* Don't let the host nvme transport keep sending keep-alives
 		 * on this remoteport. Vport is unloading, no recovery. The
@@ -2647,8 +2483,15 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 		(void)nvme_fc_set_remoteport_devloss(remoteport, 0);
 		ret = nvme_fc_unregister_remoteport(remoteport);
 
+		/* The driver no longer knows if the nrport memory is valid.
+		 * because the controller teardown process has begun and
+		 * is asynchronous. Break the binding in the ndlp. Also
+		 * remove the register ndlp reference to setup node release.
+		 */
+		ndlp->nrport = NULL;
+		lpfc_nlp_put(ndlp);
+
 		if (ret != 0) {
-			lpfc_nlp_put(ndlp);
 			lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
 					 "6167 NVME unregister failed %d "
 					 "port_state x%x\n",

View file

@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
+ * Copyright (C) 2017-2020 Broadcom. All Rights Reserved. The term *
  * Broadcom refers to Broadcom Inc. and/or its subsidiaries.       *
  * Copyright (C) 2004-2016 Emulex. All rights reserved.            *
  * EMULEX and SLI are trademarks of Emulex.                        *
@@ -38,7 +38,7 @@
 #define LPFC_NVME_INFO_MORE_STR		"\nCould be more info...\n"
 
 #define lpfc_ndlp_get_nrport(ndlp) \
-	((!ndlp->nrport || (ndlp->upcall_flags & NLP_WAIT_FOR_UNREG)) \
+	((!ndlp->nrport || (ndlp->fc4_xpt_flags & NLP_WAIT_FOR_UNREG)) \
 	? NULL : ndlp->nrport)
 
 struct lpfc_nvme_qhandle {
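
The header change only renames the flag word that lpfc_ndlp_get_nrport() consults (upcall_flags becomes fc4_xpt_flags); the accessor still hides the rport binding while an unregister is pending. The same guard written as a standalone C function, with an illustrative flag value rather than lpfc's real one:

#include <stddef.h>

#define NLP_WAIT_FOR_UNREG 0x1	/* illustrative bit, not lpfc's value */

struct node {
	unsigned int fc4_xpt_flags;
	void *nrport;
};

/* Same shape as the lpfc_ndlp_get_nrport() macro: callers never see
 * a binding whose unregister upcall is still outstanding. */
static inline void *node_get_nrport(struct node *ndlp)
{
	if (!ndlp->nrport || (ndlp->fc4_xpt_flags & NLP_WAIT_FOR_UNREG))
		return NULL;
	return ndlp->nrport;
}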

View file

@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channsel Host Bus Adapters.                               *
- * Copyright (C) 2017-2019 Broadcom. All Rights Reserved. The term *
+ * Copyright (C) 2017-2020 Broadcom. All Rights Reserved. The term *
  * Broadcom refers to Broadcom Inc. and/or its subsidiaries.       *
  * Copyright (C) 2004-2016 Emulex. All rights reserved.            *
  * EMULEX and SLI are trademarks of Emulex.                        *
@@ -105,7 +105,7 @@ lpfc_nvmet_cmd_template(void)
 	/* Word 9 - reqtag, rcvoxid is variable */
 
 	/* Word 10 - wqes, xc is variable */
-	bf_set(wqe_nvme, &wqe->fcp_tsend.wqe_com, 1);
+	bf_set(wqe_xchg, &wqe->fcp_tsend.wqe_com, LPFC_NVME_XCHG);
 	bf_set(wqe_dbde, &wqe->fcp_tsend.wqe_com, 1);
 	bf_set(wqe_wqes, &wqe->fcp_tsend.wqe_com, 0);
 	bf_set(wqe_xc, &wqe->fcp_tsend.wqe_com, 1);
@@ -153,7 +153,7 @@ lpfc_nvmet_cmd_template(void)
 	/* Word 10 - xc is variable */
 	bf_set(wqe_dbde, &wqe->fcp_treceive.wqe_com, 1);
 	bf_set(wqe_wqes, &wqe->fcp_treceive.wqe_com, 0);
-	bf_set(wqe_nvme, &wqe->fcp_treceive.wqe_com, 1);
+	bf_set(wqe_xchg, &wqe->fcp_treceive.wqe_com, LPFC_NVME_XCHG);
 	bf_set(wqe_iod, &wqe->fcp_treceive.wqe_com, LPFC_WQE_IOD_READ);
 	bf_set(wqe_lenloc, &wqe->fcp_treceive.wqe_com, LPFC_WQE_LENLOC_WORD12);
 	bf_set(wqe_xc, &wqe->fcp_tsend.wqe_com, 1);
@@ -195,7 +195,7 @@ lpfc_nvmet_cmd_template(void)
 
 	/* Word 10 wqes, xc is variable */
 	bf_set(wqe_dbde, &wqe->fcp_trsp.wqe_com, 1);
-	bf_set(wqe_nvme, &wqe->fcp_trsp.wqe_com, 1);
+	bf_set(wqe_xchg, &wqe->fcp_trsp.wqe_com, LPFC_NVME_XCHG);
 	bf_set(wqe_wqes, &wqe->fcp_trsp.wqe_com, 0);
 	bf_set(wqe_xc, &wqe->fcp_trsp.wqe_com, 0);
 	bf_set(wqe_iod, &wqe->fcp_trsp.wqe_com, LPFC_WQE_IOD_NONE);
@@ -371,8 +371,7 @@ finish:
 /**
  * lpfc_nvmet_ctxbuf_post - Repost a NVMET RQ DMA buffer and clean up context
  * @phba: HBA buffer is associated with
- * @ctxp: context to clean up
- * @mp: Buffer to free
+ * @ctx_buf: ctx buffer context
  *
  * Description: Frees the given DMA buffer in the appropriate way given by
  * reposting it to its associated RQ so it can be reused.
@@ -1291,10 +1290,10 @@ lpfc_nvmet_ls_req_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 
 /**
  * lpfc_nvmet_ls_req - Issue an Link Service request
- * @targetport - pointer to target instance registered with nvmet transport.
- * @hosthandle - hosthandle set by the driver in a prior ls_rqst_rcv.
+ * @targetport: pointer to target instance registered with nvmet transport.
+ * @hosthandle: hosthandle set by the driver in a prior ls_rqst_rcv.
  *               Driver sets this value to the ndlp pointer.
- * @pnvme_lsreq - the transport nvme_ls_req structure for the LS
+ * @pnvme_lsreq: the transport nvme_ls_req structure for the LS
  *
  * Driver registers this routine to handle any link service request
 * from the nvme_fc transport to a remote nvme-aware port.
@@ -1336,9 +1335,9 @@ lpfc_nvmet_ls_req(struct nvmet_fc_target_port *targetport,
 /**
  * lpfc_nvmet_ls_abort - Abort a prior NVME LS request
  * @targetport: Transport targetport, that LS was issued from.
- * @hosthandle - hosthandle set by the driver in a prior ls_rqst_rcv.
+ * @hosthandle: hosthandle set by the driver in a prior ls_rqst_rcv.
  *               Driver sets this value to the ndlp pointer.
- * @pnvme_lsreq - the transport nvme_ls_req structure for LS to be aborted
+ * @pnvme_lsreq: the transport nvme_ls_req structure for LS to be aborted
  *
  * Driver registers this routine to abort an NVME LS request that is
  * in progress (from the transports perspective).
@@ -1807,7 +1806,7 @@ lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba,
 		rrq_empty = list_empty(&phba->active_rrq_list);
 		spin_unlock_irqrestore(&phba->hbalock, iflag);
 		ndlp = lpfc_findnode_did(phba->pport, ctxp->sid);
-		if (ndlp && NLP_CHK_NODE_ACT(ndlp) &&
+		if (ndlp &&
 		    (ndlp->nlp_state == NLP_STE_UNMAPPED_NODE ||
 		     ndlp->nlp_state == NLP_STE_MAPPED_NODE)) {
 			lpfc_set_rrq_active(phba, ndlp,
@@ -2597,7 +2596,7 @@ lpfc_nvmet_prep_ls_wqe(struct lpfc_hba *phba,
 	}
 
 	ndlp = lpfc_findnode_did(phba->pport, ctxp->sid);
-	if (!ndlp || !NLP_CHK_NODE_ACT(ndlp) ||
+	if (!ndlp ||
 	    ((ndlp->nlp_state != NLP_STE_UNMAPPED_NODE) &&
 	     (ndlp->nlp_state != NLP_STE_MAPPED_NODE))) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
@@ -2717,7 +2716,7 @@ lpfc_nvmet_prep_fcp_wqe(struct lpfc_hba *phba,
 	}
 
 	ndlp = lpfc_findnode_did(phba->pport, ctxp->sid);
-	if (!ndlp || !NLP_CHK_NODE_ACT(ndlp) ||
+	if (!ndlp ||
 	    ((ndlp->nlp_state != NLP_STE_UNMAPPED_NODE) &&
 	     (ndlp->nlp_state != NLP_STE_MAPPED_NODE))) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
@@ -3249,7 +3248,7 @@ lpfc_nvmet_unsol_issue_abort(struct lpfc_hba *phba,
 	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
 
 	ndlp = lpfc_findnode_did(phba->pport, sid);
-	if (!ndlp || !NLP_CHK_NODE_ACT(ndlp) ||
+	if (!ndlp ||
 	    ((ndlp->nlp_state != NLP_STE_UNMAPPED_NODE) &&
 	     (ndlp->nlp_state != NLP_STE_MAPPED_NODE))) {
 		if (tgtp)
@@ -3328,6 +3327,46 @@ lpfc_nvmet_unsol_issue_abort(struct lpfc_hba *phba,
 	return 1;
 }
 
+/**
+ * lpfc_nvmet_prep_abort_wqe - set up 'abort' work queue entry.
+ * @pwqeq: Pointer to command iocb.
+ * @xritag: Tag that uniqely identifies the local exchange resource.
+ * @opt: Option bits -
+ *	bit 0 = inhibit sending abts on the link
+ *
+ * This function is called with hbalock held.
+ **/
+static void
+lpfc_nvmet_prep_abort_wqe(struct lpfc_iocbq *pwqeq, u16 xritag, u8 opt)
+{
+	union lpfc_wqe128 *wqe = &pwqeq->wqe;
+
+	/* WQEs are reused. Clear stale data and set key fields to
+	 * zero like ia, iaab, iaar, xri_tag, and ctxt_tag.
+	 */
+	memset(wqe, 0, sizeof(*wqe));
+
+	if (opt & INHIBIT_ABORT)
+		bf_set(abort_cmd_ia, &wqe->abort_cmd, 1);
+	/* Abort specified xri tag, with the mask deliberately zeroed */
+	bf_set(abort_cmd_criteria, &wqe->abort_cmd, T_XRI_TAG);
+
+	bf_set(wqe_cmnd, &wqe->abort_cmd.wqe_com, CMD_ABORT_XRI_CX);
+
+	/* Abort the I/O associated with this outstanding exchange ID. */
+	wqe->abort_cmd.wqe_com.abort_tag = xritag;
+
+	/* iotag for the wqe completion. */
+	bf_set(wqe_reqtag, &wqe->abort_cmd.wqe_com, pwqeq->iotag);
+
+	bf_set(wqe_qosd, &wqe->abort_cmd.wqe_com, 1);
+	bf_set(wqe_lenloc, &wqe->abort_cmd.wqe_com, LPFC_WQE_LENLOC_NONE);
+
+	bf_set(wqe_cmd_type, &wqe->abort_cmd.wqe_com, OTHER_COMMAND);
+	bf_set(wqe_wqec, &wqe->abort_cmd.wqe_com, 1);
+	bf_set(wqe_cqid, &wqe->abort_cmd.wqe_com, LPFC_WQE_CQ_ID_DEFAULT);
+}
+
 static int
 lpfc_nvmet_sol_fcp_issue_abort(struct lpfc_hba *phba,
 			       struct lpfc_async_xchg_ctx *ctxp,
@@ -3347,7 +3386,7 @@ lpfc_nvmet_sol_fcp_issue_abort(struct lpfc_hba *phba,
 	}
 
 	ndlp = lpfc_findnode_did(phba->pport, sid);
-	if (!ndlp || !NLP_CHK_NODE_ACT(ndlp) ||
+	if (!ndlp ||
 	    ((ndlp->nlp_state != NLP_STE_UNMAPPED_NODE) &&
 	     (ndlp->nlp_state != NLP_STE_MAPPED_NODE))) {
 		atomic_inc(&tgtp->xmt_abort_rsp_error);
@@ -3423,7 +3462,7 @@ lpfc_nvmet_sol_fcp_issue_abort(struct lpfc_hba *phba,
 	/* Ready - mark outstanding as aborted by driver. */
 	abts_wqeq->iocb_flag |= LPFC_DRIVER_ABORTED;
 
-	lpfc_nvme_prep_abort_wqe(abts_wqeq, ctxp->wqeq->sli4_xritag, opt);
+	lpfc_nvmet_prep_abort_wqe(abts_wqeq, ctxp->wqeq->sli4_xritag, opt);
 
 	/* ABTS WQE must go to the same WQ as the WQE to be aborted */
 	abts_wqeq->hba_wqidx = ctxp->wqeq->hba_wqidx;
@@ -3596,8 +3635,8 @@ out:
 /**
  * lpfc_nvmet_invalidate_host
  *
- * @phba - pointer to the driver instance bound to an adapter port.
- * @ndlp - pointer to an lpfc_nodelist type
+ * @phba: pointer to the driver instance bound to an adapter port.
+ * @ndlp: pointer to an lpfc_nodelist type
  *
  * This routine upcalls the nvmet transport to invalidate an NVME
  * host to which this target instance had active connections.
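
The wqe_nvme to wqe_xchg conversions above are so small because lpfc addresses WQE bit-fields by name through its bf_set()/bf_get() macros, so widening a one-bit flag into a two-bit exchange-type field touches only the field definition and its call sites. A compilable sketch of that convention follows; the _SHIFT/_MASK/_WORD values and the LPFC_NVME_XCHG encoding here are made up for illustration, not the real lpfc_hw4.h definitions.

#include <stdint.h>
#include <stdio.h>

/* Each field is described by three macros keyed off its name, and
 * bf_set()/bf_get() paste the name together with _SHIFT/_MASK/_WORD,
 * following the convention the driver uses in lpfc_hw4.h. The
 * concrete values below are illustrative only. */
#define wqe_xchg_SHIFT 4
#define wqe_xchg_MASK  0x00000003
#define wqe_xchg_WORD  word10

#define bf_set(name, ptr, value) \
	((ptr)->name##_WORD = ((((value) & name##_MASK) << name##_SHIFT) | \
	 ((ptr)->name##_WORD & ~(name##_MASK << name##_SHIFT))))
#define bf_get(name, ptr) \
	(((ptr)->name##_WORD >> name##_SHIFT) & name##_MASK)

#define LPFC_NVME_XCHG 0x1	/* illustrative encoding */

struct wqe_common {
	uint32_t word10;	/* one 32-bit command word */
};

int main(void)
{
	struct wqe_common com = { 0 };

	/* Read-modify-write just the named field inside word10. */
	bf_set(wqe_xchg, &com, LPFC_NVME_XCHG);
	printf("word10 = 0x%08x xchg = %u\n",
	       com.word10, bf_get(wqe_xchg, &com));
	return 0;
}

Running the sketch prints word10 = 0x00000010 and xchg = 1: only the two bits named by wqe_xchg change, which is why the patch can retarget a field without touching neighboring bits in the same word.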

Some files were not shown because too many files changed in this diff