// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * Linux MegaRAID driver for SAS based RAID controllers
 *
 * Copyright (c) 2009-2013  LSI Corporation
 * Copyright (c) 2013-2016  Avago Technologies
 * Copyright (c) 2016-2018  Broadcom Inc.
 *
 * FILE: megaraid_sas_fp.c
 *
 * Authors: Broadcom Inc.
 *          Sumant Patro
 *          Varad Talamacki
 *          Manoj Jose
 *          Kashyap Desai <kashyap.desai@broadcom.com>
 *          Sumit Saxena <sumit.saxena@broadcom.com>
 *
 * Send feedback to: megaraidlinux.pdl@broadcom.com
 */

#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/list.h>
#include <linux/moduleparam.h>
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/uio.h>
#include <linux/uaccess.h>
#include <linux/fs.h>
#include <linux/compat.h>
#include <linux/blkdev.h>
#include <linux/poll.h>
#include <linux/irq_poll.h>

#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

#include "megaraid_sas_fusion.h"
#include "megaraid_sas.h"
#include <asm/div64.h>

#define LB_PENDING_CMDS_DEFAULT 4
static unsigned int lb_pending_cmds = LB_PENDING_CMDS_DEFAULT;
module_param(lb_pending_cmds, int, 0444);
MODULE_PARM_DESC(lb_pending_cmds, "Change raid-1 load balancing outstanding "
	"threshold. Valid Values are 1-128. Default: 4");


#define ABS_DIFF(a, b)   (((a) > (b)) ? ((a) - (b)) : ((b) - (a)))
#define MR_LD_STATE_OPTIMAL 3

#define SPAN_ROW_SIZE(map, ld, index_) (MR_LdSpanPtrGet(ld, index_, map)->spanRowSize)
#define SPAN_ROW_DATA_SIZE(map_, ld, index_)   (MR_LdSpanPtrGet(ld, index_, map)->spanRowDataSize)
#define SPAN_INVALID  0xff

/* Prototypes */
static void mr_update_span_set(struct MR_DRV_RAID_MAP_ALL *map,
	PLD_SPAN_INFO ldSpanInfo);
static u8 mr_spanset_get_phy_params(struct megasas_instance *instance, u32 ld,
	u64 stripRow, u16 stripRef, struct IO_REQUEST_INFO *io_info,
	struct RAID_CONTEXT *pRAID_Context, struct MR_DRV_RAID_MAP_ALL *map);
static u64 get_row_from_strip(struct megasas_instance *instance, u32 ld,
	u64 strip, struct MR_DRV_RAID_MAP_ALL *map);

u32 mega_mod64(u64 dividend, u32 divisor)
{
	u64 d;
	u32 remainder;

	if (!divisor)
		printk(KERN_ERR "megasas : DIVISOR is zero, in div fn\n");
	d = dividend;
	remainder = do_div(d, divisor);
	return remainder;
}

/**
 * mega_div64_32 - Do a 64-bit division
 * @dividend:	Dividend
 * @divisor:	Divisor
 *
 * @return quotient
 **/
static u64 mega_div64_32(uint64_t dividend, uint32_t divisor)
{
	u64 d = dividend;

	if (!divisor)
		printk(KERN_ERR "megasas : DIVISOR is zero in mod fn\n");

	do_div(d, divisor);

	return d;
}
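
/*
 * Worked example (illustrative, values hypothetical): for a 64-bit
 * logical block number 1000005 split over 64-sector strips,
 *
 *	mega_div64_32(1000005, 64) == 15625  (the strip number)
 *	mega_mod64(1000005, 64)    == 5      (the offset within that strip)
 *
 * do_div() is used so the 64-by-32 division also works on 32-bit
 * architectures that lack a native 64-bit divide.
 */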

struct MR_LD_RAID *MR_LdRaidGet(u32 ld, struct MR_DRV_RAID_MAP_ALL *map)
{
	return &map->raidMap.ldSpanMap[ld].ldRaid;
}

static struct MR_SPAN_BLOCK_INFO *MR_LdSpanInfoGet(u32 ld,
						   struct MR_DRV_RAID_MAP_ALL
						   *map)
{
	return &map->raidMap.ldSpanMap[ld].spanBlock[0];
}

static u8 MR_LdDataArmGet(u32 ld, u32 armIdx, struct MR_DRV_RAID_MAP_ALL *map)
{
	return map->raidMap.ldSpanMap[ld].dataArmMap[armIdx];
}

u16 MR_ArPdGet(u32 ar, u32 arm, struct MR_DRV_RAID_MAP_ALL *map)
{
	return le16_to_cpu(map->raidMap.arMapInfo[ar].pd[arm]);
}

u16 MR_LdSpanArrayGet(u32 ld, u32 span, struct MR_DRV_RAID_MAP_ALL *map)
{
	return le16_to_cpu(map->raidMap.ldSpanMap[ld].spanBlock[span].span.arrayRef);
}

__le16 MR_PdDevHandleGet(u32 pd, struct MR_DRV_RAID_MAP_ALL *map)
{
	return map->raidMap.devHndlInfo[pd].curDevHdl;
}

static u8 MR_PdInterfaceTypeGet(u32 pd, struct MR_DRV_RAID_MAP_ALL *map)
{
	return map->raidMap.devHndlInfo[pd].interfaceType;
}

u16 MR_GetLDTgtId(u32 ld, struct MR_DRV_RAID_MAP_ALL *map)
{
	return le16_to_cpu(map->raidMap.ldSpanMap[ld].ldRaid.targetId);
}

u16 MR_TargetIdToLdGet(u32 ldTgtId, struct MR_DRV_RAID_MAP_ALL *map)
{
	return map->raidMap.ldTgtIdToLd[ldTgtId];
}

static struct MR_LD_SPAN *MR_LdSpanPtrGet(u32 ld, u32 span,
					  struct MR_DRV_RAID_MAP_ALL *map)
{
	return &map->raidMap.ldSpanMap[ld].spanBlock[span].span;
}
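
/*
 * How these accessors fit together (a sketch; the local variable names
 * are hypothetical):
 *
 *	u16 ld     = MR_TargetIdToLdGet(ldTgtId, map);    target id -> LD index
 *	struct MR_LD_RAID *raid = MR_LdRaidGet(ld, map);  LD -> RAID geometry
 *	u16 arRef  = MR_LdSpanArrayGet(ld, span, map);    span -> array ref
 *	u32 pd     = MR_ArPdGet(arRef, arm, map);         arm -> physical disk
 *	__le16 hdl = MR_PdDevHandleGet(pd, map);          disk -> device handle
 *
 * i.e. each fast path I/O walks from the SCSI target id down to the
 * firmware device handle through the driver's copy of the RAID map.
 */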

/*
 * This function will populate the driver map using the firmware raid map
 */
static int MR_PopulateDrvRaidMap(struct megasas_instance *instance, u64 map_id)
{
	struct fusion_context *fusion = instance->ctrl_context;
	struct MR_FW_RAID_MAP_ALL     *fw_map_old    = NULL;
	struct MR_FW_RAID_MAP         *pFwRaidMap    = NULL;
	int i, j;
	u16 ld_count;
	struct MR_FW_RAID_MAP_DYNAMIC *fw_map_dyn;
	struct MR_FW_RAID_MAP_EXT *fw_map_ext;
	struct MR_RAID_MAP_DESC_TABLE *desc_table;


	struct MR_DRV_RAID_MAP_ALL *drv_map =
			fusion->ld_drv_map[(map_id & 1)];
	struct MR_DRV_RAID_MAP *pDrvRaidMap = &drv_map->raidMap;
	void *raid_map_data = NULL;

	memset(drv_map, 0, fusion->drv_map_sz);
	memset(pDrvRaidMap->ldTgtIdToLd,
	       0xff, (sizeof(u16) * MAX_LOGICAL_DRIVES_DYN));

	if (instance->max_raid_mapsize) {
		fw_map_dyn = fusion->ld_map[(map_id & 1)];
		desc_table =
		(struct MR_RAID_MAP_DESC_TABLE *)((void *)fw_map_dyn + le32_to_cpu(fw_map_dyn->desc_table_offset));
		if (desc_table != fw_map_dyn->raid_map_desc_table)
			dev_dbg(&instance->pdev->dev, "offsets of desc table are not matching desc %p original %p\n",
				desc_table, fw_map_dyn->raid_map_desc_table);

		ld_count = (u16)le16_to_cpu(fw_map_dyn->ld_count);
		pDrvRaidMap->ldCount = (__le16)cpu_to_le16(ld_count);
		pDrvRaidMap->fpPdIoTimeoutSec =
			fw_map_dyn->fp_pd_io_timeout_sec;
		pDrvRaidMap->totalSize =
			cpu_to_le32(sizeof(struct MR_DRV_RAID_MAP_ALL));
		/* point to actual data starting point*/
		raid_map_data = (void *)fw_map_dyn +
			le32_to_cpu(fw_map_dyn->desc_table_offset) +
			le32_to_cpu(fw_map_dyn->desc_table_size);

		for (i = 0; i < le32_to_cpu(fw_map_dyn->desc_table_num_elements); ++i) {
			switch (le32_to_cpu(desc_table->raid_map_desc_type)) {
			case RAID_MAP_DESC_TYPE_DEVHDL_INFO:
				fw_map_dyn->dev_hndl_info =
				(struct MR_DEV_HANDLE_INFO *)(raid_map_data + le32_to_cpu(desc_table->raid_map_desc_offset));
				memcpy(pDrvRaidMap->devHndlInfo,
				       fw_map_dyn->dev_hndl_info,
				       sizeof(struct MR_DEV_HANDLE_INFO) *
				       le32_to_cpu(desc_table->raid_map_desc_elements));
				break;
			case RAID_MAP_DESC_TYPE_TGTID_INFO:
				fw_map_dyn->ld_tgt_id_to_ld =
					(u16 *)(raid_map_data +
					le32_to_cpu(desc_table->raid_map_desc_offset));
				for (j = 0; j < le32_to_cpu(desc_table->raid_map_desc_elements); j++) {
					pDrvRaidMap->ldTgtIdToLd[j] =
						le16_to_cpu(fw_map_dyn->ld_tgt_id_to_ld[j]);
				}
				break;
			case RAID_MAP_DESC_TYPE_ARRAY_INFO:
				fw_map_dyn->ar_map_info =
					(struct MR_ARRAY_INFO *)
					(raid_map_data + le32_to_cpu(desc_table->raid_map_desc_offset));
				memcpy(pDrvRaidMap->arMapInfo,
				       fw_map_dyn->ar_map_info,
				       sizeof(struct MR_ARRAY_INFO) *
				       le32_to_cpu(desc_table->raid_map_desc_elements));
				break;
			case RAID_MAP_DESC_TYPE_SPAN_INFO:
				fw_map_dyn->ld_span_map =
					(struct MR_LD_SPAN_MAP *)
					(raid_map_data +
					le32_to_cpu(desc_table->raid_map_desc_offset));
				memcpy(pDrvRaidMap->ldSpanMap,
				       fw_map_dyn->ld_span_map,
				       sizeof(struct MR_LD_SPAN_MAP) *
				       le32_to_cpu(desc_table->raid_map_desc_elements));
				break;
			default:
				dev_dbg(&instance->pdev->dev, "wrong number of desctableElements %d\n",
					fw_map_dyn->desc_table_num_elements);
			}
			++desc_table;
		}

	} else if (instance->supportmax256vd) {
		fw_map_ext =
			(struct MR_FW_RAID_MAP_EXT *)fusion->ld_map[(map_id & 1)];
		ld_count = (u16)le16_to_cpu(fw_map_ext->ldCount);
		if (ld_count > MAX_LOGICAL_DRIVES_EXT) {
			dev_dbg(&instance->pdev->dev, "megaraid_sas: LD count exposed in RAID map is not valid\n");
			return 1;
		}

		pDrvRaidMap->ldCount = (__le16)cpu_to_le16(ld_count);
		pDrvRaidMap->fpPdIoTimeoutSec = fw_map_ext->fpPdIoTimeoutSec;
		for (i = 0; i < (MAX_LOGICAL_DRIVES_EXT); i++)
			pDrvRaidMap->ldTgtIdToLd[i] =
				(u16)fw_map_ext->ldTgtIdToLd[i];
		memcpy(pDrvRaidMap->ldSpanMap, fw_map_ext->ldSpanMap,
		       sizeof(struct MR_LD_SPAN_MAP) * ld_count);
		memcpy(pDrvRaidMap->arMapInfo, fw_map_ext->arMapInfo,
		       sizeof(struct MR_ARRAY_INFO) * MAX_API_ARRAYS_EXT);
		memcpy(pDrvRaidMap->devHndlInfo, fw_map_ext->devHndlInfo,
		       sizeof(struct MR_DEV_HANDLE_INFO) *
		       MAX_RAIDMAP_PHYSICAL_DEVICES);

		/* New Raid map will not set totalSize, so keep expected value
		 * for legacy code in ValidateMapInfo
		 */
		pDrvRaidMap->totalSize =
			cpu_to_le32(sizeof(struct MR_FW_RAID_MAP_EXT));
	} else {
		fw_map_old = (struct MR_FW_RAID_MAP_ALL *)
			fusion->ld_map[(map_id & 1)];
		pFwRaidMap = &fw_map_old->raidMap;
		ld_count = (u16)le32_to_cpu(pFwRaidMap->ldCount);
		if (ld_count > MAX_LOGICAL_DRIVES) {
			dev_dbg(&instance->pdev->dev,
				"LD count exposed in RAID map is not valid\n");
			return 1;
		}

		pDrvRaidMap->totalSize = pFwRaidMap->totalSize;
		pDrvRaidMap->ldCount = (__le16)cpu_to_le16(ld_count);
		pDrvRaidMap->fpPdIoTimeoutSec = pFwRaidMap->fpPdIoTimeoutSec;
		for (i = 0; i < MAX_RAIDMAP_LOGICAL_DRIVES + MAX_RAIDMAP_VIEWS; i++)
			pDrvRaidMap->ldTgtIdToLd[i] =
				(u8)pFwRaidMap->ldTgtIdToLd[i];
		for (i = 0; i < ld_count; i++) {
			pDrvRaidMap->ldSpanMap[i] = pFwRaidMap->ldSpanMap[i];
		}
		memcpy(pDrvRaidMap->arMapInfo, pFwRaidMap->arMapInfo,
			sizeof(struct MR_ARRAY_INFO) * MAX_RAIDMAP_ARRAYS);
		memcpy(pDrvRaidMap->devHndlInfo, pFwRaidMap->devHndlInfo,
			sizeof(struct MR_DEV_HANDLE_INFO) *
			MAX_RAIDMAP_PHYSICAL_DEVICES);
	}

	return 0;
}

/*
 * This function will validate Map info data provided by FW
 */
u8 MR_ValidateMapInfo(struct megasas_instance *instance, u64 map_id)
{
	struct fusion_context *fusion;
	struct MR_DRV_RAID_MAP_ALL *drv_map;
	struct MR_DRV_RAID_MAP *pDrvRaidMap;
	struct LD_LOAD_BALANCE_INFO *lbInfo;
	PLD_SPAN_INFO ldSpanInfo;
	struct MR_LD_RAID         *raid;
	u16 num_lds, i;
	u16 ld;
	u32 expected_size;

	if (MR_PopulateDrvRaidMap(instance, map_id))
		return 0;

	fusion = instance->ctrl_context;
	drv_map = fusion->ld_drv_map[(map_id & 1)];
	pDrvRaidMap = &drv_map->raidMap;

	lbInfo = fusion->load_balance_info;
	ldSpanInfo = fusion->log_to_span;

	if (instance->max_raid_mapsize)
		expected_size = sizeof(struct MR_DRV_RAID_MAP_ALL);
	else if (instance->supportmax256vd)
		expected_size = sizeof(struct MR_FW_RAID_MAP_EXT);
	else
		expected_size =
			(sizeof(struct MR_FW_RAID_MAP) - sizeof(struct MR_LD_SPAN_MAP) +
			(sizeof(struct MR_LD_SPAN_MAP) * le16_to_cpu(pDrvRaidMap->ldCount)));

	if (le32_to_cpu(pDrvRaidMap->totalSize) != expected_size) {
		dev_dbg(&instance->pdev->dev, "megasas: map info structure size 0x%x",
			le32_to_cpu(pDrvRaidMap->totalSize));
		dev_dbg(&instance->pdev->dev, "is not matching expected size 0x%x\n",
			(unsigned int)expected_size);
		dev_err(&instance->pdev->dev, "megasas: span map %x, pDrvRaidMap->totalSize : %x\n",
			(unsigned int)sizeof(struct MR_LD_SPAN_MAP),
			le32_to_cpu(pDrvRaidMap->totalSize));
		return 0;
	}

	if (instance->UnevenSpanSupport)
		mr_update_span_set(drv_map, ldSpanInfo);

	if (lbInfo)
		mr_update_load_balance_params(drv_map, lbInfo);

	num_lds = le16_to_cpu(drv_map->raidMap.ldCount);

	memcpy(instance->ld_ids_prev,
	       instance->ld_ids_from_raidmap,
	       sizeof(instance->ld_ids_from_raidmap));
	memset(instance->ld_ids_from_raidmap, 0xff, MEGASAS_MAX_LD_IDS);
	/*Convert Raid capability values to CPU arch */
	for (i = 0; (num_lds > 0) && (i < MAX_LOGICAL_DRIVES_EXT); i++) {
		ld = MR_TargetIdToLdGet(i, drv_map);

		/* For non existing VDs, iterate to next VD*/
		if (ld >= MEGASAS_MAX_SUPPORTED_LD_IDS)
			continue;

		raid = MR_LdRaidGet(ld, drv_map);
		le32_to_cpus((u32 *)&raid->capability);
		instance->ld_ids_from_raidmap[i] = i;
		num_lds--;
	}

	return 1;
}

static u32 MR_GetSpanBlock(u32 ld, u64 row, u64 *span_blk,
		struct MR_DRV_RAID_MAP_ALL *map)
{
	struct MR_SPAN_BLOCK_INFO *pSpanBlock = MR_LdSpanInfoGet(ld, map);
	struct MR_QUAD_ELEMENT    *quad;
	struct MR_LD_RAID         *raid = MR_LdRaidGet(ld, map);
	u32                span, j;

	for (span = 0; span < raid->spanDepth; span++, pSpanBlock++) {

		for (j = 0; j < le32_to_cpu(pSpanBlock->block_span_info.noElements); j++) {
			quad = &pSpanBlock->block_span_info.quad[j];

			if (le32_to_cpu(quad->diff) == 0)
				return SPAN_INVALID;
			if (le64_to_cpu(quad->logStart) <= row && row <=
				le64_to_cpu(quad->logEnd) && (mega_mod64(row - le64_to_cpu(quad->logStart),
				le32_to_cpu(quad->diff))) == 0) {
				if (span_blk != NULL) {
					u64  blk;
					blk = mega_div64_32((row-le64_to_cpu(quad->logStart)), le32_to_cpu(quad->diff));

					blk = (blk + le64_to_cpu(quad->offsetInSpan)) << raid->stripeShift;
					*span_blk = blk;
				}
				return span;
			}
		}
	}
	return SPAN_INVALID;
}
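
/*
 * Worked example of the quad test above (illustrative numbers): suppose a
 * quad describes logStart = 0, logEnd = 99, diff = 2, offsetInSpan = 10,
 * and the LD uses stripeShift = 6 (64-sector stripes).  For row = 8:
 *
 *	8 lies within [0, 99] and (8 - 0) % 2 == 0, so the quad matches;
 *	blk = (8 - 0) / 2 + 10 = 14, then 14 << 6 = 896
 *
 * i.e. the row starts at block 896 of that span.  A row that fails the
 * range or modulo test falls through to the next quad element.
 */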

/*
******************************************************************************
*
* This routine calculates the Span block for given row using spanset.
*
* Inputs :
*    instance - HBA instance
*    ld   - Logical drive number
*    row        - Row number
*    map    - LD map
*
* Outputs :
*
*    span          - Span number
*    block         - Absolute Block number in the physical disk
*    div_error     - Divide error code.
*/

static u32 mr_spanset_get_span_block(struct megasas_instance *instance,
		u32 ld, u64 row, u64 *span_blk, struct MR_DRV_RAID_MAP_ALL *map)
{
	struct fusion_context *fusion = instance->ctrl_context;
	struct MR_LD_RAID         *raid = MR_LdRaidGet(ld, map);
	LD_SPAN_SET *span_set;
	struct MR_QUAD_ELEMENT    *quad;
	u32    span, info;
	PLD_SPAN_INFO ldSpanInfo = fusion->log_to_span;

	for (info = 0; info < MAX_QUAD_DEPTH; info++) {
		span_set = &(ldSpanInfo[ld].span_set[info]);

		if (span_set->span_row_data_width == 0)
			break;

		if (row > span_set->data_row_end)
			continue;

		for (span = 0; span < raid->spanDepth; span++)
			if (le32_to_cpu(map->raidMap.ldSpanMap[ld].spanBlock[span].
				block_span_info.noElements) >= info+1) {
				quad = &map->raidMap.ldSpanMap[ld].
					spanBlock[span].
					block_span_info.quad[info];
				if (le32_to_cpu(quad->diff) == 0)
					return SPAN_INVALID;
				if (le64_to_cpu(quad->logStart) <= row  &&
					row <= le64_to_cpu(quad->logEnd)  &&
					(mega_mod64(row - le64_to_cpu(quad->logStart),
						le32_to_cpu(quad->diff))) == 0) {
					if (span_blk != NULL) {
						u64  blk;
						blk = mega_div64_32
						    ((row - le64_to_cpu(quad->logStart)),
						    le32_to_cpu(quad->diff));
						blk = (blk + le64_to_cpu(quad->offsetInSpan))
							<< raid->stripeShift;
						*span_blk = blk;
					}
					return span;
				}
			}
	}
	return SPAN_INVALID;
}

/*
******************************************************************************
*
* This routine calculates the row for given strip using spanset.
*
* Inputs :
*    instance - HBA instance
*    ld   - Logical drive number
*    Strip        - Strip
*    map    - LD map
*
* Outputs :
*
*    row         - row associated with strip
*/

static u64 get_row_from_strip(struct megasas_instance *instance,
	u32 ld, u64 strip, struct MR_DRV_RAID_MAP_ALL *map)
{
	struct fusion_context *fusion = instance->ctrl_context;
	struct MR_LD_RAID	*raid = MR_LdRaidGet(ld, map);
	LD_SPAN_SET	*span_set;
	PLD_SPAN_INFO	ldSpanInfo = fusion->log_to_span;
	u32		info, strip_offset, span, span_offset;
	u64		span_set_Strip, span_set_Row, retval;

	for (info = 0; info < MAX_QUAD_DEPTH; info++) {
		span_set = &(ldSpanInfo[ld].span_set[info]);

		if (span_set->span_row_data_width == 0)
			break;
		if (strip > span_set->data_strip_end)
			continue;

		span_set_Strip = strip - span_set->data_strip_start;
		strip_offset = mega_mod64(span_set_Strip,
				span_set->span_row_data_width);
		span_set_Row = mega_div64_32(span_set_Strip,
				span_set->span_row_data_width) * span_set->diff;
		for (span = 0, span_offset = 0; span < raid->spanDepth; span++)
			if (le32_to_cpu(map->raidMap.ldSpanMap[ld].spanBlock[span].
				block_span_info.noElements) >= info+1) {
				if (strip_offset >=
					span_set->strip_offset[span])
					span_offset++;
				else
					break;
			}

		retval = (span_set->data_row_start + span_set_Row +
				(span_offset - 1));
		return retval;
	}
	return -1LLU;
}


/*
******************************************************************************
*
* This routine calculates the Start Strip for given row using spanset.
*
* Inputs :
*    instance - HBA instance
*    ld   - Logical drive number
*    row        - Row number
*    map    - LD map
*
* Outputs :
*
*    Strip         - Start strip associated with row
*/

static u64 get_strip_from_row(struct megasas_instance *instance,
		u32 ld, u64 row, struct MR_DRV_RAID_MAP_ALL *map)
{
	struct fusion_context *fusion = instance->ctrl_context;
	struct MR_LD_RAID         *raid = MR_LdRaidGet(ld, map);
	LD_SPAN_SET *span_set;
	struct MR_QUAD_ELEMENT    *quad;
	PLD_SPAN_INFO ldSpanInfo = fusion->log_to_span;
	u32    span, info;
	u64  strip;

	for (info = 0; info < MAX_QUAD_DEPTH; info++) {
		span_set = &(ldSpanInfo[ld].span_set[info]);

		if (span_set->span_row_data_width == 0)
			break;
		if (row > span_set->data_row_end)
			continue;

		for (span = 0; span < raid->spanDepth; span++)
			if (le32_to_cpu(map->raidMap.ldSpanMap[ld].spanBlock[span].
				block_span_info.noElements) >= info+1) {
				quad = &map->raidMap.ldSpanMap[ld].
					spanBlock[span].block_span_info.quad[info];
				if (le64_to_cpu(quad->logStart) <= row  &&
					row <= le64_to_cpu(quad->logEnd)  &&
					mega_mod64((row - le64_to_cpu(quad->logStart)),
					le32_to_cpu(quad->diff)) == 0) {
					strip = mega_div64_32
						(((row - span_set->data_row_start)
							- le64_to_cpu(quad->logStart)),
						le32_to_cpu(quad->diff));
					strip *= span_set->span_row_data_width;
					strip += span_set->data_strip_start;
					strip += span_set->strip_offset[span];
					return strip;
				}
			}
	}
	dev_err(&instance->pdev->dev, "get_strip_from_row: "
		"returns invalid strip for ld=%x, row=%lx\n",
		ld, (long unsigned int)row);
	return -1;
}

/*
******************************************************************************
*
* This routine calculates the Physical Arm for given strip using spanset.
*
* Inputs :
*    instance - HBA instance
*    ld   - Logical drive number
*    strip      - Strip
*    map    - LD map
*
* Outputs :
*
*    Phys Arm         - Phys Arm associated with strip
*/

static u32 get_arm_from_strip(struct megasas_instance *instance,
	u32 ld, u64 strip, struct MR_DRV_RAID_MAP_ALL *map)
{
	struct fusion_context *fusion = instance->ctrl_context;
	struct MR_LD_RAID         *raid = MR_LdRaidGet(ld, map);
	LD_SPAN_SET *span_set;
	PLD_SPAN_INFO ldSpanInfo = fusion->log_to_span;
	u32    info, strip_offset, span, span_offset, retval;

	for (info = 0 ; info < MAX_QUAD_DEPTH; info++) {
		span_set = &(ldSpanInfo[ld].span_set[info]);

		if (span_set->span_row_data_width == 0)
			break;
		if (strip > span_set->data_strip_end)
			continue;

		strip_offset = (uint)mega_mod64
				((strip - span_set->data_strip_start),
				span_set->span_row_data_width);

		for (span = 0, span_offset = 0; span < raid->spanDepth; span++)
			if (le32_to_cpu(map->raidMap.ldSpanMap[ld].spanBlock[span].
				block_span_info.noElements) >= info+1) {
				if (strip_offset >=
					span_set->strip_offset[span])
					span_offset =
						span_set->strip_offset[span];
				else
					break;
			}

		retval = (strip_offset - span_offset);
		return retval;
	}

	dev_err(&instance->pdev->dev, "get_arm_from_strip: "
		"returns invalid arm for ld=%x strip=%lx\n",
		ld, (long unsigned int)strip);

	return -1;
}

/* This Function will return Phys arm */
static u8 get_arm(struct megasas_instance *instance, u32 ld, u8 span, u64 stripe,
		struct MR_DRV_RAID_MAP_ALL *map)
{
	struct MR_LD_RAID  *raid = MR_LdRaidGet(ld, map);
	/* Need to check correct default value */
	u32    arm = 0;

	switch (raid->level) {
	case 0:
	case 5:
	case 6:
		arm = mega_mod64(stripe, SPAN_ROW_SIZE(map, ld, span));
		break;
	case 1:
		/* start with logical arm */
		arm = get_arm_from_strip(instance, ld, stripe, map);
		if (arm != -1U)
			arm *= 2;
		break;
	}

	return arm;
}
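
/*
 * Example of the RAID 1 mapping above (illustrative): mirrored arms are
 * stored pairwise in the array, so logical arm 0 lives on physical arm 0
 * (mirror on arm 1), logical arm 1 on physical arm 2 (mirror on arm 3),
 * and so on -- hence the "arm *= 2".  For RAID 0/5/6 the arm is simply
 * the stripe number modulo the number of arms in the span row.
 */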


/*
******************************************************************************
*
* This routine calculates the arm, span and block for the specified stripe and
* reference in stripe using spanset
*
* Inputs :
*
*    ld   - Logical drive number
*    stripRow        - Stripe number
*    stripRef    - Reference in stripe
*
* Outputs :
*
*    span          - Span number
*    block         - Absolute Block number in the physical disk
*/
static u8 mr_spanset_get_phy_params(struct megasas_instance *instance, u32 ld,
		u64 stripRow, u16 stripRef, struct IO_REQUEST_INFO *io_info,
		struct RAID_CONTEXT *pRAID_Context,
		struct MR_DRV_RAID_MAP_ALL *map)
{
	struct MR_LD_RAID  *raid = MR_LdRaidGet(ld, map);
	u32     pd, arRef, r1_alt_pd;
	u8      physArm, span;
	u64     row;
	u8	retval = true;
	u64	*pdBlock = &io_info->pdBlock;
	__le16	*pDevHandle = &io_info->devHandle;
	u8	*pPdInterface = &io_info->pd_interface;
	u32	logArm, rowMod, armQ, arm;

	*pDevHandle = cpu_to_le16(MR_DEVHANDLE_INVALID);

	/*Get row and span from io_info for Uneven Span IO.*/
	row	    = io_info->start_row;
	span	    = io_info->start_span;


	if (raid->level == 6) {
		logArm = get_arm_from_strip(instance, ld, stripRow, map);
		if (logArm == -1U)
			return false;
		rowMod = mega_mod64(row, SPAN_ROW_SIZE(map, ld, span));
		armQ = SPAN_ROW_SIZE(map, ld, span) - 1 - rowMod;
		arm = armQ + 1 + logArm;
		if (arm >= SPAN_ROW_SIZE(map, ld, span))
			arm -= SPAN_ROW_SIZE(map, ld, span);
		physArm = (u8)arm;
	} else
		/* Calculate the arm */
		physArm = get_arm(instance, ld, span, stripRow, map);
	if (physArm == 0xFF)
		return false;

	arRef	    = MR_LdSpanArrayGet(ld, span, map);
	pd	    = MR_ArPdGet(arRef, physArm, map);

	if (pd != MR_PD_INVALID) {
		*pDevHandle = MR_PdDevHandleGet(pd, map);
		*pPdInterface = MR_PdInterfaceTypeGet(pd, map);
		/* get second pd also for raid 1/10 fast path writes*/
		if ((instance->adapter_type >= VENTURA_SERIES) &&
		    (raid->level == 1) &&
		    !io_info->isRead) {
			r1_alt_pd = MR_ArPdGet(arRef, physArm + 1, map);
			if (r1_alt_pd != MR_PD_INVALID)
				io_info->r1_alt_dev_handle =
				MR_PdDevHandleGet(r1_alt_pd, map);
		}
	} else {
		if ((raid->level >= 5) &&
			((instance->adapter_type == THUNDERBOLT_SERIES)  ||
			((instance->adapter_type == INVADER_SERIES) &&
			(raid->regTypeReqOnRead != REGION_TYPE_UNUSED))))
			pRAID_Context->reg_lock_flags = REGION_TYPE_EXCLUSIVE;
		else if (raid->level == 1) {
			physArm = physArm + 1;
			pd = MR_ArPdGet(arRef, physArm, map);
			if (pd != MR_PD_INVALID) {
				*pDevHandle = MR_PdDevHandleGet(pd, map);
				*pPdInterface = MR_PdInterfaceTypeGet(pd, map);
			}
		}
	}

	*pdBlock += stripRef + le64_to_cpu(MR_LdSpanPtrGet(ld, span, map)->startBlk);
	if (instance->adapter_type >= VENTURA_SERIES) {
		((struct RAID_CONTEXT_G35 *)pRAID_Context)->span_arm =
			(span << RAID_CTX_SPANARM_SPAN_SHIFT) | physArm;
		io_info->span_arm =
			(span << RAID_CTX_SPANARM_SPAN_SHIFT) | physArm;
	} else {
		pRAID_Context->span_arm =
			(span << RAID_CTX_SPANARM_SPAN_SHIFT) | physArm;
		io_info->span_arm = pRAID_Context->span_arm;
	}
	io_info->pd_after_lb = pd;
	return retval;
}

/*
******************************************************************************
*
* This routine calculates the arm, span and block for the specified stripe and
* reference in stripe.
*
* Inputs :
*
*    ld   - Logical drive number
*    stripRow        - Stripe number
*    stripRef    - Reference in stripe
*
* Outputs :
*
*    span          - Span number
*    block         - Absolute Block number in the physical disk
*/
static u8 MR_GetPhyParams(struct megasas_instance *instance, u32 ld, u64 stripRow,
		u16 stripRef, struct IO_REQUEST_INFO *io_info,
		struct RAID_CONTEXT *pRAID_Context,
		struct MR_DRV_RAID_MAP_ALL *map)
{
	struct MR_LD_RAID  *raid = MR_LdRaidGet(ld, map);
	u32         pd, arRef, r1_alt_pd;
	u8          physArm, span;
	u64         row;
	u8	    retval = true;
	u64	    *pdBlock = &io_info->pdBlock;
	__le16	    *pDevHandle = &io_info->devHandle;
	u8	    *pPdInterface = &io_info->pd_interface;

	*pDevHandle = cpu_to_le16(MR_DEVHANDLE_INVALID);

	row =  mega_div64_32(stripRow, raid->rowDataSize);

	if (raid->level == 6) {
		/* logical arm within row */
		u32 logArm =  mega_mod64(stripRow, raid->rowDataSize);
		u32 rowMod, armQ, arm;

		if (raid->rowSize == 0)
			return false;
		/* get logical row mod */
		rowMod = mega_mod64(row, raid->rowSize);
		armQ = raid->rowSize-1-rowMod; /* index of Q drive */
		arm = armQ+1+logArm; /* data always logically follows Q */
		if (arm >= raid->rowSize) /* handle wrap condition */
			arm -= raid->rowSize;
		physArm = (u8)arm;
	} else  {
		if (raid->modFactor == 0)
			return false;
		physArm = MR_LdDataArmGet(ld,  mega_mod64(stripRow,
							  raid->modFactor),
					  map);
	}

	if (raid->spanDepth == 1) {
		span = 0;
		*pdBlock = row << raid->stripeShift;
	} else {
		span = (u8)MR_GetSpanBlock(ld, row, pdBlock, map);
		if (span == SPAN_INVALID)
			return false;
	}

	/* Get the array on which this span is present */
	arRef       = MR_LdSpanArrayGet(ld, span, map);
	pd          = MR_ArPdGet(arRef, physArm, map); /* Get the pd */

	if (pd != MR_PD_INVALID) {
		/* Get dev handle from Pd. */
		*pDevHandle = MR_PdDevHandleGet(pd, map);
		*pPdInterface = MR_PdInterfaceTypeGet(pd, map);
		/* get second pd also for raid 1/10 fast path writes*/
		if ((instance->adapter_type >= VENTURA_SERIES) &&
		    (raid->level == 1) &&
		    !io_info->isRead) {
			r1_alt_pd = MR_ArPdGet(arRef, physArm + 1, map);
			if (r1_alt_pd != MR_PD_INVALID)
				io_info->r1_alt_dev_handle =
					MR_PdDevHandleGet(r1_alt_pd, map);
		}
	} else {
		if ((raid->level >= 5) &&
			((instance->adapter_type == THUNDERBOLT_SERIES)  ||
			((instance->adapter_type == INVADER_SERIES) &&
			(raid->regTypeReqOnRead != REGION_TYPE_UNUSED))))
			pRAID_Context->reg_lock_flags = REGION_TYPE_EXCLUSIVE;
		else if (raid->level == 1) {
			/* Get alternate Pd. */
			physArm = physArm + 1;
			pd = MR_ArPdGet(arRef, physArm, map);
			if (pd != MR_PD_INVALID) {
				/* Get dev handle from Pd */
				*pDevHandle = MR_PdDevHandleGet(pd, map);
				*pPdInterface = MR_PdInterfaceTypeGet(pd, map);
			}
		}
	}

	*pdBlock += stripRef + le64_to_cpu(MR_LdSpanPtrGet(ld, span, map)->startBlk);
	if (instance->adapter_type >= VENTURA_SERIES) {
		((struct RAID_CONTEXT_G35 *)pRAID_Context)->span_arm =
			(span << RAID_CTX_SPANARM_SPAN_SHIFT) | physArm;
		io_info->span_arm =
			(span << RAID_CTX_SPANARM_SPAN_SHIFT) | physArm;
	} else {
		pRAID_Context->span_arm =
			(span << RAID_CTX_SPANARM_SPAN_SHIFT) | physArm;
		io_info->span_arm = pRAID_Context->span_arm;
	}
	io_info->pd_after_lb = pd;
	return retval;
}
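
/*
 * Worked example of the RAID 6 parity rotation above (illustrative):
 * with raid->rowSize = 4 arms and row = 5,
 *
 *	rowMod = 5 % 4 = 1
 *	armQ   = 4 - 1 - 1 = 2          (Q parity sits on arm 2)
 *	arm    = armQ + 1 + logArm      (data follows Q, wrapping mod 4)
 *
 * so logical arm 0 maps to physical arm 3, logical arm 1 wraps to
 * physical arm 0, and so on; parity thus rotates one arm per row.
 */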

/*
 * mr_get_phy_params_r56_rmw - Calculate parameters for R56 CTIO write operation
 * @instance:		Adapter soft state
 * @ld:			LD index
 * @stripNo:		Strip Number
 * @io_info:		IO info structure pointer
 * @pRAID_Context:	RAID context pointer
 * @map:		RAID map pointer
 *
 * This routine calculates the logical arm, data Arm, row number and parity arm
 * for R56 CTIO write operation.
 */
static void mr_get_phy_params_r56_rmw(struct megasas_instance *instance,
			    u32 ld, u64 stripNo,
			    struct IO_REQUEST_INFO *io_info,
			    struct RAID_CONTEXT_G35 *pRAID_Context,
			    struct MR_DRV_RAID_MAP_ALL *map)
{
	struct MR_LD_RAID  *raid = MR_LdRaidGet(ld, map);
	u8          span, dataArms, arms, dataArm, logArm;
	s8          rightmostParityArm, PParityArm;
	u64         rowNum;
	u64 *pdBlock = &io_info->pdBlock;

	dataArms = raid->rowDataSize;
	arms = raid->rowSize;

	rowNum =  mega_div64_32(stripNo, dataArms);
	/* parity disk arm, first arm is 0 */
	rightmostParityArm = (arms - 1) - mega_mod64(rowNum, arms);

	/* logical arm within row */
	logArm =  mega_mod64(stripNo, dataArms);
	/* physical arm for data */
	dataArm = mega_mod64((rightmostParityArm + 1 + logArm), arms);

	if (raid->spanDepth == 1) {
		span = 0;
	} else {
		span = (u8)MR_GetSpanBlock(ld, rowNum, pdBlock, map);
		if (span == SPAN_INVALID)
			return;
	}

	if (raid->level == 6) {
		/* P Parity arm, note this can go negative adjust if negative */
		PParityArm = (arms - 2) - mega_mod64(rowNum, arms);

		if (PParityArm < 0)
			PParityArm += arms;

		/* rightmostParityArm is P-Parity for RAID 5 and Q-Parity for RAID 6 */
		pRAID_Context->flow_specific.r56_arm_map = rightmostParityArm;
		pRAID_Context->flow_specific.r56_arm_map |=
				    (u16)(PParityArm << RAID_CTX_R56_P_ARM_SHIFT);
	} else {
		pRAID_Context->flow_specific.r56_arm_map |=
				    (u16)(rightmostParityArm << RAID_CTX_R56_P_ARM_SHIFT);
	}

	pRAID_Context->reg_lock_row_lba = cpu_to_le64(rowNum);
	pRAID_Context->flow_specific.r56_arm_map |=
				   (u16)(logArm << RAID_CTX_R56_LOG_ARM_SHIFT);
	cpu_to_le16s(&pRAID_Context->flow_specific.r56_arm_map);
	pRAID_Context->span_arm = (span << RAID_CTX_SPANARM_SPAN_SHIFT) | dataArm;
	pRAID_Context->raid_flags = (MR_RAID_FLAGS_IO_SUB_TYPE_R56_DIV_OFFLOAD <<
				    MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT);

	return;
}
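
/*
 * Worked example (illustrative): a RAID 6 LD with arms = 5 total arms,
 * dataArms = 3 data arms, and stripNo = 10:
 *
 *	rowNum             = 10 / 3 = 3
 *	rightmostParityArm = (5 - 1) - (3 % 5) = 1     (Q parity)
 *	PParityArm         = (5 - 2) - (3 % 5) = 0     (P parity)
 *	logArm             = 10 % 3 = 1
 *	dataArm            = (1 + 1 + 1) % 5 = 3
 *
 * so the firmware is told to write strip 10 to arm 3 of row 3, with P on
 * arm 0 and Q on arm 1 for that row.
 */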
|
|
|
|
|
2010-12-22 00:34:31 +03:00
|
|
|
/*
|
|
|
|
******************************************************************************
|
|
|
|
*
|
|
|
|
* MR_BuildRaidContext function
|
|
|
|
*
|
|
|
|
* This function will initiate command processing. The start/end row and strip
|
|
|
|
* information is calculated then the lock is acquired.
|
|
|
|
* This function will return 0 if region lock was acquired OR return num strips
|
|
|
|
*/
|
|
|
|
u8
|
2011-10-09 05:15:06 +04:00
|
|
|
MR_BuildRaidContext(struct megasas_instance *instance,
|
|
|
|
struct IO_REQUEST_INFO *io_info,
|
2010-12-22 00:34:31 +03:00
|
|
|
struct RAID_CONTEXT *pRAID_Context,
|
2014-09-12 17:27:33 +04:00
|
|
|
struct MR_DRV_RAID_MAP_ALL *map, u8 **raidLUN)
|
2010-12-22 00:34:31 +03:00
|
|
|
{
|
2015-10-15 11:09:34 +03:00
|
|
|
struct fusion_context *fusion;
|
2010-12-22 00:34:31 +03:00
|
|
|
struct MR_LD_RAID *raid;
|
2017-02-10 11:59:19 +03:00
|
|
|
u32 stripSize, stripe_mask;
|
2010-12-22 00:34:31 +03:00
|
|
|
u64 endLba, endStrip, endRow, start_row, start_strip;
|
|
|
|
u64 regStart;
|
|
|
|
u32 regSize;
|
|
|
|
u8 num_strips, numRows;
|
|
|
|
u16 ref_in_start_stripe, ref_in_end_stripe;
|
|
|
|
u64 ldStartBlock;
|
|
|
|
u32 numBlocks, ldTgtId;
|
|
|
|
u8 isRead;
|
|
|
|
u8 retval = 0;
|
2013-05-22 11:05:04 +04:00
|
|
|
u8 startlba_span = SPAN_INVALID;
|
|
|
|
u64 *pdBlock = &io_info->pdBlock;
|
2017-02-10 11:59:19 +03:00
|
|
|
u16 ld;
|
2010-12-22 00:34:31 +03:00
|
|
|
|
|
|
|
ldStartBlock = io_info->ldStartBlock;
|
|
|
|
numBlocks = io_info->numBlocks;
|
|
|
|
ldTgtId = io_info->ldTgtId;
|
|
|
|
isRead = io_info->isRead;
|
2013-05-22 11:05:04 +04:00
|
|
|
io_info->IoforUnevenSpan = 0;
|
|
|
|
io_info->start_span = SPAN_INVALID;
|
2015-10-15 11:09:34 +03:00
|
|
|
fusion = instance->ctrl_context;
|
2010-12-22 00:34:31 +03:00
|
|
|
|
|
|
|
ld = MR_TargetIdToLdGet(ldTgtId, map);
|
|
|
|
raid = MR_LdRaidGet(ld, map);
|
2017-01-11 02:20:46 +03:00
|
|
|
/*check read ahead bit*/
|
|
|
|
io_info->ra_capable = raid->capability.ra_capable;
|
2010-12-22 00:34:31 +03:00
|
|
|
|
2013-05-22 11:05:04 +04:00
|
|
|
/*
|
|
|
|
* if rowDataSize @RAID map and spanRowDataSize @SPAN INFO are zero
|
|
|
|
* return FALSE
|
|
|
|
*/
|
|
|
|
if (raid->rowDataSize == 0) {
|
|
|
|
if (MR_LdSpanPtrGet(ld, 0, map)->spanRowDataSize == 0)
|
2017-08-23 14:47:05 +03:00
|
|
|
return false;
|
2013-05-22 11:05:04 +04:00
|
|
|
else if (instance->UnevenSpanSupport) {
|
|
|
|
io_info->IoforUnevenSpan = 1;
|
|
|
|
} else {
|
|
|
|
dev_info(&instance->pdev->dev,
|
|
|
|
"raid->rowDataSize is 0, but has SPAN[0]"
|
|
|
|
"rowDataSize = 0x%0x,"
|
|
|
|
"but there is _NO_ UnevenSpanSupport\n",
|
|
|
|
MR_LdSpanPtrGet(ld, 0, map)->spanRowDataSize);
|
2017-08-23 14:47:05 +03:00
|
|
|
return false;
|
2013-05-22 11:05:04 +04:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2010-12-22 00:34:31 +03:00
|
|
|
stripSize = 1 << raid->stripeShift;
|
|
|
|
stripe_mask = stripSize-1;
|
2013-05-22 11:05:04 +04:00
|
|
|
|
2019-06-25 14:04:34 +03:00
|
|
|
io_info->data_arms = raid->rowDataSize;
|
2013-05-22 11:05:04 +04:00
|
|
|
|
2010-12-22 00:34:31 +03:00
|
|
|
/*
|
|
|
|
* calculate starting row and stripe, and number of strips and rows
|
|
|
|
*/
|
|
|
|
start_strip = ldStartBlock >> raid->stripeShift;
|
|
|
|
ref_in_start_stripe = (u16)(ldStartBlock & stripe_mask);
|
|
|
|
endLba = ldStartBlock + numBlocks - 1;
|
|
|
|
ref_in_end_stripe = (u16)(endLba & stripe_mask);
|
|
|
|
endStrip = endLba >> raid->stripeShift;
|
|
|
|
num_strips = (u8)(endStrip - start_strip + 1); /* End strip */
|
2013-05-22 11:05:04 +04:00
|
|
|
|
|
|
|
if (io_info->IoforUnevenSpan) {
|
|
|
|
start_row = get_row_from_strip(instance, ld, start_strip, map);
|
|
|
|
endRow = get_row_from_strip(instance, ld, endStrip, map);
|
|
|
|
if (start_row == -1ULL || endRow == -1ULL) {
|
|
|
|
dev_info(&instance->pdev->dev, "return from %s %d."
|
|
|
|
"Send IO w/o region lock.\n",
|
|
|
|
__func__, __LINE__);
|
2017-08-23 14:47:05 +03:00
|
|
|
return false;
|
2013-05-22 11:05:04 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
if (raid->spanDepth == 1) {
|
|
|
|
startlba_span = 0;
|
|
|
|
*pdBlock = start_row << raid->stripeShift;
|
|
|
|
} else
|
|
|
|
startlba_span = (u8)mr_spanset_get_span_block(instance,
|
|
|
|
ld, start_row, pdBlock, map);
|
|
|
|
if (startlba_span == SPAN_INVALID) {
|
|
|
|
dev_info(&instance->pdev->dev, "return from %s %d"
|
|
|
|
"for row 0x%llx,start strip %llx"
|
|
|
|
"endSrip %llx\n", __func__, __LINE__,
|
|
|
|
(unsigned long long)start_row,
|
|
|
|
(unsigned long long)start_strip,
|
|
|
|
(unsigned long long)endStrip);
|
2017-08-23 14:47:05 +03:00
|
|
|
return false;
|
2013-05-22 11:05:04 +04:00
|
|
|
}
|
|
|
|
io_info->start_span = startlba_span;
|
|
|
|
io_info->start_row = start_row;
|
|
|
|
} else {
|
|
|
|
start_row = mega_div64_32(start_strip, raid->rowDataSize);
|
|
|
|
endRow = mega_div64_32(endStrip, raid->rowDataSize);
|
|
|
|
}
|
|
|
|
numRows = (u8)(endRow - start_row + 1);
|
2010-12-22 00:34:31 +03:00
|
|
|
|
|
|
|
/*
|
|
|
|
* calculate region info.
|
|
|
|
*/
|
|
|
|
|
|
|
|
/* assume region is at the start of the first row */
|
|
|
|
regStart = start_row << raid->stripeShift;
|
|
|
|
/* assume this IO needs the full row - we'll adjust if not true */
|
|
|
|
regSize = stripSize;
|
|
|
|
|
2016-01-28 18:34:27 +03:00
|
|
|
io_info->do_fp_rlbypass = raid->capability.fpBypassRegionLock;
|
|
|
|
|
2012-03-20 06:50:00 +04:00
|
|
|
/* Check if we can send this I/O via FastPath */
|
|
|
|
if (raid->capability.fpCapable) {
|
|
|
|
if (isRead)
|
|
|
|
io_info->fpOkForIo = (raid->capability.fpReadCapable &&
|
|
|
|
((num_strips == 1) ||
|
|
|
|
raid->capability.
|
|
|
|
fpReadAcrossStripe));
|
|
|
|
else
|
|
|
|
io_info->fpOkForIo = (raid->capability.fpWriteCapable &&
|
|
|
|
((num_strips == 1) ||
|
|
|
|
raid->capability.
|
|
|
|
fpWriteAcrossStripe));
|
|
|
|
} else
|
2017-08-23 14:47:05 +03:00
|
|
|
io_info->fpOkForIo = false;
|
2010-12-22 00:34:31 +03:00
|
|
|
|
|
|
|
if (numRows == 1) {
|
|
|
|
/* single-strip IOs can always lock only the data needed */
|
|
|
|
if (num_strips == 1) {
|
|
|
|
regStart += ref_in_start_stripe;
|
|
|
|
regSize = numBlocks;
|
|
|
|
}
|
|
|
|
/* multi-strip IOs always need to full stripe locked */
|
2013-05-22 11:05:04 +04:00
|
|
|
} else if (io_info->IoforUnevenSpan == 0) {
|
|
|
|
/*
|
|
|
|
* For Even span region lock optimization.
|
|
|
|
* If the start strip is the last in the start row
|
|
|
|
*/
|
2010-12-22 00:34:31 +03:00
|
|
|
if (start_strip == (start_row + 1) * raid->rowDataSize - 1) {
|
|
|
|
regStart += ref_in_start_stripe;
|
|
|
|
/* initialize count to sectors from startref to end
|
|
|
|
of strip */
|
2013-05-22 11:05:04 +04:00
|
|
|
regSize = stripSize - ref_in_start_stripe;
|
2010-12-22 00:34:31 +03:00
|
|
|
}
|
|
|
|
|
2013-05-22 11:05:04 +04:00
|
|
|
/* add complete rows in the middle of the transfer */
|
2010-12-22 00:34:31 +03:00
|
|
|
if (numRows > 2)
|
|
|
|
regSize += (numRows-2) << raid->stripeShift;
|
|
|
|
|
2013-05-22 11:05:04 +04:00
|
|
|
/* if IO ends within first strip of last row*/
|
2010-12-22 00:34:31 +03:00
|
|
|
if (endStrip == endRow*raid->rowDataSize)
|
|
|
|
regSize += ref_in_end_stripe+1;
|
|
|
|
else
|
|
|
|
regSize += stripSize;
|
2013-05-22 11:05:04 +04:00
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* For Uneven span region lock optimization.
|
|
|
|
* If the start strip is the last in the start row
|
|
|
|
*/
|
|
|
|
if (start_strip == (get_strip_from_row(instance, ld, start_row, map) +
|
|
|
|
SPAN_ROW_DATA_SIZE(map, ld, startlba_span) - 1)) {
|
|
|
|
regStart += ref_in_start_stripe;
|
|
|
|
/* initialize count to sectors from
|
|
|
|
* startRef to end of strip
|
|
|
|
*/
|
|
|
|
regSize = stripSize - ref_in_start_stripe;
|
|
|
|
}
|
|
|
|
/* Add complete rows in the middle of the transfer*/
|
|
|
|
|
|
|
|
if (numRows > 2)
|
|
|
|
/* Add complete rows in the middle of the transfer*/
|
|
|
|
regSize += (numRows-2) << raid->stripeShift;
|
|
|
|
|
|
|
|
/* if IO ends within first strip of last row */
|
|
|
|
if (endStrip == get_strip_from_row(instance, ld, endRow, map))
|
|
|
|
regSize += ref_in_end_stripe + 1;
|
|
|
|
else
|
|
|
|
regSize += stripSize;
|
2010-12-22 00:34:31 +03:00
|
|
|
}
|
|
|
|
|
	pRAID_Context->timeout_value =
		cpu_to_le16(raid->fpIoTimeoutForLd ?
			    raid->fpIoTimeoutForLd :
			    map->raidMap.fpPdIoTimeoutSec);
	if (instance->adapter_type == INVADER_SERIES)
		pRAID_Context->reg_lock_flags = (isRead) ?
			raid->regTypeReqOnRead : raid->regTypeReqOnWrite;
	else if (instance->adapter_type == THUNDERBOLT_SERIES)
		pRAID_Context->reg_lock_flags = (isRead) ?
			REGION_TYPE_SHARED_READ : raid->regTypeReqOnWrite;
	pRAID_Context->virtual_disk_tgt_id = raid->targetId;
	pRAID_Context->reg_lock_row_lba = cpu_to_le64(regStart);
	pRAID_Context->reg_lock_length = cpu_to_le32(regSize);
	pRAID_Context->config_seq_num = raid->seqNum;
	/* save pointer to raid->LUN array */
	*raidLUN = raid->LUN;

	/* Aero R5/6 Division Offload for WRITE */
	if (fusion->r56_div_offload && (raid->level >= 5) && !isRead) {
		mr_get_phy_params_r56_rmw(instance, ld, start_strip, io_info,
					  (struct RAID_CONTEXT_G35 *)pRAID_Context,
					  map);
		return true;
	}

	/* Get Phy Params only if FP capable, or else leave it to MR firmware
	 * to do the calculation.
	 */
	if (io_info->fpOkForIo) {
		retval = io_info->IoforUnevenSpan ?
				mr_spanset_get_phy_params(instance, ld,
					start_strip, ref_in_start_stripe,
					io_info, pRAID_Context, map) :
				MR_GetPhyParams(instance, ld, start_strip,
					ref_in_start_stripe, io_info,
					pRAID_Context, map);
		/* If IO on an invalid Pd, then FP is not possible. */
		if (io_info->devHandle == MR_DEVHANDLE_INVALID)
			io_info->fpOkForIo = false;
		return retval;
	} else if (isRead) {
		uint stripIdx;

		for (stripIdx = 0; stripIdx < num_strips; stripIdx++) {
			retval = io_info->IoforUnevenSpan ?
				mr_spanset_get_phy_params(instance, ld,
					start_strip + stripIdx,
					ref_in_start_stripe, io_info,
					pRAID_Context, map) :
				MR_GetPhyParams(instance, ld,
					start_strip + stripIdx, ref_in_start_stripe,
					io_info, pRAID_Context, map);
			if (!retval)
				return true;
		}
	}
	return true;
}

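/*
 * Note (added): the return value above reports whether the RAID context
 * and physical parameters were built, not whether the fast path will be
 * taken; callers still need io_info->fpOkForIo for that, since it is
 * cleared when the mapped device handle turns out to be
 * MR_DEVHANDLE_INVALID.
 */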

/*
******************************************************************************
*
* This routine prepares the span set info from a valid RAID map and stores
* it into the local copy of ldSpanInfo in the per-instance data structure.
*
* Inputs :
* map    - LD map
* ldSpanInfo - ldSpanInfo per HBA instance
*
*/
void mr_update_span_set(struct MR_DRV_RAID_MAP_ALL *map,
	PLD_SPAN_INFO ldSpanInfo)
{
	u8   span, count;
	u32  element, span_row_width;
	u64  span_row;
	struct MR_LD_RAID *raid;
	LD_SPAN_SET *span_set, *span_set_prev;
	struct MR_QUAD_ELEMENT    *quad;
	int ldCount;
	u16 ld;

	for (ldCount = 0; ldCount < MAX_LOGICAL_DRIVES_EXT; ldCount++) {
		ld = MR_TargetIdToLdGet(ldCount, map);
		if (ld >= (MAX_LOGICAL_DRIVES_EXT - 1))
			continue;
		raid = MR_LdRaidGet(ld, map);
		for (element = 0; element < MAX_QUAD_DEPTH; element++) {
			for (span = 0; span < raid->spanDepth; span++) {
				if (le32_to_cpu(map->raidMap.ldSpanMap[ld].spanBlock[span].
					block_span_info.noElements) <
					element + 1)
					continue;
				span_set = &(ldSpanInfo[ld].span_set[element]);
				quad = &map->raidMap.ldSpanMap[ld].
					spanBlock[span].block_span_info.
					quad[element];

				span_set->diff = le32_to_cpu(quad->diff);

				for (count = 0, span_row_width = 0;
					count < raid->spanDepth; count++) {
					if (le32_to_cpu(map->raidMap.ldSpanMap[ld].
						spanBlock[count].
						block_span_info.
						noElements) >= element + 1) {
						span_set->strip_offset[count] =
							span_row_width;
						span_row_width +=
							MR_LdSpanPtrGet
							(ld, count, map)->spanRowDataSize;
					}
				}

				span_set->span_row_data_width = span_row_width;
				span_row = mega_div64_32(((le64_to_cpu(quad->logEnd) -
					le64_to_cpu(quad->logStart)) + le32_to_cpu(quad->diff)),
					le32_to_cpu(quad->diff));
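				/*
				 * Illustrative note (added; values are
				 * hypothetical): a quad with logStart = 0,
				 * logEnd = 1023 and diff = 1 yields
				 * span_row = (1023 - 0 + 1) / 1 = 1024 rows
				 * covered by this quad element.
				 */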

				if (element == 0) {
					span_set->log_start_lba = 0;
					span_set->log_end_lba =
						((span_row << raid->stripeShift)
						* span_row_width) - 1;

					span_set->span_row_start = 0;
					span_set->span_row_end = span_row - 1;

					span_set->data_strip_start = 0;
					span_set->data_strip_end =
						(span_row * span_row_width) - 1;

					span_set->data_row_start = 0;
					span_set->data_row_end =
						(span_row * le32_to_cpu(quad->diff)) - 1;
				} else {
					span_set_prev = &(ldSpanInfo[ld].
							span_set[element - 1]);
					span_set->log_start_lba =
						span_set_prev->log_end_lba + 1;
					span_set->log_end_lba =
						span_set->log_start_lba +
						((span_row << raid->stripeShift)
						* span_row_width) - 1;

					span_set->span_row_start =
						span_set_prev->span_row_end + 1;
					span_set->span_row_end =
						span_set->span_row_start + span_row - 1;

					span_set->data_strip_start =
						span_set_prev->data_strip_end + 1;
					span_set->data_strip_end =
						span_set->data_strip_start +
						(span_row * span_row_width) - 1;

					span_set->data_row_start =
						span_set_prev->data_row_end + 1;
					span_set->data_row_end =
						span_set->data_row_start +
						(span_row * le32_to_cpu(quad->diff)) - 1;
				}
				break;
			}
			if (span == raid->spanDepth)
				break;
		}
	}
}

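/*
 * Illustrative sketch (added, not part of the driver): one way the chained
 * span_set ranges built above could be walked to find the element covering
 * a logical LBA. Field names follow LD_SPAN_SET as used in this file; the
 * helper itself is hypothetical and assumes unpopulated elements stay
 * zeroed.
 */
#if 0
static LD_SPAN_SET *find_span_set_for_lba(PLD_SPAN_INFO ldSpanInfo,
					  u16 ld, u64 lba)
{
	u32 element;

	for (element = 0; element < MAX_QUAD_DEPTH; element++) {
		LD_SPAN_SET *span_set = &ldSpanInfo[ld].span_set[element];

		if (span_set->span_row_data_width == 0)
			break;	/* assumed: no more populated elements */
		if (lba >= span_set->log_start_lba &&
		    lba <= span_set->log_end_lba)
			return span_set;
	}
	return NULL;
}
#endif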

void mr_update_load_balance_params(struct MR_DRV_RAID_MAP_ALL *drv_map,
	struct LD_LOAD_BALANCE_INFO *lbInfo)
{
	int ldCount;
	u16 ld;
	struct MR_LD_RAID *raid;

	if (lb_pending_cmds > 128 || lb_pending_cmds < 1)
		lb_pending_cmds = LB_PENDING_CMDS_DEFAULT;

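	/*
	 * Note (added): lb_pending_cmds is tunable (a module parameter in
	 * this driver); anything outside the accepted 1..128 window falls
	 * back to LB_PENDING_CMDS_DEFAULT, which the "balance count"
	 * comment in megasas_get_best_arm_pd() below suggests is 4.
	 */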
	for (ldCount = 0; ldCount < MAX_LOGICAL_DRIVES_EXT; ldCount++) {
		ld = MR_TargetIdToLdGet(ldCount, drv_map);

		/*
		 * For non-existing VDs, ldTgtIdToLd is preset to 0xff, so
		 * they must be skipped here. Bounding the check at
		 * MAX_LOGICAL_DRIVES_EXT - 1 keeps that 0xff from reaching
		 * MR_LdRaidGet() and indexing out of bounds, an issue
		 * UBSAN flagged against commit 51087a8617fe
		 * ("megaraid_sas : Extended VD support").
		 */
		if (ld >= MAX_LOGICAL_DRIVES_EXT - 1) {
			lbInfo[ldCount].loadBalanceFlag = 0;
			continue;
		}

		raid = MR_LdRaidGet(ld, drv_map);
		if ((raid->level != 1) ||
			(raid->ldState != MR_LD_STATE_OPTIMAL)) {
			lbInfo[ldCount].loadBalanceFlag = 0;
			continue;
		}

		lbInfo[ldCount].loadBalanceFlag = 1;
	}
}

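/*
 * Note (added): load balancing is therefore enabled only for RAID 1 LDs in
 * OPTIMAL state; every other LD (including a degraded RAID 1) keeps
 * loadBalanceFlag = 0, so the I/O path skips megasas_get_best_arm_pd() and
 * keeps the arm chosen while building the RAID context.
 */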

static u8 megasas_get_best_arm_pd(struct megasas_instance *instance,
				  struct LD_LOAD_BALANCE_INFO *lbInfo,
				  struct IO_REQUEST_INFO *io_info,
				  struct MR_DRV_RAID_MAP_ALL *drv_map)
{
	struct MR_LD_RAID *raid;
	u16 pd1_dev_handle;
	u16 pend0, pend1, ld;
	u64 diff0, diff1;
	u8 bestArm, pd0, pd1, span, arm;
	u32 arRef, span_row_size;

	u64 block = io_info->ldStartBlock;
	u32 count = io_info->numBlocks;

	span = ((io_info->span_arm & RAID_CTX_SPANARM_SPAN_MASK)
			>> RAID_CTX_SPANARM_SPAN_SHIFT);
	arm = (io_info->span_arm & RAID_CTX_SPANARM_ARM_MASK);

	ld = MR_TargetIdToLdGet(io_info->ldTgtId, drv_map);
	raid = MR_LdRaidGet(ld, drv_map);
	span_row_size = instance->UnevenSpanSupport ?
			SPAN_ROW_SIZE(drv_map, ld, span) : raid->rowSize;

	arRef = MR_LdSpanArrayGet(ld, span, drv_map);
	pd0 = MR_ArPdGet(arRef, arm, drv_map);
	pd1 = MR_ArPdGet(arRef, (arm + 1) >= span_row_size ?
			 (arm + 1 - span_row_size) : arm + 1, drv_map);

	/* Get PD1 Dev Handle */
	pd1_dev_handle = MR_PdDevHandleGet(pd1, drv_map);

	if (pd1_dev_handle == MR_DEVHANDLE_INVALID) {
		bestArm = arm;
	} else {
		/* get the pending cmds for the data and mirror arms */
		pend0 = atomic_read(&lbInfo->scsi_pending_cmds[pd0]);
		pend1 = atomic_read(&lbInfo->scsi_pending_cmds[pd1]);

		/* Determine the disk whose head is nearer to the req. block */
		diff0 = ABS_DIFF(block, lbInfo->last_accessed_block[pd0]);
		diff1 = ABS_DIFF(block, lbInfo->last_accessed_block[pd1]);
		bestArm = (diff0 <= diff1 ? arm : arm ^ 1);
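		/*
		 * Illustrative note (added; numbers are hypothetical): for
		 * a request starting at block 1000 with pd0 last accessed
		 * at block 900 and pd1 at block 5000, diff0 = 100 and
		 * diff1 = 4000, so the current arm wins on seek distance
		 * unless its pending-command count exceeds the other arm's
		 * by more than lb_pending_cmds (checked below).
		 */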

		/* Make balance count from 16 to 4 to
		 * keep driver in sync with Firmware
		 */
		if ((bestArm == arm && pend0 > pend1 + lb_pending_cmds) ||
		    (bestArm != arm && pend1 > pend0 + lb_pending_cmds))
			bestArm ^= 1;

		/* Update the last accessed block on the correct pd */
		io_info->span_arm =
			(span << RAID_CTX_SPANARM_SPAN_SHIFT) | bestArm;
		io_info->pd_after_lb = (bestArm == arm) ? pd0 : pd1;
	}

	lbInfo->last_accessed_block[io_info->pd_after_lb] = block + count - 1;
	return io_info->pd_after_lb;
}

__le16 get_updated_dev_handle(struct megasas_instance *instance,
			      struct LD_LOAD_BALANCE_INFO *lbInfo,
			      struct IO_REQUEST_INFO *io_info,
			      struct MR_DRV_RAID_MAP_ALL *drv_map)
{
	u8 arm_pd;
	__le16 devHandle;

	/* get best new arm (PD ID) */
	arm_pd = megasas_get_best_arm_pd(instance, lbInfo, io_info, drv_map);
	devHandle = MR_PdDevHandleGet(arm_pd, drv_map);
	io_info->pd_interface = MR_PdInterfaceTypeGet(arm_pd, drv_map);
	atomic_inc(&lbInfo->scsi_pending_cmds[arm_pd]);

	return devHandle;
}
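
/*
 * Illustrative sketch (added, not part of the driver): the expected call
 * pattern around get_updated_dev_handle(). The completion-side decrement
 * mirrors the reference taken by atomic_inc() above; both helper names
 * here are hypothetical.
 */
#if 0
static void megasas_issue_lb_io(struct megasas_instance *instance,
				struct LD_LOAD_BALANCE_INFO *lbInfo,
				struct IO_REQUEST_INFO *io_info,
				struct MR_DRV_RAID_MAP_ALL *drv_map)
{
	__le16 devHandle;

	/* pick the arm and take a reference in scsi_pending_cmds[] */
	devHandle = get_updated_dev_handle(instance, lbInfo, io_info, drv_map);

	/* ... build and issue the MPT frame against devHandle ... */
}

static void megasas_complete_lb_io(struct LD_LOAD_BALANCE_INFO *lbInfo,
				   struct IO_REQUEST_INFO *io_info)
{
	/* drop the per-arm pending count taken at issue time */
	atomic_dec(&lbInfo->scsi_pending_cmds[io_info->pd_after_lb]);
}
#endif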