MTD updates for v4.13-rc1

Merge tag 'for-linus-20170713' of git://git.infradead.org/linux-mtd

Pull MTD updates from Brian Norris:

 "General updates:
   - cleanups and additional flash support for "dataflash" driver
   - new driver for the mchp23k256 SPI SRAM device
   - improve handling of MTDs without eraseblocks (i.e., MTD_NO_ERASE)
   - refactor and improve "sub-partition" handling with the TRX
     partition parser; partitions can now be created as sub-partitions
     of another partition

  SPI NOR updates, from Cyrille Pitchen and Marek Vasut:
   - introduce support for the SPI 1-2-2 and 1-4-4 protocols
   - introduce support for Double Data Rate (DDR) mode
   - introduce support for the Octo SPI protocols
   - add support for new memory parts from Spansion, Macronix and
     Winbond
   - fixes for the Aspeed, STM32 and Cadence QSPI controller drivers
   - clean up the st_spi_fsm driver

  NAND updates, from Boris Brezillon:
   - addition of on-die ECC support to the Micron driver
   - addition of helpers to help drivers choose the most appropriate
     ECC settings
   - deletion of dead code (cached programming and the ->errstat() hook)
   - make sure drivers that do not support the SET/GET FEATURES command
     use a dummy ->set/get_features implementation returning -ENOTSUPP
     (required for Micron on-die ECC)
   - change the semantics of ecc->write_page() for drivers setting the
     NAND_ECC_CUSTOM_PAGE_ACCESS flag
   - support exiting the 'GET STATUS' command in default ->cmdfunc()
     implementations
   - change the prototype of ->setup_data_interface()

  A bunch of driver-related changes:
   - various cleanups, fixes and improvements of the MTK driver
   - OMAP DT binding fixes
   - support for ->setup_data_interface() in the fsmc driver
   - support for imx7 in the gpmi driver
   - finalization of the denali driver rework (thanks to Masahiro for
     the work he's done on this driver)
   - fix "bitflips in erased pages" handling in the ifc driver
   - addition of PM ops and dynamic timing configuration to the atmel
     driver"

* tag 'for-linus-20170713' of git://git.infradead.org/linux-mtd: (118 commits)
  Documentation: ABI: mtd: describe "offset" more precisely
  mtd: Fix check in mtd_unpoint()
  mtd: nand: mtk: release lock on error path
  mtd: st_spi_fsm: remove SPINOR_OP_RDSR2 and use SPINOR_OP_RDCR instead
  mtd: spi-nor: cqspi: remove duplicate const
  mtd: spi-nor: Add support for Spansion S25FL064L
  mtd: spi-nor: Add support for mx66u51235f
  mtd: nand: mtk: add ->setup_data_interface() hook
  mtd: nand: mtk: remove unneeded mtk_ecc_hw_init from mtk_ecc_resume
  mtd: nand: mtk: remove unneeded mtk_nfc_hw_init from mtk_nfc_resume
  mtd: nand: mtk: disable ecc irq when writing page with hwecc
  mtd: nand: mtk: fix incorrect register setting order about ecc irq
  mtd: partitions: fixup some allocate_partition() whitespace
  mtd: parsers: trx: fix pr_err format for printing offset
  MAINTAINERS: Update SPI NOR subsystem git repositories
  mtd: extract TRX parser out of bcm47xxpart into a separated module
  mtd: partitions: add support for partition parsers
  mtd: partitions: add support for subpartitions
  mtd: partitions: rename "master" to the "parent" where appropriate
  mtd: partitions: remove sysfs files when deleting all master's partitions
  ...
Commit: b5e16170f5
@@ -229,6 +229,6 @@ KernelVersion: 4.1
Contact: linux-mtd@lists.infradead.org
Description:
For a partition, the offset of that partition from the start
of the master device in bytes. This attribute is absent on
main devices, so it can be used to distinguish between
partitions and devices that aren't partitions.
of the parent (another partition or a flash device) in bytes.
This attribute is absent on flash devices, so it can be used
to distinguish them from partitions.
@@ -3,10 +3,23 @@
Required properties:
- compatible : should be one of the following:
"altr,socfpga-denali-nand" - for Altera SOCFPGA
"socionext,uniphier-denali-nand-v5a" - for Socionext UniPhier (v5a)
"socionext,uniphier-denali-nand-v5b" - for Socionext UniPhier (v5b)
- reg : should contain registers location and length for data and reg.
- reg-names: Should contain the reg names "nand_data" and "denali_reg"
- interrupts : The interrupt number.

Optional properties:
- nand-ecc-step-size: see nand.txt for details. If present, the value must be
512 for "altr,socfpga-denali-nand"
1024 for "socionext,uniphier-denali-nand-v5a"
1024 for "socionext,uniphier-denali-nand-v5b"
- nand-ecc-strength: see nand.txt for details. Valid values are:
8, 15 for "altr,socfpga-denali-nand"
8, 16, 24 for "socionext,uniphier-denali-nand-v5a"
8, 16 for "socionext,uniphier-denali-nand-v5b"
- nand-ecc-maximize: see nand.txt for details

The device tree may optionally contain sub-nodes describing partitions of the
address space. See partition.txt for more detail.
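The binding text above carries no example node, so a minimal sketch of how these properties fit together for the Altera SOCFPGA variant may help; the register addresses, interrupt cells and ECC values below are placeholders chosen for illustration, not taken from any real board file:

	nand: nand@ff900000 {
		compatible = "altr,socfpga-denali-nand";
		/* "nand_data" window first, controller registers second */
		reg = <0xff900000 0x100000>, <0xffb80000 0x10000>;
		reg-names = "nand_data", "denali_reg";
		interrupts = <0 144 4>;
		nand-ecc-step-size = <512>;
		nand-ecc-strength = <8>;
	};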
@@ -1,7 +1,7 @@
Error location module

Required properties:
- compatible: Must be "ti,am33xx-elm"
- compatible: Must be "ti,am3352-elm"
- reg: physical base address and size of the registers map.
- interrupts: Interrupt number for the elm.
@@ -5,7 +5,7 @@ the GPMC controller with a name of "nand".

All timing relevant properties as well as generic gpmc child properties are
explained in a separate documents - please refer to
Documentation/devicetree/bindings/bus/ti-gpmc.txt
Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt

For NAND specific properties such as ECC modes or bus width, please refer to
Documentation/devicetree/bindings/mtd/nand.txt
@@ -5,7 +5,7 @@ child nodes of the GPMC controller with a name of "nor".

All timing relevant properties as well as generic GPMC child properties are
explained in a separate documents. Please refer to
Documentation/devicetree/bindings/bus/ti-gpmc.txt
Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt

Required properties:
- bank-width: Width of NOR flash in bytes. GPMC supports 8-bit and
@@ -28,7 +28,7 @@ Required properties:

Optional properties:
- gpmc,XXX Additional GPMC timings and settings parameters. See
Documentation/devicetree/bindings/bus/ti-gpmc.txt
Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt

Optional properties for partition table parsing:
- #address-cells: should be set to 1
@@ -5,7 +5,7 @@ the GPMC controller with a name of "onenand".

All timing relevant properties as well as generic gpmc child properties are
explained in a separate documents - please refer to
Documentation/devicetree/bindings/bus/ti-gpmc.txt
Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt

Required properties:
@@ -4,7 +4,12 @@ The GPMI nand controller provides an interface to control the
NAND flash chips.

Required properties:
- compatible : should be "fsl,<chip>-gpmi-nand"
- compatible : should be "fsl,<chip>-gpmi-nand", chip can be:
* imx23
* imx28
* imx6q
* imx6sx
* imx7d
- reg : should contain registers location and length for gpmi and bch.
- reg-names: Should contain the reg names "gpmi-nand" and "bch"
- interrupts : BCH interrupt number.

@@ -13,6 +18,13 @@ Required properties:
and GPMI DMA channel ID.
Refer to dma.txt and fsl-mxs-dma.txt for details.
- dma-names: Must be "rx-tx".
- clocks : clocks phandle and clock specifier corresponding to each clock
specified in clock-names.
- clock-names : The "gpmi_io" clock is always required. Which clocks are
exactly required depends on chip:
* imx23/imx28 : "gpmi_io"
* imx6q/sx : "gpmi_io", "gpmi_apb", "gpmi_bch", "gpmi_bch_apb", "per1_bch"
* imx7d : "gpmi_io", "gpmi_bch_apb"

Optional properties:
- nand-on-flash-bbt: boolean to enable on flash bbt option if not
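As a rough illustration of how the newly documented imx7d variant ties these properties together, a hypothetical node could look like the sketch below; the addresses, interrupt cells, clock indices and phandles are placeholders (the real values live in the SoC .dtsi), so treat it as an assumption-laden example rather than a reference node:

	gpmi: gpmi-nand@33002000 {
		compatible = "fsl,imx7d-gpmi-nand";
		/* GPMI registers first, BCH registers second */
		reg = <0x33002000 0x2000>, <0x33004000 0x4000>;
		reg-names = "gpmi-nand", "bch";
		interrupts = <0 14 4>;		/* BCH interrupt */
		dmas = <&dma_apbh 0>;
		dma-names = "rx-tx";
		clocks = <&clks 1>, <&clks 2>;	/* placeholder specifiers */
		clock-names = "gpmi_io", "gpmi_bch_apb";
		nand-on-flash-bbt;
	};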
@@ -0,0 +1,18 @@
* MTD SPI driver for Microchip 23K256 (and similar) serial SRAM

Required properties:
- #address-cells, #size-cells : Must be present if the device has sub-nodes
representing partitions.
- compatible : Must be one of "microchip,mchp23k256" or "microchip,mchp23lcv1024"
- reg : Chip-Select number
- spi-max-frequency : Maximum frequency of the SPI bus the chip can operate at

Example:

spi-sram@0 {
	#address-cells = <1>;
	#size-cells = <1>;
	compatible = "microchip,mchp23k256";
	reg = <0>;
	spi-max-frequency = <20000000>;
};
@@ -12,7 +12,8 @@ tree nodes.

The first part of NFC is NAND Controller Interface (NFI) HW.
Required NFI properties:
- compatible: Should be "mediatek,mtxxxx-nfc".
- compatible: Should be one of "mediatek,mt2701-nfc",
"mediatek,mt2712-nfc".
- reg: Base physical address and size of NFI.
- interrupts: Interrupts of NFI.
- clocks: NFI required clocks.

@@ -141,7 +142,7 @@ Example:
==============

Required BCH properties:
- compatible: Should be "mediatek,mtxxxx-ecc".
- compatible: Should be one of "mediatek,mt2701-ecc", "mediatek,mt2712-ecc".
- reg: Base physical address and size of ECC.
- interrupts: Interrupts of ECC.
- clocks: ECC required clocks.
@@ -21,7 +21,7 @@ Optional NAND chip properties:

- nand-ecc-mode : String, operation mode of the NAND ecc mode.
Supported values are: "none", "soft", "hw", "hw_syndrome",
"hw_oob_first".
"hw_oob_first", "on-die".
Deprecated values:
"soft_bch": use "soft" and nand-ecc-algo instead
- nand-ecc-algo: string, algorithm of NAND ECC.
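For context, the new "on-die" value is used exactly like the other nand-ecc-mode strings in a NAND chip sub-node; a minimal hypothetical chip node (the controller node and chip-select number are invented for illustration) would be:

	nand@0 {
		reg = <0>;
		/* let the flash's internal (on-die) ECC engine correct data */
		nand-ecc-mode = "on-die";
	};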
@@ -1,29 +1,49 @@
Representing flash partitions in devicetree
Flash partitions in device tree
===============================

Partitions can be represented by sub-nodes of an mtd device. This can be used
Flash devices can be partitioned into one or more functional ranges (e.g. "boot
code", "nvram", "kernel").

Different devices may be partitioned in a different ways. Some may use a fixed
flash layout set at production time. Some may use on-flash table that describes
the geometry and naming/purpose of each functional region. It is also possible
to see these methods mixed.

To assist system software in locating partitions, we allow describing which
method is used for a given flash device. To describe the method there should be
a subnode of the flash device that is named 'partitions'. It must have a
'compatible' property, which is used to identify the method to use.

We currently only document a binding for fixed layouts.


Fixed Partitions
================

Partitions can be represented by sub-nodes of a flash device. This can be used
on platforms which have strong conventions about which portions of a flash are
used for what purposes, but which don't use an on-flash partition table such
as RedBoot.

The partition table should be a subnode of the mtd node and should be named
The partition table should be a subnode of the flash node and should be named
'partitions'. This node should have the following property:
- compatible : (required) must be "fixed-partitions"
Partitions are then defined in subnodes of the partitions node.

For backwards compatibility partitions as direct subnodes of the mtd device are
For backwards compatibility partitions as direct subnodes of the flash device are
supported. This use is discouraged.
NOTE: also for backwards compatibility, direct subnodes that have a compatible
string are not considered partitions, as they may be used for other bindings.

#address-cells & #size-cells must both be present in the partitions subnode of the
mtd device. There are two valid values for both:
flash device. There are two valid values for both:
<1>: for partitions that require a single 32-bit cell to represent their
size/address (aka the value is below 4 GiB)
<2>: for partitions that require two 32-bit cells to represent their
size/address (aka the value is 4 GiB or greater).

Required properties:
- reg : The partition's offset and size within the mtd bank.
- reg : The partition's offset and size within the flash

Optional properties:
- label : The label / name for this partition. If omitted, the label is taken
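To make the fixed-partitions layout described in the hunk above concrete, a sketch of a flash node carrying a 'partitions' subnode with two sub-partitions could look like this (the labels, offsets and sizes are invented purely for illustration):

	flash@0 {
		partitions {
			compatible = "fixed-partitions";
			#address-cells = <1>;
			#size-cells = <1>;

			partition@0 {
				label = "bootloader";
				reg = <0x0 0x100000>;	/* offset, size */
			};

			partition@100000 {
				label = "kernel";
				reg = <0x100000 0x400000>;
			};
		};
	};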
@@ -9,7 +9,7 @@ the GPMC controller with an "ethernet" name.

All timing relevant properties as well as generic GPMC child properties are
explained in a separate documents. Please refer to
Documentation/devicetree/bindings/bus/ti-gpmc.txt
Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt

For the properties relevant to the ethernet controller connected to the GPMC
refer to the binding documentation of the device. For example, the documentation
@@ -43,7 +43,7 @@ Required properties:

Optional properties:
- gpmc,XXX Additional GPMC timings and settings parameters. See
Documentation/devicetree/bindings/bus/ti-gpmc.txt
Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt

Example:
@@ -3974,6 +3974,12 @@ M: Pali Rohár <pali.rohar@gmail.com>
S: Maintained
F: drivers/platform/x86/dell-wmi.c

DENALI NAND DRIVER
M: Masahiro Yamada <yamada.masahiro@socionext.com>
L: linux-mtd@lists.infradead.org
S: Supported
F: drivers/mtd/nand/denali*

DESIGNWARE USB2 DRD IP DRIVER
M: John Youn <johnyoun@synopsys.com>
L: linux-usb@vger.kernel.org

@@ -12464,7 +12470,8 @@ M: Marek Vasut <marek.vasut@gmail.com>
L: linux-mtd@lists.infradead.org
W: http://www.linux-mtd.infradead.org/
Q: http://patchwork.ozlabs.org/project/linux-mtd/list/
T: git git://github.com/spi-nor/linux.git
T: git git://git.infradead.org/linux-mtd.git spi-nor/fixes
T: git git://git.infradead.org/l2-mtd.git spi-nor/next
S: Maintained
F: drivers/mtd/spi-nor/
F: include/linux/mtd/spi-nor.h
@@ -155,6 +155,10 @@ config MTD_BCM47XX_PARTS
This provides partitions parser for devices based on BCM47xx
boards.

menu "Partition parsers"
source "drivers/mtd/parsers/Kconfig"
endmenu

comment "User Modules And Translation Layers"

#
@@ -13,6 +13,7 @@ obj-$(CONFIG_MTD_AFS_PARTS) += afs.o
obj-$(CONFIG_MTD_AR7_PARTS) += ar7part.o
obj-$(CONFIG_MTD_BCM63XX_PARTS) += bcm63xxpart.o
obj-$(CONFIG_MTD_BCM47XX_PARTS) += bcm47xxpart.o
obj-y += parsers/

# 'Users' - code which presents functionality to userspace.
obj-$(CONFIG_MTD_BLKDEVS) += mtd_blkdevs.o
@ -43,7 +43,8 @@
|
|||
#define ML_MAGIC2 0x26594131
|
||||
#define TRX_MAGIC 0x30524448
|
||||
#define SHSQ_MAGIC 0x71736873 /* shsq (weird ZTE H218N endianness) */
|
||||
#define UBI_EC_MAGIC 0x23494255 /* UBI# */
|
||||
|
||||
static const char * const trx_types[] = { "trx", NULL };
|
||||
|
||||
struct trx_header {
|
||||
uint32_t magic;
|
||||
|
@ -62,89 +63,6 @@ static void bcm47xxpart_add_part(struct mtd_partition *part, const char *name,
|
|||
part->mask_flags = mask_flags;
|
||||
}
|
||||
|
||||
static const char *bcm47xxpart_trx_data_part_name(struct mtd_info *master,
|
||||
size_t offset)
|
||||
{
|
||||
uint32_t buf;
|
||||
size_t bytes_read;
|
||||
int err;
|
||||
|
||||
err = mtd_read(master, offset, sizeof(buf), &bytes_read,
|
||||
(uint8_t *)&buf);
|
||||
if (err && !mtd_is_bitflip(err)) {
|
||||
pr_err("mtd_read error while parsing (offset: 0x%X): %d\n",
|
||||
offset, err);
|
||||
goto out_default;
|
||||
}
|
||||
|
||||
if (buf == UBI_EC_MAGIC)
|
||||
return "ubi";
|
||||
|
||||
out_default:
|
||||
return "rootfs";
|
||||
}
|
||||
|
||||
static int bcm47xxpart_parse_trx(struct mtd_info *master,
|
||||
struct mtd_partition *trx,
|
||||
struct mtd_partition *parts,
|
||||
size_t parts_len)
|
||||
{
|
||||
struct trx_header header;
|
||||
size_t bytes_read;
|
||||
int curr_part = 0;
|
||||
int i, err;
|
||||
|
||||
if (parts_len < 3) {
|
||||
pr_warn("No enough space to add TRX partitions!\n");
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
err = mtd_read(master, trx->offset, sizeof(header), &bytes_read,
|
||||
(uint8_t *)&header);
|
||||
if (err && !mtd_is_bitflip(err)) {
|
||||
pr_err("mtd_read error while reading TRX header: %d\n", err);
|
||||
return err;
|
||||
}
|
||||
|
||||
i = 0;
|
||||
|
||||
/* We have LZMA loader if offset[2] points to sth */
|
||||
if (header.offset[2]) {
|
||||
bcm47xxpart_add_part(&parts[curr_part++], "loader",
|
||||
trx->offset + header.offset[i], 0);
|
||||
i++;
|
||||
}
|
||||
|
||||
if (header.offset[i]) {
|
||||
bcm47xxpart_add_part(&parts[curr_part++], "linux",
|
||||
trx->offset + header.offset[i], 0);
|
||||
i++;
|
||||
}
|
||||
|
||||
if (header.offset[i]) {
|
||||
size_t offset = trx->offset + header.offset[i];
|
||||
const char *name = bcm47xxpart_trx_data_part_name(master,
|
||||
offset);
|
||||
|
||||
bcm47xxpart_add_part(&parts[curr_part++], name, offset, 0);
|
||||
i++;
|
||||
}
|
||||
|
||||
/*
|
||||
* Assume that every partition ends at the beginning of the one it is
|
||||
* followed by.
|
||||
*/
|
||||
for (i = 0; i < curr_part; i++) {
|
||||
u64 next_part_offset = (i < curr_part - 1) ?
|
||||
parts[i + 1].offset :
|
||||
trx->offset + trx->size;
|
||||
|
||||
parts[i].size = next_part_offset - parts[i].offset;
|
||||
}
|
||||
|
||||
return curr_part;
|
||||
}
|
||||
|
||||
/**
|
||||
* bcm47xxpart_bootpartition - gets index of TRX partition used by bootloader
|
||||
*
|
||||
|
@ -362,17 +280,10 @@ static int bcm47xxpart_parse(struct mtd_info *master,
|
|||
for (i = 0; i < trx_num; i++) {
|
||||
struct mtd_partition *trx = &parts[trx_parts[i]];
|
||||
|
||||
if (i == bcm47xxpart_bootpartition()) {
|
||||
int num_parts;
|
||||
|
||||
num_parts = bcm47xxpart_parse_trx(master, trx,
|
||||
parts + curr_part,
|
||||
BCM47XXPART_MAX_PARTS - curr_part);
|
||||
if (num_parts > 0)
|
||||
curr_part += num_parts;
|
||||
} else {
|
||||
if (i == bcm47xxpart_bootpartition())
|
||||
trx->types = trx_types;
|
||||
else
|
||||
trx->name = "failsafe";
|
||||
}
|
||||
}
|
||||
|
||||
*pparts = parts;
|
||||
|
|
|
@ -666,7 +666,7 @@ cfi_staa_writev(struct mtd_info *mtd, const struct kvec *vecs,
|
|||
size_t totlen = 0, thislen;
|
||||
int ret = 0;
|
||||
size_t buflen = 0;
|
||||
static char *buffer;
|
||||
char *buffer;
|
||||
|
||||
if (!ECCBUF_SIZE) {
|
||||
/* We should fall back to a general writev implementation.
|
||||
|
|
|
@ -95,6 +95,16 @@ config MTD_M25P80
|
|||
if you want to specify device partitioning or to use a device which
|
||||
doesn't support the JEDEC ID instruction.
|
||||
|
||||
config MTD_MCHP23K256
|
||||
tristate "Microchip 23K256 SRAM"
|
||||
depends on SPI_MASTER
|
||||
help
|
||||
This enables access to Microchip 23K256 SRAM chips, using SPI.
|
||||
|
||||
Set up your spi devices with the right board-specific
|
||||
platform data, or a device tree description if you want to
|
||||
specify device partitioning
|
||||
|
||||
config MTD_SPEAR_SMI
|
||||
tristate "SPEAR MTD NOR Support through SMI controller"
|
||||
depends on PLAT_SPEAR
|
||||
|
|
|
@ -12,6 +12,7 @@ obj-$(CONFIG_MTD_LART) += lart.o
|
|||
obj-$(CONFIG_MTD_BLOCK2MTD) += block2mtd.o
|
||||
obj-$(CONFIG_MTD_DATAFLASH) += mtd_dataflash.o
|
||||
obj-$(CONFIG_MTD_M25P80) += m25p80.o
|
||||
obj-$(CONFIG_MTD_MCHP23K256) += mchp23k256.o
|
||||
obj-$(CONFIG_MTD_SPEAR_SMI) += spear_smi.o
|
||||
obj-$(CONFIG_MTD_SST25L) += sst25l.o
|
||||
obj-$(CONFIG_MTD_BCM47XXSFLASH) += bcm47xxsflash.o
|
||||
|
|
|
@ -78,11 +78,17 @@ static ssize_t m25p80_write(struct spi_nor *nor, loff_t to, size_t len,
|
|||
{
|
||||
struct m25p *flash = nor->priv;
|
||||
struct spi_device *spi = flash->spi;
|
||||
struct spi_transfer t[2] = {};
|
||||
unsigned int inst_nbits, addr_nbits, data_nbits, data_idx;
|
||||
struct spi_transfer t[3] = {};
|
||||
struct spi_message m;
|
||||
int cmd_sz = m25p_cmdsz(nor);
|
||||
ssize_t ret;
|
||||
|
||||
/* get transfer protocols. */
|
||||
inst_nbits = spi_nor_get_protocol_inst_nbits(nor->write_proto);
|
||||
addr_nbits = spi_nor_get_protocol_addr_nbits(nor->write_proto);
|
||||
data_nbits = spi_nor_get_protocol_data_nbits(nor->write_proto);
|
||||
|
||||
spi_message_init(&m);
|
||||
|
||||
if (nor->program_opcode == SPINOR_OP_AAI_WP && nor->sst_write_second)
|
||||
|
@ -92,12 +98,27 @@ static ssize_t m25p80_write(struct spi_nor *nor, loff_t to, size_t len,
|
|||
m25p_addr2cmd(nor, to, flash->command);
|
||||
|
||||
t[0].tx_buf = flash->command;
|
||||
t[0].tx_nbits = inst_nbits;
|
||||
t[0].len = cmd_sz;
|
||||
spi_message_add_tail(&t[0], &m);
|
||||
|
||||
t[1].tx_buf = buf;
|
||||
t[1].len = len;
|
||||
spi_message_add_tail(&t[1], &m);
|
||||
/* split the op code and address bytes into two transfers if needed. */
|
||||
data_idx = 1;
|
||||
if (addr_nbits != inst_nbits) {
|
||||
t[0].len = 1;
|
||||
|
||||
t[1].tx_buf = &flash->command[1];
|
||||
t[1].tx_nbits = addr_nbits;
|
||||
t[1].len = cmd_sz - 1;
|
||||
spi_message_add_tail(&t[1], &m);
|
||||
|
||||
data_idx = 2;
|
||||
}
|
||||
|
||||
t[data_idx].tx_buf = buf;
|
||||
t[data_idx].tx_nbits = data_nbits;
|
||||
t[data_idx].len = len;
|
||||
spi_message_add_tail(&t[data_idx], &m);
|
||||
|
||||
ret = spi_sync(spi, &m);
|
||||
if (ret)
|
||||
|
@ -109,18 +130,6 @@ static ssize_t m25p80_write(struct spi_nor *nor, loff_t to, size_t len,
|
|||
return ret;
|
||||
}
|
||||
|
||||
static inline unsigned int m25p80_rx_nbits(struct spi_nor *nor)
|
||||
{
|
||||
switch (nor->flash_read) {
|
||||
case SPI_NOR_DUAL:
|
||||
return 2;
|
||||
case SPI_NOR_QUAD:
|
||||
return 4;
|
||||
default:
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* Read an address range from the nor chip. The address range
|
||||
* may be any size provided it is within the physical boundaries.
|
||||
|
@ -130,13 +139,20 @@ static ssize_t m25p80_read(struct spi_nor *nor, loff_t from, size_t len,
|
|||
{
|
||||
struct m25p *flash = nor->priv;
|
||||
struct spi_device *spi = flash->spi;
|
||||
struct spi_transfer t[2];
|
||||
unsigned int inst_nbits, addr_nbits, data_nbits, data_idx;
|
||||
struct spi_transfer t[3];
|
||||
struct spi_message m;
|
||||
unsigned int dummy = nor->read_dummy;
|
||||
ssize_t ret;
|
||||
int cmd_sz;
|
||||
|
||||
/* get transfer protocols. */
|
||||
inst_nbits = spi_nor_get_protocol_inst_nbits(nor->read_proto);
|
||||
addr_nbits = spi_nor_get_protocol_addr_nbits(nor->read_proto);
|
||||
data_nbits = spi_nor_get_protocol_data_nbits(nor->read_proto);
|
||||
|
||||
/* convert the dummy cycles to the number of bytes */
|
||||
dummy /= 8;
|
||||
dummy = (dummy * addr_nbits) / 8;
|
||||
|
||||
if (spi_flash_read_supported(spi)) {
|
||||
struct spi_flash_read_message msg;
|
||||
|
@ -149,10 +165,9 @@ static ssize_t m25p80_read(struct spi_nor *nor, loff_t from, size_t len,
|
|||
msg.read_opcode = nor->read_opcode;
|
||||
msg.addr_width = nor->addr_width;
|
||||
msg.dummy_bytes = dummy;
|
||||
/* TODO: Support other combinations */
|
||||
msg.opcode_nbits = SPI_NBITS_SINGLE;
|
||||
msg.addr_nbits = SPI_NBITS_SINGLE;
|
||||
msg.data_nbits = m25p80_rx_nbits(nor);
|
||||
msg.opcode_nbits = inst_nbits;
|
||||
msg.addr_nbits = addr_nbits;
|
||||
msg.data_nbits = data_nbits;
|
||||
|
||||
ret = spi_flash_read(spi, &msg);
|
||||
if (ret < 0)
|
||||
|
@ -167,20 +182,45 @@ static ssize_t m25p80_read(struct spi_nor *nor, loff_t from, size_t len,
|
|||
m25p_addr2cmd(nor, from, flash->command);
|
||||
|
||||
t[0].tx_buf = flash->command;
|
||||
t[0].tx_nbits = inst_nbits;
|
||||
t[0].len = m25p_cmdsz(nor) + dummy;
|
||||
spi_message_add_tail(&t[0], &m);
|
||||
|
||||
t[1].rx_buf = buf;
|
||||
t[1].rx_nbits = m25p80_rx_nbits(nor);
|
||||
t[1].len = min3(len, spi_max_transfer_size(spi),
|
||||
spi_max_message_size(spi) - t[0].len);
|
||||
spi_message_add_tail(&t[1], &m);
|
||||
/*
|
||||
* Set all dummy/mode cycle bits to avoid sending some manufacturer
|
||||
* specific pattern, which might make the memory enter its Continuous
|
||||
* Read mode by mistake.
|
||||
* Based on the different mode cycle bit patterns listed and described
|
||||
* in the JESD216B specification, the 0xff value works for all memories
|
||||
* and all manufacturers.
|
||||
*/
|
||||
cmd_sz = t[0].len;
|
||||
memset(flash->command + cmd_sz - dummy, 0xff, dummy);
|
||||
|
||||
/* split the op code and address bytes into two transfers if needed. */
|
||||
data_idx = 1;
|
||||
if (addr_nbits != inst_nbits) {
|
||||
t[0].len = 1;
|
||||
|
||||
t[1].tx_buf = &flash->command[1];
|
||||
t[1].tx_nbits = addr_nbits;
|
||||
t[1].len = cmd_sz - 1;
|
||||
spi_message_add_tail(&t[1], &m);
|
||||
|
||||
data_idx = 2;
|
||||
}
|
||||
|
||||
t[data_idx].rx_buf = buf;
|
||||
t[data_idx].rx_nbits = data_nbits;
|
||||
t[data_idx].len = min3(len, spi_max_transfer_size(spi),
|
||||
spi_max_message_size(spi) - cmd_sz);
|
||||
spi_message_add_tail(&t[data_idx], &m);
|
||||
|
||||
ret = spi_sync(spi, &m);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = m.actual_length - m25p_cmdsz(nor) - dummy;
|
||||
ret = m.actual_length - cmd_sz;
|
||||
if (ret < 0)
|
||||
return -EIO;
|
||||
return ret;
|
||||
|
@ -196,7 +236,11 @@ static int m25p_probe(struct spi_device *spi)
|
|||
struct flash_platform_data *data;
|
||||
struct m25p *flash;
|
||||
struct spi_nor *nor;
|
||||
enum read_mode mode = SPI_NOR_NORMAL;
|
||||
struct spi_nor_hwcaps hwcaps = {
|
||||
.mask = SNOR_HWCAPS_READ |
|
||||
SNOR_HWCAPS_READ_FAST |
|
||||
SNOR_HWCAPS_PP,
|
||||
};
|
||||
char *flash_name;
|
||||
int ret;
|
||||
|
||||
|
@ -221,10 +265,19 @@ static int m25p_probe(struct spi_device *spi)
|
|||
spi_set_drvdata(spi, flash);
|
||||
flash->spi = spi;
|
||||
|
||||
if (spi->mode & SPI_RX_QUAD)
|
||||
mode = SPI_NOR_QUAD;
|
||||
else if (spi->mode & SPI_RX_DUAL)
|
||||
mode = SPI_NOR_DUAL;
|
||||
if (spi->mode & SPI_RX_QUAD) {
|
||||
hwcaps.mask |= SNOR_HWCAPS_READ_1_1_4;
|
||||
|
||||
if (spi->mode & SPI_TX_QUAD)
|
||||
hwcaps.mask |= (SNOR_HWCAPS_READ_1_4_4 |
|
||||
SNOR_HWCAPS_PP_1_1_4 |
|
||||
SNOR_HWCAPS_PP_1_4_4);
|
||||
} else if (spi->mode & SPI_RX_DUAL) {
|
||||
hwcaps.mask |= SNOR_HWCAPS_READ_1_1_2;
|
||||
|
||||
if (spi->mode & SPI_TX_DUAL)
|
||||
hwcaps.mask |= SNOR_HWCAPS_READ_1_2_2;
|
||||
}
|
||||
|
||||
if (data && data->name)
|
||||
nor->mtd.name = data->name;
|
||||
|
@ -241,7 +294,7 @@ static int m25p_probe(struct spi_device *spi)
|
|||
else
|
||||
flash_name = spi->modalias;
|
||||
|
||||
ret = spi_nor_scan(nor, flash_name, mode);
|
||||
ret = spi_nor_scan(nor, flash_name, &hwcaps);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
|
|
|
@ -0,0 +1,236 @@
|
|||
/*
|
||||
* mchp23k256.c
|
||||
*
|
||||
* Driver for Microchip 23k256 SPI RAM chips
|
||||
*
|
||||
* Copyright © 2016 Andrew Lunn <andrew@lunn.ch>
|
||||
*
|
||||
* This code is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
*/
|
||||
#include <linux/device.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/mtd/mtd.h>
|
||||
#include <linux/mtd/partitions.h>
|
||||
#include <linux/mutex.h>
|
||||
#include <linux/sched.h>
|
||||
#include <linux/sizes.h>
|
||||
#include <linux/spi/flash.h>
|
||||
#include <linux/spi/spi.h>
|
||||
#include <linux/of_device.h>
|
||||
|
||||
#define MAX_CMD_SIZE 4
|
||||
|
||||
struct mchp23_caps {
|
||||
u8 addr_width;
|
||||
unsigned int size;
|
||||
};
|
||||
|
||||
struct mchp23k256_flash {
|
||||
struct spi_device *spi;
|
||||
struct mutex lock;
|
||||
struct mtd_info mtd;
|
||||
const struct mchp23_caps *caps;
|
||||
};
|
||||
|
||||
#define MCHP23K256_CMD_WRITE_STATUS 0x01
|
||||
#define MCHP23K256_CMD_WRITE 0x02
|
||||
#define MCHP23K256_CMD_READ 0x03
|
||||
#define MCHP23K256_MODE_SEQ BIT(6)
|
||||
|
||||
#define to_mchp23k256_flash(x) container_of(x, struct mchp23k256_flash, mtd)
|
||||
|
||||
static void mchp23k256_addr2cmd(struct mchp23k256_flash *flash,
|
||||
unsigned int addr, u8 *cmd)
|
||||
{
|
||||
int i;
|
||||
|
||||
/*
|
||||
* Address is sent in big endian (MSB first) and we skip
|
||||
* the first entry of the cmd array which contains the cmd
|
||||
* opcode.
|
||||
*/
|
||||
for (i = flash->caps->addr_width; i > 0; i--, addr >>= 8)
|
||||
cmd[i] = addr;
|
||||
}
|
||||
|
||||
static int mchp23k256_cmdsz(struct mchp23k256_flash *flash)
|
||||
{
|
||||
return 1 + flash->caps->addr_width;
|
||||
}
|
||||
|
||||
static int mchp23k256_write(struct mtd_info *mtd, loff_t to, size_t len,
|
||||
size_t *retlen, const unsigned char *buf)
|
||||
{
|
||||
struct mchp23k256_flash *flash = to_mchp23k256_flash(mtd);
|
||||
struct spi_transfer transfer[2] = {};
|
||||
struct spi_message message;
|
||||
unsigned char command[MAX_CMD_SIZE];
|
||||
|
||||
spi_message_init(&message);
|
||||
|
||||
command[0] = MCHP23K256_CMD_WRITE;
|
||||
mchp23k256_addr2cmd(flash, to, command);
|
||||
|
||||
transfer[0].tx_buf = command;
|
||||
transfer[0].len = mchp23k256_cmdsz(flash);
|
||||
spi_message_add_tail(&transfer[0], &message);
|
||||
|
||||
transfer[1].tx_buf = buf;
|
||||
transfer[1].len = len;
|
||||
spi_message_add_tail(&transfer[1], &message);
|
||||
|
||||
mutex_lock(&flash->lock);
|
||||
|
||||
spi_sync(flash->spi, &message);
|
||||
|
||||
if (retlen && message.actual_length > sizeof(command))
|
||||
*retlen += message.actual_length - sizeof(command);
|
||||
|
||||
mutex_unlock(&flash->lock);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int mchp23k256_read(struct mtd_info *mtd, loff_t from, size_t len,
|
||||
size_t *retlen, unsigned char *buf)
|
||||
{
|
||||
struct mchp23k256_flash *flash = to_mchp23k256_flash(mtd);
|
||||
struct spi_transfer transfer[2] = {};
|
||||
struct spi_message message;
|
||||
unsigned char command[MAX_CMD_SIZE];
|
||||
|
||||
spi_message_init(&message);
|
||||
|
||||
memset(&transfer, 0, sizeof(transfer));
|
||||
command[0] = MCHP23K256_CMD_READ;
|
||||
mchp23k256_addr2cmd(flash, from, command);
|
||||
|
||||
transfer[0].tx_buf = command;
|
||||
transfer[0].len = mchp23k256_cmdsz(flash);
|
||||
spi_message_add_tail(&transfer[0], &message);
|
||||
|
||||
transfer[1].rx_buf = buf;
|
||||
transfer[1].len = len;
|
||||
spi_message_add_tail(&transfer[1], &message);
|
||||
|
||||
mutex_lock(&flash->lock);
|
||||
|
||||
spi_sync(flash->spi, &message);
|
||||
|
||||
if (retlen && message.actual_length > sizeof(command))
|
||||
*retlen += message.actual_length - sizeof(command);
|
||||
|
||||
mutex_unlock(&flash->lock);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* Set the device into sequential mode. This allows read/writes to the
|
||||
* entire SRAM in a single operation
|
||||
*/
|
||||
static int mchp23k256_set_mode(struct spi_device *spi)
|
||||
{
|
||||
struct spi_transfer transfer = {};
|
||||
struct spi_message message;
|
||||
unsigned char command[2];
|
||||
|
||||
spi_message_init(&message);
|
||||
|
||||
command[0] = MCHP23K256_CMD_WRITE_STATUS;
|
||||
command[1] = MCHP23K256_MODE_SEQ;
|
||||
|
||||
transfer.tx_buf = command;
|
||||
transfer.len = sizeof(command);
|
||||
spi_message_add_tail(&transfer, &message);
|
||||
|
||||
return spi_sync(spi, &message);
|
||||
}
|
||||
|
||||
static const struct mchp23_caps mchp23k256_caps = {
|
||||
.size = SZ_32K,
|
||||
.addr_width = 2,
|
||||
};
|
||||
|
||||
static const struct mchp23_caps mchp23lcv1024_caps = {
|
||||
.size = SZ_128K,
|
||||
.addr_width = 3,
|
||||
};
|
||||
|
||||
static int mchp23k256_probe(struct spi_device *spi)
|
||||
{
|
||||
struct mchp23k256_flash *flash;
|
||||
struct flash_platform_data *data;
|
||||
int err;
|
||||
|
||||
flash = devm_kzalloc(&spi->dev, sizeof(*flash), GFP_KERNEL);
|
||||
if (!flash)
|
||||
return -ENOMEM;
|
||||
|
||||
flash->spi = spi;
|
||||
mutex_init(&flash->lock);
|
||||
spi_set_drvdata(spi, flash);
|
||||
|
||||
err = mchp23k256_set_mode(spi);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
data = dev_get_platdata(&spi->dev);
|
||||
|
||||
flash->caps = of_device_get_match_data(&spi->dev);
|
||||
if (!flash->caps)
|
||||
flash->caps = &mchp23k256_caps;
|
||||
|
||||
mtd_set_of_node(&flash->mtd, spi->dev.of_node);
|
||||
flash->mtd.dev.parent = &spi->dev;
|
||||
flash->mtd.type = MTD_RAM;
|
||||
flash->mtd.flags = MTD_CAP_RAM;
|
||||
flash->mtd.writesize = 1;
|
||||
flash->mtd.size = flash->caps->size;
|
||||
flash->mtd._read = mchp23k256_read;
|
||||
flash->mtd._write = mchp23k256_write;
|
||||
|
||||
err = mtd_device_register(&flash->mtd, data ? data->parts : NULL,
|
||||
data ? data->nr_parts : 0);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int mchp23k256_remove(struct spi_device *spi)
|
||||
{
|
||||
struct mchp23k256_flash *flash = spi_get_drvdata(spi);
|
||||
|
||||
return mtd_device_unregister(&flash->mtd);
|
||||
}
|
||||
|
||||
static const struct of_device_id mchp23k256_of_table[] = {
|
||||
{
|
||||
.compatible = "microchip,mchp23k256",
|
||||
.data = &mchp23k256_caps,
|
||||
},
|
||||
{
|
||||
.compatible = "microchip,mchp23lcv1024",
|
||||
.data = &mchp23lcv1024_caps,
|
||||
},
|
||||
{}
|
||||
};
|
||||
MODULE_DEVICE_TABLE(of, mchp23k256_of_table);
|
||||
|
||||
static struct spi_driver mchp23k256_driver = {
|
||||
.driver = {
|
||||
.name = "mchp23k256",
|
||||
.of_match_table = of_match_ptr(mchp23k256_of_table),
|
||||
},
|
||||
.probe = mchp23k256_probe,
|
||||
.remove = mchp23k256_remove,
|
||||
};
|
||||
|
||||
module_spi_driver(mchp23k256_driver);
|
||||
|
||||
MODULE_DESCRIPTION("MTD SPI driver for MCHP23K256 RAM chips");
|
||||
MODULE_AUTHOR("Andrew Lunn <andre@lunn.ch>");
|
||||
MODULE_LICENSE("GPL v2");
|
||||
MODULE_ALIAS("spi:mchp23k256");
|
|
@ -82,9 +82,13 @@
|
|||
#define OP_WRITE_SECURITY_REVC 0x9A
|
||||
#define OP_WRITE_SECURITY 0x9B /* revision D */
|
||||
|
||||
#define CFI_MFR_ATMEL 0x1F
|
||||
|
||||
#define DATAFLASH_SHIFT_EXTID 24
|
||||
#define DATAFLASH_SHIFT_ID 40
|
||||
|
||||
struct dataflash {
|
||||
uint8_t command[4];
|
||||
u8 command[4];
|
||||
char name[24];
|
||||
|
||||
unsigned short page_offset; /* offset in flash address */
|
||||
|
@ -129,8 +133,7 @@ static int dataflash_waitready(struct spi_device *spi)
|
|||
for (;;) {
|
||||
status = dataflash_status(spi);
|
||||
if (status < 0) {
|
||||
pr_debug("%s: status %d?\n",
|
||||
dev_name(&spi->dev), status);
|
||||
dev_dbg(&spi->dev, "status %d?\n", status);
|
||||
status = 0;
|
||||
}
|
||||
|
||||
|
@ -153,12 +156,11 @@ static int dataflash_erase(struct mtd_info *mtd, struct erase_info *instr)
|
|||
struct spi_transfer x = { };
|
||||
struct spi_message msg;
|
||||
unsigned blocksize = priv->page_size << 3;
|
||||
uint8_t *command;
|
||||
uint32_t rem;
|
||||
u8 *command;
|
||||
u32 rem;
|
||||
|
||||
pr_debug("%s: erase addr=0x%llx len 0x%llx\n",
|
||||
dev_name(&spi->dev), (long long)instr->addr,
|
||||
(long long)instr->len);
|
||||
dev_dbg(&spi->dev, "erase addr=0x%llx len 0x%llx\n",
|
||||
(long long)instr->addr, (long long)instr->len);
|
||||
|
||||
div_u64_rem(instr->len, priv->page_size, &rem);
|
||||
if (rem)
|
||||
|
@ -187,11 +189,11 @@ static int dataflash_erase(struct mtd_info *mtd, struct erase_info *instr)
|
|||
pageaddr = pageaddr << priv->page_offset;
|
||||
|
||||
command[0] = do_block ? OP_ERASE_BLOCK : OP_ERASE_PAGE;
|
||||
command[1] = (uint8_t)(pageaddr >> 16);
|
||||
command[2] = (uint8_t)(pageaddr >> 8);
|
||||
command[1] = (u8)(pageaddr >> 16);
|
||||
command[2] = (u8)(pageaddr >> 8);
|
||||
command[3] = 0;
|
||||
|
||||
pr_debug("ERASE %s: (%x) %x %x %x [%i]\n",
|
||||
dev_dbg(&spi->dev, "ERASE %s: (%x) %x %x %x [%i]\n",
|
||||
do_block ? "block" : "page",
|
||||
command[0], command[1], command[2], command[3],
|
||||
pageaddr);
|
||||
|
@ -200,8 +202,8 @@ static int dataflash_erase(struct mtd_info *mtd, struct erase_info *instr)
|
|||
(void) dataflash_waitready(spi);
|
||||
|
||||
if (status < 0) {
|
||||
printk(KERN_ERR "%s: erase %x, err %d\n",
|
||||
dev_name(&spi->dev), pageaddr, status);
|
||||
dev_err(&spi->dev, "erase %x, err %d\n",
|
||||
pageaddr, status);
|
||||
/* REVISIT: can retry instr->retries times; or
|
||||
* giveup and instr->fail_addr = instr->addr;
|
||||
*/
|
||||
|
@ -239,11 +241,11 @@ static int dataflash_read(struct mtd_info *mtd, loff_t from, size_t len,
|
|||
struct spi_transfer x[2] = { };
|
||||
struct spi_message msg;
|
||||
unsigned int addr;
|
||||
uint8_t *command;
|
||||
u8 *command;
|
||||
int status;
|
||||
|
||||
pr_debug("%s: read 0x%x..0x%x\n", dev_name(&priv->spi->dev),
|
||||
(unsigned)from, (unsigned)(from + len));
|
||||
dev_dbg(&priv->spi->dev, "read 0x%x..0x%x\n",
|
||||
(unsigned int)from, (unsigned int)(from + len));
|
||||
|
||||
/* Calculate flash page/byte address */
|
||||
addr = (((unsigned)from / priv->page_size) << priv->page_offset)
|
||||
|
@ -251,7 +253,7 @@ static int dataflash_read(struct mtd_info *mtd, loff_t from, size_t len,
|
|||
|
||||
command = priv->command;
|
||||
|
||||
pr_debug("READ: (%x) %x %x %x\n",
|
||||
dev_dbg(&priv->spi->dev, "READ: (%x) %x %x %x\n",
|
||||
command[0], command[1], command[2], command[3]);
|
||||
|
||||
spi_message_init(&msg);
|
||||
|
@ -271,9 +273,9 @@ static int dataflash_read(struct mtd_info *mtd, loff_t from, size_t len,
|
|||
* fewer "don't care" bytes. Both buffers stay unchanged.
|
||||
*/
|
||||
command[0] = OP_READ_CONTINUOUS;
|
||||
command[1] = (uint8_t)(addr >> 16);
|
||||
command[2] = (uint8_t)(addr >> 8);
|
||||
command[3] = (uint8_t)(addr >> 0);
|
||||
command[1] = (u8)(addr >> 16);
|
||||
command[2] = (u8)(addr >> 8);
|
||||
command[3] = (u8)(addr >> 0);
|
||||
/* plus 4 "don't care" bytes */
|
||||
|
||||
status = spi_sync(priv->spi, &msg);
|
||||
|
@ -283,8 +285,7 @@ static int dataflash_read(struct mtd_info *mtd, loff_t from, size_t len,
|
|||
*retlen = msg.actual_length - 8;
|
||||
status = 0;
|
||||
} else
|
||||
pr_debug("%s: read %x..%x --> %d\n",
|
||||
dev_name(&priv->spi->dev),
|
||||
dev_dbg(&priv->spi->dev, "read %x..%x --> %d\n",
|
||||
(unsigned)from, (unsigned)(from + len),
|
||||
status);
|
||||
return status;
|
||||
|
@ -308,10 +309,10 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len,
|
|||
size_t remaining = len;
|
||||
u_char *writebuf = (u_char *) buf;
|
||||
int status = -EINVAL;
|
||||
uint8_t *command;
|
||||
u8 *command;
|
||||
|
||||
pr_debug("%s: write 0x%x..0x%x\n",
|
||||
dev_name(&spi->dev), (unsigned)to, (unsigned)(to + len));
|
||||
dev_dbg(&spi->dev, "write 0x%x..0x%x\n",
|
||||
(unsigned int)to, (unsigned int)(to + len));
|
||||
|
||||
spi_message_init(&msg);
|
||||
|
||||
|
@ -328,7 +329,7 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len,
|
|||
|
||||
mutex_lock(&priv->lock);
|
||||
while (remaining > 0) {
|
||||
pr_debug("write @ %i:%i len=%i\n",
|
||||
dev_dbg(&spi->dev, "write @ %i:%i len=%i\n",
|
||||
pageaddr, offset, writelen);
|
||||
|
||||
/* REVISIT:
|
||||
|
@ -356,13 +357,13 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len,
|
|||
command[2] = (addr & 0x0000FF00) >> 8;
|
||||
command[3] = 0;
|
||||
|
||||
pr_debug("TRANSFER: (%x) %x %x %x\n",
|
||||
dev_dbg(&spi->dev, "TRANSFER: (%x) %x %x %x\n",
|
||||
command[0], command[1], command[2], command[3]);
|
||||
|
||||
status = spi_sync(spi, &msg);
|
||||
if (status < 0)
|
||||
pr_debug("%s: xfer %u -> %d\n",
|
||||
dev_name(&spi->dev), addr, status);
|
||||
dev_dbg(&spi->dev, "xfer %u -> %d\n",
|
||||
addr, status);
|
||||
|
||||
(void) dataflash_waitready(priv->spi);
|
||||
}
|
||||
|
@ -374,7 +375,7 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len,
|
|||
command[2] = (addr & 0x0000FF00) >> 8;
|
||||
command[3] = (addr & 0x000000FF);
|
||||
|
||||
pr_debug("PROGRAM: (%x) %x %x %x\n",
|
||||
dev_dbg(&spi->dev, "PROGRAM: (%x) %x %x %x\n",
|
||||
command[0], command[1], command[2], command[3]);
|
||||
|
||||
x[1].tx_buf = writebuf;
|
||||
|
@ -383,8 +384,8 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len,
|
|||
status = spi_sync(spi, &msg);
|
||||
spi_transfer_del(x + 1);
|
||||
if (status < 0)
|
||||
pr_debug("%s: pgm %u/%u -> %d\n",
|
||||
dev_name(&spi->dev), addr, writelen, status);
|
||||
dev_dbg(&spi->dev, "pgm %u/%u -> %d\n",
|
||||
addr, writelen, status);
|
||||
|
||||
(void) dataflash_waitready(priv->spi);
|
||||
|
||||
|
@ -398,20 +399,20 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len,
|
|||
command[2] = (addr & 0x0000FF00) >> 8;
|
||||
command[3] = 0;
|
||||
|
||||
pr_debug("COMPARE: (%x) %x %x %x\n",
|
||||
dev_dbg(&spi->dev, "COMPARE: (%x) %x %x %x\n",
|
||||
command[0], command[1], command[2], command[3]);
|
||||
|
||||
status = spi_sync(spi, &msg);
|
||||
if (status < 0)
|
||||
pr_debug("%s: compare %u -> %d\n",
|
||||
dev_name(&spi->dev), addr, status);
|
||||
dev_dbg(&spi->dev, "compare %u -> %d\n",
|
||||
addr, status);
|
||||
|
||||
status = dataflash_waitready(priv->spi);
|
||||
|
||||
/* Check result of the compare operation */
|
||||
if (status & (1 << 6)) {
|
||||
printk(KERN_ERR "%s: compare page %u, err %d\n",
|
||||
dev_name(&spi->dev), pageaddr, status);
|
||||
dev_err(&spi->dev, "compare page %u, err %d\n",
|
||||
pageaddr, status);
|
||||
remaining = 0;
|
||||
status = -EIO;
|
||||
break;
|
||||
|
@ -455,11 +456,11 @@ static int dataflash_get_otp_info(struct mtd_info *mtd, size_t len,
|
|||
}
|
||||
|
||||
static ssize_t otp_read(struct spi_device *spi, unsigned base,
|
||||
uint8_t *buf, loff_t off, size_t len)
|
||||
u8 *buf, loff_t off, size_t len)
|
||||
{
|
||||
struct spi_message m;
|
||||
size_t l;
|
||||
uint8_t *scratch;
|
||||
u8 *scratch;
|
||||
struct spi_transfer t;
|
||||
int status;
|
||||
|
||||
|
@ -538,7 +539,7 @@ static int dataflash_write_user_otp(struct mtd_info *mtd,
|
|||
{
|
||||
struct spi_message m;
|
||||
const size_t l = 4 + 64;
|
||||
uint8_t *scratch;
|
||||
u8 *scratch;
|
||||
struct spi_transfer t;
|
||||
struct dataflash *priv = mtd->priv;
|
||||
int status;
|
||||
|
@ -689,14 +690,15 @@ struct flash_info {
|
|||
/* JEDEC id has a high byte of zero plus three data bytes:
|
||||
* the manufacturer id, then a two byte device id.
|
||||
*/
|
||||
uint32_t jedec_id;
|
||||
u64 jedec_id;
|
||||
|
||||
/* The size listed here is what works with OP_ERASE_PAGE. */
|
||||
unsigned nr_pages;
|
||||
uint16_t pagesize;
|
||||
uint16_t pageoffset;
|
||||
u16 pagesize;
|
||||
u16 pageoffset;
|
||||
|
||||
uint16_t flags;
|
||||
u16 flags;
|
||||
#define SUP_EXTID 0x0004 /* supports extended ID data */
|
||||
#define SUP_POW2PS 0x0002 /* supports 2^N byte pages */
|
||||
#define IS_POW2PS 0x0001 /* uses 2^N byte pages */
|
||||
};
|
||||
|
@ -734,54 +736,32 @@ static struct flash_info dataflash_data[] = {
|
|||
|
||||
{ "AT45DB642x", 0x1f2800, 8192, 1056, 11, SUP_POW2PS},
|
||||
{ "at45db642d", 0x1f2800, 8192, 1024, 10, SUP_POW2PS | IS_POW2PS},
|
||||
|
||||
{ "AT45DB641E", 0x1f28000100, 32768, 264, 9, SUP_EXTID | SUP_POW2PS},
|
||||
{ "at45db641e", 0x1f28000100, 32768, 256, 8, SUP_EXTID | SUP_POW2PS | IS_POW2PS},
|
||||
};
|
||||
|
||||
static struct flash_info *jedec_probe(struct spi_device *spi)
|
||||
static struct flash_info *jedec_lookup(struct spi_device *spi,
|
||||
u64 jedec, bool use_extid)
|
||||
{
|
||||
int tmp;
|
||||
uint8_t code = OP_READ_ID;
|
||||
uint8_t id[3];
|
||||
uint32_t jedec;
|
||||
struct flash_info *info;
|
||||
struct flash_info *info;
|
||||
int status;
|
||||
|
||||
/* JEDEC also defines an optional "extended device information"
|
||||
* string for after vendor-specific data, after the three bytes
|
||||
* we use here. Supporting some chips might require using it.
|
||||
*
|
||||
* If the vendor ID isn't Atmel's (0x1f), assume this call failed.
|
||||
* That's not an error; only rev C and newer chips handle it, and
|
||||
* only Atmel sells these chips.
|
||||
*/
|
||||
tmp = spi_write_then_read(spi, &code, 1, id, 3);
|
||||
if (tmp < 0) {
|
||||
pr_debug("%s: error %d reading JEDEC ID\n",
|
||||
dev_name(&spi->dev), tmp);
|
||||
return ERR_PTR(tmp);
|
||||
}
|
||||
if (id[0] != 0x1f)
|
||||
return NULL;
|
||||
for (info = dataflash_data;
|
||||
info < dataflash_data + ARRAY_SIZE(dataflash_data);
|
||||
info++) {
|
||||
if (use_extid && !(info->flags & SUP_EXTID))
|
||||
continue;
|
||||
|
||||
jedec = id[0];
|
||||
jedec = jedec << 8;
|
||||
jedec |= id[1];
|
||||
jedec = jedec << 8;
|
||||
jedec |= id[2];
|
||||
|
||||
for (tmp = 0, info = dataflash_data;
|
||||
tmp < ARRAY_SIZE(dataflash_data);
|
||||
tmp++, info++) {
|
||||
if (info->jedec_id == jedec) {
|
||||
pr_debug("%s: OTP, sector protect%s\n",
|
||||
dev_name(&spi->dev),
|
||||
(info->flags & SUP_POW2PS)
|
||||
? ", binary pagesize" : ""
|
||||
);
|
||||
dev_dbg(&spi->dev, "OTP, sector protect%s\n",
|
||||
(info->flags & SUP_POW2PS) ?
|
||||
", binary pagesize" : "");
|
||||
if (info->flags & SUP_POW2PS) {
|
||||
status = dataflash_status(spi);
|
||||
if (status < 0) {
|
||||
pr_debug("%s: status error %d\n",
|
||||
dev_name(&spi->dev), status);
|
||||
dev_dbg(&spi->dev, "status error %d\n",
|
||||
status);
|
||||
return ERR_PTR(status);
|
||||
}
|
||||
if (status & 0x1) {
|
||||
|
@ -796,12 +776,58 @@ static struct flash_info *jedec_probe(struct spi_device *spi)
|
|||
}
|
||||
}
|
||||
|
||||
return ERR_PTR(-ENODEV);
|
||||
}
|
||||
|
||||
static struct flash_info *jedec_probe(struct spi_device *spi)
|
||||
{
|
||||
int ret;
|
||||
u8 code = OP_READ_ID;
|
||||
u64 jedec;
|
||||
u8 id[sizeof(jedec)] = {0};
|
||||
const unsigned int id_size = 5;
|
||||
struct flash_info *info;
|
||||
|
||||
/*
|
||||
* JEDEC also defines an optional "extended device information"
|
||||
* string for after vendor-specific data, after the three bytes
|
||||
* we use here. Supporting some chips might require using it.
|
||||
*
|
||||
* If the vendor ID isn't Atmel's (0x1f), assume this call failed.
|
||||
* That's not an error; only rev C and newer chips handle it, and
|
||||
* only Atmel sells these chips.
|
||||
*/
|
||||
ret = spi_write_then_read(spi, &code, 1, id, id_size);
|
||||
if (ret < 0) {
|
||||
dev_dbg(&spi->dev, "error %d reading JEDEC ID\n", ret);
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
|
||||
if (id[0] != CFI_MFR_ATMEL)
|
||||
return NULL;
|
||||
|
||||
jedec = be64_to_cpup((__be64 *)id);
|
||||
|
||||
/*
|
||||
* First, try to match device using extended device
|
||||
* information
|
||||
*/
|
||||
info = jedec_lookup(spi, jedec >> DATAFLASH_SHIFT_EXTID, true);
|
||||
if (!IS_ERR(info))
|
||||
return info;
|
||||
/*
|
||||
* If that fails, make another pass using regular ID
|
||||
* information
|
||||
*/
|
||||
info = jedec_lookup(spi, jedec >> DATAFLASH_SHIFT_ID, false);
|
||||
if (!IS_ERR(info))
|
||||
return info;
|
||||
/*
|
||||
* Treat other chips as errors ... we won't know the right page
|
||||
* size (it might be binary) even when we can tell which density
|
||||
* class is involved (legacy chip id scheme).
|
||||
*/
|
||||
dev_warn(&spi->dev, "JEDEC id %06x not handled\n", jedec);
|
||||
dev_warn(&spi->dev, "JEDEC id %016llx not handled\n", jedec);
|
||||
return ERR_PTR(-ENODEV);
|
||||
}
|
||||
|
||||
|
@ -845,8 +871,7 @@ static int dataflash_probe(struct spi_device *spi)
|
|||
*/
|
||||
status = dataflash_status(spi);
|
||||
if (status <= 0 || status == 0xff) {
|
||||
pr_debug("%s: status error %d\n",
|
||||
dev_name(&spi->dev), status);
|
||||
dev_dbg(&spi->dev, "status error %d\n", status);
|
||||
if (status == 0 || status == 0xff)
|
||||
status = -ENODEV;
|
||||
return status;
|
||||
|
@ -887,8 +912,7 @@ static int dataflash_probe(struct spi_device *spi)
|
|||
}
|
||||
|
||||
if (status < 0)
|
||||
pr_debug("%s: add_dataflash --> %d\n", dev_name(&spi->dev),
|
||||
status);
|
||||
dev_dbg(&spi->dev, "add_dataflash --> %d\n", status);
|
||||
|
||||
return status;
|
||||
}
|
||||
|
@ -898,7 +922,7 @@ static int dataflash_remove(struct spi_device *spi)
|
|||
struct dataflash *flash = spi_get_drvdata(spi);
|
||||
int status;
|
||||
|
||||
pr_debug("%s: remove\n", dev_name(&spi->dev));
|
||||
dev_dbg(&spi->dev, "remove\n");
|
||||
|
||||
status = mtd_device_unregister(&flash->mtd);
|
||||
if (status == 0)
|
||||
|
|
|
@ -13,7 +13,6 @@
|
|||
#define _MTD_SERIAL_FLASH_CMDS_H
|
||||
|
||||
/* Generic Flash Commands/OPCODEs */
|
||||
#define SPINOR_OP_RDSR2 0x35
|
||||
#define SPINOR_OP_WRVCR 0x81
|
||||
#define SPINOR_OP_RDVCR 0x85
|
||||
|
||||
|
|
|
@ -1445,7 +1445,7 @@ static int stfsm_s25fl_config(struct stfsm *fsm)
|
|||
}
|
||||
|
||||
/* Check status of 'QE' bit, update if required. */
|
||||
stfsm_read_status(fsm, SPINOR_OP_RDSR2, &cr1, 1);
|
||||
stfsm_read_status(fsm, SPINOR_OP_RDCR, &cr1, 1);
|
||||
data_pads = ((fsm->stfsm_seq_read.seq_cfg >> 16) & 0x3) + 1;
|
||||
if (data_pads == 4) {
|
||||
if (!(cr1 & STFSM_S25FL_CONFIG_QE)) {
|
||||
|
@ -1490,7 +1490,7 @@ static int stfsm_w25q_config(struct stfsm *fsm)
|
|||
return ret;
|
||||
|
||||
/* Check status of 'QE' bit, update if required. */
|
||||
stfsm_read_status(fsm, SPINOR_OP_RDSR2, &sr2, 1);
|
||||
stfsm_read_status(fsm, SPINOR_OP_RDCR, &sr2, 1);
|
||||
data_pads = ((fsm->stfsm_seq_read.seq_cfg >> 16) & 0x3) + 1;
|
||||
if (data_pads == 4) {
|
||||
if (!(sr2 & W25Q_STATUS_QE)) {
|
||||
|
|
|
@ -59,7 +59,7 @@ int of_flash_probe_gemini(struct platform_device *pdev,
|
|||
struct device_node *np,
|
||||
struct map_info *map)
|
||||
{
|
||||
static struct regmap *rmap;
|
||||
struct regmap *rmap;
|
||||
struct device *dev = &pdev->dev;
|
||||
u32 val;
|
||||
int ret;
|
||||
|
|
|
@ -991,7 +991,7 @@ EXPORT_SYMBOL_GPL(mtd_point);
|
|||
/* We probably shouldn't allow XIP if the unpoint isn't a NULL */
|
||||
int mtd_unpoint(struct mtd_info *mtd, loff_t from, size_t len)
|
||||
{
|
||||
if (!mtd->_point)
|
||||
if (!mtd->_unpoint)
|
||||
return -EOPNOTSUPP;
|
||||
if (from < 0 || from >= mtd->size || len > mtd->size - from)
|
||||
return -EINVAL;
|
||||
|
|
|
@ -37,10 +37,16 @@
|
|||
static LIST_HEAD(mtd_partitions);
|
||||
static DEFINE_MUTEX(mtd_partitions_mutex);
|
||||
|
||||
/* Our partition node structure */
|
||||
/**
|
||||
* struct mtd_part - our partition node structure
|
||||
*
|
||||
* @mtd: struct holding partition details
|
||||
* @parent: parent mtd - flash device or another partition
|
||||
* @offset: partition offset relative to the *flash device*
|
||||
*/
|
||||
struct mtd_part {
|
||||
struct mtd_info mtd;
|
||||
struct mtd_info *master;
|
||||
struct mtd_info *parent;
|
||||
uint64_t offset;
|
||||
struct list_head list;
|
||||
};
|
||||
|
@ -67,15 +73,15 @@ static int part_read(struct mtd_info *mtd, loff_t from, size_t len,
|
|||
struct mtd_ecc_stats stats;
|
||||
int res;
|
||||
|
||||
stats = part->master->ecc_stats;
|
||||
res = part->master->_read(part->master, from + part->offset, len,
|
||||
stats = part->parent->ecc_stats;
|
||||
res = part->parent->_read(part->parent, from + part->offset, len,
|
||||
retlen, buf);
|
||||
if (unlikely(mtd_is_eccerr(res)))
|
||||
mtd->ecc_stats.failed +=
|
||||
part->master->ecc_stats.failed - stats.failed;
|
||||
part->parent->ecc_stats.failed - stats.failed;
|
||||
else
|
||||
mtd->ecc_stats.corrected +=
|
||||
part->master->ecc_stats.corrected - stats.corrected;
|
||||
part->parent->ecc_stats.corrected - stats.corrected;
|
||||
return res;
|
||||
}
|
||||
|
||||
|
@ -84,7 +90,7 @@ static int part_point(struct mtd_info *mtd, loff_t from, size_t len,
|
|||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
|
||||
return part->master->_point(part->master, from + part->offset, len,
|
||||
return part->parent->_point(part->parent, from + part->offset, len,
|
||||
retlen, virt, phys);
|
||||
}
|
||||
|
||||
|
@ -92,7 +98,7 @@ static int part_unpoint(struct mtd_info *mtd, loff_t from, size_t len)
|
|||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
|
||||
return part->master->_unpoint(part->master, from + part->offset, len);
|
||||
return part->parent->_unpoint(part->parent, from + part->offset, len);
|
||||
}
|
||||
|
||||
static unsigned long part_get_unmapped_area(struct mtd_info *mtd,
|
||||
|
@ -103,7 +109,7 @@ static unsigned long part_get_unmapped_area(struct mtd_info *mtd,
|
|||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
|
||||
offset += part->offset;
|
||||
return part->master->_get_unmapped_area(part->master, len, offset,
|
||||
return part->parent->_get_unmapped_area(part->parent, len, offset,
|
||||
flags);
|
||||
}
|
||||
|
||||
|
@ -132,7 +138,7 @@ static int part_read_oob(struct mtd_info *mtd, loff_t from,
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
res = part->master->_read_oob(part->master, from + part->offset, ops);
|
||||
res = part->parent->_read_oob(part->parent, from + part->offset, ops);
|
||||
if (unlikely(res)) {
|
||||
if (mtd_is_bitflip(res))
|
||||
mtd->ecc_stats.corrected++;
|
||||
|
@ -146,7 +152,7 @@ static int part_read_user_prot_reg(struct mtd_info *mtd, loff_t from,
|
|||
size_t len, size_t *retlen, u_char *buf)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
return part->master->_read_user_prot_reg(part->master, from, len,
|
||||
return part->parent->_read_user_prot_reg(part->parent, from, len,
|
||||
retlen, buf);
|
||||
}
|
||||
|
||||
|
@ -154,7 +160,7 @@ static int part_get_user_prot_info(struct mtd_info *mtd, size_t len,
|
|||
size_t *retlen, struct otp_info *buf)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
return part->master->_get_user_prot_info(part->master, len, retlen,
|
||||
return part->parent->_get_user_prot_info(part->parent, len, retlen,
|
||||
buf);
|
||||
}
|
||||
|
||||
|
@ -162,7 +168,7 @@ static int part_read_fact_prot_reg(struct mtd_info *mtd, loff_t from,
|
|||
size_t len, size_t *retlen, u_char *buf)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
return part->master->_read_fact_prot_reg(part->master, from, len,
|
||||
return part->parent->_read_fact_prot_reg(part->parent, from, len,
|
||||
retlen, buf);
|
||||
}
|
||||
|
||||
|
@ -170,7 +176,7 @@ static int part_get_fact_prot_info(struct mtd_info *mtd, size_t len,
|
|||
size_t *retlen, struct otp_info *buf)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
return part->master->_get_fact_prot_info(part->master, len, retlen,
|
||||
return part->parent->_get_fact_prot_info(part->parent, len, retlen,
|
||||
buf);
|
||||
}
|
||||
|
||||
|
@ -178,7 +184,7 @@ static int part_write(struct mtd_info *mtd, loff_t to, size_t len,
|
|||
size_t *retlen, const u_char *buf)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
return part->master->_write(part->master, to + part->offset, len,
|
||||
return part->parent->_write(part->parent, to + part->offset, len,
|
||||
retlen, buf);
|
||||
}
|
||||
|
||||
|
@ -186,7 +192,7 @@ static int part_panic_write(struct mtd_info *mtd, loff_t to, size_t len,
|
|||
size_t *retlen, const u_char *buf)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
return part->master->_panic_write(part->master, to + part->offset, len,
|
||||
return part->parent->_panic_write(part->parent, to + part->offset, len,
|
||||
retlen, buf);
|
||||
}
|
||||
|
||||
|
@ -199,14 +205,14 @@ static int part_write_oob(struct mtd_info *mtd, loff_t to,
|
|||
return -EINVAL;
|
||||
if (ops->datbuf && to + ops->len > mtd->size)
|
||||
return -EINVAL;
|
||||
return part->master->_write_oob(part->master, to + part->offset, ops);
|
||||
return part->parent->_write_oob(part->parent, to + part->offset, ops);
|
||||
}
|
||||
|
||||
static int part_write_user_prot_reg(struct mtd_info *mtd, loff_t from,
|
||||
size_t len, size_t *retlen, u_char *buf)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
return part->master->_write_user_prot_reg(part->master, from, len,
|
||||
return part->parent->_write_user_prot_reg(part->parent, from, len,
|
||||
retlen, buf);
|
||||
}
|
||||
|
||||
|
@ -214,14 +220,14 @@ static int part_lock_user_prot_reg(struct mtd_info *mtd, loff_t from,
|
|||
size_t len)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
return part->master->_lock_user_prot_reg(part->master, from, len);
|
||||
return part->parent->_lock_user_prot_reg(part->parent, from, len);
|
||||
}
|
||||
|
||||
static int part_writev(struct mtd_info *mtd, const struct kvec *vecs,
|
||||
unsigned long count, loff_t to, size_t *retlen)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
return part->master->_writev(part->master, vecs, count,
|
||||
return part->parent->_writev(part->parent, vecs, count,
|
||||
to + part->offset, retlen);
|
||||
}
|
||||
|
||||
|
@ -231,7 +237,7 @@ static int part_erase(struct mtd_info *mtd, struct erase_info *instr)
|
|||
int ret;
|
||||
|
||||
instr->addr += part->offset;
|
||||
ret = part->master->_erase(part->master, instr);
|
||||
ret = part->parent->_erase(part->parent, instr);
|
||||
if (ret) {
|
||||
if (instr->fail_addr != MTD_FAIL_ADDR_UNKNOWN)
|
||||
instr->fail_addr -= part->offset;
|
||||
|
@ -257,51 +263,51 @@ EXPORT_SYMBOL_GPL(mtd_erase_callback);
|
|||
static int part_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
return part->master->_lock(part->master, ofs + part->offset, len);
|
||||
return part->parent->_lock(part->parent, ofs + part->offset, len);
|
||||
}
|
||||
|
||||
static int part_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
return part->master->_unlock(part->master, ofs + part->offset, len);
|
||||
return part->parent->_unlock(part->parent, ofs + part->offset, len);
|
||||
}
|
||||
|
||||
static int part_is_locked(struct mtd_info *mtd, loff_t ofs, uint64_t len)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
return part->master->_is_locked(part->master, ofs + part->offset, len);
|
||||
return part->parent->_is_locked(part->parent, ofs + part->offset, len);
|
||||
}
|
||||
|
||||
static void part_sync(struct mtd_info *mtd)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
part->master->_sync(part->master);
|
||||
part->parent->_sync(part->parent);
|
||||
}
|
||||
|
||||
static int part_suspend(struct mtd_info *mtd)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
return part->master->_suspend(part->master);
|
||||
return part->parent->_suspend(part->parent);
|
||||
}
|
||||
|
||||
static void part_resume(struct mtd_info *mtd)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
part->master->_resume(part->master);
|
||||
part->parent->_resume(part->parent);
|
||||
}
|
||||
|
||||
static int part_block_isreserved(struct mtd_info *mtd, loff_t ofs)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
ofs += part->offset;
|
||||
return part->master->_block_isreserved(part->master, ofs);
|
||||
return part->parent->_block_isreserved(part->parent, ofs);
|
||||
}
|
||||
|
||||
static int part_block_isbad(struct mtd_info *mtd, loff_t ofs)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
ofs += part->offset;
|
||||
return part->master->_block_isbad(part->master, ofs);
|
||||
return part->parent->_block_isbad(part->parent, ofs);
|
||||
}
|
||||
|
||||
static int part_block_markbad(struct mtd_info *mtd, loff_t ofs)
|
||||
|
@ -310,7 +316,7 @@ static int part_block_markbad(struct mtd_info *mtd, loff_t ofs)
|
|||
int res;
|
||||
|
||||
ofs += part->offset;
|
||||
res = part->master->_block_markbad(part->master, ofs);
|
||||
res = part->parent->_block_markbad(part->parent, ofs);
|
||||
if (!res)
|
||||
mtd->ecc_stats.badblocks++;
|
||||
return res;
|
||||
|
@ -319,13 +325,13 @@ static int part_block_markbad(struct mtd_info *mtd, loff_t ofs)
|
|||
static int part_get_device(struct mtd_info *mtd)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
return part->master->_get_device(part->master);
|
||||
return part->parent->_get_device(part->parent);
|
||||
}
|
||||
|
||||
static void part_put_device(struct mtd_info *mtd)
|
||||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
part->master->_put_device(part->master);
|
||||
part->parent->_put_device(part->parent);
|
||||
}
|
||||
|
||||
static int part_ooblayout_ecc(struct mtd_info *mtd, int section,
|
||||
|
@ -333,7 +339,7 @@ static int part_ooblayout_ecc(struct mtd_info *mtd, int section,
|
|||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
|
||||
return mtd_ooblayout_ecc(part->master, section, oobregion);
|
||||
return mtd_ooblayout_ecc(part->parent, section, oobregion);
|
||||
}
|
||||
|
||||
static int part_ooblayout_free(struct mtd_info *mtd, int section,
|
||||
|
@ -341,7 +347,7 @@ static int part_ooblayout_free(struct mtd_info *mtd, int section,
|
|||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
|
||||
return mtd_ooblayout_free(part->master, section, oobregion);
|
||||
return mtd_ooblayout_free(part->parent, section, oobregion);
|
||||
}
|
||||
|
||||
static const struct mtd_ooblayout_ops part_ooblayout_ops = {
|
||||
|
@ -353,7 +359,7 @@ static int part_max_bad_blocks(struct mtd_info *mtd, loff_t ofs, size_t len)
|
|||
{
|
||||
struct mtd_part *part = mtd_to_part(mtd);
|
||||
|
||||
return part->master->_max_bad_blocks(part->master,
|
||||
return part->parent->_max_bad_blocks(part->parent,
|
||||
ofs + part->offset, len);
|
||||
}
|
||||
|
||||
|
@@ -363,63 +369,70 @@ static inline void free_partition(struct mtd_part *p)
 	kfree(p);
 }
 
-/*
- * This function unregisters and destroy all slave MTD objects which are
- * attached to the given master MTD object.
+/**
+ * mtd_parse_part - parse MTD partition looking for subpartitions
+ *
+ * @slave: part that is supposed to be a container and should be parsed
+ * @types: NULL-terminated array with names of partition parsers to try
+ *
+ * Some partitions are kind of containers with extra subpartitions (volumes).
+ * There can be various formats of such containers. This function tries to use
+ * specified parsers to analyze given partition and registers found
+ * subpartitions on success.
  */
-int del_mtd_partitions(struct mtd_info *master)
+static int mtd_parse_part(struct mtd_part *slave, const char *const *types)
 {
-	struct mtd_part *slave, *next;
-	int ret, err = 0;
+	struct mtd_partitions parsed;
+	int err;
 
-	mutex_lock(&mtd_partitions_mutex);
-	list_for_each_entry_safe(slave, next, &mtd_partitions, list)
-		if (slave->master == master) {
-			ret = del_mtd_device(&slave->mtd);
-			if (ret < 0) {
-				err = ret;
-				continue;
-			}
-			list_del(&slave->list);
-			free_partition(slave);
-		}
-	mutex_unlock(&mtd_partitions_mutex);
+	err = parse_mtd_partitions(&slave->mtd, types, &parsed, NULL);
+	if (err)
+		return err;
+	else if (!parsed.nr_parts)
+		return -ENOENT;
+
+	err = add_mtd_partitions(&slave->mtd, parsed.parts, parsed.nr_parts);
+
+	mtd_part_parser_cleanup(&parsed);
 
 	return err;
 }
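A hedged example of how a board-level partition table could opt into this parsing through the new 'types' member of struct mtd_partition (the parser name "trx", the offsets and the sizes are illustrative assumptions, not values from this series; assumes <linux/mtd/partitions.h> and <linux/sizes.h>):

	/* Illustrative only: parser name and layout are assumptions. */
	static const char * const board_fw_types[] = { "trx", NULL };

	static const struct mtd_partition board_parts[] = {
		{
			.name	= "boot",
			.offset	= 0,
			.size	= SZ_512K,
		},
		{
			.name	= "firmware",	/* container; subpartitions get parsed */
			.offset	= MTDPART_OFS_APPEND,
			.size	= MTDPART_SIZ_FULL,
			.types	= board_fw_types,
		},
	};

With such a table, add_mtd_partitions() calls mtd_parse_part() for the "firmware" entry, and any volumes the parser finds are registered as sub-partitions of that partition.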
|
||||
static struct mtd_part *allocate_partition(struct mtd_info *master,
|
||||
static struct mtd_part *allocate_partition(struct mtd_info *parent,
|
||||
const struct mtd_partition *part, int partno,
|
||||
uint64_t cur_offset)
|
||||
{
|
||||
int wr_alignment = (parent->flags & MTD_NO_ERASE) ? parent->writesize :
|
||||
parent->erasesize;
|
||||
struct mtd_part *slave;
|
||||
u32 remainder;
|
||||
char *name;
|
||||
u64 tmp;
|
||||
|
||||
/* allocate the partition structure */
|
||||
slave = kzalloc(sizeof(*slave), GFP_KERNEL);
|
||||
name = kstrdup(part->name, GFP_KERNEL);
|
||||
if (!name || !slave) {
|
||||
printk(KERN_ERR"memory allocation error while creating partitions for \"%s\"\n",
|
||||
master->name);
|
||||
parent->name);
|
||||
kfree(name);
|
||||
kfree(slave);
|
||||
return ERR_PTR(-ENOMEM);
|
||||
}
|
||||
|
||||
/* set up the MTD object for this partition */
|
||||
slave->mtd.type = master->type;
|
||||
slave->mtd.flags = master->flags & ~part->mask_flags;
|
||||
slave->mtd.type = parent->type;
|
||||
slave->mtd.flags = parent->flags & ~part->mask_flags;
|
||||
slave->mtd.size = part->size;
|
||||
slave->mtd.writesize = master->writesize;
|
||||
slave->mtd.writebufsize = master->writebufsize;
|
||||
slave->mtd.oobsize = master->oobsize;
|
||||
slave->mtd.oobavail = master->oobavail;
|
||||
slave->mtd.subpage_sft = master->subpage_sft;
|
||||
slave->mtd.pairing = master->pairing;
|
||||
slave->mtd.writesize = parent->writesize;
|
||||
slave->mtd.writebufsize = parent->writebufsize;
|
||||
slave->mtd.oobsize = parent->oobsize;
|
||||
slave->mtd.oobavail = parent->oobavail;
|
||||
slave->mtd.subpage_sft = parent->subpage_sft;
|
||||
slave->mtd.pairing = parent->pairing;
|
||||
|
||||
slave->mtd.name = name;
|
||||
slave->mtd.owner = master->owner;
|
||||
slave->mtd.owner = parent->owner;
|
||||
|
||||
/* NOTE: Historically, we didn't arrange MTDs as a tree out of
|
||||
* concern for showing the same data in multiple partitions.
|
||||
|
@ -429,80 +442,81 @@ static struct mtd_part *allocate_partition(struct mtd_info *master,
|
|||
* parent conditional on that option. Note, this is a way to
|
||||
* distinguish between the master and the partition in sysfs.
|
||||
*/
|
||||
slave->mtd.dev.parent = IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER) ?
|
||||
&master->dev :
|
||||
master->dev.parent;
|
||||
slave->mtd.dev.parent = IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER) || mtd_is_partition(parent) ?
|
||||
&parent->dev :
|
||||
parent->dev.parent;
|
||||
slave->mtd.dev.of_node = part->of_node;
|
||||
|
||||
slave->mtd._read = part_read;
|
||||
slave->mtd._write = part_write;
|
||||
|
||||
if (master->_panic_write)
|
||||
if (parent->_panic_write)
|
||||
slave->mtd._panic_write = part_panic_write;
|
||||
|
||||
if (master->_point && master->_unpoint) {
|
||||
if (parent->_point && parent->_unpoint) {
|
||||
slave->mtd._point = part_point;
|
||||
slave->mtd._unpoint = part_unpoint;
|
||||
}
|
||||
|
||||
if (master->_get_unmapped_area)
|
||||
if (parent->_get_unmapped_area)
|
||||
slave->mtd._get_unmapped_area = part_get_unmapped_area;
|
||||
if (master->_read_oob)
|
||||
if (parent->_read_oob)
|
||||
slave->mtd._read_oob = part_read_oob;
|
||||
if (master->_write_oob)
|
||||
if (parent->_write_oob)
|
||||
slave->mtd._write_oob = part_write_oob;
|
||||
if (master->_read_user_prot_reg)
|
||||
if (parent->_read_user_prot_reg)
|
||||
slave->mtd._read_user_prot_reg = part_read_user_prot_reg;
|
||||
if (master->_read_fact_prot_reg)
|
||||
if (parent->_read_fact_prot_reg)
|
||||
slave->mtd._read_fact_prot_reg = part_read_fact_prot_reg;
|
||||
if (master->_write_user_prot_reg)
|
||||
if (parent->_write_user_prot_reg)
|
||||
slave->mtd._write_user_prot_reg = part_write_user_prot_reg;
|
||||
if (master->_lock_user_prot_reg)
|
||||
if (parent->_lock_user_prot_reg)
|
||||
slave->mtd._lock_user_prot_reg = part_lock_user_prot_reg;
|
||||
if (master->_get_user_prot_info)
|
||||
if (parent->_get_user_prot_info)
|
||||
slave->mtd._get_user_prot_info = part_get_user_prot_info;
|
||||
if (master->_get_fact_prot_info)
|
||||
if (parent->_get_fact_prot_info)
|
||||
slave->mtd._get_fact_prot_info = part_get_fact_prot_info;
|
||||
if (master->_sync)
|
||||
if (parent->_sync)
|
||||
slave->mtd._sync = part_sync;
|
||||
if (!partno && !master->dev.class && master->_suspend &&
|
||||
master->_resume) {
|
||||
slave->mtd._suspend = part_suspend;
|
||||
slave->mtd._resume = part_resume;
|
||||
if (!partno && !parent->dev.class && parent->_suspend &&
|
||||
parent->_resume) {
|
||||
slave->mtd._suspend = part_suspend;
|
||||
slave->mtd._resume = part_resume;
|
||||
}
|
||||
if (master->_writev)
|
||||
if (parent->_writev)
|
||||
slave->mtd._writev = part_writev;
|
||||
if (master->_lock)
|
||||
if (parent->_lock)
|
||||
slave->mtd._lock = part_lock;
|
||||
if (master->_unlock)
|
||||
if (parent->_unlock)
|
||||
slave->mtd._unlock = part_unlock;
|
||||
if (master->_is_locked)
|
||||
if (parent->_is_locked)
|
||||
slave->mtd._is_locked = part_is_locked;
|
||||
if (master->_block_isreserved)
|
||||
if (parent->_block_isreserved)
|
||||
slave->mtd._block_isreserved = part_block_isreserved;
|
||||
if (master->_block_isbad)
|
||||
if (parent->_block_isbad)
|
||||
slave->mtd._block_isbad = part_block_isbad;
|
||||
if (master->_block_markbad)
|
||||
if (parent->_block_markbad)
|
||||
slave->mtd._block_markbad = part_block_markbad;
|
||||
if (master->_max_bad_blocks)
|
||||
if (parent->_max_bad_blocks)
|
||||
slave->mtd._max_bad_blocks = part_max_bad_blocks;
|
||||
|
||||
if (master->_get_device)
|
||||
if (parent->_get_device)
|
||||
slave->mtd._get_device = part_get_device;
|
||||
if (master->_put_device)
|
||||
if (parent->_put_device)
|
||||
slave->mtd._put_device = part_put_device;
|
||||
|
||||
slave->mtd._erase = part_erase;
|
||||
slave->master = master;
|
||||
slave->parent = parent;
|
||||
slave->offset = part->offset;
|
||||
|
||||
if (slave->offset == MTDPART_OFS_APPEND)
|
||||
slave->offset = cur_offset;
|
||||
if (slave->offset == MTDPART_OFS_NXTBLK) {
|
||||
tmp = cur_offset;
|
||||
slave->offset = cur_offset;
|
||||
if (mtd_mod_by_eb(cur_offset, master) != 0) {
|
||||
/* Round up to next erasesize */
|
||||
slave->offset = (mtd_div_by_eb(cur_offset, master) + 1) * master->erasesize;
|
||||
remainder = do_div(tmp, wr_alignment);
|
||||
if (remainder) {
|
||||
slave->offset += wr_alignment - remainder;
|
||||
printk(KERN_NOTICE "Moving partition %d: "
|
||||
"0x%012llx -> 0x%012llx\n", partno,
|
||||
(unsigned long long)cur_offset, (unsigned long long)slave->offset);
|
||||
|
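To make the alignment step above concrete: wr_alignment is the erase block size for conventional flash and the write size for MTD_NO_ERASE devices, and do_div() is used because offsets are 64-bit. A small worked sketch with made-up values:

	u64 offset = 0x42000;		/* requested partition offset */
	u32 wr_alignment = 0x20000;	/* e.g. a 128 KiB erase block */
	u64 tmp = offset;
	u32 remainder = do_div(tmp, wr_alignment);	/* remainder = 0x2000 */

	if (remainder)
		offset += wr_alignment - remainder;	/* offset becomes 0x60000 */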
@ -510,25 +524,25 @@ static struct mtd_part *allocate_partition(struct mtd_info *master,
|
|||
}
|
||||
if (slave->offset == MTDPART_OFS_RETAIN) {
|
||||
slave->offset = cur_offset;
|
||||
if (master->size - slave->offset >= slave->mtd.size) {
|
||||
slave->mtd.size = master->size - slave->offset
|
||||
if (parent->size - slave->offset >= slave->mtd.size) {
|
||||
slave->mtd.size = parent->size - slave->offset
|
||||
- slave->mtd.size;
|
||||
} else {
|
||||
printk(KERN_ERR "mtd partition \"%s\" doesn't have enough space: %#llx < %#llx, disabled\n",
|
||||
part->name, master->size - slave->offset,
|
||||
part->name, parent->size - slave->offset,
|
||||
slave->mtd.size);
|
||||
/* register to preserve ordering */
|
||||
goto out_register;
|
||||
}
|
||||
}
|
||||
if (slave->mtd.size == MTDPART_SIZ_FULL)
|
||||
slave->mtd.size = master->size - slave->offset;
|
||||
slave->mtd.size = parent->size - slave->offset;
|
||||
|
||||
printk(KERN_NOTICE "0x%012llx-0x%012llx : \"%s\"\n", (unsigned long long)slave->offset,
|
||||
(unsigned long long)(slave->offset + slave->mtd.size), slave->mtd.name);
|
||||
|
||||
/* let's do some sanity checks */
|
||||
if (slave->offset >= master->size) {
|
||||
if (slave->offset >= parent->size) {
|
||||
/* let's register it anyway to preserve ordering */
|
||||
slave->offset = 0;
|
||||
slave->mtd.size = 0;
|
||||
|
@ -536,16 +550,16 @@ static struct mtd_part *allocate_partition(struct mtd_info *master,
|
|||
part->name);
|
||||
goto out_register;
|
||||
}
|
||||
if (slave->offset + slave->mtd.size > master->size) {
|
||||
slave->mtd.size = master->size - slave->offset;
|
||||
if (slave->offset + slave->mtd.size > parent->size) {
|
||||
slave->mtd.size = parent->size - slave->offset;
|
||||
printk(KERN_WARNING"mtd: partition \"%s\" extends beyond the end of device \"%s\" -- size truncated to %#llx\n",
|
||||
part->name, master->name, (unsigned long long)slave->mtd.size);
|
||||
part->name, parent->name, (unsigned long long)slave->mtd.size);
|
||||
}
|
||||
if (master->numeraseregions > 1) {
|
||||
if (parent->numeraseregions > 1) {
|
||||
/* Deal with variable erase size stuff */
|
||||
int i, max = master->numeraseregions;
|
||||
int i, max = parent->numeraseregions;
|
||||
u64 end = slave->offset + slave->mtd.size;
|
||||
struct mtd_erase_region_info *regions = master->eraseregions;
|
||||
struct mtd_erase_region_info *regions = parent->eraseregions;
|
||||
|
||||
/* Find the first erase regions which is part of this
|
||||
* partition. */
|
||||
|
@ -564,37 +578,40 @@ static struct mtd_part *allocate_partition(struct mtd_info *master,
|
|||
BUG_ON(slave->mtd.erasesize == 0);
|
||||
} else {
|
||||
/* Single erase size */
|
||||
slave->mtd.erasesize = master->erasesize;
|
||||
slave->mtd.erasesize = parent->erasesize;
|
||||
}
|
||||
|
||||
if ((slave->mtd.flags & MTD_WRITEABLE) &&
|
||||
mtd_mod_by_eb(slave->offset, &slave->mtd)) {
|
||||
tmp = slave->offset;
|
||||
remainder = do_div(tmp, wr_alignment);
|
||||
if ((slave->mtd.flags & MTD_WRITEABLE) && remainder) {
|
||||
/* Doesn't start on a boundary of major erase size */
|
||||
/* FIXME: Let it be writable if it is on a boundary of
|
||||
* _minor_ erase size though */
|
||||
slave->mtd.flags &= ~MTD_WRITEABLE;
|
||||
printk(KERN_WARNING"mtd: partition \"%s\" doesn't start on an erase block boundary -- force read-only\n",
|
||||
printk(KERN_WARNING"mtd: partition \"%s\" doesn't start on an erase/write block boundary -- force read-only\n",
|
||||
part->name);
|
||||
}
|
||||
if ((slave->mtd.flags & MTD_WRITEABLE) &&
|
||||
mtd_mod_by_eb(slave->mtd.size, &slave->mtd)) {
|
||||
|
||||
tmp = slave->mtd.size;
|
||||
remainder = do_div(tmp, wr_alignment);
|
||||
if ((slave->mtd.flags & MTD_WRITEABLE) && remainder) {
|
||||
slave->mtd.flags &= ~MTD_WRITEABLE;
|
||||
printk(KERN_WARNING"mtd: partition \"%s\" doesn't end on an erase block -- force read-only\n",
|
||||
printk(KERN_WARNING"mtd: partition \"%s\" doesn't end on an erase/write block -- force read-only\n",
|
||||
part->name);
|
||||
}
|
||||
|
||||
mtd_set_ooblayout(&slave->mtd, &part_ooblayout_ops);
|
||||
slave->mtd.ecc_step_size = master->ecc_step_size;
|
||||
slave->mtd.ecc_strength = master->ecc_strength;
|
||||
slave->mtd.bitflip_threshold = master->bitflip_threshold;
|
||||
slave->mtd.ecc_step_size = parent->ecc_step_size;
|
||||
slave->mtd.ecc_strength = parent->ecc_strength;
|
||||
slave->mtd.bitflip_threshold = parent->bitflip_threshold;
|
||||
|
||||
if (master->_block_isbad) {
|
||||
if (parent->_block_isbad) {
|
||||
uint64_t offs = 0;
|
||||
|
||||
while (offs < slave->mtd.size) {
|
||||
if (mtd_block_isreserved(master, offs + slave->offset))
|
||||
if (mtd_block_isreserved(parent, offs + slave->offset))
|
||||
slave->mtd.ecc_stats.bbtblocks++;
|
||||
else if (mtd_block_isbad(master, offs + slave->offset))
|
||||
else if (mtd_block_isbad(parent, offs + slave->offset))
|
||||
slave->mtd.ecc_stats.badblocks++;
|
||||
offs += slave->mtd.erasesize;
|
||||
}
|
||||
|
@ -628,7 +645,7 @@ static int mtd_add_partition_attrs(struct mtd_part *new)
|
|||
return ret;
|
||||
}
|
||||
|
||||
int mtd_add_partition(struct mtd_info *master, const char *name,
|
||||
int mtd_add_partition(struct mtd_info *parent, const char *name,
|
||||
long long offset, long long length)
|
||||
{
|
||||
struct mtd_partition part;
|
||||
|
@ -641,7 +658,7 @@ int mtd_add_partition(struct mtd_info *master, const char *name,
|
|||
return -EINVAL;
|
||||
|
||||
if (length == MTDPART_SIZ_FULL)
|
||||
length = master->size - offset;
|
||||
length = parent->size - offset;
|
||||
|
||||
if (length <= 0)
|
||||
return -EINVAL;
|
||||
|
@ -651,7 +668,7 @@ int mtd_add_partition(struct mtd_info *master, const char *name,
|
|||
part.size = length;
|
||||
part.offset = offset;
|
||||
|
||||
new = allocate_partition(master, &part, -1, offset);
|
||||
new = allocate_partition(parent, &part, -1, offset);
|
||||
if (IS_ERR(new))
|
||||
return PTR_ERR(new);
|
||||
|
||||
|
@ -667,23 +684,69 @@ int mtd_add_partition(struct mtd_info *master, const char *name,
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(mtd_add_partition);
|
||||
|
||||
int mtd_del_partition(struct mtd_info *master, int partno)
|
||||
/**
|
||||
* __mtd_del_partition - delete MTD partition
|
||||
*
|
||||
* @priv: internal MTD struct for partition to be deleted
|
||||
*
|
||||
* This function must be called with the partitions mutex locked.
|
||||
*/
|
||||
static int __mtd_del_partition(struct mtd_part *priv)
|
||||
{
|
||||
struct mtd_part *child, *next;
|
||||
int err;
|
||||
|
||||
list_for_each_entry_safe(child, next, &mtd_partitions, list) {
|
||||
if (child->parent == &priv->mtd) {
|
||||
err = __mtd_del_partition(child);
|
||||
if (err)
|
||||
return err;
|
||||
}
|
||||
}
|
||||
|
||||
sysfs_remove_files(&priv->mtd.dev.kobj, mtd_partition_attrs);
|
||||
|
||||
err = del_mtd_device(&priv->mtd);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
list_del(&priv->list);
|
||||
free_partition(priv);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* This function unregisters and destroys all slave MTD objects which are
|
||||
* attached to the given MTD object.
|
||||
*/
|
||||
int del_mtd_partitions(struct mtd_info *mtd)
|
||||
{
|
||||
struct mtd_part *slave, *next;
|
||||
int ret, err = 0;
|
||||
|
||||
mutex_lock(&mtd_partitions_mutex);
|
||||
list_for_each_entry_safe(slave, next, &mtd_partitions, list)
|
||||
if (slave->parent == mtd) {
|
||||
ret = __mtd_del_partition(slave);
|
||||
if (ret < 0)
|
||||
err = ret;
|
||||
}
|
||||
mutex_unlock(&mtd_partitions_mutex);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
int mtd_del_partition(struct mtd_info *mtd, int partno)
|
||||
{
|
||||
struct mtd_part *slave, *next;
|
||||
int ret = -EINVAL;
|
||||
|
||||
mutex_lock(&mtd_partitions_mutex);
|
||||
list_for_each_entry_safe(slave, next, &mtd_partitions, list)
|
||||
if ((slave->master == master) &&
|
||||
if ((slave->parent == mtd) &&
|
||||
(slave->mtd.index == partno)) {
|
||||
sysfs_remove_files(&slave->mtd.dev.kobj,
|
||||
mtd_partition_attrs);
|
||||
ret = del_mtd_device(&slave->mtd);
|
||||
if (ret < 0)
|
||||
break;
|
||||
|
||||
list_del(&slave->list);
|
||||
free_partition(slave);
|
||||
ret = __mtd_del_partition(slave);
|
||||
break;
|
||||
}
|
||||
mutex_unlock(&mtd_partitions_mutex);
|
||||
|
@ -724,6 +787,8 @@ int add_mtd_partitions(struct mtd_info *master,
|
|||
|
||||
add_mtd_device(&slave->mtd);
|
||||
mtd_add_partition_attrs(slave);
|
||||
if (parts[i].types)
|
||||
mtd_parse_part(slave, parts[i].types);
|
||||
|
||||
cur_offset = slave->offset + slave->mtd.size;
|
||||
}
|
||||
|
@ -799,6 +864,27 @@ static const char * const default_mtd_part_types[] = {
|
|||
NULL
|
||||
};
|
||||
|
||||
static int mtd_part_do_parse(struct mtd_part_parser *parser,
|
||||
struct mtd_info *master,
|
||||
struct mtd_partitions *pparts,
|
||||
struct mtd_part_parser_data *data)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = (*parser->parse_fn)(master, &pparts->parts, data);
|
||||
pr_debug("%s: parser %s: %i\n", master->name, parser->name, ret);
|
||||
if (ret <= 0)
|
||||
return ret;
|
||||
|
||||
pr_notice("%d %s partitions found on MTD device %s\n", ret,
|
||||
parser->name, master->name);
|
||||
|
||||
pparts->nr_parts = ret;
|
||||
pparts->parser = parser;
|
||||
|
||||
return ret;
|
||||
}
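For context, the parse_fn invoked above belongs to a registered struct mtd_part_parser. A minimal, self-contained sketch of such a parser (the name and the single-partition layout are illustrative, not taken from this series):

	#include <linux/module.h>
	#include <linux/slab.h>
	#include <linux/mtd/mtd.h>
	#include <linux/mtd/partitions.h>

	static int example_parse_fn(struct mtd_info *mtd,
				    const struct mtd_partition **pparts,
				    struct mtd_part_parser_data *data)
	{
		struct mtd_partition *parts;

		/* Describe one partition spanning the whole (sub)device. */
		parts = kcalloc(1, sizeof(*parts), GFP_KERNEL);
		if (!parts)
			return -ENOMEM;

		parts[0].name = "data";
		parts[0].offset = 0;
		parts[0].size = mtd->size;

		*pparts = parts;
		return 1;	/* number of partitions found */
	}

	static struct mtd_part_parser example_parser = {
		.owner = THIS_MODULE,
		.name = "example",
		.parse_fn = example_parse_fn,
	};
	module_mtd_part_parser(example_parser);

	MODULE_LICENSE("GPL");

Because no ->cleanup hook is provided here, the core's default cleanup path simply kfree()s the returned array once the partitions have been registered.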
|
||||
|
||||
/**
|
||||
* parse_mtd_partitions - parse MTD partitions
|
||||
* @master: the master partition (describes whole MTD device)
|
||||
|
@ -839,16 +925,10 @@ int parse_mtd_partitions(struct mtd_info *master, const char *const *types,
|
|||
parser ? parser->name : NULL);
|
||||
if (!parser)
|
||||
continue;
|
||||
ret = (*parser->parse_fn)(master, &pparts->parts, data);
|
||||
pr_debug("%s: parser %s: %i\n",
|
||||
master->name, parser->name, ret);
|
||||
if (ret > 0) {
|
||||
printk(KERN_NOTICE "%d %s partitions found on MTD device %s\n",
|
||||
ret, parser->name, master->name);
|
||||
pparts->nr_parts = ret;
|
||||
pparts->parser = parser;
|
||||
ret = mtd_part_do_parse(parser, master, pparts, data);
|
||||
/* Found partitions! */
|
||||
if (ret > 0)
|
||||
return 0;
|
||||
}
|
||||
mtd_part_parser_put(parser);
|
||||
/*
|
||||
* Stash the first error we see; only report it if no parser
|
||||
|
@ -899,6 +979,6 @@ uint64_t mtd_get_device_size(const struct mtd_info *mtd)
|
|||
if (!mtd_is_partition(mtd))
|
||||
return mtd->size;
|
||||
|
||||
return mtd_to_part(mtd)->master->size;
|
||||
return mtd_get_device_size(mtd_to_part(mtd)->parent);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(mtd_get_device_size);
|
||||
|
|
|
@ -308,6 +308,7 @@ config MTD_NAND_CS553X
|
|||
config MTD_NAND_ATMEL
|
||||
tristate "Support for NAND Flash / SmartMedia on AT91"
|
||||
depends on ARCH_AT91
|
||||
select MFD_ATMEL_SMC
|
||||
help
|
||||
Enables support for NAND Flash / Smart Media Card interface
|
||||
on Atmel AT91 processors.
|
||||
|
@ -542,6 +543,7 @@ config MTD_NAND_SUNXI
|
|||
|
||||
config MTD_NAND_HISI504
|
||||
tristate "Support for NAND controller on Hisilicon SoC Hip04"
|
||||
depends on ARCH_HISI || COMPILE_TEST
|
||||
depends on HAS_DMA
|
||||
help
|
||||
Enables support for NAND controller on Hisilicon SoC Hip04.
|
||||
|
@ -555,6 +557,7 @@ config MTD_NAND_QCOM
|
|||
|
||||
config MTD_NAND_MTK
|
||||
tristate "Support for NAND controller on MTK SoCs"
|
||||
depends on ARCH_MEDIATEK || COMPILE_TEST
|
||||
depends on HAS_DMA
|
||||
help
|
||||
Enables support for NAND controller on MTK SoCs.
|
||||
|
|
|
@ -57,6 +57,7 @@
|
|||
#include <linux/interrupt.h>
|
||||
#include <linux/mfd/syscon.h>
|
||||
#include <linux/mfd/syscon/atmel-matrix.h>
|
||||
#include <linux/mfd/syscon/atmel-smc.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/mtd/nand.h>
|
||||
#include <linux/of_address.h>
|
||||
|
@ -64,7 +65,6 @@
|
|||
#include <linux/of_platform.h>
|
||||
#include <linux/iopoll.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/platform_data/atmel.h>
|
||||
#include <linux/regmap.h>
|
||||
|
||||
#include "pmecc.h"
|
||||
|
@ -151,6 +151,8 @@ struct atmel_nand_cs {
|
|||
void __iomem *virt;
|
||||
dma_addr_t dma;
|
||||
} io;
|
||||
|
||||
struct atmel_smc_cs_conf smcconf;
|
||||
};
|
||||
|
||||
struct atmel_nand {
|
||||
|
@ -196,6 +198,8 @@ struct atmel_nand_controller_ops {
|
|||
void (*nand_init)(struct atmel_nand_controller *nc,
|
||||
struct atmel_nand *nand);
|
||||
int (*ecc_init)(struct atmel_nand *nand);
|
||||
int (*setup_data_interface)(struct atmel_nand *nand, int csline,
|
||||
const struct nand_data_interface *conf);
|
||||
};
|
||||
|
||||
struct atmel_nand_controller_caps {
|
||||
|
@ -912,7 +916,7 @@ static int atmel_hsmc_nand_pmecc_write_pg(struct nand_chip *chip,
|
|||
struct mtd_info *mtd = nand_to_mtd(chip);
|
||||
struct atmel_nand *nand = to_atmel_nand(chip);
|
||||
struct atmel_hsmc_nand_controller *nc;
|
||||
int ret;
|
||||
int ret, status;
|
||||
|
||||
nc = to_hsmc_nand_controller(chip->controller);
|
||||
|
||||
|
@ -954,6 +958,10 @@ static int atmel_hsmc_nand_pmecc_write_pg(struct nand_chip *chip,
|
|||
dev_err(nc->base.dev, "Failed to program NAND page (err = %d)\n",
|
||||
ret);
|
||||
|
||||
status = chip->waitfunc(mtd, chip);
|
||||
if (status & NAND_STATUS_FAIL)
|
||||
return -EIO;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -1175,6 +1183,295 @@ static int atmel_hsmc_nand_ecc_init(struct atmel_nand *nand)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int atmel_smc_nand_prepare_smcconf(struct atmel_nand *nand,
|
||||
const struct nand_data_interface *conf,
|
||||
struct atmel_smc_cs_conf *smcconf)
|
||||
{
|
||||
u32 ncycles, totalcycles, timeps, mckperiodps;
|
||||
struct atmel_nand_controller *nc;
|
||||
int ret;
|
||||
|
||||
nc = to_nand_controller(nand->base.controller);
|
||||
|
||||
/* DDR interface not supported. */
|
||||
if (conf->type != NAND_SDR_IFACE)
|
||||
return -ENOTSUPP;
|
||||
|
||||
/*
|
||||
* tRC < 30ns implies EDO mode. This controller does not support this
|
||||
* mode.
|
||||
*/
|
||||
if (conf->timings.sdr.tRC_min < 30)
|
||||
return -ENOTSUPP;
|
||||
|
||||
atmel_smc_cs_conf_init(smcconf);
|
||||
|
||||
mckperiodps = NSEC_PER_SEC / clk_get_rate(nc->mck);
|
||||
mckperiodps *= 1000;
|
||||
|
||||
/*
|
||||
* Set write pulse timing. This one is easy to extract:
|
||||
*
|
||||
* NWE_PULSE = tWP
|
||||
*/
|
||||
ncycles = DIV_ROUND_UP(conf->timings.sdr.tWP_min, mckperiodps);
|
||||
totalcycles = ncycles;
|
||||
ret = atmel_smc_cs_conf_set_pulse(smcconf, ATMEL_SMC_NWE_SHIFT,
|
||||
ncycles);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/*
|
||||
* The write setup timing depends on the operation done on the NAND.
|
||||
* All operations go through the same data bus, but the operation
|
||||
* type depends on the address we are writing to (ALE/CLE address
|
||||
* lines).
|
||||
* Since we have no way to differentiate the different operations at
|
||||
* the SMC level, we must consider the worst case (the biggest setup
|
||||
* time among all operation types):
|
||||
*
|
||||
* NWE_SETUP = max(tCLS, tCS, tALS, tDS) - NWE_PULSE
|
||||
*/
|
||||
timeps = max3(conf->timings.sdr.tCLS_min, conf->timings.sdr.tCS_min,
|
||||
conf->timings.sdr.tALS_min);
|
||||
timeps = max(timeps, conf->timings.sdr.tDS_min);
|
||||
ncycles = DIV_ROUND_UP(timeps, mckperiodps);
|
||||
ncycles = ncycles > totalcycles ? ncycles - totalcycles : 0;
|
||||
totalcycles += ncycles;
|
||||
ret = atmel_smc_cs_conf_set_setup(smcconf, ATMEL_SMC_NWE_SHIFT,
|
||||
ncycles);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/*
|
||||
* As for the write setup timing, the write hold timing depends on the
|
||||
* operation done on the NAND:
|
||||
*
|
||||
* NWE_HOLD = max(tCLH, tCH, tALH, tDH, tWH)
|
||||
*/
|
||||
timeps = max3(conf->timings.sdr.tCLH_min, conf->timings.sdr.tCH_min,
|
||||
conf->timings.sdr.tALH_min);
|
||||
timeps = max3(timeps, conf->timings.sdr.tDH_min,
|
||||
conf->timings.sdr.tWH_min);
|
||||
ncycles = DIV_ROUND_UP(timeps, mckperiodps);
|
||||
totalcycles += ncycles;
|
||||
|
||||
/*
|
||||
* The write cycle timing is directly matching tWC, but is also
|
||||
* dependent on the other timings on the setup and hold timings we
|
||||
* calculated earlier, which gives:
|
||||
*
|
||||
* NWE_CYCLE = max(tWC, NWE_SETUP + NWE_PULSE + NWE_HOLD)
|
||||
*/
|
||||
ncycles = DIV_ROUND_UP(conf->timings.sdr.tWC_min, mckperiodps);
|
||||
ncycles = max(totalcycles, ncycles);
|
||||
ret = atmel_smc_cs_conf_set_cycle(smcconf, ATMEL_SMC_NWE_SHIFT,
|
||||
ncycles);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/*
|
||||
* We don't want the CS line to be toggled between each byte/word
|
||||
* transfer to the NAND. The only way to guarantee that is to have the
|
||||
* NCS_{WR,RD}_{SETUP,HOLD} timings set to 0, which in turn means:
|
||||
*
|
||||
* NCS_WR_PULSE = NWE_CYCLE
|
||||
*/
|
||||
ret = atmel_smc_cs_conf_set_pulse(smcconf, ATMEL_SMC_NCS_WR_SHIFT,
|
||||
ncycles);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/*
|
||||
* As for the write setup timing, the read hold timing depends on the
|
||||
* operation done on the NAND:
|
||||
*
|
||||
* NRD_HOLD = max(tREH, tRHOH)
|
||||
*/
|
||||
timeps = max(conf->timings.sdr.tREH_min, conf->timings.sdr.tRHOH_min);
|
||||
ncycles = DIV_ROUND_UP(timeps, mckperiodps);
|
||||
totalcycles = ncycles;
|
||||
|
||||
/*
|
||||
* TDF = tRHZ - NRD_HOLD
|
||||
*/
|
||||
ncycles = DIV_ROUND_UP(conf->timings.sdr.tRHZ_max, mckperiodps);
|
||||
ncycles -= totalcycles;
|
||||
|
||||
/*
|
||||
* In ONFI 4.0 specs, tRHZ has been increased to support EDO NANDs and
|
||||
* we might end up with a config that does not fit in the TDF field.
|
||||
* Just take the max value in this case and hope that the NAND is more
|
||||
* tolerant than advertised.
|
||||
*/
|
||||
if (ncycles > ATMEL_SMC_MODE_TDF_MAX)
|
||||
ncycles = ATMEL_SMC_MODE_TDF_MAX;
|
||||
else if (ncycles < ATMEL_SMC_MODE_TDF_MIN)
|
||||
ncycles = ATMEL_SMC_MODE_TDF_MIN;
|
||||
|
||||
smcconf->mode |= ATMEL_SMC_MODE_TDF(ncycles) |
|
||||
ATMEL_SMC_MODE_TDFMODE_OPTIMIZED;
|
||||
|
||||
/*
|
||||
* Read pulse timing directly matches tRP:
|
||||
*
|
||||
* NRD_PULSE = tRP
|
||||
*/
|
||||
ncycles = DIV_ROUND_UP(conf->timings.sdr.tRP_min, mckperiodps);
|
||||
totalcycles += ncycles;
|
||||
ret = atmel_smc_cs_conf_set_pulse(smcconf, ATMEL_SMC_NRD_SHIFT,
|
||||
ncycles);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/*
|
||||
* The read cycle timing is directly matching tRC, but is also
|
||||
* dependent on the setup and hold timings we calculated earlier,
|
||||
* which gives:
|
||||
*
|
||||
* NRD_CYCLE = max(tRC, NRD_PULSE + NRD_HOLD)
|
||||
*
|
||||
* NRD_SETUP is always 0.
|
||||
*/
|
||||
ncycles = DIV_ROUND_UP(conf->timings.sdr.tRC_min, mckperiodps);
|
||||
ncycles = max(totalcycles, ncycles);
|
||||
ret = atmel_smc_cs_conf_set_cycle(smcconf, ATMEL_SMC_NRD_SHIFT,
|
||||
ncycles);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/*
|
||||
* We don't want the CS line to be toggled between each byte/word
|
||||
* transfer from the NAND. The only way to guarantee that is to have
|
||||
* the NCS_{WR,RD}_{SETUP,HOLD} timings set to 0, which in turn means:
|
||||
*
|
||||
* NCS_RD_PULSE = NRD_CYCLE
|
||||
*/
|
||||
ret = atmel_smc_cs_conf_set_pulse(smcconf, ATMEL_SMC_NCS_RD_SHIFT,
|
||||
ncycles);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/* Txxx timings are directly matching tXXX ones. */
|
||||
ncycles = DIV_ROUND_UP(conf->timings.sdr.tCLR_min, mckperiodps);
|
||||
ret = atmel_smc_cs_conf_set_timing(smcconf,
|
||||
ATMEL_HSMC_TIMINGS_TCLR_SHIFT,
|
||||
ncycles);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ncycles = DIV_ROUND_UP(conf->timings.sdr.tADL_min, mckperiodps);
|
||||
ret = atmel_smc_cs_conf_set_timing(smcconf,
|
||||
ATMEL_HSMC_TIMINGS_TADL_SHIFT,
|
||||
ncycles);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ncycles = DIV_ROUND_UP(conf->timings.sdr.tAR_min, mckperiodps);
|
||||
ret = atmel_smc_cs_conf_set_timing(smcconf,
|
||||
ATMEL_HSMC_TIMINGS_TAR_SHIFT,
|
||||
ncycles);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ncycles = DIV_ROUND_UP(conf->timings.sdr.tRR_min, mckperiodps);
|
||||
ret = atmel_smc_cs_conf_set_timing(smcconf,
|
||||
ATMEL_HSMC_TIMINGS_TRR_SHIFT,
|
||||
ncycles);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ncycles = DIV_ROUND_UP(conf->timings.sdr.tWB_max, mckperiodps);
|
||||
ret = atmel_smc_cs_conf_set_timing(smcconf,
|
||||
ATMEL_HSMC_TIMINGS_TWB_SHIFT,
|
||||
ncycles);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/* Attach the CS line to the NFC logic. */
|
||||
smcconf->timings |= ATMEL_HSMC_TIMINGS_NFSEL;
|
||||
|
||||
/* Set the appropriate data bus width. */
|
||||
if (nand->base.options & NAND_BUSWIDTH_16)
|
||||
smcconf->mode |= ATMEL_SMC_MODE_DBW_16;
|
||||
|
||||
/* Operate in NRD/NWE READ/WRITEMODE. */
|
||||
smcconf->mode |= ATMEL_SMC_MODE_READMODE_NRD |
|
||||
ATMEL_SMC_MODE_WRITEMODE_NWE;
|
||||
|
||||
return 0;
|
||||
}
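To make the cycle arithmetic above concrete, here is a tiny helper capturing the conversion the DIV_ROUND_UP() calls perform (illustrative, not part of the driver): with an MCK period of roughly 7500 ps, a 15 ns (15000 ps) tWP comes out as DIV_ROUND_UP(15000, 7500) = 2 cycles.

	static u32 timing_ps_to_mck_cycles(u32 timeps, u32 mckperiodps)
	{
		/* Round up so the programmed timing is never shorter than
		 * what the NAND requires. */
		return DIV_ROUND_UP(timeps, mckperiodps);
	}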
|
||||
|
||||
static int atmel_smc_nand_setup_data_interface(struct atmel_nand *nand,
|
||||
int csline,
|
||||
const struct nand_data_interface *conf)
|
||||
{
|
||||
struct atmel_nand_controller *nc;
|
||||
struct atmel_smc_cs_conf smcconf;
|
||||
struct atmel_nand_cs *cs;
|
||||
int ret;
|
||||
|
||||
nc = to_nand_controller(nand->base.controller);
|
||||
|
||||
ret = atmel_smc_nand_prepare_smcconf(nand, conf, &smcconf);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (csline == NAND_DATA_IFACE_CHECK_ONLY)
|
||||
return 0;
|
||||
|
||||
cs = &nand->cs[csline];
|
||||
cs->smcconf = smcconf;
|
||||
atmel_smc_cs_conf_apply(nc->smc, cs->id, &cs->smcconf);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int atmel_hsmc_nand_setup_data_interface(struct atmel_nand *nand,
|
||||
int csline,
|
||||
const struct nand_data_interface *conf)
|
||||
{
|
||||
struct atmel_nand_controller *nc;
|
||||
struct atmel_smc_cs_conf smcconf;
|
||||
struct atmel_nand_cs *cs;
|
||||
int ret;
|
||||
|
||||
nc = to_nand_controller(nand->base.controller);
|
||||
|
||||
ret = atmel_smc_nand_prepare_smcconf(nand, conf, &smcconf);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (csline == NAND_DATA_IFACE_CHECK_ONLY)
|
||||
return 0;
|
||||
|
||||
cs = &nand->cs[csline];
|
||||
cs->smcconf = smcconf;
|
||||
|
||||
if (cs->rb.type == ATMEL_NAND_NATIVE_RB)
|
||||
cs->smcconf.timings |= ATMEL_HSMC_TIMINGS_RBNSEL(cs->rb.id);
|
||||
|
||||
atmel_hsmc_cs_conf_apply(nc->smc, cs->id, &cs->smcconf);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int atmel_nand_setup_data_interface(struct mtd_info *mtd, int csline,
|
||||
const struct nand_data_interface *conf)
|
||||
{
|
||||
struct nand_chip *chip = mtd_to_nand(mtd);
|
||||
struct atmel_nand *nand = to_atmel_nand(chip);
|
||||
struct atmel_nand_controller *nc;
|
||||
|
||||
nc = to_nand_controller(nand->base.controller);
|
||||
|
||||
if (csline >= nand->numcs ||
|
||||
(csline < 0 && csline != NAND_DATA_IFACE_CHECK_ONLY))
|
||||
return -EINVAL;
|
||||
|
||||
return nc->caps->ops->setup_data_interface(nand, csline, conf);
|
||||
}
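As a reminder of how this hook is exercised (a sketch of the caller side, not code from this diff): the NAND core first probes with csline = NAND_DATA_IFACE_CHECK_ONLY to see whether a timing mode is achievable, and only then calls again with a real chip-select index so the SMC registers actually get programmed.

	/* Illustrative caller-side sequence (assumed, simplified): */
	ret = chip->setup_data_interface(mtd, NAND_DATA_IFACE_CHECK_ONLY, conf);
	if (!ret)
		ret = chip->setup_data_interface(mtd, 0, conf);	/* apply on CS 0 */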
|
||||
|
||||
static void atmel_nand_init(struct atmel_nand_controller *nc,
|
||||
struct atmel_nand *nand)
|
||||
{
|
||||
|
@ -1192,6 +1489,9 @@ static void atmel_nand_init(struct atmel_nand_controller *nc,
|
|||
chip->write_buf = atmel_nand_write_buf;
|
||||
chip->select_chip = atmel_nand_select_chip;
|
||||
|
||||
if (nc->mck && nc->caps->ops->setup_data_interface)
|
||||
chip->setup_data_interface = atmel_nand_setup_data_interface;
|
||||
|
||||
/* Some NANDs require a longer delay than the default one (20us). */
|
||||
chip->chip_delay = 40;
|
||||
|
||||
|
@ -1677,6 +1977,12 @@ static int atmel_nand_controller_init(struct atmel_nand_controller *nc,
|
|||
if (nc->caps->legacy_of_bindings)
|
||||
return 0;
|
||||
|
||||
nc->mck = of_clk_get(dev->parent->of_node, 0);
|
||||
if (IS_ERR(nc->mck)) {
|
||||
dev_err(dev, "Failed to retrieve MCK clk\n");
|
||||
return PTR_ERR(nc->mck);
|
||||
}
|
||||
|
||||
np = of_parse_phandle(dev->parent->of_node, "atmel,smc", 0);
|
||||
if (!np) {
|
||||
dev_err(dev, "Missing or invalid atmel,smc property\n");
|
||||
|
@ -1983,6 +2289,7 @@ static const struct atmel_nand_controller_ops atmel_hsmc_nc_ops = {
|
|||
.remove = atmel_hsmc_nand_controller_remove,
|
||||
.ecc_init = atmel_hsmc_nand_ecc_init,
|
||||
.nand_init = atmel_hsmc_nand_init,
|
||||
.setup_data_interface = atmel_hsmc_nand_setup_data_interface,
|
||||
};
|
||||
|
||||
static const struct atmel_nand_controller_caps atmel_sama5_nc_caps = {
|
||||
|
@ -2037,7 +2344,14 @@ atmel_smc_nand_controller_remove(struct atmel_nand_controller *nc)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static const struct atmel_nand_controller_ops atmel_smc_nc_ops = {
|
||||
/*
|
||||
* The SMC reg layout of at91rm9200 is completely different which prevents us
|
||||
* from re-using atmel_smc_nand_setup_data_interface() for the
|
||||
* ->setup_data_interface() hook.
|
||||
* At this point, there's no support for the at91rm9200 SMC IP, so we leave
|
||||
* ->setup_data_interface() unassigned.
|
||||
*/
|
||||
static const struct atmel_nand_controller_ops at91rm9200_nc_ops = {
|
||||
.probe = atmel_smc_nand_controller_probe,
|
||||
.remove = atmel_smc_nand_controller_remove,
|
||||
.ecc_init = atmel_nand_ecc_init,
|
||||
|
@ -2045,6 +2359,20 @@ static const struct atmel_nand_controller_ops atmel_smc_nc_ops = {
|
|||
};
|
||||
|
||||
static const struct atmel_nand_controller_caps atmel_rm9200_nc_caps = {
|
||||
.ale_offs = BIT(21),
|
||||
.cle_offs = BIT(22),
|
||||
.ops = &at91rm9200_nc_ops,
|
||||
};
|
||||
|
||||
static const struct atmel_nand_controller_ops atmel_smc_nc_ops = {
|
||||
.probe = atmel_smc_nand_controller_probe,
|
||||
.remove = atmel_smc_nand_controller_remove,
|
||||
.ecc_init = atmel_nand_ecc_init,
|
||||
.nand_init = atmel_smc_nand_init,
|
||||
.setup_data_interface = atmel_smc_nand_setup_data_interface,
|
||||
};
|
||||
|
||||
static const struct atmel_nand_controller_caps atmel_sam9260_nc_caps = {
|
||||
.ale_offs = BIT(21),
|
||||
.cle_offs = BIT(22),
|
||||
.ops = &atmel_smc_nc_ops,
|
||||
|
@ -2093,7 +2421,7 @@ static const struct of_device_id atmel_nand_controller_of_ids[] = {
|
|||
},
|
||||
{
|
||||
.compatible = "atmel,at91sam9260-nand-controller",
|
||||
.data = &atmel_rm9200_nc_caps,
|
||||
.data = &atmel_sam9260_nc_caps,
|
||||
},
|
||||
{
|
||||
.compatible = "atmel,at91sam9261-nand-controller",
|
||||
|
@ -2181,6 +2509,24 @@ static int atmel_nand_controller_remove(struct platform_device *pdev)
|
|||
return nc->caps->ops->remove(nc);
|
||||
}
|
||||
|
||||
static __maybe_unused int atmel_nand_controller_resume(struct device *dev)
|
||||
{
|
||||
struct atmel_nand_controller *nc = dev_get_drvdata(dev);
|
||||
struct atmel_nand *nand;
|
||||
|
||||
list_for_each_entry(nand, &nc->chips, node) {
|
||||
int i;
|
||||
|
||||
for (i = 0; i < nand->numcs; i++)
|
||||
nand_reset(&nand->base, i);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static SIMPLE_DEV_PM_OPS(atmel_nand_controller_pm_ops, NULL,
|
||||
atmel_nand_controller_resume);
|
||||
|
||||
static struct platform_driver atmel_nand_controller_driver = {
|
||||
.driver = {
|
||||
.name = "atmel-nand-controller",
|
||||
|
|
|
@ -392,6 +392,8 @@ int bcm47xxnflash_ops_bcm4706_init(struct bcm47xxnflash *b47n)
|
|||
b47n->nand_chip.read_byte = bcm47xxnflash_ops_bcm4706_read_byte;
|
||||
b47n->nand_chip.read_buf = bcm47xxnflash_ops_bcm4706_read_buf;
|
||||
b47n->nand_chip.write_buf = bcm47xxnflash_ops_bcm4706_write_buf;
|
||||
b47n->nand_chip.onfi_set_features = nand_onfi_get_set_features_notsupp;
|
||||
b47n->nand_chip.onfi_get_features = nand_onfi_get_set_features_notsupp;
|
||||
|
||||
nand_chip->chip_delay = 50;
|
||||
b47n->nand_chip.bbt_options = NAND_BBT_USE_FLASH;
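The nand_onfi_get_set_features_notsupp() helper wired up here is provided by the NAND core in this series; conceptually it is just a stub along these lines (a sketch, the exact parameter names may differ):

	int nand_onfi_get_set_features_notsupp(struct mtd_info *mtd,
					       struct nand_chip *chip, int addr,
					       u8 *subfeature_param)
	{
		/* Report SET/GET FEATURES as unsupported instead of silently
		 * pretending the command succeeded. */
		return -ENOTSUPP;
	}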
|
||||
|
|
|
@ -654,6 +654,8 @@ static int cafe_nand_probe(struct pci_dev *pdev,
|
|||
cafe->nand.read_buf = cafe_read_buf;
|
||||
cafe->nand.write_buf = cafe_write_buf;
|
||||
cafe->nand.select_chip = cafe_select_chip;
|
||||
cafe->nand.onfi_set_features = nand_onfi_get_set_features_notsupp;
|
||||
cafe->nand.onfi_get_features = nand_onfi_get_set_features_notsupp;
|
||||
|
||||
cafe->nand.chip_delay = 0;
|
||||
|
||||
|
|
|
@ -771,11 +771,14 @@ static int nand_davinci_probe(struct platform_device *pdev)
|
|||
info->chip.ecc.hwctl = nand_davinci_hwctl_4bit;
|
||||
info->chip.ecc.bytes = 10;
|
||||
info->chip.ecc.options = NAND_ECC_GENERIC_ERASED_CHECK;
|
||||
info->chip.ecc.algo = NAND_ECC_BCH;
|
||||
} else {
|
||||
/* 1bit ecc hamming */
|
||||
info->chip.ecc.calculate = nand_davinci_calculate_1bit;
|
||||
info->chip.ecc.correct = nand_davinci_correct_1bit;
|
||||
info->chip.ecc.hwctl = nand_davinci_hwctl_1bit;
|
||||
info->chip.ecc.bytes = 3;
|
||||
info->chip.ecc.algo = NAND_ECC_HAMMING;
|
||||
}
|
||||
info->chip.ecc.size = 512;
|
||||
info->chip.ecc.strength = pdata->ecc_bits;
|
||||
|
|
(Diff for this file not shown because of its large size.)
|
@ -24,330 +24,315 @@
|
|||
#include <linux/mtd/nand.h>
|
||||
|
||||
#define DEVICE_RESET 0x0
|
||||
#define DEVICE_RESET__BANK0 0x0001
|
||||
#define DEVICE_RESET__BANK1 0x0002
|
||||
#define DEVICE_RESET__BANK2 0x0004
|
||||
#define DEVICE_RESET__BANK3 0x0008
|
||||
#define DEVICE_RESET__BANK(bank) BIT(bank)
|
||||
|
||||
#define TRANSFER_SPARE_REG 0x10
|
||||
#define TRANSFER_SPARE_REG__FLAG 0x0001
|
||||
#define TRANSFER_SPARE_REG__FLAG BIT(0)
|
||||
|
||||
#define LOAD_WAIT_CNT 0x20
|
||||
#define LOAD_WAIT_CNT__VALUE 0xffff
|
||||
#define LOAD_WAIT_CNT__VALUE GENMASK(15, 0)
|
||||
|
||||
#define PROGRAM_WAIT_CNT 0x30
|
||||
#define PROGRAM_WAIT_CNT__VALUE 0xffff
|
||||
#define PROGRAM_WAIT_CNT__VALUE GENMASK(15, 0)
|
||||
|
||||
#define ERASE_WAIT_CNT 0x40
|
||||
#define ERASE_WAIT_CNT__VALUE 0xffff
|
||||
#define ERASE_WAIT_CNT__VALUE GENMASK(15, 0)
|
||||
|
||||
#define INT_MON_CYCCNT 0x50
|
||||
#define INT_MON_CYCCNT__VALUE 0xffff
|
||||
#define INT_MON_CYCCNT__VALUE GENMASK(15, 0)
|
||||
|
||||
#define RB_PIN_ENABLED 0x60
|
||||
#define RB_PIN_ENABLED__BANK0 0x0001
|
||||
#define RB_PIN_ENABLED__BANK1 0x0002
|
||||
#define RB_PIN_ENABLED__BANK2 0x0004
|
||||
#define RB_PIN_ENABLED__BANK3 0x0008
|
||||
#define RB_PIN_ENABLED__BANK(bank) BIT(bank)
|
||||
|
||||
#define MULTIPLANE_OPERATION 0x70
|
||||
#define MULTIPLANE_OPERATION__FLAG 0x0001
|
||||
#define MULTIPLANE_OPERATION__FLAG BIT(0)
|
||||
|
||||
#define MULTIPLANE_READ_ENABLE 0x80
|
||||
#define MULTIPLANE_READ_ENABLE__FLAG 0x0001
|
||||
#define MULTIPLANE_READ_ENABLE__FLAG BIT(0)
|
||||
|
||||
#define COPYBACK_DISABLE 0x90
|
||||
#define COPYBACK_DISABLE__FLAG 0x0001
|
||||
#define COPYBACK_DISABLE__FLAG BIT(0)
|
||||
|
||||
#define CACHE_WRITE_ENABLE 0xa0
|
||||
#define CACHE_WRITE_ENABLE__FLAG 0x0001
|
||||
#define CACHE_WRITE_ENABLE__FLAG BIT(0)
|
||||
|
||||
#define CACHE_READ_ENABLE 0xb0
|
||||
#define CACHE_READ_ENABLE__FLAG 0x0001
|
||||
#define CACHE_READ_ENABLE__FLAG BIT(0)
|
||||
|
||||
#define PREFETCH_MODE 0xc0
|
||||
#define PREFETCH_MODE__PREFETCH_EN 0x0001
|
||||
#define PREFETCH_MODE__PREFETCH_BURST_LENGTH 0xfff0
|
||||
#define PREFETCH_MODE__PREFETCH_EN BIT(0)
|
||||
#define PREFETCH_MODE__PREFETCH_BURST_LENGTH GENMASK(15, 4)
|
||||
|
||||
#define CHIP_ENABLE_DONT_CARE 0xd0
|
||||
#define CHIP_EN_DONT_CARE__FLAG 0x01
|
||||
#define CHIP_EN_DONT_CARE__FLAG BIT(0)
|
||||
|
||||
#define ECC_ENABLE 0xe0
|
||||
#define ECC_ENABLE__FLAG 0x0001
|
||||
#define ECC_ENABLE__FLAG BIT(0)
|
||||
|
||||
#define GLOBAL_INT_ENABLE 0xf0
|
||||
#define GLOBAL_INT_EN_FLAG 0x01
|
||||
#define GLOBAL_INT_EN_FLAG BIT(0)
|
||||
|
||||
#define WE_2_RE 0x100
|
||||
#define WE_2_RE__VALUE 0x003f
|
||||
#define TWHR2_AND_WE_2_RE 0x100
|
||||
#define TWHR2_AND_WE_2_RE__WE_2_RE GENMASK(5, 0)
|
||||
#define TWHR2_AND_WE_2_RE__TWHR2 GENMASK(13, 8)
|
||||
|
||||
#define ADDR_2_DATA 0x110
|
||||
#define ADDR_2_DATA__VALUE 0x003f
|
||||
#define TCWAW_AND_ADDR_2_DATA 0x110
|
||||
/* The width of ADDR_2_DATA is 6 bit for old IP, 7 bit for new IP */
|
||||
#define TCWAW_AND_ADDR_2_DATA__ADDR_2_DATA GENMASK(6, 0)
|
||||
#define TCWAW_AND_ADDR_2_DATA__TCWAW GENMASK(13, 8)
|
||||
|
||||
#define RE_2_WE 0x120
|
||||
#define RE_2_WE__VALUE 0x003f
|
||||
#define RE_2_WE__VALUE GENMASK(5, 0)
|
||||
|
||||
#define ACC_CLKS 0x130
|
||||
#define ACC_CLKS__VALUE 0x000f
|
||||
#define ACC_CLKS__VALUE GENMASK(3, 0)
|
||||
|
||||
#define NUMBER_OF_PLANES 0x140
|
||||
#define NUMBER_OF_PLANES__VALUE 0x0007
|
||||
#define NUMBER_OF_PLANES__VALUE GENMASK(2, 0)
|
||||
|
||||
#define PAGES_PER_BLOCK 0x150
|
||||
#define PAGES_PER_BLOCK__VALUE 0xffff
|
||||
#define PAGES_PER_BLOCK__VALUE GENMASK(15, 0)
|
||||
|
||||
#define DEVICE_WIDTH 0x160
|
||||
#define DEVICE_WIDTH__VALUE 0x0003
|
||||
#define DEVICE_WIDTH__VALUE GENMASK(1, 0)
|
||||
|
||||
#define DEVICE_MAIN_AREA_SIZE 0x170
|
||||
#define DEVICE_MAIN_AREA_SIZE__VALUE 0xffff
|
||||
#define DEVICE_MAIN_AREA_SIZE__VALUE GENMASK(15, 0)
|
||||
|
||||
#define DEVICE_SPARE_AREA_SIZE 0x180
|
||||
#define DEVICE_SPARE_AREA_SIZE__VALUE 0xffff
|
||||
#define DEVICE_SPARE_AREA_SIZE__VALUE GENMASK(15, 0)
|
||||
|
||||
#define TWO_ROW_ADDR_CYCLES 0x190
|
||||
#define TWO_ROW_ADDR_CYCLES__FLAG 0x0001
|
||||
#define TWO_ROW_ADDR_CYCLES__FLAG BIT(0)
|
||||
|
||||
#define MULTIPLANE_ADDR_RESTRICT 0x1a0
|
||||
#define MULTIPLANE_ADDR_RESTRICT__FLAG 0x0001
|
||||
#define MULTIPLANE_ADDR_RESTRICT__FLAG BIT(0)
|
||||
|
||||
#define ECC_CORRECTION 0x1b0
|
||||
#define ECC_CORRECTION__VALUE 0x001f
|
||||
#define ECC_CORRECTION__VALUE GENMASK(4, 0)
|
||||
#define ECC_CORRECTION__ERASE_THRESHOLD GENMASK(31, 16)
|
||||
#define MAKE_ECC_CORRECTION(val, thresh) \
|
||||
(((val) & (ECC_CORRECTION__VALUE)) | \
|
||||
(((thresh) << 16) & (ECC_CORRECTION__ERASE_THRESHOLD)))
|
||||
|
||||
#define READ_MODE 0x1c0
|
||||
#define READ_MODE__VALUE 0x000f
|
||||
#define READ_MODE__VALUE GENMASK(3, 0)
|
||||
|
||||
#define WRITE_MODE 0x1d0
|
||||
#define WRITE_MODE__VALUE 0x000f
|
||||
#define WRITE_MODE__VALUE GENMASK(3, 0)
|
||||
|
||||
#define COPYBACK_MODE 0x1e0
|
||||
#define COPYBACK_MODE__VALUE 0x000f
|
||||
#define COPYBACK_MODE__VALUE GENMASK(3, 0)
|
||||
|
||||
#define RDWR_EN_LO_CNT 0x1f0
|
||||
#define RDWR_EN_LO_CNT__VALUE 0x001f
|
||||
#define RDWR_EN_LO_CNT__VALUE GENMASK(4, 0)
|
||||
|
||||
#define RDWR_EN_HI_CNT 0x200
|
||||
#define RDWR_EN_HI_CNT__VALUE 0x001f
|
||||
#define RDWR_EN_HI_CNT__VALUE GENMASK(4, 0)
|
||||
|
||||
#define MAX_RD_DELAY 0x210
|
||||
#define MAX_RD_DELAY__VALUE 0x000f
|
||||
#define MAX_RD_DELAY__VALUE GENMASK(3, 0)
|
||||
|
||||
#define CS_SETUP_CNT 0x220
|
||||
#define CS_SETUP_CNT__VALUE 0x001f
|
||||
#define CS_SETUP_CNT__VALUE GENMASK(4, 0)
|
||||
#define CS_SETUP_CNT__TWB GENMASK(17, 12)
|
||||
|
||||
#define SPARE_AREA_SKIP_BYTES 0x230
|
||||
#define SPARE_AREA_SKIP_BYTES__VALUE 0x003f
|
||||
#define SPARE_AREA_SKIP_BYTES__VALUE GENMASK(5, 0)
|
||||
|
||||
#define SPARE_AREA_MARKER 0x240
|
||||
#define SPARE_AREA_MARKER__VALUE 0xffff
|
||||
#define SPARE_AREA_MARKER__VALUE GENMASK(15, 0)
|
||||
|
||||
#define DEVICES_CONNECTED 0x250
|
||||
#define DEVICES_CONNECTED__VALUE 0x0007
|
||||
#define DEVICES_CONNECTED__VALUE GENMASK(2, 0)
|
||||
|
||||
#define DIE_MASK 0x260
|
||||
#define DIE_MASK__VALUE 0x00ff
|
||||
#define DIE_MASK__VALUE GENMASK(7, 0)
|
||||
|
||||
#define FIRST_BLOCK_OF_NEXT_PLANE 0x270
|
||||
#define FIRST_BLOCK_OF_NEXT_PLANE__VALUE 0xffff
|
||||
#define FIRST_BLOCK_OF_NEXT_PLANE__VALUE GENMASK(15, 0)
|
||||
|
||||
#define WRITE_PROTECT 0x280
|
||||
#define WRITE_PROTECT__FLAG 0x0001
|
||||
#define WRITE_PROTECT__FLAG BIT(0)
|
||||
|
||||
#define RE_2_RE 0x290
|
||||
#define RE_2_RE__VALUE 0x003f
|
||||
#define RE_2_RE__VALUE GENMASK(5, 0)
|
||||
|
||||
#define MANUFACTURER_ID 0x300
|
||||
#define MANUFACTURER_ID__VALUE 0x00ff
|
||||
#define MANUFACTURER_ID__VALUE GENMASK(7, 0)
|
||||
|
||||
#define DEVICE_ID 0x310
|
||||
#define DEVICE_ID__VALUE 0x00ff
|
||||
#define DEVICE_ID__VALUE GENMASK(7, 0)
|
||||
|
||||
#define DEVICE_PARAM_0 0x320
|
||||
#define DEVICE_PARAM_0__VALUE 0x00ff
|
||||
#define DEVICE_PARAM_0__VALUE GENMASK(7, 0)
|
||||
|
||||
#define DEVICE_PARAM_1 0x330
|
||||
#define DEVICE_PARAM_1__VALUE 0x00ff
|
||||
#define DEVICE_PARAM_1__VALUE GENMASK(7, 0)
|
||||
|
||||
#define DEVICE_PARAM_2 0x340
|
||||
#define DEVICE_PARAM_2__VALUE 0x00ff
|
||||
#define DEVICE_PARAM_2__VALUE GENMASK(7, 0)
|
||||
|
||||
#define LOGICAL_PAGE_DATA_SIZE 0x350
|
||||
#define LOGICAL_PAGE_DATA_SIZE__VALUE 0xffff
|
||||
#define LOGICAL_PAGE_DATA_SIZE__VALUE GENMASK(15, 0)
|
||||
|
||||
#define LOGICAL_PAGE_SPARE_SIZE 0x360
|
||||
#define LOGICAL_PAGE_SPARE_SIZE__VALUE 0xffff
|
||||
#define LOGICAL_PAGE_SPARE_SIZE__VALUE GENMASK(15, 0)
|
||||
|
||||
#define REVISION 0x370
|
||||
#define REVISION__VALUE 0xffff
|
||||
#define REVISION__VALUE GENMASK(15, 0)
|
||||
|
||||
#define ONFI_DEVICE_FEATURES 0x380
|
||||
#define ONFI_DEVICE_FEATURES__VALUE 0x003f
|
||||
#define ONFI_DEVICE_FEATURES__VALUE GENMASK(5, 0)
|
||||
|
||||
#define ONFI_OPTIONAL_COMMANDS 0x390
|
||||
#define ONFI_OPTIONAL_COMMANDS__VALUE 0x003f
|
||||
#define ONFI_OPTIONAL_COMMANDS__VALUE GENMASK(5, 0)
|
||||
|
||||
#define ONFI_TIMING_MODE 0x3a0
|
||||
#define ONFI_TIMING_MODE__VALUE 0x003f
|
||||
#define ONFI_TIMING_MODE__VALUE GENMASK(5, 0)
|
||||
|
||||
#define ONFI_PGM_CACHE_TIMING_MODE 0x3b0
|
||||
#define ONFI_PGM_CACHE_TIMING_MODE__VALUE 0x003f
|
||||
#define ONFI_PGM_CACHE_TIMING_MODE__VALUE GENMASK(5, 0)
|
||||
|
||||
#define ONFI_DEVICE_NO_OF_LUNS 0x3c0
|
||||
#define ONFI_DEVICE_NO_OF_LUNS__NO_OF_LUNS 0x00ff
|
||||
#define ONFI_DEVICE_NO_OF_LUNS__ONFI_DEVICE 0x0100
|
||||
#define ONFI_DEVICE_NO_OF_LUNS__NO_OF_LUNS GENMASK(7, 0)
|
||||
#define ONFI_DEVICE_NO_OF_LUNS__ONFI_DEVICE BIT(8)
|
||||
|
||||
#define ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_L 0x3d0
|
||||
#define ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_L__VALUE 0xffff
|
||||
#define ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_L__VALUE GENMASK(15, 0)
|
||||
|
||||
#define ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_U 0x3e0
|
||||
#define ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_U__VALUE 0xffff
|
||||
#define ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_U__VALUE GENMASK(15, 0)
|
||||
|
||||
#define FEATURES 0x3f0
|
||||
#define FEATURES__N_BANKS 0x0003
|
||||
#define FEATURES__ECC_MAX_ERR 0x003c
|
||||
#define FEATURES__DMA 0x0040
|
||||
#define FEATURES__CMD_DMA 0x0080
|
||||
#define FEATURES__PARTITION 0x0100
|
||||
#define FEATURES__XDMA_SIDEBAND 0x0200
|
||||
#define FEATURES__GPREG 0x0400
|
||||
#define FEATURES__INDEX_ADDR 0x0800
|
||||
#define FEATURES 0x3f0
|
||||
#define FEATURES__N_BANKS GENMASK(1, 0)
|
||||
#define FEATURES__ECC_MAX_ERR GENMASK(5, 2)
|
||||
#define FEATURES__DMA BIT(6)
|
||||
#define FEATURES__CMD_DMA BIT(7)
|
||||
#define FEATURES__PARTITION BIT(8)
|
||||
#define FEATURES__XDMA_SIDEBAND BIT(9)
|
||||
#define FEATURES__GPREG BIT(10)
|
||||
#define FEATURES__INDEX_ADDR BIT(11)
|
||||
|
||||
#define TRANSFER_MODE 0x400
|
||||
#define TRANSFER_MODE__VALUE 0x0003
|
||||
#define TRANSFER_MODE__VALUE GENMASK(1, 0)
|
||||
|
||||
#define INTR_STATUS(__bank) (0x410 + ((__bank) * 0x50))
|
||||
#define INTR_EN(__bank) (0x420 + ((__bank) * 0x50))
|
||||
#define INTR_STATUS(bank) (0x410 + (bank) * 0x50)
|
||||
#define INTR_EN(bank) (0x420 + (bank) * 0x50)
|
||||
/* bit[1:0] is used differently depending on IP version */
|
||||
#define INTR__ECC_UNCOR_ERR 0x0001 /* new IP */
|
||||
#define INTR__ECC_TRANSACTION_DONE 0x0001 /* old IP */
|
||||
#define INTR__ECC_ERR 0x0002 /* old IP */
|
||||
#define INTR__DMA_CMD_COMP 0x0004
|
||||
#define INTR__TIME_OUT 0x0008
|
||||
#define INTR__PROGRAM_FAIL 0x0010
|
||||
#define INTR__ERASE_FAIL 0x0020
|
||||
#define INTR__LOAD_COMP 0x0040
|
||||
#define INTR__PROGRAM_COMP 0x0080
|
||||
#define INTR__ERASE_COMP 0x0100
|
||||
#define INTR__PIPE_CPYBCK_CMD_COMP 0x0200
|
||||
#define INTR__LOCKED_BLK 0x0400
|
||||
#define INTR__UNSUP_CMD 0x0800
|
||||
#define INTR__INT_ACT 0x1000
|
||||
#define INTR__RST_COMP 0x2000
|
||||
#define INTR__PIPE_CMD_ERR 0x4000
|
||||
#define INTR__PAGE_XFER_INC 0x8000
|
||||
#define INTR__ECC_UNCOR_ERR BIT(0) /* new IP */
|
||||
#define INTR__ECC_TRANSACTION_DONE BIT(0) /* old IP */
|
||||
#define INTR__ECC_ERR BIT(1) /* old IP */
|
||||
#define INTR__DMA_CMD_COMP BIT(2)
|
||||
#define INTR__TIME_OUT BIT(3)
|
||||
#define INTR__PROGRAM_FAIL BIT(4)
|
||||
#define INTR__ERASE_FAIL BIT(5)
|
||||
#define INTR__LOAD_COMP BIT(6)
|
||||
#define INTR__PROGRAM_COMP BIT(7)
|
||||
#define INTR__ERASE_COMP BIT(8)
|
||||
#define INTR__PIPE_CPYBCK_CMD_COMP BIT(9)
|
||||
#define INTR__LOCKED_BLK BIT(10)
|
||||
#define INTR__UNSUP_CMD BIT(11)
|
||||
#define INTR__INT_ACT BIT(12)
|
||||
#define INTR__RST_COMP BIT(13)
|
||||
#define INTR__PIPE_CMD_ERR BIT(14)
|
||||
#define INTR__PAGE_XFER_INC BIT(15)
|
||||
#define INTR__ERASED_PAGE BIT(16)
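As the comment above notes, bits [1:0] mean different things on old and new revisions of the IP, so a driver assembles its interrupt mask from whichever set applies. A hedged sketch; the helper name and the particular bit selection are illustrative:

#include <linux/types.h>

/* Illustrative only: pick irq bits valid for this IP revision */
static u32 denali_irq_mask(bool new_ip)
{
	u32 mask = INTR__DMA_CMD_COMP | INTR__PROGRAM_COMP | INTR__ERASE_COMP;

	if (new_ip)
		mask |= INTR__ECC_UNCOR_ERR;
	else
		mask |= INTR__ECC_TRANSACTION_DONE | INTR__ECC_ERR;

	return mask;
}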
|
||||
|
||||
#define PAGE_CNT(__bank) (0x430 + ((__bank) * 0x50))
|
||||
#define ERR_PAGE_ADDR(__bank) (0x440 + ((__bank) * 0x50))
|
||||
#define ERR_BLOCK_ADDR(__bank) (0x450 + ((__bank) * 0x50))
|
||||
#define PAGE_CNT(bank) (0x430 + (bank) * 0x50)
|
||||
#define ERR_PAGE_ADDR(bank) (0x440 + (bank) * 0x50)
|
||||
#define ERR_BLOCK_ADDR(bank) (0x450 + (bank) * 0x50)
|
||||
|
||||
#define ECC_THRESHOLD 0x600
|
||||
#define ECC_THRESHOLD__VALUE 0x03ff
|
||||
#define ECC_THRESHOLD__VALUE GENMASK(9, 0)
|
||||
|
||||
#define ECC_ERROR_BLOCK_ADDRESS 0x610
|
||||
#define ECC_ERROR_BLOCK_ADDRESS__VALUE 0xffff
|
||||
#define ECC_ERROR_BLOCK_ADDRESS__VALUE GENMASK(15, 0)
|
||||
|
||||
#define ECC_ERROR_PAGE_ADDRESS 0x620
|
||||
#define ECC_ERROR_PAGE_ADDRESS__VALUE 0x0fff
|
||||
#define ECC_ERROR_PAGE_ADDRESS__BANK 0xf000
|
||||
#define ECC_ERROR_PAGE_ADDRESS__VALUE GENMASK(11, 0)
|
||||
#define ECC_ERROR_PAGE_ADDRESS__BANK GENMASK(15, 12)
|
||||
|
||||
#define ECC_ERROR_ADDRESS 0x630
|
||||
#define ECC_ERROR_ADDRESS__OFFSET 0x0fff
|
||||
#define ECC_ERROR_ADDRESS__SECTOR_NR 0xf000
|
||||
#define ECC_ERROR_ADDRESS__OFFSET GENMASK(11, 0)
|
||||
#define ECC_ERROR_ADDRESS__SECTOR_NR GENMASK(15, 12)
|
||||
|
||||
#define ERR_CORRECTION_INFO 0x640
|
||||
#define ERR_CORRECTION_INFO__BYTEMASK 0x00ff
|
||||
#define ERR_CORRECTION_INFO__DEVICE_NR 0x0f00
|
||||
#define ERR_CORRECTION_INFO__ERROR_TYPE 0x4000
|
||||
#define ERR_CORRECTION_INFO__LAST_ERR_INFO 0x8000
|
||||
#define ERR_CORRECTION_INFO__BYTEMASK GENMASK(7, 0)
|
||||
#define ERR_CORRECTION_INFO__DEVICE_NR GENMASK(11, 8)
|
||||
#define ERR_CORRECTION_INFO__ERROR_TYPE BIT(14)
|
||||
#define ERR_CORRECTION_INFO__LAST_ERR_INFO BIT(15)
|
||||
|
||||
#define ECC_COR_INFO(bank) (0x650 + (bank) / 2 * 0x10)
|
||||
#define ECC_COR_INFO__SHIFT(bank) ((bank) % 2 * 8)
|
||||
#define ECC_COR_INFO__MAX_ERRORS 0x007f
|
||||
#define ECC_COR_INFO__UNCOR_ERR 0x0080
|
||||
#define ECC_COR_INFO__MAX_ERRORS GENMASK(6, 0)
|
||||
#define ECC_COR_INFO__UNCOR_ERR BIT(7)
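ECC_COR_INFO packs the correction info of two banks into one register, one byte per bank; ECC_COR_INFO__SHIFT(bank) selects the byte. An illustrative reader, with a hypothetical helper name:

#include <linux/io.h>

/* Illustrative only: max corrected bitflips reported for one bank */
static unsigned int denali_bank_max_errors(void __iomem *reg_base, int bank)
{
	u32 info = ioread32(reg_base + ECC_COR_INFO(bank));

	return (info >> ECC_COR_INFO__SHIFT(bank)) & ECC_COR_INFO__MAX_ERRORS;
}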
|
||||
|
||||
#define CFG_DATA_BLOCK_SIZE 0x6b0
|
||||
|
||||
#define CFG_LAST_DATA_BLOCK_SIZE 0x6c0
|
||||
|
||||
#define CFG_NUM_DATA_BLOCKS 0x6d0
|
||||
|
||||
#define CFG_META_DATA_SIZE 0x6e0
|
||||
|
||||
#define DMA_ENABLE 0x700
|
||||
#define DMA_ENABLE__FLAG 0x0001
|
||||
#define DMA_ENABLE__FLAG BIT(0)
|
||||
|
||||
#define IGNORE_ECC_DONE 0x710
|
||||
#define IGNORE_ECC_DONE__FLAG 0x0001
|
||||
#define IGNORE_ECC_DONE__FLAG BIT(0)
|
||||
|
||||
#define DMA_INTR 0x720
|
||||
#define DMA_INTR_EN 0x730
|
||||
#define DMA_INTR__TARGET_ERROR 0x0001
|
||||
#define DMA_INTR__DESC_COMP_CHANNEL0 0x0002
|
||||
#define DMA_INTR__DESC_COMP_CHANNEL1 0x0004
|
||||
#define DMA_INTR__DESC_COMP_CHANNEL2 0x0008
|
||||
#define DMA_INTR__DESC_COMP_CHANNEL3 0x0010
|
||||
#define DMA_INTR__MEMCOPY_DESC_COMP 0x0020
|
||||
#define DMA_INTR__TARGET_ERROR BIT(0)
|
||||
#define DMA_INTR__DESC_COMP_CHANNEL0 BIT(1)
|
||||
#define DMA_INTR__DESC_COMP_CHANNEL1 BIT(2)
|
||||
#define DMA_INTR__DESC_COMP_CHANNEL2 BIT(3)
|
||||
#define DMA_INTR__DESC_COMP_CHANNEL3 BIT(4)
|
||||
#define DMA_INTR__MEMCOPY_DESC_COMP BIT(5)
|
||||
|
||||
#define TARGET_ERR_ADDR_LO 0x740
|
||||
#define TARGET_ERR_ADDR_LO__VALUE 0xffff
|
||||
#define TARGET_ERR_ADDR_LO__VALUE GENMASK(15, 0)
|
||||
|
||||
#define TARGET_ERR_ADDR_HI 0x750
|
||||
#define TARGET_ERR_ADDR_HI__VALUE 0xffff
|
||||
#define TARGET_ERR_ADDR_HI__VALUE GENMASK(15, 0)
|
||||
|
||||
#define CHNL_ACTIVE 0x760
|
||||
#define CHNL_ACTIVE__CHANNEL0 0x0001
|
||||
#define CHNL_ACTIVE__CHANNEL1 0x0002
|
||||
#define CHNL_ACTIVE__CHANNEL2 0x0004
|
||||
#define CHNL_ACTIVE__CHANNEL3 0x0008
|
||||
|
||||
#define FAIL 1 /*failed flag*/
|
||||
#define PASS 0 /*success flag*/
|
||||
|
||||
#define CLK_X 5
|
||||
#define CLK_MULTI 4
|
||||
|
||||
#define ONFI_BLOOM_TIME 1
|
||||
#define MODE5_WORKAROUND 0
|
||||
|
||||
|
||||
#define MODE_00 0x00000000
|
||||
#define MODE_01 0x04000000
|
||||
#define MODE_10 0x08000000
|
||||
#define MODE_11 0x0C000000
|
||||
|
||||
#define ECC_SECTOR_SIZE 512
|
||||
|
||||
struct nand_buf {
|
||||
int head;
|
||||
int tail;
|
||||
uint8_t *buf;
|
||||
dma_addr_t dma_buf;
|
||||
};
|
||||
|
||||
#define INTEL_CE4100 1
|
||||
#define INTEL_MRST 2
|
||||
#define DT 3
|
||||
#define CHNL_ACTIVE__CHANNEL0 BIT(0)
|
||||
#define CHNL_ACTIVE__CHANNEL1 BIT(1)
|
||||
#define CHNL_ACTIVE__CHANNEL2 BIT(2)
|
||||
#define CHNL_ACTIVE__CHANNEL3 BIT(3)
|
||||
|
||||
struct denali_nand_info {
	struct nand_chip nand;
	int flash_bank; /* currently selected chip */
	int status;
	int platform;
	struct nand_buf buf;
	unsigned long clk_x_rate; /* bus interface clock rate */
	int active_bank; /* currently selected bank */
	struct device *dev;
	int total_used_banks;
	int page;
	void __iomem *flash_reg; /* Register Interface */
	void __iomem *flash_mem; /* Host Data/Command Interface */
	void __iomem *reg; /* Register Interface */
	void __iomem *host; /* Host Data/Command Interface */

	/* elements used by ISR */
	struct completion complete;
	spinlock_t irq_lock;
	uint32_t irq_mask;
	uint32_t irq_status;
	int irq;

	int devnum; /* represent how many nands connected */
	int bbtskipbytes;
	void *buf;
	dma_addr_t dma_addr;
	int dma_avail;
	int devs_per_cs; /* devices connected in parallel */
	int oob_skip_bytes;
	int max_banks;
	unsigned int revision;
	unsigned int caps;
	const struct nand_ecc_caps *ecc_caps;
};
|
||||
|
||||
#define DENALI_CAP_HW_ECC_FIXUP BIT(0)
|
||||
#define DENALI_CAP_DMA_64BIT BIT(1)
|
||||
|
||||
int denali_calc_ecc_bytes(int step_size, int strength);
|
||||
extern int denali_init(struct denali_nand_info *denali);
|
||||
extern void denali_remove(struct denali_nand_info *denali);
|
||||
|
||||
|
|
|
@ -32,10 +32,31 @@ struct denali_dt {
|
|||
struct denali_dt_data {
|
||||
unsigned int revision;
|
||||
unsigned int caps;
|
||||
const struct nand_ecc_caps *ecc_caps;
|
||||
};
|
||||
|
||||
NAND_ECC_CAPS_SINGLE(denali_socfpga_ecc_caps, denali_calc_ecc_bytes,
|
||||
512, 8, 15);
|
||||
static const struct denali_dt_data denali_socfpga_data = {
|
||||
.caps = DENALI_CAP_HW_ECC_FIXUP,
|
||||
.ecc_caps = &denali_socfpga_ecc_caps,
|
||||
};
|
||||
|
||||
NAND_ECC_CAPS_SINGLE(denali_uniphier_v5a_ecc_caps, denali_calc_ecc_bytes,
|
||||
1024, 8, 16, 24);
|
||||
static const struct denali_dt_data denali_uniphier_v5a_data = {
|
||||
.caps = DENALI_CAP_HW_ECC_FIXUP |
|
||||
DENALI_CAP_DMA_64BIT,
|
||||
.ecc_caps = &denali_uniphier_v5a_ecc_caps,
|
||||
};
|
||||
|
||||
NAND_ECC_CAPS_SINGLE(denali_uniphier_v5b_ecc_caps, denali_calc_ecc_bytes,
|
||||
1024, 8, 16);
|
||||
static const struct denali_dt_data denali_uniphier_v5b_data = {
|
||||
.revision = 0x0501,
|
||||
.caps = DENALI_CAP_HW_ECC_FIXUP |
|
||||
DENALI_CAP_DMA_64BIT,
|
||||
.ecc_caps = &denali_uniphier_v5b_ecc_caps,
|
||||
};
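NAND_ECC_CAPS_SINGLE() describes one supported ECC step size plus the list of strengths this controller instance can produce, with denali_calc_ecc_bytes() mapping (step, strength) to parity bytes. A hypothetical extra platform entry, mirroring the real ones above; the names and values are made up for illustration only:

/* Hypothetical example entry: 1024-byte steps, 8- or 24-bit correction */
NAND_ECC_CAPS_SINGLE(denali_example_ecc_caps, denali_calc_ecc_bytes,
		     1024, 8, 24);

static const struct denali_dt_data denali_example_data = {
	.caps = DENALI_CAP_HW_ECC_FIXUP,
	.ecc_caps = &denali_example_ecc_caps,
};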
|
||||
|
||||
static const struct of_device_id denali_nand_dt_ids[] = {
|
||||
|
@ -43,13 +64,21 @@ static const struct of_device_id denali_nand_dt_ids[] = {
|
|||
.compatible = "altr,socfpga-denali-nand",
|
||||
.data = &denali_socfpga_data,
|
||||
},
|
||||
{
|
||||
.compatible = "socionext,uniphier-denali-nand-v5a",
|
||||
.data = &denali_uniphier_v5a_data,
|
||||
},
|
||||
{
|
||||
.compatible = "socionext,uniphier-denali-nand-v5b",
|
||||
.data = &denali_uniphier_v5b_data,
|
||||
},
|
||||
{ /* sentinel */ }
|
||||
};
|
||||
MODULE_DEVICE_TABLE(of, denali_nand_dt_ids);
|
||||
|
||||
static int denali_dt_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct resource *denali_reg, *nand_data;
|
||||
struct resource *res;
|
||||
struct denali_dt *dt;
|
||||
const struct denali_dt_data *data;
|
||||
struct denali_nand_info *denali;
|
||||
|
@ -64,9 +93,9 @@ static int denali_dt_probe(struct platform_device *pdev)
|
|||
if (data) {
|
||||
denali->revision = data->revision;
|
||||
denali->caps = data->caps;
|
||||
denali->ecc_caps = data->ecc_caps;
|
||||
}
|
||||
|
||||
denali->platform = DT;
|
||||
denali->dev = &pdev->dev;
|
||||
denali->irq = platform_get_irq(pdev, 0);
|
||||
if (denali->irq < 0) {
|
||||
|
@ -74,17 +103,15 @@ static int denali_dt_probe(struct platform_device *pdev)
|
|||
return denali->irq;
|
||||
}
|
||||
|
||||
denali_reg = platform_get_resource_byname(pdev, IORESOURCE_MEM,
|
||||
"denali_reg");
|
||||
denali->flash_reg = devm_ioremap_resource(&pdev->dev, denali_reg);
|
||||
if (IS_ERR(denali->flash_reg))
|
||||
return PTR_ERR(denali->flash_reg);
|
||||
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "denali_reg");
|
||||
denali->reg = devm_ioremap_resource(&pdev->dev, res);
|
||||
if (IS_ERR(denali->reg))
|
||||
return PTR_ERR(denali->reg);
|
||||
|
||||
nand_data = platform_get_resource_byname(pdev, IORESOURCE_MEM,
|
||||
"nand_data");
|
||||
denali->flash_mem = devm_ioremap_resource(&pdev->dev, nand_data);
|
||||
if (IS_ERR(denali->flash_mem))
|
||||
return PTR_ERR(denali->flash_mem);
|
||||
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "nand_data");
|
||||
denali->host = devm_ioremap_resource(&pdev->dev, res);
|
||||
if (IS_ERR(denali->host))
|
||||
return PTR_ERR(denali->host);
|
||||
|
||||
dt->clk = devm_clk_get(&pdev->dev, NULL);
|
||||
if (IS_ERR(dt->clk)) {
|
||||
|
@ -93,6 +120,8 @@ static int denali_dt_probe(struct platform_device *pdev)
|
|||
}
|
||||
clk_prepare_enable(dt->clk);
|
||||
|
||||
denali->clk_x_rate = clk_get_rate(dt->clk);
|
||||
|
||||
ret = denali_init(denali);
|
||||
if (ret)
|
||||
goto out_disable_clk;
|
||||
|
|
|
@ -19,6 +19,9 @@
|
|||
|
||||
#define DENALI_NAND_NAME "denali-nand-pci"
|
||||
|
||||
#define INTEL_CE4100 1
|
||||
#define INTEL_MRST 2
|
||||
|
||||
/* List of platforms this NAND controller has been integrated into */
|
||||
static const struct pci_device_id denali_pci_ids[] = {
|
||||
{ PCI_VDEVICE(INTEL, 0x0701), INTEL_CE4100 },
|
||||
|
@ -27,6 +30,8 @@ static const struct pci_device_id denali_pci_ids[] = {
|
|||
};
|
||||
MODULE_DEVICE_TABLE(pci, denali_pci_ids);
|
||||
|
||||
NAND_ECC_CAPS_SINGLE(denali_pci_ecc_caps, denali_calc_ecc_bytes, 512, 8, 15);
|
||||
|
||||
static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
|
||||
{
|
||||
int ret;
|
||||
|
@ -45,13 +50,11 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
|
|||
}
|
||||
|
||||
if (id->driver_data == INTEL_CE4100) {
|
||||
denali->platform = INTEL_CE4100;
|
||||
mem_base = pci_resource_start(dev, 0);
|
||||
mem_len = pci_resource_len(dev, 1);
|
||||
csr_base = pci_resource_start(dev, 1);
|
||||
csr_len = pci_resource_len(dev, 1);
|
||||
} else {
|
||||
denali->platform = INTEL_MRST;
|
||||
csr_base = pci_resource_start(dev, 0);
|
||||
csr_len = pci_resource_len(dev, 0);
|
||||
mem_base = pci_resource_start(dev, 1);
|
||||
|
@ -65,6 +68,9 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
|
|||
pci_set_master(dev);
|
||||
denali->dev = &dev->dev;
|
||||
denali->irq = dev->irq;
|
||||
denali->ecc_caps = &denali_pci_ecc_caps;
|
||||
denali->nand.ecc.options |= NAND_ECC_MAXIMIZE;
|
||||
denali->clk_x_rate = 200000000; /* 200 MHz */
|
||||
|
||||
ret = pci_request_regions(dev, DENALI_NAND_NAME);
|
||||
if (ret) {
|
||||
|
@ -72,14 +78,14 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
|
|||
return ret;
|
||||
}
|
||||
|
||||
denali->flash_reg = ioremap_nocache(csr_base, csr_len);
|
||||
if (!denali->flash_reg) {
|
||||
denali->reg = ioremap_nocache(csr_base, csr_len);
|
||||
if (!denali->reg) {
|
||||
dev_err(&dev->dev, "Spectra: Unable to remap memory region\n");
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
denali->flash_mem = ioremap_nocache(mem_base, mem_len);
|
||||
if (!denali->flash_mem) {
|
||||
denali->host = ioremap_nocache(mem_base, mem_len);
|
||||
if (!denali->host) {
|
||||
dev_err(&dev->dev, "Spectra: ioremap_nocache failed!");
|
||||
ret = -ENOMEM;
|
||||
goto failed_remap_reg;
|
||||
|
@ -94,9 +100,9 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
|
|||
return 0;
|
||||
|
||||
failed_remap_mem:
|
||||
iounmap(denali->flash_mem);
|
||||
iounmap(denali->host);
|
||||
failed_remap_reg:
|
||||
iounmap(denali->flash_reg);
|
||||
iounmap(denali->reg);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -106,8 +112,8 @@ static void denali_pci_remove(struct pci_dev *dev)
|
|||
struct denali_nand_info *denali = pci_get_drvdata(dev);
|
||||
|
||||
denali_remove(denali);
|
||||
iounmap(denali->flash_reg);
|
||||
iounmap(denali->flash_mem);
|
||||
iounmap(denali->reg);
|
||||
iounmap(denali->host);
|
||||
}
|
||||
|
||||
static struct pci_driver denali_pci_driver = {
|
||||
|
|
|
@ -1260,6 +1260,8 @@ static void __init init_mtd_structs(struct mtd_info *mtd)
|
|||
nand->read_buf = docg4_read_buf;
|
||||
nand->write_buf = docg4_write_buf16;
|
||||
nand->erase = docg4_erase_block;
|
||||
nand->onfi_set_features = nand_onfi_get_set_features_notsupp;
|
||||
nand->onfi_get_features = nand_onfi_get_set_features_notsupp;
|
||||
nand->ecc.read_page = docg4_read_page;
|
||||
nand->ecc.write_page = docg4_write_page;
|
||||
nand->ecc.read_page_raw = docg4_read_page_raw;
|
||||
|
|
|
@ -775,6 +775,8 @@ static int fsl_elbc_chip_init(struct fsl_elbc_mtd *priv)
|
|||
chip->select_chip = fsl_elbc_select_chip;
|
||||
chip->cmdfunc = fsl_elbc_cmdfunc;
|
||||
chip->waitfunc = fsl_elbc_wait;
|
||||
chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
|
||||
chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
|
||||
|
||||
chip->bbt_td = &bbt_main_descr;
|
||||
chip->bbt_md = &bbt_mirror_descr;
|
||||
|
|
|
@ -171,34 +171,6 @@ static void set_addr(struct mtd_info *mtd, int column, int page_addr, int oob)
|
|||
ifc_nand_ctrl->index += mtd->writesize;
|
||||
}
|
||||
|
||||
static int is_blank(struct mtd_info *mtd, unsigned int bufnum)
|
||||
{
|
||||
struct nand_chip *chip = mtd_to_nand(mtd);
|
||||
struct fsl_ifc_mtd *priv = nand_get_controller_data(chip);
|
||||
u8 __iomem *addr = priv->vbase + bufnum * (mtd->writesize * 2);
|
||||
u32 __iomem *mainarea = (u32 __iomem *)addr;
|
||||
u8 __iomem *oob = addr + mtd->writesize;
|
||||
struct mtd_oob_region oobregion = { };
|
||||
int i, section = 0;
|
||||
|
||||
for (i = 0; i < mtd->writesize / 4; i++) {
|
||||
if (__raw_readl(&mainarea[i]) != 0xffffffff)
|
||||
return 0;
|
||||
}
|
||||
|
||||
mtd_ooblayout_ecc(mtd, section++, &oobregion);
|
||||
while (oobregion.length) {
|
||||
for (i = 0; i < oobregion.length; i++) {
|
||||
if (__raw_readb(&oob[oobregion.offset + i]) != 0xff)
|
||||
return 0;
|
||||
}
|
||||
|
||||
mtd_ooblayout_ecc(mtd, section++, &oobregion);
|
||||
}
|
||||
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* returns nonzero if entire page is blank */
|
||||
static int check_read_ecc(struct mtd_info *mtd, struct fsl_ifc_ctrl *ctrl,
|
||||
u32 *eccstat, unsigned int bufnum)
|
||||
|
@ -274,16 +246,14 @@ static void fsl_ifc_run_command(struct mtd_info *mtd)
|
|||
if (errors == 15) {
|
||||
/*
|
||||
* Uncorrectable error.
|
||||
* OK only if the whole page is blank.
|
||||
* We'll check for blank pages later.
|
||||
*
|
||||
* We disable ECCER reporting due to...
|
||||
* erratum IFC-A002770 -- so report it now if we
|
||||
* see an uncorrectable error in ECCSTAT.
|
||||
*/
|
||||
if (!is_blank(mtd, bufnum))
|
||||
ctrl->nand_stat |=
|
||||
IFC_NAND_EVTER_STAT_ECCER;
|
||||
break;
|
||||
ctrl->nand_stat |= IFC_NAND_EVTER_STAT_ECCER;
|
||||
continue;
|
||||
}
|
||||
|
||||
mtd->ecc_stats.corrected += errors;
|
||||
|
@ -678,6 +648,39 @@ static int fsl_ifc_wait(struct mtd_info *mtd, struct nand_chip *chip)
|
|||
return nand_fsr | NAND_STATUS_WP;
|
||||
}
|
||||
|
||||
/*
|
||||
* The controller does not check for bitflips in erased pages,
|
||||
* therefore software must check instead.
|
||||
*/
|
||||
static int check_erased_page(struct nand_chip *chip, u8 *buf)
|
||||
{
|
||||
struct mtd_info *mtd = nand_to_mtd(chip);
|
||||
u8 *ecc = chip->oob_poi;
|
||||
const int ecc_size = chip->ecc.bytes;
|
||||
const int pkt_size = chip->ecc.size;
|
||||
int i, res, bitflips = 0;
|
||||
struct mtd_oob_region oobregion = { };
|
||||
|
||||
mtd_ooblayout_ecc(mtd, 0, &oobregion);
|
||||
ecc += oobregion.offset;
|
||||
|
||||
for (i = 0; i < chip->ecc.steps; ++i) {
|
||||
res = nand_check_erased_ecc_chunk(buf, pkt_size, ecc, ecc_size,
|
||||
NULL, 0,
|
||||
chip->ecc.strength);
|
||||
if (res < 0)
|
||||
mtd->ecc_stats.failed++;
|
||||
else
|
||||
mtd->ecc_stats.corrected += res;
|
||||
|
||||
bitflips = max(res, bitflips);
|
||||
buf += pkt_size;
|
||||
ecc += ecc_size;
|
||||
}
|
||||
|
||||
return bitflips;
|
||||
}
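The value check_erased_page() returns (and fsl_ifc_read_page() passes on when it takes this path) follows the usual read_page() contract: it is the maximum bitflip count seen, which the MTD core compares against mtd->bitflip_threshold to decide whether to report -EUCLEAN. A hedged sketch of that contract; the function name is hypothetical:

#include <linux/mtd/mtd.h>

/*
 * Illustrative only: how the bitflip count from check_erased_page()
 * is ultimately interpreted. -EUCLEAN means the data is intact but
 * the block should be scrubbed.
 */
static int example_fold_bitflips(struct mtd_info *mtd, int max_bitflips)
{
	if (max_bitflips < 0)
		return max_bitflips;	/* real error from the driver */

	return max_bitflips >= mtd->bitflip_threshold ? -EUCLEAN : 0;
}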
|
||||
|
||||
static int fsl_ifc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
|
||||
uint8_t *buf, int oob_required, int page)
|
||||
{
|
||||
|
@ -689,8 +692,12 @@ static int fsl_ifc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
|
|||
if (oob_required)
|
||||
fsl_ifc_read_buf(mtd, chip->oob_poi, mtd->oobsize);
|
||||
|
||||
if (ctrl->nand_stat & IFC_NAND_EVTER_STAT_ECCER)
|
||||
dev_err(priv->dev, "NAND Flash ECC Uncorrectable Error\n");
|
||||
if (ctrl->nand_stat & IFC_NAND_EVTER_STAT_ECCER) {
|
||||
if (!oob_required)
|
||||
fsl_ifc_read_buf(mtd, chip->oob_poi, mtd->oobsize);
|
||||
|
||||
return check_erased_page(chip, buf);
|
||||
}
|
||||
|
||||
if (ctrl->nand_stat != IFC_NAND_EVTER_STAT_OPC)
|
||||
mtd->ecc_stats.failed++;
|
||||
|
@ -831,6 +838,8 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
|
|||
chip->select_chip = fsl_ifc_select_chip;
|
||||
chip->cmdfunc = fsl_ifc_cmdfunc;
|
||||
chip->waitfunc = fsl_ifc_wait;
|
||||
chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
|
||||
chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
|
||||
|
||||
chip->bbt_td = &bbt_main_descr;
|
||||
chip->bbt_md = &bbt_mirror_descr;
|
||||
|
@ -904,7 +913,7 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
|
|||
chip->ecc.algo = NAND_ECC_HAMMING;
|
||||
}
|
||||
|
||||
if (ctrl->version == FSL_IFC_VERSION_1_1_0)
|
||||
if (ctrl->version >= FSL_IFC_VERSION_1_1_0)
|
||||
fsl_ifc_sram_init(priv);
|
||||
|
||||
return 0;
|
||||
|
|
|
@ -302,25 +302,13 @@ static void fsmc_cmd_ctrl(struct mtd_info *mtd, int cmd, unsigned int ctrl)
|
|||
* This routine initializes timing parameters related to NAND memory access in
|
||||
* FSMC registers
|
||||
*/
|
||||
static void fsmc_nand_setup(void __iomem *regs, uint32_t bank,
|
||||
uint32_t busw, struct fsmc_nand_timings *timings)
|
||||
static void fsmc_nand_setup(struct fsmc_nand_data *host,
|
||||
struct fsmc_nand_timings *tims)
|
||||
{
|
||||
uint32_t value = FSMC_DEVTYPE_NAND | FSMC_ENABLE | FSMC_WAITON;
|
||||
uint32_t tclr, tar, thiz, thold, twait, tset;
|
||||
struct fsmc_nand_timings *tims;
|
||||
struct fsmc_nand_timings default_timings = {
|
||||
.tclr = FSMC_TCLR_1,
|
||||
.tar = FSMC_TAR_1,
|
||||
.thiz = FSMC_THIZ_1,
|
||||
.thold = FSMC_THOLD_4,
|
||||
.twait = FSMC_TWAIT_6,
|
||||
.tset = FSMC_TSET_0,
|
||||
};
|
||||
|
||||
if (timings)
|
||||
tims = timings;
|
||||
else
|
||||
tims = &default_timings;
|
||||
unsigned int bank = host->bank;
|
||||
void __iomem *regs = host->regs_va;
|
||||
|
||||
tclr = (tims->tclr & FSMC_TCLR_MASK) << FSMC_TCLR_SHIFT;
|
||||
tar = (tims->tar & FSMC_TAR_MASK) << FSMC_TAR_SHIFT;
|
||||
|
@ -329,7 +317,7 @@ static void fsmc_nand_setup(void __iomem *regs, uint32_t bank,
|
|||
twait = (tims->twait & FSMC_TWAIT_MASK) << FSMC_TWAIT_SHIFT;
|
||||
tset = (tims->tset & FSMC_TSET_MASK) << FSMC_TSET_SHIFT;
|
||||
|
||||
if (busw)
|
||||
if (host->nand.options & NAND_BUSWIDTH_16)
|
||||
writel_relaxed(value | FSMC_DEVWID_16,
|
||||
FSMC_NAND_REG(regs, bank, PC));
|
||||
else
|
||||
|
@ -344,6 +332,87 @@ static void fsmc_nand_setup(void __iomem *regs, uint32_t bank,
|
|||
FSMC_NAND_REG(regs, bank, ATTRIB));
|
||||
}
|
||||
|
||||
static int fsmc_calc_timings(struct fsmc_nand_data *host,
|
||||
const struct nand_sdr_timings *sdrt,
|
||||
struct fsmc_nand_timings *tims)
|
||||
{
|
||||
unsigned long hclk = clk_get_rate(host->clk);
|
||||
unsigned long hclkn = NSEC_PER_SEC / hclk;
|
||||
uint32_t thiz, thold, twait, tset;
|
||||
|
||||
if (sdrt->tRC_min < 30000)
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
tims->tar = DIV_ROUND_UP(sdrt->tAR_min / 1000, hclkn) - 1;
|
||||
if (tims->tar > FSMC_TAR_MASK)
|
||||
tims->tar = FSMC_TAR_MASK;
|
||||
tims->tclr = DIV_ROUND_UP(sdrt->tCLR_min / 1000, hclkn) - 1;
|
||||
if (tims->tclr > FSMC_TCLR_MASK)
|
||||
tims->tclr = FSMC_TCLR_MASK;
|
||||
|
||||
thiz = sdrt->tCS_min - sdrt->tWP_min;
|
||||
tims->thiz = DIV_ROUND_UP(thiz / 1000, hclkn);
|
||||
|
||||
thold = sdrt->tDH_min;
|
||||
if (thold < sdrt->tCH_min)
|
||||
thold = sdrt->tCH_min;
|
||||
if (thold < sdrt->tCLH_min)
|
||||
thold = sdrt->tCLH_min;
|
||||
if (thold < sdrt->tWH_min)
|
||||
thold = sdrt->tWH_min;
|
||||
if (thold < sdrt->tALH_min)
|
||||
thold = sdrt->tALH_min;
|
||||
if (thold < sdrt->tREH_min)
|
||||
thold = sdrt->tREH_min;
|
||||
tims->thold = DIV_ROUND_UP(thold / 1000, hclkn);
|
||||
if (tims->thold == 0)
|
||||
tims->thold = 1;
|
||||
else if (tims->thold > FSMC_THOLD_MASK)
|
||||
tims->thold = FSMC_THOLD_MASK;
|
||||
|
||||
twait = max(sdrt->tRP_min, sdrt->tWP_min);
|
||||
tims->twait = DIV_ROUND_UP(twait / 1000, hclkn) - 1;
|
||||
if (tims->twait == 0)
|
||||
tims->twait = 1;
|
||||
else if (tims->twait > FSMC_TWAIT_MASK)
|
||||
tims->twait = FSMC_TWAIT_MASK;
|
||||
|
||||
tset = max(sdrt->tCS_min - sdrt->tWP_min,
|
||||
sdrt->tCEA_max - sdrt->tREA_max);
|
||||
tims->tset = DIV_ROUND_UP(tset / 1000, hclkn) - 1;
|
||||
if (tims->tset == 0)
|
||||
tims->tset = 1;
|
||||
else if (tims->tset > FSMC_TSET_MASK)
|
||||
tims->tset = FSMC_TSET_MASK;
|
||||
|
||||
return 0;
|
||||
}
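fsmc_calc_timings() converts the picosecond values from nand_sdr_timings into HCLK cycles, rounding up and clamping each field to its mask; for example, with a 166 MHz HCLK (hclkn = 6 ns) a 25 ns tWP gives DIV_ROUND_UP(25, 6) - 1 = 4 wait states. A standalone sketch of the core conversion; the function name is illustrative:

#include <linux/kernel.h>
#include <linux/time64.h>

/* Illustrative only: picoseconds -> HCLK cycles, as in fsmc_calc_timings() */
static unsigned int fsmc_ps_to_cycles(unsigned int ps, unsigned long hclk_hz)
{
	unsigned long hclkn = NSEC_PER_SEC / hclk_hz;	/* clock period in ns */

	return DIV_ROUND_UP(ps / 1000, hclkn);
}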
|
||||
|
||||
static int fsmc_setup_data_interface(struct mtd_info *mtd, int csline,
|
||||
const struct nand_data_interface *conf)
|
||||
{
|
||||
struct nand_chip *nand = mtd_to_nand(mtd);
|
||||
struct fsmc_nand_data *host = nand_get_controller_data(nand);
|
||||
struct fsmc_nand_timings tims;
|
||||
const struct nand_sdr_timings *sdrt;
|
||||
int ret;
|
||||
|
||||
sdrt = nand_get_sdr_timings(conf);
|
||||
if (IS_ERR(sdrt))
|
||||
return PTR_ERR(sdrt);
|
||||
|
||||
ret = fsmc_calc_timings(host, sdrt, &tims);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (csline == NAND_DATA_IFACE_CHECK_ONLY)
|
||||
return 0;
|
||||
|
||||
fsmc_nand_setup(host, &tims);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* fsmc_enable_hwecc - Enables Hardware ECC through FSMC registers
|
||||
*/
|
||||
|
@ -796,10 +865,8 @@ static int fsmc_nand_probe_config_dt(struct platform_device *pdev,
|
|||
return -ENOMEM;
|
||||
ret = of_property_read_u8_array(np, "timings", (u8 *)host->dev_timings,
|
||||
sizeof(*host->dev_timings));
|
||||
if (ret) {
|
||||
dev_info(&pdev->dev, "No timings in dts specified, using default timings!\n");
|
||||
if (ret)
|
||||
host->dev_timings = NULL;
|
||||
}
|
||||
|
||||
/* Set default NAND bank to 0 */
|
||||
host->bank = 0;
|
||||
|
@ -933,9 +1000,10 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
|
|||
break;
|
||||
}
|
||||
|
||||
fsmc_nand_setup(host->regs_va, host->bank,
|
||||
nand->options & NAND_BUSWIDTH_16,
|
||||
host->dev_timings);
|
||||
if (host->dev_timings)
|
||||
fsmc_nand_setup(host, host->dev_timings);
|
||||
else
|
||||
nand->setup_data_interface = fsmc_setup_data_interface;
|
||||
|
||||
if (AMBA_REV_BITS(host->pid) >= 8) {
|
||||
nand->ecc.read_page = fsmc_read_page_hwecc;
|
||||
|
@ -986,6 +1054,9 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
|
|||
break;
|
||||
}
|
||||
|
||||
case NAND_ECC_ON_DIE:
|
||||
break;
|
||||
|
||||
default:
|
||||
dev_err(&pdev->dev, "Unsupported ECC mode!\n");
|
||||
goto err_probe;
|
||||
|
@ -1073,9 +1144,8 @@ static int fsmc_nand_resume(struct device *dev)
|
|||
struct fsmc_nand_data *host = dev_get_drvdata(dev);
|
||||
if (host) {
|
||||
clk_prepare_enable(host->clk);
|
||||
fsmc_nand_setup(host->regs_va, host->bank,
|
||||
host->nand.options & NAND_BUSWIDTH_16,
|
||||
host->dev_timings);
|
||||
if (host->dev_timings)
|
||||
fsmc_nand_setup(host, host->dev_timings);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@ -26,7 +26,7 @@
|
|||
#include "gpmi-regs.h"
|
||||
#include "bch-regs.h"
|
||||
|
||||
static struct timing_threshod timing_default_threshold = {
|
||||
static struct timing_threshold timing_default_threshold = {
|
||||
.max_data_setup_cycles = (BM_GPMI_TIMING0_DATA_SETUP >>
|
||||
BP_GPMI_TIMING0_DATA_SETUP),
|
||||
.internal_data_setup_in_ns = 0,
|
||||
|
@ -329,7 +329,7 @@ static unsigned int ns_to_cycles(unsigned int time,
|
|||
static int gpmi_nfc_compute_hardware_timing(struct gpmi_nand_data *this,
|
||||
struct gpmi_nfc_hardware_timing *hw)
|
||||
{
|
||||
struct timing_threshod *nfc = &timing_default_threshold;
|
||||
struct timing_threshold *nfc = &timing_default_threshold;
|
||||
struct resources *r = &this->resources;
|
||||
struct nand_chip *nand = &this->nand;
|
||||
struct nand_timing target = this->timing;
|
||||
|
@ -932,7 +932,7 @@ static int enable_edo_mode(struct gpmi_nand_data *this, int mode)
|
|||
|
||||
nand->select_chip(mtd, 0);
|
||||
|
||||
/* [1] send SET FEATURE commond to NAND */
|
||||
/* [1] send SET FEATURE command to NAND */
|
||||
feature[0] = mode;
|
||||
ret = nand->onfi_set_features(mtd, nand,
|
||||
ONFI_FEATURE_ADDR_TIMING_MODE, feature);
|
||||
|
|
|
@ -82,6 +82,10 @@ static int gpmi_ooblayout_free(struct mtd_info *mtd, int section,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static const char * const gpmi_clks_for_mx2x[] = {
|
||||
"gpmi_io",
|
||||
};
|
||||
|
||||
static const struct mtd_ooblayout_ops gpmi_ooblayout_ops = {
|
||||
.ecc = gpmi_ooblayout_ecc,
|
||||
.free = gpmi_ooblayout_free,
|
||||
|
@ -91,24 +95,48 @@ static const struct gpmi_devdata gpmi_devdata_imx23 = {
|
|||
.type = IS_MX23,
|
||||
.bch_max_ecc_strength = 20,
|
||||
.max_chain_delay = 16,
|
||||
.clks = gpmi_clks_for_mx2x,
|
||||
.clks_count = ARRAY_SIZE(gpmi_clks_for_mx2x),
|
||||
};
|
||||
|
||||
static const struct gpmi_devdata gpmi_devdata_imx28 = {
|
||||
.type = IS_MX28,
|
||||
.bch_max_ecc_strength = 20,
|
||||
.max_chain_delay = 16,
|
||||
.clks = gpmi_clks_for_mx2x,
|
||||
.clks_count = ARRAY_SIZE(gpmi_clks_for_mx2x),
|
||||
};
|
||||
|
||||
static const char * const gpmi_clks_for_mx6[] = {
|
||||
"gpmi_io", "gpmi_apb", "gpmi_bch", "gpmi_bch_apb", "per1_bch",
|
||||
};
|
||||
|
||||
static const struct gpmi_devdata gpmi_devdata_imx6q = {
|
||||
.type = IS_MX6Q,
|
||||
.bch_max_ecc_strength = 40,
|
||||
.max_chain_delay = 12,
|
||||
.clks = gpmi_clks_for_mx6,
|
||||
.clks_count = ARRAY_SIZE(gpmi_clks_for_mx6),
|
||||
};
|
||||
|
||||
static const struct gpmi_devdata gpmi_devdata_imx6sx = {
|
||||
.type = IS_MX6SX,
|
||||
.bch_max_ecc_strength = 62,
|
||||
.max_chain_delay = 12,
|
||||
.clks = gpmi_clks_for_mx6,
|
||||
.clks_count = ARRAY_SIZE(gpmi_clks_for_mx6),
|
||||
};
|
||||
|
||||
static const char * const gpmi_clks_for_mx7d[] = {
|
||||
"gpmi_io", "gpmi_bch_apb",
|
||||
};
|
||||
|
||||
static const struct gpmi_devdata gpmi_devdata_imx7d = {
|
||||
.type = IS_MX7D,
|
||||
.bch_max_ecc_strength = 62,
|
||||
.max_chain_delay = 12,
|
||||
.clks = gpmi_clks_for_mx7d,
|
||||
.clks_count = ARRAY_SIZE(gpmi_clks_for_mx7d),
|
||||
};
|
||||
|
||||
static irqreturn_t bch_irq(int irq, void *cookie)
|
||||
|
@ -599,35 +627,14 @@ acquire_err:
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
static char *extra_clks_for_mx6q[GPMI_CLK_MAX] = {
|
||||
"gpmi_apb", "gpmi_bch", "gpmi_bch_apb", "per1_bch",
|
||||
};
|
||||
|
||||
static int gpmi_get_clks(struct gpmi_nand_data *this)
|
||||
{
|
||||
struct resources *r = &this->resources;
|
||||
char **extra_clks = NULL;
|
||||
struct clk *clk;
|
||||
int err, i;
|
||||
|
||||
/* The main clock is stored in the first. */
|
||||
r->clock[0] = devm_clk_get(this->dev, "gpmi_io");
|
||||
if (IS_ERR(r->clock[0])) {
|
||||
err = PTR_ERR(r->clock[0]);
|
||||
goto err_clock;
|
||||
}
|
||||
|
||||
/* Get extra clocks */
|
||||
if (GPMI_IS_MX6(this))
|
||||
extra_clks = extra_clks_for_mx6q;
|
||||
if (!extra_clks)
|
||||
return 0;
|
||||
|
||||
for (i = 1; i < GPMI_CLK_MAX; i++) {
|
||||
if (extra_clks[i - 1] == NULL)
|
||||
break;
|
||||
|
||||
clk = devm_clk_get(this->dev, extra_clks[i - 1]);
|
||||
for (i = 0; i < this->devdata->clks_count; i++) {
|
||||
clk = devm_clk_get(this->dev, this->devdata->clks[i]);
|
||||
if (IS_ERR(clk)) {
|
||||
err = PTR_ERR(clk);
|
||||
goto err_clock;
|
||||
|
@ -1929,12 +1936,6 @@ static int gpmi_set_geometry(struct gpmi_nand_data *this)
|
|||
return gpmi_alloc_dma_buffer(this);
|
||||
}
|
||||
|
||||
static void gpmi_nand_exit(struct gpmi_nand_data *this)
|
||||
{
|
||||
nand_release(nand_to_mtd(&this->nand));
|
||||
gpmi_free_dma_buffer(this);
|
||||
}
|
||||
|
||||
static int gpmi_init_last(struct gpmi_nand_data *this)
|
||||
{
|
||||
struct nand_chip *chip = &this->nand;
|
||||
|
@ -2048,18 +2049,20 @@ static int gpmi_nand_init(struct gpmi_nand_data *this)
|
|||
|
||||
ret = nand_boot_init(this);
|
||||
if (ret)
|
||||
goto err_out;
|
||||
goto err_nand_cleanup;
|
||||
ret = chip->scan_bbt(mtd);
|
||||
if (ret)
|
||||
goto err_out;
|
||||
goto err_nand_cleanup;
|
||||
|
||||
ret = mtd_device_register(mtd, NULL, 0);
|
||||
if (ret)
|
||||
goto err_out;
|
||||
goto err_nand_cleanup;
|
||||
return 0;
|
||||
|
||||
err_nand_cleanup:
|
||||
nand_cleanup(chip);
|
||||
err_out:
|
||||
gpmi_nand_exit(this);
|
||||
gpmi_free_dma_buffer(this);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -2076,6 +2079,9 @@ static const struct of_device_id gpmi_nand_id_table[] = {
|
|||
}, {
|
||||
.compatible = "fsl,imx6sx-gpmi-nand",
|
||||
.data = &gpmi_devdata_imx6sx,
|
||||
}, {
|
||||
.compatible = "fsl,imx7d-gpmi-nand",
|
||||
.data = &gpmi_devdata_imx7d,
|
||||
}, {}
|
||||
};
|
||||
MODULE_DEVICE_TABLE(of, gpmi_nand_id_table);
|
||||
|
@ -2129,7 +2135,8 @@ static int gpmi_nand_remove(struct platform_device *pdev)
|
|||
{
|
||||
struct gpmi_nand_data *this = platform_get_drvdata(pdev);
|
||||
|
||||
gpmi_nand_exit(this);
|
||||
nand_release(nand_to_mtd(&this->nand));
|
||||
gpmi_free_dma_buffer(this);
|
||||
release_resources(this);
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@ -123,13 +123,16 @@ enum gpmi_type {
|
|||
IS_MX23,
|
||||
IS_MX28,
|
||||
IS_MX6Q,
|
||||
IS_MX6SX
|
||||
IS_MX6SX,
|
||||
IS_MX7D,
|
||||
};
|
||||
|
||||
struct gpmi_devdata {
|
||||
enum gpmi_type type;
|
||||
int bch_max_ecc_strength;
|
||||
int max_chain_delay; /* See the async EDO mode */
|
||||
const char * const *clks;
|
||||
const int clks_count;
|
||||
};
|
||||
|
||||
struct gpmi_nand_data {
|
||||
|
@ -231,7 +234,7 @@ struct gpmi_nfc_hardware_timing {
|
|||
};
|
||||
|
||||
/**
|
||||
* struct timing_threshod - Timing threshold
|
||||
* struct timing_threshold - Timing threshold
|
||||
* @max_data_setup_cycles: The maximum number of data setup cycles that
|
||||
* can be expressed in the hardware.
|
||||
* @internal_data_setup_in_ns: The time, in ns, that the NFC hardware requires
|
||||
|
@ -253,7 +256,7 @@ struct gpmi_nfc_hardware_timing {
|
|||
* progress, this is the clock frequency during
|
||||
* the most recent I/O transaction.
|
||||
*/
|
||||
struct timing_threshod {
|
||||
struct timing_threshold {
|
||||
const unsigned int max_chip_count;
|
||||
const unsigned int max_data_setup_cycles;
|
||||
const unsigned int internal_data_setup_in_ns;
|
||||
|
@ -305,6 +308,8 @@ void gpmi_copy_bits(u8 *dst, size_t dst_bit_off,
|
|||
#define GPMI_IS_MX28(x) ((x)->devdata->type == IS_MX28)
|
||||
#define GPMI_IS_MX6Q(x) ((x)->devdata->type == IS_MX6Q)
|
||||
#define GPMI_IS_MX6SX(x) ((x)->devdata->type == IS_MX6SX)
|
||||
#define GPMI_IS_MX7D(x) ((x)->devdata->type == IS_MX7D)
|
||||
|
||||
#define GPMI_IS_MX6(x) (GPMI_IS_MX6Q(x) || GPMI_IS_MX6SX(x))
|
||||
#define GPMI_IS_MX6(x) (GPMI_IS_MX6Q(x) || GPMI_IS_MX6SX(x) || \
|
||||
GPMI_IS_MX7D(x))
|
||||
#endif
|
||||
|
|
|
@ -764,6 +764,8 @@ static int hisi_nfc_probe(struct platform_device *pdev)
|
|||
chip->write_buf = hisi_nfc_write_buf;
|
||||
chip->read_buf = hisi_nfc_read_buf;
|
||||
chip->chip_delay = HINFC504_CHIP_DELAY;
|
||||
chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
|
||||
chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
|
||||
|
||||
hisi_nfc_host_init(host);
|
||||
|
||||
|
|
|
@ -205,7 +205,7 @@ static int jz4780_nand_init_ecc(struct jz4780_nand_chip *nand, struct device *de
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
mtd->ooblayout = &nand_ooblayout_lp_ops;
|
||||
mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@ -708,6 +708,8 @@ static int mpc5121_nfc_probe(struct platform_device *op)
|
|||
chip->read_buf = mpc5121_nfc_read_buf;
|
||||
chip->write_buf = mpc5121_nfc_write_buf;
|
||||
chip->select_chip = mpc5121_nfc_select_chip;
|
||||
chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
|
||||
chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
|
||||
chip->bbt_options = NAND_BBT_USE_FLASH;
|
||||
chip->ecc.mode = NAND_ECC_SOFT;
|
||||
chip->ecc.algo = NAND_ECC_HAMMING;
|
||||
|
|
|
@ -28,36 +28,16 @@
|
|||
|
||||
#define ECC_IDLE_MASK BIT(0)
|
||||
#define ECC_IRQ_EN BIT(0)
|
||||
#define ECC_PG_IRQ_SEL BIT(1)
|
||||
#define ECC_OP_ENABLE (1)
|
||||
#define ECC_OP_DISABLE (0)
|
||||
|
||||
#define ECC_ENCCON (0x00)
|
||||
#define ECC_ENCCNFG (0x04)
|
||||
#define ECC_CNFG_4BIT (0)
|
||||
#define ECC_CNFG_6BIT (1)
|
||||
#define ECC_CNFG_8BIT (2)
|
||||
#define ECC_CNFG_10BIT (3)
|
||||
#define ECC_CNFG_12BIT (4)
|
||||
#define ECC_CNFG_14BIT (5)
|
||||
#define ECC_CNFG_16BIT (6)
|
||||
#define ECC_CNFG_18BIT (7)
|
||||
#define ECC_CNFG_20BIT (8)
|
||||
#define ECC_CNFG_22BIT (9)
|
||||
#define ECC_CNFG_24BIT (0xa)
|
||||
#define ECC_CNFG_28BIT (0xb)
|
||||
#define ECC_CNFG_32BIT (0xc)
|
||||
#define ECC_CNFG_36BIT (0xd)
|
||||
#define ECC_CNFG_40BIT (0xe)
|
||||
#define ECC_CNFG_44BIT (0xf)
|
||||
#define ECC_CNFG_48BIT (0x10)
|
||||
#define ECC_CNFG_52BIT (0x11)
|
||||
#define ECC_CNFG_56BIT (0x12)
|
||||
#define ECC_CNFG_60BIT (0x13)
|
||||
#define ECC_MODE_SHIFT (5)
|
||||
#define ECC_MS_SHIFT (16)
|
||||
#define ECC_ENCDIADDR (0x08)
|
||||
#define ECC_ENCIDLE (0x0C)
|
||||
#define ECC_ENCPAR(x) (0x10 + (x) * sizeof(u32))
|
||||
#define ECC_ENCIRQ_EN (0x80)
|
||||
#define ECC_ENCIRQ_STA (0x84)
|
||||
#define ECC_DECCON (0x100)
|
||||
|
@ -66,7 +46,6 @@
|
|||
#define DEC_CNFG_CORRECT (0x3 << 12)
|
||||
#define ECC_DECIDLE (0x10C)
|
||||
#define ECC_DECENUM0 (0x114)
|
||||
#define ERR_MASK (0x3f)
|
||||
#define ECC_DECDONE (0x124)
|
||||
#define ECC_DECIRQ_EN (0x200)
|
||||
#define ECC_DECIRQ_STA (0x204)
|
||||
|
@ -78,8 +57,17 @@
|
|||
#define ECC_IRQ_REG(op) ((op) == ECC_ENCODE ? \
|
||||
ECC_ENCIRQ_EN : ECC_DECIRQ_EN)
|
||||
|
||||
struct mtk_ecc_caps {
|
||||
u32 err_mask;
|
||||
const u8 *ecc_strength;
|
||||
u8 num_ecc_strength;
|
||||
u32 encode_parity_reg0;
|
||||
int pg_irq_sel;
|
||||
};
|
||||
|
||||
struct mtk_ecc {
|
||||
struct device *dev;
|
||||
const struct mtk_ecc_caps *caps;
|
||||
void __iomem *regs;
|
||||
struct clk *clk;
|
||||
|
||||
|
@ -87,7 +75,18 @@ struct mtk_ecc {
|
|||
struct mutex lock;
|
||||
u32 sectors;
|
||||
|
||||
u8 eccdata[112];
|
||||
u8 *eccdata;
|
||||
};
|
||||
|
||||
/* ecc strength that each IP supports */
|
||||
static const u8 ecc_strength_mt2701[] = {
|
||||
4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 28, 32, 36,
|
||||
40, 44, 48, 52, 56, 60
|
||||
};
|
||||
|
||||
static const u8 ecc_strength_mt2712[] = {
|
||||
4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 28, 32, 36,
|
||||
40, 44, 48, 52, 56, 60, 68, 72, 80
|
||||
};
|
||||
|
||||
static inline void mtk_ecc_wait_idle(struct mtk_ecc *ecc,
|
||||
|
@ -136,77 +135,24 @@ static irqreturn_t mtk_ecc_irq(int irq, void *id)
|
|||
return IRQ_HANDLED;
|
||||
}
|
||||
|
||||
static void mtk_ecc_config(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
|
||||
static int mtk_ecc_config(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
|
||||
{
|
||||
u32 ecc_bit = ECC_CNFG_4BIT, dec_sz, enc_sz;
|
||||
u32 reg;
|
||||
u32 ecc_bit, dec_sz, enc_sz;
|
||||
u32 reg, i;
|
||||
|
||||
switch (config->strength) {
|
||||
case 4:
|
||||
ecc_bit = ECC_CNFG_4BIT;
|
||||
break;
|
||||
case 6:
|
||||
ecc_bit = ECC_CNFG_6BIT;
|
||||
break;
|
||||
case 8:
|
||||
ecc_bit = ECC_CNFG_8BIT;
|
||||
break;
|
||||
case 10:
|
||||
ecc_bit = ECC_CNFG_10BIT;
|
||||
break;
|
||||
case 12:
|
||||
ecc_bit = ECC_CNFG_12BIT;
|
||||
break;
|
||||
case 14:
|
||||
ecc_bit = ECC_CNFG_14BIT;
|
||||
break;
|
||||
case 16:
|
||||
ecc_bit = ECC_CNFG_16BIT;
|
||||
break;
|
||||
case 18:
|
||||
ecc_bit = ECC_CNFG_18BIT;
|
||||
break;
|
||||
case 20:
|
||||
ecc_bit = ECC_CNFG_20BIT;
|
||||
break;
|
||||
case 22:
|
||||
ecc_bit = ECC_CNFG_22BIT;
|
||||
break;
|
||||
case 24:
|
||||
ecc_bit = ECC_CNFG_24BIT;
|
||||
break;
|
||||
case 28:
|
||||
ecc_bit = ECC_CNFG_28BIT;
|
||||
break;
|
||||
case 32:
|
||||
ecc_bit = ECC_CNFG_32BIT;
|
||||
break;
|
||||
case 36:
|
||||
ecc_bit = ECC_CNFG_36BIT;
|
||||
break;
|
||||
case 40:
|
||||
ecc_bit = ECC_CNFG_40BIT;
|
||||
break;
|
||||
case 44:
|
||||
ecc_bit = ECC_CNFG_44BIT;
|
||||
break;
|
||||
case 48:
|
||||
ecc_bit = ECC_CNFG_48BIT;
|
||||
break;
|
||||
case 52:
|
||||
ecc_bit = ECC_CNFG_52BIT;
|
||||
break;
|
||||
case 56:
|
||||
ecc_bit = ECC_CNFG_56BIT;
|
||||
break;
|
||||
case 60:
|
||||
ecc_bit = ECC_CNFG_60BIT;
|
||||
break;
|
||||
default:
|
||||
dev_err(ecc->dev, "invalid strength %d, default to 4 bits\n",
|
||||
config->strength);
|
||||
for (i = 0; i < ecc->caps->num_ecc_strength; i++) {
|
||||
if (ecc->caps->ecc_strength[i] == config->strength)
|
||||
break;
|
||||
}
|
||||
|
||||
if (i == ecc->caps->num_ecc_strength) {
|
||||
dev_err(ecc->dev, "invalid ecc strength %d\n",
|
||||
config->strength);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
ecc_bit = i;
|
||||
|
||||
if (config->op == ECC_ENCODE) {
|
||||
/* configure ECC encoder (in bits) */
|
||||
enc_sz = config->len << 3;
|
||||
|
@ -232,6 +178,8 @@ static void mtk_ecc_config(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
|
|||
if (config->sectors)
|
||||
ecc->sectors = 1 << (config->sectors - 1);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
void mtk_ecc_get_stats(struct mtk_ecc *ecc, struct mtk_ecc_stats *stats,
|
||||
|
@ -247,8 +195,8 @@ void mtk_ecc_get_stats(struct mtk_ecc *ecc, struct mtk_ecc_stats *stats,
|
|||
offset = (i >> 2) << 2;
|
||||
err = readl(ecc->regs + ECC_DECENUM0 + offset);
|
||||
err = err >> ((i % 4) * 8);
|
||||
err &= ERR_MASK;
|
||||
if (err == ERR_MASK) {
|
||||
err &= ecc->caps->err_mask;
|
||||
if (err == ecc->caps->err_mask) {
|
||||
/* uncorrectable errors */
|
||||
stats->failed++;
|
||||
continue;
|
||||
|
@ -313,6 +261,7 @@ EXPORT_SYMBOL(of_mtk_ecc_get);
|
|||
int mtk_ecc_enable(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
|
||||
{
|
||||
enum mtk_ecc_operation op = config->op;
|
||||
u16 reg_val;
|
||||
int ret;
|
||||
|
||||
ret = mutex_lock_interruptible(&ecc->lock);
|
||||
|
@ -322,11 +271,27 @@ int mtk_ecc_enable(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
|
|||
}
|
||||
|
||||
mtk_ecc_wait_idle(ecc, op);
|
||||
mtk_ecc_config(ecc, config);
|
||||
writew(ECC_OP_ENABLE, ecc->regs + ECC_CTL_REG(op));
|
||||
|
||||
init_completion(&ecc->done);
|
||||
writew(ECC_IRQ_EN, ecc->regs + ECC_IRQ_REG(op));
|
||||
ret = mtk_ecc_config(ecc, config);
|
||||
if (ret) {
|
||||
mutex_unlock(&ecc->lock);
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (config->mode != ECC_NFI_MODE || op != ECC_ENCODE) {
|
||||
init_completion(&ecc->done);
|
||||
reg_val = ECC_IRQ_EN;
|
||||
/*
|
||||
* For ECC_NFI_MODE, if ecc->caps->pg_irq_sel is 1, then it
|
||||
* means this chip can only generate one ecc irq during page
|
||||
* read / write. If it is 0, one ecc irq is generated per ecc step.
|
||||
*/
|
||||
if (ecc->caps->pg_irq_sel && config->mode == ECC_NFI_MODE)
|
||||
reg_val |= ECC_PG_IRQ_SEL;
|
||||
writew(reg_val, ecc->regs + ECC_IRQ_REG(op));
|
||||
}
|
||||
|
||||
writew(ECC_OP_ENABLE, ecc->regs + ECC_CTL_REG(op));
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -396,7 +361,9 @@ int mtk_ecc_encode(struct mtk_ecc *ecc, struct mtk_ecc_config *config,
|
|||
len = (config->strength * ECC_PARITY_BITS + 7) >> 3;
|
||||
|
||||
/* write the parity bytes generated by the ECC back to temp buffer */
|
||||
__ioread32_copy(ecc->eccdata, ecc->regs + ECC_ENCPAR(0), round_up(len, 4));
|
||||
__ioread32_copy(ecc->eccdata,
|
||||
ecc->regs + ecc->caps->encode_parity_reg0,
|
||||
round_up(len, 4));
|
||||
|
||||
/* copy into possibly unaligned OOB region with actual length */
|
||||
memcpy(data + bytes, ecc->eccdata, len);
|
||||
|
@ -409,37 +376,79 @@ timeout:
|
|||
}
|
||||
EXPORT_SYMBOL(mtk_ecc_encode);
|
||||
|
||||
void mtk_ecc_adjust_strength(u32 *p)
|
||||
void mtk_ecc_adjust_strength(struct mtk_ecc *ecc, u32 *p)
|
||||
{
|
||||
u32 ecc[] = {4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 28, 32, 36,
|
||||
40, 44, 48, 52, 56, 60};
|
||||
const u8 *ecc_strength = ecc->caps->ecc_strength;
|
||||
int i;
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(ecc); i++) {
|
||||
if (*p <= ecc[i]) {
|
||||
for (i = 0; i < ecc->caps->num_ecc_strength; i++) {
|
||||
if (*p <= ecc_strength[i]) {
|
||||
if (!i)
|
||||
*p = ecc[i];
|
||||
else if (*p != ecc[i])
|
||||
*p = ecc[i - 1];
|
||||
*p = ecc_strength[i];
|
||||
else if (*p != ecc_strength[i])
|
||||
*p = ecc_strength[i - 1];
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
*p = ecc[ARRAY_SIZE(ecc) - 1];
|
||||
*p = ecc_strength[ecc->caps->num_ecc_strength - 1];
|
||||
}
|
||||
EXPORT_SYMBOL(mtk_ecc_adjust_strength);
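mtk_ecc_adjust_strength() now rounds a requested strength to the nearest value in the per-SoC table: up to the smallest entry if the request is below it, otherwise down to the largest entry not exceeding it (e.g. 30 becomes 28 and 3 becomes 4 with the mt2701 table). A standalone sketch of the same rule, with illustrative names:

#include <linux/types.h>

/* Illustrative only: clamp a requested ECC strength to a supported value */
static u32 example_adjust_strength(u32 wanted, const u8 *table, int len)
{
	int i;

	for (i = 0; i < len; i++) {
		if (wanted <= table[i])
			return (i && wanted != table[i]) ? table[i - 1]
							 : table[i];
	}

	return table[len - 1];
}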
|
||||
|
||||
static const struct mtk_ecc_caps mtk_ecc_caps_mt2701 = {
|
||||
.err_mask = 0x3f,
|
||||
.ecc_strength = ecc_strength_mt2701,
|
||||
.num_ecc_strength = 20,
|
||||
.encode_parity_reg0 = 0x10,
|
||||
.pg_irq_sel = 0,
|
||||
};
|
||||
|
||||
static const struct mtk_ecc_caps mtk_ecc_caps_mt2712 = {
|
||||
.err_mask = 0x7f,
|
||||
.ecc_strength = ecc_strength_mt2712,
|
||||
.num_ecc_strength = 23,
|
||||
.encode_parity_reg0 = 0x300,
|
||||
.pg_irq_sel = 1,
|
||||
};
|
||||
|
||||
static const struct of_device_id mtk_ecc_dt_match[] = {
|
||||
{
|
||||
.compatible = "mediatek,mt2701-ecc",
|
||||
.data = &mtk_ecc_caps_mt2701,
|
||||
}, {
|
||||
.compatible = "mediatek,mt2712-ecc",
|
||||
.data = &mtk_ecc_caps_mt2712,
|
||||
},
|
||||
{},
|
||||
};
|
||||
|
||||
static int mtk_ecc_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct device *dev = &pdev->dev;
|
||||
struct mtk_ecc *ecc;
|
||||
struct resource *res;
|
||||
const struct of_device_id *of_ecc_id = NULL;
|
||||
u32 max_eccdata_size;
|
||||
int irq, ret;
|
||||
|
||||
ecc = devm_kzalloc(dev, sizeof(*ecc), GFP_KERNEL);
|
||||
if (!ecc)
|
||||
return -ENOMEM;
|
||||
|
||||
of_ecc_id = of_match_device(mtk_ecc_dt_match, &pdev->dev);
|
||||
if (!of_ecc_id)
|
||||
return -ENODEV;
|
||||
|
||||
ecc->caps = of_ecc_id->data;
|
||||
|
||||
max_eccdata_size = ecc->caps->num_ecc_strength - 1;
|
||||
max_eccdata_size = ecc->caps->ecc_strength[max_eccdata_size];
|
||||
max_eccdata_size = (max_eccdata_size * ECC_PARITY_BITS + 7) >> 3;
|
||||
max_eccdata_size = round_up(max_eccdata_size, 4);
|
||||
ecc->eccdata = devm_kzalloc(dev, max_eccdata_size, GFP_KERNEL);
|
||||
if (!ecc->eccdata)
|
||||
return -ENOMEM;
|
||||
|
||||
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
|
||||
ecc->regs = devm_ioremap_resource(dev, res);
|
||||
if (IS_ERR(ecc->regs)) {
|
||||
|
@ -500,19 +509,12 @@ static int mtk_ecc_resume(struct device *dev)
|
|||
return ret;
|
||||
}
|
||||
|
||||
mtk_ecc_hw_init(ecc);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static SIMPLE_DEV_PM_OPS(mtk_ecc_pm_ops, mtk_ecc_suspend, mtk_ecc_resume);
|
||||
#endif
|
||||
|
||||
static const struct of_device_id mtk_ecc_dt_match[] = {
|
||||
{ .compatible = "mediatek,mt2701-ecc" },
|
||||
{},
|
||||
};
|
||||
|
||||
MODULE_DEVICE_TABLE(of, mtk_ecc_dt_match);
|
||||
|
||||
static struct platform_driver mtk_ecc_driver = {
|
||||
|
|
|
@ -42,7 +42,7 @@ void mtk_ecc_get_stats(struct mtk_ecc *, struct mtk_ecc_stats *, int);
|
|||
int mtk_ecc_wait_done(struct mtk_ecc *, enum mtk_ecc_operation);
|
||||
int mtk_ecc_enable(struct mtk_ecc *, struct mtk_ecc_config *);
|
||||
void mtk_ecc_disable(struct mtk_ecc *);
|
||||
void mtk_ecc_adjust_strength(u32 *);
|
||||
void mtk_ecc_adjust_strength(struct mtk_ecc *ecc, u32 *p);
|
||||
|
||||
struct mtk_ecc *of_mtk_ecc_get(struct device_node *);
|
||||
void mtk_ecc_release(struct mtk_ecc *);
|
||||
|
|
|
@ -24,6 +24,7 @@
|
|||
#include <linux/module.h>
|
||||
#include <linux/iopoll.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_device.h>
|
||||
#include "mtk_ecc.h"
|
||||
|
||||
/* NAND controller register definition */
|
||||
|
@ -38,23 +39,6 @@
|
|||
#define NFI_PAGEFMT (0x04)
|
||||
#define PAGEFMT_FDM_ECC_SHIFT (12)
|
||||
#define PAGEFMT_FDM_SHIFT (8)
|
||||
#define PAGEFMT_SPARE_16 (0)
|
||||
#define PAGEFMT_SPARE_26 (1)
|
||||
#define PAGEFMT_SPARE_27 (2)
|
||||
#define PAGEFMT_SPARE_28 (3)
|
||||
#define PAGEFMT_SPARE_32 (4)
|
||||
#define PAGEFMT_SPARE_36 (5)
|
||||
#define PAGEFMT_SPARE_40 (6)
|
||||
#define PAGEFMT_SPARE_44 (7)
|
||||
#define PAGEFMT_SPARE_48 (8)
|
||||
#define PAGEFMT_SPARE_49 (9)
|
||||
#define PAGEFMT_SPARE_50 (0xa)
|
||||
#define PAGEFMT_SPARE_51 (0xb)
|
||||
#define PAGEFMT_SPARE_52 (0xc)
|
||||
#define PAGEFMT_SPARE_62 (0xd)
|
||||
#define PAGEFMT_SPARE_63 (0xe)
|
||||
#define PAGEFMT_SPARE_64 (0xf)
|
||||
#define PAGEFMT_SPARE_SHIFT (4)
|
||||
#define PAGEFMT_SEC_SEL_512 BIT(2)
|
||||
#define PAGEFMT_512_2K (0)
|
||||
#define PAGEFMT_2K_4K (1)
|
||||
|
@ -115,6 +99,17 @@
|
|||
#define MTK_RESET_TIMEOUT (1000000)
|
||||
#define MTK_MAX_SECTOR (16)
|
||||
#define MTK_NAND_MAX_NSELS (2)
|
||||
#define MTK_NFC_MIN_SPARE (16)
|
||||
#define ACCTIMING(tpoecs, tprecs, tc2r, tw2r, twh, twst, trlt) \
|
||||
((tpoecs) << 28 | (tprecs) << 22 | (tc2r) << 16 | \
|
||||
(tw2r) << 12 | (twh) << 8 | (twst) << 4 | (trlt))
|
||||
|
||||
struct mtk_nfc_caps {
|
||||
const u8 *spare_size;
|
||||
u8 num_spare_size;
|
||||
u8 pageformat_spare_shift;
|
||||
u8 nfi_clk_div;
|
||||
};
|
||||
|
||||
struct mtk_nfc_bad_mark_ctl {
|
||||
void (*bm_swap)(struct mtd_info *, u8 *buf, int raw);
|
||||
|
@ -155,6 +150,7 @@ struct mtk_nfc {
|
|||
struct mtk_ecc *ecc;
|
||||
|
||||
struct device *dev;
|
||||
const struct mtk_nfc_caps *caps;
|
||||
void __iomem *regs;
|
||||
|
||||
struct completion done;
|
||||
|
@ -163,6 +159,20 @@ struct mtk_nfc {
|
|||
u8 *buffer;
|
||||
};
|
||||
|
||||
/*
 * Supported spare sizes for each IP.
 * The order must match the spare size bitfield definition of the
 * NFI_PAGEFMT register.
 */
|
||||
static const u8 spare_size_mt2701[] = {
|
||||
16, 26, 27, 28, 32, 36, 40, 44, 48, 49, 50, 51, 52, 62, 63, 64
|
||||
};
|
||||
|
||||
static const u8 spare_size_mt2712[] = {
|
||||
16, 26, 27, 28, 32, 36, 40, 44, 48, 49, 50, 51, 52, 62, 61, 63, 64, 67,
|
||||
74
|
||||
};
|
||||
|
||||
static inline struct mtk_nfc_nand_chip *to_mtk_nand(struct nand_chip *nand)
|
||||
{
|
||||
return container_of(nand, struct mtk_nfc_nand_chip, nand);
|
||||
|
@ -308,7 +318,7 @@ static int mtk_nfc_hw_runtime_config(struct mtd_info *mtd)
|
|||
struct nand_chip *chip = mtd_to_nand(mtd);
|
||||
struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
|
||||
struct mtk_nfc *nfc = nand_get_controller_data(chip);
|
||||
u32 fmt, spare;
|
||||
u32 fmt, spare, i;
|
||||
|
||||
if (!mtd->writesize)
|
||||
return 0;
|
||||
|
@ -352,63 +362,21 @@ static int mtk_nfc_hw_runtime_config(struct mtd_info *mtd)
|
|||
if (chip->ecc.size == 1024)
|
||||
spare >>= 1;
|
||||
|
||||
switch (spare) {
|
||||
case 16:
|
||||
fmt |= (PAGEFMT_SPARE_16 << PAGEFMT_SPARE_SHIFT);
|
||||
break;
|
||||
case 26:
|
||||
fmt |= (PAGEFMT_SPARE_26 << PAGEFMT_SPARE_SHIFT);
|
||||
break;
|
||||
case 27:
|
||||
fmt |= (PAGEFMT_SPARE_27 << PAGEFMT_SPARE_SHIFT);
|
||||
break;
|
||||
case 28:
|
||||
fmt |= (PAGEFMT_SPARE_28 << PAGEFMT_SPARE_SHIFT);
|
||||
break;
|
||||
case 32:
|
||||
fmt |= (PAGEFMT_SPARE_32 << PAGEFMT_SPARE_SHIFT);
|
||||
break;
|
||||
case 36:
|
||||
fmt |= (PAGEFMT_SPARE_36 << PAGEFMT_SPARE_SHIFT);
|
||||
break;
|
||||
case 40:
|
||||
fmt |= (PAGEFMT_SPARE_40 << PAGEFMT_SPARE_SHIFT);
|
||||
break;
|
||||
case 44:
|
||||
fmt |= (PAGEFMT_SPARE_44 << PAGEFMT_SPARE_SHIFT);
|
||||
break;
|
||||
case 48:
|
||||
fmt |= (PAGEFMT_SPARE_48 << PAGEFMT_SPARE_SHIFT);
|
||||
break;
|
||||
case 49:
|
||||
fmt |= (PAGEFMT_SPARE_49 << PAGEFMT_SPARE_SHIFT);
|
||||
break;
|
||||
case 50:
|
||||
fmt |= (PAGEFMT_SPARE_50 << PAGEFMT_SPARE_SHIFT);
|
||||
break;
|
||||
case 51:
|
||||
fmt |= (PAGEFMT_SPARE_51 << PAGEFMT_SPARE_SHIFT);
|
||||
break;
|
||||
case 52:
|
||||
fmt |= (PAGEFMT_SPARE_52 << PAGEFMT_SPARE_SHIFT);
|
||||
break;
|
||||
case 62:
|
||||
fmt |= (PAGEFMT_SPARE_62 << PAGEFMT_SPARE_SHIFT);
|
||||
break;
|
||||
case 63:
|
||||
fmt |= (PAGEFMT_SPARE_63 << PAGEFMT_SPARE_SHIFT);
|
||||
break;
|
||||
case 64:
|
||||
fmt |= (PAGEFMT_SPARE_64 << PAGEFMT_SPARE_SHIFT);
|
||||
break;
|
||||
default:
|
||||
dev_err(nfc->dev, "invalid spare per sector %d\n", spare);
|
||||
for (i = 0; i < nfc->caps->num_spare_size; i++) {
|
||||
if (nfc->caps->spare_size[i] == spare)
|
||||
break;
|
||||
}
|
||||
|
||||
if (i == nfc->caps->num_spare_size) {
|
||||
dev_err(nfc->dev, "invalid spare size %d\n", spare);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
fmt |= i << nfc->caps->pageformat_spare_shift;
|
||||
|
||||
fmt |= mtk_nand->fdm.reg_size << PAGEFMT_FDM_SHIFT;
|
||||
fmt |= mtk_nand->fdm.ecc_size << PAGEFMT_FDM_ECC_SHIFT;
|
||||
nfi_writew(nfc, fmt, NFI_PAGEFMT);
|
||||
nfi_writel(nfc, fmt, NFI_PAGEFMT);
|
||||
|
||||
nfc->ecc_cfg.strength = chip->ecc.strength;
|
||||
nfc->ecc_cfg.len = chip->ecc.size + mtk_nand->fdm.ecc_size;
|
||||
|
@ -531,6 +499,74 @@ static void mtk_nfc_write_buf(struct mtd_info *mtd, const u8 *buf, int len)
|
|||
mtk_nfc_write_byte(mtd, buf[i]);
|
||||
}
|
||||
|
||||
static int mtk_nfc_setup_data_interface(struct mtd_info *mtd, int csline,
|
||||
const struct nand_data_interface *conf)
|
||||
{
|
||||
struct mtk_nfc *nfc = nand_get_controller_data(mtd_to_nand(mtd));
|
||||
const struct nand_sdr_timings *timings;
|
||||
u32 rate, tpoecs, tprecs, tc2r, tw2r, twh, twst, trlt;
|
||||
|
||||
timings = nand_get_sdr_timings(conf);
|
||||
if (IS_ERR(timings))
|
||||
return -ENOTSUPP;
|
||||
|
||||
if (csline == NAND_DATA_IFACE_CHECK_ONLY)
|
||||
return 0;
|
||||
|
||||
rate = clk_get_rate(nfc->clk.nfi_clk);
|
||||
/* There is a frequency divider in some IPs */
|
||||
rate /= nfc->caps->nfi_clk_div;
|
||||
|
||||
/* turn clock rate into kHz */
|
||||
rate /= 1000;
|
||||
|
||||
tpoecs = max(timings->tALH_min, timings->tCLH_min) / 1000;
|
||||
tpoecs = DIV_ROUND_UP(tpoecs * rate, 1000000);
|
||||
tpoecs &= 0xf;
|
||||
|
||||
tprecs = max(timings->tCLS_min, timings->tALS_min) / 1000;
|
||||
tprecs = DIV_ROUND_UP(tprecs * rate, 1000000);
|
||||
tprecs &= 0x3f;
|
||||
|
||||
/* sdr interface has no tCR which means CE# low to RE# low */
|
||||
tc2r = 0;
|
||||
|
||||
tw2r = timings->tWHR_min / 1000;
|
||||
tw2r = DIV_ROUND_UP(tw2r * rate, 1000000);
|
||||
tw2r = DIV_ROUND_UP(tw2r - 1, 2);
|
||||
tw2r &= 0xf;
|
||||
|
||||
twh = max(timings->tREH_min, timings->tWH_min) / 1000;
|
||||
twh = DIV_ROUND_UP(twh * rate, 1000000) - 1;
|
||||
twh &= 0xf;
|
||||
|
||||
twst = timings->tWP_min / 1000;
|
||||
twst = DIV_ROUND_UP(twst * rate, 1000000) - 1;
|
||||
twst &= 0xf;
|
||||
|
||||
trlt = max(timings->tREA_max, timings->tRP_min) / 1000;
|
||||
trlt = DIV_ROUND_UP(trlt * rate, 1000000) - 1;
|
||||
trlt &= 0xf;
|
||||
|
||||
/*
|
||||
* ACCON: access timing control register
|
||||
* -------------------------------------
|
||||
* 31:28: tpoecs, minimum required time for CS post pulling down after
|
||||
* accessing the device
|
||||
* 27:22: tprecs, minimum required time for CS pre pulling down before
|
||||
* accessing the device
|
||||
* 21:16: tc2r, minimum required time from NCEB low to NREB low
|
||||
* 15:12: tw2r, minimum required time from NWEB high to NREB low.
|
||||
* 11:08: twh, write enable hold time
|
||||
* 07:04: twst, write wait states
|
||||
* 03:00: trlt, read wait states
|
||||
*/
|
||||
trlt = ACCTIMING(tpoecs, tprecs, tc2r, tw2r, twh, twst, trlt);
|
||||
nfi_writel(nfc, trlt, NFI_ACCCON);
|
||||
|
||||
return 0;
|
||||
}
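As a cross-check of the ACCTIMING() packing (not part of the patch): the fixed value 0x10804211 that the old mtk_nfc_hw_init() wrote decomposes as tpoecs=1, tprecs=2, tc2r=0, tw2r=4, twh=2, twst=1, trlt=1, so the macro reproduces it exactly. A compile-time sketch; the helper name is hypothetical:

#include <linux/bug.h>

/* Illustrative compile-time check only */
static inline void mtk_acccon_packing_example(void)
{
	BUILD_BUG_ON(ACCTIMING(1, 2, 0, 4, 2, 1, 1) != 0x10804211);
}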
|
||||
|
||||
static int mtk_nfc_sector_encode(struct nand_chip *chip, u8 *data)
|
||||
{
|
||||
struct mtk_nfc *nfc = nand_get_controller_data(chip);
|
||||
|
@ -987,21 +1023,6 @@ static int mtk_nfc_read_oob_std(struct mtd_info *mtd, struct nand_chip *chip,
|
|||
|
||||
static inline void mtk_nfc_hw_init(struct mtk_nfc *nfc)
|
||||
{
|
||||
/*
|
||||
* ACCON: access timing control register
|
||||
* -------------------------------------
|
||||
* 31:28: minimum required time for CS post pulling down after accessing
|
||||
* the device
|
||||
* 27:22: minimum required time for CS pre pulling down before accessing
|
||||
* the device
|
||||
* 21:16: minimum required time from NCEB low to NREB low
|
||||
* 15:12: minimum required time from NWEB high to NREB low.
|
||||
* 11:08: write enable hold time
|
||||
* 07:04: write wait states
|
||||
* 03:00: read wait states
|
||||
*/
|
||||
nfi_writel(nfc, 0x10804211, NFI_ACCCON);
|
||||
|
||||
/*
|
||||
* CNRNB: nand ready/busy register
|
||||
* -------------------------------
|
||||
|
@ -1009,7 +1030,7 @@ static inline void mtk_nfc_hw_init(struct mtk_nfc *nfc)
|
|||
* 0 : poll the status of the busy/ready signal after [7:4]*16 cycles.
|
||||
*/
|
||||
nfi_writew(nfc, 0xf1, NFI_CNRNB);
|
||||
nfi_writew(nfc, PAGEFMT_8K_16K, NFI_PAGEFMT);
|
||||
nfi_writel(nfc, PAGEFMT_8K_16K, NFI_PAGEFMT);
|
||||
|
||||
mtk_nfc_hw_reset(nfc);
|
||||
|
||||
|
@ -1131,12 +1152,12 @@ static void mtk_nfc_set_bad_mark_ctl(struct mtk_nfc_bad_mark_ctl *bm_ctl,
|
|||
}
|
||||
}
|
||||
|
||||
static void mtk_nfc_set_spare_per_sector(u32 *sps, struct mtd_info *mtd)
|
||||
static int mtk_nfc_set_spare_per_sector(u32 *sps, struct mtd_info *mtd)
|
||||
{
|
||||
struct nand_chip *nand = mtd_to_nand(mtd);
|
||||
u32 spare[] = {16, 26, 27, 28, 32, 36, 40, 44,
|
||||
48, 49, 50, 51, 52, 62, 63, 64};
|
||||
u32 eccsteps, i;
|
||||
struct mtk_nfc *nfc = nand_get_controller_data(nand);
|
||||
const u8 *spare = nfc->caps->spare_size;
|
||||
u32 eccsteps, i, closest_spare = 0;
|
||||
|
||||
eccsteps = mtd->writesize / nand->ecc.size;
|
||||
*sps = mtd->oobsize / eccsteps;
|
||||
|
@ -1144,28 +1165,31 @@ static void mtk_nfc_set_spare_per_sector(u32 *sps, struct mtd_info *mtd)
|
|||
if (nand->ecc.size == 1024)
|
||||
*sps >>= 1;
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(spare); i++) {
|
||||
if (*sps <= spare[i]) {
|
||||
if (!i)
|
||||
*sps = spare[i];
|
||||
else if (*sps != spare[i])
|
||||
*sps = spare[i - 1];
|
||||
break;
|
||||
if (*sps < MTK_NFC_MIN_SPARE)
|
||||
return -EINVAL;
|
||||
|
||||
for (i = 0; i < nfc->caps->num_spare_size; i++) {
|
||||
if (*sps >= spare[i] && spare[i] >= spare[closest_spare]) {
|
||||
closest_spare = i;
|
||||
if (*sps == spare[i])
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (i >= ARRAY_SIZE(spare))
|
||||
*sps = spare[ARRAY_SIZE(spare) - 1];
|
||||
*sps = spare[closest_spare];
|
||||
|
||||
if (nand->ecc.size == 1024)
|
||||
*sps <<= 1;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int mtk_nfc_ecc_init(struct device *dev, struct mtd_info *mtd)
|
||||
{
|
||||
struct nand_chip *nand = mtd_to_nand(mtd);
|
||||
struct mtk_nfc *nfc = nand_get_controller_data(nand);
|
||||
u32 spare;
|
||||
int free;
|
||||
int free, ret;
|
||||
|
||||
/* support only ecc hw mode */
|
||||
if (nand->ecc.mode != NAND_ECC_HW) {
|
||||
|
@ -1194,7 +1218,9 @@ static int mtk_nfc_ecc_init(struct device *dev, struct mtd_info *mtd)
|
|||
nand->ecc.size = 1024;
|
||||
}
|
||||
|
||||
mtk_nfc_set_spare_per_sector(&spare, mtd);
|
||||
ret = mtk_nfc_set_spare_per_sector(&spare, mtd);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/* calculate oob bytes except ecc parity data */
|
||||
free = ((nand->ecc.strength * ECC_PARITY_BITS) + 7) >> 3;
|
||||
|
@ -1214,7 +1240,7 @@ static int mtk_nfc_ecc_init(struct device *dev, struct mtd_info *mtd)
|
|||
}
|
||||
}
|
||||
|
||||
mtk_ecc_adjust_strength(&nand->ecc.strength);
|
||||
mtk_ecc_adjust_strength(nfc->ecc, &nand->ecc.strength);
|
||||
|
||||
dev_info(dev, "eccsize %d eccstrength %d\n",
|
||||
nand->ecc.size, nand->ecc.strength);
|
||||
|
@@ -1271,6 +1297,7 @@ static int mtk_nfc_nand_chip_init(struct device *dev, struct mtk_nfc *nfc,
nand->read_byte = mtk_nfc_read_byte;
nand->read_buf = mtk_nfc_read_buf;
nand->cmd_ctrl = mtk_nfc_cmd_ctrl;
nand->setup_data_interface = mtk_nfc_setup_data_interface;

/* set default mode in case dt entry is missing */
nand->ecc.mode = NAND_ECC_HW;

@@ -1312,7 +1339,10 @@ static int mtk_nfc_nand_chip_init(struct device *dev, struct mtk_nfc *nfc,
return -EINVAL;
}

mtk_nfc_set_spare_per_sector(&chip->spare_per_sector, mtd);
ret = mtk_nfc_set_spare_per_sector(&chip->spare_per_sector, mtd);
if (ret)
return ret;

mtk_nfc_set_fdm(&chip->fdm, mtd);
mtk_nfc_set_bad_mark_ctl(&chip->bad_mark, mtd);

@@ -1354,12 +1384,39 @@ static int mtk_nfc_nand_chips_init(struct device *dev, struct mtk_nfc *nfc)
return 0;
}

static const struct mtk_nfc_caps mtk_nfc_caps_mt2701 = {
.spare_size = spare_size_mt2701,
.num_spare_size = 16,
.pageformat_spare_shift = 4,
.nfi_clk_div = 1,
};

static const struct mtk_nfc_caps mtk_nfc_caps_mt2712 = {
.spare_size = spare_size_mt2712,
.num_spare_size = 19,
.pageformat_spare_shift = 16,
.nfi_clk_div = 2,
};

static const struct of_device_id mtk_nfc_id_table[] = {
{
.compatible = "mediatek,mt2701-nfc",
.data = &mtk_nfc_caps_mt2701,
}, {
.compatible = "mediatek,mt2712-nfc",
.data = &mtk_nfc_caps_mt2712,
},
{}
};
MODULE_DEVICE_TABLE(of, mtk_nfc_id_table);

static int mtk_nfc_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
struct mtk_nfc *nfc;
struct resource *res;
const struct of_device_id *of_nfc_id = NULL;
int ret, irq;

nfc = devm_kzalloc(dev, sizeof(*nfc), GFP_KERNEL);

@@ -1423,6 +1480,14 @@ static int mtk_nfc_probe(struct platform_device *pdev)
goto clk_disable;
}

of_nfc_id = of_match_device(mtk_nfc_id_table, &pdev->dev);
if (!of_nfc_id) {
ret = -ENODEV;
goto clk_disable;
}

nfc->caps = of_nfc_id->data;

platform_set_drvdata(pdev, nfc);

ret = mtk_nfc_nand_chips_init(dev, nfc);

@@ -1485,8 +1550,6 @@ static int mtk_nfc_resume(struct device *dev)
if (ret)
return ret;

mtk_nfc_hw_init(nfc);

/* reset NAND chip if VCC was powered off */
list_for_each_entry(chip, &nfc->chips, node) {
nand = &chip->nand;

@@ -1503,12 +1566,6 @@ static int mtk_nfc_resume(struct device *dev)
static SIMPLE_DEV_PM_OPS(mtk_nfc_pm_ops, mtk_nfc_suspend, mtk_nfc_resume);
#endif

static const struct of_device_id mtk_nfc_id_table[] = {
{ .compatible = "mediatek,mt2701-nfc" },
{}
};
MODULE_DEVICE_TABLE(of, mtk_nfc_id_table);

static struct platform_driver mtk_nfc_driver = {
.probe = mtk_nfc_probe,
.remove = mtk_nfc_remove,

@ -152,9 +152,8 @@ struct mxc_nand_devtype_data {
|
|||
void (*select_chip)(struct mtd_info *mtd, int chip);
|
||||
int (*correct_data)(struct mtd_info *mtd, u_char *dat,
|
||||
u_char *read_ecc, u_char *calc_ecc);
|
||||
int (*setup_data_interface)(struct mtd_info *mtd,
|
||||
const struct nand_data_interface *conf,
|
||||
bool check_only);
|
||||
int (*setup_data_interface)(struct mtd_info *mtd, int csline,
|
||||
const struct nand_data_interface *conf);
|
||||
|
||||
/*
|
||||
* On i.MX21 the CONFIG2:INT bit cannot be read if interrupts are masked
|
||||
|
@ -1015,9 +1014,8 @@ static void preset_v1(struct mtd_info *mtd)
|
|||
writew(0x4, NFC_V1_V2_WRPROT);
|
||||
}
|
||||
|
||||
static int mxc_nand_v2_setup_data_interface(struct mtd_info *mtd,
|
||||
const struct nand_data_interface *conf,
|
||||
bool check_only)
|
||||
static int mxc_nand_v2_setup_data_interface(struct mtd_info *mtd, int csline,
|
||||
const struct nand_data_interface *conf)
|
||||
{
|
||||
struct nand_chip *nand_chip = mtd_to_nand(mtd);
|
||||
struct mxc_nand_host *host = nand_get_controller_data(nand_chip);
|
||||
|
@ -1075,7 +1073,7 @@ static int mxc_nand_v2_setup_data_interface(struct mtd_info *mtd,
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (check_only)
|
||||
if (csline == NAND_DATA_IFACE_CHECK_ONLY)
|
||||
return 0;
|
||||
|
||||
ret = clk_set_rate(host->clk, rate);
|
||||
|
|
|
@@ -755,6 +755,16 @@ static void nand_command(struct mtd_info *mtd, unsigned int command,
return;

/* This applies to read commands */
case NAND_CMD_READ0:
/*
* READ0 is sometimes used to exit GET STATUS mode. When this
* is the case no address cycles are requested, and we can use
* this information to detect that we should not wait for the
* device to be ready.
*/
if (column == -1 && page_addr == -1)
return;

default:
/*
* If we don't have access to the busy pin, we apply the given

@@ -889,6 +899,15 @@ static void nand_command_lp(struct mtd_info *mtd, unsigned int command,
return;

case NAND_CMD_READ0:
/*
* READ0 is sometimes used to exit GET STATUS mode. When this
* is the case no address cycles are requested, and we can use
* this information to detect that READSTART should not be
* issued.
*/
if (column == -1 && page_addr == -1)
return;

chip->cmd_ctrl(mtd, NAND_CMD_READSTART,
NAND_NCE | NAND_CLE | NAND_CTRL_CHANGE);
chip->cmd_ctrl(mtd, NAND_CMD_NONE,

@ -1044,12 +1063,13 @@ static int nand_wait(struct mtd_info *mtd, struct nand_chip *chip)
|
|||
/**
|
||||
* nand_reset_data_interface - Reset data interface and timings
|
||||
* @chip: The NAND chip
|
||||
* @chipnr: Internal die id
|
||||
*
|
||||
* Reset the Data interface and timings to ONFI mode 0.
|
||||
*
|
||||
* Returns 0 for success or negative error code otherwise.
|
||||
*/
|
||||
static int nand_reset_data_interface(struct nand_chip *chip)
|
||||
static int nand_reset_data_interface(struct nand_chip *chip, int chipnr)
|
||||
{
|
||||
struct mtd_info *mtd = nand_to_mtd(chip);
|
||||
const struct nand_data_interface *conf;
|
||||
|
@ -1073,7 +1093,7 @@ static int nand_reset_data_interface(struct nand_chip *chip)
|
|||
*/
|
||||
|
||||
conf = nand_get_default_data_interface();
|
||||
ret = chip->setup_data_interface(mtd, conf, false);
|
||||
ret = chip->setup_data_interface(mtd, chipnr, conf);
|
||||
if (ret)
|
||||
pr_err("Failed to configure data interface to SDR timing mode 0\n");
|
||||
|
||||
|
@ -1083,6 +1103,7 @@ static int nand_reset_data_interface(struct nand_chip *chip)
|
|||
/**
|
||||
* nand_setup_data_interface - Setup the best data interface and timings
|
||||
* @chip: The NAND chip
|
||||
* @chipnr: Internal die id
|
||||
*
|
||||
* Find and configure the best data interface and NAND timings supported by
|
||||
* the chip and the driver.
|
||||
|
@ -1092,7 +1113,7 @@ static int nand_reset_data_interface(struct nand_chip *chip)
|
|||
*
|
||||
* Returns 0 for success or negative error code otherwise.
|
||||
*/
|
||||
static int nand_setup_data_interface(struct nand_chip *chip)
|
||||
static int nand_setup_data_interface(struct nand_chip *chip, int chipnr)
|
||||
{
|
||||
struct mtd_info *mtd = nand_to_mtd(chip);
|
||||
int ret;
|
||||
|
@ -1116,7 +1137,7 @@ static int nand_setup_data_interface(struct nand_chip *chip)
|
|||
goto err;
|
||||
}
|
||||
|
||||
ret = chip->setup_data_interface(mtd, chip->data_interface, false);
|
||||
ret = chip->setup_data_interface(mtd, chipnr, chip->data_interface);
|
||||
err:
|
||||
return ret;
|
||||
}
|
||||
|
@ -1167,8 +1188,10 @@ static int nand_init_data_interface(struct nand_chip *chip)
|
|||
if (ret)
|
||||
continue;
|
||||
|
||||
ret = chip->setup_data_interface(mtd, chip->data_interface,
|
||||
true);
|
||||
/* Pass -1 to only */
|
||||
ret = chip->setup_data_interface(mtd,
|
||||
NAND_DATA_IFACE_CHECK_ONLY,
|
||||
chip->data_interface);
|
||||
if (!ret) {
|
||||
chip->onfi_timing_mode_default = mode;
|
||||
break;
|
||||
|
@ -1195,7 +1218,7 @@ int nand_reset(struct nand_chip *chip, int chipnr)
|
|||
struct mtd_info *mtd = nand_to_mtd(chip);
|
||||
int ret;
|
||||
|
||||
ret = nand_reset_data_interface(chip);
|
||||
ret = nand_reset_data_interface(chip, chipnr);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
|
@ -1208,7 +1231,7 @@ int nand_reset(struct nand_chip *chip, int chipnr)
|
|||
chip->select_chip(mtd, -1);
|
||||
|
||||
chip->select_chip(mtd, chipnr);
|
||||
ret = nand_setup_data_interface(chip);
|
||||
ret = nand_setup_data_interface(chip, chipnr);
|
||||
chip->select_chip(mtd, -1);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
@ -1424,7 +1447,10 @@ static int nand_check_erased_buf(void *buf, int len, int bitflips_threshold)
|
|||
|
||||
for (; len >= sizeof(long);
|
||||
len -= sizeof(long), bitmap += sizeof(long)) {
|
||||
weight = hweight_long(*((unsigned long *)bitmap));
|
||||
unsigned long d = *((unsigned long *)bitmap);
|
||||
if (d == ~0UL)
|
||||
continue;
|
||||
weight = hweight_long(d);
|
||||
bitflips += BITS_PER_LONG - weight;
|
||||
if (unlikely(bitflips > bitflips_threshold))
|
||||
return -EBADMSG;
|
||||
|
@ -1527,14 +1553,15 @@ EXPORT_SYMBOL(nand_check_erased_ecc_chunk);
|
|||
*
|
||||
* Not for syndrome calculating ECC controllers, which use a special oob layout.
|
||||
*/
|
||||
static int nand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
|
||||
uint8_t *buf, int oob_required, int page)
|
||||
int nand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
|
||||
uint8_t *buf, int oob_required, int page)
|
||||
{
|
||||
chip->read_buf(mtd, buf, mtd->writesize);
|
||||
if (oob_required)
|
||||
chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL(nand_read_page_raw);
|
||||
|
||||
/**
|
||||
* nand_read_page_raw_syndrome - [INTERN] read raw page data without ecc
|
||||
|
@ -2472,8 +2499,8 @@ static int nand_read_oob(struct mtd_info *mtd, loff_t from,
|
|||
*
|
||||
* Not for syndrome calculating ECC controllers, which use a special oob layout.
|
||||
*/
|
||||
static int nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
|
||||
const uint8_t *buf, int oob_required, int page)
|
||||
int nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
|
||||
const uint8_t *buf, int oob_required, int page)
|
||||
{
|
||||
chip->write_buf(mtd, buf, mtd->writesize);
|
||||
if (oob_required)
|
||||
|
@ -2481,6 +2508,7 @@ static int nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
|
|||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL(nand_write_page_raw);
|
||||
|
||||
/**
|
||||
* nand_write_page_raw_syndrome - [INTERN] raw page write function
|
||||
|
@@ -2718,7 +2746,7 @@ static int nand_write_page_syndrome(struct mtd_info *mtd,
*/
static int nand_write_page(struct mtd_info *mtd, struct nand_chip *chip,
uint32_t offset, int data_len, const uint8_t *buf,
int oob_required, int page, int cached, int raw)
int oob_required, int page, int raw)
{
int status, subpage;

@@ -2744,30 +2772,12 @@ static int nand_write_page(struct mtd_info *mtd, struct nand_chip *chip,
if (status < 0)
return status;

/*
* Cached progamming disabled for now. Not sure if it's worth the
* trouble. The speed gain is not very impressive. (2.3->2.6Mib/s).
*/
cached = 0;
if (nand_standard_page_accessors(&chip->ecc)) {
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);

if (!cached || !NAND_HAS_CACHEPROG(chip)) {

if (nand_standard_page_accessors(&chip->ecc))
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
status = chip->waitfunc(mtd, chip);
/*
* See if operation failed and additional status checks are
* available.
*/
if ((status & NAND_STATUS_FAIL) && (chip->errstat))
status = chip->errstat(mtd, chip, FL_WRITING, status,
page);

if (status & NAND_STATUS_FAIL)
return -EIO;
} else {
chip->cmdfunc(mtd, NAND_CMD_CACHEDPROG, -1, -1);
status = chip->waitfunc(mtd, chip);
}

return 0;

@ -2875,7 +2885,6 @@ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to,
|
|||
|
||||
while (1) {
|
||||
int bytes = mtd->writesize;
|
||||
int cached = writelen > bytes && page != blockmask;
|
||||
uint8_t *wbuf = buf;
|
||||
int use_bufpoi;
|
||||
int part_pagewr = (column || writelen < mtd->writesize);
|
||||
|
@ -2893,7 +2902,6 @@ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to,
|
|||
if (use_bufpoi) {
|
||||
pr_debug("%s: using write bounce buffer for buf@%p\n",
|
||||
__func__, buf);
|
||||
cached = 0;
|
||||
if (part_pagewr)
|
||||
bytes = min_t(int, bytes - column, writelen);
|
||||
chip->pagebuf = -1;
|
||||
|
@ -2912,7 +2920,7 @@ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to,
|
|||
}
|
||||
|
||||
ret = nand_write_page(mtd, chip, column, bytes, wbuf,
|
||||
oob_required, page, cached,
|
||||
oob_required, page,
|
||||
(ops->mode == MTD_OPS_RAW));
|
||||
if (ret)
|
||||
break;
|
||||
|
@ -3228,14 +3236,6 @@ int nand_erase_nand(struct mtd_info *mtd, struct erase_info *instr,
|
|||
|
||||
status = chip->erase(mtd, page & chip->pagemask);
|
||||
|
||||
/*
|
||||
* See if operation failed and additional status checks are
|
||||
* available
|
||||
*/
|
||||
if ((status & NAND_STATUS_FAIL) && (chip->errstat))
|
||||
status = chip->errstat(mtd, chip, FL_ERASING,
|
||||
status, page);
|
||||
|
||||
/* See if block erase succeeded */
|
||||
if (status & NAND_STATUS_FAIL) {
|
||||
pr_debug("%s: failed erase, page 0x%08x\n",
|
||||
|
@@ -3421,6 +3421,25 @@ static int nand_onfi_get_features(struct mtd_info *mtd, struct nand_chip *chip,
return 0;
}

/**
* nand_onfi_get_set_features_notsupp - set/get features stub returning
* -ENOTSUPP
* @mtd: MTD device structure
* @chip: nand chip info structure
* @addr: feature address.
* @subfeature_param: the subfeature parameters, a four bytes array.
*
* Should be used by NAND controller drivers that do not support the SET/GET
* FEATURES operations.
*/
int nand_onfi_get_set_features_notsupp(struct mtd_info *mtd,
struct nand_chip *chip, int addr,
u8 *subfeature_param)
{
return -ENOTSUPP;
}
EXPORT_SYMBOL(nand_onfi_get_set_features_notsupp);

/**
* nand_suspend - [MTD Interface] Suspend the NAND flash
* @mtd: MTD device structure

@ -4180,6 +4199,7 @@ static const char * const nand_ecc_modes[] = {
|
|||
[NAND_ECC_HW] = "hw",
|
||||
[NAND_ECC_HW_SYNDROME] = "hw_syndrome",
|
||||
[NAND_ECC_HW_OOB_FIRST] = "hw_oob_first",
|
||||
[NAND_ECC_ON_DIE] = "on-die",
|
||||
};
|
||||
|
||||
static int of_get_nand_ecc_mode(struct device_node *np)
|
||||
|
@ -4374,7 +4394,7 @@ int nand_scan_ident(struct mtd_info *mtd, int maxchips,
|
|||
* For the other dies, nand_reset() will automatically switch to the
|
||||
* best mode for us.
|
||||
*/
|
||||
ret = nand_setup_data_interface(chip);
|
||||
ret = nand_setup_data_interface(chip, 0);
|
||||
if (ret)
|
||||
goto err_nand_init;
|
||||
|
||||
|
@ -4512,6 +4532,226 @@ static int nand_set_ecc_soft_ops(struct mtd_info *mtd)
|
|||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* nand_check_ecc_caps - check the sanity of preset ECC settings
|
||||
* @chip: nand chip info structure
|
||||
* @caps: ECC caps info structure
|
||||
* @oobavail: OOB size that the ECC engine can use
|
||||
*
|
||||
* When ECC step size and strength are already set, check if they are supported
|
||||
* by the controller and the calculated ECC bytes fit within the chip's OOB.
|
||||
* On success, the calculated ECC bytes is set.
|
||||
*/
|
||||
int nand_check_ecc_caps(struct nand_chip *chip,
|
||||
const struct nand_ecc_caps *caps, int oobavail)
|
||||
{
|
||||
struct mtd_info *mtd = nand_to_mtd(chip);
|
||||
const struct nand_ecc_step_info *stepinfo;
|
||||
int preset_step = chip->ecc.size;
|
||||
int preset_strength = chip->ecc.strength;
|
||||
int nsteps, ecc_bytes;
|
||||
int i, j;
|
||||
|
||||
if (WARN_ON(oobavail < 0))
|
||||
return -EINVAL;
|
||||
|
||||
if (!preset_step || !preset_strength)
|
||||
return -ENODATA;
|
||||
|
||||
nsteps = mtd->writesize / preset_step;
|
||||
|
||||
for (i = 0; i < caps->nstepinfos; i++) {
|
||||
stepinfo = &caps->stepinfos[i];
|
||||
|
||||
if (stepinfo->stepsize != preset_step)
|
||||
continue;
|
||||
|
||||
for (j = 0; j < stepinfo->nstrengths; j++) {
|
||||
if (stepinfo->strengths[j] != preset_strength)
|
||||
continue;
|
||||
|
||||
ecc_bytes = caps->calc_ecc_bytes(preset_step,
|
||||
preset_strength);
|
||||
if (WARN_ON_ONCE(ecc_bytes < 0))
|
||||
return ecc_bytes;
|
||||
|
||||
if (ecc_bytes * nsteps > oobavail) {
|
||||
pr_err("ECC (step, strength) = (%d, %d) does not fit in OOB",
|
||||
preset_step, preset_strength);
|
||||
return -ENOSPC;
|
||||
}
|
||||
|
||||
chip->ecc.bytes = ecc_bytes;
|
||||
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
pr_err("ECC (step, strength) = (%d, %d) not supported on this controller",
|
||||
preset_step, preset_strength);
|
||||
|
||||
return -ENOTSUPP;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(nand_check_ecc_caps);
|
||||
|
||||
/**
|
||||
* nand_match_ecc_req - meet the chip's requirement with least ECC bytes
|
||||
* @chip: nand chip info structure
|
||||
* @caps: ECC engine caps info structure
|
||||
* @oobavail: OOB size that the ECC engine can use
|
||||
*
|
||||
* If a chip's ECC requirement is provided, try to meet it with the least
|
||||
* number of ECC bytes (i.e. with the largest number of OOB-free bytes).
|
||||
* On success, the chosen ECC settings are set.
|
||||
*/
|
||||
int nand_match_ecc_req(struct nand_chip *chip,
|
||||
const struct nand_ecc_caps *caps, int oobavail)
|
||||
{
|
||||
struct mtd_info *mtd = nand_to_mtd(chip);
|
||||
const struct nand_ecc_step_info *stepinfo;
|
||||
int req_step = chip->ecc_step_ds;
|
||||
int req_strength = chip->ecc_strength_ds;
|
||||
int req_corr, step_size, strength, nsteps, ecc_bytes, ecc_bytes_total;
|
||||
int best_step, best_strength, best_ecc_bytes;
|
||||
int best_ecc_bytes_total = INT_MAX;
|
||||
int i, j;
|
||||
|
||||
if (WARN_ON(oobavail < 0))
|
||||
return -EINVAL;
|
||||
|
||||
/* No information provided by the NAND chip */
|
||||
if (!req_step || !req_strength)
|
||||
return -ENOTSUPP;
|
||||
|
||||
/* number of correctable bits the chip requires in a page */
|
||||
req_corr = mtd->writesize / req_step * req_strength;
|
||||
|
||||
for (i = 0; i < caps->nstepinfos; i++) {
|
||||
stepinfo = &caps->stepinfos[i];
|
||||
step_size = stepinfo->stepsize;
|
||||
|
||||
for (j = 0; j < stepinfo->nstrengths; j++) {
|
||||
strength = stepinfo->strengths[j];
|
||||
|
||||
/*
|
||||
* If both step size and strength are smaller than the
|
||||
* chip's requirement, it is not easy to compare the
|
||||
* resulted reliability.
|
||||
*/
|
||||
if (step_size < req_step && strength < req_strength)
|
||||
continue;
|
||||
|
||||
if (mtd->writesize % step_size)
|
||||
continue;
|
||||
|
||||
nsteps = mtd->writesize / step_size;
|
||||
|
||||
ecc_bytes = caps->calc_ecc_bytes(step_size, strength);
|
||||
if (WARN_ON_ONCE(ecc_bytes < 0))
|
||||
continue;
|
||||
ecc_bytes_total = ecc_bytes * nsteps;
|
||||
|
||||
if (ecc_bytes_total > oobavail ||
|
||||
strength * nsteps < req_corr)
|
||||
continue;
|
||||
|
||||
/*
|
||||
* We assume the best is to meet the chip's requrement
|
||||
* with the least number of ECC bytes.
|
||||
*/
|
||||
if (ecc_bytes_total < best_ecc_bytes_total) {
|
||||
best_ecc_bytes_total = ecc_bytes_total;
|
||||
best_step = step_size;
|
||||
best_strength = strength;
|
||||
best_ecc_bytes = ecc_bytes;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (best_ecc_bytes_total == INT_MAX)
|
||||
return -ENOTSUPP;
|
||||
|
||||
chip->ecc.size = best_step;
|
||||
chip->ecc.strength = best_strength;
|
||||
chip->ecc.bytes = best_ecc_bytes;
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(nand_match_ecc_req);
|
||||
|
||||
/**
|
||||
* nand_maximize_ecc - choose the max ECC strength available
|
||||
* @chip: nand chip info structure
|
||||
* @caps: ECC engine caps info structure
|
||||
* @oobavail: OOB size that the ECC engine can use
|
||||
*
|
||||
* Choose the max ECC strength that is supported on the controller, and can fit
|
||||
* within the chip's OOB. On success, the chosen ECC settings are set.
|
||||
*/
|
||||
int nand_maximize_ecc(struct nand_chip *chip,
|
||||
const struct nand_ecc_caps *caps, int oobavail)
|
||||
{
|
||||
struct mtd_info *mtd = nand_to_mtd(chip);
|
||||
const struct nand_ecc_step_info *stepinfo;
|
||||
int step_size, strength, nsteps, ecc_bytes, corr;
|
||||
int best_corr = 0;
|
||||
int best_step = 0;
|
||||
int best_strength, best_ecc_bytes;
|
||||
int i, j;
|
||||
|
||||
if (WARN_ON(oobavail < 0))
|
||||
return -EINVAL;
|
||||
|
||||
for (i = 0; i < caps->nstepinfos; i++) {
|
||||
stepinfo = &caps->stepinfos[i];
|
||||
step_size = stepinfo->stepsize;
|
||||
|
||||
/* If chip->ecc.size is already set, respect it */
|
||||
if (chip->ecc.size && step_size != chip->ecc.size)
|
||||
continue;
|
||||
|
||||
for (j = 0; j < stepinfo->nstrengths; j++) {
|
||||
strength = stepinfo->strengths[j];
|
||||
|
||||
if (mtd->writesize % step_size)
|
||||
continue;
|
||||
|
||||
nsteps = mtd->writesize / step_size;
|
||||
|
||||
ecc_bytes = caps->calc_ecc_bytes(step_size, strength);
|
||||
if (WARN_ON_ONCE(ecc_bytes < 0))
|
||||
continue;
|
||||
|
||||
if (ecc_bytes * nsteps > oobavail)
|
||||
continue;
|
||||
|
||||
corr = strength * nsteps;
|
||||
|
||||
/*
|
||||
* If the number of correctable bits is the same,
|
||||
* bigger step_size has more reliability.
|
||||
*/
|
||||
if (corr > best_corr ||
|
||||
(corr == best_corr && step_size > best_step)) {
|
||||
best_corr = corr;
|
||||
best_step = step_size;
|
||||
best_strength = strength;
|
||||
best_ecc_bytes = ecc_bytes;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (!best_corr)
|
||||
return -ENOTSUPP;
|
||||
|
||||
chip->ecc.size = best_step;
|
||||
chip->ecc.strength = best_strength;
|
||||
chip->ecc.bytes = best_ecc_bytes;
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(nand_maximize_ecc);
|
||||
|
||||
/*
|
||||
* Check if the chip configuration meet the datasheet requirements.
|
||||
|
||||
|
@ -4733,6 +4973,18 @@ int nand_scan_tail(struct mtd_info *mtd)
|
|||
}
|
||||
break;
|
||||
|
||||
case NAND_ECC_ON_DIE:
|
||||
if (!ecc->read_page || !ecc->write_page) {
|
||||
WARN(1, "No ECC functions supplied; on-die ECC not possible\n");
|
||||
ret = -EINVAL;
|
||||
goto err_free;
|
||||
}
|
||||
if (!ecc->read_oob)
|
||||
ecc->read_oob = nand_read_oob_std;
|
||||
if (!ecc->write_oob)
|
||||
ecc->write_oob = nand_write_oob_std;
|
||||
break;
|
||||
|
||||
case NAND_ECC_NONE:
|
||||
pr_warn("NAND_ECC_NONE selected by board driver. This is not recommended!\n");
|
||||
ecc->read_page = nand_read_page_raw;
|
||||
|
@ -4773,6 +5025,11 @@ int nand_scan_tail(struct mtd_info *mtd)
|
|||
goto err_free;
|
||||
}
|
||||
ecc->total = ecc->steps * ecc->bytes;
|
||||
if (ecc->total > mtd->oobsize) {
|
||||
WARN(1, "Total number of ECC bytes exceeded oobsize\n");
|
||||
ret = -EINVAL;
|
||||
goto err_free;
|
||||
}
|
||||
|
||||
/*
|
||||
* The number of bytes available for a client to place data into
|
||||
|
|
|
@ -17,6 +17,12 @@
|
|||
|
||||
#include <linux/mtd/nand.h>
|
||||
|
||||
/*
|
||||
* Special Micron status bit that indicates when the block has been
|
||||
* corrected by on-die ECC and should be rewritten
|
||||
*/
|
||||
#define NAND_STATUS_WRITE_RECOMMENDED BIT(3)
|
||||
|
||||
struct nand_onfi_vendor_micron {
|
||||
u8 two_plane_read;
|
||||
u8 read_cache;
|
||||
|
@ -66,9 +72,197 @@ static int micron_nand_onfi_init(struct nand_chip *chip)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int micron_nand_on_die_ooblayout_ecc(struct mtd_info *mtd, int section,
|
||||
struct mtd_oob_region *oobregion)
|
||||
{
|
||||
if (section >= 4)
|
||||
return -ERANGE;
|
||||
|
||||
oobregion->offset = (section * 16) + 8;
|
||||
oobregion->length = 8;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int micron_nand_on_die_ooblayout_free(struct mtd_info *mtd, int section,
|
||||
struct mtd_oob_region *oobregion)
|
||||
{
|
||||
if (section >= 4)
|
||||
return -ERANGE;
|
||||
|
||||
oobregion->offset = (section * 16) + 2;
|
||||
oobregion->length = 6;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct mtd_ooblayout_ops micron_nand_on_die_ooblayout_ops = {
|
||||
.ecc = micron_nand_on_die_ooblayout_ecc,
|
||||
.free = micron_nand_on_die_ooblayout_free,
|
||||
};
|
||||
|
||||
static int micron_nand_on_die_ecc_setup(struct nand_chip *chip, bool enable)
|
||||
{
|
||||
u8 feature[ONFI_SUBFEATURE_PARAM_LEN] = { 0, };
|
||||
|
||||
if (enable)
|
||||
feature[0] |= ONFI_FEATURE_ON_DIE_ECC_EN;
|
||||
|
||||
return chip->onfi_set_features(nand_to_mtd(chip), chip,
|
||||
ONFI_FEATURE_ON_DIE_ECC, feature);
|
||||
}
|
||||
|
||||
static int
|
||||
micron_nand_read_page_on_die_ecc(struct mtd_info *mtd, struct nand_chip *chip,
|
||||
uint8_t *buf, int oob_required,
|
||||
int page)
|
||||
{
|
||||
int status;
|
||||
int max_bitflips = 0;
|
||||
|
||||
micron_nand_on_die_ecc_setup(chip, true);
|
||||
|
||||
chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page);
|
||||
chip->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1);
|
||||
status = chip->read_byte(mtd);
|
||||
if (status & NAND_STATUS_FAIL)
|
||||
mtd->ecc_stats.failed++;
|
||||
/*
|
||||
* The internal ECC doesn't tell us the number of bitflips
|
||||
* that have been corrected, but tells us if it recommends to
|
||||
* rewrite the block. If it's the case, then we pretend we had
|
||||
* a number of bitflips equal to the ECC strength, which will
|
||||
* hint the NAND core to rewrite the block.
|
||||
*/
|
||||
else if (status & NAND_STATUS_WRITE_RECOMMENDED)
|
||||
max_bitflips = chip->ecc.strength;
|
||||
|
||||
chip->cmdfunc(mtd, NAND_CMD_READ0, -1, -1);
|
||||
|
||||
nand_read_page_raw(mtd, chip, buf, oob_required, page);
|
||||
|
||||
micron_nand_on_die_ecc_setup(chip, false);
|
||||
|
||||
return max_bitflips;
|
||||
}
|
||||
|
||||
static int
|
||||
micron_nand_write_page_on_die_ecc(struct mtd_info *mtd, struct nand_chip *chip,
|
||||
const uint8_t *buf, int oob_required,
|
||||
int page)
|
||||
{
|
||||
int status;
|
||||
|
||||
micron_nand_on_die_ecc_setup(chip, true);
|
||||
|
||||
chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page);
|
||||
nand_write_page_raw(mtd, chip, buf, oob_required, page);
|
||||
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
|
||||
status = chip->waitfunc(mtd, chip);
|
||||
|
||||
micron_nand_on_die_ecc_setup(chip, false);
|
||||
|
||||
return status & NAND_STATUS_FAIL ? -EIO : 0;
|
||||
}
|
||||
|
||||
static int
|
||||
micron_nand_read_page_raw_on_die_ecc(struct mtd_info *mtd,
|
||||
struct nand_chip *chip,
|
||||
uint8_t *buf, int oob_required,
|
||||
int page)
|
||||
{
|
||||
chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page);
|
||||
nand_read_page_raw(mtd, chip, buf, oob_required, page);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int
|
||||
micron_nand_write_page_raw_on_die_ecc(struct mtd_info *mtd,
|
||||
struct nand_chip *chip,
|
||||
const uint8_t *buf, int oob_required,
|
||||
int page)
|
||||
{
|
||||
int status;
|
||||
|
||||
chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page);
|
||||
nand_write_page_raw(mtd, chip, buf, oob_required, page);
|
||||
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
|
||||
status = chip->waitfunc(mtd, chip);
|
||||
|
||||
return status & NAND_STATUS_FAIL ? -EIO : 0;
|
||||
}
|
||||
|
||||
enum {
|
||||
/* The NAND flash doesn't support on-die ECC */
|
||||
MICRON_ON_DIE_UNSUPPORTED,
|
||||
|
||||
/*
|
||||
* The NAND flash supports on-die ECC and it can be
|
||||
* enabled/disabled by a set features command.
|
||||
*/
|
||||
MICRON_ON_DIE_SUPPORTED,
|
||||
|
||||
/*
|
||||
* The NAND flash supports on-die ECC, and it cannot be
|
||||
* disabled.
|
||||
*/
|
||||
MICRON_ON_DIE_MANDATORY,
|
||||
};
|
||||
|
||||
/*
|
||||
* Try to detect if the NAND support on-die ECC. To do this, we enable
|
||||
* the feature, and read back if it has been enabled as expected. We
|
||||
* also check if it can be disabled, because some Micron NANDs do not
|
||||
* allow disabling the on-die ECC and we don't support such NANDs for
|
||||
* now.
|
||||
*
|
||||
* This function also has the side effect of disabling on-die ECC if
|
||||
* it had been left enabled by the firmware/bootloader.
|
||||
*/
|
||||
static int micron_supports_on_die_ecc(struct nand_chip *chip)
|
||||
{
|
||||
u8 feature[ONFI_SUBFEATURE_PARAM_LEN] = { 0, };
|
||||
int ret;
|
||||
|
||||
if (chip->onfi_version == 0)
|
||||
return MICRON_ON_DIE_UNSUPPORTED;
|
||||
|
||||
if (chip->bits_per_cell != 1)
|
||||
return MICRON_ON_DIE_UNSUPPORTED;
|
||||
|
||||
ret = micron_nand_on_die_ecc_setup(chip, true);
|
||||
if (ret)
|
||||
return MICRON_ON_DIE_UNSUPPORTED;
|
||||
|
||||
chip->onfi_get_features(nand_to_mtd(chip), chip,
|
||||
ONFI_FEATURE_ON_DIE_ECC, feature);
|
||||
if ((feature[0] & ONFI_FEATURE_ON_DIE_ECC_EN) == 0)
|
||||
return MICRON_ON_DIE_UNSUPPORTED;
|
||||
|
||||
ret = micron_nand_on_die_ecc_setup(chip, false);
|
||||
if (ret)
|
||||
return MICRON_ON_DIE_UNSUPPORTED;
|
||||
|
||||
chip->onfi_get_features(nand_to_mtd(chip), chip,
|
||||
ONFI_FEATURE_ON_DIE_ECC, feature);
|
||||
if (feature[0] & ONFI_FEATURE_ON_DIE_ECC_EN)
|
||||
return MICRON_ON_DIE_MANDATORY;
|
||||
|
||||
/*
|
||||
* Some Micron NANDs have an on-die ECC of 4/512, some other
|
||||
* 8/512. We only support the former.
|
||||
*/
|
||||
if (chip->onfi_params.ecc_bits != 4)
|
||||
return MICRON_ON_DIE_UNSUPPORTED;
|
||||
|
||||
return MICRON_ON_DIE_SUPPORTED;
|
||||
}
|
||||
|
||||
static int micron_nand_init(struct nand_chip *chip)
|
||||
{
|
||||
struct mtd_info *mtd = nand_to_mtd(chip);
|
||||
int ondie;
|
||||
int ret;
|
||||
|
||||
ret = micron_nand_onfi_init(chip);
|
||||
|
@ -78,6 +272,34 @@ static int micron_nand_init(struct nand_chip *chip)
|
|||
if (mtd->writesize == 2048)
|
||||
chip->bbt_options |= NAND_BBT_SCAN2NDPAGE;
|
||||
|
||||
ondie = micron_supports_on_die_ecc(chip);
|
||||
|
||||
if (ondie == MICRON_ON_DIE_MANDATORY) {
|
||||
pr_err("On-die ECC forcefully enabled, not supported\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (chip->ecc.mode == NAND_ECC_ON_DIE) {
|
||||
if (ondie == MICRON_ON_DIE_UNSUPPORTED) {
|
||||
pr_err("On-die ECC selected but not supported\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
chip->ecc.options = NAND_ECC_CUSTOM_PAGE_ACCESS;
|
||||
chip->ecc.bytes = 8;
|
||||
chip->ecc.size = 512;
|
||||
chip->ecc.strength = 4;
|
||||
chip->ecc.algo = NAND_ECC_BCH;
|
||||
chip->ecc.read_page = micron_nand_read_page_on_die_ecc;
|
||||
chip->ecc.write_page = micron_nand_write_page_on_die_ecc;
|
||||
chip->ecc.read_page_raw =
|
||||
micron_nand_read_page_raw_on_die_ecc;
|
||||
chip->ecc.write_page_raw =
|
||||
micron_nand_write_page_raw_on_die_ecc;
|
||||
|
||||
mtd_set_ooblayout(mtd, µn_nand_on_die_ooblayout_ops);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -166,7 +166,11 @@ static int __init orion_nand_probe(struct platform_device *pdev)
}
}

clk_prepare_enable(info->clk);
ret = clk_prepare_enable(info->clk);
if (ret) {
dev_err(&pdev->dev, "failed to prepare clock!\n");
return ret;
}

ret = nand_scan(mtd, 1);
if (ret)

@ -1812,6 +1812,8 @@ static int alloc_nand_resource(struct platform_device *pdev)
|
|||
chip->write_buf = pxa3xx_nand_write_buf;
|
||||
chip->options |= NAND_NO_SUBPAGE_WRITE;
|
||||
chip->cmdfunc = nand_cmdfunc;
|
||||
chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
|
||||
chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
|
||||
}
|
||||
|
||||
nand_hw_control_init(chip->controller);
|
||||
|
|
|
@ -2008,6 +2008,8 @@ static int qcom_nand_host_init(struct qcom_nand_controller *nandc,
|
|||
chip->read_byte = qcom_nandc_read_byte;
|
||||
chip->read_buf = qcom_nandc_read_buf;
|
||||
chip->write_buf = qcom_nandc_write_buf;
|
||||
chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
|
||||
chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
|
||||
|
||||
/*
|
||||
* the bad block marker is readable only when we read the last codeword
|
||||
|
|
|
@ -812,9 +812,8 @@ static int s3c2410_nand_add_partition(struct s3c2410_nand_info *info,
|
|||
return -ENODEV;
|
||||
}
|
||||
|
||||
static int s3c2410_nand_setup_data_interface(struct mtd_info *mtd,
|
||||
const struct nand_data_interface *conf,
|
||||
bool check_only)
|
||||
static int s3c2410_nand_setup_data_interface(struct mtd_info *mtd, int csline,
|
||||
const struct nand_data_interface *conf)
|
||||
{
|
||||
struct s3c2410_nand_info *info = s3c2410_nand_mtd_toinfo(mtd);
|
||||
struct s3c2410_platform_nand *pdata = info->platform;
|
||||
|
|
|
@ -1183,6 +1183,8 @@ static int flctl_probe(struct platform_device *pdev)
|
|||
nand->read_buf = flctl_read_buf;
|
||||
nand->select_chip = flctl_select_chip;
|
||||
nand->cmdfunc = flctl_cmdfunc;
|
||||
nand->onfi_set_features = nand_onfi_get_set_features_notsupp;
|
||||
nand->onfi_get_features = nand_onfi_get_set_features_notsupp;
|
||||
|
||||
if (pdata->flcmncr_val & SEL_16BIT)
|
||||
nand->options |= NAND_BUSWIDTH_16;
|
||||
|
|
|
@ -1301,7 +1301,6 @@ static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
|
|||
|
||||
sunxi_nfc_hw_ecc_enable(mtd);
|
||||
|
||||
chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
|
||||
for (i = data_offs / ecc->size;
|
||||
i < DIV_ROUND_UP(data_offs + readlen, ecc->size); i++) {
|
||||
int data_off = i * ecc->size;
|
||||
|
@ -1592,9 +1591,8 @@ static int _sunxi_nand_lookup_timing(const s32 *lut, int lut_size, u32 duration,
|
|||
#define sunxi_nand_lookup_timing(l, p, c) \
|
||||
_sunxi_nand_lookup_timing(l, ARRAY_SIZE(l), p, c)
|
||||
|
||||
static int sunxi_nfc_setup_data_interface(struct mtd_info *mtd,
|
||||
const struct nand_data_interface *conf,
|
||||
bool check_only)
|
||||
static int sunxi_nfc_setup_data_interface(struct mtd_info *mtd, int csline,
|
||||
const struct nand_data_interface *conf)
|
||||
{
|
||||
struct nand_chip *nand = mtd_to_nand(mtd);
|
||||
struct sunxi_nand_chip *chip = to_sunxi_nand(nand);
|
||||
|
@ -1707,7 +1705,7 @@ static int sunxi_nfc_setup_data_interface(struct mtd_info *mtd,
|
|||
return tRHW;
|
||||
}
|
||||
|
||||
if (check_only)
|
||||
if (csline == NAND_DATA_IFACE_CHECK_ONLY)
|
||||
return 0;
|
||||
|
||||
/*
|
||||
|
@ -1922,7 +1920,6 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct mtd_info *mtd,
|
|||
ecc->write_subpage = sunxi_nfc_hw_ecc_write_subpage;
|
||||
ecc->read_oob_raw = nand_read_oob_std;
|
||||
ecc->write_oob_raw = nand_write_oob_std;
|
||||
ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@ -303,7 +303,7 @@ static int tango_write_page(struct mtd_info *mtd, struct nand_chip *chip,
|
|||
const u8 *buf, int oob_required, int page)
|
||||
{
|
||||
struct tango_nfc *nfc = to_tango_nfc(chip->controller);
|
||||
int err, len = mtd->writesize;
|
||||
int err, status, len = mtd->writesize;
|
||||
|
||||
/* Calling tango_write_oob() would send PAGEPROG twice */
|
||||
if (oob_required)
|
||||
|
@ -314,6 +314,10 @@ static int tango_write_page(struct mtd_info *mtd, struct nand_chip *chip,
|
|||
if (err)
|
||||
return err;
|
||||
|
||||
status = chip->waitfunc(mtd, chip);
|
||||
if (status & NAND_STATUS_FAIL)
|
||||
return -EIO;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -340,7 +344,7 @@ static void aux_write(struct nand_chip *chip, const u8 **buf, int len, int *pos)
|
|||
|
||||
if (!*buf) {
|
||||
/* skip over "len" bytes */
|
||||
chip->cmdfunc(mtd, NAND_CMD_SEQIN, *pos, -1);
|
||||
chip->cmdfunc(mtd, NAND_CMD_RNDIN, *pos, -1);
|
||||
} else {
|
||||
tango_write_buf(mtd, *buf, len);
|
||||
*buf += len;
|
||||
|
@ -431,9 +435,16 @@ static int tango_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
|
|||
static int tango_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
|
||||
const u8 *buf, int oob_required, int page)
|
||||
{
|
||||
int status;
|
||||
|
||||
chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0, page);
|
||||
raw_write(chip, buf, chip->oob_poi);
|
||||
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
|
||||
|
||||
status = chip->waitfunc(mtd, chip);
|
||||
if (status & NAND_STATUS_FAIL)
|
||||
return -EIO;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -484,9 +495,8 @@ static u32 to_ticks(int kHz, int ps)
|
|||
return DIV_ROUND_UP_ULL((u64)kHz * ps, NSEC_PER_SEC);
|
||||
}
|
||||
|
||||
static int tango_set_timings(struct mtd_info *mtd,
|
||||
const struct nand_data_interface *conf,
|
||||
bool check_only)
|
||||
static int tango_set_timings(struct mtd_info *mtd, int csline,
|
||||
const struct nand_data_interface *conf)
|
||||
{
|
||||
const struct nand_sdr_timings *sdr = nand_get_sdr_timings(conf);
|
||||
struct nand_chip *chip = mtd_to_nand(mtd);
|
||||
|
@ -498,7 +508,7 @@ static int tango_set_timings(struct mtd_info *mtd,
|
|||
if (IS_ERR(sdr))
|
||||
return PTR_ERR(sdr);
|
||||
|
||||
if (check_only)
|
||||
if (csline == NAND_DATA_IFACE_CHECK_ONLY)
|
||||
return 0;
|
||||
|
||||
Trdy = to_ticks(kHz, sdr->tCEA_max - sdr->tREA_max);
|
||||
|
|
|
@ -703,6 +703,8 @@ static int vf610_nfc_probe(struct platform_device *pdev)
|
|||
chip->read_buf = vf610_nfc_read_buf;
|
||||
chip->write_buf = vf610_nfc_write_buf;
|
||||
chip->select_chip = vf610_nfc_select_chip;
|
||||
chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
|
||||
chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
|
||||
|
||||
chip->options |= NAND_NO_SUBPAGE_WRITE;
|
||||
|
||||
|
|
|
@@ -0,0 +1,8 @@
config MTD_PARSER_TRX
tristate "Parser for TRX format partitions"
depends on MTD && (BCM47XX || ARCH_BCM_5301X || COMPILE_TEST)
help
TRX is a firmware format used by Broadcom on their devices. It
may contain up to 3/4 partitions (depending on the version).
This driver will parse TRX header and report at least two partitions:
kernel and rootfs.

@@ -0,0 +1 @@
obj-$(CONFIG_MTD_PARSER_TRX) += parser_trx.o

@@ -0,0 +1,126 @@
/*
* Parser for TRX format partitions
*
* Copyright (C) 2012 - 2017 Rafał Miłecki <rafal@milecki.pl>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>

#define TRX_PARSER_MAX_PARTS 4

/* Magics */
#define TRX_MAGIC 0x30524448
#define UBI_EC_MAGIC 0x23494255 /* UBI# */

struct trx_header {
uint32_t magic;
uint32_t length;
uint32_t crc32;
uint16_t flags;
uint16_t version;
uint32_t offset[3];
} __packed;

static const char *parser_trx_data_part_name(struct mtd_info *master,
size_t offset)
{
uint32_t buf;
size_t bytes_read;
int err;

err = mtd_read(master, offset, sizeof(buf), &bytes_read,
(uint8_t *)&buf);
if (err && !mtd_is_bitflip(err)) {
pr_err("mtd_read error while parsing (offset: 0x%zX): %d\n",
offset, err);
goto out_default;
}

if (buf == UBI_EC_MAGIC)
return "ubi";

out_default:
return "rootfs";
}

static int parser_trx_parse(struct mtd_info *mtd,
const struct mtd_partition **pparts,
struct mtd_part_parser_data *data)
{
struct mtd_partition *parts;
struct mtd_partition *part;
struct trx_header trx;
size_t bytes_read;
uint8_t curr_part = 0, i = 0;
int err;

parts = kzalloc(sizeof(struct mtd_partition) * TRX_PARSER_MAX_PARTS,
GFP_KERNEL);
if (!parts)
return -ENOMEM;

err = mtd_read(mtd, 0, sizeof(trx), &bytes_read, (uint8_t *)&trx);
if (err) {
pr_err("MTD reading error: %d\n", err);
kfree(parts);
return err;
}

if (trx.magic != TRX_MAGIC) {
kfree(parts);
return -ENOENT;
}

/* We have LZMA loader if there is address in offset[2] */
if (trx.offset[2]) {
part = &parts[curr_part++];
part->name = "loader";
part->offset = trx.offset[i];
i++;
}

if (trx.offset[i]) {
part = &parts[curr_part++];
part->name = "linux";
part->offset = trx.offset[i];
i++;
}

if (trx.offset[i]) {
part = &parts[curr_part++];
part->name = parser_trx_data_part_name(mtd, trx.offset[i]);
part->offset = trx.offset[i];
i++;
}

/*
* Assume that every partition ends at the beginning of the one it is
* followed by.
*/
for (i = 0; i < curr_part; i++) {
u64 next_part_offset = (i < curr_part - 1) ?
parts[i + 1].offset : mtd->size;

parts[i].size = next_part_offset - parts[i].offset;
}

*pparts = parts;
return i;
};

static struct mtd_part_parser mtd_parser_trx = {
.parse_fn = parser_trx_parse,
.name = "trx",
};
module_mtd_part_parser(mtd_parser_trx);

MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Parser for TRX format partitions");
@@ -108,7 +108,7 @@ config SPI_INTEL_SPI_PLATFORM

config SPI_STM32_QUADSPI
tristate "STM32 Quad SPI controller"
depends on ARCH_STM32
depends on ARCH_STM32 || COMPILE_TEST
help
This enables support for the STM32 Quad SPI controller.
We only connect the NOR to this controller.

|
|
@ -19,6 +19,7 @@
|
|||
#include <linux/mtd/spi-nor.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_platform.h>
|
||||
#include <linux/sizes.h>
|
||||
#include <linux/sysfs.h>
|
||||
|
||||
#define DEVICE_NAME "aspeed-smc"
|
||||
|
@ -97,6 +98,7 @@ struct aspeed_smc_chip {
|
|||
struct aspeed_smc_controller *controller;
|
||||
void __iomem *ctl; /* control register */
|
||||
void __iomem *ahb_base; /* base of chip window */
|
||||
u32 ahb_window_size; /* chip mapping window size */
|
||||
u32 ctl_val[smc_max]; /* control settings */
|
||||
enum aspeed_smc_flash_type type; /* what type of flash */
|
||||
struct spi_nor nor;
|
||||
|
@ -109,6 +111,7 @@ struct aspeed_smc_controller {
|
|||
const struct aspeed_smc_info *info; /* type info of controller */
|
||||
void __iomem *regs; /* controller registers */
|
||||
void __iomem *ahb_base; /* per-chip windows resource */
|
||||
u32 ahb_window_size; /* full mapping window size */
|
||||
|
||||
struct aspeed_smc_chip *chips[0]; /* pointers to attached chips */
|
||||
};
|
||||
|
@ -180,8 +183,7 @@ struct aspeed_smc_controller {
|
|||
|
||||
#define CONTROL_KEEP_MASK \
|
||||
(CONTROL_AAF_MODE | CONTROL_CE_INACTIVE_MASK | CONTROL_CLK_DIV4 | \
|
||||
CONTROL_IO_DUMMY_MASK | CONTROL_CLOCK_FREQ_SEL_MASK | \
|
||||
CONTROL_LSB_FIRST | CONTROL_CLOCK_MODE_3)
|
||||
CONTROL_CLOCK_FREQ_SEL_MASK | CONTROL_LSB_FIRST | CONTROL_CLOCK_MODE_3)
|
||||
|
||||
/*
|
||||
* The Segment Register uses a 8MB unit to encode the start address
|
||||
|
@ -194,6 +196,10 @@ struct aspeed_smc_controller {
|
|||
#define SEGMENT_ADDR_REG0 0x30
|
||||
#define SEGMENT_ADDR_START(_r) ((((_r) >> 16) & 0xFF) << 23)
|
||||
#define SEGMENT_ADDR_END(_r) ((((_r) >> 24) & 0xFF) << 23)
|
||||
#define SEGMENT_ADDR_VALUE(start, end) \
|
||||
(((((start) >> 23) & 0xFF) << 16) | ((((end) >> 23) & 0xFF) << 24))
|
||||
#define SEGMENT_ADDR_REG(controller, cs) \
|
||||
((controller)->regs + SEGMENT_ADDR_REG0 + (cs) * 4)
|
||||
|
||||
/*
|
||||
* In user mode all data bytes read or written to the chip decode address
|
||||
|
@ -439,8 +445,7 @@ static void __iomem *aspeed_smc_chip_base(struct aspeed_smc_chip *chip,
|
|||
u32 reg;
|
||||
|
||||
if (controller->info->nce > 1) {
|
||||
reg = readl(controller->regs + SEGMENT_ADDR_REG0 +
|
||||
chip->cs * 4);
|
||||
reg = readl(SEGMENT_ADDR_REG(controller, chip->cs));
|
||||
|
||||
if (SEGMENT_ADDR_START(reg) >= SEGMENT_ADDR_END(reg))
|
||||
return NULL;
|
||||
|
@ -451,6 +456,146 @@ static void __iomem *aspeed_smc_chip_base(struct aspeed_smc_chip *chip,
|
|||
return controller->ahb_base + offset;
|
||||
}
|
||||
|
||||
static u32 aspeed_smc_ahb_base_phy(struct aspeed_smc_controller *controller)
|
||||
{
|
||||
u32 seg0_val = readl(SEGMENT_ADDR_REG(controller, 0));
|
||||
|
||||
return SEGMENT_ADDR_START(seg0_val);
|
||||
}
|
||||
|
||||
static u32 chip_set_segment(struct aspeed_smc_chip *chip, u32 cs, u32 start,
|
||||
u32 size)
|
||||
{
|
||||
struct aspeed_smc_controller *controller = chip->controller;
|
||||
void __iomem *seg_reg;
|
||||
u32 seg_oldval, seg_newval, ahb_base_phy, end;
|
||||
|
||||
ahb_base_phy = aspeed_smc_ahb_base_phy(controller);
|
||||
|
||||
seg_reg = SEGMENT_ADDR_REG(controller, cs);
|
||||
seg_oldval = readl(seg_reg);
|
||||
|
||||
/*
|
||||
* If the chip size is not specified, use the default segment
|
||||
* size, but take into account the possible overlap with the
|
||||
* previous segment
|
||||
*/
|
||||
if (!size)
|
||||
size = SEGMENT_ADDR_END(seg_oldval) - start;
|
||||
|
||||
/*
|
||||
* The segment cannot exceed the maximum window size of the
|
||||
* controller.
|
||||
*/
|
||||
if (start + size > ahb_base_phy + controller->ahb_window_size) {
|
||||
size = ahb_base_phy + controller->ahb_window_size - start;
|
||||
dev_warn(chip->nor.dev, "CE%d window resized to %dMB",
|
||||
cs, size >> 20);
|
||||
}
|
||||
|
||||
end = start + size;
|
||||
seg_newval = SEGMENT_ADDR_VALUE(start, end);
|
||||
writel(seg_newval, seg_reg);
|
||||
|
||||
/*
|
||||
* Restore default value if something goes wrong. The chip
|
||||
* might have set some bogus value and we would loose access
|
||||
* to the chip.
|
||||
*/
|
||||
if (seg_newval != readl(seg_reg)) {
|
||||
dev_err(chip->nor.dev, "CE%d window invalid", cs);
|
||||
writel(seg_oldval, seg_reg);
|
||||
start = SEGMENT_ADDR_START(seg_oldval);
|
||||
end = SEGMENT_ADDR_END(seg_oldval);
|
||||
size = end - start;
|
||||
}
|
||||
|
||||
dev_info(chip->nor.dev, "CE%d window [ 0x%.8x - 0x%.8x ] %dMB",
|
||||
cs, start, end, size >> 20);
|
||||
|
||||
return size;
|
||||
}
|
||||
|
||||
/*
|
||||
* The segment register defines the mapping window on the AHB bus and
|
||||
* it needs to be configured depending on the chip size. The segment
|
||||
* register of the following CE also needs to be tuned in order to
|
||||
* provide a contiguous window across multiple chips.
|
||||
*
|
||||
* This is expected to be called in increasing CE order
|
||||
*/
|
||||
static u32 aspeed_smc_chip_set_segment(struct aspeed_smc_chip *chip)
|
||||
{
|
||||
struct aspeed_smc_controller *controller = chip->controller;
|
||||
u32 ahb_base_phy, start;
|
||||
u32 size = chip->nor.mtd.size;
|
||||
|
||||
/*
|
||||
* Each controller has a chip size limit for direct memory
|
||||
* access
|
||||
*/
|
||||
if (size > controller->info->maxsize)
|
||||
size = controller->info->maxsize;
|
||||
|
||||
/*
|
||||
* The AST2400 SPI controller only handles one chip and does
|
||||
* not have segment registers. Let's use the chip size for the
|
||||
* AHB window.
|
||||
*/
|
||||
if (controller->info == &spi_2400_info)
|
||||
goto out;
|
||||
|
||||
/*
|
||||
* The AST2500 SPI controller has a HW bug when the CE0 chip
|
||||
* size reaches 128MB. Enforce a size limit of 120MB to
|
||||
* prevent the controller from using bogus settings in the
|
||||
* segment register.
|
||||
*/
|
||||
if (chip->cs == 0 && controller->info == &spi_2500_info &&
|
||||
size == SZ_128M) {
|
||||
size = 120 << 20;
|
||||
dev_info(chip->nor.dev,
|
||||
"CE%d window resized to %dMB (AST2500 HW quirk)",
|
||||
chip->cs, size >> 20);
|
||||
}
|
||||
|
||||
ahb_base_phy = aspeed_smc_ahb_base_phy(controller);
|
||||
|
||||
/*
|
||||
* As a start address for the current segment, use the default
|
||||
* start address if we are handling CE0 or use the previous
|
||||
* segment ending address
|
||||
*/
|
||||
if (chip->cs) {
|
||||
u32 prev = readl(SEGMENT_ADDR_REG(controller, chip->cs - 1));
|
||||
|
||||
start = SEGMENT_ADDR_END(prev);
|
||||
} else {
|
||||
start = ahb_base_phy;
|
||||
}
|
||||
|
||||
size = chip_set_segment(chip, chip->cs, start, size);
|
||||
|
||||
/* Update chip base address on the AHB bus */
|
||||
chip->ahb_base = controller->ahb_base + (start - ahb_base_phy);
|
||||
|
||||
/*
|
||||
* Now, make sure the next segment does not overlap with the
|
||||
* current one we just configured, even if there is no
|
||||
* available chip. That could break access in Command Mode.
|
||||
*/
|
||||
if (chip->cs < controller->info->nce - 1)
|
||||
chip_set_segment(chip, chip->cs + 1, start + size, 0);
|
||||
|
||||
out:
|
||||
if (size < chip->nor.mtd.size)
|
||||
dev_warn(chip->nor.dev,
|
||||
"CE%d window too small for chip %dMB",
|
||||
chip->cs, (u32)chip->nor.mtd.size >> 20);
|
||||
|
||||
return size;
|
||||
}
|
||||
|
||||
static void aspeed_smc_chip_enable_write(struct aspeed_smc_chip *chip)
|
||||
{
|
||||
struct aspeed_smc_controller *controller = chip->controller;
|
||||
|
@ -524,7 +669,7 @@ static int aspeed_smc_chip_setup_init(struct aspeed_smc_chip *chip,
|
|||
*/
|
||||
chip->ahb_base = aspeed_smc_chip_base(chip, res);
|
||||
if (!chip->ahb_base) {
|
||||
dev_warn(chip->nor.dev, "CE segment window closed.\n");
|
||||
dev_warn(chip->nor.dev, "CE%d window closed", chip->cs);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
|
@ -571,6 +716,9 @@ static int aspeed_smc_chip_setup_finish(struct aspeed_smc_chip *chip)
|
|||
if (chip->nor.addr_width == 4 && info->set_4b)
|
||||
info->set_4b(chip);
|
||||
|
||||
/* This is for direct AHB access when using Command Mode. */
|
||||
chip->ahb_window_size = aspeed_smc_chip_set_segment(chip);
|
||||
|
||||
/*
|
||||
* base mode has not been optimized yet. use it for writes.
|
||||
*/
|
||||
|
@ -585,14 +733,12 @@ static int aspeed_smc_chip_setup_finish(struct aspeed_smc_chip *chip)
|
|||
* TODO: Adjust clocks if fast read is supported and interpret
|
||||
* SPI-NOR flags to adjust controller settings.
|
||||
*/
|
||||
switch (chip->nor.flash_read) {
|
||||
case SPI_NOR_NORMAL:
|
||||
cmd = CONTROL_COMMAND_MODE_NORMAL;
|
||||
break;
|
||||
case SPI_NOR_FAST:
|
||||
cmd = CONTROL_COMMAND_MODE_FREAD;
|
||||
break;
|
||||
default:
|
||||
if (chip->nor.read_proto == SNOR_PROTO_1_1_1) {
|
||||
if (chip->nor.read_dummy == 0)
|
||||
cmd = CONTROL_COMMAND_MODE_NORMAL;
|
||||
else
|
||||
cmd = CONTROL_COMMAND_MODE_FREAD;
|
||||
} else {
|
||||
dev_err(chip->nor.dev, "unsupported SPI read mode\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
@ -608,6 +754,11 @@ static int aspeed_smc_chip_setup_finish(struct aspeed_smc_chip *chip)
|
|||
static int aspeed_smc_setup_flash(struct aspeed_smc_controller *controller,
|
||||
struct device_node *np, struct resource *r)
|
||||
{
|
||||
const struct spi_nor_hwcaps hwcaps = {
|
||||
.mask = SNOR_HWCAPS_READ |
|
||||
SNOR_HWCAPS_READ_FAST |
|
||||
SNOR_HWCAPS_PP,
|
||||
};
|
||||
const struct aspeed_smc_info *info = controller->info;
|
||||
struct device *dev = controller->dev;
|
||||
struct device_node *child;
|
||||
|
@ -671,11 +822,11 @@ static int aspeed_smc_setup_flash(struct aspeed_smc_controller *controller,
|
|||
break;
|
||||
|
||||
/*
|
||||
* TODO: Add support for SPI_NOR_QUAD and SPI_NOR_DUAL
|
||||
* TODO: Add support for Dual and Quad SPI protocols
|
||||
* attach when board support is present as determined
|
||||
* by of property.
|
||||
*/
|
||||
ret = spi_nor_scan(nor, NULL, SPI_NOR_NORMAL);
|
||||
ret = spi_nor_scan(nor, NULL, &hwcaps);
|
||||
if (ret)
|
||||
break;
|
||||
|
||||
|
@ -731,6 +882,8 @@ static int aspeed_smc_probe(struct platform_device *pdev)
|
|||
if (IS_ERR(controller->ahb_base))
|
||||
return PTR_ERR(controller->ahb_base);
|
||||
|
||||
controller->ahb_window_size = resource_size(res);
|
||||
|
||||
ret = aspeed_smc_setup_flash(controller, np, res);
|
||||
if (ret)
|
||||
dev_err(dev, "Aspeed SMC probe failed %d\n", ret);
|
||||
|
|
|
@ -275,14 +275,48 @@ static void atmel_qspi_debug_command(struct atmel_qspi *aq,
|
|||
|
||||
static int atmel_qspi_run_command(struct atmel_qspi *aq,
|
||||
const struct atmel_qspi_command *cmd,
|
||||
u32 ifr_tfrtyp, u32 ifr_width)
|
||||
u32 ifr_tfrtyp, enum spi_nor_protocol proto)
|
||||
{
|
||||
u32 iar, icr, ifr, sr;
|
||||
int err = 0;
|
||||
|
||||
iar = 0;
|
||||
icr = 0;
|
||||
ifr = ifr_tfrtyp | ifr_width;
|
||||
ifr = ifr_tfrtyp;
|
||||
|
||||
+    /* Set the SPI protocol */
+    switch (proto) {
+    case SNOR_PROTO_1_1_1:
+        ifr |= QSPI_IFR_WIDTH_SINGLE_BIT_SPI;
+        break;
+
+    case SNOR_PROTO_1_1_2:
+        ifr |= QSPI_IFR_WIDTH_DUAL_OUTPUT;
+        break;
+
+    case SNOR_PROTO_1_1_4:
+        ifr |= QSPI_IFR_WIDTH_QUAD_OUTPUT;
+        break;
+
+    case SNOR_PROTO_1_2_2:
+        ifr |= QSPI_IFR_WIDTH_DUAL_IO;
+        break;
+
+    case SNOR_PROTO_1_4_4:
+        ifr |= QSPI_IFR_WIDTH_QUAD_IO;
+        break;
+
+    case SNOR_PROTO_2_2_2:
+        ifr |= QSPI_IFR_WIDTH_DUAL_CMD;
+        break;
+
+    case SNOR_PROTO_4_4_4:
+        ifr |= QSPI_IFR_WIDTH_QUAD_CMD;
+        break;
+
+    default:
+        return -EINVAL;
+    }
+
    /* Compute instruction parameters */
    if (cmd->enable.bits.instruction) {

@@ -434,7 +468,7 @@ static int atmel_qspi_read_reg(struct spi_nor *nor, u8 opcode,
    cmd.rx_buf = buf;
    cmd.buf_len = len;
    return atmel_qspi_run_command(aq, &cmd, QSPI_IFR_TFRTYP_TRSFR_READ,
-                                  QSPI_IFR_WIDTH_SINGLE_BIT_SPI);
+                                  nor->reg_proto);
}

static int atmel_qspi_write_reg(struct spi_nor *nor, u8 opcode,

@@ -450,7 +484,7 @@ static int atmel_qspi_write_reg(struct spi_nor *nor, u8 opcode,
    cmd.tx_buf = buf;
    cmd.buf_len = len;
    return atmel_qspi_run_command(aq, &cmd, QSPI_IFR_TFRTYP_TRSFR_WRITE,
-                                  QSPI_IFR_WIDTH_SINGLE_BIT_SPI);
+                                  nor->reg_proto);
}

static ssize_t atmel_qspi_write(struct spi_nor *nor, loff_t to, size_t len,

@@ -469,7 +503,7 @@ static ssize_t atmel_qspi_write(struct spi_nor *nor, loff_t to, size_t len,
    cmd.tx_buf = write_buf;
    cmd.buf_len = len;
    ret = atmel_qspi_run_command(aq, &cmd, QSPI_IFR_TFRTYP_TRSFR_WRITE_MEM,
-                                 QSPI_IFR_WIDTH_SINGLE_BIT_SPI);
+                                 nor->write_proto);
    return (ret < 0) ? ret : len;
}

@@ -484,7 +518,7 @@ static int atmel_qspi_erase(struct spi_nor *nor, loff_t offs)
    cmd.instruction = nor->erase_opcode;
    cmd.address = (u32)offs;
    return atmel_qspi_run_command(aq, &cmd, QSPI_IFR_TFRTYP_TRSFR_WRITE,
-                                  QSPI_IFR_WIDTH_SINGLE_BIT_SPI);
+                                  nor->reg_proto);
}

static ssize_t atmel_qspi_read(struct spi_nor *nor, loff_t from, size_t len,

@@ -493,27 +527,8 @@ static ssize_t atmel_qspi_read(struct spi_nor *nor, loff_t from, size_t len,
    struct atmel_qspi *aq = nor->priv;
    struct atmel_qspi_command cmd;
    u8 num_mode_cycles, num_dummy_cycles;
-    u32 ifr_width;
    ssize_t ret;

-    switch (nor->flash_read) {
-    case SPI_NOR_NORMAL:
-    case SPI_NOR_FAST:
-        ifr_width = QSPI_IFR_WIDTH_SINGLE_BIT_SPI;
-        break;
-
-    case SPI_NOR_DUAL:
-        ifr_width = QSPI_IFR_WIDTH_DUAL_OUTPUT;
-        break;
-
-    case SPI_NOR_QUAD:
-        ifr_width = QSPI_IFR_WIDTH_QUAD_OUTPUT;
-        break;
-
-    default:
-        return -EINVAL;
-    }
-
    if (nor->read_dummy >= 2) {
        num_mode_cycles = 2;
        num_dummy_cycles = nor->read_dummy - 2;

@@ -536,7 +551,7 @@ static ssize_t atmel_qspi_read(struct spi_nor *nor, loff_t from, size_t len,
    cmd.rx_buf = read_buf;
    cmd.buf_len = len;
    ret = atmel_qspi_run_command(aq, &cmd, QSPI_IFR_TFRTYP_TRSFR_READ_MEM,
-                                 ifr_width);
+                                 nor->read_proto);
    return (ret < 0) ? ret : len;
}

@@ -590,6 +605,20 @@ static irqreturn_t atmel_qspi_interrupt(int irq, void *dev_id)

static int atmel_qspi_probe(struct platform_device *pdev)
{
+    const struct spi_nor_hwcaps hwcaps = {
+        .mask = SNOR_HWCAPS_READ |
+            SNOR_HWCAPS_READ_FAST |
+            SNOR_HWCAPS_READ_1_1_2 |
+            SNOR_HWCAPS_READ_1_2_2 |
+            SNOR_HWCAPS_READ_2_2_2 |
+            SNOR_HWCAPS_READ_1_1_4 |
+            SNOR_HWCAPS_READ_1_4_4 |
+            SNOR_HWCAPS_READ_4_4_4 |
+            SNOR_HWCAPS_PP |
+            SNOR_HWCAPS_PP_1_1_4 |
+            SNOR_HWCAPS_PP_1_4_4 |
+            SNOR_HWCAPS_PP_4_4_4,
+    };
    struct device_node *child, *np = pdev->dev.of_node;
    struct atmel_qspi *aq;
    struct resource *res;

@@ -679,7 +708,7 @@ static int atmel_qspi_probe(struct platform_device *pdev)
    if (err)
        goto disable_clk;

-    err = spi_nor_scan(nor, NULL, SPI_NOR_QUAD);
+    err = spi_nor_scan(nor, NULL, &hwcaps);
    if (err)
        goto disable_clk;

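Every controller-driver conversion in this diff follows the same shape as the Atmel QSPI changes above: the driver stops choosing a single read mode up front and instead advertises everything its bus can do; the spi-nor core then intersects that with what the detected flash supports. A minimal user-space sketch of that negotiation is shown below; the bit positions mirror the SNOR_HWCAPS_* definitions added later in this series, everything else is mocked for illustration and is not kernel code.

/* Sketch of the hwcaps negotiation; bit values mirror SNOR_HWCAPS_*. */
#include <stdio.h>
#include <stdint.h>

#define HWCAPS_READ         (1u << 0)
#define HWCAPS_READ_FAST    (1u << 1)
#define HWCAPS_READ_1_1_2   (1u << 3)
#define HWCAPS_READ_1_1_4   (1u << 7)
#define HWCAPS_PP           (1u << 16)

int main(void)
{
    /* What the controller driver advertises (cf. the probe above). */
    uint32_t controller = HWCAPS_READ | HWCAPS_READ_FAST |
                          HWCAPS_READ_1_1_2 | HWCAPS_READ_1_1_4 | HWCAPS_PP;
    /* What the detected flash supports (filled from flash_info flags). */
    uint32_t flash = HWCAPS_READ | HWCAPS_READ_FAST |
                     HWCAPS_READ_1_1_2 | HWCAPS_PP;
    /* The core keeps only what both sides can do. */
    uint32_t shared = controller & flash;

    printf("shared mask: 0x%x (quad read %s)\n", shared,
           (shared & HWCAPS_READ_1_1_4) ? "possible" : "dropped");
    return 0;
}

Here the flash lacks quad read, so the negotiated mask falls back to dual; the highest remaining bit decides the read opcode, as the spi-nor core changes further down show.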
@@ -855,15 +855,14 @@ static int cqspi_set_protocol(struct spi_nor *nor, const int read)
    f_pdata->data_width = CQSPI_INST_TYPE_SINGLE;

    if (read) {
-        switch (nor->flash_read) {
-        case SPI_NOR_NORMAL:
-        case SPI_NOR_FAST:
+        switch (nor->read_proto) {
+        case SNOR_PROTO_1_1_1:
            f_pdata->data_width = CQSPI_INST_TYPE_SINGLE;
            break;
-        case SPI_NOR_DUAL:
+        case SNOR_PROTO_1_1_2:
            f_pdata->data_width = CQSPI_INST_TYPE_DUAL;
            break;
-        case SPI_NOR_QUAD:
+        case SNOR_PROTO_1_1_4:
            f_pdata->data_width = CQSPI_INST_TYPE_QUAD;
            break;
        default:

@@ -1069,6 +1068,13 @@ static void cqspi_controller_init(struct cqspi_st *cqspi)

static int cqspi_setup_flash(struct cqspi_st *cqspi, struct device_node *np)
{
+    const struct spi_nor_hwcaps hwcaps = {
+        .mask = SNOR_HWCAPS_READ |
+            SNOR_HWCAPS_READ_FAST |
+            SNOR_HWCAPS_READ_1_1_2 |
+            SNOR_HWCAPS_READ_1_1_4 |
+            SNOR_HWCAPS_PP,
+    };
    struct platform_device *pdev = cqspi->pdev;
    struct device *dev = &pdev->dev;
    struct cqspi_flash_pdata *f_pdata;

@@ -1123,7 +1129,7 @@ static int cqspi_setup_flash(struct cqspi_st *cqspi, struct device_node *np)
        goto err;
    }

-    ret = spi_nor_scan(nor, NULL, SPI_NOR_QUAD);
+    ret = spi_nor_scan(nor, NULL, &hwcaps);
    if (ret)
        goto err;

@@ -1277,7 +1283,7 @@ static const struct dev_pm_ops cqspi__dev_pm_ops = {
#define CQSPI_DEV_PM_OPS    NULL
#endif

-static struct of_device_id const cqspi_dt_ids[] = {
+static const struct of_device_id cqspi_dt_ids[] = {
    {.compatible = "cdns,qspi-nor",},
    { /* end of table */ }
};

@@ -957,6 +957,10 @@ static void fsl_qspi_unprep(struct spi_nor *nor, enum spi_nor_ops ops)

static int fsl_qspi_probe(struct platform_device *pdev)
{
+    const struct spi_nor_hwcaps hwcaps = {
+        .mask = SNOR_HWCAPS_READ_1_1_4 |
+            SNOR_HWCAPS_PP,
+    };
    struct device_node *np = pdev->dev.of_node;
    struct device *dev = &pdev->dev;
    struct fsl_qspi *q;

@@ -1065,7 +1069,7 @@ static int fsl_qspi_probe(struct platform_device *pdev)
    /* set the chip address for READID */
    fsl_qspi_set_base_addr(q, nor);

-    ret = spi_nor_scan(nor, NULL, SPI_NOR_QUAD);
+    ret = spi_nor_scan(nor, NULL, &hwcaps);
    if (ret)
        goto mutex_failed;

@@ -120,19 +120,24 @@ static inline int wait_op_finish(struct hifmc_host *host)
                (reg & FMC_INT_OP_DONE), 0, FMC_WAIT_TIMEOUT);
}

-static int get_if_type(enum read_mode flash_read)
+static int get_if_type(enum spi_nor_protocol proto)
{
    enum hifmc_iftype if_type;

-    switch (flash_read) {
-    case SPI_NOR_DUAL:
+    switch (proto) {
+    case SNOR_PROTO_1_1_2:
        if_type = IF_TYPE_DUAL;
        break;
-    case SPI_NOR_QUAD:
+    case SNOR_PROTO_1_2_2:
+        if_type = IF_TYPE_DIO;
+        break;
+    case SNOR_PROTO_1_1_4:
        if_type = IF_TYPE_QUAD;
        break;
-    case SPI_NOR_NORMAL:
-    case SPI_NOR_FAST:
+    case SNOR_PROTO_1_4_4:
+        if_type = IF_TYPE_QIO;
+        break;
+    case SNOR_PROTO_1_1_1:
    default:
        if_type = IF_TYPE_STD;
        break;

@@ -253,7 +258,10 @@ static int hisi_spi_nor_dma_transfer(struct spi_nor *nor, loff_t start_off,
    writel(FMC_DMA_LEN_SET(len), host->regbase + FMC_DMA_LEN);

    reg = OP_CFG_FM_CS(priv->chipselect);
-    if_type = get_if_type(nor->flash_read);
+    if (op_type == FMC_OP_READ)
+        if_type = get_if_type(nor->read_proto);
+    else
+        if_type = get_if_type(nor->write_proto);
    reg |= OP_CFG_MEM_IF_TYPE(if_type);
    if (op_type == FMC_OP_READ)
        reg |= OP_CFG_DUMMY_NUM(nor->read_dummy >> 3);

@@ -321,6 +329,13 @@ static ssize_t hisi_spi_nor_write(struct spi_nor *nor, loff_t to,
static int hisi_spi_nor_register(struct device_node *np,
                struct hifmc_host *host)
{
+    const struct spi_nor_hwcaps hwcaps = {
+        .mask = SNOR_HWCAPS_READ |
+            SNOR_HWCAPS_READ_FAST |
+            SNOR_HWCAPS_READ_1_1_2 |
+            SNOR_HWCAPS_READ_1_1_4 |
+            SNOR_HWCAPS_PP,
+    };
    struct device *dev = host->dev;
    struct spi_nor *nor;
    struct hifmc_priv *priv;

@@ -362,7 +377,7 @@ static int hisi_spi_nor_register(struct device_node *np,
    nor->read = hisi_spi_nor_read;
    nor->write = hisi_spi_nor_write;
    nor->erase = NULL;
-    ret = spi_nor_scan(nor, NULL, SPI_NOR_QUAD);
+    ret = spi_nor_scan(nor, NULL, &hwcaps);
    if (ret)
        return ret;

@@ -715,6 +715,11 @@ static void intel_spi_fill_partition(struct intel_spi *ispi,
struct intel_spi *intel_spi_probe(struct device *dev,
    struct resource *mem, const struct intel_spi_boardinfo *info)
{
+    const struct spi_nor_hwcaps hwcaps = {
+        .mask = SNOR_HWCAPS_READ |
+            SNOR_HWCAPS_READ_FAST |
+            SNOR_HWCAPS_PP,
+    };
    struct mtd_partition part;
    struct intel_spi *ispi;
    int ret;

@@ -746,7 +751,7 @@ struct intel_spi *intel_spi_probe(struct device *dev,
    ispi->nor.write = intel_spi_write;
    ispi->nor.erase = intel_spi_erase;

-    ret = spi_nor_scan(&ispi->nor, NULL, SPI_NOR_NORMAL);
+    ret = spi_nor_scan(&ispi->nor, NULL, &hwcaps);
    if (ret) {
        dev_info(dev, "failed to locate the chip\n");
        return ERR_PTR(ret);

@@ -123,20 +123,20 @@ static void mt8173_nor_set_read_mode(struct mt8173_nor *mt8173_nor)
{
    struct spi_nor *nor = &mt8173_nor->nor;

-    switch (nor->flash_read) {
-    case SPI_NOR_FAST:
+    switch (nor->read_proto) {
+    case SNOR_PROTO_1_1_1:
        writeb(nor->read_opcode, mt8173_nor->base +
               MTK_NOR_PRGDATA3_REG);
        writeb(MTK_NOR_FAST_READ, mt8173_nor->base +
               MTK_NOR_CFG1_REG);
        break;
-    case SPI_NOR_DUAL:
+    case SNOR_PROTO_1_1_2:
        writeb(nor->read_opcode, mt8173_nor->base +
               MTK_NOR_PRGDATA3_REG);
        writeb(MTK_NOR_DUAL_READ_EN, mt8173_nor->base +
               MTK_NOR_DUAL_REG);
        break;
-    case SPI_NOR_QUAD:
+    case SNOR_PROTO_1_1_4:
        writeb(nor->read_opcode, mt8173_nor->base +
               MTK_NOR_PRGDATA4_REG);
        writeb(MTK_NOR_QUAD_READ_EN, mt8173_nor->base +

@@ -408,6 +408,11 @@ static int mt8173_nor_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf,
static int mtk_nor_init(struct mt8173_nor *mt8173_nor,
            struct device_node *flash_node)
{
+    const struct spi_nor_hwcaps hwcaps = {
+        .mask = SNOR_HWCAPS_READ_FAST |
+            SNOR_HWCAPS_READ_1_1_2 |
+            SNOR_HWCAPS_PP,
+    };
    int ret;
    struct spi_nor *nor;

@@ -426,7 +431,7 @@ static int mtk_nor_init(struct mt8173_nor *mt8173_nor,
    nor->write_reg = mt8173_nor_write_reg;
    nor->mtd.name = "mtk_nor";
    /* initialized with NULL */
-    ret = spi_nor_scan(nor, NULL, SPI_NOR_DUAL);
+    ret = spi_nor_scan(nor, NULL, &hwcaps);
    if (ret)
        return ret;

@@ -240,13 +240,12 @@ static int nxp_spifi_erase(struct spi_nor *nor, loff_t offs)

static int nxp_spifi_setup_memory_cmd(struct nxp_spifi *spifi)
{
-    switch (spifi->nor.flash_read) {
-    case SPI_NOR_NORMAL:
-    case SPI_NOR_FAST:
+    switch (spifi->nor.read_proto) {
+    case SNOR_PROTO_1_1_1:
        spifi->mcmd = SPIFI_CMD_FIELDFORM_ALL_SERIAL;
        break;
-    case SPI_NOR_DUAL:
-    case SPI_NOR_QUAD:
+    case SNOR_PROTO_1_1_2:
+    case SNOR_PROTO_1_1_4:
        spifi->mcmd = SPIFI_CMD_FIELDFORM_QUAD_DUAL_DATA;
        break;
    default:

@@ -274,7 +273,11 @@ static void nxp_spifi_dummy_id_read(struct spi_nor *nor)
static int nxp_spifi_setup_flash(struct nxp_spifi *spifi,
                 struct device_node *np)
{
-    enum read_mode flash_read;
+    struct spi_nor_hwcaps hwcaps = {
+        .mask = SNOR_HWCAPS_READ |
+            SNOR_HWCAPS_READ_FAST |
+            SNOR_HWCAPS_PP,
+    };
    u32 ctrl, property;
    u16 mode = 0;
    int ret;

@@ -308,13 +311,12 @@ static int nxp_spifi_setup_flash(struct nxp_spifi *spifi,

    if (mode & SPI_RX_DUAL) {
        ctrl |= SPIFI_CTRL_DUAL;
-        flash_read = SPI_NOR_DUAL;
+        hwcaps.mask |= SNOR_HWCAPS_READ_1_1_2;
    } else if (mode & SPI_RX_QUAD) {
        ctrl &= ~SPIFI_CTRL_DUAL;
-        flash_read = SPI_NOR_QUAD;
+        hwcaps.mask |= SNOR_HWCAPS_READ_1_1_4;
    } else {
        ctrl |= SPIFI_CTRL_DUAL;
-        flash_read = SPI_NOR_NORMAL;
    }

    switch (mode & (SPI_CPHA | SPI_CPOL)) {

@@ -351,7 +353,7 @@ static int nxp_spifi_setup_flash(struct nxp_spifi *spifi,
     */
    nxp_spifi_dummy_id_read(&spifi->nor);

-    ret = spi_nor_scan(&spifi->nor, NULL, flash_read);
+    ret = spi_nor_scan(&spifi->nor, NULL, &hwcaps);
    if (ret) {
        dev_err(spifi->dev, "device scan failed\n");
        return ret;

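The nxp-spifi change above (and the stm32-quadspi change further down) shows the pattern for controllers whose bus width comes from the device tree: instead of picking one read_mode, the driver ORs extra bits into hwcaps.mask according to the RX width. A small user-space sketch of that mapping follows; the function name and bit values are illustrative, only the idea of widening the mask per bus width comes from the patch.

/* Sketch: map a DT-described RX bus width to a capability mask. */
#include <stdio.h>
#include <stdint.h>

#define HWCAPS_READ        (1u << 0)
#define HWCAPS_READ_FAST   (1u << 1)
#define HWCAPS_READ_1_1_2  (1u << 3)
#define HWCAPS_READ_1_1_4  (1u << 7)
#define HWCAPS_PP          (1u << 16)

static int rx_width_to_hwcaps(unsigned int width, uint32_t *mask)
{
    *mask = HWCAPS_READ | HWCAPS_READ_FAST | HWCAPS_PP;

    switch (width) {
    case 4:
        *mask |= HWCAPS_READ_1_1_4;
        break;
    case 2:
        *mask |= HWCAPS_READ_1_1_2;
        break;
    case 1:
        break;          /* plain single-bit reads only */
    default:
        return -1;      /* unsupported bus width */
    }
    return 0;
}

int main(void)
{
    uint32_t mask;

    if (rx_width_to_hwcaps(2, &mask) == 0)
        printf("hwcaps mask for a dual-capable bus: 0x%x\n", mask);
    return 0;
}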
@@ -149,24 +149,6 @@ static int read_cr(struct spi_nor *nor)
    return val;
}

-/*
- * Dummy Cycle calculation for different type of read.
- * It can be used to support more commands with
- * different dummy cycle requirements.
- */
-static inline int spi_nor_read_dummy_cycles(struct spi_nor *nor)
-{
-    switch (nor->flash_read) {
-    case SPI_NOR_FAST:
-    case SPI_NOR_DUAL:
-    case SPI_NOR_QUAD:
-        return 8;
-    case SPI_NOR_NORMAL:
-        return 0;
-    }
-    return 0;
-}
-
/*
 * Write status register 1 byte
 * Returns negative if error occurred.

@@ -221,6 +203,10 @@ static inline u8 spi_nor_convert_3to4_read(u8 opcode)
        { SPINOR_OP_READ_1_2_2, SPINOR_OP_READ_1_2_2_4B },
        { SPINOR_OP_READ_1_1_4, SPINOR_OP_READ_1_1_4_4B },
        { SPINOR_OP_READ_1_4_4, SPINOR_OP_READ_1_4_4_4B },
+
+        { SPINOR_OP_READ_1_1_1_DTR, SPINOR_OP_READ_1_1_1_DTR_4B },
+        { SPINOR_OP_READ_1_2_2_DTR, SPINOR_OP_READ_1_2_2_DTR_4B },
+        { SPINOR_OP_READ_1_4_4_DTR, SPINOR_OP_READ_1_4_4_DTR_4B },
    };

    return spi_nor_convert_opcode(opcode, spi_nor_3to4_read,

@@ -1022,10 +1008,12 @@ static const struct flash_info spi_nor_ids[] = {
    { "mx25u6435f", INFO(0xc22537, 0, 64 * 1024, 128, SECT_4K) },
    { "mx25l12805d", INFO(0xc22018, 0, 64 * 1024, 256, 0) },
    { "mx25l12855e", INFO(0xc22618, 0, 64 * 1024, 256, 0) },
-    { "mx25l25635e", INFO(0xc22019, 0, 64 * 1024, 512, 0) },
+    { "mx25l25635e", INFO(0xc22019, 0, 64 * 1024, 512, SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
+    { "mx25u25635f", INFO(0xc22539, 0, 64 * 1024, 512, SECT_4K | SPI_NOR_4B_OPCODES) },
    { "mx25l25655e", INFO(0xc22619, 0, 64 * 1024, 512, 0) },
-    { "mx66l51235l", INFO(0xc2201a, 0, 64 * 1024, 1024, SPI_NOR_QUAD_READ) },
+    { "mx66l51235l", INFO(0xc2201a, 0, 64 * 1024, 1024, SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
+    { "mx66u51235f", INFO(0xc2253a, 0, 64 * 1024, 1024, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ | SPI_NOR_4B_OPCODES) },
    { "mx66l1g45g",  INFO(0xc2201b, 0, 64 * 1024, 2048, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
    { "mx66l1g55g",  INFO(0xc2261b, 0, 64 * 1024, 2048, SPI_NOR_QUAD_READ) },

    /* Micron */

@@ -1036,7 +1024,7 @@ static const struct flash_info spi_nor_ids[] = {
    { "n25q064a",    INFO(0x20bb17, 0, 64 * 1024, 128, SECT_4K | SPI_NOR_QUAD_READ) },
    { "n25q128a11",  INFO(0x20bb18, 0, 64 * 1024, 256, SECT_4K | SPI_NOR_QUAD_READ) },
    { "n25q128a13",  INFO(0x20ba18, 0, 64 * 1024, 256, SECT_4K | SPI_NOR_QUAD_READ) },
-    { "n25q256a",    INFO(0x20ba19, 0, 64 * 1024, 512, SECT_4K | SPI_NOR_QUAD_READ) },
+    { "n25q256a",    INFO(0x20ba19, 0, 64 * 1024, 512, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
    { "n25q256ax1",  INFO(0x20bb19, 0, 64 * 1024, 512, SECT_4K | SPI_NOR_QUAD_READ) },
    { "n25q512a",    INFO(0x20bb20, 0, 64 * 1024, 1024, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ) },
    { "n25q512ax3",  INFO(0x20ba20, 0, 64 * 1024, 1024, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ) },

@@ -1076,6 +1064,7 @@ static const struct flash_info spi_nor_ids[] = {
    { "s25fl164k",  INFO(0x014017, 0, 64 * 1024, 128, SECT_4K) },
    { "s25fl204k",  INFO(0x014013, 0, 64 * 1024, 8, SECT_4K | SPI_NOR_DUAL_READ) },
    { "s25fl208k",  INFO(0x014014, 0, 64 * 1024, 16, SECT_4K | SPI_NOR_DUAL_READ) },
+    { "s25fl064l",  INFO(0x016017, 0, 64 * 1024, 128, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ | SPI_NOR_4B_OPCODES) },

    /* SST -- large erase sizes are "overlays", "sectors" are 4K */
    { "sst25vf040b", INFO(0xbf258d, 0, 64 * 1024, 8, SECT_4K | SST_WRITE) },

@@ -1159,7 +1148,9 @@ static const struct flash_info spi_nor_ids[] = {
    { "w25q80", INFO(0xef5014, 0, 64 * 1024, 16, SECT_4K) },
    { "w25q80bl", INFO(0xef4014, 0, 64 * 1024, 16, SECT_4K) },
    { "w25q128", INFO(0xef4018, 0, 64 * 1024, 256, SECT_4K) },
-    { "w25q256", INFO(0xef4019, 0, 64 * 1024, 512, SECT_4K) },
+    { "w25q256", INFO(0xef4019, 0, 64 * 1024, 512, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
+    { "w25m512jv", INFO(0xef7119, 0, 64 * 1024, 1024,
+            SECT_4K | SPI_NOR_QUAD_READ | SPI_NOR_DUAL_READ) },

    /* Catalyst / On Semiconductor -- non-JEDEC */
    { "cat25c11", CAT25_INFO( 16, 8, 16, 1, SPI_NOR_NO_ERASE | SPI_NOR_NO_FR) },

@@ -1403,8 +1394,9 @@ static int macronix_quad_enable(struct spi_nor *nor)

    write_sr(nor, val | SR_QUAD_EN_MX);

-    if (spi_nor_wait_till_ready(nor))
-        return 1;
+    ret = spi_nor_wait_till_ready(nor);
+    if (ret)
+        return ret;

    ret = read_sr(nor);
    if (!(ret > 0 && (ret & SR_QUAD_EN_MX))) {

@@ -1460,30 +1452,6 @@ static int spansion_quad_enable(struct spi_nor *nor)
    return 0;
}

-static int set_quad_mode(struct spi_nor *nor, const struct flash_info *info)
-{
-    int status;
-
-    switch (JEDEC_MFR(info)) {
-    case SNOR_MFR_MACRONIX:
-        status = macronix_quad_enable(nor);
-        if (status) {
-            dev_err(nor->dev, "Macronix quad-read not enabled\n");
-            return -EINVAL;
-        }
-        return status;
-    case SNOR_MFR_MICRON:
-        return 0;
-    default:
-        status = spansion_quad_enable(nor);
-        if (status) {
-            dev_err(nor->dev, "Spansion quad-read not enabled\n");
-            return -EINVAL;
-        }
-        return status;
-    }
-}
-
static int spi_nor_check(struct spi_nor *nor)
{
    if (!nor->dev || !nor->read || !nor->write ||
@@ -1536,8 +1504,349 @@ static int s3an_nor_scan(const struct flash_info *info, struct spi_nor *nor)
    return 0;
}

-int spi_nor_scan(struct spi_nor *nor, const char *name, enum read_mode mode)
+struct spi_nor_read_command {
+    u8 num_mode_clocks;
+    u8 num_wait_states;
+    u8 opcode;
+    enum spi_nor_protocol proto;
+};
+
+struct spi_nor_pp_command {
+    u8 opcode;
+    enum spi_nor_protocol proto;
+};
+
+enum spi_nor_read_command_index {
+    SNOR_CMD_READ,
+    SNOR_CMD_READ_FAST,
+    SNOR_CMD_READ_1_1_1_DTR,
+
+    /* Dual SPI */
+    SNOR_CMD_READ_1_1_2,
+    SNOR_CMD_READ_1_2_2,
+    SNOR_CMD_READ_2_2_2,
+    SNOR_CMD_READ_1_2_2_DTR,
+
+    /* Quad SPI */
+    SNOR_CMD_READ_1_1_4,
+    SNOR_CMD_READ_1_4_4,
+    SNOR_CMD_READ_4_4_4,
+    SNOR_CMD_READ_1_4_4_DTR,
+
+    /* Octo SPI */
+    SNOR_CMD_READ_1_1_8,
+    SNOR_CMD_READ_1_8_8,
+    SNOR_CMD_READ_8_8_8,
+    SNOR_CMD_READ_1_8_8_DTR,
+
+    SNOR_CMD_READ_MAX
+};
+
+enum spi_nor_pp_command_index {
+    SNOR_CMD_PP,
+
+    /* Quad SPI */
+    SNOR_CMD_PP_1_1_4,
+    SNOR_CMD_PP_1_4_4,
+    SNOR_CMD_PP_4_4_4,
+
+    /* Octo SPI */
+    SNOR_CMD_PP_1_1_8,
+    SNOR_CMD_PP_1_8_8,
+    SNOR_CMD_PP_8_8_8,
+
+    SNOR_CMD_PP_MAX
+};
+
+struct spi_nor_flash_parameter {
+    u64 size;
+    u32 page_size;
+
+    struct spi_nor_hwcaps hwcaps;
+    struct spi_nor_read_command reads[SNOR_CMD_READ_MAX];
+    struct spi_nor_pp_command page_programs[SNOR_CMD_PP_MAX];
+
+    int (*quad_enable)(struct spi_nor *nor);
+};
+
+static void
+spi_nor_set_read_settings(struct spi_nor_read_command *read,
+                          u8 num_mode_clocks,
+                          u8 num_wait_states,
+                          u8 opcode,
+                          enum spi_nor_protocol proto)
+{
+    read->num_mode_clocks = num_mode_clocks;
+    read->num_wait_states = num_wait_states;
+    read->opcode = opcode;
+    read->proto = proto;
+}
+
+static void
+spi_nor_set_pp_settings(struct spi_nor_pp_command *pp,
+                        u8 opcode,
+                        enum spi_nor_protocol proto)
+{
+    pp->opcode = opcode;
+    pp->proto = proto;
+}
+
+static int spi_nor_init_params(struct spi_nor *nor,
+                               const struct flash_info *info,
+                               struct spi_nor_flash_parameter *params)
+{
+    /* Set legacy flash parameters as default. */
+    memset(params, 0, sizeof(*params));
+
+    /* Set SPI NOR sizes. */
+    params->size = info->sector_size * info->n_sectors;
+    params->page_size = info->page_size;
+
+    /* (Fast) Read settings. */
+    params->hwcaps.mask |= SNOR_HWCAPS_READ;
+    spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ],
+                              0, 0, SPINOR_OP_READ,
+                              SNOR_PROTO_1_1_1);
+
+    if (!(info->flags & SPI_NOR_NO_FR)) {
+        params->hwcaps.mask |= SNOR_HWCAPS_READ_FAST;
+        spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_FAST],
+                                  0, 8, SPINOR_OP_READ_FAST,
+                                  SNOR_PROTO_1_1_1);
+    }
+
+    if (info->flags & SPI_NOR_DUAL_READ) {
+        params->hwcaps.mask |= SNOR_HWCAPS_READ_1_1_2;
+        spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_1_1_2],
+                                  0, 8, SPINOR_OP_READ_1_1_2,
+                                  SNOR_PROTO_1_1_2);
+    }
+
+    if (info->flags & SPI_NOR_QUAD_READ) {
+        params->hwcaps.mask |= SNOR_HWCAPS_READ_1_1_4;
+        spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_1_1_4],
+                                  0, 8, SPINOR_OP_READ_1_1_4,
+                                  SNOR_PROTO_1_1_4);
+    }
+
+    /* Page Program settings. */
+    params->hwcaps.mask |= SNOR_HWCAPS_PP;
+    spi_nor_set_pp_settings(&params->page_programs[SNOR_CMD_PP],
+                            SPINOR_OP_PP, SNOR_PROTO_1_1_1);
+
+    /* Select the procedure to set the Quad Enable bit. */
+    if (params->hwcaps.mask & (SNOR_HWCAPS_READ_QUAD |
+                               SNOR_HWCAPS_PP_QUAD)) {
+        switch (JEDEC_MFR(info)) {
+        case SNOR_MFR_MACRONIX:
+            params->quad_enable = macronix_quad_enable;
+            break;
+
+        case SNOR_MFR_MICRON:
+            break;
+
+        default:
+            params->quad_enable = spansion_quad_enable;
+            break;
+        }
+    }
+
+    return 0;
+}
+
+static int spi_nor_hwcaps2cmd(u32 hwcaps, const int table[][2], size_t size)
+{
+    size_t i;
+
+    for (i = 0; i < size; i++)
+        if (table[i][0] == (int)hwcaps)
+            return table[i][1];
+
+    return -EINVAL;
+}
+
+static int spi_nor_hwcaps_read2cmd(u32 hwcaps)
+{
+    static const int hwcaps_read2cmd[][2] = {
+        { SNOR_HWCAPS_READ,           SNOR_CMD_READ },
+        { SNOR_HWCAPS_READ_FAST,      SNOR_CMD_READ_FAST },
+        { SNOR_HWCAPS_READ_1_1_1_DTR, SNOR_CMD_READ_1_1_1_DTR },
+        { SNOR_HWCAPS_READ_1_1_2,     SNOR_CMD_READ_1_1_2 },
+        { SNOR_HWCAPS_READ_1_2_2,     SNOR_CMD_READ_1_2_2 },
+        { SNOR_HWCAPS_READ_2_2_2,     SNOR_CMD_READ_2_2_2 },
+        { SNOR_HWCAPS_READ_1_2_2_DTR, SNOR_CMD_READ_1_2_2_DTR },
+        { SNOR_HWCAPS_READ_1_1_4,     SNOR_CMD_READ_1_1_4 },
+        { SNOR_HWCAPS_READ_1_4_4,     SNOR_CMD_READ_1_4_4 },
+        { SNOR_HWCAPS_READ_4_4_4,     SNOR_CMD_READ_4_4_4 },
+        { SNOR_HWCAPS_READ_1_4_4_DTR, SNOR_CMD_READ_1_4_4_DTR },
+        { SNOR_HWCAPS_READ_1_1_8,     SNOR_CMD_READ_1_1_8 },
+        { SNOR_HWCAPS_READ_1_8_8,     SNOR_CMD_READ_1_8_8 },
+        { SNOR_HWCAPS_READ_8_8_8,     SNOR_CMD_READ_8_8_8 },
+        { SNOR_HWCAPS_READ_1_8_8_DTR, SNOR_CMD_READ_1_8_8_DTR },
+    };
+
+    return spi_nor_hwcaps2cmd(hwcaps, hwcaps_read2cmd,
+                              ARRAY_SIZE(hwcaps_read2cmd));
+}
+
+static int spi_nor_hwcaps_pp2cmd(u32 hwcaps)
+{
+    static const int hwcaps_pp2cmd[][2] = {
+        { SNOR_HWCAPS_PP,       SNOR_CMD_PP },
+        { SNOR_HWCAPS_PP_1_1_4, SNOR_CMD_PP_1_1_4 },
+        { SNOR_HWCAPS_PP_1_4_4, SNOR_CMD_PP_1_4_4 },
+        { SNOR_HWCAPS_PP_4_4_4, SNOR_CMD_PP_4_4_4 },
+        { SNOR_HWCAPS_PP_1_1_8, SNOR_CMD_PP_1_1_8 },
+        { SNOR_HWCAPS_PP_1_8_8, SNOR_CMD_PP_1_8_8 },
+        { SNOR_HWCAPS_PP_8_8_8, SNOR_CMD_PP_8_8_8 },
+    };
+
+    return spi_nor_hwcaps2cmd(hwcaps, hwcaps_pp2cmd,
+                              ARRAY_SIZE(hwcaps_pp2cmd));
+}
+
+static int spi_nor_select_read(struct spi_nor *nor,
+                               const struct spi_nor_flash_parameter *params,
+                               u32 shared_hwcaps)
+{
+    int cmd, best_match = fls(shared_hwcaps & SNOR_HWCAPS_READ_MASK) - 1;
+    const struct spi_nor_read_command *read;
+
+    if (best_match < 0)
+        return -EINVAL;
+
+    cmd = spi_nor_hwcaps_read2cmd(BIT(best_match));
+    if (cmd < 0)
+        return -EINVAL;
+
+    read = &params->reads[cmd];
+    nor->read_opcode = read->opcode;
+    nor->read_proto = read->proto;
+
+    /*
+     * In the spi-nor framework, we don't need to make the difference
+     * between mode clock cycles and wait state clock cycles.
+     * Indeed, the value of the mode clock cycles is used by a QSPI
+     * flash memory to know whether it should enter or leave its 0-4-4
+     * (Continuous Read / XIP) mode.
+     * eXecution In Place is out of the scope of the mtd sub-system.
+     * Hence we choose to merge both mode and wait state clock cycles
+     * into the so called dummy clock cycles.
+     */
+    nor->read_dummy = read->num_mode_clocks + read->num_wait_states;
+    return 0;
+}
+
+static int spi_nor_select_pp(struct spi_nor *nor,
+                             const struct spi_nor_flash_parameter *params,
+                             u32 shared_hwcaps)
+{
+    int cmd, best_match = fls(shared_hwcaps & SNOR_HWCAPS_PP_MASK) - 1;
+    const struct spi_nor_pp_command *pp;
+
+    if (best_match < 0)
+        return -EINVAL;
+
+    cmd = spi_nor_hwcaps_pp2cmd(BIT(best_match));
+    if (cmd < 0)
+        return -EINVAL;
+
+    pp = &params->page_programs[cmd];
+    nor->program_opcode = pp->opcode;
+    nor->write_proto = pp->proto;
+    return 0;
+}
+
+static int spi_nor_select_erase(struct spi_nor *nor,
+                                const struct flash_info *info)
+{
+    struct mtd_info *mtd = &nor->mtd;
+
+#ifdef CONFIG_MTD_SPI_NOR_USE_4K_SECTORS
+    /* prefer "small sector" erase if possible */
+    if (info->flags & SECT_4K) {
+        nor->erase_opcode = SPINOR_OP_BE_4K;
+        mtd->erasesize = 4096;
+    } else if (info->flags & SECT_4K_PMC) {
+        nor->erase_opcode = SPINOR_OP_BE_4K_PMC;
+        mtd->erasesize = 4096;
+    } else
+#endif
+    {
+        nor->erase_opcode = SPINOR_OP_SE;
+        mtd->erasesize = info->sector_size;
+    }
+    return 0;
+}
+
+static int spi_nor_setup(struct spi_nor *nor, const struct flash_info *info,
+                         const struct spi_nor_flash_parameter *params,
+                         const struct spi_nor_hwcaps *hwcaps)
+{
+    u32 ignored_mask, shared_mask;
+    bool enable_quad_io;
+    int err;
+
+    /*
+     * Keep only the hardware capabilities supported by both the SPI
+     * controller and the SPI flash memory.
+     */
+    shared_mask = hwcaps->mask & params->hwcaps.mask;
+
+    /* SPI n-n-n protocols are not supported yet. */
+    ignored_mask = (SNOR_HWCAPS_READ_2_2_2 |
+                    SNOR_HWCAPS_READ_4_4_4 |
+                    SNOR_HWCAPS_READ_8_8_8 |
+                    SNOR_HWCAPS_PP_4_4_4 |
+                    SNOR_HWCAPS_PP_8_8_8);
+    if (shared_mask & ignored_mask) {
+        dev_dbg(nor->dev,
+                "SPI n-n-n protocols are not supported yet.\n");
+        shared_mask &= ~ignored_mask;
+    }
+
+    /* Select the (Fast) Read command. */
+    err = spi_nor_select_read(nor, params, shared_mask);
+    if (err) {
+        dev_err(nor->dev,
+                "can't select read settings supported by both the SPI controller and memory.\n");
+        return err;
+    }
+
+    /* Select the Page Program command. */
+    err = spi_nor_select_pp(nor, params, shared_mask);
+    if (err) {
+        dev_err(nor->dev,
+                "can't select write settings supported by both the SPI controller and memory.\n");
+        return err;
+    }
+
+    /* Select the Sector Erase command. */
+    err = spi_nor_select_erase(nor, info);
+    if (err) {
+        dev_err(nor->dev,
+                "can't select erase settings supported by both the SPI controller and memory.\n");
+        return err;
+    }
+
+    /* Enable Quad I/O if needed. */
+    enable_quad_io = (spi_nor_get_protocol_width(nor->read_proto) == 4 ||
+                      spi_nor_get_protocol_width(nor->write_proto) == 4);
+    if (enable_quad_io && params->quad_enable) {
+        err = params->quad_enable(nor);
+        if (err) {
+            dev_err(nor->dev, "quad mode not supported\n");
+            return err;
+        }
+    }
+
+    return 0;
+}
+
+int spi_nor_scan(struct spi_nor *nor, const char *name,
+                 const struct spi_nor_hwcaps *hwcaps)
{
+    struct spi_nor_flash_parameter params;
    const struct flash_info *info = NULL;
    struct device *dev = nor->dev;
    struct mtd_info *mtd = &nor->mtd;

@@ -1549,6 +1858,11 @@ int spi_nor_scan(struct spi_nor *nor, const char *name, enum read_mode mode)
    if (ret)
        return ret;

+    /* Reset SPI protocol for all commands. */
+    nor->reg_proto = SNOR_PROTO_1_1_1;
+    nor->read_proto = SNOR_PROTO_1_1_1;
+    nor->write_proto = SNOR_PROTO_1_1_1;
+
    if (name)
        info = spi_nor_match_id(name);
    /* Try to auto-detect if chip name wasn't specified or not found */

@@ -1591,6 +1905,11 @@ int spi_nor_scan(struct spi_nor *nor, const char *name, enum read_mode mode)
    if (info->flags & SPI_S3AN)
        nor->flags |= SNOR_F_READY_XSR_RDY;

+    /* Parse the Serial Flash Discoverable Parameters table. */
+    ret = spi_nor_init_params(nor, info, &params);
+    if (ret)
+        return ret;
+
    /*
     * Atmel, SST, Intel/Numonyx, and others serial NOR tend to power up
     * with the software protection bits set

@@ -1611,7 +1930,7 @@ int spi_nor_scan(struct spi_nor *nor, const char *name, enum read_mode mode)
    mtd->type = MTD_NORFLASH;
    mtd->writesize = 1;
    mtd->flags = MTD_CAP_NORFLASH;
-    mtd->size = info->sector_size * info->n_sectors;
+    mtd->size = params.size;
    mtd->_erase = spi_nor_erase;
    mtd->_read = spi_nor_read;

@@ -1642,75 +1961,38 @@ int spi_nor_scan(struct spi_nor *nor, const char *name, enum read_mode mode)
    if (info->flags & NO_CHIP_ERASE)
        nor->flags |= SNOR_F_NO_OP_CHIP_ERASE;

-#ifdef CONFIG_MTD_SPI_NOR_USE_4K_SECTORS
-    /* prefer "small sector" erase if possible */
-    if (info->flags & SECT_4K) {
-        nor->erase_opcode = SPINOR_OP_BE_4K;
-        mtd->erasesize = 4096;
-    } else if (info->flags & SECT_4K_PMC) {
-        nor->erase_opcode = SPINOR_OP_BE_4K_PMC;
-        mtd->erasesize = 4096;
-    } else
-#endif
-    {
-        nor->erase_opcode = SPINOR_OP_SE;
-        mtd->erasesize = info->sector_size;
-    }
-
    if (info->flags & SPI_NOR_NO_ERASE)
        mtd->flags |= MTD_NO_ERASE;

    mtd->dev.parent = dev;
-    nor->page_size = info->page_size;
+    nor->page_size = params.page_size;
    mtd->writebufsize = nor->page_size;

    if (np) {
        /* If we were instantiated by DT, use it */
        if (of_property_read_bool(np, "m25p,fast-read"))
-            nor->flash_read = SPI_NOR_FAST;
+            params.hwcaps.mask |= SNOR_HWCAPS_READ_FAST;
        else
-            nor->flash_read = SPI_NOR_NORMAL;
+            params.hwcaps.mask &= ~SNOR_HWCAPS_READ_FAST;
    } else {
        /* If we weren't instantiated by DT, default to fast-read */
-        nor->flash_read = SPI_NOR_FAST;
+        params.hwcaps.mask |= SNOR_HWCAPS_READ_FAST;
    }

    /* Some devices cannot do fast-read, no matter what DT tells us */
    if (info->flags & SPI_NOR_NO_FR)
-        nor->flash_read = SPI_NOR_NORMAL;
+        params.hwcaps.mask &= ~SNOR_HWCAPS_READ_FAST;

-    /* Quad/Dual-read mode takes precedence over fast/normal */
-    if (mode == SPI_NOR_QUAD && info->flags & SPI_NOR_QUAD_READ) {
-        ret = set_quad_mode(nor, info);
-        if (ret) {
-            dev_err(dev, "quad mode not supported\n");
-            return ret;
-        }
-        nor->flash_read = SPI_NOR_QUAD;
-    } else if (mode == SPI_NOR_DUAL && info->flags & SPI_NOR_DUAL_READ) {
-        nor->flash_read = SPI_NOR_DUAL;
-    }
-
-    /* Default commands */
-    switch (nor->flash_read) {
-    case SPI_NOR_QUAD:
-        nor->read_opcode = SPINOR_OP_READ_1_1_4;
-        break;
-    case SPI_NOR_DUAL:
-        nor->read_opcode = SPINOR_OP_READ_1_1_2;
-        break;
-    case SPI_NOR_FAST:
-        nor->read_opcode = SPINOR_OP_READ_FAST;
-        break;
-    case SPI_NOR_NORMAL:
-        nor->read_opcode = SPINOR_OP_READ;
-        break;
-    default:
-        dev_err(dev, "No Read opcode defined\n");
-        return -EINVAL;
-    }
-
-    nor->program_opcode = SPINOR_OP_PP;
+    /*
+     * Configure the SPI memory:
+     * - select op codes for (Fast) Read, Page Program and Sector Erase.
+     * - set the number of dummy cycles (mode cycles + wait states).
+     * - set the SPI protocols for register and memory accesses.
+     * - set the Quad Enable bit if needed (required by SPI x-y-4 protos).
+     */
+    ret = spi_nor_setup(nor, info, &params, hwcaps);
+    if (ret)
+        return ret;

    if (info->addr_width)
        nor->addr_width = info->addr_width;

@@ -1732,8 +2014,6 @@ int spi_nor_scan(struct spi_nor *nor, const char *name, enum read_mode mode)
        return -EINVAL;
    }

-    nor->read_dummy = spi_nor_read_dummy_cycles(nor);
-
    if (info->flags & SPI_S3AN) {
        ret = s3an_nor_scan(info, nor);
        if (ret)

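The heart of the new spi-nor core code above is spi_nor_select_read(): take the highest set bit of the capability mask shared by controller and flash, map it to a read-command descriptor, and fold mode clocks and wait states into a single dummy-cycle count. The following self-contained user-space sketch mirrors that selection rule; the table contents and the hand-rolled highest_bit() stand in for the kernel's descriptors and fls(), so treat the values as illustrative rather than authoritative.

/* Sketch of best-match read selection and dummy-cycle merging. */
#include <stdio.h>
#include <stdint.h>

enum { CMD_READ, CMD_READ_FAST, CMD_READ_1_1_2, CMD_READ_1_1_4, CMD_MAX };

struct read_cmd {
    uint8_t opcode;
    uint8_t num_mode_clocks;
    uint8_t num_wait_states;
};

static const struct read_cmd reads[CMD_MAX] = {
    [CMD_READ]       = { 0x03, 0, 0 },
    [CMD_READ_FAST]  = { 0x0b, 0, 8 },
    [CMD_READ_1_1_2] = { 0x3b, 0, 8 },
    [CMD_READ_1_1_4] = { 0x6b, 0, 8 },
};

/* bit positions follow the "higher bit = preferred" ordering */
static const int bit_to_cmd[] = { CMD_READ, CMD_READ_FAST, -1,
                                  CMD_READ_1_1_2, -1, -1, -1, CMD_READ_1_1_4 };

static int highest_bit(uint32_t mask)   /* poor man's fls() - 1 */
{
    int pos = -1;

    while (mask) {
        pos++;
        mask >>= 1;
    }
    return pos;
}

int main(void)
{
    uint32_t shared = (1u << 0) | (1u << 1) | (1u << 3); /* no quad */
    int best = highest_bit(shared);
    int cmd = (best >= 0) ? bit_to_cmd[best] : -1;

    if (cmd < 0)
        return 1;
    printf("chosen opcode 0x%02x, read_dummy = %u\n", reads[cmd].opcode,
           reads[cmd].num_mode_clocks + reads[cmd].num_wait_states);
    return 0;
}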
@@ -19,6 +19,7 @@
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/reset.h>
+#include <linux/sizes.h>

#define QUADSPI_CR        0x00
#define CR_EN            BIT(0)

@@ -192,15 +193,15 @@ static void stm32_qspi_set_framemode(struct spi_nor *nor,
    cmd->framemode = CCR_IMODE_1;

    if (read) {
-        switch (nor->flash_read) {
-        case SPI_NOR_NORMAL:
-        case SPI_NOR_FAST:
+        switch (nor->read_proto) {
+        default:
+        case SNOR_PROTO_1_1_1:
            dmode = CCR_DMODE_1;
            break;
-        case SPI_NOR_DUAL:
+        case SNOR_PROTO_1_1_2:
            dmode = CCR_DMODE_2;
            break;
-        case SPI_NOR_QUAD:
+        case SNOR_PROTO_1_1_4:
            dmode = CCR_DMODE_4;
            break;
        }

@@ -375,7 +376,7 @@ static ssize_t stm32_qspi_read(struct spi_nor *nor, loff_t from, size_t len,
    struct stm32_qspi_cmd cmd;
    int err;

-    dev_dbg(qspi->dev, "read(%#.2x): buf:%p from:%#.8x len:%#x\n",
+    dev_dbg(qspi->dev, "read(%#.2x): buf:%p from:%#.8x len:%#zx\n",
        nor->read_opcode, buf, (u32)from, len);

    memset(&cmd, 0, sizeof(cmd));

@@ -402,7 +403,7 @@ static ssize_t stm32_qspi_write(struct spi_nor *nor, loff_t to, size_t len,
    struct stm32_qspi_cmd cmd;
    int err;

-    dev_dbg(dev, "write(%#.2x): buf:%p to:%#.8x len:%#x\n",
+    dev_dbg(dev, "write(%#.2x): buf:%p to:%#.8x len:%#zx\n",
        nor->program_opcode, buf, (u32)to, len);

    memset(&cmd, 0, sizeof(cmd));

@@ -480,7 +481,12 @@ static void stm32_qspi_unprep(struct spi_nor *nor, enum spi_nor_ops ops)
static int stm32_qspi_flash_setup(struct stm32_qspi *qspi,
                  struct device_node *np)
{
-    u32 width, flash_read, presc, cs_num, max_rate = 0;
+    struct spi_nor_hwcaps hwcaps = {
+        .mask = SNOR_HWCAPS_READ |
+            SNOR_HWCAPS_READ_FAST |
+            SNOR_HWCAPS_PP,
+    };
+    u32 width, presc, cs_num, max_rate = 0;
    struct stm32_qspi_flash *flash;
    struct mtd_info *mtd;
    int ret;

@@ -499,12 +505,10 @@ static int stm32_qspi_flash_setup(struct stm32_qspi *qspi,
        width = 1;

    if (width == 4)
-        flash_read = SPI_NOR_QUAD;
+        hwcaps.mask |= SNOR_HWCAPS_READ_1_1_4;
    else if (width == 2)
-        flash_read = SPI_NOR_DUAL;
-    else if (width == 1)
-        flash_read = SPI_NOR_NORMAL;
-    else
+        hwcaps.mask |= SNOR_HWCAPS_READ_1_1_2;
+    else if (width != 1)
        return -EINVAL;

    flash = &qspi->flash[cs_num];

@@ -539,7 +543,7 @@ static int stm32_qspi_flash_setup(struct stm32_qspi *qspi,
     */
    flash->fsize = FSIZE_VAL(SZ_1K);

-    ret = spi_nor_scan(&flash->nor, NULL, flash_read);
+    ret = spi_nor_scan(&flash->nor, NULL, &hwcaps);
    if (ret) {
        dev_err(qspi->dev, "device scan failed\n");
        return ret;

@@ -102,7 +102,7 @@ static int write_eraseblock2(int ebnum)
        if (unlikely(err || written != subpgsize * k)) {
            pr_err("error: write failed at %#llx\n",
                   (long long)addr);
-            if (written != subpgsize) {
+            if (written != subpgsize * k) {
                pr_err("  write size: %#x\n",
                       subpgsize * k);
                pr_err("  written: %#08zx\n",

@@ -915,6 +915,8 @@ static int spinand_probe(struct spi_device *spi_nand)
    chip->waitfunc = spinand_wait;
    chip->options |= NAND_CACHEPRG;
    chip->select_chip = spinand_select_chip;
+    chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
+    chip->onfi_get_features = nand_onfi_get_set_features_notsupp;

    mtd = nand_to_mtd(chip);

@@ -107,6 +107,8 @@ int nand_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len);
#define NAND_STATUS_READY    0x40
#define NAND_STATUS_WP        0x80

+#define NAND_DATA_IFACE_CHECK_ONLY    -1
+
/*
 * Constants for ECC_MODES
 */

@@ -116,6 +118,7 @@ typedef enum {
    NAND_ECC_HW,
    NAND_ECC_HW_SYNDROME,
    NAND_ECC_HW_OOB_FIRST,
+    NAND_ECC_ON_DIE,
} nand_ecc_modes_t;

enum nand_ecc_algo {

@@ -257,6 +260,8 @@ struct nand_chip;

/* Vendor-specific feature address (Micron) */
#define ONFI_FEATURE_ADDR_READ_RETRY    0x89
+#define ONFI_FEATURE_ON_DIE_ECC        0x90
+#define ONFI_FEATURE_ON_DIE_ECC_EN    BIT(3)

/* ONFI subfeature parameters length */
#define ONFI_SUBFEATURE_PARAM_LEN    4

@@ -476,6 +481,44 @@ static inline void nand_hw_control_init(struct nand_hw_control *nfc)
    init_waitqueue_head(&nfc->wq);
}

+/**
+ * struct nand_ecc_step_info - ECC step information of ECC engine
+ * @stepsize: data bytes per ECC step
+ * @strengths: array of supported strengths
+ * @nstrengths: number of supported strengths
+ */
+struct nand_ecc_step_info {
+    int stepsize;
+    const int *strengths;
+    int nstrengths;
+};
+
+/**
+ * struct nand_ecc_caps - capability of ECC engine
+ * @stepinfos: array of ECC step information
+ * @nstepinfos: number of ECC step information
+ * @calc_ecc_bytes: driver's hook to calculate ECC bytes per step
+ */
+struct nand_ecc_caps {
+    const struct nand_ecc_step_info *stepinfos;
+    int nstepinfos;
+    int (*calc_ecc_bytes)(int step_size, int strength);
+};
+
+/* a shorthand to generate struct nand_ecc_caps with only one ECC stepsize */
+#define NAND_ECC_CAPS_SINGLE(__name, __calc, __step, ...)    \
+static const int __name##_strengths[] = { __VA_ARGS__ };    \
+static const struct nand_ecc_step_info __name##_stepinfo = {    \
+    .stepsize = __step,    \
+    .strengths = __name##_strengths,    \
+    .nstrengths = ARRAY_SIZE(__name##_strengths),    \
+};    \
+static const struct nand_ecc_caps __name = {    \
+    .stepinfos = &__name##_stepinfo,    \
+    .nstepinfos = 1,    \
+    .calc_ecc_bytes = __calc,    \
+}
+
/**
 * struct nand_ecc_ctrl - Control structure for ECC
 * @mode:    ECC mode

@@ -815,7 +858,10 @@ struct nand_manufacturer_ops {
 * @read_retries:    [INTERN] the number of read retry modes supported
 * @onfi_set_features:    [REPLACEABLE] set the features for ONFI nand
 * @onfi_get_features:    [REPLACEABLE] get the features for ONFI nand
- * @setup_data_interface: [OPTIONAL] setup the data interface and timing
+ * @setup_data_interface: [OPTIONAL] setup the data interface and timing. If
+ *              chipnr is set to %NAND_DATA_IFACE_CHECK_ONLY this
+ *              means the configuration should not be applied but
+ *              only checked.
 * @bbt:        [INTERN] bad block table pointer
 * @bbt_td:        [REPLACEABLE] bad block table descriptor for flash
 *            lookup.

@@ -826,9 +872,6 @@ struct nand_manufacturer_ops {
 *            structure which is shared among multiple independent
 *            devices.
 * @priv:        [OPTIONAL] pointer to private chip data
- * @errstat:        [OPTIONAL] hardware specific function to perform
- *            additional error status checks (determine if errors are
- *            correctable).
 * @manufacturer:    [INTERN] Contains manufacturer information
 */

@@ -852,16 +895,13 @@ struct nand_chip {
    int (*waitfunc)(struct mtd_info *mtd, struct nand_chip *this);
    int (*erase)(struct mtd_info *mtd, int page);
    int (*scan_bbt)(struct mtd_info *mtd);
-    int (*errstat)(struct mtd_info *mtd, struct nand_chip *this, int state,
-            int status, int page);
    int (*onfi_set_features)(struct mtd_info *mtd, struct nand_chip *chip,
            int feature_addr, uint8_t *subfeature_para);
    int (*onfi_get_features)(struct mtd_info *mtd, struct nand_chip *chip,
            int feature_addr, uint8_t *subfeature_para);
    int (*setup_read_retry)(struct mtd_info *mtd, int retry_mode);
-    int (*setup_data_interface)(struct mtd_info *mtd,
-                    const struct nand_data_interface *conf,
-                    bool check_only);
+    int (*setup_data_interface)(struct mtd_info *mtd, int chipnr,
+                    const struct nand_data_interface *conf);

    int chip_delay;

@@ -1244,6 +1284,15 @@ int nand_check_erased_ecc_chunk(void *data, int datalen,
                void *extraoob, int extraooblen,
                int threshold);

+int nand_check_ecc_caps(struct nand_chip *chip,
+            const struct nand_ecc_caps *caps, int oobavail);
+
+int nand_match_ecc_req(struct nand_chip *chip,
+            const struct nand_ecc_caps *caps, int oobavail);
+
+int nand_maximize_ecc(struct nand_chip *chip,
+            const struct nand_ecc_caps *caps, int oobavail);
+
/* Default write_oob implementation */
int nand_write_oob_std(struct mtd_info *mtd, struct nand_chip *chip, int page);

@@ -1258,6 +1307,19 @@ int nand_read_oob_std(struct mtd_info *mtd, struct nand_chip *chip, int page);
int nand_read_oob_syndrome(struct mtd_info *mtd, struct nand_chip *chip,
            int page);

+/* Stub used by drivers that do not support GET/SET FEATURES operations */
+int nand_onfi_get_set_features_notsupp(struct mtd_info *mtd,
+                    struct nand_chip *chip, int addr,
+                    u8 *subfeature_param);
+
+/* Default read_page_raw implementation */
+int nand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
+            uint8_t *buf, int oob_required, int page);
+
+/* Default write_page_raw implementation */
+int nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
+            const uint8_t *buf, int oob_required, int page);
+
/* Reset and initialize a NAND device */
int nand_reset(struct nand_chip *chip, int chipnr);

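The new nand_ecc_caps / NAND_ECC_CAPS_SINGLE declarations above describe what a hardware ECC engine can do (step size, available strengths, a per-step byte-cost hook), so the new helpers such as nand_maximize_ecc() can pick the strongest setting whose ECC bytes still fit in the free OOB area. The sketch below is a simplified, self-contained mock of that selection rule, not the kernel implementation; the BCH cost formula and the numbers are only examples.

/* Mock of the "maximize ECC strength under an OOB budget" idea. */
#include <stdio.h>

struct ecc_caps {
    int stepsize;
    const int *strengths;
    int nstrengths;
    int (*calc_ecc_bytes)(int step_size, int strength);
};

/* e.g. a BCH engine needing roughly 14 bits per corrected bit, per step */
static int bch_calc_ecc_bytes(int step_size, int strength)
{
    (void)step_size;
    return (14 * strength + 7) / 8;
}

static int maximize_strength(const struct ecc_caps *caps,
                             int writesize, int oobavail)
{
    int steps = writesize / caps->stepsize;
    int best = -1;

    for (int i = 0; i < caps->nstrengths; i++) {
        int strength = caps->strengths[i];
        int total = steps * caps->calc_ecc_bytes(caps->stepsize, strength);

        if (total <= oobavail && strength > best)
            best = strength;
    }
    return best;
}

int main(void)
{
    static const int strengths[] = { 4, 8, 12, 16 };
    struct ecc_caps caps = { 512, strengths, 4, bch_calc_ecc_bytes };

    /* 2KiB page, with about 48 OOB bytes left over for ECC */
    printf("best strength: %d bits/step\n",
           maximize_strength(&caps, 2048, 48));
    return 0;
}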
@@ -20,6 +20,12 @@
 *
 * For each partition, these fields are available:
 * name: string that will be used to label the partition's MTD device.
+ * types: some partitions can be containers using specific format to describe
+ *    embedded subpartitions / volumes. E.g. many home routers use "firmware"
+ *    partition that contains at least kernel and rootfs. In such case an
+ *    extra parser is needed that will detect these dynamic partitions and
+ *    report them to the MTD subsystem. If set this property stores an array
+ *    of parser names to use when looking for subpartitions.
 * size: the partition size; if defined as MTDPART_SIZ_FULL, the partition
 *    will extend to the end of the master MTD device.
 * offset: absolute starting position within the master MTD device; if

@@ -38,6 +44,7 @@
struct mtd_partition {
    const char *name;        /* identifier string */
+    const char *const *types;    /* names of parsers to use if any */
    uint64_t size;            /* partition size */
    uint64_t offset;        /* offset within the master MTD space */
    uint32_t mask_flags;        /* master MTD flags to mask out for this partition */

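The new "types" field covers the container case described in the comment above: a fixed partition can name the parsers that should be run on it to discover subpartitions. Below is a self-contained illustration; the struct is a trimmed mock of struct mtd_partition, and the parser name "trx" is an assumption based on the TRX parser split out elsewhere in this pull, not something this hunk defines.

/* Illustration only: a static table where "firmware" asks for sub-parsing. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct part {
    const char *name;
    const char *const *types;   /* NULL-terminated list of parser names */
    uint64_t size;
    uint64_t offset;
};

static const char *const firmware_parsers[] = { "trx", NULL };

static const struct part parts[] = {
    { "boot",     NULL,             0x040000, 0x000000 },
    { "firmware", firmware_parsers, 0x7c0000, 0x040000 },
};

int main(void)
{
    for (size_t i = 0; i < sizeof(parts) / sizeof(parts[0]); i++) {
        printf("%-8s @0x%06llx", parts[i].name,
               (unsigned long long)parts[i].offset);
        if (parts[i].types)
            printf("  (scan with: %s)", parts[i].types[0]);
        printf("\n");
    }
    return 0;
}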
@@ -73,6 +73,15 @@
#define SPINOR_OP_BE_32K_4B    0x5c    /* Erase 32KiB block */
#define SPINOR_OP_SE_4B        0xdc    /* Sector erase (usually 64KiB) */

+/* Double Transfer Rate opcodes - defined in JEDEC JESD216B. */
+#define SPINOR_OP_READ_1_1_1_DTR    0x0d
+#define SPINOR_OP_READ_1_2_2_DTR    0xbd
+#define SPINOR_OP_READ_1_4_4_DTR    0xed
+
+#define SPINOR_OP_READ_1_1_1_DTR_4B    0x0e
+#define SPINOR_OP_READ_1_2_2_DTR_4B    0xbe
+#define SPINOR_OP_READ_1_4_4_DTR_4B    0xee
+
/* Used for SST flashes only. */
#define SPINOR_OP_BP        0x02    /* Byte program */
#define SPINOR_OP_WRDI        0x04    /* Write disable */

@@ -119,13 +128,81 @@
/* Configuration Register bits. */
#define CR_QUAD_EN_SPAN        BIT(1)    /* Spansion Quad I/O */

-enum read_mode {
-    SPI_NOR_NORMAL = 0,
-    SPI_NOR_FAST,
-    SPI_NOR_DUAL,
-    SPI_NOR_QUAD,
+/* Supported SPI protocols */
+#define SNOR_PROTO_INST_MASK    GENMASK(23, 16)
+#define SNOR_PROTO_INST_SHIFT    16
+#define SNOR_PROTO_INST(_nbits)    \
+    ((((unsigned long)(_nbits)) << SNOR_PROTO_INST_SHIFT) & \
+     SNOR_PROTO_INST_MASK)
+
+#define SNOR_PROTO_ADDR_MASK    GENMASK(15, 8)
+#define SNOR_PROTO_ADDR_SHIFT    8
+#define SNOR_PROTO_ADDR(_nbits)    \
+    ((((unsigned long)(_nbits)) << SNOR_PROTO_ADDR_SHIFT) & \
+     SNOR_PROTO_ADDR_MASK)
+
+#define SNOR_PROTO_DATA_MASK    GENMASK(7, 0)
+#define SNOR_PROTO_DATA_SHIFT    0
+#define SNOR_PROTO_DATA(_nbits)    \
+    ((((unsigned long)(_nbits)) << SNOR_PROTO_DATA_SHIFT) & \
+     SNOR_PROTO_DATA_MASK)
+
+#define SNOR_PROTO_IS_DTR    BIT(24)    /* Double Transfer Rate */
+
+#define SNOR_PROTO_STR(_inst_nbits, _addr_nbits, _data_nbits)    \
+    (SNOR_PROTO_INST(_inst_nbits) |    \
+     SNOR_PROTO_ADDR(_addr_nbits) |    \
+     SNOR_PROTO_DATA(_data_nbits))
+#define SNOR_PROTO_DTR(_inst_nbits, _addr_nbits, _data_nbits)    \
+    (SNOR_PROTO_IS_DTR |    \
+     SNOR_PROTO_STR(_inst_nbits, _addr_nbits, _data_nbits))
+
+enum spi_nor_protocol {
+    SNOR_PROTO_1_1_1 = SNOR_PROTO_STR(1, 1, 1),
+    SNOR_PROTO_1_1_2 = SNOR_PROTO_STR(1, 1, 2),
+    SNOR_PROTO_1_1_4 = SNOR_PROTO_STR(1, 1, 4),
+    SNOR_PROTO_1_1_8 = SNOR_PROTO_STR(1, 1, 8),
+    SNOR_PROTO_1_2_2 = SNOR_PROTO_STR(1, 2, 2),
+    SNOR_PROTO_1_4_4 = SNOR_PROTO_STR(1, 4, 4),
+    SNOR_PROTO_1_8_8 = SNOR_PROTO_STR(1, 8, 8),
+    SNOR_PROTO_2_2_2 = SNOR_PROTO_STR(2, 2, 2),
+    SNOR_PROTO_4_4_4 = SNOR_PROTO_STR(4, 4, 4),
+    SNOR_PROTO_8_8_8 = SNOR_PROTO_STR(8, 8, 8),
+
+    SNOR_PROTO_1_1_1_DTR = SNOR_PROTO_DTR(1, 1, 1),
+    SNOR_PROTO_1_2_2_DTR = SNOR_PROTO_DTR(1, 2, 2),
+    SNOR_PROTO_1_4_4_DTR = SNOR_PROTO_DTR(1, 4, 4),
+    SNOR_PROTO_1_8_8_DTR = SNOR_PROTO_DTR(1, 8, 8),
};

+static inline bool spi_nor_protocol_is_dtr(enum spi_nor_protocol proto)
+{
+    return !!(proto & SNOR_PROTO_IS_DTR);
+}
+
+static inline u8 spi_nor_get_protocol_inst_nbits(enum spi_nor_protocol proto)
+{
+    return ((unsigned long)(proto & SNOR_PROTO_INST_MASK)) >>
+        SNOR_PROTO_INST_SHIFT;
+}
+
+static inline u8 spi_nor_get_protocol_addr_nbits(enum spi_nor_protocol proto)
+{
+    return ((unsigned long)(proto & SNOR_PROTO_ADDR_MASK)) >>
+        SNOR_PROTO_ADDR_SHIFT;
+}
+
+static inline u8 spi_nor_get_protocol_data_nbits(enum spi_nor_protocol proto)
+{
+    return ((unsigned long)(proto & SNOR_PROTO_DATA_MASK)) >>
+        SNOR_PROTO_DATA_SHIFT;
+}
+
+static inline u8 spi_nor_get_protocol_width(enum spi_nor_protocol proto)
+{
+    return spi_nor_get_protocol_data_nbits(proto);
+}
+
#define SPI_NOR_MAX_CMD_SIZE    8
enum spi_nor_ops {
    SPI_NOR_OPS_READ = 0,

@@ -154,9 +231,11 @@ enum spi_nor_option_flags {
 * @read_opcode:    the read opcode
 * @read_dummy:        the dummy needed by the read operation
 * @program_opcode:    the program opcode
- * @flash_read:        the mode of the read
 * @sst_write_second:    used by the SST write operation
 * @flags:        flag options for the current SPI-NOR (SNOR_F_*)
+ * @read_proto:        the SPI protocol for read operations
+ * @write_proto:    the SPI protocol for write operations
+ * @reg_proto        the SPI protocol for read_reg/write_reg/erase operations
 * @cmd_buf:        used by the write_reg
 * @prepare:        [OPTIONAL] do some preparations for the
 *            read/write/erase/lock/unlock operations

@@ -185,7 +264,9 @@ struct spi_nor {
    u8 read_opcode;
    u8 read_dummy;
    u8 program_opcode;
-    enum read_mode flash_read;
+    enum spi_nor_protocol read_proto;
+    enum spi_nor_protocol write_proto;
+    enum spi_nor_protocol reg_proto;
    bool sst_write_second;
    u32 flags;
    u8 cmd_buf[SPI_NOR_MAX_CMD_SIZE];

@@ -219,11 +300,72 @@ static inline struct device_node *spi_nor_get_flash_node(struct spi_nor *nor)
    return mtd_get_of_node(&nor->mtd);
}

+/**
+ * struct spi_nor_hwcaps - Structure for describing the hardware capabilies
+ * supported by the SPI controller (bus master).
+ * @mask:        the bitmask listing all the supported hw capabilies
+ */
+struct spi_nor_hwcaps {
+    u32 mask;
+};
+
+/*
+ *(Fast) Read capabilities.
+ * MUST be ordered by priority: the higher bit position, the higher priority.
+ * As a matter of performances, it is relevant to use Octo SPI protocols first,
+ * then Quad SPI protocols before Dual SPI protocols, Fast Read and lastly
+ * (Slow) Read.
+ */
+#define SNOR_HWCAPS_READ_MASK        GENMASK(14, 0)
+#define SNOR_HWCAPS_READ        BIT(0)
+#define SNOR_HWCAPS_READ_FAST        BIT(1)
+#define SNOR_HWCAPS_READ_1_1_1_DTR    BIT(2)
+
+#define SNOR_HWCAPS_READ_DUAL        GENMASK(6, 3)
+#define SNOR_HWCAPS_READ_1_1_2        BIT(3)
+#define SNOR_HWCAPS_READ_1_2_2        BIT(4)
+#define SNOR_HWCAPS_READ_2_2_2        BIT(5)
+#define SNOR_HWCAPS_READ_1_2_2_DTR    BIT(6)
+
+#define SNOR_HWCAPS_READ_QUAD        GENMASK(10, 7)
+#define SNOR_HWCAPS_READ_1_1_4        BIT(7)
+#define SNOR_HWCAPS_READ_1_4_4        BIT(8)
+#define SNOR_HWCAPS_READ_4_4_4        BIT(9)
+#define SNOR_HWCAPS_READ_1_4_4_DTR    BIT(10)
+
+#define SNOR_HWCPAS_READ_OCTO        GENMASK(14, 11)
+#define SNOR_HWCAPS_READ_1_1_8        BIT(11)
+#define SNOR_HWCAPS_READ_1_8_8        BIT(12)
+#define SNOR_HWCAPS_READ_8_8_8        BIT(13)
+#define SNOR_HWCAPS_READ_1_8_8_DTR    BIT(14)
+
+/*
+ * Page Program capabilities.
+ * MUST be ordered by priority: the higher bit position, the higher priority.
+ * Like (Fast) Read capabilities, Octo/Quad SPI protocols are preferred to the
+ * legacy SPI 1-1-1 protocol.
+ * Note that Dual Page Programs are not supported because there is no existing
+ * JEDEC/SFDP standard to define them. Also at this moment no SPI flash memory
+ * implements such commands.
+ */
+#define SNOR_HWCAPS_PP_MASK    GENMASK(22, 16)
+#define SNOR_HWCAPS_PP        BIT(16)
+
+#define SNOR_HWCAPS_PP_QUAD    GENMASK(19, 17)
+#define SNOR_HWCAPS_PP_1_1_4    BIT(17)
+#define SNOR_HWCAPS_PP_1_4_4    BIT(18)
+#define SNOR_HWCAPS_PP_4_4_4    BIT(19)
+
+#define SNOR_HWCAPS_PP_OCTO    GENMASK(22, 20)
+#define SNOR_HWCAPS_PP_1_1_8    BIT(20)
+#define SNOR_HWCAPS_PP_1_8_8    BIT(21)
+#define SNOR_HWCAPS_PP_8_8_8    BIT(22)
+
/**
 * spi_nor_scan() - scan the SPI NOR
 * @nor:    the spi_nor structure
 * @name:    the chip type name
- * @mode:    the read mode supported by the driver
+ * @hwcaps:    the hardware capabilities supported by the controller driver
 *
 * The drivers can use this fuction to scan the SPI NOR.
 * In the scanning, it will try to get all the necessary information to

@@ -233,6 +375,7 @@ static inline struct device_node *spi_nor_get_flash_node(struct spi_nor *nor)
 *
 * Return: 0 for success, others for failure.
 */
-int spi_nor_scan(struct spi_nor *nor, const char *name, enum read_mode mode);
+int spi_nor_scan(struct spi_nor *nor, const char *name,
+        const struct spi_nor_hwcaps *hwcaps);

#endif

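The enum spi_nor_protocol values defined above pack the instruction, address and data bus widths into separate byte-wide fields plus a DTR flag, which is what lets helpers like spi_nor_get_protocol_width() recover the data width with a mask and a shift. The standalone program below replicates that encoding for demonstration; the shift values match the header, but the program itself is only an illustration and omits the GENMASK-based field masking.

/* Demo of the SNOR_PROTO_* style bit packing. */
#include <stdio.h>

#define PROTO_INST_SHIFT    16
#define PROTO_ADDR_SHIFT    8
#define PROTO_DATA_SHIFT    0
#define PROTO_IS_DTR        (1ul << 24)

#define PROTO_STR(i, a, d)  (((unsigned long)(i) << PROTO_INST_SHIFT) | \
                             ((unsigned long)(a) << PROTO_ADDR_SHIFT) | \
                             ((unsigned long)(d) << PROTO_DATA_SHIFT))
#define PROTO_DTR(i, a, d)  (PROTO_IS_DTR | PROTO_STR(i, a, d))

static unsigned int data_nbits(unsigned long proto)
{
    return (proto >> PROTO_DATA_SHIFT) & 0xff;
}

int main(void)
{
    unsigned long p144 = PROTO_STR(1, 4, 4);    /* like SNOR_PROTO_1_4_4 */
    unsigned long p144d = PROTO_DTR(1, 4, 4);   /* like ..._1_4_4_DTR */

    printf("1-4-4: 0x%06lx, data width %u\n", p144, data_nbits(p144));
    printf("1-4-4 DTR: 0x%07lx, DTR flag %s\n", p144d,
           (p144d & PROTO_IS_DTR) ? "set" : "clear");
    return 0;
}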