Merge tag 'mtd/for-5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux

Pull MTD updates from Miquel Raynal:
 "MTD core:
   - Fix refcounting for unpartitioned MTDs
   - Fix misspelled function parameter 'section'
   - Remove unneeded break
   - cmdline parser: Fix parsing of part-names with colons
   - mtdpart: Fix misdocumented function parameter 'mtd'

  MTD devices:
   - phram:
      - Allow the user to set the erase page size
      - File headers are not good candidates for kernel-doc
   - physmap-bt1-rom: Fix __iomem addrspace removal warning
   - plat-ram: correctly free memory on error path in platram_probe()
   - powernv_flash: Add function names to headers and fix 'dev'
   - docg3: Fix kernel-doc 'bad line' and 'excessive doc' issues

  UBI cleanup fixes:
   - gluebi: Fix misnamed function parameter documentation
   - wl: Fix a couple of kernel-doc issues
   - eba: Fix a couple of misdocumentation issues
   - kapi: Correct documentation for 'ubi_leb_read_sg's 'sgl' parameter
   - Document 'ubi_num' in struct mtd_dev_param

  Generic NAND core ECC management:
   - Add an I/O request tweaking mechanism
    - Entire rework of the software BCH ECC driver, creation of a real
      ECC engine, getting rid of raw NAND structures, migration to more
      generic prototypes, misc fixes and style cleanup. Now moved to the
      generic NAND layer.
    - Entire rework of the software Hamming ECC driver, creation of a
      real ECC engine, getting rid of raw NAND structures, misc renames,
      comment updates, cleanup, and style fixes. Now moved to the
      generic NAND layer.
    - Necessary plumbing at the NAND level to retrieve generic NAND ECC
      engines (software and on-die).
    - Update of the bindings.

  Raw NAND core:
   - Getting rid of the chip->ecc.priv entry.
   - Fix miscellaneous typos in kernel-doc

  Raw NAND controller drivers:
   - Arasan: Document 'anfc_op's 'buf' member
   - AU1550: Ensure the presence of the right includes
   - Brcmnand: Demote non-conformant kernel-doc headers
   - Cafe: Remove superfluous param doc and add another
   - Davinci: Do not use extra dereferencing
   - Diskonchip: Marking unused variables as __always_unused
   - GPMI:
       - Fix the issue where the driver only senses the CS0 R/B pin
      - Fix the random DMA timeout issue
      - Use a single line for of_device_id
      - Use of_device_get_match_data()
      - Fix reference count leak in gpmi ops
      - Cleanup makefile
      - Fix binding matching of clocks on different SoCs
   - Ingenic: remove redundant get_device() in ingenic_ecc_get()
   - Intel LGM: New NAND controller driver
   - Marvell: Drop useless line
   - Meson:
      - Fix a resource leak in init
      - Fix meson_nfc_dma_buffer_release() arguments
   - mxc:
      - Use device_get_match_data()
      - Use a single line for of_device_id
      - Remove platform data support
   - Omap:
      - Fix a bunch of kernel-doc misdemeanours
       - Finish the half-populated ELM function header, demote empty ones
   - s3c2410: Add documentation for 2 missing struct members
   - Sunxi: Document 'sunxi_nfc's 'caps' member
   - Qcom:
      - Add support for SDX55
      - Support for IPQ6018 QPIC NAND controller
      - Fix DMA sync on FLASH_STATUS register read
   - Rockchip: New NAND controller driver for RK3308, RK2928 and others
   - Sunxi: Add MDMA support

  ONENAND:
   - bbt: Fix expected kernel-doc formatting
   - Fix some kernel-doc misdemeanours
   - Fix expected kernel-doc formatting
   - Use mtd->oops_panic_write as condition

  SPI-NAND core:
   - Creation of a SPI-NAND on-die ECC engine
   - Move ECC related definitions earlier in the driver
   - Fix typo in comment
   - Fill a default ECC provider/algorithm
   - Remove outdated comment
   - Fix OOB read
   - Allow the case where there is no ECC engine
   - Use the external ECC engine logic

  SPI-NAND chip drivers:
   - Micron:
      - Add support for MT29F2G01AAAED
      - Use more specific names
   - Macronix:
      - Add support for MX35LFxG24AD
      - Add support for MX35LFxGE4AD
   - Toshiba: Demote non-conformant kernel-doc header

  SPI-NOR core:
   - Initial support for stateful Octal DTR mode using volatile settings
   - Preliminary support for JEDEC 251 (xSPI) and JEDEC 216D standards
   - Support for Cypress Semper flash
   - Support to specify ECC block size of SPI NOR flashes
   - Fixes to avoid clearing of non-volatile Block Protection bits at
     probe
   - hisi-sfc: Demote non-conformant kernel-doc"

* tag 'mtd/for-5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux: (120 commits)
  mtd: spinand: macronix: Add support for MX35LFxG24AD
  mtd: rawnand: rockchip: NFC driver for RK3308, RK2928 and others
  dt-bindings: mtd: Describe Rockchip RK3xxx NAND flash controller
  mtd: rawnand: gpmi: Use a single line for of_device_id
  mtd: rawnand: gpmi: Fix the random DMA timeout issue
  mtd: rawnand: gpmi: Fix the driver only sense CS0 R/B issue
  mtd: rawnand: qcom: Add NAND controller support for SDX55
  dt-bindings: qcom_nandc: Add SDX55 QPIC NAND documentation
  mtd: rawnand: mxc: Use a single line for of_device_id
  mtd: rawnand: mxc: Use device_get_match_data()
  mtd: rawnand: meson: Fix a resource leak in init
  mtd: rawnand: gpmi: Use of_device_get_match_data()
  mtd: rawnand: Add NAND controller support on Intel LGM SoC
  dt-bindings: mtd: Add Nand Flash Controller support for Intel LGM SoC
  mtd: spinand: micron: Add support for MT29F2G01AAAED
  mtd: spinand: micron: Use more specific names
  mtd: rawnand: gpmi: fix reference count leak in gpmi ops
  dt-bindings: mtd: gpmi-nand: Fix matching of clocks on different SoCs
  mtd: spinand: macronix: Add support for MX35LFxGE4AD
  mtd: plat-ram: correctly free memory on error path in platram_probe()
  ...
Committed by Linus Torvalds on 2020-12-16 14:58:35 -08:00
Parents: 945433be36 4c9e94dff6
Commit: a701262c02
111 changed files with 6110 additions and 1503 deletions


@ -9,9 +9,6 @@ title: Freescale General-Purpose Media Interface (GPMI) binding
maintainers:
- Han Xu <han.xu@nxp.com>
allOf:
- $ref: "nand-controller.yaml"
description: |
The GPMI nand controller provides an interface to control the NAND
flash chips. The device tree may optionally contain sub-nodes
@ -58,22 +55,10 @@ properties:
clocks:
minItems: 1
maxItems: 5
items:
- description: SoC gpmi io clock
- description: SoC gpmi apb clock
- description: SoC gpmi bch clock
- description: SoC gpmi bch apb clock
- description: SoC per1 bch clock
clock-names:
minItems: 1
maxItems: 5
items:
- const: gpmi_io
- const: gpmi_apb
- const: gpmi_bch
- const: gpmi_bch_apb
- const: per1_bch
fsl,use-minimum-ecc:
type: boolean
@ -107,6 +92,67 @@ required:
unevaluatedProperties: false
allOf:
- $ref: "nand-controller.yaml"
- if:
properties:
compatible:
contains:
enum:
- fsl,imx23-gpmi-nand
- fsl,imx28-gpmi-nand
then:
properties:
clocks:
items:
- description: SoC gpmi io clock
clock-names:
items:
- const: gpmi_io
- if:
properties:
compatible:
contains:
enum:
- fsl,imx6q-gpmi-nand
- fsl,imx6sx-gpmi-nand
then:
properties:
clocks:
items:
- description: SoC gpmi io clock
- description: SoC gpmi apb clock
- description: SoC gpmi bch clock
- description: SoC gpmi bch apb clock
- description: SoC per1 bch clock
clock-names:
items:
- const: gpmi_io
- const: gpmi_apb
- const: gpmi_bch
- const: gpmi_bch_apb
- const: per1_bch
- if:
properties:
compatible:
contains:
const: fsl,imx7d-gpmi-nand
then:
properties:
clocks:
items:
- description: SoC gpmi io clock
- description: SoC gpmi bch apb clock
clock-names:
minItems: 2
maxItems: 2
items:
- const: gpmi_io
- const: gpmi_bch_apb
examples:
- |
nand-controller@8000c000 {


@ -0,0 +1,99 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/mtd/intel,lgm-nand.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Intel LGM SoC NAND Controller Device Tree Bindings
allOf:
- $ref: "nand-controller.yaml"
maintainers:
- Ramuthevar Vadivel Murugan <vadivel.muruganx.ramuthevar@linux.intel.com>
properties:
compatible:
const: intel,lgm-nand
reg:
maxItems: 6
reg-names:
items:
- const: ebunand
- const: hsnand
- const: nand_cs0
- const: nand_cs1
- const: addr_sel0
- const: addr_sel1
clocks:
maxItems: 1
dmas:
maxItems: 2
dma-names:
items:
- const: tx
- const: rx
"#address-cells":
const: 1
"#size-cells":
const: 0
patternProperties:
"^nand@[a-f0-9]+$":
type: object
properties:
reg:
minimum: 0
maximum: 7
nand-ecc-mode: true
nand-ecc-algo:
const: hw
additionalProperties: false
required:
- compatible
- reg
- reg-names
- clocks
- dmas
- dma-names
- "#address-cells"
- "#size-cells"
additionalProperties: false
examples:
- |
nand-controller@e0f00000 {
compatible = "intel,lgm-nand";
reg = <0xe0f00000 0x100>,
<0xe1000000 0x300>,
<0xe1400000 0x8000>,
<0xe1c00000 0x1000>,
<0x17400000 0x4>,
<0x17c00000 0x4>;
reg-names = "ebunand", "hsnand", "nand_cs0", "nand_cs1",
"addr_sel0", "addr_sel1";
clocks = <&cgu0 125>;
dmas = <&dma0 8>, <&dma0 9>;
dma-names = "tx", "rx";
#address-cells = <1>;
#size-cells = <0>;
nand@0 {
reg = <0>;
nand-ecc-mode = "hw";
};
};
...


@ -46,15 +46,6 @@ patternProperties:
description:
Contains the native Ready/Busy IDs.
nand-ecc-mode:
description:
Desired ECC engine, either hardware (most of the time
embedded in the NAND controller) or software correction
(Linux will handle the calculations). soft_bch is deprecated
and should be replaced by soft and nand-ecc-algo.
$ref: /schemas/types.yaml#/definitions/string
enum: [none, soft, hw, hw_syndrome, hw_oob_first, on-die]
nand-ecc-engine:
allOf:
- $ref: /schemas/types.yaml#/definitions/phandle
@ -171,7 +162,7 @@ examples:
nand@0 {
reg = <0>;
nand-ecc-mode = "soft";
nand-use-soft-ecc-engine;
nand-ecc-algo = "bch";
/* controller specific properties */


@ -6,8 +6,12 @@ Required properties:
SoC and it uses ADM DMA
* "qcom,ipq4019-nand" - for QPIC NAND controller v1.4.0 being used in
IPQ4019 SoC and it uses BAM DMA
* "qcom,ipq6018-nand" - for QPIC NAND controller v1.5.0 being used in
IPQ6018 SoC and it uses BAM DMA
* "qcom,ipq8074-nand" - for QPIC NAND controller v1.5.0 being used in
IPQ8074 SoC and it uses BAM DMA
* "qcom,sdx55-nand" - for QPIC NAND controller v2.0.0 being used in
SDX55 SoC and it uses BAM DMA
- reg: MMIO address range
- clocks: must contain core clock and always on clock


@ -0,0 +1,161 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/mtd/rockchip,nand-controller.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Rockchip SoCs NAND FLASH Controller (NFC)
allOf:
- $ref: "nand-controller.yaml#"
maintainers:
- Heiko Stuebner <heiko@sntech.de>
properties:
compatible:
oneOf:
- const: rockchip,px30-nfc
- const: rockchip,rk2928-nfc
- const: rockchip,rv1108-nfc
- items:
- const: rockchip,rk3036-nfc
- const: rockchip,rk2928-nfc
- items:
- const: rockchip,rk3308-nfc
- const: rockchip,rv1108-nfc
reg:
maxItems: 1
interrupts:
maxItems: 1
clocks:
minItems: 1
items:
- description: Bus Clock
- description: Module Clock
clock-names:
minItems: 1
items:
- const: ahb
- const: nfc
assigned-clocks:
maxItems: 1
assigned-clock-rates:
maxItems: 1
power-domains:
maxItems: 1
patternProperties:
"^nand@[0-7]$":
type: object
properties:
reg:
minimum: 0
maximum: 7
nand-ecc-mode:
const: hw
nand-ecc-step-size:
const: 1024
nand-ecc-strength:
enum: [16, 24, 40, 60, 70]
description: |
The ECC configurations that can be supported are as follows.
NFC v600 ECC 16, 24, 40, 60
RK2928, RK3066, RK3188
NFC v622 ECC 16, 24, 40, 60
RK3036, RK3128
NFC v800 ECC 16
RK3308, RV1108
NFC v900 ECC 16, 40, 60, 70
RK3326, PX30
nand-bus-width:
const: 8
rockchip,boot-blks:
$ref: /schemas/types.yaml#/definitions/uint32
minimum: 2
default: 16
description:
The NFC driver need this information to select ECC
algorithms supported by the boot ROM.
Only used in combination with 'nand-is-boot-medium'.
rockchip,boot-ecc-strength:
enum: [16, 24, 40, 60, 70]
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32
description: |
If specified it indicates that a different BCH/ECC setting is
supported by the boot ROM.
NFC v600 ECC 16, 24
RK2928, RK3066, RK3188
NFC v622 ECC 16, 24, 40, 60
RK3036, RK3128
NFC v800 ECC 16
RK3308, RV1108
NFC v900 ECC 16, 70
RK3326, PX30
Only used in combination with 'nand-is-boot-medium'.
required:
- compatible
- reg
- interrupts
- clocks
- clock-names
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/rk3308-cru.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
nfc: nand-controller@ff4b0000 {
compatible = "rockchip,rk3308-nfc",
"rockchip,rv1108-nfc";
reg = <0xff4b0000 0x4000>;
interrupts = <GIC_SPI 81 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cru HCLK_NANDC>, <&cru SCLK_NANDC>;
clock-names = "ahb", "nfc";
assigned-clocks = <&clks SCLK_NANDC>;
assigned-clock-rates = <150000000>;
pinctrl-0 = <&flash_ale &flash_bus8 &flash_cle &flash_csn0
&flash_rdn &flash_rdy &flash_wrn>;
pinctrl-names = "default";
#address-cells = <1>;
#size-cells = <0>;
nand@0 {
reg = <0>;
label = "rk-nand";
nand-bus-width = <8>;
nand-ecc-mode = "hw";
nand-ecc-step-size = <1024>;
nand-ecc-strength = <16>;
nand-is-boot-medium;
rockchip,boot-blks = <8>;
rockchip,boot-ecc-strength = <16>;
};
};
...


@ -5,7 +5,7 @@ NAND Error-correction Code
Introduction
============
Having looked at the linux mtd/nand driver and more specific at nand_ecc.c
Having looked at the linux mtd/nand Hamming software ECC engine driver
I felt there was room for optimisation. I bashed the code for a few hours
performing tricks like table lookup removing superfluous code etc.
After that the speed was increased by 35-40%.


@ -972,9 +972,6 @@ hints" for an explanation.
.. kernel-doc:: drivers/mtd/nand/raw/nand_base.c
:export:
.. kernel-doc:: drivers/mtd/nand/raw/nand_ecc.c
:export:
Internal Functions Provided
===========================


@ -20,7 +20,7 @@
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/nand-ecc-sw-hamming.h>
#include <linux/mtd/partitions.h>
#include <linux/io.h>


@ -34,7 +34,7 @@
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/nand-ecc-sw-hamming.h>
#include <linux/mtd/partitions.h>
#include <net/ax88796.h>


@ -35,7 +35,7 @@
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/nand-ecc-sw-hamming.h>
#include <linux/mtd/partitions.h>
#include "devs.h"


@ -24,7 +24,7 @@
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/nand-ecc-sw-hamming.h>
#include <linux/mtd/partitions.h>
#include <linux/platform_data/asoc-s3c24xx_simtec.h>


@ -37,7 +37,7 @@
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/nand-ecc-sw-hamming.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/physmap.h>


@ -40,7 +40,7 @@
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/nand-ecc-sw-hamming.h>
#include <linux/mtd/partitions.h>
#include "gpio-cfg.h"


@ -44,7 +44,7 @@
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/nand-ecc-sw-hamming.h>
#include <linux/mtd/partitions.h>
#include "gpio-cfg.h"


@ -33,7 +33,7 @@
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/nand-ecc-sw-hamming.h>
#include <linux/mtd/partitions.h>
#include "cpu.h"


@ -21,7 +21,7 @@
#include <linux/io.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/nand-ecc-sw-hamming.h>
#include <linux/mtd/partitions.h>
#include <asm/mach/arch.h>


@ -22,7 +22,7 @@
#include <linux/io.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/nand-ecc-sw-hamming.h>
#include <linux/mtd/partitions.h>
#include <asm/mach/arch.h>


@ -16,7 +16,7 @@
#include <linux/io.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/nand-ecc-sw-hamming.h>
#include <linux/mtd/partitions.h>
#include <linux/memblock.h>


@ -152,6 +152,7 @@ config SM_FTL
tristate "SmartMedia/xD new translation layer"
depends on BLOCK
select MTD_BLKDEVS
select MTD_NAND_CORE
select MTD_NAND_ECC_SW_HAMMING
help
This enables EXPERIMENTAL R/W support for SmartMedia/xD


@ -816,7 +816,7 @@ static void doc_read_page_finish(struct docg3 *docg3)
/**
* calc_block_sector - Calculate blocks, pages and ofs.
*
* @from: offset in flash
* @block0: first plane block index calculated
* @block1: second plane block index calculated
@ -1783,10 +1783,9 @@ static int __init doc_set_driver_info(int chip_id, struct mtd_info *mtd)
/**
* doc_probe_device - Check if a device is available
* @base: the io space where the device is probed
* @cascade: the cascade of chips this devices will belong to
* @floor: the floor of the probed device
* @dev: the device
* @cascade: the cascade of chips this devices will belong to
*
* Checks whether a device at the specified IO range, and floor is available.
*


@ -1,19 +1,19 @@
// SPDX-License-Identifier: GPL-2.0-only
/**
/*
* Copyright (c) ???? Jochen Schäuble <psionic@psionic.de>
* Copyright (c) 2003-2004 Joern Engel <joern@wh.fh-wedel.de>
*
* Usage:
*
* one commend line parameter per device, each in the form:
* phram=<name>,<start>,<len>
* phram=<name>,<start>,<len>[,<erasesize>]
* <name> may be up to 63 characters.
* <start> and <len> can be octal, decimal or hexadecimal. If followed
* <start>, <len>, and <erasesize> can be octal, decimal or hexadecimal. If followed
* by "ki", "Mi" or "Gi", the numbers will be interpreted as kilo, mega or
* gigabytes.
* gigabytes. <erasesize> is optional and defaults to PAGE_SIZE.
*
* Example:
* phram=swap,64Mi,128Mi phram=test,900Mi,1Mi
* phram=swap,64Mi,128Mi phram=test,900Mi,1Mi,64Ki
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@ -26,6 +26,7 @@
#include <linux/moduleparam.h>
#include <linux/slab.h>
#include <linux/mtd/mtd.h>
#include <asm/div64.h>
struct phram_mtd_list {
struct mtd_info mtd;
@ -88,7 +89,7 @@ static void unregister_devices(void)
}
}
static int register_device(char *name, phys_addr_t start, size_t len)
static int register_device(char *name, phys_addr_t start, size_t len, uint32_t erasesize)
{
struct phram_mtd_list *new;
int ret = -ENOMEM;
@ -115,7 +116,7 @@ static int register_device(char *name, phys_addr_t start, size_t len)
new->mtd._write = phram_write;
new->mtd.owner = THIS_MODULE;
new->mtd.type = MTD_RAM;
new->mtd.erasesize = PAGE_SIZE;
new->mtd.erasesize = erasesize;
new->mtd.writesize = 1;
ret = -EAGAIN;
@ -204,22 +205,23 @@ static inline void kill_final_newline(char *str)
static int phram_init_called;
/*
* This shall contain the module parameter if any. It is of the form:
* - phram=<device>,<address>,<size> for module case
* - phram.phram=<device>,<address>,<size> for built-in case
* We leave 64 bytes for the device name, 20 for the address and 20 for the
* size.
* Example: phram.phram=rootfs,0xa0000000,512Mi
* - phram=<device>,<address>,<size>[,<erasesize>] for module case
* - phram.phram=<device>,<address>,<size>[,<erasesize>] for built-in case
* We leave 64 bytes for the device name, 20 for the address , 20 for the
* size and 20 for the erasesize.
* Example: phram.phram=rootfs,0xa0000000,512Mi,65536
*/
static char phram_paramline[64 + 20 + 20];
static char phram_paramline[64 + 20 + 20 + 20];
#endif
static int phram_setup(const char *val)
{
char buf[64 + 20 + 20], *str = buf;
char *token[3];
char buf[64 + 20 + 20 + 20], *str = buf;
char *token[4];
char *name;
uint64_t start;
uint64_t len;
uint64_t erasesize = PAGE_SIZE;
int i, ret;
if (strnlen(val, sizeof(buf)) >= sizeof(buf))
@ -228,7 +230,7 @@ static int phram_setup(const char *val)
strcpy(str, val);
kill_final_newline(str);
for (i = 0; i < 3; i++)
for (i = 0; i < 4; i++)
token[i] = strsep(&str, ",");
if (str)
@ -253,11 +255,25 @@ static int phram_setup(const char *val)
goto error;
}
ret = register_device(name, start, len);
if (token[3]) {
ret = parse_num64(&erasesize, token[3]);
if (ret) {
parse_err("illegal erasesize\n");
goto error;
}
}
if (len == 0 || erasesize == 0 || erasesize > len
|| erasesize > UINT_MAX || do_div(len, (uint32_t)erasesize) != 0) {
parse_err("illegal erasesize or len\n");
goto error;
}
ret = register_device(name, start, len, (uint32_t)erasesize);
if (ret)
goto error;
pr_info("%s device: %#llx at %#llx\n", name, len, start);
pr_info("%s device: %#llx at %#llx for erasesize %#llx\n", name, len, start, erasesize);
return 0;
error:
@ -298,7 +314,7 @@ static int phram_param_call(const char *val, const struct kernel_param *kp)
}
module_param_call(phram, phram_param_call, NULL, NULL, 0200);
MODULE_PARM_DESC(phram, "Memory region to map. \"phram=<name>,<start>,<length>\"");
MODULE_PARM_DESC(phram, "Memory region to map. \"phram=<name>,<start>,<length>[,<erasesize>]\"");
static int __init init_phram(void)


@ -126,6 +126,7 @@ out:
}
/**
* powernv_flash_read
* @mtd: the device
* @from: the offset to read from
* @len: the number of bytes to read
@ -142,6 +143,7 @@ static int powernv_flash_read(struct mtd_info *mtd, loff_t from, size_t len,
}
/**
* powernv_flash_write
* @mtd: the device
* @to: the offset to write to
* @len: the number of bytes to write
@ -158,6 +160,7 @@ static int powernv_flash_write(struct mtd_info *mtd, loff_t to, size_t len,
}
/**
* powernv_flash_erase
* @mtd: the device
* @erase: the erase info
* Returns 0 if erase successful or -ERRNO if an error occurred
@ -176,7 +179,7 @@ static int powernv_flash_erase(struct mtd_info *mtd, struct erase_info *erase)
/**
* powernv_flash_set_driver_info - Fill the mtd_info structure and docg3
* structure @pdev: The platform device
* @dev: The device structure
* @mtd: The structure to fill
*/
static int powernv_flash_set_driver_info(struct device *dev,


@ -31,12 +31,12 @@ static map_word __xipram bt1_rom_map_read(struct map_info *map,
unsigned long ofs)
{
void __iomem *src = map->virt + ofs;
unsigned long shift;
unsigned int shift;
map_word ret;
u32 data;
/* Read data within offset dword. */
shift = (unsigned long)src & 0x3;
shift = (uintptr_t)src & 0x3;
data = readl_relaxed(src - shift);
if (!shift) {
ret.x[0] = data;
@ -60,7 +60,7 @@ static void __xipram bt1_rom_map_copy_from(struct map_info *map,
ssize_t len)
{
void __iomem *src = map->virt + from;
ssize_t shift, chunk;
unsigned int shift, chunk;
u32 data;
if (len <= 0 || from >= map->size)
@ -75,7 +75,7 @@ static void __xipram bt1_rom_map_copy_from(struct map_info *map,
* up into the next three stages: unaligned head, aligned body,
* unaligned tail.
*/
shift = (ssize_t)src & 0x3;
shift = (uintptr_t)src & 0x3;
if (shift) {
chunk = min_t(ssize_t, 4 - shift, len);
data = readl_relaxed(src - shift);


@ -177,8 +177,12 @@ static int platram_probe(struct platform_device *pdev)
err = mtd_device_parse_register(info->mtd, pdata->probes, NULL,
pdata->partitions,
pdata->nr_partitions);
if (!err)
dev_info(&pdev->dev, "registered mtd device\n");
if (err) {
dev_err(&pdev->dev, "failed to register mtd device\n");
goto exit_free;
}
dev_info(&pdev->dev, "registered mtd device\n");
if (pdata->nr_partitions) {
/* add the whole device. */
@ -186,10 +190,11 @@ static int platram_probe(struct platform_device *pdev)
if (err) {
dev_err(&pdev->dev,
"failed to register the entire device\n");
goto exit_free;
}
}
return err;
return 0;
exit_free:
platram_remove(pdev);


@ -881,7 +881,6 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
if (copy_from_user(&offs, argp, sizeof(loff_t)))
return -EFAULT;
return mtd_block_isbad(mtd, offs);
break;
}
case MEMSETBADBLOCK:
@ -891,7 +890,6 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
if (copy_from_user(&offs, argp, sizeof(loff_t)))
return -EFAULT;
return mtd_block_markbad(mtd, offs);
break;
}
case OTPSELECT:


@ -993,6 +993,8 @@ int __get_mtd_device(struct mtd_info *mtd)
}
}
master->usecount++;
while (mtd->parent) {
mtd->usecount++;
mtd = mtd->parent;
@ -1059,6 +1061,8 @@ void __put_mtd_device(struct mtd_info *mtd)
mtd = mtd->parent;
}
master->usecount--;
if (master->_put_device)
master->_put_device(master);
@ -1578,7 +1582,7 @@ static int mtd_ooblayout_find_region(struct mtd_info *mtd, int byte,
* ECC byte
* @mtd: mtd info structure
* @eccbyte: the byte we are searching for
* @sectionp: pointer where the section id will be stored
* @section: pointer where the section id will be stored
* @oobregion: OOB region information
*
* Works like mtd_ooblayout_find_region() except it searches for a specific ECC
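
As a reader aid for the __get_mtd_device()/__put_mtd_device() hunks above: a minimal, hypothetical consumer of the public MTD get/put API. With this fix the master's usecount is balanced even when the MTD has no partitions; the device name and the wrapper function are assumptions, not part of this series.

#include <linux/err.h>
#include <linux/mtd/mtd.h>

static int example_use_master(void)
{
	/* Hypothetical device name; get_mtd_device_nm() takes a reference */
	struct mtd_info *mtd = get_mtd_device_nm("spi-nand0");

	if (IS_ERR(mtd))
		return PTR_ERR(mtd);

	/*
	 * With the fix above, master->usecount is now also incremented
	 * for an unpartitioned MTD, so the device cannot go away while
	 * it is in use here.
	 */

	put_mtd_device(mtd);	/* drops the reference taken above */
	return 0;
}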


@ -292,7 +292,7 @@ EXPORT_SYMBOL_GPL(mtd_add_partition);
/**
* __mtd_del_partition - delete MTD partition
*
* @priv: MTD structure to be deleted
* @mtd: MTD structure to be deleted
*
* This function must be called with the partitions mutex locked.
*/


@ -13,7 +13,38 @@ menu "ECC engine support"
config MTD_NAND_ECC
bool
depends on MTD_NAND_CORE
select MTD_NAND_CORE
config MTD_NAND_ECC_SW_HAMMING
bool "Software Hamming ECC engine"
default y if MTD_RAW_NAND
select MTD_NAND_ECC
help
This enables support for software Hamming error
correction. This correction can correct up to 1 bit error
per chunk and detect up to 2 bit errors. While it used to be
widely used with old parts, newer NAND chips usually require
more strength correction and in this case BCH or RS will be
preferred.
config MTD_NAND_ECC_SW_HAMMING_SMC
bool "NAND ECC Smart Media byte order"
depends on MTD_NAND_ECC_SW_HAMMING
default n
help
Software ECC according to the Smart Media Specification.
The original Linux implementation had byte 0 and 1 swapped.
config MTD_NAND_ECC_SW_BCH
bool "Software BCH ECC engine"
select BCH
select MTD_NAND_ECC
default n
help
This enables support for software BCH error correction. Binary BCH
codes are more powerful and cpu intensive than traditional Hamming
ECC codes. They are used with NAND devices requiring more than 1 bit
of error correction.
endmenu


@ -8,3 +8,5 @@ obj-y += raw/
obj-y += spi/
nandcore-$(CONFIG_MTD_NAND_ECC) += ecc.o
nandcore-$(CONFIG_MTD_NAND_ECC_SW_HAMMING) += ecc-sw-hamming.o
nandcore-$(CONFIG_MTD_NAND_ECC_SW_BCH) += ecc-sw-bch.o


@ -207,6 +207,130 @@ int nanddev_mtd_max_bad_blocks(struct mtd_info *mtd, loff_t offs, size_t len)
}
EXPORT_SYMBOL_GPL(nanddev_mtd_max_bad_blocks);
/**
* nanddev_get_ecc_engine() - Find and get a suitable ECC engine
* @nand: NAND device
*/
static int nanddev_get_ecc_engine(struct nand_device *nand)
{
int engine_type;
/* Read the user desires in terms of ECC engine/configuration */
of_get_nand_ecc_user_config(nand);
engine_type = nand->ecc.user_conf.engine_type;
if (engine_type == NAND_ECC_ENGINE_TYPE_INVALID)
engine_type = nand->ecc.defaults.engine_type;
switch (engine_type) {
case NAND_ECC_ENGINE_TYPE_NONE:
return 0;
case NAND_ECC_ENGINE_TYPE_SOFT:
nand->ecc.engine = nand_ecc_get_sw_engine(nand);
break;
case NAND_ECC_ENGINE_TYPE_ON_DIE:
nand->ecc.engine = nand_ecc_get_on_die_hw_engine(nand);
break;
case NAND_ECC_ENGINE_TYPE_ON_HOST:
pr_err("On-host hardware ECC engines not supported yet\n");
break;
default:
pr_err("Missing ECC engine type\n");
}
if (!nand->ecc.engine)
return -EINVAL;
return 0;
}
/**
* nanddev_put_ecc_engine() - Dettach and put the in-use ECC engine
* @nand: NAND device
*/
static int nanddev_put_ecc_engine(struct nand_device *nand)
{
switch (nand->ecc.ctx.conf.engine_type) {
case NAND_ECC_ENGINE_TYPE_ON_HOST:
pr_err("On-host hardware ECC engines not supported yet\n");
break;
case NAND_ECC_ENGINE_TYPE_NONE:
case NAND_ECC_ENGINE_TYPE_SOFT:
case NAND_ECC_ENGINE_TYPE_ON_DIE:
default:
break;
}
return 0;
}
/**
* nanddev_find_ecc_configuration() - Find a suitable ECC configuration
* @nand: NAND device
*/
static int nanddev_find_ecc_configuration(struct nand_device *nand)
{
int ret;
if (!nand->ecc.engine)
return -ENOTSUPP;
ret = nand_ecc_init_ctx(nand);
if (ret)
return ret;
if (!nand_ecc_is_strong_enough(nand))
pr_warn("WARNING: %s: the ECC used on your system is too weak compared to the one required by the NAND chip\n",
nand->mtd.name);
return 0;
}
/**
* nanddev_ecc_engine_init() - Initialize an ECC engine for the chip
* @nand: NAND device
*/
int nanddev_ecc_engine_init(struct nand_device *nand)
{
int ret;
/* Look for the ECC engine to use */
ret = nanddev_get_ecc_engine(nand);
if (ret) {
pr_err("No ECC engine found\n");
return ret;
}
/* No ECC engine requested */
if (!nand->ecc.engine)
return 0;
/* Configure the engine: balance user input and chip requirements */
ret = nanddev_find_ecc_configuration(nand);
if (ret) {
pr_err("No suitable ECC configuration\n");
nanddev_put_ecc_engine(nand);
return ret;
}
return 0;
}
EXPORT_SYMBOL_GPL(nanddev_ecc_engine_init);
/**
* nanddev_ecc_engine_cleanup() - Cleanup ECC engine initializations
* @nand: NAND device
*/
void nanddev_ecc_engine_cleanup(struct nand_device *nand)
{
if (nand->ecc.engine)
nand_ecc_cleanup_ctx(nand);
nanddev_put_ecc_engine(nand);
}
EXPORT_SYMBOL_GPL(nanddev_ecc_engine_cleanup);
/**
* nanddev_init() - Initialize a NAND device
* @nand: NAND device
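
A hedged sketch of how a NAND-family driver could consume the two helpers exported above; the probe/remove wrapper names are hypothetical and error handling is trimmed to the essentials.

#include <linux/mtd/nand.h>

/* Hypothetical probe-time hook: pick and configure an ECC engine */
static int example_setup_ecc(struct nand_device *nand)
{
	int ret;

	/* Honours the DT user config, falls back to the chip defaults */
	ret = nanddev_ecc_engine_init(nand);
	if (ret)
		return ret;

	/* ... register the MTD device, run page I/O through the engine ... */
	return 0;
}

/* Hypothetical remove-time hook: release the ECC engine context */
static void example_teardown_ecc(struct nand_device *nand)
{
	nanddev_ecc_engine_cleanup(nand);
}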


@ -0,0 +1,406 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* This file provides ECC correction for more than 1 bit per block of data,
* using binary BCH codes. It relies on the generic BCH library lib/bch.c.
*
* Copyright © 2011 Ivan Djelic <ivan.djelic@parrot.com>
*/
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/bitops.h>
#include <linux/mtd/nand.h>
#include <linux/mtd/nand-ecc-sw-bch.h>
/**
* nand_ecc_sw_bch_calculate - Calculate the ECC corresponding to a data block
* @nand: NAND device
* @buf: Input buffer with raw data
* @code: Output buffer with ECC
*/
int nand_ecc_sw_bch_calculate(struct nand_device *nand,
const unsigned char *buf, unsigned char *code)
{
struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv;
unsigned int i;
memset(code, 0, engine_conf->code_size);
bch_encode(engine_conf->bch, buf, nand->ecc.ctx.conf.step_size, code);
/* apply mask so that an erased page is a valid codeword */
for (i = 0; i < engine_conf->code_size; i++)
code[i] ^= engine_conf->eccmask[i];
return 0;
}
EXPORT_SYMBOL(nand_ecc_sw_bch_calculate);
/**
* nand_ecc_sw_bch_correct - Detect, correct and report bit error(s)
* @nand: NAND device
* @buf: Raw data read from the chip
* @read_ecc: ECC bytes from the chip
* @calc_ecc: ECC calculated from the raw data
*
* Detect and correct bit errors for a data block.
*/
int nand_ecc_sw_bch_correct(struct nand_device *nand, unsigned char *buf,
unsigned char *read_ecc, unsigned char *calc_ecc)
{
struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv;
unsigned int step_size = nand->ecc.ctx.conf.step_size;
unsigned int *errloc = engine_conf->errloc;
int i, count;
count = bch_decode(engine_conf->bch, NULL, step_size, read_ecc,
calc_ecc, NULL, errloc);
if (count > 0) {
for (i = 0; i < count; i++) {
if (errloc[i] < (step_size * 8))
/* The error is in the data area: correct it */
buf[errloc[i] >> 3] ^= (1 << (errloc[i] & 7));
/* Otherwise the error is in the ECC area: nothing to do */
pr_debug("%s: corrected bitflip %u\n", __func__,
errloc[i]);
}
} else if (count < 0) {
pr_err("ECC unrecoverable error\n");
count = -EBADMSG;
}
return count;
}
EXPORT_SYMBOL(nand_ecc_sw_bch_correct);
/**
* nand_ecc_sw_bch_cleanup - Cleanup software BCH ECC resources
* @nand: NAND device
*/
static void nand_ecc_sw_bch_cleanup(struct nand_device *nand)
{
struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv;
bch_free(engine_conf->bch);
kfree(engine_conf->errloc);
kfree(engine_conf->eccmask);
}
/**
* nand_ecc_sw_bch_init - Initialize software BCH ECC engine
* @nand: NAND device
*
* Returns: a pointer to a new NAND BCH control structure, or NULL upon failure
*
* Initialize NAND BCH error correction. @nand.ecc parameters 'step_size' and
* 'bytes' are used to compute the following BCH parameters:
* m, the Galois field order
* t, the error correction capability
* 'bytes' should be equal to the number of bytes required to store m * t
* bits, where m is such that 2^m - 1 > step_size * 8.
*
* Example: to configure 4 bit correction per 512 bytes, you should pass
* step_size = 512 (thus, m = 13 is the smallest integer such that 2^m - 1 > 512 * 8)
* bytes = 7 (7 bytes are required to store m * t = 13 * 4 = 52 bits)
*/
static int nand_ecc_sw_bch_init(struct nand_device *nand)
{
struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv;
unsigned int eccsize = nand->ecc.ctx.conf.step_size;
unsigned int eccbytes = engine_conf->code_size;
unsigned int m, t, i;
unsigned char *erased_page;
int ret;
m = fls(1 + (8 * eccsize));
t = (eccbytes * 8) / m;
engine_conf->bch = bch_init(m, t, 0, false);
if (!engine_conf->bch)
return -EINVAL;
engine_conf->eccmask = kzalloc(eccbytes, GFP_KERNEL);
engine_conf->errloc = kmalloc_array(t, sizeof(*engine_conf->errloc),
GFP_KERNEL);
if (!engine_conf->eccmask || !engine_conf->errloc) {
ret = -ENOMEM;
goto cleanup;
}
/* Compute and store the inverted ECC of an erased step */
erased_page = kmalloc(eccsize, GFP_KERNEL);
if (!erased_page) {
ret = -ENOMEM;
goto cleanup;
}
memset(erased_page, 0xff, eccsize);
bch_encode(engine_conf->bch, erased_page, eccsize,
engine_conf->eccmask);
kfree(erased_page);
for (i = 0; i < eccbytes; i++)
engine_conf->eccmask[i] ^= 0xff;
/* Verify that the number of code bytes has the expected value */
if (engine_conf->bch->ecc_bytes != eccbytes) {
pr_err("Invalid number of ECC bytes: %u, expected: %u\n",
eccbytes, engine_conf->bch->ecc_bytes);
ret = -EINVAL;
goto cleanup;
}
/* Sanity checks */
if (8 * (eccsize + eccbytes) >= (1 << m)) {
pr_err("ECC step size is too large (%u)\n", eccsize);
ret = -EINVAL;
goto cleanup;
}
return 0;
cleanup:
nand_ecc_sw_bch_cleanup(nand);
return ret;
}
int nand_ecc_sw_bch_init_ctx(struct nand_device *nand)
{
struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
struct mtd_info *mtd = nanddev_to_mtd(nand);
struct nand_ecc_sw_bch_conf *engine_conf;
unsigned int code_size = 0, nsteps;
int ret;
/* Only large page NAND chips may use BCH */
if (mtd->oobsize < 64) {
pr_err("BCH cannot be used with small page NAND chips\n");
return -EINVAL;
}
if (!mtd->ooblayout)
mtd_set_ooblayout(mtd, nand_get_large_page_ooblayout());
conf->engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
conf->algo = NAND_ECC_ALGO_BCH;
conf->step_size = nand->ecc.user_conf.step_size;
conf->strength = nand->ecc.user_conf.strength;
/*
* Board driver should supply ECC size and ECC strength
* values to select how many bits are correctable.
* Otherwise, default to 512 bytes for large page devices and 256 for
* small page devices.
*/
if (!conf->step_size) {
if (mtd->oobsize >= 64)
conf->step_size = 512;
else
conf->step_size = 256;
conf->strength = 4;
}
nsteps = mtd->writesize / conf->step_size;
/* Maximize */
if (nand->ecc.user_conf.flags & NAND_ECC_MAXIMIZE_STRENGTH) {
conf->step_size = 1024;
nsteps = mtd->writesize / conf->step_size;
/* Reserve 2 bytes for the BBM */
code_size = (mtd->oobsize - 2) / nsteps;
conf->strength = code_size * 8 / fls(8 * conf->step_size);
}
if (!code_size)
code_size = DIV_ROUND_UP(conf->strength *
fls(8 * conf->step_size), 8);
if (!conf->strength)
conf->strength = (code_size * 8) / fls(8 * conf->step_size);
if (!code_size && !conf->strength) {
pr_err("Missing ECC parameters\n");
return -EINVAL;
}
engine_conf = kzalloc(sizeof(*engine_conf), GFP_KERNEL);
if (!engine_conf)
return -ENOMEM;
ret = nand_ecc_init_req_tweaking(&engine_conf->req_ctx, nand);
if (ret)
goto free_engine_conf;
engine_conf->code_size = code_size;
engine_conf->nsteps = nsteps;
engine_conf->calc_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
engine_conf->code_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
if (!engine_conf->calc_buf || !engine_conf->code_buf) {
ret = -ENOMEM;
goto free_bufs;
}
nand->ecc.ctx.priv = engine_conf;
nand->ecc.ctx.total = nsteps * code_size;
ret = nand_ecc_sw_bch_init(nand);
if (ret)
goto free_bufs;
/* Verify the layout validity */
if (mtd_ooblayout_count_eccbytes(mtd) !=
engine_conf->nsteps * engine_conf->code_size) {
pr_err("Invalid ECC layout\n");
ret = -EINVAL;
goto cleanup_bch_ctx;
}
return 0;
cleanup_bch_ctx:
nand_ecc_sw_bch_cleanup(nand);
free_bufs:
nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
kfree(engine_conf->calc_buf);
kfree(engine_conf->code_buf);
free_engine_conf:
kfree(engine_conf);
return ret;
}
EXPORT_SYMBOL(nand_ecc_sw_bch_init_ctx);
void nand_ecc_sw_bch_cleanup_ctx(struct nand_device *nand)
{
struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv;
if (engine_conf) {
nand_ecc_sw_bch_cleanup(nand);
nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
kfree(engine_conf->calc_buf);
kfree(engine_conf->code_buf);
kfree(engine_conf);
}
}
EXPORT_SYMBOL(nand_ecc_sw_bch_cleanup_ctx);
static int nand_ecc_sw_bch_prepare_io_req(struct nand_device *nand,
struct nand_page_io_req *req)
{
struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv;
struct mtd_info *mtd = nanddev_to_mtd(nand);
int eccsize = nand->ecc.ctx.conf.step_size;
int eccbytes = engine_conf->code_size;
int eccsteps = engine_conf->nsteps;
int total = nand->ecc.ctx.total;
u8 *ecccalc = engine_conf->calc_buf;
const u8 *data;
int i;
/* Nothing to do for a raw operation */
if (req->mode == MTD_OPS_RAW)
return 0;
/* This engine does not provide BBM/free OOB bytes protection */
if (!req->datalen)
return 0;
nand_ecc_tweak_req(&engine_conf->req_ctx, req);
/* No more preparation for page read */
if (req->type == NAND_PAGE_READ)
return 0;
/* Preparation for page write: derive the ECC bytes and place them */
for (i = 0, data = req->databuf.out;
eccsteps;
eccsteps--, i += eccbytes, data += eccsize)
nand_ecc_sw_bch_calculate(nand, data, &ecccalc[i]);
return mtd_ooblayout_set_eccbytes(mtd, ecccalc, (void *)req->oobbuf.out,
0, total);
}
static int nand_ecc_sw_bch_finish_io_req(struct nand_device *nand,
struct nand_page_io_req *req)
{
struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv;
struct mtd_info *mtd = nanddev_to_mtd(nand);
int eccsize = nand->ecc.ctx.conf.step_size;
int total = nand->ecc.ctx.total;
int eccbytes = engine_conf->code_size;
int eccsteps = engine_conf->nsteps;
u8 *ecccalc = engine_conf->calc_buf;
u8 *ecccode = engine_conf->code_buf;
unsigned int max_bitflips = 0;
u8 *data = req->databuf.in;
int i, ret;
/* Nothing to do for a raw operation */
if (req->mode == MTD_OPS_RAW)
return 0;
/* This engine does not provide BBM/free OOB bytes protection */
if (!req->datalen)
return 0;
/* No more preparation for page write */
if (req->type == NAND_PAGE_WRITE) {
nand_ecc_restore_req(&engine_conf->req_ctx, req);
return 0;
}
/* Finish a page read: retrieve the (raw) ECC bytes*/
ret = mtd_ooblayout_get_eccbytes(mtd, ecccode, req->oobbuf.in, 0,
total);
if (ret)
return ret;
/* Calculate the ECC bytes */
for (i = 0; eccsteps; eccsteps--, i += eccbytes, data += eccsize)
nand_ecc_sw_bch_calculate(nand, data, &ecccalc[i]);
/* Finish a page read: compare and correct */
for (eccsteps = engine_conf->nsteps, i = 0, data = req->databuf.in;
eccsteps;
eccsteps--, i += eccbytes, data += eccsize) {
int stat = nand_ecc_sw_bch_correct(nand, data,
&ecccode[i],
&ecccalc[i]);
if (stat < 0) {
mtd->ecc_stats.failed++;
} else {
mtd->ecc_stats.corrected += stat;
max_bitflips = max_t(unsigned int, max_bitflips, stat);
}
}
nand_ecc_restore_req(&engine_conf->req_ctx, req);
return max_bitflips;
}
static struct nand_ecc_engine_ops nand_ecc_sw_bch_engine_ops = {
.init_ctx = nand_ecc_sw_bch_init_ctx,
.cleanup_ctx = nand_ecc_sw_bch_cleanup_ctx,
.prepare_io_req = nand_ecc_sw_bch_prepare_io_req,
.finish_io_req = nand_ecc_sw_bch_finish_io_req,
};
static struct nand_ecc_engine nand_ecc_sw_bch_engine = {
.ops = &nand_ecc_sw_bch_engine_ops,
};
struct nand_ecc_engine *nand_ecc_sw_bch_get_engine(void)
{
return &nand_ecc_sw_bch_engine;
}
EXPORT_SYMBOL(nand_ecc_sw_bch_get_engine);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Ivan Djelic <ivan.djelic@parrot.com>");
MODULE_DESCRIPTION("NAND software BCH ECC support");
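
To make the geometry maths above concrete, a small worked example under assumed values (the 4-bit-per-512-byte case from the nand_ecc_sw_bch_init() comment), mirroring the code_size computation in nand_ecc_sw_bch_init_ctx():

#include <linux/kernel.h>	/* fls(), DIV_ROUND_UP() */

static unsigned int example_bch_geometry(void)
{
	unsigned int step_size = 512;	/* bytes per ECC step (assumed) */
	unsigned int strength = 4;	/* correctable bits per step (assumed) */
	unsigned int m, code_size;

	/* Galois field order: smallest m such that 2^m - 1 > 8 * step_size */
	m = fls(8 * step_size);				/* fls(4096) = 13 */

	/* Bytes needed to store m * t bits: (4 * 13 + 7) / 8 = 7 */
	code_size = DIV_ROUND_UP(strength * m, 8);

	return code_size;
}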


@ -17,9 +17,9 @@
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/nand.h>
#include <linux/mtd/nand-ecc-sw-hamming.h>
#include <linux/slab.h>
#include <asm/byteorder.h>
/*
@ -75,7 +75,7 @@ static const char bitsperbyte[256] = {
* addressbits is a lookup table to filter out the bits from the xor-ed
* ECC data that identify the faulty location.
* this is only used for repairing parity
* see the comments in nand_correct_data for more details
* see the comments in nand_ecc_sw_hamming_correct for more details
*/
static const char addressbits[256] = {
0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x01, 0x01,
@ -112,30 +112,21 @@ static const char addressbits[256] = {
0x0e, 0x0e, 0x0f, 0x0f, 0x0e, 0x0e, 0x0f, 0x0f
};
/**
* __nand_calculate_ecc - [NAND Interface] Calculate 3-byte ECC for 256/512-byte
* block
* @buf: input buffer with raw data
* @eccsize: data bytes per ECC step (256 or 512)
* @code: output buffer with ECC
* @sm_order: Smart Media byte ordering
*/
void __nand_calculate_ecc(const unsigned char *buf, unsigned int eccsize,
unsigned char *code, bool sm_order)
int ecc_sw_hamming_calculate(const unsigned char *buf, unsigned int step_size,
unsigned char *code, bool sm_order)
{
const u32 *bp = (uint32_t *)buf;
const u32 eccsize_mult = (step_size == 256) ? 1 : 2;
/* current value in buffer */
u32 cur;
/* rp0..rp17 are the various accumulated parities (per byte) */
u32 rp0, rp1, rp2, rp3, rp4, rp5, rp6, rp7, rp8, rp9, rp10, rp11, rp12,
rp13, rp14, rp15, rp16, rp17;
/* Cumulative parity for all data */
u32 par;
/* Cumulative parity at the end of the loop (rp12, rp14, rp16) */
u32 tmppar;
int i;
const uint32_t *bp = (uint32_t *)buf;
/* 256 or 512 bytes/ecc */
const uint32_t eccsize_mult = eccsize >> 8;
uint32_t cur; /* current value in buffer */
/* rp0..rp15..rp17 are the various accumulated parities (per byte) */
uint32_t rp0, rp1, rp2, rp3, rp4, rp5, rp6, rp7;
uint32_t rp8, rp9, rp10, rp11, rp12, rp13, rp14, rp15, rp16;
uint32_t rp17;
uint32_t par; /* the cumulative parity for all data */
uint32_t tmppar; /* the cumulative parity for this iteration;
for rp12, rp14 and rp16 at the end of the
loop */
par = 0;
rp4 = 0;
@ -145,6 +136,7 @@ void __nand_calculate_ecc(const unsigned char *buf, unsigned int eccsize,
rp12 = 0;
rp14 = 0;
rp16 = 0;
rp17 = 0;
/*
* The loop is unrolled a number of times;
@ -356,45 +348,35 @@ void __nand_calculate_ecc(const unsigned char *buf, unsigned int eccsize,
(invparity[par & 0x55] << 2) |
(invparity[rp17] << 1) |
(invparity[rp16] << 0);
}
EXPORT_SYMBOL(__nand_calculate_ecc);
/**
* nand_calculate_ecc - [NAND Interface] Calculate 3-byte ECC for 256/512-byte
* block
* @chip: NAND chip object
* @buf: input buffer with raw data
* @code: output buffer with ECC
*/
int nand_calculate_ecc(struct nand_chip *chip, const unsigned char *buf,
unsigned char *code)
{
bool sm_order = chip->ecc.options & NAND_ECC_SOFT_HAMMING_SM_ORDER;
__nand_calculate_ecc(buf, chip->ecc.size, code, sm_order);
return 0;
}
EXPORT_SYMBOL(nand_calculate_ecc);
EXPORT_SYMBOL(ecc_sw_hamming_calculate);
/**
* __nand_correct_data - [NAND Interface] Detect and correct bit error(s)
* @buf: raw data read from the chip
* @read_ecc: ECC from the chip
* @calc_ecc: the ECC calculated from raw data
* @eccsize: data bytes per ECC step (256 or 512)
* @sm_order: Smart Media byte order
*
* Detect and correct a 1 bit error for eccsize byte block
* nand_ecc_sw_hamming_calculate - Calculate 3-byte ECC for 256/512-byte block
* @nand: NAND device
* @buf: Input buffer with raw data
* @code: Output buffer with ECC
*/
int __nand_correct_data(unsigned char *buf,
unsigned char *read_ecc, unsigned char *calc_ecc,
unsigned int eccsize, bool sm_order)
int nand_ecc_sw_hamming_calculate(struct nand_device *nand,
const unsigned char *buf, unsigned char *code)
{
struct nand_ecc_sw_hamming_conf *engine_conf = nand->ecc.ctx.priv;
unsigned int step_size = nand->ecc.ctx.conf.step_size;
return ecc_sw_hamming_calculate(buf, step_size, code,
engine_conf->sm_order);
}
EXPORT_SYMBOL(nand_ecc_sw_hamming_calculate);
int ecc_sw_hamming_correct(unsigned char *buf, unsigned char *read_ecc,
unsigned char *calc_ecc, unsigned int step_size,
bool sm_order)
{
const u32 eccsize_mult = step_size >> 8;
unsigned char b0, b1, b2, bit_addr;
unsigned int byte_addr;
/* 256 or 512 bytes/ecc */
const uint32_t eccsize_mult = eccsize >> 8;
/*
* b0 to b2 indicate which bit is faulty (if any)
@ -458,27 +440,220 @@ int __nand_correct_data(unsigned char *buf,
pr_err("%s: uncorrectable ECC error\n", __func__);
return -EBADMSG;
}
EXPORT_SYMBOL(__nand_correct_data);
EXPORT_SYMBOL(ecc_sw_hamming_correct);
/**
* nand_correct_data - [NAND Interface] Detect and correct bit error(s)
* @chip: NAND chip object
* @buf: raw data read from the chip
* @read_ecc: ECC from the chip
* @calc_ecc: the ECC calculated from raw data
* nand_ecc_sw_hamming_correct - Detect and correct bit error(s)
* @nand: NAND device
* @buf: Raw data read from the chip
* @read_ecc: ECC bytes read from the chip
* @calc_ecc: ECC calculated from the raw data
*
* Detect and correct a 1 bit error for 256/512 byte block
* Detect and correct up to 1 bit error per 256/512-byte block.
*/
int nand_correct_data(struct nand_chip *chip, unsigned char *buf,
unsigned char *read_ecc, unsigned char *calc_ecc)
int nand_ecc_sw_hamming_correct(struct nand_device *nand, unsigned char *buf,
unsigned char *read_ecc,
unsigned char *calc_ecc)
{
bool sm_order = chip->ecc.options & NAND_ECC_SOFT_HAMMING_SM_ORDER;
struct nand_ecc_sw_hamming_conf *engine_conf = nand->ecc.ctx.priv;
unsigned int step_size = nand->ecc.ctx.conf.step_size;
return __nand_correct_data(buf, read_ecc, calc_ecc, chip->ecc.size,
sm_order);
return ecc_sw_hamming_correct(buf, read_ecc, calc_ecc, step_size,
engine_conf->sm_order);
}
EXPORT_SYMBOL(nand_correct_data);
EXPORT_SYMBOL(nand_ecc_sw_hamming_correct);
int nand_ecc_sw_hamming_init_ctx(struct nand_device *nand)
{
struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
struct nand_ecc_sw_hamming_conf *engine_conf;
struct mtd_info *mtd = nanddev_to_mtd(nand);
int ret;
if (!mtd->ooblayout) {
switch (mtd->oobsize) {
case 8:
case 16:
mtd_set_ooblayout(mtd, nand_get_small_page_ooblayout());
break;
case 64:
case 128:
mtd_set_ooblayout(mtd,
nand_get_large_page_hamming_ooblayout());
break;
default:
return -ENOTSUPP;
}
}
conf->engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
conf->algo = NAND_ECC_ALGO_HAMMING;
conf->step_size = nand->ecc.user_conf.step_size;
conf->strength = 1;
/* Use the strongest configuration by default */
if (conf->step_size != 256 && conf->step_size != 512)
conf->step_size = 256;
engine_conf = kzalloc(sizeof(*engine_conf), GFP_KERNEL);
if (!engine_conf)
return -ENOMEM;
ret = nand_ecc_init_req_tweaking(&engine_conf->req_ctx, nand);
if (ret)
goto free_engine_conf;
engine_conf->code_size = 3;
engine_conf->nsteps = mtd->writesize / conf->step_size;
engine_conf->calc_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
engine_conf->code_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
if (!engine_conf->calc_buf || !engine_conf->code_buf) {
ret = -ENOMEM;
goto free_bufs;
}
nand->ecc.ctx.priv = engine_conf;
nand->ecc.ctx.total = engine_conf->nsteps * engine_conf->code_size;
return 0;
free_bufs:
nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
kfree(engine_conf->calc_buf);
kfree(engine_conf->code_buf);
free_engine_conf:
kfree(engine_conf);
return ret;
}
EXPORT_SYMBOL(nand_ecc_sw_hamming_init_ctx);
void nand_ecc_sw_hamming_cleanup_ctx(struct nand_device *nand)
{
struct nand_ecc_sw_hamming_conf *engine_conf = nand->ecc.ctx.priv;
if (engine_conf) {
nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
kfree(engine_conf->calc_buf);
kfree(engine_conf->code_buf);
kfree(engine_conf);
}
}
EXPORT_SYMBOL(nand_ecc_sw_hamming_cleanup_ctx);
static int nand_ecc_sw_hamming_prepare_io_req(struct nand_device *nand,
struct nand_page_io_req *req)
{
struct nand_ecc_sw_hamming_conf *engine_conf = nand->ecc.ctx.priv;
struct mtd_info *mtd = nanddev_to_mtd(nand);
int eccsize = nand->ecc.ctx.conf.step_size;
int eccbytes = engine_conf->code_size;
int eccsteps = engine_conf->nsteps;
int total = nand->ecc.ctx.total;
u8 *ecccalc = engine_conf->calc_buf;
const u8 *data;
int i;
/* Nothing to do for a raw operation */
if (req->mode == MTD_OPS_RAW)
return 0;
/* This engine does not provide BBM/free OOB bytes protection */
if (!req->datalen)
return 0;
nand_ecc_tweak_req(&engine_conf->req_ctx, req);
/* No more preparation for page read */
if (req->type == NAND_PAGE_READ)
return 0;
/* Preparation for page write: derive the ECC bytes and place them */
for (i = 0, data = req->databuf.out;
eccsteps;
eccsteps--, i += eccbytes, data += eccsize)
nand_ecc_sw_hamming_calculate(nand, data, &ecccalc[i]);
return mtd_ooblayout_set_eccbytes(mtd, ecccalc, (void *)req->oobbuf.out,
0, total);
}
static int nand_ecc_sw_hamming_finish_io_req(struct nand_device *nand,
struct nand_page_io_req *req)
{
struct nand_ecc_sw_hamming_conf *engine_conf = nand->ecc.ctx.priv;
struct mtd_info *mtd = nanddev_to_mtd(nand);
int eccsize = nand->ecc.ctx.conf.step_size;
int total = nand->ecc.ctx.total;
int eccbytes = engine_conf->code_size;
int eccsteps = engine_conf->nsteps;
u8 *ecccalc = engine_conf->calc_buf;
u8 *ecccode = engine_conf->code_buf;
unsigned int max_bitflips = 0;
u8 *data = req->databuf.in;
int i, ret;
/* Nothing to do for a raw operation */
if (req->mode == MTD_OPS_RAW)
return 0;
/* This engine does not provide BBM/free OOB bytes protection */
if (!req->datalen)
return 0;
/* No more preparation for page write */
if (req->type == NAND_PAGE_WRITE) {
nand_ecc_restore_req(&engine_conf->req_ctx, req);
return 0;
}
/* Finish a page read: retrieve the (raw) ECC bytes*/
ret = mtd_ooblayout_get_eccbytes(mtd, ecccode, req->oobbuf.in, 0,
total);
if (ret)
return ret;
/* Calculate the ECC bytes */
for (i = 0; eccsteps; eccsteps--, i += eccbytes, data += eccsize)
nand_ecc_sw_hamming_calculate(nand, data, &ecccalc[i]);
/* Finish a page read: compare and correct */
for (eccsteps = engine_conf->nsteps, i = 0, data = req->databuf.in;
eccsteps;
eccsteps--, i += eccbytes, data += eccsize) {
int stat = nand_ecc_sw_hamming_correct(nand, data,
&ecccode[i],
&ecccalc[i]);
if (stat < 0) {
mtd->ecc_stats.failed++;
} else {
mtd->ecc_stats.corrected += stat;
max_bitflips = max_t(unsigned int, max_bitflips, stat);
}
}
nand_ecc_restore_req(&engine_conf->req_ctx, req);
return max_bitflips;
}
static struct nand_ecc_engine_ops nand_ecc_sw_hamming_engine_ops = {
.init_ctx = nand_ecc_sw_hamming_init_ctx,
.cleanup_ctx = nand_ecc_sw_hamming_cleanup_ctx,
.prepare_io_req = nand_ecc_sw_hamming_prepare_io_req,
.finish_io_req = nand_ecc_sw_hamming_finish_io_req,
};
static struct nand_ecc_engine nand_ecc_sw_hamming_engine = {
.ops = &nand_ecc_sw_hamming_engine_ops,
};
struct nand_ecc_engine *nand_ecc_sw_hamming_get_engine(void)
{
return &nand_ecc_sw_hamming_engine;
}
EXPORT_SYMBOL(nand_ecc_sw_hamming_get_engine);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Frans Meulenbroeks <fransmeulenbroeks@gmail.com>");
MODULE_DESCRIPTION("Generic NAND ECC support");
MODULE_DESCRIPTION("NAND software Hamming ECC support");
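
A hedged, self-contained sketch of the renamed chip-agnostic helpers on a single 256-byte step: compute the ECC of known-good data, inject one bitflip, and let ecc_sw_hamming_correct() repair it. The test values and the wrapper function are assumptions, not taken from this series.

#include <linux/mtd/nand-ecc-sw-hamming.h>
#include <linux/string.h>

static int example_hamming_roundtrip(void)
{
	unsigned char data[256], stored_ecc[3], calc_ecc[3];
	int ret;

	memset(data, 0xa5, sizeof(data));

	/* ECC of the pristine data, as it would have been written to OOB */
	ecc_sw_hamming_calculate(data, sizeof(data), stored_ecc, false);

	/* Simulate a single bitflip on read, then recompute the ECC */
	data[100] ^= 0x08;
	ecc_sw_hamming_calculate(data, sizeof(data), calc_ecc, false);

	/* Returns the number of corrected bitflips (1 here) or -EBADMSG */
	ret = ecc_sw_hamming_correct(data, stored_ecc, calc_ecc,
				     sizeof(data), false);
	return ret < 0 ? ret : 0;
}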


@ -95,6 +95,7 @@
#include <linux/module.h>
#include <linux/mtd/nand.h>
#include <linux/slab.h>
/**
* nand_ecc_init_ctx - Init the ECC engine context
@ -104,7 +105,7 @@
*/
int nand_ecc_init_ctx(struct nand_device *nand)
{
if (!nand->ecc.engine->ops->init_ctx)
if (!nand->ecc.engine || !nand->ecc.engine->ops->init_ctx)
return 0;
return nand->ecc.engine->ops->init_ctx(nand);
@ -117,7 +118,7 @@ EXPORT_SYMBOL(nand_ecc_init_ctx);
*/
void nand_ecc_cleanup_ctx(struct nand_device *nand)
{
if (nand->ecc.engine->ops->cleanup_ctx)
if (nand->ecc.engine && nand->ecc.engine->ops->cleanup_ctx)
nand->ecc.engine->ops->cleanup_ctx(nand);
}
EXPORT_SYMBOL(nand_ecc_cleanup_ctx);
@ -130,7 +131,7 @@ EXPORT_SYMBOL(nand_ecc_cleanup_ctx);
int nand_ecc_prepare_io_req(struct nand_device *nand,
struct nand_page_io_req *req)
{
if (!nand->ecc.engine->ops->prepare_io_req)
if (!nand->ecc.engine || !nand->ecc.engine->ops->prepare_io_req)
return 0;
return nand->ecc.engine->ops->prepare_io_req(nand, req);
@ -145,7 +146,7 @@ EXPORT_SYMBOL(nand_ecc_prepare_io_req);
int nand_ecc_finish_io_req(struct nand_device *nand,
struct nand_page_io_req *req)
{
if (!nand->ecc.engine->ops->finish_io_req)
if (!nand->ecc.engine || !nand->ecc.engine->ops->finish_io_req)
return 0;
return nand->ecc.engine->ops->finish_io_req(nand, req);
@ -479,6 +480,137 @@ bool nand_ecc_is_strong_enough(struct nand_device *nand)
}
EXPORT_SYMBOL(nand_ecc_is_strong_enough);
/* ECC engine driver internal helpers */
int nand_ecc_init_req_tweaking(struct nand_ecc_req_tweak_ctx *ctx,
struct nand_device *nand)
{
unsigned int total_buffer_size;
ctx->nand = nand;
/* Let the user decide the exact length of each buffer */
if (!ctx->page_buffer_size)
ctx->page_buffer_size = nanddev_page_size(nand);
if (!ctx->oob_buffer_size)
ctx->oob_buffer_size = nanddev_per_page_oobsize(nand);
total_buffer_size = ctx->page_buffer_size + ctx->oob_buffer_size;
ctx->spare_databuf = kzalloc(total_buffer_size, GFP_KERNEL);
if (!ctx->spare_databuf)
return -ENOMEM;
ctx->spare_oobbuf = ctx->spare_databuf + ctx->page_buffer_size;
return 0;
}
EXPORT_SYMBOL_GPL(nand_ecc_init_req_tweaking);
void nand_ecc_cleanup_req_tweaking(struct nand_ecc_req_tweak_ctx *ctx)
{
kfree(ctx->spare_databuf);
}
EXPORT_SYMBOL_GPL(nand_ecc_cleanup_req_tweaking);
/*
* Ensure the data and OOB areas are fully read/written, otherwise the
* correction might not work as expected.
*/
void nand_ecc_tweak_req(struct nand_ecc_req_tweak_ctx *ctx,
struct nand_page_io_req *req)
{
struct nand_device *nand = ctx->nand;
struct nand_page_io_req *orig, *tweak;
/* Save the original request */
ctx->orig_req = *req;
ctx->bounce_data = false;
ctx->bounce_oob = false;
orig = &ctx->orig_req;
tweak = req;
/* Ensure the request covers the entire page */
if (orig->datalen < nanddev_page_size(nand)) {
ctx->bounce_data = true;
tweak->dataoffs = 0;
tweak->datalen = nanddev_page_size(nand);
tweak->databuf.in = ctx->spare_databuf;
memset(tweak->databuf.in, 0xFF, ctx->page_buffer_size);
}
if (orig->ooblen < nanddev_per_page_oobsize(nand)) {
ctx->bounce_oob = true;
tweak->ooboffs = 0;
tweak->ooblen = nanddev_per_page_oobsize(nand);
tweak->oobbuf.in = ctx->spare_oobbuf;
memset(tweak->oobbuf.in, 0xFF, ctx->oob_buffer_size);
}
/* Copy the data that must be written to the bounce buffers, if needed */
if (orig->type == NAND_PAGE_WRITE) {
if (ctx->bounce_data)
memcpy((void *)tweak->databuf.out + orig->dataoffs,
orig->databuf.out, orig->datalen);
if (ctx->bounce_oob)
memcpy((void *)tweak->oobbuf.out + orig->ooboffs,
orig->oobbuf.out, orig->ooblen);
}
}
EXPORT_SYMBOL_GPL(nand_ecc_tweak_req);
void nand_ecc_restore_req(struct nand_ecc_req_tweak_ctx *ctx,
struct nand_page_io_req *req)
{
struct nand_page_io_req *orig, *tweak;
orig = &ctx->orig_req;
tweak = req;
/* Restore the data read from the bounce buffers, if needed */
if (orig->type == NAND_PAGE_READ) {
if (ctx->bounce_data)
memcpy(orig->databuf.in,
tweak->databuf.in + orig->dataoffs,
orig->datalen);
if (ctx->bounce_oob)
memcpy(orig->oobbuf.in,
tweak->oobbuf.in + orig->ooboffs,
orig->ooblen);
}
/* Ensure the original request is restored */
*req = *orig;
}
EXPORT_SYMBOL_GPL(nand_ecc_restore_req);
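To make the intended call pattern of these four helpers explicit, below is a minimal, hypothetical ECC engine skeleton. Only the helpers above and the nand_device/nand_page_io_req hook prototypes are real; struct example_engine_conf and the example_*() names are invented for illustration, and a real engine would add its raw-mode checks and error handling as the Hamming code earlier does.

/*
 * Hypothetical skeleton: where an ECC engine driver is expected to call
 * the request-tweaking helpers. Names prefixed with example_ are made up.
 */
#include <linux/errno.h>
#include <linux/mtd/nand.h>
#include <linux/slab.h>

struct example_engine_conf {
	struct nand_ecc_req_tweak_ctx req_ctx;
};

static int example_init_ctx(struct nand_device *nand)
{
	struct example_engine_conf *conf;
	int ret;

	conf = kzalloc(sizeof(*conf), GFP_KERNEL);
	if (!conf)
		return -ENOMEM;

	/* Buffer sizes default to the page/OOB sizes unless set beforehand */
	ret = nand_ecc_init_req_tweaking(&conf->req_ctx, nand);
	if (ret) {
		kfree(conf);
		return ret;
	}

	nand->ecc.ctx.priv = conf;
	return 0;
}

static void example_cleanup_ctx(struct nand_device *nand)
{
	struct example_engine_conf *conf = nand->ecc.ctx.priv;

	nand_ecc_cleanup_req_tweaking(&conf->req_ctx);
	kfree(conf);
}

static int example_prepare_io_req(struct nand_device *nand,
				  struct nand_page_io_req *req)
{
	struct example_engine_conf *conf = nand->ecc.ctx.priv;

	/* Bounce partial requests so they cover the full page and OOB area */
	nand_ecc_tweak_req(&conf->req_ctx, req);
	return 0;
}

static int example_finish_io_req(struct nand_device *nand,
				 struct nand_page_io_req *req)
{
	struct example_engine_conf *conf = nand->ecc.ctx.priv;

	/* Copy bounced data back (on reads) and restore the original request */
	nand_ecc_restore_req(&conf->req_ctx, req);
	return 0;
}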
struct nand_ecc_engine *nand_ecc_get_sw_engine(struct nand_device *nand)
{
unsigned int algo = nand->ecc.user_conf.algo;
if (algo == NAND_ECC_ALGO_UNKNOWN)
algo = nand->ecc.defaults.algo;
switch (algo) {
case NAND_ECC_ALGO_HAMMING:
return nand_ecc_sw_hamming_get_engine();
case NAND_ECC_ALGO_BCH:
return nand_ecc_sw_bch_get_engine();
default:
break;
}
return NULL;
}
EXPORT_SYMBOL(nand_ecc_get_sw_engine);
struct nand_ecc_engine *nand_ecc_get_on_die_hw_engine(struct nand_device *nand)
{
return nand->ecc.ondie_engine;
}
EXPORT_SYMBOL(nand_ecc_get_on_die_hw_engine);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Miquel Raynal <miquel.raynal@bootlin.com>");
MODULE_DESCRIPTION("Generic ECC engine");


@ -132,7 +132,7 @@ static const struct mtd_ooblayout_ops onenand_oob_128_ooblayout_ops = {
.free = onenand_ooblayout_128_free,
};
/**
/*
* onenand_oob_32_64 - oob info for large (2KB) page
*/
static int onenand_ooblayout_32_64_ecc(struct mtd_info *mtd, int section,
@ -192,7 +192,7 @@ static const unsigned char ffchars[] = {
/**
* onenand_readw - [OneNAND Interface] Read OneNAND register
* @param addr address to read
* @addr: address to read
*
* Read OneNAND register
*/
@ -203,8 +203,8 @@ static unsigned short onenand_readw(void __iomem *addr)
/**
* onenand_writew - [OneNAND Interface] Write OneNAND register with value
* @param value value to write
* @param addr address to write
* @value: value to write
* @addr: address to write
*
* Write OneNAND register with value
*/
@ -215,8 +215,8 @@ static void onenand_writew(unsigned short value, void __iomem *addr)
/**
* onenand_block_address - [DEFAULT] Get block address
* @param this onenand chip data structure
* @param block the block
* @this: onenand chip data structure
* @block: the block
* @return translated block address if DDP, otherwise same
*
* Setup Start Address 1 Register (F100h)
@ -232,8 +232,8 @@ static int onenand_block_address(struct onenand_chip *this, int block)
/**
* onenand_bufferram_address - [DEFAULT] Get bufferram address
* @param this onenand chip data structure
* @param block the block
* @this: onenand chip data structure
* @block: the block
* @return set DBS value if DDP, otherwise 0
*
* Setup Start Address 2 Register (F101h) for DDP
@ -249,8 +249,8 @@ static int onenand_bufferram_address(struct onenand_chip *this, int block)
/**
* onenand_page_address - [DEFAULT] Get page address
* @param page the page address
* @param sector the sector address
* @page: the page address
* @sector: the sector address
* @return combined page and sector address
*
* Setup Start Address 8 Register (F107h)
@ -268,10 +268,10 @@ static int onenand_page_address(int page, int sector)
/**
* onenand_buffer_address - [DEFAULT] Get buffer address
* @param dataram1 DataRAM index
* @param sectors the sector address
* @param count the number of sectors
* @return the start buffer value
* @dataram1: DataRAM index
* @sectors: the sector address
* @count: the number of sectors
* Return: the start buffer value
*
* Setup Start Buffer Register (F200h)
*/
@ -295,8 +295,8 @@ static int onenand_buffer_address(int dataram1, int sectors, int count)
/**
* flexonenand_block- For given address return block number
* @param this - OneNAND device structure
* @param addr - Address for which block number is needed
* @this: - OneNAND device structure
* @addr: - Address for which block number is needed
*/
static unsigned flexonenand_block(struct onenand_chip *this, loff_t addr)
{
@ -359,7 +359,7 @@ EXPORT_SYMBOL(onenand_addr);
/**
* onenand_get_density - [DEFAULT] Get OneNAND density
* @param dev_id OneNAND device ID
* @dev_id: OneNAND device ID
*
* Get OneNAND density from device ID
*/
@ -371,8 +371,8 @@ static inline int onenand_get_density(int dev_id)
/**
* flexonenand_region - [Flex-OneNAND] Return erase region of addr
* @param mtd MTD device structure
* @param addr address whose erase region needs to be identified
* @mtd: MTD device structure
* @addr: address whose erase region needs to be identified
*/
int flexonenand_region(struct mtd_info *mtd, loff_t addr)
{
@ -387,10 +387,10 @@ EXPORT_SYMBOL(flexonenand_region);
/**
* onenand_command - [DEFAULT] Send command to OneNAND device
* @param mtd MTD device structure
* @param cmd the command to be sent
* @param addr offset to read from or write to
* @param len number of bytes to read or write
* @mtd: MTD device structure
* @cmd: the command to be sent
* @addr: offset to read from or write to
* @len: number of bytes to read or write
*
* Send command to OneNAND device. This function is used for middle/large page
* devices (1KB/2KB Bytes per page)
@ -519,7 +519,7 @@ static int onenand_command(struct mtd_info *mtd, int cmd, loff_t addr, size_t le
/**
* onenand_read_ecc - return ecc status
* @param this onenand chip structure
* @this: onenand chip structure
*/
static inline int onenand_read_ecc(struct onenand_chip *this)
{
@ -543,8 +543,8 @@ static inline int onenand_read_ecc(struct onenand_chip *this)
/**
* onenand_wait - [DEFAULT] wait until the command is done
* @param mtd MTD device structure
* @param state state to select the max. timeout value
* @mtd: MTD device structure
* @state: state to select the max. timeout value
*
* Wait for command done. This applies to all OneNAND command
* Read can take up to 30us, erase up to 2ms and program up to 350us
@ -625,8 +625,8 @@ static int onenand_wait(struct mtd_info *mtd, int state)
/*
* onenand_interrupt - [DEFAULT] onenand interrupt handler
* @param irq onenand interrupt number
* @param dev_id interrupt data
* @irq: onenand interrupt number
* @dev_id: interrupt data
*
* complete the work
*/
@ -643,8 +643,8 @@ static irqreturn_t onenand_interrupt(int irq, void *data)
/*
* onenand_interrupt_wait - [DEFAULT] wait until the command is done
* @param mtd MTD device structure
* @param state state to select the max. timeout value
* @mtd: MTD device structure
* @state: state to select the max. timeout value
*
* Wait for command done.
*/
@ -659,8 +659,8 @@ static int onenand_interrupt_wait(struct mtd_info *mtd, int state)
/*
* onenand_try_interrupt_wait - [DEFAULT] try interrupt wait
* @param mtd MTD device structure
* @param state state to select the max. timeout value
* @mtd: MTD device structure
* @state: state to select the max. timeout value
*
* Try interrupt based wait (It is used one-time)
*/
@ -689,7 +689,7 @@ static int onenand_try_interrupt_wait(struct mtd_info *mtd, int state)
/*
* onenand_setup_wait - [OneNAND Interface] setup onenand wait method
* @param mtd MTD device structure
* @mtd: MTD device structure
*
* There are two methods to wait for OneNAND work
* 1. polling - read interrupt status register
@ -724,8 +724,8 @@ static void onenand_setup_wait(struct mtd_info *mtd)
/**
* onenand_bufferram_offset - [DEFAULT] BufferRAM offset
* @param mtd MTD data structure
* @param area BufferRAM area
* @mtd: MTD data structure
* @area: BufferRAM area
* @return offset given area
*
* Return BufferRAM offset given area
@ -747,11 +747,11 @@ static inline int onenand_bufferram_offset(struct mtd_info *mtd, int area)
/**
* onenand_read_bufferram - [OneNAND Interface] Read the bufferram area
* @param mtd MTD data structure
* @param area BufferRAM area
* @param buffer the databuffer to put/get data
* @param offset offset to read from or write to
* @param count number of bytes to read/write
* @mtd: MTD data structure
* @area: BufferRAM area
* @buffer: the databuffer to put/get data
* @offset: offset to read from or write to
* @count: number of bytes to read/write
*
* Read the BufferRAM area
*/
@ -783,11 +783,11 @@ static int onenand_read_bufferram(struct mtd_info *mtd, int area,
/**
* onenand_sync_read_bufferram - [OneNAND Interface] Read the bufferram area with Sync. Burst mode
* @param mtd MTD data structure
* @param area BufferRAM area
* @param buffer the databuffer to put/get data
* @param offset offset to read from or write to
* @param count number of bytes to read/write
* @mtd: MTD data structure
* @area: BufferRAM area
* @buffer: the databuffer to put/get data
* @offset: offset to read from or write to
* @count: number of bytes to read/write
*
* Read the BufferRAM area with Sync. Burst Mode
*/
@ -823,11 +823,11 @@ static int onenand_sync_read_bufferram(struct mtd_info *mtd, int area,
/**
* onenand_write_bufferram - [OneNAND Interface] Write the bufferram area
* @param mtd MTD data structure
* @param area BufferRAM area
* @param buffer the databuffer to put/get data
* @param offset offset to read from or write to
* @param count number of bytes to read/write
* @mtd: MTD data structure
* @area: BufferRAM area
* @buffer: the databuffer to put/get data
* @offset: offset to read from or write to
* @count: number of bytes to read/write
*
* Write the BufferRAM area
*/
@ -864,8 +864,8 @@ static int onenand_write_bufferram(struct mtd_info *mtd, int area,
/**
* onenand_get_2x_blockpage - [GENERIC] Get blockpage at 2x program mode
* @param mtd MTD data structure
* @param addr address to check
* @mtd: MTD data structure
* @addr: address to check
* @return blockpage address
*
* Get blockpage address at 2x program mode
@ -888,8 +888,8 @@ static int onenand_get_2x_blockpage(struct mtd_info *mtd, loff_t addr)
/**
* onenand_check_bufferram - [GENERIC] Check BufferRAM information
* @param mtd MTD data structure
* @param addr address to check
* @mtd: MTD data structure
* @addr: address to check
* @return 1 if there are valid data, otherwise 0
*
* Check bufferram if there is data we required
@ -930,9 +930,9 @@ static int onenand_check_bufferram(struct mtd_info *mtd, loff_t addr)
/**
* onenand_update_bufferram - [GENERIC] Update BufferRAM information
* @param mtd MTD data structure
* @param addr address to update
* @param valid valid flag
* @mtd: MTD data structure
* @addr: address to update
* @valid: valid flag
*
* Update BufferRAM information
*/
@ -963,9 +963,9 @@ static void onenand_update_bufferram(struct mtd_info *mtd, loff_t addr,
/**
* onenand_invalidate_bufferram - [GENERIC] Invalidate BufferRAM information
* @param mtd MTD data structure
* @param addr start address to invalidate
* @param len length to invalidate
* @mtd: MTD data structure
* @addr: start address to invalidate
* @len: length to invalidate
*
* Invalidate BufferRAM information
*/
@ -986,8 +986,8 @@ static void onenand_invalidate_bufferram(struct mtd_info *mtd, loff_t addr,
/**
* onenand_get_device - [GENERIC] Get chip for selected access
* @param mtd MTD device structure
* @param new_state the state which is requested
* @mtd: MTD device structure
* @new_state: the state which is requested
*
* Get the device and lock it for exclusive access
*/
@ -1024,7 +1024,7 @@ static int onenand_get_device(struct mtd_info *mtd, int new_state)
/**
* onenand_release_device - [GENERIC] release chip
* @param mtd MTD device structure
* @mtd: MTD device structure
*
* Deselect, release chip lock and wake up anyone waiting on the device
*/
@ -1043,10 +1043,10 @@ static void onenand_release_device(struct mtd_info *mtd)
/**
* onenand_transfer_auto_oob - [INTERN] oob auto-placement transfer
* @param mtd MTD device structure
* @param buf destination address
* @param column oob offset to read from
* @param thislen oob length to read
* @mtd: MTD device structure
* @buf: destination address
* @column: oob offset to read from
* @thislen: oob length to read
*/
static int onenand_transfer_auto_oob(struct mtd_info *mtd, uint8_t *buf, int column,
int thislen)
@ -1061,9 +1061,9 @@ static int onenand_transfer_auto_oob(struct mtd_info *mtd, uint8_t *buf, int col
/**
* onenand_recover_lsb - [Flex-OneNAND] Recover LSB page data
* @param mtd MTD device structure
* @param addr address to recover
* @param status return value from onenand_wait / onenand_bbt_wait
* @mtd: MTD device structure
* @addr: address to recover
* @status: return value from onenand_wait / onenand_bbt_wait
*
* MLC NAND Flash cell has paired pages - LSB page and MSB page. LSB page has
* lower page address and MSB page has higher page address in paired pages.
@ -1104,9 +1104,9 @@ static int onenand_recover_lsb(struct mtd_info *mtd, loff_t addr, int status)
/**
* onenand_mlc_read_ops_nolock - MLC OneNAND read main and/or out-of-band
* @param mtd MTD device structure
* @param from offset to read from
* @param ops: oob operation description structure
* @mtd: MTD device structure
* @from: offset to read from
* @ops: oob operation description structure
*
* MLC OneNAND / Flex-OneNAND has 4KB page size and 4KB dataram.
* So, read-while-load is not present.
@ -1206,9 +1206,9 @@ static int onenand_mlc_read_ops_nolock(struct mtd_info *mtd, loff_t from,
/**
* onenand_read_ops_nolock - [OneNAND Interface] OneNAND read main and/or out-of-band
* @param mtd MTD device structure
* @param from offset to read from
* @param ops: oob operation description structure
* @mtd: MTD device structure
* @from: offset to read from
* @ops: oob operation description structure
*
* OneNAND read main and/or out-of-band data
*/
@ -1335,9 +1335,9 @@ static int onenand_read_ops_nolock(struct mtd_info *mtd, loff_t from,
/**
* onenand_read_oob_nolock - [MTD Interface] OneNAND read out-of-band
* @param mtd MTD device structure
* @param from offset to read from
* @param ops: oob operation description structure
* @mtd: MTD device structure
* @from: offset to read from
* @ops: oob operation description structure
*
* OneNAND read out-of-band data from the spare area
*/
@ -1430,10 +1430,10 @@ static int onenand_read_oob_nolock(struct mtd_info *mtd, loff_t from,
/**
* onenand_read_oob - [MTD Interface] Read main and/or out-of-band
* @param mtd: MTD device structure
* @param from: offset to read from
* @param ops: oob operation description structure
* @mtd: MTD device structure
* @from: offset to read from
* @ops: oob operation description structure
*
* Read main and/or out-of-band
*/
static int onenand_read_oob(struct mtd_info *mtd, loff_t from,
@ -1466,8 +1466,8 @@ static int onenand_read_oob(struct mtd_info *mtd, loff_t from,
/**
* onenand_bbt_wait - [DEFAULT] wait until the command is done
* @param mtd MTD device structure
* @param state state to select the max. timeout value
* @mtd: MTD device structure
* @state: state to select the max. timeout value
*
* Wait for command done.
*/
@ -1517,9 +1517,9 @@ static int onenand_bbt_wait(struct mtd_info *mtd, int state)
/**
* onenand_bbt_read_oob - [MTD Interface] OneNAND read out-of-band for bbt scan
* @param mtd MTD device structure
* @param from offset to read from
* @param ops oob operation description structure
* @mtd: MTD device structure
* @from: offset to read from
* @ops: oob operation description structure
*
* OneNAND read out-of-band data from the spare area for bbt scan
*/
@ -1594,9 +1594,9 @@ int onenand_bbt_read_oob(struct mtd_info *mtd, loff_t from,
#ifdef CONFIG_MTD_ONENAND_VERIFY_WRITE
/**
* onenand_verify_oob - [GENERIC] verify the oob contents after a write
* @param mtd MTD device structure
* @param buf the databuffer to verify
* @param to offset to read from
* @mtd: MTD device structure
* @buf: the databuffer to verify
* @to: offset to read from
*/
static int onenand_verify_oob(struct mtd_info *mtd, const u_char *buf, loff_t to)
{
@ -1622,10 +1622,10 @@ static int onenand_verify_oob(struct mtd_info *mtd, const u_char *buf, loff_t to
/**
* onenand_verify - [GENERIC] verify the chip contents after a write
* @param mtd MTD device structure
* @param buf the databuffer to verify
* @param addr offset to read from
* @param len number of bytes to read and compare
* @mtd: MTD device structure
* @buf: the databuffer to verify
* @addr: offset to read from
* @len: number of bytes to read and compare
*/
static int onenand_verify(struct mtd_info *mtd, const u_char *buf, loff_t addr, size_t len)
{
@ -1684,11 +1684,11 @@ static void onenand_panic_wait(struct mtd_info *mtd)
/**
* onenand_panic_write - [MTD Interface] write buffer to FLASH in a panic context
* @param mtd MTD device structure
* @param to offset to write to
* @param len number of bytes to write
* @param retlen pointer to variable to store the number of written bytes
* @param buf the data to write
* @mtd: MTD device structure
* @to: offset to write to
* @len: number of bytes to write
* @retlen: pointer to variable to store the number of written bytes
* @buf: the data to write
*
* Write with ECC
*/
@ -1762,11 +1762,11 @@ static int onenand_panic_write(struct mtd_info *mtd, loff_t to, size_t len,
/**
* onenand_fill_auto_oob - [INTERN] oob auto-placement transfer
* @param mtd MTD device structure
* @param oob_buf oob buffer
* @param buf source address
* @param column oob offset to write to
* @param thislen oob length to write
* @mtd: MTD device structure
* @oob_buf: oob buffer
* @buf: source address
* @column: oob offset to write to
* @thislen: oob length to write
*/
static int onenand_fill_auto_oob(struct mtd_info *mtd, u_char *oob_buf,
const u_char *buf, int column, int thislen)
@ -1776,9 +1776,9 @@ static int onenand_fill_auto_oob(struct mtd_info *mtd, u_char *oob_buf,
/**
* onenand_write_ops_nolock - [OneNAND Interface] write main and/or out-of-band
* @param mtd MTD device structure
* @param to offset to write to
* @param ops oob operation description structure
* @mtd: MTD device structure
* @to: offset to write to
* @ops: oob operation description structure
*
* Write main and/or oob with ECC
*/
@ -1957,12 +1957,9 @@ static int onenand_write_ops_nolock(struct mtd_info *mtd, loff_t to,
/**
* onenand_write_oob_nolock - [INTERN] OneNAND write out-of-band
* @param mtd MTD device structure
* @param to offset to write to
* @param len number of bytes to write
* @param retlen pointer to variable to store the number of written bytes
* @param buf the data to write
* @param mode operation mode
* @mtd: MTD device structure
* @to: offset to write to
* @ops: oob operation description structure
*
* OneNAND write out-of-band
*/
@ -2070,9 +2067,9 @@ static int onenand_write_oob_nolock(struct mtd_info *mtd, loff_t to,
/**
* onenand_write_oob - [MTD Interface] NAND write data and/or out-of-band
* @param mtd: MTD device structure
* @param to: offset to write
* @param ops: oob operation description structure
* @mtd: MTD device structure
* @to: offset to write
* @ops: oob operation description structure
*/
static int onenand_write_oob(struct mtd_info *mtd, loff_t to,
struct mtd_oob_ops *ops)
@ -2101,9 +2098,9 @@ static int onenand_write_oob(struct mtd_info *mtd, loff_t to,
/**
* onenand_block_isbad_nolock - [GENERIC] Check if a block is marked bad
* @param mtd MTD device structure
* @param ofs offset from device start
* @param allowbbt 1, if its allowed to access the bbt area
* @mtd: MTD device structure
* @ofs: offset from device start
* @allowbbt: 1, if it's allowed to access the bbt area
*
* Check, if the block is bad. Either by reading the bad block table or
* calling of the scan function.
@ -2144,9 +2141,9 @@ static int onenand_multiblock_erase_verify(struct mtd_info *mtd,
/**
* onenand_multiblock_erase - [INTERN] erase block(s) using multiblock erase
* @param mtd MTD device structure
* @param instr erase instruction
* @param region erase region
* @mtd: MTD device structure
* @instr: erase instruction
* @block_size: block size
*
* Erase one or more blocks up to 64 block at a time
*/
@ -2254,10 +2251,10 @@ static int onenand_multiblock_erase(struct mtd_info *mtd,
/**
* onenand_block_by_block_erase - [INTERN] erase block(s) using regular erase
* @param mtd MTD device structure
* @param instr erase instruction
* @param region erase region
* @param block_size erase block size
* @mtd: MTD device structure
* @instr: erase instruction
* @region: erase region
* @block_size: erase block size
*
* Erase one or more blocks one block at a time
*/
@ -2326,8 +2323,8 @@ static int onenand_block_by_block_erase(struct mtd_info *mtd,
/**
* onenand_erase - [MTD Interface] erase block(s)
* @param mtd MTD device structure
* @param instr erase instruction
* @mtd: MTD device structure
* @instr: erase instruction
*
* Erase one or more blocks
*/
@ -2391,7 +2388,7 @@ static int onenand_erase(struct mtd_info *mtd, struct erase_info *instr)
/**
* onenand_sync - [MTD Interface] sync
* @param mtd MTD device structure
* @mtd: MTD device structure
*
* Sync is actually a wait for chip ready function
*/
@ -2408,8 +2405,8 @@ static void onenand_sync(struct mtd_info *mtd)
/**
* onenand_block_isbad - [MTD Interface] Check whether the block at the given offset is bad
* @param mtd MTD device structure
* @param ofs offset relative to mtd start
* @mtd: MTD device structure
* @ofs: offset relative to mtd start
*
* Check whether the block is bad
*/
@ -2425,8 +2422,8 @@ static int onenand_block_isbad(struct mtd_info *mtd, loff_t ofs)
/**
* onenand_default_block_markbad - [DEFAULT] mark a block bad
* @param mtd MTD device structure
* @param ofs offset from device start
* @mtd: MTD device structure
* @ofs: offset from device start
*
* This is the default implementation, which can be overridden by
* a hardware specific driver.
@ -2460,8 +2457,8 @@ static int onenand_default_block_markbad(struct mtd_info *mtd, loff_t ofs)
/**
* onenand_block_markbad - [MTD Interface] Mark the block at the given offset as bad
* @param mtd MTD device structure
* @param ofs offset relative to mtd start
* @mtd: MTD device structure
* @ofs: offset relative to mtd start
*
* Mark the block as bad
*/
@ -2486,10 +2483,10 @@ static int onenand_block_markbad(struct mtd_info *mtd, loff_t ofs)
/**
* onenand_do_lock_cmd - [OneNAND Interface] Lock or unlock block(s)
* @param mtd MTD device structure
* @param ofs offset relative to mtd start
* @param len number of bytes to lock or unlock
* @param cmd lock or unlock command
* @mtd: MTD device structure
* @ofs: offset relative to mtd start
* @len: number of bytes to lock or unlock
* @cmd: lock or unlock command
*
* Lock or unlock one or more blocks
*/
@ -2566,9 +2563,9 @@ static int onenand_do_lock_cmd(struct mtd_info *mtd, loff_t ofs, size_t len, int
/**
* onenand_lock - [MTD Interface] Lock block(s)
* @param mtd MTD device structure
* @param ofs offset relative to mtd start
* @param len number of bytes to unlock
* @mtd: MTD device structure
* @ofs: offset relative to mtd start
* @len: number of bytes to unlock
*
* Lock one or more blocks
*/
@ -2584,9 +2581,9 @@ static int onenand_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
/**
* onenand_unlock - [MTD Interface] Unlock block(s)
* @param mtd MTD device structure
* @param ofs offset relative to mtd start
* @param len number of bytes to unlock
* @mtd: MTD device structure
* @ofs: offset relative to mtd start
* @len: number of bytes to unlock
*
* Unlock one or more blocks
*/
@ -2602,7 +2599,7 @@ static int onenand_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
/**
* onenand_check_lock_status - [OneNAND Interface] Check lock status
* @param this onenand chip data structure
* @this: onenand chip data structure
*
* Check lock status
*/
@ -2636,7 +2633,7 @@ static int onenand_check_lock_status(struct onenand_chip *this)
/**
* onenand_unlock_all - [OneNAND Interface] unlock all blocks
* @param mtd MTD device structure
* @mtd: MTD device structure
*
* Unlock all blocks
*/
@ -2683,10 +2680,10 @@ static void onenand_unlock_all(struct mtd_info *mtd)
/**
* onenand_otp_command - Send OTP specific command to OneNAND device
* @param mtd MTD device structure
* @param cmd the command to be sent
* @param addr offset to read from or write to
* @param len number of bytes to read or write
* @mtd: MTD device structure
* @cmd: the command to be sent
* @addr: offset to read from or write to
* @len: number of bytes to read or write
*/
static int onenand_otp_command(struct mtd_info *mtd, int cmd, loff_t addr,
size_t len)
@ -2758,11 +2755,9 @@ static int onenand_otp_command(struct mtd_info *mtd, int cmd, loff_t addr,
/**
* onenand_otp_write_oob_nolock - [INTERN] OneNAND write out-of-band, specific to OTP
* @param mtd MTD device structure
* @param to offset to write to
* @param len number of bytes to write
* @param retlen pointer to variable to store the number of written bytes
* @param buf the data to write
* @mtd: MTD device structure
* @to: offset to write to
* @ops: oob operation description structure
*
* OneNAND write out-of-band only for OTP
*/
@ -2889,11 +2884,11 @@ typedef int (*otp_op_t)(struct mtd_info *mtd, loff_t form, size_t len,
/**
* do_otp_read - [DEFAULT] Read OTP block area
* @param mtd MTD device structure
* @param from The offset to read
* @param len number of bytes to read
* @param retlen pointer to variable to store the number of readbytes
* @param buf the databuffer to put/get data
* @mtd: MTD device structure
* @from: The offset to read
* @len: number of bytes to read
* @retlen: pointer to variable to store the number of readbytes
* @buf: the databuffer to put/get data
*
* Read OTP block area.
*/
@ -2926,11 +2921,11 @@ static int do_otp_read(struct mtd_info *mtd, loff_t from, size_t len,
/**
* do_otp_write - [DEFAULT] Write OTP block area
* @param mtd MTD device structure
* @param to The offset to write
* @param len number of bytes to write
* @param retlen pointer to variable to store the number of write bytes
* @param buf the databuffer to put/get data
* @mtd: MTD device structure
* @to: The offset to write
* @len: number of bytes to write
* @retlen: pointer to variable to store the number of write bytes
* @buf: the databuffer to put/get data
*
* Write OTP block area.
*/
@ -2970,11 +2965,11 @@ static int do_otp_write(struct mtd_info *mtd, loff_t to, size_t len,
/**
* do_otp_lock - [DEFAULT] Lock OTP block area
* @param mtd MTD device structure
* @param from The offset to lock
* @param len number of bytes to lock
* @param retlen pointer to variable to store the number of lock bytes
* @param buf the databuffer to put/get data
* @mtd: MTD device structure
* @from: The offset to lock
* @len: number of bytes to lock
* @retlen: pointer to variable to store the number of lock bytes
* @buf: the databuffer to put/get data
*
* Lock OTP block area.
*/
@ -3018,13 +3013,13 @@ static int do_otp_lock(struct mtd_info *mtd, loff_t from, size_t len,
/**
* onenand_otp_walk - [DEFAULT] Handle OTP operation
* @param mtd MTD device structure
* @param from The offset to read/write
* @param len number of bytes to read/write
* @param retlen pointer to variable to store the number of read bytes
* @param buf the databuffer to put/get data
* @param action do given action
* @param mode specify user and factory
* @mtd: MTD device structure
* @from: The offset to read/write
* @len: number of bytes to read/write
* @retlen: pointer to variable to store the number of read bytes
* @buf: the databuffer to put/get data
* @action: do given action
* @mode: specify user and factory
*
* Handle OTP operation.
*/
@ -3099,10 +3094,10 @@ static int onenand_otp_walk(struct mtd_info *mtd, loff_t from, size_t len,
/**
* onenand_get_fact_prot_info - [MTD Interface] Read factory OTP info
* @param mtd MTD device structure
* @param len number of bytes to read
* @param retlen pointer to variable to store the number of read bytes
* @param buf the databuffer to put/get data
* @mtd: MTD device structure
* @len: number of bytes to read
* @retlen: pointer to variable to store the number of read bytes
* @buf: the databuffer to put/get data
*
* Read factory OTP info.
*/
@ -3115,11 +3110,11 @@ static int onenand_get_fact_prot_info(struct mtd_info *mtd, size_t len,
/**
* onenand_read_fact_prot_reg - [MTD Interface] Read factory OTP area
* @param mtd MTD device structure
* @param from The offset to read
* @param len number of bytes to read
* @param retlen pointer to variable to store the number of read bytes
* @param buf the databuffer to put/get data
* @mtd: MTD device structure
* @from: The offset to read
* @len: number of bytes to read
* @retlen: pointer to variable to store the number of read bytes
* @buf: the databuffer to put/get data
*
* Read factory OTP area.
*/
@ -3131,10 +3126,10 @@ static int onenand_read_fact_prot_reg(struct mtd_info *mtd, loff_t from,
/**
* onenand_get_user_prot_info - [MTD Interface] Read user OTP info
* @param mtd MTD device structure
* @param retlen pointer to variable to store the number of read bytes
* @param len number of bytes to read
* @param buf the databuffer to put/get data
* @mtd: MTD device structure
* @retlen: pointer to variable to store the number of read bytes
* @len: number of bytes to read
* @buf: the databuffer to put/get data
*
* Read user OTP info.
*/
@ -3147,11 +3142,11 @@ static int onenand_get_user_prot_info(struct mtd_info *mtd, size_t len,
/**
* onenand_read_user_prot_reg - [MTD Interface] Read user OTP area
* @param mtd MTD device structure
* @param from The offset to read
* @param len number of bytes to read
* @param retlen pointer to variable to store the number of read bytes
* @param buf the databuffer to put/get data
* @mtd: MTD device structure
* @from: The offset to read
* @len: number of bytes to read
* @retlen: pointer to variable to store the number of read bytes
* @buf: the databuffer to put/get data
*
* Read user OTP area.
*/
@ -3163,11 +3158,11 @@ static int onenand_read_user_prot_reg(struct mtd_info *mtd, loff_t from,
/**
* onenand_write_user_prot_reg - [MTD Interface] Write user OTP area
* @param mtd MTD device structure
* @param from The offset to write
* @param len number of bytes to write
* @param retlen pointer to variable to store the number of write bytes
* @param buf the databuffer to put/get data
* @mtd: MTD device structure
* @from: The offset to write
* @len: number of bytes to write
* @retlen: pointer to variable to store the number of write bytes
* @buf: the databuffer to put/get data
*
* Write user OTP area.
*/
@ -3179,9 +3174,9 @@ static int onenand_write_user_prot_reg(struct mtd_info *mtd, loff_t from,
/**
* onenand_lock_user_prot_reg - [MTD Interface] Lock user OTP area
* @param mtd MTD device structure
* @param from The offset to lock
* @param len number of bytes to unlock
* @mtd: MTD device structure
* @from: The offset to lock
* @len: number of bytes to unlock
*
* Write lock mark on spare area in page 0 in OTP block
*/
@ -3234,7 +3229,7 @@ static int onenand_lock_user_prot_reg(struct mtd_info *mtd, loff_t from,
/**
* onenand_check_features - Check and set OneNAND features
* @param mtd MTD data structure
* @mtd: MTD data structure
*
* Check and set OneNAND features
* - lock scheme
@ -3324,8 +3319,8 @@ static void onenand_check_features(struct mtd_info *mtd)
/**
* onenand_print_device_info - Print device & version ID
* @param device device ID
* @param version version ID
* @device: device ID
* @version: version ID
*
* Print device & version ID
*/
@ -3355,7 +3350,7 @@ static const struct onenand_manufacturers onenand_manuf_ids[] = {
/**
* onenand_check_maf - Check manufacturer ID
* @param manuf manufacturer ID
* @manuf: manufacturer ID
*
* Check manufacturer ID
*/
@ -3380,9 +3375,9 @@ static int onenand_check_maf(int manuf)
}
/**
* flexonenand_get_boundary - Reads the SLC boundary
* @param onenand_info - onenand info structure
**/
* flexonenand_get_boundary - Reads the SLC boundary
* @mtd: MTD data structure
*/
static int flexonenand_get_boundary(struct mtd_info *mtd)
{
struct onenand_chip *this = mtd->priv;
@ -3422,7 +3417,7 @@ static int flexonenand_get_boundary(struct mtd_info *mtd)
/**
* flexonenand_get_size - Fill up fields in onenand_chip and mtd_info
* boundary[], diesize[], mtd->size, mtd->erasesize
* @param mtd - MTD device structure
* @mtd: - MTD device structure
*/
static void flexonenand_get_size(struct mtd_info *mtd)
{
@ -3493,9 +3488,9 @@ static void flexonenand_get_size(struct mtd_info *mtd)
/**
* flexonenand_check_blocks_erased - Check if blocks are erased
* @param mtd_info - mtd info structure
* @param start - first erase block to check
* @param end - last erase block to check
* @mtd: mtd info structure
* @start: first erase block to check
* @end: last erase block to check
*
* Converting an unerased block from MLC to SLC
* causes byte values to change. Since both data and its ECC
@ -3548,9 +3543,8 @@ static int flexonenand_check_blocks_erased(struct mtd_info *mtd, int start, int
return 0;
}
/**
/*
* flexonenand_set_boundary - Writes the SLC boundary
* @param mtd - mtd info structure
*/
static int flexonenand_set_boundary(struct mtd_info *mtd, int die,
int boundary, int lock)
@ -3640,7 +3634,7 @@ out:
/**
* onenand_chip_probe - [OneNAND Interface] The generic chip probe
* @param mtd MTD device structure
* @mtd: MTD device structure
*
* OneNAND detection method:
* Compare the values from command with ones from register
@ -3688,7 +3682,7 @@ static int onenand_chip_probe(struct mtd_info *mtd)
/**
* onenand_probe - [OneNAND Interface] Probe the OneNAND device
* @param mtd MTD device structure
* @mtd: MTD device structure
*/
static int onenand_probe(struct mtd_info *mtd)
{
@ -3783,7 +3777,7 @@ static int onenand_probe(struct mtd_info *mtd)
/**
* onenand_suspend - [MTD Interface] Suspend the OneNAND flash
* @param mtd MTD device structure
* @mtd: MTD device structure
*/
static int onenand_suspend(struct mtd_info *mtd)
{
@ -3792,7 +3786,7 @@ static int onenand_suspend(struct mtd_info *mtd)
/**
* onenand_resume - [MTD Interface] Resume the OneNAND flash
* @param mtd MTD device structure
* @mtd: MTD device structure
*/
static void onenand_resume(struct mtd_info *mtd)
{
@ -3807,8 +3801,8 @@ static void onenand_resume(struct mtd_info *mtd)
/**
* onenand_scan - [OneNAND Interface] Scan for the OneNAND device
* @param mtd MTD device structure
* @param maxchips Number of chips to scan for
* @mtd: MTD device structure
* @maxchips: Number of chips to scan for
*
* This fills out all the not initialized function pointers
* with the defaults.
@ -3985,7 +3979,7 @@ int onenand_scan(struct mtd_info *mtd, int maxchips)
/**
* onenand_release - [OneNAND Interface] Free resources held by the OneNAND device
* @param mtd MTD device structure
* @mtd: MTD device structure
*/
void onenand_release(struct mtd_info *mtd)
{


@ -18,10 +18,10 @@
/**
* check_short_pattern - [GENERIC] check if a pattern is in the buffer
* @param buf the buffer to search
* @param len the length of buffer to search
* @param paglen the pagelength
* @param td search pattern descriptor
* @buf: the buffer to search
* @len: the length of buffer to search
* @paglen: the pagelength
* @td: search pattern descriptor
*
* Check for a pattern at the given place. Used to search bad block
* tables and good / bad block identifiers. Same as check_pattern, but
@ -44,10 +44,10 @@ static int check_short_pattern(uint8_t *buf, int len, int paglen, struct nand_bb
/**
* create_bbt - [GENERIC] Create a bad block table by scanning the device
* @param mtd MTD device structure
* @param buf temporary buffer
* @param bd descriptor for the good/bad block search pattern
* @param chip create the table for a specific chip, -1 read all chips.
* @mtd: MTD device structure
* @buf: temporary buffer
* @bd: descriptor for the good/bad block search pattern
* @chip: create the table for a specific chip, -1 read all chips.
* Applies only if NAND_BBT_PERCHIP option is set
*
* Create a bad block table by scanning the device
@ -122,8 +122,8 @@ static int create_bbt(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr
/**
* onenand_memory_bbt - [GENERIC] create a memory based bad block table
* @param mtd MTD device structure
* @param bd descriptor for the good/bad block search pattern
* @mtd: MTD device structure
* @bd: descriptor for the good/bad block search pattern
*
* The function creates a memory based bbt by scanning the device
* for manufacturer / software marked good / bad blocks
@ -137,9 +137,9 @@ static inline int onenand_memory_bbt (struct mtd_info *mtd, struct nand_bbt_desc
/**
* onenand_isbad_bbt - [OneNAND Interface] Check if a block is bad
* @param mtd MTD device structure
* @param offs offset in the device
* @param allowbbt allow access to bad block table region
* @mtd: MTD device structure
* @offs: offset in the device
* @allowbbt: allow access to bad block table region
*/
static int onenand_isbad_bbt(struct mtd_info *mtd, loff_t offs, int allowbbt)
{
@ -166,8 +166,8 @@ static int onenand_isbad_bbt(struct mtd_info *mtd, loff_t offs, int allowbbt)
/**
* onenand_scan_bbt - [OneNAND Interface] scan, find, read and maybe create bad block table(s)
* @param mtd MTD device structure
* @param bd descriptor for the good/bad block search pattern
* @mtd: MTD device structure
* @bd: descriptor for the good/bad block search pattern
*
* The function checks, if a bad block table(s) is/are already
* available. If not it scans the device for manufacturer
@ -221,7 +221,7 @@ static struct nand_bbt_descr largepage_memorybased = {
/**
* onenand_default_bbt - [OneNAND Interface] Select a default bad block table for the device
* @param mtd MTD device structure
* @mtd: MTD device structure
*
* This function selects the default bad block table
* support for the device and calls the onenand_scan_bbt function


@ -371,12 +371,12 @@ static int omap2_onenand_read_bufferram(struct mtd_info *mtd, int area,
bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
/*
* If the buffer address is not DMA-able, len is not long enough to make
* DMA transfers profitable or panic_write() may be in an interrupt
* context fallback to PIO mode.
* If the buffer address is not DMA-able, len is not long enough to
* make DMA transfers profitable, or this was invoked from panic_write(),
* fall back to PIO mode.
*/
if (!virt_addr_valid(buf) || bram_offset & 3 || (size_t)buf & 3 ||
count < 384 || in_interrupt() || oops_in_progress)
count < 384 || mtd->oops_panic_write)
goto out_copy;
xtra = count & 3;
@ -418,12 +418,12 @@ static int omap2_onenand_write_bufferram(struct mtd_info *mtd, int area,
bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
/*
* If the buffer address is not DMA-able, len is not long enough to make
* DMA transfers profitable or panic_write() may be in an interrupt
* context fallback to PIO mode.
* If the buffer address is not DMA-able, len is not long enough to
* make DMA transfers profitable, or this was invoked from panic_write(),
* fall back to PIO mode.
*/
if (!virt_addr_valid(buf) || bram_offset & 3 || (size_t)buf & 3 ||
count < 384 || in_interrupt() || oops_in_progress)
count < 384 || mtd->oops_panic_write)
goto out_copy;
dma_src = dma_map_single(dev, buf, count, DMA_TO_DEVICE);


@ -1,20 +1,8 @@
# SPDX-License-Identifier: GPL-2.0-only
config MTD_NAND_ECC_SW_HAMMING
tristate
config MTD_NAND_ECC_SW_HAMMING_SMC
bool "NAND ECC Smart Media byte order"
depends on MTD_NAND_ECC_SW_HAMMING
default n
help
Software ECC according to the Smart Media Specification.
The original Linux implementation had byte 0 and 1 swapped.
menuconfig MTD_RAW_NAND
tristate "Raw/Parallel NAND Device Support"
select MTD_NAND_CORE
select MTD_NAND_ECC
select MTD_NAND_ECC_SW_HAMMING
help
This enables support for accessing all type of raw/parallel
NAND flash devices. For further information see
@ -22,16 +10,6 @@ menuconfig MTD_RAW_NAND
if MTD_RAW_NAND
config MTD_NAND_ECC_SW_BCH
bool "Support software BCH ECC"
select BCH
default n
help
This enables support for software BCH error correction. Binary BCH
codes are more powerful and cpu intensive than traditional Hamming
ECC codes. They are used with NAND devices requiring more than 1 bit
of error correction.
comment "Raw/parallel NAND flash controllers"
config MTD_NAND_DENALI
@ -93,6 +71,7 @@ config MTD_NAND_AU1550
config MTD_NAND_NDFC
tristate "IBM/MCC 4xx NAND controller"
depends on 4xx
select MTD_NAND_ECC_SW_HAMMING
select MTD_NAND_ECC_SW_HAMMING_SMC
help
NDFC Nand Flash Controllers are integrated in IBM/AMCC's 4xx SoCs
@ -313,7 +292,7 @@ config MTD_NAND_VF610_NFC
config MTD_NAND_MXC
tristate "Freescale MXC NAND controller"
depends on ARCH_MXC || COMPILE_TEST
depends on HAS_IOMEM
depends on HAS_IOMEM && OF
help
This enables the driver for the NAND flash controller on the
MXC processors.
@ -462,6 +441,26 @@ config MTD_NAND_ARASAN
Enables the driver for the Arasan NAND flash controller on
Zynq Ultrascale+ MPSoC.
config MTD_NAND_INTEL_LGM
tristate "Support for NAND controller on Intel LGM SoC"
depends on OF || COMPILE_TEST
depends on HAS_IOMEM
help
Enables support for NAND Flash chips on Intel's LGM SoC.
NAND flash controller interfaced through the External Bus Unit.
config MTD_NAND_ROCKCHIP
tristate "Rockchip NAND controller"
depends on ARCH_ROCKCHIP && HAS_IOMEM
help
Enables support for NAND controller on Rockchip SoCs.
There are four different versions of NAND FLASH Controllers,
including:
NFC v600: RK2928, RK3066, RK3188
NFC v622: RK3036, RK3128
NFC v800: RK3308, RV1108
NFC v900: PX30, RK3326
comment "Misc"
config MTD_SM_COMMON


@ -1,8 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_MTD_RAW_NAND) += nand.o
obj-$(CONFIG_MTD_NAND_ECC_SW_HAMMING) += nand_ecc.o
nand-$(CONFIG_MTD_NAND_ECC_SW_BCH) += nand_bch.o
obj-$(CONFIG_MTD_SM_COMMON) += sm_common.o
obj-$(CONFIG_MTD_NAND_CAFE) += cafe_nand.o
@ -58,6 +56,8 @@ obj-$(CONFIG_MTD_NAND_STM32_FMC2) += stm32_fmc2_nand.o
obj-$(CONFIG_MTD_NAND_MESON) += meson_nand.o
obj-$(CONFIG_MTD_NAND_CADENCE) += cadence-nand-controller.o
obj-$(CONFIG_MTD_NAND_ARASAN) += arasan-nand-controller.o
obj-$(CONFIG_MTD_NAND_INTEL_LGM) += intel-nand-controller.o
obj-$(CONFIG_MTD_NAND_ROCKCHIP) += rockchip-nand-controller.o
nand-objs := nand_base.o nand_legacy.o nand_bbt.o nand_timings.o nand_ids.o
nand-objs += nand_onfi.o


@ -118,6 +118,7 @@
* @rdy_timeout_ms: Timeout for waits on Ready/Busy pin
* @len: Data transfer length
* @read: Data transfer direction from the controller point of view
* @buf: Data buffer
*/
struct anfc_op {
u32 pkt_reg;


@ -3,6 +3,7 @@
* Copyright (C) 2004 Embedded Edge, LLC
*/
#include <linux/delay.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/interrupt.h>


@ -1846,7 +1846,7 @@ static void brcmnand_write_buf(struct nand_chip *chip, const uint8_t *buf,
}
}
/**
/*
* Kick EDU engine
*/
static int brcmnand_edu_trans(struct brcmnand_host *host, u64 addr, u32 *buf,
@ -1937,7 +1937,7 @@ static int brcmnand_edu_trans(struct brcmnand_host *host, u64 addr, u32 *buf,
return ret;
}
/**
/*
* Construct a FLASH_DMA descriptor as part of a linked list. You must know the
* following ahead of time:
* - Is this descriptor the beginning or end of a linked list?
@ -1970,7 +1970,7 @@ static int brcmnand_fill_dma_desc(struct brcmnand_host *host,
return 0;
}
/**
/*
* Kick the FLASH_DMA engine, with a given DMA descriptor
*/
static void brcmnand_dma_run(struct brcmnand_host *host, dma_addr_t desc)


@ -359,10 +359,10 @@ static int cafe_nand_read_oob(struct nand_chip *chip, int page)
}
/**
* cafe_nand_read_page_syndrome - [REPLACEABLE] hardware ecc syndrome based page read
* @mtd: mtd info structure
* @chip: nand chip info structure
* @buf: buffer to store read data
* @oob_required: caller expects OOB data read to chip->oob_poi
* @page: page number to read
*
* The hw generator calculates the error syndrome automatically. Therefore
* we need a special oob layout and handling.


@ -19,7 +19,6 @@
#include <linux/delay.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/partitions.h>
#include <linux/iopoll.h>
@ -252,7 +251,7 @@ static int cs553x_attach_chip(struct nand_chip *chip)
chip->ecc.bytes = 3;
chip->ecc.hwctl = cs_enable_hwecc;
chip->ecc.calculate = cs_calculate_ecc;
chip->ecc.correct = nand_correct_data;
chip->ecc.correct = rawnand_sw_hamming_correct;
chip->ecc.strength = 1;
return 0;


@ -586,10 +586,10 @@ static int davinci_nand_attach_chip(struct nand_chip *chip)
return PTR_ERR(pdata);
/* Use board-specific ECC config */
info->chip.ecc.engine_type = pdata->engine_type;
info->chip.ecc.placement = pdata->ecc_placement;
chip->ecc.engine_type = pdata->engine_type;
chip->ecc.placement = pdata->ecc_placement;
switch (info->chip.ecc.engine_type) {
switch (chip->ecc.engine_type) {
case NAND_ECC_ENGINE_TYPE_NONE:
pdata->ecc_bits = 0;
break;
@ -601,7 +601,7 @@ static int davinci_nand_attach_chip(struct nand_chip *chip)
* NAND_ECC_ALGO_HAMMING to avoid adding an extra ->ecc_algo
* field to davinci_nand_pdata.
*/
info->chip.ecc.algo = NAND_ECC_ALGO_HAMMING;
chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
break;
case NAND_ECC_ENGINE_TYPE_ON_HOST:
if (pdata->ecc_bits == 4) {
@ -628,12 +628,12 @@ static int davinci_nand_attach_chip(struct nand_chip *chip)
if (ret == -EBUSY)
return ret;
info->chip.ecc.calculate = nand_davinci_calculate_4bit;
info->chip.ecc.correct = nand_davinci_correct_4bit;
info->chip.ecc.hwctl = nand_davinci_hwctl_4bit;
info->chip.ecc.bytes = 10;
info->chip.ecc.options = NAND_ECC_GENERIC_ERASED_CHECK;
info->chip.ecc.algo = NAND_ECC_ALGO_BCH;
chip->ecc.calculate = nand_davinci_calculate_4bit;
chip->ecc.correct = nand_davinci_correct_4bit;
chip->ecc.hwctl = nand_davinci_hwctl_4bit;
chip->ecc.bytes = 10;
chip->ecc.options = NAND_ECC_GENERIC_ERASED_CHECK;
chip->ecc.algo = NAND_ECC_ALGO_BCH;
/*
* Update ECC layout if needed ... for 1-bit HW ECC, the
@ -651,20 +651,20 @@ static int davinci_nand_attach_chip(struct nand_chip *chip)
} else if (chunks == 4 || chunks == 8) {
mtd_set_ooblayout(mtd,
nand_get_large_page_ooblayout());
info->chip.ecc.read_page = nand_davinci_read_page_hwecc_oob_first;
chip->ecc.read_page = nand_davinci_read_page_hwecc_oob_first;
} else {
return -EIO;
}
} else {
/* 1bit ecc hamming */
info->chip.ecc.calculate = nand_davinci_calculate_1bit;
info->chip.ecc.correct = nand_davinci_correct_1bit;
info->chip.ecc.hwctl = nand_davinci_hwctl_1bit;
info->chip.ecc.bytes = 3;
info->chip.ecc.algo = NAND_ECC_ALGO_HAMMING;
chip->ecc.calculate = nand_davinci_calculate_1bit;
chip->ecc.correct = nand_davinci_correct_1bit;
chip->ecc.hwctl = nand_davinci_hwctl_1bit;
chip->ecc.bytes = 3;
chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
}
info->chip.ecc.size = 512;
info->chip.ecc.strength = pdata->ecc_bits;
chip->ecc.size = 512;
chip->ecc.strength = pdata->ecc_bits;
break;
default:
return -EINVAL;
@ -899,7 +899,7 @@ static int nand_davinci_remove(struct platform_device *pdev)
int ret;
spin_lock_irq(&davinci_nand_lock);
if (info->chip.ecc.placement == NAND_ECC_PLACEMENT_INTERLEAVED)
if (chip->ecc.placement == NAND_ECC_PLACEMENT_INTERLEAVED)
ecc4_busy = false;
spin_unlock_irq(&davinci_nand_lock);


@ -216,7 +216,7 @@ static int doc_ecc_decode(struct rs_control *rs, uint8_t *data, uint8_t *ecc)
static void DoC_Delay(struct doc_priv *doc, unsigned short cycles)
{
volatile char dummy;
volatile char __always_unused dummy;
int i;
for (i = 0; i < cycles; i++) {
@ -703,7 +703,7 @@ static int doc200x_calculate_ecc(struct nand_chip *this, const u_char *dat,
struct doc_priv *doc = nand_get_controller_data(this);
void __iomem *docptr = doc->virtadr;
int i;
int emptymatch = 1;
int __always_unused emptymatch = 1;
/* flush the pipeline */
if (DoC_is_2000(doc)) {


@ -22,7 +22,6 @@
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/partitions.h>
#include <asm/io.h>


@ -15,7 +15,6 @@
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/fsl_ifc.h>
#include <linux/iopoll.h>


@ -11,7 +11,6 @@
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/mtd.h>
#include <linux/of_platform.h>


@ -26,7 +26,6 @@
#include <linux/types.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/platform_device.h>
#include <linux/of.h>
#include <linux/mtd/partitions.h>
@ -918,7 +917,7 @@ static int fsmc_nand_attach_chip(struct nand_chip *nand)
case NAND_ECC_ENGINE_TYPE_ON_HOST:
dev_info(host->dev, "Using 1-bit HW ECC scheme\n");
nand->ecc.calculate = fsmc_read_hwecc_ecc1;
nand->ecc.correct = nand_correct_data;
nand->ecc.correct = rawnand_sw_hamming_correct;
nand->ecc.hwctl = fsmc_enable_hwecc;
nand->ecc.bytes = 3;
nand->ecc.strength = 1;
@ -942,7 +941,7 @@ static int fsmc_nand_attach_chip(struct nand_chip *nand)
/*
* Don't set layout for BCH4 SW ECC. This will be
* generated later in nand_bch_init() later.
* generated later during BCH initialization.
*/
if (nand->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST) {
switch (mtd->oobsize) {


@ -1,3 +1,2 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-$(CONFIG_MTD_NAND_GPMI_NAND) += gpmi_nand.o
gpmi_nand-objs += gpmi-nand.o
obj-$(CONFIG_MTD_NAND_GPMI_NAND) += gpmi-nand.o


@ -149,8 +149,10 @@ static int gpmi_init(struct gpmi_nand_data *this)
int ret;
ret = pm_runtime_get_sync(this->dev);
if (ret < 0)
if (ret < 0) {
pm_runtime_put_noidle(this->dev);
return ret;
}
ret = gpmi_reset_block(r->gpmi_regs, false);
if (ret)
@ -179,9 +181,11 @@ static int gpmi_init(struct gpmi_nand_data *this)
/*
* Decouple the chip select from dma channel. We use dma0 for all
* the chips.
* the chips, force all NAND RDY_BUSY inputs to be sourced from
* RDY_BUSY0.
*/
writel(BM_GPMI_CTRL1_DECOUPLE_CS, r->gpmi_regs + HW_GPMI_CTRL1_SET);
writel(BM_GPMI_CTRL1_DECOUPLE_CS | BM_GPMI_CTRL1_GANGED_RDYBUSY,
r->gpmi_regs + HW_GPMI_CTRL1_SET);
err_out:
pm_runtime_mark_last_busy(this->dev);
@ -2252,7 +2256,7 @@ static int gpmi_nfc_exec_op(struct nand_chip *chip,
void *buf_read = NULL;
const void *buf_write = NULL;
bool direct = false;
struct completion *completion;
struct completion *dma_completion, *bch_completion;
unsigned long to;
if (check_only)
@ -2263,8 +2267,10 @@ static int gpmi_nfc_exec_op(struct nand_chip *chip,
this->transfers[i].direction = DMA_NONE;
ret = pm_runtime_get_sync(this->dev);
if (ret < 0)
if (ret < 0) {
pm_runtime_put_noidle(this->dev);
return ret;
}
/*
* This driver currently supports only one NAND chip. Plus, dies share
@ -2347,22 +2353,24 @@ static int gpmi_nfc_exec_op(struct nand_chip *chip,
this->resources.bch_regs + HW_BCH_FLASH0LAYOUT1);
}
desc->callback = dma_irq_callback;
desc->callback_param = this;
dma_completion = &this->dma_done;
bch_completion = NULL;
init_completion(dma_completion);
if (this->bch && buf_read) {
writel(BM_BCH_CTRL_COMPLETE_IRQ_EN,
this->resources.bch_regs + HW_BCH_CTRL_SET);
completion = &this->bch_done;
} else {
desc->callback = dma_irq_callback;
desc->callback_param = this;
completion = &this->dma_done;
bch_completion = &this->bch_done;
init_completion(bch_completion);
}
init_completion(completion);
dmaengine_submit(desc);
dma_async_issue_pending(get_dma_chan(this));
to = wait_for_completion_timeout(completion, msecs_to_jiffies(1000));
to = wait_for_completion_timeout(dma_completion, msecs_to_jiffies(1000));
if (!to) {
dev_err(this->dev, "DMA timeout, last DMA\n");
gpmi_dump_info(this);
@ -2370,6 +2378,16 @@ static int gpmi_nfc_exec_op(struct nand_chip *chip,
goto unmap;
}
if (this->bch && buf_read) {
to = wait_for_completion_timeout(bch_completion, msecs_to_jiffies(1000));
if (!to) {
dev_err(this->dev, "BCH timeout, last DMA\n");
gpmi_dump_info(this);
ret = -ETIMEDOUT;
goto unmap;
}
}
writel(BM_BCH_CTRL_COMPLETE_IRQ_EN,
this->resources.bch_regs + HW_BCH_CTRL_CLR);
gpmi_clear_bch(this);
@ -2461,43 +2479,25 @@ err_out:
}
static const struct of_device_id gpmi_nand_id_table[] = {
{
.compatible = "fsl,imx23-gpmi-nand",
.data = &gpmi_devdata_imx23,
}, {
.compatible = "fsl,imx28-gpmi-nand",
.data = &gpmi_devdata_imx28,
}, {
.compatible = "fsl,imx6q-gpmi-nand",
.data = &gpmi_devdata_imx6q,
}, {
.compatible = "fsl,imx6sx-gpmi-nand",
.data = &gpmi_devdata_imx6sx,
}, {
.compatible = "fsl,imx7d-gpmi-nand",
.data = &gpmi_devdata_imx7d,
}, {}
{ .compatible = "fsl,imx23-gpmi-nand", .data = &gpmi_devdata_imx23, },
{ .compatible = "fsl,imx28-gpmi-nand", .data = &gpmi_devdata_imx28, },
{ .compatible = "fsl,imx6q-gpmi-nand", .data = &gpmi_devdata_imx6q, },
{ .compatible = "fsl,imx6sx-gpmi-nand", .data = &gpmi_devdata_imx6sx, },
{ .compatible = "fsl,imx7d-gpmi-nand", .data = &gpmi_devdata_imx7d,},
{}
};
MODULE_DEVICE_TABLE(of, gpmi_nand_id_table);
static int gpmi_nand_probe(struct platform_device *pdev)
{
struct gpmi_nand_data *this;
const struct of_device_id *of_id;
int ret;
this = devm_kzalloc(&pdev->dev, sizeof(*this), GFP_KERNEL);
if (!this)
return -ENOMEM;
of_id = of_match_device(gpmi_nand_id_table, &pdev->dev);
if (of_id) {
this->devdata = of_id->data;
} else {
dev_err(&pdev->dev, "Failed to find the right device id.\n");
return -ENODEV;
}
this->devdata = of_device_get_match_data(&pdev->dev);
platform_set_drvdata(pdev, this);
this->pdev = pdev;
this->dev = &pdev->dev;


@ -107,6 +107,7 @@
#define BV_GPMI_CTRL1_WRN_DLY_SEL_7_TO_12NS 0x2
#define BV_GPMI_CTRL1_WRN_DLY_SEL_NO_DELAY 0x3
#define BM_GPMI_CTRL1_GANGED_RDYBUSY (1 << 19)
#define BM_GPMI_CTRL1_BCH_MODE (1 << 18)
#define BP_GPMI_CTRL1_DLL_ENABLE 17


@ -71,8 +71,6 @@ static struct ingenic_ecc *ingenic_ecc_get(struct device_node *np)
if (!pdev || !platform_get_drvdata(pdev))
return ERR_PTR(-EPROBE_DEFER);
get_device(&pdev->dev);
ecc = platform_get_drvdata(pdev);
clk_prepare_enable(ecc->clk);


@ -0,0 +1,721 @@
// SPDX-License-Identifier: GPL-2.0+
/* Copyright (c) 2020 Intel Corporation. */
#include <linux/clk.h>
#include <linux/completion.h>
#include <linux/dmaengine.h>
#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/iopoll.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand.h>
#include <linux/platform_device.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <asm/unaligned.h>
#define EBU_CLC 0x000
#define EBU_CLC_RST 0x00000000u
#define EBU_ADDR_SEL(n) (0x020 + (n) * 4)
/* 5 bits 26:22 included for comparison in the ADDR_SELx */
#define EBU_ADDR_MASK(x) ((x) << 4)
#define EBU_ADDR_SEL_REGEN 0x1
#define EBU_BUSCON(n) (0x060 + (n) * 4)
#define EBU_BUSCON_CMULT_V4 0x1
#define EBU_BUSCON_RECOVC(n) ((n) << 2)
#define EBU_BUSCON_HOLDC(n) ((n) << 4)
#define EBU_BUSCON_WAITRDC(n) ((n) << 6)
#define EBU_BUSCON_WAITWRC(n) ((n) << 8)
#define EBU_BUSCON_BCGEN_CS 0x0
#define EBU_BUSCON_SETUP_EN BIT(22)
#define EBU_BUSCON_ALEC 0xC000
#define EBU_CON 0x0B0
#define EBU_CON_NANDM_EN BIT(0)
#define EBU_CON_NANDM_DIS 0x0
#define EBU_CON_CSMUX_E_EN BIT(1)
#define EBU_CON_ALE_P_LOW BIT(2)
#define EBU_CON_CLE_P_LOW BIT(3)
#define EBU_CON_CS_P_LOW BIT(4)
#define EBU_CON_SE_P_LOW BIT(5)
#define EBU_CON_WP_P_LOW BIT(6)
#define EBU_CON_PRE_P_LOW BIT(7)
#define EBU_CON_IN_CS_S(n) ((n) << 8)
#define EBU_CON_OUT_CS_S(n) ((n) << 10)
#define EBU_CON_LAT_EN_CS_P ((0x3D) << 18)
#define EBU_WAIT 0x0B4
#define EBU_WAIT_RDBY BIT(0)
#define EBU_WAIT_WR_C BIT(3)
#define HSNAND_CTL1 0x110
#define HSNAND_CTL1_ADDR_SHIFT 24
#define HSNAND_CTL2 0x114
#define HSNAND_CTL2_ADDR_SHIFT 8
#define HSNAND_CTL2_CYC_N_V5 (0x2 << 16)
#define HSNAND_INT_MSK_CTL 0x124
#define HSNAND_INT_MSK_CTL_WR_C BIT(4)
#define HSNAND_INT_STA 0x128
#define HSNAND_INT_STA_WR_C BIT(4)
#define HSNAND_CTL 0x130
#define HSNAND_CTL_ENABLE_ECC BIT(0)
#define HSNAND_CTL_GO BIT(2)
#define HSNAND_CTL_CE_SEL_CS(n) BIT(3 + (n))
#define HSNAND_CTL_RW_READ 0x0
#define HSNAND_CTL_RW_WRITE BIT(10)
#define HSNAND_CTL_ECC_OFF_V8TH BIT(11)
#define HSNAND_CTL_CKFF_EN 0x0
#define HSNAND_CTL_MSG_EN BIT(17)
#define HSNAND_PARA0 0x13c
#define HSNAND_PARA0_PAGE_V8192 0x3
#define HSNAND_PARA0_PIB_V256 (0x3 << 4)
#define HSNAND_PARA0_BYP_EN_NP 0x0
#define HSNAND_PARA0_BYP_DEC_NP 0x0
#define HSNAND_PARA0_TYPE_ONFI BIT(18)
#define HSNAND_PARA0_ADEP_EN BIT(21)
#define HSNAND_CMSG_0 0x150
#define HSNAND_CMSG_1 0x154
#define HSNAND_ALE_OFFS BIT(2)
#define HSNAND_CLE_OFFS BIT(3)
#define HSNAND_CS_OFFS BIT(4)
#define HSNAND_ECC_OFFSET 0x008
#define NAND_DATA_IFACE_CHECK_ONLY -1
#define MAX_CS 2
#define HZ_PER_MHZ 1000000L
#define USEC_PER_SEC 1000000L
struct ebu_nand_cs {
void __iomem *chipaddr;
dma_addr_t nand_pa;
u32 addr_sel;
};
struct ebu_nand_controller {
struct nand_controller controller;
struct nand_chip chip;
struct device *dev;
void __iomem *ebu;
void __iomem *hsnand;
struct dma_chan *dma_tx;
struct dma_chan *dma_rx;
struct completion dma_access_complete;
unsigned long clk_rate;
struct clk *clk;
u32 nd_para0;
u8 cs_num;
struct ebu_nand_cs cs[MAX_CS];
};
static inline struct ebu_nand_controller *nand_to_ebu(struct nand_chip *chip)
{
return container_of(chip, struct ebu_nand_controller, chip);
}
static int ebu_nand_waitrdy(struct nand_chip *chip, int timeout_ms)
{
struct ebu_nand_controller *ctrl = nand_to_ebu(chip);
u32 status;
return readl_poll_timeout(ctrl->ebu + EBU_WAIT, status,
(status & EBU_WAIT_RDBY) ||
(status & EBU_WAIT_WR_C), 20, timeout_ms);
}
static u8 ebu_nand_readb(struct nand_chip *chip)
{
struct ebu_nand_controller *ebu_host = nand_get_controller_data(chip);
u8 cs_num = ebu_host->cs_num;
u8 val;
val = readb(ebu_host->cs[cs_num].chipaddr + HSNAND_CS_OFFS);
ebu_nand_waitrdy(chip, 1000);
return val;
}
static void ebu_nand_writeb(struct nand_chip *chip, u32 offset, u8 value)
{
struct ebu_nand_controller *ebu_host = nand_get_controller_data(chip);
u8 cs_num = ebu_host->cs_num;
writeb(value, ebu_host->cs[cs_num].chipaddr + offset);
ebu_nand_waitrdy(chip, 1000);
}
static void ebu_read_buf(struct nand_chip *chip, u_char *buf, unsigned int len)
{
int i;
for (i = 0; i < len; i++)
buf[i] = ebu_nand_readb(chip);
}
static void ebu_write_buf(struct nand_chip *chip, const u_char *buf, int len)
{
int i;
for (i = 0; i < len; i++)
ebu_nand_writeb(chip, HSNAND_CS_OFFS, buf[i]);
}
static void ebu_nand_disable(struct nand_chip *chip)
{
struct ebu_nand_controller *ebu_host = nand_get_controller_data(chip);
writel(0, ebu_host->ebu + EBU_CON);
}
static void ebu_select_chip(struct nand_chip *chip)
{
struct ebu_nand_controller *ebu_host = nand_get_controller_data(chip);
void __iomem *nand_con = ebu_host->ebu + EBU_CON;
u32 cs = ebu_host->cs_num;
writel(EBU_CON_NANDM_EN | EBU_CON_CSMUX_E_EN | EBU_CON_CS_P_LOW |
EBU_CON_SE_P_LOW | EBU_CON_WP_P_LOW | EBU_CON_PRE_P_LOW |
EBU_CON_IN_CS_S(cs) | EBU_CON_OUT_CS_S(cs) |
EBU_CON_LAT_EN_CS_P, nand_con);
}
static int ebu_nand_set_timings(struct nand_chip *chip, int csline,
const struct nand_interface_config *conf)
{
struct ebu_nand_controller *ctrl = nand_to_ebu(chip);
unsigned int rate = clk_get_rate(ctrl->clk) / HZ_PER_MHZ;
unsigned int period = DIV_ROUND_UP(USEC_PER_SEC, rate);
const struct nand_sdr_timings *timings;
u32 trecov, thold, twrwait, trdwait;
u32 reg = 0;
timings = nand_get_sdr_timings(conf);
if (IS_ERR(timings))
return PTR_ERR(timings);
if (csline == NAND_DATA_IFACE_CHECK_ONLY)
return 0;
trecov = DIV_ROUND_UP(max(timings->tREA_max, timings->tREH_min),
period);
reg |= EBU_BUSCON_RECOVC(trecov);
thold = DIV_ROUND_UP(max(timings->tDH_min, timings->tDS_min), period);
reg |= EBU_BUSCON_HOLDC(thold);
trdwait = DIV_ROUND_UP(max(timings->tRC_min, timings->tREH_min),
period);
reg |= EBU_BUSCON_WAITRDC(trdwait);
twrwait = DIV_ROUND_UP(max(timings->tWC_min, timings->tWH_min), period);
reg |= EBU_BUSCON_WAITWRC(twrwait);
reg |= EBU_BUSCON_CMULT_V4 | EBU_BUSCON_BCGEN_CS | EBU_BUSCON_ALEC |
EBU_BUSCON_SETUP_EN;
writel(reg, ctrl->ebu + EBU_BUSCON(ctrl->cs_num));
return 0;
}
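For readers of ebu_nand_set_timings() above: the rawnand SDR timing fields are expressed in picoseconds, and period ends up as picoseconds per EBU clock cycle, so every constraint is simply rounded up to a whole number of cycles. A self-contained recomputation with made-up numbers follows (125 MHz clock, 25000 ps constraint); the figures are illustrative, not taken from a datasheet.

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))	/* mirrors the kernel macro */

int main(void)
{
	unsigned long rate_mhz = 125;				/* hypothetical EBU clock */
	unsigned long period = DIV_ROUND_UP(1000000UL, rate_mhz);	/* 8000 ps per cycle */
	unsigned long constraint_ps = 25000;			/* hypothetical timing value */

	/* Prints: period=8000 ps, cycles=4 */
	printf("period=%lu ps, cycles=%lu\n",
	       period, DIV_ROUND_UP(constraint_ps, period));
	return 0;
}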
static int ebu_nand_ooblayout_ecc(struct mtd_info *mtd, int section,
struct mtd_oob_region *oobregion)
{
struct nand_chip *chip = mtd_to_nand(mtd);
if (section)
return -ERANGE;
oobregion->offset = HSNAND_ECC_OFFSET;
oobregion->length = chip->ecc.total;
return 0;
}
static int ebu_nand_ooblayout_free(struct mtd_info *mtd, int section,
struct mtd_oob_region *oobregion)
{
struct nand_chip *chip = mtd_to_nand(mtd);
if (section)
return -ERANGE;
oobregion->offset = chip->ecc.total + HSNAND_ECC_OFFSET;
oobregion->length = mtd->oobsize - oobregion->offset;
return 0;
}
static const struct mtd_ooblayout_ops ebu_nand_ooblayout_ops = {
.ecc = ebu_nand_ooblayout_ecc,
.free = ebu_nand_ooblayout_free,
};
static void ebu_dma_rx_callback(void *cookie)
{
struct ebu_nand_controller *ebu_host = cookie;
dmaengine_terminate_async(ebu_host->dma_rx);
complete(&ebu_host->dma_access_complete);
}
static void ebu_dma_tx_callback(void *cookie)
{
struct ebu_nand_controller *ebu_host = cookie;
dmaengine_terminate_async(ebu_host->dma_tx);
complete(&ebu_host->dma_access_complete);
}
static int ebu_dma_start(struct ebu_nand_controller *ebu_host, u32 dir,
const u8 *buf, u32 len)
{
struct dma_async_tx_descriptor *tx;
struct completion *dma_completion;
dma_async_tx_callback callback;
struct dma_chan *chan;
dma_cookie_t cookie;
unsigned long flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
dma_addr_t buf_dma;
int ret;
u32 timeout;
if (dir == DMA_DEV_TO_MEM) {
chan = ebu_host->dma_rx;
dma_completion = &ebu_host->dma_access_complete;
callback = ebu_dma_rx_callback;
} else {
chan = ebu_host->dma_tx;
dma_completion = &ebu_host->dma_access_complete;
callback = ebu_dma_tx_callback;
}
buf_dma = dma_map_single(chan->device->dev, (void *)buf, len, dir);
if (dma_mapping_error(chan->device->dev, buf_dma)) {
dev_err(ebu_host->dev, "Failed to map DMA buffer\n");
ret = -EIO;
goto err_unmap;
}
tx = dmaengine_prep_slave_single(chan, buf_dma, len, dir, flags);
if (!tx)
return -ENXIO;
tx->callback = callback;
tx->callback_param = ebu_host;
cookie = tx->tx_submit(tx);
ret = dma_submit_error(cookie);
if (ret) {
dev_err(ebu_host->dev, "dma_submit_error %d\n", cookie);
ret = -EIO;
goto err_unmap;
}
init_completion(dma_completion);
dma_async_issue_pending(chan);
/* Wait for the DMA to finish the data transfer. */
timeout = wait_for_completion_timeout(dma_completion, msecs_to_jiffies(1000));
if (!timeout) {
dev_err(ebu_host->dev, "I/O Error in DMA RX (status %d)\n",
dmaengine_tx_status(chan, cookie, NULL));
dmaengine_terminate_sync(chan);
ret = -ETIMEDOUT;
goto err_unmap;
}
return 0;
err_unmap:
dma_unmap_single(ebu_host->dev, buf_dma, len, dir);
return ret;
}
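The ebu_dma_start() helper above follows the usual dmaengine slave pattern: map the buffer, prepare a single-buffer slave descriptor, submit it, issue pending work and wait for a completion signalled from the descriptor callback. Below is a generic, hedged sketch of that sequence (the function and parameter names are placeholders and the 1 s timeout is arbitrary); it unmaps on every exit path using the DMA channel's own device and translates the dmaengine transfer direction into the streaming-API direction for the mapping calls.

#include <linux/completion.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/jiffies.h>

static void sketch_dma_done(void *param)
{
	complete(param);	/* param is the completion passed via callback_param */
}

static int sketch_dma_xfer(struct dma_chan *chan, void *buf, size_t len,
			   enum dma_transfer_direction dir,
			   struct completion *done)
{
	enum dma_data_direction map_dir = (dir == DMA_DEV_TO_MEM) ?
					  DMA_FROM_DEVICE : DMA_TO_DEVICE;
	struct device *dma_dev = chan->device->dev;
	struct dma_async_tx_descriptor *tx;
	dma_addr_t addr;
	dma_cookie_t cookie;
	int ret = 0;

	addr = dma_map_single(dma_dev, buf, len, map_dir);
	if (dma_mapping_error(dma_dev, addr))
		return -EIO;

	tx = dmaengine_prep_slave_single(chan, addr, len, dir,
					 DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
	if (!tx) {
		ret = -ENXIO;
		goto unmap;
	}

	init_completion(done);
	tx->callback = sketch_dma_done;
	tx->callback_param = done;

	cookie = dmaengine_submit(tx);
	ret = dma_submit_error(cookie);
	if (ret)
		goto unmap;

	dma_async_issue_pending(chan);
	if (!wait_for_completion_timeout(done, msecs_to_jiffies(1000))) {
		dmaengine_terminate_sync(chan);
		ret = -ETIMEDOUT;
	}

unmap:
	dma_unmap_single(dma_dev, addr, len, map_dir);
	return ret;
}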
static void ebu_nand_trigger(struct ebu_nand_controller *ebu_host,
int page, u32 cmd)
{
unsigned int val;
val = cmd | (page & 0xFF) << HSNAND_CTL1_ADDR_SHIFT;
writel(val, ebu_host->hsnand + HSNAND_CTL1);
val = (page & 0xFFFF00) >> 8 | HSNAND_CTL2_CYC_N_V5;
writel(val, ebu_host->hsnand + HSNAND_CTL2);
writel(ebu_host->nd_para0, ebu_host->hsnand + HSNAND_PARA0);
/* clear first, will update later */
writel(0xFFFFFFFF, ebu_host->hsnand + HSNAND_CMSG_0);
writel(0xFFFFFFFF, ebu_host->hsnand + HSNAND_CMSG_1);
writel(HSNAND_INT_MSK_CTL_WR_C,
ebu_host->hsnand + HSNAND_INT_MSK_CTL);
if (!cmd)
val = HSNAND_CTL_RW_READ;
else
val = HSNAND_CTL_RW_WRITE;
writel(HSNAND_CTL_MSG_EN | HSNAND_CTL_CKFF_EN |
HSNAND_CTL_ECC_OFF_V8TH | HSNAND_CTL_CE_SEL_CS(ebu_host->cs_num) |
HSNAND_CTL_ENABLE_ECC | HSNAND_CTL_GO | val,
ebu_host->hsnand + HSNAND_CTL);
}
static int ebu_nand_read_page_hwecc(struct nand_chip *chip, u8 *buf,
int oob_required, int page)
{
struct mtd_info *mtd = nand_to_mtd(chip);
struct ebu_nand_controller *ebu_host = nand_get_controller_data(chip);
int ret, reg_data;
ebu_nand_trigger(ebu_host, page, NAND_CMD_READ0);
ret = ebu_dma_start(ebu_host, DMA_DEV_TO_MEM, buf, mtd->writesize);
if (ret)
return ret;
if (oob_required)
chip->ecc.read_oob(chip, page);
reg_data = readl(ebu_host->hsnand + HSNAND_CTL);
reg_data &= ~HSNAND_CTL_GO;
writel(reg_data, ebu_host->hsnand + HSNAND_CTL);
return 0;
}
static int ebu_nand_write_page_hwecc(struct nand_chip *chip, const u8 *buf,
int oob_required, int page)
{
struct mtd_info *mtd = nand_to_mtd(chip);
struct ebu_nand_controller *ebu_host = nand_get_controller_data(chip);
void __iomem *int_sta = ebu_host->hsnand + HSNAND_INT_STA;
int reg_data, ret, val;
u32 reg;
ebu_nand_trigger(ebu_host, page, NAND_CMD_SEQIN);
ret = ebu_dma_start(ebu_host, DMA_MEM_TO_DEV, buf, mtd->writesize);
if (ret)
return ret;
if (oob_required) {
reg = get_unaligned_le32(chip->oob_poi);
writel(reg, ebu_host->hsnand + HSNAND_CMSG_0);
reg = get_unaligned_le32(chip->oob_poi + 4);
writel(reg, ebu_host->hsnand + HSNAND_CMSG_1);
}
ret = readl_poll_timeout_atomic(int_sta, val, !(val & HSNAND_INT_STA_WR_C),
10, 1000);
if (ret)
return ret;
reg_data = readl(ebu_host->hsnand + HSNAND_CTL);
reg_data &= ~HSNAND_CTL_GO;
writel(reg_data, ebu_host->hsnand + HSNAND_CTL);
return 0;
}
static const u8 ecc_strength[] = { 1, 1, 4, 8, 24, 32, 40, 60, };
static int ebu_nand_attach_chip(struct nand_chip *chip)
{
struct mtd_info *mtd = nand_to_mtd(chip);
struct ebu_nand_controller *ebu_host = nand_get_controller_data(chip);
u32 ecc_steps, ecc_bytes, ecc_total, pagesize, pg_per_blk;
u32 ecc_strength_ds = chip->ecc.strength;
u32 ecc_size = chip->ecc.size;
u32 writesize = mtd->writesize;
u32 blocksize = mtd->erasesize;
int bch_algo, start, val;
/* Default to an ECC size of 512 */
if (!chip->ecc.size)
chip->ecc.size = 512;
switch (ecc_size) {
case 512:
start = 1;
if (!ecc_strength_ds)
ecc_strength_ds = 4;
break;
case 1024:
start = 4;
if (!ecc_strength_ds)
ecc_strength_ds = 32;
break;
default:
return -EINVAL;
}
/* BCH ECC algorithm Settings for number of bits per 512B/1024B */
bch_algo = round_up(start + 1, 4);
for (val = start; val < bch_algo; val++) {
if (ecc_strength_ds == ecc_strength[val])
break;
}
if (val == bch_algo)
return -EINVAL;
if (ecc_strength_ds == 8)
ecc_bytes = 14;
else
ecc_bytes = DIV_ROUND_UP(ecc_strength_ds * fls(8 * ecc_size), 8);
ecc_steps = writesize / ecc_size;
ecc_total = ecc_steps * ecc_bytes;
if ((ecc_total + 8) > mtd->oobsize)
return -ERANGE;
chip->ecc.total = ecc_total;
pagesize = fls(writesize >> 11);
if (pagesize > HSNAND_PARA0_PAGE_V8192)
return -ERANGE;
pg_per_blk = fls((blocksize / writesize) >> 6) / 8;
if (pg_per_blk > HSNAND_PARA0_PIB_V256)
return -ERANGE;
ebu_host->nd_para0 = pagesize | pg_per_blk | HSNAND_PARA0_BYP_EN_NP |
HSNAND_PARA0_BYP_DEC_NP | HSNAND_PARA0_ADEP_EN |
HSNAND_PARA0_TYPE_ONFI | (val << 29);
mtd_set_ooblayout(mtd, &ebu_nand_ooblayout_ops);
chip->ecc.read_page = ebu_nand_read_page_hwecc;
chip->ecc.write_page = ebu_nand_write_page_hwecc;
return 0;
}
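To make the geometry computed by ebu_nand_attach_chip() above concrete, here is a self-contained recomputation for a hypothetical 2 KiB-page / 64-byte-OOB chip using 512-byte ECC steps with strength 8 (the chip parameters are invented for illustration):

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* 1-based position of the most significant set bit, like the kernel's fls(). */
static int fls_(unsigned int x)
{
	int r = 0;

	while (x) {
		r++;
		x >>= 1;
	}
	return r;
}

int main(void)
{
	unsigned int ecc_size = 512, strength = 8;	/* hypothetical chip */
	unsigned int writesize = 2048, oobsize = 64;
	unsigned int ecc_bytes, steps, total;

	/* Same special case as the driver: strength 8 uses 14 bytes per step. */
	ecc_bytes = (strength == 8) ? 14 :
		    DIV_ROUND_UP(strength * fls_(8 * ecc_size), 8);
	steps = writesize / ecc_size;	/* 4 */
	total = steps * ecc_bytes;	/* 56 */

	/* Prints: ecc_bytes=14 steps=4 total=56 fits=yes (56 + 8 <= 64) */
	printf("ecc_bytes=%u steps=%u total=%u fits=%s\n",
	       ecc_bytes, steps, total,
	       (total + 8) <= oobsize ? "yes" : "no");
	return 0;
}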
static int ebu_nand_exec_op(struct nand_chip *chip,
const struct nand_operation *op, bool check_only)
{
const struct nand_op_instr *instr = NULL;
unsigned int op_id;
int i, timeout_ms, ret = 0;
if (check_only)
return 0;
ebu_select_chip(chip);
for (op_id = 0; op_id < op->ninstrs; op_id++) {
instr = &op->instrs[op_id];
switch (instr->type) {
case NAND_OP_CMD_INSTR:
ebu_nand_writeb(chip, HSNAND_CLE_OFFS | HSNAND_CS_OFFS,
instr->ctx.cmd.opcode);
break;
case NAND_OP_ADDR_INSTR:
for (i = 0; i < instr->ctx.addr.naddrs; i++)
ebu_nand_writeb(chip,
HSNAND_ALE_OFFS | HSNAND_CS_OFFS,
instr->ctx.addr.addrs[i]);
break;
case NAND_OP_DATA_IN_INSTR:
ebu_read_buf(chip, instr->ctx.data.buf.in,
instr->ctx.data.len);
break;
case NAND_OP_DATA_OUT_INSTR:
ebu_write_buf(chip, instr->ctx.data.buf.out,
instr->ctx.data.len);
break;
case NAND_OP_WAITRDY_INSTR:
timeout_ms = instr->ctx.waitrdy.timeout_ms * 1000;
ret = ebu_nand_waitrdy(chip, timeout_ms);
break;
}
}
return ret;
}
static const struct nand_controller_ops ebu_nand_controller_ops = {
.attach_chip = ebu_nand_attach_chip,
.setup_interface = ebu_nand_set_timings,
.exec_op = ebu_nand_exec_op,
};
static void ebu_dma_cleanup(struct ebu_nand_controller *ebu_host)
{
if (ebu_host->dma_rx)
dma_release_channel(ebu_host->dma_rx);
if (ebu_host->dma_tx)
dma_release_channel(ebu_host->dma_tx);
}
static int ebu_nand_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct ebu_nand_controller *ebu_host;
struct nand_chip *nand;
struct mtd_info *mtd = NULL;
struct resource *res;
char *resname;
int ret;
u32 cs;
ebu_host = devm_kzalloc(dev, sizeof(*ebu_host), GFP_KERNEL);
if (!ebu_host)
return -ENOMEM;
ebu_host->dev = dev;
nand_controller_init(&ebu_host->controller);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ebunand");
ebu_host->ebu = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(ebu_host->ebu))
return PTR_ERR(ebu_host->ebu);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "hsnand");
ebu_host->hsnand = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(ebu_host->hsnand))
return PTR_ERR(ebu_host->hsnand);
ret = device_property_read_u32(dev, "reg", &cs);
if (ret) {
dev_err(dev, "failed to get chip select: %d\n", ret);
return ret;
}
ebu_host->cs_num = cs;
resname = devm_kasprintf(dev, GFP_KERNEL, "nand_cs%d", cs);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, resname);
ebu_host->cs[cs].chipaddr = devm_ioremap_resource(dev, res);
ebu_host->cs[cs].nand_pa = res->start;
if (IS_ERR(ebu_host->cs[cs].chipaddr))
return PTR_ERR(ebu_host->cs[cs].chipaddr);
ebu_host->clk = devm_clk_get(dev, NULL);
if (IS_ERR(ebu_host->clk))
return dev_err_probe(dev, PTR_ERR(ebu_host->clk),
"failed to get clock\n");
ret = clk_prepare_enable(ebu_host->clk);
if (ret) {
dev_err(dev, "failed to enable clock: %d\n", ret);
return ret;
}
ebu_host->clk_rate = clk_get_rate(ebu_host->clk);
ebu_host->dma_tx = dma_request_chan(dev, "tx");
if (IS_ERR(ebu_host->dma_tx))
return dev_err_probe(dev, PTR_ERR(ebu_host->dma_tx),
"failed to request DMA tx chan!.\n");
ebu_host->dma_rx = dma_request_chan(dev, "rx");
if (IS_ERR(ebu_host->dma_rx))
return dev_err_probe(dev, PTR_ERR(ebu_host->dma_rx),
"failed to request DMA rx chan!.\n");
resname = devm_kasprintf(dev, GFP_KERNEL, "addr_sel%d", cs);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, resname);
if (!res)
return -EINVAL;
ebu_host->cs[cs].addr_sel = res->start;
writel(ebu_host->cs[cs].addr_sel | EBU_ADDR_MASK(5) | EBU_ADDR_SEL_REGEN,
ebu_host->ebu + EBU_ADDR_SEL(cs));
nand_set_flash_node(&ebu_host->chip, dev->of_node);
mtd = nand_to_mtd(&ebu_host->chip);
if (!mtd->name) {
dev_err(ebu_host->dev, "NAND label property is mandatory\n");
return -EINVAL;
}
mtd->dev.parent = dev;
ebu_host->dev = dev;
platform_set_drvdata(pdev, ebu_host);
nand_set_controller_data(&ebu_host->chip, ebu_host);
nand = &ebu_host->chip;
nand->controller = &ebu_host->controller;
nand->controller->ops = &ebu_nand_controller_ops;
/* Scan to find existence of the device */
ret = nand_scan(&ebu_host->chip, 1);
if (ret)
goto err_cleanup_dma;
ret = mtd_device_register(mtd, NULL, 0);
if (ret)
goto err_clean_nand;
return 0;
err_clean_nand:
nand_cleanup(&ebu_host->chip);
err_cleanup_dma:
ebu_dma_cleanup(ebu_host);
clk_disable_unprepare(ebu_host->clk);
return ret;
}
static int ebu_nand_remove(struct platform_device *pdev)
{
struct ebu_nand_controller *ebu_host = platform_get_drvdata(pdev);
int ret;
ret = mtd_device_unregister(nand_to_mtd(&ebu_host->chip));
WARN_ON(ret);
nand_cleanup(&ebu_host->chip);
ebu_nand_disable(&ebu_host->chip);
ebu_dma_cleanup(ebu_host);
clk_disable_unprepare(ebu_host->clk);
return 0;
}
static const struct of_device_id ebu_nand_match[] = {
{ .compatible = "intel,nand-controller" },
{ .compatible = "intel,lgm-ebunand" },
{}
};
MODULE_DEVICE_TABLE(of, ebu_nand_match);
static struct platform_driver ebu_nand_driver = {
.probe = ebu_nand_probe,
.remove = ebu_nand_remove,
.driver = {
.name = "intel-nand-controller",
.of_match_table = ebu_nand_match,
},
};
module_platform_driver(ebu_nand_driver);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Vadivel Murugan R <vadivel.muruganx.ramuthevar@intel.com>");
MODULE_DESCRIPTION("Intel's LGM External Bus NAND Controller driver");


@ -31,7 +31,6 @@
#include <linux/mm.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/mtd/nand_ecc.h>
#define DRV_NAME "lpc32xx_mlc"


@ -23,7 +23,6 @@
#include <linux/mm.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/gpio.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
@ -803,7 +802,7 @@ static int lpc32xx_nand_attach_chip(struct nand_chip *chip)
chip->ecc.write_oob = lpc32xx_nand_write_oob_syndrome;
chip->ecc.read_oob = lpc32xx_nand_read_oob_syndrome;
chip->ecc.calculate = lpc32xx_nand_ecc_calculate;
chip->ecc.correct = nand_correct_data;
chip->ecc.correct = rawnand_sw_hamming_correct;
chip->ecc.hwctl = lpc32xx_nand_ecc_enable;
/*


@ -2678,12 +2678,6 @@ static int marvell_nand_chip_init(struct device *dev, struct marvell_nfc *nfc,
mtd = nand_to_mtd(chip);
mtd->dev.parent = dev;
/*
* Default to HW ECC engine mode. If the nand-ecc-mode property is given
* in the DT node, this entry will be overwritten in nand_scan_ident().
*/
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
/*
* Save a reference value for timing registers before
* ->setup_interface() is called.


@ -510,7 +510,7 @@ static int meson_nfc_dma_buffer_setup(struct nand_chip *nand, void *databuf,
}
static void meson_nfc_dma_buffer_release(struct nand_chip *nand,
int infolen, int datalen,
int datalen, int infolen,
enum dma_data_direction dir)
{
struct meson_nfc *nfc = nand_get_controller_data(nand);
@ -1044,9 +1044,12 @@ static int meson_nfc_clk_init(struct meson_nfc *nfc)
ret = clk_set_rate(nfc->device_clk, 24000000);
if (ret)
goto err_phase_rx;
goto err_disable_rx;
return 0;
err_disable_rx:
clk_disable_unprepare(nfc->phase_rx);
err_phase_rx:
clk_disable_unprepare(nfc->phase_tx);
err_phase_tx:
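The resource-leak fix in the hunk above makes the error path disable clocks in the reverse order of enabling, by jumping to a new label that first releases the clock enabled last. A minimal sketch of the pattern with two hypothetical clocks `a` and `b` (not the driver's names):

#include <linux/clk.h>

static int sketch_clk_init(struct clk *a, struct clk *b)
{
	int ret;

	ret = clk_prepare_enable(a);
	if (ret)
		return ret;

	ret = clk_prepare_enable(b);
	if (ret)
		goto err_disable_a;

	ret = clk_set_rate(b, 24000000);
	if (ret)
		goto err_disable_b;	/* the old code jumped past this label */

	return 0;

err_disable_b:
	clk_disable_unprepare(b);
err_disable_a:
	clk_disable_unprepare(a);
	return ret;
}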


@ -21,7 +21,6 @@
#include <linux/completion.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_data/mtd-mxc_nand.h>
#define DRIVER_NAME "mxc_nand"
@ -184,7 +183,6 @@ struct mxc_nand_host {
unsigned int buf_start;
const struct mxc_nand_devtype_data *devtype_data;
struct mxc_nand_platform_data pdata;
};
static const char * const part_probes[] = {
@ -1611,70 +1609,16 @@ static inline int is_imx53_nfc(struct mxc_nand_host *host)
return host->devtype_data == &imx53_nand_devtype_data;
}
static const struct platform_device_id mxcnd_devtype[] = {
{
.name = "imx21-nand",
.driver_data = (kernel_ulong_t) &imx21_nand_devtype_data,
}, {
.name = "imx27-nand",
.driver_data = (kernel_ulong_t) &imx27_nand_devtype_data,
}, {
.name = "imx25-nand",
.driver_data = (kernel_ulong_t) &imx25_nand_devtype_data,
}, {
.name = "imx51-nand",
.driver_data = (kernel_ulong_t) &imx51_nand_devtype_data,
}, {
.name = "imx53-nand",
.driver_data = (kernel_ulong_t) &imx53_nand_devtype_data,
}, {
/* sentinel */
}
};
MODULE_DEVICE_TABLE(platform, mxcnd_devtype);
#ifdef CONFIG_OF
static const struct of_device_id mxcnd_dt_ids[] = {
{
.compatible = "fsl,imx21-nand",
.data = &imx21_nand_devtype_data,
}, {
.compatible = "fsl,imx27-nand",
.data = &imx27_nand_devtype_data,
}, {
.compatible = "fsl,imx25-nand",
.data = &imx25_nand_devtype_data,
}, {
.compatible = "fsl,imx51-nand",
.data = &imx51_nand_devtype_data,
}, {
.compatible = "fsl,imx53-nand",
.data = &imx53_nand_devtype_data,
},
{ .compatible = "fsl,imx21-nand", .data = &imx21_nand_devtype_data, },
{ .compatible = "fsl,imx27-nand", .data = &imx27_nand_devtype_data, },
{ .compatible = "fsl,imx25-nand", .data = &imx25_nand_devtype_data, },
{ .compatible = "fsl,imx51-nand", .data = &imx51_nand_devtype_data, },
{ .compatible = "fsl,imx53-nand", .data = &imx53_nand_devtype_data, },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, mxcnd_dt_ids);
static int mxcnd_probe_dt(struct mxc_nand_host *host)
{
struct device_node *np = host->dev->of_node;
const struct of_device_id *of_id =
of_match_device(mxcnd_dt_ids, host->dev);
if (!np)
return 1;
host->devtype_data = of_id->data;
return 0;
}
#else
static int mxcnd_probe_dt(struct mxc_nand_host *host)
{
return 1;
}
#endif
static int mxcnd_attach_chip(struct nand_chip *chip)
{
struct mtd_info *mtd = nand_to_mtd(chip);
@ -1800,20 +1744,7 @@ static int mxcnd_probe(struct platform_device *pdev)
if (IS_ERR(host->clk))
return PTR_ERR(host->clk);
err = mxcnd_probe_dt(host);
if (err > 0) {
struct mxc_nand_platform_data *pdata =
dev_get_platdata(&pdev->dev);
if (pdata) {
host->pdata = *pdata;
host->devtype_data = (struct mxc_nand_devtype_data *)
pdev->id_entry->driver_data;
} else {
err = -ENODEV;
}
}
if (err < 0)
return err;
host->devtype_data = device_get_match_data(&pdev->dev);
if (!host->devtype_data->setup_interface)
this->options |= NAND_KEEP_TIMINGS;
@ -1843,14 +1774,6 @@ static int mxcnd_probe(struct platform_device *pdev)
this->legacy.select_chip = host->devtype_data->select_chip;
/* NAND bus width determines access functions used by upper layer */
if (host->pdata.width == 2)
this->options |= NAND_BUSWIDTH_16;
/* update flash based bbt */
if (host->pdata.flash_bbt)
this->bbt_options |= NAND_BBT_USE_FLASH;
init_completion(&host->op_completion);
host->irq = platform_get_irq(pdev, 0);
@ -1891,9 +1814,7 @@ static int mxcnd_probe(struct platform_device *pdev)
goto escan;
/* Register the partitions */
err = mtd_device_parse_register(mtd, part_probes, NULL,
host->pdata.parts,
host->pdata.nr_parts);
err = mtd_device_parse_register(mtd, part_probes, NULL, NULL, 0);
if (err)
goto cleanup_nand;
@ -1930,7 +1851,6 @@ static struct platform_driver mxcnd_driver = {
.name = DRIVER_NAME,
.of_match_table = of_match_ptr(mxcnd_dt_ids),
},
.id_table = mxcnd_devtype,
.probe = mxcnd_probe,
.remove = mxcnd_remove,
};
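Both the GPMI and the mxc hunks above replace open-coded of_match_device() lookups (and, for mxc, the platform-data fallback) with the match-data helpers. A minimal, hedged probe sketch follows; `example_devtype_data` and `example_probe` are invented names, and the NULL check is an extra precaution that the hunks themselves do not add:

#include <linux/device.h>
#include <linux/platform_device.h>
#include <linux/property.h>

struct example_devtype_data {	/* hypothetical per-SoC data */
	unsigned int version;
};

static int example_probe(struct platform_device *pdev)
{
	const struct example_devtype_data *data;

	/*
	 * device_get_match_data() handles OF and ACPI matching;
	 * of_device_get_match_data() (used in the GPMI hunk) is the
	 * OF-only variant. Both return NULL when the matched table
	 * entry carries no .data pointer, hence the defensive check.
	 */
	data = device_get_match_data(&pdev->dev);
	if (!data)
		return -ENODEV;

	dev_info(&pdev->dev, "controller version %u\n", data->version);
	return 0;
}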


@ -12,8 +12,8 @@
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/nand-ecc-sw-hamming.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/platform_device.h>
#include "internals.h"


@ -35,8 +35,8 @@
#include <linux/types.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/nand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/nand_bch.h>
#include <linux/mtd/nand-ecc-sw-hamming.h>
#include <linux/mtd/nand-ecc-sw-bch.h>
#include <linux/interrupt.h>
#include <linux/bitops.h>
#include <linux/io.h>
@ -5139,6 +5139,118 @@ static void nand_scan_ident_cleanup(struct nand_chip *chip)
kfree(chip->parameters.onfi);
}
int rawnand_sw_hamming_init(struct nand_chip *chip)
{
struct nand_ecc_sw_hamming_conf *engine_conf;
struct nand_device *base = &chip->base;
int ret;
base->ecc.user_conf.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
base->ecc.user_conf.algo = NAND_ECC_ALGO_HAMMING;
base->ecc.user_conf.strength = chip->ecc.strength;
base->ecc.user_conf.step_size = chip->ecc.size;
ret = nand_ecc_sw_hamming_init_ctx(base);
if (ret)
return ret;
engine_conf = base->ecc.ctx.priv;
if (chip->ecc.options & NAND_ECC_SOFT_HAMMING_SM_ORDER)
engine_conf->sm_order = true;
chip->ecc.size = base->ecc.ctx.conf.step_size;
chip->ecc.strength = base->ecc.ctx.conf.strength;
chip->ecc.total = base->ecc.ctx.total;
chip->ecc.steps = engine_conf->nsteps;
chip->ecc.bytes = engine_conf->code_size;
return 0;
}
EXPORT_SYMBOL(rawnand_sw_hamming_init);
int rawnand_sw_hamming_calculate(struct nand_chip *chip,
const unsigned char *buf,
unsigned char *code)
{
struct nand_device *base = &chip->base;
return nand_ecc_sw_hamming_calculate(base, buf, code);
}
EXPORT_SYMBOL(rawnand_sw_hamming_calculate);
int rawnand_sw_hamming_correct(struct nand_chip *chip,
unsigned char *buf,
unsigned char *read_ecc,
unsigned char *calc_ecc)
{
struct nand_device *base = &chip->base;
return nand_ecc_sw_hamming_correct(base, buf, read_ecc, calc_ecc);
}
EXPORT_SYMBOL(rawnand_sw_hamming_correct);
void rawnand_sw_hamming_cleanup(struct nand_chip *chip)
{
struct nand_device *base = &chip->base;
nand_ecc_sw_hamming_cleanup_ctx(base);
}
EXPORT_SYMBOL(rawnand_sw_hamming_cleanup);
int rawnand_sw_bch_init(struct nand_chip *chip)
{
struct nand_device *base = &chip->base;
struct nand_ecc_sw_bch_conf *engine_conf;
int ret;
base->ecc.user_conf.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
base->ecc.user_conf.algo = NAND_ECC_ALGO_BCH;
base->ecc.user_conf.step_size = chip->ecc.size;
base->ecc.user_conf.strength = chip->ecc.strength;
ret = nand_ecc_sw_bch_init_ctx(base);
if (ret)
return ret;
engine_conf = base->ecc.ctx.priv;
chip->ecc.size = base->ecc.ctx.conf.step_size;
chip->ecc.strength = base->ecc.ctx.conf.strength;
chip->ecc.total = base->ecc.ctx.total;
chip->ecc.steps = engine_conf->nsteps;
chip->ecc.bytes = engine_conf->code_size;
return 0;
}
EXPORT_SYMBOL(rawnand_sw_bch_init);
static int rawnand_sw_bch_calculate(struct nand_chip *chip,
const unsigned char *buf,
unsigned char *code)
{
struct nand_device *base = &chip->base;
return nand_ecc_sw_bch_calculate(base, buf, code);
}
int rawnand_sw_bch_correct(struct nand_chip *chip, unsigned char *buf,
unsigned char *read_ecc, unsigned char *calc_ecc)
{
struct nand_device *base = &chip->base;
return nand_ecc_sw_bch_correct(base, buf, read_ecc, calc_ecc);
}
EXPORT_SYMBOL(rawnand_sw_bch_correct);
void rawnand_sw_bch_cleanup(struct nand_chip *chip)
{
struct nand_device *base = &chip->base;
nand_ecc_sw_bch_cleanup_ctx(base);
}
EXPORT_SYMBOL(rawnand_sw_bch_cleanup);
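The wrappers exported above are the entry points that the controller-driver hunks later in this series (ndfc, sharpsl, lpc32xx_slc, omap2) switch to, replacing nand_correct_data() and the nand_bch_* API. A condensed, non-buildable fragment of what an ->attach_chip() hook ends up doing (my_enable_hwecc and my_calculate stand in for controller-specific callbacks):

	/* Software Hamming, with the controller assisting the calculation: */
	chip->ecc.hwctl = my_enable_hwecc;
	chip->ecc.calculate = my_calculate;
	chip->ecc.correct = rawnand_sw_hamming_correct;

	/* Fully software BCH now uses an explicit init/cleanup pair: */
	ret = rawnand_sw_bch_init(chip);
	if (ret)
		return ret;
	chip->ecc.correct = rawnand_sw_bch_correct;
	/* ... and rawnand_sw_bch_cleanup(chip) on the remove/error paths. */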
static int nand_set_ecc_on_host_ops(struct nand_chip *chip)
{
struct nand_ecc_ctrl *ecc = &chip->ecc;
@ -5203,14 +5315,15 @@ static int nand_set_ecc_soft_ops(struct nand_chip *chip)
struct mtd_info *mtd = nand_to_mtd(chip);
struct nand_device *nanddev = mtd_to_nanddev(mtd);
struct nand_ecc_ctrl *ecc = &chip->ecc;
int ret;
if (WARN_ON(ecc->engine_type != NAND_ECC_ENGINE_TYPE_SOFT))
return -EINVAL;
switch (ecc->algo) {
case NAND_ECC_ALGO_HAMMING:
ecc->calculate = nand_calculate_ecc;
ecc->correct = nand_correct_data;
ecc->calculate = rawnand_sw_hamming_calculate;
ecc->correct = rawnand_sw_hamming_correct;
ecc->read_page = nand_read_page_swecc;
ecc->read_subpage = nand_read_subpage;
ecc->write_page = nand_write_page_swecc;
@ -5228,14 +5341,20 @@ static int nand_set_ecc_soft_ops(struct nand_chip *chip)
if (IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC))
ecc->options |= NAND_ECC_SOFT_HAMMING_SM_ORDER;
ret = rawnand_sw_hamming_init(chip);
if (ret) {
WARN(1, "Hamming ECC initialization failed!\n");
return ret;
}
return 0;
case NAND_ECC_ALGO_BCH:
if (!mtd_nand_has_bch()) {
if (!IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_BCH)) {
WARN(1, "CONFIG_MTD_NAND_ECC_SW_BCH not enabled\n");
return -EINVAL;
}
ecc->calculate = nand_bch_calculate_ecc;
ecc->correct = nand_bch_correct_data;
ecc->calculate = rawnand_sw_bch_calculate;
ecc->correct = rawnand_sw_bch_correct;
ecc->read_page = nand_read_page_swecc;
ecc->read_subpage = nand_read_subpage;
ecc->write_page = nand_write_page_swecc;
@ -5246,56 +5365,21 @@ static int nand_set_ecc_soft_ops(struct nand_chip *chip)
ecc->read_oob = nand_read_oob_std;
ecc->write_oob = nand_write_oob_std;
/*
* Board driver should supply ecc.size and ecc.strength
* values to select how many bits are correctable.
* Otherwise, default to 4 bits for large page devices.
*/
if (!ecc->size && (mtd->oobsize >= 64)) {
ecc->size = 512;
ecc->strength = 4;
}
/*
* if no ecc placement scheme was provided pickup the default
* large page one.
*/
if (!mtd->ooblayout) {
/* handle large page devices only */
if (mtd->oobsize < 64) {
WARN(1, "OOB layout is required when using software BCH on small pages\n");
return -EINVAL;
}
mtd_set_ooblayout(mtd, nand_get_large_page_ooblayout());
}
/*
* We can only maximize ECC config when the default layout is
* used, otherwise we don't know how many bytes can really be
* used.
*/
if (mtd->ooblayout == nand_get_large_page_ooblayout() &&
nanddev->ecc.user_conf.flags & NAND_ECC_MAXIMIZE_STRENGTH) {
int steps, bytes;
if (nanddev->ecc.user_conf.flags & NAND_ECC_MAXIMIZE_STRENGTH &&
mtd->ooblayout != nand_get_large_page_ooblayout())
nanddev->ecc.user_conf.flags &= ~NAND_ECC_MAXIMIZE_STRENGTH;
/* Always prefer 1k blocks over 512bytes ones */
ecc->size = 1024;
steps = mtd->writesize / ecc->size;
/* Reserve 2 bytes for the BBM */
bytes = (mtd->oobsize - 2) / steps;
ecc->strength = bytes * 8 / fls(8 * ecc->size);
}
/* See nand_bch_init() for details. */
ecc->bytes = 0;
ecc->priv = nand_bch_init(mtd);
if (!ecc->priv) {
ret = rawnand_sw_bch_init(chip);
if (ret) {
WARN(1, "BCH ECC initialization failed!\n");
return -EINVAL;
return ret;
}
return 0;
default:
WARN(1, "Unsupported ECC algorithm!\n");
@ -5639,7 +5723,9 @@ static int nand_scan_tail(struct nand_chip *chip)
*/
if (!mtd->ooblayout &&
!(ecc->engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
ecc->algo == NAND_ECC_ALGO_BCH)) {
ecc->algo == NAND_ECC_ALGO_BCH) &&
!(ecc->engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
ecc->algo == NAND_ECC_ALGO_HAMMING)) {
switch (mtd->oobsize) {
case 8:
case 16:
@ -5756,15 +5842,18 @@ static int nand_scan_tail(struct nand_chip *chip)
* Set the number of read / write steps for one page depending on ECC
* mode.
*/
ecc->steps = mtd->writesize / ecc->size;
if (!ecc->steps)
ecc->steps = mtd->writesize / ecc->size;
if (ecc->steps * ecc->size != mtd->writesize) {
WARN(1, "Invalid ECC parameters\n");
ret = -EINVAL;
goto err_nand_manuf_cleanup;
}
ecc->total = ecc->steps * ecc->bytes;
chip->base.ecc.ctx.total = ecc->total;
if (!ecc->total) {
ecc->total = ecc->steps * ecc->bytes;
chip->base.ecc.ctx.total = ecc->total;
}
if (ecc->total > mtd->oobsize) {
WARN(1, "Total number of ECC bytes exceeded oobsize\n");
@ -5953,9 +6042,12 @@ EXPORT_SYMBOL(nand_scan_with_ids);
*/
void nand_cleanup(struct nand_chip *chip)
{
if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
chip->ecc.algo == NAND_ECC_ALGO_BCH)
nand_bch_free((struct nand_bch_control *)chip->ecc.priv);
if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT) {
if (chip->ecc.algo == NAND_ECC_ALGO_HAMMING)
rawnand_sw_hamming_cleanup(chip);
else if (chip->ecc.algo == NAND_ECC_ALGO_BCH)
rawnand_sw_bch_cleanup(chip);
}
nanddev_cleanup(&chip->base);
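The BCH branch of nand_set_ecc_soft_ops() earlier in this file's diff drops the open-coded NAND_ECC_MAXIMIZE_STRENGTH math (prefer 1024-byte steps, reserve two OOB bytes for the bad-block marker, derive the strength from the remaining bytes), which per the series summary moves along with the generic software BCH engine. For reference, that math works out as follows for a hypothetical 4 KiB-page / 224-byte-OOB chip (a self-contained sketch, the chip is invented):

#include <stdio.h>

/* 1-based position of the most significant set bit, like the kernel's fls(). */
static int fls_(unsigned int x)
{
	int r = 0;

	while (x) {
		r++;
		x >>= 1;
	}
	return r;
}

int main(void)
{
	unsigned int writesize = 4096, oobsize = 224;	/* hypothetical chip */
	unsigned int size = 1024;			/* "prefer 1k blocks" */
	unsigned int steps = writesize / size;		/* 4 */
	unsigned int bytes = (oobsize - 2) / steps;	/* 55 (2 bytes kept for the BBM) */
	unsigned int strength = bytes * 8 / fls_(8 * size);	/* 440 / 14 = 31 */

	printf("steps=%u bytes=%u strength=%u\n", steps, bytes, strength);
	return 0;
}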


@ -1087,7 +1087,7 @@ static int nand_update_bbt(struct nand_chip *this, loff_t offs)
}
/**
* mark_bbt_regions - [GENERIC] mark the bad block table regions
* mark_bbt_region - [GENERIC] mark the bad block table regions
* @this: the NAND device
* @td: bad block table descriptor
*


@ -1,219 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* This file provides ECC correction for more than 1 bit per block of data,
* using binary BCH codes. It relies on the generic BCH library lib/bch.c.
*
* Copyright © 2011 Ivan Djelic <ivan.djelic@parrot.com>
*/
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/bitops.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_bch.h>
#include <linux/bch.h>
/**
* struct nand_bch_control - private NAND BCH control structure
* @bch: BCH control structure
* @errloc: error location array
* @eccmask: XOR ecc mask, allows erased pages to be decoded as valid
*/
struct nand_bch_control {
struct bch_control *bch;
unsigned int *errloc;
unsigned char *eccmask;
};
/**
* nand_bch_calculate_ecc - [NAND Interface] Calculate ECC for data block
* @chip: NAND chip object
* @buf: input buffer with raw data
* @code: output buffer with ECC
*/
int nand_bch_calculate_ecc(struct nand_chip *chip, const unsigned char *buf,
unsigned char *code)
{
struct nand_bch_control *nbc = chip->ecc.priv;
unsigned int i;
memset(code, 0, chip->ecc.bytes);
bch_encode(nbc->bch, buf, chip->ecc.size, code);
/* apply mask so that an erased page is a valid codeword */
for (i = 0; i < chip->ecc.bytes; i++)
code[i] ^= nbc->eccmask[i];
return 0;
}
EXPORT_SYMBOL(nand_bch_calculate_ecc);
/**
* nand_bch_correct_data - [NAND Interface] Detect and correct bit error(s)
* @chip: NAND chip object
* @buf: raw data read from the chip
* @read_ecc: ECC from the chip
* @calc_ecc: the ECC calculated from raw data
*
* Detect and correct bit errors for a data byte block
*/
int nand_bch_correct_data(struct nand_chip *chip, unsigned char *buf,
unsigned char *read_ecc, unsigned char *calc_ecc)
{
struct nand_bch_control *nbc = chip->ecc.priv;
unsigned int *errloc = nbc->errloc;
int i, count;
count = bch_decode(nbc->bch, NULL, chip->ecc.size, read_ecc, calc_ecc,
NULL, errloc);
if (count > 0) {
for (i = 0; i < count; i++) {
if (errloc[i] < (chip->ecc.size*8))
/* error is located in data, correct it */
buf[errloc[i] >> 3] ^= (1 << (errloc[i] & 7));
/* else error in ecc, no action needed */
pr_debug("%s: corrected bitflip %u\n", __func__,
errloc[i]);
}
} else if (count < 0) {
pr_err("ecc unrecoverable error\n");
count = -EBADMSG;
}
return count;
}
EXPORT_SYMBOL(nand_bch_correct_data);
/**
* nand_bch_init - [NAND Interface] Initialize NAND BCH error correction
* @mtd: MTD block structure
*
* Returns:
* a pointer to a new NAND BCH control structure, or NULL upon failure
*
* Initialize NAND BCH error correction. Parameters @eccsize and @eccbytes
* are used to compute BCH parameters m (Galois field order) and t (error
* correction capability). @eccbytes should be equal to the number of bytes
* required to store m*t bits, where m is such that 2^m-1 > @eccsize*8.
*
* Example: to configure 4 bit correction per 512 bytes, you should pass
* @eccsize = 512 (thus, m=13 is the smallest integer such that 2^m-1 > 512*8)
* @eccbytes = 7 (7 bytes are required to store m*t = 13*4 = 52 bits)
*/
struct nand_bch_control *nand_bch_init(struct mtd_info *mtd)
{
struct nand_chip *nand = mtd_to_nand(mtd);
unsigned int m, t, eccsteps, i;
struct nand_bch_control *nbc = NULL;
unsigned char *erased_page;
unsigned int eccsize = nand->ecc.size;
unsigned int eccbytes = nand->ecc.bytes;
unsigned int eccstrength = nand->ecc.strength;
if (!eccbytes && eccstrength) {
eccbytes = DIV_ROUND_UP(eccstrength * fls(8 * eccsize), 8);
nand->ecc.bytes = eccbytes;
}
if (!eccsize || !eccbytes) {
pr_warn("ecc parameters not supplied\n");
goto fail;
}
m = fls(1+8*eccsize);
t = (eccbytes*8)/m;
nbc = kzalloc(sizeof(*nbc), GFP_KERNEL);
if (!nbc)
goto fail;
nbc->bch = bch_init(m, t, 0, false);
if (!nbc->bch)
goto fail;
/* verify that eccbytes has the expected value */
if (nbc->bch->ecc_bytes != eccbytes) {
pr_warn("invalid eccbytes %u, should be %u\n",
eccbytes, nbc->bch->ecc_bytes);
goto fail;
}
eccsteps = mtd->writesize/eccsize;
/* Check that we have an oob layout description. */
if (!mtd->ooblayout) {
pr_warn("missing oob scheme");
goto fail;
}
/* sanity checks */
if (8*(eccsize+eccbytes) >= (1 << m)) {
pr_warn("eccsize %u is too large\n", eccsize);
goto fail;
}
/*
* ecc->steps and ecc->total might be used by mtd->ooblayout->ecc(),
* which is called by mtd_ooblayout_count_eccbytes().
* Make sure they are properly initialized before calling
* mtd_ooblayout_count_eccbytes().
* FIXME: we should probably rework the sequencing in nand_scan_tail()
* to avoid setting those fields twice.
*/
nand->ecc.steps = eccsteps;
nand->ecc.total = eccsteps * eccbytes;
nand->base.ecc.ctx.total = nand->ecc.total;
if (mtd_ooblayout_count_eccbytes(mtd) != (eccsteps*eccbytes)) {
pr_warn("invalid ecc layout\n");
goto fail;
}
nbc->eccmask = kzalloc(eccbytes, GFP_KERNEL);
nbc->errloc = kmalloc_array(t, sizeof(*nbc->errloc), GFP_KERNEL);
if (!nbc->eccmask || !nbc->errloc)
goto fail;
/*
* compute and store the inverted ecc of an erased ecc block
*/
erased_page = kmalloc(eccsize, GFP_KERNEL);
if (!erased_page)
goto fail;
memset(erased_page, 0xff, eccsize);
bch_encode(nbc->bch, erased_page, eccsize, nbc->eccmask);
kfree(erased_page);
for (i = 0; i < eccbytes; i++)
nbc->eccmask[i] ^= 0xff;
if (!eccstrength)
nand->ecc.strength = (eccbytes * 8) / fls(8 * eccsize);
return nbc;
fail:
nand_bch_free(nbc);
return NULL;
}
EXPORT_SYMBOL(nand_bch_init);
/**
* nand_bch_free - [NAND Interface] Release NAND BCH ECC resources
* @nbc: NAND BCH control structure
*/
void nand_bch_free(struct nand_bch_control *nbc)
{
if (nbc) {
bch_free(nbc->bch);
kfree(nbc->errloc);
kfree(nbc->eccmask);
kfree(nbc);
}
}
EXPORT_SYMBOL(nand_bch_free);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Ivan Djelic <ivan.djelic@parrot.com>");
MODULE_DESCRIPTION("NAND software BCH ECC support");
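The kernel-doc of the removed nand_bch_init() above documents the relation between @eccsize, @eccbytes and the BCH parameters m and t; the same relation carries over to the generic software BCH engine that replaces this file. A self-contained check of the documented example (512-byte steps, 4-bit correction):

#include <stdio.h>

/* 1-based position of the most significant set bit, like the kernel's fls(). */
static int fls_(unsigned int x)
{
	int r = 0;

	while (x) {
		r++;
		x >>= 1;
	}
	return r;
}

int main(void)
{
	unsigned int eccsize = 512, strength = 4;	/* example from the kernel-doc */
	unsigned int m = fls_(1 + 8 * eccsize);		/* 13: 2^13 - 1 > 512 * 8 */
	unsigned int eccbytes = (strength * m + 7) / 8;	/* 7 bytes hold m*t = 52 bits */
	unsigned int t = (eccbytes * 8) / m;		/* back to 4 correctable bits */

	printf("m=%u eccbytes=%u t=%u\n", m, eccbytes, t);
	return 0;
}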


@ -192,9 +192,10 @@ static void panic_nand_wait_ready(struct nand_chip *chip, unsigned long timeo)
*/
void nand_wait_ready(struct nand_chip *chip)
{
struct mtd_info *mtd = nand_to_mtd(chip);
unsigned long timeo = 400;
if (in_interrupt() || oops_in_progress)
if (mtd->oops_panic_write)
return panic_nand_wait_ready(chip, timeo);
/* Wait until command is processed or timeout occurs */
@ -531,7 +532,7 @@ EXPORT_SYMBOL(nand_get_set_features_notsupp);
*/
static int nand_wait(struct nand_chip *chip)
{
struct mtd_info *mtd = nand_to_mtd(chip);
unsigned long timeo = 400;
u8 status;
int ret;
@ -546,9 +547,9 @@ static int nand_wait(struct nand_chip *chip)
if (ret)
return ret;
if (in_interrupt() || oops_in_progress)
if (mtd->oops_panic_write) {
panic_nand_wait(chip, timeo);
else {
} else {
timeo = jiffies + msecs_to_jiffies(timeo);
do {
if (chip->legacy.dev_ready) {


@ -23,7 +23,6 @@
#include <linux/string.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_bch.h>
#include <linux/mtd/partitions.h>
#include <linux/delay.h>
#include <linux/list.h>
@ -2214,7 +2213,7 @@ static int ns_attach_chip(struct nand_chip *chip)
if (!bch)
return 0;
if (!mtd_nand_has_bch()) {
if (!IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_BCH)) {
NS_ERR("BCH ECC support is disabled\n");
return -EINVAL;
}


@ -18,7 +18,6 @@
*/
#include <linux/module.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/ndfc.h>
#include <linux/slab.h>
@ -146,7 +145,7 @@ static int ndfc_chip_init(struct ndfc_controller *ndfc,
chip->controller = &ndfc->ndfc_control;
chip->legacy.read_buf = ndfc_read_buf;
chip->legacy.write_buf = ndfc_write_buf;
chip->ecc.correct = nand_correct_data;
chip->ecc.correct = rawnand_sw_hamming_correct;
chip->ecc.hwctl = ndfc_enable_hwecc;
chip->ecc.calculate = ndfc_calculate_ecc;
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;


@ -23,7 +23,6 @@
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/mtd/nand_bch.h>
#include <linux/platform_data/elm.h>
#include <linux/omap-gpmc.h>
@ -185,6 +184,7 @@ static inline struct omap_nand_info *mtd_to_omap(struct mtd_info *mtd)
* @dma_mode: dma mode enable (1) or disable (0)
* @u32_count: number of bytes to be transferred
* @is_write: prefetch read(0) or write post(1) mode
* @info: NAND device structure containing platform data
*/
static int omap_prefetch_enable(int cs, int fifo_th, int dma_mode,
unsigned int u32_count, int is_write, struct omap_nand_info *info)
@ -214,7 +214,7 @@ static int omap_prefetch_enable(int cs, int fifo_th, int dma_mode,
return 0;
}
/**
/*
* omap_prefetch_reset - disables and stops the prefetch engine
*/
static int omap_prefetch_reset(int cs, struct omap_nand_info *info)
@ -939,7 +939,7 @@ static int omap_calculate_ecc(struct nand_chip *chip, const u_char *dat,
/**
* omap_enable_hwecc - This function enables the hardware ecc functionality
* @mtd: MTD device structure
* @chip: NAND chip object
* @mode: Read/Write mode
*/
static void omap_enable_hwecc(struct nand_chip *chip, int mode)
@ -1009,7 +1009,7 @@ static int omap_wait(struct nand_chip *this)
/**
* omap_dev_ready - checks the NAND Ready GPIO line
* @mtd: MTD device structure
* @chip: NAND chip object
*
* Returns true if ready and false if busy.
*/
@ -1022,7 +1022,7 @@ static int omap_dev_ready(struct nand_chip *chip)
/**
* omap_enable_hwecc_bch - Program GPMC to perform BCH ECC calculation
* @mtd: MTD device structure
* @chip: NAND chip object
* @mode: Read/Write mode
*
* When using BCH with SW correction (i.e. no ELM), sector size is set
@ -1131,7 +1131,7 @@ static u8 bch8_polynomial[] = {0xef, 0x51, 0x2e, 0x09, 0xed, 0x93, 0x9a, 0xc2,
* _omap_calculate_ecc_bch - Generate ECC bytes for one sector
* @mtd: MTD device structure
* @dat: The pointer to data on which ecc is computed
* @ecc_code: The ecc_code buffer
* @ecc_calc: The ecc_code buffer
* @i: The sector number (for a multi sector page)
*
* Support calculating of BCH4/8/16 ECC vectors for one sector
@ -1259,7 +1259,7 @@ static int _omap_calculate_ecc_bch(struct mtd_info *mtd,
* omap_calculate_ecc_bch_sw - ECC generator for sector for SW based correction
* @chip: NAND chip object
* @dat: The pointer to data on which ecc is computed
* @ecc_code: The ecc_code buffer
* @ecc_calc: Buffer storing the calculated ECC bytes
*
* Support calculating of BCH4/8/16 ECC vectors for one sector. This is used
* when SW based correction is required as ECC is required for one sector
@ -1275,7 +1275,7 @@ static int omap_calculate_ecc_bch_sw(struct nand_chip *chip,
* omap_calculate_ecc_bch_multi - Generate ECC for multiple sectors
* @mtd: MTD device structure
* @dat: The pointer to data on which ecc is computed
* @ecc_code: The ecc_code buffer
* @ecc_calc: Buffer storing the calculated ECC bytes
*
* Support calculating of BCH4/8/16 ecc vectors for the entire page in one go.
*/
@ -1674,7 +1674,8 @@ static int omap_read_page_bch(struct nand_chip *chip, uint8_t *buf,
/**
* is_elm_present - checks for presence of ELM module by scanning DT nodes
* @omap_nand_info: NAND device structure containing platform data
* @info: NAND device structure containing platform data
* @elm_node: ELM's DT node
*/
static bool is_elm_present(struct omap_nand_info *info,
struct device_node *elm_node)
@ -2041,16 +2042,16 @@ static int omap_nand_attach_chip(struct nand_chip *chip)
chip->ecc.bytes = 7;
chip->ecc.strength = 4;
chip->ecc.hwctl = omap_enable_hwecc_bch;
chip->ecc.correct = nand_bch_correct_data;
chip->ecc.correct = rawnand_sw_bch_correct;
chip->ecc.calculate = omap_calculate_ecc_bch_sw;
mtd_set_ooblayout(mtd, &omap_sw_ooblayout_ops);
/* Reserve one byte for the OMAP marker */
oobbytes_per_step = chip->ecc.bytes + 1;
/* Software BCH library is used for locating errors */
chip->ecc.priv = nand_bch_init(mtd);
if (!chip->ecc.priv) {
err = rawnand_sw_bch_init(chip);
if (err) {
dev_err(dev, "Unable to use BCH library\n");
return -EINVAL;
return err;
}
break;
@ -2083,16 +2084,16 @@ static int omap_nand_attach_chip(struct nand_chip *chip)
chip->ecc.bytes = 13;
chip->ecc.strength = 8;
chip->ecc.hwctl = omap_enable_hwecc_bch;
chip->ecc.correct = nand_bch_correct_data;
chip->ecc.correct = rawnand_sw_bch_correct;
chip->ecc.calculate = omap_calculate_ecc_bch_sw;
mtd_set_ooblayout(mtd, &omap_sw_ooblayout_ops);
/* Reserve one byte for the OMAP marker */
oobbytes_per_step = chip->ecc.bytes + 1;
/* Software BCH library is used for locating errors */
chip->ecc.priv = nand_bch_init(mtd);
if (!chip->ecc.priv) {
err = rawnand_sw_bch_init(chip);
if (err) {
dev_err(dev, "unable to use BCH library\n");
return -EINVAL;
return err;
}
break;
@ -2195,7 +2196,6 @@ static int omap_nand_probe(struct platform_device *pdev)
nand_chip = &info->nand;
mtd = nand_to_mtd(nand_chip);
mtd->dev.parent = &pdev->dev;
nand_chip->ecc.priv = NULL;
nand_set_flash_node(nand_chip, dev->of_node);
if (!mtd->name) {
@ -2271,10 +2271,9 @@ cleanup_nand:
return_error:
if (!IS_ERR_OR_NULL(info->dma))
dma_release_channel(info->dma);
if (nand_chip->ecc.priv) {
nand_bch_free(nand_chip->ecc.priv);
nand_chip->ecc.priv = NULL;
}
rawnand_sw_bch_cleanup(nand_chip);
return err;
}
@ -2285,10 +2284,8 @@ static int omap_nand_remove(struct platform_device *pdev)
struct omap_nand_info *info = mtd_to_omap(mtd);
int ret;
if (nand_chip->ecc.priv) {
nand_bch_free(nand_chip->ecc.priv);
nand_chip->ecc.priv = NULL;
}
rawnand_sw_bch_cleanup(nand_chip);
if (info->dma)
dma_release_channel(info->dma);
ret = mtd_device_unregister(mtd);


@ -96,6 +96,9 @@ static u32 elm_read_reg(struct elm_info *info, int offset)
* elm_config - Configure ELM module
* @dev: ELM device
* @bch_type: Type of BCH ecc
* @ecc_steps: ECC steps to assign to config
* @ecc_step_size: ECC step size to assign to config
* @ecc_syndrome_size: ECC syndrome size to assign to config
*/
int elm_config(struct device *dev, enum bch_ecc bch_type,
int ecc_steps, int ecc_step_size, int ecc_syndrome_size)
@ -432,7 +435,7 @@ static int elm_remove(struct platform_device *pdev)
}
#ifdef CONFIG_PM_SLEEP
/**
/*
* elm_context_save
* saves ELM configurations to preserve them across Hardware powered-down
*/
@ -480,7 +483,7 @@ static int elm_context_save(struct elm_info *info)
return 0;
}
/**
/*
* elm_context_restore
* writes configurations saved duing power-down back into ELM registers
*/


@ -14,7 +14,6 @@
#include <linux/module.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>


@ -145,6 +145,7 @@
#define OP_PAGE_READ 0x2
#define OP_PAGE_READ_WITH_ECC 0x3
#define OP_PAGE_READ_WITH_ECC_SPARE 0x4
#define OP_PAGE_READ_ONFI_READ 0x5
#define OP_PROGRAM_PAGE 0x6
#define OP_PAGE_PROGRAM_WITH_ECC 0x7
#define OP_PROGRAM_PAGE_SPARE 0x9
@ -460,12 +461,14 @@ struct qcom_nand_host {
* @ecc_modes - ecc mode for NAND
* @is_bam - whether NAND controller is using BAM
* @is_qpic - whether NAND CTRL is part of qpic IP
* @qpic_v2 - flag to indicate QPIC IP version 2
* @dev_cmd_reg_start - NAND_DEV_CMD_* registers starting offset
*/
struct qcom_nandc_props {
u32 ecc_modes;
bool is_bam;
bool is_qpic;
bool qpic_v2;
u32 dev_cmd_reg_start;
};
@ -1164,7 +1167,13 @@ static int nandc_param(struct qcom_nand_host *host)
* in use. we configure the controller to perform a raw read of 512
* bytes to read onfi params
*/
nandc_set_reg(nandc, NAND_FLASH_CMD, OP_PAGE_READ | PAGE_ACC | LAST_PAGE);
if (nandc->props->qpic_v2)
nandc_set_reg(nandc, NAND_FLASH_CMD, OP_PAGE_READ_ONFI_READ |
PAGE_ACC | LAST_PAGE);
else
nandc_set_reg(nandc, NAND_FLASH_CMD, OP_PAGE_READ |
PAGE_ACC | LAST_PAGE);
nandc_set_reg(nandc, NAND_ADDR0, 0);
nandc_set_reg(nandc, NAND_ADDR1, 0);
nandc_set_reg(nandc, NAND_DEV0_CFG0, 0 << CW_PER_PAGE
@ -1180,21 +1189,28 @@ static int nandc_param(struct qcom_nand_host *host)
| 1 << DEV0_CFG1_ECC_DISABLE);
nandc_set_reg(nandc, NAND_EBI2_ECC_BUF_CFG, 1 << ECC_CFG_ECC_DISABLE);
/* configure CMD1 and VLD for ONFI param probing */
nandc_set_reg(nandc, NAND_DEV_CMD_VLD,
(nandc->vld & ~READ_START_VLD));
nandc_set_reg(nandc, NAND_DEV_CMD1,
(nandc->cmd1 & ~(0xFF << READ_ADDR))
| NAND_CMD_PARAM << READ_ADDR);
/* configure CMD1 and VLD for ONFI param probing in QPIC v1 */
if (!nandc->props->qpic_v2) {
nandc_set_reg(nandc, NAND_DEV_CMD_VLD,
(nandc->vld & ~READ_START_VLD));
nandc_set_reg(nandc, NAND_DEV_CMD1,
(nandc->cmd1 & ~(0xFF << READ_ADDR))
| NAND_CMD_PARAM << READ_ADDR);
}
nandc_set_reg(nandc, NAND_EXEC_CMD, 1);
nandc_set_reg(nandc, NAND_DEV_CMD1_RESTORE, nandc->cmd1);
nandc_set_reg(nandc, NAND_DEV_CMD_VLD_RESTORE, nandc->vld);
if (!nandc->props->qpic_v2) {
nandc_set_reg(nandc, NAND_DEV_CMD1_RESTORE, nandc->cmd1);
nandc_set_reg(nandc, NAND_DEV_CMD_VLD_RESTORE, nandc->vld);
}
nandc_set_read_loc(nandc, 0, 0, 512, 1);
write_reg_dma(nandc, NAND_DEV_CMD_VLD, 1, 0);
write_reg_dma(nandc, NAND_DEV_CMD1, 1, NAND_BAM_NEXT_SGL);
if (!nandc->props->qpic_v2) {
write_reg_dma(nandc, NAND_DEV_CMD_VLD, 1, 0);
write_reg_dma(nandc, NAND_DEV_CMD1, 1, NAND_BAM_NEXT_SGL);
}
nandc->buf_count = 512;
memset(nandc->data_buffer, 0xff, nandc->buf_count);
@ -1205,8 +1221,10 @@ static int nandc_param(struct qcom_nand_host *host)
nandc->buf_count, 0);
/* restore CMD1 and VLD regs */
write_reg_dma(nandc, NAND_DEV_CMD1_RESTORE, 1, 0);
write_reg_dma(nandc, NAND_DEV_CMD_VLD_RESTORE, 1, NAND_BAM_NEXT_SGL);
if (!nandc->props->qpic_v2) {
write_reg_dma(nandc, NAND_DEV_CMD1_RESTORE, 1, 0);
write_reg_dma(nandc, NAND_DEV_CMD_VLD_RESTORE, 1, NAND_BAM_NEXT_SGL);
}
return 0;
}
@ -1570,6 +1588,8 @@ static int check_flash_errors(struct qcom_nand_host *host, int cw_cnt)
struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
int i;
nandc_read_buffer_sync(nandc, true);
for (i = 0; i < cw_cnt; i++) {
u32 flash = le32_to_cpu(nandc->reg_read_buf[i]);
@ -2770,8 +2790,10 @@ static int qcom_nandc_setup(struct qcom_nand_controller *nandc)
/* kill onenand */
if (!nandc->props->is_qpic)
nandc_write(nandc, SFLASHC_BURST_CFG, 0);
nandc_write(nandc, dev_cmd_reg_addr(nandc, NAND_DEV_CMD_VLD),
NAND_DEV_CMD_VLD_VAL);
if (!nandc->props->qpic_v2)
nandc_write(nandc, dev_cmd_reg_addr(nandc, NAND_DEV_CMD_VLD),
NAND_DEV_CMD_VLD_VAL);
/* enable ADM or BAM DMA */
if (nandc->props->is_bam) {
@ -2791,8 +2813,10 @@ static int qcom_nandc_setup(struct qcom_nand_controller *nandc)
}
/* save the original values of these registers */
nandc->cmd1 = nandc_read(nandc, dev_cmd_reg_addr(nandc, NAND_DEV_CMD1));
nandc->vld = NAND_DEV_CMD_VLD_VAL;
if (!nandc->props->qpic_v2) {
nandc->cmd1 = nandc_read(nandc, dev_cmd_reg_addr(nandc, NAND_DEV_CMD1));
nandc->vld = NAND_DEV_CMD_VLD_VAL;
}
return 0;
}
@ -3050,6 +3074,14 @@ static const struct qcom_nandc_props ipq8074_nandc_props = {
.dev_cmd_reg_start = 0x7000,
};
static const struct qcom_nandc_props sdx55_nandc_props = {
.ecc_modes = (ECC_BCH_4BIT | ECC_BCH_8BIT),
.is_bam = true,
.is_qpic = true,
.qpic_v2 = true,
.dev_cmd_reg_start = 0x7000,
};
/*
* data will hold a struct pointer containing more differences once we support
* more controller variants
@ -3063,10 +3095,18 @@ static const struct of_device_id qcom_nandc_of_match[] = {
.compatible = "qcom,ipq4019-nand",
.data = &ipq4019_nandc_props,
},
{
.compatible = "qcom,ipq6018-nand",
.data = &ipq8074_nandc_props,
},
{
.compatible = "qcom,ipq8074-nand",
.data = &ipq8074_nandc_props,
},
{
.compatible = "qcom,sdx55-nand",
.data = &sdx55_nandc_props,
},
{}
};
MODULE_DEVICE_TABLE(of, qcom_nandc_of_match);

(Diff not shown here because of its large size.)


@ -30,7 +30,6 @@
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/partitions.h>
#include <linux/platform_data/mtd-nand-s3c2410.h>
@ -134,7 +133,8 @@ enum s3c_nand_clk_state {
/**
* struct s3c2410_nand_info - NAND controller state.
* @mtds: An array of MTD instances on this controoler.
* @controller: Base controller structure.
* @mtds: An array of MTD instances on this controller.
* @platform: The platform data for this board.
* @device: The platform device we bound to.
* @clk: The clock resource for this controller.
@ -146,6 +146,7 @@ enum s3c_nand_clk_state {
* @clk_rate: The clock rate from @clk.
* @clk_state: The current clock state.
* @cpu_type: The exact type of this controller.
* @freq_transition: CPUFreq notifier block
*/
struct s3c2410_nand_info {
/* mtd info */


@ -12,7 +12,6 @@
#include <linux/delay.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/sharpsl.h>
#include <linux/interrupt.h>
@ -107,7 +106,7 @@ static int sharpsl_attach_chip(struct nand_chip *chip)
chip->ecc.strength = 1;
chip->ecc.hwctl = sharpsl_nand_enable_hwecc;
chip->ecc.calculate = sharpsl_nand_calculate_ecc;
chip->ecc.correct = nand_correct_data;
chip->ecc.correct = rawnand_sw_hamming_correct;
return 0;
}


@ -51,6 +51,7 @@
#define NFC_REG_USER_DATA(x) (0x0050 + ((x) * 4))
#define NFC_REG_SPARE_AREA 0x00A0
#define NFC_REG_PAT_ID 0x00A4
#define NFC_REG_MDMA_ADDR 0x00C0
#define NFC_REG_MDMA_CNT 0x00C4
#define NFC_RAM0_BASE 0x0400
#define NFC_RAM1_BASE 0x0800
@ -182,6 +183,7 @@ struct sunxi_nand_hw_ecc {
*
* @node: used to store NAND chips into a list
* @nand: base NAND chip structure
* @ecc: ECC controller structure
* @clk_rate: clk_rate required for this NAND chip
* @timing_cfg: TIMING_CFG register value for this NAND chip
* @timing_ctl: TIMING_CTL register value for this NAND chip
@ -191,6 +193,7 @@ struct sunxi_nand_hw_ecc {
struct sunxi_nand_chip {
struct list_head node;
struct nand_chip nand;
struct sunxi_nand_hw_ecc *ecc;
unsigned long clk_rate;
u32 timing_cfg;
u32 timing_ctl;
@ -207,13 +210,13 @@ static inline struct sunxi_nand_chip *to_sunxi_nand(struct nand_chip *nand)
* NAND Controller capabilities structure: stores NAND controller capabilities
* for distinction between compatible strings.
*
* @extra_mbus_conf: Contrary to A10, A10s and A13, accessing internal RAM
* @has_mdma: Use mbus dma mode, otherwise general dma
* through MBUS on A23/A33 needs extra configuration.
* @reg_io_data: I/O data register
* @dma_maxburst: DMA maxburst
*/
struct sunxi_nfc_caps {
bool extra_mbus_conf;
bool has_mdma;
unsigned int reg_io_data;
unsigned int dma_maxburst;
};
@ -233,6 +236,7 @@ struct sunxi_nfc_caps {
* controller
* @complete: a completion object used to wait for NAND controller events
* @dmac: the DMA channel attached to the NAND controller
* @caps: NAND Controller capabilities
*/
struct sunxi_nfc {
struct nand_controller controller;
@ -363,24 +367,31 @@ static int sunxi_nfc_dma_op_prepare(struct sunxi_nfc *nfc, const void *buf,
if (!ret)
return -ENOMEM;
dmad = dmaengine_prep_slave_sg(nfc->dmac, sg, 1, tdir, DMA_CTRL_ACK);
if (!dmad) {
ret = -EINVAL;
goto err_unmap_buf;
if (!nfc->caps->has_mdma) {
dmad = dmaengine_prep_slave_sg(nfc->dmac, sg, 1, tdir, DMA_CTRL_ACK);
if (!dmad) {
ret = -EINVAL;
goto err_unmap_buf;
}
}
writel(readl(nfc->regs + NFC_REG_CTL) | NFC_RAM_METHOD,
nfc->regs + NFC_REG_CTL);
writel(nchunks, nfc->regs + NFC_REG_SECTOR_NUM);
writel(chunksize, nfc->regs + NFC_REG_CNT);
if (nfc->caps->extra_mbus_conf)
if (nfc->caps->has_mdma) {
writel(readl(nfc->regs + NFC_REG_CTL) & ~NFC_DMA_TYPE_NORMAL,
nfc->regs + NFC_REG_CTL);
writel(chunksize * nchunks, nfc->regs + NFC_REG_MDMA_CNT);
writel(sg_dma_address(sg), nfc->regs + NFC_REG_MDMA_ADDR);
} else {
dmat = dmaengine_submit(dmad);
dmat = dmaengine_submit(dmad);
ret = dma_submit_error(dmat);
if (ret)
goto err_clr_dma_flag;
ret = dma_submit_error(dmat);
if (ret)
goto err_clr_dma_flag;
}
return 0;
@ -676,15 +687,15 @@ static void sunxi_nfc_randomizer_read_buf(struct nand_chip *nand, uint8_t *buf,
static void sunxi_nfc_hw_ecc_enable(struct nand_chip *nand)
{
struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
struct sunxi_nand_hw_ecc *data = nand->ecc.priv;
u32 ecc_ctl;
ecc_ctl = readl(nfc->regs + NFC_REG_ECC_CTL);
ecc_ctl &= ~(NFC_ECC_MODE_MSK | NFC_ECC_PIPELINE |
NFC_ECC_BLOCK_SIZE_MSK);
ecc_ctl |= NFC_ECC_EN | NFC_ECC_MODE(data->mode) | NFC_ECC_EXCEPTION |
NFC_ECC_PIPELINE;
ecc_ctl |= NFC_ECC_EN | NFC_ECC_MODE(sunxi_nand->ecc->mode) |
NFC_ECC_EXCEPTION | NFC_ECC_PIPELINE;
if (nand->ecc.size == 512)
ecc_ctl |= NFC_ECC_BLOCK_512;
@ -911,7 +922,7 @@ static int sunxi_nfc_hw_ecc_read_chunks_dma(struct nand_chip *nand, uint8_t *buf
unsigned int max_bitflips = 0;
int ret, i, raw_mode = 0;
struct scatterlist sg;
u32 status;
u32 status, wait;
ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
if (ret)
@ -929,13 +940,18 @@ static int sunxi_nfc_hw_ecc_read_chunks_dma(struct nand_chip *nand, uint8_t *buf
writel((NAND_CMD_RNDOUTSTART << 16) | (NAND_CMD_RNDOUT << 8) |
NAND_CMD_READSTART, nfc->regs + NFC_REG_RCMD_SET);
dma_async_issue_pending(nfc->dmac);
wait = NFC_CMD_INT_FLAG;
if (nfc->caps->has_mdma)
wait |= NFC_DMA_INT_FLAG;
else
dma_async_issue_pending(nfc->dmac);
writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD | NFC_DATA_TRANS,
nfc->regs + NFC_REG_CMD);
ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, false, 0);
if (ret)
ret = sunxi_nfc_wait_events(nfc, wait, false, 0);
if (ret && !nfc->caps->has_mdma)
dmaengine_terminate_all(nfc->dmac);
sunxi_nfc_randomizer_disable(nand);
@ -1276,6 +1292,7 @@ static int sunxi_nfc_hw_ecc_write_page_dma(struct nand_chip *nand,
struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
struct nand_ecc_ctrl *ecc = &nand->ecc;
struct scatterlist sg;
u32 wait;
int ret, i;
sunxi_nfc_select_chip(nand, nand->cur_cs);
@ -1304,14 +1321,19 @@ static int sunxi_nfc_hw_ecc_write_page_dma(struct nand_chip *nand,
writel((NAND_CMD_RNDIN << 8) | NAND_CMD_PAGEPROG,
nfc->regs + NFC_REG_WCMD_SET);
dma_async_issue_pending(nfc->dmac);
wait = NFC_CMD_INT_FLAG;
if (nfc->caps->has_mdma)
wait |= NFC_DMA_INT_FLAG;
else
dma_async_issue_pending(nfc->dmac);
writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD |
NFC_DATA_TRANS | NFC_ACCESS_DIR,
nfc->regs + NFC_REG_CMD);
ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, false, 0);
if (ret)
ret = sunxi_nfc_wait_events(nfc, wait, false, 0);
if (ret && !nfc->caps->has_mdma)
dmaengine_terminate_all(nfc->dmac);
sunxi_nfc_randomizer_disable(nand);
@ -1597,9 +1619,9 @@ static const struct mtd_ooblayout_ops sunxi_nand_ooblayout_ops = {
.free = sunxi_nand_ooblayout_free,
};
static void sunxi_nand_hw_ecc_ctrl_cleanup(struct nand_ecc_ctrl *ecc)
static void sunxi_nand_hw_ecc_ctrl_cleanup(struct sunxi_nand_chip *sunxi_nand)
{
kfree(ecc->priv);
kfree(sunxi_nand->ecc);
}
static int sunxi_nand_hw_ecc_ctrl_init(struct nand_chip *nand,
@ -1607,10 +1629,10 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct nand_chip *nand,
struct device_node *np)
{
static const u8 strengths[] = { 16, 24, 28, 32, 40, 48, 56, 60, 64 };
struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
struct mtd_info *mtd = nand_to_mtd(nand);
struct nand_device *nanddev = mtd_to_nanddev(mtd);
struct sunxi_nand_hw_ecc *data;
int nsectors;
int ret;
int i;
@ -1647,8 +1669,8 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct nand_chip *nand,
if (ecc->size != 512 && ecc->size != 1024)
return -EINVAL;
data = kzalloc(sizeof(*data), GFP_KERNEL);
if (!data)
sunxi_nand->ecc = kzalloc(sizeof(*sunxi_nand->ecc), GFP_KERNEL);
if (!sunxi_nand->ecc)
return -ENOMEM;
/* Prefer 1k ECC chunk over 512 ones */
@ -1675,7 +1697,7 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct nand_chip *nand,
goto err;
}
data->mode = i;
sunxi_nand->ecc->mode = i;
/* HW ECC always request ECC bytes for 1024 bytes blocks */
ecc->bytes = DIV_ROUND_UP(ecc->strength * fls(8 * 1024), 8);
@ -1693,9 +1715,8 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct nand_chip *nand,
ecc->read_oob = sunxi_nfc_hw_ecc_read_oob;
ecc->write_oob = sunxi_nfc_hw_ecc_write_oob;
mtd_set_ooblayout(mtd, &sunxi_nand_ooblayout_ops);
ecc->priv = data;
if (nfc->dmac) {
if (nfc->dmac || nfc->caps->has_mdma) {
ecc->read_page = sunxi_nfc_hw_ecc_read_page_dma;
ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage_dma;
ecc->write_page = sunxi_nfc_hw_ecc_write_page_dma;
@ -1714,16 +1735,18 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct nand_chip *nand,
return 0;
err:
kfree(data);
kfree(sunxi_nand->ecc);
return ret;
}
static void sunxi_nand_ecc_cleanup(struct nand_ecc_ctrl *ecc)
static void sunxi_nand_ecc_cleanup(struct sunxi_nand_chip *sunxi_nand)
{
struct nand_ecc_ctrl *ecc = &sunxi_nand->nand.ecc;
switch (ecc->engine_type) {
case NAND_ECC_ENGINE_TYPE_ON_HOST:
sunxi_nand_hw_ecc_ctrl_cleanup(ecc);
sunxi_nand_hw_ecc_ctrl_cleanup(sunxi_nand);
break;
case NAND_ECC_ENGINE_TYPE_NONE:
default:
@ -2053,11 +2076,41 @@ static void sunxi_nand_chips_cleanup(struct sunxi_nfc *nfc)
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
sunxi_nand_ecc_cleanup(&chip->ecc);
sunxi_nand_ecc_cleanup(sunxi_nand);
list_del(&sunxi_nand->node);
}
}
static int sunxi_nfc_dma_init(struct sunxi_nfc *nfc, struct resource *r)
{
int ret;
if (nfc->caps->has_mdma)
return 0;
nfc->dmac = dma_request_chan(nfc->dev, "rxtx");
if (IS_ERR(nfc->dmac)) {
ret = PTR_ERR(nfc->dmac);
if (ret == -EPROBE_DEFER)
return ret;
/* Ignore errors to fall back to PIO mode */
dev_warn(nfc->dev, "failed to request rxtx DMA channel: %d\n", ret);
nfc->dmac = NULL;
} else {
struct dma_slave_config dmac_cfg = { };
dmac_cfg.src_addr = r->start + nfc->caps->reg_io_data;
dmac_cfg.dst_addr = dmac_cfg.src_addr;
dmac_cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
dmac_cfg.dst_addr_width = dmac_cfg.src_addr_width;
dmac_cfg.src_maxburst = nfc->caps->dma_maxburst;
dmac_cfg.dst_maxburst = nfc->caps->dma_maxburst;
dmaengine_slave_config(nfc->dmac, &dmac_cfg);
}
return 0;
}
static int sunxi_nfc_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@ -2132,30 +2185,10 @@ static int sunxi_nfc_probe(struct platform_device *pdev)
if (ret)
goto out_ahb_reset_reassert;
nfc->dmac = dma_request_chan(dev, "rxtx");
if (IS_ERR(nfc->dmac)) {
ret = PTR_ERR(nfc->dmac);
if (ret == -EPROBE_DEFER)
goto out_ahb_reset_reassert;
ret = sunxi_nfc_dma_init(nfc, r);
/* Ignore errors to fall back to PIO mode */
dev_warn(dev, "failed to request rxtx DMA channel: %d\n", ret);
nfc->dmac = NULL;
} else {
struct dma_slave_config dmac_cfg = { };
dmac_cfg.src_addr = r->start + nfc->caps->reg_io_data;
dmac_cfg.dst_addr = dmac_cfg.src_addr;
dmac_cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
dmac_cfg.dst_addr_width = dmac_cfg.src_addr_width;
dmac_cfg.src_maxburst = nfc->caps->dma_maxburst;
dmac_cfg.dst_maxburst = nfc->caps->dma_maxburst;
dmaengine_slave_config(nfc->dmac, &dmac_cfg);
if (nfc->caps->extra_mbus_conf)
writel(readl(nfc->regs + NFC_REG_CTL) |
NFC_DMA_TYPE_NORMAL, nfc->regs + NFC_REG_CTL);
}
if (ret)
goto out_ahb_reset_reassert;
platform_set_drvdata(pdev, nfc);
@ -2202,7 +2235,7 @@ static const struct sunxi_nfc_caps sunxi_nfc_a10_caps = {
};
static const struct sunxi_nfc_caps sunxi_nfc_a23_caps = {
.extra_mbus_conf = true,
.has_mdma = true,
.reg_io_data = NFC_REG_A23_IO_DATA,
.dma_maxburst = 8,
};


@ -35,7 +35,6 @@
#include <linux/ioport.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/partitions.h>
#include <linux/slab.h>
@ -293,11 +292,11 @@ static int tmio_nand_correct_data(struct nand_chip *chip, unsigned char *buf,
int r0, r1;
/* assume ecc.size = 512 and ecc.bytes = 6 */
r0 = __nand_correct_data(buf, read_ecc, calc_ecc, 256, false);
r0 = rawnand_sw_hamming_correct(chip, buf, read_ecc, calc_ecc);
if (r0 < 0)
return r0;
r1 = __nand_correct_data(buf + 256, read_ecc + 3, calc_ecc + 3, 256,
false);
r1 = rawnand_sw_hamming_correct(chip, buf + 256, read_ecc + 3,
calc_ecc + 3);
if (r1 < 0)
return r1;
return r0 + r1;


@ -14,7 +14,6 @@
#include <linux/delay.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/partitions.h>
#include <linux/io.h>
#include <linux/platform_data/txx9/ndfmc.h>
@ -194,8 +193,8 @@ static int txx9ndfmc_correct_data(struct nand_chip *chip, unsigned char *buf,
int stat;
for (eccsize = chip->ecc.size; eccsize > 0; eccsize -= 256) {
stat = __nand_correct_data(buf, read_ecc, calc_ecc, 256,
false);
stat = rawnand_sw_hamming_correct(chip, buf, read_ecc,
calc_ecc);
if (stat < 0)
return stat;
corrected += stat;


@ -2,6 +2,7 @@
menuconfig MTD_SPI_NAND
tristate "SPI NAND device Support"
select MTD_NAND_CORE
select MTD_NAND_ECC
depends on SPI_MASTER
select SPI_MEM
help


@ -193,6 +193,135 @@ static int spinand_ecc_enable(struct spinand_device *spinand,
enable ? CFG_ECC_ENABLE : 0);
}
static int spinand_check_ecc_status(struct spinand_device *spinand, u8 status)
{
struct nand_device *nand = spinand_to_nand(spinand);
if (spinand->eccinfo.get_status)
return spinand->eccinfo.get_status(spinand, status);
switch (status & STATUS_ECC_MASK) {
case STATUS_ECC_NO_BITFLIPS:
return 0;
case STATUS_ECC_HAS_BITFLIPS:
/*
* We have no way to know exactly how many bitflips have been
* fixed, so let's return the maximum possible value so that
* wear-leveling layers move the data immediately.
*/
return nanddev_get_ecc_conf(nand)->strength;
case STATUS_ECC_UNCOR_ERROR:
return -EBADMSG;
default:
break;
}
return -EINVAL;
}
static int spinand_noecc_ooblayout_ecc(struct mtd_info *mtd, int section,
struct mtd_oob_region *region)
{
return -ERANGE;
}
static int spinand_noecc_ooblayout_free(struct mtd_info *mtd, int section,
struct mtd_oob_region *region)
{
if (section)
return -ERANGE;
/* Reserve 2 bytes for the BBM. */
region->offset = 2;
region->length = 62;
return 0;
}
static const struct mtd_ooblayout_ops spinand_noecc_ooblayout = {
.ecc = spinand_noecc_ooblayout_ecc,
.free = spinand_noecc_ooblayout_free,
};
static int spinand_ondie_ecc_init_ctx(struct nand_device *nand)
{
struct spinand_device *spinand = nand_to_spinand(nand);
struct mtd_info *mtd = nanddev_to_mtd(nand);
struct spinand_ondie_ecc_conf *engine_conf;
nand->ecc.ctx.conf.engine_type = NAND_ECC_ENGINE_TYPE_ON_DIE;
nand->ecc.ctx.conf.step_size = nand->ecc.requirements.step_size;
nand->ecc.ctx.conf.strength = nand->ecc.requirements.strength;
engine_conf = kzalloc(sizeof(*engine_conf), GFP_KERNEL);
if (!engine_conf)
return -ENOMEM;
nand->ecc.ctx.priv = engine_conf;
if (spinand->eccinfo.ooblayout)
mtd_set_ooblayout(mtd, spinand->eccinfo.ooblayout);
else
mtd_set_ooblayout(mtd, &spinand_noecc_ooblayout);
return 0;
}
static void spinand_ondie_ecc_cleanup_ctx(struct nand_device *nand)
{
kfree(nand->ecc.ctx.priv);
}
static int spinand_ondie_ecc_prepare_io_req(struct nand_device *nand,
struct nand_page_io_req *req)
{
struct spinand_device *spinand = nand_to_spinand(nand);
bool enable = (req->mode != MTD_OPS_RAW);
/* Only enable or disable the engine */
return spinand_ecc_enable(spinand, enable);
}
static int spinand_ondie_ecc_finish_io_req(struct nand_device *nand,
struct nand_page_io_req *req)
{
struct spinand_ondie_ecc_conf *engine_conf = nand->ecc.ctx.priv;
struct spinand_device *spinand = nand_to_spinand(nand);
if (req->mode == MTD_OPS_RAW)
return 0;
/* Nothing to do when finishing a page write */
if (req->type == NAND_PAGE_WRITE)
return 0;
/* Finish a page read: check the status, report errors/bitflips */
return spinand_check_ecc_status(spinand, engine_conf->status);
}
static struct nand_ecc_engine_ops spinand_ondie_ecc_engine_ops = {
.init_ctx = spinand_ondie_ecc_init_ctx,
.cleanup_ctx = spinand_ondie_ecc_cleanup_ctx,
.prepare_io_req = spinand_ondie_ecc_prepare_io_req,
.finish_io_req = spinand_ondie_ecc_finish_io_req,
};
static struct nand_ecc_engine spinand_ondie_ecc_engine = {
.ops = &spinand_ondie_ecc_engine_ops,
};
static void spinand_ondie_ecc_save_status(struct nand_device *nand, u8 status)
{
struct spinand_ondie_ecc_conf *engine_conf = nand->ecc.ctx.priv;
if (nand->ecc.ctx.conf.engine_type == NAND_ECC_ENGINE_TYPE_ON_DIE &&
engine_conf)
engine_conf->status = status;
}
static int spinand_write_enable_op(struct spinand_device *spinand)
{
struct spi_mem_op op = SPINAND_WR_EN_DIS_OP(true);
@ -214,7 +343,6 @@ static int spinand_read_from_cache_op(struct spinand_device *spinand,
const struct nand_page_io_req *req)
{
struct nand_device *nand = spinand_to_nand(spinand);
struct mtd_info *mtd = nanddev_to_mtd(nand);
struct spi_mem_dirmap_desc *rdesc;
unsigned int nbytes = 0;
void *buf = NULL;
@ -254,16 +382,9 @@ static int spinand_read_from_cache_op(struct spinand_device *spinand,
memcpy(req->databuf.in, spinand->databuf + req->dataoffs,
req->datalen);
if (req->ooblen) {
if (req->mode == MTD_OPS_AUTO_OOB)
mtd_ooblayout_get_databytes(mtd, req->oobbuf.in,
spinand->oobbuf,
req->ooboffs,
req->ooblen);
else
memcpy(req->oobbuf.in, spinand->oobbuf + req->ooboffs,
req->ooblen);
}
if (req->ooblen)
memcpy(req->oobbuf.in, spinand->oobbuf + req->ooboffs,
req->ooblen);
return 0;
}
@ -272,7 +393,7 @@ static int spinand_write_to_cache_op(struct spinand_device *spinand,
const struct nand_page_io_req *req)
{
struct nand_device *nand = spinand_to_nand(spinand);
struct mtd_info *mtd = nanddev_to_mtd(nand);
struct mtd_info *mtd = spinand_to_mtd(spinand);
struct spi_mem_dirmap_desc *wdesc;
unsigned int nbytes, column = 0;
void *buf = spinand->databuf;
@ -284,9 +405,12 @@ static int spinand_write_to_cache_op(struct spinand_device *spinand,
* must fill the page cache entirely even if we only want to program
* the data portion of the page, otherwise we might corrupt the BBM or
* user data previously programmed in OOB area.
*
* Only reset the data buffer manually, the OOB buffer is prepared by
* the ECC engine's ->prepare_io_req() callback.
*/
nbytes = nanddev_page_size(nand) + nanddev_per_page_oobsize(nand);
memset(spinand->databuf, 0xff, nbytes);
memset(spinand->databuf, 0xff, nanddev_page_size(nand));
if (req->datalen)
memcpy(spinand->databuf + req->dataoffs, req->databuf.out,
@ -402,42 +526,17 @@ static int spinand_lock_block(struct spinand_device *spinand, u8 lock)
return spinand_write_reg_op(spinand, REG_BLOCK_LOCK, lock);
}
static int spinand_check_ecc_status(struct spinand_device *spinand, u8 status)
static int spinand_read_page(struct spinand_device *spinand,
const struct nand_page_io_req *req)
{
struct nand_device *nand = spinand_to_nand(spinand);
if (spinand->eccinfo.get_status)
return spinand->eccinfo.get_status(spinand, status);
switch (status & STATUS_ECC_MASK) {
case STATUS_ECC_NO_BITFLIPS:
return 0;
case STATUS_ECC_HAS_BITFLIPS:
/*
* We have no way to know exactly how many bitflips have been
* fixed, so let's return the maximum possible value so that
* wear-leveling layers move the data immediately.
*/
return nanddev_get_ecc_conf(nand)->strength;
case STATUS_ECC_UNCOR_ERROR:
return -EBADMSG;
default:
break;
}
return -EINVAL;
}
static int spinand_read_page(struct spinand_device *spinand,
const struct nand_page_io_req *req,
bool ecc_enabled)
{
u8 status;
int ret;
ret = nand_ecc_prepare_io_req(nand, (struct nand_page_io_req *)req);
if (ret)
return ret;
ret = spinand_load_page_op(spinand, req);
if (ret)
return ret;
@ -446,22 +545,26 @@ static int spinand_read_page(struct spinand_device *spinand,
if (ret < 0)
return ret;
spinand_ondie_ecc_save_status(nand, status);
ret = spinand_read_from_cache_op(spinand, req);
if (ret)
return ret;
if (!ecc_enabled)
return 0;
return spinand_check_ecc_status(spinand, status);
return nand_ecc_finish_io_req(nand, (struct nand_page_io_req *)req);
}
static int spinand_write_page(struct spinand_device *spinand,
const struct nand_page_io_req *req)
{
struct nand_device *nand = spinand_to_nand(spinand);
u8 status;
int ret;
ret = nand_ecc_prepare_io_req(nand, (struct nand_page_io_req *)req);
if (ret)
return ret;
ret = spinand_write_enable_op(spinand);
if (ret)
return ret;
@ -476,9 +579,9 @@ static int spinand_write_page(struct spinand_device *spinand,
ret = spinand_wait(spinand, &status);
if (!ret && (status & STATUS_PROG_FAILED))
ret = -EIO;
return -EIO;
return ret;
return nand_ecc_finish_io_req(nand, (struct nand_page_io_req *)req);
}
static int spinand_mtd_read(struct mtd_info *mtd, loff_t from,
@ -488,25 +591,24 @@ static int spinand_mtd_read(struct mtd_info *mtd, loff_t from,
struct nand_device *nand = mtd_to_nanddev(mtd);
unsigned int max_bitflips = 0;
struct nand_io_iter iter;
bool enable_ecc = false;
bool disable_ecc = false;
bool ecc_failed = false;
int ret = 0;
if (ops->mode != MTD_OPS_RAW && spinand->eccinfo.ooblayout)
enable_ecc = true;
if (ops->mode == MTD_OPS_RAW || !spinand->eccinfo.ooblayout)
disable_ecc = true;
mutex_lock(&spinand->lock);
nanddev_io_for_each_page(nand, NAND_PAGE_READ, from, ops, &iter) {
if (disable_ecc)
iter.req.mode = MTD_OPS_RAW;
ret = spinand_select_target(spinand, iter.req.pos.target);
if (ret)
break;
ret = spinand_ecc_enable(spinand, enable_ecc);
if (ret)
break;
ret = spinand_read_page(spinand, &iter.req, enable_ecc);
ret = spinand_read_page(spinand, &iter.req);
if (ret < 0 && ret != -EBADMSG)
break;
@ -537,20 +639,19 @@ static int spinand_mtd_write(struct mtd_info *mtd, loff_t to,
struct spinand_device *spinand = mtd_to_spinand(mtd);
struct nand_device *nand = mtd_to_nanddev(mtd);
struct nand_io_iter iter;
bool enable_ecc = false;
bool disable_ecc = false;
int ret = 0;
if (ops->mode != MTD_OPS_RAW && mtd->ooblayout)
enable_ecc = true;
if (ops->mode == MTD_OPS_RAW || !mtd->ooblayout)
disable_ecc = true;
mutex_lock(&spinand->lock);
nanddev_io_for_each_page(nand, NAND_PAGE_WRITE, to, ops, &iter) {
ret = spinand_select_target(spinand, iter.req.pos.target);
if (ret)
break;
if (disable_ecc)
iter.req.mode = MTD_OPS_RAW;
ret = spinand_ecc_enable(spinand, enable_ecc);
ret = spinand_select_target(spinand, iter.req.pos.target);
if (ret)
break;
@ -580,7 +681,7 @@ static bool spinand_isbad(struct nand_device *nand, const struct nand_pos *pos)
};
spinand_select_target(spinand, pos->target);
spinand_read_page(spinand, &req, false);
spinand_read_page(spinand, &req);
if (marker[0] != 0xff || marker[1] != 0xff)
return true;
@ -965,30 +1066,6 @@ static int spinand_detect(struct spinand_device *spinand)
return 0;
}
static int spinand_noecc_ooblayout_ecc(struct mtd_info *mtd, int section,
struct mtd_oob_region *region)
{
return -ERANGE;
}
static int spinand_noecc_ooblayout_free(struct mtd_info *mtd, int section,
struct mtd_oob_region *region)
{
if (section)
return -ERANGE;
/* Reserve 2 bytes for the BBM. */
region->offset = 2;
region->length = 62;
return 0;
}
static const struct mtd_ooblayout_ops spinand_noecc_ooblayout = {
.ecc = spinand_noecc_ooblayout_ecc,
.free = spinand_noecc_ooblayout_free,
};
static int spinand_init(struct spinand_device *spinand)
{
struct device *dev = &spinand->spimem->spi->dev;
@ -1066,10 +1143,15 @@ static int spinand_init(struct spinand_device *spinand)
if (ret)
goto err_manuf_cleanup;
/*
* Right now, we don't support ECC, so let the whole oob
* area is available for user.
*/
/* SPI-NAND default ECC engine is on-die */
nand->ecc.defaults.engine_type = NAND_ECC_ENGINE_TYPE_ON_DIE;
nand->ecc.ondie_engine = &spinand_ondie_ecc_engine;
spinand_ecc_enable(spinand, false);
ret = nanddev_ecc_engine_init(nand);
if (ret)
goto err_cleanup_nanddev;
mtd->_read_oob = spinand_mtd_read;
mtd->_write_oob = spinand_mtd_write;
mtd->_block_isbad = spinand_mtd_block_isbad;
@ -1078,14 +1160,11 @@ static int spinand_init(struct spinand_device *spinand)
mtd->_erase = spinand_mtd_erase;
mtd->_max_bad_blocks = nanddev_mtd_max_bad_blocks;
if (spinand->eccinfo.ooblayout)
mtd_set_ooblayout(mtd, spinand->eccinfo.ooblayout);
else
mtd_set_ooblayout(mtd, &spinand_noecc_ooblayout);
ret = mtd_ooblayout_count_freebytes(mtd);
if (ret < 0)
goto err_cleanup_nanddev;
if (nand->ecc.engine) {
ret = mtd_ooblayout_count_freebytes(mtd);
if (ret < 0)
goto err_cleanup_ecc_engine;
}
mtd->oobavail = ret;
@ -1095,6 +1174,9 @@ static int spinand_init(struct spinand_device *spinand)
return 0;
err_cleanup_ecc_engine:
nanddev_ecc_engine_cleanup(nand);
err_cleanup_nanddev:
nanddev_cleanup(nand);


@ -119,6 +119,53 @@ static const struct spinand_info macronix_spinand_table[] = {
&update_cache_variants),
SPINAND_HAS_QE_BIT,
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
SPINAND_INFO("MX35LF2GE4AD",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x26),
NAND_MEMORG(1, 2048, 64, 64, 2048, 40, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
0,
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
mx35lf1ge4ab_ecc_get_status)),
SPINAND_INFO("MX35LF4GE4AD",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x37),
NAND_MEMORG(1, 4096, 128, 64, 2048, 40, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
0,
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
mx35lf1ge4ab_ecc_get_status)),
SPINAND_INFO("MX35LF1G24AD",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x14),
NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
0,
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
SPINAND_INFO("MX35LF2G24AD",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x24),
NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
0,
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
SPINAND_INFO("MX35LF4G24AD",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x35),
NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 2, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
0,
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
SPINAND_INFO("MX31LF1GE4BC",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x1e),
NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),


@ -28,7 +28,7 @@
#define MICRON_SELECT_DIE(x) ((x) << 6)
static SPINAND_OP_VARIANTS(read_cache_variants,
static SPINAND_OP_VARIANTS(quadio_read_cache_variants,
SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 2, NULL, 0),
SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
@ -36,14 +36,27 @@ static SPINAND_OP_VARIANTS(read_cache_variants,
SPINAND_PAGE_READ_FROM_CACHE_OP(true, 0, 1, NULL, 0),
SPINAND_PAGE_READ_FROM_CACHE_OP(false, 0, 1, NULL, 0));
static SPINAND_OP_VARIANTS(write_cache_variants,
static SPINAND_OP_VARIANTS(x4_write_cache_variants,
SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
SPINAND_PROG_LOAD(true, 0, NULL, 0));
static SPINAND_OP_VARIANTS(update_cache_variants,
static SPINAND_OP_VARIANTS(x4_update_cache_variants,
SPINAND_PROG_LOAD_X4(false, 0, NULL, 0),
SPINAND_PROG_LOAD(false, 0, NULL, 0));
/* Micron MT29F2G01AAAED Device */
static SPINAND_OP_VARIANTS(x4_read_cache_variants,
SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
SPINAND_PAGE_READ_FROM_CACHE_OP(true, 0, 1, NULL, 0),
SPINAND_PAGE_READ_FROM_CACHE_OP(false, 0, 1, NULL, 0));
static SPINAND_OP_VARIANTS(x1_write_cache_variants,
SPINAND_PROG_LOAD(true, 0, NULL, 0));
static SPINAND_OP_VARIANTS(x1_update_cache_variants,
SPINAND_PROG_LOAD(false, 0, NULL, 0));
static int micron_8_ooblayout_ecc(struct mtd_info *mtd, int section,
struct mtd_oob_region *region)
{
@ -74,6 +87,47 @@ static const struct mtd_ooblayout_ops micron_8_ooblayout = {
.free = micron_8_ooblayout_free,
};
static int micron_4_ooblayout_ecc(struct mtd_info *mtd, int section,
struct mtd_oob_region *region)
{
struct spinand_device *spinand = mtd_to_spinand(mtd);
if (section >= spinand->base.memorg.pagesize /
mtd->ecc_step_size)
return -ERANGE;
region->offset = (section * 16) + 8;
region->length = 8;
return 0;
}
static int micron_4_ooblayout_free(struct mtd_info *mtd, int section,
struct mtd_oob_region *region)
{
struct spinand_device *spinand = mtd_to_spinand(mtd);
if (section >= spinand->base.memorg.pagesize /
mtd->ecc_step_size)
return -ERANGE;
if (section) {
region->offset = 16 * section;
region->length = 8;
} else {
/* section 0 has two bytes reserved for the BBM */
region->offset = 2;
region->length = 6;
}
return 0;
}
static const struct mtd_ooblayout_ops micron_4_ooblayout = {
.ecc = micron_4_ooblayout_ecc,
.free = micron_4_ooblayout_free,
};
static int micron_select_target(struct spinand_device *spinand,
unsigned int target)
{
@ -120,9 +174,9 @@ static const struct spinand_info micron_spinand_table[] = {
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x24),
NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 2, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants,
&x4_write_cache_variants,
&x4_update_cache_variants),
0,
SPINAND_ECCINFO(&micron_8_ooblayout,
micron_8_ecc_get_status)),
@ -131,9 +185,9 @@ static const struct spinand_info micron_spinand_table[] = {
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x25),
NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 2, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants,
&x4_write_cache_variants,
&x4_update_cache_variants),
0,
SPINAND_ECCINFO(&micron_8_ooblayout,
micron_8_ecc_get_status)),
@ -142,9 +196,9 @@ static const struct spinand_info micron_spinand_table[] = {
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x14),
NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants,
&x4_write_cache_variants,
&x4_update_cache_variants),
0,
SPINAND_ECCINFO(&micron_8_ooblayout,
micron_8_ecc_get_status)),
@ -153,9 +207,9 @@ static const struct spinand_info micron_spinand_table[] = {
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x15),
NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants,
&x4_write_cache_variants,
&x4_update_cache_variants),
0,
SPINAND_ECCINFO(&micron_8_ooblayout,
micron_8_ecc_get_status)),
@ -164,9 +218,9 @@ static const struct spinand_info micron_spinand_table[] = {
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x36),
NAND_MEMORG(1, 2048, 128, 64, 2048, 80, 2, 1, 2),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants,
&x4_write_cache_variants,
&x4_update_cache_variants),
0,
SPINAND_ECCINFO(&micron_8_ooblayout,
micron_8_ecc_get_status),
@ -176,9 +230,9 @@ static const struct spinand_info micron_spinand_table[] = {
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x34),
NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants,
&x4_write_cache_variants,
&x4_update_cache_variants),
SPINAND_HAS_CR_FEAT_BIT,
SPINAND_ECCINFO(&micron_8_ooblayout,
micron_8_ecc_get_status)),
@ -187,9 +241,9 @@ static const struct spinand_info micron_spinand_table[] = {
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x35),
NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants,
&x4_write_cache_variants,
&x4_update_cache_variants),
SPINAND_HAS_CR_FEAT_BIT,
SPINAND_ECCINFO(&micron_8_ooblayout,
micron_8_ecc_get_status)),
@ -198,9 +252,9 @@ static const struct spinand_info micron_spinand_table[] = {
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x46),
NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 1, 1, 2),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants,
&x4_write_cache_variants,
&x4_update_cache_variants),
SPINAND_HAS_CR_FEAT_BIT,
SPINAND_ECCINFO(&micron_8_ooblayout,
micron_8_ecc_get_status),
@ -210,13 +264,23 @@ static const struct spinand_info micron_spinand_table[] = {
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x47),
NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 1, 1, 2),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants,
&x4_write_cache_variants,
&x4_update_cache_variants),
SPINAND_HAS_CR_FEAT_BIT,
SPINAND_ECCINFO(&micron_8_ooblayout,
micron_8_ecc_get_status),
SPINAND_SELECT_TARGET(micron_select_target)),
/* M69A 2Gb 3.3V */
SPINAND_INFO("MT29F2G01AAAED",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x9F),
NAND_MEMORG(1, 2048, 64, 64, 2048, 80, 2, 1, 1),
NAND_ECCREQ(4, 512),
SPINAND_INFO_OP_VARIANTS(&x4_read_cache_variants,
&x1_write_cache_variants,
&x1_update_cache_variants),
0,
SPINAND_ECCINFO(&micron_4_ooblayout, NULL)),
};
static int micron_spinand_init(struct spinand_device *spinand)


@ -28,7 +28,7 @@ static SPINAND_OP_VARIANTS(update_cache_x4_variants,
SPINAND_PROG_LOAD_X4(false, 0, NULL, 0),
SPINAND_PROG_LOAD(false, 0, NULL, 0));
/**
/*
* Backward compatibility for 1st generation Serial NAND devices
* which don't support Quad Program Load operation.
*/


@ -226,7 +226,7 @@ static int mtdpart_setup_real(char *s)
struct cmdline_mtd_partition *this_mtd;
struct mtd_partition *parts;
int mtd_id_len, num_parts;
char *p, *mtd_id, *semicol;
char *p, *mtd_id, *semicol, *open_parenth;
/*
* Replace the first ';' by a NULL char so strrchr can work
@ -236,6 +236,14 @@ static int mtdpart_setup_real(char *s)
if (semicol)
*semicol = '\0';
/*
* make sure that part-names containing ":" are not mistaken for
* the ":" that terminates the mtd-id
*/
open_parenth = strchr(s, '(');
if (open_parenth)
*open_parenth = '\0';
mtd_id = s;
/*
@ -245,6 +253,10 @@ static int mtdpart_setup_real(char *s)
*/
p = strrchr(s, ':');
/* Restore the '(' now. */
if (open_parenth)
*open_parenth = '(';
/* Restore the ';' now. */
if (semicol)
*semicol = ';';
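
The reason the first '(' has to be masked before the strrchr() lookup is easier to see in isolation. Below is a small user-space sketch (not part of the patch; the mtdparts string, "spi0.0" and the partition names are made up) showing where strrchr() lands with and without the temporary NUL at the '(':

#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Hypothetical command line: the partition name "boot:a" contains a ':' */
	char s[] = "spi0.0:1M(boot:a),-(rootfs)";
	char *open_parenth, *p;

	/* Without masking, the last ':' is the one inside the partition name */
	p = strrchr(s, ':');
	printf("unmasked: mtd-id would end at offset %td\n", p - s);

	/* Mask everything from the first '(' so only the mtd-id part is searched */
	open_parenth = strchr(s, '(');
	if (open_parenth)
		*open_parenth = '\0';

	p = strrchr(s, ':');

	/* Restore the '(' as the parser does */
	if (open_parenth)
		*open_parenth = '(';

	printf("masked:   mtd-id ends at offset %td\n", p - s);
	return 0;
}

With the string above the unmasked lookup stops at offset 14, inside "boot:a", while the masked lookup correctly stops at offset 6, right after the mtd-id.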


@ -13,7 +13,7 @@
#include <linux/sysfs.h>
#include <linux/bitops.h>
#include <linux/slab.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/nand-ecc-sw-hamming.h>
#include "nand/raw/sm_common.h"
#include "sm_ftl.h"
@ -216,20 +216,19 @@ static void sm_break_offset(struct sm_ftl *ftl, loff_t loffset,
static int sm_correct_sector(uint8_t *buffer, struct sm_oob *oob)
{
bool sm_order = IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC);
uint8_t ecc[3];
__nand_calculate_ecc(buffer, SM_SMALL_PAGE, ecc,
IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
if (__nand_correct_data(buffer, ecc, oob->ecc1, SM_SMALL_PAGE,
IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC)) < 0)
ecc_sw_hamming_calculate(buffer, SM_SMALL_PAGE, ecc, sm_order);
if (ecc_sw_hamming_correct(buffer, ecc, oob->ecc1, SM_SMALL_PAGE,
sm_order) < 0)
return -EIO;
buffer += SM_SMALL_PAGE;
__nand_calculate_ecc(buffer, SM_SMALL_PAGE, ecc,
IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
if (__nand_correct_data(buffer, ecc, oob->ecc2, SM_SMALL_PAGE,
IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC)) < 0)
ecc_sw_hamming_calculate(buffer, SM_SMALL_PAGE, ecc, sm_order);
if (ecc_sw_hamming_correct(buffer, ecc, oob->ecc2, SM_SMALL_PAGE,
sm_order) < 0)
return -EIO;
return 0;
}
@ -369,6 +368,7 @@ static int sm_write_block(struct sm_ftl *ftl, uint8_t *buf,
int zone, int block, int lba,
unsigned long invalid_bitmap)
{
bool sm_order = IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC);
struct sm_oob oob;
int boffset;
int retry = 0;
@ -395,13 +395,13 @@ restart:
}
if (ftl->smallpagenand) {
__nand_calculate_ecc(buf + boffset, SM_SMALL_PAGE,
oob.ecc1,
IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
ecc_sw_hamming_calculate(buf + boffset,
SM_SMALL_PAGE, oob.ecc1,
sm_order);
__nand_calculate_ecc(buf + boffset + SM_SMALL_PAGE,
SM_SMALL_PAGE, oob.ecc2,
IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
ecc_sw_hamming_calculate(buf + boffset + SM_SMALL_PAGE,
SM_SMALL_PAGE, oob.ecc2,
sm_order);
}
if (!sm_write_sector(ftl, zone, block, boffset,
buf + boffset, &oob))


@ -24,6 +24,50 @@ config MTD_SPI_NOR_USE_4K_SECTORS
Please note that some tools/drivers/filesystems may not work with
4096 B erase size (e.g. UBIFS requires 15 KiB as a minimum).
choice
prompt "Software write protection at boot"
default MTD_SPI_NOR_SWP_DISABLE_ON_VOLATILE
config MTD_SPI_NOR_SWP_DISABLE
bool "Disable SWP on any flashes (legacy behavior)"
help
This option disables the software write protection on any SPI
flashes at boot-up.
Depending on the flash chip this either clears the block protection
bits or does a "Global Unprotect" command.
Don't use this if you intend to use the software write protection
of your SPI flash. This is only to keep backwards compatibility.
config MTD_SPI_NOR_SWP_DISABLE_ON_VOLATILE
bool "Disable SWP on flashes w/ volatile protection bits"
help
Some SPI flashes have volatile block protection bits, i.e. after a
power-up or a reset the flash is software write protected by
default.
This option disables the software write protection for this kind
of flash while keeping it enabled for any other SPI flashes
which have non-volatile write protection bits.
If the software write protection gets disabled, then depending on
the flash either the block protection bits are cleared or a
"Global Unprotect" command is issued.
If you are unsure, select this option.
config MTD_SPI_NOR_SWP_KEEP
bool "Keep software write protection as is"
help
If you select this option the software write protection of any
SPI flashes will not be changed. If your flash is software write
protected or will be automatically software write protected after
power-up you have to manually unlock it before you are able to
write to it.
endchoice
source "drivers/mtd/spi-nor/controllers/Kconfig"
endif # MTD_SPI_NOR
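
For orientation, here is a minimal sketch of how the core can act on this choice at init time. It is an illustration only (the corresponding core.c diff is not shown on this page): the spi_nor_unlock() helper and the exact call site are assumptions, while SNOR_F_HAS_LOCK and SNOR_F_SWP_IS_VOLATILE are the flags introduced elsewhere in this series.

/*
 * Sketch, not the upstream implementation: decide at probe/init time
 * whether the software write protection should be dropped, based on
 * the Kconfig choice above. Assumes a spi_nor_unlock() helper that
 * clears the block protection bits or issues a "Global Unprotect".
 */
static void spi_nor_try_unlock_all(struct spi_nor *nor)
{
	if (!(nor->flags & SNOR_F_HAS_LOCK))
		return;

	if (spi_nor_unlock(&nor->mtd, 0, nor->params->size))
		dev_dbg(nor->dev, "failed to unlock the whole flash array\n");
}

static void spi_nor_init_swp(struct spi_nor *nor)
{
	if (IS_ENABLED(CONFIG_MTD_SPI_NOR_SWP_DISABLE))
		/* Legacy behavior: always unprotect at boot */
		spi_nor_try_unlock_all(nor);
	else if (IS_ENABLED(CONFIG_MTD_SPI_NOR_SWP_DISABLE_ON_VOLATILE) &&
		 nor->flags & SNOR_F_SWP_IS_VOLATILE)
		/* Only unprotect flashes that power up write-protected */
		spi_nor_try_unlock_all(nor);

	/* MTD_SPI_NOR_SWP_KEEP: leave the protection state untouched */
}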


@ -8,39 +8,192 @@
#include "core.h"
#define ATMEL_SR_GLOBAL_PROTECT_MASK GENMASK(5, 2)
/*
* The Atmel AT25FS010/AT25FS040 parts have some weird configuration for the
* block protection bits. We don't support them. But legacy behavior in linux
* is to unlock the whole flash array on startup. Therefore, we have to support
* exactly this operation.
*/
static int atmel_at25fs_lock(struct spi_nor *nor, loff_t ofs, uint64_t len)
{
return -EOPNOTSUPP;
}
static int atmel_at25fs_unlock(struct spi_nor *nor, loff_t ofs, uint64_t len)
{
int ret;
/* We only support unlocking the whole flash array */
if (ofs || len != nor->params->size)
return -EINVAL;
/* Write 0x00 to the status register to disable write protection */
ret = spi_nor_write_sr_and_check(nor, 0);
if (ret)
dev_dbg(nor->dev, "unable to clear BP bits, WP# asserted?\n");
return ret;
}
static int atmel_at25fs_is_locked(struct spi_nor *nor, loff_t ofs, uint64_t len)
{
return -EOPNOTSUPP;
}
static const struct spi_nor_locking_ops atmel_at25fs_locking_ops = {
.lock = atmel_at25fs_lock,
.unlock = atmel_at25fs_unlock,
.is_locked = atmel_at25fs_is_locked,
};
static void atmel_at25fs_default_init(struct spi_nor *nor)
{
nor->params->locking_ops = &atmel_at25fs_locking_ops;
}
static const struct spi_nor_fixups atmel_at25fs_fixups = {
.default_init = atmel_at25fs_default_init,
};
/**
* atmel_set_global_protection - Do a Global Protect or Unprotect command
* @nor: pointer to 'struct spi_nor'
* @ofs: offset in bytes
* @len: len in bytes
* @is_protect: if true do a Global Protect otherwise it is a Global Unprotect
*
* Return: 0 on success, -error otherwise.
*/
static int atmel_set_global_protection(struct spi_nor *nor, loff_t ofs,
uint64_t len, bool is_protect)
{
int ret;
u8 sr;
/* We only support locking the whole flash array */
if (ofs || len != nor->params->size)
return -EINVAL;
ret = spi_nor_read_sr(nor, nor->bouncebuf);
if (ret)
return ret;
sr = nor->bouncebuf[0];
/* SRWD bit needs to be cleared, otherwise the protection doesn't change */
if (sr & SR_SRWD) {
sr &= ~SR_SRWD;
ret = spi_nor_write_sr_and_check(nor, sr);
if (ret) {
dev_dbg(nor->dev, "unable to clear SRWD bit, WP# asserted?\n");
return ret;
}
}
if (is_protect) {
sr |= ATMEL_SR_GLOBAL_PROTECT_MASK;
/*
* Set the SRWD bit again as soon as we are protecting
* anything. This will ensure that the WP# pin is working
* correctly. By doing this we also behave the same as
* spi_nor_sr_lock(), which sets SRWD if any block protection
* is active.
*/
sr |= SR_SRWD;
} else {
sr &= ~ATMEL_SR_GLOBAL_PROTECT_MASK;
}
nor->bouncebuf[0] = sr;
/*
* We cannot use the spi_nor_write_sr_and_check() because this command
* isn't really setting any bits, instead it is a pseudo command for
* "Global Unprotect" or "Global Protect"
*/
return spi_nor_write_sr(nor, nor->bouncebuf, 1);
}
static int atmel_global_protect(struct spi_nor *nor, loff_t ofs, uint64_t len)
{
return atmel_set_global_protection(nor, ofs, len, true);
}
static int atmel_global_unprotect(struct spi_nor *nor, loff_t ofs, uint64_t len)
{
return atmel_set_global_protection(nor, ofs, len, false);
}
static int atmel_is_global_protected(struct spi_nor *nor, loff_t ofs, uint64_t len)
{
int ret;
if (ofs >= nor->params->size || (ofs + len) > nor->params->size)
return -EINVAL;
ret = spi_nor_read_sr(nor, nor->bouncebuf);
if (ret)
return ret;
return ((nor->bouncebuf[0] & ATMEL_SR_GLOBAL_PROTECT_MASK) == ATMEL_SR_GLOBAL_PROTECT_MASK);
}
static const struct spi_nor_locking_ops atmel_global_protection_ops = {
.lock = atmel_global_protect,
.unlock = atmel_global_unprotect,
.is_locked = atmel_is_global_protected,
};
static void atmel_global_protection_default_init(struct spi_nor *nor)
{
nor->params->locking_ops = &atmel_global_protection_ops;
}
static const struct spi_nor_fixups atmel_global_protection_fixups = {
.default_init = atmel_global_protection_default_init,
};
static const struct flash_info atmel_parts[] = {
/* Atmel -- some are (confusingly) marketed as "DataFlash" */
{ "at25fs010", INFO(0x1f6601, 0, 32 * 1024, 4, SECT_4K) },
{ "at25fs040", INFO(0x1f6604, 0, 64 * 1024, 8, SECT_4K) },
{ "at25fs010", INFO(0x1f6601, 0, 32 * 1024, 4, SECT_4K | SPI_NOR_HAS_LOCK)
.fixups = &atmel_at25fs_fixups },
{ "at25fs040", INFO(0x1f6604, 0, 64 * 1024, 8, SECT_4K | SPI_NOR_HAS_LOCK)
.fixups = &atmel_at25fs_fixups },
{ "at25df041a", INFO(0x1f4401, 0, 64 * 1024, 8, SECT_4K) },
{ "at25df321", INFO(0x1f4700, 0, 64 * 1024, 64, SECT_4K) },
{ "at25df321a", INFO(0x1f4701, 0, 64 * 1024, 64, SECT_4K) },
{ "at25df641", INFO(0x1f4800, 0, 64 * 1024, 128, SECT_4K) },
{ "at25df041a", INFO(0x1f4401, 0, 64 * 1024, 8,
SECT_4K | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE)
.fixups = &atmel_global_protection_fixups },
{ "at25df321", INFO(0x1f4700, 0, 64 * 1024, 64,
SECT_4K | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE)
.fixups = &atmel_global_protection_fixups },
{ "at25df321a", INFO(0x1f4701, 0, 64 * 1024, 64,
SECT_4K | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE)
.fixups = &atmel_global_protection_fixups },
{ "at25df641", INFO(0x1f4800, 0, 64 * 1024, 128,
SECT_4K | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE)
.fixups = &atmel_global_protection_fixups },
{ "at25sl321", INFO(0x1f4216, 0, 64 * 1024, 64,
SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
{ "at26f004", INFO(0x1f0400, 0, 64 * 1024, 8, SECT_4K) },
{ "at26df081a", INFO(0x1f4501, 0, 64 * 1024, 16, SECT_4K) },
{ "at26df161a", INFO(0x1f4601, 0, 64 * 1024, 32, SECT_4K) },
{ "at26df321", INFO(0x1f4700, 0, 64 * 1024, 64, SECT_4K) },
{ "at26df081a", INFO(0x1f4501, 0, 64 * 1024, 16,
SECT_4K | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE)
.fixups = &atmel_global_protection_fixups },
{ "at26df161a", INFO(0x1f4601, 0, 64 * 1024, 32,
SECT_4K | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE)
.fixups = &atmel_global_protection_fixups },
{ "at26df321", INFO(0x1f4700, 0, 64 * 1024, 64,
SECT_4K | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE)
.fixups = &atmel_global_protection_fixups },
{ "at45db081d", INFO(0x1f2500, 0, 64 * 1024, 16, SECT_4K) },
};
static void atmel_default_init(struct spi_nor *nor)
{
nor->flags |= SNOR_F_HAS_LOCK;
}
static const struct spi_nor_fixups atmel_fixups = {
.default_init = atmel_default_init,
};
const struct spi_nor_manufacturer spi_nor_atmel = {
.name = "atmel",
.parts = atmel_parts,
.nparts = ARRAY_SIZE(atmel_parts),
.fixups = &atmel_fixups,
};


@ -320,7 +320,7 @@ static const struct spi_nor_controller_ops hisi_controller_ops = {
.write = hisi_spi_nor_write,
};
/**
/*
* Get spi flash device information and register it as a mtd device.
*/
static int hisi_spi_nor_register(struct device_node *np,

Diff not shown because of its large size.


@ -26,6 +26,9 @@ enum spi_nor_option_flags {
SNOR_F_HAS_SR_TB_BIT6 = BIT(11),
SNOR_F_HAS_4BIT_BP = BIT(12),
SNOR_F_HAS_SR_BP3_BIT6 = BIT(13),
SNOR_F_IO_MODE_EN_VOLATILE = BIT(14),
SNOR_F_SOFT_RESET = BIT(15),
SNOR_F_SWP_IS_VOLATILE = BIT(16),
};
struct spi_nor_read_command {
@ -62,6 +65,7 @@ enum spi_nor_read_command_index {
SNOR_CMD_READ_1_8_8,
SNOR_CMD_READ_8_8_8,
SNOR_CMD_READ_1_8_8_DTR,
SNOR_CMD_READ_8_8_8_DTR,
SNOR_CMD_READ_MAX
};
@ -78,6 +82,7 @@ enum spi_nor_pp_command_index {
SNOR_CMD_PP_1_1_8,
SNOR_CMD_PP_1_8_8,
SNOR_CMD_PP_8_8_8,
SNOR_CMD_PP_8_8_8_DTR,
SNOR_CMD_PP_MAX
};
@ -189,7 +194,12 @@ struct spi_nor_locking_ops {
* Serial Flash Discoverable Parameters (SFDP) tables.
*
* @size: the flash memory density in bytes.
* @writesize: Minimal writable flash unit size. Defaults to 1. Set to
* ECC unit size for ECC-ed flashes.
* @page_size: the page size of the SPI NOR flash memory.
* @rdsr_dummy: dummy cycles needed for Read Status Register command.
* @rdsr_addr_nbytes: dummy address bytes needed for Read Status Register
* command.
* @hwcaps: describes the read and page program hardware
* capabilities.
* @reads: read capabilities ordered by priority: the higher index
@ -198,6 +208,7 @@ struct spi_nor_locking_ops {
* higher index in the array, the higher priority.
* @erase_map: the erase map parsed from the SFDP Sector Map Parameter
* Table.
* @octal_dtr_enable: enables SPI NOR octal DTR mode.
* @quad_enable: enables SPI NOR quad mode.
* @set_4byte_addr_mode: puts the SPI NOR in 4 byte addressing mode.
* @convert_addr: converts an absolute address into something the flash
@ -211,7 +222,10 @@ struct spi_nor_locking_ops {
*/
struct spi_nor_flash_parameter {
u64 size;
u32 writesize;
u32 page_size;
u8 rdsr_dummy;
u8 rdsr_addr_nbytes;
struct spi_nor_hwcaps hwcaps;
struct spi_nor_read_command reads[SNOR_CMD_READ_MAX];
@ -219,6 +233,7 @@ struct spi_nor_flash_parameter {
struct spi_nor_erase_map erase_map;
int (*octal_dtr_enable)(struct spi_nor *nor, bool enable);
int (*quad_enable)(struct spi_nor *nor);
int (*set_4byte_addr_mode)(struct spi_nor *nor, bool enable);
u32 (*convert_addr)(struct spi_nor *nor, u32 addr);
@ -311,6 +326,18 @@ struct flash_info {
* BP3 is bit 6 of status register.
* Must be used with SPI_NOR_4BIT_BP.
*/
#define SPI_NOR_OCTAL_DTR_READ BIT(19) /* Flash supports octal DTR Read. */
#define SPI_NOR_OCTAL_DTR_PP BIT(20) /* Flash supports Octal DTR Page Program */
#define SPI_NOR_IO_MODE_EN_VOLATILE BIT(21) /*
* Flash enables the best
* available I/O mode via a
* volatile bit.
*/
#define SPI_NOR_SWP_IS_VOLATILE BIT(22) /*
* Flash has volatile software write
* protection bits. Usually these will
* power-up in a write-protected state.
*/
/* Part specific fixup hooks. */
const struct spi_nor_fixups *fixups;
@ -399,6 +426,9 @@ extern const struct spi_nor_manufacturer spi_nor_winbond;
extern const struct spi_nor_manufacturer spi_nor_xilinx;
extern const struct spi_nor_manufacturer spi_nor_xmc;
void spi_nor_spimem_setup_op(const struct spi_nor *nor,
struct spi_mem_op *op,
const enum spi_nor_protocol proto);
int spi_nor_write_enable(struct spi_nor *nor);
int spi_nor_write_disable(struct spi_nor *nor);
int spi_nor_set_4byte_addr_mode(struct spi_nor *nor, bool enable);
@ -409,6 +439,9 @@ void spi_nor_unlock_and_unprep(struct spi_nor *nor);
int spi_nor_sr1_bit6_quad_enable(struct spi_nor *nor);
int spi_nor_sr2_bit1_quad_enable(struct spi_nor *nor);
int spi_nor_sr2_bit7_quad_enable(struct spi_nor *nor);
int spi_nor_read_sr(struct spi_nor *nor, u8 *sr);
int spi_nor_write_sr(struct spi_nor *nor, const u8 *sr, size_t len);
int spi_nor_write_sr_and_check(struct spi_nor *nor, u8 sr1);
int spi_nor_xread_sr(struct spi_nor *nor, u8 *sr);
ssize_t spi_nor_read_data(struct spi_nor *nor, loff_t from, size_t len,
@ -418,6 +451,11 @@ ssize_t spi_nor_write_data(struct spi_nor *nor, loff_t to, size_t len,
int spi_nor_hwcaps_read2cmd(u32 hwcaps);
u8 spi_nor_convert_3to4_read(u8 opcode);
void spi_nor_set_read_settings(struct spi_nor_read_command *read,
u8 num_mode_clocks,
u8 num_wait_states,
u8 opcode,
enum spi_nor_protocol proto);
void spi_nor_set_pp_settings(struct spi_nor_pp_command *pp, u8 opcode,
enum spi_nor_protocol proto);


@ -11,7 +11,7 @@
static const struct flash_info esmt_parts[] = {
/* ESMT */
{ "f25l32pa", INFO(0x8c2016, 0, 64 * 1024, 64,
SECT_4K | SPI_NOR_HAS_LOCK) },
SECT_4K | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
{ "f25l32qa", INFO(0x8c4116, 0, 64 * 1024, 64,
SECT_4K | SPI_NOR_HAS_LOCK) },
{ "f25l64qa", INFO(0x8c4117, 0, 64 * 1024, 128,


@ -10,23 +10,16 @@
static const struct flash_info intel_parts[] = {
/* Intel/Numonyx -- xxxs33b */
{ "160s33b", INFO(0x898911, 0, 64 * 1024, 32, 0) },
{ "320s33b", INFO(0x898912, 0, 64 * 1024, 64, 0) },
{ "640s33b", INFO(0x898913, 0, 64 * 1024, 128, 0) },
};
static void intel_default_init(struct spi_nor *nor)
{
nor->flags |= SNOR_F_HAS_LOCK;
}
static const struct spi_nor_fixups intel_fixups = {
.default_init = intel_default_init,
{ "160s33b", INFO(0x898911, 0, 64 * 1024, 32,
SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
{ "320s33b", INFO(0x898912, 0, 64 * 1024, 64,
SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
{ "640s33b", INFO(0x898913, 0, 64 * 1024, 128,
SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
};
const struct spi_nor_manufacturer spi_nor_intel = {
.name = "intel",
.parts = intel_parts,
.nparts = ARRAY_SIZE(intel_parts),
.fixups = &intel_fixups,
};


@ -8,10 +8,123 @@
#include "core.h"
#define SPINOR_OP_MT_DTR_RD 0xfd /* Fast Read opcode in DTR mode */
#define SPINOR_OP_MT_RD_ANY_REG 0x85 /* Read volatile register */
#define SPINOR_OP_MT_WR_ANY_REG 0x81 /* Write volatile register */
#define SPINOR_REG_MT_CFR0V 0x00 /* For setting octal DTR mode */
#define SPINOR_REG_MT_CFR1V 0x01 /* For setting dummy cycles */
#define SPINOR_MT_OCT_DTR 0xe7 /* Enable Octal DTR. */
#define SPINOR_MT_EXSPI 0xff /* Enable Extended SPI (default) */
static int spi_nor_micron_octal_dtr_enable(struct spi_nor *nor, bool enable)
{
struct spi_mem_op op;
u8 *buf = nor->bouncebuf;
int ret;
if (enable) {
/* Use 20 dummy cycles for memory array reads. */
ret = spi_nor_write_enable(nor);
if (ret)
return ret;
*buf = 20;
op = (struct spi_mem_op)
SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_MT_WR_ANY_REG, 1),
SPI_MEM_OP_ADDR(3, SPINOR_REG_MT_CFR1V, 1),
SPI_MEM_OP_NO_DUMMY,
SPI_MEM_OP_DATA_OUT(1, buf, 1));
ret = spi_mem_exec_op(nor->spimem, &op);
if (ret)
return ret;
ret = spi_nor_wait_till_ready(nor);
if (ret)
return ret;
}
ret = spi_nor_write_enable(nor);
if (ret)
return ret;
if (enable)
*buf = SPINOR_MT_OCT_DTR;
else
*buf = SPINOR_MT_EXSPI;
op = (struct spi_mem_op)
SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_MT_WR_ANY_REG, 1),
SPI_MEM_OP_ADDR(enable ? 3 : 4,
SPINOR_REG_MT_CFR0V, 1),
SPI_MEM_OP_NO_DUMMY,
SPI_MEM_OP_DATA_OUT(1, buf, 1));
if (!enable)
spi_nor_spimem_setup_op(nor, &op, SNOR_PROTO_8_8_8_DTR);
ret = spi_mem_exec_op(nor->spimem, &op);
if (ret)
return ret;
/* Read flash ID to make sure the switch was successful. */
op = (struct spi_mem_op)
SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDID, 1),
SPI_MEM_OP_NO_ADDR,
SPI_MEM_OP_DUMMY(enable ? 8 : 0, 1),
SPI_MEM_OP_DATA_IN(round_up(nor->info->id_len, 2),
buf, 1));
if (enable)
spi_nor_spimem_setup_op(nor, &op, SNOR_PROTO_8_8_8_DTR);
ret = spi_mem_exec_op(nor->spimem, &op);
if (ret)
return ret;
if (memcmp(buf, nor->info->id, nor->info->id_len))
return -EINVAL;
return 0;
}
static void mt35xu512aba_default_init(struct spi_nor *nor)
{
nor->params->octal_dtr_enable = spi_nor_micron_octal_dtr_enable;
}
static void mt35xu512aba_post_sfdp_fixup(struct spi_nor *nor)
{
/* Set the Fast Read settings. */
nor->params->hwcaps.mask |= SNOR_HWCAPS_READ_8_8_8_DTR;
spi_nor_set_read_settings(&nor->params->reads[SNOR_CMD_READ_8_8_8_DTR],
0, 20, SPINOR_OP_MT_DTR_RD,
SNOR_PROTO_8_8_8_DTR);
nor->cmd_ext_type = SPI_NOR_EXT_REPEAT;
nor->params->rdsr_dummy = 8;
nor->params->rdsr_addr_nbytes = 0;
/*
* The BFPT quad enable field is set to a reserved value so the quad
* enable function is ignored by spi_nor_parse_bfpt(). Make sure we
* disable it.
*/
nor->params->quad_enable = NULL;
}
static struct spi_nor_fixups mt35xu512aba_fixups = {
.default_init = mt35xu512aba_default_init,
.post_sfdp = mt35xu512aba_post_sfdp_fixup,
};
static const struct flash_info micron_parts[] = {
{ "mt35xu512aba", INFO(0x2c5b1a, 0, 128 * 1024, 512,
SECT_4K | USE_FSR | SPI_NOR_OCTAL_READ |
SPI_NOR_4B_OPCODES) },
SPI_NOR_4B_OPCODES | SPI_NOR_OCTAL_DTR_READ |
SPI_NOR_OCTAL_DTR_PP |
SPI_NOR_IO_MODE_EN_VOLATILE)
.fixups = &mt35xu512aba_fixups},
{ "mt35xu02g", INFO(0x2c5b1c, 0, 128 * 1024, 2048,
SECT_4K | USE_FSR | SPI_NOR_OCTAL_READ |
SPI_NOR_4B_OPCODES) },


@ -4,6 +4,7 @@
* Copyright (C) 2014, Freescale Semiconductor, Inc.
*/
#include <linux/bitfield.h>
#include <linux/slab.h>
#include <linux/sort.h>
#include <linux/mtd/spi-nor.h>
@ -19,6 +20,11 @@
#define SFDP_BFPT_ID 0xff00 /* Basic Flash Parameter Table */
#define SFDP_SECTOR_MAP_ID 0xff81 /* Sector Map Table */
#define SFDP_4BAIT_ID 0xff84 /* 4-byte Address Instruction Table */
#define SFDP_PROFILE1_ID 0xff05 /* xSPI Profile 1.0 table. */
#define SFDP_SCCR_MAP_ID 0xff87 /*
* Status, Control and Configuration
* Register Map.
*/
#define SFDP_SIGNATURE 0x50444653U
@ -59,7 +65,7 @@ struct sfdp_bfpt_read {
struct sfdp_bfpt_erase {
/*
* The half-word at offset <shift> in DWORD <dwoard> encodes the
* The half-word at offset <shift> in DWORD <dword> encodes the
* op code and erase sector size to be used by Sector Erase commands.
*/
u32 dword;
@ -602,10 +608,32 @@ static int spi_nor_parse_bfpt(struct spi_nor *nor,
break;
}
/* Soft Reset support. */
if (bfpt.dwords[BFPT_DWORD(16)] & BFPT_DWORD16_SWRST_EN_RST)
nor->flags |= SNOR_F_SOFT_RESET;
/* Stop here if not JESD216 rev C or later. */
if (bfpt_header->length == BFPT_DWORD_MAX_JESD216B)
return spi_nor_post_bfpt_fixups(nor, bfpt_header, &bfpt,
params);
/* 8D-8D-8D command extension. */
switch (bfpt.dwords[BFPT_DWORD(18)] & BFPT_DWORD18_CMD_EXT_MASK) {
case BFPT_DWORD18_CMD_EXT_REP:
nor->cmd_ext_type = SPI_NOR_EXT_REPEAT;
break;
case BFPT_DWORD18_CMD_EXT_INV:
nor->cmd_ext_type = SPI_NOR_EXT_INVERT;
break;
case BFPT_DWORD18_CMD_EXT_RES:
dev_dbg(nor->dev, "Reserved command extension used\n");
break;
case BFPT_DWORD18_CMD_EXT_16B:
dev_dbg(nor->dev, "16-bit opcodes not supported\n");
return -EOPNOTSUPP;
}
return spi_nor_post_bfpt_fixups(nor, bfpt_header, &bfpt, params);
}
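
As a reminder of what the two supported extension types mean on the wire (the helper below is an illustration, not part of the diff): in 8D-8D-8D mode every command is two bytes wide, and the second byte is either a repetition or the bitwise inverse of the opcode reported by BFPT DWORD 18.

/* Illustration only: build the 2-byte opcode sent in 8D-8D-8D mode */
static u16 build_dtr_opcode(u8 opcode, enum spi_nor_cmd_ext ext_type)
{
	u8 ext = (ext_type == SPI_NOR_EXT_INVERT) ? (u8)~opcode : opcode;

	/* e.g. Fast Read 0xEE becomes 0xEEEE (repeat) or 0xEE11 (invert) */
	return ((u16)opcode << 8) | ext;
}
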
@ -1047,9 +1075,16 @@ static int spi_nor_parse_4bait(struct spi_nor *nor,
}
/* 4BAIT is the only SFDP table that indicates page program support. */
if (pp_hwcaps & SNOR_HWCAPS_PP)
if (pp_hwcaps & SNOR_HWCAPS_PP) {
spi_nor_set_pp_settings(&params_pp[SNOR_CMD_PP],
SPINOR_OP_PP_4B, SNOR_PROTO_1_1_1);
/*
* Since xSPI Page Program opcode is backward compatible with
* Legacy SPI, use Legacy SPI opcode there as well.
*/
spi_nor_set_pp_settings(&params_pp[SNOR_CMD_PP_8_8_8_DTR],
SPINOR_OP_PP_4B, SNOR_PROTO_8_8_8_DTR);
}
if (pp_hwcaps & SNOR_HWCAPS_PP_1_1_4)
spi_nor_set_pp_settings(&params_pp[SNOR_CMD_PP_1_1_4],
SPINOR_OP_PP_1_1_4_4B,
@ -1083,6 +1118,131 @@ out:
return ret;
}
#define PROFILE1_DWORD1_RDSR_ADDR_BYTES BIT(29)
#define PROFILE1_DWORD1_RDSR_DUMMY BIT(28)
#define PROFILE1_DWORD1_RD_FAST_CMD GENMASK(15, 8)
#define PROFILE1_DWORD4_DUMMY_200MHZ GENMASK(11, 7)
#define PROFILE1_DWORD5_DUMMY_166MHZ GENMASK(31, 27)
#define PROFILE1_DWORD5_DUMMY_133MHZ GENMASK(21, 17)
#define PROFILE1_DWORD5_DUMMY_100MHZ GENMASK(11, 7)
/**
* spi_nor_parse_profile1() - parse the xSPI Profile 1.0 table
* @nor: pointer to a 'struct spi_nor'
* @profile1_header: pointer to the 'struct sfdp_parameter_header' describing
* the Profile 1.0 Table length and version.
* @params: pointer to the 'struct spi_nor_flash_parameter' to be filled.
*
* Return: 0 on success, -errno otherwise.
*/
static int spi_nor_parse_profile1(struct spi_nor *nor,
const struct sfdp_parameter_header *profile1_header,
struct spi_nor_flash_parameter *params)
{
u32 *dwords, addr;
size_t len;
int ret;
u8 dummy, opcode;
len = profile1_header->length * sizeof(*dwords);
dwords = kmalloc(len, GFP_KERNEL);
if (!dwords)
return -ENOMEM;
addr = SFDP_PARAM_HEADER_PTP(profile1_header);
ret = spi_nor_read_sfdp(nor, addr, len, dwords);
if (ret)
goto out;
le32_to_cpu_array(dwords, profile1_header->length);
/* Get 8D-8D-8D fast read opcode and dummy cycles. */
opcode = FIELD_GET(PROFILE1_DWORD1_RD_FAST_CMD, dwords[0]);
/* Set the Read Status Register dummy cycles and dummy address bytes. */
if (dwords[0] & PROFILE1_DWORD1_RDSR_DUMMY)
params->rdsr_dummy = 8;
else
params->rdsr_dummy = 4;
if (dwords[0] & PROFILE1_DWORD1_RDSR_ADDR_BYTES)
params->rdsr_addr_nbytes = 4;
else
params->rdsr_addr_nbytes = 0;
/*
* We don't know what speed the controller is running at. Find the
* dummy cycles for the fastest frequency the flash can run at to be
* sure we are never short of dummy cycles. A value of 0 means the
* frequency is not supported.
*
* Default to PROFILE1_DUMMY_DEFAULT if we don't find anything, and let
* flashes set the correct value if needed in their fixup hooks.
*/
dummy = FIELD_GET(PROFILE1_DWORD4_DUMMY_200MHZ, dwords[3]);
if (!dummy)
dummy = FIELD_GET(PROFILE1_DWORD5_DUMMY_166MHZ, dwords[4]);
if (!dummy)
dummy = FIELD_GET(PROFILE1_DWORD5_DUMMY_133MHZ, dwords[4]);
if (!dummy)
dummy = FIELD_GET(PROFILE1_DWORD5_DUMMY_100MHZ, dwords[4]);
if (!dummy)
dev_dbg(nor->dev,
"Can't find dummy cycles from Profile 1.0 table\n");
/* Round up to an even value to avoid tripping controllers up. */
dummy = round_up(dummy, 2);
/* Update the fast read settings. */
spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_8_8_8_DTR],
0, dummy, opcode,
SNOR_PROTO_8_8_8_DTR);
out:
kfree(dwords);
return ret;
}
#define SCCR_DWORD22_OCTAL_DTR_EN_VOLATILE BIT(31)
/**
* spi_nor_parse_sccr() - Parse the Status, Control and Configuration Register
* Map.
* @nor: pointer to a 'struct spi_nor'
* @sccr_header: pointer to the 'struct sfdp_parameter_header' describing
* the SCCR Map table length and version.
* @params: pointer to the 'struct spi_nor_flash_parameter' to be filled.
*
* Return: 0 on success, -errno otherwise.
*/
static int spi_nor_parse_sccr(struct spi_nor *nor,
const struct sfdp_parameter_header *sccr_header,
struct spi_nor_flash_parameter *params)
{
u32 *dwords, addr;
size_t len;
int ret;
len = sccr_header->length * sizeof(*dwords);
dwords = kmalloc(len, GFP_KERNEL);
if (!dwords)
return -ENOMEM;
addr = SFDP_PARAM_HEADER_PTP(sccr_header);
ret = spi_nor_read_sfdp(nor, addr, len, dwords);
if (ret)
goto out;
le32_to_cpu_array(dwords, sccr_header->length);
if (FIELD_GET(SCCR_DWORD22_OCTAL_DTR_EN_VOLATILE, dwords[22]))
nor->flags |= SNOR_F_IO_MODE_EN_VOLATILE;
out:
kfree(dwords);
return ret;
}
/**
* spi_nor_parse_sfdp() - parse the Serial Flash Discoverable Parameters.
* @nor: pointer to a 'struct spi_nor'
@ -1184,6 +1344,14 @@ int spi_nor_parse_sfdp(struct spi_nor *nor,
err = spi_nor_parse_4bait(nor, param_header, params);
break;
case SFDP_PROFILE1_ID:
err = spi_nor_parse_profile1(nor, param_header, params);
break;
case SFDP_SCCR_MAP_ID:
err = spi_nor_parse_sccr(nor, param_header, params);
break;
default:
break;
}


@ -90,6 +90,14 @@ struct sfdp_bfpt {
#define BFPT_DWORD15_QER_SR2_BIT1_NO_RD (0x4UL << 20)
#define BFPT_DWORD15_QER_SR2_BIT1 (0x5UL << 20) /* Spansion */
#define BFPT_DWORD16_SWRST_EN_RST BIT(12)
#define BFPT_DWORD18_CMD_EXT_MASK GENMASK(30, 29)
#define BFPT_DWORD18_CMD_EXT_REP (0x0UL << 29) /* Repeat */
#define BFPT_DWORD18_CMD_EXT_INV (0x1UL << 29) /* Invert */
#define BFPT_DWORD18_CMD_EXT_RES (0x2UL << 29) /* Reserved */
#define BFPT_DWORD18_CMD_EXT_16B (0x3UL << 29) /* 16-bit opcode */
struct sfdp_parameter_header {
u8 id_lsb;
u8 minor;


@ -8,6 +8,173 @@
#include "core.h"
#define SPINOR_OP_RD_ANY_REG 0x65 /* Read any register */
#define SPINOR_OP_WR_ANY_REG 0x71 /* Write any register */
#define SPINOR_REG_CYPRESS_CFR2V 0x00800003
#define SPINOR_REG_CYPRESS_CFR2V_MEMLAT_11_24 0xb
#define SPINOR_REG_CYPRESS_CFR3V 0x00800004
#define SPINOR_REG_CYPRESS_CFR3V_PGSZ BIT(4) /* Page size. */
#define SPINOR_REG_CYPRESS_CFR5V 0x00800006
#define SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_EN 0x3
#define SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_DS 0
#define SPINOR_OP_CYPRESS_RD_FAST 0xee
/**
* spi_nor_cypress_octal_dtr_enable() - Enable octal DTR on Cypress flashes.
* @nor: pointer to a 'struct spi_nor'
* @enable: whether to enable or disable Octal DTR
*
* This also sets the memory access latency cycles to 24 to allow the flash to
* run at up to 200MHz.
*
* Return: 0 on success, -errno otherwise.
*/
static int spi_nor_cypress_octal_dtr_enable(struct spi_nor *nor, bool enable)
{
struct spi_mem_op op;
u8 *buf = nor->bouncebuf;
int ret;
if (enable) {
/* Use 24 dummy cycles for memory array reads. */
ret = spi_nor_write_enable(nor);
if (ret)
return ret;
*buf = SPINOR_REG_CYPRESS_CFR2V_MEMLAT_11_24;
op = (struct spi_mem_op)
SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WR_ANY_REG, 1),
SPI_MEM_OP_ADDR(3, SPINOR_REG_CYPRESS_CFR2V,
1),
SPI_MEM_OP_NO_DUMMY,
SPI_MEM_OP_DATA_OUT(1, buf, 1));
ret = spi_mem_exec_op(nor->spimem, &op);
if (ret)
return ret;
ret = spi_nor_wait_till_ready(nor);
if (ret)
return ret;
nor->read_dummy = 24;
}
/* Set/unset the octal and DTR enable bits. */
ret = spi_nor_write_enable(nor);
if (ret)
return ret;
if (enable)
*buf = SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_EN;
else
*buf = SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_DS;
op = (struct spi_mem_op)
SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WR_ANY_REG, 1),
SPI_MEM_OP_ADDR(enable ? 3 : 4,
SPINOR_REG_CYPRESS_CFR5V,
1),
SPI_MEM_OP_NO_DUMMY,
SPI_MEM_OP_DATA_OUT(1, buf, 1));
if (!enable)
spi_nor_spimem_setup_op(nor, &op, SNOR_PROTO_8_8_8_DTR);
ret = spi_mem_exec_op(nor->spimem, &op);
if (ret)
return ret;
/* Read flash ID to make sure the switch was successful. */
op = (struct spi_mem_op)
SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDID, 1),
SPI_MEM_OP_ADDR(enable ? 4 : 0, 0, 1),
SPI_MEM_OP_DUMMY(enable ? 3 : 0, 1),
SPI_MEM_OP_DATA_IN(round_up(nor->info->id_len, 2),
buf, 1));
if (enable)
spi_nor_spimem_setup_op(nor, &op, SNOR_PROTO_8_8_8_DTR);
ret = spi_mem_exec_op(nor->spimem, &op);
if (ret)
return ret;
if (memcmp(buf, nor->info->id, nor->info->id_len))
return -EINVAL;
return 0;
}
static void s28hs512t_default_init(struct spi_nor *nor)
{
nor->params->octal_dtr_enable = spi_nor_cypress_octal_dtr_enable;
nor->params->writesize = 16;
}
static void s28hs512t_post_sfdp_fixup(struct spi_nor *nor)
{
/*
* On older versions of the flash the xSPI Profile 1.0 table has the
* 8D-8D-8D Fast Read opcode as 0x00. But it actually should be 0xEE.
*/
if (nor->params->reads[SNOR_CMD_READ_8_8_8_DTR].opcode == 0)
nor->params->reads[SNOR_CMD_READ_8_8_8_DTR].opcode =
SPINOR_OP_CYPRESS_RD_FAST;
/* This flash is also missing the 4-byte Page Program opcode bit. */
spi_nor_set_pp_settings(&nor->params->page_programs[SNOR_CMD_PP],
SPINOR_OP_PP_4B, SNOR_PROTO_1_1_1);
/*
* Since xSPI Page Program opcode is backward compatible with
* Legacy SPI, use Legacy SPI opcode there as well.
*/
spi_nor_set_pp_settings(&nor->params->page_programs[SNOR_CMD_PP_8_8_8_DTR],
SPINOR_OP_PP_4B, SNOR_PROTO_8_8_8_DTR);
/*
* The xSPI Profile 1.0 table advertises the number of additional
* address bytes needed for Read Status Register command as 0 but the
* actual value for that is 4.
*/
nor->params->rdsr_addr_nbytes = 4;
}
static int s28hs512t_post_bfpt_fixup(struct spi_nor *nor,
const struct sfdp_parameter_header *bfpt_header,
const struct sfdp_bfpt *bfpt,
struct spi_nor_flash_parameter *params)
{
/*
* The BFPT table advertises a 512B page size but the page size is
* actually configurable (with the default being 256B). Read from
* CFR3V[4] and set the correct size.
*/
struct spi_mem_op op =
SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RD_ANY_REG, 1),
SPI_MEM_OP_ADDR(3, SPINOR_REG_CYPRESS_CFR3V, 1),
SPI_MEM_OP_NO_DUMMY,
SPI_MEM_OP_DATA_IN(1, nor->bouncebuf, 1));
int ret;
ret = spi_mem_exec_op(nor->spimem, &op);
if (ret)
return ret;
if (nor->bouncebuf[0] & SPINOR_REG_CYPRESS_CFR3V_PGSZ)
params->page_size = 512;
else
params->page_size = 256;
return 0;
}
static struct spi_nor_fixups s28hs512t_fixups = {
.default_init = s28hs512t_default_init,
.post_sfdp = s28hs512t_post_sfdp_fixup,
.post_bfpt = s28hs512t_post_bfpt_fixup,
};
static int
s25fs_s_post_bfpt_fixups(struct spi_nor *nor,
const struct sfdp_parameter_header *bfpt_header,
@@ -104,6 +271,11 @@ static const struct flash_info spansion_parts[] = {
SPI_NOR_4B_OPCODES) },
{ "cy15x104q", INFO6(0x042cc2, 0x7f7f7f, 512 * 1024, 1,
SPI_NOR_NO_ERASE) },
{ "s28hs512t", INFO(0x345b1a, 0, 256 * 1024, 256,
SECT_4K | SPI_NOR_OCTAL_DTR_READ |
SPI_NOR_OCTAL_DTR_PP)
.fixups = &s28hs512t_fixups,
},
};
static void spansion_post_sfdp_fixups(struct spi_nor *nor)

@@ -11,26 +11,28 @@
static const struct flash_info sst_parts[] = {
/* SST -- large erase sizes are "overlays", "sectors" are 4K */
{ "sst25vf040b", INFO(0xbf258d, 0, 64 * 1024, 8,
-SECT_4K | SST_WRITE) },
+SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
{ "sst25vf080b", INFO(0xbf258e, 0, 64 * 1024, 16,
-SECT_4K | SST_WRITE) },
+SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
{ "sst25vf016b", INFO(0xbf2541, 0, 64 * 1024, 32,
-SECT_4K | SST_WRITE) },
+SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
{ "sst25vf032b", INFO(0xbf254a, 0, 64 * 1024, 64,
-SECT_4K | SST_WRITE) },
-{ "sst25vf064c", INFO(0xbf254b, 0, 64 * 1024, 128, SECT_4K) },
+SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
+{ "sst25vf064c", INFO(0xbf254b, 0, 64 * 1024, 128,
+SECT_4K | SPI_NOR_4BIT_BP | SPI_NOR_HAS_LOCK |
+SPI_NOR_SWP_IS_VOLATILE) },
{ "sst25wf512", INFO(0xbf2501, 0, 64 * 1024, 1,
-SECT_4K | SST_WRITE) },
+SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
{ "sst25wf010", INFO(0xbf2502, 0, 64 * 1024, 2,
-SECT_4K | SST_WRITE) },
+SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
{ "sst25wf020", INFO(0xbf2503, 0, 64 * 1024, 4,
-SECT_4K | SST_WRITE) },
-{ "sst25wf020a", INFO(0x621612, 0, 64 * 1024, 4, SECT_4K) },
-{ "sst25wf040b", INFO(0x621613, 0, 64 * 1024, 8, SECT_4K) },
+SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
+{ "sst25wf020a", INFO(0x621612, 0, 64 * 1024, 4, SECT_4K | SPI_NOR_HAS_LOCK) },
+{ "sst25wf040b", INFO(0x621613, 0, 64 * 1024, 8, SECT_4K | SPI_NOR_HAS_LOCK) },
{ "sst25wf040", INFO(0xbf2504, 0, 64 * 1024, 8,
-SECT_4K | SST_WRITE) },
+SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
{ "sst25wf080", INFO(0xbf2505, 0, 64 * 1024, 16,
-SECT_4K | SST_WRITE) },
+SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
{ "sst26wf016b", INFO(0xbf2651, 0, 64 * 1024, 32,
SECT_4K | SPI_NOR_DUAL_READ |
SPI_NOR_QUAD_READ) },
@@ -127,11 +129,6 @@ out:
return ret;
}
-static void sst_default_init(struct spi_nor *nor)
-{
-nor->flags |= SNOR_F_HAS_LOCK;
-}
static void sst_post_sfdp_fixups(struct spi_nor *nor)
{
if (nor->info->flags & SST_WRITE)
@@ -139,7 +136,6 @@ static void sst_post_sfdp_fixups(struct spi_nor *nor)
}
static const struct spi_nor_fixups sst_fixups = {
-.default_init = sst_default_init,
.post_sfdp = sst_post_sfdp_fixups,
};
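
For reference (not part of this hunk), the SPI_NOR_HAS_LOCK and SPI_NOR_SWP_IS_VOLATILE info flags set above map to SNOR_F_HAS_LOCK and SNOR_F_SWP_IS_VOLATILE on the nor flags, and the latter is what lets the core decide whether to unlock the whole array at power-up. A rough sketch of that policy, assuming the MTD_SPI_NOR_SWP_DISABLE* Kconfig options from the same series (the wrapper name is hypothetical):

/* Sketch: unlock everything at boot only when policy and flags allow it. */
static void spi_nor_boot_unlock_sketch(struct spi_nor *nor)
{
        if (!(nor->flags & SNOR_F_HAS_LOCK))
                return;

        if (IS_ENABLED(CONFIG_MTD_SPI_NOR_SWP_DISABLE) ||
            (IS_ENABLED(CONFIG_MTD_SPI_NOR_SWP_DISABLE_ON_VOLATILE) &&
             nor->flags & SNOR_F_SWP_IS_VOLATILE))
                spi_nor_try_unlock_all(nor);
}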

@@ -8,7 +8,7 @@
#include <linux/string.h>
#include <linux/bitops.h>
#include <linux/slab.h>
-#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>
#include "mtd_test.h"
@@ -119,13 +119,13 @@ static void no_bit_error(void *error_data, void *error_ecc,
static int no_bit_error_verify(void *error_data, void *error_ecc,
void *correct_data, const size_t size)
{
+bool sm_order = IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC);
unsigned char calc_ecc[3];
int ret;
-__nand_calculate_ecc(error_data, size, calc_ecc,
-IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
-ret = __nand_correct_data(error_data, error_ecc, calc_ecc, size,
-IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
+ecc_sw_hamming_calculate(error_data, size, calc_ecc, sm_order);
+ret = ecc_sw_hamming_correct(error_data, error_ecc, calc_ecc, size,
+sm_order);
if (ret == 0 && !memcmp(correct_data, error_data, size))
return 0;
@@ -149,13 +149,13 @@ static void single_bit_error_in_ecc(void *error_data, void *error_ecc,
static int single_bit_error_correct(void *error_data, void *error_ecc,
void *correct_data, const size_t size)
{
+bool sm_order = IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC);
unsigned char calc_ecc[3];
int ret;
-__nand_calculate_ecc(error_data, size, calc_ecc,
-IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
-ret = __nand_correct_data(error_data, error_ecc, calc_ecc, size,
-IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
+ecc_sw_hamming_calculate(error_data, size, calc_ecc, sm_order);
+ret = ecc_sw_hamming_correct(error_data, error_ecc, calc_ecc, size,
+sm_order);
if (ret == 1 && !memcmp(correct_data, error_data, size))
return 0;
@@ -186,13 +186,13 @@ static void double_bit_error_in_ecc(void *error_data, void *error_ecc,
static int double_bit_error_detect(void *error_data, void *error_ecc,
void *correct_data, const size_t size)
{
+bool sm_order = IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC);
unsigned char calc_ecc[3];
int ret;
-__nand_calculate_ecc(error_data, size, calc_ecc,
-IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
-ret = __nand_correct_data(error_data, error_ecc, calc_ecc, size,
-IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
+ecc_sw_hamming_calculate(error_data, size, calc_ecc, sm_order);
+ret = ecc_sw_hamming_correct(error_data, error_ecc, calc_ecc, size,
+sm_order);
return (ret == -EBADMSG) ? 0 : -EINVAL;
}
@@ -248,6 +248,7 @@ static void dump_data_ecc(void *error_data, void *error_ecc, void *correct_data,
static int nand_ecc_test_run(const size_t size)
{
+bool sm_order = IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC);
int i;
int err = 0;
void *error_data;
@@ -266,9 +267,7 @@ static int nand_ecc_test_run(const size_t size)
}
prandom_bytes(correct_data, size);
-__nand_calculate_ecc(correct_data, size, correct_ecc,
-IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
+ecc_sw_hamming_calculate(correct_data, size, correct_ecc, sm_order);
for (i = 0; i < ARRAY_SIZE(nand_ecc_test); i++) {
nand_ecc_test[i].prepare(error_data, error_ecc,
correct_data, correct_ecc, size);
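
To make the rename this test now relies on explicit, here is a compact sketch of the old-versus-new software Hamming helpers (the wrapper function is hypothetical; the step size is the usual 256 or 512 bytes with a 3-byte code):

/*
 * Sketch: nand_ecc.h's __nand_calculate_ecc()/__nand_correct_data() are
 * replaced by the nand-ecc-sw-hamming.h helpers; the Smart Media byte
 * order is now passed explicitly as 'sm_order'.
 */
static int hamming_check_step_sketch(void *data, unsigned char *read_ecc,
                                     size_t step_size)
{
        bool sm_order = IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC);
        unsigned char calc_ecc[3];

        /* Was: __nand_calculate_ecc(data, step_size, calc_ecc, sm_order); */
        ecc_sw_hamming_calculate(data, step_size, calc_ecc, sm_order);

        /* Was: __nand_correct_data(data, read_ecc, calc_ecc, step_size, sm_order); */
        return ecc_sw_hamming_correct(data, read_ecc, calc_ecc, step_size,
                                      sm_order);
}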

@@ -50,6 +50,7 @@
* struct mtd_dev_param - MTD device parameter description data structure.
* @name: MTD character device node path, MTD device name, or MTD device number
* string
* @ubi_num: UBI number
* @vid_hdr_offs: VID header offset
* @max_beb_per1024: maximum expected number of bad PEBs per 1024 PEBs
*/

@@ -1290,7 +1290,7 @@ static int is_error_sane(int err)
* @ubi: UBI device description object
* @from: physical eraseblock number from where to copy
* @to: physical eraseblock number where to copy
-* @vid_hdr: VID header of the @from physical eraseblock
+* @vidb: data structure from where the VID header is derived
*
* This function copies logical eraseblock from physical eraseblock @from to
* physical eraseblock @to. The @vid_hdr buffer may be changed by this
@@ -1463,6 +1463,7 @@ out_unlock_leb:
/**
* print_rsvd_warning - warn about not having enough reserved PEBs.
* @ubi: UBI device description object
* @ai: UBI attach info object
*
* This is a helper function for 'ubi_eba_init()' which is called when UBI
* cannot reserve enough PEBs for bad block handling. This function makes a

@@ -439,7 +439,7 @@ static int gluebi_resized(struct ubi_volume_info *vi)
* gluebi_notify - UBI notification handler.
* @nb: registered notifier block
* @l: notification type
-* @ptr: pointer to the &struct ubi_notification object
+* @ns_ptr: pointer to the &struct ubi_notification object
*/
static int gluebi_notify(struct notifier_block *nb, unsigned long l,
void *ns_ptr)

@@ -450,7 +450,7 @@ EXPORT_SYMBOL_GPL(ubi_leb_read);
* ubi_leb_read_sg - read data into a scatter gather list.
* @desc: volume descriptor
* @lnum: logical eraseblock number to read from
-* @buf: buffer where to store the read data
+* @sgl: UBI scatter gather list to store the read data
* @offset: offset within the logical eraseblock to read from
* @len: how many bytes to read
* @check: whether UBI has to check the read data's CRC or not.

(Some files were not shown because too many files changed in this diff.)