Fairly unexciting MTD merge for 3.9:

 * misc clean-ups in the MTD command-line partitioning parser (cmdlinepart)
 * add flash locking support for STMicro serial flash chips, as well as
   for CFI command set 2 chips.
 * new driver for the ELM error correction HW module found in various TI
   chips; enable the OMAP NAND driver to use the ELM HW error correction
 * added a number of new serial flash IDs
 * various fixes and improvements in the gpmi NAND driver
 * bcm47xx NAND driver improvements
 * make the mtdpart module actually removable
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.13 (GNU/Linux)
 
 iEYEABECAAYFAlExEs8ACgkQdwG7hYl686Oa+gCgiBNt+I0MiixDeN+MGuE1uw9s
 i2wAniD9sR2HDy6y5SkbdXK/aPr8ez/p
 =xV1l
 -----END PGP SIGNATURE-----

Merge tag 'for-linus-20130301' of git://git.infradead.org/linux-mtd

Pull MTD update from David Woodhouse:
 "Fairly unexciting MTD merge for 3.9:

   - misc clean-ups in the MTD command-line partitioning parser
     (cmdlinepart)
   - add flash locking support for STMicro serial flash chips, as well
     as for CFI command set 2 chips.
   - new driver for the ELM error correction HW module found in various
     TI chips; enable the OMAP NAND driver to use the ELM HW error
     correction
   - added a number of new serial flash IDs
   - various fixes and improvements in the gpmi NAND driver
   - bcm47xx NAND driver improvements
   - make the mtdpart module actually removable"

* tag 'for-linus-20130301' of git://git.infradead.org/linux-mtd: (45 commits)
  mtd: map: BUG() in non handled cases
  mtd: bcm47xxnflash: use pr_fmt for module prefix in messages
  mtd: davinci_nand: Use managed resources
  mtd: mtd_torturetest can cause stack overflows
  mtd: physmap_of: Convert device allocation to managed devm_kzalloc()
  mtd: at91: atmel_nand: for PMECC, add code to check the ONFI parameter ECC requirement.
  mtd: atmel_nand: make pmecc-cap, pmecc-sector-size in dts is optional.
  mtd: atmel_nand: avoid to report an error when lookup table offset is 0.
  mtd: bcm47xxsflash: adjust names of bus-specific functions
  mtd: bcm47xxpart: improve probing of nvram partition
  mtd: bcm47xxpart: add support for other erase sizes
  mtd: bcm47xxnflash: register this as normal driver
  mtd: bcm47xxnflash: fix message
  mtd: bcm47xxsflash: register this as normal driver
  mtd: bcm47xxsflash: write number of written bytes
  mtd: gpmi: add sanity check for the ECC
  mtd: gpmi: set the Golois Field bit for mx6q's BCH
  mtd: devices: elm: Removes <xx> literals in elm DT node
  mtd: gpmi: fix a dereferencing freed memory error
  mtd: fix the wrong timeo for panic_nand_wait()
  ...
Linus Torvalds, 2013-03-02 16:33:54 -08:00
Parents: 37cae6ad4c 24dea0c9fe
Commit: 48476df998
38 changed files with 1882 additions and 332 deletions


@ -0,0 +1,16 @@
Error location module
Required properties:
- compatible: Must be "ti,am3352-elm"
- reg: physical base address and size of the registers map.
- interrupts: Interrupt number for the elm.
Optional properties:
- ti,hwmods: Name of the hwmod associated to the elm
Example:
elm: elm@0 {
compatible = "ti,am3352-elm";
reg = <0x48080000 0x2000>;
interrupts = <4>;
};


@@ -26,6 +26,9 @@ file systems on embedded devices.
 - linux,mtd-name: allow to specify the mtd name for retro capability with
   physmap-flash drivers as boot loader pass the mtd partition via the old
   device name physmap-flash.
+- use-advanced-sector-protection: boolean to enable support for the
+  advanced sector protection (Spansion: PPB - Persistent Protection
+  Bits) locking.
 
 For JEDEC compatible devices, the following additional properties
 are defined:
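
For illustration only (this example is not part of the commit; the node name, reg and bank-width values are invented), a physmap CFI flash node opting into PPB locking might look like:

	flash@0 {
		compatible = "cfi-flash";
		reg = <0x00000000 0x02000000>;
		bank-width = <2>;
		use-advanced-sector-protection;
	};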


@@ -74,8 +74,8 @@ config MTD_REDBOOT_PARTS_READONLY
 endif # MTD_REDBOOT_PARTS
 
 config MTD_CMDLINE_PARTS
-	bool "Command line partition table parsing"
-	depends on MTD = "y"
+	tristate "Command line partition table parsing"
+	depends on MTD
 	---help---
 	  Allow generic configuration of the MTD partition tables via the kernel
 	  command line. Multiple flash resources are supported for hardware where


@@ -142,7 +142,13 @@ static int __init ar7_parser_init(void)
 	return register_mtd_parser(&ar7_parser);
 }
 
+static void __exit ar7_parser_exit(void)
+{
+	deregister_mtd_parser(&ar7_parser);
+}
+
 module_init(ar7_parser_init);
+module_exit(ar7_parser_exit);
 
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR( "Felix Fietkau <nbd@openwrt.org>, "


@ -19,12 +19,6 @@
/* 10 parts were found on sflash on Netgear WNDR4500 */ /* 10 parts were found on sflash on Netgear WNDR4500 */
#define BCM47XXPART_MAX_PARTS 12 #define BCM47XXPART_MAX_PARTS 12
/*
* Amount of bytes we read when analyzing each block of flash memory.
* Set it big enough to allow detecting partition and reading important data.
*/
#define BCM47XXPART_BYTES_TO_READ 0x404
/* Magics */ /* Magics */
#define BOARD_DATA_MAGIC 0x5246504D /* MPFR */ #define BOARD_DATA_MAGIC 0x5246504D /* MPFR */
#define POT_MAGIC1 0x54544f50 /* POTT */ #define POT_MAGIC1 0x54544f50 /* POTT */
@ -59,13 +53,21 @@ static int bcm47xxpart_parse(struct mtd_info *master,
uint32_t *buf; uint32_t *buf;
size_t bytes_read; size_t bytes_read;
uint32_t offset; uint32_t offset;
uint32_t blocksize = 0x10000; uint32_t blocksize = master->erasesize;
struct trx_header *trx; struct trx_header *trx;
int trx_part = -1;
int last_trx_part = -1;
int max_bytes_to_read = 0x8004;
if (blocksize <= 0x10000)
blocksize = 0x10000;
if (blocksize == 0x20000)
max_bytes_to_read = 0x18004;
/* Alloc */ /* Alloc */
parts = kzalloc(sizeof(struct mtd_partition) * BCM47XXPART_MAX_PARTS, parts = kzalloc(sizeof(struct mtd_partition) * BCM47XXPART_MAX_PARTS,
GFP_KERNEL); GFP_KERNEL);
buf = kzalloc(BCM47XXPART_BYTES_TO_READ, GFP_KERNEL); buf = kzalloc(max_bytes_to_read, GFP_KERNEL);
/* Parse block by block looking for magics */ /* Parse block by block looking for magics */
for (offset = 0; offset <= master->size - blocksize; for (offset = 0; offset <= master->size - blocksize;
@ -80,7 +82,7 @@ static int bcm47xxpart_parse(struct mtd_info *master,
} }
/* Read beginning of the block */ /* Read beginning of the block */
if (mtd_read(master, offset, BCM47XXPART_BYTES_TO_READ, if (mtd_read(master, offset, max_bytes_to_read,
&bytes_read, (uint8_t *)buf) < 0) { &bytes_read, (uint8_t *)buf) < 0) {
pr_err("mtd_read error while parsing (offset: 0x%X)!\n", pr_err("mtd_read error while parsing (offset: 0x%X)!\n",
offset); offset);
@ -95,9 +97,16 @@ static int bcm47xxpart_parse(struct mtd_info *master,
} }
/* Standard NVRAM */ /* Standard NVRAM */
if (buf[0x000 / 4] == NVRAM_HEADER) { if (buf[0x000 / 4] == NVRAM_HEADER ||
buf[0x1000 / 4] == NVRAM_HEADER ||
buf[0x8000 / 4] == NVRAM_HEADER ||
(blocksize == 0x20000 && (
buf[0x10000 / 4] == NVRAM_HEADER ||
buf[0x11000 / 4] == NVRAM_HEADER ||
buf[0x18000 / 4] == NVRAM_HEADER))) {
bcm47xxpart_add_part(&parts[curr_part++], "nvram", bcm47xxpart_add_part(&parts[curr_part++], "nvram",
offset, 0); offset, 0);
offset = rounddown(offset, blocksize);
continue; continue;
} }
@ -131,6 +140,10 @@ static int bcm47xxpart_parse(struct mtd_info *master,
if (buf[0x000 / 4] == TRX_MAGIC) { if (buf[0x000 / 4] == TRX_MAGIC) {
trx = (struct trx_header *)buf; trx = (struct trx_header *)buf;
trx_part = curr_part;
bcm47xxpart_add_part(&parts[curr_part++], "firmware",
offset, 0);
i = 0; i = 0;
/* We have LZMA loader if offset[2] points to sth */ /* We have LZMA loader if offset[2] points to sth */
if (trx->offset[2]) { if (trx->offset[2]) {
@ -154,6 +167,8 @@ static int bcm47xxpart_parse(struct mtd_info *master,
offset + trx->offset[i], 0); offset + trx->offset[i], 0);
i++; i++;
last_trx_part = curr_part - 1;
/* /*
* We have whole TRX scanned, skip to the next part. Use * We have whole TRX scanned, skip to the next part. Use
* roundown (not roundup), as the loop will increase * roundown (not roundup), as the loop will increase
@ -169,11 +184,15 @@ static int bcm47xxpart_parse(struct mtd_info *master,
* Assume that partitions end at the beginning of the one they are * Assume that partitions end at the beginning of the one they are
* followed by. * followed by.
*/ */
for (i = 0; i < curr_part - 1; i++) for (i = 0; i < curr_part; i++) {
parts[i].size = parts[i + 1].offset - parts[i].offset; u64 next_part_offset = (i < curr_part - 1) ?
if (curr_part > 0) parts[i + 1].offset : master->size;
parts[curr_part - 1].size =
master->size - parts[curr_part - 1].offset; parts[i].size = next_part_offset - parts[i].offset;
if (i == last_trx_part && trx_part >= 0)
parts[trx_part].size = next_part_offset -
parts[trx_part].offset;
}
*pparts = parts; *pparts = parts;
return curr_part; return curr_part;


@@ -33,6 +33,8 @@
 #include <linux/delay.h>
 #include <linux/interrupt.h>
 #include <linux/reboot.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
 #include <linux/mtd/map.h>
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/cfi.h>
@@ -74,6 +76,10 @@ static void put_chip(struct map_info *map, struct flchip *chip, unsigned long ad
 static int cfi_atmel_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len);
 static int cfi_atmel_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len);
 
+static int cfi_ppb_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len);
+static int cfi_ppb_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len);
+static int cfi_ppb_is_locked(struct mtd_info *mtd, loff_t ofs, uint64_t len);
+
 static struct mtd_chip_driver cfi_amdstd_chipdrv = {
 	.probe		= NULL, /* Not usable directly */
 	.destroy	= cfi_amdstd_destroy,
@@ -496,6 +502,7 @@ static void cfi_fixup_m29ew_delay_after_resume(struct cfi_private *cfi)
 struct mtd_info *cfi_cmdset_0002(struct map_info *map, int primary)
 {
 	struct cfi_private *cfi = map->fldrv_priv;
+	struct device_node __maybe_unused *np = map->device_node;
 	struct mtd_info *mtd;
 	int i;
@@ -570,6 +577,17 @@ struct mtd_info *cfi_cmdset_0002(struct map_info *map, int primary)
 		cfi_tell_features(extp);
 #endif
 
+#ifdef CONFIG_OF
+		if (np && of_property_read_bool(
+			    np, "use-advanced-sector-protection")
+		    && extp->BlkProtUnprot == 8) {
+			printk(KERN_INFO " Advanced Sector Protection (PPB Locking) supported\n");
+			mtd->_lock = cfi_ppb_lock;
+			mtd->_unlock = cfi_ppb_unlock;
+			mtd->_is_locked = cfi_ppb_is_locked;
+		}
+#endif
+
 		bootloc = extp->TopBottom;
 		if ((bootloc < 2) || (bootloc > 5)) {
 			printk(KERN_WARNING "%s: CFI contains unrecognised boot "
@@ -2172,6 +2190,205 @@ static int cfi_atmel_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
	return cfi_varsize_frob(mtd, do_atmel_unlock, ofs, len, NULL);
}
/*
* Advanced Sector Protection - PPB (Persistent Protection Bit) locking
*/
struct ppb_lock {
struct flchip *chip;
loff_t offset;
int locked;
};
#define MAX_SECTORS 512
#define DO_XXLOCK_ONEBLOCK_LOCK ((void *)1)
#define DO_XXLOCK_ONEBLOCK_UNLOCK ((void *)2)
#define DO_XXLOCK_ONEBLOCK_GETLOCK ((void *)3)
static int __maybe_unused do_ppb_xxlock(struct map_info *map,
struct flchip *chip,
unsigned long adr, int len, void *thunk)
{
struct cfi_private *cfi = map->fldrv_priv;
unsigned long timeo;
int ret;
mutex_lock(&chip->mutex);
ret = get_chip(map, chip, adr + chip->start, FL_LOCKING);
if (ret) {
mutex_unlock(&chip->mutex);
return ret;
}
pr_debug("MTD %s(): XXLOCK 0x%08lx len %d\n", __func__, adr, len);
cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, chip->start, map, cfi,
cfi->device_type, NULL);
cfi_send_gen_cmd(0x55, cfi->addr_unlock2, chip->start, map, cfi,
cfi->device_type, NULL);
/* PPB entry command */
cfi_send_gen_cmd(0xC0, cfi->addr_unlock1, chip->start, map, cfi,
cfi->device_type, NULL);
if (thunk == DO_XXLOCK_ONEBLOCK_LOCK) {
chip->state = FL_LOCKING;
map_write(map, CMD(0xA0), chip->start + adr);
map_write(map, CMD(0x00), chip->start + adr);
} else if (thunk == DO_XXLOCK_ONEBLOCK_UNLOCK) {
/*
* Unlocking of one specific sector is not supported, so we
* have to unlock all sectors of this device instead
*/
chip->state = FL_UNLOCKING;
map_write(map, CMD(0x80), chip->start);
map_write(map, CMD(0x30), chip->start);
} else if (thunk == DO_XXLOCK_ONEBLOCK_GETLOCK) {
chip->state = FL_JEDEC_QUERY;
/* Return locked status: 0->locked, 1->unlocked */
ret = !cfi_read_query(map, adr);
} else
BUG();
/*
* Wait for some time as unlocking of all sectors takes quite long
*/
timeo = jiffies + msecs_to_jiffies(2000); /* 2s max (un)locking */
for (;;) {
if (chip_ready(map, adr))
break;
if (time_after(jiffies, timeo)) {
printk(KERN_ERR "Waiting for chip to be ready timed out.\n");
ret = -EIO;
break;
}
UDELAY(map, chip, adr, 1);
}
/* Exit BC commands */
map_write(map, CMD(0x90), chip->start);
map_write(map, CMD(0x00), chip->start);
chip->state = FL_READY;
put_chip(map, chip, adr + chip->start);
mutex_unlock(&chip->mutex);
return ret;
}
static int __maybe_unused cfi_ppb_lock(struct mtd_info *mtd, loff_t ofs,
uint64_t len)
{
return cfi_varsize_frob(mtd, do_ppb_xxlock, ofs, len,
DO_XXLOCK_ONEBLOCK_LOCK);
}
static int __maybe_unused cfi_ppb_unlock(struct mtd_info *mtd, loff_t ofs,
uint64_t len)
{
struct mtd_erase_region_info *regions = mtd->eraseregions;
struct map_info *map = mtd->priv;
struct cfi_private *cfi = map->fldrv_priv;
struct ppb_lock *sect;
unsigned long adr;
loff_t offset;
uint64_t length;
int chipnum;
int i;
int sectors;
int ret;
/*
* PPB unlocking always unlocks all sectors of the flash chip.
* We need to re-lock all previously locked sectors. So lets
* first check the locking status of all sectors and save
* it for future use.
*/
sect = kzalloc(MAX_SECTORS * sizeof(struct ppb_lock), GFP_KERNEL);
if (!sect)
return -ENOMEM;
/*
* This code to walk all sectors is a slightly modified version
* of the cfi_varsize_frob() code.
*/
i = 0;
chipnum = 0;
adr = 0;
sectors = 0;
offset = 0;
length = mtd->size;
while (length) {
int size = regions[i].erasesize;
/*
* Only test sectors that shall not be unlocked. The other
* sectors shall be unlocked, so lets keep their locking
* status at "unlocked" (locked=0) for the final re-locking.
*/
if ((adr < ofs) || (adr >= (ofs + len))) {
sect[sectors].chip = &cfi->chips[chipnum];
sect[sectors].offset = offset;
sect[sectors].locked = do_ppb_xxlock(
map, &cfi->chips[chipnum], adr, 0,
DO_XXLOCK_ONEBLOCK_GETLOCK);
}
adr += size;
offset += size;
length -= size;
if (offset == regions[i].offset + size * regions[i].numblocks)
i++;
if (adr >> cfi->chipshift) {
adr = 0;
chipnum++;
if (chipnum >= cfi->numchips)
break;
}
sectors++;
if (sectors >= MAX_SECTORS) {
printk(KERN_ERR "Only %d sectors for PPB locking supported!\n",
MAX_SECTORS);
kfree(sect);
return -EINVAL;
}
}
/* Now unlock the whole chip */
ret = cfi_varsize_frob(mtd, do_ppb_xxlock, ofs, len,
DO_XXLOCK_ONEBLOCK_UNLOCK);
if (ret) {
kfree(sect);
return ret;
}
/*
* PPB unlocking always unlocks all sectors of the flash chip.
* We need to re-lock all previously locked sectors.
*/
for (i = 0; i < sectors; i++) {
if (sect[i].locked)
do_ppb_xxlock(map, sect[i].chip, sect[i].offset, 0,
DO_XXLOCK_ONEBLOCK_LOCK);
}
kfree(sect);
return ret;
}
static int __maybe_unused cfi_ppb_is_locked(struct mtd_info *mtd, loff_t ofs,
uint64_t len)
{
return cfi_varsize_frob(mtd, do_ppb_xxlock, ofs, len,
DO_XXLOCK_ONEBLOCK_GETLOCK) ? 1 : 0;
}
static void cfi_amdstd_sync (struct mtd_info *mtd)
{


@@ -22,11 +22,22 @@
  *
  * mtdparts=<mtddef>[;<mtddef]
  * <mtddef>  := <mtd-id>:<partdef>[,<partdef>]
- *              where <mtd-id> is the name from the "cat /proc/mtd" command
- * <partdef> := <size>[@offset][<name>][ro][lk]
+ * <partdef> := <size>[@<offset>][<name>][ro][lk]
  * <mtd-id>  := unique name used in mapping driver/device (mtd->name)
  * <size>    := standard linux memsize OR "-" to denote all remaining space
+ *              size is automatically truncated at end of device
+ *              if specified or truncated size is 0 the part is skipped
+ * <offset>  := standard linux memsize
+ *              if omitted the part will immediately follow the previous part
+ *              or 0 if the first part
  * <name>    := '(' NAME ')'
+ *              NAME will appear in /proc/mtd
+ *
+ * <size> and <offset> can be specified such that the parts are out of order
+ * in physical memory and may even overlap.
+ *
+ * The parts are assigned MTD numbers in the order they are specified in the
+ * command line regardless of their order in physical memory.
  *
  * Examples:
  *
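
As an illustration of the grammar above (device and partition names are invented, not taken from this commit), a kernel command line entry could look like:

	mtdparts=physmap-flash.0:512k(bootloader)ro,128k(env),4m(kernel),-(rootfs)

The first three partitions get fixed sizes, "ro" marks the bootloader partition read-only, and "-" assigns all remaining space to rootfs.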
@ -70,6 +81,7 @@ struct cmdline_mtd_partition {
static struct cmdline_mtd_partition *partitions; static struct cmdline_mtd_partition *partitions;
/* the command line passed to mtdpart_setup() */ /* the command line passed to mtdpart_setup() */
static char *mtdparts;
static char *cmdline; static char *cmdline;
static int cmdline_parsed; static int cmdline_parsed;
@ -330,16 +342,6 @@ static int parse_cmdline_partitions(struct mtd_info *master,
if (part->parts[i].size == SIZE_REMAINING) if (part->parts[i].size == SIZE_REMAINING)
part->parts[i].size = master->size - offset; part->parts[i].size = master->size - offset;
if (part->parts[i].size == 0) {
printk(KERN_WARNING ERRP
"%s: skipping zero sized partition\n",
part->mtd_id);
part->num_parts--;
memmove(&part->parts[i], &part->parts[i + 1],
sizeof(*part->parts) * (part->num_parts - i));
continue;
}
if (offset + part->parts[i].size > master->size) { if (offset + part->parts[i].size > master->size) {
printk(KERN_WARNING ERRP printk(KERN_WARNING ERRP
"%s: partitioning exceeds flash size, truncating\n", "%s: partitioning exceeds flash size, truncating\n",
@ -347,6 +349,16 @@ static int parse_cmdline_partitions(struct mtd_info *master,
part->parts[i].size = master->size - offset; part->parts[i].size = master->size - offset;
} }
offset += part->parts[i].size; offset += part->parts[i].size;
if (part->parts[i].size == 0) {
printk(KERN_WARNING ERRP
"%s: skipping zero sized partition\n",
part->mtd_id);
part->num_parts--;
memmove(&part->parts[i], &part->parts[i + 1],
sizeof(*part->parts) * (part->num_parts - i));
i--;
}
} }
*pparts = kmemdup(part->parts, sizeof(*part->parts) * part->num_parts, *pparts = kmemdup(part->parts, sizeof(*part->parts) * part->num_parts,
@@ -365,7 +377,7 @@ static int parse_cmdline_partitions(struct mtd_info *master,
  *
  * This function needs to be visible for bootloaders.
  */
-static int mtdpart_setup(char *s)
+static int __init mtdpart_setup(char *s)
 {
 	cmdline = s;
 	return 1;
@@ -381,10 +393,21 @@ static struct mtd_part_parser cmdline_parser = {
 
 static int __init cmdline_parser_init(void)
 {
+	if (mtdparts)
+		mtdpart_setup(mtdparts);
 	return register_mtd_parser(&cmdline_parser);
 }
 
+static void __exit cmdline_parser_exit(void)
+{
+	deregister_mtd_parser(&cmdline_parser);
+}
+
 module_init(cmdline_parser_init);
+module_exit(cmdline_parser_exit);
+
+MODULE_PARM_DESC(mtdparts, "Partitioning specification");
+module_param(mtdparts, charp, 0);
 
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Marius Groeger <mag@sysgo.de>");


@@ -17,8 +17,10 @@ obj-$(CONFIG_MTD_LART)		+= lart.o
 obj-$(CONFIG_MTD_BLOCK2MTD)	+= block2mtd.o
 obj-$(CONFIG_MTD_DATAFLASH)	+= mtd_dataflash.o
 obj-$(CONFIG_MTD_M25P80)	+= m25p80.o
+obj-$(CONFIG_MTD_NAND_OMAP_BCH)	+= elm.o
 obj-$(CONFIG_MTD_SPEAR_SMI)	+= spear_smi.o
 obj-$(CONFIG_MTD_SST25L)	+= sst25l.o
 obj-$(CONFIG_MTD_BCM47XXSFLASH)	+= bcm47xxsflash.o
 
 CFLAGS_docg3.o			+= -I$(src)


@@ -5,6 +5,8 @@
 #include <linux/platform_device.h>
 #include <linux/bcma/bcma.h>
 
+#include "bcm47xxsflash.h"
+
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Serial flash driver for BCMA bus");
@ -13,26 +15,28 @@ static const char *probes[] = { "bcm47xxpart", NULL };
static int bcm47xxsflash_read(struct mtd_info *mtd, loff_t from, size_t len, static int bcm47xxsflash_read(struct mtd_info *mtd, loff_t from, size_t len,
size_t *retlen, u_char *buf) size_t *retlen, u_char *buf)
{ {
struct bcma_sflash *sflash = mtd->priv; struct bcm47xxsflash *b47s = mtd->priv;
/* Check address range */ /* Check address range */
if ((from + len) > mtd->size) if ((from + len) > mtd->size)
return -EINVAL; return -EINVAL;
memcpy_fromio(buf, (void __iomem *)KSEG0ADDR(sflash->window + from), memcpy_fromio(buf, (void __iomem *)KSEG0ADDR(b47s->window + from),
len); len);
*retlen = len;
return len; return len;
} }
static void bcm47xxsflash_fill_mtd(struct bcma_sflash *sflash, static void bcm47xxsflash_fill_mtd(struct bcm47xxsflash *b47s)
struct mtd_info *mtd)
{ {
mtd->priv = sflash; struct mtd_info *mtd = &b47s->mtd;
mtd->priv = b47s;
mtd->name = "bcm47xxsflash"; mtd->name = "bcm47xxsflash";
mtd->owner = THIS_MODULE; mtd->owner = THIS_MODULE;
mtd->type = MTD_ROM; mtd->type = MTD_ROM;
mtd->size = sflash->size; mtd->size = b47s->size;
mtd->_read = bcm47xxsflash_read; mtd->_read = bcm47xxsflash_read;
/* TODO: implement writing support and verify/change following code */ /* TODO: implement writing support and verify/change following code */
@ -40,19 +44,30 @@ static void bcm47xxsflash_fill_mtd(struct bcma_sflash *sflash,
mtd->writebufsize = mtd->writesize = 1; mtd->writebufsize = mtd->writesize = 1;
} }
static int bcm47xxsflash_probe(struct platform_device *pdev) /**************************************************
* BCMA
**************************************************/
static int bcm47xxsflash_bcma_probe(struct platform_device *pdev)
{ {
struct bcma_sflash *sflash = dev_get_platdata(&pdev->dev); struct bcma_sflash *sflash = dev_get_platdata(&pdev->dev);
struct bcm47xxsflash *b47s;
int err; int err;
sflash->mtd = kzalloc(sizeof(struct mtd_info), GFP_KERNEL); b47s = kzalloc(sizeof(*b47s), GFP_KERNEL);
if (!sflash->mtd) { if (!b47s) {
err = -ENOMEM; err = -ENOMEM;
goto out; goto out;
} }
bcm47xxsflash_fill_mtd(sflash, sflash->mtd); sflash->priv = b47s;
err = mtd_device_parse_register(sflash->mtd, probes, NULL, NULL, 0); b47s->window = sflash->window;
b47s->blocksize = sflash->blocksize;
b47s->numblocks = sflash->numblocks;
b47s->size = sflash->size;
bcm47xxsflash_fill_mtd(b47s);
err = mtd_device_parse_register(&b47s->mtd, probes, NULL, NULL, 0);
if (err) { if (err) {
pr_err("Failed to register MTD device: %d\n", err); pr_err("Failed to register MTD device: %d\n", err);
goto err_dev_reg; goto err_dev_reg;
@ -61,34 +76,40 @@ static int bcm47xxsflash_probe(struct platform_device *pdev)
return 0; return 0;
err_dev_reg: err_dev_reg:
kfree(sflash->mtd); kfree(&b47s->mtd);
out: out:
return err; return err;
} }
static int bcm47xxsflash_remove(struct platform_device *pdev) static int bcm47xxsflash_bcma_remove(struct platform_device *pdev)
{ {
struct bcma_sflash *sflash = dev_get_platdata(&pdev->dev); struct bcma_sflash *sflash = dev_get_platdata(&pdev->dev);
struct bcm47xxsflash *b47s = sflash->priv;
mtd_device_unregister(sflash->mtd); mtd_device_unregister(&b47s->mtd);
kfree(sflash->mtd); kfree(b47s);
return 0; return 0;
} }
static struct platform_driver bcma_sflash_driver = { static struct platform_driver bcma_sflash_driver = {
.remove = bcm47xxsflash_remove, .probe = bcm47xxsflash_bcma_probe,
.remove = bcm47xxsflash_bcma_remove,
.driver = { .driver = {
.name = "bcma_sflash", .name = "bcma_sflash",
.owner = THIS_MODULE, .owner = THIS_MODULE,
}, },
}; };
/**************************************************
* Init
**************************************************/
static int __init bcm47xxsflash_init(void) static int __init bcm47xxsflash_init(void)
{ {
int err; int err;
err = platform_driver_probe(&bcma_sflash_driver, bcm47xxsflash_probe); err = platform_driver_register(&bcma_sflash_driver);
if (err) if (err)
pr_err("Failed to register BCMA serial flash driver: %d\n", pr_err("Failed to register BCMA serial flash driver: %d\n",
err); err);


@ -0,0 +1,15 @@
#ifndef __BCM47XXSFLASH_H
#define __BCM47XXSFLASH_H
#include <linux/mtd/mtd.h>
struct bcm47xxsflash {
u32 window;
u32 blocksize;
u16 numblocks;
u32 size;
struct mtd_info mtd;
};
#endif /* BCM47XXSFLASH */

drivers/mtd/devices/elm.c (new file, 404 lines)

@ -0,0 +1,404 @@
/*
* Error Location Module
*
* Copyright (C) 2012 Texas Instruments Incorporated - http://www.ti.com/
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/platform_device.h>
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/pm_runtime.h>
#include <linux/platform_data/elm.h>
#define ELM_IRQSTATUS 0x018
#define ELM_IRQENABLE 0x01c
#define ELM_LOCATION_CONFIG 0x020
#define ELM_PAGE_CTRL 0x080
#define ELM_SYNDROME_FRAGMENT_0 0x400
#define ELM_SYNDROME_FRAGMENT_6 0x418
#define ELM_LOCATION_STATUS 0x800
#define ELM_ERROR_LOCATION_0 0x880
/* ELM Interrupt Status Register */
#define INTR_STATUS_PAGE_VALID BIT(8)
/* ELM Interrupt Enable Register */
#define INTR_EN_PAGE_MASK BIT(8)
/* ELM Location Configuration Register */
#define ECC_BCH_LEVEL_MASK 0x3
/* ELM syndrome */
#define ELM_SYNDROME_VALID BIT(16)
/* ELM_LOCATION_STATUS Register */
#define ECC_CORRECTABLE_MASK BIT(8)
#define ECC_NB_ERRORS_MASK 0x1f
/* ELM_ERROR_LOCATION_0-15 Registers */
#define ECC_ERROR_LOCATION_MASK 0x1fff
#define ELM_ECC_SIZE 0x7ff
#define SYNDROME_FRAGMENT_REG_SIZE 0x40
#define ERROR_LOCATION_SIZE 0x100
struct elm_info {
struct device *dev;
void __iomem *elm_base;
struct completion elm_completion;
struct list_head list;
enum bch_ecc bch_type;
};
static LIST_HEAD(elm_devices);
static void elm_write_reg(struct elm_info *info, int offset, u32 val)
{
writel(val, info->elm_base + offset);
}
static u32 elm_read_reg(struct elm_info *info, int offset)
{
return readl(info->elm_base + offset);
}
/**
* elm_config - Configure ELM module
* @dev: ELM device
* @bch_type: Type of BCH ecc
*/
void elm_config(struct device *dev, enum bch_ecc bch_type)
{
u32 reg_val;
struct elm_info *info = dev_get_drvdata(dev);
reg_val = (bch_type & ECC_BCH_LEVEL_MASK) | (ELM_ECC_SIZE << 16);
elm_write_reg(info, ELM_LOCATION_CONFIG, reg_val);
info->bch_type = bch_type;
}
EXPORT_SYMBOL(elm_config);
/**
* elm_configure_page_mode - Enable/Disable page mode
* @info: elm info
* @index: index number of syndrome fragment vector
* @enable: enable/disable flag for page mode
*
* Enable page mode for syndrome fragment index
*/
static void elm_configure_page_mode(struct elm_info *info, int index,
bool enable)
{
u32 reg_val;
reg_val = elm_read_reg(info, ELM_PAGE_CTRL);
if (enable)
reg_val |= BIT(index); /* enable page mode */
else
reg_val &= ~BIT(index); /* disable page mode */
elm_write_reg(info, ELM_PAGE_CTRL, reg_val);
}
/**
* elm_load_syndrome - Load ELM syndrome reg
* @info: elm info
* @err_vec: elm error vectors
* @ecc: buffer with calculated ecc
*
* Load syndrome fragment registers with calculated ecc in reverse order.
*/
static void elm_load_syndrome(struct elm_info *info,
struct elm_errorvec *err_vec, u8 *ecc)
{
int i, offset;
u32 val;
for (i = 0; i < ERROR_VECTOR_MAX; i++) {
/* Check error reported */
if (err_vec[i].error_reported) {
elm_configure_page_mode(info, i, true);
offset = ELM_SYNDROME_FRAGMENT_0 +
SYNDROME_FRAGMENT_REG_SIZE * i;
/* BCH8 */
if (info->bch_type) {
/* syndrome fragment 0 = ecc[9-12B] */
val = cpu_to_be32(*(u32 *) &ecc[9]);
elm_write_reg(info, offset, val);
/* syndrome fragment 1 = ecc[5-8B] */
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[5]);
elm_write_reg(info, offset, val);
/* syndrome fragment 2 = ecc[1-4B] */
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[1]);
elm_write_reg(info, offset, val);
/* syndrome fragment 3 = ecc[0B] */
offset += 4;
val = ecc[0];
elm_write_reg(info, offset, val);
} else {
/* syndrome fragment 0 = ecc[20-52b] bits */
val = (cpu_to_be32(*(u32 *) &ecc[3]) >> 4) |
((ecc[2] & 0xf) << 28);
elm_write_reg(info, offset, val);
/* syndrome fragment 1 = ecc[0-20b] bits */
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[0]) >> 12;
elm_write_reg(info, offset, val);
}
}
/* Update ecc pointer with ecc byte size */
ecc += info->bch_type ? BCH8_SIZE : BCH4_SIZE;
}
}
/**
* elm_start_processing - start elm syndrome processing
* @info: elm info
* @err_vec: elm error vectors
*
* Set syndrome valid bit for syndrome fragment registers for which
* elm syndrome fragment registers are loaded. This enables elm module
* to start processing syndrome vectors.
*/
static void elm_start_processing(struct elm_info *info,
struct elm_errorvec *err_vec)
{
int i, offset;
u32 reg_val;
/*
* Set syndrome vector valid, so that ELM module
* will process it for vectors error is reported
*/
for (i = 0; i < ERROR_VECTOR_MAX; i++) {
if (err_vec[i].error_reported) {
offset = ELM_SYNDROME_FRAGMENT_6 +
SYNDROME_FRAGMENT_REG_SIZE * i;
reg_val = elm_read_reg(info, offset);
reg_val |= ELM_SYNDROME_VALID;
elm_write_reg(info, offset, reg_val);
}
}
}
/**
* elm_error_correction - locate correctable error position
* @info: elm info
* @err_vec: elm error vectors
*
* On completion of processing by elm module, error location status
* register updated with correctable/uncorrectable error information.
* In case of correctable errors, number of errors located from
* elm location status register & read the positions from
* elm error location register.
*/
static void elm_error_correction(struct elm_info *info,
struct elm_errorvec *err_vec)
{
int i, j, errors = 0;
int offset;
u32 reg_val;
for (i = 0; i < ERROR_VECTOR_MAX; i++) {
/* Check error reported */
if (err_vec[i].error_reported) {
offset = ELM_LOCATION_STATUS + ERROR_LOCATION_SIZE * i;
reg_val = elm_read_reg(info, offset);
/* Check correctable error or not */
if (reg_val & ECC_CORRECTABLE_MASK) {
offset = ELM_ERROR_LOCATION_0 +
ERROR_LOCATION_SIZE * i;
/* Read count of correctable errors */
err_vec[i].error_count = reg_val &
ECC_NB_ERRORS_MASK;
/* Update the error locations in error vector */
for (j = 0; j < err_vec[i].error_count; j++) {
reg_val = elm_read_reg(info, offset);
err_vec[i].error_loc[j] = reg_val &
ECC_ERROR_LOCATION_MASK;
/* Update error location register */
offset += 4;
}
errors += err_vec[i].error_count;
} else {
err_vec[i].error_uncorrectable = true;
}
/* Clearing interrupts for processed error vectors */
elm_write_reg(info, ELM_IRQSTATUS, BIT(i));
/* Disable page mode */
elm_configure_page_mode(info, i, false);
}
}
}
/**
* elm_decode_bch_error_page - Locate error position
* @dev: device pointer
* @ecc_calc: calculated ECC bytes from GPMC
* @err_vec: elm error vectors
*
* Called with one or more error reported vectors & vectors with
* error reported is updated in err_vec[].error_reported
*/
void elm_decode_bch_error_page(struct device *dev, u8 *ecc_calc,
struct elm_errorvec *err_vec)
{
struct elm_info *info = dev_get_drvdata(dev);
u32 reg_val;
/* Enable page mode interrupt */
reg_val = elm_read_reg(info, ELM_IRQSTATUS);
elm_write_reg(info, ELM_IRQSTATUS, reg_val & INTR_STATUS_PAGE_VALID);
elm_write_reg(info, ELM_IRQENABLE, INTR_EN_PAGE_MASK);
/* Load valid ecc byte to syndrome fragment register */
elm_load_syndrome(info, err_vec, ecc_calc);
/* Enable syndrome processing for which syndrome fragment is updated */
elm_start_processing(info, err_vec);
/* Wait for ELM module to finish locating error correction */
wait_for_completion(&info->elm_completion);
/* Disable page mode interrupt */
reg_val = elm_read_reg(info, ELM_IRQENABLE);
elm_write_reg(info, ELM_IRQENABLE, reg_val & ~INTR_EN_PAGE_MASK);
elm_error_correction(info, err_vec);
}
EXPORT_SYMBOL(elm_decode_bch_error_page);
static irqreturn_t elm_isr(int this_irq, void *dev_id)
{
u32 reg_val;
struct elm_info *info = dev_id;
reg_val = elm_read_reg(info, ELM_IRQSTATUS);
/* All error vectors processed */
if (reg_val & INTR_STATUS_PAGE_VALID) {
elm_write_reg(info, ELM_IRQSTATUS,
reg_val & INTR_STATUS_PAGE_VALID);
complete(&info->elm_completion);
return IRQ_HANDLED;
}
return IRQ_NONE;
}
static int elm_probe(struct platform_device *pdev)
{
int ret = 0;
struct resource *res, *irq;
struct elm_info *info;
info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
if (!info) {
dev_err(&pdev->dev, "failed to allocate memory\n");
return -ENOMEM;
}
info->dev = &pdev->dev;
irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
if (!irq) {
dev_err(&pdev->dev, "no irq resource defined\n");
return -ENODEV;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
dev_err(&pdev->dev, "no memory resource defined\n");
return -ENODEV;
}
info->elm_base = devm_request_and_ioremap(&pdev->dev, res);
if (!info->elm_base)
return -EADDRNOTAVAIL;
ret = devm_request_irq(&pdev->dev, irq->start, elm_isr, 0,
pdev->name, info);
if (ret) {
dev_err(&pdev->dev, "failure requesting irq %i\n", irq->start);
return ret;
}
pm_runtime_enable(&pdev->dev);
if (pm_runtime_get_sync(&pdev->dev)) {
ret = -EINVAL;
pm_runtime_disable(&pdev->dev);
dev_err(&pdev->dev, "can't enable clock\n");
return ret;
}
init_completion(&info->elm_completion);
INIT_LIST_HEAD(&info->list);
list_add(&info->list, &elm_devices);
platform_set_drvdata(pdev, info);
return ret;
}
static int elm_remove(struct platform_device *pdev)
{
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
platform_set_drvdata(pdev, NULL);
return 0;
}
#ifdef CONFIG_OF
static const struct of_device_id elm_of_match[] = {
{ .compatible = "ti,am3352-elm" },
{},
};
MODULE_DEVICE_TABLE(of, elm_of_match);
#endif
static struct platform_driver elm_driver = {
.driver = {
.name = "elm",
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(elm_of_match),
},
.probe = elm_probe,
.remove = elm_remove,
};
module_platform_driver(elm_driver);
MODULE_DESCRIPTION("ELM driver for BCH error correction");
MODULE_AUTHOR("Texas Instruments");
MODULE_ALIAS("platform: elm");
MODULE_LICENSE("GPL v2");


@ -565,6 +565,96 @@ time_out:
return ret; return ret;
} }
static int m25p80_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
{
struct m25p *flash = mtd_to_m25p(mtd);
uint32_t offset = ofs;
uint8_t status_old, status_new;
int res = 0;
mutex_lock(&flash->lock);
/* Wait until finished previous command */
if (wait_till_ready(flash)) {
res = 1;
goto err;
}
status_old = read_sr(flash);
if (offset < flash->mtd.size-(flash->mtd.size/2))
status_new = status_old | SR_BP2 | SR_BP1 | SR_BP0;
else if (offset < flash->mtd.size-(flash->mtd.size/4))
status_new = (status_old & ~SR_BP0) | SR_BP2 | SR_BP1;
else if (offset < flash->mtd.size-(flash->mtd.size/8))
status_new = (status_old & ~SR_BP1) | SR_BP2 | SR_BP0;
else if (offset < flash->mtd.size-(flash->mtd.size/16))
status_new = (status_old & ~(SR_BP0|SR_BP1)) | SR_BP2;
else if (offset < flash->mtd.size-(flash->mtd.size/32))
status_new = (status_old & ~SR_BP2) | SR_BP1 | SR_BP0;
else if (offset < flash->mtd.size-(flash->mtd.size/64))
status_new = (status_old & ~(SR_BP2|SR_BP0)) | SR_BP1;
else
status_new = (status_old & ~(SR_BP2|SR_BP1)) | SR_BP0;
/* Only modify protection if it will not unlock other areas */
if ((status_new&(SR_BP2|SR_BP1|SR_BP0)) >
(status_old&(SR_BP2|SR_BP1|SR_BP0))) {
write_enable(flash);
if (write_sr(flash, status_new) < 0) {
res = 1;
goto err;
}
}
err: mutex_unlock(&flash->lock);
return res;
}
static int m25p80_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
{
struct m25p *flash = mtd_to_m25p(mtd);
uint32_t offset = ofs;
uint8_t status_old, status_new;
int res = 0;
mutex_lock(&flash->lock);
/* Wait until finished previous command */
if (wait_till_ready(flash)) {
res = 1;
goto err;
}
status_old = read_sr(flash);
if (offset+len > flash->mtd.size-(flash->mtd.size/64))
status_new = status_old & ~(SR_BP2|SR_BP1|SR_BP0);
else if (offset+len > flash->mtd.size-(flash->mtd.size/32))
status_new = (status_old & ~(SR_BP2|SR_BP1)) | SR_BP0;
else if (offset+len > flash->mtd.size-(flash->mtd.size/16))
status_new = (status_old & ~(SR_BP2|SR_BP0)) | SR_BP1;
else if (offset+len > flash->mtd.size-(flash->mtd.size/8))
status_new = (status_old & ~SR_BP2) | SR_BP1 | SR_BP0;
else if (offset+len > flash->mtd.size-(flash->mtd.size/4))
status_new = (status_old & ~(SR_BP0|SR_BP1)) | SR_BP2;
else if (offset+len > flash->mtd.size-(flash->mtd.size/2))
status_new = (status_old & ~SR_BP1) | SR_BP2 | SR_BP0;
else
status_new = (status_old & ~SR_BP0) | SR_BP2 | SR_BP1;
/* Only modify protection if it will not lock other areas */
if ((status_new&(SR_BP2|SR_BP1|SR_BP0)) <
(status_old&(SR_BP2|SR_BP1|SR_BP0))) {
write_enable(flash);
if (write_sr(flash, status_new) < 0) {
res = 1;
goto err;
}
}
err: mutex_unlock(&flash->lock);
return res;
}
/****************************************************************************/ /****************************************************************************/
/* /*
@ -642,6 +732,10 @@ static const struct spi_device_id m25p_ids[] = {
/* Everspin */ /* Everspin */
{ "mr25h256", CAT25_INFO( 32 * 1024, 1, 256, 2) }, { "mr25h256", CAT25_INFO( 32 * 1024, 1, 256, 2) },
/* GigaDevice */
{ "gd25q32", INFO(0xc84016, 0, 64 * 1024, 64, SECT_4K) },
{ "gd25q64", INFO(0xc84017, 0, 64 * 1024, 128, SECT_4K) },
/* Intel/Numonyx -- xxxs33b */ /* Intel/Numonyx -- xxxs33b */
{ "160s33b", INFO(0x898911, 0, 64 * 1024, 32, 0) }, { "160s33b", INFO(0x898911, 0, 64 * 1024, 32, 0) },
{ "320s33b", INFO(0x898912, 0, 64 * 1024, 64, 0) }, { "320s33b", INFO(0x898912, 0, 64 * 1024, 64, 0) },
@ -899,6 +993,12 @@ static int m25p_probe(struct spi_device *spi)
flash->mtd._erase = m25p80_erase; flash->mtd._erase = m25p80_erase;
flash->mtd._read = m25p80_read; flash->mtd._read = m25p80_read;
/* flash protection support for STmicro chips */
if (JEDEC_MFR(info->jedec_id) == CFI_MFR_ST) {
flash->mtd._lock = m25p80_lock;
flash->mtd._unlock = m25p80_unlock;
}
/* sst flash chips use AAI word program */ /* sst flash chips use AAI word program */
if (JEDEC_MFR(info->jedec_id) == CFI_MFR_SST) if (JEDEC_MFR(info->jedec_id) == CFI_MFR_SST)
flash->mtd._write = sst_write; flash->mtd._write = sst_write;


@@ -429,7 +429,7 @@ config MTD_GPIO_ADDR
 
 config MTD_UCLINUX
 	bool "Generic uClinux RAM/ROM filesystem support"
-	depends on MTD_RAM=y && (!MMU || COLDFIRE)
+	depends on (MTD_RAM=y || MTD_ROM=y) && (!MMU || COLDFIRE)
 	help
 	  Map driver to support image based filesystems for uClinux.


@ -68,9 +68,6 @@ static int of_flash_remove(struct platform_device *dev)
kfree(info->list[i].res); kfree(info->list[i].res);
} }
} }
kfree(info);
return 0; return 0;
} }
@ -199,8 +196,9 @@ static int of_flash_probe(struct platform_device *dev)
map_indirect = of_property_read_bool(dp, "no-unaligned-direct-access"); map_indirect = of_property_read_bool(dp, "no-unaligned-direct-access");
err = -ENOMEM; err = -ENOMEM;
info = kzalloc(sizeof(struct of_flash) + info = devm_kzalloc(&dev->dev,
sizeof(struct of_flash_list) * count, GFP_KERNEL); sizeof(struct of_flash) +
sizeof(struct of_flash_list) * count, GFP_KERNEL);
if (!info) if (!info)
goto err_flash_remove; goto err_flash_remove;
@ -241,6 +239,7 @@ static int of_flash_probe(struct platform_device *dev)
info->list[i].map.phys = res.start; info->list[i].map.phys = res.start;
info->list[i].map.size = res_size; info->list[i].map.size = res_size;
info->list[i].map.bankwidth = be32_to_cpup(width); info->list[i].map.bankwidth = be32_to_cpup(width);
info->list[i].map.device_node = dp;
err = -ENOMEM; err = -ENOMEM;
info->list[i].map.virt = ioremap(info->list[i].map.phys, info->list[i].map.virt = ioremap(info->list[i].map.phys,


@ -23,12 +23,26 @@
/****************************************************************************/ /****************************************************************************/
#ifdef CONFIG_MTD_ROM
#define MAP_NAME "rom"
#else
#define MAP_NAME "ram"
#endif
/*
* Blackfin uses uclinux_ram_map during startup, so it must not be static.
* Provide a dummy declaration to make sparse happy.
*/
extern struct map_info uclinux_ram_map;
struct map_info uclinux_ram_map = { struct map_info uclinux_ram_map = {
.name = "RAM", .name = MAP_NAME,
.phys = (unsigned long)__bss_stop,
.size = 0, .size = 0,
}; };
static unsigned long physaddr = -1;
module_param(physaddr, ulong, S_IRUGO);
static struct mtd_info *uclinux_ram_mtdinfo; static struct mtd_info *uclinux_ram_mtdinfo;
/****************************************************************************/ /****************************************************************************/
@ -60,11 +74,17 @@ static int __init uclinux_mtd_init(void)
struct map_info *mapp; struct map_info *mapp;
mapp = &uclinux_ram_map; mapp = &uclinux_ram_map;
if (physaddr == -1)
mapp->phys = (resource_size_t)__bss_stop;
else
mapp->phys = physaddr;
if (!mapp->size) if (!mapp->size)
mapp->size = PAGE_ALIGN(ntohl(*((unsigned long *)(mapp->phys + 8)))); mapp->size = PAGE_ALIGN(ntohl(*((unsigned long *)(mapp->phys + 8))));
mapp->bankwidth = 4; mapp->bankwidth = 4;
printk("uclinux[mtd]: RAM probe address=0x%x size=0x%x\n", printk("uclinux[mtd]: probe address=0x%x size=0x%x\n",
(int) mapp->phys, (int) mapp->size); (int) mapp->phys, (int) mapp->size);
/* /*
@ -82,7 +102,7 @@ static int __init uclinux_mtd_init(void)
simple_map_init(mapp); simple_map_init(mapp);
mtd = do_map_probe("map_ram", mapp); mtd = do_map_probe("map_" MAP_NAME, mapp);
if (!mtd) { if (!mtd) {
printk("uclinux[mtd]: failed to find a mapping?\n"); printk("uclinux[mtd]: failed to find a mapping?\n");
return(-ENXIO); return(-ENXIO);
@ -118,6 +138,6 @@ module_exit(uclinux_mtd_cleanup);
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_AUTHOR("Greg Ungerer <gerg@snapgear.com>"); MODULE_AUTHOR("Greg Ungerer <gerg@snapgear.com>");
MODULE_DESCRIPTION("Generic RAM based MTD for uClinux"); MODULE_DESCRIPTION("Generic MTD for uClinux");
/****************************************************************************/ /****************************************************************************/


@ -101,6 +101,8 @@ struct atmel_nand_host {
u8 pmecc_corr_cap; u8 pmecc_corr_cap;
u16 pmecc_sector_size; u16 pmecc_sector_size;
u32 pmecc_lookup_table_offset; u32 pmecc_lookup_table_offset;
u32 pmecc_lookup_table_offset_512;
u32 pmecc_lookup_table_offset_1024;
int pmecc_bytes_per_sector; int pmecc_bytes_per_sector;
int pmecc_sector_number; int pmecc_sector_number;
@ -908,6 +910,84 @@ static void atmel_pmecc_core_init(struct mtd_info *mtd)
pmecc_writel(host->ecc, CTRL, PMECC_CTRL_ENABLE); pmecc_writel(host->ecc, CTRL, PMECC_CTRL_ENABLE);
} }
/*
* Get ECC requirement in ONFI parameters, returns -1 if ONFI
* parameters is not supported.
* return 0 if success to get the ECC requirement.
*/
static int get_onfi_ecc_param(struct nand_chip *chip,
int *ecc_bits, int *sector_size)
{
*ecc_bits = *sector_size = 0;
if (chip->onfi_params.ecc_bits == 0xff)
/* TODO: the sector_size and ecc_bits need to be find in
* extended ecc parameter, currently we don't support it.
*/
return -1;
*ecc_bits = chip->onfi_params.ecc_bits;
/* The default sector size (ecc codeword size) is 512 */
*sector_size = 512;
return 0;
}
/*
* Get ecc requirement from ONFI parameters ecc requirement.
* If pmecc-cap, pmecc-sector-size in DTS are not specified, this function
* will set them according to ONFI ecc requirement. Otherwise, use the
* value in DTS file.
* return 0 if success. otherwise return error code.
*/
static int pmecc_choose_ecc(struct atmel_nand_host *host,
int *cap, int *sector_size)
{
/* Get ECC requirement from ONFI parameters */
*cap = *sector_size = 0;
if (host->nand_chip.onfi_version) {
if (!get_onfi_ecc_param(&host->nand_chip, cap, sector_size))
dev_info(host->dev, "ONFI params, minimum required ECC: %d bits in %d bytes\n",
*cap, *sector_size);
else
dev_info(host->dev, "NAND chip ECC reqirement is in Extended ONFI parameter, we don't support yet.\n");
} else {
dev_info(host->dev, "NAND chip is not ONFI compliant, assume ecc_bits is 2 in 512 bytes");
}
if (*cap == 0 && *sector_size == 0) {
*cap = 2;
*sector_size = 512;
}
/* If dts file doesn't specify then use the one in ONFI parameters */
if (host->pmecc_corr_cap == 0) {
/* use the most fitable ecc bits (the near bigger one ) */
if (*cap <= 2)
host->pmecc_corr_cap = 2;
else if (*cap <= 4)
host->pmecc_corr_cap = 4;
else if (*cap < 8)
host->pmecc_corr_cap = 8;
else if (*cap < 12)
host->pmecc_corr_cap = 12;
else if (*cap < 24)
host->pmecc_corr_cap = 24;
else
return -EINVAL;
}
if (host->pmecc_sector_size == 0) {
/* use the most fitable sector size (the near smaller one ) */
if (*sector_size >= 1024)
host->pmecc_sector_size = 1024;
else if (*sector_size >= 512)
host->pmecc_sector_size = 512;
else
return -EINVAL;
}
return 0;
}
static int __init atmel_pmecc_nand_init_params(struct platform_device *pdev, static int __init atmel_pmecc_nand_init_params(struct platform_device *pdev,
struct atmel_nand_host *host) struct atmel_nand_host *host)
{ {
@ -916,8 +996,22 @@ static int __init atmel_pmecc_nand_init_params(struct platform_device *pdev,
struct resource *regs, *regs_pmerr, *regs_rom; struct resource *regs, *regs_pmerr, *regs_rom;
int cap, sector_size, err_no; int cap, sector_size, err_no;
err_no = pmecc_choose_ecc(host, &cap, &sector_size);
if (err_no) {
dev_err(host->dev, "The NAND flash's ECC requirement are not support!");
return err_no;
}
if (cap != host->pmecc_corr_cap ||
sector_size != host->pmecc_sector_size)
dev_info(host->dev, "WARNING: Be Caution! Using different PMECC parameters from Nand ONFI ECC reqirement.\n");
cap = host->pmecc_corr_cap; cap = host->pmecc_corr_cap;
sector_size = host->pmecc_sector_size; sector_size = host->pmecc_sector_size;
host->pmecc_lookup_table_offset = (sector_size == 512) ?
host->pmecc_lookup_table_offset_512 :
host->pmecc_lookup_table_offset_1024;
dev_info(host->dev, "Initialize PMECC params, cap: %d, sector: %d\n", dev_info(host->dev, "Initialize PMECC params, cap: %d, sector: %d\n",
cap, sector_size); cap, sector_size);
@ -1215,7 +1309,7 @@ static void atmel_nand_hwctl(struct mtd_info *mtd, int mode)
static int atmel_of_init_port(struct atmel_nand_host *host, static int atmel_of_init_port(struct atmel_nand_host *host,
struct device_node *np) struct device_node *np)
{ {
u32 val, table_offset; u32 val;
u32 offset[2]; u32 offset[2];
int ecc_mode; int ecc_mode;
struct atmel_nand_data *board = &host->board; struct atmel_nand_data *board = &host->board;
@ -1259,42 +1353,41 @@ static int atmel_of_init_port(struct atmel_nand_host *host,
/* use PMECC, get correction capability, sector size and lookup /* use PMECC, get correction capability, sector size and lookup
* table offset. * table offset.
* If correction bits and sector size are not specified, then find
* them from NAND ONFI parameters.
*/ */
if (of_property_read_u32(np, "atmel,pmecc-cap", &val) != 0) { if (of_property_read_u32(np, "atmel,pmecc-cap", &val) == 0) {
dev_err(host->dev, "Cannot decide PMECC Capability\n"); if ((val != 2) && (val != 4) && (val != 8) && (val != 12) &&
return -EINVAL; (val != 24)) {
} else if ((val != 2) && (val != 4) && (val != 8) && (val != 12) && dev_err(host->dev,
(val != 24)) { "Unsupported PMECC correction capability: %d; should be 2, 4, 8, 12 or 24\n",
dev_err(host->dev, val);
"Unsupported PMECC correction capability: %d; should be 2, 4, 8, 12 or 24\n", return -EINVAL;
val); }
return -EINVAL; host->pmecc_corr_cap = (u8)val;
} }
host->pmecc_corr_cap = (u8)val;
if (of_property_read_u32(np, "atmel,pmecc-sector-size", &val) != 0) { if (of_property_read_u32(np, "atmel,pmecc-sector-size", &val) == 0) {
dev_err(host->dev, "Cannot decide PMECC Sector Size\n"); if ((val != 512) && (val != 1024)) {
return -EINVAL; dev_err(host->dev,
} else if ((val != 512) && (val != 1024)) { "Unsupported PMECC sector size: %d; should be 512 or 1024 bytes\n",
dev_err(host->dev, val);
"Unsupported PMECC sector size: %d; should be 512 or 1024 bytes\n", return -EINVAL;
val); }
return -EINVAL; host->pmecc_sector_size = (u16)val;
} }
host->pmecc_sector_size = (u16)val;
if (of_property_read_u32_array(np, "atmel,pmecc-lookup-table-offset", if (of_property_read_u32_array(np, "atmel,pmecc-lookup-table-offset",
offset, 2) != 0) { offset, 2) != 0) {
dev_err(host->dev, "Cannot get PMECC lookup table offset\n"); dev_err(host->dev, "Cannot get PMECC lookup table offset\n");
return -EINVAL; return -EINVAL;
} }
table_offset = host->pmecc_sector_size == 512 ? offset[0] : offset[1]; if (!offset[0] && !offset[1]) {
if (!table_offset) {
dev_err(host->dev, "Invalid PMECC lookup table offset\n"); dev_err(host->dev, "Invalid PMECC lookup table offset\n");
return -EINVAL; return -EINVAL;
} }
host->pmecc_lookup_table_offset = table_offset; host->pmecc_lookup_table_offset_512 = offset[0];
host->pmecc_lookup_table_offset_1024 = offset[1];
return 0; return 0;
} }


@@ -1,6 +1,10 @@
 #ifndef __BCM47XXNFLASH_H
 #define __BCM47XXNFLASH_H
 
+#ifndef pr_fmt
+#define pr_fmt(fmt)		KBUILD_MODNAME ": " fmt
+#endif
+
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/nand.h>


@@ -9,14 +9,14 @@
  *
  */
 
+#include "bcm47xxnflash.h"
+
 #include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/slab.h>
 #include <linux/platform_device.h>
 #include <linux/bcma/bcma.h>
 
-#include "bcm47xxnflash.h"
-
 MODULE_DESCRIPTION("NAND flash driver for BCMA bus");
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Rafał Miłecki");
@ -77,6 +77,7 @@ static int bcm47xxnflash_remove(struct platform_device *pdev)
} }
static struct platform_driver bcm47xxnflash_driver = { static struct platform_driver bcm47xxnflash_driver = {
.probe = bcm47xxnflash_probe,
.remove = bcm47xxnflash_remove, .remove = bcm47xxnflash_remove,
.driver = { .driver = {
.name = "bcma_nflash", .name = "bcma_nflash",
@ -88,13 +89,10 @@ static int __init bcm47xxnflash_init(void)
{ {
int err; int err;
/* err = platform_driver_register(&bcm47xxnflash_driver);
* Platform device "bcma_nflash" exists on SoCs and is registered very
* early, it won't be added during runtime (use platform_driver_probe).
*/
err = platform_driver_probe(&bcm47xxnflash_driver, bcm47xxnflash_probe);
if (err) if (err)
pr_err("Failed to register serial flash driver: %d\n", err); pr_err("Failed to register bcm47xx nand flash driver: %d\n",
err);
return err; return err;
} }


@@ -9,13 +9,13 @@
  *
  */
 
+#include "bcm47xxnflash.h"
+
 #include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/slab.h>
 #include <linux/bcma/bcma.h>
 
-#include "bcm47xxnflash.h"
-
 /* Broadcom uses 1'000'000 but it seems to be too many. Tests on WNDR4500 has
  * shown ~1000 retries as maxiumum. */
 #define NFLASH_READY_RETRIES		10000


@ -606,7 +606,7 @@ static int __init nand_davinci_probe(struct platform_device *pdev)
if (pdev->id < 0 || pdev->id > 3) if (pdev->id < 0 || pdev->id > 3)
return -ENODEV; return -ENODEV;
info = kzalloc(sizeof(*info), GFP_KERNEL); info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
if (!info) { if (!info) {
dev_err(&pdev->dev, "unable to allocate memory\n"); dev_err(&pdev->dev, "unable to allocate memory\n");
ret = -ENOMEM; ret = -ENOMEM;
@ -623,11 +623,11 @@ static int __init nand_davinci_probe(struct platform_device *pdev)
goto err_nomem; goto err_nomem;
} }
vaddr = ioremap(res1->start, resource_size(res1)); vaddr = devm_request_and_ioremap(&pdev->dev, res1);
base = ioremap(res2->start, resource_size(res2)); base = devm_request_and_ioremap(&pdev->dev, res2);
if (!vaddr || !base) { if (!vaddr || !base) {
dev_err(&pdev->dev, "ioremap failed\n"); dev_err(&pdev->dev, "ioremap failed\n");
ret = -EINVAL; ret = -EADDRNOTAVAIL;
goto err_ioremap; goto err_ioremap;
} }
@ -717,7 +717,7 @@ static int __init nand_davinci_probe(struct platform_device *pdev)
} }
info->chip.ecc.mode = ecc_mode; info->chip.ecc.mode = ecc_mode;
info->clk = clk_get(&pdev->dev, "aemif"); info->clk = devm_clk_get(&pdev->dev, "aemif");
if (IS_ERR(info->clk)) { if (IS_ERR(info->clk)) {
ret = PTR_ERR(info->clk); ret = PTR_ERR(info->clk);
dev_dbg(&pdev->dev, "unable to get AEMIF clock, err %d\n", ret); dev_dbg(&pdev->dev, "unable to get AEMIF clock, err %d\n", ret);
@ -845,8 +845,6 @@ err_timing:
clk_disable_unprepare(info->clk); clk_disable_unprepare(info->clk);
err_clk_enable: err_clk_enable:
clk_put(info->clk);
spin_lock_irq(&davinci_nand_lock); spin_lock_irq(&davinci_nand_lock);
if (ecc_mode == NAND_ECC_HW_SYNDROME) if (ecc_mode == NAND_ECC_HW_SYNDROME)
ecc4_busy = false; ecc4_busy = false;
@ -855,13 +853,7 @@ err_clk_enable:
err_ecc: err_ecc:
err_clk: err_clk:
err_ioremap: err_ioremap:
if (base)
iounmap(base);
if (vaddr)
iounmap(vaddr);
err_nomem: err_nomem:
kfree(info);
return ret; return ret;
} }
@ -874,15 +866,9 @@ static int __exit nand_davinci_remove(struct platform_device *pdev)
ecc4_busy = false; ecc4_busy = false;
spin_unlock_irq(&davinci_nand_lock); spin_unlock_irq(&davinci_nand_lock);
iounmap(info->base);
iounmap(info->vaddr);
nand_release(&info->mtd); nand_release(&info->mtd);
clk_disable_unprepare(info->clk); clk_disable_unprepare(info->clk);
clk_put(info->clk);
kfree(info);
return 0; return 0;
} }


@ -176,8 +176,8 @@ static void set_addr(struct mtd_info *mtd, int column, int page_addr, int oob)
ifc_nand_ctrl->page = page_addr; ifc_nand_ctrl->page = page_addr;
/* Program ROW0/COL0 */ /* Program ROW0/COL0 */
out_be32(&ifc->ifc_nand.row0, page_addr); iowrite32be(page_addr, &ifc->ifc_nand.row0);
out_be32(&ifc->ifc_nand.col0, (oob ? IFC_NAND_COL_MS : 0) | column); iowrite32be((oob ? IFC_NAND_COL_MS : 0) | column, &ifc->ifc_nand.col0);
buf_num = page_addr & priv->bufnum_mask; buf_num = page_addr & priv->bufnum_mask;
@ -239,18 +239,19 @@ static void fsl_ifc_run_command(struct mtd_info *mtd)
int i; int i;
/* set the chip select for NAND Transaction */ /* set the chip select for NAND Transaction */
out_be32(&ifc->ifc_nand.nand_csel, priv->bank << IFC_NAND_CSEL_SHIFT); iowrite32be(priv->bank << IFC_NAND_CSEL_SHIFT,
&ifc->ifc_nand.nand_csel);
dev_vdbg(priv->dev, dev_vdbg(priv->dev,
"%s: fir0=%08x fcr0=%08x\n", "%s: fir0=%08x fcr0=%08x\n",
__func__, __func__,
in_be32(&ifc->ifc_nand.nand_fir0), ioread32be(&ifc->ifc_nand.nand_fir0),
in_be32(&ifc->ifc_nand.nand_fcr0)); ioread32be(&ifc->ifc_nand.nand_fcr0));
ctrl->nand_stat = 0; ctrl->nand_stat = 0;
/* start read/write seq */ /* start read/write seq */
out_be32(&ifc->ifc_nand.nandseq_strt, IFC_NAND_SEQ_STRT_FIR_STRT); iowrite32be(IFC_NAND_SEQ_STRT_FIR_STRT, &ifc->ifc_nand.nandseq_strt);
/* wait for command complete flag or timeout */ /* wait for command complete flag or timeout */
wait_event_timeout(ctrl->nand_wait, ctrl->nand_stat, wait_event_timeout(ctrl->nand_wait, ctrl->nand_stat,
@ -273,7 +274,7 @@ static void fsl_ifc_run_command(struct mtd_info *mtd)
int sector_end = sector + chip->ecc.steps - 1; int sector_end = sector + chip->ecc.steps - 1;
for (i = sector / 4; i <= sector_end / 4; i++) for (i = sector / 4; i <= sector_end / 4; i++)
eccstat[i] = in_be32(&ifc->ifc_nand.nand_eccstat[i]); eccstat[i] = ioread32be(&ifc->ifc_nand.nand_eccstat[i]);
for (i = sector; i <= sector_end; i++) { for (i = sector; i <= sector_end; i++) {
errors = check_read_ecc(mtd, ctrl, eccstat, i); errors = check_read_ecc(mtd, ctrl, eccstat, i);
@ -313,31 +314,33 @@ static void fsl_ifc_do_read(struct nand_chip *chip,
/* Program FIR/IFC_NAND_FCR0 for Small/Large page */ /* Program FIR/IFC_NAND_FCR0 for Small/Large page */
if (mtd->writesize > 512) { if (mtd->writesize > 512) {
out_be32(&ifc->ifc_nand.nand_fir0, iowrite32be((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
(IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) | (IFC_FIR_OP_CA0 << IFC_NAND_FIR0_OP1_SHIFT) |
(IFC_FIR_OP_CA0 << IFC_NAND_FIR0_OP1_SHIFT) | (IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP2_SHIFT) |
(IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP2_SHIFT) | (IFC_FIR_OP_CMD1 << IFC_NAND_FIR0_OP3_SHIFT) |
(IFC_FIR_OP_CMD1 << IFC_NAND_FIR0_OP3_SHIFT) | (IFC_FIR_OP_RBCD << IFC_NAND_FIR0_OP4_SHIFT),
(IFC_FIR_OP_RBCD << IFC_NAND_FIR0_OP4_SHIFT)); &ifc->ifc_nand.nand_fir0);
out_be32(&ifc->ifc_nand.nand_fir1, 0x0); iowrite32be(0x0, &ifc->ifc_nand.nand_fir1);
out_be32(&ifc->ifc_nand.nand_fcr0, iowrite32be((NAND_CMD_READ0 << IFC_NAND_FCR0_CMD0_SHIFT) |
(NAND_CMD_READ0 << IFC_NAND_FCR0_CMD0_SHIFT) | (NAND_CMD_READSTART << IFC_NAND_FCR0_CMD1_SHIFT),
(NAND_CMD_READSTART << IFC_NAND_FCR0_CMD1_SHIFT)); &ifc->ifc_nand.nand_fcr0);
} else { } else {
out_be32(&ifc->ifc_nand.nand_fir0, iowrite32be((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
(IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) | (IFC_FIR_OP_CA0 << IFC_NAND_FIR0_OP1_SHIFT) |
(IFC_FIR_OP_CA0 << IFC_NAND_FIR0_OP1_SHIFT) | (IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP2_SHIFT) |
(IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP2_SHIFT) | (IFC_FIR_OP_RBCD << IFC_NAND_FIR0_OP3_SHIFT),
(IFC_FIR_OP_RBCD << IFC_NAND_FIR0_OP3_SHIFT)); &ifc->ifc_nand.nand_fir0);
out_be32(&ifc->ifc_nand.nand_fir1, 0x0); iowrite32be(0x0, &ifc->ifc_nand.nand_fir1);
if (oob) if (oob)
out_be32(&ifc->ifc_nand.nand_fcr0, iowrite32be(NAND_CMD_READOOB <<
NAND_CMD_READOOB << IFC_NAND_FCR0_CMD0_SHIFT); IFC_NAND_FCR0_CMD0_SHIFT,
&ifc->ifc_nand.nand_fcr0);
else else
out_be32(&ifc->ifc_nand.nand_fcr0, iowrite32be(NAND_CMD_READ0 <<
NAND_CMD_READ0 << IFC_NAND_FCR0_CMD0_SHIFT); IFC_NAND_FCR0_CMD0_SHIFT,
&ifc->ifc_nand.nand_fcr0);
} }
} }
@ -357,7 +360,7 @@ static void fsl_ifc_cmdfunc(struct mtd_info *mtd, unsigned int command,
switch (command) { switch (command) {
/* READ0 read the entire buffer to use hardware ECC. */ /* READ0 read the entire buffer to use hardware ECC. */
case NAND_CMD_READ0: case NAND_CMD_READ0:
out_be32(&ifc->ifc_nand.nand_fbcr, 0); iowrite32be(0, &ifc->ifc_nand.nand_fbcr);
set_addr(mtd, 0, page_addr, 0); set_addr(mtd, 0, page_addr, 0);
ifc_nand_ctrl->read_bytes = mtd->writesize + mtd->oobsize; ifc_nand_ctrl->read_bytes = mtd->writesize + mtd->oobsize;
@ -372,7 +375,7 @@ static void fsl_ifc_cmdfunc(struct mtd_info *mtd, unsigned int command,
/* READOOB reads only the OOB because no ECC is performed. */ /* READOOB reads only the OOB because no ECC is performed. */
case NAND_CMD_READOOB: case NAND_CMD_READOOB:
out_be32(&ifc->ifc_nand.nand_fbcr, mtd->oobsize - column); iowrite32be(mtd->oobsize - column, &ifc->ifc_nand.nand_fbcr);
set_addr(mtd, column, page_addr, 1); set_addr(mtd, column, page_addr, 1);
ifc_nand_ctrl->read_bytes = mtd->writesize + mtd->oobsize; ifc_nand_ctrl->read_bytes = mtd->writesize + mtd->oobsize;
@ -388,19 +391,19 @@ static void fsl_ifc_cmdfunc(struct mtd_info *mtd, unsigned int command,
if (command == NAND_CMD_PARAM) if (command == NAND_CMD_PARAM)
timing = IFC_FIR_OP_RBCD; timing = IFC_FIR_OP_RBCD;
out_be32(&ifc->ifc_nand.nand_fir0, iowrite32be((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
(IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) | (IFC_FIR_OP_UA << IFC_NAND_FIR0_OP1_SHIFT) |
(IFC_FIR_OP_UA << IFC_NAND_FIR0_OP1_SHIFT) | (timing << IFC_NAND_FIR0_OP2_SHIFT),
(timing << IFC_NAND_FIR0_OP2_SHIFT)); &ifc->ifc_nand.nand_fir0);
out_be32(&ifc->ifc_nand.nand_fcr0, iowrite32be(command << IFC_NAND_FCR0_CMD0_SHIFT,
command << IFC_NAND_FCR0_CMD0_SHIFT); &ifc->ifc_nand.nand_fcr0);
out_be32(&ifc->ifc_nand.row3, column); iowrite32be(column, &ifc->ifc_nand.row3);
/* /*
* although currently it's 8 bytes for READID, we always read * although currently it's 8 bytes for READID, we always read
* the maximum 256 bytes(for PARAM) * the maximum 256 bytes(for PARAM)
*/ */
out_be32(&ifc->ifc_nand.nand_fbcr, 256); iowrite32be(256, &ifc->ifc_nand.nand_fbcr);
ifc_nand_ctrl->read_bytes = 256; ifc_nand_ctrl->read_bytes = 256;
set_addr(mtd, 0, 0, 0); set_addr(mtd, 0, 0, 0);
@ -415,16 +418,16 @@ static void fsl_ifc_cmdfunc(struct mtd_info *mtd, unsigned int command,
/* ERASE2 uses the block and page address from ERASE1 */ /* ERASE2 uses the block and page address from ERASE1 */
case NAND_CMD_ERASE2: case NAND_CMD_ERASE2:
out_be32(&ifc->ifc_nand.nand_fir0, iowrite32be((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
(IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) | (IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP1_SHIFT) |
(IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP1_SHIFT) | (IFC_FIR_OP_CMD1 << IFC_NAND_FIR0_OP2_SHIFT),
(IFC_FIR_OP_CMD1 << IFC_NAND_FIR0_OP2_SHIFT)); &ifc->ifc_nand.nand_fir0);
out_be32(&ifc->ifc_nand.nand_fcr0, iowrite32be((NAND_CMD_ERASE1 << IFC_NAND_FCR0_CMD0_SHIFT) |
(NAND_CMD_ERASE1 << IFC_NAND_FCR0_CMD0_SHIFT) | (NAND_CMD_ERASE2 << IFC_NAND_FCR0_CMD1_SHIFT),
(NAND_CMD_ERASE2 << IFC_NAND_FCR0_CMD1_SHIFT)); &ifc->ifc_nand.nand_fcr0);
out_be32(&ifc->ifc_nand.nand_fbcr, 0); iowrite32be(0, &ifc->ifc_nand.nand_fbcr);
ifc_nand_ctrl->read_bytes = 0; ifc_nand_ctrl->read_bytes = 0;
fsl_ifc_run_command(mtd); fsl_ifc_run_command(mtd);
return; return;
@ -440,26 +443,28 @@ static void fsl_ifc_cmdfunc(struct mtd_info *mtd, unsigned int command,
(NAND_CMD_SEQIN << IFC_NAND_FCR0_CMD0_SHIFT) | (NAND_CMD_SEQIN << IFC_NAND_FCR0_CMD0_SHIFT) |
(NAND_CMD_PAGEPROG << IFC_NAND_FCR0_CMD1_SHIFT); (NAND_CMD_PAGEPROG << IFC_NAND_FCR0_CMD1_SHIFT);
out_be32(&ifc->ifc_nand.nand_fir0, iowrite32be(
(IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) | (IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
(IFC_FIR_OP_CA0 << IFC_NAND_FIR0_OP1_SHIFT) | (IFC_FIR_OP_CA0 << IFC_NAND_FIR0_OP1_SHIFT) |
(IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP2_SHIFT) | (IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP2_SHIFT) |
(IFC_FIR_OP_WBCD << IFC_NAND_FIR0_OP3_SHIFT) | (IFC_FIR_OP_WBCD << IFC_NAND_FIR0_OP3_SHIFT) |
(IFC_FIR_OP_CW1 << IFC_NAND_FIR0_OP4_SHIFT)); (IFC_FIR_OP_CW1 << IFC_NAND_FIR0_OP4_SHIFT),
&ifc->ifc_nand.nand_fir0);
} else { } else {
nand_fcr0 = ((NAND_CMD_PAGEPROG << nand_fcr0 = ((NAND_CMD_PAGEPROG <<
IFC_NAND_FCR0_CMD1_SHIFT) | IFC_NAND_FCR0_CMD1_SHIFT) |
(NAND_CMD_SEQIN << (NAND_CMD_SEQIN <<
IFC_NAND_FCR0_CMD2_SHIFT)); IFC_NAND_FCR0_CMD2_SHIFT));
out_be32(&ifc->ifc_nand.nand_fir0, iowrite32be(
(IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) | (IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
(IFC_FIR_OP_CMD2 << IFC_NAND_FIR0_OP1_SHIFT) | (IFC_FIR_OP_CMD2 << IFC_NAND_FIR0_OP1_SHIFT) |
(IFC_FIR_OP_CA0 << IFC_NAND_FIR0_OP2_SHIFT) | (IFC_FIR_OP_CA0 << IFC_NAND_FIR0_OP2_SHIFT) |
(IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP3_SHIFT) | (IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP3_SHIFT) |
(IFC_FIR_OP_WBCD << IFC_NAND_FIR0_OP4_SHIFT)); (IFC_FIR_OP_WBCD << IFC_NAND_FIR0_OP4_SHIFT),
out_be32(&ifc->ifc_nand.nand_fir1, &ifc->ifc_nand.nand_fir0);
(IFC_FIR_OP_CW1 << IFC_NAND_FIR1_OP5_SHIFT)); iowrite32be(IFC_FIR_OP_CW1 << IFC_NAND_FIR1_OP5_SHIFT,
&ifc->ifc_nand.nand_fir1);
if (column >= mtd->writesize) if (column >= mtd->writesize)
nand_fcr0 |= nand_fcr0 |=
@ -474,7 +479,7 @@ static void fsl_ifc_cmdfunc(struct mtd_info *mtd, unsigned int command,
column -= mtd->writesize; column -= mtd->writesize;
ifc_nand_ctrl->oob = 1; ifc_nand_ctrl->oob = 1;
} }
out_be32(&ifc->ifc_nand.nand_fcr0, nand_fcr0); iowrite32be(nand_fcr0, &ifc->ifc_nand.nand_fcr0);
set_addr(mtd, column, page_addr, ifc_nand_ctrl->oob); set_addr(mtd, column, page_addr, ifc_nand_ctrl->oob);
return; return;
} }
@ -482,10 +487,11 @@ static void fsl_ifc_cmdfunc(struct mtd_info *mtd, unsigned int command,
/* PAGEPROG reuses all of the setup from SEQIN and adds the length */ /* PAGEPROG reuses all of the setup from SEQIN and adds the length */
case NAND_CMD_PAGEPROG: { case NAND_CMD_PAGEPROG: {
if (ifc_nand_ctrl->oob) { if (ifc_nand_ctrl->oob) {
out_be32(&ifc->ifc_nand.nand_fbcr, iowrite32be(ifc_nand_ctrl->index -
ifc_nand_ctrl->index - ifc_nand_ctrl->column); ifc_nand_ctrl->column,
&ifc->ifc_nand.nand_fbcr);
} else { } else {
out_be32(&ifc->ifc_nand.nand_fbcr, 0); iowrite32be(0, &ifc->ifc_nand.nand_fbcr);
} }
fsl_ifc_run_command(mtd); fsl_ifc_run_command(mtd);
@ -493,12 +499,12 @@ static void fsl_ifc_cmdfunc(struct mtd_info *mtd, unsigned int command,
} }
case NAND_CMD_STATUS: case NAND_CMD_STATUS:
out_be32(&ifc->ifc_nand.nand_fir0, iowrite32be((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
(IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) | (IFC_FIR_OP_RB << IFC_NAND_FIR0_OP1_SHIFT),
(IFC_FIR_OP_RB << IFC_NAND_FIR0_OP1_SHIFT)); &ifc->ifc_nand.nand_fir0);
out_be32(&ifc->ifc_nand.nand_fcr0, iowrite32be(NAND_CMD_STATUS << IFC_NAND_FCR0_CMD0_SHIFT,
NAND_CMD_STATUS << IFC_NAND_FCR0_CMD0_SHIFT); &ifc->ifc_nand.nand_fcr0);
out_be32(&ifc->ifc_nand.nand_fbcr, 1); iowrite32be(1, &ifc->ifc_nand.nand_fbcr);
set_addr(mtd, 0, 0, 0); set_addr(mtd, 0, 0, 0);
ifc_nand_ctrl->read_bytes = 1; ifc_nand_ctrl->read_bytes = 1;
@ -512,10 +518,10 @@ static void fsl_ifc_cmdfunc(struct mtd_info *mtd, unsigned int command,
return; return;
case NAND_CMD_RESET: case NAND_CMD_RESET:
out_be32(&ifc->ifc_nand.nand_fir0, iowrite32be(IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT,
IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT); &ifc->ifc_nand.nand_fir0);
out_be32(&ifc->ifc_nand.nand_fcr0, iowrite32be(NAND_CMD_RESET << IFC_NAND_FCR0_CMD0_SHIFT,
NAND_CMD_RESET << IFC_NAND_FCR0_CMD0_SHIFT); &ifc->ifc_nand.nand_fcr0);
fsl_ifc_run_command(mtd); fsl_ifc_run_command(mtd);
return; return;
@ -639,18 +645,18 @@ static int fsl_ifc_wait(struct mtd_info *mtd, struct nand_chip *chip)
u32 nand_fsr; u32 nand_fsr;
/* Use READ_STATUS command, but wait for the device to be ready */ /* Use READ_STATUS command, but wait for the device to be ready */
out_be32(&ifc->ifc_nand.nand_fir0, iowrite32be((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
(IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) | (IFC_FIR_OP_RDSTAT << IFC_NAND_FIR0_OP1_SHIFT),
(IFC_FIR_OP_RDSTAT << IFC_NAND_FIR0_OP1_SHIFT)); &ifc->ifc_nand.nand_fir0);
out_be32(&ifc->ifc_nand.nand_fcr0, NAND_CMD_STATUS << iowrite32be(NAND_CMD_STATUS << IFC_NAND_FCR0_CMD0_SHIFT,
IFC_NAND_FCR0_CMD0_SHIFT); &ifc->ifc_nand.nand_fcr0);
out_be32(&ifc->ifc_nand.nand_fbcr, 1); iowrite32be(1, &ifc->ifc_nand.nand_fbcr);
set_addr(mtd, 0, 0, 0); set_addr(mtd, 0, 0, 0);
ifc_nand_ctrl->read_bytes = 1; ifc_nand_ctrl->read_bytes = 1;
fsl_ifc_run_command(mtd); fsl_ifc_run_command(mtd);
nand_fsr = in_be32(&ifc->ifc_nand.nand_fsr); nand_fsr = ioread32be(&ifc->ifc_nand.nand_fsr);
/* /*
* The chip always seems to report that it is * The chip always seems to report that it is
@ -744,34 +750,34 @@ static void fsl_ifc_sram_init(struct fsl_ifc_mtd *priv)
uint32_t cs = priv->bank; uint32_t cs = priv->bank;
/* Save CSOR and CSOR_ext */ /* Save CSOR and CSOR_ext */
csor = in_be32(&ifc->csor_cs[cs].csor); csor = ioread32be(&ifc->csor_cs[cs].csor);
csor_ext = in_be32(&ifc->csor_cs[cs].csor_ext); csor_ext = ioread32be(&ifc->csor_cs[cs].csor_ext);
/* chage PageSize 8K and SpareSize 1K*/ /* chage PageSize 8K and SpareSize 1K*/
csor_8k = (csor & ~(CSOR_NAND_PGS_MASK)) | 0x0018C000; csor_8k = (csor & ~(CSOR_NAND_PGS_MASK)) | 0x0018C000;
out_be32(&ifc->csor_cs[cs].csor, csor_8k); iowrite32be(csor_8k, &ifc->csor_cs[cs].csor);
out_be32(&ifc->csor_cs[cs].csor_ext, 0x0000400); iowrite32be(0x0000400, &ifc->csor_cs[cs].csor_ext);
/* READID */ /* READID */
out_be32(&ifc->ifc_nand.nand_fir0, iowrite32be((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
(IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) | (IFC_FIR_OP_UA << IFC_NAND_FIR0_OP1_SHIFT) |
(IFC_FIR_OP_UA << IFC_NAND_FIR0_OP1_SHIFT) | (IFC_FIR_OP_RB << IFC_NAND_FIR0_OP2_SHIFT),
(IFC_FIR_OP_RB << IFC_NAND_FIR0_OP2_SHIFT)); &ifc->ifc_nand.nand_fir0);
out_be32(&ifc->ifc_nand.nand_fcr0, iowrite32be(NAND_CMD_READID << IFC_NAND_FCR0_CMD0_SHIFT,
NAND_CMD_READID << IFC_NAND_FCR0_CMD0_SHIFT); &ifc->ifc_nand.nand_fcr0);
out_be32(&ifc->ifc_nand.row3, 0x0); iowrite32be(0x0, &ifc->ifc_nand.row3);
out_be32(&ifc->ifc_nand.nand_fbcr, 0x0); iowrite32be(0x0, &ifc->ifc_nand.nand_fbcr);
/* Program ROW0/COL0 */ /* Program ROW0/COL0 */
out_be32(&ifc->ifc_nand.row0, 0x0); iowrite32be(0x0, &ifc->ifc_nand.row0);
out_be32(&ifc->ifc_nand.col0, 0x0); iowrite32be(0x0, &ifc->ifc_nand.col0);
/* set the chip select for NAND Transaction */ /* set the chip select for NAND Transaction */
out_be32(&ifc->ifc_nand.nand_csel, cs << IFC_NAND_CSEL_SHIFT); iowrite32be(cs << IFC_NAND_CSEL_SHIFT, &ifc->ifc_nand.nand_csel);
/* start read seq */ /* start read seq */
out_be32(&ifc->ifc_nand.nandseq_strt, IFC_NAND_SEQ_STRT_FIR_STRT); iowrite32be(IFC_NAND_SEQ_STRT_FIR_STRT, &ifc->ifc_nand.nandseq_strt);
/* wait for command complete flag or timeout */ /* wait for command complete flag or timeout */
wait_event_timeout(ctrl->nand_wait, ctrl->nand_stat, wait_event_timeout(ctrl->nand_wait, ctrl->nand_stat,
@ -781,8 +787,8 @@ static void fsl_ifc_sram_init(struct fsl_ifc_mtd *priv)
printk(KERN_ERR "fsl-ifc: Failed to Initialise SRAM\n"); printk(KERN_ERR "fsl-ifc: Failed to Initialise SRAM\n");
/* Restore CSOR and CSOR_ext */ /* Restore CSOR and CSOR_ext */
out_be32(&ifc->csor_cs[cs].csor, csor); iowrite32be(csor, &ifc->csor_cs[cs].csor);
out_be32(&ifc->csor_cs[cs].csor_ext, csor_ext); iowrite32be(csor_ext, &ifc->csor_cs[cs].csor_ext);
} }
static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv) static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
@ -799,7 +805,7 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
/* fill in nand_chip structure */ /* fill in nand_chip structure */
/* set up function call table */ /* set up function call table */
if ((in_be32(&ifc->cspr_cs[priv->bank].cspr)) & CSPR_PORT_SIZE_16) if ((ioread32be(&ifc->cspr_cs[priv->bank].cspr)) & CSPR_PORT_SIZE_16)
chip->read_byte = fsl_ifc_read_byte16; chip->read_byte = fsl_ifc_read_byte16;
else else
chip->read_byte = fsl_ifc_read_byte; chip->read_byte = fsl_ifc_read_byte;
@ -813,13 +819,13 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
chip->bbt_td = &bbt_main_descr; chip->bbt_td = &bbt_main_descr;
chip->bbt_md = &bbt_mirror_descr; chip->bbt_md = &bbt_mirror_descr;
out_be32(&ifc->ifc_nand.ncfgr, 0x0); iowrite32be(0x0, &ifc->ifc_nand.ncfgr);
/* set up nand options */ /* set up nand options */
chip->bbt_options = NAND_BBT_USE_FLASH; chip->bbt_options = NAND_BBT_USE_FLASH;
if (in_be32(&ifc->cspr_cs[priv->bank].cspr) & CSPR_PORT_SIZE_16) { if (ioread32be(&ifc->cspr_cs[priv->bank].cspr) & CSPR_PORT_SIZE_16) {
chip->read_byte = fsl_ifc_read_byte16; chip->read_byte = fsl_ifc_read_byte16;
chip->options |= NAND_BUSWIDTH_16; chip->options |= NAND_BUSWIDTH_16;
} else { } else {
@ -832,7 +838,7 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
chip->ecc.read_page = fsl_ifc_read_page; chip->ecc.read_page = fsl_ifc_read_page;
chip->ecc.write_page = fsl_ifc_write_page; chip->ecc.write_page = fsl_ifc_write_page;
csor = in_be32(&ifc->csor_cs[priv->bank].csor); csor = ioread32be(&ifc->csor_cs[priv->bank].csor);
/* Hardware generates ECC per 512 Bytes */ /* Hardware generates ECC per 512 Bytes */
chip->ecc.size = 512; chip->ecc.size = 512;
@ -884,7 +890,7 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
chip->ecc.mode = NAND_ECC_SOFT; chip->ecc.mode = NAND_ECC_SOFT;
} }
ver = in_be32(&ifc->ifc_rev); ver = ioread32be(&ifc->ifc_rev);
if (ver == FSL_IFC_V1_1_0) if (ver == FSL_IFC_V1_1_0)
fsl_ifc_sram_init(priv); fsl_ifc_sram_init(priv);
@ -910,7 +916,7 @@ static int fsl_ifc_chip_remove(struct fsl_ifc_mtd *priv)
static int match_bank(struct fsl_ifc_regs __iomem *ifc, int bank, static int match_bank(struct fsl_ifc_regs __iomem *ifc, int bank,
phys_addr_t addr) phys_addr_t addr)
{ {
u32 cspr = in_be32(&ifc->cspr_cs[bank].cspr); u32 cspr = ioread32be(&ifc->cspr_cs[bank].cspr);
if (!(cspr & CSPR_V)) if (!(cspr & CSPR_V))
return 0; return 0;
@ -997,17 +1003,16 @@ static int fsl_ifc_nand_probe(struct platform_device *dev)
dev_set_drvdata(priv->dev, priv); dev_set_drvdata(priv->dev, priv);
out_be32(&ifc->ifc_nand.nand_evter_en, iowrite32be(IFC_NAND_EVTER_EN_OPC_EN |
IFC_NAND_EVTER_EN_OPC_EN | IFC_NAND_EVTER_EN_FTOER_EN |
IFC_NAND_EVTER_EN_FTOER_EN | IFC_NAND_EVTER_EN_WPER_EN,
IFC_NAND_EVTER_EN_WPER_EN); &ifc->ifc_nand.nand_evter_en);
/* enable NAND Machine Interrupts */ /* enable NAND Machine Interrupts */
out_be32(&ifc->ifc_nand.nand_evter_intr_en, iowrite32be(IFC_NAND_EVTER_INTR_OPCIR_EN |
IFC_NAND_EVTER_INTR_OPCIR_EN | IFC_NAND_EVTER_INTR_FTOERIR_EN |
IFC_NAND_EVTER_INTR_FTOERIR_EN | IFC_NAND_EVTER_INTR_WPERIR_EN,
IFC_NAND_EVTER_INTR_WPERIR_EN); &ifc->ifc_nand.nand_evter_intr_en);
priv->mtd.name = kasprintf(GFP_KERNEL, "%x.flash", (unsigned)res.start); priv->mtd.name = kasprintf(GFP_KERNEL, "%x.flash", (unsigned)res.start);
if (!priv->mtd.name) { if (!priv->mtd.name) {
ret = -ENOMEM; ret = -ENOMEM;
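
Throughout the fsl_ifc_nand hunks above the PowerPC-only out_be32()/in_be32() accessors are replaced by the portable iowrite32be()/ioread32be(), which also flips the argument order (value first, register last). A small hedged illustration of the mapping, not taken from the driver:

#include <linux/io.h>
#include <linux/types.h>

static void write_reg_be(void __iomem *reg, u32 val)
{
        /* was: out_be32(reg, val);  note the reversed argument order */
        iowrite32be(val, reg);
}

static u32 read_reg_be(void __iomem *reg)
{
        /* was: return in_be32(reg); */
        return ioread32be(reg);
}

The byte order seen by the device is unchanged; the point of the conversion is that the driver no longer depends on PowerPC-specific I/O helpers.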


@ -61,6 +61,16 @@
& BM_BCH_FLASH0LAYOUT0_ECC0) \ & BM_BCH_FLASH0LAYOUT0_ECC0) \
) )
#define MX6Q_BP_BCH_FLASH0LAYOUT0_GF_13_14 10
#define MX6Q_BM_BCH_FLASH0LAYOUT0_GF_13_14 \
(0x1 << MX6Q_BP_BCH_FLASH0LAYOUT0_GF_13_14)
#define BF_BCH_FLASH0LAYOUT0_GF(v, x) \
((GPMI_IS_MX6Q(x) && ((v) == 14)) \
? (((1) << MX6Q_BP_BCH_FLASH0LAYOUT0_GF_13_14) \
& MX6Q_BM_BCH_FLASH0LAYOUT0_GF_13_14) \
: 0 \
)
#define BP_BCH_FLASH0LAYOUT0_DATA0_SIZE 0 #define BP_BCH_FLASH0LAYOUT0_DATA0_SIZE 0
#define BM_BCH_FLASH0LAYOUT0_DATA0_SIZE \ #define BM_BCH_FLASH0LAYOUT0_DATA0_SIZE \
(0xfff << BP_BCH_FLASH0LAYOUT0_DATA0_SIZE) (0xfff << BP_BCH_FLASH0LAYOUT0_DATA0_SIZE)
@ -93,6 +103,16 @@
& BM_BCH_FLASH0LAYOUT1_ECCN) \ & BM_BCH_FLASH0LAYOUT1_ECCN) \
) )
#define MX6Q_BP_BCH_FLASH0LAYOUT1_GF_13_14 10
#define MX6Q_BM_BCH_FLASH0LAYOUT1_GF_13_14 \
(0x1 << MX6Q_BP_BCH_FLASH0LAYOUT1_GF_13_14)
#define BF_BCH_FLASH0LAYOUT1_GF(v, x) \
((GPMI_IS_MX6Q(x) && ((v) == 14)) \
? (((1) << MX6Q_BP_BCH_FLASH0LAYOUT1_GF_13_14) \
& MX6Q_BM_BCH_FLASH0LAYOUT1_GF_13_14) \
: 0 \
)
#define BP_BCH_FLASH0LAYOUT1_DATAN_SIZE 0 #define BP_BCH_FLASH0LAYOUT1_DATAN_SIZE 0
#define BM_BCH_FLASH0LAYOUT1_DATAN_SIZE \ #define BM_BCH_FLASH0LAYOUT1_DATAN_SIZE \
(0xfff << BP_BCH_FLASH0LAYOUT1_DATAN_SIZE) (0xfff << BP_BCH_FLASH0LAYOUT1_DATAN_SIZE)
@ -103,4 +123,6 @@
? (((v) >> 2) & MX6Q_BM_BCH_FLASH0LAYOUT1_DATAN_SIZE) \ ? (((v) >> 2) & MX6Q_BM_BCH_FLASH0LAYOUT1_DATAN_SIZE) \
: ((v) & BM_BCH_FLASH0LAYOUT1_DATAN_SIZE) \ : ((v) & BM_BCH_FLASH0LAYOUT1_DATAN_SIZE) \
) )
#define HW_BCH_VERSION 0x00000160
#endif #endif
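
The two new *_GF_13_14 macros above only ever set bit 10 of the FLASH0LAYOUT registers, and only on i.MX6Q when a 14-bit Galois field is requested; on mx23/mx28 they resolve to 0, so the older register layout is untouched. A stand-alone re-computation of that behaviour (bf_gf() is a hypothetical helper, not a kernel symbol):

#include <stdio.h>

#define MX6Q_BP_BCH_FLASH0LAYOUT0_GF_13_14 10

/* hypothetical stand-alone version of BF_BCH_FLASH0LAYOUT0_GF(v, x) */
static unsigned int bf_gf(int gf_len, int is_mx6q)
{
        return (is_mx6q && gf_len == 14) ?
                (1u << MX6Q_BP_BCH_FLASH0LAYOUT0_GF_13_14) : 0;
}

int main(void)
{
        printf("mx6q, GF14: 0x%03x\n", bf_gf(14, 1));   /* 0x400 - bit 10 set */
        printf("mx6q, GF13: 0x%03x\n", bf_gf(13, 1));   /* 0x000 */
        printf("mx28, GF14: 0x%03x\n", bf_gf(14, 0));   /* 0x000 - mx28 has no GF14 */
        return 0;
}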


@ -208,6 +208,11 @@ void gpmi_dump_info(struct gpmi_nand_data *this)
} }
/* start to print out the BCH info */ /* start to print out the BCH info */
pr_err("Show BCH registers :\n");
for (i = 0; i <= HW_BCH_VERSION / 0x10 + 1; i++) {
reg = readl(r->bch_regs + i * 0x10);
pr_err("offset 0x%.3x : 0x%.8x\n", i * 0x10, reg);
}
pr_err("BCH Geometry :\n"); pr_err("BCH Geometry :\n");
pr_err("GF length : %u\n", geo->gf_len); pr_err("GF length : %u\n", geo->gf_len);
pr_err("ECC Strength : %u\n", geo->ecc_strength); pr_err("ECC Strength : %u\n", geo->ecc_strength);
@ -232,6 +237,7 @@ int bch_set_geometry(struct gpmi_nand_data *this)
unsigned int metadata_size; unsigned int metadata_size;
unsigned int ecc_strength; unsigned int ecc_strength;
unsigned int page_size; unsigned int page_size;
unsigned int gf_len;
int ret; int ret;
if (common_nfc_set_geometry(this)) if (common_nfc_set_geometry(this))
@ -242,6 +248,7 @@ int bch_set_geometry(struct gpmi_nand_data *this)
metadata_size = bch_geo->metadata_size; metadata_size = bch_geo->metadata_size;
ecc_strength = bch_geo->ecc_strength >> 1; ecc_strength = bch_geo->ecc_strength >> 1;
page_size = bch_geo->page_size; page_size = bch_geo->page_size;
gf_len = bch_geo->gf_len;
ret = gpmi_enable_clk(this); ret = gpmi_enable_clk(this);
if (ret) if (ret)
@ -263,11 +270,13 @@ int bch_set_geometry(struct gpmi_nand_data *this)
writel(BF_BCH_FLASH0LAYOUT0_NBLOCKS(block_count) writel(BF_BCH_FLASH0LAYOUT0_NBLOCKS(block_count)
| BF_BCH_FLASH0LAYOUT0_META_SIZE(metadata_size) | BF_BCH_FLASH0LAYOUT0_META_SIZE(metadata_size)
| BF_BCH_FLASH0LAYOUT0_ECC0(ecc_strength, this) | BF_BCH_FLASH0LAYOUT0_ECC0(ecc_strength, this)
| BF_BCH_FLASH0LAYOUT0_GF(gf_len, this)
| BF_BCH_FLASH0LAYOUT0_DATA0_SIZE(block_size, this), | BF_BCH_FLASH0LAYOUT0_DATA0_SIZE(block_size, this),
r->bch_regs + HW_BCH_FLASH0LAYOUT0); r->bch_regs + HW_BCH_FLASH0LAYOUT0);
writel(BF_BCH_FLASH0LAYOUT1_PAGE_SIZE(page_size) writel(BF_BCH_FLASH0LAYOUT1_PAGE_SIZE(page_size)
| BF_BCH_FLASH0LAYOUT1_ECCN(ecc_strength, this) | BF_BCH_FLASH0LAYOUT1_ECCN(ecc_strength, this)
| BF_BCH_FLASH0LAYOUT1_GF(gf_len, this)
| BF_BCH_FLASH0LAYOUT1_DATAN_SIZE(block_size, this), | BF_BCH_FLASH0LAYOUT1_DATAN_SIZE(block_size, this),
r->bch_regs + HW_BCH_FLASH0LAYOUT1); r->bch_regs + HW_BCH_FLASH0LAYOUT1);


@ -94,6 +94,25 @@ static inline int get_ecc_strength(struct gpmi_nand_data *this)
return round_down(ecc_strength, 2); return round_down(ecc_strength, 2);
} }
static inline bool gpmi_check_ecc(struct gpmi_nand_data *this)
{
struct bch_geometry *geo = &this->bch_geometry;
/* Do the sanity check. */
if (GPMI_IS_MX23(this) || GPMI_IS_MX28(this)) {
/* The mx23/mx28 only support the GF13. */
if (geo->gf_len == 14)
return false;
if (geo->ecc_strength > MXS_ECC_STRENGTH_MAX)
return false;
} else if (GPMI_IS_MX6Q(this)) {
if (geo->ecc_strength > MX6_ECC_STRENGTH_MAX)
return false;
}
return true;
}
int common_nfc_set_geometry(struct gpmi_nand_data *this) int common_nfc_set_geometry(struct gpmi_nand_data *this)
{ {
struct bch_geometry *geo = &this->bch_geometry; struct bch_geometry *geo = &this->bch_geometry;
@ -112,17 +131,24 @@ int common_nfc_set_geometry(struct gpmi_nand_data *this)
/* The default for the length of Galois Field. */ /* The default for the length of Galois Field. */
geo->gf_len = 13; geo->gf_len = 13;
/* The default for chunk size. There is no oobsize greater then 512. */ /* The default for chunk size. */
geo->ecc_chunk_size = 512; geo->ecc_chunk_size = 512;
while (geo->ecc_chunk_size < mtd->oobsize) while (geo->ecc_chunk_size < mtd->oobsize) {
geo->ecc_chunk_size *= 2; /* keep C >= O */ geo->ecc_chunk_size *= 2; /* keep C >= O */
geo->gf_len = 14;
}
geo->ecc_chunk_count = mtd->writesize / geo->ecc_chunk_size; geo->ecc_chunk_count = mtd->writesize / geo->ecc_chunk_size;
/* We use the same ECC strength for all chunks. */ /* We use the same ECC strength for all chunks. */
geo->ecc_strength = get_ecc_strength(this); geo->ecc_strength = get_ecc_strength(this);
if (!geo->ecc_strength) { if (!gpmi_check_ecc(this)) {
pr_err("wrong ECC strength.\n"); dev_err(this->dev,
"We can not support this nand chip."
" Its required ecc strength(%d) is beyond our"
" capability(%d).\n", geo->ecc_strength,
(GPMI_IS_MX6Q(this) ? MX6_ECC_STRENGTH_MAX
: MXS_ECC_STRENGTH_MAX));
return -EINVAL; return -EINVAL;
} }
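
The reworked common_nfc_set_geometry() grows the ECC chunk until it covers the OOB area and, when it has to grow, switches to the 14-bit Galois field that only the i.MX6Q BCH engine supports; gpmi_check_ecc() then rejects any combination the hardware cannot handle. A rough userspace re-derivation of the chunk/GF selection (the chip geometries below are illustrative, not taken from the patch):

#include <stdio.h>

static void pick_geometry(unsigned int writesize, unsigned int oobsize)
{
        unsigned int chunk = 512, gf_len = 13;

        while (chunk < oobsize) {
                chunk *= 2;             /* keep chunk >= oobsize */
                gf_len = 14;            /* big chunks need GF14 */
        }
        printf("page %u/%u -> chunk %u, %u chunks, GF%u\n",
               writesize, oobsize, chunk, writesize / chunk, gf_len);
}

int main(void)
{
        pick_geometry(4096, 224);       /* common 4K page: 512-byte chunks, GF13 */
        pick_geometry(8192, 744);       /* large-OOB 8K page: 1024-byte chunks, GF14 */
        return 0;
}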
@ -920,8 +946,7 @@ static int gpmi_ecc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
dma_addr_t auxiliary_phys; dma_addr_t auxiliary_phys;
unsigned int i; unsigned int i;
unsigned char *status; unsigned char *status;
unsigned int failed; unsigned int max_bitflips = 0;
unsigned int corrected;
int ret; int ret;
pr_debug("page number is : %d\n", page); pr_debug("page number is : %d\n", page);
@ -945,35 +970,25 @@ static int gpmi_ecc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
payload_virt, payload_phys); payload_virt, payload_phys);
if (ret) { if (ret) {
pr_err("Error in ECC-based read: %d\n", ret); pr_err("Error in ECC-based read: %d\n", ret);
goto exit_nfc; return ret;
} }
/* handle the block mark swapping */ /* handle the block mark swapping */
block_mark_swapping(this, payload_virt, auxiliary_virt); block_mark_swapping(this, payload_virt, auxiliary_virt);
/* Loop over status bytes, accumulating ECC status. */ /* Loop over status bytes, accumulating ECC status. */
failed = 0; status = auxiliary_virt + nfc_geo->auxiliary_status_offset;
corrected = 0;
status = auxiliary_virt + nfc_geo->auxiliary_status_offset;
for (i = 0; i < nfc_geo->ecc_chunk_count; i++, status++) { for (i = 0; i < nfc_geo->ecc_chunk_count; i++, status++) {
if ((*status == STATUS_GOOD) || (*status == STATUS_ERASED)) if ((*status == STATUS_GOOD) || (*status == STATUS_ERASED))
continue; continue;
if (*status == STATUS_UNCORRECTABLE) { if (*status == STATUS_UNCORRECTABLE) {
failed++; mtd->ecc_stats.failed++;
continue; continue;
} }
corrected += *status; mtd->ecc_stats.corrected += *status;
} max_bitflips = max_t(unsigned int, max_bitflips, *status);
/*
* Propagate ECC status to the owning MTD only when failed or
* corrected times nearly reaches our ECC correction threshold.
*/
if (failed || corrected >= (nfc_geo->ecc_strength - 1)) {
mtd->ecc_stats.failed += failed;
mtd->ecc_stats.corrected += corrected;
} }
if (oob_required) { if (oob_required) {
@ -995,8 +1010,8 @@ static int gpmi_ecc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
this->payload_virt, this->payload_phys, this->payload_virt, this->payload_phys,
nfc_geo->payload_size, nfc_geo->payload_size,
payload_virt, payload_phys); payload_virt, payload_phys);
exit_nfc:
return ret; return max_bitflips;
} }
static int gpmi_ecc_write_page(struct mtd_info *mtd, struct nand_chip *chip, static int gpmi_ecc_write_page(struct mtd_info *mtd, struct nand_chip *chip,
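
gpmi_ecc_read_page() now follows the generic MTD convention: every chunk's corrected-bit count is fed into mtd->ecc_stats immediately and the function returns the worst chunk's count, which the core compares against mtd->bitflip_threshold. A condensed sketch of that accumulation loop (the helper name and signature are made up for illustration; STATUS_GOOD as 0x00 is assumed, the other values appear in the driver header below):

#include <linux/kernel.h>
#include <linux/mtd/mtd.h>

#define STATUS_GOOD             0x00
#define STATUS_ERASED           0xff
#define STATUS_UNCORRECTABLE    0xfe

static unsigned int accumulate_ecc_status(struct mtd_info *mtd,
                                          const u8 *status, int chunks)
{
        unsigned int max_bitflips = 0;
        int i;

        for (i = 0; i < chunks; i++) {
                if (status[i] == STATUS_GOOD || status[i] == STATUS_ERASED)
                        continue;
                if (status[i] == STATUS_UNCORRECTABLE) {
                        mtd->ecc_stats.failed++;
                        continue;
                }
                /* any other value is the number of corrected bitflips */
                mtd->ecc_stats.corrected += status[i];
                max_bitflips = max_t(unsigned int, max_bitflips, status[i]);
        }
        return max_bitflips;    /* read_page() hands this back to the MTD core */
}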
@ -1668,8 +1683,8 @@ exit_nfc_init:
release_resources(this); release_resources(this);
exit_acquire_resources: exit_acquire_resources:
platform_set_drvdata(pdev, NULL); platform_set_drvdata(pdev, NULL);
kfree(this);
dev_err(this->dev, "driver registration failed: %d\n", ret); dev_err(this->dev, "driver registration failed: %d\n", ret);
kfree(this);
return ret; return ret;
} }


@ -284,6 +284,10 @@ extern int gpmi_read_page(struct gpmi_nand_data *,
#define STATUS_ERASED 0xff #define STATUS_ERASED 0xff
#define STATUS_UNCORRECTABLE 0xfe #define STATUS_UNCORRECTABLE 0xfe
/* BCH's bit correction capability. */
#define MXS_ECC_STRENGTH_MAX 20 /* mx23 and mx28 */
#define MX6_ECC_STRENGTH_MAX 40
/* Use the platform_id to distinguish different Archs. */ /* Use the platform_id to distinguish different Archs. */
#define IS_MX23 0x0 #define IS_MX23 0x0
#define IS_MX28 0x1 #define IS_MX28 0x1


@ -530,12 +530,23 @@ static void send_page_v1(struct mtd_info *mtd, unsigned int ops)
static void send_read_id_v3(struct mxc_nand_host *host) static void send_read_id_v3(struct mxc_nand_host *host)
{ {
struct nand_chip *this = &host->nand;
/* Read ID into main buffer */ /* Read ID into main buffer */
writel(NFC_ID, NFC_V3_LAUNCH); writel(NFC_ID, NFC_V3_LAUNCH);
wait_op_done(host, true); wait_op_done(host, true);
memcpy32_fromio(host->data_buf, host->main_area0, 16); memcpy32_fromio(host->data_buf, host->main_area0, 16);
if (this->options & NAND_BUSWIDTH_16) {
/* compress the ID info */
host->data_buf[1] = host->data_buf[2];
host->data_buf[2] = host->data_buf[4];
host->data_buf[3] = host->data_buf[6];
host->data_buf[4] = host->data_buf[8];
host->data_buf[5] = host->data_buf[10];
}
} }
/* Request the NANDFC to perform a read of the NAND device ID. */ /* Request the NANDFC to perform a read of the NAND device ID. */
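
send_read_id_v3() now compacts the ID bytes when the chip sits on a 16-bit bus: each ID byte then occupies a 16-bit word in the controller's buffer, so only every other byte lane carries data and the buffer must be squeezed before nand_base parses it. A small stand-alone demonstration of the same compaction (the sample ID bytes are made up):

#include <stdio.h>

int main(void)
{
        unsigned char raw[12] = { 0x2c, 0x00, 0xd7, 0x00, 0x94, 0x00,
                                  0x3e, 0x00, 0x84, 0x00, 0x00, 0x00 };
        unsigned char id[6];
        int i;

        id[0] = raw[0];
        for (i = 1; i < 6; i++)         /* same compaction as the driver hunk */
                id[i] = raw[2 * i];

        for (i = 0; i < 6; i++)
                printf("%02x ", id[i]); /* prints: 2c d7 94 3e 84 00 */
        printf("\n");
        return 0;
}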


@ -825,13 +825,8 @@ static void panic_nand_wait(struct mtd_info *mtd, struct nand_chip *chip,
static int nand_wait(struct mtd_info *mtd, struct nand_chip *chip) static int nand_wait(struct mtd_info *mtd, struct nand_chip *chip)
{ {
unsigned long timeo = jiffies;
int status, state = chip->state; int status, state = chip->state;
unsigned long timeo = (state == FL_ERASING ? 400 : 20);
if (state == FL_ERASING)
timeo += (HZ * 400) / 1000;
else
timeo += (HZ * 20) / 1000;
led_trigger_event(nand_led_trigger, LED_FULL); led_trigger_event(nand_led_trigger, LED_FULL);
@ -849,6 +844,7 @@ static int nand_wait(struct mtd_info *mtd, struct nand_chip *chip)
if (in_interrupt() || oops_in_progress) if (in_interrupt() || oops_in_progress)
panic_nand_wait(mtd, chip, timeo); panic_nand_wait(mtd, chip, timeo);
else { else {
timeo = jiffies + msecs_to_jiffies(timeo);
while (time_before(jiffies, timeo)) { while (time_before(jiffies, timeo)) {
if (chip->dev_ready) { if (chip->dev_ready) {
if (chip->dev_ready(mtd)) if (chip->dev_ready(mtd))
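
The nand_wait() fix keeps the timeout in milliseconds and converts it to a jiffies deadline only on the normal path; previously an absolute jiffies value was handed to panic_nand_wait(), which expects a millisecond count, so panic-time writes got a wildly wrong budget. A hedged sketch of the corrected flow (panic_nand_wait() is the existing static helper in nand_base.c; status handling is omitted):

#include <linux/jiffies.h>
#include <linux/mtd/nand.h>
#include <linux/sched.h>

static void nand_wait_sketch(struct mtd_info *mtd, struct nand_chip *chip)
{
        /* budget in milliseconds, the unit panic_nand_wait() expects */
        unsigned long timeo = (chip->state == FL_ERASING) ? 400 : 20;

        if (in_interrupt() || oops_in_progress) {
                panic_nand_wait(mtd, chip, timeo);
        } else {
                /* convert to an absolute jiffies deadline only here */
                unsigned long deadline = jiffies + msecs_to_jiffies(timeo);

                while (time_before(jiffies, deadline)) {
                        if (chip->dev_ready && chip->dev_ready(mtd))
                                break;
                        cond_resched();
                }
        }
}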


@ -55,8 +55,7 @@ struct mtd_info;
#define MODULE_AUTHOR(x) /* x */ #define MODULE_AUTHOR(x) /* x */
#define MODULE_DESCRIPTION(x) /* x */ #define MODULE_DESCRIPTION(x) /* x */
#define printk printf #define pr_err printf
#define KERN_ERR ""
#endif #endif
/* /*
@ -507,7 +506,7 @@ int __nand_correct_data(unsigned char *buf,
if ((bitsperbyte[b0] + bitsperbyte[b1] + bitsperbyte[b2]) == 1) if ((bitsperbyte[b0] + bitsperbyte[b1] + bitsperbyte[b2]) == 1)
return 1; /* error in ECC data; no action needed */ return 1; /* error in ECC data; no action needed */
printk(KERN_ERR "uncorrectable error : "); pr_err("%s: uncorrectable ECC error", __func__);
return -1; return -1;
} }
EXPORT_SYMBOL(__nand_correct_data); EXPORT_SYMBOL(__nand_correct_data);


@ -1468,12 +1468,12 @@ int do_read_error(struct nandsim *ns, int num)
void do_bit_flips(struct nandsim *ns, int num) void do_bit_flips(struct nandsim *ns, int num)
{ {
if (bitflips && random32() < (1 << 22)) { if (bitflips && prandom_u32() < (1 << 22)) {
int flips = 1; int flips = 1;
if (bitflips > 1) if (bitflips > 1)
flips = (random32() % (int) bitflips) + 1; flips = (prandom_u32() % (int) bitflips) + 1;
while (flips--) { while (flips--) {
int pos = random32() % (num * 8); int pos = prandom_u32() % (num * 8);
ns->buf.byte[pos / 8] ^= (1 << (pos % 8)); ns->buf.byte[pos / 8] ^= (1 << (pos % 8));
NS_WARN("read_page: flipping bit %d in page %d " NS_WARN("read_page: flipping bit %d in page %d "
"reading from %d ecc: corrected=%u failed=%u\n", "reading from %d ecc: corrected=%u failed=%u\n",


@ -22,9 +22,12 @@
#include <linux/omap-dma.h> #include <linux/omap-dma.h>
#include <linux/io.h> #include <linux/io.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/of.h>
#include <linux/of_device.h>
#ifdef CONFIG_MTD_NAND_OMAP_BCH #ifdef CONFIG_MTD_NAND_OMAP_BCH
#include <linux/bch.h> #include <linux/bch.h>
#include <linux/platform_data/elm.h>
#endif #endif
#include <linux/platform_data/mtd-nand-omap2.h> #include <linux/platform_data/mtd-nand-omap2.h>
@ -117,6 +120,33 @@
#define OMAP24XX_DMA_GPMC 4 #define OMAP24XX_DMA_GPMC 4
#define BCH8_MAX_ERROR 8 /* up to 8 bits correctable */
#define BCH4_MAX_ERROR 4 /* up to 4 bits correctable */
#define SECTOR_BYTES 512
/* 4 bit padding to make byte aligned, 56 = 52 + 4 */
#define BCH4_BIT_PAD 4
#define BCH8_ECC_MAX ((SECTOR_BYTES + BCH8_ECC_OOB_BYTES) * 8)
#define BCH4_ECC_MAX ((SECTOR_BYTES + BCH4_ECC_OOB_BYTES) * 8)
/* GPMC ecc engine settings for read */
#define BCH_WRAPMODE_1 1 /* BCH wrap mode 1 */
#define BCH8R_ECC_SIZE0 0x1a /* ecc_size0 = 26 */
#define BCH8R_ECC_SIZE1 0x2 /* ecc_size1 = 2 */
#define BCH4R_ECC_SIZE0 0xd /* ecc_size0 = 13 */
#define BCH4R_ECC_SIZE1 0x3 /* ecc_size1 = 3 */
/* GPMC ecc engine settings for write */
#define BCH_WRAPMODE_6 6 /* BCH wrap mode 6 */
#define BCH_ECC_SIZE0 0x0 /* ecc_size0 = 0, no oob protection */
#define BCH_ECC_SIZE1 0x20 /* ecc_size1 = 32 */
#ifdef CONFIG_MTD_NAND_OMAP_BCH
static u_char bch8_vector[] = {0xf3, 0xdb, 0x14, 0x16, 0x8b, 0xd2, 0xbe, 0xcc,
0xac, 0x6b, 0xff, 0x99, 0x7b};
static u_char bch4_vector[] = {0x00, 0x6b, 0x31, 0xdd, 0x41, 0xbc, 0x10};
#endif
/* oob info generated runtime depending on ecc algorithm and layout selected */ /* oob info generated runtime depending on ecc algorithm and layout selected */
static struct nand_ecclayout omap_oobinfo; static struct nand_ecclayout omap_oobinfo;
/* Define some generic bad / good block scan pattern which are used /* Define some generic bad / good block scan pattern which are used
@ -156,6 +186,9 @@ struct omap_nand_info {
#ifdef CONFIG_MTD_NAND_OMAP_BCH #ifdef CONFIG_MTD_NAND_OMAP_BCH
struct bch_control *bch; struct bch_control *bch;
struct nand_ecclayout ecclayout; struct nand_ecclayout ecclayout;
bool is_elm_used;
struct device *elm_dev;
struct device_node *of_node;
#endif #endif
}; };
@ -1031,6 +1064,13 @@ static int omap_dev_ready(struct mtd_info *mtd)
* omap3_enable_hwecc_bch - Program OMAP3 GPMC to perform BCH ECC correction * omap3_enable_hwecc_bch - Program OMAP3 GPMC to perform BCH ECC correction
* @mtd: MTD device structure * @mtd: MTD device structure
* @mode: Read/Write mode * @mode: Read/Write mode
*
* When using BCH, sector size is hardcoded to 512 bytes.
* Using wrapping mode 6 both for reading and writing if ELM module not uses
* for error correction.
* On writing,
* eccsize0 = 0 (no additional protected byte in spare area)
* eccsize1 = 32 (skip 32 nibbles = 16 bytes per sector in spare area)
*/ */
static void omap3_enable_hwecc_bch(struct mtd_info *mtd, int mode) static void omap3_enable_hwecc_bch(struct mtd_info *mtd, int mode)
{ {
@ -1039,32 +1079,57 @@ static void omap3_enable_hwecc_bch(struct mtd_info *mtd, int mode)
struct omap_nand_info *info = container_of(mtd, struct omap_nand_info, struct omap_nand_info *info = container_of(mtd, struct omap_nand_info,
mtd); mtd);
struct nand_chip *chip = mtd->priv; struct nand_chip *chip = mtd->priv;
u32 val; u32 val, wr_mode;
unsigned int ecc_size1, ecc_size0;
/* Using wrapping mode 6 for writing */
wr_mode = BCH_WRAPMODE_6;
nerrors = (info->nand.ecc.bytes == 13) ? 8 : 4;
dev_width = (chip->options & NAND_BUSWIDTH_16) ? 1 : 0;
nsectors = 1;
/* /*
* Program GPMC to perform correction on one 512-byte sector at a time. * ECC engine enabled for valid ecc_size0 nibbles
* Using 4 sectors at a time (i.e. ecc.size = 2048) is also possible and * and disabled for ecc_size1 nibbles.
* gives a slight (5%) performance gain (but requires additional code).
*/ */
ecc_size0 = BCH_ECC_SIZE0;
ecc_size1 = BCH_ECC_SIZE1;
/* Perform ecc calculation on 512-byte sector */
nsectors = 1;
/* Update number of error correction */
nerrors = info->nand.ecc.strength;
/* Multi sector reading/writing for NAND flash with page size < 4096 */
if (info->is_elm_used && (mtd->writesize <= 4096)) {
if (mode == NAND_ECC_READ) {
/* Using wrapping mode 1 for reading */
wr_mode = BCH_WRAPMODE_1;
/*
* ECC engine enabled for ecc_size0 nibbles
* and disabled for ecc_size1 nibbles.
*/
ecc_size0 = (nerrors == 8) ?
BCH8R_ECC_SIZE0 : BCH4R_ECC_SIZE0;
ecc_size1 = (nerrors == 8) ?
BCH8R_ECC_SIZE1 : BCH4R_ECC_SIZE1;
}
/* Perform ecc calculation for one page (< 4096) */
nsectors = info->nand.ecc.steps;
}
writel(ECC1, info->reg.gpmc_ecc_control); writel(ECC1, info->reg.gpmc_ecc_control);
/* /* Configure ecc size for BCH */
* When using BCH, sector size is hardcoded to 512 bytes. val = (ecc_size1 << ECCSIZE1_SHIFT) | (ecc_size0 << ECCSIZE0_SHIFT);
* Here we are using wrapping mode 6 both for reading and writing, with:
* size0 = 0 (no additional protected byte in spare area)
* size1 = 32 (skip 32 nibbles = 16 bytes per sector in spare area)
*/
val = (32 << ECCSIZE1_SHIFT) | (0 << ECCSIZE0_SHIFT);
writel(val, info->reg.gpmc_ecc_size_config); writel(val, info->reg.gpmc_ecc_size_config);
dev_width = (chip->options & NAND_BUSWIDTH_16) ? 1 : 0;
/* BCH configuration */ /* BCH configuration */
val = ((1 << 16) | /* enable BCH */ val = ((1 << 16) | /* enable BCH */
(((nerrors == 8) ? 1 : 0) << 12) | /* 8 or 4 bits */ (((nerrors == 8) ? 1 : 0) << 12) | /* 8 or 4 bits */
(0x06 << 8) | /* wrap mode = 6 */ (wr_mode << 8) | /* wrap mode */
(dev_width << 7) | /* bus width */ (dev_width << 7) | /* bus width */
(((nsectors-1) & 0x7) << 4) | /* number of sectors */ (((nsectors-1) & 0x7) << 4) | /* number of sectors */
(info->gpmc_cs << 1) | /* ECC CS */ (info->gpmc_cs << 1) | /* ECC CS */
@ -1072,7 +1137,7 @@ static void omap3_enable_hwecc_bch(struct mtd_info *mtd, int mode)
writel(val, info->reg.gpmc_ecc_config); writel(val, info->reg.gpmc_ecc_config);
/* clear ecc and enable bits */ /* Clear ecc and enable bits */
writel(ECCCLEAR | ECC1, info->reg.gpmc_ecc_control); writel(ECCCLEAR | ECC1, info->reg.gpmc_ecc_control);
} }
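
Worth spelling out, since the GPMC ECC size fields are counted in nibbles: with the ELM in use, a BCH8 read programs ecc_size0 = 0x1a (26 nibbles = the 13 ECC bytes per 512-byte sector fed into the engine) and ecc_size1 = 0x2 (2 nibbles = the 1 padding byte that is skipped); the BCH4 values 0xd and 0x3 correspond to the 52-bit remainder plus its skipped tail. On writes, ecc_size0 = 0 and ecc_size1 = 0x20 skip 32 nibbles, i.e. 16 spare bytes per sector, so the spare area is not folded into the computed ECC. This is a reading of the constants above, not wording from the patch itself.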
@ -1161,6 +1226,298 @@ static int omap3_calculate_ecc_bch8(struct mtd_info *mtd, const u_char *dat,
return 0; return 0;
} }
/**
* omap3_calculate_ecc_bch - Generate bytes of ECC bytes
* @mtd: MTD device structure
* @dat: The pointer to data on which ecc is computed
* @ecc_code: The ecc_code buffer
*
* Support calculating of BCH4/8 ecc vectors for the page
*/
static int omap3_calculate_ecc_bch(struct mtd_info *mtd, const u_char *dat,
u_char *ecc_code)
{
struct omap_nand_info *info = container_of(mtd, struct omap_nand_info,
mtd);
unsigned long nsectors, bch_val1, bch_val2, bch_val3, bch_val4;
int i, eccbchtsel;
nsectors = ((readl(info->reg.gpmc_ecc_config) >> 4) & 0x7) + 1;
/*
* find BCH scheme used
* 0 -> BCH4
* 1 -> BCH8
*/
eccbchtsel = ((readl(info->reg.gpmc_ecc_config) >> 12) & 0x3);
for (i = 0; i < nsectors; i++) {
/* Read hw-computed remainder */
bch_val1 = readl(info->reg.gpmc_bch_result0[i]);
bch_val2 = readl(info->reg.gpmc_bch_result1[i]);
if (eccbchtsel) {
bch_val3 = readl(info->reg.gpmc_bch_result2[i]);
bch_val4 = readl(info->reg.gpmc_bch_result3[i]);
}
if (eccbchtsel) {
/* BCH8 ecc scheme */
*ecc_code++ = (bch_val4 & 0xFF);
*ecc_code++ = ((bch_val3 >> 24) & 0xFF);
*ecc_code++ = ((bch_val3 >> 16) & 0xFF);
*ecc_code++ = ((bch_val3 >> 8) & 0xFF);
*ecc_code++ = (bch_val3 & 0xFF);
*ecc_code++ = ((bch_val2 >> 24) & 0xFF);
*ecc_code++ = ((bch_val2 >> 16) & 0xFF);
*ecc_code++ = ((bch_val2 >> 8) & 0xFF);
*ecc_code++ = (bch_val2 & 0xFF);
*ecc_code++ = ((bch_val1 >> 24) & 0xFF);
*ecc_code++ = ((bch_val1 >> 16) & 0xFF);
*ecc_code++ = ((bch_val1 >> 8) & 0xFF);
*ecc_code++ = (bch_val1 & 0xFF);
/*
* Setting 14th byte to zero to handle
* erased page & maintain compatibility
* with RBL
*/
*ecc_code++ = 0x0;
} else {
/* BCH4 ecc scheme */
*ecc_code++ = ((bch_val2 >> 12) & 0xFF);
*ecc_code++ = ((bch_val2 >> 4) & 0xFF);
*ecc_code++ = ((bch_val2 & 0xF) << 4) |
((bch_val1 >> 28) & 0xF);
*ecc_code++ = ((bch_val1 >> 20) & 0xFF);
*ecc_code++ = ((bch_val1 >> 12) & 0xFF);
*ecc_code++ = ((bch_val1 >> 4) & 0xFF);
*ecc_code++ = ((bch_val1 & 0xF) << 4);
/*
* Setting 8th byte to zero to handle
* erased page
*/
*ecc_code++ = 0x0;
}
}
return 0;
}
/**
* erased_sector_bitflips - count bit flips
* @data: data sector buffer
* @oob: oob buffer
* @info: omap_nand_info
*
* Check the bit flips in erased page falls below correctable level.
* If falls below, report the page as erased with correctable bit
* flip, else report as uncorrectable page.
*/
static int erased_sector_bitflips(u_char *data, u_char *oob,
struct omap_nand_info *info)
{
int flip_bits = 0, i;
for (i = 0; i < info->nand.ecc.size; i++) {
flip_bits += hweight8(~data[i]);
if (flip_bits > info->nand.ecc.strength)
return 0;
}
for (i = 0; i < info->nand.ecc.bytes - 1; i++) {
flip_bits += hweight8(~oob[i]);
if (flip_bits > info->nand.ecc.strength)
return 0;
}
/*
* Bit flips falls in correctable level.
* Fill data area with 0xFF
*/
if (flip_bits) {
memset(data, 0xFF, info->nand.ecc.size);
memset(oob, 0xFF, info->nand.ecc.bytes);
}
return flip_bits;
}
/**
* omap_elm_correct_data - corrects page data area in case error reported
* @mtd: MTD device structure
* @data: page data
* @read_ecc: ecc read from nand flash
* @calc_ecc: ecc read from HW ECC registers
*
* Calculated ecc vector reported as zero in case of non-error pages.
* In case of error/erased pages non-zero error vector is reported.
* In case of non-zero ecc vector, check read_ecc at fixed offset
* (x = 13/7 in case of BCH8/4 == 0) to find page programmed or not.
* To handle bit flips in this data, count the number of 0's in
* read_ecc[x] and check if it greater than 4. If it is less, it is
* programmed page, else erased page.
*
* 1. If page is erased, check with standard ecc vector (ecc vector
* for erased page to find any bit flip). If check fails, bit flip
* is present in erased page. Count the bit flips in erased page and
* if it falls under correctable level, report page with 0xFF and
* update the correctable bit information.
* 2. If error is reported on programmed page, update elm error
* vector and correct the page with ELM error correction routine.
*
*/
static int omap_elm_correct_data(struct mtd_info *mtd, u_char *data,
u_char *read_ecc, u_char *calc_ecc)
{
struct omap_nand_info *info = container_of(mtd, struct omap_nand_info,
mtd);
int eccsteps = info->nand.ecc.steps;
int i , j, stat = 0;
int eccsize, eccflag, ecc_vector_size;
struct elm_errorvec err_vec[ERROR_VECTOR_MAX];
u_char *ecc_vec = calc_ecc;
u_char *spare_ecc = read_ecc;
u_char *erased_ecc_vec;
enum bch_ecc type;
bool is_error_reported = false;
/* Initialize elm error vector to zero */
memset(err_vec, 0, sizeof(err_vec));
if (info->nand.ecc.strength == BCH8_MAX_ERROR) {
type = BCH8_ECC;
erased_ecc_vec = bch8_vector;
} else {
type = BCH4_ECC;
erased_ecc_vec = bch4_vector;
}
ecc_vector_size = info->nand.ecc.bytes;
/*
* Remove extra byte padding for BCH8 RBL
* compatibility and erased page handling
*/
eccsize = ecc_vector_size - 1;
for (i = 0; i < eccsteps ; i++) {
eccflag = 0; /* initialize eccflag */
/*
* Check any error reported,
* In case of error, non zero ecc reported.
*/
for (j = 0; (j < eccsize); j++) {
if (calc_ecc[j] != 0) {
eccflag = 1; /* non zero ecc, error present */
break;
}
}
if (eccflag == 1) {
/*
* Set threshold to min(4, ecc.strength / 2), capping the
* tolerated bit flips in the checked byte at 4
*/
unsigned int threshold = min_t(unsigned int, 4,
info->nand.ecc.strength / 2);
/*
* Check data area is programmed by counting
* number of 0's at fixed offset in spare area.
* Checking count of 0's against threshold.
* In case programmed page expects at least threshold
* zeros in byte.
* If zeros are less than threshold for programmed page/
* zeros are more than threshold erased page, either
* case page reported as uncorrectable.
*/
if (hweight8(~read_ecc[eccsize]) >= threshold) {
/*
* Update elm error vector as
* data area is programmed
*/
err_vec[i].error_reported = true;
is_error_reported = true;
} else {
/* Error reported in erased page */
int bitflip_count;
u_char *buf = &data[info->nand.ecc.size * i];
if (memcmp(calc_ecc, erased_ecc_vec, eccsize)) {
bitflip_count = erased_sector_bitflips(
buf, read_ecc, info);
if (bitflip_count)
stat += bitflip_count;
else
return -EINVAL;
}
}
}
/* Update the ecc vector */
calc_ecc += ecc_vector_size;
read_ecc += ecc_vector_size;
}
/* Check if any error reported */
if (!is_error_reported)
return 0;
/* Decode BCH error using ELM module */
elm_decode_bch_error_page(info->elm_dev, ecc_vec, err_vec);
for (i = 0; i < eccsteps; i++) {
if (err_vec[i].error_reported) {
for (j = 0; j < err_vec[i].error_count; j++) {
u32 bit_pos, byte_pos, error_max, pos;
if (type == BCH8_ECC)
error_max = BCH8_ECC_MAX;
else
error_max = BCH4_ECC_MAX;
if (info->nand.ecc.strength == BCH8_MAX_ERROR)
pos = err_vec[i].error_loc[j];
else
/* Add 4 to take care 4 bit padding */
pos = err_vec[i].error_loc[j] +
BCH4_BIT_PAD;
/* Calculate bit position of error */
bit_pos = pos % 8;
/* Calculate byte position of error */
byte_pos = (error_max - pos - 1) / 8;
if (pos < error_max) {
if (byte_pos < 512)
data[byte_pos] ^= 1 << bit_pos;
else
spare_ecc[byte_pos - 512] ^=
1 << bit_pos;
}
/* else, not interested to correct ecc */
}
}
/* Update number of correctable errors */
stat += err_vec[i].error_count;
/* Update page data with sector size */
data += info->nand.ecc.size;
spare_ecc += ecc_vector_size;
}
for (i = 0; i < eccsteps; i++)
/* Return error if uncorrectable error present */
if (err_vec[i].error_uncorrectable)
return -EINVAL;
return stat;
}
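
A worked example of the programmed-vs-erased decision above, assuming BCH8 (strength 8, so the threshold is min(4, 8/2) = 4): the spare byte inspected at read_ecc[eccsize] is written as 0x00 on programmed sectors (the padding byte added in omap3_calculate_ecc_bch) and reads back 0xff from erased ones, so counting its zero bits separates the two cases even with a few bitflips present. The snippet below is only an illustration of that arithmetic, not driver code:

#include <stdio.h>

static int zero_bits(unsigned char b)   /* like hweight8(~b) in the kernel */
{
        return __builtin_popcount((unsigned char)~b);
}

int main(void)
{
        const int threshold = 4;        /* min(4, strength / 2) for BCH8 */
        unsigned char programmed = 0x08;        /* 0x00 with one bitflip */
        unsigned char erased = 0xef;            /* 0xff with one bitflip */

        printf("programmed byte: %d zero bits -> %s\n", zero_bits(programmed),
               zero_bits(programmed) >= threshold ? "programmed page" : "erased page");
        printf("erased byte:     %d zero bits -> %s\n", zero_bits(erased),
               zero_bits(erased) >= threshold ? "programmed page" : "erased page");
        return 0;
}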
/** /**
* omap3_correct_data_bch - Decode received data and correct errors * omap3_correct_data_bch - Decode received data and correct errors
* @mtd: MTD device structure * @mtd: MTD device structure
@ -1193,6 +1550,92 @@ static int omap3_correct_data_bch(struct mtd_info *mtd, u_char *data,
return count; return count;
} }
/**
* omap_write_page_bch - BCH ecc based write page function for entire page
* @mtd: mtd info structure
* @chip: nand chip info structure
* @buf: data buffer
* @oob_required: must write chip->oob_poi to OOB
*
* Custom write page method evolved to support multi sector writing in one shot
*/
static int omap_write_page_bch(struct mtd_info *mtd, struct nand_chip *chip,
const uint8_t *buf, int oob_required)
{
int i;
uint8_t *ecc_calc = chip->buffers->ecccalc;
uint32_t *eccpos = chip->ecc.layout->eccpos;
/* Enable GPMC ecc engine */
chip->ecc.hwctl(mtd, NAND_ECC_WRITE);
/* Write data */
chip->write_buf(mtd, buf, mtd->writesize);
/* Update ecc vector from GPMC result registers */
chip->ecc.calculate(mtd, buf, &ecc_calc[0]);
for (i = 0; i < chip->ecc.total; i++)
chip->oob_poi[eccpos[i]] = ecc_calc[i];
/* Write ecc vector to OOB area */
chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
}
/**
* omap_read_page_bch - BCH ecc based page read function for entire page
* @mtd: mtd info structure
* @chip: nand chip info structure
* @buf: buffer to store read data
* @oob_required: caller requires OOB data read to chip->oob_poi
* @page: page number to read
*
* For BCH ecc scheme, GPMC used for syndrome calculation and ELM module
* used for error correction.
* Custom method evolved to support ELM error correction & multi sector
* reading. On reading page data area is read along with OOB data with
* ecc engine enabled. ecc vector updated after read of OOB data.
* For non error pages ecc vector reported as zero.
*/
static int omap_read_page_bch(struct mtd_info *mtd, struct nand_chip *chip,
uint8_t *buf, int oob_required, int page)
{
uint8_t *ecc_calc = chip->buffers->ecccalc;
uint8_t *ecc_code = chip->buffers->ecccode;
uint32_t *eccpos = chip->ecc.layout->eccpos;
uint8_t *oob = &chip->oob_poi[eccpos[0]];
uint32_t oob_pos = mtd->writesize + chip->ecc.layout->eccpos[0];
int stat;
unsigned int max_bitflips = 0;
/* Enable GPMC ecc engine */
chip->ecc.hwctl(mtd, NAND_ECC_READ);
/* Read data */
chip->read_buf(mtd, buf, mtd->writesize);
/* Read oob bytes */
chip->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_pos, -1);
chip->read_buf(mtd, oob, chip->ecc.total);
/* Calculate ecc bytes */
chip->ecc.calculate(mtd, buf, ecc_calc);
memcpy(ecc_code, &chip->oob_poi[eccpos[0]], chip->ecc.total);
stat = chip->ecc.correct(mtd, buf, ecc_code, ecc_calc);
if (stat < 0) {
mtd->ecc_stats.failed++;
} else {
mtd->ecc_stats.corrected += stat;
max_bitflips = max_t(unsigned int, max_bitflips, stat);
}
return max_bitflips;
}
/** /**
* omap3_free_bch - Release BCH ecc resources * omap3_free_bch - Release BCH ecc resources
* @mtd: MTD device structure * @mtd: MTD device structure
@ -1218,43 +1661,86 @@ static int omap3_init_bch(struct mtd_info *mtd, int ecc_opt)
struct omap_nand_info *info = container_of(mtd, struct omap_nand_info, struct omap_nand_info *info = container_of(mtd, struct omap_nand_info,
mtd); mtd);
#ifdef CONFIG_MTD_NAND_OMAP_BCH8 #ifdef CONFIG_MTD_NAND_OMAP_BCH8
const int hw_errors = 8; const int hw_errors = BCH8_MAX_ERROR;
#else #else
const int hw_errors = 4; const int hw_errors = BCH4_MAX_ERROR;
#endif #endif
enum bch_ecc bch_type;
const __be32 *parp;
int lenp;
struct device_node *elm_node;
info->bch = NULL; info->bch = NULL;
max_errors = (ecc_opt == OMAP_ECC_BCH8_CODE_HW) ? 8 : 4; max_errors = (ecc_opt == OMAP_ECC_BCH8_CODE_HW) ?
BCH8_MAX_ERROR : BCH4_MAX_ERROR;
if (max_errors != hw_errors) { if (max_errors != hw_errors) {
pr_err("cannot configure %d-bit BCH ecc, only %d-bit supported", pr_err("cannot configure %d-bit BCH ecc, only %d-bit supported",
max_errors, hw_errors); max_errors, hw_errors);
goto fail; goto fail;
} }
/* software bch library is only used to detect and locate errors */ info->nand.ecc.size = 512;
info->bch = init_bch(13, max_errors, 0x201b /* hw polynomial */); info->nand.ecc.hwctl = omap3_enable_hwecc_bch;
if (!info->bch) info->nand.ecc.mode = NAND_ECC_HW;
goto fail; info->nand.ecc.strength = max_errors;
info->nand.ecc.size = 512; if (hw_errors == BCH8_MAX_ERROR)
info->nand.ecc.hwctl = omap3_enable_hwecc_bch; bch_type = BCH8_ECC;
info->nand.ecc.correct = omap3_correct_data_bch; else
info->nand.ecc.mode = NAND_ECC_HW; bch_type = BCH4_ECC;
/* /* Detect availability of ELM module */
* The number of corrected errors in an ecc block that will trigger parp = of_get_property(info->of_node, "elm_id", &lenp);
* block scrubbing defaults to the ecc strength (4 or 8). if ((parp == NULL) && (lenp != (sizeof(void *) * 2))) {
* Set mtd->bitflip_threshold here to define a custom threshold. pr_err("Missing elm_id property, fall back to Software BCH\n");
*/ info->is_elm_used = false;
if (max_errors == 8) {
info->nand.ecc.strength = 8;
info->nand.ecc.bytes = 13;
info->nand.ecc.calculate = omap3_calculate_ecc_bch8;
} else { } else {
info->nand.ecc.strength = 4; struct platform_device *pdev;
info->nand.ecc.bytes = 7;
info->nand.ecc.calculate = omap3_calculate_ecc_bch4; elm_node = of_find_node_by_phandle(be32_to_cpup(parp));
pdev = of_find_device_by_node(elm_node);
info->elm_dev = &pdev->dev;
elm_config(info->elm_dev, bch_type);
info->is_elm_used = true;
}
if (info->is_elm_used && (mtd->writesize <= 4096)) {
if (hw_errors == BCH8_MAX_ERROR)
info->nand.ecc.bytes = BCH8_SIZE;
else
info->nand.ecc.bytes = BCH4_SIZE;
info->nand.ecc.correct = omap_elm_correct_data;
info->nand.ecc.calculate = omap3_calculate_ecc_bch;
info->nand.ecc.read_page = omap_read_page_bch;
info->nand.ecc.write_page = omap_write_page_bch;
} else {
/*
* software bch library is only used to detect and
* locate errors
*/
info->bch = init_bch(13, max_errors,
0x201b /* hw polynomial */);
if (!info->bch)
goto fail;
info->nand.ecc.correct = omap3_correct_data_bch;
/*
* The number of corrected errors in an ecc block that will
* trigger block scrubbing defaults to the ecc strength (4 or 8)
* Set mtd->bitflip_threshold here to define a custom threshold.
*/
if (max_errors == 8) {
info->nand.ecc.bytes = 13;
info->nand.ecc.calculate = omap3_calculate_ecc_bch8;
} else {
info->nand.ecc.bytes = 7;
info->nand.ecc.calculate = omap3_calculate_ecc_bch4;
}
} }
pr_info("enabling NAND BCH ecc with %d-bit correction\n", max_errors); pr_info("enabling NAND BCH ecc with %d-bit correction\n", max_errors);
@ -1270,7 +1756,7 @@ fail:
*/ */
static int omap3_init_bch_tail(struct mtd_info *mtd) static int omap3_init_bch_tail(struct mtd_info *mtd)
{ {
int i, steps; int i, steps, offset;
struct omap_nand_info *info = container_of(mtd, struct omap_nand_info, struct omap_nand_info *info = container_of(mtd, struct omap_nand_info,
mtd); mtd);
struct nand_ecclayout *layout = &info->ecclayout; struct nand_ecclayout *layout = &info->ecclayout;
@ -1292,11 +1778,21 @@ static int omap3_init_bch_tail(struct mtd_info *mtd)
goto fail; goto fail;
} }
/* ECC layout compatible with RBL for BCH8 */
if (info->is_elm_used && (info->nand.ecc.bytes == BCH8_SIZE))
offset = 2;
else
offset = mtd->oobsize - layout->eccbytes;
/* put ecc bytes at oob tail */ /* put ecc bytes at oob tail */
for (i = 0; i < layout->eccbytes; i++) for (i = 0; i < layout->eccbytes; i++)
layout->eccpos[i] = mtd->oobsize-layout->eccbytes+i; layout->eccpos[i] = offset + i;
if (info->is_elm_used && (info->nand.ecc.bytes == BCH8_SIZE))
layout->oobfree[0].offset = 2 + layout->eccbytes * steps;
else
layout->oobfree[0].offset = 2;
layout->oobfree[0].offset = 2;
layout->oobfree[0].length = mtd->oobsize-2-layout->eccbytes; layout->oobfree[0].length = mtd->oobsize-2-layout->eccbytes;
info->nand.ecc.layout = layout; info->nand.ecc.layout = layout;
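
For a concrete feel of the layout change (worked numbers, not from the patch): on a 2048-byte page with a 64-byte OOB, BCH8 with the ELM uses 14 ECC bytes per 512-byte sector (13 plus the RBL padding byte), so eccbytes = 4 * 14 = 56. With the RBL-compatible placement the ECC occupies OOB offsets 2..57, oobfree[0].offset becomes 2 + 56 = 58 and oobfree[0].length = 64 - 2 - 56 = 6. Without the ELM the software-BCH path keeps 13 bytes per sector (52 in total) at the OOB tail, offsets 12..63, leaving offsets 2..11 free.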
@ -1360,6 +1856,9 @@ static int omap_nand_probe(struct platform_device *pdev)
info->nand.options = pdata->devsize; info->nand.options = pdata->devsize;
info->nand.options |= NAND_SKIP_BBTSCAN; info->nand.options |= NAND_SKIP_BBTSCAN;
#ifdef CONFIG_MTD_NAND_OMAP_BCH
info->of_node = pdata->of_node;
#endif
res = platform_get_resource(pdev, IORESOURCE_MEM, 0); res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (res == NULL) { if (res == NULL) {


@ -174,7 +174,14 @@ out:
return rc; return rc;
} }
static void __exit ofpart_parser_exit(void)
{
deregister_mtd_parser(&ofpart_parser);
deregister_mtd_parser(&ofoldpart_parser);
}
module_init(ofpart_parser_init); module_init(ofpart_parser_init);
module_exit(ofpart_parser_exit);
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Parser for MTD partitioning information in device tree"); MODULE_DESCRIPTION("Parser for MTD partitioning information in device tree");


@ -44,7 +44,7 @@ struct nand_ecc_test {
static void single_bit_error_data(void *error_data, void *correct_data, static void single_bit_error_data(void *error_data, void *correct_data,
size_t size) size_t size)
{ {
unsigned int offset = random32() % (size * BITS_PER_BYTE); unsigned int offset = prandom_u32() % (size * BITS_PER_BYTE);
memcpy(error_data, correct_data, size); memcpy(error_data, correct_data, size);
__change_bit_le(offset, error_data); __change_bit_le(offset, error_data);
@ -55,9 +55,9 @@ static void double_bit_error_data(void *error_data, void *correct_data,
{ {
unsigned int offset[2]; unsigned int offset[2];
offset[0] = random32() % (size * BITS_PER_BYTE); offset[0] = prandom_u32() % (size * BITS_PER_BYTE);
do { do {
offset[1] = random32() % (size * BITS_PER_BYTE); offset[1] = prandom_u32() % (size * BITS_PER_BYTE);
} while (offset[0] == offset[1]); } while (offset[0] == offset[1]);
memcpy(error_data, correct_data, size); memcpy(error_data, correct_data, size);
@ -68,7 +68,7 @@ static void double_bit_error_data(void *error_data, void *correct_data,
static unsigned int random_ecc_bit(size_t size) static unsigned int random_ecc_bit(size_t size)
{ {
unsigned int offset = random32() % (3 * BITS_PER_BYTE); unsigned int offset = prandom_u32() % (3 * BITS_PER_BYTE);
if (size == 256) { if (size == 256) {
/* /*
@ -76,7 +76,7 @@ static unsigned int random_ecc_bit(size_t size)
* and 17th bit) in ECC code for 256 byte data block * and 17th bit) in ECC code for 256 byte data block
*/ */
while (offset == 16 || offset == 17) while (offset == 16 || offset == 17)
offset = random32() % (3 * BITS_PER_BYTE); offset = prandom_u32() % (3 * BITS_PER_BYTE);
} }
return offset; return offset;


@ -55,7 +55,7 @@ static int rand_eb(void)
unsigned int eb; unsigned int eb;
again: again:
eb = random32(); eb = prandom_u32();
/* Read or write up 2 eraseblocks at a time - hence 'ebcnt - 1' */ /* Read or write up 2 eraseblocks at a time - hence 'ebcnt - 1' */
eb %= (ebcnt - 1); eb %= (ebcnt - 1);
if (bbt[eb]) if (bbt[eb])
@ -67,7 +67,7 @@ static int rand_offs(void)
{ {
unsigned int offs; unsigned int offs;
offs = random32(); offs = prandom_u32();
offs %= bufsize; offs %= bufsize;
return offs; return offs;
} }
@ -76,7 +76,7 @@ static int rand_len(int offs)
{ {
unsigned int len; unsigned int len;
len = random32(); len = prandom_u32();
len %= (bufsize - offs); len %= (bufsize - offs);
return len; return len;
} }
@ -191,7 +191,7 @@ static int do_write(void)
static int do_operation(void) static int do_operation(void)
{ {
if (random32() & 1) if (prandom_u32() & 1)
return do_read(); return do_read();
else else
return do_write(); return do_write();


@@ -208,7 +208,7 @@ static inline int write_pattern(int ebnum, void *buf)
 static int __init tort_init(void)
 {
 	int err = 0, i, infinite = !cycles_count;
-	int bad_ebs[ebcnt];
+	int *bad_ebs;
 
 	printk(KERN_INFO "\n");
 	printk(KERN_INFO "=================================================\n");
@@ -250,28 +250,24 @@ static int __init tort_init(void)
 	err = -ENOMEM;
 
 	patt_5A5 = kmalloc(mtd->erasesize, GFP_KERNEL);
-	if (!patt_5A5) {
-		pr_err("error: cannot allocate memory\n");
+	if (!patt_5A5)
 		goto out_mtd;
-	}
 
 	patt_A5A = kmalloc(mtd->erasesize, GFP_KERNEL);
-	if (!patt_A5A) {
-		pr_err("error: cannot allocate memory\n");
+	if (!patt_A5A)
 		goto out_patt_5A5;
-	}
 
 	patt_FF = kmalloc(mtd->erasesize, GFP_KERNEL);
-	if (!patt_FF) {
-		pr_err("error: cannot allocate memory\n");
+	if (!patt_FF)
 		goto out_patt_A5A;
-	}
 
 	check_buf = kmalloc(mtd->erasesize, GFP_KERNEL);
-	if (!check_buf) {
-		pr_err("error: cannot allocate memory\n");
+	if (!check_buf)
 		goto out_patt_FF;
-	}
+
+	bad_ebs = kcalloc(ebcnt, sizeof(*bad_ebs), GFP_KERNEL);
+	if (!bad_ebs)
+		goto out_check_buf;
 
 	err = 0;
@@ -290,7 +286,6 @@ static int __init tort_init(void)
 	/*
 	 * Check if there is a bad eraseblock among those we are going to test.
 	 */
-	memset(&bad_ebs[0], 0, sizeof(int) * ebcnt);
 	if (mtd_can_have_bb(mtd)) {
 		for (i = eb; i < eb + ebcnt; i++) {
 			err = mtd_block_isbad(mtd, (loff_t)i * mtd->erasesize);
@@ -394,6 +389,8 @@ out:
 	pr_info("finished after %u erase cycles\n",
 		erase_cycles);
 
+	kfree(bad_ebs);
+out_check_buf:
 	kfree(check_buf);
 out_patt_FF:
 	kfree(patt_FF);
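For context on the torture-test hunks above: bad_ebs used to be a variable-length array sized by ebcnt, a value derived from module parameters, so a large eraseblock count could overflow the small kernel stack. Allocating it with kcalloc() moves it to the heap and zero-fills it, which is also why the explicit memset() is dropped. A hedged sketch of the same pattern, with count and flags as stand-in names rather than the driver's own:

	int *flags;

	flags = kcalloc(count, sizeof(*flags), GFP_KERNEL);	/* zeroed heap buffer */
	if (!flags)
		return -ENOMEM;
	/* ... use flags[0 .. count - 1] ... */
	kfree(flags);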

@@ -86,7 +86,7 @@ static inline int ubi_dbg_is_bgt_disabled(const struct ubi_device *ubi)
 static inline int ubi_dbg_is_bitflip(const struct ubi_device *ubi)
 {
 	if (ubi->dbg.emulate_bitflips)
-		return !(random32() % 200);
+		return !(prandom_u32() % 200);
 	return 0;
 }
@@ -100,7 +100,7 @@ static inline int ubi_dbg_is_bitflip(const struct ubi_device *ubi)
 static inline int ubi_dbg_is_write_failure(const struct ubi_device *ubi)
 {
 	if (ubi->dbg.emulate_io_failures)
-		return !(random32() % 500);
+		return !(prandom_u32() % 500);
 	return 0;
 }
@@ -114,7 +114,7 @@ static inline int ubi_dbg_is_write_failure(const struct ubi_device *ubi)
 static inline int ubi_dbg_is_erase_failure(const struct ubi_device *ubi)
 {
 	if (ubi->dbg.emulate_io_failures)
-		return !(random32() % 400);
+		return !(prandom_u32() % 400);
 	return 0;
 }

@@ -528,6 +528,7 @@ struct bcma_sflash {
 	u32 size;
 
 	struct mtd_info *mtd;
+	void *priv;
 };
 
 #endif

@@ -245,6 +245,7 @@ struct map_info {
 	unsigned long pfow_base;
 	unsigned long map_priv_1;
 	unsigned long map_priv_2;
+	struct device_node *device_node;
 	void *fldrv_priv;
 	struct mtd_chip_driver *fldrv;
 };
@@ -328,7 +329,7 @@ static inline int map_word_bitsset(struct map_info *map, map_word val1, map_word
 
 static inline map_word map_word_load(struct map_info *map, const void *ptr)
 {
-	map_word r = {{0} };
+	map_word r;
 
 	if (map_bankwidth_is_1(map))
 		r.x[0] = *(unsigned char *)ptr;
@@ -342,6 +343,8 @@ static inline map_word map_word_load(struct map_info *map, const void *ptr)
 #endif
 	else if (map_bankwidth_is_large(map))
 		memcpy(r.x, ptr, map->bankwidth);
+	else
+		BUG();
 
 	return r;
 }
@@ -391,7 +394,7 @@ static inline map_word map_word_ff(struct map_info *map)
 
 static inline map_word inline_map_read(struct map_info *map, unsigned long ofs)
 {
-	map_word uninitialized_var(r);
+	map_word r;
 
 	if (map_bankwidth_is_1(map))
 		r.x[0] = __raw_readb(map->virt + ofs);
@@ -425,6 +428,8 @@ static inline void inline_map_write(struct map_info *map, const map_word datum,
 #endif
 	else if (map_bankwidth_is_large(map))
 		memcpy_toio(map->virt+ofs, datum.x, map->bankwidth);
+	else
+		BUG();
 
 	mb();
 }
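The map.h hunks above all follow one idea: each bankwidth dispatch chain gains an else BUG() arm so an unsupported bankwidth fails loudly instead of silently producing an uninitialized map_word, and once every branch either fills r or BUG()s, the {{0} } initializer and the uninitialized_var() annotation are no longer needed. A condensed, hedged sketch of the resulting shape (not a verbatim copy of map_word_load):

	map_word r;

	if (map_bankwidth_is_1(map))
		r.x[0] = *(unsigned char *)ptr;
	else if (map_bankwidth_is_large(map))
		memcpy(r.x, ptr, map->bankwidth);
	else
		BUG();		/* unhandled bankwidth: stop here, loudly */

	return r;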

@@ -0,0 +1,54 @@
+/*
+ * BCH Error Location Module
+ *
+ * Copyright (C) 2012 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef __ELM_H
+#define __ELM_H
+
+enum bch_ecc {
+	BCH4_ECC = 0,
+	BCH8_ECC,
+};
+
+/* ELM support 8 error syndrome process */
+#define ERROR_VECTOR_MAX		8
+
+#define BCH8_ECC_OOB_BYTES		13
+#define BCH4_ECC_OOB_BYTES		7
+/* RBL requires 14 byte even though BCH8 uses only 13 byte */
+#define BCH8_SIZE			(BCH8_ECC_OOB_BYTES + 1)
+/* Uses 1 extra byte to handle erased pages */
+#define BCH4_SIZE			(BCH4_ECC_OOB_BYTES + 1)
+
+/**
+ * struct elm_errorvec - error vector for elm
+ * @error_reported:		set true for vectors error is reported
+ * @error_uncorrectable:	number of uncorrectable errors
+ * @error_count:		number of correctable errors in the sector
+ * @error_loc:			buffer for error location
+ *
+ */
+struct elm_errorvec {
+	bool error_reported;
+	bool error_uncorrectable;
+	int error_count;
+	int error_loc[ERROR_VECTOR_MAX];
+};
+
+void elm_decode_bch_error_page(struct device *dev, u8 *ecc_calc,
+		struct elm_errorvec *err_vec);
+void elm_config(struct device *dev, enum bch_ecc bch_type);
+#endif /* __ELM_H */