Hi everyone,
We have ~128 Mbit of configuration data to be stored in a Flash device,
and for reasons related to qualification (HiRel application) we are more
inclined to use NAND technology than NOR. Unfortunately NAND flash
suffers from bad blocks, which may also develop during the lifetime of
the component and therefore have to be handled.
I've read something about bad block management and it looks like there
are two essential strategies to cope with the issue of bad blocks:
1. skip block
2. reserved block
The first one skips a block whenever it is bad and writes to the first
free one, also updating the logical block addressing (LBA). The second
strategy instead reserves a dedicated area into which bad blocks are
remapped; in this case the LBA has to be kept updated as well.
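To make sure I understand the two schemes, this is roughly how I picture
them in C. The block counts, the bad-block marker check and the table
layout are made-up placeholders, not anything from a datasheet:

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_BLOCKS     1024   /* assumed device size            */
    #define RESERVED_START  992   /* last 32 blocks kept for remaps */

    extern bool block_is_bad(uint32_t phys); /* reads bad-block marker */

    /* Strategy 1: skip block.
     * Walk forward until the logical-th good block is found; every
     * logical block after a bad one shifts by the number of bad
     * blocks seen so far. */
    uint32_t skip_block_map(uint32_t logical)
    {
        uint32_t good = 0;
        for (uint32_t phys = 0; phys < RESERVED_START; phys++) {
            if (block_is_bad(phys))
                continue;
            if (good == logical)
                return phys;
            good++;
        }
        return UINT32_MAX;  /* ran out of good blocks */
    }

    /* Strategy 2: reserved block.
     * Logical addresses map 1:1 to physical ones unless the block
     * went bad, in which case a small table redirects it into the
     * reserved area at the top of the device. Entry 0 means "not
     * remapped" (block 0 is assumed never remapped, since NAND
     * vendors typically guarantee it good). */
    static uint32_t remap_table[NUM_BLOCKS];

    uint32_t reserved_block_map(uint32_t logical)
    {
        if (remap_table[logical] != 0)
            return remap_table[logical];
        return logical;
    }

If I read the trade-off right, strategy 1 pays a scan (or a precomputed
table of the same size), while strategy 2 pays only a table lookup at
the cost of the reserved area.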
I do not see much of a difference between the two strategies except
that in case 1 I need to 'search' for the first available free block,
while in case 2 I have a reserved area set aside for it. Am I missing
any other major difference?
The second question I have is about 'management'. I do not have a
software stack to perform the management of these bad blocks and am
obliged to do it in my FPGA. Does anyone here see any potential risk
in doing so? Would I be better off dedicating a small-footprint
controller in the FPGA to handle the Flash Translation Layer (FTL) with
wear leveling and bad block management? Can anyone here point me to
some IP cores readily available for doing this?
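For the wear-leveling part, what I have in mind is nothing fancier than
this sketch. The per-block erase counters and the helper names are
assumptions, just to show the policy (always erase/write the least-worn
free block):

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_BLOCKS 1024

    extern uint32_t erase_count[NUM_BLOCKS]; /* persisted per-block counters */
    extern bool     block_is_free(uint32_t phys);
    extern bool     block_is_bad(uint32_t phys);

    /* Pick the free, good block with the fewest erases so that wear
     * spreads evenly across the device. */
    uint32_t pick_next_block(void)
    {
        uint32_t best = UINT32_MAX;
        uint32_t best_count = UINT32_MAX;
        for (uint32_t b = 0; b < NUM_BLOCKS; b++) {
            if (block_is_bad(b) || !block_is_free(b))
                continue;
            if (erase_count[b] < best_count) {
                best_count = erase_count[b];
                best = b;
            }
        }
        return best;  /* UINT32_MAX if nothing is free */
    }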
There's a high chance I will need to implement some sort of 'scrubbing'
to avoid accumulation of errors. All these 'functions' for handling the
Flash seem to me very well suited to software but not to hardware. Does
anyone here have a different opinion?
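What I mean by scrubbing is something along these lines; the page
count, the threshold and the two helpers are placeholders for whatever
the controller would actually provide:

    #include <stdint.h>

    #define PAGES_PER_BLOCK        64
    #define ECC_RELOCATE_THRESHOLD  4  /* assumed: move data before ECC limit */

    /* Returns corrected bit count for the page, or -1 if uncorrectable. */
    extern int  read_page_with_ecc(uint32_t block, uint32_t page);
    extern void relocate_block(uint32_t block); /* copy out, retire old block */

    /* One scrub pass over a block: read every page and, if correctable
     * errors are piling up (or a page is already uncorrectable), move
     * the data to a fresh block before errors accumulate past what the
     * ECC can fix. */
    void scrub_block(uint32_t block)
    {
        for (uint32_t page = 0; page < PAGES_PER_BLOCK; page++) {
            int bitflips = read_page_with_ecc(block, page);
            if (bitflips < 0 || bitflips >= ECC_RELOCATE_THRESHOLD) {
                relocate_block(block);
                return;
            }
        }
    }

Walking periodically over all blocks like this feels very much like a
software job, hence my doubt about doing it all in gateware.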
Any comment/suggestion/pointer/rant is appreciated.
Cheers,
Al
--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?