DRAM circuits


Adnan Aziz

I teach a VLSI design class at UT Austin, and there were a couple of
questions in my last lecture on DRAMs that I couldn't answer.

The text (Weste and Harris, "CMOS VLSI Design", 3rd edition, excellent
book) and my DRAM reference (Keeth & Baker, "DRAM Circuit Design")
weren't much help, so I thought I'd ask the net.

- Q1. Why is the bitline precharged to V_DD/2 (instead of V_DD)? I
thought this would be for performance, i.e., get a larger swing more
quickly, but at least from a simple model the opposite seems to be
true. Perhaps it's related to power or noise?

- Q2. Shouldn't DRAM writes be faster than reads? (The logic being
that in reads the bitline is driven by the trench capacitor, but in
writes the bitline has an active driver. Perhaps the reason has
something to do with sense-amp logic compensating for the slow read.)

cheers,
adnan

ps - pls reply to the newsgroups, or send me mail at adnan at
ece_nospam . utexas . edu_DELETE (the adnan_aziz@hotmail.com acct is
long gone)



-------------------------------------------
Adnan Aziz, Dept. of Elect. and Comp. Eng.,
The University of Texas, Austin TX, 78712
1 (512) 475-9774 www.ece.utexas.edu/~adnan
-------------------------------------------
 
"Adnan Aziz" <adnan_aziz@hotmail.com> wrote in message
news:dbf9db48.0411091450.2ce50019@posting.google.com...
- Q1. Why is the bitline precharged to V_DD/2 (instead of V_DD)? I
thought this would be for performance, i.e., get a larger swing more
quickly, but at least from a simple model the opposite seems to be
true. Perhaps it's related to power or noise?
When you read the bit, you check whether the capacitor has any charge or
not. Remember that the capacitance of the bitline is significantly larger
than the capacitance of the trench capacitor.

On a read you don't observe whether the bitline goes to VDD or to GND, but
rather in which direction it moves. It will reach neither VDD nor GND from
the read alone.

When the read is done, the voltage on the trench capacitor is close to
VDD/2. That is why the read is destructive and you need to rewrite the
contents of the cell.
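The charge-sharing arithmetic behind this can be sketched in a few lines. The capacitance and supply values below are illustrative assumptions, not figures from any particular process:

```python
# Charge-sharing model of a DRAM read: the small cell capacitor is dumped
# onto the large precharged bitline. All component values are assumptions
# chosen only to make the ~10:1 bitline-to-cell capacitance ratio concrete.
VDD = 1.8          # supply voltage (V), assumed
C_cell = 30e-15    # trench-cell capacitance (F), assumed
C_bl = 300e-15     # bitline capacitance (F), assumed ~10x the cell

def bitline_after_read(v_cell, v_pre):
    """Shared voltage once the access transistor connects cell to bitline."""
    return (C_cell * v_cell + C_bl * v_pre) / (C_cell + C_bl)

v1 = bitline_after_read(VDD, VDD / 2)   # reading a stored "1"
v0 = bitline_after_read(0.0, VDD / 2)   # reading a stored "0"
# With a VDD/2 precharge the bitline is nudged by equal-magnitude swings
# (about C_cell/(C_cell+C_bl) * VDD/2, here ~80 mV) in either direction,
# and the cell node itself ends up near VDD/2 -- the destructive read.
print(v1 - VDD / 2, v0 - VDD / 2)
```

Note that the final voltage is nowhere near either rail, which is why the sense amp only needs to detect the direction of the nudge.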

- Q2. Shouldn't DRAM writes be faster than reads? (The logic being
that in reads the bitline is driven by the trench capacitor, but in
writes the bitline has an active driver. Perhaps the reason has
something to do with sense-amp logic compensating for the slow read.)
As soon as the bitline moves in either direction, the sense amp will
respond. On a write you must wait until the bitline and trench capacitor
are fully charged or discharged. (I think that may be the reason.)

--------------------
Have a great day!
Bjørn BL.
 
In article <dbf9db48.0411091450.2ce50019@posting.google.com>,
adnan_aziz@hotmail.com (Adnan Aziz) writes:

- Q1. Why is the bitline precharged to V_DD/2 (instead of V_DD)? I
thought this would be for performance, i.e., get a larger swing more
quickly, but at least from a simple model the opposite seems to be
true. Perhaps it's related to power or noise?

John Jakson got it right about the power being the reason to use
VDD/2 sensing. But in practice, it hasn't always worked well to
count on the VDD/2 bitline precharge to eliminate the need for
dummy cells. Dummy cells help keep the bitlines better balanced
during sensing, rather than one side being heavy by the capacitance
of one cell. In addition, wordlines couple into the bitlines more
from an active cell than an off cell, so having a dummy cell
equalizes coupling noise.

To get a 1/2 level dummy cell, you simply have a dummy cell with
a back-door gate, and put a 1/2 level into it. There are several
ways to do this. One is to open the unused reference wordline after
sensing is complete, so a dummy cell is attached to the bitline on
each side of the sense amp. One will get written to "0", and the
other to "1". After the wordline is shut off, open the back-door
gate, shorting the two dummy cells together, giving a 1/2 level.

There's more to the timing than that, but that's the basics. You
can also shove a hard-generated voltage in through the back-door
gate.
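The back-door trick is just charge sharing between two equal capacitors. A minimal sketch, with the supply and cell capacitance as assumed values:

```python
# Two dummy cells, one written "0" and one "1" during sensing; shorting
# them through the back-door gate averages their charge to a 1/2 level.
# VDD and the cell capacitance are illustrative assumptions.
VDD = 1.8
C = 30e-15                    # each dummy cell's capacitance (F), assumed

def shorted_level(v_a, v_b, c_a=C, c_b=C):
    """Voltage after shorting two charged capacitors together."""
    return (c_a * v_a + c_b * v_b) / (c_a + c_b)

ref = shorted_level(0.0, VDD)   # equal caps at 0 and VDD average to VDD/2
```

Because both dummies are full-sized cells, the reference is generated without ever fabricating a half-sized capacitor; it only requires the two capacitances to match.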

It's worth mentioning that sensing is moving away from Vdd/2,
because of sense amp stall. The operating voltages are getting so
low in modern technologies that it's getting tough to set a sense
amp. Moving to rail sensing gives the sense amp more drive, getting
away from stall conditions and giving faster performance. The
power goes up, but at the same time performance requirements are
driving toward shorter bitlines, and that can bring the power back
down.

- Q2. Shouldn't DRAM writes be faster than reads? (The logic being
that in reads the bitline is driven by the trench capacitor, but in
writes the bitline has an active driver. Perhaps the reason has
something to do with sense-amp logic compensating for the slow read.)

Every DRAM write is really a read-modify-write. In one cycle you will
typically sense a thousand or several thousand cells, but will only
write 4 or 8 or 16 of them in that chip. The rest of those cells have to
retain their old data. So you typically latch the write data while
you start the read. When the read is complete and the data is stable,
the write data is gated into the desired cells.

You typically wait to do the write until after the read, so you don't
disturb the sensing process in the adjacent cells. There is some art
for getting around these limits, and writing faster.
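The row-wide read-modify-write can be modeled in a few lines. The row width, number of rows, and the write-after-sense ordering below are illustrative, not taken from any real part:

```python
# Toy model of a DRAM write: the whole row is sensed into the sense amps,
# only the addressed columns are overwritten, and the full row (mostly
# old data) is restored. ROW_BITS and the array size are assumptions.
ROW_BITS = 16

def write_cycle(array, row, col, data_bits):
    """Write len(data_bits) bits at (row, col); every other bit in the
    row is read and then written back unchanged."""
    sense_amps = list(array[row])          # sense (read) the entire row
    for i, bit in enumerate(data_bits):    # gate write data into a few amps
        sense_amps[col + i] = bit
    array[row] = sense_amps                # restore the full row to the cells
    return sense_amps

mem = [[0] * ROW_BITS for _ in range(4)]
write_cycle(mem, 1, 4, [1, 0, 1, 1])     # 4-bit write inside a 16-bit row
```

The point of the model is that the cost of the write is dominated by the row-wide sense and restore, which is exactly the cost of a read.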

Dale Pontius
 
adnan_aziz@hotmail.com (Adnan Aziz) wrote in message news:<dbf9db48.0411091450.2ce50019@posting.google.com>...

The text (Weste and Harris, "CMOS VLSI Design", 3rd edition, excellent
book) and my DRAM reference (Keeth & Baker, "DRAM Circuit Design")
weren't much help, so I thought I'd ask the net.

Both those books are very good.

The other book you could add would be

L. Glasser and D. Dobberpuhl, The Design and Analysis of VLSI
Circuits. Reading, MA: Addison-Wesley, 1985.

which I consider the more serious circuit design book; perhaps the
reduced need for circuit design relative to logic/system work has left it behind.

Q1
Anyway, the answer to the VDD/2 bitline precharge is very simple.

In older DRAMs the line was charged to VDD or even VSS. The sense
amp is a cross-coupled 2T N-flop with some extra devices for access,
equilibration, and precharging. The cross-couple has exponential
gain if its common-source floating sense ground is slowly pulled to true
ground along an exponential path, which is achieved by a small and a big
NMOS successively pulling maybe 64 or more sense amps to ground.

It's important that all sense amps be similar and not interfere with
each other even though they all share a common sense drive line. In
CMOS the same is also true for the top side, since there may also be a
cross-coupled P-flop, but the N side is more important.

Now, when the bitlines were charged to VDD, the opened bit cell
contributed only a very small charge to the selected bitline. The other
side of the flop needs to be in the center of the eye, and the only way
that could be achieved was to give the other side a reference charge
exactly half the maximum change of the selected side.

Conundrum: if the bit cell is as small as possible to maximize the number
of cells in the array, how can you make a reference cell half that size?
Well, you can't do it reliably, and most techniques at the time did weird
and wonderful things to fake it. One scheme involved using a normal
cell always charged with a 0 and dumping it onto 2 adjacent bitlines.

If the bitline is precharged to the mid level, then the charge in the
cell will nudge the line by about the same Vdiff either way, and the other
side of the sense amp needs no reference charge, since the other bitline
is already centered.

Another reason is that the access transistor is an NMOS device, so if its
wordline gate is taken to VDD it is fully on, allowing the charge to
transfer fully on or off the bitline, since Vt << VDD/2.
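The Vt constraint can be made concrete with a one-line pass-gate model. The threshold and boost values below are assumptions for illustration; many real designs boost the wordline above VDD for exactly this reason:

```python
# An NMOS access transistor with its gate at the wordline voltage can
# only pull its source (the cell node) up to V_wl - Vt before it cuts
# off. VDD and VT here are assumed, illustrative values.
VDD = 1.8
VT = 0.4                       # access-transistor threshold (V), assumed

def max_cell_high(v_wl, v_bl, vt=VT):
    """Highest level the cell can be charged to through the NMOS pass gate."""
    return min(v_bl, v_wl - vt)

plain = max_cell_high(VDD, VDD)               # gate at VDD: degraded "1"
boosted = max_cell_high(VDD + VT + 0.2, VDD)  # boosted gate: full VDD "1"
```

With a VDD/2 target level the unboosted gate has plenty of overdrive, which is the point being made above; it is only a full-VDD write-back that runs into the Vt drop.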

Q2
DRAMs don't perform writes at all in the classic sense, because they
open up a line and change only 1 bit out of maybe 64 or 256, etc. Every
write cycle is exactly the same as a read cycle, with a minor change.

When the entire row has been read, the act of sensing the row bits
into the array of sense amps is always followed by a wait time so that
the fully restored data on the sense amps, which is now VDD/VSS, is
passed back into the bit cells.

Initially, during the sense, the sense-amp flops are sampling the
bitlines with a very small charge. The flops are voltage-coupled to the
bitlines but capacitively decoupled, in the following sense: the bitlines
have enormous capacitance and move slowly, while the sense amps have very
little self-capacitance but connect to the bitlines through smallish NMOS
access devices that separate them from the big capacitance. During the
amplification phase, when the 2 driver sense transistors apply the
exponential down signal to the common sense ground of the amps, the
sense-amp nodes still move very slowly so as not to disturb the bitline
voltages. When the amp has a significant margin, it can safely drive
current out to the 2 bitlines.

It really helps to run a Spice simulation of this to see how the voltages
move around and come back into the cell. It's a bit too difficult to
explain in words.

The write part
Now all the bits are refreshed whether 1 particular bit is needed or
not. Sometime during the sense phase, we can dump the desired write
bit via the column read muxing circuit into the specific sense amp
that is reading the desired cell.

In essence, a write cycle is always a read cycle, with the read path
flowing backwards to disturb the sensing so as to put the desired data
into the cell. It can't get any simpler than that.

Hope that helps,

John Jakson
johnjakson_usa_com

(unemployed old time VLSI circuit designer that sometimes wouldn't
mind being asked to design chips again)
 
