EDK : FSL macros defined by Xilinx are wrong

"Symon" <symon_brewer@hotmail.com> wrote in message
news:fgqj6r$d3k$1@aioe.org...
"Jim Granville" <no.spam@designtools.maps.co.nz> wrote in message
news:4730b9f2@clear.net.nz...

Also, unlike uC design teams who are very 'analog aware', FPGA
development is rather cocooned in the digital world - Linear stuff !?.

-jg

Hi Jim,

I think that because FPGAs are on the latest and greatest geometry,
building an SMPS onto the die is not a practical proposition. The FPGA
manufacturers (and their customers) want faster, smaller, better. Think of
all the LUTs a 4A pfet would replace. Also, I'd trust Linear Tech. to do a
much better job of a SMPS than an FPGA manufacturer.

Cheers, Syms.

p.s. I think that John's use of a diode drop for Vccaux is just fine.
Simple and robust. I wonder if the temperature sensing diode that exists
in some FPGAs could be used for this. ;-) (Is that repulsive enough?)
Thanks for the laugh, Symon. That is really "thinking out-of-the-box".

Bob
 
"Jim Granville" <no.spam@designtools.maps.co.nz> wrote in message
news:4730b9f2@clear.net.nz...
Also, unlike uC design teams who are very 'analog aware', FPGA development
is rather cocooned in the digital world - Linear stuff !?.

-jg

Hi Jim,

I think that because FPGAs are on the latest and greatest geometry, building
an SMPS onto the die is not a practical proposition. The FPGA manufacturers
(and their customers) want faster, smaller, better. Think of all the LUTs a
4A pfet would replace. Also, I'd trust Linear Tech. to do a much better job
of a SMPS than an FPGA manufacturer.

Cheers, Syms.

p.s. I think that John's use of a diode drop for Vccaux is just fine. Simple
and robust. I wonder if the temperature sensing diode that exists in some
FPGAs could be used for this. ;-) (Is that repulsive enough?)
 
Since the only purpose of the refresh circuitry is to avoid the
memory dropping bits, it should already be running at the slowest
possible rate, and speed reduction will be harmful, while speed
increase will do no good. So this is not a good idea.

What are you trying to do?
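
(As a rough illustration of why slowing refresh buys so little: at a typical
rate, the refresh overhead is already well under one percent of the device's
time. A back-of-the-envelope sketch in Python; the retention time, row count
and refresh-cycle time below are assumed, typical-looking figures, not taken
from any particular datasheet.)

# Rough estimate of the time lost to distributed auto-refresh.
# All figures are assumptions for illustration, not datasheet values.
RETENTION_MS = 64.0   # assumed cell retention / total refresh period
ROWS = 8192           # assumed number of refresh commands per period
T_RFC_NS = 66.0       # assumed time one refresh cycle keeps the part busy

t_refi_us = RETENTION_MS * 1000.0 / ROWS       # average interval between refreshes
overhead = T_RFC_NS / (t_refi_us * 1000.0)     # fraction of time spent refreshing
print(f"refresh every ~{t_refi_us:.2f} us, overhead ~{overhead * 100:.2f}%")
# -> roughly 7.81 us between refreshes and well under 1% overhead
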
Although it's not expressed in DRAM specs and you wouldn't want to
rely on it, the effect of reducing refresh rate is to increase the
access time. I'm not up-to-date with DRAM technology, but my
experience with devices 30 years ago was that you could turn off
refresh (and all other access) for 10s or more without losing the
contents, provided you weren't pushing the device to its access time
limits.

So, it's not impossible that reducing refresh rate would have a use
(albeit outside the published device spec). But, as you suggest, it
would help if he would just tell us what he's trying to do.

Mike
 
<MikeShepherd564@btinternet.com> wrote in message
news:1evsh3ds7i44iqhrsc4kldthlo2vb0tul2@4ax.com...
Although it's not expressed in DRAM specs and you wouldn't want to
rely on it, the effect of reducing refresh rate is to increase the
access time. I'm not up-to-date with DRAM technology, but my
experience with devices 30 years ago was that you could turn off
refresh (and all other access) for 10s or more without losing the
contents, provided you weren't pushing the device to its access time
limits.

So, it's not impossible that reducing refresh rate would have a use
(albeit outside the published device spec). But, as you suggest, it
would help if he would just tell us what he's trying to do.

Mike
Although that may well be the case for asynchronous DRAMs (because the
reduced charge in the memory cell capacitor would mean that the sense
amplifier took longer to register the state), this would not be the case for
SDRAM, since an SDRAM registers its outputs a fixed number of clocks after the
access starts. If the underlying access time increased by too much then the
data would just be wrong.
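
(A tiny worked example of that "fixed number of clocks" point; the clock rate
and CAS latency below are assumed values, just for illustration.)

# SDRAM returns read data a fixed number of clocks (the CAS latency)
# after the READ command, regardless of how marginal the stored charge is.
CLK_MHZ = 100          # assumed SDRAM clock
CAS_LATENCY = 3        # assumed CL setting
t_clk_ns = 1000.0 / CLK_MHZ
print(f"data is sampled {CAS_LATENCY * t_clk_ns:.0f} ns after READ")
# The controller samples at that fixed instant; if the array needed longer,
# the read does not get slower -- it simply returns wrong data.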
 
On Oct 23, 5:27 pm, "David Spencer" <davidmspen...@verizon.net> wrote:
MikeShepherd...@btinternet.com> wrote in message

news:1evsh3ds7i44iqhrsc4kldthlo2vb0tul2@4ax.com...

Although it's not expressed in DRAM specs and you wouldn't want to
rely on it, the effect of reducing refresh rate is to increase the
access time. I'm not up-to-date with DRAM technology, but my
experience with devices 30 years ago was that you could turn off
refresh (and all other access) for 10s or more without losing the
contents, provided you weren't pushing the device to its access time
limits.

So, it's not impossible that reducing refresh rate would have a use
(albeit outside the published device spec). But, as you suggest, it
would help if he would just tell us what he's trying to do.

Mike

Although that may well be the case for asynchronous DRAMs (because the
reduced charge in the memory cell capacitor would mean that the sense
amplifier took longer to register the state), this would not be the case for
SDRAM since this registers the outputs a fixed number of clocks after the
access starts. If the underlying access time increased by too much then the
data would just be wrong.
For certain addressing patterns, the refresh can be eliminated
altogether, when the addressing sequence is such that all (used)
memory cells are naturally being read, and thus refreshed, within the
required time.
Peter Alfke
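
(One way to convince yourself that a given access pattern really does provide
enough implicit refresh is to run its address trace through a simple check:
every used row must be activated at least once per retention period. A minimal
sketch in Python; the retention time, geometry and address-to-row mapping are
assumptions for illustration, not a description of any particular part.)

# Check whether an address stream re-activates every used row often enough
# that explicit refresh could be skipped (checks steady-state gaps only).
RETENTION_NS = 64_000_000   # assumed 64 ms retention
ROWS = 8192                 # assumed row count
COLS = 1024                 # assumed columns per row

def refresh_ok(accesses):
    """accesses: iterable of (time_ns, address) pairs in time order."""
    last_seen = {}                      # row -> time of last activation
    for t, addr in accesses:
        row = (addr // COLS) % ROWS     # assumed simple linear row mapping
        prev = last_seen.get(row)
        if prev is not None and t - prev > RETENTION_NS:
            return False                # this row went stale between visits
        last_seen[row] = t
    return True                         # every used row was revisited in time

Feeding it the address/time trace of a frame-buffer scan, for example, would
show every used row being revisited once per frame.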
 
On Oct 24, 07:50, Peter Alfke <al...@sbcglobal.net> wrote:
On Oct 23, 5:27 pm, "David Spencer" <davidmspen...@verizon.net> wrote:

MikeShepherd...@btinternet.com> wrote in message

news:1evsh3ds7i44iqhrsc4kldthlo2vb0tul2@4ax.com...

Although it's not expressed in DRAM specs and you wouldn't want to
rely on it, the effect of reducing refresh rate is to increase the
access time. I'm not up-to-date with DRAM technology, but my
experience with devices 30 years ago was that you could turn off
refresh (and all other access) for 10s or more without losing the
contents, provided you weren't pushing the device to its access time
limits.

So, it's not impossible that reducing refresh rate would have a use
(albeit outside the published device spec). But, as you suggest, it
would help if he would just tell us what he's trying to do.

Mike

Although that may well be the case for asynchronous DRAMs (because the
reduced charge in the memory cell capacitor would mean that the sense
amplifier took longer to register the state), this would not be the case for
SDRAM since this registers the outputs a fixed number of clocks after the
access starts. If the underlying access time increased by too much then the
data would just be wrong.

For certain addressing patterns, the refresh can be eliminated
altogether, when the addressing sequence is such that all (used)
memory cells are naturally being read, and thus refreshed, within the
required time.
Peter Alfke

Sinclair ZX?
at least some old Z80 home computers used refresh by video scan

Antti
 
On Wed, 24 Oct 2007 07:15:08 -0000,
Antti <Antti.Lukats@googlemail.com> wrote:

For certain addressing patterns, the refresh can be eliminated
altogether, when the addressing sequence is such that all (used)
memory cells are naturally being read, and thus refreshed, within the
required time.
Peter Alfke

Sinclair ZX?
at least some old Z80 home computers used refresh by video scan
Yes, and it's a completely ridiculous way to do it. The
added cost of making frequent additional row accesses is
far greater than the cost of the necessary refresh.

A DRAM row is effectively a cache. When you access a row,
you read the whole row into the DRAM's row buffer as a free
side-effect, and can then make very fast column accesses
to any location in the row. It's preposterous to throw
away that massive free bandwidth just to save yourself
some refresh effort - unless you're trying to design
an $80 home computer/toy in the early 1980s.

In those days, the video buffer was a sufficiently
large fraction of the overall DRAM that it was
reasonable to lay out the video memory so that
every row was automatically visited by the video
scan, giving a refresh cycle every 20ms (16.7ms
in the USA). That was out-of-spec for many DRAMs
of the day (8ms refresh cycle) but in practice it
worked in almost all cases - and the manufacturers
of those computers had a shoddy enough warranty
policy that they weren't going to worry about a
handful of customers complaining about occasional
mysterious memory corruption on a hot day.
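
(To put rough numbers on how much the open-row "cache" is worth: compare the
per-word cost of streaming from an open row with the cost of reopening the row
for every single word. All timing figures below are assumptions picked to look
plausible for an SDR-era part, not datasheet values.)

# Crude per-word cost: streaming from one open row vs. reopening it each time.
T_CLK_NS = 10.0   # assumed 100 MHz SDRAM clock
T_RCD = 2         # clocks, ACTIVATE to READ (assumed)
T_CL = 2          # clocks, CAS latency (assumed)
T_RP = 2          # clocks, PRECHARGE (assumed)
BURST = 256       # words streamed from the open row

stream_ns_per_word = (T_RCD + T_CL + BURST) * T_CLK_NS / BURST
reopen_ns_per_word = (T_RCD + T_CL + 1 + T_RP) * T_CLK_NS
print(f"~{stream_ns_per_word:.1f} ns/word streaming, "
      f"~{reopen_ns_per_word:.0f} ns/word reopening the row each time")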

--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
 
For certain addressing patterns, the refresh can be eliminated
altogether, when the addressing sequence is such that all (used)
memory cells are naturally being read, and thus refreshed, within the
required time.
That happens in a couple of common cases...

Running video refresh out of DRAM
Running DSP code
Running memory tests :)

I once worked on a memory board that worked better (at least as
measured by memory diagnostics) when the refresh was clipleaded out.
(We had a bug in the arbiter.)

--
These are my opinions, not necessarily my employer's. I hate spam.
 
On Oct 24, 5:04 am, hal-use...@ip-64-139-1-69.sjc.megapath.net (Hal
Murray) wrote:
For certain addressing patterns, the refresh can be eliminated
altogether, when the addressing sequence is such that all (used)
memory cells are naturally being read, and thus refreshed, within the
required time.

That happens in a couple of common cases...

Running video refresh out of DRAM
Running DSP code
Running memory tests :)

I once worked on a memory board that worked better (at least as
measured by memory diagnostics) when the refresh was clipleaded out.
(We had a bug in the arbiter.)

--
These are my opinions, not necessarily my employer's. I hate spam.

For SDR SDRAMs, the refresh period depends on the density. Highest
density parts need twice the refresh rate (a refresh interval of about 7.8 µs vs. 15.6 µs).
If you sensed the part size, or used a DIMM or SO-DIMM with a PROM
for configuration, you may want to set up the refresh rate (once)
after the FPGA is running. A full-fledged SDRAM controller could
also set up other parameters based on a configuration PROM. This
is not something that needs to be dynamic for any given system.
You wouldn't swap out DIMMs with the power on. However it can be
more useful than requiring a different configuration load for the
FPGA depending upon the installed memory.
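
(In practice the "set it once" step is just loading a different terminal count
into the refresh timer. A small sketch of that arithmetic; the 7.8 µs / 15.6 µs
intervals follow the figures above, while the 256 Mbit threshold and the
100 MHz controller clock are assumptions for illustration.)

# Choose a refresh-timer terminal count from the sensed device density.
CTRL_CLK_MHZ = 100.0    # assumed controller clock

def refresh_count(density_mbit):
    # assumed threshold: high-density parts use the shorter 7.8 us interval
    t_refi_us = 7.8 if density_mbit >= 256 else 15.6
    return int(t_refi_us * CTRL_CLK_MHZ)   # clocks between AUTO REFRESH commands

print(refresh_count(64))    # -> 1560 clocks between refreshes
print(refresh_count(512))   # -> 780 clocks between refreshes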
 
Jonathan, why so aggressive?
I was just pointing out that certain applications naturally perform
sufficient refresh operations in their normal addressing sequence. I
can't see why this is "completely ridiculous"...
Peter Alfke

On Oct 24, 12:40 am, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com>
wrote:
On Wed, 24 Oct 2007 07:15:08 -0000,

Antti <Antti.Luk...@googlemail.com> wrote:
For certain addressing patterns, the refresh can be eliminated
altogether, when the addressing sequence is such that all (used)
memory cells are naturally being read, and thus refreshed, within the
required time.
Peter Alfke

Sinclair ZX?
at least some old Z80 home computers used refresh by video scan

Yes, and it's a completely ridiculous way to do it. The
added cost of making frequent additional row accesses is
far greater than the cost of the necessary refresh.

A DRAM row is effectively a cache. When you access a row,
you read the whole row into the DRAM's row buffer as a free
side-effect, and can then make very fast column accesses
to any location in the row. It's preposterous to throw
away that massive free bandwidth just to save yourself
some refresh effort - unless you're trying to design
an $80 home computer/toy in the early 1980s.

In those days, the video buffer was a sufficiently
large fraction of the overall DRAM that it was
reasonable to lay out the video memory so that
every row was automatically visited by the video
scan, giving a refresh cycle every 20ms (16.7ms
in the USA). That was out-of-spec for many DRAMs
of the day (8ms refresh cycle) but in practice it
worked in almost all cases - and the manufacturers
of those computers had a shoddy enough warranty
policy that they weren't going to worry about a
handful of customers complaining about occasional
mysterious memory corruption on a hot day.

--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.brom...@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
 
On Oct 24, 2:15 am, Antti <Antti.Luk...@googlemail.com> wrote:
On Oct 24, 07:50, Peter Alfke <al...@sbcglobal.net> wrote:

On Oct 23, 5:27 pm, "David Spencer" <davidmspen...@verizon.net> wrote:

MikeShepherd...@btinternet.com> wrote in message

news:1evsh3ds7i44iqhrsc4kldthlo2vb0tul2@4ax.com...

Although it's not expressed in DRAM specs and you wouldn't want to
rely on it, the effect of reducing refresh rate is to increase the
access time. I'm not up-to-date with DRAM technology, but my
experience with devices 30 years ago was that you could turn off
refresh (and all other access) for 10s or more without losing the
contents, provided you weren't pushing the device to its access time
limits.

So, it's not impossible that reducing refresh rate would have a use
(albeit outside the published device spec). But, as you suggest, it
would help if he would just tell us what he's trying to do.

Mike

Although that may well be the case for asynchronous DRAMs (because the
reduced charge in the memory cell capacitor would mean that the sense
amplifier took longer to register the state), this would not be the case for
SDRAM since this registers the outputs a fixed number of clocks after the
access starts. If the underlying access time increased by too much then the
data would just be wrong.

For certain addressing patterns, the refresh can be eliminated
altogether, when the addressing sequence is such that all (used)
memory cells are naturally being read, and thus refreshed, within the
required time.
Peter Alfke

Sinclair ZX?
at least some old Z80 home computers used refresh by video scan

Antti
If I recall, the Apple II refreshed its RAM this way too.
-Dave Pollum
 
On Wed, 24 Oct 2007 11:29:29 -0700, Peter Alfke <peter@xilinx.com>
wrote:

Jonathan, why so aggressive?
Ooh, I can be much more aggressive than that! And it
certainly wasn't directed at you.

I was just pointing out that certain applications naturally perform
sufficient refresh operations in their normal addressing sequence. I
can't see why this is "completely ridiculous"...
Nor is it; the absurdity comes from bending the addressing
so that only a small part of each row is sequentially accessed,
thereby wasting the massive increase in memory bandwidth that
can be achieved for sequential-access applications by using
the row buffer as a cache. My spleen was being vented at some
designers of old computers (as alluded to by Antti, not you)
who used video scan to access every row of DRAM on each video
field, thereby unnecessarily burning-up memory bandwidth
(which was in short enough supply on such machines) in order
to save the trouble of doing refresh properly...
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
 
Jonathan Bromley wrote:
(snip)

Yes, and it's a completely ridiculous way to do it. The
added cost of making frequent additional row accesses is
far greater than the cost of the necessary refresh.
Processor speed has increased somewhat faster than DRAM speed.

A DRAM row is effectively a cache. When you access a row,
you read the whole row into the DRAM's row buffer as a free
side-effect, and can then make very fast column accesses
to any location in the row. It's preposterous to throw
away that massive free bandwidth just to save yourself
some refresh effort - unless you're trying to design
an $80 home computer/toy in the early 1980s.
When RAM cycle time was faster than processor cycle time.

In those days, the video buffer was a sufficiently
large fraction of the overall DRAM that it was
reasonable to lay out the video memory so that
every row was automatically visited by the video
scan, giving a refresh cycle every 20ms (16.7ms
in the USA). That was out-of-spec for many DRAMs
of the day (8ms refresh cycle) but in practice it
worked in almost all cases - and the manufacturers
of those computers had a shoddy enough warranty
policy that they weren't going to worry about a
handful of customers complaining about occasional
mysterious memory corruption on a hot day.
Any access to the row will refresh the whole row.
If you address it such that sequential characters are
in different rows then it is refreshed much faster than
the frame rate.
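
(Whether a linear scan refreshes rows quickly or slowly comes down to which
address bits are routed to the row address. A small sketch of the two mappings
glen describes; the row and column widths are assumed values.)

# Two ways to split a flat frame-buffer address into a DRAM row number.
COL_BITS = 10   # assumed column-address width
ROW_BITS = 12   # assumed row-address width

def row_if_high_bits(addr):
    # column bits are the low bits: a sequential scan dwells on one row
    return (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)

def row_if_low_bits(addr):
    # row bits are the low bits: a sequential scan hops to a new row every
    # word, so all rows are touched after only 2**ROW_BITS accesses
    return addr & ((1 << ROW_BITS) - 1)

for a in range(4):
    print(a, row_if_high_bits(a), row_if_low_bits(a))
# -> rows 0,0,0,0 with the first mapping; rows 0,1,2,3 with the second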

-- glen
 
On Wed, 24 Oct 2007 13:06:34 -0800,
glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote:

Processor speed has increased somewhat faster than DRAM speed.
Indeed so; a fair point. And you could perhaps also argue
that the cost of row access, as a fraction of a data access,
has increased quite dramatically over that time.

A DRAM row is effectively a cache. When you access a row,
you read the whole row into the DRAM's row buffer as a free
side-effect, and can then make very fast column accesses
to any location in the row. It's preposterous to throw
away that massive free bandwidth just to save yourself
some refresh effort - unless you're trying to design
an $80 home computer/toy in the early 1980s.

When RAM cycle time was faster than processor cycle time.
That too is an interesting point. My own experience of
that sort of video controller was that they typically
caused lots of processor stalling while video data
was being fetched, but it may have been different
for other designs.

If you address it such that sequential characters are
in different rows then it is refreshed much faster than
the frame rate.
True, but then you are *really* wasting bandwidth
by doing more row accesses than necessary.

I guess you could, by juggling the use of address bits
sufficiently cunningly, arrange that row accesses by
video scan would *just* provide enough refresh to
satisfy the data sheet spec.
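
(That bit-juggling question reduces to simple arithmetic: how many new rows per
scanline must the video fetch open so that every row is visited within the spec
period? All figures below are assumed, for illustration only.)

import math

ROWS = 128            # assumed number of rows needing refresh
SPEC_MS = 8.0         # assumed datasheet refresh period
LINE_TIME_US = 64.0   # assumed scanline period

lines_available = SPEC_MS * 1000.0 / LINE_TIME_US
rows_per_line = math.ceil(ROWS / lines_available)
print(f"need at least {rows_per_line} new row(s) opened per scanline")
# -> 2 with these numbers; one row per line would just miss the spec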

I've seen many different variants on this: block refresh
during frame blanking, for example. They all seemed
pretty unpleasant to me at the time, and still seem so
now - although, of course, no-one needs to do that sort
of dirty trick any more (do they? please?)
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
 
Jonathan Bromley wrote:
On Wed, 24 Oct 2007 11:29:29 -0700, Peter Alfke <peter@xilinx.com>
wrote:


Jonathan, why so aggressive?


Ooh, I can be much more aggressive than that! And it
certainly wasn't directed at you.


I was just pointing out that certain applications naturally perform
sufficient refresh operations in their normal addressing sequence. I
can't see why this is "completely ridiculuous"...


Nor is it; the absurdity comes from bending the addressing
so that only a small part of each row is sequentially accessed,
thereby wasting the massive increase in memory bandwidth that
can be achieved for sequential-access applications by using
the row buffer as a cache. My spleen was being vented at some
designers of old computers (as alluded to by Antti, not you)
who used video scan to access every row of DRAM on each video
field, thereby unnecessarily burning-up memory bandwidth
(which was in short enough supply on such machines) in order
to save the trouble of doing refresh properly...
The bandwidth is there for the designer to use how they wish.
It also only actually matters, if that bandwidth is the
bottleneck in the total design.

E.g., I have done designs using interleaved video access, which removes
flicker and makes the system appear to be dual-port.
On your yardstick, because the bandwidth is not 100% used, this
is a bad design?

-jg
 
Peter Alfke wrote:


For certain addressing patterns, the refresh can be eliminated
altogether, when the addressing sequence is such that all (used)
memory cells are naturally being read, and thus refreshed, within the
required time.
Peter Alfke
Such as for a video processor. I've done several that used no refresh.
 
Peter Alfke wrote:

Jonathan, why so aggressive?
I was just pointing out that certain applications naturally perform
sufficient refresh operations in their normal addressing sequence. I
can't see why this is "completely ridiculous"...
Peter Alfke

On Oct 24, 12:40 am, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com>
wrote:


Yes, and it's a completely ridiculous way to do it. The
added cost of making frequent additional row accesses is
far greater than the cost of the necessary refresh.

And by not having to perform explicit refreshes, the bandwidth is
slightly higher and latency is more predictable. If your application is
one that always addresses all the memory that it uses (no need to
refresh rows you are not using) within the minimum refresh interval,
then this can sometimes be used to simplify the system. There are still
plenty of FPGA applications that, for example, use the DRAM only for a
video frame buffer.
 
On Oct 24, 1:40 pm, Dave Pollum <vze24...@verizon.net> wrote:
On Oct 24, 2:15 am, Antti <Antti.Luk...@googlemail.com> wrote:

On Oct 24, 07:50, Peter Alfke <al...@sbcglobal.net> wrote:

On Oct 23, 5:27 pm, "David Spencer" <davidmspen...@verizon.net> wrote:

MikeShepherd...@btinternet.com> wrote in message

news:1evsh3ds7i44iqhrsc4kldthlo2vb0tul2@4ax.com...

Although it's not expressed in DRAM specs and you wouldn't want to
rely on it, the effect of reducing refresh rate is to increase the
access time. I'm not up-to-date with DRAM technology, but my
experience with devices 30 years ago was that you could turn off
refresh (and all other access) for 10s or more without losing the
contents, provided you weren't pushing the device to its access time
limits.

So, it's not impossible that reducing refresh rate would have a use
(albeit outside the published device spec). But, as you suggest, it
would help if he would just tell us what he's trying to do.

Mike

Although that may well be the case for asynchronous DRAMs (because the
reduced charge in the memory cell capacitor would mean that the sense
amplifier took longer to register the state), this would not be the case for
SDRAM since this registers the outputs a fixed number of clocks after the
access starts. If the underlying access time increased by too much then the
data would just be wrong.

For certain addressing patterns, the refresh can be eliminated
altogether, when the addressing sequence is such that all (used)
memory cells are naturally being read, and thus refreshed, within the
required time.
Peter Alfke

Sinclair ZX?
at least some old Z80 home computers used refresh by video scan

Antti

If I recall, the Apple II refreshed its RAM this way too.
-Dave Pollum
The TRS-80 Color Computer (Moto 6809 based) refreshed during the
vertical retrace. But there was a bit in the system controller that
could be set to turn it and video access off, while doubling the
processor clock. As long as your Basic code was running, and not
waiting on a keyboard input or other event, the ROM interpreter's RAM
accesses managed to keep the RAM (at least the part of it being used)
refreshed. But if/when the code hit an error (and thus waited for user
response) you could watch the screen go from random pixels to all
white. Once the coding errors were eliminated, it was a reliable way
to double the processing speed when you did not need video.

Ah the good old days... but, I digress.

Andy
 
Andy wrote:
(snip)

The TRS-80 Color Computer (Moto 6809 based) refreshed during the
vertical retrace. But there was a bit in the system controller that
could be set to turn it and video access off, while doubling the
processor clock.
I thought it was the display memory access that did the refresh.
I probably still have the service manual around somewhere.

As long as your Basic code was running, and not
waiting on a keyboard input or other event, the ROM interpreter's RAM
accesses managed to keep the RAM (at least the part of it being used)
refreshed. But if/when the code hit an error (and thus waited for user
response) you could watch the screen go from random pixels to all
white. Once the coding errors were eliminated, it was a reliable way
to double the processing speed when you did not need video.
If I remember, there were three modes. Normal mode, one that doubled
the clock speed some of the time, and one that doubled it all the time.
I never tried turning the display off, though.

-- glen
 
Jonathan Bromley wrote:
(snip)

I've seen many different variants on this: block refresh
during frame blanking, for example. They all seemed
pretty unpleasant to me at the time, and still seem so
now - although, of course, no-one needs to do that sort
of dirty trick any more (do they? please?)
I have seen ATX style motherboards for PCs with built-in
video that use a part of the main memory. I don't know how they
do the access, though.

-- glen
 
