Binary File I/O in simulation

Guest
Hi,

Is it possible to write binary data (I mean real binary, not the '0' and
'1' characters) to a file in a Verilog simulation? I am using NC-Verilog
and don't want to write the PLI function myself. I googled and someone
suggested using $fwrite() with the +MEMPACK option, but that failed in
my environment. All I want is simple code like the following:

integer output_file;

initial begin
  output_file = $fopen("my_output.bin", "w");
end

always @(posedge clk)
  if (data_valid)
    // I don't want ASCII output in the following line
    $fwrite(output_file, rd_data[31:16]);

// Thanks in advance

--
Regards,
Tsoi Kuen Hung (Brittle)
CSE CUHK
 
The +MEMPACK should have worked if you are using NC-Verilog.
Alternatively you could dump the ASCII data and postprocess using a
Perl or a C++ script.

-Rajat Mitra
 
You can use the Verilog-2001 %u format descriptor to write unformatted
binary data. Just change your $fwrite call to

$fwrite(output_file, "%u", rd_data[31:16]);

You might also want to change your $fopen mode from "w" to "wb" for a
binary file, though I don't think it is necessary in NC-Verilog.
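
Putting the %u suggestion together with the original snippet, a minimal
sketch might look like this (the names clk, data_valid, and rd_data come
from the original post). One caveat worth hedging: per IEEE 1364-2001,
%u writes data in whole 32-bit words, so a 16-bit value comes out as 4
bytes, and the byte order follows the host machine.

```verilog
// Sketch only: combines the %u and "wb" suggestions from this thread.
integer output_file;

initial
  output_file = $fopen("my_output.bin", "wb");

always @(posedge clk)
  if (data_valid)
    // %u writes the value as raw binary, one 32-bit word per value;
    // the upper 16 bits of the word will be zero padding here.
    $fwrite(output_file, "%u", rd_data[31:16]);
```

A post-processing script would then need to pick the meaningful bytes
out of each 4-byte word.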
 
Thanks for all the info! I have figured out a solution, shown below. It
may not be the best, and it may not work in your environment, but it has
worked so far (without any special option in the ncverilog command/script).


/* ----8<---- */
integer file;

initial file = $fopen("screen.rgb", "wb");

always @(posedge LCD_CLK)
  if (LCD_DE) begin
    $fwriteb(file, "%c", LCD_RD);
    $fwriteb(file, "%c", LCD_GD);
    $fwriteb(file, "%c", LCD_BD);
  end
/* ---->8---- */

--
Regards,
Tsoi Kuen Hung (Brittle)
CSE CUHK
 
The %c format descriptor will work properly in NC-Verilog, but some
other simulators are unable to write a zero value with the %c
descriptor.
 
<khtsoi@pc89122.cse.cuhk.edu.hk> writes:

> Is it possible to write binary data (I mean real binary, not the '0'
> and '1' characters) to a file in a Verilog simulation? I am using NC-Verilog
http://groups.google.com/group/comp.lang.verilog/msg/e6943e93addb5ac1

Petter
--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
 
File I/O accesses might reduce simulation speed if you just use the
$fwrite* functions.

Either accumulate a bunch of data in an array and write it all at once
(i.e. reduce the frequency of $fwrite* calls in the simulation), or use
$fflush, or, if possible, a mechanism like setvbuf in C might speed up
simulations.

This stuff becomes more important if your simulation time is long.

Utku.
 
utku.ozcan@gmail.com wrote:
> File I/O accesses might reduce simulation speed, if you just use
> $fwrite* functions.
>
> Either accumulate a bunch of data to an array and write it at once (ie.
> reduce the frequency of $fwrite*'s in simulation). Or use $fflush or if
> possible, a mechanism like setvbuf in C might speed up simulations.
Any reasonable implementation of $fwrite will already be buffering the
output. Trying to do your own buffering in an array before writing may
or may not help any. It will probably end up going through the same
buffering inside $fwrite regardless, so all you might be saving is some
call overhead. Meanwhile executing your Verilog code to accumulate the
data is costing time. The overall performance effect will depend on
your simulator implementation.

Using $fflush will only slow down I/O, by forcing the buffer to be
flushed more often than it would otherwise, and performing more
physical I/O operations. Since you are suggesting improving
performance with buffering, using $fflush is exactly the opposite of
what you want.

Now if your data is already in a large chunk, you are better off
writing it with a single %u format, rather than breaking it into bytes
and writing them with separate %c formats. That may save time in both
the Verilog code and the calling overhead.
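
Applied to the screen.rgb example earlier in the thread (file, LCD_CLK,
LCD_DE, and the three channel signals are the names from that post), the
single-%u variant might be sketched as follows. Note that it changes the
file format: %u emits a full 32-bit word, so each pixel occupies 4
bytes, one of them padding, in host byte order.

```verilog
// Sketch only: one $fwrite per pixel instead of three %c writes.
always @(posedge LCD_CLK)
  if (LCD_DE)
    // Pack the three 8-bit channels into one word; the high byte is
    // explicit zero padding that will appear in the output file.
    $fwrite(file, "%u", {8'h00, LCD_RD, LCD_GD, LCD_BD});
```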
 
sharp@cadence.com wrote:
> utku.ozcan@gmail.com wrote:
>> File I/O accesses might reduce simulation speed, if you just use
>> $fwrite* functions.
>>
>> Either accumulate a bunch of data to an array and write it at once (ie.
>> reduce the frequency of $fwrite*'s in simulation). Or use $fflush or if
>> possible, a mechanism like setvbuf in C might speed up simulations.

> Any reasonable implementation of $fwrite will already be buffering the
> output. Trying to do your own buffering in an array before writing may
> or may not help any. It will probably end up going through the same
> buffering inside $fwrite regardless, so all you might be saving is some
> call overhead. Meanwhile executing your Verilog code to accumulate the
> data is costing time. The overall performance effect will depend on
> your simulator implementation.
Supposing that the simulator architecture is typical, accumulation in
Verilog must be faster than I/O access, because it is pure software
running in memory. Am I missing something here?

> Using $fflush will only slow down I/O, by forcing the buffer to be
> flushed more often than it would otherwise, and performing more
> physical I/O operations. Since you are suggesting improving
> performance with buffering, using $fflush is exactly the opposite of
> what you want.
Do simulator architectures support automatic flushing in the
background? Do they decide automatically that they must flush once a
certain number of $fwrites (or, more generally, of I/O accesses) has
been reached?

I had seen on the internet (unfortunately I cannot find it now) that
tuning setvbuf() in C increases I/O performance. But on HDL platforms,
there is a simulator between the user and the operating system (and thus
the hardware). Do the simulators support I/O buffer tuning to increase
performance?

Thanks for invaluable inputs.
Utku.
 
utku.ozcan@gmail.com writes:

> Supposing that simulator architecture is normalized, accumulation in
> Verilog must be faster than I/O access, because it's pure software
> and running in memory. Am I missing something here?
Basically you're saying that all Verilog simulations are IO bound.
That might not be the case, it depends on the rate the simulation
generates data to be written (could be statistical data every hour
which would not have any significant impact on total simulation time)
and the size of the buffer used to do the write.

> Do the simulator architectures support automatic flushing in
> background? Do they decide automatically they must $fflush in
> background if a number of $fwrites (or more generally, a number of I/O
> accesses) is achieved?
Think of VCS: it compiles Verilog to C (or assembly), which is then
compiled further. Basically it's a C program linked with stdio. It will
flush when the output buffer is full, when explicitly flushed, or when
the file descriptor is closed.

> I had seen on internet (unfortunately cannot find it) that tuning
> setvbuf() in C increases I/O performance. But in HDL platforms,
> there is a simulator between the user and operating system (and thus
> hardware). Do the simulators support I/O buffer tuning to increase
> performance?
I haven't seen this, but there is nothing preventing simulator vendors
from letting you set the IO buffer size. This could be done using the
command line, resource files, $setbuffersize and similar.

These issues are important when you turn on tracing all signals using
VCD, TRN/SST, VPD, or similar.

Petter
 
Petter Gustad wrote:
>> Supposing that simulator architecture is normalized, accumulation in
>> Verilog must be faster than I/O access, because it's pure software
>> and running in memory. Am I missing something here?
>
> Basically you're saying that all Verilog simulations are IO bound.
> That might not be the case, it depends on the rate the simulation
> generates data to be written (could be statistical data every hour
> which would not have any significant impact on total simulation time)
> and the size of the buffer used to do the write.
Then this confirms what I said in my first message: if the sim time is
pretty long, like 2-3 days (which is quite normal these days), I/O
accesses have a significant impact on sim time.

>> Do the simulator architectures support automatic flushing in
>> background? Do they decide automatically they must $fflush in
>> background if a number of $fwrites (or more generally, a number of I/O
>> accesses) is achieved?
>
> Think of VCS, it compiles from Verilog to C (or asm) which is then
> compiled. Basically it's a C program which is linked with stdio. It
> will flush when the output buffer is full, explicitly flushed, or when
> the file descriptor is closed.
The intermediate C code is generated by VCS with default I/O settings
determined by the software, if any. As a user you have no access to
tune these I/O parameters. My opinion is that because the default I/O
settings are not very visible to the user, a coding style that does I/O
accesses has a direct impact on sim time.

>> I had seen on internet (unfortunately cannot find it) that tuning
>> setvbuf() in C increases I/O performance. But in HDL platforms,
>> there is a simulator between the user and operating system (and thus
>> hardware). Do the simulators support I/O buffer tuning to increase
>> performance?
>
> I haven't seen this, but there is nothing preventing simulator vendors
> from letting you set the IO buffer size. This could be done using the
> command line, resource files, $setbuffersize and similar.

> These issues are important when you turn on tracing all signals using
> VCD, TRN/SST, VPD, or similar.
Generally speaking, generating waveforms (VCD/TRN/SST) is implemented
with built-in software by the EDA designers. They might have all the
fancy optimization stuff to speed things up with fflush/setvbuf etc.

But self-checking testbenches, where a lot of data is printed to
stdout, stored in log files (which is typical), or even written as
additional custom test-vector data (also typical), use custom I/O calls
($fwrite etc., which have pre-defined I/O settings). Here you have
practically no control, unlike in C.

My concern is that Verilog platforms can be less flexible than C in
this respect. In Verilog there is software between the programmer and
the OS, which probably restricts your possibilities; practically the
only lever you have is your coding style (accumulate data first, then
write it out).

In C there is nobody between the programmer and the OS, so you can use
whatever you want (fflush, setvbuf, etc.).

Utku.
 
utku.ozcan@gmail.com writes:

>> Basically you're saying that all Verilog simulations are IO bound.
>> That might not be the case, it depends on the rate the simulation
>> generates data to be written (could be statistical data every hour
>> which would not have any significant impact on total simulation time)
>> and the size of the buffer used to do the write.
>
> Then this confirms what I said in my first email. If the sim time is
> pretty long like 2-3 days (which is quite normal these days), I/O
> accesses have a significant impact on sim time.
A few milliseconds every hour would have virtually no significant
impact on simulation time. As I said, it depends upon the rate at which
the simulation generates data to be written. If you print data on every
clock, it can have a severe impact on your simulation time.

>> These issues are important when you turn on tracing all signals using
>> VCD, TRN/SST, VPD, or similar.
>
> Generally speaking, generating waveforms (VCD/TRN/SST) is implemented
> with built-in software by EDA designers. They might have all the fancy
> optimization stuff to speed things up with fflush/setvbuf etc.
The same argument could be true for $fwrite. Simulator developers
might as well try to optimize/buffer $fwrite as well as $dumpvars. As
Steven Sharp says in his message, there might be buffering in $fwrite
already, so adding your own buffer in Verilog might not improve
performance significantly.

> But self-checking testbenches, where a lot of data is printed to
> stdout, store the log files (which is typical) or even additional
Hmm. My self-checking testbenches don't print any data to stdout,
other than the success string, unless there is an error.

> My concern is that Verilog platforms can be less flexible than C in
> this aspect.
True. If there is no $setvbuf or other method of calling setvbuf, you
have to trust that the simulator developers have done their best to
optimize I/O, like you say they do with $dumpvars etc.

Petter

 
utku.ozcan@gmail.com wrote:
> Supposing that simulator architecture is normalized, accumulation in
> Verilog must be faster than I/O access, because it's pure software and
> running in memory. Am I missing something here?
Yes. What you are calling I/O access (i.e. calling $fwrite) mostly
consists of accumulation in buffers by the I/O library, which is pure
software and running in memory. And it is probably written more
efficiently than what you would write in Verilog.

> Do the simulator architectures support automatic flushing in
> background? Do they decide automatically they must $fflush in
> background if a number of $fwrites (or more generally, a number of I/O
> accesses) is achieved?
Most of them probably rely on the C standard I/O library. The usual
implementation of that library flushes a buffer when it gets full.
There is lower-level I/O support in the operating system that deals
with getting that data out onto disk in the background, probably with
additional buffering.

> I had seen on internet (unfortunately cannot find it) that tuning
> setvbuf() in C increases I/O performance. But in HDL platforms, there
> is a simulator between the user and operating system (and thus
> hardware). Do the simulators support I/O buffer tuning to increase
> performance?
Increasing the buffer size with setvbuf can improve performance of C
code that is I/O intensive. Most C applications don't bother, and
probably wouldn't gain much anyway. The Verilog language does not
provide a mechanism for tuning I/O buffering, such as a $setbuf task.
Most users probably wouldn't bother with it if it did, and might not
have the expertise to use it appropriately. It isn't something I have
heard much demand for.
 
You are correct that Verilog provides less low-level control for I/O
buffering than C. Most Verilog users probably don't want to worry
about such things. Even most C programmers don't.

If I/O is becoming a bottleneck (and you shouldn't assume that it is,
without making measurements to demonstrate it), a more effective
strategy is to eliminate I/O rather than trying to tune it.
Self-checking testbenches typically *don't* print out much data. By
checking the data themselves, rather than writing it out to be checked
later, they avoid most of the I/O.
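
As a sketch of that style (clk, data_valid, and rd_data are the names
from the original post; expected_data is a hypothetical reference value
that the testbench would compute), a check like this does essentially no
I/O in a passing run:

```verilog
// Sketch only: compare on the fly and print only on a mismatch.
integer error_count;
initial error_count = 0;

always @(posedge clk)
  if (data_valid && (rd_data !== expected_data)) begin
    error_count = error_count + 1;
    $display("FAIL at %0t: rd_data=%h expected=%h",
             $time, rd_data, expected_data);
  end
```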

Some C applications have to do a lot of I/O, because that is part of
their purpose. For example, the Unix utility cp has to do a lot of I/O
because that is what it is for. But Verilog simulation isn't generally
about writing a program to process a lot of external data. You don't
write a database utility or operating system in Verilog. A Verilog
simulation may not need to say anything except "I passed" or "I failed
in this situation".
 
Finally I found the article I was talking about. It might be useful:
http://www.enterprisestorageforum.com/technology/features/article.php/11192_1569961_4

Thanks for your precious answers. Have a nice day,
Utku
 
utku.ozcan@gmail.com wrote:
> Finally I found the article I was talking about. It might be useful:
> http://www.enterprisestorageforum.com/technology/features/article.php/11192_1569961_4
Thanks for the pointer.
 
