Test Driven Design?

On Saturday, May 20, 2017 at 11:17:19 AM UTC+3, rickman wrote:
On 5/20/2017 3:11 AM, Ilya Kalistru wrote:
Yes. It is xilinx vivado.

Another important advantage of non-project mode is that it is fully compatible with source control systems. When you don't have projects, you don't have piles of junk files of unknown purpose that change every time you open a project or run a simulation. In non-project mode you have only HDL sources and Tcl scripts. Therefore all information is stored in the source control system, but when you commit changes you commit only the changes you have made, not random changes to opaque project files.

In this situation, working with IP cores is a bit trickier, but not much. Considering that you don't change IPs very often, it's not a problem at all.

I see that a very small number of HDL designers know about and use this mode. Maybe I should write an article about it. Where would it be appropriate to publish it?

Doesn't the tool still generate all the intermediate files? The Lattice
tool (which uses Synplify for synthesis) creates a huge number of files
that only the tools look at. They aren't really project files, they are
various intermediate files. Living in the project main directory they
really get in the way.

--

Rick C

It does. You can tell the tool where to generate these files, and I direct them to a dedicated directory. They are easy to delete, and you don't have to add them to your source control system. Since all your important stuff is in the src directory and all the junk is in the sim_* directories, they are easy to manage.

That's what I have in my repository

Project_name
|-- sim_Test1NameDir
|-- sim_test2NameDir
|-- sim_test3NameDir
|   |-- sim_report.log
|   `-- other_junk
|-- synth_Module1Dir
|-- synth_Module2Dir
|   |-- Results
|   |   |-- Reports
|   |   `-- bitfiles
|   `-- Some_junk
|-- src
|   |-- DesignChunk1SrcDir
|   `-- DesignChunk2SrcDir
|-- sim_test1.tcl
|-- sim_test2.tcl
|-- sim_test3.tcl
|-- synth_Module1.tcl
`-- synth_Module2.tcl
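For readers who haven't seen the non-project flow, a synth_Module*.tcl along these lines would be typical. This is only a sketch: the file names, directories, top name, and part number are hypothetical, while the commands themselves are standard Vivado non-project-mode Tcl.

```tcl
# Hypothetical sketch of a non-project-mode build script (synth_Module1.tcl).
# All generated files land in synth_Module1Dir, keeping the src tree clean.
set outdir ./synth_Module1Dir
file mkdir $outdir/Results/Reports
file mkdir $outdir/Results/bitfiles

# Read sources and constraints directly; no project is ever created.
read_vhdl [glob ./src/DesignChunk1SrcDir/*.vhd]
read_xdc ./src/Module1.xdc

# Run the flow step by step, writing checkpoints along the way.
synth_design -top Module1 -part xc7a100tcsg324-1
opt_design
place_design
write_checkpoint -force $outdir/post_place.dcp
route_design
write_checkpoint -force $outdir/post_route.dcp

# Reports and the bitstream go into the results area.
report_timing_summary -file $outdir/Results/Reports/timing.rpt
write_bitstream -force $outdir/Results/bitfiles/Module1.bit
```

A script like this would be run with `vivado -mode batch -source synth_Module1.tcl`, so the whole build is reproducible from the sources and the script alone.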
 
All our builds run in continuous integration, which extracts logs and
timing/area numbers. The bitfiles then get downloaded and booted on FPGA,
then the test suite and benchmarks are run automatically to monitor
performance. Numbers then come back to continuous integration for graphing.

Theo

Nice!
 
Another important advantage of non-project mode is that it is fully compatible with source control systems. When you don't have projects, you don't have piles of junk files of unknown purpose that change every time you open a project or run a simulation. In non-project mode you have only HDL sources and Tcl scripts. Therefore all information is stored in the source control system, but when you commit changes you commit only the changes you have made, not random changes to opaque project files.

In this situation, working with IP cores is a bit trickier, but not much. Considering that you don't change IPs very often, it's not a problem at all.

I see that a very small number of HDL designers know about and use this mode. Maybe I should write an article about it. Where would it be appropriate to publish it?

I would like to know more about this. When I used ISE I only used scripts (shell scripts), and when I transitioned to Vivado I promised myself I would use Tcl scripts, but I've never done that and I'm still just using the GUI. I need the GUI to look at schematics of critical paths or to look at placement, but I'd like to use scripts to do all the PAR and timing and everything else.
 
On Tuesday, May 23, 2017 at 9:26:26 PM UTC+3, Kevin Neilson wrote:
Another important advantage of non-project mode is that it is fully compatible with source control systems. When you don't have projects, you don't have piles of junk files of unknown purpose that change every time you open a project or run a simulation. In non-project mode you have only HDL sources and Tcl scripts. Therefore all information is stored in the source control system, but when you commit changes you commit only the changes you have made, not random changes to opaque project files.

In this situation, working with IP cores is a bit trickier, but not much. Considering that you don't change IPs very often, it's not a problem at all.

I see that a very small number of HDL designers know about and use this mode. Maybe I should write an article about it. Where would it be appropriate to publish it?

I would like to know more about this. When I used ISE I only used scripts (shell scripts), and when I transitioned to Vivado I promised myself I would use Tcl scripts, but I've never done that and I'm still just using the GUI. I need the GUI to look at schematics of critical paths or to look at placement, but I'd like to use scripts to do all the PAR and timing and everything else.

I am writing an article about that. I'll post it here.

I examine timing reports in Vivado's logs, but if I have bad timing somewhere, I often use the GUI as well. It's just easier to understand which part of the code creates the bad timing if you investigate it visually.
I just open Vivado and do
open_checkpoint post_place.cpt
Then I examine the schematics of the paths and their placement. Non-project mode doesn't prevent you from using the GUI when you need it. They work fine together.
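As a sketch of what that interactive inspection can look like from the Tcl console (keeping the checkpoint file name from the post; report_timing, get_timing_paths, and highlight_objects are standard Vivado commands, though the exact option combination here is only illustrative):

```tcl
# Open a placed checkpoint and look at the worst paths.
open_checkpoint post_place.cpt
report_timing -max_paths 10 -sort_by slack

# Highlight the cells of the single worst path so it stands out
# in the device and schematic views.
set worst [get_timing_paths -max_paths 1]
highlight_objects [get_cells -of_objects $worst]
```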
 
It's a draft of an article.
https://docs.google.com/document/d/17LgQjxYdh8Dxy4NdFWWNYQ7up8MFNG4GQdPfv3s5LzI/edit?usp=sharing
It would be great if you left your comments right in the document or here, so that I could improve it.

Thanks.
 
On Saturday, June 3, 2017 at 1:54:05 AM UTC+3, Ilya Kalistru wrote:
It's a draft of an article.
https://docs.google.com/document/d/17LgQjxYdh8Dxy4NdFWWNYQ7up8MFNG4GQdPfv3s5LzI/edit?usp=sharing
It would be great if you left your comments right in the document or here, so that I could improve it.

Thanks.

It's finally published on edn.com! http://tinyurl.com/y9ekp7lf
 
If we are talking about the typical software developer interpretation of TDD, we're dealing with code/test cycles that can be as short as one minute. This is only possible with a fully automated approach to testing, which is why we have unit testing frameworks designed with this requirement in mind. I only know of two such frameworks for HDL, VUnit and SVUnit, so the popularity of these tools may serve as an indication of how widely TDD is used. I can't speak for SVUnit, but from what I understand it's mostly used by ASIC verification engineers. However, I'm "the guy" who set up the test environment for VHDL-2017 and also one of the authors of VUnit, so maybe I can shed some light on that area.

VUnit is today used by both FPGA and ASIC teams, for VHDL and Verilog, from the US to Japan, when developing everything from high-volume products like automotive vision to niche military systems, by simulator vendors to verify their tools, and for education. If you're like me, you prefer facts over claims from a promoter, so I would recommend that you google for VUnit rather than TDD. You should be able to find job ads where people look for VUnit skills, that the latest version of Sigasi Studio added VUnit support, independent training providers, university education, VHDL textbooks, etc. You can also compare VUnit with other popular open source projects. Search for VHDL on GitHub and you'll find most of them. Look at the number of stars, where the supporters work, the number of forks, the number of issues opened and the number closed. Look at the project homepages and the activity on their forums.

However, just because people use VUnit doesn't mean that they are doing TDD. What I see is mainly two types of users. Some users have their own SW experience with TDD/unit testing or have seen SW people doing it. They know what they want and end up doing just that. The other type of user doesn't have that experience or finds the concept somewhat absurd. They are more interested in the convenience of having everything fully automated, but they stick with their longer code/test cycle and tend to do less refactoring. It doesn't mean that they don't get there eventually, but it usually takes longer. Unit testing and TDD are an acquired taste for anyone and require practice. The good thing is that you can take small steps and still get rewarded.
 
After reading this thread I also feel that I need to "defend" the simplicity of VUnit :) If you look at the VUnit setup I made for the IEEE standard libraries, I can understand if it looks complicated, but that is not your average project and not the place to get started. A better starting point is https://www.youtube.com/channel/UCCPVCaeWkz6C95aRUTbIwdg. There you will find a clip showing how to install VUnit in a minute, how you can create a compile script for a non-trivial project in a minute, and how you can have your project fully automated by adding 5 lines of code to every testbench. http://vunit.github.io/user_guide.html is another good source of information. So what makes the IEEE libraries different? Well...
- The simulator already knows about these libraries and gets confused when you try to compile them yourself.
- The code is not fully self-contained VHDL but relies on the simulator "magically" implementing some functionality.
- Some subprograms are expected to assert warning/error/failure on certain inputs and that is not something that you can verify automatically within VHDL since the simulator stops. That can be handled with Python scripting or you can use VUnit's assertion library which supports mocking of asserts.
- I set up a continuous integration server to run all tests whenever people want to submit code to the Git repository. You don't have to do that in order to run VUnit or any other unit testing framework.
- IEEE already had 30 or so testbenches, but these testbenches also contain many test cases. VUnit allows you to isolate these test cases so that you can run them and have their status reported individually. Rather than having 30 passing testbenches, we now have 1700 passing test cases. You don't have to do this though; 5 lines of code is sufficient.
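To make the "5 lines of code" concrete, here is a minimal testbench sketch following the pattern in the public VUnit user guide. The entity name and test name are made up for illustration; the VUnit-specific parts are the library/context clauses, the runner_cfg generic, and the test_runner_setup/test_runner_cleanup calls.

```vhdl
library vunit_lib;
context vunit_lib.vunit_context;

entity tb_example is
  generic (runner_cfg : string);  -- passed in by the VUnit Python runner
end entity;

architecture tb of tb_example is
begin
  main : process
  begin
    test_runner_setup(runner, runner_cfg);
    while test_suite loop
      if run("test_that_something_holds") then
        -- Each run(...) branch becomes an individually reported test case.
        assert 1 + 1 = 2;
      end if;
    end loop;
    test_runner_cleanup(runner);  -- ends the simulation
  end process;
end architecture;
```

A small Python run script then discovers testbenches like this one, compiles the sources, and runs each test case in its own simulation, which is how the IEEE setup gets from 30 testbenches to 1700 individually reported test cases.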
 
