New soft processor core paper publisher?

In comp.lang.forth Paul Rubin <no.email@nospam.invalid> wrote:
Java is either the most popular or the second most popular programming
language in the world. Most Java runs on servers;

I think most Java runs on SIM cards.
Err, what is this belief based on? I mean, you might be right, but I
never heard that before.

Andrew.
 
You might want to fix your attribution line... What you replied to were
not my words.

Rick


On 6/23/2013 5:04 AM, Andrew Haley wrote:
In comp.lang.forth rickman<gnuarm@gmail.com> wrote:

Backing up a bit, it strikes me as a bit crazy to make a language
based on the concept of a weird target processor. I mean, I get the
portability thing, but at what cost? If my experience as a casual
user (not programmer) of Java on my PC is any indication (data point
of one, the plural of anecdote isn't data, etc.), the virtual
stack-based processor paradigm has failed, as the constant updates,
security issues, etc. pretty much forced me to uninstall it. And I
would think that a language targeting a processor model that is
radically different than the physically underlying one would be
terribly inefficient unless the compiler can do hand stands while
juggling spinning plates on fire - even if it is, god knows what it
spits out.

Let's pick this apart a bit. Firstly, most Java updates and security
bugs have nothing whatsoever to do with the concept of a virtual
machine. They're almost always caused by coding errors in the
library, and they'd be bugs regardless of the architecture of the
virtual machine. Secondly, targeting a processor model that is
radically different than the physically underlying one is what every
optimizing compiler does every day, and Java is no different.

Canonical stack processors and their languages (Forth, Java,
Postscript) at this point seem to be hanging by a legacy thread
(even if every PC runs one peripherally at one time or another).

Not even remotely true. Java is either the most popular or the second
most popular programming language in the world. Most Java runs on
servers; the desktop is such a tiny part of the market that even if
everyone drops Java in the browser it will make almost no difference.

Andrew.
--

Rick
 
On 6/23/2013 7:34 AM, Eric Wallin wrote:
On Saturday, June 22, 2013 11:47:49 PM UTC-4, rickman wrote:

Someone was talking about this recently, I don't recall if it was you or
someone else. It was pointed out that the most important aspect of any
core is the documentation. opencores has lots of pretty worthless cores
because you have to reverse engineer them to do anything with them.

I just posted the design document:

http://opencores.org/project,hive

I'd be interested in any comments, my email address is in the document. I'll post the verilog soon.

Cheers!
I'd be interested in reading the design document, but this is what I
find at Opencores...


HIVE - a 32 bit, 8 thread, 4 register/stack hybrid, pipelined verilog
soft processor core :: Overview Overview News Downloads Bugtracker

Project maintainers

Wallin, Eric
Details

Name: hive
Created: Jun 22, 2013
Updated: Jun 23, 2013
SVN: No files checked in


--

Rick
 
On 6/23/2013 5:31 AM, Tom Gardner wrote:
rickman wrote:
On 6/22/2013 5:57 PM, Tom Gardner wrote:

How do you propose to implement mailboxes reliably?
You need to think of all the possible memory-access
sequences, of course.

I don't get the question. Weren't semaphores invented a long time ago
and require no special support from the processor?

Of course they are one communications mechanism, but not
the only one. Implementation can be made impossible by
some design decisions. Whether support is "special" depends
on what you regard as "normal", so I can't give you an
answer to that one!
What aspect of a processor can make implementation of semaphores
impossible?

--

Rick
 
On 6/23/2013 10:27 AM, rickman wrote:
On 6/23/2013 5:31 AM, Tom Gardner wrote:
rickman wrote:
On 6/22/2013 5:57 PM, Tom Gardner wrote:

How do you propose to implement mailboxes reliably?
You need to think of all the possible memory-access
sequences, of course.

I don't get the question. Weren't semaphores invented a long time ago
and require no special support from the processor?

Of course they are one communications mechanism, but not
the only one. Implementation can be made impossible by
some design decisions. Whether support is "special" depends
on what you regard as "normal", so I can't give you an
answer to that one!

What aspect of a processor can make implementation of semaphores
impossible?
Lack of atomic operations.

Rob.
 
On 6/23/2013 4:38 AM, David Brown wrote:
On 22/06/13 18:18, rickman wrote:
On 6/22/2013 11:21 AM, David Brown wrote:
On 22/06/13 07:23, rickman wrote:
On 6/21/2013 7:18 AM, David Brown wrote:
On 21/06/13 11:30, Tom Gardner wrote:

I suppose I ought to change "nobody writes Forth" to
"almost nobody writes Forth.


Shouldn't that be "almost nobody Forth writes" ?

I would say that was "nobody almost Forth writes". Wouldn't it be [noun
[adjective] [noun [adjective]]] verb?

I thought about that, but I was not sure. When I say "work with Forth
again", I have only "played" with Forth, not "worked" with it, and it
was a couple of decades ago.

Hey, it's not like this is *real* forth. But looking at how some Forth
code works for things like assemblers and my own projects, the data is
dealt with first starting with some sort of a noun type piece of data
(like a register) which may be modified by an adjective (perhaps an
addressing mode) followed by others, then the final verb to complete the
action (operation).


(I too would like an excuse to work with Forth again.)

What do you do instead?


I do mostly small-systems embedded programming, which is mostly in C. It
used to include a lot more assembly, but that's quite rare now (though
it is not uncommon to have to make little snippets in assembly, or to
study compiler-generated assembly), and perhaps in the future it will
include more C++ (especially with C++11 features). I also do desktop and
server programming, mostly in Python, and I have done a bit of FPGA work
(but not for a number of years).

Similar to myself, but with the opposite emphasis. I mostly do hardware
and FPGA work with embedded programming which has been rare for some years.

I think Python is the language a customer recommended to me. He said
that some languages are good for this or good for that, but Python
incorporates a lot of the various features that make it good for most
things. They write code running under Linux on IP chassis. I think
they use Python a lot.


I don't think of Forth as being a suitable choice of language for the
kind of systems I work with - but I do think it would be fun to work
with the kind of systems for which Forth is the best choice. However, I
suspect that is unlikely to happen in practice. (Many years ago, my
company looked at a potential project for which Atmel's Marc-4
processors were a possibility, but that's the nearest I've come to Forth
at work.)

So why can't you consider Forth for processors that aren't stack based?


There are two main reasons.

The first, and perhaps most important, is non-technical - C (and to a
much smaller extent, C++) is the most popular language for embedded
development. That means it is the best supported by tools, best
understood by other developers, has the most sample code and libraries,
etc. There are a few niches where other languages are used - assembly,
Ada, etc. And of course there are hobby developers, lone wolves, and
amateurs pretending to be professionals who pick Pascal, Basic, or Forth.

I get to pick these things myself to a fair extent (with some FPGA work
long ago, I used confluence rather than the standard VHDL/Verilog). But
I would need very strong reasons to pick anything other than C or C++
for embedded development.
What you just said in response to my question about why you *can't* pick
Forth is, "because it doesn't suit me". That's fair enough, but not as
much about Forth as it is about your preferences and biases.


The other reason is more technical - Forth is simply not a great
language for embedded development work.

It certainly has some good points - its interactivity is very nice, and
you can write very compact source code.

But the stack model makes it hard to work with more complex functions,
so it is difficult to be sure your code is correct and maintainable.
I think you will get some disagreement on that point.


The traditional Forth solution is to break your code into lots of tiny
pieces - but that means the programmer is jumping back and forth in the
code, rather than working with sequential events in a logical sequence.
I am no expert, so far be it from me to defend Forth in this regard, but
my experience is that if you are having trouble writing code in Forth,
you don't "get it".

I've mentioned many times I think the first time I can recall hearing
"the word". I liked the idea of Forth, but was having trouble writing
code in it for the various reasons that people give, one of which is
your issue above. One time I was complaining that it was hard to find
stack mismatches where words were leaving too many parameters on the
stack or not enough. Jeff Fox weighed in (as he often would) and told
me I didn't need debuggers and such; stack mismatches just showed that
I couldn't count... That hit me between the eyes and I realized he was
right. Balancing the stack is just a matter of counting... *and*
keeping your word definitions small so that you aren't prone to
miscounting. That was the real lesson, keep the definitions small.

You don't need to jump "back and forth" so much, you just need to learn
to decompose the code so that each word is small enough to debug
visually. It was recognized a long time ago that even in C programming
that smaller is better. I don't recall the expert, but one of the
programming gurus of yesteryear had a guideline that C routines should
fit on a screen which was 24 lines at the time. But do people listen?
No. They write large routines that are hard to debug.


The arithmetic model makes it hard to work with different sized types,
which are essential in embedded systems - the lack of overloading on
arithmetic operators means a lot of manual work in manipulating types
and getting the correct variant of the operator you want. The highly
flexible syntax means that static error checking is almost non-existent.
Really? I have never considered data types to be a problem in Forth.
Using S>D or just typing 0 to convert from single to double precision
isn't so hard. You could also define words that are C like,
(signed_double) and (unsigned_double), but then I don't know if this
would help you since I don't understand your concern.

Yes, in terms of error checking, Forth is at the other end of the
universe (almost) from Ada or VHDL (I'm pretty proficient at VHDL, not
so much with Ada). I can tell you that in VHDL you spend almost as much
time specifying and converting data types as you do the rest of coding.
The only difference from not having the type checking is that the tool
catches the "bugs" and you spend your time figuring out how to make it
happy, vs. debugging the usual way. I'm not sure which is really
faster. I honestly can't recall having a bug from data types in Forth,
but it could have happened.

--

Rick
 
On 6/23/2013 1:34 PM, Rob Doyle wrote:
On 6/23/2013 10:27 AM, rickman wrote:
On 6/23/2013 5:31 AM, Tom Gardner wrote:
rickman wrote:
On 6/22/2013 5:57 PM, Tom Gardner wrote:

How do you propose to implement mailboxes reliably?
You need to think of all the possible memory-access
sequences, of course.

I don't get the question. Weren't semaphores invented a long time ago
and require no special support from the processor?

Of course they are one communications mechanism, but not
the only one. Implementation can be made impossible by
some design decisions. Whether support is "special" depends
on what you regard as "normal", so I can't give you an
answer to that one!

What aspect of a processor can make implementation of semaphores
impossible?

Lack of atomic operations.
Lol, so semaphores were never implemented on a machine without an atomic
read modify write?

--

Rick
 
On 6/23/2013 10:46 AM, rickman wrote:
On 6/23/2013 1:34 PM, Rob Doyle wrote:
On 6/23/2013 10:27 AM, rickman wrote:
On 6/23/2013 5:31 AM, Tom Gardner wrote:
rickman wrote:
On 6/22/2013 5:57 PM, Tom Gardner wrote:

How do you propose to implement mailboxes reliably?
You need to think of all the possible memory-access
sequences, of course.

I don't get the question. Weren't semaphores invented a long time ago
and require no special support from the processor?

Of course they are one communications mechanism, but not
the only one. Implementation can be made impossible by
some design decisions. Whether support is "special" depends
on what you regard as "normal", so I can't give you an
answer to that one!

What aspect of a processor can make implementation of semaphores
impossible?

Lack of atomic operations.

Lol, so semaphores were never implemented on a machine without an atomic
read modify write?
You don't need a read/modify/write instruction. You need to perform a
read/modify/write sequence of instructions atomically. On simple
processors that could be accomplished by disabling interrupts around the
critical section of code.
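
To make that concrete, here is a minimal C sketch of the interrupt-masking
approach described above (an editorial illustration, not from Rob's post);
disable_interrupts() and enable_interrupts() are hypothetical stand-ins for
whatever intrinsic or inline assembly the target actually provides:

#include <stdbool.h>

typedef struct {
    volatile int count;
} semaphore_t;

/* Hypothetical platform hooks -- replace with the target's real
   intrinsics (e.g. CPSID/CPSIE on a Cortex-M). */
void disable_interrupts(void);
void enable_interrupts(void);

bool sem_try_wait(semaphore_t *s)   /* P(), non-blocking */
{
    bool taken = false;
    disable_interrupts();           /* nothing can preempt the read-modify-write */
    if (s->count > 0) {
        s->count--;
        taken = true;
    }
    enable_interrupts();
    return taken;
}

void sem_post(semaphore_t *s)       /* V() */
{
    disable_interrupts();
    s->count++;
    enable_interrupts();
}

A non-maskable interrupt defeats this scheme, of course, since it fires
even while interrupts are masked -- which is exactly the contrived case
mentioned below.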

I don't know if there is a machine that /can't/ implement a semaphore -
but that is not the question that you asked.

I suppose I could contrive one. For example, if you had a processor
that required disabling interrupts as described above and you had
to support a non-maskable interrupt...

Rob.
 
On 23/06/13 19:45, rickman wrote:
On 6/23/2013 4:38 AM, David Brown wrote:
On 22/06/13 18:18, rickman wrote:
On 6/22/2013 11:21 AM, David Brown wrote:
On 22/06/13 07:23, rickman wrote:
On 6/21/2013 7:18 AM, David Brown wrote:
On 21/06/13 11:30, Tom Gardner wrote:

I suppose I ought to change "nobody writes Forth" to
"almost nobody writes Forth.


Shouldn't that be "almost nobody Forth writes" ?

I would say that was "nobody almost Forth writes". Wouldn't it be
[noun
[adjective] [noun [adjective]]] verb?

I thought about that, but I was not sure. When I say "work with Forth
again", I have only "played" with Forth, not "worked" with it, and it
was a couple of decades ago.

Hey, it's not like this is *real* forth. But looking at how some Forth
code works for things like assemblers and my own projects, the data is
dealt with first starting with some sort of a noun type piece of data
(like a register) which may be modified by an adjective (perhaps an
addressing mode) followed by others, then the final verb to complete the
action (operation).


(I too would like an excuse to work with Forth again.)

What do you do instead?


I do mostly small-systems embedded programming, which is mostly in
C. It
used to include a lot more assembly, but that's quite rare now (though
it is not uncommon to have to make little snippets in assembly, or to
study compiler-generated assembly), and perhaps in the future it will
include more C++ (especially with C++11 features). I also do desktop
and
server programming, mostly in Python, and I have done a bit of FPGA
work
(but not for a number of years).

Similar to myself, but with the opposite emphasis. I mostly do hardware
and FPGA work with embedded programming which has been rare for some
years.

I think Python is the language a customer recommended to me. He said
that some languages are good for this or good for that, but Python
incorporates a lot of the various features that make it good for most
things. They write code running under Linux on IP chassis. I think
they use Python a lot.


I don't think of Forth as being a suitable choice of language for the
kind of systems I work with - but I do think it would be fun to work
with the kind of systems for which Forth is the best choice. However, I
suspect that is unlikely to happen in practice. (Many years ago, my
company looked at a potential project for which Atmel's Marc-4
processors were a possibility, but that's the nearest I've come to
Forth
at work.)

So why can't you consider Forth for processors that aren't stack based?


There are two main reasons.

The first, and perhaps most important, is non-technical - C (and to a
much smaller extent, C++) is the most popular language for embedded
development. That means it is the best supported by tools, best
understood by other developers, has the most sample code and libraries,
etc. There are a few niches where other languages are used - assembly,
Ada, etc. And of course there are hobby developers, lone wolves, and
amateurs pretending to be professionals who pick Pascal, Basic, or Forth.

I get to pick these things myself to a fair extent (with some FPGA work
long ago, I used confluence rather than the standard VHDL/Verilog). But
I would need very strong reasons to pick anything other than C or C++
for embedded development.

What you just said in response to my question about why you *can't* pick
Forth is, "because it doesn't suit me". That's fair enough, but not as
much about Forth as it is about your preferences and biases.

I viewed the question as "why *you* can't pick Forth" - I can only
really answer for myself.

You say "preferences and biases" - I say "experience and understanding" :)

The other reason is more technical - Forth is simply not a great
language for embedded development work.

It certainly has some good points - its interactivity is very nice, and
you can write very compact source code.

But the stack model makes it hard to work with more complex functions,
so it is difficult to be sure your code is correct and maintainable.

I think you will get some disagreement on that point.
No doubt I will.

Of course, remember your own preferences and biases - I say Forth makes
these things hard or difficult, but not impossible. If you are very
experienced with Forth, you'll find them easier. In fact, you will
forget that you ever found them hard, and can't see why it's not easy
for everyone.

The traditional Forth solution is to break your code into lots of tiny
pieces - but that means the programmer is jumping back and forth in the
code, rather than working with sequential events in a logical sequence.

I am no expert, so far be it from me to defend Forth in this regard, but
my experience is that if you are having trouble writing code in Forth,
you don't "get it".
I can agree with that to a fair extent. Forth requires you to think in
a different manner than procedural languages (just as object oriented
languages, functional languages, etc., all require different ways to think
about the task).

My claim is that even when you do "get it", there are disadvantages and
limitations to Forth.

I've mentioned many times I think the first time I can recall hearing
"the word". I liked the idea of Forth, but was having trouble writing
code in it for the various reasons that people give, one of which is
your issue above. One time I was complaining that it was hard to find
stack mismatches where words were leaving too many parameters on the
stack or not enough. Jeff Fox weighed in (as he often would) and told
me I didn't need debuggers and such; stack mismatches just showed that
I couldn't count... That hit me between the eyes and I realized he was
right. Balancing the stack is just a matter of counting... *and*
keeping your word definitions small so that you aren't prone to
miscounting. That was the real lesson, keep the definitions small.
There are times when code is complex, because the task in hand is
complex, and it cannot sensibly be reduced into small parts without a
lot of duplication, inefficiency, or confusing structure (the same
applies to procedural programming - sometimes the best choice really is
a huge switch statement). I don't want to deal with trial-and-error
debugging in the hope that I've tested all cases of miscounting - I want
a compiler that handles the drudge work automatically and lets me
concentrate on the important things.

You don't need to jump "back and forth" so much, you just need to learn
to decompose the code so that each word is small enough to debug
visually. It was recognized a long time ago that even in C programming
that smaller is better. I don't recall the expert, but one of the
programming gurus of yesteryear had a guideline that C routines should
fit on a screen which was 24 lines at the time. But do people listen?
No. They write large routines that are hard to debug.
Don't kid yourself here - people write crap in all languages. And most
programmers - of all languages - are pretty bad at it. Forth might
encourage you to split up the code into small parts, but there will be
people who call these "part1", "part2", "part1b", etc.

The arithmetic model makes it hard to work with different sized types,
which are essential in embedded systems - the lack of overloading on
arithmetic operators means a lot of manual work in manipulating types
and getting the correct variant of the operator you want. The highly
flexible syntax means that static error checking is almost non-existent.

Really? I have never considered data types to be a problem in Forth.
Using S>D or just typing 0 to convert from single to double precision
isn't so hard. You could also define words that are C like,
(signed_double) and (unsigned_double), but then I don't know if this
would help you since I don't understand your concern.
I need to easily and reliably deal with data that is 8-bit, 16-bit,
32-bit and 64-bit. Sometimes I need bit fields that are a different
size. Sometimes I work with processors that have 20-bit, 24-bit or
40-bit data. I need to know exactly what I am getting, and exactly what
I am doing with it. Working with a "cell" or "double cell" is not good
enough - just like C "int" or "short int" is unacceptable.

If Forth has the equivalent of "uint8_t", "int_fast16_t", etc., then it
could work - but as far as I know, it does not. Unless I am missing
something, there is no easy way to write code that is portable between
different Forth targets if cell width is different. You would have to
define your own special set of words and operators, with different
definitions depending on the cell size. You are no longer working in
Forth, but your own little private language.
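
For readers not steeped in C, the facility being referred to looks like
this (a small illustration only, not code from either poster): <stdint.h>
provides exact-width types and "fastest type at least this wide" types, so
the source states its intent regardless of the native word size.

#include <stdint.h>

typedef struct {
    uint8_t  status;       /* exactly 8 bits, e.g. a device register image */
    int16_t  temperature;  /* exactly 16 bits, signed                      */
    uint32_t timestamp;    /* exactly 32 bits, whatever the native word is */
} sample_t;

int_fast16_t scale(int_fast16_t raw)
{
    /* int_fast16_t: at least 16 bits, but whatever width is fastest on
       this target, so the same source stays efficient on 8-, 16- and
       32-bit machines. */
    return (int_fast16_t)((raw * 3) / 2);
}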

Yes, in terms of error checking, Forth is at the other end of the
universe (almost) from Ada or VHDL (I'm pretty proficient at VHDL, not
so much with Ada). I can tell you that in VHDL you spend almost as much
time specifying and converting data types as you do the rest of coding.
Yes, I dislike that about VHDL and Ada, though I have done little work
with either. C is a bit more of a happy medium - though sometimes the
extra protection you can get (but only if you want it) with C++ can be a
good idea.

The only difference from not having the type checking is that the tool
catches the "bugs" and you spend your time figuring out how to make it
happy, vs. debugging the usual way. I'm not sure which is really
faster. I honestly can't recall having a bug from data types in Forth,
but it could have happened.
I use Python quite a bit - it has strong typing, but the types are
dynamic. This means there is very little compile-time checking. I
definitely miss that in the language - you waste a lot of time debugging
by trial-and-error when a statically typed language would spot your
error for you immediately.
 
On 6/23/2013 4:52 PM, Eric Wallin wrote:
On Sunday, June 23, 2013 1:24:48 PM UTC-4, rickman wrote:

I'd be interested in reading the design document, but this is what I
find at Opencores...

SVN: No files checked in

I believe SVN is for the verilog, which isn't there quite yet, but the document is. Click on "Downloads" at the upper right.

Here is a link to it:

http://opencores.org/usercontent,doc,1371986749
Ok, this is certainly a lot more document than is typical for CPU
designs on opencores.

I'm not sure why you need to insert so much opinion of stack machines in
the discussions of the paper. Some of what I have read so far is not
very clear exactly what your point is and just comes off as a general
bias about stack machines including those who promote them. I don't
mind at all when technical shortcomings are pointed out, but I'm not
excited about reading the sort of opinion shown...

"Stack machines are (perhaps somewhat inadvertently) portrayed as a
panacea for all computing ills" I don't recall ever hearing anyone
saying that. Certainly there are a lot of claims for stack machines,
but the above is almost hyperbole.

There is a lot to digest in your document. I'll spend some time looking
at it.

--

Rick
 
Eric Wallin wrote:
I can see the need for some kind of semaphore mechanism if you have one or more caches sitting between the processor and the main memory (a "memory hierarchy") but that certainly isn't the case for my (Hive) processor, which is targeted towards small processor-centric tasks in FPGAs. Main memory is static, dual port, and connected directly to the core.
Unless your system is constrained in ways you haven't mentioned...

Do you have interrupts? If so you need semaphores.

Can more than one "source" cause a memory location
to be read or written within one processor instruction
cycle? If so you need semaphores.

I first realised the need for atomic operations when
doing hard real-time work on a 6800 (no caches,
single processor) as a vacation student. Then I did
some research and found out about semaphores. Atomicity
could only be guaranteed by disabling interrupts
for the critical operations. And if you ran two 6809s
off opposite clock phases, even that wasn't sufficient.
 
On Sunday, June 23, 2013 1:24:48 PM UTC-4, rickman wrote:

I'd be interested in reading the design document, but this is what I
find at Opencores...

SVN: No files checked in
I believe SVN is for the verilog, which isn't there quite yet, but the document is. Click on "Downloads" at the upper right.

Here is a link to it:

http://opencores.org/usercontent,doc,1371986749
 
I can see the need for some kind of semaphore mechanism if you have one or more caches sitting between the processor and the main memory (a "memory hierarchy") but that certainly isn't the case for my (Hive) processor, which is targeted towards small processor-centric tasks in FPGAs. Main memory is static, dual port, and connected directly to the core.
 
rickman <gnuarm@gmail.com> wrote:
On 6/23/2013 4:52 PM, Eric Wallin wrote:
(snip)
I believe SVN is for the verilog, which isn't there quite yet,
but the document is. Click on "Downloads" at the upper right.
(snip)
http://opencores.org/usercontent,doc,1371986749

Ok, this is certainly a lot more document than is typical for CPU
designs on opencores.

I'm not sure why you need to insert so much opinion of stack
machines in the discussions of the paper. Some of what I have
read so far is not very clear exactly what your point is and
just comes off as a general bias about stack machines including
those who promote them. I don't mind at all when technical
shortcomings are pointed out, but I'm not excited about reading
the sort of opinion shown...

"Stack machines are (perhaps somewhat inadvertently) portrayed as a
panacea for all computing ills" I don't recall ever hearing anyone
saying that. Certainly there are a lot of claims for stack
machines, but the above is almost hyperbole.
I suppose. Stack machines are pretty much out of style now.
One reason is that current compiler technology has a hard
time generating good code for them.

Well, stack machines, such as the Burroughs B5500, were popular
when machines had a small number of registers. They could be
implemented with most or all of the stack in main memory
(usually magnetic core). They allow for smaller instructions,
even as addressing space gets larger. (The base-displacement
addressing for S/360 was also to help with addressing.)

Now, I suppose if stack machines had stayed popular, that compiler
technology would have developed to use them more efficiently,
but general registers, 16 or more of them, allow for flexibility
in addressing that stacks make difficult.

Now, you could do like the x87, with a stack that also allows
one to address any stack element. The best, and some of the
worst, of both worlds.

-- glen

There is a lot to digest in your document. I'll spend some
time looking at it.
 
On Sunday, June 23, 2013 6:43:06 PM UTC-4, Tom Gardner wrote:

Do you have interrupts?
Yes, one per thread.

If so you need semaphores.
Not sure I follow, but I'm not sure you've read the paper.

Can more than one "source" cause a memory location
to be read or written within one processor instruction
cycle? If so you need semaphores.
If the programmer writes the individual thread programs so that two threads never write to the same address then by definition it can't happen (unless there is a bug in the code). I probably haven't thought about this as much as you have, but I don't see the fundamental need for more hardware if the programmer does his/her job.
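
One concrete way that "partition the addresses" argument plays out is a
single-producer / single-consumer mailbox. The sketch below (mine, not
from the Hive document) needs no atomic read-modify-write because every
location has exactly one writer; it does assume stores complete in program
order, which holds for a directly connected dual-port RAM but would need
barriers on a machine with caches or write buffers.

#include <stdint.h>

typedef struct {
    volatile uint32_t data;   /* written only by the producer thread         */
    volatile uint32_t full;   /* producer sets it to 1, consumer clears to 0 */
} mailbox_t;

void mbox_send(mailbox_t *m, uint32_t value)
{
    while (m->full)           /* previous message not yet consumed */
        ;
    m->data = value;
    m->full = 1;              /* publish only after the payload is in place */
}

int mbox_recv(mailbox_t *m, uint32_t *value)
{
    if (!m->full)
        return 0;             /* nothing waiting */
    *value = m->data;
    m->full = 0;              /* hand the slot back to the producer */
    return 1;
}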
 
On 6/23/2013 9:23 PM, Eric Wallin wrote:
On Sunday, June 23, 2013 6:43:06 PM UTC-4, Tom Gardner wrote:

Do you have interrupts?

Yes, one per thread.

If so you need semaphores.

Not sure I follow, but I'm not sure you've read the paper.
I used to know this stuff, but it has been a long time. I think what
Tom is referring to may not apply if you don't run more than one task on
a given processor. The issue is that to implement a semaphore you have
to do a read-modify-write operation on a word in memory. If "anyone"
else can get in the middle of your operation the semaphore can be
corrupted or fail. But I'm not sure just using an interrupt means you
will have problems, I think it simply means the door is open since
context can be switched causing a failure in the semaphore.

But as I say, it has been a long time and there are different reasons
for semaphores and different implementations.


Can more than one "source" cause a memory location
to be read or written within one processor instruction
cycle? If so you need semaphores.

If the programmer writes the individual thread programs so that two threads never write to the same address then by definition it can't happen (unless there is a bug in the code). I probably haven't thought about this as much as you have, but I don't see the fundamental need for more hardware if the programmer does his/her job.
There are other resources that might be shared. Or maybe not, but if
so, you need to manage it.

Wow, I never realized how much I have forgotten.

--

Rick
 
On 6/23/2013 8:15 PM, glen herrmannsfeldt wrote:
rickman<gnuarm@gmail.com> wrote:
On 6/23/2013 4:52 PM, Eric Wallin wrote:

(snip)
I believe SVN is for the verilog, which isn't there quite yet,
but the document is. Click on "Downloads" at the upper right.

(snip)
http://opencores.org/usercontent,doc,1371986749

Ok, this is certainly a lot more document than is typical for CPU
designs on opencores.

I'm not sure why you need to insert so much opinion of stack
machines in the discussions of the paper. Some of what I have
read so far is not very clear exactly what your point is and
just comes off as a general bias about stack machines including
those who promote them. I don't mind at all when technical
shortcomings are pointed out, but I'm not excited about reading
the sort of opinion shown...

"Stack machines are (perhaps somewhat inadvertently) portrayed as a
panacea for all computing ills" I don't recall ever hearing anyone
saying that. Certainly there are a lot of claims for stack
machines, but the above is almost hyperbole.

I suppose. Stack machines are pretty much out of style now.
One reason is that current compiler technology has a hard
time generating good code for them.
I think you might be referring to the sort of stack machines used in
minicomputers 30 years ago. For FPGA implementations stack CPUs are
alive and kicking. Forth seems to do a pretty good job with them. What
is the problem with other languages?


Well, stack machines, such as the Burroughs B5500, were popular
when machines had a small number of registers. They could be
implemented with most or all of the stack in main memory
(usually magnetic core). They allow for smaller instructions,
even as addressing space gets larger. (The base-displacement
addressing for S/360 was also to help with addressing.)
Yes, you *are* talking about 30 year old machines, or even 40 year old
machines.


Now, I suppose if stack machines had stayed popular, that compiler
technology would have developed to use them more efficiently,
but general registers, 16 or more of them, allow for flexibility
in addressing that stacks make difficult.

Now, you could do like the x87, with a stack that also allows
one to address any stack element. The best, and some of the
worst, of both worlds.
You are reading my mind! That is what I spent some time looking at this
past winter. Then I got busy with work and have had to put it aside.

Eric's machine is a bit different having four stacks for each processor
and allowing each one to be popped or not rather than any addressing on
the stack itself. Interesting, but not so small as the two stack CPUs.

--

Rick
 
Rob Doyle wrote:
On 6/23/2013 10:27 AM, rickman wrote:
On 6/23/2013 5:31 AM, Tom Gardner wrote:
rickman wrote:
On 6/22/2013 5:57 PM, Tom Gardner wrote:

How do you propose to implement mailboxes reliably?
You need to think of all the possible memory-access
sequences, of course.

I don't get the question. Weren't semaphores invented a long time ago
and require no special support from the processor?

Of course they are one communications mechanism, but not
the only one. Implementation can be made impossible by
some design decisions. Whether support is "special" depends
on what you regard as "normal", so I can't give you an
answer to that one!

What aspect of a processor can make implementation of semaphores
impossible?

Lack of atomic operations.

Rob.
No. The only requirement for semaphores
to work is to be able to turn off interrupts briefly.


--
Les Cargill
 
On 6/23/2013 11:18 PM, glen herrmannsfeldt wrote:
rickman<gnuarm@gmail.com> wrote:

(snip, I wrote)
I suppose. Stack machines are pretty much out of style now.
One reason is that current compiler technology has a hard
time generating good code for them.

I think you might be referring to the sort of stack machines used in
minicomputers 30 years ago. For FPGA implementations stack CPUs are
alive and kicking. Forth seems to do a pretty good job with them. What
is the problem with other languages?

The code generators designed for register machines, such as that
used by GCC or LCC, don't adapt to stack machines well.
That shouldn't be a surprise to anyone. The guy who designed the ZPU
found that out the hard way.


As users of HP calculators know, given an expression with unrelated
arguments, it isn't hard to evaluate using a stack. But consider that
the expression might have some common subexpressions? You want to
evaluate the expression, evaluating the common subexpressions only
once. It is not so easy to get things into the right place on
the stack, such that they are at the top at the right time.
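
A tiny example of that problem (my own, not glen's): evaluating
(a+b)*(a+b+c). A register compiler simply keeps the common subexpression
in a register; a stack code generator has to schedule a DUP so the value
is on top at the right moment, roughly  a b + DUP c + *  in RPN, and the
shuffling grows quickly as expressions get more tangled.

long eval(long a, long b, long c)
{
    long t = a + b;       /* the common subexpression, held in a register */
    return t * (t + c);   /* a stack machine must DUP it instead          */
}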
Tell me about it ;^)

--

Rick
 
On 6/23/2013 10:54 PM, Eric Wallin wrote:
On Sunday, June 23, 2013 6:30:28 PM UTC-4, rickman wrote:

I'm not sure why you need to insert so much opinion of stack machines in
the discussions of the paper. Some of what I have read so far is not
very clear exactly what your point is and just comes off as a general
bias about stack machines including those who promote them. I don't
mind at all when technical shortcomings are pointed out, but I'm not
excited about reading the sort of opinion shown...

Point taken. I suppose I'm trying to spare others from wasting too much time and energy on canonical one and two stack machines. There just aren't enough stacks, so unless you want to deal with the top entry or two right now you'll be digging around, wasting both programming and real time, and getting confused. And they automatically toss data away that you often very much need, so you waste more time copying it or reloading it or whatever. I spent years trying to like them, thinking the problem was me. The J processor really helped break the spell.

Not saying I have all the answers, I hope the paper doesn't come across that way, but I do have to sell it to some degree (the paper ends with the down sides that I'm aware of, I'm sure there are more).
I'm glad you can take (hopefully) constructive criticism. I was
concerned when I wrote the above that it might be a bit too blunt.

It will be a while before I get to the end of your paper. Do you
describe the applications you think the design would be good for? One
reason I don't completely agree with you about the suitability of MISC
type CPUs is that there are many apps with different requirements. Some
will definitely do better with a design other than yours. I wonder if
you had some specific class of applications that you were seeing that
you didn't think the MISC approach was optimal for or if it was just the
various "features" of MISC that didn't suit your tastes.


"Stack machines are (perhaps somewhat inadvertently) portrayed as a
panacea for all computing ills" I don't recall ever hearing anyone
saying that. Certainly there are a lot of claims for stack machines,
but the above is almost hyperbole.

Defense exhibit A:

http://www.ultratechnology.com/cowboys.html

Maybe I'm seeing things that aren't there, but almost every web site, paper, and book on stack machines and Forth that I've encountered has a vibe of "look at this revolutionary idea that the man has managed to keep down!" Absolutely no down sides mentioned, so the hapless noob is left with much too flattering of an impression. In my case this false impression was quite lasting, so I guess I've got something of an axe to grind. Perhaps I'll moderate this in future releases of the design document.
I can't argue with you on this one. When I first saw the GA144 design
it sounded fantastic! But that is typical corporate product hype. The
reality of the chip is very different. When it comes to CPU cores for
FPGAs I don't see a lot of difference. Check out some of the other
offerings on Opencores. Everyone touts their design as something pretty
special even if they are just one of two or three that do the same
thing! I think they had some five or six PIC implementations and all
seemed to say they were the best!

I do have to say I am not in complete agreement with you about the
issues of MISC machines. Yes, there can be a lot of stack ops compared
to a register machine. But these can be minimized with careful
programming. I know that from experience. However, part of the utility
of a design is the ease of programming efficiently. I haven't looked at
yours yet, but just picturing the four stacks makes it seem pretty
simple... so far. :^)

I have to say I'm not crazy about the large instruction word. That is
one of the appealing things about MISC to me. I work in very small
FPGAs and 16 bit instructions are better avoided if possible, but that
may be a red herring. What matters is how many bytes a given program
uses, not how many bits are in an instruction.

I am supposed to present to the SVFIG and I think your design would be a
very interesting part of the presentation unless you think you would
rather present yourself. I'm sure they would like to hear about it and
they likely would be interested in your opinions on MISC. I know I am.

--

Rick
 
