The variable bit cpu

In article <PMeIe.236744$Qo.136527@fed1read01>,
Richard Henry <rphenry@home.com> wrote:
[...]
I keep my old 432 databooks on my bookshelf to haul out when the Intel rep
starts making glowing promises of the future.
Unfortunately mine went away in a witless clean up.


The series-4 development system was also a great charmer. I think it is
what put Intel out of the development system market. In some ways it is a
shame.

The MDS-800 was their first 8080 development system. It ran an OS called
ISIS-II, which was years ahead of its time as a small-systems OS. From
that point forward, each new system they introduced was worse than the
one before.

--
--
kensmith@rahul.net forging knowledge
 
In article <jpGdnZ2dnZ1f8UODnZ2dnYg9bN-dnZ2dRVn-0J2dnZ0@scnresearch.com>,
Don Taylor <dont@agora.rdrop.com> wrote:
[...]
I was doing compilers and interpreters at the time the 432 was being done.
After it was cancelled I talked with one of the people on the team. He
said that their first version of the compiler would validate references
to memory, sometimes again and again, in the execution of a single
statement. He claimed that, if they had been given the chance, the next
version of the compiler would have optimized away all the redundant
validations and significantly increased the speed of the code.
They still had the hardware requirement of making a fetch from the table
to get the address before you could access the variable. IIRC, if you
accessed a variable twice in a row, you could save on the table look ups
but if you were in a loop processing 4 variables, you ended up with 4
extra memory reads per loop.

We were not looking at a specific compiler but at the instruction set
timing itself when we concluded that there was no good reason to go with
the 432. We could almost get the throughput we needed with a 186 and were
sure to with a 286.

[...]
And it was a first attempt at the processor, somewhat as the 8086 was a
first attempt. The 286 then benefited from doing a second (or third, if
you count the 80186) turn and cranking up the speed.
I'd certainly count the 186; even though they were both planned at about
the same point, they followed one another into the market.

And you might consider the truly horrible code generated by the first
few generations of 80x86 compilers. It was awful: big and slow, while
compiler writers were all talking about these mythical huge gains that
the Intel x86 architecture was supposed to have given them.
I, however, was doing the 8086 in assembly. Did you know that the 8086 is
not as fast as the 8051 for many of its instructions? The 8086 required a
trip through the ALU to address a variable or do a jump or call
instruction. The result was a fairly large number (22, IIRC) of clock
cycles to do a jump. If you wanted to do things like memory moves and
block I/O, you were best off adding an 8089 to the project. The 8089 was
basically a second micro with a much simpler instruction set targeted at
doing I/O quickly.

BTW: The last time I looked at compiled 8086 code, I was impressed that
they had greatly improved things. Today, it looks like hand coding would
only gain a factor of 2 or maybe 3 in throughput, and perhaps not
enough decrease in size to bother with.


I came very, very close to owning a complete 432 development system after
it was all over. I've questioned whether I should have done that or not.
I'm sure that at some point one will be worth money.


--
--
kensmith@rahul.net forging knowledge
 
On Thu, 04 Aug 2005 14:18:13 +0000, Ken Smith wrote:

In article <jpGdnZ2dnZ1f8UODnZ2dnYg9bN-dnZ2dRVn-0J2dnZ0@scnresearch.com>,
Don Taylor <dont@agora.rdrop.com> wrote:
[...]
I was doing compilers and interpreters at the time the 432 was being done.
After it was cancelled I talked with one of the people on the team. He
said that their first version of the compiler would validate references
to memory, sometimes again and again, in the execution of a single
statement. He claimed that, if they had been given the chance, the next
version of the compiler would have optimized away all the redundant
validations and significantly increased the speed of the code.

They still had the hardware requirement of making a fetch from the table
to get the address before you could access the variable. IIRC, if you
accessed a variable twice in a row, you could save on the table look ups
but if you were in a loop processing 4 variables, you ended up with 4
extra memory reads per loop.

We were not looking at a specific compiler but at the instruction set
timing itself when we concluded that there was no good reason to go with
the 432. We could almost get the throughput we needed with a 186 and were
sure to with a 286.

[...]
And it was the first attempt at the processor, somewhat like the 8086 was
the first attempt at the processor. The 286 then benefitted from doing a
second, or third if you count the 80186, turn and cranking up the speed.

I'd certainly count the 186 even though they were both planned at about
the same point, they followed one another into the market.
I wouldn't. The 186 was nothing but an 8086 with a bunch of peripherals
on-chip. I'm not sure if there was ever a 187, since it was targeted at
embedded systems. I know this is true, because I've written code for
one. :)

Cheers!
Rich
 
In article <pan.2005.08.04.23.33.42.454967@example.net>,
richgrise@example.net says...
On Thu, 04 Aug 2005 14:18:13 +0000, Ken Smith wrote:

In article <jpGdnZ2dnZ1f8UODnZ2dnYg9bN-dnZ2dRVn-0J2dnZ0@scnresearch.com>,
Don Taylor <dont@agora.rdrop.com> wrote:
[...]
I was doing compilers and interpreters at the time the 432 was being done.
After it was cancelled I talked with one of the people on the team. He
said that their first version of the compiler would validate references
to memory, sometimes again and again, in the execution of a single
statement. He claimed that, if they had been given the chance, the next
version of the compiler would have optimized away all the redundant
validations and significantly increased the speed of the code.

They still had the hardware requirement of making a fetch from the table
to get the address before you could access the variable. IIRC, if you
accessed a variable twice in a row, you could save on the table look ups
but if you were in a loop processing 4 variables, you ended up with 4
extra memory reads per loop.

We were not looking at a specific compiler but at the instruction set
timing itself when we concluded that there was no good reason to go with
the 432. We could almost get the throughput we needed with a 186 and were
sure to with a 286.

[...]
And it was the first attempt at the processor, somewhat like the 8086 was
the first attempt at the processor. The 286 then benefitted from doing a
second, or third if you count the 80186, turn and cranking up the speed.

I'd certainly count the 186 even though they were both planned at about
the same point, they followed one another into the market.

I wouldn't. The 186 was nothing but an 8086 with a bunch of peripherals
on-chip. I'm not sure if there was ever a 187, since it was targeted at
embedded systems. I know this is true, because I've written code for
one. :)
The 186/8 weren't targeted at the embedded market, but that's where
they ended up after IBM mucked up the interrupts in the PC.

--
Keith
 
In article <pan.2005.08.04.23.33.42.454967@example.net>,
Rich Grise <richgrise@example.net> wrote:
[...]
I'd certainly count the 186 even though they were both planned at about
the same point, they followed one another into the market.

I wouldn't. The 186 was nothing but an 8086 with a bunch of peripherals
on-chip. I'm not sure if there was ever a 187, since it was targeted at
embedded systems. I know this is true, because I've written code for
one. :)
I'll disagree with you, since I can remember right off the top of my head
that ROL AX,2 works on the 186 but not the 8086.



--
--
kensmith@rahul.net forging knowledge
 
"mc" <mc_no_spam@uga.edu> wrote in message
news:42ec40fc$1@mustang.speedfactory.net...
"Skybuck Flying" <nospam@hotmail.com> wrote in message
news:dcgrv9$eu3$1@news6.zwoll1.ov.home.nl...
Hi,

I think I might have just invented the variable bit cpu :)

Or rather a scheme for using a CPU for data fields with any number of bits
up to the CPU's maximum.

The key problem is your idea of "reading until a meta bit 1 is found."
That means the bits have to be read in sequence, one after another.
Essentially, what you really have is a 1-bit CPU with some shift
registers. A normal CPU reads 16 or 32 bits *all at once*.


The reason for the variable bit cpu with variable bit software is to
save costs and to make computers/software even more powerful and useful ;)

For example:

Currently, fixed-bit software has to be re-written or modified,
re-compiled, re-documented, re-distributed, re-installed, and
re-configured when its fixed bit limit is reached and has to be
increased, for example from 32 bit to 64 bit etc.

Examples are Windows XP 32 bit to 64 bit, and the Internet's IPv4 to IPv6.

The usual solution is to build the bigger CPU with all the smaller
instructions, plus new ones. That is how we got from 8086 to Pentium.
8086 programs (16-bit) will still run on the Pentium, and there is even
a set of 8-bit registers in it for running programs converted from the
8080.

I don't think there's any way to get around redesigning programs when new
data structures are chosen.
Yes, the idea is to re-write all software so that it uses variable bit
fields and can scale up to anything. :)

But this is not to say you're completely off track. Google for the term
VLIW (Very Long Instruction Word).
Yes, I think my variable bit field allows these kinds of instructions
too. But for a first version of the cpu... let's first stick to simple
instructions.

By the way, now that the larger processors are starting to overheat, it
was way too early to sell off those Transmeta processors etc... these
low-heat processors might become much more interesting in the future.

Oh well

Bye,
Skybuck =D
 
"Skybuck Flying" <nospam@hotmail.com> writes:
"mc" <mc_no_spam@uga.edu> wrote in message
news:42ec40fc$1@mustang.speedfactory.net...
"Skybuck Flying" <nospam@hotmail.com> wrote in message
news:dcgrv9$eu3$1@news6.zwoll1.ov.home.nl...
I think I might have just invented the variable bit cpu :)

Or rather a scheme for using a CPU for data fields with any number of bits
up to the CPU's maximum.
[...]
I don't think there's any way to get around redesigning programs when new
data structures are chosen.

Yes, the idea is to re-write all software so that it uses variable bit
fields and can scale up to anything. :)
Without trying to drag this out any longer, how about some specifics here?

For example, is your idea handing the stream of bits to the CPU
least significant bit first, followed by bits up to the most significant
bit? Or the reverse of this?

If you are doing least up to most then I can see how a 1-bit ALU
can handle operations like addition or subtraction, just as long
as you can stream in the two operands and be able to stream the
result out all at the same time.

If you are doing most down to least then I'd like to see some
convincing explanation of how you can do something as simple as add
or subtract without needing arbitrary amounts of storage.

Next, if you are thinking arbitrary length strings of bits, then is
there anything equivalent to a "register" in your CPU? And if there
is, how do you hold an arbitrary amount of data in them; how can
they be big enough?

Next, how do you do something like multiply or divide with your
arbitrarily long streams of bits? Unless you have some really good
explanation I'm not sure how your streams of bits can do this without
needing to do something I don't see.

Next, what is your memory addressing going to look like? All the
memory I'm aware of, other than some special forms that I'm fairly
sure you've never heard of, deals with fixed-length addresses and
fixed-length words of data.

So, how about some specific descriptions of how your idea is going
to handle these things? If you are worried about credit, then it
seems that your coming out with the details will be better than
having someone else take a few clues, figure it out on their own,
and claim all the credit for themselves.
 
In article <vfydnaDbOu3JhWrfRVn-2A@scnresearch.com>,
Don Taylor <dont@agora.rdrop.com> wrote:
[...]
Next, how do you do something like multiply or divide with your
arbitrarily long streams of bits?
A serial multiplier is fairly easy to do. It takes N*M shifts to get the
job done.

N = the number of bits in one argument
M = the number of bits in the result


Dividing is a lot harder but dividing is semi-optional so it isn't too
much of a problem that it takes a lot more shifts to do.


--
--
kensmith@rahul.net forging knowledge
 
kensmith@green.rahul.net (Ken Smith) writes:
In article <vfydnaDbOu3JhWrfRVn-2A@scnresearch.com>,
Don Taylor <dont@agora.rdrop.com> wrote:
[...]
Next, how do you do something like multiply or divide with your
arbitrarily long streams of bits?

A serial multiplier is fairly easy to do. It takes N*M shifts to get the
job done.

N = the number of bits in one argument
M = the number of bits in the result

Dividing is a lot harder but dividing is semi-optional so it isn't too
much of a problem that it takes a lot more shifts to do.
I know there are some solutions to some of the problems that he faces.
I specifically didn't tell him what a solution might be to any one of
them, although I did strongly hint in a couple of the things I wrote.

What I'm trying to get at is whether he has anything more than just
an idea about how to have a stream of bits tell you when you have
reached the end or not.
 
In article <4-mdnZUku-xaAWrfRVn-tQ@scnresearch.com>,
Don Taylor <dont@agora.rdrop.com> wrote:
[...]
What I'm trying to get at is whether he has anything more than just
an idea about how to have a stream of bits tell you when you have
reached the end or not.
I think we can save our brains on that one and just assume that the only
idea he has is the end marker idea. He hasn't even realized what base
numbering he should use. (hint: base 2 is not the right one)


--
--
kensmith@rahul.net forging knowledge
 
"Ken Smith" <kensmith@green.rahul.net> wrote in message
news:dd8vqf$ru3$1@blue.rahul.net...
In article <4-mdnZUku-xaAWrfRVn-tQ@scnresearch.com>,
Don Taylor <dont@agora.rdrop.com> wrote:
[...]
What I'm trying to get at is whether he has anything more than just
an idea about how to have a stream of bits tell you when you have
reached the end or not.

I think we can save our brains on that one and just assume that the only
idea he has is the end marker idea. He hasn't even realized what base
numbering he should use. (hint: base 2 is not the right one)
Boring, it's base 2.

--
--
kensmith@rahul.net forging knowledge
 
Does the world really need a variable bit cpu? ;)

Maybe for nanobots, but that's it.

I must come to the conclusion that the variable bit cpu is a waste of
time for me... though an interesting waste of time... maybe I will work
on it sometime when I feel like it... but at the moment I have lost
interest ;)

There are other people who made little cpus, and where did it take them?
Into the garbage bin :)

Then again it might get interesting again when combined with robotics
etc... building a little spider with a camera on it would be cool
:D:D:D:D:D little spiiiieeeee robot.

Hmmm yess.. maybe I should go spend some time in the robotics newsgroups
and websites.. since these hardware newsgroups are starting to bore me
quite a lot... mostly full of dumb people asking stupid questions :) and
when I ask dumb stupid questions they don't even know the answer :D
bbboorrinngg ;)

"Don Taylor" <dont@agora.rdrop.com> wrote in message
news:4-mdnZUku-xaAWrfRVn-tQ@scnresearch.com...
kensmith@green.rahul.net (Ken Smith) writes:
In article <vfydnaDbOu3JhWrfRVn-2A@scnresearch.com>,
Don Taylor <dont@agora.rdrop.com> wrote:
[...]
Next, how do you do something like multiply or divide with your
arbitrarily long streams of bits?

A serial multiplier is fairly easy to do. It takes N*M shifts to get the
job done.

N = the number of bits in one argument
M = the number of bits in the result

Dividing is a lot harder but dividing is semi-optional so it isn't too
much of a problem that it takes a lot more shifts to do.

I know there are some solutions to some of the problems that he faces.
I specifically didn't tell him what a solution might be to any one of
them, although I did strongly hint in a couple of the things I wrote.

What I'm trying to get at is whether he has anything more than just
an idea about how to have a stream of bits tell you when you have
reached the end or not.
 
In article <ddbg1s$g1l$2@blue.rahul.net>, kensmith@green.rahul.net
says...
In article <ddaakd$iki$1@news3.zwoll1.ov.home.nl>,
Skybuck Flying <nospam@hotmail.com> wrote:

"Ken Smith" <kensmith@green.rahul.net> wrote in message
news:dd8vqf$ru3$1@blue.rahul.net...
In article <4-mdnZUku-xaAWrfRVn-tQ@scnresearch.com>,
Don Taylor <dont@agora.rdrop.com> wrote:
[...]
What I'm trying to get at is whether he has anything more than just
an idea about how to have a stream of bits tell you when you have
reached the end or not.

I think we can save our brains on that one and just assume that the only
idea he has is the end marker idea. He hasn't even realized what base
numbering he should use. (hint: base 2 is not the right one)

Boring, it's base 2.

Guess again; the correct answer is obvious and it is not 2.

How about Pi? ...to one significant digit.

--
Keith
 
In article <MPG.1d63c9b78cc953fe989b6d@news.individual.net>,
Keith Williams <krw@att.bizzzz> wrote:
In article <ddbg1s$g1l$2@blue.rahul.net>, kensmith@green.rahul.net
says...
[...]
Boring, it's base 2.

Guess again; the correct answer is obvious and it is not 2.

How about Pi? ...to one significant digit.
An interesting alternative would be to embed the markers in the data, as
I guess you are thinking. The context in which I'm suggesting this is
one where the bits represent the number only and not the marker.

Remember we want a general purpose machine that can handle signed values
and be variable length.

--
--
kensmith@rahul.net forging knowledge
 
"Ken Smith" <kensmith@green.rahul.net> wrote in message
news:ddbg1s$g1l$2@blue.rahul.net...
In article <ddaakd$iki$1@news3.zwoll1.ov.home.nl>,
Skybuck Flying <nospam@hotmail.com> wrote:

"Ken Smith" <kensmith@green.rahul.net> wrote in message
news:dd8vqf$ru3$1@blue.rahul.net...
In article <4-mdnZUku-xaAWrfRVn-tQ@scnresearch.com>,
Don Taylor <dont@agora.rdrop.com> wrote:
[...]
What I'm trying to get at is whether he has anything more than just
an idea about how to have a stream of bits tell you when you have
reached the end or not.

I think we can save our brains on that one and just assume that the
only
idea he has is the end marker idea. He hasn't even realized what base
numbering he should use. (hint: base 2 is not the right one)

Boring, it's base 2.

Guess again; the correct answer is obvious and it is not 2.
Base 4? Get real.


--
--
kensmith@rahul.net forging knowledge
 
On Fri, 12 Aug 2005 13:42:05 +0200, Skybuck Flying wrote:

"Ken Smith" <kensmith@green.rahul.net> wrote in message
news:ddbg1s$g1l$2@blue.rahul.net...
In article <ddaakd$iki$1@news3.zwoll1.ov.home.nl>,
Skybuck Flying <nospam@hotmail.com> wrote:

"Ken Smith" <kensmith@green.rahul.net> wrote in message
news:dd8vqf$ru3$1@blue.rahul.net...
In article <4-mdnZUku-xaAWrfRVn-tQ@scnresearch.com>,
Don Taylor <dont@agora.rdrop.com> wrote:
[...]
What I'm trying to get at is whether he has anything more than just
an idea about how to have a stream of bits tell you when you have
reached the end or not.

I think we can save our brains on that one and just assume that the
only
idea he has is the end marker idea. He hasn't even realized what base
numbering he should use. (hint: base 2 is not the right one)

Boring, it's base 2.

Guess again; the correct answer is obvious and it is not 2.

Base 4 ? get real.
What's wrong with base 4? In grade school we were taught to do arithmetic
in all bases up to 32 (the symbols got to be hard to remember ;). Are you
a crappy coder; a binary bigot?

--
Keith
 
On Fri, 12 Aug 2005 01:37:29 +0000, Ken Smith wrote:

In article <MPG.1d63c9b78cc953fe989b6d@news.individual.net>,
Keith Williams <krw@att.bizzzz> wrote:
In article <ddbg1s$g1l$2@blue.rahul.net>, kensmith@green.rahul.net
says...
[...]
Boring, it's base 2.

Guess again; the correct answer is obvious and it is not 2.

How about Pi? ...to one significant digit.

An interesting alternative would be to imbed the markers in the data as I
guess you are thinking. The context in which I'm suggesting this is where
the bits represent the number only and not the marker.
Why not? X86 embeds instruction "markers" in the input stream. It can't
be any harder to decode data.

Remember we want a general purpose machine that can handle signed values
and be variable length.
Sure. But why waste bandwidth with a word marker per bit? (a question for
the OP, BTW)

--
Keith
 
On Sat, 13 Aug 2005 02:02:32 +0000, Ken Smith wrote:

In article <pan.2005.08.12.19.45.50.678755@att.bizzzz>,
keith <krw@att.bizzzz> wrote:
[...]
Base 4 ? get real.

What's wrong with base 4? In grade school we were taught to do arithmetic
in all bases up to 32 (the symbols got to be hard to remember ;). Are you
a crappy coder; a binary bigot?

Up to 36, the symbols are not that bad.
Sure, but we dropped 'O's and 'I's and such confusing things. ;-)

I've always kind of liked base 4 for the exponents in floating point
numbers. It can save you a whole box full of NAND gates.
IBM likes base-16 for FP. Go figure.

--
Keith
 
In article <ddi1q1$g5v$1@news2.zwoll1.ov.home.nl>,
Skybuck Flying <nospam@hotmail.com> wrote:
[...]
idea he has is the end marker idea. He hasn't even realized what base
numbering he should use. (hint: base 2 is not the right one)

Boring, it's base 2.

Guess again; the correct answer is obvious and it is not 2.

Base 4 ? get real.
No. Four is the wrong answer. The right answer is obvious.

--
--
kensmith@rahul.net forging knowledge
 
In article <pan.2005.08.12.19.45.50.678755@att.bizzzz>,
keith <krw@att.bizzzz> wrote:
[...]
Base 4 ? get real.

What's wrong with base 4? In grade school we were taught to do arithmetic
in all bases up to 32 (the symbols got to be hard to remember ;). Are you
a crappy coder; a binary bigot?
Up to 36, the symbols are not that bad.

I've always kind of liked base 4 for the exponents in floating point
numbers. It can save you a whole box full of NAND gates.

--
--
kensmith@rahul.net forging knowledge
 
