Using an FPGA to drive the 80386 CPU on a real motherboard

rickman wrote:



I use SN74CBTD3384 on a board I produce.

Also, the 74LVC8T245 is a good bidirectional translator, 8 bits in a 24-pin
package. Or, the 74ALVC164245DL, two independent 8-bit translators in a 48-
pin package. I've used a bunch of both of these in some gear I have
produced, mostly to connect between FPGAs with 3.3 V I/O and 5V systems.
I also used the former to connect 5 V systems to the old Beagle Board
computer, which had 1.8 V I/O.

Jon
 
Rick C. Hodgin wrote:

I'll get the pinouts
and work up a circuit and wiring diagram proposal in multi-layer image
format for inspection.
Complex designs like this require GOOD schematic and PCB layout tools. I
use an old one, Protel 99SE, but that is no longer available, and was pretty
expensive when it was. I have used Kicad a little; it shows REAL promise,
but is not yet as good as Protel. It runs on Windows AND Linux! And, it is
free, open-source software. The advantage of these packages is you can do
copper pours, inner power plane layers, and it checks the correctness of the
PCB layout against the schematic. No human could EVER be sure that a
complex PCB layout was correct, no matter how long they looked at it.
Software DRC takes just a couple of seconds.

Jon
 
rickman wrote:

I don't recall how many I/O you
need, but there are 144 pin QFPs (~110 I/Os) and I think some 208 pin
QFPs around. Even if you go with an older FPGA like a Spartan 3A the
wider pitch package is worth it. If you have to use a BGA, pick one
with a wide ball spacing like 1.0 mm.
I have learned how to solder QFPs down to 0.4mm pitch, but it takes a TINY
soldering tip, a stereo zoom microscope and a STEADY hand! 0.65 mm pitch is
pretty easy, at least for me.

Jon
 
On 4/8/2016 2:49 PM, Jon Elson wrote:
rickman wrote:

On 4/8/2016 12:14 AM, Jon Elson wrote:

So, Xilinx is working for me. And, yes, after going to the trouble of
getting comfortable in the Xilinx tools, the last thing I want to do is
learn somebody else's tools' quirks.

What tool quirks?

I didn't really mean quirks as in things that didn't work, or work right. I
just meant that each tool chain has a lot of features to learn, where the
optional settings are hidden, how to quickly configure the simulator, how to
set up to generate configuration PROM images, etc. There is a lot to learn
before you get fully productive.

Yeah, I guess so. Is Xilinx still using their own simulator? I seem to
recall it compiled to machine code so the compile was slow, but the
simulation itself was fast. Still true?

--

Rick
 
On 4/8/2016 3:09 PM, Rick C. Hodgin wrote:
On Friday, April 8, 2016 at 3:05:09 PM UTC-4, Jon Elson wrote:
rickman wrote:

I don't recall how many I/O you
need, but there are 144 pin QFPs (~110 I/Os) and I think some 208 pin
QFPs around. Even if you go with an older FPGA like a Spartan 3A the
wider pitch package is worth it. If you have to use a BGA, pick one
with a wide ball spacing like 1.0 mm.

I have learned how to solder QFPs down to 0.4mm pitch, but it takes a TINY
soldering tip, a stereo zoom microscope and a STEADY hand! 0.65 mm pitch is
pretty easy, at least for me.

I was under the impression I'd use some kind of solder paste over a solder
mask the PCB maker sends, place the parts, and then simply bake in some kind
of high-heat oven.

You can do that. But if you are using QFPs, a soldering iron works
pretty well I am told. The solder stencil is not so easy to use but
works ok. If you have BGAs or land grid array parts you have to use the
solder stencil.

--

Rick
 
rickman wrote:

On 4/8/2016 2:49 PM, Jon Elson wrote:
rickman wrote:

On 4/8/2016 12:14 AM, Jon Elson wrote:

So, Xilinx is working for me. And, yes, after going to the trouble of
getting comfortable in the Xilinx tools, the last thing I want to do is
learn somebody else's tools' quirks.

What tool quirks?

I didn't really mean quirks as in things that didn't work, or work right.
I just meant that each tool chain has a lot of features to learn, where
the optional settings are hidden, how to quickly configure the simulator,
how to
set up to generate configuration PROM images, etc. There is a lot to
learn before you get fully productive.

Yeah, I guess so. Is Xilinx still using their own simulator? I seem to
recall it compiled to machine code so the compile was slow, but the
simulation itself was fast. Still true?
Yes. For the designs I do, the compile only takes a few seconds, and the
sim runs pretty fast, although not blazingly. Sometimes I need to run tens
of milliseconds of simulated time to get out to the interesting part, and that
takes a minute or so. I can't imagine how some of the people simulating
gigantic systems manage.

But, the GUI aspects of Xilinx's sim are SO much better than that ghastly
Modelsim product, which I never really got competent at running.

Jon
 
Rick C. Hodgin wrote:

On Friday, April 8, 2016 at 3:05:09 PM UTC-4, Jon Elson wrote:
rickman wrote:

I don't recall how many I/O you
need, but there are 144 pin QFPs (~110 I/Os) and I think some 208 pin
QFPs around. Even if you go with an older FPGA like a Spartan 3A the
wider pitch package is worth it. If you have to use a BGA, pick one
with a wide ball spacing like 1.0 mm.

I have learned how to solder QFPs down to 0.4mm pitch, but it takes a
TINY
soldering tip, a stereo zoom microscope and a STEADY hand! 0.65 mm pitch
is pretty easy, at least for me.

I was under the impression I'd use some kind of solder paste over a solder
mask the PCB maker sends, place the parts, and then simply bake in some
kind of high-heat oven.
I don't do this for one-offs or prototypes. There is a big trick to the
stencils. You need to reduce the area of the stencil apertures, or the
excessive solder paste clumps together and bridges between the leads. As
the lead pitch gets finer, this gets more and more critical.

Another trick is to place solder blobs on two diagonal pads, and tack the
chip down. You can view the alignment on all 4 sides and "walk" the chip by
melting the solder on one of the tacked-down pins at a time until alignment
is good. Then, apply liquid flux down all the rows of pins, and drag a
soldering iron down the rows. The solder plate on the board is usually
enough to solder the pins.

Jon
 
On 4/8/2016 6:11 PM, Jon Elson wrote:
rickman wrote:

On 4/8/2016 2:49 PM, Jon Elson wrote:
rickman wrote:

On 4/8/2016 12:14 AM, Jon Elson wrote:

So, Xilinx is working for me. And, yes, after going to the trouble of
getting comfortable in the Xilinx tools, the last thing I want to do is
learn somebody else's tools' quirks.

What tool quirks?

I didn't really mean quirks as in things that didn't work, or work right.
I just meant that each tool chain has a lot of features to learn, where
the optional settings are hidden, how to quickly configure the simulator,
how to
set up to generate configuration PROM images, etc. There is a lot to
learn before you get fully productive.

Yeah, I guess so. Is Xilinx still using their own simulator? I seem to
recall it compiled to machine code so the compile was slow, but the
simulation itself was fast. Still true?

Yes. For the designs I do, the compile only takes a few seconds, and the
sim runs pretty fast, although not blazingly. Sometimes I need to run tens
of milliseconds of simulated time to get out to the interesting part, and that
takes a minute or so. I can't imagine how some of the people simulating
gigantic systems manage.

But, the GUI aspects of Xilinx's sim are SO much better than that ghastly
Modelsim product, which I never really got competent at running.

Can you be more specific? I got used to Modelsim and then paid for a
package from Lattice when I got some work using their part. Between the
time I ordered the package with Modelsim and the time it was shipped to
me, they switched to using the Aldec product. I raised hell with them
over the phone and email, but they insisted there was nothing they could
do. So I got over it and found the Aldec simulator didn't crash
periodically like the Modelsim product did. Otherwise it used a
compatible scripting interpreter and overall worked very similarly. It
has been a while since I've done much with it, but I don't recall
anything that is too awkward. What is so bad that you find Modelsim to
be "ghastly"?

--

Rick
 
On 4/8/2016 6:17 PM, Jon Elson wrote:
Rick C. Hodgin wrote:

On Friday, April 8, 2016 at 3:05:09 PM UTC-4, Jon Elson wrote:
rickman wrote:

I don't recall how many I/O you
need, but there are 144 pin QFPs (~110 I/Os) and I think some 208 pin
QFPs around. Even if you go with an older FPGA like a Spartan 3A the
wider pitch package is worth it. If you have to use a BGA, pick one
with a wide ball spacing like 1.0 mm.

I have learned how to solder QFPs down to 0.4mm pitch, but it takes a
TINY
soldering tip, a stereo zoom microscope and a STEADY hand! 0.65 mm pitch
is pretty easy, at least for me.

I was under the impression I'd use some kind of solder paste over a solder
mask the PCB maker sends, place the parts, and then simply bake in some
kind of high-heat oven.
I don't do this for one-offs or prototypes. There is a big trick to the
stencils. You need to reduce the area of the stencil apertures, or the
excessive solder paste clumps together and bridges between the leads. As
the lead pitch gets finer, this gets more and more critical.

Another trick is to place solder blobs on two diagonal pads, and tack the
chip down. You can view the alignment on all 4 sides and "walk" the chip by
melting the solder on one of the tacked-down pins at a time until alignment
is good. Then, apply liquid flux down all the rows of pins, and drag a
soldering iron down the rows. The solder plate on the board is usually
enough to solder the pins.

I have yet to deal with hand soldering of anything this fine, but I'm
told you can put a blob of solder on the iron tip to do the swipe you
are referring to. *Very* little solder is needed to make a good
connection. Many follow up the solder swipe by a solder braid and iron
to remove the excess which may not be easy to see between or behind the
pins. Someone who was hand soldering one of my boards told me he had a
fit trying to remove a short once because it was so fine he couldn't see
it even *with* a magnifier. Eventually he just passed a sharp point
between all the leads on the connector and the short was gone. I guess
it was virtually like a tin whisker (but before RoHS).

--

Rick
 
On Thu, 07 Apr 2016 05:28:39 -0700, Rick C. Hodgin wrote:

After hearing all of the difficulties I may have on the motherboard
side, the re-grouping of just working with the Am386 CPU makes a lot
more sense. Plus, it actually accomplishes nearly all of my goals as my
goals were to replace the CPU's instruction set with my own, and to
validate it 1:1 that I am correct. By having a side-by-side comparison
I can do that. And as I've stated, it might even be interesting to try
to get other 80386-clone CPUs to test out side-by-side in the
configuration, and then write a paper outlining where they are
different. But, that's the lowest possible goal, just a "wouldn't it be
interesting" thought. :)

Yesterday I remembered an additional thing that can go wrong, and almost
certainly will go wrong.

The system you are hacking (the motherboard) almost certainly uses DRAM
as its main memory. DRAM needs to be refreshed every so often. This is
done by the memory controller toggling a particular command to the chip
when it wants the chip to refresh the memory. The question is how does
the memory controller know when to order a refresh. Almost 100%
certainly, it has a counter that responds to the clock signal driven by
or derived from the main system clock. The system clock you underclock.

So if you slow the clock enough, you are certain to violate the refresh
timing of the DRAM and ruin its contents.

And you can't have a computer without a functional main memory. :)
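The refresh argument above is easy to check with arithmetic. Below is a minimal back-of-the-envelope sketch; the 2 ms retention window and 128-row refresh count are typical of PC-era DRAM (e.g. 4164-class parts) and are assumptions here, not figures from any specific motherboard.

```python
# Back-of-the-envelope check: does underclocking violate DRAM refresh?
# Assumes PC-era DRAM where every one of 128 rows must be refreshed
# within a 2 ms retention window, one row per refresh request.

ROWS = 128              # rows covered per full refresh cycle (assumed)
RETENTION_MS = 2.0      # max time between refreshes of any one row (assumed)

def max_refresh_interval_us():
    """Longest allowed gap between successive row-refresh requests."""
    return RETENTION_MS * 1000.0 / ROWS

def refresh_ok(nominal_interval_us, slowdown_factor):
    """If refresh requests are paced by the system clock, slowing that
    clock by `slowdown_factor` stretches the request interval by the
    same factor."""
    return nominal_interval_us * slowdown_factor <= max_refresh_interval_us()

print(max_refresh_interval_us())   # 15.625 us budget per row
print(refresh_ok(15.1, 1))         # nominal PC-style pacing: True (OK)
print(refresh_ok(15.1, 100))       # 100x underclock: False (refresh violated)
```

Even a modest slowdown blows the budget: the nominal ~15 µs pacing sits just under the ~15.6 µs limit, so there is almost no margin to underclock.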
 
On Wed, 06 Apr 2016 13:38:19 -0700, Rick C. Hodgin wrote:

My ultimate goal is to build a completely homemade CPU using my own
garage fab on 3 to 10 micron processes!

I'm in. :)

Although, for the time being, I'm fine with using FPGAs.

I've been thinking about building a completely open-source computer, down
to the atoms, but it's really a group project. There's literally no point
in building a computer that will not be used outside of a single family
or "close knit community". A bunch of villages and a city would be the
smallest user base I would target.

Yet, so far, I seem to be the only person I ever met off the Internet to
have such interests.

I have to ask: why spend time hacking x86 when there are so many other,
BETTER architectures out there? :)

I have a long history on 80386. I wrote my own kernel, debuggers, etc.
It's been a relationship dating back to the late 80s.

Oh, ok.

However, one of the reasons I'm doing this is because I am extending the
ISA out to include 40-bit addresses, rather than just 32-bit,
which accesses memory in the Terabyte range, and to include a built-in
ARM ISA which allows the CPU to switch between ISAs based on branch
instructions.

Ouch, ouch, ouch, too much - unless you're good at it. :)

I designed and implemented a 16-bit soft CPU from scratch, and I can tell
you it's seriously difficult to make it work. Right now, I'm hacking a 32-
bit CPU (aeMB, to be very specific) and interfacing it to a SoC I plan to
publish eventually and again, it's seriously difficult to make it work.

If you add a bit to the word or address size, you are not just doubling
the CPUs capabilities, you are also doubling the number, size and scope
of problems you have to deal with.

Now, if you already did work on this, or have a working Verilog/VHDL
model, it's probably OK - taking into account your time horizon. But if
you are at the stage of an idea, I would suggest making up your mind
between x86 and ARM and just focusing on one until you make it work.

Also, why are you doing this? Is this a hobby? Work related? Starting a
new business? Want to design and implement an NSA-proof PC?

To be honest, I am a Christian, and I want to use the talents I was
gifted with and give the fruit of my labor back to God, and to my fellow
man (and not a pursuit of money, or proprietary IP, or patents, or other
such things, but rather an expression of love basically in giving back).

Oh. OK. :) Works for me.

Did you publish any of your work?

Does simulation count? :D

Yes. Also in emulation, as a real FPGA product, though one which does
not plug into a socket but is its own entire creation. Here's an
Aleksander who created a 486 SX CPU (it has no integrated FPU):

https://github.com/alfikpl/ao486

Verily, I shall review this. I'm starting to get the impression that all
the stuff I'm making on my own has already been solved, but hasn't been
advertised. I'm working on my dream computer, but these solved systems
constantly keep popping up. Maybe all of it has already been solved?

At any rate, this implementation is an absolute MONSTER, clocking in at
36k gates (and providing a passé 30 MHz of x86). Just how the fuck am I
supposed to fit that in a sane chip? You know, the ones for which you can
get synthesizers for free, instead of paying several thousand dollars for
them.

But the HDD or VGA *might* be salvageable, depending on the
implementation.
 
On 4/9/2016 5:15 AM, Aleksandar Kuktin wrote:
On Thu, 07 Apr 2016 05:28:39 -0700, Rick C. Hodgin wrote:

After hearing all of the difficulties I may have on the motherboard
side, the re-grouping of just working with the Am386 CPU makes a lot
more sense. Plus, it actually accomplishes nearly all of my goals as my
goals were to replace the CPU's instruction set with my own, and to
validate it 1:1 that I am correct. By having a side-by-side comparison
I can do that. And as I've stated, it might even be interesting to try
to get other 80386-clone CPUs to test out side-by-side in the
configuration, and then write a paper outlining where they are
different. But, that's the lowest possible goal, just a "wouldn't it be
interesting" thought. :)

Yesterday I remembered an additional thing that can go wrong, and almost
certainly will go wrong.

The system you are hacking (the motherboard) almost certainly uses DRAM
as its main memory. DRAM needs to be refreshed every so often. This is
done by the memory controller toggling a particular command to the chip
when it wants the chip to refresh the memory. The question is: how does
the memory controller know when to order a refresh? Almost 100%
certainly, it has a counter that responds to a clock signal driven by
or derived from the main system clock, the very clock you are underclocking.

So if you slow the clock enough, you are certain to violate the refresh
timing of the DRAM and ruin its contents.

And you can't have a computer without a functional main memory. :)

There is a 14.318 MHz clock on the main board that is used to time
various activities, including the refresh. I believe this was divided by 3
to get the original CPU clock rate (8088) and further divided to get the
clock to the 8253 timer chip, which controlled the refresh as well as the
speaker logic and generated the time-of-day clock. The clock rate to
the CPU changed as PCs ran faster, but the clock to the timer chip
remained. The 14.318 MHz clock was also brought out on the backplane
connectors for use by the video cards when needed.

Refresh needs to be done on DRAM, but if you aren't using DRAM, then you
don't need refresh.

--

Rick
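The clock tree rickman describes can be written out as arithmetic. The divisor values below are the classic IBM PC ones (the 14.31818 MHz crystal is 4x the NTSC colorburst frequency); treat the PIT channel-1 divisor of 18 as an illustrative assumption, since the post is recalling the scheme from memory.

```python
# The PC clock tree described above, as arithmetic.

CRYSTAL_HZ = 14_318_180          # 4x the NTSC colorburst frequency

cpu_clk = CRYSTAL_HZ / 3         # original 8088 clock
pit_clk = CRYSTAL_HZ / 12        # clock into the 8253 timer chip
refresh_hz = pit_clk / 18        # assumed PIT channel-1 refresh divisor

print(f"CPU clock:      {cpu_clk / 1e6:.3f} MHz")    # ~4.773 MHz
print(f"8253 clock:     {pit_clk / 1e6:.4f} MHz")    # ~1.1932 MHz
print(f"refresh period: {1e6 / refresh_hz:.2f} us")  # ~15.09 us per request
```

Note how the ~15.09 µs refresh pacing that falls out of this divider chain matches the per-row refresh budget of period DRAM almost exactly, which is why the earlier post warned that underclocking leaves no margin.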
 
On 4/9/2016 6:00 AM, Aleksandar Kuktin wrote:
On Wed, 06 Apr 2016 13:38:19 -0700, Rick C. Hodgin wrote:

My ultimate goal is to build a completely homemade CPU using my own
garage fab on 3 to 10 micron processes!

I'm in. :)

Although, for the time being, I'm fine with using FPGAs.

I've been thinking about building a completely open-source computer, down
to the atoms, but it's really a group project. There's literally no point
in building a computer that will not be used outside of a single family
or "close knit community". A bunch of villages and a city would be the
smallest user base I would target.

Yet, so far, I seem to be the only person I ever met off the Internet to
have such interests.

I expect trying to get anything remotely like a critical mass is
virtually impossible. There is an open source chip similar in size and
capability to the ARM processors called RISC-V that is getting wide
attention and will produce a chip soon.


I have to ask: why spend time hacking x86 when there are so many other,
BETTER architectures out there? :)

I have a long history on 80386. I wrote my own kernel, debuggers, etc.
It's been a relationship dating back to the late 80s.

Oh, ok.

However, one of the reasons I'm doing this is because I am extending the
ISA out to include 40-bit addresses, rather than just 32-bit,
which accesses memory in the Terabyte range, and to include a built-in
ARM ISA which allows the CPU to switch between ISAs based on branch
instructions.

Ouch, ouch, ouch, too much - unless you're good at it. :)

It will never be possible to include an ARM ISA unless a license fee is
paid. I recall some years back a student produced an HDL version of an
ARM 7TDMI. ARM spoke to him and the core was withdrawn. He also got a
job with them. Win/win


I designed and implemented a 16-bit soft CPU from scratch, and I can tell
you it's seriously difficult to make it work. Right now, I'm hacking a 32-
bit CPU (aeMB, to be very specific) and interfacing it to a SoC I plan to
publish eventually and again, it's seriously difficult to make it work.

I'm surprised that you say it is hard to make it work. Do you mean it
is hard to build all the infrastructure? I have designed my own CPUs
before and found that part easy. It is creating the software support
that is hard, or at least a lot of work. I use Forth which helps make
things easier.


If you add a bit to the word or address size, you are not just doubling
the CPUs capabilities, you are also doubling the number, size and scope
of problems you have to deal with.

??? My CPU design did not specify the data size, only the instruction
size. I didn't have a problem adjusting the data size to suit my
application.


Now, if you already did work on this, or have a working Verilog/VHDL
model, it's probably OK - taking into account your time horizon. But if
you are at the stage of an idea, I would suggest making up your mind
between x86 and ARM and just focusing on one until you make it work.

Also, why are you doing this? Is this a hobby? Work related? Starting a
new business? Want to design and implement an NSA-proof PC?

To be honest, I am a Christian, and I want to use the talents I was
gifted with and give the fruit of my labor back to God, and to my fellow
man (and not a pursuit of money, or proprietary IP, or patents, or other
such things, but rather an expression of love basically in giving back).

Oh. OK. :) Works for me.

Did you publish any of your work?

Does simulation count? :D

Yes. Also in emulation, as a real FPGA product, though one which does
not plug into a socket but is its own entire creation. Here's an
Aleksander who created a 486 SX CPU (it has no integrated FPU):

https://github.com/alfikpl/ao486

Verily, I shall review this. I'm starting to get the impression that all
the stuff I'm making on my own has already been solved, but hasn't been
advertised. I'm working on my dream computer, but these solved systems
constantly keep popping up. Maybe all of it has already been solved?

Exactly what is your dream computer?


At any rate, this implementation is an absolute MONSTER, clocking in at
36k gates (and providing a passé 30 MHz of x86). Just how the fuck am I
supposed to fit that in a sane chip? You know, the ones for which you can
get synthesizers for free, instead of paying several thousand dollars for
them.

But the HDD or VGA *might* be salvageable, depending on the
implementation.

--

Rick
 
On Saturday, April 9, 2016 at 10:42:27 AM UTC-4, rickman wrote:
On 4/9/2016 6:00 AM, Aleksandar Kuktin wrote:
On Wed, 06 Apr 2016 13:38:19 -0700, Rick C. Hodgin wrote:

My ultimate goal is to build a completely homemade CPU using my own
garage fab on 3 to 10 micron processes!

I'm in. :)

Although, for the time being, I'm fine with using FPGAs.

I've been thinking about building a completely open-source computer, down
to the atoms, but it's really a group project. There's literally no point
in building a computer that will not be used outside of a single family
or "close knit community". A bunch of villages and a city would be the
smallest user base I would target.

Yet, so far, I seem to be the only person I ever met off the Internet to
have such interests.

I expect trying to get anything remotely like a critical mass is
virtually impossible. There is an open source chip similar in size and
capability to the ARM processors called RISC-V that is getting wide
attention and will produce a chip soon.

Never underestimate the power of a project given over to God. :) He can
guide people down paths that don't seem to make sense, but because He can
see what's coming in the future, has them right where they need to be when
that time comes.

FWIW, I've been considering photonic circuits lately. I've devised an
entire methodology for how they would operate in theory. They cannot yet
be built (to my knowledge), but the circuits I'm creating perform the
necessary logic ops, and do more than existing circuits because they
generate almost no heat.

It's been a nice mental exercise actually, and it's helped me think about
those things in the low-level "building sand castles" arena, as though I
am a builder on the silicon, creating things up from there.

I have to ask: why spend time hacking x86 when there are so many other,
BETTER architectures out there? :)

I have a long history on 80386. I wrote my own kernel, debuggers, etc.
It's been a relationship dating back to the late 80s.

Oh, ok.

However, one of the reasons I'm doing this is because I am extending the
ISA out to include 40-bit addresses, rather than just 32-bit,
which accesses memory in the Terabyte range, and to include a built-in
ARM ISA which allows the CPU to switch between ISAs based on branch
instructions.

Ouch, ouch, ouch, too much - unless you're good at it. :)

It will never be possible to include an ARM ISA unless a license fee is
paid. I recall some years back a student produced an HDL version of an
ARM 7TDMI. ARM spoke to him and the core was withdrawn. He also got a
job with them. Win/win

It's why all patents and copyrights should be abolished, and the fruit of
man's ideas should be given to mankind, with the people then only being
paid for their labor, as the ideas and ingenuity they possess are gifts
from God, given not just for them to use to their profit, but as part of
that fabric of man God put here upon this world.

We should not oppress people, but work with them and encourage those who
have special and unique abilities, letting them thrive.

I designed and implemented a 16-bit soft CPU from scratch, and I can tell
you it's seriously difficult to make it work. Right now, I'm hacking a 32-
bit CPU (aeMB, to be very specific) and interfacing it to a SoC I plan to
publish eventually and again, it's seriously difficult to make it work.

I'm surprised that you say it is hard to make it work. Do you mean it
is hard to build all the infrastructure? I have designed my own CPUs
before and found that part easy. It is creating the software support
that is hard, or at least a lot of work. I use Forth which helps make
things easier.


If you add a bit to the word or address size, you are not just doubling
the CPUs capabilities, you are also doubling the number, size and scope
of problems you have to deal with.

??? My CPU design did not specify the data size, only the instruction
size. I didn't have a problem adjusting the data size to suit my
application.


Now, if you already did work on this, or have a working Verilog/VHDL
model, it's probably OK - taking into account your time horizon. But if
you are at the stage of an idea, I would suggest making up your mind
between x86 and ARM and just focusing on one until you make it work.

Also, why are you doing this? Is this a hobby? Work related? Starting a
new business? Want to design and implement an NSA-proof PC?

To be honest, I am a Christian, and I want to use the talents I was
gifted with and give the fruit of my labor back to God, and to my fellow
man (and not a pursuit of money, or proprietary IP, or patents, or other
such things, but rather an expression of love basically in giving back).

Oh. OK. :) Works for me.

Did you publish any of your work?

Does simulation count? :D

Yes. Also in emulation, as a real FPGA product, though one which does
not plug into a socket but is its own entire creation. Here's an
Aleksander who created a 486 SX CPU (it has no integrated FPU):

https://github.com/alfikpl/ao486

Verily, I shall review this. I'm starting to get the impression that all
the stuff I'm making on my own has already been solved, but hasn't been
advertised. I'm working on my dream computer, but these solved systems
constantly keep popping up. Maybe all of it has already been solved?

Exactly what is your dream computer?


At any rate, this implementation is an absolute MONSTER, clocking in at
36k gates (and providing a passé 30 MHz of x86). Just how the fuck am I
supposed to fit that in a sane chip? You know, the ones for which you can
get synthesizers for free, instead of paying several thousand dollars for
them.

But the HDD or VGA *might* be salvageable, depending on the
implementation.

Best regards,
Rick C. Hodgin
 
On 4/9/2016 8:06 PM, Rick C. Hodgin wrote:
On Saturday, April 9, 2016 at 10:42:27 AM UTC-4, rickman wrote:

It will never be possible to include an ARM ISA unless a license fee is
paid. I recall some years back a student produced an HDL version of an
ARM 7TDMI. ARM spoke to him and the core was withdrawn. He also got a
job with them. Win/win

It's why all patents and copyrights should be abolished, and the fruit of
man's ideas should be given to mankind, with the people then only being
paid for their labor, as the ideas and ingenuity they possess are gifts
from God, given not just for them to use to their profit, but as part of
that fabric of man God put here upon this world.

That is a very naive ideology. If you abolish patents and make all
ideas free, there is much less reason to invent. Most people are
motivated by profit which patents potentially provide. I expect you are
going to talk about designing for the glory of God. However I would
point to the design you are doing and how it will benefit virtually no
one other than yourself. If ARM were devoted to the type of chips they
were designing in the 90's, would that be a good thing?


We should not oppress people, but work with them and encourage those who
have special and unique abilities, letting them thrive.

Patents are no more oppression than laws to prevent the theft of crops
you raise or goods you make. That's why they call it Intellectual
Property.

Perhaps you are a pure socialist who believes no one should own
property, that it should belong to everyone.

--

Rick
 
Jon Elson <jmelson@wustl.edu> wrote:
Complex designs like this require GOOD schematic and PCB layout tools. I
use an old one, Protel 99SE, but that is no longer available, and was pretty
expensive when it was. I have used Kicad a little, it shows REAL promise,
but is not yet as good as Protel.

Just to note that there's a free descendant of Protel which is called
CircuitMaker:
http://circuitmaker.com/

I haven't used it, but I use its commercial sister Altium which is pretty
nice. The difference with CircuitMaker is your boards have to be public,
there is no way to keep them private. So it's no good for commercial work,
but that limitation is not a problem for an open source project.

(It's also not open source, but my experience is that proprietary tools are
generally several steps above the open-source ones; depending on board
complexity, going with proprietary tools can be a necessary evil to get work
done. Note that CircuitMaker also relies on an internet connection.)

Theo
 
rickman <gnuarm@gmail.com> wrote:
I expect trying to get anything remotely like a critical mass is
virtually impossible. There is an open source chip similar in size and
capability to the ARM processors called RISC-V that is getting wide
attention and will produce a chip soon.

RISC-V is an architecture, and various chips have been produced. It's also
getting embedded in other products (eg as a tiny part of some other SoC).

My colleagues at LowRISC:
http://www.lowrisc.org/

are working on a fully open-source SoC based on the RISC-V architecture - ie
open source at the Verilog design level, including peripheral IP. They
still face many challenges, including the need to use proprietary
ASIC tools and hard macros (eg memory generators). Their initial run is
aimed at a multi-project wafer service.

I have to ask: why spend time hacking x86 when there are so many other,
BETTER architectures out there? :)

I have a long history on 80386. I wrote my own kernel, debuggers, etc.
It's been a relationship dating back to the late 80s.

Oh, ok.

There is an argument that the i386 has the largest software stack, so an
i386 will have the most compatibility. However, that is changing. The
Linux kernel doesn't support the i386 any more. Many binaries end up using
MMX, SSE and so on, so won't work on an i386. Other operating systems are
more platform agnostic (eg Android runs on ARM, x86, MIPS and others).

Maybe that doesn't matter to you, but the question is not just what the
best choice is now, but what it might be by the time you hope to be finished.

One useful feature of RISC-V is they have ISA subsets: the base RV32I subset
is quite small, and then features like 64 bit, floating point, etc are
extra. That means the instruction requirements of software are made
explicit.
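
To illustrate how those subsets are surfaced in practice: a RISC-V ISA
string spells out the base and each optional extension by letter. The
little parser below is purely a hypothetical sketch (not from any real
toolchain), using the standard single-letter extension names:

```python
# Hypothetical sketch of decoding a RISC-V ISA string into its subsets.
# The extension letters follow the RISC-V naming convention; the parser
# itself is illustrative only, not taken from any real tool.

def parse_isa(isa):
    """Split an ISA string like 'rv32imafd' into base and extension names."""
    isa = isa.lower()
    assert isa.startswith(("rv32", "rv64")), "unknown base"
    base, exts = isa[:4], isa[4:]
    names = {"i": "base integer", "m": "mul/div", "a": "atomics",
             "f": "single float", "d": "double float", "c": "compressed"}
    return base, [names[ch] for ch in exts if ch in names]

base, features = parse_isa("rv32imafd")
# base is "rv32"; features lists the integer base plus four extensions
```

The point is that software built for bare RV32I states that requirement
explicitly, rather than silently assuming MMX/SSE-style extras the way much
x86 code does.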

It will never be possible to include an ARM ISA unless a license fee is
paid. I recall some years back a student produced an HDL version of an
ARM7TDMI. ARM spoke to him and the core was withdrawn. He also got a
job with them. Win/win

ARMv2 (ARM with 26 bit addressing) is now out of patent, so it's possible to
implement it licence-free, and it has been:
https://en.wikipedia.org/wiki/Amber_(processor_core)

That is one thing in favour of the i386 against newer architectures: it is
now out of patent, so Intel can't sue you for making one.

I'm surprised that you say it is hard to make it work. Do you mean it
is hard to build all the infrastructure? I have designed my own CPUs
before and found that part easy. It is creating the software support
that is hard, or at least a lot of work. I use Forth which helps make
things easier.

We've also designed our own 64-bit CPUs, with our own ISA extensions.
Multicore and cache coherency were a significant effort, and likewise we
spent a lot of effort on testing.

Software is indeed a lot of work, though using the maximum of pre-existing
software helps.

At any rate, this implementation is an absolute MONSTER, clocking in at
36k gates (and providing a passé 30 MHz of x86). Just how the fuck am I
supposed to fit that in a sane chip? You know, the ones for which you can
get synthesizers for free, instead of paying several thousand dollars for
them.

This is the tricky question about the 'homebrew silicon fab' question.

I think there's an interesting space for the 'artisanal semiconductor fab',
that makes old chips relatively cheaply and allows users to get closer to
the process. Some universities have that kind of facility, but often not
intended for even medium volume manufacturing. But a general
democratisation of fab technology could be an interesting space.

You'd think that using new techniques on an old process would make life
easier, just like people can make marble sculpture with CNC machines instead
of hammers.

What hits you, however, is lack of Moore's law. Let's say you have a 3
micron fab. You might only have two metal layers. You just can't get many
transistors on that chip, and you can't wire them very well. The tools
aren't optimised for that kind of environment, and will waste space trying
to deal with layer limitations. You also can't fit much memory, and you
can't deal with modern interfaces (no USB for you) or modern off-chip DRAM.
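
To put rough numbers on that, here is a back-of-envelope density
comparison. The "one transistor occupies about (20 x feature size)^2" rule
and the die size are assumed illustrative figures for the sketch, not data
from any real process:

```python
# Order-of-magnitude sketch of why a 3 micron process is so limiting.
# All constants here are assumptions chosen only for illustration.

feature_nm = {"3 um (late-1970s class)": 3000, "28 nm (modern-ish)": 28}
die_mm2 = 25  # a modest 5 mm x 5 mm die, assumed

for name, f in feature_nm.items():
    # Crude model: one transistor plus wiring takes roughly (20*f)^2 of area.
    per_mm2 = 1e12 / (20 * f) ** 2   # 1 mm^2 = 1e12 nm^2
    print(f"{name}: ~{per_mm2 * die_mm2:,.0f} transistors on {die_mm2} mm^2")
```

Under these assumptions the 3 micron die holds only a few thousand
transistors, i.e. Z80-class budgets, which is exactly the "1970s style hand
design" regime described above.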

You'd end up with 1970s style hand design, albeit with a fancy GUI rather
than plastic tapes, connected to 1970s peripherals.

Maybe you can deal with some of those by careful choice of off-chip
components (eg everything is SRAM, not DDR4) and others by die stacking a
commodity small-process interface die on top of your large-process die.

Maybe the 'artisanal' producer makes the coarse-grained silicon interposer,
and then stacks commodity dice from other vendors on it - like hobbyists make
and assemble PCBs today. It does, however, require an ecosystem set up for
that.

Theo
 
On 4/10/2016 2:26 PM, Theo Markettos wrote:
rickman <gnuarm@gmail.com> wrote:
I expect trying to get anything remotely like a critical mass is
virtually impossible. There is an open source chip similar in size and
capability to the ARM processors called RISC-V that is getting wide
attention and will produce a chip soon.

RISC-V is an architecture, and various chips have been produced. It's also
getting embedded in other products (eg as a tiny part of some other SoC).

My colleagues at LowRISC:
http://www.lowrisc.org/

are working on a fully open-source SoC based on the RISC-V architecture - ie
open source at the Verilog design level, including peripheral IP. They
still face many challenges, including the need to use proprietary
ASIC tools and hard macros (eg memory generators). Their initial run is
aimed at a multi-project wafer (MPW) service.

I have to ask: why spend time hacking x86 when there are so many other,
BETTER architectures out there? :)

I have a long history on 80386. I wrote my own kernel, debuggers, etc.
It's been a relationship dating back to the late 80s.

Oh, ok.

There is an argument that the i386 has the largest software stack, so an
i386 will have the most compatibility. However, that is changing. The
Linux kernel doesn't support the i386 any more. Many binaries end up using
MMX, SSE and so on, so won't work on an i386. Other operating systems are
more platform agnostic (eg Android runs on ARM, x86, MIPS and others).

Maybe that doesn't matter to you, but the question is not just what is the
best choice now, but what the best choice might be by the time you hope to
be finished.

One useful feature of RISC-V is they have ISA subsets: the base RV32I subset
is quite small, and then features like 64 bit, floating point, etc are
extra. That means the instruction requirements of software are made
explicit.

It will never be possible to include an ARM ISA unless a license fee is
paid. I recall some years back a student produced an HDL version of an
ARM7TDMI. ARM spoke to him and the core was withdrawn. He also got a
job with them. Win/win

ARMv2 (ARM with 26 bit addressing) is now out of patent, so it's possible to
implement it licence-free, and it has been:
https://en.wikipedia.org/wiki/Amber_(processor_core)

That is one thing in favour of the i386 against newer architectures: it is
now out of patent, so Intel can't sue you for making one.

I don't think the restriction was ever patent based. It was a matter of
microcode copyright. That is why AMD had to use clean-room reverse
engineering to produce a specification for each subsequent CPU they wanted
to copy; that approach gets around copyright. So the 386 doesn't have an
advantage over other x86 processors in terms of IP protection, though it
does have an advantage over patent-protected designs.


I'm surprised that you say it is hard to make it work. Do you mean it
is hard to build all the infrastructure? I have designed my own CPUs
before and found that part easy. It is creating the software support
that is hard, or at least a lot of work. I use Forth which helps make
things easier.

We've also designed our own 64-bit CPUs, with our own ISA extensions.
Multicore and cache coherency were a significant effort, and likewise we
spent a lot of effort on testing.

Software is indeed a lot of work, though using the maximum of pre-existing
software helps.

At any rate, this implementation is an absolute MONSTER, clocking in at
36k gates (and providing a passé 30 MHz of x86). Just how the fuck am I
supposed to fit that in a sane chip? You know, the ones for which you can
get synthesizers for free, instead of paying several thousand dollars for
them.

This is the tricky question about the 'homebrew silicon fab' question.

I think there's an interesting space for the 'artisanal semiconductor fab',
that makes old chips relatively cheaply and allows users to get closer to
the process. Some universities have that kind of facility, but often not
intended for even medium volume manufacturing. But a general
democratisation of fab technology could be an interesting space.

I would like to see an open source effort for FPGAs. I think the big
roadblock would be the fab, but maybe it's more of a seed money issue
since wafers can be bought for a reasonable price. A *very* small
startup created an array CPU on a shoestring a few years ago. They have
run a second batch of chips, so they must be selling them to someone.
It's just a question of proving a market exists and coming up with the
startup funding.

http://www.greenarraychips.com/

--

Rick
 
On Tuesday, April 5, 2016 at 3:15:38 PM UTC-4, Rick C. Hodgin wrote:
I have a desire to create an 80386 CPU in FPGA form, one which will plug
into the 132-pin socket of an existing 80386 motherboard as a replacement CPU. I
want to be able to provide the features of the 80386 on that machine, but
through my FPGA, to then allow me to extend the ISA to include other
instructions and abilities.

Does anybody have any experience or advice in creating an FPGA-based CPU that
connects to a real hardware device and replicates that device's behavior?

For example, the 80386 uses 5 V I/O while the Altera board I have drives
1.xV and 3.3 V at most, so I'd have to use a level converter. At speeds up
to a maximum of 40 MHz, would there be any issues?
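
On the timing side, a 40 MHz bus leaves a fairly generous budget, so a
translator's few nanoseconds of propagation delay should be tolerable. A
quick sketch of the arithmetic; the 5 ns translator delay is an assumed
typical figure, not from any specific datasheet:

```python
# Rough timing-budget check for inserting level translators in a 40 MHz
# bus path. The 5 ns per-crossing delay is an assumed placeholder value;
# check the datasheet of whichever translator you actually choose.

f_mhz = 40.0
period_ns = 1000.0 / f_mhz          # 25 ns clock period at 40 MHz
translator_tpd_ns = 5.0             # assumed worst-case delay per crossing
crossings = 2                       # e.g. out through one level, back through another

remaining_ns = period_ns - crossings * translator_tpd_ns
print(f"Clock period: {period_ns:.1f} ns")
print(f"Budget left after {crossings} crossings: {remaining_ns:.1f} ns")
```

Even with two crossings in the path, well over half the cycle remains for
the motherboard's own setup and hold requirements under these assumptions.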

Also, I'd like to create a "monitor board": a board with a 132-pin male
plug that inserts into the motherboard's CPU socket on one side, and a
132-pin female socket on the other side into which a real 80386 CPU would
be installed, letting me pull signals off the wires between the CPU socket
and the CPU itself. I had assumed I would use opto-isolation for this, but
I don't know whether that would work or be the best approach.

In addition, and specific to the 80386 CPU, AMD manufactured an Am386 CPU
that is 100% compatible with the Intel 80386, but it has the ability to
underclock down to even 0 MHz in a standby mode (allowing it to consume
just 0.001 W). I'm wondering if anyone has experience underclocking an
80386 motherboard down into the kHz range, or even the Hz range, and
whether the board would still work at those slow speeds.
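
That near-zero standby figure is consistent with dynamic CMOS power
scaling roughly linearly with clock frequency, P ≈ C·V²·f; whatever
remains at 0 MHz is static leakage. A sketch with assumed illustrative
values (the effective capacitance is a made-up number, not an Am386
datasheet figure):

```python
# Dynamic CMOS power scales approximately as P = C * V^2 * f.
# C_eff (effective switched capacitance) is an assumed illustrative value.

C_eff = 2e-9      # 2 nF effective switched capacitance, assumed
V = 5.0           # 5 V supply rail

for f in (40e6, 1e3, 0.0):           # 40 MHz, 1 kHz, fully stopped
    p = C_eff * V * V * f            # dynamic power only; leakage ignored
    print(f"f = {f:>10.0f} Hz -> ~{p:.6f} W dynamic")
```

Under these assumptions the dynamic term goes from watts at 40 MHz to
microwatts in the kHz range and to zero when stopped, so a fully static
design like the Am386 is limited only by leakage in standby.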

My goals in slowing down the CPU are to detect and isolate timing protocols,
which I can then scale up to higher speeds once identified.

In any event, any help or advice is appreciated. Thank you.

Well much to my surprise, I was watching CPU-related videos today and
I came across this video:

https://www.youtube.com/watch?v=y0WEx0Gwk1E&t=4m22s

Beginning around 4:22 it shows the very type of board I was looking to
create to plug in to an existing 80386 system and monitor its signals.

Does anyone have one of these types of boards on their shelf from back in
the day?

Best regards,
Rick C. Hodgin
 
On Saturday, April 16, 2016 at 11:31:21 AM UTC-4, Herbert Kleebauer wrote:
On 16.04.2016 16:58, Rick C. Hodgin wrote:
On Tuesday, April 5, 2016 at 3:15:38 PM UTC-4, Rick C. Hodgin wrote:

I have a desire to create an 80386 CPU in FPGA form, one which will plug in
to the 132-pin socket of existing 80386 motherboard as a replacement CPU.

Does anyone have one of these types of boards on their shelf from back in
the day?


http://www.forcetechnologies.co.uk/news/replacement-for-intel-processors-in-high-reliability-long-life-systems
http://www.forcetechnologies.co.uk/downloads/x86-Processor-Recreation

How exciting! :)

> But I doubt that a complete 386 will fit into an FPGA.

A man named Aleksander Osman has created a full 486SX-class CPU in an Altera
FPGA (it targets the Terasic DE2-115 only and, like the real 486SX, has no FPU):

https://github.com/alfikpl/ao486

It comprises:

Unit               Cells    M9K
----------------   ------   ---
ao486SX CPU         36517    47
floppy               1514     2
hdd                  2071    17
nios2                1056     3
onchip for nios2        0    32
pc_dma                848     0
pic                   388     0
pit                   667     0
ps2                   742     2
rtc                   783     1
sound               37131    29
vga                  2534   260
------------------------------
Total:             84,251   393

After compiling with Quartus II:

-----[ Start ]----
Fitter Status : Successful - Sun Mar 30 21:00:13 2014
Quartus II 64-Bit Version : 13.1.0 Build 162 10/23/2013 SJ Web Edition
Revision Name : soc
Top-level Entity Name : soc
Family : Cyclone IV E
Device : EP4CE115F29C7
Timing Models : Final
Total logic elements : 91,256 / 114,480 ( 80 % )
Total combinational functions : 86,811 / 114,480 ( 76 % )
Dedicated logic registers : 26,746 / 114,480 ( 23 % )
Total registers : 26865
Total pins : 108 / 529 ( 20 % )
Total virtual pins : 0
Total memory bits : 2,993,408 / 3,981,312 ( 75 % )
Embedded Multiplier 9-bit elements : 44 / 532 ( 8 % )
Total PLLs : 1 / 4 ( 25 % )
-----[ End ]----
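
For what it's worth, the figures quoted above are self-consistent; here is
a quick cross-check (all values copied directly from the cell table and the
fitter report):

```python
# Cross-check the synthesis figures quoted in this post.

cells = {"ao486SX CPU": 36517, "floppy": 1514, "hdd": 2071, "nios2": 1056,
         "onchip for nios2": 0, "pc_dma": 848, "pic": 388, "pit": 667,
         "ps2": 742, "rtc": 783, "sound": 37131, "vga": 2534}
assert sum(cells.values()) == 84251      # matches the quoted 84,251 total

logic_elements, capacity = 91256, 114480
print(f"Utilization: {100 * logic_elements / capacity:.0f}%")  # report says 80%
```

Note the sound and VGA peripherals together cost more cells than the CPU
core itself, so a CPU-only port would fit in a much smaller device.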

Runs at 39 MHz. Boots and runs:

Microsoft MS-DOS version 6.22
Microsoft Windows for Workgroups 3.11
Microsoft Windows 95
Linux 3.13.1

Best regards,
Rick C. Hodgin
 
