New soft processor core paper publisher?

On Saturday, June 29, 2013 9:56:30 AM UTC-4, Tom Gardner wrote:

But I sure can't see why that should be true for Win7.
Zillions of updates keep coming at you, and they only begin to slow down after two days or so.

Why not just do a full re-install from CD?
It's a couple of days trying to repair vs. a couple of days reinstalling and updating. The former is usually the safe bet but I think I've met my match in this laptop (which I previously did a reinstall on due to a hard drive crash).

I was thinking of getting Win7 to replace XP when MS withdraw support next year. Now I'm in doubt.
I'm riding XP Pro until the hubcaps fall off.

Yes, but it is better than Vista, and the hacks don't feel so guilty about supporting it.
I'm beginning to think the whole "every other MS OS is a POS, and every other one is golden" meme is 99% marketing. I work on a couple of Vista machines here and there and Win7 seems about the same in terms of fixing things (i.e. a dog). XP has its issues as well, but it is simpler and there are more ways to fix it without blowing absolutely everything off the HD. I just want an OS that mounts a drive, garbage collects, and runs the programs I'm familiar with (the last is the kicker).
 
Tom Gardner wrote:
On 29/06/13 03:55, Les Cargill wrote:
Bakul Shah wrote:
Most of the concepts
are from ~40 years back (CSP, guarded commands etc.).

Most *all* concepts in computers are from that long ago or longer.
The "new stuff" is more about arbitraging market forces than getting
real work done.

There's some truth in that. For most people
re-inventing the wheel is completely unprofitable.

Who wants to learn iron-mining, smelting, forging, and
finishing when all you need to do is cut up this
evening's meal?
Yep. Although there is a place in the world for katana-makers.
That's almost a ... religious devotion.

http://video.pbs.org/video/1150578495/

Turning serial programs
into parallel versions is manual, laborious, error prone
and not very successful.

So don't do that. Write them to be parallel from the
get-go. Write them to be event-driven. It's better in
all dimensions.

But not at all scales; there's a reason fine-grained
dataflow failed.
And not with all timescales :)
Of course. But we do what we can.

After all, we're all really clockmakers. Events regulate our
"wheels" just like the escapement on a pendulum clock. .
When you get that happening, things get to be a lot more
deterministic and that is what parallelism needs the most.
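
To make "event-driven from the get-go" concrete - this is offered purely as an illustration, not from any post in the thread - here is roughly the smallest event loop you can write in C, with POSIX select() playing the part of the escapement: the timeout is the "tick" event, and stdin is the other event source. The one-second tick and the stdin source are chosen just for the example.

/* Illustrative sketch only: a minimal select()-based event loop.
   Two event sources: stdin, and a 1-second timeout "tick". */
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    char buf[256];

    for (;;) {
        fd_set rfds;
        struct timeval tv = { 1, 0 };       /* 1-second tick */

        FD_ZERO(&rfds);
        FD_SET(STDIN_FILENO, &rfds);

        int n = select(STDIN_FILENO + 1, &rfds, NULL, NULL, &tv);
        if (n < 0) {
            perror("select");
            return 1;
        }
        if (n == 0) {
            /* timeout event: the "escapement" tick */
            printf("tick\n");
        } else if (FD_ISSET(STDIN_FILENO, &rfds)) {
            /* input event */
            ssize_t len = read(STDIN_FILENO, buf, sizeof buf - 1);
            if (len <= 0)
                break;                      /* EOF or error: leave the loop */
            buf[len] = '\0';
            printf("got event: %s", buf);
        }
    }
    return 0;
}

Everything the program does happens as a reaction to one of those two events, which is the determinism being talked about above: each turn of the loop handles exactly one event to completion.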

Don't get me wrong, I really like event-driven programming,
and some programming types are triumphantly re-inventing
it yet again, many-many layers up the software stack!
Har! All that's old is new again. :)

For example, and to torture the nomenclature:
- unreliable photons/electrons at the PMD level
- unreliable bits at the PHY level
- reliable bits, unreliable frames at MAC level
- reliable frames, unreliable packets at the IP level
- reliable packets, unreliable streams at the TCP level
- reliable streams, unreliable conversations at the app level
- app protocols to make conversations reliable
- reliable conversations within apps:
- protocols to make apps reliable
- streams to send unreliable message events
- frameworks to make message events reliable
where some of the app and framework stuff looks *very* like
some of the networking stuff.
And so you end up throwing all that out and writing one layer
with the business rules, and another that does transport
and event management on top of UDP*.

*or something less sophisticated, like a serial port.

Then you write a GUI if you need it that uses pipes/sockets
to talk to the middleware.

Same as it ever was...
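
For illustration only (a sketch, not from any post in the thread): the "transport and event management on top of UDP" layer can start as small as a stop-and-wait sender - a sequence number on each message, retransmission until the peer echoes it back. The address, port, framing, and ack format below are all made up for the example.

/* Illustrative sketch: a minimal "reliable message over UDP" sender.
   One outstanding message, a sequence number, retransmission on
   timeout.  The peer is assumed to echo the 4-byte sequence number
   back as its acknowledgement. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_TRIES 5

static int send_reliable(int sock, const struct sockaddr_in *peer,
                         uint32_t seq, const char *msg)
{
    char pkt[512];
    uint32_t net_seq = htonl(seq);
    size_t plen = strlen(msg);

    if (plen > sizeof pkt - 4)
        plen = sizeof pkt - 4;

    /* frame = 4-byte sequence number + payload */
    memcpy(pkt, &net_seq, 4);
    memcpy(pkt + 4, msg, plen);

    for (int tries = 0; tries < MAX_TRIES; tries++) {
        sendto(sock, pkt, 4 + plen, 0,
               (const struct sockaddr *)peer, sizeof *peer);

        fd_set rfds;
        struct timeval tv = { 1, 0 };          /* 1 s retransmit timer */
        FD_ZERO(&rfds);
        FD_SET(sock, &rfds);

        if (select(sock + 1, &rfds, NULL, NULL, &tv) > 0) {
            uint32_t ack;
            if (recv(sock, &ack, sizeof ack, 0) == (ssize_t)sizeof ack &&
                ntohl(ack) == seq)
                return 0;                      /* acknowledged */
        }
        /* timeout or wrong ack: go around and resend */
    }
    return -1;                                 /* gave up */
}

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in peer = { 0 };

    peer.sin_family = AF_INET;
    peer.sin_port = htons(9000);               /* made-up port */
    inet_pton(AF_INET, "127.0.0.1", &peer.sin_addr);

    if (send_reliable(sock, &peer, 1, "hello") != 0)
        fprintf(stderr, "no ack after %d tries\n", MAX_TRIES);

    close(sock);
    return 0;
}

A real version grows receive-side sequencing and a few more message types, which is exactly the point of the layer list above: you end up rebuilding small pieces of TCP at whatever level you stopped trusting.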

But I'm not going to throw the baby out with the
bathwater: there are *very* good reasons why most
(not all) of those levels are there.
The Bad Things are that you end up making assumptions
about the defect rates in the libraries you link in. I am
relatively secure in the knowledge that it's easier to do all that
from scratch. That should not be so, but it frequently
is.

--
Les Cargill
 
On 29/06/13 17:58, Les Cargill wrote:
Tom Gardner wrote:
On 29/06/13 03:15, Eric Wallin wrote:
snip

Speedy installation: I get a fully-patched installed
system in well under an hour. Last time MS would
let me (!) install XP, it took me well over a day
because of all the reboots.

Speedy re-installation once every 3 years: your
files are untouched so you just upgrade the o/s
(trivial precondition: put /home on a separate
disk partition). Takes < 1 hour.

And things like virtualbox make running a
Windows guest pretty simple. I'm stuck with a
Win7 host for now because of one PCI card, but
virtualbox claims to be able to pass PCI cards through to
guests presently, but only on a Linux host.
I'm not going to comment on Win in a VM,
because I only use win98 like that :)

But shortly before XP is discontinued (and MS shoots
its corporate customers in the foot!), I'll be
putting a clean WinXP inside at least one VM.

Does MS squeal about putting its o/s inside a VM?
They certainly stop me re-installing my perfectly
legal version of XP on a laptop, even though I have
the product code for that laptop! They sure do make
it difficult for me to use their products, sigh.
 
On Saturday, June 29, 2013 12:53:27 PM UTC-4, Les Cargill wrote:

Yeah, that's ugly. Although that's more the update infrastructure
that's ugly rather than Win7 itself.
Part and parcel. The modern OS seems to be a constantly moving target.

I want an OS that has all the bad bugs wrung out of it and is stuck in amber (ROM) for a couple of decades so I might actually get some work done already.
 
On 6/29/2013 12:50 PM, Les Cargill wrote:
rickman wrote:
On 6/28/2013 10:44 PM, Les Cargill wrote:
Eric Wallin wrote:
On Friday, June 28, 2013 9:02:10 PM UTC-4, rickman wrote:

You are still thinking von Neumann. Any application can be broken
down into small units and parceled out to small processors. But
you have to think in those terms rather than just saying, "it
doesn't fit". Of course it can fit!

Intra brain communications are hierarchical as well.

I'm nobody, but one of the reasons for designing Hive was that I
feel processors in general are much too complex, to the point where
I'm repelled by them. I believe one of the drivers for this
over-complexity is the fact that main memory is external. I've been
assembling PCs since the 286 days, and I've never understood why main
memory wasn't tightly integrated onto the uP die.

RAM was both large and expensive until recently. Different people
made RAM than made processors and it would have been challenging to get
the business arrangements such that they'd glue up.

That's not the reason. Intel could buy any of the DRAM makers any day
of the week.


I have to presume they "couldn't", because they didn't.
That's a bit silly. Why would they want to? They don't even need to
buy an SDRAM company to add SDRAM to their chips.


But memory
architectures did evolve over time - I believe 286 machines still
used DIP packages for DRAM. And the target computers I used
from the mid-80s to the mid-90s may well have still used SRAM.

At that point, by the time SIP/SIMM/DIMM modules were available,
the culture expected things to be separate. We were also arbitraging
RAM prices - we'd buy less-quantity, more-expensive DRAM now, then buy
bigger later when the price dropped.
Trust me, it's not about "culture", it is about what they can make work
the best at the lowest price. That's why they added cache memory, then
put the cache on a module with the CPU, then on the chip itself.
Did they stick with a "culture" that cache should be chips on the
motherboard, or with separate cache chips? No, they kept improving
to whatever the current technology would support.


Some of that was doubtless retail behavioral stuff.

At several points DRAM divisions of companies were sold
off to form merged DRAM specialty companies primarily so the risk was
shared by several companies and they didn't have to take such a large
hit to their bottom line when DRAM was in the down phase of the business
cycle. Commodity parts like DRAMs are difficult to make money on and
there is always one of the makers who could be bought easily.


Right. So if you integrate it into the main core package, you no
longer have to suffer as a commodity vendor. It's a captive market.

I'm sure it's not that simple.
Yes, it's not that simple. Adding main memory to the CPU chip has all
sorts of problems. But knowing how to make SDRAM is not one of them.


In fact, Intel started out making DRAMs!

Precisely! They did one of the first "bet the company" moves
that resulted in the 4004.
Making the 4004 was *not* a "bet the company" design. They did it under
contract for a calculator company who paid for the work. Intel took
virtually no risk in the matter.


The main reason why main
memory isn't on the CPU chip is because there are lots of variations in
size *and* that it just wouldn't fit! You don't put one DRAM chip in a
computer, they used to need a minimum of four, IIRC to make up a module,
often they were 8 to a module and sometimes double sided with 16 chips
to a DRAM module.


If you were integrating inside the package, you could use any
physical configuration you wanted. But the thing would still have been
too big.
Yes, I agree, main memory is too big to fit on the CPU die for any size
memory in common use at the time. Isn't that what I said?


The next bigger problem is that CPUs and DRAMs use highly optimized
processes and are not very compatible. A combined chip would likely not
have as fast a CPU and would have poor DRAM on board.



I also have to wonder if the ability to cool things was involved.
SDRAM does not use a lot of power. It is cooler running than the CPU.


Everyone pretty
much gets the same ballpark memory size when putting a PC together,
and I can remember only once or twice upgrading memory after the
initial build (for someone else's Dell or similar where the initial
build was anemically low-balled for "value" reasons). Here we are in
2013, the memory is several light cm away from the processor on the
MB, talking in cache lines, and I still don't get why we have this
gross inefficiency.


That's not generally the bottleneck, though.

I'm not so sure. With the multicore processors my understanding is that
memory bandwidth *is* the main bottleneck. If you could move the DRAM
on chip it could run faster but more importantly it could be split into
a bank for each processor giving each one all the bandwidth it could
want.


That's consistent with my understanding as well. The big thing on
transputers in the '80s was the 100MBit links between them. As
we used to say - "the bus is usually the bottleneck". Er,
at least once you got past 10MHz clock speeds...
Then why did you write "That's not generally the bottleneck"?


I think a large part of the problem is that we have been designing more
and more complex machines so that the majority of the CPU cycles are
spent supporting the framework rather than doing the work the user
actually cares about.

Yep - although it's eminently possible to avoid this problem. I use
a lot of old programs - some going back to Win 3.1.

Really, pure 64-bit computers would have completely failed
had there not been the ability to run a legacy O/S in a VM
or run 32 bit progs through the main O/S.

It is a bit like the amount of fuel needed to go
into space. Add one pound of payload and you need some hundred or
thousand more pounds of fuel to launch it. If you want to travel
further out into space, the amount of fuel goes up exponentially.

So Project X is trying to do something about that. There is something
about engineering culture that "wants scale" - a Saturn V is a really
impressive thing to watch, I am sure.

We
seem to be reaching the point that the improvements in processor speed
are all being consumed by the support software rather than getting to
the apps.


But things like BeOS and the like have been available, and remain
widely unused. There is some massive culture fail in play; either
that or things are just good enough.
I'm not sure why you consider this to be a "culture" issue. Windows is
the dominant OS. It is very hard to work with other OSs because there
are so many fewer apps. I can design FPGAs only with Windows or Linux
and at one time I couldn't even use Linux unless I paid for the
software. BeOS doesn't run current Windows programs does it?


Heck, pad/phone computers do much much *less* than desktops and
have the bullet in the market. You can't even type on them but people
still try...
They do some of the same things, which are what most people need, but
they are very different products from what computers are. The market
evolved because the technology evolved. 10 years ago pads were mostly a
joke and smart phones weren't really possible/practical. Now the
processors are fast enough running from battery that handheld computing
is practical, and maybe 99% of the market will turn that way. Desktops will
always be around just as "workstations" are still around, but only in
very specialized, demanding applications.

--

Rick
 
On 6/29/2013 9:56 AM, Tom Gardner wrote:
On 29/06/13 14:10, Eric Wallin wrote:

I don't get all the accolades for Win7, it's a dog.

Yes, but it is better than Vista, and the hacks don't feel so guilty
about supporting it.

Good things about linux: fanbois are vocally and acerbically critical when
things don't work smoothly, and then point you towards the many
alternatives
that /do/ work smoothly.
Many of the Linux "fanbois" also expect all users to be geeks who are
happy to dig into the machine to keep it humming. Most people don't
want to know how it works under the hood, they just want it to work...
like a car. Linux is no family sedan. That is what Windows tries to be
with some moderate level of success.

--

Rick
 
On 30/06/13 07:25, rickman wrote:
On 6/29/2013 9:56 AM, Tom Gardner wrote:
On 29/06/13 14:10, Eric Wallin wrote:

I don't get all the accolades for Win7, it's a dog.

Yes, but it is better than Vista, and the hacks don't feel so guilty
about supporting it.

Good things about linux: fanbois are vocally and acerbically critical when
things don't work smoothly, and then point you towards the many
alternatives
that /do/ work smoothly.

Many of the Linux "fanbois" also expect all users to be geeks who are happy to dig into the machine to keep it humming. Most people don't want to know how it works under the hood, they just want it
to work... like a car. Linux is no family sedan. That is what Windows tries to be with some moderate level of success.
Many, but not all.

One deep geek whose idea of an ideal distro is that "it just
works and lets me get on with what I want to do" is
http://www.dedoimedo.com/
He savages distros that don't work out of the box.

Have you looked at some of the modern distros?
They are easy to get going and easy to learn - arguably
easier than Windows8 judging by its reviews and
lack of uptake.

Try Mint, or xubuntu.
 
On 30/06/13 04:38, Eric Wallin wrote:
On Saturday, June 29, 2013 12:53:27 PM UTC-4, Les Cargill wrote:

Yeah, that's ugly. Although that's more the update infrastructure
that's ugly rather than Win7 itself.

Part and parcel. The modern OS seems to be a constantly moving target.

I want an OS that has all the bad bugs wrung out of it and is stuck in amber (ROM) for a couple of decades so I might actually get some work done already.
If you want an o/s in ROM, will CD-ROM do? If so, try any
modern linux liveCD!

If you want security, try Lightweight Portable Security,
by the US DoD, for accessing sensitive information, e.g.
your bank account.

If you want multimedia, try Mint.
 
Tom Gardner wrote:
On 29/06/13 17:58, Les Cargill wrote:
Tom Gardner wrote:
On 29/06/13 03:15, Eric Wallin wrote:
snip

Speedy installation: I get a fully-patched installed
system in well under an hour. Last time MS would
let me (!) install XP, it took me well over a day
because of all the reboots.

Speedy re-installation once every 3 years: your
files are untouched so you just upgrade the o/s
(trivial precondition: put /home on a separate
disk partition). Takes < 1 hour.

And things like virtualbox make running a
Windows guest pretty simple. I'm stuck with a
Win7 host for now because of one PCI card, but
virtualbox claims to be able to pass PCI cards through to
guests presently, but only on a Linux host.

I'm not going to comment on Win in a VM,
because I only use win98 like that :)

But shortly before XP is discontinued (and MS shoots
its corporate customers in the foot!), I'll be
putting a clean WinXP inside at least one VM.
It works well.

Does MS squeal about putting its o/s inside a VM?
Not in my experience. Even OEM versions can be activated.

They certainly stop me re-installing my perfectly
legal version of XP on a laptop, even though I have
the product code for that laptop! They sure do make
it difficult for me to use their products, sigh.
That's bizarre. I know the activation process is unreliable;
that's why you may have to call the phone number on some
reinstalls.


--
Les Cargill
 
rickman wrote:
On 6/29/2013 12:50 PM, Les Cargill wrote:
rickman wrote:
On 6/28/2013 10:44 PM, Les Cargill wrote:
Eric Wallin wrote:
On Friday, June 28, 2013 9:02:10 PM UTC-4, rickman wrote:

You are still thinking von Neumann. Any application can be broken
down into small units and parceled out to small processors. But
you have to think in those terms rather than just saying, "it
doesn't fit". Of course it can fit!

Intra brain communications are hierarchical as well.

I'm nobody, but one of the reasons for designing Hive was that I
feel processors in general are much too complex, to the point where
I'm repelled by them. I believe one of the drivers for this
over-complexity is the fact that main memory is external. I've been
assembling PCs since the 286 days, and I've never understood why main
memory wasn't tightly integrated onto the uP die.

RAM was both large and expensive until recently. Different people
made RAM than made processors and it would have been challenging to get
the business arrangements such that they'd glue up.

That's not the reason. Intel could buy any of the DRAM makers any day
of the week.


I have to presume they "couldn't", because they didn't.

That's a bit silly. Why would they want to?

I presume for the same reasons that "soundcards" were added to
motherboards. You lose traces, you lose connectors.

They don't even need to
buy an SDRAM company to add SDRAM to their chips.
Right.

But memory
architectures did evolve over time - I believe 286 machines still
used DIP packages for DRAM. And the target computers I used
from the mid-80s to the mid-90s may well have still used SRAM.

At that point, by the time SIP/SIMM/DIMM modules were available,
the culture expected things to be separate. We were also arbitraging
RAM prices - we'd buy less-quantity, more-expensive DRAM now, then buy
bigger later when the price dropped.

Trust me, it's not about "culture", it is about what they can make work
the best at the lowest price. That's why they added cache memory, then
put the cache on a module with the CPU, then on the chip itself.
Did they stick with a "culture" that cache should be chips on the
motherboard, or with separate cache chips? No, they kept improving
to whatever the current technology would support.
Right!

Some of that was doubtless retail behavioral stuff.

At several points DRAM divisions of companies were sold
off to form merged DRAM specialty companies primarily so the risk was
shared by several companies and they didn't have to take such a large
hit to their bottom line when DRAM was in the down phase of the business
cycle. Commodity parts like DRAMs are difficult to make money on and
there is always one of the makers who could be bought easily.


Right. So if you integrate it into the main core package, you no
longer have to suffer as a commodity vendor. It's a captive market.

I'm sure it's not that simple.

Yes, it's not that simple. Adding main memory to the CPU chip has all
sorts of problems. But knowing how to make SDRAM is not one of them.


In fact, Intel started out making DRAMs!

Precisely! They did one of the first "bet the company" moves
that resulted in the 4004.

Making the 4004 was *not* a "bet the company" design. They did it under
contract for a calculator company who paid for the work. Intel took
virtually no risk in the matter.
Interestingly, many people say they took considerable risk. It was
certainly disruptive.

The main reason why main
memory isn't on the CPU chip is because there are lots of variations in
size *and* that it just wouldn't fit! You don't put one DRAM chip in a
computer, they used to need a minimum of four, IIRC to make up a module,
often they were 8 to a module and sometimes double sided with 16 chips
to a DRAM module.


If you were integrating inside the package, you could use any
physical configuration you wanted. But the thing would still have been
too big.

Yes, I agree, main memory is too big to fit on the CPU die for any size
memory in common use at the time. Isn't that what I said?
If you did, I missed it.

The next bigger problem is that CPUs and DRAMs use highly optimized
processes and are not very compatible. A combined chip would likely not
have as fast a CPU and would have poor DRAM on board.



I also have to wonder if the ability to cool things was involved.

SDRAM does not use a lot of power. It is cooler running than the CPU.


Everyone pretty
much gets the same ballpark memory size when putting a PC together,
and I can remember only once or twice upgrading memory after the
initial build (for someone else's Dell or similar where the initial
build was anemically low-balled for "value" reasons). Here we are in
2013, the memory is several light cm away from the processor on the
MB, talking in cache lines, and I still don't get why we have this
gross inefficiency.


That's not generally the bottleneck, though.

I'm not so sure. With the multicore processors my understanding is that
memory bandwidth *is* the main bottleneck. If you could move the DRAM
on chip it could run faster but more importantly it could be split into
a bank for each processor giving each one all the bandwidth it could
want.


That's consistent with my understanding as well. The big thing on
transputers in the '80s was the 100MBit links between them. As
we used to say - "the bus is usually the bottleneck". Er,
at least once you got past 10MHz clock speeds...

Then why did you write "That's not generally the bottleneck"?
Because on most designs I have seen for the last decade or
more, the memory bus is not the processor interconnect bus.

I think a large part of the problem is that we have been designing more
and more complex machines so that the majority of the CPU cycles are
spent supporting the framework rather than doing the work the user
actually cares about.

Yep - although it's eminently possible to avoid this problem. I use
a lot of old programs - some going back to Win 3.1.

Really, pure 64-bit computers would have completely failed
had there not been the ability to run a legacy O/S in a VM
or run 32 bit progs through the main O/S.

It is a bit like the amount of fuel needed to go
into space. Add one pound of payload and you need some hundred or
thousand more pounds of fuel to launch it. If you want to travel
further out into space, the amount of fuel goes up exponentially.

So Project X is trying to do something about that. There is something
about engineering culture that "wants scale" - a Saturn V is a really
impressive thing to watch, I am sure.

We
seem to be reaching the point that the improvements in processor speed
are all being consumed by the support software rather than getting to
the apps.


But things like BeOS and the like have been available, and remain
widely unused. There is some massive culture fail in play; either
that or things are just good enough.

I'm not sure why you consider this to be a "culture" issue. Windows is
the dominant OS.
Well - that is a cultural artifact. What else can it be? There is no
feedback path in the marketplace for us to express our disdain
over bloatware.

It is very hard to work with other OSs because there
are so many fewer apps. I can design FPGAs only with Windows or Linux
and at one time I couldn't even use Linux unless I paid for the
software. BeOS doesn't run current Windows programs does it?
No. My point is that the culture does not reward minimalist
software solutions unless they're in the control or embedded
space.


Heck, pad/phone computers do much much *less* than desktops and
have the bullet in the market. You can't even type on them but people
still try...

They do some of the same things, which are what most people need, but
they are very different products from what computers are. The market
evolved because the technology evolved. 10 years ago pads were mostly a
joke and smart phones weren't really possible/practical. Now the
processors are fast enough running from battery that handheld computing
is practical, and maybe 99% of the market will turn that way.
Phones and tablets are and will always be cheesy little non-computers.
They don't have enough peripheral options to do anything
besides post cat pictures to social media sites.

You *can* make serious control surface computers out of them, but
they're no longer at a consumer-friendly price. And the purchasing
window for them is very narrow, so managing market thrash is
a problem.

Desktops will
always be around just as "workstations" are still around, but only in
very specialized, demanding applications.
Or they can be a laptop in a box. The world* is glued together by
Visual Basic. Dunno if the Win8 tablets can be relied on to run it
well enough to support all that.

*as opposed to the fantasy world - the Net - which is glued with Java.

I expect the death of the desktop is greatly exaggerated.


--
Les Cargill
 
On 6/30/2013 12:03 PM, Les Cargill wrote:
rickman wrote:
On 6/29/2013 12:50 PM, Les Cargill wrote:
rickman wrote:

That's not the reason. Intel could buy any of the DRAM makers any day
of the week.


I have to presume they "couldn't", because they didn't.

That's a bit silly. Why would they want to?

I presume for the same reasons that "soundcards" were added to
motherboards. You lose traces, you lose connectors.
The only fly in the ointment is that it isn't practical to combine an
x86 CPU with 4 GB of DRAM on a single chip. Oh well, otherwise a great
idea. That might be practical in another 5 years when low end computers
are commonly using more than 16 GB of DRAM on the board.

You presume a lot. That is not the same as it being correct.



You seem to be learning... ;^)


In fact, Intel started out making DRAMs!

Precisely! They did one of the first "bet the company" moves
that resulted in the 4004.

Making the 4004 was *not* a "bet the company" design. They did it under
contract for a calculator company who paid for the work. Intel took
virtually no risk in the matter.


Interestingly, many people say they took considerable risk. It was
certainly disruptive.

Like who? What was the risk, that the calculator wouldn't work, they
wouldn't get the contract??? Where was the "considerable" risk?

Actually, there was little risk. Once they convinced the calculator
company that they could do it more cheaply it was an obvious move to
make. The technology was to the point where they could put a small CPU
on a chip (or chips) and make a fully functional computer. There was no
idea of becoming the huge computer giant. I am sure they realized that
this could become the basis of a very significant industry. So where
was the risk?


The main reason why main
memory isn't on the CPU chip is because there are lots of variations in
size *and* that it just wouldn't fit! You don't put one DRAM chip in a
computer, they used to need a minimum of four, IIRC to make up a
module,
often they were 8 to a module and sometimes double sided with 16 chips
to a DRAM module.


If you were integrating inside the package, you could use any
physical configuration you wanted. But the thing would still have been
too big.

Yes, I agree, main memory is too big to fit on the CPU die for any size
memory in common use at the time. Isn't that what I said?


If you did, I missed it.
Uh, look above...

"The main reason why main memory isn't on the CPU chip is because there
are lots of variations in size *and* that it just wouldn't fit!"


The next bigger problem is that CPUs and DRAMs use highly optimized
processes and are not very compatible. A combined chip would likely not
have as fast a CPU and would have poor DRAM on board.



I also have to wonder if the ability to cool things was involved.

SDRAM does not use a lot of power. It is cooler running than the CPU.


Everyone pretty
much gets the same ballpark memory size when putting a PC together,
and I can remember only once or twice upgrading memory after the
initial build (for someone else's Dell or similar where the initial
build was anemically low-balled for "value" reasons). Here we are in
2013, the memory is several light cm away from the processor on the
MB, talking in cache lines, and I still don't get why we have this
gross inefficiency.


That's not generally the bottleneck, though.

I'm not so sure. With the multicore processors my understanding is that
memory bandwidth *is* the main bottleneck. If you could move the DRAM
on chip it could run faster but more importantly it could be split into
a bank for each processor giving each one all the bandwidth it could
want.


That's consistent with my understanding as well. The big thing on
transputers in the '80s was the 100MBit links between them. As
we used to say - "the bus is usually the bottleneck". Er,
at least once you got past 10MHz clock speeds...

Then why did you write "That's not generally the bottleneck"?


Because on most designs I have seen for the last decade or
more, the memory bus is not the processor interconnect bus.
What does that mean? I don't know what processor designs you have seen,
but all of the multicore stuff (which is what they have been building
for nearly a decade) is constrained by memory bus speed, because you have
two or three or four or eight processors sharing just one memory
interface, or in some cases I believe two. This is a classic problem at
this point, referred to as the "memory wall". Google it.
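
A rough back-of-envelope (round, illustrative numbers only, not from any post in the thread) shows why the sharing hurts: one 64-bit DDR3-1333 channel moves on the order of 10 GB/s, while four 3 GHz cores that each want a 64-bit load every cycle would, in the worst case, ask for nearly 100 GB/s.

/* Rough, illustrative arithmetic only (made-up round numbers):
   why several cores sharing one DRAM channel hit a "memory wall"
   unless the caches absorb almost everything. */
#include <stdio.h>

int main(void)
{
    double channel_bw = 1333e6 * 8;      /* one 64-bit DDR3-1333 channel, ~10.7 GB/s */
    int    cores      = 4;
    double core_clk   = 3.0e9;           /* 3 GHz */
    double bytes_per_cycle = 8.0;        /* worst case: one 64-bit load per cycle */

    double demand = cores * core_clk * bytes_per_cycle;

    printf("aggregate demand : %6.1f GB/s\n", demand / 1e9);
    printf("channel supply   : %6.1f GB/s\n", channel_bw / 1e9);
    /* fraction of those accesses the caches must catch so the rest
       fits through the one channel */
    printf("needed hit rate  : %5.1f %%\n",
           100.0 * (1.0 - channel_bw / demand));
    return 0;
}

With these numbers the caches have to satisfy roughly nine accesses out of ten before the single channel stops being the limit, and every extra core pushes that figure higher.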


We
seem to be reaching the point that the improvements in processor speed
are all being consumed by the support software rather than getting to
the apps.


But things like BeOS and the like have been available, and remain
widely unused. There is some massive culture fail in play; either
that or things are just good enough.

I'm not sure why you consider this to be a "culture" issue. Windows is
the dominant OS.

Well - that is a cultural artifact. What else can it be? There is no
feedback path in the marketplace for us to express our disdain
over bloatware.
You can't buy a computer with Linux or some other OS?


It is very hard to work with other OSs because there
are so many fewer apps. I can design FPGAs only with Windows or Linux
and at one time I couldn't even use Linux unless I paid for the
software. BeOS doesn't run current Windows programs does it?


No. My point is that the culture does not reward minimalist
software solutions unless they're in the control or embedded
space.
Why is that? Of course rewards are there for anyone who makes a better
product.


Heck, pad/phone computers do much much *less* than desktops and
have the bullet in the market. You can't even type on them but people
still try...

They do some of the same things, which are what most people need, but
they are very different products from what computers are. The market
evolved because the technology evolved. 10 years ago pads were mostly a
joke and smart phones weren't really possible/practical. Now the
processors are fast enough running from battery that handheld computing
is practical, and maybe 99% of the market will turn that way.

Phones and tablets are and will always be cheesy little non-computers.
They don't have enough peripheral options to do anything
besides post cat pictures to social media sites.
Ok, another quote to go up there with "No one will need more than 640
kBytes" and "I see little commercial potential for the internet for the
next 10 years."

I'll bet you have one of these things as a significant computing
platform in four years... you can quote me on that!


You *can* make serious control surface computers out of them, but
they're no longer at a consumer-friendly price. And the purchasing
window for them is very narrow, so managing market thrash is
a problem.

Desktops will
always be around just as "workstations" are still around, but only in
very specialized, demanding applications.


Or they can be a laptop in a box. The world* is glued together by
Visual Basic. Dunno if the Win8 tablets can be relied on to run it
well enough to support all that.

*as opposed to the fantasy world - the Net - which is glued with Java.

I expect the death of the desktop is greatly exaggerated.
I don't know what the "death of the desktop" is, but I think you and I
will no longer have traditional computers (aka, laptops and desktops) as
anything but reserve computing platforms in six years.

I am pretty much a Luddite when it comes to new technology. I think most
of it is bogus crap. But I have seen the light of phones and tablets
and I am a believer. I have been shown the way and the way is good.

Here's a clue to the future. How many here want to use Windows after
XP? Who likes Vista? Who likes Win7? Win8? Is your new PC any faster
than your old PC (other than the increased memory for memory bound
apps)? PCs are reaching the wall while hand held devices aren't.
Handhelds will be catching up in six years and will be able to do all
the stuff you want from your computer today. Tomorrow's PCs,
meanwhile, won't be doing a lot more. So the gap will narrow and who
wants all the baggage of traditional PCs when they can use much more
convenient hand helds? I/O won't be a problem. I think all the tablets
plug into a TV via HDMI and you can add a keyboard and mouse easily. So
there you have all the utility of a PC in a tiny form factor along with
all the advantages of the handheld when you want a handheld.

If the FPGA design software ran on them well, I'd get one today. But I
need to wait a few more years for the gap to close.

--

Rick
 
On 6/30/2013 5:01 AM, Tom Gardner wrote:
On 30/06/13 07:25, rickman wrote:
On 6/29/2013 9:56 AM, Tom Gardner wrote:
On 29/06/13 14:10, Eric Wallin wrote:

I don't get all the accolades for Win7, it's a dog.

Yes, but it is better than Vista, and the hacks don't feel so guilty
about supporting it.

Good things about linux: fanbois are vocally and acerbically critical
when
things don't work smoothly, and then point you towards the many
alternatives
that /do/ work smoothly.

Many of the Linux "fanbois" also expect all users to be geeks who are
happy to dig into the machine to keep it humming. Most people don't
want to know how it works under the hood, they just want it
to work... like a car. Linux is no family sedan. That is what Windows
tries to be with some moderate level of success.

Many, but not all.

One deep geek whose idea of an ideal distro is that "it just
works and lets me get on with what I want to do" is
http://www.dedoimedo.com/
He savages distros that don't work out of the box.

Have you looked at some of the modern distros?
They are easy to get going and easy to learn - arguably
easier than Windows8 judging by its reviews and
lack of uptake.

Try Mint, or xubuntu.
I had that conversation recently in one of the newsgroups and it ended
up with an argument where at least one person was arguing that you
shouldn't use a computer unless you are prepared to work on it.

I don't want to have that discussion again. Maybe later.

Linux reminds me of Forth in that regard. Someone said, "If you've seen
one Forth, you've seen one Forth". There seem to be so many different
Linux distros, one of them has to be good, right?

Oh, BTW, one of the disk copy programs I tried to use installed a dual
boot Linux partition on my laptop. It runs ok, but it craps out because
it finds an error in one of the files... the exact sort of hard drive
error that is the reason why I am trying to copy my hard drive to a new
one. Maybe this is just the stupid program, but this is one reason why
I don't think Linux is the answer to any problems I have.

--

Rick
 
On 6/29/2013 5:14 AM, Tom Gardner wrote:
On 29/06/13 02:02, rickman wrote:
On 6/28/2013 5:11 PM, Tom Gardner wrote:
On 28/06/13 20:06, rickman wrote:
I think the trick will be in finding ways of dividing up the programs
so they can meld to the hardware rather than trying to optimize
everything.

My suspicion is that, except for compute-bound
problems that only require "local" data, the
granularity will be too small.

Examples where it will work, e.g. protein folding,
will rapidly migrate to CUDA and graphics processors.

You are still thinking von Neumann. Any application can be broken down
into small units and parceled out to small processors. But you have to
think in those terms rather than just saying, "it
doesn't fit". Of course it can fit!

Regrettably not. People have been trying different
techniques for ~50 years, with varying degrees of
success as technology bottlenecks change.
The people working in those areas are highly
intelligent and motivated (e.g. high performance
computing research) and there is serious money
available (e.g. life sciences, big energy).

As a good rule of thumb, if you can think of it,
they've already tried it and found where it does
and doesn't work.
So you are saying that multiprocessors are dead on arrival? I don't
think so. No one I have seen has started the design process from
scratch thinking like they were designing hardware.

How does a bee hive work? How about an ant farm? How do all the cells
in your body work together? No, the fact that the answer has not been
found does not mean it does not exist.


Consider a chip where you have literally a trillion operations per
second available all the time. Do you really care if half go to waste?
I don't! I design FPGAs and I have never felt obliged (not
since the early days anyway) to optimize the utility of each LUT and
FF. No, it turns out the precious resource in FPGAs is routing and you
can't do much but let the tools manage that anyway.

Those internal FPGA constraints also have analogues at
a larger scale, e.g. ic pinout, backplanes, networks...


So a fine grained processor array could be very effective if the
programming can be divided down to suit. Maybe it takes 10 of these
cores to handle 100 Mbps Ethernet, so what? Something like a
browser might need to harness a couple of dozen. If the load slacks
off and they are idling, so what?

The fundamental problem is that in general as you make the
granularity smaller, the communications requirements
get larger. And vice versa :(
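
One standard way to make that trade-off concrete (a textbook halo-exchange model, offered here only as an illustration, not from any post in the thread): split an N x N grid across P workers, so each worker owns a b x b block with b = N/sqrt(P). Per step it computes over b^2 cells but exchanges a halo of roughly 4b cells with its neighbours, so

\[
\frac{\text{compute per step}}{\text{communication per step}}
\;\propto\; \frac{b^2}{4b} \;=\; \frac{b}{4} \;=\; \frac{N}{4\sqrt{P}},
\]

which shrinks as the granularity gets finer (P up, b down) and grows again as the blocks get coarser - the "vice versa".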

Actually not. The aggregate comms requirements may increase, but we
aren't sharing an Ethernet bus. All of the local processors talk to
each other and less often have to talk to non-local
processors. I think the phone company knows something about that.

That works to an extent, particularly in "embarrassingly parallel"
problems such as telco systems. I know: I've architected and
implemented some :)

It still has its limits in most interesting computing systems.
Well, the other approaches are hitting a wall. It is clearly time for a
change. You can say this or that doesn't work, but they have only been
tried in very limited contexts.


I'm sort-of retired (I got sick of corporate in-fighting,
and I have my "drop dead money", so...)

That's me too, but I found some work that is paying off very well now.
So I've got a foot in both camps, retired, not retired... both are fun
in their own way. But dealing with international shipping
is a PITA.

Or even sourcing some components, e.g. a MAX9979KCTK+D or +TD :(
Yes, actually component lead time is a PITA. The orders are very
"lumpy" as one of my customer contacts refers to it. So I'm not willing
to inventory anything I don't have to. At this point that will only be
connectors.


I regard golf as silly, despite having two courses in
walking distance. My equivalent of kayaking is flying
gliders.

That has got to be fun!

Probably better than you imagine (and that's recursive
without a terminating condition). I know instructors
that still have pleasant surprises after 50 years :)

I did a tiny bit of kayaking on flat water, but now
I wear hearing aids :(
One of my better kayaking friends has a cochlear implant with an
external processor. She either wears her older back up processor or
none at all.


I've never worked up the whatever to learn to fly.

Going solo is about as difficult as learning to drive
a car. And then the learning really starts :)
Yes, but it is a lot more training than learning to drive and a lot more
money. It is also a lot more demanding of scheduling in that you can't
just say, "Dad, can I drive you to the store?"


It seems like a big investment and not so cheap overall.

Not in money. In the UK club membership is $500/year,
a launch + 10 mins instruction is $10, and an hour
instruction in the air is $30. The real cost is time:
club members help you get airborne, and you help them
in return. Very sociable, unlike aircraft with air
conditioning fans up front or scythes above.


But there is clearly a great thrill there.

0-40kt in 3s, 0-50kt in 5s, climb with your feet
above your head, fly in close formation with raptors,
eyeball sheep on a hillside as you whizz past
below them at 60kt, 10-20kft, 40kt-150kt, hundreds
and thousands of km range, pre-solo spinning at
altitudes that make power pilots blanch, and
pre-solo flying in loose formation with other
aircraft.

Let me know if you want pointers to youtube vids.
Not at this time. I'm way too busy with other things including getting
a hip replacement.

--

Rick
 
On Sunday, June 30, 2013 12:03:45 PM UTC-4, Les Cargill wrote:

There is no
feedback path in the marketplace for us to express our disdain
over bloatware.
I'd like to give the FPGA vendors a piece of my mind regarding their FPGA tools. Talk about the bloatiest of great white whale bloaty-bloat SW. Fire up your virus scanner and go to bed, only to wake up and find it listlessly pawing C:\Altera or C:\Xilinx, wearing your hard drive head down to a bloody nub.
 
On 7/1/2013 2:51 AM, glen herrmannsfeldt wrote:
rickman<gnuarm@gmail.com> wrote:

(snip)
The only fly in the ointment is that it isn't practical to combine an
x86 CPU with 4 GB of DRAM on a single chip. Oh well, otherwise a great
idea. That might be practical in another 5 years when low end computers
are commonly using more than 16 GB of DRAM on the board.

OK, but how about 4G of DRAM off chip, but in the same package.
Maybe call it L4 cache instead of DRAM. Use a high interleave so
you can keep the access rate up, and besides the cost of the wiring
isn't so high as it would be outside the package.
How is that any real advantage? Once you go off chip you have suffered
the slings and arrows of outrageous output drivers.

--

Rick
 
rickman <gnuarm@gmail.com> wrote:

(snip)
The only fly in the ointment is that it isn't practical to combine an
x86 CPU with 4 GB of DRAM on a single chip. Oh well, otherwise a great
idea. That might be practical in another 5 years when low end computers
are commonly using more than 16 GB of DRAM on the board.
OK, but how about 4G of DRAM off chip, but in the same package.
Maybe call it L4 cache instead of DRAM. Use a high interleave so
you can keep the access rate up, and besides the cost of the wiring
isn't so high as it would be outside the package.

-- glen
 
On 01/07/13 09:07, rickman wrote:
On 7/1/2013 2:51 AM, glen herrmannsfeldt wrote:
rickman<gnuarm@gmail.com> wrote:

(snip)
The only fly in the ointment is that it isn't practical to combine an
x86 CPU with 4 GB of DRAM on a single chip. Oh well, otherwise a great
idea. That might be practical in another 5 years when low end computers
are commonly using more than 16 GB of DRAM on the board.

OK, but how about 4G of DRAM off chip, but in the same package.
Maybe call it L4 cache instead of DRAM. Use a high interleave so
you can keep the access rate up, and besides the cost of the wiring
isn't so high as it would be outside the package.

How is that any real advantage? Once you go off chip you have suffered
the slings and arrows of outrageous output drivers.
Making separate chips and putting them in the same package is the golden
middle road here. You need output drivers - but you don't need the same
sort of drivers as for separate chips on a motherboard. There are
several differences - your wires are shorter (so less noise, better
margins, easier timing, lower currents, lower power), you can have many
more wires (broader paths means higher bandwidth), and you have
dedicated links (better timing, easier termination, separate datapaths
for each direction). It is particularly beneficial if the die are
stacked vertically rather than horizontally - your inter-chip
connections are minimal length, it's (relatively) easy to have huge
parallel buses, and you can arrange the layout as you want.

There is significant work being done in making chip packaging and driver
types for exactly this sort of arrangement. It is perhaps more aimed at
portable devices rather than big systems, but the idea is the same.
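
Some rough arithmetic on the bandwidth side of that (made-up but plausible figures, purely illustrative, not from the post above): a conventional 64-bit DIMM interface at 1600 MT/s tops out around 12.8 GB/s, while an in-package stack can afford hundreds of data wires at a modest per-pin rate, because the "pins" are short vias rather than board traces.

/* Illustrative arithmetic only: bandwidth of a wide in-package bus
   versus a conventional 64-bit DIMM interface (hypothetical numbers). */
#include <stdio.h>

static double gb_per_s(double bus_bits, double transfers_per_sec)
{
    return bus_bits / 8.0 * transfers_per_sec / 1e9;   /* GB/s */
}

int main(void)
{
    /* external DIMM: 64-bit interface at 1600 MT/s */
    printf("64-bit  @ 1600 MT/s : %6.1f GB/s\n", gb_per_s(64, 1600e6));

    /* hypothetical in-package stack: 512 data wires at a modest
       800 MT/s, practical only because the links are a few mm long */
    printf("512-bit @  800 MT/s : %6.1f GB/s\n", gb_per_s(512, 800e6));
    return 0;
}

The wide, slow in-package bus wins by roughly a factor of four here even though each individual wire runs at half the rate, which is the whole argument for stacking the die.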
 
rickman wrote:
On 6/30/2013 12:03 PM, Les Cargill wrote:
rickman wrote:
On 6/29/2013 12:50 PM, Les Cargill wrote:
rickman wrote:

That's not the reason. Intel could buy any of the DRAM makers any day
of the week.


I have to presume they "couldn't", because they didn't.

That's a bit silly. Why would they want to?

I presume for the same reasons that "soundcards" were added to
motherboards. You lose traces, you lose connectors.

The only fly in the ointment is that it isn't practical to combine an
x86 CPU with 4 GB of DRAM on a single chip. Oh well, otherwise a great
idea. That might be practical in another 5 years when low end computers
are commonly using more than 16 GB of DRAM on the board.

You presume a lot. That is not the same as it being correct.
I am examining the "isn't practical" premise, and trying to do that
without any preconceptions. I've seen ... high levels of integration
in real life before - stuff you wouldn't normally think of
as practical.

Of *course* i don't know for sure - I wasn't there when
it wasn't done... :)

Right.


Right!

You seem to be learning... ;^)
That's the goal! :)

In fact, Intel started out making DRAMs!

Precisely! They did one of the first "bet the company" moves
that resulted in the 4004.

Making the 4004 was *not* a "bet the company" design. They did it under
contract for a calculator company who paid for the work. Intel took
virtually no risk in the matter.


Interestingly, many people say they took considerable risk. It was
certainly disruptive.


Like who? What was the risk, that the calculator wouldn't work, they
wouldn't get the contract??? Where was the "considerable" risk?
Journalists, mainly. They're probably doing the usual: "constructing a
narrative". There's a general credo in storyteller spheres that Silicon
Valley is all about doing crazy things.

Actually, there was little risk. Once they convinced the calculator
company that they could do it more cheaply it was an obvious move to
make. The technology was to the point where they could put a small CPU
on a chip (or chips) and make a fully functional computer. There was no
idea of becoming the huge computer giant. I am sure they realized that
this could become the basis of a very significant industry. So where
was the risk?
As the story was told, the risk was mostly in what they'd have to do to
adapt.

The main reason why main
memory isn't on the CPU chip is because there are lots of
variations in
size *and* that it just wouldn't fit! You don't put one DRAM chip in a
computer, they used to need a minimum of four, IIRC to make up a
module,
often they were 8 to a module and sometimes double sided with 16 chips
to a DRAM module.


If you were integrating inside the package, you could use any
physical configuration you wanted. But the thing would still have been
too big.

Yes, I agree, main memory is too big to fit on the CPU die for any size
memory in common use at the time. Isn't that what I said?


If you did, I missed it.

Uh, look above...

"The main reason why main memory isn't on the CPU chip is because there
are lots of variations in size *and* that it just wouldn't fit!"
Well, I got that eventually - although I suppose I got hung up on
"variations in size" - just pick one.

<snip>
Then why did you write "That's not generally the bottleneck"?


Because on most designs I have seen for the last decade or
more, the memory bus is not the processor interconnect bus.

What does that mean? I don't know what processor designs you have seen,
but all of the multicore stuff (which is what they have been building
for nearly a decade) is constrained by memory bus speed, because you have
two or three or four or eight processors sharing just one memory
interface, or in some cases I believe two. This is a classic problem at
this point, referred to as the "memory wall". Google it.
We're back to the interconnect bus vs. the memory bus distinction.
Interconnects must be arbitrated or otherwise act like "networks";
what I am calling a memory bus does not have to.

Sadly, now we have to distinguish between usage of these terms in
whether it's multicore or not.

FWIW, I have so far managed to make my living while avoiding nearly
anything multicore that does not run shrink wrap.

We
seem to be reaching the point that the improvements in processor speed
are all being consumed by the support software rather than getting to
the apps.


But things like BeOS and the like have been available, and remain
widely unused. There is some massive culture fail in play; either
that or things are just good enough.

I'm not sure why you consider this to be a "culture" issue. Windows is
the dominant OS.

Well - that is a cultural artifact. What else can it be? There is no
feedback path in the marketplace for us to express our disdain
over bloatware.

You can't buy a computer with Linux or some other OS?
Linux isn't bloated? I don't know if you can buy something preloaded
with BeOS or not. Used to be that specialist things like DAW machines
used Be.

It is very hard to work with other OSs because there
are so many fewer apps. I can design FPGAs only with Windows or Linux
and at one time I couldn't even use Linux unless I paid for the
software. BeOS doesn't run current Windows programs does it?


No. My point is that the culture does not reward minimalist
software solutions unless they're in the control or embedded
space.

Why is that? Of course rewards are there for anyone who makes a better
product.
Don't be silly.

Heck, pad/phone computers do much much *less* than desktops and
have the bullet in the market. You can't even type on them but people
still try...

They do some of the same things which are what most people need, but
they are very different products than what computers are. The market
evolved because the technology evolved. 10 years ago pads were mostly a
joke and there smart phones weren't really possible/practical. Now the
processors are fast enough running from battery that hand held computing
is practical and the market will turn that way some 99%.

Phones and tablets are and will always be cheezy little non-computers.
They don't have enough peripheral options to do anything
besides post cat pictures to social media sites.

Ok, another quote to go up there with "No one will need more than 640
kBytes" and "I see little commercial potential for the internet for the
next 10 years."
Both of those are also true, given other constraints. I would say
the commercial potential of the internet has been more limited
than people would perhaps prefer.

I'll bet you have one of these things as a significant computing
platform in four years... you can quote me on that!
We'll see - I can't find that today, and I have looked. Gave up in a
fit of despair and bought a netbook.

You *can* make serious control surface computers out of them, but
they're no longer at a consumer-friendly price. And the purchasing
window for them is very narrow, so managing market thrash is
a problem.

Desktops will
always be around just as "workstations" are still around, but only in
very specialized, demanding applications.


Or they can be a laptop in a box. The world* is glued together by
Visual Basic. Dunno if the Win8 tablets can be relied on to run that
in a manner to support all that.

*as opposed to the fantasy world - the Net - which is glued with Java.

I expect the death of the desktop is greatly exaggerated.

I don't know what the "death of the desktop" is, but I think you and I
will no longer have traditional computers (aka, laptops and desktops) as
anything but reserve computing platforms in six years.
We'll see. FWIW, the people that made this machine I am typing on now
no longer make desktops, so I see something coming. Not sure what, though.

I am pretty much a Luddite when it comes to new technology.
Skepticism != Luddism.

I think most
of it is bogus crap. But I have seen the light of phones and tablets
and I am a believer. I have been shown the way and the way is good.

Here's a clue to the future. How many here want to use Windows after
XP? Who likes Vista? Who likes Win7? Win8?

They're all fine, so far. No trouble with Win7 or Win8 here. Win8 is
far too clever but it works.

Is your new PC any faster
than your old PC (other than the increased memory for memory bound
apps)?
Yes. But my old PC is a 3.0GHz monocore.

The only reason I upgraded was that Silverlight stopped utilizing
graphics cards.

PCs are reaching the wall while hand held devices aren't.
There is more than one wall.

Handhelds will be catching up in six years and will be able to do all
the stuff you want from your computer today. Tomorrow's PC's,
meanwhile, won't be doing a lot more. So the gap will narrow and who
wants all the baggage of traditional PCs when they can use much more
convenient hand helds? I/O won't be a problem.
Uh huh. Right :)

I think all the tablets
plug into a TV via HDMI and you can add a keyboard and mouse easily.
That's true enough. But that isn't all the I/O I would need. It isn't
even the right *software*.

So
there you have all the utility of a PC in a tiny form factor along with
all the advantages of the handheld when you want a handheld.

If the FPGA design software ran on them well, I'd get one today. But I
need to wait a few more years for the gap to close.

Ironically, I expect Apple to sell desktops for quite some time. Other
than that, here's to the gamers.

--
Les Cargill
 
David Brown wrote:
On 01/07/13 09:07, rickman wrote:
On 7/1/2013 2:51 AM, glen herrmannsfeldt wrote:
rickman<gnuarm@gmail.com> wrote:

(snip)
The only fly in the ointment is that it isn't practical to combine an
x86 CPU with 4 GB of DRAM on a single chip. Oh well, otherwise a great
idea. That might be practical in another 5 years when low end computers
are commonly using more than 16 GB of DRAM on the board.

OK, but how about 4G of DRAM off chip, but in the same package.
Maybe call it L4 cache instead of DRAM. Use a high interleave so
you can keep the access rate up, and besides the cost of the wiring
isn't so high as it would be outside the package.

How is that any real advantage? Once you go off chip you have suffered
the slings and arrows of outrageous output drivers.


Making separate chips and putting them in the same package is the golden
middle road here. You need output drivers - but you don't need the same
sort of drivers as for separate chips on a motherboard. There are
several differences - your wires are shorter (so less noise, better
margins, easier timing, lower currents, lower power), you can have many
more wires (broader paths means higher bandwidth), and you have
dedicated links (better timing, easier termination, separate datapaths
for each direction). It is particularly beneficial if the die are
stacked vertically rather than horizontally - your inter-chip
connections are minimal length, it's (relatively) easy to have huge
parallel buses, and you can arrange the layout as you want.

There is significant work being done in making chip packaging and driver
types for exactly this sort of arrangement. It is perhaps more aimed at
portable devices rather than big systems, but the idea is the same.
This is what I was meaning - although I don't know DRAM deeply enough
to know what the slings and arrows are.

DRAM still seem like magic to me, in a way. Magic I am used to, but
still....

--
Les Cargill
 
rickman <gnuarm@gmail.com> wrote:

(snip, I wrote)

OK, but how about 4G of DRAM off chip, but in the same package.
Maybe call it L4 cache instead of DRAM. Use a high interleave so
you can keep the access rate up, and besides the cost of the wiring
isn't so high as it would be outside the package.

How is that any real advantage? Once you go off chip you
have suffered the slings and arrows of outrageous output drivers.
It must help some, or Intel wouldn't have put off-chip,
in-package cache on some processors - the Pentium Pro if I remember,
and maybe the Pentium II.

Yes it is off chip and requires drivers, but the capacitance will
be less than off package, the distance (speed of light) delay
will be less, and known. The drivers can be sized optimally for
the needed speed and distance.

-- glen
 
