Bakul Shah
On 9/18/11 10:02 PM, Andrew Reilly wrote:
Now I am not sure what Nick meant by "abstracting parallelism".
On Sun, 18 Sep 2011 18:26:45 -0700, Bakul Shah wrote:
On 9/18/11 12:38 AM, nmm1@cam.ac.uk wrote:
In article <4E74F69C.5080009@bitblocks.com>,
Bakul Shah <usenet@bitblocks.com> wrote:
I have not seen anything as elegant as CSP & Dijkstra's guarded
commands, and they have been around for 35+ years.
Well, measure theory is also extremely elegant, and has been around for
longer, but is not a usable abstraction for programming.
Your original statement was
Despite a lot of effort over the years, nobody has ever thought of a
good way of abstracting parallelism in programming languages.
I gave some counterexamples, but instead of responding to that, you
bring in some random assertion. If you'd used Erlang or Go and had
actual criticisms, that would at least make this discussion interesting.
Ah well.
I've read the language descriptions of Erlang and Go and think that both
are heading in the right direction, in terms of practical coarse-grain
parallelism, but I doubt that there is a compiler (for any language) that
can turn, say, a large GEMM or FFT problem expressed entirely as
independent agents or go-routines (or futures) into cache-aware vector
code that runs nicely on a small-ish number of cores, if that's what you
happen to have available. It isn't really a question of language at all:
as you say, Erlang, Go, and a few others already have quite reasonable
syntaxes for independent operation. The problem is one of compilation
competence: the ability to decide/adapt/guess how to turn vast collections
of nominally independent operations into efficient, arbitrarily sequential
operations, rather than putting each potentially-parallel operation into
its own thread and letting the operating system's scheduler muddle
through it at run-time.
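
For illustration only, here is a minimal Go sketch of the kind of formulation being described: a GEMM in which every output row is its own goroutine. All names are invented for the example. Writing down the parallelism is the easy part; nothing in this expression tells a compiler how to block for cache, vectorise, or fold the work onto however many cores happen to be available -- placement is left to the runtime scheduler.

package main

import (
	"fmt"
	"sync"
)

// gemm computes C = A*B with one goroutine per output row.
// A is m x k, B is k x n, C is m x n, all row-major.
// The parallelism is trivially expressed; nothing here conveys
// blocking, vectorisation, or how to map the rows onto the
// cores actually present.
func gemm(m, k, n int, A, B []float64) []float64 {
	C := make([]float64, m*n)
	var wg sync.WaitGroup
	for i := 0; i < m; i++ {
		wg.Add(1)
		go func(i int) { // one "nominally independent" unit of work
			defer wg.Done()
			for j := 0; j < n; j++ {
				var sum float64
				for p := 0; p < k; p++ {
					sum += A[i*k+p] * B[p*n+j]
				}
				C[i*n+j] = sum
			}
		}(i)
	}
	wg.Wait() // scheduling onto the available cores is the runtime's problem
	return C
}

func main() {
	A := []float64{1, 2, 3, 4}       // 2x2
	B := []float64{5, 6, 7, 8}       // 2x2
	fmt.Println(gemm(2, 2, 2, A, B)) // [19 22 43 50]
}
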
I asked, but he didn't clarify. I thought he meant "expressing the
essential properties of parallelism", and here I think CSP/guarded
commands do an excellent job. I think what you are talking about is
"placement" -- mapping an N-parallel algorithm to a smaller number
of cores and, in general, making optimum use of available resources.
But these are implementation issues, not abstraction ones. I agree
that compilers have a long way to go.
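
For illustration, a small Go sketch of the CSP/guarded-command style in question (the channel and variable names are made up for the example): each case of a select is a guard, one ready alternative is chosen nondeterministically, and the processes interact only by channel communication, much as in Dijkstra's do...od construct.

package main

import "fmt"

func main() {
	work := make(chan int)
	quit := make(chan struct{})

	// Producer: a separate CSP-style process communicating over channels.
	go func() {
		for i := 1; i <= 3; i++ {
			work <- i // each send rendezvouses with a receive below
		}
		close(quit) // signal that no more work is coming
	}()

	// Consumer: a guarded-command loop. Each case is a guard; when
	// several guards are ready, one is chosen nondeterministically.
	for {
		select {
		case v := <-work: // guard: a value is ready to be received
			fmt.Println("got", v)
		case <-quit: // guard: the producer has terminated
			fmt.Println("done")
			return
		}
	}
}
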
Once easy-to-use parallel languages become widely available and we
gain some experience, I am hoping that
a) better implementations will follow,
b) we will find ways to extract parallelism in the language itself, and
c) they will lead to much *simpler* h/w structures. It seems to me a
lot of the h/w complexity is due to wanting to dynamically extract
parallelism.