Don Y
On 12/7/2020 8:45 AM, Martin Brown wrote:
> He is right about one thing though. CPU speed has now maxed out and feature
> size cannot go down much more, so parallel algorithms are going to be the
> future. Usually this comes with much pain and suffering unless you happen
> to be lucky and have a problem that parallelises cleanly.
Memory bandwidth is the bigger problem. There's only so much cache you can
put on the die before the CPU starts to *become* memory!
> Most non-trivial problems cannot, which is where life gets interesting.
You need tools that can recognize opportunities in algorithms and
massage the STATED algorithm into an equivalent CONCURRENT algorithm.
Too often, developers THINK serially; it's human nature.
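As a tiny sketch of what that massaging looks like in the easy case --
assuming the per-item work is truly independent, and with transform()
and the worker count as invented stand-ins, not anyone's real code:

    from concurrent.futures import ThreadPoolExecutor

    def transform(x):
        # Stand-in for per-item work; assumed independent of other items.
        return x * x

    items = list(range(100))

    # STATED (serial) algorithm: bakes in an ordering the problem
    # never actually asked for.
    serial = [transform(x) for x in items]

    # Equivalent CONCURRENT algorithm: same results, ordering
    # dependency removed.
    with ThreadPoolExecutor(max_workers=4) as pool:
        concurrent = list(pool.map(transform, items))

    assert concurrent == serial

The hard part, of course, is proving the independence; the rewrite
itself is mechanical once you have.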
I've found dicing "jobs" into tiny little snippets (job-ettes?) is
an effective way of visualizing the dependencies that you are
baking into your implementation that aren't inherently part of the
problem.
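To make the dicing concrete, a throwaway sketch -- the job names and
the dependency table are invented for illustration. Write down each
job-ette with only its INHERENT dependencies and let a scheduler run
whatever is ready; anything absent from the table is free to run
concurrently, so the accidental serialization never gets written down:

    from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

    # Hypothetical job diced into job-ettes. 'deps' lists only the
    # dependencies inherent in the problem.
    deps = {
        "read_a":  [],
        "read_b":  [],
        "parse_a": ["read_a"],
        "parse_b": ["read_b"],
        "merge":   ["parse_a", "parse_b"],
    }

    def run(name):
        print("running", name)

    done = set()
    with ThreadPoolExecutor() as pool:
        pending = {}
        while len(done) < len(deps):
            # Launch every job-ette whose dependencies are satisfied.
            for name, needs in deps.items():
                if (name not in done and name not in pending
                        and all(n in done for n in needs)):
                    pending[name] = pool.submit(run, name)
            # Wait for at least one to finish, then reschedule.
            finished, _ = wait(pending.values(), return_when=FIRST_COMPLETED)
            for name, fut in list(pending.items()):
                if fut in finished:
                    done.add(name)
                    del pending[name]

Here read_a/read_b (and then parse_a/parse_b) overlap automatically;
a naive serial implementation would have ordered all five for no reason.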
[As formal techniques seem to have fallen out of favor, these issues
are no longer as visible as they once were. Petri nets, anyone?]
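For anyone who hasn't met them: a Petri net states exactly this. A
transition may fire only when every one of its input places holds a
token, so the available concurrency is visible by inspection. A minimal
firing-rule sketch, with the net itself invented for illustration:

    # Token marking and transitions (input places, output places).
    marking = {"a_ready": 1, "b_ready": 1, "a_parsed": 0, "b_parsed": 0}

    transitions = {
        "parse_a": (["a_ready"], ["a_parsed"]),
        "parse_b": (["b_ready"], ["b_parsed"]),
        "merge":   (["a_parsed", "b_parsed"], []),
    }

    def enabled(t):
        ins, _ = transitions[t]
        return all(marking[p] > 0 for p in ins)

    def fire(t):
        # Consume input tokens, produce output tokens.
        ins, outs = transitions[t]
        for p in ins:
            marking[p] -= 1
        for p in outs:
            marking[p] = marking.get(p, 0) + 1

    # parse_a and parse_b are enabled simultaneously -- the concurrency
    # is right there in the net; merge waits until both have fired.
    print([t for t in transitions if enabled(t)])  # ['parse_a', 'parse_b']
    fire("parse_a"); fire("parse_b")
    print([t for t in transitions if enabled(t)])  # ['merge']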