Michael Terrell
I've replaced thousands of failed TTL ICs over the decades.
On 25/07/20 19:51, Phil Hobbs wrote:
Check out Qubes OS, which is what I run daily. It addresses most of
the problems you note by encouraging you to run browsers in disposable
VMs and otherwise containing the pwnage.
I did.
It doesn't like Nvidia graphics cards, and that's all my new machine has.
On Thursday, July 16, 2020 at 11:07:33 PM UTC-7, Ricketty C wrote:
On Thursday, July 16, 2020 at 1:23:33 PM UTC-4, Gerhard Hoffmann wrote:
On 16.07.20 at 15:44, jlarkin@highlandsniptechnology.com wrote:
On Thu, 16 Jul 2020 09:55:32 +0200, Gerhard Hoffmann <dk4xp@arcor.de> wrote:
On 16.07.20 at 09:20, Bill Sloman wrote:
James Arthur hasn't noticed that when people start dying of Covid-19, other people start practicing social distancing of their own accord. Note the Swedish example.
It still kills a lot of people, and even the Swedes aren't anywhere near herd immunity yet.
In fact, Sweden was just yesterday removed from the list of dangerous countries by our (German) government. That has consequences for insurance etc. if you insist on going there.
And herd immunity is a silly idea. It does not even work in Bad Ischgl,
the Austrian skiing resort where they sport 42% seropositives, which
is probably the world record, much higher than Sweden.
And herd immunity does not mean that you won't get it if you are
in the herd.
It means that if you get it and survive, you are out of the herd.
No, it means that you are now a badly needed part of the herd by
diluting the danger for the rest.
Some people seem to have serious problems helping others.
I was hoping Larkin would take his own advice and work on his personal herd immunity. Unfortunately the evidence is mounting that there will be no lasting immunity and so no herd immunity ever.
As with many situations this is a Darwinian event. Part of the trouble is those who choose to ignore the danger put the rest of us at risk by continuing the propagation of the disease.
I know people in this group saw the video about the three general approaches to dealing with the disease. Ignore it and lots of people die. There is not so much impact on the economy and the disease reduces at some point.
Fight the disease with isolation, etc. to the detriment of the economy and save lives. Again, the disease does not last forever and at some point everything can reopen.
But the middle of the road approach, where we try to "balance" fighting the disease with keeping the economy open, is insane because it continues the disease indefinitely, resulting in the most morbidity and mortality as well as the worst impact on the economy.
Dealing with this disease halfheartedly is worse than doing nothing at all. Doing nothing at all is still much worse than mounting an effective attack on the disease and saving lives as well as the economy.
I don't get why this is not well understood. I guess Kim was right.
--
Rick C.
-+-- Get 1,000 miles of free Supercharging
-+-- Tesla referral code - https://ts.la/richard11209
And you are a fucking SHILL - posting that shit about 1,000 miles of "free" supercharging to LINE YOUR FUCKING POCKETS!!!!!!
On Mon, 27 Jul 2020 19:58:27 -0700 (PDT), Flyguy <soar2morrow@yahoo.com> wrote:
[snip]
-+-- Get 1,000 miles of free Supercharging
-+-- Tesla referral code - https://ts.la/richard11209
And you are a fucking SHILL - posting that shit about 1,000 miles of "free" supercharging to LINE YOUR FUCKING POCKETS!!!!!!
He's a penny-pincher. Some people are that way.
On 2020-07-23 12:43, Tom Gardner wrote:
On 23/07/20 16:30, pcdhobbs@gmail.com wrote:
Isn't our ancient and settled idea of what a computer is, and what an OS and languages are, overdue for the next revolution?
In his other famous essay, "No Silver Bullet", Brooks points out that the factors-of-10 productivity improvements of the early days were gained by getting rid of extrinsic complexity--crude tools, limited hardware, and so forth. Now the issues are mostly intrinsic to an artifact built of thought. So apart from more and more Python libraries, I doubt that there are a lot more orders of magnitude available.
Not in a single processor (except perhaps the Mill).
But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.
Examples: mapreduce, or xC on xCORE processors.
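A minimal sketch of that style of thinking, in plain C with POSIX threads (invented here for illustration, not from any post): each worker "maps" over its own slice of an array and the main thread "reduces" the partial sums, which is the shape that mapreduce and the xC channel model both push you toward.

/* Parallel sum: "map" a slice per worker, then "reduce" the partial results. */
#include <pthread.h>
#include <stdio.h>

#define N       1000000
#define WORKERS 4

static double data[N];

struct slice {
    size_t begin, end;   /* half-open range [begin, end) */
    double partial;      /* per-worker result: no sharing, no locks */
};

static void *map_sum(void *arg)
{
    struct slice *s = arg;
    double acc = 0.0;
    for (size_t i = s->begin; i < s->end; i++)
        acc += data[i];
    s->partial = acc;
    return NULL;
}

int main(void)
{
    pthread_t tid[WORKERS];
    struct slice sl[WORKERS];

    for (size_t i = 0; i < N; i++)
        data[i] = 1.0;

    for (int w = 0; w < WORKERS; w++) {
        sl[w].begin = (size_t)w * N / WORKERS;
        sl[w].end   = (size_t)(w + 1) * N / WORKERS;
        pthread_create(&tid[w], NULL, map_sum, &sl[w]);
    }

    double total = 0.0;              /* the "reduce" step */
    for (int w = 0; w < WORKERS; w++) {
        pthread_join(tid[w], NULL);
        total += sl[w].partial;
    }
    printf("total = %f\n", total);
    return 0;
}

Build with gcc -O2 -pthread; the workers share nothing but read-only data and their own slice descriptors, so no locking is needed.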
On 23/07/2020 19:10, Phil Hobbs wrote:
[snip]
It is ironic that a lot of the potentially avoidable human errors are typically fence post errors. Binary fence post errors are about the most severe, since you end up with the opposite of what you intended.
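A hypothetical illustration of both flavours in C: the classic off-by-one walks one element past the end of the buffer, and the "binary" variant gets the boundary test backwards, so exactly at the limit the code does the opposite of what was intended.

#include <stdio.h>

#define LEN 10

int main(void)
{
    int buf[LEN];

    /* Classic fence-post: <= visits LEN+1 indices and writes past the end. */
    for (int i = 0; i <= LEN; i++)   /* should be: i < LEN */
        buf[i] = i;

    /* "Binary" fence-post: the boundary case goes the wrong way, so right
       at the limit the code does the opposite of the intent. */
    int level = 100, limit = 100;
    if (level > limit)               /* intended: level >= limit */
        printf("alarm\n");
    else
        printf("ok\n");              /* prints "ok" exactly at the limit */

    return 0;
}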
Not in a single processor (except perhaps the Mill).
But with multiple processors there can be significant
improvement - provided we are prepared to think in
different ways, and the tools support it.
Examples: mapreduce, or xC on xCORE processors.
The average practitioner today really struggles on massively parallel hardware. If you have ever done any serious programming on such kit you quickly realise that the process which keeps all the other processes busy doing useful things is by far the most important.
I'm talking about programmer productivity, not MIPS.
There is still scope for some improvement, but most of the ways it might happen have singularly failed to deliver. There are plenty of very high quality code libraries in existence already, but people still roll their own, partly from an unwillingness of businesses to pay for licensed working code.
The big snag is that way too many programmers do the coding equivalent, in mechanical engineering terms, of manually cutting their own non-standard pitch and diameter bolts - and sometimes they make very predictable mistakes too. The latest compilers and tools are better at spotting human errors using dataflow analysis, but they are far from perfect.
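An invented example of the kind of mistake that dataflow analysis catches: one path through the function leaves the output unwritten and the caller ignores the return code, so the value printed may be uninitialised. Tools such as gcc's -fanalyzer or clang's static analyzer report this class of bug; as said, they are far from perfect.

#include <stdio.h>
#include <stdlib.h>

/* One path leaves *out unset; dataflow analysis can see that the
   caller may go on to read an uninitialised value. */
static int parse_positive(const char *s, long *out)
{
    if (s == NULL)
        return -1;          /* error path: *out is never written */
    *out = strtol(s, NULL, 10);
    return (*out > 0) ? 0 : -1;
}

int main(int argc, char **argv)
{
    long v;
    parse_positive(argc > 1 ? argv[1] : NULL, &v);  /* return value ignored */
    printf("%ld\n", v);     /* possibly uninitialised read, flagged by analysers */
    return 0;
}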
On 2020-08-02 08:46, Martin Brown wrote:
[snip]
The average practitioner today really struggles on massively parallel hardware. If you have ever done any serious programming on such kit you quickly realise that the process which keeps all the other processes busy doing useful things is by far the most important.
I wrote a clusterized optimizing EM simulator that I still use--I have a simulation gig just starting up now, in fact. I learned a lot of ugly things about the Linux thread scheduler in the process, such as that the pthreads documents are full of lies about scheduling and that you can't have a real-time thread in a user mode program and vice versa. This is an entirely arbitrary thing--there's no such restriction in Windows or OS/2. Dunno about BSD--I should try that out.
Does anybody here know if you can mix RT and user threads in a single process in BSD?
Sorry; never used BSD.
Cheers
Phil Hobbs
Phil Hobbs wrote:
[snip]
I wrote a clusterized optimizing EM simulator that I still use--I have a simulation gig just starting up now, in fact. I learned a lot of ugly things about the Linux thread scheduler in the process, such as that the pthreads documents are full of lies about scheduling and that you can't have a real-time thread in a user mode program and vice versa. This is an entirely arbitrary thing--there's no such restriction in Windows or OS/2. Dunno about BSD--I should try that out.
In Linux, realtime threads are in "the realtime context". It's a bit of a cadge. I've never really seen a good explanation of what that means.
Does anybody here know if you can mix RT and user threads in a single
process in BSD?
Sorry; never used BSD.
Realtime threads are simply in a different group of priorities. You can install kernel loadable modules (aka device drivers) to provide a timebase that will make them eligible. SFAIK, you can't guarantee them to run. You may be able to get close if you remove unnecessary services.
I don't think this does what you want.
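A quick way to see the "different group of priorities" point on a Linux/glibc box (a small sketch, not from the thread): SCHED_OTHER threads all sit at static priority 0, while the real-time policies get their own band, typically 1 to 99.

#include <sched.h>
#include <stdio.h>

static void show(const char *name, int policy)
{
    printf("%-12s static priority %d..%d\n", name,
           sched_get_priority_min(policy),
           sched_get_priority_max(policy));
}

int main(void)
{
    show("SCHED_OTHER", SCHED_OTHER);  /* normally 0..0  */
    show("SCHED_FIFO",  SCHED_FIFO);   /* normally 1..99 */
    show("SCHED_RR",    SCHED_RR);     /* normally 1..99 */
    return 0;
}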
On 2020-08-02 20:56, Les Cargill wrote:
[snip]
In Linux if one thread is real time, all the threads in the process have
to be as well. Any compute-bound thread in a realtime process will
bring the UI to its knees.
I'd be perfectly happy with being able to _reduce_ thread priority in a
user process, but noooooo. They all have to have the same priority,
despite what the pthreads docs say.
So in Linux there is no way to
express the idea that some threads in a process are more important than
others. That destroys the otherwise-excellent scaling of my simulation
code.
Cheers
Phil Hobbs
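A sketch of what that complaint looks like in code, assuming an ordinary unprivileged process: the documented pthreads call for making one thread real-time normally comes back EPERM, and within SCHED_OTHER the only per-thread knob Linux actually honours is the nice value applied to the thread's kernel ID (a Linux-specific behaviour, not POSIX).

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *worker(void *arg)
{
    (void)arg;

    /* The POSIX way to make this one thread more important ... */
    struct sched_param sp = { .sched_priority = 10 };
    int rc = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
    fprintf(stderr, "SCHED_FIFO request: %s\n",
            rc ? strerror(rc) : "ok");   /* EPERM for a normal user process */

    /* ... and the Linux-specific fallback: a per-thread nice value,
       applied to the kernel thread ID rather than to the whole process. */
    pid_t tid = (pid_t)syscall(SYS_gettid);
    if (setpriority(PRIO_PROCESS, (id_t)tid, 10) != 0)
        perror("setpriority");

    for (volatile long i = 0; i < 100000000L; i++)
        ;                                /* busy work at reduced weight */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);
    return 0;
}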
On a sunny day (Sun, 2 Aug 2020 16:26:26 -0400) it happened Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote in
<35ffa56c-b81f-c4e2-4227-138e706cdc91@electrooptical.net>:
Does anybody here know if you can mix RT and user threads in a single
process in BSD?
Assuming RT means 'real time':
No.
First, Unix / Linux (whatever version) is not a real time system.
It is a multi-tasker and so sooner or later it will have to do other things than your code.
For a kernel module, to some extent you can service interrupts and keep some data in memory.
It will then be read sooner or later by the user program.
For threads in a program anything time critical is out.
Many things will work though as for example i2c protocol does not care so much about
timing, I talk to SPI and i2c chips all the time from threads.
The way I do 'real time' with Linux is add a PIC to do the real time stuff,
or add logic and a hardware FIFO, FPGA if needed.
All depends on your definition of 'real time' and requirements.
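A minimal sketch of that kind of non-time-critical device access from a thread, using the Linux i2c-dev interface; the /dev/i2c-1 path, the 0x48 slave address and the register number are placeholders, not anything from the thread.

#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

static void *poll_sensor(void *arg)
{
    (void)arg;
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0 || ioctl(fd, I2C_SLAVE, 0x48) < 0) {
        perror("i2c setup");
        return NULL;
    }
    for (int i = 0; i < 10; i++) {
        uint8_t reg = 0x00;             /* register to read, device specific */
        uint8_t val;
        if (write(fd, &reg, 1) == 1 && read(fd, &val, 1) == 1)
            printf("reg 0x%02x = 0x%02x\n", reg, val);
        usleep(100000);                 /* loose timing; I2C doesn't mind */
    }
    close(fd);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, poll_sensor, NULL);
    pthread_join(t, NULL);
    return 0;
}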
Here is real time DVB-S encoding from a Raspberry Pi;
it uses two 4k x 9 FIFOs to handle the task switch interrupt.
http://panteltje.com/panteltje/raspberry_pi_dvb-s_transmitter/
Phil Hobbs wrote:
[snip]
In Linux if one thread is real time, all the threads in the process
have to be as well. Any compute-bound thread in a realtime process
will bring the UI to its knees.
I'd be perfectly happy with being able to _reduce_ thread priority in
a user process, but noooooo. They all have to have the same priority,
despite what the pthreads docs say.
That last bit makes me wonder. Priority settings have to conform to "policies" but there have to be more options than "the same".
This link sends me to the "sched(7)" man page.
https://man7.org/linux/man-pages/man2/sched_setscheduler.2.html
Might have to use sudo to start the thing, which is bad form in
some domains these days.
So in Linux there is no way to express the idea that some threads in a
process are more important than others. That destroys the
otherwise-excellent scaling of my simulation code.
There's a lot of info on the web now that seems to indicate you can probably do what you need done.
FWIW:
http://www.yonch.com/tech/82-linux-thread-priority#:~:text=In%20real%2Dtime%20scheduling%20policies,always%20preempt%20lower%20priority%20threads.&text=Two%20alternatives%20exist%20to%20set,(also%20known%20as%20pthreads).
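Along the lines of that article, a hedged sketch of creating a thread with an explicit real-time policy; without PTHREAD_EXPLICIT_SCHED the attributes are ignored, and without privilege (sudo, CAP_SYS_NICE or a suitable RLIMIT_RTPRIO) the create fails with EPERM, which is the "might have to use sudo" part.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static void *rt_worker(void *arg)
{
    (void)arg;
    puts("running (check the policy with: chrt -p <tid>)");
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = 20 };

    pthread_attr_init(&attr);
    /* Without EXPLICIT_SCHED the new thread just inherits the creator's
       policy and the attributes below are silently ignored. */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);

    pthread_t t;
    int rc = pthread_create(&t, &attr, rt_worker, NULL);
    if (rc)   /* typically EPERM unless run with enough privilege */
        fprintf(stderr, "pthread_create: %s\n", strerror(rc));
    else
        pthread_join(t, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}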
There's an interesting 2016 paper about serious performance bugs in the Linux scheduler.
Phil Hobbs wrote:
[snip]
\"Cons: Tasks that runs in the real-time context does not have access to
all of the resources (drivers, services, etc.) of the Linux system.\"
https://github.com/MarineChap/Real-time-system-course
I haven't set any of this up in the past. Again - if we had an FPGA, there was a device driver for it and the device driver kept enough FIFO to prevent misses.
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen <langwadt@fonz.dk> wrote:
On Thursday, 23 July 2020 at 19:06:48 UTC+2, John Larkin wrote:
We don't need more compute power. We need reliability and user friendliness.
Executing buggy C faster won't help. Historically, adding resources (virtual memory, big DRAM, threads, more MIPS) makes things worse.
For Pete's sake, we still have buffer overrun exploits. We still have image files with trojans. We still have malicious web pages.
A tool that can cut wood can cut your hand; the only way to totally prevent that is to add safety features until it cannot cut anything anymore.
Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.
On 2020-08-03 01:17, Les Cargill wrote:
snip
The only way to do something like that in Linux appears to be to put all
the comms threads in a separate process, which involves all sorts of
shared memory and synchronization hackery too hideous to contemplate.
Cheers
Phil Hobbs
(*) When I talk about this, some fanboi always accuses me of trying to
hog the machine by jacking up the priority of my process, so let's be
clear about it.
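For what it's worth, a rough sketch of the shared-memory hackery being avoided, assuming POSIX shm and a process-shared mutex; the segment name "/simshm" and the toy counter are made up for the example, and error handling is omitted for brevity.

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct shared {
    pthread_mutex_t lock;   /* must be PROCESS_SHARED to work across fork */
    long counter;
};

int main(void)
{
    int fd = shm_open("/simshm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(struct shared));
    struct shared *sh = mmap(NULL, sizeof *sh, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);

    pthread_mutexattr_t ma;
    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&sh->lock, &ma);

    if (fork() == 0) {                    /* "comms" process */
        for (int i = 0; i < 1000; i++) {
            pthread_mutex_lock(&sh->lock);
            sh->counter++;
            pthread_mutex_unlock(&sh->lock);
        }
        _exit(0);
    }
    for (int i = 0; i < 1000; i++) {      /* "compute" process */
        pthread_mutex_lock(&sh->lock);
        sh->counter++;
        pthread_mutex_unlock(&sh->lock);
    }
    wait(NULL);
    printf("counter = %ld\n", sh->counter);
    shm_unlink("/simshm");
    return 0;
}

Build with -pthread (and -lrt on older glibc).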
On 23/07/2020 19:34, John Larkin wrote:
[snip]
Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.
It has been tried and it all ended in tears. Viper was supposed to be a correct-by-design CPU, but it all ended in recrimination and litigation.
Humans make mistakes, and the least bad solution is to design tools that can find the most commonly made mistakes as rapidly as possible. Various dataflow methods can catch a whole host of classic bugs before the code is even run, but industry seems reluctant to invest, so we have the status quo. C isn't a great language for proof of correctness, but the languages that tried to force good programmer behaviour have never made any serious penetration into the commercial market. I know this to my cost, as I have in the past been involved with compilers.
The ship-it-and-be-damned software development culture persists, and it existed long before there were online updates over the internet.
Phil Hobbs wrote:
On 2020-08-03 01:17, Les Cargill wrote:
snip
Apologies for inspiring you to repeat yourself.
The only way to do something like that in Linux appears to be to put
all the comms threads in a separate process, which involves all sorts
of shared memory and synchronization hackery too hideous to contemplate.
Have you ruled out (nonblocking) sockets yet? They're quite performant[1]. This would give you a mechanism to differentiate priority. You can butch up an approximation of control flow, and it should solve any synchronization problems - at least you won't need semaphores.
[1] but perhaps not performant enough...
There is MSG_ZEROCOPY.
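A rough sketch of that suggestion, assuming a UNIX-domain socketpair between a compute parent and a forked comms child; the send side uses MSG_DONTWAIT so a slow peer shows up as EAGAIN in the compute loop instead of a block.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
        perror("socketpair");
        return 1;
    }

    if (fork() == 0) {                       /* comms / low-priority side */
        char buf[64];
        ssize_t n;
        close(sv[0]);
        while ((n = read(sv[1], buf, sizeof buf)) > 0)
            write(STDOUT_FILENO, buf, (size_t)n);
        _exit(0);
    }

    close(sv[1]);                            /* compute side */
    fcntl(sv[0], F_SETFL, O_NONBLOCK);
    for (int i = 0; i < 5; i++) {
        char msg[32];
        int len = snprintf(msg, sizeof msg, "result %d\n", i);
        if (send(sv[0], msg, (size_t)len, MSG_DONTWAIT) < 0 &&
            (errno == EAGAIN || errno == EWOULDBLOCK)) {
            /* peer is busy: drop, queue locally, or retry later */
        }
    }
    close(sv[0]);
    wait(NULL);
    return 0;
}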
(*) When I talk about this, some fanboi always accuses me of trying to
hog the machine by jacking up the priority of my process, so let's be
clear about it.
There is always significant confusion about priority. Pushing it as a make/break thing in a design is considered bad form. But sometimes...
On a sunny day (Tue, 11 Aug 2020 10:02:32 +0100) it happened Martin Brown
<'''newspam'''@nonad.co.uk> wrote in <rgtmr9$60l$1@gioia.aioe.org>:
[snip]
I think it is not that hard to write code that simply works and does what it needs to do.
The problem I see is that many people who write code do not seem to understand
that there are 3 requirements:
0) you need to understand the hardware your code runs on.
1) you need to know how to code and the various coding systems used.
2) you need to know 100% about what you are coding for.
What I see in the world of bloat we live in is
0) no clue
1) 1 week tinkering with C++ or snake languages.
2) Huh? that is easy ..
And then blame everything on the languages and compilers if it goes wrong.
Some open source code I wrote and published has run for 20 years without problems.
I know it can be hacked...
We will see ever more bloat as cluelessness is built upon cluelessness;
the problem here is that industry / capitalism likes that.
Sell more bloat, sell more hardware, make things obsolete ever faster,
keep spitting out new standards ever faster,