design rigor: electronics vs. software

On 2020-01-15 13:43, George Herold wrote:
On Wednesday, January 15, 2020 at 7:29:08 AM UTC-5, Phil Hobbs wrote:
On 2020-01-15 03:17, RBlack wrote:
On Tue, 14 Jan 2020 12:35:56 -0500, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> said:


[snip]

Client work almost always succeeds, and the occasional failures are
mostly due to the customer's prevarication, such as taking my
proof-of-concept system, giving it to a CE outfit, and then pulling me
back in to attempt to fix the CE's mess--of course at the last minute,
when they've almost run out of money. That's happened a couple of
times, so I try very hard to discourage it. (The two were the
transcutaneous blood glucose/alcohol system and the blood-spot detector

^^^^^^^^^^^^^^ ^^^^^ ^^^^^^^

Bummer, sorry to hear about that. (I have a personal interest in seeing
this type of device working and on the market). Did anyone else pick up
the project, or did it just die?

for hens' eggs.)


Yeah, that was a sad one, for sure. It remains vivid in my memory, so
at the risk of boring people, here's the tale.

The founder called me out of the blue at 3 PM on Christmas Eve, 2012.
He turned out to be a charming fellow with a lot of drive and not a lot
of education, who was practically supernatural at raising money. He
wanted me to build him an instrument, because that's what I do.

He'd patented the general principle, which avoided the individual
physiological variations that usually bedevil those sorts of measurements.

The idea was to use a hand cradle with a virtual pivot (*) holding a
fibre bundle against the web of the first and second fingers. There are
two arteries very close to the surface there, so you get to measure
fresh blood instead of tissue fluid, and no one has hair, fat, or
calluses there, so the physiology is very cooperative. He had some
promising data that he took himself using a Perkin-Elmer FTIR and his
hand cradle. (He somehow talked some guys at USC into giving him lab
space and some help doing that--he was entirely self-taught, which has
its strengths and weaknesses.)

The project was an unusual one for me, in that I didn't have my arms
around the whole measurement--I built the gizmo, but the founder had his
USC statistics friends use their AI nous to build the model and extract
the blood solute data. Thus I don't actually know how that was done.
(It wasn't something simple like spectral differencing, anyway.)

I did a photon budget, which is my term for a detailed feasibility
calculation emphasizing stability and SNR. That's super important
because without calculating how good the measurement _could_ be, you
never really know how you're doing. A photon budget prevents you from
wasting time on recreational impossibilities on the one hand, or turning
a silk purse back into a sow's ear on the other. In this case it looked
pretty good, using a tungsten source, a custom-designed split bundle of
about 20 fibres (TX + RX), a conventional Czerny-Turner monochromator, a
single extended-InGaAs photodiode, and a chopper plus lock-in for
detection.

The proof-of-concept system took me about six weeks start to finish,
including the photon budget, optical design, designing and building the
electronics, assembling the optomechanics, and writing the software. It
was built on a 12 x 24-inch aluminum breadboard using a combination of
hacked Microbench (**) parts, JB Weld (the poor man's machine shop),
and a servo from an RC airplane for moving the grating. The grating
cradle was also built from toy airplane parts, courtesy of
servocity.com. The electronics was all built in die cast aluminum stomp
boxes, dead bug style, wired up with BNC cables. The chopper was a
commercial (Thor Labs) unit, and the back end was a LabJack and a
console-mode C++ program running on a second-hand laptop.

It all worked great, and was very amusing to watch--an advanced clinical
instrument built out of JB Weld and toy parts. (Wouldn't the FDA have
loved that?) ;)

We did the preliminary acceptance test by having some friends over for
cocktails and measuring their blood spectra every 15 minutes or so. The
data did exactly as we hoped--nice repeatable curves with the right time
dependence and no big physiological variations between subjects. We did
some glucose work using a strip reader for comparison, but the strips
have relatively poor accuracy, so we concentrated on the alcohol
measurement for that part of the demo. (Quaffing a few cool ones is
much more fun than sticking pins in your fingers, too.)

After the founder used the POC data to raise a bunch more money, we
brought the proto and the Perkin-Elmer FTIR to a contract engineering
house in Orange County CA that will remain nameless because they have
this unfortunate tendency to sue everybody in sight. The founder kept
me sort of distantly in the loop, but his crucial mistake was trying to
save money by supervising the CE firm himself. (I tried to help, but
they ignored me almost entirely because I wasn't the one writing the
cheques.)

The CE folks proceeded to fall into every pothole along the road, like a
drunk. They ignored the photon budget, and so redesigned the front end
to use an ordinary op amp TIA, not realizing they lost about 15 dB of
SNR in the process. (I managed to get that one fixed, and the guy
responsible taken off the project. Unfortunately they were almost all
like that.)

They replaced the direct drive for the grating with a belt drive, which
gave nice smooth motion but of course rapidly lost accuracy as the belt
squirmed around while moving, so that the calibration wouldn't sit
still. They needed more encoder precision, so they put an encoder on
the motor as well as the grating, and did some mickey-mouse trick to
combine the two encoder readings. Even the magnetic encoder on the
grating shaft refused to sit still--it drifted around like mad. I went
there to try to get to the bottom of some of this stuff, and although
they mostly sidelined me because I wasn't writing the cheques, we did
manage to get to the bottom of that one. Turned out that the encoder's
output was the duty cycle of a pulse train (like the RC servo only
backwards), and they were measuring the pulse width instead, using a
capture input of their MCU. That turned the frequency drift into a
position drift. Once it was fixed, I hit that poor encoder with cold
spray and a heat gun, and couldn't get it to move at all. Kudos to US
Digital for that one.

The belt-drive system failed nonetheless, basically because the
measurement was being done on the slope of the very strong IR absorption
spectrum of water in the 1.4-1.7 micron range, so that small wavelength
shifts caused much larger amplitude errors.

I told them to use a sine-bar drive like every other Czerny-Turner
monochromator in the world, but they refused, insisting on using a
worm-and-sector gear instead, with the encoder on the worm shaft.
Unlike nearly any other sort of gearing, worms work using sliding
friction. That makes the grease film thin out with time. I calculated
for their benefit that in their design, with the very small radius of
the sector gear, the maximum lubricant variation we could tolerate was
about 70 nanometres. Since they were nearly finished with the prototype
build for the formal clinical trial, I told them to use dry molybdenum
disulphide for lubricant instead of grease.

They straight-up refused, saying they couldn't get MoS2, so I sent them
a link to the exact SKU on fastenal.com, after verifying that their
local Fastenal had it in stock. (I even sent them a Mapquest link so
they could find the store. That was a bit sarcastic, which I regret,
but I was getting pretty tired of them by that point.) They proceeded
to ship one unit with grease and three units unlubricated.

When I complained about all the directionless fiddling they were doing,
one bright lad smiled and said, "That's engineering!" He was one of the
better ones.

They also took the POC proto apart to use bits of it in their test
system, so that they had no comparison data, and, oh, yes, they broke
the $100k FTIR and didn't tell anyone.

The units failed the acceptance test. I attended it, but since the USC
folks weren't crunching the data in real time, the failure wasn't
entirely apparent till later. By that point the CE had run through a
year's time and most of a million bucks.

Some months later, two units arrived on my bench, along with an urgent
request for me to get to the bottom of it all.

Turned out to be a real onion problem--you know, peel off one layer,
cry, and peel off the next.

The phase of the detected signal was moving around by 10 degrees or so,
which was easily enough to destroy the measurement. The code seemed to
be a PID controller using an optointerrupter on the chopper wheel, which
should have been fine. I built a strobe light using an HP 3325A
synthesizer driving a LED, so that I could see the loop dynamics. The
controller was totally broken--regardless of the settings of P, I, and
D, there was no way of making the phase sit still. A gentle stream of
canned air would move the phase, and it would never recover--i.e. there
was no integral term in the control law, despite what the settings would
have one believe.

And then it turned out that they'd never built a lock-in before, and
were trying to extract the (approximately triangular) signal waveform by
least-squares curve fitting to a sine wave instead of using an FFT like
normal people. Everybody screws up their first digital lock-in, but
I've never seen one as bad as that. (For non signals-and-systems folks:
least squares fitting works OK at high SNR, but for noisy data it falls
apart completely. The FFT uses orthogonality, and so works at any SNR.)

I didn't get to the last onion layer, because the founder ran out of
both dough and friends. He never did pay me for my last month's work.

A pity. I would have made those boxes work eventually.


(*) One where the business end slides around on a curved surface, like
the blade assembly on some razors.
(**) A cage system using plates held together with 6-mm
centreless-ground stainless steel rods, similar to the Thor Labs 30-mm
cage system.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com

Wow! Thanks, nice story. I've half dreamed of making a cheap spectrometer
with a grating and rotation stage/ worm drive... figuring if you always
turn it in one direction the backlash won't be a problem... I never thought
about the grease!

Re digital LI: Color me ignorant, but my first idea would be to
multiply the signal and ref. in software... is there a reason that is
a bad idea?

Not at all, if you have the computrons available. The issues are mostly
that the frequency domain will show up all sorts of analogue issues that
don't bug you at all when your signal isn't swamped by wideband noise.

For instance, some ADCs can't swing their internal nodes fast enough to
avoid pattern-dependent errors, and a lot of amplifiers get driven nuts
by the charge pulse that comes out the ADC input during sampling. The
effects look subtle until you torture-test them.

My usual approach is to use analogue lock-ins, with something like a
74HC4017 or firmware to produce non-overlapping gate signals for the
0/pi phases. The non-overlap is pretty generous to allow the op amps to
settle completely between sample periods. That never causes any
problems IME.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On Wednesday, January 15, 2020 at 2:29:36 PM UTC-5, Phil Hobbs wrote:
On 2020-01-15 13:43, George Herold wrote:
On Wednesday, January 15, 2020 at 7:29:08 AM UTC-5, Phil Hobbs wrote:
On 2020-01-15 03:17, RBlack wrote:
On Tue, 14 Jan 2020 12:35:56 -0500, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> said:


[snip]

<Big snip of PH's story>
Cheers

Phil Hobbs


Wow! Thanks, nice story. I've half dreamed of making a cheap spectrometer
with a grating and rotation stage/ worm drive... figuring if you always
turn it in one direction the backlash won't be a problem... I never thought
about the grease!

Re digital LI: Color me ignorant, but my first idea would be to
multiply the signal and ref. in software... is there a reason that is
a bad idea?

Not at all, if you have the computrons available. The issues are mostly
that the frequency domain will show up all sorts of analogue issues that
don't bug you at all when your signal isn't swamped by wideband noise.
So the FFT is faster than multiplying? So you take a chunk (time wise)
of signal, FFT it, compare phase and amplitude to fundamental in ref. and then
spit out a number?
I'll have to look up how to do a digital LI if I ever need one.

With noisy signals a BP filter somewhere in the signal chain is a must.
(IMHO)

Hey, when you were talking about measuring blood response while having a
few brewskis, I was wondering: is this something you could sell to bars?
So patrons could see if they need a taxi before they leave. I've always
wanted to know what my blood alcohol is...

Oh, speaking of FFTs and windows, I read this nice talk by Richard Hamming:
https://www.cs.virginia.edu/~robins/YouAndYourResearch.html

George H.

 
On 2020-01-15 15:45, George Herold wrote:
On Wednesday, January 15, 2020 at 2:29:36 PM UTC-5, Phil Hobbs wrote:
On 2020-01-15 13:43, George Herold wrote:
On Wednesday, January 15, 2020 at 7:29:08 AM UTC-5, Phil Hobbs wrote:
On 2020-01-15 03:17, RBlack wrote:
On Tue, 14 Jan 2020 12:35:56 -0500, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> said:


[snip]


Big snip of PH's story


Wow! Thanks, nice story. I've half dreamed of making a cheap spectrometer
with a grating and rotation stage/ worm drive... figuring if you always
turn it in one direction the backlash won't be a problem... I never thought
about the grease!

Re digital LI: Color me ignorant, but my first idea would be to
multiply the signal and ref. in software... is there a reason that is
a bad idea?

Not at all, if you have the computrons available. The issues are mostly
that the frequency domain will show up all sorts of analogue issues that
don't bug you at all when your signal isn't swamped by wideband noise.

So the FFT is faster than multiplying? So you take a chunk (time wise)
of signal, FFT it, compare phase and amplitude to fundamental in ref. and then
spit out a number?

No; plain multiply-and-accumulate is faster than the FFT, but it only
gets you one frequency component.

I'll have to look up how to do a digital LI if I ever need one.

With noisy signals a BP filter somewhere in the signal chain is a must.
(IMHO)

Depends on how much you care about the absolute phase.

Hey, when you were talking about measuring blood response while
having a few brewskis. I was wondering if this is something you could
sell to bars? So patrons could see if they need a taxi before they leave.
I've always wanted to know what my blood alcohol is...

Some bars have breathalyzers today.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 1/15/20 6:33 PM, bitrex wrote:
On 1/14/20 4:43 PM, Phil Hobbs wrote:
On 2020-01-14 16:31, bitrex wrote:
On 1/13/20 9:51 PM, jlarkin@highlandsniptechnology.com wrote:
On Sat, 11 Jan 2020 17:31:26 -0500, bitrex <user@example.net> wrote:

On 1/11/20 9:47 AM, jlarkin@highlandsniptechnology.com wrote:
On Fri, 10 Jan 2020 21:46:19 -0800 (PST), omnilobe@gmail.com wrote:

Hardware designs are more rigorously done than software designs.
A large company had problems with a 737 and a
rocket to the space station...

https://www.bloomberg.com/news/articles/2019-06-28/boeing-s-737-max-software-outsourced-to-9-an-hour-engineers
I know programmers who do not care for rigor at home or at work.
I did hardware design with rigor and featuring reviews by
caring electronics design engineers and marketing engineers.

Software gets sloppy with OOPs. Object Oriented Programming.
 Windows 10 on a rocket to ISS space station. C++ mud.

The easier it is to change things, the less careful people are
about doing them. Software, which includes FPGA code, seldom works
the first time. Almost never. The average hunk of fresh code has a
mistake roughly every 10 lines. Or was that three?

FPGAs are usually better than procedural code, but are still
mostly done as hack-and-fix cycles, with software test
benches. When we did OTP (fuse based) FPGAs without test
benching, we often got them right first try. If compiles took
longer, people would be more careful.

PCBs usually work the first time, because they are checked and
 reviewed, and that is because mistakes are slow and expensive
 to fix, and very visible to everyone. Bridges and buildings
are almost always right the first time. They are even more
expensive and slow and visible.

Besides, electronics and structures have established theory, but
software doesn't. Various people just sort of do it.

My Spice sims are often wrong initially, precisely because there
are basically no consequences to running the first try without
much checking. That is of course dangerous; we don't want to base
a hardware design on a sim that runs and makes pretty graphs but
is fundamentally wrong.


Don't know why C++ is getting the rap here. Modern C++ design is
 rigorous, there are books about what to do and what not to do, and
the language has built-in facilities to ensure that e.g. memory is
never leaked, pointers always refer to an object that exists, and
the user can't ever add feet to meters if they're
not supposed to.

Pointers are evil.

That's why in modern times you avoid working with "naked" ones at
all costs. In architectures with managed memory like x86 and ARM with
an operating system there's pretty much no good reason to use naked
pointers at all unless you are yourself writing a memory manager or
allocator.

That's a bit strong. It's still reasonable to use void* deep in the
implementation of templates for performance-critical stuff.  My
clusterized EM simulator uses bare pointers in structs, because they
vectorize dramatically better, but again that's optimized innermost-loop
stuff.

For other things, std::shared_ptr, std::unique_ptr, std::weak_ptr, and
the standard containers are the bomb.

And unique pointers also have no overhead as compared to raw pointers.
They enforce ownership at no cost.

For bare-metal programming on small ARM processors without managed
memory you often have no choice but to resort to raw pointers, but
something like std::observer_ptr, a non-owning pointer, would be nice.

Storing raw references in classes and structures is problematic when you
add in move construction and assignment, same with const-qualified raw
pointers. Returning qualified raw pointers from functions and passing
them into classes/constructors is prickly, too. std::reference_wrapper
makes a raw reference movable, but a reference isn't always what you
want, since it assumes the object it refers to is "alive".

Something like that may exist.  Wouldn't be too hard to write one I guess.

Another thing I do, when array-like data doesn't require very fast
access, is store it in a List<something> class; the data is accessed
via iterators rather than by manipulating the underlying array directly.

Array indexing errors are the source of a lot of bugs; sanity-checking
access to the underlying array is a good thing whenever you can afford
the overhead.
 
On 1/14/20 4:43 PM, Phil Hobbs wrote:

[big snip]

There are test suites to find all potential memory leaks! There's no
good excuse to have programs that leak resources anymore...

RAII is really good medicine.  I used to like mudflap a lot, but it got
rolled up into GCC's sanitizers, which are super useful too.

Cheers

Phil Hobbs
 
Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote in
news:a717379b-8eac-ee79-9afa-194dca4d5676@electrooptical.net:

> Some bars have breathalyzers today.

About 9 years ago I saw a news item on TV showing a UK bar giving
alcohol to people by way of vapor infusion. Instant buzz, no ingestion
required.

Never saw them show up here in the US.

I guess they would be called Breathabuzzers.
 
On Wed, 15 Jan 2020 07:28:58 -0500, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> said:
On 2020-01-15 03:17, RBlack wrote:
On Tue, 14 Jan 2020 12:35:56 -0500, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> said:


[snip]


[big snip]

Wow. Sounds horribly frustrating. Thanks for the detailed write-up.

There are quite a few similarities with our first product, which
measured blood oxygen saturation. We had a very steep learning curve;
if I'd known about your book at the time we could have avoided some of
the worst blind alleys.

For oxygen saturation the wavelength region of interest is right in the
middle of the visible, so rather than a mechanical monochromator we were
able to use a spectrometer with a grating and a linear 2048-pixel
silicon CCD array. The spectral data then got fed into a curve-fitting
algorithm running on an embedded Windows PC. I wasn't very closely
involved with that part, my job was to design the CCD interface. The
proof-of-concept used an Ocean Optics USB2000, but the CEO was convinced
we could do it smaller and cheaper. Let's just say that didn't turn out
quite how he expected...

The team that developed the proof-of-concept, and then sold us the
rights, were initially looking at glucose, but oxygen saturation seemed
to be a lot easier to get working. There are InGaAs detector arrays
available, I take it they weren't suitable for your system?
 
On 2020-01-16 06:18, RBlack wrote:
On Wed, 15 Jan 2020 07:28:58 -0500, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> said:

On 2020-01-15 03:17, RBlack wrote:
On Tue, 14 Jan 2020 12:35:56 -0500, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> said:


[snip]



[big snip]

Wow. Sounds horribly frustrating. Thanks for the detailed write-up.

There are quite a few similarities with our first product, which
measured blood oxygen saturation. We had a very steep learning curve,
if I'd known about your book at the time we could have avoided some of
the worst blind alleys.

Yeah, it's better when we all learn from each other.

For oxygen saturation the wavelength region of interest is right in the
middle of the visible, so rather than a mechanical monochromator we were
able to use a spectrometer with a grating and a linear 2048-pixel
silicon CCD array. The spectral data then got fed into a curve-fitting
algorithm running on an embedded Windows PC. I wasn't very closely
involved with that part, my job was to design the CCD interface. The
proof-of-concept used an Ocean Optics USB2000, but the CEO was convinced
we could do it smaller and cheaper. Let's just say that didn't turn out
quite how he expected...

No huge surprise there. OO is a pretty good outfit, although their
hardware is much better than their software nowadays--what with all the
fancy programming features, it's amazingly difficult to just turn the
thing on and take a spectrum.

The team that developed the proof-of-concept, and then sold us the
rights, were initially looking at glucose, but oxygen saturation seemed
to be a lot easier to get working. There are InGaAs detector arrays
available, I take it they weren't suitable for your system?

They're very expensive, and the ones I know about don't have a lot of
pixels. Also I don't think they're available in the extended-wavelength
(0.8-1.9 um) variety. It was much cheaper to use more fibres.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On Thursday, January 16, 2020 at 4:39:31 AM UTC-10, Phil Hobbs wrote:
On 2020-01-16 06:18, RBlack wrote:
On Wed, 15 Jan 2020 07:28:58 -0500, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> said:

On 2020-01-15 03:17, RBlack wrote:
On Tue, 14 Jan 2020 12:35:56 -0500, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> said:


[snip]



[big snip]

Wow. Sounds horribly frustrating. Thanks for the deatiled write-up.

There are quite a few similarities with our first product, which
measured blood oxygen saturation. We had a very steep learning curve,
if I'd known about your book at the time we could have avoided some of
the worst blind alleys.

Yeah, it's better when we all learn from each other.

For oxygen saturation the wavelength region of interest is right in the
middle of the visible, so rather than a mechanical monochromator we were
able to use a spectrometer with a grating and a linear 2048-pixel
silicon CCD array. The spectral data then got fed into a curve-fitting
algorithm running on an embedded Windows PC. I wasn't very closely
involved with that part, my job was to design the CCD interface. The
proof-of-concept used an Ocean Optics USB2000, but the CEO was convinced
we could do it smaller and cheaper. Let's just say that didn't turn out
quite how he expected...

No huge surprise there. OO is a pretty good outfit, although their
hardware is much better than their software nowadays--what with all the
fancy programming features, it's amazingly difficult to just turn the
thing on and take a spectrum.

The team that developed the proof-of-concept, and then sold us the
rights, were initially looking at glucose, but oxygen saturation seemed
to be a lot easier to get working. There are InGaAs detector arrays
available, I take it they weren't suitable for your system?

They're very expensive, and the ones I know about don't have a lot of
pixels. Also I don't think they're available in the extended-wavelength
(0.8-1.9 um) variety. It was much cheaper to use more fibres.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com

Hi Phil,
In conclusion, we electronics design engineers are trapeze artists
who hand-off hardware designs to our team with a certainty.

Software programmers are the Fed Ex handoff folks
who throw the software over the fence.

We care.
Alan F.
 
On Thursday, January 16, 2020 at 7:03:58 PM UTC-5, omni...@gmail.com wrote:
[big snip]

Hi Phil,
In conclusion, we electronics design engineers are trapeze artists
who hand-off hardware designs to our team with a certainty.

Software programmers are the Fed Ex handoff folks
who throw the software over the fence.

I'm always happy to help the software people, but often they don't like my answers. I had one who was integrating some speech-compression software and didn't like the sounds he was getting. He put a scope on it and saw the 100 kHz noise from the class-D amp, then started raising a big fuss about the horrible noise and would not let go of the bone. Nothing I said about that noise being inaudible, and likely never making it past the mechanical filtering of the speaker cone anyway, made any difference to him. Finally he found the problem in his own software.

Another time was about a board with an FPGA that could not be booted. Configuring an FPGA is not complex: there are about five things you have to get right, and it will boot. The trouble is that if you mess up any one of them, it simply sits there acting like you've done nothing and never raises the one line that says "programming complete". There's no debugging except to look at what you are doing and get it right.

So I was called in to help with the debugging, and there were five people at the bench: the lead SW engineer, an assistant SW engineer fresh out of school, a software manager (software, because they put the FPGA group in software), the same guy who couldn't get his voice compression to work and blamed it on me, who was doing the actual software for this board, and an FPGA consultant who was making as much as any two of us. At first they didn't even want to talk to me, even though they had demanded that I be there.

Eventually I took control and explained how simple booting the thing was: if any one of the five things wasn't right, it would not work, with no indication. So I listed the things, and they rattled off what they had done to make sure each was right, like giving a few extra config clocks to complete the internal functions and release the FPGA to run. The trouble was that when they started working on some other issues (which they were also getting wrong), they had dropped the extra pulses... So I pointed out that they needed to do ALL of these things, and when they did, it worked. I asked if they needed anything else, and when they said no, I walked away. The manager looked dumbfounded. lol. I guess she didn't have a lot of time on the bench.

I guess my point is that while I was happy working with them, they didn't call me in until they were so frustrated they couldn't see what they were doing. They wanted to throw it back over the wall to me.

--

Rick C.

---+ Get 1,000 miles of free Supercharging
---+ Tesla referral code - https://ts.la/richard11209
 
On 2020-01-16 19:03, omnilobe@gmail.com wrote:
[big snip]
Hi Phil,
In conclusion, we electronics design engineers are trapeze artists
who hand-off hardware designs to our team with a certainty.

Software programmers are the Fed Ex handoff folks
who throw the software over the fence.

Nah, I work both sides of that line, and optics and mechanics as well.
There's lots of praise and blame to spread around.

In the blood project, the CE's problems showed up mainly on the hardware
side, but the real trouble was a vicious circle of incompetence and a
bad attitude. That's seriously hard to fix.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On Friday, January 17, 2020 at 11:03:58 AM UTC+11, omni...@gmail.com wrote:
[big snip]
Hi Phil,
In conclusion, we electronics design engineers are trapeze artists
who hand-off hardware designs to our team with a certainty.

Probably a misreading.

There are good engineers on both sides of the hardware/software divide, but not all that many of them.

A lot of the job is getting the less good engineers to raise their game. "Not invented here" gets in the way all too frequently.

Software programmers are the Fed Ex handoff folks
who throw the software over the fence.

We care.

Some of us do, but so do some software engineers. The problem is that most developments are team efforts, and some parts of the team can be weaker than others (and the weaker members of the team tend to be more confident about the correctness of their work than the stronger members, who have a better grasp of the myriad ways that things can go wrong).

--
Bill Sloman, Sydney
 
