triggering things with ethernet...

On Mon, 17 Apr 2023 19:26:22 -0700, John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote:

On Mon, 17 Apr 2023 21:38:52 -0400, Joe Gwinn <joegwinn@comcast.net>
wrote:

On Mon, 17 Apr 2023 17:49:33 -0700, John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote:

On Tue, 18 Apr 2023 01:39:18 +0200, Klaus Vestergaard Kragelund
<klauskvik@hotmail.com> wrote:

On 18-04-2023 01:38, Klaus Vestergaard Kragelund wrote:
On 17-04-2023 21:18, John Larkin wrote:


Suppose one were to send a broadcast packet from a PC to multiple
boxes over regular 100 Mbit ethernet, without any fancy time protocols
like PTP or anything. Each box would accept the packet as a trigger.
Assume a pretty small private network and a reasonable number of
switches to fan out to many boxes.

Any guess as to how much the effective time trigger to various boxes
would skew? I've seen one estimate of 125 usec, for cameras, with
details unclear.

If you are connected to the PHY directly, with a high-priority ISR, I think
you can typically do less than 1 us.

The problem is loading on the bus, or retransmissions; then it could be way
longer.

If you need precise timing, you can use real-time Ethernet:

https://www.cisco.com/c/dam/en/us/solutions/collateral/industry-solutions/white-paper-c11-738950.pdf

I was wondering what sort of time skew I might get with an ordinary PC
shooting out commands and fairly ordinary receivers and some switches.
The PC could send a message to each of the boxes in the field, like
\"set your voltage to 17 when I say GO\" and things like that to various
boxes. Then it could broadcast a UDP message to all the boxes GO .
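
[For concreteness, a minimal C sketch of the PC side, assuming POSIX
sockets; the port number, payload, and broadcast address are invented,
and the per-box setup commands would already have gone out over
ordinary unicast TCP:]

/* Broadcast one short "GO" datagram to every box on the subnet. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    int yes = 1;                   /* permit sending to a broadcast address */
    if (setsockopt(s, SOL_SOCKET, SO_BROADCAST, &yes, sizeof yes) < 0) {
        perror("setsockopt");
        return 1;
    }

    struct sockaddr_in dst = { 0 };
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5555);                       /* hypothetical port */
    dst.sin_addr.s_addr = inet_addr("192.168.1.255"); /* subnet broadcast */

    const char msg[] = "GO";       /* short payload; ~64 bytes on the wire */
    if (sendto(s, msg, sizeof msg, 0,
               (struct sockaddr *)&dst, sizeof dst) < 0)
        perror("sendto");

    close(s);
    return 0;
}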

With Windows <anything>, it's going to be pretty bad, especially when
something like Java is running. There will be startling gaps, into
the tens of milliseconds, sometimes longer.


All I want to know is the destination time skews after the PC sends
the GO packet. Windows doesn't matter.

My customers mostly use some realtime Linux, but that doesn't matter
either.

Probably RHEL.


Which is not a criticism - Windows is intended for desktop
applications, not embedded realtime. So, wrong tool.

With RHEL (Red Hat Enterprise Linux) with a few fancy extensions, it's
going to be on the order of hundreds of microseconds, so long as you run
the relevant applications at a sufficiently urgent realtime priority
and scheduling policy.

To do better, one goes to partly hardware (with firmware) solutions.


The boxes would have to be able to accept the usual TCP commands at
unique IP addresses, and a UDP packet with some common IP address, and
process the GO command rapidly, but I was wondering what the inherent
time uncertainty might be with the ethernet itself.

How good that stack is depends on what the host computer is optimized
for.


I guess some switches are better than others, so if I found some good
ones I could recommend them. I'd have to understand how a switch can
handle a broadcast packet too. I think the GO packet is just sent to
some broadcast address.

Modern network switches are typically far faster than RHEL.

I want numbers!

Ten years ago, 20 microseconds first-bit-in to last-bit-out latency
was typical, because the switch ingested the entire incoming packet
before even thinking about transmitting it on. It would wait until
the entire packet was in a buffer before trying to decode it.

Nowadays, cut-through handling is common, and transmission begins when
the header part has been received and can be parsed, so first-bit-in
to first-bit-out is more like a microsecond, maybe far less in the
bigger faster switches. These switches are designed to do wirespeed
in and out, so the buffering delay is proportional to a short bit of
the wire in question. There is less blockage due to big packets ahead
in line. It all depends.

But when compared with RHEL churn, at least 200 microseconds, the
switch is not important.

The modern equivalent of a "hub" is an "unmanaged switch". They are
just that, but are internally buffered. If one chooses a
gigabit-capable unit, the latency will be quite small. For instance,
consider a NETGEAR 5-Port Multi-Gigabit (2.5G) Ethernet Unmanaged
Switch:

<https://www.downloads.netgear.com/files/GDC/MS105/MS105_DS.pdf>

The datasheet specifies the latency for 64-byte packets as less than
2.5 microseconds. Again, this ought to suffice. Web price from
Netgear is $150. Slower units are cheaper, with increased buffering
latency.

The unspoken assumption in the above is that the ethernet network is
lightly loaded, with few big packets getting underfoot.

Also unmentioned is that non-blocking switches are not required to
preserve packet reception order. If the key packets are spaced far
enough apart, this won't cause reordering.


The fancy time protocols, EtherCAT and PTP and TSN (and others!), are
complex on both ends. I might invent a new one, but that's another
story.

It's been done, many times. Guilty. But PTP over ethernet is
sweeping all that stuff away.

The wider world is going to PTPv2.1, which provides tens of
nanoseconds (random jitter) and maybe 50-100 nanoseconds average
offset error (can be plus or minus, depending on details of the cable
plant et al). All this is quite complex and expensive, but in
five or ten years it'll be common and dirt cheap.

Joe Gwinn
 
On 4/18/2023 8:02 AM, Dimiter_Popoff wrote:
[I think mail is hosed, again  :< ]

I did email you earlier today (nothing worth a second thought if it
gets lost).

Not here. (spam or otherwise)
 
On Tue, 18 Apr 2023 16:42:19 -0400, Joe Gwinn <joegwinn@comcast.net>
wrote:

<snip>

The datasheet specifies the latency for 64-byte packets as less than
2.5 microseconds. Again, this ought to suffice. Web price from
Netgear is $150. Slower units are cheaper, with increased buffering
latency.

That's encouraging. Thanks.

I like the idea of the switch forwarding the packet in microseconds,
before it has even fully arrived.

A short UDP packet should get through fast.



The unspoken assumption in the above is that the ethernet network is
lightly loaded, with few big packets getting underfoot.

My users usually have a private network for data acquisition and
control, and I can tell them what the rules are.


<snip>

The wider world is going to PTPv2.1, which provides tens of
nanoseconds (random jitter) and maybe 50-100 nanoseconds average
offset error (can be plus or minus, depending on details of the cable
plant et al). All this is quite complex and expensive, but in
five or ten years it'll be common and dirt cheap.

I don't need nanoseconds for power supplies and motors. If I were to
try to phase-coordinate, say, 400 Hz AC sources, 10s of usec would be
nice.

The clock on the Raspberry Pi is a cheap crystal and is not tunable.
It might be interesting to do a DDS sort of thing to make a variable
that is a local calibrated time counter. We could occasionally send
out a packet to declare the time of day, and the little boxes could
both sync to that and tweak their DDS cal factors to stay pretty close
until the next correction. All software.
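
[A hedged C sketch of that idea; the 1 kHz tick, the 0.1 loop gain,
and all names are invented. Each box keeps a calibrated local time,
steps to each broadcast time-of-day, and trims its cal factor from
the drift observed between broadcasts:]

static double cal = 1.0;        /* calibrated seconds per nominal second */
static double local_time;       /* local time-of-day, seconds */
static double last_master, last_local;
static int    have_last;

/* Called from the nominal 1 kHz tick. */
void tick(void)
{
    local_time += 0.001 * cal;
}

/* Called whenever a time-of-day broadcast arrives. */
void time_packet(double master_time)
{
    if (have_last) {
        double dm = master_time - last_master;  /* master elapsed */
        double dl = local_time  - last_local;   /* local elapsed  */
        if (dl > 0.0)               /* trim gently so the loop stays stable */
            cal *= 1.0 + 0.1 * (dm - dl) / dl;
    }
    local_time  = master_time;      /* step to the declared time of day */
    last_master = master_time;
    last_local  = local_time;
    have_last   = 1;
}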


 
On 4/18/2023 5:45 AM, Martin Brown wrote:
My suggestion would be to measure it experimentally on a modest sized
configuration with the depth of switches you intend to use and have the
triggered devices send back a step function to the master box on an identical
length of coax. That should give you a good idea of the delay per additional
switch added and how the jitter increases with depth.

Put together a little bit of *hardware* (small FPGA) with N inputs.
Each input trips a latch; the first input starts a timer. When the
timer expires, the number of tripped latches is totalled (logged?),
the latches are reset, and the timer restarts.

Adjust duration of timer to set size of "window" to smallest that allows
ALL latches to be reliably tripped.

Note that this ignores the effect of latency between trigger issuance and
reception; it just measures how *tightly* the arriving pulses are clustered.

Let it run for days to reassure yourself. Then, prove to yourself that
this behavior is repeatable -- in the presence of other traffic, etc.

It will obviously depend a lot on the software and stack at the receiver - if
you can control that and/or put it into a quick response state then you might
be able to do quite well. That or have a means to calibrate out the systematic
delays using the same length of coax as a reference. Depends a lot on good
behaviour from the switches so you might have to be careful about which
chipsets you specify.

If you want small times with small variances, then you'll code on bare metal.
If this has to coexist with some other (e.g., FOSS) code, you'll have to
sort out how the two might potentially interact (e.g., the packet scheduler
will obviously need tweaking).

Run ping(1) for an hour and note the statistics regarding the echoes.
This typically drags the stack into the picture.

Vary the length of cable connecting the *two* devices. Add another
switch in the chain. Add some background chatter (e.g., if someone else
wants to broadcast a datagram, that will tie up ALL ports just like
your broadcast would).

Similarly, set up a node as an NTP master. Periodically, send messages to
each node (using a "reliable" protocol), tallying the number of times
and maximum delay for all to have been successfully notified of your
"scheduled event". Then, let each toggle that wire to that same bit
of (FPGA) hardware. Compare the distribution to the distribution of
times between NTP-sync'ed slaves.

Lots of ways to get information from commodity hardware. Then, figure out
how to *beat* those figures (or, SETTLE for them).

At the very least, you'll likely encounter many of the same problems
that customers will encounter: why am I not getting a reply from
this device? why is the delay so long? why are packets being dropped?
how did these runts come into the picture? yikes! where did that jumbo
frame come from?? etc.
 
On 4/19/2023 0:41, Don Y wrote:
On 4/18/2023 8:02 AM, Dimiter_Popoff wrote:
[I think mail is hosed, again  :< ]

I did email you earlier today (nothing worth a second thought if it
gets lost).

Not here.  (spam or otherwise)

Sent 3 copies: one exact, one with your address within <> (originally
sent without these as usual by my mistake), and one like the second
but Cc-ed to an address of mine.
I got the Cc.
 
On 2023-04-18 17:40, John Larkin wrote:
<snip>

The clock on the Raspberry Pi is a cheap crystal and is not tunable.
It might be interesting to do a DDS sort of thing to make a variable
that is a local calibrated time counter. We could occasionally send
out a packet to declare the time of day, and the little boxes could
both sync to that and tweak their DDS cal factors to stay pretty close
until the next correction. All software.

There's an algo for that in the guts of NTP since before the Flood, I
believe. It even dorks the cal factor to ensure phase continuity in the
timer as it slews to the new offset value.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 4/18/2023 3:19 PM, Dimiter_Popoff wrote:
On 4/19/2023 0:41, Don Y wrote:
On 4/18/2023 8:02 AM, Dimiter_Popoff wrote:
[I think mail is hosed, again  :< ]

I did email you earlier today (nothing worth a second thought if it
gets lost).

Not here.  (spam or otherwise)

Sent 3 copies: one exact, one with your address within <> (originally
sent without these as usual by my mistake), and one like the second
but Cc-ed to an address of mine.
I got the Cc.

I *just* received these two -- timestamped 4:37AM.

The first must still be stuck in the ether...

I'll reply a bit later (we're watching the last few episodes...)
 
On Tue, 18 Apr 2023 14:40:35 -0700, John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote:

On Tue, 18 Apr 2023 16:42:19 -0400, Joe Gwinn <joegwinn@comcast.net
wrote:

On Mon, 17 Apr 2023 19:26:22 -0700, John Larkin
jlarkin@highlandSNIPMEtechnology.com> wrote:

On Mon, 17 Apr 2023 21:38:52 -0400, Joe Gwinn <joegwinn@comcast.net
wrote:

On Mon, 17 Apr 2023 17:49:33 -0700, John Larkin
jlarkin@highlandSNIPMEtechnology.com> wrote:

On Tue, 18 Apr 2023 01:39:18 +0200, Klaus Vestergaard Kragelund
klauskvik@hotmail.com> wrote:

On 18-04-2023 01:38, Klaus Vestergaard Kragelund wrote:
On 17-04-2023 21:18, John Larkin wrote:


Suppose one were to send a broadcast packet from a PC to multiple
boxes over regular 100 Mbit ethernet, without any fancy time protocols
like PTP or anything. Each box would accept the packet as a trigger.
Assume a pretty small private network and a reasonable number of
switches to fan out to many boxes.

Any guess as to how much the effective time trigger to various boxes
would skew? I\'ve seen one esimate of 125 usec, for cameras, with
details unclear.

If you are connected to the Phy directly with high priority ISR, I think
you can do typical less than 1us.

problem is loading on the bus, or retransmissions, then it could be way
longer

If you need precise timing, you can use real time ethernet

https://www.cisco.com/c/dam/en/us/solutions/collateral/industry-solutions/white-paper-c11-738950.pdf

I was wondering what sort of time skew I might get with an ordinary PC
shooting out commands and fairly ordinary receivers and some switches.
The PC could send a message to each of the boxes in the field, like
\"set your voltage to 17 when I say GO\" and things like that to various
boxes. Then it could broadcast a UDP message to all the boxes GO .

With Windows <anything>, it\'s going to be pretty bad, especially when
something like Java is running. There will be startling gaps, into
the tens of milliseconds, sometimes longer.


All I want to know is the destination time skews after the PC sends
the GO packet. Windows doesn\'t matter.

My customers mostly use some realtime Linux, but that doesn\'t matter
either.

Probably RHEL.


Which is not a criticism - Windows is intended for desktop
applications, not embedded realtime. So, wrong tool.

With RHEL (Red Hat Enterprise Linux) with a few fancy extensions, it\'s
going to be on order of hundreds of microseconds, so long as you run
the relevant applications are a sufficiently urgent realtime priority
and scheduling policy.

To do better, one goes to partly hardware (with firmware) solutions.


The boxes would have to be able to accept the usual TCP commands at
unique IP addresses, and a UDP packet with some common IP address, and
process the GO command rapidly, but I was wondering what the inherent
time uncertainty might be with the ethernet itself.

How good that stack is depends on what the host computer is optimized
for.


I guess some switches are better than others, so if I found some good
ones I could recommend them. I\'d have to understand how a switch can
handle a broadcast packet too. I think the GO packet is just sent to
some broadcast address.

Modern network switches are typically far faster than RHEL.

I want numbers!

Ten years ago, 20 microseconds first-bit-in to last-bit-out latency
was typical, because the switch ingested the entire incoming packet
before even thinking about transmitting it on. It would wait until
the entire packet was in a buffer before trying to decode it.

Now days, cut-through handling is common, and transmission begins when
the header part has been received and can be parsed, so first-bit-in
to first-bit-out is more like a microsecond, maybe far less in the
bigger faster switches. These switches are designed to do wirespeed
in and out, so the buffering delay is proportional to a short bit of
the wire in question. There is less blockage due to big packets ahead
in line. It all depends.

But when compared with RHEL churn, at least 200 microseconds, the
switch is not important.

The modern equivalent of a \"hub\" is an \"unmanaged switch\". They are
just that, but are internally buffered. If one chooses a
gigabit-capable unit, the latency will be quite small. For instance,
consider a NETGEAR 5-Port Multi-Gigabit (2.5G) Ethernet Unmanaged
Switch:

.<https://www.downloads.netgear.com/files/GDC/MS105/MS105_DS.pdf

The datasheet specifies the latency for 64-byte packets as less than
2.5 microseconds. Again, this ought to suffice. Web price from
Netgear is $150. Slower units are cheaper, with increased buffering
latency.


That\'s encouraging. Thanks.

Welcome.


I like the idea of the switch forwarding the packet in microseconds,
before it has even fully arrived.

A short UDP packet should get through fast.

Yes. The shortest UDP packet is ~64 bytes.


The unspoken assumption in the above is that the ethernet network is
lightly loaded, with few big packets getting underfoot.

My users usually have a private network for data acquisition and
control, and I can tell them what the rules are.

Ahh. The usual dodge is to have a "realtime" LAN (no big trucks or
coal trains allowed), plus an everything-goes LAN where latency is
uncontrolled. These two LANs are logical, and may both be created by
partitioning one or more network switches, so long as those switches
are hunky enough.


<snip>

I don't need nanoseconds for power supplies and motors. If I were to
try to phase-coordinate, say, 400 Hz AC sources, 10s of usec would be
nice.

OK.


The clock on the Raspberry Pi is a cheap crystal and is not tunable.
It might be interesting to do a DDS sort of thing to make a variable
that is a local calibrated time counter. We could occasionally send
out a packet to declare the time of day, and the little boxes could
both sync to that and tweak their DDS cal factors to stay pretty close
until the next correction. All software.

I don't know that Raspberry Pi units are all that good as clocks.

The logic clocks in computers are pretty temperature-sensitive, but
one can certainly implement a kind of DDS.

Phil H mentioned the antediluvian frequency-lock loop algorithm from
NTP, which I have in the past adapted for a like purpose.

Basically, one counts the DDS output cycles between 1PPS pips and
changes the DDS tuning word to steer towards zero frequency error. But
this is done like steering a sailboat - steer to a place far ahead and
readjust far slower than the response time of the boat to the helm. If
one gets too eager, the boat swings wildly instead of proceeding
steadily towards the distant objective.
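
[A minimal C sketch of that steering loop, under assumptions: a
100 MHz DDS clock, a 10 MHz target output, and a made-up 1/16 gain:]

#include <stdint.h>

#define TARGET_CYCLES 10000000u   /* desired DDS output cycles per second */
#define WORD_PER_HZ   43          /* ~2^32 / 100 MHz DDS clock */

static uint32_t tuning_word = 0x1999999Au;   /* ~10 MHz nominal word */

/* Called on each 1PPS pip with the DDS cycles counted since the last pip. */
void pps_update(uint32_t counted)
{
    int32_t err_hz = (int32_t)(counted - TARGET_CYCLES);  /* + = fast */

    /* Small gain: take out only ~1/16 of the error per pip -- steer far
     * ahead and readjust slowly. Unity gain here is the over-eager
     * helmsman that makes the boat swing. */
    tuning_word -= (uint32_t)((int64_t)err_hz * WORD_PER_HZ / 16);
}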


Joe Gwinn
 
On Wed, 19 Apr 2023 18:46:53 -0400, Joe Gwinn <joegwinn@comcast.net>
wrote:

<snip>

I don't know that Raspberry Pi units are all that good as clocks.

No, that's the point of doing a DDS sort of correction to the event
timebase. The Pico has a crystal and two caps, the classic CMOS
oscillator, and I'd suspect it could be off by 100 ppm, maybe.


<snip>

It deserves to be simulated. But if it seesaws the effective clock
frequency some 10s of ppm, yet is long-term correct, that would do.


 
On Tue, 18 Apr 2023 14:40:35 -0700, John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote:

<snip>

The datasheet specifies the latency for 64-byte packets as less than
2.5 microseconds. Again, this ought to suffice. Web price from
Netgear is $150. Slower units are cheaper, with increased buffering
latency.

The 64-byte minimum Ethernet frame size comes from 10Base5 vampire-tap
Ethernet, so that collisions could be reliably detected.

That's encouraging. Thanks.

I like the idea of the switch forwarding the packet in microseconds,
before it has even fully arrived.

A short UDP packet should get through fast.

The problem is that if another big frame has already started
transmission when the "GO" frame is received, the previous frame is
transmitted fully before the GO packet. Things get catastrophic if
9-Kbyte jumbo frames are allowed on the network.

IIRC the maximum IP frame size can be limited to 576 bytes, thus
reducing the maximum Ethernet frame size from 1500 to under 600 bytes.

<snip>

The clock on the Raspberry Pi is a cheap crystal and is not tunable.
It might be interesting to do a DDS sort of thing to make a variable
that is a local calibrated time counter. We could occasionally send
out a packet to declare the time of day, and the little boxes could
both sync to that and tweak their DDS cal factors to stay pretty close
until the next correction. All software.

If the crystal has reasonable short-term stability but the frequency
is seriously inaccurate, a DDS principle can be applied. Assume the
crystal drives a timer interrupt, say nominally every millisecond,
and the ISR updates a nanosecond counter. If it has been determined
that the ISR is actually activated every 1.001234 milliseconds, the
ISR adds 1001234 to the nanosecond counter. Each time the millions
digit changes, a new millisecond period is declared. Using two or
more slightly different adders, fractional nanoseconds can be counted.

Of course using a binary counter with say 0x8000 for no frequency
error would simplify things.
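
[That scheme as a literal C sketch; the 1.001234 ms figure is the
calibration example from above:]

#include <stdint.h>

#define NS_PER_TICK 1001234u    /* calibrated, not the nominal 1000000 */

static uint64_t ns_count;       /* running nanosecond counter */
static uint64_t ms_count;       /* last declared millisecond */

void timer_isr(void)
{
    ns_count += NS_PER_TICK;
    if (ns_count / 1000000u != ms_count) {   /* the millions changed */
        ms_count = ns_count / 1000000u;
        /* declare the new millisecond period here */
    }
}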
 
On 4/20/2023 12:41 AM, upsidedown@downunder.com wrote:
The problem is that if another big frame has already started
transmission when the "GO" frame is received, the previous frame is
transmitted fully before the GO packet. Things get catastrophic if
9-Kbyte jumbo frames are allowed on the network.

Note that there may be *more* than one such frame competing for
access to *a* particular port (where "a" is any port that you're
trying to "notify"). Because it's *not* a physically shared medium
("switched fabric"), it is possible for every other device on the
switch (or with access to the switch) to send datagrams to that
port simultaneously. No collisions are involved, as each port
operates concurrently with all the others.

But, as only one packet can be emitted on any given port at a time,
these other datagrams are queued *in* the switch to be transmitted to
the device on that port when time permits.

*Your* packet gets thrown in as the switch deems appropriate.

Damn near all COTS products "chatter" even when idle (network discovery,
keepalives, DHCP lease renewals, NTP, routing protocols, name resolution,
syslogd, etc.). So, you can't predict when someone else's device will
decide that it wants to send datagrams.

Or, their nature (size, frequency, etc.). Windows machines tend to have
oodles of services running that each feel free to use the wire as *they*
desire. (Note that every application can avail itself of the network
services -- to check for updates, other identical licenses elsewhere in
the enterprise, etc.) It's always amusing to look at the number of open
ports and wonder "why?"...

With something as ubiquitous as a PC, the traffic is likewise more
generalized (e.g., you didn't see idle chatter on HPIB networks).

IIRC the maximum IP frame size can be limited to 576 bytes, thus
reducing the maximum Ethernet frame size from 1500 to under 600 bytes.

But only by constraining all clients anywhere on the fabric to such
a restriction.

And, requiring your customers to know how to do this and do so
reliably, for all devices.

[If ethernet users were so savvy, we wouldn't have duplex problems,
wouldn't need autonegotiation ports, MDIX, etc. "It *looks* like
an ethernet cable... what else is there to know?? This one is BLUE!"]
 
On Thu, 20 Apr 2023 10:41:37 +0300, upsidedown@downunder.com wrote:

<snip>

The problem is that if another big frame has already started
transmission when the "GO" frame is received, the previous frame is
transmitted fully before the GO packet. Things get catastrophic if
9-Kbyte jumbo frames are allowed on the network.

I can tell my users: don't do that.


IIRC the maximum IP frame size can be limited to 576 bytes, thus
reducing the maximum Ethernet frame size from 1500 to under 600 bytes.

<snip>

If the crystal has reasonable short-term stability but the frequency
is seriously inaccurate, a DDS principle can be applied. Assume the
crystal drives a timer interrupt, say nominally every millisecond,
and the ISR updates a nanosecond counter. If it has been determined
that the ISR is actually activated every 1.001234 milliseconds, the
ISR adds 1001234 to the nanosecond counter. Each time the millions
digit changes, a new millisecond period is declared. Using two or
more slightly different adders, fractional nanoseconds can be counted.

Yes, something like that. On a dual-core 130 MHz ARM, one of the cores
could run a reasonable periodic interrupt at, say, 50 kHz. I've run
non-trivial IRQs on an ARM at 100 kHz with a 70 MHz clock.

An option is to clock the Pico externally, which can probably be
done, or at least fire an IRQ externally. That adds a VCXO and some
other parts to the board, which isn't terrible. It adds a little hardware
in place of a lot of thinking and software; a better path to done.


 
On 18/04/2023 22:40, John Larkin wrote:

The clock on the Raspberry Pi is a cheap crystal and is not tunable.
It might be interesting to do a DDS sort of thing to make a variable
that is a local calibrated time counter. We could occasionally send
out a packet to declare the time of day, and the little boxes could
both sync to that and tweak their DDS cal factors to stay pretty close
until the next correction. All software.

Depending on the frequency that you want to generate and the jitter that
you can tolerate, you can sometimes get away with calibrating local slave
system clock ticks per reference second (or 10 seconds) in each unit.
Assuming here that you do have a good reference frequency available in
the master.

I have used a free-running loop counter in some entirely software-driven
low-power clock devices to allow fractional corrections every 1, 2, 4, 8, or
16 times around the loop, so that you can adjust phase by one CPU cycle
every 16. Actually it had a once-per-day fiddle in the same vein.

You only need to be able to trim out about +/- 50 ppm or so (often less).
It worked well enough that I never bothered to temperature-compensate it,
since the lab environment was always so close to 20C.
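
[A rough C sketch of that trick, assuming a loop body with a fixed
cycle count and a 4-bit per-unit trim value; a Bresenham-style
accumulator spreads one-cycle stretches over every 16th, 8th, 4th, or
2nd pass for power-of-two trims. The GCC-style nop() stands in for
whatever one-cycle filler the CPU offers:]

#include <stdint.h>

static inline void nop(void) { __asm__ volatile ("nop"); }

void clock_loop(uint8_t trim)   /* 0..15, calibrated per unit */
{
    uint8_t acc = 0;
    for (;;) {
        /* ... fixed-cycle-count body that advances the clock ... */

        acc += trim;            /* fractional phase correction */
        if (acc >= 16) {
            acc -= 16;
            nop();              /* stretch this pass by one CPU cycle */
        }
    }
}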

--
Martin Brown
 
On Thu, 20 Apr 2023 07:21:18 -0700, John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote:

<snip>

Yes, something like that. On a dual-core 130 MHz ARM, one of the cores
could run a reasonable periodic interrupt at, say, 50 kHz. I've run
non-trivial IRQs on an ARM at 100 kHz with a 70 MHz clock.

An option is to clock the Pico externally, which can probably be
done, or at least fire an IRQ externally. That adds a VCXO and some
other parts to the board, which isn't terrible. It adds a little hardware
in place of a lot of thinking and software; a better path to done.

Typically a timer interrupt increments by one. Why not add a
semi-constant value to the counter? It is not much slower than INCing
a counter.

Windows has a 63-bit time counter which counts 100 ns time steps and
is updated by the clock interrupt only 100 times a second, by
adding a number close to 100 000 at each interrupt. This addend
can be adjusted with system calls, making implementing an NTP client
easy.
Of course using a binary counter with say 0x8000 for no frequency
error would simplify things.

Using two addend values N and N+1 and applying them in alternate
clock interrupts, the jitter can be further reduced.
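
[A tiny C sketch of the two-addend dither, reusing the 1001234 ns
example:]

#include <stdint.h>

#define N 1001234u              /* calibrated nanoseconds per tick */

static uint64_t ns_count;
static unsigned phase;

void timer_isr(void)
{
    /* N, N+1, N, N+1, ... averages N + 0.5 with half-count jitter. */
    ns_count += N + (phase++ & 1);
}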
 
On Sat, 22 Apr 2023 10:22:09 +0300, upsidedown@downunder.com wrote:

<snip>

Typically a timer interrupt increments by one. Why not add a
semi-constant value to the counter? It is not much slower than INCing
a counter.

Given a periodic interrupt based on a cheap non-adjustable clock,
every tick just do

Time = Time + Kcal

where Time and Kcal are both long floats, and Kcal is near 1.00000.

This is basically a DDS concept.

The progression of Time can now be tuned to parts-per-trillion at
production calibration, and tweaked later on if some external
correction is available.

The actual interrupt rate could be trimmed in a similar manner if some
hardware counter-timer is available with enough bits, maybe in an
FPGA. Hybrid tricks are possible.

If the interrupt rate is high, one could also just skip the occasional
IRQ to get the apparent rate exact. Nobody will notice.

Windows has a 63-bit time counter which counts 100 ns time steps and
is updated by the clock interrupt only 100 times a second, by
adding a number close to 100 000 at each interrupt. This addend
can be adjusted with system calls, making implementing an NTP client
easy.

Same idea.


Of course using a binary counter with say 0x8000 for no frequency
error would simplify things.

Using two addend values N and N+1 and applying them in alternate
clock interrupts, the jitter can be further reduced.
 
On Saturday, 22 April 2023 at 16:26:03 UTC+2, John Larkin wrote:
<snip>

Given a periodic interrupt based on a cheap non-adjustable clock,
every tick just do

Time = Time + Kcal

where Time and Kcal are both long floats, and Kcal is near 1.00000.

floats might not be the best idea
 
On Sat, 22 Apr 2023 08:17:52 -0700 (PDT), Lasse Langwadt Christensen
<langwadt@fonz.dk> wrote:

<snip>

Time = Time + Kcal

where Time and Kcal are both long floats, and Kcal is near 1.00000.

floats might not be the best idea

Why not? It's sure easy. Other processes can just use the int part of
TIME.
 
On Saturday, 22 April 2023 at 18:21:54 UTC+2, John Larkin wrote:
On Sat, 22 Apr 2023 08:17:52 -0700 (PDT), Lasse Langwadt Christensen
lang...@fonz.dk> wrote:

snip

floats might not be the best idea
Why not? It's sure easy. Other processes can just use the int part of
TIME.

adding a large and a small float loses the low bits of the small one

use a 32-bit integer as fractional time, add to that, and let the carry
out increment a 32-bit integer time
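
That is essentially a 32.32 fixed-point accumulator: on a 32-bit
machine it is one 64-bit add per tick, with the carry out of the
fractional word landing in the integer word automatically. A minimal
sketch, reusing the Kcal = 1.001234 figure (names hypothetical):

#include <stdint.h>

/* Kcal in 32.32 fixed point: whole ticks in the high 32 bits,
   fraction of a tick in the low 32 bits.  The conversion is done
   in the static initializer at compile time, so no FPU is needed
   at run time. */
static uint64_t kcal_q32 = (uint64_t)(1.001234 * 4294967296.0);

static volatile uint64_t time_q32;  /* high 32 bits = whole ticks */

void timer_isr(void)
{
    time_q32 += kcal_q32;           /* one 64-bit add per tick */
}

uint32_t time_ticks(void)           /* "the int part of TIME";      */
{                                   /* read with the tick interrupt */
    return (uint32_t)(time_q32 >> 32);  /* masked, per the caveat   */
}                                       /* noted earlier            */

Unlike the float version, the increment's low bits are never rounded
away, no matter how large the integer part grows.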


 
On 4/22/2023 19:49, Lasse Langwadt Christensen wrote:
snip

adding a large and a small float loses the low bits of the small one

use a 32-bit integer as fractional time, add to that, and let the carry
out increment a 32-bit integer time

Floats are only really necessary if you need magnitude (dynamic range)
at the cost of absolute accuracy.
Adding a large and a small float should be no issue; the FPUs I have
used always convert both operands to the large format, do the business,
then convert the result back to the small format if that was asked for.
However, if you have an FPU anyway and you know what you are doing, why
not use it, if it can simplify your work or speed things up, or both.
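
Both points are easy to check. The toy program below accumulates Kcal
once per nominal millisecond for a day's worth of ticks, in single and
in double precision: single precision stops advancing entirely once the
sum reaches 2^25, while double stays within a tiny fraction of a tick
of the exact product (a sketch; hosted C, not ISR code):

#include <stdio.h>

int main(void)
{
    const double kcal = 1.001234;   /* calibrated ticks per tick */
    const long   n = 86400000L;     /* one day of 1 ms ticks */
    float  tf = 0.0f;
    double td = 0.0;

    for (long i = 0; i < n; i++) {
        tf += (float)kcal;          /* 24-bit mantissa loses the add */
        td += kcal;                 /* 53-bit mantissa keeps it      */
    }

    printf("float : %.1f\n", tf);   /* stuck at 2^25 = 33554432.0 */
    printf("double: %.3f\n", td);   /* very close to the exact value */
    printf("exact : %.3f\n", (double)n * kcal);  /* 86506617.600 */
    return 0;
}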
 
Saturday, 22 April 2023 at 19.26.25 UTC+2, Dimiter_Popoff wrote:
On 4/22/2023 19:49, Lasse Langwadt Christensen wrote:
snip

Floats are only really necessary if you need magnitude (dynamic range)
at the cost of absolute accuracy.
Adding a large and a small float should be no issue; the FPUs I have
used always convert both operands to the large format, do the business,
then convert the result back to the small format if that was asked for.
However, if you have an FPU anyway and you know what you are doing, why
not use it, if it can simplify your work or speed things up, or both.

rp2040 doesn't have an FPU
 
