GPS satellites...

On Sunday, June 12, 2022 at 1:56:41 PM UTC-4, Hul Tytus wrote:
Martin the readings I've recently taken are formed by waiting about 2 minutes
after the receiver has "captured" a usable number of satellites and then averaging
all readings for 8 minutes. This has shown uniquely accurate (repeatable) readings
one day and some distinctly at variance on another. The objective now is to identify
the good days and also the bad days in order to avoid the latter. Averaging more
than the most basic method mentioned above would be counter to current intent, at
least at this point.

I wasn't doing averaging, but the GPS was. I connected it to a PC program that showed the readings on a map. It was mostly in a small area, but once in a while, it would take a trip, up to some 100 feet away from the location. That entire excursion would be a significant part of 8 minutes and would cause noticeable error in data collection with a short term average. There was no reason to suspect satellite positioning as the excursion was too short. Sats take hours to move across the sky. I think their orbit is 1/2 day, no?

I also found a location that simply would not register a solid reading. It seemed to wander around by hundreds of feet. This was on a ridge, potentially with line of sight, although at some 10 mile distance, to Camp David. Word is they use GPS spoofing in the area at times. I guess that can't be confirmed very easily.

GPS modules could be bought for $25 the last time I looked. One could be connected to an inexpensive data logger and left for a day or more. Plot the data on a map and you will see a drunkard's walk. The excursions will be minimal from my observations.

--

Rick C.

+ Get 1,000 miles of free Supercharging
+ Tesla referral code - https://ts.la/richard11209
 
In article <e4d1b33e-f630-4eaf-b3a7-0d5f127d3799n@googlegroups.com>,
Ricky <gnuarm.deletethisbit@gmail.com> wrote:

I wasn't doing averaging, but the GPS was. I connected it to a PC program that showed the
readings on a map. It was mostly in a small area, but once in a while, it would take a trip, up
to some 100 feet away from the location. That entire excursion would be a significant part of 8
minutes and would cause noticeable error in data collection with a short term average. There was
no reason to suspect satellite positioning as the excursion was too short. Sats take hours to
move across the sky. I think their orbit is 1/2 day, no?

Roughly that, yes. It's not a geostationary or geosynchronous orbit.

Position "wander" (GPS drift) can be caused by a whole bunch of
phenomena. The signal path from any given satellite to the receiver
is going to be affected by multipath - e.g. signal reflections from
buildings, trees, airplanes, the ground, and so forth. As the
satellite moves, the multipath behavior will change. This creates an
effect similar to audible "picket fencing" in a VHF-FM signal.

If I recall correctly, ionospheric disturbances can also perturb the
signal. The ionosphere is far from static, and if there's significant
solar activity the signal propagation can change on a minute-to-minute
basis.

I find it quite amazing that the GPS receiver's front end and signal
processing logic can pick out a whole bunch of extremely weak signals
all transmitting on the same frequency, and measure their arrival times
(phases) to such a high resolution.

It would be interesting to use a single high-quality GPS antenna, with
an active signal booster/splitter, and feed the split signal to a
group of GPS receivers of different make/model/design, and then
compare and contrast the position reports and the "which satellites
were in view and which ones were used for this position report" data.
For best accuracy, all of the receivers would need to be programmed
with the correct cable length (i.e. propagation delay) between antenna
and receiver.

That sort of comparison might help separate site-specific issues
(e.g. patterns of multipath) from device-specific ones (e.g.
differences in the algorithms used by different receivers' firmware).
 
Chris "working reliably" has numerous definitions. The objective here is
to reduce the number of readings required, i.e. the number of days required.
At this point, I'm guessing that a lack of overhead satellites is causing
large altitude errors. Might work. One way to find out.

Hul

Chris Jones <lugnut808@spam.yahoo.com> wrote:
On 12/06/2022 10:15, Hul Tytus wrote:
John I am looking at positions generated each second by one of Ublox's
9 series of GPS receivers. Some readings at a single location show (Monday at
10, Tuesday at 5, ...) good repeatability and some are noticeably off. The idea
is to see the satellite positions at the time of known readings, gain a feel
for good/bad satellite positions and use that to schedule further readings.

Unless you implement your own receiver, that isn't going to work
reliably. The receiver can pick and choose which satellites it includes
or excludes in its calculations and there is no guarantee that it will
pick the same ones at two different times, even if the same ones happen
to be visible. For the same reason, you can't use two ordinary consumer
GPS receivers to do differential GPS by just subtracting the known error
in the position solution of one station from the other station's
position solution - that won't work at all if they are using different
satellites.

About 15 years ago I recall finding a software GPS receiver somewhere on
the internet, and I'm sure there have been more or better ones developed
since then. Perhaps that would suit your needs, as you could tweak its
behaviour, get it to print out the satellite positions, etc.
 
Thanks Lasse. I'll take a look.

Hul

Lasse Langwadt Christensen <langwadt@fonz.dk> wrote:
On Sunday, June 12, 2022 at 15:40:57 UTC+2, Chris Jones wrote:
On 12/06/2022 10:15, Hul Tytus wrote:
John I am looking at positions generated each second by one of Ublox's
9 series of GPS receivers. Some readings at a single location show (Monday at
10, Tuesday at 5, ...) good repeatability and some are noticeably off. The idea
is to see the satellite positions at the time of known readings, gain a feel
for good/bad satellite positions and use that to schedule further readings.
Unless you implement your own receiver, that isn't going to work
reliably. The receiver can pick and choose which satellites it includes
or excludes in its calculations and there is no guarantee that it will
pick the same ones at two different times, even if the same ones happen
to be visible. For the same reason, you can't use two ordinary consumer
GPS receivers to do differential GPS by just subtracting the known error
in the position solution of one station from the other station's
position solution - that won't work at all if they are using different
satellites.

About 15 years ago I recall finding a software GPS receiver somewhere on
the internet, and I'm sure there have been more or better ones developed
since then. Perhaps that would suit your needs, as you could tweak its
behaviour, get it to print out the satellite positions, etc.

https://www.rtl-sdr.com/rtl-sdr-tutorial-gps-decoding-plotting/
 
On Sunday, June 12, 2022 at 5:57:16 PM UTC-4, Dave Platt wrote:
In article <e4d1b33e-f630-4eaf...@googlegroups.com>,
Ricky <gnuarm.del...@gmail.com> wrote:

I wasn't doing averaging, but the GPS was. I connected it to a PC program that showed the
readings on a map. It was mostly in a small area, but once in a while, it would take a trip, up
to some 100 feet away from the location. That entire excursion would be a significant part of 8
minutes and would cause noticeable error in data collection with a short term average. There was
no reason to suspect satellite positioning as the excursion was too short. Sats take hours to
move across the sky. I think their orbit is 1/2 day, no?
Roughly that, yes. It's not a geostationary or geosynchronous orbit.

Position "wander" (GPS drift) can be caused by a whole bunch of
phenomena. The signal path from any given satellite to the receiver
is going to be affected by multipath - e.g. signal reflections from
buildings, trees, airplanes, the ground, and so forth. As the
satellite moves, the multipath behavior will change. This creates an
effect similar to audible "picket fencing" in a VHF-FM signal.

A GPS receiver needs four sats to get a 3D lock. A few more improve the accuracy, but they also provide redundancy. If one sat is arriving by multipath, the calculations using that sat will result in significant deviations. I don't know if they do, but such a sat can be removed from the calculations, improving the accuracy.


If I recall correctly, ionospheric disturbances can also perturb the
signal. The ionosphere is far from static, and if there's significant
solar activity the signal propagation can change on a minute-to-minute
basis.

Variations due to atmospheric disruptions are minimized by WAAS and similar correction schemes. I find that without WAAS the error was typically 30 feet with larger excursions, while with the WAAS correction turned on, normal accuracy is typically better than 10 feet. Apparently it is done by brute force: a stationary... station measures its location and periodically reports the error. This is broadcast by the sats as a correction based on your area.


I find it quite amazing that the GPS receiver's front end and signal
processing logic can pick out a whole bunch of extremely weak signals
all transmitting on the same frequency, and measure their arrival times
(phases) to such a high resolution.

I assume you know this is done by correlating PRN codes, which gives a lot of gain in the signal. Each sat has a different PRN which looks like noise to all the other codes, so there is very little interference. I believe the code is 1,023 chips long, so lots of gain.
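
To get a feel for that gain, here is a minimal numpy sketch; a random +/-1 sequence stands in for a real Gold code, and the amplitudes are made up:

import numpy as np

rng = np.random.default_rng(0)
N = 1023                           # C/A code length in chips
code = rng.choice([-1.0, 1.0], N)  # stand-in for a real Gold code

# Received signal: the code buried well below the noise floor
rx = 0.1 * code + rng.normal(0.0, 1.0, N)

# Correlating against the known code averages the noise down,
# a processing gain of roughly 10*log10(N), about 30 dB
matched = np.dot(rx, code) / N                      # ~ +0.1
other = np.dot(rx, rng.choice([-1.0, 1.0], N)) / N  # ~ 0
print(f"matched code: {matched:+.3f}, other code: {other:+.3f}")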


It would be interesting to use a single high-quality GPS antenna, with
an active signal booster/splitter, and feed the split signal to a
group of GPS receivers of different make/model/design, and then
compare and contrast the position reports and the "which satellites
were in view and which ones were used for this position report" data.
For best accuracy, all of the receivers would need to be programmed
with the correct cable length (i.e. propagation delay) between antenna
and receiver.

As long as they all see the same delay, it won't make a difference.


That sort of comparison might help separate site-specific issues
(e.g. patterns of multipath) from device-specific ones (e.g.
differences in the algorithms used by different receivers' firmware).

We evaluated small GPS circuit boards for use in a product once and came down to two candidates, so we lab tested them using an external antenna. One worked just fine, with a few seconds to first lock (20 seconds maybe). The other brand never got a lock. The vendor didn't care enough to find out why their units weren't working.

--

Rick C.

-- Get 1,000 miles of free Supercharging
-- Tesla referral code - https://ts.la/richard11209
 
On 13/6/22 03:56, Hul Tytus wrote:
Martin the readings I've recently taken are formed by waiting about 2 minutes
after the receiver has "captured" a usable number of satellites and then averaging
all readings for 8 minutes. This has shown uniquely accurate (repeatable) readings
one day and some distinctly at variance on another. The objective now is to identify
the good days and also the bad days in order to avoid the latter.

Did you also record the HDOP and VDOP figures from the GPS?

DOP expresses how the satellite geometry propagates the signal timing
errors into positioning errors, depending on the specific locations of the
satellites the receiver is relying on. If you aren't using DOP, you should be.
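
Most receivers, the u-blox 9 series included, report those figures in the standard NMEA GSA sentence. A minimal parsing sketch in Python (the example sentence and its checksum are invented):

def parse_gsa(sentence):
    """Extract (PDOP, HDOP, VDOP) from an NMEA xxGSA sentence."""
    fields = sentence.split('*')[0].split(',')  # drop checksum, split fields
    # layout: $xxGSA, mode1, mode2, 12 satellite-ID slots, PDOP, HDOP, VDOP
    return tuple(float(f) for f in fields[15:18])

line = "$GPGSA,A,3,04,05,09,12,,,,,,,,,2.5,1.3,2.1*39"  # hypothetical fix
print(parse_gsa(line))  # -> (2.5, 1.3, 2.1)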
 
On Sunday, June 12, 2022 at 5:58:27 PM UTC-4, Hul Tytus wrote:
Chris "working reliably" has numerous definitions. The objective here is
to reduce the number of readings required, i.e. the number of days required.
At this point, I'm guessing that a lack of overhead satellites is causing
large altitude errors. Might work. One way to find out.

It's usually the other way around: several overhead sats and not so many closer to the horizon. Low elevation sats are often blocked by structures or terrain. Elevation accuracy is inherently poorer than the map coordinates. Having low elevation sats helps that issue, not only the sats overhead.

GPS measures the timing reported by the sats in a relative manner only. Each pair of sats defines a 3D hyperboloid. The intersections show the location. The best accuracy comes from sats spread around, so the axes of the hyperboloids are not too close to one another. Only once you calculate your position can the actual time be determined, from the reported times and the now known path delays.
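
A toy numpy sketch of that idea, solving position and receiver clock bias together by Gauss-Newton on pseudoranges (synthetic satellite positions and a made-up clock bias, no real ephemeris):

import numpy as np

rng = np.random.default_rng(1)
C = 299_792_458.0  # m/s

# Synthetic satellites roughly 20,000 km up, spread around the receiver
sats = np.array([[15e6, 10e6, 18e6],
                 [-12e6, 14e6, 16e6],
                 [8e6, -16e6, 15e6],
                 [-6e6, -9e6, 21e6],
                 [18e6, -2e6, 12e6]])
truth = np.array([1.2e6, -4.5e6, 4.1e6])  # true receiver position (m)
bias = 3.3e-4 * C                         # receiver clock error, in metres

# Pseudorange = geometric range + clock bias (+ a little noise)
rho = np.linalg.norm(sats - truth, axis=1) + bias + rng.normal(0, 3.0, len(sats))

# Gauss-Newton on the four unknowns (x, y, z, b)
est = np.zeros(4)
for _ in range(10):
    d = np.linalg.norm(sats - est[:3], axis=1)
    resid = rho - (d + est[3])
    J = np.hstack([(est[:3] - sats) / d[:, None], np.ones((len(sats), 1))])
    est += np.linalg.lstsq(J, resid, rcond=None)[0]

print("position error (m):", np.linalg.norm(est[:3] - truth))
print("recovered clock bias (m):", est[3])  # the time falls out last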

--

Rick C.

-+ Get 1,000 miles of free Supercharging
-+ Tesla referral code - https://ts.la/richard11209
 
On 12/06/2022 18:33, Hul Tytus wrote:
Martin the VSOP methods sound like what I was after, thanks for mentioning
it. The Predict program at qsl.net appears useful but the source is barred
by an indemnity clause in their "terms of service".

VSOP is really for very accurate solar system object positions. The
trouble with multibody gravitational effects is that they cause the
orbital elements of each component to evolve with time.

Historically, when computer-controlled scopes first became available, the
planets were easy enough, but the Moon was well beyond what they could
fit in the firmware - too many perturbations, and the scope would almost
never point at the largest object in the night sky!

You should be able to get away with something much cruder for taking
satellite orbital elements to actual positions. The tedious bit will be
obtaining the relevant orbital elements every couple of weeks.

You are after all only interested in the number of birds within a
given zenith angle at your location.

I have a spreadsheet that can take classical orbital elements for solar
system objects and turn them into x,y,z and thence to RA & Dec. It is
more intended for amateur astronomers doing comet hunting though.

--
Regards,
Martin Brown
 
On 12/06/2022 18:56, Hul Tytus wrote:
Martin the readings I've recently taken are formed by waiting about 2 minutes
after the receiver has "captured" a usable number of satellites and then averaging
all readings for 8 minutes. This has shown uniquely accurate (repeatable) readings
one day and some distinctly at variance on another. The objective now is to identify
the good days and also the bad days in order to avoid the latter. Averaging more
than the most basic method mentioned above would be counter to current intent, at
least at this point.

Do you have two examples of the raw data, good and bad, as CSV files?

I'd be interested to take a quick look (although it will be July before
I have any slack time for interesting look-see type things).

My instinct is that switching from mean to median would go a long way to
solving your immediate problem by weighting down the sporadic outliers.

Averaging is only helpful against Gaussian distributed noise and my
instinct is that your noise is decidedly not that friendly.
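
A quick illustration on synthetic data with a single 100-foot excursion mixed in (all numbers invented):

import numpy as np

rng = np.random.default_rng(2)
fixes = rng.normal(0.0, 10.0, 480)  # 8 minutes of 1 Hz fixes, ~10 ft wander
fixes[200:260] += 100.0             # a 60 s excursion sitting 100 ft away

print(f"mean:   {fixes.mean():6.2f} ft")      # dragged ~12 ft off
print(f"median: {np.median(fixes):6.2f} ft")  # barely moves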

--
Regards,
Martin Brown
 
Martin while seeking "Astronomical Algorithms" I bumped into "Sun Position" by
someone named Craig. A British fellow, apparently. The claim was code for accurately
finding the sun's position, in 6 languages! I had hopes Craig was using the
current VSOP procedure which you mentioned.
The interest in the sun's position came from a different project and is not
a part of the GPS efforts I've described.
What's needed now is a description of the procedure for deriving a satellite's
position at a given time, or an example of such code, such as "Predict" on qsl.net.
Any suggestions?

Hul

Martin Brown <'''newspam'''@nonad.co.uk> wrote:
On 12/06/2022 18:33, Hul Tytus wrote:
Martin the VSOP methods sound like what I was after, thanks for mentioning
it. The Predict program at qsl.net appears useful but the source is barred
by an indemnity clause in their "terms of service".

VSOP is really for very accurate solar system object positions. The
trouble with multibody gravitational effects is that they cause the
orbital elements of each component to evolve with time.

Historically, when computer-controlled scopes first became available, the
planets were easy enough, but the Moon was well beyond what they could
fit in the firmware - too many perturbations, and the scope would almost
never point at the largest object in the night sky!

You should be able to get away with something much cruder for taking
satellite orbital elements to actual positions. The tedious bit will be
obtaining the relevant orbital elements every couple of weeks.

You are after all only interested in the number of birds within a
given zenith angle at your location.

I have a spreadsheet that can take classical orbital elements for solar
system objects and turn them into x,y,z and thence to RA & Dec. It is
more intended for amateur astronomers doing comet hunting though.

--
Regards,
Martin Brown
 
On 14/6/22 10:18, Hul Tytus wrote:
Martin while seeking "Astronomical Algorithms" I bumped into "Sun Position" by
someone named Craig. A British fellow, apparently. The claim was code for accurately
finding the sun's position, in 6 languages! I had hopes Craig was using the
current VSOP procedure which you mentioned.

I already responded with a pointer to this:
<https://github.com/cosinekitty/astronomy>


What's needed now is a description of the procedure for deriving a satellite's
position at a given time, or an example of such code, such as "Predict" on qsl.net.
Any suggestions?

Contact the author of that package and ask if they're interested in
helping implement that?

CH
 
Martin, by "raw data" you're referring to the data received each second of
the 8 minute sampling period; each reading is not recorded but is included in the average. CSV
is a term unknown to me, but that's probably due to memory and/or lack of access
to the source code or attending documentation. However, the position of the satellites
& some other data is recorded, up to, I think, 64 entries. Thereafter the oldest is replaced
by the newest. That is held in RAM and is consequently lost when power goes. The results,
i.e. the positions, are recorded by hand.
Along the lines you've mentioned regarding methods of averaging, especially those
avoiding the "outliers", here is a possible scheme (a sketch follows below):
record 1000 positions
on each new position
calculate the average
remove the most distant entry and place the new in its place
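
A minimal Python sketch of that scheme, assuming plain x/y coordinates and nothing about the real firmware:

import numpy as np

class FarthestReplaceBuffer:
    """Keep N positions; each new fix replaces whichever entry
    lies farthest from the current average."""

    def __init__(self, n=1000):
        self.buf = []
        self.n = n

    def add(self, fix):
        fix = np.asarray(fix, dtype=float)
        if len(self.buf) < self.n:          # still filling the buffer
            self.buf.append(fix)
        else:
            arr = np.array(self.buf)
            mean = arr.mean(axis=0)
            worst = np.argmax(np.linalg.norm(arr - mean, axis=1))
            self.buf[worst] = fix           # evict the farthest outlier
        return np.mean(self.buf, axis=0)    # current averaged position

buf = FarthestReplaceBuffer(n=1000)
avg = buf.add([123.4, 567.8])  # feed each 1 Hz fix, read back the average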

Hul

Martin Brown <'''newspam'''@nonad.co.uk> wrote:
On 12/06/2022 18:56, Hul Tytus wrote:
Martin the readings I've recently taken are formed by waiting about 2 minutes
after the receiver has "captured" a usable number of satellites and then averaging
all readings for 8 minutes. This has shown uniquely accurate (repeatable) readings
one day and some distinctly at variance on another. The objective now is to identify
the good days and also the bad days in order to avoid the latter. Averaging more
than the most basic method mentioned above would be counter to current intent, at
least at this point.

Do you have two examples of the raw data, good and bad, as CSV files?

I'd be interested to take a quick look (although it will be July before
I have any slack time for interesting look-see type things).

My instinct is that switching from mean to median would go a long way to
solving your immediate problem by weighting down the sporadic outliers.

Averaging is only helpful against Gaussian distributed noise and my
instinct is that your noise is decidedly not that friendly.

--
Regards,
Martin Brown
 
On 14/06/2022 01:42, Hul Tytus wrote:
Martin, by "raw data" you're referring to the data received each second of
the 8 minute sampling period; each reading is not recorded but is included in the average. CSV
is a term unknown to me, but that's probably due to memory and/or lack of access
to the source code or attending documentation. However, the position of the satellites
& some other data is recorded, up to, I think, 64 entries. Thereafter the oldest is replaced
by the newest. That is held in RAM and is consequently lost when power goes. The results,
i.e. the positions, are recorded by hand.

CSV = comma separated variables.
I was hoping that maybe the device could output an ASCII raw data dump.

time, x, y, z (or lat, long, height)

One thing to bear in mind is that the unit is mostly concerned with
obtaining a latitude & longitude for the observer, so in marginal signal
or constellation situations it will tend to put the observer onto the
default oblate spheroid of the Earth's surface (or some other, perhaps
more detailed, internal topographic map).

Along the lines you've mentioned regarding methods of averaging, especially those
avoiding the "outliers", here is a possible scheme:
record 1000 positions
on each new position
calculate the average
remove the most distant entry and place the new in its place

Keeping a running mean and variance for the length of buffer that you
have and then ignoring any values more than 3 sigma away from the mean
is one sort of quick and dirty heuristic I have seen used in realtime
on crude and not very powerful data acquisition with (very) noisy data.

Basically it computes mean and variance of the original buffer and then
mean and variance of the modified dataset, keeping track of the number of
samples actually used. Rinse and repeat.

If you have the entire dataset at once then after the raw data are all
acquired you can do it better in post processing.
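
In post-processing form, a rough sketch of that heuristic (the threshold and pass count are arbitrary):

import numpy as np

def sigma_clip_mean(x, nsigma=3.0, passes=5):
    """Mean after repeatedly dropping values > nsigma from the mean."""
    x = np.asarray(x, dtype=float)
    keep = np.ones(len(x), dtype=bool)
    for _ in range(passes):
        m, s = x[keep].mean(), x[keep].std()
        new = np.abs(x - m) <= nsigma * s
        if new.sum() == keep.sum():
            break                   # nothing more to drop: converged
        keep = new
    return x[keep].mean(), int(keep.sum())  # mean, samples actually used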

--
Regards,
Martin Brown
 
Martin, in common terms, the "terminal" used here doesn't store the data
but just adds it to the average.
I found a source for Meeus' Astronomical Algorithms. Thanks for mentioning
it.

Hul

Martin Brown <'''newspam'''@nonad.co.uk> wrote:
On 14/06/2022 01:42, Hul Tytus wrote:
Martin, by "raw data" you're referring to the data received each second of
the 8 minute sampling period; each reading is not recorded but is included in the average. CSV
is a term unknown to me, but that's probably due to memory and/or lack of access
to the source code or attending documentation. However, the position of the satellites
& some other data is recorded, up to, I think, 64 entries. Thereafter the oldest is replaced
by the newest. That is held in RAM and is consequently lost when power goes. The results,
i.e. the positions, are recorded by hand.

CSV = comma separated variables.
I was hoping that maybe the device could output an ASCII raw data dump.

time, x, y, z (or lat, long, height)

One thing to bear in mind is that the unit is mostly concerned with
obtaining a latitude & longitude for the observer, so in marginal signal
or constellation situations it will tend to put the observer onto the
default oblate spheroid of the Earth's surface (or some other, perhaps
more detailed, internal topographic map).

Along the lines you've mentioned regarding methods of averaging, especially those
avoiding the "outliers", here is a possible scheme:
record 1000 positions
on each new position
calculate the average
remove the most distant entry and place the new in its place

Keeping a running mean and variance for the length of buffer that you
have and then ignoring any values more than 3 sigma away from the mean
is one sort of quick and dirty heuristic I have seen used in realtime
on crude and not very powerful data acquisition with (very) noisy data.

Basically it computes mean and variance of the original buffer and then
mean and variance of the modified dataset, keeping track of the number of
samples actually used. Rinse and repeat.

If you have the entire dataset at once then after the raw data are all
acquired you can do it better in post processing.

--
Regards,
Martin Brown
 
On 14/6/22 16:52, Martin Brown wrote:
On 14/06/2022 01:42, Hul Tytus wrote:
Martin, by "raw data" you're referring to the data received each second
of the 8 minute sampling period; each reading is not recorded but is
included in the average. CSV is a term unknown to me, but that's probably
due to memory and/or lack of access to the source code or attending
documentation. However, the position of the satellites & some other data
is recorded, up to, I think, 64 entries. Thereafter the oldest is replaced
by the newest. That is held in RAM and is consequently lost when power
goes. The results, i.e. the positions, are recorded by hand.

CSV = comma separated variables

Values. Comma separated values. Variables would leave you guessing :)

Keeping a running mean and variance for the length of buffer that you
have and then ignoring any values more than 3 sigma away from the mean
is one sort of quick and dirty heuristic I have seen used in realtime
on crude and not very powerful data acquisition with (very) noisy data.

Basically it computes mean and variance

You need to keep the sum of squares if you want variance.
But yes, a good technique.
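
For the record, a minimal sketch of that bookkeeping, carrying n, sum(x) and sum(x*x) (the textbook formula; Welford's update is the numerically safer variant if the mean dwarfs the spread):

class RunningStats:
    """Running mean and variance from n, sum of x, and sum of x squared."""

    def __init__(self):
        self.n, self.s, self.ss = 0, 0.0, 0.0

    def add(self, x):
        self.n += 1
        self.s += x
        self.ss += x * x

    def mean(self):
        return self.s / self.n

    def variance(self):
        # E[x^2] - E[x]^2; fine for well-scaled data
        return self.ss / self.n - self.mean() ** 2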

CH
 
On Tue, 14 Jun 2022 07:52:55 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/06/2022 01:42, Hul Tytus wrote:
Martin, by "raw data" you're referring to the data received each second of
the 8 minute sampling period; each reading is not recorded but is included in the average. CSV
is a term unknown to me, but that's probably due to memory and/or lack of access
to the source code or attending documentation. However, the position of the satellites
& some other data is recorded, up to, I think, 64 entries. Thereafter the oldest is replaced
by the newest. That is held in RAM and is consequently lost when power goes. The results,
i.e. the positions, are recorded by hand.

CSV = comma separated variables.
I was hoping that maybe the device could output an ASCII raw data dump.

time, x, y, z (or lat, long, height)

One thing to bear in mind is that the unit is mostly concerned with
obtaining a latitude & longitude for the observer, so in marginal signal
or constellation situations it will tend to put the observer onto the
default oblate spheroid of the Earth's surface (or some other, perhaps
more detailed, internal topographic map).

Along the lines you've mentioned regarding methods of averaging, especially those
avoiding the "outliers", here is a possible scheme:
record 1000 positions
on each new position
calculate the average
remove the most distant entry and place the new in its place

Keeping a running mean and variance for the length of buffer that you
have and then ignoring any values more than 3 sigma away from the mean
is one sort of quick and dirty heuristic I have seen used in realtime
on crude and not very powerful data acquisition with (very) noisy data.

The simple version of this is the trimmed mean.
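
A one-function sketch, with the trim fraction picked arbitrarily:

import numpy as np

def trimmed_mean(x, frac=0.1):
    """Mean after discarding the lowest and highest frac of samples."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(len(x) * frac)
    return x[k:len(x) - k].mean()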

The more powerful approach is a Jackknife Estimator.

<https://en.wikipedia.org/wiki/Jackknife_resampling>


Joe Gwinn



Basically it computes mean and variance of the original buffer and then
mean and variance of the modified dataset, keeping track of the number of
samples actually used. Rinse and repeat.

If you have the entire dataset at once then after the raw data are all
acquired you can do it better in post processing.
 
On 14/06/2022 16:25, Joe Gwinn wrote:
On Tue, 14 Jun 2022 07:52:55 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/06/2022 01:42, Hul Tytus wrote:
Martin, by "raw data" you're referring to the data received each second of

Along the lines you've mentioned regarding methods of averaging, especially those
avoiding the "outliers", here is a possible scheme:
record 1000 positions
on each new position
calculate the average
remove the most distant entry and place the new in its place

Keeping a running mean and variance for the length of buffer that you
have and then ignoring any values more than 3 sigma away from the mean
is one sort of quick and dirty heuristic I have seen used in realtime
on crude and not very powerful data acquisition with (very) noisy data.

The simple version of this is the trimmed mean.

In rough and ready engineering terms it was frequently referred to, at
my place of work, as the mean whose parents were unmarried.

The more powerful approach is a Jackknife Estimator.

<https://en.wikipedia.org/wiki/Jackknife_resampling>

Doesn't really lend itself to real time computation with limited
resources though. Summing a few extra terms is more easily done.


--
Regards,
Martin Brown
 
On Tue, 14 Jun 2022 17:03:07 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/06/2022 16:25, Joe Gwinn wrote:
On Tue, 14 Jun 2022 07:52:55 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/06/2022 01:42, Hul Tytus wrote:
Martin, by "raw data" you're referring to the data received each second of

Along the lines you've mentioned regarding methods of averaging, especially those
avoiding the "outliers", here is a possible scheme:
record 1000 positions
on each new position
calculate the average
remove the most distant entry and place the new in its place

Keeping a running mean and variance for the length of buffer that you
have and then ignoring any values more than 3 sigma away from the mean
is one sort of quick and dirty heuristic I have seen used in realtime
on crude and not very powerful data acquisition with (very) noisy data.

The simple version of this is the trimmed mean.

In rough and ready engineering terms it was frequently referred to, at
my place of work, as the mean whose parents were unmarried.

Heh.


The more powerful approach is a Jackknife Estimator.

<https://en.wikipedia.org/wiki/Jackknife_resampling>

Doesn't really lend itself to real time computation with limited
resources though. Summing a few extra terms is more easily done.

Well, it is often used in realtime, for radar, but the computer is
generally pretty capable.

For a jackknife mean, it usually means subtracting n_i/N from the mean
of all N samples, for each sample n_i, dropping the sample with the
largest effect. This is pretty fast. Keep doing this until the
successive means are reasonably close to one another, where
"reasonably" is domain dependent.

Joe Gwinn
 
On Sunday, June 12, 2022 at 2:33:18 PM UTC-7, Ricky wrote:

> I also found a location that simply would not register a solid reading. It seemed to wander around by hundreds of feet. This was on a ridge, potentially with line of sight, although at some 10 mile distance, to Camp David. Word is they use GPS spoofing in the area at times. I guess that can't be confirmed very easily.

There are failures of most schemes; the old map-and-compass days got interesting around Mt. St. Helens
in May 1980; I took sightings off local peaks to locate a seismometer or two, but the big mountain
was... unavailable at that time.
 
On 2022-06-11, Hul Tytus <ht@panix.com> wrote:
Martin - thanks for the info, especially the "spacecraft elements" at in-the-sky.org. From
what's there, I need to learn the meaning of the terms shown and the procedure
for predicting positions at a given time. Any suggestions?

search for "satellite prediction software".

http://gpredict.oz9aec.net/ looks promising.
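
For computing positions yourself, the Python sgp4 package implements the standard SGP4 propagator that consumes two-line elements. A minimal sketch (the TLE pair is the ISS example from the sgp4 package's own docs; fresh elements for the GPS birds come from e.g. celestrak.org):

from sgp4.api import Satrec, jday

l1 = '1 25544U 98067A   19343.69339541  .00001764  00000-0  40797-4 0  9009'
l2 = '2 25544  51.6439 211.2001 0007417  17.6667  85.6398 15.50103472202482'

sat = Satrec.twoline2rv(l1, l2)
jd, fr = jday(2019, 12, 9, 12, 0, 0)  # a date inside the elements' validity
err, r, v = sat.sgp4(jd, fr)          # err == 0 on success
print(r)                              # position in km, TEME frame

Converting the TEME x, y, z to azimuth/elevation at your site then gives the number of birds inside a given zenith angle.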


--
Jasen.
 
