WTF with my computer clock?

In article <4a886fcc$0$7464$822641b3@news.adtechcomputers.com>,
David Nebenzahl <nobody@but.us.chickens> wrote:

On 8/16/2009 9:55 AM Michael A. Terrell spake thus:

David Nebenzahl wrote:

Hope I'm not belaboring the point here. I just ran "net time" again and
got the error message "Could not locate a time-server". So I assume that
even if that process is running on my computer, as someone else here
asserted, it's not doing anything to my RTC, as there are no
time-servers to query (that it knows about).

http://download.cnet.com/Atomic-Clock-Sync/3000-18512_4-14844.html

Thanks, but I'm happy with the little utility I already use that
contacts NIST (Nat'l Institute of Standards and Technology); see
http://tf.nist.gov/service/its.htm for more info.
You should stay away from NIST (and all other stratum one servers) to
avoid overloading their server unless you have a real need for high
precision -- REALLY high. Otherwise, find a good stratum two server to
connect to; you'll never know the difference. There are a lot; just
google. I use time.apple.com.

Isaac
 
In article <4a88a59a$0$7469$822641b3@news.adtechcomputers.com>,
David Nebenzahl <nobody@but.us.chickens> wrote:

On 8/16/2009 2:51 PM Meat Plow spake thus:

I agree that for most a minute per month is reasonable but I would
expect the same accuracy as my $29.99 Timex wristwatch which is more
like a second a month.

So that kinda begs the question of why computer mfrs. can't (or won't)
include clocks that are at *least* as accurate as a Timex, no? Wouldn't
a computah be a more compelling reason for a more accurate clock? (I
know, $$$ bottom line, right?)

If you use the NIST SNTP server you'll be as accurate as how
frequently your SNTP client updates.

Of course, it would be nice to know one's computer would maintain
accurate time even if, god forbid, it was somehow disconnected from The
Network ...
Functionally impossible. By adding money, you can reduce the drift rate
but you can't make it zero. Period. Just use NTP. And *stay away* from
the stratum one servers like NIST; they have better things to do than
keep your computer's clock on time.

Isaac
 
In article <u9ah85l1paeetprd8r7mdc61npk29cda51@4ax.com>,
Jeff Liebermann <jeffl@cruzio.com> wrote:

On Sun, 16 Aug 2009 17:34:37 -0700, David Nebenzahl
<nobody@but.us.chickens> wrote:

On 8/16/2009 2:51 PM Meat Plow spake thus:

I agree that for most a minute per month is reasonable but I would
expect the same accuracy as my $29.99 Timex wristwatch which is more
like a second a month.

So that kinda begs the question of why computer mfrs. can't (or won't)
include clocks that are at *least* as accurate as a Timex, no? Wouldn't
a computah be a more compelling reason for a more accurate clock? (I
know, $$$ bottom line, right?)

Because it's difficult. The right way to have done it would have been
to do a function call from an RTC (real time clock) every time some
application needs the actual time.
I don't agree. NO CLOCK, running alone, can be really accurate over the
long term. A much better way is to take the output from a crummy,
inaccurate *but low cost* clock and using an external time reference,
synthesize from it a local clock of simply amazing accuracy.

NTP solves the problem completely, and at a very low cost (processing
cycles instead of expen$ive hardware). NTP works even if the computer
it's running on has *no RTC* (in the hardware sense) at all. All it
needs is some sort of interrupt generated every N cycles of the
processor clock (N is any integer that produces regular interrupts a few
times a second; the actual interval is not important).

Isaac
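Isaac's point above — that a cheap free-running oscillator plus an external reference beats an expensive clock running alone — can be sketched as a toy feedback loop. This is a drastic simplification of what ntpd actually does, and the drift figure, poll interval, and gain below are made-up illustration values:

```python
def discipline_clock(drift_ppm, polls=20, interval=64.0, gain=0.5):
    """Toy NTP-style disciplining of an inaccurate local oscillator.

    The local clock runs off-frequency by `drift_ppm`.  Every `interval`
    seconds we measure its offset against a perfect reference, slew that
    offset out, and apply a proportional frequency correction.  Returns
    the residual offset (seconds) the clock accrues per poll interval
    after training.
    """
    rate_error = drift_ppm * 1e-6   # true (unknown) frequency error
    freq_corr = 0.0                 # our running frequency correction
    for _ in range(polls):
        offset = (rate_error + freq_corr) * interval  # error since last poll
        freq_corr -= gain * offset / interval         # steer the frequency
        # the measured offset itself is slewed out, so it doesn't carry over
    return abs((rate_error + freq_corr) * interval)

# 100 ppm is roughly 4.3 minutes/month -- far worse than a Timex -- yet:
print(discipline_clock(100.0))  # residual is down in the nanoseconds
```

Even a crystal that is off by 100 ppm converges to nanosecond-scale residuals after a handful of polls, which is exactly why the accuracy of the underlying hardware barely matters.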
 
In article <0003da1e$0$2175$c3e8da3@news.astraweb.com>,
Sylvia Else <sylvia@not.at.this.address> wrote:

Dave Plowman (News) wrote:
In article <7dudnfkL1oZuVhrXnZ2dnUVZ8txi4p2d@brightview.co.uk>,
Nigel Feltham <nigel.feltham@btinternet.com> wrote:
Are you sure that's what it is ? Any HD that I've seen is just that. A
perfectly 'normal' looking picture, but with a higher resolution. Why
should a higher res camera change the tonal composition of the picture
? (assuming that it is being shot on video). Looks more like they've
changed from film to video, or the other way round perhaps. Or are
maybe using a video mode that attempts to simulate film, something
like that. I saw it before on the programme when they did a couple of
'specials'. Didn't like it then, don't like it now.

Oh dear, sounds like the horrible filmic processing - where they reverse
the order of the 2 interlaced half frames to give the picture a
juddering effect which is claimed to look more like film.

No - The Bill has never used that. Or rather not in general - it may have
been tried on a 'special'.
The current ones are shot HD using progressive scan.

But IIRC, they suppress one field and repeat the other for this effect?


Certainly, no one in their right mind would deliberately reverse the
interlace ordering - the result is unwatchable.
But recording only odd (or even -- doesn't matter) fields is a very
functional low-quality sort of "compression"; VCR's have been using it
for years.

Isaac
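The field-dropping "compression" Isaac describes is easy to picture in code — a hypothetical sketch treating an interlaced frame as a simple list of scan lines:

```python
def drop_one_field(frame_lines):
    """VCR-style field suppression: keep only the even-numbered scan
    lines (one field) of an interlaced frame and repeat each kept line
    in place of its discarded partner, halving vertical resolution."""
    out = []
    for i in range(0, len(frame_lines), 2):
        out.append(frame_lines[i])   # line from the retained field
        out.append(frame_lines[i])   # repeated where the dropped one was
    return out

print(drop_one_field(["a", "b", "c", "d"]))  # ['a', 'a', 'c', 'c']
```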
 
In article <XMSdnfsBxIPy1RXXnZ2dnUVZ8nSdnZ2d@brightview.co.uk>,
Nigel Feltham <nigel.feltham@btinternet.com> wrote:

Dave Plowman (News) wrote:

In article <0003dcd7$0$7860$c3e8da3@news.astraweb.com>,
Sylvia Else <sylvia@not.at.this.address> wrote:
Much as I'd like to be able to support the view that the BBC's standards
are falling, I have to advise that I was already being frustrated by the
BBC's apparent inability to keep to its published schedules back in the
early 1980s. This is nothing new.

If you give it some thought, it's near impossible to make a prog run 'to
the second', as some seem to want. You could, of course, always make it
shorter and fill the gaps with trails etc - allowing the next one to start
on the second. But that would bring even more complaints. ;-)

Maybe not with live shows but when shows are pre-recorded the broadcaster
knows the exact length of each show long before broadcast
Live or recorded, it is perfectly possible for broadcasters to maintain
program timing to the nearest second; we used to do it back in the
sixties, when nationwide network switching was synchronized by people
watching Western Union clocks on the walls of broadcast stations all
over the country. What has happened is that broadcasters either don't
care any more, or there is some commercial advantage to playing fast and
loose with the timing. My bet is on the latter.

Isaac
 
In article <508bedd76ddave@davenoise.co.uk>,
"Dave Plowman (News)" <dave@davenoise.co.uk> wrote:

In article <XMSdnfsBxIPy1RXXnZ2dnUVZ8nSdnZ2d@brightview.co.uk>,
Nigel Feltham <nigel.feltham@btinternet.com> wrote:
If you give it some thought, it's near impossible to make a prog run
'to the second', as some seem to want. You could, of course, always
make it shorter and fill the gaps with trails etc - allowing the next
one to start on the second. But that would bring even more complaints.
;-)

Maybe not with live shows but when shows are pre-recorded the
broadcaster knows the exact length of each show long before broadcast so
should be able to make the published schedule fit what is actually
broadcast - like if you broadcast a pre-recorded show at 8pm and you
know the recording is exactly 60 mins long then advertise the next one
as 9:02 to allow for trailers not 9:00 and run late.

Oh they do know the *exact* length of a pre-recorded show - but even those
won't run on time to the second.
Yes, they do. Timing is based on the number of frames in the entire
show, and the frame rate is very, very, accurately controlled by major
broadcasters -- figure on something better than one part in a hundred
million from any major network. You can use the frame rate, line rate,
or color subcarrier frequency as at least a stratum two timebase if you
refer to any major network's signals.

And so much is automated these days, playout wise.
That just makes the switching times more precise -- IF the operator
cares to be...

Isaac
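Isaac's frame-count arithmetic is exact: at the NTSC rate of 30000/1001 frames per second, a show's length in seconds follows directly from its frame count. A quick check (the frame count below is just an example figure):

```python
from fractions import Fraction

NTSC_FPS = Fraction(30000, 1001)   # exact NTSC frame rate, ~29.97 fps

def frames_to_seconds(frames):
    """Exact running time, in seconds, of `frames` NTSC frames."""
    return Fraction(frames) / NTSC_FPS

# e.g. a nominal one-hour show cut to 107,892 frames runs for exactly
# 3599.9964 s -- 3.6 ms short of a true hour:
print(float(frames_to_seconds(107892)))
```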
 
In article <4a88a59a$0$7469$822641b3@news.adtechcomputers.com>,
David Nebenzahl <nobody@but.us.chickens> wrote:
I agree that for most a minute per month is reasonable but I would
expect the same accuracy as my $29.99 Timex wristwatch which is more
like a second a month.

So that kinda begs the question of why computer mfrs. can't (or won't)
include clocks that are at *least* as accurate as a Timex, no? Wouldn't
a computah be a more compelling reason for a more accurate clock? (I
know, $$$ bottom line, right?)
Wonder if it's because a wrist watch is kept at a pretty constant
temperature via the skin?

--
*I'm pretty sure that sex is better than logic, but I can't prove it.

Dave Plowman dave@davenoise.co.uk London SW
To e-mail, change noise into sound.
 
In article <isw-A079FD.22361716082009@[216.168.3.50]>,
isw <isw@witzend.com> wrote:
Oh they do know the *exact* length of a pre-recorded show - but even those
won't run on time to the second.

Yes, they do. Timing is based on the number of frames in the entire
show, and the frame rate is very, very, accurately controlled by major
broadcasters -- figure on something better than one part in a hundred
million from any major network. You can use the frame rate, line rate,
or color subcarrier frequency as at least a stratum two timebase if you
refer to any major network's signals.
I know the length won't vary as transmitted, but all one hour progs etc
ain't *exactly* the same length.

--
*Pride is what we have. Vanity is what others have.

Dave Plowman dave@davenoise.co.uk London SW
To e-mail, change noise into sound.
 
"isw" <isw@witzend.com> wrote in message
news:isw-EF6E23.22301316082009@[216.168.3.50]...
In article <XMSdnfsBxIPy1RXXnZ2dnUVZ8nSdnZ2d@brightview.co.uk>,
Nigel Feltham <nigel.feltham@btinternet.com> wrote:

Dave Plowman (News) wrote:

In article <0003dcd7$0$7860$c3e8da3@news.astraweb.com>,
Sylvia Else <sylvia@not.at.this.address> wrote:
Much as I'd like to be able to support the view that the BBC's standards
are falling, I have to advise that I was already being frustrated by the
BBC's apparent inability to keep to its published schedules back in the
early 1980s. This is nothing new.

If you give it some thought, it's near impossible to make a prog run 'to
the second', as some seem to want. You could, of course, always make it
shorter and fill the gaps with trails etc - allowing the next one to
start on the second. But that would bring even more complaints. ;-)

Maybe not with live shows but when shows are pre-recorded the broadcaster
knows the exact length of each show long before broadcast

Live or recorded, it is perfectly possible for broadcasters to maintain
program timing to the nearest second; we used to do it back in the
sixties, when nationwide network switching was synchronized by people
watching Western Union clocks on the walls of broadcast stations all
over the country. What has happened is that broadcasters either don't
care any more, or there is some commercial advantage to playing fast and
loose with the timing. My bet is on the latter.

Isaac
That's my feeling too. It definitely used to be much better here in the UK,
than it is now. If a programme was billed to start at 8pm, then it pretty
much did. Now, it is often several minutes late, after they have finished
showing genuine commercials, and then long trailers for forthcoming
programmes. Even the BBC is now poor, and they only have their own trailers
to factor into the equation. I really don't think that they care too much
these days, as the 'networks' are no longer formed from independent regional
stations, each with their own control centre, which had to synchronise, and
jump on and off the network, as the programming and commercial breaks
dictated. It probably is just a combination of 'no need', someone's
smart-arsed thinking about channel surfing, and the general 'don't really
care' attitude that's pervading everything we do now ...

Arfa
 
In article <9V8im.353345$jW1.145448@newsfe22.ams2>,
Arfa Daily <arfa.daily@ntlworld.com> wrote:
That's my feeling too. It definitely used to be much better here in the
UK, than it is now. If a programme was billed to start at 8pm, then it
pretty much did.
Pretty much sums it up. But in those days few had dead accurate clocks
which are so common now.

If you'd said 9 o'clock you'd have been right - that was one data point
for the network, the 9 o'clock news.

--


Dave Plowman dave@davenoise.co.uk London SW
To e-mail, change noise into sound.
 
On Sun, 16 Aug 2009 17:34:37 -0700, David Nebenzahl
<nobody@but.us.chickens> wrote:

On 8/16/2009 2:51 PM Meat Plow spake thus:

I agree that for most a minute per month is reasonable but I would
expect the same accuracy as my $29.99 Timex wristwatch which is more
like a second a month.

So that kinda begs the question of why computer mfrs. can't (or won't)
include clocks that are at *least* as accurate as a Timex, no? Wouldn't
a computah be a more compelling reason for a more accurate clock? (I
know, $$$ bottom line, right?)
Uh $$$ is my guess.

If you use the NIST SNTP server you'll be as accurate as how
frequently your SNTP client updates.

Of course, it would be nice to know one's computer would maintain
accurate time even if, god forbid, it was somehow disconnected from The
Network ...
A computer not on a network? That's blasphemy!
 
On Sun, 16 Aug 2009 22:20:34 -0700, isw <isw@witzend.com> wrote:

In article <u9ah85l1paeetprd8r7mdc61npk29cda51@4ax.com>,
Jeff Liebermann <jeffl@cruzio.com> wrote:

On Sun, 16 Aug 2009 17:34:37 -0700, David Nebenzahl
<nobody@but.us.chickens> wrote:

On 8/16/2009 2:51 PM Meat Plow spake thus:

I agree that for most a minute per month is reasonable but I would
expect the same accuracy as my $29.99 Timex wristwatch which is more
like a second a month.

So that kinda begs the question of why computer mfrs. can't (or won't)
include clocks that are at *least* as accurate as a Timex, no? Wouldn't
a computah be a more compelling reason for a more accurate clock? (I
know, $$$ bottom line, right?)

Because it's difficult. The right way to have done it would have been
to do a function call from an RTC (real time clock) every time some
application needs the actual time.

I don't agree.
That's ok. Nobody ever agrees with me. I'm used to it.

NO CLOCK, running alone, can be really accurate over the
long term. A much better way is to take the output from a crummy,
inaccurate *but low cost* clock and using an external time reference,
synthesize from it a local clock of simply amazing accuracy.
Yep. However, the IBM PC was designed in 1981. At that time, there
were 10 Phase 1 GPS birds, incomplete coverage, and $5,000 receivers.
There were overpriced WWV and WWVB receivers, and no internet. The
best you could do was something synced to the color burst frequency of
a local TV station, assuming they were on 24 hours per day. Your
brilliant hindsight is totally correct for a 21st century design, but
would be impossibly expensive in 1981.

My point was that in 1981 IBM had a reasonably accurate clock inside
the IBM PC using a fairly large 14.31818MHz xtal which could have
easily been temperature compensated. There were other computers at
the time that had a separate stabilized RTC that did it the right way.
However, the IBM PC was originally designed as a home computer, not a
laboratory instrument. The cassette tape interface should be a clue.
Using it for industrial, scientific or navigation applications was
probably never considered by the original architects. We're living
with the results today.

NTP solves the problem completely, and at a very low cost (processing
cycles instead of expen$ive hardware). NTP works even if the computer
it's running on has *no RTC* (in the hardware sense) at all. All it
needs is some sort of interrupt generated every N cycles of the
processor clock (N is any integer that produces regular interrupts a few
times a second; the actual interval is not important).

Isaac

--
Jeff Liebermann jeffl@cruzio.com
150 Felker St #D http://www.LearnByDestroying.com
Santa Cruz CA 95060 http://802.11junk.com
Skype: JeffLiebermann AE6KS 831-336-2558
 
On Sun, 16 Aug 2009 22:09:36 -0700, isw <isw@witzend.com> wrote:

You should stay away from NIST (and all other stratum one servers) to
avoid overloading their server unless you have a real need for high
precision -- REALLY high. Otherwise, find a good stratum two server to
connect to; you'll never know the difference. There are a lot; just
google. I use time.apple.com.
Isaac
You might want to look into:
<http://www.ntp.org>
<http://www.pool.ntp.org>
<http://www.pool.ntp.org/zone/us>
For NTP I use:
us.pool.ntp.org
500 servers in the US pool and growing.
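For a machine running the full NTP daemon, pointing at the pool is a few lines of /etc/ntp.conf. A minimal sketch using the classic `server` syntax (newer ntpd versions also accept a single `pool` directive):

```
# /etc/ntp.conf -- draw four servers from the US pool
# (the 0/1/2/3 prefixes resolve to different hosts, giving ntpd
#  several sources to compare; "iburst" speeds up the initial sync)
server 0.us.pool.ntp.org iburst
server 1.us.pool.ntp.org iburst
server 2.us.pool.ntp.org iburst
server 3.us.pool.ntp.org iburst
```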


--
Jeff Liebermann jeffl@cruzio.com
150 Felker St #D http://www.LearnByDestroying.com
Santa Cruz CA 95060 http://802.11junk.com
Skype: JeffLiebermann AE6KS 831-336-2558
 
In article <isw-64DD42.22124416082009@[216.168.3.50]>,
isw <isw@witzend.com> wrote:

Functionally impossible. By adding money, you can reduce the drift rate
but you can't make it zero. Period. Just use NTP.
Or, if you prefer something stand-alone which will give you a good time
reference if your network connection is down: use GPS.

It's not hard to find a GPS receiver which has a decent "pulse per
second" output on its serial port, as well as standard NMEA sentences.
Software packages are available which will monitor the NMEA output and
the PPS signal, and synchronize your PC's clock very accurately.

The better receivers (those specifically intended for timing purposes)
synchronize the PPS pulse edge with the start-of-second to within a
small number of nanoseconds. The limiting factor in your PC's clock
accuracy is likely to be the speed at which it can respond to the
change in the PPS signal (which typically requires taking an
interrupt).

A good timekeeping package should be able to compare an averaged PPS
timing (over a period of some minutes) with your system's inherent
clock drift, and figure out how to jiggle the internal clock so that
the drift averages down to zero. You should end up with time accuracy
good to within a few milliseconds.

On Linux, you can do this by running "gpsd" to monitor the GPS, and
having it feed timing information into the NTP daemon. In effect,
your GPS then serves as a new time source to the local NTP timing
pool... it's very accurate in the long term but somewhat prone to
short-term jitter. You can, if you wish, configure the ntp daemon to
use both the local GPS time source, and one or more network time
servers... this will give you redundancy in both directions.
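A minimal sketch of the ntp.conf side of that gpsd arrangement (the 127.127.28.x addresses are ntpd's standard shared-memory refclock driver, type 28; the `time1` fudge value below is a placeholder you would calibrate for your own receiver):

```
# /etc/ntp.conf -- take time from gpsd via the shared-memory refclock
server 127.127.28.0 minpoll 4 maxpoll 4         # NMEA time from gpsd (SHM unit 0)
fudge  127.127.28.0 time1 0.420 refid GPS       # serial-delay offset: calibrate this!
server 127.127.28.1 minpoll 4 maxpoll 4 prefer  # PPS from gpsd (SHM unit 1)
fudge  127.127.28.1 refid PPS
server us.pool.ntp.org iburst                   # network fallback
```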

And *stay away* from
the stratum one servers like NIST; they have better things to do than
keep your computer's clock on time.
Correct. Use "pool.ntp.org", or one of the regional subdomains
thereof (e.g. "us.pool.ntp.org"). These domains point to a list
of well-connected, relatively high-stratum servers which have
volunteered to serve as public NTP resources.

--
Dave Platt <dplatt@radagast.org> AE6EO
Friends of Jade Warrior home page: http://www.radagast.org/jade-warrior
I do _not_ wish to receive unsolicited commercial email, and I will
boycott any company which has the gall to send me such ads!
 
On 8/16/2009 10:12 PM isw spake thus:

Functionally impossible. By adding money, you can reduce the drift rate
but you can't make it zero. Period.
I don't care about zero. One minute a month is plenty accurate enough
for me.

Just use NTP. And *stay away* from the stratum one servers like NIST;
they have better things to do than keep your computer's clock on
time.
You're admonishing me not to use NIST? Why?

After all, they offer this service to me. See
http://tf.nist.gov/service/its.htm:

The NIST Internet Time Service (ITS) allows users to synchronize
computer clocks via the Internet. The time information provided by
the service is directly traceable to UTC(NIST). The service responds
to time requests from any Internet client in several formats
including the DAYTIME, TIME, and NTP protocols.

So why shouldn't I use them?

Keep in mind that I use this service *at most* 3 or 4 times a *year*.


--
Found--the gene that causes belief in genetic determinism
 
David Nebenzahl wrote:
You're admonishing me not to use NIST? Why?
.....
Keep in mind that I use this service *at most* 3 or 4 times a *year*.
David, it depends upon how you use it. If you use Windows' or MacOS's
automatic time sync or *NIX's NTPDATE, you only access it occasionally.
Windows and Mac access it once a week, NTPDATE does it whenever it is invoked,
usually when you boot your computer.

If you are running the *NIX NTP daemon (including MacOS's) or a third-party Windows
time sync program, your computer is in frequent contact with the time server.
In that case, it would be a good idea not to use those servers as they are
heavily loaded down.

For once in a week sync of one computer, you can use just about any server
without worry about it being overloaded or adding any additional load.

If you have multiple computers networked together, that is a different story.

Geoff.


--
Geoffrey S. Mendelson, Jerusalem, Israel gsm@mendelson.com N3OWJ/4X1GM
 
On Mon, 17 Aug 2009 11:37:35 -0700, David Nebenzahl
<nobody@but.us.chickens> wrote:

So why shouldn't I use them?
Keep in mind that I use this service *at most* 3 or 4 times a *year*.
I use 8 hours, which guarantees at least one update during working
hours.

You're fine, but there are NTP abuse problems among application
writers and device vendors:
<http://en.wikipedia.org/wiki/NTP_server_misuse_and_abuse>

Since there doesn't seem to be a way to stop someone from flooding the
NTP servers with excessive requests, the SNTP group came up with the
"kiss of death" packet, which tells the sender to shut up:
<http://tools.ietf.org/html/rfc4330#page-20>
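The relevant fields are easy to pick out of a reply packet. A minimal RFC 4330 parser sketch in Python (illustration only — a real client also checks the leap-indicator and mode bits, and round-trip delay):

```python
import struct

NTP_UNIX_DELTA = 2208988800  # seconds from 1900-01-01 (NTP epoch) to 1970-01-01

def parse_sntp_reply(packet: bytes):
    """Parse a 48-byte SNTP (RFC 4330) server reply.

    Returns (stratum, kiss_code, transmit_unix_time); kiss_code is None
    unless the server sent a stratum-0 "kiss-of-death" packet, in which
    case the Reference Identifier (bytes 12-15) carries an ASCII code
    such as RATE or DENY telling the client to back off.
    """
    if len(packet) < 48:
        raise ValueError("SNTP reply must be at least 48 bytes")
    stratum = packet[1]
    kiss = packet[12:16].decode("ascii", "replace") if stratum == 0 else None
    # Transmit Timestamp: bytes 40-47, 32-bit seconds + 32-bit fraction
    secs, frac = struct.unpack("!II", packet[40:48])
    return stratum, kiss, secs - NTP_UNIX_DELTA + frac / 2**32
```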
 
On Sun, 16 Aug 2009 22:12:44 -0700, isw <isw@witzend.com> wrote:

Functionally impossible. By adding money, you can reduce the drift rate
but you can't make it zero. Period. Just use NTP. And *stay away* from
the stratum one servers like NIST; they have better things to do than
keep your computer's clock on time.
Why bother with an internet solution? It won't work for laptops,
PDA's, stand alone PC devices, and such. A WWVB receiver is cheap
enough that it's included inside weather stations, alarm clocks, wrist
watches, and yes.... computahs:
<http://www.meinberg-usa.com/usb-radio-clocks/23-40/wwvb51usb---wwvb-radio-clock-for-the-universal-serial-bus--usb-.htm>
<http://www.atomictimeclock.com/radsynhome.htm>
<http://www.beaglesoft.com/radsynhome.htm>
<http://www.timetools.co.uk/products/mps-time-server.htm>
etc...
The only problem I can see with building one into a PC is the RF noise
generated by the PC will probably trash the receiver. That's what
long extension cords and external antennas are good for.

You can also sync to the local CDMA cellular provider, although the
prices are close to astronomical:
<http://www.beaglesoft.com/celsynhome.htm>

Got $10.70? Build your own:
<http://search.digikey.com/scripts/DkSearch/dksus.dll?Detail&name=561-1014-ND>
 
On 8/17/2009 6:44 PM Jeff Liebermann spake thus:

Got $10.70? Build your own:
http://search.digikey.com/scripts/DkSearch/dksus.dll?Detail&name=561-1014-ND
Interesting.

But once you've picked up WWV, what do you do with that signal to derive
a time base from it? (I guess you gots to know something about the
signal, which I don't.) Pretty simple?
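Pretty simple, as it happens — the module Jeff linked receives WWVB (60 kHz), which sends one bit per second, encoded in how long the carrier power is dropped at the top of each second: roughly 0.2 s means binary 0, 0.5 s means binary 1, and 0.8 s is a position marker. Minutes are sent as BCD in seconds 1-8 of each one-minute frame, hours in seconds 12-18. A hypothetical decoder sketch for just those two fields:

```python
def classify(width):
    """Turn one second's low-power pulse width (seconds) into a symbol."""
    if width < 0.35:
        return 0
    if width < 0.65:
        return 1
    return "M"   # position marker (seconds 0, 9, 19, 29, 39, 49, 59)

def decode_minute_hour(widths):
    """Decode (hour, minute) from a 60-entry list of pulse widths,
    assumed already aligned so widths[0] is the frame marker."""
    b = [classify(w) for w in widths]
    # BCD bit weights per WWVB's time code layout
    minute = 40*b[1] + 20*b[2] + 10*b[3] + 8*b[5] + 4*b[6] + 2*b[7] + b[8]
    hour = 20*b[12] + 10*b[13] + 8*b[15] + 4*b[16] + 2*b[17] + b[18]
    return hour, minute
```

A real decoder also has to find the frame start (two consecutive markers), reject noisy pulses, and read the day-of-year, year, DST, and leap-second fields later in the frame — but the principle is just this.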


--
Found--the gene that causes belief in genetic determinism
 
In article <508c2b53d8dave@davenoise.co.uk>,
"Dave Plowman (News)" <dave@davenoise.co.uk> wrote:

In article <9V8im.353345$jW1.145448@newsfe22.ams2>,
Arfa Daily <arfa.daily@ntlworld.com> wrote:
That's my feeling too. It definitely used to be much better here in the
UK, than it is now. If a programme was billed to start at 8pm, then it
pretty much did.

Pretty much sums it up. But in those days few had dead accurate clocks
which are so common now.
Actually, they were very close.

Western Union clocks all over the country were almost always all synched
to within a second or so. The technique was to use clocks (those big
things with the red sweep hand you may have seen in a broadcast studio)
that were basically pretty good, and to synch them to a remote timebase
from time to time.

The clocks were pendulum timed and electrically wound (couple of big dry
cells inside), and every one of them had a leased-line connection to the
nearest WU office, and from there to a national site.

Every 12 hours (AFAIR), Western Union sent a pulse down the wire that
"jammed" the sweep hands of all those clocks to 12 (and illuminated a
little red light behind the clock face so you could see that your time
was being corrected). I don't think the minute and hour hands were
controlled. It was up to the engineering personnel in each station to
twiddle their clocks' pendulums so the clocks could run within a second
or two in 12 hours -- not at all difficult for a good pendulum clock.

So as long as the accounting department paid the WU bill, you could join
your network or insert a local commercial with almost perfect accuracy.

Isaac
 
