NetTime has no effect on bootup
jim lesurf (2082) 1438 posts |
I’m using NetTime 0.43 here. I just tried using it again via the !Alarm ‘try’ interface. I used ‘try’ twice and checked in a taskwindow to get the results shown below.

*nettime
File 'nettime' not found
*help nettime
==> Help on keyword NetTime
Module is: NetTime 0.43 (28 Nov 2013)
Commands provided: NetTime_Kick NetTime_Status NetTime_PollInterval
*nettime_status
Current time:    Tuesday, 12 August 2014 12:55:29.60
Status:          Sleeping
Last adjustment: 35 seconds ago
Last delta:      124.341981 seconds slow
Last server:     ntp1.exa-networks.co.uk
Last protocol:   SNTP
Poll interval:   30 minutes
*nettime_status
Current time:    Tuesday, 12 August 2014 12:55:45.06
Status:          Sleeping
Last adjustment: 5 seconds ago
Last delta:      123.436941 seconds slow
Last server:     pool.ntp.org
Last protocol:   SNTP
Poll interval:   30 minutes

Showing a residual error of over 2 mins. Which I think was pretty much the error before I did the first try. Jim |
jim lesurf (2082) 1438 posts |
Ah! I think I may have twigged the nature of the problem. The first factor is the need for the time to be out by > 127 sec before the clock is set (which seems a crazily long time to me, TBH). The second is that – using !Alarm to illustrate – you have to use both ‘try’ and ‘set’ to get the time setting to function. This leaves the NetTime module running and active. But if you don’t want NetTime running in the background you’re then faced with having to use !Alarm again to “set manually” and then “set”. That means that to stop using NetTime you have to manually reset the clock. To make things worse, the !Alarm values are shown only in hours and minutes. No seconds. So it is a challenge to set the time again without generating a multi-second error, unless you have a clock display showing seconds and time your use of “set manually” and “set” to match as exactly as you can. Which makes this look like a user interface difficulty for mere users who want to set the time accurately once but not have NetTime running in the background. Jim |
Dave Higton (1515) 3534 posts |
Me too. My suggested threshold is 5 seconds. Maybe even less. If the time is within the threshold, and thus the time is not corrected instantly, I’d like to see some kind of comforting message come up to say that it has done something, really it has, and that we shouldn’t panic. There really is no excuse for the time to be so far out. Don’t blame drift until you’ve put a reasonable tolerance on the clock’s timebase (50 parts per million would be awful; realistically it should be much closer than that) and calculated how far it can drift between synchronisations, e.g. almost 24 hours of the host not being switched on. My watch cost £9.99 new, and it drifts less than 1 second per month. The technology is the same. |
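To put a number on that worst case, a minimal back-of-envelope sketch; the 50 ppm tolerance and near-24-hour gap are the figures from the post above, and the code itself is just an illustration:

```c
#include <stdio.h>

/* Worst-case drift between synchronisations, using the figures
   quoted above: a 50 ppm timebase tolerance (already described as
   awful) and almost 24 hours with the host switched off. */
int main(void)
{
    double tolerance_ppm = 50.0;          /* parts per million */
    double gap_seconds   = 24.0 * 3600.0; /* time between syncs */

    double drift = gap_seconds * tolerance_ppm / 1e6;
    printf("Worst-case drift: %.2f seconds\n", drift); /* 4.32 s */
    return 0;
}
```

Even that pessimistic case comes to under 5 seconds of drift, nowhere near the 127 seconds NetTime currently demands before stepping the clock.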
jim lesurf (2082) 1438 posts |
To me, 1 sec would be a good figure. However I don’t know enough about the processes NetTime involves to know what kind of value would be feasible. And something around 5 sec seems fine as a reasonable compromise for most desktop users.

However I confess I have the feeling that NetTime may fall between two stools. On one hand: I can’t see that most ordinary desktop users will care enough to want the clock rate tweaked. Whereas I can see that can have drawbacks for some users. On the other hand: a process that cheerfully leaves many seconds of offset seems unsatisfactory for precision users – particularly as NetTime itself gives no warning or says “If you want the time set, deliberately make it wrong by many mins before using NetTime!” And the third leg of the stool: precision users may well not want the clock rate or time altered without knowing exactly what changes were made, and when, and what the rate/time values were beforehand. We get a ‘delta’ for the time change. But no info on the rates. And the ‘delta’ doesn’t help much if the actual error remains many secs but below 127 secs.

People who may want to use the computer for precision processes that have to issue control signals or capture values in a well-defined and regular way may be confused by an automatic application. Whereas most ‘ordinary’ users simply may not have any need for the clock rate to be tweaked.

I managed (I think!) to set my computer’s time to within a sec or so by using NetTime, and maintained this after NetTime was no longer selected. But this is tricky as !Alarm and the !Boot time config only show the manual value in hh mm, not ss. So it takes a carefully judged ‘manual’ time set to coincide to a second or two. And I still don’t even know if RTC rate tweaks persist over a powerdown-reboot cycle. Without being able to monitor this I don’t know if that would make things worse or better! :-/ Nor do I know how large these tweaks are.

Personally I have the feeling that NetTime is trying to be too clever for a general purpose method. It would be far simpler and more sensible IMHO to just have a “once at bootup” process that sets the clock and then doesn’t leave any module or process to tweak or re-check. And any attempt to tweak the clock rate should only do so on the basis of having the user see the start and end times, etc, in a user-chosen calibration.

Having got the time about right I think I’ll avoid NetTime and leave it to others. Maybe I’ll need to learn how to test and calibrate the RTC in case my trying it has made the clock run faster/slower than before. Until then I think I’ll just use wget to check the time from the US Navy server (until I find a suitable NPL equivalent) as that avoids me having to learn about the fancy protocols. :-)

BTW The main start-point of this for me was asking about documentation for NetTime. The response has been in effect the sadly predictable “the code is the documentation”. Which actually doesn’t help me or most mere users. And was no help when I kept getting problems and nothing others said showed why.

In user interface terms, one problem here is the way the use of ‘radio’ buttons in !Alarm and the !Boot interface forces the user to reset the time ‘manually’ just to get NetTime to stop doing so. Given the hh:mm interface that can mean screwing up the time unless you take other steps. There should be a choice of “neither” that will stop NetTime being used without being forced to set the time again from a manual value you can’t even specify better than a minute. 
Another problem is that “try” may say “synch successful” but this doesn’t seem to mean the clock has been set! You apparently have to click “set” for that. The interface doesn’t make this clear. I was confused by this for a long period as I assumed that “synch successful” meant the clock was being set, and that by using “try” for NetTime and then closing without a “set” I could set the clock without having NetTime continue from then on. As it is, the interface seems to assume no-one will want to use NetTime just to set the time as a one-off, user-chosen event. However that’s what I want, and I wonder if others have been tripped up by this. Jim |
Dave Higton (1515) 3534 posts |
There is no way to alter the timebase of the RTC hardware. Ergo, its rate can’t be changed when the host isn’t running. Even when the host is running, all that can be altered is a soft copy of the time. When you have done that, you have bought and opened a can of worms. Two different versions of the time on the same host, with the relationship between them deliberately concealed from the user, and the ability to set the time (whatever that means) severely limited. It’s an attempt to be too clever for its own good. It ends up looking stupid. |
jim lesurf (2082) 1438 posts |
OK, so that means I don’t have to worry that I’ve changed the clock rate in an enduring way. But from what you say, using NetTime means in effect that the user-facing clock will be running at a rate different to the hardware RTC during a session. I wish I understood the time protocols, etc. If I did, I’d write a modern equivalent of !FreeTime. As it is, if no-one else does so, I may hack something together using wget and a suitable clock source to let the user see a ‘net time’ and set their computer clock from it if they wish. At present I’m doing this by comparing the results wget shows in a taskwindow with my local computer clock. Not elegant programming, but at least under user control. :-) Jim |
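For anyone wanting the same kind of one-off check without wget, the simplest of the “fancy protocols” is the old RFC 868 Time Protocol, one of the two that NetTime itself supports. Below is a minimal sketch assuming a BSD-style sockets API; the server name is only an example, error handling is minimal, and the protocol is accurate to the nearest second at best:

```c
/* Minimal RFC 868 "Time Protocol" query over TCP: the server sends
   4 big-endian bytes counting seconds since 1900. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <netdb.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;

    /* Example server only; any host offering port 37 will do. */
    if (getaddrinfo("time.nist.gov", "37", &hints, &res) != 0)
        return 1;

    int sock = socket(res->ai_family, res->ai_socktype, 0);
    if (sock < 0 || connect(sock, res->ai_addr, res->ai_addrlen) < 0)
        return 1;

    unsigned char buf[4];
    if (read(sock, buf, 4) != 4)
        return 1;
    close(sock);
    freeaddrinfo(res);

    /* Seconds since 1900; subtract the 1900..1970 offset
       (2208988800 s) to get a Unix-epoch value. */
    unsigned long since1900 =
        ((unsigned long)buf[0] << 24) | ((unsigned long)buf[1] << 16) |
        ((unsigned long)buf[2] << 8)  |  (unsigned long)buf[3];
    time_t unixtime = (time_t)(since1900 - 2208988800UL);

    printf("Server time: %s", ctime(&unixtime));
    return 0;
}
```

Comparing that printout against the local clock gives exactly the user-controlled, one-shot check described above, with no background module left running.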
Rick Murray (539) 13851 posts |
Could we compromise? I can understand the idea of gradually shifting the system time so as not to freak out applications that make use of the clock and would handle a sudden change badly… …but this makes absolutely no sense at all if/when the time is seen to be incorrect at boot up. No matter how clever NetTime thinks it is, there is no possible way that it can speed up or slow down the system ticker in any meaningful way from boot up, because there is no previously recorded value, so it won’t be capable of detecting whether the clock is running quickly or slowly. By the way, where do we get “the system time is 0.0004857273 milliseconds slow” from a timer with centisecond resolution? Anyway – we need a compromise. We either need NetTime to absolutely forcibly set the real time if the system uptime is less than 30 seconds, or – better – to have a “-force” parameter to NetTime_Kick that will do this. Let the user decide, instead of pretending to provide a service and then failing to do so over a concern that some unspecified entity might be upset by it. Because, trust me, a fed up user adjusting Sys$Time manually is going to be just as jarring to said piece of software… |
Andrew Rawnsley (492) 1445 posts |
When discussing it with sprow, apparently cmos is used to store time from session to session so that at cold boot, it’ll immediately restore the clock to the last stored time, before attempting any kind of time syncing. I guess this should avoid the whole 1970 issue, provided NetTime has been used previously. Internet time is then checked for a reference point, and the clock is then jumped forwards, or gently adjusted, depending on the result. I’d also be inclined to think that 2 minutes (127 sec) is a little on the large side – 31 seconds or even 15 would seem a suitable power of 2. That said, there may be an issue at play here, such as the time it takes for a TCP/IP timeout, that affects this choice. I don’t know who designed this in the first place, so I can’t comment on the logic. |
Rick Murray (539) 13851 posts |
Booted my Pi after a day switched off (I was out) and it has brought the time up correctly and retained the CEST timezone. :-)
md.neolane.net? Never seen that before. [update: <to self>look at your browser’s URL bar, stupid!</> :-) it redirects to http://www.adobe.com/fr/solutions/campaign-management.html] I wonder where it got that address from? Especially given as NetTime$Server says pool.ntp.org !? |
Steve Fryatt (216) 2105 posts |
There is no way to alter the timebase of the RTC hardware. Ergo, its rate can’t be changed when the host isn’t running. Even when the host is running, all that can be altered is a soft copy of the time. Not really. Since RISC OS 2, I’m not aware that there’s been a way for applications to read the hardware clock: according to the PRMs all the OS_Word calls just read the “soft” copy of the clock. The soft copy is synced from the RTC (if present) on a reset, and after that it runs independently. Pre-NetTime, ISTR that the soft-copy would deviate from reality at quite a large rate with no control. Now it may also deviate, but in line with real time. |
Steve Fryatt (216) 2105 posts |
The ntp.org pool will just point you at one of their members’ servers, depending on your location, what subdomain (if any) you specify, and how loaded they are. My local timeserver (to which I point NetTime) is currently looking at the following as a result of being configured to
|
Steve Fryatt (216) 2105 posts |
Not for me. Setting the time at boot only means that you know the clock was correct then, but doesn’t tell you anything about where it is after a couple of hours at the mercy of the “soft”-clock’s timekeeping. As soon as machines start to be networked together, keeping their clocks in step becomes more useful than not.
I seem to recall pointing you at the documentation? |
jim lesurf (2082) 1438 posts |
I seem to only recall you pointing me at source code. If NetTime suits you now, fine for you. But it seems quite clear from this thread that I’m not the only person who thinks the current setup needs changing. And if your concern is that machines on a local network need synching together, then I suspect that would be better done by synching them together. Not by having each one fetch separately at different times from a distant ‘pool’. If precise synching is needed, then it is the machines that need synching together that should be synched together as directly as possible. Not by using randomly different distant references via connections whose relay delays also vary uncontrollably. And of course none of the above changes what I said wrt the user presentation and interface, e.g. the problem of keeping a decent set time when you want to stop using NetTime via the current GUI. Jim |
Sprow (202) 1158 posts |
Steve Fryatt has said most of what others have missed. As ever, it’s worth considering the full range of use cases, not just the one presented on the thing in the box in front of you.

Case 1: There’s no RTC hardware at all. You’ll be solely running on the soft clock, incremented at 100Hz from some timer or other, which is initialised to 1970 (however, the Pi uses the last shutdown time instead by poking this value over a magic location in the ROM image, yuck!).

Case 2: There’s no RTC hardware, and NetTime is enabled. The soft clock is initialised to 1970 until NetTime can get a reply from the time server.

Case 3: There is RTC hardware. The kernel reads this after all the modules have started (effectively doing an OS_ResyncTime), then leaves RTCAdjust to slew the soft clock by changing the ticker rate. The adjuster reads the hardware clock hourly and recalculates the adjustment needed with a simple control loop. The RTCAdjust module has been in RISC OS since 1989, so if you’ve used RISC OS in the last 25 years your clock has been slewed and you’ve just never noticed! The adjustment is needed because the RTC hardware is assumed vastly more accurate than the 100Hz timer, either because the timer skips a count when interrupts are disabled or because the crystal or PLL running it is terrible. For reference, a really bad RTC crystal has ~30ppm initial accuracy (not accounting for ageing and temperature). Additionally, some RTC chips don’t have a full 4 digit year, so any missing digits are augmented by a value held in the year CMOS locations – eg. the Risc PC has a 2 bit year, and gets the other 14 bits this way.

Case 4: There is RTC hardware, and NetTime is enabled. As with (3), the soft clock is initialised from the hardware RTC first and slewing will begin. NetTime may then get an update from the time server and will try to slew the soft clock too (subject to some priority rules, since there are now 3 views of the time to contend with).

Historically RTCAdjust had to contain a second copy of a (read only) RTC driver, which meant it kept having to be updated each time a new HAL came along with another clock chip. More recently it has been replaced by the RTC module, which just asks the HAL.
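An illustrative sketch of the kind of “simple control loop” described in case 3 – invented for illustration, not the actual RTCAdjust source; every name and constant here is made up:

```c
/* Recalculate the soft-clock tick rate from its error against the
   hardware RTC, in the spirit of RTCAdjust's hourly control loop.
   Hypothetical sketch: none of these names exist in the real module. */

#define NOMINAL_TICK_HZ 100.0   /* RISC OS centisecond ticker */
#define MAX_SLEW        0.01    /* cap the adjustment at ~1% */

double recalc_tick_rate(double soft_clock_secs,  /* soft clock reading   */
                        double rtc_secs,         /* hardware RTC reading */
                        double interval_secs)    /* time to next resync  */
{
    double error = rtc_secs - soft_clock_secs;   /* +ve: soft clock slow */

    /* Proportional correction: spread the error over the coming
       interval, clamped so time never appears to run more than ~1%
       fast or slow (and never runs backwards). */
    double slew = error / interval_secs;
    if (slew >  MAX_SLEW) slew =  MAX_SLEW;
    if (slew < -MAX_SLEW) slew = -MAX_SLEW;

    return NOMINAL_TICK_HZ * (1.0 + slew);
}
```

The clamp is the important design point: applications see time running slightly fast or slow rather than seeing it jump, which is exactly the behaviour being debated in this thread.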
You’re very lucky, that’s 0.4ppm.
Because the centisecond timer has always been derived from a clock of at least 1MHz, even on the BBC Micro. More recent platforms improve on this with tens-of-MHz ticks divided down to 1cs.

BTW The main start-point of this for me was asking about documentation for NetTime. The response has been in effect the sadly predictable “the code is the documentation”.

This post included a link to the NetTime spec, all 2030 words of it.
Yes, a step change in time is the last resort, when outside 163.84s error I think. Slewing can be up to ~1% which is 10000ppm (contrast that with the crystal accuracy above). Over 1h you’d catch up 36s. To ever accumulate 163s of error in the first place the RTC would have to have been at the extreme of its 30ppm range for over 2 months, so in practice the error on power up will be trimmed out in much less than 1h.
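Those figures check out; a quick arithmetic sketch using the numbers quoted above (the 163.84s threshold, ~1% slew, and 30ppm crystal come from the post, the code is just illustration):

```c
#include <stdio.h>

/* Sanity-checking the figures above: how fast a ~1% slew trims an
   error, and how long a worst-case 30ppm RTC takes to build one up. */
int main(void)
{
    double threshold = 163.84;   /* step-change threshold, seconds */
    double slew_rate = 0.01;     /* ~1% maximum slew */
    double rtc_ppm   = 30e-6;    /* worst-case RTC crystal tolerance */

    printf("Catch-up per hour:     %.0f s\n", slew_rate * 3600.0);
    printf("Time to trim %.2f s: %.1f h\n",
           threshold, threshold / slew_rate / 3600.0);
    printf("Days to accumulate it: %.0f\n",
           threshold / rtc_ppm / 86400.0);
    return 0;
}
```

That gives 36s of catch-up per hour, about 4.6 hours to trim out a full 163.84s error, and roughly 63 days – the “over 2 months” – for a 30ppm crystal to accumulate it.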
I don’t think so. The RTC hardware is assumed king, so RTCAdjust never wrote to the clock. NetTime did (and still does) but only on Service_PreReset. Without wanting to get too philosophical: when you manually set the time, how long passes between your eyeballs seeing the number on your watch (or hearing the pip of the speaking clock) and pressing a mouse button? Does the time apply when you press or release the mouse button? Have you accounted for the time taken to write or remove any configuration file, plus the IIC transaction to the chip, and the time taken to convert the string into the 40 bit counter? I’d say anyone claiming their PC clock is more accurate than 1s is just kidding. |
Jeffrey Lee (213) 6048 posts |
Not necessarily; the RTC used with OMAP3 machines allows you to fine-tune the clock rate. Which is just as well, considering that the RTC’s accuracy is pretty abysmal. It’s a feature that we’re not using at the moment, but it would probably make sense to add it to the RTC module.
True. On startup NetTime can easily determine whether the hardware RTC has suffered any clock drift and can adjust the frequency of the soft clock in order to gently fix that drift (plus reset the hardware RTC). But once that drift has been corrected it should reset the soft clock frequency to default because it doesn’t yet have enough data to make any assumptions about the accuracy of the soft clock.
Because on all platforms the RISC OS centisecond timer is derived from a timer running at 1MHz or greater. HAL calls allow the code to peek at the value of the timer and so get a result that’s accurate to within about a microsecond. Unfortunately hardware RTCs generally only expose the time accurate to the nearest second or centisecond, so making the hardware RTC sync precisely with network time is a bit trickier (Wait for the soft clock to roll over to the next second and then reprogram the hardware RTC?)
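That last idea might look something like the sketch below; the two helper functions are hypothetical stand-ins for whatever the kernel and RTC module actually provide:

```c
#include <stdint.h>

/* Hypothetical helpers, standing in for real kernel/RTC module calls. */
extern uint64_t read_soft_clock_cs(void);       /* soft clock, centiseconds */
extern void     rtc_write_seconds(uint32_t s);  /* program the hardware RTC */

/* Wait for the soft clock to cross a whole-second boundary, then
   write that second to the RTC, so the two agree as closely as the
   RTC's one-second granularity allows. */
void sync_rtc_on_rollover(void)
{
    uint32_t next_second = (uint32_t)(read_soft_clock_cs() / 100) + 1;

    /* Busy-wait for the rollover; a real implementation would hook
       a timer event rather than spin. */
    while (read_soft_clock_cs() < (uint64_t)next_second * 100)
        ;

    rtc_write_seconds(next_second);
}
```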
Yes, it looks like making some changes would be best. It’s probably worth checking what thresholds Linux and BSD use for their RTC adjustments, as I suspect they’re much better at this kind of thing than we are. FWIW the NetTime spec mentions that of the two protocols supported, Time Protocol only gives the time with an accuracy of one second, while NTP results are typically accurate to within 1-50ms. |
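For reference, the BSD/Linux mechanism for gradual correction is adjtime(2), which slews the clock toward a requested delta rather than stepping it; ntpd’s default step threshold is 128ms, with anything smaller always slewed. A minimal usage sketch (needs appropriate privileges to actually take effect):

```c
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    /* Ask the kernel to gradually skew the clock 2 seconds forward;
       any previously pending adjustment is returned in olddelta. */
    struct timeval delta = { .tv_sec = 2, .tv_usec = 0 };
    struct timeval olddelta;

    if (adjtime(&delta, &olddelta) != 0) {
        perror("adjtime");          /* typically EPERM without privileges */
        return 1;
    }
    printf("Previously pending adjustment: %ld.%06ld s\n",
           (long)olddelta.tv_sec, (long)olddelta.tv_usec);
    return 0;
}
```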
jim lesurf (2082) 1438 posts |
I assume that page may make sense to experienced programmers working on the topic. But I’m afraid that to me it doesn’t look much like a page of user documentation explaining how to use NetTime, etc, avoiding the snags I’ve described. And if you’re ever going to add the ability to genuinely alter the RTC clock rate in a way that persists over power down-up cycles I’d strongly recommend ensuring this also makes a user-readable record in a file somewhere of what the before/after rates were and when they were changed. Otherwise you could be causing real problems for some kinds of precision-timed real-world processes. Jim |
Jeffrey Lee (213) 6048 posts |
Surely you’d want to log the values regardless of whether the RTC is being adjusted? If you aren’t adjusting the RTC then the RTC will continue to drift further and further out of sync, and without a log file listing RTC time vs. a reliable time source you won’t have a clue how far out of sync it was at any point in time. For all you know it could run 10x faster than normal for a week, then run at normal speed for a week, then run backwards for a week, and then jump forwards 10 years due to a bug in the OS. |
Steve Fryatt (216) 2105 posts |
I seem to recall pointing you at the documentation? I remember the following exchange:
If NetTime suits you now, fine for you. But it seems quite clear from this thread that I’m not the only person who thinks the current setup needs changing.

I think Sprow covers most of the issues above. In short: It’s A Bit More Complicated Than That, and always has been, even though no-one seems to have been worried until now.

And if your concern is that machines on a local network need synching together, then I suspect that would be better done by synching them together. Not by having each one fetch separately at different times from a distant ‘pool’. If precise synching is needed, then it is the machines that need synching together that should be synched together as directly as possible. Not by using randomly different distant references via connections whose relay delays also vary uncontrollably.

I do, thanks. NetTime syncs with a local NTP server, which in turn syncs to uk.pool.ntp.org — as I explained above. |
Rick Murray (539) 13851 posts |
@ Sprow:
Case 5: The user knows the time is wrong. The module entrusted to rectify the situation intentionally leaves it wrong (only maybe a little less wrong) by design – while providing no obvious indication to the user that the time is not being immediately corrected when the user asks it to be done.
I struggle to believe a cheap watch can manage better than one second a month. I have an expensive watch that is maybe that good, got it a long time ago but I don’t wear watches these days.
Well, by calculating what real time should be from the NetTime offset, and pressing the mouse button slightly ahead of when I wanted to, I managed to get within 0.05s of reality. Is that good enough for you? ;-)
Somebody with way too much free time could probably examine the code and give you an answer. However I rather suspect the answer would be outside the resolution that NetTime works in, given the speed ARM processors run at these days, and the cache, even predictive execution, and… |
Indeed. Back before I had internet, I used to update my PC’s clock from the teletext TSDP. My code didn’t even bother adjusting the time if it was within 10s because, with the latencies of satellite transmission, the decoding of teletext, the transfer of data over a parallel port IIC link, the interpretation of the data… it wasn’t worth getting all hung up over a minor discrepancy. My argument is not “how accurate” the time is (after all, we’re taking it on trust that the network time server is correct), but rather that when explicitly asked to set the “correct” time, the NetTime module would rather fart around doing something complicated instead of just saying “okay, the time at the third meep will be….”. @ Jeffrey: |
I don’t think there is the impetus to make an accurate RTC these days, as every major operating system in the last decade auto-syncs at least once a day to ensure the clock is correct. |
+/- a second is good enough for me. ;-)
Hey! I know that! A billion years ago, an office I worked in had a “smart office fax” that behaved like that. It used to tickle me that we had to call out the service engineer because it crashed on Feb 29th. So we ended up sending out stuff on March 1st for two days of the year on leap years. It was eventually thrown out (and I stripped it for useful parts) because on 1st January 2000, it reverted to 1980. I should add, however, that I am currently listening to Dido on a little MP3 player with a clone chip inside which is a Z80 clocked at ~25MHz hooked to a DSP. The wildly amusing thing? While it has a DSP and some GPIO (memory mapped?) for the buttons, the Z80 core is pretty much a straight Z80 right down to a 64K addressing space. [datasheet] Aaaaanyway… @ Jim: |
Before getting too hung up on great accuracy, I would ask two questions: 1, What are these applications that require such accuracy (and are they likely to be using a domestic operating system?)? And: 2, More importantly, perhaps: if time accuracy is required down to centi/milliseconds, how can you be certain that the current time is correct? You can, I would imagine, bounce your request off multiple time servers to try to accumulate all of the differences – but in reality you only have the time as presented by the servers you use. For most grockles and muggles, within a second (or so) is likely to be sufficient. For precision? Well… I would refer you to a GPS receiver, which contains custom hardware that depends upon exact timings and may take many minutes to synchronise “from cold”. So if you told me your clock is accurate to a thousandth of a second outside of a laboratory, I would say “prove it”. |
Dave Higton (1515) 3534 posts |
It does. Like the man said, I’m very lucky. There is a difference between watches and computer RTCs: watches are often trimmed, RTCs rarely are. But in any case, it’s luck. Just because something can be 10 ppm out, it doesn’t mean it will be. One other small thing: my watch spends two thirds of its time in a temperature controlled environment. |
Chris Hall (132) 3559 posts |
(memory mapped?) unlikely. The Z80 had IO ports built in to its instruction set e.g. IN A,&20. |
Rick Murray (539) 13851 posts |
Thanks for the clarification. I was one of the unpopular kids with a BBC Micro. The other kids had Z80s inside their machines. ;-) Out of interest, how was the I/O hooked to the processor? Was there an I/O bus or was it somehow within the normal addressing range but with some sort of signal to say “this is I/O”? Didn’t use my Pi yesterday. Went south of the Loire (we like the Vallet/Clisson area) and since my summer holiday is almost over… Was a lovely bright sunny day and as soon as we crossed the Loire northwards – bang! Clouds, dreary, on-and-off torrential rain. It was kind of galling looking back and seeing blue skies with fluffy clouds and thinking that there might be some truth in the idea that “the weather changes south of the Loire”. Anyway, power up today and…
The last time I looked (11th), I was

Chris @ CJE: Does your RTC board have a way of monitoring the state of the button cell, or do I just need to replace it if the time starts getting inaccurate? I ask this because my PVR, when not plugged in, can run down a button cell in under a month (it is running an MSP430 microcontroller). My Pi is not habitually always-on, so the button cell will be keeping the clock running a lot… |
jim lesurf (2082) 1438 posts |
I also remember continuing to point out the absence of user documentation of the kind I kept asking for.

WRT possible problems I can suggest two sorts of examples.

1) For some years I ran a RPC to regularly take samples measuring the gain and phase changes over a 20-30 km path. FWIW I did the above with the RPC running for about a month per chunk, only stopping it briefly to change removable HD for analysis. But during these periods its clock seemed to keep a steady enough rate for the job.

2) Sampling for processes akin to audio where a high rate is used but relies on the computer’s ‘clock’. Again, if I want to assess such a process I need to know the clock rate isn’t being quietly altered under me.

The user can, of course, work around possible problems if they know about them. And as a pro I tended to always check things for myself (hence my queries about NetTime). The problem with NetTime is that it doesn’t make clear what it has actually done. e.g. it really doesn’t help that when you see “synch successful”, in fact no synch has actually been applied. e.g. 2: the difficulty of using the standard GUI for NetTime to stop using NetTime without fouling up the time accuracy.

And WRT Rick’s comments: I have also had “try”s of using NetTime which failed to synch the clock.

Jim |
Dave Higton (1515) 3534 posts |
I don’t know the specifics of the machine in question, but the Z80 has 64 kiB of memory address space and a separate I/O space, which can be 256 bytes or 64 kiB depending on how you address it. I suspect it was rare for anything Z80-powered to require more than 256 bytes of I/O. The address and data busses were the same; there were different control signals for memory and I/O space. Been there, done a lot of Z80 assembly language programming, designed a Z80-based embedded control system for an audio test set… and survived… |
jim lesurf (2082) 1438 posts |
It is also quite possible that if you’d bought a series of watches made one after another on a single production line, some would drift more than others.

FWIW I hadn’t tampered with the setting of my ARMiniX clock until I became curious about NetTime. But I’ve now set the clock to within about a sec wrt the USA Navy clock. I’ll keep an eye on how it does over time.

BTW apologies for worse-than-usual typing. I keep getting a text window about three screens wide! And I’m afraid I still find a webforum much harder to read, etc, than using !Pluto.

Jim |