NetTime has no effect on bootup
Rick Murray (539) 13850 posts |
Oh, good. I thought it was just me with the obscenely big window in NetSurf. @ Steve Fryatt – could you please try editing your post of 2014/08/13 at 17h55.39? I think the long URL in the quote (NetTime/Doc/Spec%2Cfaf) is messing up NetSurf’s rendering. Thanks. PS: Just started the Pi for the first time since this morning. The time is 3 seconds slow. I think I’ll just go ahead and close the TaskWindow and stop obsessing about this. ;-) |
Chris Evans (457) 1614 posts |
The chip has no built in battery monitoring. Adding external components could do it but I don’t think the market wants it! The battery theoretically should last about nine years before needing replacing! |
Rick Murray (539) 13850 posts |
AFAIK, RISC OS does not inherently support leap seconds. Official US time sources can return them. It is unlikely that you will encounter a leap second – there have been 25 since 1972 – but if you do, do your software and the OS itself cater for a seconds value of sixty? After all, you want to be accurate… :-P |
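As a minimal sketch of the sort of thing that can trip up (invented example code, not taken from RISC OS or any NTP source): a naive range check on the seconds field rejects a leap second outright, whereas a check that allows a value of 60 copes with it.

```c
/* Minimal sketch: how a leap second (seconds == 60) upsets a naive
 * validity check.  Invented example, not from any real source. */
#include <stdio.h>

static int valid_secs_naive(int s)   { return s >= 0 && s <= 59; }
static int valid_secs_lenient(int s) { return s >= 0 && s <= 60; } /* 60 = leap second */

int main(void)
{
    int s = 60;  /* e.g. 30 Jun 1997 23:59:60 UTC */
    printf("naive check  : %s\n", valid_secs_naive(s)   ? "accepted" : "rejected");
    printf("lenient check: %s\n", valid_secs_lenient(s) ? "accepted" : "rejected");
    return 0;
}
```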
Steve Fryatt (216) 2105 posts |
Done, I think. Most of that line appears to have been Textile “helpfully” inserting stuff for me that I didn’t actually type. |
Sprow (202) 1158 posts |
That’s not how to interpret those two events, sigh, we’ve really not got the hang of this concept yet… NetTime_Kick (which I think should be considered as a system internal command, not for users) schedules an immediate contact with the time server, subject to being able to complete a DNS lookup and some callbacks for the network stack to chew the packets with, and that new time becomes one new point on an imaginary graph whose trajectory is to get to zero error at some point in the near future. The correct interpretation (depending on how fast you can type) is that 7m27s ago the time was 32s out, and now it’s 16s out, meaning that in the last 7m27s the slew has adjusted 16s of error. It was not that a 16s sudden step change occurred because of typing NetTime_Kick, just as tapping a barometer doesn’t make the sun come out. Of course if you just keep NetTime_Kick’ing you’ll end up with pilot-induced oscillation, which is why there’s a handy GUI which restricts the retry period (and, here comes the user documentation): you set it once to your preferred mode and then leave it alone.
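As a rough illustration of that interpretation – this is just the arithmetic with the numbers quoted above, not NetTime’s code:

```c
/* Two error readings taken 7m27s apart imply a slew rate and a rough
 * ETA to zero error.  Figures are the ones quoted in the post; this is
 * an illustration only, not NetTime's algorithm. */
#include <stdio.h>

int main(void)
{
    double err_then = 32.0;            /* seconds of error at the first check  */
    double err_now  = 16.0;            /* seconds of error at the second check */
    double interval = 7 * 60 + 27;     /* 447 seconds between the two checks   */

    double slew = (err_then - err_now) / interval;  /* error removed per second */
    double eta  = err_now / slew;                   /* seconds until zero error */

    printf("slew rate        : %.4f s of error removed per second\n", slew);
    printf("time to zero err : about %.0f s (%.1f minutes)\n", eta, eta / 60.0);
    return 0;
}
```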
Risc PC, built in the last 25 years, therefore ran RTCAdjust, same algorithm NetTime slews with. Move on.
Citation? Presumably it’s the 1cs timer that’s of concern here (not the calendar time), but even at a lowly 22kHz sampling rate (45us samples) a 1cs timer is so coarse as to be useless, especially on RISC OS where you can arbitrarily disable interrupts for hundreds of microseconds. Move on. |
Rick Murray (539) 13850 posts |
How about something really novel? A call that will set the clock correctly and be done with it (until an error of note is detected at the next check point).
True, I’m struggling to see what runs on RISC OS that depends upon a time error being less than a second. I know absolutely accurate time is not necessary for POS systems (compare your till receipts with something accurate, like ClockSync on Android); mostly correct seems to be good enough – especially for credit card tickets that spell the name of the town they are in “mostly correctly” ;-).
Um… Well… Obviously not degraded enough for you to notice the variations.
Are you mistaking “clock” for the fast (several MHz) ticker? That said, the RTC jiggery-pokery messes with the ticker too. I should add – unless you are willing to take over the computer and program the hardware yourself, you can only expect the hardware to behave in a manner consistent with that which the operating system decides is appropriate. In this case, the OS will have decided to tweak the fast ticker to advance or retard the system clock gradually – the overall effect on operating system processes and user programs is zero (hands up if you even knew RISC OS did this until this thread?). For all the times I used Maestro and Coconiser, and now with the Pi listened to webradio, I cannot say I noticed these small variations. Take a look at this: https://www.riscosopen.org/viewer/view/castle/RiscOS/Sources/HWSupport/RTCAdjust/s/RTC?rev=4.6 Comments are the text following the semicolons. You might want to read from line 450, and from line 727. ;-) |
Malcolm Hussain-Gambles (1596) 811 posts |
One thing that I found interesting to use is PTPD |
Chris Evans (457) 1614 posts |
Nothing. As I understand it, NetTime slowly adjusting the time is aimed at stopping programs that regularly do time-related actions getting confused by time jumping, in particular going backwards. If time could be corrected during booting ‘before’ any other software that might get upset by time leaps, then that would, I think, work. But I’m not sure the ‘before’ could be guaranteed. Time related: I do think a disc-based utility that tests for an incorrect year and pops up a warning/runs the set time configure plug-in would be very useful. |
David Feugey (2125) 2709 posts |
Perhaps the solution would be to sync the clock not only at shutdown, but also when the machine starts. The Internet stack and the NTP client should be started before any user tasks. |
jim lesurf (2082) 1438 posts |
Indeed, that’s all I want. If that’s available as an alternative to NetTime and people can choose, fine with me. The problem is that NetTime isn’t easy to use for one-off clock setting for the various reasons already outlined. All I’d want is a program that can be run either when the user wants, or ‘once at bootup’, that sets the clock to a decent accuracy. Then ceases. No rate tampering or automatic repeats as the machine runs. I’d probably only use it twice a year as the free running accuracy seems fine for me. If others want NetTime, fine for them. Although I would suggest people sort out its user documentation and GUI so it doesn’t mislead them as now. Personally, I’m quite happy with a clock that is accurate to no better than a few sec. Although it would be nice if the Alarm/Boot GUI gave more resolution than hh:mm, as being out by up to a minute seems needlessly inaccurate. Means these GUI interfaces have to be used with great care by the user if you want accuracy without leaving NetTime running. And I’m afraid simply saying “move on” doesn’t deal with all the points I’ve raised. All it does is tell us that some people don’t care about the snags and user interface problems I’ve pointed out.
Erm.. what variations? I wasn’t routinely using something like NetTime during the process. Indeed, the RPC in question wasn’t connected to the internet. It was running single-tasking doing one job. Logging the data to a HD. As I explained, the absolute time wasn’t that important. Just that the data was taken with a fairly regular ‘tick’. TBH having random jitter also wasn’t too important provided it was small. But having the clock rate varied every x hours would have caused some problems unless the changes were pretty small.
By ‘clock’ I mean whatever process was being used to space apart the data samples. For the RPC example I quoted, IIRC this was a Wild Vision ‘DSP’ card whose firmware Andy Ray and I had bodged to force it to do a set number of loops per call to sample. (The original looped until a timer event via the podule. Since the two rates were different, this jittered the data taking quite badly.) But I used various other cards and approaches with RPCs and earlier machines. They all had timing problems of various kinds TBH. Alas, this was so long ago now, I can’t recall details. And the cards and machines were given away ‘first come first served’ some years ago. So I can’t recall for sure, but I think the DSP chip ran from its own on-podule clock. Hence the irregularity caused by the way it looped as-sold, needing the bodge in firmware. Jim |
Rick Murray (539) 13850 posts |
I think the option of dropping …

That said, just booted the Pi. The clock is 0.197084 seconds fast. I don’t think I’ll be losing sleep over that. ;-)
Create an Obey file inside $.!Boot.Choices.Boot.Tasks. Call it something like “YearCheck”. Make it check the year against a hard-coded value and raise a warning (or run the set time configure plug-in) if the clock has obviously reverted.
That ought to do the trick. You don’t need to worry too much about updating the year field in four and a bit months, for when the time “reverts” it is usually towards something like 1970…
I don’t mind NetTime tweaking in the background. It doesn’t affect my day to day use of the system. I’d just like it to begin with a clock set correctly. I might have mentioned this once or twice already. ;-)
How long do you keep your machine on for? It is entirely possible that the time would drift if the machine was always on. I for one would want NetTime to stay active if I was running a server or somesuch. I only power up the Pi/Beagle when I’m using them, so I cannot really comment on long-term clock accuracy.
??? Firstly, unless you disable both NetTime and RTCAdjust (and the latter only on older machines), your free running accuracy has been automatically tweaked. Secondly, from experience on a bog-standard RiscPC, I updated my clock weekly from the teletext TSDP. It was never a lot out (at most ~20 seconds), but an error of 10–20 seconds a week, multiplied by 26 weeks, adds up.
Well, not to be pedantic or anything, but the date setting UI says “Try” on the button and “Sync xxxxs ago” in the status field. If you click on the Try button, it says “Sync success”. Nowhere does it say “Your clock has been updated”. :-) Something interesting to note is that I just clicked on “Try” (to see what the status said) and now my clock is reported as being 16s slow instead of the almost-correct that it was at boot up, over a duration of 22 minutes.
While you have raised valid points – you said, and I quote: “The data would have been degraded if the clock rate was being changed at regular intervals by some other process that didn’t keep a clear record of what it had done.” And to follow up, you said:
It was not running NetTime. True. It was not on the internet. True. It was not having its clock adjusted. FALSE. Assuming you are running RISC OS 3.70, you will find RTCAdjust present, module #52. It is module #49 in RISC OS 3.50. I don’t have a RISC OS 3.60 ROM image to check. If you are running an ROLtd flavour, they’ll probably have it too unless they have a NetTime style replacement present instead. In other words – there’s a good chance your clock has been played with all along. Hence, “move on”.
For a podule I would expect it would either have its own clock, or derive timing from the 8MHz system clock present on the backplane.
?? I am not surprised there was jitter. If you use a CallEvery or CallAfter, your minimum call time is every 2cs. You can shave this down to 1cs by hooking into TickerV, but you are at the mercy of what else the system is doing (you’re just another claimant to be called) and, furthermore, as I have discovered, the only safe way to do anything of value on a timed CallAfter/CallEvery/ticker is to just use that to set up a callback to get RISC OS to drop into your code “when it is free”, which is a very vague length of time – for example, if you schedule something for 5 seconds’ time and then do *CheckMap, your callback will fire when the CheckMap command has finished (otherwise, RISC OS is “busy”). There are ways and means around this or else RISC OS would be dead as an OS.

I’m a bit confused re. using a timer event. If I was doing some sort of DSP sampling, I would be very inclined to set up a big in-memory buffer, then hook into IrqV to read from the card into the buffer on an IRQ event, and then use that to schedule a callback to have a foreground process read from the buffer to hard disc. Or, whatever. I mean, if you can allocate a really big buffer… depends what you are recording.
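Stripped of the RISC OS-specific plumbing (the IrqV claim, module veneers and so on), the split described above looks roughly like this generic sketch; all the names and sizes are invented for illustration:

```c
/* Generic sketch of the split described above: the interrupt side only
 * copies samples into a large ring buffer; a foreground loop drains it
 * to disc at its leisure.  Illustration only. */
#include <stdio.h>
#include <stdint.h>

#define RING_SIZE 65536                /* "really big buffer" */

static volatile uint16_t ring[RING_SIZE];
static volatile unsigned head, tail;   /* head: written by IRQ side, tail: by foreground */

/* Called from the interrupt side: must be short, no file I/O here. */
void irq_sample_handler(uint16_t sample)
{
    unsigned next = (head + 1) % RING_SIZE;
    if (next != tail) {                /* drop the sample if the buffer is full */
        ring[head] = sample;
        head = next;
    }
}

/* Called repeatedly from the foreground (e.g. from a callback or poll loop). */
void foreground_drain(FILE *out)
{
    while (tail != head) {
        uint16_t s = ring[tail];
        tail = (tail + 1) % RING_SIZE;
        fwrite(&s, sizeof s, 1, out);  /* the slow work happens outside the IRQ */
    }
}

int main(void)
{
    /* Stand-in for the real thing: pretend the IRQ delivered some samples,
     * and use stdout in place of a file on disc. */
    for (uint16_t i = 0; i < 10; i++)
        irq_sample_handler(i);
    foreground_drain(stdout);
    return 0;
}
```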
Not a surprise. The problem will be getting data from the card to a usable state with what might be an underpowered processor. Pre-RiscPC? You’re looking at 8/24/25/33? MHz as clock speeds. I have a TV capture card hooked to a 600MHz PC. It is capable of recording video in real time at something like 360×288, but it only “lightly” compresses the video, so I need to post-process it to make XviDs, which is always interesting given the number of dropped frames it creates. This makes sound sync… interesting. The reason? Probably the same as yours – more data than the machine can handle.
Ah, that’s even worse, if there is no simple relationship between the on-podule clock and the host’s clock. I recall not so long ago we had a discussion regarding a certain member of the RISC OS family’s inability to sensibly cope with 44.1kHz samples with audio hardware apparently only capable of 48kHz sampling. Logically, it is not a difficult thing to do – resample up to a rate that is a common multiple of both, then convert down to 48kHz. There are additional complexities if you want to avoid distortion, certainly, however the difficulty comes in getting a medium-speed processor to actually be capable of doing this sort of thing without spending the majority of its time doing it. This is, typically, the sort of data-crunching application that would be pushed off to custom hardware (either a codec chip or a DSP), precisely because a general purpose CPU isn’t terribly good at doing it efficiently. Especially when running an OS at the same time where all sorts of things can happen “in the background” that can cause an influence on the rest of the system. |
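For what it’s worth, the exact resampling ratio falls out of a little arithmetic (illustration only, nothing to do with the actual sound system code):

```c
/* Work out the smallest whole-number resampling ratio between
 * 44.1kHz and 48kHz by dividing both by their greatest common divisor. */
#include <stdio.h>

static unsigned gcd(unsigned a, unsigned b)
{
    while (b != 0) {
        unsigned t = a % b;
        a = b;
        b = t;
    }
    return a;
}

int main(void)
{
    unsigned in = 44100, out = 48000;
    unsigned g = gcd(in, out);                                       /* 300       */
    printf("interpolate by %u, decimate by %u\n", out / g, in / g);  /* 160 : 147 */
    return 0;
}
```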
jim lesurf (2082) 1438 posts |
For most home use, generally for periods of the order of an hour at a time. I’m doing other things most of the time – albeit some of that using Linux boxes instead. However for various of my work activities (before I ‘retired’ [sic]) I might have had machines running for weeks at a stretch, performing some automated real-world process. These machines wouldn’t have been net connected, and would be single-tasking – i.e. wimp halted.
You’ll have to explain more of that for me. If I don’t use NetTime what is RTCAdjust doing without my awareness on my ARMiniX? And would that have been relevant on a RPC some years ago in circumstances like the ones I describe above for ‘work’ use?
Nice try. :-) However I suspect that many English-language users may tend to think that “synch success” implies that one clock time (their computer’s) has been synched to agree with a reference (via the net) and that their clock time has been accurately set. The wording is at best ambiguously misleading. Indeed I – mistakenly – assumed that using “try” to get “synch success” without then using “set” meant I might be able to set the clock time via the net-fetched time without then having NetTime lurking in the background. Otherwise I can’t quite see the point of having two different buttons and getting the response “synch success” rather than “found the net server” or similar… Afraid I still have the impression that people haven’t thought carefully enough about this user interface and how the user uses this. All of which is made worse when – without the user knowing – the time may not have been set accurately anyway when they clicked “set”, because NetTime assumes an error of tens of secs is too small to correct.
Slight correction. It was an Irlam DSP card, I think. And the original firmware was that supplied with the card. It let the DSP loop (using its own clock I think) and check for an event or something over the podule interface at the end of each loop cycle. (Alas, too long ago, so I can’t now recall exactly what it was waiting to get from the host.) As you say, the method will generally jitter badly. However Andy modified the loop so the timing just did N loops between samples and didn’t try to follow any other clock than the DSP’s on-card clock. That got rid of the jitter. But meant the actual sample rates weren’t the ‘standard’ ones claimed by the manual. That was OK for my use as the rate being regular and known was more important than keeping to a ‘standard’ rate. Jim |
Rick Murray (539) 13850 posts |
From http://www.riscos.info/pipermail/rpcemu/2008-June/000201.html :

“In RISC OS there’s the RTCAdjust module which will try to keep the soft-copy in line with the hardware copy. The reason is to ensure applications aren’t confused. Imagine if you discover the hardware clock is running a minute behind the soft clock – for apps time would either go backwards or stand still. RTCAdjust smooths the transition so time is still increasing but at a slower rate.”

From http://www.apdl.org.uk/riscworld/volumes/volume6/issue2/iyonix/index.htm :

“The new version of RISC OS 5 sports the inclusion of a new module named RTCAdjust. This should enable the Iyonix’s infamously inaccurate clock to keep track of time a lot more closely by repeatedly synchronising the clock with the timer crystal installed on the motherboard. Without this being done, the time as seen by Alarm or Organizer can drift by as much as 30 seconds per day.”

(ambiguously worded – the module was new to the Iyonix, but not new to RISC OS)

Basically, every so often it will read the hardware clock and verify that the soft copy matches up, and if not, will adjust things to cause the soft copy to drift back towards what is real. This is because the hardware clock is supposed to keep a reasonably accurate time, but the soft copy (working off a centisecond ticker and a system where a lot of things temporarily disable IRQs) is not nearly so accurate. In general, then, an older machine will have its clock more or less kept in step with the hardware clock. What may differ slightly is the absolute length of a minute, an hour…

These days, as time syncing is commonplace (it was practically impossible in 1991), there is less necessity for a good quality RTC (some are worse than others) and thus the NetTime module does the dual job of keeping the time in line with reality and adjusting the clock speed to compensate for differences between what the computer thinks the time is, and reality. In other words, it should make your machine more accurate.
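A toy model of that behaviour – this only illustrates the principle of slewing rather than stepping; it is not the RTCAdjust algorithm, and all the numbers are made up:

```c
/* Toy model: never step the soft clock, just nudge its rate so that it
 * converges on the reference (hardware) clock.  Illustration only. */
#include <stdio.h>

int main(void)
{
    double hard = 0.0;    /* reference (RTC) time, seconds           */
    double soft = -5.0;   /* soft clock starts 5 seconds behind      */
    double rate = 1.0;    /* soft-clock seconds per reference second */

    for (int t = 0; t < 600; t += 60) {   /* check once a "minute"   */
        double err = hard - soft;         /* how far adrift we are   */
        rate = 1.0 + err / 300.0;         /* a rate that would clear the error
                                             in ~300 s if held constant */
        hard += 60.0;                     /* run for 60 reference seconds... */
        soft += 60.0 * rate;              /* ...at the adjusted soft rate    */
        printf("t=%3ds  error=%+.3fs  rate=%.5f\n", t + 60, hard - soft, rate);
    }
    return 0;
}
```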
This is low level and has nothing to do with whether or not the desktop is running.
Yeah, a bit like its inability to update the clock when directly asked to do so. ;-)
It is correcting it; it is just doing so in a non-obvious way (speeding up/slowing down the soft clock). Two things that I came across:

1. http://en.wikipedia.org/wiki/Network_Time_Protocol – Microsoft says that the W32Time service cannot reliably maintain sync time to the range of 1 to 2 seconds.

2. http://www.timesynctool.com – For Windows XP, in order to prevent the Microsoft SNTP client from setting the system time to a wildly incorrect value, Microsoft made the design decision that their client would only update the system time if the server response was within 15 hours of the current system time. This reduced the risk of an invalid time being set on the system (but not completely) but also has the effect that if the system time isn’t at least reasonably accurate it never will be until manually fixed! For a system with a failed CMOS battery, the Microsoft SNTP client is pretty much useless. (Things are slightly better as of Vista.)

There is, however, an interesting feature that might be worthy of consideration in the RISC OS implementation – NetTime ensures that it is not setting the system time to an incorrect value by always checking with a second server (when configured) if the time adjustment is more than 10 seconds. Short of a major bug in the program design or a very sustained attempt to maliciously skew the system time by a rogue time server, NetTime simply won’t set an invalid system time! (This is NetTime at the timesynctool URL, not our NetTime, but maybe it should be? ;-) ). |
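The shape of that sanity check is roughly this (an invented sketch of the idea, not code from either NetTime; the names and thresholds are made up):

```c
/* Sketch of the "second opinion" rule described above: small corrections
 * are applied directly; a large correction is only applied if a second
 * time server agrees to within a tolerance. */
#include <math.h>
#include <stdio.h>

#define STEP_LIMIT   10.0   /* seconds: beyond this, ask a second server   */
#define AGREE_WITHIN  2.0   /* seconds: how closely the servers must agree */

/* offset_a, offset_b: (server time - local time) reported by two servers */
static int accept_adjustment(double offset_a, double offset_b)
{
    if (fabs(offset_a) <= STEP_LIMIT)
        return 1;                                      /* small: just take it */
    return fabs(offset_a - offset_b) <= AGREE_WITHIN;  /* large: need agreement */
}

int main(void)
{
    printf("%d\n", accept_adjustment(3.2, 0.0));      /* 1: small correction */
    printf("%d\n", accept_adjustment(120.0, 119.5));  /* 1: servers agree    */
    printf("%d\n", accept_adjustment(120.0, 4.0));    /* 0: probably bogus   */
    return 0;
}
```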
jim lesurf (2082) 1438 posts |
Interesting. FWIW I ‘retired’ in 2006 and the work I referred to was before that. Afraid I have no idea now what OS version was on the machines at the time. Too long ago. How often would the RTCAdjust take place and how big would the changes be? I can’t say I noticed them. However the machines had been run for years anyway so I wonder how close the two clocks were. If the initial corrections worked, presumably later ones would become very small? All that said, IIRC the DSP card’s sample clocking used N loops of a process on the DSP chip using its own clock. So once I’d spotted and Andy fixed the daft “check for an event” it presumably clocked at a rate that ignored the host clocks – physical or soft. So whilst fixing problems I knew to avoid I guess I may also have dodged this one unknowingly! :-)
Afraid I don’t agree that is correcting the time error. It is adjusting the process that led to it in the hope it will reduce. Given the lack of clear user documentation, again, at best calling this ‘correction’ is ambiguous and misleading for users. WRT acceptable errors in ‘absolute time’, I’d think the order of a few secs – say 2 to 5 – would be fine for most general user purposes. But 127 seems far too big to me. And again, not telling the user this clearly doesn’t help them at all. A better user interface and some decent user documentation would help a lot. Ideally including a user option to cease having NetTime operating without that changing the set time. And a “use once at each bootup then remove the module” option would be very useful. As would an ability to disengage or control rate changes. That said, those really concerned can deal with that (as per my DSP case) provided they can see user documentation telling them about it. Jim |
David Feugey (2125) 2709 posts |
In fact there are two things:

1/ an option to sync time (at start-up, or shutdown, or both)
2/ an option to correct time in real time. |
Rick Murray (539) 13850 posts |
Do you remember what machines? Or was it pre 1990ish? Probably RISC OS 3 (because RISC OS 2 was pretty basic, and you would have remembered the yucky colours if it was Arthur).
No idea. Used RISC OS 3.x for more than a decade and a half on four different machines. Can’t say I noticed anything untoward. In fact, it wasn’t really until this discussion that I realised the drift adjustment was being done all along. I think that was the point. ;-)
Ah, but we are looking at this from two different angles. It would be correcting (slowly) a time error based upon the hardware clock; only now we have the additional complexity of a supposedly-honest time source to compare everything against. Part one – already explained – your computer will drift from the hardware clock. Inaccuracies, missed interrupts, blah blah. The reasons don’t matter, only that it is potentially useful to sync the soft clock (in the OS) with the hard clock (the RTC) at periodic intervals. Part two – the main difference between RTCAdjust and NetTime is that the clock itself is now compared against an external clock source. The question I would have is this: if the clock is so far out of step that it won’t drift back into line over the course of a couple of hours, that implies your hardware clock (RTC) is remarkably inaccurate.
How would you handle a mail debatch if the clock is 14:07.55 when the process starts, somewhere in the middle NetTime adjusts the clock, and it is 14:06.25 when the debatch finishes? I can understand why NetTime does what it does. My only request is to have an option to force the clock to be set (regardless of side effects) that a user can choose to do.
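To make the problem concrete – the figures are just the ones above, converted to centiseconds since midnight:

```c
/* What a backwards step does to elapsed-time arithmetic: a task that
 * straddles the adjustment appears to have taken a negative time. */
#include <stdio.h>

int main(void)
{
    long start = ((14L * 60 + 7) * 60 + 55) * 100;  /* 14:07:55 -> 5087500 cs */
    long end   = ((14L * 60 + 6) * 60 + 25) * 100;  /* 14:06:25 -> 5078500 cs */

    printf("apparent elapsed time: %ld cs (%.1f s)\n",
           end - start, (end - start) / 100.0);     /* -9000 cs, i.e. -90 s */
    return 0;
}
```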
I think 127 seconds seems a lot, yes. However I should add that a disparity of about 20 seconds was seen to have been corrected in approximately 45 minutes, so it isn’t out of the realms of possibility that a “day of use” would be sufficient to drag things back into line without the user noticing.
I think for the majority of users, this might be a spectacularly bad choice. The soft clock will drift, how much depends entirely upon what the computer is doing. I will give an example – when I dump a screenful of data to my OLED (128×64, about 1K), my music playback stops. Because even with a hacked IIC module to run at 400kHz, it cannot send the data fast enough. Well, I lie, it sends the data just fine, but the RISC OS IIC system runs with interrupts disabled – there is no reason for this other than RISC OS has always done it like that (the original bit-banged ports on the IOC) and it would probably be a complicated job to remove this and guarantee things won’t be messed up by this. Anyway, the IIC runs with IRQs off. So the sound buffer runs out and the IRQ requesting more data is missed and accordingly the sound shuts down. Makes me wonder if a few centisecond ticks are missed as well. It might only be 3/100ths of a second, but these things add up over the course of a day.

Any decent OS would attempt to keep its time in step with the RTC by some sort of periodic check and adjustment, and a proper NTP client is most likely to do this by altering the speed of the soft clock, not so much to counter the case of time stepping forwards but more to prevent an occurrence of time stepping backwards. If the system does not have the ability to speed up or slow down the clock, then the same sort of effect will be achieved by making numerous tiny adjustments to the clock. http://www.ntp.org/ntpfaq/NTP-s-algo.htm#AEN1943
As said, you hadn’t noticed in something approaching a quarter of a century (!), so this is surely just a Beaufort 12 in a small receptacle of brewed beverage, is it not? If you really don’t want NetTime doing its thing, just *Unplug it. |
Dave Higton (1515) 3534 posts |
You might like to see how you can send a repeated start condition if you are unable to guarantee to access the registers of the IIC controller during transmission of the last byte prior to the repeated start, especially on the Raspberry Pi. |
jim lesurf (2082) 1438 posts |
It would have been between 2000 and 2006. And I’m fairly sure I’d have kept the OS reasonably up to date.
I wouldn’t. Because I wouldn’t want to run a background process like that which kept altering the clock whilst I was using the machine. All I’d find useful is a ‘correct clock time at startup’ so I could know there would be no tampering at other times. The current problem is that people don’t know what’s going on and get no warning before each ‘automatic’ change. Nor any detailed record of what was done.
I’ve simply gone back to using manual adjustments. This was a faff because doing so means having to then correct the time ‘by hand’, because de-selecting fouls up the time. Now when I want to know the time I use my own small program that does it via a wget, and I check the result against the computer clock. So far as I can see from having done this for some days, the drift is negligible for my purposes. But my concern is that others may be using NetTime unaware of its snags and misled by the inappropriate GUI. However if no-one else has any interest in sorting it out, I’ll be happy enough just to have aired the problems so others can make up their own minds. Shame there is no decent user documentation. And I hadn’t noticed NetTime or the RTCAdjust because in practice I wasn’t using either of them when doing time-critical things. The concern is that others may make the mistake of assuming their computer clock can be relied on to give a regular rate because of a lack of user info on what NetTime gets up to. Jim |
Steve Fryatt (216) 2105 posts |
Can I just be clear here: you were killing RTCAdjust on your RiscPC? If you weren’t, then this “problem” (of adjusting the tick rate of the soft-copy of the RTC) has been affecting you since RISC OS 3.5 came out (and before that on older hardware). RISC OS has always kept a soft-copy of the RTC, and since RISC OS 3.1 it has adjusted the ticks to keep them in step with the “real” RTC in hardware (when one is available) using RTCAdjust. All that has changed with the advent of NetTime is the source for that adjustment: it comes from an internet time server, and not the machine’s internal RTC. |
Steve Fryatt (216) 2105 posts |
“How would you handle a mail debatch if the clock is 14:07.55 when the process starts, somewhere in the middle NetTime adjusts the clock, and it is 14:06.25 when the debatch finishes?”

Point of pedantry: it’s not altering the clock. It’s altering the speed of the clock.

“All I’d find useful is a ‘correct clock time at startup’ so I could know there would be no tampering at other times. The current problem is that people don’t know what’s going on and get no warning before each ‘automatic’ change. Nor any detailed record of what was done.”

I’m more than happy that this might be a problem for you. However, I remain to be convinced that subtly tweaking the length of the ticks of the “soft” RTC, which RISC OS has been doing since circa 1991, isn’t the best option for the vast majority of users. It’s certainly better than doing step changes (especially ones in a backwards direction), or doing nothing and the clock ending up many minutes out of step by the end of a few hours. One of Rick’s links above reminded me of the Iyonix in its early days before RTCAdjust was brought on to RISC OS 5. That’s what happened without the adjustments being made by RTCAdjust and/or NetTime.

“But my concern is that others may be using NetTime unaware of its snags and misled by the inappropriate GUI.”

What are these snags for the average user who isn’t doing DSP on their machine? For someone using RISC OS to send and receive emails, or for someone like me who’s using it to edit files that are being tracked by a build control system on another computer? What snags are there in having the clock wildly incorrect, or in doing a backwards step in time to make the correction? I think we need to get some perspective on this “problem”. As I understand it, RTCAdjust has been around for over 20 years now; why hasn’t a better approach been found in that time?

“However if no-one else has any interest in sorting it out, I’ll be happy enough just to have aired the problems so others can make up their own minds. Shame there is no decent user documentation.”

Development and documentation of RISC OS 5 is largely being done by volunteers in their own time. I’m sure that they would love some help with documentation, but then I’ve been saying that for years (about a range of projects) and haven’t had any takers. If someone wanted to write the documentation, I’m sure that developers could help out with the details of what the various bits of code actually did (and then check the documentation authors’ interpretation afterwards). |
Rick Murray (539) 13850 posts |
This:
And this:
Thank you. Saves me typing it. Again. ;-)
Oh, I’ll go one more than that. Making subtle adjustments to the clock rate is the best option for the majority of users. As you say, no step changes. And the soft clock remains more or less consistent with the hardware|internet clock source. It might take a short while to correct itself to reality, but the changes are not critically important. Just booted. My last delta is 93.blah seconds fast. Dunno how it accumulates that in a day, unless RaspBMC (that I was running last night) had anything to do with it? [hmmm, does RaspBMC even know how to talk to the CJE RTC? I doubt it…] If I was on a machine with no correction, I’d worry about being a minute and a half wrong. As I’m on RISC OS, I’m not overly concerned. It’ll sort itself out.
I recall describing it as surely just a Beaufort 12 in a small receptacle of brewed beverage. I think what Jim needs to understand is:
Or, to put it in fewer words – RISC OS is doing what is seen to be in the best interests of the domestic home market (and schools) which were Acorn’s target audience. I’m afraid that if you want to use an ARM based system for reliable and accurate measuring of complex things, you will need to write your own operating system (or have somebody do it for you) as that is the only way that you can be certain of what is running and when and what it is doing. And even then there are vagaries – what “delay” is involved in receiving a network packet, for instance? Or it could just run with interrupts disabled but… long story short – there’s a reason expensive measurement kit uses custom hardware. [1] This will give you an idea of the sort of issues that are involved: https://rt.wiki.kernel.org/index.php/HOWTO:_Build_an_RT-application
…they pretty much don’t need to know.
Just like your browser doesn’t usually inform you if it is taking a redirect. There’s such a thing as drowning in useless information.
I kicked it two seconds ago. The clock is now 78 seconds fast instead of 93 seconds, so it is slowly improving (this is over ~25 mins). Anything more than that is not really useful. Do you think you need to know exactly how many ticks of the timer represent a second at the present time? Even though you can’t change this?

[1] I helped out doing EMC compliance back in the mid-90s. What everybody saw was a clunky laptop running Windows For Workgroups and drawing wibbly displays. What most people didn’t see was the box that sat between the parallel port and the antenna-probe. From a brief look inside (don’t ask!) it looked to be some sort of 8051 processor hooked to a number of ASICs. All of the ICs were ceramic, all of the legs appeared to be gold, and it looked like a tin (not copper) circuit, with lots of mesh cages around stuff. It looked like the antenna-probe input was split in two and went down two identical pathways. It wouldn’t surprise me if the sampling wasn’t done in tandem to try to be more accurate. It also wouldn’t surprise me if the software in the processor wasn’t timed down to every individual clock tick. When an EMC compliance pass or fail depends upon what this bit of hardware indicates, it needs to be accurate. For that, the visible “computer” part is just a dumb device that plots pretty graphs and makes printouts, but is nowhere near accurate enough nor responsive enough [2] to perform any measurements by itself.

[2] The level of crappiness of the interrupt latency of the earlier x86 family is legendary: http://ftp.gwdg.de/pub/misc/x86.org/manuals/186/2153.pdf |
David Feugey (2125) 2709 posts |
Yes, it’s the best way. But a complete sync at boot time would make it even better (and probably not difficult to implement as a predesk operation). The clock will be exact, and NetTime will make it even more precise. If I understand correctly, time is synced only at shutdown. The problem is that a lot of people do not use their RISC OS computers very often, so to get the exact time they would have to boot their system twice. Right? Just sync it twice (start of session, end of session), and make it run as usual the rest of the time. |
Rick Murray (539) 13850 posts |
Or, give us an “undocumented” command to force-sync and let us decide. People who want it right can poke their boot sequence. Everybody else? Works as it did before… |
David Feugey (2125) 2709 posts |
It’s not really a problem to sync the clock if no user applications are launched. Of course it could be a problem in the case of a reboot, but it’s already a problem with syncing at shutdown. Users should also be able to refuse any sync, for example when RISC OS is used as a server. Sync at boot, sync at shutdown or sync for both could be an option in the configure applet. |
Steve Fryatt (216) 2105 posts |
That depends a lot on the order of the boot sequence, however. You have to be sure that the network is started and NetTime has done its stuff before anything else can be run, which isn’t ideal and might slow things down. It’s far easier (and probably better for the average user experience, where people expect things to just work out of the box) to try and avoid the step-change altogether. |