Timing system
Alan Adams (2486) 1149 posts |
I have a scoring system written in BASIC which runs on a network of computers, mostly Raspberry Pis. As it consists of about 25 separate programs with about 15 libraries of shared code, I have no intention of converting it to any other language. Now, however, I would like to add timing to it.

The requirement is to time to an accuracy of better than 0.1 seconds, ideally better than 0.01 seconds. I envisage one computer handling the starter and another the finisher. Each would respond to an event on a GPIO pin and store the current time, subsequently sending this to the server, which would calculate the difference when both times have been received. This avoids both network delays and server delay issues.

As I see it at the moment, this requires several things I don’t currently have in place:

1. Synchronisation of the clocks on the two computers. This will need to run without an internet connection, so standard NTP won’t work. I don’t know whether NTP is accurate enough, even if I can set up a master server – most server implementations I have found require a higher-level server upstream. A brief search suggests that this might be possible using a Pi with GPS and Linux as a time master. (Since all these computers will be co-located, there does exist the possibility of providing a hardware synchronisation signal, e.g. the 1-second clock edge trigger which can be obtained from GPS.)

2. A module (I assume) which will handle the interrupt from the hardware and store the time, using a poll word to notify the BASIC application that something happened. I’ve never written any C, although I’ve written assembler for 6502, Z80 and VAX. I don’t know how complex this module might be, but I’m thinking that if it doesn’t have to do very much, it might be easier for me to write in assembler rather than having to learn C. (A sketch of what I mean follows at the end of this post.)

3. Adequate de-bouncing of the timing signal. This would probably be easier to do in the electronics than in the software.

Any thoughts, comments or suggestions for things I haven’t thought of would be welcome.
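Roughly what I imagine the module’s event routine doing – only a sketch in C, not working module code: the names gpio_event_handler and module_ws are made up, the module glue (CMHG header, interrupt veneer, GPIO claiming) is omitted, and it assumes the centisecond count from OS_ReadMonotonicTime is good enough.

```c
/* Sketch of a RISC OS module event routine: on a GPIO interrupt,
 * grab the monotonic clock and flag the BASIC task via a poll word.
 * All names here are invented for illustration. */

#include "kernel.h"

#define OS_ReadMonotonicTime 0x42   /* returns centiseconds in R0 */

typedef struct {
    volatile int pollword;   /* BASIC passes this address to Wimp_Poll */
    volatile int event_cs;   /* timestamp of the most recent event */
} module_ws;

/* Called (via the module's interrupt veneer) when the GPIO pin fires.
 * Keep it short: read the clock, store it, set the poll word, return. */
void gpio_event_handler(module_ws *ws)
{
    _kernel_swi_regs r = {{0}};

    _kernel_swi(OS_ReadMonotonicTime, &r, &r);  /* r.r[0] = centiseconds */
    ws->event_cs = r.r[0];
    ws->pollword = 1;   /* Wimp_Poll reason 13 wakes the BASIC task */
}
```

One caveat I’m aware of: OS_ReadMonotonicTime only ticks in centiseconds, so 0.01 s is the very best this approach could resolve; anything finer would mean reading a hardware timer directly. |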
David Feugey (2125) 2709 posts |
It could be possible with an SNTP server. Some source code is available, and the protocol is quite simple:
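For illustration, a minimal client exchange in C – generic BSD sockets, with a placeholder server address; the RISC OS Internet module exposes a similar socket API:

```c
/* Minimal SNTP query sketch: send a mode-3 (client) request to a
 * local server and read back its transmit timestamp. Error handling
 * is trimmed and the server address is a placeholder. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

#define NTP_TO_UNIX 2208988800UL   /* seconds from 1900 to 1970 */

int main(void)
{
    unsigned char pkt[48] = {0};
    pkt[0] = (0 << 6) | (4 << 3) | 3;               /* LI=0, VN=4, Mode=3 */

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in srv = {0};
    srv.sin_family      = AF_INET;
    srv.sin_port        = htons(123);               /* NTP port */
    srv.sin_addr.s_addr = inet_addr("192.168.1.1"); /* placeholder */

    sendto(s, pkt, sizeof pkt, 0, (struct sockaddr *)&srv, sizeof srv);
    recv(s, pkt, sizeof pkt, 0);

    /* Transmit timestamp: whole seconds at bytes 40-43, binary
       fraction (units of 1/2^32 s) at bytes 44-47, big-endian. */
    unsigned long secs = ((unsigned long)pkt[40] << 24) |
                         ((unsigned long)pkt[41] << 16) |
                         ((unsigned long)pkt[42] <<  8) | pkt[43];
    unsigned long frac = ((unsigned long)pkt[44] << 24) |
                         ((unsigned long)pkt[45] << 16) |
                         ((unsigned long)pkt[46] <<  8) | pkt[47];

    printf("server time: %lu.%03lu (Unix seconds)\n",
           secs - NTP_TO_UNIX,
           (unsigned long)(frac / 4294967296.0 * 1000.0));
    close(s);
    return 0;
}
```

That only reads the server’s clock. To compute an offset you also note the local send and receive times (T1, T4) and apply the usual ((T2-T1)+(T3-T4))/2 formula, which cancels any symmetrical network delay. |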
Stuart Painting (5389) 714 posts |
Synchronising everything to an external reference time has several problems, some of which you have already identified. Here’s a few more to worry about:
If you want high accuracy the clients need to be recording the event and not a lot else, in order to minimise the possibility of clock jitter affecting the result. The server can do all the clever stuff (acting as master clock, calculating client clock drift etc.). So a timing run would look something like this:

1. Prior to the start, the server polls each client to determine what its local time is.
2. During the run, each client records the time of its event (start or finish) against its own local clock.
3. Each client sends its recorded event time to the server.
4. After the run, the server polls each client again to determine what its local time is.

Items 1 and 4 can be used to determine clock drift for each client, and hence to apply corrections to the event data received in item 3.

For maximum accuracy you would need to minimise network delays: the easiest way of doing this would be for the server to initiate all interactions and the clients to merely respond as necessary.

What I have just described is, in essence, NTP turned on its head. You aren’t necessarily interested in the exact time compared to UTC, but you are interested in how much each of the client clocks is drifting compared to the server’s clock.
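The correction itself is only a couple of lines. A sketch in C (all names invented), assuming the server has recorded matching server/client clock readings before and after the run:

```c
/* Map a client timestamp onto the server's timebase, assuming the
 * client clock drifts linearly. (s0, c0) are the server and client
 * clock readings from the pre-run poll (item 1), (s1, c1) from the
 * post-run poll (item 4); everything is in centiseconds. Names and
 * structure are illustrative only. */
double client_to_server(double event_c,
                        double s0, double c0,
                        double s1, double c1)
{
    /* how fast the client clock runs relative to the server's */
    double rate = (s1 - s0) / (c1 - c0);

    /* anchor at the first poll and scale by the relative rate */
    return s0 + (event_c - c0) * rate;
}
```

Subtract the corrected start time from the corrected finish time and the drift largely cancels; the residual error is set by how symmetrical the poll round trips were. |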
John Sandgrounder (1650) 574 posts |
I do not think you are going to get even close to that sort of timing using two computers. I would suggest you need to use one computer for both the start and finish. Even then, centisecond timing will be pushing the limit.
I think I have replied to this problem before. If you have power to run a network of computers, then you can have an internet connection for NTP. (Routers like the TP-Link Archer 200 will run off 12 volts or a low-power inverter, using a SIM card to access an NTP time server via the mobile phone network.) Setting RISC OS to synchronise with a time server every 30 minutes (or perhaps even every 10 minutes) will give you a very smooth and reliable clock, as RISC OS will speed up or slow down the clock rate as required to give you a reliable time reference (i.e. the clock will not be ‘stepped’ either up or down). I have done this for many years for another scoring system.
It might help if you could give us a bit more detail about what hardware will create the event on the GPIO pin. It would probably also be useful to know what the time interval normally is between the start and finish events, and how far apart they are on the ground. |
Alan Adams (2486) 1149 posts |
While this would work in principle, in the case of this competition the times need to be available during the competition, and will be checked manually against a backup system while it runs. Therefore waiting until the end to make adjustments is not possible. The competition runs for up to 6 hours each day.

I’m wondering whether providing the start and finish computers with a GPS time source would work. It might be one GPS module feeding both, or separate modules for each computer. As should be apparent, the actual time-of-day displayed is not important, but the difference between the two clocks is critical. The major difficulty I see here is that GPS software only seems to be available for Linux. I have no idea what would be involved in porting it to run under RISC OS. It’s certainly not something I could visualise undertaking. |
Alan Adams (2486) 1149 posts |
Using one computer is certainly a possibility. It would need careful software design to deal with events such as needing to start one competitor just as another finishes. I’m not sure why getting the required level of accuracy would be a problem in this case. If the timing event is a transition on a GPIO pin, thus triggering an interrupt, then a module should be able to read the clock very quickly, store the result and alter a poll word. It would need to monitor two pins in this case, but I would have thought there isn’t much of a problem with that – two pins, two poll words, or maybe just one poll word (a sketch of the one-poll-word approach follows below). To calibrate it, connect the two pins together and apply a trigger. This will show how much delay is incurred in this worst-case condition.
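Something like this is what I have in mind for the one-poll-word case – just a sketch with invented names, using one bit per pin and a small buffer so a start and a finish arriving together can’t overwrite each other:

```c
/* Sketch: one poll word serving two pins. Bit 0 = start pin fired,
 * bit 1 = finish pin fired. A small ring buffer per pin means a
 * start and a finish landing almost together are both kept.
 * All names here are invented for illustration. */

#define QUEUE_LEN 8   /* power of two, so masking wraps the index */

typedef struct {
    volatile int head, tail;
    volatile int stamp_cs[QUEUE_LEN];    /* centisecond timestamps */
} event_queue;

typedef struct {
    volatile int pollword;               /* passed to Wimp_Poll */
    event_queue  pin[2];                 /* 0 = start, 1 = finish */
} timer_state;

/* Interrupt side: store the timestamp, set the pin's bit. */
void record_event(timer_state *ts, int pin, int now_cs)
{
    event_queue *q = &ts->pin[pin];
    q->stamp_cs[q->head & (QUEUE_LEN - 1)] = now_cs;
    q->head++;
    ts->pollword |= (1 << pin);          /* wakes the BASIC task */
}

/* Foreground side (when Wimp_Poll returns reason 13): drain one
 * event; returns -1 when that pin's queue is empty. */
int fetch_event(timer_state *ts, int pin)
{
    event_queue *q = &ts->pin[pin];
    if (q->tail == q->head) {
        /* real module code would disable IRQs around this clear */
        ts->pollword &= ~(1 << pin);
        return -1;
    }
    return q->stamp_cs[q->tail++ & (QUEUE_LEN - 1)];
}
```

On the BASIC side, calling Wimp_Poll with bit 22 of the mask set and R3 pointing at the poll word returns reason 13 whenever either bit is set, and the poll word contents then say which pin fired. |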
Rick Murray (539) 13840 posts |
Forget NTP. If we are trying to get centisecond accuracy, you’ll need a way to directly hijack and set the RISC OS clock. The time server method works by stretching or compressing time slightly to bring the system clock closer to reality with every test – and in reality it is likely to be a few seconds out either way. Trying to sync up machines to be correct to within a hundredth of a second – that alone will be an interesting and frustrating technical challenge. I did once try something like that between an A5000 and an A3000 over Econet, using plain seconds. It needed a resync every couple of hours – using second accuracy. I dread to think how quickly centisecond ticks would drift.

Ideally you want to concentrate on having one sole machine handle the timing. It can send out messages…however…for other machines to pick up and process. That way the timing will be accurate, as the one machine will look after that and only that, not get bogged down with other things. |
Dave Lawton (309) 87 posts |
Hi Alan, |
Chris Hall (132) 3554 posts |
One thing SatNav can do is to use the accuracy of the atomic clock in the GPS satellites. It receives a time (and location) signal allowing the clock error to be calculated. The best solution, however, is to have one machine that is defined to be correct and assume the others are wrong by a calculated amount based on the messages you get from them. |
Alan Adams (2486) 1149 posts |
OK, there seems to be a consensus that attempting to synchronise two Pis to a centisecond isn’t a good idea. So my options seem to be either to accept decisecond accuracy, which meets the current requirement but might not in future, or to use a single computer. (Unless GPS can achieve better synchronisation. It’s not just initial sync, as the two clocks may/will have different drift rates.) |
Alan Adams (2486) 1149 posts |
Does SatNav adjust the clock on the Pi? My general reading says there are two signals. One is a serial data stream which contains the time of day to a second, and the second is a pulse which identifies the start of the current second to microsecond accuracy. I can’t see a reason why these signals couldn’t both be applied to two Pis in parallel, thus synchronising them to a very high degree. Even if the signals travelled a long way over coax between computers, the delay would be constant, which would be acceptable.

For the timing application, as long as the clock is adjusted frequently enough that the drift doesn’t exceed half the resolution needed (i.e. 0.005 seconds), a change during a competitor’s run won’t affect the result. Adjustments at, say, 10-second intervals or less might be feasible (see the arithmetic below).
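As a sanity check on that interval, a back-of-envelope calculation – the 50 ppm figure is an assumed worst-case crystal tolerance, not a measured value for any particular Pi:

```c
/* How often must the clock be re-disciplined to keep drift below
 * half the required resolution? Assumes a worst-case oscillator
 * error of 50 ppm (a typical crystal spec, not a measured figure). */
#include <stdio.h>

int main(void)
{
    double drift_ppm   = 50.0;    /* assumed worst-case clock error */
    double max_error_s = 0.005;   /* half of the 0.01 s resolution  */

    /* the error grows at drift_ppm microseconds per second */
    double interval_s = max_error_s / (drift_ppm * 1e-6);

    printf("resync at least every %.0f seconds\n", interval_s); /* 100 */
    return 0;
}
```

So even at 50 ppm a resync every 100 seconds would be enough, and a once-per-second GPS pulse gives a very large margin. |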
John Sandgrounder (1650) 574 posts |
Would I be right in thinking that your inputs come from human operators? If so, then they are doing a lot better than ours. We think we are doing well if the human inputs are accurate to the nearest second (which is perfectly adequate for our event). |
Alan Adams (2486) 1149 posts |
For lower division events, which we run, the timing is by a button press, and the required timing display is to 0.1 second. For higher divisions the signal is from double-beam optical systems, and the required resolution is 0.01 second. I would like to build the system to be capable of the higher accuracy if possible, but would be OK with the lower, as it is what we currently need. In each case the backup is by stopwatches, which need to show tenths of seconds, and these backups are compared at intervals with the main system. The requirement here is that the two must be within 1 second. |
John Sandgrounder (1650) 574 posts |
“Double-beam optical systems” sounds good. |
Jeff Doggett (257) 234 posts |
I need a timing system that automatically subtracts at least 20 seconds from K1M Div 3 bib 30 please. |
Chris Hall (132) 3554 posts |
“Does SatNav adjust the clock on the Pi?” By default it sets the clock under specific circumstances (for example, when the time is one of those it recognises as indicating a reset RTC chip); it also sends Wimp messages about how fast or slow it thinks the clock is, in centiseconds. |