An event when there is data to read on a socket?
Garry (87) 184 posts |
Hello all, I have a program which writes to a socket and reads the response. This works fine, using SockLib5 and InetLib. I am using OSLib for most things, but not for sockets, basically because I was pointed at a nice example which used SockLib etc. I can read and write to the socket just fine. I have set the socket to FIOASYNC, but after that I’m a little lost. I understand I need to set an OSByte to get an Internet event, and also register an event handler, but I’m not quite sure how to do that. So basically, I have a WIMP/Toolbox application with a UI; I can read and write to a socket just fine, but I can’t block whilst waiting for something to read on the socket, so I’d love to get an event to tell me there is something to read. Ideally I’d get this event just like I get a WIMP or Toolbox event, and handle it in the same way. I don’t mind blocking while I read from the socket, but I can’t block while waiting for something to appear on the socket. Thanks as always. Garry |
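(For readers following along: the non-blocking setup Garry describes looks roughly like this in portable BSD-sockets C. SockLib mirrors the BSD API, but the exact calls and headers below are POSIX assumptions, not code from the thread.)

```c
#include <sys/ioctl.h>
#include <sys/socket.h>

/* Put a socket into non-blocking mode so a read with nothing
 * waiting returns immediately instead of stalling the task.
 * FIONBIO is the classic BSD ioctl; on RISC OS the same
 * request goes through SockLib's socketioctl(). */
static int set_nonblocking(int sock)
{
    int on = 1;
    return ioctl(sock, FIONBIO, &on);   /* 0 on success, -1 on error */
}
```

You would call this once after connecting; from then on, reads return immediately with errno set to EWOULDBLOCK when nothing is waiting.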
Rick Murray (539) 13840 posts |
I have not programmed sockets in years… …but don’t you just keep retrying the same call so long as the error returned is “EWOULDBLOCK”? |
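(Rick’s retry pattern, sketched in portable C: a non-blocking read either returns data or fails with EWOULDBLOCK/EAGAIN, in which case you simply go back to polling and try again later. The helper name is mine, not SockLib code.)

```c
#include <errno.h>
#include <sys/ioctl.h>   /* FIONBIO, used when setting the socket up */
#include <sys/socket.h>

/* One non-blocking read attempt.  Returns the byte count, 0 if
 * the peer closed the connection, or -1 with errno set to
 * EWOULDBLOCK/EAGAIN when there is simply nothing to read yet --
 * in which case you return to Wimp_Poll and retry on a later poll. */
static int try_read(int sock, char *buf, size_t len)
{
    int n = (int)recv(sock, buf, len, 0);
    if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN))
        return -1;          /* no data yet: not an error, retry later */
    return n;
}
```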
Garry (87) 184 posts |
Hi Rick, Am I missing the obvious here? I guess I can’t use threads (and would likely rather not anyway). Cheers Garry |
Jeffrey Lee (213) 6048 posts |
I believe that what you want is the SocketWatch module, but I’ve got no idea where to find the latest version (or even if the latest version is 32bit compatible). I know it’s built into some versions of RISC OS Select, so if the original version isn’t being maintained anymore then perhaps it’s time we had our own clone of it. |
Raik (463) 2061 posts |
Look here ! |
Rick Murray (539) 13840 posts |
Or if you don’t want a dependency, why not call [I say this because when I installed the package for Nettle, it failed to start because of this module not being present nor being flagged as a dependency!] |
Garry (87) 184 posts |
Hi All, If I call wimp_poll, then I get constant events; that’s excellent, just what I’m looking for. However, the rest of my program seems to break. The event handlers I set up with event_register_toolbox_handler and event_register_wimp_handler cease to work, and on opening my window (which I handle redraw events for) the program crashes hard, and sometimes I need to restart. So now I’m back to using event_poll (part of OSLibSupport), but I don’t get my constant stream of events now. Ideas? I want the constant events of wimp_poll, but with my event handlers still working. As it happens, my program could receive network data at any time, so I’d never run wimp_poll_idle, but I’m OK with that. I’m OK with getting the constant events; it does not seem to slow things down. Plus I could ignore most of them, only checking for data every 10th WIMP event or something. wimp_poll would be perfect for me, except I need the rest of my events to keep working. Cheers Garry |
Garry (87) 184 posts |
Don’t worry, got it! Seems that event_poll masks out the NULL events by default. I changed that and now I’m getting NULL events, plus all my Toolbox/WIMP events. That means I can sniff the socket periodically to check for data. Excellent. |
Chris Johnson (125) 825 posts |
As you are using the oslib support event library, you can turn null events (or any other) on and off at any time using the functions event_get_mask and event_set_mask. I usually mask out the events I am definitely not interested in using event_set_mask immediately after event_initialise, and then turn eg null events on and off as required using event_get_mask for the current state and then event_set_mask with the current mask modified by appropriate logical operators and flags. |
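(A sketch of Chris’s read-modify-write on the poll mask. The wimp_MASK_NULL value matches OSLib’s bit 0; event_get_mask/event_set_mask are stubbed with a plain variable here so the bit arithmetic is self-contained — in a real app you would call the OSLib functions instead.)

```c
/* Bit 0 of the Wimp_Poll mask suppresses Null_Reason_Code
 * events (this matches OSLib's wimp_MASK_NULL). */
#define wimp_MASK_NULL 0x1u

typedef unsigned int wimp_poll_flags;

/* Stand-ins for OSLib's event_get_mask()/event_set_mask() so
 * the mask arithmetic can be demonstrated self-contained. */
static wimp_poll_flags current_mask;
static void event_get_mask(wimp_poll_flags *m) { *m = current_mask; }
static void event_set_mask(wimp_poll_flags m)  { current_mask = m; }

/* Turn null events on (clear the bit) or off (set the bit)
 * without disturbing any other mask bits -- the
 * read-modify-write with logical operators Chris describes. */
static void set_null_events(int enable)
{
    wimp_poll_flags mask;
    event_get_mask(&mask);
    if (enable)
        mask &= ~wimp_MASK_NULL;    /* nulls delivered   */
    else
        mask |=  wimp_MASK_NULL;    /* nulls masked out  */
    event_set_mask(mask);
}
```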
WPB (1391) 352 posts |
For the responsiveness of the system as a whole, you should try not to receive NULL events when you don’t need to. So when you know you’re not waiting for socket data, for example, you could mask them out. In other words, it’s something you’ll probably want to change dynamically. |
Malcolm Hussain-Gambles (1596) 811 posts |
The way I do it is using Wimp_PollIdle, and set the next poll time based on the amount of data read last time versus the size of the buffer. So for example if I read 17k of data I poll as quickly as I can, as it means the data is coming in faster than I can read it; if I read 12k of data I’ll poll after a medium delay; and if I read 5k or less I’ll poll after an even longer delay. I think the Pandaboard isn’t quite fast enough to handle 100Mbit with polling, so if you’re looking to guarantee maximum throughput you may want some extra logic in the 17k case to re-read the socket before returning – but this is always going to be a juggling act between throughput and smoothness of the WIMP. This issue doesn’t just affect co-operative systems, it affects all OSes – you just tend to notice it less on pre-emptive ones, and people usually ignore it there because the effects are less irritating. Also avoiding DNS lookups is a good thing, as resolving is a synchronous call at the moment :-( These rules aren’t perfect and need much, much better logic – but they seem to work well enough. |
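(Malcolm’s throttle, reduced to a sketch: pick the next Wimp_PollIdle delay from how much of the buffer the last read filled. The thresholds and centisecond delays below are illustrative guesses, not his actual numbers.)

```c
/* Choose the next Wimp_PollIdle delay, in centiseconds, from
 * how much of the read buffer the last read filled.  A full
 * buffer means data is arriving faster than we drain it, so
 * poll again immediately; a trickle can wait longer. */
static int next_poll_delay_cs(int bytes_read, int buffer_size)
{
    if (bytes_read >= buffer_size)        /* buffer filled up      */
        return 0;                         /* poll again at once    */
    if (bytes_read > buffer_size / 2)     /* steady flow           */
        return 10;                        /* ~10 polls per second  */
    return 25;                            /* trickle: ~4 polls/sec */
}
```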
Rick Murray (539) 13840 posts |
Many moons ago I wrote a web fetcher that did all sorts of convoluted calculations to determine the absolute best polling speed. Turns out that there was not much between doing that and just polling repeatedly.
And just switch between them as appropriate. You are unlikely to save any noticeable amount of time by calculating optimal polling frequencies, and it comes at the expense of more complicated code. |
Jeffrey Lee (213) 6048 posts |
Or you could just use SocketWatch. |
Steve Pampling (1551) 8170 posts |
Hmmm, I have this idea for a nearly circular thing that could, as a set of four, fit on a pair of rods under a platform to help it move around. |
Malcolm Hussain-Gambles (1596) 811 posts |
@Rick, the point wasn’t to reduce time, it was to reduce the overhead as much as possible without going bananas. And it does make a difference when you’re doing quite a few fetches. If you’re doing a one-off, or even 10, it probably doesn’t matter, but in my case I’m doing a lot. I’ll have to have a detailed look at SocketWatch – that really looks great! |
Rick Murray (539) 13840 posts |
??? To me that’s just a different way of saying the same thing. How do you define overheads if not by using a measurement akin to “x amount of stuff done in y amount of time”?
How are you quantifying this difference? The number of polls per second? The CPU load? How it “feels” to use the machine? I’m not really convinced that a non-blocking sockets app that uses constant polling during the data transfer is going to make a big impact on system performance – unless there’s something else going on (waiting on slow media, etc). To test the theory, I just threw together a little app that does nothing except call Wimp_Poll and, on a null event, call XOS_GenerateError. I ran it. There is perhaps a measurable difference in system behaviour, but it felt the same to use. Maybe tomorrow I’ll add code to support Message_Quit and then run a bunch of copies to see what happens… Don’t get me wrong here – I’m not saying “don’t bother to try to reduce overheads”. What I am saying is: 1, if your application’s raison d’être is to receive some sort of data from the internet, then this should take priority and be the important thing; and 2, the method of reducing overheads should be the simplest code that has the desired effect. |
Garry (87) 184 posts |
Hi all, So I’ve made my program only ever sniff the socket on every 10,000th NULL event, which seems to result in maybe 5 checks per second, so probably still more than I need. Data coming from the server can be as small as 7 bytes, so I don’t think watching how full the buffers are would work for me: even if only 1 byte is in the buffer, I’d bet money there will be more a fraction of a second later. So far, in ‘feel’ there is no difference that I can detect, so I’m pretty happy with how things are going right now. Cheers Garry |
Jeffrey Lee (213) 6048 posts |
I’ll admit that on a modern machine, the difference in system responsiveness between polling on null events and using a poll word is negligible. But there is one use case where it’s significant – portable machines running off batteries. If no Wimp tasks are using null poll events then the OS can stop the CPU completely using wait-for-interrupt whenever you call Wimp_Poll and the Wimp runs out of other things to do. But if null poll events are in use then the system can’t go to sleep, because it doesn’t know what the tasks are doing (are they merely waiting for something, or are they doing some number crunching?) Even on non-portable systems, the decreased heat from using lower-power CPU modes is welcome, especially as CPU speeds and TDPs keep increasing. Disclaimer: The Wimp currently doesn’t use exactly the logic described above. But the logic it does use is heavily influenced by how many null poll events occur per second – if too few are occurring (i.e. the system is under heavy CPU load) then it won’t switch the CPU to low speed or use wait-for-interrupt. Maybe it’s time !Usage was updated to show which power saving modes are in use. |
Rick Murray (539) 13840 posts |
Jeffrey:
…which isn’t going to be a big issue when you are actively transferring data; which is why right at the top I say to use PollIdle when not transferring data. Yes, it would be nice if !Usage could dynamically reflect the current processor speed and whether it is high or low (on systems where this is an option). Garry:
So what you want here is to run on regular Wimp_Poll when you know you have data to receive, and switch back to Wimp_PollIdle with a short delay (33cs = 3 polls per second; 25cs = 4 polls per second; etc) to perform the sniffing on the Null events. In pseudo code, it’ll look a little something like this:
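(A reconstruction of the sort of thing meant — names are OSLib-flavoured assumptions:)

```
transferring = FALSE
WHILE running
  IF transferring THEN
    reason = Wimp_Poll(mask with Nulls enabled)          ' flat out
  ELSE
    reason = Wimp_PollIdle(mask, monotonic time + 25)    ' ~4 polls/sec
  ENDIF
  IF reason = Null THEN
    sniff the socket
    IF data waiting THEN transferring = TRUE             ' speed up
    IF transfer complete THEN transferring = FALSE       ' calm down
  ENDIF
  handle other reason codes as usual
ENDWHILE
```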
|
Malcolm Hussain-Gambles (1596) 811 posts |
@Rick – agree completely. But on a RiscPC this takes around a minute and a half to do, and when I’m hoping to refresh every five minutes, anything I can do to improve desktop responsiveness is going to be a good thing. You could answer that by saying “yeah, but you’re bonkers”. That would be completely accurate ;-) |
Rick Murray (539) 13840 posts |
In my (minor) experience, a good optimisation would be to omit ChangeFSI – it takes a lot of time. Can you not use JPEG_PlotFileScaled (or regular PlotScaled if the images fit in memory)? Would that improve responsiveness? A cheat, of course, is judicious use of polling – find the places where things are laggy and drop in some polling. It’ll make the updates take longer, but the desktop smoother. You could always take the Microsoft approach and offset any time taken to build the Main News list by showing the user a cute little animation. You know, like the flying file for a file copy, or the envelope that sprouts wings and flies away for an email being sent. It doesn’t make anything better, but the user now believes some complex magical process is happening. |
Jeffrey Lee (213) 6048 posts |
Remember that lazy task swapping is only supported on StrongARM rev T and above. So getting rid of needless null poll events can actually make a significant difference to the responsiveness of older machines. My RiscPC was purchased in 1998, after the cancellation of Phoebe, and even it doesn’t support lazy task swapping! |
Steve Fryatt (216) 2105 posts |
In which case, you shouldn’t be taking those intermediate 9,999 null polls. If you’re checking your socket 5 times per second, use Wimp_PollIdle with a 20cs delay instead. |
Rick Murray (539) 13840 posts |
Hmm, this implies about 50K polls per second? [has anybody actually timed the poll rate on modern hardware?] |
Martin Avison (27) 1494 posts |
Nooo! Please!! Just imagine what would happen if your application was running on a machine alongside three other applications doing similarly nasty processing. Your poll rate would plummet by a factor of at least 10 (because of swapping overheads), so your 5 checks per second would drop to only one every two seconds. On a slower machine it could be far, far fewer. Using PollIdle you can easily check 5 times per second, regardless of machine speed and regardless of what else is running (unless a process takes longer than 1/5 second, anyway), and you would not affect other applications. The PRM (page 3-116) officially deprecates the use of redundant Null polls, for all the reasons listed here and above. My TaskUsage application lets you see the CPU use per task (similar to !Usage), but it will also show the rates for each poll reason code, in total or per task.
My Iyonix, with a normal number of tasks running, will process 10,000 Null polls per second. I think I have seen double that, and faster machines will manage more, I am sure. |
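(One practical detail of Martin’s 5-checks-a-second PollIdle: the interval is 20cs, and the monotonic timer handed to Wimp_PollIdle is a wrapping 32-bit centisecond count, so compare times with signed subtraction rather than a plain `>=`. The helper names below are mine, sketched in portable C.)

```c
/* 5 socket checks per second = one check every 20 centiseconds. */
#define CHECK_INTERVAL_CS 20u

/* Earliest-return time to hand to Wimp_PollIdle: the current
 * monotonic time plus the interval.  Unsigned wrap is harmless. */
static unsigned next_check(unsigned now_cs)
{
    return now_cs + CHECK_INTERVAL_CS;
}

/* True once 'when' has been reached, correct even when the
 * 32-bit centisecond timer wraps around. */
static int time_reached(unsigned now_cs, unsigned when)
{
    return (int)(now_cs - when) >= 0;
}
```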