URL_Fetcher and AcornHTTP
|
Ouch! :-)
This wasn’t really debugging time with the fetcher. It just happened that I tried seeing what the printer was sending back, and the bits of the puzzle fell into place. Yes, it’s an issue with the fetcher, but now that it has been identified it can be looked at. The GET request method already claims to support HTTP/1.1, so there’s no reason why POST shouldn’t. My code does this, but since (as far as I’m aware) HTTP/1.1 ought to be backwards compatible, I don’t see why it shouldn’t work to say we’re 1.1 while using a roughly 1.0-level feature set. After all, a basic fetch is a static, stateless thing. The client sends a request (with optional data) to the host. It doesn’t go through a telnet-like rigmarole of negotiation first…
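Purely for illustration (the target path, headers and body below are invented, not taken from the module), the request itself barely changes between versions; a small POST on the wire is just:

    POST /ipp/print HTTP/1.1
    Host: printer.example
    Connection: close
    Content-Type: application/ipp
    Content-Length: 11

    hello-world

The sting in the tail, which comes up again further down the thread, is that a client claiming 1.1 is also expected to cope with what a 1.1 server may send back, persistent connections and chunked transfer coding in particular (Connection: close at least deals with the former).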
They’re there. They’re just here, and there, and a bit over there. It sort of makes sense if you’re treating each bit as a discrete entity (this is the URL part, this is the HTTP part…) but it’s a pain if one is trying to tie all the bits together to make a working whole.
Yes. I use it in Manga now. And my Vonets configuration tool. In fact, while it can be a bit quirky, it’s easy to use and does the job. Tell it “go get this” and poll until something happens. Some bits can be improved, but on the whole it’s a hell of a lot simpler than writing socket code and it works just as well from BASIC and C.
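To show the shape of that “go get this and poll” pattern, here is a rough C sketch. The url_* functions and status bits are hypothetical stand-ins, not the real API: in a real program each would be a thin veneer over the URL_Fetcher SWIs (URL_Register, URL_GetURL, URL_ReadData, URL_Status, URL_Stop, URL_Deregister), whose exact register usage is in the module documentation and is not reproduced here.

    /* Sketch only: the poll-until-done shape of a URL_Fetcher client.
     * The url_*() functions and the status bits are hypothetical stand-ins
     * for the real SWIs and their flag words. */
    #include <stddef.h>

    typedef struct url_session url_session;        /* opaque fetch handle */

    extern url_session *url_start(const char *url);            /* URL_Register + URL_GetURL  */
    extern int  url_status(url_session *s);                     /* URL_Status                 */
    extern int  url_read(url_session *s, void *buf, int max);   /* URL_ReadData               */
    extern void url_finish(url_session *s);                     /* URL_Stop + URL_Deregister  */

    enum { STATUS_DATA_WAITING = 1, STATUS_DONE = 2, STATUS_FAILED = 4 };

    /* Fetch a URL, handing each block of data to a caller-supplied function. */
    int fetch(const char *url, void (*consume)(const void *data, int length))
    {
        url_session *s = url_start(url);
        char buffer[4096];
        int status;

        if (s == NULL) return -1;

        do
        {
            status = url_status(s);          /* poll; a Wimp task would do this
                                                from its null-event loop */
            if (status & STATUS_DATA_WAITING)
            {
                int got = url_read(s, buffer, sizeof buffer);
                if (got > 0) consume(buffer, got);
            }
        } while (!(status & (STATUS_DONE | STATUS_FAILED)));

        url_finish(s);
        return (status & STATUS_FAILED) ? -1 : 0;
    }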
This, this, this, and this. |
|
If you’ve been following the mbedTLS mailing list etc. you’ll be aware that there has been a lot of effort in the last few months to implement TLS 1.3. |
|
I guess you have not followed the libcurl mailing list recently? (lib)curl supports an awful lot of encryption backends. And they are going to extraordinary efforts to maintain backward compatibility and they are very slow at deprecating outdated stuff (currently, there is a discussion to deprecate NSS, and there are an awful lot of reasons why NSS is a lot worse than mbedTLS). But Daniel Stenberg (curl mastermind) has also made clear that the recommended TLS backends are either the native Windows/MacOS ones or the stuff with a good track record on quickly supporting the latest standards (like OpenSSL, GnuTLS, wolfSSL).
Well, I am not at all surprised about the problems that such old code is giving us. I am just surprised that we still insist on using it while much better (debugged, battle-tested, real-world-compatible) stuff exists out there.
I am well aware of the whole history, this is one of the reasons why I came to the conclusion that it would be a good idea to abandon it and switch to something stable, maintained, tested, well-documented, lean, with a friendly license.
Why do you want SWIs? Of course there is the BBC BASIC use case, but as I said: libcurl is very portable, very flexible, and suited to a minimal-feature configuration, so it should be easy to wrap into a module. It does not expect the host OS to support fancy features, because it is used on nearly-bare-metal embedded OSes.
ISTR that Matthew Phillips has wasted quite a bit of time on that. Didn’t it even result in a merge request? libcurl has very good docs, extensive tutorials, and an active mailing list where even questions from users of strange OSes are handled in a very friendly way. And that is in addition to its handling of many, many protocols.
I remember discussion about the shortcomings of cookie handling. And problems when connecting to various real-world servers. And the chunked transfer handling you mentioned. And there is the problem of the completely outdated set of protocols handled – how about HTTP/2 and HTTP/3 support? Authentication via NTLM and Kerberos? Or other interesting stuff that could be used by many applications like SMTPS, SCP, SFTP or FTPS? Why not base LanManFS on libcurl’s SMB capabilities? And it is IPv6 compatible.
I know that they are working on it, but they have been planning to do so since 2016 (see the related GitHub issue). Other TLS libs had TLS 1.3 support in 2018. mbedTLS seems to be very late to the game, which is never a good sign for an encryption library, because it is a sign of a lack of development manpower, and that could prove fatal in the case of a real security issue. Bottom line, as I said previously… I don’t have any interest in that topic on RISC OS; I am just observing the typical patterns of not-invented-here and stay-with-known-broken-stuff-until-it-really-hurts that are, in my opinion, responsible for a lot of wasted developer time. |
|
What we’re looking at is a problem of technical debt. A common problem. I really don’t think the approach of throwing old stuff out, and throwing out compatibility with it, is sensible. It’s unhelpful just saying “we have a library, use that” unless it can be used by BASIC too, and it can’t – unless someone invests a whole lot of time to make a module from it that’s completely upward compatible from the present one. And I see you’re not volunteering for the job. |
|
I tried this evening to build one from the most recent nightly’s sources, but the build process errors out with “Window manager in use”. I have to try again tomorrow as I’ve run out of time today. |
|
Indeed, especially in RISC OS world. There are various strategies to tackle it.
Then keep the old stuff if you need it for compatibility reasons. No problem.
IMHO it is helpful to identify a library that is used basically everywhere, has a very liberal license, and has a reputation for being very reliable and well-maintained, with good documentation, as well as being easily portable. And it has an accompanying application that is even shipped with Windows nowadays and is used everywhere as THE universal network client on the command line. Even the PHP guys rely on it. Whether it would be sensible to try to build a compatibility module to mimic an API that seems to be both complicated and underdocumented – I don’t know. How many pieces of software use that API? Will they be modernized when IPv6 becomes available (and who will do the work to adapt the API to that new world)? Do they need shiny new updates to existing protocols? Do they need secure communication? Will someone extend the existing module world with support for needed protocols? Is it possible to quickly identify and fix remaining bugs? Many unanswered questions. Maybe the way forward would be to think of a sensible API (there must be a reason why HTTPlib was created instead of just using the pure SWIs, after all) and then create a module that implements that API with libcurl? Or provide a subset of the libcurl API as SWIs for BASIC users?
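To give an idea of what such a module would be wrapping, here is a minimal sketch using libcurl’s easy interface (the URL is a placeholder and error handling is kept to the bare minimum; a RISC OS module would more likely build on the non-blocking multi interface so callers can poll, but options are set the same way):

    #include <stdio.h>
    #include <curl/curl.h>

    /* Called by libcurl as response data arrives; here we just print it. */
    static size_t on_data(char *ptr, size_t size, size_t nmemb, void *userdata)
    {
        (void)userdata;
        fwrite(ptr, size, nmemb, stdout);
        return size * nmemb;
    }

    int main(void)
    {
        CURL *h;
        CURLcode rc;

        curl_global_init(CURL_GLOBAL_DEFAULT);
        h = curl_easy_init();
        if (h == NULL) return 1;

        curl_easy_setopt(h, CURLOPT_URL, "http://example.com/");
        curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, on_data);
        curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1L);

        rc = curl_easy_perform(h);
        if (rc != CURLE_OK)
            fprintf(stderr, "fetch failed: %s\n", curl_easy_strerror(rc));

        curl_easy_cleanup(h);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }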
Entirely correct. At the end of the day, it is a question of “effort expected”. There is a lot of sunk cost in URLFetcher/AcornHTTP/AcornSSL, but that does not mean that it is sensible to follow that way further – it might be a cul-de-sac and a major future time sink. Of course the developers that do the actual work decide, and I will not be one of them. I will be pleased if I find out some day that someone implemented HTTP/3 inside the current module framework, but I somehow doubt that this will happen. |
|
Emailed. |
|
Digging around the source for AcornHTTP, there is a flag defined (bit 3) as flags_EXTRA_DATA_IN_FILE, but there is no reference to this value. So someone was already thinking about adding the facility. |
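Purely as a guess at the intent (none of this is AcornHTTP source; the session fields and helper below are invented, only the flag name comes from the real sources), honouring that flag might look something like this: when it is set, the “extra data” pointer names a file and the body is streamed from disc, so a large POST never has to sit in RAM.

    /* Hypothetical sketch of honouring flags_EXTRA_DATA_IN_FILE.
     * The session fields and send_body_bytes() are invented for
     * illustration; only the flag name exists in the real sources. */
    #include <stdio.h>

    #define flags_EXTRA_DATA_IN_FILE  (1u << 3)   /* bit 3, per the header */

    typedef struct
    {
        unsigned int flags;
        const char  *extra_data;      /* body bytes, or a filename if the flag is set */
        size_t       extra_data_len;
    } http_session;

    extern void send_body_bytes(http_session *ses, const void *data, size_t len);

    static void send_request_body(http_session *ses)
    {
        if (ses->flags & flags_EXTRA_DATA_IN_FILE)
        {
            /* Stream the file in modest pieces so a multi-megabyte print
             * job never has to be held in memory by the client. */
            FILE *f = fopen(ses->extra_data, "rb");
            char   block[8192];
            size_t got;

            if (f == NULL) return;    /* real code would raise an error */
            while ((got = fread(block, 1, sizeof block, f)) > 0)
                send_body_bytes(ses, block, got);
            fclose(f);
        }
        else
        {
            /* Existing behaviour: the extra data is the body itself. */
            send_body_bytes(ses, ses->extra_data, ses->extra_data_len);
        }
    }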
|
Sadly, it’s completely broken. No fetches work at all (not even Manga, which uses GET). I’m guessing you fiddled this:

    if (ses->method == 1)
    {
        /* We only do this got GET */
        proto_minor_version = 1;
    }
    else
    {
        proto_minor_version = 0;
    }

? If so, is |
|
While repairing !Printers again (it’s so flakey), I noticed that the LaserJet class icon (“lj”) looks like an inkjet printer. Is there any particular reason for this? Sometimes it works, sometimes it does this when loading Printers, including from a reboot: What the heck is looking for “squished.SupportDP”? I can’t find that in the !RunImage, the Code, PDriver module, nor PDriverDP, nor PDumperSupport. Had a quick whizz through an older (5.23) ROM image, don’t see that there either… Hmm…? |
|
Exactly.
All I did was make one small edit and build it. I notice that the minor version has advanced two steps (I didn’t change that). I have no idea what else has gone on. |
|
Yeah. My thoughts were more in keeping with Father Jack. ;) We’ll have to suss this one. Can’t be giving Steffen easy points here! |
|
I just tried it. It works for me… yes, I double-clicked the one out of the email I sent you. Maybe it’s another one of those data-dependent bugs. I tested it with one of the apps that gets the attributes from a printer. What did you do? There are comments in the code to the effect that the module claims 1.0 because 1.1 would require it to handle chunked transfers, and it doesn’t. So I imagine it might fail if you expect a big or unpredictable response to your POST. |
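To make concrete what “handle chunked transfers” means: an HTTP/1.1 client has to accept bodies framed as hex-length-prefixed chunks ending in a zero-length chunk, and a parser written for 1.0 responses will not understand that framing at all. A minimal sketch of the decoding (not AcornHTTP code; it ignores trailers and assumes the whole body is already in memory) is below.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sketch only: decode a complete HTTP/1.1 chunked body held in memory.
     * Real code must also handle trailers and data arriving in pieces. */
    static size_t dechunk(const char *in, size_t inlen, char *out, size_t outmax)
    {
        size_t used = 0, written = 0;

        while (used < inlen)
        {
            char *end;
            unsigned long clen = strtoul(in + used, &end, 16); /* chunk size in hex */
            const char *data = strstr(end, "\r\n");            /* end of size line  */
            if (data == NULL) break;
            data += 2;

            if (clen == 0) break;                    /* "0\r\n\r\n" terminates */
            if (written + clen > outmax) break;

            memcpy(out + written, data, clen);
            written += clen;
            used = (size_t)(data - in) + clen + 2;   /* skip data + trailing CRLF */
        }
        return written;
    }

    int main(void)
    {
        const char body[] = "4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n";
        char decoded[64];
        size_t n = dechunk(body, sizeof body - 1, decoded, sizeof decoded);
        printf("%.*s\n", (int) n, decoded);          /* prints "Wikipedia" */
        return 0;
    }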
|
Replaced AcornHTTP in !System with this one (renaming the original _AcornHTTP), so it starts at boot. Ran Proto, to the HP. It seemed to do nothing, and returned a code and status of zero. Tried Manga. It set up to fetch the index, as it does if you’ve not used it recently. Nothing happened, until an eventual timeout.

RMKilled that AcornHTTP, manually reloaded the original, and repeated. Proto reported a 505 as expected. Manga fetched the index. Note that Proto POSTs a small amount of data (as you know, you wrote it!) and Manga does a normal GET request that returns a lot of data. As GET is bumped to HTTP/1.1 anyway, it shouldn’t make much difference.

I noticed the line a little further down said:

    sprintf(reqline, "%s%s HTTP/%d.%d",
            ses->endhost ? "" : "/", ses->uri,
            proto_major_version, proto_minor_version);

and since the versions were at the end, I did a binary hack of the original module to force the string to HTTP/1.1 and… same behaviour. It sat twiddling its fingers doing nothing.

Reloaded the original and… am now reading Anne Freaks. It’s much better in English! I bought the books about fifteen (or so) years ago, in French.

How very peculiar! |
|
Reverted to the original stack. Same behaviour. Back to the ROD stack. Which reminds me, there’s a newer version I’ll have to install. I need to see what the installer is doing, because it likes tramping on files in !Boot which isn’t nice if there are customisations in place… |
|
As for the crappy Epson – https://youtu.be/DGjsPgLUel8 If you don’t care what’s inside a printer, skip ahead to 14m10s… 😉 |
|
Our Epson printer has served us faithfully for… so long that I can’t remember when we bought it, but it must be well over 10 years. Edit: the Epson manual, found on the Internet, is dated 2004. Sounds about right. |
|
As I mentioned, the fx-80 was a tour de force. The company might be run by beancounters rather than engineers these days. And we know how that goes: just look at Boeing. |
|
so true… :( |
|
On the other hand remember Rolls Royce. The RB211 engine nearly didn’t happen because the company was run by engineers, and the finances went unchecked. It does need both, but a balance is important. |
|
While I like the URL_Fetcher interface and have written or helped with several applications which use it, I think Steffen makes a serious point about whether continued maintenance and enhancement of the modules is good use of time. URL_Fetcher was designed to have a number of back-end modules, to support FTP, Gopher, and any other fancy new protocols that emerged. What was not anticipated was that HTTP was going to make most of the rest obsolete. Consequently we have an API between URL_Fetcher and the protocol modules which is not adequate for modern HTTP use and is difficult to enhance. Dave started this thread with the issue of posting large quantities of data. People have mentioned newer versions of HTTP. I would add the following deficiencies:
I’m happy to fix a few bugs to keep the code working, and I’d be prepared to contribute to a modernisation effort, but for my purposes a Norcroft-friendly compilation of libcurl would be a perfectly acceptable alternative. |
|
Just to add that I don’t know very much about the RISC OS printing system, and I suppose that for IPP support it may be necessary for the communication to be done using a module, in which case I can see that enhancing AcornHTTP might be the simplest option. |
|
I wasn’t sure what the discussion a bit up-thread about HTTP 1.1 related to. This reminded me that I still have not committed a change from 10 February, when I discovered that the reason my printer was not responding via AcornHTTP was the Accept-Encoding value in the header which AcornHTTP generates. See this post for the details and a link to the modified module. I’ll try to open a merge request in the next few days. |
|
Oh, a debug build. Cool. What do I need to get the debugging info? DaDebug (my preferred)? Reporter? Something else? |
|
All that an app needs to do is give its own Accept-Encoding header, which will override the default. |
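For example, assuming the extra header is supplied through whatever request data the fetcher API already accepts (the exact mechanism isn’t shown here), a single line is enough; “identity” is the standard token meaning “no content codings”:

    Accept-Encoding: identity

Whatever the application supplies replaces the generated default, so AcornHTTP’s problematic value never reaches the printer.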