Enforced HTTPS - coming to a browser near you
Steve Pampling (1551) 8170 posts |
I will let this linked page do the explaining, but in short: major browser developers are starting the push to eliminate mixed HTTP/HTTPS use in a page, which would see certain sub-elements of a page not loading. NB. Gravatar use in these pages is already over HTTPS, so people who like that are still OK.

Also noted in the referenced page is the HTTPS-First policy. Recent checks showed some RO related websites using HTTP but sitting on an incompatibly configured HTTPS host; the forthcoming HTTPS-First policy would break access to those sites.

I’ve seen work problems relating to this clampdown on mixed use (HTTP in HTTPS pages) when using MS Edge or Chrome. Fortunately, we have an experienced team, so no showstoppers so far, but even there the “solution” at present is tweaking to force legacy mode use. Not a long-term strategy.

So, if you have web pages, and you want them to be accessible to MS Edge/Chrome/Firefox users, it’s time to review them and configure them appropriately.
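If your pages go through PHP, one common belt-and-braces measure is to bounce any plain-HTTP request to its HTTPS equivalent before serving anything. A minimal sketch, assuming the host populates $_SERVER['HTTPS'] in the usual way:

    <?php
    // Redirect plain-HTTP requests to the HTTPS equivalent of the same URL.
    if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
        $target = 'https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
        header('Location: ' . $target, true, 301);
        exit;
    }

That only fixes the page address itself, of course; any http:// sub-resources within each page still need editing. |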
Stuart Painting (5389) 714 posts |
Two of the affected websites are www.riscos.com and www.virtualacorn.co.uk (HTTPS fails with a certificate error; HTTP access still works) and I understand Aaron is unwell so may not be in a position to correct the site certificates. Both sites have been excluded from the Wayback Machine, so if any of you have been meaning to consult any of the documentation on those websites (for example, there are HTML versions of books that may be difficult to find elsewhere) I suggest you do so while you still have the chance. |
Steve Pampling (1551) 8170 posts |
To be frank, that has two failings. The second is that the certificate has expired. The first is that even before it expired it would have prompted an error on the first visit, since the certificate is a “self-signed” one, i.e. it’s actually generated on the server itself and is of a type that should never be used on the open net. If that were corrected with something like a Let’s Encrypt certificate, it would work for the moment; but if you check through, there is a mix of https and http link content, so it’s a good example of a site that will likely fail once the browser developers tighten things.
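You can see both failings without leaving the armchair; a minimal sketch of pulling the certificate apart, assuming PHP built with OpenSSL support (the hostname is the one under discussion):

    <?php
    // Fetch the server's certificate without validating it, then inspect it.
    $ctx = stream_context_create(['ssl' => [
        'capture_peer_cert' => true,
        'verify_peer'       => false,  // we *want* to see the bad cert
        'verify_peer_name'  => false,
    ]]);
    $sock = stream_socket_client('ssl://www.riscos.com:443', $errno, $errstr,
                                 10, STREAM_CLIENT_CONNECT, $ctx);
    if ($sock === false) die("Connect failed: $errstr\n");
    $params = stream_context_get_params($sock);
    $cert = openssl_x509_parse($params['options']['ssl']['peer_certificate']);
    echo 'Issuer:  ', $cert['issuer']['CN'] ?? '?', "\n";
    echo 'Expires: ', date('Y-m-d', $cert['validTo_time_t']), "\n";

A self-signed certificate gives itself away by the issuer matching the subject, and the expiry date speaks for itself. |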
Clive Semmens (2335) 3276 posts |
Ah. I should take a look at my website for this issue, I presume. |
Rick Murray (539) 13840 posts |
I guess it is coming to the time when this (http://heyrick.eu/blog/index.php?diary=20240611 – note the obvious box at the top) will no longer be optional. |
Steve Pampling (1551) 8170 posts |
https://www.riscos.com illustrates what can happen if a site is set up with just HTTP – nothing earthshaking about that at the time many of these were done – and then time moves on and we have an HTTPS-First push:

Error code: SSL_ERROR_BAD_CERT_DOMAIN

Again, using a Let’s Encrypt certificate would maybe1 fix that, and not break the bank.

1 Saying maybe as the reverse lookup of the IP gives web17.extendcp.co.uk:

> 176.32.230.17
Got answer:
Name: web17.extendcp.co.uk
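For what it’s worth, that reverse lookup is a one-liner if you have PHP to hand; gethostbyaddr is a standard function, and the IP is the one from the answer above:

    <?php
    // Reverse-resolve the IP: the PTR record names the shared-hosting box.
    echo gethostbyaddr('176.32.230.17'), "\n"; // prints web17.extendcp.co.uk

The shared-hosting name is the clue that the certificate may not be under the site owner’s direct control. |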
Clive Semmens (2335) 3276 posts |
My own website’s pages are all https; Hostinger does that for me, and all my links to my own site are internal. But I link to other sites via http from many of my pages. Do I need to check that the linked sites are already https and change my links to suit, or will people’s browsers automatically try https anyway? |
Stuart Swales (8827) 1357 posts |
Some hosting companies like to milk their customers and don’t allow installation of Let’s Encrypt certificates for SSL. |
Steve Pampling (1551) 8170 posts |
That was the intention of highlighting this. I think you’re OK, Clive – minor tidy-up, e.g. the link to the Green Party manifesto pages is HTTP (and they divert to HTTPS); same for East Cambs Green Party and Wikipedia. I’d really like it if no RO related sites became inaccessible due to avoidable incompatibilities with major browsers, hence the reminder/heads-up. Slightly ironically, the sites Stuart singled out also bring some assistance, as this page on VirtualAcorn has a link to a link-checker utility, XENUlink, that will trawl your website and examine the links. |
Steve Pampling (1551) 8170 posts |
Why do I feel Druck stirring from slumber to mutter about justice for “Squirrel abusers” or some such? |
Clive Semmens (2335) 3276 posts |
Crikey – was that you who crawled ~1700 pages of my site at great speed around 10:00 this morning, Steve? Many thanks, anyway! Edit… And then, about 12:00, all the same ones (apart from one…) and perhaps a handful that didn’t get crawled 1st time? There’s actually a lot that whoever it was hasn’t crawled… :) |
Steve Pampling (1551) 8170 posts |
XENU says there are 3304 URLs in there :) Then there are the ones that are plain broken or need a little work:
|
Clive Semmens (2335) 3276 posts |
That is brilliant, Steve – you’re a hero! I wouldn’t have had a clue what to do, other than work my way through the whole thing checking every link & editing as necessary… Over the years, I’ve had a handful of emails mentioning broken links, that I’ve fixed or removed as appropriate – but I know from the logs that lots of people either don’t try them anyway, or don’t bother to report them. |
Steve Pampling (1551) 8170 posts |
I’d do a search in the files for the http:// instances and convert them to https://. You have more work on the ones in the Textile-buggered list of 16 above. If they are friends/relatives you can nudge them to update; others – maybe delete, or make a note that time and tide… Anyway, I thought that having dug a hole, I might as well put some effort into pointing to the easy route out.
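The bulk conversion is scriptable too; a minimal sketch, assuming a local copy of the site under site/ and that the listed hosts have already been checked as serving HTTPS (the host names here are purely illustrative):

    <?php
    // Rewrite http:// links to https:// for hosts confirmed to support it.
    $hosts = ['en.wikipedia.org', 'www.greenparty.org.uk']; // illustrative
    $it = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator('site/', FilesystemIterator::SKIP_DOTS));
    foreach ($it as $file) {
        if (!$file->isFile()) continue;
        if (!in_array($file->getExtension(), ['html', 'php'])) continue;
        $text = file_get_contents($file->getPathname());
        foreach ($hosts as $host) {
            $text = str_replace("http://$host", "https://$host", $text);
        }
        file_put_contents($file->getPathname(), $text);
    }

Run it on a copy first, naturally. |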
Clive Semmens (2335) 3276 posts |
Yup, that much I can manage! It’s knowing which ones would break that would have been a problem for me.
You’ve certainly done that! Many, many thanks. As I say, you’re a hero. |
Rick Murray (539) 13840 posts |
What did you use to do this? I ought to cast an eye over the dung pile that passes for my site… |
Steve Pampling (1551) 8170 posts |
From earlier:
Needs a PC. Then just do a find in the window that opens, looking for http://; or export as txt, drop into a spreadsheet app, sort by host, remove duplicate hosts, check the response of each host to being called with a browser (Firefox) set to HTTPS-only, and recheck any that fail with a more forgiving setting on the same browser. Normal fayre for an IT support person, from my viewpoint1. I probably should dig around for a few better tools.

1 Which might not be that normal, if you take the reactions of my younger cow-orkers as setting the normal standard and consider how close I match it (or not).
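The spreadsheet stage is also scriptable; a minimal sketch of extracting the distinct hosts, assuming a plain-text export with one URL per line (links.txt is a made-up filename):

    <?php
    // Read a one-URL-per-line export and print each distinct host once.
    $hosts = [];
    $lines = file('links.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    foreach ($lines as $url) {
        $host = parse_url(trim($url), PHP_URL_HOST);
        if (!empty($host)) $hosts[$host] = true;
    }
    ksort($hosts);
    foreach (array_keys($hosts) as $host) echo $host, "\n";

That leaves just the “poke each host with an HTTPS-only browser” step to do by hand. |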
Clive Semmens (2335) 3276 posts |
Mac won’t do? |
Steve Pampling (1551) 8170 posts |
There’s mention of it working under a particular utility in the web page I referenced. |
Steve Pampling (1551) 8170 posts |
Rick:
Works on https (or http (rather than hhtp)) :) |
Rick Murray (539) 13840 posts |
I’ve found a W3C link checker https://validator.w3.org/checklink, but it’s kind of slow: rather than just listing links, it tries to follow and validate them. It’d take all weekend to parse my blog, never mind anything else, so I stopped it.
I guess there’s some work to do. Are those links on the /blog part? That’s my main concentration these days. Speaking of which, does it even sanely handle the /blog part? I’ve found some things consider it to be a single page (/blog/index.php), as they don’t understand that the parameter (?diary=YYYYMMDD) changes the content. I guess this is why some things (like wikis) use quiet redirects, so the user might see /blog/articles/YYYYMMDD, which is a fake address that is changed to the internal reference; so it works and has a unique URI for each page.
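A minimal sketch of that quiet-redirect trick, assuming the server is already configured to route /blog/articles/* requests to this script (the paths here are placeholders):

    <?php
    // Hypothetical front controller: turn a fake per-article address like
    // /blog/articles/20240611 back into the real ?diary= parameter.
    $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
    if (preg_match('#^/blog/articles/(\d{8})$#', $path, $m)) {
        $_GET['diary'] = $m[1];         // what index.php expects to find
        require __DIR__ . '/index.php'; // placeholder location of the blog script
    } else {
        http_response_code(404);
    }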
On-site or off? If off, NMFP. ;)
Now, now, we all have dumb things we enjoy. As for the link checking, I can’t help but think that this is the sort of thing that I ought to be able to do quickly with a bit of PHP. Just iterate through anything that is .html or .php in the site (recursive search), then load each document in turn and search for the href attribute in a tags (and src in img tags). If it begins “http:” then list it (ignore everything else so we aren’t bogged down with internal links and other guff). This won’t validate that the links exist, but should point out which ones may need to be upgraded.
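A minimal sketch of that scan, untested and assuming it is run from the site root (the regex is deliberately rough rather than a proper HTML parse):

    <?php
    // Recursively scan .html/.php files and list any http: targets found in
    // href/src attributes - candidates for upgrading to https.
    $it = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator('.', FilesystemIterator::SKIP_DOTS));
    foreach ($it as $file) {
        if (!$file->isFile()) continue;
        if (!in_array($file->getExtension(), ['html', 'php'])) continue;
        $doc = file_get_contents($file->getPathname());
        if (preg_match_all('~\b(?:href|src)\s*=\s*["\']http://[^"\']+~i',
                           $doc, $m)) {
            foreach ($m[0] as $hit) {
                echo $file->getPathname(), ': ', $hit, "\n";
            }
        }
    }

Piping the output through a sort-and-dedupe pass (or the spreadsheet from earlier) then gives the list of hosts to test. |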
Steve Pampling (1551) 8170 posts |
:)
Cats – out. One hides under the patio table, another comes back later: dry, nicely brushed and not immediately requiring food.
https://heyrick.eu/eurovision/2005/scorecard2005.html#Israel – not found |
Steve Pampling (1551) 8170 posts |
If you’re doing something like that, you could likely have it look for the mixed-content items (as detailed in the notification page) in the various pages.
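That would be a small variation on the same regex; a minimal sketch, untested, flagging sub-resources (scripts, stylesheets, images, iframes) still fetched over plain HTTP, which is what a browser treats as mixed content when the page itself is HTTPS:

    <?php
    // Flag tags whose src/href still points at plain HTTP - these are the
    // mixed-content items that browsers are starting to refuse to load.
    $page = file_get_contents('page.html'); // placeholder filename
    $pattern = '~<(?:script|link|img|iframe)\b[^>]*\b(?:src|href)\s*=\s*'
             . '["\']http://[^"\']+["\']~i';
    if (preg_match_all($pattern, $page, $m)) {
        foreach ($m[0] as $tag) echo $tag, "\n";
    }

Plain anchors (a href) can stay http without being blocked – it’s the embedded resources that trip the new rules. |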
Clive Semmens (2335) 3276 posts |
8~) Since you’d very kindly done it for me already, I hadn’t bothered to follow the link – have now 8~) & might find it useful in the future. Haven’t yet checked whether the utility works on the M1 Mac (my main machine) – but I do also have an ancient Intel Macbook Pro (hand-me-down from our son) that I use when away from the desk (the Command Centre, according to the offspring…) |
Rick Murray (539) 13840 posts |
Well, now, that’s because some twat wrote crappy links like this:
I ought to find the idiot that did that code and give them a piece of my mind… |