Programming things
Tristan M. (2946) 1039 posts |
The USB port came off my ESP32 :( e: Apparently there’s a Lua interpreter for STM32. Not M0 though. That’s a bit too tiny. |
Tristan M. (2946) 1039 posts |
DDE ObjAsm has code generation for pre-UAL Thumb. I haven’t tried it; I spotted the option in its WIMP front-end, which isn’t something I ever really use. |
Tristan M. (2946) 1039 posts |
I connected a USB UART module up to my USB-connector-less ESP32. It works, but it’s no substitute. For some reason the ESP was just slowly pumping garbage out of the UART; I couldn’t find a baud rate at which it worked. It was even doing it while its power switch was off. Oh dear. When it was freestanding, without a USB cable connected, at least it cut power to the UART when turned off. |
Jeffrey Lee (213) 6048 posts |
Interesting Thumb observation: because Thumb-1 is based around a 16-bit instruction length, the 32-bit BL/BLX instructions are actually two 16-bit instructions which can be executed independently of each other. But on a Thumb-2 CPU the pair is seen as a single 32-bit instruction; if you tried to split them into two halves you’d either get an undefined instruction or something completely different. This means that a disassembler (assuming it wants to produce output that can be assembled back to the original binary) might produce anywhere between 0 and 2 lines of output per input halfword. E.g. if you had the first half of a BL followed by a different instruction (e.g. MOV), a Thumb-1 CPU would execute them as two separate instructions, while a Thumb-2 CPU would try to decode the pair as a single 32-bit instruction.
Plus you’ve got the problem that almost any variable-length instruction set has to deal with, which is that the second half of a 32-bit instruction could be a valid 16-bit instruction, so without any external help the disassembler can’t always be certain where the instruction boundaries lie. (This makes me wonder whether there are any devious things that you could do to crunch the size of your code, e.g. for demos. I don’t think there’s any practical benefit to splitting up BL/BLX instructions, but I’d imagine there could be one or two situations where a dual 32/16-bit instruction could come in handy) |
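The halfword split described above can be sketched in C. The bit patterns below are the Thumb-1 BL encoding from the ARM ARM (BLX’s second half uses an `11101` prefix instead of `11111`); the function names are my own:

```c
#include <assert.h>
#include <stdint.h>

/* Thumb-1 splits BL into two independently-executable 16-bit halfwords:
 *   first half:  11110 offset[22:12]  ->  0xF000 | hi11
 *   second half: 11111 offset[11:1]   ->  0xF800 | lo11
 * (BLX uses 11101 for the second half.)  On Thumb-2 the same pair is
 * decoded as one 32-bit instruction. */
static int is_bl_first_half(uint16_t hw)  { return (hw & 0xF800) == 0xF000; }
static int is_bl_second_half(uint16_t hw) { return (hw & 0xF800) == 0xF800; }

/* Branch target of a Thumb-1 BL pair located at address pc. */
static uint32_t bl_target(uint32_t pc, uint16_t hw1, uint16_t hw2)
{
    int32_t off = ((int32_t)(hw1 & 0x7FF) << 12) | ((hw2 & 0x7FF) << 1);
    if (off & 0x400000)      /* sign-extend the 23-bit offset */
        off -= 0x800000;
    return pc + 4 + off;     /* PC reads as instruction address + 4 in Thumb */
}
```

With the offset split like this, the first half on its own just loads LR, which is why Thumb-1 code can legally execute the two halves separately.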
Rick Murray (539) 13840 posts |
Which means an API change will be required for the Thumb disassembler to flag no reply yet / reply now / another line follows; as well as some mechanism to help get the addresses correct (maybe it should also return how many bytes long the instruction is)? |
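One shape such an API could take — names entirely hypothetical, not an existing RISC OS interface — is a status code plus a byte count. The byte count for Thumb-2 falls out of the top five bits of the first halfword:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical status codes for a stateful Thumb disassembler API:
 * the caller feeds halfwords and learns whether a line of output is
 * ready and how many bytes the instruction occupied. */
enum dis_status {
    DIS_NEED_MORE,  /* first half of a 32-bit instruction; no line yet   */
    DIS_ONE_LINE,   /* one complete instruction; one line of output      */
    DIS_TWO_LINES   /* pending half was standalone after all: two lines  */
};

/* On Thumb-2, a halfword whose bits [15:11] are 0b11101, 0b11110 or
 * 0b11111 is the first half of a 32-bit instruction (per the ARM ARM). */
static unsigned thumb_insn_bytes(uint16_t hw)
{
    return ((hw >> 11) >= 0x1D) ? 4 : 2;
}
```

Returning the byte count lets the caller advance its address counter correctly even when one input halfword produces zero or two lines of text.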
Tristan M. (2946) 1039 posts |
Yesterday the Chinese ST-Link v2s arrived. Haven’t tried them yet. Yes, two. They are cheap. Spares are good. Looking forward to trying one out on the STM32 with gdb. I don’t seem to have any way to practice ARM assembly away from the desktop (where the ARM-based SBCs are), so I’m trying to work out a way to do something a little more portable. |
Tristan M. (2946) 1039 posts |
After some effort I managed to solder a USB cable on where the micro USB connector was on the ESP32. Seems to work fine. I received a small I2S DAC in the mail which I’m tempted to try with the ESP32. To do what, I’m not sure, but it makes a much nicer “bare” platform to experiment with. |
Tristan M. (2946) 1039 posts |
Hooboy… I finished reviving and upgrading an egg hatcher a little while ago with a proverbial gun to my head. About a year ago it failed, but the last of the chickens and ducks had hatched so I never gave it another thought.

This hatcher… I threw it together in a few hours, both hardware and firmware. It had problems. I worked through most of them. Today I had to bring it back into operation. I wired a couple of 10A relays in parallel, which for practical reasons was a little more difficult than it sounds, then powered it up and hoped for the best.

There’s a little warning in here. Be careful with coding. Bugs can kill. Very minor slip-ups caused things like the uC freezing if the network was down, or even when no web clients were connected, and bad temperature regulation. Simple hardware bugs / oversights caused multiple relays, a heat pump, a sensor and a transformer (or bridge rectifier) to fail.

The situation this time around, btw, was a duckling that needed assistance hatching. It was a “right now!” sort of thing. |
Rick Murray (539) 13840 posts |
|
Clive Semmens (2335) 3276 posts |
Hmm. I made an incubator for hatching chickens, many years ago. No computer involved – just two adjustable thermostats, one to keep the air in the incubator at a constant temperature (I forget what temperature it was) and another to keep a tray of water underneath the egg rack at a slightly lower temperature thereby keeping the humidity correct. Plus of course two heaters and a motor with a time switch to turn the eggs a couple of times a day by rotating the rods of the rack. My sister still uses it twenty years later. We gave up keeping chickens ten years ago. Chicken pics at https://coshipi.deviantart.com/gallery/361706/Chickens |
Tristan M. (2946) 1039 posts |
Rick, that is a good example of something having very dire consequences. It blows my mind that something like that could have happened. Even contemporary automotive ECUs were hardened systems with multiple failsafes.

Clive, that’s a perfectly acceptable incubator that people would pay a premium for. I’m not surprised your sister still uses it. Mine is only a hatcher, thankfully. We had a good incubator but sold it. We decided on broody hens only, for multiple reasons — one of the many being how unpleasant it can be at times dealing with potential buyers for the chickens / ducks.

As I said, the whole hatcher was built in a matter of hours. IIRC I had to go on a long drive the next day and we had chicks and ducklings that couldn’t be left in the incubator or put in the brooding box (a repurposed TV cabinet with a thermostat and ceramic heat bulb). I threw it together out of a small aquarium, with cardboard for the lid, which had the workings attached. It used / uses a large heatsink, possibly from an amplifier, with a PC case fan I attached for the cool side; a 486 heatsink with fan on the hot side; and a 90W, 12V Peltier device in the middle. That mess was held together with fencing wire and used nappy rash cream as heatsink compound. I cobbled together a baseboard with a couple of linear regulators for the ESP8266, and connected it to a mains-to-car-cigarette-lighter 12V adapter I had — an old transformer one for running big stuff. The voltage sag was horrendous! Still, it did the job well enough that it lasted until I got home. It got multiple upgrades and lots of tweaks to the firmware.

Truth be told it was based on the parts I had collected for an incubator. Thing is, I only had some of them and a rough idea of what I wanted to build. The idea was to build an incubator using a Peltier device as a heat pump, controlled by an H-bridge or possibly relays. A major problem we had was the incubator overheating, and this was my idea for one that could solve the issue. |
Rick Murray (539) 13840 posts |
Must have been one hell of a brown trouser moment when the investigation worked out what happened, and thus which person made the error.
Show me the code and I’ll believe you. |
Jeffrey Lee (213) 6048 posts |
I think the problem is that most of the time the failsafe is to have an exact copy (or two) of the same system running in parallel. It’ll protect against problems like random component failures, but won’t do anything to protect against design flaws. |
Tristan M. (2946) 1039 posts |
Because they love making their code public. I wish I could remember better, but they did have some pretty neat methods of ascertaining faults beyond simple CRCs or out-of-range values. Sorry, I’m very tired. The amount of effort companies put into them is pretty impressive. They have a lot to lose if they don’t work right. That Ariane fault is insane. I wonder what language they were writing in? No matter what, I’m sure the controller would have had a datasheet, or whatever API they were interfacing with would have stated the data type. Even then, it just screams “untested code”. Even basic testing would have revealed the bug. I half expect you to go on to point out various failings with the Space Shuttles. They were a bit of a mess. |
Jeffrey Lee (213) 6048 posts |
The code had been tested and worked perfectly fine… on the Ariane 4. On both rockets the hardware that caused the failure shuts down shortly after launch, but because the Ariane 5 was more powerful it was able to exceed the 16-bit limit before the shutdown time was reached (and then the error code was foolishly interpreted as valid flight data, causing the rocket to go too far off-course and disintegrate). All explained in the full failure report.
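The failure class itself is easy to demonstrate. A sketch in C rather than the original Ada (values invented; the real code converted a 64-bit floating-point horizontal-velocity value to a 16-bit signed integer, and the unguarded conversion raised an exception that shut the unit down):

```c
#include <assert.h>
#include <stdint.h>

/* The Ariane-style failure class, sketched in C: a wide floating-point
 * value is narrowed to a 16-bit signed integer.  On the older rocket the
 * value never left the range; on the faster one it did.  This version
 * adds the range check the original lacked, reporting out-of-range
 * instead of trapping. */
static int convert_checked(double value, int16_t *out)
{
    if (value < INT16_MIN || value > INT16_MAX)
        return -1;              /* out of range: report, don't trap */
    *out = (int16_t)value;
    return 0;
}
```

The point is not the check itself but where the assumption “this can never exceed 16 bits” was written down — in the Ariane case, it effectively wasn’t.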
Fighter jets can have some pretty nasty bugs evade the design/testing process as well. A while ago I read an article about NASA’s software development team, and the amount of effort they put into designing and implementing the software to try and ensure it’s bug-free. Although NASA’s approach takes a lot of time (and therefore money), the stark contrast between how NASA does things and how ordinary software is written made me think that the article must be a recent one. After all, if it was an old article, surely the commercial software industry would have picked up on some of the techniques by now? But then I realised the article was about 20 years old. (It’s possibly this article, although I can’t remember enough about it to say for certain) |
Steve Pampling (1551) 8170 posts |
Bear in mind that this is a Euro-project which means that the team will speak and document in English plus French1 (otherwise a certain nation sulks) and sometimes throw in a mix of metric and imperial measuring because the mix helps things go smoothly (ROTFL) 1 Possibly a few other languages for the odd team member or two. |
David Feugey (2125) 2709 posts |
Correct. Using multiple algorithms to be sure that everything is OK would in fact be a way to get more errors.
So the problem is not code, but documentation: “this algorithm has these bounds” should be written down somewhere. Both are very difficult to implement. |
Clive Semmens (2335) 3276 posts |
Difficult enough in software. Harder in IT hardware. Bloody difficult in aerospace hardware! |
Rick Murray (539) 13840 posts |
No. You never ignore. Never. If something is wrong, it is wrong, and ignoring a wrong result could lead to a compound error later – as we could see with the eventual result of the rocket.
You’re speaking of an industry that decided that it was cheaper to be sued for deaths due to a faulty design than to issue a recall.
Recall how long it takes a BBC Micro to boot. Consider that an ECU is probably a lot faster and a lot simpler. It might reboot itself twice every minute and we’d probably never know… Of course, those cars with highly integrated onboard computers are a security nightmare right there… |
Steve Pampling (1551) 8170 posts |
That seems to have been the philosophy of the IT support for TNT – look how it works out when both servers are on the same network and NotPetya1 encrypts both servers.
Like putting twice the amount of kit into the same location as “resilience” ? 1 Yes that was what pretty nearly sent TNT into the information stone age. No they still don’t have a working system ready to communicate with ours2 after several months. |
GavinWraith (26) 1563 posts |
Ideally safety-critical software should always come with 1) a complete specification in an appropriate formal language and 2) a formal proof that the algorithm implemented in the software satisfies the specification. The difficulty lies with the word proof, which somewhere along the line has to imply that human beings have read and understood both specification and verification. We long ago reached the point where mathematical proofs were no longer comprehensible to humans (e.g. the 4-colour theorem). So it may be that the checking will have to be done by machine; an uncomfortable situation that could involve logical circularities and that is hardly likely to inspire confidence. Could there be any mileage in imposing a legal duty for specifications, source code and verifications to be registered with a public body charged with the oversight of safety-critical software? GCHQ and businesses would object, but perhaps a Wikipedia approach would mean better oversight – a democratization of the expert witness? In the early 90s even international flight-control software lacked proper specification. But how much software is treated in a rigorous manner? How much that should be, could be? |
Xavier Louis Tardy (359) 27 posts |
About the Ariane 5 issue, I can tell you the code was not Ada. |
Rick Murray (539) 13840 posts |
The problem with a formal specification checked by a machine is that essentially you will have an incomprehensibly complex specification checked by an incomprehensibly complex system that will probably give an incomprehensibly complex result that will eventually boil down to pass or fail; but if we don’t know whether we can “trust” the code, how do we know we can trust the checker?

Maybe a KISS approach is better? A car onboard computer, for example, should be concerned with reading sensors, monitoring the engine, and displaying information to the driver. It has no purpose interacting with media playback, GSM devices, WiFi, or anything else. Ideally it should be airgapped, with the only possible connection being the OBD port under the dashboard. Connections to other devices (such as if the dash readouts are a different system) should be a one-way opto-isolated serial link with a known protocol.

There is no reason why one cannot have multiple computers in a car, each doing their own thing. Hell, this message is being written on my phone and will bounce off the media sharer beside me to the ADSL router. That’s three entirely independent computers involved. Well, actually five… the WiFi module in the phone probably has its own processor and firmware, the sharer is one of those RaLink chips, and the Livebox has a slot-in WiFi card.
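A one-way link with a known protocol doesn’t need much. A minimal sketch — sync byte, length, additive checksum; all choices mine, not any real automotive protocol — where the receiving side can only validate or discard a frame, never talk back:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Compute a complement checksum so an intact frame body sums to 0xFF. */
static uint8_t checksum(const uint8_t *p, size_t n)
{
    uint8_t sum = 0;
    while (n--)
        sum += *p++;
    return (uint8_t)(0xFF - sum);
}

/* Build one frame: sync, length, payload, checksum.  Returns frame size. */
static size_t build_frame(uint8_t *out, const uint8_t *payload, uint8_t len)
{
    out[0] = 0x7E;                       /* sync marker (HDLC-style, assumed) */
    out[1] = len;
    for (uint8_t i = 0; i < len; i++)
        out[2 + i] = payload[i];
    out[2 + len] = checksum(out + 1, (size_t)len + 1);
    return (size_t)len + 3;
}

/* Receiver side: accept only frames with correct sync, length and sum. */
static int frame_valid(const uint8_t *f, size_t n)
{
    if (n < 3 || f[0] != 0x7E || f[1] != n - 3)
        return 0;
    uint8_t sum = 0;
    for (size_t i = 1; i < n; i++)
        sum += f[i];
    return sum == 0xFF;
}
```

Because the receiver has no return channel, a corrupted frame is simply dropped and the sender repeats it on its next cycle; the isolation is in the hardware, the robustness is in the framing.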
Not really. |
Rick Murray (539) 13840 posts |
Sorry Xavier, it was Ada. Loads of nerdy detail: https://hownot2code.com/2016/09/02/a-space-error-370-million-for-an-integer-overflow/ |
Tristan M. (2946) 1039 posts |
It still runs. It’s not causing damage through detonation, lean mixture etc. It’ll throw a CEL to notify the operator of an issue, and be in some degree of limp-home mode. It hasn’t failed; it’s in a failure mode. If it had failed the car wouldn’t start.

Under normal conditions the ECU is silently taking note of the short-term and long-term behaviour of the various sensors to detect where an issue lies, and compensating where necessary for less than optimal conditions. If it spots an aberrant condition it acts accordingly. If a sensor is taking too long to respond, drawing too much current or not enough, open circuit, or reporting data that is out of bounds or impossible given data from the other sensors, the ECU chooses to ignore the input, extrapolates from other inputs, logs the error and notifies the operator.

Seriously, even my thrown-together egg hatcher checks for invalid data. IIRC all it can do is display the error on the web interface and shut off the heat source, because of the simple hardware, but it’s still something. A chick / duckling can survive for a while in an effectively enclosed insulated container (don’t worry, there is an air gap) without a heat source, but overheating can be quickly fatal.

Can’t comment re: Ada. Never used it. Used plenty of other languages though. I still consider C to be a bit of a disaster that should have been superseded a long time ago. It’s the programming equivalent of English. I really like the FreePascal version of Pascal. It’s neat, consistent, and very functional. Such a shame its popularity has waned.
Absolutely. There need to be failure modes and the ability to compensate if these arise. Ignoring errors on a normal PC program is bad enough. When it’s for a critical application it’s a disaster.
There should be no nuances. It should be plainly and clearly documented with no assumptions made. The PRMs have given me a headache more than once for this exact reason. |
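The kind of plausibility checking described above — ignore an impossible reading, fall back to the last known-good value, flag the fault — can be sketched like this (names and limits invented for illustration):

```c
#include <assert.h>

/* A sensor channel with physically plausible bounds.  Readings outside
 * the bounds are ignored; the last accepted value is substituted and a
 * fault flag is raised so the operator can be notified. */
struct sensor {
    double last_good;   /* most recent accepted reading     */
    double lo, hi;      /* physically plausible bounds      */
    int    fault;       /* set when input had to be ignored */
};

static double read_sensor(struct sensor *s, double raw)
{
    if (raw < s->lo || raw > s->hi) {
        s->fault = 1;           /* log / notify; ignore the input  */
        return s->last_good;    /* extrapolate from known state    */
    }
    s->last_good = raw;
    return raw;
}
```

A real ECU also cross-checks sensors against each other (a wide-open throttle with idle airflow is impossible), but even this single-channel version is enough to stop one broken thermistor from cooking the eggs.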