A day with a RiscOS machine
Bryan Hogan (339) 592 posts |
Sorry, late to this discussion, but had to comment on this:
Why on earth are you using ChangeFSI as an image viewer? It’s for image processing, hence it is quite slow.
Use SwiftJPEG or one of the other viewers that use the built in OS routines. They display 8Mpix photos in under a second. |
Ronald May (387) 407 posts |
“Use SwiftJPEG or one of the other viewers that use the built in OS routines. They display 8Mpix photos in under a second.” |
Jeff Doggett (257) 234 posts |
Because, as Bryan already pointed out:
Which means that you use ChangeFSI for a high quality image conversion, and a quick viewer for exactly that. |
Trevor Johnson (329) 1645 posts |
It uses the boustrophedon transform with Floyd–Steinberg dithering,¹ which achieves good results but is fairly demanding.
¹ How to find the closest colour, 30 Jan 92 |
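The technique named in the footnote can be sketched: serpentine ("boustrophedon") Floyd–Steinberg error diffusion alternates the scan direction on each row and spreads the quantisation error to four neighbouring pixels, which is why it costs so much more than a straight nearest-colour lookup. A rough illustration in Python (not ChangeFSI's actual code):

```python
# Sketch: serpentine (boustrophedon) Floyd-Steinberg dithering to black/white.
# Error weights 7/16, 3/16, 5/16, 1/16; scan direction alternates each row.
def fs_dither(img):
    h, w = len(img), len(img[0])
    buf = [[float(v) for v in row] for row in img]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        # even rows scan left-to-right, odd rows right-to-left
        xs = range(w) if y % 2 == 0 else range(w - 1, -1, -1)
        step = 1 if y % 2 == 0 else -1
        for x in xs:
            old = buf[y][x]
            new = 255 if old >= 128 else 0   # quantise to two levels
            out[y][x] = new
            err = old - new
            # diffuse the error "ahead" and to the next row
            if 0 <= x + step < w:
                buf[y][x + step] += err * 7 / 16
            if y + 1 < h:
                if 0 <= x - step < w:
                    buf[y + 1][x - step] += err * 3 / 16
                buf[y + 1][x] += err * 5 / 16
                if 0 <= x + step < w:
                    buf[y + 1][x + step] += err * 1 / 16
    return out
```

Because the error is carried forward rather than discarded, the average of the output tracks the average of the input, which is what makes it a high-quality (but slow) conversion.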
Ronald May (387) 407 posts |
“Which means that you use ChangeFSI for a high quality image conversion, and a quick viewer for exactly that.” |
nemo (145) 2546 posts |
As long as you’re happy for “high quality” to mean “getting the gamma correction completely wrong” (or have a linearly-calibrated monitor… which you don’t, if you’re using ChangeFSI for “high quality” anything). ChangeFSI can’t even convert images to 256 colours correctly. |
Trevor Johnson (329) 1645 posts |
Ticket #236 seems to be the nearest relevant open ticket, but doesn’t seem to cover that. Should this failing be logged separately? |
nemo (145) 2546 posts |
No, it’s my usual rant about gamma, which few understand and almost no one bothers to get right. Ask ChangeFSI to dither grey &7F7F7F (or &808080) into two colours (for example) and it’ll give you a checkerboard of black and white, which is wrong by a very large margin unless you have a linearly-calibrated monitor. The same problem affects all colour calculations made by anything Wilson wrote. All my RISC OS machines have had linearly calibrated output since the hardware permitted it, as that’s the only way to get ChangeFSI, ColourTrans, FontManager etc. to produce the right result. |
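That claim is easy to check numerically. Assuming a display gamma of 2.2 (the gamma value is the only assumption here), code value &80 corresponds to only about 22% linear luminance, so a correct two-colour dither of that grey would be roughly one white pixel in five, not a 50/50 checkerboard:

```python
# Fraction of white pixels needed to dither an 8-bit grey to black/white
# correctly, assuming a display gamma of 2.2 (i.e. NOT a linear output).
GAMMA = 2.2

def white_fraction(level, gamma=GAMMA):
    """Linear luminance of an 8-bit grey code value under the given gamma."""
    return (level / 255.0) ** gamma

naive = 0x80 / 255.0            # ~0.50: what a 50/50 checkerboard assumes
correct = white_fraction(0x80)  # ~0.22: the luminance actually encoded by &80

print(f"naive 50/50 assumption: {naive:.2f}")
print(f"correct white fraction: {correct:.2f}")
```

On a linearly-calibrated monitor the two figures coincide, which is the point being made above.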
Rick Murray (539) 13840 posts |
Sources are available… You could fix this? :-) |
Rick Murray (539) 13840 posts |
It can be, at least according to the diagram about halfway down the wiki page. Though, if you have a 1:1 arrangement (1 master, 1 slave), it ought to be possible to hot-plug, except there is no “device connected” detection other than rigging up a microswitch somewhere or repeatedly polling the device to see if it replies.
Speed being a big one. It looks like SPI to an SD card can run at 0-25MHz. Since native 4-wire SD interfaces can run at up to 50MHz (maybe more, depending on the hardware) and have higher throughput due to the extra data lines, and USB can keep up with them (USB2 can read data from a hard disc faster than a Class 10 card can write it), I think it is safe to say that SPI is not going to be the quick protocol here.
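The gap can be put in rough numbers (peak signalling figures only, ignoring command and protocol overhead, and using the clock rates quoted above):

```python
# Back-of-envelope peak throughput in MB/s, ignoring protocol overhead.
spi_mbytes  = 25e6 * 1 / 8 / 1e6   # SPI: 25 MHz clock, 1 data line  -> ~3.1 MB/s
sd_mbytes   = 50e6 * 4 / 8 / 1e6   # native SD: 50 MHz, 4 data lines -> 25 MB/s
usb2_mbytes = 480e6 / 8 / 1e6      # USB 2 high-speed signalling     -> 60 MB/s

print(spi_mbytes, sd_mbytes, usb2_mbytes)
```

Even before overhead, SPI is roughly an eighth of native SD and a twentieth of USB 2's raw signalling rate.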
Never that simple. Computers work with sectors, which are typically somewhere between 512 and 4096 bytes. Flash memory works with blocks, which are typically around 128K in size. Writing to different logical sectors may cause the same SD block to be moved. Note, also, that certain blocks (free space map, root directory, log files, etc.) may be written extremely regularly, which may well trigger the entire block to be moved. All that for a mere few bytes written. I found this:

“Adding spare erase blocks, say 3 per wear-leveling unit, those cards end up having around 37 erase blocks per wear-leveling unit. Given this scenario and a lifetime of 10,000 write cycles, you would end up with 370,000 write cycles on a particular wear-leveling unit, not counting additional stress caused by copying around used blocks as part of the actual wear leveling if you’re not simply writing to the same sector every time.”

Not exactly very much… However, the problems of using SD cards as main storage are as follows:
1. I know RISC OS does not correctly signal dismounts to USB devices. On all of my USB devices with lights, the light goes off when I tell the Windows computer to eject the device. My hard discs go a step further and park and spin down. I have not connected a hard disc to RISC OS, but given that USB devices keep their light on, I don’t imagine the drive would shut down. I have, in my time, suffered three hard disc failures and three flash media failures.
Flash now:
While it is possible to have sudden catastrophic hard disc failures (and why backing up is always important), I think that for age related issues, a hard disc should provide warnings of impending failure. A flash device, on the other hand, is likely to one day just cease working. Not a little bit, not damaged sectors, but entirely. The SD in my Pi is going okay, as are most people’s (else we’d hear a lot about this), but what is the anticipated service life? And what happens when that period expires?
See above. My eeePC runs on two SSDs, and I would love to think that they will run for a long time (no thanks to &@#%$ Avast!), but I know that if simple calculations were accurate, the service life would be quoted as 10-20 years, not ~3.
These dinky little SoCs have a few hundred MB on board. Big computers have obscene amounts for obscene wastage. I couldn’t get Firefox (with many non-active tabs) running at the same time as Thunderbird on my eeePC (due to the SSD there is no swap, so when memory runs out, it runs out). Got a 2GB SODIMM for €8 at a boot sale. Now it works…
No, they don’t.
I wouldn’t say more reliable, I’d say less fragile. I can drop a screwdriver onto an SD card and it likely won’t be the end of the world. I think the differences between flash and spinning discs will be defined by their fault behaviour. I don’t think that there is really enough empirical data here yet – though looking online it appears that SanDisk 32GB Class 10 cards have a worryingly high failure rate (and, as concerned me, one typified by the device the card is inserted in no longer recognising it). I don’t know if these reports are for legitimate retail-packed SanDisk cards, or knock-offs with a SanDisk label… |
Timothy Baldwin (184) 242 posts |
IIC and SPI are completely incapable of operating at the speeds and distances of USB. Firstly, crosstalk in the cable would destroy the signal; USB avoids this by using differential transceivers and twisted-pair cable. Secondly, both assume the round-trip time is less than one clock cycle: IIC requires a sender to wait after every bit for a potential stop signal from the recipient, and SPI does not provide a means for a master to recognise the start of a message from a slave. For USB 2 there could be an entire byte in the cable. |
Rick Murray (539) 13840 posts |
That there are more bad (car) drivers than good does not really justify being bad. I would imagine that proper, licensed (as a consumer device would need to be to legitimately use the logo) SD interfaces far outstrip those in home-brew and budget kit using SPI. Let’s start with digital cameras, add in mobile phones, then personal media players, and laptops/netbooks, and throw in multi-card readers for good measure. To be honest, I think the primary use of SPI in home-brew kit is because the proper method is proprietary and that carries boring complications. The secondary use of SPI in home-brew kit is because it is simpler to implement, and you don’t really need epic throughput if you are storing data to be crunched by a 10MHz MCU.
Raw HD? Really? Given that RGB 4:4:4 at 1080p runs to around 150MiB/sec – I want one of those cards! Though, you might find hard discs better, as a 32GiB card will only hold around three minutes of video. …you probably meant compressed HD…
Sorry, maths is not my strong point. What does that equate to in frequency? My rough “guesstimate” puts that at around 90MHz. What sort of cabling arrangement would you use at those speeds?
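For reference, USB 2 high speed signals at 480 Mbit/s, so a bit lasts about 2ns. Assuming a signal velocity of roughly two-thirds of c and a full-length 5m cable (both assumptions, not figures from the thread), around a dozen bits are in flight at any instant – consistent with the “entire byte in the cable” remark above:

```python
# Bits "in flight" on a USB 2 high-speed cable (rough figures).
bit_rate = 480e6        # USB 2 high speed, bits per second
velocity = 2 / 3 * 3e8  # assumed signal velocity of ~0.66c, metres per second
cable_m  = 5.0          # maximum USB cable length, metres (assumed)

bit_time = 1 / bit_rate              # ~2.08 ns per bit
transit  = cable_m / velocity        # ~25 ns end to end
bits_in_flight = transit / bit_time  # ~12 bits

print(round(bits_in_flight, 1))
```

This is also why the "wait for the recipient after every bit" model of IIC cannot scale to these speeds: the reply would arrive many bit-times too late.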
More complete. The SPI bus is… basically a pair of shift registers. It isn’t that different to the stuff we were doing thirty years ago with the 6522 VIA. USB offers…
SPI offers…? Allow me to quote Wiki to make the point clear: “The SPI bus is a de facto standard. However, the lack of a formal standard is reflected in a wide variety of protocol options. Different word sizes are common. Every device defines its own protocol, including whether or not it supports commands at all. Some devices are transmit-only; others are receive-only. Chip selects are sometimes active-high rather than active-low. Some protocols send the least significant bit first.” You can understand, I trust, that a de facto standard that is actually a hodge-podge of sort-of-similar things is great for people working with MCUs who want to hook some bits together simply, but in a commercial sense it is utterly dead. Nobody wants to buy a “thingy” and wonder “will this even work with my computer?” At least with USB, driver issues aside, devices should plug in and work. Anyway, SPI is largely “undefined”, and is dictated mostly by the device you want to talk to. I would imagine, looking at a few pages of info on SPI, that I could probably hook up a BBC Micro’s User Port to talk SPI through the VIA’s shift register. Throw in some handshaking and we’d be good to go, right? |
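As an illustration of “basically a pair of shift registers”, a bit-banged SPI mode 0 master fits in a dozen lines. This is only a sketch: the pin functions (set_sck, set_mosi, read_miso) are hypothetical placeholders for whatever GPIO access a given host provides, not a real API:

```python
# Minimal bit-banged SPI master, mode 0 (clock idles low, data sampled
# on the rising edge). set_sck/set_mosi/read_miso are hypothetical GPIO
# hooks - substitute whatever your hardware actually provides.
def spi_transfer_byte(tx, set_sck, set_mosi, read_miso):
    """Clock one byte out MSB-first while clocking one byte in."""
    rx = 0
    for i in range(7, -1, -1):
        set_mosi((tx >> i) & 1)           # present the next outgoing bit
        set_sck(1)                        # rising edge: slave samples MOSI...
        rx = (rx << 1) | read_miso()      # ...and we sample MISO
        set_sck(0)                        # falling edge: slave shifts next bit
    return rx
```

Loop MOSI back to MISO and the function simply echoes its input byte – exactly the shift-register exchange described above, and essentially what the 6522’s shift register does in hardware.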
Trevor Johnson (329) 1645 posts |
“No, it’s my usual rant about gamma which few understand and almost no one bothers to get right.”
“Sources are available… You could fix this? :-)”
Does anyone know whether the ROL forked version of ChangeFSI has addressed this? (Does it include ColourTrans changes?) |
nemo (145) 2546 posts |
It doesn’t. Gamma doesn’t just affect colour matching (when changing from mode to mode), but all calculations involving colour components, including scaling and antialiasing. Another ChangeFSI example would be downscaling a black and white checkerboard by 50% – it’ll give you &7F7F7F (or &808080), which again is badly wrong unless you have a linear output. |
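The downscaling example can be checked numerically too. Averaging black and white in linear light and re-encoding for an assumed display gamma of 2.2 gives about &BA, a long way from the &7F/&80 that a naive code-value average produces:

```python
# Average two 8-bit greys in linear light, then re-encode for the display.
# GAMMA = 2.2 is an assumption; on a linear output, 0x80 would be correct.
GAMMA = 2.2

def average_grey(a, b, gamma=GAMMA):
    """Gamma-correct 50/50 average of two 8-bit grey code values."""
    lin = ((a / 255) ** gamma + (b / 255) ** gamma) / 2
    return round(lin ** (1 / gamma) * 255)

print(hex(average_grey(0x00, 0xFF)))  # 0xba, not 0x80
```

The same decode-average-re-encode step applies to scaling, antialiasing and blending alike, which is why fixing any one consumer of colour maths does not fix the others.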
nemo (145) 2546 posts |
Fixing ChangeFSI doesn’t fix ColourTrans. Fixing ColourTrans doesn’t fix FontManager. Fixing FontManager doesn’t fix BlendTable. Fixing BlendTable doesn’t fix PhotoDesk, Artworks, ImageMaster, etc. The reason I advocate a linearly calibrated monitor is that there’s too much broken to fix, and having such a calibration means the expense of doing it correctly is avoided anyway. Win-win.† The only down-side is dealing with all the icons that other people design on their 2.2 monitors. :-)
† I should point out that ideally one wouldn’t use non-gamma-corrected RGB with only 8 bits per component. Even 16 bits is a bit grotty. I’ve never found banding to be a problem, but it theoretically can be if you’re using numerically linear as opposed to perceptually linear. |