How did I .... ?
GavinWraith (26) 1563 posts |
I have been using RO 5.23 on an RPi3 with a 64Gb SD card. I cannot remember how I set this up. I have WinDiskImager on a Windows XP machine, and also !SystemDisc. I saved the contents of my SD card onto a 30Gb FAT32FS-formatted memory stick, and used WinDiskImager to install RO 5.24 onto the card. I am disappointed to see that Free now shows only 2Gb available on the SD card. Evidently I have done something wrong. How do I make the other 62Gb usable? |
Erich Kraehenbuehl (1634) 181 posts |
The download image is for a 2GB card, so it writes a 2GB image onto the card regardless of the card's actual size. |
GavinWraith (26) 1563 posts |
Thanks for that. I have written the RO 5.24 image to a 2Gb card, and I have copied !SystemDisc to that. I have reformatted the 64Gb card and given it a boot partition. However, when trying to load the boot sequence I get the message ‘Target-Error’ from !SystemDisc and ‘Bad free space map’ from the Filer. Does this mean that the 64Gb card is damaged, or that it has not been formatted correctly? |
Erich Kraehenbuehl (1634) 181 posts |
You have to erase the 64GB card with something like ‘FAT32 Formatter’ first. Afterwards you format the card to ADFS format, and then you should be able to use !SystemDisc (I do not know whether it still works with RISC OS 5.24). The problem is that 64GB is at the limit: if you format the card under Windows, it will be formatted as exFAT, which cannot then be reformatted under RISC OS. Therefore use a FAT32 formatter first. |
GavinWraith (26) 1563 posts |
Thanks. That may be the answer. I am coming round to the idea that a large SD card may not be such a good idea. It might be better to have a small SD card for booting and to use a memory stick for storing applications and one’s own stuff, perhaps formatted with FAT32FS for readability on other platforms. |
Rick Murray (539) 13840 posts |
Personally, I never think huge media is a good idea. Not only is FileCore unable to fix its own drives, but SD cards are of varying quality and capabilities, and are being used with a filesystem that is not even remotely flash-friendly1. As such, if I had to recommend a drive size, I’d say 16GB max, unless you really REALLY need the space.

Also, if there’s a big failure, it’s possible to drop the most recent backup onto a fresh SD card of the same type and boot it. The only thing that you’ll lose is whatever changed since the most recent backup.

1 No notification is given to the SD card about which blocks are actually allocated for use by the filesystem, so when a card fills up, the wear leveling mechanism has no choice but to assume that ALL blocks are in use, even if it’s a device that has recently had half of its data deleted. Also, just try to imagine how many times the Free Space Map sectors get clobbered. Or certain directories such as !Scrap. And then understand that while RISC OS thinks in terms of individual sectors, flash media works in terms of blocks, so updating a 1K sector may involve caching, erasing, and reprogramming a 4MB block, over and over… [16GB SD cards often report a block size of 4MB; 64GB cards often report a block size of 16MB; all to change a 1024 byte sector!]
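To put rough numbers on that write amplification, here is a small illustrative sketch; the 1K sector and the 4MB/16MB erase blocks are just the example figures above, and real cards vary:

```python
# Illustrative write-amplification arithmetic, using the example figures
# quoted above (real cards differ in their reported erase-block sizes).

SECTOR_BYTES = 1024                 # one 1K sector being updated
ERASE_BLOCKS = {
    "16GB card": 4 * 1024 * 1024,   # 4MB erase block (typical report)
    "64GB card": 16 * 1024 * 1024,  # 16MB erase block (typical report)
}

for card, block_bytes in ERASE_BLOCKS.items():
    # Worst case: the whole block is cached, erased and reprogrammed
    # just to change one sector inside it.
    amplification = block_bytes // SECTOR_BYTES
    print(f"{card}: up to {block_bytes // (1024 * 1024)}MB rewritten "
          f"to change {SECTOR_BYTES} bytes (~{amplification}x)")
```

On those assumptions a single 1K update can cost thousands of times its own size in flash writes, which is why frequently rewritten areas like the free space map and !Scrap take such a beating. |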
GavinWraith (26) 1563 posts |
Erich’s advice worked. I used an application called Partition Minitool on an XP notebook to reformat the card to FAT32, then used !HForm, then !SystemDisc. Of course, changes to the RO distribution only occur infrequently, but when they do I find myself engulfed by activity re-personalizing the standard distribution with all my own silly fetishes, without which I find it impossible to get any work done. The situation is better than it used to be, but a lot of the effort is caused by inadequate separation of public (ROOL stuff) from private (my stuff) – or, more accurately, in many cases third-party stuff configured how I want it. Fred has done wonders developing this separation for StrongED. Ideally |
Colin (478) 2433 posts |
But Gavin is proposing to use a memory stick as well, so backing up is not just the imaging of an SD card. I can’t see the advantage of a memory stick myself: it’s just as bad as an SD card for longevity, USB storage is slower than SDFS, and you share the USB bus with USB networking and any other USB device you may be using. If you were proposing to use an SSD then USB storage is worth the compromise, as SSDs have better longevity. |
Colin (478) 2433 posts |
Re !Boot: I’m still not a fan after all these years. As I often install new !Boots onto discs, I no longer use it for third-party applications other than for saving choices – that reminds me, I keep meaning to do a !Choices application; is the old one still around? |
Steve Pampling (1551) 8170 posts |
There really ought to be a clear split between user settings and the system boot. Essentially the system boot should be of a form where user applications can read (in a restricted fashion at most) and not write. Deletion and replacement with an “as new” system boot should not affect an application, like a browser for example1, in any way, shape or form.

ROL did work on this aspect, but from what I’ve seen the split merely sub-divided the inner parts of !Boot instead of doing things properly, and the naming of the directory structure has always suggested that the programmer(s) didn’t have “both oars in the water”.

I’d say: first identify which bits need system access and which bits user access, and then start designing a restructured !Boot that uses only the “system” elements and is followed later by the “user” elements. The latter is probably a target for an easy “skinning” setup.

1 Netsurf comes to mind as an offender here. |
GavinWraith (26) 1563 posts |
Indeed, and with a reciprocal pair of protocols. The way that directory names like Apps, Documents, Programming, Utilities and the like are ready-baked into the distribution has always p*ss*d me off. Concrete names like this are not only annoying for speakers of languages other than English, they also make little sense for anybody. Just look at all the smoke and heat generated by the packaging debate. If there are official applications, put them inside system territory and put their choices into user-accessible system territory. Have system variables for naming specific applications and have the user set them (with a GUI if necessary). |
Rick Murray (539) 13840 posts |
I think the problem originated with the older network “ArmBoot” concept, where the “boot stuff” and the “user configuration stuff” were lumped in together, along with associated things like !Scrap. One may consider that it is not a good idea to have !Scrap within !Boot; but on the other hand, where else would it go? It is intended to be a system resource (hence BootResources) that is mostly out of sight of the user. If not within !Boot, then where? I would prefer my filesystem not be cluttered with disparate bits of system startup and settings.
I don’t mind “Apps”, because it is fairly well mapped to “Apps” as a resource, plus I think since Android/iOS, the entire world knows the concept of “App”. ;-) I think it’s safe to say that there’s roughly zero chance that we’d all agree on the perfect layout of where to put stuff.
I view much in the RISC OS world as quaintly anglocentric…
The main bugbear I have is that Apps (the icon on the iconbar) does not support directories. Else I’d have “Apps.Graphics” and “Apps.DTP” and “Apps.Printing” (☺) and “Apps.System” (most of the stuff in Utilities), plus some other stuff. Having said that, it is entirely reasonable for an SD card image of an operating system (as this is) to contain resources scattered around for organisation. In a recent specific case (PiSSD), backdrops being placed in $.Documents.Images.Backdrops. It is done for organisation and to have a better layout than stuffing even more stuff into !Boot.
From my memory, this was because it didn’t let you choose where to dump stuff. Now it does. Win!
Let’s take my Manga application as an example. It has some user choices (written to Choices:) and it has a (large!) cache (written into !Scrap). Both of these are within !Boot, because both of these, while used by programs, are best thought of as behind-the-scenes system stuff. The “Show cache” menu entry is really there for me to get at the files quickly for dev or debug. I doubt that most end users would be interested in where/how it works, so long as it does… If an application isn’t to write to a nominally “system” area, where should it write to? And if Choices/Scrap isn’t to be buried inside !Boot, where should it be placed? |
GavinWraith (26) 1563 posts |
If memory sticks are not wise for long term storage, what is recommended for archiving? For about five years I had a Buffalo 1Tb hard-drive that was pluggable into a USB slot. I thought I had been given it, but it appears that that was a misunderstanding. Do I understand you, Colin, as suggesting a USB-pluggable SSD drive? The Buffalo device was very cheap, I remember, and far larger than what I needed. Any recommendations? |
Rick Murray (539) 13840 posts |
The problem is:
Both are based upon Flash technology. As is, I should point out, SSD. There are basically four types of storage media:
At the moment, I use flash media with my Pi, satellite receiver, camera, phones, etc. But I do periodically copy stuff to a harddisc using my PC. For “important” things (photographs), I also burn DVD-Rs at times; and for other things of importance to me, I copy the files to more than one harddisc. It isn’t perfect (for a start, everything is on-site), but I neither use nor trust “the cloud”.

On a Pi, use an SD card; USB is slower. But keep backups – don’t put too much trust in the media… Flash memory does wear out, and the rate of wear depends upon a lot of hidden internal things, so it isn’t easy to say “it’ll last for X”. Indeed, my two video recorders (the OSD and the satellite receiver) are great for stress-testing flash media, and it is my experience that SD cards last longer than USB sticks. I had a tiny 4GB Transcend device wear out in only two months (and the part that is broken is where FAT places the root directory, so it’s essentially a tiny piece of scrap). |
Colin (478) 2433 posts |
They are neither better nor worse than SD cards. The problem is not so much long-term storage as using them every day: they don’t take many writes before they fail, and cheap ones can fail very quickly. Using Windows to put a disc image on an SD card can show the problems even when the card is still working – the graph you get can show large dips in speed across the disc. SSDs are supposed to be much better for constant usage, but last time I read, HDDs were considered better for security cameras where they are being constantly written to.

The problem with SSDs/HDDs, though, is that they use more power, and my Pi B+ doesn’t like anything plugged into it which draws any power – I don’t know what the newer Pis are like, perhaps someone will tell us. So I’d have to use an external powered USB hub to use an external HDD/SSD. So the conclusion you come to is that it’s not worth the hassle: stick with the SD card and try not to lose something important. |
Colin (478) 2433 posts |
Duplicate post. That’s interesting – when there is a long delay on this forum the post gets through; it’s the reply confirming the post which is slow. |
Steve Pampling (1551) 8170 posts |
Plus there’s sometimes an echo anyway, but yes, when there’s a slow response I normally wander off to the kitchen to put the kettle on, or elsewhere to get rid. |
John Sandgrounder (1650) 574 posts |
I believe that to be the case. But I am putting my faith in mSATA for regular use, although it is more expensive, if only because it also needs the use of an adapter. (mSATA is also physically more difficult to attach neatly to a Raspberry Pi.) |
John Rickman (71) 646 posts |
“The problem with SSDs/HDDs though is they use more power and my Pi B+ doesn’t like anything plugged into it which draws any power, I don’t know what the newer Pis are like, perhaps someone will tell us.”

Not very good. I have a 60GB mSATA SSD on my Pi3B+. CONFIG.TXT has a parameter to increase the current draw to 1 amp: max_usb_current=1.
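For reference, the setting lives in CONFIG.TXT on the SD card’s boot partition as a plain line of its own; a minimal sketch (whether newer boards still need, or honour, the setting may vary):

```
# CONFIG.TXT (boot partition) - minimal sketch
# Raise the USB current limit on models that honour this setting:
max_usb_current=1
``` |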
John Sandgrounder (1650) 574 posts |
Hmmmm… All of my Pi systems (including two 3B+) are very stable, and they do all of the things you mention. I do not have the max_usb_current setting, as the highest current draw I have seen is 0.32 amps (with the meter in the USB lead to the drive). Are you sure the power supply is up to the job? What date is your ‘ROM’? (My 3B+ is 12th April 2018.) |
Tristan M. (2946) 1039 posts |
Well, thanks for stirring up my old fear of building source on solid state media. Still can’t get RO to build any more, though. Even from a fresh tarball. Perhaps especially from a fresh tarball. It’s freaking out about billions of files being missing. Even the DDE APCS-32 files which it should have imported. Oh fun.

Back on topic. My Pi Zero is a shambling mess. I don’t think it’s possible to use one any other way. So the HDD is just an addition to said mess. At least it has an OLED set up which can use Rick’s driver. And an I2S DAC which I still have to get around to working out how to drive.

Anyway, doing things like building source gives media an absolute thrashing. I usually prefer to do my builds on rusty plates for that reason. It’s just awkward with RO. That being said, I have a 2TB EXT4-formatted USB HDD for building stuff for my Raspberry/Orange Pis, and occasionally the PC.

Edit: History repeats. CoreUtils snuck on at some point recently. Removed it and it can build again. Except using the crusty old drive. Although it’s a pipe dream, I really wish RO had an EXT4 support module. |
Rick Murray (539) 13840 posts |
Thrashing, yes. But I’d imagine most of it is simply reading the same bunch of included files over and over and over and over and over. Output is object files, then built into some sort of executable (module). Writes, yes, but much less than all the reads. Infinitely less than deleting a source tree for upgrading to a newer version. Image your SD card regularly, then just use it as you normally would. If it fails, revert to the last backup. Also run a CheckMap from time to time in case the failure is the filesystem and not the media. That’s my method, and I build stuff including the ROM…
Your next project? ;-) |
John Rickman (71) 646 posts |
“Are you sure the power supply is up to the job?”

The power supply is rated at 2.5 amps. It, the Pi, and the SD card (5.24, 16 April 18) were all bought at Wakefield last month. |
Colin (478) 2433 posts |
It’s not the ROM, it’s the Pi. I have a 2.5A and a 5A power supply. With no load on the USB ports the voltage is around 5.05V. I don’t have a spare SSD, but when I plug in a HDD, which admittedly probably uses more power than an SSD, it idles at 4.85V, 0.34A. While it appears to work OK, it won’t count the files on my hard disc (250,000 files) – the computer locks up part way through. While counting, the voltage drops to 4.72V (4.75V is the USB minimum spec) and the current increases to 0.68A, so it’s unsurprising that it fails like this, as USB2 ports are only supposed to supply 0.5A max.

If I use a USB lead which can plug into 2 sockets, so it shares the current between the 2 sockets, it doesn’t work either, though it does count more files. The 5 amp supply also tends to count more files than the 2.5A supply. I also tend to find that as the voltage drops below around 4.85V I notice more missed mouse clicks.

If, instead of plugging the hard disc into the Pi, I plug a powered hub into the Pi and the hard disc into the powered hub, it counts the files no problem, and of course the voltage for the Pi doesn’t drop.
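To put rough numbers on those measurements (an illustrative sketch only; the figures are the ones quoted above, and the limits are the nominal USB 2.0 values of 0.5A per port and 4.75V minimum):

```python
# Rough power-budget check using the measurements quoted above.
USB2_MAX_AMPS  = 0.5    # nominal USB 2.0 per-port current limit
USB2_MIN_VOLTS = 4.75   # minimum supply voltage in the USB 2.0 spec

measurements = {
    "HDD idle":           (4.85, 0.34),
    "HDD counting files": (4.72, 0.68),
}

for label, (volts, amps) in measurements.items():
    watts = volts * amps
    notes = []
    if amps > USB2_MAX_AMPS:
        notes.append("over the 0.5A per-port limit")
    if volts < USB2_MIN_VOLTS:
        notes.append("below the 4.75V minimum")
    print(f"{label}: {volts}V x {amps}A = {watts:.2f}W"
          + ("" if not notes else " - " + ", ".join(notes)))
```

On those figures the drive is pulling well past what a single USB2 port is specified to deliver, and the supply has already sagged below the minimum, which matches the lock-ups above. |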
David Pitt (3386) 1248 posts |
The SSD I use with the Pis says 1.6A on its label. The SSD is lurking inside an enclosure and I have forgotten what it is, but it is old; consumption might be lower now. It has always been used with a powered hub that has a 2A power supply, entirely necessary with older Pis. My RPi3B+ is powered by the ‘official and recommended’ 2.5A power supply. The RPi3B and RPi3B+ are good for a total USB current draw of 1.2A or 1.5A depending on which document one reads, so time for a rethink perhaps. The SSD’s enclosure has two USB plugs, now both connected directly to the RPi3B+. It’s looking good; with earlier Pis that would have collapsed quite quickly. A count of the whole SSD with almost 350,000 files completed, and it has just received a Titanium backup over ShareFS. |