Does Hard Drive affect Programming Speed?
Julia (9820) 1 post |
Hello everyone, I’m curious to know what types of hard drives other programmers are using and how they might affect programming speed. Personally, I use an SSD for my primary drive because it has faster access times and can improve overall system performance. However, I know that there are also other types of hard drives available, such as traditional mechanical HDDs (hard disk drives) and hybrid drives that combine SSD and HDD technology. Do any of you have experience using different types of hard drives for programming? If so, which one do you prefer and why? Have you noticed any significant differences in speed or performance? I’m interested to hear your thoughts and experiences. Thanks in advance for your input! |
Herbert zur Nedden (9470) 41 posts |
On my Windows computer the M.2 SSD is quite a bit faster in access than the fast SATA SSD I have as a second drive, so for huge projects it might affect the compile time. But for those you use a decent make tool so that only the things that changed are rebuilt, and the odd full build is the perfect time for a lunch or coffee break. So I dare say pretty much any SSD is good.

If you’re on a RISC OS system (a real one, not an emulated one) an SSD does have some benefits due to reduced seek times; but with most of the hard disc interfaces we have, a fast HDD probably wouldn’t make that much of a difference, except on the hardware with a SATA port, where the SSD probably has the edge. In any case I recommend an SSD in any computer: they are faster, seek times in particular are much better, and the lack of a decent directory cache in RISC OS makes that a very interesting feature. When writing you want the directory updated anyhow, so seek times come into play then too.

Hybrid SSD-HDD drives try to address some of the performance issues, and to a degree they even do. But seriously, at current prices I’d simply avoid HDDs (and that goes for hybrids too), except perhaps for backups in a NAS that sits well away from my desk so that I don’t have that noise. Well, perhaps if your computer needs a really huge disc for data such as pictures or even movies, you might want to consider a 5400 rpm HDD (a bit slower but a lot less loud) for storing that. But for regular work an SSD is the way to go, and for a real RISC OS machine there is no need to get one of the super fast ones, since the host adapters are the bottleneck anyhow. |
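[Editorial aside: the "only the things changed are rebuilt" behaviour Herbert describes is the timestamp comparison at the heart of any make tool. A minimal sketch in Python, with hypothetical file names; this is an illustration of the idea, not any real RISC OS build tool:]

```python
import os

def needs_rebuild(target: str, sources: list[str]) -> bool:
    """Return True if the target is missing or older than any of its
    sources; this is the core decision a make tool takes per rule."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(src) > target_mtime for src in sources)
```

On an incremental build only the rules where this returns True are run, which is why disc speed matters far less than it does for the occasional full rebuild.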
Rick Murray (539) 13840 posts |
The biggest amount of time spent programming is shared between writing the code in the first place and the debugging. Between those two, any delays due to harddisc speed pale into insignificance. For building a project from scratch, you’ll notice a difference between, say, a class 2 and a class 10 SD card. Of course, I’m talking about RISC OS here. It’s a different ballgame entirely if you have an OS that has a propensity to spend a small eternity swapping out to disc. In that case, you don’t need a faster disc, you need more memory.
Probably a bad idea for programmers. Building stuff tends to create quite a lot of smallish temporary files. Not the most friendly thing for flash memory.
IDE, SCSI, ST506, SATA, USB flash, SD card… On my Pi, I use a fast SD card. Works for me. |
Rick Murray (539) 13840 posts |
You are aware, I hope, that competent modern filing systems communicate with the device to tell it which parts of the drive surface are “unused” (and thus can be used for wear levelling). RISC OS is not one of those types of systems. So as the harddisc fills up, the drive has no choice but to assume it is completely full even if you delete all the files. Because of this, instead of simply relocating data to wear level, it must do a whole song and dance shifting blocks around. This is potentially dangerous (power failure whilst a block is cached in memory will lose it), it requires more writes than necessary, and it’ll take longer to do. Maybe one day FileCore will support TRIM. That day isn’t today. |
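[Editorial aside: the "song and dance" Rick describes is write amplification. A toy model, with made-up numbers purely for illustration: if the filing system never tells the drive which blocks are free (no TRIM), the drive must relocate apparently-live data before it can reuse an erase block, so every host write beyond the known-free space costs an extra internal write:]

```python
def total_device_writes(host_writes: int, blocks_drive_thinks_free: int) -> int:
    """Toy write-amplification model.

    The host writes `host_writes` blocks. The drive places them
    directly into blocks it knows are free; for anything beyond that
    it must first relocate data it (wrongly, after a delete the FS
    never reported) believes is live, costing one extra internal
    write per relocated block.
    """
    direct = min(host_writes, blocks_drive_thinks_free)
    relocated = host_writes - direct
    return host_writes + relocated
```

With TRIM the drive knows all 100 freshly deleted blocks are free, so `total_device_writes(100, 100)` is 100 (amplification 1.0); without TRIM it thinks none are, and `total_device_writes(100, 0)` is 200 (amplification 2.0). Real firmware is far cleverer, but the direction of the effect is the same.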
Clive Semmens (2335) 3276 posts |
I know you already mentioned it, but it’s worth emphasising: not only does it slow down dramatically, all those extra writes are eating into its life expectancy. |
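[Editorial aside: the endurance cost Clive mentions can be put into rough numbers. SSDs are rated in terabytes written (TBW); a back-of-envelope estimate, with assumed example figures (150 TBW rating, 10 GB written per day) that are not from this thread:]

```python
def years_of_life(tbw_terabytes: float, gb_written_per_day: float,
                  write_amplification: float) -> float:
    """Rough endurance estimate: rated terabytes-written divided by
    effective daily writes (host writes times the amplification
    factor caused by extra internal copying)."""
    tb_per_day = gb_written_per_day * write_amplification / 1024
    return tbw_terabytes / tb_per_day / 365
```

For a drive rated at 150 TBW writing 10 GB/day, an amplification factor of 1.0 gives roughly 42 years of headroom, while a factor of 3.0 (plausible on a full drive with no TRIM) cuts that to about 14; illustrative arithmetic only, not a prediction for any particular drive.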
Steve Pampling (1551) 8170 posts |
Obvs. that’s what we’re all missing: wear levelling for all the failing joints, muscles etc. On topic, what RO really needs is a compiler that reads the source files into RAM, does the whole build process there and saves the result to disc as a default setup. |
Stuart Swales (8827) 1357 posts |
Ooh, I never thought of copying the build tree into a RAM disc. (Or did I? ;-)) Define result, Steve. How is make expected to know what you actually intend? Some things that you think are just intermediate files I may need for my final product. |
Alan Adams (2486) 1149 posts |
Like the SSD in my ARMX6 which started failing after 4 years of fairly general use – no big compilations or graphics manipulations. I put that down to lack of trim, and replaced it with a hard disc. I can detect the drop in speed, but I want reliability more. |
Dave Higton (1515) 3525 posts |
Spammer alert: “Julia” is a known spammer. |
Stuart Swales (8827) 1357 posts |
Interesting, Alan. I use a Crucial SSD on my ARMX6 and it’s still happy as anything, many many large compiles later. An interesting snippet from their site (I am not paid by them!) “But if your operating system doesn’t support Trim, it’s not a disaster. All Crucial SSDs are designed and tested assuming that they will be used without Trim.” |
Rick Murray (539) 13840 posts |
I guessed that, but it’s still an interesting topic to discuss.
That’s the thing. The compiler builds a number of intermediate files that get mashed together into the final executable. In more complex setups, the same files may be used in building different versions (say, demo and retail). It’s just an artefact of how the build process works which, granted, was born in a day when machines had very little memory… …but don’t pick on the build process. Consider also all those stupid little files splattered all around by your browser’s caching, and all the other junk that ends up in Wimp$Scrap.
I wonder how they handle that? Or maybe it’s just a case of “without Trim it’ll work for this many writes” (but with Trim, whoo!).
The µSD card in the Pi 3 is the one that was in the Pi 2 before. It’s been in regular use running a 24/7 system since around autumn 2017… (periodically I make full images of it on the PC, and important stuff is backed up to USB (plus quite a bit has been copied into the Pi2 (new µSD) via ShareFS because it’s often just nicer to code out front in the shady part)) |
Alan Adams (2486) 1149 posts |
Crucial’s description is here: |