Best source hosting
Rick Murray (539) 13840 posts |
Oh, certainly. The demise of Geocities took with it an awful lot of content, only some of which was preserved by the Wayback Machine. If you aren’t paying for your hosting, don’t be surprised if the company decides that it is unprofitable and shuts up shop.
And keep the removable media where?
That’s true, especially with jurisdictions such as, well, yours… where any available data is there for analysis and to hell with any notion of privacy.
Wow are you out of touch. These days any sales bod just needs to start talking about “the cloud” if they want to start management drooling. Everything’s so easy in the cloud. All the technical issues are somebody else’s problem in the cloud. Unicorns fart rainbows in the cloud. It’s pretty much the buzzword that beancounters understand (because anything to do with DevOps just goes right over their heads). But cloud? F’k yes! And what’s “the cloud” if not a fancy way of saying “your 1 computer, but off site”?
Depends upon the program in question. It may be poor code, using entirely the wrong sort of variable, like a double to hold a true/false flag; or it might be that, since it’s easier to write in a high-level language, the program in question simply does an awful lot more. It tickles me when people moan about how terribly slow Linux is to boot and get going – but look at how fast RISC OS is!
Not so many of them around these days. Part of why AArch64 is an eyesore is that it isn’t intended for people to use.
You mean like this?
You’ve been quite clear all along that you like coding in ARM. That’s fine. I do it myself from time to time, just for the hell of it. But should you choose to promote your opinion by deriding something – such as all this waffle about calling new stuff old so your outdated ideas can be presented as somehow new – don’t be surprised if somebody calls you on it. I guess it was predictable that it would be me. ;-) Well, it was either that or I write a blog article about […auto-redacted as it has nothing to do with this topic…] However, I’m not yet in a position where I wouldn’t have to rewrite it all the next day to remove all the sweary parts. What the hell is the world coming to? 1 “your” is defined as “ours, but we’ll let you use it in exchange for currency tokens” (and when you’re really dependent, watch the price go up) |
Steffen Huber (91) 1953 posts |
It was a long time ago, but I actually implemented XMODEM (the experts sometimes call it the “Christensen protocol”) in my youth, and what you write is not correct – or at least misleading and contradictory. XMODEM can survive any “glitch” that would have been corrected by things like MNP4 or V.42, because it can retransmit blocks that arrive with the wrong checksum. XMODEM acknowledges every transmitted block with ACK or NAK, and if NAKked, the block is re-transmitted. This handshake after every block (originally 128 bytes; later, 1024-byte variants took over) is one of the reasons why ZMODEM (and YMODEM-G), with their streaming approach, are much faster than XMODEM, along with ZMODEM’s dynamic block size of course, which reduced overheads further. What XMODEM cannot do is re-start at an arbitrary position to continue a file transfer that was e.g. interrupted by a dropped line. That is probably what you meant by “auto-resume”, but MNP wouldn’t help in that case either. |
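(A minimal sketch in C of the stop-and-wait handshake Steffen describes. The send_byte/recv_byte helpers are hypothetical stand-ins for the serial line; real XMODEM also has SOH framing, block/complement numbering, EOT and timeouts, omitted here for brevity.)

#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE 128
#define ACK 0x06
#define NAK 0x15

/* Hypothetical byte-level serial I/O. */
extern void send_byte(uint8_t b);
extern uint8_t recv_byte(void);

/* Simple additive checksum used by original XMODEM. */
static uint8_t checksum(const uint8_t *data, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++) sum += data[i];
    return sum;
}

/* Send one 128-byte block, retransmitting until the receiver ACKs.
   This per-block handshake is what makes XMODEM robust to line noise,
   and also why ZMODEM's streaming approach is so much faster. */
void send_block(const uint8_t *data)
{
    do {
        for (size_t i = 0; i < BLOCK_SIZE; i++) send_byte(data[i]);
        send_byte(checksum(data, BLOCK_SIZE));
    } while (recv_byte() != ACK);  /* NAK (or noise) => send again */
}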
Chris Mahoney (1684) 2165 posts |
Back in NZ in the 90s, most customers were with Telecom, and maybe 20% were with the biggest competitor, Clear. When a Telecom customer called a Clear customer, Telecom had to pay Clear, and vice versa. As there were roughly the same number/length of calls in each direction, this more-or-less cancelled out. Circa 1999, Clear started a dial-up ISP called Zfree. It didn’t cost the end customer anything (free local calling) and was funded through the interconnection revenue. Since these calls were all one-way, and since most customers were using Telecom, not Clear, this ended up costing Telecom a decent amount of money. Telecom responded by flagging Zfree’s dial-up numbers as national, so that customers would be charged for calling them. This went back and forth through the courts, and I’m not sure how it all ended up… but the debate lasted well into the broadband age, when nobody cared any more. Clear no longer exists. |
Paolo Fabio Zaino (28) 1882 posts |
@ Clive sorry, I have lost track of this thread; I am back heads-down at work. If you want to put your sources on the RISC OS Community on GitHub, you (and everyone else) are very welcome. I have written all the instructions here: https://github.com/RISC-OS-Community/ExampleRepository That repository shows you a typical directory structure and how to either join the community or ask us to upload your code. It contains all the necessary documents to read if you need more info, and also the code of conduct if you intend to join the community. The README file has a link to a full video course (totally free) in English, made up of very small lessons and very easy to follow. If you want to use RISC OS to get on github.com, please use the Iris browser; it works fine with github.com. github.com also offers totally free apps for Android and Apple iOS (both phones and tablets), which make it even easier to use. On the git side, we already have basic clients on RISC OS that are good enough to handle a basic repo, but if you want to do more advanced git usage you’ll need (for now) a Linux, macOS or Windows computer. Please let me know if you need help installing git on one of those; it may be useful to add instructions to the repository above on how to install git on such systems. It doesn’t matter if your program is small or large; it’s always something that someone may be interested in (and believe me, it always is!). Even if someone may not need the app per se, they might need to see how you’ve solved a coding problem they are having in their own programs, so sharing sources is vital to help the RISC OS developer community grow. Hope this helps, and please let’s try to work together to avoid losing RISC OS code. As always, anyone interested in joining to help administer it is welcome; we are 3 now and we hope we’ll be more :) P.S. Sorry for typos/mistakes and whatnot; the documentation linked above was written at roughly 3:00am, and this post too, so I am dead tired. |
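(As a rough sketch, getting an existing source directory onto GitHub from a Linux/macOS/Windows command line looks something like the following – the repository name and URL are placeholders, not the community’s actual ones:)

git init                          # turn the source directory into a repo
git add .                         # stage everything
git commit -m "Initial import"    # record the first commit
git remote add origin https://github.com/RISC-OS-Community/MyApp.git
git push -u origin main           # publish ('master' on older setups)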
Clive Semmens (2335) 3276 posts |
Many thanks, @Paolo. Will look. Not promising it’ll be really soon, but (assuming I survive…) I will. My apps were all originally written for my own use and I didn’t take care to make them easy to understand – but I’ll initially put them up as they are, and try to tidy them up and make them more comprehensible in due course. |
GavinWraith (26) 1563 posts |
Ditto, thanks @Paolo. I have never used git, but there it is in Raspbian on my Linux computer. I suppose it would be good for my soul to learn something new. However, it brings to my mind the image of a steamroller cracking a nut. Over the decades I put zip-files of some of my software on my website, so that people could download them with their browser. I never bothered trying to find out whether anybody actually did. Thirty years ago there were things called blackboard systems that allowed people to do, say, mathematics, or design software, cooperatively. That is a form of version control, I suppose; blackboards need dusters to wipe the chalk off. However, my cooperation has always been mediated by email, and with single individuals, not groups. I suspect that git would be overkill for me. In any case, my impression is that there are not many RISC OS users still around for whom my efforts would be of any interest. |
Clive Semmens (2335) 3276 posts |
For what it’s worth, I’ve used SVN (working on ARM documentation) but I honestly don’t remember a lot about it. Some things stick very well in my memory; some things seem to evaporate completely. Although sometimes they resurface pretty handily if the occasion arises. |
David J. Ruck (33) 1635 posts |
@Clive git isn’t too bad to learn for the basics, although there are a few tricky areas when you get into it. I don’t just use it for sources, but also for configuration files on my Linux Raspberry Pis, as it makes it simple to change something on one machine and roll it out to all the others – and if something breaks, to wind it all back again. |
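(A sketch of that roll-out/roll-back loop – the commit message is illustrative:)

# on the Pi where the change was made
git commit -am "Tweak shared config"
git push
# on each of the other Pis
git pull
# if the change breaks something, wind it back
git revert HEAD
git push          # then 'git pull' on the others again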
Matthew Harris (1462) 36 posts |
Most of my exposure to version control systems has been through contributing over many years to an open source project which started off using CVS, then migrated to SVN, and finally switched to git. This has been further reinforced in the ‘day-job’, where we’ve also migrated quite a lot of our development work to git. For me, one of the main differences between git and the others, like SVN and CVS, is that you are always working with a complete repository, even when disconnected. (OK, there are special scenarios where this is not necessarily the case, but for usual day-to-day operations it is true.) Both CVS and SVN mandate a client-server architecture, meaning that offline work was a bit sketchy to say the least. With git, every clone is a full duplicate of the repository, and in a multi-clone setup it’s purely convention that determines which of those clones is deemed the canonical source. Most setups would tend to consider the version of the repository on, say, github to be canonical, with all other clones treated as working copies which should be regularly synchronised with the canonical one. But there is no explicit need to do that – it’s just a convention. Also, the way the branching model worked (at least in SVN – can’t really recall CVS now) was such that you only branched if REALLY necessary, due to pretty much all files needing to be copied to a different section of the tree. This is not the case for git, as it has a very ‘cheap’ branching process (sketched below). When working in a git repository, I attempt to keep ‘master’/‘main’ clean of any commits, aside from merges from so-called feature branches. This allows for a non-linear development flow when wanting to work on several different aspects at the same time. As one of the software development memes goes: In case of fire: git commit, git push, leave the building.
|
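(A sketch of that feature-branch flow – the branch name is illustrative:)

git checkout -b fix-sprite-redraw   # cheap, local branch off main
# ...edit and commit as often as you like on the branch...
git commit -am "First attempt"
git commit -am "Handle the awkward edge case"
git checkout main
git merge fix-sprite-redraw         # main only ever receives merges
git branch -d fix-sprite-redraw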
John Rickman (71) 646 posts |
Good enough to repeat, and relevant to the post coming shortly in Announcements. I have been exercising Kevin Swinton’s !Git client for a while now and it has exceeded expectations. It has limitations, but these are documented, and if you are cautious you can make it sit up and beg. That is, until you come to git push. I have not been able to get a push to work. I believe this is to do with GitHub now insisting on credentials via tokens instead of passwords. |
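(For what it’s worth: GitHub stopped accepting account passwords for HTTPS pushes in 2021, so a personal access token – created on github.com under Settings → Developer settings – goes wherever the password used to. With a stock command-line git that is roughly as follows; whether !Git can be fed a token the same way, I can’t say:)

# when 'git push' prompts for a password, paste the personal
# access token, not your account password
git config --global credential.helper store   # cache it after first use
git push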
Paolo Fabio Zaino (28) 1882 posts |
@ GavinWraith
This is absolutely NOT true, I use your Lua port on RISC OS a lot (working on a Tasking Framework to generate a universal builder process). I am also writing the wrappers for my Gennan ML port for Lua, so whoever is interested will be able to use ML in Lua on RISC OS. On the maths side, I am working on a library to replace the C maths lib with one that uses NEON for highly parallel math computations (I am sure you’ll love it), as well as a port of a C library for really big numbers (also very high precision), so I am sure you’ll have fun with both of them :) My time is very very (VERY) limited on RISC OS, so I am extremely (if not horribly) slow, but lots of (hopefully) useful stuff will be released over time in that community repo. And I am sure you have a lot of code that would be useful to many, Gavin, so do not underestimate your effort! Git is easy to learn and there are tons of resources on-line, plus ROOL is making an effort to give us a proper client, so I am sure in the end it will get very popular in the RISC OS community. |
Paolo Fabio Zaino (28) 1882 posts |
@ Clive
Thanks, and no worries – it will be there waiting for whenever you have time. I hope everything is OK health-wise; all the best wishes!
Understood – and many useful things in the history of Computer Science were made for personal use :) |
Paolo Fabio Zaino (28) 1882 posts |
I want to thank Steve Pampling, who promptly emailed me this morning with fixes for my usual bad English (or my semi-good 3:00am English, lol). Also a correction: we are 4 – Steve is an external reviewer! |
James Peacock (318) 129 posts |
Branches in SVN are in effect just a link in the repository, so they have very little overhead and are fast if done using the URLs, i.e. on the server. Given most development projects have a single central repository, where Git really shines over SVN is in the workflow it enables on the developer’s machine. Git makes it cheap, and encourages you, to create temporary local branches. You can make multiple commits to such a branch, and Git has powerful tooling to edit them, reorder them, move them between branches, etc. This allows you to commit after each change, making it easy to undo anything you later regret. When working in teams, before pushing your changes into the mainline or for review, you can clean up that mess and, if necessary, organise the change into a coherent sequence of commits which is easier for others to follow. SVN is a very good tool, but it does not support this local workflow, and for me at least that is the main day-to-day difference. Despite having used Git for a few years now, I find the command line dreadfully inconsistent and error-prone. There are some things I think SVN does better, like handling of binaries and working on small subtrees of massive server-side repositories. The RISC OS SVN client has a couple of great features: 1. It stores the RISC OS file type as a property in SVN, so you don’t need to have ,Feb or whatever all over the place. 2. It does the directory-to-file-extension dance for things like foo.c. These two features make developing cross-platform trivial: the same codebase can be checked out on either Linux or RISC OS and build straight off, without having to create a load of symlinks or rename files. |
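(The clean-up step James mentions, sketched – the commit count and branch name are illustrative:)

git rebase -i HEAD~5    # lists the last five commits in an editor;
                        # mark lines 'squash'/'fixup'/'reword', or
                        # reorder them into a coherent sequence
git push origin my-feature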
GavinWraith (26) 1563 posts |
Thanks Paolo. Very interesting and encouraging to hear what you are doing. |
Paolo Fabio Zaino (28) 1882 posts |
@ Gavin
Thanks! For the Lua work, when I push it onto GitHub it may require some help from you, if you would like to join me :) |
GavinWraith (26) 1563 posts |
Delighted. The NEON stuff: do you plan to have this compilable with the Norcroft (AcornC/C++) compiler? In the days before VFP I always compiled RiscLua with that, because the binary was smaller than with GCC, and I was just more ignorant of GCC and its makefiles. However, these days I have to use GCC because Norcroft cannot use VFP – at least I think that is correct. That has left a big gap in the AcornC/C++ compiler package – just having soft floats seems a waste, though no doubt it is adequate for many purposes. Being able to do fast arithmetic and still use the AcornC/C++ compiler would be good. The other strength of GCC is the possibility of dynamic loading of ELF files. Lee Noar was a big help, holding my hand here. But I only managed to get this working right in RiscLua 6. I just bottled out with subsequent versions and made do with static linking. You refer to Gennan ML. The only use of the acronym ML that I am au fait with is MetaLanguage, the functional programming language. Also SML, Standard ML. Am I off beam here? |
Paolo Fabio Zaino (28) 1882 posts |
Nice! Thanks :)
Yes, I originally started the effort directly in Norcroft, using the in-line assembly directive (__asm), but given that it actually behaves differently than I originally thought 1 I switched to GCC in-line asm. So the functions are all pretty much declared using C constructs and implemented in assembler.
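(By way of illustration only – not Paolo’s actual code – in the GCC world a NEON routine can also be written with the arm_neon.h intrinsics rather than raw in-line asm. A minimal sketch, assuming the array length is a multiple of 4, built with something like gcc -mfpu=neon -O2:)

#include <arm_neon.h>

/* Element-wise multiply of two float arrays, four lanes per NEON op.
   A real version would also handle the scalar tail when n % 4 != 0. */
void vmul4(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);      /* load 4 floats   */
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(out + i, vmulq_f32(va, vb));  /* multiply, store */
    }
}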
Yup, Norcroft does a better job than GCC 4.7.4 at generating small binaries, but there are options for GCC 4.7.4 to reduce the size (I am sure you are aware of them), and newer GCC releases (available in GCCSDK) do a much better job of optimising the code (hopefully soon we’ll have access to GCC 10). Anyway, to answer your original question: even though I am now coding it using GCC, the plan is still to convert the binary back to AIF and eventually make it an AIFLib, so it should be available for both GCC and DDE developers. Right now the majority of the focus is on ensuring correct results from the functions, so the conversion of ELF into AIF will come later.
I totally agree on that and hopefully the idea above will work.
Yup, and I love it. I managed to make a builder for my port of CJSON (yup, I had to redo it ’cause none of the others are available as source; to put an end to that situation, the ported sources are available on the ROS Community as a stand-alone library for DDE and GCC, in both static and dynamic linkable forms) that generates both the static and dynamic builds as well as all the required symlinks for the GCC SharedLibraries :) The same will be done for the math library.
Sorry, I should have provided more details, my bad. By ML I mean what is popularly known as Machine Learning (some refer to it as AI, Artificial Intelligence, which IMHO is a totally wrong term, but oh well…). For a mathematician like you this just means statistical modelling applied to decision processes; that’s all it really is. So, using big computers, we’ll be able to generate statistical models of “something” and then use RiscLua (with the Gennan wrappers) to apply such models – for example, to an image, to detect whether or not there is a cat in it. The usefulness of such a crazy thing on RISC OS is that, for example, you could use it to create a plugin for (let’s say) PhotoDesk that could “select” a human figure in a picture without the user having to do the crazy work of tracing the figure manually. Such a plugin would make selecting a human figure in a photo super quick, so you could then cut/modify/whatever we usually do in photo-editing. It can also be used with audio signals, or other things. But for “live use” on RISC OS we’ll need some hardware acceleration in place. Part of this is the effort for the NEON math library, which should help improve Gennan performance and also allow me to port more complex ML engines. But to do human or dog recognition on a live stream we’ll need more than that, including HW-accelerated camera drivers etc. However, I am proceeding one step at a time. 1 Not a complaint – I think ROOL had a really interesting idea there with that directive, and I am using it for another project! |
GavinWraith (26) 1563 posts |
This sounds very ambitious and exciting. It also sounds like stuff that really needs to be able to exploit as many cores as are available. Alas, I am too ignorant of parallel programming to know what constraints that puts on development of software in the single-core circumstances that we have to work with at present. Perhaps none that we need to worry about, apart from vectorizability. |
Paolo Fabio Zaino (28) 1882 posts |
Yes, and for this I have started to look at Jeffrey’s SMP library, but first I want to get the math-lib right, then I will have to interface it with Gennan, and then… etc. So, one step at a time. It is ambitious, but it’s also fun (and that’s all it is for me); it offers a lot of opportunities to learn about RISC OS internals, a bit like the Desktop Modernisation project is doing for the WIMP stuff.
Then I’ll formalise it for you in a more math-friendly way – Amdahl’s Law: https://en.wikipedia.org/wiki/Amdahl%27s_law Regardless of whether RISC OS itself offers support for SMP (Symmetric Multi-Processing) or you do it yourself, initialising cores and writing all the required management code, what really matters for a user application, if you want to measure the benefits of parallelising your algorithm, is the way Amdahl’s Law works. This works in similar ways for cooperative and preemptive SMP approaches, with cooperative being more complicated (as usual) for the application developer, while preemptive (as usual) is transparent to the application developer and so helps reduce complexity. In both cases, it’s how you “partition” your logic/algorithm that really determines the benefits. With AI/ML, things are slightly different from traditional programming. Parallelism (in the case of AI/ML) is being taken away from the application software developer and moved into the hardware domain, mostly because that helps reduce power consumption compared to the number of transistors required to separate cores in the traditional way; so AI/ML acceleration cores are usually a single core in a package that supports load partitioning (a bit like GPUs do). There are many good books on the matter; for details, here are some titles (if you are interested and enjoy reading as I do): - Sourcebook of Parallel Computing (big book, tons of info on irregular meshes, the Poisson problem, adaptive graph partitioning, meshes, grid hierarchies, integral methods, redistribution and many many other approaches); all the formulae are good and well documented. – Morgan Kaufmann Pub. (many authors, so forgive me if I do not post the full list) P.S. I think we should move this chat somewhere else before we go wayyyyy tooo off-topic :D |
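(For reference, the law itself: if a fraction p of an algorithm’s work can be parallelised and is spread across s processors, the overall speedup is

\[ S(s) = \frac{1}{(1 - p) + \frac{p}{s}} \]

so even with infinitely many cores the speedup is capped at 1/(1 - p): parallelise 90% of the work and you can never go more than 10× faster overall.)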
Rick Murray (539) 13840 posts |
There’s always the other Aldershot. ;-) |
Paolo Fabio Zaino (28) 1882 posts |
right, let me look at that one! :) P.S. link pleaseeeee ??? :D |
Rick Murray (539) 13840 posts |
https (colon) (slash) (slash) heyrick (dot) eu (slash) aldershot 1
1 Security through obscurity. ;-)
Paolo Fabio Zaino (28) 1882 posts |
lol… anyway, done! And I can’t wait to start posting in “Other Systems”, given that that’s there and below it there is a French forum, so I guess it’s the perfect place to start posting about the Thomson MO/TO series!!!!! :D |