Apple unveiled the M1 Pro and M1 Max and just made PC architecture retro?
Paolo Fabio Zaino (28) 1882 posts |
Performance of the M1 Pro and M1 Max is outstanding according to the figures Apple shared in today’s presentation. Yes, we still need to see real-world applications, and yes, macOS is not as performant as Linux (but at this point that is just “bread crumbs”). PC architecture as we know it is clearly obsolete at this point, and Apple has shown that a fully integrated System on a Chip is the way to achieve top performance. Laptops with 200GB/s and 400GB/s of memory bandwidth and batteries lasting for days haven’t even been seen in sci-fi movies so far ;) Or will traditional PC architecture find a way to compete? Even if it manages to catch up on overall performance, the performance-per-watt battle is quite hard to win for the old x86 architecture at this point, and people love long-lasting batteries… Basically Apple UK almost sold out just after the show today: delivery estimates started at end of October/mid-November and, almost immediately (around 20 minutes after the show), slipped to end of December. Impressive. |
Kuemmel (439) 384 posts |
I also hope they update the M1 Mac Mini someday with some of the latest core technology. The day Asahi Linux (which is making great progress) becomes an easy install, I’d get one of those… |
Paolo Fabio Zaino (28) 1882 posts |
I agree, a Mac Mini with an M1 Max would also be a killer machine :) |
Andrew Chamberlain (165) 74 posts |
A second-hand M1 Mac Mini will probably be my next machine once my Hackintosh is properly obsolete. However, I’m more excited about the next generation of ARM Chromebooks than the M1 Max. Apple’s claim for the M1 Max’s graphics performance is that it matches an Nvidia RTX 3080 mobile. Here’s a video of an actual RTX 3060 hooked up to the forthcoming Kompanio 1200 Chromebook SoC running ray tracing on Wolfenstein: Youngblood. It’s going to be a lot easier for Linux to make use of this hardware than Apple’s in-house GPUs. Also, there’s potential for RISC OS to make use of next-gen Chromebooks via the Linux port, if it continues to be developed. |
Andrew Chamberlain (165) 74 posts |
I love the fact that there are now ARM-based desktops and laptops that are more than competitive with their x86 equivalents, but I think we will be losing something if the industry moves away completely from PC-style upgradeable modular systems. If nothing else, it’s bad for the environment to have to replace your complete system every time part of it becomes outdated. It’d be nice to have up-to-date ARM processors that can be slotted into motherboards that take standard PC RAM and peripherals. |
Stuart Swales (8827) 1357 posts |
Springboard springs to mind ;-) |
Andrew Chamberlain (165) 74 posts |
I’d forgotten about Springboard! I was thinking more along the lines of a standard PC motherboard with a socket for an ARM processor rather than an x86 one |
Martin Philips (9013) 48 posts |
The Apple SoC team is really impressive. Very impressive! The only downside is that they’re just so expensive! |
David J. Ruck (33) 1635 posts |
The Apple SoC team were formerly a company called PA Semi, which was set up by ex-DEC employees who worked on the Alpha and the StrongARM. So it really should come as no surprise that they know their shit. The expense is what Apple brings to the party. |
Paolo Fabio Zaino (28) 1882 posts |
What I find hilarious is that everything we are seeing right now has been well known for many years. Even old books like “The Cache Memory Book” and the even more classic “Computer Architecture: A Quantitative Approach”, among others, presented a lot of what we are seeing today as R&D results. I applaud Apple for allowing engineers to build and design chips as they should be built, and I agree with Druck: PA Semi know what they are doing. It’s a combination of a management that is not afraid of trying “new” approaches and multiple teams of engineers that are left to get on with the work. ARM-based computers are finally becoming a reality (again, lol); I can’t wait for them to become full-on mainstream. And on Nvidia/ARM, wait for what they are going to unveil later, when the two merge into a single chip with an HBM2-based SoC.

IMHO, the future is in Unified Memory. It makes hardware acceleration so much easier from a software point of view: we no longer have to copy large chunks of data from one type of memory to another, we can just use remapping techniques. Imagine RegEx acceleration without copying data from RAM to GPU RAM, or traffic analysis without copying packet data around, or gaming where the game engine just remaps the data for the shaders, and the shaders process it and ray-trace it in real time with super-low latency. …and we are just at the beginning of all this… |
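(To illustrate the remap-rather-than-copy point above, here is a minimal sketch in C. The gpu_* names are hypothetical, mocked-up placeholders rather than any real driver API; the only point is the difference between staging a buffer into separate device memory and handing the accelerator a mapping of memory the CPU already owns.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Mock accelerator API -- hypothetical, illustrative names only.
       On real hardware these would be driver calls. */
    static void *gpu_alloc(size_t n)            { return malloc(n); }  /* "device-local" memory */
    static void  gpu_copy_to_device(void *d, const void *s, size_t n) { memcpy(d, s, n); }
    static void *gpu_map_host_buffer(void *p)   { return p; }          /* unified memory: remap only */
    static void  gpu_run_regex(const void *buf, size_t n) { (void)buf; printf("scanning %zu bytes\n", n); }

    int main(void)
    {
        char packets[4096] = "GET /index.html HTTP/1.1 ...";

        /* Discrete-GPU path: the buffer has to be staged into VRAM first. */
        void *dev = gpu_alloc(sizeof packets);
        gpu_copy_to_device(dev, packets, sizeof packets);   /* the extra copy */
        gpu_run_regex(dev, sizeof packets);
        free(dev);

        /* Unified-memory path: no copy, the accelerator just gets a mapping
           of the pages the CPU has already filled. */
        gpu_run_regex(gpu_map_host_buffer(packets), sizeof packets);
        return 0;
    }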
Andrew Chamberlain (165) 74 posts |
Could you not have the unified memory as a replaceable module in a socket rather than soldered onto the board? Then you could sell graphics cards without VRAM. It’s the tendency to solder everything down that I’m not keen on. |
Stuart Swales (8827) 1357 posts |
Going (too far) off-chip is too slow. Anyhow, modular kit turns into junk just as fast – my ‘old’ DDR3 doesn’t fit in my new system, for instance. |
Andrew Chamberlain (165) 74 posts |
It’d be interesting to know how much of the speed jump with the M1 systems is due to unified memory and how much to having everything on the same chip. From a consumer point of view it’s clearly beneficial to be able to choose from competing component manufacturers when putting together a computer. Being able to upgrade RAM etc. during a system’s life is often worthwhile too. If ARM Windows ever takes off I’d bet you’ll see the likes of EVGA, MSI etc. launching modular ARM-based computers. |
Stuart Swales (8827) 1357 posts |
Well, one boggo DIMM usually gets you 64 bits, so dual channel will be 128. The M1 Max has gone to 512-bit-wide LPDDR5-6400 (or similar). The cost of eight equivalent boggo DDR5 DIMMs, plus the motherboard space / layout and the memory controllers, is prohibitive except for workstations / servers. SoCs are the way forward. |
Stuart Swales (8827) 1357 posts |
Silly thing repeated! |
Stuart Swales (8827) 1357 posts |
I wonder what the profit margin is on the different components of modern graphics cards.
What’s an ARM processor? ;-) |
Paolo Fabio Zaino (28) 1882 posts |
As a reference, the first M1 had ~60GB/s of memory bandwidth, the M1 Pro has 200GB/s and the M1 Max has 400GB/s. The memory used is LPDDR (LPDDR4X on the original M1, LPDDR5 on the Pro/Max), so it’s not HBM2, for instance. That means the CPU is benefitting heavily from the absence of off-package memory modules. The reason why, with the same architecture, there is such a bump in bandwidth is that when you add more memory controller units to a SoP (System on a Package, like the M1), you can wire the extra memory directly to them, while in a typical PC architecture you can’t, so the memory bus is shared across all the memory modules hanging off your CPU. So, by demanding memory expandability, one is also indirectly causing performance bottlenecks.

In the PC world things get even worse, because the business side dictates a lot of the rules. For instance, CPU makers can only guarantee a PC maker a certain number of CPU units, so PC makers, in order to produce more boards and sell them all, generally get batches of different CPU types and create i3, i5, i7 and i9 models around the same main board. One thing that constantly happens in PC laptops is that, when chasing thinner designs, more powerful CPUs (which are also more power hungry) heat up more than an i3 or an i5; this causes more thermal throttling, which reduces performance, and in many cases PC vendors even under-rate the CPU’s TDP to avoid overheating, so the user ends up buying a product whose headline specs are not actually what is in the box. Yup, it is that bad. So, while a lot of buyers believe they are spending less on PC laptops, they systematically end up spending more because their laptops become obsolete much faster.

All of the above is part of the reason why I can’t wait for ARM to become the de-facto standard in the PC market, because there really is no point for x86 to continue to exist. It’s no longer the 80s and 90s, when a lot of code was still written in assembly; there is so little assembly code around these days that there is no need to stick to one architecture the way the PC world has done, especially when it no longer fits the requirements. |
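(For anyone who wants to check those figures, a quick back-of-the-envelope calculation in C. The bus widths and transfer rates below are the commonly reported ones – 128-bit LPDDR4X-4266 on the M1, 256-bit and 512-bit LPDDR5-6400 on the Pro and Max – and the results are theoretical peaks, so real-world throughput will be lower.)

    #include <stdio.h>

    /* Peak bandwidth = (bus width in bytes) * (transfers per second).
       Widths/speeds are commonly reported figures, not official specs. */
    static double peak_gb_s(unsigned bus_bits, double mega_transfers_s)
    {
        return (bus_bits / 8.0) * mega_transfers_s * 1e6 / 1e9;
    }

    int main(void)
    {
        printf("Dual-channel DDR5-6400 (128-bit): %6.1f GB/s\n", peak_gb_s(128, 6400)); /* ~102 */
        printf("M1     (128-bit LPDDR4X-4266):    %6.1f GB/s\n", peak_gb_s(128, 4266)); /* ~68  */
        printf("M1 Pro (256-bit LPDDR5-6400):     %6.1f GB/s\n", peak_gb_s(256, 6400)); /* ~205 */
        printf("M1 Max (512-bit LPDDR5-6400):     %6.1f GB/s\n", peak_gb_s(512, 6400)); /* ~410 */
        return 0;
    }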
Andrew Chamberlain (165) 74 posts |
I think I understand. The great strength of the traditional modular PC setup is that it facilitates competition, which drives down prices. Even if a modular system can’t be optimised to the same degree as a SoP, if it’s cheaper then it could still be popular. |
Stuart Swales (8827) 1357 posts |
Every little thing adds cost. Sockets for DIMM modules – kerching. Sockets for processors – kerching. Lot cheaper to just blob the BGAs onto the board and make do. Might not be ideal, but there will be buyers. |
Rick Murray (539) 13840 posts |
I suppose it depends upon the target market sector. Hell, these days forget about expanding memory; in many devices it would be nice just to be able to replace the battery. And thanks to Apple’s evilness and control-freak nature, they’re not enjoying the whole “right to repair” thing. Which means if you get a non-approved screen replacement in a newish iPhone (which, unlike going through Apple, can be done then and there and costs a lot less), the phone will notice and deactivate unrelated things “for security”. It’s not really that much better over in Android land. I’m not aware (yet) of a phone that is as customer-aggressive as Apple is trying to be, but battery changes increasingly mean fighting wodges of glue and microscopic screws in weird places. Getting out of Belmarsh is possibly easier than getting into a phone these days. Plus, of course, there’s not much point in upgrading a device that’s already obsolete… This goes for PCs as well. Just look at the churn gamers go through in adding/upgrading the graphics unit, the processor, an entirely new box, etc. Oh, and notice the ridiculous baseline specifications that Win11 expects.
That’s why I wonder why the Pi boards have the LCD socket fitted – there must be tens of millions of them that will never be used. More annoyingly, the LCD socket is fitted, yet a header for a reset button is not. :-/ |
Rick Murray (539) 13840 posts |
Easily done, the guy’s dead…
They did that in order to raise cash quickly when times were hard. When Jobs came back, he pointed out that the clone program was actually hurting Apple as some of the high end clones were, well, better. Jobs did approach Compaq and Sony when they were moving to x86 hardware, but neither were interested in making OS X boxes. Maybe by then they had, you know, a bit of a reputation?
Copland was doomed to failure. Pretty much the worst idea was to make it compatible with classic MacOS…which was absolutely dreadfully engineered. Best thing they did was toss away all that crap and start again with OS X. I’m just surprised it took nine major releases and two architectures before they got rid of that awful mess. |
Paolo Fabio Zaino (28) 1882 posts |
LOL, spot on! There is a way to add one (which can even be triggered via software), but you are totally right, it’s super annoying not to have it by default.
Back then I coded on Mac OS 8 and later releases. I remember it was not an easy task to debug (possibly also because I had much less knowledge than I do now), but you’d be surprised how similar it was to coding on RISC OS… Mac OS 8 had a Toolbox too (yup), and a rather more powerful one. It had a very similar concept, with Toolbox events that were pretty much the same as RISC OS events (just more of them, a LOOOOOT more). Mac OS implemented the concept of low-level events, OS events and user events (IIRC), but yes, user events were generally only delivered while the application was in the foreground (I remember the foreground/background application terminology on Mac OS 8 and 9).

The Window Manager (Carbon) looked prettier than the RISC OS UI and had way more customisation options than the RISC OS WindowManager; especially on the matter of nested windows, Mac OS 8 was better than RISC OS. The number of standard controls on Mac OS was way higher than on RISC OS back then. So I would not say it made RISC OS look like a “paragon of virtue”, but like RISC OS it was not very stable if a developer wrote code that wasn’t robust, and indeed Mac OS had its issues, so it was far from perfect. Still, I recall it was way better than coding on Windows 95 (though that is probably because back then I had a PC with both EISA and PCI, which always made Windows 95 very unstable).

But hey, thanks for the trip down memory lane; I enjoyed coding on Mac OS, it was fun. Too bad it never really became as popular as Windows did. Mac OS X was a huge step forward for Apple: it worked far better, and coding on Mac OS X was far easier than on the classic Mac OS. A bit like the big step forward from Windows 95/98/ME to Windows 2000; I really liked coding on Windows 2000 back then. |
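(For readers who never saw the classic Toolbox, a rough skeleton of that event loop in C, written from memory and heavily simplified; structurally it is close to a RISC OS Wimp_Poll loop, with the event record standing in for the poll block and reason code.)

    #include <Events.h>   /* classic Mac OS Toolbox header (Universal Interfaces era) */

    /* Simplified classic Toolbox event loop -- roughly the counterpart of a
       RISC OS Wimp_Poll loop. Details from memory; initialisation and error
       handling omitted. */
    static void event_loop(void)
    {
        EventRecord event;

        for (;;) {
            /* Yields time to other applications and returns the next event,
               much like Wimp_Poll returning a reason code and poll block. */
            if (WaitNextEvent(everyEvent, &event, 30 /* ticks */, NULL)) {
                switch (event.what) {
                case mouseDown:  /* cf. Wimp Mouse_Click           */ break;
                case keyDown:    /* cf. Wimp Key_Pressed           */ break;
                case updateEvt:  /* cf. Wimp Redraw_Window_Request */ break;
                default:                                              break;
                }
            }
            /* No event returned -> idle time, like the Wimp's Null_Reason_Code. */
        }
    }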
Stuart Swales (8827) 1357 posts |
“With a dedicated team of 500 software engineers” Well that’s one way to doom a project, eh? |
Charlotte Benton (8631) 168 posts |
System on a chip becoming standard is inevitable. There’s no practical way of providing the necessary connectivity and low latencies otherwise. That doesn’t necessarily mean the end for modular systems, just that the compute module will be a package deal (in terms of cost, there’s far more to a computer than the components that actually do the computing). |
Charlotte Benton (8631) 168 posts |
The difference is that Acorn was fighting a format war against international giants, when its only base was the UK educational market. Nowadays ARM is the de facto global standard in several areas, and its push into the PC market is being backed by major players. |