I recently bought an Apple computer with the new M1 CPU to supplement the bestiarium known as Varnish Cache's continuous integration cluster. I am a bit impressed that it goes head-to-head with the s390x virtual machine borrowed from IBM while never drawing more than 25 watts, but other than that: Meh …
This is one disadvantage of being a systems programmer: You see up close how each successive generation of an architecture has been afflicted with yet another "extension," "accelerator," "cache," "look-aside buffer," or some other kind of "marchitecture," to the point where the once-nice and orthogonal architecture is almost obscured by the "improvements" that followed. It seems almost like a law of nature:
Any successful computer architecture, under immense pressure to "improve" while "remaining 100% compatible," will become a complicated mess.
Before RISC dominated the field, there were several interesting approaches, each successful in its own niche. Maybe some of those ideas should be re-explored if we ever grow RISC-averse.
It would be great to have a single document summarizing some of these architectures. Otherwise, how can we even compare them? I'd love to be able to make a cogent comparison between the AS/400 and the Burroughs/Unisys Large Systems ("BULS"). If memory serves, writing a C compiler for the latter was made particularly difficult because a linear address space is alien to BULS and to just about every piece of software running on it. The same goes for the HP 3000, but I have never seen official documents discussing those efforts...
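To make the C-compiler point concrete, here is a minimal sketch (my own illustration, not from any BULS or HP 3000 documentation) of the kind of everyday C idioms that quietly assume one flat, linear address space, and which a compiler for a segmented or descriptor-based machine would have to emulate or forbid:

/*
 * Illustration only: three common C idioms that presume memory is a single
 * linear array of bytes with numeric addresses. On a descriptor-based
 * architecture such as the Burroughs Large Systems, none of these
 * assumptions comes for free.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct header { uint32_t magic; uint32_t length; };

int main(void)
{
    _Alignas(struct header) unsigned char buf[64];

    /* 1: treat memory as undifferentiated bytes, then reinterpret
       part of it as a structured object */
    memset(buf, 0, sizeof buf);
    struct header *h = (struct header *)buf;
    h->magic = 0xCAFEBABEu;

    /* 2: pointer arithmetic that assumes objects live at plain
       integer addresses which can be subtracted and compared */
    unsigned char *start = buf;
    unsigned char *end = buf + sizeof buf;
    printf("span: %td bytes\n", end - start);

    /* 3: round-trip a pointer through an integer type, presuming
       an address is just a number */
    uintptr_t as_int = (uintptr_t)start;
    unsigned char *back = (unsigned char *)as_int;
    printf("round-trip ok: %d\n", back == start);

    return 0;
}

On a conventional flat-memory machine this compiles and runs as expected; on a machine where a "pointer" is a protected descriptor rather than a byte address, each of the three steps needs special handling by the compiler and runtime.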