rosmaniac@reddit
While the MOS 6502 was a groundbreaking design and has its advantages, including clock cycle efficiency, Zilog is the one that is still in every high school in the US and available in every Walmart, in the form of the TI-83/84 series graphing calculators. The latest TI-84 uses an eZ80 at 48 MHz, and the eZ80 is extremely cycle efficient, perhaps more so than the 6502.
johndcochran@reddit
Unfortunately, the 6502 fails miserably on memory timing efficiency. I like the processor, but I like the Z80 far more. And for any given memory system, the Z80 was faster than the 6502.
For instance, assume you have a memory system with an access speed of 500 ns. For that memory, the maximum clock speed for the 6502 would be 1 MHz. So, effectively the 6502 required a memory speed twice as fast as the clock used on the 6502.
Now, let's use that same 500 ns memory on a Z80. The most stringent timing for the Z80 was the opcode fetch on the M1 cycle. This was 1.5 clock cycles long. So you could clock the Z80 at 3 MHz and access that 500 ns memory with zero wait states. If you used the more relaxed timing for normal read and write cycles that weren't during M1, you could clock that Z80 at 4 MHz, but at the cost of having to add a wait cycle for M1 reads.
In the following table, I'm assuming 500 ns memory. For the 6502, there is a 1 MHz clock. For the Z80, there is a 3 MHz clock (no wait states) and a 4 MHz clock (1 wait state for the M1 read cycle).
Now, let's compare some opcodes:

| 6502 | 6502 µs | Z80 | 3 MHz Z80 µs | 4 MHz Z80 µs |
|------|---------|-----|--------------|--------------|
| NOP | 2 | NOP | 1.33 | 1.25 |
| JMP addr | 3 | JP addr | 3.33 | 2.75 |
| ADC #n | 2 | ADC A,n | 2.33 | 2 |
| ADC zaddr | 3 | ADC A,r | 1.33 | 1.25 |
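The clock and per-opcode arithmetic can be sketched in Python. This is a rough model of the reasoning above, not a datasheet calculation; the Z80 T-state counts used here (4 for NOP, 10 for JP addr) are my additions:

```python
# Rough model: 500 ns memory; the 6502 gives memory roughly half of each
# clock cycle, while the Z80's tightest access (the M1 opcode fetch)
# spans about 1.5 clocks.
MEM_NS = 500

mhz_6502 = 1000 / (2 * MEM_NS)    # cycle must be >= 2 * 500 ns -> 1 MHz
mhz_z80 = 1000 / (MEM_NS / 1.5)   # 1.5 cycles must be >= 500 ns -> 3 MHz

def usec(t_states, mhz, wait_states=0):
    """Instruction execution time in microseconds."""
    return (t_states + wait_states) / mhz

# At 4 MHz, one wait state is added to each instruction's M1 fetch.
print(round(mhz_6502, 2), round(mhz_z80, 2))            # 1.0 3.0
print(round(usec(4, 3), 2), round(usec(4, 4, 1), 2))    # NOP: 1.33 1.25
print(round(usec(10, 3), 2), round(usec(10, 4, 1), 2))  # JP:  3.33 2.75
```

Those last two lines reproduce the NOP and JP rows of the table.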
Overall, the Z80 would perform most work faster than the 6502, assuming both processors were clocked as fast as their memory allowed. One major issue with the 6502 is its shortage of registers, so holding intermediate results inside the processor is effectively impossible, whereas with the Z80's abundant registers many intermediate results can be kept on-chip. The effect is that on the 6502, memory bandwidth has to be shared between code and data when performing computations, while on the Z80 most or all of the memory bandwidth can be used by code. This lets the Z80 perform more calculations in less time.
Overall, both processors are nice to work with. But the Z80 is much easier.
* The Z80 has a 16-bit stack pointer vs the 6502's 8-bit stack pointer
* The Z80 has a lot more registers than the 6502
* Sharing page zero between system code and user code on the 6502 is a PITA
rosmaniac@reddit
I'm definitely a Z80 fan, but there are areas where the 6502 is more efficient. Neither are as nice as the 6809 or any of the 16-bit chips, including 65816 and Z280 (or the Z380 or eZ80) for C code.
scruss@reddit
Trick question, 'cos MOS was a second source for Z80s. And pretty much every CPU made at the time got tested at MOS's facility
sixothree@reddit
Technically correct. The best kind of correct!
rosmaniac@reddit
Mostek Corporation, distinct from MOS technology, was a Z80 second source, but this is the first time I've seen the claim that MOS Technology second sourced Z80. That would be really interesting if so; would love to see the reference.
https://en.m.wikipedia.org/wiki/Mostek
scruss@reddit
ahh, I may have fumbled this one. I suspect you're right. Companies need very different names, dammit!
timfountain4444@reddit
Motorola. 6800 and then the 6809 CPU was much better than either the Z80 or the 6502. And the 68k was just outstanding for the time. I cut my teeth on 6502 assembly language on the Science of Cambridge Mk14 in the late '70's... The intel 8086 instruction set made zero sense to me. Still doesn't!
stalkythefish@reddit
Give me big-endian and source,destination any day vs little-endian and destination,source. Who thought it was a good idea to deliberately make things backwards?!
wvenable@reddit
Backwards?!? What does X = Y mean in your programming language of choice? And big-endian is just madness. :P
stalkythefish@reddit
I thought this might ruffle somebody's feathers! :D Little endian is like writing 10,110 as 110,10. The bits go MSb->LSb, but the bytes go LSB->MSB. But yeah, it really only matters if you're looking at a memory dump and you have to keep it in mind.
And don't get me started on that segment:offset shit vs flat addressing. I'll give Intel I/O vs memory addressing though. That was kind of a neat idea... although they could have just used that line as an extra address bit.
wvenable@reddit
It matters for more than that. By storing the least significant byte first, it makes it easier and faster for systems to incrementally process or extend values, particularly in low-level operations like pointer arithmetic or reading multi-byte integers from memory. It aligns with how arithmetic operations are naturally performed starting from the least significant digit. The machines don't have it wrong, we humans have it wrong.
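The incremental widening can be sketched in Python, with a `bytes` object standing in for raw memory (the `read_le` helper is a hypothetical illustration, not any particular API):

```python
import struct

# A 32-bit value stored little-endian: the low byte sits at the
# lowest address.
mem = struct.pack("<I", 0x12345678)   # b'\x78\x56\x34\x12'

def read_le(buf, n):
    """Read an n-byte little-endian integer starting at offset 0."""
    val = 0
    for i in range(n):
        val |= buf[i] << (8 * i)
    return val

# Reading 1, 2 or 4 bytes from the *same* starting address just widens
# the value; no address adjustment is ever needed.
print(hex(read_le(mem, 1)), hex(read_le(mem, 2)), hex(read_le(mem, 4)))
# 0x78 0x5678 0x12345678
```

With a big-endian layout the starting offset would have to change with the width, which is exactly the extra work being described.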
stalkythefish@reddit
Interesting. Does that still apply when CPUs are scooping up data in 4- or 8-byte gulps? Seems like once the data is in registers it becomes arbitrary.
And yes, these holy wars go back to the days of Usenet in the late 80's.
wvenable@reddit
If you want to cast an int to a byte it's a NOP in little-endian.
If you're operating on the entire number then it doesn't matter. As the number of bits a CPU can process natively increases, it does matter less. But we're still always operating on numbers smaller than the CPU's registers, and then it becomes a hassle.
Humans process numbers from right-to-left even though we read them left-to-right, so we humans have it backwards.
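The cast-is-a-NOP point can be sketched with Python's `struct` module standing in for raw memory:

```python
import struct

value = 0x42  # a value that fits in a single byte

le = struct.pack("<H", value)  # 16-bit little-endian: b'\x42\x00'
be = struct.pack(">H", value)  # 16-bit big-endian:    b'\x00\x42'

# Treating the first byte as the 8-bit value works only for
# little-endian; big-endian would need to skip to the last byte.
print(le[0] == value)   # True  -> the narrowing "cast" is free
print(be[0] == value)   # False -> BE needs an address offset
print(be[-1] == value)  # True
```

In C terms: on a little-endian machine, reading through a byte pointer aimed at an `int` yields the low byte with no pointer adjustment.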
MagnetoManectric@reddit
6502 assembly definitely feels easier. It's a bit RISC-ier. The Z80 gives you 16-bit pseudo-registers though, which I imagine make things like "filling a screen with pixels" or "doing maths that needs numbers larger than 255" a lot easier.
LisiasT@reddit
It depends...
MOS for games and performance computing.
Zilog for everything else.
Reason? The ASM language.
There are things in the Zilog design that made it slower, but they allow us to do some trickery that compiled languages like very much.
MOS favours speed and efficiency, which most of the time demands that a human write code to extract the performance from the chip. In particular, recursion is a stack eater, and having a single stack for everything, as on the MOS chips, seriously hinders it.
SomePeopleCallMeJJ@reddit
Why not both?
termites2@reddit
Some of the older studio gear is interesting here.
My Lexicon PCM80 effects rack has a Z80, a NEC V40, a 56000, and a proprietary Lexichip 2 DSP, all working in harmony mediated through the 'TACO' chip. It must have been fun to program for all those different architectures all at once!
Fragrant_Pumpkin_669@reddit
Does the PCM80 still hold up with the latest VST fx plugins?
termites2@reddit
The quality of the effects are still fine, though you can get the same kind of thing as plugins nowadays. The version of the Concert Hall reverb algorithm does everything you'd want from the early Lexicon Hall sound, it can be lush and artificial sounding in a nice way.
What is perhaps harder to emulate with plugins is the way the PCM80 combines all the different kinds of delays and reverbs with real time movement and control, which is also why it has a slightly unusual DSP architecture.
The 56000 does the modulated delays and filters, while the Lexichip handles just the reverbs. The Z80 handles loading programs into the Lexichip, and afterwards constantly overwrites some of the instructions in the Lexichip's program store to modulate parts of the reverb algorithm. (The Lexichip is fast, and performs 128 instructions per sample, but rather dumb and can't even compare two values or calculate an LFO on its own.)
The V40 does all the general housekeeping and also can write real time changes into the Z80 and 56000 program memory. This lets it do things like controlling delay feedback from the input signal level, while using the pitch of the last MIDI note you played to adjust the brightness of the reverb.
Anyway, I find these devices fun to play with, like old computer hardware. Nowadays you get pretty much just a single generic DSP in every effect, but the older hardware has lots of unique custom solutions. Some of the other digital reverbs I own are designed with just discrete logic chips, and lots of interesting hacks are used to create a fairly powerful DSP (for the time) with as few chips as possible.
LousyMeatStew@reddit
Though not vintage, the Cerberus 2100 is a modern design that combines both.
SomePeopleCallMeJJ@reddit
Sweet!
setwindowtext@reddit
What a fascinating piece of kit! What would be a real-life use case for it? I have a hard time imagining a curriculum where this would fit.
LousyMeatStew@reddit
I believe a big motivation was to use it to teach circuit design. The use of two different CPUs requires some pretty complex glue logic, and while a single FPGA could have handled this, they instead used 3 CPLDs on the original version (the Cerberus 2080) because it made the circuitry easier to understand. In fact, the use of the two CPUs seems to have been a deliberate, artificial difficulty: figuring out how to design the glue circuitry that lets these two CPUs coexist.
There's a full playlist documenting the hardware design process.
setwindowtext@reddit
Oh, that makes sense! I didn’t think about circuit design, only about the software part for some reason.
Zeznon@reddit
C128 z80 games would be interesting, I think. Unless you don't get access to graphics amd sound stuff.
Ok-Current-3405@reddit
Considering how slow a 2 MHz Z80 is compared to a 1 MHz 6502, I don't see the point. That's probably the reason why no serious game was developed for the Z80 on the C128.
Fun fact: some games like Alleykat or Elite128 double the 6502 speed during screen blanking, achieving a 30% performance boost when run on a genuine C128
International-Pen940@reddit
I’m not sure what the Z80 had access to, it would be interesting to find out.
Timbit42@reddit
CP/M for the C128 could run on both 40 and 80 column displays. To access features like sprites or sound, you would have to access the hardware registers directly.
timfountain4444@reddit
The BBC Micro also had an early ARM as an eval kit, plus 80186 and 6502 co-pros....
help_send_chocolate@reddit
Also NS32016.
johndcochran@reddit
Yea, and Microsoft was a bit of an asshole about that Softcard. Specifically, the macro assembler for CP/M had a bit of code in it that would test whether it was running on a Softcard. If it wasn't, the program would terminate quietly without any message. I found out about this trap when I later purchased a different Z80 card (it had a 6 MHz Z80, its own 64K of memory, and used an 8-bit port to communicate with the 6502 in the Apple: basically a much higher performance Z80 that used the Apple for simultaneous I/O). I was rather annoyed at M80 simply quitting. So, I fired up my debugger, found the offending piece of code, and patched it out of existence on my copy of the assembler.
Yes, Microsoft has a rather long history of being an asshole company.
Maeglin75@reddit
As a proud owner of a C128 I agree.
RafaRafa78@reddit (OP)
It's just for fun, a question for instant reaction ;)
Thanks, nice information.
nobody2008@reddit
Purely based on the machines' capabilities, MOS. That doesn't mean it was more capable, but it ended up on the good team 😁 I wonder what it would have looked like if the C64 had been based on a Z80.
rosmaniac@reddit
C64 with Z80 is basically the MSX and derivatives.
monolalia@reddit
Seems to depend on whether you want your vintage micro in black or beige :D (The 64 usually got there eventually due to UV)
Timbit42@reddit
The MOS machines were the best home computers, especially with the Atari 800 and Commodore 64 having hardware scrolling, sprites, and sound.
The Zilog machines were best for business as the TRS-80 and Amstrad (and others) could run CP/M.
The MSX 2 and Commodore 128 win in both categories, as each had a Zilog to run CP/M in 80 columns plus hardware scrolling, sprites, and sound chips for games. The later Amstrad Plus models had hardware scrolling and sprites but arrived very late, in 1990, and didn't sell well.
That said, I think the 6809 was the most powerful 8-bit CPU but it didn't get much use due to its cost.
GeordieAl@reddit
I still love my dragon32 and CoCo, but I started out with a ZX81, then 48k Speccy, then C64, so for me the Z80 and 6502 win out the 8 bit wars, but as an Amiga owner Motorola came back hard with the 68k to own the 16bit days… Amiga, Mac, ST, X68000 were all superior machines to the 16 bit PC era
OutlandishnessOld29@reddit
The K1810VM86 is my choice
Zeznon@reddit
WDC
the123king-reddit@reddit
The 65816 is a chip that had so much potential
mi7chy@reddit
Growing up with 6502, I'd say 6309 > 6809 > 6502 > z80.
BrissBurger@reddit
I wrote assembler on both and as a programmer I preferred the Zilog, the main reasons being:
* The 6502 was limited to a 256-byte stack fixed in page one, and on occasion I had to refactor code due to the stack colliding with my variables;
* The Z80 had a more flexible set of registers that allowed 8- and 16-bit arithmetic operations, whereas the 6502 was limited to 8-bit arithmetic;
* The Z80 also had bit-wise ops to set/test/reset a bit, whereas on the 6502 you had to write the equivalent using basic logic ops and masks;
* The Z80 also had block move ops (LDIR, LDDR) that were very handy.
Trenchbroom@reddit
My love for the C64 means MOS for me, although 80% of the arcade machines from my youth were Z80 so that's a HUGE feather in Zilog's cap.
johnnybovril@reddit
Mos all the way. 6502 was the inspiration for ARM via BBC micro and look where that’s taken us
turnips64@reddit
That’s not the case. The inspiration for ARM came from things like the RISC projects that IBM and some of the US universities were doing.
Trenchbroom@reddit
The appeal of a simple instruction set came from their experience with the 6502's architecture. The 6502 was more efficient per clock cycle than the Z80, much like a RISC design, even though it isn't a true RISC design.
scruss@reddit
Wasn't it that the Acorn team visited Bill Mensch at WDC, saw that it was a one-man band, and realized they had enough smarts to design a better chip than the 65C816 themselves?
NiteWaves77@reddit
laughs in Bil Herd
SophiaDrivesMeNuts@reddit
6502!
ewayte@reddit
MOS - Atari 800XL at home, Apple ][+ at college in the early 80s.
Confident_Oil_7495@reddit
Z80 all the way
AdamTheSlave@reddit
The closest thing I ever ran that even had a Z80 in it was a Sega Genesis (as a side CPU to the main Motorola 68K), so I wouldn't know how well they ran. On the other hand, I do have an Apple IIc, and love it. So I guess I would have to say the MOS 6502 :)
Pangocciolo@reddit
MOS, because the founder of Zilog has become the spiritual guru of a new pseudo-science creed, and must be boycotted.
mattthepianoman@reddit
That's a pretty dumb reason to dismiss the Z80
Pangocciolo@reddit
Sure, it's just an ideological choice. Never said it's an insightful reason.
king_john651@reddit
What's that?
Pangocciolo@reddit
https://en.wikipedia.org/wiki/Federico_Faggin The Wikipedia description is too kind anyway. His theory of consciousness is just creationist salad in quantum sauce.
Primo0077@reddit
I'm a CP/M guy, Z80 naturally!
Future-Side4440@reddit
I never really saw the use case for the Z80, especially as an add-on, such as the Softcard for the Apple II.
Apparently it had a library of commonly available business software. But what does that mean exactly: a spreadsheet, a database, a word processor, and… we've run out of things for it to do. Um, yeah, sounds really useful. lol
lazygerm@reddit
MOS for me. Though I did own a C128, I never got around to using its CP/M feature set.
FlyByPC@reddit
I have a Sinclair ZX80 and ZX81, a Tandy Model 100 and Model 200, and a PET -- so, both, but especially Zilog?
sneekeruk@reddit
Ignoring the actual question... The BBC in the picture was my first computer and it's really, really rare.
It's a BBC B+ 64K; a 128K version also came out, but they were both out for less than 6 months and then the Master came out. There are loads of Model A/Bs and Masters, but the B+ is a bit of a unicorn, especially 40 years later.
AlexOughton@reddit
Looking at this I'm realizing that everything I have from that era is MOS, and almost everything I don't is Zilog.
Clearly I'm biased!
dunzdeck@reddit
I've always preferred the Z80 for asm programming. Feels more like an old-school CISC ISA.
the123king-reddit@reddit
1802!
johndcochran@reddit
Ah, COSMAC ELF perhaps?
the123king-reddit@reddit
My MS2000 is just too cool. Slow as molasses and almost entirely useless, but cool
retardedboi1991@reddit
Both are good but the Z80 is my fav purely because it was designed with pitfalls for anyone trying to copy it and Amstrad is the G.O.A.T
rpocc@reddit
Can I choose Motorola? :)
I can’t see too much difference between the 6502 and Z80, although 6502-based computers were usually equipped with dedicated peripheral chips, so we had Nintendo, Atari, Commodore vs mainly Sinclair. So, between the two I’d choose the 6502, but only because so many great systems were equipped with it, and the Spectrum and TRS-80 seem too boring to me.
McTrinsic@reddit
MOS
numsixof1@reddit
Living in North America this one is pretty easy.. (sorry TRS-80)
JetzeMellema@reddit
Z80 4 life!