
Software nostalgia discussion

From here:

You probably meant 16GiB, not MB/MiB.

Yeah, you’re right. When I started with computers, 640 KB was enough for everybody. It’s quite a leap from that to 16 GB.

BackPacker wrote:

640 KB was enough for everybody

- for all eternity

ENVA ENOP ENMO, Norway
When I started with computers, 640 KB was enough for everybody

Actually, it was not. But you had to make do.

Last Edited by Airborne_Again at 18 Jan 13:10
ESKC (Uppsala/Sundbro), Sweden

From here

10000 lines of C compiles to about 30k, for stuff I make.

And it always did, all the way back to the Z80, when men were men, girls were girls, and life was real.

2x more code for an ARM.

Today we live in a bloatware world.

Administrator
Shoreham EGKA, United Kingdom

mmm, my ZX81 had 1KB of memory. It was fun to write a game in assembler for it. Then I purchased a 4KB memory extension.
Two years later, the 64KB of my Commodore 64 felt like getting an infinite amount of space!

Ukraine

On my first paid job, I was a junior member of the sysadmin team (back then they were called system programmers – and the work involved quite a bit of programming indeed) on an IBM-compatible mainframe that performed 300,000 register operations per second, had 512 KB ferrite core RAM and a whopping 72 MB of disk space (2×29 + 2×7), occupied 120 m² and consumed 40 kW. It was implemented in 7400 series ICs.

LKBU near Prague, Czech Republic

free·dom /ˈfrēdəm/ noun. The human state of having more money, more toys and less software.

Peter wrote:

2x more code for an ARM, accusation of ‘bloat’

Knowing both ARM and Z80 asm, I’d disagree pretty strongly with that.

Quite a lot of Z80 code is just shuffling things in and out of A and HL, because they are the only registers you can do most operations with – and a lot of the skill in writing Z80 asm is arranging your code to minimise this shuffling.

If you use the Thumb instruction set on ARM, each instruction is exactly 16 bits wide, and all the registers are genuinely general purpose (you can do anything with R5 that you can with R0), so when you call subroutines you can often avoid passing arguments on the stack (or via memory by some other means), which is slow. Z80 instructions vary between 8 and 32 bits wide, and some are incredibly slow – anything involving the index registers costs 20+ T-states, during which you cannot service an interrupt.

Even if you use full ARM32 rather than Thumb (where every instruction is exactly 32 bits wide), it will still be well under 2x the code. You only get near “2x” if you count raw memory usage, and an equivalent ARM program will likely need significantly fewer instructions, especially if you’re dealing with values longer than 16 bits. ARM also has the barrel shifter, so ALU instructions can do two things at once, e.g. a calculation and a shift operation in a single instruction.

Andreas IOM

That’s also true but the ARM RISC instruction set generates more kbytes of code. Also each being 32 bit… Nevertheless this comparison is of little value because the future is with the ARM, etc.

Really good Z80 code used much more than A and HL; it used the whole lot, and sometimes the alternate set too (EXX).

But then I wonder whether a lot more effort used to go into the compilers than today. There were some fiendishly clever compilers for the Z80, which cost the best part of 1k. But nobody pays for any of these tools for the ARM; they are all free. And I wonder how they were generated.

For example, take printf(). On a really good Z80 compiler a full-featured (float etc) printf took up about 4k. But many were produced by writing it in C and compiling it – dead easy, because printf() exists as C source (in many forms) in the public domain.

I was once execution-time profiling an sscanf implementation which was taking up ~90% of the CPU time (processing HPGL input at 38400 baud) and found the stupid compiler did a floating multiply by 10, and then a floating add, for every incoming digit! That was dumb, because to multiply by 10 you subtract 30h from the incoming digit, shift the value left once (2x), save that, shift left twice more (8x), and add the saved copy back on. And nobody is going to put any effort into optimising tools which being GNU are a give-away.

Administrator
Shoreham EGKA, United Kingdom

Peter wrote:

That’s also true but the ARM RISC instruction set generates more kbytes of code.

Well, not in any way that really matters if you use Thumb, especially if you’re dealing with values longer than 16 bits – ARM instructions generally do a lot more than Z80 ones, Thumb instructions are on average no longer than Z80 ones, and ARM has a lot more registers, which lets you write pretty substantial subroutines without ever having to store a working value in memory.

Z80 allows compactness where you can use instructions like LDIR or OTIR (in most other processor architectures you’d have to write a loop). But because LDIR is slow, many Z80 programmers end up using unrolled LDI instructions anyway, or fiddle with the stack pointer and use unrolled PUSH instructions – PUSH takes 11 T-states, versus 16 T-states for LDI and 21 T-states for each trip around LDIR. Some of the ALU instructions can also work directly on memory, but these are slow and generally only HL can be the pointer.

Peter wrote:

Really good Z80 code used much more than A and HL; it used the whole lot, and sometimes the alternate set too (EXX).

That wasn’t my point – my point is there are a lot of instructions that only work on A, so you often have to marshal stuff in and out of the accumulator to do anything useful (and it is quite a skill to minimise the amount of register shuffling). For instance, there is no ADD D, L instruction – only ADD A, L (and the 16-bit ADD is similar, except that HL is the only possible destination). There is no INC (DE) instruction, only INC (HL).

Peter wrote:

And nobody is going to put any effort into optimising tools which being GNU are a give-away.

You couldn’t be more wrong – in fact that statement is so wrong it’s likely to rip a tear in the fabric of space-time :-) HUGE efforts are put into optimising the GNU tools. The statement also seems to fundamentally misunderstand open source software.

(Incidentally, there isn’t a GNU C compiler for the Z80 – there is GNU binutils for Z80, i.e. an assembler and linker, which are very good software, but no version of gcc targets the Z80. There is Z88DK, which has a pretty good optimising compiler for the Z80. ARM has both gcc and llvm compilers, and both produce highly optimised code. There are of course edge cases – e.g. gcc will put the normal preamble and postamble into a subroutine being used as an interrupt service routine, which isn’t necessary because the CPU automatically pushes the registers in question as part of the interrupt response – but this is pretty much an edge case, only wastes a single-digit number of T-states, and anyone who really cares will likely not use the compiler to generate their ISR.)

Andreas IOM