BigEd wrote:
It really is a question about how easy it is to build a compiler for a machine with some asymmetry in the register file. (I'm sure you already know more than I do in that area)
I was reminded of the 68k's separation of Address and Data registers, and wondered whether that might help pack the instruction encoding. We do know the 68k was a popular machine with several C compilers.
Yeah, the 68k gained a bit or two in the instruction encoding by separating data and address registers. It's tempting to do, since about a half-dozen registers are dedicated pointer-type registers (stack pointer, frame pointer, global pointer, return address, xhandler address, class pointer, thread pointer). But as you mention, it adds to the compiler's complexity.
Since I'm building a compiler too, I've decided to leave large constants up to the assembler, so the compiler should never need to use the asymmetrical lui instruction. The compiler assumes any instruction can take a constant of any size; the assembler then patches up the code, building the constant in a register when it's too large for the instruction's immediate field.
That makes the assembler more complex, since it has to check every constant against the instruction's range.
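To make the idea concrete, here's a rough Python sketch of that assembler-side patching, assuming a RISC-V-style 12-bit signed immediate field and a scratch register ("t0") reserved for the assembler. The instruction names, field widths, and register names are illustrative, not from my actual ISA:

```python
IMM_BITS = 12  # signed immediate width in the base instruction (assumed)

def fits(imm: int, bits: int = IMM_BITS) -> bool:
    """True if imm fits in a signed field of the given width."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return lo <= imm <= hi

def expand_addi(rd: str, rs: str, imm: int) -> list:
    """Emit one instruction when the constant fits, otherwise build the
    constant in a scratch register with a lui/addi pair and use that."""
    if fits(imm):
        return [f"addi {rd}, {rs}, {imm}"]
    # Split into a signed low 12 bits and a compensated upper part,
    # so that (hi << 12) + lo == imm even when lo is negative.
    lo = ((imm + (1 << (IMM_BITS - 1))) & ((1 << IMM_BITS) - 1)) - (1 << (IMM_BITS - 1))
    hi = (imm - lo) >> IMM_BITS
    return [f"lui t0, {hi}",
            f"addi t0, t0, {lo}",
            f"add {rd}, {rs}, t0"]
```

For example, `expand_addi("a0", "a1", 100)` stays a single instruction, while a constant like 0x12345 expands to the three-instruction sequence. The fiddly part is the sign compensation on the low half, which is exactly the kind of range testing that makes the assembler more complex.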
My next question is what to do about calls to functions whose displacement might exceed the number of bits available in the call instruction. It's easy enough to generate code for a "far" call to a function that's further away; the issue is knowing when to do it. Generating a far call for every function wastes code space and performance, but the compiler / assembler doesn't know in advance what needs to be encoded. That leads me to the ugly "far" / "near" routine designators, which the programmer would have to specify. Or perhaps a "using far code;" directive.
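One alternative to a programmer-visible far/near designator is the iterative relaxation trick some assemblers and linkers use: assume every call is near, lay out the code, then widen any call whose displacement doesn't fit and repeat until the layout stabilizes (widening is monotone, so it terminates). Here's a toy Python sketch; the sizes and range are made up, and code is modeled as a flat list of labels, raw byte runs, and calls:

```python
NEAR_SIZE, FAR_SIZE = 4, 12   # bytes per call form (illustrative)
NEAR_RANGE = 1 << 20          # max near-call displacement (illustrative)

def relax_calls(items):
    """items: list of ('label', name), ('bytes', n), or ('call', target).
    Returns {item_index: 'near' | 'far'} for each call."""
    form = {i: 'near' for i, it in enumerate(items) if it[0] == 'call'}
    while True:
        # Lay out addresses under the current choice of call forms.
        addr, pos, labels = 0, {}, {}
        for i, it in enumerate(items):
            pos[i] = addr
            if it[0] == 'label':
                labels[it[1]] = addr
            elif it[0] == 'bytes':
                addr += it[1]
            else:  # call
                addr += NEAR_SIZE if form[i] == 'near' else FAR_SIZE
        # Widen any near call that turned out to be out of range.
        changed = False
        for i, it in enumerate(items):
            if it[0] == 'call' and form[i] == 'near':
                if abs(labels[it[1]] - pos[i]) > NEAR_RANGE:
                    form[i] = 'far'  # may push other calls out of range, hence the loop
                    changed = True
        if not changed:
            return form
```

The cost is extra assembler (or linker) passes instead of a language keyword, and it only works when the tool doing the relaxation can see the final distances, so cross-module calls may still need a conservative far form or linker support.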