
 [ 4 posts ] 
 any new studies about constant data sizes? 

Joined: Mon Oct 07, 2019 2:41 am
Posts: 619
Have there been any new studies about constant data sizes?
A Microprocessor for the Revolution: The 6809, by Terry Ritter and Joel Boney (co-designers of the 6809), BYTE magazine, Jan-Feb 1979.
It gives the distribution of indexed offsets off X for the 6800 as: 0 -> 40%, 1-31 -> 50%, 32-63 -> 1%, 64-255 -> 6%.
Since this is an 8-bit CPU, I would scale the sizes x4 for 32-bit data.
For the CPUs I have been designing, I use B00.. to B10.. for positive numbers and B11... for negative numbers in constants.

Mon Oct 12, 2020 8:11 pm

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1789
Hi Ben
I'm not sure what you're asking, but it didn't look like it related to the thread you posted in ("74xx based CPU (yet another)"), so I've made your post a new thread. Please feel free to edit the subject line - I've guessed.

Mon Oct 12, 2020 8:56 pm

Joined: Mon Oct 07, 2019 2:41 am
Posts: 619
What I am thinking of is the constants in typical programs.
How large are typical structure offsets, variable offsets, or local variables?
The struct ... char name[16]; short inode; of the 1970s could be
struct ... char path[1024]; long long inode; of the 2020s.

Having an idea of the ranges could improve code density. RISC came out just as
the world was moving from 16-bit to 32-bit Unix, and away from the 8086.
Having large flat segments then seemed the way to go. But is larger better?
Algol promoted local variables and data, keeping variables in the scope that uses them.
Fortran promoted global variables everywhere, like COMMON blocks.
RISC tried to keep all variables in registers and hoped array access was rare.

What is the real size of a program's data segment? Are programs slowed down
by huge data areas like screen buffers, or by random access into floating-point arrays?

One needs to match the hardware to the data used.
But without knowing how today's programs address and arrange data, one has no clue
how to build better ways to access dynamic memory, or how fast we can go.


Tue Oct 13, 2020 5:40 am

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1789
I read recently that ARM's 64 bit instruction set does support larger embedded offsets and constants. I think the hope is that when inline data is sufficient, you save a clock cycle or two, or a cache entry, or give the databus something better to do. In other words, it's a performance win. Whenever the embedded offsets and constants are too big to be inline, there has to be a fetch from memory - in other words, nothing is impossible, it just costs a little performance and a little complexity in the tools.

I've certainly heard the idea that ARM's changes to create their 64 bit instruction set are a big bundle of complexity and detail - it's an even less simple machine to understand and program than the original, and there's a case to be made that the original ARM is not as simple as one might have expected from the RISC label. (But ARM was built at a particular time with a particular technology, and arguably makes reasonable tradeoffs for its implementation. It was never minimal, it was reduced.)

Tue Oct 13, 2020 7:55 am
Powered by phpBB® Forum Software © phpBB Group
Designed by ST Software