


 Why make a small CPU? (And, why make a large one?) 

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1780
Elsewhere, Rob wondered:
robfinch wrote:
...But a decade from now, what will the $10 FPGA look like? What is the trend in price per LC? I started with an 800-LUT (200-slice) FPGA (XC4010).
It may not even be possible to get "tiny" FPGAs / PLDs in the future. So why pull one's hair out minimizing a design?


It's an interesting question - why would anyone want to design a small CPU? Here are some thoughts:
- it's fun to work with a constraint
- a smaller CPU can fit in a smaller, cheaper, package and use less power (make less heat)
- a smaller CPU might use a part which isn't BGA technology, and so be easier to solder
- it feels like a waste to have an inefficient solution: a powerful CPU doing only a little work, or a wide CPU talking through a narrow memory bus
- a smaller CPU might have denser code and need less code space
- a smaller CPU might clock faster and might have better interrupt latency
- a smaller CPU might have a simpler assembler
- a smaller CPU will probably need less verification and be less likely to have obscure bugs remaining
- a smaller CPU has a smaller spec, easier to write and easier to read

Other thoughts?


Wed May 10, 2017 3:01 pm

Joined: Sat Aug 22, 2015 6:26 am
Posts: 40
The 8-bit CPU challenge was just for fun. As explained in the first post, the idea was to see whether you could do better than the 6502 designers, and the 128-slice limit is roughly what you need for a 6502 lookalike.

But if I need a soft core inside an FPGA for one of my professional projects, I usually want a small one as well, simply because bigger FPGAs are expensive, and they usually only come in BGA packages, which means a more expensive board and more trouble prototyping. The smallest Spartan-6 only has 600 slices, so even a small 128-slice CPU would still consume more than a fifth of its resources.

I think it would have to be a very special project to justify the cost of something like 8400 slices for a soft core CPU. Maybe if you already require a big FPGA for the rest of the project, and the slices are basically free.


Wed May 10, 2017 4:05 pm

Joined: Sat Aug 22, 2015 6:26 am
Posts: 40
Another consideration is that the free tools only work for the smaller devices.


Wed May 10, 2017 5:52 pm

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1780
I thought of a couple more points:
- with a small CPU you can put several on one FPGA. That might be a good solution for some problems.
- with a small CPU you have lots of room left on the FPGA for other parts of your system. For example, huge amounts of distributed RAM, or some computational accelerator.


Thu May 11, 2017 1:39 am

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1780
I changed the subject of this thread, because it seems fair to also outline why one might use more resources:
- a large CPU can be wide, and a wide machine with a wide memory interface is an easy route to higher performance
- if a large FPGA is reasonable cost, or if the smallest FPGA is already quite large, then keeping the CPU small isn't critical for the system cost
- a large CPU could be simpler, if it doesn't need to reuse resources like adders and incrementers
- it's fun to explore the possibilities of what can be built with fewer constraints
- a large register set might make for an easier compiler target, and might reduce demands on RAM bandwidth
- a machine with complex instructions might need fewer instructions and so have better code density
- large structures[1] like microcode, multipliers, dividers, barrel shifters, FPUs and SIMD extensions might be desirable, and can be large without being complex

[1] On some FPGAs, microcode and multipliers are very low cost because they are separate resources from the logic fabric. But there still might be something to the argument.


Thu May 11, 2017 1:59 am

Joined: Sat Feb 02, 2013 9:40 am
Posts: 2095
Location: Canada
One reason to choose a larger CPU is that it may be compatible with an existing design, so the tools for that design can be used. For instance, a Z80 is unlikely to fit into 128 slices, but it might be more desirable to use a Z80 core than a custom core. TG68, a 68000 core, is about 2700 slices; ao486, a 486 core, is about 37000 LCs (5800 slices).
It is possible to fit some older mainframe-style processors on an FPGA.

_________________
Robert Finch http://www.finitron.ca


Thu May 11, 2017 4:13 am

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1780
A very good point! And, from a starting point of a compatible CPU, one might venture into backward-compatible extensions of that CPU... I think several of us have done that, and even more have thought about it!

I've been using a National Semiconductor 32016-compatible CPU recently: hanging the FPGA off a BBC Micro, I have a hierarchical filesystem on solid-state storage, plus Pascal, C and Fortran compilers and Lisp and BASIC interpreters, all running at a healthy speed. This CPU core just fits in the LX9, with no cache (unfortunately) and possibly with other bits missing.
https://github.com/hoglet67/CoPro6502/t ... src/m32632
https://github.com/hoglet67/PiTubeDirec ... 32016-core

(and the Lisp has a Logo implementation)


Thu May 11, 2017 8:24 am

Joined: Tue Dec 31, 2013 2:01 am
Posts: 116
Location: Sacramento, CA, United States
Arlet wrote:
The 8 bit CPU challenge was just for fun ...

A slightly different and more eccentric path to probe would be a 9-bit or 10-bit CPU challenge, to see just how annoying and limiting those darned 8-bit bytes can be ... 8-)

Mike B.


Thu May 11, 2017 7:33 pm

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1780
Discussing the fits-in-a-small-CPLD challenge with revaldinho, I saw he found himself knocking off address bits to make the machine fit. I did wonder about knocking off data bits - a 7-bit machine with 14-bit addresses isn't so bad. A 6-bit machine is a bit more difficult, because you couldn't do ASCII I/O very easily.


Thu May 11, 2017 7:35 pm

Joined: Tue Jan 15, 2013 10:11 am
Posts: 114
Location: Norway/Japan
BigEd wrote:
A 6-bit machine is a bit more difficult because you couldn't do ASCII I/O very easily.
The 16-bit Norsk Data minicomputers used a 6-bit char format for symbol tables in relocatable objects and other compiler output. "A".."_" were mapped to 0x01..0x1F, i.e. ASCII minus 0x40. 0x20..0x3F mapped directly to ASCII (space.."?"). Short version:

"To convert a 6-bit character to ASCII, add 0x40 to the value if the 6-bit value is above 0 but below 0x20, otherwise use as ASCII (or ignore if value is zero)." (0x00 was 'no character').

No lower case characters, and no control characters.. but good enough for symbol tables, with capital letters, numbers, underscores, spaces, and more.
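
In C, that rule would look something like this - just a sketch to illustrate the mapping, not actual ND code (the function name and the "HELLO?" example are only for illustration):

Code:
#include <stdio.h>

/* ND 6-bit code to ASCII, per the rule above: 0x01..0x1F gain 0x40 to
 * become 'A'..'_', 0x20..0x3F are already ASCII, 0x00 means "no char". */
static int sixbit_to_ascii(unsigned v)
{
    v &= 0x3F;                        /* keep 6 bits */
    if (v == 0)
        return -1;                    /* 0x00: no character */
    return (v < 0x20) ? (int)(v + 0x40) : (int)v;
}

int main(void)
{
    /* "HELLO?" packed as 6-bit codes: H=0x08 E=0x05 L=0x0C L=0x0C O=0x0F ?=0x3F */
    static const unsigned sym[] = { 0x08, 0x05, 0x0C, 0x0C, 0x0F, 0x3F, 0x00 };
    for (int i = 0; sym[i] != 0; i++)
        putchar(sixbit_to_ascii(sym[i]));
    putchar('\n');
    return 0;
}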

That would work for a 12-bit computer too. 8 bits can be limiting w.r.t. opcodes etc, but 16 (or 32) bits would be overkill.. so why not 12 then..


Fri May 12, 2017 7:16 am

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1780
Can you think of a simple logic fix (one chip, maybe) which would allow a 6-bit computer to connect to an 8-bit UART and do something sensible when viewed as ASCII? It's a pity that we still need CR or NL - though no other control characters; even BEL and TAB and NUL I can do without!


Fri May 12, 2017 7:21 am

Joined: Tue Jan 15, 2013 10:11 am
Posts: 114
Location: Norway/Japan
Well, if bit 5 is not set, set bit 6 - that should be easy. We could also translate 0x00 to CR, as it's not otherwise used, but that adds to the logic of the first rule.
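
In C terms, the whole translation (with the 0x00-to-CR idea bolted on) would be something like this sketch; the function name is only for illustration:

Code:
/* 6-bit code on its way out to an 8-bit UART.
 * Core rule: if bit 5 is clear, set bit 6 - in hardware that's just an
 * inverter from C5 to D6, with D7 tied low.
 * The 0x00 -> CR mapping is the extra rule that needs a bit more logic. */
unsigned char sixbit_to_uart(unsigned char c)
{
    c &= 0x3F;
    if (c == 0)
        return 0x0D;                           /* unused code 0x00 becomes CR */
    return (c & 0x20) ? c : (unsigned char)(c | 0x40);
}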


Fri May 12, 2017 7:33 am

Joined: Sat Aug 22, 2015 6:26 am
Posts: 40
On an FPGA, you can benefit from block RAM parity bits to make 9 or 18 bit wide buses (my X18 CPU uses 18 bit opcodes for that reason). Going smaller isn't really useful.


Fri May 12, 2017 8:37 am

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1780
A few months have passed, and the tiny OPC1 for CPLD has led to the small OPC6 for FPGA. In the process, we learnt another advantage of making small CPUs: by working within a constraint, there's less trouble with an ever-growing feature set. There's a strong incentive to start with minimal features, and only to add a feature when it seems very desirable and there seems to be a simple enough way to implement it.

So, a constraint can be a creative advantage.


Tue Aug 15, 2017 11:45 am