 Why did most relay computers use floating point arithmetic? 

Joined: Sat Jun 16, 2018 2:51 am
Posts: 44
In this video, CuriousMarc tours the 1958 FACOM 128B relay computer. At around 9:00 he mentions that most relay machines of the time ran "natively in floating point".[1]

Is there anything special about relays with regard to favoring floating-point arithmetic? At the end of the day, given 'x' and 'y', a logic gate should output 'z' regardless of how it is physically implemented...

Also, does "natively floating point" apply just to the ALU? I assume the program counter would have a separate counter that does integer math? What about memory addressing? Is there some circuit dedicated to converting a floating point address to its integer equivalent that can then be sent to the memory "unit"?

Is there a reason integer arithmetic is favored over floating-point in "modern" CPUs? (I.e. imagine a CPU whose assembly language looks like: `ADD r1, 3.14`)?


---

[1]: I have a vague understanding of Konrad Zuse's machines, which were also relay-based and used floating-point arithmetic.


Sat Nov 30, 2019 10:30 pm

Joined: Mon Oct 07, 2019 2:41 am
Posts: 44
Time frame. People needed a better calculating device than an adding machine, and decimal or decimal floating point is often what you got. With the beginning of the Cold War era, computer developers got the cash needed to create equipment so ships, tanks, ICBMs and radar could be better designed.


Sun Dec 01, 2019 12:41 am

Joined: Fri Mar 22, 2019 8:03 am
Posts: 209
Location: Girona-Catalonia
Hi Quadrant,

Relay computer floating point was not floating point as we currently know it. Most operations were still integer arithmetic, for example the program counter or the computation of addresses. Current floating-point representations are fully binary, including both the mantissa and the exponent. This is very convenient for hardware implementations, and modern processors can perform basic floating-point operations really fast. However, floating point has a number of disadvantages as a universal type. It can't really replace integer arithmetic due to rounding errors and other inaccuracies. Another reason integer arithmetic is favoured is that it is what 99% of modern applications require; mainly scientific applications and games need floating point. In the case of games and CGI animation, most floating-point work is performed on the GPU using fast hardware algorithms that are not even meant to be fully accurate.
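A quick illustration (Python here, just for convenience) of why binary floating point can't simply stand in for integers:

```python
# Why binary floating point can't replace integers. A 64-bit IEEE 754
# double has a 53-bit significand, so consecutive integers above 2**53
# collapse onto the same float value.
big = 2 ** 53
print(float(big) == float(big + 1))   # True: the two integers collide

# And 0.1 has no exact binary representation, so decimal sums drift:
print(0.1 + 0.2 == 0.3)              # False
```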

My understanding is that floating-point arithmetic on old relay computers was based on decimal digits (possibly BCD), at least for the mantissa, not on binary representations. Also, floating-point arithmetic required a lot of clock cycles, and there were no single instructions capable of floating-point addition or multiplication, as is the case in modern processors. Relay computers were capable of floating-point arithmetic and designed for it, but this was achieved with long sequences of instructions or subroutines. A relay computer is much slower than a pocket calculator and can't really be used for much more than arithmetic calculations. So that's why floating point was one of their key features: they were used mostly as (programmable) calculators.
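To make the "long sequences of instructions" point concrete, here is a toy sketch of decimal floating-point addition done entirely with integer operations, roughly the shape of work a relay machine's add subroutine had to grind through. The representation (a signed mantissa plus a decimal exponent) is invented for illustration and is not the FACOM's actual format:

```python
# Toy decimal floating point: a value is (mantissa, exponent), meaning
# mantissa * 10**exponent, and everything is done with integer ops only.

DIGITS = 6  # keep a 6-digit decimal mantissa

def normalize(mant, exp):
    """Shift so the mantissa has exactly DIGITS significant digits."""
    if mant == 0:
        return 0, 0
    while mant >= 10 ** DIGITS:        # too many digits: truncate one
        mant //= 10
        exp += 1
    while mant < 10 ** (DIGITS - 1):   # too few digits: shift left
        mant *= 10
        exp -= 1
    return mant, exp

def fp_add(a, b):
    """Add two (mantissa, exponent) pairs using only integer arithmetic."""
    (ma, ea), (mb, eb) = a, b
    if ea < eb:                        # make 'a' the larger-exponent operand
        (ma, ea), (mb, eb) = (mb, eb), (ma, ea)
    mb //= 10 ** (ea - eb)             # align the smaller operand's digits
    return normalize(ma + mb, ea)

# 123.456 + 7.89 = 131.346, i.e. mantissa 131346 with exponent -3
print(fp_add((123456, -3), (789000, -5)))   # -> (131346, -3)
```

Every one of those shifts and compares is a pass through the machine; that is where the clock cycles went.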


Sun Dec 01, 2019 4:52 pm

Joined: Sat Jun 16, 2018 2:51 am
Posts: 44
Ah I see, thank you both for the insight!
Quote:
It can't really replace integer arithmetic due to rounding errors and other inaccuracies. Another reason integer arithmetic is favoured is that it is what 99% of modern applications require; mainly scientific applications and games need floating point. In the case of games and CGI animation, most floating-point work is performed on the GPU using fast hardware algorithms that are not even meant to be fully accurate.
Ohhh, gotcha!


Sun Dec 01, 2019 7:06 pm

Joined: Tue Dec 31, 2013 2:01 am
Posts: 101
Location: Sacramento, CA, United States
I'm reminded of the Burroughs mainframes which had an interesting tagged storage system, whereby instructions like ADD had a single "syllable" (aka opcode) that worked on single and double precision values, floating point and integer. Integers were just floats with an exponent of zero. Since it was a stack architecture, no operand or addressing mode was required in this example. The tags also greatly reduced the possibility of accidentally (or maliciously) mixing up instructions and data.
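A rough sketch of the "integers are just floats with an exponent of zero" idea (the field names and decimal exponent here are invented for illustration; the real Burroughs word format, with its tag bits and base-8 exponent, is more involved):

```python
# Toy model of a Burroughs-style numeric word: (mantissa, exponent).
# An "integer" is simply a value whose exponent field is zero, so the
# same ADD logic can serve both integer and floating-point operands.

def is_integer(word):
    _mantissa, exponent = word
    return exponent == 0               # integer = float with exponent 0

def value(word):
    mantissa, exponent = word
    return mantissa * 10 ** exponent   # decimal exponent, for clarity

print(is_integer((42, 0)), value((42, 0)))   # True 42
print(is_integer((42, 2)), value((42, 2)))   # False 4200
```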

Mike B.


Mon Dec 02, 2019 7:18 am

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1320
I don't think there's any necessary link to decimal: Zuse's machines were floating point and binary.

I think floating point is very natural for scientific or engineering (numeric) computing: you often need a lot of dynamic range, far in excess of the need for accuracy. It's also natural to anyone who grew up with log tables and slide rules: you have to take care that the digits are right, and you also have to take care that the point lands in the right place.
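That range-versus-accuracy trade-off is easy to see in a modern double (Python shown for convenience):

```python
# Dynamic range vs. accuracy in a 64-bit IEEE 754 double: roughly 600
# orders of magnitude of range, but only 15 guaranteed decimal digits.
import sys

print(sys.float_info.max)   # about 1.8e308
print(sys.float_info.min)   # about 2.2e-308 (smallest normal value)
print(sys.float_info.dig)   # 15 decimal digits reliably representable
```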

Perhaps in commercial computing fixed point or integer computation is more natural.

In all cases, though, I'd expect addresses and program addresses to be integers - they can be decimal, but of course are usually binary.

There is at least one Basic interpreter which implements line numbers as floats - an unusual choice!


Mon Dec 02, 2019 2:09 pm