
Ternary: Past and Present

A project log for Ternary Computing Menagerie

A place for documenting the many algorithms, data types, logic diagrams, etc. that would be necessary for the design of a ternary processor.

Mechanical Advantage • 04/14/2019 at 20:06 • 0 Comments

Calling this a "history" would be misleading. I'm not trying to set forth a comprehensive chronology of the development of ternary logic or technologies. Rather, I simply want to quickly mention some of the key aspects of ternary development. The point is to make it better understood that, while the combination of binary circuits and boolean logic are the dominant digital technology, they are not the only digital technology that has ever been implemented. Every single component needed to build a microprocessor has been implemented with ternary technology at some point in time. It can be done because it has been done. It just hasn't become popularized or mass produced. It may never be, but that doesn't mean it's not interesting to try.

With that said, let's take a look at the development of digital technology in general. Note that I use the term digital in its strictest sense: it implies discrete values, not necessarily binary values. I consider that modern digital technology required the junction of two previously unrelated sciences. One was electronics and the other was formal logic. Before Claude Shannon became known as the father of communication theory, he was working with telephone switching equipment, and even earlier than that he had been exposed to the then little-known Boolean logic. In 1937 he made the connection that binary telephone switches could implement Boolean logic and therefore could be used to do algebra and arithmetic. Analog computers already existed at that time, but this insight pushed them into obscurity and paved the way for the digital revolution. From that point forward, the combination of binary circuits and Boolean logic has been essentially synonymous with digital computers.

When I talk about ternary computers, I am talking about the combination of ternary circuits and Kleene logic. Kleene logic is named after Stephen Cole Kleene (pronounced KLAY-nee), who is probably better known for inventing regular expressions and for his work with Alonzo Church on the lambda calculus. It is a system largely based on De Morgan algebra. It was developed over time, but the key point is that by 1938 Kleene logic formally incorporated the concept of an uncertain or unknown value and handled it in a predictable and consistent way. Also very important is that Boolean logic fits in as a subset of Kleene logic without alteration: every Boolean logical function works as expected within the larger context of Kleene logic.

Kleene is not the only mathematician to extend Boolean logic to more than two values, but I chose his formal definitions as the most appropriate for a ternary computer. This is because, if you map the values False, Unknown, and True onto the balanced ternary numerical values -1, 0, and 1 and then do arithmetic with them, the results come out correctly. Other multi-valued logic systems I studied were not consistently compatible with balanced ternary arithmetic.
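As a quick sanity check, here is my own sketch of that mapping (the function names are mine, not standard terminology): with False, Unknown, and True as -1, 0, and 1, the Kleene connectives reduce to ordinary arithmetic comparisons, and the Boolean subset behaves exactly as it would in two-valued logic.

```python
# Kleene's strong three-valued logic over the balanced ternary
# values -1 (False), 0 (Unknown), 1 (True). Sketch for illustration.

def k_not(a):
    return -a          # negation swaps True and False, leaves Unknown alone

def k_and(a, b):
    return min(a, b)   # AND is the minimum of the two truth values

def k_or(a, b):
    return max(a, b)   # OR is the maximum of the two truth values

# The Boolean subset (only -1 and 1) works exactly like Boolean logic:
assert k_and(1, -1) == -1 and k_or(1, -1) == 1 and k_not(-1) == 1

# Unknown propagates predictably: a definite result wins where it can,
# otherwise the result stays Unknown.
assert k_or(1, 0) == 1     # True OR Unknown is True
assert k_and(-1, 0) == -1  # False AND Unknown is False
assert k_and(1, 0) == 0    # True AND Unknown stays Unknown
```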

If we look at the types of instructions that a processor actually executes and skip all the ones that relate to the operation of the computer itself (I/O, interrupts, program flow, etc.) you pretty much just have math operators and logical operators. In my concept of a ternary processor, math operators are balanced ternary arithmetic and logical operators are Kleene logic. I won't go into Kleene logic itself because the parts that actually pertain to computing will be explained in a later post on 2-input gates.

That pretty much wraps it up for the logic part of a ternary digital computer system, but what about the hardware? It turns out that there has been much more development in this area than most people realize.

In 1840 an Englishman named Thomas Fowler built a fully functional wooden calculator that used balanced ternary arithmetic to simplify tax assessment calculations. Along with this, he published a pamphlet detailing addition and subtraction algorithms in balanced ternary.
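Balanced ternary addition works digit by digit much like binary addition, except that each digit is -1, 0, or 1 and the carry can be negative. This is my own illustrative sketch (not Fowler's original procedure), with numbers stored as little-endian lists of digits; subtraction falls out for free, since negating a balanced ternary number just negates every digit.

```python
def bt_add(a, b):
    """Add two balanced ternary numbers given as little-endian lists
    of digits from {-1, 0, 1}. Illustration only."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = carry
        s += a[i] if i < len(a) else 0
        s += b[i] if i < len(b) else 0
        # s is now between -3 and 3; fold it into a digit and a carry
        if s > 1:
            s -= 3
            carry = 1
        elif s < -1:
            s += 3
            carry = -1
        else:
            carry = 0
        result.append(s)
    if carry:
        result.append(carry)
    return result

def bt_sub(a, b):
    """Subtract by adding the negation: no separate borrow logic needed."""
    return bt_add(a, [-d for d in b])

def bt_value(digits):
    """Convert a little-endian balanced ternary digit list to an integer."""
    return sum(d * 3**i for i, d in enumerate(digits))

# 5 = [-1, -1, 1] (that is, -1 - 3 + 9) and 7 = [1, -1, 1]
assert bt_value(bt_add([-1, -1, 1], [1, -1, 1])) == 12
assert bt_value(bt_sub([1, -1, 1], [-1, -1, 1])) == 2
```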

Between 1958 and 1970 Nikolay Brusentsov built at least 50 fully functional balanced ternary computers at Moscow State University. Every part of these computers worked on the principles of balanced ternary arithmetic except for main storage. Some documents suggest that main storage was natively binary and that translation between binary and ternary was handled by intermediate I/O devices.

Storage devices such as ROM, RAM, Flash memory, etc. that use three or more values per storage element have been consistently available since at least 1984. The SSDs in modern computers can store up to 16 different values (four bits) per storage element.

A vast array of communications protocols use anywhere from 2 to over 500 different values. In the field of radio communication, above-binary modulation has been ordinary for decades.

Ternary flip-flap-flops, 1-input gates, 2-input gates, sample-and-hold circuits, etc. have all been demonstrated with CMOS, NMOS, analog switches, multiplexors, discrete transistors, voltage comparators, op amps, etc. They have even made their way into some commercial products (I've found ternary circuits in some signal switching chips).

So if non-binary storage devices, non-binary I/O devices, non-binary modulation methods, and non-binary signal-control devices are all in use today, why isn't it more obvious? It's because processors are all still binary, and these other devices have to translate between binary and whatever non-binary system they happen to use. The next time you plug in your Samsung SD card, you could be using a binary device, an 8-level device, or a 16-level device. The level counts are all powers of two because that makes the translation easier, not because some other number of levels isn't feasible.

As far as I can tell, the real reason so many things have gone non-binary, but processors haven't, comes down to cost/benefit analysis. It makes sense for a serial communications chip to use more signal levels if doing so makes it faster without introducing errors; the comm chip doesn't need to do math or logic on those values. But processors need to process numbers. The manipulation of those numbers is their whole purpose, and the decades of research and development that have gone into squeezing every last scrap of performance out of the binary/Boolean system would be wiped out in a second if they switched to another number base.

Existing math algorithms wouldn't work, new logic problems would crop up, new physical engineering problems would appear, and so on. Everything from sorting algorithms to parity checking would have to be rebuilt from the ground up. It took decades just to get everybody to agree on a floating-point standard, and nobody wants to go through that again. And then they would have to figure out how to make the solutions technically feasible in silicon. In other words, it's just too damn much trouble.

If anyone is really serious about getting ternary processors off the ground, it would help to recognize the scope of what they are tackling. It is no less than redesigning several large chunks of computer science almost from scratch...

Sounds like fun.
