Ternary circuits have been implemented with op amps, comparators, discrete transistors, analog switches, diode-transistor logic, and CMOS processes, and a small subset of ternary gates has even been made with diode-resistor logic (not joking). My own "Tern" project details many of these techniques, and others, such as the "Shared Silicon" project, detail techniques I didn't explore. I consider the question of how to make ternary circuits to now be an engineering problem. Fundamental proof of concept has already been accomplished.

With that being said, there remain an enormous number of logic-level (rather than circuit-level) problems that need to be addressed in order to make a ternary processor possible. Things like: How does having a third state affect primitive data types? Or: How do you calculate a parity trit? Or: What useful "tritwise" operators are there?

This project is going to be my collection point for answering these questions.

Details

I suggest reading the posts in chronological order. Many terms are introduced early on and not explained in any detail later. Also, several concepts are evolving as I go.

You can see the posts in order by clicking on the "View all # project logs" button and then sorting by oldest.

Balanced ternary shift operations have different results than binary shift operations for four reasons: 1) The value of each digit position is a power of three rather than a power of two. 2) The values that can be shifted out could be a -1, 0, or 1 instead of just a 0 or 1. 3) All balanced ternary numbers are signed (except 0, technically). 4) The sign trit is not necessarily the most significant trit of the number. Thus, both left shifts and right shifts may or may not alter the sign of the number.

On most binary architectures there are three shift operations and four circular shifts. The three ordinary shifts are the right logical shift, the right arithmetic shift, and the left shift.

In a left shift, each bit is shifted one position to the left. A zero takes the place of the least significant bit and the most significant bit is lost. In some cases the MSB may be preserved in the carry flag of the status register.

A right logical shift is the same operation, but the shift is to the right. A zero takes the place of the most significant bit and the least significant bit is lost. It is the proper shift to use for unsigned numbers. In some cases the LSB may be preserved in the carry flag of the status register.

A right arithmetic shift is a little different in that the least significant bit is lost, but the new most significant bit is copied over from the existing most significant bit so that it does not change. This is so that the sign of a signed number remains the same after a right shift. In some cases the LSB may be preserved in the carry flag of the status register.

In balanced ternary, a left shift is a multiplication by three just as long as the trit shifted out is a 0. If a non-zero value is shifted out then it is not a straight multiplication and the sign will have changed unless the new most significant trit is of the same value as the one that was lost.

There is no distinction between a right shift and a right arithmetic shift. They are the same thing because the sign of the number is dependent only on the most significant non-zero trit. By shifting in a 0, the sign of the number is automatically preserved. The exception is if the shift drops off the only non-zero trit at which point the overall value becomes zero.

A right shift is the equivalent of dividing the original value by three and rounding toward 0. If the trit shifted out was a 0, then the value was perfectly divided by three without rounding. Rounding toward zero means that if a - was shifted out, the number is rounded up toward 0, but if a + was shifted out, the number is rounded down toward 0. In balanced ternary, rounding always approaches zero, not +infinity or -infinity. In contrast, a binary arithmetic right shift always rounds toward -infinity.
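To make that concrete, here is a minimal Python sketch (the helper names are mine, not part of any proposal) with a balanced ternary number stored as a list of trits, least significant first:

```python
def bt_to_int(trits):
    """Interpret a list of trits (-1, 0, +1), least significant first."""
    return sum(t * 3**i for i, t in enumerate(trits))

def bt_right_shift(trits):
    """Shift right one position, shifting a 0 into the most significant trit."""
    return trits[1:] + [0]

x = [1, -1, 1, 0]                          # 1 - 3 + 9 = 7
assert bt_to_int(bt_right_shift(x)) == 2   # 7 / 3 rounded toward zero

y = [-1, 1, -1, 0]                         # -1 + 3 - 9 = -7
assert bt_to_int(bt_right_shift(y)) == -2  # -7 / 3 also rounds toward zero
```

Note that the same shift routine handles positive and negative values with no separate arithmetic variant, which is exactly the point made above.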

The circular shifts take the bit or trit that was shifted out and make that the value that gets shifted in on the opposite side. They have two directions, left and right, and two variants, with carry and without. The rotate-through-carry simply treats the carry-in digit as if it were the least significant digit, so the total length is word-length + 1.

There is no functional difference between binary and balanced ternary when it comes to circular shifts.
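A quick sketch of the rotate-through-carry variant described above (the list-of-trits representation and function name are mine). The carry digit is treated as one extra position past the most significant trit, so the rotation spans word length + 1 digits:

```python
def rotate_left_through_carry(trits, carry):
    """Left rotate of a trit word (least significant first) through a carry digit."""
    extended = trits + [carry]                 # carry sits past the MST
    extended = extended[-1:] + extended[:-1]   # every digit moves one place up;
                                               # the top digit wraps to the bottom
    return extended[:-1], extended[-1]         # (new word, new carry)

word, carry = rotate_left_through_carry([1, 0, -1], 0)
# the old MST (-1) moves into the carry; the old carry (0) becomes the new LSB
assert (word, carry) == ([0, 1, 0], -1)
```

As the post says, nothing here is ternary-specific; the same routine works bit-for-bit in binary.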

I've been working on the subject of ternary logic and ternary processing for years on and off, but I haven't been doing so in such a methodical way as I am now. By approaching this in a more methodical and comprehensive way, I have discovered a rather big error in my logic.

Until now, I've been assuming that with careful design, a balanced ternary system could be made that would still be code-compatible with a conventional computer. Of course I knew that the internals of the compiler would have to have some significant work done to both interpret new syntax and to target a new (and highly unusual) architecture. Nonetheless, I was sure that existing code could somehow be shoehorned into the new system. This looked hopeful right up until I started trying to incorporate bitwise operators into the running thought-experiment I've been writing up in these posts. I didn't notice the problem when going over the NOT operator earlier, and I started to get a hint of it when trying to get bools and kleenes to play nice together. I finally realized the scope of the problem when I got into bitwise AND, OR, and XOR.

Each operator is useful for a variety of different purposes because of how they manipulate the bit pattern of their operands. But in a ternary system, the operands themselves are ternary, so even if you kept the operators identical, they will come up with unexpected answers every time. Basically, anything that is expecting to work at the bit-level will be broken. Anything that expects to work at the register/word level or that has to do with control flow, can be made to work.

So, going back to the beginning with bit-level subjects:

1) bools should still exist. To follow logical expectations in a balanced ternary environment, their FALSE representation should be a -, not a 0. I still favor strict value-checking so that bools can only hold a - or a +, nothing else.

2) Bitwise NOT and tritwise NEG (negation) become the same operation. NEG switches -'s into +'s and vice versa but leaves 0's alone. Since bools now use - for their FALSE condition and + for their TRUE condition, there would be no difference between NEG and NOT when using them on bools or kleenes. However, NOT would produce unexpected results if applied as a bitwise operator to other types of data, just like bitwise AND, OR, and XOR would.

3) The other bitwise operators such as AND, OR, XOR, and the various shifts, would have utterly different effects than they would in a binary system. The solution is not to try to force them into the system. The solution is to find equivalent tritwise operators that produce the results that programmers want out of the conventional bitwise operators.

4) This doesn't affect logical operations or arithmetic operations. Those are just fine as previously described.
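A minimal sketch of point 2, showing that tritwise NEG on a balanced ternary number is the same as arithmetically negating its value (the helper names are mine, with the number stored as a list of trits, least significant first):

```python
def bt_to_int(trits):
    """Interpret a list of trits (-1, 0, +1), least significant first."""
    return sum(t * 3**i for i, t in enumerate(trits))

def tritwise_neg(trits):
    """Swap - and + in every trit position; 0's are left alone."""
    return [-t for t in trits]

x = [1, 0, -1, 1]                                    # 1 + 0 - 9 + 27 = 19
assert tritwise_neg(x) == [-1, 0, 1, -1]
assert bt_to_int(tritwise_neg(x)) == -bt_to_int(x)   # NEG is arithmetic negation
```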

I don't have a comprehensive list of which ternary operations would produce each of the results people are currently using bitwise operators for (except NEG, of course). There are 19,683 different possible two-input tritwise operators (not counting shifts), so that would be a job about 1,200 times more complex than coming up with everything in the "Hacker's Delight" book from scratch and then writing a book about it.

Instead I'll write up the few I know of right now, and then as I find them or learn of them, I will do a separate post on any new "tritwise operator candidates". I say "candidates" because choosing which operators to implement in a real processor will be an important point. Digit-level operations generally take place directly inside the ALU, which means that independent circuitry has to exist in the ALU for every digit-level operator the processor is capable of executing. To avoid unnecessary complexity in the ALU it will be necessary to carefully choose which operations to include. Of course, any tritwise operations that are also necessary for arithmetic will be there anyways, so you get those "for free". The exception is the shift operations. Those are often done...

Control statements such as While loops, Do-While loops, For loops, Switch statements and If statements are most of the reason you have relational or logical operations in the first place. Looking at how balanced ternary values fit into this system of program flow is of the highest importance.

To put it simply, the existing control flow statements should work the same as always. Nothing would stop you from using kleene data types or the spaceship operator for the decision making, but it doesn't actually change the way the statements work.

If variables a and b are kleenes, then:

while ((a <=> b) >= NEUT)...

is equivalent to:

while ((a >= b))...

No new ground is being broken here; they would just work as you would normally expect them to. Of course, existing control statements expect boolean evaluation, so they would need to do type checking to correctly handle a kleene FALSE, which would otherwise look like a boolean TRUE due to being non-zero.

All this doesn't mean that control statements that take advantage of the more expressive ternary logic couldn't be created. An alternative to the "if" statement is easy to work out.

Imagine a conveyor belt that could be moving forward, moving backward, or stopped. The variable that held the current conveyor motion is a kleene called "conveyorDirection" that contains a - (FALSE) for reverse, a 0 (NEUT) for stopped, or a + (TRUE) for forward. Now imagine a control statement that natively evaluated kleenes. We'll call it an evaluation statement or "eval" statement.

eval(conveyorDirection) {
    //Code if -
} {
    //Code if 0
} {
    //Code if +
}

The syntax could be different as long as it was clear which section of code would be executed for each state.

This could even be used to ease debugging and error handling. Imagine you were evaluating a game variable called lifePoints. This variable should never be negative. In a boolean system, if an off-by-one bug or manipulation resulted in underflow, the negative non-zero value would return TRUE, so additional steps would have to be taken to check for this. With this hypothetical eval statement you have a convenient place to put your error handler call or even a breakpoint for your debugger.

eval(lifePoints <=> 0) {
    //Code if -
    invalidValueHandler(); //Feel free to put breakpoint here as well
} {
    //Code if 0
    printf("You Died!");
} {
    //Code if +
    printf("%d remaining", lifePoints);
}

An alternative while loop is a bit more complex. Currently, while loops test an expression and if it returns TRUE, executes the code, then tests the expression again, and so on. Any combination of TRUE, NEUT, and FALSE could be manipulated with conditional operators to evaluate TRUE, so the while loop (and do-while loop by extension) can already handle any three-value expression you want to throw at it.

A natively ternary version of the while loop might be something that evaluates an expression and then chooses which code to loop through. Let's call this hypothetical control statement a "choice loop". The distinguishing feature is that it will switch back and forth through different loops (scary, right?) depending on the return value of the expression. If it returns FALSE, one section of code is looped through until it returns something else at which point it loops through that code, and so on. A break in one of the sections would release the loop. Here's an example.

choice(x <=> 0) {
    //Loop if +
    doSomethingRepeatedly...
    x-- //This loop executes repeatedly and decrements x
} {
    //Loop if 0
    doSomething...
    break //This...

As for logical operators, the ones in common use are ! (NOT), && (AND) and || (OR). I have already addressed NOT in other posts, so we can focus on the two-input logical operators here.

Here are the equivalent kleene versions of those boolean logical operators:

Boolean     Kleene
&& (AND)    Min
|| (OR)     Max

By the way, here's a really good explanation of why logical XOR (and by extension XNOR) don't exist in C. http://c-faq.com/misc/xor.dmr.html

I'll skip over the obvious uses of the boolean logical operators. Suffice it to say, they regard each operand as a boolean, so a non-zero value is considered TRUE and a zero value is considered FALSE. The result of the operation is returned as a boolean value.

As for ternary logical operators, it first needs to be pointed out that there are 19,683 of them. Of course the vast majority wouldn't be of much use and it would be a complete joke to expect a language to have vocabulary for all of them. However, I suppose it could be possible for a language to compute the truth table for any logical operator if there were a way to describe it in the source code. Say, with the heptavintimal name for the two-input gate for example:

if (x <A6R> y == NEUT) then...

This way, theoretically any of the possible two-input gates could be used as a logical operator. I don't have a clue how (or why) that would be implemented, but it's an interesting idea.

Putting that aside for the moment, let's address the use of the ternary logical operators that correspond to existing logical operators. They evaluate each operand for its kleene value (FALSE for a negative, NEUT for a zero and TRUE for a positive) and then they return kleene values. Min returns the value of the lesser of the two inputs. Max returns the greater of them. Simple.

The thing to keep in mind is that they can return a NEUT and that this can be used for more expressive control statements. More on that in a later post.
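A minimal sketch of Min and Max acting as the kleene AND and OR, assuming the -1/0/+1 encoding used throughout these posts (the function names are mine):

```python
FALSE, NEUT, TRUE = -1, 0, 1

def kleene_and(a, b):
    """Kleene AND: the lesser of the two operands (Min)."""
    return min(a, b)

def kleene_or(a, b):
    """Kleene OR: the greater of the two operands (Max)."""
    return max(a, b)

assert kleene_and(TRUE, NEUT) == NEUT    # a NEUT drags an AND down from TRUE
assert kleene_or(FALSE, NEUT) == NEUT    # ...and pulls an OR up from FALSE
assert kleene_and(FALSE, TRUE) == FALSE
assert kleene_or(FALSE, TRUE) == TRUE
```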

This is just a quick address of an important law in logic. De Morgan's Laws demonstrate the relationship between two basic binary gates, AND and OR. Here they are:

NOT(A AND B) = (NOT A) OR (NOT B)
NOT(A OR B) = (NOT A) AND (NOT B)

These laws have an equivalent in Kleene's version of ternary logic using the negation gate and the Min and Max two-input gates. This is another reason I liked Kleene's version of ternary logic; not every version of ternary logic adheres to De Morgan's Laws.

Neg(A Min B) = (Neg A) Max (Neg B)
Neg(A Max B) = (Neg A) Min (Neg B)
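These identities can be checked exhaustively over all nine input pairs; here is a quick Python sketch with Neg written as arithmetic negation and Min/Max as min/max:

```python
def neg(a):
    """Kleene negation on -1/0/+1: swaps FALSE and TRUE, leaves NEUT."""
    return -a

# Exhaustive check of the ternary De Morgan identities over all 9 pairs
for a in (-1, 0, 1):
    for b in (-1, 0, 1):
        assert neg(min(a, b)) == max(neg(a), neg(b))
        assert neg(max(a, b)) == min(neg(a), neg(b))
```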

The conventional relational operations compare numerical or boolean variables and return a boolean value based on the result of that comparison. The possible variations of relational operations are:

if a then...
if !a then...
if a > b then...     if a !<= b then... (same as above)
if a < b then...     if a !>= b then... (same as above)
if a >= b then...    if a !< b then... (same as above)
if a <= b then...    if a !> b then... (same as above)
if a == b then...
if a != b then...

Further, a couple of new relational operators would exist. The first is:

if a == NEUT then...

Using '?' to represent this operator is nicely consistent.

if a then...
if ?a then...
if !a then...

The second one is !! (ternary negation) which differs from boolean NOT. It returns FALSE for a positive input, returns NEUT for a 0 input and returns TRUE for a negative input.

In a boolean system some of these operations are redundant and aren't generally implemented in languages. These same redundancies exist in a balanced ternary system because the basic laws of arithmetic haven't changed.

The behavior of the relational operators is well defined in each language for booleans and integers, but needs to be explored when comparing a boolean and a kleene. As mentioned in other posts, I favor booleans that are forced to contain only a 0 or 1 rather than the "soft" booleans where any non-zero value is TRUE. This is how Python and stdbool.h in C do it. I also favor expanding this concept to the kleene data type so that only -1 is FALSE, only 0 is NEUTRAL and only 1 is TRUE. Any attempt to assign a kleene a different value would result in a negative value being clamped to -1 and a positive value being clamped to 1.

When comparing a kleene and a boolean there are a couple of things to consider. So far I have been considering that a boolean FALSE would continue to be represented by a 0 but I now realize this could cause serious problems such as:

bool x = FALSE   //contains 0
kleene y = FALSE //contains -1

x //returns FALSE
y //returns FALSE

x > y //returns TRUE... uh oh...

x == -1 //returns FALSE
y == -1 //returns TRUE... dang

x == y //returns... FALSE? Let me think about it...

Now how about comparing a boolean FALSE to a kleene NEUT:

bool x = FALSE  //contains 0
kleene y = NEUT //contains 0

x == y //returns TRUE... oh hell...

After some more consideration I now see that one of two things would have to happen. Either a boolean would have to use -1 as the FALSE value to avoid irrationalities like the ones demonstrated above, or the relational operators would have to do type checking and translate accordingly.

I favor the type checking approach rather than just asserting that -1 is the new boolean FALSE. This is because it would simply break too much code. Even the simplest things like "int x = 0" in C would break. So would any code that counts down to 0 and does comparisons to make a decision based on the countdown. Off-by-one errors would be everywhere. Therefore, type checking looks like the solution.

The next question is, if a boolean were compared to a kleene, what would the return type be? On one hand it could always be a boolean since every one of the above...

In a recent post I introduced the Kleene data type as an addition to the boolean data type. In my hypothetical system both data types are available and bools would continue to operate exactly as they do now with 0 representing FALSE and 1 representing TRUE. Boolean logic works just fine on its own terms, there's no need to mess with it.

In this post I will explore how existing unary operators would work in a balanced ternary system and what additional unary operators are possible with a three valued system. Specifically, the ones that alter the value of a variable or return logical data about the variable. I will not address special purpose unary operators for casting, indirection, etc. and I will look at bitwise and tritwise operators in a later post.

Here are the most common unary operators and what they do:

x++  increments a numerical variable
x--  decrements a numerical variable
-x   returns the negation of the value
+x   returns the value
!x   returns a boolean TRUE or FALSE value opposite the truth value of the variable

The logical negation operator (!) is the only logical operator on the list. It could be adjusted to return a kleene type instead of a boolean, but I don't think altering the behavior of existing operators is a good idea. The other four are arithmetic operators, not logical ones and would work the same way in a ternary system as they do on a binary one.

In a ternary system, though, there are 27 unary logical operators. Of course many would be of little or no use, but all of them should be thoroughly explored. To talk about all these new operators we will need a vocabulary. It would be too complicated to devise and keep track of symbols for each operation, so I will use function-style syntax, with each unary operation assigned its heptavintimal name. To reiterate, here are the heptavintimal (see my earlier post called "counting") values for each single-input gate and their truth tables:

For my immediate purposes in writing these posts I'll use the format "logic_x(variable)" indicating a logical operation as opposed to a tritwise operation with 'x' standing in for the hept name. Using this function/hept syntax, the way to perform a logical inversion on variable x is logic_5(x). When I later take a look at tritwise operators I would use tritwise_5(x) to perform a tritwise inversion on variable x. This formatting is just for clarity and probably wouldn't be well liked.

Applying any of these operations to a kleene would of course do exactly what the truth table above states. At this moment I am of the opinion that applying one to a boolean should cause a compiler error. It's probably best not to try to mix kleenes and booleans. If a boolean is adequate for your purposes, use one and apply existing operators. If the flexibility of a kleene would benefit you, use a kleene and apply the above operators as desired.

Example pseudocode snippets:

kleene x = TRUE
print logic_5(x)

FALSE

kleene x = FALSE
print logic_5(x)

TRUE

kleene x = NEUT
print logic_5(x)

NEUT

The question is what would each one mean when applied to a numerical variable? The conventional NOT operator can be used on numbers and it has the property of treating the number as if it were a boolean. If the number is non-zero, its boolean negation is FALSE and it returns a 0 value. If the number is zero, its boolean negation is TRUE and it returns a 1. This behavior should remain unchanged.

Probably the most sensible way to do this is to follow that convention, but because the 27 unary operators follow Kleene logic, they would return a kleene data type. They would treat negative numbers as FALSE, zero as NEUT and positive numbers as TRUE.
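As a sketch of that convention, using the logic_5 inversion gate from the earlier posts (the to_kleene helper is my own name for the negative/zero/positive mapping):

```python
def to_kleene(n):
    """Map a number to a kleene: negative -> -1 (FALSE), zero -> 0 (NEUT),
    positive -> +1 (TRUE)."""
    return (n > 0) - (n < 0)

def logic_5(k):
    """The inversion gate: swaps - and +, leaves NEUT alone."""
    return -k

assert logic_5(to_kleene(-42)) == 1    # negative number -> FALSE -> inverts to TRUE
assert logic_5(to_kleene(0)) == 0      # zero -> NEUT -> stays NEUT
assert logic_5(to_kleene(7)) == -1     # positive -> TRUE -> inverts to FALSE
```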

In my previous post I proposed a series of primitive data types with only three possible sizes: 1 tryte (9 trits), 3 trytes (27 trits) and 6 trytes (54 trits).

Simplifying the number of data type sizes is not just an aesthetic desire for design elegance. It could also potentially improve the performance of reads from RAM. Conventional memory devices are designed to access chunks of data that are multiples of a byte. This is because reading from memory is agonizingly slow compared to the speed of a modern processor. If a memory device accessed each bit individually it would have to do eight separate actions to store a single byte, and each one of those actions would be at the same slow rate (slow compared to the processor). The most time-efficient solution would be to only ever deal with chunks of memory the same size as the largest possible data type. That way there is no decision-making process or any need for multiple reads. However, this would waste lots of space since even a char would take up 8 bytes (64 bits).

There are various architectures, but most RAM devices take the middle road and store data in individual bytes (never bits) while providing access to 4 or 8 bytes at a time. These groups of bytes are called a chunk. If the processor is storing a char, it only uses one of the bytes in the chunk, a short would use two, etc. The problem is that this can lead to misaligned data storage. For example, an integer takes up four bytes but could span across the boundary between two four-byte chunks in memory. When this happens the processor has to read both chunks in, then do a series of logical shifts on both chunks to remove the unwanted bytes from each chunk, and finally combine the two chunks to recover the original integer. You end up having to perform half a dozen processor operations just to get the data you originally wanted to perform an operation on. Having fewer data sizes means fewer misalignments overall and fewer operations necessary to recover misaligned data.

If we use our 9-trit, 27-trit, 54-trit example above, a memory device that presented 3-tryte chunks (27 trits) would only have one boundary that could be crossed and only two ways to cross it. One way is that a 3-tryte data type could be misaligned with a 3-tryte chunk. The other is that a 6-tryte data type would always span two chunks, but might also be misaligned with the chunks and therefore span across three chunks, in which case three reads would be needed to recover the data. This is analogous to a 64-bit double being misaligned with the 4-byte chunks in memory: the two chunks holding most of the double would need to be read as well as the chunk with the overlapping piece of the double.

If a memory device presented 6-tryte chunks instead of 3-tryte chunks then both situations would still be possible, but in the case of the misaligned 54-trit data type, only two reads would be necessary since a 6-tryte variable couldn't span three 6-tryte chunks no matter how misaligned it was. The downside is that more shifts would be needed to correct a misalignment and more space would be wasted by data types smaller than 54 trits.
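The read counts in the two scenarios above can be sanity-checked with a little arithmetic. This sketch (the function name and the flat trit-addressed layout are my own assumptions) counts how many fixed-size chunks a value touches, given its starting trit offset and its width:

```python
def chunk_reads(offset, width, chunk):
    """Number of chunk-sized reads needed to fetch `width` trits
    starting at trit `offset`, from memory served in `chunk`-trit chunks."""
    first = offset // chunk                # chunk holding the first trit
    last = (offset + width - 1) // chunk   # chunk holding the last trit
    return last - first + 1

assert chunk_reads(0, 27, 27) == 1   # aligned 3-tryte value: one read
assert chunk_reads(0, 54, 27) == 2   # aligned 6-tryte value spans two chunks
assert chunk_reads(9, 54, 27) == 3   # misaligned 6-tryte value: three chunks
assert chunk_reads(9, 54, 54) == 2   # with 6-tryte chunks: never more than two
```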

Of course, this all presupposes the availability of memory devices that not only store ternary values but also present chunks in multiples of three rather than two. In the meantime, an intermediary device to do translation between a prototype ternary processor and a conventional memory device wouldn't be exceptionally difficult to throw together. This would support proof-of-concept work on a ternary processor.

Primitive data types are an interesting subject to tackle. They are technically defined by programming languages, not hardware, so I am a little out of my field. Nevertheless, it's vital to explore the possibilities and variations that come about with a balanced ternary system. Everything I put forward here is really just intended to get some of my ideas documented. The implementation details would be up to the designers of the programming languages, a task I am not really qualified to undertake.

When I say primitive data type, I'm talking about the "C style" types that map directly to objects in memory of a pre-defined size (some portion or multiple of the processor's word length). They can be manipulated directly by individual processor instructions, like in assembly. I am also limiting this post to value types and not reference types such as pointers, references, classes and such.

The hypothetical systems I am thinking about would have word lengths of 27 trits or 54 trits. These would far exceed the capacity of a 32-bit system or a 64-bit system respectively. In a previous post I said a 45-trit system would be sensible because it would exceed a 64-bit system and is divisible by both 3 and 9, but then I realized it is not an even multiple of 27. Moving up to 27*2=54 trits would probably be the next logical step up from a 27-trit word length.

Let's start with the usual lineup of commonly recognized primitive data types. The values given are common mappings, but not necessarily true in all implementations. I'm using C style primitives because that is the lowest level programming language that most people are acquainted with.

Breaking this down, we have several integer types, either signed or unsigned of various sizes. Then there is a pair of special integer types for portability across systems that could have dissimilar word lengths. Next we have a special 8-bit type for ASCII characters. Then there are some floating point types and the boolean type.

Now on to the primitive data types of our imaginary world where 27-trit and 54-trit balanced ternary computers, microcontrollers and digital signal processors are commonplace. First off, all this signed and unsigned stuff can go out the window! Balanced ternary numbers are natively signed or unsigned by their value alone. Therefore, we would only need the following integer types:

That handles signed and unsigned numbers well beyond 32-bit or 64-bit equivalents. The "word" type also allows programs written for the smaller architecture to run on the larger without worrying about weird integer problems. Just like with existing systems, programs written for the larger architecture could still get bugs due to truncation if compiled on the smaller architecture. The Tryte is useful to have because there does need to be an efficient scheme for packing arbitrary data.

We would also need floating point number types:

The 27-trit float is not just in my imagination. A formal proposal has already been written and submitted for peer review. The 54-trit float is total fantasy at this point, but that word length should be capable of exceeding the range and precision of the x86 80-bit extended double. It would definitely not exceed the range/precision of a 96-bit or 128-bit long double.

Next, we should have data types for human readable characters. I suggest a char type that is the full 27 trits but is explicitly not an integer. It would be large enough to hold a full-sized UTF-32 Unicode character. This could seem rather wasteful of space, but that isn't the major concern it once was. Further, the 9-trit tryte exists and could be used to efficiently pack strings of small UTF-8 or ASCII characters that would otherwise each take up a whole 27-trit char, if they needed to be transmitted or if space constraints were severe. That is more of a language-specific implementation detail.

But how do we handle boolean values on a balanced ternary computer and how to handle the relative "truthiness" of 0 in a balanced ternary...

Gray codes are any bit (or trit, qubit, etc.) sequence that changes in only one position between any digit-pattern and the next. This includes the transition between the last pattern and the first in the sequence. Each code represents a specific value, but not necessarily the same value as if you simply read the digit pattern as an ordinary number.

There are a number of uses for gray codes, such as error detection and correction, modulating digital sequences onto analog signals (e.g. radio), and the transmission of data between components with different clock rates.

There are a number of different binary gray codes. Here are some examples for various lengths:

As you can see, each pattern in the sequence only changes in one bit position at a time. This includes the switch from the last position back to the first position.

Let's see if you can do the same thing in a balanced ternary number system. I won't bore anyone with my path from "no clue" to "no problem". Instead I'll just jump straight ahead to the method I found to construct a balanced ternary gray code for any arbitrary number of digits.

First, we need to know which patterns of trits to throw out right away. Any pattern which does not include all three trit values (-,0,+) in equal numbers is obviously not going to work. So we can immediately toss out all patterns other than the following, given in sequence and with their heptavintimal name:

Any one of the above forms a 1-trit gray code by themselves. Thus there are 6 different 1-trit gray codes. To form a 2-trit gray code, simply take one of the above...

...follow it by another one that starts where the last one ends (two possible options)...

... and follow that by one more that starts where the last ended AND ends where the first one began...

...Now take the first of those three patterns, put it in the next most significant digit position, and stretch it out to three symbols for each symbol...

... and voila! You made a 2-trit gray code! How about 3 trits? Take your existing code and now copy the least significant digit position twice more, tripling its length...

... now remember how we stretched out the first triplet in the 2nd digit position? Do that again with the second and third triplets.

... and finally take the first symbol, place it in the next most significant digit position, and strrreeeccchhh it out to 9 symbols per symbol...

... and you have a 3-trit gray code. Wash, rinse, repeat. Want an 847 billion value gray code? Just keep doing that until it is 25 trits wide. Or more, or whatever. The figures to the right show the pattern of changes as being regular and consistent.
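For anyone who wants to play with this, here is a compact way to generate a cyclic balanced ternary gray code and verify the one-position-change property, including the wrap-around from the last pattern back to the first. Note this is not the stretch-and-copy construction described above; it uses the standard "modular" n-ary gray code, with base-3 digit 2 remapped to the - trit:

```python
def bt_gray(k, width):
    """k-th codeword of a cyclic ternary gray code, as balanced trits (LSB first)."""
    digits = [(k // 3**i) % 3 for i in range(width)]   # base-3 digits of k
    digits.append(0)                                    # sentinel above the MST
    g = [(digits[i] - digits[i + 1]) % 3 for i in range(width)]
    return [d if d < 2 else -1 for d in g]              # remap 0,1,2 -> 0,+,-

width = 3
codes = [bt_gray(k, width) for k in range(3**width)]
for k in range(len(codes)):
    nxt = codes[(k + 1) % len(codes)]                   # wraps last -> first
    changed = sum(a != b for a, b in zip(codes[k], nxt))
    assert changed == 1                                 # exactly one trit changes
```

The incrementing carry chain in base 3 cancels out of the digit differences everywhere except one position, which is what makes this construction work for any width.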

As a side note, as long as you consider that swapping from - to + or from + to - is a shift of only one degree, then every ternary gray code is also only shifting one degree of value at any given time. If you consider that such a transition is shifting two values, then some transitions (like - to 0) are 1 degree while others (- to +) are two degrees.
