
More on Balanced Ternary Parity Checking
11/07/2019 at 07:24 • 0 comments

I've experimented a bit more with parity checking of balanced ternary values. As mentioned in an earlier post, the behavior is quite similar to binary parity checking. But the fact that each trit can arrive incorrectly in two different ways (either incremented or decremented) causes some interesting effects.
Let's say that a 9-trit word is transmitted along with a single parity trit. This gives us 10 trits on the receiving end. We'll call the original 9 trits 'data trits', the transmitted parity trit is the 'parity trit', and the calculated check trit on the receiving side will be called the 'check trit'. If all 10 transmitted trits are received without any alteration, the check trit will be a 0. If one of the data trits is incremented, the check trit is a +. If one of the data trits is decremented, then the check trit is a −. However, if the parity trit itself is altered during transmission, the result is the opposite. If the parity trit is incremented, the check trit is a −. If the parity trit is decremented, the check trit is a +.
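Those four outcomes can be demonstrated with a small model. A Python sketch (my own illustration: trits are stored as -1, 0, and 1, and the parity trit is taken to be the balanced mod-3 sum of the data trits):

```python
def bmod3(x):
    # map any integer onto the balanced trit set {-1, 0, +1}
    return (x + 1) % 3 - 1

def parity(trits):
    # a cascade of Sum gates amounts to a balanced mod-3 sum of the word
    return bmod3(sum(trits))

def check(data, p):
    # receiving side: compare the recomputed parity against the parity
    # trit that arrived
    return bmod3(sum(data) - p)

data = [1, -1, 0, 1, 0, 0, -1, 1, 0]    # an arbitrary 9-trit word
p = parity(data)

assert check(data, p) == 0                        # clean transmission -> 0
bumped = data.copy(); bumped[3] = bmod3(bumped[3] + 1)
assert check(bumped, p) == 1                      # data trit incremented -> +
dropped = data.copy(); dropped[3] = bmod3(dropped[3] - 1)
assert check(dropped, p) == -1                    # data trit decremented -> -
assert check(data, bmod3(p + 1)) == -1            # parity trit incremented -> -
assert check(data, bmod3(p - 1)) == 1             # parity trit decremented -> +
```

Note that an "increment" here wraps a + around to a −; either way the word's mod-3 sum moves by +1, which is all the check trit sees.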
This phenomenon is interesting, but not useful by itself, since you don't know which type of error caused the check trit to be nonzero. It may be of use in arrangements using multiple parity trits, such as SECDED (Single Error Correction, Double Error Detection) schemes.

Balanced Ternary Huffman Encoding
10/22/2019 at 07:04 • 0 comments

Huffman encoding (and decoding) is a very commonly used compression algorithm. It is often used in conjunction with other compression techniques but can also be used by itself. The idea is to count up each time a digit pattern (such as a byte, 16-bit word, 32-bit word, etc.) appears in a file and then rank each pattern by how common it is. For example, if you had a file where the byte 01001100 appeared more frequently than 01001111, then the first would have a higher "score".
After ranking the digit patterns, you assign the most commonly used bytes (or words, Unicode values, etc.) short codewords and you assign the least commonly used bytes longer codewords. Because the bytes are being assigned codewords of differing length, this is called a variable-length encoding algorithm. The result of applying the Huffman algorithm to a file can be stored as a binary tree to aid in decoding.
The decoder on the receiving end will not know where a codeword begins or ends. To handle this, the assignment of codewords must adhere to a rule known as the prefix property. This means that no codeword may appear as the prefix to another codeword. For example: If a byte is assigned the codeword “0”, then no other codeword can begin with “0” since the first one would be the prefix of it. If a byte were assigned the codeword “110”, then “11001” would be an invalid codeword because it is prefixed by “110”, but “101100” would be valid since “110” is contained inside, not at the beginning. Sharing prefixes is okay, such as “1110” and “1111”. Both start with “111”, but as long as “111” isn’t itself a codeword the prefix property is not violated.
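The prefix property is simple to check mechanically. A small Python sketch (the function name is mine):

```python
def is_prefix_free(codewords):
    # True if no codeword appears as the prefix of another codeword
    for a in codewords:
        for b in codewords:
            if a != b and b.startswith(a):
                return False
    return True

# shared prefixes are fine as long as the shared part is not itself a codeword
assert is_prefix_free(["1110", "1111", "110", "0"])
# "110" is a prefix of "11001", so this set is invalid
assert not is_prefix_free(["110", "11001", "0"])
```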
To adapt this to a balanced ternary system, we would use a ternary tree rather than a binary tree. There is also one additional step that is not necessary in the binary version: just before starting the tree, you must check whether the number of items is odd or even. More on this later.
Let’s assume you are encoding a plain text file with the words “attack at dawn.” and want to compress it based on letter frequency. First, count up the letters, spaces, and punctuation:
a = 4
t = 3
c = 1
k = 1
space = 2
d = 1
w = 1
n = 1
period = 1
Then arrange them from least to greatest:
period = 1
n = 1
w = 1
d = 1
k = 1
c = 1
space = 2
t = 3
a = 4
Now is where you would check to see if the number of items is even or odd. There are nine items making it odd. In this case, the next step is to remove three items from the list and begin the tree from the bottom up. However, if the number of items on the list was even, you would only remove two items from the list. This is only done the first time when you are beginning the tree. Every other time you remove items from the list and add them to the tree, you do so in sets of three.
The number of items is odd, so remove the three lowest-valued items from the list. There are several options that are equally low, so let's arbitrarily pick period, n, and w. These become leaf nodes with a new node above them. Each node is denoted with the symbol it represents and its frequency in the text being compressed. We then add the new node n0 to the list.
Our list now looks like this:
d = 1
k = 1
c = 1
space = 2
n0 = 3
t = 3
a = 4
We repeat the procedure of removing the three lowest value items from the list, placing a new node over them, and putting the new node on the list. Our ternary tree now looks like this:
And our new list looks like this:
space = 2
n0 = 3
n1 = 3
t = 3
a = 4
After another iteration…
The list now contains:
t = 3
a = 4
n2 = 8
And finally:
By starting at the top, you can count every leftward motion as a −, each downward motion as a 0, and each rightward motion as a +. This gives you the codeword for each symbol. The symbol "a" is encoded into the codeword "−". The symbol "n" is encoded into the codeword "++0".
Let's say that the message had been "Attack at dawn". The period at the end is omitted, so now there is an even number of items on the list:
n = 1
w = 1
d = 1
k = 1
c = 1
space = 2
t = 3
a = 4
We would only remove two items from the list, place a new node over them, and add that node to the list. Here is the tree:
And here is the new list:
d = 1
k = 1
c = 1
n0 = 2
space = 2
t = 3
a = 4
Notice that n0 has a value of only 2 in this instance. In the previous example it had a value of 3.
We now remove the three lowest value items from the list (the removal of two items is only done the first time). Here is the tree:
And here is the new list:
n0 = 2
space = 2
n1 = 3
t = 3
a = 4
The tree after another iteration:
And the new list:
t = 3
a = 4
n2 = 7
Remove the final items from the list and add them to the tree:
If we assume that each letter took 6 trits (which is just a bit more data than 8 binary digits), the total message would have been 84 trits long. The new compressed message reads as: 00+++++0+0+++++0++. This is only 26 trits, about 31% of the original length. If the difference between letter frequencies had been more pronounced, the savings would be greater.
To decode, the sender only needs to provide the encoded message and the ternary tree data structure. With these, the decoding program can correctly recover the original message by taking the trit-stream and following it down the tree. Every time it hits a leaf node, it returns the value stored in that node and starts over at the root node with the rest of the trit-stream.
Because the ternary tree has to be provided along with the compressed message, you lose a little bit of your savings to the space taken up by the tree data structure. However, this does not necessarily need to be the case. If the sender and receiver have agreed on a ternary tree in advance, based on prior knowledge about which symbols are likely to be common, then the tree doesn't need to be sent. Unfortunately, it also means the tree won't be perfect for any given message, so optimal compression won't be achieved.
In the final analysis, there is little difference between binary and balanced ternary in how the algorithm behaves. Nevertheless, it was good to sort out little details like the necessary check for evenness or oddness.
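The whole procedure, including the initial odd/even check, can be sketched in a few dozen lines. This is my own illustration in Python, not code from the project; the data layout and names are invented for the example:

```python
import heapq
from itertools import count

def ternary_huffman(freqs):
    # build a ternary Huffman tree: merge the three lowest-frequency nodes
    # each round, except that the very first merge takes only two nodes
    # when the initial symbol count is even
    tiebreak = count()                 # keeps tuple comparison away from dicts
    heap = [(f, next(tiebreak), {"sym": s}) for s, f in freqs.items()]
    heapq.heapify(heap)
    take = 2 if len(heap) % 2 == 0 else 3
    while len(heap) > 1:
        kids = [heapq.heappop(heap) for _ in range(min(take, len(heap)))]
        node = {"kids": [k[2] for k in kids]}
        heapq.heappush(heap, (sum(k[0] for k in kids), next(tiebreak), node))
        take = 3                       # only the first merge may take two
    codes = {}
    def walk(node, prefix):
        if "sym" in node:
            codes[node["sym"]] = prefix or "0"
        else:
            # leftward is -, downward is 0, rightward is +
            for trit, kid in zip("-0+", node["kids"]):
                walk(kid, prefix + trit)
    walk(heap[0][2], "")
    return codes

freqs = {"a": 4, "t": 3, " ": 2, "c": 1, "k": 1, "d": 1, "w": 1, "n": 1, ".": 1}
codes = ternary_huffman(freqs)
msg = "attack at dawn."
encoded = "".join(codes[ch] for ch in msg)
assert len(encoded) < 6 * len(msg)    # beats the 6-trits-per-symbol baseline
```

Starting from an even symbol count (say, dropping the period) changes only the first merge, which takes two nodes instead of three.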

Parity Calculation and Checking
10/13/2019 at 07:43 • 0 comments

When data is transmitted along a bus, serial interface, RF link, etc., it is common practice to do some form of error checking and correcting. This is also true of stored data, such as in RAM or in non-volatile storage. The simplest of these techniques is called parity checking; it simply detects whether a transmission error has occurred but cannot correct a detected error. Further, the commonplace implementation can only detect one incorrect bit in a given set of transmitted bits. If there is more than one error in a given set of bits, it may or may not detect the errors.
In binary systems this is normally done by XOR’ing each pair of bits and then XOR’ing those bits, and so on until you have one bit at the end. This results in a bit that is a 1 if there is an odd number of 1’s in the group and 0 if there is an even number of 1’s. This is called odd parity. If you want an even parity bit, just change the last XOR into an XNOR.
In balanced ternary, the Sum gate performs this function admirably, though the behavior of the parity trit is different. Simply cascade all of the input trits through a series of Sum gates and you end up with a parity trit. On the receiving side, do the exact same thing with all of the trits, including the parity trit. Then sum the output of that cascade with the incoming parity trit again. If this circuit outputs a 0, no errors were detected. If it outputs a − or a +, an error has been detected. Below is a diagram for a parity calculator and then a parity checker.
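Behaviorally, the calculator and checker can be modeled by treating the Sum gate as a balanced mod-3 add of two trits. A Python sketch of my own (not the actual circuit):

```python
def sum_gate(a, b):
    # balanced ternary Sum gate: single-trit sum with the carry discarded
    return (a + b + 1) % 3 - 1

def parity_trit(trits):
    # parity calculator: cascade the whole word through Sum gates
    p = 0
    for t in trits:
        p = sum_gate(p, t)
    return p

def parity_check(trits, p):
    # parity checker: cascade everything including the parity trit,
    # then sum that output with the incoming parity trit once more
    return sum_gate(parity_trit(trits + [p]), p)

word = [1, 0, -1, 1, 1, 0, -1, -1, 0]
p = parity_trit(word)
assert parity_check(word, p) == 0          # clean: check trit is 0
bad = [0] + word[1:]                       # one trit decremented in transit
assert parity_check(bad, p) == -1          # error detected
```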
This subject brings up some interesting ideas regarding ternary errors and error correction. Consider that each trit can be wrong in two different ways instead of just one as in a binary system. Also, the parity trit can convey data about the nature of the error by being either − or +. I haven't explored that yet, but it looks like it would be a valuable thing to know. Finally, there is a very interesting phenomenon related to multi-trit errors. In the above ternary system, the ternary parity trit has a variable likelihood of detecting a multi-trit error. As the number of errors increases, the likelihood of detecting them oscillates up and down, narrowing each time until it finally converges on 66%, or 2/3rds. I'm not sure it's useful, but it's interesting.
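That oscillating detection rate can be reproduced exactly. With k independent single-trit errors, each equally likely to increment or decrement, an error pattern escapes the parity trit exactly when the deltas sum to 0 mod 3. A Python sketch (my own model of the channel, not anything from the hardware):

```python
from fractions import Fraction

def detection_probability(k):
    # distribution of the sum of k fair +/-1 error steps, taken mod 3;
    # the pattern goes undetected exactly when that sum is 0 mod 3
    dist = {0: Fraction(1)}
    for _ in range(k):
        nxt = {}
        for s, p in dist.items():
            for d in (-1, 1):
                nxt[(s + d) % 3] = nxt.get((s + d) % 3, Fraction(0)) + p / 2
        dist = nxt
    return 1 - dist.get(0, Fraction(0))

assert detection_probability(1) == 1
assert detection_probability(2) == Fraction(1, 2)
assert detection_probability(3) == Fraction(3, 4)
assert abs(float(detection_probability(20)) - 2 / 3) < 1e-5
```

For k = 1 through 5 this gives 1, 1/2, 3/4, 5/8, and 11/16: oscillating above and below 2/3 and narrowing, matching the observation above.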

A Fast Balanced Ternary Adder/Subtracter
10/12/2019 at 07:02 • 0 comments

In this post I will describe what I consider the most important thing I've learned about balanced ternary computing thus far: a fast balanced ternary adder/subtracter.
Thus far, my research has not turned up anybody who has designed a balanced ternary adder other than the ripple-carry variety. This is fine for demonstration or for testing higher-level functions, but the lack of a *fast* adder/subtracter was a total dealbreaker in terms of making balanced ternary anything other than an interesting curiosity. A ripple-carry adder has a time complexity of O(n), while a carry-lookahead adder is O(log n).
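For comparison, the ripple-carry behavior is easy to model. A Python sketch of my own, using integer arithmetic in place of the gate equations (each trit is -1, 0, or +1, and words are least-significant-trit first):

```python
def full_adder(a, b, cin):
    # one stage: the Sum gate gives the digit, the carry logic gives the carry
    total = a + b + cin          # ranges over -3 .. +3
    carry = (total + 1) // 3     # floor division keeps the carry in {-1, 0, +1}
    return total - 3 * carry, carry

def ripple_add(x, y):
    # O(n): each stage must wait for the previous stage's carry
    out, carry = [], 0
    for a, b in zip(x, y):
        digit, carry = full_adder(a, b, carry)
        out.append(digit)
    return out, carry

def to_int(trits):
    # little-endian balanced ternary word to integer, for checking results
    return sum(t * 3**i for i, t in enumerate(trits))

x = [1, -1, 1]        # 1 - 3 + 9 = 7
y = [-1, 1, 0]        # -1 + 3   = 2
s, carry = ripple_add(x, y)
assert to_int(s) + carry * 3**3 == 9
# subtraction is just addition of the negated word: negate every trit of y
d, carry = ripple_add(x, [-t for t in y])
assert to_int(d) + carry * 3**3 == 5
```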
As near as I can tell, nobody has figured out how to build a balanced ternary carry-lookahead adder, meaning that we were stuck with ripple-carry adders; essentially the stone age. I spent months of my free time bashing my head against this problem because I knew that the lack of an O(log n) or better adder unit meant there was no point in doing any other research on the subject. Dr. Jones at the University of Iowa did figure out a carry-lookahead adder for unbalanced ternary (0, 1, 2), but his efforts to work one out for balanced ternary did not result in a functional design (though his work did clarify the problem quite a bit).
Ultimately, I gave up on carry-lookahead and focused my attention on a lesser-known design, the carry-select adder. A carry-select adder works by computing each possible carry-out simultaneously and passing these "carry candidates" to a multiplexer. The incoming carry from the previous digit "selects" the appropriate multiplexer input to pass along. Because the possible carry-outs from every digit position are calculated and ready before the carry-in from the previous digit arrives, each additional digit only adds a single gate delay due to the multiplexer. This type of adder has O(log n) time complexity but takes more logic gates to build than a carry-lookahead does. The digits are put into groups, with each group taking a carry-in and producing a final carry-out for the group. That group carry-out is passed on to the next group.
In a binary system, the carry-select adder requires two separate sets of logic to calculate the two possible carry candidates and the two possible sum candidates. Add two multiplexers to this, and the circuit for each digit ends up being around twice the number of gates a ripple-carry adder would need. In a ternary system, there are three possible carry candidates and three possible sum candidates, making a carry-select adder around three times the size of a balanced ternary ripple-carry adder. Fortunately there are some tricks that can be used to reduce this a bit, but the end result is still pretty big.
My balanced ternary carryselect adder is composed of two types of adder circuits. They are the partial select adder and the full select adder. Each group is composed of one partial select adder in the least significant position and as many full select adders as needed to fill out the group.
As you can see below, the partial select adder takes a carry-in and produces a sum and three carry candidates. No multiplexer is required in the partial select adder. A full select adder takes three carry candidates and produces three sum candidates and three more carry candidates. It passes the sum candidates to a multiplexer. Any full select adder that is not in the most significant position of the group passes its three carry candidates to the next full select adder. If it is in the most significant position, it passes the carry candidates to a multiplexer. All multiplexers in the group are selected by the single carry-in for the group.
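The group structure can be sketched behaviorally: each group precomputes its digits and carry-out for all three possible carry-ins, and the actual carry-in just picks one result, as a multiplexer would. A Python model of my own; it captures the selection idea, not the gate-level design:

```python
def full_adder(a, b, cin):
    # single balanced ternary stage: digit and carry both land in {-1, 0, +1}
    total = a + b + cin
    carry = (total + 1) // 3
    return total - 3 * carry, carry

def select_group(x, y):
    # precompute the group's digits and carry-out for every possible carry-in;
    # in hardware these three candidate sets are computed simultaneously
    candidates = {}
    for cin in (-1, 0, 1):
        digits, carry = [], cin
        for a, b in zip(x, y):
            d, carry = full_adder(a, b, carry)
            digits.append(d)
        candidates[cin] = (digits, carry)
    return candidates

def carry_select_add(x, y, group_size=3):
    # the incoming carry only has to hop from group to group through the muxes
    out, carry = [], 0
    for i in range(0, len(x), group_size):
        cands = select_group(x[i:i + group_size], y[i:i + group_size])
        digits, carry = cands[carry]   # the group carry-in 'selects' a candidate
        out.extend(digits)
    return out, carry

def to_int(trits):
    # little-endian balanced ternary to integer, for checking results
    return sum(t * 3**i for i, t in enumerate(trits))

x = [1, -1, 0, 1, 0, 0, -1, 1, 1]
y = [0, 1, 1, -1, 0, 1, 0, -1, 0]
s, c = carry_select_add(x, y)
assert to_int(s) + c * 3**len(x) == to_int(x) + to_int(y)
```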
The only gates used in these adder units are sum, consensus, increment, decrement, multiplexers, and two new ones: the minus consensus and the plus consensus. The minus consensus can be created by passing the output of a consensus gate through a monadic C gate (−00), and the plus consensus by passing the output through an R gate (00+). There may be simpler ways to make them, but that works.
As a note on gate delays: I am just working out a rough count of delays. Obviously some gates require the signal to pass through more transistors than others, so the time to propagate a signal is not identical from gate to gate. Those kinds of details, and the trade-offs between space complexity vs. time complexity vs. power consumption vs. everything else, are better left to the implementation at the silicon layer. For my purposes, I'm just interested in getting an O(log n) adder/subtracter design that works.
If you count it out, the above three-trit carry-select group has six gate delays from the first set of carry candidates in the partial select adder to the final carry-select multiplexer at the end. However, the carry-in is already at the carry-select multiplexer, waiting for the group carry-out candidates to propagate down to it. This is the key to why the carry-select adder is faster than O(n).
Following is a 27-trit carry-select adder that takes advantage of this to compute 27 trits with only 11 gate delays. Each group calculates its internal sum candidates and carry candidates and gets them ready just in time for the carry from the previous group to arrive and select the correct candidates.
Voila!

Shift Operations
05/02/2019 at 07:38 • 0 comments

Balanced ternary shift operations have different results than binary shift operations for four reasons:
1) The value of each digit position is a power of three rather than a power of two.
2) The values that can be shifted out could be a −1, 0, or 1 instead of just a 0 or 1.
3) All balanced ternary numbers are signed (except 0, technically).
4) The sign trit is not necessarily the most significant trit of the number. Thus, both left shifts and right shifts may or may not alter the sign of the number.

On most binary architectures there are three shift operations and four circular shifts. The three ordinary shifts are the right logical shift, the right arithmetic shift, and the left shift.
In a left shift, each bit is shifted one position to the left. A zero takes the place of the least significant bit and the most significant bit is lost. In some cases the MSB may be preserved in the carry flag of the status register.
A right logical shift is the same operation, but the shift is to the right. A zero takes the place of the most significant bit and the least significant bit is lost. It is the proper shift to use for unsigned numbers. In most cases the LSB is not preserved in the carry flag of the status register.
A right arithmetic shift is a little different in that the least significant bit is lost, but the new most significant bit is copied over from the existing most significant bit so that it does not change. This is so that the sign of a signed number remains the same after a right shift. In some cases the LSB may be preserved in the carry flag of the status register.
In balanced ternary, a left shift is a multiplication by three as long as the trit shifted out is a 0. If a nonzero value is shifted out, then it is not a straight multiplication, and the sign will have changed unless the new most significant trit has the same value as the one that was lost.
There is no distinction between a right shift and a right arithmetic shift. They are the same thing because the sign of the number is dependent only on the most significant nonzero trit. By shifting in a 0, the sign of the number is automatically preserved. The exception is if the shift drops off the only nonzero trit at which point the overall value becomes zero.
A right shift is the equivalent of dividing the original value by three and rounding to the nearest integer. If the trit shifted out was a 0, then the value was divided by three exactly, with no rounding. If a − was shifted out, the result is rounded up; if a + was shifted out, the result is rounded down. Since the dropped trit contributes at most 1/3 of a unit to the quotient, the result is never off by more than 1/3; truncation in balanced ternary is the same as rounding to nearest. In contrast, a binary arithmetic right shift always rounds toward −infinity.
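The right-shift arithmetic can be checked numerically. Since the dropped trit contributes at most 1/3 of a unit to the quotient, the shifted word is the original value divided by three and rounded to the nearest integer. A Python sketch (trit words are little-endian lists, my own convention):

```python
def to_int(trits):
    # little-endian balanced ternary word to integer
    return sum(t * 3**i for i, t in enumerate(trits))

def right_shift(trits):
    # lose the least significant trit, shift a 0 in at the most significant end
    return trits[1:] + [0]

# the shift is division by three rounded to nearest; exact ties cannot occur
# because the remainder is always -1/3, 0, or +1/3
for word in ([1, 1, -1, 0], [-1, -1, 1, 0], [1, -1, 1, 1], [0, 1, 0, -1]):
    assert to_int(right_shift(word)) == round(to_int(word) / 3)
```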
The circular shifts take the bit or trit that was shifted out and make it the value that gets shifted in on the opposite side. They have two directions, left and right, and two variants, with carry and without. The rotate-through-carry simply treats the carry-in digit as if it were the least significant digit, so the total length is word length + 1.
There is no functional difference between binary and balanced ternary when it comes to circular shifts.

Tritwise Operations... And Eating Crow
05/02/2019 at 07:36 • 0 comments

I've been working on the subject of ternary logic and ternary processing for years, on and off, but I haven't been doing so in as methodical a way as I am now. By approaching this in a more methodical and comprehensive way, I have discovered a rather big error in my logic.
Until now, I've been assuming that, with careful design, a balanced ternary system could be made that would still be code-compatible with a conventional computer. Of course I knew that the internals of the compiler would need significant work, both to interpret new syntax and to target a new (and highly unusual) architecture. Nonetheless, I was sure that existing code could somehow be shoehorned into the new system. This looked hopeful right up until I started trying to incorporate bitwise operators into the running thought experiment I've been writing up in these posts. I didn't notice the problem when going over the NOT operator earlier, and I started to get a hint of it when trying to get bools and kleenes to play nice together. I finally realized the scope of the problem when I got into bitwise AND, OR, and XOR.
Each operator is useful for a variety of purposes because of how it manipulates the bit pattern of its operands. But in a ternary system the operands themselves are ternary, so even if you kept the operators identical, they would come up with unexpected answers every time. Basically, anything that expects to work at the bit level will be broken. Anything that expects to work at the register/word level, or that has to do with control flow, can be made to work.
So, going back to the beginning with bit-level subjects:
1) Bools should still exist. To follow logical expectations in a balanced ternary environment, their FALSE representation should be a −, not a 0. I still favor strict value-checking so that bools can only hold a − or a +, nothing else.
2) Bitwise NOT and tritwise NEG (negation) become the same operation. NEG switches −'s into +'s and vice versa but leaves 0's alone. Since bools now use − for their FALSE condition and + for their TRUE condition, there would be no difference between NEG and NOT when using them on bools or kleenes. However, NOT would produce unexpected results if applied as a bitwise operator to other types of data, just like bitwise AND, OR, and XOR would.
3) The other bitwise operators such as AND, OR, XOR, and the various shifts, would have utterly different effects than they would in a binary system. The solution is not to try to force them into the system. The solution is to find equivalent tritwise operators that produce the results that programmers want out of the conventional bitwise operators.
4) This doesn't affect logical operations or arithmetic operations. Those are just fine as previously described.
I don't have a comprehensive list of which ternary operations would produce each of the results people are currently using bitwise operators for (except NEG of course). There are 19,683 different possible tritwise operators (not counting shifts), so that would be a job about 1,200 times more complex than coming up with everything in the "Hackers Delight" book from scratch and then writing a book about it.
Instead I'll write up the few I know of right now, and then as I find them or learn of them, I will do a separate post on any new "tritwise operator candidates". I say "candidates" because choosing which operators to implement in a real processor will be an important point. Digitlevel operations generally take place directly inside the ALU, which means that independent circuitry has to exist in the ALU for every digitlevel operator the processor is capable of executing. To avoid unnecessary complexity in the ALU it will be necessary to carefully choose which operations to include. Of course, any tritwise operations that are also necessary for arithmetic will be there anyways, so you get those "for free". The exception is the shift operations. Those are often done in a separate piece of hardware from the ALU. I'll look into shift operations in a separate post.
One other important factor to note is that not all of the two-input gates are symmetrical; they could produce different results if the input and the mask are switched. In the examples below, I am assuming that the mask is always input 'a' and the data being masked is always input 'b'. If both arrangements are useful, you could get an extra tritwise operator without any additional hardware in the ALU. It also means that if you were applying an asymmetric operator to two variables (instead of masking), then a <operation> b would produce different results than b <operation> a.
Free tritwise operators used in basic arithmetic. They may have other uses I haven't thought of yet:
Negation gate (5):
A single-input gate, negation performs an inversion on a single trit by switching −'s and +'s. Also used in performing subtraction. However, it may not be needed, as inversion can also be performed by the XNOR gate below by masking with all −'s.
Sum gate (BP7):
Performs single-trit addition by returning the sum of two trits. Also used in multi-trit addition, subtraction, and probably multiplication.
1) Mask with a − to decrement the input trit.
2) Mask with a 0 to pass the input trit through unchanged.
3) Mask with a + to increment the input trit.
Can be used to selectively increment, decrement, or pass through any trit.

Consensus gate (CDR):
Calculates the carry-out of an addition or subtraction. Returns − if both inputs are −, returns + if both inputs are +, otherwise returns 0.
1) Mask with a − to pass through only −'s. Anything else becomes a 0.
2) Mask with a 0 to set the trit to 0.
3) Mask with a + to pass through only +'s. Anything else becomes a 0.
Can be used to selectively detect −'s or +'s, or to neutralize any trit.

XNOR gate (5DP):
Performs single-trit multiplication. Almost certainly going to be needed for multi-trit multiplication.
1) Mask with a − to invert the input trit.
2) Mask with a 0 to set the trit to 0.
3) Mask with a + to pass the input trit through unchanged.
Can be used to selectively invert, neutralize, or pass through any trit.

So far I've identified four other operators that have a clear use and could be candidates for inclusion in a processor. Below are the names, truth tables, and how they can be used. They may have other uses I haven't thought of.
Min Gate (0CP):
Passes through the lesser of each input trit pair. Equivalent to an AND gate.
1) Mask with a − to set a trit to −.
2) Mask with a 0 to clamp +'s down to 0's.
3) Mask with a + to pass an input trit through unchanged.
Can be used to selectively clamp down, pass through, or set any trit to −.

Max Gate (PRZ):
Passes through the greater of each input trit pair. Equivalent to an OR gate.
1) Mask with a − to pass an input trit through unchanged.
2) Mask with a 0 to clamp −'s up to 0's.
3) Mask with a + to set a trit to +.
Can be used to selectively clamp up, pass through, or set any trit to +.

0PZ gate:
Several of these gates are probably going to be used in the status register for detecting the most significant nonzero trit.
1) Mask with a − to set a trit to −.
2) Mask with a 0 to pass an input trit through unchanged.
3) Mask with a + to set a trit to +.
Can be used to selectively set any trit to − or +, or pass it through unchanged.

Equality gate (26K):
Mask with a desired pattern of trits. Every trit that matches the pattern will be set to +. Any trit that doesn't match will be set to −.
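These masks can be demonstrated directly. A Python sketch of the gates as truth functions on trit values -1, 0, +1 (the encodings are mine; input a is the mask and b is the data, per the convention stated earlier):

```python
# the gates as functions on trit values -1, 0, +1
NEG  = lambda t: -t                         # negation gate (5)
SUM  = lambda a, b: (a + b + 1) % 3 - 1     # sum gate (BP7): wraps, carry discarded
CONS = lambda a, b: a if a == b else 0      # consensus gate (CDR)
XNOR = lambda a, b: a * b                   # XNOR gate (5DP): single-trit multiply
MIN  = lambda a, b: min(a, b)               # min gate (0CP): ternary AND
MAX  = lambda a, b: max(a, b)               # max gate (PRZ): ternary OR
EQ   = lambda a, b: 1 if a == b else -1     # equality gate (26K)

def tritwise(gate, mask, word):
    # apply a two-input gate trit by trit, mask against data
    return [gate(m, t) for m, t in zip(mask, word)]

word = [1, 0, -1, 1]
assert [NEG(t) for t in word] == [-1, 0, 1, -1]            # invert the word
assert tritwise(SUM, [1] * 4, word) == [-1, 1, 0, -1]      # increment (wrapping)
assert tritwise(CONS, [1] * 4, word) == [1, 0, 0, 1]       # detect +'s only
assert tritwise(XNOR, [-1] * 4, word) == [-1, 0, 1, -1]    # invert via all-minus mask
assert tritwise(MIN, [0] * 4, word) == [0, 0, -1, 0]       # clamp +'s down to 0
assert tritwise(MAX, [0] * 4, word) == [1, 0, 0, 1]        # clamp -'s up to 0
assert tritwise(EQ, word, word) == [1, 1, 1, 1]            # full match -> all +
```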
Much more research will need to be done to see what logic and math tricks can be performed with these gates and which other gates would be of equal or greater value in the ALU.

Control Statements
04/29/2019 at 07:19 • 0 comments

Control statements such as While loops, Do-While loops, For loops, Switch statements, and If statements are most of the reason you have relational or logical operations in the first place. Looking at how balanced ternary values fit into this system of program flow is of the highest importance.
To put it simply, the existing control flow statements should work the same as always. Nothing would stop you from using kleene data types or the spaceship operator for the decision making, but it doesn't actually change the way the statements work.
If variables a and b are kleenes, then:
while ((a <=> b) >= NEUT)...
is equivalent to:
while ((a >= b))...
No new ground is being broken here; they would just work as you would normally expect them to. Of course, existing control statements expect boolean evaluation, so they would need to do type checking to correctly handle a kleene FALSE, which would otherwise look like a boolean TRUE due to being nonzero.
All this doesn't mean that control statements that take advantage of the more expressive ternary logic couldn't be created. An alternative to the "if" statement is easy to work out.
Imagine a conveyor belt that could be moving forward, moving backward, or stopped. The variable that holds the current conveyor motion is a kleene called "conveyorDirection" that contains a − (FALSE) for reverse, a 0 (NEUT) for stopped, or a + (TRUE) for forward. Now imagine a control statement that natively evaluated kleenes. We'll call it an evaluation statement or "eval" statement.
eval(conveyorDirection)
{
//Code if −
}
{
//Code if 0
}
{
//Code if +
}

The syntax could be different as long as it was clear which section of code would be executed for each state.
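Until a language offers such a statement, the same three-way dispatch can be emulated. A minimal Python sketch (names are mine, purely illustrative):

```python
def eval_kleene(value, on_minus, on_zero, on_plus):
    # three-way dispatch on a kleene stored as -1, 0, or +1
    return {-1: on_minus, 0: on_zero, 1: on_plus}[value]()

conveyor_direction = -1   # reverse
action = eval_kleene(conveyor_direction,
                     lambda: "run motor backward",
                     lambda: "hold position",
                     lambda: "run motor forward")
assert action == "run motor backward"
```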
This could even be used to ease debugging and error handling. Imagine you were evaluating a game variable called lifePoints. This variable should never be negative. In a boolean system, if an off-by-one bug or manipulation resulted in underflow, the negative nonzero value would return TRUE, so additional steps would have to be taken to check for this. With this hypothetical eval statement you have a convenient place to put your error handler call, or even a breakpoint for your debugger.
eval(lifePoints <=> 0)
{
//Code if −
invalidValueHandler();
//Feel free to put breakpoint here as well
}
{
//Code if 0
printf("You Died!");
}
{
//Code if +
printf("%d remaining", lifePoints);
}

An alternative while loop is a bit more complex. Currently, a while loop tests an expression and, if it returns TRUE, executes the code, then tests the expression again, and so on. Any combination of TRUE, NEUT, and FALSE could be manipulated with conditional operators to evaluate TRUE, so the while loop (and do-while loop by extension) can already handle any three-value expression you want to throw at it.
A natively ternary version of the while loop might be something that evaluates an expression and then chooses which code to loop through. Let's call this hypothetical control statement a "choice loop". The distinguishing feature is that it will switch back and forth through different loops (scary, right?) depending on the return value of the expression. If it returns FALSE, one section of code is looped through until it returns something else at which point it loops through that code, and so on. A break in one of the sections would release the loop. Here's an example.
choice(x <=> 0)
{
//Loop if +
doSomethingRepeatedly...
x-- //This loop executes repeatedly and decrements x
}
{
//Loop if 0
doSomething...
break //This loop executes once when x reaches 0, then breaks
}
{
//Loop if −
doSomethingElse...
break //This loop does something else if x returns FALSE, then breaks
}

If x was a positive number, the above code would execute the TRUE loop over and over, decrementing x until it reached 0. Then it would execute the NEUTRAL loop once and break. If x was already 0, it would execute the NEUTRAL loop once and break. If x was negative, it would do some other action and break. Or you could have it do nothing and break as a simple fall-through.
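The choice loop can likewise be emulated with ordinary constructs to try the idea out. A Python sketch of my own: each body returns the new state plus a flag saying whether to keep looping, standing in for break:

```python
def sign(n):
    # the spaceship comparison against zero: -1, 0, or +1
    return (n > 0) - (n < 0)

def choice_loop(state, bodies):
    # emulate the hypothetical 'choice' statement: evaluate the expression,
    # run the matching body, repeat until a body signals a break
    while True:
        state, keep_going = bodies[sign(state)](state)
        if not keep_going:
            return state

log = []
bodies = {
    1:  lambda x: (log.append("tick") or (x - 1, True)),   # loop while +
    0:  lambda x: (log.append("done") or (x, False)),      # run once, then break
    -1: lambda x: (log.append("error") or (x, False)),     # run once, then break
}
assert choice_loop(3, bodies) == 0
assert log == ["tick", "tick", "tick", "done"]
```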
Infinite loops would naturally have to be watched out for, just like with any other loop statement, but I think debugging such errors would end up being easier. Just put a breakpoint at the beginning of each of the three loops and watch the variable being evaluated.
This control statement could be used to do pretty much anything you would expect from a for loop or a while loop. Combining it with a "do" statement would get a bit complex but could be arranged. It wouldn't replace a switch statement since that can be used to define an arbitrary number of cases.
On the surface the eval statement and choice loop look like elegant, nativeternary flow control statements. At least they look like that to me. Someone who actually has some language design experience would be better suited to judge them or to recommend alternatives.

Logical Operators
04/28/2019 at 07:44 • 0 comments

As for logical operators, the ones in common use are ! (NOT), && (AND), and || (OR). I have already addressed NOT in other posts, so we can focus on the two-input logical operators here.
Here are the equivalent kleene versions of those boolean logical operators:
Boolean Kleene
&& (AND)  Min
|| (OR)   Max

By the way, here's a really good explanation of why logical XOR (and by extension XNOR) don't exist in C: http://c-faq.com/misc/xor.dmr.html
I'll skip over the obvious uses of the boolean logical operators. Suffice it to say, they regard each operand as a boolean, so a nonzero value is considered TRUE and a zero value is considered FALSE. The result of the operation is returned as a boolean value.
As for ternary logical operators, it first needs to be pointed out that there are 19,683 of them. Of course, the vast majority wouldn't be of much use, and it would be a complete joke to expect a language to have vocabulary for all of them. However, I suppose it could be possible for a language to compute the truth table for any logical operator if there were a way to describe it in the source code. Say, with the heptavintimal name for the two-input gate, for example:
if (x <A6R> y == NEUT) then...
This way, theoretically any of the possible two-input gates could be used as a logical operator. I don't have a clue how (or why) that would be implemented, but it's an interesting idea.
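To make the idea concrete, here's a minimal Python sketch of the lookup such a feature implies: a two-input ternary gate is nothing more than a 3x3 truth table that the compiler could resolve from the gate's name. The table below is made up for illustration; it is not the actual truth table of the A6R gate.

```python
# A two-input ternary gate is fully described by a 3x3 truth table.
# Trits are modeled as the integers -1, 0, +1.
# NOTE: this table is an arbitrary example, NOT the real A6R gate.
example_gate = {
    (-1, -1):  0, (-1, 0): 1, (-1, 1): -1,
    ( 0, -1):  1, ( 0, 0): 0, ( 0, 1):  1,
    ( 1, -1): -1, ( 1, 0): 1, ( 1, 1):  0,
}

def apply_gate(table, x, y):
    """Evaluate a two-input gate by looking up its truth table."""
    return table[(x, y)]

print(apply_gate(example_gate, 0, 1))  # 1
```

Since each of the nine table cells can independently hold one of three values, enumerating dictionaries like this one reproduces the 3^9 = 19,683 gate count mentioned above.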
Putting that aside for the moment, let's address the use of the ternary logical operators that correspond to existing logical operators. They evaluate each operand for its kleene value (FALSE for a negative, NEUT for a zero and TRUE for a positive) and then they return kleene values. Min returns the value of the lesser of the two inputs. Max returns the greater of them. Simple.
The thing to keep in mind is that they can return a NEUT and that this can be used for more expressive control statements. More on that in a later post.
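Since Min and Max map directly onto Python's built-in min() and max(), the kleene logical operators can be sketched in a few lines. The -1/0/+1 encoding and the function names here are my own illustrative choices, not part of any real language:

```python
# Kleene logic on balanced ternary truth values, modeled as the
# Python integers -1 (FALSE), 0 (NEUT), +1 (TRUE).
FALSE, NEUT, TRUE = -1, 0, 1

def kleene_and(a, b):
    """Ternary AND is simply Min."""
    return min(a, b)

def kleene_or(a, b):
    """Ternary OR is simply Max."""
    return max(a, b)

def kleene_not(a):
    """Ternary negation: flip the sign."""
    return -a

print(kleene_and(TRUE, NEUT))  # 0  (NEUT)
print(kleene_or(FALSE, NEUT))  # 0  (NEUT)
print(kleene_not(FALSE))       # 1  (TRUE)
```

Note how an AND or OR can come out NEUT — exactly the property that makes the control statements discussed elsewhere more expressive.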

De Morgan's Laws
04/27/2019 at 07:46 • 0 comments
This is just a quick address of an important law in logic. De Morgan's Laws demonstrate the relationship between two basic binary gates, AND and OR. Here they are:
NOT(A AND B) = (NOT A) OR (NOT B)
NOT(A OR B) = (NOT A) AND (NOT B)

These laws have an equivalent form in Kleene's version of ternary logic using the negation gate and the Min and Max two-input gates. This is another reason I liked Kleene's version of ternary logic: not all of the ternary logic systems adhered to De Morgan's Laws.
Neg(A Min B) = (Neg A) Max (Neg B)
Neg(A Max B) = (Neg A) Min (Neg B)

There you have it.
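Because the value space is so small, the laws are easy to verify exhaustively. A quick Python check, encoding trits as -1, 0 and +1 (an illustrative encoding, with Neg as sign flip and Min/Max as the built-ins):

```python
# Exhaustive check of De Morgan's laws under Kleene ternary logic.
trits = (-1, 0, 1)

for a in trits:
    for b in trits:
        # Neg(A Min B) == (Neg A) Max (Neg B)
        assert -min(a, b) == max(-a, -b)
        # Neg(A Max B) == (Neg A) Min (Neg B)
        assert -max(a, b) == min(-a, -b)

print("De Morgan's laws hold for all 9 input pairs")
```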

Relational Operators
04/27/2019 at 07:31 • 0 comments
The conventional relational operations compare numerical or boolean variables and return a boolean value based on the result of that comparison. The possible variations of relational operations are:
if a then...
if !a then...
if a > b then...
if a !<= b then... (same as above)
if a < b then...
if a !>= b then... (same as above)
if a >= b then...
if a !< b then... (same as above)
if a <= b then...
if a !> b then... (same as above)
if a == b then...
if a != b then...

Further, a couple of new relational operators would exist. The first is:
if a == NEUT then...
Using '?' to represent this operator is nicely consistent.
if a then...
if ?a then...
if !a then...

The second one is !! (ternary negation), which differs from boolean NOT. It returns FALSE for a positive input, NEUT for a 0 input and TRUE for a negative input.
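Sketched as Python functions (the '?' and '!!' syntax is hypothetical, so ordinary function names stand in for the operators, with kleenes as -1/0/+1):

```python
# Sketch of the two proposed operators on -1/0/+1 kleene values.
# 'is_neut' plays the role of '?', 'ternary_negate' the role of '!!'.

def is_neut(a):
    """The '?' test: true only when the operand is exactly 0 (NEUT)."""
    return a == 0

def ternary_negate(a):
    """The '!!' operator: swaps TRUE and FALSE, leaves NEUT alone."""
    return -a

print(is_neut(0))          # True
print(ternary_negate(1))   # -1 (FALSE)
print(ternary_negate(0))   # 0  (NEUT)
```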
In a boolean system some of these operations are redundant and aren't generally implemented in languages. These same redundancies exist in a balanced ternary system because the basic laws of arithmetic haven't changed.
The behavior of the relational operators is well defined in each language for booleans and integers, but needs to be explored when comparing a boolean and a kleene. As mentioned in other posts, I favor booleans that are forced to contain only a 0 or 1 rather than the "soft" booleans where any nonzero value is TRUE. This is how Python and stdbool.h in C do it. I also favor expanding this concept to the kleene data type so that only -1 is FALSE, only 0 is NEUTRAL and only +1 is TRUE. Any attempt to assign a kleene a different value would result in a negative value being clamped to -1 and a positive value being clamped to +1.
When comparing a kleene and a boolean there are a couple of things to consider. So far I have been considering that a boolean FALSE would continue to be represented by a 0 but I now realize this could cause serious problems such as:
bool x = FALSE   //contains 0
kleene y = FALSE //contains -1

x //returns FALSE
y //returns FALSE

x > y //returns TRUE... uh oh...
x == -1 //returns FALSE
y == -1 //returns TRUE... dang
x == y //returns... FALSE? Let me think about it...
Now how about comparing a boolean to a kleene:
bool x = FALSE //contains 0
kleene y = NEUT //contains 0

x == y //returns TRUE... oh hell...
After some more consideration I now see that one of two things would have to happen. Either a boolean would have to use -1 as the FALSE value to avoid irrationalities like the ones demonstrated above, or the relational operators would have to do type checking and translate accordingly.
I favor the type-checking approach rather than just asserting that -1 is the new boolean FALSE. This is because it would simply break too much code. Even the simplest things like "int x = 0" in C would break. So would any code that counts down to 0 and does comparisons to make a decision based on the countdown. Off-by-one errors would be everywhere. Therefore, type checking looks like the solution.
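Here's one way the type-checking approach could work, sketched in Python. The to_kleene() helper is hypothetical; the point is just that the boolean 0/1 representation gets promoted to the kleene -1/+1 representation before any comparison happens:

```python
# Sketch of type-checked comparison between bools and kleenes.
# Bools keep their usual 0/1 storage; kleenes use -1/0/+1.
# to_kleene() is a made-up helper, not a real language feature.

def to_kleene(x):
    if isinstance(x, bool):
        return 1 if x else -1  # bool FALSE (0) maps to kleene FALSE (-1)
    return x                   # already a kleene: -1, 0, or +1

def eq(a, b):
    """Type-checking '==': promote both sides, then compare."""
    return to_kleene(a) == to_kleene(b)

x = False        # boolean FALSE, internally 0
y = -1           # kleene FALSE
print(eq(x, y))  # True -- FALSE == FALSE, as it should be
print(eq(x, 0))  # False -- boolean FALSE is not kleene NEUT
```

With promotion in place, the "x == y returns TRUE... oh hell" case above goes away, because the boolean's stored 0 never gets compared directly against a kleene NEUT.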
The next question is, if a boolean were compared to a kleene, what would the return type be? On one hand it could always be a boolean since every one of the above operators always resolves to either TRUE or FALSE. On the other hand, what would happen in a case like this:
boolean x = FALSE
kleene y = FALSE

x = x == y //returns a TRUE... boolean?
y = x == y //returns a TRUE... kleene?

This would definitely be language specific. Python doesn't have any problem with changing a variable's type depending on the situation. C++, on the other hand, keeps the variable's declared type and quietly converts the value if you do something like trying to add 1.5 to an integer. In cases where the return value is not being assigned to an existing or newly instantiated variable, it would be fine to internally consider it a boolean in its limited scope. For example:
kleene x = FALSE
kleene y = FALSE

if x == y
{
    do something...
}
The datum returned by the == operator is going to exist temporarily and with limited scope. It would probably be safest to consider it a boolean since that is the more limited of the two options.
As for comparing a kleene and a kleene, the simple answer is that the operators behave exactly as one would logically expect, and return either TRUE or FALSE, but not NEUT.
kleene x = FALSE
kleene y = NEUT
kleene z = TRUE

print (x > y)
FALSE
print (y >= x)
TRUE
print (x < z)
TRUE
That takes care of most of the relational operators, but there is one more that is perfectly suited to a balanced ternary system. This is the "spaceship" operator, generally denoted by <=> in the languages that support it. This is the "three-way" comparison. Let's say that the expression "a <=> b" is being evaluated. It returns an integer: -1 if a is less than b, 0 if a is equal to b, or 1 if a is greater than b. The only difference would be that in our hypothetical system it would return a kleene with a value of FALSE, NEUT, or TRUE.
if a <=> b == FALSE, then a < b
if a <=> b == NEUT, then a == b
if a <=> b == TRUE, then a > b

Relational Operation       Equivalent Three-Way Operation
if a > b then...           if a <=> b == TRUE then...
if a !<= b then...         if a <=> b == TRUE then... (same as above)
if a < b then...           if a <=> b == FALSE then...
if a !>= b then...         if a <=> b == FALSE then... (same as above)
if a >= b then...          if a <=> b != FALSE then...
if a !< b then...          if a <=> b != FALSE then... (same as above)
if a <= b then...          if a <=> b != TRUE then...
if a !> b then...          if a <=> b != TRUE then... (same as above)
if a == b then...          if a <=> b == NEUT then...
if a != b then...          if a <=> b != NEUT then...
if a then...               if a == TRUE then...
if ?a then...              if a == NEUT then...
if !a then...              if a == FALSE then...
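For reference, the three-way comparison itself is tiny to model. Here is a Python sketch that returns a kleene-style -1/0/+1 (Python has no <=> operator, so a function stands in):

```python
# A minimal spaceship operator returning a kleene value:
# -1 (FALSE) if a < b, 0 (NEUT) if a == b, +1 (TRUE) if a > b.

def spaceship(a, b):
    # In Python, booleans subtract as 0/1, so this yields -1, 0, or +1.
    return (a > b) - (a < b)

print(spaceship(2, 5))  # -1 (FALSE): a < b
print(spaceship(3, 3))  #  0 (NEUT):  a == b
print(spaceship(7, 1))  #  1 (TRUE):  a > b
```

Every row of the table above then reduces to a single spaceship call followed by one equality or inequality test against FALSE, NEUT, or TRUE.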
I know that the three-way operations look more complex and appear to be merely equivalent, but there is a difference. Based on some work I've previously done on the appropriate status flags needed for a ternary processor, it is possible to do any of those comparisons in a single operation. On the binary side, some of them take two separate comparisons, which actually means: do the first comparison, store the result in a register, do the second comparison, store the result in a register, compare the two results for the final answer, then jump/branch or don't based on the result. Using the three-way operator it would simply be: do the comparison, jump/branch or don't based on the answer. Six operations turn into two.
Ultimately, I think it will be this kind of thing that defines the difference between binary and balanced ternary systems. You just get more computation out of each action. The more I research it, the more this becomes clear.