Copyright (C) 2000, 2001, 2003 Mike Sebastian

In the mid-70s, calculators were *the* hot new consumer product. Virtually every
store had a calculator sales display. There were a large number of manufacturers
producing calculators, with new models coming on the market almost every day. My
particular interest was, and continues to be, scientific calculators. Every time I
went into a store that sold calculators, I would head straight to the calculator
sales display to check out the new models, much to the chagrin of my mother. (I
still check out the calculator sales display, looking for new models, every time I
go into a store that sells calculators, much to the chagrin of my wife.)

Besides simply ogling the new models of calculators, I wanted some systematic method to provide a qualitative comparison of the accuracy of the different scientific calculators on display. To that end, I developed an algorithm to quickly ascertain the relative accuracy of the embedded algorithms (and, implicitly, the precision) of a calculator. This algorithm could be performed quickly and easily on virtually all scientific calculators and produced a single number as its result.

I would like to be able to say that the algorithm I developed was the product of a rigorous analysis. But that is not the case. This evaluation algorithm was the product of trial and error on my trusty Texas Instruments SR-51A, the calculators of a few friends, and whatever calculators were on display in the local stores.

The algorithm I eventually settled upon was: arcsin (arccos (arctan (tan (cos (sin 9))))), with the calculator in degrees mode. The keystroke sequence (ignoring 2nd, INV, or ARC keys) was typically: 9, sin, cos, tan, arctan, arccos, arcsin. Trigonometric functions were used because the trig function keys were almost always grouped together on the keyboard. Nine was chosen as the initial value because it was a single keystroke, and not too close to zero. Many calculators had, and still have, problems with angular values near zero. My purpose was to quickly test the overall accuracy of the embedded algorithms, not their limits.
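For reference, the sequence is easy to reproduce in software. The sketch below (Python, standard library only, my own illustration rather than anything from a real calculator) mimics a machine in degrees mode; with IEEE double precision, roughly sixteen decimal digits, the result comes back very close to, but not exactly, 9.

```python
import math

def forensics(x_deg=9.0):
    """arcsin(arccos(arctan(tan(cos(sin(x)))))), every trig function
    working in degrees, as on a calculator in DEG mode."""
    y = math.sin(math.radians(x_deg))   # ~0.15643446504
    y = math.cos(math.radians(y))       # ~0.99999627274
    y = math.tan(math.radians(y))
    y = math.degrees(math.atan(y))
    y = math.degrees(math.acos(y))
    y = math.degrees(math.asin(y))
    return y

print(forensics())  # close to 9, but not exactly 9
```

Note the `radians`/`degrees` conversions: Python's `math` functions work in radians, so each step has to be wrapped to behave like a calculator's degree-mode keys.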

Obviously, this algorithm usually produced different results on different calculator models (otherwise it would have no value as an evaluation and comparison tool). Also, due to imprecise algorithms, a small group of early scientific calculators was unable to complete the forensics algorithm at all.

**Why the Forensics Algorithm Produces Different Results on Different
Calculators**

Calculators use *approximations* in computing transcendental functions.
Most calculators use the CORDIC (COordinate Rotation DIgital Computer) method
to approximate their transcendental functions. A few calculators use polynomial
approximations. Whichever method is used, the accuracy of the approximation is
generally limited by the precision of the calculator – the number of digits the
calculator is capable of calculating – and by the quality of the software
written to implement the algorithm performing the approximation.
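To illustrate the first approach, here is a minimal rotation-mode CORDIC sketch in Python. This is only an illustration of the idea: real calculator firmware implements it in BCD with shift-and-add hardware, not floating point. Each iteration rotates a vector by a progressively smaller fixed angle, using only additions and multiplications by powers of two.

```python
import math

N = 32  # number of CORDIC iterations; each adds roughly one bit of accuracy
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]

# Each rotation stretches the vector; start at 1/K so it ends at length 1.
K = 1.0
for i in range(N):
    K *= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sin_cos(theta):
    """Return (sin, cos) of theta (radians, |theta| < pi/2) by CORDIC."""
    x, y, z = 1.0 / K, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0.0 else -1.0   # rotate toward the remaining angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x

s, c = cordic_sin_cos(math.radians(9.0))
```

The appeal for early calculator chips is visible in the loop: apart from the small table of arctangents, it needs no multiplier at all, since scaling by `2 ** -i` is just a digit (or bit) shift.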

Against this backdrop, we can better understand why the forensics algorithm produces such a variety of results on different calculators. When evaluating the forensics algorithm, several factors combine to produce the observed inaccuracies. These factors include:

- Inherent Loss of Precision
- Number of Digits Calculated (Precision)
- Algorithm Quality (Accuracy)

The first factor is inherent to the forensics algorithm and independent of which calculator is being evaluated. The latter two factors, which are dependent on the calculator being evaluated, are the variables responsible for the varying results produced by the forensics algorithm.

**Inherent Loss of Precision.** This factor is inherent to the forensics
algorithm and is independent of the calculator being evaluated. Because of
intermediate results in the forensics algorithm and the fixed precision of
scientific calculators, approximately five digits of precision are lost during
execution of the forensics algorithm. These five digits of precision are lost
at the stage where the cosine of 0.156434... is taken (the second step of the
forensics algorithm). The result of this operation is 0.999996.... This loss of
precision is independent of the calculator. For example, if your calculator is
an HP-42S or a TI-36X, both of which calculate their results to twelve digits,
your calculator is going to have only seven significant digits remaining to work
with. So a result accurate to seven digits is doing pretty well. In fact, about
seven digits of accuracy is what is observed with these two calculators. If you
have an early National Semiconductor or Casio calculator which only computes
eight-digit results, the calculator will be working with about three digits.
Three digits of accuracy is what is observed with these early calculators. And if
your vintage calculator uses one of the early General Instruments chips which
only computed transcendental functions to five digits, it is really no surprise
that it produces zero as its forensics algorithm result.
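The loss can be made visible with a quick sketch (Python again, with double precision standing in for the calculator's fixed precision):

```python
import math

s = math.sin(math.radians(9.0))   # 0.15643446504...
c = math.cos(math.radians(s))     # 0.99999627274...: five leading 9s
print(f"{c:.12f}")

# Only the digits after the run of 9s carry information, and arccos is
# extremely sensitive near 1: its slope is -1/sqrt(1 - c^2), several
# hundred here, so small errors in those digits are greatly magnified
# on the way back out of the inverse functions.
slope = 1.0 / math.sqrt(1.0 - c * c)
```

On a 12-digit machine, the five 9s leave roughly seven meaningful digits to survive the rest of the chain, which matches the observed results.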

**Number of Digits Calculated.** This factor is dependent on the calculator
being evaluated. As discussed above, the number of digits of precision can make
a significant difference in the result produced by the forensics algorithm. Many
manufacturers have used, and continue to use, "guard digits" – extra digits which
are carried internally but not displayed – to improve the accuracy of the
displayed result on their calculators.
Assuming decent algorithms, the more digits calculated, the greater the accuracy
of the result.
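The effect of the digit count can be simulated by rounding every intermediate result to n significant digits. This is only a rough sketch of my own: real machines round (or truncate) in BCD, and their transcendental algorithms add their own error on top of the rounding modeled here.

```python
import math

def sig_round(x, digits):
    """Round x to `digits` significant digits, as an n-digit
    calculator would after every operation."""
    if x == 0.0:
        return 0.0
    return round(x, digits - 1 - math.floor(math.log10(abs(x))))

def forensics_n(digits):
    """Run the forensics sequence (degrees mode), rounding each
    intermediate result to the simulated machine precision."""
    y = sig_round(math.sin(math.radians(9.0)), digits)
    y = sig_round(math.cos(math.radians(y)), digits)
    y = sig_round(math.tan(math.radians(y)), digits)
    y = sig_round(math.degrees(math.atan(y)), digits)
    y = sig_round(math.degrees(math.acos(y)), digits)
    y = sig_round(math.degrees(math.asin(y)), digits)
    return y

# More digits carried internally -> result closer to 9
for n in (6, 8, 10, 12):
    print(n, forensics_n(n))
```

Running this shows the 12-digit simulation landing within about a millionth of 9, while the low-digit runs drift well away from it – the same pattern seen across real calculators of differing precision.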

**Algorithm Quality.** This factor is also dependent on the calculator being
evaluated. The quality of the embedded algorithms which compute the
transcendental functions affects the accuracy of the result. Calculators of
similar capability (e.g. the same number of digits of precision) may produce
significantly different results. Early Hewlett-Packard calculators (the HP-35
through the Voyager series) clearly demonstrate an evolution of algorithm
quality, even though all of these calculators calculated their results to ten
digits. Another facet of algorithm quality is illustrated by a few early
calculators (e.g. calculators based on General Instruments, MOS Technologies,
and Rockwell chips) which computed transcendental functions to fewer digits of
precision than the calculator was otherwise capable of carrying. Especially on the early
calculators, where memory (both ROM and registers) was at a premium and
instruction execution speeds were slow, compromises were made which affected
algorithm quality.

**Anecdotal Observations**

As my calculator collection grew, I would perform the evaluation algorithm on each new scientific calculator obtained. I started noticing that some calculators produced the same result. For example, several early models of Texas Instruments scientific calculators produced 9.000004661314. Hewlett-Packard calculators from the HP-28C through the HP-48GX all produced 8.99999864267. It is reasonable to expect different models by the same manufacturer to produce the same result. But, what really caught my attention was the fact that many of the more modern calculators (late-80s and on), which were distinctly different and manufactured by several different companies, produced 8.99999863704.

Another interesting observation is that the Russian Electronika MK-37 (and its cousins) produces the same result, 10.4382, as the early Rockwell/Unicom 202/SR (and close cousins, the Rockwell 61R and Sears 801.58770). It is also worth noting that the keyboard layout and key assignments on the MK-37 are identical to the Rockwell.

I am starting to see calculators designed in China that appear to return exactly 9.0 from the evaluation algorithm. The first calculator I observed this on was the Hewlett-Packard 30S. Since then, I have seen this same result on several other calculators also manufactured by Kinpo Electronics. I have also observed this result on the Spectra SSC-200 (manufacturer unknown). These calculators appear to perform their calculations in binary, instead of BCD (Binary Coded Decimal), and they appear to calculate the results of their transcendental functions to about 80 bits of precision (the equivalent of almost 24 digits of precision). I believe they are also utilizing a more sophisticated (and hence more difficult to circumvent) rounding process than that found on other calculators.

**Summary**

The forensics algorithm was developed to quickly provide a qualitative comparison of the accuracy of scientific calculators. Three factors contribute to the loss of accuracy observed when executing the forensics algorithm on a calculator. The first factor, a loss of precision, is inherent in the algorithm and is independent of the calculator being evaluated. The two remaining factors, the number of digits calculated (precision) and algorithm quality (accuracy), are dependent on the calculator being evaluated and are responsible for the varying results produced by the forensics algorithm. It is these varying results, produced by different calculators evaluating the same sequence, which give the forensics algorithm its utility.

Last updated July 28, 2003