Fixed point arithmetic

History

I bet most of you - like me - began your voyage with Leo Brodie’s “Starting Forth”. Brodie dedicates an entire chapter to “fixed point” arithmetic, but although he gives many convincing examples, reality proved to be very different.

Yes, adding and subtracting work fine in “fixed point”. But when it comes to multiplying and dividing, you’ll find you hit the wall pretty hard and pretty fast. And of course, we’re to blame. We think we’re working in “fixed point” when we change our units from “dollars” to “cents”. Since we’re unable to leave our decimal mindset, we assume we’re doing “fixed point” whenever we scale a number in decimal units.

That is because a section was missing from that famous chapter. Brodie made up for the omission in “Forth Dimensions”, volume 4, number 1, where he presented a delightfully elegant solution to the problem, consisting of only six lines of Forth.

Agreed, it is of little use in a 16-bit or even a 32-bit environment. That may be why it has been overlooked for so many years. But now we’re moving into the 64-bit era - and that changes the game entirely. In many circumstances, it has become a low-cost alternative to floating point.

Brodie’s solution

Brodie’s solution consists of two parts:
1. First, he uses a binary scale factor instead of a decimal one;
2. Second, he creates operators that effectively and transparently deal with scaling.

Of your 64 bits, 14 are reserved for the fraction. So converting an ordinary integer into a fraction is equivalent to either a 14-bit left shift or a simple multiplication by 16384.

In order to convert a non-integer value into a fraction, Brodie offers two options:
1. Either divide two integers;
2. Or scale the number by 10000 and convert it using a dedicated word.
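
For illustration, here’s a minimal sketch of such a word set in Forth. The names are mine, not Brodie’s, rounding is ignored, and I’m assuming */ uses a double length intermediate result, as it does in most Forths:

16384 CONSTANT SCALE                  \ the fixed point unit, 2^14

: S>F ( n -- f) SCALE * ;             \ integer -> fraction
: F>S ( f -- n) SCALE / ;             \ fraction -> integer, truncating
: F* ( f1 f2 -- f3) SCALE */ ;        \ fixed point multiply
: F/ ( f1 f2 -- f3) SCALE SWAP */ ;   \ fixed point divide
: 10K>F ( n -- f) SCALE 10000 */ ;    \ 10K scaled decimal -> fraction

Note that F/ applied to two plain integers covers the first option: 1 3 F/ leaves 5461, which is about 0.3333 in units of 1/16384. The second option turns 3.1416 into 31416 10K>F, which leaves 51471.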

Note that all these operations are performed by a few integer operations - which makes them extremely fast compared to floating point. One drawback is that the repeated scaling reduces the actual range available to your fixed point number.

Another drawback is that you lose accuracy every single time you perform an arithmetic operation. Losing accuracy is nothing new. It is a known problem in floating point, where you might lose half a bit of accuracy with every single operation. However, if you’ve only got 14 bits to spare, you’ll reach that point much quicker - especially without rounding. E.g. 0.1 is stored as 1638/16384; square it and the truncated result is 163/16384, while 0.01 is really 163.84/16384 - and such errors compound.

So, this is not a panacea. I wouldn’t use it in situations where high precision or high accuracy is required. But it’s good enough for “government work”, e.g. low- or medium-resolution graphics.

It may be noted that lots of things work out of the box, like unary minus, the comparison operators and ABS. And of course “0”: zero is zero in almost any configuration, so that one doesn’t need any conversion - unless you’re overly pedantic.
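
A quick sanity check, using the hypothetical words sketched above:

1 S>F NEGATE ABS F>S .    \ prints 1: NEGATE and ABS need no wrappers
1 S>F 2 S>F < .           \ prints a true flag, since 1.0 < 2.0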

What’s missing

To make it a true alternative to a floating point library, it has to offer at least some essential transcendental functions, like SQRT, SIN, COS, LN and EXP. Fortunately, 4tH has lots of libraries. I just had to pick the best ones and write some wrappers.

Our first stop is math.4th. Although it offers a wide range of integer functions, better alternatives are available for most of them. The trigonometric functions, however, are pretty good. They are based on highly efficient Taylor approximations and take their operands as 10K scaled numbers. Interfacing with that one is a walk in the park.
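
A sketch of the wrappers, assuming the library’s SIN and COS take a 10K scaled angle and return a 10K scaled result - check math.4th for the actual names and conventions:

: F>10K ( f -- n) 10000 SCALE */ ;    \ fraction -> 10K scaled number
: FSIN ( f -- f') F>10K SIN 10K>F ;   \ convert, call, convert back
: FCOS ( f -- f') F>10K COS 10K>F ;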

When it comes to LN and EXP we don’t have much choice: we’ve only got the fxexpln.4th library. But those are pretty fast - so there is no need to go hunting for others. They take their operands as 64K scaled numbers - which means we just have to shift our 16K scaled numbers 2 bits to the left. Even better.
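
Another sketch, assuming the library exposes words here called FXLN and FXEXP. Note the use of 4 / instead of RSHIFT on the way back: RSHIFT is a logical shift, and LN may return negative values:

: F>64K ( f -- n) 4 * ;               \ 2 bits to the left: 16K -> 64K
: 64K>F ( n -- f) 4 / ;               \ and back again, sign-safe
: FLN ( f -- f') F>64K FXLN 64K>F ;
: FEXP ( f -- f') F>64K FXEXP 64K>F ;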

When it comes to SQRT, there are a lot of options. I chose the most recent addition, based on a quadratic residue algorithm. It’s very fast and quite precise. Sure, I have to do the proper scaling myself, but that isn’t really rocket science, given the properties of square roots.
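
The property in question: sqrt(x * 2^28) equals sqrt(x) * 2^14. So shifting the operand 14 bits to the left before taking the integer square root leaves a correctly scaled fraction. A sketch, assuming an integer SQRT word:

: FSQRT ( f -- f') 14 LSHIFT SQRT ;   \ f must be non-negative and fit in 49 bits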

A lot of other functions, like TAN, can be derived from those.
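
E.g. a sketch of TAN, built from the wrappers above:

: FTAN ( f -- f') DUP FSIN SWAP FCOS F/ ;   \ tan = sin / cos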

Applications

The first time these functions were applied in real life was when we tried to recreate an old ZX Spectrum 3D graph. Our first attempt used Zen float, which - although successful - wasn’t particularly fast. That was the point where we dusted off Brodie’s fixed point library and started experimenting.

It was fast. Where Zen float took seconds, this one finished in the blink of an eye - without any noticeable effect on the final result. That was the point where we started to take it seriously.

And that made me wonder: what if I took this a step further and tried a much slower environment, like uBasic/4tH? That one has no floating point support whatsoever. So I re-implemented the entire library in uBasic/4tH and started rewriting.

I selected a few existing floating point BASIC programs for that purpose. The main problem was that these translations got very long and very complicated, due to the uBasic/4tH calling conventions. E.g. an expression like:

Z=INT(25+FNA(SQR(X*X+Y*Y))-.7*Y)

Expanded into:

Z=FUNC(_Ftoi(FUNC(_Itof(250000))+FUNC(_FNa(FUNC(_Fsqrt(FUNC(_Fmul(X, X))+FUNC(_Itof((Y*Y*10000)))))))-FUNC(_Fdiv((7*Y), 10))))/10000

But it did work. And the performance on my Intel i5 was quite reasonable - a few seconds - although your mileage may vary on other 64-bit platforms, like ARM. Note that this whole thing was originally written with a floating point environment in mind and is interpreted by a program running on top of the 4tH virtual machine.

A real fun thing: I wrote an entire FOR..NEXT loop with fractions as parameters, and it performed brilliantly:

FOR X=FUNC(_Itof(-300000)) TO FUNC(_Itof(300000)) STEP FUNC(_Itof(15000))

And yes, I did check the values it looped through. That's just crazy!

It is worth mentioning that there may be room to optimize these expressions even further. Brodie notes that some mixed expressions need no scaling at all - multiplying a fraction by a plain integer, for example, already yields a properly scaled fraction. I haven’t explored all of these possibilities in detail yet, since I cherish the motto “first make it work, then improve it”.
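
The underlying identities are easy to verify:

(x * 16384) * n = (x * n) * 16384 - a fraction times an integer is still a properly scaled fraction
(x * 16384) / (y * 16384) = x / y - a fraction divided by a fraction is a plain integer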

Still, I feel that this technique is underappreciated - primarily, I think, because it is all but forgotten. And IMHO it most certainly doesn’t deserve such a fate.