I am wondering about some floating-point to fixed-point conversion issues.
After creating an algorithm in floating-point notation and achieving what the algorithm is supposed to do, if the final goal is a fixed-point implementation, the next step is that conversion. In the case of IT++ this is done by replacing the floating-point data type with the Fix or Fixed class and making the necessary adjustments.
Now after that conversion it would be good to know how much it degraded the model's performance compared to the original floating-point version. One way in the conversion process could be to actually copy the model: keep the floating-point version and convert the copy to fixed-point notation. Then the performance of both models could be compared, showing how much the fixed-point conversion affected the model.
The disadvantage of that approach is that it requires keeping the two models synchronized whenever the algorithm changes.
How is this done in practice? Numerous algorithms converted to fixed point must face this problem, so there must be a solution that I, with my limited view, don't see. Could anybody shed some light on this?
Thanks for your help.
Cheers,
Guenter