Following the discussion about Intel libraries and SSE optimization, I did
some literature research on elementary functions (trigonometric functions,
log, exp). In particular I hope to find some clever methods to speed up atan
and sin/cos, which would be very handy for all the FM stuff. But to find
suitable methods I need some parameters:
Which precision are we talking about? Do we need double-precision floating
point with exact rounding, or are we happy with reduced precision and
non-exact rounding? As far as I see, this depends on the data type being
processed, and should be optimized for each type.
How accurate do we want to be? Can we live with 1% relative error, or
0.01%? I have no clue about the accuracy constraints.
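To make that question concrete, one rough yardstick is to sample the input
range and compare a reduced-precision evaluation against the double-precision
libm result. A minimal sketch in C (the sample count and the choice of sinf
as the "reduced precision" candidate are just illustrative assumptions):

    #include <math.h>
    #include <stdio.h>

    /* Measure the relative error a single-precision evaluation introduces
       compared to the double-precision libm result, sampled over [-pi, pi]. */
    int main(void)
    {
        const double pi = 3.14159265358979323846;
        double max_rel = 0.0;

        for (int i = 0; i <= 100000; i++) {
            double x   = -pi + 2.0 * pi * i / 100000.0;
            double ref = sin(x);                 /* double-precision reference */
            double val = sinf((float)x);         /* single-precision candidate */
            if (fabs(ref) < 1e-9)
                continue;                        /* avoid dividing by ~0 */
            double rel = fabs((val - ref) / ref);
            if (rel > max_rel)
                max_rel = rel;
        }
        printf("max relative error of sinf vs sin: %g\n", max_rel);
        return 0;
    }

The same harness can then be pointed at any candidate approximation to check
it against a 1% or 0.01% budget.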
Is it important to have a constant (if possible) calculation time, or can we
live with iterative algorithms and an estimate of the average runtime? I
think the runtime should be constant, as we do not want data-dependent
runtimes: with one input set the computation capacity is sufficient for
real-time processing, while with a different input set it exceeds real time.
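To illustrate the difference: a fixed-degree polynomial in Horner form always
executes the same number of operations, while an iterate-until-tolerance
scheme has a runtime that depends on the input. A small sketch using exp as
an example (the degree and tolerance are arbitrary illustrative choices):

    #include <math.h>
    #include <stdio.h>

    /* Constant time: fixed-degree Taylor polynomial for exp(x) in Horner form.
       Always executes the same number of multiplies and adds. */
    static double exp_fixed(double x)
    {
        /* 1 + x + x^2/2! + x^3/3! + x^4/4! + x^5/5!, degree fixed at 5 */
        return 1.0 + x * (1.0 + x * (1.0/2 + x * (1.0/6 +
                          x * (1.0/24 + x * (1.0/120)))));
    }

    /* Data dependent: sum Taylor terms until they drop below a tolerance.
       The iteration count, and hence the runtime, depends on |x|. */
    static double exp_iterative(double x, double tol)
    {
        double term = 1.0, sum = 1.0;
        for (int n = 1; fabs(term) > tol; n++) {
            term *= x / n;          /* next term x^n / n! */
            sum += term;
        }
        return sum;
    }

    int main(void)
    {
        printf("fixed:     %f\n", exp_fixed(0.5));
        printf("iterative: %f\n", exp_iterative(0.5, 1e-9));
        return 0;
    }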
Which input range has to be processed? Taylor polynomials can be tuned quite
well, but perform badly outside a limited input range, say [-pi, pi]. Do we
have some constraints, or are there fast and accurate methods to condition
the data suitably (as trigonometric functions are generally periodic)?
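For instance, since sin is 2*pi-periodic, the argument can first be reduced
into [-pi, pi] and the polynomial only needs to be accurate there. A minimal
sketch of such a reduction (the naive floor-based reduction loses accuracy
for very large arguments, and the degree-7 Taylor polynomial is only an
example, with noticeable error toward the interval edges):

    #include <math.h>
    #include <stdio.h>

    /* Reduce x to [-pi, pi] by subtracting the nearest multiple of 2*pi, then
       evaluate a degree-7 Taylor polynomial for sin on the reduced argument. */
    static double sin_reduced(double x)
    {
        const double two_pi = 6.28318530717958647692;
        double k = floor(x / two_pi + 0.5);     /* nearest multiple of 2*pi      */
        double r = x - k * two_pi;              /* r now lies in about [-pi, pi] */

        /* odd polynomial r - r^3/3! + r^5/5! - r^7/7!;
           accuracy degrades as |r| approaches pi */
        double r2 = r * r;
        return r * (1.0 + r2 * (-1.0/6 + r2 * (1.0/120 + r2 * (-1.0/5040))));
    }

    int main(void)
    {
        printf("sin_reduced(10.0) = %f, libm sin(10.0) = %f\n",
               sin_reduced(10.0), sin(10.0));
        return 0;
    }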
How bad is branching (if, case, etc.) for our architectures? How can
we measure the impact of branches on the calculation speed?
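One simple way to get a feel for it is a micro-benchmark that does the same
work once with a data-dependent branch and once branch-free, on random (hence
unpredictable) data, and compares the timings. A rough sketch using clock();
rdtsc or a real profiler would give finer resolution, and an optimizing
compiler may well turn the branch into a conditional move on its own (the
array size and the absolute-value example are arbitrary):

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 10000000

    static float data[N];

    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = (float)rand() / RAND_MAX - 0.5f;  /* unpredictable sign */

        /* branchy version: data-dependent if */
        clock_t t0 = clock();
        float sum1 = 0.0f;
        for (int i = 0; i < N; i++) {
            if (data[i] < 0.0f)
                sum1 -= data[i];
            else
                sum1 += data[i];
        }
        clock_t t1 = clock();

        /* branch-free version: same result via fabsf */
        float sum2 = 0.0f;
        for (int i = 0; i < N; i++)
            sum2 += fabsf(data[i]);
        clock_t t2 = clock();

        /* printing the sums keeps the compiler from removing the loops */
        printf("branchy:     %.3fs (sum %g)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC, sum1);
        printf("branch-free: %.3fs (sum %g)\n",
               (double)(t2 - t1) / CLOCKS_PER_SEC, sum2);
        return 0;
    }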
Engineer's motto: cheap, good, fast: choose any two
Student of Telematik, Techn. University Graz, Austria