
#96 Need a redefinition for \pgfpointnormalised to improve calculation accuracy

Milestone: Next Release
Status: closed-fixed
Labels: None
Priority: 5
Updated: 2019-03-01
Created: 2015-07-26
Creator: Rokas
Private: No

While drawing the altitudes of a triangle I discovered that inaccuracies in some TikZ calculations can accumulate and produce faulty pictures.

All three altitudes of a triangle intersect in a single point (the orthocenter), but if you draw them using the projection syntax of the calc library (e.g., ($(A)!(B)!(C)$)), they will not meet in a single point and instead produce three distinct intersections.

You can find out more about this issue and what results it brings about here: http://tex.stackexchange.com/questions/256333/drawing-the-three-altitudes-of-a-triangle-with-tikz-incorrect-orthocenter
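
A minimal example demonstrating the problem (a trimmed version of the MWE that appears later in this discussion; zooming in on the orthocenter shows three distinct intersection points):

```latex
\documentclass[tikz]{standalone}
\usetikzlibrary{calc}
\begin{document}
\begin{tikzpicture}[x=2cm,y=2cm]
  \path (0,0) coordinate (A) (1,2.5) coordinate (B) (4,0) coordinate (C);
  \draw (A) -- (B) -- (C) -- cycle;
  % Each altitude runs from a vertex to its projection onto the opposite side.
  \draw [blue, very thin]
    (A) -- ($(B)!(A)!(C)$) (B) -- ($(A)!(B)!(C)$) (C) -- ($(A)!(C)!(B)$);
\end{tikzpicture}
\end{document}
```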

The following redefinition for \pgfpointnormalised was suggested:

\makeatletter
\def\pgfmathpointnormalised#1{%
  \pgf@process{#1}%
  \pgfmathatantwo{\the\pgf@y}{\the\pgf@x}%
  \let\pgf@tmp=\pgfmathresult%
  \pgfmathcos@{\pgf@tmp}\pgf@x=\pgfmathresult pt\relax%
  \pgfmathsin@{\pgf@tmp}\pgf@y=\pgfmathresult pt\relax%
}
\makeatother

This fix was tested and it indeed works perfectly.

I think the fix should be incorporated into the PGF/TikZ source code.
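
As a sketch of how the patch can be tried out locally: the snippet defines a new macro \pgfmathpointnormalised, so the trailing \let (which also appears, commented out, in a test file later in this discussion) is what actually replaces PGF's kernel macro:

```latex
\documentclass[tikz]{standalone}
\usetikzlibrary{calc}
\makeatletter
\def\pgfmathpointnormalised#1{%
  \pgf@process{#1}%
  \pgfmathatantwo{\the\pgf@y}{\the\pgf@x}%
  \let\pgf@tmp=\pgfmathresult
  \pgfmathcos@{\pgf@tmp}\pgf@x=\pgfmathresult pt\relax
  \pgfmathsin@{\pgf@tmp}\pgf@y=\pgfmathresult pt\relax
}
% Route the kernel macro through the more accurate version:
\let\pgfpointnormalised=\pgfmathpointnormalised
\makeatother
\begin{document}
\begin{tikzpicture}[x=2cm,y=2cm]
  \path (0,0) coordinate (A) (1,2.5) coordinate (B) (4,0) coordinate (C);
  \draw (A) -- (B) -- (C) -- cycle;
  % With the patch active, the three altitudes meet in one point.
  \draw [blue, very thin]
    (A) -- ($(B)!(A)!(C)$) (B) -- ($(A)!(B)!(C)$) (C) -- ($(A)!(C)!(B)$);
\end{tikzpicture}
\end{document}
```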

Discussion

  • Henri Menke

    Henri Menke - 2019-02-28
    • status: open --> closed-fixed
    • assigned_to: Henri Menke
     
  • Stefan Pinnow

    Stefan Pinnow - 2019-02-28

    Comment on the commit message: Because I was not sure how often \pgfpointnormalised is used in the manual (1x or 1000x), I made a test with the MWE given in the TeX.SX question, looping the inside of the tikzpicture 1000x. Compiling with LuaLaTeX took 65s with the "old" definition and 75s with the "new" definition.

     
  • Henri Menke

    Henri Menke - 2019-03-01

    Your reported 15% performance loss is not reproducible for me.
    I tested the following MWE on my machine with LuaTeX 1.10.0:

    \documentclass[tikz]{standalone}
    \usetikzlibrary{calc,spy}
    \begin{document}
    \foreach \i in {1,...,1000} {
      \begin{tikzpicture}[x=2cm,y=2cm, 
        spy using outlines={circle, magnification=10, size=2cm, connect spies}]
        \path  (0,0) coordinate (A) (1,2.5) coordinate (B) (4,0) coordinate (C);
        \draw (A) -- (B) -- (C) -- cycle;
        \draw [blue, opacity=0.5, very thin] 
        (A) -- ($(B)!(A)!(C)$) (B) -- ($(A)!(B)!(C)$) (C) -- ($(A)!(C)!(B)$);
        \spy [blue] on (1,1.2) in node at (3.5,1.5);
      \end{tikzpicture}
    }
    \end{document}
    

    With the old definition I got 11.062s and with the new definition 11.639s, which at about 5% (11.639/11.062 ≈ 1.05) is a larger performance loss than for the manual. However, in wall-clock time it is still less than a second of difference for a document with 1000 pictures using the projection syntax. I think this is negligible.

     
  • Stefan Pinnow

    Stefan Pinnow - 2019-03-01

    Sorry, I missed a 0 ...

    I did my test with

    \documentclass[varwidth,border=5]{standalone}
    \usepackage{tikz}
    \usetikzlibrary{calc,spy}
    \makeatletter
    \def\pgfmathpointnormalised#1{%
      \pgf@process{#1}%
      \pgfmathatantwo{\the\pgf@y}{\the\pgf@x}%
      \let\pgf@tmp=\pgfmathresult%
      \pgfmathcos@{\pgf@tmp}\pgf@x=\pgfmathresult pt\relax%
      \pgfmathsin@{\pgf@tmp}\pgf@y=\pgfmathresult pt\relax%
    }
    \makeatother
    \begin{document}
    \begin{tikzpicture}[
        x=2cm,y=2cm,
        spy using outlines={
            circle,
            magnification=10,
            size=2cm,
            connect spies,
        },
    ]
    %    \let\pgfpointnormalised=\pgfmathpointnormalised
        \foreach \i in {1,...,10000} {
            \path (0,0) coordinate (A) (1,2.5) coordinate (B) (4,0) coordinate (C);
            \draw (A) -- (B) -- (C) -- cycle;
            \draw [blue, opacity=0.5, very thin]
              (A) -- ($(B)!(A)!(C)$) (B) -- ($(A)!(B)!(C)$) (C) -- ($(A)!(C)!(B)$);
        }
        \spy [blue] on (1,1.2) in node at (3.5,1.5);
    \end{tikzpicture}
    \end{document}
    

    thus looping 10,000x (instead of 1,000x). But the wall-clock times were correct, assuming this means I used a stopwatch, starting when pressing the compile button and stopping when the console finished.

    Even though in my test the performance loss is 15%, I think this is not a problem at all, because even at 1,000 iterations the difference is barely noticeable to the user.

    To sum up (this was already my conclusion yesterday, but unfortunately I didn't write it in the comment):
    everything is fine with the new definition.

     