From: Helmut J. <jar...@sk...> - 2020-03-23 16:27:54
On 03/23/2020 02:20:57 PM, Michel Talon wrote:
>
> Le 23/03/2020 à 05:59, Gunter Königsmann a écrit :
>> On 22.03.20 21:41, Richard Fateman wrote:
>>> This is probably not going to solve this particular problem,
>>> but the general problem of making sure that plots don't lie
>>> because of numerical errors can be addressed by interval
>>> arithmetic, at least sometimes. google search for "honest
>>> plotting".
>>> Not that Maxima has any seriously implemented interval arithmetic.
>>> RJF
>>>
>> Interval arithmetic is easy to implement for anything which is both
>> continuous and monotonic - and can be extended to such functions
>> with more than one input (for example multiplication). For
>> machine-float periodic functions like sin(x) and tan(x) we could
>> perhaps get info about the periodicity for large x.
>>
>> The question is if this is enough for this case, or if we would
>> need interval arithmetic for find_root - which is impossible for
>> more than trivial problems.
>
>
> I have looked at other examples and draw2d(implicit(...)) always
> fails to plot correctly the branches near a singular point of
> f(x,y)=0. I have taken a look at the way it is programmed:
> basically the square is subdivided into a grid of finely spaced
> points, then f is computed at each point; if at some point |f|
> becomes < 1e-06, it is considered that f=0 and this point is drawn
> on the plot. This obviously works well in general. However, close
> to the singular point (0,0), f is given by powers >=2 of x and y,
> so it is much smaller than the typical value of f in the square.
> Hence the epsilon=1e-06, which is appropriate for the typical case,
> is no longer relevant; it should be renormalized by the typical
> value of f near the singular point. Second, the derivatives of f at
> (0,0) vanish by definition, so f doesn't vary much between nearby
> points in the grid, and the selection of which points are
> considered zeros of f becomes fuzzy. Third, there are several
> curves which converge to (0,0), also by definition, hence they are
> spaced closer and closer together as one approaches the singular
> point. Sufficiently close, they are finer than the grid, so the
> computation cannot work. Only an adaptive grid could overcome this
> problem. As a consequence it seems to me that only a far more
> complicated program could plot this situation correctly. I don't
> see in what way so-called "interval arithmetic" could be of any
> help.
>
Once any zero of f is found, one can "continue" as follows (crude
outline of an algorithm).
Let f be a mapping from R^(d+1) -> R^d and x be a (d+1)-dimensional
vector (here d=1, x = (x,y)).
Given x_0 we are looking for a nearby point on the root locus of f,
i.e. we have to solve f(x(t)) = 0, where x(t) is a (locally defined)
parameterized curve in R^(d+1).
Let J be the Jacobian of f at x_0. Except at a singularity, the
nullspace of J is one-dimensional; a nullvector \dot x, normalized
without loss of generality to norm(\dot x) = 1, can be found by an
SVD of J.
Now one can compute a predictor x_1p = x_0 + h \dot x, where h is an
adaptive step length.
One refines x_1p with a quasi-Newton method which uses the
pseudo-inverse of J (easily computable since we have the SVD at
hand). Given a good step length h, a single Newton step suffices.
A crude step-length adaptation can be based on the number of Newton
steps required to reduce the residual below a relative threshold. If
one isn't interested in the singular point itself, this method
usually just "jumps" over the singular point.
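For the planar case (d=1) the predictor-corrector loop above can be
sketched in a few lines of Python. For a one-row Jacobian
J = (f_x, f_y) the unit nullvector is simply the rotated gradient,
which is what the SVD would deliver in general dimensions. This is a
rough sketch with a fixed step length rather than the adaptive one
described above:

```python
import math

def continue_curve(f, grad, x0, h=0.05, steps=10, tol=1e-10):
    """Predictor-corrector continuation for f: R^2 -> R (d = 1).

    grad returns the Jacobian (fx, fy); its nullvector is the
    rotated, normalized gradient (-fy, fx)/|grad f|."""
    pts = [x0]
    x, y = x0
    prev = None
    for _ in range(steps):
        fx, fy = grad(x, y)
        nrm = math.hypot(fx, fy)
        tx, ty = -fy / nrm, fx / nrm      # unit tangent (nullvector of J)
        if prev and tx * prev[0] + ty * prev[1] < 0:
            tx, ty = -tx, -ty             # keep a consistent direction
        prev = (tx, ty)
        x, y = x + h * tx, y + h * ty     # predictor step
        # corrector: Newton steps with the pseudo-inverse J^+ = J^T/(J J^T)
        for _ in range(10):
            r = f(x, y)
            if abs(r) < tol:
                break
            fx, fy = grad(x, y)
            s = fx * fx + fy * fy
            x, y = x - r * fx / s, y - r * fy / s
        pts.append((x, y))
    return pts

# trace part of the unit circle x^2 + y^2 = 1 starting at (1, 0)
pts = continue_curve(lambda x, y: x * x + y * y - 1,
                     lambda x, y: (2 * x, 2 * y), (1.0, 0.0))
```

With a good step length the corrector indeed converges in one or two
Newton steps here; near a singular point the gradient norm degrades
and the adaptive step length becomes essential.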
To get several arcs of the root locus of f one has to use different
starting points x_0, and in general it is difficult to be sure one
has found all arcs.
One could use an interval Newton method for the refinement step to
get rigorous bounds on the zero, but this is expensive in higher
dimensions.
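In one dimension the interval Newton step is cheap. A minimal sketch
(my own illustration; it assumes the derivative enclosure does not
contain zero, and it uses ordinary float arithmetic, so for true
rigor one would add outward rounding), applied to x^2 - 2 on [1, 2]:

```python
def interval_newton(f, dF, lo, hi, iters=20):
    """Contract [lo, hi] around a root of f. dF(lo, hi) returns an
    interval enclosure (dlo, dhi) of f' on [lo, hi], 0 not inside."""
    for _ in range(iters):
        m = 0.5 * (lo + hi)
        fm = f(m)
        dlo, dhi = dF(lo, hi)
        q = (fm / dlo, fm / dhi)              # fm / [dlo, dhi]
        nlo, nhi = m - max(q), m - min(q)     # Newton operator N(X)
        lo, hi = max(lo, nlo), min(hi, nhi)   # intersect N(X) with X
    return lo, hi

lo, hi = interval_newton(lambda x: x * x - 2,
                         lambda lo, hi: (2 * lo, 2 * hi), 1.0, 2.0)
# [lo, hi] now tightly encloses sqrt(2)
```

The contraction is quadratic, like ordinary Newton, but every
operation works on interval endpoints, which is what makes the
higher-dimensional version expensive.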
This and similar algorithms are standard in numerical bifurcation
analysis. It is even possible to switch branches
at a (near) singularity.
Of course, this is all restricted to continuously differentiable
mappings.
Just some ideas,
Helmut