Axis : SciLex
Subject : Stability(s): links between floating arithmetic and numerical analysis
Thesis Supervisors : Sylvie Boldo (Inria - LRI) and Alexandre Chapoutot (ENSTA ParisTech)
Institutions : Inria - LRI and ENSTA ParisTech
Administrator laboratory : Inria
PhD : Florian Faissole
Beginning : 10/01/16
Scientific production :
Other page of the ELEFFAN project — Florian Faissole's web page
This topic lies at the interface between computer science and applied mathematics. The computer-science part is preponderant, since the goal is to take algorithms from numerical analysis and to verify their good behavior on a computer. This behavior, proven under the assumption that computations are exact, could be invalidated by rounding errors and overflows arising from floating-point arithmetic.
Scientific goals :
Computer calculations introduce an error at each operation; in certain pathological cases these errors can accumulate and yield a completely wrong result. This thesis aims to clarify the link between two notions: stability in the sense of numerical analysis and stability in the sense of floating-point arithmetic. In floating-point arithmetic, an algorithm is stable if close inputs produce close outputs. The ratio between the difference in outputs and the difference in inputs is called the conditioning, and it characterizes the stability of a function. In numerical analysis, stability has another meaning. For partial differential equations, it means that the solution values do not diverge. For ordinary differential equations, it means the stability of the dynamical system, often in the sense of Lyapunov. Note that there is also a notion of sensitivity to the data for numerical integration schemes, called zero-stability, which is very similar to the notion of stability in floating-point arithmetic. This thesis may highlight this link before considering the other kinds of stability of numerical schemes.
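As a minimal illustration of conditioning (our own example, not taken from the thesis), the relative condition number cond(f, x) = |x f'(x) / f(x)| quantifies how much a relative perturbation of the input is amplified in the output; the function name and the sample function f(x) = x - 1 below are ours:

```python
# Sketch: the relative condition number of f at x measures how input
# perturbations are amplified in the output. Large values mean that
# close inputs can yield outputs that are far apart in relative terms.
def relative_condition(f, fprime, x):
    return abs(x * fprime(x) / f(x))

# f(x) = x - 1 is badly conditioned near x = 1 (cancellation):
f = lambda x: x - 1.0
fp = lambda x: 1.0

print(relative_condition(f, fp, 1.0001))  # large: ill-conditioned near 1
print(relative_condition(f, fp, 2.0))     # small: well-conditioned
```

Near x = 1 the subtraction cancels almost all significant digits, so a tiny relative change of the input moves the output by a large relative amount, which is exactly the floating-point notion of instability described above.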
Mathematicians commonly believe that a scheme that is stable in the sense of dynamical systems is also numerically stable. We think this claim is true, and we would like to prove it formally. The idea is that the numerical errors are compensated thanks to the particular form of a stable and convergent numerical scheme, so that the computation globally "goes well". Local rounding errors can easily be bounded at each step using the floating-point arithmetic properties specified by IEEE-754 (with the exception of trigonometric functions). This is particularly simple here because the values are bounded, thanks to the stability of the scheme. One difficulty is to bound the final error after a large number of iterations. In fact, it appears that the compensations exhibited in a particular case are more general and apply to a broad class of stable numerical schemes. Another difficulty is that the result must be general enough to apply to different types of schemes, yet precise enough to give usable bounds on rounding errors.
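The compensation phenomenon can be sketched as follows (our own toy example, not a result from the thesis): for a contracting iteration x_{k+1} = g(x_k) with contraction factor L < 1, the local rounding error of each step is damped geometrically by later steps, so the global error stays of the order of the unit roundoff u instead of growing with the number of iterations:

```python
# Sketch: a stable (contracting) fixed-point iteration. Each step commits
# at most one rounding error of size ~u, but the contraction damps old
# errors, so the final error is bounded independently of the step count.
u = 2.0 ** -53  # unit roundoff of IEEE-754 binary64

def iterate(x0, steps):
    x = x0
    for _ in range(steps):
        x = x / 2.0 + 1.0  # contraction with factor L = 1/2, fixed point 2
    return x

err = abs(iterate(0.0, 200) - 2.0)
print(err <= 4 * u)  # global error stays O(u), not O(steps * u)
```

An unstable iteration (L > 1) would instead amplify each local rounding error, which is why the stability of the scheme is the key hypothesis in the bound the thesis aims to formalize.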