Speakers
 Sergey Dolgov
 Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg (D)

Alternating iteration for low-rank solution of linear systems with large indefinite matrices
High-dimensional partial differential equations arise naturally when spatial coordinates are extended by time and auxiliary variables.
The latter may account for uncertain inputs or design parameters.
After discretization, the number of degrees of freedom grows exponentially with the number of parameters.
To reduce the complexity, we can employ a low-rank separation of variables and approximate large vectors and matrices
by a polylinear combination of factors, each of which depends on only one variable.
Among the most powerful such formats are the Tensor Train and Hierarchical Tucker decompositions.
A workhorse method to compute the factors directly is the Alternating Least Squares iteration and its extensions.
However, matrices with a saddle point structure have proved difficult to treat with existing alternating schemes.
Such matrices occur in optimal control problems, where a convex optimization,
constrained by a high-dimensional PDE, is solved via the Lagrangian approach.
In this talk, we show how to preserve the saddle point structure of the linear system during the alternating iteration and solve it efficiently.
We demonstrate numerical examples of the inverse Stokes-Brinkman and Navier-Stokes problems.
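As a rough illustration of the alternating least squares idea mentioned in the abstract, the following minimal sketch factorises a matrix as a rank-r product by alternately fixing one factor and solving a linear least-squares problem for the other. This is a toy matrix example in plain numpy, not the tensor-structured solver from the talk:

```python
import numpy as np

# Hypothetical ALS sketch: approximate a rank-3 matrix M by U @ V.T,
# alternately solving a least-squares problem for each factor.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # exact rank 3

r = 3
U = rng.standard_normal((50, r))
V = rng.standard_normal((40, r))
for _ in range(20):
    # Fix V, solve min_U ||U V^T - M||_F: V @ U.T ~= M.T
    U = np.linalg.lstsq(V, M.T, rcond=None)[0].T
    # Fix U, solve min_V ||U V^T - M||_F: U @ V.T ~= M
    V = np.linalg.lstsq(U, M, rcond=None)[0].T

print(np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))  # tiny relative residual
```

For a matrix of exact rank r, generic starting factors recover M to machine precision after the first full sweep; the tensor variants in the talk apply the same fix-one-solve-one principle to the cores of a Tensor Train or Hierarchical Tucker decomposition.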
 Bart Vandereycken
 Université de Genève, Section of Mathematics, Geneva (CH)

Riemannian optimisation with rank constraints: preconditioning and rank adaptivity
The minimisation of a smooth objective function subject to a matrix rank constraint can sometimes be solved very effectively by methods from Riemannian optimisation. This is for instance the case for the low-rank matrix completion problem, or for the solution of PDEs on square domains, like the Lyapunov equation. However, the theory of Riemannian optimisation leaves some questions unanswered regarding the practical application of such algorithms. I will focus on two such questions. The first is how the metric has a significant impact on the convergence of the numerical methods. This is related to how Newton's method can be seen as a variable metric in numerical optimisation, and to general preconditioning techniques. The second topic is rank adaptivity. In rank-constrained optimisation, one typically does not know the rank a priori but may be searching for the smallest rank satisfying a certain criterion, such as a small residual. I will explain how the geometry of the tangent cone of the variety of matrices of bounded rank can be incorporated so as to obtain rank-adaptive algorithms that stay true to the manifold philosophy.
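The fixed-rank machinery the abstract builds on can be sketched in a toy setting (hypothetical numpy code, not the speaker's implementation, with the flat Euclidean metric rather than a preconditioned one): each step projects the Euclidean gradient onto the tangent space of the rank-r manifold at the current iterate and retracts by a truncated SVD.

```python
import numpy as np

# Illustrative Riemannian gradient descent on the manifold of rank-r
# matrices, minimising f(X) = 0.5 * ||X - M||_F^2 for a rank-r target M.
rng = np.random.default_rng(1)
m, n, r = 30, 20, 2
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r target

# Initialise on the manifold, close to M (local convergence regime).
U0, s0, Vt0 = np.linalg.svd(M + 0.01 * rng.standard_normal((m, n)),
                            full_matrices=False)
X = (U0[:, :r] * s0[:r]) @ Vt0[:r]

for _ in range(50):
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    U, Vt = U[:, :r], Vt[:r]
    G = X - M                            # Euclidean gradient of f
    PU, PV = U @ U.T, Vt.T @ Vt
    xi = PU @ G + G @ PV - PU @ G @ PV   # projection onto tangent space at X
    U2, s2, Vt2 = np.linalg.svd(X - xi, full_matrices=False)  # gradient step,
    X = (U2[:, :r] * s2[:r]) @ Vt2[:r]   # then retract by rank-r truncation

print(np.linalg.norm(X - M) / np.linalg.norm(M))  # converges to M
```

The rank r is fixed throughout this sketch; the rank-adaptive algorithms in the talk instead work with the tangent cone of the variety of matrices of rank at most r, which allows the iterate's rank to increase when a smaller rank cannot meet the target residual.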