Seminars
Solving nonlinear systems of equations with the help of approximating polynomials
Strong-stability-preserving Runge-Kutta methods with downwind-biased operators
Strong stability preserving (SSP) time integrators have been developed to preserve nonlinear stability properties (e.g., monotonicity, boundedness) of the numerical solution in arbitrary norms or convex functionals, when coupled with suitable spatial discretizations. Currently, all existing general linear methods (including Runge-Kutta and linear multistep methods) either suffer from small step-size restrictions for nonlinear stability or are only first-order accurate. To obtain larger step sizes, discretizations of PDEs that contain both upwind- and downwind-biased operators have been employed.
In this talk, we review SSP Runge-Kutta methods that use upwind- and downwind-biased discretizations in the framework of perturbed Runge-Kutta methods. We show how downwinding improves the SSP properties of time-stepping methods and breaks some order barriers. In particular, we focus on implicit perturbed SSP Runge-Kutta methods that have an unbounded SSP coefficient. We present a one-parameter family of second- and third-order perturbed Runge-Kutta methods for which the CFL-like step-size restriction can be chosen arbitrarily large. The stability of this family of methods is analyzed, and we demonstrate that the desired order of accuracy is attained even for large CFL numbers. Finally, we investigate the computational challenges of the implicit problem and propose ideas that lead to an efficient implementation of Newton's method.
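
To make the SSP idea concrete, here is a minimal sketch (not taken from the talk) of the classical three-stage, third-order SSP Runge-Kutta method of Shu and Osher, applied to linear advection with a first-order upwind discretization. The scheme is written as convex combinations of forward-Euler steps, which is what guarantees that the total variation does not increase under the CFL-like step-size restriction. The downwind-biased and implicit perturbed variants discussed in the talk would additionally use a downwind operator in some stages; that refinement is omitted here, and all grid and CFL parameters below are illustrative choices.

import numpy as np

def upwind_rhs(u, dx):
    # First-order upwind discretization of u_t + u_x = 0 on a
    # periodic grid: du_i/dt = -(u_i - u_{i-1}) / dx.
    return -(u - np.roll(u, 1)) / dx

def ssprk3_step(u, dt, dx):
    # Three-stage, third-order SSP Runge-Kutta method (Shu-Osher form),
    # written as convex combinations of forward-Euler steps, so it is
    # total-variation diminishing whenever forward Euler is (CFL <= 1).
    u1 = u + dt * upwind_rhs(u, dx)
    u2 = 0.75 * u + 0.25 * (u1 + dt * upwind_rhs(u1, dx))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * upwind_rhs(u2, dx))

def total_variation(v):
    # Total variation on a periodic grid.
    return np.abs(np.diff(v)).sum() + abs(v[0] - v[-1])

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
u = np.where((0.25 < x) & (x < 0.5), 1.0, 0.0)  # square wave
dt = 0.8 * dx                                   # CFL number 0.8

tv0 = total_variation(u)
for _ in range(100):
    u = ssprk3_step(u, dt, dx)
print(f"TV before: {tv0:.3f}, TV after: {total_variation(u):.3f}")

Running the sketch shows the total variation staying non-increasing across the square wave's discontinuities, the monotonicity property that SSP methods are designed to preserve.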
Algebraic geometry and the field at infinity
Asymptotic properties of mean field coupled maps
Level set percolation for Gaussian fields
A duality theorem for infinite LPs - an application in game theory
Application of quantum-graph theory to the quantum mechanics of extremely floppy molecules
Evolutionary algorithms
In my talk I will present a rather efficient class of tools for optimization and constraint satisfaction problems. These methods are called evolutionary or genetic algorithms, since their basic concepts mimic the evolution of species. They were first used to solve discrete problems such as scheduling. Later, modified versions proved efficient for a class of continuous problems where the gradient method and its variants fail, either because the objective function is not differentiable or because it has many local optima. An example of a specific continuous problem will be presented.
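
As a concrete illustration of the kind of method the talk describes (a minimal sketch, not the speaker's code), the following real-coded genetic algorithm minimizes the Rastrigin function, a standard smooth but highly multimodal test problem on which plain gradient descent tends to stall in local optima. The selection, crossover, and mutation operators and all parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def rastrigin(x):
    # Smooth but highly multimodal; global minimum 0 at x = 0.
    return 10.0 * x.shape[-1] + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x), axis=-1)

def genetic_algorithm(f, dim=5, pop_size=60, generations=300,
                      bounds=(-5.12, 5.12), mut_sigma=0.3):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    best_x, best_f = None, np.inf
    for _ in range(generations):
        fit = f(pop)
        if fit.min() < best_f:                      # track best-so-far
            best_f = fit.min()
            best_x = pop[fit.argmin()].copy()
        # Tournament selection: the fitter of two random individuals wins.
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
        # Uniform crossover between consecutive parents.
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped back into the search domain.
        children += rng.normal(0.0, mut_sigma, size=children.shape)
        pop = np.clip(children, lo, hi)
        pop[0] = best_x                             # elitism
    return best_x, best_f

best_x, best_f = genetic_algorithm(rastrigin)
print(f"best f = {best_f:.4f} at x = {np.round(best_x, 3)}")

Note that the algorithm uses only function values, never gradients, which is why it remains applicable when the objective is non-differentiable or riddled with local optima.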