
APPLIED PARTIAL DIFFERENTIAL EQUATIONS
1. Various models involving linear and nonlinear partial differential equations
2. Elliptic equations. Maximum principles
3. Variational solutions for elliptic boundary value problems
4. Parabolic equations
5. Hyperbolic equations and systems. Vibrating strings and membranes
6. Theory for nonlinear partial differential equations. Variational and nonvariational techniques
7. Conservation laws
8. Laplace transform solution of partial differential equations

DIFFERENCE METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS
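A minimal sketch of the course's starting point, an explicit (forward-time, centered-space) difference scheme for the 1-D heat equation u_t = u_xx, is given below. The grid size, time step, and triangular initial profile are illustrative choices, not taken from the syllabus; the scheme is stable only when r = dt/dx² ≤ 1/2.

```python
# Explicit (FTCS) finite-difference scheme for the 1-D heat equation
# u_t = u_xx on [0, 1] with zero boundary values.
# Stable only when r = dt/dx^2 <= 1/2.

def ftcs_heat(u0, dx, dt, steps):
    """Advance the initial profile u0 (list of grid values) by `steps`
    time steps of the explicit scheme; boundaries held at zero."""
    r = dt / dx**2
    if r > 0.5:
        raise ValueError("unstable: need dt/dx^2 <= 1/2")
    u = list(u0)
    for _ in range(steps):
        u = ([0.0] +
             [u[i] + r * (u[i+1] - 2*u[i] + u[i-1])
              for i in range(1, len(u) - 1)] +
             [0.0])
    return u

# Example: a triangular initial profile on 11 grid points decays smoothly.
n = 11
dx = 1.0 / (n - 1)
u0 = [min(i, n - 1 - i) * dx for i in range(n)]
u = ftcs_heat(u0, dx, dt=0.4 * dx**2, steps=50)
```

Note how the numerical solution respects the discrete maximum principle: the interior values stay below the initial maximum while remaining symmetric, mirroring the decay of the exact solution.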
1. Introduction to finite differences
2. Convergence, consistency, stability
3. Difference schemes for parabolic equations
4. Difference schemes for hyperbolic equations
5. Difference schemes for systems of partial differential equations
6. Dispersion and dissipation
7. Various applications and examples

CONTROL OF DYNAMIC SYSTEMS
Basic principles and methods of control theory are discussed. The main concepts (observability, controllability, stabilizability, optimality conditions, etc.) are addressed, with special emphasis on linear differential systems and quadratic functionals. Many applications are discussed in detail. The course is designed for students oriented toward Applied Mathematics.
The main goal of the course is to introduce students to the theory of optimal control for differential systems. We also intend to discuss specific problems arising from down-to-earth applications in order to illustrate this remarkable theory.
The students will learn some basic concepts and results in control theory that are very useful for applied mathematicians, economists, engineers, and physicists. Moreover, they will learn how to use these tools to solve specific real-world problems.
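As a small preview of one such tool, the Kalman rank condition for controllability (Week 7 of the schedule below), here is a minimal sketch for a two-state, single-input system. The double-integrator example and the helper functions are illustrative choices, not part of the course materials.

```python
# Kalman rank condition for a single-input 2-state system x' = Ax + Bu:
# the system is controllable iff the matrix [B, AB] has full rank.
# Illustrative example: the double integrator x1' = x2, x2' = u.

def mat_vec(A, v):
    """Product of a matrix (list of rows) with a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def controllability_matrix_2d(A, b):
    """Columns [b, A b] for a 2-state, single-input system."""
    Ab = mat_vec(A, b)
    return [[b[0], Ab[0]],
            [b[1], Ab[1]]]

def is_controllable_2d(A, b, tol=1e-12):
    """Full rank of the 2x2 controllability matrix <=> nonzero determinant."""
    C = controllability_matrix_2d(A, b)
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    return abs(det) > tol

A = [[0.0, 1.0],
     [0.0, 0.0]]
b = [0.0, 1.0]   # the control enters through the second state only
```

With these data, is_controllable_2d(A, b) reports the double integrator as controllable, while a system with A the identity and b = [1, 0] fails the rank test (the control never reaches the second state).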
Week 1: Linear Differential Systems (existence of solutions, variation of constants formula, continuous dependence of solutions on data, exercises)
Week 2: Nonlinear Differential Systems (local and global existence of solutions for the Cauchy problem, continuous dependence on data, differential inclusions, exercises)
Week 3: Basic Stability Theory (concepts of stability, stability of the equilibrium, stability by linearization, Lyapunov functions, applications)
Week 4: Observability of linear autonomous systems (definition, observability matrix, necessary and sufficient conditions for observability, examples)
Week 5: Observability of linear time-varying systems (definition, observability matrix, numerical algorithms for observability, examples)
Week 6: Input identification for linear systems (definition, the rank condition in the case of autonomous systems, examples)
Week 7: Controllability of linear systems (definition, controllability of autonomous systems, controllability matrix, Kalman’s rank condition, the case of time-varying systems, applications)
Week 8: Controllability of perturbed systems (perturbations of the control matrix, nonlinear autonomous systems, time-varying systems, examples)
Week 9: Stabilizability (definition, state feedback, output feedback, applications)
Week 10: Introduction to optimal control theory (Mayer’s problem, Pontryagin’s Minimum Principle, examples)
Week 11: Linear quadratic regulator theory (introduction, the Riccati equation, perturbed regulators, applications)
Week 12: Time optimal control (general problem, linear systems, bang-bang control, applications)
Books:
1. N.U. Ahmed, Dynamic Systems and Control with Applications, World Scientific, 2006.
2. E.B. Lee and L. Markus, Foundations of Optimal Control Theory, John Wiley, 1967.

COMBINATORIAL OPTIMIZATION
Basic concepts and theorems are presented. Some significant applications are analyzed to illustrate the power and the use of combinatorial optimization. Special attention is paid to algorithmic questions.
One of the main goals of the course is to introduce students to the most important results of combinatorial optimization. A further goal is to discuss applications of these results to particular problems, including problems arising in other areas of mathematics and in practice. Finally, problems related to computer science are considered as well.
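One of the algorithmic staples of the course, Dijkstra's shortest-path method (Week 4 of the schedule below), can be sketched in a few lines. The small example graph is a made-up illustration.

```python
# Dijkstra's algorithm: shortest-path distances from a source node in a
# digraph with nonnegative edge weights, using a binary heap.
import heapq

def dijkstra(graph, source):
    """graph maps node -> list of (neighbor, weight); returns a dict of
    shortest distances from `source` to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue   # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical example graph
g = {"a": [("b", 2), ("c", 5)],
     "b": [("c", 1), ("d", 4)],
     "c": [("d", 1)]}
d = dijkstra(g, "a")
```

On this graph the direct edge a→c of weight 5 is beaten by the path a→b→c of weight 3, and the shortest route to d goes through c with total weight 4.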
The students will learn some basic notions and results of combinatorial optimization. They will learn how to use these tools in solving everyday-life problems as well as in software development.
Detailed contents:
Week 1: Typical optimization problems, complexity of problems, graphs and digraphs
Week 2: Connectivity in graphs and digraphs, spanning trees, cycles and cuts, Eulerian and Hamiltonian graphs
Week 3: Planarity and duality, linear programming, simplex method and new methods
Week 4: Shortest paths, Dijkstra’s method, negative cycles
Week 5: Flows in networks
Week 6: Matchings in bipartite graphs, matching algorithms
Week 7: Matchings in general graphs, Edmonds’ algorithm
Week 8: Matroids, basic notions, systems of axioms, special matroids
Week 9: Greedy algorithm, applications, matroid duality, versions of the greedy algorithm
Week 10: Rank function, union of matroids, duality of matroids
Week 11: Intersection of matroids, algorithmic questions
Week 12: Graph-theoretical applications: edge-disjoint and covering spanning trees, directed cuts
Book: E.L. Lawler, Combinatorial Optimization: Networks and Matroids, Courier Dover Publications, 2001 (or the earlier edition: Holt, Rinehart and Winston, 1976)

OPTIMIZATION IN ECONOMICS
In recent decades mathematical methods have become indispensable in the study of many economic problems, in particular in the optimization of certain real-life phenomena. For instance, J. F. Nash received the Nobel Prize in Economics (1994) for his outstanding contributions to the field via mathematical tools. Our aim here is to emphasize the importance of Mathematics in the study of a broad range of economic problems. Many applications and examples will be discussed in detail.
The main goal of the present course is to introduce students to the most important concepts and fundamental results of Economics by using various tools from Mathematics, such as the calculus of variations, critical point theory, matrix algebra, and even Riemannian-Finsler geometry. Starting with basic economic problems, our final purpose is to describe some recent research directions concerning certain optimization problems in Economics.
The students will learn how to use well-known mathematical tools to treat both theoretical and practical economic problems.
Lecture 1. Introduction and motivation: some basic problems from Economics via optimization.
Lecture 2. Economic applications of one-variable calculus (demand and marginal revenue, elasticity of price, cost functions, profit-maximizing output).
Lecture 3. Economic applications of multivariate calculus (consumer choice theory, production theory, the equation of exchange in Macroeconomics, Pareto efficiency, application of the least-squares method).
Lecture 4. Linear programming (application of the geometric, simplex, and dual simplex methods).
Lecture 5. Linear economic problems (diet problem, Ricardian model of international trade).
Lecture 6. Comparative statics I (equilibrium comparative statics in one and two dimensions; comparative statics with optimization, perfectly competitive firms, Cournot duopoly model).
Lecture 7. Comparative statics II: n variables with and without optimization (equilibrium comparative statics in n dimensions, gross-substitute systems, perfectly competitive firms).
Lecture 8. Comparative statics III: optimization under constraints (Lagrange multipliers, specific utility functions, expenditure minimization problems).
Lecture 10. Nash equilibrium points (existence, location, dynamics, and stability).
Lecture 11. Optimal placement of a deposit between markets: a Riemann-Finsler geometrical approach.
Lecture 12. Economic problems via best approximations.
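The profit-maximization topic of Lecture 2 can be illustrated with a tiny numerical sketch. The inverse demand p(q) = 100 − 2q and cost C(q) = 20q + q² below are made-up figures; the first-order condition "marginal revenue = marginal cost" gives 100 − 4q = 20 + 2q, i.e. q* = 40/3.

```python
# Profit-maximizing output for a hypothetical firm with inverse demand
# p(q) = 100 - 2q and cost C(q) = 20q + q^2.
# Profit pi(q) = (100 - 2q)q - (20q + q^2); the optimum solves MR = MC.

def marginal_profit(q):
    """MR(q) - MC(q) for the hypothetical firm above."""
    return (100 - 4 * q) - (20 + 2 * q)

def bisect_root(f, lo, hi, tol=1e-10):
    """Solve f(q) = 0 by bisection, assuming f(lo) > 0 > f(hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

q_star = bisect_root(marginal_profit, 0.0, 50.0)
```

Bisection on the first-order condition stands in here for the closed-form solution a student would compute by hand; both give q* = 40/3 ≈ 13.33.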
QUANTITATIVE FINANCIAL RISK ANALYSIS
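Before the topic list, a minimal sketch of the historical-simulation approach to Value at Risk (topics 4 and 5 below). The sample of daily returns is hypothetical, and the empirical-quantile convention used is one common choice among several.

```python
# Historical-simulation Value at Risk: the level-alpha VaR is the loss
# threshold exceeded on only a fraction alpha of the observed days.

def historical_var(returns, alpha=0.05):
    """One-day VaR at confidence 1 - alpha from a sample of simple
    returns, reported as a positive loss number (simple convention:
    tolerate int(alpha * n) worse days)."""
    losses = sorted((-r for r in returns), reverse=True)  # worst first
    k = int(alpha * len(losses))
    return losses[min(k, len(losses) - 1)]

# 20 hypothetical daily returns (decimal form)
sample = [0.012, -0.004, 0.003, -0.021, 0.007, -0.009, 0.015, -0.002,
          0.001, -0.030, 0.008, -0.006, 0.004, -0.011, 0.002, 0.009,
          -0.013, 0.005, -0.001, 0.006]
var95 = historical_var(sample, alpha=0.05)
```

With 20 observations and alpha = 0.05, the estimate is the second-worst loss (2.1% here): exactly one day in the sample, i.e. 5% of days, was worse.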
1. Market risk measurement
2. Time-independent fat-tailed distributions of market price fluctuations (FX rates, interest rates, stock and commodity prices)
3. Volatility clusters in stock exchanges, GARCH models
4. Filtered historical simulation
5. Best practice for calculating Value at Risk for market-risk-related problems
6. Credit portfolio risk models
7. Mathematical background of the Basel II regulatory model
8. Granularity adjustment for undiversified idiosyncratic risk
9. CreditRisk+ as a realistic and implementable portfolio model
10. Comparison of the CreditRisk+ and CreditMetrics models
11. Probability of Default (PD) estimation
12. Low-default problem

NONLINEAR OPTIMIZATION
The course provides an introduction to nonlinear optimization problems. Main topics are first- and second-order necessary and sufficient optimality conditions; convex optimization; quasiconvex and pseudoconvex functions; Lagrange duality, the weak and strong duality theorems, and the saddle point theorem; and Newton’s method in optimization, with convergence theorems.
The aim of the course is to encourage students to use nonlinear optimization techniques in the many areas of their interest and to help them gain theoretical and practical knowledge. Students are expected to learn the elementary theorems and proofs of nonlinear optimization and also to use the corresponding tools and commands in Matlab and/or Maple.
At the end of the course students will be able to identify, model, and classify nonlinear optimization problems and to solve some of them using Lagrange multipliers or Newton’s method. Students will have a toolbox of basic nonlinear optimization routines as well as the ability to implement elementary algorithms.
1. Modeling of nonlinear optimization problems – examples, well-known mathematical problems written as nonlinear optimization problems, alternative ways of modeling the same problem
2. First- and second-order necessary and sufficient optimality conditions – with solution of numerical exercises
3. Convex optimization – theorems of convex optimization, applications to inequalities
4. An introduction to generalized convexity: quasiconvex and pseudoconvex functions – with examples and counterexamples
5. Lagrange duality – relation to the primal problem, solution of numerical exercises
6. Duality theorems
7. Saddle point theorem
8. Newton’s method in optimization, theorems of convergence
9. The implementation of Newton’s method in one and two dimensions – in Matlab and/or Maple
10. Newton’s method and fractals
Lecture notes:
• Tamás Rapcsák, Smooth Nonlinear Optimization in Rn, Kluwer Academic Publishers, 1997.
• Pascal Sebah, Xavier Gourdon: Newton’s method and high order iterations
  o http://numbers.computation.free.fr/Constants/Algorithms/newton.html
  o http://numbers.computation.free.fr/Constants/Algorithms/newton.ps
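Item 9 above asks for an implementation of Newton's method in one dimension; a minimal Python sketch is given below (the syllabus uses Matlab/Maple, and the test function f(x) = x⁴ − 3x³ + 2 is a made-up example whose local minimum sits at x = 9/4).

```python
# Newton's method for one-dimensional minimization: iterate
# x <- x - f'(x)/f''(x) to solve the first-order condition f'(x) = 0.

def newton_minimize(df, d2f, x0, tol=1e-12, max_iter=100):
    """Newton iteration on df(x) = 0, starting from x0."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Hypothetical example: f(x) = x^4 - 3x^3 + 2, local minimum at x = 9/4.
df  = lambda x: 4 * x**3 - 9 * x**2     # f'(x)
d2f = lambda x: 12 * x**2 - 18 * x      # f''(x)
x_min = newton_minimize(df, d2f, x0=3.0)
```

Starting from x₀ = 3 the iterates 3 → 2.5 → 2.2917 → … converge quadratically to 9/4; a starting point too close to the degenerate stationary point x = 0 (where f'' vanishes) would break the iteration, which is exactly the kind of issue the convergence theorems of item 8 address.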