Abstract
In this thesis we study two problems concerning probability. The first is a stochastic control problem, which essentially amounts to finding an optimal probability measure that optimizes a reward functional defined on probability measures. The second is the approximation of solutions to the Boltzmann equation; thanks to conservation of mass, the solution can be regarded as a family of probability measures indexed by time.

In the first part, we prove a dynamic programming principle for stochastic optimal control problems with expectation constraints via a measurable selection approach. Since state constraints, drawdown constraints, target constraints, quantile hedging and floor constraints can all be reformulated as expectation constraints, we apply our results to establish the corresponding dynamic programming principles for these five classes of stochastic control problems in a continuous but non-Markovian setting.

In the second part, in order to solve the Boltzmann equation numerically, we propose a new model equation that approximates the Boltzmann equation without angular cutoff. The approximate equation combines the Boltzmann collision operator with angular cutoff and the Landau collision operator. As a first step, we establish well-posedness for the approximate equation. We then derive an error estimate between the solutions of the approximate equation and of the original equation. Compared with the standard angular cutoff approximation, our method achieves a higher order of accuracy.
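For orientation, the Boltzmann equation discussed above is standardly written as follows; this is a sketch of the textbook form for the density f(t, x, v), not necessarily the exact setting of the thesis:

```latex
% Spatially inhomogeneous Boltzmann equation:
\partial_t f + v \cdot \nabla_x f = Q(f, f),
% with the bilinear collision operator
Q(f, f)(v) = \int_{\mathbb{R}^3} \int_{\mathbb{S}^2}
  B(v - v_*, \sigma)\,\bigl( f(v')\, f(v_*') - f(v)\, f(v_*) \bigr)
  \, d\sigma \, dv_* ,
% where (v', v_*') denote post-collisional velocities.
```

"Without angular cutoff" refers to the collision kernel B carrying a non-integrable angular singularity; the cutoff approximation truncates this singularity, while the grazing-collision part is what the Landau operator models. Since the collision operator conserves mass, a solution with unit initial mass stays a probability density for all times, which is the sense in which the solution is a family of probability measures indexed by time.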
| Date of Award | 19 Jul 2017 |
|---|---|
| Original language | English |
| Supervisor | Tieyong ZENG (Supervisor) |
User-Defined Keywords
- Boltzmann, Ludwig, 1844-1906
- Probability measures
- Stochastic control theory
- Transport theory