In this thesis, we investigate two-dimensional singular stochastic control problems motivated by different applications in economics and finance. The main interest lies in characterizing the optimal controls and, in particular, the corresponding free boundaries. We investigate three settings, in which the two-dimensional nature of the problem arises in different ways. In Section 2, we propose and solve a dividend problem with capital injections over a finite time horizon. The surplus process of a firm is assumed to follow stochastic dynamics, and due to the finite time horizon, time itself becomes a state variable. In Section 3, we study a control problem concerning the inventory of a firm. We assume that the demand for a good follows stochastic dynamics. In addition, we assume that the drift and volatility parameters are Markov-modulated, representing different scenarios of the economy. Finally, in Section 4, we study a control problem with interconnected dynamics. This problem is motivated by various applications such as, for example, inflation control. We consider a process with stochastic dynamics (e.g.\ the inflation rate) whose drift can be controlled. In this model, both the process and its drift are state variables, and their dynamics are interconnected.

In all these applications, we characterize the free boundaries by combining and extending different techniques. In particular, in Section 2, we extend a result by El Karoui and Karatzas (1989), which connects a singular stochastic control problem to a problem of optimal stopping. Hence, we can study the time-dependent free boundary of the optimal stopping problem. Moreover, the optimal dividend strategy can be expressed as the solution to a Skorokhod reflection problem at the free boundary. In Section 3, an application of the dynamic programming principle is used to derive a system of non-linear equations characterizing the constant free boundaries. This system is solved numerically in order to provide a comparative statics analysis. Finally, in Section 4, we derive the structure of the value function by exploiting the connection of the singular stochastic control problem to a Dynkin game of optimal stopping. Moreover, by characterizing the value function as a viscosity solution to the corresponding dynamic programming equation, we derive a second-order smooth-fit property as well as a necessary system of non-linear functional equations for the free boundaries. Furthermore, in a particular modification of the model, these functional equations can be used to derive a system of first-order ordinary differential equations, which can be solved explicitly.
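In its classical one-dimensional form, the connection exploited in Section 2 can be sketched schematically as follows (the notation here is purely illustrative and the statement is given under standard regularity assumptions, not as the precise result used in the thesis). Let $v(x)$ denote the value function of a singular control problem for a controlled diffusion
\[
X^{\xi}_t = x + \int_0^t \mu(X^{\xi}_s)\,\mathrm{d}s + \int_0^t \sigma(X^{\xi}_s)\,\mathrm{d}W_s - \xi_t,
\]
where $\xi$ is a nondecreasing control process. Then, under suitable conditions, the spatial derivative of $v$ identifies with the value of an associated optimal stopping problem,
\[
\frac{\partial v}{\partial x}(x) = \sup_{\tau} \mathbb{E}_x\!\left[ e^{-r\tau}\, G\!\left(X_{\tau}\right) \right],
\]
for an appropriate payoff function $G$, so that the free boundary of the stopping problem determines the region in which the optimal control acts, namely by reflecting the state process at that boundary.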