III. Stochastic Optimization in Continuous Time
The optimization principles set forth above extend directly
to the stochastic case. The main difference is that to do
continuous-time analysis, we will have to think about the right
way to model and analyze uncertainty that evolves continuously
with time. To understand the elements of continuous-time
stochastic processes requires a bit of investment, but there is a
large payoff in terms of the analytic simplicity that results.
Let’s get our bearings by looking first at a discrete-time
stochastic model.[11] Imagine now that the decision maker
maximizes the von Neumann-Morgenstern expected-utility indicator

(19)    $E_0 \sum_{t=0}^{\infty} e^{-\delta t}\, U[c(t), k(t)]\, h$,

where $E_t X$ is the expected value of random variable X conditional on all information available up to (and including) time t.[12]
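To make the objective in (19) concrete, here is a minimal numerical sketch (not from the original text): given simulated sample paths of c and k observed at t = 0, h, 2h, ..., it computes the discounted utility sum along each path and averages across paths to approximate the conditional expectation $E_0$. The horizon, step size h, discount rate delta, the log utility function, and the made-up paths are all illustrative assumptions.

```python
import numpy as np

# Sketch (illustrative assumptions, not from the original text):
# approximate (19), E_0 sum_{t=0,h,2h,...} e^{-delta*t} U[c(t),k(t)] h,
# by averaging the discounted sum over simulated sample paths of (c, k).

def discounted_utility(c_path, k_path, h, delta, U):
    """Discounted utility sum along one path sampled at t = 0, h, 2h, ..."""
    t = h * np.arange(len(c_path))
    return np.sum(np.exp(-delta * t) * U(c_path, k_path) * h)

def expected_utility(c_paths, k_paths, h, delta, U):
    """Monte Carlo estimate of E_0: average the discounted sums over paths."""
    vals = [discounted_utility(c, k, h, delta, U)
            for c, k in zip(c_paths, k_paths)]
    return np.mean(vals)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h, delta, T = 0.01, 0.05, 10.0          # step, discount rate, horizon (assumed)
    n_steps = int(T / h)
    # Made-up consumption paths; k is held flat purely for illustration.
    c_paths = 1.0 + 0.1 * np.sqrt(h) * rng.standard_normal((200, n_steps)).cumsum(axis=1)
    k_paths = np.ones_like(c_paths)
    U = lambda c, k: np.log(np.clip(c, 1e-8, None))   # illustrative utility function
    print(expected_utility(c_paths, k_paths, h, delta, U))
```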
Maximization is to be carried out subject to the constraint that

(20)    $k(t+h) - k(t) = G[c(t), k(t), q(t+h), h]$,    k(0) given,

where $\{q(t)\}_{t=-\infty}^{\infty}$ is a sequence of exogenous random variables with a known joint distribution, and such that only realizations up to and including q(t) are known at time t. For simplicity I will assume that the q process is first-order Markov, that is, that the joint distribution of {q(t+h), q(t+2h), ...}, conditional on information available at time t, depends only on q(t).
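As a purely illustrative sketch (my assumptions, not the paper's specification), the following simulates one path of the state under a constraint of the form (20). The shock q is modeled as a Gaussian AR(1) process, which is first-order Markov, and both G and the consumption rule c(k) are hypothetical placeholders. The control at time t is set before q(t+h) is drawn, so it respects the information structure just described.

```python
import numpy as np

# Sketch (assumed functional forms): simulate the constraint (20),
#   k(t+h) - k(t) = G[c(t), k(t), q(t+h), h],
# with a first-order Markov (AR(1)) shock q and illustrative choices of
# G and the consumption rule c(k).

def simulate_path(k0, q0, h, n_steps, rng):
    rho, sigma_q = 0.9, 0.2          # AR(1) persistence and innovation scale (assumed)
    A, alpha, sigma = 1.0, 0.3, 0.1  # technology and shock loading in G (assumed)
    k, q = k0, q0
    k_path, c_path = [k], []
    for _ in range(n_steps):
        c = 0.5 * A * k**alpha                      # illustrative consumption rule c(k(t))
        # Draw q(t+h) AFTER c(t) is chosen: the control never "peeks" ahead.
        q_next = rho**h * q + sigma_q * np.sqrt(h) * rng.standard_normal()
        # Assumed G: a deterministic flow of order h plus a shock increment.
        G = (A * k**alpha - c) * h + sigma * k * (q_next - q)
        k = k + G                                   # the constraint (20)
        q = q_next
        c_path.append(c)
        k_path.append(k)
    return np.array(c_path), np.array(k_path)

rng = np.random.default_rng(1)
c_path, k_path = simulate_path(k0=1.0, q0=0.0, h=0.01, n_steps=1000, rng=rng)
print(k_path[-1])
```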
[11] An encyclopedic reference on discrete-time dynamic programming and its applications in economics is Nancy L. Stokey and Robert E. Lucas, Jr. (with Edward C. Prescott), Recursive Methods in Economic Dynamics (Cambridge, Mass.: Harvard University Press, 1989). The volume pays special attention to the foundations of stochastic models.
[12] Preferences less restrictive than those delimited by the von Neumann-Morgenstern axioms have been proposed, and can be handled by methods analogous to those sketched below.