A Monte Carlo simulation is a computational technique that relies on repeated random sampling; one common application is optimization.

Suppose you are trying to find the minimum value over a search space, but it is too computationally intensive to check every possible point or to find a closed-form solution. In a simple Monte Carlo search, you start by selecting a random point and evaluating it. You then select a second random point and evaluate it. If the second point's value is lower than the first's, you select a third point by continuing in the same direction as the move from point one to point two. If the second point's value is higher, you turn around and move in the opposite direction from the previous low.
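
Here is a minimal sketch of that procedure in Python. The function name, the 1-D interval, and the `step_size` and `steps` parameters are illustrative assumptions, not something specified in the text.

```python
import random

def simple_mc_minimize(f, lo, hi, steps=1000, step_size=0.1):
    """Toy directional random search over [lo, hi], as described above.

    Start at a random point, probe a second random point, and then keep
    stepping in whichever direction last produced a lower value.
    """
    best_x = random.uniform(lo, hi)
    best_val = f(best_x)

    # Probe a second random point to establish an initial direction.
    x = random.uniform(lo, hi)
    direction = 1.0 if x >= best_x else -1.0

    for _ in range(steps):
        val = f(x)
        if val < best_val:
            # Lower value: record the new low and keep the same direction.
            best_x, best_val = x, val
        else:
            # Higher value: turn around relative to the previous low.
            direction = -direction
            x = best_x
        # Take the next step, clamped to the search interval.
        x = min(hi, max(lo, x + direction * step_size))

    return best_x, best_val

# Example: a simple parabola with its minimum at x = 2.
print(simple_mc_minimize(lambda x: (x - 2) ** 2, lo=-10, hi=10))
```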

This method guarantees that you will eventually settle at a local minimum (a point whose adjacent points are all greater than the point itself). However, there are several trade-offs. First, it does not guarantee that you will find the global minimum. Consider a curve with two valleys. If the algorithm reaches the bottom of the shallower valley, it will keep testing new points around that nadir and reject them, since they are all higher than the bottom itself. After rejecting points in each direction, it concludes that it has found the true bottom. Second, it may still require a large number of evaluations to converge on even a local minimum.
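
To make the two-valley picture concrete, here is a hypothetical double-well curve and a purely greedy local search (a simplified stand-in for the scheme above, not the author's exact method): started on the right, it settles into the shallower valley and never reaches the true minimum on the left.

```python
def two_valley(x):
    # A double-well curve: a deep valley near x = -1 and a shallower one near x = +1.
    return (x * x - 1) ** 2 + 0.3 * x

def greedy_descent(f, x, step=0.01, iters=10_000):
    """Purely greedy local search: move to a neighbouring point only if it is lower."""
    for _ in range(iters):
        left, right = f(x - step), f(x + step)
        if left < f(x) and left <= right:
            x -= step
        elif right < f(x):
            x += step
        else:
            break  # both neighbours are higher: stuck at a local minimum
    return x, f(x)

print(greedy_descent(two_valley, x=1.5))   # settles near x ~ 0.96 (shallow valley)
print(greedy_descent(two_valley, x=-1.5))  # settles near x ~ -1.03 (deeper, global valley)
```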

There are many extensions to the Monte Carlo method that attempt to address these problems. In the Metropolis algorithm, the simulation accepts a worse candidate point with a probability p that shrinks as the increase in value grows. So if point 2's value is much greater than point 1's, the simulation will still accept it, but only with some low probability p. That means it is possible to escape a local minimum with enough simulation, though the search will likely fail to escape if the local minimum is much deeper than the surrounding area. Moderating that "exploration vs. exploitation" trade-off is a key focus for these algorithms. Highly exploitative models are faster but more likely to fall into the rut of a local minimum; highly exploratory ones are slower but more likely to find the global minimum. Other extensions, such as adaptive Markov chain Monte Carlo methods, attempt to determine the update step size algorithmically (that is, once you have a candidate point, how you choose the next point).
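
Below is a minimal sketch of that Metropolis-style acceptance rule, applied to the same hypothetical two-valley curve. The fixed `temperature` parameter, which controls how willingly the search accepts uphill moves, is an assumption for illustration (the text does not say how p is set): larger values make the search more exploratory, smaller values more exploitative.

```python
import math
import random

def metropolis_minimize(f, x0, steps=10_000, step_size=0.1, temperature=0.1):
    """Metropolis-style search: always accept improvements, and accept a worse
    candidate with probability exp(-(increase) / temperature), so the acceptance
    probability shrinks as the candidate gets worse.
    """
    x, val = x0, f(x0)
    best_x, best_val = x, val

    for _ in range(steps):
        candidate = x + random.gauss(0.0, step_size)   # propose a nearby point
        cand_val = f(candidate)
        worse_by = cand_val - val

        if worse_by <= 0 or random.random() < math.exp(-worse_by / temperature):
            x, val = candidate, cand_val               # accept (possibly uphill) move
            if val < best_val:
                best_x, best_val = x, val              # track the best point seen

    return best_x, best_val

# Started in the shallow right-hand valley, the occasional uphill acceptance
# usually lets the search cross the barrier and find the deeper valley.
print(metropolis_minimize(lambda x: (x * x - 1) ** 2 + 0.3 * x, x0=1.0))
```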
