CS(3) Monte Carlo Convergence and the Law of Large Numbers
Almost Sure Convergence

Concept of Almost Sure Convergence and the Law of Large Numbers
The Law of Large Numbers (LLN) is a key principle that explains why the Monte Carlo method works. It states that as the number of samples increases, the sample average converges "almost surely", that is, with probability 1, to the true expected value of the function.
In Monte Carlo integration, this means that our estimate of an integral becomes more accurate as we increase the number of samples. Almost sure convergence guarantees that, although individual samples are random, the sample average gets arbitrarily close to the true value as the sample size grows.
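To see this in action before turning to integration, here is a minimal R sketch (not from the original article) that tracks the running mean of draws from a Uniform(0, 1) distribution; by the LLN it should settle near the true mean of 0.5. The seed value and sample size are arbitrary choices for illustration.
# Minimal LLN illustration: running mean of Uniform(0, 1) draws
set.seed(42)                                # arbitrary seed for reproducibility
x <- runif(10000, 0, 1)                     # 10,000 draws from Uniform(0, 1)
running_mean <- cumsum(x) / seq_along(x)    # sample mean after each draw
plot(running_mean, type = "l", col = "blue", lwd = 2,
     xlab = "Number of Samples", ylab = "Running Mean",
     main = "Running Mean of Uniform(0, 1) Samples")
abline(h = 0.5, col = "red", lwd = 2, lty = 2)   # true expected value E[X] = 0.5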
Observing the convergence with increasing samples
Let's return to the integral of e^(-x) over the interval [0, 1], the example we saw in article 1 of this series. We will compute the Monte Carlo estimate of this integral for a gradually increasing number of samples and observe how the estimate approaches the exact value.
The integral we want to estimate is:
∫ e^(-x) dx from 0 to 1
The exact value of this integral is (you can use a math solver to find it):
1 - e^(-1) ≈ 0.6321
# Define the exact value of the integral
exact_value <- 1 - exp(-1)
# Define the number of samples and initialize vector to store estimates
n <- 10000
estimates <- numeric(n)
# Generate n random samples and calculate estimates cumulatively
for (i in 1:n) {
  x <- runif(i, 0, 1)            # Generate i random values between 0 and 1
  estimates[i] <- mean(exp(-x))  # Calculate mean for current sample size
}
# Plot the convergence of the estimates
plot(estimates, type = "l", col = "blue", lwd = 2,
     xlab = "Number of Samples", ylab = "Estimate of Integral",
     main = "Convergence of Integral ∫ e^(-x) dx from 0 to 1")
# Add horizontal line for the exact value
abline(h = exact_value, col = "red", lwd = 2, lty = 2)
# Add a legend
legend("topright", legend = c("Monte Carlo Estimate", "Exact Value"),
       col = c("blue", "red"), lty = c(1, 2), lwd = 2)
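One design note on the loop above: it redraws a fresh batch of i samples at every iteration, so the total number of random draws grows quadratically with n. A common alternative, sketched below as an assumption about what we want to visualize rather than as the article's original code, draws the n samples once and tracks the cumulative mean. This traces a single sample path, which matches the way the LLN is usually stated: one sequence of samples, one running average.
# Sketch of a vectorized alternative: one sample path, cumulative running mean
set.seed(123)                          # arbitrary seed for reproducibility
n <- 10000
x <- runif(n, 0, 1)                    # draw all n samples once
estimates_path <- cumsum(exp(-x)) / (1:n)   # running estimate after each sample
plot(estimates_path, type = "l", col = "blue", lwd = 2,
     xlab = "Number of Samples", ylab = "Estimate of Integral",
     main = "Convergence Along a Single Sample Path")
abline(h = 1 - exp(-1), col = "red", lwd = 2, lty = 2)   # exact value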