Lecture 14: When's that meteor arriving?




How many meteors strike the earth each year? How long do I have to wait to see one? Every day, around \(100,000\) kilograms of meteoroids strike the earth, mostly in the form of dust. That might seem like a lot, but not when compared to the \(6 \times 10^{24}\) kilogram mass of the earth. And most of these meteoroids are so small that we won't see them. But there are also a few large meteors (called bolides) that enter our atmosphere rather regularly and create impressive fireballs all over the world (see below). But predicting the arrival of these bolides is hard -- rather than being a regular, deterministic process like the ones we've described with Newton's laws or pharmacokinetics, the arrival of bolides is random.

This problem is a bit puzzling to address at first. Intuitively, we expect that there is some average rate \(\lambda\) at which bolides arrive, so we might expect to wait \(1/\lambda\) days between arrivals. But the process itself seems largely mysterious, and sometimes the wait times will be longer or shorter than this average prediction. Sure, each bolide's motion is predictable under Newton's laws, but there are so many of them, with totally unknown starting conditions, and these rocks have been floating around our solar system for billions of years -- why do they hit earth now?

Still, as we'll see, we can use probability to model bolide arrivals. We'll use this as the starting point for our studies today. Along the way, we'll introduce the idea of asymptotic approximation.

Poisson counting

One reasonable starting place is the simple probability distributions we already know about. Suppose we break each day down into many successive time intervals (minutes, say), and then say that there is a small probability \(p\) that a bolide hits in each interval. We could use the binomial distribution, but \(n\) will be very large and \(p\) will be very small for each trial. In terms of its probability generating function,

\[\hat{x}(s, n) = \sum _ {k=0}^{n} \frac{n!}{k! (n-k)!} p^k (1-p)^{n-k} s^k\]

But factorials are hard to compute with -- \(24! \approx 6 \times 10^{23}\), and we'd like to use hundreds if not thousands of intervals. Some simplifications would be useful.
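As a quick numerical sanity check, here is a minimal sketch of the binomial generating function above, evaluated term by term; the interval count and per-interval probability below are made-up illustration values, not data from this lecture.

```python
# A minimal sketch of the binomial pgf above. The interval count n and the
# per-interval probability p are hypothetical illustration values.
from math import comb, factorial

print(factorial(24))       # 620448401733239439360000, about 6e23 as quoted

def binomial_pgf(s, n, p):
    """sum_k  n!/(k!(n-k)!) p^k (1-p)^(n-k) s^k, evaluated term by term."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) * s**k
               for k in range(n + 1))

n, p = 144, 0.02                  # 144 ten-minute windows per day, made-up p
print(binomial_pgf(1.0, n, p))    # = 1.0: the probabilities sum to one
print(binomial_pgf(0.0, n, p))    # = (1-p)^n: chance of zero bolides all day
```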

Poisson derivation

In order to simplify our calculation, we need a way to approximate these factorials. Abraham de Moivre (1667 - 1754), one of the great but often forgotten mathematicians, discovered a very useful formula that we now call Stirling's approximation.

\[n! \sim \sqrt{2 \pi n} \left( \frac{n}{e} \right)^n\] This is what we call an asymptotic approximation. Two functions \(f(n)\) and \(g(n)\) are asymptotically equivalent if and only if \[\lim _ {n\rightarrow \infty} \frac{f(n)}{g(n)} = 1.\] Asymptotic equivalence is not the same as equality. For example, if both \(f\) and \(g\) diverge for large \(n\), then \(f(n) \sim g(n)\) also implies \(f(n) + 7 \sim g(n)\) -- the added constant doesn't change the limit of the ratios. The art of asymptotic approximation is a deep and powerful field started by Poincaré and Stieltjes, with important developments by Watson, Erdelyi, Nayfeh, and Kevorkian, among others. But for now, let's just take Stirling's formula and see what we can do with it.
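Here is a small numerical sketch of what asymptotic equivalence means in practice: the ratio of \(n!\) to Stirling's formula drifts toward one as \(n\) grows (the particular values of \(n\) below are arbitrary).

```python
# A quick check of Stirling's approximation: the ratio
# n! / (sqrt(2*pi*n) * (n/e)^n) should approach 1 as n grows.
from math import factorial, sqrt, pi, e

for n in (5, 10, 50, 100):
    stirling = sqrt(2 * pi * n) * (n / e) ** n
    print(n, factorial(n) / stirling)
```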

According to Stirling's formula (plus a couple of standard limits), when \(n\) is large and \(k\) is held fixed, \[ \frac{n!}{(n-k)!} \sim n^k,\] \[\left(1-\frac{\lambda}{n}\right)^{n} \sim e^{-\lambda},\] \[\frac{ (\lambda/n)^k }{(1-\lambda/n)^{k}} \sim (\lambda/n)^k.\] And so, if we set \(p = \lambda/n\) and make the appropriate substitutions into the binomial distribution's probability generating function, we find \[\hat{x}(s, n) \sim \sum _ {k=0}^{\infty} \frac{ \lambda^k e^{-\lambda} }{k!} s^{k} = e^{\lambda (s-1)}, \quad\text{where}\; \lambda = p n \;\text{is held fixed as}\; n \rightarrow \infty,\; p \rightarrow 0.\] This is what we call the Poisson distribution (Poisson rediscovered this formula later, after de Moivre's work had been forgotten). With a little calculation, we can show that the parameter \(\lambda\) is both the mean (the expected number of impacts per unit time) AND the variance of a Poisson distribution.
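To see this limit in action, here's a sketch that holds \(\lambda = np\) fixed, lets \(n\) grow, and compares the binomial probabilities with \(\lambda^k e^{-\lambda}/k!\); the value of \(\lambda\) is just an illustration.

```python
# A sketch of the Poisson limit: hold lambda = n*p fixed, let n grow, and
# compare the binomial probabilities to lambda^k e^{-lambda} / k!.
from math import comb, exp, factorial

lam = 2.0                                    # illustrative mean count per day

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    return lam**k * exp(-lam) / factorial(k)

for n in (10, 100, 1000):
    p = lam / n
    print(n, [round(binom_pmf(k, n, p), 4) for k in range(4)])
print("poisson", [round(poisson_pmf(k, lam), 4) for k in range(4)])
```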

One of the special properties of the Poisson distribution is that it is additive -- if you take two independent Poisson random variables and add them together, you get a new random variable that is also Poisson-distributed, with rate equal to the sum of the two rates.
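To see why, recall that the probability generating function of a sum of two independent counts is the product of their individual generating functions, so \[e^{\lambda_1 (s-1)} \, e^{\lambda_2 (s-1)} = e^{(\lambda_1 + \lambda_2)(s-1)},\] which is again a Poisson generating function, now with rate \(\lambda_1 + \lambda_2\).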

The replacement issue

In reality, bolides are sampled without replacement, while the binomial distribution assumes sampling with replacement. We overlooked this issue above because it turns out not to matter, but that's still something that needs to be convincingly argued. The hypergeometric distribution is a discrete probability distribution that describes the probability of \(k\) successes in \(n\) draws, without replacement, from an urn of \(L\) good stones and \(M\) bad stones, \[\hat{x}(s) = \sum _ {k=0}^{n} \frac{ {} _ L C _ k \; {} _ {M} C _ {n-k} }{ {} _ {M+L} C _ n } s^k\] Once something happens, it can never happen identically again (thus reducing the chance of a future success). But the hypergeometric distribution is hard to make sense of, with all its factorials floating around, and numerical computations are even worse. However, if the size of the urn is large compared to the number of draws (about 20 times greater or more), then the hypergeometric distribution is closely approximated by the binomial distribution, and hence by the Poisson distribution.
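Here's a rough numerical sketch of that claim, comparing the hypergeometric and binomial probabilities when the urn is much larger than the number of draws; the urn sizes below are arbitrary illustrations.

```python
# A sketch of the "replacement doesn't matter" claim: when the urn (L + M)
# is much larger than the number of draws n, the hypergeometric and binomial
# probabilities nearly agree. Urn sizes here are made-up illustrations.
from math import comb

def hypergeom_pmf(k, L, M, n):
    return comb(L, k) * comb(M, n - k) / comb(L + M, n)

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

L, M, n = 300, 5700, 20          # urn of 6000 stones, only twenty draws
p = L / (L + M)                  # matching per-draw success probability
for k in range(4):
    print(k, round(hypergeom_pmf(k, L, M, n), 4), round(binom_pmf(k, n, p), 4))
```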

Exponentially distributed wait-times

Simple assumptions are always good places to start with a modelling problem. Perhaps the simplest assumption here is that there is no time-preference for the arrival of bolides -- they are equally likely to arrive at any time. Let's suppose we consider a small window of time \(\Delta t\), and that we know the probability of observing a bolide within a window of length \(\Delta t\) is given by a smooth function \(R(\Delta t)\) where \(R(0) = 0\) and \(R(\infty) = 1\). Now, let's ask: what's the probability of observing our first bolide in the n'th time window \([(n-1) \Delta t, n \Delta t)\) from now? Well, that would mean there were no arrivals in the previous \(n-1\) time windows, but there is an arrival in this time window. So the probability that the first arrival is in time window \(n\) is

\[(1 - R(\Delta t))^{n-1} R(\Delta t).\]

If \(R(\Delta t) = \lambda \Delta t + O(\Delta t^2)\) for small \(\Delta t\), and we replace \(\Delta t\) with the infinitesimal time \(dt\), the probability of waiting exactly \(t\) time units to see the first bolide is \[( 1 - \lambda dt)^{t/dt - 1} \lambda dt \sim \lambda e^{-\lambda t} dt.\]
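A quick numerical sketch of this limit, with the rate and wait time chosen only for illustration:

```python
# Shrink the time step dt and watch the discrete survival factor
# (1 - lambda*dt)^(t/dt - 1) approach e^{-lambda*t}.
from math import exp

lam, t = 0.5, 3.0                       # illustrative rate and wait time
for steps in (10, 100, 10_000):
    dt = t / steps
    print(steps, (1 - lam * dt) ** (steps - 1), "->", exp(-lam * t))
```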

This is the very common exponential distribution, with expected value \(1/\lambda\). Integrating the density from time \(t\) out to infinity (or, equivalently, subtracting its integral from \(0\) to \(t\) from one), the probability of not seeing a bolide after a wait of length \(t\) is \[e^{-\lambda t}.\] The parameter \(\lambda\) is called the "hazard rate" in reliability theory.
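Finally, here's a small simulation sketch tying the two views together: drawing exponential waits with rate \(\lambda\) should give a mean wait of about \(1/\lambda\) and about \(\lambda\) arrivals per unit time. The rate and time horizon below are arbitrary choices for illustration.

```python
# Simulate bolide arrivals by stacking exponential waits, then check that
# (a) the mean wait is about 1/lambda and
# (b) the number of arrivals per unit time is about lambda.
import random

random.seed(1)
lam, horizon = 2.0, 10_000.0            # illustrative rate and total time

waits, t = [], 0.0
while t < horizon:
    w = random.expovariate(lam)         # exponential wait with mean 1/lambda
    waits.append(w)
    t += w

print("mean wait            ", sum(waits) / len(waits), "~", 1 / lam)
print("arrivals per unit time", len(waits) / horizon, "~", lam)
```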