Posts

2014-09-23:

2014-08-17: Feynman's missing method for third-orders?

2014-07-31: CIA spies even on congress

2014-07-16: Rehm on vaccines

2014-06-21: Kurtosis, 4th order diffusion, and wave speed

2014-06-20: Random dispersal speeds invasions

2014-05-06: Preservation of information asymmetry in Academia

2014-04-16: Dual numbers are really just calculus infinitesimals

2014-04-14: More on fairer markets

2014-03-18: It's a mad mad mad mad prisoner's dilemma

2014-03-05: Integration techniques: Fourier--Laplace Commutation

2014-02-25: Fiber-bundles for root-polishing in two dimensions

2014-02-17: Is life a simulation or a dream?

2014-01-30: PSU should be infosocialist

2014-01-12: The dark house of math

2014-01-11: Inconsistencies hinder pylab adoption

2013-12-24: Cuvier and the birth of extinction

2013-12-17: Risk Resonance

2013-12-15: The cult of the Levy flight

2013-12-09: 2013 Flu Shots at PSU

2013-12-02: Amazon sucker-punches 60 minutes

2013-11-26: Zombies are REAL, Dr. Tyson!

2013-11-22: Crying wolf over synthetic biology?

2013-11-21: Tilting Drake's Equation

2013-11-18: Why $1^\infty \neq 1$

2013-11-15: Adobe leaks of PSU data + NSA success accounting

2013-11-14: 60 Minutes misreport on Benghazi

2013-11-11: Making fairer trading markets

2013-11-10: L'Hopital's Rule for Multidimensional Systems

2013-11-09: Using infinitesimals in vector calculus

2013-11-08: Functional Calculus

2013-11-03: Elementary mathematical theory of the health poverty trap

2013-11-02: Proof of the area of a circle using elementary methods

More stochastic than?

In some current research, I need a way to compare distributions and talk about one probability measure being more stochastic than another. After a little effort (wheel re-invention and a directed literature search), I happened on the very cool idea of 2nd-order stochastic dominance.

The basic idea is that, given two measures with the same expected value, measures that exhibit more variation around the expectation are dominated by measures that exhibit less. So singleton \(\delta\)-functions stochastically dominate all other measures.

The actual definition of 2nd-order stochastic dominance seems weird at first glance, though -- it's based on integrals of cumulative distribution functions (CDFs). Integrating outward from the expected value (0, WLOG), measure \(f\) dominates measure \(g\) if and only if

\[ \forall x, \quad \int_{0}^{x} \int_{0}^{v} f(u) d u d v \geq \int_{0}^{x} \int_{0}^{v} g(u) d u d v \]
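To see the definition in action, here is a rough numerical check (my own sketch in Python, using numpy and scipy; the helper name integrated_cdf is mine, not standard): two zero-mean normals, one tight and one wide, with the tighter one dominating.

```python
import numpy as np
from scipy import stats
from scipy.integrate import cumulative_trapezoid

def integrated_cdf(pdf, xs):
    """I(x) = int_0^x int_0^v pdf(u) du dv on the grid xs (with xs[0] == 0)."""
    inner = cumulative_trapezoid(pdf(xs), xs, initial=0.0)  # int_0^v pdf(u) du
    return cumulative_trapezoid(inner, xs, initial=0.0)     # int_0^x (...) dv

xs = np.linspace(0.0, 4.0, 2001)
I_tight = integrated_cdf(stats.norm(0.0, 0.5).pdf, xs)  # less variation
I_wide  = integrated_cdf(stats.norm(0.0, 2.0).pdf, xs)  # more variation

# The tighter measure dominates: its integrated CDF is at least as large
# at every x (both measures are symmetric, so x < 0 mirrors x > 0).
assert np.all(I_tight >= I_wide - 1e-12)
```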

Now, just looking at this doesn't give me much intuition. So to get a better handle on it, it's useful to have an example we can actually study. One of the easiest measures to which we can apply this is the logistic measure. If the probability density is

\[p(x;a)=\frac{1}{a \left(e^{x/2a} + e^{-x/2a}\right)^2}\]

then the lowered CDF is

\[P(x; a) := \int_0^{x} p(u; a)\, du = \frac{e^{x/2a} - e^{-x/2a}}{2 \left(e^{x/2a} + e^{-x/2a}\right)}\]
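As a quick sanity check (my own sketch, not part of the original post), we can confirm numerically that this density integrates to 1 and that the lowered CDF above matches direct integration; here P is written as \(\tanh(x/2a)/2\), which is the same expression.

```python
import numpy as np
from scipy.integrate import quad

def p(x, a):
    # logistic-type density from above
    return 1.0 / (a * (np.exp(x / (2 * a)) + np.exp(-x / (2 * a))) ** 2)

def P(x, a):
    # lowered CDF, written as tanh(x/2a)/2 (equivalent to the closed form above)
    return np.tanh(x / (2 * a)) / 2.0

a = 1.0
total, _ = quad(p, -np.inf, np.inf, args=(a,))
assert np.isclose(total, 1.0)

for x in (0.5, 1.0, 3.0):
    P_num, _ = quad(p, 0.0, x, args=(a,))
    assert np.isclose(P_num, P(x, a))
```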

and the integrated CDF, which we need for the comparison, is

\[I(x; a) := \int_{0}^{x} P(v; a)\, dv = a \ln\left(\frac{e^{-x/2a} + e^{x/2a}}{2} \right)\]
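The same kind of check works for the integrated CDF (again just a sketch of mine; \(a \ln\cosh(x/2a)\) is an equivalent rewrite of the expression above):

```python
import numpy as np
from scipy.integrate import quad

def P(x, a):
    return np.tanh(x / (2 * a)) / 2.0          # lowered CDF

def I(x, a):
    return a * np.log(np.cosh(x / (2 * a)))    # integrated lowered CDF

a = 1.0
for x in (0.5, 1.0, 3.0):
    I_num, _ = quad(P, 0.0, x, args=(a,))
    assert np.isclose(I_num, I(x, a))
```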

So we can plot a few examples (below). What we see is that as \(a\) increases and the measure gains more variation, the CDF integrated from the mean is lower, because there is less probability mass near the mean. The delta-function, whose integrated CDF is the absolute value \(|x|/2\), dominates everything.

[Figure: integrated CDFs \(I(x; 0.5)\), \(I(x; 1)\), and \(I(x; 2)\) plotted alongside \(|x|/2\) for \(-3 \le x \le 3\).]
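The original figure was produced with gnuplot; here is a rough matplotlib reconstruction (my own sketch: the axis range and the values of \(a\) come from the figure's legend, the styling is guessed):

```python
import numpy as np
import matplotlib.pyplot as plt

def I(x, a):
    # integrated lowered CDF of the logistic measure, as derived above
    return a * np.log((np.exp(-x / (2 * a)) + np.exp(x / (2 * a))) / 2.0)

x = np.linspace(-3, 3, 601)
plt.plot(x, np.abs(x) / 2, label="|x|/2")  # delta-function limit
for a in (0.5, 1.0, 2.0):
    plt.plot(x, I(x, a), label="I(x, %g)" % a)
plt.xlabel("x")
plt.legend()
plt.show()
```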