Topics
Further Reading
One explanation for the appearance of scaling laws in experimental data is dimensionality. When we build equations and models, we represent state with a set of variables. The variables we include in our models have more meaning than just their numerical values. There is extra information attached that reminds us of how the variables should be interpreted and manipulated. If we measure a football field, we don't say it is 100 long; we say a football field is 100 yards long. Similarly, the speed of light is 300 megameters per second. In physical chemistry, rather than describing the motion of every individual particle making up the gas, we might represent the state of a gas with numbers for its temperature in degrees Celsius and pressure in Pascals.
In each of these cases, we state the number along with its units (yards, meters per second, degrees Celsius, Pascals). The units communicate both the measuring convention employed and the underlying dimension of the thing being measured (length, speed, temperature, pressure). Over history, civilizations have accumulated many different units of measure for various common dimensions like length (chain, foot, kilometer, angstrom, ...), area (square footage, acres, hectares, ...), weight (talent, pound, Newton, ton, ...), and money (drachma, dollar, euro, yen, ...). Dimensions and units constrain how we use variables, similar to how a type system constrains variable use in some programming languages. It does not make sense to add together variables with different dimensions: \[\text{$2$ apples} + \text{$2$ meters}\] does not simplify to anything. But if the variables have the same dimensions, then we can convert the units of the stray variables into the desired units and proceed with the addition: \[\text{$2$ centimeters} + \text{$2$ meters} = \text{$0.02$ meters} + \text{$2$ meters} = \text{$2.02$ meters}.\] And we can multiply and divide variables with different dimensions: the dimensions of a product are the product of the dimensions of the arguments. \[\begin{gather} \text{$2$ ft} \times \text{$3$ ft} = \text{$6$ square ft}, \\ \text{$10$ meters} / \text{$2$ minutes} = \text{$5$ meters per minute}. \end{gather}\] The very simple constraints that unit systems place on our calculations encode important rules about the natural world that have been developed over centuries. In some cases, we can derive a nearly complete model based just on reasoning about the dimensions of the variables involved. This process is generally referred to as "dimensional analysis".
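These bookkeeping rules can be checked mechanically. As a small sketch using SymPy's units module (one of several libraries that implement unit arithmetic), a sum of compatible quantities converts and combines, and products carry their dimensions along:

```python
from sympy.physics.units import centimeter, meter, minute, convert_to

# compatible dimensions: convert the stray units, then add
total = convert_to(2 * centimeter + 2 * meter, meter)
print(total)          # 101*meter/50, i.e. 2.02 meters

# products and quotients combine dimensions
speed = convert_to(10 * meter / (2 * minute), meter / minute)
print(speed)          # 5*meter/minute
```

A sum like `2 * meter + 2 * kilogram` would simply refuse to simplify, mirroring the "apples plus meters" example above.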
Over the remainder of this chapter, we'll work through some examples of dimensional analysis, then discuss how the practice can be formalized with linear algebra as the Buckingham \(\Pi\) theorem, and then conclude with a classic example application of dimensional analysis to the Trinity nuclear test in 1945.
Obviously, there can be different units used to measure a given dimension, but the dimension of a variable can also be expressed in more than one way. For example, momentum, the tendency of an object in motion to stay in motion, can have dimensions of force \(\times\) time, or speed \(\times\) mass, or mass \(\times\) length per time.
The flexibility of dimensions can sometimes be helpful, but can also be confusing when used inconsistently. To avoid confusion, it is conventional to pick a set of "fundamental" dimensions to form a basis for the dimensions of our variables. The set of fundamental dimensions should satisfy two rules. First, the fundamental dimensions should form an independent set: no one fundamental dimension can be expressed in terms of the other fundamental dimensions. Second, the fundamental dimensions must span the dimensions of all variables involved in our model: the dimension of each variable can be expressed as a product of powers of the fundamental dimensions. Under these two conditions, the value of each variable can be expressed uniquely in terms of powers of fundamental dimensions.
The common convention is to consider time, length, and mass as fundamental dimensions, and then to add other dimensions like temperature and charge as needed. Then area has derived dimensions of square length, speed has derived dimensions of length per time, and electrical current has derived dimensions of charge per time. But the distinction between fundamental dimensions and derived dimensions is somewhat arbitrary. For example, in quantum physics, one might use momentum rather than mass as a fundamental dimension because photons of light carry momentum but not mass. And any given set of dimensions can be systematically grown (so it spans) and shrunk (so it is independent) into a set of fundamental dimensions.
In our brief introduction to complex variables, we showed how the principles of arithmetic were expanded to represent two-dimensional vectors by introducing the imaginary variable \(i\) with its own special algebraic rules. A variation of that idea can be applied to help us recognize scaling symmetries due to common variable dimensions.
Suppose we have a parameter \(x\) whose dimensions can be expressed in terms of fundamental dimensions of time, length, and mass, and we wish to change the units of this parameter. Let \(T\) be the conversion factor for the time units, \(L\) be the conversion factor for the length units, and \(M\) be the conversion factor for mass units. Then, to convert \(x\) to the new units, we multiply (or divide) it by each conversion factor as many times as the corresponding fundamental unit appears in the dimensions. For example, if \(x\) has units of length, then after converting to the new units we would get \(x L\). If \(x\) has units of area, which is length squared in terms of the chosen fundamental dimensions, unit conversion will give \(x L^2\). If \(x\) has units of velocity, which is length per time, we would get \(x L T^{-1}\). Or if \(x\) has units of force, which is mass times length per time squared in terms of the chosen fundamental dimensions, we would get \(x M L T^{-2}\).
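As a minimal sketch of this bookkeeping, we can store each parameter's dimensions as a tuple of exponents on (time, length, mass) and build the conversion monomial symbolically (the helper name `generic_convert` is ours, purely for illustration):

```python
from sympy import symbols

x, T, L, M = symbols('x T L M', positive=True)

def generic_convert(value, dims):
    """Apply a generic unit conversion to `value`, whose dimensions are
    given as the exponent tuple dims = (time, length, mass)."""
    t, l, m = dims
    return value * T**t * L**l * M**m

print(generic_convert(x, (0, 1, 0)))    # length:   x*L
print(generic_convert(x, (-1, 1, 0)))   # velocity: x*L/T
print(generic_convert(x, (-2, 1, 1)))   # force:    x*L*M/T**2
```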
When actually converting units, we know the values of our conversion factors \(T\), \(L\), and \(M\), and so can plug in and evaluate. But for the moment, let's suppose we leave the conversion factors as unknown variables. Then under this generic unit conversion, each parameter \(x\) should be replaced by a monomial where the powers of the fundamental unit conversion factors represent the dimensions of the parameter and \(x\) becomes the numerical coefficient of the monomial.
The central idea of dimensional analysis is that the laws of the natural world exhibit symmetry of scale: they should remain the same independent of the units used to measure the model parameters. We will call this the "scale-invariance" hypothesis.
Galileo was one of the first promoters of scale invariance, and used it in his argument against the Aristotelian dogma that heavier objects fall more quickly than lighter objects. Galileo suggested that if you split a heavy stone into two pieces, tie them together, and drop them, then according to Aristotle the lighter piece should slow the fall of the heavier piece. But this is self-contradictory, since collectively the pieces are heavier than the original stone, and thus should fall faster.
The scale-invariance hypothesis does not hold for all parameters. While we generally expect time, length, and mass to be scale-invariant, we do not expect parameters measuring angles to be scale-invariant. Angles can be measured in units of radians, degrees, hours, or fractions of a rotation. Unlike mass, angles have an inherent scale: the relationship between angles of 180 degrees and 360 degrees is not the same as the relationship between 90 degrees and 180 degrees. Probabilities are another kind of parameter without scale invariance: if we double a probability of \(3/4\), we get a nonsense probability of \(3/2\).
Footnote: In many textbooks, the absence of scale invariance in angles and probabilities is handled by treating such parameters as dimensionless numbers. This is good enough for purposes of analysis, but confusing to the student who recognizes the practical need to frequently convert angle units: how can a dimensionless value still have units?
Under the scale-invariance hypothesis, formulas should work regardless of the specific units of the variables. As a consequence, we should be able to pick the units that are most convenient for our calculations.
Suppose we need to hang a rope between two telephone poles, and we want to make sure the rope won't come down too close to a garage roof. The more slack is in the rope, the farther down it will hang, but we cannot tell how far down because the rope hangs with a curve. We can use dimensional analysis to find a relation for the lowest point of a rope hanging between two anchor points of equal height. Call the depth \(d\) the height from the top of the anchor points down to the lowest point on the rope. The depth \(d\) might depend on the length of the rope \(s\), the distance \(w\) between the anchors, the linear density of the rope \(\rho\) (mass per length), and the gravitational force per mass \(g\) through a function model \[f( d, w, s, \rho, g) = 0.\] Although it appears quite general, this function has already encoded some of our intuition about the physics of the problem. For instance, we have implicitly assumed that the gravitational force on the rope is determined by a single physical parameter \(g\), and that the rope has no stiffness that could affect its depth.
Now, let's try to analyze this equation. It is a very general formula; it isn't even solved for one of the variables. But the variables do have units, and that helps. Suppose we introduce conversion factors \(M\) for units of mass, \(L\) for units of length, and \(T\) for units of time. Then, \[ f( d L, w L, s L, \rho M L^{-1}, g L T^{-2}) = 0. \] Close inspection shows that time and mass each only appear once in this equation. With some thought, taking \(L = 1/s\), \(M = {1}/{\rho s}\), \(T = \sqrt{g/s}\), will lead to \[f\left( \frac{d}{s}, \frac{w}{s}, 1, 1, 1 \right) = 0.\] So, the rope's depth \(d\) will only depend on the rope's length \(s\) and the distance between the anchors \(w\). The density \(\rho\) of the rope and the strength of gravity \(g\) have no effect. And this makes some sense: heavy chains and pieces of thread all seem to hang the same way in common experience. The other feature here is that the aspect ratios \(d/s\) and \(w/s\) are dimensionless numbers: for a given physical problem, these same aspect ratios will give the same number, no matter what units we choose for our measurements!
Solving for \(d\), and using a new function \(\phi()\) to express the unknown dependence, \[ d = s \phi\left(\frac{w}{s} \right).\]
If the rope's length equals the width between the anchors (\(s=w\)), the rope has to be perfectly straight, so we should have \(d=0\). On the other hand, when the anchors are right next to each other (\(w=0\)), we expect the depth to be half the length (\(d = s/2\)). These imply \(\phi(0) = 1/2\), \(\phi(1) = 0\), and that \(\phi()\) is a monotone decreasing function in between. So, we have determined quite a lot about the shape of a hanging rope without doing very much math.
But as a headsup, you should be warned that we got rather lucky in choosing to normalize our variables by the rope's length rather than the width between anchors. If we had normalized by the width \(w\), we would arrive at the alternative dimensionless relationship \(d = w \psi\left(\frac{s}{w} \right)\). While algebraically equivalent to our first dimensionless relationship for the depth \(d\), this form is indeterminate when \(w=0\); we would get in that case that \(d = 0 \times \psi(\infty)\). Our normalization by the rope length \(s\) avoided this issue since for all physically meaningful versions of the problem, the length \(s\) will always be positive. For new problems, the best choice of normalization factors may not initially be obvious, and it may be worthwhile to consider more than one possibility.
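For the curious, the unknown function \(\phi\) can be evaluated numerically if we borrow one extra fact that dimensional analysis alone cannot supply: a hanging rope takes the classical catenary shape \(y = a\cosh(x/a)\). A sketch under that assumption, using bisection on the arc-length condition:

```python
import math

def phi(w_over_s):
    """Depth ratio d/s of a rope of unit length hung between anchors a
    horizontal distance w apart (0 <= w <= 1), assuming the classical
    catenary shape y = a*cosh(x/a).  The parameter a is found by
    bisection from the arc-length condition 2*a*sinh(w/(2*a)) = 1."""
    w = w_over_s
    if w <= 0:
        return 0.5                    # anchors touching: rope doubles back
    if w >= 1:
        return 0.0                    # rope pulled perfectly straight
    lo, hi = w / 1400.0, 1e6          # bracket chosen to avoid sinh overflow
    for _ in range(200):
        a = 0.5 * (lo + hi)
        # arc length decreases in a; a too-long rope means a is too small
        if 2 * a * math.sinh(w / (2 * a)) > 1:
            lo = a
        else:
            hi = a
    return a * (math.cosh(w / (2 * a)) - 1)

# phi decreases from 1/2 at w/s = 0 down to 0 at w/s = 1
print([round(phi(ratio), 3) for ratio in (0.0, 0.25, 0.5, 0.75, 1.0)])
```

The endpoint values \(\phi(0) = 1/2\) and \(\phi(1) = 0\), and the monotone decrease in between, match the reasoning above.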
We mentioned previously that the great debate over the earth's shape began when the period of a pendulum was found to be different in French Guiana than it was in Paris (relative to the astronomical measurement of the length of a day), and that this was interpreted to imply a difference in the strength of gravity between the two places. But this is an imprecise description, and we should be able to do better. Suppose we hang a lead bob from the end of a long thin wire to make a pendulum, pull it to one side, and let it go so that it swings back and forth with a steady period \(p\). If we measure the period, what does that tell us about gravity's acceleration \(g\)?
There are a few variables that we may also want to measure: the length of the pendulum (\(r\)), the mass of the bob (\(m\)), and the initial angle (\(\theta _ 0\)). But the relationship between these variables and the period is unknown, so the best we can do at first is to write a general functional relationship \[g = \beta(p,r,m,\theta _ 0),\] where \(\beta()\) is an unknown function. To determine \(\beta()\), it looks like we need to explore the values of all four variables. However, we can greatly simplify this exploration based on consideration of the dimensions of our variables.
If we dimensionalize each variable with a generic unit conversion, based on fundamental units of time \(T\), length \(L\), and mass \(M\), our function becomes \[g L T^{-2} = \beta(p T, r L , m M, \theta _ 0 ).\] Note that the initial angle \(\theta _ 0\) has no unit conversion factors.
Let \(L = 1/r\), \(T = 1/p\), and \(M =1/m\). Then \[ \frac{g p^2}{r} = \beta(1,1,1,\theta _ 0).\] If we rewrite the function on the right as \(\beta(\theta _ 0)\) and solve for the gravity, we find \[g = \frac{r}{p^2} \beta(\theta _ 0).\] Note that the prediction for gravity is independent of the mass. If we observe a swinging pendulum, we can say something about the gravitational acceleration in the place we observe it (like on the moon compared to earth), as long as we know the universal function \(\beta(\theta _ 0)\).
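For small release angles, the universal function approaches \(\beta(\theta_0) \to 4\pi^2\) (equivalent to the familiar law \(p = 2\pi\sqrt{r/g}\)), which makes a quick numerical sketch possible:

```python
import math

def estimate_g(period, length, beta=4 * math.pi**2):
    """Estimate gravitational acceleration from a pendulum's period and
    length via g = (r / p**2) * beta(theta_0).  The default beta is the
    small-angle value 4*pi**2; larger release angles need a larger beta."""
    return length / period**2 * beta

# a 1-meter pendulum on Earth swings with a period close to 2 seconds
print(estimate_g(period=2.006, length=1.0))   # roughly 9.8 m/s^2
```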
If we compare the prediction of this formula to some data, we find good agreement with at least one set of observations.
A soap bubble is a wondrous object, so light it can float on the air, but strong enough to contract and trap a palm-sized pocket of air in a nearly perfect sphere. We call this contraction by the bubble "surface tension". Air pressure from outside the bubble also pushes on it to contract. However, the air inside the bubble resists this contraction. As the bubble gets smaller, the pressure in the bubble must increase until it reaches a point where the difference in pressure between the inside and outside of the bubble balances the contraction forces created by the surface tension.
Without knowing any more, all we can say is that the radius \(r\), pressure \(p\), and surface tension \(s\) must satisfy an equation \[\chi(r, p, s)=0\] where \(r\) has units of length, \(p\) has units of Newtons per square meter, or mass per length per time squared in fundamental dimensions, and \(s\) has units of Newtons per meter, or mass per time squared in fundamental dimensions. Performing our generic unit conversion, we find \[\chi(r L, \, p L^{-1} T^{-2} M, \, s T^{-2} M)=0.\] On first pass, it may look like we have 3 fundamental dimensions, but on closer inspection, we see that mass and time do not appear independently. Instead of using the standard fundamental dimensions of length, time, and mass, let us use length \(L\) and mass per square time \(A = T^{-2} M\). Then under our generic unit conversion, \[\chi(r L, p A/L, s A)=0.\] Taking \(L = 1/r\) and \(A = 1/s\), we find \[\chi\left(1, \frac{p r}{s}, 1\right)=0,\] which is equivalent to \[\frac{pr}{s} = \chi^{-1}(0),\] where \(\chi^{-1}(0)\) is an as-yet unknown constant.
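We can double-check with SymPy that the dimension matrix for \((r, p, s)\) has a one-dimensional nullspace, so the model admits exactly one dimensionless group, a power of \(pr/s\):

```python
from sympy import Matrix, Mul, symbols

r, p, s = symbols('r p s', positive=True)
# dimension matrix for (r, p, s); rows are length, time, mass
A = Matrix([[1, -1,  0],
            [0, -2, -2],
            [0,  1,  1]])
basis = A.nullspace()
assert len(basis) == 1         # exactly one dimensionless group
group = Mul(*[b**e for b, e in zip((r, p, s), basis[0])])
print(group)                   # a power of p*r/s (possibly its reciprocal)
```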
Having a formula like this tells us several things already, even without knowing the constant. For a constant surface tension, the product \(pr\) must stay constant, so the pressure difference between the inside and outside must decrease if the bubble is enlarged. Since the pop of a bubble depends on the pressure difference \(p\), small bubbles, with their greater pressure differences, will pop more loudly than large bubbles. (Think champagne bubbles frothing vs. giant soap bubbles!) If we consider a latex balloon instead of a bubble, then the situation is different.
But if you are feeling a little skeptical about getting something for nothing, you are right. We've obtained our formula by assuming the pressure inside the bubble depends only on the size and the surface tension, which is implicitly assumed to be a constant independent of other factors like bubble size. That's reasonable for a bubble, which can create more surface as needed from the surrounding fluid. But it is not a good model for a balloon, where the tension changes as the balloon is inflated.
Dimensional analysis can sometimes be applied to models that are algorithmic computer simulations. Suppose, for example, we want to make a model to study how the behavior of a prey animal like a fiddler crab affects its ability to avoid predation by a seagull on a marsh mudflat at low tide. We can imagine lots of decision rules for when and where the fiddler crab should run. A simulation would give us much more power for exploring decision rules than the usual kinds of equations we study. But whatever rules we decide to study, we can still expect certain basic parameters to be important in estimating the probability of the crab being caught by the gull.
These might include the speed of the crab \(c\), the speed of the gull \(g\), the warning time \(w\) that the crab receives before the gull becomes an immediate risk, and the spatial scale \(d\) of the simulation. So, we expect the probability \(p\) of being caught to be a function of these four parameters, \(p = \zeta(c,g,w,d)\).
Under a generic unit conversion with factors \(L\) for length and \(T\) for time, the speeds carry \(L T^{-1}\), the warning time carries \(T\), and the spatial scale carries \(L\). Choosing \(L = 1/d\) and \(T = g/d\) reduces the relationship to \[p = \zeta\left(\frac{c}{g},\, 1,\, \frac{w g}{d},\, 1\right).\]
So exploring the simulation space of interest requires considering only two dimensionless ratios, the relative speed \(c/g\) and the warning horizon \(wg/d\), rather than all four parameters, which is much simpler.
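To make this concrete, here is a toy Monte Carlo with an entirely hypothetical decision rule (not a model from any real study): rescaling lengths and times changes \(c\), \(g\), \(w\), and \(d\) individually, but leaves the ratios \(c/g\) and \(wg/d\), and hence the caught probability, unchanged.

```python
import random

def catch_probability(c, g, w, d, trials=40000, seed=0):
    """Toy pursuit model: the crab's burrow and the gull's start point
    are uniform on [0, d].  The crab runs for its burrow at speed c with
    a warning-time head start w; the gull flies at speed g.  The crab is
    caught if it is still exposed when the gull arrives."""
    rng = random.Random(seed)
    caught = 0
    for _ in range(trials):
        burrow = rng.uniform(0, d)        # crab-to-burrow distance
        gull = rng.uniform(0, d)          # gull-to-crab distance
        if burrow / c - w > gull / g:     # crab's run outlasts its head start
            caught += 1
    return caught / trials

# rescaling speeds and times leaves c/g and w*g/d -- and hence p -- unchanged
p1 = catch_probability(c=1.0, g=5.0, w=2.0, d=50.0)
p2 = catch_probability(c=2.0, g=10.0, w=1.0, d=50.0)   # same c/g, same w*g/d
print(p1, p2)
```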
The dimensional analysis techniques can be applied directly to differential equations. As in simulations this is often very useful because it reduces the number of parameters that need to be considered. One elementary example of this can be found in the logistic equation describing the growth of bacteria in a test tube. Let \(n(t)\) be the number of cells at time \(t\). The population increases in size according to \[\frac{dn}{dt}= r \, n\left(1-\frac{n}{k}\right)\] where \(r\) is the growth rate with dimensions of 1/time while \(k\) is called the carrying capacity and has dimensions of population size. To solve this differential equation, we also need an initial condition specifying the population size \(n_0\) at some time \(t_0\). So, in the abstract, if the population starts at size \(n_0\) at time \(t_0\) and ends at size \(n _ 1\) at time \(t_1 > t_0\), then the logistic growth model specifies a relationship \[f(n_0,t_0,n_1,t_1,r,k) = 0.\]
The first observation we can make is that the logistic growth equation is autonomous: it does not depend explicitly on the absolute time. This means that starting from initial population \(n_0\) at time \(t_0\) and running to time \(t _ 1\) will give the same answer as starting with the same population size at time \(0\) and running to time \(t = t_1 - t_0\). This is a property of all autonomous systems of differential equations, and the reason we will often get away with assuming that the initial condition is applied at time \(t=0\).
The second observation we can make is that we can perform dimensional analysis on the logistic growth equation to reveal scaling symmetries in its solutions. Both \(n\) and \(k\) have dimensions of abundance, while \(t\) has dimensions of time and \(r\) has dimensions of inverse time. Measuring abundance in units of \(k\) and time in units of \(1/r\), \[\begin{gather} f\left(\frac{n_0}{k},0,\frac{n_1}{k},(t_1 - t_0) r,1,1\right) = 0. \end{gather}\] Introducing new variables \(\hat{n}\), \(\hat{n} _ 0\) and \(\hat{t}\) such that \(n / k = \hat{n}\), \(r (t_1-t_0) = \hat{t}\), and \(n _ 0 / k = \hat{n} _ 0\), and substituting back into the differential equation, \[\begin{gather} \frac{d\hat{n}}{d\hat{t}}=\hat{n}(1-\hat{n}), \quad \hat{n}(0) = \hat{n}_0. \end{gather}\] We see now that all solutions of our original equation can be written as solutions of this equation without any parameters. If \(\hat{n}(\hat{t},\hat{n} _ 0)\) is the solution of this nondimensionalized logistic equation for initial condition \(\hat{n} _ 0\), then when converted back to dimensional variables and our original calendar timescale, \(n _ 1 = k \hat{n}( r (t _ 1-t _ 0), n _ 0/k)\).
Written out, the generic unit conversion with factor \(N\) for abundance and \(T\) for time reads \[f(n _ 0 N, 0, n _ 1 N,(t _ 1-t _ 0) T,r T^{-1}, k N) = 0.\]
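The nondimensionalized logistic equation has the well-known closed-form solution \(\hat{n}(\hat{t}) = \hat{n}_0 / (\hat{n}_0 + (1 - \hat{n}_0) e^{-\hat{t}})\), so the rescaling recipe can be checked directly in code:

```python
import math

def logistic_hat(t_hat, n0_hat):
    """Solution of the parameter-free equation dn/dt = n(1 - n), n(0) = n0."""
    return n0_hat / (n0_hat + (1 - n0_hat) * math.exp(-t_hat))

def logistic(t1, t0, n0, r, k):
    """Dimensional solution rebuilt from the dimensionless one:
    n1 = k * nhat(r*(t1 - t0), n0/k)."""
    return k * logistic_hat(r * (t1 - t0), n0 / k)

# 10 cells, growth rate 0.5 per hour, carrying capacity 1000 cells
print(logistic(t1=12, t0=0, n0=10, r=0.5, k=1000))
# autonomy: only the elapsed time t1 - t0 matters
print(logistic(t1=112, t0=100, n0=10, r=0.5, k=1000))
```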
While dimensional analysis is a general and powerful tool, it is not universally applicable. It is not as useful, for example, in the study of systems that are fundamentally discrete, or that are structured with many variables having the same units. An illustrative example of this is the modelling of a chess game.
When we play chess, it is traditional to represent the state of a game using an \(8 \times 8\) checkerboard with pieces arrayed on the board. One reason we like this representation is that it helps us leverage our visual-processing wetware for decision-making. However, a computer might consider it an inefficient way to represent the game's state, particularly when communicating a single move or the game's full state in an endgame where most squares are empty. Instead, one can list each piece and the board coordinates of that piece (while it is still on the board), just as we might for objects in coordinate geometry. And since we now have a state represented in terms of many coordinates, all with the same units, we might consider using dimensional analysis.
But chess does not possess the same scale invariances we intuitively expect of natural systems. The board is discrete and has a fixed size. The rules of the game rely on these board characteristics: the movement rules for pawns, knights, and kings are all specified in terms of the arbitrary size of one square. If an observer chose to interpret the game's rules using a unit of distance different from 1 checkerboard square, the game would change into something new. Not all the pieces have movement rules that are so scale-dependent. The rooks, bishops, and queens are restricted in their directions of movement, but not their distances. But the lack of scale-invariance in the other rules still implies that any redimensionalization of chess will change the game.
So far, we've studied a few problems, each with their own dimensionless groups. The procedures we've invoked can be summarized as follows.
Buckingham \(\Pi\) theorem: If there are \(M\) dimensional parameters involving \(N\) fundamental dimensions, then the system has \(M-N\) dimensionless groups. For any dimensioned equation \[f(q _ 1,q _ 2, \ldots ,q _ M) = 0\] where the \(q _ i\)'s are the measurable variables with \(N\) fundamental dimensions, the equation can be restated as \[F(\Pi _ 1,\Pi_2, \ldots ,\Pi _ {M-N})=0\] where each \(\Pi _ i\) is a dimensionless group constructed from the \(q\)'s and has the form \[\Pi_i = q^{a _ i} = q_1^{a_{i1}} q_2^{a_{i2}} \ldots q _ M^{a _ {iM}}\] where the exponents \(a _ {ij}\) are rational numbers (they can always be taken to be integers: just raise the group to a power to clear denominators).
Consider a ship moving across the surface of the ocean. One of the main sources of drag on the ship is its wake, which consists of surface waves moving under the influence of gravity as well as of the ship itself.
Parameters: the waterline length of the hull \(r\), the gravitational acceleration \(g\), the speed of the ship \(v\), the viscosity of the water \(\mu\), and the density of the water \(\rho\).
If we conveniently arrange our table of variables and units,
\[\begin{array}{c|ccccc}
 & r & g & v & \mu & \rho \\ \hline
\text{length, } L & 1 & 1 & 1 & -1 & -3 \\
\text{time, } T & 0 & -2 & -1 & -1 & 0 \\
\text{mass, } M & 0 & 0 & 0 & 1 & 1
\end{array}\]
Applying the Buckingham \(\Pi\) theorem, there are 5 variables and 3 fundamental dimensions, so we expect \(5 - 3 = 2\) dimensionless groups.
So, from the \(\Pi\) theorem, we deduce that the nullspace of the dimension matrix must be spanned by two linearly independent basis vectors, each corresponding to a dimensionless group of the model. In reduced row-echelon form (RREF), the dimension matrix \[\left[\begin{matrix} 1 & 1 & 1 & -1 & -3 \\ 0 & -2 & -1 & -1 & 0 \\ 0 & 0 & 0 & 1 & 1 \end{matrix}\right]\] becomes \[\left[\begin{matrix}1 & 0 & \frac{1}{2} & 0 & - \frac{3}{2}\\0 & 1 & \frac{1}{2} & 0 & - \frac{1}{2}\\0 & 0 & 0 & 1 & 1\end{matrix}\right]\] or, after doubling the first two rows to clear fractions, \[\left[\begin{matrix}2 & 0 & 1 & 0 & -3\\0 & 2 & 1 & 0 & -1\\0 & 0 & 0 & 1 & 1\end{matrix}\right].\]
\[C_1 \begin{bmatrix} -1 \\ -1 \\ 2 \\ 0 \\ 0 \end{bmatrix} + C_2 \begin{bmatrix} 3 \\ 1 \\ 0 \\ -2 \\ 2 \end{bmatrix} \]
from sympy import Matrix, Mul, latex
from sympy.abc import r, g, v, mu, rho

vs = (r, g, v, mu, rho)
# dimension matrix: rows are (length, time, mass), columns follow vs
A = Matrix([[1, 1, 1, -1, -3], [0, -2, -1, -1, 0], [0, 0, 0, 1, 1]])
ns = A.nullspace()
# dimensionless group from nullspace vector k, squared to clear half-integer exponents
F = lambda k: Mul(*[b**e for b, e in zip(vs, ns[k])])**2
print(latex((F(0), F(1))))
There are two free variables (columns 3 and 5), which we can use to construct basis vectors of the nullspace. This leads to an equation \[F\left( \frac{v^{2}}{g r}, \frac{g r^{3}\rho^{2}}{\mu^{2}} \right)=0.\] This is a perfectly good dimensionless formula, but intuitively, we find it easier to work with dimensionless groups that have small integer exponents and sparse overlap of variables. If we multiply the second group by the first and take a square root, we get the more standard (and slightly simpler) version \[F\left ( \frac{v^{2}}{g r}, \quad \frac{r \rho v}{\mu} \right )=0.\] The first dimensionless term is called the Froude number (\(Fr:=\dfrac{v^{2}}{g r}\)) while the second dimensionless term is called the Reynolds number (\(Re:=\dfrac{r \rho v}{\mu}\)).
For a ship in water, \(\rho \approx 10^3\) kg/m\(^3\), \(\mu \approx 10^{-3}\) kg/s/m, \(r \approx 10\) m, and \(v \approx 1\) m/s, so the Reynolds number is about \(10^7\), which is very large. If the Reynolds number is large, we can asymptotically expand around its reciprocal being small, and hope to find
\[v \propto \sqrt{gr}\]
Thus, the speed is proportional to the square root of waterline length, and longer boats will be faster than shorter boats, all else equal. Also, by the same formula, ships will be faster on planets with higher gravity, and slower on planets with lower gravity.
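A quick numeric sketch of the \(v \propto \sqrt{gr}\) scaling (the \(O(1)\) prefactor depends on hull shape and is not determined by dimensional analysis):

```python
import math

def wave_limited_speed(g, r):
    """Characteristic wave-drag speed scale sqrt(g*r); real hull speeds
    carry an order-one, shape-dependent prefactor that dimensional
    analysis cannot determine."""
    return math.sqrt(g * r)

# doubling the waterline length buys only a factor sqrt(2) in speed
print(wave_limited_speed(9.81, 10.0), wave_limited_speed(9.81, 20.0))
```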
On the Froude number: In 1857, Froude was consulted on the performance of the Great Eastern, a huge iron ocean liner that was, in spite of great efforts, so slow she wasn't worth the cost of fuel to run. He began towing models, first in a creek, then in a purpose-built tank. He noticed that geometrically similar small and large hulls produced different wave patterns, but when larger hulls were towed at greater speeds (making \(Re\) large!), he could find a particular speed at which the wave patterns were almost identical. That is, when \(v^2/g\ell\) was the same for the small and large ships! When using scale models to predict what a prototype will do, the Froude number is typically kept constant so the surface wave pattern will remain the same.
Trinity site, New Mexico, 1945
These photographs were declassified and published in 1947 in Life Magazine and other news outlets as part of a public relations campaign on nuclear weapons and nuclear power. It was thought that the pictures were benign and would not reveal any more about America's top-secret nuclear weapons program than the public already knew. That was wrong. In 1950, G. I. Taylor, an English fluid dynamicist who had worked independently of the Manhattan Project, published a pair of papers (one, two) explaining a 1941 report and using the pictures to calculate the bomb's energy yield.
Trying to recreate Taylor's thought process might go something like this. (It did not, if you look at the above papers.)
Taylor knew his thermodynamics very well. In particular, he knew that the expansion of a gas as it absorbs energy (under slowly changing conditions!) depends on the ratio of the specific heat of the gas at constant pressure \(C _ p\) to the specific heat of the gas at constant volume \(C _ v\). The specific heat is the change in internal energy per change in temperature per amount (measured in mass). In the case of constant volume, all of the energy from an increase in temperature must be absorbed through increases in pressure due to acceleration of gas molecules. In the case of constant pressure, an increase in temperature corresponds to an increase in volume, so molecular acceleration is offset by a decrease in density.
So, to calculate the energy of the Trinity bomb test from the pictures in Life, Taylor needed a formula that involved the energy \(E\), the time \(t\), the radius \(R\), the initial atmospheric density \(\rho _ 0\), and the specific heats \(C _ p\) and \(C _ v\), \[f( E, t, R, \rho _ 0, C _ p, C _ v) = 0,\] for some unknown function \(f\). The fireball moves so fast that the ambient atmospheric density has no chance to change in the short term. The specific heats have units of joules per kilogram per kelvin.
\[\begin{array}{c|cccccc}
 & E & t & R & \rho_0 & C_p & C_v \\ \hline
\text{distance} & 2 & 0 & 1 & -3 & 2 & 2 \\
\text{time} & -2 & 1 & 0 & 0 & -2 & -2 \\
\text{mass} & 1 & 0 & 0 & 1 & 0 & 0 \\
\text{temp} & 0 & 0 & 0 & 0 & -1 & -1
\end{array}\]
We can save ourselves some work in the row-reduction by rearranging the rows and columns of the matrix ...
\[\begin{array}{c|cccccc}
 & t & R & E & \rho_0 & C_p & C_v \\ \hline
\text{time} & 1 & 0 & -2 & 0 & -2 & -2 \\
\text{distance} & 0 & 1 & 2 & -3 & 2 & 2 \\
\text{mass} & 0 & 0 & 1 & 1 & 0 & 0 \\
\text{temp} & 0 & 0 & 0 & 0 & -1 & -1
\end{array}\]
Now, by row-reduction,
\[\begin{array}{c|cccccc}
 & t & R & E & \rho_0 & C_p & C_v \\ \hline
\text{time} & 1 & 0 & 0 & 2 & 0 & 0 \\
\text{distance} & 0 & 1 & 0 & -5 & 0 & 0 \\
\text{mass} & 0 & 0 & 1 & 1 & 0 & 0 \\
\text{temp} & 0 & 0 & 0 & 0 & 1 & 1
\end{array}\]
The free columns (\(\rho_0\) and \(C_v\)) correspond to the two dimensionless groups \(\dfrac{R^5 \rho_0}{E t^2}\) and \(\dfrac{C_v}{C_p} = 1/\gamma\), so the shock radius must satisfy \[R = S(\gamma) \left( \frac{E t^2}{\rho_0} \right)^{1/5}\] for some unknown function \(S\) of the adiabatic constant \(\gamma = C_p/C_v\).
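The two dimensionless groups can also be computed with the same SymPy nullspace recipe used for the ship-drag example:

```python
from sympy import Matrix, Mul, symbols

t, R, E, rho0, Cp, Cv = symbols('t R E rho_0 C_p C_v', positive=True)
vs = (t, R, E, rho0, Cp, Cv)
# dimension matrix: rows are (time, distance, mass, temperature)
A = Matrix([[1, 0, -2,  0, -2, -2],
            [0, 1,  2, -3,  2,  2],
            [0, 0,  1,  1,  0,  0],
            [0, 0,  0,  0, -1, -1]])
groups = [Mul(*[b**e for b, e in zip(vs, vec)]) for vec in A.nullspace()]
print(groups)   # contains R**5*rho_0/(E*t**2) and C_v/C_p
```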
Taylor was very suspicious of the role of the functional dependence on the adiabatic constant \(\gamma\); a nuclear bomb explosion does not seem like a situation where things change slowly. And he was unsure which value to use for \(\gamma\). If the atmosphere was acting like its usual diatomic self, \(\gamma = 1.4\), but if the explosion was so hot that it split nitrogen and oxygen molecules up into individual atoms, \(\gamma = 1.67\). But he didn't give up, and was able to use some small explosive experiments to estimate \(S(\gamma)\approx 1\), and so obtained a usable formula.
Now, in 1950, he had the pictures from Life magazine with which to work.
# time (milliseconds), radius (meters)
#
# Original data used by GI Taylor to estimate
# the energy of the Trinity test's atomic bomb.
# Data represent the radius of the atmospheric
# shockwave of the bomb explosion at sequential
# time points, and were extracted from timestamped
# and scaled pictures released with the announcement
# of the bomb.
#
0.1,11.1
0.24,19.9
0.38,25.4
0.52,28.2
0.66,31.9
0.80,34.2
0.94,36.3
1.08,38.9
1.22,41.0
1.36,42.8
1.50,44.4
1.65,46.0
1.79,46.9
1.93,48.7
3.26,59.0
3.53,61.1
3.80,62.9
4.07,64.3
4.34,65.6
4.61,67.3
15.0,106.5
25.0,130
34.0,145
53.0,175
62.0,185
From this data, plotting \(\tfrac{5}{2}\log R\) against \(\log t\), we can determine the \(y\)-intercept, which depends on the energy. Although he was somewhat surprised by how straight the line really was, G. I. Taylor was then able to solve for the energy \(E\), and estimate the explosion at 16.8 kilotons of TNT, with a range from 9.5 to 34 kilotons. With other methods, the strength was estimated to be about 20 kilotons of TNT.
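As a rough reproduction of that estimate (using a subset of the data above, a nominal air density of 1.25 kg/m\(^3\), and taking \(S(\gamma) \approx 1\)), we can invert the scaling law \(R = (E t^2/\rho_0)^{1/5}\) pointwise and take the median:

```python
# (time in milliseconds, shock radius in meters): a subset of
# Taylor's data points from the table above
data = [(0.24, 19.9), (0.66, 31.9), (1.08, 38.9), (1.93, 48.7),
        (3.80, 62.9), (15.0, 106.5), (34.0, 145.0), (62.0, 185.0)]

RHO0 = 1.25          # nominal air density, kg/m^3
KT_TNT = 4.184e12    # joules per kiloton of TNT

# invert R = (E t^2 / rho0)^(1/5) to E = rho0 * R^5 / t^2 for each point
estimates = [RHO0 * R**5 / (1e-3 * t)**2 for t, R in data]
E = sorted(estimates)[len(estimates) // 2]      # median estimate, joules
print(f"yield ~ {E / KT_TNT:.0f} kilotons of TNT")
```

The pointwise estimates scatter over roughly 16 to 23 kilotons, the same ballpark as Taylor's published figure.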
Notes: (1) The plot on the left is from Taylor's 1950 paper. (2) The y-axes are different since the plot on the left measures radius in centimeters, while the plot on the right measures it in meters.
It may seem like nature prefers one system of fundamental dimensions (length, time, mass, ...), but there are actually many ways to choose fundamental units. For example, instead of using time, length, and mass as fundamental dimensions, we could use time, length, and force, in which case mass would have derived dimensions of force times time squared per length. In the traditional American system of units, for example, we regularly use pounds (a unit of force) instead of slugs or kilograms (units of mass). And in the theory of quantum mechanics, it is common to use momentum rather than mass as a fundamental unit.
Dimensional analysis is often viewed as being as much art as science. While there are relevant mathematical theorems like Buckingham's Pi theorem, there is still an element of arbitrariness in how we do dimensional analysis, and this arbitrariness plays into how we think about a problem, particularly when we start to consider the relative magnitudes of quantities for approximations.
In 1672, as part of a scientific expedition to Cayenne in French Guiana, Jean Richer observed that his Paris clocks were losing 148 seconds each day. Giovanni Cassini guessed that this loss was the result of weaker gravity. If so, how much stronger was the gravity in Paris?
The Deborah number is a dimensionless number used in rheology to describe the "solidness" of a "fluid". It is the ratio of the "relaxation time" of a fluid to the observation time of interest. If the Deborah number is very large, then a fluid moves very slowly relative to the observation time and acts like a solid. Deborah numbers near one indicate a normal fluid, and very small Deborah numbers indicate a "hydrostatic" situation where the fluid moves so quickly that we can assume it is always near equilibrium. Estimate your Deborah numbers for the following scenarios, and explain your reasoning.
A bicyclist traversing a flat circular race track leans into the turn to keep her bicycle balanced.
Find 4 dimensional variables that together should determine the angle at which she should lean.
Determine the units of each variable, in terms of the fundamental units of length, time, mass, charge, and temperature.
According to the Buckingham \(\Pi\) theorem, how many dimensionless groups can be constructed from your 4 dimensional variables?
Find a general formula for a functional relationship between your variables and the angle of lean, expressed in terms of dimensionless groups.
Which of your initial 4 variables does not actually affect the angle of her lean?
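The counting step of the Buckingham \(\Pi\) theorem can be mechanized: the number of independent dimensionless groups is the number of variables minus the rank of their dimension-exponent matrix. The Python sketch below illustrates this; the four variables chosen (speed, track radius, gravitational acceleration, and mass) are one plausible selection for this exercise, not necessarily the intended answer.

```python
from fractions import Fraction

def matrix_rank(matrix):
    """Rank via Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    r = 0  # number of pivots found so far
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Columns: speed v, radius R, gravity g, mass M (one plausible choice).
# Rows: exponents of length, time, and mass in each variable.
dim_matrix = [
    [1, 1, 1, 0],    # length: v ~ L/T, R ~ L, g ~ L/T^2
    [-1, 0, -2, 0],  # time
    [0, 0, 0, 1],    # mass
]

n_groups = len(dim_matrix[0]) - matrix_rank(dim_matrix)
print(n_groups, "independent dimensionless group(s)")
```

With this choice of variables the matrix has rank 3, leaving a single dimensionless group.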
In Principia Mathematica, Isaac Newton was the first to attempt to calculate the speed of sound in air. Let's replicate his result.
Determine three commonly known properties of an ideal gas that might affect the speed of sound (\(c\)) in that gas.
Now, use dimensional analysis to come up with a formula for the speed of sound \(c\) as a function of these three values.
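Whatever properties are chosen, dimensional analysis can only determine the formula up to a dimensionless constant. As a numerical illustration (using assumed sea-level values for air, not data from the source), the Python sketch below compares the constant Newton's isothermal calculation effectively used with Laplace's later correction \(\sqrt{\gamma}\).

```python
import math

# Assumed sea-level properties of air (illustrative values).
p = 101325.0   # pressure in pascals
rho = 1.225    # density in kg/m^3
gamma = 1.4    # adiabatic constant for a diatomic gas

# Dimensional analysis forces c = C * sqrt(p / rho) for some
# dimensionless constant C that it cannot determine.
newton = math.sqrt(p / rho)           # Newton's isothermal value, C = 1
laplace = math.sqrt(gamma * p / rho)  # Laplace's correction, C = sqrt(gamma)

print(f"C = 1:           c = {newton:.0f} m/s")
print(f"C = sqrt(gamma): c = {laplace:.0f} m/s")
```

The measured speed of sound in air is about 340 m/s, which matches the corrected constant.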
(McMahon and Bonner, 1983, pp. 76-78) When you cook a roast, the cooking time (\(T\)) is defined as the time needed for the center of the roast to reach a predefined temperature. The cooking time actually depends on the thermal conductivity (\(k\)) of the roast, its density (\(\rho\)), the radius of the roast (\(R\)), and the specific-heat capacity at constant pressure (\(s_p\)).
Find a dimensionless group relating these five variables.
If we halve the size of the roast, how should the cooking time be changed, assuming everything else stays the same?
The drag force \(D\) on a ship depends on the water density \(\rho\), the water viscosity \(\mu\), the ship length \(L\), and the ship velocity \(v\).
How many dimensionless terms are needed to express the functional relationship among these 5 variables?
Find two different formulas relating these 5 variables to each other. One formula should make use of the Reynolds number \(L \rho v / \mu\), while the other formula should make use of the drag coefficient \(D/(\rho L^2 v^2)\).
Nuclear fission occurs as neutrons bounce around atoms of plutonium, shattering some nuclei and releasing more neutrons in a chain reaction. Whether a chain reaction keeps going or dies out depends on how much bouncing occurs before the neutrons escape from the plutonium or are absorbed. In a simple theory of the chain reaction, the minimum mass of a sphere of plutonium needed to sustain a nuclear chain reaction (called the "critical mass") depends on the following three variables.
Use dimensional analysis to find a formula for the critical mass \(m\) as a function of these three variables.
(From Continuum Modeling in the Physical Sciences by E. van Groesen and Jaap Molenaar) Consider a train travelling through a light rain shower, where raindrops accumulate on the window of the train and trail down in diagonal streaks. Use dimensional analysis to find a formula for the speed of the train as a function of the raindrop size and the angle of descent of the drop.
(modified from Schmidt, 1977) Meteorite and asteroid impacts create craters. While these erode over time on Earth, they are still easily seen on the Moon and Mars. There are two different hypotheses for what controls the impact crater size, which we derive below.
Find a functional relationship for crater radius \(r\) depending on the weight of the asteroid \(W\) and the density \(\rho\) of the substrate being moved by the explosion.
If the crater radius also depends on the strength of gravity \(g\) (which is different on each planet and moon), as evidence suggests it does for large explosions, find a new relationship for the crater radius.
(From The Art of Approximation in Science and Engineering by Sanjoy Mahajan) When Einstein proposed the theory of relativity, one of the tests of the theory relied heavily on dimensional analysis. Suppose a small object (comet, photon, derelict spaceship) enters the solar system at high speed and passes within a minimum distance \(r\) of the sun. The sun's gravity bends its path, and the object leaves the solar system travelling in a different direction than it entered. Let \(\theta\) be the angle between the old path and the new path. This angle depends on the minimum distance \(r\); the smaller the distance, the sharper the turn.
What parameters besides the distance \(r\) does the angle \(\theta\) depend on?
Using dimensional analysis, derive a formula for the angle \(\theta\) in terms of \(r\) and the other parameters.
Under Newton's theory of gravity, the proportionality constant in this formula should be 2, while in Einstein's theory the constant should be 4. Einstein's value was eventually confirmed using radio-astronomy measurements.
In Principia Mathematica, Isaac Newton discusses the resistance a fluid poses to an object moving through it. These ideas were subsequently applied to the calculation of the lift force created by an inclined plate moving through the air. The lift force \(F\) depends on the density of the fluid \(\rho\), the surface area of the plate \(S\), the velocity of the plate through the fluid \(v\) and the angle of attack of the plate \(\theta\).
Use dimensional analysis to derive an equation for the attack angle \(\theta\) as a function of a dimensionless product of the other variables.
Solve your equation for the lift force \(F\).
Newton believed that when the angle of attack was small, the lift force scaled like the square of the angle. In 1804, George Cayley tested Newton's idea. How does Newton's prediction compare with Cayley's data?
# velocity (ft/sec), angle of attack (deg), lift force (ounces)
#
# Early data on lift coefficient of a square plate
# moved using a whirling-arm setup
#
# Extracted by Tim Reluga, 2014-04, from
# Aerodynamics in 1804, the pioneering work of Sir George Cayley
# by A. H. Yates, Flight, page 612.
#
# two data sets, the first at 15 feet per second velocity
# and the second at 21.8 feet per second velocity
#
# column 1: velocity of plate
# column 2: angle of attack, in degrees
# column 3: force of lift, in ounces
#
15.0,3.0310,0.1324
15.0,6.0190,0.1620
15.0,8.9630,0.2918
15.0,11.906,0.4542
15.0,14.894,0.6315
15.0,17.882,0.6787
21.8,3.0750,0.1139
21.8,6.0190,0.1752
21.8,9.0070,0.2408
21.8,11.950,0.3179
21.8,14.894,0.4759
21.8,17.838,0.5996
21.8,19.903,0.6222
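One way to test Newton's small-angle prediction against this table is a log-log regression of lift against angle at each fixed speed: if lift scaled like \(\theta^2\), the fitted exponent should come out near 2. A Python sketch (treating the tabulated values as exact):

```python
import math

# Cayley's whirling-arm data from the table above:
# (velocity in ft/s, angle of attack in degrees, lift in ounces).
data = [
    (15.0, 3.031, 0.1324), (15.0, 6.019, 0.1620), (15.0, 8.963, 0.2918),
    (15.0, 11.906, 0.4542), (15.0, 14.894, 0.6315), (15.0, 17.882, 0.6787),
    (21.8, 3.075, 0.1139), (21.8, 6.019, 0.1752), (21.8, 9.007, 0.2408),
    (21.8, 11.950, 0.3179), (21.8, 14.894, 0.4759), (21.8, 17.838, 0.5996),
    (21.8, 19.903, 0.6222),
]

def power_fit(pairs):
    """Least-squares exponent b in F ~ theta^b from a log-log regression."""
    xs = [math.log(theta) for theta, force in pairs]
    ys = [math.log(force) for theta, force in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

exponents = {}
for v in (15.0, 21.8):
    pairs = [(theta, force) for vv, theta, force in data if vv == v]
    exponents[v] = power_fit(pairs)
    print(f"v = {v} ft/s: lift ~ theta^{exponents[v]:.2f}")
```

Comparing the fitted exponents to 2 is the heart of the exercise.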
Suppose a classmate told you that the time \(t\) a projectile is in the air (ignoring air resistance) can be calculated from the formula \(g t = v + \sqrt{v^2 + 2 y}\), where \(v\) is the initial vertical velocity, \(y\) is the initial height above the ground, and \(g\) is the acceleration of gravity. Based on principles of dimensional analysis, how can you tell this equation must be wrong? Can you guess your classmate's mistake?
In 1604, Johannes Kepler proposed an algorithm equivalent to the following formula for the relationship between the angle of incidence \(\theta_1\) and the angle of refraction \(\theta_2\) for light entering water, where \(k\) is the index of refraction. Discuss. \[\theta_1 = \frac{k \theta_2}{k - (k-1)\sec \theta_2}.\]
A crank has suggested that the height \(h\) of a child can be predicted from the height \(x\) of the mother and the height \(y\) of the father using the formula \[h = \sqrt{ (x+1) (y1) }.\] Use the theory of dimensional analysis to critique this formula.
If \(a\), \(b\), and \(c\) are the lengths of sides of a generalized triangle, and \(A\),\(B\), and \(C\) are corresponding angles, which of the following are dimensionally consistent formulas?
Buckingham's Pi theorem is related to legendary mathematician Emmy Noether's famous theorem that all conservation laws in physics are actually statements about the symmetries of spacetime described using partial differential equations. While recovering Noether's theorem is too ambitious for us here, we can show how the Pi theorem is related to natural laws satisfying certain linear partial differential equations. Suppose we wish to find a formula for the volume of an \(n\)-dimensional sphere. We expect there to be a formula \(f(r,V) = 0\), where \(r\) is the sphere's radius and \(V\) is the sphere's volume. Let \(\lambda\) be a conversion factor for units of distance.
Use dimensional analysis to express the solutions \(y(t)\) of the dimensional equation \(dy/dt = m - r y\) in terms of solutions \(u(s)\) of the dimensionless equation \(du/ds = 1 - u\).
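One candidate scaling (an assumption used for illustration, not necessarily the worked answer) is \(y(t) = (m/r)\,u(rt)\). The Python sketch below checks it numerically by integrating both equations with forward Euler and comparing the results.

```python
# Arbitrary dimensional parameters and step size for the check.
m, r = 3.0, 0.5
dt, steps = 1.0e-4, 20000   # integrate the dimensional system to t = 2

# Forward Euler on the dimensional equation dy/dt = m - r*y, y(0) = 0.
y = 0.0
for _ in range(steps):
    y += dt * (m - r * y)

# Forward Euler on the dimensionless equation du/ds = 1 - u, u(0) = 0,
# over the rescaled time s = r*t (so ds = r*dt).
u = 0.0
for _ in range(steps):
    u += (r * dt) * (1 - u)

# Under the candidate scaling, y(t) should equal (m/r) * u(r*t).
print(abs(y - (m / r) * u))
```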
Use algebra and dimensional analysis to show that the 4-parameter system \[\begin{align*} \dot{p} &= \alpha q, \\ \dot{q} &= \beta p + \gamma q - \delta p^2 q, \end{align*}\] is equivalent to the Van der Pol equation (see above) under a rescaling of the states and time. What is the formula for \(a\) in terms of \(\alpha, \beta, \gamma, \delta\)?
Newton's law of gravity says that the height \(y(t)\) of a stone thrown straight up from the surface of the moon obeys the second-order equation \(\ddot{y} = -g R^2/(R+y)^2\), where \(R=1737\) kilometers is the radius of the moon and \(g=1.62\) meters per second squared is gravity's acceleration at the surface. Use dimensional analysis to show that solutions to this equation can be calculated in terms of the solutions to \(\ddot{u} = -(1+\epsilon u)^{-2}\), and estimate \(\epsilon\).
A classic differential game problem is the hare and the lion. The hare is so small that it can accelerate instantly, and we need only care about its maximum speed \(c\). The lion, on the other hand, is big and slow to change its speed, but given enough time it can move much faster than the hare. The lion's movement can be parameterized in terms of its mass \(m\) and its maximum accelerating force \(f\). If \(d\) is the length of a side of the arena, what dimensionless groups parameterize the probability of the lion catching the hare?
In our previous study of Watt's linkage, we found the implicit solution curve was specified as \[0 = 4 y^{2} \left(x^{2} + y^{2} - r^{2}\right) + \left(x^{2} + y^{2}\right) \left(\frac{\ell^{2}}{4} - r^{2} - 1 + x^{2} + y^{2} \right)^{2}.\] In a linkage problem, all the parameters should have units of length. However, if we inspect our solution, it looks like the first term would have units of length to the fourth power, and the second term would have units of length to the sixth power. Does this contradict the premise of dimensional analysis?