# Dimensional analysis


Topics

• Basics of dimensional analysis
• Soap bubbles
• Pendulum period
• Nuclear explosions

Further Reading

One explanation for the appearance of scaling laws in experimental data is dimensionality. When we build equations and models, we represent state with a set of variables. The variables we include in our models have more meaning than just their numerical values. There is extra information attached that reminds us of how the variables should be interpreted and manipulated. If we measure a football field, we don't say it is 100 long; we say a football field is 100 yards long. Similarly, the speed of light is 300 megameters per second. In physical chemistry, rather than describing the motion of every individual particle making up the gas, we might represent the state of a gas with numbers for its temperature in degrees Celsius and pressure in Pascals or millimeters of mercury.

In each of these cases, we state the number along with its units (yards, megameters per second, degrees Celsius, pascals, mmHg). Units are a kind of descriptor that communicates the measuring convention employed and the underlying dimension of the thing being measured (length, speed, temperature, pressure). Over history, civilizations have accumulated many different units of measure for common dimensions like length (chain, foot, kilometer, angstrom, ...), area (square footage, acres, hectares, ...), weight (talent, pound, newton, ton, ...), and money (drachma, dollar, euro, yen, ...). Dimensions and units indicate that we can use variables in certain ways, similar to how a type system constrains variable use in some programming languages. For example, adding together variables with different dimensions like $\text{2 apples} + \text{2 feet}$ does not simplify to 4 of anything. But if the variables have the same dimensions, then we can convert the units of the stray variables into the desired units and proceed with the addition -- $\text{2 meters} + \text{5 centimeters} = \text{2 meters} + \text{0.05 meters} = \text{2.05 meters},$ or $\text{3 apples} + \text{4 pears} = \text{7 pieces of fruit}.$ We can also multiply and divide variables with different dimensions -- the dimension of a product is the product of the dimensions of its factors. $\begin{gather} \text{2 ft} \times \text{3 ft} = \text{6 square ft}, \\ \text{10 meters} / \text{2 minutes} = \text{5 meters per minute}. \end{gather}$
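The type-system analogy can be made concrete with a small sketch (a toy illustration, not a real units library), where each quantity carries a dictionary of base-dimension exponents:

```python
# Toy sketch of dimensions as a type system: each quantity carries a dict
# of base-dimension exponents. Addition demands matching dimensions;
# multiplication adds exponents.

class Quantity:
    def __init__(self, value, dims):
        self.value = value           # numerical coefficient
        self.dims = dict(dims)       # e.g. {"L": 1} for a length

    def __add__(self, other):
        if self.dims != other.dims:
            raise TypeError("cannot add quantities of different dimensions")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        dims = dict(self.dims)
        for dim, power in other.dims.items():
            dims[dim] = dims.get(dim, 0) + power
        return Quantity(self.value * other.value,
                        {d: p for d, p in dims.items() if p != 0})

total = Quantity(2.0, {"L": 1}) + Quantity(0.05, {"L": 1})   # 2 m + 5 cm
print(total.value)                   # 2.05, as in the text

area = Quantity(2.0, {"L": 1}) * Quantity(3.0, {"L": 1})     # 2 ft * 3 ft
print(area.dims)                     # {'L': 2} -- square length
```

Trying to add a length to a count of apples raises an error, just as the text's $\text{2 apples} + \text{2 feet}$ example refuses to simplify.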

The very simple constraints that unit systems place on our calculations allow us to apply some important rules about the natural world that have been developed over centuries. In some cases, we can derive a nearly complete scaling model based just on reasoning about the dimensions of the variables involved. This process is generally referred to as "dimensional analysis". Over the remainder of this chapter, we'll work through some examples of dimensional analysis, then discuss how the practice can be formalized with the principle of homogeneity of dimensions and linear algebra into the Buckingham $$\Pi$$-Theorem, and then conclude with a classic example application of dimensional analysis to the Trinity nuclear test in 1945.

## Base and derived dimensions

Obviously, there can be different units used to measure a given dimension, but the dimension of a variable can also be expressed in more than one way. For example, momentum, the tendency of an object in motion to stay in motion, can have dimensions of force $$\times$$ time, or speed $$\times$$ mass, or mass $$\times$$ length $$/$$ time. The flexibility of dimensions can sometimes be helpful, but can also be confusing when used inconsistently. To avoid confusion, it is conventional to pick a set of "base" dimensions for our variables. If a variable's dimension is not a base dimension, we call it a "derived" dimension. The set of base dimensions should satisfy two rules. First, the base dimensions should form an independent set -- no one base dimension can be derived from the other base dimensions. Second, the base dimensions must span the dimensions of all variables involved in our model -- the dimension of each variable can be derived as a product of powers of the base dimensions. Under these two conditions, there will be exactly one way to derive the dimensions of each variable. As we'll see below, the parallels with linear-algebra terminology are intentional.

The common contemporary convention is to consider time, length, and mass as base dimensions, and then to add other dimensions like temperature and charge as needed. Then area has derived dimensions of square length, momentum has derived dimensions of mass times length per time, and electrical current has derived dimensions of charge per time. But this is not universal. For example, in the English system of units, it is standard to work with force (e.g. pounds) instead of mass as a base dimension. And in quantum physics, people use momentum rather than mass as a base dimension because photons of light carry momentum but not mass. Any given set of dimensions can be systematically grown (so it spans) and shrunk (so it is independent) into a set of base dimensions.

## Unit conversion monomials

In our brief introduction to complex variables, we showed how the principles of arithmetic were expanded to represent two-dimensional vectors by introducing the imaginary variable $$i$$ with its own special algebraic rules. A variation of that idea can be applied to help us recognize scaling symmetries due to common variable dimensions.

Suppose we have a variable $$x$$ whose dimensions can be expressed in terms of base dimensions of time, length, and mass, and we wish to change the units of this variable. Let $$T$$ be the conversion factor for the time units, $$L$$ be the conversion factor for the length units, and $$M$$ be the conversion factor for mass units. Then, to convert $$x$$ to the new units, we multiply (or divide) it by each conversion factor as many times as the corresponding base unit appears in the dimensions. For example, if $$x$$ has units of length, then after converting to the new units we would get $$x L$$. If $$x$$ has units of area, which is length squared in terms of the chosen base dimensions, unit conversion will give $$x L^2$$. If $$x$$ has units of velocity, which is length per time, we would get $$x L T^{-1}$$. Or if $$x$$ had units of force, which is mass length per time squared in terms of the chosen base dimensions, we would get $$x M L T^{-2}$$.

When actually converting units, we know the values of our conversion factors $$T$$, $$L$$, and $$M$$, and so can plug in and evaluate. But for the moment, let's suppose we leave the conversion factors as unknown variables. Then under this generic unit conversion, each variable $$x$$ should be replaced by a monomial where the powers of the base unit conversion factors represent the dimensions of the variable and $$x$$ becomes the numerical coefficient of the monomial.
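As a sketch of this bookkeeping (the function and exponent tuples here are illustrative, not from the text), a variable with base-dimension exponents $$(a, b, c)$$ over time, length, and mass becomes the monomial $$x \, T^a L^b M^c$$:

```python
# Sketch of the monomial bookkeeping: a variable x whose dimensions have
# exponents (a, b, c) over base dimensions (time, length, mass) becomes
# x * T**a * L**b * M**c under a generic unit conversion.

def convert(x, exponents, T, L, M):
    a, b, c = exponents                  # powers of time, length, mass
    return x * T**a * L**b * M**c

# Converting 1 newton (kg m / s^2) to CGS units (g cm / s^2, i.e. dynes):
# T = 1 (seconds stay seconds), L = 100 (cm per m), M = 1000 (g per kg).
print(convert(1.0, (-2, 1, 1), T=1.0, L=100.0, M=1000.0))   # 100000.0 dynes
```

The same function handles a velocity with exponents $$(-1, 1, 0)$$ or an area with exponents $$(0, 2, 0)$$; only the exponent tuple changes.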

## Dimensional analysis

Natural laws should exhibit symmetry of scale -- they should remain the same whether we are talking about small things or large things, and whatever units the observer has picked for their dimensions. All that should matter in natural laws is the relative sizes of the quantities of interest. This has historically been called the "principle of homogeneity of dimensions", or "Bridgman's principle of absolute significance of relative magnitude", but we will just call it the "scale-invariance principle".

Scale invariance implies that all natural laws can be expressed in terms of "dimensionless" proportions; an equation written entirely in such proportions is called dimensionless.

Galileo was one of the first promoters of scale invariance, and used it in his argument against the Aristotelian dogma that heavier objects fall more quickly than lighter objects. Galileo suggested that if you split a heavy stone into 2 pieces and drop them, then according to Aristotle, the lighter piece should slow the fall of the heavier piece, which is self-contradictory, since collectively, the pieces are heavier, and thus should fall faster.

### Relating length and weight

The first step in expanding the applications of mathematics beyond geometry to mechanics was relating size (measured as length, area, or volume) to weight, or in modern language, mass. Suppose we want to relate the size and weight of a sphere. We can measure the weight $$w$$ with a scale, and we can determine the volume as a function of the sphere radius $$r$$, so we should be able to find an equation of the form $$F(w,r)=0$$, or with conversion factors $$N$$ for weight units and $$L$$ for length units, $$F(w N, r L)=0$$.

Empirically, we can see that there is such a formula. Using lead, we would find that a sphere with a radius of $$r$$ inches weighs about $$w = 1.71 r^3$$ pounds. This formula isn't dimensionally complete -- there must be some other parameter in the formula that converts between the dimensions of weight and length. And of course, there is -- the density $$\rho$$, so we should be working with $$F(w, r, \rho) = 0$$. The density of a solid has dimensions of weight per length cubed, and so with dimension conversions, $$F(w N, r L, \rho N L^{-3}) = 0.$$

Taking $$L=1/r$$, and $$N = 1/w$$, we find $$F(1, 1, \rho r^3 / w) = 0$$ or $$w \propto \rho r^3$$. And, in fact, for spheres, we know the proportionality constant: $$\frac{w}{\rho r^3} = \frac{4}{3} \pi$$.
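We can check the empirical coefficient against this proportionality constant, assuming a handbook density for lead of about 0.41 pounds per cubic inch (an assumed reference value, not from the text):

```python
import math

# Check the empirical coefficient 1.71: with a handbook density for lead
# of about 0.41 lb per cubic inch (an assumed reference value), the sphere
# formula w = (4/3)*pi*rho*r^3 predicts the coefficient directly.

rho_lead = 0.41                          # lb / in^3
coefficient = (4 / 3) * math.pi * rho_lead
print(round(coefficient, 2))             # ~1.72, close to the observed 1.71
```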

It needs to be emphasized, now, that the form of the equation we've arrived at is a consequence of a modelling assumption -- the dimensions we've hypothesized for $$\rho$$. If the dimensions of density are weight per length to the $$n$$'th power, $F(w N, r L, \rho N L^{-n}) = 0,$ and dimensional analysis implies $$w \propto \rho r^n$$. So if $$\rho$$ were weight per length (like for a wire or chain), we would have $$w \propto \rho r$$, while if $$\rho$$ were weight per area (like a sheet of metal), we would have $$w \propto \rho r^2$$.

The same principles can be applied to a shape that is specified by more than one length. If we are studying the volume of a brick with sides $$x$$ by $$y$$ by $$z$$, then the relationship to weight $$w$$ will have some form $$F(w, x, y, z, \rho) = 0$$. Dimensionally, $$F(w N, x L, y L, z L, \rho N L^{-3}) = 0$$, and if we pick $$(N,L)=(1/w,1/x)$$, then $$F(1, 1, \frac{y}{x}, \frac{z}{x}, \frac{\rho x^3}{w} ) = 0$$, and $$w \propto \rho x^3 F^+(\frac{y}{x}, \frac{z}{x})$$. There are two things to point out here. First, the formula we have obtained is not as explicit as the one we obtained for the sphere. There is still an unknown function $$F^+$$ which, though related to the original $$F$$, is itself unknown. But if we think hard, we shouldn't be surprised. There is nothing in our mathematical specification so far that references the meanings of $$(x,y,z)$$ -- they could be referencing the dimensions of a pyramid, or a truncated cone, or even a torus, and our result must hold no matter what shape these 3 lengths are specifying. Second, there were actually 3 different conversion factors we might reasonably choose for length, each of which gives a different but equivalent proportionality relationship. For example, if instead $$L=1/y$$, then we would have concluded $$w \propto \rho y^3 \tilde{F}(\frac{x}{y}, \frac{z}{y})$$ for some related function $$\tilde{F}$$. This ambiguity only gets worse as we have more variables with the same dimensions. Dimensional analysis is most useful when the number of important model variables is close to the number of base dimensions.

### Scope of a hanging rope

Suppose we need to hang a rope between two telephone poles and we want to make sure the rope won't come down too close to a garage roof. The more slack is in the rope, the farther down it will hang, but we cannot tell how far down because the rope hangs with a curve. We can use dimensional analysis to find a relation for the lowest point of a rope hanging between two anchor points of equal height. Call the depth $$d$$ the vertical distance from the anchor points down to the lowest point on the rope. Well, the depth $$d$$ might depend on the length of the rope $$s$$, the distance $$w$$ between the anchors, the linear density of the rope $$\rho$$, and the gravitational force per mass $$g$$ through a function model $f( d, w, s, \rho, g) = 0.$ Although it appears quite general, this function has already encoded some of our intuition about the physics of the problem. For instance, we have implicitly assumed that the gravitational force on the rope is determined by a single physical parameter $$g$$, and that the rope has no stiffness that could affect its depth.

Now, let's try to analyze this equation. It is a very general formula -- it isn't even solved for one of the variables -- but the variables do have units, and that helps. Suppose we introduce conversion factors $$M$$ for units of mass, $$L$$ for units of length, and $$T$$ for units of time. Then, $f( d L, w L, s L, \rho M L^{-1}, g L T^{-2}) = 0.$ Close inspection shows that mass and time each appear only once in this equation. With some thought, taking $$L = 1/s$$ , $$M = {1}/{\rho s}$$ , $$T = \sqrt{g/s}$$, will lead to $f\left( \frac{d}{s}, \frac{w}{s}, 1, 1, 1 \right) = 0.$ So, the rope's depth $$d$$ will only depend on the rope's length $$s$$ and the distance between the anchors $$w$$. The density $$\rho$$ of the rope and the strength of gravity $$g$$ have no effect. And this makes some sense -- heavy chains and pieces of thread all seem to hang the same way in common experience. The other feature here is that the aspect ratios $$d/s$$ and $$w/s$$ are dimensionless numbers -- for a given physical problem, these same aspect ratios will give the same number, no matter what units we choose for our measurements!

Solving for $$d$$, and using a new function $$\phi()$$ to express the unknown dependence, $d = s \phi\left(\frac{w}{s} \right).$

If the rope's length equals the width between the anchors ($$s=w$$), the rope has to be perfectly straight, so we should have $$d=0$$. On the other hand, when the anchors are right next to each other ($$w=0$$), we expect the depth to be half the length ($$d = s/2$$). These imply $$\phi(0) = 1/2$$, $$\phi(1) = 0$$, and $$\phi()$$ is a monotone decreasing function in between. So, we have determined quite a lot about the shape of a hanging rope without doing very much math.

But as a heads-up, you should be warned that we got rather lucky in choosing to normalize our variables by the rope's length rather than the width between anchors. If we had normalized by the width $$w$$, we would arrive at the alternative dimensionless relationship $$d = w \psi\left(\frac{s}{w} \right)$$. While algebraically equivalent to our first dimensionless relationship for the depth $$d$$, this form is indeterminate when $$w=0$$; we would get in that case that $$d = 0 \times \psi(\infty)$$. Our normalization by the rope length $$s$$ avoided this issue since for all physically meaningful versions of the problem, the length $$s$$ will always be positive. For new problems, the best choice of normalization factors may not initially be obvious, and it may be worthwhile to consider more than one possibility.
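Dimensional analysis alone cannot produce $$\phi()$$, but if we borrow the classical catenary solution $$y = a \cosh(x/a)$$ for a hanging rope (a known result, not derived here), we can tabulate $$\phi$$ numerically and confirm the endpoint values and monotonicity. Writing $$u = (w/2)/a$$, the arc length gives $$s/w = \sinh(u)/u$$ and the depth satisfies $$d/s = \tanh(u/2)/2$$:

```python
import math

# Numerical tabulation of phi(w/s), assuming the classical catenary shape
# y = a*cosh(x/a) for the hanging rope (a known result, not derived in the
# text). With u = (w/2)/a: arc length gives s/w = sinh(u)/u, and the
# depth satisfies d/s = tanh(u/2)/2.

def phi(w_over_s):
    target = 1.0 / w_over_s              # s/w = sinh(u)/u
    lo, hi = 1e-9, 50.0                  # bracket for bisection
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.sinh(mid) / mid < target:
            lo = mid
        else:
            hi = mid
    u = 0.5 * (lo + hi)
    return math.tanh(u / 2) / 2          # d/s

for ratio in (0.01, 0.25, 0.5, 0.75, 0.99):
    print(round(ratio, 2), round(phi(ratio), 4))
# phi decreases from ~1/2 toward 0 as w/s goes from 0 to 1
```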

### Dimensions without scale invariance

Probabilities and angles are always dimensionless, even though they still have units (probabilities can be expressed as odds, percentages or fractions of a whole, while angles can be expressed in radians, degrees, or minutes).

### Gravity and the pendulum period

We mentioned previously that the great debate over the earth's shape began when the period of a pendulum was found to be different in French Guiana than it was in Paris (relative to the astronomical measurement of the length of a day), and that this was interpreted to imply a difference in the strength of gravity between the two places. But this is an imprecise description, and we should be able to do better. Suppose we hang a lead bob from the end of a long thin wire to make a pendulum, pull it to one side, and let it go from rest so that it swings back and forth with a steady period $$p$$. If we measure the period, what does that tell us about gravity's acceleration $$g$$?

There are a few other variables that we may also want to measure -- the length of the pendulum ($$r$$), the mass of the bob ($$m$$), and the initial angle ($$\theta _ 0$$). But the relationship between these variables and the period is unknown, so the best we can do at first is to write a general functional relationship $g = \beta(p,r,m,\theta _ 0),$ where $$\beta()$$ is an unknown function. To determine $$\beta()$$, it looks like we need to explore the values of all four variables. However, we can greatly simplify this exploration by considering the dimensions of our variables.

If we dimensionalize each variable with a generic unit conversion, based on base dimensions of time $$T$$, length $$L$$, and mass $$M$$, our function becomes $g L T^{-2} = \beta(p T, r L , m M, \theta _ 0 ).$ Note that the initial angle $$\theta _ 0$$ has no unit-conversion factors attached.

Let $$L = 1/r$$, $$T = 1/p$$, and $$M =1/m$$. Then $\frac{g p^2}{r} = \beta(1,1,1,\theta _ 0).$ If we rewrite the function on the right as $$\beta(\theta _ 0)$$ and solve for gravity, we find $g = \frac{r}{p^2} \beta(\theta _ 0).$ Note that the prediction for gravity is independent of the mass. So if we observe a swinging pendulum, we can say something about the gravitational acceleration in the place we observe it (on the moon compared to earth, say), as long as we know the universal function $$\beta(\theta _ 0)$$.
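In the small-angle limit, elementary mechanics does supply the universal constant: $$\beta(\theta_0) \to 4\pi^2$$ as $$\theta_0 \to 0$$ (equivalently, $$p = 2\pi\sqrt{r/g}$$), which lets us sketch the gravity estimate in code:

```python
import math

# Small-angle sketch: elementary mechanics gives beta(theta_0) -> 4*pi^2
# as theta_0 -> 0 (the familiar p = 2*pi*sqrt(r/g)), so a measured period
# and length yield an estimate of g.

def gravity(period, length, beta=4 * math.pi**2):
    return (length / period**2) * beta   # g = (r / p^2) * beta(theta_0)

p = 2 * math.pi * math.sqrt(1.0 / 9.81)  # period of a 1 m pendulum on Earth
print(round(p, 2))                        # about 2 seconds
print(round(gravity(p, 1.0), 2))          # recovers 9.81 m/s^2
```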

If we compare the prediction of this formula to some data, we find good agreement with at least one set of observations.

### Pressure in a soap bubble

A soap bubble is a wondrous object, so light it can float on the air, but strong enough to contract and trap a palm-size pocket of air in a nearly perfect sphere. We call this contraction by the bubble "surface tension". Air pressure from outside the bubble also pushes on it to contract. However, the air inside the bubble resists this contraction. As the bubble gets smaller, the pressure in the bubble must increase until it reaches a point where the difference in pressure between the inside and outside of the bubble balances the contraction forces created by the surface tension.

Without knowing any more, all we can say is that the radius $$r$$, pressure $$p$$, and surface tension $$s$$ must satisfy an equation $\chi(r, p, s)=0$ where $$r$$ has dimensions of length, $$p$$ has units of newtons per square meter, or mass per length per time squared in base dimensions, and $$s$$ has units of newtons per meter, or mass per time squared in base dimensions. Performing our generic unit conversion, we find $\chi(r L, \, p L^{-1} T^{-2} M, \, s T^{-2} M)=0.$ On first pass, it may look like we have 3 base dimensions, but on closer inspection, we see that mass and time do not appear independently. Instead of using the standard base dimensions of length, time, and mass, let us use length $$L$$ and mass per square time $$A = T^{-2} M$$. Then under our generic unit conversion, $\chi(r L, p A/L, s A)=0.$ Taking $$L = 1/r$$ and $$A = 1/s$$, we find $\chi\left(1, \frac{p r}{s}, 1\right)=0,$ which is equivalent to $\frac{pr}{s} = \chi^{-1}(0),$ where $$\chi^{-1}(0)$$ is an as-yet unknown constant.

Having a formula like this tells us several things already, even without knowing the constant. For a fixed surface tension, the product $$pr$$ stays constant, so the pressure difference between the inside and outside of the bubble must decrease as the bubble is enlarged. Since the pop of a bubble depends on the pressure difference $$p$$, small bubbles, with their greater pressure differences, will pop more loudly than large bubbles. (Think of champagne bubbles frothing vs. giant soap bubbles!)
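For reference, the constant is fixed by the physics: the Young-Laplace relation for a soap film, which has an inner and an outer surface, gives $$pr/s = 4$$. A quick sketch with an assumed, order-of-magnitude surface tension of 0.025 N/m for soapy water:

```python
# The constant chi^{-1}(0) is fixed by physics: the Young-Laplace relation
# for a soap film with two surfaces (inner and outer) gives p = 4 s / r.
# The surface tension s ~ 0.025 N/m is an assumed typical value for soapy
# water, used only for illustration.

def pressure_difference(radius_m, tension=0.025):
    return 4 * tension / radius_m        # pascals

print(pressure_difference(0.05))    # palm-size bubble, r = 5 cm: ~2 Pa
print(pressure_difference(0.0005))  # champagne-size, r = 0.5 mm: ~200 Pa
```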

But if you are feeling a little skeptical about getting something for nothing, you are right. We've obtained our formula by assuming the pressure inside the bubble depended only on the size and surface tension -- which is implicitly assumed to be a constant independent of other factors like bubble size. That's reasonable for a bubble which can create more surface as needed from the surrounding fluid. But this is not a good model for a balloon, where the tension changes as the balloon is inflated. And experience tells us that popping a large balloon will be louder than popping a small balloon.

## The crab and the seagull


Dimensional analysis can sometimes be applied to models that are algorithmic computer simulations. Suppose, for example, we want to make a model to study how the behavior of a prey animal like the fiddler crab affects its ability to avoid predation by a seagull on a marsh mudflat at low tide. We can imagine lots of decision rules for when and where the fiddler crab should run. A simulation would give us much more power for exploring decision rules than the usual kinds of equations we study. But whatever rules we decide to study, we can still expect certain basic parameters to be important in estimating the probability of the crab being caught by the gull. These might include the speed of the crab $$c$$, the speed of the gull $$g$$, the warning time $$w$$ that the crab receives before the gull becomes an immediate risk, and the spatial scale $$d$$ of the simulation. So, we expect the probability $$p$$ of being caught to be a function of these four parameters, $$p = \zeta(c,g,w,d)$$.

The probability $$p$$ is dimensionless, the speeds $$c$$ and $$g$$ have dimensions of length per time, the warning time $$w$$ has dimensions of time, and the spatial scale $$d$$ has dimensions of length. Introducing conversion factors $$T$$ and $$L$$ and taking $$L = 1/d$$ and $$T = g/d$$,

$$p = \zeta\left(\frac{c}{g},\, 1,\, \frac{g w}{d},\, 1\right)$$

So exploring the simulation space of interest requires varying only two dimensionless ratios -- the relative speed $$c/g$$ and the relative warning distance $$g w/d$$ -- rather than all 4 parameters. Much simpler.
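A minimal Monte Carlo sketch illustrates the collapse; the decision rules here (crab runs straight home, gull arrives after the warning time plus travel time) are invented for illustration and are not from any particular model:

```python
import random

# Toy Monte Carlo sketch; the decision rules are invented for illustration.
# The crab sits a uniform-random distance x in [0, d] from its burrow and
# runs home at speed c. The gull starts a uniform-random distance y in
# [0, d] away and arrives w + y/g seconds after the warning. The crab is
# caught if it is still outside its burrow at that moment.

def catch_probability(c, g, w, d, trials=100_000, seed=0):
    rng = random.Random(seed)
    caught = 0
    for _ in range(trials):
        x = rng.uniform(0, d)            # crab's distance from burrow
        y = rng.uniform(0, d)            # gull's starting distance
        if x / c > w + y / g:            # crab can't make it home in time
            caught += 1
    return caught / trials

# Rescaling all lengths and speeds by 10 leaves c/g and g*w/d unchanged,
# so the estimated probability should be (statistically) the same:
print(catch_probability(c=0.5, g=5.0, w=1.0, d=20.0))
print(catch_probability(c=5.0, g=50.0, w=1.0, d=200.0))
```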

## Differential equations: logistic growth

The dimensional analysis techniques can be applied directly to differential equations. As in simulations this is often very useful because it reduces the number of parameters that need to be considered. One elementary example of this can be found in the logistic equation describing the growth of bacteria in a test tube. Let $$n(t)$$ be the number of cells at time $$t$$. The population increases in size according to $\frac{dn}{dt}= r \, n \left(1-\frac{n}{k}\right)$ where $$r$$ is the growth rate with dimensions of 1/time while $$k$$ is called the carrying capacity and has dimensions of population size. To solve this differential equation, we also need an initial condition specifying the population size $$n_0$$ at some time $$t_0$$. So, in the abstract, if the population starts at size $$n_0$$ at time $$t_0$$ and ends at size $$n _ 1$$ at time $$t_1 > t_0$$, then the logistic growth model specifies a relationship $f(n_0,t_0,n_1,t_1,r,k) = 0.$

The first observation we can make is that the logistic growth equation is autonomous -- it does not depend explicitly on the absolute time. This means that starting from initial population $$n_0$$ at time $$t_0$$ and running to time $$t _ 1$$ will give the same answer as starting with the same population size at time $$0$$ and running to time $$t = t_1 - t_0$$. This is a property of all autonomous systems of differential equations, and the reason we will often get away with assuming that the initial condition is applied at time $$t=0$$.
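Before doing any dimensional analysis, we can verify the scaling numerically: integrating the logistic equation for two different parameter pairs $$(r, k)$$ that share the same values of $$n_0/k$$ and $$r(t_1 - t_0)$$ should give the same value of $$n_1/k$$. A sketch with crude Euler steps:

```python
# Numerical check of the scaling collapse: Euler integration of
# dn/dt = r*n*(1 - n/k) for two parameter sets sharing the same
# dimensionless initial condition n0/k = 0.1 and horizon r*t = 3.

def logistic(n0, r, k, t_end, steps=20_000):
    n, dt = n0, t_end / steps
    for _ in range(steps):
        n += dt * r * n * (1 - n / k)
    return n

a = logistic(n0=0.1, r=1.0, k=1.0, t_end=3.0)             # already n/k
b = logistic(n0=50.0, r=0.5, k=500.0, t_end=6.0) / 500.0  # rescale by k
print(round(a, 4), round(b, 4))   # the two rescaled answers agree
```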

The second observation we can make is that we can perform dimensional analysis on the logistic growth equation to reveal scaling symmetries in its solutions. Both $$n$$ and $$k$$ have dimensions of abundance, while $$t$$ has dimensions of time and $$r$$ has dimensions of inverse time. Measuring abundance in units of $$k$$ and time in units of $$1/r$$, $\begin{gather} f\left(\frac{n_0}{k},0,\frac{n_1}{k},(t_1 - t_0) r,1,1\right) = 0. \end{gather}$ Introducing new variables $$\hat{n}$$, $$\hat{n} _ 0$$ and $$\hat{t}$$ such that $$n / k = \hat{n}$$, $$r (t_1-t_0) = \hat{t}$$, and $$n _ 0 / k = \hat{n} _ 0$$, and substituting back into the differential equation, $\begin{gather} \frac{d\hat{n}}{d\hat{t}}=\hat{n}(1-\hat{n}), \quad \hat{n}(0) = \hat{n}_0. \end{gather}$ We see now that all solutions of our original equation can be written as solutions of this equation without any parameters. If $$\hat{n}(\hat{t},\hat{n} _ 0)$$ is the solution of this non-dimensionalized logistic equation for initial condition $$\hat{n} _ 0$$, then when converted back to dimensional variables and our original calendar timescale, $$n _ 1 = k \hat{n}( r (t _ 1-t _ 0), n _ 0/k)$$.

## A chess game

While dimensional analysis is a general and powerful tool, it is not universally applicable. It is not as useful, for example, in the study of systems that are fundamentally discrete, or that are structured with many variables having the same units. An illustrative example of this is the modelling of a chess game. When we play chess, it is traditional to represent the state of a game using an $$8 \times 8$$ checkerboard with pieces arrayed on the board. One reason we like this representation is that it helps us leverage our visual-processing wetware for decision-making. However, a computer might consider it an inefficient way to represent the game's state, particularly when communicating a single move or the game's full state in an end-game where most squares are empty.
Instead, one can list each piece and the board coordinates of that piece (while it's still on the board), just as we might for objects in coordinate geometry. And since we now have a state represented in terms of many coordinates, all with the same units, we might consider using dimensional analysis. But chess does not possess the same scale invariances we intuitively expect of natural systems. The board is discrete and has a fixed size. The rules of the game rely on these board characteristics -- the movement rules for pawns, knights, and kings are all specified in terms of the arbitrary size of one square. If an observer chose to interpret the game's rules using a unit of distance different from 1 checkerboard square, the game would change into something new. Not all the pieces have movement rules that are so scale-dependent: the rooks, bishops, and queens are restricted in their directions of movement, but not their distances. But the lack of scale invariance in the other rules still implies that any redimensionalization of chess will change the game.

## Dimensionless symmetry groups

So far, we've studied a few problems, each with their own dimensionless groups. The procedures we've invoked can be summarized as follows.

1. Choose a set of base dimensions.
2. Express the dimensions of all parameters as products of powers of the base dimensions.
3. Hypothesize scale invariance for the appropriate dimensions.
4. Pick convenient units to exploit the scale symmetry and arrive at general symmetry forms for our laws.

Buckingham $$\Pi$$ theorem: If there are $$M$$ dimensional parameters involving $$N$$ independent base dimensions, then the system has $$M-N$$ dimensionless groups. For any dimensioned natural law $f(q _ 1,q _ 2, \ldots ,q _ M) = 0$ where the $$q _ i$$'s are the measurable variables with $$N$$ independent base dimensions and scale invariance holds, the law can be restated as $F(\Pi _ 1,\Pi_2, \ldots
,\Pi _ {M-N})=0$ where each $$\Pi _ i$$ is a dimensionless group constructed as a monomial of the $$q$$'s, having the form $\Pi_i = q^{a _ i} = q_1^{a _ {i1}} q _ 2^{a _ {i2}} \cdots q _ M^{a _ {i,M}},$ where the exponents $$a _ {ij}$$ are rational numbers (they can always be taken to be integers by raising the group to a power that clears the denominators) chosen so that all base dimensions cancel out.

### Linear algebra approach to construction of dimensionless groups

#### Ship-speed example

Consider a ship moving across the surface of the ocean. One of the main sources of drag for the ship is its wake, which consists of surface waves moving, along with the ship, under the influence of gravity. Parameters:

• $$r$$ = ship length
• $$v$$ = velocity, with dimensions of length per time
• $$g$$ = gravity's acceleration, with dimensions of length per time squared
• $$\rho$$ = density of water, with dimensions of mass per length cubed
• $$\mu$$ = viscosity, with dimensions of mass per length per time

If we conveniently arrange our table of variables and dimensions,

| | $$r$$ | $$g$$ | $$v$$ | $$\mu$$ | $$\rho$$ |
| --- | --- | --- | --- | --- | --- |
| length, $$L$$ | 1 | 1 | 1 | -1 | -3 |
| time, $$T$$ | 0 | -2 | -1 | -1 | 0 |
| mass, $$M$$ | 0 | 0 | 0 | 1 | 1 |

Applying the Buckingham $$\Pi$$ theorem, there are 5 variables and 3 base dimensions:

• The dimension of the row space = 3
• The dimension of the row nullspace = 2
• The dimension of the column space = 3
• The dimension of the column nullspace = 0

So, from the $$\Pi$$ theorem, we deduce that the row nullspace must be spanned by two linearly independent basis vectors, each corresponding to a dimensionless group of the model.
In reduced row-echelon form (RREF), the dimension matrix $\left[\begin{matrix} 1 & 1 & 1 &-1 & -3 \\ 0 & -2 & -1 &-1 & 0 \\ 0 & 0 & 0 & 1 & 1 \end{matrix}\right]$ becomes $\left[\begin{matrix}1 & 0 & \frac{1}{2} & 0 & - \frac{3}{2}\\0 & 1 & \frac{1}{2} & 0 & - \frac{1}{2}\\0 & 0 & 0 & 1 & 1\end{matrix}\right]$ or, after scaling rows to clear fractions, $\left[\begin{matrix}2 & 0 & 1 & 0 & -3\\0 & 2 & 1 & 0 & -1\\0 & 0 & 0 & 1 & 1\end{matrix}\right].$ There are two free variables (columns 3 and 5), which we can use to construct basis vectors of the row nullspace. The general nullspace element is $C_1 \begin{bmatrix} -1 \\ -1 \\ 2 \\ 0 \\ 0 \end{bmatrix} + C_2 \begin{bmatrix} 3 \\ 1 \\ 0 \\ -2 \\ 2 \end{bmatrix}.$

```python
from sympy import Matrix, Mul, latex
from sympy.abc import r, g, v, mu, rho

vs = (r, g, v, mu, rho)
A = Matrix([[1, 1, 1, -1, -3],
            [0, -2, -1, -1, 0],
            [0, 0, 0, 1, 1]])
ns = A.nullspace()
# square each monomial to clear the half-integer exponents
F = lambda k: Mul(*[i**j for i, j in zip(vs, ns[k])])**2
print(latex((F(0), F(1))))
```

This leads to an equation $F\left( \frac{v^{2}}{g r}, \frac{g r^{3}\rho^{2}}{\mu^{2}} \right)=0.$ This is a perfectly good dimensionless formula, but intuitively, we find it easier to work with dimensionless groups that have small integer exponents and sparse overlap of variables. If we multiply the second group by the first and take a square root, we get the more standard (and slightly simpler) version $F\left ( \frac{v^{2}}{g r}, \quad \frac{r \rho v}{\mu} \right )=0.$ The first dimensionless term is called the Froude number ($$Fr:=\dfrac{v^{2}}{g r}$$) while the second is called the Reynolds number ($$Re:=\dfrac{r \rho v}{\mu}$$). For a ship in water, $$\rho \approx 10^3$$ kg/$$\text{m}^3$$, $$\mu \approx 10^{-3}$$ kg/s/m, $$r \approx 10$$ m, and $$v \approx 1$$ m/s, so the Reynolds number is about $$10^7$$ -- very large. If the Reynolds number is large, we can asymptotically expand around its reciprocal being small, and hope to find $v \propto \sqrt{gr}.$ Thus, the speed is proportional to the square root of the water-line length, and longer boats will be faster than shorter boats, all else equal.
Also, we have the unconfirmed prediction that ships will be faster on planets with lower gravity, and slower on planets with higher gravity.

On the Froude number: In 1857, Froude was consulted on the performance of the Great Eastern, a huge iron ocean liner that, in spite of great efforts, was so slow she wasn't worth the cost of fuel to run. He began towing models, first in a creek, then in a purpose-built tank. He noticed that geometrically similar small and large hulls produced different wave patterns -- but when larger hulls were towed at greater speeds (making $$Re$$ large!), he could find a particular speed at which the wave patterns were almost identical. That is, when $$v^2/(g r)$$ was the same for the small and large ships! When using scale models to predict what a prototype will do, the Froude number is typically kept constant so the surface wave pattern will remain the same.

## GI Taylor's analysis of shock-wave speeds

Trinity site, New Mexico, 1945

These photographs were declassified and published in 1947 in Life Magazine and other news outlets as part of a public relations campaign on nuclear weapons and nuclear power. It was thought that the pictures were benign and would not reveal any more about America's top-secret nuclear weapons program than the public already knew. That was wrong. In 1950, G.I. Taylor, an English fluid dynamicist who had worked independently of the Manhattan project, published a pair of papers (one, two) explaining a 1941 report and using the pictures to calculate the bomb's energy yield.

Trying to recreate Taylor's thought process might go something like this. (It did not actually go this way, if you look at the above papers.)

1. These are great pictures! Can we learn anything from them?
2. Plot the data (regular and log-log).
3. Create a hypothesis -- a scaling-law relationship between radius and time!
4. Derive the scaling law by dimensional analysis.
5. Check that the curve's slope matches our prediction!
6.
Use a little physics to estimate the total energy based on the intercept of the fitted curve.

Taylor knew his thermodynamics very well. In particular, he knew that the expansion of a gas as it absorbs energy (under slowly changing conditions!) depends on the specific heat of the gas at constant pressure $$C _ p$$ and the specific heat of the gas at constant volume $$C _ v$$. The specific heat is the change in internal energy per change in temperature per amount of gas (measured by mass). In the case of constant volume, all of the energy from an increase in temperature must be absorbed through increases in pressure due to the acceleration of gas molecules. In the case of constant pressure, an increase in temperature corresponds to an increase in volume, so molecular acceleration is offset by a decrease in density.

So, to calculate the energy of the Trinity bomb test from the pictures in Life, Taylor needed a formula involving the energy $$E$$, the time $$t$$, the radius $$R$$, the initial atmospheric density $$\rho _ 0$$, and the specific heats $$C _ p$$ and $$C _ v$$: $f( E, t, R, \rho _ 0, C _ p, C _ v) = 0,$ for some unknown function $$f$$. (The fireball is moving so fast that the atmospheric density has no chance to change in the short term.) The specific heats have units of joules per kilogram per degree Kelvin.

|          | $$E$$ | $$t$$ | $$R$$ | $$\rho _ 0$$ | $$C _ p$$ | $$C _ v$$ |
|----------|----|---|---|----|----|----|
| distance |  2 | 0 | 1 | -3 |  2 |  2 |
| time     | -2 | 1 | 0 |  0 | -2 | -2 |
| mass     |  1 | 0 | 0 |  1 |  0 |  0 |
| temp     |  0 | 0 | 0 |  0 | -1 | -1 |

We can save ourselves some work in the row-reduction by re-arranging the rows and columns of the matrix ...

|          | $$t$$ | $$R$$ | $$E$$ | $$\rho _ 0$$ | $$C _ p$$ | $$C _ v$$ |
|----------|---|---|----|----|----|----|
| time     | 1 | 0 | -2 |  0 | -2 | -2 |
| distance | 0 | 1 |  2 | -3 |  2 |  2 |
| mass     | 0 | 0 |  1 |  1 |  0 |  0 |
| temp     | 0 | 0 |  0 |  0 | -1 | -1 |

Now, by row-reduction,

|          | $$t$$ | $$R$$ | $$E$$ | $$\rho _ 0$$ | $$C _ p$$ | $$C _ v$$ |
|----------|---|---|---|----|---|---|
| time     | 1 | 0 | 0 |  2 | 0 | 0 |
| distance | 0 | 1 | 0 | -5 | 0 | 0 |
| mass     | 0 | 0 | 1 |  1 | 0 | 0 |
| temp     | 0 | 0 | 0 |  0 | 1 | 1 |

So, there is a 2-dimensional nullspace with the spanning basis $$\{ [-2,5,-1,1,0,0]^T, [0,0,0,0,1,-1]^T \}$$.
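As a sketch of how to double-check this by machine, we can hand the re-arranged dimension matrix to sympy and confirm that the two exponent vectors above really do span its nullspace:

```python
from sympy import Matrix, zeros

# Re-arranged dimension matrix: columns t, R, E, rho0, Cp, Cv;
# rows time, distance, mass, temperature.
M = Matrix([
    [1, 0, -2,  0, -2, -2],   # time
    [0, 1,  2, -3,  2,  2],   # distance
    [0, 0,  1,  1,  0,  0],   # mass
    [0, 0,  0,  0, -1, -1],   # temperature
])

# The basis vectors claimed in the text:
g1 = Matrix([-2, 5, -1, 1, 0, 0])   # -> t^-2 R^5 E^-1 rho0
g2 = Matrix([0, 0, 0, 0, 1, -1])    # -> Cp / Cv

print(len(M.nullspace()))           # the nullspace is 2-dimensional
print(M * g1 == zeros(4, 1), M * g2 == zeros(4, 1))
```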
Then the physical relationship, expressed in terms of the two corresponding dimensionless groups, should have the form $F(t^{-2} R^{5} E^{-1} \rho_0^{1}, \; C _ p C _ v ^{-1}) = 0.$ The ratio of specific heats $$C _ p / C _ v$$ is commonly called the adiabatic constant $$\gamma$$. Substituting and solving for $$R$$, $R = E^{1/5} t^{2/5} \rho_0^{-1/5} S(\gamma),$ or, after taking logarithms, $\begin{gather*} \log R = \frac{1}{5}\left( \log E + 2 \log t - \log \rho _ 0 \right) + \log S(\gamma). \end{gather*}$

Taylor was very suspicious of the functional dependence on the adiabatic constant $$\gamma$$ -- a nuclear bomb explosion does not seem like a situation where things change slowly. And he was unsure which value to use for $$\gamma$$: if the atmosphere was acting like its usual diatomic self, $$\gamma = 1.4$$, but if the explosion was so hot that it split nitrogen and oxygen molecules up into individual atoms, $$\gamma = 1.67$$. But he didn't give up, and he was able to use some small explosive experiments to estimate $$S(\gamma)\approx 1$$ and obtain some formulas. Now, in 1950, he had the pictures from Life magazine with which to work.

```
# time (milliseconds), radius (meters)
#
# Original data used by GI Taylor to estimate
# the energy of the Trinity test's atomic bomb.
# Data represent the radius of the atmospheric
# shockwave of the bomb explosion at sequential
# time points, and were extracted from time-stamped
# and scaled pictures released with the announcement
# of the bomb.
#
0.1,11.1
0.24,19.9
0.38,25.4
0.52,28.2
0.66,31.9
0.80,34.2
0.94,36.3
1.08,38.9
1.22,41.0
1.36,42.8
1.50,44.4
1.65,46.0
1.79,46.9
1.93,48.7
3.26,59.0
3.53,61.1
3.80,62.9
4.07,64.3
4.34,65.6
4.61,67.3
15.0,106.5
25.0,130
34.0,145
53.0,175
62.0,185
```

From this data, we can determine the y-intercept. Although he was somewhat surprised by how straight the line really was, G. I.
Taylor was then able to solve for the energy $$E$$, estimating the explosion's size at 16.8 kilotons, with a range from 9.5 to 34 kilotons. With other methods, the strength was estimated to be the same as 20 kilotons of TNT.

[Show code]

```python
from numpy import *
from pylab import figure, plot, legend, xlabel, ylabel, xlim, xticks, yticks, savefig, show

# data copied from webpage, saved as 'trinity.csv'
bomb_data = loadtxt('trinity.csv', delimiter=',')
bomb_data[:, 0] = bomb_data[:, 0] / 1000  # converting milliseconds to seconds
bomb_data[:, 1] = bomb_data[:, 1] * 100   # converting meters to centimeters
# conversions to match Taylor's paper

# First, we log-transform our data set (log base 10)
log_data = log10(bomb_data)

# Now, we construct A and b for the least-squares problem A x = b
b = log_data[:, 1:2] * 5. / 2.
A = hstack([ones((len(b), 1)), log_data[:, 0:1]])

# solve the normal equations A^T A x = A^T b for the intercept and slope
x = linalg.solve(A.T.dot(A), A.T.dot(b))
k, m = float(x[0]), float(x[1])  # float calls convert 0-d arrays to scalars
print(k, m)

s = linspace(-4.1, -1, 100)
figure(figsize=(10, 8))
plot(log_data[:, 0], log_data[:, 1] * 5. / 2., 'ro', markersize=8)
plot(s, m * s + k, 'b', linewidth=2)
legend(['Data', r'Best fit, $(5/2)\log_{10}R=%.3f\log_{10}t+%.3f$' % (m, k)],
       loc='upper left', fontsize=20)
xlabel(r'$\log_{10}t$ (s)', fontsize=24)
ylabel(r'$(5/2)\log_{10}R$ (m)', fontsize=24)
xlim([-4.1, -1])
xticks(fontsize=20)
yticks(fontsize=20)
savefig('Taylor.png')
show()
```

Notes: (1) The plot on the left is from Taylor's 1950 paper. (2) The y-axes are different since the plot on the left measures the radius in centimeters, while the plot on the right measures it in meters.
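As a rougher alternative to the full least-squares fit, the scaling law with $$S(\gamma) \approx 1$$ gives $$E \approx R^5 \rho_0 / t^2$$, which we can evaluate at a single data point. The sea-level air density ($$\rho_0 \approx 1.25$$ kg/m$$^3$$) and the TNT energy conversion (1 kiloton $$\approx 4.184\times 10^{12}$$ J) are assumed values, not part of the original data:

```python
# One (time, radius) pair from the Trinity data: t = 15 ms, R = 106.5 m
t, R = 15.0e-3, 106.5
rho0 = 1.25         # assumed sea-level air density, kg/m^3
kiloton = 4.184e12  # assumed joules per kiloton of TNT

# From R = (E t^2 / rho0)^(1/5) with S(gamma) ~ 1:
E = R**5 * rho0 / t**2
print(E, E / kiloton)   # roughly 7.6e13 J, about 18 kilotons
```

This one-point estimate lands between Taylor's 16.8-kiloton fit and the 20-kiloton figure from other methods, which is about as good as such a crude calculation can hope to be.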

## Concluding remarks

It may seem like nature prefers one system of base dimensions (length, time, mass, ...), but there are actually many ways to choose base dimensions. For example, instead of using time, length, and mass as base dimensions, we could use time, length, and force, in which case mass would have the derived dimensions of force times time squared per length. In the traditional American system of units, for example, we regularly use pounds (a unit of force) instead of slugs or kilograms (units of mass). And in the theory of quantum mechanics, it is common to use momentum rather than mass as a base unit.

Dimensional analysis is largely viewed as much art as science. While there are relevant mathematical theorems like Buckingham's Pi theorem, there is still an element of arbitrariness in how we do dimensional analysis, and this arbitrariness plays into how we think about a problem -- particularly when we start to consider the relative magnitudes of quantities for approximations.

# Exercises

1. If $$a$$, $$b$$, and $$c$$ are the lengths of the sides of a generalized triangle, and $$A$$, $$B$$, and $$C$$ are the corresponding angles, which of the following are dimensionally consistent formulas? (i.e., the dimensions of all the terms being added are the same.)

1. $$a^2 + b^2 - c^2 = \cos C$$
2. $$a^2 + b^2 - c^2 = 4 a b \cos C$$
3. $$c = \dfrac{a}{b} \cos B + \dfrac{b}{a} \cos A$$
4. $$c = a \cos B + b \cos A$$
2. Suppose a classmate told you that the formula for the time $$t$$ a projectile is in the air (ignoring air resistance) is calculated from the formula $$g t = v + \sqrt{v^2 + 2 y}$$ where $$v$$ is the initial vertical velocity, $$y$$ is the initial height above the ground, and $$g$$ is the acceleration of gravity. Based on principles of dimensional analysis, how can you tell this equation must be wrong? Can you guess your classmate's mistake?

3. A crank has suggested that the height $$h$$ of a child can be predicted from the height $$x$$ of the mother and the height $$y$$ of the father using the formula $h = \sqrt{ (x+1) (y-1) }.$ Use the theory of dimensional analysis to critique this formula.

4. In 1672, as part of a scientific expedition to Cayenne in French Guiana, Jean Richer observed that his Paris clocks were losing 148 seconds each day. Giovanni Cassini guessed that this loss was a result of weaker gravity. If so, using our dimensional analysis example from class, determine how much stronger gravity was in Paris.

5. The Deborah number is a dimensionless number used in rheology to describe the "solidness" of a "fluid". It is the ratio of the "relaxation time" of a fluid to the observation time of interest. If the Deborah number is very large, then the fluid relaxes very slowly relative to the observation time and acts like a solid. Deborah numbers near one indicate a fluid that relaxes at about the same speed with which we interact with it, and really small Deborah numbers indicate a "hydrostatic" situation where the fluid relaxes so quickly that we can assume it is always near equilibrium. Estimate Deborah numbers for each of the following scenarios, and explain your reasoning.

1. Pouring maple syrup onto blueberry pancakes.
2. The inflation of an airbag during a car accident.
3. The erosion of the Appalachian mountain chain into a flat plane.
4. A house built on top of a glacier.
5. A houseboat on Lake Union in Seattle.
6. The flow of a soap film in a popping soap bubble.
7. In the old TV show Sea Hunt, a swimmer has his leg stuck under a rock. The waves in the water around him vary from a foot over his head to below his head.
6. A bicyclist traversing a flat circular race track leans into the turn to keep her bicycle balanced. When the turn is tight, she has to lean a lot, but when the turn is gradual, she doesn't have to lean very much.

1. Find 4 dimensional variables that together could determine the angle $$\theta$$ at which she should lean.

2. Determine the dimensions of each variable, in terms of the base dimensions of length, time, mass, charge, and temperature.

3. According to the Buckingham $$\Pi$$ theorem, how many dimensionless groups can be constructed from your variables?

4. Find a general formula for a functional relationship between your variables and the angle of lean, expressed in terms of dimensionless groups.

5. Which of your initial 4 variables does not actually affect the angle of her lean?

7. In Principia Mathematica, Isaac Newton was the first to attempt to calculate the speed of sound in air. Let's replicate his result.

1. Determine three commonly known properties of an ideal gas that might affect the speed of sound ($$c$$) in that gas.

2. Now, use dimensional analysis to come up with a formula for the speed of sound $$c$$ as a function of these three values.

8. (McMahon and Bonner, 1983, p. 76-78) When you cook a roast, the cooking time ($$T$$) is defined as the time needed for the center of the roast to reach a pre-defined temperature. The cooking time actually depends on the thermal conductivity ($$k$$) of the roast, its density ($$\rho$$), the radius of the roast ($$R$$), and the specific-heat capacity at constant pressure ($$s_p$$).

1. Find a dimensionless group relating these five variables.

2. If we halve the size of the roast, how should the cooking time be changed, assuming everything else stays the same?

9. Nuclear fission occurs as neutrons bounce around atoms of plutonium, shattering some nuclei and releasing more neutrons in a chain reaction. Whether a chain reaction keeps going or dies out depends on how much bouncing occurs before the neutrons escape or are absorbed. In a simple theory of the chain reaction, the minimum mass of a sphere of uranium needed to sustain a nuclear chain reaction (called the "critical mass") depends on the following three variables.

• The density of uranium $$\rho$$ (mass per meters cubed)
• The diffusion rate of neutrons in uranium $$D$$ (area per time)
• The rate of nuclear collisions $$c$$ (per time).

Use dimensional analysis to find a formula for the critical mass $$m$$ as a function of these three variables.

10. (From Continuum Modeling in the Physical Sciences by E. van Groesen and Jaap Molenaar) Consider a train travelling through a light rain shower, where raindrops accumulate on the window of the train and trail down in diagonal streaks. Use dimensional analysis to find a formula for the speed of the train as a function of the raindrop size and the angle of descent of the drop.

11. (modified from Schmidt, 1977) Meteorite and asteroid impacts create craters. While these are eroded over time on Earth, they are still easily seen on the Moon and Mars. There are two different hypotheses for what controls the impact crater size, which we derive below.

1. Find a functional relationship for crater radius $$r$$ depending on the weight of the asteroid $$W$$ and the density $$\rho$$ of the substrate being moved by the explosion.

2. If the crater radius also depends on the strength of gravity $$g$$ (which is different on each planet and moon), as evidence suggests it does for large explosions, find a new relationship for the crater radius.

12. (From The Art of Approximation in Science and Engineering by Sanjoy Mahajan) When Einstein proposed the theory of relativity, one of his tests of it relied heavily on dimensional analysis. Suppose a small object (comet, photon, derelict spaceship) enters the solar system at high speed and passes within a minimum distance $$r$$ of the sun. The sun's gravity bends its path, and the object leaves the solar system travelling in a different direction than it entered. Let $$\theta$$ be the angle between the old path and the new path. This angle depends on the minimum distance $$r$$; the smaller the distance, the sharper the turn.

1. What parameters besides the distance $$r$$ does the angle $$\theta$$ depend on?

2. Using dimensional analysis, derive a formula for the angle $$\theta$$ in terms of $$r$$ and the other parameters.

Under Newton's theory of gravity, the proportionality constant in this formula should be 2, while in Einstein's theory, the constant should be 4. Einstein's value was eventually confirmed using radio astronomy measurements.

13. In Principia Mathematica, Isaac Newton discusses the resistance a fluid poses to an object moving through it. These ideas were subsequently applied to the calculation of the lift force created by an inclined plate moving through the air. The lift force $$F$$ depends on the density of the fluid $$\rho$$, the surface area of the plate $$S$$, the velocity of the plate through the fluid $$v$$ and the angle of attack of the plate $$\theta$$.

1. Use dimensional analysis to derive an equation for the attack angle $$\theta$$ as a function of a dimensionless product of the other variables.

2. Solve your equation for the lift force $$F$$.

3. Newton believed that when the angle of attack was small, the lift force scaled like the square of the angle. In 1804, George Cayley tested Newton's idea. How does Newton's prediction compare with Cayley's data?

```
# velocity (ft/sec), angle of attack (deg), lift force (ounces)
#
# Early data on lift coefficient of a square plate
# moved using a whirling-arm set-up
#
# Extracted by Tim Reluga, 2014-04 from
#   Aerodynamics in 1804, the pioneering work of Sir George Cayley
#   by A. H. Yates, Flight, page 612.
#
# two data sets, the first at 15 feet per second velocity
# and the second at 21.8 feet per second velocity
#
# column 1: velocity of plate
# column 2: angle of attack, in degrees
# column 3: force of lift, in ounces
#
15.0,3.0310,0.1324
15.0,6.0190,0.1620
15.0,8.9630,0.2918
15.0,11.906,0.4542
15.0,14.894,0.6315
15.0,17.882,0.6787
21.8,3.0750,0.1139
21.8,6.0190,0.1752
21.8,9.0070,0.2408
21.8,11.950,0.3179
21.8,14.894,0.4759
21.8,17.838,0.5996
21.8,19.903,0.6222
```

1. In 1604, Johannes Kepler proposed an algorithm equivalent to the following formula for the relationship between the angle of incidence $$\theta_1$$ and the angle of refraction $$\theta_2$$ for light entering water, where $$k$$ is the index of refraction. Discuss. $\theta_1 = \frac{k \theta_2}{k-(k-1)\sec \theta_2}.$

2. Buckingham's Pi theorem is related to legendary mathematician Emmy Noether's famous theorem that all conservation laws in physics are actually statements about the symmetries of space-time, described using partial differential equations. While recovering Noether's theorem is too ambitious for us here, we can show how the Pi theorem is related to our natural laws solving certain linear partial differential equations. Suppose we wish to find a formula for the volume of an $$n$$-dimensional sphere. We expect there to be a formula $$f(r,V) = 0$$, where $$r$$ is the sphere's radius and $$V$$ is the sphere's volume. Let $$\lambda$$ be a conversion factor for units of distance.

1. Rewrite our functional relationship between radius and volume to incorporate this conversion factor.
2. Differentiate your new functional relationship with respect to $$\lambda$$ to get a partial differential equation.
3. Evaluate this partial differential equation at $$\lambda = 1$$.
4. Show that $$f(r,V) = g(r^n/V)$$ is a solution of this partial differential equation for any sufficiently smooth invertible function $$g$$.
5. Show that under our scaling hypothesis, the volume of a sphere is $$V(r,n) = c(n) r^n$$.
6. Discuss your expectations for the behavior of $$c(n)$$ as a function of $$n$$.
3. Use dimensional analysis to express the solutions $$y(t)$$ of the dimensional equation $$dy/dt = m - r y$$ in terms of solutions $$u(s)$$ of the dimensionless equation $$du/ds = 1 - u$$.

4. Use algebra and dimensional analysis to show that the 4 parameter system \begin{align*} \dot{p} &= \alpha q, \\ \dot{q} &= -\beta p + \gamma q - \delta p^2 q, \end{align*} is equivalent to the Van der Pol equation (see above) under a rescaling of the states and time. What is the formula for $$a$$ in terms of $$\alpha, \beta, \gamma, \delta$$?

5. A classic differential game problem is the hare and the lion. The hare is so small that it can accelerate instantly, so we need only care about its maximum speed $$c$$. The lion, on the other hand, is big and slow to change its speed, but given enough time it can move much faster than the hare. The lion's movement can be parameterized in terms of its mass $$m$$ and its maximum accelerating force $$f$$. If $$d$$ is the length of a side of the arena, what dimensionless groups parameterize the probability of the lion catching the hare?

6. In our previous study of Watt's linkage, we found the implicit solution curve was specified as $0 = 4 y^{2} \left(x^{2} + y^{2} - r^{2}\right) + \left(x^{2} + y^{2}\right) \left(\frac{\ell^{2}}{4} - r^{2} - 1 + x^{2} + y^{2} \right)^{2}.$ In a linkage problem, all the parameters should have units of length. However, if we inspect our solution, it looks like the first term would have units of length to the fourth power, while the second term would have units of length to the sixth power. Does this contradict the premise of dimensional analysis?

( previous, home, next )