# Lecture 17: Scaling-law models


• Application of least-squares
• Empirical laws and hypothesis generation
• Attempts at theoretical justification

## Application of Linear Least Squares

So, suppose we have an equation of the form $$v = m u + c$$, and we observe a bunch of data points $$(u_i, v_i)$$ that we think should fall on this line, except for errors in the $$v_i$$. Each observation gives one equation: \begin{align} m u_1 + c &= v_1 \\ m u_2 + c &= v_2 \\ \vdots \\ m u_i + c &= v_i \\ \vdots \end{align} The parameters $$m$$ and $$c$$ are the unknowns we'd like to solve for, given our data points, but we have far more than 2 equations for our 2 unknowns, so we'll use linear least-squares. If we re-arrange our equations into matrix form, $\begin{gather} \begin{bmatrix} u_1 & 1 \\ u_2 & 1 \\ \vdots & \vdots \\ u_i & 1 \\ \vdots & \vdots \end{bmatrix} \begin{bmatrix} m \\ c \end{bmatrix} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_i \\ \vdots \end{bmatrix} \end{gather}$ So, then $\begin{gather} A = \begin{bmatrix} u_1 & 1 \\ u_2 & 1 \\ \vdots & \vdots \\ u_i & 1 \\ \vdots & \vdots \end{bmatrix}, \quad x = \begin{bmatrix} m \\ c \end{bmatrix}, \quad b = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_i \\ \vdots \end{bmatrix}. \end{gather}$ We can now calculate $$A^T A$$ and $$A^T b$$ and use

x = scipy.linalg.solve(A.T.dot(A), A.T.dot(b))

to solve the normal equations $$A^T A x = A^T b$$. (Expert note: it is often recommended to use a QR factorization or the SVD to find linear least-squares solutions, rather than solving the normal equations directly by elimination with partial pivoting, because forming $$A^T A$$ squares the condition number of the problem.)
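As a concrete sketch of the two approaches (the toy data here is invented for illustration), we can build $$A$$ from a handful of points, solve the normal equations directly, and compare against NumPy's SVD-based least-squares routine:

```python
import numpy as np
from scipy.linalg import solve

# Toy data: points scattered near the line v = 2u + 1 (numbers invented)
u = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
v = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Assemble A x = b with unknowns x = [m, c]
A = np.column_stack([u, np.ones_like(u)])
b = v

# Solve the normal equations A^T A x = A^T b by elimination
x_normal = solve(A.T.dot(A), A.T.dot(b))

# Solve the same problem with the SVD-based least-squares routine
x_svd, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)

print(x_normal)  # [m, c] from the normal equations
print(x_svd)     # agrees to machine precision on this well-conditioned problem
```

On a small, well-conditioned problem like this the two answers coincide; the difference only matters when $$A$$ is large or nearly rank-deficient.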

Interestingly, we can use the same methods to fit exponential and power formulas if we first apply a log-transformation.

$y=kx^a \quad \Longleftrightarrow \quad \log y = \log k + a \log x \quad \Longleftrightarrow \quad \frac{dy}{dx} = a \frac{y}{x}$ $y=e^{ax+c} \quad \Longleftrightarrow \quad \log y = ax + c \quad \Longleftrightarrow \quad \frac{dy}{dx} = a y$
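As a quick sanity check (the numbers here are invented), noise-free power-law data becomes exactly linear after the log transform, so an ordinary linear fit recovers both parameters:

```python
import numpy as np

# Exact power-law data y = k x^a with k = 3, a = 2 (values invented)
k_true, a_true = 3.0, 2.0
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = k_true * x**a_true

# Log-transforming gives log y = log k + a log x, a straight line,
# so a degree-1 polynomial fit on the logs recovers a and log k
a_fit, logk_fit = np.polyfit(np.log(x), np.log(y), 1)

print(a_fit)             # slope  -> exponent a
print(np.exp(logk_fit))  # intercept -> prefactor k
```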

### Example: Scaling of egg size with bird size

Data from Brody, 1945, partially aggregated (data file, src):

| mass of adult (g) | mass of egg (g) |
| --- | --- |
| 113380 | 1700 |
| 4536 | 165.4 |
| 3629 | 94.5 |
| 1020 | 34 |
| 283 | 14 |
| 3.6 | 0.6 |
```python
#!/usr/bin/env python
"""
This python script illustrates how to find a scaling law that
matches data using log-transformed least squares regression.
We are fitting the log-linear relationship

    log(v) = k + m log(u)

which corresponds to the scaling law

    v(u) = e^k u^m
"""

from numpy import *
from pylab import loglog, show, xlabel, ylabel, savefig, text

# data file at http://www.math.psu.edu/treluga/450/egg.data
egg_data = loadtxt('egg.data', delimiter=',')
print(egg_data)

# First, we log-transform our data set
log_data = log(egg_data)/log(10)  # use log base 10

# Now, we construct A and b for A x = b.
b = log_data[:, 1:2]
A = hstack([ones((6, 1)), log_data[:, 0:1]])

# solve the normal equations A^T A x = A^T b for x using
# gaussian elimination with partial pivoting
x = linalg.solve(A.T.dot(A), A.T.dot(b))
k, m = x[0, 0], x[1, 0]  # pull scalars out of the (2, 1) solution array
print((k, m))

## Alternatively, we could use numpy's built-in least-squares
## algorithm. If you look closely, you'll see that the answer
## is a little different from the one above. This is because
## the algorithm uses a different calculation method based on
## an SVD decomposition of the matrix. The SVD is more powerful
## and more robust, but also slower for large problems.
(x_alt, residues, a_rank, svd_vals) = linalg.lstsq(A, b, rcond=None)
print((x_alt[0, 0], x_alt[1, 0]))

####################
#
# Now, we display our results
#
####################
z = 10**linspace(0, 6, 10)
loglog(egg_data[:, 0], egg_data[:, 1], 'o', z, 10**k*z**m, 'r-')
xlabel('Adult bird mass ($M$)', fontsize=18)
ylabel('Egg mass ($M_e$)', fontsize=18)
Equation = "$M_e = %.3f M^{%.3f}$" % (10**k, m)
text(300, 0.5, Equation, fontsize=18)
savefig('bird_to_egg_scaling.png')
savefig('bird_to_egg_scaling.pdf')
#show()
```

data file


## Explaining empirical observations

### Skeleton scaling

In *Dialogues Concerning Two New Sciences*, Galileo proposed the "square-cube" law for how bones must change as the size of an animal changes. The naive theory is that if you double the mass of a person or animal and double the mass of the skeleton correspondingly, everything will work out just fine (as suggested in *Gulliver's Travels*, for instance). But Galileo held that it isn't so. The argument was simple. The strength of a bone is proportional to its cross-sectional area ($$A$$). Since the bones have to support the full mass of the animal ($$M_{body}$$), $$A \propto M_{body}$$. Now, the cross-sectional area is proportional to the skeletal mass $$M_{skel}$$ raised to the 2/3rds power ($$A \propto M_{skel}^{2/3}$$), so with a little algebra, $$M_{skel} \propto M_{body}^{3/2}$$. Galileo included a sketch in his book to illustrate -- interestingly, the sketch is actually inaccurate, making the bone much thicker than the theory predicts.
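The "little algebra" can be spelled out by combining the two proportionality relations: \begin{align} A &\propto M_{body} \quad \text{and} \quad A \propto M_{skel}^{2/3} \\ \Rightarrow \quad M_{skel}^{2/3} &\propto M_{body} \\ \Rightarrow \quad M_{skel} &\propto M_{body}^{3/2}. \end{align} So the skeleton's share of body mass must grow as animals get larger, rather than staying a fixed fraction.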

While Galileo's general argument may be right -- skeleton mass cannot scale isometrically with body mass -- the specific formula he proposed does not seem to match the data.

data file