


One of the great tools of applied mathematics has been the art of approximation. As you may have noticed in the past, complicated equations often have special cases that are easier to solve. A classical example: the quadratic equation \[x^2 + 0.01 x - 1 = 0\] can be solved by the quadratic formula or by completing the square to show \(x \approx -1.00501\) or \(x \approx 0.99501\). But this equation is also "close" to the equation \(x^2 - 1 = 0\), which we can solve immediately by factorization to get \(x \in \{-1,1\}\), values that differ from the exact solutions by fractions of a percent. This is a toy example -- we don't really need to simplify anything to find the solutions. But wouldn't it be nice if we had a systematic way to derive approximate equations whose solutions stayed near the solutions of the original equation? That's approximation theory. Here, we'll see some applications of the simplest form of analytic approximation -- Taylor series -- to geometric models for Watt's curve and inference of the earth's figure.

An introductory example

The theory of asymptotic approximation has roots in the ancient study of hyperbolas by Apollonius of Perga; the name comes from the asymptotes, the lines used to bound the shape of a hyperbola. Here's the core idea. If you want to calculate \(y\) but it's too hard, find something nearby like \(x\) that is easier to calculate, and then use a method to estimate the value of \(y\) based on \(x\). The most basic form of analytic approximation is a Taylor series.

Let's consider the quadratic equation given above. We can introduce a small parameter into our original equation by letting \(\epsilon = 0.01\) so that \[x^2 + \epsilon x - 1 = 0.\] Now, let's treat our unknown as a function of the small parameter, \(x(\epsilon)\), and imagine that this function has a Maclaurin series of the form \[x(\epsilon) = x_0 + \epsilon x_1 + \epsilon^2 x_2 + \ldots.\] If we substitute this series into our quadratic equation, we will get a polynomial in \(\epsilon\). Since this polynomial should equal \(0\) for all small values of \(\epsilon\), the coefficient of each \(\epsilon\)-monomial should also vanish. \[\begin{gather} 0 = (x_0^2 - 1) + \epsilon x_0 (2 x_1 + 1) + \epsilon^2 ( 2 x_0 x_2 + x_1^2 + x_1 ) + \ldots \end{gather}\]

Since \(\epsilon\) is small, the most important term is \(x_0^2 - 1\), which vanishes for \(x_0 = 1\) or \(x_0 = -1\), the two values we initially guessed. The next most important term is \(x_0 (2 x_1 + 1)\), which will vanish for \(x_1 = -1/2\). Thus, this approximation method suggests \[x(\epsilon) \approx \pm 1 - \frac{1}{2} \epsilon.\] This very simple approximation gives \(x\approx -1.00500\) and \(x\approx 0.99500\), 4 decimal places of accuracy when \(\epsilon = 1/100\).
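We can sanity-check this numerically. The sketch below (plain Python, nothing beyond the standard library; the function names are ours, not from any particular package) compares the two-term series against the quadratic formula:

```python
import math

def exact_roots(eps):
    """Exact roots of x^2 + eps*x - 1 = 0 via the quadratic formula."""
    d = math.sqrt(eps**2 + 4)
    return ((-eps + d) / 2, (-eps - d) / 2)

def series_roots(eps):
    """Two-term perturbation approximation x(eps) ~ +/-1 - eps/2."""
    return (1 - eps / 2, -1 - eps / 2)

eps = 0.01
for exact, approx in zip(exact_roots(eps), series_roots(eps)):
    print(f"exact {exact:+.7f}   series {approx:+.7f}   error {abs(exact - approx):.1e}")
```

The errors are about \(\epsilon^2/8\), consistent with the size of the first neglected term of the series.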

Approximating Watt's curve

To determine the parameter conditions under which Watt's linkage motion is approximately straight around its midpoint, we can apply a series approximation. Recall that the implicit formula for Watt's curve is \[0 = 4 y^{2} \left(x^{2} + y^{2} - r^{2}\right) + \left(x^{2} + y^{2}\right) \left(\frac{\ell^{2}}{4} - r^{2} - 1 + x^{2} + y^{2} \right)^{2}.\] We are interested in Watt's special case where \(\ell = 2 \sqrt{1 - r^2}\), when the piston motion is said to be approximately straight and the implicit configuration curve simplifies to \[0 = 4 y^{2} \left(x^{2} + y^{2} - r^{2}\right) + \left(x^{2} + y^{2}\right) \left(x^{2} + y^{2} - 2 r^{2}\right)^{2}.\] We can check by substitution that \((x,y) = (0,0)\) lies on the curve. Now, substituting a series \[y \approx a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + a_5 x^5,\] and picking the \(a_i\)'s so that the coefficients vanish one by one, we can show that the curve is approximately given by \[y(x) = a_1 x + a_5 x^5 + O(x^6), \quad a_1 = \pm \frac{r}{\sqrt{1-r^2}}, \quad a_5 = \frac{(1 + a_1^2)^3}{8 a_1 r^2 (1 - r^2)}.\] This shows that Watt's linkage motion is approximately a straight-line motion to fifth order when the linking bar length \(\ell\) is picked correctly.
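We can spot-check this cancellation numerically. Along the tangent line \(y = a_1 x\), the \(x^2\) and \(x^4\) terms of the implicit function cancel, so the residual shrinks like \(x^6\). A minimal Python sketch (standard library only; the choice \(r = 0.5\) is an arbitrary illustration):

```python
import math

def watt_F(x, y, r):
    """Watt's implicit curve function in the special case ell = 2*sqrt(1 - r^2)."""
    S = x**2 + y**2
    return 4 * y**2 * (S - r**2) + S * (S - 2 * r**2)**2

r = 0.5
a1 = r / math.sqrt(1 - r**2)  # slope of the nearly straight branch at the origin

# If the x^2 and x^4 coefficients both vanish, F/x^4 -> 0 as x -> 0,
# while F/x^6 approaches a nonzero constant.
for x in (1e-1, 1e-2, 1e-3):
    F = watt_F(x, a1 * x, r)
    print(f"x = {x:.0e}   F/x^4 = {F / x**4:+.3e}   F/x^6 = {F / x**6:+.5f}")
```

The ratio \(F/x^4\) tends to zero while \(F/x^6\) settles to a constant, confirming fifth-order flatness of the linear approximation.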

Figure: Watt's curve. The linear approximation is a close match to the exact curve near the origin.

Approximating the figure of the earth

Maupertuis's equation for the reciprocal curvature \[\frac{ds}{d\lambda} = \sqrt{\frac{(1 + \epsilon) r^2}{\left(1 + \epsilon \cos^2(\lambda)\right)^3}},\] which we derived previously, is a cumbersome nonlinear equation, so working with it directly at the time was not feasible. But since the earth is approximately spherical, \(\epsilon \approx 0\). The first two terms of the Maclaurin series expansion in \(\epsilon\) give \[\frac{ds}{d\lambda} \approx r + \frac{1}{2} \epsilon r - \frac{3}{2} \epsilon r \cos^{2}{\left (\lambda \right )},\] or, transformed to a more convenient and conventional form, \[m \sin^2(\lambda) + c \approx \frac{ds}{d\lambda}\] where \(c = r (1 - \epsilon)\) and \(m = \frac{3}{2} \epsilon r\). This remarkably simple equation relating latitude and distance on an ellipsoidal planet was applied by Maupertuis, Boscovich, Laplace, and Legendre (with minor variations) when estimating the figure of the earth. The left-hand side has units of distance per angle, the reciprocal of curvature. The ellipse parameters \(r\) and \(\epsilon\) on the right-hand side are what we seek to estimate, while the free variables \(\lambda\) and \(ds/d\lambda\) can be measured directly by observation.
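As a quick numerical check on this expansion, the Python sketch below (standard library only; \(r = 1\) and the sample values of \(\epsilon\) are arbitrary illustrative choices) compares the exact reciprocal curvature with the first-order approximation over a quarter of latitudes:

```python
import math

def exact_rc(lam, eps, r=1.0):
    """Exact reciprocal curvature ds/dlambda at latitude lam (radians)."""
    return math.sqrt((1 + eps) * r**2 / (1 + eps * math.cos(lam)**2)**3)

def approx_rc(lam, eps, r=1.0):
    """First-order approximation m*sin^2(lam) + c."""
    c = r * (1 - eps)
    m = 1.5 * eps * r
    return m * math.sin(lam)**2 + c

# The worst-case error over all latitudes should shrink like eps^2.
for eps in (0.1, 0.01):
    err = max(abs(exact_rc(math.radians(d), eps) - approx_rc(math.radians(d), eps))
              for d in range(91))
    print(f"eps = {eps:<5}  max error = {err:.1e}")
```

Shrinking \(\epsilon\) by a factor of 10 shrinks the worst-case error by roughly 100, as expected for a truncation error of order \(\epsilon^2\).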

As we'll see when we start using linear least squares, things aren't quite so simple, but we'll make them work.

User beware

The three examples we've given above are solved by what are called "regular approximations" -- we applied some familiar ideas from calculus in a common-sense way and got out useful approximations without anything weird happening. However, don't get overconfident fishing for solutions this way -- the water gets deep quickly, and there are monsters lurking below the surface. Consider the deceptive problem of approximating a positive solution to the cubic equation \[\epsilon y^3 = y + 1\] when \(\epsilon\) is small but positive. If we try our Taylor series trick from above to solve for \(y\), we find \[y(\epsilon) = -1 - \epsilon - 3 \epsilon^2 - \ldots.\] But for small \(\epsilon\), this is always negative!

The equation we started with is a cubic equation, so we would expect three possibly complex solutions. Since \(y=0\) implies \(\epsilon y^3 < y + 1\) and \(y = 1 + 1/\sqrt{\epsilon}\) implies \(\epsilon y^3 > y + 1\), and both sides are continuous, the intermediate value theorem implies there must exist a positive solution. But our Taylor series can't find it. Sometimes small things are unimportant, but sometimes small things have big consequences. Problems where small perturbations are important are called "singular perturbation" problems, and we'd need new methods to find their solutions.
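We can see all of this numerically. The sketch below (plain Python; the bisection helper and the value \(\epsilon = 0.01\) are our own illustrative choices) finds both the root the series predicts and the positive root the series misses:

```python
import math

def f(y, eps):
    """The cubic eps*y^3 - y - 1, whose zeros solve eps*y^3 = y + 1."""
    return eps * y**3 - y - 1

def bisect(lo, hi, eps, steps=80):
    """Simple bisection on an interval where f changes sign."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(lo, eps) * f(mid, eps) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

eps = 0.01
regular = bisect(-2.0, -1.0, eps)                    # the root the Taylor series finds
series = -1 - eps - 3 * eps**2                       # three-term series prediction
positive = bisect(1.0, 1 + 1 / math.sqrt(eps), eps)  # the root the series misses

print(f"regular root  {regular:+.5f}  vs series {series:+.5f}")
print(f"positive root {positive:+.4f}  vs 1/sqrt(eps) = {1 / math.sqrt(eps):.1f}")
```

The positive root scales like \(1/\sqrt{\epsilon}\), so it runs off to infinity as \(\epsilon \rightarrow 0\) -- no Maclaurin series in \(\epsilon\) can reach it.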


  1. In one of his widely-read anecdotes, physicist Richard Feynman and an abacus salesman were challenged to compute \(x = \sqrt[3]{1729.03}\). Feynman observed that this was the same as solving \(x^3 - 12^3 - \epsilon = 0\) when \(\epsilon = 1.03\), and derived a linear approximation \(x(\epsilon)\approx x_0 + \epsilon x_1\) quickly in his head. Using our methods above, find \(x_0\) and \(x_1\), estimate \(x\), and determine how accurate Feynman's estimated answer was.

  2. Asymptotic approximations can be used to show spherical trigonometry agrees with planar trigonometry when the triangles are much smaller than the radius of the sphere they lie on. Consider a spherical triangle with angles \(A\), \(B\), \(\Gamma\) opposite sides (measured in radians) \(a\), \(b\), and \(c\). Let \(\alpha\), \(\beta\), and \(\gamma\) be the lengths of the sides of a triangle on a sphere of radius \(r\), such that the corresponding angle measures of those sides are \(a = \alpha/r\), \(b = \beta/r\), and \(c = \gamma/r\). Derive approximations for each of the following in the limit of a large sphere (\(r \rightarrow \infty\), or \(\epsilon := 1/r \rightarrow 0\)), and name the associated identity from planar trigonometry.

    1. \(\sin a \sin B = \sin b \sin A\)
    2. \(\cos \Gamma = \sin A \sin B \cos c - \cos A \cos B.\)
    3. \(\cos c = \cos a \cos b\) when the angle \(\Gamma\) opposite \(c\) is \(90^{\circ}\).
  3. Previously, we found the following implicit solution for the path traced by the handle of Archimedes's trammel: \[r^2 = x^2 + \left(\frac{1}{\frac{s}{r}+1}\right)^2 y^2.\]
    1. Describe the asymptotic nature of the solution when \(s=1\) and \(r \rightarrow \infty\).
    2. Describe the asymptotic nature of the solution when \(r=1\) and \(s \rightarrow 0\).
    3. Describe the asymptotic nature of the solution when \(r=0\).
    4. Describe the asymptotic nature of the solution when \(r=-s\).
    5. Describe the asymptotic nature of the solution when \(r=-s/2\).
  4. The theory of asymptotic approximation has roots in the ancient study of hyperbolas by Apollonius of Perga. Consider the hyperbolas of the form \(x^2 - m^2 y^2 + ax + by + c = 0\), which we would like to approximate when \(x\) and \(y\) are large.
    1. Using the change-of-coordinates \(u=1/x\) and \(v=1/y\), transform this hyperbola to a quartic polynomial.
    2. Assume \(u\) and \(v\) are small, discard the smallest terms, and find all tangent lines that approximate part of the solution near \((u,v) = (0,0)\).
    3. Which parameters do the asymptotic approximations depend on, and which do not matter at all? (This is how asymptotic approximations can make problems simpler!)
    4. Transform your tangent-line approximations back to \(x\) and \(y\) coordinates.
    5. Use the same style of analysis to obtain asymptotic approximations to \(x^3 - 64 y^2 + 4 = 0\) when \(x\) and \(y\) are large.
  5. Our introductory example of asymptotic approximation can be arrived at by other means.
    1. Solve \(x^2 + \epsilon x - 1=0\) for \(\epsilon(x)\).
    2. Recalling the calculus rules \[\frac{dx}{d\epsilon} = \left(\frac{d\epsilon}{dx}\right)^{-1}, \quad \frac{d^2x}{d\epsilon^2} = - \left(\frac{dx}{d\epsilon}\right)^3 \frac{d^2\epsilon}{dx^2},\] use Taylor series to find the first 3 terms of the approximation of \(x(\epsilon)\) when \(x(0) = 1\).
    3. Iteration of Newton's method can also be used to create asymptotic approximations. Recall that to approximate the solution of \(f(x)=0\), Newton's method uses the iteration formula \(x_{n+1} = x_n - f(x_n)/f'(x_n)\). Taking \(x_0 = 1\), find approximations \(x_1(\epsilon)\) and \(x_2(\epsilon)\) when \(f(x) = x^2 + \epsilon x - 1\).
    4. Compare the performance of the 4 approximations we've derived with the exact solutions.
  6. Let's check how good our series approximation for the reciprocal curvature is.
    1. Assuming the radius \(r=1\) and latitude, measured in degrees, falls between \(0\) and \(90\), plot the error between the exact reciprocal curvature formula and its first-order Maclaurin series approximation when \(\epsilon = 0.1\).
    2. Plot the maximum error between our exact formula for reciprocal curvature and its first-order Maclaurin series approximation for \(\epsilon\) varying from \(0.01\) to \(1\). Use a logarithmic scale for the x-axis (see semilogx).
    3. How do you expect your plots to change if we change the radius \(r\)?
  7. Find the \(\epsilon^2\) term in the Maclaurin series for \(ds/d\lambda\) when \(\epsilon \approx 0\).
