Category Archives: Computer Algebra

What is the shape of a hanging rope?

Question: What is the shape of a flexible rope hanging from nails at each end and sagging under gravity?

Answer:

First, observe that no matter how the rope hangs, it will have a lowest point A (see Fig. 1)

Fig. 1

It follows that the hanging rope can be placed in a coordinate system whose origin coincides with the lowest point A and the tangent to the rope at A is horizontal:

Fig. 2

At A, the rope to its left exerts a horizontal force. This force (or tension), denoted by T_0, is a constant:

Fig. 3

Shown in Fig. 3 also is an arbitrary point B with coordinates (x, y) on the rope. The tension at B, denoted by T_1, is along the tangent to the rope curve. \theta is the angle T_1 makes with the horizontal.

Since the section of the rope from A to B is stationary, the net force acting on it must be zero. Namely, the sum of the horizontal forces and the sum of the vertical forces must each be zero:

\begin{cases}T_1\cos(\theta)=T_0\quad\quad\quad(1)\\ T_1\sin(\theta) = \rho gs\;\;\quad\quad(2)\end{cases}

where \rho is the hanging rope’s mass per unit length, g the gravitational acceleration, and s the length of the rope from A to B.

Dividing (2) by (1), we have

\frac{T_1\sin(\theta)}{T_1\cos(\theta)} = \tan(\theta) = \frac{\rho g}{T_0}s\overset{k=\frac{\rho g}{T_0}}{\implies} \tan(\theta) = ks.\quad\quad\quad(3)

Since

\tan(\theta) = \frac{dy}{dx}, the slope of the curve at B,

and

s = \int\limits_{0}^{x}\sqrt{1+(\frac{dy}{dx})^2}\;dx,

we rewrite (3) as

\frac{dy}{dx} = k \int\limits_{0}^{x}\sqrt{1+(\frac{dy}{dx})^2}\;dx

and so,

\frac{d^2y}{dx^2}=k\cdot \frac{d}{dx}(\int\limits_{0}^{x}\sqrt{1+(\frac{dy}{dx})^2}\;dx)=k\sqrt{1+(\frac{dy}{dx})^2}

i.e.,

\frac{d^2y}{dx^2}=k\sqrt{1+(\frac{dy}{dx})^2}.\quad\quad\quad(4)

To solve (4), let

p = \frac{dy}{dx}.

We have

\frac{dp}{dx} = k\sqrt{1+p^2} \implies \frac{1}{\sqrt{1+p^2}}\frac{dp}{dx}=k.\quad\quad\quad(5)

Integrating (5) with respect to x gives

\log(p+\sqrt{1+p^2}) = kx + C_1\overset{p(0)=y'(0)=0}{\implies} C_1 = 0.

i.e.,

\log(p+\sqrt{1+p^2}) = kx.\quad\quad\quad(6)

Solving (6) for p yields

p = \frac{dy}{dx} =\sinh(kx).\quad\quad\quad(7)

Integrating (7) with respect to x,

y = \frac{1}{k} \cosh(kx) + C_2\overset{y(0)=0,\cosh(0)=1}{\implies}C_2=-\frac{1}{k}.

Hence,

y = \frac{1}{k}\cosh(kx)-\frac{1}{k}.

Essentially, it is the hyperbolic cosine function that describes the shape of a hanging rope.
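For readers who prefer to let a CAS carry out the integrations, here is a minimal Maxima sketch (Omega CAS Explorer is Maxima-based); C2 is an integration constant introduced only for this illustration:

```maxima
/* a sketch: solve (5) for p with ode2, then recover y */
assume(k > 0)$
ode2('diff(p, x) = k*sqrt(1 + p^2), p, x);   /* expect an implicit solution equivalent to (6) */
y : integrate(sinh(k*x), x) + C2;            /* by (7), p = dy/dx = sinh(k*x) */
solve(subst(x = 0, y) = 0, C2);              /* the condition y(0) = 0 gives C2 = -1/k */
```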


Exercise-1 Show that \int \frac{1}{\sqrt{1+p^2}} dp = \log(p + \sqrt{1+p^2}).

Exercise-2 Solve \log(p+\sqrt{1+p^2}) = kx for p.

Snowflake and Anti-Snowflake Curves

Fig. 1

The snowflake curve made its first appearance in a 1906 paper written by the Swedish mathematician Helge von Koch. It is a closed curve of infinite perimeter that encloses a finite area.

Starting with an equilateral triangle of side length a and area A_0, the snowflake curve is constructed iteratively (see Fig. 1). At each iteration, an outward equilateral triangle is created on the middle third of each side. The base of the newly created triangle is then removed:

Fig. 2

Let s_i be the number of sides the snowflake curve has at the end of the i^{th} iteration. From Fig. 2, we see that

\begin{cases}s_i = 4s_{i-1}\\ s_0=3\end{cases}.

Fig. 3

Solving for s_i (see Fig. 3) gives

s_i = 3\cdot 4^i.

Suppose the length of each side at the end of the i^{th} iteration is a^*_i. We have

\begin{cases}a^*_i = \frac{a^*_{i-1}}{3}\\ a^*_0=a\end{cases},

therefore,

a^*_i = (\frac{1}{3})^i a

and p_i, the perimeter of the snowflake curve at the end of the i^{th} iteration, is

p_i = s_{i-1} (4\cdot a^*_i) = (3\cdot4^{i-1})\cdot (4\cdot(\frac{1}{3})^ia)=3(\frac{4}{3})^i a.

It follows that

\lim\limits_{i\rightarrow \infty}p_i =\infty

since |\frac{4}{3}|>1.

Let A^*_{i-1}, A^*_i be the areas of the equilateral triangles created at the end of the (i-1)^{th} and i^{th} iterations respectively. Namely,

A^*_{i-1} = \boxed{\frac{1}{2}a^*_{i-1}\sqrt{(a^*_{i-1})^2-(\frac{a^*_{i-1}}{2})^2}}

and

A^*_{i} = \frac{1}{2}\frac{a^*_{i-1}}{3}\sqrt{(\frac{a^*_{i-1}}{3})^2-(\frac{1}{2}\frac{a^*_{i-1}}{3})^2}= \frac{1}{9}\cdot\boxed{\frac{1}{2}a^*_{i-1}\sqrt{(a^*_{i-1})^2-(\frac{a^*_{i-1}}{2})^2}}=\frac{1}{9}A^*_{i-1}.

Since

\begin{cases}A^*_i = \frac{1}{9} A^*_{i-1} \\ A^*_0=A_0\end{cases},

solving for A^*_i gives

A^*_i = (\frac{1}{9})^i A_0.

Hence, the area added at the end of the i^{th} iteration is

\Delta  A_i = s_{i-1} A^*_i = (3\cdot4^{i-1})\cdot(\frac{1}{9})^{i}A_0.

After n iterations, the total enclosed area is

A_n =A_0 + \sum\limits_{i=1}^{n}\Delta A_i = A_0 + \sum\limits_{i=1}^{n}(3\cdot4^{i-1})\cdot(\frac{1}{9})^i A_0=\frac{8}{5}A_0-\frac{3}{5}(\frac{4}{9})^{n}A_0.

As the number of iterations tends to infinity,

\lim\limits_{n\rightarrow \infty}A_n = \frac{8}{5}A_0.

i.e., the area of the snowflake is \frac{8}{5} of the area of the original triangle.
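As a quick sanity check, the closed form of A_n can be compared against the direct sum in Maxima (taking A_0 = 1; n = 6 below is chosen arbitrarily for the comparison):

```maxima
/* closed form vs. direct sum of the added areas, with A_0 = 1 */
closed(n) := 8/5 - (3/5)*(4/9)^n$
direct(n) := 1 + sum(3*4^(i-1)*(1/9)^i, i, 1, n)$
[closed(6), direct(6)];          /* the two values agree */
limit(closed(n), n, inf);        /* 8/5 */
```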

If, at each iteration, the new triangles point inward instead, the anti-snowflake curve is generated (see Fig. 4).

Fig. 4 First four iterations of anti-snowflake curve

Like the snowflake curve, the perimeter of the anti-snowflake curve grows without bound, whereas its total enclosed area approaches a finite limit (see Exercise-1).


Exercise-1 Let A_0 be the area of the original triangle. Show that the area enclosed by the anti-snowflake curve approaches \frac{2}{5}A_0.

Exercise-2 An “anti-square curve” may be generated in a manner similar to that of the anti-snowflake curve (see Fig. 5). Find:

(1) The perimeter at the end of the i^{th} iteration

(2) The enclosed area at the end of the i^{th} iteration

Fig. 5 First four iterations of anti-square curve

An Edisonian Moment


We are told that shortly after Edison invented the light bulb, he handed the glass section of a light bulb to one of his engineers, asking him to find the volume of the inside. This was quite a challenge to the engineer, because a light bulb is such an irregular shape. Figuring the volume of the bulb’s irregular shape was quite different from figuring the volume of a glass, or a cylinder.

Several days later, Edison passed the engineer’s desk, and asked for the volume of the bulb, but the engineer didn’t have it. He had been trying to figure the volume mathematically, and had some problems because the shape was so irregular.

Edison took the bulb from the man; filled the bulb with water; poured the water into a beaker, which measured the volume, and handed it to the amazed engineer.

Vincent A. Miller, Working with Words, Words to Work With, 2001, pp. 57-58


“Analyze This!” shows that the equilibrium point (x_*, y_*) of

\begin{cases}\frac{dx}{dt}=n k_2 y - n k_1 x^n\\ \frac{dy}{dt}=k_1 x^n-k_2 y\\x(0)=x_0, y(0)=y_0\;\end{cases}

can be found by solving the equation

-nk_1x^n-k_2x+c_0=0

where k_1>0, k_2>0, c_0>0.

When n=4, Omega CAS Explorer’s equation solver yields a list of x’s (see Fig. 1).

Fig. 1

It appears that identifying x_* from this list of formidable-looking expressions is tedious at best and close to impossible at worst:

Fig. 2

However, by Descartes’ rule of signs,

\forall k_1>0, k_2>0, c_0>0, -4k_1x^4-k_2x+c_0=0 has exactly one positive root.

It means that x_* can be identified quickly by evaluating the x’s numerically with arbitrarily chosen positive values of k_1, k_2, and c_0. For instance, k_1=1, k_2=1, c_0=1 (see Fig. 3).

Fig. 3

Only the second value is positive.

Therefore, x_* is the second expression on the list in Fig. 1.
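The numerical check of Fig. 3 amounts to a one-liner in Maxima (taking k_1 = k_2 = c_0 = 1, as above); the ordering of the roots it returns may differ from the figure’s:

```maxima
/* the four roots of -4*k1*x^4 - k2*x + c0 = 0 at k1 = k2 = c0 = 1; exactly one is real and positive */
allroots(-4*x^4 - x + 1 = 0);
```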

An Epilogue of “Analyze This!”

In “Analyze This!“, we examined the system

\begin{cases}\frac{dx}{dt}=n k_2 y - n k_1 x^n \\ \frac{dy}{dt}=k_1 x^n-k_2 y\\x(0)=x_0, y(0)=y_0\;\end{cases}

qualitatively.

Now, let us seek its equilibrium (x_*, y_*) quantitatively.

In theory, one may first solve the differential equation

\frac{dx}{dt} = -nk_1x^n-k_2x+c_0

for x(t), using a popular symbolic differential equation solver such as ‘ode2’. Then compute x_* as \lim\limits_{t \rightarrow \infty} x(t), followed by y_* = \frac{k_1}{k_2}x_*^n.

However, in practice, such an attempt meets a dead end rather quickly (see Fig. 1).

Fig. 1

Bringing in a more sophisticated solver is of no avail (see Fig. 2).

Fig. 2

An alternative is to obtain x_* directly from the polynomial equation

-nk_1x^n-k_2x+c_0=0\quad\quad\quad(1)

We can solve (1) for x in radicals if n \le 4. For example, when n=3, -3k_1 x^3-k_2x+c_0=0 has three roots (see Fig. 3).

Fig. 3

The first two roots are complex. By Descartes’ rule of signs, the third root

\frac{(\sqrt{4k_2^3+81c_0^2k_1}+9c_0\sqrt{k_1})^{\frac{2}{3}}-2^{\frac{2}{3}}k_2}{3 \cdot 2^{\frac{1}{3}}\sqrt{k_1}(\sqrt{4k_2^3+81c_0^2k_1}+9c_0\sqrt{k_1})^{\frac{1}{3}}}

is the x_* of the equilibrium (x_*, y_*) (see Exercise-1).
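A Maxima sketch of this computation; the numerical substitution at the end is only a spot check with k_1 = k_2 = c_0 = 1:

```maxima
/* the three roots of -3*k1*x^3 - k2*x + c0 = 0 in radicals */
assume(k1 > 0, k2 > 0, c0 > 0)$
roots : map(rhs, solve(-3*k1*x^3 - k2*x + c0 = 0, x))$
/* spot check at k1 = k2 = c0 = 1: only one root is real and positive */
map(lambda([r], float(rectform(r))), subst([k1 = 1, k2 = 1, c0 = 1], roots));
```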


Exercise-1 Show the third root is real and positive.

Exercise-2 Obtain x_* from -4k_1x^4-k_2x+c_0=0.

Analyze This!

Consider the following chemical reaction

\underbrace{A + A + ... + A}_{n} \underset{k_2}{\stackrel{k_1}{\rightleftharpoons}} A_n

where n molecules of A combine reversibly to form A_n, and k_1, k_2 are the reaction rates.

If x, y are the concentrations of A, A_n respectively, then according to the Law of Mass Action, the reaction is governed by

\begin{cases}\frac{dx}{dt}=n k_2 y - n k_1 x^n \quad\quad(0-1)\\ \frac{dy}{dt}=k_1 x^n-k_2 y\quad\quad\quad(0-2)\\x(0)=x_0, y(0)=y_0\;\quad(0-3)\end{cases}

Without solving this initial-value problem quantitatively, the future state of the system can be predicted by qualitatively analyzing how the value of (x, y) changes over the course of time.

To this end, we solve (0-1) for y first:

y=\frac{1}{n k_2}(\frac{dx}{dt} +n k_1 x^n).

Substituting it into (0-2),

\frac{d}{dt} (\frac{1}{n k_2}(\frac{dx}{dt} +n k_1 x^n)) =k_1 x^n -k_2 \cdot \frac{1}{n k_2}(\frac{dx}{dt} +n k_1 x^n).

It simplifies to

\frac{d^2x}{dt^2} + (n^2 k_1 x^{n-1} + k_2)\frac{dx}{dt} =0.\quad\quad\quad(1)

Let

p=\frac{dx}{dt},

we have

\frac{d^2x}{dt^2}=\frac{d}{dt}(\frac{dx}{dt})=\frac{dp}{dt}=\frac{dp}{dx}\frac{dx}{dt}=\frac{dx}{dt}\frac{dp}{dx}=p\cdot\frac{dp}{dx}.

Substituting p, p\frac{dp}{dx} for \frac{dx}{dt}, \frac{d^2x}{dt^2} respectively in (1) gives

p\frac{dp}{dx}+(n^2 k_1 x^{n-1}+k_2)p=0.

It means p=0 or

\frac{dp}{dx}=-n^2k_1x^{n-1}-k_2.

Integrating it with respect to x,

p = \frac{dx}{dt}=-n k_1 x^n - k_2 x +c_0.

Let

f(x) = -n k_1 x^n - k_2 x +c_0,

we have

\frac{df(x)}{dx} = -n^2 k_1 x^{n-1} - k_2 < 0 (for x \ge 0) \implies f(x) = -n k_1 x^n - k_2 x + c_0 is a monotonically decreasing function.

In addition, Descartes’ rule of signs reveals that

f(x)=0 has exactly one real positive root.

By definition, this root is the x_* in an equilibrium point (x_*, y_*) .

Fig. 1

Hence,

As time advances, x\uparrow if x_0 < x_*. Otherwise (x_0>x_*), x\downarrow \quad\quad\quad(1-1)

Dividing (0-2) by (0-1) yields

\frac{dy}{dx} = -\frac{1}{n}.

That is,

y=-\frac{1}{n} x + c_1.

By (0-3),

c_1 = y_0 + \frac{1}{n}x_0.

And so,

y=-\frac{1}{n} x + y_0 + \frac{1}{n}x_0.

Since the graph of y is a line with negative slope,

y is a monotonically decreasing function of x.\quad\quad\quad(1-2)

Moreover, from (0-1) and (0-2), we see that

\forall x > 0, (x, \frac{k_1}{k_2}x^n) is an equilibrium point.

i.e.,

All points on the curve y = \frac{k_1}{k_2}x^n in the first quadrant of the x-y plane are equilibrium points \quad\quad\quad(1-3).

Based on (1-1), (1-2) and (1-3), for an initial state (x_0, y_0),

x_0 < x_* \implies x\uparrow, y\downarrow.

Similarly,

x_0 > x_* \implies x\downarrow, y\uparrow.

Fig. 2

A phase portrait of the system is shown in Fig. 3.

Fig. 3

It shows that (x, y) on the trajectory approaches the equilibrium point (x_*, y_*) over the course of time. Namely, the system is asymptotically stable.
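The approach to equilibrium can also be seen numerically. Below is a minimal Maxima sketch assuming n = 2, k_1 = k_2 = 1, and initial state (x_0, y_0) = (2, 0); these values are chosen only for illustration:

```maxima
/* integrate dx/dt = 2*y - 2*x^2, dy/dt = x^2 - y numerically with RK4 */
load(dynamics)$
traj : rk([2*y - 2*x^2, x^2 - y], [x, y], [2, 0], [t, 0, 10, 0.01])$
last(traj);   /* entries are [t, x, y]; the final state lies close to the equilibrium on y = (k1/k2)*x^2 */
```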

An Ellipse in Its Polar Form

In this appendix to my previous post “From Dancing Planet to Kepler’s Laws“, we derive the polar form for an ellipse that has a rectangular coordinate system’s origin as one of its foci.

Fig. 1

We start with the ellipse shown in Fig. 1. Namely,

\frac{x^2}{a^2}+\frac{y^2}{b^2}=1.

Clearly,

f^2=a^2-b^2\implies f< a.\quad\quad\quad(1)

After shifting the origin O to the right by f, the ellipse has the new origin O' as one of its foci (Fig. 2).

Fig. 2

Since x = x' + f, y=y', the ellipse in x'O'y' is

\frac{(x'+f)^2}{a^2}+\frac{y'^2}{b^2}=1.\quad\quad\quad(2)

Substituting x', y' in (2) by

x'=r\cos(\theta), y'= r\sin(\theta)

yields equation

a^2r^2\sin^2(\theta)+b^2r^2\cos^2(\theta)+2b^2fr\cos(\theta)+b^2f^2-a^2b^2=0.

Replacing \sin^2(\theta), f^2 by 1-\cos^2(\theta), a^2-b^2 respectively, the equation becomes

b^2r^2\cos^2(\theta)+a^2r^2(1-\cos^2(\theta))+2b^2fr\cos(\theta)-a^2b^2+b^2(a^2-b^2)=0.\quad\quad\quad(3)

Fig. 3

Solving (3) for r (see Fig. 3) gives

r=\frac{b^2}{f \cos(\theta)+a} or r=\frac{b^2}{f \cos(\theta)-a}.

The first solution

r=\frac{b^2}{f\cos(\theta)+a} \implies r= \frac{\frac{b^2}{a}}{\frac{f}{a}\cos(\theta)+1}.

Let

p=\frac{b^2}{a}, e=\frac{f}{a},

we have

r = \frac{p}{1+e\cdot\cos(\theta)}.

The second solution is not valid since it suggests that r < 0:

\cos(\theta)\le 1 \implies f\cdot\cos(\theta) \le f \overset{(1)}{\implies} f\cos(\theta)<a \implies f\cos(\theta)-a<0.
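The first solution can also be verified directly in Maxima (writing \theta as t); the check below uses f^2 = a^2 - b^2 from (1) and is only a sketch of what Fig. 3 computes:

```maxima
/* substitute r = b^2/(f*cos(t) + a) into the left side of (3); it should vanish once f = sqrt(a^2 - b^2) */
r : b^2/(f*cos(t) + a)$
eq3 : b^2*r^2*cos(t)^2 + a^2*r^2*(1 - cos(t)^2) + 2*b^2*f*r*cos(t) - a^2*b^2 + b^2*(a^2 - b^2)$
radcan(subst(f = sqrt(a^2 - b^2), eq3));   /* expect 0 */
```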

From Dancing Planet to Kepler’s Laws

“This most beautiful system of the sun, planets, and comets, could only proceed from the counsel and dominion of an intelligent powerful Being” — Sir Isaac Newton

When I was seven years old, I had the notion that all planets dance around the sun along a wavy orbit (see Fig. 1).

Fig. 1

Many years later, I took on a challenge to show mathematically the orbit of my ‘dancing planet’. This post is a long overdue report of my journey.

Shown in Fig. 2 is the sun and a planet in an x-y-z coordinate system. The sun is at the origin. The moving planet’s position is described by x=x(t), y=y(t), z=z(t).

Fig. 2 r=\sqrt{x^2+y^2+z^2}, F=G\frac{Mm}{r^2}, F_z=-F\cos(c)=-F\cdot\frac{z}{r}

According to Newton’s theory, the gravitational force the sun exerts on the planet is

F=-G\cdot M \cdot m \cdot \frac{1}{r^2}(\frac{x}{r},\frac{y}{r}, \frac{z}{r})=-\mu\cdot m \cdot \frac{1}{r^3}\cdot(x, y, z)

where G is the gravitational constant, M, m are the masses of the sun and the planet respectively, and \mu = G\cdot M.

By Newton’s second law of motion,

\frac{d^2x}{dt^2} = -\mu\frac{x}{r^3},\quad\quad\quad(0-1)

\frac{d^2y}{dt^2} = -\mu\frac{y}{r^3},\quad\quad\quad(0-2)

\frac{d^2z}{dt^2} = -\mu\frac{z}{r^3}.\quad\quad\quad(0-3)

y \cdot (0-3) - z \cdot (0-2) yields

y\frac{d^2z}{dt^2}-z\frac{d^2y}{dt^2} = -\mu\frac{yz}{r^3}+ \mu\frac{yz}{r^3}=0.

Since

y\frac{d^2z}{dt^2}-z\frac{d^2y}{dt^2} = \frac{dy}{dt}\frac{dz}{dt}+y\frac{d^2z}{dt^2}-\frac{dz}{dt}\frac{dy}{dt}-z\frac{d^2y}{dt^2}=\frac{d}{dt}(y\frac{dz}{dt}-z\frac{dy}{dt}),

it must be true that

\frac{d}{dt}(y\frac{dz}{dt}-z\frac{dy}{dt}) = 0.

i.e.

y\frac{dz}{dt}-z\frac{dy}{dt}=A\quad\quad\quad(0-4)

where A is a constant.

Similarly,

z\frac{dx}{dt}-x\frac{dz}{dt}= B,\quad\quad\quad(0-5)

x\frac{dy}{dt}-y\frac{dx}{dt}= C\quad\quad\quad(0-6)

where B,C are constants.

Consequently,

Ax=xy\frac{dz}{dt} - xz\frac{dy}{dt},

By=yz\frac{dx}{dt} - xy\frac{dz}{dt},

Cz=xz\frac{dy}{dt}-yz\frac{dx}{dt}.

Hence

Ax + By +Cz=0.\quad\quad\quad(0-7)

If C \ne 0 then by the following well-known theorem in Analytic Geometry:

“If A, B, C and D are constants and A, B, and C are not all zero, then the graph of the equation Ax+By+Cz+D=0 is a plane“,

(0-7) represents a plane in the x-y-z coordinate system.

For C=0, we have

\frac{d}{dt}(\frac{y}{x})=\frac{x\frac{dy}{dt}-y\frac{dx}{dt}}{x^2}\overset{(0-6)}{=}\frac{C}{x^2}=\frac{0}{x^2}=0.

It means

\frac{y}{x}=k

where k is a constant. Simply put,

y=k x.

Hence, (0-7) still represents a plane in the x-y-z coordinate system (see Fig. 3(a)).

Fig. 3

The implication is that the planet moves around the sun on a plane (see Fig. 4).

Fig. 4

By rotating the axes so that the orbit of the planet is on the x-y plane where z \equiv 0 (see Fig. 3), we simplify the equations (0-1)-(0-3) to

\begin{cases} \frac{d^2x}{dt^2}=-\mu\frac{x}{r^3} \\ \frac{d^2y}{dt^2}=-\mu\frac{y}{r^3}\end{cases}.\quad\quad\quad(1-1)

It follows that

\frac{d}{dt}((\frac{dx}{dt})^2 + (\frac{dy}{dt})^2)

= 2\frac{dx}{dt}\cdot\frac{d^2x}{dt^2} + 2 \frac{dy}{dt}\cdot\frac{d^2y}{dt^2}

\overset{(1-1)}{=}2\frac{dx}{dt}\cdot(-\mu\frac{x}{r^3})+ 2\frac{dy}{dt}\cdot(-\mu\frac{y}{r^3})

= -\frac{\mu}{r^3}\cdot(2x\frac{dx}{dt}+2y\frac{dy}{dt})

=  -\frac{\mu}{r^3}\cdot\frac{d(x^2+y^2)}{dt}

= -\frac{\mu}{r^3}\cdot\frac{dr^2}{dt}

= -\frac{\mu}{r^3} \cdot 2r \cdot \frac{dr}{dt}

= -\frac{2\mu}{r^2} \cdot \frac{dr}{dt}.

i.e.,

\frac{d}{dt}((\frac{dx}{dt})^2 + (\frac{dy}{dt})^2) = -\frac{2\mu}{r^2} \cdot \frac{dr}{dt}.

Integrating with respect to t,

(\frac{dx}{dt})^2+(\frac{dy}{dt})^2 = \frac{2\mu}{r} + c_1\quad\quad\quad(1-2)

where c_1 is a constant.

We can also re-write (0-6) as

x\frac{dy}{dt}-y\frac{dx}{dt}=c_2\quad\quad\quad(1-3)

where c_2 is a constant.

Using polar coordinates

\begin{cases} x= r\cos(\theta) \\ y=r\sin(\theta) \end{cases},

Fig. 5

we obtain from (1-2) and (1-3) (see Fig. 5):

(\frac{dr}{dt})^2 + (r\frac{d\theta}{dt})^2-\frac{2\mu}{r} = c_1,\quad\quad\quad(1-4)

r^2\frac{d\theta}{dt} = c_2.\quad\quad\quad(1-5)

If the speed of the planet at time t is v, then from Fig. 6,

Fig. 6

v = \lim\limits_{\Delta t \rightarrow 0}\frac{\Delta l}{\Delta t} = \lim\limits_{\Delta t\rightarrow 0}\frac{l_r}{\Delta t}\overset{l_r=r\Delta \theta}{=}\lim\limits_{\Delta t \rightarrow 0}\frac{r\cdot \Delta \theta}{\Delta t}=r\cdot\lim\limits_{\Delta t\rightarrow 0}\frac{\Delta \theta}{\Delta t}=r\cdot\frac{d\theta}{dt}

gives

v = r\frac{d\theta}{dt}.\quad\quad\quad(1-6)

Suppose at t=0, the planet is at the greatest distance from the sun with r=r_0, \theta=0 and speed v_0. Then the fact that r attains its maximum at t=0 implies (\frac{dr}{dt})_{t=0}=0. Therefore, by (1-4) and (1-5),

(\frac{dr}{dt})^2_{t=0} + (r\frac{d\theta}{dt})^2_{t=0}-\frac{2\mu}{r_0} = 0+ v_0^2-\frac{2\mu}{r_0}=c_1,

(r^2\frac{d\theta}{dt})_{t=0}=r_0v_0=c_2.

i.e.,

c_1=v_0^2-\frac{2\mu}{r_0},\quad\quad\quad(1-7)

c_2=v_0 r_0.\quad\quad\quad(1-8)

We can now express (1-4) and (1-5) as:

\frac{dr}{dt} = \pm \sqrt{c_1+\frac{2\mu}{r}-\frac{c_2^2}{r^2}},\quad\quad\quad(1-9)

\frac{d\theta}{dt} = \frac{c_2}{r^2}.\quad\quad\quad(1-10)

Let

\rho = \frac{c_2}{r}\quad\quad\quad(1-11)

then

\frac{d\rho}{dr} = -\frac{c_2}{r^2},\quad\quad\quad(1-12)

r=\frac{c_2}{\rho}.\quad\quad\quad(1-13)

By chain rule,

\frac{d\theta}{dt} = \frac{d\theta}{d\rho}\cdot\frac{d\rho}{dr}\cdot\frac{dr}{dt}.

Thus,

\frac{d\theta}{d\rho} = \frac{\frac{d\theta}{dt}}{ \frac{d\rho}{dr} \cdot \frac{dr}{dt}}

\overset{(1-10), (1-12), (1-9)}{=} \frac{\frac{c_2}{r^2}}{ (-\frac{c_2}{r^2})\cdot(\pm\sqrt{c_1+\frac{2\mu}{r}-\frac{c_2^2}{r^2}}) }

\overset{(1-11)}{=} \mp\frac{1}{\sqrt{c_1-\rho^2+2\mu(\frac{\rho}{c_2})}}

= \mp\frac{1}{\sqrt{c_1+(\frac{\mu}{c_2})^2-\rho^2+2\mu(\frac{\rho}{c_2}) -(\frac{\mu}{c_2})^2}}

= \mp\frac{1}{\sqrt{c_1+(\frac{\mu}{c_2})^2-(\rho^2-2\mu(\frac{\rho}{c_2}) +(\frac{\mu}{c_2})^2)}}

= \mp\frac{1}{\sqrt{c_1+(\frac{\mu}{c_2})^2-(\rho-\frac{\mu}{c_2})^2}}.

That is,

\frac{d\theta}{d\rho} = \mp\frac{1}{\sqrt{c_1+(\frac{\mu}{c_2})^2-(\rho-\frac{\mu}{c_2})^2}}.\quad\quad\quad(1-14)

Since

c_1+(\frac{\mu}{c_2})^2\overset{(1-7)}{=}v_0^2-\frac{2\mu}{r_0}+(\frac{\mu}{v_0r_0})^2=(v_0-\frac{\mu}{v_0r_0})^2,

we let

\lambda = \sqrt{c_1 + (\frac{\mu}{c_2})^2}=\sqrt{(v_0-\frac{\mu}{v_0r_0})^2}=|v_0-\frac{\mu}{v_0r_0}|.

Notice that \lambda \ge 0.
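The identity behind this definition is easy to confirm with a CAS; a one-line Maxima check using (1-7) and (1-8):

```maxima
/* c1 + (mu/c2)^2 with c1 = v0^2 - 2*mu/r0 and c2 = v0*r0 */
factor(subst([c1 = v0^2 - 2*mu/r0, c2 = v0*r0], c1 + (mu/c2)^2));   /* a perfect square, namely (v0 - mu/(v0*r0))^2 */
```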

By doing so, (1-14) can be expressed as

\frac{d\theta}{d\rho} =\mp \frac{1}{\sqrt{\lambda^2-(\rho-\frac{\mu}{c_2})^2}} .

Take the first case,

\frac{d\theta}{d\rho} = -\frac{1}{\sqrt{\lambda^2-(\rho-\frac{\mu}{c_2})^2}} .

Integrating it with respect to \rho gives

\theta + c = \arccos(\frac{\rho-\frac{\mu}{c_2}}{\lambda})

where c is a constant.

When r=r_0, \theta=0,

c = \arccos(1)=0 or c = \arccos(-1) = \pi.

And so,

\theta = \arccos(\frac{\rho-\frac{\mu}{c_2}}{\lambda}) or \theta+\pi = \arccos(\frac{\rho-\frac{\mu}{c_2}}{\lambda}).

For c = 0,

\lambda\cos(\theta) = \rho-\frac{\mu}{c_2}.

By (1-11), it is

\frac{c_2}{r}-\frac{\mu}{c_2} = \lambda \cos(\theta).\quad\quad\quad(1-15)

Fig. 7

Solving (1-15) for r yields

r=\frac{c_2^2}{c_2 \lambda \cos(\theta)+\mu}=\frac{\frac{c_2^2}{\mu}}{\frac{c_2}{\mu}\lambda \cos(\theta)+1}\overset{p=\frac{c_2^2}{\mu}, e=\frac{c_2 \lambda}{\mu}}{=}\frac{p}{e \cos(\theta) + 1}.

i.e.,

r = \frac{p}{e \cos(\theta) + 1}.\quad\quad\quad(1-16)

Studies in Analytic Geometry show that for an orbit expressed by (1-16), there are four cases to consider, depending on the value of e: e=0 (a circle), 0<e<1 (an ellipse), e=1 (a parabola), and e>1 (a hyperbola).

We can rule out the parabolic and hyperbolic orbits immediately, for they are not periodic. Given the fact that a circle is a special case of an ellipse, it is fair to say:

The orbit of a planet is an ellipse with the Sun at one of the two foci.

In fact, this is what Kepler stated as his first law of planetary motion.

Fig. 8
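Although not part of the derivation, a quick numerical experiment makes the conserved quantity of (1-3) visible along such an orbit. The sketch below assumes \mu = 1 and an arbitrary bound initial state; the numbers are illustrative only:

```maxima
/* integrate (1-1) with RK4 and compare c2 = x*y' - y*x' at the start and at the end of the run */
load(dynamics)$
eqs  : [vx, vy, -x/(x^2 + y^2)^(3/2), -y/(x^2 + y^2)^(3/2)]$
traj : rk(eqs, [x, y, vx, vy], [1, 0, 0, 0.8], [t, 0, 20, 0.001])$
s : last(traj)$                       /* s = [t, x, y, vx, vy] */
[0.8, s[2]*s[5] - s[3]*s[4]];         /* c2 initially and at the end; the two should agree closely */
```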

For c=\pi,

\theta + \pi = \arccos(\frac{\rho-\frac{\mu}{c_2}}{\lambda})

from which we obtain

r=\frac{c_2^2}{c_2 \lambda \cos(\theta+\pi)+\mu}=\frac{\frac{c_2^2}{\mu}}{\frac{c_2}{\mu}\lambda\cos(\theta+\pi)+1}\overset{p=\frac{c_2^2}{\mu}, e=\frac{c_2 \lambda}{\mu}}{=}\frac{p}{e \cos(\theta+\pi) + 1}.\quad\quad(1-17)

This is an ellipse as well. Namely, the result of rotating (1-16) by one hundred eighty degrees, or equivalently, of assuming that r attains its minimum at t=0.

The second case

\frac{d\theta}{d\rho} = +\frac{1}{\sqrt{\lambda^2-(\rho-\frac{\mu}{c_2})^2}}

can be written as

-\frac{d\theta}{d\rho} = -\frac{1}{\sqrt{\lambda^2-(\rho-\frac{\mu}{c_2})^2}} .

Integrating it with respect to \rho yields

-\theta + c = \arccos(\frac{\rho-\frac{\mu}{c_2}}{\lambda})

from which we can obtain (1-16) and (1-17) again.

Fig. 9

Over a time duration \Delta t, a line joining the sun and the planet sweeps out an area A (see Fig. 9):

A = \int\limits_{t}^{t+\Delta t}\frac{1}{2}r\cdot v\;dt \overset{(1-6)}{=} \int\limits_{t}^{t+\Delta t}\frac{1}{2}r\cdot r\frac{d\theta}{dt}\;dt=\int\limits_{t}^{t+\Delta t}\frac{1}{2}r^2\frac{d\theta}{dt}\;dt\overset{(1-5)}{=}\int\limits_{t}^{t+\Delta t}\frac{1}{2}c_2\;dt = \frac{1}{2}c_2\Delta t.

It means

A = \frac{1}{2}c_2\Delta t\quad\quad\quad(2-1)

or that

\frac{A}{\Delta t} = \frac{1}{2}c_2

is a constant. Therefore,

A line joining the Sun and a planet sweeps out equal areas during equal intervals of time.

This is Kepler’s second law. It suggests that the speed of the planet increases as it nears the sun and decreases as it recedes from the sun (see Fig. 10).

Fig. 10

Furthermore, over the interval T, the period of the planet’s revolution around the sun, the line joining the sun and the planet sweeps the entire interior of the planet’s elliptical orbit with semi-major axis a and semi-minor axis b. Since the area enclosed by such an orbit is \pi ab (see “Evaluate a Definite Integral without FTC“), setting \Delta t in (2-1) to T gives

{\frac{1}{2}c_2 T} = {\pi a b} \implies  {\frac{1}{4}c_2^2 T^2}={\pi^2 a^2 b^2} \implies T^2 = \frac{4\pi^2 a^2 b^2}{c_2^2} \implies \frac{T^2}{a^3} = \frac{4\pi^2b^2}{c_2^2a}.\quad\quad\quad(3-1)

While we have p = \frac{c_2^2}{\mu} in (1-16), it is also true that for such an ellipse, p=\frac{b^2}{a} (see “An Ellipse in Its Polar Form“). Hence,

\frac{b^2}{a}=\frac{c_2^2}{\mu}\implies c_2^2=\frac{\mu b^2}{a}.\quad\quad\quad(3-2)

Substituting (3-2) for c_2^2 in (3-1),

\frac{T^2}{a^3} = \frac{4\pi^2 b^2}{(\frac{\mu b^2}{a})a}=\frac{4\pi^2}{\mu} \overset{\mu=GM}{=}\frac{4\pi^2}{GM}.\quad\quad\quad(3-3)

Thus emerges Kepler’s third law of planetary motion:

The square of the orbital period of a planet is directly proportional to the cube of the semi-major axis of its orbit.

Established by (3-3) is the fact that the proportionality constant is the same for all planets orbiting around the sun.
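The algebra leading to (3-3) can be double-checked with a single Maxima line:

```maxima
/* substitute (3-2) into T^2/a^3 = 4*%pi^2*b^2/(c2^2*a); the result is 4*%pi^2/mu, free of a and b */
ratsimp(subst(c2 = sqrt(mu*b^2/a), (4*%pi^2*a^2*b^2/c2^2)/a^3));
```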


A Joint Work with David Deng on CAS-Aided Analysis of Rocket Flight Performance Optimization

Research on rocket flight performance has shown that typical single-stage rockets cannot serve as the carrier vehicle for launching a satellite into orbit. Instead, multi-stage rockets are used in practice, with two-stage rockets being the most common. The jettisoning of stages decreases the mass of the remaining rocket so that it can accelerate rapidly until it reaches its desired velocity and height.

Optimizing flight performance is a non-trivial problem in the field of rocketry. This post examines two-stage rocket flight performance through rigorous mathematical analysis. A Computer Algebra System (CAS) is employed to carry out the symbolic computations in the process. CAS has been proven to be an efficient tool for carrying out laborious mathematical calculations for decades. This post reports on the process and the results of using Omega CAS Explorer, a Maxima-based CAS, to solve this complex problem.

A two-stage rocket consists of a payload P propelled by two stages of masses m_1 (first stage) and m_2 (second stage), both with structure factor 1-e. The exhaust speed of the first stage is c_1, and of the second stage c_2. The initial total mass, m_1 + m_2, is fixed. The ratio b = \frac{P}{m_1+m_2} is small.

Based on Tsiolkovsky’s equation, we derived the multi-stage rocket flight equation [1]. For a two-stage rocket, the final velocity can be calculated from the following:

v = -c_1\log(1-\frac{em_1}{m_1+m_2+P}) - c_2\log(1-\frac{em_2}{m_2+P})\quad\quad\quad(1)

Let a = \frac{m_2}{m_1+m_2}, so that (1) becomes

v=-c_1\log(1-\frac{e(1-a)}{1+b}) - c_2\log(1-\frac{ea}{a+b})\quad\quad\quad(2)

where 0<a<1, b>0, 0<e<1, c_1>0, c_2>0.

We seek an appropriate value of a that maximizes v.

Considering v as a function of a, we compute its derivative \frac{d}{da}v (see Fig. 1).

Fig. 1

We have \frac{d}{da}v = -\frac{e(abc_2e+abc_1e+a^2c_1e+b^2c_2+bc_2-bc_2e-b^2c_1-2abc_1-a^2c_1)}{(b+a)(ae-b-a)(ae-e+b+1)}.

That is, \frac{d}{da}v = \frac{e(abc_2e+abc_1e+a^2c_1e+b^2c_2+bc_2-bc_2e-b^2c_1-2abc_1-a^2c_1)}{(b+a)(b+a(1-e))(ae+b+1-e)}.
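The computation behind Fig. 1 can be reproduced with a short Maxima session (a sketch only; the displayed form may differ from Fig. 1 by algebraic rearrangement):

```maxima
/* v(a) from (2) and its derivative with respect to a */
v  : -c1*log(1 - e*(1 - a)/(1 + b)) - c2*log(1 - e*a/(a + b))$
dv : factor(ratsimp(diff(v, a)));
```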

Fig. 2

As shown in Fig. 2, \frac{d}{da}v can be expressed as

\frac{d}{da}v = \frac{e(A_2a^2+B_2a+C_2)}{(b+a)(b+a(1-e))(ae+b+1-e)}\quad\quad\quad(3)

Notice that A_2 = c_1(e-1) = -c_1(1-e) < 0.

Solving \frac{d}{da}v = 0 for a gives two solutions a_1, a_2 (see Fig. 3)

Fig. 3
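Likewise, the two critical points shown in Fig. 3 come from solving \frac{d}{da}v = 0 for a; a Maxima sketch:

```maxima
/* the critical points a1, a2 as the roots of a quadratic in a */
v : -c1*log(1 - e*(1 - a)/(1 + b)) - c2*log(1 - e*a/(a + b))$
solve(diff(v, a) = 0, a);
```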

We rewrite the expression under the square root in a_1 and a_2 as a quadratic function of e: Ae^2+Be+C, and compute B^2-4AC (see Fig. 4)

Fig. 4

If c_1 \ne c_2, then B^2-4AC< 0, which implies that Ae^2+Be+C is positive since A>0. When c_1=c_2, B^2-4AC=0, so the only zero of Ax^2+Bx+C is \frac{-B}{2A} = \frac{(8b^2+8b)c_1^2}{2(b^2c_1^2+(2b^2+4b)c_1^2+b^2c_1^2)} = 1; since A>0 and 0<e<1, Ae^2+Be+C is still positive.

Since the expression under the square root is positive, both a_1 and a_2 are real-valued, and a_1-a_2 >0 (see Fig. 5), i.e., a_1 > a_2.

Fig. 5

From (3) where A_2  < 0, we deduce the following:

For all a \ge 0, if a>a_1 then \frac{d}{da}v(a) < 0 \quad\quad\quad(\star)

For all a \ge 0, if a_2<a<a_1 then \frac{d}{da}v(a) >0 \quad\quad\quad(\star\star)

For all a \ge 0, if a<a_2 then \frac{d}{da}v(a) < 0 \quad\quad\quad(\star\star\star)

Fig. 6

Moreover, from Fig. 6,

\frac{d}{da}v(0)\cdot \frac{d}{da}v(1) = -\frac{e^2(c_1e+bc_2-bc_1-c_1)(c_2e-bc_2-c_2+bc_1)}{b(b+1)(e-b-1)^2}=-\frac{e^2(c_1e+bc_2-bc_1-c_1)(c_2e-bc_2-c_2+bc_1)}{b(b+1)(b+1-e)^2}.

Since the expression in the numerator of \frac{d}{da}v(0)\cdot \frac{d}{da}v(1), namely

(c_1e + bc_2-bc_1-c_1)(c_2e-bc_2-c_2 +bc_1)

= (c_1(e-1-b) + bc_2)(c_2(e-1-b)+bc_1)

=c_1c_2(e-1-b)^2+b^2c_1c_2+b(c_1^2+c_2^2)(e-1-b)

\ge c_1c_2(e-1-b)^2 + b^2c_1c_2+2bc_1c_2(e-1-b)

=c_1c_2(e-1-b+b)^2

=c_1c_2(e-1)^2>0,

it follows that

\frac{d}{da}v(0) \cdot \frac{d}{da}v(1) < 0\quad\quad\quad(4)

The implication is that \frac{d}{da}v has at least one zero point between 0 and 1.

However, if both a_1 and a_2, the two known zero points of \frac{d}{da}v, were between 0 and 1, then by (\star) and (\star\star\star), \frac{d}{da}v(0)\cdot  \frac{d}{da}v(1) would be positive, which contradicts (4). Therefore, \frac{d}{da}v has exactly one zero point between 0 and 1.

We now proceed to show that the zero lying between 0 and 1 is a_1.

There are two cases to consider.

Case 1 (c_1 \le c_2) \frac{d}{da} v(0)=\frac{e(bc_2(1-e)+b^2(c_2-c_1))}{b^2(b+1-e)} >0 since b>0, c_2>0, 0<e<1 and c_2-c_1\ge 0. But if a_2 were positive, this would contradict (\star\star\star). Therefore, a_2 must not be positive.

Case 2 (c_1 > c_2) The denominator of a_2 is negative since c_1 > 0, e < 1. However, (-bc_2-bc_1)e + 2bc_1, the terms not under the square root in the numerator of a_2, can be expressed as bc_1(1-e) + b(c_1-c_2e). This expression is positive, since c_1 >  c_2 > 0 and 0<e<1 imply that c_1-c_2e >  c_1 - c_1e = c_1(1-e) >0. Therefore, a_2< 0.

The fact that only a_1 lies between 0 and 1, together with (\star) and (\star\star) proves that a_1 is where the global maximum of v occurs.

a_1 can be expanded as a Taylor series in b (see Fig. 7)

Fig. 7

The result -\frac{b((c_2+c_1)e-2c_1)}{2c_1e-2c_1} - \frac{\sqrt{b}\sqrt{c_2}}{\sqrt{c_1}} produced by the CAS can be written as -\sqrt{\frac{bc_2}{c_1}} + O(b). However, it is incorrect, as \lim\limits_{b\rightarrow 0} \frac{-\sqrt{\frac{bc_2}{c_1}}+O(b)}{\sqrt{b}} = -\sqrt{\frac{c_2}{c_1}} < 0 would suggest that a_1 is a negative quantity when b is small.

To obtain a correct Taylor series expansion for a_1, we first rewrite a_1 as \sqrt{b^2D+bE}+bF, where

D=\frac{c_2^2e^2+2c_1c_2e^2+c_1^2e^2-8c_1c_2e+4c_1c_2}{(2c_1-2c_1e)^2}

E=\frac{4c_1c_2e^2-8c_1c_2e+4c_1c_2}{(2c_1-2c_1e)^2}=\frac{c_2}{c_1}

F=\frac{c_1e+c_2e-2c_1}{2c_1-2c_1e}

Its first order Taylor series is then computed (see Fig. 8)

Fig. 8

The first term of the result can be written as O(b). Substituting the value of E into the result, we have:

a_1= \sqrt{bE} + O(b) = \sqrt{\frac{bc_2}{c_1}} + O(b)\quad\quad\quad(6)

To compute v(a_1) from (6), we substitute \sqrt{Db^2+Eb} + Fb for a in (2) and compute its Taylor series expansion about b=0 (see Fig. 9).

Fig. 9

Writing its first term as O(b) and substituting the value of E yields:

v = -(c_1+c_2)\log(1-e)+\frac{\sqrt{b}\,e\,(c_1E^{1/2}+c_2E^{-1/2})}{e-1} + O(b)

= -(c_1+c_2)\log(1-e) + \frac{\sqrt{b}(c_1e\sqrt{\frac{c_2}{c_1}}+c_2e\sqrt{\frac{c_1}{c_2}})}{e-1}+ O(b)

= -(c_1+c_2)\log(1-e)-2e\frac{\sqrt{c_1c_2b}}{1-e} + O(b)

It is positive when b is small.

We have shown the time-saving, error-reducing advantages of using a CAS to aid the manipulation of complex mathematical expressions. On the other hand, we also caution that, just as is the case with any software system, a CAS may contain software bugs that need to be detected and weeded out with a well-trained mathematical mind.

References

[1] M. Xue, Viva Rockettry! Part 2 https://vroomlab.wordpress.com/2019/01/31/viva-rocketry-part-2

[2] Omega: A Computer Algebra System Explorer http://www.omega-math.com