# Having a cow!

Let’s have a cow first:

There are more:

For example:

# Constructing the tangent line of a circle without calculus

The tangent line of a circle can be defined as a line that intersects the circle at one point only.

Place a circle of radius $r$, centered at the origin, in the rectangular coordinate system.

Let $(x_0, y_0)$ be a point on the circle. The tangent line at $(x_0, y_0)$ is a line that intersects the circle at $(x_0, y_0)$ only.

Let’s first find a function $y=kx+m$ that represents the line.

From the circle’s equation $x^2+y^2=r^2$, we have

$y^2=r^2-x^2$

Since the line intersects the circle at $(x_0, y_0)$ only,

$r^2-x^2=(kx+m)^2$

has only one solution.

That means

$(k^2+1)x^2+2kmx+m^2-r^2 =0$

has only one solution, i.e., its discriminant is zero:

$(2km)^2-4(k^2+1)(m^2-r^2)=0\quad\quad\quad(1)$

By the definition of slope,

$kx+m-y_0 = k(x-x_0)$.

It follows that

$m =y_0-kx_0\quad\quad\quad(2)$

Substituting (2) into (1) and solving for $k$ gives

$k = \frac{-x_0}{y_0}\quad\quad\quad(3)$

The slope of line connecting $(0, 0)$ and $(x_0, y_0)$ where $x_0 \neq 0$ is $\frac{y_0}{x_0}$.

Since $\frac{-x_0}{y_0}\cdot \frac{y_0}{x_0} = -1$, the tangent line is perpendicular to the line connecting $(0, 0)$ and $(x_0, y_0)$.

Substituting (3) into $y = k x +m$, we have

$y=-\frac{x_0}{y_0} x + m\quad\quad\quad(4)$.

The fact that the line intersects the circle at $(x_0, y_0)$ means

$y_0 = -\frac{x_0^2}{y_0} + m$

or

$y_0^2=-x_0^2+ my_0$.

Hence,

$m =\frac{x_0^2+y_0^2}{y_0} = \frac{r^2}{y_0}$.

Substituting this back into (4) gives

$x_0 x +y_0 y = r^2\quad\quad\quad(5)$

(5) is derived under the assumption that $y_0 \neq 0$. However, by letting $y_0 =0$ in (5), we obtain the two tangent lines that cannot be expressed in the form $y=kx+m$:

$x=-r, x=r$
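Equation (5) can be spot-checked numerically. In the Python sketch below, the radius $r=5$ and the point $(3, 4)$ are arbitrary illustrative choices, not values from the derivation:

```python
import math

# Checking (5): for a point (x0, y0) on the circle x^2 + y^2 = r^2 with
# y0 != 0, the line x0*x + y0*y = r^2 should meet the circle once only.
# r = 5 and (x0, y0) = (3, 4) are arbitrary illustrative choices.
r = 5.0
x0 = 3.0
y0 = math.sqrt(r**2 - x0**2)           # (x0, y0) lies on the circle

# Substituting y = (r^2 - x0*x)/y0 into x^2 + y^2 = r^2 yields the
# quadratic A*x^2 + B*x + C = 0; tangency means a zero discriminant.
A = 1 + (x0 / y0) ** 2
B = -2 * x0 * r**2 / y0**2
C = (r**2 / y0) ** 2 - r**2
discriminant = B**2 - 4 * A * C
print(abs(discriminant))               # ~0: exactly one intersection
print(-B / (2 * A))                    # the intersection is at x = x0
```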

# Constructing the tangent line of a quadratic without calculus

The tangent line of the quadratic function $y=ax^2+bx+c$ at a point $(x_0, y_0)$, where $y_0=ax_0^2+bx_0+c$, is a line $y=kx+m$ that intersects the parabola at $(x_0, y_0)$ only.

Requiring the form $y=kx+m$ immediately excludes the vertical line $x=x_0$, which also intersects $y=ax^2+bx+c$ at $(x_0, ax_0^2+bx_0+c)$ only (see Fig. 1).

Fig. 1

Let’s find $k$.

That line $y = kx+m$ intersects $y=ax^2+bx+c$ at $(x_0, ax_0^2+bx_0+c)$ only means that the quadratic equation

$ax^2+bx +c =kx +m$

has only one solution. That is, the discriminant of $ax^2+(b-k)x+(c-m) =0$ is zero:

$(b-k)^2-4a(c-m) = 0\quad\quad\quad(1)$

Fig. 2

And, by the definition of slope (see Fig. 2),

$(x-x_0)k = (kx+m)-(ax_0^2+bx_0+c)$.

It follows that

$m = (ax_0^2+bx_0+c)-x_0 k\quad\quad\quad(2)$

Substituting (2) into (1), we have

$(b-k)^2-4a\left(c-\left((ax_0^2+bx_0 + c)-x_0 k\right)\right)=0$.

Solving it for $k$ gives

$k = 2 a x_0 +b$.

Fig. 3
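The result $k = 2ax_0+b$ can be spot-checked numerically. In the Python sketch below, the coefficients $a, b, c$ and the point $x_0$ are arbitrary illustrative choices:

```python
# Illustrative check that k = 2*a*x0 + b makes the line tangent.
# a, b, c and x0 below are arbitrary sample values, not from the text.
a, b, c = 2.0, -3.0, 1.0
x0 = 1.5
y0 = a * x0**2 + b * x0 + c            # the point of tangency

k = 2 * a * x0 + b                     # slope from the derivation
m = y0 - x0 * k                        # equation (2)

# Tangency condition (1): discriminant of a*x^2 + (b-k)*x + (c-m) = 0
discriminant = (b - k) ** 2 - 4 * a * (c - m)
print(discriminant)                    # ~0: the line meets the parabola once
```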

# A Proof without Calculus

Now, solve $\sin(x)=x$.

By inspection, $x=0$.

Is this the only solution?

Visually, it is difficult to tell (see Fig. 1 and Fig. 2).

Fig. 1

Fig. 2

However, we can prove that $0$ is the only solution:

Fig. 3

If $0 < x \leq 1 < \frac{\pi}{2}$ then from Fig. 3, we have

Area of triangle OAB < Area of circular sector OAB.

That is,

$\frac{1}{2}\cdot 1 \cdot \sin(x)< \frac{1}{2}\cdot 1 \cdot x$.

Hence,

$\forall 0< x \leq 1, \sin(x) < x\quad\quad\quad(1)$

If $x>1$ then

$\sin(x) \leq 1 \overset{x>1}{\implies} \sin(x) \leq 1< x \implies \forall x>1, \sin(x) < x\quad\quad\quad(2)$

Putting (1) and (2) together, we have

$\forall x > 0, \sin(x) < x\quad\quad\quad(3)$

If $x < 0$ then

$-x > 0 \overset{(3)}\implies \sin(-x) < -x \overset{\sin(-x)=-\sin(x)}\implies -\sin(x) < -x \implies \sin(x) > x$

i.e.,

$\forall x < 0, \sin(x) > x\quad\quad\quad(4)$

Therefore, by (3) and (4), we conclude that

$\sin(x)=x$ only when $x = 0$.
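The conclusion can be sanity-checked on a grid of sample points. This is a numeric spot check, not a substitute for the proof above:

```python
import math

# Spot check of (3) and (4) on a grid: sin(x) < x for x > 0 and
# sin(x) > x for x < 0 (a numeric sanity check, not a proof).
xs = [i / 10 for i in range(1, 101)]    # 0.1, 0.2, ..., 10.0
assert all(math.sin(x) < x for x in xs)
assert all(math.sin(-x) > -x for x in xs)
print("sin(x) = x only at x = 0 on this grid")
```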

# From proof to simpler proof

Solve $\sin(x)=2x$ for $x$.

By mere inspection, we have $x=0$.

Visually, it appears that $0$ is the only solution (see Fig. 1 or Fig. 2).

Fig. 1

Fig. 2

To show that $0$ is the only solution of $\sin(x)=2x$ analytically, let

$f(x) = \sin(x)-2x$

That $0$ is a solution of $\sin(x)=2x$ means

$f(0) = 0\quad\quad\quad(1)$

Suppose there is a solution

$x^* \neq 0\quad\quad\quad(2)$

then,

$f(x^*)=\sin(x^*)-2x^*=0\quad\quad\quad(3)$

Since $f(x)$ is continuous and differentiable on $(-\infty, +\infty)$,

by Lagrange’s Mean-Value Theorem (see “A Sprint to FTC“), there exists $c$ between $0$ and $x^*$ such that

$f(x^*)-f(0)=f'(c)(x^*-0)$

$\overset{(1), (3)}{\implies} 0 = f'(c) x^*\quad\quad\quad(4)$

$\overset{(2)}{\implies} f'(c)=0\quad\quad\quad(5)$.

We know

$f'(x) = \cos(x)-2$.

From (5), we have

$\cos(c)-2=0$.

i.e.,

$\cos(c)=2$.

This is not possible since $\forall x \in R, -1 \le \cos(x) \le 1$.

A simpler alternative that avoids directly applying Lagrange’s Mean-Value Theorem is:

$f(x) = \sin(x)-2x \implies f'(x) = \cos(x) - 2 \overset{-1 \le \cos(x) \le 1}{\implies} f'(x) < 0\implies$

$f(x) =\sin(x)-2x$ is a strictly decreasing function.

Since $f(0) = 0, \forall x<0, f(x) > 0$ and $\forall x>0, f(x) < 0$.

Therefore, $0$ is the only solution of $f(x)=0$. i.e.,

$0$ is the only solution of $\sin(x) = 2x$.
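The simpler argument can likewise be spot-checked numerically on a grid (illustrative only; the argument above is what establishes the claim):

```python
import math

# Spot check of the simpler argument: f(x) = sin(x) - 2x is strictly
# decreasing, so it crosses zero at most once -- and f(0) = 0.
def f(x):
    return math.sin(x) - 2 * x

xs = [i / 10 for i in range(-50, 51)]   # grid on [-5, 5]
assert all(f(xs[i]) > f(xs[i + 1]) for i in range(len(xs) - 1))
assert f(0.0) == 0.0
print("f is decreasing on the grid; its only root there is x = 0")
```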

# Analyzing Heron’s Algorithm

Finding the value of $\sqrt{a}$ for $a>0$ is equivalent to seeking a positive number $x$ whose square yields $a$: $x^2=a$. In other words, $\sqrt{a}$ is the positive solution of $x^2-a=0$.

We can find $\sqrt{4}$ by solving $x^2-4=0$. It is $2$ by mere inspection. $\sqrt{9}$ can also be obtained by solving $x^2-9=0$, again, by mere inspection. But ‘mere inspection’ can only go so far. For example, what is $\sqrt{15241578750190521}$?

Methods for finding the square root of a positive real number date back to the ancient Greek and Babylonian eras. Heron’s algorithm, also known as the Babylonian method, is named after the $1^{st}$-century Greek mathematician Hero of Alexandria. It proceeds as follows:

1. Begin with an arbitrary positive starting value $x_0$.
2. Let $x_{k+1}$ be the average of $x_k$ and $\frac{a}{x_k}$.
3. Repeat step 2 until the desired accuracy is achieved.
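As a quick illustration of the steps above, here is a trace of two iterations for $a = 9$; the starting value $x_0 = 4$ is an arbitrary choice:

```python
# Tracing two iterations of Heron's algorithm for a = 9 by hand.
# The starting value x0 = 4 is an arbitrary choice (step 1).
a = 9
x0 = 4
x1 = (x0 + a / x0) / 2   # step 2: average of x0 and a/x0, giving 3.125
x2 = (x1 + a / x1) / 2   # step 2 again, giving 3.0025
print(x1, x2)            # already close to sqrt(9) = 3 after two steps
```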

This post offers an analysis of Heron’s algorithm. Our aim is a better understanding of the algorithm through mathematics.

Let us begin by arbitrarily choosing a number $x_0>0$. If $x_0^2-a=0$, then $x_0$ is $\sqrt{a}$, and we have guessed the exact value of the square root. Otherwise, we are in one of the following cases:

Case 1: $x_0^2-a > 0 \implies x_0 > \sqrt{a} \implies \sqrt{a} x_0 > a \implies \sqrt{a} > \frac{a}{x_0} \implies \frac{a}{x_0} < \sqrt{a} < x_0$

Case 2: $x_0^2-a <0 \implies x_0 < \sqrt{a} \implies \sqrt{a} x_0 < a \implies \sqrt{a} < \frac{a}{x_0} \implies x_0 < \sqrt{a} < \frac{a}{x_0}$

Both cases indicate that $\sqrt{a}$ lies somewhere between $\frac{a}{x_0}$ and $x_0$.

Let us define $e_0$ as the relative error of approximating $\sqrt{a}$ by $x_0$:

$e_0 =\frac{x_0-\sqrt{a}}{\sqrt{a}}$

The closer $e_0$ is to $0$, the better $x_0$ is as an approximation of $\sqrt{a}$.

Since $x_0>0, \sqrt{a}>0$,

$e_0 + 1 = \frac{x_0-\sqrt{a}}{\sqrt{a}} + 1 = \frac{x_0}{\sqrt{a}} > 0\quad\quad\quad(1)$

By (1),

$x_0 = \sqrt{a} (e_0+1)\quad\quad\quad(2)$

Let $x_1$ be the mid-point of $x_0$ and $\frac{a}{x_0}$:

$x_1 = \frac{1}{2} (x_0 + \frac{a}{x_0})\quad\quad\quad(3)$

and, $e_1$ the relative error of $x_1$ approximating $\sqrt{a}$,

$e_1 = \frac{x_1-\sqrt{a}}{\sqrt{a}}\quad\quad\quad(4)$

We have

$x_1-\sqrt{a}\overset{(3)}{=}\frac{1}{2}(x_0+\frac{a}{x_0}) - \sqrt{a} \overset{(2)}{=}\frac{1}{2}(\sqrt{a}(e_0+1)+\frac{a}{\sqrt{a}(e_0+1)})-\sqrt{a}=\frac{\sqrt{a}}{2}\frac{e_0^2}{e_0+1}\overset{(1)}{>}0\quad(5)$

$e_1 = \frac{x_1-\sqrt{a}}{\sqrt{a}} \overset{(5)}{=} \frac{\frac{\sqrt{a}}{2}\frac{e_0^2}{e_0+1}}{\sqrt{a}} = \frac{1}{2}\frac{e_0^2}{e_0+1}\overset{(1)}{ > }0\quad\quad\quad(6)$

By (5),

$x_1 > \sqrt{a} \implies \sqrt{a} x_1 > a \implies \sqrt{a} > \frac{a}{x_1}\implies \frac{a}{x_1} < \sqrt{a} < x_1$

i.e., $\sqrt{a}$ lies between $\frac{a}{x_1}$ and $x_1$.

We can generate more values in stages by

$x_{k+1} = \frac{1}{2}(x_k + \frac{a}{x_k}), \;k=1, 2, 3, ...\quad\quad(8)$

Clearly,

$\forall k, x_k >0$.

Let

$e_k = \frac{x_k-\sqrt{a}}{\sqrt{a}}$.

We have

$x_k = \sqrt{a}(e_k+1)\quad\quad\quad(9)$

and

$e_{k+1} = \frac{x_{k+1}-\sqrt{a}}{\sqrt{a}}=\frac{\frac{1}{2}(\sqrt{a}(e_k+1) + \frac{a}{\sqrt{a}(e_k+1)})-\sqrt{a}}{\sqrt{a}}=\frac{1}{2}\frac{e_k^2}{e_k + 1}\quad\quad\quad(10)$
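Before proceeding, the recurrence (10) can be verified numerically; $a = 2$ and $x_0 = 99$ below are illustrative choices:

```python
import math

# Spot check of the error recurrence (10) along a few iterations of (8).
# a = 2 and the starting value x = 99 are illustrative choices.
a = 2.0
root = math.sqrt(a)
x = 99.0
for _ in range(5):
    e = (x - root) / root               # e_k
    x = (x + a / x) / 2                 # x_{k+1}, the iteration (8)
    e_next = (x - root) / root          # e_{k+1}
    assert math.isclose(e_next, 0.5 * e**2 / (e + 1), rel_tol=1e-9)
    assert 0 < e_next < e / 2           # the error at least halves
print("recurrence (10) holds numerically")
```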

Consequently, we can prove that

$\forall k \ge 1, e_{k+1} > 0\quad\quad\quad(11)$

by induction:

When $k = 1, e_{1+1} \overset{(10)}{=} \frac{1}{2}\frac{e_1^2}{e_1+1} \overset{(6)} {>} 0$.

Assume when $k = p$,

$e_{p+1} =\frac{1}{2}\frac{e_p^2}{e_p+1} >0\quad\quad\quad(12)$

When $k = p + 1, e_{(p+1)+1} \overset{(10)}{=}\frac{1}{2}\frac{e_{p+1}^2}{e_{p+1} + 1}\overset{(12)}{>}0$.

Since (11) implies

$\forall k \ge 2, e_k > 0\quad\quad\quad(13)$

(6) and (13) together imply

$\forall k \ge 1, e_k > 0\quad\quad\quad(14)$

It follows that

$\forall k \ge 1, 0 \overset{(14)}{<} e_{k+1} \overset{(10)}{=} \frac{1}{2}\frac{e_k^2}{e_k+1} \overset{(14)}{<} \frac{1}{2}\frac{e_k^2}{e_k}=\frac{1}{2}e_k\quad\quad\quad(15)$

and

$\forall k \ge 1, 0 \overset{(14)}{<} e_{k+1} \overset{(10)}{=} \frac{1}{2}\frac{e_k^2}{e_k+1}\overset{(14)}{<}\frac{1}{2}\frac{e_k^2}{1} = \frac{1}{2}e_k^2 < e_k^2\quad\quad\quad(16)$

(15) indicates that the error is cut at least in half at each stage after the first.

Therefore, we will reach a stage $n$ such that

$0 < e_n < 1\quad\quad\quad(17)$

It can be shown that

$\forall k \ge 1, 0< e_{n+k} < e_n^{2^k}\quad\quad\quad(18)$

by induction:

When $k = 1$,

$0 \overset{(14)}{<} e_{n+1} \overset{(10)}{=}\frac{1}{2}\frac{e_n^2}{e_n+1} < e_n^2=e_n^{2^1}\quad\quad\quad(19)$

Assume when $k = p$,

$0 < e_{n+p} < e_n^{2^p}\quad\quad\quad(20)$

When $k = p+1$,

$0 \overset{(14)}{<} e_{n+(p+1)} = e_{(n+p)+1} \overset{(16)}{<}e_{n+p}^2\overset{(20)}{<}(e_n^{2^p})^2 = e_n^{2^p\cdot 2} = e_n^{2^{p+1}}$

From (17) and (18), we see that eventually, the error decreases at an exponential rate.

Illustrated below is a Maxima script that implements Heron’s algorithm:

```maxima
/*
a   - the positive number whose square root we are seeking
x0  - the initial guess
eps - the desired accuracy
*/
square_root(a, x0, eps) := block(
  [xk],
  numer:true,
  xk:x0,
  loop,
  if abs(xk^2-a) < eps then return(xk),
  xk:(xk+a/xk)/2,
  go(loop)
)$
```


Suppose we want to find $\sqrt{2}$ and start with $x_0 = 99$. Fig. 1 shows Heron’s algorithm in action.

Fig. 1
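For readers without Maxima, the same iteration can be sketched in Python:

```python
# A Python rendering of the Maxima script above.
def square_root(a, x0, eps):
    xk = x0
    while abs(xk * xk - a) >= eps:     # stop once |xk^2 - a| < eps
        xk = (xk + a / xk) / 2         # the average of xk and a/xk
    return xk

# The same run as in the figure: sqrt(2) starting from x0 = 99.
approx = square_root(2, 99, 1e-10)
print(approx)                          # approximately 1.41421356
```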

# Déjà vu!

The stages of a two-stage rocket have initial masses $m_1$ and $m_2$ respectively and carry a payload of mass $P$. Both stages have equal structure factors and equal relative exhaust speeds. If the rocket mass, $m_1+m_2$, is fixed, show that the condition for maximal final speed is

$m_2^2 + P m_2 = P m_1$.

Find the optimal ratio $\frac{m_1}{m_2}$ when $\frac{P}{m_1+m_2} = b$.

According to the multi-stage rocket’s flight equation (see “Viva Rocketry! Part 2“), the final speed of a two-stage rocket is

$v = -c \log(1-\frac{e \cdot m_1}{m_1 + m_2 +P})- c\log(1-\frac{e \cdot m_2}{m_2+P})$

Letting $m_0 = m_1 + m_2$, we have

$m_1 = m_0-m_2$

and,

$v = -c \log(1-\frac{e \cdot (m_0-m_2)}{m_0 +P})- c\log(1-\frac{e \cdot m_2}{m_2+P})$

Differentiating $v$ with respect to $m_2$ gives

$v' = \frac{c(e-1)e(2m_2 P-m_0 P +m_2^2)}{(P+m_2)(P-e m_2+m_2)(P+e m_2-e m_0+m_0)}= \frac{c(e-1)e(2m_2 P-m_0 P+m_2^2)}{(P+m_2)(P+(1-e)m_2)(P+e m_2 + (1-e)m_0)}\quad\quad\quad(1)$

It follows that $v'=0$ implies

$2m_2 P-m_0 P + m_2^2 =0$.

That is, $2 m_2 P - (m_1+m_2) P + m_2^2 = (m_2-m_1) P + m_2^2=0$. i.e.,

$m_2^2 + P m_2 = P m_1\quad\quad\quad(2)$

It is the condition for an extreme value of $v$; specifically, the condition to attain a maximum (see Exercise-2).
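Condition (2) can be checked against the flight equation numerically. In the Python sketch below, $c = 1$, $e = 0.8$, $P = 1$ and $b = 0.05$ are illustrative values, not values from the text:

```python
import math

# Numeric check that condition (2) picks out a maximum of v.
# c = 1, e = 0.8, P = 1 and b = 0.05 are illustrative values.
c, e, P, b = 1.0, 0.8, 1.0, 0.05
m0 = P / b                                       # fixed total rocket mass

def v(m2):
    m1 = m0 - m2
    return (-c * math.log(1 - e * m1 / (m0 + P))
            - c * math.log(1 - e * m2 / (m2 + P)))

# m2* solves m2^2 + P*m2 = P*m1 = P*(m0 - m2), i.e. m2^2 + 2*P*m2 - P*m0 = 0
m2_star = -P + math.sqrt(P**2 + P * m0)
assert v(m2_star) > v(m2_star * 0.9)
assert v(m2_star) > v(m2_star * 1.1)
print("v is largest at the m2 satisfying (2)")
```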

When $\frac{P}{m_1+m_2} = b$, solving

$\begin{cases} (m_2-m_1)P+m_2^2=0\\ \frac{P}{m_1+m_2} = b\end{cases}$

yields two pairs:

$\begin{cases} m_1=\frac{(\sqrt{b}\sqrt{b+1}+b+1)P}{b}\\m_2= -\frac{\sqrt{b^2+b}P+bP}{b}\end{cases}$

and

$\begin{cases} m_1= - \frac{(\sqrt{b}\sqrt{b+1}-b-1)P}{b}\\ m_2=\frac{\sqrt{b^2+b}P-bP}{b}\end{cases}\quad\quad\quad(3)$

Only (3) is valid (see Exercise-1).

Hence

$\frac{m_1}{m_2} = -\frac{(\sqrt{b}\sqrt{b+1}-b-1)P}{\sqrt{b^2+b}P-b P} = \frac{\sqrt{b+1}}{\sqrt{b}} = \sqrt{1+\frac{1}{b}}$
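A quick numeric check of (3) and the ratio just derived, with $P = 1$ and $b = 0.05$ as illustrative values:

```python
import math

# Numeric check of (3): with P = b*(m1 + m2), the pair (m1, m2) in (3)
# satisfies condition (2), and m1/m2 equals sqrt(1 + 1/b).
# P = 1 and b = 0.05 are illustrative values.
b = 0.05
P = 1.0
m1 = -(math.sqrt(b) * math.sqrt(b + 1) - b - 1) * P / b
m2 = (math.sqrt(b**2 + b) - b) * P / b

assert math.isclose(P / (m1 + m2), b, rel_tol=1e-9)          # constraint
assert math.isclose(m2**2 + P * m2, P * m1, rel_tol=1e-9)    # condition (2)
assert math.isclose(m1 / m2, math.sqrt(1 + 1 / b), rel_tol=1e-9)
print("condition (2) and the optimal ratio check out")
```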

The entire process is captured in Fig. 2.

Fig. 2

Exercise-1 Given $b>0, P>0$, prove:

1. $- \frac{(\sqrt{b}\sqrt{b+1}-b-1)P}{b}>0$
2. $\frac{\sqrt{b^2+b}P-bP}{b}>0$

Exercise-2 From (1), prove the extreme value attained under (2) is a maximum.