# A Computer Algebra Aided Proof of Feuerbach’s Nine-Point Circle Theorem

Feuerbach’s Nine-Point Circle Theorem states that for any triangle, a single circle can be constructed passing through the following nine significant points:

1. The midpoint of each side of the triangle
2. The foot of each altitude
3. The midpoint of the line segment from each vertex of the triangle to the orthocenter

Let’s prove it with the aid of Omega CAS Explorer.

Step-1 Set up the circle equation:

The equation

$x^2 + y^2 + d \cdot x + e \cdot y +f = 0 \quad\quad\quad(1)$

describes a circle centered at $( -{d \over 2}, -{e \over 2})$ with radius $r$ given by

$r^2 = { {d^2 + e^2-4f} \over 4}\quad\quad\quad(2)$

provided (2) is positive.
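A quick numeric spot-check of (1) and (2) (a hypothetical sketch with arbitrary sample coefficients, not part of the original Maxima session):

```python
from math import isclose, hypot

# Sample coefficients for (1); any d, e, f with d^2 + e^2 - 4f > 0 will do.
d, e, f = 2.0, -6.0, 1.0

cx, cy = -d / 2, -e / 2               # center, from completing the square
r2 = (d * d + e * e - 4 * f) / 4      # squared radius, equation (2)

# Take a point at distance r from the center; it must satisfy (1).
x, y = cx + r2 ** 0.5, cy
assert isclose(hypot(x - cx, y - cy) ** 2, r2, rel_tol=1e-12)
assert isclose(x * x + y * y + d * x + e * y + f, 0.0, abs_tol=1e-9)
```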

Step-2 Find $d, e, f$ using the three side midpoints $p_1, p_2, p_3$:

Place the triangle’s vertices at $A = (-x_1, 0)$, $B = (x_1, 0)$ and $C = (x_2, y_2)$, with $x_1 > 0$ and $y_2 \neq 0$. The midpoints of sides $BC$, $AB$ and $AC$ are then $p_1 = ({ {x_1+x_2} \over 2}, {y_2 \over 2})$, $p_2 = (0, 0)$ and $p_3 = ({ {x_2-x_1} \over 2}, {y_2 \over 2})$ respectively:

ceq: x^2+y^2+d*x+e*y+f=0;

eq1: ev(ceq, x=(x1+x2)/2, y=y2/2);

eq2: ev(ceq, x=0, y=0);

eq3: ev(ceq, x=(x2-x1)/2, y=y2/2);

sol: linsolve([eq1, eq2, eq3], [d,e,f]);

$\implies d = -x_2, e = - {{y_2^2-x_2^2+x_1^2} \over {2 y_2}}, f = 0$

The new circle equation is

nceq: ev(ceq, sol);

$\implies - {{y(y_2^2-x_2^2+x_1^2)} \over {2 y_2}} +y^2 - x \cdot x_2+x^2 = 0$

Evaluate (2)

ev(d^2+e^2-4*f, sol);

$\implies { (y_2^2-x_2^2+x_1^2)^2 \over {4 y_2^2}} + x_2^2$,

which is always positive for $x_1 > 0, y_2 \neq 0$, so (1) indeed describes a circle.
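As a sanity check on the linsolve result (a sketch using one arbitrary concrete triangle, not part of the original Maxima session), the solved coefficients can be verified with exact rational arithmetic in Python:

```python
from fractions import Fraction as F

# Concrete triangle: A = (-x1, 0), B = (x1, 0), C = (x2, y2).
x1, x2, y2 = F(3), F(1), F(4)

# Coefficients from the linsolve result.
d = -x2
e = -(y2**2 - x2**2 + x1**2) / (2 * y2)
f = F(0)

def on_circle(x, y):
    return x*x + y*y + d*x + e*y + f == 0

# The three side midpoints all satisfy the circle equation.
assert on_circle((x1 + x2) / 2, y2 / 2)   # midpoint of BC
assert on_circle(F(0), F(0))              # midpoint of AB
assert on_circle((x2 - x1) / 2, y2 / 2)   # midpoint of AC

# And the discriminant (2) is positive, so it really is a circle.
assert d*d + e*e - 4*f > 0
```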

Step-3 Show $p_4, p_5, p_6$, the feet of the three altitudes, are on the circle:

The altitude from $C$ is vertical, so its foot is $p_5 = (x_2, 0)$:

p5: ev(nceq, x=x2, y=0);

$\implies 0 = 0$

The foot $p_4$ of the altitude from $A$ is the intersection of side $BC$ with the perpendicular line through $A$:

p4: linsolve([(x2-x1)*y=y2*(x-x1), y2*y=-(x2-x1)*(x+x1)], [x,y]);

ratsimp(ev(nceq, p4));

$\implies 0 = 0$

The foot $p_6$ of the altitude from $B$ is found the same way, using side $AC$:

p6: linsolve([(x2+x1)*y=y2*(x+x1), y*y2=-(x2+x1)*(x-x1)], [x,y]);

ratsimp(ev(nceq, p6));

$\implies 0 = 0$

Step-4 Find the orthocenter:

The orthocenter is the intersection of the altitude from $C$ (the vertical line $x = x_2$) and the altitude from $B$:

o: linsolve([x=x2, y*y2=-(x2+x1)*(x-x1)], [x,y]);

$\implies x = x_2, y = - {{x_2^2-x_1^2} \over y_2}$

Step-5 Show $p_7, p_8, p_9$, the midpoints of the segments joining each vertex to the orthocenter, are on the circle:

xo: rhs(o[1]);

yo: rhs(o[2]);

p7: ratsimp(ev(nceq, x=(xo+x2)/2, y=(yo+y2)/2));

$\implies 0 = 0$

p8: ratsimp(ev(nceq, x=(xo-x1)/2, y=(yo+0)/2));

$\implies 0 = 0$

p9: ratsimp(ev(nceq, x=(xo+x1)/2, y=(yo+0)/2));

$\implies 0 = 0$
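The steps above can be replayed for one concrete triangle with exact rational arithmetic (a Python sketch; the triangle $x_1 = 3, x_2 = 1, y_2 = 4$ is an arbitrary choice):

```python
from fractions import Fraction as F

x1, x2, y2 = F(3), F(1), F(4)          # vertices A=(-3,0), B=(3,0), C=(1,4)
d, e, f = -x2, -(y2**2 - x2**2 + x1**2) / (2 * y2), F(0)

def on_circle(p):
    x, y = p
    return x*x + y*y + d*x + e*y + f == 0

def solve2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1, a2*x + b2*y = c2 by Cramer's rule."""
    det = a1*b2 - a2*b1
    return ((c1*b2 - c2*b1) / det, (a1*c2 - a2*c1) / det)

# Side midpoints p1, p2, p3.
p1, p2, p3 = ((x1 + x2)/2, y2/2), (F(0), F(0)), ((x2 - x1)/2, y2/2)

# Feet of the altitudes: p4 from A, p5 from C, p6 from B.
p4 = solve2(-y2, x2 - x1, -y2*x1, x2 - x1, y2, -(x2 - x1)*x1)
p5 = (x2, F(0))
p6 = solve2(-y2, x2 + x1, y2*x1, x2 + x1, y2, (x2 + x1)*x1)

# Orthocenter: x = x2 and y*y2 = -(x2+x1)*(x-x1).
xo, yo = x2, -(x2 + x1)*(x2 - x1) / y2

# Midpoints of the segments joining each vertex to the orthocenter.
p7 = ((xo + x2) / 2, (yo + y2) / 2)
p8 = ((xo - x1) / 2, yo / 2)
p9 = ((xo + x1) / 2, yo / 2)

# All nine points lie on the circle.
assert all(on_circle(p) for p in [p1, p2, p3, p4, p5, p6, p7, p8, p9])
```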

This post is essentially my presentation at the 21st Conference on Applications of Computer Algebra, July 20-23, 2015, Kalamata, Greece.

# Two Peas in a Pod, Part 3

In Part 1 and Part 2, the existence and uniqueness theorem for initial-value problems was used to prove the equivalence of functions.

This effective technique will now help us to explore the relationships between $log(x)$ and $exp(x)$, namely, we shall prove the following two propositions:

$log(exp(x)) = x\quad\quad\quad(1)$

$exp(log(x)) = x\quad\quad\quad(2)$

We know ${d \over dx} log(exp(x)) = {1 \over exp(x)}\cdot exp(x) = 1$ and ${d \over dx} x =1$

and $log(exp(x)) = 0$ when $x = 0$

$\implies$ both $log(exp(x))$ and $x$ are solutions of the initial-value problem

$\begin{cases} {d \over dx} f(x) = 1, \\ f(0)=0\end{cases}$

Therefore, by the uniqueness theorem, (1) is true.

Similarly,

${d \over dx} exp(log(x)) = {1 \over x} exp(log(x)), {d \over dx} x = 1 = {1 \over x} x$

and $exp(log(x)) =1$ when $x=1$

$\implies$ both $exp(log(x))$ and $x$ are solutions of the initial-value problem

$\begin{cases} {d \over dx} f(x) = {1 \over x} f(x), \\ f(1)=1\end{cases}$

It follows that (2) is also true.
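Both identities can be spot-checked numerically (a quick sketch using Python's math library, whose log and exp are the natural logarithm and exponential discussed here):

```python
from math import exp, log, isclose

for x in [0.1, 1.0, 2.5, 10.0]:
    assert isclose(log(exp(x)), x, rel_tol=1e-12)   # identity (1)
    assert isclose(exp(log(x)), x, rel_tol=1e-12)   # identity (2), x > 0

# Identity (1) also holds for negative x.
assert isclose(log(exp(-3.0)), -3.0, rel_tol=1e-12)
```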

Recall in Part 2, we computed $exp(1)$. This constant is often denoted by $e$ and commonly known as Euler’s number.

From (1), we have

$log(e) = log(exp(1)) = 1$

Since $log(x)$ is the area under ${1 \over x}$ from 1 to $x$ (see “Introducing Lady L“), the geometric meaning of $log(e)=1$ is that the area under ${1 \over x}$ from 1 to $e$ is 1 (see Fig. 1).

Fig. 1

We are now ready to define two new functions in terms of $exp(x)$ and $log(x)$. They are the Exponential function:

$a^x \triangleq exp(x\cdot log(a))$ where $x, a \in R, a > 0\quad\quad\quad(3)$

and

$x^r \triangleq exp(r \cdot log(x))$ where $x, r \in R, x >0\quad\quad\quad(4)$,

the Power function.

From (3) and (4), we see that both $a^x$ and $x^r$ are continuous positive functions.

Moreover, we have

${d \over dx} a^x = {d \over dx} exp(x\cdot log(a)) = exp(x \cdot log(a)) \cdot log(a)$

$\implies$

${d \over dx} a^x = a^x \cdot log(a) \quad\quad\quad(7)$

and

${d \over dx} x^r = {d \over dx} exp(r \cdot log(x))$

$= r {exp(r \cdot log(x)) \over x}$

$= r {exp(r \cdot log(x)) \over exp(log(x))}$

$= r\cdot exp(r\cdot log(x) - log(x))$

$= r \cdot exp((r-1)log(x))$

$=r\cdot x^{r-1}$

$\implies$

${d \over dx} x^r = r\cdot x^{r-1}\quad\quad\quad(8)$
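Definitions (3) and (4) and the two derivative rules can be spot-checked numerically (a sketch; Python's built-in power operator and a central finite difference stand in for exact values):

```python
from math import exp, log, isclose

def a_pow_x(a, x):          # definition (3): a^x = exp(x * log(a))
    return exp(x * log(a))

a, x, r, h = 2.0, 1.5, 3.7, 1e-6

# (3) and (4) agree with the built-in power operation.
assert isclose(a_pow_x(a, x), a ** x, rel_tol=1e-12)
assert isclose(exp(r * log(x)), x ** r, rel_tol=1e-12)

# d/dx a^x = a^x * log(a), checked with a central finite difference.
deriv = (a_pow_x(a, x + h) - a_pow_x(a, x - h)) / (2 * h)
assert isclose(deriv, a_pow_x(a, x) * log(a), rel_tol=1e-6)

# d/dx x^r = r * x^(r-1)
deriv = ((x + h) ** r - (x - h) ** r) / (2 * h)
assert isclose(deriv, r * x ** (r - 1), rel_tol=1e-6)
```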

Exercise-1: Prove that $a^x \cdot a^y = a^{x+y} , {a^x \over a^y} = a^{x-y}, (a^{x})^{ y} = a^{x \cdot y}, ( a \cdot b)^x = {a^x \cdot b^x}$ where $a, b > 0, x, y \in R$.

Exercise-2: Prove that $x^r \cdot x^s = x^{r+s} , {x^r \over x^s} = x^{r-s}, (x^{r})^{ s} = x^{r \cdot s}, (x \cdot y)^r = {x^r \cdot y^r}$ where $x \in R^{+}, r, s \in R$.

# Two Peas in a Pod, Part 2

We now define the Natural Exponential Function $exp(x)$ as the unique solution of the initial-value problem

$\begin{cases} {d \over dx} f(x) = f(x), \\ f(0)=1\end{cases}\quad\quad\quad(1)$

There are several propositions based on this definition.

First, let’s show that

$exp(x)exp(y)=exp(x+y)\quad\quad\quad(2)$

is true:

${d \over dx}(exp(x)exp(y)) = exp(y){d \over dx}exp(x)=exp(x)exp(y)$,

${d \over dx} exp(x+y) = exp(x+y)\cdot 1 = exp(x+y)$

and when $x=0$,

$exp(x)exp(y)=exp(0)exp(y)=1\cdot exp(y) = exp(y)$,

$exp(x+y)=exp(0+y)=exp(y)$

$\implies exp(x)exp(y)$ and $exp(x+y)$ are solutions of the initial-value problem

$\begin{cases} {d \over dx} f(x) = f(x), \\ f(0)=exp(y)\end{cases}$

Hence, by the uniqueness theorem, $exp(x)exp(y) \equiv exp(x+y)$.

As a consequence of (2),

$\forall x, exp(x) > 0\quad\quad\quad(3)$

Let’s prove it too:

From (2), we know that $exp(x) = exp({x \over 2} + {x \over 2}) = (exp({x \over 2}))^2 \geq 0$.

This shows that $exp(x)$ is a non-negative function.

But suppose $\exists x_* \ni exp(x_*) =0$; then

$\forall x, exp(x)=exp(x+x_* - x_*)=exp(x_*)exp(x-x_*)=0\cdot exp(x-x_*) =0$

$\implies \forall x, exp(x) = 0$.

This is a contradiction since by definition, $exp(0)=1 \neq 0$.

Therefore, there is no $x_* \ni exp(x_*) = 0$, i.e., $\forall x, exp(x) > 0$.

Next, we have

$exp(-x)= {1 \over exp(x)}\quad\quad\quad(4)$

The fact that

${d \over dx} exp(-x) = exp(-x)\cdot -1 = -exp(-x)$,

${d \over dx} ({1 \over exp(x)}) ={-1\cdot exp(x) \over (exp(x))^2}= -{1 \over exp(x)}$

and when $x=0, exp(-x) = exp(0)=1, {1 \over exp(x)} = {1 \over exp(0)} = 1$

$\implies exp(-x)$ and ${1 \over exp(x)}$ are both solutions of the initial-value problem

$\begin{cases} {d \over dx} f(x) = -f(x), \\ f(0)=1\end{cases}$

It follows from the above and the uniqueness theorem that $exp(-x) \equiv {1 \over exp(x)}$.

Finally,

${exp(x) \over exp(y) }= exp(x-y)\quad\quad\quad(5)$

A short proof is given below:

By (4), (2), ${exp(x) \over exp(y)} = exp(x)\cdot {1 \over exp(y)} = exp(x) \cdot exp(-y) = exp(x-y)$.

The value of $exp(x)$ can be obtained by solving (1) numerically. For example,

Fig. 1 Compute $exp(1)$
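The figure's computation can be reproduced with a short numerical solver (a sketch using the classical fourth-order Runge-Kutta method; the original figure may use a different scheme or step size):

```python
from math import e, isclose

def rk4_exp(x_end, n=1000):
    """Solve f'(x) = f(x), f(0) = 1 with classical 4th-order Runge-Kutta.

    The right-hand side depends only on f, so x need not be tracked."""
    h = x_end / n
    f = 1.0
    for _ in range(n):
        k1 = f
        k2 = f + h * k1 / 2
        k3 = f + h * k2 / 2
        k4 = f + h * k3
        f += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return f

# exp(1) agrees with Euler's number e = 2.71828...
assert isclose(rk4_exp(1.0), e, rel_tol=1e-10)
```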

An alternative expression for $exp(x)$ is $e^x$. Using this notation, propositions (2), (3), (4), (5) are expressed as

$e^x e^y=e^{x+y}$,

$\forall x,e^x>0$,

$e^{-x}={1 \over e^x}$,

${e^x \over e^y} =e^{x-y}$

respectively.

Let’s conclude this post with some humor from the internet:

$e^x$ and a constant are walking down the street together when the constant sees a  differential operator coming their way. He starts to run away, and $e^x$ asks “Why are you running away?” The constant answers, “That’s a differential operator. If it acts on me, I’ll disappear.” The $e^x$ says “I’m $e^x$, I don’t have anything to worry about,” and keeps walking. When he reaches the differential operator, he says “Hi, I’m $e^x$.”

The differential operator responds, “Hi, I’m ${d \over dy}$.”

Exercise:  Prove that $e^{n\cdot x} = (e^x)^n, n \in N$.

# Two Peas in a Pod, Part 1

The Natural Log Function $log(x)$ (see “Introducing Lady L“) may very well be defined as the unique solution of the initial-value problem:

$\begin{cases} {d \over dx} f(x) = {1 \over x}, \\ f(1)=0\end{cases}\quad\quad\quad(1)$

Based on this definition, several properties of $log(x)$ can be derived easily.

$log(xy) = log(x) + log(y)\quad\quad\quad(2)$

Let $u = x y$,

${d \over dx} log(x y) = {d \over du} log(u) {d \over dx}u={1\over u} y={1 \over {x y}} y = {1 \over x}$

When $x=1$,

$log(x y) = log(1\cdot y) = log(y)$

$\implies log(x y)$ is a solution of the initial-value problem

$\begin{cases} {d \over dx} \phi(x) = {1 \over x} \\ \phi(1)=log(y)\end{cases}\quad\quad\quad(3)$

On the other hand, by definition,

${d \over dx} (log(x) + log(y)) = {d \over dx} log(x) + {d \over dx} log(y) ={1\over x} + 0 = {1 \over x}$

and,  when $x = 1$,

$log(x) + log(y) = log(1) + log(y) = 0 + log(y) = log(y)$

$\implies log(x) + log(y)$ is also a solution of (3)

By the uniqueness theorem, (3) has a unique solution. Hence $log(x y) \equiv log(x) + log(y)$.

Next,

$log(x) = -log(1/x)\quad\quad\quad(4)$

By definition, $log(x)$ is a solution of (1). So is $- log({1 \over x})$:

${d \over dx}( -log({1 \over x})) = -1 {d \over dx} log({1 \over x}) = - {1 \over {1 \over x}} {d \over dx} ({1 \over x}) = -x\cdot -1\cdot {1 \over x^2}={1 \over x}$

and since $log(1)=0$, when $x = 1, -log({1 \over x }) =-log(1) = 0$.

Therefore $log(x) \equiv - log({1 \over x})$.

Finally, with (2) and (4),

$log({x \over y}) = log(x) - log(y)\quad\quad\quad(5)$

since $log({x \over y}) = log(x {1 \over y}) = log(x) + log({1 \over y}) =log(x) - log(y)$
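Properties (2), (4) and (5) can be spot-checked with Python's math.log (a quick numeric sketch, not a substitute for the proofs above):

```python
from math import log, isclose

x, y = 3.0, 7.0

assert isclose(log(x * y), log(x) + log(y), rel_tol=1e-12)   # (2)
assert isclose(log(x), -log(1 / x), rel_tol=1e-12)           # (4)
assert isclose(log(x / y), log(x) - log(y), rel_tol=1e-12)   # (5)
```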

As an alternative to “Knowing Lady L“, we can now compute the value of $log(x)$ by solving (1) numerically. For example,

Fig. 1 Compute $log(3)$

Fig. 2 Compute $log(0.2)$

Notice the negative step size in Fig. 2.
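The computations in Fig. 1 and Fig. 2 can be reproduced with a short solver for (1) (a sketch using Euler's method; the figures may use a different scheme or step size):

```python
from math import log, isclose

def log_ivp(x_end, n=100000):
    """Integrate f'(x) = 1/x from f(1) = 0 to x_end by Euler's method.

    When x_end < 1 the step h is negative, as in Fig. 2."""
    h = (x_end - 1.0) / n
    x, f = 1.0, 0.0
    for _ in range(n):
        f += h / x          # Euler step: f(x + h) = f(x) + h * f'(x)
        x += h
    return f

assert isclose(log_ivp(3.0), log(3.0), abs_tol=1e-4)   # Fig. 1
assert isclose(log_ivp(0.2), log(0.2), abs_tol=1e-4)   # Fig. 2
```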