Knock Euler’s Line


my imagination
is a piece of board
my sole instrument
is a wooden stick

I strike the board
it answers me

“A Knocker” by Zbigniew Herbert

Euler’s line theorem states:
In every triangle,
the intersection of the medians (the centroid),
the intersection of the altitudes (the orthocenter),
and the center of the circumscribed circle (the circumcenter)
lie on a straight line.

Let’s prove it
with the aid of Omega CAS Explorer


We know

x^2+y^2+d \cdot x + e \cdot y +f =0\quad\quad\quad(1)

is a circle centered at (-{d \over 2}, -{e \over 2}) with radius r, where

r^2={{d^2+e^2-4f} \over 4}\quad\quad\quad(2)

provided (2) is positive.

Find d, e, f from the triangle’s vertices (-x_1, 0), (x_1, 0), (x_2, y_2):

ceq: x^2+y^2+d*x+e*y+f=0;
eq1: ev(ceq, x=x1, y=0);
eq2: ev(ceq, x=-x1, y=0);
eq3: ev(ceq, x=x2, y=y2);
sol: linsolve([eq1, eq2, eq3], [d,e,f]);

\implies d=0, e=-{{y_2^2+x_2^2-x_1^2} \over y_2}, f=-x_1^2

Evaluate (2):

ev(d^2 + e^2-4*f, sol);

\implies {(-y_2^2-x_2^2+x_1^2)^2\over y_2^2} + 4x_1^2

which is always positive for x_1 > 0, y_2 \neq 0

Find the center of the circumscribed circle x_c, y_c:

xc: ev(-d/2, sol);

\implies 0

yc: ev(-e/2, sol);

\implies {{y_2^2+x_2^2-x_1^2} \over {2y_2}}

Find the intersection of the medians x_m, y_m. The median from (-x_1, 0) passes through ({{x_1+x_2} \over 2}, {y_2 \over 2}), the midpoint of the opposite side; the median from (x_1, 0) passes through ({{x_2-x_1} \over 2}, {y_2 \over 2}):

eq1: (3*x1+x2)*y = y2*(x+x1);
eq2: (x2-3*x1)*y = y2*(x-x1);
sol: linsolve([eq1, eq2], [x,y])$
xm: ev(x, sol);

\implies {x_2 \over 3}

ym: ev(y, sol);

\implies {y_2 \over 3}

The third median joins (x_2, y_2) to (0, 0), the midpoint of the base, so it lies on the line x_2 \cdot y = y_2 \cdot x. Check that (x_m, y_m) is on it:

is(ev(x2*y=y2*x, x=xm, y=ym));

\implies true

Find the intersection of the heights x_h, y_h. The altitude from (x_2, y_2) is the vertical line x = x_2; the altitude from (x_1, 0) is perpendicular to the side joining (-x_1, 0) and (x_2, y_2):

eq1: x = x2;
eq2: y2*y = -(x2+x1)*(x-x1);
sol: linsolve([eq1, eq2], [x,y])$
xh: ev(x, sol);

\implies x_2

yh:ev(y, sol);

\implies -{{x_2^2-x_1^2} \over y_2}

Compute the area of the triangle with vertices (x_c, y_c), (x_m, y_m), (x_h, y_h), which is half the absolute value of the determinant below:

m: matrix([xc, yc, 1], [xm, ym, 1], [xh, yh, 1])$
ratsimp(determinant(m));

\implies 0

The area is zero, so (x_c, y_c), (x_m, y_m) and (x_h, y_h) are on a straight line.
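The collinearity can also be spot-checked numerically. Below is a minimal Python sketch, separate from the Maxima session; the coordinates x1 = 2, x2 = 1, y2 = 3 are arbitrary sample values:

```python
# Sample triangle (-x1, 0), (x1, 0), (x2, y2); the values are arbitrary.
x1, x2, y2 = 2.0, 1.0, 3.0

# The three centers, using the formulas derived above.
xc, yc = 0.0, (y2**2 + x2**2 - x1**2) / (2 * y2)   # circumcenter
xm, ym = x2 / 3, y2 / 3                            # centroid (medians)
xh, yh = x2, -(x2**2 - x1**2) / y2                 # orthocenter (heights)

# Twice the signed area of the triangle formed by the three centers;
# zero means they are collinear.
det = (xm - xc) * (yh - yc) - (xh - xc) * (ym - yc)
print(abs(det) < 1e-12)  # True
```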



A Computer Algebra Aided Proof of Feuerbach’s Nine-Point Circle Theorem


Feuerbach’s Nine-Point Circle Theorem states that, for any triangle, a single circle passes through the following nine significant points:

1. The midpoint of each side of the triangle
2. The foot of each altitude
3. The midpoint of the line segment from each vertex of the triangle to the orthocenter

Let’s prove it with the aid of Omega CAS Explorer.


Step-1 Set up the circle equation:

 x^2 + y^2 + d \cdot x + e \cdot y +f = 0 \quad\quad\quad(1)

is a circle centered at ( -{d \over 2}, -{e \over 2}) with radius r, where

r^2 = { {d^2 + e^2-4f} \over 4}\quad\quad\quad(2)

provided (2) is positive.

Step-2 Find d, e, f using p1, p2, p3. With the triangle’s vertices at (-x_1, 0), (x_1, 0), (x_2, y_2), these are the midpoints of the three sides:

ceq: x^2+y^2+d*x+e*y+f=0;

eq1: ev(ceq, x=(x1+x2)/2, y=y2/2);

eq2: ev(ceq, x=0, y=0);

eq3: ev(ceq, x=(x2-x1)/2, y=y2/2);

sol: linsolve([eq1, eq2, eq3], [d,e,f]);

\implies d = -x_2, e = - {{y_2^2-x_2^2+x_1^2} \over {2 y_2}}, f = 0

The new circle equation is

nceq: ev(ceq, sol);

\implies - {{y(y_2^2-x_2^2+x_1^2)} \over {2 y_2}} + y^2 - x \cdot x_2+x^2 = 0

Evaluate (2):

ev(d^2+e^2-4*f, sol);

\implies { (y_2^2-x_2^2+x_1^2)^2 \over {4 y_2^2}} + x_2^2,

which is always positive for x_1 > 0, y_2 \neq 0.

Step-3 Show that p4, p5, p6, the feet of the altitudes, are on the circle:

p5:ev(nceq, x=x2, y=0);

\implies 0 = 0

p4: linsolve([(x2-x1)*y=y2*(x-x1), y2*y=-(x2-x1)*(x+x1)], [x,y]);

ratsimp(ev(nceq, p4));

\implies 0 = 0

p6: linsolve([(x2+x1)*y=y2*(x+x1), y*y2=-(x2+x1)*(x-x1)], [x,y]);

ratsimp(ev(nceq, p6));

\implies  0 = 0

Step-4 Find the orthocenter, the intersection of the altitudes:

o: linsolve([x=x2, y*y2=-(x2+x1)*(x-x1)], [x,y]);

\implies x = x_2, y = - {{x_2^2-x_1^2} \over y_2}

Step-5 Show that p7, p8, p9, the midpoints between each vertex and the orthocenter, are on the circle:

xo: rhs(o[1]);

yo: rhs(o[2]);

p7: ratsimp(ev(nceq, x=(xo+x2)/2, y=(yo+y2)/2));

\implies 0 = 0

p8: ratsimp(ev(nceq, x=(xo-x1)/2, y=(yo+0)/2));

\implies 0 = 0

p9: ratsimp(ev(nceq, x=(xo+x1)/2, y=(yo+0)/2));

\implies 0 = 0
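As a sanity check outside the CAS, the following Python sketch verifies the result for one concrete triangle (the coordinates are arbitrary sample values, and `foot` is a small helper written just for this check):

```python
x1, x2, y2 = 2.0, 1.0, 3.0   # sample triangle (-x1, 0), (x1, 0), (x2, y2)
d, e, f = -x2, -(y2**2 - x2**2 + x1**2) / (2 * y2), 0.0   # Step-2 result

def foot(p, q, r):
    """Foot of the perpendicular from p onto the line through q and r."""
    t = ((p[0] - q[0]) * (r[0] - q[0]) + (p[1] - q[1]) * (r[1] - q[1])) \
        / ((r[0] - q[0])**2 + (r[1] - q[1])**2)
    return (q[0] + t * (r[0] - q[0]), q[1] + t * (r[1] - q[1]))

def mid(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

A, B, C = (-x1, 0.0), (x1, 0.0), (x2, y2)
O = (x2, -(x2**2 - x1**2) / y2)            # orthocenter (Step-4)

nine = [mid(A, B), mid(B, C), mid(A, C),             # midpoints of the sides
        foot(C, A, B), foot(A, B, C), foot(B, A, C), # feet of the altitudes
        mid(O, A), mid(O, B), mid(O, C)]             # vertex-to-orthocenter midpoints

# Every one of the nine points should satisfy the circle equation.
on_circle = all(abs(x * x + y * y + d * x + e * y + f) < 1e-9 for x, y in nine)
print(on_circle)  # True
```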


This post is essentially my presentation at the 21st Conference on Applications of Computer Algebra, July 20-23, 2015, Kalamata, Greece.

Two Peas in a Pod, Part 3


In part 1 and part 2, the existence and uniqueness theorem for initial-value problems was used to prove the equivalence of functions.

This effective technique will now help us explore the relationship between log(x) and exp(x); namely, we shall prove the following two propositions:

log(exp(x)) = x\quad\quad\quad(1)

exp(log(x)) = x\quad\quad\quad(2)

We know

{d \over dx} log(exp(x)) = {1 \over exp(x)}\cdot exp(x) = 1,\quad {d \over dx} x =1

and log(exp(x)) = log(exp(0)) = log(1) = 0 when x = 0

\implies both log(exp(x)), x are solutions of initial-value problem

\begin{cases} {d \over dx} f(x) = 1, \\ f(0)=0\end{cases}

Therefore by uniqueness theorem, (1) is true.


{d \over dx} exp(log(x)) = {1 \over x} exp(log(x)), {d \over dx} x = 1 = {1 \over x} x

and exp(log(x)) =1 when x=1

\implies both exp(log(x)), x are solutions of initial-value problem

\begin{cases} {d \over dx} f(x) = {1 \over x} f(x), \\ f(1)=1\end{cases}

It follows that (2) is also true.

Recall that in part 2 we computed exp(1). This constant is often denoted by e and is commonly known as Euler’s number.

From (1), we have

log(e) = log(exp(1)) = 1

Since log(x) is the area under {1 \over x} from 1 to x (see “Introducing Lady L“), the geometric meaning of log(e)=1 is that the area under {1 \over x} from 1 to e is 1 (see Fig. 1).


Fig. 1
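The area statement is easy to check numerically; here is a small Python sketch using the midpoint rule (the number of subintervals is an arbitrary choice):

```python
import math

# Estimate the area under 1/x from 1 to e with the midpoint rule.
n = 100000
a, b = 1.0, math.e
h = (b - a) / n
area = sum(h / (a + (i + 0.5) * h) for i in range(n))
print(abs(area - 1.0) < 1e-6)  # True: the area is 1
```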

We are now ready to define two new functions in terms of exp(x) and log(x): the Exponential function

a^x \triangleq exp(x\cdot log(a)) where  x, a \in R, a >0\quad\quad\quad(3)

and the Power function

x^r \triangleq exp(r \cdot log(x)) where x, r \in R, x >0.\quad\quad\quad(4)

From (3) and (4), we see that both a^x and x^r are continuous positive functions.
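Definitions (3) and (4) can be checked numerically against Python’s built-in power operator (the values a = 2.5, x = 1.7, r = -0.5 are arbitrary samples):

```python
import math

a, x, r = 2.5, 1.7, -0.5

exp_def = math.exp(x * math.log(a))   # a^x by definition (3)
pow_def = math.exp(r * math.log(x))   # x^r by definition (4)

print(math.isclose(exp_def, a**x))    # True
print(math.isclose(pow_def, x**r))    # True
```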

Moreover, we have

{d \over dx} a^x = {d \over dx} exp(x\cdot log(a)) = exp(x \cdot log(a)) \cdot log(a)

i.e.,

{d \over dx} a^x = a^x \cdot log(a) \quad\quad\quad(5)


{d \over dx} x^r = {d \over dx} exp(r \cdot log(x))

= r {exp(r \cdot log(x)) \over x}

= r {exp(r \cdot log(x)) \over exp(log(x))}

= r\cdot exp(r\cdot log(x) - log(x))

= r \cdot exp((r-1)log(x))

=r\cdot x^{r-1}


{d \over dx} x^r = r\cdot x^{r-1}\quad\quad\quad(6)


Exercise-1: Prove that a^x \cdot a^y = a^{x+y} ,  {a^x \over a^y} = a^{x-y}, (a^{x})^{ y} = a^{x \cdot y}, ( a \cdot b)^x = {a^x \cdot b^x} where a, b > 0, x, y \in R.

Exercise-2: Prove that x^r \cdot x^s = x^{r+s} ,  {x^r \over x^s} = x^{r-s}, (x^{r})^{ s} = x^{r \cdot s}, (x \cdot y)^r = {x^r \cdot y^r} where x \in R^{+}, r, s \in R.

Two Peas in a Pod, Part 2


We now define the Natural Exponential Function, exp(x), as the unique solution of the initial-value problem

\begin{cases} {d \over dx} f(x) = f(x), \\ f(0)=1\end{cases}\quad\quad\quad(1)

There are several propositions based on this definition.

First, let’s show that

exp(x)exp(y) = exp(x+y)\quad\quad\quad(2)

is true:

{d \over dx}(exp(x)exp(y)) = exp(y){d \over dx}exp(x)=exp(x)exp(y),

{d \over dx} exp(x+y) = exp(x+y)\cdot 1 = exp(x+y)

and when x=0,

exp(x)exp(y)=exp(0)exp(y)=1\cdot exp(y) = exp(y),

exp(x+y)=exp(0+y)=exp(y)
\implies exp(x)exp(y) and exp(x+y)  are solutions of initial-value problem

\begin{cases} {d \over dx} f(x) = f(x), \\ f(0)=exp(y)\end{cases}

Hence, by the uniqueness theorem,  exp(x)exp(y) \equiv exp(x+y).

As a consequence of (2),

\forall x, exp(x) > 0\quad\quad\quad(3)

Let’s prove it too:

From (2), we know that exp(x) = exp({x \over 2} + {x \over 2}) = (exp({x \over 2}))^2 \geq 0.

This shows that exp(x) is a non-negative function.

But suppose there exists x_* such that exp(x_*) =0; then

\forall x, exp(x)=exp(x+x_* - x_*)=exp(x_*)exp(x-x_*)=0\cdot exp(x-x_*) =0

\implies \forall x, exp(x) = 0.

This is a contradiction since by definition, exp(0)=1 \neq 0.

Therefore, there is no x_* such that exp(x_*) = 0, i.e.,  \forall x, exp(x) > 0.

Next, we have

exp(-x)= {1 \over exp(x)}\quad\quad\quad(4)

The fact that

{d \over dx} exp(-x) = exp(-x)\cdot -1 = -exp(-x),

{d \over dx} ({1 \over exp(x)}) ={-1\cdot exp(x) \over (exp(x))^2}= -{1 \over exp(x)}

and when x=0, exp(-x) = exp(0)=1, {1 \over exp(x)} = {1 \over exp(0)} = 1

\implies exp(-x) and {1 \over exp(x)} are both solutions of initial-value problem

\begin{cases} {d \over dx} f(x) = -f(x), \\ f(0)=1\end{cases}

It follows from the above and the uniqueness theorem that exp(-x) \equiv {1 \over exp(x)}.


{exp(x) \over exp(y) }= exp(x-y)\quad\quad\quad(5)

A short proof is given below:

By (4), (2), {exp(x) \over exp(y)} = exp(x)\cdot {1 \over exp(y)} = exp(x) \cdot exp(-y) = exp(x-y).

The value of exp(x) can be obtained by solving (1) numerically. For example,


Fig. 1 Compute exp(1)
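For readers without Omega CAS Explorer, here is a comparable numeric computation sketched in Python; the classical fourth-order Runge–Kutta method and the step count are assumptions, not necessarily what the session above used:

```python
def rk4_exp(x_end, n):
    """Solve f' = f, f(0) = 1 with classical RK4 and return f(x_end)."""
    h = x_end / n
    y = 1.0
    for _ in range(n):
        k1 = y
        k2 = y + h * k1 / 2
        k3 = y + h * k2 / 2
        k4 = y + h * k3
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

print(rk4_exp(1.0, 100))  # ≈ 2.718281828..., i.e. exp(1)
```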

An alternative expression for exp(x) is e^x. Using this notation, propositions (2), (3), (4), (5) are expressed as

e^x e^y=e^{x+y},

\forall x,e^x>0,

e^{-x}={1 \over e^x},

{e^x \over e^y} =e^{x-y}
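A quick numeric spot-check of (2)–(5) in Python (the sample values x = 1.3, y = -0.4 are arbitrary):

```python
import math

x, y = 1.3, -0.4

print(math.isclose(math.exp(x) * math.exp(y), math.exp(x + y)))  # (2) True
print(math.exp(x) > 0 and math.exp(y) > 0)                       # (3) True
print(math.isclose(math.exp(-x), 1 / math.exp(x)))               # (4) True
print(math.isclose(math.exp(x) / math.exp(y), math.exp(x - y)))  # (5) True
```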


Let’s conclude this post with some humor from the internet:

e^x and a constant are walking down the street together when the constant sees a  differential operator coming their way. He starts to run away, and e^x asks “Why are you running away?” The constant answers, “That’s a differential operator. If it acts on me, I’ll disappear.” The e^x says “I’m e^x, I don’t have anything to worry about,” and keeps walking. When he reaches the differential operator, he says “Hi, I’m e^x.”

The differential operator responds, “Hi, I’m {d \over dy}.”


Exercise:  Prove that e^{n\cdot x} = (e^x)^n, n \in N.

Two Peas in a Pod, Part 1


The Natural Log Function log(x) (see “Introducing Lady L“) may very well be defined as the unique solution of the initial-value problem:

\begin{cases} {d \over dx} f(x) = {1 \over x}, \\ f(1)=0\end{cases}\quad\quad\quad(1)

Based on this definition, several properties of log(x) can be derived easily.


log(xy) = log(x) + log(y)\quad\quad\quad(2)

Let u = x y,

{d \over dx} log(x y)  = {d \over du} log(u) {d \over dx}u={1\over u} y={1 \over {x y}} y = {1 \over x}

When x=1,

log(x y) = log(1\cdot y) = log(y)

\implies log(x y) is a solution of the initial-value problem

\begin{cases} {d \over dx} \phi(x) = {1 \over x} \\ \phi(1)=log(y)\end{cases}\quad\quad\quad(3)

On the other hand, by definition,

{d \over dx} (log(x) + log(y)) = {d \over dx} log(x) + {d \over dx} log(y) ={1\over x} + 0 = {1 \over x}

and,  when x = 1,

 log(x) + log(y) = log(1) + log(y) = 0 + log(y) = log(y)

\implies log(x) + log(y) is also a solution of (3)

By the uniqueness theorem, (3) has a unique solution. Hence log(x y) \equiv log(x) + log(y).


log(x) = -log(1/x)\quad\quad\quad(4)

By definition, log(x) is a solution of (1). So is - log({1 \over x}):

{d \over dx}( -log({1 \over x})) = -1 {d \over dx} log({1 \over x}) = - {1 \over {1 \over x}} {d \over dx} ({1 \over x}) = -x\cdot -1\cdot {1 \over x^2}={1 \over x}

and since log(1)=0, when x = 1, -log({1 \over x }) =-log(1) = 0.

Therefore log(x) \equiv - log({1 \over x}).

Finally, with (2) and (4),

log({x \over y}) = log(x) - log(y)\quad\quad\quad(5)

since log({x \over y}) = log(x {1 \over y}) = log(x) + log({1 \over y}) =log(x) - log(y)
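A numeric spot-check of (2), (4) and (5) in Python (the sample values x = 3, y = 7 are arbitrary):

```python
import math

x, y = 3.0, 7.0

print(math.isclose(math.log(x * y), math.log(x) + math.log(y)))  # (2) True
print(math.isclose(math.log(x), -math.log(1 / x)))               # (4) True
print(math.isclose(math.log(x / y), math.log(x) - math.log(y)))  # (5) True
```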

As an alternative to “Knowing Lady L“, we can now compute the value of log(x) by solving (1) numerically. For example,


Fig. 1 Compute log(3)


Fig. 2 Compute log(0.2)

Notice the negative step size in Fig. 2.
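A comparable numeric computation can be sketched in Python; the Runge–Kutta scheme and step count below are assumptions, not the settings used in the figures:

```python
def rk4_log(x_end, n=1000):
    """Solve f' = 1/x, f(1) = 0 with classical RK4 and return f(x_end)."""
    h = (x_end - 1.0) / n      # negative when x_end < 1, as in Fig. 2
    x, y = 1.0, 0.0
    for _ in range(n):
        k1 = 1 / x
        k2 = 1 / (x + h / 2)
        k3 = k2                # f' does not depend on f, so k3 = k2
        k4 = 1 / (x + h)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

print(rk4_log(3.0))   # ≈ 1.0986..., i.e. log(3)
print(rk4_log(0.2))   # ≈ -1.6094..., i.e. log(0.2)
```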


On Viete Theorem and Second-Order Linear Differential Equations with Constant Coefficients

Viete’s theorem, named after the French mathematician François Viète, relates the coefficients of a polynomial equation to sums and products of its roots. It states:

For a quadratic equation a x^2+b x + c =0 with roots r_1, r_2,

 r_1+r_2 = {{-b} \over {a}},

r_1 r_2 = {{c} \over {a}}.

This is easy to prove. We know that the roots of a x^2 + b x +c =0 are

r_1 = {{-b + \sqrt{b^2-4 a c}} \over {2 a}},       r_2 = {{-b - \sqrt{b^2-4 a c}} \over {2 a}}.


r_1 + r_2 ={{-b + \sqrt{b^2-4 a c}} \over {2 a}} + {{-b - \sqrt{b^2-4 a c}} \over {2 a}}={{-b}\over{a}}


r_1 r_2 = ({{-b + \sqrt{b^2-4 a c}} \over {2 a}})({{-b - \sqrt{b^2-4 a c}} \over {2 a}})={{(-b)^2-(b^2-4 a c)}\over{4 a^2}}={{c}\over {a}}.

In fact, the converse is also true. If two given numbers r_1, r_2 are such that

r_1 + r_2 = {{-b}\over{a}}\qquad\qquad(1)

r_1 r_2 = {{c}\over {a}}\qquad\qquad\qquad(2)


then r_1, r_2 are the roots of a x^2 + b x + c=0.

This is also easy to prove. From (1) we have r_1 = {{-b}\over{a}}-r_2. Hence, (2) implies that ({{-b} \over{a}} -r_2) r_2 = {{c} \over a}, or

a r_2^2 + b r_2 +c =0\qquad\qquad(3)

Since r_1 and r_2 enter (1) and (2) symmetrically, (3) holds with r_1 in place of r_2 as well; i.e., r_1 is also a root of a x^2 + b x +c = 0.
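Both directions can be spot-checked numerically; in this Python sketch the quadratic 2x^2 - 7x + 3 = 0 (roots 3 and 1/2) is an arbitrary sample:

```python
import math

a, b, c = 2.0, -7.0, 3.0
disc = math.sqrt(b * b - 4 * a * c)
r1, r2 = (-b + disc) / (2 * a), (-b - disc) / (2 * a)   # roots 3.0 and 0.5

print(math.isclose(r1 + r2, -b / a))  # True: sum of the roots
print(math.isclose(r1 * r2, c / a))   # True: product of the roots
```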

Let us consider the second-order linear ordinary differential equation with constant coefficients:

y''(t)+b y'(t) + c y(t) = f(t)\qquad\qquad(4)

Let \lambda_1, \lambda_2 be the roots of quadratic equation x^2+b x + c =0 with unknown x.

By Viete’s theorem,

b = -(\lambda_1 + \lambda_2)

c =\lambda_1 \lambda_2.

Therefore, (4) can be written as

y''(t)-(\lambda_1+\lambda_2)y'(t) +\lambda_1 \lambda_2 y(t) = f(t).

Rearranging the terms, we have

y''(t)-\lambda_1 y'(t) - \lambda_2 y'(t) + \lambda_1 \lambda_2 y(t) = f(t)


y''(t)-\lambda_1 y'(t) -\lambda_2 (y'(t)-\lambda_1 y(t) )=f(t)


(y'(t)-\lambda_1 y(t))' - \lambda_2 (y'(t)-\lambda_1 y(t)) =f(t)\qquad(5)


Let

z(t) = y'(t) - \lambda_1 y(t)\qquad\qquad(6)

so that (5), a second-order equation, is reduced to the first-order equation

z'(t) - \lambda_2 z(t) = f(t)\qquad\qquad(7)

To obtain y(t),  we solve two first-order equations, (7) for z(t) first, then (6) for y(t).
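The reduction can be illustrated numerically. In the Python sketch below, the equation y'' - 3y' + 2y = 0 with y(0)=0, y'(0)=1 is an arbitrary sample, so \lambda_1 = 1, \lambda_2 = 2 and the exact solution is y = e^{2t} - e^t; both first-order equations are integrated with plain Euler steps (the step size is an assumption):

```python
import math

lam1, lam2 = 1.0, 2.0          # roots of x^2 - 3x + 2 = 0
n, T = 100000, 1.0
h = T / n

z = 1.0                        # z(0) = y'(0) - lam1*y(0) = 1
y = 0.0                        # y(0) = 0
for _ in range(n):
    z_new = z + h * lam2 * z   # Euler step for (7): z' = lam2*z  (f = 0)
    y += h * (lam1 * y + z)    # Euler step for (6): y' = lam1*y + z
    z = z_new

print(abs(y - (math.exp(2 * T) - math.exp(T))) < 1e-3)  # True
```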

We are now ready to show that any solution obtained as described above is also a solution of (4):

Let y_*(t) be the result of solving (7) for z(t) and then (6) for y(t); then

z(t) = y'_*(t)-\lambda_1 y_*(t).

By (7),

(y'_*(t)-\lambda_1 y_*(t) )'- \lambda_2(y'_*(t)-\lambda_1 y_*(t)) = f(t)\quad\quad\quad(8)

(8) tells us that y_*(t) is a solution of (5).

The fact that (5) is equivalent to (4) implies that y_*(t), a solution of (5), is also a solution of (4).

Solving y’ + a(x) y = b(x), Part 3

Yet another way to find the solution of

y' + a(x) y = b(x)\quad\quad\quad(1)

is to seek a function f(x) > 0 \; \forall x such that the result of multiplying (1) by f(x), namely

f(x)y' + f(x) a(x) y =f(x) b(x)\quad\quad\quad(2)

can be written as

(f(x) y)' =  b(x) f(x),


f(x) y =\int b(x) f(x)\; dx + c_{3}

where c_{3} is a constant.


y = {1 \over f(x)} (\int b(x) f(x)\; dx + c_{3})\quad\quad\quad(3)

since f(x) > 0.

Let us proceed to find such f(x).

From (2) we see that if

f'(x)=a(x) f(x)

then the left side of (2) is certainly (f(x) y)' and consequently  for f(x) > 0 \;\forall x,

{1 \over f(x)} f'(x) = a(x)\quad\quad\quad(4)

Integrating both sides of (4) gives

log(f(x)) = \int a(x) dx + c_{1}

where c_{1} is a constant.

Therefore, a solution to (4) is

f(x) = c_{2}e^{\int a(x)\;dx}\quad\quad\quad(5)

where c_{2} = exp(c_{1}), which is indeed positive for all x.

With (5), (3) becomes

y = e^{-\int a(x)\;dx} (\int b(x) e^{\int a(x)\;dx}\; dx + c)\quad\quad\quad(6)

where c = {c_{3} \over c_{2}}.

In fact, for any constant c,

 e^{-\int a(x) dx} (\int b(x) e^{\int a(x) dx} dx + c)

is a solution of (1):

(e^{-\int a(x) dx} (\int b(x) e^{\int a(x) dx} dx+ c))' +

a(x)\cdot(e^{-\int a(x)\;dx} (\int b(x) e^{\int a(x) dx} dx + c))

= -a(x) (e^{-\int a(x) dx} (\int b(x) e^{\int a(x) dx} dx + c))

+ e^{-\int a(x) dx}\, b(x)\, e^{\int a(x) dx}

+ a(x)(e^{-\int a(x) dx} (\int b(x) e^{\int a(x) dx} dx + c))

= b(x)

Therefore, the solution of (1) is

 y = e^{-\int a(x) dx} (\int b(x) e^{\int a(x) dx} dx + c)

where c is any constant.
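As a concrete check of the formula (the choices below are assumed for illustration): with a(x) = 1 and b(x) = x, it gives y = e^{-x}((x-1)e^x + c) = x - 1 + c\, e^{-x}. The Python sketch verifies y' + y = x at a few points using a central-difference derivative:

```python
import math

c = 2.0                                    # an arbitrary constant
y = lambda x: x - 1 + c * math.exp(-x)     # the formula's solution for a(x)=1, b(x)=x

h = 1e-6
ok = all(abs((y(t + h) - y(t - h)) / (2 * h) + y(t) - t) < 1e-6
         for t in (0.5, 1.0, 2.0))
print(ok)  # True: y' + y = x holds
```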

Exercise: Prove that e^x is a positive function.