# My talk at the 24th Conference on Applications of Computer Algebra

Shown below is the abstract of my talk at ACA 2018. Stay tuned for the complete presentation after the conference.

# Six of one, half a dozen of the other

We have defined the derivative of function $f(x)$ in “Inching towards Definite Integral” as

$f'(x) = \lim\limits_{h\to 0}{{f(x+h)-f(x)}\over{h}}$

An equivalent definition of $f'(x)$ is

$f'(x) = \lim\limits_{x^* \to x}{{f(x^*)-f(x)} \over {x^*-x}}$

We will prove the equivalence below:

$f'(x) = \lim\limits_{h \to 0}{{f(x+h)-f(x)} \over h}$

$\Longrightarrow \forall \epsilon >0, \exists \delta>0 \ni 0<|h-0|<\delta, |{{f(x+h)-f(x)} \over {h} }- f'(x)|< \epsilon\quad\quad\quad(1)$

Let $x^*=x+h$, then $h = x^*-x$, (1) becomes

$\forall \epsilon >0, \exists \delta>0 \ni 0<|(x^*-x)-0|<\delta, |{{f(x^*)-f(x)} \over {x^*-x}} - f'(x)|<\epsilon$

$\Longrightarrow \forall \epsilon >0, \exists \delta>0 \ni 0<|x^*-x|<\delta, |{{f(x^*)-f(x)} \over {x^*-x}}- f'(x)|<\epsilon$

$\Longrightarrow f'(x) = \lim\limits_{x^* \to x} {{f(x^*)-f(x)} \over {x^*-x}}$

Similarly,

$f'(x) = \lim\limits_{x^* \to x}{{f(x^*)-f(x)} \over {x^*-x}}$

$\Longrightarrow \forall \epsilon >0, \exists \delta>0 \ni 0<|x^*-x|<\delta, |{{f(x^*)-f(x)} \over {x^*-x} }- f'(x)|< \epsilon\quad\quad\quad(2)$

Let $h=x^*-x$, then $x^*= x+h$, (2) becomes

$\forall \epsilon >0, \exists \delta>0 \ni 0<|h|<\delta, |{{f(x+h)-f(x)} \over {h}} - f'(x)|<\epsilon$

$\Longrightarrow \forall \epsilon >0, \exists \delta>0 \ni 0<|h-0|<\delta, |{{f(x+h)-f(x)} \over {h}}- f'(x)|<\epsilon$

$\Longrightarrow f'(x) = \lim\limits_{h \to 0} {{f(x+h)-f(x)} \over {h}}$
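
Although not part of the proof, the equivalence is easy to watch numerically: substituting $x^* = x+h$ makes the two difference quotients the same number. Below is a quick sketch in Python (the function $f(x)=x^2$ and the point $x=1$ are arbitrary choices for illustration):

```python
# Compare the two difference quotients for f(x) = x^2 at x = 1.
# Both approach f'(1) = 2 as h -> 0 (equivalently, as x* -> x).

def f(x):
    return x * x

x = 1.0
for h in [0.1, 0.01, 0.001]:
    q1 = (f(x + h) - f(x)) / h               # limit as h -> 0
    x_star = x + h                           # substitute x* = x + h
    q2 = (f(x_star) - f(x)) / (x_star - x)   # limit as x* -> x
    print(h, q1, q2)                         # the two quotients agree
```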

# Every dog has its day

On a school day in the late 1700s, a teacher, for reasons unknown, ordered her pupils to add up all the numbers from 1 to 100.

The children immediately started adding except for the future prince of mathematics.

It took him a few seconds before he proudly announced the result.

What a glorious day for the little Carl Friedrich Gauss!

Fast forward to a hot summer day in the late 1970s.  I was then a high school student. On that day, factoring was taught first, then an assignment was given:

“Factor all expressions on page 18, only 88 of them”

The students went to work immediately except for me, who had the funny idea of doing something else.

While pretending to work, I was peeking through “The Ring of the Nibelung”, illustrated by Arthur Rackham. I had brought it to school and hidden it under the dull textbooks.

What would a sixteen-year-old rather do: slay dragons or factor algebraic expressions?

Just as I was enthralled by the adventures of Siegfried, my hero, a voice boomed from behind:

“What are you doing, rascal?!”

It was the teacher.

The punishment: factoring $x^n-1$, for $n=2, 3, 4, \dots, 20$, as additional homework!

I managed to complete this before staggering into bed.

As I was factoring, I made an observation:

The non-vanishing coefficients in any factor are either -1 or 1.

I wondered: Could this be true for all values of $n$?

As soon as I finished factoring $x^{20}-1$, I wrote down a conjecture named after Siegfried:

The absolute value of every non-vanishing coefficient appearing in the polynomial factors of $x^n-1$ is always 1.

That night I had a lucid dream in which I proved the “Siegfried conjecture.”

When I reviewed the proof in the morning, however, to my dismay it was a pile of junk.

Still, for several weeks I continued my effort fervently, but I could neither prove nor refute the “Siegfried conjecture”.

All to no avail.  I had to give up.

Fast forward again to a mid summer day a few years later.

I visited Joe, a high school classmate who was studying at MIT to become an aeronautical engineer.

While walking on campus, I teased my friend: “How many basketball championships does MIT have? Do you even have any sports team here?”

Joe shrugged: “Haha, but we have created MACSYMA, the world’s first program that does symbolic algebra! Come, let me show you its power.”

That evening, I sat quietly in front of a VT100 tube, getting acquainted with MACSYMA.

It was fantastic. I felt as though I had been brought out of the hold of a ship where I had lived all my life, onto the deck, into the fresh air.

I saw the good –

Then the better –

Finally the best –

I grabbed the keyboard and commanded MACSYMA:

It responded:

At $n = 105$, $-2$ is the coefficient of the term $x^{41}$.

In the blink of an eye, MACSYMA disproved the “Siegfried conjecture”.
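
For readers without access to MACSYMA, the computation is easy to reproduce today. The sketch below (in Python rather than MACSYMA) builds the irreducible rational factors of $x^n - 1$ — the cyclotomic polynomials — by exact integer polynomial division, then inspects their coefficients:

```python
# Factor x^n - 1 into cyclotomic polynomials Phi_d (d | n) using exact
# integer arithmetic, and look for a coefficient other than -1, 0, 1.
# x^n - 1 is the product of Phi_d over all divisors d of n.

def poly_div(num, den):
    # exact division of integer polynomials; den is monic here
    num = num[:]
    q = [0] * (len(num) - len(den) + 1)
    for k in range(len(q) - 1, -1, -1):
        q[k] = num[k + len(den) - 1] // den[-1]
        for j, dj in enumerate(den):
            num[k + j] -= q[k] * dj
    return q

phi = {}  # n -> coefficients of Phi_n, lowest degree first
for n in range(1, 106):
    num = [-1] + [0] * (n - 1) + [1]      # x^n - 1
    for d in range(1, n):
        if n % d == 0:
            num = poly_div(num, phi[d])   # divide out lower factors
    phi[n] = num

# "Siegfried's conjecture" holds for every n up to 104 ...
assert all(c in (-1, 0, 1) for n in range(1, 105) for c in phi[n])
# ... but fails at n = 105: the coefficient of x^41 (and x^7) is -2
print(phi[105][41], phi[105][7])   # -2 -2
```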

And I, while fully clothed, jumped up and shouted

Thanks to MACSYMA,  the day was finally mine!

Fast forward once more, to the summer of 2005: the spark MACSYMA had lit in me twenty years earlier gave life to Omega CAS Explorer:

Hooray! Every dog has its day!

Recently, as I was preparing for my talk at ACA 2017 (See “Little Bird and a Recursive Generator“), I looked at the formula

$1^2 + 2^2 +...+ n^2 = {{n(n+1)(2n+1)} \over 6}\quad\quad\quad(1)$

and wondered:

For what value(s) of $n>1$ does (1) produce a perfect square?

A search shows that when $n=24$, (1) yields $4900 = 70^2$, a perfect square:

But are there any other such $n$‘s?

Further search yields no result:

For $n, k >1, \{n = 24, k=70\}$ is the only solution to the Diophantine equation

${n(n+1)(2n+1) \over 6} = k^2$
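
The search is easy to script. The sketch below (Python, with an arbitrarily chosen search bound of $10^5$) finds no solution other than $n=24$:

```python
# Search for n > 1 with 1^2 + 2^2 + ... + n^2 a perfect square,
# i.e. n(n+1)(2n+1)/6 = k^2, by brute force up to n = 100000.
from math import isqrt

hits = []
for n in range(2, 100001):
    s = n * (n + 1) * (2 * n + 1) // 6   # exact integer sum of squares
    k = isqrt(s)
    if k * k == s:                       # s is a perfect square
        hits.append((n, k))

print(hits)   # [(24, 70)]
```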

Can you prove or refute my new conjecture?

# Inching towards Definite Integral

In a blog titled “Introducing Lady L“, we showed that

$\lim\limits_{h \to 0} {{log(x+h) - log(x)} \over h} = {1 \over x}\quad\quad\quad(1)$

In light of the fact that $1 \over x$ is a monotonic function on $R^+$, i.e.,

$\forall x_1, x_2\in R^+, x_1 < x_2 \implies {1 \over x_1} > {1 \over x_2}$,

we can prove that

Let $F(x) = \int\limits_{a}^{x} f(t)\;dt, t \in [a, b], \forall x \in [a, b], f(x)$ is continuous and monotonic $\implies {\lim\limits_{h \to 0}{ {F(x+h) - F(x)} \over h } }= f(x)\quad\quad\quad(2)$

The proof is simple, rigorous and similar to what we have done in “Introducing Lady L“.

Let $f(x)$ be a monotonically increasing function,

$h > 0 \implies \forall t \in (x, x+h), f(x) < f(t) < f(x+h)$.

and

$h\cdot f(x) < \int\limits_{x}^{x+h} f(t)\;dt < h\cdot f(x+h)$.

Since

$\int\limits_{x}^{x+h} f(t)\; dt = \int\limits_{x}^{a} f(t)\;dt + \int\limits_{a}^{x+h}f(t)\;dt$

$= \int\limits_{a}^{x+h}f(t)\;dt - \int\limits_{a}^{x}f(t)\;dt$

$= F(x+h)- F(x)\quad\quad\quad(3)$

It follows that

$f(x) < {{F(x+h) - F(x)} \over {h}} < f(x+h)$.

The fact that $f(x)$ is continuous tells us

$\lim\limits_{h \to 0} f(x + h) = f(x + 0) = f(x)$.

$\lim\limits_{h \to 0} f(x) = f(x)$.

Therefore,

$\lim\limits_{h \to 0} {{F(x+h) - F(x) } \over {h} }= f(x)$.

The case for $h < 0$ can be handled in a similar fashion.
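
The squeeze above can be watched numerically. In the sketch below (Python; the increasing function $f(t)=t^2$, base point $a=0$, and step $h$ are arbitrary illustrative choices), $F$ is approximated by a Riemann sum, and the difference quotient indeed lands between $f(x)$ and $f(x+h)$:

```python
# Illustrate f(x) < (F(x+h) - F(x))/h < f(x+h) for an increasing f,
# with F(x) the integral of f from a = 0 to x, approximated by a
# fine left Riemann sum.

def f(t):
    return t * t

def F(x, a=0.0, steps=100000):
    # left Riemann sum approximation of the integral of f from a to x
    dt = (x - a) / steps
    return sum(f(a + i * dt) for i in range(steps)) * dt

x, h = 1.0, 0.1
quotient = (F(x + h) - F(x)) / h
print(f(x), quotient, f(x + h))   # f(x) < quotient < f(x+h)
```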

(2) becomes more general when the condition of $f(x)$ being a monotonic function is removed:

Let $F(x) = \int\limits_{a}^{x} f(t)\;dt, t \in [a, b], \forall x \in [a, b], f(x)$ is continuous $\implies {\lim\limits_{h \to 0}{ {F(x+h) - F(x)} \over h } }= f(x)\quad\quad\quad(4)$

Let’s prove it.

By definition, $f(x)$ is continuous at $x$ means

$\forall \epsilon > 0, \exists \delta \ni 0 <|t - x| <\delta \implies |f(t)-f(x)| < \epsilon$.

It implies $-\epsilon < f(t)-f(x) < \epsilon$.

For $h > 0$, we have

$\int\limits_{x}^{x+h} -\epsilon \;dt < \int\limits_{x}^{x+h} {f(t)-f(x)}\;dt < \int\limits_{x}^{x+h} \epsilon\;dt$

That is

$-\epsilon \int\limits_{x}^{x+h}\;dt < \int\limits_{x}^{x+h}f(t)\;dt - \int\limits_{x}^{x+h}f(x)\;dt < \epsilon \int\limits_{x}^{x+h}\;dt$

Since $\int\limits_{x}^{x+h}\;dt = h, \int\limits_{x}^{x+h}f(x)\;dt = f(x)\cdot\int\limits_{x}^{x+h}\;dt = h \cdot f(x)$,

it follows that $-h\cdot\epsilon < \int\limits_{x}^{x+h} f(t)\;dt -h\cdot f(x) < h \cdot\epsilon$

or, $-\epsilon < { \int\limits_{x}^{x+h} f(t)\;dt \over h }- f(x) < \epsilon$

By (3), we have $-\epsilon < { {F(x+h)-F(x)} \over h }- f(x) < \epsilon$

i.e., $|{{F(x+h)-F(x)} \over h}- f(x)|< \epsilon$

As a result,

${\lim\limits_{h \to 0}{ {F(x+h) - F(x)} \over h } }= f(x)$.

For $h < 0$, since $x+h < x$,

$\int\limits_{x+h}^{x} -\epsilon\;dt < \int\limits_{x+h}^{x}{f(t)-f(x)}\;dt < \int\limits_{x+h}^{x}\epsilon\;dt$.

That is $h\cdot\epsilon < -\int\limits_{x}^{x+h}f(t)\;dt + h \cdot f(x) < -h\cdot\epsilon$

or, $h\cdot\epsilon < -(\int\limits_{x}^{x+h}f(t)\;dt - h \cdot f(x)) < -h\cdot\epsilon$

Dividing throughout by $-h\;(h < 0 \implies -h >0)$, and expressing $\int\limits_{x}^{x+h}f(t)\;dt$ as $F(x+h)-F(x)$, we arrive at

$-\epsilon < { {F(x+h)-F(x)} \over h }- f(x) < \epsilon$

as before.

We are now poised to define the derivative of a function:

Let $f$ be a function on an open subset $I$ of $R$. Let $x \in I$. We say that $f$ is differentiable at $x$ if

$\lim\limits_{h \to 0}{ {f(x+h) - f(x)} \over h }\quad\quad\quad(5)$

exists. If it exists, this limit, commonly denoted by $f'(x)$ or ${d \over dx} f(x)$, is called the derivative of $f$ at $x$.

For function $f_1(x) - f_2(x)$, the difference of two differentiable functions,

${{f_1(x+h)-f_2(x+h) - (f_1(x)-f_2(x))} \over h }= {{f_1(x+h) - f_1(x)} \over h}- {{f_2(x+h) - f_2(x)} \over h}$.

By definition,

${d \over dx} (f_1(x) - f_2(x)) = \lim\limits_{h \to 0}{{f_1(x+h)-f_2(x+h) - (f_1(x)-f_2(x))} \over h }$

${d \over dx} f_1(x) = \lim\limits_{h \to 0}{{f_1(x+h) - f_1(x)} \over h}$

${d \over dx} f_2(x) = \lim\limits_{h \to 0}{{f_2(x+h) - f_2(x)} \over h}$

We have

$\lim\limits_{h \to 0}{{f_1(x+h)-f_2(x+h) - (f_1(x)-f_2(x))} \over h }=\lim\limits_{h \to 0}{{f_1(x+h) - f_1(x)} \over h}-\lim\limits_{h \to 0}{{f_2(x+h) - f_2(x)} \over h}$

or,

${d \over dx} (f_1(x) - f_2(x)) = {d \over dx} f_1(x) - {d \over dx} f_2(x)\quad\quad\quad(6)$

With this definition, we can also re-state (4) as:

Let $F(x) = \int\limits_{a}^{x} f(t)\;dt, t \in [a, b], \forall x \in [a, b], f(x)$ is continuous $\implies {d \over dx} F(x)= f(x)\quad(7)$

From (7), it is clear that $F(x) = \int_{a}^{x} f(x)\;dx$ is a solution of the following equation:

${d \over dx} y(x) = f(x)\quad\quad\quad(8)$

where $y(x)$ is the unknown function.

In fact, for any function $g(x)$ that satisfies (8), we have

${d \over dx} F(x) = {d \over dx} g(x) = f(x)$.

Therefore by (6),

${d \over dx} (F(x) - g(x)) = {d \over dx} F(x) - {d \over dx} g(x) = f(x) - f(x) = 0\quad\quad\quad(9)$

That is,  $F(x) - g(x)$ is a function whose derivative is everywhere zero.

Geometrically, if the tangent to the curve of a function is horizontal at every point, the curve represents a constant function.

It is even more obvious if one considers a function $f(t)$ that describes the position (on some axis) of a car at time $t$. Then the derivative of the function, $\lim\limits_{\Delta t \to 0} {{f(t+{\Delta t}) -f(t)}\over {\Delta t}}$ is the instantaneous velocity of the car. If the derivative is zero for some time interval (the car does not move) then the value of the function is constant (the car stays where it is).

Hence, we assert

A function $f(x)$ on an open interval $I$ has derivative zero at each point $\implies \forall x \in I, f(x) = c$, a constant.

From (9) and above assertion, whose rigorous proof we postpone until later in “Sprint to FTC“, it follows that

$F(x) - g(x) = c$

or,

$F(x) = \int\limits_{a}^{x} f(t)\;dt = g(x) + c\quad\quad\quad(10)$

where $c$ is a constant.

We know

$\int\limits_{a}^{a} f(t)\;dt = 0$.

It implies

$\int\limits_{a}^{a} f(t)\;dt = g(a) + c = 0$.

i.e.,

$c = -g(a)$

and, (10) becomes

$\int\limits_{a}^{x} f(t)\;dt = g(x) - g(a)$

Let $x=b$, we have

$\int\limits_{a}^{b} f(x)\;dx = g(b) - g(a)$

This is the well known Newton-Leibniz formula. It expresses an algorithm for evaluating the definite integral $\int_{a}^{b} f(x)\;dx$:

Find any function $g(x)$ whose derivative is $f(x)$, and the difference $g(b)-g(a)$ gives the answer.
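
As an illustration of the algorithm (a Python sketch; the choice $f(x) = 1/x$ on $[1, 2]$ is arbitrary, with $g(x) = \log x$ serving as the antiderivative since $(\log x)' = 1/x$), the difference $g(b)-g(a)$ agrees with a direct Riemann-sum evaluation of the integral:

```python
# Newton-Leibniz in action: evaluate the definite integral of 1/x over
# [1, 2] two ways - a midpoint Riemann sum, and g(b) - g(a) with
# g(x) = log(x), whose derivative is 1/x.
from math import log

def f(x):
    return 1.0 / x

a, b, steps = 1.0, 2.0, 100000
dx = (b - a) / steps
riemann = sum(f(a + (i + 0.5) * dx) for i in range(steps)) * dx

newton_leibniz = log(b) - log(a)   # g(b) - g(a)
print(riemann, newton_leibniz)     # both approximately log(2)
```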

# A Sprint to FTC

Our sprint starts with Lagrange’s Mean-Value theorem which states:

A function $f(x)$ is

(1) continuous on the closed interval $[a, b]$
(2) differentiable on the open interval $(a, b)$

$\implies\exists c \in (a, b) \ni {{f(b)-f(a)} = f'(c) (b-a) }$.

Let’s prove it.

The area of a triangle with vertices $(x, f(x)), (a, f(a))$ and $(b, f(b))$ is the absolute value of

${1 \over 2} \left|\begin{array}{ccc} x & f(x) & 1 \\ a & f(a) & 1 \\ b & f(b) & 1 \end{array}\right|$

where $x \in [a, b]$ (See “Had Heron Known Analytic Geometry“)

Let

$g(x) = \left|\begin{array}{ccc} x & f(x) & 1 \\ a & f(a) & 1 \\ b & f(b) & 1 \end{array}\right|$,

Since $f(x)$ is differentiable, we have

$g'(x) = (b-a)f'(x)-f(b)+f(a)$.

Clearly,

$g(a) = g(b) = 0$

Therefore, by Rolle’s Theorem (See “Rolle’s theorem”) $\exists c \in (a, b) \ni g'(c) = 0$, i.e.,

$\exists c \in (a, b) \ni {{f(b)-f(a)}} = f'(c) (b-a)$

or

$\exists c \in (a, b) \ni f'(c) = {{f(b)-f(a)} \over {b-a}}$.
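
For a concrete illustration (a Python sketch; the choice $f(x)=x^3$ on $[0, 2]$ is arbitrary), the point $c$ can be computed explicitly: $3c^2 = {{f(2)-f(0)} \over {2-0}} = 4$ gives $c = {2 \over \sqrt{3}} \in (0, 2)$:

```python
# Locate the point c guaranteed by Lagrange's Mean-Value theorem for
# f(x) = x^3 on [a, b] = [0, 2]: solve f'(c) = (f(b)-f(a))/(b-a),
# i.e. 3c^2 = 4, so c = 2/sqrt(3).
from math import sqrt

def f(x):
    return x ** 3

def fprime(x):
    return 3 * x ** 2

a, b = 0.0, 2.0
slope = (f(b) - f(a)) / (b - a)    # slope of the chord = 4
c = 2 / sqrt(3)
print(c, fprime(c), slope)         # f'(c) equals the chord's slope
```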

The geometric meaning of Lagrange’s Mean-Value theorem is illustrated for several functions in Fig. 1. It shows that the graph of a differentiable function has at least one tangent line parallel to the chord connecting $(a, f(a))$ and $(b, f(b))$.

Fig. 1

The bottom curve escapes the theorem’s conclusion because it fails the hypothesis: it is not differentiable at one point.

Following Lagrange’s Mean-Value theorem are two corollaries. We have encountered and accepted the first one without proper proof in the past (See “Inching towards Definite Integral“)

Let’s state and prove them now.

Corollary 1. A function $f(x)$ on an open interval $I$ has derivative zero at each point $\implies \forall x \in I, f(x) = c$, a constant.

It is true due to the fact that $\forall x_1, x_2 \in I, x_1 \neq x_2, f(x)$ is both continuous on $[x_1, x_2]$ and differentiable on $(x_1, x_2)$. By Lagrange’s Mean-Value theorem, $\exists c \in (x_1, x_2) \ni f(x_2)-f(x_1) = f'(c)(x_2-x_1)$. Since $\forall x \in I, f'(x)=0 \implies f'(c)=0$, we have $f(x_2)-f(x_1)=0 \cdot (x_2-x_1) = 0$, i.e., $f(x_2) = f(x_1)$. Hence $f(x)=c$, a constant on $I$.

Corollary 2. Two functions, $f_1(x)$ and $f_2(x)$ have the same derivative at each point on an open interval $I \implies \forall x \in I, f_1(x) - f_2(x) = c$, a constant.

For $\forall x \in I, f_1'(x)=f_2'(x) \implies (f_1(x) - f_2(x))' = f'_1(x) - f'_2(x) = 0$. By corollary 1, $f_1(x) - f_2(x) = c$, a constant.

Next, we define a set $G$ as follows:

$G \triangleq \{g(x) | g'(x) = f(x)\}$

i.e.,

$G$ is the set of functions such that $\forall g \in G, g'(x) = f(x)$ for all $x$.

$G$ is certainly not empty for we have proved that $\int\limits_{a}^{x}f(x)\;dx \in G$ by showing $(\int\limits_{a}^{x}f(x)\;dx)'= f(x)$ (See “Inching towards Definite Integral“). It follows that $\forall g \in G$,

$(g(x)-\int\limits_{a}^{x}f(x)\;dx)' = g'(x)-(\int\limits_{a}^{x}f(x)\;dx)'=f(x)-f(x)=0$

By Corollary 2,

$g(x)-\int\limits_{a}^{x}f(x)\;dx = c$.

Let $x=a$, we have

$g(a)-\int\limits_{a}^{a}f(x)\;dx = g(a)-0=g(a)=c$.

Hence

$g(x)-\int\limits_{a}^{x}f(x)\;dx = g(a)$

or

$\int\limits_{a}^{x}f(x)\;dx = g(x)-g(a)$.

Let’s summarize the result of our exploration in a theorem:

On an open interval $I$ containing $a$, a function $g(x)$ is differentiable and $g'(x) = f(x)\implies \forall x \in I$,

$\int\limits_{a}^{x} f(x)\;dx = g(x)-g(a)\quad\quad\quad\quad(1)$

This is FTC, The Fundamental Theorem of Calculus.

Let $x=b$, (1) becomes the Newton-Leibniz formula

$\int\limits_{a}^{b}f(x)\;dx = g(b)-g(a)$,

our old friend for evaluating $\int\limits_{a}^{b} f(x) \;dx$ (See “Inching towards Definite Integral”).

# Knock Euler’s Line

my imagination
is a piece of board
my sole instrument
is a wooden stick

I strike the board
yes—yes
no—no

“A Knocker” by Zbigniew Herbert

Euler’s line theorem states
In every triangle
the intersection of the medians
the intersection of the heights
and the center of the circumscribed circle
are on a straight line

Let’s prove it
with the aid of Omega CAS Explorer

We know

$x^2+y^2+d \cdot x + e \cdot y +f =0\quad\quad\quad(1)$

is a circle centered at $(-{d \over 2}, -{e \over 2})$ with radius $r$ satisfying

$r^2={{d^2+e^2-4f} \over 4}\quad\quad\quad(2)$

provided (2) is positive.

Find d, e, f from the triangle’s vertices $(-x_1, 0), (x_1, 0), (x_2, y_2)$:

ceq:x^2+y^2+d*x+e*y+f=0;
eq1:ev(ceq, x=x1, y=0);
eq2:ev(ceq, x=-x1, y=0);
eq3:ev(ceq, x=x2, y=y2);
sol:linsolve([eq1, eq2, eq3], [d,e,f]);

$\implies d=0, e=-{{y_2^2+x_2^2-x_1^2} \over y_2}, f=-x_1^2$

Evaluate (2):

ev(d^2 + e^2-4*f, sol);

$\implies {(y_2^2+x_2^2-x_1^2)^2\over y_2^2} + 4x_1^2$,

always positive for $x_1>0$, $y_2 \neq 0$.

Find the center of the circumscribed circle $x_c, y_c$:

xc: ev(-d/2, sol);

$\implies 0$

yc: ev(-e/2, sol);

$\implies {{y_2^2+x_2^2-x_1^2} \over {2y_2}}$

Find the intersection of the medians $x_m, y_m$:

eq1: y*((x1+x2)/2+x1)=y2/2*(x+x1)$
eq2: y*((x2-x1)/2-x1)=y2/2*(x-x1)$
sol: solve([eq1, eq2], [x,y])$
xm: ev(x, sol);

$\implies {x_2 \over 3}$

ym: ev(y, sol);

$\implies {y_2 \over 3}$

is(ev(x2*y=y2*x, x=xm, y=ym));

$\implies true$

Find the intersection of the heights $x_h, y_h$:

eq1: y2*y=-(x2+x1)*(x-x1)$
eq2: y2*y=-(x2-x1)*(x+x1)$
sol: solve([eq1, eq2], [x,y])$
xh:ev(x, sol);

$\implies x_2$

yh:ev(y, sol);

$\implies -{{x_2^2-x_1^2} \over y_2}$

Compute the area of triangle with vertices $(x_c, y_c), (x_m, y_m), (x_h, y_h)$:

m:matrix([xc, yc,1], [xm, ym, 1], [xh, yh,1]);
determinant(m)$
ratsimp(%);

$\implies 0$

Indeed
$(x_c, y_c), (x_m, y_m)$ and $(x_h, y_h)$ are on a straight line.
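
The symbolic result can be cross-checked numerically. The sketch below (Python, with an arbitrarily chosen triangle $x_1=1, x_2={1 \over 2}, y_2=2$) recomputes the three centers and evaluates the same collinearity determinant:

```python
# Numeric check of Euler's line for a concrete triangle: the centroid,
# orthocenter and circumcenter are collinear, i.e. the 3x3 determinant
# used in the text vanishes. Vertices follow the text's setup.
x1, x2, y2 = 1.0, 0.5, 2.0
A, B, C = (-x1, 0.0), (x1, 0.0), (x2, y2)

centroid = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
orthocenter = (x2, -(x2**2 - x1**2) / y2)               # altitude intersection
circumcenter = (0.0, (y2**2 + x2**2 - x1**2) / (2*y2))  # equidistant from A, B, C

def det3(p, q, r):
    # twice the signed area of triangle pqr
    return (q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1])

print(det3(circumcenter, centroid, orthocenter))   # ~0 => collinear
```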

# A Computer Algebra Aided Proof of Feuerbach’s Nine-Point Circle Theorem

Feuerbach’s Nine-Point Circle Theorem states that for any triangle, a circle can be constructed that passes through the following nine significant points:

1. The midpoint of each side of the triangle
2. The foot of each altitude
3. The midpoint of the line segment from each vertex of the triangle to the orthocenter

Let’s prove it with the aid of Omega CAS Explorer.

Step-1 Set up the circle equation:

$x^2 + y^2 + d \cdot x + e \cdot y +f = 0 \quad\quad\quad(1)$

is a circle centered at $( -{d \over 2}, -{e \over 2})$ with radius $r$ satisfying

$r^2 = { {d^2 + e^2-4f} \over 4}\quad\quad\quad(2)$

provided (2) is positive.

Step-2 Find d, e, f using the side midpoints p1, p2, p3 of the triangle with vertices $(-x_1, 0), (x_1, 0), (x_2, y_2)$:

ceq: x^2+y^2+d*x+e*y+f=0;

eq1: ev(ceq, x=(x1+x2)/2, y=y2/2);

eq2: ev(ceq, x=0, y=0);

eq3: ev(ceq, x=(x2-x1)/2, y=y2/2);

sol: linsolve([eq1, eq2, eq3], [d,e,f]);

$\implies d = -x_2, e = - {{y_2^2-x_2^2+x_1^2} \over {2 y_2}}, f = 0$

The new circle equation is

nceq: ev(ceq, sol);

$\implies - {{(y_2^2-x_2^2+x_1^2)\,y} \over {2 y_2}} +y^2 - x_2 \cdot x+x^2 = 0$,

Evaluate (2)

ev(d^2+e^2-4*f, sol);

$\implies { (y_2^2-x_2^2+x_1^2)^2 \over {4 y_2^2}} + x_2^2$,

always positive for $x_1 > 0 , y_2 \neq 0$.

Step-3 Show the altitude feet p5, p6, p4 are on the circle:

p5:ev(nceq, x=x2, y=0);

$\implies 0 = 0$

p4: linsolve([(x2-x1)*y=y2*(x-x1), y2*y=-(x2-x1)*(x+x1)], [x,y]);

ratsimp(ev(nceq, p4));

$\implies 0 = 0$

p6: linsolve([(x2+x1)*y=y2*(x+x1), y*y2=-(x2+x1)*(x-x1)], [x,y]);

ratsimp(ev(nceq, p6));

$\implies 0 = 0$

Step-4 Find the orthocenter

o: linsolve([x=x2, y*y2=-(x2+x1)*(x-x1)], [x,y]);

$\implies x = x_2, y = - {{x_2^2-x_1^2} \over y_2}$

Step-5 Show p7, p8, p9, the midpoints from each vertex to the orthocenter, are on the circle:

xo: rhs(o[1]);

yo: rhs(o[2]);

p7: ratsimp(ev(nceq, x=(xo+x2)/2, y=(yo+y2)/2));

$\implies 0 = 0$

p8: ratsimp(ev(nceq, x=(xo-x1)/2, y=(yo+0)/2));

$\implies 0 = 0$

p9: ratsimp(ev(nceq, x=(xo+x1)/2, y=(yo+0)/2));

$\implies 0 = 0$
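
The whole theorem can also be cross-checked numerically. The sketch below (Python, with an arbitrarily chosen triangle $x_1=1, x_2={1 \over 2}, y_2=2$) computes all nine points directly and verifies they are equidistant from the circle's center $(-{d \over 2}, -{e \over 2})$ found in Step-2:

```python
# Numeric check of the nine-point circle for the triangle with vertices
# (-x1, 0), (x1, 0), (x2, y2): the three side midpoints, the three
# altitude feet, and the three midpoints from each vertex to the
# orthocenter all lie at the same distance from one center.
from math import hypot

x1, x2, y2 = 1.0, 0.5, 2.0
A, B, C = (-x1, 0.0), (x1, 0.0), (x2, y2)
H = (x2, -(x2**2 - x1**2) / y2)      # orthocenter, as in Step-4

def midpoint(p, q):
    return ((p[0]+q[0])/2, (p[1]+q[1])/2)

def foot(p, q, r):
    # foot of the perpendicular from p onto line qr
    dx, dy = r[0]-q[0], r[1]-q[1]
    t = ((p[0]-q[0])*dx + (p[1]-q[1])*dy) / (dx*dx + dy*dy)
    return (q[0] + t*dx, q[1] + t*dy)

nine = [midpoint(A, B), midpoint(B, C), midpoint(C, A),   # side midpoints
        foot(A, B, C), foot(B, C, A), foot(C, A, B),      # altitude feet
        midpoint(A, H), midpoint(B, H), midpoint(C, H)]   # vertex-to-orthocenter

center = (x2/2, (y2**2 - x2**2 + x1**2) / (4*y2))  # (-d/2, -e/2) from Step-2
radii = [hypot(p[0]-center[0], p[1]-center[1]) for p in nine]
print(min(radii), max(radii))   # all nine distances agree
```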

This post is essentially my presentation at the 21st Conference on Applications of Computer Algebra, July 20-23, 2015, Kalamata, Greece.