Pumpkin Pi


Fig. 1

Among the many images of carved pumpkins, I like the one above (see Fig. 1) the most. It shows Leibniz’s formula for calculating the value of \pi. Namely,

\frac{\pi}{4} = 1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\dots
To derive this formula, we begin by finding the derivative of \arctan{x}:

Let y = \arctan{x}; then x=\tan{y}, and

\frac{d}{dx}x=\frac{d}{dx}\tan{y}=\frac{d}{dy}\tan{y}\frac{dy}{dx}=\sec^{2}{y}\frac{dy}{dx} = (1+\tan^{2}{y})\frac{dy}{dx}

= (1+x^{2})\frac{dy}{dx}.

Since \frac{d}{dx}x=1, we have (1+x^{2})\frac{dy}{dx} = 1, i.e.,

\frac{d}{dx}\arctan{x}=\frac{1}{1+x^2}\quad\quad\quad\quad\quad\quad(1)
It follows from (1) and the Fundamental Theorem of Calculus that

\int\limits_{0}^{1}\frac{1}{1+x^2}dx = \arctan{x}\bigg|_{0}^{1}=\frac{\pi}{4}


\frac{\pi}{4} = \int\limits_{0}^{1}\frac{1}{1+x^2} dx\quad\quad\quad\quad\quad\quad(2)
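Formula (2) is easy to check numerically before we go on. Below is a quick sketch using composite Simpson's rule; the helper `simpson` is my own, not a library routine:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Integral (2): the result should agree with pi/4 to many digits.
approx = simpson(lambda x: 1 / (1 + x * x), 0.0, 1.0)
print(approx, math.pi / 4)
```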

Carrying out polynomial long division, we observe

\frac{1}{1+x^2} = 1 + \frac{-x^2}{1+x^2},

\frac{1}{1+x^2} = 1 - x^2 + \frac{x^4}{1+x^2},

\frac{1}{1+x^2} = 1 - x^2 + x^4  + \frac{-x^6}{1+x^2},

\frac{1}{1+x^2} = 1 - x^2 + x^4 - x^6 +  \frac{x^8}{1+x^2}.

It seems that

\frac{1}{1+x^2} = \sum\limits_{k=1}^{n}{(-1)^{k+1}x^{2k-2}} + \frac{(-1)^{n} x^{2n}}{1+x^2}\quad\quad\quad\quad\quad\quad(3)

Assuming (3) is true, we integrate both sides with respect to x from 0 to 1:

\int\limits_{0}^{1}\frac{1}{1+x^2}dx=\int\limits_{0}^{1}\sum\limits_{k=1}^{n}(-1)^{k+1}x^{2k-2} dx + \int\limits_{0}^{1}\frac{(-1)^{n} x^{2n}}{1+x^2}dx

= \sum\limits_{k=1}^{n}(-1)^{k+1}\int\limits_{0}^{1}x^{2k-2}dx +(-1)^{n}\int\limits_{0}^{1}\frac{x^{2n}}{1+x^2}dx

= \sum\limits_{k=1}^{n}(-1)^{k+1}\frac{x^{2k-1}}{2k-1}\bigg|_{0}^{1}+(-1)^{n}\int\limits_{0}^{1}\frac{x^{2n}}{1+x^2}dx .

As a result of integration,  (2) becomes

\frac{\pi}{4} = \sum\limits_{k=1}^{n}\frac{(-1)^{k+1}}{2k-1} + (-1)^{n}\int\limits_{0}^{1}\frac{x^{2n}}{1+x^2}dx,


\frac{\pi}{4} - \sum\limits_{k=1}^{n}\frac{(-1)^{k+1}}{2k-1} =(-1)^{n}\int\limits_{0}^{1}\frac{x^{2n}}{1+x^2}dx.

Taking absolute values,


|\frac{\pi}{4} -  \sum\limits_{k=1}^{n}\frac{(-1)^{k+1}}{2k-1}|=|(-1)^{n}\int\limits_{0}^{1}\frac{x^{2n}}{1+x^2}dx | = \int\limits_{0}^{1}\frac{x^{2n}}{1+x^2}dx < \int\limits_{0}^{1}x^{2n}dx

=\frac{x^{2n+1}}{2n+1}\bigg|_{0}^{1}= \frac{1}{2n+1}.

Moreover, \forall \epsilon > 0, solving \frac{1}{2n+1} < \epsilon gives n > \frac{1}{2}(\frac{1}{\epsilon}-1). This means that \forall \epsilon > 0, \exists n^*=\frac{1}{2}(\frac{1}{\epsilon}-1) such that for all n > n^*, |\frac{\pi}{4} -  \sum\limits_{k=1}^{n}\frac{(-1)^{k+1}}{2k-1}|<\epsilon, i.e.,

\lim\limits_{n\to\infty}{ \sum\limits_{k=1}^{n}\frac{(-1)^{k+1}}{2k-1}}=\frac{\pi}{4},

i.e.,


\pi = 4 \sum\limits_{k=1}^{\infty}\frac{(-1)^{k+1}}{2k-1}\quad\quad\quad(4)

The numerical value of \pi is therefore approximated according to (4) by the partial sum

 4 \sum\limits_{k=1}^{n}\frac{(-1)^{k+1}}{2k-1}=4(1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\dots+(-1)^{n+1}\frac{1}{2n-1})\quad\quad\quad\quad(5)

Its value converges to \pi as n increases.

However, (5) is by no means a practical way of finding the value of \pi, since its convergence is so slow that many terms must be summed before a reasonably accurate result emerges (see Fig. 2).


Fig. 2
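The slow convergence in Fig. 2 can be reproduced in a few lines of Python (`leibniz_pi` is my own helper name):

```python
import math

def leibniz_pi(n):
    """4 times the partial sum (5): 4 * sum_{k=1}^{n} (-1)^(k+1) / (2k-1)."""
    return 4 * sum((-1) ** (k + 1) / (2 * k - 1) for k in range(1, n + 1))

# Even a thousand terms leave an error near 1e-3.
for n in (10, 100, 1000):
    print(n, leibniz_pi(n), abs(leibniz_pi(n) - math.pi))
```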

I doubt Leibniz ever used his own formula to obtain the value of \pi!

Let me leave you with an exercise: prove (3).

Little Bird and a Recursive Generator


When I was asked to prove the following power summation formulas by mathematical induction:

\sum\limits_{i=1}^{n}i = 1+2+3+...+n=\frac{n (n+1)}{2},

\sum\limits_{i=1}^{n}i^2=1^2+2^2+3^2+...+n^2= \frac{n (n+1) (2n +1)}{6},

\sum\limits_{i=1}^{n}i^3=1^3+2^3+...+n^3 = \frac{n^2 (n+1)^2}{4},

I wondered how these closed forms were obtained in the first place! Did a little bird whisper the formulas into our ears?

Even though there are elementary derivations of the power summation formulas, for example, the visual derivations in MAA’s Proofs Without Words, none goes beyond the third power.

In this blog, I will construct a recursive generator capable of generating the closed form of the power summation

\sum\limits_{i=1}^{n}i^p

for all p \in N^{+}.

Let us start with a picture.


Fig. 1

Let A denote the area of a rectangle. Fig. 1 shows

\sum A_{blue} = A_{n^p \times n} - \sum A_{yellow}.


Since

\sum A_{blue} = \sum\limits_{i=1}^{n}i^p,

A_{n^p\times n} = n^p n,

\sum A_{yellow}= \sum\limits_{i=1}^{n-1}i((i+1)^p-i^p),

we have

\sum\limits_{i=1}^{n}i^p = n^p n - \sum\limits_{i=1}^{n-1}i((i+1)^p-i^p)\quad\quad\quad(1).
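Identity (1) can be spot-checked numerically before we use it; a minimal Python sketch (the function names are mine):

```python
def power_sum(p, n):
    """Left side of (1): 1^p + 2^p + ... + n^p."""
    return sum(i ** p for i in range(1, n + 1))

def rectangles(p, n):
    """Right side of (1): the n^p-by-n rectangle minus the yellow areas."""
    return n ** p * n - sum(i * ((i + 1) ** p - i ** p) for i in range(1, n))

# Both sides agree for every p and n tried.
print(all(power_sum(p, n) == rectangles(p, n)
          for p in range(1, 6) for n in range(1, 11)))
```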

When p = 1, (1) becomes

\sum \limits_{i=1}^{n}i=n^2-\sum\limits_{i=1}^{n-1}i((i+1)-i)

= n^2-\left(\sum\limits_{i=1}^{n}i((i+1)-i)-n((n+1)-n)\right)

= n^2-\sum\limits_{i=1}^{n}i+n.

Solving for \sum\limits_{i=1}^{n}i yields

\sum\limits_{i=1}^{n}i=\frac{n(n+1)}{2}\quad\quad\quad(2)

When p = 2,

\sum\limits_{i=1}^{n}i^2=n^2 n-\sum\limits_{i=1}^{n-1}i((i+1)^2-i^2)

= n^3-2\sum \limits_{i=1}^{n}i^2-\sum\limits_{i=1}^{n}i+2n^2+n.

Substituting (2) for \sum\limits_{i=1}^{n}i, we have

3\sum\limits_{i=1}^{n}i^2 = n^3+2n^2+n-\frac{n(n+1)}{2} = \frac{n(n+1)(2n+1)}{2},

so that
\sum\limits_{i=1}^{n}i^2= \frac{n(n+1)(2n+1)}{6}.

In general, \forall p \in N^+,

\sum\limits_{i=1}^{n}i^p = n^p n-\sum\limits_{i=1}^{n-1}i((i+1)^p-i^p)

= n^{p+1}-(\sum\limits_{i=1}^{n}i((i+1)^p-i^p)-n((n+1)^p-n^p))

= n^{p+1}-\sum\limits_{i=1}^{n}\left(i\left(\sum\limits_{j=0}^{p}\binom{p}{j}i^{p-j}-i^p\right)\right)+n((n+1)^p-n^p)


= n(n+1)^p - \sum\limits_{i=1}^{n}\sum\limits_{j=1}^{p}\binom{p}{j}i^{p-j+1}

= n(n+1)^p - \sum\limits_{i=1}^{n}(p i^p + \sum\limits_{j=2}^{p}\binom{p}{j}i^{p-j+1})

= n(n+1)^p - p\sum\limits_{i=1}^{n}i^p-\sum\limits_{i=1}^{n}\sum\limits_{j=2}^{p}\binom{p}{j}i^{p-j+1}

= n(n+1)^p-p\sum\limits_{i=1}^{n}i^p-\sum\limits_{j=2}^{p}\sum\limits_{i=1}^{n}\binom{p}{j}i^{p-j+1}

= n(n+1)^p-p\sum\limits_{i=1}^{n}i^p-\sum\limits_{j=2}^{p}(\binom{p}{j}\sum\limits_{i=1}^{n}i^{p-j+1}).

Solving for \sum\limits_{i=1}^{n}i^p, we obtain

\sum\limits_{i=1}^{n}i^p = \frac{n(n+1)^p -\sum\limits_{j=2}^{p}(\binom{p}{j}\sum\limits_{i=1}^{n}i^{p-j+1}) }{p+1}.


Let

s_p \triangleq \sum\limits_{i=1}^{n}i^p;

then


s_p = \frac{n(n+1)^p - \sum\limits_{j=2}^{p}\binom{p}{j} s_{p-j+1}}{p+1}\quad\quad\quad(3)
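Before turning to the CAS, here is a minimal Python sketch of the recursion (3) using exact rational arithmetic (`fractions.Fraction` and `math.comb`; the function name `s` is my own, not part of any CAS):

```python
from fractions import Fraction
from math import comb

def s(p, n):
    """Evaluate s_p = 1^p + 2^p + ... + n^p via the recursive generator (3).

    For p = 1 the sum over j is empty and (3) reduces to n(n+1)/2.
    """
    total = Fraction(n) * (n + 1) ** p
    for j in range(2, p + 1):
        total -= comb(p, j) * s(p - j + 1, n)
    return total / (p + 1)

print(s(3, 10))  # 1^3 + 2^3 + ... + 10^3 = (10*11/2)^2 = 3025
```

Running the same recursion on a symbolic n in a CAS is what yields the closed forms themselves.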

What we have here is a recursive generator capable of generating power summation formulas for all positive integer powers. Implemented in Omega CAS Explorer, it produces the results shown in the figures below, which illustrate the awesomeness of this generator:


Fig. 2


Fig. 3

Chaplin or Leibniz?


Before I do a proof via the three-step mathematical induction, the classic Charlie Chaplin movie clip often comes to my mind. It nudges me to seek better ways. For example, to prove

(k+1)(1^k+2^k+3^k+\dots+n^k)<(n+1)^{k+1}, \quad k, n \in N^+,

instead of applying the three-step mathematical induction, let us look at Fig. 1.


Fig. 1

If A denotes the total area of the blue rectangles, Fig. 1 shows

A=1\cdot 1^k+1\cdot 2^k+1\cdot 3^k+\dots+1\cdot n^k = 1^k+2^k+3^k+\dots+n^k.

It also shows that

A < \int\limits_{0}^{n+1}x^k\; dx, the area under the curve y = x^k from x=0 to x=n+1.

By the fundamental theorem of calculus,

\int\limits_{0}^{n+1}x^k\;dx = \frac{x^{k+1}}{k+1}\bigg|_{0}^{n+1}=\frac{(n+1)^{k+1}}{k+1}.


Hence

1^k+2^k+3^k+\dots+n^k < \frac{(n+1)^{k+1}}{k+1},

i.e.,

(k+1)(1^k+2^k+3^k+\dots+n^k)<(n+1)^{k+1}, \quad\quad k, n \in N^+.

This proof suggests another inequality:

(k+1)(1^k+2^k+3^k+\dots+n^k)>n^{k+1}, \quad\quad k, n \in N^+.
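Both inequalities are easy to test numerically; a short sketch (`power_sum` is my own helper):

```python
def power_sum(k, n):
    """1^k + 2^k + ... + n^k."""
    return sum(i ** k for i in range(1, n + 1))

# n^(k+1) < (k+1)(1^k + ... + n^k) < (n+1)^(k+1) for every k, n tried.
ok = all(n ** (k + 1) < (k + 1) * power_sum(k, n) < (n + 1) ** (k + 1)
         for k in range(1, 8) for n in range(1, 20))
print(ok)
```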

ax + by + c = 0: Why It Applies to All Straight Lines

In the traditional teaching of Analytical Geometry, the governing equation for a straight line has the following five forms, along with limitations for the first four:

[1] Point-Slope form: y - y_1 = k (x-x_1), where (x_1, y_1) is a point on the line and k is the slope. The limitation of this form is that it cannot represent a line perpendicular to the x-axis, since such a line has no slope.

[2] Slope-Intercept form: y = k x + b, where k is the slope and b is the intercept the line makes on the y-axis. Its limitation is that it cannot represent a line perpendicular to the x-axis.

[3] Two-Point form: \frac{y-y_1}{y_2 - y_1} = \frac{x - x_1}{x_2 - x_1}, where (x_1, y_1), (x_2, y_2) are two points on the line. This form can represent neither a line perpendicular nor a line parallel to the x-axis: when x_1 = x_2 or y_1 = y_2, it breaks down from division by zero.

[4] Intercept form: \frac{x}{a} + \frac{y}{b} = 1, where a, b are the intercepts the line makes on the x-axis and y-axis respectively, and a\neq 0, b\neq 0. Again, this form can represent neither a line perpendicular nor a line parallel to the x-axis. It does not work for any line that passes through the origin either.

[5] General form: a x + b y + c = 0\;(a^2+b^2 \neq 0). This form can represent all lines.

Here I am presenting a proof to show [5] is indeed capable of representing all straight lines.

Let us start with an observation:

In a rectangular coordinate system, given two distinct points (x_1, y_1), (x_2, y_2), and any point (x, y) on the line connecting (x_1, y_1) and (x_2, y_2), the area of the triangle with vertices (x_1, y_1), (x_2, y_2) and (x, y) must be zero!

By a theorem proved in my blog “Had Heron Known Analytic Geometry”, this means that for such (x_1, y_1), (x_2, y_2) and (x, y),

\left|\begin{array}{ccc}  x_1 & y_1 & 1  \\  x_2 & y_2 & 1 \\ x & y  & 1  \end{array}\right|= 0.

Therefore, we can define the line connecting two distinct points as a set of (x, y) such that the area of the triangle with vertices (x_1, y_1), (x_2, y_2) and (x, y) is zero, mathematically written as

A \triangleq  \{ (x, y)  | \left|\begin{array}{ccc}  x_1 & y_1 & 1  \\  x_2 & y_2 & 1 \\ x & y  & 1  \end{array}\right|= 0, (x_1-x_2)^2+(y_1-y_2)^2 \neq 0\}.

Since \forall (x, y) \in A,

\left|\begin{array}{ccc}  x_1 & y_1 & 1  \\  x_2 & y_2 & 1 \\ x & y  & 1  \end{array}\right|= x_1 y_2-x y_2-x_2 y_1+x y_1+x_2 y-x_1y = 0,

collecting terms shows that

(y_1-y_2) x + (x_2-x_1) y + x_1 y_2-x_2 y_1 = 0\quad\quad\quad\quad(1)

is an algebraic representation of the line connecting two distinct points (x_1, y_1) and (x_2, y_2).

When x_1=x_2, (1) becomes

(y_1-y_2)(x-x_1) = 0,

and since the two points are distinct, y_1-y_2 \neq 0, so we have

x = x_1,

a line perpendicular to the horizontal axis.

When y_1=y_2 (so that x_1 \neq x_2), (1) becomes

(x_2-x_1)(y-y_1) = 0, i.e.,

y = y_1,

a line parallel to the horizontal axis.

Evaluating (1) with x_2=0, y_2=0 yields

(y-y_1) x_1 -(x-x_1)y_1=0.

Collecting terms in (1) and letting

a = y_1-y_2,

b = x_2-x_1,

c = x_1 y_2-x_2 y_1,
(1) can be expressed as

ax + by + c = 0.

In fact, we can prove the following theorem: let

B \triangleq  \{ (x, y)\ |\ a x +b y+c=0,\ a^2+b^2 \neq 0\};

then A=B.

To prove A=B, we need to show

\forall (x, y) \in A \implies (x, y) \in B\quad\quad\quad\quad(2)

\forall (x, y) \in B \implies (x, y) \in A\quad\quad\quad\quad(3)

We have already shown (2) by setting the values of a, b and c earlier.

We will prove (3) now:

For the two distinct points (x_1, y_1), (x_2, y_2) and \forall (x, y) \in B, we have

\begin{cases}a x_1 + b y_1 +c =0 \\ a x_2 + b y_2 +c =0 \\ a x + b y+c =0\end{cases}.

Written in matrix form,

\left(\begin{array}{ccc}  x_1 & y_1 & 1  \\  x_2 & y_2 & 1 \\ x & y  & 1  \end{array}\right) \left(\begin{array}{rrr}  a \\  b \\  c  \end{array}\right)= 0.


If

\left|\begin{array}{ccc}  x_1 & y_1 & 1  \\  x_2 & y_2 & 1 \\ x & y  & 1  \end{array}\right| \neq 0,

then by Cramer’s rule,

\left(\begin{array}{rrr}  a \\  b \\  c  \end{array}\right) is a column vector of zeros,

which contradicts the fact that a and b are not both zero.


Therefore,

\left|\begin{array}{ccc}  x_1 & y_1 & 1  \\  x_2 & y_2 & 1 \\ x & y  & 1  \end{array}\right| = 0,

which implies:

(x, y) \in A.

The consequence of A=B is that every point (x, y) on a line connecting two distinct points satisfies equation a x + b y + c =0 for some a, b (a^2+b^2\neq 0).

Stated differently,

a x + b y +c = 0, where a, b are not both zero, is the governing equation of any straight line.
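The proof also hands us a recipe for computing a, b, c from two distinct points, by expanding the determinant: a = y_1-y_2, b = x_2-x_1, c = x_1y_2-x_2y_1. A small Python sketch (`line_through` is my own name):

```python
def line_through(p1, p2):
    """Coefficients (a, b, c) of a x + b y + c = 0 through two distinct points."""
    (x1, y1), (x2, y2) = p1, p2
    return y1 - y2, x2 - x1, x1 * y2 - x2 * y1

# A vertical line, beyond the reach of the slope forms, poses no problem:
a, b, c = line_through((1, 0), (1, 5))
print(a, b, c)  # -5*x + 0*y + 5 = 0, i.e., the line x = 1
```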

Had Heron Known Analytic Geometry…

In my previous two posts, “An Algebraic Proof of Heron’s Formula” and “An Alternative Derivation of Heron’s Formula,” I proved Heron’s formula for the area of a triangle with three given sides. Based on Heron’s formula, we can now prove a theorem concerning the area of any triangle in a rectangular coordinate system, namely:

The area of a triangle with vertices at (x_1, y_1), (x_2, y_2), (x_3, y_3) in a rectangular coordinate system can be expressed as

\left|\frac{1}{2} D\right|\quad\quad\quad\quad\quad(1)

where D is the determinant of the matrix

\left(\begin{array}{ccc}  x_1 & y_1 & 1  \\  x_2 & y_2 & 1 \\ x_3 & y_3  & 1  \end{array}\right).

I offer the following proof:


Fig. 1

By Heron’s formula, the area of the triangle in Fig. 1 is

A=\sqrt{s (s-a) (s-b) (s-c)}\quad\quad\quad\quad\quad(2)

where a, b, c are the three sides of the triangle and s=(a+b+c)/2. In terms of the coordinates of the vertices,

a = \sqrt{(x_2-x_3)^2+(y_2-y_3)^2},

b = \sqrt{(x_1-x_3)^2+(y_1-y_3)^2},

c = \sqrt{(x_1-x_2)^2+(y_1-y_2)^2}.

Let B=|\frac{1}{2} D|; we have

B^2 ={|\frac{1}{2} D|}^2=(\frac{1}{2}D)^2.

Computing A^2-B^2 using Omega CAS Explorer (see Fig. 2), the result shows

A^2-B^2 = 0.

Fig. 2

Since A > 0 and B \geq 0 imply

A+B > 0,

A^2-B^2=(A-B)(A+B)=0 implies

A = B.

Hence, (1) and (2) are equivalent.

I would like to learn of any alternative proofs.
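For readers without a CAS at hand, the equivalence of (1) and (2) can at least be spot-checked numerically; a Python sketch (the helper names are mine):

```python
from math import dist, sqrt

def heron_area(p1, p2, p3):
    """Area via Heron's formula (2)."""
    a, b, c = dist(p2, p3), dist(p1, p3), dist(p1, p2)
    s = (a + b + c) / 2
    return sqrt(s * (s - a) * (s - b) * (s - c))

def det_area(p1, p2, p3):
    """Area via (1): |D/2|, with D the 3x3 determinant of the vertices."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    D = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    return abs(D) / 2

tri = ((0, 0), (4, 0), (0, 3))
print(heron_area(*tri), det_area(*tri))  # both give 6.0
```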

Higham’s Parametric Curve


Fig. 1

The parametric curve (see Fig. 1) has always intrigued me. It was to my delight to finally find its equations in MATLAB Guide, written by the Highams:

x = \int\limits_{0}^{t} \sin {\omega^2}\; d\omega

y = \int\limits_{0}^{t} \cos {\omega^2}\; d\omega
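MATLAB is not required to trace the curve: any quadrature rule approximates the two integrals. Here is a Python sketch using composite Simpson's rule (the helper names are my own):

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def curve_point(t):
    """(x(t), y(t)) of the parametric curve above."""
    x = simpson(lambda w: math.sin(w * w), 0.0, t)
    y = simpson(lambda w: math.cos(w * w), 0.0, t)
    return x, y

# Sampling curve_point over a range of t values traces Fig. 1.
print(curve_point(1.0))
```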

Below is the curve plotted by Omega CAS Explorer:


Fig. 2

After several failed trials, I realized that ‘nticks’ must be provided to ‘plot2d’ in order to produce the image correctly.

Next, I tried the ‘draw2d’ function (see Fig. 3), but the tick marks and numbers are too close to the image of the curve.


Fig. 3

To better position the image, I specified ‘xrange’ and ‘yrange’ to put more space between the image and the tick marks and numbers. Cropping the resulting image yields Fig. 1 at the top of this post.


Fig. 4

I would like to ask all the Maxima Jedis out there:

Without specifying ‘xrange’ and ‘yrange’, is there an option I can set to turn off the tick marks and numbers?

Thank you in advance for your answer.