
“Mine’s Bigger!”

Question: Which one is bigger, e^\pi or \pi^e?

Answer:

Consider function

f(x) = \frac{\log(x)}{x}.

We have

f'(x) = \frac{1-\log(x)}{x^2} \implies f'(e)=\frac{1-\log(e)}{e^2} \overset{\log(e)=1}{=} 0\quad\quad\quad(1)

and

f''(x) = \frac{2\log(x)}{x^3}-\frac{3}{x^3}\implies f''(e) = \frac{2\log(e)}{e^3}-\frac{3}{e^3}= -\frac{1}{e^3}<0.\quad\quad\quad(2)

From (1) and (2), x=e is a critical point at which f attains a local maximum. In fact, since f'(x)=\frac{1-\log(x)}{x^2} is positive for 0<x<e and negative for x>e,

\frac{\log(x)}{x} attains its global maximum at x=e

which means

\forall x >0, x \neq e \implies \frac{\log(x)}{x} < \frac{\log(e)}{e}.

When x=\pi,

\frac{\log(\pi)}{\pi} < \frac{\log(e)}{e}

or,

e\log(\pi) < \pi\log(e).

It follows that

\log(\pi^e) < \log(e^\pi).\quad\quad\quad(3)

Since \log(x) is a monotonic increasing function (see Exercise-1), we deduce from (3) that

e^\pi > \pi^e.
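
A quick numerical check agrees. The following is a minimal sketch in Python (not part of the argument above), using only the standard math module:

from math import e, pi, log

print(e ** pi)                    # approximately 23.14
print(pi ** e)                    # approximately 22.46
print(e ** pi > pi ** e)          # True

# the same comparison via f(x) = log(x)/x
print(log(pi) / pi < log(e) / e)  # True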


Exercise-1 Show that \log(x) is a monotonic increasing function.

What is the shape of a hanging rope?

Question: What is the shape of a flexible rope hanging from nails at each end and sagging under the gravity?

Answer:

First, observe that no matter how the rope hangs, it will have a lowest point A (see Fig. 1)

Fig. 1

It follows that the hanging rope can be placed in a coordinate system whose origin coincides with the lowest point A and the tangent to the rope at A is horizontal:

Fig. 2

At A, the rope to its left exerts a horizontal force. This force (or tension), denoted by T_0, is a constant:

Fig. 3

Shown in Fig. 3 also is an arbitrary point B with coordinates (x, y) on the rope. The tension at B, denoted by T_1, is along the tangent to the rope curve. \theta is the angle T_1 makes with the horizontal.

Since the section of the rope from A to B is stationary, the net force acting on it must be zero. Namely, the sum of the horizontal force, and the sum of the vertical force, must each be zero:

\begin{cases}T_1\cos(\theta)=T_0\quad\quad\quad(1)\\ T_1\sin(\theta) = \rho gs\;\;\quad\quad(2)\end{cases}

where \rho is the rope’s mass per unit length and s is the length of the rope from A to B.

Dividing (2) by (1), we have

\frac{T_1\sin(\theta)}{T_1\cos(\theta)} = \tan(\theta) = \frac{\rho g}{T_0}s\overset{k=\frac{\rho g}{T_0}}{\implies} \tan(\theta) = ks.\quad\quad\quad(3)

Since

\tan(\theta) = \frac{dy}{dx}, the slope of the curve at B,

and

s = \int\limits_{0}^{x}\sqrt{1+(\frac{dy}{dx})^2}\;dx,

we rewrite (3) as

\frac{dy}{dx} = k \int\limits_{0}^{x}\sqrt{1+(\frac{dy}{dx})^2}\;dx

and so,

\frac{d^2y}{dx^2}=k\cdot \frac{d}{dx}(\int\limits_{0}^{x}\sqrt{1+(\frac{dy}{dx})^2}\;dx)=k\sqrt{1+(\frac{dy}{dx})^2}

i.e.,

\frac{d^2y}{dx^2}=k\sqrt{1+(\frac{dy}{dx})^2}.\quad\quad\quad(4)

To solve (4), let

p = \frac{dy}{dx}.

We have

\frac{dp}{dx} = k\sqrt{1+p^2} \implies \frac{1}{\sqrt{1+p^2}}\frac{dp}{dx}=k.\quad\quad\quad(5)

Integrating (5) with respect to x gives

\log(p+\sqrt{1+p^2}) = kx + C_1\overset{p(0)=y'(0)=0}{\implies} C_1 = 0.

i.e.,

\log(p+\sqrt{1+p^2}) = kx.\quad\quad\quad(6)

Solving (6) for p yields

p = \frac{dy}{dx} =\sinh(kx).\quad\quad\quad(7)

Integrating (7) with respect to x gives

y = \frac{1}{k} \cosh(kx) + C_2\overset{y(0)=0,\cosh(0)=1}{\implies}C_2=-\frac{1}{k}.

Hence,

y = \frac{1}{k}\cosh(kx)-\frac{1}{k}.

Essentially, it is the hyperbolic cosine function that describes the shape of a hanging rope.
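
As a sanity check, equation (4) can also be integrated numerically and compared with \frac{1}{k}\cosh(kx)-\frac{1}{k}. Below is a short sketch in Python using SciPy's solve_ivp, with an arbitrarily assumed value k=1; it is an illustration only, not part of the derivation:

import numpy as np
from scipy.integrate import solve_ivp

k = 1.0                                   # assumed value of rho*g/T_0

def rhs(x, state):                        # y'' = k*sqrt(1+(y')^2), written as a system in (y, p)
    y, p = state
    return [p, k * np.sqrt(1.0 + p * p)]

xs = np.linspace(0.0, 2.0, 50)
sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0], t_eval=xs)   # y(0)=0, y'(0)=0

exact = (np.cosh(k * xs) - 1.0) / k
print(np.max(np.abs(sol.y[0] - exact)))   # small: the numerical and exact solutions agree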


Exercise-1 Show that \int \frac{1}{\sqrt{1+p^2}} dp = \log(p + \sqrt{1+p^2}).

Exercise-2 Solve \log(p+\sqrt{1+p^2}) = kx for p.

Snowflake and Anti-Snowflake Curves

Fig. 1

The snowflake curve made its first appearance in a 1906 paper written by the Swedish mathematician Helge von Koch. It is a closed curve of infinite perimeter that encloses a finite area.

Starting with an equilateral triangle of side length a and area A_0, the snowflake curve is constructed iteratively (see Fig. 1). At each iteration, an outward-pointing equilateral triangle is created on the middle third of each side, and the base of the newly created triangle is removed:

Fig. 2

Let s_i be the number of sides the snowflake curve has at the end of the i^{th} iteration. From Fig. 2, we see that

\begin{cases}s_i = 4s_{i-1}\\ s_0=3\end{cases}.

Fig. 3

Solving for s_i (see Fig. 3) gives

s_i = 3\cdot 4^i.

Suppose the length of each side at the end of the i^{th} iteration is a^*_i. We have

\begin{cases}a^*_i = \frac{a^*_{i-1}}{3}\\ a^*_0=a\end{cases},

therefore,

a^*_i = (\frac{1}{3})^i a

and p_i, the perimeter of the snowflake curve at the end of the i^{th} iteration, is

p_i = s_{i-1} (4\cdot a^*_i) = (3\cdot4^{i-1})\cdot (4\cdot(\frac{1}{3})^ia)=3(\frac{4}{3})^i a.

It follows that

\lim\limits_{i\rightarrow \infty}p_i =\infty

since |\frac{4}{3}|>1.

Let A^*_{i-1}, A^*_i be the areas of the equilateral triangles created at the end of the (i-1)^{th} and i^{th} iterations respectively. Namely,

A^*_{i-1} = \boxed{\frac{1}{2}a^*_{i-1}\sqrt{(a^*_{i-1})^2-(\frac{a^*_{i-1}}{2})^2}}

and

A^*_{i} = \frac{1}{2}\frac{a^*_{i-1}}{3}\sqrt{(\frac{a^*_{i-1}}{3})^2-(\frac{1}{2}\frac{a^*_{i-1}}{3})^2}= \frac{1}{9}\cdot\boxed{\frac{1}{2}a^*_{i-1}\sqrt{(a^*_{i-1})^2-(\frac{a^*_{i-1}}{2})^2}}=\frac{1}{9}A^*_{i-1}.

Since

\begin{cases}A^*_i = \frac{1}{9} A^*_{i-1} \\ A^*_0=A_0\end{cases},

solving for A^*_i gives

A^*_i = (\frac{1}{9})^i A_0.

Hence, the area added at the end of the i^{th} iteration is

\Delta  A_i = s_{i-1} A^*_i = (3\cdot4^{i-1})\cdot(\frac{1}{9})^{i}A_0.

After n iterations, the total enclosed area is

A_n =A_0 + \sum\limits_{i=1}^{n}\Delta A_i = A_0 + \sum\limits_{i=1}^{n}(3\cdot4^{i-1})\cdot(\frac{1}{9})^i A_0=\frac{8}{5}A_0-\frac{3}{5}(\frac{4}{9})^{n}A_0.

As the number of iterations tends to infinity,

\lim\limits_{n\rightarrow \infty}A_n = \frac{8}{5}A_0.

i.e., the area of the snowflake is \frac{8}{5} of the area of the original triangle.
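
Both limits are easy to confirm by iterating the recurrences directly. Here is a small sketch in Python, assuming a = 1 so that A_0 = \frac{\sqrt{3}}{4}; it tabulates the perimeter and the enclosed area after thirty iterations:

from math import sqrt

a = 1.0
A0 = sqrt(3) / 4                         # area of the unit equilateral triangle
sides, side_len, area = 3, a, A0         # s_0, a*_0, A_0

for i in range(1, 31):
    side_len /= 3                        # a*_i = a*_{i-1} / 3
    area += sides * (sqrt(3) / 4) * side_len ** 2   # s_{i-1} new triangles added
    sides *= 4                           # s_i = 4 * s_{i-1}

perimeter = sides * side_len             # p_i = s_i * a*_i = 3*(4/3)**i
print(perimeter)                         # large and still growing with i
print(area, 8 / 5 * A0)                  # both approximately 0.6928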

If at each iteration, the new triangles are pointed inward, the anti-snowflake is generated (see Fig. 4 ).

Fig. 4 First four iterations of anti-snowflake curve

Like the Snowflake curve, the perimeter of the anti-snowflake curve grows boundlessly, whereas its total enclosed area approaches a finite limit (see Exercise-1).


Exercise-1 Let A_0 be the area of the original triangle. Show that the area enclosed by the anti-snowflake curve approaches \frac{2}{5}A_0.

Exercise-2 An “anti-square curve” may be generated in a manner similar to that of the anti-snowflake curve (see Fig. 5). Find:

(1) The perimeter at the end of i^{th} iteration

(2) The enclosed area at the end of i^{th} iteration

Fig. 5 First four iterations of anti-square curve

An Edisonian Moment


We are told that shortly after Edison invented the light bulb, he handed the glass section of a light bulb to one of his engineers, asking him to find the volume of the inside. This was quite a challenge to the engineer, because a light bulb is such an irregular shape. Figuring the volume of the bulb’s irregular shape was quite different from figuring the volume of a glass, or a cylinder.

Several days later, Edison passed the engineer’s desk, and asked for the volume of the bulb, but the engineer didn’t have it. He had been trying to figure the volume mathematically, and had some problems because the shape was so irregular.

Edison took the bulb from the man; filled the bulb with water; poured the water into a beaker, which measured the volume, and handed it to the amazed engineer.

Vincent A. Miller, Working with Words, Words to Work With, 2001, pp. 57-58


“Analyze This!” shows the equilibrium point (x_*, y_*) of

\begin{cases}\frac{dx}{dt}=n k_2 y - n k_1 x^n\\ \frac{dy}{dt}=k_1 x^n-k_2 y\\x(0)=x_0, y(0)=y_0\;\end{cases}

can be found by solving equation

-nk_1x^n-k_2x+c_0=0

where k_1>0, k_2>0, c_0>0.

When n=4, Omega CAS Explorer’s equation solver yields a list of x’s (see Fig. 1).

Fig. 1

It appears that identifying x_* from this list of formidable looking expressions is tedious at best and close to impossible at worst:

Fig. 2

However, by Descartes’ rule of signs,

\forall k_1>0, k_2>0, c_0>0, -4k_1x^4-k_2x+c_0=0 has exactly one positive root.

It means that x_* can be identified quickly by evaluating the x’s numerically with arbitrarily chosen positive values of k_1, k_2 and c_0. For instance, k_1=1, k_2=1, c_0=1 (see Fig. 3).

Fig. 3

Only the second value is positive:

Therefore, x_* is the second expression on the list in Fig. 1.
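
The same conclusion is easy to reproduce outside a CAS. A minimal sketch in Python with NumPy, using the sample values k_1=k_2=c_0=1 mentioned above, finds all roots of -4k_1x^4-k_2x+c_0=0 and keeps the positive real one:

import numpy as np

k1, k2, c0 = 1.0, 1.0, 1.0                 # arbitrarily chosen positive values

# coefficients of -4*k1*x^4 - k2*x + c0, highest degree first
roots = np.roots([-4 * k1, 0.0, 0.0, -k2, c0])
print(roots)

positive = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
print(positive)                            # exactly one value, as Descartes' rule predicts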

It’s Magic Square!

A magic square is an n by n square grid filled with the integers 1, 2, ..., n^2 such that each cell contains a different integer and the sum of the integers, called the magic number, is the same in each row, column and diagonal. The order of an n by n magic square is n.

The first known example of a magic square is said to have been found on the back of a turtle by Chinese Emperor Yu in 2200 B.C. (see Fig. 1): circular dots representing the integers 1 through 9 are arranged in a 3 by 3 grid. Its order and magic number are 3 and 15 respectively.

Fig. 1

The first magic square to appear in the Western world was the one depicted in a copperplate engraving by the German artist-mathematician Albrecht Dürer, who also managed to enter the year of engraving – 1514 – in the two middle cells of the bottom row (see Fig. 2)

Fig. 2

Let m be the magic number of an n by n magic square. Summing all of its rows gives (see “Little Bird and a Recursive Generator”)

n\cdot m = 1+2+3+...+n^2 = \frac{n^2(n^2+1)}{2}.

That is,

m = \frac{n(n^2+1)}{2}.\quad\quad\quad(1)
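
For instance, (1) gives m=15 when n=3 and m=34 when n=4. A short Python sketch that checks a candidate grid against its magic number, applied here to Dürer’s square from Fig. 2, might look like this:

def is_magic(grid):
    # every row, column and both diagonals must sum to n(n^2+1)/2
    n = len(grid)
    m = n * (n * n + 1) // 2
    sums = [sum(row) for row in grid]
    sums += [sum(grid[i][j] for i in range(n)) for j in range(n)]
    sums.append(sum(grid[i][i] for i in range(n)))
    sums.append(sum(grid[i][n - 1 - i] for i in range(n)))
    return all(s == m for s in sums)

durer = [[16,  3,  2, 13],
         [ 5, 10, 11,  8],
         [ 9,  6,  7, 12],
         [ 4, 15, 14,  1]]
print(is_magic(durer))                     # True; the magic number is 34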

Fig. 3

There is no 2 by 2 magic square. For if there were, it would have to be true (see Fig. 3) that

a+b = a+c \implies b=c, which contradicts “each cell contains a different integer”.

Shown in Fig. 4 is a square whose rows and columns add up to 260, but the diagonals do not, which makes the square only semi-magic.

Fig. 4

What’s interesting is that a chess knight, starting its L-shaped moves from the cell marked “1”, can tour all 64 squares in numerical order. From its finishing cell (marked “64”), the knight can return to the starting point to begin a new tour.

Exercise-1 Is it true that for every n \in \mathbb{N}\setminus\left\{2\right\} there exists at least one magic square?

Exercise-2 Is there a Knight’s Tour magic square?

Exercise-3 Fill in the missing numbers in the following square so that it is a magic square:

An Epilogue of “Analyze This!”

In “Analyze This!”, we examined the system

\begin{cases}\frac{dx}{dt}=n k_2 y - n k_1 x^n \\ \frac{dy}{dt}=k_1 x^n-k_2 y\\x(0)=x_0, y(0)=y_0\;\end{cases}

qualitatively.

Now, let us seek its equilibrium (x_*, y_*) quantitatively.

In theory, one may first solve the differential equation

\frac{dx}{dt} = -nk_1x^n-k_2x+c_0

for x(t), using a popular symbolic differential equation solver such as ‘ode2’. Then compute x_* as \lim\limits_{t \rightarrow \infty} x(t), followed by y_* = \frac{k_1}{k_2}x_*^n.

However, in practice, such an attempt meets a dead end rather quickly (see Fig. 1).

Fig. 1

Bringing in a more sophisticated solver is of no avail (see Fig. 2).

Fig. 2

An alternative is to get x_* directly from the polynomial equation

-nk_1x^n-k_2x+c_0=0.\quad\quad\quad(1)

We can solve (1) for x in closed form when n \le 4. For example, when n=3, -3k_1 x^3-k_2x+c_0=0 has three roots (see Fig. 3).

Fig. 3

The first two roots are complex. By Descartes’ rule of signs, the third root

\frac{(\sqrt{4k_2^3+81c_0^2k_1}+9c_0\sqrt{k_1})^{\frac{2}{3}}-2^{\frac{2}{3}}k_2}{3 \cdot 2^{\frac{1}{3}}\sqrt{k_1}(\sqrt{4k_2^3+81c_0^2k_1}+9c_0\sqrt{k_1})^{\frac{1}{3}}}

is the x_* of equilibrium (x_*, y_*) (see Exercise-1).
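
This, too, can be cross-checked numerically. A minimal sketch in Python with NumPy, assuming the sample values k_1=k_2=c_0=1, lists the three roots of -3k_1x^3-k_2x+c_0=0 and evaluates the closed-form expression above at the same values:

import numpy as np

k1, k2, c0 = 1.0, 1.0, 1.0                 # arbitrarily chosen positive values

roots = np.roots([-3 * k1, 0.0, -k2, c0])  # -3*k1*x^3 - k2*x + c0 = 0
print(roots)                               # two complex roots, one positive real root

# the closed-form third root quoted above
d = np.sqrt(4 * k2 ** 3 + 81 * c0 ** 2 * k1) + 9 * c0 * np.sqrt(k1)
x_star = (d ** (2 / 3) - 2 ** (2 / 3) * k2) / (3 * 2 ** (1 / 3) * np.sqrt(k1) * d ** (1 / 3))
print(x_star)                              # matches the positive real root above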


Exercise-1 Show the third root is real and positive.

Exercise-2 Obtain x_* from -4k_1x^4-k_2x+c_0=0.

A Delightful Piece of Mathematics

Prove: \sqrt[3]{\sqrt{108} +10} - \sqrt[3]{\sqrt{108} - 10} = 2.

We will proceed as follows:

For all a, b,

a^3-b^3=(a-b)(a^2+ab+b^2)=(a-b)(a^2-2ab+b^2+3ab)=(a-b)((a-b)^2+3ab).

That is,

a^3-b^3 = (a-b)((a-b)^2+3ab).\quad\quad\quad(1)

When

a = \sqrt[3]{\sqrt{108}+10}, \quad b=\sqrt[3]{\sqrt{108}-10},

we have

a^3 = (\sqrt[3]{\sqrt{108}+10})^3 = \sqrt{108}+10, \quad b^3=(\sqrt[3]{\sqrt{108}-10})^3=\sqrt{108}-10.

It means

a^3-b^3= 20.

We also have

ab = \sqrt[3]{\sqrt{108}+10}\cdot\sqrt[3]{\sqrt{108}-10}=\sqrt[3]{(\sqrt{108})^2-10^2}=\sqrt[3]{8}=2.

As a result, (1) becomes

20 = (a-b)((a-b)^2+3\cdot 2).

Rewriting it as

20 = x (x^2+6)

where x denotes a-b, we see that

a-b=\sqrt[3]{\sqrt{108}+10}-\sqrt[3]{\sqrt{108}-10} is a positive root of 20=x(x^2+6) (positive because \sqrt{108}+10 > \sqrt{108}-10 > 0 implies a>b).\quad\quad\quad(\star)

Clearly,

2 is also a positive root of 20=x(x^2+6)\quad\quad\quad(\star\star)

since 20=2\cdot(2^2+6).

Moreover (see Exercise 1),

20=x(x^2+6) has only one positive root.\quad\quad\quad(\star\star\star)

Therefore, it must be true that

\sqrt[3]{\sqrt{108} +10} - \sqrt[3]{\sqrt{108} - 10} = 2.
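
A quick floating-point check in Python (only the standard math module is needed) agrees with this result:

from math import sqrt

a = (sqrt(108) + 10) ** (1 / 3)
b = (sqrt(108) - 10) ** (1 / 3)
print(a - b)                               # 2, up to rounding error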

Exercise-1 Prove: 20=x(x^2+6) has only one positive root.

Exercise-2 Prove: \sqrt[3]{8+3\sqrt{21}} + \sqrt[3]{8-3\sqrt{21}} = 1

Analyze This!

Consider the following chemical reaction

\underbrace{A + A + ... + A}_{n} \underset{k_2}{\stackrel{k_1}{\rightleftharpoons}} A_n

where n molecules of A combine reversibly to form A_n and k_1, k_2 are the reaction rates.

If x, y are the concentrations of A, A_n respectively, then according to the Law of Mass Action, the reaction is governed by

\begin{cases}\frac{dx}{dt}=n k_2 y - n k_1 x^n \quad\quad(0-1)\\ \frac{dy}{dt}=k_1 x^n-k_2 y\quad\quad\quad(0-2)\\x(0)=x_0, y(0)=y_0\;\quad(0-3)\end{cases}

Without solving this initial-value problem quantitatively, the future state of the system can be predicted by qualitatively analyzing how the value of (x, y) changes over the course of time.

To this end, we solve (0-1) for y first:

y=\frac{1}{n k_2}(\frac{dx}{dt} +n k_1 x^n).

Substitute it in (0-2),

\frac{d}{dt} (\frac{1}{n k_2}(\frac{dx}{dt} +n k_1 x^n)) =k_1 x^n -k_2 \cdot \frac{1}{n k_2}(\frac{dx}{dt} +n k_1 x^n).

It simplifies to

\frac{d^2x}{dt^2} + (n^2 k_1 x^{n-1} + k_2)\frac{dx}{dt} =0.\quad\quad\quad(0-4)

Let

p=\frac{dx}{dt},

we have

\frac{d^2x}{dt^2}=\frac{d}{dt}(\frac{dx}{dt})=\frac{dp}{dt}=\frac{dp}{dx}\frac{dx}{dt}=\frac{dx}{dt}\frac{dp}{dx}=p\cdot\frac{dp}{dx}.

Substituting p, p\frac{dp}{dx} for \frac{dx}{dt}, \frac{d^2x}{dt^2} respectively in (0-4) gives

p\frac{dp}{dx}+(n^2 k_1 x^{n-1}+k_2)p=0.

It means p=0 or

\frac{dp}{dx}=-n^2k_1x^{n-1}-k_2.

Integrating it with respect to x gives

p = \frac{dx}{dt}=-n k_1 x^n - k_2 x +c_0.

Let

f(x) = -n k_1 x^n - k_2 x +c_0,

we have

\forall x>0, \frac{df(x)}{dx} = -n^2 k_1 x^{n-1} - k_2 < 0 \implies f(x) = -n k_1 x^{n} - k_2 x + c_0 is a monotonically decreasing function.

In addition, Descartes’ rule of signs reveals that

f(x)=0 has exactly one real positive root.

By definition, this root is the x_* in an equilibrium point (x_*, y_*).

Fig. 1

Hence,

As time advances, x\uparrow if x_0 < x_*. Otherwise (x_0>x_*), x\downarrow \quad\quad\quad(1-1)

Dividing (0-2) by (0-1) yields

\frac{dy}{dx} = -\frac{1}{n}.

That is,

y=-\frac{1}{n} x + c_1.

By (0-3),

c_1 = y_0 + \frac{1}{n}x_0.

And so,

y=-\frac{1}{n} x + y_0 + \frac{1}{n}x_0.

Since y is a line with a negative slope,

y is a monotonically decreasing function of x.\quad\quad\quad(1-2)

Moreover, from (0-1) and (0-2), we see that

\forall x > 0, (x, \frac{k_1}{k_2}x^n) is an equilibrium point.

i.e.,

All points on the curve y = \frac{k_1}{k_2}x^n in the first quadrant of the x-y plane are equilibrium points \quad\quad\quad(1-3).

Based on (1-1), (1-2) and (1-3), for an initial state (x_0, y_0),

x_0 < x_* \implies x\uparrow, y\downarrow.

Similarly,

x_0 > x_* \implies x\downarrow, y\uparrow.

Fig. 2

A phase portrait of the system is shown in Fig. 3.

Fig. 3

It shows that (x, y) on the trajectory approaches the equilibrium point (x_*, y_*) over the course of time. Namely, the system is asymptotically stable.
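
The same behaviour shows up in a direct numerical simulation. The sketch below, in Python with SciPy, assumes the sample values n=2 and k_1=k_2=1 and integrates (0-1)-(0-2) from two different initial states, one with x_0 < x_* and one with x_0 > x_*; in both cases the final state lands on the curve y=\frac{k_1}{k_2}x^n:

import numpy as np
from scipy.integrate import solve_ivp

n, k1, k2 = 2, 1.0, 1.0                    # assumed sample values

def rhs(t, state):
    x, y = state
    return [n * k2 * y - n * k1 * x ** n,  # (0-1)
            k1 * x ** n - k2 * y]          # (0-2)

for x0, y0 in [(0.2, 1.0), (1.5, 0.1)]:    # x_0 below and above x_*
    sol = solve_ivp(rhs, (0.0, 50.0), [x0, y0], rtol=1e-8)
    x, y = sol.y[0, -1], sol.y[1, -1]
    print(x, y, k1 / k2 * x ** n)          # final y is approximately (k1/k2)*x^n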

Qualitative Analysis of a Simple Chemical Reaction

We will study a simple chemical reaction described by

A + A \underset{k_2}{\stackrel{k_1}{\rightleftharpoons}} A_2

where two molecules of A combine reversibly to form A_2 and k_1, k_2 are the reaction rates.

If x is the concentration of A, y of A_2, then according to the Law of Mass Action,

\begin{cases}\frac{1}{2}\cdot\frac{dx}{dt}=k_2 y -k_1 x^2 \\ \frac{dy}{dt}=k_1 x^2-k_2 y\\x(0)=x_0, y(0)=y_0\end{cases}

or equivalently,

\begin{cases}\frac{dx}{dt}=2 k_2 y - 2 k_1 x^2 \quad\quad(0-1)\\ \frac{dy}{dt}=k_1 x^2-k_2 y\quad\quad\quad(0-2)\\x(0)=x_0, y(0)=y_0\;\quad(0-3)\end{cases}

We seek first the equilibrium points that represent the steady state of the system. They are the constant solutions where \frac{dx}{dt}=0 and \frac{dy}{dt}=0, simultaneously.

From \frac{dx}{dt}=2 k_2 y - 2 k_1 x^2=0 and \frac{dy}{dt}=k_1 x^2-k_2 y=0, it is apparent that

\forall  x \ge 0, (x_*, y_*) = (x, \frac{k_1}{k_2}x^2)\quad\quad\quad(0-4)

is an equilibrium point.

To find the value of x_*, we solve for y from (0-1),

y = \frac{1}{2k_2}(\frac{dx}{dt}+2 k_1x^2).\quad\quad\quad(0-5)

Substitute it in (0-2),

\frac{d}{dt}(\frac{1}{2k_2}(\frac{dx}{dt}+2k_1x^2))=k_1x^2-k_2\frac{1}{2k_2}(\frac{dx}{dt}+2k_1x^2)

\implies  \frac{d}{dt}(\frac{dx}{dt}+2k_1x^2)=2k_2k_1x^2-k_2(\frac{dx}{dt}+2k_1x^2),

i.e.,

\frac{d^2x}{dt^2} + \frac{dx}{dt}(4k_1x+k_2)=0.\quad\quad\quad(0-6)

This is a 2nd order nonlinear differential equation. Since it has no direct dependence on t, we can reduce its order by substituting for its first order derivative.

Let

p=\frac{dx}{dt},

we have

\frac{d^2x}{dt^2} = \frac{d}{dt}(\frac{dx}{dt})=\frac{dp}{dt}=\frac{dp}{dx}\frac{dx}{dt}=\frac{dx}{dt}\frac{dp}{dx}=p\cdot\frac{dp}{dx}

so that (0-6) is reduced to

p\cdot\frac{dp}{dx}+p\cdot(4k_1x+k_2)=0,\quad\quad\quad(0-7)

a 1st order differential equation. It follows that either p=\frac{dx}{dt}=0 or \frac{dp}{dx}+(4k_1x+k_2)=0.

The second case gives

\frac{dp}{dx}=-4k_1x-k_2.

Integrating it with respect to x gives

p=\frac{dx}{dt}=-2k_1x^2-k_2x + c_0.\quad\quad\quad(0-8)

Hence, the equilibrium points of (0-1) and (0-2) can be obtained by solving a quadratic equation

-2k_1x^2-k_2x + c_0=0.

Notice that in order to have x_* \ge 0 as a solution, c_0 must be non-negative.

Fig. 1

The valid solution is

x_* = \frac{\sqrt{k_2^2+8c_0k_1}-k_2}{4k_1}.

Fig. 2

By (0-4),

y_* =\frac{k_1}{k_2}x_*^2=\frac{(\sqrt{k_2^2+8c_0k_1}-k_2)^2}{16k_1 k_2},

and so, the equilibrium point is

(x_*, y_*) = (\frac{\sqrt{k_2^2+8c_0k_1}-k_2}{4k_1}, \frac{(\sqrt{k_2^2+8c_0k_1}-k_2)^2}{16k_1 k_2}).
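
As a quick sanity check, plugging in sample values (here k_1=k_2=c_0=1, an assumption made only for illustration) confirms in Python that this x_* is non-negative and solves -2k_1x^2-k_2x+c_0=0:

from math import sqrt

k1, k2, c0 = 1.0, 1.0, 1.0                          # assumed sample values

x_star = (sqrt(k2 ** 2 + 8 * c0 * k1) - k2) / (4 * k1)
y_star = k1 / k2 * x_star ** 2

print(x_star, y_star)                               # 0.5, 0.25
print(-2 * k1 * x_star ** 2 - k2 * x_star + c0)     # 0.0, so x_* is indeed a root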

Next, we turn our attention to the phase-plane trajectories that describe the paths traced out by the (x, y) pairs over the course of time, depending on the initial values.

For (x, y) \ne (x_*, y_*), \frac{dx}{dt} \ne 0. Dividing (0-2) by (0-1) yields

\frac{\frac{dy}{dt}}{\frac{dx}{dt}}= \frac{k_1 x^2-k_2 y}{2 k_2 y- 2 k_1 x^2}=-\frac{1}{2}

i.e.,

\frac{dy}{dx} = -\frac{1}{2}.

Integrating it with respect to x,

y = -\frac{1}{2} x + c_1.

By (0-3),

c_1= y_0 +\frac{1}{2}x_0.

Therefore,

y = -\frac{1}{2}x +y_0 + \frac{1}{2}x_0.\quad\quad\quad(1-1)

Moreover, by (0-5)

y=\frac{1}{2k_2}(\frac{dx}{dt} + 2 k_1 x^2) \overset{(0-8)}{=} \frac{1}{2k_2}(-2k_1 x^2-k_2 x + c_0 +2 k_1 x^2)=\frac{1}{2k_2}(-k_2x +c_0).

As a result,

y_0 = \frac{1}{2k_2}(-k_2 x_0 + c_0).

Substitute y_0 in (1-1), we have

y = -\frac{1}{2} x + \frac{1}{2k_2}(-k_2x_0+c_0) + \frac{1}{2}x_0 = -\frac{1}{2}x + \frac{c_0}{2k_2}.

This is the trajectory of the system. Clearly, all trajectories are monotonically decreasing lines.

At last, let us examine how the system behaves in the long run.

If x < x_* then \frac{dx}{dt}>0 (see Fig. 2) and x will increase. As a result, y will decrease. Similarly, if x > x_*, \frac{dx}{dt}<0 ensures that x will decrease. Consequently, y will increase.

Fig. 3 Trajectories and Equilibriums

It is evident that as time t advances, (x, y) on the trajectory approaches the equilibrium point (x_*, y_*).

A phase portrait of the system is illustrated in Fig. 4.

Fig. 4

It shows that the system is asymptotically stable.