Viva Rocketry! Part 1


In this post, we will first look at the main characteristics of rocket flight, and then examine the feasibility of launching a satellite into an orbit above the earth as the payload of a rocket.

A rocket accelerates itself by ejecting part of its mass with high velocity.


Fig. 1

Fig. 1 shows a moving rocket. Between time t and t+\Delta t, the mass \Delta m leaves the rocket in the direction opposite to its motion. As a result, the rocket is propelled forward with increased speed.

Let

m(\square), m_{\square} – the mass of the rocket at time \square

\vec{v}_{\square} – the velocity of the rocket at time \square

v(\square), v_{\square} – the magnitude of \vec{v}_{\square}

\vec{v}^*_{t+\Delta t} – the velocity of the ejected mass \Delta m at t + \Delta t

v^*_{t+\Delta t} – the magnitude of \vec{v}^*_{t+\Delta t}

u – the magnitude of \Delta m's velocity relative to the rocket when it is ejected; it is time invariant.


From Fig. 1 (taking the rocket's direction of motion as positive, so that each vector is represented by its signed component along that direction), we have

\Delta m = m_t - m_{t + \Delta t},

\vec{v}_t = v_{t},

\vec{v}_{t + \Delta t} = v_{t + \Delta t}

and most notably, the relationship between v^*_{t+\Delta t}, v_{t+\Delta t} and u (see “A Thought Experiment on Velocities”):

 v^*_{t+\Delta t} = u - v_{t + \Delta t}.

It follows that

\vec{v}^*_{t+\Delta t} = -v^*_{t+\Delta t} = v_{t + \Delta t} - u,

momentum at time t: \vec{p}(t) = m_t \vec{v}_t = m_t v_t

and,

momentum at time t+\Delta t:

\vec{p}(t+\Delta t) = m_{t+\Delta t}\vec{v}_{t+\Delta t} + {\Delta m} \vec{v}^*_{t+\Delta t}

= m_{t+\Delta t}\vec{v}_{t+\Delta t} + (m_t - m_{t+\Delta t}) \vec{v}^*_{t+\Delta t}

= m_{t+\Delta t}v_{t+\Delta t} + (m_t - m_{t+\Delta t})(v_{t+\Delta t}-u).

Consequently, change of momentum in \Delta t is \vec{p}(t+\Delta t)- \vec{p}(t) = m_t (v_{t + \Delta t} - v_t) + u (m_{t + \Delta t} - m_t).

Apply Newton’s second law of motion to the whole system,

\vec{F}= {d \over dt} \vec{p}(t)

= \lim\limits_{\Delta t \rightarrow 0} {{\vec{p}(t+\Delta t) - \vec{p}(t)} \over \Delta t}

= \lim\limits_{\Delta t \rightarrow 0} { {m_t (v_{t + \Delta t} - v_t) + u (m_{t + \Delta t} - m_t)} \over {\Delta t} }

= \lim\limits_{\Delta t \rightarrow 0} {m_t {{v_{t + \Delta t} - v_t} \over {\Delta t}} + {u {{m_{t + \Delta t} - m_t} \over {\Delta t}}}}

= m_t \lim\limits_{\Delta t \rightarrow 0}{(v_{t+\Delta t} - v_t) \over {\Delta t}} + u \lim\limits_{\Delta t \rightarrow 0}{(m_{t +\Delta t} - m_t) \over \Delta t}

That is,

\vec{F} = m(t) {d \over dt} v(t) + u {d \over dt} m(t)

where \vec{F} is the sum of external forces acting on the system.

To get an overall picture of the rocket flight, we will neglect all external forces.

Without any external force, \vec{F} = 0. Therefore

0 = m(t) {d \over dt} v(t) + u {d \over dt} m(t)

i.e.,

{d \over dt} v(t) = -{u \over m(t)} {d \over dt} m(t)\quad\quad\quad(1)

The fact that u and m(t) in (1) are positive quantities shows that as the rocket loses mass ({d \over dt} m(t) < 0), its velocity increases ({d \over dt} v(t) > 0).

Integrate (1) with respect to t,

\int {d \over dt} v(t)\;dt = -u \int {1 \over m(t)} {d \over dt} m(t)\;dt

gives

v(t) = -u \log(m(t)) + c

where c is the constant of integration.

At t = 0, m(0) = m_1 + P, where m_1 is the initial rocket mass (liquid or solid fuel + casing and instruments, excluding payload) and P the payload mass.

It means c = u \log(m_1+P).

As a result,

v(t) = -u \log(m(t)) + u \log(m_1+P)

= -u (\log(m(t)) - \log(m_1+P))

= -u \log({m(t) \over m_1+P})

i.e.,

v(t) = -u \log({m(t) \over m_1+P})\quad\quad\quad(2)

Since m_1 is divided into two parts, the initial fuel mass \epsilon m_1 (0 < \epsilon < 1), and the casing and instruments of mass (1-\epsilon)m_1, m(0) can be written as

m(0) = \epsilon m_1 + ( 1 - \epsilon) m_1 + P

When all the fuel has burnt out at t_1,

m(t_1) = (1 - \epsilon)m_1 + P

By (2), the rocket’s final speed at t_1 is

v(t_1) = -u \log({m(t_1) \over {m_1+P}})

= -u \log({(1-\epsilon)m_1+P  \over {m_1 + P}})

= -u \log({{m_1 + P -\epsilon m_1} \over {m_1+P}})

= -u \log(1-{{\epsilon m_1} \over {m_1+P}})

= -u \log(1-{\epsilon \over {1 + {P \over m_1}}})

= -u \log(1-{\epsilon \over {1 + \beta}})

where \beta = {P \over m_1}.

In other words,

v(t_1) =-u \log(1-{\epsilon \over {1 + \beta}})\quad\quad\quad(3)

Hence, the final speed depends on three parameters

u, \epsilon and \beta

Typically,

u = 3.0\;km\;s^{-1}, \epsilon = 0.8 and \beta = 1/100.

Using these values, (3) gives


v_1 = 4.7\;km\;s^{-1}\quad\quad\quad(4)

This is an upper estimate of the typical final speed a single stage rocket can give its payload; the neglected external forces, gravity and air resistance, would only reduce it.
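As a sanity check, (3) can be evaluated numerically with the typical values quoted above (a minimal sketch; the variable names are mine):

```python
import math

# typical single stage rocket parameters quoted above
u = 3.0          # exhaust speed relative to the rocket, km/s
eps = 0.8        # fraction of m_1 that is fuel
beta = 1 / 100   # payload ratio P / m_1

# equation (3): final speed when all the fuel has burnt out
v1 = -u * math.log(1 - eps / (1 + beta))
print(round(v1, 1))   # 4.7
```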

With (4) in mind, let’s find out whether a satellite can be put into earth’s orbit as the payload of a single stage rocket.

We need to determine the speed that a satellite needs to have in order to stay in a circular orbit of height h above the earth, as illustrated in Fig. 2.

Screen Shot 2018-11-08 at 3.47.09 PM.png

Fig. 2

By Newton’s inverse square law of attraction, the gravitational pull on a satellite of mass m_{s} is

{\gamma \; m_{s} M_{\oplus} \over (R_{\oplus} + h)^2}

where the universal gravitational constant \gamma = 6.67 \times 10^{-11}\;N \cdot m^2 \cdot kg^{-2}, the earth’s mass M_{\oplus} = 5.9722 \times 10^{24}\; kg, and the earth’s radius R_{\oplus} = 6371\;km.

For a satellite to circle around the earth with a velocity of magnitude v, it must be true that

{\gamma \; m_{s} M_{\oplus} \over (R_{\oplus} + h)^2} = {m_{s} v^2 \over (R_{\oplus}+h) }

i.e.,

v = \sqrt{\gamma \; M_{\oplus} \over (R_{\oplus}+h)}

On a typical orbit, h = 100\;km above the earth’s surface,


v = 7.8\;km\cdot s^{-1}

This is far in excess of (4), the value obtained from a single stage rocket.

The implication is that a typical single stage rocket cannot serve as the launch vehicle for a satellite orbiting the earth.
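The orbital speed can be checked the same way (a sketch using the constants quoted above, in SI units):

```python
import math

gamma = 6.67e-11    # universal gravitational constant, N m^2 kg^-2
M = 5.9722e24       # earth's mass, kg
R = 6371e3          # earth's radius, m
h = 100e3           # orbit height, m

# v = sqrt(gamma * M / (R + h))
v = math.sqrt(gamma * M / (R + h))
print(round(v / 1000, 1))   # 7.8 km/s
```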

We will turn to multi-stage rockets in “Viva Rocketry! Part 2”.

 


What moves fast, will slow down, Part One


This post aims to explain mathematically how populations change.

Our first attempt is based on ideas put forward in Thomas Malthus’s article “An Essay on the Principle of Population”, published in 1798.

Let p(t) denote the total population at time t.

Assume that in a small interval \Delta t, births and deaths are each proportional to p(t) and \Delta t; i.e.,

births = a \cdot p(t) \Delta t

deaths = b \cdot p(t) \Delta t

where a, b are constants.

It follows that the change of total population during time interval \Delta t is

p(t+\Delta t) - p(t) = a\cdot p(t)\Delta t - b \cdot p(t)\Delta t = r\cdot p(t)\Delta t

where r = a - b.

Dividing by \Delta t and taking the limit as \Delta t \rightarrow 0, we have

\lim\limits_{\Delta t \rightarrow 0} {p(t+\Delta t) - p(t) \over \Delta t} = r \cdot p(t)

which is

{d \over dt} p(t) = r \cdot p(t)\quad\quad\quad(1)

a first order differential equation.

Since (1) can be written as

{1 \over p(t)} {d \over dt} p(t) = r,

integrate with respect to t; i.e.

\int {1 \over p(t)}{d \over dt} p(t)dt = \int {r} dt

leads to

\log p(t) = r\cdot t + c

where c is the constant of integration.

If at t=0, p(0) = p_0, we have

c = \log p_0

and so

p(t) = p_0 e^{r\cdot t}\quad\quad\quad(2)

The result of our first attempt shows that the behavior of the population depends on the sign of constant r. We have exponential growth if r > 0, exponential decay if r < 0 and no change if r = 0.
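This behavior is easy to see numerically: stepping (1) forward by Euler's method reproduces the closed form (2). The values p_0 = 100 and r = 0.02 below are hypothetical, chosen only for illustration:

```python
import math

p0, r = 100.0, 0.02     # hypothetical initial population and net growth rate
T, n = 50.0, 100000     # time horizon and number of Euler steps
dt = T / n

p = p0
for _ in range(n):
    p += r * p * dt     # Euler step of dp/dt = r * p, equation (1)

exact = p0 * math.exp(r * T)   # closed form (2)
assert abs(p - exact) / exact < 1e-3
```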

The world population has been on an upward trend ever since such data has been collected (see “World Population by Year“).

Qualitatively, our model (2) with r>0 reflects this trend. However, it also predicts that the world population would grow exponentially without limit. That is most unlikely to occur, since there are many limiting factors to growth: lack of food, insufficient energy, overcrowding, disease and war.

Therefore, it is doubtful that model (1) is The One.


Our second attempt modifies (1). It takes the limiting factors into consideration by replacing the constant r in (1) with a function r(t). Namely,

r(t) = \gamma - \alpha \cdot p(t)\quad\quad\quad(3)

where \gamma and \alpha are both positive constants.

Replace r in (1) with (3),

 {d \over dt} p(t) = (\gamma - \alpha \cdot p(t)) p(t) = \gamma (1 - {p(t) \over {\gamma \over \alpha}}) p(t)\quad\quad\quad(4)

Since r(t) is a monotonically decreasing function of p(t), it shows that as the population grows, growth slows down due to the limiting factors.

Let p_{\infty} = {\gamma \over \alpha},

{d \over dt} p(t) = \gamma (1- {p(t) \over p_{\infty}}) p(t)\quad\quad\quad(5)

This is the Logistic Differential Equation.

Written differently as

{d \over dt} p(t) - \gamma \cdot p(t) = -{\gamma \over p_{\infty}} p(t)^2,

the Logistic Differential Equation is also a Bernoulli’s equation (see “Meeting Mr. Bernoulli“)

Let’s understand (5) geometrically without solving it.

Two constant functions, p(t) = 0 and p(t) = p_{\infty}, are solutions of (5), since

{d \over dt} 0 = \gamma (1-{0\over p_{\infty}})  0 = 0

and

{d \over dt} p_{\infty} = \gamma (1-{p_{\infty} \over {p_{\infty}}}) p_{\infty} = 0.

Plotting {d \over dt} p(t) against p(t) in Fig. 1, the points 0 and p_{\infty} are where the curve of {d \over dt} p(t) intersects the p(t) axis.


Fig. 1

At point A where p(t) > p_{\infty}, since {d \over dt} p(t) < 0, p(t) will decrease; i.e., A moves left toward p_{\infty}.

Similarly, at point B where p(t)  < p_{\infty}, {d \over dt} p(t) > 0 implies that p(t) will increase and B moves right toward p_{\infty}.

The model equation can also tell the manner in which p(t) approaches p_{\infty}.

Let p = p(t),

{d^2 \over dt^2} p(t) = {d \over dt}({d \over dt} p)

= {d \over dp} ({d \over dt}p) \cdot {d \over dt} p

= {d \over d p}(\gamma (1-{p \over p_{\infty}})p)\cdot {d \over dt }p

= \gamma(1 - {2 p\over p_{\infty}})\cdot \gamma (1-{p \over p_{\infty}})p

= \gamma^2 p ({{2 p} \over p_{\infty}} -1)({p \over p_{\infty}}-1)

As an equation with unknown p,  \gamma^2 p ({{2 p} \over p_{\infty}} -1)({p \over p_{\infty}}-1)=0 has three zeros:

0, {p_{\infty} \over 2} and p_{\infty}.

Therefore,

{d^2 \over dt^2}p > 0 if p > p_{\infty},

{d^2 \over dt^2} p < 0 if {p_{\infty} \over 2} < p < p_{\infty}

and

{d^2 \over dt^2} p > 0 if p < {p_{\infty} \over 2}.

Consequently p(t), the solution of initial-value problem

\begin{cases} {d \over dt} p(t) = \gamma (1-{p(t) \over p_{\infty}}) p(t) \\  p(0)=p_0 \end{cases}\quad\quad(6)

where p_0 \neq 0, p_{\infty}, behaves in the manner illustrated in Fig. 2.


Fig. 2

If p_0 > p_{\infty}, p(t) approaches p_{\infty} along a curve that is concave up ({d^2 \over dt^2} p > 0). When {p_{\infty} \over 2} \leq p_0 < p_{\infty}, p(t) moves along a concave-down curve. For p_0 < {p_{\infty} \over 2}, the curve is concave up at first; it turns concave down after p(t) reaches {p_{\infty} \over 2}.


Next, let’s solve the initial-value problem analytically for p_0 \neq 0, p_{\infty}.

Instead of using the result from “Meeting Mr. Bernoulli“, we will start from scratch.

At t where p(t) \neq 0, p_{\infty},  we re-write (5) as

{1 \over p(t)(1-{p(t) \over p_{\infty}}) }{d \over dt} p(t) = \gamma.

Expressed in partial fraction,

({1 \over p(t)} + {{1 \over p_{\infty}} \over {1-{p(t) \over p_{\infty}}}}) {d \over dt} p(t) = \gamma.

Integrate it with respect to t,

\int ({1 \over p(t)} + {{1 \over p_{\infty}} \over {1-{p(t) \over p_{\infty}}}}) {d \over dt} p(t) dt = \int \gamma dt

gives

\log p(t)  - \log (1-{p(t) \over p_{\infty}}) = \gamma t + c

where c is the constant of integration.

i.e.,

\log {p(t) \over {1-{p(t) \over p_{\infty}}}} = \gamma t + c.

Since p(0) = p_0, we have

c = {\log {p_{0} \over {1-{p_0 \over p_{\infty}}}}}

and so

\log ({{p(t) \over {1-{p(t) \over p_{\infty}}}} \cdot {{1-{p_0 \over p_{\infty}}}\over p_0}} )=\gamma t.

Hence,

{{p(t) \over {1 - {p(t) \over p_\infty}}}= {{p_0 \cdot e^{\gamma t}} \over {1-{p_0 \over p_\infty}}}}.

Solving for p(t) gives

p(t) = { p_{\infty} \over {1+({p_{\infty} \over p_0}-1)e^{-\gamma \cdot t}}}\quad\quad\quad(7)
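A quick numerical check confirms that (7) satisfies both the differential equation (5) and the initial condition (the parameter values below are hypothetical, chosen only for illustration):

```python
import math

gamma, p_inf, p0 = 0.5, 1000.0, 100.0   # hypothetical parameters

def p(t):
    # closed-form solution (7)
    return p_inf / (1 + (p_inf / p0 - 1) * math.exp(-gamma * t))

eps = 1e-6
for t in [0.0, 1.0, 5.0, 20.0]:
    dp = (p(t + eps) - p(t - eps)) / (2 * eps)   # numerical dp/dt
    rhs = gamma * (1 - p(t) / p_inf) * p(t)      # right-hand side of (5)
    assert abs(dp - rhs) < 1e-3

assert abs(p(0) - p0) < 1e-9   # initial condition
```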

We proceed to show that (7) expresses the value of p(t), the solution to (6) where p_0 \neq 0, p_{\infty}, for all t (see Fig. 3).


Fig. 3

From (7), we have

 \lim\limits_{t \rightarrow \infty} p(t) = p_{\infty}.

It validates Fig. 1.

(7) also indicates that none of the curves in Fig. 2 touches the horizontal line p(t) = p_{\infty}.

If this were not the case, there would exist at least one value of t where p(t) = p_{\infty}; i.e.,

 {p_{\infty} \over {1+({p_{\infty} \over p_0}-1)e^{-\gamma \cdot t}}} = p_{\infty}.

It follows that

{({p_{\infty} \over {p_0}} - 1) e^{-\gamma t}} = 0

Since {e^{-\gamma t}} > 0 (see “Two Peas in a Pod, Part 2“), it must be true that

p_0 = p_{\infty}.

But this contradicts the assumption that p_0 \neq 0, p_{\infty}.

Reflected in Fig. 1, A and B never reach p_{\infty}; they only move ever closer to it.

Last, but not least,

{\lim \limits_{t \rightarrow \infty}} {d \over dt} p(t) =  \gamma (1-{{ \lim\limits_{t \rightarrow \infty} p(t)} \over p_{\infty}}) {\lim\limits_{t \rightarrow \infty} p(t)} = \gamma (1 - {p_{\infty} \over p_{\infty}}) p_{\infty} = 0.

Hence the title of this post.

You say, “y” I say, “y[x]”


 

You see things; and you say “Why?”

But I dream things that never were; and I say “Why not?”

George Bernard Shaw in Back to Methuselah

 

The Wolfram Language functions DSolve and NDSolve solve differential equations symbolically and numerically, respectively.

Let’s look at a few examples.

Example 1 Solving an ODE symbolically. The solution, a function, is evaluated at a given point.


Example 2 Solving an ODE symbolically. Redefine a function and evaluate it at a given point.


Example 3 Solving an ODE initial-value problem symbolically. Get the value at a given point from the symbolic solution.


Example 4 Solving an ODE initial-value problem numerically. Get the value at a given point from the numerical solution.


Regarding whether to specify ‘y‘ or ‘y[x]‘ in DSolve, the only decent explanation I can find is in Stephen Wolfram’s book “The Mathematica Book”. This is straight from the horse’s mouth:

When you ask DSolve to get you a solution for y[x], the rules it returns specify how to replace y[x] in any expression. However, these rules do not specify how to replace objects such as y'[x]. If you want to manipulate solutions that you get from DSolve, you will often find it better to ask for solutions for y, rather than y[x].

He then proceeds to give an illustration with DSolve[y'[x]==x+y[x], y, x]: because the rule returned is for y itself, both y[x] and y'[x] in an expression get replaced by the solution.

Had you started with DSolve[y'[x]==x+y[x], y[x], x], the result would be a rule for y[x] alone. As expected, only y[x], not y'[x], is replaced.

Meeting Mr. Bernoulli


The differential equation

{d \over dx} y + f(x) y = g(x) y^{\alpha}\quad\quad\quad(1)

where \alpha \neq 0, 1 and g(x) \not \equiv 0, is known as Bernoulli’s equation.

When \alpha > 0, (1) has the trivial solution y(x) \equiv 0.

To obtain a nontrivial solution, we divide each term of (1) by y^{\alpha} to get

\boxed{y^{-\alpha}{d \over dx}y} + f(x) y^{1-\alpha} = g(x)\quad\quad\quad(2)

Since  {d \over {dx}}({{1 \over {1-\alpha}}y^{1-\alpha}}) ={1 \over {1-\alpha}}\cdot (1-\alpha) y^{1-\alpha-1}{d \over dx}y=\boxed{y^{-\alpha}{d \over dx}y}

(2) can be expressed as

{d \over dx} ({{1 \over {1-\alpha}} y^{1-\alpha}}) + f(x) y^{1-\alpha} = g(x)

which is

{{1 \over {1-\alpha}} {d \over dx} y^{1-\alpha}} + f(x) y^{1-\alpha} = g(x) .

Multiplying by 1-\alpha throughout,

{d \over dx} y^{1-\alpha} + (1-\alpha) f(x) y^{1-\alpha} = (1-\alpha) g(x)\quad\quad\quad(3)

Letting z = y^{1-\alpha}, (3) is transformed to a first order linear equation

{d \over dx} z + (1-\alpha) f(x) z = (1-\alpha) g(x),

giving the general solution of a Bernoulli’s equation (see Fig. 1)


Fig. 1

For a concrete example of Bernoulli’s equation, see “What moves fast, will slow down“.
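As a small numeric illustration, take the hypothetical instance of (1) with f(x)=1, g(x)=1 and \alpha=2, i.e. y' + y = y^2. The substitution z = y^{-1} turns it into the linear equation z' - z = -1, solved by z = 1 + c\,e^x; choosing c = 1 gives the candidate solution y = 1/(1+e^x), which we can check against the original equation:

```python
import math

def y(x):
    # candidate solution of y' + y = y^2 (Bernoulli with alpha = 2),
    # obtained via z = y^(-1) = 1 + e^x (integration constant chosen as 1)
    return 1 / (1 + math.exp(x))

eps = 1e-6
for x in [-2.0, 0.0, 1.5]:
    dy = (y(x + eps) - y(x - eps)) / (2 * eps)   # numerical y'
    assert abs(dy + y(x) - y(x) ** 2) < 1e-6     # y' + y = y^2 holds
```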

Pandora’s Box


Summations arise regularly in mathematical analysis. For example,

\sum\limits_{i=1}^{n}{1 \over {i (i+1)}} = {n \over {n+1}}

Having a simple closed form expression such as {n \over {n+1}} makes the summation easier to understand and evaluate.

The summation we focus on in this post is

\sum\limits_{i=1}^{n}i 2^i\quad\quad\quad(1)

We will find a closed form for it.

In a recent post, I derived the closed form of a simpler summation (see “Beer theorems and their proofs“). Namely,

\sum\limits_{i=0}^{n}x^i={{x^{n+1}-1} \over {x-1}}\quad\quad\quad(2)

From (2) it follows that

{d \over {dx}}{\sum\limits_{i=0}^{n}x^i} = {d \over {dx}}({ {x^{n+1}-1} \over {x-1} })

which gives us

{\sum\limits_{i=0}^{n}{{d \over dx}x^i}}={{(n+1)x^{n}(x-1)-(x^{n+1}-1)} \over {(x-1)^2}}.

Or,

{\sum\limits_{i=0}^{n}{i x^{i-1}}}={{\sum\limits_{i=0}^{n}{i x^{i}}} \over {x}} = {{\sum\limits_{i=1}^{n}{i x^{i}}} \over {x}} = {{(n+1)x^{n}(x-1)-(x^{n+1}-1)} \over {(x-1)^2}}.

Therefore,

{\sum\limits_{i=1}^{n}{i x^{i}}}={{(n+1)x^{n+1}(x-1)-x^{n+2}+x} \over {(x-1)^2}}.

Letting x=2, we arrive at (1)’s closed form:

{\sum\limits_{i=1}^{n}i 2^i} = {{(n+1)2^{n+1} -2 ^{n+2} + 2} \over {2-1}} = 2^{n+1} (n-1) + 2.
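The closed form is easy to verify for small n (a quick brute-force check):

```python
def lhs(n):
    # the summation (1): sum of i * 2^i for i = 1 .. n
    return sum(i * 2 ** i for i in range(1, n + 1))

def rhs(n):
    # its closed form: 2^(n+1) * (n - 1) + 2
    return 2 ** (n + 1) * (n - 1) + 2

assert all(lhs(n) == rhs(n) for n in range(1, 30))
```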


I have a Computer Algebra aided solution too.

Let s_n \triangleq \sum\limits_{i=1}^{n} i x^i,

we have

s_1 = x,  s_{n}-s_{n-1}=n x^n

Therefore, the closed form of s_n is the solution of initial-value problem

\begin{cases} {s_{n}-s_{n-1} }= {n x^n} \\ s_1=x\end{cases}

It is solved by Omega CAS Explorer (see Fig. 1)


Fig. 1

At ACA 2017 in Jerusalem, I gave a talk on “Generating Power Summation Formulas using a Computer Algebra System“.

I had a dream that night. In the dream, I was taking a test.

It reads:

Derive the closed form for

\sum\limits_{i=1}^{n} {1 \over {(3i-2)(3i+1)}}

\sum\limits_{i=1}^{n} {1 \over {(2i+1)^2-1}}

\sum\limits_{i=1}^{n} {i \over {(4i^2-1)^2}}

\sum\limits_{i=1}^{n} {{i^2 4^i} \over {(i+1)(i+2)}}

\sum\limits_{i=1}^{n} { i \cdot i!}

I woke up with a sweat.

My shot at Harmonic Series


To prove Beer Theorem 2 (see “Beer theorems and their proofs“) is to show that the Harmonic Series 1 + {1 \over 2} + {1 \over 3} + ... diverges.

Below is my shot at it.

Yaser S. Abu-Mostafa proved a theorem in an article titled “A differentiation test for absolute convergence” (see Mathematics Magazine 57(4), 228-231)

His theorem states that

Let f be a real function such that {d^2 f \over dx^2} exists at x = 0. Then \sum\limits_{n=1}^{\infty} f({1 \over n}) converges absolutely if and only if f(0) = f'(0)=0.

Letting f(x) = x, we have

\sum\limits_{n=1}^{\infty}f({1 \over n}) = \sum\limits_{n=1}^{\infty}{1 \over n},

the Harmonic Series. And,

f'(x) = {d \over dx} x = 1 \implies f'(0) \neq 0.

Therefore, by Abu-Mostafa’s theorem, the Harmonic Series diverges.
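Independently of Abu-Mostafa's theorem, the divergence can be glimpsed numerically via the classic grouping bound H_{2^k} \geq 1 + {k \over 2}, which shows the partial sums exceed any bound (a sketch, not part of the proof above):

```python
def H(n):
    # n-th partial sum of the Harmonic Series
    return sum(1 / i for i in range(1, n + 1))

# classic grouping bound: H(2^k) >= 1 + k/2, so the partial sums are unbounded
for k in range(15):
    assert H(2 ** k) >= 1 + k / 2
```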

Beer theorems and their proofs

Beer Theorem 1.

An infinite crowd of mathematicians enters a bar.

The first one orders a pint, the second one a half pint, the third one a quarter pint…

“Got it”, says the bartender – and pours two pints.

Proof.

Let s_n = \sum\limits_{i=1}^{n} a \cdot r^{i-1} = a + a\cdot r + a \cdot r^{2} + ...+ a\cdot r^{n-2} + a \cdot r^{n-1}.

Then r\cdot s_{n} = \sum\limits_{i=1}^{n} a\cdot r^{i} = a\cdot r  + a\cdot r^2+ ... + a\cdot r^{n-1} + a\cdot r^{n}

\implies s_{n}-r\cdot s_{n} = a  - a\cdot r^{n}.

Therefore,

s_{n} = {{a\cdot(1-r^{n})} \over {1-r}} .

When a = 1, r={{1} \over {2}},

s_{n} = \frac{1 \cdot (1-({1 \over 2})^n)}{1-{1 \over 2}} = 2\cdot (1-({1 \over 2})^n)

i.e.,

1+ {1 \over 2} + {1 \over 4} + {1 \over 8}+...+({1 \over 2})^{n-1}= 2\cdot (1-({1 \over 2})^n)

\implies \lim\limits_{n \rightarrow \infty} s_{n} = \lim\limits_{n \rightarrow \infty} {2\cdot (1-({1 \over 2})^n)} = 2.
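The limit can be watched numerically (a quick sketch of the partial sums):

```python
def s(n):
    # partial sum 1 + 1/2 + 1/4 + ... + (1/2)^(n-1)
    return sum((1 / 2) ** (i - 1) for i in range(1, n + 1))

# partial sums agree with the closed form 2 * (1 - (1/2)^n) and approach 2
assert abs(s(50) - 2 * (1 - 0.5 ** 50)) < 1e-12
assert abs(s(50) - 2) < 1e-9
```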

There is also a proof without words: the terms 1, {1 \over 2}, {1 \over 4}, \ldots can be drawn as rectangles that exactly fill a region of area 2.

Beer Theorem 2.

An infinite crowd of mathematicians enters a bar.

The first one orders a pint, the second one a half pint, the third one a third of pint…

“Get out of here! Are you trying to ruin me?”, bellows the bartender.

Proof.

See “My shot at Harmonic Series“.