Playing “Feynman’s Trick” on Indefinite Integrals – Tongue in Cheek

“Differentiation under the integral sign”, a.k.a., “Feynman’s trick” is a clever application of Leibniz’s rule (LR-1):

Let f(x, \beta) be continuous and have a continuous partial derivative \frac{\partial f}{\partial \beta} in a domain of the x\beta-plane that includes the rectangle a \le x \le b, \beta_1 \le \beta \le \beta_2. Then

\frac{d}{d\beta}\int\limits_{a}^{b}f(x, \beta)\;dx =\int\limits_{a}^{b}\frac{\partial}{\partial \beta}f(x, \beta)\;dx.

“Feynman’s trick” is known to be an effective technique for evaluating difficult definite integrals such as \int\limits_{0}^{1}\frac{x-1}{\log(x)}\;dx.

Is Feynman’s “trick” applicable to indefinite integrals too?

In other words, is it also true that

\frac{\partial}{\partial \beta}\int f(x, \beta)\;dx + C = \int \frac{\partial}{\partial \beta}f(x, \beta)\;dx?\quad\quad\quad(\star)

Let’s apply (\star) to evaluate the indefinite integral \int \log(x)\;dx:

\frac{\partial}{\partial \beta}\int x^{\beta}\;dx+C = \int \frac{\partial}{\partial \beta}x^{\beta}\;dx = \int x^{\beta}\log(x)\;dx;


\int x^{\beta}\log(x)\;dx =\frac{\partial}{\partial \beta}\int x^{\beta}\;dx+C.\quad\quad\quad(1)

Since \int x^{\beta}\; dx = \frac{x^{\beta+1}}{\beta+1} + C_1, the right-hand side of (1) is

\frac{\partial}{\partial \beta}\left(\frac{x^{\beta+1}}{\beta+1}+C_1\right) + C= \frac{x^{\beta+1}\log(x)\cdot (\beta+1) - x^{\beta+1}}{(\beta+1)^2}+C.

It means

\int x^{\beta}\log(x)\;dx = \frac{x^{\beta+1}\log(x)\cdot (\beta+1) - x^{\beta+1}}{(\beta+1)^2}+C.

For \beta = 0, we have

\int \log(x)\;dx = x\log(x)-x+C.

It checks out:

\frac{d}{dx}(x\log(x)-x+C) = \log(x)+x \cdot \frac{1}{x}-1 = \log(x).
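
Identity (1) and its \beta = 0 special case can also be cross-checked mechanically; here is an illustrative sympy sketch (not part of the original derivation):

```python
import sympy as sp

x, b = sp.symbols('x beta', positive=True)

# Right-hand side of (1): differentiate x^(beta+1)/(beta+1) with respect to beta
F = sp.diff(x**(b + 1)/(b + 1), b)

# It matches the closed form stated in the text
assert sp.simplify(F - (x**(b + 1)*sp.log(x)*(b + 1) - x**(b + 1))/(b + 1)**2) == 0

# Differentiating with respect to x recovers the integrand x^beta * log(x)
assert sp.simplify(sp.diff(F, x) - x**b*sp.log(x)) == 0

# beta = 0 gives the antiderivative of log(x), namely x*log(x) - x
assert sp.simplify(F.subs(b, 0) - (x*sp.log(x) - x)) == 0
```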

Let’s also evaluate \int x e^{x}\;dx:

By (\star),

\frac{\partial}{\partial \beta}\int e^{\beta x}\;dx + C  = \int\frac{\partial}{\partial \beta} e^{\beta x}\;dx=\int x e^{\beta x}\;dx.

That is,

\int x e^{\beta x}\;dx = \frac{\partial}{\partial \beta}\int e^{\beta x}\; dx+C= \frac{\partial}{\partial \beta}\left(\frac{1}{\beta}e^{\beta x} + C_1\right)+C=\frac{e^{\beta x}\beta x - e^{\beta x}}{\beta^2}+C.\quad\quad(2)

Letting \beta = 1, (2) yields

\int x e^{x} \;dx= e^x (x-1)+C.

It checks out too:

\frac{d}{dx} (e^x (x-1)+C) = e^x(x-1) + e^x = x e^x.
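
Formula (2) admits the same kind of sympy cross-check (again only a sketch):

```python
import sympy as sp

x, b = sp.symbols('x beta', positive=True)

# Right-hand side of (2): differentiate e^(beta*x)/beta with respect to beta
F = sp.diff(sp.exp(b*x)/b, b)

# Differentiating with respect to x recovers the integrand x*e^(beta*x)
assert sp.simplify(sp.diff(F, x) - x*sp.exp(b*x)) == 0

# beta = 1 gives the antiderivative of x*e^x, namely e^x*(x - 1)
assert sp.simplify(F.subs(b, 1) - sp.exp(x)*(x - 1)) == 0
```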


Now that we have gained confidence in the validity of (\star), let’s prove it.


For

G_1(x) = \int g(x)\;dx,\quad G_2(x) = \int\limits_{a}^{x} g(x)\; dx

where g(x) is a function of x, we have,

(G_1(x)-G_2(x))' = (\int g(x)\;dx)' -  (\int\limits_{a}^{x}  g(x) \;dx)'= g(x)-g(x) = 0.

It means

G_1(x)-G_2(x)=C\implies \int g(x)\;dx= \int\limits_{a}^{x}g(x)\;dx + C.

When x=b,

\int g(x)\;dx = \int\limits_{a}^{b}g(x)\;dx+C;

i.e., for f(x,\beta), a function of both x and \beta,

\int f(x,\beta)\;dx = \int\limits_{a}^{b} f(x, \beta)\;dx+C\quad\quad\quad(3)


and

\int \frac{\partial}{\partial \beta} f(x,\beta)\;dx = \int\limits_{a}^{b} \frac{\partial}{\partial \beta}f(x, \beta)\;dx+C.\quad\quad\quad(4)

It follows that

\frac{\partial}{\partial \beta}\int f(x,\beta)\;dx \overset{(3)}{=} \frac{\partial}{\partial \beta}\left(\int\limits_{a}^{b}f(x,\beta)\;dx+C\right)

=\frac{\partial}{\partial \beta}\int\limits_{a}^{b}f(x,\beta)\;dx

\overset{\textbf{LR-1}}{=} \int\limits_{a}^{b}\frac{\partial}{\partial \beta}f(x,\beta)\;dx

\overset{(4)}{=} \int\frac{\partial }{\partial \beta}f(x,\beta)\;dx -C.

And so,

\frac{\partial}{\partial \beta}\int f(x,\beta)\;dx +C= \int \frac{\partial}{\partial \beta} f(x,\beta)\;dx.

Exercise-1 Evaluate \int x^2 e^x\;dx.

hint: \frac{\partial}{\partial \beta}\int x e^{\beta x}\;dx = \int x^2 e^{\beta x}\; dx.
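
The hint can be carried out mechanically with sympy (a sketch following the hint’s steps: differentiate \frac{1}{\beta}e^{\beta x} twice with respect to \beta, then set \beta=1):

```python
import sympy as sp

x, b = sp.symbols('x beta', positive=True)

# Differentiate the antiderivative e^(beta*x)/beta twice with respect to beta
F = sp.diff(sp.exp(b*x)/b, b, 2)

# By the hint, F is an antiderivative of x^2 * e^(beta*x)
assert sp.simplify(sp.diff(F, x) - x**2*sp.exp(b*x)) == 0

# Setting beta = 1 yields an antiderivative of x^2 * e^x
assert sp.simplify(F.subs(b, 1) - (x**2 - 2*x + 2)*sp.exp(x)) == 0
```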


Problem Given

f(x) = e^x + \int\limits_{0}^{x} (t-x)f(t)\;dt\quad\quad\quad(\star)

where f(x) is a continuous function, find f(x).


From (\star), we see that

f(0) = 1;

f(x) = e^x + \int\limits_{0}^{x} \left(t\cdot f(t) - x\cdot f(t)\right) \;dt = e^x + \int\limits_{0}^{x} t\cdot f(t)\;dt-x\cdot \int\limits_{0}^{x}f(t)\;dt.

And so,

\frac{df(x)}{dx}=\frac{de^x}{dx} + \frac{d}{dx}\int\limits_{0}^{x}tf(t)\;dt - \frac{d}{dx}\left(x\cdot \int\limits_{0}^{x}f(t)\;dt\right)

=e^x+\frac{d}{dx}\int\limits_{0}^{x}tf(t)\;dt-\left(\int\limits_{0}^{x}f(t)\;dt + x\frac{d}{dx}\int\limits_{0}^{x}f(t)\;dt\right)

\overset{\textbf{FTC}}{=}e^x + xf(x) -\int\limits_{0}^{x} f(t)\;dt - xf(x)

= e^x - \int\limits_{0}^{x}f(t)\;dt.

That is,

\frac{d}{dx}f(x)= e^x - \int\limits_{0}^{x}f(t)\;dt\implies f'(0) = 1.

It follows that



\begin{cases} f''(x)=e^x-f(x) \\f(0)=1\\f'(0)=1 \end{cases}\quad\quad\quad(\star\star)


Solving (\star\star) gives

f(x) = \frac{1}{2}(\sin(x)+\cos(x)+e^x).

Fig. 1
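
The initial-value problem (\star\star) can also be handed to sympy’s dsolve; this is a cross-check of the stated solution, not the author’s Omega CAS Explorer session:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

# Initial-value problem (**): f'' = e^x - f, f(0) = 1, f'(0) = 1
sol = sp.dsolve(
    sp.Eq(f(x).diff(x, 2), sp.exp(x) - f(x)),
    f(x),
    ics={f(0): 1, f(x).diff(x).subs(x, 0): 1},
)

# Matches f(x) = (sin(x) + cos(x) + e^x)/2
assert sp.simplify(sol.rhs - (sp.sin(x) + sp.cos(x) + sp.exp(x))/2) == 0
```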

Notice that the derivation of (\star\star) can be simplified if the generalized Leibniz’s Rule (GLR, see “Deriving Generalized Leibniz’s Integral Rule”) is applied:

\frac{df(x)}{dx} = e^x + \underline{\frac{d}{dx}\int\limits_{0}^{x}(t-x)f(t)\;dt}

\overset{\textbf{GLR}}{=} e^x + \underline{(x-x)f(x)\cdot x'-(0-x)f(0)\cdot 0' + \int\limits_{0}^{x}\frac{\partial}{\partial x}(t-x)f(t)\;dt}

=  e^x + \underline{\int\limits_{0}^{x}\frac{\partial}{\partial x}(t-x)f(t)\;dt}

=e^x+\int\limits_{0}^{x}-1\cdot f(t)\;dt

= e^x-\int\limits_{0}^{x}f(t)\;dt

\implies \frac{d^2f(x)}{dx^2}=e^x-\frac{d}{dx}\int\limits_{0}^{x}f(t)\;dt\overset{\textbf{FTC}}{=}e^x-f(x).

Fig. 2 shows that Omega CAS Explorer’s Maxima engine is both FTC and GLR aware:

Fig. 2

Exercise-1 Given:

f(x) = \int\limits_{0}^{x}t\cdot f(x-t)\;dt+\sin(x)

where f(x) is a continuous function, find f(x).

hint: Let u=x-t, t = x-u; t=0\implies u=x; t=x\implies u=0; \frac{du}{dt}=-1;

f(x) = \int\limits_{x}^{0}(x-u)\cdot f(u)\cdot (-1)\;du + \sin(x)=\int\limits_{0}^{x}(x-u)f(u)\;du+\sin(x).

FTC saves the day!

Problem-1 Given

f(x) = \int\limits_{0}^{2x}f(\frac{t}{2})\;dt +\log(2)\quad\quad\quad(\star)

where f(x) is a continuous function, find f(x).



Let

p=2x \implies \frac{dp}{dx} = 2.\quad\quad\quad(1-1)

By (\star),

\frac{df(x)}{dx} =\frac{d}{dx} \int\limits_{0}^{p} f(\frac{t}{2})\;dt + \frac{d\log(2)}{dx} = \underline{\frac{d}{dp}\left(\int\limits_{0}^{p} f(\frac{t}{2})\;dt\right)} \cdot \frac{dp}{dx}\overset{\textbf{FTC}}{=}\underline{f(\frac{p}{2})}\cdot \frac{dp}{dx}\overset{(1-1)}{=}2f(x),


i.e.,

\frac{df(x)}{dx} = 2f(x).\quad\quad\quad(1-2)

Moreover, we see from (\star) that

f(0) = \int\limits_{0}^{0}f(\frac{t}{2})\;dt + \log(2) = 0 + \log(2) = \log(2).\quad\quad\quad(1-3)

Solving the initial-value problem

\begin{cases} \frac{df(x)}{dx} = 2f(x)\\ f(0)=\log(2)\end{cases}

gives

f(x) = \log(2)\cdot e^{2x}.

We use Omega CAS Explorer to verify:

Fig. 1-1
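
For readers without Omega CAS Explorer at hand, a sympy sketch of the same check:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

# f' = 2 f with f(0) = log(2)
sol = sp.dsolve(sp.Eq(f(x).diff(x), 2*f(x)), f(x), ics={f(0): sp.log(2)})

# Matches f(x) = log(2)*e^(2x)
assert sp.simplify(sol.rhs - sp.log(2)*sp.exp(2*x)) == 0
```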

Problem-2 Given

\int\limits_{0}^{1}f(u\cdot x) \;du = \frac{1}{2} f(x) +1\quad\quad\quad(\star\star)

where f(x) is a continuous function, find f(x).


Let p=u\cdot x,

u=\frac{p}{x} \implies \frac{du}{dp} = \frac{1}{x}\quad\quad\quad(2-1)

u=0\implies p=0; u=1\implies p=x.\quad\quad\quad(2-2)

\int\limits_{0}^{1}f(u\cdot x)\;du\overset{(2-2)}{=} \int\limits_{0}^{x}f(p)\cdot\frac{du}{dp}\cdot dp\overset{(2-1)}{=}\int\limits_{0}^{x}f(p)\frac{1}{x}\;dp=\frac{1}{x}\int\limits_{0}^{x}f(p)\;dp.\quad\quad\quad(2-3)

By (2-3), we express (\star\star) as

\frac{1}{x}\int\limits_{0}^{x}f(p)\;dp = \frac{1}{2}f(x)+1,


i.e.,

\int\limits_{0}^{x} f(p)\;dp = \frac{x}{2}f(x)+x.

It follows that

\underline{\frac{d}{dx}\left(\int\limits_{0}^{x}f(p)\;dp\right)}=\frac{d}{dx}\left(\frac{x}{2}f(x)+x\right)\overset{\textbf{FTC}}{\implies}\underline{f(x)}=\frac{1}{2}\left(f(x) + x\frac{d f(x)}{dx}\right)+1.\;(2-4)

Solving differential equation (2-4) (see Fig. 2-1) gives

f(x) = c x + 2.

Fig. 2-1

The solution is verified by Omega CAS Explorer:

Fig. 2-2
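
A sympy cross-check of (2-4) and its solution (a sketch; Fig. 2-1 shows the original Maxima session):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

# Differential equation (2-4): f = (f + x f')/2 + 1
sol = sp.dsolve(sp.Eq(f(x), (f(x) + x*f(x).diff(x))/2 + 1), f(x))

# The general solution has the form c*x + 2; verify it satisfies (2-4)
rhs = sol.rhs
assert sp.simplify(rhs - (rhs + x*sp.diff(rhs, x))/2 - 1) == 0
```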

Exercise-1 Solve \begin{cases} \frac{df(x)}{dx} = 2f(x)\\ f(0)=\log(2)\end{cases} using a CAS.

Exercise-2 Solve (2-4) without using a CAS.

An Epilogue to “Truth vs. Intellect”

This post illustrates an alternative way of computing the approximate value of \pi.

We begin with a circle of radius r, and let L_{n}, L_{n+1} denote the side lengths of the regular polygons inscribed in the circle with 2^n and 2^{n+1} sides respectively, n=2, 3, 4, ....

Fig. 1

On one hand, we see the area of \Delta ABC as

\frac{1}{2}\cdot AB\cdot BC = \frac{1}{2}\cdot AB\cdot L_{n+1}.

On the other hand, it is also

\frac{1}{2}\cdot AC\cdot BE = \frac{1}{2}\cdot 2r\cdot \frac{L_n}{2}=\frac{1}{2}\cdot r\cdot L_n.


Therefore,

\frac{1}{2}AB\cdot L_{n+1}= \frac{1}{2}r\cdot L_n.


Squaring both sides,

AB^2\cdot L_{n+1}^2 = r^2\cdot L_n^2\quad\quad\quad(1)

where by Pythagorean theorem,

AB^2= (2r)^2 - L_{n+1}^2.\quad\quad\quad(2)

Substituting (2) into (1) gives

(4r^2-L_{n+1}^2)L_{n+1}^2 = r^2 L_n^2\implies 4r^2L_{n+1}^2 - L_{n+1}^4 = r^2 L_n^2.

That is,

L_{n+1}^4-4r^2L_{n+1}^2+r^2 L_n^2 = 0.

Letting p = L_{n+1}^2, we have

p^2-4r^2 p + r^2 L_n^2=0.\quad\quad\quad(3)

Solving (3) for p yields

p = 2r^2 \pm r \sqrt{4 r^2-L_n^2}.

Since L_n^2 must be greater than L_{n+1}^2 (see Exercise 1), it must be true (see Exercise 2) that

L_{n+1}^2=2r^2 - r \sqrt{4r^2-L_n^2}.\quad\quad\quad(4)

Notice when r=\frac{1}{2}, we obtain (5) in “Truth vs. Intellect“.

With increasing n,

L_n\cdot 2^n \approx \pi\cdot 2r \implies \pi \approx \frac{L_n 2^n}{2r}.\quad\quad\quad

We can now compute the approximate value of \pi from any circle with radius r:

Fig. 2 r=2

Fig. 3 r=\frac{1}{8}
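
The sessions behind Fig. 2 and Fig. 3 can be reproduced in plain Python; this is a floating-point sketch, and `approx_pi` and the choice of ten doublings are this sketch’s, not the original CAS code:

```python
import math

def approx_pi(r, doublings=10):
    # Start from the inscribed square: side L_2 = r*sqrt(2), so L_2^2 = 2 r^2
    L2, n = 2*r*r, 2
    for _ in range(doublings):
        # Recurrence (4): L_{n+1}^2 = 2 r^2 - r*sqrt(4 r^2 - L_n^2)
        L2 = 2*r*r - r*math.sqrt(4*r*r - L2)
        n += 1
    # Perimeter of the 2^n-gon divided by the diameter approximates pi
    return 2**n * math.sqrt(L2) / (2*r)

print(approx_pi(2))      # ≈ 3.141592 (cf. Fig. 2)
print(approx_pi(1/8))    # ≈ 3.141592 (cf. Fig. 3)
```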

Exercise 1 Explain L_{n}^2 > L_{n+1}^2 geometrically.

Exercise 2 Show it is 2r^2-r\sqrt{4r^2-L_n^2} that represents L_{n+1}^2.

Truth vs. Intellect

It was known long ago that \pi, the ratio of the circumference to the diameter of a circle, is a constant. Nearly all people of the ancient world used number 3 for \pi. As an approximation obtained through physical measurements with limited accuracy, it is sufficient for everyday needs.

An ancient Chinese text (周髀算经,100 BC) stated that for a circle with unit diameter, the ratio is 3.

In the Bible, we find the following description of a large vessel in the courtyard of King Solomon’s temple:

He made the Sea of cast metal, circular in shape, measuring ten cubits from rim to rim and five cubits high. It took a line of thirty cubits to measure around it. (1 Kings 7:23, New International Version)

This implies a value of \pi = \frac{30}{10} = 3.

It is fairly obvious that a regular polygon with many sides is approximately a circle. Its perimeter is approximately the circumference of the circle. The more sides the polygon has, the more accurate the approximation.

To find an accurate approximation for \pi, we inscribe regular polygons in a circle of diameter 1. Let L_{n}, L_{n+1} denote the side lengths of the regular polygons with 2^n and 2^{n+1} sides respectively, n=2, 3, 4, ....

Fig. 1

From Fig. 1, we have

\begin{cases} L_{n+1}^2 = x^2 + (\frac{1}{2} L_n)^2\quad\quad\quad(1) \\ (\frac{1}{2})^2 = (\frac{1}{2}L_n)^2 + y^2\quad\quad\;\quad(2)\\ x+y = \frac{1}{2}\;\quad\quad\quad\quad\quad\quad(3) \end{cases}

It follows that

y\overset{(2)}{=}\sqrt{(\frac{1}{2})^2-(\frac{1}{2} L_n)^2} \overset{(3)}{ \implies} x=\frac{1}{2}-\sqrt{(\frac{1}{2})^2-(\frac{1}{2}L_n)^2}.\quad\quad\quad(4)

Substituting (4) into (1) yields

L_{n+1}^2 = \left(\frac{1}{2}-\sqrt{(\frac{1}{2})^2-(\frac{1}{2}L_n)^2}\right)^2+(\frac{1}{2}L_n)^2

That is,

L_{n+1}^2 = \frac{1}{4}\left(L_n^2 + \left(1-\sqrt{1-L_n^2}\right)^2\right).

Further simplification gives

L_{n+1}^2 = \frac{1}{2}\left(1-\sqrt{1-L_n^2}\right).\quad\quad\quad(5)

Starting with an inscribed square (L_2^2 =\frac{1}{2}), we compute L_{n+1}^2 from L_{n}^2 (see Fig. 2). The perimeter of the polygon with 2^{n+1} sides is 2^{n+1} \cdot L_{n+1}.

Fig. 2
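
The computation of Fig. 2 can be sketched in a few lines of Python (floating point; the ten doublings, a choice made here, take the square to a 4096-gon):

```python
import math

# Iterate (5) starting from the inscribed square, L_2^2 = 1/2
L2, n = 0.5, 2
for _ in range(10):
    L2 = (1 - math.sqrt(1 - L2))/2
    n += 1

# Perimeter of the 2^n-gon; the circle has diameter 1
print(2**n * math.sqrt(L2))   # ≈ 3.141592
```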


Hence,

\lim\limits_{n \rightarrow \infty} 2^n \cdot L_{n} = \pi.

Exercise-1 Explain, and then make the appropriate changes:

Hint: (5) is equivalent to L_{n+1}^2 = \frac{L_n^2}{2\left(1+\sqrt{1-L_{n}^2}\right)}.

We all bleed the same color

In “Mathematical Models in Biology”, Leah Edelstein-Keshet presents a model describing the number of circulating red blood cells (RBC’s). It assumes that the spleen filters out and destroys a fraction of the cells daily while the bone marrow produces an amount proportional to the number lost on the previous day:

\begin{cases} R_{n+1} = (1-f)R_n+M_n\\ M_{n+1} = \gamma f R_n\end{cases}\quad\quad\quad(1)


where

R_n - number of RBC’s in circulation on day n,

M_n - number of RBC’s produced by marrow on day n,

f - fraction of RBC’s removed by the spleen,

\gamma - number of RBC’s produced per RBC lost.

What would be the cell count on the n^{th} day?

Observe first that (1) is equivalent to

R_{n+2} = (1-f)R_{n+1}+M_{n+1}\quad\quad\quad(2)


M_{n+1} = \gamma f  R_n.\quad\quad\quad(3)

Let n = -1,

M_0=\gamma f R_{-1} \implies R_{-1} = \frac{M_0}{\gamma f}.\quad\quad\quad(4)

Substituting (3) into (2) yields

R_{n+2} = (1-f)R_{n+1}+\gamma f R_{n}.

We proceed to solve the following initial-value problem using ‘solve_rec‘ (see “Solving Difference Equations using Omega CAS Explorer“):

\begin{cases} R_{n+2}=(1-f)R_{n+1}+\gamma f R_{n}\\ R_{0}=1, R_{-1} = \frac{1}{\gamma f}\end{cases}

Evaluating the solution with f=\frac{1}{2}, \gamma=1, we have

R_n = \frac{4}{3} + \frac{(-1)^{n+1}2^{-n}}{3}.\quad\quad\quad(5)
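
Closed form (5) can be checked against a direct simulation of (1) in exact rational arithmetic (a sketch; M_0 = 1 follows from R_{-1} = \frac{1}{\gamma f} and M_0 = \gamma f R_{-1}):

```python
from fractions import Fraction

f, gamma = Fraction(1, 2), 1
# R_0 = 1; M_0 = 1 follows from the initial condition R_{-1} = 1/(gamma*f)
R, M = Fraction(1), Fraction(1)

for n in range(1, 11):
    # System (1): simultaneous update of R and M
    R, M = (1 - f)*R + M, gamma*f*R
    # Closed form (5)
    assert R == Fraction(4, 3) + Fraction((-1)**(n + 1), 3*2**n)

print(float(R))   # 1.3330078125, approaching 4/3
```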

Plotting (5) by ‘plot2d(4/3 + (-1)^(n+1)*2^(-n)/3, [n, 0, 10], WEB_PLOT)’ fails (see Fig. 1) since plot2d treats (5) as a continuous function whose domain includes numbers such as \frac{1}{2}.

Fig. 1

Instead, a discrete plot is needed:

Fig. 2

From Fig. 2 we see that R_{n} converges to a value between 1.3 and 1.35. In fact,

\lim\limits_{n \rightarrow \infty}  \frac{4}{3} + \frac{(-1)^{n+1}2^{-n}}{3} = \frac{4}{3}\approx 1.3333....


X had a Q&A session with Buddha.

X: What is 0.9\;?

Buddha: 0.9 is \frac{9}{10}\;.

X: What is 0.99\;?

Buddha: 0.99 is \frac{99}{100}\;.

X: What is 0.999\;?

Buddha: 0.999 is \frac{999}{1000}\;.


X: What is 0.99999....\;?

Buddha: What are the dots ?

X: The dots are all 9‘s. It means \forall n \ge 1, the n^{th} digit after the decimal point is 9.

Buddha: 0.99999... is \sum\limits_{i=1}^{\infty}\frac{9}{10^i}\;.


X: What is \sum\limits_{i=1}^{\infty}\frac{9}{10^i}\;?

Buddha: It is 1\;.


A Sophism in Calculus

Evaluating the indefinite integral \int \frac{1}{x}\;dx using integration by parts gives

\int \frac{1}{x}\;dx =\int x'\cdot \frac{1}{x}\;dx=x\cdot\frac{1}{x}-\int x\cdot(\frac{1}{x})'\;dx=1-\int x\cdot\frac{-1}{x^2}\;dx=1+\int \frac{1}{x}\;dx.

That is,

\int \frac{1}{x}\;dx = 1 + \int \frac{1}{x}\;dx.

Subtracting \int \frac{1}{x}\;dx from both sides, we conclude

0 = 1.\quad\quad\quad(1)

The expression

\int f(x)\;dx = g(x)

means

\left(\int f(x)\; dx - g(x)\right)' = 0;

i.e.,

\int f(x)\;dx- g(x) = C

where C is a constant.


Hence,

\int \frac{1}{x}\;dx = 1+ \int \frac{1}{x}\;dx \implies \int \frac{1}{x}\;dx - \left(1 + \int \frac{1}{x}\;dx\right) =C;

i.e.,

C = -1.

Moreover, defining \leftrightarrow by

f(x) \leftrightarrow g(x), if f(x)-g(x)=C, \quad\quad\quad(2)

it can be shown that

f(x) \leftrightarrow h(x), h(x) \leftrightarrow g(x) \implies f(x) \leftrightarrow g(x) \quad\quad\quad(2-1)


f(x) \leftrightarrow g(x) \Longleftrightarrow f(x)+h(x) \leftrightarrow g(x) + h(x).\quad\quad\quad(2-2)

If we proceed to evaluate \int \frac{1}{x}\;dx as follows:

\int \frac{1}{x}\;dx \leftrightarrow \int x'\cdot \frac{1}{x}\;dx \leftrightarrow x\cdot\frac{1}{x}-\int x\cdot(\frac{1}{x})'\;dx \leftrightarrow 1-\int x\cdot\frac{-1}{x^2}\;dx \leftrightarrow 1+\int \frac{1}{x}\;dx,

we have (by (2-1))

\int \frac{1}{x}\;dx \leftrightarrow 1 + \int \frac{1}{x}\; dx.

Subtracting \int \frac{1}{x}\;dx from both sides (by (2-2)), we conclude

0 \leftrightarrow 1.\quad\quad\quad(3)

Unlike (1), (3) is true:

0-1 = -1, a constant.

By the way,

\int \frac{1}{x}\;dx \leftrightarrow \log(x).

See “Introducing Lady L” for details.

Exercise-1 Can you spot the fallacies?

For x>0,

x^2 = x\cdot x = \underbrace{x+x+x...+x}_{x\;\; x's}.

Differentiating both sides, we have

2x=\underbrace{1+1+1+... + 1}_{x\;\;1's}=x.

Dividing both sides by x (since x>0) yields

2 = 1.

Fire & Water


When a forest fire occurs, how many fire fighters should be sent to fight the fire so that the total cost is kept to a minimum?


Suppose the fire starts at t=0; at t=t_1, the fire fighters arrive; the fire is extinguished later, at t=t_2.

Let c_1, c_2, c_3 and x denote the monetary damage per square foot burnt, the hourly wage per fire fighter, the one-time cost per fire fighter and the number of fire fighters sent, respectively. The total cost consists of the damage caused by the fire, the wages paid to the fire fighters and the one-time cost of the fire fighters:

c(x) = c_1\cdot(total square footage burnt) +\;c_2\cdot (t_2-t_1)\cdot x + c_3\cdot x.

Notice that t_2-t_1 is the duration of the fire fighting.

Assume the fire ignites at a single point and quickly spreads in all directions with flame velocity v_*. The square footage of the engulfed circular area then grows as a function of time. Namely,

b(t) = \pi r^2  = \pi (v_* t) ^2 = \pi v_*^2 t^2 \overset{k=\pi v_*^2}{=} k t^2.

Its burning rate is

v_b(t) = b'(t) = 2 k t \overset{\alpha = 2 k}{=} \alpha t.\quad\quad\quad(1)

However, after the arrival of x fire fighters, \alpha is reduced by \beta x, an amount that is directly proportional to the number of fire fighters on the scene. The reduction of \alpha reflects the fire fighting efforts exerted by the crew. As a result, for t > t_1, v_b(t) declines along the line described by

v_b(t) = (\alpha-\beta x)t + d

where \alpha-\beta x <0. Or equivalently,

\beta x - \alpha >0.\quad\quad\quad(2)

Moreover, that the fire is extinguished at t_2 means that

v_b(t_2) = 0 \implies (\alpha-\beta x) t_2 + d =0 \implies d = -(\alpha-\beta x)t_2.


Hence,

\forall t_1< t \le t_2, v_b(t) = (\alpha-\beta x) t - (\alpha-\beta x) t_2=(\alpha-\beta x) (t-t_2).\quad\quad\quad(3)

Combining (1) and (3),

v_b(t) = \begin{cases} \alpha t, \quad\quad\quad\quad\quad\quad\quad\quad0\le t \le t_1\\ (\alpha-\beta x)(t-t_2),\quad\quad t_1< t \le t_2 \end {cases}.

It is further assumed that

v_b(t) is continuous at t=t_1.\quad\quad\quad(4)

We illustrate v_b(t) in Fig. 1.

Fig. 1

The fact that v_b(t) is continuous at t_1 means

\lim\limits_{t \to t_1^+}v_b(t)=(\alpha-\beta x)(t_1-t_2) = \lim\limits_{t \to t_1^-}v_b(t) = \alpha t_1 \overset{h = \alpha t_1}{=} h \implies (\alpha-\beta x) (t_1-t_2)=h.

That is,

t_2-t_1=\frac{h}{\beta x - \alpha}.\quad\quad\quad(5)

The area of the triangle in Fig. 1 represents b(t_2), the total square footage damaged by the fire; i.e.,

b(t_2) =\frac{1}{2}t_1 h + \frac{1}{2}(t_2-t_1)h\overset{(5)}{=}\frac{1}{2}t_1 h+ \frac{1}{2}\frac{h^2}{\beta x-\alpha}.\quad\quad\quad(6)

Consequently, the total cost

c(x)= c_1 b(t_2) + c_2 x (t_2-t_1)  + c_3 x \overset{(6), (5)}{=}\frac{c_1 t_1 h}{2} + \frac{c_1 h^2}{2(\beta x-\alpha)} + \frac{c_2 h x}{\beta x-\alpha} + c_3 x.

To minimize the total cost, we seek the value of x at which the function c attains its minimum.

Expressing x as

x = \frac{1}{\beta}\beta x = \frac{1}{\beta}({\beta x-\alpha +\alpha} )= \frac{1}{\beta}(\beta x -\alpha) + \frac{\alpha}{\beta},

we obtain

\frac{c_2 h x}{\beta x-\alpha} = \frac{c_2 h}{\beta x-\alpha}(\frac{1}{\beta}(\beta x - \alpha) + \frac{\alpha}{\beta})=\frac{c_2 h}{\beta} + \frac{c_2 \alpha h}{\beta(\beta x-\alpha)},\quad\quad\quad(7)

c_3 x = \frac{c_3}{\beta}(\beta x-\alpha) + \frac{c_3 \alpha}{\beta}.\quad\quad\quad(8)

It follows that

c(x) = \underbrace{\frac{c_1 t_1 h}{2} + \frac{c_1 h^2}{2(\beta x-\alpha)}}_{c_1\cdot(6)} + \underbrace{\frac{c_2 h}{\beta}+\frac{c_2 \alpha h}{\beta(\beta x - \alpha)}}_{(7)} + \underbrace{\frac{c_3}{\beta}(\beta x-\alpha) + \frac{c_3 \alpha}{\beta}}_{(8)}

=\frac{c_1 t_1 h}{2} + \frac{c_2 h}{\beta} + \frac{c_3 \alpha}{\beta} + \underline{\frac{c_1 \beta  h^2+2 c_2 \alpha h}{2\beta(\beta x - \alpha)} + \frac{c_3(\beta x-\alpha)}{\beta}}.

Hence, to minimize c(x) is to find the x that minimizes \frac{c_1\beta h^2+2 c_2\alpha h }{2\beta(\beta x - \alpha)} + \frac{c_3(\beta x - \alpha)}{\beta}.

Since \frac{c_1\beta h^2+2 c_2\alpha h }{2\beta(\beta x - \alpha)} \cdot \frac{c_3(\beta x - \alpha)}{\beta}=\frac{(c_1 \beta h^2+2 c_2 \alpha h)c_3}{2\beta^2} , a constant, by the following theorem:

For positive quantities a_1, a_2, ..., a_n, c_1, c_2, ..., c_n and positive rational quantities p_1, p_2, ..., p_n, if a_1^{p_1}a_2^{p_2}...a_n^{p_n} is a constant, then c_1a_1+c_2a_2+...+c_na_n attains its minimum when \frac{c_1a_1}{p_1} = \frac{c_2a_2}{p_2} = ... = \frac{c_na_n}{p_n}.

(see “Solving Kepler’s Wine Barrel Problem without Calculus“), we solve equation

\frac{c_1\beta h^2+2 c_2\alpha h }{2\beta(\beta x - \alpha)} = \frac{c_3(\beta x - \alpha)}{\beta}

for x:

Fig. 2

From Fig. 2, we see that x =\frac{\alpha}{\beta} - \frac{\sqrt{\beta c_1 c_3 h^2+2\alpha c_2 c_3 h}}{\sqrt{2}\beta c_3} \implies \beta x - \alpha < 0, which contradicts (2).

However, when x =\frac{\sqrt{\beta c_1 c_3 h^2+2\alpha c_2 c_3 h}}{\sqrt{2}\beta c_3} + \frac{\alpha}{\beta}=\sqrt{\frac{\beta c_1 h^2 + 2 \alpha c_2 h}{2\beta^2 c_3}} + \frac{\alpha}{\beta}, \beta x-\alpha is a positive quantity. Therefore, it is

x \overset{h=\alpha t_1}{=} \alpha \sqrt{\frac{\beta c_1 t_1^2 + 2 c_2 t_1}{2\beta^2 c_3}} + \frac{\alpha}{\beta}

that minimizes the total cost.
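
The critical-point computation can be double-checked with sympy (a sketch; the symbol names mirror those in the text, with all quantities assumed positive):

```python
import sympy as sp

x, a, b, c1, c2, c3, h, t1 = sp.symbols(
    'x alpha beta c1 c2 c3 h t1', positive=True)

# Total cost c(x) from the text, valid for beta*x - alpha > 0
c = c1*t1*h/2 + c1*h**2/(2*(b*x - a)) + c2*h*x/(b*x - a) + c3*x

# The minimizer obtained above (with h not yet replaced by alpha*t1)
x_star = sp.sqrt((b*c1*h**2 + 2*a*c2*h)/(2*b**2*c3)) + a/b

# c'(x) vanishes at x_star
assert sp.simplify(sp.diff(c, x).subs(x, x_star)) == 0
```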