Deriving Two Inverse Functions

In a previous post, we defined function $y=\sinh^{-1}(x)$ as

$\{(x, y) | x = \frac{e^y-e^{-y}}{2}\}.\quad\quad\quad(1)$

From $x = \frac{e^y-e^{-y}}{2}$, we obtain

$(e^y)^2-2x\cdot e^y-1=0.$

It means either $e^y = x-\sqrt{x^2+1}$ or $e^y = x+\sqrt{x^2+1}.$

But $e^y = x-\sqrt{x^2+1}$ implies $e^y < 0$ (see Exercise-1), which contradicts the fact that $\forall t \in R, e^t > 0$ (see “Two Peas in a Pod, Part 2“).

Therefore,

$e^y = x+\sqrt{x^2+1} \implies y = \log(x + \sqrt{x^2+1}).$

i.e.,

$\sinh^{-1}(x) = \log(x + \sqrt{x^2+1}), \;\;x \in (-\infty, +\infty).$
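As a quick sanity check (my own sketch, not part of the original post), the closed form can be compared against the standard library's inverse:

```python
import math

def asinh_closed_form(x):
    # log(x + sqrt(x^2 + 1)), the closed form derived above
    return math.log(x + math.sqrt(x * x + 1))

# agrees with the library's inverse on sample points
for x in [-10.0, -1.5, 0.0, 0.5, 7.0]:
    assert abs(asinh_closed_form(x) - math.asinh(x)) < 1e-12
```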

We also defined a non-negative valued function $y = \cosh^{-1}(x)$:

$\{(x, y) | y \ge 0, x = \frac{e^y + e^{-y}}{2}\}.\quad\quad\quad(2)$

Simplifying $x=\frac{e^y+e^{-y}}{2}$ yields

$(e^y)^2-2x\cdot e^y+1=0.$

It follows that either $e^y = x-\sqrt{x^2-1}$ or $e^y=x+\sqrt{x^2-1}.$

For the right-hand side of either expression to be valid, we must have $x \ge 1$. However, when $x = 2$,

$e^y = x-\sqrt{x^2-1} = 2 - \sqrt{3} < 1$

implies that $y < 0$ (see Exercise-2, 3), which contradicts (2).

Hence,

$e^y = x+\sqrt{x^2-1} \implies y = \log(x+\sqrt{x^2-1}).$

i.e.,

$\cosh^{-1}(x) = \log(x+\sqrt{x^2-1}), \;\;x \in [1, +\infty).$
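The same kind of numerical check works for this closed form on its domain $x \ge 1$ (again a sketch of mine, not from the post):

```python
import math

def acosh_closed_form(x):
    # log(x + sqrt(x^2 - 1)), valid on the domain x >= 1
    return math.log(x + math.sqrt(x * x - 1))

for x in [1.0, 1.5, 2.0, 10.0]:
    assert abs(acosh_closed_form(x) - math.acosh(x)) < 1e-12
```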

Exercise-1 Show that $\forall x \in R, x-\sqrt{x^2+1} < 0.$

Exercise-2 Without calculator or CAS, show that $2-\sqrt{3} < 1.$

Exercise-3 Prove $\forall t \ge 0, e^t \ge 1$ (hint: see “Two Peas in a Pod, Part 2“).

Beltrami’s Identity

Beltrami’s identity

$F - y' \frac{\partial F}{\partial y'} = C$

where $C$ is a constant, is a reduced form of the Euler-Lagrange equation for the special case when $F$ does not depend explicitly on $x$.

For $F = F(y, y')$,

$\frac{dF}{dx} = \frac{\partial F}{\partial y} y' + \frac{\partial F}{\partial y'} y''.\quad\quad\quad(1)$

From the Euler-Lagrange equation

$\frac{\partial F}{\partial y} - \frac{d}{dx}(\frac{\partial F}{\partial y'}) = 0,$

we have

$\frac{\partial F}{\partial y} = \frac{d}{dx}(\frac{\partial F}{\partial y'}).\quad\quad\quad(2)$

Substituting (2) into (1) gives

$\frac{dF}{dx} = \frac{d}{dx}(\frac{\partial F}{\partial y'}) y' + \frac{\partial F}{\partial y'} y''.\quad\quad\quad(3)$

Consequently, when (3) is expressed as

$\frac{dF}{dx} = \frac{d}{dx}(\frac{\partial F}{\partial y'}) y' + \frac{\partial F}{\partial y'} \frac{dy'}{dx},$

it becomes clear that

$\frac{dF}{dx} = \frac{d}{dx}(\frac{\partial F}{\partial y'}y').$

Therefore,

$\frac{d}{dx}(F-\frac{\partial F}{\partial y'}y') = 0.$

i.e.,

$F - y' \frac{\partial F}{\partial y'} = C.$

In “A Relentless Pursuit”, we derived differential equation

$\frac{dy}{dx} = \frac{\sqrt{y^2-C_1^2}}{\pm C_1}\quad\quad\quad(4)$

from the Euler-Lagrange equation without taking advantage of the fact that $F=y\sqrt{1+(y')^2}$ has no explicit dependency on $x$. The derivation was mostly done by a CAS.

Let’s derive (4) from Beltrami’s identity. This time, we will not use a CAS.

For $F=y\sqrt{1+(y')^2}$, Beltrami’s identity is

$y\sqrt{1+(y')^2} - y'\cdot\left(y\cdot\frac{1}{2}\frac{2y'}{\sqrt{1+(y')^2}}\right) = C.$

It simplifies to

$\frac{y}{\sqrt{1+(y')^2}} = C.$

Further simplification yields

$C^2(y')^2 = y^2-C^2.$

For $C \ne 0$,

$(y')^2 = \frac{y^2-C^2}{C^2}.$

Therefore,

$\frac{dy}{dx} = \pm \sqrt{\frac{y^2-C^2}{C^2}}=\frac{\sqrt{y^2-C^2}}{\pm C}.$
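As a hedged numerical check (my sketch; the constant is an arbitrary choice), a curve of the form $y = C_1\cosh(x/C_1)$ satisfies both the first integral $\frac{y}{\sqrt{1+(y')^2}} = C_1$ and the final slope equation:

```python
import math

C1 = 0.8  # an arbitrary positive constant, chosen only for the check

def y(x):  return C1 * math.cosh(x / C1)
def yp(x): return math.sinh(x / C1)   # dy/dx

for x in [0.1, 0.5, 1.3]:
    # first integral: y / sqrt(1 + y'^2) = C1
    assert abs(y(x) / math.sqrt(1 + yp(x) ** 2) - C1) < 1e-12
    # slope equation, taking the + branch for x > 0
    assert abs(yp(x) - math.sqrt(y(x) ** 2 - C1 ** 2) / C1) < 1e-9
```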

Prequel to “A Relentless Pursuit”

Fig. 1

Suppose we have two circular hoops of unit radius, centered on a common $x$-axis and a distance $2a$ apart. Suppose, too, that a soap film extends between the two hoops, taking the form of a surface of revolution about the $x$-axis (see Fig. 2). Then, if gravity is negligible, the film takes up a state of stable equilibrium in which its surface area is a minimum.

Fig. 2

Our problem is to find the function $y(x)$, satisfying the boundary conditions

$y(-a) = y(a) = 1,\quad\quad\quad(1)$

which makes the surface area

$A=2\pi\displaystyle\int\limits_{-a}^{a}y\sqrt{1+(y')^2}\;dx\quad\quad\quad(2)$

a minimum.

Let

$F(x,y, y') = 2\pi y \sqrt{1+(y')^2}.$

We have

$\frac{\partial F}{\partial y} = 2\pi \sqrt{1+(y')^2}$

and

$\frac{\partial F}{\partial y'} = 2 \pi y \cdot\frac{1}{2}\left(1+(y')^2\right)^{-\frac{1}{2}}\cdot 2y'=\frac{2 \pi y y'}{\sqrt{1+(y')^2}}.$

The Euler-Lagrange equation

$\frac{\partial F}{\partial y} - \frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right) = 0$

becomes

$\sqrt{1+(y')^2} - \frac{d}{dx}\left(\frac{y y'}{\sqrt{1+(y')^2}}\right) = 0.$

Fig. 3

Using Omega CAS Explorer (see Fig. 3), it can be simplified to:

$y \frac{d^2 y}{dx^2}- \left(\frac{dy}{dx}\right)^2=1.$

This is the differential equation solved in “A Relentless Pursuit” whose solution is

$y = C_1\cdot \cosh(\frac{x+C_2}{C_1}).$

We must then find $C_1$ and $C_2$ subject to the boundary condition (1), i.e.,

$C_1\cdot \cosh(\frac{a+C_2}{C_1}) = C_1\cdot\cosh(\frac{-a+C_2}{C_1})\implies \cosh(\frac{a+C_2}{C_1}) = \cosh(\frac{-a+C_2}{C_1}).$

The fact that $\cosh$ is an even function implies either

$a+C_2 = -a+C_2\quad\quad\quad(3)$

or

$a+C_2 = -(-a+C_2).\quad\quad\quad(4)$

While (3) is clearly false, since it would require $a = -a$ for all $a > 0$, (4) gives

$C_2=0.$

And so, the solution to boundary-value problem

$\begin{cases} y \frac{d^2 y}{dx^2}- \left(\frac{dy}{dx}\right)^2=1,\\ y(-a)=y(a)=1\end{cases}\quad\quad\quad(5)$

is

$y = C_1\cdot \cosh(\frac{x}{C_1}).\quad\quad\quad(6)$

To determine $C_1$, we deduce the following equation from the boundary conditions that $y=1$ at $x=\pm a:$

$C_1\cdot \cosh(\frac{a}{C_1}) = 1.\quad\quad\quad(7)$

This is a transcendental equation for $C_1$ that cannot be solved explicitly. Nonetheless, we can examine it qualitatively.

Let

$\mu = \frac{a}{C_1}$

and express (7) as

$\cosh(\mu) = \frac{\mu}{a}.\quad\quad\quad(8)$

Fig. 4

A plot of the two sides of (8) in Fig. 4 shows that for sufficiently small $a$, the curves $z = \cosh(\mu)$ and $z = \frac{\mu}{a}$ intersect. However, as $a$ increases, $z=\frac{\mu}{a}$, a line whose slope is $\frac{1}{a}$, rotates clockwise towards the $\mu$-axis, and the curves will not intersect if $a$ is too large. In the critical case $a=a^*$, the curves touch at a single point, so that

$\cosh(\mu) = \frac{\mu}{a^*}\quad\quad\quad(9)$

and the line $z=\frac{\mu}{a^*}$ is tangent to $z=\cosh(\mu),$ i.e.,

$\sinh(\mu) = \frac{1}{a^*}.\quad\quad\quad(10)$

Dividing (9) by (10) yields

$\coth(\mu) = \mu. \quad\quad\quad(11)$

What the mathematical model (5) predicts, then, is that as we gradually move the rings apart, the soap film breaks when the distance between the two rings reaches $2a^*$; for $a > a^*$, no soap-film surface connects the two rings. This is confirmed by experiment (see Fig. 1).

We compute the value of $a^*$, the maximum value of $a$ that supports a minimum-area soap-film surface, as follows.

Fig. 5

Solving (11) for $\mu$ numerically (see Fig. 5), we obtain

$\mu = 1.1997.$

By (10), the corresponding value of

$a^* = \frac{1}{\sinh(\mu)} = \frac{1}{\sinh(1.1997)} = 0.6627$.
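The two numerical values can also be reproduced without a CAS, e.g. with a plain bisection in Python (a sketch of mine):

```python
import math

def g(mu):
    # coth(mu) - mu, whose root solves (11)
    return math.cosh(mu) / math.sinh(mu) - mu

lo, hi = 1.0, 1.5      # g(1.0) > 0 and g(1.5) < 0, so a root lies between
for _ in range(60):    # plain bisection
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
mu = (lo + hi) / 2
a_star = 1 / math.sinh(mu)   # from (10)

assert abs(mu - 1.1997) < 1e-3
assert abs(a_star - 0.6627) < 1e-3
```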

We also compute the surface area of the soap film from (2) and (6) (see Fig. 6). Namely,

$A = 2\pi \displaystyle\int\limits_{-a}^{a} C_1 \cosh\left(\frac{x}{C_1}\right) \sqrt{1+\left(\frac{d}{dx}C_1\cosh\left(\frac{x}{C_1}\right)\right)^2}\;dx = \pi C_1^2\left(\sinh\left(\frac{2a}{C_1}\right) + \frac{2a}{C_1}\right).$

Fig. 6
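A numerical cross-check of the closed-form area (my sketch; $C_1$ and $a$ are arbitrary sample values here, not solved from (7)):

```python
import math

C1, a = 0.8, 0.4   # sample values, chosen only for the check

def integrand(x):
    # y * sqrt(1 + y'^2) with y = C1*cosh(x/C1)
    return C1 * math.cosh(x / C1) * math.sqrt(1 + math.sinh(x / C1) ** 2)

# composite Simpson's rule on [-a, a]
n = 1000                       # number of subintervals (even)
h = 2 * a / n
s = integrand(-a) + integrand(a)
for k in range(1, n):
    s += (4 if k % 2 else 2) * integrand(-a + k * h)
A_numeric = 2 * math.pi * (h / 3) * s

A_closed = math.pi * C1 ** 2 * (math.sinh(2 * a / C1) + 2 * a / C1)
assert abs(A_numeric - A_closed) < 1e-9
```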

Exercise-1 Given $a=\frac{1}{2}$, solve (7) numerically for $C_1.$

Exercise-2 Without using a CAS, find the surface area of the soap film from (2) and (6).

An Epilogue to “A Relentless Pursuit”

Let

$p=\frac{dy}{dx},$

so that differential equation (1) in “A Relentless Pursuit” can be expressed as

$\frac{dp}{dy}-\frac{1}{y}p = \frac{1}{y}p^{-1}.$

This is Bernoulli’s equation

$\frac{dp}{dy} + f(y) p = g(y) p^{\alpha}$

with $f(y) = -\frac{1}{y}, g(y) = \frac{1}{y}$ and $\alpha = -1$ (see “Meeting Mr. Bernoulli“).

Hence,

$p^{1-\alpha} = e^{(\alpha-1)\displaystyle\int f(y)\;dy}\left((1-\alpha)\displaystyle\int e^{-(\alpha-1)\displaystyle\int f(y)\;dy} g(y)\;dy + C\right).\quad\quad\quad(1)$

Substituting $f(y), g(y)$ and $\alpha$ into (1) gives

$p^{1-(-1)} = e^{(-1-1)\displaystyle\int -\frac{1}{y}\; dy}\left((1-(-1))\displaystyle\int e^{-(-1-1)\displaystyle\int -\frac{1}{y}\;dy}\frac{1}{y}\;dy+C\right).$

i.e.,

$p^2 = Cy^2-1$

where it must be true that $C>0$. Therefore,

$p^2=\frac{1}{C_1^2}y^2-1, C_1>0\implies p^2 = \frac{y^2-C_1^2}{C_1^2} \implies p \overset{C_1>0}{=} \frac{\sqrt{y^2-C_1^2}}{\pm C_1}.$

That is,

$\frac{dy}{dx} = \frac{\sqrt{y^2-C_1^2}}{\pm C_1}.\quad\quad\quad(2)$

There is yet another way to obtain (2):

Since

$yy''-(y')^2 = 1 \implies 1+(y')^2 = y y''\quad\quad\quad(3)$

and

$(1+(y')^2)'= 2y' y''.\quad\quad\quad(4)$

Dividing (4) by (3) yields

$\frac{(1+(y')^2)'}{1+(y')^2} = \frac{2y'}{y}.\quad\quad\quad(5)$

Integrating (5) with respect to $x$, we have

$\frac{1}{2}\log(1+(y')^2) = \log(y) + C \implies \log(\frac{\sqrt{1+(y')^2}}{y}) = C.$

i.e.,

$\frac{\sqrt{1+(y')^2}}{y}= e^C=\frac{1}{C_1}$

where $C_1>0$.

It follows that

$\frac{dy}{dx} = \frac{\sqrt{y^2-C_1^2}}{\pm C_1}.$

A Relentless Pursuit

Problem: Solving differential equation

$y \frac{d^2y}{dx^2} - (\frac{dy}{dx})^2 = 1.\quad\quad\quad(1)$

Solution:

Let

$p = \frac{dy}{dx}.\quad\quad\quad(2)$

Then by chain rule,

$\frac{d^2 y}{dx^2} = \frac{dp}{dx} =\frac{dp}{dy}\cdot\frac{dy}{dx} = p\frac{dp}{dy}.$

Rewrite (1) as

$y\cdot p\frac{dp}{dy} - p^2=1.$

Equivalently,

$\frac{p}{1+p^2}\cdot\frac{dp}{dy} = \frac{1}{y}.\quad\quad\quad(3)$

Integrating (3) with respect to $y$ gives

$\displaystyle\int \frac{1}{2}\cdot\frac{2p}{1+p^2}\cdot\frac{dp}{dy}\;dy =\displaystyle \int \frac{1}{y}\;dy\implies\frac{1}{2}\log(1+p^2)= \log(y) + C.$

Hence

$\log(\sqrt{1+p^2}) - \log(y) = C.$

Or,

$\frac{\sqrt{1+p^2}}{y} = e^{C}=\frac{1}{C_1}\quad\quad\quad(4)$

where $C_1 > 0$ since $e^C>0$.

Square (4) gives

$\frac{1+p^2}{y^2} = \frac{1}{C_1^2}.\quad\quad\quad(5)$

Solving for $p$ from (5), we obtain

$p^2 = \frac{y^2-C_1^2}{C_1^2} \overset{(4)}{\implies} p = \pm \frac{\sqrt{y^2-C_1^2}}{C_1}.$

And so,

$\frac{dy}{dx} \overset{(2)}{=} \frac{\sqrt{y^2-C_1^2}}{\pm C_1}.$

Or,

$\pm C_1\cdot\displaystyle\frac{1}{\sqrt{y^2-C_1^2}}\cdot\frac{dy}{dx} = 1.\quad\quad\quad(6)$

Integrating (6) with respect to $x$ yields

$\pm C_1\cdot\displaystyle\int\frac{1}{\sqrt{y^2-C_1^2}}\cdot\frac{dy}{dx}\;dx= \int\;dx\overset{(\star)}{\implies}\pm C_1\cdot\cosh^{-1}\left(\frac{y}{C_1}\right)=x+C_2.$

i.e.,

$\cosh^{-1}(\frac{y}{C_1}) = \frac{x+C_2}{\pm C_1}.$

Therefore,

$y = C_1\cdot\cosh\left(\frac{x+C_2}{\pm C_1}\right)=C_1\cdot\cosh\left(\frac{x+C_2}{C_1}\right)$

since $\cosh$ is an even function.
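The solution can be verified numerically against (1) (a small Python sketch of mine; the constants are arbitrary choices):

```python
import math

C1, C2 = 0.7, 0.3  # arbitrary constants, chosen only for the check

def y(x):   return C1 * math.cosh((x + C2) / C1)
def yp(x):  return math.sinh((x + C2) / C1)
def ypp(x): return math.cosh((x + C2) / C1) / C1

for x in [-1.0, 0.0, 2.5]:
    # y*y'' - (y')^2 = cosh^2 - sinh^2 = 1
    assert abs(y(x) * ypp(x) - yp(x) ** 2 - 1) < 1e-12
```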

Let $y=C_1z \implies \frac{dy}{dz}=C_1.$

$\displaystyle\int \frac{1}{\sqrt{y^2-C_1^2}} \;dy=\int\frac{C_1}{\sqrt{(C_1z)^2-C_1^2}}\;dz\overset{C_1>0}{=}\int\frac{1}{\sqrt{z^2-1}}\;dz\overset{(\star\star)}{=}\cosh^{-1}(z)$

$=\cosh^{-1}(\frac{y}{C_1}).$

i.e.,

$\displaystyle\int \frac{1}{\sqrt{y^2-C_1^2}} \;dy = \cosh^{-1}\left(\frac{y}{C_1}\right).\quad\quad\quad(\star)$

Let $z = \sec(\theta) \implies \frac{dz}{d\theta} = \sec(\theta)\tan(\theta).$

$\displaystyle\int \frac{1}{\sqrt{z^2-1}}\;dz=\displaystyle\int \frac{\sec(\theta)\tan(\theta)}{\sqrt{\sec(\theta)^2-1}}\;d\theta$

$= \displaystyle\int \sec{\theta}\;d\theta \overset{(\star\star\star)}{=} \log(\sec(\theta)+\tan(\theta)) = \log(z + \sqrt{z^2-1})$

By definition (see “Deriving Two Inverse Functions“),

$= \cosh^{-1}(z).$

i.e.,

$\displaystyle\int \frac{1}{\sqrt{z^2-1}}\;dz = \cosh^{-1}(z).\quad\quad\quad(\star\star)$

Without using a CAS,

$\displaystyle \int \sec(x)\;dx = \displaystyle\int\frac{1}{\cos(x)}\;dx=\displaystyle\int \frac{\cos(x)}{\cos(x)^2}\;dx= \displaystyle\int\frac{\cos(x)}{1-\sin(x)^2}\;dx$

Let $t = \sin(x) \implies 1 = \cos(x)\frac{dx}{dt}\implies \frac{dx}{dt} = \sec(x)$,

$=\displaystyle\int \frac{1}{1-t^2} \;dt = \displaystyle\int\frac{1}{2}\left(\frac{1}{1-t} + \frac{1}{1+t}\right)\;dt=\frac{1}{2}(-\log(1-t) + \log(1+t))$

$= \frac{1}{2}\log\left(\frac{1+t}{1-t}\right)=\frac{1}{2}\log\left(\frac{(1+t)(1+t)}{(1-t)(1+t)}\right)=\frac{1}{2}\log\left(\frac{(1+t)^2}{1-t^2}\right)=\frac{1}{2}\log\left(\frac{(1+\sin(x))^2}{1-\sin(x)^2}\right)$

$=\frac{1}{2}\log\left(\frac{(1+\sin(x))^2}{\cos(x)^2}\right)=\log\left(\frac{1+\sin(x)}{\cos(x)}\right)=\log(\sec(x) + \tan(x)).$

i.e.,

$\displaystyle\int \sec(x)\;dx = \log(\sec(x) + \tan(x)).\quad\quad\quad(\star\star\star).$
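$(\star\star\star)$ can also be confirmed numerically, e.g. by comparing a midpoint-rule integral of $\sec$ with the antiderivative (my sketch; the interval is an arbitrary choice):

```python
import math

def F(x):
    # the antiderivative log(sec x + tan x) from the derivation above
    return math.log(1 / math.cos(x) + math.tan(x))

# midpoint-rule approximation of the integral of sec on [0, 1.2]
a, b, n = 0.0, 1.2, 20_000
h = (b - a) / n
total = sum(1 / math.cos(a + (k + 0.5) * h) for k in range(n)) * h

assert abs(total - (F(b) - F(a))) < 1e-6
```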

Exercise-1 What is the derivative of $\cosh^{-1}(x)?$ hint: $(\star\star)$

Deriving Generalized Leibniz’s Integral Rule

The general form of Leibniz’s Integral Rule with variable limits states:

Suppose $f(x, t)$ satisfies the conditions stated previously for the basic form of Leibniz’s Rule (LR-1, see “A Semi-Rigorous Derivation of Leibniz’s Rule“). In addition, $a(t), b(t)$ are defined and have continuous derivatives for $t_1\le t\le t_2.$ Then for $t_1\le t \le t_2,$

$\frac{d}{d t}\int\limits_{a(t)}^{b(t)}f(x, t)\;dx = f(b(t),t)\cdot b'(t) -f(a(t),t)\cdot a'(t)+ \int\limits_{a(t)}^{b(t)}\frac{\partial}{\partial t}f(x, t)\;dx.\quad\quad\quad(1)$

(1) can be derived as a consequence of LR-1, the Multivariable Chain Rule, and the Fundamental Theorem of Calculus (FTC):

Clearly,

$\int\limits_{a(t)}^{b(t)}f(x, t)\;dx\quad\quad\quad(2)$

on the left side of (1) is a function of $t$.

Let

$u = a(t),\quad\quad\quad(3)$

$v = b(t),\quad\quad\quad(4)$

$w = t,\quad\quad\quad(5)$

(2) can be expressed as

$G(u,v,w) = \int\limits_{u}^{v}f(x,w)\;dx.$

Hence, by the chain rule,

$\frac{d}{d t}\int\limits_{u}^{v}f(x, w)\;dx = \frac{\partial}{\partial u}G(u,v,w)\cdot \frac{du}{dt} + \frac{\partial}{\partial v} G(u,v,w)\cdot \frac{dv}{dt} + \frac{\partial}{\partial w}G(u,v,w)\cdot \frac{dw}{dt}$

where

$\frac{\partial}{\partial u}G(u,v,w)=\frac{\partial}{\partial u}\int\limits_{u}^{v}f(x, w)\;dx$

$= \frac{\partial}{\partial u}\left(-\int\limits_{v}^{u}f(x, w)\;dx\right)$

$= -\frac{\partial}{\partial u}\int\limits_{v}^{u}f(x, w) \;dx$

$\overset{\textbf{FTC}}{=} -f(u, w)$

$\overset{(3), (5)}{=} -f(a(t), t),$

$\frac{\partial}{\partial v}G(u,v,w)=\frac{\partial}{\partial v}\int\limits_{u}^{v}f(x, w)\;dx \overset{\textbf{FTC}}{=}f(v, w) \overset{(4), (5)}{=} f(b(t), t)$

and,

$\frac{\partial}{\partial w}G(u,v,w)=\frac{\partial}{\partial w}\int\limits_{u}^{v}f(x, w)\;dx \overset{\textbf{LR-1}}{=} \int\limits_{u}^{v}\frac{\partial}{\partial w}f(x, w)\;dx\overset{(3), (4), (5)}{=} \int\limits_{a(t)}^{b(t)}\frac{\partial}{\partial t}f(x, t)\;dx.$

It follows that

$\frac{d}{d t}\int\limits_{a(t)}^{b(t)}f(x, t)\;dx =-f(a(t),t)\cdot a'(t) + f(b(t),t)\cdot b'(t) + \int\limits_{a(t)}^{b(t)}\frac{\partial}{\partial t}f(x, t)\;dx,$

i.e.,

$\frac{d}{d t}\int\limits_{a(t)}^{b(t)}f(x, t)\;dx = f(b(t),t)\cdot b'(t) -f(a(t),t)\cdot a'(t)+ \int\limits_{a(t)}^{b(t)}\frac{\partial}{\partial t}f(x, t)\;dx.$
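A concrete check of (1) on a hand-computable instance (my own example, not from the post): take $f(x,t)=xt$, $a(t)=t$, $b(t)=t^2$, so $I(t)=\int_t^{t^2}xt\,dx=\frac{t^5-t^3}{2}$ in closed form.

```python
def I(t):
    # I(t) = integral of x*t over [t, t^2], evaluated in closed form
    return (t ** 5 - t ** 3) / 2

def rhs(t):
    # f(b,t)*b'(t) - f(a,t)*a'(t) + integral of df/dt (= x) over [a, b]
    return (t ** 3) * (2 * t) - (t ** 2) * 1 + (t ** 4 - t ** 2) / 2

eps = 1e-6
for t in [0.5, 1.0, 2.0]:
    dI = (I(t + eps) - I(t - eps)) / (2 * eps)   # left side of (1), numerically
    assert abs(dI - rhs(t)) < 1e-6
```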

A Semi-Rigorous Derivation of Leibniz’s Rule

Leibniz’s rule (LR-1) states:

Let $f(x, \beta)$ be continuous and have a continuous derivative $\frac{\partial f}{\partial \beta}$ in a domain of the $x\beta$-plane that includes the rectangle $a \le x \le b, \beta_1 \le \beta \le \beta_2.$ Then

$\frac{d}{d\beta}\int\limits_{a}^{b}f(x, \beta)\;dx =\int\limits_{a}^{b}\frac{\partial}{\partial \beta}f(x, \beta)\;dx.$

I will derive LR-1 semi-rigorously as follows:

Let

$g(t) = \int\limits_{a}^{b} \frac{\partial}{\partial t}f(x, t)\;dx.\quad\quad\quad(1-1)$

Integrating (1-1) with respect to $t$ from a constant $\alpha$ to a variable $\beta$, we have

$\int\limits_{\alpha}^{\beta} g(t)\;dt = \int\limits_{\alpha}^{\beta}\left(\int\limits_{a}^{b} \frac{\partial}{\partial t}f(x, t)\;dx\right)\;dt$

$\overset{(\star)}{=}\int\limits_{a}^{b}\left(\int\limits_{\alpha}^{\beta} \frac{\partial}{\partial t}f(x, t)\;dt\right)\;dx$

$=\int\limits_{a}^{b}\left(f(x, \beta) - f(x, \alpha)\right)\; dx$

$=\int\limits_{a}^{b}f(x, \beta)\;dx - \int\limits_{a}^{b}f(x,\alpha)\; dx.$

That is,

$\int\limits_{\alpha}^{\beta} g(t)\;dt = \int\limits_{a}^{b}f(x, \beta)\;dx - \int\limits_{a}^{b}f(x,\alpha)\; dx.\quad\quad\quad(1-2)$

While $\int\limits_{\alpha}^{\beta}g(t)\;dt$ and $\int\limits_{a}^{b}f(x,\beta)\;dx$ are functions of $\beta$, $\int\limits_{a}^{b}f(x,\alpha)\;dx$ is a constant.

Since

$\frac{d}{d\beta}\int\limits_{\alpha}^{\beta} g(t)\;dt \overset{\textbf{FTC}}{= }g(\beta), \quad\frac{d}{d\beta}\left(\int\limits_{a}^{b}f(x, \beta)\;dx - \int\limits_{a}^{b}f(x,\alpha)\; dx\right)=\frac{d}{d\beta}\int\limits_{a}^{b}f(x, \beta)\;dx,$

differentiating (1-2) with respect to $\beta$ gives

$g(\beta) = \frac{d}{d\beta}\int\limits_{a}^{b}f(x, \beta)\;dx\overset{(1-1)}{\implies} \int\limits_{a}^{b} \frac{\partial}{\partial \beta}f(x, \beta)\;dx= \frac{d}{d\beta}\int\limits_{a}^{b}f(x, \beta)\;dx;$

i.e.,

$\frac{d}{d\beta}\int\limits_{a}^{b}f(x, \beta)\;dx = \int\limits_{a}^{b} \frac{\partial}{\partial \beta}f(x, \beta)\;dx.$
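LR-1 can be spot-checked numerically; here is a sketch of mine with $f(x,\beta)=\sin(\beta x)$ on $0\le x\le 1$ (both sides have closed forms):

```python
import math

def I(beta):
    # integral of sin(beta*x) over [0, 1], in closed form
    return (1 - math.cos(beta)) / beta

def rhs(beta):
    # integral of x*cos(beta*x) over [0, 1], via integration by parts
    return math.sin(beta) / beta - (1 - math.cos(beta)) / beta ** 2

eps = 1e-6
for beta in [0.5, 1.0, 2.0]:
    dI = (I(beta + eps) - I(beta - eps)) / (2 * eps)  # d/dbeta of the left side
    assert abs(dI - rhs(beta)) < 1e-6
```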

In the three-dimensional $x, y, z$-space, the double integral of a continuous function with two independent variables, $V=\iint_{R} f(x, y) dx dy$, may be interpreted as a volume between the surface $z=f(x, y)$ and the $x, y$-plane:

Fig. 1 $V = \iint_R f(x,y) dx dy$

We see from Fig. 1 that on one hand,

$V=\int\limits_{c}^{d}\int\limits_{a}^{b}f(x,y)\;dx dy,\quad\quad\quad(2-1)$

but on the other hand,

$V=\int\limits_{a}^{b}\int\limits_{c}^{d}f(x,y)\;dy dx.\quad\quad\quad(2-2)$

Since (2-1) and (2-2) amount to the same thing, it must be true that

$\int\limits_{c}^{d}\int\limits_{a}^{b}f(x,y)\;dx dy=\int\limits_{a}^{b}\int\limits_{c}^{d}f(x,y)\;dy dx.\quad\quad\quad(\star)$

In other words, the order of integration can be interchanged.

The beat of a different drum

“I had learned to do integrals by various methods shown in a book that my high school physics teacher Mr. Bader had given me. [It] showed how to differentiate parameters under the integral sign — it’s a certain operation. It turns out that’s not taught very much in the universities; they don’t emphasize it. But I caught on how to use that method, and I used that one damn tool again and again. [If] guys at MIT or Princeton had trouble doing a certain integral, [then] I come along and try differentiating under the integral sign, and often it worked. So I got a great reputation for doing integrals, only because my box of tools was different from everybody else’s, and they had tried all their tools on it before giving the problem to me.” (Richard P. Feynman, “Surely You’re Joking, Mr. Feynman!”, Bantam Book, 1985)

“Feynman’s Trick” is a powerful technique for evaluating nontrivial definite integrals. It is based on Leibniz’s rule (LR-1) which states:

Let $f(x, \beta)$ be a differentiable function in $\beta$ with $\frac{\partial}{\partial \beta}f(x, \beta)$ continuous. Then

$\frac{d}{d\beta}\int\limits_{a}^{b}f(x, \beta)\;dx = \int\limits_{a}^{b}\frac{\partial}{\partial \beta}f(x, \beta)\;dx.$

This is how it works in practice:

To evaluate definite integral

$\int\limits_{a}^{b} f(x)\;dx,$

we introduce into integrand $f(x)$ a parameter $\beta$ such that

$f(x) = f(x, \beta)$ when $\beta = \beta_0\quad\quad\quad(1)$

and

$\int\limits_{a}^{b}f(x,\beta)\;dx = f_*$ when $\beta = \beta_*.\quad\quad\quad(2)$

Suppose

$\int\limits_{a}^{b}\frac{\partial}{\partial \beta}f(x,\beta)\; dx=g(\beta).\quad\quad\quad(3)$

By Leibniz’s rule,

$\frac{d}{d\beta}\int\limits_{a}^{b}f(x, \beta)\;dx=\int\limits_{a}^{b}\frac{\partial}{\partial \beta}f(x,\beta)\; dx\overset{(3)}{=}g(\beta).\quad\quad\quad(4)$

Integrate (4) with respect to $\beta$:

$\int \left(\frac{d}{d\beta}\int\limits_{a}^{b}f(x, \beta)\;dx\right) \; d\beta = \int g(\beta)\;d\beta\implies \int\limits_{a}^{b}f(x,\beta)\;dx = G(\beta)+ C\quad(5)$

where $G'(\beta)=g(\beta).$

Let $\beta=\beta_*,$

$\int\limits_{a}^{b}f(x, \beta_*)\;dx \overset{(2)}{=} f_* \overset{(5)}{=} G(\beta_*) + C\implies C=f_*-G(\beta_*).$

Let $\beta = \beta_0,$

$\int\limits_{a}^{b}f(x) \;dx \overset{(1)}{=} \int\limits_{a}^{b}f(x, \beta_0)\;dx\overset{(5)}{=} G(\beta_0) + C.$

And so,

$\int\limits_{a}^{b}f(x) \;dx = G(\beta_0) + f_*-G(\beta_*).$

Now, let’s play “Feynman’s Trick” on the definite integral $\int\limits_{0}^{1} \frac{x-1}{\log(x)}\;dx:$

Differentiating $\int\limits_{0}^{1}\frac{x^{\beta}-1}{\log(x)}\;dx$ with respect to $\beta$, we have

$\frac{d}{d\beta}\int\limits_{0}^{1}\frac{x^\beta-1}{\log(x)}\;dx =\int\limits_{0}^{1}\frac{\partial}{\partial \beta}\frac{x^{\beta}-1}{\log(x)}\;dx=\int\limits_{0}^{1}\frac{x^{\beta}\log(x)}{\log(x)}\;dx=\int\limits_{0}^{1}x^{\beta}\;dx=\frac{x^{\beta+1}}{\beta+1}\bigg|_{0}^{1}=\frac{1}{\beta+1}.$

It means

$\int\limits_{0}^{1}\frac{x^{\beta}-1}{\log(x)}\;dx=\int\frac{1}{\beta+1}\;d\beta = \log(\beta+1)+C\overset{\beta=0}{\implies} 0=\log(0+1) +C \implies C=0.$

Hence,

$\int\limits_{0}^{1}\frac{x^{\beta}-1}{\log(x)}\;dx = \log(\beta+1).$

Let $\beta=1$,

$\int\limits_{0}^{1}\frac{x-1}{\log(x)}\;dx = \log(2).$
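The result $\log(2)$ can be confirmed by direct numerical integration (a Python sketch of mine; the midpoint rule avoids the removable endpoints):

```python
import math

def integrand(x):
    return (x - 1) / math.log(x)

# midpoint rule on (0, 1); the integrand extends continuously to 0 and 1
n = 20_000
h = 1.0 / n
total = sum(integrand((k + 0.5) * h) for k in range(n)) * h

assert abs(total - math.log(2)) < 1e-6
```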

Exercise-1 Given $\int\limits_{-\infty}^{\infty}\frac{e^{2x}}{ae^{3x}+b}\;dx = \frac{2\pi}{3\sqrt{3}a^{2/3}b^{1/3}}$ where $a, b >0.$ Show that

$\int\limits_{-\infty}^{\infty}\frac{e^{2x}}{(e^{3x}+1)^2}\;dx = \frac{2\pi}{9\sqrt{3}}.$

Playing “Feynman’s Trick” on Indefinite Integrals – Tongue in Cheek

“Differentiation under the integral sign”, a.k.a., “Feynman’s trick” is a clever application of Leibniz’s rule (LR-1):

Let $f(x, \beta)$ be continuous and have a continuous derivative $\frac{\partial}{\partial \beta}$ in a domain of $x\beta-$plane that includes the rectangle $a \le x \le b, \beta_1 \le \beta \le \beta_2,$

$\frac{d}{d\beta}\int\limits_{a}^{b}f(x, \beta)\;dx =\int\limits_{a}^{b}\frac{\partial}{\partial \beta}f(x, \beta)\;dx.$

“Feynman’s trick” is known to be an effective technique for evaluating difficult definite integrals such as $\int\limits_{0}^{1}\frac{x-1}{\log(x)}\;dx.$

Is Feynman’s “trick” applicable to indefinite integrals too?

In other words, is it also true that

$\frac{\partial}{\partial \beta}\int f(x, \beta)\;dx + C = \int \frac{\partial}{\partial \beta}f(x, \beta)\;dx?\quad\quad\quad(\star)$

Let’s apply $(\star)$ to the indefinite integral $\int \log(x)\;dx:$

$\frac{\partial}{\partial \beta}\int x^{\beta}\;dx+C = \int \frac{\partial}{\partial \beta}x^{\beta}\;dx = \int x^{\beta}\log(x)\;dx;$

i.e.,

$\int x^{\beta}\log(x)\;dx =\frac{\partial}{\partial \beta}\int x^{\beta}\;dx+C.\quad\quad\quad(1)$

Since $\int x^{\beta}\; dx = \frac{x^{\beta+1}}{\beta+1} + C_1$, the right-hand side of (1) is

$\frac{\partial}{\partial \beta}\left(\frac{x^{\beta+1}}{\beta+1}+C_1\right) + C= \frac{x^{\beta+1}\log(x)\cdot (\beta+1) - x^{\beta+1}}{(\beta+1)^2}+C.$

It means

$\int x^{\beta}\log(x)\;dx = \frac{x^{\beta+1}\log(x)\cdot (\beta+1) - x^{\beta+1}}{(\beta+1)^2}+C.$

For $\beta = 0$, we have

$\int \log(x)\;dx = x\log(x)-x+C.$

It checks out:

$\frac{d}{dx}(x\log(x)-x+C) = \log(x)+x \cdot \frac{1}{x}-1 = \log(x).$

Let’s also evaluate $\int x e^{x}\;dx:$

By $(\star),$

$\frac{\partial}{\partial \beta}\int e^{\beta x}\;dx + C = \int\frac{\partial}{\partial \beta} e^{\beta x}\;dx=\int x e^{\beta x}\;dx$.

That is,

$\int x e^{\beta x}\;dx = \frac{\partial}{\partial \beta}\int e^{\beta x}\; dx+C= \frac{\partial}{\partial \beta}\left(\frac{1}{\beta}e^{\beta x} + C_1\right)+C=\frac{e^{\beta x}\beta x - e^{\beta x}}{\beta^2}+C.\quad\quad(2)$

Let $\beta = 1$, (2) yields

$\int x e^{x} \;dx= e^x (x-1)+C.$

It checks out too:

$\frac{d}{dx} (e^x (x-1)+C) = e^x(x-1) + e^x = x e^x$.

Now that we have gained confidence in the validity of $(\star),$ let’s prove it.

Given

$G_1(x) = \int g(x)\;dx, \quad G_2(x) = \int\limits_{a}^{x} g(t)\; dt$

where $g(x)$ is a function of $x$, we have,

$(G_1(x)-G_2(x))' = \left(\int g(x)\;dx\right)' - \left(\int\limits_{a}^{x} g(t) \;dt\right)'= g(x)-g(x) = 0$.

It means

$G_1(x)-G_2(x)=C\implies \int g(x)\;dx= \int\limits_{a}^{x}g(t)\;dt + C.$

When $x=b$,

$\int g(x)\;dx = \int\limits_{a}^{b}g(x)\;dx+C;$

i.e., for $f(x,\beta)$, a function of both $x$ and $\beta,$

$\int f(x,\beta)\;dx = \int\limits_{a}^{b} f(x, \beta)\;dx+C\quad\quad\quad(3)$

and,

$\int \frac{\partial}{\partial \beta} f(x,\beta)\;dx = \int\limits_{a}^{b} \frac{\partial}{\partial \beta}f(x, \beta)\;dx+C.\quad\quad\quad(4)$

It follows that

$\frac{\partial}{\partial \beta}\int f(x,\beta)\;dx \overset{(3)}{=} \frac{\partial}{\partial \beta}\left(\int\limits_{a}^{b}f(x,\beta)\;dx+C\right)$

$=\frac{\partial}{\partial \beta}\int\limits_{a}^{b}f(x,\beta)\;dx$

$\overset{\textbf{LR-1}}{=} \int\limits_{a}^{b}\frac{\partial}{\partial \beta}f(x,\beta)\;dx$

$\overset{(4)}{=} \int\frac{\partial }{\partial \beta}f(x,\beta)\;dx -C.$

And so,

$\frac{\partial}{\partial \beta}\int f(x,\beta)\;dx +C= \int \frac{\partial}{\partial \beta} f(x,\beta)\;dx.$

Exercise-1 Evaluate $\int x^2 e^x\;dx$.

hint: $\frac{\partial}{\partial \beta}\int x e^{\beta x}\;dx = \int x^2 e^{\beta x}\; dx.$

Jump!

Problem Given

$f(x) = e^x + \int\limits_{0}^{x} (t-x)f(t)\;dt\quad\quad\quad(\star)$

where $f(x)$ is a continuous function, find $f(x).$

Solution

From $(\star)$, we see that

$f(0) = 1;$

$f(x) = e^x + \int\limits_{0}^{x} \left(t\cdot f(t) - x\cdot f(t)\right) \;dt = e^x + \int\limits_{0}^{x} t\cdot f(t)\;dt-x\cdot \int\limits_{0}^{x}f(t)\;dt$.

And so,

$\frac{df(x)}{dx}=\frac{de^x}{dx} + \frac{d}{dx}\int\limits_{0}^{x}tf(t)\;dt - \frac{d}{dx}\left(x\cdot \int\limits_{0}^{x}f(t)\;dt\right)$

$=e^x+\frac{d}{dx}\int\limits_{0}^{x}tf(t)\;dt-\left(\int\limits_{0}^{x}f(t)\;dt + x\frac{d}{dx}\int\limits_{0}^{x}f(t)\;dt\right)$

$\overset{\textbf{FTC}}{=}e^x + xf(x) -\int\limits_{0}^{x} f(t)\;dt - xf(x)$

$= e^x - \int\limits_{0}^{x}f(t)\;dt$

That is,

$\frac{d}{dx}f(x)= e^x - \int\limits_{0}^{x}f(t)\;dt\implies f'(0) = 1.$

It follows that

$\frac{d^2f(x)}{dx^2}=\frac{d}{dx}\left(e^x-\int\limits_{0}^{x}f(t)\;dt\right)=e^x-\frac{d}{dx}\left(\int\limits_{0}^{x}f(t)\;dt\right)\overset{\textbf{FTC}}{=}e^x-f(x).$

Solving

$\begin{cases} f''(x)=e^x-f(x) \\f(0)=1\\f'(0)=1 \end{cases}\quad\quad\quad(\star\star)$

gives

$f(x) = \frac{1}{2}(\sin(x)+\cos(x)+e^x).$

Fig. 1
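The solution can be checked against the original integral equation $(\star)$ numerically (my sketch, using a midpoint-rule quadrature):

```python
import math

def f(x):
    # candidate solution obtained from the initial-value problem above
    return 0.5 * (math.sin(x) + math.cos(x) + math.exp(x))

def rhs(x, n=20_000):
    # e^x + integral of (t - x)*f(t) over [0, x], by the midpoint rule
    h = x / n
    s = sum(((k + 0.5) * h - x) * f((k + 0.5) * h) for k in range(n)) * h
    return math.exp(x) + s

for x in [0.3, 1.0, 2.0]:
    assert abs(f(x) - rhs(x)) < 1e-6
```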

Notice that the derivation of $(\star\star)$ can be simplified if Leibniz’s Rule (LR-1, see “A Semi-Rigorous Derivation of Leibniz’s Rule”) is applied:

$\frac{df(x)}{dx} = e^x + \underline{\frac{d}{dx}\int\limits_{0}^{x}(t-x)f(t)\;dt}$

$\overset{\textbf{LR-1}}{=} e^x + \underline{\int\limits_{0}^{x}\frac{\partial}{\partial x}(t-x)f(t)\;dt}$

$=e^x+\int\limits_{0}^{x}-1\cdot f(t)\;dt$

$= e^x-\int\limits_{0}^{x}f(t)\;dt$

$\implies \frac{d^2f(x)}{dx^2}=e^x-\frac{d}{dx}\int\limits_{0}^{x}f(t)\;dt\overset{\textbf{FTC}}{=}e^x-f(x).$

Fig. 2 shows that Omega CAS Explorer‘s Maxima engine is both FTC- and LR-1-aware:

Fig. 2

Exercise-1 Given:

$f(x) = \int\limits_{0}^{x}t\cdot f(x-t)\;dt+\sin(x)$

where $f(x)$ is a continuous function, find $f(x).$

hint: Let $u=x-t, t = x-u; t=0\implies u=x; t=x\implies u=0; \frac{du}{dt}=-1$;

$f(x) = \int\limits_{x}^{0}(x-u)\cdot f(u)\cdot (-1)\;du + \sin(x)=\int\limits_{0}^{x}(x-u)f(u)\;du+\sin(x).$