Sunday, 12 June 2016

mp.mathematical-physics - Motivating the Laplace transform definition

This is not exactly an answer to the original question; rather, it is for the benefit of MO user vonjd, who wanted more details about the similarities between solving differential equations through Laplace transforms and solving recurrence relations using generating functions.



Since I was going to write it anyway, I figured I might as well post it here for anyone interested.



I will work through one example of each, which should be enough to show the similarities. In each case we have a linear equation with constant coefficients; this is where both methods really shine, although both can handle some variable-coefficient equations more or less gracefully. Ultimately, the biggest challenge is applying the inverse transform: always possible in the linear constant-coefficient case, not so easy otherwise.



Differential Case



Take the function $y(t)=2e^{3t}-5e^{2t}$. It is a solution of the IVP:
\begin{equation}
y''-5y'+6y=0; \qquad y(0)=-3,\quad y'(0)=-4.
\end{equation}
If we apply the Laplace transform to the equation, letting $Y(s)$ denote the transform of $y(t)$,
we get
$$ s^2Y(s)-sy(0)-y'(0)-5[sY(s)-y(0)]+6Y(s)=0.$$
Substitute the values of $y(0)$ and $y'(0)$, and solve to obtain:
$$ Y(s)=\frac{11-3s}{s^2-5s+6};$$
and apply partial fractions to get:
$$ Y(s)= \frac{2}{s-3}+\frac{-5}{s-2}.$$
This is where you exclaim: "Wait a second! I recognize this." Since it is well known that
$$\mathcal{L}[e^{at}]= \frac{1}{s-a}$$
for all $a$, by linearity we recognize the function we started from.
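
(If you want to check this computation mechanically, here is a minimal sketch using Python's sympy; the use of sympy and the variable names are my own addition, not part of the original argument.)

# Minimal sketch: verify the Laplace-transform computation with sympy.
from sympy import symbols, exp, laplace_transform, apart, simplify

t, s = symbols('t s', positive=True)

y = 2*exp(3*t) - 5*exp(2*t)                   # the function we started from
Y = laplace_transform(y, t, s, noconds=True)  # its Laplace transform

# Y should agree with (11 - 3s)/(s^2 - 5s + 6) obtained from the transformed IVP,
print(simplify(Y - (11 - 3*s)/(s**2 - 5*s + 6)))  # expect 0
# and partial fractions should recover 2/(s-3) - 5/(s-2).
print(apart((11 - 3*s)/(s**2 - 5*s + 6), s))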



Recurrence Case



Let $(a_n)$ be the sequence defined for all $n\geq 0$ by $a_n=2(3^n)-5(2^n)$.
It is a solution of the IVP:
\begin{equation}
a_{n+2}-5a_{n+1}+6a_n=0; \qquad a_0=-3,\quad a_1=-4.
\end{equation}
Define the generating function $A(x)$ to be:
$$ A(x)=\sum_{n=0}^{\infty} a_n\; x^n. $$
Multiplying each line of the recurrence by $x^{n+2}$ gives:
$$ a_{n+2}\; x^{n+2}-5a_{n+1}\; x^{n+2}+6a_n\; x^{n+2}=0. $$
You can sum those lines for all $n\geq 0$, do a small change of index in each sum, and factor out relevant powers of $x$ to get
$$ \sum_{n=2}^{\infty} a_n\; x^n-5x \sum_{n=1}^{\infty} a_n\; x^n+6x^2 \sum_{n=0}^{\infty} a_n\; x^n=0.$$
Or in other terms:
$$ A(x)-a_1x-a_0-5x[A(x)-a_0]+6x^2A(x)=0.$$
Substituting $a_0$ and $a_1$ and solving for $A(x)$ then gives, with partial fractions:
$$ A(x)=\frac{11x-3}{6x^2-5x+1}=\frac{2}{1-3x}+\frac{-5}{1-2x}.$$
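
(As a sanity check of my own, not part of the original answer, sympy confirms this closed form and its partial-fraction split.)

# Minimal sketch: check the generating function and its partial fractions.
from sympy import symbols, simplify, apart

x = symbols('x')
A = (11*x - 3) / (6*x**2 - 5*x + 1)

# The claimed partial-fraction form 2/(1-3x) - 5/(1-2x):
print(simplify(A - (2/(1 - 3*x) - 5/(1 - 2*x))))  # expect 0
print(apart(A, x))  # the same split, in whichever normal form sympy prefers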



Look familiar? It should! If you substitute $x=1/s$, you will recover $sY(s)$ from the differential example.
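
(A quick symbolic check of the substitution, again my own addition:)

# Minimal sketch: substituting x = 1/s into A(x) gives s*Y(s).
from sympy import symbols, simplify

s, x = symbols('s x')
A = (11*x - 3) / (6*x**2 - 5*x + 1)
Y = (11 - 3*s) / (s**2 - 5*s + 6)
print(simplify(A.subs(x, 1/s) - s*Y))  # expect 0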



For generating functions, the key fact we need here is the sum of the geometric series:
$$ \sum_{n=0}^{\infty} (ax)^n=\frac{1}{1-ax}, $$
valid for $|ax|<1$ (or as an identity of formal power series, with no convergence needed).
Thus, by linearity again, we recognize the sequence we started from in the expression for $A(x)$.
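
(To watch the coefficients come out, one can expand $A(x)$ as a power series and compare with the closed form; a small sketch, where the truncation order 6 is an arbitrary choice of mine:)

# Minimal sketch: the power-series coefficients of A(x) are a_n = 2*3^n - 5*2^n.
from sympy import symbols, series

x = symbols('x')
A = 2/(1 - 3*x) - 5/(1 - 2*x)
print(series(A, x, 0, 6))                   # -3 - 4*x - 2*x**2 + 14*x**3 + ... + O(x**6)
print([2*3**n - 5*2**n for n in range(6)])  # [-3, -4, -2, 14, 82, 326]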



Closing Remarks



In both theories, there is the notion of the characteristic polynomial $p$ of a linear equation with constant coefficients. This polynomial ends up being the denominator of the Laplace transform, and the reversed polynomial $x^d p(1/x)$ (where $d$ is the degree of $p$) is the denominator of the generating function. In both cases, multiple roots are very well managed by the theories and explain very naturally the appearance of otherwise "magical" solutions of the type $te^{\lambda t}$ or $n(r^n)$.
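
(To illustrate the repeated-root remark with a concrete example of my own, not taken from the original answer: a double root 2 makes both machines produce the "magical" factor automatically. A sketch assuming sympy:)

# Minimal sketch (invented example): a double root 2 in the characteristic polynomial.
from sympy import symbols, inverse_laplace_transform, Function, rsolve

s, t = symbols('s t', positive=True)
# A repeated factor (s - 2)^2 in the denominator inverts to t*exp(2*t)
# (possibly times a Heaviside(t) factor, depending on the sympy version).
print(inverse_laplace_transform(1/(s - 2)**2, s, t))

n = symbols('n', integer=True)
a = Function('a')
# The analogous recurrence a_{n+2} - 4a_{n+1} + 4a_n = 0 with a_0 = 0, a_1 = 2 gives n*2^n.
print(rsolve(a(n + 2) - 4*a(n + 1) + 4*a(n), a(n), {a(0): 0, a(1): 2}))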



The biggest mystery to me is the historical perspective: did one technique pre-date the other, and were the connections actively exploited, or did both techniques develop independently for a while before the similarities were noticed?
