
Section 7.2 Method of series solutions

The idea of this section is to use series to construct solutions to linear differential equations that don't have obvious closed form solutions (which is most of them). First, we need to know that the differential equation is itself sufficiently well-behaved to have analytic solutions.

Definition 7.2.1.

A point \(x = x_0\) is called an ordinary point for a homogeneous second order differential equation of the form

\begin{equation*} y'' + P(x) y' + Q(x) y = 0 \end{equation*}

if \(P\) and \(Q\) are analytic at \(x_0\text{.}\) (Typical functions for \(P, Q\) are trig functions, exponentials, polynomials, and rational functions, which are analytic away from asymptotes.)

Every point \(x=x_0\) is ordinary for the differential equation

\begin{equation*} y'' + e^x y = 0. \end{equation*}

Every positive \(x_0\) is ordinary for the equation

\begin{equation*} y'' + (\ln x) y = 0. \end{equation*}

However, \(\ln x\) has an asymptote at \(x=0\text{,}\) so this is a singular point for the equation. (Singular points are important to consider but quite complicated to analyze, and addressed in further advanced courses in ODE).

The big idea of power series solutions is that if

\begin{equation*} y'' + P y' + Q y = 0, \end{equation*}

then at any ordinary point \(x = x_0\) we can find two linearly independent power series solutions of the differential equation, centered at \(x_0\text{,}\) of the form

\begin{equation*} y = \sum_{k=0}^\infty c_k (x - x_0)^k. \end{equation*}

The challenge is to find the coefficients \(c_k\text{.}\)

To illustrate the method, we'll begin with the power series approach to a first order differential equation

\begin{equation*} y' - 2y = 0. \end{equation*}

Of course, we already know that the solution to this equation should be \(y = k e^{2x}\) by the method of characteristic equations, so we should expect our series answer to recover that.

Since the constant coefficient \(-2\) is analytic, every point of the equation is ordinary; for convenience, we will work at the ordinary point \(x = 0\text{.}\) We will assume that the solution \(y = f(x)\) is a power series, which gives the expressions

\begin{align*} y \amp = \sum_{k=0}^\infty c_k x^k = c_0 + c_1 x + c_2 x^2 + \ldots;\\ y' \amp = \sum_{k=0}^\infty (k+1) c_{k+1} x^k = c_1 + 2c_2 x + 3c_3 x^2 + \ldots. \end{align*}

Plugging into the differential equation, we get

\begin{align*} (c_1 + 2c_2 x + 3c_3 x^2 + \ldots) - 2(c_0 + c_1 x + c_2 x^2 + \ldots) \amp = 0\\ (c_1 - 2c_0) + (2c_2 - 2c_1)x + (3c_3 - 2c_2)x^2 + \ldots\amp = 0. \end{align*}

What we get when we set all the coefficients equal to 0 is called a recurrence: if we know \(c_0\text{,}\) we can get \(c_1\text{,}\) which lets us get \(c_2\) and so on. We use substitution to get all the values in terms of the first value \(c_0\text{.}\)

\begin{equation*} \begin{array}{ccc} c_1 - 2c_0 = 0 \amp \Rightarrow \amp c_1 = 2c_0 \\ 2c_2 - 2c_1 = 0 \amp \Rightarrow \amp c_2 = c_1 = 2c_0 \\ 3c_3 - 2c_2 = 0 \amp \Rightarrow \amp c_3 = \frac{2}{3}c_2 = \frac{4}{3}c_0 \\ 4c_4 - 2c_3 = 0 \amp \Rightarrow \amp c_4 = \frac{2}{4}c_3 = \frac{2}{3} c_0 \\ \vdots \amp \vdots \amp \vdots \end{array} \end{equation*}

Now we can substitute these expressions into our assumed solution \(y\text{:}\)

\begin{align*} y \amp = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + \ldots\\ \amp = c_0 + (2c_0) x + (2c_0)x^2 + (\frac{4}{3}c_0)x^3 + \ldots\\ \amp = c_0\underbrace{\left(1 + 2x + 2x^2 + \frac{4}{3} x^3 + \ldots \right)}_{\text{homogeneous solution}} \end{align*}

The quantity \(c_0\) comes from an initial condition, and the series in this case turns out to be the power series of the function \(e^{2x}\) (check if you like!).
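The recurrence from the table above, \(c_{k+1} = \frac{2}{k+1} c_k\text{,}\) can also be run numerically. The following is a minimal sketch (the function name is ours, not from any library) that generates the coefficients with exact rational arithmetic; with \(c_0 = 1\) they are exactly \(2^k/k!\text{,}\) the Taylor coefficients of \(e^{2x}\text{:}\)

```python
from fractions import Fraction

def series_coeffs(n_terms, c0=Fraction(1)):
    """First n_terms coefficients of the power series solution of
    y' - 2y = 0, from the recurrence (k+1) c_{k+1} - 2 c_k = 0."""
    coeffs = [c0]
    for k in range(n_terms - 1):
        coeffs.append(Fraction(2, k + 1) * coeffs[-1])
    return coeffs

# With c_0 = 1 these are 2^k / k!, matching the table above.
print([str(c) for c in series_coeffs(6)])  # ['1', '2', '2', '4/3', '2/3', '4/15']
```

Using `Fraction` rather than floats keeps the coefficients exact, so they can be compared directly with the hand computation.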

The next example, Airy's equation, first appeared in work on optics in 1838. It is a very simple second order linear equation, yet it does not have any closed form solutions; that is, the only approach to finding solutions is to use series methods. The solutions turn out to have applications in quantum physics as well as in optics. The equation is the straightforward-looking

\begin{equation*} y'' - x y = 0. \end{equation*}

Since \(P = 0\) and \(Q = -x\text{,}\) both of which are analytic functions, series solutions exist everywhere; for convenience we take the center point \(x_0 = 0\text{.}\) Then we have the expressions

\begin{gather*} y = \sum_{k=0}^\infty c_k x^k = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + c_4 x^4 + \ldots\\ y' = \sum_{k=0}^\infty (k+1) c_{k+1} x^k = c_1 + 2c_2 x + 3 c_3 x^2 + 4 c_4 x^3 + \ldots\\ y'' = \sum_{k=0}^\infty (k+2)(k+1) c_{k+2} x^k = 2c_2 + 6c_3 x + 12 c_4 x^2 + 20c_5 x^3 + \ldots \end{gather*}

Plugging into Airy's equation, we get

\begin{align*} \amp(2c_2 + 6c_3 x + 12 c_4 x^2 + 20c_5 x^3 + \ldots)\\ \amp\hspace{1in} - x(c_0 + c_1 x + c_2 x^2 + c_3 x^3 + c_4 x^4 + \ldots) = 0\\ \amp(2c_2 + 6c_3 x + 12 c_4 x^2 + 20c_5 x^3 + \ldots)\\ \amp\hspace{1in} - (c_0 x + c_1 x^2 + c_2 x^3 + c_3 x^4 + c_4 x^5 + \ldots) = 0\\ \amp\text{which, after collecting like powers of } x\text{, becomes}\\ \amp2c_2 + (6c_3 - c_0)x + (12c_4 - c_1)x^2 + (20c_5 - c_2)x^3 \\ \amp\hspace{1in}+ (30c_6 - c_3)x^4 + (42c_7 - c_4)x^5 + \ldots = 0 \end{align*}

Now we set the coefficients of each term equal to 0.
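In general, matching the coefficient of \(x^k\) for \(k \ge 1\) (the constant term gives \(2c_2 = 0\) on its own) yields the recurrence

\begin{equation*} (k+2)(k+1)c_{k+2} - c_{k-1} = 0, \qquad \text{i.e.} \qquad c_{k+2} = \frac{c_{k-1}}{(k+2)(k+1)}, \end{equation*}

which steps the index by three; this is why the coefficients sort into three separate families.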

First, notice that \(2c_2 = 0\) means that \(c_2 = 0\text{.}\) But \(20c_5 - c_2 = 0\text{,}\) and so \(c_5 = 0\) as well. Since \(c_8\) is given in terms of \(c_5\text{,}\) \(c_8\) is also 0, and so on for the family of coefficients \(c_2, c_5, c_8, c_{11}, \ldots\text{.}\) This is one family of coefficients.

The second family is in terms of \(c_0\text{:}\)

\begin{equation*} \begin{array}{ccc} 6c_3 - c_0 = 0 \amp \Rightarrow \amp c_3 = \frac{1}{6} c_0 \\ 30 c_6 - c_3 = 0 \amp \Rightarrow \amp c_6 = \frac{1}{6\cdot 5} c_3 = \frac{1}{6\cdot5\cdot3\cdot2} c_0 \\ 72 c_9 - c_6 = 0 \amp \Rightarrow \amp c_9 = \frac{1}{9\cdot 8} c_6 = \frac{1}{9\cdot8\cdot6\cdot5\cdot3\cdot2}c_0\\ \vdots \amp \vdots \amp \vdots \end{array} \end{equation*}

which gives expressions for \(c_3, c_6, c_9, c_{12}, \ldots\) in terms of \(c_0\) (this will correspond to the first linearly independent solution).

The third family of solutions is given by \(c_1\text{:}\)

\begin{equation*} \begin{array}{ccc} 12c_4 - c_1 = 0 \amp \Rightarrow \amp c_4 = \frac{1}{4\cdot3}c_1 \\ 42c_7 - c_4 = 0 \amp \Rightarrow \amp c_7 = \frac{1}{7\cdot6}c_4 = \frac{1}{7\cdot6\cdot4\cdot3}c_1 \\ 90c_{10} - c_7 = 0 \amp \Rightarrow \amp c_{10} = \frac{1}{10\cdot9\cdot 7\cdot6\cdot 4\cdot3}c_1 \\ \vdots \amp \vdots \amp \vdots \end{array} \end{equation*}

which gives expressions for \(c_4, c_7, c_{10}, c_{13}, \ldots\) in terms of \(c_1\) (this represents the second linearly independent solution).

Then we are ready to solve the equation. Starting with our assumed solution of the form \(y = \sum c_k x^k\) and sorting into the three families we've identified, we get

\begin{align*} y \amp = c_0 + c_1 x + c_2 x^2 + c_3 x^3 \ldots\\ \amp = c_0 + c_3 x^3 + c_6 x^6 + c_9 x^9 + \ldots \hspace{.5in} c_0 \text{ family}\\ \amp \phantom{=} + c_1 x + c_4 x^4 + c_7 x^7 + c_{10} x^{10} + \ldots \hspace{.5in} c_1 \text{ family}\\ \amp \phantom{=} + c_2 x^2 + c_5 x^5 + c_8 x^8 + \ldots \hspace{.5in} c_2 \text{ family}\\ \amp = c_0\left( 1 + \frac{1}{3\cdot2} x^3 + \frac{1}{6\cdot5\cdot3\cdot2} x^6 + \frac{1}{9\cdot8\cdot6\cdot5\cdot3\cdot2}x^9 + \ldots \right)\\ \amp \phantom{=} + c_1\left( x + \frac{1}{4\cdot3} x^4 + \frac{1}{7\cdot6\cdot4\cdot3} x^7 + \frac{1}{10\cdot9\cdot 7\cdot6\cdot 4\cdot3} x^{10} + \ldots\right) \\ \amp \phantom{=} + 0 + 0 + 0 + 0 + \ldots \end{align*}

Then the two linearly independent solutions to Airy's equation are

\begin{equation*} y_1 = 1 + \frac{1}{3\cdot2} x^3 + \frac{1}{6\cdot5\cdot3\cdot2} x^6 + \frac{1}{9\cdot8\cdot6\cdot5\cdot3\cdot2}x^9 + \ldots \end{equation*}

and

\begin{equation*} y_2 = x + \frac{1}{4\cdot3} x^4 + \frac{1}{7\cdot6\cdot4\cdot3} x^7 + \frac{1}{10\cdot9\cdot 7\cdot6\cdot 4\cdot3} x^{10} + \ldots \end{equation*}

where neither function has a closed form. (The complicated behavior of the solutions, visible in their plots, is part of why there isn't anything much better than a series or improper integral form. The plot below is a slight rearrangement of the two solutions into a different fundamental set, but it preserves the same behavior: the functions oscillate and then become exponential.)
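The three-term recurrence behind these families can be checked numerically. The sketch below (function name is ours) generates coefficients from \(c_{k+2} = c_{k-1}/\big((k+2)(k+1)\big)\) with \(c_2 = 0\text{,}\) starting from \(c_0 = 1, c_1 = 0\) for \(y_1\) and from \(c_0 = 0, c_1 = 1\) for \(y_2\text{,}\) and reproduces the leading coefficients written out above:

```python
from fractions import Fraction

def airy_coeffs(c0, c1, n_terms):
    """Power series coefficients for y'' - x y = 0 via the recurrence
    (k+2)(k+1) c_{k+2} = c_{k-1}, with 2 c_2 = 0 forcing c_2 = 0."""
    c = [Fraction(c0), Fraction(c1), Fraction(0)]
    for k in range(1, n_terms - 2):
        c.append(c[k - 1] / ((k + 2) * (k + 1)))
    return c[:n_terms]

y1 = airy_coeffs(1, 0, 11)  # c_0 family: 1, x^3/6, x^6/180, ...
y2 = airy_coeffs(0, 1, 11)  # c_1 family: x, x^4/12, x^7/504, ...

print(y1[3], y1[6], y1[9])   # 1/6 1/180 1/12960
print(y2[4], y2[7], y2[10])  # 1/12 1/504 1/45360
```

Note that every third coefficient in each run stays zero, which is the \(c_2, c_5, c_8, \ldots\) family computed by hand above.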

Another important example is the so-called Bessel differential equation:

\begin{equation*} x^{2}y^{\prime\prime}+xy^{\prime}+\left(x^{2}-\nu^{2}\right)y=0,\,\,\,\,\,\,x>0 \end{equation*}

where \(\nu\) is some constant. For sake of simplicity, let us pick \(\nu=0\text{,}\) so that

\begin{equation*} x^{2}y^{\prime\prime}+xy^{\prime}+x^{2}y=0\,\,\,\,\,\,x>0 \end{equation*}

and we can rewrite this equation by dividing by \(x^{2}\) to get

\begin{equation*} y^{\prime\prime}+\frac{1}{x}y^{\prime}+y=0,\,\,\,\,\,\,x>0. \end{equation*}

Much like Airy's equation, this seems like such a simple equation, but it turns out there is no nice solution in terms of elementary functions. One way to solve this Bessel ODE is to use power series. (Since \(x > 0\text{,}\) the function \(\frac{1}{x}\) is analytic.) One finds that

\begin{align*} y_{1}(x) \amp =J_{0}(x),\\ y_{2}(x) \amp =Y_{0}(x) \end{align*}

where

\begin{equation*} J_{0}(x) =\sum_{n=0}^{\infty}\frac{(-1)^{n}}{\left(n!\right)^{2}2^{2n}}x^{2n}. \end{equation*}

\(J_{0}(x)\) is called the Bessel function of first kind of order \(\nu=0\text{.}\) \(Y_{0}(x)\) is called the Bessel function of second kind of order \(\nu=0\text{.}\) \(Y_{0}(x)\) can also be represented by a series, but is more complicated. Another way to write \(Y_{0}\) is as an integral,

\begin{equation*} Y_{0}(x)=-\frac{2}{\pi}\int_{1}^{\infty}\frac{\cos\left(tx\right)}{\sqrt{t^{2}-1}}dt,\,\,x>0 \end{equation*}

Thus the general solution to the Bessel equation

\begin{equation*} y^{\prime\prime}+\frac{1}{x}y^{\prime}+y=0,\,\,\,\,x>0 \end{equation*}

is given by

\begin{align*} y(x) \amp =c_{1}J_{0}(x)+c_{2}Y_{0}(x)\\ \amp =c_{1}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{\left(n!\right)^{2}2^{2n}}x^{2n}-c_{2}\int_{1}^{\infty}\frac{2}{\pi}\frac{\cos\left(tx\right)}{\sqrt{t^{2}-1}}dt. \end{align*}
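As a check on the series for \(J_0\text{,}\) substituting \(y = \sum c_k x^k\) into \(x^2 y'' + x y' + x^2 y = 0\) and matching the coefficient of \(x^k\) gives \(k^2 c_k + c_{k-2} = 0\) for every \(k \ge 2\text{.}\) This sketch (function name is ours) verifies that the coefficients \(c_{2n} = (-1)^n/\big((n!)^2 2^{2n}\big)\) from the series above satisfy that recurrence exactly:

```python
from fractions import Fraction
from math import factorial

def j0_coeff(k):
    """Coefficient of x^k in the series for J_0: zero for odd k,
    (-1)^n / ((n!)^2 2^{2n}) for k = 2n."""
    if k % 2 == 1:
        return Fraction(0)
    n = k // 2
    return Fraction((-1) ** n, factorial(n) ** 2 * 2 ** (2 * n))

# The series solves x^2 y'' + x y' + x^2 y = 0 exactly when
# k^2 c_k + c_{k-2} = 0 for every k >= 2.
for k in range(2, 30):
    assert k * k * j0_coeff(k) + j0_coeff(k - 2) == 0
print("recurrence verified")
```

The odd coefficients vanish because the recurrence forces \(c_3, c_5, \ldots\) to zero once \(c_1 = 0\text{,}\) which is why only even powers appear in \(J_0\text{.}\)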