Notes on Differential Equations

Added to collisteru.net on June 08, 2024

Last updated on December 29, 2024

Differential equations describe relationships between functions and their derivatives. We solve them by finding the set of functions that make the relationship true. Differential equations are important to us, as engineers and applied mathematicians, because they appear over and over again in science but are difficult to solve. In fact, the vast majority of differential equations are not solvable analytically. This class covers the tricks we use for the ones that are.

Notational note: for this class, t is the independent variable and we solve for y(t).

Classifying Differential Equations

Classifying differential equations helps us quickly recognize patterns in the solvable ones.

We classify differential equations on three axes: Order, Linear vs. Nonlinear, and Ordinary vs. Partial

The order of a differential equation is the order of the highest derivative present in the equation. A differential equation is linear iff it can be expressed in the following form:

\begin{bmatrix} a_1 & a_2 & a_3 & a_4 & \cdots & a_n \end{bmatrix} \cdot \begin{bmatrix} 1 \\ y \\ y' \\ y'' \\ \vdots \\ y^{(n)} \end{bmatrix} = g(t)

In other words, the function y and its derivatives (1) are never raised to any power other than the first and (2) never appear “inside” other functions, like \sin(y).

A differential equation is partial if it contains partial derivatives; it is ordinary if it contains ordinary (univariate) derivatives. This class focuses on ordinary differential equations, or ODEs.

Note that there are also systems of differential equations in which we consider more than one at a time. They can typically be rewritten as differential equations involving vectors.

Examples of Differential Equations

  • Newton’s Second Law, expressed as
F(x, y) = m \cdot x''
  • Verhulst’s famous equation for logistic growth of individual species
  • Fourier’s Heat Equation

Solving Differential Equations

One of the most interesting features of differential equations is that they usually have an infinite number of solutions. Each one corresponds to a different value for the constant of integration, which you’ll remember from Calculus I and which we introduce when solving ODEs.

When your teacher asks for the general solution of an ODE, they want the formula (or set thereof) that describes all the solutions. These formulas leave the constant of integration unspecified. When you assign it a value, you turn the general solution into a particular solution.

Watch out, however: discontinuities can introduce the need for more than one equation in the general solution. You need them all to be correct.

A linear algebraist would say that solutions to ODEs form a subspace of the set of continuously differentiable functions. For general solutions with only one constant, that subspace has only one dimension.

Direction fields illustrate this principle. They chart how y' varies along the t and y axes. The ODE’s solutions follow the slopes like gulls over wind streams.

It is also no accident that direction fields resemble vector fields. A differential equation’s solutions trace specific paths through the general fluid flow that a vector field expresses. This is one of the features that makes them indispensable to science.
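
As an illustration (not from the course), here is a minimal Python sketch that plots the direction field of the hypothetical ODE y' = y - t with matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

# Direction field for the hypothetical ODE y' = F(t, y) = y - t
T, Y = np.meshgrid(np.linspace(-3, 3, 21), np.linspace(-3, 3, 21))
dY = Y - T                      # the slope y' at each grid point
dT = np.ones_like(dY)

# Normalize the arrows so only the direction (the slope) is shown
L = np.hypot(dT, dY)
plt.quiver(T, Y, dT / L, dY / L, angles='xy')
plt.xlabel('t')
plt.ylabel('y')
plt.title("Direction field of y' = y - t")
plt.show()
```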

Solving Linear ODEs

Homogeneous Linear ODEs

The right-hand side is zero and the left-hand side is a linear combination of the function and its derivatives. One example looks like the following:

y' - y = 0

Just add y to both sides and integrate. You can do this if you have an arbitrary-order derivative of y on one side and a function of t on the other.
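
Spelling out the steps for the example above:

y' = y \;\Longrightarrow\; \frac{dy}{y} = dt \;\Longrightarrow\; \ln|y| = t + C_0 \;\Longrightarrow\; y = Ce^{t}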

Nonhomogeneous Linear ODEs

You can also solve a nonhomogeneous first-order linear ODE in the following form:

y' + p(t)\, y = g(t)

… by multiplying all terms by an integrating factor.

This integrating factor is \mu(t) = e^{\int p(t)\, dt}

And the solution is:

y = \frac{1}{\mu} \left( \int \mu g + C \right)

(Usually the hardest part of this type of problem is integrating \mu(t) g(t), which may require integration by parts, a complicated u-substitution, or both.)

Once you’ve solved for an ODE’s general solution, an initial value problem will give you an expression of the form y(a) = b and ask you to solve for C as well.

  • WATCH OUT: Solutions containing t raised to negative or fractional powers are often domain-limited. Check for singularities at points like t = 0.
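
Here is a small sympy sketch of the integrating-factor recipe above, applied to the hypothetical example y' + 2y = e^{-t}:

```python
import sympy as sp

t, C = sp.symbols('t C')

# Hypothetical example coefficients: y' + 2*y = exp(-t)
p = sp.Integer(2)
g = sp.exp(-t)

mu = sp.exp(sp.integrate(p, t))                 # integrating factor e^{∫p dt}
y_general = (sp.integrate(mu * g, t) + C) / mu  # y = (∫ mu*g + C) / mu

print(sp.expand(y_general))                     # C*exp(-2*t) + exp(-t)
```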

Separation of Variables

You can do this if your equation can be expressed as:

y' = T(t)\, Y(y)

In this case, you can manipulate it into this form:

\frac{dy}{dt} = T(t) \cdot Y(y)
Y^{-1}(y)\, dy = T(t)\, dt

My warning from earlier applies here with special force. The manipulation above often results in domain-limited curves. After you solve for the general solution, always check the domain. You may need to solve for y’s domain, then backsolve for the restrictions on t.
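
A quick hypothetical example that also shows the domain issue:

y' = t\, y^2 \;\Longrightarrow\; \frac{dy}{y^2} = t\, dt \;\Longrightarrow\; -\frac{1}{y} = \frac{t^2}{2} + C \;\Longrightarrow\; y = \frac{-2}{t^2 + 2C}

For negative C, the solution blows up where t^2 + 2C = 0, so each curve lives only on part of the t-axis.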

Exact Equations

An exact equation appears in standard form like this:

M(t, y)\, dt + N(t, y)\, dy = 0

Where there exists a function \psi such that:

M = \frac{\partial \psi}{\partial t}, \quad N = \frac{\partial \psi}{\partial y}

We call \psi the scalar potential, and a theorem by Poincaré guarantees its existence (on a simply connected domain) so long as M_y = N_t.

If you know that M_y = N_t, one way to find the scalar potential is just to look at M and N and find, by inspection, a function \psi such that \psi_t = M and \psi_y = N.

Another method is a line integral, where the path is an L-shape that first goes horizontally from (0, 0) to (t, 0) and then vertically from (t, 0) to (t, y).

For the first, horizontal part of that two-part line integral, note that y is identically zero, so any term containing y vanishes.
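
A tiny example to make this concrete:

M = 2ty, \quad N = t^2, \quad M_y = 2t = N_t

By inspection, \psi(t, y) = t^2 y works, so the solutions satisfy t^2 y = C.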

Making Exact Equations with Integrating Factors

We can sometimes turn a non-exact equation into one that is exact with an integrating factor. We find a positive function \mu(t, y) such that

(\mu M)_y = (\mu N)_t

This equation is usually harder to solve than the original ODE. There are two cases when it isn’t: when \mu is a function solely of t and when it is a function solely of y. In both cases, relatively easy differential equations for \mu can be derived from the equation above. Keep in mind that when \mu is a function solely of t, \mu_y = 0; the opposite applies when \mu is a function solely of y.
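
For instance, if \mu depends only on t, then \mu_y = 0 and the condition above reduces to a linear ODE for \mu:

\mu M_y = \mu' N + \mu N_t \;\Longrightarrow\; \frac{d\mu}{dt} = \frac{M_y - N_t}{N}\, \mu

which works whenever (M_y - N_t)/N happens to depend only on t. (The analogous formula for \mu(y) uses (N_t - M_y)/M.)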

If neither of these works, \mu must be a function of both t and y. Consider trying \mu(t, y) = t^m y^n.

Change of Variables

In some cases, a change of variables can convert one type of ODE into another type that we can solve.

Rationically Homogeneous Equations (RHEs)

I call them “rationically homogeneous” even though they’re actually just called “homogeneous” to distinguish them from the homogeneous equations discussed earlier.

RHEs come in the form y' = F(t, y), where F depends only on the ratio y / t.

We solve this with the change of variables v = \frac{y}{t}
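
To see why this works: since y = vt, we have y' = v + t v', so the equation becomes

v + t\frac{dv}{dt} = F(v) \;\Longrightarrow\; t\frac{dv}{dt} = F(v) - v

which is separable.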

Bernoulli Differential Equations

This is our first encounter with differential equations in which y is raised to a power other than one. Bernoulli Equations come in the form:

y' + p(x)\, y = q(x)\, y^n

The crucial step to solving these is dividing the whole equation by y^n, leaving

y^{-n} y' + p(x)\, y^{1-n} = q(x)

This leaves the right-hand side free of y, allowing us to transform the equation into a classic linear first-order ODE with the following substitution:

v = y^{1-n}

Differentiating gives v' = (1-n) y^{-n} y', which turns the above into

\frac{v'}{1-n} + p(x)\, v = q(x)

which we can solve as a linear first-order ODE.

Riccati Equations

The equation

y' = a_0(t) + a_1(t)\, y + a_2(t)\, y^2

is a Riccati equation if a_0, a_1, a_2 are continuous and a_2 is not zero (if it is, the equation is just linear, and we can solve it using methods we already know).

We can solve Riccati equations by knowing a particular solution y_p. Rewrite the equation using t and v, where

v = y - y_p

Then solve the resulting equation for v and substitute back to recover y.
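
To see why this works: since y = y_p + v and y_p' = a_0 + a_1 y_p + a_2 y_p^2, subtracting the two equations gives

v' = (a_1 + 2 a_2 y_p)\, v + a_2 v^2

which is a Bernoulli equation with n = 2 and yields to the substitution from the previous section.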

Autonomous Differential Equations

Some differential equations don’t depend directly on t but instead only on y. These are known as autonomous equations and often crop up in nature.

Consider a population of jackrabbits in a field of boundless plenty. The only restriction on their growth is the number of rabbits available to breed and have children. The more rabbits there are, the more rabbit couplings can occur per gestational period and the more quickly the population grows. We model this with the equation below:

\frac{dy}{dt} = ay

These can usually be solved with separation of variables.

By Power Series

Sometimes we can express the solution as a power series. Differentiate the series term by term, plug it into the equation, and you can often derive a recurrence that reproduces the coefficients. If you’re even luckier, you might recognize the resulting series as the expansion of a known function and get a closed-form solution.
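
A quick sketch with the hypothetical equation y' = y: assume y = \sum_{n=0}^{\infty} a_n t^n. Then

y' = \sum_{n=0}^{\infty} (n+1)\, a_{n+1} t^n = \sum_{n=0}^{\infty} a_n t^n \;\Longrightarrow\; a_{n+1} = \frac{a_n}{n+1} \;\Longrightarrow\; a_n = \frac{a_0}{n!}

so the series sums to y = a_0 e^t.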

The Fundamental Theorem of Ordinary Differential Equations

All of the equations we’ve looked at so far have been “nice,” in that they’ve been carefully designed to fall into one of the forms that we can solve relatively easily. However, the general differential equation will not be so nice– in fact, there are many that we don’t know how to solve with analytic (algebraic) methods.

For all this difficulty, however, there is a ray of hope. The Fundamental Theorem of Ordinary Differential Equations tells us that a solution to an initial value problem always exists as long as a few (relatively weak) conditions are met.

Consider the following ODE:

y' = F(t, y), \quad y(t_0) = y_0

When F is a continuous function on some domain in the Cartesian plane, there exists a certain region of that plane around the starting point where a function exists that solves that ODE. Moreover, if F_y is continuous, then that solution will be unique.

This theorem is important for a number of reasons:

  1. It guarantees that for “nice” functions, a solution to the ODE will always exist, even if we might not be able to solve for it.
  2. It guarantees that that solution will be unique if F_y is continuous, which is often more important than knowing what that solution is.
  3. It tells us that we can determine the maximal domain of the solution without actually solving for it, because that domain is precisely the minimum domain among those of F and y.
  4. From this theorem we can derive information about how the solutions to an ODE vary as the initial conditions vary. They do so smoothly, like a piece of hair being drawn out on a curler… and furthermore, solutions to the same ODE at different initial conditions (also known as integral lines) can be transformed (rectified) by an invertible function to become parallel lines.
However, keep in mind that this theorem is only actually usable under certain, specific circumstances– as always, watch out for discontinuities.

So far, we’ve only covered solutions to ODEs based on analytical methods, i.e. those created by manipulating symbols. For many equations, this is impossible, and so we resort to numerical methods. The upside of these is that they are valid for any differential equation. The downside is that they are mere approximations.

Euler Method

Consider the following initial value problem:

y' = F(t, y)
y(t_0) = y_0

You can use Euler’s method to create a polygonal path that approximates the solution on the domain where it is defined.

You can show that the limit of this polygonal path, as the number of segments grows (and the step size shrinks), is the actual solution that we were trying to approximate.

Remember that Euler’s method means:

y_{k+1} = y_k + F(t_k, y_k) \cdot h

Where h is the step size. The approximation becomes better as h decreases. Interestingly, the difference between the Euler approximation at any point and the actual value of the solution at that point is linearly bounded by h:

|\phi_h(T) - \phi(T)| < Kh

K is a constant that varies based on the function F and the length of the total interval between the initial value and the point of extrapolation.
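
Here is a minimal Python sketch of the method, run on the hypothetical test problem y' = y, y(0) = 1 (whose exact value at t = 1 is e):

```python
def euler(F, t0, y0, h, steps):
    """Approximate the IVP y' = F(t, y), y(t0) = y0 with Euler's method."""
    t, y = t0, y0
    for _ in range(steps):
        y = y + F(t, y) * h   # y_{k+1} = y_k + F(t_k, y_k) * h
        t = t + h
    return y

# Hypothetical test problem: y' = y, y(0) = 1; exact answer y(1) = e ≈ 2.71828
for h in (0.1, 0.01, 0.001):
    approx = euler(lambda t, y: y, 0.0, 1.0, h, int(round(1.0 / h)))
    print(f"h = {h}: y(1) ≈ {approx:.5f}")  # the error shrinks roughly linearly with h
```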

Picard Method

The Euler Method approximates a point along a curve by tip-toeing along the curve’s edge towards the target number. The Picard Method chooses to approximate functions instead: given an initial guess for the solution y(t), the Picard Method produces a better guess, and iterating it ad infinitum gets us to any desired accuracy.

The general formula:

y(x) = y_0 + \int_{x_0}^{x} f(t, y(t))\, dt

Your initial guess for y(x) can be as bad as you want; it can even be a constant. Plug it into the integral as y(t) and the Picard method will produce something better. If your ODE is of a certain form, for example a linear ODE, successive iterations of Picard’s Method will produce polynomials that build up the Taylor series expansion of the analytic solution.
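
A small sympy sketch of Picard iteration for the hypothetical IVP y' = y, y(0) = 1, whose iterates are exactly the partial sums of the Taylor series of e^x:

```python
import sympy as sp

x, t = sp.symbols('x t')

# Hypothetical IVP: y' = y, y(0) = 1 (exact solution: e^x)
F = lambda t_, y_: y_          # right-hand side F(t, y)
y0 = sp.Integer(1)

approx = y0                    # crude initial guess: the constant y0
for _ in range(4):             # four Picard iterations
    approx = y0 + sp.integrate(F(t, approx.subs(x, t)), (t, 0, x))
    print(sp.expand(approx))   # 1 + x, then 1 + x + x**2/2, and so on
```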

The disadvantages of this method are that it converges slowly and is difficult to do by hand (unlike the Euler method). However, it does have the advantage of replacing the condition that F_y be continuous with the weaker Lipschitz condition, which holds iff there exists a constant K such that

|f(x_1) - f(x_2)| \leq K\, |x_1 - x_2|

In other words, a function satisfies the Lipschitz condition at a point if the rest of its graph lies within the two sectors of the Cartesian plane bounded by the lines of slope +K and -K through that point.

So far, we have only considered first-order differential equations, which involve at most a function and its first derivative. Now we will consider differential equations with higher derivatives, including second order and above.

We’ll mostly cover tricks to solve DEs in the following form:

y'' + p_1(t)\, y' + p_2(t)\, y = g(t)

There are two categories of equations in this form: homogeneous ones where g(t)=0g(t) = 0, and nonhomogeneous ones where g(t)0g(t) \neq 0 . We’ll start with the homogeneous case, which is simpler.

Homogeneous Linear Differential Equations

To abbreviate a long story, we derive from these equations a polynomial characteristic equation, and when we solve for the roots of that characteristic polynomial, we’ll have a better idea of the form of the solutions to the original ODE. For example, when both functions p are constant, the DE above corresponds to the characteristic equation

r^2 + p_1 r + p_2 = 0

If the roots of this polynomial are r_1 and r_2, then \phi_1(t) = e^{r_1 t} and \phi_2(t) = e^{r_2 t} solve the original DE, and the general solution is:

y = C_1 \phi_1(t) + C_2 \phi_2(t)
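
A quick example: y'' - 3y' + 2y = 0 has characteristic equation r^2 - 3r + 2 = (r-1)(r-2) = 0, so r = 1, 2 and

y = C_1 e^{t} + C_2 e^{2t}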

When the two roots are the same, it’s not enough to just combine these two terms– we still need a second, linearly independent solution. In practice, we get one by multiplying one of the two terms by t, giving the following form:

y = (C_1 + C_2 t)\, e^{\lambda t}

(Apparently D’Alembert came up with this idea, and it’s a good example of intuition in reasoning about differential equations.)

When the two roots are complex, Euler’s Identity is used to express the solutions purely in terms of functions on the reals. The typical solution takes the following form:

y = C_1 e^{\alpha t} \cos(\beta t) + C_2 e^{\alpha t} \sin(\beta t)

Where \alpha is the real part of the complex roots and \beta is the imaginary part.

Note also that all of the fundamentals here also apply to higher-order differential equations.

Nonhomogeneous Linear Differential Equations: The Method of Undetermined Coefficients

What happens when g(t) isn’t zero? Then we have a nonhomogeneous equation, and things get a little more interesting. Solutions to this type of equation will always appear in the form y_h + y_p, where y_h is the “homogeneous” solution that we’ve already discussed, and y_p is a “particular” solution that we derive from g(t).

How do we find the particular solution? First, assume that g(t) appears in the following form:

g(t) = e^{\alpha t} \left( P(t)\cos(\beta t) + Q(t)\sin(\beta t) \right)

Where P, Q are polynomials.

The Method of Undetermined Coefficients has us find the particular solution directly from g(t). The particular solution will ultimately appear in the following form:

y_p = t^{s} e^{\alpha t} \left( (A_0 + A_1 t + \ldots + A_n t^n) \cos \beta t + (B_0 + B_1 t + \ldots + B_n t^n) \sin \beta t \right)

Importantly, s relates to whether \alpha + i\beta in the right-hand side is a root of the characteristic polynomial. If it isn’t, then s = 0. If it is a root, but only once (i.e. not a double root), then s = 1. If it’s a double root, then s = 2.
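
For example, for y'' - y = e^{t} we have \alpha = 1, \beta = 0, and \alpha + i\beta = 1 is a simple root of r^2 - 1 = 0, so s = 1 and we guess y_p = A t e^{t}. Substituting gives

A e^{t}(t + 2) - A t e^{t} = 2A e^{t} = e^{t} \;\Longrightarrow\; A = \frac{1}{2}, \quad y_p = \frac{1}{2}\, t e^{t}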

By the principle of superposition, if g(t) isn’t in the special form we covered above but is instead a sum of functions in that special form, we can derive y_p for each of those separate terms and add them for the final, correct solution.

At this point we have two groups of unknown coefficients to solve for: C_1, C_2, \ldots, C_n, which come from the general solution, and A_0, \ldots, A_n, B_0, \ldots, B_n, which come in with the particular solution.

For the first group of coefficients, plug in your initial conditions and solve for the coefficient set through the resulting linear system.

Note that you’ll need at least as many initial conditions as you have coefficients.

An Interlude for Euler-Cauchy Equations

Euler-Cauchy equations come in the form:

t^2 y'' + \alpha t y' + \beta y = g(t), \quad t > 0

We really shouldn’t be able to solve this, because the coefficients aren’t constant. However, there are still at least two solution methods– in fact, this is the only known form of DE with non-constant coefficients that we always know how to solve!

That being said, some trickery is required. Let y = t^r.

Given this, we also have:

\frac{dy}{dt} = r t^{r-1}, \qquad \frac{d^2y}{dt^2} = r(r-1)\, t^{r-2}

Substituting this into the homogeneous version of the original equation (g(t) = 0) gives:

t^2 \left[ r(r-1)\, t^{r-2} \right] + \alpha t \left[ r t^{r-1} \right] + \beta t^r = 0
r(r-1)\, t^r + \alpha r t^r + \beta t^r = 0
r^2 t^r - r t^r + \alpha r t^r + \beta t^r = 0
r^2 - r + \alpha r + \beta = 0
r^2 + (\alpha - 1) r + \beta = 0

This is a characteristic equation of the form we already know how to solve.

The Wronskian

Here is where you’ll learn why linear algebra is a prerequisite for differential equations. You see, vectors aren’t just bags of numbers– we can also describe solutions to DEs as linear combinations of functions in a vector space.

In the general solutions we gave two sections ago, every term is multiplied by a different arbitrary constant CnC_n. This means that a general solution is a sum of functions multiplied by constants, or a linear combination.

One question we often want to answer is whether the functions in the linear combination are linearly independent, i.e. whether none of them can be expressed as a linear combination of the others.

Using the rules we’ve defined so far and some elementary calculus, we can construct the Wronskian, a useful tool for understanding differential equations and the relationships between them.

W[f_1, f_2] = f_1 f_2' - f_1' f_2

If the Wronskian is nonzero for all t, then we know that f_1 and f_2 are linearly independent.

Alternatively, according to Abel’s Identity:

Where y'' + p_1(t)\, y' + p_2(t)\, y = 0:

W = C e^{-\int p_1}

This also allows us to derive a second solution from the first. We can compute the Wronskian with Abel’s Identity and, knowing the first solution, plug them into the determinant equation for the Wronskian and produce a new differential equation that gives the other part of the general solution. This method is known as reduction of order.
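
A short sympy check for the hypothetical equation y'' - 3y' + 2y = 0, whose fundamental solutions are e^t and e^{2t} (here Abel’s Identity predicts W = Ce^{3t}):

```python
import sympy as sp

t = sp.symbols('t')
f1, f2 = sp.exp(t), sp.exp(2*t)   # fundamental solutions of y'' - 3y' + 2y = 0

# Wronskian as the determinant of the 2x2 matrix of the functions and their derivatives
W = sp.Matrix([[f1, f2], [f1.diff(t), f2.diff(t)]]).det()
print(sp.simplify(W))             # exp(3*t): nonzero for all t, so f1 and f2 are independent
```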

However, the Wronskian can give us more…

Variation of Parameters

What happens when g(t) isn’t in the correct form to use the Method of Undetermined Coefficients? As long as we still know both functions in the homogeneous general solution, we can compute the particular solution with variation of parameters, which relies on the Wronskian. The proper formula is:

y_p = \frac{-\phi_1}{a} \int \frac{\phi_2 g}{W[\phi_1, \phi_2]} + \frac{\phi_2}{a} \int \frac{\phi_1 g}{W[\phi_1, \phi_2]}
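
For instance, for y'' + y = \sec t (which undetermined coefficients can’t handle), take a = 1 (the coefficient of y''), \phi_1 = \cos t, \phi_2 = \sin t, and W = 1:

y_p = -\cos t \int \sin t \sec t \, dt + \sin t \int \cos t \sec t \, dt = \cos t \ln|\cos t| + t \sin t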

This works just as well with DEs with an order higher than two.

Differential Operators

An operator is a function on functions: it accepts one or more functions as input and returns one or more functions as output. Differentiation is an operator; so is integration. There are many more operators than these, and the differential operators are the ones that involve differentiation in some way. This class briefly went over the Integral Operator, Convolution, and the Laplace Transform, but here I’ll focus entirely on the latter.

The Laplace Transform is an operator defined as:

\mathfrak{L}y(s) = \int_0^\infty e^{-st}\, y(t)\, dt

Along with Fourier Transforms, Laplace Transforms are the most important transforms in engineering. They’re used for analyzing linear time-invariant systems, including the input/output response, stability, and behavior in terms of bounded and unbounded output.
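
A few transforms computed with sympy (a quick sketch; the examples are hypothetical and assume t, s > 0):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Laplace transforms of a few common signals
for f in (sp.Integer(1), t, sp.exp(-2*t), sp.sin(3*t)):
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f, '->', F)   # 1/s, 1/s**2, 1/(s + 2), 3/(s**2 + 9)
```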

While we’re on the subject, let’s talk about another direct application of this sort of differential equation: damped oscillating systems.

Damping in Harmonic Oscillators

Atoms are particles orbiting other particles, and all forms of light are oscillating waves. Therefore, both matter and light can be considered harmonic oscillators. Our world is made of them.

In a damped harmonic oscillator, friction is present that slows down the oscillation in proportion to the system’s current velocity. The faster the oscillator moves, the greater the damping is. Therefore, the equation of motion becomes:

m x'' = -kx - \gamma x', \quad \gamma \geq 0

\gamma must be nonzero in order for the system to actually be damped, of course. This equation is in the standard form that we’ve already studied, so we can analyze it with the constant-coefficient methods we’ve already studied. In particular, three different scenarios arise depending on the value of \gamma^2 in relation to 4mk:

\text{Underdamped}: \gamma^2 - 4mk < 0, \; \text{or} \; \gamma < \sqrt{4mk}
\text{Critically Damped}: \gamma^2 - 4mk = 0, \; \text{or} \; \gamma = \sqrt{4mk}
\text{Overdamped}: \gamma^2 - 4mk > 0, \; \text{or} \; \gamma > \sqrt{4mk}

Underdamped

The solution is

x = A e^{-\frac{\gamma}{2m} t} \cos(\beta t - \phi)

Which exponentially decays even as it oscillates. This is the “damping” that most of us are used to observing in the oscillations of everyday life: the system moves back and forth, with each subsequent back being less than the forth that preceded it, before eventually decaying to nothing.

Overdamped

Imagine you’re in a swimming pool, dragging behind you a spring with a ball attached to its end. You stop and watch the spring slowly push against the depths, settling into its equilibrium position. By the time the spring is fully relaxed, the ball has lost whatever velocity it had and doesn’t overshoot.

The water is so viscous that it overdamps the ball. In mathematics, this looks like:

x = C_1 e^{r_1 t} + C_2 e^{r_2 t}

Because m, k, and \gamma are all positive, both roots r_1 and r_2 are negative, so both of these terms represent exponential decay. The system decays to its equilibrium point and stays there.

Critically Damped

Critically damped systems occur when the damping is just enough to make sure that the system does not overshoot the equilibrium point, but not any more. This is desirable because a critically damped system moves more quickly towards equilibrium than one that is overdamped. In this case, the solution is:

x = e^{-\omega_0 t} \left( x_0 + (v_0 + \omega_0 x_0)\, t \right)

which is an exponential decay function multiplied by a linear function.
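
As a sanity check on the three regimes, here is a small numerical sketch with scipy’s solve_ivp, using hypothetical values m = 1 and k = 4 (so critical damping happens at \gamma = \sqrt{4mk} = 4):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters for m*x'' = -k*x - gamma*x'
m, k = 1.0, 4.0
for gamma, label in [(1.0, "underdamped"), (4.0, "critically damped"), (10.0, "overdamped")]:
    # Rewrite the second-order equation as a first-order system: state = [x, v]
    def rhs(t, state):
        x, v = state
        return [v, (-k * x - gamma * v) / m]

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], max_step=0.01)
    print(label, "x(10) ≈", sol.y[0, -1])   # all decay to equilibrium; critical gets there fastest
```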

A Few Points on Higher-Order Differential Equations

We may define the Wronskian for a differential equation of arbitrary order with an arbitrary number of solutions y_1, \ldots, y_n:
W(y_1, y_2, \ldots, y_n) = \begin{vmatrix} y_1 & y_2 & \ldots & y_n \\ y'_1 & y'_2 & & \vdots \\ \vdots & & \ddots & \vdots \\ y^{(n-1)}_1 & \ldots & \ldots & y_n^{(n-1)} \end{vmatrix}

Abel’s Identity also makes a return appearance to provide a nice identity for this case:

W = C e^{-\int p_1}

Finally, we can also perform variation of parameters for equations of arbitrary order, but the way to do it is a bit weird. Define a new determinant of the following matrix:

W_k = \begin{vmatrix} \phi_1 & \ldots & 0 & \ldots & \phi_n \\ \vdots & & \vdots & & \vdots \\ \phi_1^{(n-2)} & \ldots & 0 & \ldots & \phi_n^{(n-2)} \\ \phi_1^{(n-1)} & \ldots & 1 & \ldots & \phi_n^{(n-1)} \end{vmatrix}

(This is just the Wronskian with the k-th column replaced with a one-hot vector.)

Then the particular solution of your DE is:

y_p = \sum_{k=1}^{n} \phi_k \int \frac{g\, W_k}{W[\phi_1, \ldots, \phi_n]}

Systems of Linear ODEs

Systems are useful for describing situations where there are multiple continuously-varying quantities in interplay. Classic situations include predator-prey relationships and price relations in economics. There’s actually nothing fundamentally different about the way we solve these; we mostly just apply what we’ve learned already to vector equations. If this doesn’t speak to the power of linear algebra in analysis, I don’t know what does.

We write a system as:

\textbf{x}' = \textbf{F}(t)\, \textbf{x}
\textbf{x} = (x_1, \ldots, x_n)

\textbf{F} is an n \times n matrix, and \textbf{x} is a column vector of unknown functions.

Just like in the single-equation case, the solution to a homogeneous system (the right-hand side is zero) is a linear combination of continuous functions. However, in the systems case we have a vector of functions.

Plug the coefficients of the system into a matrix, then find the eigenvalues and eigenvectors of this matrix. When the eigenvalues \lambda_1, \ldots, \lambda_n are real and distinct, with eigenvectors \vec{v}_1, \ldots, \vec{v}_n, the general solution is

\textbf{x} = C_1 e^{\lambda_1 t}\, \vec{v}_1 + \ldots + C_n e^{\lambda_n t}\, \vec{v}_n

When a pair of eigenvalues is complex, \alpha \pm i\beta, with eigenvector \vec{p} + i\vec{q}, the corresponding part of the general solution is

C_1 e^{\alpha t} \left[ \cos(\beta t)\, \vec{p} - \sin(\beta t)\, \vec{q} \right] + C_2 e^{\alpha t} \left[ \sin(\beta t)\, \vec{p} + \cos(\beta t)\, \vec{q} \right]
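
A small numerical sketch of the distinct-real-eigenvalue case, using a hypothetical coefficient matrix:

```python
import numpy as np

# Hypothetical constant-coefficient system x' = A x
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)   # eigenvalues -1 and -2 (order may vary)
print(eigvecs)   # the columns are the corresponding eigenvectors

# General solution: x(t) = C1 * exp(-t) * v1 + C2 * exp(-2t) * v2
```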

Variations of Parameters with Systems

Alright, but what if you have a nonhomogeneous system? The basic idea is the same, but the procedure behind variation of parameters in this case is a little more involved.

We define the fundamental matrix of the system, whose columns are the fundamental solutions, as:

\Psi(t) = (\textbf{x}_1(t) \; \ldots \; \textbf{x}_n(t))

Your particular solution is:

\textbf{x}_p = \Psi \int \Psi^{-1} \textbf{g}

The above expression is simple to write and hellacious to compute. Make sure you remember the formula for the matrix inverse.

Phase Planes and Phase Portraits

Remember how we mapped two-dimensional differential equations and the vector flows around them in part one? Well, here we do that with systems in arbitrary dimensions. Phase portraits record the trajectories of solutions to systems of differential equations in the space of their coordinates, with more coordinates corresponding to more equations.

When asked to sketch a phase portrait for a system, here’s what you should do:

  • Find the eigenvalues of the system
  • Find the eigenvectors that correspond to the eigenvalues. If the eigenvalues are real, then the eigenvectors will slash across the phase portrait, with each solution following them.

We can also classify an equilibrium solution into a number of different bins:

  • Asymptotically stable solutions are ones which have a boundary such that all solutions that enter or start in that boundary eventually move to the origin. They include nodal sinks, star sinks, and spiral sinks.
  • Stable but not asymptotically stable solutions are slightly weaker: they have a region such that no solution that enters or starts in that region ever escapes. Centers fall into this category.
  • Unstable solutions have neither of these properties. They include nodal sources, improper nodal sources, star sources, spiral sources, and saddles.

Fun Fact: Hilbert’s 16th problem asks for the upper bound of the number of limit cycles of a polynomial system

x' = P_n(x, y)
y' = Q_n(x, y)

This problem is still open for all n greater than 1!

Boundary Value Problems

To wrap things up, we’ll consider a type of differential equation with different boundary conditions:

y'' + \lambda y = 0, \qquad y(0) = y(\pi) = 0

We still have two conditions, but instead of y(0) = A, y'(0) = B, we have two conditions on y itself, imposed at two different points.

Furthermore, it turns out that these boundary conditions are actually very strict! The only solution is y = 0 unless \lambda is a perfect square n^2, in which case the solutions are y = C \sin(nt).
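
To see why: for \lambda > 0 the general solution is y = A\cos(\sqrt{\lambda}\, t) + B\sin(\sqrt{\lambda}\, t). The condition y(0) = 0 forces A = 0, and y(\pi) = 0 then requires

B \sin(\sqrt{\lambda}\, \pi) = 0

which has a nonzero solution only when \sqrt{\lambda} is an integer n. (For \lambda \leq 0 you can check that only y = 0 survives the boundary conditions.)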

Special thanks to Vikram Bhagavatula for help with editing, fact-checking, and applications.

