Ordinary differential equations (ODEs)
Scalar ODEs
Ordinary differential equation (ODE): equation of the form $F\left(x, y, y', \ldots, y^{(n)}\right) = 0$, where $y = y(x)$
Order of an ODE: order of the highest derivative appearing in the equation
Autonomous ODE: one in which the independent variable does not appear explicitly, i.e., $F\left(y, y', \ldots, y^{(n)}\right) = 0$
Linear ODE: equation of the form $y^{(n)} + p_{n-1}(x)\,y^{(n-1)} + \cdots + p_1(x)\,y' + p_0(x)\,y = f(x)$; it is homogeneous if $f \equiv 0$
Nonhomogeneous linear ODE: a linear ODE with $f \not\equiv 0$
The general solution to a linear ODE is $y = y_c + y_p$, where $y_c$ (the complementary solution) is the general solution to the associated homogeneous equation and $y_p$ is any particular solution.
Superposition principle: a linear combination of solutions to a homogeneous linear ODE is also a solution to the ODE 1
A basis for the space of solutions to a homogeneous linear ODE is called a fundamental system of solutions.
First-order scalar ODEs
Picard-Lindelöf theorem
Given the initial value problem $y' = f(x, y)$, $y(x_0) = y_0$, suppose that $f$ is continuous in $x$ and Lipschitz continuous in $y$ (uniformly in $x$) near $(x_0, y_0)$. Then there exists a unique solution to the IVP on $[x_0 - \varepsilon, x_0 + \varepsilon]$ for some $\varepsilon > 0$.
Separable equations
To solve a separable equation $y' = f(x)\,g(y)$:
- Separate: $\frac{dy}{g(y)} = f(x)\,dx$
- Integrate: $\int \frac{dy}{g(y)} = \int f(x)\,dx$, then solve for $y$ where possible
First-order linear ODEs
To solve a first-order linear ODE of the form $y' + p(x)\,y = f(x)$:
- Multiply both sides by an integrating factor $\mu(x) = e^{\int p(x)\,dx}$, which yields $\left(\mu(x)\,y\right)' = \mu(x)\,f(x)$
- Integrate: $y = \frac{1}{\mu(x)}\left(\int \mu(x)\,f(x)\,dx + C\right)$ (a symbolic check is sketched below)
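As a sanity check of the procedure above, here is a minimal sketch; the equation $y' + 2y = e^{-x}$ and the use of SymPy are illustrative choices, not examples from these notes.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
C1 = sp.Symbol('C1')

# Illustrative first-order linear ODE: y' + 2*y = exp(-x), i.e. p(x) = 2, f(x) = exp(-x)
ode = sp.Eq(y(x).diff(x) + 2*y(x), sp.exp(-x))
sol = sp.dsolve(ode, y(x))
print(sol)                      # y(x) = C1*exp(-2*x) + exp(-x) (possibly factored differently)

# Compare with the integrating-factor formula y = (1/mu)*(int(mu*f dx) + C), mu = exp(int p dx)
mu = sp.exp(sp.integrate(2, x))
explicit = (sp.integrate(mu*sp.exp(-x), x) + C1) / mu
print(sp.simplify(explicit - sol.rhs))   # 0, so the two answers agree
```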
Bernoulli equations
To solve a Bernoulli equation $y' + p(x)\,y = q(x)\,y^n$ (where $n \neq 0, 1$):
- Substitute $v = y^{1-n}$, which yields the first-order linear equation $v' + (1 - n)\,p(x)\,v = (1 - n)\,q(x)$
Given a first-order autonomous equation $y' = f(y)$, its critical (equilibrium) points are the zeroes of $f$; each critical point $y_0$ gives a constant solution $y \equiv y_0$.
The phase line of such an equation consists of the critical points together with arrows indicating the sign of $f$ between them; it determines whether each critical point is stable or unstable.
A general first-order scalar ODE $y' = f(x, y)$ that cannot be solved in closed form can still be studied qualitatively (via its slope field) or numerically.
Second-order scalar ODEs
We will consider only linear second-order ODEs.
Existence and uniqueness of solutions to second-order linear ODEs
Given the equation $y'' + p(x)\,y' + q(x)\,y = f(x)$, suppose that $p, q, f$ are continuous on some interval $I$. Then for any fixed $x_0 \in I$ and $b_0, b_1 \in \mathbb{R}$, there exists a unique solution to the ODE on $I$ satisfying $y(x_0) = b_0$ and $y'(x_0) = b_1$.
Superposition of solutions to second-order linear homogeneous ODEs
Given the equation $y'' + p(x)\,y' + q(x)\,y = 0$, suppose that $p, q$ are continuous. Then the solution space of the ODE is two-dimensional; viz., if $y_1, y_2$ are linearly independent solutions to the ODE, the general solution is $y = C_1 y_1 + C_2 y_2$.
Now we consider the constant-coefficient second-order linear homogeneous ODE $a y'' + b y' + c y = 0$ (with $a \neq 0$).
Second-order linear homogeneous ODEs with constant coefficients
To solve an ODE of the form $a y'' + b y' + c y = 0$:
- Compute the roots $r_1, r_2$ of its characteristic equation $a r^2 + b r + c = 0$
- If $r_1, r_2$ are real and distinct ($b^2 - 4ac > 0$), the general solution is $y = C_1 e^{r_1 x} + C_2 e^{r_2 x}$ 2
- If $r_1, r_2$ are real and equal ($b^2 - 4ac = 0$), the general solution is $y = (C_1 + C_2 x)\,e^{r_1 x}$
- If $r_1, r_2$ are complex ($b^2 - 4ac < 0$) with $r_{1,2} = \alpha \pm i\beta$, the general solution is $y = e^{\alpha x}\left(C_1\cos\beta x + C_2\sin\beta x\right)$
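The case analysis above can be automated numerically; the helper `classify` below is a hypothetical illustration (assuming NumPy), not a routine from these notes.

```python
import numpy as np

def classify(a, b, c):
    """Classify the general solution of a*y'' + b*y' + c*y = 0 by its characteristic roots."""
    r1, r2 = np.roots([a, b, c])          # roots of a*r**2 + b*r + c
    disc = b**2 - 4*a*c
    if disc > 0:
        form = "y = C1*exp(r1*x) + C2*exp(r2*x)"
    elif disc == 0:
        form = "y = (C1 + C2*x)*exp(r1*x)"
    else:
        form = "y = exp(alpha*x)*(C1*cos(beta*x) + C2*sin(beta*x))"
    return r1, r2, form

print(classify(1, 3, 2))    # real, distinct roots -1 and -2
print(classify(1, 2, 1))    # repeated root -1
print(classify(1, 0, 4))    # complex roots +/- 2i (alpha = 0, beta = 2)
```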
Reduction of order of a second-order linear ODE
Given the equation $y'' + p(x)\,y' + q(x)\,y = 0$:
- Let $y_1$ be a known (nonzero) solution to the homogeneous equation
- Substitute the ansatz $y_2 = v(x)\,y_1$, which yields a first-order linear ODE in $v'$
- Solving for $v$ yields a second, linearly independent solution $y_2$ to the second-order ODE
In particular, reduction of order applied to $a y'' + b y' + c y = 0$ in the repeated-root case (with $y_1 = e^{rx}$) produces the second solution $y_2 = x e^{rx}$.
The (translational mechanical) harmonic oscillator is modelled by the constant-coefficient second-order linear ODE $m x'' + c x' + k x = F(t)$, where $x(t)$ is the displacement, $m > 0$ the mass, $c \geq 0$ the damping coefficient, $k > 0$ the spring constant, and $F(t)$ the driving force.
Several physical systems are harmonic oscillators, as exemplified by the following table.
Mass-spring system | Series RLC circuit | Pendulum |
---|---|---|
Position $x$ | Charge $q$ | Angle $\theta$ |
Mass $m$ | Inductance $L$ | Length $\ell$ |
Damping coefficient $c$ | Resistance $R$ | — |
Spring constant $k$ | Inverse capacitance $1/C$ | Gravitational acceleration $g$ |
Driving force $F(t)$ | EMF $\mathcal{E}(t)$ | — |
Damped; undamped harmonic oscillator: $c > 0$; $c = 0$
Forced; unforced/free harmonic oscillator: $F \not\equiv 0$; $F \equiv 0$
Undamped/natural angular frequency: $\omega_0 = \sqrt{k/m}$
Damping ratio: $\zeta = \frac{c}{2\sqrt{mk}}$
We first consider the behaviour of unforced oscillators.
Behaviour of an unforced undamped harmonic oscillator
If $c = 0$ (i.e., $\zeta = 0$), the oscillator is undamped. The solution $x = A\cos(\omega_0 t - \varphi)$ oscillates with amplitude $A$, angular frequency $\omega_0$, and phase shift $\varphi$. 3
If the oscillator is damped, we distinguish three cases, depending as above on the sign of the discriminant of the characteristic polynomial.
Behaviour of an unforced damped harmonic oscillator
If $c^2 - 4mk > 0$ (i.e., $\zeta > 1$), the oscillator is overdamped
- The solution $x = C_1 e^{r_1 t} + C_2 e^{r_2 t}$ (with $r_1, r_2 < 0$) decays as $t \to \infty$ and does not oscillate
- Its decay constants are $|r_1|$ and $|r_2|$
If $c^2 - 4mk = 0$ (i.e., $\zeta = 1$), the oscillator is critically damped
- The solution $x = (C_1 + C_2 t)\,e^{-\omega_0 t}$ decays as $t \to \infty$ and does not oscillate
- Its decay constant is $\omega_0$
If $c^2 - 4mk < 0$ (i.e., $0 < \zeta < 1$), the oscillator is underdamped
- The solution $x = A e^{-\zeta\omega_0 t}\cos(\omega_1 t - \varphi)$ decays as $t \to \infty$ while oscillating 4
- Its decay constant is $\zeta\omega_0$ and the angular pseudo-frequency of its oscillations is $\omega_1 = \omega_0\sqrt{1 - \zeta^2}$
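The three regimes can also be compared numerically; the following sketch (assuming SciPy; all parameter values are illustrative) integrates $m x'' + c x' + k x = 0$ for one choice of $c$ in each regime.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 4.0                                  # natural angular frequency omega0 = 2
for c, label in [(5.0, "overdamped"), (4.0, "critically damped"), (1.0, "underdamped")]:
    def rhs(t, y):
        x, v = y                                 # state: position and velocity
        return [v, -(c*v + k*x)/m]
    sol = solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=np.linspace(0, 10, 201))
    zeta = c / (2*np.sqrt(m*k))
    print(f"{label}: zeta = {zeta:.2f}, x(10) = {sol.y[0, -1]:.2e}")
```

In all three cases the displacement decays; only the underdamped run changes sign along the way.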
Now suppose the oscillator is subject to sinusoidal forcing $F(t) = F_0\cos(\omega t)$.
Behaviour of a forced undamped harmonic oscillator
- If $\omega \neq \omega_0$, there are beats
  - The particular solution is $x_p = \frac{F_0/m}{\omega_0^2 - \omega^2}\cos(\omega t)$
  - This combines with the complementary solution $x_c$ to produce beats; the closer $\omega$ is to $\omega_0$, the greater the amplitude of the beats
- If $\omega = \omega_0$, there is resonance
  - The particular solution is $x_p = \frac{F_0}{2m\omega_0}\,t\sin(\omega_0 t)$
  - This dominates $x_c$ as $t \to \infty$, producing increasingly large oscillations at the resonant frequency
Behaviour of a forced damped harmonic oscillator
The particular solution $x_p = C\cos(\omega t - \gamma)$, with amplitude $C = \frac{F_0}{\sqrt{(k - m\omega^2)^2 + (c\omega)^2}}$ and some phase lag $\gamma$, is a sinusoid with angular frequency $\omega$ and is called the steady-state solution. The complementary solution decays exponentially and is therefore called the transient solution.
The amplitude of $x_p$ is maximized at the practical resonance frequency $\omega = \omega_0\sqrt{1 - 2\zeta^2}$, provided that $\zeta < 1/\sqrt{2}$. If $\zeta \geq 1/\sqrt{2}$, there is no maximum (for $\omega > 0$), but the amplitude increases as $\omega \to 0^+$.
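The practical resonance formula can be checked against a brute-force scan of the steady-state amplitude (NumPy assumed; the parameter values are an illustrative choice with $\zeta = 0.1$).

```python
import numpy as np

m, c, k, F0 = 1.0, 0.4, 4.0, 1.0
omega0 = np.sqrt(k/m)
zeta = c / (2*np.sqrt(m*k))

omega = np.linspace(0.01, 5, 5000)
amplitude = F0 / np.sqrt((k - m*omega**2)**2 + (c*omega)**2)   # steady-state amplitude C(omega)

print(omega[np.argmax(amplitude)])            # frequency of the largest computed amplitude
print(omega0*np.sqrt(1 - 2*zeta**2))          # practical resonance formula; both are ~1.98
```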
Higher-order scalar ODEs
$n$th-order linear homogeneous ODEs with constant coefficients
To solve an ODE of the form $a_n y^{(n)} + \cdots + a_1 y' + a_0 y = 0$:
- Compute the roots of its characteristic polynomial $a_n r^n + \cdots + a_1 r + a_0$
- Each real root $r$ of multiplicity $m$ contributes $\left(C_1 + C_2 x + \cdots + C_m x^{m-1}\right)e^{rx}$ to the general solution
- Each pair of complex roots $\alpha \pm i\beta$ of (individual) multiplicity $m$ contributes $\left(C_1 + \cdots + C_m x^{m-1}\right)e^{\alpha x}\cos\beta x + \left(D_1 + \cdots + D_m x^{m-1}\right)e^{\alpha x}\sin\beta x$ to the general solution
There are two commonly used methods for finding a particular solution to a nonhomogeneous linear ODE with source term $f(x)$: the method of undetermined coefficients and the method of variation of parameters.
The method of undetermined coefficients
Suppose that $f(x) = P(x)\,e^{\alpha x}$ for some polynomial $P$ of degree $d$ and some $\alpha \in \mathbb{C}$. Given the equation $a_n y^{(n)} + \cdots + a_0 y = f(x)$:
- Let $m$ be the multiplicity of $\alpha$ as a root of its characteristic polynomial ($m = 0$ if $\alpha$ is not a root)
- Substitute the ansatz $y_p = x^m\left(A_0 + A_1 x + \cdots + A_d x^d\right)e^{\alpha x}$ and match coefficients to determine the $A_i$
- If $\alpha = \lambda + i\mu$ is not real, the real ansatz $y_p = x^m\left[\left(A_0 + \cdots + A_d x^d\right)e^{\lambda x}\cos\mu x + \left(B_0 + \cdots + B_d x^d\right)e^{\lambda x}\sin\mu x\right]$ can be used instead (which is useful when $f$ involves $\cos\mu x$ or $\sin\mu x$, for example)
- If $f$ is a sum of such terms, apply the principle of superposition; i.e., solve the equation with each term as the source term separately, then add the solutions (a worked instance is sketched below)
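As a worked instance of the resonant case (SymPy assumed; the equation $y'' + y = \cos x$ is an illustrative choice), the ansatz must carry an extra factor of $x$ because $\pm i$ are simple roots of the characteristic polynomial $r^2 + 1$:

```python
import sympy as sp

x, A, B = sp.symbols('x A B')
y = sp.Function('y')

# Ansatz y_p = x*(A*cos(x) + B*sin(x)) for y'' + y = cos(x)
y_p = x*(A*sp.cos(x) + B*sp.sin(x))
residual = sp.expand(y_p.diff(x, 2) + y_p - sp.cos(x))
coeffs = sp.solve([residual.coeff(sp.sin(x)), residual.coeff(sp.cos(x))], [A, B])
print(coeffs)                                          # {A: 0, B: 1/2}, so y_p = x*sin(x)/2

# Cross-check with dsolve (agrees up to the complementary solution)
print(sp.dsolve(y(x).diff(x, 2) + y(x) - sp.cos(x), y(x)))
```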
To verify that a solution set of an ODE is linearly independent, it is occasionally useful to compute the Wronskian determinant.
Wronskian (determinant) of $f_1, \ldots, f_n$: $W(f_1, \ldots, f_n) = \det\begin{pmatrix} f_1 & \cdots & f_n \\ f_1' & \cdots & f_n' \\ \vdots & & \vdots \\ f_1^{(n-1)} & \cdots & f_n^{(n-1)} \end{pmatrix}$
Linear dependence and the Wronskian
Given a set of $n$ functions that are $(n-1)$-times differentiable, if the functions are linearly dependent on an interval, then their Wronskian vanishes identically thereon. 5
Of course, the contrapositive of this result is used in practice.
The method of variation of parameters
Given the equation $y^{(n)} + p_{n-1}(x)\,y^{(n-1)} + \cdots + p_0(x)\,y = f(x)$:
- Let $y_1, \ldots, y_n$ be a fundamental system of solutions to the homogeneous equation
- Substitute the ansatz $y_p = u_1 y_1 + \cdots + u_n y_n$ and impose the constraints $u_1' y_1^{(k)} + \cdots + u_n' y_n^{(k)} = 0$ for $k = 0, \ldots, n - 2$
- Solve the resulting linear system
$$\begin{pmatrix} y_1 & \cdots & y_n \\ \vdots & & \vdots \\ y_1^{(n-1)} & \cdots & y_n^{(n-1)} \end{pmatrix}\begin{pmatrix} u_1' \\ \vdots \\ u_n' \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ f \end{pmatrix}$$
for $u_1', \ldots, u_n'$, then integrate to obtain the $u_k$
- Note that the matrix is the Wronskian matrix of the fundamental system, so by Cramer’s rule, $u_k' = W_k/W$, where $W_k$ denotes the Wronskian determinant with the $k$th column of the matrix replaced by the right-hand side
The Laplace transform
Laplace transform of $f$: $\mathcal{L}\{f\}(s) = F(s) = \int_0^\infty f(t)\,e^{-st}\,dt$
- If $f$ is piecewise continuous on $[0, \infty)$ and of exponential type (that is, $|f(t)| \leq Me^{ct}$ as $t \to \infty$ for some $M, c$), then $\mathcal{L}\{f\}(s)$ is defined for all $s > c$
  - Moreover, $\mathcal{L}\{f\}(s) \to 0$ as $s \to \infty$
- The Laplace transform is linear: $\mathcal{L}\{af + bg\} = a\,\mathcal{L}\{f\} + b\,\mathcal{L}\{g\}$
- The Laplace transform is injective in the sense that if $\mathcal{L}\{f\} = \mathcal{L}\{g\}$, then $f = g$ a.e. (Lerch’s theorem)
  - In particular, if $f$ and $g$ are continuous, then $f(t) = g(t)$ for all $t$
- The variables $t$ and $s$ are typically thought of as “time” and “frequency”, respectively
Inverse Laplace transform of $F$: $\mathcal{L}^{-1}\{F\}(t) = \frac{1}{2\pi i}\int_{\gamma - i\infty}^{\gamma + i\infty} F(s)\,e^{st}\,ds$ (for suitable $\gamma$)
- The formula above is called Mellin’s integral formula and is derived from the Fourier inversion theorem
- In practice, the inverse Laplace transform is computed by inspection (using tables of known Laplace transforms; see below)
Dirac delta “function”: the Borel measure $\delta$ defined by $\delta(A) = 1$ if $0 \in A$ and $\delta(A) = 0$ otherwise; equivalently, $\int f\,d\delta = f(0)$
Heaviside step function: $u(t) = \begin{cases} 0, & t < 0 \\ 1, & t \geq 0 \end{cases}$
Unit step functions: $u_a(t) = u(t - a)$
We observe that $u_a' = \delta_a$ (the delta measure shifted to $a$) in the sense of distributions.
When applicable, the Laplace transform can be used to solve ODEs by transforming both sides of the equation, solving for the transform of the independent variable, and computing its inverse transform.
In the tables below, we assume where necessary that $s$ is large enough for the transforms to exist.

Function | Laplace transform |
---|---|
$1$ | $\frac{1}{s}$ |
$t^n$ | $\frac{n!}{s^{n+1}}$ |
$e^{at}$ | $\frac{1}{s - a}$ |
$\cos(bt)$ | $\frac{s}{s^2 + b^2}$ |
$\sin(bt)$ | $\frac{b}{s^2 + b^2}$ |
$u_a(t)$ | $\frac{e^{-as}}{s}$ |
$\delta_a$ | $e^{-as}$ |

These “elementary” transforms can be combined with the general properties in the table below.

Function | Laplace transform |
---|---|
$f'(t)$ | $sF(s) - f(0)$ |
$f''(t)$ | $s^2 F(s) - s f(0) - f'(0)$ |
$e^{at}f(t)$ | $F(s - a)$ |
$u_a(t)\,f(t - a)$ | $e^{-as}F(s)$ |
$t\,f(t)$ | $-F'(s)$ |
$(f * g)(t) = \int_0^t f(\tau)\,g(t - \tau)\,d\tau$ | $F(s)\,G(s)$ |

For example, $\mathcal{L}\{e^{at}\cos(bt)\}(s) = \frac{s - a}{(s - a)^2 + b^2}$ by the shift property applied to the transform of $\cos(bt)$.
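The entries above can be reproduced with a computer algebra system; a small sketch with SymPy (the particular functions are illustrative):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

print(sp.laplace_transform(t**2, t, s, noconds=True))                       # 2/s**3
print(sp.laplace_transform(sp.exp(-2*t)*sp.cos(3*t), t, s, noconds=True))   # (s + 2)/((s + 2)**2 + 9)

# Inverse transforms "by inspection" can likewise be checked
print(sp.inverse_laplace_transform(1/(s**2 + 1), s, t))                     # sin(t)*Heaviside(t)
```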
Initial value theorem
If $f$ is of exponential type and $\lim_{t \to 0^+} f(t)$ exists (and is finite), then $\lim_{s \to \infty} sF(s) = \lim_{t \to 0^+} f(t)$.
Final value theorem
If $f$ is bounded and $\lim_{t \to \infty} f(t)$ exists (and is finite), then $\lim_{s \to 0^+} sF(s) = \lim_{t \to \infty} f(t)$.
If $\lim_{t \to \infty} f(t)$ does not exist (e.g., $f(t) = \sin t$), the final value theorem is inapplicable even if $\lim_{s \to 0^+} sF(s)$ exists.
Linear ODEs with constant coefficients and the impulse response
If $L$ is a constant-coefficient linear differential operator and $h$ is the impulse response of $L$ (i.e., the solution to $Lh = \delta$ with zero initial conditions), then the solution to $Lx = f(t)$ (with zero initial conditions) is $x = h * f$.
(To see this, we can convolve both sides of the equation $Lh = \delta$ with $f$: since convolution commutes with $L$, this gives $L(h * f) = \delta * f = f$.)
Power series methods
Consider the second-order linear homogeneous ODE $P(x)\,y'' + Q(x)\,y' + R(x)\,y = 0$, where $P, Q, R$ are polynomials. 6
Ordinary; singular point: a point $x_0$ with $P(x_0) \neq 0$; a point with $P(x_0) = 0$ 7
Regular singular point: a singular point $x_0$ at which the limits $p_0 = \lim_{x \to x_0}(x - x_0)\frac{Q(x)}{P(x)}$ and $q_0 = \lim_{x \to x_0}(x - x_0)^2\frac{R(x)}{P(x)}$ exist (and are finite) 8
At an ordinary point $x_0$, we substitute the power series ansatz $y = \sum_{k=0}^\infty a_k (x - x_0)^k$ and solve for the coefficients $a_k$.
At a regular singular point, we substitute the ansatz $y = (x - x_0)^r \sum_{k=0}^\infty a_k (x - x_0)^k$ (a Frobenius series), where $r$ is to be determined.
Method of Frobenius
Suppose that $x_0 = 0$ is a regular singular point of $P(x)\,y'' + Q(x)\,y' + R(x)\,y = 0$ and that $r_1, r_2$ (with $\operatorname{Re} r_1 \geq \operatorname{Re} r_2$) are the roots of the indicial equation $r(r - 1) + p_0 r + q_0 = 0$. Then the ODE has two linearly independent solutions as given below.
- If $r_1 - r_2$ is not an integer, then $y_1 = x^{r_1}\sum_{k \geq 0} a_k x^k$ and $y_2 = x^{r_2}\sum_{k \geq 0} b_k x^k$
- If $r_1 = r_2$, then $y_1 = x^{r_1}\sum_{k \geq 0} a_k x^k$ and $y_2 = y_1\ln x + x^{r_1}\sum_{k \geq 1} b_k x^k$
- If $r_1 - r_2$ is a positive integer, then $y_1 = x^{r_1}\sum_{k \geq 0} a_k x^k$ and $y_2 = C y_1\ln x + x^{r_2}\sum_{k \geq 0} b_k x^k$ (where the constant $C$ may be zero)
- If $r_{1,2} = \alpha \pm i\beta$ with $\beta \neq 0$, we can take $\operatorname{Re} y_1$ and $\operatorname{Im} y_1$ to obtain linearly independent real solutions
Vector ODEs
A vector ODE is simply an equation of the form $\vec{F}\left(t, \vec{x}, \vec{x}', \ldots, \vec{x}^{(n)}\right) = \vec{0}$ in which the unknown $\vec{x}(t)$ is a vector-valued function.
We note that a linear vector ODE takes the form $A_n(t)\,\vec{x}^{(n)} + \cdots + A_1(t)\,\vec{x}' + A_0(t)\,\vec{x} = \vec{f}(t)$, where the $A_k$ are matrix-valued functions.
When the (vector) functions of a fundamental system of solutions to a homogeneous linear vector ODE are concatenated horizontally, the resulting matrix is called a fundamental matrix (solution).
The analogue of the phase line for first-order autonomous vector ODEs is the phase portrait, which depicts trajectories in the phase plane (or phase space).
Vector ODEs are useful for expressing (systems of) scalar ODEs. An $n$th-order scalar ODE can be rewritten as a system of $n$ first-order scalar ODEs by introducing new variables for the derivatives of order less than $n$.
We will therefore restrict our attention to systems of 1st-order ODEs, which can themselves be regarded as individual 1st-order vector ODEs. In the example above, we can define $x_1 = y, x_2 = y', \ldots, x_n = y^{(n-1)}$, so that the scalar ODE becomes a first-order vector ODE in $\vec{x} = (x_1, \ldots, x_n)$.
Linear vector ODEs
A first-order linear vector ODE (which we shall refer to interchangeably as a “linear system of ODEs”) can be written in the form $\vec{x}' = A(t)\,\vec{x} + \vec{f}(t)$.
As with second-order linear scalar ODEs, we begin with the homogeneous case.
First-order linear homogeneous vector ODEs with constant coefficients
The general solution of $\vec{x}' = A\vec{x}$ is $\vec{x} = e^{tA}\vec{c}$, where $\vec{c}$ is an arbitrary constant vector. In other words, $e^{tA}$ is a fundamental matrix solution. (See Appendix B for the definition of $e^{tA}$ and further remarks.)
Corollary:
If $X(t)$ is a fundamental matrix solution to $\vec{x}' = A\vec{x}$, then $e^{tA} = X(t)\,X(0)^{-1}$. 9
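Concretely, $e^{tA}$ can be evaluated with `scipy.linalg.expm`; the matrix below is an illustrative choice, and the integrator comparison is only a sanity check.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])
t_final = 1.5

x_exact = expm(t_final*A) @ x0                 # x(t) = e^{tA} x(0)
sol = solve_ivp(lambda t, x: A @ x, (0, t_final), x0, rtol=1e-10, atol=1e-12)

print(x_exact)
print(sol.y[:, -1])                            # agrees with x_exact to high accuracy
```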
We note that the origin $\vec{x} = \vec{0}$ is always a critical (equilibrium) point of $\vec{x}' = A\vec{x}$; i.e., the constant function $\vec{x} \equiv \vec{0}$ is a solution.
In general, a trajectory that starts near a critical point may remain nearby, approach it, or move away; this behaviour is captured by the following definitions.
Stable critical point: given any distance $\epsilon > 0$, there is a $\delta > 0$ such that trajectories starting within $\delta$ of the critical point remain within $\epsilon$ of it for all future time
Asymptotically stable critical point: a stable critical point to which all trajectories starting sufficiently nearby converge as $t \to \infty$
Unstable critical point: a critical point that is not stable
A stable critical point that is not asymptotically so is sometimes called marginally/neutrally stable.
When the system is two-dimensional and the eigenvalues of $A$ are nonzero 10 and distinct, the behaviour and stability of the critical point at the origin are classified as follows.
Eigenvalues of $A$ | Behaviour | Stability |
---|---|---|
Real and positive | (Nodal) source | Unstable |
Real and negative | (Nodal) sink | A. stable |
Real and of opposite signs | Saddle point | Unstable |
Purely imaginary | Centre | M. stable |
Complex with positive real parts | Spiral source | Unstable |
Complex with negative real parts | Spiral sink | A. stable |
(The “centre” is so called because trajectories are ellipses centred at the origin.)
If the eigenvalues of $A$ are real, nonzero, and equal, the classification is refined as follows. 11
Eigenvalues of $A$ | Behaviour | Stability |
---|---|---|
Real, positive, equal, and nondefective | Proper nodal source | Unstable |
Real, positive, equal, and defective | Improper nodal source | Unstable |
Real, negative, equal, and nondefective | Proper nodal sink | A. stable |
Real, negative, equal, and defective | Improper nodal sink | A. stable |
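The two tables can be condensed into a small helper; `classify_origin` below is a hypothetical illustration (NumPy assumed) and cannot distinguish proper from improper nodes, since that requires the eigenvectors rather than just the eigenvalues.

```python
import numpy as np

def classify_origin(A):
    """Classify the origin of x' = A x for an invertible, real 2x2 matrix A."""
    lam = np.linalg.eigvals(A)
    re, im = lam.real, lam.imag
    if np.allclose(im, 0):                       # real eigenvalues
        if re[0]*re[1] < 0:
            return "saddle point (unstable)"
        kind = "source (unstable)" if re[0] > 0 else "sink (asymptotically stable)"
        prefix = "proper/improper nodal " if np.isclose(re[0], re[1]) else "nodal "
        return prefix + kind
    if np.allclose(re, 0):                       # purely imaginary eigenvalues
        return "centre (marginally stable)"
    return "spiral " + ("source (unstable)" if re[0] > 0 else "sink (asymptotically stable)")

print(classify_origin(np.array([[0, 1], [-1, 0]])))    # centre
print(classify_origin(np.array([[-1, 0], [0, -3]])))   # nodal sink
print(classify_origin(np.array([[1, -2], [2, 1]])))    # spiral source
```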
We now consider the general (i.e., non-homogeneous) case (cf. First-order scalar ODEs).
First-order linear vector ODEs with constant coefficients
To solve a first-order linear vector ODE of the form $\vec{x}' = A\vec{x} + \vec{f}(t)$ (with $A$ constant):
- Multiply both sides by the integrating factor $e^{-tA}$, which yields $\left(e^{-tA}\vec{x}\right)' = e^{-tA}\vec{f}(t)$
- Integrate: $\vec{x} = e^{tA}\left(\int e^{-tA}\vec{f}(t)\,dt + \vec{c}\right)$
However, the integrating factor method is not applicable when the coefficient matrix depends on $t$ (unless the matrices involved commute).
(For constant-coefficient linear systems, the method of undetermined coefficients may also be used, where the ‘coefficients’ are vectors. One difference is that ansatz terms augmented by powers of $t$ must be added to the original ansatz rather than replacing it.)
To handle variable-coefficient systems, we can use variation of parameters:
First-order linear vector ODEs with variable coefficients
To solve a first-order linear vector ODE of the form $\vec{x}' = A(t)\,\vec{x} + \vec{f}(t)$:
- Let $X(t)$ be a fundamental matrix solution to the homogeneous equation
- Substitute the ansatz $\vec{x} = X(t)\,\vec{u}(t)$, which yields $X\vec{u}\,' = \vec{f}$
- Integrate: $\vec{x} = X(t)\left(\int X(t)^{-1}\vec{f}(t)\,dt + \vec{c}\right)$
(Note that this reduces to the integrating factor method when $A$ is constant and $X(t) = e^{tA}$.)
When $A$ is constant and diagonalizable, the system can also be decoupled by means of an eigendecomposition:
Linear vector ODEs with constant coefficients
To solve a first- or second-order linear vector ODE of the form $\vec{x}' = A\vec{x} + \vec{f}(t)$ or $\vec{x}'' = A\vec{x} + \vec{f}(t)$, where $A$ is diagonalizable:
- Let $(\lambda_i, \vec{v}_i)$ be the eigenpairs of $A$
- Write $\vec{x} = \sum_i \xi_i(t)\,\vec{v}_i$ and $\vec{f} = \sum_i \phi_i(t)\,\vec{v}_i$
- Solve for the $\phi_i$ (in the linear system $V\vec{\phi} = \vec{f}$, where the columns of $V$ are the $\vec{v}_i$)
- Substitute these decompositions into the equation and equate the coefficients of the eigenvectors, which yields the $n$ decoupled first- or second-order linear scalar ODEs $\xi_i' = \lambda_i\xi_i + \phi_i$ (resp. $\xi_i'' = \lambda_i\xi_i + \phi_i$)
- Solve the scalar ODEs using the methods described in First-order scalar ODEs or Second-order scalar ODEs (a numerical sketch follows below)
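A numerical sketch of the decoupling (NumPy assumed; the matrix, the constant forcing, and the choice of a constant particular solution are all illustrative):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                 # diagonalizable, eigenvalues -1 and -2
f = np.array([1.0, 0.0])                     # constant source term

lam, V = np.linalg.eig(A)                    # columns of V are the eigenvectors
phi = np.linalg.solve(V, f)                  # coordinates of f in the eigenvector basis

xi = -phi/lam                                # constant solution of each decoupled xi' = lam*xi + phi
x_particular = V @ xi
print(x_particular)
print(A @ x_particular + f)                  # ~0: x_particular is an equilibrium of x' = Ax + f
```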
Nonlinear vector ODEs
Suppose that $\vec{x}_0$ is a critical point of the autonomous system $\vec{x}' = \vec{F}(\vec{x})$. Near $\vec{x}_0$, the behaviour of the system is generally approximated by that of its linearization $\vec{x}' = J_{\vec{F}}(\vec{x}_0)\,(\vec{x} - \vec{x}_0)$, where $J_{\vec{F}}$ denotes the Jacobian matrix of $\vec{F}$.
However, we note that a centre of the linearized system tends to correspond to a spiral point in the nonlinear ODE, since it is unlikely that the real parts of the eigenvalues of the Jacobian are exactly zero.
Conservative equation: equation of the form $x'' + f(x) = 0$
A conservative equation can be written as the first-order autonomous vector ODE $x' = v$, $v' = -f(x)$.
By integrating both sides of the scalar equation with respect to $x$ (using $x''\,dx = v\,dv$), we find that the quantity $E(x, v) = \frac{1}{2}v^2 + \int f(x)\,dx$ is conserved along trajectories; trajectories in the phase plane are therefore level curves of $E$. 12
The critical points are clearly the points on the $x$-axis (i.e., with $v = 0$) at which $f(x) = 0$; they are typically centres or saddle points. 13 14
Partial differential equations (PDEs)
Separation of variables
Boundary-value problems
Let us consider the eigenvalue problem $X'' + \lambda X = 0$ on an interval $[a, b]$, subject to boundary conditions such as the following.
- Dirichlet boundary conditions: $X(a) = X(b) = 0$
- Neumann boundary conditions: $X'(a) = X'(b) = 0$
- Periodic boundary conditions: $X(a) = X(b)$ and $X'(a) = X'(b)$
Dirichlet and Neumann boundary conditions can also be mixed; e.g., $X(a) = X'(b) = 0$ (Dirichlet-Neumann).
All such conditions ensure that the operator $-\frac{d^2}{dx^2}$ is symmetric, so that its eigenvalues are real and eigenfunctions corresponding to distinct eigenvalues are orthogonal.
The eigenvectors of $-\frac{d^2}{dx^2}$ (i.e., the eigenfunctions $X$ with $X'' + \lambda X = 0$) under these boundary conditions are listed below.
Boundary conditions | Eigenfunctions | Eigenvalues |
---|---|---|
Dirichlet on $[0, L]$ | $\sin\frac{n\pi x}{L}$, $n = 1, 2, \ldots$ | $\left(\frac{n\pi}{L}\right)^2$ |
Neumann on $[0, L]$ | $\cos\frac{n\pi x}{L}$, $n = 0, 1, 2, \ldots$ | $\left(\frac{n\pi}{L}\right)^2$ |
Dirichlet-Neumann on $[0, L]$ | $\sin\frac{(2n - 1)\pi x}{2L}$, $n = 1, 2, \ldots$ | $\left(\frac{(2n - 1)\pi}{2L}\right)^2$ |
Neumann-Dirichlet on $[0, L]$ | $\cos\frac{(2n - 1)\pi x}{2L}$, $n = 1, 2, \ldots$ | $\left(\frac{(2n - 1)\pi}{2L}\right)^2$ |
Periodic on $[-L, L]$ | $\cos\frac{n\pi x}{L}$ ($n = 0, 1, \ldots$) and $\sin\frac{n\pi x}{L}$ ($n = 1, 2, \ldots$) | $\left(\frac{n\pi}{L}\right)^2$ |
Fredholm alternative
For any given $\lambda$, either:
- $\lambda$ is an eigenvalue of the boundary-value problem (i.e., $X'' + \lambda X = 0$ with the given boundary conditions has a nontrivial solution), or
- $X'' + \lambda X = f(x)$ (with the same boundary conditions) has a unique solution for every $f$
Trigonometric series
Fourier series of $f$ (on $[-L, L]$): $\frac{a_0}{2} + \sum_{n=1}^\infty\left(a_n\cos\frac{n\pi x}{L} + b_n\sin\frac{n\pi x}{L}\right)$, where $a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx$ and $b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{n\pi x}{L}\,dx$
Periodic extension of $f$: the $2L$-periodic function that agrees with $f$ on $[-L, L)$
Convergence of Fourier series
If the periodic extension $\tilde{f}$ of $f$ is piecewise smooth, then the Fourier series of $f$ converges pointwise to $\frac{1}{2}\left(\tilde{f}(x^-) + \tilde{f}(x^+)\right)$ for every $x$ (in particular, to $\tilde{f}(x)$ at every point of continuity).
Differentiation and antidifferentiation of Fourier series
Suppose that the Fourier series of $f$ is $\frac{a_0}{2} + \sum_{n=1}^\infty\left(a_n\cos\frac{n\pi x}{L} + b_n\sin\frac{n\pi x}{L}\right)$ and that the periodic extension of $f$ is piecewise smooth. If the periodic extension of $f$ is continuous and $f'$ is piecewise smooth, then the Fourier series of $f'$ may be computed by differentiating that of $f$ term-wise. Similarly, an antiderivative of $f$ may be computed by antidifferentiating the Fourier series of $f$ term-wise. (N.B.: In general, the resulting series will not be a Fourier series.)
Decay of Fourier series coefficients
If the periodic extension of $f$ is $C^k$, then its Fourier coefficients satisfy $a_n, b_n = o(n^{-k})$ as $n \to \infty$; i.e., the smoother the function, the faster its coefficients decay.
Parseval’s identity: $\frac{1}{L}\int_{-L}^{L} f(x)^2\,dx = \frac{a_0^2}{2} + \sum_{n=1}^\infty\left(a_n^2 + b_n^2\right)$
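A quick numerical check of the identity for the illustrative function $f(x) = x$ on $[-\pi, \pi]$ (so $L = \pi$, $a_n = 0$, $|b_n| = 2/n$):

```python
import numpy as np

lhs = (1/np.pi)*(2*np.pi**3/3)        # (1/L) * integral of x**2 over [-pi, pi]
n = np.arange(1, 200_000)
rhs = np.sum((2.0/n)**2)              # sum of b_n**2 (a_n = 0 here)
print(lhs, rhs)                       # both ~6.5797, i.e. 2*pi**2/3
```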
Even extension of $f$ (defined on $[0, L]$): $f_{\mathrm{even}}(x) = f(|x|)$ on $[-L, L]$
Odd extension of $f$: $f_{\mathrm{odd}}(x) = \operatorname{sgn}(x)\,f(|x|)$ on $[-L, L]$
The periodic extensions of $f_{\mathrm{even}}$ and $f_{\mathrm{odd}}$ are even and odd, respectively.
Fourier cosine series of $f$: the Fourier series of $f_{\mathrm{even}}$, namely $\frac{a_0}{2} + \sum_{n \geq 1} a_n\cos\frac{n\pi x}{L}$ with $a_n = \frac{2}{L}\int_0^L f(x)\cos\frac{n\pi x}{L}\,dx$; the Fourier sine series of $f$ is defined analogously via $f_{\mathrm{odd}}$
To solve a BVP of the form $X'' + \lambda X = f(x)$ with one of the sets of boundary conditions above, expand $f$ and $X$ in the corresponding trigonometric (sine, cosine, or full Fourier) series and match coefficients.
Fourier series are also effective in solving periodically forced harmonic oscillator equations. If necessary, resonant terms in the Fourier series solution are multiplied by $t$, as in the method of undetermined coefficients.
Second-order linear PDEs
Every second-order linear PDE in two independent variables, $A u_{xx} + B u_{xy} + C u_{yy} + D u_x + E u_y + F u = G$, can be classified by the sign of the discriminant $B^2 - 4AC$.
- Elliptic PDE: $B^2 - 4AC < 0$
  - Describes an ‘equilibrium state’
  - Obeys a ‘maximum principle’; smooths out singularities
  - Ex.: Laplace’s equation $u_{xx} + u_{yy} = 0$, Poisson’s equation $u_{xx} + u_{yy} = f$
- Parabolic PDE: $B^2 - 4AC = 0$
  - Describes ‘diffusion’
  - Obeys a ‘maximum principle’; smooths out singularities
  - Ex.: the heat equation $u_t = k u_{xx}$
- Hyperbolic PDE: $B^2 - 4AC > 0$
  - Describes ‘wave propagation’
  - Ex.: the wave equation $u_{tt} = a^2 u_{xx}$
The one-dimensional heat equation
To solve the PDE $u_t = k u_{xx}$ with boundary conditions (say, $u(0, t) = u(L, t) = 0$) and initial condition $u(x, 0) = f(x)$:
- Substitute the ansatz $u(x, t) = X(x)\,T(t)$, which yields $\frac{T'}{kT} = \frac{X''}{X} = -\lambda$
- Solve the BVP $X'' + \lambda X = 0$ with the given boundary conditions to obtain eigenfunctions $X_n$ with eigenvalues $\lambda_n$
- Solve the ODEs $T_n' = -\lambda_n k\,T_n$, which yield $T_n(t) = e^{-\lambda_n k t}$
- Write $u(x, t) = \sum_n c_n X_n(x)\,T_n(t)$, where the coefficients $c_n$ are to be determined
- Impose $u(x, 0) = f(x)$ and match the coefficients $c_n$ with those of the appropriate trigonometric series for $f$ (a numerical sketch follows below)
The technique of writing $u$ as a product $X(x)\,T(t)$, and the solution as a superposition of such products, is called separation of variables.
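A minimal numerical sketch of this recipe for Dirichlet conditions on $[0, 1]$ (NumPy assumed; the diffusivity and initial condition are illustrative choices):

```python
import numpy as np

k, L, N = 0.1, 1.0, 50
x = np.linspace(0, L, 401)
f = x*(L - x)                                        # initial temperature u(x, 0)

n = np.arange(1, N + 1)
dx = x[1] - x[0]
# Fourier sine coefficients b_n = (2/L) * integral of f(x)*sin(n*pi*x/L) dx (simple quadrature)
b = np.array([2/L*np.sum(f*np.sin(m*np.pi*x/L))*dx for m in n])

def u(t):
    """Partial-sum solution u(x, t) = sum_n b_n sin(n*pi*x/L) exp(-k*(n*pi/L)**2 * t)."""
    modes = np.sin(np.outer(n, np.pi*x/L)) * np.exp(-k*(n*np.pi/L)**2*t)[:, None]
    return b @ modes

print(np.max(np.abs(u(0.0) - f)))    # small: the series reproduces the initial condition
print(np.max(u(1.0)))                # well below max(f) = 0.25: the temperature has decayed
```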
The one-dimensional inhomogeneous heat equation
To solve the PDE $u_t = k u_{xx} + Q(x)$ with (possibly inhomogeneous) boundary conditions and initial condition $u(x, 0) = f(x)$:
- Write $u(x, t) = u_E(x) + v(x, t)$, where $u_E$ and $v$ are the steady-state and transient parts of $u$, respectively
- Solve $k u_E'' + Q = 0$ with boundary conditions depending on those for $u$
- Solve $v_t = k v_{xx}$ (by separation of variables, as above) with boundary conditions depending on those for $u$ and initial condition $v(x, 0) = f(x) - u_E(x)$
For instance, if a boundary condition for $u$ is $u(0, t) = A$, then the corresponding conditions are $u_E(0) = A$ and $v(0, t) = 0$.
The one-dimensional wave equation
To solve the PDE $u_{tt} = a^2 u_{xx}$ with boundary conditions $u(0, t) = u(L, t) = 0$ and initial conditions $u(x, 0) = f(x)$, $u_t(x, 0) = g(x)$:
- Write $u = w + z$
- Solve $w_{tt} = a^2 w_{xx}$ with the same boundary conditions by separation of variables, where the initial data is $w(x, 0) = f(x)$, $w_t(x, 0) = 0$
- Solve $z_{tt} = a^2 z_{xx}$ with the same boundary conditions by separation of variables, where the initial data is $z(x, 0) = 0$, $z_t(x, 0) = g(x)$
Laplace’s equation in two dimensions
To solve the PDE $u_{xx} + u_{yy} = 0$ on a rectangle with boundary conditions prescribing $u$ on its four sides:
- Write $u = u_N + u_E + u_W + u_S$, where the subscripts denote the ‘north’, ‘east’, ‘west’, and ‘south’ sides of the rectangle
- For each side function $u_i$, solve $\Delta u_i = 0$ by separation of variables, keeping the boundary condition on that side and setting the boundary conditions for all other sides to zero
Note: it is convenient to use hyperbolic functions in the second step.
A similar method can be used to solve Laplace’s equation in a semi-infinite strip; however, exponential functions should then be used instead of hyperbolic functions. In this case, it is also assumed that the solution is bounded in the strip.
Separation of variables is also applicable to Laplace’s equation in polar coordinates, $u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} = 0$, in which case the radial factor satisfies a Cauchy-Euler equation.
The second-order Cauchy-Euler equation
To solve an ODE of the form $a x^2 y'' + b x y' + c y = 0$ (for $x > 0$):
- Compute the roots $r_1, r_2$ of its indicial equation $a r(r - 1) + b r + c = 0$
- If $r_1, r_2$ are real and distinct, the general solution is $y = C_1 x^{r_1} + C_2 x^{r_2}$
- If $r_1, r_2$ are real and equal, the general solution is $y = (C_1 + C_2\ln x)\,x^{r_1}$
- If $r_1, r_2$ are complex with $r_{1,2} = \alpha \pm i\beta$, the general solution is $y = x^{\alpha}\left(C_1\cos(\beta\ln x) + C_2\sin(\beta\ln x)\right)$
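A symbolic check of the indicial-equation recipe (SymPy assumed; the particular equation is an illustrative choice):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
r = sp.symbols('r')
y = sp.Function('y')

# Illustrative Cauchy-Euler equation: x**2*y'' - x*y' - 3*y = 0
print(sp.solve(r*(r - 1) - r - 3, r))        # indicial roots: [-1, 3]

sol = sp.dsolve(x**2*y(x).diff(x, 2) - x*y(x).diff(x) - 3*y(x), y(x))
print(sol)                                   # y(x) = C1/x + C2*x**3 (up to labelling of constants)
```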
Integral transform methods
The heat equation can also be solved on the whole real line (where there are no boundary conditions) by means of the Fourier transform.
Taking the Fourier transform in $x$ of $u_t = k u_{xx}$ yields $\hat{u}_t = -k\omega^2\hat{u}$, whence $\hat{u}(\omega, t) = \hat{f}(\omega)\,e^{-k\omega^2 t}$ and $u(x, t) = (f * G_t)(x)$, where $f(x) = u(x, 0)$.
The function $G_t(x) = \frac{1}{\sqrt{4\pi k t}}\,e^{-x^2/(4kt)}$ (the inverse Fourier transform of $e^{-k\omega^2 t}$) is called the heat kernel.
Eigenvalue problems
Sturm-Liouville theory
Regular Sturm-Liouville problem: BVP of the form $-\left(p(x)\,y'\right)' + q(x)\,y = \lambda\,w(x)\,y$ on $[a, b]$, with separated boundary conditions $\alpha_1 y(a) + \alpha_2 y'(a) = 0$ and $\beta_1 y(b) + \beta_2 y'(b) = 0$, where $p, p', q, w$ are continuous and $p, w > 0$ on $[a, b]$
Solutions of regular Sturm-Liouville problems
Every regular Sturm-Liouville problem has a strictly increasing sequence $\lambda_1 < \lambda_2 < \lambda_3 < \cdots$ of real eigenvalues tending to infinity. Moreover, each $\lambda_n$ is simple and its eigenspace is spanned by an eigenfunction $y_n$ with exactly $n - 1$ zeroes in $(a, b)$. In addition, if $q \geq 0$ on $[a, b]$ and the boundary conditions force $p\,y\,y'\,\big|_a^b \leq 0$ (as for Dirichlet or Neumann conditions), then the eigenvalues are nonnegative.
Any second-order eigenvalue problem of the form $a(x)\,y'' + b(x)\,y' + c(x)\,y = \lambda\,d(x)\,y$ can be put into Sturm-Liouville form by multiplying through by a suitable integrating factor.
The underlying linear differential operator of an SLP is the operator $L[y] = \frac{1}{w}\left(-\left(p\,y'\right)' + q\,y\right)$, so that the problem reads $L[y] = \lambda y$ together with the boundary conditions.
Fredholm alternative
For any given $\lambda$, either:
- $\lambda$ is an eigenvalue of the Sturm-Liouville problem, or
- $L[y] - \lambda y = f$ (with the same boundary conditions) has a unique solution for every $f$
Eigenfunction series
Just as square-integrable functions admitted trigonometric series expansions, they admit expansions in the eigenfunctions of a regular Sturm-Liouville problem: $f = \sum_n c_n y_n$, where $c_n = \frac{\langle f, y_n\rangle_w}{\langle y_n, y_n\rangle_w}$ and $\langle f, g\rangle_w = \int_a^b f(x)\,g(x)\,w(x)\,dx$.
If $f$ is piecewise smooth, this series converges pointwise to the average of the one-sided limits of $f$, as for Fourier series.
Appendix A: The Jordan normal form
Let $A$ be an $n \times n$ matrix with complex entries.
The characteristic and minimal polynomials
Characteristic polynomial of $A$: $p_A(\lambda) = \det(\lambda I - A)$
Cayley-Hamilton theorem
$p_A(A) = 0$; i.e., $A$ is a root of its own characteristic polynomial.
Minimal polynomial of $A$: the unique monic polynomial $m_A$ of least degree such that $m_A(A) = 0$ 15
Invertibility and the minimal polynomial
$A$ is invertible if and only if $m_A(0) \neq 0$. Corollary:
Eigenvalues and the minimal polynomial
$\lambda$ is an eigenvalue of $A$ if and only if $m_A(\lambda) = 0$. 16
Generalized eigenvectors
(Rank-$k$) generalized eigenvector of $A$ corresponding to $\lambda$: a vector $v$ such that $(A - \lambda I)^k v = 0$ but $(A - \lambda I)^{k-1} v \neq 0$
Jordan chain generated by the rank-$k$ generalized eigenvector $v$: the vectors $v, (A - \lambda I)v, \ldots, (A - \lambda I)^{k-1}v$
Generalized eigenspace of $\lambda$: $E_\lambda = \left\{v : (A - \lambda I)^k v = 0 \text{ for some } k\right\}$
Recall that the geometric multiplicity of an eigenvalue $\lambda$ of $A$ is $\dim\ker(A - \lambda I)$ and that its algebraic multiplicity is its multiplicity as a root of $p_A$.
The algebraic multiplicity of an eigenvalue $\lambda$ of $A$ is $\dim E_\lambda$.
It is easy to see that the geometric multiplicity of $\lambda$ is at most its algebraic multiplicity.
The Jordan normal form
Suppose that $A$ is an $n \times n$ complex matrix and let $\lambda_1, \ldots, \lambda_k$ be the distinct eigenvalues of $A$. Then $\mathbb{C}^n = E_{\lambda_1} \oplus \cdots \oplus E_{\lambda_k}$.
Thus, if we assemble bases of the generalized eigenspaces into a matrix $P$, then $P^{-1}AP$ is block diagonal, with one block for each eigenvalue.
Now if we write each block as $\lambda_i I + N_i$, the matrix $N_i$ is nilpotent, so it remains to choose a convenient basis for each generalized eigenspace.
If $N$ is nilpotent, there exist vectors $v_1, \ldots, v_m$ and integers $k_1, \ldots, k_m$ such that $\left\{N^j v_i : 1 \leq i \leq m,\ 0 \leq j < k_i\right\}$ is a basis for the underlying space, where $N^{k_i} v_i = 0$.
Taking the resulting Jordan chains as the basis puts each block into Jordan form: a block-diagonal matrix of Jordan blocks, each with a single eigenvalue $\lambda_i$ on the diagonal and $1$'s on the superdiagonal.
As for the columns of $P$ in $A = PJP^{-1}$, they are precisely the generalized eigenvectors making up the Jordan chains, listed chain by chain.
Another useful observation is that the Jordan normal form $J$ is uniquely determined by $A$ up to the order of the Jordan blocks.
Eigenvalues and the Jordan normal form
The number of Jordan blocks with $\lambda$'s on their diagonals is the geometric multiplicity of $\lambda$; the sum of their sizes is the algebraic multiplicity of $\lambda$. (If these are equal, each Jordan block for $\lambda$ must be $1 \times 1$, and the eigenvalue is called semisimple. $A$ is diagonalizable if and only if all its eigenvalues are semisimple.)
The minimal polynomial and the Jordan normal form
The minimal polynomial of $A$ is $m_A(\lambda) = \prod_i\left(\lambda - \lambda_i\right)^{d_i}$, where $d_i$ is the size of the largest Jordan block for $\lambda_i$ (also called the index of $\lambda_i$).
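A small SymPy illustration with a deliberately defective matrix (the matrix is an arbitrary example, not one from the notes):

```python
import sympy as sp

A = sp.Matrix([[3, 1],
               [-1, 1]])                 # characteristic polynomial (lambda - 2)**2, one eigenvector

P, J = A.jordan_form()                   # A = P*J*P**(-1)
print(J)                                 # a single 2x2 Jordan block with eigenvalue 2
print(sp.simplify(P*J*P.inv() - A))      # zero matrix

N = A - 2*sp.eye(2)
print(N, N**2)                           # N != 0 but N**2 = 0, so the minimal polynomial is (lambda - 2)**2
```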
Appendix B: The matrix exponential
If $A$ is a square matrix, its exponential is defined by $e^A = \sum_{k=0}^\infty \frac{A^k}{k!}$ (the series converges for every $A$).
Clearly, if $A = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$, then $e^{tA} = \operatorname{diag}\left(e^{\lambda_1 t}, \ldots, e^{\lambda_n t}\right)$; more generally, if $A = PDP^{-1}$ with $D$ diagonal, then $e^{tA} = P\,e^{tD}\,P^{-1}$.
For the general case, it suffices to compute $e^{tJ}$ for a Jordan block $J = \lambda I + N$: since $\lambda I$ and $N$ commute and $N$ is nilpotent, $e^{tJ} = e^{\lambda t}\left(I + tN + \cdots + \frac{t^{k-1}}{(k-1)!}N^{k-1}\right)$.
When the general solution to $\vec{x}' = A\vec{x}$ is written using the eigenpairs of a diagonalizable $A$, it takes the form $\vec{x} = c_1 e^{\lambda_1 t}\vec{v}_1 + \cdots + c_n e^{\lambda_n t}\vec{v}_n$.
However, the solution in this form may involve complex coefficients and functions even when $A$ is real.
For a simple (or, more generally, a nondefective) complex eigenvalue $\lambda = \alpha + i\beta$ of a real matrix $A$ with eigenvector $\vec{v} = \vec{a} + i\vec{b}$, the complex solutions $e^{\lambda t}\vec{v}$ and $e^{\bar{\lambda} t}\bar{\vec{v}}$ may be replaced by the real solutions $\operatorname{Re}\left(e^{\lambda t}\vec{v}\right) = e^{\alpha t}\left(\vec{a}\cos\beta t - \vec{b}\sin\beta t\right)$ and $\operatorname{Im}\left(e^{\lambda t}\vec{v}\right) = e^{\alpha t}\left(\vec{a}\sin\beta t + \vec{b}\cos\beta t\right)$.
For reference and illustration, we also note what happens in the case of a double eigenvalue $\lambda$ with only one independent eigenvector $\vec{v}$ (a defective eigenvalue): a second, independent solution is $t e^{\lambda t}\vec{v} + e^{\lambda t}\vec{w}$, where $\vec{w}$ is a generalized eigenvector satisfying $(A - \lambda I)\vec{w} = \vec{v}$.
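Using the same illustrative defective matrix as in the Appendix A sketch (SciPy assumed; the time value is arbitrary), the Jordan-block formula for the exponential can be checked directly:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, 1.0],
              [-1.0, 1.0]])                  # double eigenvalue 2, a single eigenvector
N = A - 2*np.eye(2)                          # nilpotent part: N @ N = 0

t = 0.7
direct = expm(t*A)
via_jordan = np.exp(2*t)*(np.eye(2) + t*N)   # e^{tA} = e^{2t} (I + t*N)
print(np.max(np.abs(direct - via_jordan)))   # ~0
```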
1. This is because the space of solutions is the kernel of a linear differential operator! ↩︎
2. If $\bar{r} = \frac{r_1 + r_2}{2}$ and $\delta = \frac{r_1 - r_2}{2}$ so that $r_{1,2} = \bar{r} \pm \delta$, the general solution in this case can be written as $y = e^{\bar{r}x}\left(C_1\cosh\delta x + C_2\sinh\delta x\right)$, which unifies the real and complex cases. In fact, if $\delta = i\beta$, we have $\cosh\delta x = \cos\beta x$ and $\sinh\delta x = i\sin\beta x$ (where, by symmetry, the factor of $i$ is immaterial in $\sinh\delta x$ and may be absorbed into $C_2$ for $\delta \neq 0$). ↩︎
3. The (ordinary) frequency is $\frac{\omega_0}{2\pi}$ (cycles per unit time) and the period is $\frac{2\pi}{\omega_0}$. ↩︎
4. We reserve the variable $\omega$ for the angular frequency of a sinusoidal forcing function (see below) and therefore use $\omega_1$ where $\omega$ was previously. ↩︎
5. The converse, however, is false: $x^2$ and $x\,|x|$ are (continuously) differentiable on $\mathbb{R}$ and their Wronskian vanishes identically thereon, yet they are not linearly dependent on any neighbourhood of the origin. ↩︎
6. More generally, $P$, $Q$, and $R$ can be meromorphic functions. ↩︎
7. More generally, an ordinary point is one at which $\frac{Q}{P}$ and $\frac{R}{P}$ are analytic. ↩︎
8. More generally, a regular singular point is a singular point at which $\frac{Q}{P}$ has a pole of order $\leq 1$ and $\frac{R}{P}$ has a pole of order $\leq 2$. ↩︎
9. Given that $X(t) = e^{tA}C$ for some constant invertible matrix $C$, solving for $e^{tA}$ in terms of $X(t)$ yields $e^{tA} = X(t)\,C^{-1}$ with $C = X(0)$, whence the result follows. ↩︎
10. We assume that $A$ is invertible (or equivalently, that both its eigenvalues are nonzero) so that the origin is an isolated critical point of the system. Indeed, if there were even one other critical point, by linearity there would be infinitely many (constituting a subspace of the plane), so the origin is an isolated critical point if and only if it is the sole critical point. ↩︎
11. When the eigenvalues are distinct and of the same sign, nodes are sometimes also called “improper” owing to their graphical similarity to the latter case. ↩︎
12. As an example, a free undamped mass-spring system obeys the conservative equation $x'' + \frac{k}{m}x = 0$. The quantity $\frac{1}{2}(x')^2 + \frac{k}{2m}x^2$, or equivalently, $\frac{1}{2}m(x')^2 + \frac{1}{2}kx^2$, is therefore conserved. But this is just the total energy of the system: the first term is kinetic energy; the second is potential energy! ↩︎
13. We can rule out the possibility of a spiral (source or sink) since the conservation equation implies that trajectories are symmetric about the $x$-axis. ↩︎
14. As an example, a free undamped pendulum obeys the conservative equation $\theta'' + \frac{g}{\ell}\sin\theta = 0$. The critical points occur when $\theta' = 0$ (the pendulum is stationary) and $\theta = 0$ (it is hanging straight down) or $\theta = \pi$ (it is balanced upside down). The former is evidently a stable centre ($\lambda = \pm i\sqrt{g/\ell}$); the latter an unstable saddle point ($\lambda = \pm\sqrt{g/\ell}$). ↩︎
15. That such a polynomial exists follows from a dimension argument; uniqueness is immediate. Moreover, Euclidean division shows that the polynomials that annihilate $A$ are exactly the multiples of $m_A$. ↩︎
16. Apply the main result to $A - \lambda I$, whose minimal polynomial $m$ satisfies $m(0) = m_A(\lambda)$. ↩︎