Calculus
Differential calculus
Derivatives
(Total) derivative of $\vec{f} : \Rn \to \Rm$ at $\vec{x}$: linear map $d\vec{f}_\vec{x}: \Rn \to \Rm$ such that $$ \lim_{\vec{h} \to \vec{0}} \frac{\norm{\vec{f}(\vec{x} + \vec{h}) - [\vec{f}(\vec{x}) + d\vec{f}_\vec{x}(\vec{h})]}}{\norm{\vec{h}}} = 0 $$ Jacobian (matrix) of $\vec{f}: \Rn \to \Rm$ at $\vec{x}$: matrix representation of $d\vec{f}_\vec{x}$
Gradient of $f: \Rn \to \R$ at $\vec{x}$ (denoted ${\grad}f(\vec{x})$): transpose of the Jacobian of $f$ at $\vec{x}$; consequently, ${\grad} f(\vec{x}) \cdot \vec{h} = df_\vec{x}(\vec{h})$ for all $\vec{h} \in {\Rn}$
Directional derivative of $f: \Rn \to \R$ at $\vec{x}$ along the unit vector $\vec{v}$: $$ \grad_\vec{v} f(\vec{x}) = \lim_{h \to 0} \frac{f(\vec{x} + h\vec{v}) - f(\vec{x})}{h} $$ If $f$ is differentiable at $\vec{x}$, then $\grad_\vec{v} f(\vec{x}) = \grad f(\vec{x}) \cdot \vec{v}$.
Partial derivative of $f: \Rn \to \R$ at $\vec{x}$ with respect to $x_i$: $$ \frac{\partial f}{\partial x_i}(\vec{x}) = {\grad}_{\vec{e}_i} f(\vec{x}) $$ If $f$ is differentiable at $\vec{x}$, then $$ \grad f(\vec{x}) = \begin{bmatrix} \frac{\partial f}{\partial x_1}(\vec{x}) \\ \vdots \\ \frac{\partial f}{\partial x_n}(\vec{x}) \end{bmatrix}. $$ Conversely, if all partials of a function $\vec{f}: \Rn \to \Rm$ exist in a neighbourhood of $\vec{x}$ and are continuous at $\vec{x}$, then $\vec{f}$ is differentiable at $\vec{x}$.
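A quick numerical sketch of the relation $\grad_\vec{v} f(\vec{x}) = \grad f(\vec{x}) \cdot \vec{v}$, using a hypothetical function $f(x, y) = x^2 + 3xy$ (not from the text) and a central finite difference:

```python
# Hypothetical example: f(x, y) = x^2 + 3xy, whose gradient is (2x + 3y, 3x).
def f(x, y):
    return x * x + 3 * x * y

def grad_f(x, y):
    return (2 * x + 3 * y, 3 * x)

def directional_derivative(f, x, y, vx, vy, h=1e-6):
    # central finite difference along the unit vector (vx, vy)
    return (f(x + h * vx, y + h * vy) - f(x - h * vx, y - h * vy)) / (2 * h)

x, y = 1.0, 2.0
vx, vy = 0.6, 0.8                  # a unit vector
gx, gy = grad_f(x, y)
numeric = directional_derivative(f, x, y, vx, vy)
assert abs(numeric - (gx * vx + gy * vy)) < 1e-6   # grad f . v
```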
Chain rule
If $\vec{f}: \Rm \to \mathbb{R}^k$ and $\vec{g}: \Rn \to \Rm$ are differentiable, then $$ d(\vec{f} \circ \vec{g})_{\vec{x}} = d\vec{f}_{\vec{g}(\vec{x})} \circ d\vec{g}_{\vec{x}}. $$
For instance, if $k = 1$, $y = f(\vec{u})$, and $\vec{u} = \vec{g}(\vec{x})$, then $$ \begin{bmatrix} \frac{\partial{y}}{\partial x_1} & \cdots & \frac{\partial{y}}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \frac{\partial y}{\partial u_1} & \cdots & \frac{\partial y}{\partial u_m} \end{bmatrix} \begin{bmatrix} \frac{\partial u_1}{\partial x_1} & \cdots & \frac{\partial u_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial u_m}{\partial x_1} & \cdots & \frac{\partial u_m}{\partial x_n} \end{bmatrix}, $$ so $\frac{\partial y}{\partial x_i} = \sum_{j = 1}^{m} \frac{\partial y}{\partial u_j} \frac{\partial u_j}{\partial x_i}$.
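The $k = 1$ case above can be checked numerically on a hypothetical composite (not from the text), comparing the chain-rule sum against a finite difference of the composition:

```python
# Hypothetical example: y = f(u1, u2) = u1**2 + u2 with
# (u1, u2) = g(x1, x2) = (x1*x2, x1 + x2).
def g(x1, x2):
    return (x1 * x2, x1 + x2)

def f(u1, u2):
    return u1 ** 2 + u2

def composite(x1, x2):
    return f(*g(x1, x2))

x1, x2 = 1.5, -0.5
u1, _ = g(x1, x2)
# Chain rule: dy/dx1 = (dy/du1)(du1/dx1) + (dy/du2)(du2/dx1) = 2*u1*x2 + 1.
chain_rule_value = 2 * u1 * x2 + 1

h = 1e-6
numeric = (composite(x1 + h, x2) - composite(x1 - h, x2)) / (2 * h)
assert abs(numeric - chain_rule_value) < 1e-6
```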
Clairaut’s theorem
If the second partials of $f: \Rn \to \R$ exist and are continuous at $\vec{x}$, then the mixed partials of $f$ are equal at $\vec{x}$, i.e., $$ \frac{\partial^2 f}{\partial x_i \partial x_j} (\vec{x}) = \frac{\partial^2 f}{\partial x_j \partial x_i} (\vec{x}) $$ for all $i, j \in [n]$.
Taylor series
Taylor’s theorem
Suppose that $f: [a, b] \to \R$ is $n+1$ times differentiable on $(a, b)$ and $f^{(n)}$ is continuous on $[a, b]$, and let $x_0, x \in [a, b]$. Then $$ f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!} (x-x_0)^{k} + \frac{f^{(n+1)}(\xi)}{(n+1)!} (x-x_0)^{n+1} $$ for some $\xi$ between $x_0$ and $x$.
If $f^{(n)}$ is absolutely continuous on the closed interval between $x_0$ and $x$, then the remainder term is equal to $$ \int_{x_0}^{x} \frac{f^{(n+1)}(t)}{n!}(x-t)^n\, dt. $$
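A worked sketch (hypothetical example) of Taylor's theorem for $f = \exp$ about $x_0 = 0$: every derivative of $\exp$ is $\exp$, so at $x = 1$ the Lagrange remainder is bounded by $e / (n+1)!$ (taking the worst $\xi \in [0, 1]$).

```python
import math

# Degree-n Taylor polynomial of e^x about x0 = 0.
def taylor_exp(x, n):
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x, n = 1.0, 6
remainder = abs(math.exp(x) - taylor_exp(x, n))
# Lagrange bound with xi at the worst endpoint of [0, x]:
bound = math.exp(x) * abs(x) ** (n + 1) / math.factorial(n + 1)
assert remainder <= bound
assert remainder < 1e-3
```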
Critical points and extrema
First derivative test
If $a$ is a critical point of $f: \R \to \R$ and $f'$ changes sign at $a$, then $a$ is a local minimum or maximum of $f$ according as $f'$ changes from nonpositive to nonnegative or vice versa.
Second derivative test
If $a$ is a critical point of $f: \Rn \to \R$ and the Hessian of $f$ at $a$ is positive definite (resp. negative definite), then $a$ is a local minimum (resp. maximum) of $f$. If it is indefinite but invertible, then $a$ is a saddle point of $f$.
Method of Lagrange multipliers
To find the extrema of $f: \Rn \to \R$ subject to the constraints $g_1(\vec{x}) = \cdots = g_M(\vec{x}) = 0$, solve the system $\grad f(\vec{x}) = \sum_{k=1}^{M} \lambda_k \grad g_k(\vec{x})$ and $g_1(\vec{x}) = \cdots = g_M(\vec{x}) = 0$ for $\vec{x}$ and the multipliers $\lambda_1, \ldots, \lambda_M$ (assuming the $\grad g_k(\vec{x})$ are linearly independent at the extremum).
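A worked sketch of the method on a hypothetical problem: maximize $f(x, y) = x + y$ subject to $g(x, y) = x^2 + y^2 - 1 = 0$. Stationarity $\grad f = \lambda \grad g$ gives $1 = 2\lambda x$ and $1 = 2\lambda y$, so $x = y$, and the constraint then forces $x = y = \pm 1/\sqrt{2}$.

```python
import math

# Candidate maximizer from solving the Lagrange system by hand:
x = y = 1 / math.sqrt(2)
lam = 1 / (2 * x)

assert abs(1 - 2 * lam * x) < 1e-12      # stationarity in x
assert abs(1 - 2 * lam * y) < 1e-12      # stationarity in y
assert abs(x ** 2 + y ** 2 - 1) < 1e-12  # constraint

# A coarse parametric search over the circle confirms the maximum value sqrt(2).
best = max(math.cos(2 * math.pi * k / 1000) + math.sin(2 * math.pi * k / 1000)
           for k in range(1000))
assert best <= x + y + 1e-9
```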
Integral calculus
Darboux and Riemann integrals
Partition of $[a, b]$: finite sequence $a = x_0 < x_1 < \cdots < x_n = b$
Lower/upper Darboux sum of $f: [a, b] \to \R$ with respect to the partition $\mathcal{P} = \set{x_i}_{i=0}^{n}$: $$ \begin{align} L(f, \mathcal{P}) &= \sum_{i=1}^{n} \left( \inf_{t \in [x_{i-1},\, x_i]} f(t) \right) (x_i - x_{i-1}) \\ U(f, \mathcal{P}) &= \sum_{i=1}^{n} \left( \sup_{t \in [x_{i-1},\, x_i]} f(t) \right) (x_i - x_{i-1}) \end{align} $$ Lower/upper Darboux integral of $f: [a, b] \to \R$: $$ \begin{align} \underline{\int_a^b} f(x) \, dx &= \sup_{\mathcal{P}} \, L(f, \mathcal{P}) \\ \overline{\int_a^b} f(x) \, dx &= \inf_{\mathcal{P}} \, U(f, \mathcal{P}) \end{align} $$ If $\underline{\int_a^b} f(x) \, dx = \overline{\int_a^b} f(x) \, dx$, their common value is termed the Darboux integral of $f$ and is denoted $\int_a^b f(x) \, dx$.
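The definitions can be exercised on a hypothetical integrand, $f(x) = x^2$ on $[0, 1]$ with a uniform partition; since $f$ is increasing there, the infimum/supremum on each subinterval occur at its left/right endpoint, and both sums squeeze toward $\int_0^1 x^2\,dx = 1/3$.

```python
# Lower/upper Darboux sums of f(x) = x^2 on [0, 1], uniform partition of size n.
def darboux_sums(n):
    xs = [i / n for i in range(n + 1)]
    lower = sum(xs[i - 1] ** 2 * (xs[i] - xs[i - 1]) for i in range(1, n + 1))
    upper = sum(xs[i] ** 2 * (xs[i] - xs[i - 1]) for i in range(1, n + 1))
    return lower, upper

lo, hi = darboux_sums(1000)
assert lo <= 1 / 3 <= hi       # the sums bracket the integral, 1/3
assert hi - lo < 2e-3          # and squeeze together as the partition refines
```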
Tagged partition of $[a, b]$: partition $\mathcal{P} = \set{x_i}_{i=0}^{n}$ of $[a, b]$ with tags $t_i \in [x_{i-1}, x_i]$ for each $i \in [n]$
Riemann sum of $f: [a, b] \to \R$ with respect to the partition $\mathcal{P} = \set{x_i}_{i=0}^{n}$ with tags $\mathcal{T} = \set{t_i}_{i=1}^{n}$: $$ S(f, \mathcal{P}, \mathcal{T}) = \sum_{i=1}^{n} f(t_i) (x_i - x_{i-1}) $$ If there exists an $I \in \R$ such that for all $\epsilon > 0$ there exists a partition $\mathcal{P}_\epsilon$ of $[a, b]$ such that $\abs{I - S(f, \mathcal{P}, \mathcal{T})} < \epsilon$ for all tagged partitions $\mathcal{P} \supseteq \mathcal{P}_\epsilon$ (i.e., for all refinements of $\mathcal{P}_\epsilon$), then $I$ is called the Riemann integral of $f$ and is denoted $\int_a^b f(x) \, dx$. The Riemann and Darboux integrals are equivalent.
A bounded function $f: [a, b] \to \R$ is Riemann/Darboux integrable if and only if it is continuous a.e.
Multiple integrals
Fubini’s theorem
Let $(X, \mathcal{M}, \mu)$ and $(Y, \mathcal{N}, \nu)$ be $\sigma$-finite measure spaces. If $f \in L^1(\mu \times \nu)$, then $$ \int_{X \times Y} f \ d(\mu \times \nu) = \int_X \int_Y f(x, y) \, d\nu(y) \, d\mu(x) = \int_Y \int_X f(x, y) \, d\mu(x) \, d\nu(y) . $$
The conclusion of Fubini’s theorem also holds if $f$ is nonnegative and measurable (Tonelli’s theorem). Hence if $f$ is a measurable function on $X \times Y$ and any one of $\int \abs{f} \, d(\mu \times \nu)$, $\iint \abs{f} \, d\nu \, d\mu$, $\iint \abs{f} \, d\mu \, d\nu$ is finite, then the iterated integrals of $f$ are equal.
Change of variables
Let $\Omega \subseteq \Rn$ be open and $\vec{g}: \Omega \to \Rn$ be an injective $C^1$ function with $d\vec{g}_\vec{x}$ invertible for all $\vec{x} \in \Omega$. If $f \in L^1(\vec{g}(\Omega), d\vec{x})$, then $$ \int_{\vec{g}(\Omega)} f(\vec{x}) \, d\vec{x} = \int_\Omega (f \circ \vec{g})(\vec{x}) \abs{\det d\vec{g}_\vec{x}} \, d\vec{x}. $$
Similarly, the conclusion of the change of variables theorem holds if $f$ is nonnegative and measurable.
Coordinate system | Substitution | Jacobian determinant |
---|---|---|
Polar | $\begin{align} x &= r \cos \theta \\ y &= r \sin \theta \end{align}$ | $r$ |
Cylindrical | $\begin{align} x &= r \cos \theta \\ y &= r \sin \theta \\ z &= z \end{align}$ | $r$ |
Spherical | $\begin{align} x &= r \cos \theta \, \sin \varphi \\ y &= r \sin \theta \, \sin \varphi \\ z &= r \cos \varphi \end{align}$ | $r^2 \sin \varphi$ |
Differentiation under the integral sign
Leibniz’s rule (differentiation under the integral sign)
Suppose that $f: X \times [a, b] \to \mathbb{C}$ and that $f(\cdot, y) \in L^1(\mu)$ for each $y \in [a, b]$. If $\frac{\partial f}{\partial y}$ exists and there is a $g \in L^1(\mu)$ such that $\Abs{\frac{\partial f}{\partial y}(x, y)} \leq g(x)$ for all $x, y$, then $$ \frac{d}{dy} \int_X f(x, y) \, d\mu(x) = \int_X \frac{\partial f}{\partial y} (x, y) \, d\mu(x). $$
Vector calculus
Vector differential operators
Gradient of $f: \mathbb{R}^3 \to \mathbb{R}$: $$ \mathrm{grad}\ f = \grad f = \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z} \right) $$ Curl of $\vec{F}: \mathbb{R}^3 \to \mathbb{R}^3$: $$ \mathrm{curl}\ \vec{F} = \grad \times \vec{F} = \left( \frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}, \frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}, \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} \right) $$ Divergence of $\vec{F}: \mathbb{R}^3 \to \mathbb{R}^3$: $$ \mathrm{div}\ \vec{F} = \grad \cdot \vec{F} = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z} $$ By Clairaut’s theorem, $\grad \cdot (\grad \times \vec{F}) = 0$ and $\grad \times (\grad f) = \vec{0}$ provided that the requisite partials are continuous.
Conservative vector field $\vec{F}: \Rn \to \Rn$: vector field such that $\vec{F} = \grad \phi$ for some $C^1$ scalar field $\phi: \Rn \to \R$ (called a scalar potential for $\vec{F}$)
Let $\vec{F}: \Rn \to \Rn$ be a vector field with continuous partials. Then $\vec{F}$ is conservative if and only if $\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} = 0$ (when $n = 2$) or $\grad \times \vec{F} = \vec{0}$ (when $n = 3$). (The “only if” direction holds on any open set; the “if” direction relies on the domain $\Rn$ being simply connected.)
Line and surface integrals
Line integral of a scalar field $f: \Rn \to \R$ along $C$: $$ \int_C f(\vec{r}) \, ds = \int_a^b f(\vec{r}(t)) \norm{\vec{r}'(t)} \, dt, $$ where $\vec{r}: [a, b] \to C$ is a parametrization of $C$
Line integral of a vector field $\vec{F}: \Rn \to \Rn$ along $C$: $$ \int_C \vec{F}(\vec{r}) \cdot d\vec{r} = \int_C \vec{F} \cdot \vec{\hat{T}} \ ds = \int_a^b \vec{F}(\vec{r}(t)) \cdot \vec{r}'(t) \, dt, $$ where $\vec{r}: [a, b] \to C$ is a parametrization of $C$ (and $\vec{\hat{T}}$ is the tangent vector to $C$)
Surface integral of a scalar field $f: \Rn \to \R$ over $S$: $$ \iint_S f(\vec{r}) \, dS = \iint_D f(\vec{r}(u, v)) \Norm{\frac{\partial \vec{r}}{\partial u} \times \frac{\partial \vec{r}}{\partial v}} \, du \, dv, $$ where $\vec{r}: D \to S$ is a parametrization of $S$
Surface integral of a vector field $\vec{F}: \Rn \to \Rn$ over $S$: $$ \iint_S \vec{F}(\vec{r}) \cdot d\vec{S} = \iint_S \vec{F} \cdot \vec{\hat{n}} \ dS = \iint_D \vec{F}(\vec{r}(u, v)) \cdot \left(\frac{\partial \vec{r}}{\partial u} \times \frac{\partial \vec{r}}{\partial v} \right) \, du \, dv, $$ where $\vec{r}: D \to S$ is a parametrization of $S$ (and $\vec{\hat{n}}$ is the normal vector to $S$)
Integral theorems
Gradient theorem
Let $C$ be a smooth curve parametrized by $\vec{r}: [a, b] \to C$, and let $f$ be a scalar field with continuous partials on $C$. Then $$ f(\vec{r}(b)) - f(\vec{r}(a)) = \int_C \grad f \cdot d\vec{r} $$
If $\vec{F}$ is a conservative vector field, the gradient theorem implies that $\int_C \vec{F} \cdot d\vec{r}$ is path-independent (or equivalently, $\int_C \vec{F} \cdot d\vec{r} = 0$ for any closed curve $C$). Conversely, if $\vec{F}$ is continuous and $\int_C \vec{F} \cdot d\vec{r}$ is path-independent in a domain $D$, then $\vec{F}$ is conservative on $D$.
Stokes’ theorem
Let $S$ be a piecewise smooth oriented surface bounded by a finite number of piecewise smooth simple closed positively-oriented curves, and let $\vec{F}$ be a vector field with continuous partials on $S$. Then $$ \int_{\partial S} \vec{F} \cdot d\vec{r} = \iint_{S} (\grad \times \vec{F}) \cdot d\vec{S}. $$
Gauss’/divergence theorem
Let $V$ be a solid bounded by a piecewise smooth oriented surface, and let $\vec{F}$ be a vector field with continuous partials on $V$. Then $$ \iint_{\partial V} \vec{F} \cdot d\vec{S} = \iiint_V \grad \cdot \vec{F} \ dV. $$
Green’s theorem
Let $D$ be a planar region bounded by a finite number of piecewise smooth simple closed positively-oriented curves, and let $\vec{F}$ be a vector field with continuous partials on $D$. Then $$ \int_{\partial D} F_x \, dx + F_y \, dy = \iint_D \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} \ dx \, dy. $$
Green’s theorem can be derived from Stokes’ theorem by taking $\vec{F} = (F_x, F_y, 0)$ and identifying $D$ with $S$ (whose normal vector would be $\vec{e}_3$).
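Green's theorem can be checked numerically on a hypothetical field, $\vec{F} = (-y, x)$ on the unit disc: the integrand $\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}$ equals $2$, so the double integral is $2 \cdot \mathrm{area} = 2\pi$, and the line integral around the positively oriented unit circle should agree.

```python
import math

# Line integral of F_x dx + F_y dy for F = (-y, x) over the unit circle,
# parametrized by (cos t, sin t), t in [0, 2*pi], via a Riemann sum.
N = 10000
line_integral = 0.0
for k in range(N):
    t = 2 * math.pi * k / N
    x, y = math.cos(t), math.sin(t)
    dx = -math.sin(t) * (2 * math.pi / N)   # x'(t) dt
    dy = math.cos(t) * (2 * math.pi / N)    # y'(t) dt
    line_integral += (-y) * dx + x * dy     # F_x dx + F_y dy
assert abs(line_integral - 2 * math.pi) < 1e-9
```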
Real analysis
Fundamentals
Sequences
Bolzano-Weierstrass theorem
Every bounded sequence in $\Rn$ has a convergent subsequence.
Completeness of Euclidean space
Every Cauchy sequence in $\Rn$ converges.
Monotone convergence theorem
A monotonic sequence in $\R$ is convergent if and only if it is bounded.
More precisely, an increasing sequence that is bounded above converges to its supremum, whereas a decreasing sequence that is bounded below converges to its infimum.
Limit superior of a sequence $\set{a_n}_{n=1}^{\infty}$: $$ \limsup_{n \to \infty} a_n = \lim_{n \to \infty} \sup_{m \geq n} a_m = \inf_{n \in \mathbb{N}} \sup_{m \geq n} a_m. $$ Limit inferior of a sequence $\set{a_n}_{n=1}^{\infty}$: $$ \liminf_{n \to \infty} a_n = \lim_{n \to \infty} \inf_{m \geq n} a_m = \sup_{n \in \mathbb{N}} \inf_{m \geq n} a_m. $$ Equivalently, the limit inferior and superior may be defined as the infimum and supremum, respectively, of the set of subsequential limits (in $\overline{\R}$) of $\set{a_n}$.
Series
As a consequence of the Cauchy criterion for sequences, we have
Cauchy’s convergence test
$\sum a_n$ converges if and only if for all $\epsilon > 0$ there exists an $N \in \mathbb{N}$ such that $\abs{\sum_{k=n}^{m} a_k} < \epsilon$ for $m \geq n \geq N$.
In particular (taking $m = n$), the series converges only if $\lim_{n \to \infty} a_n = 0$.
Similarly, the monotone convergence theorem implies that a series of nonnegative terms converges if and only if the sequence of its partial sums is bounded.
Convergence of the geometric series
$\sum_{n=0}^\infty r^n$ converges to $1/(1-r)$ if $\abs{r} < 1$ and diverges otherwise.
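A minimal numeric sketch with a hypothetical ratio $r = 1/2$: the partial sums approach $1/(1-r) = 2$.

```python
# Partial sum of the geometric series with r = 1/2; the tail 2^(-59) is
# far below the tolerance.
r = 0.5
partial = sum(r ** n for n in range(60))
assert abs(partial - 1 / (1 - r)) < 1e-12
```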
Convergence of the $p$-series
$\sum_{n=1}^{\infty} 1/n^p$ converges if and only if $p > 1$, and likewise for $\sum_{n=2}^{\infty} 1/[n (\ln n)^p]$.
Direct comparison test
If $a_n \in \mathcal{O}(b_n)$ and $\sum \, \abs{b_n}$ converges, then $\sum \, \abs{a_n}$ converges.
If $a_n \in \Omega(b_n)$ and $\sum \, \abs{b_n}$ diverges, then $\sum \, \abs{a_n}$ diverges.
Corollary:
Limit comparison test: if $a_n \in \Theta(b_n)$, then $\sum \, \abs{a_n}$ converges if and only if $\sum \, \abs{b_n}$ converges.
Cauchy condensation test
If $\set{a_n}$ is a nonincreasing sequence of nonnegative terms, then $\sum_{n=1}^{\infty} a_n \leq \sum_{k=0}^{\infty} 2^k a_{2^k} \leq 2 \sum_{n=1}^{\infty} a_n$.
Thus, $\sum_{n=1}^{\infty} a_n$ converges if and only if $\sum_{k=0}^{\infty} 2^k a_{2^k}$ converges.
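The condensation inequalities can be illustrated on a hypothetical choice $a_n = 1/n^2$, truncating both series consistently ($n$ up to $2^{15}$, $k$ up to $15$); here $2^k a_{2^k} = 2^{-k}$.

```python
# Truncated versions of both series for a_n = 1/n^2.
s = sum(1 / n ** 2 for n in range(1, 2 ** 15 + 1))
condensed = sum(2 ** k * (1 / (2 ** k) ** 2) for k in range(16))
assert s <= condensed <= 2 * s
```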
Ratio test
If $\limsup_{n \to \infty} \abs{a_{n+1}/a_n} < 1$, then $\sum \, \abs{a_n}$ converges.
If $\liminf_{n \to \infty} \abs{a_{n+1}/a_n} > 1$ or $\abs{a_{n+1}/a_n} \geq 1$ for all sufficiently large $n$, then $\sum a_n$ diverges.
Root test
If $\limsup_{n \to \infty} \abs{a_n}^{1/n} < 1$, then $\sum \, \abs{a_n}$ converges.
If $\limsup_{n \to \infty} \abs{a_n}^{1/n} > 1$, then $\sum a_n$ diverges.
Dirichlet’s test
If the sequence of partial sums of $\sum a_n$ is bounded and $\set{b_n}$ decreases to zero (i.e., $b_{n+1} \leq b_n$ and $\lim_{n \to \infty} b_n = 0$), then $\sum a_n b_n$ converges.
Corollaries:
Alternating series test: if $c_{2k-1} \geq 0, c_{2k} \leq 0$ for all $k \in \mathbb{N}$ and $\set{|c_n|}$ decreases to zero, then $\sum c_n$ converges.
Abel’s test: if $\sum a_n$ converges and $\set{b_n}$ is bounded and monotone, then $\sum a_n b_n$ converges.
Continuity, compactness, and connectedness
Continuous function $f$ between topological spaces $X, Y$: the preimage under $f$ of each open set in $Y$ is an open set in $X$
Function $f$ between topological spaces $X, Y$ is continuous at $x \in X$: the preimage under $f$ of each neighbourhood of $f(x)$ is a neighbourhood of $x$ ($f$ is continuous if and only if $f$ is continuous at each $x \in X$)
Compact topological space $X$: every open cover of $X$ has a finite subcover
Connected topological space $X$: not disconnected (i.e., not the union of two disjoint nonempty open sets)
In Euclidean space, we have the following characterizations:
Continuity in Euclidean space
$f: \Rn \to \Rm$ is continuous at $\vec{x}$ if and only if $\forall \epsilon > 0\ \exists \delta > 0 : \forall \vec{y} \in \Rn\ (\norm{\vec{y}-\vec{x}} < \delta \implies \norm{f(\vec{y}) - f(\vec{x})} < \epsilon)$.
Heine-Borel theorem
A subset of $\Rn$ is compact if and only if it is closed and bounded.
Connectedness in $\R$
A subset of $\R$ is connected if and only if it is an interval (i.e., a set $I$ such that $\forall x, y \in I\ (x \leq z \leq y \implies z \in I)$).
In general, continuous maps preserve both compactness and connectedness. Two important consequences for real-valued functions are:
Extreme value theorem
Let $X$ be a nonempty compact topological space and $f: X \to \R$ be continuous. Then $f$ is bounded and attains both its infimum and supremum on $X$.
Intermediate value theorem
Let $f: [a, b] \to \R$ be continuous. If $c \in \R$ is strictly between $f(a)$ and $f(b)$, then there exists an $x \in (a, b)$ such that $f(x) = c$.
Uniformly continuous function $f$ between metric spaces $(X, d_X), (Y, d_Y)$: $\forall \epsilon > 0\ \exists \delta > 0 : \forall x, y \in X\ (d_X(x, y) < \delta \implies d_Y(f(x), f(y)) < \epsilon)$
Let $X, Y$ be metric spaces with $X$ compact. If $f: X \to Y$ is continuous, then it is uniformly so.
Advanced calculus
(Cauchy’s) mean value theorem
If $f, g: [a, b] \to \R$ are continuous on $[a, b]$ and differentiable on $(a, b)$, then there exists a $c \in (a, b)$ such that $[f(b) - f(a)] g'(c) = [g(b) - g(a)] f'(c)$. (The ‘ordinary’ mean value theorem has $g(x) = x$.)
Mean value theorem for integrals
If $f: [a, b] \to \R$ is continuous, then it attains its mean value at some $c \in (a, b)$, i.e., $\int_a^b f(x) \, dx = f(c) (b-a)$.
Darboux’s theorem
Let $f: [a, b] \to \R$ be differentiable (but not necessarily continuously so). If $c \in \R$ is strictly between $f'(a)$ and $f'(b)$, then there exists an $x \in (a, b)$ such that $f'(x) = c$.
Hölder’s inequality
Suppose $f, g$ are measurable functions on some measure space. If $p \in [1, \infty]$ and $q$ is the conjugate exponent to $p$ (i.e., $1/p + 1/q = 1$), then $\norm{fg}_1 \leq \norm{f}_p \norm{g}_q$.
Corollary:
Cauchy-Schwarz inequality: $\abs{\inner{f}{g}} \leq \norm{f}_2 \norm{g}_2$ for $f, g \in L^2(\mu)$.
Inverse function theorem
Let $\Omega \subseteq \Rn$ be open and $\vec{f}: \Omega \to \Rn$ be $C^1$. If $d\vec{f}_\vec{x}$ is invertible at some $\vec{x} \in \Omega$, then $\vec{f}$ is a local $C^1$ diffeomorphism at $\vec{x}$ (i.e., there exists an open set $U \ni \vec{x}$ such that $\vec{f}\restriction_U$ is bijective, $C^1$, and has a $C^1$ inverse). Moreover, $(d\vec{f}^{-1})_{\vec{f}(\vec{x})} = (d\vec{f}_\vec{x})^{-1}$.
Corollary:
If $\Omega, \vec{f}$ are as above and $d\vec{f}_\vec{x}$ is invertible at every $\vec{x} \in \Omega$, then $\vec{f}$ is an open map.
Implicit function theorem
Let $\Omega \subseteq \mathbb{R}^{n+m}$ be open and $\vec{f}: \Omega \to \Rm$ be $C^1$, and suppose that $\vec{f}(\vec{a}, \vec{b}) = \vec{0}$ for some $(\vec{a}, \vec{b}) \in \Omega$ (regarding $\Omega$ as a subset of $\Rn \times \Rm$). Write $d\vec{f}_{(\vec{a}, \vec{b})} = \begin{bmatrix}X & Y\end{bmatrix}$ with $X \in \R^{m \times n}, Y \in \R^{m \times m}$.
If $Y$ is invertible, then there exists an open set $U \ni \vec{a}$ for which there is a unique $C^1$ map $\vec{g}: U \to \Rm$ satisfying $\vec{g}(\vec{a}) = \vec{b}$ and $\vec{f}(\vec{x}, \vec{g}(\vec{x})) = \vec{0}$ for all $\vec{x} \in U$. Moreover, $d\vec{g}_{\vec{a}} = -Y^{-1} X$.
Contraction mappings
Contraction mapping $f$ on a metric space $(X, d)$: there exists a $\rho \in [0, 1)$ such that $d(f(x), f(y)) \leq \rho d(x, y)$ for all $x, y \in X$ (the minimal $\rho$ is called the Lipschitz constant of $f$)
Banach fixed point theorem
Let $f$ be a contraction mapping on a nonempty complete metric space. Then $f$ admits a unique fixed point (a point $x$ such that $f(x) = x$). Moreover, fixed-point iteration starting from any point converges to it (more precisely, if $x_*$ is the fixed point of $f$, $x_0$ is arbitrary, and $x_n := f(x_{n-1})$, then $x_n \to x_*$ with $d(x_*, x_{n+1}) \leq \rho d(x_*, x_n)$).
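A standard sketch of fixed-point iteration on a hypothetical contraction: $f(x) = \cos x$ on $[0, 1]$, where $|f'(x)| = |\sin x| \leq \sin 1 < 1$ and $\cos$ maps $[0, 1]$ into itself, so the iteration converges to the unique fixed point (the Dottie number, roughly $0.739085$).

```python
import math

# Fixed-point iteration x_{n+1} = cos(x_n) starting from x_0 = 0.
x = 0.0
for _ in range(100):
    x = math.cos(x)
assert abs(x - math.cos(x)) < 1e-12       # numerically a fixed point
assert abs(x - 0.7390851332151607) < 1e-10
```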
Sequences and series of functions
In this subsection, all functions are assumed to be real- or complex-valued functions defined on a subset of a metric space, unless otherwise indicated.
Pointwise and uniform convergence
$f_n \to f$ pointwise on $E$: $\forall x \in E\ (f_n(x) \to f(x))$
$f_n \to f$ uniformly on $E$: $\norm{f - f_n}_\infty = \sup_{x \in E} \abs{f(x) - f_n(x)} \to 0$
Weierstrass M-test
If $\norm{f_n}_\infty \leq M_n$ for each $n$ and $\sum M_n < \infty$, then $\sum f_n$ converges absolutely and uniformly.
Uniform limit theorem
If $f_n \to f$ uniformly and $\lim_{x \to a} f_n(x)$ exists for each $n$, then $\lim_{x \to a} \lim_{n \to \infty} f_n(x) = \lim_{n \to \infty} \lim_{x \to a} f_n(x)$ (with both sides existing). Consequently, the uniform limit of a sequence of continuous functions is continuous.
Dini’s theorem
If a monotone sequence of continuous functions converges pointwise to a continuous function on a compact topological space, then the convergence is uniform.
If $X, Y$ are metric spaces with $Y$ complete, then $B(X, Y)$ (bounded functions) and $C_b(X, Y)$ (continuous bounded functions) are complete with respect to the uniform metric $d(f, g) = \sup_{x \in X} d_Y(f(x), g(x))$.
Note that if $X$ is compact, $C(X, Y) = C_b(X, Y)$ is complete.
Differentiation and integration of sequences of functions
Differentiation of a sequence of functions
Suppose that $\set{f_n}$ is a sequence of differentiable functions on $[a, b]$ such that $\set{f_n(x_0)}$ converges for some $x_0 \in [a, b]$. If $\set{f_n'}$ converges uniformly, then $\set{f_n}$ converges uniformly to a function $f$ and $f' = \lim_{n \to \infty} f_n'$.
Dominated convergence theorem
Suppose that $\set{f_n} \subseteq L^1$ with $f_n \to f$ a.e. If there exists a $g \in L^1$ such that $\abs{f_n} \leq g$ a.e. for each $n$, then $f \in L^1$ and $\int f = \lim_{n \to \infty} \int f_n$.
Corollary:
If $\set{f_n} \subseteq L^1$ and $\sum \int \abs{f_n} < \infty$, then ($\sum f_n$ is defined a.e.,) $\sum f_n \in L^1$ and $\int \sum f_n = \sum \int f_n$.
The Arzelà–Ascoli theorem
Pointwise bounded family $\mathcal{F}$ of functions on $E$: $\forall x \in E\ (\sup_{f \in \mathcal{F}} \abs{f(x)} < \infty)$
Uniformly bounded family $\mathcal{F}$ of functions on $E$: $\sup_{f \in \mathcal{F},\, x \in E} \abs{f(x)} < \infty$
Equicontinuous family $\mathcal{F}$ of functions on $E$: $\forall \epsilon > 0\ \exists \delta > 0 : \forall f \in \mathcal{F}\ \forall x, y \in E\ (d_X(x, y) < \delta \implies d_Y(f(x), f(y)) < \epsilon)$ (cf. uniform continuity of $f$)
Relatively compact subspace $Y$ of a topological space $X$: $\closure{Y}$ is compact; if $X$ is a metric space, this is equivalent to every sequence in $Y$ having a subsequence converging in $X$
Arzelà–Ascoli theorem
Let $K$ be a compact metric space. A family $\mathcal{F} \subseteq C(K)$ is relatively compact if and only if it is pointwise bounded and equicontinuous.
In fact, for an equicontinuous family, pointwise and uniform boundedness are equivalent, so “pointwise” may be replaced by “uniformly” above.
The Weierstrass approximation theorem
Subalgebra $\mathcal{A} \subseteq C(K, \mathbb{F})$, where $K$ is a compact Hausdorff space and $\mathbb{F} \in \set{\R, \C}$: set of functions closed under scalar ($\mathbb{F}$) multiplication, addition, and multiplication
$S \subseteq C(K, \mathbb{F})$ separates points: for all $x, y \in K$ with $x \neq y$, there exists an $f \in S$ such that $f(x) \neq f(y)$
$S \subseteq C(K, \mathbb{F})$ vanishes nowhere: for all $x \in K$, there exists an $f \in S$ such that $f(x) \neq 0$
$S \subseteq C(K, \C) $ is self-adjoint: $S$ is closed under complex conjugation
Stone-Weierstrass theorem (real version)
Let $K$ be a compact Hausdorff space and $\mathcal{A}$ be a subalgebra of $C(K, \R)$ that separates points and vanishes nowhere. Then $\mathcal{A}$ is dense in $C(K, \R)$.
Corollary:
Weierstrass approximation theorem: polynomials are dense in $C([a, b], \R)$.
Stone-Weierstrass theorem (complex version)
Let $K$ be a compact Hausdorff space and $\mathcal{A}$ be a self-adjoint subalgebra of $C(K, \C)$ that separates points and vanishes nowhere. Then $\mathcal{A}$ is dense in $C(K, \C) $.
Corollary:
Trigonometric polynomials ($\mathrm{span} \set{z^k : k \in \mathbb{Z}}$) are dense in $C(\mathbb{T})$ (where $\mathbb{T}$ is the unit circle in $\C$).
(The Stone-Weierstrass theorems continue to hold if $K$ is merely locally compact Hausdorff and “$C$” is replaced by “$C_0$”, the continuous functions vanishing at infinity.)
Power series
Power series: series of the form $\sum_{n=0}^{\infty} a_n(z-z_0)^n$, where $z_0 \in \C, \set{a_n}_{n=0}^{\infty} \subseteq \C$
Radius of convergence of a power series: $r = [\limsup_{n \to \infty} \abs{a_n}^{1/n}]^{-1} = \liminf_{n \to \infty} \abs{a_n}^{-1/n}$ (we also have $r = [\lim_{n \to \infty} \abs{a_{n+1}/a_n}]^{-1}$ if this limit exists)
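Both formulas can be sketched numerically on hypothetical coefficients $a_n = n \cdot 2^n$, for which the true radius is $1/2$: the ratios $|a_{n+1}/a_n| \to 2$ and the roots $|a_n|^{1/n} \to 2$.

```python
# Estimating the radius of convergence of sum n * 2^n * z^n.
def a(n):
    return n * 2 ** n

n = 1000
ratio = a(n + 1) / a(n)        # -> 2, so r = 1/2 by the ratio formula
root = a(n) ** (1 / n)         # -> 2, so r = 1/2 by the root formula
assert abs(1 / ratio - 0.5) < 1e-3
assert abs(1 / root - 0.5) < 1e-2
```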
Convergence of power series
A power series converges absolutely and compactly (i.e., uniformly on compact subsets) in $B_r(z_0)$, its disc of convergence.
Differentiation and integration of power series
A power series may be differentiated and integrated term-by-term in $B_r(z_0) $; the resulting series have the same radii of convergence as the original.
Abel’s theorem
If a real power series centred at $x_0 \in \R$ converges at $x_0 \pm r$ (i.e., at an endpoint of its interval of convergence), then it is also continuous at $x_0 \pm r$.
Complex analysis
Fundamentals
Elementary functions
Complex exponential: $e^z = \sum_{n=0}^{\infty} z^n/n!$
Complex sine and cosine: $\cos z = (e^{iz} + e^{-iz})/2,\, \sin z = (e^{iz} - e^{-iz})/2i $
Euler’s formula: $e^{iz} = \cos z + i \sin z$
de Moivre’s formula: $(\cos z + i \sin z)^n = \cos(nz) + i \sin(nz)$ for all $n \in \mathbb{Z}$
$e^z = 1 \iff z = 2\pi i k,\, k \in \mathbb{Z}$
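These identities can be checked numerically with `cmath` at a hypothetical sample point $z$:

```python
import cmath

z = 0.3 + 0.2j

# Euler's formula: e^{iz} = cos z + i sin z.
assert abs(cmath.exp(1j * z) - (cmath.cos(z) + 1j * cmath.sin(z))) < 1e-12

# de Moivre's formula with n = 5.
n = 5
assert abs((cmath.cos(z) + 1j * cmath.sin(z)) ** n
           - (cmath.cos(n * z) + 1j * cmath.sin(n * z))) < 1e-12

# e^z = 1 at z = 2*pi*i*k (here k = 3).
assert abs(cmath.exp(2j * cmath.pi * 3) - 1) < 1e-12
```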
Complex hyperbolic sine and hyperbolic cosine: $\cosh z = (e^{z} + e^{-z})/2,\, \sinh z = (e^{z} - e^{-z})/2$
$n$th root ($n$-valued): $\sqrt[n]{z} = \sqrt[n]{\abs{z}}e^{i \arg(z) / n}$
Complex logarithm (multi-valued): $\ln z = \ln \abs{z} + i \arg z$
Complex power: $z^w = e^{w \ln z}$ (if $w \in \mathbb{Q}$ with $w = m/n$ in lowest terms, then $z^w$ is $n$-valued; otherwise, it has countably infinitely many values)
Differentiability and holomorphicity
In the following definitions, $\Omega$ denotes an open subset of $\C$.
$f: \Omega \to \C$ (complex-)differentiable at $z_0 \in \Omega$: $f'(z_0) = \lim_{h \to 0} [f(z_0 + h) - f(z_0)] / h$ exists
$f : \Omega \to \C$ holomorphic on $\Omega$: $f$ is differentiable at every $ z_0 \in \Omega$
$f: \Omega \to \C$ holomorphic at $z_0 \in \Omega$: $f$ is differentiable in a neighbourhood of $z_0$ (or equivalently, $f$ is holomorphic on a neighbourhood of $z_0$)
$f$ entire: $f$ is holomorphic on $\C$
Wirtinger derivatives of $f: \Omega \to \C$: $$ \frac{\partial f}{\partial z} = \frac{1}{2} \left(\frac{\partial f}{\partial x} - i \frac{\partial f}{\partial y}\right), \qquad \frac{\partial f}{\partial \conj{z}} = \frac{1}{2} \left(\frac{\partial f}{\partial x} + i \frac{\partial f}{\partial y}\right), $$ where $x = \Re(z)$ and $y = \Im(z)$
If $f$ is viewed as a function of two real variables $x$, $y$ with $z = x + iy, \conj{z} = x - iy$, then $\partial f / \partial z = df/dz = (\partial f / \partial x)(\partial x / \partial z) + (\partial f / \partial y)(\partial y / \partial z)$ and likewise $\partial f / \partial \conj{z} = df/d\conj{z}$.
Cauchy-Riemann equations
If $f$ is complex-differentiable at $z_0$, then $(\partial f / \partial\conj{z}) (z_0) = 0$ and $f'(z_0) = (\partial f / \partial z)(z_0)$.
Writing $f = u + iv$ and $z_0 = x_0 + i y_0$, the first part of the conclusion reads $$ \frac{\partial u}{\partial x}(x_0, y_0) = \frac{\partial v}{\partial y}(x_0, y_0),\quad \frac{\partial u}{\partial y}(x_0, y_0) = -\frac{\partial v}{\partial x}(x_0, y_0), $$ which are the eponymous equations.
Conversely, if the partials of $u$ and $v$ exist in a neighbourhood of $(x_0, y_0)$, are continuous at $(x_0, y_0)$, and satisfy the Cauchy-Riemann equations, then $f$ is complex-differentiable at $z_0$.
Analytic functions
From the section on power series, we know that $f(z) = \sum_{n=0}^{\infty} a_n(z-z_0)^n$ is holomorphic on its disc of convergence with $f'(z) = \sum_{n=1}^{\infty} n a_n(z-z_0)^{n-1}$. (Thus, a power series is infinitely differentiable on its disc of convergence.)
$f: \Omega \to \C$ analytic at $z_0 \in \Omega$: there exists a power series centred at $z_0$ converging to $f$ in a neighbourhood of $z_0$; in other words, $f$ is “locally given by a power series”
$f: \Omega \to \C$ analytic on $\Omega$: $f$ is analytic at every $z_0 \in \Omega$
$f_n \to f$ compactly on $\Omega $: $f_n \to f$ uniformly on all compact subsets of $\Omega$
$f_n \to f$ locally uniformly on $\Omega $: every $z_0 \in \Omega$ has a neighbourhood on which $f_n \to f$ uniformly
For open sets $\Omega \subseteq \C$, compact and locally uniform convergence are equivalent. A consequence of Morera’s theorem is the following:
Locally uniform limit of analytic functions
If $\set{f_n}$ is a sequence of analytic functions on an open set $\Omega \subseteq \C$ converging locally uniformly (or equivalently, compactly) to $f$, then $f$ is analytic.
Complex integration
Contour integration
Integral of a continuous function $f: \Omega \to \C$ along a rectifiable curve $\gamma$: $$ \int_\gamma f(z) \, dz = \int_a^b f(z(t)) z'(t) \, dt, $$ where $z: [a, b] \to \gamma$ is a parametrization of $\gamma$
Integrals along contours composed of successive rectifiable curves are then defined in the obvious way. We allow contours to include “curves” consisting of a single point.
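The definition can be sketched numerically on the classic example $f(z) = 1/z$ over the unit circle $z(t) = e^{it}$, $t \in [0, 2\pi]$, whose exact value is $2\pi i$:

```python
import cmath

# Riemann-sum approximation of the contour integral of 1/z over the unit circle.
N = 10000
total = 0j
for k in range(N):
    t = 2 * cmath.pi * k / N
    z = cmath.exp(1j * t)
    dz = 1j * z * (2 * cmath.pi / N)   # z'(t) dt
    total += dz / z                    # f(z(t)) z'(t) dt
assert abs(total - 2j * cmath.pi) < 1e-9
```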
Let $\Omega \subseteq \C$ be open and suppose that $F: \Omega \to \C$ is continuously complex-differentiable. Then for any contour $\Gamma \subseteq \Omega$ from $z_1$ to $z_2$, $$ \int_\Gamma F'(z) \, dz = F(z_2) - F(z_1). $$ Corollary:
If, moreover, $\Omega$ is connected and $F' = 0$, then $F$ is constant on $\Omega$.
Domain $\Omega \subseteq \C$: an open connected subset of $\C$; equivalently (by openness), an open path-connected subset of $\C$
Thus, the corollary above may be stated as: “if $F' = 0$ on a domain $\Omega$, then it is constant on $\Omega$”.
Suppose that $f$ is continuous on a domain $\Omega$. Then $f$ has an antiderivative on $\Omega$ if and only if $\int_\Gamma f(z) \, dz$ is path-independent (for $\Gamma \subseteq \Omega$), which in turn is equivalent to $\int_\Gamma f(z) \, dz = 0$ for all closed contours $\Gamma$.
Cauchy’s integral theorem
Simply connected set $\Omega \subseteq \C$: $\Omega$ is path-connected and any two curves in $\Omega$ with common endpoints are fixed-endpoint homotopic (i.e., can be continuously deformed into each other while keeping both endpoints fixed)
Deformation invariance theorem
Suppose that $f$ is holomorphic on an open set $ \Omega \subseteq \C$. If $\gamma_0, \gamma_1 \subseteq \Omega$ are rectifiable curves that are fixed-endpoint homotopic (or are closed rectifiable curves that are homotopic as such), then $$ \int_{\gamma_0} f(z) \, dz = \int_{\gamma_1} f(z) \, dz. $$
As every closed curve in a simply connected domain is null-homotopic (that is, homotopic to a point), we obtain:
Cauchy’s integral theorem
Suppose that $f$ is holomorphic in a simply connected domain $\Omega$. Then for any closed contour $\Gamma \subseteq \Omega$, $$ \int_\Gamma f(z) \, dz = 0. $$
Cauchy’s integral formula
Suppose that $f$ is holomorphic in an open set $\Omega \subseteq \C$ that contains the closure of a disc $D$. Then $$ f^{(n)}(z_0) = \frac{n!}{2\pi i} \int_{\partial D} \frac{f(z)}{(z - z_0)^{n+1}} \, dz $$ for all $z_0 \in D$. (By the above, the integral may equivalently be taken over any closed curve in $\Omega \setminus \set{z_0}$ that is homotopic to $\partial D$.)
Corollary:
Cauchy’s inequalities/estimates: if $z_0$ is the centre of the above disc $D$ and $r$ is its radius, then $$ \abs{f^{(n)}(z_0)} \leq \frac{n!}{r^n} \norm{f}_{\infty, \,\partial D}. $$
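The formula can be checked numerically for a hypothetical choice $f(z) = e^z$ on the unit circle, recovering both $f(z_0)$ ($n = 0$) and $f'(z_0)$ ($n = 1$) at an interior point by a Riemann sum:

```python
import cmath
import math

# n-th derivative of f(z) = e^z at z0 via Cauchy's integral formula over the
# unit circle, approximated by a Riemann sum in the parameter t.
def cauchy_derivative(z0, n, N=4096):
    total = 0j
    for k in range(N):
        t = 2 * math.pi * k / N
        z = cmath.exp(1j * t)
        dz = 1j * z * (2 * math.pi / N)
        total += cmath.exp(z) / (z - z0) ** (n + 1) * dz
    return math.factorial(n) / (2j * math.pi) * total

z0 = 0.2 + 0.1j                       # a point inside the unit disc
assert abs(cauchy_derivative(z0, 0) - cmath.exp(z0)) < 1e-8
assert abs(cauchy_derivative(z0, 1) - cmath.exp(z0)) < 1e-8   # (e^z)' = e^z
```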
Analyticity of holomorphic functions
Suppose that $f$ is holomorphic in an open set $\Omega \subseteq \C$ and that $\closure{B_r(z_0)} \subseteq \Omega$. Then $$ f(z) = \sum_{n = 0}^{\infty} \frac{f^{(n)}(z_0)}{n!} (z - z_0)^n $$ for all $z \in B_r(z_0)$.
Thus, holomorphicity and analyticity are equivalent (both “at a point” and “on an open set”).
Liouville’s theorem
If $f$ is bounded and entire, then $f$ is constant.
Fundamental theorem of algebra
Every complex polynomial of degree $n \geq 0$ has exactly $n$ roots in $\C$ counted with multiplicity.
Identity theorem
If $f$ and $g$ are holomorphic in a domain $\Omega$ and agree on a nonempty open subset of $\Omega$ (or, more generally, on a subset having a limit point in $\Omega$), then $f = g$ throughout $\Omega$.
Morera’s theorem
If $f$ is continuous on a domain $\Omega$ (not necessarily simply connected) and $\int_{\partial T} f(z) \, dz = 0$ for every closed triangle $T \subseteq \Omega$, then $f$ is holomorphic on $\Omega$.
The complex logarithm
Existence of the complex logarithm on a simply connected domain
Suppose that $\Omega$ is a simply connected domain containing $1$ and excluding $0$. For all $z \in \Omega$, define $$ \log_\Omega(z) = \int_\gamma \frac{dw}{w}, $$ where $\gamma \subseteq \Omega$ is any rectifiable curve from $1$ to $z$, which is well-defined by Cauchy’s integral theorem. Then $\log_\Omega$ is holomorphic in $\Omega$ with $(\log_\Omega)'(z) = 1/z$ and $e^{\log_\Omega (z)} = z$ for all $z \in \Omega$, and $\log_\Omega$ agrees with the real logarithm near $1$.
The principal branch of the complex logarithm has $\Omega = \C \setminus (-\infty, 0]$.
More generally, if $f$ is a nowhere vanishing holomorphic function on a nonempty simply connected domain $\Omega$, there exists a holomorphic function $g$ such that $g' = f'/f$ and $e^g = f$ on $\Omega$: $$ g(z) = \int_\gamma \frac{f'(w)}{f(w)} \, dw + C, $$ where $\gamma \subseteq \Omega$ is a rectifiable curve from a fixed point $z_0 \in \Omega$ to $z$ and $C \in \C$ is any constant satisfying $e^C = f(z_0)$.
Residue theory
Zeroes, singularities, and residues
Zero of $f$: a point $z_0 \in \C$ at which $f$ vanishes
By the identity theorem, the zeroes of a nontrivial holomorphic function must be isolated.
Zeroes of holomorphic functions
If $f$ is holomorphic in a domain $\Omega $ and has a zero at $z_0 \in \Omega$ (but does not vanish identically), then there exists a unique $n \in \mathbb{N}$ and a neighbourhood $U$ of $z_0$ in which $f(z) = (z - z_0)^n g(z)$ for some non-vanishing holomorphic function $g$ on $U$.
The number $n$ above is called the multiplicity/order of the zero; a simple zero is a zero of multiplicity 1.
Deleted/punctured neighbourhood of $z_0 \in \C$: $B_r(z_0) \setminus \set{z_0}$ for some $r > 0$
Isolated/point singularity of $f$: a point $z_0 \in \C$ at which $f$ is undefined but about which $f$ is defined in a deleted neighbourhood; these are categorized into three types:
- Removable singularity: $f$ may be defined at $z_0$ such that it is holomorphic in a neighbourhood of $z_0$
- Pole: $1/f$, defined to be zero at $z_0$, is holomorphic in a neighbourhood of $z_0$
- Essential singularity: neither a removable singularity nor a pole
Riemann’s theorem (on removable singularities)
Suppose that $f$ is holomorphic in an open set $\Omega \subseteq \C$ except at $z_0$, where it has an isolated singularity. If $f$ is bounded in a deleted neighbourhood of $z_0$, then $z_0$ is a removable singularity of $f$.
Corollary:
If $f$ and $z_0$ are as above, then $z_0$ is a pole of $f$ if and only if $\lim_{z \to z_0} \abs{f(z)} = \infty$.
Casorati-Weierstrass theorem (on essential singularities)
Suppose that $f$ is holomorphic in an open set $\Omega \subseteq \C$ except at $z_0$, where it has an essential singularity. Then the image under $f$ of any deleted neighbourhood of $z_0$ is dense in $\C$.
Poles and residues
If $f$ has a pole at $z_0$, then there exists a unique $n \in \mathbb{N}$ and a neighbourhood $U$ of $z_0$ in which $f(z) = (z - z_0)^{-n} h(z)$ for some non-vanishing holomorphic function $h$ on $U$.
The number $n$ above is called the order/multiplicity of the pole; a simple pole is a pole of order 1. If $f$ has a zero of multiplicity $n$ at $z_0$, then $1/f$ has a pole of order $n$ at $z_0$, and vice versa (defining $(1/f)(z_0) = 0$).
If $f$ has a pole of order $n$ at $z_0$, then $$ f(z) = \left[ \sum_{k=-n}^{-1} a_k (z - z_0)^k \right] + G(z), $$ where $G$ is a holomorphic function in a neighbourhood of $z_0$.
The sum above is called the principal part of $f$ at $z_0$, and $a_{-1}$ is called the residue of $f$ at $z_0$ and is denoted $\res_{z_0} f $ (or $\res_{z = z_0} f(z)$).
Computation of residues
If $f$ has a pole of order $n$ at $z_0$, then $$ \res_{z_0} f = \lim_{z \to z_0} \frac{1}{(n-1)!} \cdot \frac{d^{n-1}}{dz^{n-1}} [(z-z_0)^n f(z)]. $$
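The limit can be approximated numerically with finite differences; a rough sketch (Python standard library; helper names illustrative) applied to $e^z/z^2$, which has a double pole at $0$ with residue $1$, and to $\frac{1}{z(z-2)}$, which has a simple pole at $0$ with residue $-1/2$:

```python
import cmath
import math

def residue_via_limit(f, z0, n, h=1e-4):
    # 1/(n-1)! * d^{n-1}/dz^{n-1} [(z - z0)^n f(z)] near z0, with the
    # derivative taken by central finite differences (never evaluating at z0
    # itself, where f is singular).
    g = lambda z: (z - z0) ** n * f(z)
    def deriv(func, z, order):
        if order == 0:
            return func(z)
        return (deriv(func, z + h, order - 1) - deriv(func, z - h, order - 1)) / (2 * h)
    if n == 1:
        return g(z0 + h)  # lim_{z -> z0} (z - z0) f(z), evaluated near z0
    return deriv(g, z0, n - 1) / math.factorial(n - 1)

res_double = residue_via_limit(lambda z: cmath.exp(z) / z ** 2, 0, 2)  # expect 1
res_simple = residue_via_limit(lambda z: 1 / (z * (z - 2)), 0, 1)      # expect -1/2
```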
Residue theorem
Suppose that $f$ is holomorphic in an open set $\Omega \subseteq \C$ except at a finite set of poles $P$. Then for any positively-oriented simple closed contour $\Gamma \subseteq \Omega$ enclosing $P$, $$ \int_\Gamma f(z) \, dz = 2 \pi i \sum_{z_0 \in P} \res_{z_0} f. $$
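A numerical illustration (sketch only; the helper name is illustrative): for $f(z) = 1/(z^2 + 1)$, the residues at the simple poles $\pm i$ are $\pm 1/(2i)$, so a contour enclosing only $i$ yields $2\pi i \cdot \frac{1}{2i} = \pi$, while one enclosing both poles yields $0$.

```python
import cmath
import math

def circle_integral(f, center, r, N=4000):
    # integral of f(z) dz over the positively-oriented circle |z - center| = r
    total = 0j
    for k in range(N):
        theta = 2 * math.pi * k / N
        z = center + r * cmath.exp(1j * theta)
        total += f(z) * 1j * r * cmath.exp(1j * theta) * (2 * math.pi / N)
    return total

f = lambda z: 1 / (z ** 2 + 1)
one_pole = circle_integral(f, 1j, 1.0)   # encloses only z = i: predicted pi
both_poles = circle_integral(f, 0, 2.0)  # residues at +i, -i cancel: predicted 0
```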
Laurent series
If $f$ is holomorphic in the annulus $A_{r, R}(z_0) = \set{z \in \C : r < \abs{z - z_0} < R}$ (where $0 \leq r < R < \infty$) centred at $z_0$, then it admits a unique Laurent series expansion $$ f(z) = \sum_{n = -\infty}^{\infty} a_n (z - z_0)^n $$ therein that converges absolutely and compactly. The coefficients $a_n$ are given by $$ a_n = \frac{1}{2\pi i} \int_\Gamma \frac{f(z)}{(z-z_0)^{n+1}} \, dz, $$ where $\Gamma$ is a simple closed contour in the annulus enclosing $z_0$ (cf. Cauchy’s integral formula).
Classification of isolated singularities using Laurent series
Suppose that $f$ has an isolated singularity at $z_0$ and that it admits a Laurent series expansion $\sum_{n \in \Z} a_n (z-z_0)^n$ in $A_{0, R}(z_0)$. If $m = -\inf\,\set{n \in \Z : a_n \neq 0}$, then $z_0$ is a
- removable singularity if and only if $m \leq 0$.
- pole of order $m$ if and only if $0 < m < \infty$.
- essential singularity if and only if $m = \infty$.
Evaluation of integrals
Trigonometric integrals
Given an integral of the form
$$ \int_I R(\cos \theta, \sin \theta) \, d\theta, $$ where $R(\cos \theta, \sin \theta)$ is a rational function of $\cos \theta$ and $\sin \theta$ and $I$ is an interval of length $2\pi$, the substitution $z = e^{i \theta}$ yields $$ \int_{\abs{z} = 1} R \left(\frac{z + 1/z}{2}, \frac{z - 1/z}{2i}\right) \, \frac{dz}{iz} \,. $$
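For example, taking $R = \frac{1}{5 + 4\cos\theta}$ over $[0, 2\pi]$ gives $\oint_{\abs{z}=1} \frac{dz}{i(2z+1)(z+2)}$; the only enclosed singularity is the simple pole $z = -1/2$, with residue $\frac{1}{3i}$, so the integral equals $2\pi i \cdot \frac{1}{3i} = \frac{2\pi}{3}$. A numerical cross-check of the original real integral (Python standard library; sketch only):

```python
import math

# Riemann sum for the original integral over [0, 2*pi); the residue
# computation above predicts 2*pi/3.
N = 100_000
riemann = sum(1 / (5 + 4 * math.cos(2 * math.pi * k / N)) for k in range(N)) * (2 * math.pi / N)
predicted = 2 * math.pi / 3
```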
Principal value integrals
Cauchy principal value of $\int_{-\infty}^{\infty} f(x) \, dx$: $$ \mathrm{p.v.} \int_{-\infty}^{\infty} f(x) \, dx = \lim_{r \to \infty} \int_{-r}^{r} f(x) \, dx $$ (If the improper integral – that is, $\int_{-\infty}^0 f(x) \, dx + \int_0^\infty f(x) \, dx$ – exists, it is equal to its principal value.)
If $P, Q$ are polynomials with $\mathrm{deg}(Q) \geq \mathrm{deg}(P) + 2$, then $$ \lim_{r \to \infty} \int_{C_r^+} \frac{P(z)}{Q(z)} \, dz = 0, $$ where $C_r^+ = \set{re^{i\theta} : \theta \in [0, \pi]} $. (The same is true for $C_r^- = \set{re^{-i\theta} : \theta \in [0, \pi]}$.)
Jordan’s lemma
If $k > 0$ and $P, Q$ are polynomials with $\mathrm{deg}(Q) \geq \mathrm{deg}(P) + 1$, then $$ \lim_{r \to \infty} \int_{C_r^+} e^{ikz} \frac{P(z)}{Q(z)} \, dz = 0. $$ (The same is true for $C_r^-$ when $k < 0$.)
Cauchy principal value of $\int_{a}^{b} f(x) \, dx$, where $f$ is discontinuous at $c \in (a, b)$: $$ \mathrm{p.v.} \int_{a}^{b} f(x) \, dx = \lim_{\epsilon \to 0^+} \left[ \int_a^{c-\epsilon} f(x) \, dx + \int_{c+\epsilon}^b f(x) \, dx \right] $$ (If the improper integral – that is, $\int_a^c f(x) \, dx + \int_c^b f(x) \, dx$ – exists, it is equal to its principal value.)
Small arc/indentation lemma
If $z_0$ is a simple pole of $f$, then $$ \lim_{r \to 0^+} \int_{A_r} f(z) \, dz = (\theta_2 - \theta_1)i \cdot \res_{z_0} f, $$ where $A_r = \set{z_0 + re^{i\theta} : \theta \in [\theta_1, \theta_2]}$.
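A numerical illustration with $f(z) = e^z / z$, which has a simple pole at $0$ with residue $1$: over the upper semicircular arc ($\theta_1 = 0$, $\theta_2 = \pi$) the lemma predicts a limit of $i\pi$ as $r \to 0^+$ (sketch; the helper name is illustrative):

```python
import cmath
import math

def arc_integral(f, z0, r, t1, t2, N=2000):
    # integral of f(z) dz over the arc z0 + r e^{it}, t in [t1, t2] (midpoint rule)
    total = 0j
    for k in range(N):
        t = t1 + (t2 - t1) * (k + 0.5) / N
        z = z0 + r * cmath.exp(1j * t)
        total += f(z) * 1j * r * cmath.exp(1j * t) * ((t2 - t1) / N)
    return total

# For small r this should be close to i*pi (error O(r)).
I = arc_integral(lambda z: cmath.exp(z) / z, 0, 1e-4, 0, math.pi)
```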
Integrals involving (multi-valued) complex powers
Integrals over $[0, \infty)$ involving non-integral powers are occasionally encountered.
- For such integrals, it is efficacious to integrate back and forth along the nonnegative real branch cut of the logarithm, with shrinking circular indentations around nonnegative singularities and an expanding circular contour closing the loop
- The residue theorem remains applicable (albeit not directly)
- The values of the power (say, $z^w$) “below” the branch cut are $e^{2\pi i w}$ times those “above” it
Meromorphic functions
Extended complex plane ($\C_\infty$): $\C \cup \set{\infty}$, where $z / 0 = \infty$ for $z \neq 0$ and $z / \infty = 0$ for $z \neq \infty$ (addition and multiplication by $\infty$ also yield $\infty$, except for $\infty - \infty$ and $0 \cdot \infty$, which are undefined as usual)
($\C_\infty$ is the one-point compactification of $\C$; viz., its open sets are the open subsets of $\C$ together with sets of the form $\C_\infty \setminus K$ for $K$ compact in $\C$.)
Isolated singularity of $f$ at $\infty$: isolated singularity of $z \mapsto f(1/z)$ at $0$
Isolated singularities of entire functions at $\infty$
If $f$ is an entire function on $\C$, then $f$ has an isolated singularity at $\infty$ and
- $\infty$ is a removable singularity of $f$ if and only if $f$ is constant.
- $\infty$ is a pole of order $n$ of $f$ if and only if $f$ is a polynomial of degree $n$.
- $\infty$ is an essential singularity of $f$ if and only if $f$ is non-polynomial.
$f: \Omega \setminus S \to \C$ meromorphic on $\Omega \subseteq \C$: $f$ is holomorphic on $\Omega$ except for a closed discrete set $S$ of removable singularities and poles (or equivalently, $f$ can be extended to a holomorphic function $\tilde{f} : \Omega \to \C_\infty$ with $\tilde{f} \not\equiv \infty$)
$f: \C \setminus S \to \C$ meromorphic on $\C_\infty$: $f$ is meromorphic on $\C$ and is holomorphic or has a pole at $\infty$ (or equivalently, $f$ can be extended to a holomorphic function $\tilde{f} : \C_\infty \to \C_\infty$)
Meromorphic functions on $\C_\infty $
The meromorphic functions on $\C_\infty$ are the rational functions.
The argument principle and Rouché’s theorem
Argument principle
Suppose that $f$ is meromorphic in an open set $\Omega \subseteq \C$. Then for any positively-oriented simple closed contour $\Gamma \subseteq \Omega $ on which $f$ is nonvanishing and nonsingular, $$ \int_\Gamma \frac{f'(z)}{f(z)} \, dz = 2\pi i (Z - P), $$ where $Z$ and $P$ are, respectively, the numbers of zeroes and poles of $f$ inside $\Gamma$, counted with multiplicities.
Rouché’s theorem (symmetric, meromorphic version)
Suppose that $f$ and $g$ satisfy the hypotheses of the argument principle (for the same $\Omega$ and $\Gamma$) and moreover that $\abs{f + g} < \abs{f} + \abs{g}$ on $\Gamma$. Then $Z_f - P_f = Z_g - P_g$, where $Z_f$ and $P_f$ (resp. $Z_g$ and $P_g$) are, respectively, the numbers of zeroes and poles of $f$ (resp. $g$) inside $\Gamma$, counted with multiplicities.
Corollary:
Rouché’s theorem (asymmetric, holomorphic version): if $f$ and $h$ satisfy the hypotheses of the argument principle with “meromorphic” replaced by “holomorphic” and, in addition, $\abs{h} < \abs{f}$ on $\Gamma$, then $Z_{f} = Z_{f+h}$. 11 12
The following result can be derived from Rouché’s theorem.
Open mapping theorem
If $f$ is nonconstant and holomorphic on a domain $\Omega$, then $f$ is an open map.
Corollary:
Maximum modulus principle: under the same hypotheses, $\abs{f}$ cannot attain a maximum in $\Omega$. 13
Furthermore, if $\closure{\Omega}$ is compact and $f$ is also continuous on $\closure{\Omega}$, then $\abs{f}$ attains its maximum (over $\closure{\Omega}$) on $\partial\Omega = \closure{\Omega} \setminus \Omega$. Hence $\norm{f}_{\infty, \,\Omega} \leq \norm{f}_{\infty, \,\partial\Omega}$.
Harmonic functions
Harmonic function $u : \Omega \subseteq \Rn \to \R$: $u \in C^2$ and $\Delta u = \grad \cdot \grad u = 0$
We now identify subsets of $\mathbb{R}^2$ with those of $\C$.
If $f$ is holomorphic on $\Omega \subseteq \C$, then $\Re(f)$ and $\Im(f)$ are harmonic on $\Omega$.
If $u$ is harmonic on a simply connected domain $\Omega \subseteq \mathbb{R}^2$, then there exists a holomorphic function $f$ on $\Omega$ such that $\Re(f) = u$. Moreover, $\Im(f)$ is unique up to an additive (real) constant and is called a harmonic conjugate of $u$.
To find a harmonic conjugate of $u$, one can take $f$ to be an antiderivative of $(\partial u / \partial x) - i (\partial u / \partial y)$, which is holomorphic by the Cauchy-Riemann equations.
Mean value property
If $f$ is holomorphic in $B_R(z_0)$, then $$ f(z_0) = \frac{1}{2\pi} \int_0^{2\pi} f(z_0 + re^{i\theta}) \, d\theta $$ for any $r < R$.
Corollary:
Harmonic functions also possess the mean value property.
If $u$ is harmonic in a simply connected domain and $v$ is a harmonic conjugate of $u$, then applying the maximum modulus principle to $e^{u + iv}$ (whose modulus is $e^u$) shows that $u$ satisfies the analogous maximum principle; applying it to $-u$ yields the corresponding minimum principle.
Conformal maps
Biholomorphic functions
Let $U, V \subseteq \C_\infty$ be open sets.
Biholomorphic function (or biholomorphism) $\phi: U \to V$: $\phi$ is bijective and holomorphic
Biholomorphic functions are also called conformal maps. 14
If $f: U \to V$ is injective and holomorphic, then $f'$ is nonvanishing on $U$. Thus, $f^{-1}$ defined on $f(U)$ is holomorphic.
In particular, the inverse of a biholomorphism is a biholomorphism.
$U$ and $V$ are said to be biholomorphically equivalent (or conformally equivalent 14) if there exists a biholomorphism between them. (Note that this is indeed an equivalence relation by the result above.)
Local injectivity and preservation of angles 14
If $f: U \to V$ is holomorphic at $z_0$ and $f'(z_0) \neq 0$, then $f$ is locally injective at $z_0$. Moreover, $f$ preserves the angles of directed smooth curves through $z_0$.
Automorphisms and Möbius transformations
Automorphism of $U$: biholomorphic function from $U$ to itself 15; the set of all automorphisms on $U$ is denoted $\Aut(U)$ and is a group under composition
Automorphisms of the unit disc
Let $\D = \set{z \in \C : \abs{z} < 1}$. For $\alpha \in \D$, define $$ \psi_\alpha(z) = \frac{\alpha - z}{1 - \conj{\alpha} z} $$ and $ \phi_{\theta, \alpha} = e^{i\theta} \psi_\alpha$. Then $$ \Aut(\D) = \set{\phi_{\theta, \alpha} : \theta \in [0, 2\pi),\, \alpha \in \D}. $$
The maps $\psi_\alpha$ are called Blaschke factors. In other words, the automorphisms of the unit disc are the Blaschke factors (modulo complex signs). Moreover, we see that those that fix the origin are rotations.
This can be proved using the following lemma.
Schwarz’s lemma
Suppose that $f: \D \to \D$ is holomorphic and that $f(0) = 0$. Then:
- $\abs{f(z)} \leq \abs{z}$ for all $z \in \D$
- $\abs{f'(0)} \leq 1$
- If $\abs{f(z_0)} = \abs{z_0}$ for some $z_0 \in \D \setminus \set{0}$ or $\abs{f'(0)} = 1$, then $f(z) = e^{i \theta} z$ for some $\theta \in [0, 2\pi)$.
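The Blaschke factors $\psi_\alpha$ above can be sanity-checked numerically: each swaps $0$ and $\alpha$, is an involution, and maps the unit circle to itself (Python standard library; the sample values are arbitrary):

```python
import cmath

alpha = 0.3 + 0.4j
psi = lambda z: (alpha - z) / (1 - alpha.conjugate() * z)  # Blaschke factor

boundary = abs(psi(cmath.exp(0.7j)))  # |psi(z)| = 1 for |z| = 1
swapped = psi(alpha)                  # psi swaps alpha and 0
round_trip = psi(psi(0.1 + 0.2j))     # psi is an involution
```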
Möbius (or linear fractional) transformation: function of the form $f(z) = \frac{az + b}{cz + d}$, where $a, b, c, d \in \C$ and $ad-bc \neq 0$
We define $f(\infty) = \infty$ if $c = 0$, and $f(-d/c) = \infty$ and $f(\infty) = a/c$ if $c \neq 0$, so that $f$ is an automorphism of $\C_\infty$.
Möbius transformations form a group under composition, wherein $$ f^{-1}(z) = \frac{dz - b}{-cz + a}\,. $$
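Concretely, $f$ corresponds to the matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$: composition of transformations is matrix multiplication, and the inverse above is the adjugate. A minimal sketch (function names are illustrative):

```python
def apply(m, z):
    (a, b), (c, d) = m
    return (a * z + b) / (c * z + d)

def compose(m1, m2):
    # matrix product, i.e., composition of the corresponding transformations
    (a, b), (c, d) = m1
    (e, f), (g, h) = m2
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

def inverse(m):
    (a, b), (c, d) = m
    return ((d, -b), (-c, a))  # the adjugate realizes f^{-1}

T = ((2, 1j), (1, 3))  # an arbitrary transformation with ad - bc = 6 - i != 0
z = 0.5 + 0.25j
round_trip = apply(inverse(T), apply(T, z))  # should recover z
```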
Fixed points of Möbius transformations
Every non-identity Möbius transformation has exactly two fixed points (counted with multiplicity).
Corollary:
If two Möbius transformations agree at three distinct points, then they are identical. 16
Given distinct points $z_2, z_3, z_4 \in \C_\infty$, the Möbius transformation with $z_2 \mapsto 1$, $z_3 \mapsto 0$, and $z_4 \mapsto \infty$ is $$ z \mapsto \frac{z_2 - z_4}{z_2 - z_3} \cdot \frac{z - z_3}{z - z_4} \,. $$
Cross-ratio of distinct points $z_1, z_2, z_3, z_4 \in \C_\infty$: $$ (z_1, z_2; z_3, z_4) = \frac{z_1 - z_3}{z_1 - z_4} : \frac{z_2 - z_3}{z_2 - z_4} = \frac{z_2 - z_4}{z_2 - z_3} \cdot \frac{z_1 - z_3}{z_1 - z_4} $$
As a consequence of this definition, the Möbius transformation $w = T(z)$ that satisfies $T(z_2) = w_2, T(z_3) = w_3, T(z_4) = w_4$ may be found by solving $(w, w_2; w_3, w_4) = (z, z_2; z_3, z_4)$ for $w$.
Preservation of cross-ratios
If $T$ is a Möbius transformation, then $$ (z_1, z_2; z_3, z_4) = (T(z_1), T(z_2); T(z_3), T(z_4)) $$ for any distinct points $z_1, z_2, z_3, z_4 \in \C_\infty$.
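A numerical spot-check of this invariance for an arbitrarily chosen transformation (the sample points avoid its pole at $z = 3$; names are illustrative):

```python
def cross_ratio(z1, z2, z3, z4):
    # (z1, z2; z3, z4) as defined above
    return (z2 - z4) / (z2 - z3) * (z1 - z3) / (z1 - z4)

T = lambda z: (z + 1) / (z - 3)  # ad - bc = -3 - 1 = -4 != 0
pts = [5, 1j, 2, -1]
before = cross_ratio(*pts)
after = cross_ratio(*[T(z) for z in pts])  # should equal `before`
```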
Generalized circle in $\C_\infty$: a circle ($\set{z \in \C_\infty : \abs{z - z_0} = r};\, z_0 \in \C, r > 0$) or a line ($\set{z_0 + tw : t \in \R} \cup \set{\infty};\, z_0, w \in \C$)
For any three distinct points in $\C_\infty$, there is a unique generalized circle passing through them.
Preservation of generalized circles
Möbius transformations map generalized circles to generalized circles.
Symmetric points $z, z' \in \C_\infty$ with respect to the generalized circle $C$: all generalized circles passing through $z$ and $z'$ intersect $C$ orthogonally
Preservation of symmetric points
If $T$ is a Möbius transformation and $z, z'$ are symmetric with respect to $C$, then $T(z), T(z')$ are symmetric with respect to $T(C)$.
If $C$ is the circle $z_0 + re^{i[0, 2\pi)}$, then $z' = z_0 + r^2 / \conj{z - z_0}$.
If $C$ is the line $z_0 + \R e^{i \theta}$, then $z' = z_0 + e^{2i\theta} \conj{z - z_0}$.
The automorphisms of the unit disc can also be written as Möbius transformations 17:
Automorphisms of the unit disc
Let $\D = \set{z \in \C : \abs{z} < 1} $. Then $$ \Aut(\D) = \Set{z \mapsto \frac{az+b}{\conj{b}z+\conj{a}} : a, b \in \C; \, \abs{a}^2 - \abs{b}^2 = 1}. $$
Indeed, the automorphisms of several important domains are groups of Möbius transformations:
Automorphisms of the upper half-plane
Let $\H = \set{z \in \C : \Im(z) > 0}$. Then $$ \Aut(\mathbb{H}) = \Set{z \mapsto \frac{az+b}{cz+d} : a, b, c, d \in \R; \, ad-bc = 1}. $$
Automorphisms of the complex plane $$ \Aut(\C) = \Set{az + b : a, b \in \C; \, a \neq 0}. $$
Automorphisms of the extended complex plane $$ \Aut(\C_\infty) = \Set{z \mapsto \frac{az+b}{cz+d} : a, b, c, d \in \C; \, ad-bc \neq 0}. $$ In other words, the automorphisms of the extended complex plane are the Möbius transformations.
The Riemann mapping theorem
Riemann mapping theorem
Let $U$ be a nonempty simply connected domain that is not all of $\C$. Then for any $a \in U$, there exists a unique conformal map $\phi: U \to \D$ satisfying $\phi(a) = 0$ and $\phi'(a) \in (0, \infty)$. (Hence $U$ is conformally equivalent to $\D$.)
Conformal maps
In the table below, $\mathrm{Log}_\theta$ denotes the logarithm with branch cut $\set{re^{i\theta} : r \in [0, \infty)}$.
Domain | Range | Map | Inverse |
---|---|---|---|
$\D$ | $\H$ | $i \frac{1-z}{1+z}$ | $\frac{i-w}{i+w}$ |
$\H$ | $\set{w \in \C : \arg(w) \in (0, \theta)};\, \theta \in (0, 2\pi) $ (sector) | $z^{\theta / \pi}$ | $w^{\pi/\theta}$ |
$\D \cap \mathbb{H}$ | $\set{w \in \C : \Re(w) > 0, \Im(w) > 0}$ (first quadrant) | $\frac{1+z}{1-z}$ | $\frac{w-1}{w+1}$ |
$\H$ | $\set{w \in \C : \Im(w) \in (0, \pi)}$ (horizontal strip) | $\mathrm{Log}_{-\pi/2}(z)$ | $e^w$ |
$\D \cap \mathbb{H}$ | $\set{w \in \C : \Re(w) < 0, \Im(w) \in (0, \pi)}$ (horizontal half-strip) | $\mathrm{Log}_{-\pi/2}(z)$ | $e^w$ |
$\D \cap \mathbb{H}$ | $\set{w \in \C : \Re(w) \in (-\pi/2, \pi/2), \Im(w) > 0}$ (vertical half-strip) | $\mathrm{Log}_{-\pi/2}(z/i) / i$ | $ie^{iw}$ |
$\D \cap \mathbb{H}$ | $\H$ | $-\frac{z + 1/z}{2}$ | omitted |
$\mathbb{H} \setminus \D$ | $\mathbb{H}$ | $\frac{z + 1/z}{2} $ | omitted |
For example, $\sin = (z \mapsto -\frac{z + 1/z}{2}) \circ (w \mapsto ie^{iw})$, so $\sin$ maps the vertical half-strip $\set{w \in \C : \Re(w) \in (-\pi/2, \pi/2), \Im(w) > 0}$ to $\H$.
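This factorization can be spot-checked numerically at a point of the half-strip (Python standard library; the sample point is arbitrary):

```python
import cmath

# sin(w) = -(z + 1/z)/2 with z = i e^{iw}; the image of a point of the
# vertical half-strip should lie in the upper half-plane.
w = 0.3 + 1.2j  # Re(w) in (-pi/2, pi/2), Im(w) > 0
z = 1j * cmath.exp(1j * w)
lhs = cmath.sin(w)
rhs = -(z + 1 / z) / 2
```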
Harmonic analysis
Fourier series
In this subsection, 1-periodic functions on $\R$ are identified with functions on $\mathbb{T}$ ($\cong \R/\mathbb{Z}$), the unit circle in the complex plane.
Fourier coefficients of $f \in L^1(\mathbb{T})$: $\hat{f}(n) = \int_0^1 f(x) e^{-2 \pi i n x} \, dx$, where $n \in \mathbb{Z}$
Fourier series of $f \in L^1(\mathbb{T})$: $f(x) \sim \sum_{n \in \mathbb{Z}} \hat{f}(n) e^{2 \pi i n x}$
The $N$th partial sum of the Fourier series of $f$, i.e., $\sum_{\abs{n} \leq N} \hat{f}(n) e^{2 \pi i n x}$, is denoted $S_N f(x)$.
Uniform convergence of Fourier series
If $f \in C(\mathbb{T})$ and $\sum_{n \in \mathbb{Z}} \abs{\hat{f}(n)} < \infty$, then $S_N f \to f$ uniformly.
If $f \in C(\mathbb{T})$ and $f'$ exists and is piecewise continuous, then $S_N f \to f$ uniformly.
Pointwise convergence of Fourier series
If $f \in L^1(\mathbb{T})$ and $\abs{f(x+t) - f(x)} \lesssim \abs{t}$ for all $t$ in a neighbourhood of $0$, then $S_N f(x) \to f(x)$. (In particular, if $f$ is differentiable, then the hypothesized bound holds.)
If $f \in BV(\mathbb{T})$, then $S_N f(x) \to [f(x^-) + f(x^+)] / 2$.
$L^2$ convergence of Fourier series
If $f \in L^2(\mathbb{T})$, then $S_N f \to f$ in $L^2$.
Corollaries:
If $f, g \in L^2(\mathbb{T})$, then $\inner{f}{g} = \sum_{n \in \mathbb{Z}} \hat{f}(n) \conj{\hat{g}(n)}$.
Parseval’s identity: if $f \in L^2(\mathbb{T})$, then $\norm{f}_2^2 = \sum_{n \in \mathbb{Z}} \abs{\hat{f}(n)}^2$.
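For instance, $f(x) = x$ on $[0, 1)$ has $\hat{f}(0) = 1/2$ and, by integration by parts, $\hat{f}(n) = \frac{i}{2\pi n}$ for $n \neq 0$; Parseval then recovers $\sum_{n \geq 1} 1/n^2 = \pi^2/6$ from $\norm{f}_2^2 = 1/3$. A numerical check with the sum truncated (sketch only):

```python
import math

# Parseval for f(x) = x on [0, 1): ||f||_2^2 = 1/3, while |f^(0)|^2 = 1/4
# and |f^(n)|^2 = 1/(4 pi^2 n^2) for n != 0.
lhs = 1 / 3
rhs = 0.25 + 2 * sum(1 / (4 * math.pi ** 2 * n ** 2) for n in range(1, 200_000))
```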
The Fourier transform
Fourier transform of $f \in L^1(\Rn)$: $\hat{f}(\xi) = \int_{\Rn} f(x) e^{-2\pi i \xi \cdot x} \, dx$
Inverse Fourier transform of $f \in L^1(\Rn)$: $\check{f}(x) = \int_{\Rn} f(\xi) e^{2\pi i x \cdot \xi} \, d\xi$
Properties of the Fourier transform
- If $f \in L^1(\Rn)$, then $\norm{\hat{f}}_\infty \leq \norm{f}_1$ and $\hat{f}$ is uniformly continuous.
- If $A \in \mathrm{GL}_n(\mathbb{R})$, then $\widehat{f \circ A} = \abs{\det(A)}^{-1} \hat{f} \circ A^{-T}$.
- If $f_\epsilon(x) = f(\epsilon x)$ and $f^\epsilon(x) = f(x/\epsilon)/\epsilon^n$, then $\widehat{f_\epsilon} = \hat{f}^\epsilon$ and $\widehat{f^\epsilon} = {\hat{f}}_\epsilon$.
- If $f \in C^N(\Rn)$ and $D^\alpha f \in L^1(\Rn)$ for all multiindices with $\abs{\alpha} \leq N$, then $\widehat{D^\alpha f}(\xi) = (2\pi i \xi)^\alpha \hat{f}(\xi)$.
- If $f \in L^1(\Rn)$ is compactly supported, then $\hat{f} \in C^\infty(\Rn)$ with $[(-2\pi i x)^\alpha f(x)]\widehat{} = D^\alpha \hat{f}$.
- Riemann-Lebesgue lemma: if $f \in L^1(\Rn)$, then $\lim_{\abs{\xi} \to \infty} \hat{f}(\xi) = 0$.
- $(e^{-\pi \abs{x}^2})\widehat{} = e^{-\pi \abs{\xi}^2}$ (hence the Fourier transform of a Gaussian is a Gaussian).
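The Gaussian identity can be spot-checked by discretizing the defining integral over a window outside which the integrand is negligible (a sketch, standard library only):

```python
import cmath
import math

# Riemann-sum approximation of (e^{-pi x^2})^ at xi = 1; the window [-10, 10]
# suffices because the Gaussian decays rapidly. Expected value: e^{-pi xi^2}.
xi = 1.0
N, L = 20_000, 10.0
dx = 2 * L / N
val = sum(math.exp(-math.pi * (-L + k * dx) ** 2)
          * cmath.exp(-2j * math.pi * xi * (-L + k * dx)) for k in range(N)) * dx
```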
Schwartz space $\mathcal{S}(\Rn)$: $\set{f \in C^\infty(\Rn) : \forall \alpha, \beta\ (\abs{f}_{\alpha, \beta} < \infty)}$, where $\abs{f}_{\alpha, \beta} = \sup_{x \in \Rn} \abs{x^\alpha D^\beta f(x)}$ (these are seminorms for multiindices $\alpha, \beta$)
We say that a sequence $\set{f_n} \subseteq \mathcal{S}(\Rn)$ converges to $f \in \mathcal{S}(\Rn)$ if and only if $\abs{f - f_n}_{\alpha, \beta} \to 0$ for each pair of multiindices $\alpha, \beta$.
Properties of Schwartz space
- The following are equivalent to $f \in \mathcal{S}(\Rn)$:
- $(1 + \abs{x})^N D^\beta f$ is bounded for each $N \in \mathbb{Z}_{\geq 0}$ and multiindex $\beta$.
- $\lim_{\abs{x} \to \infty} x^\alpha D^\beta f = 0$ for each pair of multiindices $\alpha, \beta$.
- $f \in C^\infty(\Rn)$ and $\norm{x^\alpha D^\beta f}_1 < \infty$ for each pair of multiindices $\alpha, \beta$.
- $C_c^\infty(\Rn)$ is dense in $\mathcal{S}(\Rn) $.
- $\mathcal{S}(\Rn) \subseteq L^p(\Rn)$ for all $p \in [1, \infty]$.
- $\mathcal{S}(\Rn)$ is closed under multiplication by polynomials, differentiation, multiplication, and the Fourier transform.
Convolution of $f, g: \Rn \to \C$: $(f * g)(x) = \int_{\Rn} f(x - y) g(y) \, dy$
Properties of convolution
Young’s convolution inequality: if $p, q, r \in [1, \infty]$ and $1/p + 1/q = 1 + 1/r$, then $\norm{f * g}_r \leq \norm{f}_p \norm{g}_q$ for all $f \in L^p(\Rn)$ and $g \in L^q(\Rn)$.
If $\phi \in C_c^\infty(\Rn)$ and $f \in L^1_\mathrm{loc}(\Rn)$, then $\phi * f \in C^\infty(\Rn)$ and $D^\alpha(\phi * f) = (D^\alpha \phi) * f$.
$\mathcal{S}(\Rn)$ is closed under convolution.
If $f, g \in L^1(\Rn)$, then $\widehat{f * g} = \hat{f}\hat{g}$; if $f, g \in \mathcal{S}(\Rn)$, then $\widehat{fg} = \hat{f} * \hat{g}$.
Let $\set{\phi^\epsilon} \subseteq \mathcal{S}(\Rn)$ be an approximate identity 18.
- If $f \in C_0(\Rn)$, then $\phi^\epsilon * f \to f$ uniformly as $\epsilon \to 0$.
- If $f \in L^p(\Rn)$ with $p \in [1, \infty)$, then $\phi^\epsilon * f \to f$ in $L^p$ as $\epsilon \to 0$.
- If $f \in L^1_\mathrm{loc}(\Rn)$, there exists a sequence $\set{g_n} \subseteq C_c^\infty(\Rn)$ such that if $p \in [1, \infty)$ and $f \in L^p(\Rn)$, then $g_n \to f$ in $L^p$. (If $f \in C_0(\Rn)$ also, then $g_n \to f$ uniformly.)
Fourier duality
If $f, g \in L^1(\Rn)$, then $\int_{\Rn} \hat{f} g \, dx = \int_{\Rn} f\hat{g} \, dx$.
Fourier inversion theorem
If $f, \hat{f} \in L^1(\Rn)$, then $f = \check{\hat{f}}$ a.e. (or equivalently, $\hat{\hat{f}}(x) = f(-x)$ for a.e. $x$).
Corollaries:
If $f \in L^1(\Rn)$ and $\hat{f} = 0$, then $f = 0$ (a.e.) (viz., the Fourier transform is injective on $L^1(\Rn)$).
The Fourier transform is an automorphism of $\mathcal{S}(\Rn)$ (as a TVS).
Plancherel’s theorem
If $f, g \in \mathcal{S}(\Rn)$, then $\int_{\Rn} f \conj{g} \, dx = \int_{\Rn} \hat{f}\conj{\hat{g}} \, dx$. Furthermore, there exists a unique bounded operator $\mathcal{F}$ on $L^2(\Rn)$ that agrees with the Fourier transform on $\mathcal{S}(\Rn)$; $\mathcal{F}$ is unitary and agrees with the Fourier transform on $L^1(\Rn) \cap L^2(\Rn)$.
-
Take $a_n = (-1)^{n+1}$ and $b_n = \abs{c_n}$ in Dirichlet’s test. ↩︎
-
Let $b = \lim_{n \to \infty} b_n$ and $d_n = \pm(b_n - b)$ according as $\set{b_n}$ is nonincreasing or nondecreasing. By Dirichlet’s test, $\sum a_n d_n$ converges, whence $\sum a_n b_n = b \sum a_n \pm \sum a_n d_n$. ↩︎
-
Take $p = q = 2$ in Hölder’s inequality and note that $\abs{\inner{f}{g}}^2 = \abs{\int f \bar{g} \, d\mu}^2 \leq (\int \abs{f\bar{g}} \, d\mu)^2 = \norm{f\bar{g}}_1^2$. ↩︎
-
It suffices to show that $\vec{f}(\Omega)$ is open. By the inverse function theorem, for each $\vec{x} \in \Omega$, there exists an open set $U_\vec{x} \ni \vec{x}$ such that $\vec{f}(U_\vec{x})$ is open (using continuity of the local inverse). Therefore $\vec{f}(\Omega) = \cup_{\vec{x} \in \Omega} \vec{f}(U_\vec{x})$ is open. ↩︎
-
By the monotone convergence theorem, $\int \sum \abs{f_n} = \sum \int \abs{f_n} < \infty$, so $g = \sum \abs{f_n} \in L^1$. Hence $\sum f_n$ is defined a.e. and is integrable. The conclusion then follows from the dominated convergence theorem applied to the sequence of partial sums. ↩︎
-
That uniform boundedness implies pointwise boundedness is evident. As for the converse, let $\epsilon > 0$ be given and $\delta > 0$ be as in the definition of equicontinuity. Cover $K$ with finitely many balls of radius $\delta$ and thence combine the pointwise bounds for the centres of the balls with the assumption of equicontinuity. ↩︎
-
Some authors use the term “holomorphic at $z_0$” in place of what we refer to as “differentiable at $z_0$”. Under this convention, $f(z) = \abs{z}^2$, for instance, is holomorphic at $0$. ↩︎ ↩︎
-
The first two conditions imply that $u$ and $v$ are $\mathbb{R}^2$-differentiable at $(x_0, y_0)$ (see Derivatives); thence the Cauchy-Riemann equations entail that $f$ is $\C$-differentiable at $z_0$. ↩︎
-
Fix $z_0 \in \Omega$. For any $z \in \Omega$, let $\Gamma$ be a piecewise smooth curve from $z_0$ to $z$ (which exists by path-connectedness). Then $0 = \int_\Gamma F'(z) \, dz = F(z) - F(z_0)$. ↩︎
-
More precisely, two curves $\gamma_0, \gamma_1: [0, 1] \to \Omega$ with common endpoints $a = \gamma_0(0) = \gamma_1(0)$ and $b = \gamma_0(1) = \gamma_1(1)$ are said to be fixed-endpoint homotopic if there exists a continuous function $H: [0, 1]^2 \to \Omega$ such that $H(\cdot, 0) = \gamma_0, H(\cdot, 1) = \gamma_1$, and $H(0, t) = a, H(1, t) = b$ for all $t \in [0, 1]$ (note that $\gamma_0, \gamma_1$ may need to be reparametrized for this definition to be applicable in general). Homotopy may similarly be defined between closed curves, which are permitted to have different initial points. ↩︎
-
In fact, it is not necessary that $h$ be nonvanishing on $\Gamma$; we have only assumed that it is for brevity of exposition. ↩︎
-
Apply the symmetric, meromorphic version to $f$ and $g = -(f + h) $, noting that $g \neq 0$ on $\Gamma$ since $\abs{h} < \abs{f}$. ↩︎
-
If $\abs{f}$ attained a maximum at $z_0 \in \Omega$, the image under $f$ of a neighbourhood of $z_0$ would be open, and would therefore contain values of modulus greater than $\abs{f(z_0)}$ – a contradiction. ↩︎
-
We adopt the convention under which “biholomorphic” and “conformal” are synonymous. However, the term “conformal” is sometimes used to refer to maps $\phi: U \to V$ that are holomorphic with $\phi'$ nonvanishing on $U$. This condition is strictly weaker than biholomorphicity, as evidenced by functions such as $\phi(z) = e^z$ (with $U = \C, V = \C \setminus \set{0}$). Yet other definitions take “conformal” to mean injective and holomorphic, which is a stronger requirement than in the aforementioned definition but still weaker than biholomorphicity. ↩︎ ↩︎ ↩︎
-
If $U = \C_\infty$, we define $\Aut(U)$ to be the group of meromorphic bijections from $\C_\infty$ to itself. ↩︎
-
If the transformations are denoted $S$ and $T$, then $T^{-1} S$ has three distinct fixed points and is therefore the identity. ↩︎
-
Make the change of variables $a = e^{i \theta/2} / \sqrt{1 - \abs{\alpha}^2}$, $b = \alpha a$. ↩︎
-
That is, $\phi \in \mathcal{S}(\Rn)$ with $\int_{\Rn} \phi \, dx = 1$ and $\phi^\epsilon(x) = \phi(x/\epsilon)/\epsilon^n$. ↩︎