This study note is based on the book Ordinary Differential Equations with Applications by Carmen Chicone.

Notations and Preliminaries

Notations

An ODE is an equation of the form \[ \dot{x} = f(t,x,\lambda)\] where \[f: J \times U \times \Lambda \rightarrow \mathbb{R}^{n} \ , \ J \subseteq \mathbb{R}, U \subseteq \mathbb{R}^{n}, \Lambda \subseteq \mathbb{R}^{k} \] Here, $J$ is the subset (usually an interval) for time $t$, $U$ is the subset for states $x$, and $\Lambda$ is the subset for parameters $\lambda$.

Initial Value Problem (IVP)

An initial value problem (IVP) is defined as $\dot{x} = f(t,x,\lambda), \ x(t_{0})=x_{0}$.

Several important properties regarding IVP are as follows:

  1. (Existence) If $f$ is smooth, every IVP has a solution, and the solution depends smoothly on the initial conditions and parameters
  2. (Uniqueness) The solution above is unique.
  3. (Extension) The solution of an IVP can be extended until it either covers the entire domain of the differential equation or blows up in finite time (see the numerical sketch below)
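As a minimal numerical sketch of the extension property (my own example, not from the book): for $\dot{x} = x^{2}, x(0)=1$ the exact solution is $x(t) = 1/(1-t)$, which blows up as $t \to 1^{-}$, so the maximal interval of existence is $(-\infty, 1)$ even though $f(x)=x^{2}$ is smooth on all of $\mathbb{R}$.

```python
from scipy.integrate import solve_ivp

# dx/dt = x^2 with x(0) = 1 has the exact solution x(t) = 1/(1 - t),
# which blows up in finite time at t = 1 even though f(x) = x^2 is smooth.
sol = solve_ivp(lambda t, x: x**2, (0.0, 0.999), [1.0], rtol=1e-10, atol=1e-12)

print(sol.y[0, -1])          # numerical value near t = 0.999 (large)
print(1.0 / (1.0 - 0.999))   # exact value 1/(1 - t) = 1000
```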

Flow

For an autonomous differential equation $\dot{x} = f(x), x \in \mathbb{R}^{n}$, the function $\phi:\mathbb{R} \times \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$, called the flow, gives the solutions of the equation and satisfies the following properties:
  1. $\phi(0,x) = x$
  2. $\phi(t+s,x) = \phi(t,\phi(s,x))$

The flow may also be written as $\phi^{t}(x)$ or $\phi_{t}(x)$ to denote this family of solutions of the autonomous system when the system starts from $x$ at $t=0$. This alternative notation highlights the fact that the flow is a one-parameter group of solutions.

Geometrically, we can view the flow as a collection of individual trajectories. Given a specific initial condition $x(0) = x_{0}$, the solution of the autonomous system is $\phi(t,x_{0})$ or $\phi^{t}(x_{0})$, which traces a trajectory, solution curve, or orbit through the state space for that IVP (since we now also specify the initial condition). The collection of all such solutions forms the flow.
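As a quick worked example (mine, not from the book): for the scalar equation $\dot{x} = ax$, the flow is $\phi(t,x) = e^{at}x$, and both defining properties can be checked directly:

\[ \phi(0,x) = e^{0}x = x, \qquad \phi(t+s,x) = e^{a(t+s)}x = e^{at}\left(e^{as}x\right) = \phi(t,\phi(s,x)). \]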

Matrix exponential (logarithm)

Consider an $n\times n$ matrix $A$. The matrix represents a linear transformation on the space $E = \mathbb{R}^{n}$ or $\mathbb{C}^{n}$. Denoting the space of linear operators on $E$ by $\mathcal{L}(E)$, we have $A \in \mathcal{L}(E)$.

We can then define the exponential map:

The exponential map $\mathrm{exp}: \mathcal{L}(E) \rightarrow \mathcal{L}(E)$ is defined as $$ \mathrm{exp}(A) := I + \sum_{k=1}^{\infty} \frac{1}{k!}A^{k} $$

For notational convenience, we write $e^{A} := \mathrm{exp}(A)$.

Then we have for $A,B \in \mathcal{L}(E)$:

  1. If $A \in \mathcal{L}(\mathbb{R}^{n})$, then $e^{A} \in \mathcal{L}(\mathbb{R}^{n})$
  2. If $B$ is non-singular, then $B^{-1}e^{A}B = e^{B^{-1}AB}$
  3. If $AB=BA$, then $e^{A+B} = e^{A}e^{B}$
  4. $e^{-A} = (e^{A})^{-1}$ [the matrix exponential is always invertible]
  5. $\frac{d}{dt}(e^{tA}) = Ae^{tA} = e^{tA}A$. In particular, $e^{tA}$ is the principal fundamental matrix solution (P.F.M.) at $t_{0}=0$ for $\dot{x} = Ax, x \in \mathbb{R}^{n}$
  6. $|e^{A}| \leq e^{|A|}$, where $|\cdot|$ is the operator norm on $\mathcal{L}(E)$
  7. For every $n\times n$ matrix $A$, $\mathrm{det}(e^{A}) = e^{\mathrm{tr} A}$
  8. The exponential of the identity matrix is $e^{I_{n}} = eI_{n}$ (properties 5 and 7 are checked numerically in the sketch below)
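A minimal numerical sketch (my own, using scipy.linalg.expm; the particular matrix is an arbitrary illustration) checking properties 5 and 7 above:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Property 7: det(e^A) = e^{tr A}
print(np.linalg.det(expm(A)), np.exp(np.trace(A)))

# Property 5: e^{tA} solves X' = AX with X(0) = I (finite-difference check)
t, h = 0.7, 1e-6
dXdt = (expm((t + h) * A) - expm((t - h) * A)) / (2 * h)
print(np.max(np.abs(dXdt - A @ expm(t * A))))  # ~0 up to truncation error
```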

Homogeneous Linear Differential Equations

Consider a homogeneous linear system $\dot{x} = A(t)x, x\in \mathbb{R}^{n}$. Here, $A(t)$ is an $n\times n$ time-varying matrix. The system is homogeneous since there is no forcing (perturbation) term.

Extension of IVP solution

Consider the IVP \[\dot{x} = A(t)x, \ x(t_{0})=x_{0}.\] We know that a solution of the IVP exists in an open neighbourhood containing $t_{0}$. However, if $A(t)$ is continuous, the solution can further be extended to the entire interval on which $A(t)$ is defined.

Superposition

If the homogeneous system has two solutions $\phi_{1}(t),\phi_{2}(t)$ defined on some interval $(a,b)$, then any linear combination $\lambda_{1}\phi_{1}(t)+\lambda_{2}\phi_{2}(t)$ is also a solution defined on the same interval $(a,b)$.

Fundamental Set of Solutions and Related Concepts

A few related concepts regarding solutions of a dynamical system are listed below:

  • Fundamental Set of Solutions $\mathcal{F}$: a set of $n$ linearly independent solutions of the homogeneous system, all defined on the same open interval $J$. A fundamental set of solutions always exists on any interval $J$ on which $A(t)$ is continuous.
    1. Any solution of the homogeneous system on the interval $J$ can be expressed as a linear combination of elements of $\mathcal{F}$.
  • Matrix solution $\Phi(t)$: an $n\times n$ matrix function whose columns are solutions of the homogeneous system
    1. Fundamental Matrix Solution $\Phi(t)$ (F.M.S.): the columns are linearly independent for all $t$ where the solutions are defined. Any solution $\phi(t)$ on $J$ can be expressed as $\phi(t) = \Phi(t)v$ for some $v\in \mathbb{R}^{n}$
    2. Principal Fundamental Matrix Solution $\Phi(t;t_{0})$ (P.F.M.) at $t_{0} \in J$: a special case of F.M.S. such that $\Phi(t_{0})=I_{n}$. It is easy to see that $\Phi(t;t_{0})$ can be obtained from any F.M.S. $\Phi(t)$ by $\Phi(t;t_{0}) = \Phi(t)\Phi^{-1}(t_{0})$. Therefore, the homogeneous system has a P.F.M. $\Phi(t;\tau)$ at every point $\tau \in J$.
    3. The F.M.S. $\Phi(t)$ is not unique
  • State Transition Matrix: The family of fundamental matrix solutions $\Psi(t,\tau)$ parametrized by $\tau \in J$ such that $\Psi(\tau,\tau) = I_{n}$.
    1. The state transition matrix $\Psi(t,\tau) := \Phi(t)\Phi^{-1}(\tau)$
    2. $\Psi(\tau,\tau) = I_{n}, \Psi(t,s)\Psi(s,\tau) = \Psi(t,\tau)$
    3. $\Psi(t,s)^{-1} = \Psi(s,t), \frac{\partial \Psi}{\partial s}(t,s) = -\Psi(t,s)A(s)$
    4. Given an initial state $v\in \mathbb{R}^{n}$ at $t_{0}$, the state transition matrix $\Psi(t,t_{0})$ transfers $v$ to the new state $\Psi(t,t_{0})v$ at time $t$.
    5. The P.F.M. at $t_{0}$ satisfies $\Phi(t;t_{0}) = \Psi(t,t_{0})$ (a numerical sketch of $\Psi$ and its composition property follows this list)
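As referenced above, a minimal sketch (my own; the time-varying $A(t)$ is an arbitrary illustration) that builds a state transition matrix $\Psi(t,\tau)$ by integrating the matrix ODE $\dot{\Phi} = A(t)\Phi$, $\Phi(\tau) = I_{n}$, and checks the composition property $\Psi(t,s)\Psi(s,\tau) = \Psi(t,\tau)$:

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    # arbitrary continuous time-varying coefficient matrix
    return np.array([[0.0, 1.0],
                     [-1.0 - 0.5 * np.sin(t), -0.1]])

def transition(t, tau):
    """Psi(t, tau): integrate dPhi/dt = A(t) Phi with Phi(tau) = I."""
    rhs = lambda s, phi: (A(s) @ phi.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (tau, t), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

t0, s, t1 = 0.0, 1.0, 2.5
composed = transition(t1, s) @ transition(s, t0)
direct = transition(t1, t0)
print(np.max(np.abs(composed - direct)))  # ~0: Psi(t,s) Psi(s,tau) = Psi(t,tau)
```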

Inhomogeneous Linear Differential Equations

Consider the IVP, \[ \dot{x} = A(t)x+g(x,t), x(t_{0})=x_{0} \]

Let $\Phi(t)$ be a fundamental matrix solution of the homogeneous system $\dot{x} = A(t)x$ defined on some interval $J_{0}$ containing $t_{0}$. Then the solution $\phi(t)$ of the IVP, defined on some subinterval of $J_{0}$, is given by the Variation of Parameters formula:

\[\begin{align*} \phi(t) &= \Phi(t)\Phi^{-1}(t_{0})x_{0} + \Phi(t) \int_{t_{0}}^{t} \Phi^{-1}(s) g(\phi(s),s) ds \\ &= \underbrace{\Psi(t,t_{0})x_{0}}_{\text{zero-input solution}} + \underbrace{\int_{t_{0}}^{t} \Psi(t,s) g(\phi(s),s) ds}_{\text{zero-state solution}} \end{align*}\]

If $g(\cdot,\cdot)$ is constant w.r.t. its first argument, i.e. $g(x,t) = g(t) \ \forall x$, then the perturbation is state-independent and the Variation of Parameters formula gives an explicit solution of the IVP: $\phi(t) = \Psi(t,t_{0})x_{0} + \int_{t_{0}}^{t} \Psi(t,s) g(s)\, ds$

Note: The zero-input solution is the transfer of the initial state $x_{0}$ to time $t$, while the zero-state solution (viewed as a limit of Riemann sums) is the sum of all transferred slices of the input $g(s)$ for $s\in[t_{0},t]$.
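A minimal sketch (my own; the constant $A$ and the simple forcing $g(t)$ are chosen only for illustration) comparing the variation of parameters formula against direct numerical integration. With constant $A$, the state transition matrix is $\Psi(t,s) = e^{(t-s)A}$:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad_vec
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
g = lambda t: np.array([0.0, np.cos(t)])   # state-independent forcing
x0 = np.array([1.0, 0.0])
t = 3.0

# Variation of parameters: x(t) = e^{tA} x0 + int_0^t e^{(t-s)A} g(s) ds
zero_input = expm(t * A) @ x0
zero_state, _ = quad_vec(lambda s: expm((t - s) * A) @ g(s), 0.0, t)
vop = zero_input + zero_state

# Direct numerical solution of the inhomogeneous IVP
sol = solve_ivp(lambda s, x: A @ x + g(s), (0.0, t), x0, rtol=1e-10, atol=1e-12)
print(np.max(np.abs(vop - sol.y[:, -1])))  # ~0
```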

Periodic Solutions - The Poincaré Map

Before we explore the systems that naturally give rise to periodic solutions, it is necessary to introduce the concept of the Poincaré Map.

The Poincaré Map, also called the return map, is defined as a map from an $(n-1)$-dimensional submanifold of $\mathbb{R}^{n}$ to itself.

Consider a bundle of trajectories in $\mathbb{R}^{3}$ which together form a torus. We can obtain a 2-dimensional submanifold $S \subseteq \mathbb{R}^{3}$ by taking a slice (or section) of the torus such that all trajectories in the torus pass through this submanifold. If we take a point $p$ on this submanifold and follow its trajectory for one revolution around the torus (which takes time $T(p)$), it will again cross the submanifold at the point $\phi_{T(p)}(p)$. Note that $\phi_{T(p)}(p)$ and $p$ are not guaranteed to coincide; if $\phi_{T(p)}(p) = p$, we have a $T(p)$-periodic solution.

Formally, we first define a Poincaré Section.

Take $S \subseteq \mathbb{R}^{n}$ to be an $(n-1)$-dimensional submanifold such that the flow through each point $p\in S$ is transversal to $S$ (trajectories travel across $S$, not along $S$). Then $\Sigma \subseteq S$ is called a Poincaré Section if it is an open subset of $S$ such that every point of $\Sigma$ returns to $S$.

With that, we can define the Poincaré Map $P: \Sigma \rightarrow S$ as follows.

The Poincaré Map is given by $P(p) := \phi_{T(p)}(p)$, where $p\in \Sigma$ and $T(p)$ is the first time at which the trajectory starting at $p$ returns to $S$.

Note that $T: \Sigma \rightarrow \mathbb{R}$ is a function called the return time map. Both $P,T$ are smooth functions on $\Sigma$.

As we will shortly discuss, the Poincaré Map is central to the analysis of periodic solutions. Some of the ways it will be useful are:

  1. The existence of a periodic solution (orbit) is equivalent to the existence of a fixed point of the Poincaré Map, i.e. $x = P(x)$
  2. Stability of the periodic orbit is determined by the stability of the corresponding fixed point of the Poincaré Map, which is in turn determined by the eigenvalues of its derivative at the fixed point (a numerical sketch of a return map follows this list)
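A minimal sketch (my own) of a numerical Poincaré map for a planar system with a known periodic orbit. The example vector field $\dot{x} = -y + x(1-x^{2}-y^{2})$, $\dot{y} = x + y(1-x^{2}-y^{2})$ has the unit circle as a periodic orbit; the section is the positive $x$-axis $\Sigma = \{(x,0): x>0\}$, and the return is detected with a solve_ivp event:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, u):
    x, y = u
    r2 = x**2 + y**2
    return [-y + x * (1 - r2), x + y * (1 - r2)]

def cross_section(t, u):
    return u[1]                   # event: y = 0
cross_section.direction = 1.0     # only upward crossings (return to positive x-axis)

def poincare(x):
    """One iterate of the return map on the section {(x, 0) : x > 0}."""
    sol = solve_ivp(f, (0.0, 50.0), [x, 0.0], events=cross_section,
                    rtol=1e-10, atol=1e-12)
    times, states = sol.t_events[0], sol.y_events[0]
    return states[times > 1e-6][0, 0]   # x-coordinate at the first genuine return

print(poincare(0.5), poincare(1.0), poincare(2.0))
# x = 1 is (approximately) a fixed point of P, so the unit circle is a periodic orbit;
# nearby points are mapped much closer to 1, suggesting the orbit is asymptotically stable.
```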

Periodic Linear Homogeneous System - Floquet Theory

Consider the periodic linear system of the form $\dot{x} = A(t)x, x \in \mathbb{R}^{n} $ where the time-varying matrix $A(t)$ is periodic with period $T$, i.e. $A(t+T) = A(t) \ \forall t \in J $.

Floquet theory provides a canonical form of the solution to this T-periodic system, as well as a periodic time-dependent change of coordinates that transforms the system into a homogeneous linear system with constant coefficients.

Floquet Theory - Definition

If $\Phi(t)$ is an F.M.S. of the T-periodic system, then
  1. For all $ t\in \mathbb{R}$, $\Phi(t+T) = \Phi(t)\Phi(0)^{-1}\Phi(T)$
  2. There exists a matrix $B \in \mathbb{C}^{n\times n}$ and a T-periodic matrix function $P(t) \in \mathbb{C}^{n\times n}$ such that $$ e^{TB} = \Phi(0)^{-1}\Phi(T), \qquad \Phi(t) = P(t)e^{tB} \ \ \forall t \in \mathbb{R} $$
  3. There exists a matrix $R \in \mathbb{R}^{n\times n}$ and a 2T-periodic matrix function $Q(t) \in \mathbb{R}^{n\times n}$ such that $$ \Phi(t) = Q(t)e^{tR} \ \ \forall t \in \mathbb{R} $$

Note: the Floquet normal form decomposes the solution into the product of a T-periodic matrix function and a matrix exponential. Results in the subsequent sections connect this decomposition to the types of possible solutions via characteristic multipliers.

Floquet normal form, Monodromy operator, characteristic multiplier

The representation $\Phi(t) = P(t)e^{tB}$ is called the Floquet Normal Form.

The Monodromy Operator $\mathcal{M}$ is defined as $\Phi(T+\tau)\Phi^{-1}(\tau)$

The Monodromy Operator can be viewed as the state transition matrix $\Psi(T+\tau,\tau)$ that transfers any initial state across one full period of the system. The characteristic multipliers $\lambda_{i}, i = 1,2,\ldots,n$ are the eigenvalues of the monodromy operator $\mathcal{M}$.

For a given T-periodic homogeneous system, there are a few important properties:

  1. Every monodromy operator $\mathcal{M}$ is invertible; therefore every characteristic multiplier satisfies $\lambda_{i} \neq 0$
  2. All monodromy operators have the same $n$ characteristic multipliers $\{\lambda_{i}\}$, i.e. the characteristic multipliers are intrinsic to the system, independent of the choice of F.M.S. $\Phi(t)$ or the initial time $\tau$.
  3. The characteristic (Floquet) multipliers are $\{\lambda_{i}\} = \mathrm{eig}(\mathcal{M}) = \mathrm{eig}(e^{TB})$.
  4. The characteristic (Floquet) exponents are $\{\mu_{i}\} = \{\mu : e^{T \mu}=\lambda_{i},\ i=1,2,\ldots,n\}$
  5. If $Bv = \mu v$, then $$ e^{tB}v = \sum_{k=0}^{\infty} \frac{t^{k}}{k!}B^{k}v = \sum_{k=0}^{\infty} \frac{t^{k}}{k!}\mu^{k}v = e^{t\mu}v $$
  6. Consider the P.F.M. $\Phi(t)$ at time $t=0$. Since $\Phi(t)$ is also an F.M.S., the corresponding monodromy operator is $\mathcal{M}(0) = \Phi(T)\Phi^{-1}(0) = \Phi(T)I_{n} = \Phi(T)$. Therefore $\mathcal{M} = \Phi(T) = e^{TB}$ (this construction is used in the numerical sketch below).
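A minimal sketch (my own; the Mathieu-type equation $\ddot{y} + (\delta + \varepsilon\cos t)y = 0$ is a standard T-periodic example with $T = 2\pi$, and the parameter values are arbitrary) that computes the monodromy matrix $\mathcal{M} = \Phi(T)$ from the P.F.M. at $t=0$ and reads off the characteristic multipliers as its eigenvalues:

```python
import numpy as np
from scipy.integrate import solve_ivp

delta, eps, T = 1.2, 0.3, 2 * np.pi

def A(t):
    # first-order form of y'' + (delta + eps*cos t) y = 0; A(t + 2*pi) = A(t)
    return np.array([[0.0, 1.0],
                     [-(delta + eps * np.cos(t)), 0.0]])

# Monodromy matrix M = Phi(T), where Phi is the P.F.M. at t = 0 (Phi(0) = I)
rhs = lambda t, phi: (A(t) @ phi.reshape(2, 2)).ravel()
sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-11, atol=1e-13)
M = sol.y[:, -1].reshape(2, 2)

multipliers = np.linalg.eigvals(M)
print(multipliers)
# Liouville's formula: det(M) = exp(int_0^T tr A(s) ds) = 1 here, since tr A(t) = 0,
# so the two characteristic multipliers must multiply to 1.
print(np.prod(multipliers))   # ~1
```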

An application of the Spectral Mapping Theorem

Given an operator $A$ with spectrum $\sigma(A)$, we have $e^{\sigma(A)} = \sigma(e^{A})$. Specifically, if $A$ is a matrix with eigenvalues $\{\lambda_{i}\}$, then

\[\mathrm{eig}(A^{k}) = \{ \lambda_{i}^{k} \}, \qquad \mathrm{eig}(e^{A}) = \{ e^{\lambda_{i}} \}\]
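A small numerical check of the spectral mapping statement (my own illustration; the matrix is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [0.0, -0.5]])

print(np.sort(np.exp(np.linalg.eigvals(A))))   # e^{lambda_i}
print(np.sort(np.linalg.eigvals(expm(A))))     # eig(e^A) -- same values
```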

Time-dependent Coordinate transformation

If the P.F.M. of the T-periodic system $\dot{x} = A(t)x$ at time $t=0$ is given by $Q(t)e^{tR}$ with $Q(t),R \in \mathbb{R}^{n\times n}$, then the 2T-periodic time-dependent change of coordinates $x = Q(t)y$ transforms the system into the real constant-coefficient linear system $\dot{y} = Ry$.

Classification of Solutions by $\lambda$

If $\lambda$ is a characteristic multiplier of the homogeneous T-periodic system and $e^{T\mu} = \lambda$, then there is a non-trivial solution of the form $x(t) = \Phi(t)v = P(t)e^{tB}v = e^{\mu t}P(t)v = e^{\mu t}p(t)$, where $Bv = \mu v$ and $p(t) = P(t)v$ is a T-periodic function. Moreover, this solution satisfies $x(t+T) = \lambda x(t)$.

If we are given two characteristic multipliers $\lambda_{1},\lambda_{2}$, we have exponents $\mu_{1},\mu_{2}$ and eigenvectors $v_{1},v_{2}$ such that $Bv_{1} = \mu_{1}v_{1}, Bv_{2} = \mu_{2}v_{2}$, and $e^{T\mu_{1}} = \lambda_{1}, e^{T\mu_{2}} = \lambda_{2}$. Then we have two solutions $x_{1}(t) = e^{\mu_{1}t}p_{1}(t), x_{2}(t) = e^{\mu_{2}t}p_{2}(t)$, where $p_{1}(t) = P(t)v_{1}, p_{2}(t) = P(t)v_{2}$. If $\lambda_{1} \neq \lambda_{2}$, then $x_{1}(t),x_{2}(t)$ are linearly independent solutions.

Since $x(t+T) = \lambda x(t)$, we can consider the following cases:

  1. $\lambda \in \mathbb{R}, \lambda>0$ positive real number:
    1. $0<\lambda<1, \mu <0$: $x(t)$ converges to the zero solution asymptotically
    2. $\lambda>1, \mu >0$: $x(t)$ is unbounded as $t\rightarrow \infty$
    3. $\lambda=1, \mu =0$: $x(t)$ is a T-periodic solution, i.e. $x(t+T) = x(t)$.
  2. $\lambda \in \mathbb{R}, \lambda<0$ negative real number: write $\mu = a+\pi i/T$ with $a\in\mathbb{R}$; then $\lambda = e^{T\mu} = e^{Ta}e^{\pi i} = -e^{Ta}$
    1. $\lambda=-1, a=0$: $x(t)$ is a 2T-periodic solution, i.e. $x(t+2T) = x(t)$.
    2. $a\neq 0$: the stability follows the positive-$\lambda$ cases above ($x(t)$ decays to zero if $a<0$ and is unbounded if $a>0$)
  3. $\lambda \in \mathbb{C}$ (non-real): write $\mu = \alpha + i\beta$; then there is a complex solution $x(t)=e^{\alpha t}(\mathrm{cos}\beta t+i\,\mathrm{sin}\beta t)(r(t)+is(t))$, which by taking real and imaginary parts (both are solutions, by superposition) gives rise to two real solutions $x_{1}(t) =e^{\alpha t}(r(t)\mathrm{cos}\beta t-s(t)\mathrm{sin}\beta t)$ and $x_{2}(t) =e^{\alpha t}(r(t)\mathrm{sin}\beta t + s(t)\mathrm{cos}\beta t)$
    1. $\alpha \neq 0$: these solutions approach zero if $\alpha<0$ and are unbounded if $\alpha>0$
    2. $\alpha = 0$: there are two cases
      1. If there exist relatively prime integers $m,n$ with $2\pi m/\beta = nT$, the solution is $nT$-periodic
      2. If no such pair $m,n$ exists, i.e. $2\pi/(\beta T)$ is irrational, then no true period exists and the solution is quasi-periodic.

Periodic Orbits of Linear System

Consider time-periodic system $\dot{x} = A(t)x + b(t), \ \ x\in \mathbb{R}^{n}$ where $A(t),b(t)$ are both T-periodic.

When does a T-periodic solution exist for the above system? Note first that if $x(t)$ is a solution of the system and $x(0) = x(T)$, then the solution $x(t)$ is T-periodic, i.e. $x(t+T) = x(t), \forall t \in \mathbb{R}$.

Therefore, to show that a T-periodic solution exists for the above system, we need to show that some solution $x(t)$ satisfies $x(0)=x(T)$.

If $\Psi(t)$ is the P.F.M. of the homogeneous system $\dot{x} = A(t)x, \ \ x\in \mathbb{R}^{n}$ at $t=0$, then by the variation of parameters formula: $x(T) = \Psi(T)x(0) + \Psi(T) \int_{0}^{T}\Psi^{-1}(s)b(s)\,ds$. Therefore, $x(T) = x(0)$ if and only if $(I_{n} - \Psi(T))x(0) = \Psi(T)\int_{0}^{T}\Psi^{-1}(s)b(s)\,ds$. This linear system has a (unique) solution for $x(0)$ whenever $1$ is not an eigenvalue of $\Psi(T)$, since $I_{n} - \Psi(T)$ is then invertible.

If 1 is not a characteristic multiplier of the T-periodic homogeneous system $\dot{x} = A(t)x$, then the time-periodic system $\dot{x} = A(t)x + b(t), \ \ x\in \mathbb{R}^{n}$ has at least one T-periodic solution.

Note: if $A(t) = A$ is a constant matrix with real, nonzero eigenvalues $\lambda_{i}$, then the characteristic multipliers are $e^{T\lambda_{i}} \neq 1$, so the periodic system has at least one T-periodic solution.

The general sufficient condition for the existence of a T-periodic solution of the system is given as follows:

If the T-periodic system has a bounded solution, then it has a T-periodic solution.
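A minimal sketch (my own; the forced Mathieu-type system below is only an illustration) of the construction behind the eigenvalue criterion above: compute $\Psi(T)$ for the homogeneous part, check that 1 is not an eigenvalue, solve $(I_{n} - \Psi(T))x(0) = \Psi(T)\int_{0}^{T}\Psi^{-1}(s)b(s)\,ds$ for the periodic initial condition, and verify $x(T) = x(0)$:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad_vec

T = 2 * np.pi
A = lambda t: np.array([[0.0, 1.0],
                        [-(1.2 + 0.3 * np.cos(t)), 0.0]])   # T-periodic coefficients
b = lambda t: np.array([0.0, np.sin(t)])                     # T-periodic forcing

def Psi(t):
    """P.F.M. of the homogeneous system at t = 0, evaluated at time t (direct but slow)."""
    if t == 0.0:
        return np.eye(2)
    rhs = lambda s, phi: (A(s) @ phi.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (0.0, t), np.eye(2).ravel(), rtol=1e-9, atol=1e-11)
    return sol.y[:, -1].reshape(2, 2)

PsiT = Psi(T)
print(np.linalg.eigvals(PsiT))   # printed so one can check that 1 is not an eigenvalue

# Solve (I - Psi(T)) x0 = Psi(T) * int_0^T Psi(s)^{-1} b(s) ds for the periodic x0
integral, _ = quad_vec(lambda s: np.linalg.solve(Psi(s), b(s)), 0.0, T)
x0 = np.linalg.solve(np.eye(2) - PsiT, PsiT @ integral)

# Verify that the solution through x0 returns to x0 after one period
sol = solve_ivp(lambda t, x: A(t) @ x + b(t), (0.0, T), x0, rtol=1e-9, atol=1e-11)
print(np.max(np.abs(sol.y[:, -1] - x0)))   # small, so the solution is T-periodic
```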

Connection between Floquet Multipliers and the Poincaré Map

Consider an autonomous system $\dot{u} = f(u)$ with a periodic solution $\Gamma$. Let $u(t,\xi)$ denote the solution of the system with initial condition $u(0,\xi) = \xi$.

Then, consider a submanifold $S$ and the Poincaré Section $\Sigma \subseteq S$. For each point $p \in \Sigma$, define $T(p)$ as the time of first return to $S$, which would give us the Poincaré Map $P$ as $P(p) = u(T(p),p)$.

Furthermore, for a given point $p\in \Sigma$, take a vector $v \in \mathbb{R}^{n}$ tangent to $\Sigma$ at $p$. Then the derivative of the Poincaré Map $DP$ in the direction $v$ is given by: $ DP(p)\cdot v = (dT(p)v)\cdot f(p) + u_{\xi}(T(p),p)\cdot v $

We can also consider the first variational equation of the original system along $\Gamma$ and its Floquet Multipliers: $ \dot{W} = Df(u(t,p))W$

The eigenvalues of $DP(p)$, together with $1$, are exactly the characteristic multipliers $\{\lambda_{i}\}$ of the first variational equation along $\Gamma$: $\mathrm{eig}(DP(p)) \cup \{1\} = \{\lambda_{i}\}$.

$\Gamma$ is asymptotically stable if all eigenvalues of $DP(p)$ lie inside the unit circle in the complex plane (see the numerical sketch below).
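A minimal sketch (my own) using the same example system as in the Poincaré map sketch above: the unit circle is a periodic orbit $\Gamma$ of period $2\pi$, and the monodromy matrix of the first variational equation along $\Gamma$ should have one multiplier equal to 1 and one multiplier matching the eigenvalue of $DP(p)$, which for this example is $e^{-4\pi}$ (from the radial linearization $\delta\dot{r} = -2\delta r$ on the circle):

```python
import numpy as np
from scipy.integrate import solve_ivp

def Df(u):
    # Jacobian of f(x, y) = (-y + x(1 - x^2 - y^2), x + y(1 - x^2 - y^2))
    x, y = u
    return np.array([[1 - 3 * x**2 - y**2, -1 - 2 * x * y],
                     [1 - 2 * x * y,        1 - x**2 - 3 * y**2]])

T = 2 * np.pi
gamma = lambda t: np.array([np.cos(t), np.sin(t)])   # the periodic orbit Gamma

# First variational equation W' = Df(u(t, p)) W along Gamma, with W(0) = I
rhs = lambda t, w: (Df(gamma(t)) @ w.reshape(2, 2)).ravel()
sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-11, atol=1e-13)
M = sol.y[:, -1].reshape(2, 2)

print(np.linalg.eigvals(M))   # ~ {1, e^{-4*pi}}
print(np.exp(-4 * np.pi))     # the nontrivial multiplier, well inside the unit circle
# Since the eigenvalue of DP(p) is ~ e^{-4*pi} < 1, Gamma is asymptotically stable,
# consistent with the return-map behaviour observed in the earlier sketch.
```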