:PROPERTIES:
:ID: 4be41e2e-52b9-4cd1-ac4c-7ecb57106692
:END:
#+title: differential equation
#+author: Preston Pan
#+html_head: <link rel="stylesheet" type="text/css" href="../style.css" />
#+html_head: <script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
#+html_head: <script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
#+options: broken-links:t
* Introduction
A differential equation is an equation whose solutions are functions, and which incorporates derivatives of the function
you're solving for. Differential equations often have an infinite family of solutions, where a general solution
incorporates many particular solutions. A particular solution is a specific function, corresponding to a single choice
of initial value problem. Therefore, general solutions tell you how to solve initial value problems.
Differential equations are often used to model real-world systems, and they are the main tool in numerical simulations
of such systems.
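As a concrete illustration, here is a minimal sketch of the forward Euler method, the simplest way to numerically
simulate an ODE $y' = f(x, y)$: repeatedly step along the tangent line.
#+begin_src python
# Minimal forward Euler integrator for y' = f(x, y) with y(x0) = y0.
# Illustrative sketch only; serious simulations use higher-order or
# adaptive methods (e.g. Runge-Kutta).
def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)  # follow the tangent line for one step of size h
        x += h
    return y

# Example: y' = y with y(0) = 1; the exact solution is e^x,
# so the result at x = 1 should be close to e ~ 2.71828.
print(euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000))
#+end_src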
** ODE
:PROPERTIES:
:ID: 5ef63bef-2d8f-4e00-b292-8206cf69469a
:END:
An ODE (ordinary differential equation) is a differential equation whose unknown function depends on a single variable,
and which involves ordinary derivatives of that function.
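For example, exponential growth is governed by the first-order ODE:
\begin{align}
\label{}
\frac{dy}{dx} = ky
\end{align}
whose solutions are the family of functions $y(x) = Ce^{kx}$.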
** PDE
:PROPERTIES:
:ID: 365190d8-0f3a-4728-9b09-83a216292256
:END:
A PDE (partial differential equation) is a differential equation whose unknown function depends on several variables,
and which generally involves partial derivatives of that function.
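For example, the one-dimensional heat equation:
\begin{align}
\label{}
\frac{\partial u}{\partial t} = \alpha\frac{\partial^{2} u}{\partial x^{2}}
\end{align}
is a PDE for the unknown two-variable function $u(x, t)$.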
** initial value problem
:PROPERTIES:
:ID: bc7e9e01-9721-4b3e-a886-74a2fd27daf3
:END:
An initial value problem is a problem where one is given a differential equation together with particular values of the
unknown function (and possibly of its derivatives) at some point; the result is a particular solution.
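For example, the initial value problem $\frac{dy}{dx} = y$ with $y(0) = 2$ picks out the particular solution
$y(x) = 2e^{x}$ from the general solution $y(x) = Ce^{x}$.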
** separable differential equation
:PROPERTIES:
:ID: 8e9c975a-cd75-447e-b094-16258147d83c
:END:
For [[id:5ef63bef-2d8f-4e00-b292-8206cf69469a][ODEs]], separable differential equations are differential equations of the form:
\begin{align}
\label{}
\frac{dy}{dx} = f(y)g(x)
\end{align}
which can be solved by separating the variables and integrating both sides:
\begin{align}
\label{}
\frac{dy}{f(y)} = g(x)dx \\
\int\frac{dy}{f(y)} = \int g(x)dx
\end{align}
evaluating the integrals and solving for $y$, you obtain solutions for $y$ in terms of $x$.
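For example (a standard worked case), take $\frac{dy}{dx} = xy$, so that $f(y) = y$ and $g(x) = x$:
\begin{align}
\label{}
\int\frac{dy}{y} = \int x dx \\
\ln|y| = \frac{x^{2}}{2} + C \\
y = Ae^{x^{2}/2}
\end{align}
where $A = \pm e^{C}$ absorbs the constant of integration.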
** [[id:ab024db7-6903-48ee-98fc-b2a228709c04][linear]] differential equation
:PROPERTIES:
:ID: 32a116d9-b813-4b5a-a2e8-6dd7b767ec16
:END:
Linear differential equations are differential equations of the form:
\begin{align}
\label{}
[\sum_{i}f_{i}(x)D^{i}]y(x) = g(x)
\end{align}
where $D$ is the [[id:31d3944a-cddc-496c-89a3-67a56e821de3][derivative]] operator. They are linear because the unknown function $y(x)$ is being operated on by a
linear operator, so common methods from linear algebra can be used to analyze equations of this kind. For
example, a first-order linear differential equation would look like this:
\begin{align}
\label{}
[f(x)D + g(x)]y(x) = h(x)
\end{align}
which can be solved with an integrating factor $\mu(x)$, where $G(x) = \frac{g(x)}{f(x)}$ and $H(x) = \frac{h(x)}{f(x)}$:
\begin{align}
\label{}
[D + G(x)]y(x) = H(x) \\
\mu(x)[D + G(x)]y(x) = \mu(x)H(x) \\
\mu'(x) := G(x)\mu(x) \\
D(\mu(x)y(x)) = \mu(x)H(x) \\
y(x) = \frac{\int\mu(x)H(x)dx + C}{\mu(x)}
\end{align}
Now to solve for $\mu(x)$, using [[id:8e9c975a-cd75-447e-b094-16258147d83c][separable differential equation]] methods:
\begin{align}
\label{}
\frac{d\mu}{dx} = G(x)\mu(x) \\
\frac{1}{\mu}d\mu = G(x)dx \\
\int\frac{1}{\mu}d\mu = \int G(x)dx \\
\ln(\mu) = \int G(x)dx \\
\mu = e^{\int G(x)dx}
\end{align}
Therefore:
\begin{align}
\label{}
y(x) = \frac{\int e^{\int G(x)dx}H(x)dx + C}{e^{\int G(x)dx}}
\end{align}
Then, to model any particular first-order system, plug in functions for $G(x)$ and $H(x)$.
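To sanity-check this formula, here is a small SymPy sketch (the concrete choices $G(x) = 2$ and $H(x) = x$ are
arbitrary, picked purely for illustration):
#+begin_src python
import sympy as sp

x, C1 = sp.symbols('x C1')
y = sp.Function('y')

# Example first-order linear ODE: y' + 2y = x, i.e. G(x) = 2 and H(x) = x.
print(sp.dsolve(sp.Eq(y(x).diff(x) + 2*y(x), x), y(x)))
# -> Eq(y(x), C1*exp(-2*x) + x/2 - 1/4)

# The integrating-factor formula derived above gives the same family:
mu = sp.exp(sp.integrate(2, x))            # mu = e^{int G dx} = e^{2x}
sol = (sp.integrate(mu * x, x) + C1) / mu  # y = (int mu*H dx + C)/mu
print(sp.expand(sol))                      # -> C1*exp(-2*x) + x/2 - 1/4
#+end_src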
*** superposition principle
:PROPERTIES:
:ID: 422653e2-daa4-422a-9cb7-3983a5a72554
:END:
The principle of superposition states that, for a homogeneous linear differential equation (one with $g(x) = 0$), any
linear combination of solutions $f_{i}(x)$:
\begin{align}
\label{}
f_{new}(x) = \sum_{i=0}^{N}c_{i}f_{i}(x)
\end{align}
is also a solution of the differential equation. This works because the operator is [[id:ab024db7-6903-48ee-98fc-b2a228709c04][linear]], so additive properties
hold over this space.
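Concretely, writing the homogeneous equation as $Ly = 0$ for a linear operator $L$:
\begin{align}
\label{}
L(c_{1}f_{1} + c_{2}f_{2}) = c_{1}Lf_{1} + c_{2}Lf_{2} = 0 + 0 = 0
\end{align}
whenever $Lf_{1} = Lf_{2} = 0$.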
*** Higher Order Linear Differential Equations
Solving higher order linear differential equations requires a couple of tricks. For example, transforms such
as the [[id:e73baa24-1a29-4f35-9d3d-0fad4a3a8e59][Laplace Transform]] or the [[id:262ca511-432f-404f-8320-09a2afe1dfb7][Fourier Transform]] may be used. Such transforms reduce differential equation
problems to algebraic problems, thus simplifying their solution. Other methods include guessing (I'm not
pulling your leg here, this is real), formulation as an eigenvalue problem, and Taylor polynomial solutions. We will
take a look at all of these in this section.
*** Homogeneous Case
Take the case $Ay'' + By' + Cy = 0$ and substitute the trial form $y = De^{kt}$. Then:
\begin{align}
\label{}
De^{kt}(Ak^{2} + Bk + C) = 0 \\
Ak^{2} + Bk + C = 0
\end{align}
Then, use the quadratic formula to solve for $k$ in terms of the other constants. Such a polynomial is called the
characteristic polynomial of this differential equation.
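For example (a standard worked case), for $y'' - 3y' + 2y = 0$ the characteristic polynomial is
$k^{2} - 3k + 2 = (k - 1)(k - 2)$, so $k = 1$ or $k = 2$, and by the superposition principle the general solution is:
\begin{align}
\label{}
y(t) = C_{1}e^{t} + C_{2}e^{2t}
\end{align}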
*** Eigenvalue Problems
Eigenvalue problems can be solved just like in the familiar linear algebra case. For instance, take some differential
equation in this form:
\begin{align}
\label{}
A(f) = \lambda f
\end{align}
where $A$ is a linear operator in [[id:b1f9aa55-5f1e-4865-8118-43e5e5dc7752][function]] space, and $\lambda$ is any constant. Traditionally, one would solve such an eigenvalue
problem like so:
\begin{align}
\label{}
\det{(A - \lambda I)} = 0
\end{align}
In the simple example of a polynomial basis, the function $f$ can be represented as some linear combination of linearly
independent polynomials. A simple basis to choose is the Taylor basis, i.e. $\vec{e_{n}} = x^{n}$ where $e_{n}$ is the nth
basis vector. Note that there are many orthogonal polynomial bases that span this subset of function
space, but this is a simple example. In this case, the matrix $A$ represents an operation on an infinite polynomial,
and the $\lambda I$ tells you to subtract $\lambda$ from all of its diagonal entries. You can interpret this literally, using the following example.
**** Example
\begin{align}
\label{}
D(r^{2}D(f(r))) = \lambda f(r)
\end{align}
is such an example of an eigenvalue problem. Now, using the Taylor basis, we need to know two things: what $D$ is as an
infinite-dimensional matrix in this basis, and what multiplication by $r^{2}$ is as an infinite-dimensional matrix. $f$ is some unknown vector
we are trying to solve for in this system. Note this observation:
\begin{align}
\label{}
\begin{pmatrix}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 \\
0 & 0 & 0 & 3 & 0 \\
0 & 0 & 0 & 0 & 4 \\
\end{pmatrix}
\begin{pmatrix}
a \\
b \\
c \\
d \\
e
\end{pmatrix}
=
\begin{pmatrix}
b \\
2c \\
3d \\
4e
\end{pmatrix}
\end{align}
This matrix encodes the power rule in the Taylor basis, truncated here to polynomials of degree 4; each entry of a
vector is the coefficient of the nth-power monomial, which means, for example:
\begin{align}
\label{}
\begin{pmatrix}
a \\
b \\
c \\
d \\
e
\end{pmatrix} := a + bx + cx^{2} + dx^{3} + ex^{4}
\end{align}
then the derivative of this vector would be:
\begin{align}
\label{derivative}
b + 2cx + 3dx^{2} + 4ex^{3}
\end{align}
which is exactly the coefficients of the resulting vector! Now, if we generalize this to an infinite number of dimensions
(where the vector has infinite length and the matrix has infinitely many entries), the same correspondence holds.
Thus, the infinite matrix with increasing entries on the superdiagonal is the $D$ matrix, or $D$ operator. But what is the
$r^{2}$ operator? Multiplying a polynomial by $r^{2}$ turns each coefficient of $x^{n}$ into the coefficient of $x^{n+2}$,
so the matrix must shift every entry of the vector down by two positions, padding the first two entries with zeros.
If we call this matrix $R$, the matrix product $DRD$ yields a new
infinite matrix $M$, which we can use to solve the eigenvalue problem $\det{(M - \lambda I)} = 0$. Now this matrix $R$ is:
\begin{align}
\label{R matrix}
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
\end{pmatrix}
\end{align}
and so on. This matrix does the exact same thing to a polynomial's coefficient vector as multiplying by $r^{2}$ does to the polynomial.
We then compute the product $RD$ (differentiate first, then shift):
\begin{align}
\label{new}
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
\end{pmatrix}
\begin{pmatrix}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 \\
0 & 0 & 0 & 3 & 0 \\
0 & 0 & 0 & 0 & 4 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix} =
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 \\
0 & 0 & 0 & 3 & 0 \\
\end{pmatrix}
\end{align}
and so on; you can see the pattern. Now we multiply in another $D$ on the left, giving $M = DRD$:
\begin{align}
\label{DS}
\begin{pmatrix}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 \\
0 & 0 & 0 & 3 & 0 \\
0 & 0 & 0 & 0 & 4 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 \\
0 & 0 & 0 & 3 & 0 \\
\end{pmatrix} =
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 2 \cdot 1 & 0 & 0 & 0 \\
0 & 0 & 3 \cdot 2 & 0 & 0 \\
0 & 0 & 0 & 4 \cdot 3 & 0 \\
0 & 0 & 0 & 0 & 5 \cdot 4
\end{pmatrix}
\end{align}
of course the $5 \cdot 4$ doesn't actually come from the finite product above, but it does appear in the infinite version
of this process. Now, we can finally subtract $\lambda$ from the diagonal of this infinite matrix:
\begin{align}
\label{lambda}
\begin{pmatrix}
-\lambda & 0 & 0 & 0 & 0 \\
0 & 2 \cdot 1 - \lambda & 0 & 0 & 0 \\
0 & 0 & 3 \cdot 2 - \lambda & 0 & 0 \\
0 & 0 & 0 & 4 \cdot 3 - \lambda & 0 \\
0 & 0 & 0 & 0 & 5 \cdot 4 - \lambda
\end{pmatrix}
\end{align}
Now, you might be wondering how we're going to take the determinant of this infinite matrix. We can take
a [[id:122fd244-ffeb-47d0-89ce-bf9bc6f01b70][limit]] of finite matrices to find out what the generalization might be. For instance, the 3-dimensional case looks like this:
\begin{align}
\label{}
\lambda^{2}(2\cdot 1 - \lambda)
\end{align}
(as the very last diagonal entry in the finite case does not extend infinitely, there is no $3 \cdot 2$ term). Now taking some
higher dimensions:
\begin{align}
\label{higher dimensions}
\lambda^{2}(2 \cdot 1 - \lambda)(3 \cdot 2 - \lambda) \\
\lambda^{2}(2 \cdot 1 - \lambda)(3 \cdot 2 - \lambda)(4 \cdot 3 - \lambda) \\
\lambda^{2}(2 \cdot 1 - \lambda)(3 \cdot 2 - \lambda)(4 \cdot 3 - \lambda)(5 \cdot 4 - \lambda)
\end{align}
using inductive reasoning we should expect the infinite form to be:
\begin{align}
\label{}
\det(M) = -\lambda(2 \cdot 1 - \lambda)(3 \cdot 2 - \lambda)(4 \cdot 3 - \lambda)(5 \cdot 4 - \lambda)(6 \cdot 5 - \lambda)\dots
\end{align}
(note that it isn't $\lambda^{2}$ because the very last $-\lambda$ never gets multiplied, and it's negative for that reason too). Note
that if we want $\det(M) = 0$, we need $\lambda = n(n - 1)$ where $n$ is a natural number (including zero). Then we substitute
the $\lambda$ for some $n$ back in; let's use $n = 2$, i.e. $\lambda = 2$, as an example:
\begin{align}
\label{n=2}
\begin{pmatrix}
-2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 4 & 0 & 0 \\
0 & 0 & 0 & 10 & 0 \\
0 & 0 & 0 & 0 & 18
\end{pmatrix}
\begin{pmatrix}
a_{1} \\
a_{2} \\
a_{3} \\
a_{4} \\
a_{5} \\
\end{pmatrix} =
\begin{pmatrix}
0 \\
0 \\
0 \\
0 \\
0
\end{pmatrix}
\end{align}
clearly, when we choose $n = 2$ (so $\lambda = 2$), the second entry $a_{2}$, the coefficient of $x^{1}$, is free, and the rest of the
entries of the eigenvector must be zero. So for a given $n$, the resulting eigenfunction is $ax^{n-1}$ for any value $a$. This is
one of the solutions to this differential equation.
It turns out there's another solution in a space that the Taylor basis does not span (a power $x^{m}$ has eigenvalue
$m(m+1)$ here, and $m(m+1) = n(n-1)$ has the second root $m = -n$), but I'll leave it as an exercise to find the other
solution using this method, by extending the basis to include other kinds of functions. Note that one can also use the
[[id:262ca511-432f-404f-8320-09a2afe1dfb7][Fourier Transform]] to work in a Fourier eigenbasis instead; the method generalizes easily.
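As a numerical sanity check, here is a sketch using NumPy on a finite truncation: build the truncated $D$ and $R$
matrices, form $M = DRD$, and confirm the diagonal of eigenvalues. Note the code is 0-indexed, so slot $n$ holds the
coefficient of $x^{n}$ and carries the eigenvalue $n(n+1)$, which is the text's $\lambda = n(n-1)$ shifted by one index.
#+begin_src python
import numpy as np

N = 8  # truncation size; slot k is the coefficient of x^k

# D: the power rule, sending the coefficient of x^k to k times the
# slot of x^{k-1} (increasing entries on the superdiagonal).
D = np.diag(np.arange(1, N), k=1)

# R: multiplication by r^2, shifting every coefficient down two slots.
R = np.diag(np.ones(N - 2), k=-2)

M = D @ R @ D
print(np.diag(M))  # -> [ 0  2  6 12 20 30 42  0]: n(n+1), top corner truncated

# Eigenvector check: f = x^1 (slot 1) should satisfy M f = 2 f.
f = np.zeros(N)
f[1] = 1.0
print(M @ f)  # -> 2 * f, so x is an eigenfunction with eigenvalue 2
#+end_src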