Bill Hall
I'm Bill Hall, commonly known as Ginger Bill.
Creator of the Odin programming language.
Working with JangaFX on numerous projects.
With a background in Quantum Mechanics,
specifically Quantum Metrology.
Humans are not good at simulating code in their heads. Let the computer do what it's going to do anyway: run the code!
WhiteBox is a software development tool that compiles, runs, and "debugs" your C(++) code live to report its execution patterns and data transforms.
Engineered entirely from scratch for light-speed performance, featuring a modern and robust interface.
The RAD Debugger is a native, user-mode, multi-process, graphical debugger.
The RAD Linker is a new high-performance linker for generating x64 PE/COFF binaries. It is designed to be very fast when creating gigantic executables.
Odin is a general-purpose programming language with distinct typing built for high performance, modern systems and data-oriented programming.
Odin is the C alternative for the Joy of Programming.
Handmade Hero was an ongoing project by Casey Muratori to create a professional-quality game, accompanied by videos that explain every single line of its source code. The series began on November 17th, 2014.
Handmade Hero - Announcement Trailer
Text from the trailer:
When I was little, games were made from scratch by people who loved them. There were no licensed engines. Each pixely little hero was coded by hand. My thirty years of game programming were inspired by this way of life, and I believe it is worth passing on. So I am starting a project to help preserve it.
Casey is a programmer who has specialized in game development and game engine research.
He's worked on numerous projects in the past, including Granny, Bink 2, and the game The Witness.
Beyond Handmade Hero, he's known for popularizing Immediate Mode GUIs, n-way linear quaternion blending, and implementing the GJK algorithm.
His most recent educational endeavour is the Computer, Enhance! Substack series.
Thank you, Casey.
But there is something else that many of us neglect, or forget is even a tool in our metaphorical toolbox…
How do you know you don't need something if you don't know it exists? This is fundamentally the problem of unknown unknowns.
This is not a lesson. It is an overview of some mathematical topics I expect many programmers may not know anything about—with the main objective that you should go out and learn more mathematics to broaden your (mathematical) toolbox to become better at your craft.
You've probably wanted a "smooth" transition to a "target value" before:
a = lerp(a, b, k)
a = a + (b - a) * k
a += (b - a) * k
But this is frame-rate dependent. So you've probably tried multiplying it by dt:
dt := curr_frame_time - prev_frame_time
a += (b - a) * k * dt
A good approximation, but it is still subtly frame-rate dependent.
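To see the dependence concretely, here is a minimal Odin sketch of my own (the values k = 5 and b = 1 are arbitrary) that runs this update for one simulated second at two different frame rates:

```odin
package main

import "core:fmt"

main :: proc() {
	b: f32 = 1.0
	k: f32 = 5.0

	rates := [2]f32{30, 120}
	for fps in rates {
		a: f32 = 0.0
		dt := 1.0 / fps
		// Run the naive update for one simulated second.
		for _ in 0..<int(fps) {
			a += (b - a) * k * dt
		}
		fmt.printf("fps = %3.0f: a = %.6f\n", fps, a)
	}
	// The results differ even though the same wall-clock time
	// elapsed in both runs: the update is frame-rate dependent.
}
```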
Let's use this as an approximation, or an ansatz (an educated initial guess):
\[ \Delta a = (b - a(t)) \cdot k \Delta t \]
\[ \frac{\Delta a}{\Delta t} = (b - a(t)) \cdot k \]
\[ \frac{da}{dt} = (b - a(t)) \cdot k \]
This is a first-order linear ordinary differential equation.
Solving a differential equation can be very difficult, but luckily there is a tool to help us out known as WolframAlpha, with its solution being:
\[ a(t) = b + c_1 \cdot e^{-k t} \]
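As a quick sanity check (my addition), differentiating this general solution recovers the original differential equation:

\[ \frac{da}{dt} = -k \, c_1 e^{-k t} = \left( b - \left( b + c_1 e^{-k t} \right) \right) \cdot k = (b - a(t)) \cdot k \]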
Solving so that \(a(0)\) is well defined:
\[ a(t) = a(0) \cdot e^{-k t} + b \cdot (1 - e^{-k t}) \]
\[ a(t) = \text{lerp} \left (a(0),\, b,\, 1-e^{-k t} \right ) \]
However, we wanted this in a discrete differential form which we can increment with each iteration. So let's first slightly rearrange it:
\[ a(t) = a(0) + (b - a(0)) \cdot (1 - e^{-k t}) \]
Notice that the lerp parameter is this exponential \(1 - e^{-k t}\).
Why don't we try the \(\Delta t\) version of it?
\[ a(0 + \Delta t) = a(0) + (b - a(0)) \cdot (1 - e^{-k \Delta t}) \]
\[ \Delta a_{t=0} = (b - a_{t=0}) \cdot (1 - e^{-k \Delta t}) \]
Since we chose to arbitrarily evaluate at \(a(0)\), we know it will hold \(\forall a\), thus:
\[ \Delta a = (b - a) \cdot (1 - e^{-k \Delta t}) \]
This is the actual solution, which gives true frame-rate independence.
But why does it look so different from the original?
Well the "trick" is to know how that lerping parameter actually looks like when you expand it as a Taylor Series:
\[ 1 - e^{-k \Delta t} = k \Delta t - \frac{(k \Delta t)^2}{2} + \frac{(k \Delta t)^3}{6} - \frac{(k \Delta t)^4}{24} + \cdots \]
n.b. This Taylor series is evaluated around \(0\); this is sometimes called a Maclaurin series.
If we use that Taylor series expansion with our solution:
\[ \Delta a = (b - a) \cdot \left ( k \Delta t - \frac{(k \Delta t)^2}{2} + \frac{(k \Delta t)^3}{6} - \frac{(k \Delta t)^4}{24} + \cdots \right ) \]
This means that the original equation was just a lower-order approximation of the "true" equation.
\[ \Delta a = (b - a) \cdot \left ( k \Delta t + \mathcal{O} ((k \Delta t)^2) \right ) \]
And finally back in code:
a += (b - a) * (1.0 - exp(-k * dt))

or, using the truncated Taylor expansion:

s := k * dt
a += (b - a) * (s - s*s/2.0 + s*s*s/6.0 - s*s*s*s/24.0) // + higher-order terms
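As a self-contained Odin example (the procedure name exp_decay is my own, hypothetical helper; it is algebraically identical to the line above):

```odin
package main

import "core:fmt"
import "core:math"

// Frame-rate independent exponential smoothing of a towards b.
// b + (a - b) * exp(-k*dt) is algebraically the same as
// a + (b - a) * (1 - exp(-k*dt)).
exp_decay :: proc(a, b, k, dt: f32) -> f32 {
	return b + (a - b) * math.exp(-k * dt)
}

main :: proc() {
	a: f32 = 0.0
	b: f32 = 1.0
	k: f32 = 5.0
	dt: f32 = 1.0 / 60.0
	for _ in 0..<10 {
		a = exp_decay(a, b, k, dt)
	}
	fmt.printf("a after 10 frames: %.6f\n", a)
}
```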
Assuming \(k = 1\), comparing the two lerp factors, \(k \Delta t\) (ansatz) and \(1 - e^{-k \Delta t}\):

| Frame Rate (\(1/\Delta t\)) | Ansatz Method | Better Method | Difference (%) |
|---|---|---|---|
| 10 | 0.1 | 0.095162582 | 5.083 |
| 15 | 0.066667 | 0.064493015 | 3.370 |
| 20 | 0.05 | 0.048770575 | 2.520 |
| 24 | 0.041667 | 0.040810543 | 2.097 |
| 30 | 0.033333 | 0.0327839 | 1.675 |
| 48 | 0.020833 | 0.020617819 | 1.045 |
| 60 | 0.016667 | 0.016528546 | 0.835 |
| 120 | 0.008333 | 0.008298707 | 0.417 |
| 144 | 0.006944 | 0.006920388 | 0.347 |
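These values can be reproduced directly; a minimal sketch of my own:

```odin
package main

import "core:fmt"
import "core:math"

main :: proc() {
	k := 1.0
	rates := []f64{10, 15, 20, 24, 30, 48, 60, 120, 144}
	for fps in rates {
		dt := 1.0 / fps
		ansatz := k * dt                  // lower-order approximation
		better := 1.0 - math.exp(-k * dt) // true lerp factor
		diff := 100.0 * (ansatz - better) / better
		fmt.printf("%4.0f | %.6f | %.9f | %.3f%%\n", fps, ansatz, better, diff)
	}
}
```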
This is a good solution if and only if you never change what that timestep is fixed to. However, you will probably need to change it.
But making sure you are doing the correct integration in the first place can help get you a more appropriate answer to the problem.
Just relying on a fixed timestep is a bodge.
STEP :: 1.0/6000.0 // fixed timestep

accumulator += dt
for accumulator >= STEP {
	a += (b - a) * (1.0 - exp(-k * STEP))
	accumulator -= STEP
}
Is this the only way? No. In fact, there are many more implicit and explicit methods for numerical integration:
en.wikipedia.org/wiki/Explicit_and_implicit_methods
en.wikipedia.org/wiki/Numerical_methods_for_ordinary_differential_equations
\[ \Delta a = \left (b(t) - a(t) \right ) \cdot k \Delta t \qquad \text{(ansatz)} \]
\[ a(t) = c_1 e^{-kt} + e^{-kt} \int_0^t e^{k \tau} b(\tau) \, k \, d \tau \]
Let \(b(t) = \sin(t) \)
\[ a(t) = c_1 \, e^{-kt} + \frac{ k^2 \sin(t) - k \cos(t) }{k^2 + 1} \]
\[ \Delta a = \left ( a - \frac{ k^2 \sin(t) - k \cos(t) }{k^2 + 1} \right ) \left ( e^{-k \Delta t} - 1 \right ) + \frac{ k^2 \left ( \sin(t + \Delta t) - \sin(t) \right ) - k \left ( \cos(t + \Delta t) - \cos(t) \right ) }{k^2 + 1} \]
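In code, a common practical shortcut (my sketch, not something from the talk) is to sample the moving target once per frame and reuse the constant-target update; the decay factor stays exact, but \(b(t)\) is treated as piecewise constant:

```odin
package main

import "core:fmt"
import "core:math"

main :: proc() {
	k  := 4.0
	a  := 0.0
	t  := 0.0
	dt := 1.0 / 60.0
	for _ in 0..<120 {
		b := math.sin(t)                        // moving target b(t) = sin(t)
		a += (b - a) * (1.0 - math.exp(-k*dt))  // b held constant for this frame
		t += dt
	}
	fmt.printf("a(%.2f) = %.6f\n", t, a)
}
```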
An aspect of linear algebra that many people overlook:
The equivalence between Matrices & Graphs
Column 0 (the 1st column) corresponds to the outgoing edges from node/vertex-0.
n.b. I am using a column-major convention, i.e. \(\mathbf{M}\vec{a} = \vec{b}\)
Similarly, row 0 (the 1st row) corresponds to the incoming edges to node/vertex-0.
Now with the nodes/vertices explicitly labelled.
The name for this kind of matrix is an adjacency matrix.
If these weights are normalized so that each column sums to \(1\), they can be used to represent the probability of traversing from one node to another.
The matrix and graph represent the state of a Markov Chain.
Applying the matrix represents a step of a "walk" on the graph.
One Step
\[ \begin{bmatrix} 0.6 & 0.7 & 0.2 \\ 0.4 & 0 & 0.8 \\ 0 & 0.3 & 0 \\ \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} = \mathbf{M} \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} = \begin{bmatrix} 0.6 \\ 0.4 \\ 0 \\ \end{bmatrix} \]
Two Steps
\[ \mathbf{M}^2 \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} = \begin{bmatrix} 0.64 \\ 0.24 \\ 0.12 \\ \end{bmatrix} \]
Three Steps
\[ \mathbf{M}^3 \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} = \begin{bmatrix} 0.576 \\ 0.352 \\ 0.072 \\ \end{bmatrix} \]
"Infinite" Steps
\[ \lim_{n\to\infty} \mathbf{M}^n \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} = \begin{bmatrix} 0.59375 \\ 0.3125 \\ 0.09375 \\ \end{bmatrix} = \frac{1}{32} \begin{bmatrix} 19 \\ 10 \\ 3 \\ \end{bmatrix} = \vec{v} \] This is called the "steady state" vector.
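A small Odin sketch of my own that walks the example chain and shows this convergence, using Odin's built-in matrix type:

```odin
package main

import "core:fmt"

main :: proc() {
	// Transition matrix from the example (each column sums to 1).
	M := matrix[3, 3]f64{
		0.6, 0.7, 0.2,
		0.4, 0.0, 0.8,
		0.0, 0.3, 0.0,
	}
	v := [3]f64{1, 0, 0} // start at node 0

	for step in 1..=64 {
		v = M * v // one step of the walk
		if step <= 3 || step == 64 {
			fmt.printf("step %2d: %v\n", step, v)
		}
	}
	// v converges towards [19, 10, 3]/32 = [0.59375, 0.3125, 0.09375].
}
```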
Not all graphs will have a "steady state". However, if one does exist, the general way to find it is by solving the following equation:
\[ \mathbf{M} \vec{v} = \vec{v} \]
Rearranged into a more conventional form:
\[ (\mathbf{M} - \mathbb{I}) \vec{v} = \mathbb{0} \]
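As a worked check (my addition) for the example matrix:

\[ (\mathbf{M} - \mathbb{I}) \vec{v} = \begin{bmatrix} -0.4 & 0.7 & 0.2 \\ 0.4 & -1 & 0.8 \\ 0 & 0.3 & -1 \\ \end{bmatrix} \vec{v} = \mathbb{0} \]

The third row gives \(v_3 = 0.3 v_2\); substituting into the first row gives \(-0.4 v_1 + 0.76 v_2 = 0\), i.e. \(v_1 = 1.9 v_2\). Normalizing so that \(v_1 + v_2 + v_3 = 1\) gives \(v_2 (1.9 + 1 + 0.3) = 1\), so \(v_2 = \frac{10}{32}\) and \(\vec{v} = \frac{1}{32}(19, 10, 3)\), matching the limit above.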
Steady-state vectors are actually a special case of something called an eigenvector.
\[ \mathbf{M} \vec{v}_i = \lambda_i \vec{v}_i \]
\[ (\mathbf{M} - \lambda_i \mathbb{I}) \vec{v}_i = \mathbb{0} \]
where \(\lambda_i\) and \(\vec{v}_i\) represent the \(i\)th eigenvalue and eigenvector, respectively.
The steady state corresponds to the (normalized) eigenvector associated with the eigenvalue equal to \(1\); the other eigenvalues must satisfy \(|\lambda_i| < 1\).
n.b. A matrix having an eigenvalue with \(|\lambda_i| = 1\) does not by itself imply a steady state.
A directed graph is strongly connected if every node can be reached from every other node. If this is not possible, the graph is not strongly connected.
Matrices that correspond to these strongly connected graphs are called irreducible. All other non-negative matrices are called reducible.
n.b. To keep this simple, the weight for each edge is assumed to be 1, but any non-negative weight could be used.
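One way to test this property from an adjacency matrix is a Warshall-style transitive closure; a sketch of my own (not from the talk), using the convention adj[u][v] = true for an edge u → v:

```odin
package main

import "core:fmt"

N :: 3

// A directed graph is strongly connected iff every node can reach every other.
is_strongly_connected :: proc(adj: [N][N]bool) -> bool {
	reach := adj
	for i in 0..<N do reach[i][i] = true
	// Warshall's algorithm: compute the transitive closure.
	for k in 0..<N {
		for i in 0..<N {
			for j in 0..<N {
				reach[i][j] = reach[i][j] || (reach[i][k] && reach[k][j])
			}
		}
	}
	for i in 0..<N {
		for j in 0..<N {
			if !reach[i][j] do return false
		}
	}
	return true
}

main :: proc() {
	adj: [N][N]bool
	adj[0][1] = true // 0 -> 1
	adj[1][2] = true // 1 -> 2
	adj[2][0] = true // 2 -> 0: a cycle, so strongly connected
	fmt.println(is_strongly_connected(adj)) // true
}
```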
Although not all directed graphs are strongly connected, in our previous example, the graph can be partitioned into strongly connected components (subgraphs).
I have labelled the nodes on this graph and constructed its corresponding matrix.
The corresponding matrix of our graph can be reduced to a simpler form!
Its diagonal comprises blocks whose graphs are strongly connected. (That is, the blocks are irreducible).
In general, this block-matrix structure is called the Frobenius Normal Form (sometimes called the Rational Canonical Form).
\[ \begin{bmatrix} M_1 & 0 \\ M_{1\to 2} & M_2 \\ \end{bmatrix} \]
where \(M_i\) is an irreducible submatrix/subgraph
\[ \begin{bmatrix} M_1 & 0 & \cdots & 0 \\ M_{1 \to 2} & M_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ M_{1 \to n} & M_{2\to n} & \cdots & M_n \\ \end{bmatrix} \]
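For instance (a small example of my own), take a 4-node graph where nodes \(\{0, 1\}\) form one strongly connected component, nodes \(\{2, 3\}\) form another, and a single edge \(0 \to 2\) links them. Using the same column-major convention, its adjacency matrix is already in this block form:

\[ \left[ \begin{array}{cc|cc} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ \hline 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \end{array} \right] = \begin{bmatrix} M_1 & 0 \\ M_{1 \to 2} & M_2 \\ \end{bmatrix} \]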
What this could have been...
The more you know, the more you can apply it to, and the more you are able to do.