Tools of the Trade

2025 July 12th

Bill Hall

Who I am

I'm Bill Hall, commonly known as Ginger Bill.

Creator of the Odin programming language.

Work with JangaFX on numerous different projects.

With a background in Quantum Mechanics, specifically Quantum Metrology.

Why Are We All Here?

Craftsmanship

Craftsmanship

Better Software Showcase

WhiteBox—Andrew Reece

Watch how your code behaves as you write it.

Humans are not good at simulating code in their heads. Let the computer do what it's going to do anyway: run the code!

WhiteBox is a software development tool that compiles, runs, and “debugs” your C(++) code live to report its execution patterns and data transforms.

File Pilot—Vjekoslav Krajačić

Next-gen file explorer.

Engineered entirely from scratch for light-speed performance, featuring a modern and robust interface.

RAD Debugger Project—Ryan Fleury

RAD Debugger

The RAD Debugger is a native, user-mode, multi-process, graphical debugger.




RAD Linker

The RAD Linker is a new high-performance linker for generating x64 PE/COFF binaries. It is designed to be very fast when creating gigantic executables.

Odin Programming Language—Bill Hall

The Data-Oriented Language for Sane Software Development.

Programming Done Right

Odin is a general-purpose programming language with distinct typing built for high performance, modern systems and data-oriented programming.

Odin is the C alternative for the Joy of Programming.

Why This All Exists

Handmade Hero

Handmade Hero

Was an ongoing project by Casey Muratori to create a professional-quality game, accompanied by videos that explain every single line of its source code. The series began on 2014 November 17th.

Text from the trailer:

				When I was little,
				Games were made from scratch
				By people who loved them.

				There were no licensed engines.
				Each pixely little hero was coded by hand.

				My thirty years of game programming
				Were inspired by this way of life
				And I believe it is worth passing on.

				So I am starting a project
				To help preserve it.
			

Casey Muratori

Casey is a programmer who has specialized in game development and game engine research.

He's worked on numerous projects in the past, including Granny, Bink 2, and the game The Witness.

Beyond Handmade Hero, he's known for popularizing Immediate Mode GUIs, n-way linear quaternion blending, and implementing the GJK algorithm.

As his most recent educational endeavour, he has been working on the Computer, Enhance! Substack series.

How Casey Muratori Changed My Life

Thank you, Casey.

Tools of the Trade

Tools of the Trade

But there is something else that many of us neglect, or forget is even a tool in our metaphorical toolbox…

Mathematics

The Lies We Tell Ourselves and Others


How do you know you don't need something if you don't know it exists? This is fundamentally the problem of unknown unknowns.

Let's Learn By Example

WARNING
There will be mathematical notation

What Will Be Briefly Covered


IMPORTANT NOTICE

This is not a lesson. It is an overview of some mathematical topics that I expect many programmers may not know anything about, with the main objective that you go out and learn more mathematics, broaden your (mathematical) toolbox, and become better at your craft.

Calculus—Lerp Smoothing

Calculus—Lerp Smoothing

You've probably wanted a "smooth" transition to a "target value" before:

						a = lerp(a, b, k)
					
						a = a + (b - a) * k
						a += (b - a) * k
					

But this is frame-rate dependent. So you've probably tried multiplying it by dt:

						dt := curr_frame_time - prev_frame_time
						a += (b - a) * k * dt
					

This is a good approximation, but it is still subtly frame-rate dependent.
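To see this concretely, here is a minimal Odin sketch (not from the original slides, with arbitrary example values for \(a\), \(b\), and \(k\)) that runs the naive update for one simulated second at two different frame rates:

package main

import "core:fmt"

// Advance the naive, dt-scaled update `steps` times.
advance_naive :: proc(a, b, k, dt: f64, steps: int) -> f64 {
    a := a
    for _ in 0..<steps {
        a += (b - a) * k * dt
    }
    return a
}

main :: proc() {
    a, b, k := 0.0, 1.0, 5.0 // arbitrary example values
    fmt.println(" 30 Hz:", advance_naive(a, b, k, 1.0/30.0,  30))  // ~0.9958
    fmt.println("120 Hz:", advance_naive(a, b, k, 1.0/120.0, 120)) // ~0.9939
}

One simulated second with the same \(k\), yet two slightly different results.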

Lerp Smoothing—Solving from an Ansatz

Let's use this as an approximation or an ansatz:

\[ \Delta a = (b - a(t)) \cdot k \Delta t \] \[ \frac{\Delta a}{\Delta t} = (b - a(t)) \cdot k \] \[ \frac{da}{dt} = (b - a(t)) \cdot k \]

This is a first-order linear ordinary differential equation.

Solving a differential equation can be very difficult, but luckily there is a tool to help us out known as WolframAlpha, which gives the solution:

\[ a(t) = b + c_1 \cdot e^{-k t} \]
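As a quick sanity check (not on the original slide), differentiating confirms this satisfies the differential equation:

\[ \frac{da}{dt} = -k \cdot c_1 e^{-k t} = \left ( b - (b + c_1 e^{-k t}) \right ) \cdot k = (b - a(t)) \cdot k \]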

Lerp Smoothing—Solving from an Ansatz

\[ a(t) = b + c_1 \cdot e^{-k t} \]

Solving so that \(a(0)\) is well defined (evaluating at \(t = 0\) gives \(c_1 = a(0) - b\)):

\[ a(t) = a(0) \cdot e^{-k t} + b \cdot (1 - e^{-k t}) \] \[ a(t) = \text{lerp} \left (a(0),\, b,\, 1-e^{-k t} \right ) \]

However, we wanted this in a discrete differential form which we can increment with each iteration. So let's first slightly rearrange it: \[ a(t) = a(0) + (b - a(0)) \cdot (1 - e^{-k t}) \]

Lerp Smoothing—Finding the Actual Form

\[ a(t) = a(0) + (b - a(0)) \cdot (1 - e^{-k t}) \]

Notice that the lerp parameter is this exponential \(1 - e^{-k t}\).
Why don't we try the \(\Delta t\) version of it?

\[ a(0 + \Delta t) = a(0) + (b - a(0)) \cdot (1 - e^{-k \Delta t}) \] \[ \Delta a_{t=0} = (b - a_{t=0}) \cdot (1 - e^{-k \Delta t}) \]

Since we chose to arbitrarily evaluate at \(a(0)\), we know it will hold \(\forall a\), thus:

\[ \Delta a = (b - a) \cdot (1 - e^{-k \Delta t}) \]

This is the actual solution, giving true frame-rate independence.

But why does it look so different from the original?

Lerp Smoothing—Taylor Series Expansion

Well, the "trick" is to know what that lerping parameter actually looks like when you expand it as a Taylor series:

\[ 1 - e^{-k \Delta t} = k \Delta t - \frac{(k \Delta t)^2}{2} + \frac{(k \Delta t)^3}{6} - \frac{(k \Delta t)^4}{24} + \cdots \]


n.b. This Taylor series is evaluated around \(0\); such a series is sometimes called a Maclaurin series.

Lerp Smoothing—The Improved Form

If we use that Taylor series expansion with our solution:

\[ \Delta a = (b - a) \cdot \left ( k \Delta t - \frac{(k \Delta t)^2}{2} + \frac{(k \Delta t)^3}{6} - \frac{(k \Delta t)^4}{24} + \cdots \right ) \]

This means that the original equation was just a lower-order approximation of the "true" equation.

\[ \Delta a = (b - a) \cdot \left ( k \Delta t + \mathcal{O} ((k \Delta t)^2) \right ) \]

And finally back in code:

					a += (b - a) * (1.0 - exp(-k * dt))
				
Or expanded:
					s := k * dt
					a += (b - a) * (s - s*s/2.0 + s*s*s/6.0 - s*s*s*s/24.0 + …)
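
As a counterpart to the earlier naive sketch, here is a minimal Odin program (again with arbitrary example values) showing that the \(\exp\)-based update lands on the same value after one simulated second regardless of frame rate:

package main

import "core:fmt"
import "core:math"

// Advance the exponential, frame-rate independent update `steps` times.
advance_exact :: proc(a, b, k, dt: f64, steps: int) -> f64 {
    a := a
    for _ in 0..<steps {
        a += (b - a) * (1.0 - math.exp(-k * dt))
    }
    return a
}

main :: proc() {
    a, b, k := 0.0, 1.0, 5.0 // arbitrary example values
    fmt.println(" 30 Hz:", advance_exact(a, b, k, 1.0/30.0,  30))  // ~0.99326
    fmt.println("120 Hz:", advance_exact(a, b, k, 1.0/120.0, 120)) // ~0.99326
}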
				

Lerp Smoothing—The Improved Form

						a += (b - a) * (1.0 - exp(-k * dt))
					

Lerp Smoothing—Comparing The Factors

Assuming \(k = 1\): comparing the ansatz lerp factor \(k \Delta t\) against the better factor \(1 - e^{-k \Delta t}\).


Frame Rate (1/\(\Delta t\))   Ansatz Method   Better Method   Difference / %
 10                           0.1             0.095162582     5.083
 15                           0.066667        0.064493015     3.370
 20                           0.05            0.048770575     2.520
 24                           0.041667        0.040810543     2.097
 30                           0.033333        0.0327839       1.675
 48                           0.020833        0.020617819     1.045
 60                           0.016667        0.016528546     0.835
120                           0.008333        0.008298707     0.417
144                           0.006944        0.006920388     0.347

Lerp Smoothing—Advanced

What about fixing the timestep (\(\Delta t\))?

This is a good solution if and only if you never change what the timestep is fixed to. However, you will probably need to change it.

But making sure you are doing the correct integration in the first place can help get you a more appropriate answer to the problem.

Just relying on a fixed timestep is a bodge.

Do not bodge your code if you do not have to.

Lerp Smoothing—Fixed Time Step

STEP :: 1.0/6000.0 // fixed timestep (in seconds)

accumulator += dt         // accumulate the variable frame time
for accumulator >= STEP { // consume it in fixed-size steps
    a += (b-a) * (1.0-exp(-k * STEP))
    accumulator -= STEP
}

Lerp Smoothing—More Methods

Does this method always work?

No. In fact, there are many more implicit and explicit methods for numerical integration.
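As one example not covered here, applying the implicit (backward) Euler method to the same differential equation \(\frac{da}{dt} = (b - a) \cdot k\) gives an update that remains stable for any step size:

\[ a_{n+1} = a_n + (b - a_{n+1}) \cdot k \Delta t \quad \implies \quad \Delta a = (b - a_n) \cdot \frac{k \Delta t}{1 + k \Delta t} \]

Or in code:

a += (b - a) * (k*dt / (1.0 + k*dt))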

Lerp Smoothing—Advanced

What if \(b\) wasn't constant?

\[ \Delta a = \left (b(t) - a(t) \right ) \cdot k \Delta t \qquad \text{(ansatz)} \] \[ a(t) = c_1 e^{-kt} + e^{-kt} \int_0^t e^{k \tau} b(\tau) \cdot k \, d \tau \]

Let \(b(t) = \sin(t) \)

\[ a(t) = \left ( a(0) + \frac{k}{k^2 + 1} \right ) e^{-kt} + \frac{ k^2 \sin(t) - k \cos(t) }{k^2 + 1} \] \[ \Delta a = \left ( a + \frac{k}{k^2 + 1} \right ) \left ( e^{-k \Delta t} - 1 \right ) + \frac{ k^2 \sin(\Delta t) - k \cos(\Delta t) + k }{k^2 + 1} \]

Linear Algebra (Matrices & Graphs)

Linear Algebra (Matrices & Graphs)

An aspect of linear algebra that many people overlook:

The equivalence between Matrices & Graphs

\[ \begin{bmatrix} 0.6 & 0.7 & 0.6 \\ 4 & 0 & 1 \\ 0 & 3.1 & 0 \\ \end{bmatrix} \]

Linear Algebra (Matrices & Graphs)

Column 0 (the 1st column) corresponds to the outgoing edges from node/vertex-0.


\[ \begin{bmatrix} \textcolor{#FFA600}{0.6} & 0.7 & 0.6 \\ \textcolor{#008DFF}{4} & 0 & 1 \\ \textcolor{#FF004D}{0} & 3.1 & 0 \\ \end{bmatrix} \]

n.b. I am using a column-major convention, i.e. \(\mathbf{M}\vec{a} = \vec{b}\)

Linear Algebra (Matrices & Graphs)

Similarly, row 0 (the 1st row) corresponds to the incoming edges to node/vertex-0.


\[ \begin{bmatrix} \textcolor{#FFA600}{0.6} & \textcolor{#008DFF}{0.7} & \textcolor{#FF004D}{0.6} \\ 4 & 0 & 1 \\ 0 & 3.1 & 0 \\ \end{bmatrix} \]

Linear Algebra (Matrices & Graphs)

Now with the nodes/vertices explicitly labelled.

The name for this kind of matrix is an adjacency matrix.


\[ \begin{array}{c c} & \begin{array}{c c c} v_0 & v_1 & v_2 \\ \end{array} \\ \begin{array}{c c c} v_0 \\ v_1 \\ v_2 \end{array} & \begin{bmatrix} 0.6 & 0.7 & 0.6 \\ 4 & 0 & 1 \\ 0 & 3.1 & 0 \\ \end{bmatrix} \end{array} \]
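Odin's built-in matrix type makes this easy to play with. Here is a minimal sketch (not from the slides) that reads the outgoing and incoming edges of \(v_0\) straight out of the adjacency matrix above, assuming m[row, col] indexing:

package main

import "core:fmt"

main :: proc() {
    // The adjacency matrix from the slides, using the column-major convention:
    // A[row, col] is the weight of the edge from node `col` to node `row`.
    A: matrix[3, 3]f64
    A[0, 0] = 0.6; A[0, 1] = 0.7; A[0, 2] = 0.6
    A[1, 0] = 4.0; A[1, 1] = 0.0; A[1, 2] = 1.0
    A[2, 0] = 0.0; A[2, 1] = 3.1; A[2, 2] = 0.0

    // Column 0 holds the outgoing edges from v0.
    for row in 0..<3 {
        if A[row, 0] != 0 {
            fmt.printf("outgoing: v0 -> v%d (weight %v)\n", row, A[row, 0])
        }
    }
    // Row 0 holds the incoming edges to v0.
    for col in 0..<3 {
        if A[0, col] != 0 {
            fmt.printf("incoming: v%d -> v0 (weight %v)\n", col, A[0, col])
        }
    }
}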

Markov Chains

If these weights are normalized so that each column sums to \(1\), they can be used to represent the probability of traversing from one node to another.

The matrix and graph together represent a Markov Chain.
Applying the matrix represents taking a step of a "walk" on the graph.

\[ \mathbf{M} = \begin{bmatrix} 0.6 & 0.7 & 0.2 \\ 0.4 & 0 & 0.8 \\ 0 & 0.3 & 0 \\ \end{bmatrix} \]

Markov Chains—Walking/Iteration

One Step

\[ \begin{bmatrix} 0.6 & 0.7 & 0.2 \\ 0.4 & 0 & 0.8 \\ 0 & 0.3 & 0 \\ \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} = \mathbf{M} \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} = \begin{bmatrix} 0.6 \\ 0.4 \\ 0 \\ \end{bmatrix} \]

Two Steps

\[ \mathbf{M}^2 \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} = \begin{bmatrix} 0.64 \\ 0.24 \\ 0.12 \\ \end{bmatrix} \]

Three Steps

\[ \mathbf{M}^3 \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} = \begin{bmatrix} 0.576 \\ 0.352 \\ 0.072 \\ \end{bmatrix} \]
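The same walk can be computed directly with Odin's built-in matrix type. A minimal sketch (not from the slides), where each multiplication by \(\mathbf{M}\) advances the chain by one step, again assuming m[row, col] indexing:

package main

import "core:fmt"

main :: proc() {
    // Transition matrix from the slides; column j holds the
    // outgoing probabilities of node j (each column sums to 1).
    M: matrix[3, 3]f64
    M[0, 0] = 0.6; M[0, 1] = 0.7; M[0, 2] = 0.2
    M[1, 0] = 0.4; M[1, 1] = 0.0; M[1, 2] = 0.8
    M[2, 0] = 0.0; M[2, 1] = 0.3; M[2, 2] = 0.0

    v := [3]f64{1, 0, 0} // start entirely in state 0

    for step in 1..=3 {
        v = M * v // one step of the walk
        fmt.println("after step", step, ":", v)
    }
    // Prints the same vectors as above:
    // {0.6, 0.4, 0}, {0.64, 0.24, 0.12}, {0.576, 0.352, 0.072}
}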

Markov Chains—Steady State

"Infinite" Steps

\[ \lim_{n\to\infty} \mathbf{M}^n \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} = \begin{bmatrix} 0.59375 \\ 0.3125 \\ 0.09375 \\ \end{bmatrix} = \frac{1}{32} \begin{bmatrix} 19 \\ 10 \\ 3 \\ \end{bmatrix} = \vec{v} \] This is called the "steady state" vector.

Not all graphs will have a "steady state". However, if one does exist, the general way to find it is by solving the following equation:

\[ \mathbf{M} \vec{v} = \vec{v} \] Rearranged into a more conventional form: \[ (\mathbf{M} - \mathbb{I}) \vec{v} = \mathbb{0} \]
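As a quick check (not on the original slides), multiplying the steady-state vector from the previous slide by \(\mathbf{M}\) does give the same vector back:

\[ \mathbf{M} \cdot \frac{1}{32} \begin{bmatrix} 19 \\ 10 \\ 3 \end{bmatrix} = \frac{1}{32} \begin{bmatrix} 0.6 \cdot 19 + 0.7 \cdot 10 + 0.2 \cdot 3 \\ 0.4 \cdot 19 + 0 \cdot 10 + 0.8 \cdot 3 \\ 0 \cdot 19 + 0.3 \cdot 10 + 0 \cdot 3 \end{bmatrix} = \frac{1}{32} \begin{bmatrix} 19 \\ 10 \\ 3 \end{bmatrix} \]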

Markov Chains—Eigenvectors

Steady-state vectors are actually a special case of something called an eigenvector.

\[ \mathbf{M} \vec{v_i} = \lambda_i \vec{v_i} \] \[ (\mathbf{M} - \lambda_i \mathbb{I}) \vec{v_i} = \mathbb{0} \] where \(\lambda_i\) and \(\vec{v_i}\) represent the \(i\)th eigenvalue and eigenvector, respectively.


The steady state corresponds to the (normalized) eigenvector associated with the eigenvalue equal to \(1\), and the other eigenvalues must satisfy \(|\lambda_i| < 1\).

n.b. Not all matrices with eigenvalues \(|\lambda_i| = 1\) imply a steady state.
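To make the machinery concrete, here is a small worked example (not from the slides) using a hypothetical 2-state chain:

\[ \mathbf{M} = \begin{bmatrix} 0.9 & 0.5 \\ 0.1 & 0.5 \end{bmatrix} \qquad \det(\mathbf{M} - \lambda \mathbb{I}) = \lambda^2 - 1.4\lambda + 0.4 = (\lambda - 1)(\lambda - 0.4) \]

The eigenvalues are \(\lambda_0 = 1\) and \(\lambda_1 = 0.4\). Solving \((\mathbf{M} - \mathbb{I})\vec{v_0} = \mathbb{0}\) gives the steady state \(\vec{v_0} = \frac{1}{6} \begin{bmatrix} 5 \\ 1 \end{bmatrix}\), and since \(|\lambda_1| < 1\) the walk converges to it.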

Graphs—Connections

A directed graph is strongly connected if every node can be reached from every other node. If this is not possible, the graph is not strongly connected.

Strongly Connected Graph
NOT Strongly Connected Graph

Matrix Irreducibility

Matrices that correspond to strongly connected graphs are called irreducible. All other non-negative matrices are called reducible.


\[ \Large{ \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \\ \end{bmatrix} } \]


n.b. To keep this simple, the weight for each edge is assumed to be 1, but any non-negative weight could be used.
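One way to test this numerically uses the fact that a non-negative \(n \times n\) matrix is irreducible exactly when \((\mathbb{I} + \mathbf{A})^{n-1}\) has no zero entries. A minimal Odin sketch (not from the slides) applying that test to the matrix above:

package main

import "core:fmt"

main :: proc() {
    // Adjacency matrix of the example graph (symmetric, so the
    // row/column convention does not matter here).
    A := matrix[3, 3]f64{
        0, 1, 0,
        1, 0, 1,
        0, 1, 0,
    }

    // 3×3 identity matrix.
    I: matrix[3, 3]f64
    I[0, 0] = 1; I[1, 1] = 1; I[2, 2] = 1

    P := (I + A) * (I + A) // (I + A)^(n-1) with n = 3

    irreducible := true
    for i in 0..<3 {
        for j in 0..<3 {
            if P[i, j] == 0 {
                irreducible = false
            }
        }
    }
    fmt.println("irreducible (strongly connected):", irreducible) // true
}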

Matrix Reducibility

Although not all directed graphs are strongly connected, the graph in our previous example can be partitioned into strongly connected components (subgraphs).

1st strongly connected graph
2nd strongly connected graph

Matrix Reducibility

I have labelled the nodes on this graph and constructed its corresponding matrix.

\[ \Large{ \begin{bmatrix} 0 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ \end{bmatrix} } \]

Matrix Reducibility

The corresponding matrix of our graph can be reduced to a simpler form!

Its diagonal comprises blocks whose graphs are strongly connected. (That is, the blocks are irreducible).

\[ \Large{ \begin{bmatrix} 0 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ \end{bmatrix} } \]
  • edges within the 1st component
  • edges from the 1st to the 2nd component
  • edges within the 2nd component
  • edges from the 2nd to the 1st component (there aren't any!)

Frobenius Normal Form

In general, this block-matrix structure is called the Frobenius Normal Form (sometimes called the Rational Canonical Form).

\[ \begin{bmatrix} M_1 & 0 \\ M_{1\to 2} & M_2 \\ \end{bmatrix} \]

where \(M_i\) is an irreducible submatrix/subgraph

\[ \begin{bmatrix} M_1 & 0 & \cdots & 0 \\ M_{1 \to 2} & M_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & 0 \\ M_{1 \to n} & M_{2\to n} & \cdots & M_n \\ \end{bmatrix} \]

I'll Stop Now

What this could have been...

Introduction to Game AI

Introduction to Machine Learning

Introduction to Traffic Analysis

Introduction to Mesh Topology

Endless Fields of Applicability in Programming

Conclusion

Conclusion

  • Ansatz Method
  • Differential Equations
  • Integration
  • Taylor Series
  • Fixed Time Step
  • Numerical Integrators
  • Matrices
  • Directed Graphs
  • Weighted Graphs
  • Markov Chains
  • Eigenvalues & Eigenvectors
  • Matrix Irreducibility
  • Frobenius Normal Form

The more you know, the more you can apply it to, and the more you are able to do.

Thank you

Bill Hall (Ginger Bill)



Slides available at: bsc2025.gingerbill.org

Any questions? Contact me!