The tradeoff for the understanding gleaned from using a low-level language like C is the lack of features. When writing code for demonstration purposes, that's not much of an issue. But when you need to write an actual application, it is. One of the notable features missing from C is compile-time generics.

Compile-time generics eliminate a kind of code duplication that arises under the following constraints:

- There exist multiple classes that implement the same trait
- There exists code that depends on those classes only through the implemented trait

In this article, I want to show you a way of replicating generics in C, specifically for an operation trait that all of the element-wise matrix operations will depend on.
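To make the idea concrete, here is a minimal sketch of one way this can look: an X-macro-style definition that stamps out one statically dispatched function per operation at compile time. The names (`DEFINE_ELEMENTWISE`, `mat_add`, `mat_mul`) are hypothetical illustrations, not necessarily the ones the article settles on:

```c
#include <stddef.h>

/* Generate one element-wise function per operation at compile time;
 * each call site is then statically dispatched, much like a
 * monomorphized generic instantiation. */
#define DEFINE_ELEMENTWISE(name, expr)                               \
    static void name(const double *a, const double *b, double *out, \
                     size_t n) {                                     \
        for (size_t i = 0; i < n; i++) {                            \
            double x = a[i], y = b[i];                               \
            out[i] = (expr);                                         \
        }                                                            \
    }

/* Two "implementations of the trait": addition and multiplication. */
DEFINE_ELEMENTWISE(mat_add, x + y)
DEFINE_ELEMENTWISE(mat_mul, x * y)
```

After expansion, `mat_add(a, b, out, n)` and `mat_mul(a, b, out, n)` are ordinary C functions; the duplication lives in the macro rather than in hand-copied loops.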
Welcome to Part 2 / N of my Machine Learning Fundamentals in C series! If you haven't already, go through Part 1; in this article, I'm going to assume you're familiar with the concepts covered there.
## Beyond one variable

Our first machine learning model, the Univariate Linear Regression Model, was technically machine learning, but if we're being honest with ourselves, it's not very impressive.
In fact, our whole gradient descent algorithm wasn't even necessary, since there is a closed-form solution for the optimal parameters $w$ and $b$:
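For a dataset $\{(x_i, y_i)\}_{i=1}^m$ with sample means $\bar{x}$ and $\bar{y}$, the standard least-squares result is

$$ w = \frac{\sum_{i=1}^m (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^m (x_i - \bar{x})^2}, \qquad b = \bar{y} - w\bar{x} $$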
One of the most striking elements of Silicon Valley to outsiders is its productivity culture. Whereas most people in most places are perfectly content doing their job as it is, Silicon Valley people won't find peace until they've optimized their every habit and system to extract that extra iota of productivity per unit time. I am one of those people, and this article is about how I revolutionized my productivity by switching from Neovim org-mode to Obsidian.
Welcome to my N-part series (N to be determined) on Machine Learning Fundamentals in C. This series will be all about fundamentals, which I feel are missing from many of the online resources related to machine learning and neural networks. When I initially set out to learn about neural networks, all I found were articles that either

- showed a couple of lines of TensorFlow code that fetched training data, trained the model, and ran predictions, or
- gave lower-level implementation details, but left me with more questions than answers.

None of them dragged you through the (rather boring) elementary concepts that build up to the Neuron.
If you're a nerd, and you've been around Macs for a while, you might remember AppleScript. It was a language developed by Apple to allow intermediate-to-advanced users to write simple scripts that could control Mac applications. It was deliberately designed to resemble the English language, so accessing a pixel would be written as

```applescript
pixel 7 of row 3 of TIFF image "my bitmap"
```

or even

```applescript
TIFF image "my bitmap"'s 3rd row's 7th pixel
```

Needless to say, there's a good reason modern programming languages don't look like this: it doesn't scale.
My previous post (which was honestly created to test out the theme for this site) provided a few code snippets that computed the first $N$ terms of the sum of inverse squares. I wrote the code in my 4 favorite languages: Python, C, Rust, and Haskell. But when I ran the Python code, it was embarrassingly slow: compared to the $\approx 950$ ms it took sequential Rust, Python took 70 seconds! So, in this post, we're going to attempt to get Python to some more reasonable numbers.
Here are some code snippets in various languages that approximate the solution to the Basel Problem:
To begin, $\LaTeX$:
$$ \sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6} = 1.6449340668482264 $$
## Python

```python
def pi_squared_over_6(N: int) -> float:
    return sum(x**(-2) for x in range(1, N))
```

## Rust

```rust
fn pi_squared_over_6(N: u64) -> f64 {
    (1..N).map(|x| 1.0 / ((x * x) as f64)).sum()
}
```

## Haskell

```haskell
piSquaredOver6 :: Integer -> Double -- no capital N in Haskell :(
piSquaredOver6 n = sum $ map (\x -> 1 / fromIntegral (x * x)) [1..n]
```
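The timing post above lists C among the four languages, so for completeness, a straightforward C version might look like the following (a sketch mirroring the other snippets; the signature is my assumption):

## C

```c
#include <stdint.h>

/* Sum the first N-1 terms of the inverse-square series,
 * mirroring the Python version's range(1, N). */
double pi_squared_over_6(uint64_t N) {
    double sum = 0.0;
    for (uint64_t x = 1; x < N; x++) {
        sum += 1.0 / ((double)x * (double)x);
    }
    return sum;
}
```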