Igniting Kids' Math Passion Through Storytelling

2025-04-20

This essay explores how storytelling can engage children with mathematics. The author shares personal anecdotes, including using fictional spy stories to weave math concepts into exciting adventures and inventing heroic tales to boost young scouts' confidence and help them overcome challenges. The core argument is that storytelling is far more effective than rote exercises for children, fostering natural curiosity and a deeper understanding of mathematical principles. The author advocates for more story-driven math content to bridge the gap between basic number sense and more advanced concepts.


Demystifying Markov Chain Monte Carlo: A Simple Explanation

2025-04-16

This post provides a clear and accessible explanation of Markov Chain Monte Carlo (MCMC), a powerful technique for sampling from complex probability distributions. Using the analogy of estimating the probabilities of baby names, the author illustrates the core problem MCMC solves. The explanation cleverly relates MCMC to a random walk on a graph, using the stationary distribution theorem to show how to construct a Markov chain whose stationary distribution matches the target distribution. The Metropolis-Hastings algorithm, a common MCMC method, is then introduced and its effectiveness demonstrated.
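As a rough illustration of the Metropolis-Hastings idea summarized above (a minimal sketch, not the post's own code), here is a random-walk sampler targeting a standard normal density known only up to a normalizing constant; the function names `target` and `metropolis_hastings` are this sketch's own:

```python
import math
import random

def target(x):
    # Unnormalized density of a standard normal; MCMC only needs
    # the target up to a constant factor.
    return math.exp(-0.5 * x * x)

def metropolis_hastings(n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + Normal(0, step),
    accept with probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # The proposal is symmetric, so the Hastings correction cancels
        # and the acceptance ratio is just a ratio of target densities.
        if rng.random() < min(1.0, target(proposal) / target(x)):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis_hastings(50_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

With enough steps, the empirical mean and variance of the chain approach those of the standard normal (0 and 1), even though the sampler never evaluates the normalizing constant.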


LLMs Explain Linear Programs: From Side Project to Microsoft Research

2025-02-10

Back in 2020, while working on Google's supply chain, the author developed a side project to help people understand linear programs (LPs). When LPs become complex, understanding their results is challenging even for experts. The author's approach involved interactively modifying the model and diffing the results to explain model behavior, and he found that adding semantic metadata simplified the process. Recently, Microsoft researchers published a paper using Large Language Models (LLMs) to translate natural language queries into structured queries, achieving a similar outcome. The author believes LLMs are a great fit for translating human ambiguity into structured queries, which are then processed by a robust classical optimization system, with the results summarized by the LLM. While the author's early work remained unpublished, he argues that learning to explain these simpler systems is a crucial step toward explaining more complex AI systems.
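The modify-and-diff workflow described above can be sketched on a toy LP (a hypothetical example, not the author's actual Google tooling): solve the model, relax one constraint, and compare the optima to see which resource drives the result. For a two-variable LP the optimum of a bounded problem lies at a vertex, so a few lines of vertex enumeration stand in for a real solver:

```python
from itertools import combinations

def solve_2d_lp(c, constraints):
    """Maximize c[0]*x + c[1]*y subject to a*x + b*y <= rhs for each
    (a, b, rhs) in constraints. Enumerate pairwise intersections of
    constraint lines and keep the best feasible vertex."""
    eps = 1e-9
    best = None
    for (a1, b1, r1), (a2, b2, r2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < eps:
            continue  # parallel constraint lines: no vertex
        x = (r1 * b2 - r2 * b1) / det
        y = (a1 * r2 - a2 * r1) / det
        if all(a * x + b * y <= r + eps for a, b, r in constraints):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, x, y)
    return best  # (objective, x, y), or None if no feasible vertex

# A toy production model: two products, limited machine and labor hours.
base = [(1.0, 2.0, 14.0),    # machine hours: x + 2y <= 14
        (3.0, 1.0, 12.0),    # labor hours:  3x + y <= 12
        (-1.0, 0.0, 0.0),    # x >= 0
        (0.0, -1.0, 0.0)]    # y >= 0
profit = (3.0, 4.0)

before = solve_2d_lp(profit, base)
# "Diff" step: relax the labor-hours constraint and compare optima
# to see how much that resource actually constrains the plan.
relaxed = [(3.0, 1.0, 15.0) if i == 1 else con
           for i, con in enumerate(base)]
after = solve_2d_lp(profit, relaxed)
```

Comparing `before` and `after` shows the objective improving when labor hours are relaxed, which is exactly the kind of explanation the diffing approach surfaces; a production system would use a proper LP solver and attach the semantic metadata the author mentions.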
