Running LLMs Locally on macOS: A Skeptic's Guide

2025-09-08

This post details the author's experience running large language models (LLMs) locally on macOS. While skeptical of the hype surrounding LLMs, the author provides a practical guide to installing and using tools like llama.cpp and LM Studio, and to choosing a model based on size, runtime, quantization, and reasoning capability. They emphasize the privacy benefits and reduced reliance on AI companies that local deployment brings, and share tips such as extending functionality with MCP (Model Context Protocol) servers and managing the context window to prevent information loss. The post also touches on ethical concerns around the current state of the AI industry.
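The llama.cpp workflow the post describes can be sketched in a few shell commands. This is a minimal sketch, not the author's exact setup: it assumes llama.cpp installed via Homebrew, and `model.gguf` is a placeholder for whatever quantized model you download.

```shell
# Install llama.cpp (Homebrew ships prebuilt binaries on macOS)
brew install llama.cpp

# Run a one-off prompt against a local GGUF model.
# model.gguf is a placeholder: pick a quantized model
# (e.g. a Q4_K_M variant) that fits in your machine's RAM.
llama-cli -m model.gguf -p "Explain quantization in one sentence." -n 128

# Or serve a local OpenAI-compatible API, capping the
# context window (-c) to keep memory use predictable.
llama-server -m model.gguf -c 4096 --port 8080
```

LM Studio wraps the same idea in a GUI; the trade-off the post weighs (model size vs. quantization level vs. available RAM) applies either way.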


Tailscale: A Surprisingly Useful VPN Alternative

2025-03-05

The author shares their experience with Tailscale, a mesh-VPN alternative to traditional remote-access setups. Frustrated by CGNAT blocking port forwarding to a Raspberry Pi, they turned to Tailscale, which solved the problem by creating a virtual private network in which devices are reachable via simple domain names. Beyond that, Tailscale offers unexpected benefits: effortless file transfer between devices (Taildrop), exposing laptop ports for testing web apps from a phone, and acting as a conventional VPN via exit nodes, even integrating with Mullvad for added privacy. The author uses the free tier and recommends Headscale, an open-source implementation of the coordination server.
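The features mentioned above map onto a handful of Tailscale CLI commands. A minimal sketch, assuming the Tailscale client is installed; the device name `raspberrypi` is a placeholder for whatever your tailnet calls the Pi.

```shell
# Bring this machine onto your tailnet (opens a browser login)
sudo tailscale up

# List devices and the MagicDNS names they are reachable under
tailscale status

# Taildrop: copy a file to another device on the tailnet
# ("raspberrypi" is a placeholder device name)
tailscale file cp notes.txt raspberrypi:

# VPN-style use: route all traffic through an exit node
tailscale set --exit-node=raspberrypi
```

Because every device gets a stable name on the tailnet, none of this requires port forwarding, which is exactly what CGNAT was blocking.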
