What’s New
36 expeditions on record
“Grafana's official blog on observability, AI, and cost optimization.”
“HashiCorp's blog on infrastructure security, automation, and multi-cloud management.”
“Datadog's engineering blog: monitoring everything, everywhere, all at once.”
“Meta's engineering blog: scaling infrastructure with AI, security, and performance.”
“Dropbox engineering scales storage and AI with pragmatic deep dives.”
“Lyft's engineering blog: scaling marketplace ML and experimentation.”
“Spotify engineers share practical AI and automation insights.”
“Slack’s engineering blog: scaling reliability, security, and AI agents.”
“GitHub's official engineering blog on tools, community, and developer policy.”
“Discord's engineering blog: scaling chat, shipping features, and postmortems.”
“Netflix's engineering blog: scaling entertainment with machine learning and media infrastructure.”
“Building and running distributed apps on Fly.io's edge cloud.”
“Cloudflare's engineering blog on building a secure, agentic internet.”
“Tailscale's engineering blog: practical networking and security for modern infrastructure.”
“Solid local LLM runner, but the repo is just a marketing page.”
“The de facto RAG framework, but it's a bloated kitchen sink.”
“Overengineered glue for LLMs that became an industry standard despite itself.”
“Solid self-hosted Copilot alternative for teams that want GPU control.”
“Essential for training large models, but overkill for most teams.”
“Overengineered LLM gateway that solves problems you probably don't have.”
“The de facto standard for open-source LLM fine-tuning, but not for beginners.”
“The de facto standard for transformer models, but bloated and opinionated.”
“The de facto standard for local LLM inference, but this repo is just its scripts.”
“Overengineered SDK for a niche local LLM tool.”
“Maintenance-mode project; use vLLM or SGLang instead.”
“DSPy: The most important LM framework you're not using yet.”
“Fastest LLM finetuning library, but only if you use their hardware or specific GPUs.”
“Solid local AI server, but not the innovation it claims.”
“The de facto standard for high-throughput LLM serving, but not for beginners.”
“Official Mistral inference code: minimal, but don't expect a framework.”
“Overengineered Swiss Army knife that tries to do everything, masters nothing.”
“Overengineered AI gateway that confuses complexity with capability.”
“A neat toy for code-gen agents, but not production-ready.”
“Solid Swiss Army knife for LLM CLI work, but not revolutionary.”
“The architecture that rewrote the decade — every major language model traces its lineage here.”
“The paper that gave the industry vocabulary for hard trade-offs — and proved that 'always wrong sometimes' beats 'sometimes unavailable'.”