Baby.Exe was just born.
Plus: a deep dive into Large Concept Models, a quick intro to interpretability, and so much more.
The first child ever conceived via fully automated IVF was just born.
Let that marinate for a sec. Developed by Conceivable Life Sciences, this new system automates all 23 steps of ICSI (Intracytoplasmic Sperm Injection) - a key process in IVF - using a mix of robotics, AI, and remote operators.
Picture this: a sperm cell is laser-immobilized by an AI, pipetted into an egg by a robot, and triggered from a laptop 3,700 km away.
All five eggs injected via the automated system fertilized successfully. One of those grew into a blastocyst (early-stage embryo), was frozen, thawed, implanted — and 9 months later, a healthy baby boy was born.
Why this matters:
- IVF is expensive and inconsistent. Automation = lower cost, higher standardization.
- It removes human error from one of the most delicate procedures in medicine.
- The embryologist can now work from home. Literally.
If you think growing up was hard, imagine the roast battles this AI-assisted baby’s gonna have to survive. But as a defense, bro is simply built different. Here’s a quiet ode to him:
"They pipetted your soul with a neural net, Lasered your seed, no human sweat, A click from New York, a click from L.A., And boom - your whole life booted that day"
And just to take it one step further - cook up your best Yo Momma jokes and drop them in the comments. We’ll feature a few of the best in next week’s issue.
Jokes aside, this week we’re diving into Large Concept Models and cracking open the idea of interpretability.
Let’s get into it.
Large Concept Models as a new frontier
Modern AI models are incredibly good at pattern matching — but if you look inside them, the internal process often feels opaque. Most systems today operate at the raw data level, relying on dense numerical patterns: pixel clusters in images, token embeddings in text, arrays of features few humans could intuitively understand. This makes them powerful, but also fundamentally alien in how they think. When a model predicts or generates something, it’s hard to say what concepts it’s actually using — or if it’s reasoning in a way we would recognize at all.
Large Concept Models (LCMs) aim to change that. Instead of processing the world through disconnected signals, LCMs are trained to reason through human-understandable abstractions. Rather than recognizing a car simply as a statistical arrangement of edges and textures, an LCM would identify structured parts — wheels, metallic body, motion on a road — and relate them to broader concepts like vehicles or transportation. These concepts can then be recombined across different tasks, allowing the model to generalize more reliably and explain its internal decisions in clearer, modular terms.
The broader idea is powerful: if AI can reason in a modular, language-agnostic way, it can become more scalable, extensible, and transparent. Instead of patching massive token streams together and hoping for coherence, we might one day build systems that understand and evolve ideas the way humans do — sketching rough plans first, then filling in the details naturally over time. Reasoning at the concept level could also lead to models that adapt better to new environments, transfer knowledge more efficiently, and show fewer catastrophic failures when faced with unfamiliar inputs. In domains like science, safety-critical applications, and autonomous agents, these properties aren't just nice-to-haves — they could be essential.
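To make that a bit more concrete, here’s a toy PyTorch sketch of the core idea: instead of predicting the next token, a small model predicts the next concept embedding (think one vector per sentence or idea). Everything below is an illustrative placeholder, not Meta’s actual LCM recipe; in the published work, the concept space comes from a pretrained sentence encoder/decoder (SONAR) that maps text into these vectors and back.

```python
# Toy sketch of concept-level prediction: model the *next concept vector*,
# not the next token. All sizes and the random "history" are illustrative.
import torch
import torch.nn as nn

d_concept = 256  # size of one "concept" (sentence-level) embedding - illustrative

class ConceptPredictor(nn.Module):
    def __init__(self, d_concept: int, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_concept, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_concept, d_concept)  # regress the next concept vector

    def forward(self, concepts: torch.Tensor) -> torch.Tensor:
        # concepts: (batch, seq_len, d_concept), one vector per sentence/idea in the context
        hidden = self.backbone(concepts)
        return self.head(hidden[:, -1])  # predict the embedding of the *next* concept

model = ConceptPredictor(d_concept)
history = torch.randn(1, 5, d_concept)  # stand-in embeddings for 5 previous "concepts"
next_concept = model(history)           # (1, d_concept); a separate decoder would turn this back into text
```

The payoff of working at this level is that the sequence being modeled is short and semantically meaningful, which is exactly what makes the reasoning easier to inspect and to reuse across languages and tasks.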
Even OGs like Yann LeCun (Meta's Chief AI Scientist) are saying that LLMs alone aren't the path to AGI. We linked a great video where Yann breaks this down.
Inside the Race to Make AI Understandable
As AI systems grow more powerful, a deeper problem is surfacing: we don't really understand how they work. Models today can pass medical exams, write essays, and predict financial risks - yet when asked why they made a decision, even their creators often can’t fully explain it. This is the challenge of AI interpretability. It’s not just about explaining outputs after the fact; it’s about opening up the black box and making the model’s internal reasoning visible, editable, and, ultimately, trustworthy. In fields like healthcare, finance, and law, where mistakes carry real-world consequences, interpretability isn’t just nice to have - it’s essential.
At its core, interpretability is about translating the tangled spaghetti of neurons, activations, and weights into something humans can reason about. It's the difference between a model saying "trust me" and being able to actually show its work. Without it, debugging becomes guesswork, fairness audits become impossible, and public trust erodes. With it, we can build AI systems that are safer, more reliable, and aligned with human values - not just smarter.
A few months ago, we covered Anthropic’s early experiments opening up the "mind" of an LLM (yes, they did surgery on Claude). This week, we’re back on that thread - spotlighting a new piece from Anthropic’s CEO, Dario Amodei, who lays out why interpretability is now a race against time.
Key points: AI models are "grown, not built," meaning we often don't fully design - or even understand - their internal structures. The risks we worry about most (like model deception, power-seeking, or jailbreaks) are fueled by this opacity. And while recent breakthroughs - like sparse autoencoders and tracing circuits - give us the first real tools to peer inside, the pace of AI advancement means we may only have a small window left to catch up.
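For the curious, a sparse autoencoder is just a small network trained to rewrite a model’s internal activations as a combination of a few active "features" at a time, which makes those activations far easier to label and inspect. Here’s a minimal sketch of the idea; the layer sizes and the L1 coefficient are made-up illustrative values, not anyone’s production setup.

```python
# Minimal sparse autoencoder (SAE) sketch for interpretability.
# Assumes you already have hidden activations pulled from some model;
# here they are random stand-ins, and all sizes are illustrative.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # expand activations into many candidate "features"
        self.decoder = nn.Linear(d_features, d_model)  # reconstruct the original activation from them

    def forward(self, acts: torch.Tensor):
        features = torch.relu(self.encoder(acts))      # non-negative, mostly-zero feature activations
        recon = self.decoder(features)
        return recon, features

sae = SparseAutoencoder(d_model=768, d_features=8192)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)

acts = torch.randn(64, 768)  # stand-in for real hidden activations
recon, features = sae(acts)
loss = ((recon - acts) ** 2).mean() + 1e-3 * features.abs().mean()  # reconstruction + L1 sparsity penalty
loss.backward()
opt.step()
```

The sparsity penalty is the whole trick: it pushes most features to zero on any given input, so the handful that do light up tend to correspond to something a human can actually name.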
It’s a fascinating, fast-moving space - and it’s not just research labs that are getting involved. Companies like Goodfire are emerging to turn cutting-edge interpretability work into real-world platforms. Goodfire just raised a $50M Series A to develop Ember, a system designed to decode and control the internal thoughts of AI models. Their goal? Build an MRI for AI — one that helps enterprises, researchers, and society actually see, understand, and steer what’s happening inside the black box.
We’ve linked a few open positions below if you are interested.
For the Students, By the Degens ⚡️
Skip the lecture, catch the alpha. Students from Harvard, Princeton, Stanford, MIT, Berkeley and more trust us to turn the chaos of frontier tech into sharp, digestible insights — every week, in under 5 minutes.
This is your go-to power-up for Web3, AI, and what’s next.
EigenLayer Summer Fellowship Just Landed
EigenLayer has officially launched its Summer Fellowship program - an 8-week, in-person experience for ambitious builders at the frontier of crypto infrastructure and AI.
A dream combo if you want to be at this very spicy intersection. Fellows will work alongside the Eigen team to prototype agent-powered apps, experiment with AVS tooling, and contribute to the growing restaking ecosystem.
- Seattle (on-site)
- 8 weeks | Summer 2025
- $5,000/month + housing + meals
- Mentorship from Eigen core contributors
With fast build cycles, weekly demos, and potential pathways to full-time roles, this fellowship is designed for those ready to move quickly and build with purpose.
Llama Con + Meta Connect
Meta’s throwing its first-ever developer conference for the open-source Llama fam.
Expect updates on model development, tooling, and more. Whether you’re solo hacking or shipping at scale, this one’s for the builders.
📍 Online | Save the date → April 29
The XR and AI glasses crew gets their moment this fall. Meta Connect is back — with updates on Meta Horizon, new mixed reality toys, and what’s next in the metaverse stack.
If you’re into wearables, worldbuilding, or spatial computing, this one’s worth keeping on your radar.
📍 Online & In-Person | September 17–18
Job & Internship Opportunities
Product Designer Intern (UK) - Apply Here | Palantir
Research Scientist (Field) - Apply Here | Goodfire
Research Fellow (All Teams) - Apply Here | Goodfire
Website Engineer - Apply Here | ElevenLabs
Data Scientist & Engineer - Apply Here | ElevenLabs
Product Designer Intern (NY) - Apply Here | Palantir
Applied AI Engineer, Paris (Internship) - Apply Here | Mistral
Brand Designer - Apply Here | Browserbase
A range of roles from YC-backed Humanloop - Apply Here
⚡️ Your Job Search, Optimized
We don’t just talk about building — we help you get in the room.
From protocol labs to VC firms, our students have landed roles at places like Coinbase, Pantera, dYdX, Solayer, and more.
Whether you’re polishing your resume, preparing for a technical screen, or figuring out where you actually want to be — we’ve got your back. 👇
Unsubscribe anytime — but if it’s not hitting, slide into our DMs. We’re building this with you, not for you.
USC wins the first-ever sperm race
There’s a new championship making waves across the internet — and no, it’s not March Madness.
It’s sperm racing.
Yes, really. It was livestreamed in downtown LA (ofc it’s LA), where high-resolution cameras tracked sperm cells sprinting through a microscopic obstacle course — complete with leaderboards, biomarker stats, and a surprise Ty Dolla $ign performance.
Welcome to the Sperm Singularity, folks. It was started by 17-year-old founder Eric Zhu to destigmatize male fertility — and somehow managed to turn sperm quality into a spectator sport.
Honestly? It kind of worked.
Besides that… you made it through another weird, wonderful week. We’ll be back next time with more drops and fresh ideas.
Remember: this is the new normal. And you're living it :)