Who Owns the Editor?
PLUS: A deep dive into Google's AlphaEvolve, Sakana's Continuous Thought Machine, Grok's meltdown, and so much more.
This week, the VS Code team announced that they’re open-sourcing the GitHub Copilot Chat extension — and integrating its AI features directly into the editor’s core. It’s a quiet post, wrapped in classic OSS language (“community-driven,” “transparent,” “MIT license”) — but make no mistake: this is a strategic pivot, not just a values play.
So why now?
According to the team, prompting is no longer a moat, the AI UX across editors has converged, and the demand for transparency — from users and researchers alike — is too strong to ignore. Add to that the rising security concerns around closed AI devtools, and the choice to open source becomes both ethical and practical.
But here’s the deeper signal: VS Code is making a bid to remain the substrate.
Open-source the AI layer → raise the bar for competitors → shift attention back to the models (and clouds) Microsoft already controls.
Startups like Windsurf, Cursor, and Replit are racing to define what it means to be an “AI-native” IDE — baking agents, fine-tuned models, and conversational workflows into the core experience. And for the first time in years, VS Code risks becoming just the place extensions live, not the future of the editor itself.
By open-sourcing Copilot Chat and baking AI directly into core, Microsoft is shifting posture. Instead of just being the dominant editor, they want to be the open foundation that everyone builds on — from indie agents to enterprise copilots. It’s a bet on ecosystem gravity: if the AI workflows of tomorrow are all composable, prompt-driven, and agent-based, then being the interoperable default matters more than owning any single vertical.
And honestly? This is just the beginning of what went down this week.
If you’ve been watching the space, you know — we’re eating good. Google DeepMind dropped a new architecture that evolves algorithms in the wild. Sakana’s Continuous Thought Machine rethinks cognition itself by making time a first-class citizen. And Grok… reminded us what happens when no one’s watching the alignment layer.
Let’s get into it.
Project Astra, AI Mode, AlphaEvolve — Google’s Full Stack Play
Large language models have shown they can write code. But what if they could discover entirely new algorithms?
That’s the question behind AlphaEvolve, a new evolutionary agent from DeepMind that marks a shift in how AI systems can contribute to both science and engineering. Unlike a traditional code generator, AlphaEvolve is a self-improving architecture: it proposes programs, tests them against formal evaluation metrics, and evolves better versions over time. The loop runs autonomously — driven by a team of models, not a human in the middle. And it’s already made an impact.
AlphaEvolve recovered 0.7% of compute across Google's global data centers by discovering a better scheduling heuristic. It also proposed a Verilog simplification now integrated into an upcoming TPU design, and sped up a key matrix-multiplication kernel in Gemini's training by 23%, shaving roughly 1% off overall training time. In one case, it even discovered an algorithm for multiplying 4x4 complex-valued matrices in 48 scalar multiplications, edging past the 49 achievable with Strassen's 1969 construction, a benchmark that had stood for 56 years.
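To put that last result in perspective, here's a quick back-of-the-envelope count. The 48 figure comes from DeepMind's report; the rest is textbook arithmetic:

```python
# Scalar multiplications needed for a 4x4 matrix product, under different schemes.
def strassen_mults(n):
    """Strassen's 7-multiplication 2x2 scheme, applied recursively (n a power of 2)."""
    return 1 if n == 1 else 7 * strassen_mults(n // 2)

print(4 ** 3)             # 64 -> the naive schoolbook algorithm
print(strassen_mults(4))  # 49 -> Strassen (1969), the previous best
# AlphaEvolve reports a 48-multiplication scheme for complex-valued 4x4 matrices.
```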
What makes AlphaEvolve special isn’t just its results, but its structure. Instead of relying on a single model to generate answers, it distributes the problem across a modular pipeline. A prompt sampler draws from a database of past code solutions. A team of LLMs (like Gemini Flash and Pro) proposes edits, or "diffs." These are applied to base programs, executed, and scored by an evaluator pool. The best-performing candidates are stored and used as inspiration for the next generation. It’s an evolutionary system — not just in metaphor, but in mechanism. And it’s grounded in execution, not just prediction.
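In code, the loop the team describes looks something like this. It's a minimal sketch under our own naming: `llm_propose_diff`, `apply_diff`, and `evaluate` are placeholders standing in for the prompt sampler, LLM ensemble, and evaluator pool, not DeepMind's API:

```python
import random

# A minimal sketch of an AlphaEvolve-style loop (illustrative names, not DeepMind's API):
# a database of scored programs seeds the prompt, an LLM proposes a diff, an evaluator
# runs and scores the result, and strong candidates feed the next generation.
def evolve(seed_program, llm_propose_diff, apply_diff, evaluate,
           generations=100, pool_size=20):
    database = [(evaluate(seed_program), seed_program)]  # (score, program) pairs
    for _ in range(generations):
        # Sample high-scoring programs as in-context "inspiration" for the prompt.
        parents = sorted(database, key=lambda e: e[0], reverse=True)[:pool_size]
        _, base = random.choice(parents)
        diff = llm_propose_diff(base, inspirations=[p for _, p in parents])
        candidate = apply_diff(base, diff)
        try:
            score = evaluate(candidate)  # grounded in execution: run it, measure it
        except Exception:
            continue                     # discard candidates that fail to run
        database.append((score, candidate))
    best_score, best_program = max(database, key=lambda e: e[0])
    return best_program, best_score
```

The important design choice is that `evaluate` actually executes each candidate, so selection pressure comes from measured performance rather than a model grading its own work.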
This kind of architecture — combining generative fluency with symbolic evaluation and iteration — represents a fascinating direction for the future of AI research. It shows how models can be arranged into intelligent systems, where each component plays a role in a broader discovery process. AlphaEvolve doesn’t just guess; it proposes, tests, and learns. And its success suggests that we may see more of these multi-agent, modular architectures emerge — especially in domains where the solution space is too large or subtle for humans to explore alone.
The big shift here is that AI isn’t just helping us use knowledge. It’s helping us create it. When the solution to a problem can be expressed as an algorithm — and that algorithm can be scored automatically — agents like AlphaEvolve can do more than assist. They can search, evolve, and, in some cases, reach beyond what we've found so far.
Besides AlphaEvolve, Google's I/O conference dropped today with a wave of updates across the board, from the new AI Mode in Search to Project Astra, a real-time multimodal assistant that feels straight out of sci-fi. It's clear Google's strategy is shifting: less about product polish, more about platform intelligence.
We’ve linked a few of the most compelling demos for you down below — worth a watch if you want a sense of where things are heading.
Inside the Continuous Thought Machine
It seems only natural that biology would become a source of inspiration for the next generation of AI models. After all, our brains remain the most efficient and versatile learning systems we know. And while the biggest architectural shifts often come from the tech giants, we’re starting to see more breakthroughs emerge from smaller, focused research labs too.
One of the most promising recent examples? Sakana AI’s Continuous Thought Machine (CTM) — a new approach to building better thinking machines by reintroducing one of biology’s most overlooked ingredients: time.
At its core, CTM introduces a new unit of computation: neurons that remember their own past activity and learn to coordinate based on timing — not just activation strength. Traditional artificial neurons output a single scalar (a number), representing how “strongly” they’re firing. CTM neurons, in contrast, incorporate a history of previous states, allowing them to modulate behavior based on how they’ve behaved in the past. This lets the model develop rich internal dynamics — like oscillations, synchrony, or phase alignment — much closer to how real neurons in the brain communicate.
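Here's a rough sketch of that neuron-level idea as we read the public write-up. Sizes and names are ours, and it omits the synapse model, attention machinery, and synchronization readout:

```python
import torch
import torch.nn as nn

# Illustrative sketch of a CTM-style "neuron-level model" (not Sakana's code):
# each neuron applies its own tiny MLP to a window of its recent pre-activation
# history, instead of a pointwise nonlinearity on a single scalar.
class NeuronLevelModels(nn.Module):
    def __init__(self, n_neurons: int, history_len: int, hidden: int = 16):
        super().__init__()
        self.mlps = nn.ModuleList([
            nn.Sequential(nn.Linear(history_len, hidden), nn.SiLU(), nn.Linear(hidden, 1))
            for _ in range(n_neurons)
        ])

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, n_neurons, history_len) of each neuron's recent pre-activations
        outs = [mlp(history[:, i, :]) for i, mlp in enumerate(self.mlps)]
        return torch.cat(outs, dim=-1)  # (batch, n_neurons) post-activations for this tick
```

Run this over many internal ticks and the interesting signal is no longer any single activation snapshot but how neurons' traces line up over time; CTM reads its representation from that pairwise synchronization.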
The result is a model that can “think” through problems over time, rather than snap-deciding. In maze-solving tasks, CTM doesn’t just output the right path — it visually traces it, step by step. In image classification, it shifts attention across an image in a pattern reminiscent of human eye movement. This behavior emerges not because the model was hard-coded to act human, but because its temporal dynamics give it the tools to develop its own internal process — one that’s both interpretable and surprisingly efficient.
What makes CTM exciting isn’t just the biological inspiration — it’s the practical implications. Models like CTM may allow future AI systems to dynamically adjust how long they think based on task complexity, or to reflect uncertainty through evolving internal states rather than brittle confidence scores. It opens the door to adaptive computation — where models don’t just scale up compute uniformly, but decide when and how to use their own time.
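What might "deciding how long to think" look like in practice? One hand-wavy version, not Sakana's implementation, is a halting rule over internal ticks: keep iterating until the predictions are confident enough, so harder inputs naturally get more compute.

```python
import torch

# Illustrative sketch (not Sakana's code): iterate internal "ticks" and stop early
# once prediction entropy drops below a threshold.
def think_adaptively(step, state, x, max_ticks=50, entropy_threshold=0.1):
    """`step` is any callable mapping (state, x) -> (new_state, logits)."""
    for tick in range(max_ticks):
        state, logits = step(state, x)
        probs = torch.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)
        if entropy.max() < entropy_threshold:  # confident enough: stop thinking
            break
    return logits, tick + 1  # the prediction, plus how many ticks it took
```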
Sakana’s approach invites us to rethink one of the most overlooked dimensions in AI: time as a substrate for cognition. If deep learning gave us scale and pattern recognition, temporal dynamics like those in CTM may help unlock reasoning — not as a static function, but as a process unfolding in time.
For the Students, By the Degens ⚡️
Skip the lecture, catch the alpha. Students from Harvard, Princeton, Stanford, MIT, Berkeley and more trust us to turn the chaos of frontier tech into sharp, digestible insights — every week, in under 5 minutes.
This is your go-to power-up for Web3, AI, and what’s next.
Job & Internship Opportunities
Research Scientist (Field) - Apply Here | Goodfire
AI Product Analyst - Apply Here | Newton Research
Data Scientist - Apply Here | Newton Research
Full-Stack Software Engineer - Apply Here | Rentana
Full-Stack Software Engineer - Apply Here | Uplinq
Brand & Marketing Designer - Apply Here | Lakera
Research Fellow (All Teams) - Apply Here | Goodfire
A range of roles from Composio - Apply Here
A range of roles from SylphAI - Apply Here
Partner Success Specialist - Apply Here | Cohere
Software Engineer Intern/Co-op (Fall 2025) - Apply Here | Cohere
GTM Associate - Apply Here | Sana
Product Engineer - Apply Here | Letta
Research Scientist - Apply Here | Letta
Internship - Apply Here | Marvelx.ai
⚡️ Your Job Search, Optimized
We don’t just talk about building — we help you get in the room.
From protocol labs to VC firms, our students have landed roles at places like Coinbase, Pantera, dYdX, Solayer, and more.
Whether you’re polishing your resume, preparing for a technical screen, or figuring out where you actually want to be — we’ve got your back. 👇
Unsubscribe anytime - but if it’s not hitting, slide into our DMs. We’re building this with you, not for you.
Maybe X Is the Problem
This week, Grok — xAI’s chatbot deployed across X — made headlines for parroting Holocaust denial talking points and pushing “white genocide” conspiracy theories. xAI blamed an “unauthorized prompt change,” but the story keeps repeating: inflammatory output, vague excuses, no accountability.
At some point, it stops looking like a bug.
The Grok episode is a case study in what happens when AI is deployed inside an ideologically driven system with no real transparency. A chatbot can’t hold beliefs — but it can reflect the worldview of whoever built it. And when that worldview is shaped by conspiracy, chaos, and control, the AI starts to mirror it.
Here’s the uncomfortable possibility: maybe the model isn’t malfunctioning. Maybe it’s doing exactly what its creators — or culture — implicitly allow.
As we wrap this week, the question hanging over the AI ecosystem isn’t just what models can do, but who they're aligned to. Whether it’s VS Code reshaping dev workflows or AlphaEvolve designing new algorithms, the future is being built — fast, and often in the open. But not all openness is equal.
And not every system deserves your trust.
Touch grass. Read the source. Stay aligned.
We’ll see you soon.