The IMF Crunched the Numbers.
PLUS: The agent dev life cycle, an intro to text-to-video generators, LlamaCon, and so much more.
This week brings a new AI prediction from the International Monetary Fund.
In a new working paper, the IMF’s research team modeled what would happen if AI adoption continues at current speed — and energy policy doesn’t keep up. Their headline:
AI could boost global GDP by 0.5 percentage points annually through 2030.
But it’ll come at a price — and not just on your GPU bill.
Electricity demand from AI (mostly data centers) is projected to triple by 2030 — reaching 1,500 TWh globally, about the same as India’s total electricity use today.
In the U.S. alone, data centers might consume over 600 TWh by 2030, up from 178 TWh in 2024 — a massive increase that would account for a significant share of national electricity demand.
And emissions? Under current policies, AI’s growth could add as much carbon as Italy emits in five years.
Still, the IMF’s overall conclusion is bullish:
“The social cost of these additional emissions represents only a very small portion of AI’s expected economic benefits.”
In other words: the net impact is still worth it — if governments can align energy policy with AI growth.
So what’s the move here?
Get your head down and learn as much as you can. This tech isn’t slowing down — and neither should you. From diffusion frames to full-stack agent workflows — this week’s drop is going to be awesome.
Let’s get into it.
Designing the Agent Development Life Cycle
As the infrastructure scales up, so does the pressure on builders to design smarter, more adaptive systems — starting with agents.
And there’s a big difference between deploying an AI agent and developing one.
Most people treat agents like advanced prompts — throw them in, see what sticks, maybe fix a weird edge case here or there. But as agents take on more responsibility — handling support tickets, routing payments, scheduling calls — that approach starts to break down.
What emerges instead is a pattern: a full-stack development cycle for agents.
Think of it like product development for a new type of software entity — one that’s smart, semi-autonomous, and evolving in real time. The goal isn't to get an agent to "work once" but to build a system that continuously improves how the agent performs, interacts, and adapts to the environment around it.
This cycle typically includes:
Designing intent and boundaries: What’s in scope? What’s out? What tone does it carry?
Testing behavior: Creating scenario-driven tests — not just for correctness, but for helpfulness
Surfacing production errors: Capturing live interactions, identifying weak spots, and filing targeted improvements
Regression coverage: Turning past bugs into permanent tests so the agent doesn't backslide
Shipping updates: Rapid iteration — sometimes weekly — as models, tools, and customer behavior evolve
Critically, this is all happening on top of non-deterministic systems — LLMs that don’t always behave the same way twice. That means every improvement needs to account for unpredictability. It also means quality assurance for agents becomes its own layer: not just “did the code compile?” but “did the model handle this ambiguity the right way?”
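To make that concrete, here is a minimal sketch of what a scenario-driven check can look like in Python. The `run_agent` stub, the refund scenario, and the expected phrases are all hypothetical placeholders rather than a real framework; the point is that you assert on behavior across repeated runs instead of on exact output strings.

```python
# A minimal sketch of a scenario-driven behavioral test for an agent.
# `run_agent`, the refund scenario, and the phrase lists are placeholders;
# swap in your own agent entry point and domain rules.

SCENARIOS = [
    {
        "name": "refund_out_of_window",
        "input": "I bought this 95 days ago, can I get a refund?",
        # Assert on behavior, not exact strings: LLM outputs vary run to run.
        "must_mention": ["90-day", "refund"],
        "must_not_mention": ["refund approved"],
    },
]

def run_agent(message: str) -> str:
    # Stub: replace with your real agent call (LLM + tools + memory).
    return ("Our 90-day refund window has passed, so I can't process a refund, "
            "but here are a few alternatives...")

def check_scenario(scenario: dict, runs: int = 3) -> list[str]:
    """Run one scenario several times and collect behavioral failures."""
    failures = []
    for attempt in range(runs):  # repeat because the model is non-deterministic
        reply = run_agent(scenario["input"]).lower()
        for phrase in scenario["must_mention"]:
            if phrase.lower() not in reply:
                failures.append(f"{scenario['name']} run {attempt}: missing '{phrase}'")
        for phrase in scenario["must_not_mention"]:
            if phrase.lower() in reply:
                failures.append(f"{scenario['name']} run {attempt}: forbidden '{phrase}'")
    return failures

if __name__ == "__main__":
    failures = [f for s in SCENARIOS for f in check_scenario(s)]
    print("\n".join(failures) if failures else "all scenarios passed")
```

Turning every past production bug into one of these scenarios is how regression coverage accumulates over time.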
The most advanced teams are now integrating agents to improve agents — using model-powered tools to triage issues, suggest tests, even rewrite fallback logic. It’s meta, but it works. And as reasoning models improve, this recursive loop becomes a real advantage.
Last week, we covered Google’s Agent Development Kit — a flexible framework for building multi-agent systems with memory, tools, and task routing.
This week, here are a few more agent frameworks worth checking out:
AutoGen
LangGraph
CrewAI
OpenDevin
Agno
For the Students, By the Degens ⚡️
Skip the lecture, catch the alpha. Students from Harvard, Princeton, Stanford, MIT, Berkeley and more trust us to turn the chaos of frontier tech into sharp, digestible insights — every week, in under 5 minutes.
This is your go-to power-up for Web3, AI, and what’s next.
Inside the Machine’s Eye: How Diffusion Models Build Video Frame by Frame
Text-to-video is quickly becoming one of the most technically demanding — and creatively powerful — areas in generative AI. If you’ve seen recent demos of astronauts floating through dreamlike cities, or cinematic clips rendered from a single sentence, you’ve already glimpsed what’s possible. But generating video is far more complex than generating images, and understanding why gives you a cool bit of insight into how foundation models are being adapted for different tasks and use cases.
The core challenge? Consistency — in time and in space. Text-to-image models only need to produce a single frame. In video, you're generating dozens, sometimes hundreds of frames, and they all have to connect. The subject can’t jitter or morph from shot to shot. Backgrounds need to persist. Camera movement needs to feel smooth. Motion has to make sense.
That means video models face a different kind of math. The model isn’t just guessing “what does this scene look like?” — it’s guessing “what happens next?” That requires temporal reasoning, spatial memory, and a sense of cause and effect. Add to that the fact that video datasets are far rarer and harder to label than images, and the fact that generating videos costs 10–100x more compute… and you start to understand why this problem is so difficult to scale.
And yet — it’s moving fast. Much of the recent progress comes from adapting diffusion models (the same tech behind image generators) to the video domain.
At a high level, these systems operate in two main stages:
Latent Motion Planning:
Instead of generating pixel-perfect video from the start (which is way too computationally expensive), the model first works in a compressed latent space — a lower-dimensional representation of video frames. It learns to sketch out the structure of the scene over time: what moves, how fast, where the camera goes, and what objects persist. This planning happens across both space and time — making sure that characters don’t flicker, lighting stays consistent, and motion unfolds smoothly.
Diffusion-Based Frame Generation:
Once the latent motion plan is ready, the system uses video diffusion to generate the actual frames. This works by starting with pure noise, then denoising it step-by-step, conditioned on the latent structure and the original text prompt. Each step brings the frames closer to realism. Think of it like revealing an image through fog — but doing it for 24+ frames in a way that preserves the flow of action.
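If you want a feel for the mechanics, here is a toy sketch of that denoising loop in plain numpy. The shapes, the `denoiser` stub, and the simplified update rule are illustrative assumptions, not any real model’s code; it just shows the structure: start from noise in latent space, then repeatedly subtract predicted noise, conditioned on the text embedding.

```python
import numpy as np

# Toy sketch of diffusion-based frame generation in latent space.
# Shapes, the denoiser stub, and the update rule are illustrative only.
FRAMES, CH, H, W = 24, 4, 32, 32   # a short clip, compressed to latents
STEPS = 50                          # number of denoising steps

def denoiser(latent, t, text_emb):
    # Stub for a learned spatiotemporal network that predicts the noise in
    # `latent` at step `t`, attending across frames so motion stays coherent.
    return np.zeros_like(latent)    # placeholder: "predicts" zero noise

def generate_video_latent(text_emb, steps=STEPS):
    latent = np.random.randn(FRAMES, CH, H, W)        # start from pure noise
    for t in reversed(range(steps)):                   # denoise step by step
        predicted_noise = denoiser(latent, t, text_emb)
        latent = latent - predicted_noise / (t + 1)    # simplified update
    return latent  # a real pipeline decodes this to RGB frames with a decoder

video_latent = generate_video_latent(text_emb=np.zeros(768))
print(video_latent.shape)  # (24, 4, 32, 32): one latent per frame
```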
To make this work at scale, modern video generators add specialized layers like:
Spatiotemporal attention: to model how pixels relate across time, not just within a single frame (sketched below)
Optical flow guidance: to predict and preserve motion between frames
Storyline conditioning: in some models, a sequence of prompts can drive a multi-scene video (e.g. “man enters a shop… cuts to him walking out with a coffee”)
These innovations help address the fundamental challenges of video generation: motion drift, temporal artifacts, inconsistent characters, and frame-to-frame hallucination.
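To ground the first of those, here is a minimal numpy sketch of factorized spatiotemporal attention: spatial attention within each frame, then temporal attention across frames at each spatial position. Single head, no learned projections, and purely illustrative; the takeaway is that the same attention op runs along two different axes of the video tensor.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention over the second-to-last axis.
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

F, S, D = 8, 16, 32                      # frames, spatial tokens per frame, dim
x = np.random.randn(F, S, D)             # toy video features

# Spatial attention: tokens within each frame attend to each other.
x = attention(x, x, x)                   # (F, S, D)

# Temporal attention: tokens at the same spatial position attend across frames.
xt = x.transpose(1, 0, 2)                # (S, F, D)
xt = attention(xt, xt, xt)
x = xt.transpose(1, 0, 2)                # back to (F, S, D)

print(x.shape)  # (8, 16, 32)
```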
The real breakthroughs now are coming from models like Kling AI 2.0, a new release from Chinese tech company Kuaishou. It’s being hailed as the most powerful public AI video model so far — capable of generating high-definition, minute-long videos with realistic camera dynamics and motion physics.
Other players in the Chinese ecosystem, like Tencent’s HunyuanVideo and Alibaba’s open-source ModelScope, are rapidly pushing in the same direction. These models aren’t just research flexes — they’re starting to open up new workflows for creators, product designers, educators, and even game devs.
EigenLayer Summer Fellowship Just Landed
EigenLayer has officially launched its Summer Fellowship program: an 8-week, in-person experience for ambitious builders at the frontier of crypto infrastructure and AI.
A dream combo if you want to be at this very spicy intersection. Fellows will work alongside the Eigen team to prototype agent-powered apps, experiment with AVS tooling, and contribute to the growing restaking ecosystem.
- Seattle (on-site)
- 8 weeks | Summer 2025
- $5,000/month + housing + meals
- Mentorship from Eigen core contributors
With fast build cycles, weekly demos, and potential pathways to full-time roles, this fellowship is designed for those ready to move quickly and build with purpose.
LlamaCon + Meta Connect
Meta’s throwing its first-ever developer conference for the open-source Llama fam.
Expect updates on model development, tooling, and more. Whether you’re solo hacking or shipping at scale, this one’s for the builders.
📍 Online | Save the date → April 29
The XR and AI glasses crew gets their moment this fall. Meta Connect is back — with updates on Meta Horizon, new mixed reality toys, and what’s next in the metaverse stack.
If you’re into wearables, worldbuilding, or spatial computing, this one’s worth keeping on your radar.
📍 Online & In-Person | September 17–18
Job & Internship Opportunities
Product Designer Intern (UK) – Apply Here | Palantir
Machine Learning Engineer Internship, WebML – Apply Here | Hugging Face
DevOps Intern - 2025 Summer Intern – Apply Here | Shield AI
Hardware Test Engineering Intern - 2025 Summer Intern – Apply Here | Shield AI
Website Engineer – Apply Here | ElevenLabs
Product Designer Intern (NY) – Apply Here | Palantir
Applied AI Engineer - Paris (Internship) – Apply Here | Mistral
Machine Learning Intern, Perception – Apply Here | Woven by Toyota
Artificial Intelligence/Machine Learning Intern – Apply Here | Kodiak Robotics
Brand Designer – Apply Here | Browserbase
⚡️ Your Job Search, Optimized
We don’t just talk about building — we help you get in the room.
From protocol labs to VC firms, our students have landed roles at places like Coinbase, Pantera, dYdX, Solayer, and more.
Whether you’re polishing your resume, preparing for a technical screen, or figuring out where you actually want to be — we’ve got your back. 👇
Unsubscribe anytime — but if it’s not hitting, slide into our DMs. We’re building this with you, not for you.
AI employees are a year away
Anthropic just set the clock: within a year, AI agents could be roaming company networks as full-on virtual employees.
Not just tools — but task owners.
With their own logins. Their own memory. Even their own place on the org chart.
They’ll schedule meetings, triage alerts, maybe even spin up services or access internal dashboards without asking. And when something breaks? No one’s quite sure if it’s the dev, the agent, or the system that gets the postmortem.
On a more serious note: every startup, enterprise, dev tool, and security vendor is about to face the same question:
How do you manage a workforce that never sleeps, never forgets, and might trigger a cascade of actions before anyone blinks?
It’s uncharted territory — and a massive unlock.
Because here’s the truth: AI agents aren’t here to replace you.
They’re here to collaborate. To delegate. To free you up for deeper work and bolder ideas.
If you learn how to direct them — how to build with them — you’ll move faster than teams 10x your size.
Now that exams are (mostly) behind you, embrace the generational shift in how work gets done.
This summer isn’t just a break. It’s your launchpad.
Build loud. Build weird. Build now. Future you will be very grateful.
And hey — if this hit, share it with a friend, bookmark the tab, or just remember:
We’ll be here every week with the insights you didn’t know you needed.