Zuck’s on the Stand. Katy’s in Space. And We’re Just Trying to Finish Finals.
PLUS: Explore Google's new Agent-to-Agent protocol, an introduction to the AI memory challenge, and more.
You know it’s a weird week when Mark Zuckerberg is getting grilled in court and Katy Perry is doing zero-gravity flips above Texas.
On day one of FTC v. Meta, Zuck faced questions about whether buying Instagram and WhatsApp was part of a plan to dominate social media. Spoiler: the emails say yes.
Meta’s counter? “We’re not cool anymore anyway. People barely post.” (Harsh… but maybe true?)
And that’s just the surface — there’s way more drama under the hood. The whistleblower videos making the rounds on YouTube are especially spicy if you’re into courtroom chaos and internal leaks.
Meanwhile, Katy Perry hit the stratosphere with Blue Origin’s latest launch - an all-female, celebrity-studded rocket ride that was equal parts STEM inspo and PR spectacle.
Amid all this, Google quietly kicked off a new era in agent infrastructure - launching its Agent-to-Agent (A2A) protocol last week. Combined with fresh model drops from Meta and OpenAI, the pieces are clicking into place for a whole new layer of AI coordination, memory, and autonomy.
Let’s unpack everything.
Google said hold my beer
There’s been plenty of debate over who’s really leading the AI arms race - OpenAI with GPT, Anthropic with alignment, or xAI with pure meme energy.
But last week, Google stepped in and reminded everyone not to sleep on them.
Between the launch of the A2A protocol, new agent orchestration tools, and major updates across Gemini and Workspace integrations, Google’s quietly laying down the foundation for the multi-agent future.
A2A is a new open standard designed to let autonomous agents actually coordinate - across vendors, platforms, and frameworks. Think of it as HTTP for AI agents, with built-in memory sync, task lifecycle management, secure comms, and even UI negotiation.
It’s got serious backing too: over 50 partners at launch, from Atlassian to Salesforce to LangChain. The vibe? Less “AI tool,” more fully interoperable agent mesh.
Here’s what makes A2A special:
Modality-agnostic: not just for text - works across video, audio, web apps, and more
Secure by default: built-in auth and encrypted messaging
Memory-optional: agents can work together without shared state, but still sync intelligently
Long-task native: perfect for workflows that take hours (or involve humans)
And it’s just getting started.
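For the builders among you, here's a rough sketch of what talking to an A2A agent could look like: discover the agent's "card," then hand it a task over JSON-RPC. The discovery path, the tasks/send method name, and the payload shape reflect our reading of the draft spec - treat them as assumptions and check the official A2A repo before building on them.

```python
# Hypothetical sketch of pinging an A2A agent over JSON-RPC.
# Endpoint path, method name, and payload shape are our read of the
# draft spec -- verify against the official A2A repo before relying on them.
import uuid
import requests

AGENT_URL = "https://example-agent.dev"  # placeholder agent host

# 1. Discover the agent: its "card" advertises skills, auth, and modalities.
card = requests.get(f"{AGENT_URL}/.well-known/agent.json").json()
print("Talking to:", card.get("name"))

# 2. Send a task. A2A models work as long-lived tasks, not one-shot calls,
#    so the first response may just say "working" while the agent grinds away.
task_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",
    "params": {
        "id": str(uuid.uuid4()),  # client-generated task id
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Summarize today's standup notes"}],
        },
    },
}
response = requests.post(AGENT_URL, json=task_request).json()
print("Task state:", response["result"]["status"]["state"])
```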
While Anthropic’s MCP focuses on enriching agent input - adding tools, identity, and extended memory - Google’s A2A tackles the plumbing for coordination.
They’re not in conflict. In fact, they’re wildly complementary. You can get started today using Google’s newly launched Agent Development Kit.
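If you want to poke at the ADK, its Python quickstart boils down to one idea: an agent is a model, an instruction, and the tools it's allowed to call. This is a minimal sketch assuming the google-adk package and the Agent API as documented at launch - names may shift, so double-check the current docs.

```python
# Minimal Agent Development Kit sketch -- assumes `pip install google-adk`
# and the launch-day quickstart API; treat exact names as assumptions.
from google.adk.agents import Agent

def get_campus_events(day: str) -> dict:
    """Toy tool: returns (fake) campus events for a given day."""
    return {"day": day, "events": ["AI club demo night", "Finals study jam"]}

# An ADK agent is roughly: a model, an instruction, and its tools.
root_agent = Agent(
    name="campus_helper",
    model="gemini-2.0-flash",  # any Gemini model id you have access to
    instruction="Help students find campus events. Use tools when asked.",
    tools=[get_campus_events],
)
# Then run it locally with `adk run` or spin up the dev UI with `adk web`.
```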
Google dropped a lot more in their latest AI wave. Catch the full scoop on everything they announced below.
The Quiet Revolution in Agent Memory
One of the biggest technical challenges in AI right now isn’t intelligence - it’s memory.
Most agents today forget everything between chats. That’s fine for quick answers, but it breaks down fast when you try to build anything more persistent: a tutor, a planner, a co-pilot.
That’s why last week’s ChatGPT memory update is so interesting. OpenAI now supports two layers of recall:
Saved memories: facts you’ve explicitly told it to remember (like your name, tone, or project goals)
Chat history insights: patterns it learns over time, even from unmarked conversations
You can turn it all off, edit specific memories, or use "Temporary Chat" for zero-retention convos. It’s subtle - but it moves ChatGPT closer to something that feels like a long-term assistant.
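If you're hacking on your own assistant, the same two-layer idea is easy to prototype: explicit facts the user asked you to keep, plus lightweight insights inferred from past chats, both folded into the next prompt. The sketch below is purely illustrative - our names, not OpenAI's implementation or API.

```python
# Toy two-layer memory, loosely inspired by the ChatGPT update.
# Illustrative only -- not OpenAI's implementation or API.
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    saved: list[str] = field(default_factory=list)     # facts the user explicitly saved
    insights: list[str] = field(default_factory=list)  # patterns inferred over time

    def remember(self, fact: str) -> None:
        """User said 'remember this' -- store it verbatim."""
        self.saved.append(fact)

    def observe(self, insight: str) -> None:
        """Something the agent inferred from chat history (tone, recurring topics)."""
        self.insights.append(insight)

    def forget(self, fact: str) -> None:
        """Let the user edit or delete specific memories."""
        self.saved = [f for f in self.saved if f != fact]

    def to_prompt(self) -> str:
        """Fold both layers into the system prompt for the next conversation."""
        return (
            "Saved memories:\n- " + "\n- ".join(self.saved or ["(none)"])
            + "\n\nInsights from past chats:\n- " + "\n- ".join(self.insights or ["(none)"])
        )

memory = AgentMemory()
memory.remember("Name: Sam, studying CS at Berkeley")
memory.observe("Prefers concise answers with code examples")
print(memory.to_prompt())
```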
But the memory conversation doesn’t stop there. Researchers are exploring more foundational questions: How should agents store memory? How should they access it? Two recent approaches offer interesting possibilities:
SHIMI (Semantic Hierarchical Memory Index):
A system that organizes memory into a flexible hierarchy, letting agents retrieve info based on meaning, not just keywords.
A-MEM (Agentic Memory for LLM Agents):
A modular architecture inspired by the Zettelkasten method. Instead of one big memory blob, A-MEM builds a network of linked notes - each representing a chunk of context or knowledge. Agents can then trace relationships across these nodes dynamically.
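To make the Zettelkasten idea concrete, here's a toy version of linked memory notes: each note holds a chunk of context, links point at related notes, and retrieval walks the graph instead of scanning one big blob. Purely illustrative - the real A-MEM work links notes with embeddings and an LLM, not hand-wired edges like these.

```python
# Toy A-MEM-style linked-note memory. Illustrative only: the actual system
# creates and evolves links via embeddings + an LLM, not manual wiring.
from dataclasses import dataclass, field

@dataclass
class Note:
    note_id: str
    content: str
    links: set[str] = field(default_factory=set)  # ids of related notes

class NoteMemory:
    def __init__(self) -> None:
        self.notes: dict[str, Note] = {}

    def add(self, note_id: str, content: str, related: set[str] = frozenset()) -> None:
        """Store a note and keep its links bidirectional."""
        self.notes[note_id] = Note(note_id, content, set(related))
        for rid in related:
            if rid in self.notes:
                self.notes[rid].links.add(note_id)

    def recall(self, start_id: str, hops: int = 1) -> list[str]:
        """Retrieve a note plus everything reachable within `hops` links."""
        frontier, seen = {start_id}, set()
        for _ in range(hops + 1):
            seen |= frontier
            frontier = {
                link
                for nid in frontier if nid in self.notes
                for link in self.notes[nid].links
            } - seen
        return [self.notes[nid].content for nid in seen if nid in self.notes]

mem = NoteMemory()
mem.add("n1", "User is preparing for a distributed-systems final")
mem.add("n2", "Struggled with Paxos last session", related={"n1"})
mem.add("n3", "Prefers diagrams over proofs", related={"n2"})
print(mem.recall("n1", hops=2))  # pulls the whole little chain of context
```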
These models raise important design questions:
Should memory be flat or structured?
What’s the balance between retrieval and reasoning?
Can agents choose what to remember - or forget?
None of this is fully solved. But it's becoming clear: if we want agents that can plan, coordinate, or evolve over time, memory is one of the next frontiers.
Here’s another sweet resource for you to dive deeper in your own time.
For the Students, By the Degens ⚡️
Skip the lecture, catch the alpha. Students from Harvard, Princeton, Stanford, MIT, Berkeley and more trust us to turn the chaos of frontier tech into sharp, digestible insights — every week, in under 5 minutes.
This is your go-to power-up for Web3, AI, and what’s next.
EigenLayer Summer Fellowship Just Landed
EigenLayer has officially launched its Summer Fellowship program - an 8-week, in-person experience for ambitious builders at the frontier of crypto infrastructure and AI.
A dream combo if you want to be at this very spicy intersection. Fellows will work alongside the Eigen team to prototype agent-powered apps, experiment with AVS tooling, and contribute to the growing restaking ecosystem.
- Seattle (on-site)
- 8 weeks | Summer 2025
- $5,000/month + housing + meals
- Mentorship from Eigen core contributors
With fast build cycles, weekly demos, and potential pathways to full-time roles, this fellowship is designed for those ready to move quickly and build with purpose.
Palantir Meritocracy Fellowship Enters the Field
Palantir’s new “Meritocracy Fellowship” is basically the Peter Thiel playbook in corporate form:
- Pay high school grads $5,400/month to skip college and build real-world skills instead.
- No cover letter, no degree, just raw intellectual horsepower — and probably some libertarian Twitter in your bookmarks.
Applicants need a sky-high SAT (1460+) or ACT (33+), but no college enrollment. The four-month fellowship is open to students with skills in programming, stats, or applied problem-solving — and yes, it leads to potential full-time roles at Palantir.
Job & Internship Opportunities
Product Designer Intern (UK) - Apply Here | Palantir
Product Designer Intern (NY) - Apply Here | Palantir
Applied AI Engineer - Paris (Internship) - Apply Here | Mistral
Machine Learning Intern, Perception - Apply Here | Woven by Toyota
Artificial Intelligence/Machine Learning Intern - Apply Here | Kodiak Robotics
AI Platform Software Engineer Intern - Apply Here | Docusign
Brand Designer - Apply Here | Browserbase
⚡️ Your Job Search, Optimized
We don’t just talk about building — we help you get in the room.
From protocol labs to VC firms, our students have landed roles at places like Coinbase, Pantera, dYdX, Solayer, and more.
Whether you’re polishing your resume, preparing for a technical screen, or figuring out where you actually want to be — we’ve got your back. 👇
Unsubscribe anytime — but if it’s not hitting, slide into our DMs. We’re building this with you, not for you.
🐬 AI Decoded Your Essay. Dolphins Are Next.
Yes, this is real. No, we’re not high.
To celebrate National Dolphin Day, Google just dropped something quietly wild: DolphinGemma - a lightweight, open-source AI model trained to understand and even generate dolphin sounds. Think LLM, but for whistles, clicks, and underwater buzzes.
Built in collaboration with Georgia Tech and the Wild Dolphin Project - the world’s longest-running underwater dolphin research team - DolphinGemma is being used to parse decades of audio logs from a pod of wild Atlantic spotted dolphins.
Why dolphins? Because their vocalizations are insanely complex - possibly even symbolic - and studying them might unlock new theories of intelligence, cooperation, and communication across species.
It’s a wild shift: from building agents that do your homework… to building models that might translate non-human minds. Kinda beautiful. Kinda terrifying.
Definitely not on the syllabus. But you know what is? Your finals.
So shut some tabs, hydrate, and maybe don’t train a large language model until after your last exam.
Good luck out there - you’ve got this.