Level up your career, level up your team. Hands-on practice, regular office hours, and a direct Slack channel for questions. Custom-designed for product managers, UX professionals, designers, researchers, and builders who are serious about figuring out this AI thing.
Dozens of videos digging deep into the practicalities of building better AI products, including hands-on exercises and walkthroughs. You'll follow along and learn actual skills.
We are all trying to wrap our heads around this new, strange material called AI. It takes time to build that intuition. When you have questions, you will have a direct line to me to ask anything through Slack Connect.
Because learning is better with others. With regular live group office hours, you can ask any AI question and we can discuss challenges and ideas.
Hi there! I'm Peter.
I've been building AI products since 2023, and it's been a fascinating journey.
I've been using techniques like "evals", "observability", "synthetic data", and "context engineering": the toolkit the AI community has figured out over the past few years.
The teams building the best AI products today rely on these core techniques every day, and they can look pretty complicated.
But it's not rocket science. These methods aren't magic. You don't have to be an engineer to use them. And in fact, I believe that the best AI products are built when diverse teams bring different perspectives to the table. That sounds corny, but it is, perhaps surprisingly, more true in AI than in any other technology I've worked with.
So I decided to build this course to demystify these techniques, giving you a practical, no-fluff guide to implementing the essential toolkit for creating great AI product experiences.
It's for product managers, UX people, researchers, strategists. Everyone who needs to understand how AI systems are actually built and made useful, without necessarily being an engineer.
It's still day 1, AI continues to evolve incredibly fast, and there is a ton of stuff to be invented. Right now is the best time to get started. If you want to jump in and join me on this journey, sign up.
See you there!
"This is opening up all sorts of new neural pathways for me to see under the hood more of how the sausage is made! 🙏"
"Very timely at my enterprise software company as evaluation of AI features scales."
"Everything I know about evals is from Peter’s talk, which is why I’m back to find out more!"
A deep dive introduction to model capabilities, context design and engineering, and experience evaluation.
Let's get started!
Let's do some context design. The model's context window is the key to creating useful and helpful output.
So we did some context design - now how does that become context engineering?
And the final missing piece: evals! But wait, do we even need them here?
What does it mean for models to be stateless? Let's build some intuition around that.
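A tiny preview of that intuition: a chat model remembers nothing between API calls, so the application has to resend the entire conversation every time. Here's a minimal sketch, assuming the OpenAI chat completions API as one concrete example; the model name is a placeholder.

```python
# Statelessness in practice: the "memory" lives in our code, not in the model.
from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set in the environment
messages = []       # we keep the conversation history ourselves

def ask(user_text: str) -> str:
    messages.append({"role": "user", "content": user_text})
    # Every call resends the FULL history; the model only sees this payload
    # and forgets it the moment the response is returned.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask("My name is Ada."))
print(ask("What's my name?"))  # only works because we resent the history
```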
And what does it mean for models to be stochastic? Why do they hallucinate? Can we ever get beyond that?
Some common misunderstandings about AI and Large Language Models can easily lead us astray.
This is what we came for: some hands-on eval writing.
What are evals, why do we need them, and why isn't this just QA?
This is the fun part, hands-on writing evals together.
Evals can be tricky, and it's easy to make some very expensive mistakes, in terms of quality, end results, and cost.
One reason evals are tricky is that it can be hard to define what "good" looks like when working (as we are) in a team.
There are no evals without data sets. How do we create solid data sets? How many data points are enough? What about synthetic data?
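To give a flavour of the exercises: at its simplest, an eval is a small dataset of inputs plus a check over the model's outputs. A minimal sketch follows; the example questions and the naive string-match grader are invented for illustration, not taken from the course materials.

```python
# A minimal eval: a tiny dataset plus a simple pass/fail grader.

dataset = [
    {"input": "Can I get a refund for order #123?", "must_mention": "refund policy"},
    {"input": "Where is my package?",               "must_mention": "tracking"},
]

def run_model(user_input: str) -> str:
    # Stand-in for a real LLM call; swap in your model of choice.
    return "You can find our refund policy and tracking details here: ..."

def evaluate() -> float:
    passed = sum(
        1 for case in dataset
        if case["must_mention"] in run_model(case["input"]).lower()
    )
    return passed / len(dataset)  # fraction of cases that passed

print(f"Pass rate: {evaluate():.0%}")
```

Real evals quickly get more interesting than string matching, which is exactly where the tricky parts come in.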
What can LLMs do? How do we know what the capabilities of these models are? How are they trained? And how does that influence our product design decisions?
How are capabilities trained into models? How can we build intuition around these capabilities and best use them?
What is model character, how is it trained, and how can we learn to understand and use this beyond "Claude feels friendlier"?
Despite the "code" in its name, Claude Code is perhaps the most popular agentic AI system right now. Understanding and using it gives you a glimpse into what's coming in the months and years ahead in terms of agents. And it can be incredibly useful for non-coding tasks.
Set up Claude Code and let's start working on a use case. No code!
How can we use an agent like Claude Code for information architecture? Let's try it out!
AI and UX research - do they even mix? Let's try to use Claude Code on some common UX Research tasks.
Alright, here's a glimpse of the future. How do we run multiple agents and subagents in Claude Code? Let's play with an example.
What will the UI and UX of agents look like in the future? What would Claude Code look like with a nice UI? Let's find out.
If AI is different, and AI projects are different, how do we plan projects for AI? What are the roles and tracks we should consider? What are some common gotchas?
Context design and evals are two cornerstone activities for building great AI products. How do we plan for them?
How do we budget an AI project? What are some of the things to look out for?
AI is indeed different - what roles or skillsets should we hire for or plan for when preparing AI projects?
The videos are very hands-on, and come with links to tons of useful resources, prompts you can copy and paste, data sets you can use for the exercises, and more.
Aside from that, you'll probably have some questions. It's tricky to wrap your head around this new world. How does vector search really work? Does AI really have a world model? You get a direct Slack Connect line to me for all your questions.
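Since "how does vector search really work?" comes up so often, here's the core idea in a few lines: texts become vectors, and "similar meaning" becomes "small angle between vectors". The three-dimensional vectors below are made up for illustration; real embeddings have hundreds or thousands of dimensions.

```python
# Vector search in miniature: find the stored text whose embedding
# points in the most similar direction to the query's embedding.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

documents = {
    "refund policy":  [0.9, 0.1, 0.0],  # made-up embeddings
    "shipping times": [0.1, 0.8, 0.2],
}
query = [0.8, 0.2, 0.1]  # imagined embedding of "can I get my money back?"

best = max(documents, key=lambda doc: cosine_similarity(query, documents[doc]))
print(best)  # -> "refund policy"
```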
The third pillar of learning is community. No, we're not starting another Slack or Discord group. Instead, you'll have full access to regular group office hours calls. It's the best way to learn.
Let's be honest, these plans are not cheap.
I'm building this service for professionals who are serious about investing in their career.
You get an ever-growing video course, but more importantly, a direct line to me on Slack Connect, and access to regular group office hours.
For teams, if you have any learning budget at all, this should be a no-brainer, and I've set a generous employee limit of 100 people. Teams also get custom office hours calls, exclusive to their team, where we can discuss the team's internal AI challenges and questions.
$79 /month
Invest in your career for the price of two Ubers a month.
$799 /month
Level up your team and create buy-in around AI for the price of a team dinner a month.
First, I was pretty sceptical of the early 2021-2022 AI hype, until I saw my kids adopt AI within weeks. And it stayed adopted. "This might actually be useful technology", I thought. I spent 2023, 2024 and 2025 building AI products for clients, and learning. The ins and outs of vector indexes. Why the hell did these models seem so smart?
1. There is some kind of weird, but nonetheless real "intelligence" embedded in these models. And it's getting smarter.
It took me a while to build some understanding and intuition around this. You can call it what you want. "Intelligence" is a strange word. I get why people feel uncomfortable with it. But the models do embed some strange kind of world model.
And more importantly, they are developing really different and weird, and at the same time very human-like capabilities. Can a model "reason"? Kind of no, but also kind of yes. Once I wrapped my head around this, I did become a bit more hype-y on the whole AI thing. Forget about the hype, but there is some kind of "there" there.
At the very least, it's interesting.
2. The world is slow to change, but jobs aren't.
The idea here is that, yes, AI won't change the world overnight. Companies take time to adopt things. Societies take time. But jobs can change rapidly. I'm already seeing how the way we've worked for the past 20 or so years, since the Internet, is changing. Team compositions are changing. Skillsets are changing. Roles naturally follow. So even though AI will take a long time to percolate through society, our jobs might change pretty fast.
3. Values get embedded in AI, so we need diversity.
The clearest example of how values get embedded in AI is Musk threatening to "rewrite history" to train Grok. Let's not go there. I've been in many AI product discussions where smart engineers were driving the decisions, because they understood the underlying material we are working with. But the moment you include researchers, product people, UX people, the amount of diversity in ideas and perspectives shoots up immediately.
And AI is such a strange and interesting technology in that it revolves a lot around language. The words you use in a prompt. The ideas around "what good looks like" that you embed in your evals. And so I've seen how much impact different perspectives have on these new kinds of products. So I started giving talks and teaching this stuff.
4. It's not rocket science.
The technology is very cool, but the actual product decisions are all about users: looking at data, working with language, and understanding this new material. You don't need to be an engineer for that. That is my goal with this new platform: involve everyone.