
The Learning Curve
The Learning Curve is Modivus' podcast. Join us for conversations with the engineers, strategists, and visionaries shaping the frontier of intelligence.
We skip the hype to explore the practical and "dangerously productive" realities of the AI era. From autonomous agents and advanced LLMs to AI security and the future of work, we flatten the learning curve so you can navigate the fastest technology shift in history.
In the inaugural episode of The Learning Curve, we sit down with tech veteran Chris Parsons to flatten the learning curve of the "AI Summer." We move beyond the hype to explore the shift toward autonomous agents, the emerging threat of AI Phishing, and why the "Productivity Trap" is the hidden risk of 2026.
Key Topics Covered:
High-Utility AI: Why specialised models have reached a professional tipping point.
The AI Interview: A superior alternative to traditional prompt engineering.
Security & Phishing: Protecting autonomous agents from prompt injection and malicious emails.
The Productivity Trap: Managing the psychological toll of AI-powered efficiency.

+
Transcript
Paul: My guest today is Chris Parsons. Chris has been in tech for over 25 years and he's programmed video games and built some of the systems behind government infrastructure. He's scaled a film analytics company as a CTO and he's built his own consultancy and—these days—he's an AI strategist helping engineering teams actually get results with AI.
I personally have seen Chris because I look at his newsletter and it's always full of really up-to-date, really useful stuff. So I highly recommend: add him on LinkedIn, get his newsletter and see what this guy's up to. So Chris, welcome. Let's have a chat.
Chris: Yeah, great to be here. Thanks so much.
Paul: Would you mind just telling me a little bit about who you are, where you're from, and how you got to where you are today?
Chris: Yeah, I've been in tech, as you say, for getting on for 30 years, which is terrifying really. I started off in video games working in a company in London and I was one of the two or three coders in the company that worked on the AI side of things, which was very rudimentary and basic, but it was fun to work on.
Then I ended up starting my own client services company after doing that, and then ended up building my own video game on Steam which ended up being quite AI-heavy. Then I decided I wanted to go back into slightly bigger startups and companies. I ended up scaling a company in the data analytics and AI space and then most recently founded Cherrypick, which is a product that helps people get their meals quickly and effectively through online grocery services like Sainsbury's and Tesco.
But there's always been an AI thread. And so when the kind of new "AI summer" that we are currently in kicked off in November 2022, I couldn't believe how good it was. The utility that suddenly this chatbot gave you was breathtaking and I really wanted to see if I could take advantage of that. So we built a bunch of features into our startup, but I'm spending most of my time helping companies figure out how to use AI effectively and well.
I've really enjoyed the process of being right at the forefront of this technology and seeing what it can do for people and it's frankly astounding. I think there's so much more to come and I'm most interested, I suppose, in seeing it used well. So how can we do well with this technology? It's such a big shift just like the internet or social before that. We have—I have to say as society—a little bit of an up and down history with technology. Sometimes it goes well and we use it well, sometimes we don't. I'm really keen that we do a good job with AI.
So one of the key drivers for me is: how do we help people to have better lives using this technology—better organizations, better work—rather than maybe falling into some pitfalls.
The Utility of Modern AI: From Wispr Flow to Claude
Paul: What I notice is different about what you do is that you give advice freely on LinkedIn. But you're also very open about your stack. You're very open about the tools that you're using: when you're using them, how you're using them. I don't see that everywhere and it's really encouraging and exciting because it gives me something I can try out.
And so Whisper, for instance, is something that I've started using since you talked about it and it has changed my workflow, even though I often forget I've got it. So Whisper is this speech-to-text which I'm pretty sure I played with—like Dragon NaturallySpeaking 20 years ago or something. It was just really frustrating, but it's now frictionless. It's just so good and so useful. And so I've got many of those things I could say, "Oh yeah, I got that from Chris." So thank you for one.
But what's changed then in the last 6 months? Say, where are we at now if we were to look at the state of play? This is February 24, 2026. It just seems to be going so fast. What's happening? What's changing and what's incoming?
Chris: Half an hour talking about that! Let me try and drill it down to the basics. So you mentioned Whisper, and that's a fascinating thing. I think it's a great example of a technology that's been around for decades, but has finally become so good as to change people's workflows and their work lives.
Dragon NaturallySpeaking and all of those tools—they did what they did, but the technology just wasn't there. Whisper was a tech that was invented by OpenAI, predates ChatGPT, and they released these models for free on the internet, which is fantastic. What's happened now is that enterprising teams have managed to package these models into really usable apps.
With any kind of technology, you need the quality of the underlying technology to be good enough—and Whisper now is good enough. You need the user experience of the product itself to be great. So I use a tool called Wispr Flow. The thing that Wispr Flow is so good at is it's very seamless and it integrates really well with your general workflow when you're using both your phone and your desktop. They've built a hidden layer on top of the underlying OpenAI model that just gets the dictation right. It does a pass through it to fix any kind of weird issues—all of your ums and your Rs—and it helps to create the piece of text that you'd wish you'd said, not what you actually said.
That speaks to the utility of generative AI. We're still figuring it out. Actually, there wasn't much difference between ChatGPT and GPT-3. GPT-3 came out earlier that year or maybe a year or so before. A few people used it, but it was pretty niche. The utility of putting it in a chat interface was the thing that really unlocked it for a huge number of people. It's the same with tools like Whisper—the utility of putting that into a package where I can download a product, pay a few pounds a month, and be able to effortlessly talk to my computer is huge.
Tipping Points in Coding and Agents
Chris: A lot of what we're learning maybe over the last year is we are beginning to figure out how to use this technology well and how to package it well.
It's hard to believe that something like Claude Code is only about 9 months old. It has completely taken over a huge segment of the coding community because of its utility, even though it's a command-line tool that you run in a terminal. The models underlying it have changed a bit, but it's the product work that's happened that has really made the difference.
The things that I think have changed most dramatically are: the models have got a bit better, and the products have got a lot better to the point where we've reached a tipping point with coding. Around the end of November last year, if you're using Claude Code specifically with Opus 4.6, their most recent powerful model from Anthropic, it seems to have reached a point where you can just trust it to write most code. It's now quicker to use that than it was to fight against the tools.
Now we see tools like OpenClaw coming out—which we could talk much more about—which are tools that run on a computer that you can just talk to via messenger like Telegram or WhatsApp or Discord or Slack, and it will just get stuff done for you behind the scenes. You don't even have to think about what it's doing on the computer. I mean, actually, you should think about it because there are security issues there, but ultimately, that's where we're going.
This increasing utility and finding ways to use AI well... I feel like we're only at the very beginning. I think it will take us years to really unlock the power of the models we have right now. The rise of agents, the rise of things like skills and model harnesses like Claude Code—all of those kinds of tools are packaging these models up and giving them to customers. Customers are finding ways to use them even better, and then they're building those back into products.
There is another underlying trend which I think will be a defining trend in the next year or two: the rise of local AI and open-source AI. The best model out there at the moment is still, I think, Kimi 2.5, which came out a few weeks ago. DeepSeek just released another model, but Kimi is still about 6 to 8 months behind the frontier models. What that means is if we've hit a tipping point with the frontier models, by the end of this year open source models should have caught up to that point. Therefore, everyone will be able to use models much more cheaply.
The Security Red Flag
Paul: You've dropped lots of really interesting points there. I completely attest to that inflection point. I remember there was a time on social media where you'd see "this changes everything" every week and I got a bit sick of it. But at the same time, it did!
We seem to just be on this trajectory where things keep getting better. We're like, "Oh, we think we've topped out there," and then we didn't. Things like NotebookLM came out and the way it was structured and accessible did cool stuff, and I was like, "This is amazing."
I do a lot of coding. I'm a researcher and I didn't trust my coding tools; I always had oversight over it. And then I saw some of these frontier models coming out and saying, "Well, our engineers don't do that anymore. They let the models write the code." I was very skeptical. I thought, "Well, maybe it's a frontier model thing and they've got their own special sauce..." But now I'm in the same place. It's happened in the last month that I've grown in confidence enough to do it, but also the capability has taken off. My job is different.
And this idea of OpenClaw and automated agents that do useful stuff... once we solve the security issue, it will be radical. But can we solve the problems? What are the problems? I've read your piece on it and it's a bit of a red flag right now, right?
Chris: Massively. It depends what you're using it for. Fundamentally, the problem you have with agents generally is they're not the same as regular programs. They're what computer scientists call non-deterministic. That basically means they can do different things each time you ask them to do something. It's not going to be predictable. Whereas a normal computer is deterministic—it will do a sum in the same way every time.
The problem is that they are fallible just like humans are fallible, and therefore they can be fished in the same way. The idea that someone can send you a rogue email and trick you into sharing your password... people understand these issues and many of us are educated against clicking on weird links, but it isn't really a solved problem.
AI has the same fundamental problem. It can be tricked by rogue information that comes into it to leak information that you've given it. That's not really solvable in the same way that fishing isn't really solvable because you can get ever-more sophisticated attacks.
The absolute latest models, which are quite expensive to run, got it wrong about 4% of the time. So one in 20 times they fell for the trick. Now that's fine, except if you're putting your credit card numbers or your bank details behind that. We hear stories of people giving these agents access to their emails. Once you've done that, you are leaving yourself open to the AI being tricked to give away that information to a malicious attacker. They can send them emails, or even hide things inside images or Reddit posts where your agent is merely trolling through the internet, comes across an attack, and ends up leaking your data.
So OpenClaw is fundamentally insecure like many agents. It is something that you should use with caution. If you are careful with the information you give it, then you can limit the effectiveness of an attack like that. What you've got to be careful against is just loading it onto your local machine and giving it full access. People often just click through security warnings and say, "Oh, yeah, it's fine." It could be leaking your data without you realizing it.
Training and the Reality of Prompt Engineering
Paul: I want to ask you a few questions about how companies can make use of these technologies. I want to be controversial to start with because I see people saying all the time that "prompt engineering is dead." In the past, some people thought, "Give your people access to AI and that's it." But that is not what I see, and reading your newsletter, I don't think that's what you see either.
Chris: Prompt engineering is "dead" in the sense that you don't have to come up with a lengthy, whole page of text in order to get it to do what you want. You can ask it a simple question, be ambiguous, and it will often infer what you mean.
Having said that, it can be difficult for people to know how to ask AI for the right things or even what to ask for. For example, if you're doing coding using AI, sure, you can ask for something in a few lines, but equally, you're not going to get a good response because it's just going to do something random. If it doesn't know the answer, it will randomly fill in the gaps for you.
The answer is to make sure that you are having the AI pull information out of you so that it knows what it needs. I'm pretty vague when I'm thinking about what I want to do next. So, what I do is I say things like, "I would like to build a to-do list with sorting and tagging, but I'm not really sure." Instead of spending ages trying to specify that, I just dump that into the AI, usually via dictation, and say:
"Interview me, asking me one question at a time, in order to make this an unambiguous and clear set of requirements that I could then use to build this feature."
Then the AI talks back to me and asks all the questions I need to answer to specify the feature. It pulls the information out of me. I find that a much easier and more natural process than writing a big prompt. But you have to know how to do that. You can't just say to someone, "Here you go. Here's Co-pilot, off you go." People need training.
If you have never done anything like this before, the key thing you could do today—right now—is say to an AI (whether it's ChatGPT, Claude, or Gemini): "I would love to know how I can use AI better but I'm not quite sure how I'm going to be able to use it in my work, so I'd like you to interview me and ask me one question at a time so you know enough about me to give me advice." It sounds a little bit meta but it works and you will end up having a fascinating conversation with your AI agent.
Symbiotic Workflows
Paul: And that actually is a pattern that I use all the time and it's now so obvious to me, but it's not necessarily obvious at all. I love the fact that AI can take my "brain soup" and turn it into an ordered, structured thing and be a thought partner.
For me, it's usually me dictating into the AI and then reading the answer back because I'm quicker at reading than I am at listening, but I'm also quicker at talking than I am at typing. I love the fact that we can try different things and that we all have our own ways that we can interact with it that work well for us.
Paul: What I love about that flow is that it's not "I'm asking the AI to solve my problems," but I'm putting it in the loop with me and helping to just pull what I'm thinking out. It's this really symbiotic thing. I often have people who are a bit like, "Oh, I don't want to use AI," and I'm like, "Well, just ask it to ask you questions and let's get the best of you."
AI Readiness in the Organization
Paul: Do you want to talk a little bit about what you do and maybe give some examples where it's had impact?
Chris: Absolutely. I work with companies from about 30 people all the way up to several hundred. I tend to go in with a team that knows they need to use AI—the team is probably already using it—but they feel a bit out of control. They're seeing their cloud spend or bills go up but they're not necessarily seeing a lot of productivity.
So what I tend to do is sit down and figure out what's happening. I also do readiness assessments. Not all teams will get the most out of AI. If you've got a really messy, old codebase that is hard to work with, AI is going to struggle with it. So often one of the earliest things I recommend is that teams use AI to try and clean up their codebases.
In a similar way for operations, finance, or product teams, we're looking at how accessible the context of the business is to the AI tools. Once the tools are in place, I spend time training the team to ensure they are using them well. I've seen really good adoption—people moving through the gears with Co-pilot, using Claude on the web, and then moving into Claude Code. I've seen people get a three-month project done in three days. Then I work with the leaders to say, "Okay, your coding is moving quickly. How do we make sure that translates to releasing more quickly or running more experiments?"
How to Connect
Paul: So if people wanted to reach out to you, what's the best way of contacting you?
Chris: My website is a great place to be: chrismdp.com. And my LinkedIn as well—do reach out on that. I run webinars once a month on interesting topics, and I have a newsletter that I share once a week which has more details about that as well.
Paul: Awesome. And you've got something today, in fact: LLM London.
Chris: Yes. LLM London happens every month or two. It's an event that I helped start for people who are specifically trying to build the future of technology with AI in their products. It's quite a tech-heavy group: developers, product people, startup founders. We get together in the pub, but we also do events with speakers. If that's your kind of thing and you're in London, it's a great opportunity to hang out with like-minded folks.
The Problem of "Insane Productivity"
Paul: Brilliant. So, if you could build one tool with AI that does not yet exist—blue sky thinking—what's the problem that is not yet solved that you would just like to solve today?
Chris: I don't know if it's a tool, but I'd love a cultural shift. I would love it if people were able to use AI in a way that genuinely enriches and benefits their lives rather than just working harder.
I've noticed that using AI for everything—I use Claude Code for all my work now—it is uniquely exhausting to work like that. I worry that people are going to be taken advantage of and work themselves to death for an unscrupulous company. I also worry that the opposite might happen and people might just get lazy—where someone does 3 months worth of work in three days and then sits around for the remaining two and a half months. That also feels pretty unethical.
So, somewhere in the middle, it should be possible for us to be more effective and fulfilled humans—not get to the point where just because we've got tools that make us more productive, we end up working five times as hard. Maybe all it boils down to is a timer to tell you to stop when you're using AI for too long.
Paul: If it were a tool, I would use it! This is a genuine problem. You talked about it on LinkedIn, saying it makes you "dangerously productive." Ordinarily I hit my slump and I'm like, "Okay, time to take a break," but I don't hit that slump anymore because I can keep on working and have 10 Claude Codes working on multiple problems. I find myself at 2:00 AM thinking, "I really need to sleep, but maybe I'll just wait until this next problem is solved." How do I stop when I'm insanely productive?
Chris: We're just at the beginning of figuring out that this is a problem. I took some time off last week and it was very difficult. I'd realized that I had connected it to my phone so that I could just call my phone and get Claude Code to work for me. I'd been working 16 days straight without really stopping and not really paying enough attention to my family. That's not a good way to live.
We need better feedback loops. Right now, AI is like the employee who needs to be supervised every three and a half minutes. It's very persistent in asking us questions. So I feel like as AI improves, perhaps what we should be most careful to work on are the feedback loops and the checking processes so that we can let them keep working more autonomously, but figure out ways of communicating back what's happening so that we can manage our own supervision in a healthy way.
I'm thinking a lot about those loops—using MCP tools so that a AI could, in theory, conceive of, implement, and run an entire experiment using a platform like PostHog, run an AB test, see whether it worked, and then move on. If it's possible to close that loop, then we could just let them run and maybe get some sleep. There's hope.
Paul: Thanks, Chris. That's been a really interesting conversation, and I hope that it's valuable to the people that are listening as well.
In the inaugural episode of The Learning Curve, we sit down with tech veteran Chris Parsons to flatten the learning curve of the "AI Summer." We move beyond the hype to explore the shift toward autonomous agents, the emerging threat of AI Phishing, and why the "Productivity Trap" is the hidden risk of 2026.
Key Topics Covered:
High-Utility AI: Why specialised models have reached a professional tipping point.
The AI Interview: A superior alternative to traditional prompt engineering.
Security & Phishing: Protecting autonomous agents from prompt injection and malicious emails.
The Productivity Trap: Managing the psychological toll of AI-powered efficiency.

+
Transcript
Paul: My guest today is Chris Parsons. Chris has been in tech for over 25 years and he's programmed video games and built some of the systems behind government infrastructure. He's scaled a film analytics company as a CTO and he's built his own consultancy and—these days—he's an AI strategist helping engineering teams actually get results with AI.
I personally have seen Chris because I look at his newsletter and it's always full of really up-to-date, really useful stuff. So I highly recommend: add him on LinkedIn, get his newsletter and see what this guy's up to. So Chris, welcome. Let's have a chat.
Chris: Yeah, great to be here. Thanks so much.
Paul: Would you mind just telling me a little bit about who you are, where you're from, and how you got to where you are today?
Chris: Yeah, I've been in tech, as you say, for getting on for 30 years, which is terrifying really. I started off in video games working in a company in London and I was one of the two or three coders in the company that worked on the AI side of things, which was very rudimentary and basic, but it was fun to work on.
Then I ended up starting my own client services company after doing that, and then ended up building my own video game on Steam which ended up being quite AI-heavy. Then I decided I wanted to go back into slightly bigger startups and companies. I ended up scaling a company in the data analytics and AI space and then most recently founded Cherrypick, which is a product that helps people get their meals quickly and effectively through online grocery services like Sainsbury's and Tesco.
But there's always been an AI thread. And so when the kind of new "AI summer" that we are currently in kicked off in November 2022, I couldn't believe how good it was. The utility that suddenly this chatbot gave you was breathtaking and I really wanted to see if I could take advantage of that. So we built a bunch of features into our startup, but I'm spending most of my time helping companies figure out how to use AI effectively and well.
I've really enjoyed the process of being right at the forefront of this technology and seeing what it can do for people and it's frankly astounding. I think there's so much more to come and I'm most interested, I suppose, in seeing it used well. So how can we do well with this technology? It's such a big shift just like the internet or social before that. We have—I have to say as society—a little bit of an up and down history with technology. Sometimes it goes well and we use it well, sometimes we don't. I'm really keen that we do a good job with AI.
So one of the key drivers for me is: how do we help people to have better lives using this technology—better organizations, better work—rather than maybe falling into some pitfalls.
The Utility of Modern AI: From Wispr Flow to Claude
Paul: What I notice is different about what you do is that you give advice freely on LinkedIn. But you're also very open about your stack. You're very open about the tools that you're using: when you're using them, how you're using them. I don't see that everywhere and it's really encouraging and exciting because it gives me something I can try out.
And so Whisper, for instance, is something that I've started using since you talked about it and it has changed my workflow, even though I often forget I've got it. So Whisper is this speech-to-text which I'm pretty sure I played with—like Dragon NaturallySpeaking 20 years ago or something. It was just really frustrating, but it's now frictionless. It's just so good and so useful. And so I've got many of those things I could say, "Oh yeah, I got that from Chris." So thank you for one.
But what's changed then in the last 6 months? Say, where are we at now if we were to look at the state of play? This is February 24, 2026. It just seems to be going so fast. What's happening? What's changing and what's incoming?
Chris: Half an hour talking about that! Let me try and drill it down to the basics. So you mentioned Whisper, and that's a fascinating thing. I think it's a great example of a technology that's been around for decades, but has finally become so good as to change people's workflows and their work lives.
Dragon NaturallySpeaking and all of those tools—they did what they did, but the technology just wasn't there. Whisper was a tech that was invented by OpenAI, predates ChatGPT, and they released these models for free on the internet, which is fantastic. What's happened now is that enterprising teams have managed to package these models into really usable apps.
With any kind of technology, you need the quality of the underlying technology to be good enough—and Whisper now is good enough. You need the user experience of the product itself to be great. So I use a tool called Wispr Flow. The thing that Wispr Flow is so good at is it's very seamless and it integrates really well with your general workflow when you're using both your phone and your desktop. They've built a hidden layer on top of the underlying OpenAI model that just gets the dictation right. It does a pass through it to fix any kind of weird issues—all of your ums and your Rs—and it helps to create the piece of text that you'd wish you'd said, not what you actually said.
That speaks to the utility of generative AI. We're still figuring it out. Actually, there wasn't much difference between ChatGPT and GPT-3. GPT-3 came out earlier that year or maybe a year or so before. A few people used it, but it was pretty niche. The utility of putting it in a chat interface was the thing that really unlocked it for a huge number of people. It's the same with tools like Whisper—the utility of putting that into a package where I can download a product, pay a few pounds a month, and be able to effortlessly talk to my computer is huge.
Tipping Points in Coding and Agents
Chris: A lot of what we're learning maybe over the last year is we are beginning to figure out how to use this technology well and how to package it well.
It's hard to believe that something like Claude Code is only about 9 months old. It has completely taken over a huge segment of the coding community because of its utility, even though it's a command-line tool that you run in a terminal. The models underlying it have changed a bit, but it's the product work that's happened that has really made the difference.
The things that I think have changed most dramatically are: the models have got a bit better, and the products have got a lot better to the point where we've reached a tipping point with coding. Around the end of November last year, if you're using Claude Code specifically with Opus 4.6, their most recent powerful model from Anthropic, it seems to have reached a point where you can just trust it to write most code. It's now quicker to use that than it was to fight against the tools.
Now we see tools like OpenClaw coming out—which we could talk much more about—which are tools that run on a computer that you can just talk to via messenger like Telegram or WhatsApp or Discord or Slack, and it will just get stuff done for you behind the scenes. You don't even have to think about what it's doing on the computer. I mean, actually, you should think about it because there are security issues there, but ultimately, that's where we're going.
This increasing utility and finding ways to use AI well... I feel like we're only at the very beginning. I think it will take us years to really unlock the power of the models we have right now. The rise of agents, the rise of things like skills and model harnesses like Claude Code—all of those kinds of tools are packaging these models up and giving them to customers. Customers are finding ways to use them even better, and then they're building those back into products.
There is another underlying trend which I think will be a defining trend in the next year or two: the rise of local AI and open-source AI. The best model out there at the moment is still, I think, Kimi 2.5, which came out a few weeks ago. DeepSeek just released another model, but Kimi is still about 6 to 8 months behind the frontier models. What that means is if we've hit a tipping point with the frontier models, by the end of this year open source models should have caught up to that point. Therefore, everyone will be able to use models much more cheaply.
The Security Red Flag
Paul: You've dropped lots of really interesting points there. I completely attest to that inflection point. I remember there was a time on social media where you'd see "this changes everything" every week and I got a bit sick of it. But at the same time, it did!
We seem to just be on this trajectory where things keep getting better. We're like, "Oh, we think we've topped out there," and then we didn't. Things like NotebookLM came out and the way it was structured and accessible did cool stuff, and I was like, "This is amazing."
I do a lot of coding. I'm a researcher and I didn't trust my coding tools; I always had oversight over it. And then I saw some of these frontier models coming out and saying, "Well, our engineers don't do that anymore. They let the models write the code." I was very skeptical. I thought, "Well, maybe it's a frontier model thing and they've got their own special sauce..." But now I'm in the same place. It's happened in the last month that I've grown in confidence enough to do it, but also the capability has taken off. My job is different.
And this idea of OpenClaw and automated agents that do useful stuff... once we solve the security issue, it will be radical. But can we solve the problems? What are the problems? I've read your piece on it and it's a bit of a red flag right now, right?
Chris: Massively. It depends what you're using it for. Fundamentally, the problem you have with agents generally is they're not the same as regular programs. They're what computer scientists call non-deterministic. That basically means they can do different things each time you ask them to do something. It's not going to be predictable. Whereas a normal computer is deterministic—it will do a sum in the same way every time.
The problem is that they are fallible just like humans are fallible, and therefore they can be fished in the same way. The idea that someone can send you a rogue email and trick you into sharing your password... people understand these issues and many of us are educated against clicking on weird links, but it isn't really a solved problem.
AI has the same fundamental problem. It can be tricked by rogue information that comes into it to leak information that you've given it. That's not really solvable in the same way that fishing isn't really solvable because you can get ever-more sophisticated attacks.
The absolute latest models, which are quite expensive to run, got it wrong about 4% of the time. So one in 20 times they fell for the trick. Now that's fine, except if you're putting your credit card numbers or your bank details behind that. We hear stories of people giving these agents access to their emails. Once you've done that, you are leaving yourself open to the AI being tricked to give away that information to a malicious attacker. They can send them emails, or even hide things inside images or Reddit posts where your agent is merely trolling through the internet, comes across an attack, and ends up leaking your data.
So OpenClaw is fundamentally insecure like many agents. It is something that you should use with caution. If you are careful with the information you give it, then you can limit the effectiveness of an attack like that. What you've got to be careful against is just loading it onto your local machine and giving it full access. People often just click through security warnings and say, "Oh, yeah, it's fine." It could be leaking your data without you realizing it.
Training and the Reality of Prompt Engineering
Paul: I want to ask you a few questions about how companies can make use of these technologies. I want to be controversial to start with because I see people saying all the time that "prompt engineering is dead." In the past, some people thought, "Give your people access to AI and that's it." But that is not what I see, and reading your newsletter, I don't think that's what you see either.
Chris: Prompt engineering is "dead" in the sense that you don't have to come up with a lengthy, whole page of text in order to get it to do what you want. You can ask it a simple question, be ambiguous, and it will often infer what you mean.
Having said that, it can be difficult for people to know how to ask AI for the right things or even what to ask for. For example, if you're doing coding using AI, sure, you can ask for something in a few lines, but equally, you're not going to get a good response because it's just going to do something random. If it doesn't know the answer, it will randomly fill in the gaps for you.
The answer is to make sure that you are having the AI pull information out of you so that it knows what it needs. I'm pretty vague when I'm thinking about what I want to do next. So, what I do is I say things like, "I would like to build a to-do list with sorting and tagging, but I'm not really sure." Instead of spending ages trying to specify that, I just dump that into the AI, usually via dictation, and say:
"Interview me, asking me one question at a time, in order to make this an unambiguous and clear set of requirements that I could then use to build this feature."
Then the AI talks back to me and asks all the questions I need to answer to specify the feature. It pulls the information out of me. I find that a much easier and more natural process than writing a big prompt. But you have to know how to do that. You can't just say to someone, "Here you go. Here's Co-pilot, off you go." People need training.
If you have never done anything like this before, the key thing you could do today—right now—is say to an AI (whether it's ChatGPT, Claude, or Gemini): "I would love to know how I can use AI better but I'm not quite sure how I'm going to be able to use it in my work, so I'd like you to interview me and ask me one question at a time so you know enough about me to give me advice." It sounds a little bit meta but it works and you will end up having a fascinating conversation with your AI agent.
Symbiotic Workflows
Paul: And that actually is a pattern that I use all the time and it's now so obvious to me, but it's not necessarily obvious at all. I love the fact that AI can take my "brain soup" and turn it into an ordered, structured thing and be a thought partner.
For me, it's usually me dictating into the AI and then reading the answer back because I'm quicker at reading than I am at listening, but I'm also quicker at talking than I am at typing. I love the fact that we can try different things and that we all have our own ways that we can interact with it that work well for us.
Paul: What I love about that flow is that it's not "I'm asking the AI to solve my problems," but I'm putting it in the loop with me and helping to just pull what I'm thinking out. It's this really symbiotic thing. I often have people who are a bit like, "Oh, I don't want to use AI," and I'm like, "Well, just ask it to ask you questions and let's get the best of you."
AI Readiness in the Organization
Paul: Do you want to talk a little bit about what you do and maybe give some examples where it's had impact?
Chris: Absolutely. I work with companies from about 30 people all the way up to several hundred. I tend to go in with a team that knows they need to use AI—the team is probably already using it—but they feel a bit out of control. They're seeing their cloud spend or bills go up but they're not necessarily seeing a lot of productivity.
So what I tend to do is sit down and figure out what's happening. I also do readiness assessments. Not all teams will get the most out of AI. If you've got a really messy, old codebase that is hard to work with, AI is going to struggle with it. So often one of the earliest things I recommend is that teams use AI to try and clean up their codebases.
In a similar way for operations, finance, or product teams, we're looking at how accessible the context of the business is to the AI tools. Once the tools are in place, I spend time training the team to ensure they are using them well. I've seen really good adoption—people moving through the gears with Co-pilot, using Claude on the web, and then moving into Claude Code. I've seen people get a three-month project done in three days. Then I work with the leaders to say, "Okay, your coding is moving quickly. How do we make sure that translates to releasing more quickly or running more experiments?"
How to Connect
Paul: So if people wanted to reach out to you, what's the best way of contacting you?
Chris: My website is a great place to be: chrismdp.com. And my LinkedIn as well—do reach out on that. I run webinars once a month on interesting topics, and I have a newsletter that I share once a week which has more details about that as well.
Paul: Awesome. And you've got something today, in fact: LLM London.
Chris: Yes. LLM London happens every month or two. It's an event that I helped start for people who are specifically trying to build the future of technology with AI in their products. It's quite a tech-heavy group: developers, product people, startup founders. We get together in the pub, but we also do events with speakers. If that's your kind of thing and you're in London, it's a great opportunity to hang out with like-minded folks.
The Problem of "Insane Productivity"
Paul: Brilliant. So, if you could build one tool with AI that does not yet exist—blue sky thinking—what's the problem that is not yet solved that you would just like to solve today?
Chris: I don't know if it's a tool, but I'd love a cultural shift. I would love it if people were able to use AI in a way that genuinely enriches and benefits their lives rather than just working harder.
I've noticed that using AI for everything—I use Claude Code for all my work now—it is uniquely exhausting to work like that. I worry that people are going to be taken advantage of and work themselves to death for an unscrupulous company. I also worry that the opposite might happen and people might just get lazy—where someone does 3 months worth of work in three days and then sits around for the remaining two and a half months. That also feels pretty unethical.
So, somewhere in the middle, it should be possible for us to be more effective and fulfilled humans—not get to the point where just because we've got tools that make us more productive, we end up working five times as hard. Maybe all it boils down to is a timer to tell you to stop when you're using AI for too long.
Paul: If it were a tool, I would use it! This is a genuine problem. You talked about it on LinkedIn, saying it makes you "dangerously productive." Ordinarily I hit my slump and I'm like, "Okay, time to take a break," but I don't hit that slump anymore because I can keep on working and have 10 Claude Codes working on multiple problems. I find myself at 2:00 AM thinking, "I really need to sleep, but maybe I'll just wait until this next problem is solved." How do I stop when I'm insanely productive?
Chris: We're just at the beginning of figuring out that this is a problem. I took some time off last week and it was very difficult. I'd realized that I had connected it to my phone so that I could just call my phone and get Claude Code to work for me. I'd been working 16 days straight without really stopping and not really paying enough attention to my family. That's not a good way to live.
We need better feedback loops. Right now, AI is like the employee who needs to be supervised every three and a half minutes. It's very persistent in asking us questions. So I feel like as AI improves, perhaps what we should be most careful to work on are the feedback loops and the checking processes so that we can let them keep working more autonomously, but figure out ways of communicating back what's happening so that we can manage our own supervision in a healthy way.
I'm thinking a lot about those loops—using MCP tools so that a AI could, in theory, conceive of, implement, and run an entire experiment using a platform like PostHog, run an AB test, see whether it worked, and then move on. If it's possible to close that loop, then we could just let them run and maybe get some sleep. There's hope.
Paul: Thanks, Chris. That's been a really interesting conversation, and I hope that it's valuable to the people that are listening as well.