Episode 9
January 22, 2026
In this episode, Quinn and Thorsten discuss how everything seems to have changed again with Gemini 3 and Opus 4.5 and what comes after — the assistant is dead, long live the factory.
Transcript
Thorsten: I think my new slogan is, "the assistant is dead, long live the factory." Meaning that agents writing code is a given. That the assistant thing, the one-on-one, works, yes. The question is: if you give them a longer leash, and if they run on their own, what else can you do? And that means you can build your code base into an agent-native code base, and you can build up a factory where, from anywhere, you should be able to send a prompt to your agents and have them work on your code base. I think that's the next big thing, because the models are already ready for it. And in case they're not, they will be. Hello and welcome to Raising an Agent, Episode 9. It's been four months since the last episode and we're back.
Quinn: I'm Quinn on the Amp team and you got me and Thorsten here. A lot has happened since August, since we last came on and talked about what it's like to build Amp, to build agents in this crazy, crazy environment. So Thorsten, what has changed since August?
Thorsten: Reality has caught up with all the things that we've been saying, you know, a year ago, or half a year ago, or nine months ago on this very podcast. So I think one of the biggest changes is that since the last recording, Gemini 3 came out. Fantastic model. I think that was an inflection point. Then shortly afterwards, a week later, Opus 4.5 came out. And people now say that was the inflection point. As a Gemini 3 fanboy, I think it's the other way around.
Quinn: Yeah, one week before Opus... And what's so interesting is that before Gemini 3 and Opus came out, a lot of people were thinking it felt like progress in the state-of-the-art models had slowed down. And you had Cursor and Windsurf doing what made sense given the hand that we were all dealt: trying with a really fast model that could make some rough changes that you'd be a lot more hands-on with. And in the end, these better models came out and washed away all those well-intentioned good ideas. And now the use case for a fast model that's way behind the state of the art doesn't really exist for most people, unless you're trying to save on costs. And even then, what you found and pointed out, and what's been really interesting, is seeing that a smart model is often faster and cheaper when you're actually measuring getting the thing done.
Thorsten: Yeah. And I think, well, everybody's been saying this, right? That, "hey, the models will get better." And we've also been saying: prepare for the models to get better. What's happening now is that a lot of people are waking up to how good these models truly are. And what you see is that people's expectations, or even the level of trust they give a model, are often lagging behind the frontier. It's like two, three, four, five, six months behind. So when I say writing code is over, like I said this a bunch of times, you know, that's over. Like, in a file, Opus 4.5 can work at that level. And then people say, "oh, it can't do it for me." And then it turns out they haven't tried Opus 4.5. They tried, you know, Claude from before, or Gemini 2.5, or something. And what we've seen, and it made me grip my table, was that over Christmas, a lot of people changed their opinion on agents. They were kind of skeptical of agents. And when I heard their comments... Andrej Karpathy is one of them, right? I think in October or November he said, "agents are slop. This is not good. They can't write code. This is slop. It's too much vibe coding. What you want is the Cursor experience of an assistant, and you want a tab." And I remember us talking about it internally, not recording, and I remember thinking, what is he talking about? This does not match what I see at all. And then it turns out that over Christmas he tried out Opus 4.5 in one of the newest agentic harnesses. And then he's like, "oh, I feel so behind. Everything is changing. These agents are crazy. They can do a lot of stuff." But then Antirez, the creator of Redis, probably one of the greatest programmers in the world, one of my heroes, he's been saying, I think, that he uses ChatGPT, or he used Gemini 2.5 a lot, and he copies and pastes code. And I was like, why does he, what's the problem with these agents?
Like, why does he not use them? And I don't want to say he never tried them, but even two weeks ago, he published a post to say: this is happening. Agents will write most of the code. This is happening. Yesterday, Ryan Dahl, creator of Node.js, tweeted, and I might not get the exact words right, that the era of humans writing code is over. And I don't want to say we are ahead of everybody. I think it's a crazy time where everybody sees the same things, but has different expectations or experiences. And so they all come to different conclusions, and everybody's kind of anxious and wants to have an opinion on which they can stand, like something solid. And we end up where some people say agents can't write code, and others go, like, writing code by humans is over. Big picture, though, I think agents will write most of the code in the future. I think that's a given, and we can step towards the future based on this. There's no doubt in my mind. There will still be cases, obviously, where you have to go in and write code by hand, and maybe some types of software will be written 100% by hand. There will be exceptions, just like people still write assembly, and there are, you know, people who always go in and know all of the layers and do all of the things. But the general industry trend is going towards agents writing most of the code. I think that's...
Quinn: Why do you think those really smart people, who at the time were already way ahead of the median developer in their use of AI... Why do you think people don't just assume the best from models? Just assume that they're always going to be getting better, have that awe? You know, it's dangerous to lay out a statement of "agents cannot do X" — it's not going to last very long. Why do you think people do that?
Thorsten: I don't know. I think...
Quinn: Where have we been... Where have you been too pessimistic about models?
Thorsten: I think what I've been too... Yeah, what I've changed recently, like in the last two months, or even the last month, is... two things. One was, I always thought these models can't write that well. It always sounds trite, and they use these phrases, and ChatGPT, like the GPT models, are still prone to this, like "it's not this, it's that," you know, these phrases, there's a whole catalog of them. But then with Gemini 3, for the first time, I handed it a collection of my writing and then asked it, like, write something in my style. And I was impressed. This was really good, like, impressively good, where I realized, ooh, "this might change too." Like, you can probably start giving it 10 collected pieces of your own writing, and it can possibly get really close to what you would write like. So that's one thing. And the other thing that I've been too hesitant with: I've been stuck in this mindset of the assistant, where it's a one-on-one conversation with an agent. It's me, the person, directing an agent, and maybe directing multiple agents, but it would be this, and I mentioned these exact words on this podcast, it would be like this bob and weave, push and pull. You send it off to do this, and it comes back, you review the code, you send it back off again, and you tell it what to do. And it goes back and forth. And the models have gotten so good that you can now give them a longer leash. As in, instead of saying, "go over there and fetch me this and then come back, and then I'll tell you what else to fetch, you know? And then I'll show you how to mix it together," I think you can now go, "here's the shelf, go and bake me a cake." And they will figure out what to fetch and how to mix it.
Quinn: And go taste it.
Thorsten: Yeah, "and go taste it and come back. And if you tasted it and it's burned, then turn down the temperature on the oven and try again."
Quinn: Yeah. Yeah. And at the time when agents were starting to get really good, like last February, when we started this podcast and Amp, you could use the term "assistant" to refer to the previous way it was done, where it literally was just one textual response and no actions. "Agent" started to mean you actually have it run commands and get into feedback loops. Now, where we are today, it does feel like there are people that are still using an agent as an assistant, like what you said. They're keeping a really tight leash on it. Even in Amp, where the vast majority of people run with no permission prompts and it can do anything, most people have good feedback loops set up. Even so, there are some people at the next level that are pushing forward. Even in our own repo, in the Amp repo, in the last couple of weeks, we made a ton of progress toward getting the feedback loops to be really fast and really autonomous so that it can always run. And it was humbling for me to see: I thought we had good feedback loops, but actually they could have been so much better. And now with things like auto handoff, which will let it work down a list of tasks and then automatically start a new thread when it's done, we've unlocked a whole new level of autonomy, and it almost feels silly to call what we were doing before "using an agent," because it wasn't that agentic.
Thorsten: Yeah, it's strange, it's really strange... Basically, there's this thing, I don't have the phrase for it, but basically the models get better so they can do more, but at the same time, if you adjust your code base to the model and allow it to, you know — what I said, the phrasing I used was: think of your code base as an application. Does the agent know how to use it? Is it agentic? And we all kind of know what that means for other things now, or a lot of us do. Like, if you have a website, it should have an llm.txt thing that explains it. Or if you have an API, you should have one document that shows how to use it, and the agent can go and use it. And the question is: is your code base in that shape? Is your code base ready for agents? And once it is, you realize that it can go far longer on many tasks, and you realize the model can actually do more. It wasn't the model's fault. It was the fault of: it's not clear how to do this. If you take that image we've used a bunch of times, where we take an engineer off the street and say, go and fix this, look at these two files and fix this bug, they would turn around and ask you, like, "how do I test that this works now?" The models aren't trained to say, "how do I test this now?" They're trained to say, "now you should run the tests," or, you know, "I didn't run the tests," or something. So, yeah, that's the next step, I think. The wording I use is: you want to weld the agent to the code base. You want to make sure that the agent, when you combine it with your code base, knows exactly how to verify its changes and get feedback, and make sure that what it did actually works. And that is different from code base to code base. Some code bases are easier to do this with than others. But if you can do it, if you can get closer to this, yeah, you've unlocked a new level.
Quinn: Yeah, that's something where my mind has really changed. With Amp, we're really fortunate to have users that really want to be on the frontier and to put in a little more work, and we don't have to make Amp watered down for everyone. And one thing that we had seen in the past is cloud IDEs. It's so obvious: wouldn't it be great if you could just have a development environment? You could have unlimited numbers of them. You could be working on all these things. And this is before AI. And they never really took off to the degree that a lot of people expected they would, except in a few companies where they put in a ton of work, because it's a huge amount of overhead to maintain a CI environment, your prod environment, your local dev environment, and a cloud IDE, and they would always just drift. There are enough companies, teams, code bases out there that are going to treat the agent development environment as the main thing and put in that work, so that the agent has incredibly good tests, so that it's isolated and sandboxed and can be parallelized. On the Amp team, we want to pretty much exclusively build for the people that are willing to put in that work. If we're not building for the people that want to put in that work, then we're not building for where the agent can do the best. The value of an agent that can do a lot is so much bigger than anything else. That's one thing we're going to be doing on Amp.
Thorsten: Yeah, I think that's the big focus. This is the future. And we can talk in a second about what that means for the product, for Amp. But just to make this more concrete: I don't think I've mentioned this before on this podcast, and I don't know if anybody wants to be named, so let's keep it anonymous, but basically there's a CTO who said his team doesn't use agents or AI a lot, and this was, I don't know, four months ago. And he said he's not worried about AI, or about competitors using AI. He's worried about competitors having an AI-native code base, one that's built for agents, that's built alongside agents. And what he means is that if you have a code base in which an agent can do a lot of stuff and can get good feedback loops, you can move a lot faster. Think back 20 years to Joel Spolsky's list. Back then it was, you know, "can somebody ship something on day one?" That was the GitHub meme, right? Can somebody ship something on day one? And everybody was aiming for this and saying, look, I need one command to run to set up the dev environment. We need to make it super easy to push. We need to make it super easy to review. We need CI. It meant all of those things. So now the question for 2026 is: can your agent ship something in the first 10 minutes of you letting it loose? What do you need to do to make this happen? And that means an AI-native code base. What does that mean in practice? Two examples. One, and we have a video of this, I think is a practical example. I worked on this terminal emulator in Rust, and it had rendering issues. And I assumed, well, it's GPU accelerated, it's a native application. How do I give the agent feedback? But it turns out I kept sending it these screenshots. I kept saying: the character drawing, this is off. The margins are off. This looks wrong. And I kept sending these screenshots.
And the models are really good with screenshots, right? So I built into the application another CLI flag that says: capture to here. And then you can specify the path of an image file. In your application, right? And I'll get to the objection in a second. In the application, I built the feedback loop. So you can start your application and say: start, do this, and then drop a screenshot here. And once I had this, and I let the agent build it, of course, I could basically remote-control the terminal and say, "start the terminal, run this command, and then take a screenshot and put it here." And once I had this, the agent went off flying. It was like, "let me take a look at what this renders like now." And then it figured out it doesn't render. And then it went off. The objection would be, "oh, you're changing the main application to make it easier for the agent to get feedback." Yes, and it's the same as how I would make a code base easier to test with automated tests. That's something I do. The other practical example is from two weeks ago. I added a new command to the amp-tui. I started with, "hey, Amp, in order to test this, run the TUI in tmux, then hit Control-O, then type in this, and then run this command." And it can do it. It's really good at controlling stuff in tmux. But it took a long time. And it was error-prone, because sometimes there were timeouts that didn't match, and whatnot. So I thought: I don't care about the UI presentation for now. Let's take the data that's displayed and build a new subcommand for the CLI that only outputs the data. And then the agent can run it, because it can run CLI commands. And then again, it went off. It was this hamster wheel of the agent running in the feedback loop. It just ran this command over and over and over again. And I think that's exactly what we need more of: these ways for the agent to use your application.
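The data-only subcommand pattern Thorsten describes can be sketched in a few lines. This is a hypothetical illustration, not Amp's actual code: the `dump-state` subcommand name, the `loadState` helper, and the thread data are all made up for the example.

```typescript
// Sketch: a TUI whose displayed state can also be dumped as plain JSON,
// so an agent can verify changes by running one CLI command instead of
// driving the interactive UI through tmux.

interface AppState {
  threads: { id: string; title: string; status: "running" | "done" }[];
}

// Stand-in for whatever the real app does to gather its display data.
function loadState(): AppState {
  return {
    threads: [
      { id: "t1", title: "Fix rendering bug", status: "done" },
      { id: "t2", title: "Add capture flag", status: "running" },
    ],
  };
}

function main(argv: string[]): string {
  const state = loadState();
  if (argv.includes("dump-state")) {
    // Agent-facing path: stable, parseable output, no TUI involved.
    return JSON.stringify(state, null, 2);
  }
  // Human-facing path: here the real app would start the interactive TUI;
  // we just render a plain-text stand-in.
  return state.threads.map((t) => `[${t.status}] ${t.title}`).join("\n");
}

console.log(main(["dump-state"]));
```

The point of the design is that both paths read the same state, so asserting on the JSON output really does exercise the code under change, and the agent's feedback loop becomes "run command, parse JSON, check fields" rather than scripting keystrokes and waiting on timeouts.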
And then what you want is agents.md files, instructions, prompts that guide the agents, so that the agent knows: if I make this change, this is the thing I need to use to get feedback.
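As a rough illustration of what such a file might contain, here is a hypothetical agents.md fragment wiring the two feedback loops described above into instructions the agent reads. Every command and path in it is invented for the example, not Amp's actual setup:

```markdown
# AGENTS.md

## Verifying changes
- For TUI data changes, do not drive the interactive UI. Run the
  data-only subcommand (e.g. `app dump-state`) and check the JSON output.
- For rendering changes, use the screenshot flag (e.g.
  `app --capture-to /tmp/screen.png`) and inspect the resulting image.

## Tests
- Run the test suite before finishing. A change is not done until it passes.
```

The file is just prose the agent reads at the start of a thread; its value is that the verification path is written down once, instead of being re-explained in every prompt.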
Quinn: Yeah. So the agentic code base, that's like 10 different things, like the Joel checklist, which we'll link to.
Thorsten: Yeah, exactly. Yeah.
Quinn: Yeah. And also, how does it authenticate to your web app if it's a web app?
Thorsten: Yeah.
Quinn: How can it click around? One thing that we've thought is this actually means that web apps have an advantage, and I guess terminal apps too, because they're a lot easier to control than a native app. I know native apps have accessibility trees, but I think it's a lot more complex. And then, you know, we're expecting to see a shift there. If it's so much easier to build a certain kind of app than another, then you're going to build that kind of application. And it's going to be a real decision point for a lot of teams when they choose to use some framework or technology that maybe they don't prefer, for their own idiosyncratic reasons, because the agent does better with it. And I would posit that making the choice to do what agents do well is the right software engineering choice for the future, even if it's not your own taste. Like, I really don't like classes in TypeScript, but the models really like using classes, and I give in. That's a minor example.
Thorsten: Yeah, I think that's a good point. It's always been like this, right? The naive view would be that you choose a technology based on the merits of the technology by itself, in isolation. As in, I choose a programming language because for this problem I need this programming language; it's good at solving these types of problems. That's just the first step. The actual thing you do when you have a company or a team is figure out: can I hire for the language? Do I have people who can write this language? It doesn't make sense to start a company that uses Haskell in Frankfurt if your goal is to scale up to 200 engineers, because there aren't 200 Haskell engineers in Frankfurt. It's the same with these choices now. You have to do the actual work, and this is engineering. You have to find out: here's my problem, here's the technology with which I can solve it, here are all of the other constraints. Meaning now: how do the agents do with the proposed solution I have? And then you need to adjust. And that might mean, you know, using a well-known library that's slightly worse in some respect than another library, because the agents are better at using it.
Quinn: Yeah. All right. So we think there's going to be a lot of change. In the past, agents and AI never asked companies to change their code base, to change these things that are so core and the source of so many debates. But now we think that for agents to do really well, you're going to have to do that. And for Amp, and for all the other agent makers out there, we're now in the position of having to ask our customers to put in a lot of upfront work that will not pay off immediately, and that will probably spur a lot of arguments on their team. And when we talk about Amp wanting to always be able to change fast, and having the mortal fear that everything is changing, this is exactly why. Because we have seen how easy it is to just optimize for the way things are today. We think the Amp product is going to look totally different in three months, and actually the rate of change is going to go up. And for our customers, we feel like we have a duty to keep them on the frontier more than we have a duty to keep maintaining the way the product is, because there are 10 other coding agents that are going to take the way coding agents are today and still be offering that in a year. I don't think anyone's going to want that, but a lot of companies will think they want that right now and will be upset if things change. So we are going to change. We're going to ask customers and users to do the hard thing. The $10 free daily grant that we shipped, people love it. That's a nice way that we can incentivize people to make some of these changes, whether it's feeling like, oh, the Amp team subsidizes me, so I'll try things a little more on the margin, be a little more forgiving, or just that it's less expensive, so this still makes sense. But that's a real question. And if you're using a coding agent and it's not pushing you to make changes to everything in how you build software, then it's probably not pushing you enough.
Thorsten: Yeah, just to underline this point: if all progress stopped right now and the models did not get better, a lot of stuff would change, right? A lot of our calculations would have to change. Meaning you would then say: okay, if the models are at this level and they're all frozen in time, then maybe I can optimize for cost, and I'm going to run local models, and these local models aren't as smart as Opus, and then I'm going to optimize for the specific use case, and I'm going to use two local models that complement each other, and whatnot. But the problem is, if you do this right now and you try to make non-frontier models work and optimize for cost, what you're building is something that will be outdated in half a year.
Quinn: And you're building it for people that, by the very definition, do not want to pay a lot, which means you're not going to have a business or an interesting product.
Thorsten: Exactly. And if I were now to build an agent and try to make it work for Claude 3.7, you know, or 3.5, let's say 3.5...
Quinn: Thank goodness Anthropic doesn't, like, discount that massively, or else people would ask.
Thorsten: They would ask, yeah. But it's a waste of time, because you're solving problems that are already solved in the new models. And the other thing I want people to know, agents or not: some people say the capabilities are plateauing, but I think the curve from model to model, or from generation to generation, is so weird that you cannot get a good impression of what the frontier looks like by looking at the smaller and cheaper and worse models. I don't think you can look at Qwen 3. Like, it's a great model, Qwen, right? It's good. But I don't think you can look at a self-hosted open-source model and then infer from that how the frontier models will evolve. I don't think there's a clear path that lets you do this projection. So you need to look at the frontier. And that's what we're doing.
Quinn: Yeah.
Thorsten: And yeah.
Quinn: And then, so I think no model selector means we are not beholden to things that were on the list three months ago, six months ago. No subs, not supporting the Claude Max subscription, lets us have no model selector, and it lets us switch to the best model, use models in interesting ways, all this stuff. And the only thing we ask is: are we going to learn from this? And is this going to push the frontier? It's not about whether it's useful today, because a lot of these things, yes, they can be useful today, but they're not going to be useful for very long. And a million other people are building products to let you do that. So why don't we do something different?
Thorsten: Yeah, that's a fine line to walk for us, though. Admittedly, it would be nice if we could make everybody happy and build everything for everybody. But what we need to do is always be on the edge. Otherwise, we fall behind. And that's of no use to anyone. So yeah, that's what we're doing. I want to go into the Q&A a little bit, like five, ten minutes, but to round this off, you know, what would you say?
Quinn: Yeah, what's next? What's next? Other than the agentic code base, how are people using these things? How are we using Amp differently in the last few weeks?
Thorsten: I think my new slogan is: the assistant is dead, long live the factory. Meaning that agents writing code is a given. That the assistant thing, the one-on-one, works, yes. The question is: if you give them a longer leash, and you can right now, and if they run on their own, what else can you do? And that means you can build your code base into an agent-native code base, and you can build up a factory where, from anywhere, you should be able to send a prompt to your agents, or the multitude of agents, and have them work on your code base. I think that's next. That's the next big thing, because the models are already ready for it. And in case they're not, they will be. So, yeah. Do you want to go into any concrete things here?
Quinn: Yeah, I agree. I think the human's job is going to be: how can you do two things? One, force-feed your agent a stream of tasks that you have the intuition it will succeed on. And two, how can you expand the scope of what tasks it can succeed on by making your code base better: better feedback loops, giving it the capability to finish a task and, as reliably as possible, give you the minimal proof necessary that you can look at, in most cases, to know that it's good. The perfect case would be if all you need is to hear the ding, like I'm going to hear any minute now because I queued up an Amp thing a little while ago, and you have confidence. But it might be that you want to see a demo or a screenshot or something like that. So those two things, that's what you should be focused on, and then it's just: strap it down, force-feed it these tasks. And the thing that we on the Amp team are looking at building now is: what does that task feeder look like, and the way to make your code base agentic, and how do you know if it's agentic, how can it become more agentic based on where the agent stumbles, and so on. That's what we think is interesting in the next couple months.
Thorsten: I'm going to add something that I don't think is obvious to a lot of people: I think a lot of the dev tooling we have is not going to cut it, because a lot of that tooling is based on the assumption that a human wrote the code, that a human put a lot of effort and time and expertise into writing a given piece of code. And that flows through everything in our dev tools. Somebody tweeted this last week. He said, we're trying this on the team: instead of creating a Linear ticket, we just send off an agent. And it sounds so trite. But think about it: if you have agents, you can just take a bug description and send it off and have an agent investigate, in the same amount of time it takes you to create a Linear ticket. But in a world where a human writes the code, and they have to get context and set up the dev env and switch branches and blah, blah, blah, and get into the headspace of fixing a bug, not all of the bugs, but in that old world you would create a Linear ticket, because the developer was busy. They were doing something else. Only next week, when the new sprint starts, can you assign the ticket to them. But if you now have unlimited entities able to investigate your code, why shouldn't you send the agent off immediately, before you create a ticket? That's one example. The other example: somebody asked me, what are your gripes with GitHub? And they were, I think, concerned about, like, is it a single-page application, load times, whatnot. But I think GitHub is built on the assumption that somebody put a lot of effort into a change. The pull request thing. You can emoji-react to it. You can leave a heart emoji, a smile emoji, right? You can assign people to it. But in a world where agents write 90% of the code, the perceived value of a given change is completely different, because you can actually say to the agent: it's completely wrong. Ask your agent friend to spin up another one. Make 10 variations of this.
What would the interface look like for this? If you have, in your words, Quinn, the primordial soup of agents and code that's always bubbling and brewing and generating new code, I don't think the current tools are going to cut it. So when people think about the factory of agents producing code, they might think: oh, I'm going to put a ticket in here, and then the ticket goes to an agent, and then the agent does this, and then it spits out something and creates a pull request, and then I'm going to review the pull request, and I'm going to do this. What you're doing there is treating agents like humans. And you're creating bottlenecks that shouldn't be there. So I think a lot of that will change.
Quinn: Yeah, in particular, I have been really pleasantly surprised at how many companies I see that used to be really intent on formal code review, like the religion passed down from Google, where now they realize that if you've got a senior engineer who you trust deeply, and they are writing the code with the agent and reviewing the agent's code, then that is at least the quality level of a human-written and human-reviewed piece of code. And so you're seeing a lot more teams do what we've been doing on Amp since day one, which is pushing to main for the Amp core committers. And we're living a lot of this new world. Another big piece of news since the last recording is that we have spun Amp Inc. off from Sourcegraph, and we have 20 people on the Amp team. We are all coding. We're all using Amp constantly. We do not have a lot of enterprise salespeople. We're not trying to make the product appeal to the median enterprise or anything like that. We don't have PMs. We don't have a lot of these traditional overhead roles. And we're kind of starting from scratch and building Amp Inc. the way that we think a lot of companies will be built, or at least trying to figure that out. If that way of working appeals to you, then know that we're in the same boat, and we're going to be sharing what we learn, and we look forward to seeing everyone figure this out together.
Thorsten: Yep. All right. Let's do a quick Q&A section. Why not add... I mean, this comes up a bunch of times: why not add a model selector?
Quinn: What are we going to learn from it? That's what we ask. And look, on the Amp team, we select the models that Amp uses by asking: which models do we use ourselves? And we try all kinds of models. We get early access to models. And if we had a model selector, not only would we not learn from all of you how you're using Amp, it would just be so much overhead. Everyone would be trying a different combination of models. Amp would be a self-assembled kit rather than a product. We wouldn't learn. I don't think you'd like the product as much. And it is so valuable that there's one way to use Amp. Actually, I was sitting with a power user, and he said that he likes that there aren't a million different knobs. And, you know, there's the old way, which we deprecated three months ago, but you could still be using it. You know, this idea that even some of the smartest people in programming are a few months behind the frontier. If you're using Amp, it's only possible to use it in the way that we think is good. Or at least we try to make it really hard to use in an archaic way.
Thorsten: And I think the other thing to add here is that everything is changing. The software itself is changing: a lot of the software we're seeing now has this non-deterministic element in it called an LLM. So the software itself is changing, how we write software is changing, and who or what writes software is changing. And you have to ask yourself as a software engineer, if you are not working on an agent: when everything is changing and it's really hard to figure out where this is going, do you really want to spend your time exploring the strengths and weaknesses of five, six, seven different models and switching between them? Or do you want somebody else to figure this out for you so you can actually worry about what matters? Everybody has a different answer and a different amount of time they can spend on anything in a day. But if you're not working on this, I think it's a waste of time. Next question: why not offer a sub, a subscription?
Quinn: A subscription. The idea is that it's something where you pay 200 bucks a month and you get $2,000 a month back. It's like a fountain of youth or a perpetual motion machine. I mean, if we could do that, we would. The reasons why Anthropic can do it: one, I think they're growing incredibly fast. They have a ton of cash. They've made a lot of upfront commitments. They have to provision for peak usage, so they've got lower-marginal-cost usage; they're just paying for the power there. That's a business decision. They own the model. For everyone else who doesn't have their own foundation model, it's pure cost. And after Anthropic said that Claude Max can no longer be used by other tools, there are some other places that offer subscriptions, like OpenCode Black with a Copilot-style subscription, and some people are switching over to that. But I don't think it's possible for, say, OpenCode Black to offer you two thousand dollars of tokens for 200 bucks. There's just no way. If we were to offer that, it would be endorsing a terrible rug pull for our customers. I think it would also add a lot of friction to how we want to build Amp. Because let's say the models that the sub offered access to suddenly became not the best models that we wanted to use in Amp. Then if we were to switch, we would have a lot of users who say, well, you just jacked up my price by 10 times or more. That's not the kind of user we want. We want the users that want to be on the frontier and exploring this with us. But if the magical sub where you can get 10x your money shows up, then yeah, we'll do it.
Thorsten: And the thing people have to keep in mind is that this is not the SaaS era anymore. It's a well-known, I was close to saying common, tactic to go into a market and subsidize usage so you capture the market, right? You say, for our users, we're going to offer the same product as Netflix, but for a $1 subscription or something, and then you can watch all of the movies, and you're going to capture the market. The next move is that your competitors die and then you jack up the prices. That's a move companies have done in many different markets. The problem is that this market is so crazy that you cannot jack up the prices, because the only way to make it work, in any approximation, would be to have your own model or run inference or something. And for that, you need all of the trucks in the world full of cash to get into this model game. It's not a feasible play. You can't say we're going to offer the subscription and then we're going to make it work by training our own model. If you're not one of the 10 people in the world with access to that amount of cash, I think it's just a waste of time. It's just going to burn too much money.
Quinn: And if you have access to that amount of cash, I'm really grateful that a lot of those companies are making incredible models that we all benefit from. But then, in practice, you're tied to just using that model. And if there are now three or four other companies making models that are competitive at the state-of-the-art level, with other differing capabilities, then if your product only uses your own models, and that's the primary selling point, it's very possible that that product will no longer be the best. Or there could be some change: maybe you bet wrong in the pre-training, or in the direction of how these things are going to be used, or maybe you get too big and you slow down. And, you know, everyone, whether you're making models, coding agents, whatever, everyone is working like crazy, everyone is paranoid. And yeah, I mean, I kind of love it. There's so much change going on, and a sub would just make it harder for us to change Amp. I don't think the magical sub that people want actually exists.
Thorsten: All right, last thing, real quick. People apparently constantly look at our binaries, trying to figure out what's in there, what we're shipping behind the scenes. We're not going to name any specific thing, but somebody was asking about the new Deep mode. Let's each give two or three sentences on what it might be.
Quinn: I heard the ding from Deep mode like a minute ago, and I started it right before we got on.
Thorsten: There you go. That's a really good teaser. I think one of Amp's advantages is that we can tailor models to specific use cases, right? For example, rush mode: if you want small, quick, dumb changes, you can use a smaller model. It's faster, it's cheaper. Smart mode: your normal agentic programming. But there's another kind of model that I think is really good, and I think it lends itself more to the future, possibly: the factory use case, where the agent goes off and does something for a while and it's not coming back. It doesn't ask you for the recipe and what else to take off the shelf. It's a model that goes and figures stuff out on its own, and I think that's a different mode. And in that different mode, you want different tools. Say another model constantly asks: what should I do? Can you paste me this? In a Deep mode, deep work, it goes off and does something. You want the harness completely optimized for this mode. I'm really excited about it. So I hope we can ship something soon.
Quinn: And this is something that could make a lot of people change how they use coding agents. So get ready to change. And what I would ask of anyone, especially those listening, is this: your first reaction is probably going to be, "I don't like having to do more of this work up front," or "this feels different." Every single change we make, there are people who have the very human, knee-jerk reaction against it. Just suspend that and come with us. Explore the frontier with us. We think that there's something more here.
Thorsten: Yes. There's lots to build. It's exciting. All right. We're at the end.
Quinn: I should have checked on that ding. Amp has some work for me, so I got to go.
Thorsten: All right. Bye-bye.
Quinn: Bye-bye.
Thorsten: See you next time.