Episode 4

April 4, 2025

Thorsten and Quinn talk about the future of programming and whether code will still be as valuable in the future, how maybe the GitHub contribution graph is already worthless, how LLMs can free us from the tyranny of input boxes, and how conversations with an agent might be a better record of how a code change came to be than git commit tools. They also share where it works and where it simply doesn't.


Transcript

Thorsten: You can have an AI generate a colorize function any day now, like any second, in less than a second.

Quinn: And all of – okay, actually, I won't go into that soapbox, but I did have this – for a lot of cases, one that stands out to me is the Sonner component in Amp, where when you copy a link to the clipboard, it shows that little notification in the bottom right. That's called a Sonner. Who knew? And there are some libraries that did this. And they were very complex. They didn't quite work with the version of SvelteKit we were using, and they required a different way of doing styling, and that's what everyone was recommending. But it turns out, AI just wrote one that was exactly what we needed, very simple, and that is way, way simpler and better.

Thorsten: Welcome to Episode 4 of Raising an Agent, our little internal podcast, our diary of excitement between Quinn, CEO of Sourcegraph, and myself, Thorsten, while we're building a new thing, a new agent here at Sourcegraph. Hi, Quinn.

Quinn: Hello. How's your coding been?

Thorsten: It's been pretty good. I spent all day yesterday building a new thing.

Quinn: It's good.

Thorsten: The ability to see.

Quinn: I love it.

Thorsten: Yeah. Yeah. You used it?

Quinn: Yeah.

Thorsten: Beyang and I then had like a coding session. I did a coding session in the afternoon with other people, and I kind of went and started to convince them of AI and whatnot. And then we had a session, Beyang and I, and he said something at the end. He's like, "this is fun. This is not just pair programming. This is the two of us watching another thing write the code." And it gives you a fun chance to talk about stuff when you're both just waiting for it. And I joked that, you know, Gen Z doesn't have to have three videos running at the same time with this. They can watch the two of us talk and another thing writing code in the background.

Quinn: Yeah. Watch out, YouTube dev streamers. People just watch AI now.

Thorsten: Yeah, just watch AI code in the background. How has your coding been? You've been working a lot on server-side stuff and now you're back in client-side land.

Quinn: Yeah. And I have made it easier to share threads. I'm making it so that you can have one that's truly public, so anyone on the web can see it, which is pretty cool. So you can show off the cool stuff you do. And I want to get more back into the client and having it treat feedback loops as a first-class concept, where it knows, you know, in this part of the code base, here are the tests that it needs to run, here's how it gets to the browser. And you can put that in the rules file, and we do. But the problem is the adherence is not 100%. And you want that to be 100%. But, you know, it's tricky to get right.
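An entry in an agent rules file encoding such a feedback loop might look something like the sketch below. The section name, commands, and paths here are illustrative guesses, not Amp's actual configuration:

```markdown
## Feedback loops

- Web client (`web/src/`): after any edit, run `pnpm test` in `web/`.
- SvelteKit routes: after changing a route, run `pnpm check` to catch type errors.
- UI changes: start the dev server with `pnpm dev` and verify the page in the browser.
```

As Quinn notes, the hard part is not writing rules like these but getting the agent to adhere to them every single time.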

Thorsten: Yeah, but it's – I said this to Beyang yesterday, and we said this on previous episodes, right? That if you can really nail it down to a feedback loop, and you know what it should do, then it can just go off and do stuff. Once you have it, it's mind-blowing. It's really mind-blowing to see. I want to talk about a big topic. And I think, how do I start this? Maybe with a disclaimer. The disclaimer is that this is going to sound over the top, fanatical maybe, I don't know. But I think we have to look at the trend lines here, the trajectory of what's happening and how we've been using it. And I also think that, you know, we are maybe more risk tolerant right now. We're on the edge of using these tools, meaning we write a lot of our code with them, while others say, "I still use the chat functionality and copy and paste stuff over." And I'm like, "even if I could move the function myself, I'll have the agent do it." You know, I'm all in. And I think if I follow the trajectory of where this is going, a lot of stuff about software is going to change, meaning code is going to get cheaper, and how we treat code is going to change. It's not going to be, I don't know, this precious thing that we have to take care of. Because we all know there's a spectrum: on one end there's the handwritten, beautifully formatted, touched-by-ten-people-with-lots-of-expertise nice file, and on the other there's the auto-generated thousand-line C file that nobody wants to look at because it's auto-generated. And I think with AI, a lot of stuff is going to move on that spectrum, because it's not statically generated or deterministically generated, but it is generated by an agent. And maybe when you modify it, you also go through an agent or an AI. So I think the value of code on average will change. And I think that's going to have some dramatic consequences in the near and longer-term future.

Quinn: Yeah. Well, we have to just train ourselves before we can think about where that's going to go. I've run into this a few times. So we added this new feature: when the agent needs human input, you know, when it's done or needs your confirmation, it plays a sound. That is an awesome feature. And that's the kind of thing where we're just going to make it the default, because we need to be opinionated here. And I went and looked at the code, and it was littered with any types, and it was jammed over in here, it was not in the right place. I had this moment of realization: why do we get worried about that? Why do we get worried about bad code? It's not just because, oh, you know, this code is bad and it could cause a bug. It's because we think there's some human that fundamentally misunderstands how the system works, and they're going to go and make that mistake a ton of other times. And they spent a lot of time doing that, so they had a lot of wasted time. But none of that is true. Now you have the other worry, that the agent is going to make a lot of mistakes and mess up the code. And that's really valid. But it's more like random with the agent, whereas if a human has a misunderstanding, they're biased in that direction. They're always going to have that misunderstanding.

Thorsten: Yeah.

Quinn: And it doesn't matter as much. And if I had actually known that this code was written from a quick prompt, that it was a quick hack and he wanted to see how it worked, and the feature ended up being awesome: that is such a big win. But I didn't know that. And I had to kind of reset my initial reaction, which was: an any type, what the hell?

Thorsten: Yeah. Yeah. It's strange. I don't know yet how to formulate this, but, I think I've said this five times on here, I don't know, I say this five times every two days: a lot of the way I treated code in the past now seems black and white to me. Like, you know, the way I brushed over every line and how it's formatted. It's just a different thing. It's similar to what you described. I don't know exactly how to describe it, but you look at the code in a different way, and then it becomes, I don't know, easier to change, or it becomes less bad if it's bad, or something. And also, you know, it's not vibe coding. It's not "I don't care what the code looks like." It's a different thing. The level of abstraction is a different thing. Like,

Quinn: yeah,

Thorsten: Yes, you can worry about, you know, the function names, or whether it's, you know, camel case, or stuff like this. But I don't know, I feel like, you know, whether it's camel case or kebab case or whatever else, right now, with these tools, it doesn't matter that much. Just because you're on another level of abstraction most of the time, I think.

Quinn: And if the kind of, you know, weirdness, messiness, verbosity is in a well-encapsulated part of the code base, it's really valuable if you can get over yourself and say, you know what, that is okay. And the reality is, today you're using open source code. It's probably written in a totally shitty way. You pull it in transitively. You don't even know. So you've gotten used to that. And believe me, people took a while to get used to that, but people have gotten there.

Thorsten: Yeah. But it's interesting. I don't know, it's going to change stuff. And to go back to controversial statements: we said this in a different conversation. You said it, and it struck me with both how shocking it is and how obvious it is. You said open source is going to lose value in some sense. And I want to frame it. Before you explain it, I want to add something else, which I keep coming back to, which is Eric Meyer saying, you know, why search for code? Why find the perfect library if you can have something write it for you? And I said this yesterday to a colleague. When you look up something or you use a library, the problem you're facing is that you don't know how to write a certain piece of code, or you don't want to spend the time writing a certain piece of code. And say AI were perfect at generating code: why would you go and pull in a library if you could have it generated for you? And yes, there are advantages to somebody else maintaining a piece of code that you use. But I think if this...

Quinn: There's also disadvantages.

Thorsten: There are disadvantages. And if this option is now in the equation, I think it's going to change the outcome a little bit. Why would I... I mean, take NPM to an extreme, right? You have, like, three-line, three-function packages, right? As an extreme, right? For example, I remember, this was years ago, I used a little helper that uses ANSI escape characters to print stuff yellow and red and blue on the CLI. And somebody commented on my PR saying, you should use the library for this. And it was truly like four functions, you know?

Quinn: I love it because ANSI is literally like the standards institute. And it's something that has not changed in 50 years and will not change for another 50 years.

Thorsten: Yes, it's just because you don't like the escape characters in there. You think it should use a library. But for this stuff, right? You can have an AI generate, you know, a colorize function any day now, like any second, in less than a second.
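A colorize helper really is just a few ANSI SGR escape sequences. Here is a minimal sketch of the kind of four-function package Thorsten describes; the particular color set is an arbitrary choice:

```typescript
// Minimal ANSI colorizer. The escape sequence is the standard SGR form:
// ESC [ <code> m ... ESC [ 0 m, where 0 resets all attributes.
const CODES = { red: 31, green: 32, yellow: 33, blue: 34 } as const;

type Color = keyof typeof CODES;

function colorize(text: string, color: Color): string {
  return `\x1b[${CODES[color]}m${text}\x1b[0m`;
}

// On an ANSI-capable terminal this prints "error" in red.
console.log(colorize("error", "red"));
```

This is essentially the whole library: the sequences are fixed by a decades-old standard, which is Quinn's point about ANSI not changing.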

Quinn: Yeah. And all of – okay, actually, I won't go into that soapbox, but I did have this – for a lot of cases, one that stands out to me is the Sonner component in Amp, where when you copy a link to the clipboard, it shows that little notification in the bottom right. That's called a Sonner. Who knew? And there are some libraries that did this, and they were very complex. They didn't quite work with the version of SvelteKit we were using, and they required a different way of doing styling. And that's what everyone was recommending. But it turns out, I mean, AI just wrote one that was exactly what we needed, very simple. And that is way, way simpler and better. So I didn't use the library in that case.

Thorsten: Yeah. And I mean, to extrapolate, right, we saw last week, you know, OpenAI launching new image generation stuff, people using it to generate, you know, assets for games, sprites and whatnot, or little icons or something. And yes, right now it's not feasible, it's too expensive, it's too slow, blah, blah, blah. But looking at the trajectory, as an example here: we have an icon library, right? And it's this, oh, we should use an icon library because we need these five icons and whatnot. Can you generate SVGs that easily in the future? Can you say, you know, here are some rough drawings, generate these as icons, and then give me an icon library? And that, again, diminishes the value of pulling in external dependencies, because the value of "I couldn't have written it myself," or the value of "it would have taken me too long to write it," has decreased.

Quinn: Yeah. So in that case, I think you want something more than just generating the icon on the fly when the dev needs it. You want consistency across icons. If two things are the same thing, you want to reuse the same icon, and you want it to fit in with the overall feel. So even that, you know, it's more than just, oh, YOLO, it's so easy. You still want some kind of layer over it, which is interesting. And that's a new thing that people don't yet have.

Thorsten: Yes. I just think that a lot of dev tooling is, in an abstract sense, about generating code. It's about: you could write this code yourself, you could type it all out, but instead you use a library, or you use a function, or you use some other assets, or you use something that somebody else has generated for you. Now, with LLMs, we can generate code. And I think that changes a lot of stuff. And I tweeted this yesterday and people didn't realize it was a joke, or that I meant it in a fun way. But I said, you know, I saw five blog posts where people were writing their own static site generators. And I said, when is the first static site generator going to come out that uses an LLM, where you pass a markdown file to the LLM and say, turn this into HTML? And then someone was like, why would I do this? It's too slow. And I'm like, yeah, sure. But don't you see? All static site generators are built because we want to turn markdown... like, we have a specific input format that we want to turn into a specific other format. We put the restriction on ourselves to write in this format so it can be turned into this other thing. But technically, with an LLM as a site generator, you could write your blog post as a handwritten note and take a photo and put it in the repo. You could write your blog post as a Paint bitmap drawing. You could write it as a series of emojis, or a video, and you could pass all of that to an LLM and have it generate your blog post for you. Because ultimately, the goal is that you have an HTML file, and now you can have arbitrary inputs. And I think it's funny and interesting because I think it changes how we think about computation. It's also why I didn't sleep. I stayed up late thinking about this. But I think the same effect will happen with software, and it will happen with the value of software. And to zoom out historically, before open source, I mean, most people can't imagine, right?
But before open source, like where did you get your code? You had to write your own library. So you paid somebody else to do it. That's why open source was so revolutionary. Because suddenly you could pull all of that stuff together and build your app by tying other stuff together. Now with code becoming cheaper, how does the value of open source change, right? How does the value of code hosts change?
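The joke about an LLM-powered static site generator can be sketched in a few lines: the "generator" is nothing but a prompt plus a transform step. In this sketch, `transform` stands in for a call to any LLM API; the function names and the prompt wording are assumptions, and injecting the transform is just a way to keep the sketch self-contained. The point survives either way: the input no longer has to be markdown, it can be anything the model understands.

```typescript
// A "static site generator" reduced to a prompt and a transform.
// `transform` is a stand-in for an LLM call (slow and expensive today,
// which is the joke), so the input format is arbitrary: markdown,
// handwritten notes, emoji, whatever the model can read.
type Transform = (prompt: string) => Promise<string>;

async function generateSite(
  sources: Record<string, string>, // filename -> raw content
  transform: Transform,
): Promise<Record<string, string>> {
  const pages: Record<string, string> = {};
  for (const [name, content] of Object.entries(sources)) {
    const html = await transform(
      `Turn the following into a standalone HTML page:\n\n${content}`,
    );
    // Whatever the source extension was, the output is an .html page.
    pages[name.replace(/\.[^.]+$/, ".html")] = html;
  }
  return pages;
}
```

In practice `transform` would wrap an API client; a deterministic stub is enough to see the shape of the thing.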

Quinn: Yeah, I mean, it went from being very expensive to get new code written to, for a lot of kinds of code, being free. But there's a search cost, a quality cost, and it's not infinitely comprehensive. And now, you know, you still have quality concerns, as you do with open source and code that you write, but the search costs are way lower and the variety is way greater. So it's kind of like the next step: it's doing to open source what open source did to the world before it. And so then it raises a lot of questions, like, you know, where will all that energy go? Where will the developer reputation nexus be? What will happen to GitHub's kind of community and moat?

Thorsten: Yeah.

Quinn: And the code repository, what needs to change? If AI is generating the vast majority of the code in a repo, do commits matter as much? What does matter? What do you want to know in terms of the transcript of why this was made?

Thorsten: Yeah. I mean, so much changes. Like, I think, I don't know, speculation: I mean, it's kind of already decreased, right? But I think the GitHub contribution chart and whatnot has been gamed a lot. You know, there was already the problem of cheap commits, where people would push something every day, and blah blah blah, and it wasn't high quality stuff. And now, you know, with the push of a button, I can generate new code every time I sit down, and commit it. But that's a real contribution. So that's already happening, the devaluation.

Quinn: How do you know someone's a good developer?

Thorsten: That's a good question. I think, coming back to what we talked about last time, there's still a lot of knowledge required to make it generate stuff. I mean, obviously there's another school of thought where people think it's going to replace engineers, but I think you still need expertise and experience in guiding these tools and, yeah, you know, generating code and whatnot. I don't know how that'll show up. I mean, interviewing is famously hard, we both know this.

Quinn: Yeah, it feels like the actual skill set will change, but there will always be some kind of skill set that can make an LLM's output much more valuable than if it was in the hands of someone without that skill set. And whatever that skill set is, we're probably going to call that being a software engineer.

Thorsten: That's true, I think. I mean, most people say, right, it's going to be more product-focused people. You need to think about the product, right? You need to think about what it is that you're shipping, because that becomes much more important.

Quinn: I mean, hasn't that always been important for devs?

Thorsten: I would say so, but I do think that, you know... I mean, I've had email interactions with lots of people saying, I don't like AI because I like writing the code. I like the typing. I like the lines. I like how it's formatted. I like organizing stuff into modules. And I think that's...

Quinn: Well, I like all those things, but I also like AI. And I think you do too, right?

Thorsten: I did, I did. But then, I also like being efficient and not wasting time. And then you're like, okay, I can spend five minutes moving this from this file and seeing what that feels like, or I can just not worry about it because I don't have to touch this again. And that changed a lot of stuff for me. It made me realize that I do love typing. I'm really good at typing, I can type really fast, I'm pretty good at writing code. But I also realize the need to be efficient and to not waste time trumps that. And maybe for others it doesn't, and they think, I still like writing every line. Yeah.

Quinn: Totally. I can't read fiction. I can't do puzzles because they feel like, what's the point? I only read nonfiction. That's just who I am. And it's weird. But yeah.

Thorsten: I tried to play Grand Theft Auto V again, like, half a year ago, to relax. And I realized I lost the ability to do this. I'm in the video game and it's like, "go here and do this." And I'm like, "I could solve actual tasks instead of doing this. I want to write this. I want to do this. Why should I go to the shop?" And it felt not relaxing at all. It felt like, I don't know... like a waste of time. I think that's what you mean with puzzles. It's an artificial puzzle.

Quinn: Yeah, exactly. So what do you think happens to the code host in a world where, you know, a lot of people just go to it for code review? What if you're not doing code review, or it's done in a different way? What happens to the code host?

Thorsten: Yeah, I think in my vision, the code host is not a place anymore to which you push committed, static code. In my vision, it's more a thing where you store your code and the agents and the prompts alongside it. It's not: the code is stored over here and it's edited over here. I think ultimately we move to a world where, you know, like I joked on Slack, you know, 555-call-my-code-base, and then you have something else edit it. And I mean, obviously there have been a lot of dreams about this, you know, remote dev environments and all of that stuff. But I think something changed. You mentioned the audio feature: you set the agent off, you go do something else, and then it gives you a ping sound, and it's like, your code is ready, and you check the code. And when you do that a bunch of times, you realize: why should I check out the repo and do this myself? Why can't I have an agent running next to my code, maintaining it, and I can send a message to the agent and say, please update this? I don't know. You could argue it's two different platforms, and you could argue it's the same as with, you know, developer tooling, sorry, with dev environments. But I don't know. We built the thread sharing, and I think that's going to play a bigger role. What do you think?

Quinn: Yeah, it feels like there are a bunch of different silos of information and action. You've got your editor, but you want to be able to kick off the same kind of agent work from the web or from your phone. Like, when you know the class of problems it can solve very reliably, then you want to do it from your phone. Like, I've been out with the kids and someone says, oh, here's a bug, and I want to just go fix it or something. And it feels like such an artificial limitation that, oh, it works in my editor on my machine, but not here. And I know that it's a fixed amount of work, and not a hard one, to go make it so some remote dev environment could, you know, run it. And why haven't we done that yet? Why haven't remote dev environments taken off yet? I think it's an incentives problem, because it never could have been your main development environment, so it'd be a fraction of the human team using it. But now AI can use it. And we see a ton of people in our repo just including, in the commit message, a link to the thread that was the main one behind the commit. That kind of association should be done automatically. And they really should be one and the same: when you're reviewing that change, you want to just look at the thread. And so I just don't see a world where these are all separate. And then, if I put myself in the shoes of GitHub, it is such a big change to do that. Can you imagine how controversial it would be? They already got so much flak by saying, we're refounded based on AI.

Thorsten: Yeah.

Quinn: Like, you know, first of all, props to them. Like, I love GitHub. Props to them for doing Copilot. But it is such a hard change for them to make.

Thorsten: Yeah.

Quinn: So who's going to do it?

Thorsten: It's every... I don't know. Like, every developer is emotionally attached to this not changing, I think. As long as they haven't found a way to look at this which gives them a sense of joy or, you know, I don't know, some interest in it, some fascination with it. And, I don't know, nobody wants to admit it. Like, I cannot believe that I'm even believing this, because I was so invested in writing code by hand and, you know, got code printed out on my wall here.

Quinn: You wrote books about how people can write code to write code.

Thorsten: Yes. This is why I couldn't sleep last night. In my head, I was writing this blog post, and guess what? The blog post is, I don't know what, "There's Beauty in AI," that's a draft title I'm looking at. And basically, in my head, I stayed up because I was thinking of everything I would have to put into the disclaimer, you know, to not have people dismiss me immediately, because this is the internet, right? So in my disclaimer, I would have said, you know: if you're reading this, I probably have written more code than you. Like, I'm top something percent in amount of code written in my life. And I've written books about code. I've written compilers. I've made more commits to my VimRC than most of you reading this. You know, like, I truly care about code. That's the whole disclaimer. And then I kept thinking, what else do I need to add to make people get it? That I'm not an AI slop hype boy who thinks code is bullshit and just vibe-codes everything and blah, blah, blah. But I truly believe that there's a beautiful thing about how we can take these huge matrices and turn them into text transformation engines, where you think really hard about the input, and what you put in the context, and how you navigate the latent space, and then what you get out of it. And once you've seen that happen, and you work in developer tooling, you start to realize that most of developer tooling is about text transformation. And now we have these huge orbs that can do magical things, where you can put fuzzy stuff in and they put out code, working code, and that's going to change stuff. And yeah, I forgot what my point was, but this is the thing that I can't stop thinking about.

Quinn: So when you think about the disclaimer, there's no disclaimer that can stave off the hordes from the internet. But what have you found actually converts somebody from being a cynic to starting to see that, hey, there's actually something here? Because you've done that for a lot of people.

Thorsten: Yeah, I think what worked for me is, just yesterday, right, I showed a demo of Cerebras. You know, Cerebras builds chips, GPUs... they're really fast. You can get thousands of tokens per second on small models, and I think they will get bigger models. And when most people think of AI, they think, oh, I ask it something, then it takes a while, then it comes back and it's wrong, now I have to retry it. And then their mind goes to: I can write the code by hand now, and it's much faster. And I show them Cerebras and how fast it can generate code, and something clicks, because they realize: oh, if the models will be this fast in the future, like a thousand tokens per second, or 2,000, or 3,000, that's going to change stuff. Because then suddenly, you know, it's not about waiting for something to come back that I could have done faster myself. No, it can now do it faster than you. And even if it's only correct 70% of the time, you can hit that retry button and it's still faster than you. Because then you can think, okay, I could have loops running that validate this against my language server and whatnot. And then it doesn't matter that it's wrong on the first pass, because it's still faster than what I could write. So that's one thing that convinces people, or I think changes the view on things. And then the other thing is just –
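The retry-against-a-validator loop Thorsten describes can be written down. In this sketch, `generate` and `validate` are injected stand-ins; a real setup might call a fast inference API and check the result against a language server or test suite, neither of which is shown here. The loop itself is the argument: if generation is fast enough, retrying on failure still beats typing the code by hand.

```typescript
// Generate/validate/retry loop. If each generate call takes a fraction
// of a second, even a 70%-correct model converges faster than a human,
// because failed attempts are nearly free.
type Attempt = { code: string; ok: boolean };

async function generateUntilValid(
  prompt: string,
  generate: (prompt: string, attempt: number) => Promise<string>,
  validate: (code: string) => Promise<boolean>,
  maxAttempts = 5,
): Promise<Attempt> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const code = await generate(prompt, attempt);
    if (await validate(code)) return { code, ok: true };
  }
  // Give up after maxAttempts rather than looping forever.
  return { code: "", ok: false };
}
```

The interesting design question is what goes in `validate`: a type check, a compile, a test run. The tighter that feedback loop, the less the per-attempt failure rate matters.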

Quinn: And do you think that's because people have the intuition that you can have a kind of rough or lossy objective function? And if you're talking about iteration cycles in terms of like minutes because LLMs are slow, then the objective function, there's no way that it's going to climb that hill. But if you can like throw a thousand things or a million things against the wall like monkeys typing on a typewriter, you think that's where the intuition comes from? Have you ever plumbed their mind?

Thorsten: No, not in that sense. But I do think, based on my own experience, it's this: it felt like you cannot program these things, and it felt like a coin toss every time. You would ask it, and it's like, this is not what I came here for, I came to do programming, not to toss coins and have unreliable stuff. But, you know, the other thing that I think changes a lot of perspectives, or changed mine certainly, is the first time I tried, this was last year, like half a year ago, Cursor Tab, which is what we have in Cody now, Cody Auto Edits, and others have it now. I was editing a file and just trying this out, and it would show tab to go here, tab to go here, tab to go here. And I realized at that moment: I'm pretty good with Vim, and these are not super smart, AGI-level suggestions, but they took away a lot of toil, and they made my typing, you know, unimportant. Why worry about having Colemak to type some Vim macros faster when a small model can, you know, in 200 millis, say: you want to remove this line down here? Hit tab and do it, if you want to. And I think that is a change in how you frame this, going from this big all-knowing oracle that I ask questions about code to this mechanical helper that can type faster than I can, and it can see ahead and just takes away mechanical stuff and toil and chores.

Quinn: Yeah. So it's not magic. It's a mechanical helper that you can program. It's not deterministic in the way that we think about it. I mean, it is.

Thorsten: Yeah.

Quinn: You know, if you do the ordering of the float operations. Yeah. You know, demystifying it. And I think one of the things we want to do in this podcast, and with Amp, which we're building, is we want to make the thing that is not claiming to be perfect, not claiming to one-shot. We want to highlight when it doesn't work and what we do when it fails. And so you and Beyang have started recording these raw demos, and we're going to have a lot more, which is not, "hey, we do it 50 times and cherry-pick the one where it works." Actually, it's really valuable to see, when it doesn't work the first time, how we can nudge it. I was just doing that a bunch: it was doing some dumb shit on two of the files that it edited, out of like 14, and, you know, I had to nudge it. That's what I think people would benefit from seeing.

Thorsten: Yes, I agree. I think it's a framing of, you know, a framing of expectations, or how you set the expectations. And I think people have been kind of misled, or there's the overall hype about AI and what it can do: you just ask it stuff and it comes back. And I think that's due to the providers, you know, at least OpenAI, Perplexity or some... I don't know, not Perplexity, but there's a lot of hype. And I think they kind of mix this with programming. And I think a lot of that is the wrong way to look at it. You shouldn't look at it as, oh, I don't have to write any code anymore. You should look at it as: this is like the text editor. This is a thing that I direct. This is an extension of me into which I put my knowledge and my requirements. And then I select the input; you still have to do it. And, in parentheses here: with the models getting better, that will change, the exact form will change. But I think this is the framing. It's an extension of you. And you still have to direct it and guide it. And it can often fail. Yesterday, I had it fail. For two hours, I was trying some stuff. And every time it failed, I realized I forgot to mention that it shouldn't do this. And then I went back up to the prompt, and I started a new thing, and I kind of refined my stuff. And I then sat there thinking: did I save time now? Did this make me faster? And the conclusion I came to was: if I had not used Amp for this, I would have started writing code, and then after five minutes gone, oh, it's actually harder than I thought it would be. And then maybe I would procrastinate, or try a different approach, or update the ticket, or let somebody know it's going to take two days instead of one. Or I would do something and after 20 minutes realize, this is not going to work, so I first have to do this. And, you know, you lose motivation. And that happens to me a thousand times, two thousand times.

Quinn: And we've got a special responsibility as people building this tool. So many, not every, but so many of the times where it fails in a dumb way, it's obvious to me that we could detect that kind of failure mode and come up with a protection. Now you run the risk of: is it going to be whack-a-mole if you layer all these on? But in my case, I was making it so that you can share these threads without being authenticated, and it completely clobbered a SvelteKit page and a server file. And given the prompt that I gave, it should know, even a dumb LLM would know, that under no circumstances should it clobber one of those files. And so now it's on us: how can we build that kind of feedback loop or guardrail in? And yeah, it feels like that is a valuable kind of component, but how do we make it so it's not just endless edge cases like that?

Thorsten: Yeah, I agree. I think that's a big challenge. One last thing, and we could end on this, I think it's a good thing to end on because it's open-ended. One last thing on this: yesterday, while I was failing to do stuff, talking about guardrails, I wrote this whole prompt, and then it went off and did something. And then I realized I included the wrong component in the prompt. Okay, my bad. I redid the prompt. Okay, I did this other thing. And it just went off, and why did it do this? And after the second time, I realized it's because I had this other file open, and it knew that I had this file open. So it thought, surely you want me to look at this too and put this in this component. And so for me, it was: I closed all of the files and basically, you know, discarded it. I basically said, remove all of the distractions. Don't worry about what I'm doing over here. Just focus on these two files. And I don't know how we solve this, but I think you kind of want to give them, you know, tunnel vision in some cases, and say, you only focus on this and you only see this. But you don't want to take the peripheral vision away, which often helps them. So, I don't know, big problem.

Quinn: Yeah, big problem. And whack-a-mole will not work. So I'm very interested in what we can do with thread sharing and the ability to have more insights into where it failed than I think other tools that do not have that.

Thorsten: Yeah.

Quinn: A lot of work.

Thorsten: Lots to do.

Quinn: Lots to do.

Thorsten: All right. Then until next time. Bye-bye.

Quinn: Happy Coding.