🎧🍌 Why AI Coding Will Never Be Fully Autonomous | Daksh Gupta, Co-founder and CEO, Greptile
How engineers actually use AI, inside SF's 996 discourse, how to run crazy marketing experiments, the importance of onboarding design in AI products, and why YC is worth it
Greptile is fresh off closing a $25M Series A led by Eric Vishria at Benchmark, and its AI code reviewer is one of the fastest-growing products in the world.
We talk about why AI coding will never be 100% autonomous, how engineering teams are actually using AI, how to get your team to adopt AI, accidentally starting the discourse around 996 in SF, how to run crazy marketing experiments, the importance of onboarding and design in AI products, and why YC is worth it.
Thanks to my friend and early Greptile investor Suds for helping brainstorm topics for Daksh!
Sponsors
Numeral: The end-to-end platform for sales tax and compliance. Try it here.
Hanover Park: Upgrade to a modern, AI-native fund admin here.
To inquire about sponsoring future episodes, click here.
Timestamps to jump in:
3:15 Evolution of AI coding + code review
11:23 Coding will never be fully automated
18:07 Why you need a separate code reviewer
24:34 How engineering teams are adopting AI
27:37 Why LLM costs will come down
31:54 Pricing AI products
35:27 Getting your team to adopt AI
38:17 How Daksh started the 996 discourse
42:10 Recruiting is a funnel, open roles are a product
49:19 Making an energy drink for programmers
51:19 Brainstorming marketing stunts
57:22 Don’t do hype marketing too early
59:41 Starting a band, hitting #14 on Spotify
1:06:35 Evolution of the startup meta
1:12:39 Starting Greptile in class at Georgia Tech
1:19:18 Moving to SF, getting into YC
1:23:44 Pivoting from codebase chat to code review
1:27:09 Crazy growth and mimetic desire
1:29:47 Pricing AI software
1:34:44 How to market developer tools
1:39:46 Greptile’s fundraising journey
1:42:57 Why YC is worth the 7% dilution
1:46:39 Treat fundraising like dating
Referenced:
Find Daksh on X / Twitter and LinkedIn
Related Episodes
👉 Stream on YouTube, Spotify, and Apple
Transcript
Find transcripts of all prior episodes here.
Turner Novak:
Daksh, welcome to the show.
Daksh Gupta:
Thanks for having me.
Turner Novak:
Real quick, for people who don’t know, what is Greptile?
Daksh Gupta:
Greptile is AI agents that review pull requests with full context of the codebase, and software teams use us to catch bugs and enforce coding standards across their company.
Turner Novak:
I think you’re maybe getting these numbers wrong. The tagline on the site is three times faster, four times more bugs, or is it the other way around?
Daksh Gupta:
Statistically, compared to if you didn’t use an AI code reviewer... For context, it used to be the case that our pitch was to people that had never used anything that resembled AI code review. The hard part was like, “What is an AI code review agent?” The confusion people would have is, “Okay, is this like Copilot? Is it like an IDE?” I think as time goes on, people have more discerning power and can tell things that look like the same thing apart more easily, but at the time, our pitch was, “Hey, if you’re not using anything for AI code review, this is going to make it so you’re going to merge your pull requests about four times faster, and you’re going to catch, on average, three times more bugs.” This is just statistical data from our first collection of users, our first 500 users or so.
Turner Novak:
Is there a little bit of a skeptical nature around how people thought about this originally?
Daksh Gupta:
There was, and that’s what’s so interesting. Right now, it feels like everybody wants one of these things, and we’re in the hard seltzer era of AI code reviews. Everyone’s doing one, but a year ago, that was not what it was. We built this thing, and it was extremely unclear that people wanted it. The only people that wanted it were fringe AI developers that were early adopters of AI coding agents, and they had run into this problem where they were AI-generating all this code and now their new bottleneck was that their pull requests were open for way too long, because there were suddenly like 100 pull requests at a time. I like to say that we were clairvoyant and we saw this, and we were like, “There are these fringe AI people, and they’re having their pull requests remain open for too long, and it follows that when everyone is using AI to generate code, everyone’s going to have this problem.”
What it really was, was that we found some group of people that had a problem that we could solve and would pay for it, and we had no expectation that that number would grow dramatically in the future. Maybe, to some degree, it was just luck that it happened to be a fast-growing market that we weren’t expecting, but that was what it looked like early on. Most people had not fully adopted AI coding in the first place, and pre-AI coding, there was an equilibrium where people were generally okay with how PR reviews were going. They were producing some amount of code, and they were used to it taking about a day or two to get merged in. Everyone was at a happy medium. It wasn’t good, it wasn’t bad. There was nothing wrong with it.
Then there’s just a sudden shift where it’s now taking five days or six days to review a PR, because you’re just generating so many more, and then people are like, “Hey, this is a problem. We need to go and fix this.” I think that was the difference between us doing go-to-market at the start of this year versus now. It’s just night and day.
Turner Novak:
Interesting, okay. I want to hear about how it’s changed over time, but how does an AI code reviewer work in practice? If I’ve never tried one before, can you just explain to me how this works and why it’s better than doing it manually?
Daksh Gupta:
Yeah. When you open a pull request in GitHub, Greptile writes comments on it, inline comments. It’ll say, “Hey, lines 12 to 17 in this file, there’s a race condition that’s likely to happen,” or, “There’s an inverted Boolean,” or something in these lines. It’ll catch these bugs and it’ll tell you what’s going wrong with them. The way Greptile does this is it looks at the lines that were changed, and it uses its very detailed index of the code base to figure out what would be affected by those lines. For example, you’ve changed some function in a pull request, and Greptile will trace it and say, “Okay, this is a function, it was called by this function, which is called by that function, and that one breaks because of this change,” or it’ll say, “You seem to have added a new integration, but it looks structurally and architecturally different from other equivalent integrations in the rest of the code base.”
It has a context-aware way of evaluating the changes. It’ll also look at your JIRA ticket and make sure that you’ve done all the things from the JIRA ticket, and so on. I think an interesting sub-question here is, “Well, if you are doing all this stuff with AI — you’re looking at this pull request, you’re pulling in the required context, and you’re catching bugs and issues, and in some sense providing feedback on the pull request — why can you not do this locally?” The answer roughly is you can. It might not be as good as Greptile, because there’s so much tooling, so much of a harness, that we’ve built around the LLMs specifically for this code review type of pipeline.
But you could do this locally too, and you’d probably get like 60, 70% of the way there. The reason not to do it is actually quite simple: it’s that people don’t do it, it’s just extra work. If an engineering manager or a CTO can have this be a part of the pull request tool chain — it’s not technically part of CI/CD in the conventional sense, but in some sense, it is part of the post-pull-request tool chain — then you can enforce this type of checking across an entire company. I think that’s what’s really compelling about this form factor, where it’s centrally applied to all the repos, and then every pull request from every engineer is automatically reviewed through this AI context-aware intelligence thing.
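To make the context-aware part concrete, here’s a minimal, hypothetical sketch of the class of bug Daksh describes — one that reads fine on the changed lines but breaks a distant caller. The code and the sample comment are illustrative only, not Greptile’s actual output:

```python
# Hypothetical PR: only is_card_valid() was touched. A reviewer reading the
# diff alone sees a plausible one-line condition; a reviewer with an index of
# the whole codebase can trace the callers and flag the inversion.

def is_card_valid(card: dict) -> bool:
    """Should return True when the card passes all checks."""
    # BUG introduced by the PR: the condition is inverted.
    return card["expired"]  # was: return not card["expired"]

def charge(card: dict, amount_cents: int) -> str:
    # Untouched by the PR, but now broken: expired cards get charged.
    if not is_card_valid(card):
        raise ValueError("card declined")
    return f"charged {amount_cents} cents"

if __name__ == "__main__":
    print(charge({"expired": True}, 500))  # succeeds, and it should not
```

An inline review comment on that diff might read something like: “This condition appears inverted; `charge()` will now accept expired cards.”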
Turner Novak:
How has that changed over time? I think you mentioned the products evolved, what’s been the evolution?
Daksh Gupta:
The form factor has actually remained the same for a long, long time. Pre-AI code review... Well, I guess if you go pre-GitHub, code review looked very, very different, but GitHub introduced the concept of a pull request, as far as I know.
Turner Novak:
Yeah. Do you know how code review worked pre-GitHub? How did code review work?
Daksh Gupta:
I’ve heard this from the elders I’ve come across in Silicon Valley: they would essentially sit in a conference room and project the newly written code on a projector or a screen, or something, and then people would literally sit around the conference room and look at it line by line and write down their notes.
Turner Novak:
Really? Okay. They predict, “This looks like it’s probably going to bug out, we should do something about it”?
Daksh Gupta:
That part didn’t change. GitHub introduced really, really good tooling around it. The pull request was just a good UI for the same thing, and it had some good workflows around it, but the thing is still that: a human has to look at code and predict what would happen. But that’s the job of the interpreter — you run the program and you discover what’s going wrong. The human brain is fundamentally bad at catching bugs, because you’re asking this pattern-seeking machine to look at a very complex system and then identify things that are out of pattern. We’re perfectly incapable of doing it. To me, pre-AI code review was a lot like security theater, and maybe that’s not very charitable, because there were some advantages. It was a good opportunity for mentorship, and software engineering can end up being a very individual role if you don’t have good process around collaboration.
It’s actually an opportunity for collaboration in an otherwise fairly siloed role — people work together on things and discuss things. But as a mechanism for catching bugs, human code review is not very effective. We call this AI code review, and it’s an entirely new thing. It catches bugs, and by doing that, it’s just a different thing than human code review, because human code review doesn’t really catch bugs. It turns out that AI is also much more comprehensive. It applies the same amount of diligence to a 10,000-line change as it does to a 10-line change, and there’s the old joke that if you do a 10-line change, you’ll get five comments, and if you do a 10,000-line change, you’ll get zero comments, because people just don’t read that much code. They’re not going to review your thing, they’re going to rubber-stamp it.
There are, obviously, longer thoughts on what the opportunity here is, which is: we’re AI-generating all this code, and our systems for validating this code haven’t scaled in accordance with that, because we have these archaic ways of making sure the code is correct. We actually spend a lot of energy on it, between testing and QA and review. We spend a lot of time making sure that the code we’re producing is valid and safe to merge. AI presents a very fascinating opportunity where we can make that entire process autonomous. I think the reason that code generation might not become fully autonomous is because we still need to express our opinion of what we want, and that opinion is not always rational. We don’t know upfront exactly what we want. We’re bad at serializing our thoughts, and we’re bad at serializing our taste, to some degree, into a string of tokens that you can put into a prompt.
That part — the part where we generate the code — might require a good amount of human intervention, just because we have to, at the very least, express our opinion. Code review, or code validation more broadly, doesn’t have that problem. You don’t need opinion. Everyone just wants valid code, everyone wants the same thing: code that does not have bugs and code that conforms to a certain set of standards. That part should actually be completely autonomous. There’s no reason a human should be involved in that step, and people don’t want to be involved in it. I don’t think anyone likes the code review part of their jobs. People usually don’t like the testing part of their jobs or the QA part of their jobs either, and I think there’s a very interesting opportunity here to make that entire thing autonomous.
Turner Novak:
If all the code is AI-generated, do you eventually get to a point where all the reviewing is done autonomously, to your point? Is there eventually no manual human code review, because you can just instantly review it once it’s generated?
Daksh Gupta:
I would go even farther and say that the code writing part will not be fully autonomous. Not because the agents are not going to be smart enough, but just because we won’t get any better at telling the agents what we want upfront, so we’ll have to work with them closely and iteratively tell them what we want, because we’re just not that good at telling them upfront. The thought experiment I like: is there anyone at Salesforce that can write, given enough time, a sufficiently detailed prompt that can one-shot Salesforce? Probably not. It’s a very complex piece of software, and even if the agents are perfectly intelligent and capable of executing instructions perfectly, we just aren’t that good at describing stuff that’s this complex in a way that captures what we want it to be. On the other hand, code review, again, that can be fully autonomous. I think the eventual state, where we have infinitely intelligent agents and infinitely intelligent models, is we’ll probably have partially, or maybe even mostly, autonomous code generation, and then we’ll have 100% autonomous code validation. That is my prediction for what this looks like.
Turner Novak:
If autonomous code becomes the optimal best — you maybe have to prompt a bunch to get it to do what you want, but it’s always perfect code — do you even need to review the code?
Daksh Gupta:
That’s a really good question. There’s a connected theory too, which is: I think humans will be involved to some degree in writing the code, because they have to express opinion, and human opinions aren’t getting more perfect over time in the way that agents are. AI gets smarter every year.
So another way to think about it: if you’re trying to figure out what to build, and the intelligence of the agents is an extremely nebulous, unpredictable thing — you don’t know if these agents are going to get smarter over time, you don’t know how much more intelligent they’ll get, and even if they did get really smart, you wouldn’t know what the second-order effects of that would be — then it’s useful to start to look at the things that aren’t changing. Humans aren’t getting smarter every year, we’re not becoming better communicators every year, so we are the bottleneck.
I think the bugs will still be human-introduced at some point. Today, they’re agent-introduced to a great degree. AI agents often create buggy code; they aren’t perfect code writers. Some number of the lines of code that get written have bugs because the human’s fault propagated through: the AI correctly followed the instructions, the human instruction was wrong, and that’s why there’s a bug. Then there’s a large number of bugs which are just the agent writing bad code — the lines are bad because the problems are complex and the code it wrote was bad. It’s probably half-and-half. The second one, the AI being bad — I think there’s a very real chance that goes away over time. It’s entirely possible, and it’s an eventuality that we should prepare for, as a company that serves to make code valid.
We should assume, to be safe, that the AI will get perfect: it’ll follow instructions perfectly, it’ll produce perfect code, given some amount of instructions. I think that taking all of the context into account — the ticket, the documentation, the architecture, the rest of the code base — and then using that to figure out whether the code is correct or not will start to matter a lot. That part, I think, is the value that we will be providing in the long term, which is a little bit different from the value we provide today.
Turner Novak:
It becomes more and more just making sure the human’s thinking is correctly being translated into whatever is being autonomously generated. It’s just helping us convey our thoughts and what we want in a more efficient way, essentially?
Daksh Gupta:
Yeah, so you’ll produce some code, you’ll open a pull request, and there’ll be this sometimes adversarial agent, which only reviews the code. It only looks at the code and tells you if it’s correct or not, and then what’s wrong with it, if it’s wrong. It’s not the one that’s producing the code, it’s independent, and it’s very deeply context-aware. It doesn’t require input from you, it will just look at the code, it’ll do all this stuff. It’ll maybe even generate tests and run them against the pull request, and then it’ll say, “Hey, here’s everything that’s wrong with this code.” Maybe it’ll actually not even tell you, it’ll just tell your coding agent and be like, “Hey, Claude Code, I reviewed this PR that you wrote with this human author, here are the three or four things I think are wrong with it. Can you go and fix them? Maybe ask your human if you need help with some things — maybe some things where opinion matters, and you might want to consult your human.
But if not, you can just fix this stuff.” A lot of people do that with Devin, for instance. They’ll have Devin open a pull request, and Greptile will review it and say, “Here are five problems.” Devin will say, “Okay, these five, I can solve. I don’t need a human to be looped in,” because Devin is really good at figuring out when and when not to loop in a person. It’ll just resolve it, and then maybe there’ll be another back and forth, and then the PR will be merged. I don’t know if you’re going to have, or maybe you already have had, Adit from Reducto on the podcast, but they’re the first ones that I saw using this workflow.
Turner Novak:
Yeah, that’ll have come out by the time this one does.
Daksh Gupta:
Nice, yeah. I think they were one of the first ones to show me this workflow, where they were like, “Hey, we just had Devin open a PR, Greptile reviewed it, Devin addressed the comments, and then it was merged. There was just no intervention beyond the first expression of what we wanted — we told Devin what to do and then we just moved on to the next thing.” There was a producing agent and there was a reviewing agent, and they just worked together, they did the thing, and it’s merged now.
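As a rough sketch of that producer/reviewer loop — with everything stubbed out, since none of these classes or methods are real Devin or Greptile APIs — the shape of the workflow looks something like this:

```python
from dataclasses import dataclass

# Illustrative pseudo-workflow with stubbed agents. The class and method
# names here are made up; only the loop structure mirrors the discussion.

@dataclass
class Comment:
    body: str
    needs_human_opinion: bool

class CodingAgent:  # stands in for a tool like Devin
    def open_pull_request(self, ticket: str) -> dict:
        return {"ticket": ticket, "fixed": 0}

    def address(self, pr: dict, comments: list) -> None:
        pr["fixed"] += len(comments)  # pretend the agent fixes each finding

class ReviewAgent:  # stands in for a tool like Greptile
    def review(self, pr: dict) -> list:
        if pr["fixed"] == 0:  # first pass finds issues, later passes don't
            return [Comment("inverted boolean in charge()", False),
                    Comment("is this the intended UX?", True)]
        return []

def agent_loop(ticket: str, max_rounds: int = 3) -> str:
    coder, reviewer = CodingAgent(), ReviewAgent()
    pr = coder.open_pull_request(ticket)
    for _ in range(max_rounds):
        comments = reviewer.review(pr)
        if not comments:
            return "merged"  # no findings left
        coder.address(pr, [c for c in comments if not c.needs_human_opinion])
        for c in comments:  # opinion-shaped questions go back to the human
            if c.needs_human_opinion:
                print("asking human:", c.body)
    return "escalated to human"

print(agent_loop("TICKET-123"))  # one human question, then "merged"
```

The only human touchpoints are the initial expression of intent and whatever the reviewer flags as a matter of opinion.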
Turner Novak:
Okay, so dumb question, or maybe it’s a good question, I don’t know yet, but if I’m using Cursor or Devon or Copilot, or whatever, to build this stuff, why do you need a separate review layer, like Greptile? Shouldn’t that... It sounds like there’s an extra step, shouldn’t it just become part of the coding agent?
Daksh Gupta:
Yeah, there’s a couple of things. The first one is that if you ask any engineering org, “What are people using?” you’ll never get a single answer. Within an organization, getting everyone to use the same IDE is very hard. There’ll be some people on Cursor, there’ll be some people on Claude Code, some people will be using Cline, and there’s a long tail of really, really good coding agents — Codex and Devin, obviously. There’s a long tail of these really good coding agents and environments, and if we’re going to work with these things really closely, which I suspect is going to be true, because we’ll have to tell these coding agents what to do in an iterative way, the ergonomics of these tools will matter a lot. Developers are very picky about their ergonomics. There are some people that are just like, “We are terminal people, so I want Claude Code.”
Other people are like, “I’m a VS Code developer, and so I’m going to be on Cursor. That is the form factor that I like.” If you’re an engineering org, you need some central validation layer, and you want it to be consistent across all of the different things that people are using, and you don’t want the code review tool you’re using to impose anything on the developers, on what they should be using for writing code. I think people should get to pick their own tools on that front, and then the Greptile code review agent just turns into a central validation layer.
Turner Novak:
You’re basically making the bet that you’re almost like the boring layer, where developers are just like, “I don’t care about that. Just review the code and let me get back to the hands-on tool I’m using, and we can use whatever the top-down chain of command says we should be using for code review. That’s fine.”
Daksh Gupta:
People care that we are right — that our comments are right, that we don’t make too many comments, and that the ones we leave on the pull request are very relevant and actionable. But the truth is it’s just not a product that you use every day, it’s a product you use once or twice a week. That’s how often you open a pull request. You just don’t care as much about the ergonomics of it as you do your IDE. Yeah, I use Claude Code, I like the terminal, I like the way that it works, and I have coworkers that have used VS Code their entire lives, so they’re now Cursor people and they have a hard time getting out of that.
They’re maybe very good at using the keyboard shortcuts, they like being able to look at the code surface, and they’re comfortable with that ergonomic — that’s what they want to use. I think imposing a different one would be very hard. Even super large companies that are generally pretty good about imposing tools on people fail to do this. Developers just want to use the tools they want to use, and you can’t do anything about that. It would be bad to force a developer to use an IDE that’s different from the one they want to use. I think people should just be using whatever they want.
Turner Novak:
And some companies, they build their own, they have their own whole entire stack too, right?
Daksh Gupta:
Yeah, a lot of companies do. The last I’ve heard, people like the Google stack internally, they like the internal developer tooling stack there, but there is value in centralization. There’s probably some ramp-up time for Google hires, where they’re trying to figure out how the internal tooling works, whereas startups, including ours, and also a lot of the newly large companies that were maybe founded in the late 2000s or early 2010s and later, have pretty much converged around GitLab or GitHub as their source code manager, with Git as the version control. Very likely, there’s a small number of editors that they have all converged on. There’s a lot of value in that consistency, because it means that people can move around a lot more easily. A developer that’s really, really used to Uber then moves to Airbnb — I think they’re both GitHub users, so they can adjust easily. Obviously, there’ll be internal tooling and things will be different, but the amount of time it’ll take to adapt to the new place will be a lot shorter.
Turner Novak:
You were going to get into this whole second element of it, I think I cut you off. What was the second layer of it?
Daksh Gupta:
Everyone has the tests for their code base on their computers, it’s in the repo, and yet companies spend all this money on CI runners, running the tests in the CI/CD pipeline. They’re spending all this extra compute — and it’s probably very meaningful spend for most software companies — running these tests in the cloud when people open pull requests. The reason is actually quite simple: you don’t want to rely on individual agency if you don’t have to. As a CTO, you could just say, “Hey, it doesn’t cost us anything — everyone, just remember to run the tests before you open a pull request.” If you successfully convinced everyone to do that, you wouldn’t need the CI runner anymore. You could just make everyone run the tests, and obviously, people using their computers for incrementally more compute is free, so you don’t have to pay for these runners in the cloud. But for one, that’s slow — these computers aren’t very optimized for that type of workload.
Second, you just don’t know if everyone’s going to do it, and people not doing it is very bad. Being able to do it for sure, every time, before the code is merged is very valuable. The same thing is true for code review. Theoretically, there could be a local version of Greptile which has all the tools and everything, and you could rely on everyone going into their CLI and running, like, “Greptile, review the current commit,” or whatever, and it’ll just review your current commit, and that would be fine. Functionally, it would be the same product, except the main difference being you can’t enforce that, and the enforcement is useful. I think that’s where... Developers are excited by Greptile, but CTOs, VPs of engineering, and engineering managers are very excited about Greptile. This type of form factor is much more exciting to team leads, tech leads.
It’s very exciting to staff engineers. The more senior folks are more excited by this. One, they spend a lot of their time doing code review, and second, one of their mandates is that they have to figure out how to get their team to produce really high-quality code, and to build processes and systems that make that as low-effort as possible.
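The CI analogy maps directly onto config. A generic GitHub Actions workflow like the one below — a sketch of the general pattern, not Greptile’s actual integration — is exactly this kind of enforcement: it runs on every pull request whether or not the author remembered to run anything locally:

```yaml
# .github/workflows/pr-checks.yml
# Runs centrally on every pull request -- no individual agency required.
name: pr-checks
on: [pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install -r requirements.txt  # assumes a Python project
      - run: pytest                           # the enforced check
```

A centrally installed review agent sits in the same place in the tool chain: attached to the repo, not to any individual developer’s machine.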
Turner Novak:
How have you seen engineering teams just adopting and using AI? Is there a certain... If I’m a forward-thinking top 1% engineering org, what does my stack maybe look like?
Daksh Gupta:
It has changed a lot over time. Last year, the state of the art was that the startups and the very cool companies had adopted Copilot really early, and then, by around that point, most of the team had moved on and started using Cursor for coding.
Turner Novak:
There was an enterprise contract with Copilot, but no one was necessarily using it?
Daksh Gupta:
Over time, they just slowly bled to Cursor, because Cursor was a superior product, at the time at least. Might still be a superior product today. Last year, the frontier companies — the people that are absolutely at the fringes, the ones that were most sophisticated — were former Copilot users and had mostly transitioned to Cursor. Some of the ones that were a little bit slower had some Cursor adoption but were still mostly Copilot, and then there were ones that were just Copilot, but we didn’t talk to them much, because if they were still on Copilot, they were definitely not reaching out to a seed-stage startup to do code review. That was just not the type of thing you’d do.
Everyone we talked to, if not on Cursor, was on Windsurf and all these tools. This year, it’s been very interesting. Now, at most companies we talk to, everyone uses Cursor, and then, to varying degrees, there’s Claude Code adoption. The companies that were at the front of the Cursor adoption curve have now become mostly Claude Code companies, and in some cases, Codex companies. Then the ones that were Copilot companies last year are mostly Cursor companies today. It’s just this moving transition from tool to tool. If Cursor builds an equally good long-running agent as Claude Code, I’m sure people will start switching back. The really great thing about coding agents — great for developers, probably bad for the coding agents themselves — is that it’s very, very easy to switch between them. You can use both on the same day.
You can use Claude Code and Codex at the same time: you can have one terminal open with Claude Code, one terminal open with Codex, and one Cursor window open. Give all three of them three separate tickets that you think they’re uniquely suited to solve, because the prices of these things are so low for what they are. Claude Code is the most expensive one, it’s $200 a month. It produces an order of magnitude more value than that at the very least — probably 100 times more value. There’s no question about whether or not it’s worth it, and you can just pay for all of them, it doesn’t matter. I use Cursor for some things, I use Claude Code for others. I don’t even code that much and I still pay for both, because I use them enough that it’s worth it. I use Cursor for times when I need surgical edits and I use Claude Code for most of the difficult things, and I’m very happy with that pair.
Turner Novak:
Yeah, it’s always interesting to me when you see a lot of these people saying, “These companies, they have bad margins,” or whatever. I’m like, “Okay, just step back and think about this. If I’m Facebook, the all-in cost to employ a good engineer is like $1 million, right? I’m paying them whatever, 400, 600, 800K, and I’m probably giving a bunch of stock. I’m probably paying them seven figures, so $200 a month is an insane deal.” They’re paying about $2,400 a year for their million-dollar engineer, so you could probably raise prices over time. I’m not sure how they’re going to do that — you probably incorporate new capabilities and new features that you charge for, or you can just naturally raise prices, like Spotify or Netflix did over time. But I just think it’s crazy to say that they won’t.
Daksh Gupta:
LLM costs are going to go down. I don’t know, I feel like people have just forgotten. We were using GPT-4 when it came out, and I had to look up the price recently. Prices used to be listed per 1,000 tokens, that’s how you listed prices. I think it was 50 cents per 1,000 tokens, which is $50 per million tokens. GPT-4-equivalent intelligence today is free. GPT-5 nano is functionally free. The frontier has also gotten cheaper. Obviously, the cost of equal intelligence has gone to zero — the same IQ level that we got two years ago for very, very high cost is basically free now. But whatever the frontier is, it’s also cheaper.
No model today, no matter how good it is, is as expensive as GPT-4 was when it came out. All of the models have gotten cheaper. I can’t picture a world where that doesn’t keep happening. It’s a stated goal from OpenAI: intelligence that’s too cheap to meter.
I believe them that they’re working on that. To some degree, they have to. I think the models just keep getting cheaper over time. Whatever indictment people have of Cursor’s business, margins should not be one of them at all. I think they’re going to be totally fine. They can easily raise prices. I think the product is worth way more than 40 bucks a month or whatever they charge now.
Turner Novak:
Maybe this is a multi-pronged question, and you can answer whatever you think makes the most sense. But there’s an element of: why do people care so much? Why is this such a big deal? Why has nobody internalized this?
I think if you look at those cost curves, they drop like 99% every 18 months, or whatever the time is. I probably said the wrong numbers there, but it’s coming down significantly. What’s been going on where people haven’t quite internalized this yet about the costs coming down?
Daksh Gupta:
I think the Occam’s razor explanation is just that it’s cool and edgy to point out when something is wrong with something, especially when the thing is so good in all of these other regards.
Claude Code, an incredible product, goes from zero to hundreds of millions of dollars in revenue in a matter of a few months. People go, “Hang on.” It’s not cool to glaze it. I’m doing it right now — I’m glazing. It’s very good. It’s a fantastic product. Claude Code is AGI. It is unsurprising to me that it makes as much money as it does.
They’re like, “Clearly, if I’m cool, I have to find something wrong with it.” And it’s like, “Margins, that’s the one. The margins are the problem.” It’ll be fine. The margins will be fine.
Uber was like two bucks a ride in San Francisco. They just raised their prices, because the value Uber creates is just a lot larger. How much would I pay to be moved around?
The actual thing Uber does for me: I don’t have to own a car, because of Uber and Waymo. Owning a car in San Francisco is like $1,000 a month. I don’t own a car, and I don’t foresee myself owning one, because between Uber and Waymo, I’m definitely spending less than $1,000 a month on those. Maybe it’s not $1,000, maybe it’s $600 a month to own a car, or whatever, including parking, insurance, gas, and everything.
The value created is just so much larger. Again, Occam’s razor: you need something to criticize, and this is the only available thing. What axis would you criticize it on? People love the product. It makes a lot of money. It creates a very measurable ROI. What could you possibly hate about it?
Turner Novak:
Yeah. I think by the time this comes out, we’ll be barely three years post ChatGPT launching. We’re still figuring out what this stuff can do. And you look at the pricing models, how people have priced these things — that’s been changing. Every year, there’s a new way to price this.
Initially, with ChatGPT, there was no revenue. There was no price. There was no product you could even pay for originally, back when it first started. Then, I think, for a while... Intercom came out with Fin. There was outcome-based pricing. It was just like...
Daksh Gupta:
I was just thinking about that, the outcome-based pricing. I’ve recently been really interested in pricing in general. There’s this very popular book called Monetizing Innovation. I think it’s pretty old, but a lot of the stuff applies today.
It says that pricing models sit on two axes: autonomy and attribution. The more autonomy you have, the less useful seat-based pricing is. If you’re very low autonomy, then you should probably price by seat, versus if you have high autonomy. If you have low attribution and low autonomy, seat-based pricing works the best.
This is Slack, where it’s hard to attribute the value created by Slack. You know it’s there. You enjoy using it. It makes communication better, but it’s really hard to measure. It’s very low attribution. Then it’s also very low autonomy, in that it’s only as useful as the number of people on it — it’s very directly related to how many people are using it. Low attribution, low autonomy: seat-based pricing makes sense.
Then you have Fin, which is the exact opposite — very high attribution. You can literally say, “Here’s how many support tickets we resolved without human intervention.” Then it’s also very high autonomy. It doesn’t literally need a person using it, unlike something like Zendesk, where it was more useful when more people used it. That’s the perfect candidate for outcome-based pricing.
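As a compact restatement of that two-axis framework — a paraphrase of the discussion, not the book’s exact matrix; only the two diagonal cells are actually discussed here:

```python
# Rough 2x2 from the conversation: (autonomy, attribution) -> default pricing.
# The off-diagonal cells are left open; they aren't covered in the discussion.

def suggested_pricing(autonomy: str, attribution: str) -> str:
    grid = {
        ("low", "low"): "seat-based (e.g. Slack)",
        ("high", "high"): "outcome-based (e.g. Intercom's Fin)",
    }
    return grid.get((autonomy, attribution),
                    "judgment call -- weigh customer expectations")

print(suggested_pricing("low", "low"))    # seat-based (e.g. Slack)
print(suggested_pricing("high", "high"))  # outcome-based (e.g. Intercom's Fin)
```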
Obviously, I really like this framework for thinking about it. The thing that it ignores is that your pricing should be compelling for customers. If customers expect seat-based pricing for your type of product, then you kind of have to factor that in — the fact that people just like seat-based pricing.
In our case, our product is fairly high attribution, in that you can see how many bugs it caught. Then it’s also fairly high autonomy, in that it doesn’t really need people. Literally, Devin’s writing PRs and Greptile’s reviewing them. I don’t think there’s much of an autonomy question; it’s doing its work independently.
Turner Novak:
You have AI reviewing AI.
Daksh Gupta:
Yeah. Even without that, it’s not a co-pilot. Cursor is a co-pilot, it works with you. Greptile works independently of you. It corrects your things, it looks at your code and helps you, but it’s not really a co-pilot. It’s essentially autonomous, yet we price based on seats instead of outcomes, because our customers expect seat-based pricing. It’s just easier to understand. We’re not really interested right now in value capture. At this stage of the company, we want to create really incredible products that create enormous value for everyone. How much of it we capture is very much a second thought. Maybe later. Hopefully we’ll be at a point soon where that’ll become one of our core problems, but all of our focus right now is just: how do we create something really incredible that creates huge amounts of value for our users?
We should capture some of it, as much as we can, but we just don’t stress too much on it. We’d probably capture much more if we were doing outcome-based pricing, but I think for now, just create as much value as possible.
Turner Novak:
This book, it’s called Monetizing Innovation. It’s a yellow book?
Daksh Gupta:
Yeah.
Turner Novak:
I’ll throw a link in the description for people to check it out if they want.
If there’s an engineering leader out there, and they’re like, “What should I do to just get more people to use AI?” — I know that’s kind of a top-down directive at a lot of companies, a lot of founders and CEOs are like, “We need to use AI because of X, Y, and Z,” and they’re like, “Fuck, how do I do this?” — what would you recommend? Is there a process you found that generally tends to work?
Daksh Gupta:
There’s some stuff with Greptile where you don’t really need people to adopt it, in the sense that you just turn it on. You attach it to your GitHub or your GitLab, and then it just automatically reviews everyone’s code every time they open a pull request. There are products like this, which just work. You don’t have to do anything.
Meeting recorders are like this, where if you have a good meeting recorder, it’ll just create notes for your meetings. You’ll be like, “Oh, wait. What did that person say in that meeting?” Then you go and seek it out. When we started thinking about products we wanted to build, one of our things was we needed something that’s an automation, so it doesn’t require people to choose to use it every day. It should just work. It should just automatically be part of your workflow. The reason for that was we don’t want this extra challenge of also having to evangelize a product after we’ve created value with it. We wanted that part to be essentially automatic.
Coding agents, for instance, had something of an adoption curve where companies actually had to push their people to use them more. What I found very interesting was that the person we talked to at every company, the person that reaches out to us, always has a couple of characteristics. One, they’re usually pretty senior. They’re a staff engineer, a principal engineer, an EM, or a VP. That’s usually who reaches out.
Second, they were usually an early person at the company, or at least a very important person, who has decided that the highest-leverage thing they can do — instead of writing code and producing software — is to get more of the team to use AI.
For instance, at Brex, we worked with someone called Jared. Jared was a very early engineer at Brex. From what I can tell from folks that work at Brex, he’s very widely respected across the company, and he spends a very large percentage of his time getting more people to adopt AI. The way he does it, essentially, is just to show people the value. The thing about these AI products is it doesn’t take a week of using them to start seeing value.
I think of the WHOOP as like ... Oh, I’m not wearing my WHOOP right now. But I was thinking of WHOOP as the obvious example where you have to use it for a while for it to generate value for you.
Superhuman is a product I love. With Superhuman, I thought it was dumb to pay 30 bucks a month for email, because email is free. Then I started using it, and I was like, “I can’t go back.” But it took a month for me to really learn the keyboard shortcuts, to get fast enough that I was really getting the speed value.
With Claude Code or Cursor, I think within 10 seconds of using it, you’ll know how powerful it is. It’s just the initial hump of using it in the first place. If you just get more people to use it once, it’s very likely it’ll stick.
Turner Novak:
Interesting, okay. A slightly different topic, but people might be familiar with your name, or maybe your face too, if they’re pretty online. You had, I think, a tweet, or you were quoted in an article — I forget exactly what happened — but you started this whole 996 discourse that has picked up a bit. Can you explain what 996 is for people who don’t know, and also what exactly happened?
Daksh Gupta:
The context of this was Rya, who is a journalist at the SF Standard. I think she’s great, her stuff is really good. Something I said earlier was that she writes about Silicon Valley the way David Attenborough writes about the rainforest. It’s with this genuine curiosity around this strange place. I thought that was really interesting.
She wrote this article about Burning Man. What she asked me was, “Hey, are people your age going to Burning Man?” People in their 20s in Silicon Valley, do they go to Burning Man? I said, “No, actually I think everyone that I know who goes to Burning Man is in their 30s and has been in Silicon Valley for a while.” They’re a deeper part of the culture here. They’ve been around for longer. I don’t think, as she called it, “the AI kids” are going to Burning Man.
Then she asked me, “What is the general vibe, if not that?” Because it seemed like the hippie, medicated aspects of Burning Man were such an important part of, I’m assuming, Silicon Valley 15 years ago. I said that the current vibe, as far as I could tell, was 996, lifting heavy, not drinking, not doing any drugs, running, eating steak and eggs, marrying early — all these things which, as far as I could tell, were the vibe. Then a screenshot of that part got tweeted, and that tweet blew up.
I think people’s interpretation of it was, “This company is 996,” instead of, “That’s the general vibe — 996.” This obviously got me a lot of hate messages or whatever. I don’t really care. I think it’s a very bad way to live if you care what everyone thinks of you — people you don’t know.
My team understands. Obviously, they live it, so they know what our culture is. My friends and family, they understand. The only part of it where I was worried was I didn’t want potential hires that would’ve had a really, really fulfilling career here to mistake our culture for something it isn’t, and then choose not to work here. I did write about it. I wrote about what our culture really is.
Turner Novak:
So, what is it?
Daksh Gupta:
It’s not literally 996. We work a lot. Usually, people come in at 9:00. I think the earliest people start leaving is probably 7:00, but I think people are pretty regular. My co-founders and I are always here until 9:00, 9:30. Usually, there’s at least a handful of other people that are around at that time as well.
Then weekends — of the three of us, me and my co-founders, at least two of us are here most Saturday and Sunday afternoons. I think the problem with 996, to some degree, isn’t the number of total hours you’re working, but that it implies enforcement. It gives 2008 factory in a third-world country. That’s what it sounds like, and I think that’s what people don’t like about it.
I think there are two things. One, extraordinary outcomes require extraordinary effort. I think people understand that intuitively. Then I think people also understand that recruitment is a sales funnel, but there’s a second-order effect of that which people don’t recognize: that means roles at your company are a product. Spots on your team are a product. And our product is competitive — the comp is very high. Our stock is unusually generous for our stage. Base compensation is high.
The work is very interesting. The problems are hard. The team is relatively small. The signal is strong: good investors have funded us, we work with some good customers, people like the product. We want to build a really enormous company that takes the entire market. That is the product. And you have to work really hard for it. That’s the product. If it’s compelling to you, then you should join us. I’m going to be very transparent about what it is.
The other aspect of it is the decision to spend your 20’s, your 30’s working at a company, and then spending the majority of your waking hours at least during a Monday to Thursday on it, is a very big ask. You’re asking for a lot out of a very talented person that can do anything with their time to do this.
The thing that you’re trading for is the possibility of asymmetric upside from the stock. Obviously, that means they need to evaluate the stock. During the interview process, we’re like, “Look, we give you even more information than we give to investors when they’re deciding whether or not to invest in us. Here’s our last 12 months of growth rate. Here’s where our revenue’s at. Here’s who our customers are. Here is customer feedback data. You can look at our NPS score. We’re going to give you a free account to go use the product. You can talk to people on the team. You can talk to our investors.” Many of our investors have been kind enough to take time out of their day to talk to potential hires about their thesis around investing, where their conviction comes from.
“We’re going to give you all the information you need. We’re going to treat you as a very important investor who’s going to invest a lot of their time into this company. Before you decide to buy the stock with your time, I want you to have all of the information you could possibly have.”
Then the ideal customer profile of this product — all products have one, and the product here is the role of working at Greptile — is that you’re most likely at a large company right now, and the thing that you want is higher pace. You want a smaller team. You want something smaller so that you can have the potential for more upside. If that’s what you’re looking for, this is great. You can always go and work at a company that works differently — a 934-type company. It’s very much available to you. It’s just that that’s not the product here. This product is different.
It’s not unique. Most high-velocity startups operate this way. It’s not like we have this strange way of operating.
Turner Novak:
Yeah. You mentioned talking about hiring — how has your process changed over time? I know you described it as like a go-to-market motion, in a way, how you think about different things.
Daksh Gupta:
Yeah, we treat it very much like a go-to-market motion. What do you do when you try to make a really, really good sales process? You have really good prospecting. You have very good qualification. I think that part people intuitively understand. Recruiting is the same: you have good prospecting and good qualification — qualification in this case being the interview process.
I think the part of sales processes that people forget to translate is that it should be very easy to go through. It should be the minimum amount of effort for the buyer, or the potential hire in this case. There should be as little effort as possible.
We move extremely fast on it. When we know that we’re interested in someone after an intro call, they’ll get an email from us 30 minutes later saying, “Let’s schedule an interview. Let’s fly you out here. We’ll fly you out tomorrow.” We can get you on the next plane. Come out here, and we’ll interview on a Saturday or Sunday. If you have work during the week, we can do a Saturday. You can meet the team for lunch if you’re coming in on a weekday. We’ll do this two hour interview.
Before you leave, we’re going to ask you for references. We’re going to call them the same day, and by midnight that night, we will have made an offer. From first call to offer, we will do a ton of work, and we invest heavily in references, especially with engineering. I think references tend to be very good. Engineers are very honest people. They’ll tell you genuinely how they feel about their former co-workers.
We found that you have to move really fast — people have a strong bias toward the first offer that they get when they start a hiring process. You move extremely fast. I think that’s the scaffolding of the process.
The actual evaluation portion, we’re always iterating on. The important aspects of it are standardized. You can’t use AI for the engineering parts — we have our own thoughts on this; again, a long internal debate led us to this point. But I think the high bit is: move extremely fast, make it extremely friendly.
Then after we’ve made an offer, we’re just like, “Here’s all of the information you could possibly need to evaluate us. You can talk to anyone from the team. You can talk to our investors. I’m sure they’re going to talk to you. I want you to have all the information you need before you invest time in this. This is an important decision. By now, you’ve talked to the team. You know what our work culture is like. You know what it’s like to work here. You know the types of problems you’ll be working on. I want you to evaluate this place like you’re an investor that’s investing their time, and then figure out if this is the right place for you.”
Turner Novak:
You mentioned references are a big piece of it, that you really lean hard on them. How do you generally recommend doing a good reference process? What kind of questions do you ask? Who do you ask? How do you approach it?
Daksh Gupta:
Honestly, I don’t think we are very sophisticated in how we do references. We make it very simple. We just text or email the references that the person shared. We say, “Hey, can we get five minutes of your time on a phone call? Or if you prefer to just tell me your thoughts over email, that’s fine too.”
Often, especially if they’re very positive thoughts, people will just write an email. But on the phone, we’ll just say like, “How was it? How was this person? How was it working with them? Are they good at their jobs? Are they pleasant to work with?”
Then we just seek glowing recommendations. If they’re glowing, that’s a very good sign, and they’ll offer things up. We did one recently for someone who, we were fortunate enough, joined our team, where their former employer said, “This was one of the greatest engineers that I ever worked with.”
What was funny was they sounded upset on the phone that this person was leaving. That was palpable in their voice. Like, “We’re happy with this person moving on, but we were all worried that they were leaving our team.” Just from the future of the team perspective.
I think that’s the type of references you’re looking for. We’re so risk-averse around hiring. I think everyone is. Obviously, if your engineering team is like 10 people, then the next hire is like 9% of your engineering team. You’re hiring all at once. We take it very seriously.
Turner Novak:
Yeah. Maybe you’d characterize the whole 996 publicity that you got as kind of marketing for Greptile, in a way. You’ve done a lot of different marketing stunts over time.
What are some of those, for people who aren’t familiar? How have you approached them? One I thought was interesting was the energy drink one. I don’t know if that was the first one.
Daksh Gupta:
That might be one of the early ones. We made a custom energy drink brand, like a private-label energy drink, which was an energy drink for coders. There’s this whole world of energy drinks for athletes, and energy drinks for gamers, but there wasn’t a brand that was for programmers. There is coffee — there’s Terminal coffee, which is coffee for developers. I think that’s really brilliant. I think that came around the same time.
We wanted to do an energy drink. Everyone we know that’s a good engineer is a white Monster enthusiast, but white Monster’s aesthetic isn’t for you. We wanted to build one whose aesthetic is for you. That’s what we built. It’s like, “This energy drink helps you ship faster.”
We shipped it to a bunch of customers. We put hundreds of them in the YC office. Then there’s one stunt that we did that I personally enjoyed a lot — and I use the word “enjoy” because I don’t know how attributable this stuff is.
I don’t know if it helps. I don’t know if it’s useful. But it’s fun. I forget this a lot — thankfully, my co-founder Soohoon is very good about remembering it. You have to have fun. It’s supposed to be fun. Building this thing is supposed to be fun.
This one was, we built a box and filled it up with cookies, and it had a light-activated speaker inside. When you opened it, on the inside flap was a sweaty picture of Steve Ballmer, and it would start playing the “Developers, developers, developers!” speech. People would open it, and a bunch of our customers that got it put it in their fridge, so that every time they’d open the door, the lights would come on and it would start saying, “Developers, developers, developers!” I thought that was brilliant. Probably annoying within a day or two. But that’s another one.
We’re always thinking of crazy things to do. Honestly, I am hoping they’re useful because we’ve just been doing them because they’re fun. We just do one every few months.
Turner Novak:
How do you come up with an idea? How do you know if it could potentially work or not? How do you pick? What’s the creative process look like, and then how do you decide like, “This is the one we’re going to do?”
Daksh Gupta:
I think they come up organically. Our entire team eats lunch together, because our lunch order arrives all at once, and all 15 of us sit around a tiny table and eat lunch together. I did not know this was unusual — I thought this was how all companies did it, because I’ve never had an actual job before.
Sitting together for lunch is actually really, really good for coming up with crazy ideas like this, because one half of the team is the go-to-market side, who are pretty technical, and the other half is engineers, who are the ICP, basically. To some extent, we have the perfect balance of people to come up with crazy ideas.
We’d come up with a bunch of them, and whatever sounded crazy enough that we could see it going viral, we’d just decide to do. Part of it is going viral. The other thing is it has to send the right message about us. The things I want people to know about us: one, we really care about this specific thing, which is catching bugs. We really care about creating a lot of really good software. And we like having fun. We’re a fun company.
There was this music producer — I might be getting his name wrong, I think it was Rick Ross — and someone asked him, “How do you make a hit party song or a hit pop song?” He was like, “You have two or three people in a studio, and they’re having a blast. If you can capture the energy of that room and put it into a song, that’s a hit song.” If you can capture the fun that the artists are having while making the song, then it’s going to be a hit song. I think that’s how we think about evaluating this stuff.
Turner Novak:
How do you measure if it worked? You said you don’t know if it worked. Do you not do a lot of attribution, or are you just like, “Hey, it looks like some more people came to the website or scanned the QR code on the cookie box”? How do you measure this stuff?
Daksh Gupta:
I think this is advice we got from CK at Runway. He said, “All marketing channels that are attributable are priced in, and usually anything that’s underpriced is unattributable.” I just feel like that made total sense.
Anything that’s high effort, people don’t do, and its attribution is very hard. Usually, those will have better outcomes.
The total amount of effort and cost of doing this cookie box thing, for instance, is not very high. Cookies don’t cost that much. Boxes don’t cost that much either. Then getting custom speakers, I think it was like $2 a speaker or something from China — with tariffs, maybe $4 per speaker. It is very, very cheap to do this at an even decently high scale.
Just if you saw a LinkedIn banner ad for a company, you can send that to a million people for not that much. But you won’t remember that.
Turner Novak:
Your eyes skip over it. You subconsciously see that it’s an ad, and you just don’t notice it.
Daksh Gupta:
Even if it’s relevant to you, you might have higher attention. If it happened that earlier that day you had a quarterly meeting about how you need to get an AI code reviewer, you’d probably have a higher chance of clicking on it. But if you get a box at your office and it has Steve Ballmer’s sweaty face in it, you will remember that forever. You’re not going to forget the cookie box that talks to you.
When I think of marketing activities, they’re obviously a funnel, and the way to evaluate them is: how many people can you reach, and how long will they remember you for? The area under that curve is basically the total value you can produce with the marketing activity. You have stuff that’s very, very wide but not very deep, and the canonical example would be Google ads.
Then there’s stuff like this, which is narrow but deep. I think we did 100 boxes, because it’s just difficult to fill up boxes with cookies. We were a five or six person team at the time. There’s only so much you can do. It takes hours.
But of the 100 cookie boxes we sent out, I think at least a third of them are paying customers now. Everyone else definitely remembers us. At whatever point they’re looking for an AI code reviewer, within the next few months, they will probably reach out to us and be like, “Hey, at the very least we want to talk to you, because you sent us this crazy cookie box earlier. It’s still sitting in our office somewhere.”
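(To make that reach-times-memory framing concrete, here is a toy sketch. All numbers are invented for illustration; they are not Greptile’s data.)

```python
# Toy model of the funnel framing: the value of a marketing activity
# is roughly (people reached) x (how long they remember you).
# All numbers below are invented for illustration.

def marketing_value(people_reached: int, months_remembered: float) -> float:
    """Area under the reach/memory curve, in person-months of attention."""
    return people_reached * months_remembered

# Wide but shallow: a banner ad blasted to a million feeds, forgotten fast.
banner_ads = marketing_value(people_reached=1_000_000, months_remembered=0.01)

# Narrow but deep: 100 cookie boxes people remember for years.
cookie_boxes = marketing_value(people_reached=100, months_remembered=36.0)

print(banner_ads, cookie_boxes)  # 10000.0 3600.0 -- comparable areas,
                                 # at wildly different cost and effort
```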
Turner Novak:
Yeah, I think I see people do pizzas a decent amount. Those are maybe a little bit easier to scale up. You could probably just place an order with a pizza shop and be like, “Hey, slap a sticker on the box or something.” That might be a little bit easier.
Daksh Gupta:
The other aspect is a stunt only works once. It never works again. Maybe the cookie box we could do again, because I don’t think that many people heard about it, so it would be new to some people.
But Antimetal did the custom pizza boxes, and I don’t think anyone can do those anymore. Antimetal captured all the alpha on that one. You just have to come up with an entirely new thing.
Turner Novak:
Yeah, I was going to say, you almost have to come up with new food or something. Food seems to be pretty good, I feel like. Because people will eat it, and they’ll spend time, a couple minutes, literally.
Daksh Gupta:
Hot sauce is my favorite one. Especially if you sell to startups, the hot sauce sits for a long time on the kitchen counter. Of all the things, it has a very long shelf life. Twelve ounces of coffee in our office would last a week, but an eight-ounce hot sauce could last a year. That could be the whole lease. It would just be sitting there the entire time. It’s not easy to get through.
The funny thing is, the hotter you make the sauce, the longer it’ll last, and the longer you’ll hold mental space with people.
Turner Novak:
Yeah. You almost need to get a magnet. Just have them slap it on the fridge. Every single person sees it every day when they open the fridge.
Daksh Gupta:
Yeah, there’s something there too. It is like, “How do you get in front of more people?” This stuff is great, but I think we waited a long time to do any of these things. We were like, “We need a product that people are just telling their friends about, because they love the product.” Then the fire is already going. Then we’re like, “What jet fuel can we throw on this? What can we throw on this thing that’s already working?”
Turner Novak:
Do you think some people try to do this too quickly?
Daksh Gupta:
Yeah.
Turner Novak:
Really? How do you know when it’s ready? Is it quote unquote, “you need PMF” to do this?
Daksh Gupta:
I think so. I think you need at least relatively early signs of PMF. When we did this, we were at a place where, if you started a free trial with Greptile and you were a real software company, you would sign up and become a paying user. It was like a 70% conversion rate, very, very high, and the other 30% were probably largely explained by just being out of ICP. At that point it was like, okay, clearly the next thing we need to do is get as many people to try the product as possible, get in front of as many people as possible. And it’s like, what are some fun ways in which we can do it? The cookie box came up, the energy drinks came up. And we did a hackathon where we gave out Venus flytraps, because they catch bugs, and we had a shirt to go along with it with a big Venus flytrap on it.
We have Everett on our team, who does a lot of growth-type things, and he figured out how to get them. They’re very tropical plants, so getting them into California’s dry weather and keeping them alive is actually not that easy. They’re not really made for this climate. So it was getting a hundred carnivorous plants, keeping them alive for a few days ahead of the hackathon, giving them out, and also creating a website with care instructions so people could keep the plants alive for longer.
And the other thing that happens is, you’d think they’d catch the bugs in your office, but they actually have pheromones that attract bugs. So we had way more bugs inside at the end of it, yeah.
Turner Novak:
Oh, no.
Daksh Gupta:
And it was this strangely poetic thing, where maybe Greptile also causes a company to have more bugs. “Oh, Greptile will catch it. Let me just sloppily write some stuff. It’s fine.” And I’m sure that happens. The number of bugs goes up at some point because people are like, “Whatever, we have a fail-safe for this. We’ll be fine.”
Turner Novak:
Interesting. Okay. Well, I know one thing you mentioned was you waited a while to do this. It was a long process to get here. I want to talk a little bit about that general journey, and I know it starts probably a lot earlier than people would think. One thing I thought was super interesting: back in high school you were a musician, and you had a couple songs that charted on Spotify. What’s the story with that?
Daksh Gupta:
So I grew up playing guitar. I was in choir in middle and high school, and I was singing a little bit. A friend taught me how to use Ableton when I was, I think, maybe in sophomore year. So I started writing songs and recording them, and then I started to really enjoy it. The process of producing music is really fun, and you start to listen to music differently. A lot of people that start writing, for instance, and it might just be writing a Substack, start reading differently once they start writing. In the same way, you start listening to music differently when you start producing music.
I think people look at me and they’re like, “Oh, tech bro. Probably made techno or whatever. Was a DJ and made electronic music or ambient music.” I actually made sad indie folk music. That’s the kind of music I was doing. And then the second or third song I put out kind of just went semi-viral. I was living in India, and Spotify was new to India at the time, so I think they wanted to promote new, independent Indian artists. And so it got on the Indian indie editorial playlist. It was one of the first songs on the playlist. So it got tens of thousands of streams, and I think it might have ended up with low hundreds of thousands.
And then the song I put out after that got on the Viral 50 on Spotify, and I think it was at 14th place or something, and there was a precipitous drop-off in streams after the first 10. So it wasn’t a ton of streams, might’ve been half a million or so, but this was senior year of high school and I’m like, “Wait, should I do music full time instead? Maybe there’s something here.”
Turner Novak:
And you’re still in India?
Daksh Gupta:
I was still in India, yeah. And this was around the time I was applying to school. I knew I wanted to go to engineering school. I had grown up really liking math and science, and my dream was to build cool things with my friends. When people would ask me when I was 15, 16 years old, “What do you want to do?”
I was like, “I want to learn how to program or build robots or something, and then I want to build cool things with my friends. That’s what I want to do.” And then this music thing was a potential side quest, where I was like, maybe I should do this instead.
Turner Novak:
And you were in a band too, right?
Daksh Gupta:
I was in a band in college. So, the band, we would basically play house shows. There’s this neighborhood in Atlanta called Home Park, these single-family houses in the city, originally constructed for people working in the textile industry in Midtown Atlanta. They were built post-Civil War, I think the early 20th century, around then. Because it was next to Georgia Tech’s campus, it became student housing, because obviously they don’t really manufacture stuff in Atlanta anymore. The textile industry moved out, and that area all became retail and commercial.
And that place, where the houses for the workers were, became sort of where the artsy Georgia Tech students lived. So there were a lot of bands and a lot of music people, and they were doing shows in backyards. People would come, and there’d be a keg. And so we started playing those shows. So the first thing I learned is I really like programming and building things. That’s really fun for me. And then there was running this band. It was five people, and it was really fun, one, because you’re perfecting this product.
You usually rehearse two or three times a week, in either the music room or one of our friends’ garages, and just practice. The set list is the product, and you’re refining it. You’re trying to make the music better, trying to pick the right songs that are fun at these parties. And then you’re doing the go-to-market: you’re trying to book shows, trying to book shows at frats, because the frats pay you to play at their formals or whatever. Frats were a big thing at Georgia Tech. And then you play these house shows, and they’re just really fun.
And the thing of you and a group of five people trying to make this thing work is so fun. In some sense, the two things I learned about myself are: I like programming, and I like being on a five-person team trying to get a thing off the ground. I was very sure I did not want to start a startup in college. I was like, this looks like way too much work. Why would I do this when I could just be at Amazon or something? But the band experience taught me it’s really fun to be a small group of people trying to figure something out and take something from zero to one.
And then I happened to meet Soohoon around that time, who’s now my co-founder, and he very much wanted to start a startup. He used to watch Garry Tan’s YouTube channel growing up in the Philippines, and we decided to go down this path afterwards. I don’t really miss college overall, but being in a band was great. That was a really, really fun thing to do.
Turner Novak:
Yeah, it’s so funny, I think I saw a tweet, it was like, if you could go back to college knowing everything you know and either be cool or not mess up or make better decisions, I was like, I don’t know if I would go back. It just happened. It’s done. It’s like I’m happy now. I mean, I would’ve done stuff differently I guess, but also I don’t really care that much. It’s like high school. High school, honestly, I wasn’t the coolest kid in high school. It’s like it’s done. Would I change things? Yeah, sure. But I don’t really care. I’m happy where I’m at today. It is what it is.
Daksh Gupta:
I do wish I’d learned to program better in college. Soohoon got really, really good at programming in college, and then my third co-founder, Vaishant, got really, really good at programming. I just didn’t learn it as well. I’m not a good programmer now. I think that would’ve been a really good time to learn how to program really well.
Turner Novak:
Yeah, that’s fair. I probably played too many video games. Have you ever heard of Major League Gaming?
Daksh Gupta:
Yeah. Yeah, yeah, yeah. Yeah.
Turner Novak:
Yeah. So I went to these MLG tournaments. We literally had a team. We competed in Halo 3.
Daksh Gupta:
I was going to ask. Yeah.
Turner Novak:
Yeah. So when you’re playing Halo or Call of Duty online, you just show up in matchmaking and you match with random people. But at tournaments, it’s literally a team playing against another team, and some people take it to the extent of having a coach. Some people will literally have practices and know each map and each game setting. You’ll have different plays that you’ll run, how you’ll open, who goes where. There are even levels to it: where you stand on the map will influence where the other team spawns. You’re always looking to control the map based on where people are set up, based on when a new weapon might spawn in 30 seconds. You have to shift so you can get back map control, all that kind of stuff.
Anyways, I went to those tournaments and we ended up... I was probably top 1% but not top 0.1% and you’re not going to be a pro video gamer, but it was fun.
Daksh Gupta:
Something I’ve noticed with startup people, founders, investors, et cetera, is, one, obviously, very often we’re very good gamers. But more abstractly, they’re really into learning the meta for some type of winnable game, which is kind of what you do when you start a startup. You learn the meta, and there’s old scripture on the meta, which is the old Paul Graham essays. You start there, and then you learn the meta for how to be really good at this. It just seems like we’re all hill climbers. We’re just climbing hills. We’re like RL machines.
I have a friend who’s one of the co-founders of a company called Gumloop. We were in the same YC batch, and a few days into the batch we were talking about GeoGuessr, and I was like, “Oh, you want to come over and play GeoGuessr after work one of these nights?”
And he was like, “Wait, you guys play GeoGuessr?”
And I was like, “Yeah, why?”
And he was like, “What rank are you?”
I was like, “I don’t think I play GeoGuessr like you play GeoGuessr. I don’t know.” And then this guy, the first image comes on, and he plays a mode of GeoGuessr called NMPZ, which is No Move, Pan or Zoom. It’s a setting you can turn on, and it just means you get a static picture, because you can’t move, pan, or zoom.
Turner Novak:
So it’s single shot. You get one shot to guess.
Daksh Gupta:
It’s crazy. The image will come on, and he’ll be like, “That streetlight is in southern Nairobi.”
And I was like, “What?” Just like, “Have you been to Nairobi?”
And he’s like, “Nope, I just know.” There are entire maps of metas. He’s like, “Oh, these are only in Barcelona,” or whatever. They just learn what the different tells are. And then the car itself is a meta on its own: the type of Google car that took the images. There’s a certain one they use in Ghana that looks different from the one they use in South Africa, and you just learn what they look like for every country. It’s hill climbing. There’s a game, there’s an objective, and you just learn everything you can about how to get good at it.
Turner Novak:
So what do you think is sort of the current meta around startups? There’s obviously the Paul Graham classics, build something people want, talk to customers or whatever. What do you think is something that’s emerged over the past year or so ish that people maybe have not picked up on yet that maybe you have?
Daksh Gupta:
This one might be too specific, but I’ve been thinking about it increasingly now. When you’re building AI products, especially for the enterprise or for B2B, people will use the product and determine, based on their first 10 minutes of use, whether the product is good or not. That was totally fine for deterministic products. They work the same all the time. But LLM products are different. They’re stochastic: they will hallucinate sometimes, they will be bad sometimes.
At some point we were trying to predict this. Since 70% of the people that tried Greptile became paying users, the question was, well, what’s different about the other 30%? What’s going wrong there, or what is it about them? We had some theories. We were like, oh, it looks like fintech teams really like Greptile. Brex was one of our early customers, for instance, and maybe it’s because they care a lot more about details. Fintech bugs are a lot worse.
A bug in Zendesk is not as horrible as a bug that causes a transaction to fail in Brex. That’s really, really bad. So I thought maybe culturally these companies are just more attentive to detail. But honestly, none of these theories held up. There were always good counterexamples. The thing that held up was the first experience. For people that had a great first experience with the product, where it just so happened that on the first day they were using it, it caught some crazy bug, it barely mattered what happened over the next two weeks. They just had this belief that this thing can do things.
And whether or not it catches a bug the first day, the actual question is, well, did you create a bug that day? Because if it caught no bugs that day and you got one of those hallucinated comments on the first day, you’re like, “This is a product that hallucinates,” and you’ve written it off in your head. So your first impression matters a lot. I think that was maybe not as true pre-AI. Now, it really matters that you make a good first impression.
And to some degree, you can’t control hallucination. You kind of can, but you can’t completely eliminate it. What you should do, though, is make your product lovable. I think that’s the new meta: invest in design earlier, make the experience of using the product delightful. Make it so that by the time the person is experiencing the product’s core value, they are already on your side, because they have experienced joy through using your product.
So we obsess over the landing page design and the onboarding flow. We want you to root for the product before you even start using it. You go through this entire onboarding process and you say, “Wow, that was really delightful. Everything just worked, it was interesting, it was visually compelling, the design is beautiful, and I’m now rooting for this product. So when I see a hallucination, I will not punish it too heavily, and when I see it work really well, I will reward it very strongly.”
Versus someone who has a kind of broken or uninspiring onboarding. Uninspiring means they’ll be fair, and broken means they’ll heavily punish the hallucinations and not very strongly reward the positive aspects of the product. People’s mood, you really have to control it. The tactical advice there is: invest heavily in the ancillary aspects of your product, eliminate the paper cuts, invest in design early. I think that’s newly important, or much more important than it used to be.
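(A toy sketch of the asymmetry being described here: the same product behavior gets scored very differently depending on the mood the onboarding created. The update weights below are made up purely for illustration.)

```python
# Toy model: two users see the identical sequence of review comments
# (good catches vs. hallucinations), but weigh them differently based
# on whether onboarding left them delighted or annoyed.
# All starting values and weights are invented for illustration.

events = [True, False, True, True, False]  # True = great catch, False = hallucination

def final_trust(start: float, reward_w: float, punish_w: float) -> float:
    """Add reward_w for each good event, subtract punish_w for each bad one."""
    trust = start
    for good in events:
        trust += reward_w if good else -punish_w
    return trust

# Delighted user: rewards wins strongly, shrugs off misses.
print(round(final_trust(0.5, reward_w=0.2, punish_w=0.05), 2))  # 1.0
# Annoyed user: shrugs off wins, punishes misses heavily.
print(round(final_trust(0.5, reward_w=0.05, punish_w=0.2), 2))  # 0.25
# Same product, same behavior, opposite verdicts.
```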
Turner Novak:
Yeah. It sounds like, because you can’t always control what their first experience is in the actual product, you design scaffolding around how they’ll use the product, so you can create a moment of delight that isn’t necessarily the core product, but it is, because you designed it that way, if that makes sense.
Daksh Gupta:
Yeah. I think there’s a lot to that.
Turner Novak:
And so you kind of met your co-founder... did you guys meet in college?
Daksh Gupta:
Yeah, met in college.
Turner Novak:
And he kind of convinced you, it sounds like, “We should start a startup.” What was that whole journey like, from “I’m in a band, which is kind of like a startup, even though I don’t know what a startup is, because I’m at Georgia Tech and it’s not really a thing”? What was that arc like?
Daksh Gupta:
Yeah, Georgia Tech, it’s not very common to start startups there. I think it is now, but it wasn’t even a few years ago. There were not many Georgia Tech alums in the YC batches, for instance. Now they’re very well represented, but at the time it was uncommon. My friends didn’t really understand what I was doing when I started doing it, and I didn’t really know what venture capital was, for instance. My idea of venture capital was what a Shark Tank viewer’s idea of venture capital would be, which is you have this business that has a high likelihood of working, and then you sell some percentage of it.
I took this class my junior year fall semester called the Startup Capstone class. For engineering and computer science degrees at Georgia Tech, you have to do some type of project class. All of the other project classes were three semesters, but the startup class was two semesters: one semester you do the startup thing, and then there was a semester of, I think, technical writing or something, just unrelated. And so I was like, “This looks like the lowest-effort one, and the more I take the easiest possible classes, the more time there is for me to work on my band, which is what I really want to do in college. I’m going to take this class because it’s very easy.” The A-rate was 90% or something. It was basically a free A.
Soohoon took that class because he earnestly wanted to start a company. He was like, “I’m going to learn how to start a startup here.” Obviously, you take the resources available at your university. And so we met on the first day of the class. The professor, Dr. Craig Forrest, said something along the lines of: a third of the teams that start projects in this class continue working on them after the class ends, and they turn them into companies.
And I think I laughed out loud in that classroom. I was like, “Why would I keep working on a group project after the semester ends? That’s insane. You stop working on it the moment it’s been submitted for grading. In what world would you continue?” But we started doing it, and it was really fun. I was having a lot of fun working with Soohoon, and we became close friends through that experience. And actually, the problem we were solving at the time was to find a problem that we could solve in a way that would make us money.
Turner Novak:
Okay. Okay.
Daksh Gupta:
We were just smart enough not to do a B2C idea, because we were like, “Oh no, B2C is what people do when they’re not sophisticated. B2B is what the cool kids are doing.” Obviously that’s changed now, but this was pre-AI; that’s what people were thinking then. But we weren’t equipped to come up with an actually good B2B idea, because we’d never been a B, and so it’s kind of hard to figure out what a good B2B idea would be. So we came up with a series of plausible-sounding B2B ideas.
One of the ones we came up with was a customer feedback text bot. Maybe you just took a JetBlue flight, and it would text you afterwards from JetBlue, like, “How was your experience?” And it would conversationally get your feedback. That was our brilliant idea. Obviously, no one actually wants this, because it turns out companies already have too much customer feedback that they don’t know what to do with anyway. It’s not a quantity problem.
Turner Novak:
Yeah, it sounds like a good idea, though, as a college student who’s buying a flight. It’s like, “Oh, I bet a business wants my feedback.”
Daksh Gupta:
Yeah, exactly. You land, and that’s kind of hilarious. “I bet JetBlue could really use my insights on how they could make their flights better.”
Turner Novak:
Free Wi-Fi. Better snacks.
Daksh Gupta:
Yeah. “The seats are a little hard. I feel like my lumbar is not being supported right now.”
Turner Novak:
Yeah, “You guys should upgrade the seats in all the planes.”
Daksh Gupta:
And in my head I’m thinking JetBlue gets this feedback and they’re like, “Wow, this brilliant college student’s figured out how we can fix JetBlue. It’s time to get new seats, fellas. Bring this to the CEO.”
Turner Novak:
“We need to hire this kid immediately.”
Daksh Gupta:
Yeah. “Maybe this should be our head of product.” I don’t know if JetBlue has a head of product.
So we do this plausible-sounding startup idea, and the semester ends, and we’re like, “Oh, maybe we should keep working on this.” There was this competition we got into; I think you’re auto-enrolled in it when you do the class. It’s called the InVenture Prize. We became finalists, and we were like, “Okay, let’s just keep working on it until the competition. It’s three months away.” So we keep working on it until the competition, and this is now three months after the semester has ended, and we’re like, “Okay, why are we working on this thing when the semester is done?” Then the competition happens. We don’t win. And we’re like, “Okay, we can finally stop working on this now.”
But then we actually did keep working on it for a few more weeks, and during that time we signed up for another competition. This one was hosted by Chris Klaus, who was one of the inventors of network scanning in the ’90s. He dropped out of Georgia Tech and sold that company to IBM for over $1 billion, and he then donated the computer science building to Georgia Tech, so it’s called the Klaus Building. At this point, I didn’t even know he was a living person. He’s not even old. I had obviously assumed that everyone who has a building named after them died in the mid-twentieth century.
Turner Novak:
Yeah. Like a Vanderbilt, Rockefeller or whatever, some oil baron or whatever.
Daksh Gupta:
But this person, Chris, he’s pretty young. He’s in his forties or something, and obviously extremely intelligent. He was hosting this competition where the winner would get $100,000. So we did the competition, we won, and we were like, “Okay, I guess we should keep working on this at least until we run out of this money.” And obviously, $100,000 when you’re in college is infinity money.
Turner Novak:
Yeah. That could last 10 years.
Daksh Gupta:
Yeah. I think over the next year we spent $8,000 total, because I got a computer at some point, since mine didn’t really work, and then we had trickling AWS bills because we didn’t have any credits yet. And that’s it. That was all our expenses for the next year. And we’re trying to find something that works, we’re trying to sell this thing, we’re learning, because you have to do sales. We started reading Paul Graham essays and learning that we have to do things that don’t scale. We have to email a thousand people a day and find someone. Talk to as many potential users as possible. Nothing is working.
Turner Novak:
And this was still the surveys?
Daksh Gupta:
Yeah, we’re still doing text-based customer surveys. And in what world would two college students who have never been in the consumer goods business be the right people to build this? It just makes no sense. But the semester ended, we were graduating about a year after that, and we decided, hey, let’s just go full time. Let’s try something. We were already working with the davinci-002, davinci-003 era of OpenAI APIs, and they were really, really good.
And then ChatGPT came out, and it’s conversational. Well, this seems very relevant to what we’re doing, because we’re doing this text-based thing, and they used RL to make these models more conversational. Post-training seemed to be working really well for this type of conversational, chat-based use case. And so we’re like, okay, it seems like this crazy coincidence that we would graduate with computer science degrees right as one of the biggest transformations in the history of technology is happening. This is 2023, and so we were like, we should at least attempt going out to San Francisco and building things there.
We managed to raise a little bit more money, and I convinced my college roommate of four years, who’s one of the smartest people I knew: hey, you should become our third co-founder. All three of us should go to San Francisco. There’s this crazy thing happening there, and we should be part of it. We should build something.
Turner Novak:
So had you decided that you were going to do code review or were you still like, “We think we probably need to try something else, but we don’t know yet what it is”?
Daksh Gupta:
At that time, I think we still hadn’t lost conviction in the text-based customer feedback thing. And the funny thing was, we didn’t lose conviction because investors funded us, and we were like, well, investors funded us, it must be a good idea. What we didn’t intuitively understand is that investors were funding us partly because of the potential for us to find a good idea. It was not really validation for the idea; that can only come from customers. I think it’s sort of a canonical young founder mistake: you misattribute investor validation as customer validation, when those serve different purposes.
And so we came out to San Francisco, and I was sending investor update emails. We had one investor, so I would send this one person, Chris, an email every month about what happened that month. I still send monthly investor updates, but obviously now to more investors. Someone told me we should start putting our numbers on top of the letter. The letter was just this narrative, like a story every month. It was not very compelling. He was like, start putting numbers on top: put your revenue, put your cash, put your runway.
And we move out to San Francisco, and suddenly we have real costs, because we had really underestimated how expensive it is to live in San Francisco. We were all living in the same room in this horrible Airbnb in the south end of San Francisco. We didn’t have that much money. It was starting to run out, and our revenue was zero. Month one, we arrive in May: zero revenue. Month two, June: zero revenue. And we’re like, oh, this is going to be a problem very soon.
It creates this very strong urgency that we just didn’t have in college. We were like, okay, we need to reset ourselves. We went to a hackathon at Scale AI in mid-July of 2023, and we were like, we should just build stuff that fits us, because we’re clearly not the right people to build this consumer feedback type product. But we are programmers, and we are early adopters of these LLM APIs, so maybe there’s something here that we can build. And we decided to build a codebase chat product. That was our first idea. We were like, it would be really interesting if you could put in a GitHub link and start chatting with the codebase.
We had struggled with operating in large codebases, finding where stuff is, and so on and so forth, so building that sounded really compelling to us. So we built it, and that day is when YC, for the first time ever, did early applications. At the hackathon, as we’re wrapping up, I’ve just finished my bit, and I think we had an hour left, and I see this tweet saying that YC is doing early applications. I was like, it’d be pretty funny if we just applied with this hackathon project that we’re not even completely done with.
And so we applied to YC with it. We didn’t think much of it. We were like, we’re probably not going to get in. We just applied with a hackathon project. The next day we put it out, and within a few days we put a Stripe link on it and were like, let’s monetize this thing, because the first couple of people we showed it to, the early users, seemed to like it.
And then it started growing. It just kind of automatically grew: a few hundred, then a thousand, then $2,000, $5,000 a month. The revenue just kept growing, and by the time we did our YC interview, we had a hundred paying users. I think we got interviewed maybe two months after the application, and we got in and did YC. During YC, we realized codebase chat is actually maybe not the correct startup idea, because it doesn’t seem like companies want this thing. Maybe individual developers do, but companies don’t. So we should find something companies want.
Turner Novak:
Yeah, the true B. The true B2B.
Daksh Gupta:
The true B, yeah. And again, the funny thing is we were still not equipped, because we’d never even had actual software jobs before. So it’s not like we knew what a software company might want, but at least we were a little closer, because we were programmers, and so we knew what programmers wanted.
Turner Novak:
How did you figure that out? How’d you figure out that maybe no one gives a shit about chatting with the codebase, but they want to be able to see it in some sense, or see the reviews?
Daksh Gupta:
So we tried a bunch of stuff. The hard part of codebase chat is getting a large language model, which has a very small context window relative to the size of the codebase, to understand a large codebase and reason over it. That’s the hard part, and that’s what we built all this tech to do. I had a background in semantic embeddings; I’d done that stuff in college. And Soohoon and Vaishant were both programming languages nerds. Soohoon was in the Programming Languages Club at Georgia Tech, so he had this sophisticated understanding of syntax trees and call graphs and whatnot.
So what we were good at was teaching these models to understand large codebases, and chat was the obvious thing to do. If you’ve taught an LLM how to understand a codebase, the obvious thing is to build a chatbot so you can talk to the LLM about the codebase and understand what’s happening. So we were looking for other things to do. We said, okay, maybe it would be compelling to make this an on-call assistant. Maybe there’s an outage, and the engineer’s trying to quickly figure out, through the codebase, where the issue is coming from. That’s one potential path where it would be helpful.
But we talked to QA engineers and SDETs and SREs and everything, and no one really found it that compelling, whatever ideas we were coming up with. So we just decided to build an API, because people were asking us for one. They were like, “Can you build an API which is just like the OpenAI API, but you add this parameter, your GitHub link?” It essentially gives you a context-aware LLM API.
So we built the API, and a lot of our customers started using it to build some version of a code review bot. This is summer 2024: we put out this API, the API understands your codebase, and people are using it to build a code reviewer to review their pull requests in GitHub and GitLab. We talked to them, trying to understand why they were doing that, and they said, “We adopted Copilot and Cursor really early, and now we have too many pull requests. We just have too much code to review, so we have to build these automations for it, and there isn’t really an AI code review product that exists.”
And in that moment, again, I can pretend to be really smart and say we figured out that everyone was going to need this at some point, but in reality, this is a product and we had found three people that want it. That means we can get three customers’ worth of revenue. We can make three people happy and capture some of that value.
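(For readers who want the shape of that request: a hypothetical sketch of a context-aware chat API like the one customers were asking for. The endpoint URL, model name, and repository parameter are all invented for illustration; this is not Greptile’s actual API.)

```python
import requests

# Hypothetical context-aware chat completion call. Everything here,
# the endpoint, the model name, and the "repository" field, is invented
# to illustrate the idea of "the OpenAI API plus a GitHub link."
response = requests.post(
    "https://api.example.com/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "codebase-chat",
        "repository": "https://github.com/example/monorepo",  # the extra parameter
        "messages": [
            {"role": "user", "content": "Where do we validate webhook signatures?"}
        ],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```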
Turner Novak:
Because you were wandering through the desert and found water basically, and you’re like, we got to drink. This is something.
Daksh Gupta:
We found just a glass of water. A lot of people find glasses and are able to extrapolate that there must be a well nearby. We weren’t doing that. We just wanted a glass of water, so we picked it up and started drinking. And it was still months later that the takeoff actually started. So now this is July. We decided to do code review. We kept it in beta until December, so four or five months. I think we had maybe 70 or 80 teams using it by that point. And then it just exploded from there.
In December we put it out, and within nine months there were 1,000 companies using it. And it was just crazy: large companies that we work with now that I would’ve never imagined being able to work with in the first year. It just felt like dumb luck. It was like we found a glass of water that happened to be from an ocean or something. We could not have predicted that this category of product would have the intensity of product-market fit that it does.
Turner Novak:
Why did the adoption take off? Were you limiting it when it was in beta, like you couldn’t sign up, and then suddenly it was self-serve, you could just start using it if you wanted, and it spread?
Daksh Gupta:
You could always self-serve. The only thing that changed is... I think this is the crazy thing: industries have very strong mimetic desire. I think the adoption curve for AI code review lags the adoption curve for coding agents by about 18 months. People started using coding IDEs, and about 18 months later every team had adopted them. That’s just how much time it took for it to permeate through an entire company, and it took about a quarter or two for people to realize, “This has killed our time to merge. We have too much code review now, and we need something to fix this.” And I think people couldn’t picture that before.
When we started selling an AI code reviewer, people couldn’t picture what that looks like. They would ask us how it’s different from Copilot. We were like, “Well, Copilot generates code. We review code in a pull request.” And they’re like, “I don’t get it. Why is that different?” People just couldn’t picture this thing. And then some companies started using it, because they had this real problem. Everyone had this problem. Some companies started using it, and then everyone starts using it.
I think a lot of it is just mimetic. There are a lot of rational things that everyone should be using that they don’t. There are products that exist that solve people’s problems, and people still don’t want them. And then some of those products get to some critical mass of early adoption, some spark, and through mimetic desire it just spreads, and then everyone wants it. I think that’s what happened here.
Turner Novak:
Yeah. There’s probably some element of: you just ask a couple of friends, and if they all give you the same answer, you’re like, “Cool, I’ll try this one. Seems like the best one.” How did you figure out the pricing? I know we talked about this a little bit earlier, but what was the process, generally, of figuring out what to charge for it?
Daksh Gupta:
Honestly, with pricing, we’ve been very unsophisticated. We just said we want pricing that is very customer-friendly. Customers understand it very easily. They look at it and they’re like, “Okay, this makes sense. I can predict how much this is going to cost. I can compare it to other things I pay for and see how much better or worse it does.” It just makes sense to have seat-based pricing with this type of product. People are used to paying for GitHub per seat. People pay for Cursor per seat. It just made sense to price it this way.
What’s unusual about this wave is, again, pre-AI software used to just help you do the thing, and AI software does the thing. I think that’s the big difference. And when you do the thing, you can charge for the job, not for assisting the job. Charging for the job is better because it’s more attributable. It’s like what we say about marketing: marketing that’s attributable is priced in; marketing that’s not attributable is usually underpriced. The same thing is true for software. Software whose value is not attributable is usually significantly less expensive. Slack is a good example. Honest to God, Slack is worth more than $7.50 a month per person. It creates more value than that in a month. It’s just hard to attribute it, and that’s why it’s cheap.
On the other end of the spectrum, you have Datadog. It’s pretty easy to figure out attribution for Datadog, so they’re able to charge for the outcome. They’re able to have the scaling charge. The easiest one is Stripe: “We help you facilitate that transaction, and so we take a cut from that transaction.” It’s extremely attributable, and so they can basically take 3% of your revenue, which is really crazy when you think about it. 3% of your revenue goes to Stripe, and part of it goes to interchange or whatever, but you are giving it to Stripe to begin with.
So I was feeling that, over time, this industry will move towards outcome-based pricing, but for the time being, you keep it easy. You make it easy to buy. Let customers buy it easily and let it spread first. For now, it’s okay to leave a bunch of value on the table that you don’t capture. As long as we’re creating enormous value for customers, and they’re paying us an amount of money that they see as non-trivial, we can get investment and signal in other forms.
We’re talking to a large public crypto company and doing a trial with them, and it’s palpable how much they care. They’re going really deep into making sure they’re getting as much value from it as possible. They’re building all these integrations, they’re changing their lifecycle around how they do code reviews for this, and that is the PMF signal I need. We’re at the point in the stage where revenue to us isn’t intrinsically valuable. It’s valuable to the extent that it signals to us what is worth and not worth building. Collecting revenue is an information-seeking activity. As time goes on, all companies move from an expand stage to an extract stage, and there it starts to matter more and more exactly how you price. It’s possible that we’re close to getting to that place, but I don’t think we’re there yet. So, right now, we just keep it easy and don’t worry too much about capturing all the value we create.
Turner Novak:
So then how did you get that first big enterprise customer?
Daksh Gupta:
Every customer we’ve had just came inbound, and they usually heard about us from a different customer. Sometimes they heard about us from a blog post: people have written posts comparing Greptile to other products, or reviews of the product on some developer’s blog. Or a customer tweeted about us. Brex’s CTO recently tweeted that this was the best code reviewer they’d tried, and a bunch of traffic came from that.
Turner Novak:
Yeah, I actually DM’d him when I saw that tweet. He didn’t get back to me, but I was like, “Hey, what do you think of this? I’m having Daksh on the podcast. What do you think?”
Daksh Gupta:
Brex is a phenomenal company. We’re Brex customers, and just working with their engineering team, I’ve gotten so bullish on them. They’re so sophisticated with how they do AI. When you talk to the people there, it doesn’t seem like hundreds of engineers. It seems like a 10-person engineering team that’s extremely locked in, and they’ve been able to do this at the scale of hundreds of engineers. I fully believe, based on the people I’ve met there, that they’d be able to do it at 10,000 engineers if they ever needed to.
Turner Novak:
Wow. So it’s just a function of people, a lot of self-serve customers discovering the product organically through word of mouth, socials, blog posts, etc. Did you do any influencer marketing or not really?
Daksh Gupta:
We’re starting to do it now, but historically we haven’t. We did technical blogs. We’d solve hard problems, or at least interesting problems, sometimes with surprising results. One of the early problems we faced with building an AI code review bot is that they’re very nitpicky, because LLMs are very verbose. They comment on things that are technically true but are just nitpicks, and no one likes nitpicky coworkers either. No one likes the person that’s like, “Oh, actually, you’re missing this very specific way of doing logging. Your log formats are a little bit off.” Little things like that, or like, “Oh, don’t use ‘any’ in TypeScript.” It’s whatever. Sometimes it’s fine. People found that to be very annoying.
One thing about LLMs, by analogy with people: not saying something smart does not tell you a person is stupid, but saying something stupid tells you a person is definitely stupid. So, when in doubt, you should just not say anything. Applying that to LLMs is hard because they’re paid by the token, so they’re incentivized to just keep saying stuff. And there’s a recent paper, I think from OpenAI or from Anthropic, on why LLMs hallucinate. It’s because, if the reward function only scores being right, then guessing anything at all confidently does better on that test than saying, “I don’t know.” There are no points for saying, “I don’t know,” but there are some points for being potentially correct. So teaching the code review bot not to make nitpicky comments was a surprisingly difficult problem. We wrote a blog post about how we did it, and that went viral. It was on the front page of Hacker News, and a lot of customers came from that.
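(The incentive described here is easy to see with a toy expected-score calculation: under a grader that only awards points for correct answers, any nonzero chance of being right makes a confident guess score better than abstaining. The probability below is invented for illustration.)

```python
# Toy version of the hallucination incentive: an accuracy-only grader
# gives 1 point for a correct answer and 0 otherwise, including for
# "I don't know."

p_correct = 0.2  # model's chance that a confident guess is right (made up)

expected_guess = p_correct * 1 + (1 - p_correct) * 0  # 0.2
expected_abstain = 0.0                                # abstaining never scores

# A score-maximizing model guesses whenever p_correct > 0, even at 1%.
print(expected_guess > expected_abstain)  # True
```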
With developers, our simple thesis is: developers don’t like being sold to, but they like trying new stuff. They like trying interesting things, and they like reading and learning. So we just produce content about the little things we’re learning. We’re solving interesting problems, we learn stuff while solving them, and we write about it. That ended up being the most powerful way to reach customers. Just create value, write interesting stuff. We’ve been on the front page of Hacker News probably a dozen times by now.
Turner Novak:
Interesting. Yeah, I feel like Hacker News is one of those beasts. It’s just hard to figure out, but when Hacker News likes you, you just got to ride the tiger, I guess, and just love it.
Daksh Gupta:
Just write things that are surprising to you. I’ve probably spent too much time on Hacker News. Once you start writing and wanting to be on the front page, you start noticing what stuff you click on, and you’re like, “What are the patterns here?”
Turner Novak:
What are they?
Daksh Gupta:
Well, it piques your curiosity. That’s the main one. I think curiosity is the currency of Hacker News. If you can pique someone’s curiosity, they’ll click on it. So: going into technical detail, pulling back the curtain on the types of problems we’re solving. Most engineers aren’t working on building B2B AI agents. People are working on different things, but there are interesting insights that come from working on this, weird problems you face, things that break in interesting ways that make for interesting stories to share. Those things end up doing the best.
Turner Novak:
People are just curious to know what all is out there, how other people are solving problems, all that stuff?
Daksh Gupta:
Yeah, exactly. Say I see a post about how Shopify breaks when people do drops. I was listening to, I think, either a podcast or maybe a Hacker News post, about how one of the hardest technical challenges Shopify had to solve was the trend in e-commerce of doing drops. Drops are the enemy of a company that’s trying to scale an e-commerce platform. That’s the opposite of what you want. You want even, predictable traffic, not enormous, out-of-pattern spikes, and they had to scale to support those. And it’s like, “That sounds interesting. I wonder how they did that. I wonder how they made it so Supreme could drop something in a Shopify store and it didn’t crash the entire website.” That sounded like a really interesting problem to read about.
Turner Novak:
Yeah, interesting. Do you remember how they did it?
Daksh Gupta:
I don’t, no. Well, actually, there were a handful of things that separately break. DB connections break, and so on, and they went into detail on how that stuff breaks. It actually ended up not being as interesting as I thought. It was a lot of boring problems to solve, but I clicked on it and read it. And the thing I gleaned from it, my conclusion, was, “Wow, Shopify has very high-quality engineering talent.” So if I were a very smart engineer, I’d be like, “I should work at Shopify. I’d probably learn so much.” Engineering blogs, to some degree, are part of the recruiting mechanism, and it did that. I was like, if I were a smart engineer, or wanted to be better at engineering, I’d probably want to go work at Shopify. It seems like a really interesting place to work.
Turner Novak:
Yeah, that’s fair. One thing I want to hit on before we jump off: I know you’ve had a little bit of an interesting fundraising journey, from Georgia all the way to San Francisco, all the way up to today. My friend Suds was, I think, one of the people who met you when you were still in Georgia.
Daksh Gupta:
Right when we moved to San Francisco. I think he was one of the first people I met in San Francisco, yeah.
Turner Novak:
And then how did you raise the first little bit of money, when it was just, “Hey, we’ve got this survey thing”? What was that whole process like, generally?
Daksh Gupta:
We honestly just told people about it, and then they wanted to fund us. I don’t know what it was. I think all three of us are just very high-energy, and people respond well to that when they’re funding something that early. A lot of the people that funded it were like, “Well, this is obviously not going to work, but these people seem so high-energy, I’m sure they’ll figure something out eventually. They seem like they course-correct when they receive new information.”
Paul Graham was one of our investors, and he said something about us and, more broadly, founders that went to Georgia Tech, which is: they have a very high effectiveness-to-entitlement ratio. It’s a public school in the South; you don’t become entitled from going there. You become entitled when you go to a school that’s prestigious, like Stanford or something. It’s the “I’m entitled because I went to Stanford” thing. And I understand it. It’s very hard to get in, it’s one of the best institutions in the world, and I think it’s not crazy to come out of it a little bit entitled. Stanford people are also very effective at the same time. But Georgia Tech, because it became a good school before it became a prestigious school, and also by virtue of being a public school far away from Silicon Valley and New York, in its own isolated corner, ends up having a high ratio. I’m guessing that’s what people gleaned. I’m not entirely sure what it was.
Getting into YC, I think, was just the slope. When we applied, we were like, “There’s this hackathon project. We’re almost done with it.” And then, in the interview, we were like, “Update: we have 100 paying users now, and it’s growing 20% week over week.” I think they probably also figured out that the codebase chat thing was not going to work, but the point was that we’d probably do something else if this didn’t work. I think we’re decently smart, but we just work harder than anyone. And I think that’s what people gleaned early on.
Turner Novak:
I think you raised money from a pickleball game. Is that true?
Daksh Gupta:
I didn’t, but one of our early angel investors was a huge pickleball person. So the first time I played pickleball was here. I’d never heard of it. That’s how out of it the rest of America is from what’s happening on the coasts. I genuinely remember thinking, “It must be a food. You make a ball of pickles or something. That’s probably what this is.” And it was like, “No, it’s a game.” Someone described it to me in such a cryptic way. They were like, “Oh, pickleball is for when you’re too poor for golf and too unathletic for tennis. That’s who pickleball is for.” And I was like, “Well, why don’t I play this game?” And I played it, and I was like, “This is a lot of fun.” This is a really fun game. So I got into it because of an investor, I think, is more accurate.
Turner Novak:
Oh, got it. That makes sense. And I think you guys had an interesting story with dilution in YC. I don’t actually know what happened, but when I was talking to Suds, he mentioned to have you talk about it. What’s the significance there?
Daksh Gupta:
So we raised a little bit of money before YC, and then we raised money after YC as well. And the question was: should you do YC if you’ve already raised money? You clearly can raise money, so why would you do YC? And we’re very diligent about not over-diluting. We spent a lot of time on this at our Series A. It was not the first thing we optimized for, but one of the important things to us was to minimize the amount of dilution we were taking on. So we raised less than we could’ve. People offered us much more money than we ended up raising at the Series A, and we wanted to suppress that amount as much as possible. But maybe this is helpful for people who are like, “Well, why would I take the 7% YC dilution when I could just raise the money outside of it?” I think it’s hard to overstate how valuable YC ends up being in the long term.
I have a cousin who’s a phenomenal engineer. He’s had a 10- or 12-year career across different companies, and recently started as a staff engineer, and later an engineering manager, at Coinbase. He was telling me, “Coinbase is the largest company I’ve ever worked at, and it’s the one that feels most like a startup. Extremely high velocity, and everyone is so locked in.” And in every interaction he’s had with Brian Armstrong, this guy’s eyes are on the ball. It’s nuts how locked in he is. You hear the same thing about Tony at DoorDash, and so on. From what I can tell, there’s a reprogramming of the brain that happens when you do YC, if you do it correctly and in an engaged way, that just serves you forever. That’s the intangible benefit.
I think the dilution pays for itself when you raise at a high valuation. People complain about YC companies’ rounds being too expensive. As a founder, the 7%, you just get it back when you raise at a valuation that’s two, two and a half times higher than what you would’ve ordinarily raised at. But the better way to think about it is: what do you get in return for the dilution? What you get from YC is worth it. It’s usually very hard to justify increasing your dilution, but for this specific one, with hindsight, I can say with confidence that it’s good.
Turner Novak:
Do you think the way to get around YC dilution is you just sell less of the company in a following round? You don’t have to sell 20% of the company in a specific round. People generally do, but you don’t have to.
Daksh Gupta:
Yeah, you don’t have to. I think you can raise less. I mean, at the Seed round, we said no to most checks. Initialized invested in us at the time, and we said no to nearly every angel. We only took two, JJ Fliegelman and Rich Aberman, just because we liked them both. I think they were awesome, and they’ve been incredibly helpful over the last year, so it was 100% worth it. Every other firm, they were really great investors, but we were just like, “I’m sorry, we don’t have room, and we don’t need more money than this.” And I think, to some degree, we probably still ended up raising too much, but I like to be on the safer side. The idea of having 18 months of runway as the standard is insane to me. I think you should definitely have more than that if possible.
Turner Novak:
Oh yeah, fair.
Daksh Gupta:
Yeah. I don’t need the extra stress on top of everything. I was just like, “We also might run out of money.” I’m okay diluting a little bit more if I can just know that we’re safe on that front.
Turner Novak:
Yeah. You had a pretty interesting way of framing how you just think about fundraising in general. You think about it as dating and building a credit score. I was going to ask you: what does that mean? Because you mentioned it. I don’t actually know. It’s an interesting way of thinking about it.
Daksh Gupta:
A lot of this is just received wisdom around Seed and Series A and stuff, but early on, raising money, there are investors that are just good people, and you’ll probably figure out who they are through references. Obviously, one of the advantages of YC is that you have this network and you can figure out who the good people are. I think you should just raise a little bit of money from them, just enough that you can survive off of it. If you think you should raise less because you want to create urgency and pressure, I really think you should find another source for that urgency, one that’s less catastrophic. Running out of money as a source of urgency seems like a crazy thing to play with. Maybe, I don’t know, find a different place to get your urgency from.
Turner Novak:
Yeah, you literally run out of money. You just die.
Daksh Gupta:
Yeah. That can’t be the thing. There has to be a less insane way to be urgent. And for us, it’s that we’ll lose the market to someone else. That’s the urgency. We don’t need the urgency of running out of money. We’re totally fine on runway, we’re very prudent with our spending, and we’ve kept fairly limited headcount and everything. At the Series A, the advice I received from Brad, who’s our partner at YC, was, “If things go well, you’re going to work with your board partner very closely for a decade or more.” And that is a 10-year, permanent relationship, essentially.
Over that course of time, you can’t change your mind on it. This person’s going to be on your board forever, so you should treat it with that level of gravity. It’s like a marriage in some sense. And so I was like, well, what would I do if I was getting married? Probably go on dates, meet people, get to know them. So I just spent the next few weeks meeting all of the great investors that invest in dev tools and infrastructure-type things.
I got introduced to Eric at Benchmark through Mike at SV Angel, and within our first meeting it was extremely obvious to me that this was exactly the right person. And thankfully, he felt similarly. So I think that ended up working really well. It’s increasingly less common for people to take board seats at Series A, but I think they’re still frequent enough, and getting that right really matters. It’s still early days, and Eric and I haven’t even been working together for a couple of months, but the conversations are just fruitful.
I come out of those 30-, 45-minute conversations and there’s a new thing for me to think about that I think is really compelling, and there’s more clarity than we had when the call started. And obviously, references mattered. Everyone who’d worked with him, whether it was Spencer at Amplitude or Saji at Benchling, and everyone else I talked to, was just like, “This is the greatest investor of all time.” They’ve created enormous value over the course of the last 10 years of working together.
Turner Novak:
Yeah, Spencer’s a big Eric fan. Actually, you mentioned you listened to the episode we did together. Spencer’s actually the one who told me to have him on. He’s just like, “You’ve got to have Eric on the podcast.” I was like, “All right.”
Daksh Gupta:
I really liked that episode.
Turner Novak:
Thank you. One thing I’ve been trying to weigh is founders versus investors. Personally, I think it’s more interesting to just talk to people who built stuff. So if I’m talking to an investor, I’ve been leaning toward ones who created something. There’s a long list of investors who are like, “Hey, I want to come on the podcast,” so I’m always fighting them off of... You know. So I’ve been trying to figure out: what are the most interesting things to get investors to talk about? What did you like about that episode? Just curious.
Daksh Gupta:
I think part of it is just Eric’s been a founder before, in the internet era. I think the thing I enjoy most about talking to investors is just that, because they’re older, they were around the internet era and then the mobile era. And so there’s just stories from that that I think are so interesting. I think we could learn so much from that era of time. The internet companies came, there was this frenzy, and then 90% of them died, but Amazon and Google did not die. What did they do, and how do I do that? What can I learn from what they did and how can I not be one of the 10,000 companies that died and instead be one of the companies that became really large, important, and influential?
I feel like we don’t pay enough attention to that, and I was not around there. The internet bubble burst before I was born, and all these companies predate my time, but Eric was there. We have office hours with Hologram every now and then since he invested in us shortly after the batch, and he was there. He was in the middle of all of it. He was at Yahoo. He sold his company to Yahoo at the time, and there’s just wisdom from that era that comes. And with Eric, it’s the...
I don’t know if he was the one that said this or if it was someone else, but the thing an investor adds to your company is that they’re not in the trench with you. They’re at the surface level, so they can see the other trenches, and they can point and say, “Hey, I know your trench sucks, but I’m looking at the next one over there and that one’s also pretty bad. But this third one looks fine. Maybe try that one.” It’s just stuff like this. It’s the wisdom you get from a top-down view, and the perspective you get from working with all these different companies.
I would actually argue that investors are good podcast guests sometimes, or maybe even often. For one, they’re just better at talking. It’s a much larger part of their job, I think, and they’re better at talking about more abstract things. I can talk about code review for four hours, and it’s just like, “Who asked?” The total audience of people that are interested in that type of thing is not that large. And I think investors just have an easier time talking more abstractly about things that are more broadly interesting.
Turner Novak:
Yeah, that’s fair. And then, generally speaking, they’re also better at talking about things that are more trendy. In your case, if I’m thinking about the title... I actually have no idea what we’re going to call this, but I’m sure there’s “AI” in the title, and I’m sure the thumbnail will have some AI thing. So it’s trendy. But, generally, as a founder, you might not even know what’s cool, because you’re just not paying attention to it. You’re just focusing on whatever you’re doing, versus most VCs could recite why Web3 is going to be a thing, why every code review should have an NFT because everything should be on the blockchain, whatever. You’d probably be like, “Yeah, that sounds stupid,” but an investor could tell you the pitch of why everything’s going to be on the blockchain in the future.
Daksh Gupta:
And investors have been right for 12, 13 years. The best investors did see the future. I don’t know. There’s one side which is just over-obsession with investors, and then there’s this other subculture of, “Oh, investors are dumb.” And it’s like, “That’s not true either.” I think the truth is that, with all people, some are genuinely brilliant and others aren’t. That’s true for every category of person. That’s true of founders, too. I’ve met a lot of terrible founders, but also a lot of really incredible founders. I think this is true for everyone, and I think the discourse around it is just feudalistic and tribalistic to some degree.
Turner Novak:
Yeah. You basically can’t say that things are gray. They have to be black and white. There must be a heuristic. Anyways... Well, this was a lot of fun. Thanks for doing this.
Daksh Gupta:
Yeah, thanks for having me. This was great.
Stream the full episode on YouTube, Spotify, or Apple.
Find transcripts of all other episodes here.

