What we can learn about OpenAI's product strategy

I've been talking about development a lot, but today I'm getting back to product. I wanted to reflect on what we can learn about OpenAI's product strategy after their DevDay conference, and what product(s) we can expect in the next 12 months.

Transcript below.

Video transcript

I've been talking about development a lot, but today I want to get back into product and specifically talk about OpenAI: their product strategy, and what we can glean from the OpenAI DevDay developer conference they held last week. A lot of interesting things came out of it that give us an idea of where they're pointed and what they might be thinking about for the next year, so I'll talk through a few of them.

I think the key to understanding their direction and product strategy is to realize that their current models have overshot what people expected by a lot. Nobody expected anything like even GPT-3.5, and GPT-4 is better still. Nobody expected LLMs to do that kind of work or to have that much flexibility. You can see it everywhere: people are constantly posting about the incredible things these models do. The models have absolutely overshot people's expectations.

Naturally, if you're thinking from a product perspective, you don't want to keep building on something that is already good enough; you want to focus your attention on areas you haven't developed well enough yet. That's also sound from a disruption-theory perspective: companies are open to disruption when their product over-delivers for a good chunk of the market, which opens the door for somebody to come in and serve what the majority actually needs while the bigger company is focused elsewhere and its business doesn't make sense to serve the underserved.

It's clear OpenAI understands this, because all of their announcements focused not on the model itself but on making the model more useful. I like the car metaphor: it's like having a race car with a very powerful engine. You need to figure out how to get that power to the wheels and move the car, and then make the car drivable, so it feels good and steers well. Everything they're doing is about making this engine drivable.

So: the ability to program assistants in natural language. A more capable API, with image generation and text-to-speech included alongside the language models, which gives developers more Lego blocks to play with. Bigger context windows, so it's easier to give an AI a personality, feed it more information, and customize it to feel different, again helping developers create experiences for people on top of the model. Then there's pricing: cost is a huge barrier, so making things cheaper means companies can do more and newer companies can get started. And finally Copyright Shield, which promises to protect customers from IP lawsuits around using a model trained on the internet and other people's content, de-risking things and increasing the number of people willing to build on top of the model. All of that shows they're thinking about drivability.
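To make those Lego blocks a little more concrete, here's a minimal sketch of what the pieces look like in the OpenAI Python SDK as it stood right after DevDay. This isn't from the talk; the "Trip Planner" assistant, its instructions, and the specific model names are just illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# "Programming" an assistant in natural language: the instructions string
# is where the personality and customization live.
assistant = client.beta.assistants.create(
    name="Trip Planner",  # hypothetical example assistant
    instructions="You are a friendly travel assistant. Keep answers short and practical.",
    model="gpt-4-1106-preview",
    tools=[{"type": "code_interpreter"}],
)

# Image generation and text-to-speech sit in the same API surface,
# so a developer can mix modalities without leaving the platform.
image = client.images.generate(
    model="dall-e-3",
    prompt="A simple illustrated map of Lisbon with pins on three neighborhoods",
)

speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Your three-day Lisbon itinerary is ready.",
)
speech.stream_to_file("itinerary.mp3")
```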
Now, if we think about what's in the future, this is an interesting one, because I see four areas OpenAI needs to consider, or is probably considering: the model, the infrastructure, the brand, and the consumer product. They have a strength in every one of them.

The model: they have the strongest model in the world, so that's an area of strength. Infrastructure: through their partnership with Microsoft they have the biggest training network, another huge strength. The brand: OpenAI itself not so much, but ChatGPT is a huge brand that has become almost synonymous with AI chatbots. And the consumer product: ChatGPT is also the most popular AI chat product, which is a huge benefit.

So how do they actually navigate this? That's a really interesting question and something I'll be paying attention to quite a bit. You can imagine that at this stage of the company they're still working out which areas to focus on, because they probably can't be all of those things at once. They also have an enterprise business: they're now building custom models for around two million dollars, or fine-tuning the GPT models for bigger enterprises for millions of dollars, so there's another line of business there. Maybe they can build all of these things out. It will definitely spread them thinner, but they're obviously going to be well funded, in which case management, lining everybody up, becomes the biggest challenge. That will be very interesting to watch.

The final thing I'll talk about is what Sam Altman alluded to when he said something like "we've got an amazing thing coming for you in the next year." He also talked about Satya Nadella and the "can't wait to build AGI with you" line. If I had to guess where things go in the next year, I think their focus is going to be on creating an AGI-like experience. AGI from a theoretical perspective isn't what's interesting or important here. The model is so powerful, so compelling, and so convincing that if they just build experiences on top of it that make interaction feel more natural, everybody would already have the sense that they're talking to something real. Combine that with the GPTs and the agents, and you can really see a world where you're talking to something that seems like a real thinking thing, and it's also capable, because it's connected to all these pieces.

Right now that's still janky. You have to pick your agents or your GPTs, ask one to do something, then switch modes to do something else. That's an early-stage product approach, because you're breaking everything up; making the integration of everything seamless is actually the very difficult part, so it makes sense that they're doing it this way. What I think we'll end up seeing is a more natural interface. Talking will be one part of it, but there could also be a visual aspect as it gives you feedback. You can already see that in some of their demos, where the chat assistant helps plan a trip for somebody and a map with pins, price calculations, and ticket details come up. So: more visuals, more natural interactions, and more capable agents. An AGI-like experience is on the horizon, and I honestly wouldn't be surprised if we saw it within the next year, by their next DevDay. Anyway, those are a few thoughts.