Humane AI — Product Analysis & Its Fundamental Flaw
I thought it would be interesting to share some thoughts on the Humane AI Pin, which was just announced for pre-order. It's the first AI assistant packaged as a physical product, and it's been created by ex-Apple people, so presumably people who are very smart and clued in to human-centred design. That makes it all the more confusing that they missed what I think is a fundamental flaw in this product.
In this video, I talk through that flaw, the siren song implied in a new category of products that mass-collect data around you (all your conversations, your whole workday as a screen recording, etc.), and what would make these products more compelling and cheaper.
Video transcript
NOTE: The transcript was run through a GPT to fix grammatical errors, so it won't be exactly as spoken, but it should be more readable.
A few thoughts about the Humane AI Pin, which was officially announced in detail today, and why I think it's fundamentally flawed from a product perspective. Second, a siren song that is a dangerous one to follow, and which I think they're following. And third, what I think the product could and should look like, which could be a recovery for them if they realized it.
The first thing is the fundamental flaw. This pin sits on your clothing, kind of like a boutonniere on your lapel. It has a camera, a microphone, and a speaker, so it records things around you and you can interact with it. It also has a little laser projector that can display things on your palm when you hold your hand up. What I think they're fundamentally misunderstanding is people's perceptions about privacy. Nobody wants to be recorded in public, so a camera and a microphone are a no-go. You don't have to look much farther than the way people are perceived in movies when they have to go in wearing a wire and record a conversation.
Google Glass showed us that people are not interested. People were beaten up in bars when they walked in wearing Google Glass, and stores put up signs saying people weren't allowed in with them. That was strictly a privacy and recording thing. We just don't trust people who are recording us. In our experience, people who record us are not friendlies; they are police officers, they are lawyers, they are people who are dangerous to our freedom. They're going to analyze what we're saying. So people fundamentally don't like it. And I think anyone wearing that pin will get a similar reaction to the one Google Glass wearers got, once people start to understand what this thing means.
I think this is a product category that will never succeed. No microphones and no cameras should ever be in a product that you wear. The one exception, I think, is sunglasses, like the Meta AI glasses or the Ray-Ban Stories, whatever they call them now. The Meta sunglasses are different because, by definition, you wear them outside, where there's less expectation of privacy. Still a little sketchy, but acceptable. People expect their conversations to be overheard when they're out in public, but indoors, it's a total no-go.
So, that's the fundamental flaw. I'll talk about what changes they might make to the product in a second, but the other piece I wanted to note is the siren song of collecting data about ourselves. There's this idea among people who have had a lot of success by thinking and having knowledge that information equals success, or equals good outcomes. These tend to be people who work in the tech industry, right? People who love to absorb knowledge and love to learn, and for whom collecting data therefore feels like a path to more success, recognition, and acceptance. I'm one of those people; I have 30,000 journal entries in my Evernote about all kinds of different things. But more information does not equal success. You don't have to look much farther than the fact that we all know we should be, for example, eating better, working out, or maybe getting out of a bad relationship or some other situation we understand is bad for us. But doing it is a whole different thing; it's a very different skill. Knowing doesn't equal doing. Doing is what matters; knowing is only the first step.
So, when these products come out, like Rewind, which records your screen on your computer (and now they're releasing a pendant that hangs around your neck and can record the audio around you all day), they're built on the assumption that more information is better, and that's not true. It's a siren song that doesn't hold up.
The other fact is that search is actually way harder than I think people understand. I believe we've been lulled into thinking search is a solved problem because Google has created a search engine that seems to be able to pull up any information we want. But we all know Google is struggling now, in the sense that its search results are getting worse. And if you want to find something arcane or unique, it's very difficult to do on Google unless you really understand how to use the search engine or know very specific phrases or descriptions of what you're looking for.
People might point to the way ChatGPT and other large language models have internalized the internet, and argue that talking to them will be the answer to sorting through a lot of data. In practice, though, they're terrible at giving you specific data. They're very good at concepts, discussing things creatively, and generating ideas, but when they need to give you specifics about something, they just default to search-engine behavior, pointing you toward websites or quoting them. So an LLM doesn't deliver on the promise of sorting through a lot of data either.
The last thing is what I think the answer is for this product category, and I think there is one. This is a very interesting category, and the answer, in my mind, is basically what Humane has created, without the camera and without the microphone: a little device that sits in your pocket, that no one else sees, that connects to an earpiece and just gives you a direct line to an OpenAI-style personal assistant. What's shocking to me is that even as we use ChatGPT and other products and constantly have our minds blown by what they can do, what they can tell us, and how useful they can be, the people creating products seem to be piling all kinds of additional bells and whistles onto them, when the real answer to making an AI more compelling is simply removing all the friction between you speaking to it and it responding. And I think that's the line Humane should be taking. Create a device that is always on; focus on a battery that lasts and a speaker-and-microphone system that is always ready; make it something you can just put in your pocket, pair with an earpiece, and talk to. Say anything, and get answers back. Remove all friction and let the large language models do what they do, because they do it super well.
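To make that "remove all friction" loop a bit more concrete, here's a rough sketch of what the software side of such a pocket device could look like: audio from the earpiece comes in, gets transcribed, goes to a chat model, and the reply is spoken back. This is purely my own illustration, not anything Humane or OpenAI has described; the model names are placeholders, I'm assuming the OpenAI Python client, and the always-on microphone capture is hand-waved as a pre-recorded clip, since that hardware part is the real product work.

```python
# Sketch of the frictionless voice loop: earpiece audio in -> text -> LLM -> speech out.
# Assumes the OpenAI Python SDK (v1.x) and placeholder model names; mic capture is
# simplified to a pre-recorded WAV file for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_spoken_question(clip_path: str, out_path: str = "reply.mp3") -> str:
    # 1. Speech-to-text: turn whatever was said into the earpiece into plain text.
    with open(clip_path, "rb") as clip:
        heard = client.audio.transcriptions.create(model="whisper-1", file=clip)

    # 2. The assistant itself: pass the utterance straight to a chat model, no UI in between.
    reply = client.chat.completions.create(
        model="gpt-4",  # placeholder; any capable chat model
        messages=[
            {"role": "system", "content": "You are a terse personal assistant heard through an earpiece."},
            {"role": "user", "content": heard.text},
        ],
    )
    answer = reply.choices[0].message.content

    # 3. Text-to-speech: speak the answer back through the earpiece.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
    with open(out_path, "wb") as f:
        f.write(speech.read())

    return answer
```

The point of the sketch is how little is in it: the whole product is that loop, and everything else bolted onto these devices is friction.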
Of course, that collides with the smartphones we already have and with giant companies like Apple and Google, who already own the device in our pocket, and that placement is going to be very powerful. But it makes sense for OpenAI, Humane, and other companies to try to get into this market because, realistically, we only use the earpiece for a few things: phone calls, listening to music, and whatever assistant we have.
We'll still always carry a cell phone for the visual stuff, right? We'll always want to see pictures and interact with people on social media; the visual side of our devices will always be there. But I can certainly imagine a world where I have a phone plus this little device that sits in the small top pocket of my jeans, kind of like the old iPod Shuffles, if you remember those, and connects to my earpiece. I'd carry those two devices the same way I wear the watch for the functionality it offers but don't take phone calls on it. I mean, unless I'm on the couch and don't want to get up, I can kind of do that in a pinch, but it sucks. So it's no big deal to carry one more device, especially with how much benefit it will bring.
Anyway, that's a little longer than I wanted to go, but those are a few thoughts on the Humane AI Pin and that whole product category.