For the first time in a while, the Apple event gave us a genuine surprise. We knew something was coming, and roughly knew it would be called AirPods, but the rest was up in the air. What Apple announced was (not the best looking, but…) a real innovation on top of Bluetooth: finally, truly wireless earbuds that worked (unlike the woeful Moto ones I tested).
When designing the AirPods, it’s clear that Apple had more in mind than simply giving users something to replace the headphone jack it ditched. Apple tends to lay the groundwork for advancements long before we see what it has in mind, and analysts have been quick to point this out. The AirPods and the enclosed W1 chip are classic cases of this.
The iPhone 5s got Touch ID, but it wasn’t until the iPhone 6 arrived that it became clear biometric payments were the real use case Apple had for the tech. After all, why would Apple spend time developing something like the W1 chip just to get better audio? Ben Thompson of Stratechery thinks Apple’s moves all indicate the AirPods are a step towards whatever lies beyond the iPhone.
Beyond the smartphone?
Thompson paints a frankly believable picture of us all living without a smartphone bulging in our pockets. Something akin to the world depicted in the film Her – where the vast majority of interaction happens through the AirPods and voice. Texts, emails and everything in between are read aloud to you on command, and presumably you dictate your reply back.
All sorts of commentary ensued from all over the web: some of it grounded, like Ben’s piece, and some of it claiming that this is Apple’s move into AR.
While AR is certainly a move Apple could make, at the risk of expressing too much personal opinion, I just don’t believe it’s the move. Augmented audio is already a huge market for the hearing impaired, if only to amplify certain frequencies through a hearing aid. And although it is also becoming a part of immersive apps and products aimed at mainstream users, it remains very much a niche use case.
You could argue that having Siri in your ear all the time is a very minimal form of AR, but is Siri ever going to be contextually aware of your surroundings? Can Apple do what Google’s Assistant does and chirp in your ear when you need to know something? That’s an argument for another day; quite frankly, having Siri in your ear is just a byproduct of Apple innovating in earbud technology.
Is verbal communication the future?
It’s almost a meme that no one talks on the phone anymore. In 2012, making calls was only the fifth most popular smartphone activity, and time has only diminished it further. Of the five hours a day people spend on their phones, only 5% of that time (15 minutes) goes to making calls. In a world of Snapchat, Instagram, and at a push 140 characters on Twitter, why does the market seem so convinced the future is talking to everything?
Going back to Ben’s picture of ‘beyond the iPhone’, he describes a world where I don’t need my phone. My earphones connect to my watch, and my digital assistant reads everything out to me. It sounds like bliss, with just one problem: despite all its advantages, we just don’t want to talk.
Palm got a lot of things right when creating webOS. It got plenty wrong elsewhere, but many features in modern mobile operating systems still trace back to what Palm innovated. One of those things is text input. At any point on the Palm Pre, you could type a command and the phone would do as you asked. Simply type “email dave”, and a compose box opened for you to do just that. It wasn’t revolutionary, but it was extremely useful.
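To make that concrete, here is a minimal sketch of how such a “just type” command parser might work. It is purely illustrative: the Command type, the verbs and the parse function are my own hypothetical names, not Palm’s actual implementation or API.

```swift
import Foundation

// A hypothetical "just type" command parser, in the spirit of the
// Palm Pre feature described above. Verbs and types are illustrative.

enum Command {
    case email(recipient: String)
    case text(recipient: String, body: String)
    case unknown(String)
}

func parse(_ input: String) -> Command {
    let parts = input.split(separator: " ").map(String.init)
    guard let verb = parts.first?.lowercased() else { return .unknown(input) }

    switch verb {
    case "email" where parts.count >= 2:
        // "email dave" -> open a compose box addressed to dave
        return .email(recipient: parts[1])
    case "text" where parts.count >= 3:
        // "text dave running late" -> prefilled message draft to dave
        return .text(recipient: parts[1], body: parts.dropFirst(2).joined(separator: " "))
    default:
        // Anything unrecognised falls through to plain search
        return .unknown(input)
    }
}
```

The point is how little machinery is needed: a verb, an argument or two, and a dispatch on the first word gets you most of the way to “email dave”.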
Type those words
At any point on the iPhone I can invoke Siri by saying “text Dave” or “tweet hello world”, and it will do just that. Yet I can’t type the same command, despite Spotlight being a quick swipe down on the home screen away (and a swipe right, but that’s a different kind of mess).
Verbal communication is direct, straight to the point, and takes much less time than typing anything out. Written communication, by contrast, is fraught with misunderstanding, delivery issues and complications. So I won’t argue that text input is far better than speech; and yet, talking to each other is something we reserve for the closest of relationships.
A simple text command-line interface is the ultimate in efficiency, and Apple already knows this: it indexes and searches all your phone’s content through Spotlight. Facebook, Google and everyone in between know it too, which is why they are all rushing to put bots inside text-based messaging platforms. Putting Siri in my ear is simply not Apple’s big play for the future.
If the world is so convinced that something comes after the smartphone, what exactly is it? Maybe it’s something we can’t even comprehend yet, or maybe it’s just the next iteration of the smartphone’s interface. To be frank, the AirPods are just earphones, and nothing more.