As far as rumors go, Andromeda was a pretty good one. Spurred on by the cryptic tweet a few weeks before Google’s big event, it became something of an overnight sensation.
Hell, even my 13-year-old nephew had heard about it, and he’s not even a techie.
Alas, despite all the back-and-forth that surrounded Andromeda – what it is or isn’t, what it will or won’t replace – the resounding truth is that, whatever it is, it was not the focus of Google’s new hardware initiative.
That title belongs to Google Assistant.
For now, until we see something more concrete, it is time to let Andromeda go back to being whatever pet project it likely was all along and focus on what Google seems to actually be doing.
And what they seem to be doing is bringing Google Assistant to the forefront.
From the #madebygoogle event today, straight from Sundar Pichai:
When I look ahead at where computing is headed, it’s clear to me that we are heading, evolving, from a mobile-first to an AI-first world.
At the heart of these efforts is our goal to build a Google Assistant.
That was right from the beginning of today’s hardware announcements and right at the center of where it seems Google is headed. A few days ago, we made the claim that Google was tying this event and all its hardware efforts to Google Assistant. Looks like we made the right call.
So, what is Google Assistant and why is it important?
Google Assistant is similar, in many ways, to what you experience with Google Now on Android and Chrome OS. But, it is (and will evolve into) so much more.
While Google Assistant can open apps, get answers, and carry out commands, the overarching purpose of this tech is far bigger than what we are seeing right now.
This platform is the culmination of all the work Google has been doing around machine learning and AI. From the unique Google Photos search ability to seemingly common search predictions in Google itself, machine learning and AI are at the core of what Google has been doing for years now. And all that work and data have a home now: Google Assistant.
Think about this for a second; according to Sundar Pichai:
In many ways we’ve been working hard at this problem ever since Google was founded 18 years ago. Today, our knowledge graph has over 70 billion facts…and we can answer questions based on that.
70 billion facts. That is staggering, but it is a clear indication of why Google is better suited than anyone to create an assistant that learns and grows more capable by the day. In addition, Google has spent years working on language processing, giving it a distinct advantage in text-to-speech, speech-to-text, and language translation. Deep machine learning and natural language skills: these are the tools needed for a truly game-changing virtual assistant. One that is not merely the product of what has been done up until this day, but instead a piece of software that learns and gets better the more it is used.
That is the promise of what Assistant is and will be. According to Scott Huffman, Google’s Lead Engineer for Google Assistant:
Google’s always been about helping people. People come to Google for help with every imaginable task. So really, the Google Assistant is a continuation of the company’s focus on helping users. It’s what we’ve always been about, and it’s what people everywhere know us for. But to do this really well, the Assistant needs to work with many partners across many kinds of devices and contexts.
Google’s aspirations for Assistant are clearly larger than a couple of phones and a wireless speaker. Towards the end of today’s presentation, we got a little peek into how Google will let third parties hook into Assistant through an open development platform called Actions on Google. To keep it simple, it divides the ways developers can interact with Assistant into two lanes: direct actions and conversation actions.
Simply put, direct actions are things like: play this song, cast a video, turn off the lights. Conversation actions are more like hailing an Uber: they require a bit of back-and-forth with the app, and Assistant is built to fully integrate here as well. What’s more, Google Assistant will work across voice-only, text-only, and hybrid scenarios, allowing it to be available on a wide range of devices in the future, not just phones and tablets.
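To make the distinction between the two lanes concrete, here’s a minimal sketch of how they might differ from a developer’s point of view. To be clear, this is not the actual Actions on Google API (Google hasn’t published those details yet as of the event); every function name, intent string, and payload shape below is invented purely for illustration:

```python
# Hypothetical sketch only -- not the real Actions on Google API.
# All names, intents, and payload shapes here are invented for illustration.

def handle_direct_action(intent: str) -> dict:
    """A direct action completes in a single turn: one command, one result."""
    if intent == "turn_off_lights":
        return {"speech": "Okay, turning off the lights.", "expect_reply": False}
    return {"speech": "Sorry, I can't do that yet.", "expect_reply": False}

def handle_conversation_action(intent: str, state: dict) -> dict:
    """A conversation action carries state across turns, asking follow-up
    questions until it has everything it needs to act."""
    if intent == "hail_ride" and "destination" not in state:
        # Missing information: ask the user and keep the conversation open.
        return {"speech": "Where would you like to go?", "expect_reply": True}
    if intent == "hail_ride":
        return {"speech": f"Requesting a ride to {state['destination']}.",
                "expect_reply": False}
    return {"speech": "Let's start over.", "expect_reply": False}
```

The key difference is that single `expect_reply` flag: a direct action always ends the exchange, while a conversation action can hand the microphone back to the user and pick up where it left off on the next turn.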
Taking development even further, Google is prepping the Embedded Google Assistant SDK. According to Huffman:
We imagine a future where the Assistant will be ready to help in any context, from any kind of device. So whether you are tinkering with a Raspberry Pi in your basement or you’re building a mass-market consumer device, you will be able to integrate the Google Assistant right into what you make. Today marks an important moment for Google. It’s an inflection point created by incredible advances in machine learning, the power of the knowledge graph, our diverse ecosystems, and the magic that is possible when the best hardware meets the best software.
What this all boils down to is Google stating quite plainly that Assistant is its next big thing. Not a new OS. Not an evolved OS. Not a new phone or tablet. Google builds things that scale. If it doesn’t scale, it never sticks around at Google. They are building Assistant to scale across every imaginable device and every possible use case.
Whether Google fans collectively see this as game-changing or not, it really is. This is the evolution of Google happening right in front of us. My hope is that, in time, Assistant makes its way natively to Chrome OS. Imagine all the ways it could transform education and work. Imagine how helpful it would be on your desk throughout the day. If Google can scale Assistant in the ways they are considering now, the future of this tech looks very, very promising, indeed.