It’s that time of year again when Google I/O takes centre stage in the tech world. This year, Google will be marking the 10th anniversary of the event. Although the conference is generally quite technical and geared towards the developer community, there are always some key announcements that add to the excitement about what the future holds for Google.
Looking at the schedule for the event and the rumours going around, here are a few areas I’m looking forward to hearing about.
Google I/O 2014 saw the release of Google Cardboard, and as of January 2016, over five million Cardboard viewers had been shipped, with over 1,000 applications published and over 25 million application installs.
This year, there are questions around whether we could see a standalone, Android-based headset, and whether such a headset would be strongly tied to the technology and platform developed around Project Tango. If there is one VR session to catch, it will be “VR at Google”.
Project Tango is a platform designed around technology that enables mobile devices to use 3D motion tracking, giving the device the ability to understand its relative position in the world around it.
A great example of this technology in the field launched earlier this year at the Museu Nacional d’Art de Catalunya. There, visitors could walk around Catalonia’s national museum of visual arts and be guided to artworks of interest. Once at an artwork, they could call up detailed information by holding the device up to the painting.
The schedule for the I/O event has multiple sessions focused on Project Tango, with many of them being live streamed. I’m hoping for some exciting developments, and I’ll be interested to see how gaming in the augmented world has moved along and whether we will see a killer app that takes Project Tango to the next level.
The I/O session to watch will be “What’s New with Project Tango”.
Machine learning and neural networks will feature throughout many of the sessions at this year’s event. We can anticipate that the session “Google’s vision for Machine Learning” will centre on the company’s neural network and machine learning offerings: Cloud Machine Learning and the TensorFlow framework. TensorFlow is used heavily across Google’s speech recognition systems, Photos, Inbox and Search.
Another session of particular interest will be “Breakthroughs in machine learning”, which promises a glimpse into how the intelligence in these systems works, with examples of three machine learning technologies currently being developed at Google. And if you’re interested in getting your hands dirty, check out the session “How to build a smart RasPi Bot with Cloud Vision and Speech API”.
The final product I’m hoping to hear about is Google’s new device, code-named “Chirp”. Another step into voice-controlled assistants and the smart-home market could see Google take on Amazon’s Alexa with its own home hub device. This could have access to the full range of Google services, as well as IoT devices connected to the home, like Nest and Philips Hue.
There’s speculation that Chirp could look similar to Google’s OnHub router.
Julian Thomas is a Digital Technical Director at BCM