Google I/O – A 2018 Watershed for AI? Some Notes

This was the week of Google I/O: part rock concert (the arena part, not the music part), part tech conference. I settled into a chair, together with a group of the Basingstoke Tech Scene folks, and listened. By now it’s apparent that the major story of the event was Google Duplex, Google’s conversational AI addition to Google Assistant. It was demonstrated making a phone call to book an appointment, chatting with the receptionist, and making a better job of booking a table at a restaurant than most humans would. If you haven’t seen it, here it is.

 

 

It’s fair to say that a fair few folks on Twitter almost lost control of their bowels over this one. Most of the comments were along the lines of “it’s totally unethical for AI to pretend to be a human”. Although it is the era of “pile-ons”, “drags” and faux moral outrage, I can’t see that anyone making the comments had checked into the context. The opening section of Google I/O was a long and reflective talk about taking responsibility for the potential harm that modern technology is capable of, and the press release explained how they were experimenting with ways to make it clear the call was robotic. That is a good few steps ahead of the robot dialler calls I get on my phone every day. Turing test anyone? Of course, this was a pre-recorded demo, so we don’t know if they had Derren Brown’d it (thank you Charlie Southwell for that).

Many of the new feature announcements from Google were specifically designed to reduce people’s dependency on technology, and to provide greater accountability. I am by no means an unconditional Google fan; my perspective hasn’t changed much since my “Week without Google” experiment, which had me sounding a note of caution about Google’s collection of data on dozens of radio programmes and in a series of press interviews, from the UK and Europe to Australian national radio.

However, the Google of Google I/O 2018 is a much more cautious Google than back then, from the careful gender balance on stage, to a steady stream of mea culpas in the narrative. This was a Google that has watched the Facebook / Cambridge Analytica debacle, realised it is in a thinly glazed greenhouse, and gently placed its rocks on the ground. I even sensed a backing away from the use of the term “AI” and more use of the less mystical-sounding “Machine Learning”. That might have been down to technical correctness on Google’s part, or it may have been more noticeable because I’ve been swamped by companies using the term “AI” for everything from spam filters to animated GIFs. These are interesting times for AI and ethics. The highlights for me:

Photo Thanks to Adrian Braine
  • A light-hearted start, with an announcement that Google has fixed its hamburger emoji. The cheese has been restored to its rightful place. While they were at it they fixed a strange anomaly with the beer emoji, so you now get a full pint, rather than a half with the froth floating in mid-air. Fixed it. What wasn’t mentioned was that the gun emoji, in line with other platforms, is now a water pistol. That leaves only Microsoft with a gun emoji, which is ironic given that they were the first to replace the gun with a water pistol, but then switched it back. I’m surprised the NRA isn’t all over this one.
  • Google ran through its efforts with AI in health care, which were familiar territory – with a newly published medical paper coinciding with the event. I’m not sure I am comfortable with timing the release of medical papers to coincide with a tech keynote, but perhaps there were other reasons. The Looking to Listen and Morse code work looked like an impressive effort, and pushed up the feel-good factor, especially by having Tania Finlayson in the stadium.
  • Smart Compose, a new feature for Gmail, is auto-correct on performance enhancing drugs. It is capable of automatically adding whole sentences and paragraphs to your email replies. You can see it in action in the video above. Strangely, there didn’t seem to be any ethical complaints about this one. Next time you get a long email from the boss, you might want to check that they wrote it.
  • There were some nifty new features for Google Photos, including re-colourising old black and white photos. Photo editing has been an early beneficiary of faster, cheaper neural networks and Google’s new computing platforms. Apple’s photo service is looking increasingly lost at sea in this space.
  • Google Assistant got a series of boosts. DeepMind’s WaveNet is giving us better voices, with celebrity voices coming to a Google device near you soon. We have reached the point where software is able to mimic anyone’s voice. Fake news 2.0 is on the way… Don’t believe everything that you hear. Meanwhile Google Assistant is getting better at understanding speech, meaning you’ll have to say “hey Google” less; it will also understand multiple requests in one sentence (co-ordination reduction, if you want to get technical about it – there’s a toy sketch of the idea after this list). They’ve also run with their own version of Alexa’s magic word feature, which Google calls “pretty please”, in response to concerns that voice assistants are making children rude. Apparently they are working with developmental psychologists on the feature. I suspect some will want to put them in the naughty corner over this feature, which gives children positive reinforcement whenever they ask nicely. I would have preferred an alternative version that took action when the children were rude, perhaps shutting the internet off for 30 minutes. But I’m just old and grumpy.
  • There was a fair amount of talk about understanding people’s habits and improving wellbeing, which is the next frontier for Google. There’s a new dashboard for Android to show how you spend your time with your device, a ‘take a break’ feature for YouTube, and a nice ‘fade to black and white’ feature to encourage you to put down your Android device and go to sleep. Google is focusing on:
    • Understanding your habits.
    • Helping you focus on what matters.
    • Enabling you to switch off and wind down.
    • Finding balance with your family.
  • The new News features, while good, rubbed me up the wrong way. The idea of making news “super engaging” and “fast, easy and fun” sounds like a great product goal. It is also potentially why ‘news’ and ‘entertainment’ seem to be merging and blurring in a deeply unhealthy way. I think we need to change our relationship with news, but that is a longer discussion. For now, Google is helping people to fact check, and to see different types of coverage of a story, together with a story ‘timeline’. We’ll see.
  • Lots of new features for Android P, including a few I am pretty sure I had on my Windows Phone quite a while ago. If you are using Android on a Samsung device, expect to get the new version of Android sometime in 2056… Also note, there will be no Android P for the Google Pixel C. That means no Google-made Android tablet going forward. It seems that Apple might have won the iPad fight.
  • The Maps update included Google’s Visual Positioning System going mainstream, providing augmented-reality navigation. I am sure there was more, but that must have been about the time my stomach demanded food. I do remember improved maps for rapidly developing cities, like Lagos.
  • Google Lens was back… And this time it’s personal… Actually, this time it is on more handsets. Remember when you were a child and could shout “Hey <parent name>, get me one of those!” *and points* – well, now you can do that too. Lens will recognise clothes and accessories and tell you where to buy them… Or, if you are in the UK, it will just make you realise that you can’t afford anything anymore, and understand quite how much those big Instagram sponsorship deals are worth…
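
Since I dropped the phrase above, here is what co-ordination reduction looks like in practice: a sentence like “dim the kitchen lights and the hallway lights” shares a single verb across two requests, and the assistant has to expand it back into two separate commands. The sketch below is a deliberately naive, hypothetical Python illustration of the idea – nothing to do with Google’s actual implementation, which will use learned parsers rather than string splitting.

```python
# A toy, purely hypothetical illustration of "co-ordination reduction":
# one shared verb ("dim") is distributed back across both conjuncts so the
# assistant can handle them as two separate requests. A real system would
# use a trained parser; this naive string split is just to show the idea.

def expand_coordinated_request(utterance: str) -> list[str]:
    """Expand 'VERB X and Y' into ['VERB X', 'VERB Y'] (very naively)."""
    words = utterance.strip().split()
    if "and" not in words:
        return [utterance.strip()]          # nothing to expand
    i = words.index("and")
    verb = words[0]                         # assume the first word is the shared verb
    first = " ".join(words[:i])             # e.g. "dim the kitchen lights"
    rest = " ".join(words[i + 1:])          # e.g. "the hallway lights"
    # Re-attach the shared verb if the second conjunct dropped it.
    second = rest if rest.startswith(verb) else f"{verb} {rest}"
    return [first, second]


if __name__ == "__main__":
    print(expand_coordinated_request("dim the kitchen lights and the hallway lights"))
    # -> ['dim the kitchen lights', 'dim the hallway lights']
```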

If you want more depth on the AI/ML aspects, this is a good read. I said I would say more about the technology ethics piece, but I think that is a much longer topic. For now, it’s safe to say that ethics isn’t what the most vocal Twitter users think it is; the righteous moralist position isn’t a good approach for technology. Ethics, at least modern business ethics, is culturally situated and normative. Projecting one culture’s ethics onto another has a long, and awful, historical track record. Not that the Friedman doctrine is much better. Just because something feels creepy doesn’t make it unethical, just as something that doesn’t feel creepy isn’t always ethical. The debate over right versus wrong has travelled through the troubled intersection of “that’s inappropriate”, hopped over “what’s ethical”, and seems headed towards its final destination: what ‘feels right’.

 
