For years Google has crowed about their code of conduct, “Don’t be evil.” In 2015 they doubled down on the concept, if not the actual words, when they introduced their new motto, “Do the right thing.”

As the company explains in their code of conduct, “Employees of Alphabet (Google’s parent company) and its subsidiaries and controlled affiliates should do the right thing — follow the law, act honorably, and treat each other with respect.”

In case that’s not clear enough for you, executive chairman Eric Schmidt and advisor Jonathan Rosenberg explained it further in their book, How Google Works:

“…it genuinely expresses a company value and aspiration that is deeply felt by employees. But ‘Don’t be evil’ is mainly another way to empower employees… Googlers do regularly check their moral compass when making decisions.”

If that’s true, how do Schmidt, Rosenberg, and the rest of the Googlers explain the company’s latest experiment?

http://www.latimes.com/business/technology/la-fi-tn-virtual-assistants-20180509-story.html

To create a more compelling experience, Google unveiled their latest experiment, an automatic telephone algorithm that makes calls and schedules appointments in a nearly flawless human voice, complete with “ums” and “ahs.”

What’s wrong with that, you might ask? At no point does the software reveal it isn’t human. This intentional omission has sparked a furious debate about ethics and consent.

Google clearly heard the brewing controversy. Two days after the product debuted, the LA Times reported that “Google reversed course by saying explicitly that the service would include a disclosure that it’s not a person.”

Regardless of their change of direction, what Google is doing here is wrong. They are now trying to play catch-up by saying their service “will come with a warning.” What’s funny is that they had just used their latest conference to spin that they were “not evil,” but then they did something as questionable as faking real human interaction.

Regardless of how they spin it, Google is not using this to enhance our lives. Using AI to “make a reservation by phone,” for example, dehumanizes the whole process. What Google will take away from our lives is the simple act of talking with another human being.

As tech ethicist David Ryan Polgar points out, human communication and relationships are based on reciprocity. When it all becomes transactional, the conversation is solely about words and not meaning.

What we see happening in Silicon Valley is that technologists are making everything transactional and goal-oriented solely to collect more data on your habits and preferences.

Instead of Google opening up a conversation with the public as to how their innovations can bring value — with more accurate medical transcriptions, say, or better 911 response times — they are creating products without thought to the negative societal implications.

That’s what Facebook did with Cambridge Analytica.

That’s what Twitter has done with Russian election trolls.

“The way to win in Silicon Valley now is by figuring out how to capture human attention. How do you manipulate people’s deepest psychological instincts, so you can get them to come back?” said Tristan Harris, a former design ethicist at Google who has since become one of Silicon Valley’s most influential critics.

The proliferation of AI, Harris said, creates an asymmetric relationship between platforms and users. “When someone uses a screen, they don’t really realize they’re walking into an environment where there are 1,000 engineers on the other side of the screen who asymmetrically know way more about their mind [and] their psychology, have 10 years of data about what’s ever gotten them to click, and use AI prediction engines to play chess against that person’s mind. The reason you land on YouTube and wake up two hours later asking, ‘What the hell just happened?’ is that Alphabet and Google are basically deploying the best supercomputers in the world—not at climate change, not at solving cancer, but at basically hijacking human animals and getting them to stay on screens.”

Of course, these companies are designed to generate shareholder value; that’s what our capitalist system is built on. There’s nothing wrong with that. But who determines the societal cost that these companies are profiting from? And what about the stakeholders and the participants?

After all, we are the data generators Google, Facebook, and the rest profit from.

Alphabet calls their new technology “Google Duplex.” Perhaps “Dupe Lex” would be a better name.
