Issue 2/2017 - Net section
In 1993 AT&T launched an ad campaign with scenes from then-unlikely locations and communication technologies and asked “Have you ever … ‘sent someone a fax from the beach,’ ‘tucked your baby in from a phone booth,’ ‘opened doors with your voice,’ ‘attended a meeting in your bare feet,’ ‘learned special things from faraway places’?” (there were others). Each of the TV ads ended with the same tag-line voice-over and text graphic worthy of 1984: “YOU WILL.” The proposed bargain, part Faustian, part Orwellian, part contrivance, part advance marketing scheme, is what Lewis Mumford might have called “the magnificent bribe.” This comports, in curious form, with an oft-cited remark by Alvin Toffler, whose book (and later a film hosted by Orson Welles), Future Shock, announced the ‘premature arrival of the future.’
From the Turing Machine to Marvin Minsky’s dream of a deep AI, the aspirational objective of fully intelligent computational systems has been the holy grail of the systems elite, the speculative cyber-philosophers, the digital futurists, and the marketers now mobilizing AI across the widest array of disciplines and devices.
The commercialization of AI has now permeated nearly every electronic system, every digital device, every network, every service, and comes wrapped in the metaphors of both ‘intelligent systems’ and ‘big data’. AI – it may be more appropriate to say algorithmic ‘intelligence’ – is being deployed as domesticated, friendly, even intimate. It interposes itself as silent presence, intelligent attendant, neutral servant, obedient apprentice, dutiful partner, pithy valet, personal shopper, ‘smart’ companion – an amenable, impassive, subservient agent behaving with deference and submission. AI’s entry into the public sphere came clumsily, in early attempts to normalize human-machine interactions like Joseph Weizenbaum’s ELIZA, which ran “the doctor script” in a Turing-‘inspired’ psychiatric exam and created what became known as ‘the ELIZA Effect’ – “the susceptibility of people to read far more understanding than is warranted into strings of symbols – especially words – strung together by computers” (Douglas Hofstadter) – a program that spewed pseudo-psychiatric tautologies since widely parodied as largely pointless repartee.1 AI emerged in sci-fi literature, and particularly in film. Its many public spectacles – IBM’s Deep Blue defeating Garry Kasparov, Watson winning at the television game show Jeopardy!, its presence in countless films (2001, Welt am Draht, Blade Runner, Terminator, The Matrix, A.I., Her, etc.) – came as harbingers, dystopic or escapist virtual worlds, cautionary tales.
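To make the mechanism behind the ‘ELIZA Effect’ concrete, the sketch below shows, in rough outline, the kind of keyword matching and pronoun reflection ELIZA’s doctor script relied on. It is a minimal illustration, not Weizenbaum’s code; the patterns, responses, and function names are invented for this example.

```python
import re
import random

# Invented doctor-style rules: match a keyword pattern, then hand the
# speaker's own words back as a question.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),   ["Why do you say you are {0}?"]),
    (re.compile(r"my (.*)", re.I),     ["Tell me more about your {0}."]),
]
DEFAULTS = ["Please go on.", "What does that suggest to you?"]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(DEFAULTS)  # the tautological fallback

print(respond("I feel nobody listens to me"))  # e.g. "Why do you feel nobody listens to you?"
```

There is no understanding anywhere in such a loop, only string substitution, which is precisely what makes the willingness to read understanding into it so striking.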
Sentient, omniscient, omnipresent, omnivorous: Kubrick’s film 2001 introduced HAL, the mutinous computer that both runs the ship and conspires to murder its human inhabitants as inconvenient, unreliable, and ultimately unnecessary. HAL is the computer that film writer Serge Daney called “the first computer to talk and die in cinema.” HAL’s AI circuits are disengaged one by one as the machine spirals and sputters into shut-down mode, revealing both its ‘conscious’ regression and its imminent death throes, while a secret sub-routine launches a message about the ‘actual,’ if ambiguous, mission.
HAL’s legacy has re-emerged in the dizzying array of ‘smart’ technologies poised to manage daily affairs: smart phones, smart gadgets, smart speakers, smart lightbulbs, appliances, home-monitoring gear, a ‘smartness’ embedded in the ‘internet of things.’ It arrives as a triumphant anthropomorphization of AI in the form of intelligent ‘things’ (well, actually of connected ‘things’ armed with language-recognition algorithms cum intelligent agents), a technology destined to empower seemingly effortless control over routine activities.
Just recently an agent from my mobile carrier called to let me know that I could upgrade my phone. I asked what advantages there were and, after a brief pause, they said “Siri.” I responded that I didn’t mind talking on the phone but would prefer that the phone wasn’t listening (there was silence from the other end). But listening (or, for HAL, lip-reading) is perhaps not so passive.
A headline from the UK Telegraph (1.8.2017): “Amazon Echo rogue payment warning after TV show causes 'Alexa' to order dolls houses”, with the article continuing: “The show depicted a six-year-old asking her family’s new Amazon Echo ‘can you play dollhouse with me and get me a dollhouse?’ The device followed the command, ordering a KidKraft Sparkle mansion dollhouse, in addition to ‘four pounds of sugar cookies.’” It has been widely reported that police in Arkansas sought the data on an Amazon Echo in a murder investigation – see “Amazon Echo Murder Case Renews Privacy Questions Prompted By Our Digital Footprints” on National Public Radio’s news feed2 – to determine if it, by ‘choice,’ chance, or surreptitiously, recorded possible evidence of the crime. Amazon has thus far refused to provide the device or the recorded “utterances” (as they call them) stored in the cloud. Even ‘smart’ televisions are implicated. The television maker Vizio was recently fined millions of dollars for the inappropriate collection of data. The FTC (Federal Trade Commission) report (February 6, 2017) outlined the case: “According to the complaint, Vizio got personal. The company provided consumers’ IP addresses to data aggregators, who then matched the address with an individual consumer or household. Vizio’s contracts with third parties prohibited the re-identification of consumers and households by name, but allowed a host of other personal details – for example, sex, age, income, marital status, household size, education, and home ownership. And Vizio permitted these companies to track and target its consumers across devices.” Examples emerge almost daily: “German regulators tell parents to destroy the ‘spy’ doll Cayla” was the headline on Deutsche Welle (February 17, 2017). The story opened with this ominous statement: “A German regulator has warned parents of the dangers of a children's toy called My Friend Cayla. The doll, capable of revealing personal data, is a de facto ‘espionage device,’ a federal agency said.”3 An earlier report stated: “According to the Norwegian Consumer Council, which researched the smart toys, ‘anything the child tells the doll is transferred to the US-based company Nuance Communications, who specialize in speech recognition technology.’”4
‘Always on’ operation, ‘stand-by’ mode, ‘smart interactivity,’ voice activation, and ‘rogue’ mishaps have become common with these voice-controlled devices. These seemingly ‘innocent’ recordings and activations, and the increasingly plausible hacks, push the ‘internet of things’ into focus as less than faultless or benign. It is not surprising that US law enforcement, using so-called ‘Stingray’ technology, can intercept, identify, activate and ‘listen’ to mobile-phone microphones. Indeed, the undetectable hacking of computer and phone recording systems (including audio and video) has become common.
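As a rough illustration of why ‘always on’ is not so passive, the sketch below shows the general shape of the wake-word loop such devices run: the microphone is sampled continuously, a short rolling buffer is kept so no command is missed, and everything spoken after a detected cue is shipped off-device for recognition and storage. This is a hypothetical outline under those assumptions; the helpers read_microphone_chunk, detects_wake_word, and send_to_cloud are invented placeholders, not any vendor’s actual API.

```python
import collections
import time

BUFFER_SECONDS = 3     # rolling pre-roll kept even before any wake word is heard
CHUNK_SECONDS = 0.25   # how often the microphone is sampled

def read_microphone_chunk() -> bytes:
    """Hypothetical stand-in for reading ~250 ms of audio from the device microphone."""
    time.sleep(CHUNK_SECONDS)
    return b"\x00" * 4000

def detects_wake_word(audio: bytes) -> bool:
    """Hypothetical on-device keyword spotter ('Alexa', 'OK Google', and so on)."""
    return False

def send_to_cloud(utterance: bytes) -> None:
    """Hypothetical upload: everything after the cue leaves the device."""
    print(f"uploading {len(utterance)} bytes for server-side recognition and storage")

def listen_forever() -> None:
    # The microphone never stops; a rolling buffer ensures the start of a
    # command (or of anything else) is already captured when the cue arrives.
    ring = collections.deque(maxlen=int(BUFFER_SECONDS / CHUNK_SECONDS))
    while True:
        chunk = read_microphone_chunk()
        ring.append(chunk)
        if detects_wake_word(b"".join(ring)):
            # Whatever follows the cue - a child's request, a TV broadcast,
            # an argument - is recorded and sent off-device as an 'utterance'.
            utterance = b"".join(read_microphone_chunk() for _ in range(20))
            send_to_cloud(utterance)
```

Under those assumptions, the rolling buffer and the cloud-stored ‘utterances’ are exactly the artifacts the regulators and police in the cases above were after.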
Monitoring seemingly errant utterances, casual outbursts, impulsive remarks, cleverly engineered broadcasts that include activation ‘cues,’ even murderous arguments might yield more than one bargained for. AI’s friendly new ‘things’ might turn out to be a cross between a Pandora’s box and an errant automaton, between a genial servant providing routine command and control and a malevolent deus ex machina engaged in surveillance and ‘espionage.’ Sound familiar?
1 There’s a parody, using a Google Home Controller, of ELIZA interviewing ELIZA on YouTube, https://www.youtube.com/watch?v=Rk7QQfifov4
2 http://www.npr.org/2016/12/31/507670072/amazon-echo-murder-case-renews-privacy-questions-prompted-by-our-digital-footpri
3 http://www.dw.com/en/german-regulator-tells-parents-to-destroy-spy-doll-cayla/a-37601577
4 http://www.dw.com/en/internet-capable-spy-toys-put-data-protection-and-child-safety-at-risk/a-36674091