The ethics of robotics and using language to develop the ‘Singularity’

posted on Monday 19th June

Categories: Features
Article tags: alexa, artificial intelligence, ethics, Kurzweil, language, London, robotics, robots, sci-fi, science museum, Singularity

A new blockbuster exhibition at London’s Science Museum offers the perfect platform for ethical questions about the perpetual growth of artificial intelligence, the robotic population pushing its way into our human lives, and how teaching language to robots will shape our future. As much as it is a testament to human invention and endeavour, the show also explores the psychology behind our interest in robots and our desire to make them human. As some of the displays show, robots can already look, and even behave, uncannily human-like. However, robots influence our lives in broader, more subtle ways than a humanoid from an old science-fiction movie ever could.

http://www.wired.co.uk/article/robots-exhibition-london-science-museum

In 2013 Spike Jonze released his imaginative film ‘Her,’ in which Joaquin Phoenix’s protagonist falls in love with an artificially intelligent entity. This digital assistant, voiced by Scarlett Johansson, is not dissimilar in concept to Apple’s Siri or Amazon’s Alexa. In the film, the body-less AI seduces him, only to leave him behind as it realises its full potential. Although the film is set in the future, the possibility of computer systems learning to emulate – or go beyond – human experience feels disturbingly close at hand. According to Ray Kurzweil, a director of engineering at Google, Jonze’s film is not so much fantasy as an “underambitious rendering of the brave new world we are about to enter.”

Kurzweil is an eccentric, a hyper-health-conscious ‘techno-optimist’ whose inventions include the first flatbed scanner, the first computer program that could recognise a typeface and the first text-to-speech synthesizer. Among his predictions is that, by 2029, computers will be able to do all the things that humans do. Only better. He also believes that we will soon be able to live forever, using computer technology to cure cancer, and that, using collected ephemera from his dead father’s life, he will one day be able to recreate him.

However, eccentricities aside, Bill Gates calls him “the best person I know at predicting the future of artificial intelligence,” he has received 19 honorary doctorates, and he has been widely recognised as a genius. In 2012 Google hired him, giving him free rein to help develop natural-language understanding using the company’s vast database, built from the minutiae of the daily lives of its more than one billion users.

Kurzweil believes language is the key to everything.

“My project is ultimately understanding what the language means. When you write an article you’re not creating an interesting collection of words. You have something to say and Google is devoted to intelligently organising and processing the world’s information.”

https://www.ted.com/talks/ray_kurzweil_announces_singularity_university?utm_source=tedcomshare&utm_medium=referral&utm_campaign=tedspread

According to Kurzweil, computers are on the threshold of reading and understanding the semantic content of a language, although not yet at human levels. But since they can read a million times more material than any human, they can make up for that with quantity. Through teaching computers to understand language, he believes we are on a sure path towards what he calls the ‘Singularity’: the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.
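To see why growth in this hypothesis is “runaway” rather than merely fast, consider a toy model in which each generation of machine intelligence builds a slightly better successor, and each successor is also slightly better at improving itself. The sketch below is purely illustrative; every number in it is invented, and it is not a prediction of any kind:

```python
# Toy model of recursive self-improvement: each generation of AI
# designs its successor, and the rate of improvement itself grows.
# All numbers are invented purely for illustration.

capability = 1.0      # current AI capability (arbitrary units)
improvement = 0.05    # fractional gain per generation
human_level = 100.0   # hypothetical human-equivalent capability

generation = 0
while capability < human_level:
    capability *= 1 + improvement  # this generation builds a better successor...
    improvement *= 1.02            # ...which is also better at improving itself
    generation += 1

print(f"Human level reached at generation {generation}")
# Because the improvement rate compounds, the later doublings take far
# fewer generations than the early ones -- the "runaway" in runaway growth.
```

Run the model and the gap between doublings keeps shrinking: that accelerating feedback loop, not raw speed, is what makes the Singularity hypothesis so hard to reason about.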

IBM’s computer Watson is the most successful example of natural-language processing so far, winning Jeopardy against two human champions in 2011. This is a much more complicated and open-ended challenge than winning at chess (a previous AI milestone): it requires an understanding of similes, jokes, even riddles. In the show’s rhyme category, for example, Watson was given “a long tiresome speech delivered by a frothy pie topping” and correctly answered “a meringue harangue.” Watson’s knowledge was not hand-coded by engineers. Like the AI in ‘Her,’ Watson gained its knowledge by reading. All of Wikipedia.

Footage of the moment IBM’s Watson defeated human competitors in Jeopardy
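Watson’s real architecture combined hundreds of language-analysis and scoring components, but the core idea the article describes – compensating for shallow understanding with sheer reading volume – can be sketched in a few lines. The mini “corpus” and scoring rule below are invented for illustration only and bear no resemblance to Watson’s actual pipeline:

```python
# A toy retrieval-based question answerer: given a clue, score every
# passage in a small "corpus" by word overlap and return the best match.
# The more text the system has read, the better its odds of a hit.

corpus = {
    "A meringue harangue": "a long tiresome speech delivered by a frothy pie topping",
    "Mount Everest": "the highest mountain above sea level, on the Nepal-Tibet border",
    "The Rosetta Stone": "the inscribed slab that unlocked Egyptian hieroglyphs",
}

def answer(clue: str) -> str:
    clue_words = set(clue.lower().split())

    def overlap(passage: str) -> int:
        # Count how many of the clue's words appear in a stored passage.
        return len(clue_words & set(passage.lower().split()))

    # The candidate whose passage shares the most words with the clue wins.
    return max(corpus, key=lambda cand: overlap(corpus[cand]))

print(answer("a long tiresome speech delivered by a frothy pie topping"))
# -> "A meringue harangue"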

But with robotics advancing faster than most of the general public realise, and with automated machines becoming commonplace, what is required for humans and robots to function harmoniously in this new world? What are the ethical implications? As robots become more intelligent, it will become harder to decide who is responsible if they harm someone. Is the designer to blame, or the user, or the robot itself? And what happens to people when robots can do our jobs better than we can?

In April 2017, The Guardian reported that a leading thinktank, the Institute for Public Policy Research (IPPR), had urged the government to spend billions of pounds helping poorly skilled workers in the less prosperous parts of the UK cope with the robot revolution. About a third of all UK jobs are thought to be at risk from automation within the next two decades, and the IPPR said the scale of the challenge required urgent action.

Image from GeekWire: The Real Threat of Artificial Intelligence

Although these issues don’t share the same threatening face as a superhuman AI gone rogue – personified in sci-fi thrillers such as Ex Machina or Metropolis – they loom with an inevitability that will irrevocably change how society functions, with a significant impact on the lives of individuals.

Professor Alan Winfield, a leader in the field of robot ethics, is researching the problems robots pose for society. He is exploring solutions to the issues robots raise for our evolving job market, our privacy and our safety (e.g. drones and driverless cars), as well as more psychological concerns raised by humanoid machines, which he describes as the ‘brain-body mismatch’ problem.

“Robots should not infringe on human rights, or deceive us.”

In 1940 Isaac Asimov was already thinking about these problems, when he developed his ‘Three Laws of Robotics’:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
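The laws form a strict precedence hierarchy: each applies only insofar as it doesn’t conflict with the laws above it. That ordering can be sketched as a simple filter, shown below. The `Action` fields are hypothetical; in reality, reliably evaluating a predicate like “does this harm a human?” is precisely the unsolved part, as the next paragraph explains:

```python
# A minimal sketch of Asimov's Three Laws as a precedence-ordered filter.
# The Action fields are invented for illustration; real robots cannot yet
# evaluate predicates like harms_human reliably, which is the hard part.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # would this action injure a human, or allow harm?
    is_human_order: bool  # was this action ordered by a human?
    endangers_self: bool  # would this action destroy the robot?

def permitted(action: Action) -> bool:
    # First Law: never harm a human; this overrides everything below.
    if action.harms_human:
        return False
    # Second Law: obey human orders (already filtered by the First Law),
    # even at the cost of self-preservation.
    if action.is_human_order:
        return True
    # Third Law: protect own existence, unless a higher law said otherwise.
    return not action.endangers_self

print(permitted(Action("fetch coffee", False, True, False)))   # True
print(permitted(Action("push bystander", True, True, False)))  # False: First Law wins
print(permitted(Action("walk into fire", False, False, True))) # False: Third Law
```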

“There is nothing more human than the will to survive” – Ex Machina

Although these ideas seem straightforward, for them to be followed robots must at least be able to recognise a human and distinguish between us and other humanoid shapes. Beyond that, the more fully a robot can engage with these laws, the more intelligent, and eventually sentient, it must be, perhaps achieving Kurzweil’s ‘Singularity.’ Asimov’s three laws only address the problem of making robots safe, so even if we could find a way to program robots to follow them, other problems could arise if robots became sentient. If robots can feel pain, should they be granted certain rights? If robots develop emotions, as some experts think they will, should they, like the AI in ‘Her,’ be allowed to marry humans?

In the wake of a technological wave advancing ever forward, ever faster, the ethics that guide us must include the robot dynamic. Techno-optimists like Kurzweil believe that developments in robotics will augment us. Make us better, smarter, fitter. However, individuals and governments need to take stock of how robots will continue to influence the way we live, and put systems in place to preserve that First Law of Robotics and protect the well-being of humans in a future that looks nothing if not sci-fi.