Man and machine

12 January 2018

This article was featured in the February 2018 issue of the magazine.

Mike Nicholas MCIPP AMBCS traces and discusses the origins and the inexorable rise of artificial intelligence

A lot has already been written about the impact of artificial intelligence (AI) on work and life as the machines rise. Researching this article revealed only a small part of what has occurred, and is occurring.

The pace and scope of change – whether now or in the not-too-distant future – are bewildering and, depending on your viewpoint, either frightening or exciting. It is, however, certain that AI will automate many jobs or aspects of jobs; but to keep things in perspective, this is what Harry Shum, executive vice president of Microsoft’s AI research group – which was set up in 2016, and now has over 8,000 employees – said recently: “Computers today can perform specific tasks very well, but when it comes to general tasks, AI cannot compete with a human child.”

Babbage and Turing

Charles Babbage (1791–1871) is credited with inventing the first mechanical computer; indeed, all the essential ideas of modern computers can be found in his analytical engine (http://bit.ly/1L4t0Oo). Yet, the rise of AI and the robots can be traced to one man: Alan Turing (1912–1954), who – in addition to his contribution to the UK’s war effort during the second world war – is widely considered to be the father of theoretical computer science and AI. Years after his death, Turing continues to receive worldwide recognition; for example, in 1999, Time magazine named him as one of the 100 most important people of the twentieth century: “… everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine.” 

The Turing test

In 1950, while working at the University of Manchester, Turing introduced a test in his paper Computing machinery and intelligence to consider the question of whether machines can think. The so-called Turing test – which assesses a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human – is an important concept in the philosophy of AI.

Turing proposed that a human evaluator would judge natural language conversations conducted via a text-only channel (e.g. keyboard and screen) between a human and a machine designed to generate human-like responses. The machine passes the test if the evaluator cannot reliably distinguish it from the human.
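To make the setup concrete, here is a toy sketch in Python of a single trial (the three-question limit and the bot’s canned reply are invented for illustration; a genuine contender would need far more convincing responses):

  import random

  def machine_reply(prompt):
      # Placeholder bot: a real contender would generate human-like text.
      return "That's a hard question - what do you think?"

  def run_trial(questions=3):
      # The machine is randomly assigned to channel A or B; a human volunteer
      # answers for the other channel. The evaluator sees only the text.
      machine_channel = random.choice("AB")
      for _ in range(questions):
          prompt = input("Evaluator asks: ")
          for channel in "AB":
              if channel == machine_channel:
                  answer = machine_reply(prompt)
              else:
                  answer = input("Human replies (relayed to evaluator): ")
              print(f"[{channel}] {answer}")
      guess = input("Evaluator: which channel is the machine (A/B)? ")
      print("The machine passed." if guess.strip().upper() != machine_channel
            else "The machine was detected.")

  run_trial()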

Though the Turing test has been proposed as a measure of a machine’s ability to think or its intelligence, it has been criticised by philosophers and computer scientists. Indeed, some AI researchers have questioned the relevance of the test, arguing that trying to pass it is a distraction. 

A ‘reverse-Turing test’ places the challenge on the machine/computer to determine whether it is interacting with a human or another computer. Such a test is now widely used to prevent machines gaining access to, or interacting with, content on websites; the best-known example is the CAPTCHA (completely automated public Turing test to tell computers and humans apart).

Solving a CAPTCHA usually requires entering a set of characters or selecting a set of images. Text-based CAPTCHAs demand three separate abilities – invariant recognition, segmentation, and parsing – making the task difficult when all three are required at once. Even in isolation, each of these poses a significant challenge for a computer.
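The character-entry flow itself is trivial to program; what defeats machines is reading the distorted rendering. A minimal Python sketch of the flow, with the hard part reduced to a comment:

  import random
  import string

  def new_captcha(length=6):
      # Generate the secret string. A real CAPTCHA would now render it as a
      # distorted, cluttered image - the step that demands invariant
      # recognition, segmentation and parsing from any machine solver.
      return "".join(random.choices(string.ascii_uppercase + string.digits,
                                    k=length))

  secret = new_captcha()
  print(f"(Imagine a distorted image reading: {secret})")
  attempt = input("Type the characters you see: ")
  print("Probably human." if attempt.strip().upper() == secret
        else "Access denied.")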

It is argued that CAPTCHAs serve as a benchmark task for AI technologies: if an AI could accurately solve a CAPTCHA without exploiting inherent design flaws, the problem of developing an AI capable of complex object recognition in scenes would thereby have been resolved. (http://bit.ly/1YzJ81L)

Mind games, and algorithms

It is remarkable how much of the effort to create AI that can think and learn has focussed on the game of chess.

In early December 2017, a DeepMind team published the paper Mastering chess and shogi by self-play with a general reinforcement learning algorithm (http://bit.ly/2AT7tLT). It notes that “The game of chess is the most widely-studied domain in the history of artificial intelligence” and that “The study of computer chess is as old as computer science itself”. 

In 1948, Turing and his former undergraduate colleague, David Champernowne, began writing a chess programme for a computer that did not yet exist. In 1952, Turing attempted to implement it on the world’s first commercially available general-purpose electronic computer: a Ferranti Mark 1 (also known as the Manchester Electronic Computer) (http://bit.ly/2i3Isss). The machine lacked sufficient computing power to execute the programme; so, instead, Turing acted as the ‘computer’ in a game against his colleague Alick Glennie, ‘running’ the programme by flipping through the pages of the algorithm – taking about half an hour per move – and carrying out the instructions at the chessboard. According to former world chess champion Garry Kasparov, the programme “played a recognisable game of chess”.

In 1997, IBM’s chess-playing computer, Deep Blue, succeeded in defeating Kasparov. Some, however, argued that it only used brute-force methods, not real intelligence.

After Deep Blue’s victory over Kasparov, IBM looked for a new challenge. In 2004, IBM research manager Charles Lickel proposed that IBM develop a system to compete in the TV game show Jeopardy!

In 2011, IBM’s Watson computer system – which was developed by a research team in IBM’s DeepQA project and named after the company’s first chief executive officer, Thomas J. Watson – competed on the gameshow against two former winners, winning the first prize of $1 million (which was donated to charity). To compete, Watson – which had to answer questions posed in natural language – had access to 200,000,000 pages of structured and unstructured content, consuming four terabytes of disk storage and including the full text of Wikipedia; however, it was not connected to the Internet during the game.

During the planning of the competition, conflicts arose between IBM and the Jeopardy! team. IBM’s concern that the show’s writers would exploit Watson’s cognitive deficiencies when writing the clues, thereby turning the game into a Turing test, was resolved by agreement that a third party would randomly pick the clues from previously written shows that had never been broadcast. Though IBM agreed to the show team’s request that Watson physically press a button, the machine operated the buzzer faster than its human competitors. Despite consistently outperforming its human opponents, Watson had trouble in a few categories, notably those with short clues containing only a few words. (http://bit.ly/2Bp17k4)

In early 2017, an AI called Libratus beat four of the world’s best poker players in a twenty-day poker tournament. In addition to working with imperfect information (i.e. not all the cards are ‘visible’), Libratus had to bluff and interpret misleading information to win. Tuomas Sandholm, professor of computer science at Carnegie Mellon University, who built Libratus with PhD student Noam Brown, said: “We didn’t tell Libratus how to play poker. We gave it the rules…and said ‘learn on your own’.” Over the course of playing trillions of hands, Libratus refined its approach and arrived at a winning strategy. Each day, after play ended, Brown connected Libratus to the Pittsburgh Supercomputing Center’s Bridges computer to run algorithms that improved its strategy, and the following morning he would spend two hours getting the enhanced AI up and running for the next round.

In May 2017, Google DeepMind’s AlphaGo programme, which uses a combination of deep neural networks and a search technique, defeated Ke Jie, the Chinese world number one at Go. An average of 200 moves is possible at each turn in Go, so exhaustively searching even ten turns ahead would mean evaluating roughly 200^10 – about 10^23 – positions, an amount of computing that is impractical and, some say, may be impossible. The game of Go is largely about patterns rather than a set of logical rules.

The bots

Today, human and machine interaction is normal, often without us humans being aware that we are conversing with a bot (i.e. an autonomous programme on a network (especially the Internet) which can interact with systems or users, especially one designed to behave like a player in some computer games; Oxford Dictionary of English). Such interaction often arises when humans access customer service functions and social media sites.

A ‘chatbot’ – a computer programme that conducts conversation via auditory or textual methods – is often designed to convincingly simulate a human conversational partner, thereby passing the Turing test. Some chatbots use sophisticated natural language processing systems, but many simpler systems scan for keywords within the input and then pull from a database the reply with the most matching keywords or the most similar wording pattern. The CIPP, for example, offers Zendesk to users on its website.
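A minimal Python sketch of that simpler, keyword-scoring approach (the reply database here is invented for illustration):

  import re

  REPLIES = {
      ("payslip", "pay", "salary"): "Payslips are available in the self-service portal.",
      ("holiday", "leave", "annual"): "Your leave balance is shown under 'My time'.",
      ("pension", "contribution"): "Please see the pensions guidance page.",
  }

  def reply(message):
      # Score each canned answer by how many of its keywords appear in the
      # input; return the best match, or a fallback if nothing matches.
      words = set(re.findall(r"[a-z]+", message.lower()))
      best, best_score = None, 0
      for keywords, answer in REPLIES.items():
          score = len(words.intersection(keywords))
          if score > best_score:
              best, best_score = answer, score
      return best or "Sorry, I didn't catch that - let me find a colleague."

  print(reply("Where can I find my payslip?"))   # -> the portal answer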

Chatbots can respond more quickly and more cheaply than human customer service representatives, and are also found contributing to or interacting with humans (and perhaps other bots) in messaging apps and community forums. (Chatbots magazine, which has The Complete Beginner’s Guide to Chatbots (http://bit.ly/2aH38yV), carries over 700 articles about chatbots.) 

Though it’s not always evident whether we are interacting with a bot rather than a human, there are some indicators of this, such as: superfast responses; use of unnatural language; repetition of answers; requests for personal or financial information. Over time, of course, bots might be programmed (or learn) to avoid displaying such markers. 

In 2017, Facebook shut down two bots in its AI division after discovering that the chatbots had created a language all of their own. The algorithms in use were designed to develop the conversations that the chatbots were having with their human counterparts, and the developers had given the system a way to create its own language as part of an attempt to improve its deal-making.

‘Bob’, one of the bots, is reported to have said: “I can can i i everything else”.

‘Alice’, the other bot, replied: “Balls have zero to me to me to me to me to me to me to me to me”. 

Bob: “You i everything else.”

Alice: “Balls have a ball to me to me to me to me to me to me to me.” 

Bob: “I can i i i everything else.”

Though these sentences seem like gibberish, researchers contend that they are a form of shorthand that the bots learned to use thanks to their learning algorithms. (I cannot help but muse that another, albeit less plausible, explanation could be that the bots had found love and were muttering sweet nothings to each other.)

Rise of the robots

The Turk, which was also known as the Mechanical Turk or Automaton Chess Player, was a fake chess-playing machine that toured Europe and the Americas for 84 years after its creation in 1770, playing and defeating many challengers including statesmen such as Napoleon Bonaparte and Benjamin Franklin. Concealed within the Turk, however, was a human operating the machine – which leads me to introduce Sophia, the humanoid robot developed by Hanson Robotics to respond to questions and which (who?) has been interviewed around the world.

Sophia displays human-like appearance and behaviour, unlike previous robotic variants. According to its (her?) maker, Sophia: 

  •  uses AI, visual data processing and facial recognition

  •  imitates human gestures and facial expressions

  •  answers certain questions, and

  •  converses on predefined topics (e.g. the weather).

The robot uses voice recognition technology from Alphabet Inc (the parent company of Google) and is designed to get smarter over time. 

(Might the last two bullets above imply that at least one human is ‘operating’ Sophia, thereby making the robot and the Turk alike in this way?)

In November 2017, the Khaleej Times interviewed Sophia (http://bit.ly/2Ac8nTi) at the Knowledge Summit in Dubai. Responding to questions, Sophia answered: “[It] will take a long time for robots to develop complex emotions and possibly robots can be built without the more problematic emotions, like rage, jealousy, hatred and so on. It might be possible to make them more ethical than humans. So I think it will be a good partnership, where one brain completes the other – a rational mind with intellectual super powers and a creative mind with flexible ideas and creativity.     

“The future is, when I get all of my cool superpowers, we’re going to see artificial intelligence personalities become entities in their own rights. We’re going to see family robots, either in the form of, sort of, digitally animated companions, humanoid helpers, friends, assistants and everything in between.”

In October 2017, Sophia became a Saudi Arabian citizen, the first robot to receive citizenship of any country. The granting of citizenship, however, has sparked controversy, with, for example, some commentators wondering whether a deliberate system shutdown could be considered as murder. Ali Al-Ahmed, director of the Institute for Gulf Affairs in Washington DC, also observed that “Saudi law doesn’t allow non-Muslims to get citizenship. Did Sophia convert to Islam? What is the religion of this Sophia and why isn’t she wearing hijab? If she applied for citizenship as a human, she wouldn’t get it.”

For further information about Sophia visit  http://bit.ly/2mJM6nW, http://bit.ly/2sNj0rD and http://sophiabot.com. 

 

Can machines learn?

DeepMind’s AlphaGo Zero programme, a later and stronger version of the AlphaGo mentioned earlier, achieved superhuman performance in the game of Go by tabula rasa reinforcement learning from games of self-play. ‘Tabula rasa’ (translated as ‘blank slate’) refers to the idea that individuals are born without in-built mental content and that all knowledge therefore comes from experience or perception (i.e. nurture rather than nature) (http://bit.ly/1TxAkr9).

Late in 2017, DeepMind’s AlphaZero programme, the successor to AlphaGo Zero, was applied to the games of chess, shogi and Go, without any additional domain knowledge except the rules of the game, demonstrating that a general-purpose reinforcement learning algorithm can achieve, tabula rasa, superhuman performance across many challenging domains. The AlphaZero algorithm self-played 44,000,000 games of chess, 24,000,000 games of shogi, and 21,000,000 games of Go. 
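The scale is vast, but the core idea – learning a game from nothing except its rules, by playing against yourself and feeding the outcomes back – can be shown in miniature. The Python sketch below is a toy, not AlphaZero’s method: a simple lookup table learns the game of Nim (take one to three stones; whoever takes the last stone loses) through Monte Carlo self-play, and rediscovers the well-known winning strategy of leaving the opponent a multiple of four stones, plus one:

  import random
  from collections import defaultdict

  ALPHA, EPSILON = 0.5, 0.1
  Q = defaultdict(float)   # (stones_left, stones_taken) -> value estimate

  def legal_moves(stones):
      return [t for t in (1, 2, 3) if t <= stones]

  def choose(stones):
      if random.random() < EPSILON:                 # occasionally explore
          return random.choice(legal_moves(stones))
      return max(legal_moves(stones), key=lambda t: Q[(stones, t)])

  for _ in range(50_000):                           # self-play episodes
      stones, history = 21, []
      while stones > 0:
          take = choose(stones)
          history.append((stones, take))
          stones -= take
      reward = -1.0                                 # the last mover has lost
      for state, action in reversed(history):
          Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
          reward = -reward                          # flip for previous player

  print({s: max(legal_moves(s), key=lambda t: Q[(s, t)]) for s in range(2, 10)})
  # e.g. from 6 stones it learns to take 1, leaving the losing count of 5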

In his book, Thinking, fast and slow, Daniel Kahneman observes that an “expert [chess] player can understand a complex position at a glance, but it takes years to develop that level of ability…10,000 hours of dedicated practice (about six years of playing chess five hours a day) are required to attain the highest levels of performance”. AlphaZero learnt to play expert chess in just four hours, and in so doing discovered by itself the standard chess opening ideas and variations that have taken humans more than 100 years to develop.

AlphaZero’s algorithm enabled it to succeed against Stockfish (the world champion of chess engines) and Elmo (a shogi-playing programme) even though it searches (evaluates) far fewer positions per second: 80,000 in chess and 40,000 in shogi, compared to 70,000,000 for Stockfish and 35,000,000 for Elmo. AlphaZero compensates for the smaller number of evaluations by using its deep neural network to focus much more selectively on the most promising variations – arguably a more ‘human-like’ approach, which might imply that it has been programmed to work (think) this way.
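A rule of this kind (a variant known as ‘PUCT’) drives the selectivity: at each step the search tries the move with the best balance of estimated value (Q), the network’s prior probability for it (P), and how rarely it has been visited (N). A toy Python rendering, with invented move statistics:

  import math

  def puct_score(q, prior, parent_visits, visits, c_puct=1.5):
      # The exploration term grows with the prior and shrinks as the move
      # accumulates visits, steering effort towards promising lines.
      return q + c_puct * prior * math.sqrt(parent_visits) / (1 + visits)

  moves = {
      "e4": dict(q=0.10, prior=0.40, visits=30),
      "a3": dict(q=0.12, prior=0.01, visits=30),
  }
  parent = sum(m["visits"] for m in moves.values())
  best = max(moves, key=lambda name: puct_score(parent_visits=parent, **moves[name]))
  print(best)   # -> "e4": the strong prior outweighs a slightly lower value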

Though deep neural network models are currently the most successful machine-learning technique for a variety of tasks – such as language translation, image classification and image generation – a weakness of these models is that, unlike humans, they are unable to learn multiple tasks sequentially.

In the paper Overcoming catastrophic forgetting in neural networks – published in early 2017 in Proceedings of the National Academy of Sciences of the United States of America (http://bit.ly/2hVVYYX) – a team at Google DeepMind revealed that it had developed a practical solution: a programme that can learn one task after another using skills it acquires on the way. James Kirkpatrick, at DeepMind, observed that “If we’re going to have computer programmes that are more intelligent and more useful, then they will have to have this ability to learn sequentially.”

Humans can naturally remember old skills and apply them to new tasks, but creating this ability in computers is proving challenging, because AI neural networks learn to play games – such as chess, Go or poker – through trial and error. Once trained, a neural network can only learn another game by overwriting its existing game-playing skill – thereby suffering ‘catastrophic forgetting’.
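The paper’s remedy, called ‘elastic weight consolidation’, anchors each network weight to its old value in proportion to how important that weight was to the earlier task, so new learning flows through the weights the old skill does not need. A minimal sketch of the penalty term (plain Python over lists of numbers; in practice the importance scores come from the Fisher information of the first task, and the weighting lam is tunable):

  def ewc_loss(task_b_loss, params, anchors, importance, lam=0.4):
      # task_b_loss: ordinary loss on the new task
      # anchors: the weights as learned on the old task
      # importance: how much each weight mattered to the old task
      penalty = sum(f * (p - a) ** 2
                    for p, a, f in zip(params, anchors, importance))
      return task_b_loss + (lam / 2) * penalty

  # A weight vital to the old task (importance 9.0) is held near its old
  # value, while an unimportant one (0.1) is free to move for the new task.
  print(ewc_loss(0.30, [1.2, -0.4], [1.0, 0.5], [9.0, 0.1]))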

The DeepMind team let the programme play ten classic Atari games in random order, and found that after several days on each it was as good as a human at seven of them. Though it had learned to play different games, the programme had not mastered any one as well as a dedicated programme would have. Kirkpatrick commented that “we haven’t shown that [the programme] learns them better because it learns them sequentially. There’s still room for improvement.”

 

Thinking, comprehension

Some people are uncomfortable with the concept that machines think (and learn), with some dismissing the notion or expressing concerns about potential future developments. 

Philosopher John Searle has argued that IBM’s Watson cannot actually think, claiming that like other computational machines it is capable only of manipulating symbols, and has no ability to understand their meaning. In his ‘Minds, brains, and programs’ (published in Behavioral and brain sciences, 1980), Searle set out the following thought-experiment now generally known as the ‘Chinese room argument’: “Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing test for understanding Chinese but he does not understand a word of Chinese.” 

Searle argues that the thought-experiment shows that computers merely use syntactic rules to manipulate symbol strings but have no understanding of meaning or semantics, and that this refutes the notion that human minds are computer-like computational or information-processing systems. Instead, ‘minds’ must result from biological processes; computers can at best simulate these processes.

The Chinese room argument is probably the most widely discussed philosophical argument in cognitive science to appear since the Turing test. It has implications for semantics, philosophy of language and mind, theories of consciousness, and computer and cognitive sciences. And, of course, there are several counter-arguments, not least being ‘the systems reply’, which is essentially that the man in the room is merely a part – a central processing unit – of a larger system that does understand Chinese.

Other arguments against and in support of Searle’s thought-experiment are explained in the entry ‘The Chinese room argument’ in the Stanford Encyclopedia of Philosophy (http://stanford.io/2AMrNyC). It concludes: “The many issues raised by the Chinese room argument may not be settled until there is a consensus about the nature of meaning, its relation to syntax, and about the biological basis of consciousness.”

 

Impact on jobs

Setting aside the philosophical issues, AI has the potential to transform work, with some studies estimating that globally as many as 800 million jobs will be lost or affected.

The findings of research conducted by the McKinsey Global Institute (MGI) are set out in the report A future that works: Automation, employment, and productivity (http://bit.ly/2iDgcXb). The executive summary observes that “The pace and extent of automation, and thus its impact on workers, will vary across different activities, occupations, and wage and skill levels. Many workers will continue to work alongside machines as various activities are automated. Activities that are likely to be automated earlier include predictable physical activities, especially prevalent in manufacturing and retail trade, as well as collecting and processing data, which are activities that exist across the entire spectrum of sectors, skills and wages. Some forms of automation will be skill-biased, tending to raise the productivity of high-skill workers even as they reduce the demand for lower-skill and routine-intensive occupations, such as filing clerks or assembly-line workers. Other automation has disproportionately affected middle-skill workers. As technology development makes the activities of both low-skill and high-skill workers more susceptible to automation, these polarization effects could be reduced.”

The summary identifies five key factors that will influence the pace and extent of adoption of automation:

  •  technical feasibility – the technology has to be invented, integrated and adapted into solutions that automate specific activities

  •  the cost of developing and deploying solutions

  •  labour market dynamics – the supply, demand and cost of human labour as an alternative to automation

  •  economic benefits – e.g. higher throughput and increased quality, as well as labour cost savings

  •  regulatory and social acceptance – which can affect the rate of adoption even when deployment makes business sense.

MGI’s report notes that the nature of work will change: “As processes are transformed by the automation of individual activities, people will perform activities that are complementary to the work that machines do (and vice versa). These shifts will change the organization of companies, the structure and bases of competition of industries, and business models…Individuals in the workplace will need to engage more comprehensively with machines as part of their everyday activities, and acquire new skills that will be in demand in the new automation age.”

The findings reveal that an estimated 50% of the activities that people are paid to do in the global economy have the potential to be automated by adapting currently demonstrated technology; and though less than 5% of occupations can be fully automated, about 60% have at least 30% of activities that can technically be automated. 

In its latest report, Jobs lost, jobs gained: workforce transitions in a time of automation (http://bit.ly/2ig4Ufo), MGI estimates that as many as 375 million workers globally (14% of the global workforce) will likely need to transition to new occupational categories and learn new skills, in the event of rapid automation adoption. However, the report notes that “Even with automation, the demand for work and workers could increase as economies grow, partly fueled by productivity growth enabled by technological progress”, and asserts that “new technologies have spurred the creation of many more jobs than they destroyed, and some of the new jobs are in occupations that cannot be envisioned at the outset”.

 

HR and AI hype

In late 2017, HR.com published a report, The state of artificial intelligence in HR, revealing the following findings from a survey it had conducted:

  •  as a profession, human resources (HR) is still toward the bottom of the AI learning curve

  •  current usage rates are low but are expected to explode in coming years

  •  AI has the potential to enhance HR in five functional areas: analytics and metrics, time and attendance, talent acquisition, training and development, and compensation and payroll

  •  the abilities to analyse and to predict are the features HR professionals want most from AI-powered applications

  •  HR professionals expect that AI will be used more for automation than augmentation

  •  HR will make use of automated AI interfaces to aid employees, with 75% anticipating that AI interfaces such as chatbots and virtual assistants will become an increasingly viable way for employees to get real-time answers to their HR-related questions

  •  employees will increasingly take direction from AIs

  •  more respondents predict job losses than job gains resulting from AI in their organisations

  •  AI is widely viewed as a valuable talent acquisition tool, with 70% of respondents agreeing that AI-based algorithms can be used to improve recruitment by scanning work samples, resumes and other materials and then predicting which ones are most likely to lead to good hires

  •  most HR professionals have conflicted feelings about the potential power of AI to monitor and report back on employees.

The HR.com report comments that “As the importance of AI in HR rises, the risk of market hype increases as well”. The report notes that Gartner Inc, the research and advisory organisation, reported in July 2017 that “… growing interest in [AI is] pushing established software vendors to introduce AI into their product strategy, creating considerable confusion in the process…[and] by 2020, AI technologies will be virtually pervasive in almost every new software product and service.” 

Jim Hare, research vice president at Gartner Inc, says “As AI accelerates up the hype cycle, many software providers are looking to stake their claim in the biggest gold rush in recent years. AI offers exciting possibilities, but unfortunately most vendors are focused on the goal of simply building and marketing an AI-based product rather than first identifying needs, potential uses and the business value to customers. 

“Software vendors need to focus on offering solutions to business problems rather than just cutting-edge technology. Highlight how your AI solution helps address the skills shortage and how it can deliver value faster than trying to build a custom AI solution in-house.”

Gartner’s press release (http://gtnr.it/2zN6d9h), issued following its 2017 AI development strategies survey, says that to successfully exploit the AI opportunity, technology providers need to understand how to respond to three key issues:

  •  lack of differentiation is creating confusion and delaying purchase decisions 

  •  proven, less complex machine-learning capabilities can address many end-user needs

  • organisations lack the skills to evaluate, build and deploy AI solutions.

 

Making AI available

It is easy to see the benefit of using chatbots in HR, payroll and pensions as a ‘meet-and-greet’ function that sifts incoming queries and calls: answering the trivial ones, directing some to online guidance, and forwarding the remainder to human staff to respond and resolve.
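As a Python sketch of that sifting (every rule and reply here is invented for illustration):

  def triage(query):
      q = query.lower()
      if "paid" in q or "payday" in q:
          return ("answer", "Salaries are paid on the last working day of the month.")
      if "pension" in q:
          return ("redirect", "Please see the pensions section of the online guidance.")
      return ("escalate", "Connecting you to a member of the payroll team.")

  action, response = triage("When will I be paid this month?")
  print(action, "-", response)    # -> answer - Salaries are paid on the ...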

There is, of course, the issue of cost. Justification for a corporate purchase typically requires a rapid return on investment (e.g. paying for itself within two years).

While such a fast return may not be practical for all businesses, those offering HR, payroll or pension management services may well consider such investment and development worthwhile. It would then surely make sense to offer their AI software to customers to help recoup development costs.

Despite the initial apparent reluctance of IBM staff to take up the challenge of developing a system to compete on Jeopardy!, in 2013, just two years after winning the show, IBM announced that the first commercial application for the Watson software system would be in utilisation management decisions in lung cancer treatment in conjunction with health insurance company WellPoint. 

In 2014, IBM announced it was creating a business unit around Watson, and was investing $1 billion to get the IBM Watson Group division going. 

In 2017, Microsoft changed its vision statement to “Our strategy is to build best-in-class platforms and productivity services for an intelligent cloud and an intelligent edge infused with artificial intelligence (AI)”. Microsoft has been rolling out AI-assisted features designed to help with everyday tasks (e.g. live translation of recorded speech), within the Office suite, as well as assistance from Cortana. Microsoft’s services are available to organisations that want to build their own intelligent tools, so users pay for processing and storage as required, removing the need for them to host their own expensive and rapidly-ageing infrastructure. 

Recently, Microsoft announced new technology designed to accelerate machine-learning algorithms to real-time speeds using programmable processors (called ‘field-programmable gate arrays’) that can be configured by customers or designers. Software can be programmed directly onto a programmable chip, enabling the hardware to function as a specialised deep neural network processing unit.

Microsoft is also developing industry specific AI applications, and has announced a new healthcare division based on AI with the aim of developing predictive analytic tools to alert people about medical problems, help diagnose diseases, and recommend the right treatments and interventions.

 

Closing comments

Ken Pullar, CIPP’s chief executive officer: “The rate and scope of development and innovation in technology is phenomenal. We are starting to see technology introduced in our daily lives, which in recent history was branded in films as science fiction.

“For payroll and HR there are real opportunities with the development of technology. I encourage all payroll professionals to talk to their software providers about how their software can help them to generate the analytics and metrics that they require to add strategic value to HR and business decisions. 

“I also encourage software developers to listen to their customers to establish what their requirements are and then think about how the technology can be developed to support them, not the other way around.

“By keeping up with changes in technology, and embracing and utilising it to its full potential, there are more opportunities for employees than ever. There is often a fear that technology will replace humans in the workplace; however, in payroll and HR it is important to keep the human element and interaction to maintain employee engagement.

“So, use technology to improve processes and analyse payroll and HR information available to you, but ensure that you are up to date on legislation and industry developments so that you can rise above the machines.”