Artificial Intelligence 2016

Davos 2016 - The State of Artificial Intelligence.[1]

Artificial intelligence (AI) is intelligence displayed by machines through algorithms, data centres, and computer programming. Specifically, an AI is a flexible rational agent that makes its own decisions by taking in its context and adjusting accordingly to maximize its chance of achieving a goal. Rational intelligent agents may operate in a range of environments, from static to dynamic, with discrete or continuously defined percepts and actions. Depending on the complexity of the situation, the agent operates by itself in an environment or alongside other agents that could affect its performance.

AI research is split into subfields based on the form of intelligence displayed. These include complex decision making, machine learning, natural language processing (NLP), perception, object manipulation, creativity, and social intelligence. AI exhibiting one or more of these forms is typically called narrow AI. The long-term goal of AI research is general intelligence in the form of artificial consciousness. With further development, AI will be able to mimic "cognitive" functions that we typically associate with being human, including physical movement, creating music, and perceiving emotions.



AI Research

Knowledge Representation: IBM Watson Expert System competes on Jeopardy[1]
  • Deduction & Problem Solving: Solving increasingly complex problems with ever-growing amounts of data and computing power. This is the perpetual challenge faced by AI, and researchers are constantly trying to develop more efficient methods of decision-making.
  • Knowledge representation: Representing information in a form that a computer system can utilize to solve complex tasks. IBM Watson is an example of an expert system built to represent and process knowledge of both structured and unstructured content.
  • Planning: Choosing a sequence of actions that will transform the state of the AI to satisfy its goal. Swarm intelligence, as found in AI, describes decentralized systems working collectively to self-organize toward the achievement of a goal. Individually, the agents interact based on simple rules; as a group, they behave as an intelligent whole.
  • Machine Learning: Utilizing pattern recognition to operate without explicit programming. Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data. It draws on algorithms, epistemology, computer science, biology, and probability: the application of statistics and mathematics to teach computers to learn, think, and predict (a minimal sketch follows this list).
  • Natural Language Processing: Deriving meaning from human language input. Popular examples of digital personal assistants that utilize NLP include Siri and Cortana.
  • Perception: Acquiring, processing, analyzing, and understanding sensory input, such as digital images, for use in decision-making. Machine perception allows the computer to use this sensory input to gather information with greater accuracy; subfields include computer vision, machine hearing, and machine touch.
  • Motion and Manipulation: Handling tasks such as object manipulation, localization, mapping and motion planning. This field of artificial intelligence overlaps closely with robotics.
  • Creativity: Modeling, simulating, or replicating creativity with algorithmic approaches to enhance human creativity. Recent examples include Google Magenta [2], Sunspring [3] and neural-network-based art [4].
  • Social (Affective) Intelligence: Interpreting a person's emotional state and adapting behaviour accordingly to give an appropriate response. It is an interdisciplinary field spanning computer science, psychology, and cognitive science. Kismet is an early example of affective computing [5].
  • General Intelligence: Also known as strong AI, combining all the skills above to match or exceed human abilities in nearly all aspects. It is an emerging field aiming to build thinking machines: general-purpose systems with intelligence comparable to or above the normal human mind.
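To make the machine learning idea above concrete, the following is a minimal sketch, in Python, of a program that learns a decision rule from labelled examples rather than being explicitly programmed with one. The training data, learning rate, and the choice of a simple perceptron are illustrative assumptions, not drawn from any particular system.

```python
# A minimal sketch of machine learning as pattern recognition: a perceptron
# that learns to separate two classes of points from labelled examples
# instead of being explicitly programmed with a rule. The data and learning
# rate are illustrative assumptions, not taken from any real system.

# Toy training data: (feature_1, feature_2) -> label (0 or 1)
examples = [
    ((2.0, 1.0), 1), ((1.5, 2.0), 1), ((2.5, 2.5), 1),
    ((0.2, 0.5), 0), ((0.5, 0.1), 0), ((0.3, 0.4), 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(point):
    """Return 1 if the weighted sum of features crosses the threshold."""
    activation = sum(w * x for w, x in zip(weights, point)) + bias
    return 1 if activation > 0 else 0

# Repeatedly adjust the weights whenever the current rule misclassifies an
# example; the "rule" is learned from data, never written by hand.
for _ in range(20):
    for point, label in examples:
        error = label - predict(point)
        weights = [w + learning_rate * error * x for w, x in zip(weights, point)]
        bias += learning_rate * error

print(predict((2.2, 1.8)))  # expected: 1
print(predict((0.1, 0.3)))  # expected: 0
```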


History

Chatterbot Eliza (Originally 1964)
Deep Blue vs. World Chess Champion Garry Kasparov (1996)
Kismet (2000)
AlphaGo vs. South Korean Go Grandmaster Lee Se-dol (2016)

The possibility of creating artificial beings with general intelligence has long been explored with philosophical arguments in philosophy, fiction, and myth. Artificial intelligence has its roots in the programmable digital electronic computer, a machine based on mechanical reasoning. Today, AI has been instrumental in helping the technology industry surpass obstacles in computer science.


Timeline of AI

1822-1859: Charles Babbage and Ada Lovelace seek to automate the process of tabulating functions with programmable mechanical calculating machines, considered by some to be the world's first computer. [6]
1940-1950: Alan Turing's theory of computation suggests that by shuffling symbols (such as binary) a machine could simulate mathematical deduction. This idea, that digital computers can perform formal reasoning, becomes known as the Church-Turing thesis.[7] Turing would go on to develop his famous "Turing Test": an experiment requiring that a human being be unable to distinguish a machine from another human being based on questions and answers. The test was designed to decide whether the behaviour exhibited by a machine is equivalent to, or indistinguishable from, that of a human. It took the form of a game called "The Imitation Game", in which an interrogator tries to guess the gender of the other two players. For a machine to pass, the interrogator had to mistake the machine for a human more than 30% of the time during a series of five-minute written conversations.
1950-1960: The field of AI research was founded at a conference held at Dartmouth College in the summer of 1956. With discoveries in neurology and information theory, researchers were optimistic about the future of AI, writing programs that mimicked psychotherapists, spoke English, played checkers, and moved blocks on command.[8]
1960-1970: Researchers in machine learning develop numerous programs that perform symbolic integration, natural language processing, and logical representation of knowledge. ELIZA, an interactive program that carries on a dialogue in English, "simulates" a psychotherapist's conversation with users through pattern matching and NLP. [9] Shakey the Robot demonstrates the combination of locomotion, perception, and problem solving[10].
1980-1987: Research slowed in the following years with a downturn in funding and insufficient technology. Under pressure to fund more productive projects, governments cut funding for exploratory research in artificial intelligence. The Lisp machine was developed specifically to run AI programs such as expert systems; however, it did not achieve commercial success. [11]
1997: Deep Blue. AI's big break occurred with IBM's Deep Blue's defeat of World Chess Champion Garry Kasparov[12]. This spurred worldwide interest in the possibilities of artificial intelligence, and AI began to be used for medical diagnosis, logistics, and data analysis.
2000: Kismet. Cynthia Breazeal of MIT publishes her dissertation on Kismet[13], a sociable machine that "expresses" emotions. Going beyond practical usage, a truly intelligent computer needs emotional understanding. Through research into the patterns exhibited by humans, computers learn how to recognize our emotions.
2005: DARPA Grand Challenge. In 2005, Stanford's Stanley was the first autonomous car to complete the DARPA Grand Challenge[14]: a 10-hour obstacle course through the Mojave Desert. The competition fostered research into autonomous ground vehicles for the military, emergency-maintenance robots, and object manipulation software.
2011: IBM Watson. IBM's Watson, an expert system, defeated both top Jeopardy winners to claim a $1 million prize. The data-intensive environment of the game tested Watson's ability to process terabytes of data and answer accurately within 3 seconds[15]. Watson ran custom IBM software and an unstructured information management architecture system to start the machine-learning process, so that Watson could sort through the information once questions were asked while playing Jeopardy. During the show, Watson had no internet connection and answered the vast majority of questions correctly. Today, IBM has found more practical business applications for Watson, such as medicine, consulting, and analytics.
2012: Google Brain. By 2012, Google's artificial brain project [16] taught itself to recognize human faces, human body parts and, most importantly, cat videos. Google Brain has since flourished, with software engineers applying its machine-learning algorithms to a growing array of problems, from advertising to Google Maps.
2014: Chatbot. In 2014, the chatbot Eugene Goostman "passed" the Turing Test by convincing 33% of judges that it was a 13-year-old Ukrainian boy[17]. These characteristics were chosen specifically to induce forgiveness amongst judges for its misdirection and its lack of general knowledge and wit.
2016: AlphaGo. Google's AlphaGo defeated Go grandmaster Lee Sedol in a best-of-five series, winning 4-1 [18]. This was an unprecedented victory that researchers thought would not occur for another 10 years. Not only was it a tremendous human achievement, it demonstrated AI's potential to learn problems and solve them in ways no human would ever consider.


Impacts

Consumers

Convenience

It has never been easier or quicker to acquire information than in today's era of intelligent technology. Digital assistants like Apple's Siri, Google Now, and Cortana are some of the common AIs that can be accessed from a smartphone or smartwatch. They help find useful information using voice and pattern recognition. Now, phones and watches don't simply tell time; they plan out our schedules, know our preferences, and predict possible outcomes[19].

Productivity

With the aid of AI, people are becoming more and more productive. Aside from applications that can recognize patterns, there are ones that learn the more they are used. Google's self-driving car project includes an algorithm that lets self-driving cars learn to drive the same way humans do: through experience. The basis is that for simple tasks that are calculative and predictable (like driving and playing chess), AI can take over, leaving people with more time to focus on tasks that are more creative and cognitively challenging[20].

Improvements in Quality of Life

AI enables medical miracles to happen. The blind are given the ability to see and the deaf the ability to hear. For instance, DuLight uses camera technology and AI-based image recognition to give visually impaired users an idea of what is in front of them. The technology is highly reliant on deep learning, running a massive neural network that imitates the human brain. Using deep learning this way is still in an early phase of development, but elsewhere, at Facebook and Skype, deep learning is already used to identify faces in photo posts and translate conversations from one language to another. AI is therefore not only improving quality of life for people with disabilities, but changing the way we all interact with the world[21].

Businesses

Process

AI can transform business processes such as handling customer support. AI costs less and can learn to serve customers better. However, humans have better semantic and emotional perception, which makes them better at listening to customers and building relationships. Knowing this, offerings like Facebook M combine the two approaches to bring personal and targeted marketing to their users.

Analytics

Capable of taking in extremely large amounts of data, AI, and more specifically machine learning, is capable of seeing patterns that humans cannot easily detect. This is used by banks and credit card companies for fraud detection and purchasing predictions [22]. AI systems use large samples of fraudulent and non-fraudulent purchases to learn warning signs and predict which category a new transaction falls into. Large retailers like Amazon stand to gain considerably if they are able to anticipate customer needs. Companies use the customer data they collect to send coupons and targeted advertisements that generate further sales.
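As a rough illustration of the fraud-detection idea above, the sketch below trains a classifier on a handful of labelled transactions and scores a new one. The features, figures, and the use of scikit-learn's LogisticRegression are assumptions for illustration; real banking systems use far richer data and models.

```python
# Illustrative sketch only: training a classifier on labelled transactions to
# flag likely fraud. The features, data, and threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [amount, hour_of_day, is_foreign]; label 1 = fraudulent
X = np.array([
    [12.50, 14, 0], [89.99, 10, 0], [45.00, 19, 0], [23.75, 9, 0],
    [980.00, 3, 1], [640.00, 2, 1], [1500.00, 4, 1], [720.00, 1, 1],
])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

model = LogisticRegression().fit(X, y)

# Score a new transaction and flag it if the fraud probability is high.
new_txn = np.array([[850.00, 3, 1]])
fraud_probability = model.predict_proba(new_txn)[0, 1]
print("flag for review" if fraud_probability > 0.5 else "approve")
```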

Product

While machine learning analytics can predict what customers want, AI also enables the automation of production. Manufacturing companies can now cheaply produce output that more accurately matches customer demand.

Applications

Digital Personal Assistants

Currently, the most well-known form of consumer AI is the digital personal assistant, most commonly found in mobile devices. Examples include Siri by Apple, Cortana by Microsoft, and Google Now by Google. The feature that entices many consumers is their ability to recognize natural speech and multiple forms of sentence structure. These assistants all use a multitude of pre-programmed responses covering as many questions or commands as the developer can anticipate. However, unlike other forms of AI, the assistant's responses are limited to the developer's programming, and it cannot learn and adapt on its own.
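A toy sketch of the pre-programmed-response approach described above: recognized text is matched against developer-defined keywords and mapped to canned replies, with a fallback for anything the developer did not anticipate. The intents and replies are invented for illustration; commercial assistants use far more sophisticated NLP pipelines.

```python
# Toy illustration of pre-programmed responses: map recognized text to a
# canned intent by keyword matching. The intents and replies are invented.
INTENTS = {
    "weather": ("weather", "rain", "temperature"),
    "alarm": ("alarm", "wake", "remind"),
    "time": ("time", "clock"),
}

RESPONSES = {
    "weather": "Here is today's forecast.",
    "alarm": "Okay, setting your alarm.",
    "time": "It is 3:42 PM.",
    None: "Sorry, I didn't understand that.",  # anything the developer didn't anticipate
}

def respond(utterance: str) -> str:
    words = utterance.lower().split()
    for intent, keywords in INTENTS.items():
        if any(keyword in words for keyword in keywords):
            return RESPONSES[intent]
    return RESPONSES[None]

print(respond("Will it rain tomorrow?"))  # -> forecast reply
print(respond("Order me a pizza"))        # -> fallback reply
```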


Facebook is currently developing its own digital personal assistant called Facebook M. The difference between this assistant and the ones already available is that it has real Facebook employees responding to the user in addition to its pre-programmed responses [23]. The standard AI software performs simpler tasks such as searching the web, while the humans tackle more complex tasks such as giving a gift recommendation. Integrating real employees into the assistant allows it to provide human judgement when answering the user's question, judgement that current AI systems are unable to produce. When a user asks a question, Facebook M is able to ask questions back to obtain more detail and provide a better, more personalized response or suggestion. This feature essentially emulates AI self-learning, as the assistant is able to obtain more information on its own to learn more about the user. Although it is mainly the employees asking questions and learning, not the software itself, this is a first step in implementing human judgement and self-learning into AI. All the actions performed by the Facebook employees are recorded to provide a blueprint of how complex questions should be answered. The data gathered can help the creation of an advanced system that utilizes deep learning to identify large amounts of information across a vast network [24]. Facebook and other developers will be able to learn from Facebook M and slowly create new versions that use more AI and less employee input.

Facebook M does lead to one concern regarding privacy. Facebook is able to obtain even more user information, such as detailed personal preferences. If AI becomes implemented widely in consumer electronics, then companies will be able to easily collect data as the software learns about the user on its own.

Google Magenta Music[25]

Google Magenta

The left side of the brain handles logical, analytical, and computational thinking, while the right side of the brain expresses emotions and creativity. Google Magenta is a creativity AI that acts similarly to the right side of the human brain: it can be used to create art and music and to write creatively. It is built on TensorFlow, an open source deep learning and machine learning system published on GitHub[1]. Being open source, anyone is able to use and modify Google Magenta's code, and coders around the world may add on to the existing code and make further advancements. The purpose of Google Magenta is to build a community of artists, coders, and machine learning researchers who help each other build an AI that can learn to create compelling and artistic content on its own. Its other purpose is to advance the technology of machine-created art[2].

In June 2016, Google Magenta created a 90-second piano melody, its first piece of generated art, using a neural network primed with four notes [3].
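Magenta's actual melody models are recurrent neural networks built on TensorFlow. As a much simpler stand-in for the idea of priming a sequence model with four notes and sampling a continuation, the sketch below uses a first-order Markov chain over MIDI pitches trained on an invented toy melody.

```python
# Much simpler stand-in for Magenta's RNN melody model: a first-order Markov
# chain over MIDI pitches, "trained" on a toy melody and primed with four
# notes before sampling a continuation. The training melody is invented.
import random
from collections import defaultdict

training_melody = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72, 67, 64, 60]

# Count which pitch tends to follow which.
transitions = defaultdict(list)
for current_note, next_note in zip(training_melody, training_melody[1:]):
    transitions[current_note].append(next_note)

def continue_melody(primer, length):
    """Extend a primer sequence by sampling one note at a time."""
    melody = list(primer)
    for _ in range(length):
        candidates = transitions.get(melody[-1]) or training_melody
        melody.append(random.choice(candidates))
    return melody

primer = [60, 62, 64, 65]           # the "four notes upfront"
print(continue_melody(primer, 16))  # a 20-note generated phrase
```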

Tay.AI

Tay.ai is a self-learning AI on Twitter created by Microsoft. It learns from people tweeting at it and then provides responses. In less than 24 hours, the AI transformed from an innocent self-learning chatbot into a racist and sexist one because of negative tweets from the general public. It was unable to differentiate moral standing, and thus failed to filter the public's negative content. In response, Microsoft reset it and placed controls on the information it learns[4].

IBM Watson

IBM Watson's defeat of two Jeopardy champions was only a side project. More importantly, AI can work together with humans to enhance human expertise, going beyond what humans can do alone. Watson accesses 90 servers with a combined data store of over 200 million pages of information, which it processes against six million logic rules. The device and its data are self-contained in a space that could accommodate 10 refrigerators. Watson is not programmed with all the information it might need; instead, it has been given the cognitive tools necessary to acquire knowledge itself, teasing out answers to complicated questions from vast amounts of electronic information. It does this using natural language (everyday conversational English), not computer-language queries. IBM Watson is able to answer difficult, pressing questions, swiftly extracting crucial information from the documents stored in its database by recognizing patterns and discovering relationships amongst the data[5].

IBM Watson understands both structured and unstructured data. Unstructured data makes up 80% of data today, such as blogs, tweets, and news articles. By finding the meaning of data through context, Watson learns the jargon and language of a particular field. It is able to comprehend complex sentences much as a human being would. When looking at a sentence, Watson does not just look for synonyms and keywords; it reads the sentence like a human and interprets the meaning from it. It breaks the sentence down, looking through its grammar, relations, and structure, and then understands the sentence's context. This differs from how a regular computer performs speech recognition: IBM Watson discovers the real intention of the sentence. Then it extracts a logical response, draws inferences, and provides the answer[6].

IBM Watson can be used in several industries, such as health, sports, medicine, and business. When it engages with a particular industry or field, IBM Watson learns that industry's language and jargon. Humans still need to provide Watson with a corpus of knowledge, which means curating the data and disregarding outdated or incorrect information. Experts train Watson to interpret data through machine learning on question-and-answer pairs, teaching it linguistic patterns, and it keeps learning through ongoing interaction. For instance, IBM Watson can be used to assist doctors in finding the best way to treat a patient. Initially, Watson goes over the symptoms. It creates hypotheses and looks through its corpus of data for evidence that supports or contradicts each one. After going through all the possibilities and evidence, it scores each hypothesis using analytics to compute a "weighted evidence score" and ranks the alternatives. Watson advises on the recommendations with the highest scores and shows the evidence to back them up. Doctors and health specialists are using IBM Watson to make better decisions based on evidence. In addition, it can provide new insights that humans may not have discovered, which means IBM Watson may generate undiscovered formulas and algorithms for creating effective drugs and medical treatments [7].
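The hypothesis-and-evidence ranking described above can be sketched roughly as follows. The diagnoses, evidence items, and weights are all invented, and this is not IBM's actual scoring algorithm; it only illustrates the idea of a weighted evidence score used to rank alternatives.

```python
# Hypothetical sketch of hypothesis scoring: each candidate diagnosis gets a
# weighted evidence score and the alternatives are ranked. All values invented.
hypotheses = {
    "Condition A": [("symptom match", 0.9, +1), ("lab result", 0.7, +1), ("age profile", 0.4, -1)],
    "Condition B": [("symptom match", 0.6, +1), ("lab result", 0.8, -1)],
    "Condition C": [("symptom match", 0.3, +1), ("imaging", 0.5, +1)],
}

def weighted_evidence_score(evidence):
    """Sum weights, counting supporting evidence as positive and contradicting as negative."""
    return sum(weight * direction for _name, weight, direction in evidence)

# Rank alternatives from most to least supported and show the scores.
ranked = sorted(hypotheses.items(), key=lambda item: weighted_evidence_score(item[1]), reverse=True)
for diagnosis, evidence in ranked:
    print(f"{diagnosis}: score = {weighted_evidence_score(evidence):+.2f}")
```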

AlphaGo

Go is a simple-looking game in which players take turns placing black or white stones on a board to claim the most territory and to trap the opposing player's stones. The other player's stones can be captured by completely surrounding them with your own. The game sounds simple, but winning and mastering it requires players to develop complex strategies, because Go contains countless possible moves: chess has only 20 opening moves, while Go has 361. Go is often played by common sense and feeling, which is why it was long thought to require human qualities to master.

AlphaGo is an artificial intelligence program created by Google DeepMind. It defeated Go grandmaster Lee Se-dol, considered the 4th-best Go player in the world, 4-1 over a five-game match in March 2016. The goal for AlphaGo is to beat the best human players, not mimic them [8]. AlphaGo learned to discover new strategies for itself by playing thousands of games between its neural networks and adjusting their connections using a trial-and-error process known as reinforcement learning. With every game it plays, AlphaGo learns from its mistakes and improves for the next game[9]. Ke Jie, the 18-year-old world Go champion, watched how AlphaGo plays and continues to improve with every game, and acknowledged the machine's near-perfect play.

AlphaGo uses two neural networks: a policy network and a value network. The policy network reduces the breadth of the search, narrowing hundreds of possible moves down to a handful of suggested candidates. The value network reduces the depth of the search: instead of playing out hundreds of moves to the end of the game, it evaluates how promising a position is after a much smaller number of upcoming moves (such as 20). Together, the networks make it feasible for the AI to identify the best move even when there are so many possibilities.
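The breadth-and-depth reduction described above can be sketched as follows, with stubbed-in random functions standing in for AlphaGo's trained policy and value networks and a placeholder board representation; the real system combines these networks with Monte Carlo tree search rather than the simple loop shown here.

```python
# Sketch of the pruning idea: a (stub) policy network narrows the candidate
# moves, and a (stub) value network rates the resulting positions instead of
# searching to the end of the game. Board, rules, and scores are placeholders.
import random

def policy_candidates(board, all_moves, top_k=5):
    """Stub policy network: keep only a handful of promising candidate moves."""
    scored = [(random.random(), move) for move in all_moves]  # pretend scores
    return [move for _score, move in sorted(scored, reverse=True)[:top_k]]

def value_estimate(board):
    """Stub value network: estimate how good a position is without playing it out."""
    return random.uniform(-1.0, 1.0)

def choose_move(board, all_moves):
    """Pick the candidate whose resulting position the value network rates highest."""
    best_move, best_value = None, float("-inf")
    for move in policy_candidates(board, all_moves):
        value = value_estimate(board + [move])  # placeholder "apply move"
        if value > best_value:
            best_move, best_value = move, value
    return best_move

print(choose_move(board=[], all_moves=list(range(361))))
```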

Compared with an AI, humans have far more limitations. There are only so many Go matches humans can process in a lifetime, while an AI can process practically every recorded game in history. Humans can only play a certain number of games within a year, while an AI could play millions in a day by having more than one machine play and then storing all the new knowledge in one database. Finally, humans get tired and begin to lose concentration at a certain point, while an AI does not unless someone takes it offline [10]. This means that, in time, AlphaGo may end up being effectively unbeatable, where even the world champion might not be able to win a single game in a match.

Google DeepMind's Deep Q-learning playing Atari Breakout[11]

Deep Q Learning

Deep Q Learning is a self-learning AI created by Google DeepMind that is able to play many Atari 2600 games. It combines a neural network (described above for AlphaGo) with reinforcement learning, learning from its mistakes and doing better the next time around [1]. Initially, the AI knows absolutely nothing about the game, to the point that it may not even move. Its only sensory input is the game screen, and its only instruction is to score as many points as possible. The AI starts off by attempting to get the most points until it learns the real objective of the game. Over time, it continues to learn from trial and error, using reinforcement learning to educate itself about the game. After an hour or so of training, Deep Q Learning can play like an advanced player. Given even longer, it completely masters the game, learning the game mechanics and formulating the best strategy to finish the game efficiently and effectively[2].
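The reinforcement-learning loop described above can be sketched with tabular Q-learning on a tiny invented "game", where the agent starts knowing nothing and learns by trial and error to reach a reward. DeepMind's actual DQN replaces the lookup table with a deep neural network over raw screen pixels and adds techniques such as experience replay; the environment and hyperparameters below are assumptions for illustration.

```python
# Tabular Q-learning on a toy stand-in "game": the agent starts knowing
# nothing and learns, by trial and error, to walk right to reach a reward.
import random
from collections import defaultdict

ACTIONS = ["left", "right"]
GOAL, START = 5, 0
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

Q = defaultdict(float)                   # Q[(state, action)] -> expected future score

def step(state, action):
    """Toy game dynamics: reach position 5 to score a point."""
    next_state = max(0, min(GOAL, state + (1 if action == "right" else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

for episode in range(200):
    state, done = START, False
    while not done:
        # Explore occasionally (or when the estimates are tied), otherwise act greedily.
        if random.random() < epsilon or Q[(state, "left")] == Q[(state, "right")]:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print(max(ACTIONS, key=lambda a: Q[(START, a)]))  # expected: "right"
```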

Self-Driving System

2015 - Semi-Autonomous Car

The semi-autonomous car has 12 sensors and 6 cameras to provide a 360-degree view. In addition, the car has infrared cameras to warn of obstacles ahead. A radar-based system indicates when the driver should slow down and brake as traffic stops and starts. An active lane keeper keeps the car in its lane in case it starts to drift to either side. Finally, facial recognition software detects whether the driver is drowsy and indicates when the driver should take a break and rest, reducing accidents due to fatigue[3].
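As a rough illustration only, the toy function below combines the kinds of sensor readings listed above (radar distance, lane offset, and a drowsiness estimate) into driver warnings. The thresholds and inputs are invented and bear no relation to any manufacturer's actual, safety-certified system.

```python
# Toy illustration of combining driver-assistance sensor inputs into warnings.
# Thresholds and readings are invented; real systems are far more sophisticated.
def driver_assist(radar_distance_m, lane_offset_m, eyes_closed_ratio):
    warnings = []
    if radar_distance_m < 20:
        warnings.append("brake: traffic ahead is slowing")
    if abs(lane_offset_m) > 0.5:
        warnings.append("steer: drifting out of lane")
    if eyes_closed_ratio > 0.3:
        warnings.append("rest: driver appears drowsy")
    return warnings or ["all clear"]

print(driver_assist(radar_distance_m=12, lane_offset_m=0.1, eyes_closed_ratio=0.4))
```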

2017 - Self-Driving Car

The fully autonomous self-driving car contains much more advanced technology than the semi-autonomous car. The car would be able to drive itself without a driver behind the wheel. It has radars to scan for obstacles, wireless technology to learn about traffic, and lasers and GPS to navigate. In addition, the car is able to communicate with other cars and exchange data on location, speed, steering wheel position, and brake status, thus avoiding collisions and accidents[4].

2020 - Self Parking

Currently, self-parking is available; however, drivers are required to first find a parking spot and then turn on automatic parking for the car to park itself. By 2020, self-parking cars are expected to find parking and park on their own. When individuals arrive at their destination, they can exit the car and let it find its own parking. The car searches for the nearest available parking by scanning the area, finding an open spot, and parking itself[5].

2030 - Mobile Living Space

The mobile living space is the final frontier of self-driving vehicles. The car is packed with wireless, digital, and autonomous technology; essentially, it becomes a moving conference room and private retreat. The car communicates with its passengers through voice and takes them to their desired destination, taking care of them along the way. A metallic coating over the windows provides privacy and a sleek look[6].

Crowdflower

CrowdFlower is an AI training platform that uses crowdsourcing to take data and clean it to make it useful. Using an approach described as "human in the loop," CrowdFlower's AI allows businesses to perform tasks with algorithms and machine learning, but brings in human judgment when the technology is not confident enough; the human work then makes the algorithms smarter[7].

CrowdFlower has built AI technology and a crowdsourced workforce, but customers can also plug in their own technology and remote labour pools. Customers upload their data, design the job they would like completed, post the job, and crowdsource experts around the world to do it. While the experts are completing the job and after it is completed, the AI uses machine learning to learn how these experts do the work, figuring out the patterns and linguistics involved. Once trained, the AI is ready to complete the job for the business. It is able to recommend solutions backed by evidence from the data, and may even surface new discoveries. A similar approach can also be used for medical purposes, such as creating new ways to treat patients or new drugs, or for more technology-focused work like creating a self-driving car. CrowdFlower is gaining popularity among businesses and investors, having recently raised $10M to continue combining artificial intelligence with crowdsourced labour[8].
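The "human in the loop" routing idea can be sketched as follows: predictions the model is confident about are returned directly, uncertain ones are sent to a crowd worker, and the human answers become new training examples. The model, threshold, and helper functions are hypothetical and do not reflect CrowdFlower's actual API.

```python
# Sketch of "human in the loop" routing: low-confidence predictions go to a
# crowd worker, and the human answers become new training data. The model,
# threshold, and ask_crowd_worker helper are hypothetical placeholders.
CONFIDENCE_THRESHOLD = 0.8
training_examples = []

class DummyModel:
    """Stand-in model that is confident about short items and unsure about long ones."""
    def predict_with_confidence(self, item):
        return ("simple", 0.95) if len(item) < 20 else ("complex", 0.40)

def ask_crowd_worker(item):
    """Placeholder for posting a task to a crowdsourced workforce."""
    return f"human label for {item!r}"

def label_item(model, item):
    label, confidence = model.predict_with_confidence(item)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                               # machine handles confident cases
    human_label = ask_crowd_worker(item)           # humans handle uncertain ones
    training_examples.append((item, human_label))  # human work makes the model smarter
    return human_label

model = DummyModel()
print(label_item(model, "reset my password"))
print(label_item(model, "compare the warranty terms of these three products"))
```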

Artificial Intelligence and Finance

Financial institutions are always seeking faster and more efficient ways to analyze the stock market. As markets change, traditional quant models may be too static and unable to adapt to trends fast enough. This is where AI comes into play. Many firms are developing financial analytical systems that can learn and adapt to the changing market and generate trading commands and strategies. AI systems utilize machine learning, taking in massive amounts of data to analyze at high speed and then improving themselves through this analysis [9]. These new stock trading systems are generally based on one of two forms of AI: genetic evolution and probabilistic logic. Sentient, an AI stock trader that uses genetic evolution, automatically creates a trading strategy without human help. It evolves by creating a large pool of digital stock traders and testing their performance on historical data. After learning through these tests, it creates a new set of superior digital traders and repeats the process until it is able to trade in the real-world market[10].
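The genetic-evolution approach described above can be sketched as evolving a pool of simple threshold trading rules against historical prices: score each rule, keep the best, and breed mutated copies for the next generation. The price series, strategy form, and parameters are invented; this is not Sentient's actual system.

```python
# Sketch of genetic evolution of trading rules: score a population of simple
# threshold strategies on historical prices, keep the best, and mutate them.
# Prices, strategy form, and parameters are invented for illustration.
import random

prices = [100, 101, 99, 103, 104, 102, 107, 106, 110, 108, 112, 111]

def backtest(buy_dip, sell_gain):
    """Final portfolio value from buying after a dip of buy_dip and selling after a gain of sell_gain."""
    cash, shares = 1000.0, 0.0
    for yesterday, today in zip(prices, prices[1:]):
        change = today - yesterday
        if shares == 0 and change <= -buy_dip:
            shares, cash = cash / today, 0.0
        elif shares > 0 and change >= sell_gain:
            cash, shares = shares * today, 0.0
    return cash + shares * prices[-1]

population = [(random.uniform(0, 3), random.uniform(0, 3)) for _ in range(20)]

for generation in range(30):
    ranked = sorted(population, key=lambda s: backtest(*s), reverse=True)
    survivors = ranked[:5]                           # keep the best strategies
    population = survivors + [
        (max(0.0, b + random.gauss(0, 0.3)), max(0.0, s + random.gauss(0, 0.3)))
        for b, s in random.choices(survivors, k=15)  # mutated offspring
    ]

best = max(population, key=lambda s: backtest(*s))
print(f"best strategy: buy after dip of {best[0]:.2f}, sell after gain of {best[1]:.2f}")
```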

Impacts of AI on our Economy

As technology improves, businesses aim to move towards automation in order to save labour costs. Due to this, it is a popular belief among the workforce that blue collar jobs will be replaced by machines.

Chart showing distribution of U.S. workers worried about losing their jobs to technology[11]

However, with the rise of artificial intelligence, it may be possible that higher-income white collar jobs, ones that are more complex and require a larger skill set, will be replaced by machines as well. As mentioned earlier, financial institutions are already experimenting with AI analytical and trading systems that can handle much larger sets of data at faster speeds than traditional models. What about jobs that require human judgement, such as a police officer or a counselor? Will AI not only be able to learn on its own but also think like a human? This can be a topic of controversy and must be discussed with ethics in mind.

Ethical Issues

Current Ethical Issues

Unemployment

The trend over many decades has been that as technology improves, it is used to replace much of the work that humans used to do, and the complexity of the jobs being replaced has been increasing at an increasing rate. There are many fears that with the development of more sophisticated AI programs, technology will not only begin to replace low-skill jobs, but also jobs that require complex decision-making, such as managerial positions. If the replacement of human labour increases at a faster rate than the population is able to adjust to, we may be faced with systemic unemployment in the future. Should there be laws controlling the rate of growth? It is estimated that 50% of jobs will be replaced by AI within 30 years[12].

Of course, there are some optimists who believe that, while many low-skilled jobs will become automated, the rate of new job growth will be able to compensate for it. According to a report by the Brookfield Institute, the following are the top five occupations at risk of being affected by automation[13]:

  1. Retail Sales Associates
  2. Administrative Assistants
  3. Food Counter Attendants and Kitchen Helpers
  4. Cashiers
  5. Transport Truck Drivers

Alternatively, the top five occupations at the least risk of being affected by automation are[14]:

  1. Retail and Wholesale Trade Managers
  2. Registered Nurses (Psychiatric included)
  3. Elementary and Kindergarten Teachers
  4. Early Childhood Educators and Assistants
  5. Secondary School Teachers

Transportation

The wide adoption of self-driving, autonomous vehicles promises to dramatically decrease the number of traffic accidents. Some accidents, though, will be inevitable. If an autonomous vehicle finds itself in a situation where it will hit a pedestrian or an oncoming car, what will it choose? How about if the vehicle has to choose between hitting a crowd of pedestrians and crashing itself into a wall, potentially injuring its occupants? Who is liable for this accident? A recent study called "Autonomous vehicles need experimental ethics," highlighted by The MIT Technology Review (2015) explores these questions, and more.

On the legal side, it is not obvious whether the passenger should be held legally accountable if the control algorithm ultimately makes the decision to hit a pedestrian, a passer-by, or a wall. Some argue that liability has to shift from the passenger to the manufacturer, since failure to anticipate these kinds of decisions may amount to negligence in design under product liability law[15].

Since regulations lag behind technology in the case of autonomous vehicles, automakers of semi-autonomous vehicles are using workarounds. For example, to minimize manufacturer liability associated with its new 'automated overtaking' feature, Tesla Motors will require the driver to initiate the feature, thus ensuring that legal responsibility for the consequences of the maneuver falls with the driver[16].

Defining the algorithms that will guide autonomous vehicles in situations of unavoidable harm is a difficult challenge, given that the decision is an ethical one. The control algorithms of autonomous vehicles will need to embed moral principles to guide their decisions in situations of unavoidable harm. However, if a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm's decisions? Regulators and manufacturers will soon be in pressing need of answers to these questions, and those answers are most likely to come from surveys employing the protocols of experimental ethics.

Health Industry

The major issue with the use of AI in the medical field is determining where responsibility lies when making potentially life-threatening medical decisions. For example, if a physician used IBM's Watson to diagnose a patient with a rare illness, and the diagnosis and treatment were deemed incorrect due to errors and the patient died as a result, would the physician or Watson be responsible? Does the doctor have the final say when Watson suggests possible diagnoses, making it the doctor's responsibility to confirm all of the patient's symptoms and concerns before administering a treatment, despite what Watson suggests? Alternatively, would the blame be placed on Watson for not understanding the symptoms, or for not knowing that a given symptom of one illness can manifest as a different illness? Also, would it even make sense to place the blame on AI, as the repercussions it would face are insignificant relative to the consequences a doctor could face? As the use of AI in medicine becomes more common, issues such as these will have to be addressed.

Future Ethical Issues

Employment for Social Functions

According to Nick Bostrom, a leading philosopher at Oxford, there are certain criteria that must be fulfilled if an AI is intended to replace human judgement in social functions[17]:

  1. Responsibility
  2. Transparency
  3. Auditability
  4. Incorruptibility

The actions of an AI that runs only within a specific domain (e.g. Deep Blue) are easier to predict, and therefore it is easier to anticipate what safety issues might result. However, as AI begins to operate on a higher level of context, our ability to predict these safety issues and our ability to address them greatly decrease[18].

It is theorised that the best way to build an AI that acts safely while operating across various domains, with many possible consequences, is to specify its behaviour in non-local terms such as "X such that the consequence of X is not harmful to humans". However, this is only effective if the AI is able to predict the consequences of its behaviour. There could also be a potential loophole in which what the AI considers "harmful to humans" differs from our definition[19].

Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as any of these[20]:

  • A customer service representative
  • A therapist
  • A nursemaid for the elderly
  • A soldier
  • A judge
  • A police officer

It is ironic that Weizenbaum listed 'customer service representative' as one of the positions that should not be replaced back in 1976, considering how automated our retail industry has become. For the other positions, though, it is understandable how authentic feelings of empathy may be very important. If these positions were replaced by a machine, the people subjected to the actions of these AI may find themselves feeling alienated, devalued, and frustrated.

However, there are counterarguments that machines are not biased and have no personal agendas, which could be more favourable for women and minorities when facing judges and police.

Moral Status

Moral status is a characteristic applied to entities whose interests, like those of humans, morally matter to some degree for the entity's own sake. It is generally agreed that current AI systems have no moral status, and it remains an open question whether they will ever attain it. There are two criteria that are commonly proposed as being importantly linked to moral status[21]:

  1. Sentience: the capacity for phenomenal experience or qualia, such as the capacity to feel pain and suffer
  2. Sapience: a set of capacities associated with higher intelligence, such as self-awareness and being a reason-responsive agent

In the future, it may be possible for AI to have the same moral status as human beings. One of the ideas that could be used as a sort of benchmark to assess the moral status of AI can be expressed as the principle of non-discrimination[22]:

Principle of Non‐Discrimination
If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.

The Future of AI

As AI continues to develop, one thing remains clear: AI has the potential to improve the lives of its designers and of mankind. Corporations will create new ventures to capitalize on AI advancements by introducing them to society for acceptance by the majority, just as has been done for many other radical technological advancements. For future businesses to survive, they will have to start adapting their strategies to be more forward-thinking in order to keep up with exponentially changing technology. With this in mind, society, not just businesses, will need to adapt and determine how to handle situations where AI poses threats or raises ethical implications as it is integrated into society.

Intelligence Explosion

Just as technology continues to grow at an increasing rate, the intelligence of our AI is also beginning to increase exponentially. The "intelligence explosion" is the hypothetical event in which AI is able to understand its own design to the extent that it can redesign better and better versions of itself to become ever more intelligent[23].

The ethics of this hypothetical event considers what would happen to the human race if AI becomes smarter than humans. Will we be able to control AI for the betterment of humankind? Or will we have to face existential risks?

The Pessimist's Future

In the pessimist's perspective of the future regarding AI, the worst-case scenario is envisioned: an intelligence explosion will result in the annihilation of the human race, or something will happen to permanently and drastically diminish the prosperity of humankind.

Professor Stuart Russell, who leads research on AI, believes that AI might be weaponized in the future[24]. He believes that placing weapons systems under the control of a machine, rather than a human, is extremely dangerous. Even Stephen Hawking and Elon Musk have written an open letter to the United Nations asking them to be conscious of what AI could mean to society if used inappropriately[25]. Since AI systems do what they are programmed to do without question, ethical reasoning, or remorse, there is a potential to use them as weapons.

Stephen Hawking has also said that we only ever need to make one fully intelligent machine, as that machine would be able to redesign and improve itself at a rate faster than we can improve ourselves[26]. As such, humans would not be able to compete and would be superseded by the computer. This would be the end of the human race. Consequently, human values and objectives must remain central to the development of AI to prevent a catastrophe. Once a machine has been created for the wrong purposes, everyone would pay the price, not just its creator.

The Optimist's Future

According to Eliezer Yudkowsky, there are three families of outcomes that could possibly happen in the event of smarter-than-human AI[27]:

  1. AIs will patent new inventions, publish groundbreaking research papers, make money on the stock market, or lead political power blocks.
  2. Fast AIs will invent capabilities that futurists commonly predict for human civilisations a century or millennium in the future, like molecular nanotechnology or interstellar travel.
  3. Changes of cognitive architecture might produce insights that no human‐level mind would be able to find, or perhaps even represent, after any amount of time.

Many of the optimist perspectives foresee a superintelligent AI that could preserve human life and help it fulfil its potential. It is important to remember that smarter minds don't just pose large risks; they offer great potential benefits as well.

In Fiction

Much of the technology that exists in the present day was very much the science fiction of the past. In fact, it can be quite eerie to see how well science fiction has managed to predict (or even help shape) the technology of the future.

An example of this can be seen through the wide influence of Star Trek during the 60s and 70s. Much of the technology used in the original series can be seen in use today: tablets, GPS, voice commands, and mobile phones. The inventor of the mobile phone, Martin Cooper, has even admitted he was inspired by Captain Kirk[28].

Besides predicting what technology may look like, science fiction may even give us a glimpse of the ethical and philosophical issues that may face us.

Isaac Asimov

Science fiction author Isaac Asimov (most famous for I, Robot, a collection of stories written between 1940 and 1950) devised a set of rules that formed a unifying theme for his robot-based fiction.

The Three Laws, from the “Handbook of Robotics, 56th Edition, 2058 A.D.” are[29]:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Since it’s inception the Three Laws, have been popularised and referred to by many books, films and other media.

In the 1999 film Bicentennial Man, based on Asimov's story of the same name, a robot with AI named Andrew learns emotions and how to reciprocate them when introduced to the Martin family. As time goes on, he not only strives for freedom, but to become more human. Over time, through better technology and through his own research and design, he is able to make new prosthetic organs for robots (which could also be used in humans). Eventually, Andrew becomes human enough to fall in love with a human, Portia, who also falls in love with him[30].

In science fiction, many authors have envisioned different futures for the place of AI. Many of these are negative, foreseeing the end of human civilisation, while others are brightly optimistic, envisioning humans harnessing the power of AI for the advancement of humankind. Regardless, how AI is portrayed in all forms of media today has a profound impact on our perception of it.

References

  1. Google DeepMind. (2016). DQN. Retrieved from https://deepmind.com/dqn
  2. Zsolnai-Feher, K. (2015, Nov 22). Two minute papers: Google deepmind’s deep q learning & superhuman atari gameplays. Retrieved from https://www.youtube.com/watch?v=Ih8EfvOzBOY
  3. AFP. (2016, May 20). Uber testing self-driving cars in pittsburgh. Retrieved from http://www.seeker.com/uber-testing-self-driving-cars-in-pittsburgh-1809369174.html?slide=HhQniK
  4. AFP. (2016, May 20). Uber testing self-driving cars in pittsburgh. Retrieved from http://www.seeker.com/uber-testing-self-driving-cars-in-pittsburgh-1809369174.html?slide=HhQniK
  5. AFP. (2016, May 20). Uber testing self-driving cars in pittsburgh. Retrieved from http://www.seeker.com/uber-testing-self-driving-cars-in-pittsburgh-1809369174.html?slide=HhQniK
  6. AFP. (2016, May 20). Uber testing self-driving cars in pittsburgh. Retrieved from http://www.seeker.com/uber-testing-self-driving-cars-in-pittsburgh-1809369174.html?slide=HhQniK
  7. CrowdFlower. (2016). Here's how it works. Retrieved from https://www.crowdflower.com/
  8. Ha, A. (2016, June 7). CrowdFlower raises $10M to combine artificial intelligence with crowdsourced labor. Retrieved from https://techcrunch.com/2016/06/07/crowdflower-series-d/
  9. Metz, C. (2016) The Rise of the Artificially Intelligent Hedge Fund. Retrieved from http://www.wired.com/2016/01/the-rise-of-the-artificially-intelligent-hedge-fund/
  10. Metz, C. (2016) The Rise of the Artificially Intelligent Hedge Fund. Retrieved from http://www.wired.com/2016/01/the-rise-of-the-artificially-intelligent-hedge-fund/
  11. Image by McCarthy, N. (2015, August, 13). Fear of Technology Falls with Higher Income. Retrieved from https://www.statista.com/chart/3723/fear-of-technology-falls-with-higher-income/
  12. Robots and AI would take over 50% of human jobs in next 30 years[1]
  13. The Talented Mr. Robot[2]
  14. The Talented Mr. Robot[3]
  15. Products Liability of Driverless Cars[4]
  16. Tesla Autopilot[5]
  17. Artificial Intelligence[6]
  18. Artificial Intelligence[7]
  19. Artificial Intelligence[8]
  20. Joseph Weizenbaum, quoted in McCorduck 2004, pp. 356, 374–376
  21. Dwyer, James (2010). Moral Status and Human Life: The Case for Children’s Superiority. Cambridge University Press. pp. 9-26:
  22. Dwyer, James (2010). Moral Status and Human Life: The Case for Children’s Superiority. Cambridge University Press. pp. 9-26:
  23. Muehlhauser, Luke, and Louie Helm. 2012. "Intelligence Explosion and Machine Ethics". In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.
  24. AI Dooms People[9]
  25. Open Letter[10]
  26. Stephen Hawking AI[11]
  27. Bostrom, Nick; Yudkowsky, Eliezer (2011). "The Ethics of Artificial Intelligence" (PDF). Cambridge Handbook of Artificial Intelligence. Cambridge Press.
  28. How Star Trek Changed the World[12]
  29. Asimov, Isaac (1950). I, Robot.
  30. Asimov, Isaac (1976). The Bicentennial Man.