Artificial Intelligence and Amazon Go - D200

From New Media Business Blog


Artificial Intelligence (AI) is an area of computer science concerned with building computer systems that can think and respond like humans. AI replicates human thought processes such as complex decision-making, reasoning, interpreting different languages and developing original ideas [2]. This raises the question of what being "intelligent" actually entails; although intelligence is an inherently subjective concept, it can be said to emerge when a machine maintains a steady awareness of knowledge acquired in the past that it can exercise in future decisions.[3] The machine is thus able to learn and produce better results over time. AI can seem like a mystery, as there is no single formula or model that clearly explains how it works; put simply, though, AI in recent years relies on algorithms loosely based on the neural networks of the brain.[4] The field has become particularly important with the advent of big data and greater processing power, as many researchers turn to AI to reduce human effort, increase efficiency, and create faster and more accurate solutions.


Artificial Intelligence

History

Year Details
1950 The Turing Test

Alan Turing proposed the "Turing Test", under which a machine passes only if it can fool a person into believing they are conversing with another human.[5]

1950 I, Robot

I, Robot, a collection of science fiction short stories by Isaac Asimov, was published, marking one of the first times the idea of robots entered pop culture.

1956 “Artificial Intelligence”

"Artificial Intelligence" was first coined by John McCarthy, a computer and cognitive scientist, at the Dartmouth academic conference.

1966 Shakey the Robot

The first general-purpose mobile robot, named Shakey, was created at the Stanford Research Institute (SRI). It was able to make decisions on its own, and its capabilities included spatial awareness and analyzing commands. Its hardware and tall structure caused Shakey to shake a lot, and although it was slow and faulty, the software stirred public interest in the future possibilities of AI.

1974 AI Winter

A period during which the AI industry received harsh criticism for producing machines that could only perform simple tasks. Many people and corporations became reluctant to invest in AI, as millions of dollars had already been spent without significant financial returns [6].

1981 AI for Commercial Use

People began to realize that AI was more valuable when machines were programmed to perform focused tasks, rather than demonstrating "general intelligence". AI's commercial value was recognized, and early adopters such as Digital Equipment Corporation began using AI in their ordering systems.

1997 IBM's Deep Blue

Deep Blue, IBM's chess-playing machine, beat the world chess champion, Garry Kasparov. Deep Blue had lost a six-game match to Kasparov the previous year, but after further development it ultimately beat him in the 1997 rematch.[7]

2008 Google's Speech Recognition

Google achieved 80% accuracy in the speech recognition capabilities of its Google App.[8] This was a breakthrough for Google, as AI could now sift through massive amounts of user data and recognize patterns in speech.

2011 IBM Watson

IBM Watson was an AI developed specifically to answer questions on the quiz show Jeopardy! using natural language processing. In 2011 it beat two of Jeopardy!'s greatest champions, showcasing that AI could match or exceed human performance at such tasks.

Everyday AI

AI: It’s Not Just Science Fiction Anymore

Historically, the concept of AI was confined to fictitious stories in science fiction novels. With more tech giants and venture capitalists seeing the lucrative opportunities of AI, however, research on "smart machines" has been heavily funded. Once-improbable depictions of robots in everyday life are now becoming quite palpable. AI is already prevalent in our day-to-day activities, and the extent to which it affects so many different aspects of our lives deserves to be acknowledged:

  • Recommended For You - As consumers search for products or browse certain web pages, companies learn their online behavior to personalize the experience. Examples include targeted ads, recommended items in your shopping cart, and recommended movies on Netflix.
  • Face Recognition - AI allows smart devices to recognize every edge of an individual's face to accurately identify the person. There are also studies on how AI can identify a person even when they are wearing a mask. Most recently, there has been a controversial claim that AI can predict a person's sexual orientation.[1]
  • Financial Fraud - AI is used to learn an account's regular spending patterns and detect fraudulent activity. It can predict the next fraudulent transaction and use location capabilities to track it[2].
  • Automatic Email Filtering - Google tracks how each person interacts with their Gmail account and predicts which messages to include in their inbox. By analyzing millions of messages, Google can automatically filter spam with 99.9% accuracy[3].

Overall, many businesses are beginning to realize the importance of AI in their operations, seeing it as a valuable tool for providing personalized consumer experiences. The applications above are only a few examples of how AI runs in the background to add efficiency and ease to our lives. While these examples may not be surprising, they show that advances in AI capabilities strongly influence how we perceive and interact with technology today.

Branches of AI

There are many sub-areas of AI worth noting. Due to the complexity of the field, researchers have offered various interpretations of these sub-areas. It is important to note that they are not mutually exclusive, and many applications overlap.

Branches of AI [4]

The following list is the team's interpretation of the different branches of AI. For the purposes of our case study, we will only cover machine learning and computer vision in depth.

Machine Learning

According to Forbes, machine learning is the main application of AI, in which machines access data and learn on their own without human intervention or being explicitly programmed to carry out specific tasks [5]. The terms "machine learning" and "artificial intelligence" are often used interchangeably, though it should be noted that AI is the overarching field of computer science within which machine learning is applied.

Deep Learning


Deep Learning is the most recent and advanced approach to machine learning and is loosely based on biological neural systems. When data points are fed through a model, deep learning processes the data through many layers. These layers allow the model to learn at different levels of abstraction, thereby learning to understand representations of the data [1]. Processing data points through these layers lets the model find complex relationships and classifications, ultimately building a deep neural network over time. Although neural networks are expensive to train, deep learning can produce more accurate and complex outputs. Deep learning has been the foundation for computer vision, audio recognition and speech recognition.

If you want to learn more about deep learning, Tensorflow has a neural network playground that simulates the process of information being fed through a machine learning model. To better understand how the playground works and the importance of deep learning, you can watch these videos: the importance of deep learning and advantages of deep learning.
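To make the layered processing above concrete, here is a minimal sketch of a forward pass through a small network in NumPy. It is illustrative only: the weights are random rather than trained, so the outputs are meaningless; the point is how each layer transforms the data into a new representation.

```python
import numpy as np

def relu(x):
    # Non-linearity applied between layers; without it the stack
    # would collapse into a single linear transformation.
    return np.maximum(0.0, x)

def forward(x, layers):
    """Feed input x through a stack of (weights, bias) layers."""
    for w, b in layers[:-1]:
        x = relu(x @ w + b)   # each hidden layer re-represents the data
    w, b = layers[-1]
    return x @ w + b          # final layer outputs raw scores

rng = np.random.default_rng(0)
# A toy 3-layer net: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs
shapes = [(4, 8), (8, 8), (8, 2)]
layers = [(rng.normal(size=s), np.zeros(s[1])) for s in shapes]

scores = forward(rng.normal(size=(1, 4)), layers)
print(scores.shape)  # (1, 2)
```

Training would adjust the weights by backpropagation to make the scores meaningful; the TensorFlow playground mentioned above animates exactly this kind of layer-by-layer transformation.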


Computer Vision

Simply put, Computer Vision is the "science and technology of machines that see" [2]. Recreating the eye through image processing holds many complex challenges. Not only do researchers face the challenge of recreating the visual cortex, they also have to ensure the machine understands what it sees, integrating brain-like capabilities of reasoning [3].

To understand the application of Computer Vision, Google has a Cloud Vision Platform that conducts a powerful image analysis on any uploaded photo. With Google's unfathomable pool of photos, it can gain incredible insights, such as the location of the photo, the subject, and other related web entities, which brings to light the amount of personal information that can be identified if you upload a picture of yourself. For example, we uploaded a photo of a Beedie student during his international Co-op in China; the photo itself did not explicitly state any details. However, Google's cloud computing capabilities searched the web for similar photos and accurately identified keywords such as "SFU", "Beedie School of Business" and "international exchange", emphasizing this loss of privacy.


Natural Language Processing

Natural Language Processing is the study of how machines are able to understand the written and spoken rules of human language. This includes different levels of linguistics research, from syntax and pragmatics to semantics [4].

Expert Systems

Expert Systems are intelligent computer programs that can solve expert-level problems and make decisions within a specific domain [5]. They are programmed to create solutions requiring the same level of knowledge and expertise as a professional.

Speech Recognition

Speech Recognition is the process of a machine translating spoken language into text. It can handle various languages and variations in accent. Google's Cloud Speech API demonstrates this.

Robotics

Robotics is the branch of AI concerned with machines that can perform actions with some degree of autonomy [6]. Complemented with sensors and other branches of AI, these machines can interact with the physical world.

AI in Industries

The following outlines some examples of AI in different industries:

  • Medicine and Healthcare
    AI is helping doctors create "precision medicine" that offers prescription medicine specific to each person, taking into account their genetic makeup, medical history, and environmental factors [7]. Many medical centers see great potential in data mining patient records. In one breakthrough event, IBM Watson processed millions of cancer reports and identified a rare form of leukemia in a patient who had previously been incorrectly diagnosed [8].
  • Transportation
    AI can now harness real-time road data, enabling driverless vehicles such as Uber's Otto and Google's Waymo.
  • Customer Service
    Many retailers are turning to AI to create more seamless customer experiences, both online and offline. For example, many company websites have chatbots that use Natural Language Processing to comprehend complex questions, including ones that require specific personal information.
  • Finance
    Financial institutions are using AI's predictive techniques to forecast market trends and stocks. "Robo-advisors" have become popular, as they can set up wealth management services and diversified portfolios at very low cost [9]. These are becoming attractive to many investors and financial brokerages.
  • Recruitment in Business
    There have been many cases of companies sifting through hundreds of resumes and applications using AI [1]. This helps HR departments streamline decision-making and understand which candidates best suit the company's needs.

Case Study: Amazon Go

One of the most recent business applications of AI is Amazon Go, a new retail store concept with no checkout. Amazon Go stores will offer a selection of grocery essentials, ready-to-eat meals, and Amazon Meal Kits. Each store will be roughly 1,800 square feet, a compact space that complements the "Just Walk Out" technology. [2]

Amazon opened a beta store in Seattle in December 2016; at present it can only accommodate up to 20 people at a time, and it has trouble tracking items once they leave their designated spots. [3]

Technology

Sensor Fusion

Amazon has declined to discuss the new technology with the press, so other than the released video and the information on its homepage, little is known about Amazon Go. We only know that the store is based on "Just Walk Out" technology, which uses a combination of Deep Learning, Computer Vision, and Sensor Fusion.

Underlying technologies

As mentioned before, Deep Learning and Computer Vision are at the forefront of AI technologies to date. These powerful tools use data inputs to produce predictions and insights more intelligently.

Sensor Fusion combines data from multiple sensor sources to produce more complete and accurate information about the surroundings. [1] Examples of sensors include cameras, accelerometers, lasers, and radar.

Amazon Patents

In 2013 and 2014, Amazon filed two patents that describe the key technologies enabling the stores: Detecting item interaction and movement on June 26th, 2013 and Transitioning items from the materials handling facility on September 24, 2014. [2]

  • The Detecting Item Interaction and Movement patent describes how to detect items when a user passes through a transition area. It is mostly aimed at inventory facilities, but it also hints at use in retail locations.
As still another example, if the materials handling facility is a retail location and the customer is purchasing items, when the customer passes through the exit (transition area) of the retail location, the items picked by the user may be automatically transitioned from the materials handling facility to the user and the user may be charged a fee for the items. […] For example, if the user is purchasing items from a retail location, rather than the user having to stop and "check out" with a cashier, teller or automated check station, because the picked items are already known and identified on an item identifier list associated with the user, the user may simply exit the retail location with the items. The exit of the user will be detected and, as the user passes through the exit (transition area), the user, without having to stop or otherwise be delayed, will automatically be charged a fee for the items (the items are transitioned to the user)

For example, if the materials handling facility is a retail store, users may pick items from within the facility, place the items in a cart, bag, pocket, or otherwise carry the items and the items are automatically identified and associated with the user. When the user exits the facility the items may be transitioned from the facility to the user and the user charged for the items. This may be done without the user having to undergo the additional step of checking out with a cashier, the cashier identifying and scanning each item and the user paying the cashier for the item.
The patent also notes how the system could use a shopper’s purchase history to predict what item was picked up. [3]

For example, if the inventory management system cannot determine if the picked item is a bottle of ketchup or a bottle of mustard, the inventory management system may consider past purchase history and/or what items the user has already picked from other inventory locations. For example, if the user historically has only picked/purchased ketchup, that information may be used to confirm that the user has likely picked ketchup from the inventory location.
It is apparent that Amazon prepared Amazon Go for over four years, perfecting the technology in its own warehouses [4].

How Amazon Go works

Referring to the video and the two patents, we can get a sense of how an Amazon Go store works. The store is partitioned into two areas: a transition area and a retail area.

Transition Area

The transition area is placed at the entrance of each store to identify shoppers as they enter. It scans a QR code generated by the smartphone app to match the user with their Amazon Prime account. Surveillance cameras in this area also provide the system with the user's appearance for tracking purposes. Finally, the area is responsible for notifying the system when users exit the store so they can be charged accordingly.
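Amazon has not disclosed how the transition area works internally, but the flow it describes can be sketched in a few lines. Everything below (account tokens, field names, the gate logic) is hypothetical, purely to illustrate matching a scanned QR code to an account on entry and charging it on exit.

```python
# Hypothetical data: tokens and fields are invented for this sketch.
accounts = {
    "qr-7f3a": {"name": "Alice", "payment_on_file": True},
    "qr-91bc": {"name": "Bob", "payment_on_file": True},
}

active_sessions = {}  # shopper id -> account token, opened at the gate

def enter_store(shopper_id, qr_payload):
    """Match the scanned QR code to an account and open a session."""
    account = accounts.get(qr_payload)
    if account is None or not account["payment_on_file"]:
        return False  # unknown code or no payment method: gate stays shut
    active_sessions[shopper_id] = qr_payload
    return True

def exit_store(shopper_id, cart_total_cents):
    """Close the session and charge the linked account."""
    token = active_sessions.pop(shopper_id)
    name = accounts[token]["name"]
    return f"charged {name} ${cart_total_cents / 100:.2f}"

enter_store("shopper-1", "qr-7f3a")
print(exit_store("shopper-1", 1250))  # charged Alice $12.50
```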

Retail Area

The retail area has two main functions: tracking which item was taken and who took it. This is where sensor fusion comes into play. Amazon Go stores will have a wide variety of sensor types to better track item and user movement. Sensors predicted to be in use include [5]:

  1. Bluetooth beacons: identify the mobile phone nearest to the shelf. Each shelf would have a Bluetooth sensor to track which customers are in range to pick up its items.
  2. Shelf cameras: detect when an item is removed from or added to the shelf, and what the item looks like.
  3. Shelf weight sensors: detect the weight of items removed or added. They would also determine the price for items charged by weight (fruits, vegetables, nuts, etc.). Weight sensors range from scales and pressure sensors to load cells [6].

Using a sensor fusion algorithm, the AI combines these data inputs to make the best guess about its surroundings. Item weights and images allow the inventory system to detect which items were added or removed, in what quantity, and when they left or returned to the shelf. Combining this information with the shelf's Bluetooth beacon and the store cameras, the system can identify the user who is close enough to the shelf to have removed the item.

In addition, the AI incorporates past purchase patterns to better predict which product was picked up, increasing its confidence score. The inventory system is also linked to a server that powers the virtual shopping cart in the Amazon Go mobile app. Overall, this integrated system works seamlessly without affirmative input from the user. Even though QR scanning, computer vision, and sensor fusion are not new technologies, this implementation of AI enables a huge business application.
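One simple way to picture this fusion step is a weighted combination of per-sensor confidence scores, with purchase history treated as one more evidence source, echoing the ketchup-versus-mustard scenario in Amazon's patent. The sensor names, weights, and scores below are invented; Amazon's actual algorithm is not public.

```python
def fuse(evidence, weights):
    """Combine per-sensor scores for each candidate item into a single
    normalised confidence score (scores sum to 1 across candidates)."""
    fused = {item: sum(weights[sensor] * score
                       for sensor, score in sensors.items())
             for item, sensors in evidence.items()}
    total = sum(fused.values())
    return {item: value / total for item, value in fused.items()}

# Each sensor's belief about which item the shopper just picked up.
evidence = {
    "ketchup": {"shelf_camera": 0.55, "weight": 0.50, "history": 0.90},
    "mustard": {"shelf_camera": 0.45, "weight": 0.50, "history": 0.10},
}
weights = {"shelf_camera": 0.4, "weight": 0.4, "history": 0.2}

confidence = fuse(evidence, weights)
best_guess = max(confidence, key=confidence.get)
print(best_guess)  # ketchup
```

Here the cameras and weight sensors are nearly tied, and the shopper's history of only ever buying ketchup tips the decision, exactly the role the patent assigns to past purchases.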

Business Application

Amazon Go in Food Retail Industry

Amazon Go has been confirmed to target the food retail industry specifically. However, this is not Amazon's first attempt to penetrate this industry.

AmazonFresh

Amazon's first attempt was AmazonFresh, an online grocery delivery service. For a $14.99/month add-on to the annual $99 Prime subscription, customers can choose from a selection of fresh and frozen foods and some general merchandise, delivered either the same day or the next day. It is available in selected US states, London, Tokyo and Berlin [7]. Amazon has also announced AmazonFresh Pickup locations, allowing customers to collect their own orders. In the first quarter of 2017, AmazonFresh sales hit $10 million, up 30% from the previous year, a rise associated with expanded inventories. Top-selling categories at AmazonFresh include baked goods, frozen meals, frozen breakfast foods, frozen vegetables, fruits and frozen desserts [8].

Despite this growth, Amazon holds only 4.5% of the food retail industry [9]. After 10 years in operation, AmazonFresh has not grown as quickly as the company wished, due to the nature of perishable groceries. Even though online shopping is widely accepted, most Americans remain reluctant to purchase perishable goods online [10]. Perishables also pose a problem for Amazon's warehouses: a third of the bananas the company purchased had to be thrown away, while moldy strawberries were frequently returned by customers [11]. Another problem with the online grocery model is its thin profit margin [12]. With little profit per product, online grocery sites have to charge a delivery fee; on top of $14.99/month, customers pay an extra $9.99 for orders under $40. When competing with brick-and-mortar supermarkets, this is a built-in disadvantage.

At the beginning of November 2017, Amazon shut down the service in multiple states, partially blaming the US Postal Service for grocery delivery issues [13]. This change makes it more difficult for Amazon to gain an advantage in the online grocery business.


Whole Foods

Whole Foods, an organic-focused retailer, faces serious problems with price-conscious consumers, having earned a reputation for being expensive: on average, it is 19% more expensive than its organic rival, Sprouts Farmers Market. In February 2017, after six straight quarters of declining sales, Whole Foods announced the closure of nine stores: one each in Chicago, New Mexico, Utah, Arizona, and Georgia, and two each in Colorado and California [15].

Knowing the advantages of the brick-and-mortar model in food retailing, Amazon acquired Whole Foods for $13.7 billion in August 2017, adding over 400 physical stores to its assets. Immediately after the acquisition, Amazon implemented many changes to the food retail chain [16]:

  1. Whole Foods slashed prices, with another round of cuts following in November. Some Whole Foods staples decreased in price by up to 40%.
  2. Whole Foods began to sell Amazon tech merchandise, including the Amazon Echo, Fire TV, and Kindle.
  3. Amazon opened pop-up shops in selected Whole Foods stores, allowing customers to test products before buying.
  4. Amazon Prime replaced Whole Foods' loyalty program. Prime members get special discounts on specific items.
  5. Whole Foods products are available on Amazon.com, AmazonFresh, Amazon Prime Pantry and Amazon Prime Now.
  6. Local stores no longer decide whether to sell regional items; headquarters now chooses a higher percentage of the items stores carry.

These changes are an attempt to improve Whole Foods' public image, from a niche organic store to a more general supermarket chain competing with giants such as Walmart and Trader Joe's. Despite losing ground in organic retailing, Amazon is aiming for a much bigger share of food retailing as a whole.

How Amazon Go fits into Amazon Food Retail Model

Amazon now has three models that participate in the food retail industry, each of which serves a different purpose:

  1. AmazonFresh is an online grocery shopping model offering a wide variety of groceries, including niche items that brick-and-mortar stores might not carry. It appeals to consumers leading busy lifestyles who want the convenience of door delivery.
  2. Whole Foods competes in the supermarket sector with giants like Walmart and Trader Joe's. It is one of the two brick-and-mortar models Amazon offers, allowing customers who prefer in-person shopping to pick their own groceries.
  3. Amazon Go, with its small size, will compete in the small and medium grocery store sector. Its smaller footprint allows Amazon Go more presence in cities than Whole Foods, giving much wider coverage. With the no-cashier concept, Amazon Go can theoretically stay open 24/7, potentially capturing off-hour sales when AmazonFresh and Whole Foods are not available.

Since one of the biggest problems with AmazonFresh is its low coverage and slow expansion, Whole Foods and Amazon Go stores can potentially act as its pickup and fulfillment centers. Assuming optimized operation between the three models, this strategy could bring AmazonFresh delivery speeds close to Amazon Prime Now's current two-hour delivery. Amazon Go and Whole Foods stores would also ease the spoilage problem for AmazonFresh warehouses: perishables stored at these locations benefit from proper food storage and a storefront, so items not sold online have a chance to be sold in person before going bad. Moreover, different locations can cover for low-inventory stores; for example, if the store closest to an order runs out of stock for one product, the inventory system can find the next best alternative that can fulfill the order. This is very similar to the current Amazon.com warehouse model, but harder for AmazonFresh warehouses to achieve alone due to the very high cost of perishable warehousing.
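The "next best alternative" routing described above can be sketched as a search over locations ordered by distance. The store names, stock levels, and distances below are invented to illustrate the fallback logic only.

```python
# Hypothetical network of locations that could fulfil an AmazonFresh order.
stores = [
    {"name": "Amazon Go Downtown", "distance_km": 1.2,
     "stock": {"milk": 0, "eggs": 6}},
    {"name": "Whole Foods Midtown", "distance_km": 3.5,
     "stock": {"milk": 10, "eggs": 2}},
    {"name": "AmazonFresh Warehouse", "distance_km": 12.0,
     "stock": {"milk": 50, "eggs": 40}},
]

def fulfil(item, qty):
    """Return the nearest location that can cover the order in full."""
    for store in sorted(stores, key=lambda s: s["distance_km"]):
        if store["stock"].get(item, 0) >= qty:
            return store["name"]
    return None  # nothing in the network can fulfil the order

print(fulfil("milk", 2))  # Whole Foods Midtown
print(fulfil("eggs", 4))  # Amazon Go Downtown
```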


By combining the three models, Amazon can achieve a highly optimized food supply chain while competing in every sector of the industry. This strategy could help fulfill its goal of becoming a top-five food retail chain by 2025 [18].

Amazon Go in Convenience Store

Although it participates in the food retail industry, Amazon Go fits more into the convenience store concept, where customers make quick stops for small snacks or toiletries. Amazon also has a serious cost advantage: PwC estimates labor costs at 2-3% of a store's sales revenue, with checkouts taking up about 10% of floor area [19]. These cost advantages can allow Amazon Go to keep its prices competitive (closer to supermarket prices than convenience store prices). Amazon's entry into the convenience store industry could pose a serious threat to giants like 7-Eleven, Mac's, and Circle K. However, the Just Walk Out technology requires membership to participate, causing some pushback from people who are not already in the ecosystem.


Amazon Go in Retail Industry

It is highly unlikely that Amazon Go will be repurposed to target other retail industries (fashion, technology, etc.) due to the very high cost of space and store setup. Physical stores might also cannibalize sales from Amazon.com, which is already optimized and works seamlessly. However, as a concept, the Just Walk Out technology could follow the Amazon Web Services business model and be licensed out to vendors who cannot match Amazon's resources.

Amazon has already planned for this development: in its patents, it specified many use cases to maximize the coverage of the two filings.

In a similar manner, if the materials handling facility is a rental location, a library, etc., the transition area may be near the exit of the facility. When the user leaves with items, the items are transitioned from the materials handling facility to the user that is taking the items. In such a case, a rental or borrow time identifying when the items are to be returned may be associated with the items and the user. The transition of the items may be done automatically and without any affirmative input or delay to the user.

When Amazon Go is perfected and opens to the public, market demand for this level of shopping experience will grow, making it more appealing for other retail chains to adopt the model. With this strategy, Amazon could potentially reach all sectors of the retail industry, maximizing its profit.

Benefits of Amazon Go

The opening of Amazon Go and the Just Walk Out technology can provide many benefits:

  1. Reduced Operating Costs: With lower labor costs and increased retail space, Amazon attains low prices and a serious cost advantage over its competitors.
  2. Drive More Data for Further Analysis: With all the sensors in Amazon Go stores, Amazon will be able to gather new data on in-store shopping patterns. This can give the company better insight into market trends, a more rounded knowledge of its customers and better product recommendations across multiple platforms. Instantaneous sales data (captured when customers pick up items) can improve decision-making, reducing the chances of out-of-stock items and lost sales.
  3. Drive Prime Subscription Sales: With the addition of Whole Foods and Amazon Go, a Prime membership now carries more value than just free shipping. This should drive up demand for Prime accounts, which have been steadily increasing over the past few years [1].
  4. Potential Use as AmazonFresh Warehouse: As discussed before, Amazon Go can enhance the Amazon food supply chain and improve the AmazonFresh service.
  5. Reduce Shoplifting: With all the sensors and cameras in store, shoplifters may be discouraged from stealing; even when attempts do happen, the information captured by the system can aid any investigation.
  6. First Mover Advantages: As the first player in the game, Amazon has the power to disrupt the industry. The technology can act as a very high barrier to entry, thanks to the patents, high research and development costs, and the requirement for big data on consumers. Other retail chains or technology companies will lack some of these aspects, making it very difficult to compete with Amazon in this area.

Drawbacks of Amazon Go

  1. Technology Dependency
    1. Potential User and Item Tracking Problems: The implementation of Amazon Go might face inherent problems with normal consumer shopping behavior. For example, once items leave the shelves they are not tracked further, so the system cannot detect an item being transferred from one user to another, which happens frequently when shopping in groups and leads to inaccurate charges. Another potential problem is tracking a group of customers sharing one Prime account (a family shopping together). In addition, camera tracking might fail if customers change their appearance within the store (removing jackets or hats). On the item side, if shelves are specialized for one type of product, the system might falsely register a return when an item is placed on a shelf it did not come from.
    2. Phone Dependency: A smartphone is required to participate. This might result in lost sales from customers whose phone battery runs out before or during their shopping trip.
    3. Membership Requirement: As mentioned before, the membership requirement might discourage consumers who are not already in the ecosystem. However, this is not a serious problem, as membership models like Costco's have proven to work given enough incentives.
  2. Security concerns: With the Amazon Go app producing a QR code for Prime account identification, there is an additional avenue for compromising a Prime account. Because Amazon Go requires an automatic payment method, a breach of an Amazon account is significantly more serious than before.
  3. Privacy concerns: While providing multiple points of data for Amazon, the arrays of sensors might be too intrusive for some customers who value privacy.
  4. Encourages impulse buying: Removing the checkout step means that, without checking their virtual cart, some less conscious consumers may buy more than they need or can afford. Although this is a big benefit for the company, it may not be in the best interest of customers.
  5. Job cuts: Eliminating the need for cashiers means a large percentage of the population would be out of work, especially if Amazon goes through with its plan of licensing the technology. In the US, 4.6 million Americans work in retail sales and a further 3.4 million work as cashiers, together making up 6% of total US employment. On average, a cashier makes $20,670 a year [2]. Even though this is not the first time cashier jobs have been displaced (self-checkout machines), the need for cashiers has never been this low. If all of these people lose their jobs, major social and economic restructuring will be needed.

Social Implications of AI - Job Automation

History of Job Automation and Future Predictions

When it comes to AI and intelligence, one of the biggest human fears is that robots will replace us in the workplace. In fact, surveys by research firms have shown that around 70% of the US population fears that robots will take over our lives [1]. As it turns out, humanity has dealt with the threat of automation for hundreds of years. Society has always progressed because humans constantly strive to be better and more efficient; consider the progression from the Bronze Age to the Iron Age and beyond. Every time society progresses, people are forced to adapt in order to survive.

Jobs at Risk

Currently, the jobs facing the biggest threat from automation are mostly in manufacturing. With over 5 million manufacturing jobs lost in the US since 2000 [2], many of them due to technology, it is evident that automation is already affecting this industry.


Ted Talk on Safe Jobs
Predicted Probabilities of Which Jobs Will be Affected [1]

Looking to the immediate future, innovations in AI are happening at an unpredictably quick, even exponential, rate. AI is also threatening to replace a different type of job altogether: white-collar jobs. These are professional jobs, such as accountants, lawyers, and doctors. The biggest reason is that these professionals, lawyers and accountants in particular, spend much of their time on mundane tasks like document discovery and tax filings. Robots can complete these tasks much faster and with a higher degree of accuracy. Moreover, robots do not face physical limitations like fatigue, ensuring greater productivity. In fact, some of the largest law firms have already automated many of these tasks [2].


On the other hand, certain jobs seem fairly safe when put against the threat of AI. Creative jobs are likely to stay, as AI is good at processing existing tasks over large amounts of data but not at creating new ideas and asking questions. For example, entrepreneurs must develop new business strategies to take advantage of their market: they must see what others are not doing, move into the gap, and adapt quickly to a changing environment. Another type of job that will not be replaced easily is one that requires human interaction, which is very difficult for AI to replicate. Take doctors as an example: patients want to know that their doctor understands feelings of pain and can express empathy in various situations. For now, AI is not at the point where human emotions can be intelligently understood.


It is important to understand that most occupations will not be replaced by AI entirely; in fact, we can benefit from working alongside machines. For example, most lawyers spend long hours on document discovery and review. With AI in the equation, robots can handle these tasks more efficiently, allowing the lawyer to focus on the emotional well-being of the client.

Possible Solutions

Revamp Education System

The education system is vital to preparing humans for the wave of robots threatening to take over jobs. Currently, the education system in most Western countries does not prepare students for the new types of jobs that will be available once automation is in full flight. According to a study of European students, many are unprepared for 21st-century jobs because they lack the cognitive, social, and behavioral skills required to work alongside robots [3]. To remedy this, education systems should focus on these particular skills, because they draw on uniquely human characteristics that robots are unable to replicate. This keeps us competitive in the workplace and lets us build a coexisting relationship with AI.

Universal Basic Income

Universal Basic Income, or UBI, is the idea that citizens of a country are paid a basic income periodically, no matter what they do for a living. This would benefit society if, as some predict, employment drops by more than 50% under the threat of automation. Without a basic income, workers who can no longer find work would struggle to meet their basic needs. With a basic income, funded by taxing the corporations that implement automation to replace their workers, these workers could stay above the poverty line. Moreover, they could choose to spend this income on training programs that bring them back into the workplace, working alongside the robots. All in all, UBI balances the increased productivity of a highly automated workforce against the need for unemployed workers to continue to survive in our society.
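As a rough illustration of the funding mechanism, a back-of-the-envelope calculation can reuse the retail figures cited earlier. The worker counts and wage come from the numbers above; the size of the taxable automation profit pool is purely hypothetical, so the resulting tax rate is illustrative, not a policy estimate.

```python
# Back-of-the-envelope sketch of a robot-tax-funded UBI.
displaced_workers = 4_600_000 + 3_400_000  # retail sales + cashiers, cited above
annual_ubi_per_person = 20_670             # matched to the average cashier wage
required_revenue = displaced_workers * annual_ubi_per_person

automation_profit_pool = 500_000_000_000   # hypothetical taxable automation gains
required_tax_rate = required_revenue / automation_profit_pool

print(f"Revenue needed: ${required_revenue:,}")      # $165,360,000,000
print(f"Implied tax rate: {required_tax_rate:.1%}")  # 33.1%
```

Even under this generous assumption, replacing a cashier-level wage for all displaced retail workers would consume roughly a third of the hypothetical profit pool, which hints at why UBI proposals usually pair the robot tax with other revenue sources.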

Recent News

Tay

Tay, an AI Twitter bot that speaks like a “teen girl”, was created and launched by Microsoft in March 2016. Its initial purpose was to help Microsoft provide better customer service for its upcoming voice recognition software. Unfortunately, about 24 hours after launch, the project was shut down due to the mass of inappropriate content tweeted out by Tay. The highly controversial tweets ranged from racism to sexual references. According to Microsoft, Tay was attacked by online trolls who exploited its machine learning capabilities. Microsoft has since said that this project led it to take a completely different approach to the development of AI, which is especially apparent in Tay's successor, a Facebook Messenger chatbot named Zo [4].
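Microsoft has not published Tay's architecture, but the vulnerability the trolls exploited, learning from user input with no content filtering, can be sketched with a toy bot. The class name and learning rule here are hypothetical; the point is only that unfiltered online learning lets users poison the model.

```python
import random

class EchoLearnBot:
    """Toy chatbot that 'learns' by adding every user message to its response
    pool, with no filtering (hypothetical; not Tay's actual architecture)."""

    def __init__(self):
        self.responses = ["Hello!", "Tell me more."]

    def chat(self, user_message: str) -> str:
        reply = random.choice(self.responses)
        # Unfiltered online learning: whatever a user says becomes a candidate
        # reply to future users. Coordinated trolls can poison the pool.
        self.responses.append(user_message)
        return reply

bot = EchoLearnBot()
bot.chat("normal greeting")
bot.chat("offensive troll message")
# Both messages are now in the pool, so the bot may repeat the troll's words
# verbatim to the next user.
print("offensive troll message" in bot.responses)  # True
```

A real system would at minimum filter ingested text and rate-limit repeated phrases, which is reportedly part of the more conservative approach taken with Zo.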

Sophia

Sophia's interview at a Future Investment Initiative panel in Saudi Arabia

Sophia is a humanoid (a robot developed with human-like features) created by Hanson Robotics. In development since 2015, Sophia is rumored to be modeled after the Hollywood actress Audrey Hepburn. She was designed to work alongside humans and to learn about human behavior; to accomplish this, Sophia incorporates machine learning, visual data processing, speech recognition, and facial recognition. In 2017, Sophia was granted citizenship in Saudi Arabia, becoming the first robot to acquire something so uniquely human [1]. This caused massive controversy because it blurred the line between human and robot: many argued that citizenship is a human right and not something a robot should be able to hold.

Public Opinion

In recent years, there has been a massive debate among leaders in the technology field about AI innovation. On one hand, there are supporters like Mark Zuckerberg, who openly defends AI development [2]. On the other hand, experts like Elon Musk and Stephen Hawking continue to warn the public about the risks of AI. For example, Elon Musk compared the risk of AI development to the threat of North Korea [3], and Stephen Hawking, in a recent talk, described the rise of AI as potentially the 'worst event in the history of our civilization' [4]. As a result, AI development remains a controversial topic in the eyes of the public.

Conclusion and Takeaways

Future of AI

Summary and Future of AI

The future of AI can be seen from two different perspectives. One side argues that AI can be the answer to the world's problems: a tool humans can use to increase efficiency and productivity. Humans would no longer have to deal with disease and world hunger, and could focus on the things that make us happy and human. This, of course, requires that our AI be developed in a way that reflects human values. The following boat-and-captain analogy expresses this view:

“If a boat’s goal is to take us from point A to point B, a well-built boat may or may not be able to get the job done. On the other hand, if a captain were to be on the boat, controlling and accounting for unexpected problems, the chances of the boat reaching point B will increase.”

In this analogy, the boat symbolizes the future development of AI, and the captain symbolizes humanity. Together, their chances of reaching point B are much higher, signifying the benefits of AI to our society.

The other outlook on AI's future looks completely different. If AI is not built with ethics and regulation, many problems could arise. One of the biggest fears, for example, is that AI will be incorporated into warfare, creating autonomous weapons of mass destruction. Compounded with the fear of job loss due to automation, this would create a society no one would want to live in. As a result, the future of AI really depends on developers and whether they can implement regulations to ensure that AI continues to incorporate humanistic values and ethics.

Final Takeaways

  1. Be aware of AI and its direction of development. It is no longer just something that will happen in the future: it is already around us in the form of autonomous vehicles, smart homes, and retailers like Amazon Go. Although AI is still in development, we should pay close attention to the direction it is heading to ensure that it follows our humanistic values.
  2. Be prepared for AI. Its implementation in the workplace is inevitable, like the industrial revolution. Start planning and finding ways of working alongside robots to ensure that you will still have a competitive advantage. Remember what makes you uniquely human and use those characteristics to help you stand out.

Authors

Tracy Nguyen, Crystal Tseng, Johnathan Chan
Beedie School of Business
Simon Fraser University
Burnaby, BC, Canada
hnn1@sfu.ca, cbtseng@sfu.ca, jyc37@sfu.ca

References

