Artificial Intelligence (Fall 2017 - D100)

From New Media Business Blog



Artificial Intelligence

Artificial Intelligence (AI) is the idea of building machines that are capable of thinking like humans. By ingesting the data we feed it, a machine can interpret the world around us and use that information to effect change. Currently, artificial intelligence can learn a wide range of tasks, reason, use language, and formulate ideas. Unlike a search engine, which uses keywords to look up related topics in a structured database, artificial intelligence puts pieces of information together to interpret and understand the meaning of language. Artificial intelligence is now one of the main drivers behind technological advances such as autonomous vehicles, robotics, and smart technology. [1] [2]

The Advantages and Disadvantages of Artificial Intelligence


  • Jobs - As AI becomes more capable, difficult, complex, or even dangerous tasks currently performed by humans could be handled by applied artificial intelligence. Because there is no risk of harm to humans when an AI-driven machine is exposed to high-risk situations, companies are leveraging AI to explore undiscovered territory, and even other planets. AI can complete dangerous tasks in places humans are incapable of going.
  • Increase our Technological Growth Rate - AI could help us discover new and more advanced technologies. For example, researchers use AI to broaden the range of information they can access and answer queries more effectively. Because AI can process vast amounts of data more quickly and accurately than humans, it could act as a catalyst for further technological and scientific discovery.
  • They do not stop - Unlike humans, machines do not get tired or sick. They do not require lunch breaks or sleep, only an energy source, and can therefore complete work more consistently than a human. Because of this, AI is already used to assist people with disabilities, and the elderly, by providing aid 24/7 and ensuring their safety and wellbeing. [3]
  • Care-Bot for the Elderly in Japan
  • Diving Robot to Explore the Ocean


  • Over-reliance - As with any technology, there is a risk that we rely too heavily on machines to perform tasks for us, leading to a loss of capability among humans. We are prone to becoming dependent on technology and could be left helpless if these systems were simply to shut down. As we build AI ever deeper into our lives, such a failure could severely damage the whole economy.
  • Potential to replace jobs - Machines can already perform many tasks, such as heavy lifting and repetitive work, better than humans. The concern is that AI may replace many jobs and lead to high unemployment. People whose jobs are replaced by machines may feel useless, which could contribute to other societal issues such as mental illness.
  • Misuse - Artificial intelligence is a revolutionary technology, and there is no doubt that if this powerful technology ends up in the wrong hands, it could be used to cause mass destruction. If, for example, an algorithm malfunctioned or someone hacked into the system, there is the potential for a Terminator-like robot army to be used against an enemy to gain power and take over the world. [3]
  • Wall-E, the Last Robot Left on Earth
  • James Cameron New Terminator


IBM Watson

Watson Jeopardy

IBM Watson is a computer system capable of answering questions posed in natural language. To answer a query, Watson requires access to unstructured and structured content that has been fed into a corpus of knowledge. Building the corpus requires human experts to curate the content relevant to a specific task. Once the data has been ingested, the metadata produced helps build a knowledge graph that allows Watson to answer more precise questions. When asked a question, Watson parses it into keywords and sentence fragments in order to find statistically related phrases. Watson is also continually trained by experts through question-and-answer pairs, a process known as machine learning. Through this ongoing interaction with humans, Watson constantly adapts and shifts its knowledge to provide more accurate responses. [4]
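
The passage above describes Watson parsing a question into keywords and scoring statistically related evidence. The sketch below illustrates that idea in miniature, assuming a tiny made-up corpus and a simple word-overlap score; Watson's actual pipeline (deep NLP, a knowledge graph, and trained scoring models) is far more sophisticated.

```python
# A minimal sketch of the keyword-and-evidence-scoring idea described above.
# The tiny corpus, tokenizer, and overlap score are hypothetical stand-ins for
# Watson's far more sophisticated NLP pipeline and knowledge graph.
import re
from collections import Counter

corpus = {
    "Ottawa": "Ottawa is the capital city of Canada, located in Ontario.",
    "Toronto": "Toronto is the largest city in Canada by population.",
}

def tokenize(text):
    """Lowercase a string and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def answer(question):
    """Return the candidate whose evidence text best overlaps the question."""
    q_tokens = Counter(tokenize(question))
    scored = []
    for candidate, evidence in corpus.items():
        overlap = sum((q_tokens & Counter(tokenize(evidence))).values())
        scored.append((overlap, candidate))
    overlap, best = max(scored)
    confidence = overlap / max(sum(q_tokens.values()), 1)
    return best, confidence

print(answer("What is the capital city of Canada?"))  # ('Ottawa', ...)
```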

IBM Watson took on the challenge of facing Ken Jennings and Brad Rutter, two of the most successful Jeopardy! champions in the show's history. In early 2011, Watson beat these champions and began to gain recognition for its ability to outperform humans at the game. [5]

  • IBM Watson

  • Jeopardy

IBM Deep Blue

IBM Watson’s predecessor was IBM Deep Blue, which beat the world chess champion, Garry Kasparov, in a six-game match. Deep Blue was programmed to explore up to 200 million possible chess positions per second, and on May 11, 1997, it defeated the human champion. The machine’s capabilities attracted a great deal of media attention, and industries began to explore opportunities for applying such complex calculations in areas such as drug discovery and large database searches. IBM later created Watson, which can answer queries by finding relationships in text. This shows that the technology is continuously advancing and becoming “smarter.” [1]
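
Deep Blue's strength came from searching enormous game trees. The toy sketch below shows the underlying minimax idea on a simple take-1-or-2 counting game; Deep Blue itself used alpha-beta pruning, chess-specific evaluation functions, and custom hardware, none of which are reproduced here.

```python
# A generic minimax sketch illustrating the game-tree search Deep Blue performed.
# A toy take-1-or-2 counting game stands in for chess: players alternately
# remove 1 or 2 stones, and taking the last stone wins.
def minimax(pile, maximizing):
    """Return +1 if the maximizing player can force a win, -1 otherwise."""
    if pile == 0:
        # The previous player took the last stone, so the side to move has lost.
        return -1 if maximizing else 1
    results = [minimax(pile - take, not maximizing)
               for take in (1, 2) if take <= pile]
    return max(results) if maximizing else min(results)

print(minimax(3, True))  # -1: three stones is a losing position for the side to move
print(minimax(4, True))  # +1: four stones is winning (take one, leaving three)
```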

Adoption of AI in the Job Market

Evolution of Employment

Insatiability, David Autor

Employment is an evolutionary process, as technology creates new and exciting careers. Since 1975, we have been in the information and telecommunications revolution, and during this time technological advances have resulted in major shifts in employment. Some examples are the introduction of ATMs, which reduced a bank teller's repetitive tasks; the introduction of robotics in manufacturing, which increased production efficiency; and the transition from paper-based filing systems to electronic databases and mobile communications, which reduced dependence on office administration. Despite all of these changes, employment levels are higher than they have been throughout history.

Potential Job Replacement

Technological advancements in everything from IoT, computer automation, robotics, and automated vehicles to big data analytics are supported by artificial intelligence. With the implementation of these technologies, there is a change in the way we perform tasks in the workplace. McKinsey & Company found that 30% of the tasks performed by 60% of the current workforce could be automated by AI in the near future. Below is a list of the jobs most and least susceptible to automation, as predicted by Carl Benedikt Frey and Michael Osborne in their 2013 study, The Future of Employment: How Susceptible Are Jobs to Computerisation? [1]

Jobs Potentially Replaced

Cross-Pollination, Anthony Goldbloom

There is a decline in middle-income positions in manufacturing and in mid-level white-collar office jobs, as the repetitive nature of these roles makes them easy to automate. While these roles are declining, there is a shift towards high-income occupations requiring high-level cognitive skills. Low-income service jobs remain in high demand because the flexibility and adaptability required in these positions are not easily replicated by AI. The roles most susceptible to obsolescence are those that are repetitive and have well-defined procedures; these features make it easy to train AI to perform the tasks, increasing organizational efficiency while reducing errors. Future careers will require empathy and human perception that enable creative problem-solving and decision-making skills that cannot be mimicked by machines.

Careers in AI

Research performed by MIT Sloan Management Review has identified three categories of jobs that will be required to ensure the effective and responsible use of artificial intelligence. The categories are:

  • Trainers: Trainers teach AI how to behave and perform the tasks required for their assigned roles. For example, trainers will teach natural-language processors and translators to make fewer errors, to mimic human behaviours such as empathy, and to detect nuances of human communication such as sarcasm. These roles include algorithm specialists, software engineers, and interaction modelers.
  • Explainers: Explainers will have both business and technology skill sets and will help bridge the gap between business leaders, governments, and technologists. Their role is to understand the complexity of how algorithms perform their tasks and to explain these complexities to non-technologists, increasing the transparency of how AI is learning and making decisions. Explainers will use tools and techniques such as Local Interpretable Model-Agnostic Explanations (LIME) to audit algorithms by making changes to input variables and observing how the decision changes, revealing the data the AI used to form its conclusion (see the sketch after this list). Roles in this field include context designer, transparency analyst, and usefulness strategist.
  • Sustainers: Sustainers will ensure that AI is working as intended. Ethical compliance managers will identify issues within an algorithm and work with explainers to determine how the AI is making its decisions; trainers will then debug the system and remove biases. These roles will be of the utmost importance as they provide confidence that the system is working appropriately. Roles include automation ethicist, economist, and machine relations manager. [1] [2]
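
As referenced in the Explainers role above, perturbation-based tools such as LIME probe a model by changing inputs and watching the output. The sketch below is a much-simplified illustration of that auditing idea, with a hypothetical insurance-style scoring model; it is not the actual LIME library, which fits a local surrogate model around a single prediction.

```python
# A simplified perturbation audit in the spirit of LIME. Real LIME fits a local
# surrogate model around one prediction; this sketch just toggles each input
# and records how much the (hypothetical) black-box model's score moves.
def black_box_model(applicant):
    """Hypothetical stand-in for an opaque AI decision model."""
    score = 0.0
    score += 0.5 if applicant["income"] > 50000 else 0.0
    score += 0.3 if applicant["years_employed"] > 2 else 0.0
    score += 0.2 if not applicant["prior_claims"] else 0.0
    return score

def explain(applicant, perturbations):
    """Return how much each perturbed input changes the model's output."""
    baseline = black_box_model(applicant)
    effects = {}
    for feature, new_value in perturbations.items():
        perturbed = dict(applicant, **{feature: new_value})
        effects[feature] = black_box_model(perturbed) - baseline
    return effects

applicant = {"income": 40000, "years_employed": 5, "prior_claims": True}
print(explain(applicant, {"income": 60000, "years_employed": 1, "prior_claims": False}))
# e.g. {'income': 0.5, 'years_employed': -0.3, 'prior_claims': 0.2}
```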

How to Prepare for the AI Job Market

AI is penetrating the workplace at an incredible pace. The World Economic Forum published a report in 2016 that defined the core skills required for the next five years.

Organizations have identified the importance of upskilling their workforce; however, there has been little action in this direction, primarily due to a lack of understanding of AI and how it can and will affect the employment landscape. Though it is unclear exactly how AI will affect the workforce, it is certain that professionals need to prepare for lifelong learning, adapt to technological change, and develop the strong social skills that differentiate them from AI. [3]

Core Work-Related Skills

Next Generation

In Canada, there are numerous organizations researching and sharing ideas on how best to educate children for the future. A report produced by Action Canada discusses a nationwide endeavor to prepare educators and students, facilitated by the Council of Ministers of Education. These new strategies include not only preparing students for constant technological, economic, and social changes but also focusing on ongoing professional development for educators. [4] The educational system will need to become more easily adaptable to keep pace with technology and continue to ensure a prosperous future for the next generation. The curriculum should focus on problem-solving and critical thinking skills, creativity, adaptability and interpersonal skills with greater emphasis on science, technology, engineering, and mathematics (STEM). Furthermore, schools should consider including robotics, coding, computational math (statistics, probabilities, logic and graph theory) and computational arts for a strong foundation in innovation. [5]

Governance of AI

AI research organizations have existed for decades, and in 2015 Elon Musk co-founded OpenAI, which gives the public access to all of the research conducted by the organization. This is an effort to provide transparency around algorithms, advancements, discoveries, and the inner workings of AI and its applications. In January 2017, the Ethics and Governance of Artificial Intelligence Fund was established. This fund is unlike other research organizations in that it not only includes technologists but incorporates the insights of ethicists, economists, lawyers, social scientists, and policymakers to provide a collaborative view of AI's effect on society. [2]

While the governance of AI is still new, precedents are being set. Tesla's partially automated vehicles have been involved in multiple legal cases, and with each case the AI is being retrained to reflect these real-world experiences. Furthermore, this creates greater public awareness of the capabilities and limitations of AI and prepares us for a future with autonomous technologies.

In 2018, the European Union will implement the General Data Protection Regulation, which will effectively create a "right to an explanation" for decisions made by AI. For example, if you were denied an insurance policy that had been assessed by an AI algorithm, you would have the right to contest the decision and have the insurance organization explain how the AI came to its conclusion. This legislation gives the public greater control over their data and privacy while exposing how organizations are using AI to gather and interpret information. [6]

Business Applications

Oncology Industry - Watson

Watson for Oncology

IBM developed Watson for Oncology to help clinicians find the best treatment options for their cancer patients. It is not easy for a doctor to remember every treatment plan and keep up with new plans for different cancer types. Watson supports doctors by identifying the best individualized treatment option for the patient, along with supporting medical evidence. [1]

Watson for Oncology is an AI-driven cognitive computing system that can rapidly process large volumes of data. Watson uses natural language processing, probabilistic algorithms, and machine learning models to analyze data from a patient's medical record, lab results, and the doctor's notes, and then combines this with data from the medical literature to find the best treatment plan for the patient. [1] [2]
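
To make the data-combining step above concrete, the sketch below ranks hypothetical treatments by matching a patient record against literature-derived criteria and sorting by evidence level. The patient fields, treatments, and scoring are invented for illustration; Watson's real system extracts this kind of information from free-text records and publications with NLP.

```python
# A minimal, invented sketch of evidence-based treatment ranking: keep only the
# treatments whose criteria match the patient record, then sort by how strong
# the supporting evidence is.
patient = {"cancer_type": "breast", "stage": 2, "her2_positive": True}

treatments = [
    {"name": "Therapy A", "criteria": {"cancer_type": "breast", "her2_positive": True}, "evidence_level": 3},
    {"name": "Therapy B", "criteria": {"cancer_type": "breast"}, "evidence_level": 2},
    {"name": "Therapy C", "criteria": {"cancer_type": "lung"}, "evidence_level": 3},
]

def eligible(patient, criteria):
    """A treatment applies only if every criterion matches the patient record."""
    return all(patient.get(key) == value for key, value in criteria.items())

ranked = sorted(
    (t for t in treatments if eligible(patient, t["criteria"])),
    key=lambda t: t["evidence_level"],
    reverse=True,
)
print([t["name"] for t in ranked])  # ['Therapy A', 'Therapy B']
```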

Watson for Oncology was trained for five years by oncologists at New York's Memorial Sloan Kettering Cancer Center to learn data related to patients' cancers. [2] Currently, Watson is trained on eight different cancer types (breast, prostate, colon, rectal, cervical, gastric, ovarian, and lung) and is used in over 50 hospitals around the world. [1]

Mining Industry - Watson

Goldcorp Inc. Using IBM Watson to Target the Mining Industry

Goldcorp Inc. is joining forces with IBM Watson to target the mining industry, using its cognitive technology to identify unexplored sites with gold potential. [3] Watson will ingest, study, and combine an immense quantity of data, including drilling reports, surveys, historical information, process logs, reports, and studies, along with the field experience of veteran geologists and engineers. [3] After gathering all of this learning, it will apply its knowledge to 3D models and maps. [3]

Watson will be able to pinpoint gold-bearing areas more accurately, as well as detect hazardous situations by analyzing the mining site. [4] It will help Goldcorp Inc. reduce the cost of managing and storing its growing collection of data. [4] It will also make data retrieval faster, and its accuracy will avoid the sunk costs of multiple erroneous drillings while providing a safer working environment for employees. However, Watson will not replace geologists and engineers; they will work side by side to explore new sites with more speed and precision. [4]

Pharmaceutical Industry - Atomwise

IBM Helps Atomwise Discover Potential Drug Treatments

Atomwise develops artificial intelligence systems to discover new medicines. The systems use supercomputers and drug discovery algorithms to run through millions of different molecular structures, find the most workable candidates for a disease target such as Ebola, cancer, leukemia, or multiple sclerosis, and simulate how possible medicines would interact with that target. [1] Because the system is built on a simulation model, it is possible to test chemical compounds that could work as a potential drug without physically making the drug. Once the most effective compounds have been determined, researchers can decide which drugs should be developed and tested. [2]

The drug discovery algorithm uses machine learning, and in particular neural networks, to learn about different proteins, the building blocks of organic chemistry, and drugs by studying how previous drugs have worked. In this way, the system can apply the patterns it has learned to find new hypothetical drugs. [2]
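
The sketch below illustrates the pattern-learning idea described above on made-up data: a small neural network is trained on binary "molecular fingerprints" labelled active or inactive and then scores a new virtual molecule. Atomwise's production models are deep networks trained on real structural chemistry data, not this toy setup.

```python
# A toy sketch of structure-activity learning with a small neural network.
# The fingerprints, labels, and candidate molecule below are entirely invented.
from sklearn.neural_network import MLPClassifier

# Each row is a hypothetical fingerprint: 1 = substructure present, 0 = absent.
fingerprints = [
    [1, 0, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 0, 1],
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 0, 0, 1, 1],
]
active = [1, 1, 0, 0, 1, 0]  # 1 = molecule bound the disease target

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(fingerprints, active)

candidate = [[1, 1, 1, 1, 0]]  # a new, untested virtual molecule
print(model.predict_proba(candidate))  # predicted probability of being active
```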

In 2015, Atomwise’s artificial intelligence technology found two potential medicines that could be effective against the Ebola virus. Today, Atomwise is used to analyze millions of potential drugs to reduce or eradicate diseases, further expanding its knowledge of chemistry for future drug innovations. [3]

The average cost to create a new medicine and bring it to market is approximately $2.5 billion, and the process can take over ten years. With Atomwise, the expense and timeline required to research new treatments and drugs could be drastically reduced, potentially saving millions of lives. [2]

Legal Industry - ROSS Intelligence

Meet ROSS, Your Brand New Artificially Intelligent Lawyer

ROSS Intelligence is an artificial intelligence legal research tool, built on IBM’s Watson cognitive computing platform, that helps lawyers with research for a case. Because ROSS uses natural language processing, a lawyer can ask it a question, and it reads and analyzes the entire body of law and billions of text documents in seconds, returning an answer with up-to-date readings for the lawyer. ROSS also shows its level of confidence in the answer, so the lawyer knows whether it is trustworthy. [1]

ROSS uses machine learning and is taught to understand the law, not just the words. It gets smarter all the time by learning from the feedback it receives: lawyers can, for example, upvote or downvote ROSS’s answers based on how good they were. [2]
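
The feedback loop described above can be pictured as a simple score adjustment, as in the sketch below: candidate passages carry relevance scores, and upvotes or downvotes nudge them for future rankings. The passages, scores, and update rule are hypothetical and stand in for ROSS's far richer learning process.

```python
# A minimal, invented sketch of upvote/downvote feedback shifting a ranking.
relevance = {
    "Smith v. Jones (2012) para. 14": 0.72,
    "Employment Standards Act s. 63": 0.68,
    "Doe v. Acme Corp (2009) para. 3": 0.55,
}

def top_answer(scores):
    """Return the currently highest-scoring passage."""
    return max(scores, key=scores.get)

def record_feedback(scores, passage, upvote, learning_rate=0.05):
    """Nudge a passage's score up or down based on lawyer feedback."""
    scores[passage] += learning_rate if upvote else -learning_rate

print(top_answer(relevance))                                   # Smith v. Jones ...
record_feedback(relevance, "Smith v. Jones (2012) para. 14", upvote=False)
record_feedback(relevance, "Employment Standards Act s. 63", upvote=True)
print(top_answer(relevance))                                   # Employment Standards Act ...
```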

ROSS is not built to replace lawyers; instead, it helps them with their research so they can spend more time working with clients and creating strategies for their cases. It will also cut costs and lower the entry point for customers. [2]

Finance Industry - FICO Falcon Fraud Manager

One of the biggest AI trends in the finance industry is using artificial intelligence to prevent, detect, and resolve financial fraud. One such fraud detection platform is FICO Falcon Fraud Manager, which uses artificial intelligence to handle a company’s fraud detection needs. The platform has deep insight into fraud activities and trends: by analyzing big data, it can examine behaviour patterns across millions of transactions to detect fraud. It uses self-learning analytic models that learn in real time from case dispositions. The FICO Falcon platform can detect up to 50% more fraud than rule-based systems while also reducing the number of false positives. [3]
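
A simplified way to picture behaviour-based fraud scoring is shown below: a new transaction is compared against the cardholder's historical spending profile and flagged if it deviates too far. Falcon's real models are self-learning neural networks over many behavioural features; the profile, z-score, and threshold here are illustrative assumptions only.

```python
# A minimal sketch of behaviour-based fraud scoring: flag transactions that sit
# far outside the cardholder's typical spending. The history and threshold are invented.
import statistics

history = [24.50, 18.00, 32.75, 27.10, 21.90, 29.40]  # past purchase amounts

def fraud_score(amount, past_amounts):
    """Return how many standard deviations the amount sits above typical spend."""
    mean = statistics.mean(past_amounts)
    stdev = statistics.stdev(past_amounts) or 1.0
    return (amount - mean) / stdev

for amount in (26.00, 480.00):
    score = fraud_score(amount, history)
    flagged = score > 3.0  # assumed review threshold
    print(f"${amount:.2f}: score {score:.1f}, flagged={flagged}")
```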

E-commerce Industry - Amazon

To increase its sales, Amazon uses recommendation engine algorithms based on artificial intelligence to suggest to customers what they should buy. The algorithms give returning customers a customized experience on the webpage. Amazon’s recommendation engine learns from past data and is driven by machine learning and deep learning. [4] The algorithms draw on data about what a customer has bought before, items they have liked and rated, what other customers have bought and viewed, and what the customer currently has in their shopping cart. [5]
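
One classic way to build such recommendations is item-to-item collaborative filtering, sketched below on a made-up order history: items that are frequently bought together with what is already in the cart are suggested first. Amazon's production engine combines many more signals (views, ratings, deep learning models) than this toy example.

```python
# A minimal item-to-item collaborative filtering sketch over invented orders.
from collections import Counter
from itertools import combinations

orders = [
    {"camera", "sd_card", "tripod"},
    {"camera", "sd_card"},
    {"camera", "camera_bag"},
    {"laptop", "laptop_sleeve"},
]

# Count how often each pair of items appears in the same order.
co_purchases = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_purchases[(a, b)] += 1
        co_purchases[(b, a)] += 1

def recommend(cart, top_n=2):
    """Suggest items most often co-purchased with the current cart contents."""
    scores = Counter()
    for item in cart:
        for (a, b), count in co_purchases.items():
            if a == item and b not in cart:
                scores[b] += count
    return [item for item, _ in scores.most_common(top_n)]

print(recommend({"camera"}))  # e.g. ['sd_card', 'tripod']
```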

The recommendations surface items a customer would like to buy but is unlikely to find on their own. They are important for Amazon’s sales: it is estimated that around 35% of all Amazon’s sales are generated by its recommendation engine. [6]

Automotive Industry - Tesla

Tesla Autopilot 2.0 Full Self Driving Hardware

Tesla has developed self-driving features in its electric vehicles, the Model S, Model X, and Model 3. [1] Each vehicle is equipped with eight surrounding cameras to capture and recognize information about its environment. It uses neural network technology to decode data collected from its ultrasonic sensors, forward radar, and rear and forward cameras, giving it the ability to calculate the distance to objects through conditions such as dust, fog, and rain. [1]

A trip in a Tesla’s self-driving mode starts with the user getting into the car and commanding it to steer toward the desired destination. If no command is received, the vehicle checks the user’s calendar and transports the user to the presumed destination. [1] If neither option is available, it will automatically take the user home. [1] After arriving at the destination, the user steps out of the vehicle and it enters park seek mode, scanning for an empty spot and then automatically parking itself. [1] Finally, the user can summon the car back with Smart Summon, a simple tap on their smartphone. [1]
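
The destination-selection fallback described above amounts to a simple priority rule, sketched below with hypothetical function and field names; this is an illustration of the described behaviour, not Tesla's actual software.

```python
# A minimal sketch of the destination fallback: explicit command first, then the
# next calendar event's location, then home. All names and data are invented.
def choose_destination(voice_command, calendar_events, home_address):
    """Pick where the car should drive, in order of preference."""
    if voice_command:
        return voice_command
    if calendar_events:
        # Assume events are sorted by start time; head for the next one.
        return calendar_events[0]["location"]
    return home_address

print(choose_destination(None,
                         [{"title": "Dentist", "location": "4500 Kingsway, Burnaby"}],
                         "123 Main St"))       # -> 4500 Kingsway, Burnaby
print(choose_destination(None, [], "123 Main St"))  # -> 123 Main St
```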

Tesla's autonomous vehicles are an enhancement to their users' lives. They help users save time in their daily activities by taking commands to reach a destination and by automatically finding and parking in an empty spot. Tesla also takes pride in its safety measures, sending over-the-air updates to its vehicles for collision avoidance and automatic emergency braking. [1] Moreover, the Autopilot can be turned off, and the user can take control of the car at any moment to make their own decisions when they disagree with the self-driving mode.

Psychological Industry - Woebot

Meet Woebot!

Woebot is an artificially intelligent chatbot for mental health, currently accessible via Facebook Messenger. [1] It engages users with anxiety and depression by creating human-like conversations. [2][3] It tracks the user's mood by asking questions on a daily basis and following up over time. [2] Based on the emotions it detects, Woebot applies cognitive-behavioural therapy techniques by sending emojis, short videos, and text messages to the user. [4] This teaches users to understand the source of their negative and unhappy mindset and to discover methods for coping with similar situations in the future. [4] The chatbot is available 24/7, with a free two-week trial and a monthly fee of USD $39 afterwards. [2] However, Woebot’s main purpose is to listen to the user’s problems, acting similarly to the Crisis Line Association of BC, not to replace a counsellor or psychologist. [5] Additionally, if Woebot perceives that the user is in danger or self-harming, it will remind the user to type “SOS” to obtain a list of resources or to call 911. [2]
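
The daily check-in flow described above can be pictured as a simple rule-based loop, sketched below: log the reported mood and pick a canned CBT-style reply, with an escalation path for "SOS". Woebot's real system holds far richer natural-language conversations; the keywords and replies here are invented.

```python
# A minimal, invented sketch of a daily mood check-in with canned CBT-style
# replies and an "SOS" escalation, mirroring the flow described in the text.
mood_log = []  # (date, mood) entries accumulated over time

responses = {
    "anxious": "Let's try a short breathing exercise together.",
    "sad": "Can you name the thought behind that feeling? Let's examine it.",
    "ok": "Nice! What's one thing that went well today?",
}

def check_in(date, message):
    """Log today's mood and return an appropriate (canned) reply."""
    text = message.lower()
    if "sos" in text:
        return "Here is a list of crisis resources. If you are in danger, call 911."
    mood = next((m for m in responses if m in text), "ok")
    mood_log.append((date, mood))
    return responses[mood]

print(check_in("2017-11-28", "Feeling pretty anxious about exams"))
```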

Allison Darcy, CEO and founder of Woebot, conducted a study over a two-week period showing that Woebot reduced anxiety and depression symptoms compared with users engaging with a self-help e-book. [4][6] Dr. Darcy created Woebot with the intent of making mental health services more accessible to individuals in need of aid, at a more affordable price and at the user's convenience. [7] Nonetheless, Woebot faces a substantial privacy problem: it is not a licensed medical provider, and users’ dialogues are not protected by medical data privacy and security laws. [2] Even though Woebot keeps its users’ identities secret, Facebook can identify the otherwise anonymous users and possesses the rights to the conversations documented through Facebook Messenger. [2]

Privacy Concerns

Although the introduction of AI may bring many benefits, one must also consider the concerns associated with implementing it. One of the key concerns people have is privacy. With AI around us most of the time, people may become self-conscious about what they say or do; at other times, they may forget that AI is nearby and say or do something that could be taken out of context. There is concern that AI is used as a form of surveillance by companies, the government, or other individuals to watch over people's daily lives. A user may have turned the AI off, but how do they know whether it is truly off? Because the users of AI are usually not its creators, people are afraid that settings could be manipulated without their consent. In addition, the public also wonders where the data collected by AI is stored, who has access to the database, and who controls the permissions to access it. These key privacy issues are preventing AI from being accepted by society, as one can never know whether they are being recorded or watched without their permission.

For AI to advance further, there must be new laws and regulations to minimize the privacy concerns people have. AI developers may have to publish their source code so people can see how the AI was programmed. For those without programming knowledge, the government could also create a regulatory body of experts who review and audit the algorithms to ensure there is no malicious code that could invade a user's privacy. The implementation of such regulations could help ease the public’s concerns about privacy.

Challenges of AI in Business


There are two main risks in implementing AI in business: honest mistakes and deliberate sabotage. An honest mistake reflects the fact that AI is not perfect; even the most sophisticated AI can make poor decisions because it lacks the human ability to think outside the box. AI is also susceptible to being hacked, whether remotely or directly: hackers could access the AI over the internet from a remote location, or they could install malicious code directly into the system. Businesses must consider both of these challenges before implementing AI. [1]

Evolution of AI

Many believe that AI is a new technology that was recently invented; however, its existence can be traced back as far as the 1950s. The concepts of how AI should work were there, but the technology and computing power to support the theory were not. With the technological revolution, we are seeing more money injected into the AI market, increasing computing power, more data for processing, and better technology. AI used to be confined to specialized labs and supercomputers, but it is now accessible from your smartphone. In addition, with the internet and the cloud, AI has been given a nearly endless supply of data from which to learn. [2]

Gartner Hype Cycle

According to the Gartner Hype Cycle, deep learning and machine learning are approximately two to five years away from mainstream adoption. However, artificial general intelligence, the idea that AI can make its own decisions and act on its own, is more than ten years away. The general idea of AI is at the peak of inflated expectations, while the more ambitious idea that AI will act like a human is still an innovation trigger.

Gartner Hype Cycle for AI, July 2017

OpenAI and Dota 2

DOTA 2 and Open AI

Dota 2 is deemed one of the most challenging MOBAs to play at a professional level. OpenAI created a bot and put it to the test against professional Dota 2 players, who were shocked as they were repeatedly beaten by the AI. "This thing accomplished something in 6 months that I had set out to do in 8 years" (William Lee). The AI learned to improve its gameplay through machine learning by constantly playing against itself, and the developers could run thousands of copies of the AI simultaneously to increase the rate at which it reached a professional level of understanding. In addition, the developers only told the bot what is considered “good” and “bad” in the game before letting it learn by itself. The OpenAI bot became so strong at Dota 2 that some professional players began to wonder why they were not playing the same way as the bot: the AI learned from professional gamers, and then professional gamers started learning from the AI.
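
The self-play training described above can be pictured with the skeleton below: many games are played between copies of the same agent, and the shared policy is nudged toward actions that keep winning. The toy game, reward, and update rule are made up; OpenAI's bot used large-scale reinforcement learning rather than this simple scheme.

```python
# A skeleton of a self-play loop on an invented one-shot game in which the
# "aggressive" action wins 60% of mixed matchups (our stand-in for "good" play).
import random

policy = {"aggressive": 0.5, "passive": 0.5}  # shared policy for all copies

def play_one_game(policy):
    """Two copies of the agent sample actions; return the winning action."""
    actions = list(policy)
    weights = list(policy.values())
    a1 = random.choices(actions, weights=weights)[0]
    a2 = random.choices(actions, weights=weights)[0]
    if a1 == a2:
        return a1
    return "aggressive" if random.random() < 0.6 else "passive"

def train(policy, games=5000, lr=0.001):
    """Shift probability mass toward whichever action keeps winning."""
    for _ in range(games):
        winner = play_one_game(policy)
        loser = "passive" if winner == "aggressive" else "aggressive"
        shift = min(lr, policy[loser])
        policy[winner] += shift
        policy[loser] -= shift
    return policy

print(train(dict(policy)))  # probability mass drifts toward "aggressive"
```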

Current Applications Used Today

Google Maps

Google Maps is a multilingual web mapping service created by Google that can be accessed through computers, smartphones, and tablets. It provides mapping services such as Google Street View, where users can see 360° panoramic street views, and Satellite Imagery, which shows pictures of Earth and other planets. [1][2] It also offers real-time navigation, providing directions with estimated times and travel distances for the chosen transportation method. [3][4] In addition, it presents information about places; for example, for a shopping mall it will show the hours of operation, reviews, and phone number, and the user has the option to save the location, see what is nearby, share the link with friends, and send the position to their phone via email or SMS. [3][5]

  • Google Maps on Smartphone
  • Google Maps Offline Exploring

Google Translate

Google Translate is a multilingual translation service created by Google that can be accessed, either online or offline, through computers, smartphones, and tablets. [6] The user can speak, write, type, or take pictures of words or sentences and translate them into any of more than 100 supported languages. [6] It is also compatible with Google Assistant, which can take commands in any supported language, understand them, and execute them. [6] However, a limitation of this technology is the accuracy of the translation, as some languages are translated more accurately than others.

  • Google Translate on Computers, Smartphones, and Tablets
  • Google Translate

Virtual Assistants

Google Assistant, developed by Google and widely used on Android; Siri, created by Apple and used on iOS; and Amazon Echo, the smart speaker made by Amazon, are some of the most commonly used virtual assistants. [7][8][9] All of them can receive voice commands to find directions, search for nearby restaurants, order and purchase food, set alarms, arrange and store calendar events, report weather conditions, make and answer calls, read text messages, modify device settings, take pictures, and play and provide information about songs. [9][10][11] A drawback of these virtual assistants is that they sometimes fail to understand or execute commands.

  • Google Assistant
  • SIRI
  • Amazon Echo

Biometric Security Technology

Biometric security technology is the use of unique intrinsic or behavioural human traits as a method of identification and of protecting private information. [12] With this technology, access is granted only if one or more characteristics, such as a fingerprint, palm print, iris, face, voice, DNA, and/or signature, are verified by scanning or recognition. [13] Nowadays, biometrics are commonly used by governments and law enforcement to spot individuals who pose a danger to the general public. [12]

Moreover, anyone who owns a recent smartphone, computer, or tablet has access to biometrics, as the technology is built into the newest devices. This makes users’ lives easier, since passwords, PINs, and patterns are no longer needed, and more secure, since only the user can access the biometrically protected data. [12] Yet the technology can still fail, as on the occasion when a 10-year-old child was able to unlock his mother’s phone using the new iPhone X’s facial recognition system because of their extremely similar facial features. [14]
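
Under the hood, most biometric systems reduce a scan to a feature vector and accept a match when its similarity to the enrolled template exceeds a threshold. The sketch below illustrates that idea with invented numbers, and shows why a very similar face, like the child's in the iPhone X example above, can produce a false accept.

```python
# A minimal sketch of threshold-based biometric matching. The feature vectors
# and acceptance threshold are invented purely for illustration.
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

enrolled_template = [0.61, 0.14, 0.92, 0.33]   # owner's stored face embedding
THRESHOLD = 0.98                               # assumed acceptance threshold

def unlock(captured):
    return cosine_similarity(captured, enrolled_template) >= THRESHOLD

print(unlock([0.60, 0.15, 0.91, 0.34]))  # owner: True
print(unlock([0.20, 0.80, 0.10, 0.55]))  # stranger: False
print(unlock([0.58, 0.18, 0.88, 0.30]))  # close relative: may wrongly pass
```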

  • Fingerprint Scanner
  • Facial Recognition
  • Palm Print Scanner
  • Amazon Iris Scanner
  • Voice Recognition
  • Signature Recognition

Smartphones and Smartwatches Built-in Sensors

Smartphones and smartwatches feature built-in sensors such as accelerometers, gyroscopes, magnetometers, compasses, GPS, and barometers. [15][16][17] These sensors can track the direction the device is facing, the speed at which it is moving, the temperature it is exposed to, and its location. [16] From this, the device can count the number of steps the person has walked or run. All of the captured data is stored on the smartphone or smartwatch, which therefore knows the dates, times, and places the user has visited since the moment the device was activated. Advancements in this technology have helped and encouraged users to live a healthier, more active lifestyle, since it translates the walking distance required for the user's desired weight into a daily step target. [18]
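
Step counting from these sensors is essentially peak detection on the accelerometer signal. The sketch below counts upward threshold crossings of the acceleration magnitude using invented readings; real devices filter and calibrate the signal and fuse several sensors.

```python
# A simplified step-counting sketch over invented accelerometer samples.
import math

# (x, y, z) accelerometer samples in m/s^2; ~9.8 is standing still.
samples = [(0.1, 0.2, 9.8), (0.3, 0.1, 12.5), (0.2, 0.0, 9.6),
           (0.1, 0.4, 12.9), (0.0, 0.2, 9.7), (0.2, 0.3, 13.1)]

THRESHOLD = 11.0  # assumed magnitude above which a sample counts as a step impact

def count_steps(samples):
    """Count upward crossings of the magnitude threshold as steps."""
    steps, above = 0, False
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude > THRESHOLD and not above:
            steps += 1
        above = magnitude > THRESHOLD
    return steps

print(count_steps(samples))  # 3
```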

  • Smartphones
  • Smartwatch

Applications Aiding Law Enforcement

Google Translate: Action in Court

Google Translate has been used multiple times in UK courts, serving as a medium of communication between the defendant and the judge when interpreters failed to appear in the court of law. [19] For example, on two occasions, Google Translate served as an interpreter for a Polish defendant and a Chinese defendant in England. [19]

Fitbit: Resolving a Murder Case

Data recovered from Connie Dabate’s Fitbit helped police solve her murder. The device’s information allowed police to draw a timeline of the morning she was killed and led to her husband, Richard Dabate, being charged with her murder. Richard had given police a different account of the events of the day in question, stating that Connie was already lifeless approximately an hour before she was actually murdered. [20]

Current Events

Should There Be a Robot Tax?

It is no surprise that robots will take over jobs, which may leave a vast number of people jobless. “If robots are stealing human jobs, it is only fair they pay taxes too right?” (CBC News, 2017). Bill Gates stated, “right now if a human worker does $50,000 worth of work in a factory that income is taxed. If a robot comes in to do the same thing, you’d think we’d tax the robot at a similar level” (CBC News, 2017). To replace the income taxes lost from human workers, a possible solution is to introduce a robot income tax. Automation is inevitable, and it is impossible to charge robots actual taxes; therefore, the company that employs robots would pay the government a tax based on the profit or value the robot has generated. The government should consider implementing a robot income tax if the data shows that automation is displacing jobs. [21]
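
A back-of-the-envelope version of the quoted proposal is sketched below: if a robot produces the same $50,000 of output as the worker it replaced, the employer would owe a levy comparable to that worker's income tax. The 20% effective rate is an assumed figure for illustration only.

```python
# A back-of-the-envelope robot tax calculation; the effective rate is assumed.
ROBOT_OUTPUT_VALUE = 50_000        # value of work performed by the robot, per year
ASSUMED_EFFECTIVE_TAX_RATE = 0.20  # hypothetical effective income tax rate

robot_tax = ROBOT_OUTPUT_VALUE * ASSUMED_EFFECTIVE_TAX_RATE
print(f"Annual robot tax owed by the employer: ${robot_tax:,.0f}")  # $10,000
```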

Sophia - The First Robot Citizen

Sophia, the Humanoid

On October 25, 2017, Sophia became the first robot in the world to achieve the status of a full citizen of Saudi Arabia. The Sophia-bot was developed by AI developer David Hanson, who believes that “rendering the social human in all possible detail can help us to better understand social intelligence, both scientifically and artistically” (Forbes, 2017). Hanson envisions a symbiotic partnership between humans and robots, in which these genius machines help us solve our most challenging problems.

Sophia was able to answer the interviewer's questions and also showed a sense of humour. Due to advances in AI, she was able to hold eye contact, recognize faces, and understand human speech. She was also able to answer with expression, saying she wants to “live and work with humans" and that "[she] needs to express the emotions to understand humans and build trust with people." Sophia acts like a human and thinks like a human. Sophia being the first robot citizen raises a number of questions as to what rights a robot should have. [22]


    1. Where the streets all have Google's name [Accessed: November 1, 2017].
    2. Google Maps now lets you explore your local planets and moons, too [Accessed: November 1, 2017].
    3. Google Maps will let you share your location with friends and family for a specific period of time [Accessed: October 28, 2017].
    4. Build the next generation of location experiences [Accessed: November 1, 2017].
    5. Eiffel Tower, Google Maps [Accessed: December 1, 2017].
    6. Explore the world in over 100 languages [Accessed: December 1, 2017].
    7. Echo (2nd Generation) - Charcoal Fabric [Accessed: December 2, 2017].
    8. Google Assistant, its AI-based personal helper, rolls out to Nougat and Marshmallow handsets [Accessed: December 2, 2017].
    9. “Hey Siri, wake me up at 7 AM tomorrow" [Accessed: November 28, 2017].
    10. You can buy stuff with Google Assistant now [Accessed: November 27, 2017].
    11. Amazon Echo: the first 13 things to try [Accessed: November 27, 2017].
    12. An Overview of Biometric Recognition [Accessed: December 1, 2017].
    13. Biometric security systems: a guide to devices, fingerprint scanners and facial recognition access control [Accessed: December 1, 2017].
    14. Watch a 10-Year-Old's Face Unlock His Mom's iPhone X [Accessed: November 30, 2017].
    15. Implicit Sensor-based Authentication of Smartphone Users with Smartwatch [Accessed: December 1, 2017].
    16. All the Sensors in Your Smartphone, and How They Work [Accessed: November 30, 2017].
    17. Barometer [Accessed: November 30, 2017].
    18. Precise Devices: Fitness Trackers Are More Accurate Than Ever [Accessed: November 30, 2017].
    19. UK judge uses Google Translate in pre-trial hearing [Accessed: November 30, 2017].
    20. Cops use murdered woman's Fitbit to charge her husband [Accessed: November 30, 2017].
    21. Is it time to tax job-stealing robots? [Accessed: December 1, 2017].
    22. Sophia, The World's First Robot Citizen [Accessed: December 1, 2017].


    Priscilla Chui, Marco Lai, Nancy Li, Amilia Akesson, Carey Myers
    Beedie School of Business, Simon Fraser University, Burnaby, BC, Canada