Artificial Intelligence / Internet of Things

From New Media Business Blog



History of AI & IoT

Artificial Intelligence

Alan Turing with the Enigma Machine
Watson on Jeopardy

The discussion around AI can be traced back to 1308, when Ramon Llull, a Catalan poet and theologian, published Ars generalis ultima ("The Ultimate General Art"), describing his ambition to perfect a method of "using paper-based mechanical means to create new knowledge from combinations of concepts" [1]. Fast forward to 1950: Alan Turing argued that the question "Can a machine think?" should be replaced with the question "Can a machine be linguistically indistinguishable from a human?" [2]. Turing proposed the "Turing Test" (TT), or "The Imitation Game," in which a judge converses with a human and a computer and must distinguish between the two players [3]. The field of AI officially began in 1956, when John McCarthy coined the term "artificial intelligence" at a conference at Dartmouth College, New Hampshire [4]. In 1986, Mercedes-Benz showcased the first driverless car: a van equipped with cameras and sensors that could drive up to 55 mph on empty streets [5]. In 1997, IBM's Deep Blue defeated Russian chess grandmaster Garry Kasparov, highlighting the growing intelligence of these machines. In 2011, Watson, IBM's question-answering system, won the quiz show "Jeopardy!" by beating reigning champions Brad Rutter and Ken Jennings [6].

Internet of Things

IoT Coca-Cola machine

From 1982 to today, the Internet of Things (IoT) and its devices have grown significantly. In 1982, David Nichols, a computer science graduate student at Carnegie Mellon University, connected a Coca-Cola machine to the Advanced Research Projects Agency Network (ARPANET, the precursor to today's Internet), creating the first internet-connected vending machine. The machine allowed people to check the availability of Coca-Cola drinks remotely by monitoring the lights in the machine [7]. Eight years later, John Romkey connected a Sunbeam toaster to the Internet [8], and in 1994, Steve Mann created the first wearable internet-connected web camera [9]. In 1999, Kevin Ashton officially coined the term "Internet of Things," despite several IoT devices surfacing years before. One year later, LG announced the world's first web-enabled fridge [10]. The fridge was not a successful product at the time, as consumers perceived it to be unnecessary and overpriced; however, the rise of smart home devices has since made smart fridges a more widely accepted concept. Finally, in 2011, Gartner added IoT to its Hype Cycle. Since then, Gartner has observed that "IoT as a whole has progressed along the Trough of Disillusionment and is nearing the Slope of Enlightenment," but "adoption and maturity for various essential IoT technologies still vary widely" and "adoption of various IoT use cases varies widely by industry" [11].

What is AI and IoT?

Artificial Intelligence

Artificial intelligence can be defined as a "wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence" [12]. By leveraging computers and machines, artificial intelligence combines computer science with robust datasets to enable problem-solving and decision-making capabilities. It encompasses machine learning and deep learning, both of which seek to mimic the human mind [13]. AI algorithms are used to build sophisticated systems that make predictions or classifications based on input data. AI is significant because, for the first time, traditionally human capabilities can be undertaken by software at scale. Moreover, AI can be applied to every sector to enable new possibilities and efficiencies [14].

Internet of Things

The Internet of Things can be defined as the connection of physical objects to the Internet and to other connected devices [15]. IoT devices are physical objects embedded with sensors, processing ability, software, and other technologies that exchange and share data over the Internet [16]. From smartwatches to thermostats to security systems, many people encounter IoT devices each day. Today, over 30 billion IoT devices are connected worldwide, and this number will only continue to grow over the coming years [17]. The evolution and increased capabilities of internet connectivity, sensor technology, computing power, artificial intelligence, cloud computing, and edge computing have contributed to the rise and exponential growth of IoT [18]. Furthermore, the development of 5G networks will increase data network speeds and strengthen network reliability. As a result, the performance and reliability of IoT devices will improve, contributing to their continued growth over the next several years [19].

Components of an IoT Device

The essence of IoT is a device's ability to acquire data and convert it into information using sensors at the device level. This information is then analyzed using cloud computing and data-processing capabilities and relayed back to the device's user interface (UI). In doing so, there are six key building blocks of any IoT-powered device [20]:



  • Data Acquisition Module - Converts sensor signals into digital information
  • Data Processing Module - Processes data and performs operations; requires storage capability
  • Communication Module - Enables communication with third-party components and cloud-specific platforms
  • Network Layer - Responsible for reliable transmission of information between end nodes; technologies used include the Internet, mobile communication networks, wireless sensor networks, network infrastructures, and communication protocols
  • Service Layer - Links the network layer with the application layer and provides the required computational power
  • Application Layer - Responsible for data sharing along the IoT network, so it must ensure security, data privacy, and access control, and prevent information leaks

These hardware components are then integrated with IoT platforms, software used to link hardware, networks, and applications [21]. These building blocks enable each IoT device to allow for the continuous communication of objects, "forming large, interconnected systems capable of creating, communicating, aggregating, analyzing, and acting on data. In turn, this opens up a world of opportunity for connected objects that can better serve customers' individual needs and gather data to drive the development of more tailored services" [22]. IoT device components undergo continuous innovation that reduces power requirements and decreases costs, while being integrated into smart devices that affect every aspect of contemporary society.
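As a rough illustration, the flow through the first three building blocks can be sketched as follows. This is a minimal sketch, not a real device stack; the device name, value ranges, and payload fields are all hypothetical:

```python
import json
import random

def acquire_sample():
    """Data acquisition module: convert an analog sensor signal
    (simulated here with a random value) into a digital reading."""
    return round(random.uniform(18.0, 26.0), 2)  # e.g. temperature in Celsius

def process(readings):
    """Data processing module: perform a simple operation (averaging)
    over stored readings."""
    return sum(readings) / len(readings)

def package_for_cloud(device_id, value):
    """Communication module: serialize the result for a cloud platform."""
    return json.dumps({"device": device_id, "avg_temp_c": round(value, 2)})

readings = [acquire_sample() for _ in range(10)]   # storage capability
payload = package_for_cloud("thermostat-01", process(readings))
print(payload)  # ready for the network layer to transmit
```

The network, service, and application layers would then carry this payload to the cloud and enforce security and access control around it.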

Artificial Intelligence of Things

Artificial Intelligence of Things (AIoT) is the combination of Artificial Intelligence (AI) technologies with the Internet of Things (IoT) infrastructure [23]. When these two technologies are combined, it allows the IoT devices to analyze data and make decisions based on that data without any human involvement. The Intelligence from AIoT can be utilized to optimize systems, improve processes, provide meaningful business insights, and assist the system in making decisions and continuously learning from those decisions. Although AI and IoT are powerful technologies on their own, we can achieve more efficient IoT operations, improve human-machine interactions, and enhance data management and analytics when we put the two technologies together.

Four Types of AI

AI's four main types categorize machines based on their decision-making capabilities. From the simplest form to the most advanced, they are reactive machines, limited memory, theory of mind, and self-awareness machines [24].

Garry Kasparov and Deep Blue
  1. Reactive machines are the most basic form of AI application, programmed to consistently produce predictable outcomes based on the received input [25]. As such, this AI will respond to identical situations in the same way every time [26]. Without learning capabilities, reactive machines cannot store information, preventing them from conceiving of past or future actions. Therefore, reactive machines are restricted to performing only the tasks they were initially designed for. However, they form the basis upon which more advanced AIs are built. An example of a reactive machine is IBM's chess-playing supercomputer, Deep Blue. It is programmed to consider up to 200 million positions per second, decide what move to make, and predict the opponent's response [27]. True to its simple nature, Deep Blue's full capacity is spent on each individual move it faces.
  2. The limited memory machine is the most widely-used AI type today [28].
    Self-driving Cars
    With its limited storage capacity, this machine can comprehend the past [29]. Historical, observational data is combined with its pre-programmed information to produce predictions, enabling complex classification tasks [30]. For example, self-driving cars are fed data and trained to observe other cars' speed and direction as well as road features such as lane markings, traffic lights, and road curves [31]. The machine has the processes necessary to understand and interpret this incoming data so that the cars can drive more safely. The limited memory aspect is reflected in how self-driving cars must recognize specific objects, such as other cars on the road, and monitor them over a specified period. However, the machine does not store this information in a "library of experience" from which it can learn for future reference [32].
    Kismet the Robot
  3. Theory of mind refers to the human mind's ability to "attribute mental states to others" and "encompasses inferring others' beliefs, desires, goals, and preferences" [33]. This type of AI holds the potential to carry meaningful conversations and understand human emotions and thoughts while reacting to others appropriately. The machine holds the potential to make its own decisions and adjust its behaviours based on the reactions of others. This type of AI marks the border at which AI adopts capabilities to become more human-like, and only a few robots today fall under this category. One is Sophia, the world's first robot citizen, who can "recognize human faces, see emotional expressions, and recognize various hand gestures" [34]. It forms estimates of the feelings of others during a conversation while also forming emotions of its own through a rough simulation of human psychology [35]. Another example is the Kismet robot, which can "demonstrate social and emotional interactions with humans" by recognizing social cues and then displaying an array of emotions [36].
    Sophia the Robot
  4. The most advanced type of AI is self-awareness machines. This phase has not been developed yet, as current technologies do not have the hardware or algorithms to support self-awareness programming. A self-aware machine would have an awareness of its own emotions and the emotions of others, with the ability to feel desires, needs, and emotions. Its consciousness would allow it to be aware of its own internal emotions and mental states and to make inferences from them [37].
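The "limited memory" idea from the second category can be illustrated with a short sketch: only a small window of recent observations is retained, enough to estimate another car's speed, and older data simply falls away rather than accumulating in a library of experience. All numbers here are invented for illustration:

```python
from collections import deque

class TrackedCar:
    """Track another car using only a short window of recent observations."""

    def __init__(self, window=5):
        # deque with maxlen discards the oldest sample automatically:
        # the "limited memory" of this AI type.
        self.positions = deque(maxlen=window)

    def observe(self, position_m, t_s):
        self.positions.append((position_m, t_s))

    def estimated_speed(self):
        """Estimate speed (m/s) from the oldest and newest retained samples."""
        if len(self.positions) < 2:
            return None
        (p0, t0), (p1, t1) = self.positions[0], self.positions[-1]
        return (p1 - p0) / (t1 - t0)

car = TrackedCar(window=3)
for t, p in enumerate([0.0, 14.0, 28.0, 42.0, 56.0]):
    car.observe(p, float(t))
print(car.estimated_speed())  # 14.0 m/s, computed from the last 3 samples only
```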

Machine Learning vs. Deep Learning

Machine Learning vs. Deep Learning

Machine Learning is an application of AI that uses algorithms to make informed decisions based on the data provided [38]. In contrast, deep learning is a subfield of machine learning that uses artificial neural networks to make decisions on its own. These artificial neural networks are modeled after the human brain and allow data to pass through nodes in multi-layered networks [39]. In a deep learning model, the algorithm itself can determine whether a prediction is accurate, whereas a machine learning model requires human intervention to make adjustments.

Machine Learning

Types of Machine Learning

The three types of machine learning are supervised, unsupervised, and reinforcement learning [40].

  1. Supervised learning is defined by the use of labeled datasets to predict outcomes through algorithms [41]. In data mining, supervised learning can be broken down into two types of problems. The first, classification, sorts data into specific categories and helps predict discrete values, such as yes/no or nominal values. The second, regression, is used to predict continuous variables and helps clarify the relationship between dependent and independent variables. Supervised learning algorithms include support vector machines (SVMs), decision trees, and neural networks.
  2. Unsupervised learning uses algorithms to analyze and identify patterns in datasets containing data points that are not classified or labeled [42]. Common techniques in this learning type are market basket analysis, clustering, and neural networks.
  3. Reinforcement learning allows algorithms to learn in a game-like situation, using trial and error to generate a solution [43]. The algorithm receives rewards or penalties for every action, and it learns that it must maximize total rewards to win the game.
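As a minimal illustration of supervised learning, the sketch below classifies a new point from a small labeled dataset using a nearest-neighbour rule (a simpler algorithm than the SVMs and decision trees named above); the data points and labels are invented:

```python
import math

# Each training sample pairs a feature vector with a label ("yes"/"no"),
# mirroring the discrete-value classification problem described above.
training_data = [
    ((1.0, 1.0), "no"),
    ((1.2, 0.8), "no"),
    ((4.0, 4.2), "yes"),
    ((4.5, 3.9), "yes"),
]

def classify(point):
    """Predict the label of the single nearest labeled neighbour."""
    nearest = min(training_data,
                  key=lambda sample: math.dist(sample[0], point))
    return nearest[1]

print(classify((4.1, 4.0)))  # "yes": closest to the labeled "yes" cluster
print(classify((0.9, 1.1)))  # "no": closest to the labeled "no" cluster
```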

Deep Learning

Deep learning is modeled on the human brain and draws conclusions through data analysis [44]. It uses a layered structure of algorithms called artificial neural networks, allowing learning on a “deeper” level than standard machine learning models. Whereas machine learning models usually require some guidance, a deep learning algorithm can usually determine on its own whether a prediction is accurate. Hence, less human intervention is required, but more care must be taken to ensure that the algorithm does not reach incorrect conclusions.
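A rough sketch of this layered structure: data flows through nodes arranged in layers, each node applying weights, a bias, and an activation function. The weights below are hand-picked purely for illustration, not learned:

```python
import math

def sigmoid(x):
    """Squash a weighted sum into the (0, 1) range."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum + bias + sigmoid per node."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -0.2]                                            # input features
hidden = layer(x, [[0.8, -0.4], [0.3, 0.9]], [0.1, -0.1])  # hidden layer, 2 nodes
output = layer(hidden, [[1.2, -0.7]], [0.05])              # output layer, 1 node
print(round(output[0], 3))  # a value in (0, 1), usable as a probability
```

Training a real network consists of adjusting those weights automatically so the output matches labeled examples, which is where the "learning" happens.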

Types of Deep Learning

  1. Convolutional neural networks (CNNs) are designed to work with images and assign weighted importance to aspects of an image [45]. One application of CNNs is image classification, which is helpful when a large volume of images must be scanned for a specific feature [46]. CNNs are quickly expanding from image processing to other uses, such as population genetic inference and astrophysics. Given the dynamic nature of the algorithm, there will likely be a wide variety of use cases for CNNs beyond image recognition.
  2. The second notable algorithm, the recurrent neural network, is used for language translation and speech recognition [47]. This algorithm uses previous data to learn and to influence future inputs and outputs, and it is distinguished by its memory capabilities. Applications of recurrent neural networks include Google Translate and Siri.

Applications of Machine Learning and Deep Learning

Image Recognition: Image recognition is one of the most widely used applications of machine learning. It is used to identify objects, persons, and places, and appears in applications that detect faces for automatic tagging [48].

Object detection with AI

Example: Google's FaceNet: FaceNet, developed in 2015, is an open-source program that allows the extraction of high-quality features from faces [49]. It uses machine learning and deep learning methods, namely clustering and deep convolutional neural networks, to achieve state-of-the-art results.

Speech Recognition: Speech recognition, or speech-to-text, allows a program to convert human speech into writing [50]. Speech recognition involves many dimensions, including language weighting, speaker labeling, acoustics training, and profanity filtering. These algorithms use natural language processing, n-grams, and neural networks to convert speech into text while improving the model's accuracy. Speech recognition is widely used across industries, as many individuals rely on it in their day-to-day lives; these industries include technology, healthcare, sales, and security.
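The n-gram idea mentioned above can be sketched in a few lines: a bigram model counts adjacent word pairs in a corpus and then scores the most likely next word, which helps a recognizer choose between similar-sounding transcriptions. The corpus below is a toy example:

```python
from collections import Counter, defaultdict

corpus = "recognize speech with a model that can recognize speech well".split()

# Count every adjacent word pair (bigram) in the corpus.
bigrams = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    bigrams[prev][cur] += 1

def most_likely_next(word):
    """Return the word that most often follows `word`, or None."""
    return bigrams[word].most_common(1)[0][0] if bigrams[word] else None

print(most_likely_next("recognize"))  # "speech": follows "recognize" twice
```

Real speech systems combine scores like these with acoustic models over far larger corpora, but the counting principle is the same.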

Product Recommendation: Product recommendation systems, or recommender systems, allow organizations to understand consumer data and generate marketing recommendations based on it [51]. These algorithms work by creating an advanced net of complex connections between people and products. Market basket analysis is a widely used machine learning technique that helps organizations understand these relationships.
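Market basket analysis rests on two simple metrics, support (how often items appear together) and confidence (how often one purchase implies another), which can be sketched over invented transaction data:

```python
# Toy market basket analysis for the rule "customers who buy bread
# also buy butter". Transactions are made up for illustration.
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"milk", "eggs"},
]

def support(itemset):
    """Fraction of baskets containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Of the baskets containing the antecedent, the fraction that
    also contain the consequent."""
    return support(antecedent | consequent) / support(antecedent)

print(support({"bread", "butter"}))       # 0.5: in 2 of 4 baskets
print(confidence({"bread"}, {"butter"}))  # ~0.67: 2 of 3 bread baskets
```

Rules with high support and confidence become candidate recommendations ("bread buyers may also want butter").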

Example: Spotify: Spotify uses a recommender system that personalizes a user's experience by identifying songs of interest [52]. Spotify does this through collaborative modeling, natural language processing, and convolutional neural networks. Using this combination of machine and deep learning, Spotify can recommend music based on a listener's actions, scan through thousands of online discussions to uncover consumer sentiment, and analyze raw audio data to recommend similar songs.

Real World Applications of AI & IoT

End-User Experience

Three prominent industries that use AI and IoT applications to increase user satisfaction are retail, finance and banking, and education. Firms in each of these industries implement AI and IoT applications to gain valuable insight about their customers and enhance the overall end-user experience.

Retail: Brick-and-Mortar

AI and IoT are widely used to provide superior customer support and enhance the overall customer experience within the retail industry's brick-and-mortar stores. Retailers who operate brick-and-mortar locations are constantly seeking ways to collect data, gain access to meaningful data-driven insights, and better fulfill customer needs. AI and IoT applications allow retailers to collect this data and use it to improve their stores' operations as well.

Applications of AI found in brick-and-mortar stores benefit retailers in their strategic decisions concerning pricing and promotions and enable cashier-free payment. Through the analysis of historical data, AI can determine ideal dynamic pricing and decide which items should be promoted [53]. AI also allows retailers to explore the concept of cashier-free stores to minimize the need for hiring employees. Amazon Go is a prime example of a brick-and-mortar store that uses data collected from camera feeds and shelf sensors to better understand the items customers want to purchase. It allows customers to walk out of the store with items in hand and be automatically charged once they are done shopping [54].

Smart Mirror

Some examples of IoT utilized by retailers include beacons, smart digital signage screens, and smart mirrors. Beacons provide customers with a personalized shopping experience by sending real-time offers, ads, and rewards to customers' mobile phones while they are in the store. They also give retailers valuable insight into their customers, such as shopper engagement with merchandise, foot traffic, and customer journey paths, enabling improved marketing strategies [55]. Unlike traditional signage, IoT digital signage changes when activated by factors such as changes in circumstance, information captured by other sensors and devices, changes in surroundings, or reactions to instructions [56]. When combined with beacons, digital signage can be entirely personalized, greeting customers when they arrive in the store and displaying information unique to each customer's previous purchase history or interests. Smart mirrors have also been a key IoT application within the retail industry. Through the help of radio frequency identification (RFID) systems, artificial intelligence, augmented reality, and gesture recognition, smart mirrors allow consumers to virtually try on items or receive shopping suggestions [57].


Retail: E-commerce

E-commerce has been increasing in popularity in recent decades and continues to draw traffic away from physical stores. AI and IoT play an essential role in creating an e-commerce environment that replicates (using an omnichannel marketing strategy), or even improves upon, the in-person shopping experience. Through added connectivity, complemented by machine intelligence, the internet has become a channel that major retail brands must employ if they wish to remain competitive.

Two noteworthy applications include chatbots and robo-cars. Chatbots are computer programs designed to simulate human conversations and are used as a supplement to, or a replacement for, customer service representatives [58]. The three common types of chatbots used today are rule-based, intelligent, and AI-powered [59]. Rule-based chatbots respond to specific questions with predefined answers, while intelligent chatbots use machine learning to build on user requests and information. AI-powered chatbots analyze conversations to distinguish their context and user preferences [60]. With natural language processing abilities, AI-powered chatbot interactions feel more natural [61]. Another example of AIoT in e-commerce is robo-cars used for delivery. Robo-cars make grocery delivery more convenient for busy or remote households and for grocery stores alike. Kroger, an American retail company, has been employing these self-driving cars to deliver food to customers' doorsteps [62].
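A rule-based chatbot of the kind described above can be sketched in a few lines: each recognized keyword maps to a predefined answer, with a fallback when no rule matches. The rules and wording here are invented:

```python
# Minimal rule-based chatbot: predefined answers keyed by keyword.
RULES = {
    "hours": "We are open 9am-9pm, seven days a week.",
    "return": "Items can be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def reply(message):
    """Return the first matching predefined answer, else a fallback."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't understand. A representative will follow up."

print(reply("What are your hours?"))    # matches the "hours" rule
print(reply("How do I get a refund?"))  # no rule matches -> fallback
```

Intelligent and AI-powered chatbots replace this keyword lookup with machine-learned models of intent and context.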

Banking and Finance

Like the retail industry, banking and finance uses IoT to increase the efficiency and effectiveness of bank processes and transactions and to improve customer interactions. IoT applications in banking include beacons, wearables, and smart speakers. Like retailers, banks utilize beacons to deliver personalized offers and information to their clients and to collect client-specific data [63]. Through beacons, banks can capture data such as the number of mobile users that visit a branch and its peak hours to anticipate consumer behaviour. Additionally, some banks have utilized beacons to identify customers [64]. For example, Citibank uses beacons to identify customers and allow them to access ATMs after the branch's typical hours. Banks are also beginning to take advantage of the growing wearables industry. With increasing numbers of consumers making contactless payments through wearables such as smartwatches and rings, banks are beginning to embed software in wearables and even develop some of their own devices and payment apps [65].

Smart Wearables

AI has impacted the finance industry on three levels: personal, consumer, and corporate finance.

  1. With AI in personal finance, 24/7 financial guidance is available via chatbots powered by natural language processing. Online banking also provides users with personalized insights for wealth management solutions. For example, in 2017, Capital One launched Eno, the first SMS text-based assistant offered by a US bank. It generates insights and anticipates customer needs through 12 proactive capabilities, including sending customers alerts about suspected fraud or price increases in subscription services [66].
  2. Consumer finance utilizes AI to prevent fraud and cyberattacks by analyzing and singling out irregularities in consumer patterns [67]. JPMorgan Chase, an investment banking company, has adopted AI-powered fraud detection applications for its account holders. For each credit card transaction processed, Chase's data centres use an algorithm to detect fraud patterns [68]. Chase also recently launched a digital fraud prevention hub for small to mid-sized businesses to prevent unauthorized users from withdrawing money and to allow users to approve or reject check payments digitally [69].
  3. AI aids corporate finance through machine learning by analyzing spending patterns to predict loan borrowing behaviour. It uses information such as smartphone data to assess a borrower's creditworthiness for credit decisions. Large lenders can increase profit per loan with AI by reducing underwriting overhead and delays [70]. Some companies have even automated the entire loan process, using algorithms that evaluate variables in loan applications (e.g., educational degrees, areas of study, occupation). Users have noted that this technology has led to less bias and better loans [71].


Education

Smart Board

IoT has several advantages in education for students in both physical and virtual learning environments. One example is the use of interactive displays and digital whiteboards in classrooms to enhance student engagement and collaboration. Smartboards enable students to physically interact with the information displayed on the board, allowing teachers to support visual, audio, and kinesthetic learning styles. IoT has also been utilized in classrooms to provide accommodations for students with disabilities or unique learning needs [72]. For example, IoT temperature and lighting controls make it simple for teachers to create a learning environment where students who are sensitive to the physical classroom environment can feel safe and comfortable. Additionally, due to the COVID-19 pandemic, students have been exposed to virtual means of education in which IoT technology has been utilized to enhance the learning experience. For example, web cameras have several use cases in distance education, such as collecting data on attendance, behavioural indicators, and emotional states. Data collected from wearable devices, such as blood pressure and heart rate, has also proven useful in classes with laboratory components and in testing and examination settings [73].

An example of AI in the learning environment is Quizlet’s Learning Assistant. The machine-learning assistant starts by giving students a diagnostic test to determine what a student currently does not understand. Then, a personalized study path is constructed with adaptable goals, progress reports, and insights. The AI grades assignments and tests using natural language processing, which helps it better interpret students' answers [74].


Healthcare

AI and IoT have impacted the healthcare industry through their progress in pharmaceuticals, their aid to medical professionals in the diagnosis, treatment, and outcome prediction processes, and telehealth [75].

Bioscience: Pharmaceuticals

AI helps researchers find patterns in chemical compounds at a faster rate, and IoT-based data analysis makes repeated studies on potential drugs smoother while reducing human error. The raw materials used in the pharmaceutical industry need to be handled with caution and stored in appropriate environmental conditions. IoT sensors can monitor factors like temperature, humidity, pH, and carbon dioxide levels to ensure these materials are handled properly. These devices can further be integrated centrally into the production system for real-time monitoring.

Benevolent AI, the largest private AI company in Europe, uses its Benevolent Platform for discoveries in chemical compounds. The Benevolent Platform is an R&D computational platform that captures connections throughout scientific literature and other relevant available data. Exploratory and predictive AI tools then allow scientists to dive deeper into the data and disease networks to build hypotheses, visualize relationships, and pinpoint optimal treatment interventions. The AI-powered platform uses the data to predict which chemical compounds have the potential to cure symptoms or diseases. The Benevolent Platform is disrupting the traditional drug discovery process, and potentially the pharmaceutical process as a whole, by accelerating the pace at which life-altering drugs can be developed [76]. One of Benevolent AI's most recent ventures was an AI-derived hypothesis that discovered the previously unknown anti-viral properties of baricitinib. The FDA later approved this drug for emergency use in the treatment of COVID-19 after it was shown to reduce mortality of hospitalized patients by 38%. The United States, Japan, and India currently use this drug after global clinical trials further validated it [77].

Clinical Trials: IoT can help make clinical trials more efficient and cost-effective by optimizing data collection, data integration, and patient monitoring. IoT integrated with cloud computing has allowed pharmaceutical companies to conduct siteless clinical trials at multiple geographic locations simultaneously and to increase sample sizes. This reduces the time required to complete multiple clinical trials and allows researchers to remotely access and analyze the data. Researchers can also generate and analyze multiple parameters like body temperature, sleep cycle, and blood pressure. In addition, pharmaceutical companies require fewer staff to conduct the trials, as patient data can be collected remotely using IoT devices and transferred securely through the cloud. Traditionally, these processes were conducted through direct contact between healthcare workers and patients, making them more time-consuming and costly.

Diagnosis, Treatment, and Outcome Prediction

Diagnosis: AI aids medical professionals through its capabilities in the diagnosis, treatment, and outcome prediction processes. AI powered by healthcare data and analytical techniques helps increase the accuracy and speed of medical results. In particular, AI is aiding in the early detection and diagnosis of strokes. Due to the lack of detection and recognition of early stroke symptoms, only a few patients receive timely treatment for strokes. However, two machine-learning algorithms have been implemented in movement-detecting devices for early stroke prediction. The algorithms analyze the patient's movement, and once it differs significantly from the usual pattern, an alert of a potential stroke is activated so the patient can be evaluated for treatment as soon as possible [78].
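The alerting idea described above can be sketched as a simple anomaly test: a new movement reading that deviates far from the patient's usual pattern triggers an alert. All values, the gait metric, and the threshold below are hypothetical, chosen only to illustrate the mechanism:

```python
import statistics

# A patient's usual readings for some movement metric (made-up values).
baseline = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3, 11.7]

def is_anomalous(reading, history, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations
    away from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(reading - mean) / stdev > threshold

print(is_anomalous(12.2, baseline))  # False: within the usual pattern
print(is_anomalous(15.0, baseline))  # True: large deviation -> alert
```

Production systems use far richer models than a z-score, but the principle of comparing new readings against a learned baseline is the same.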

Treatment: There are endless opportunities to use AI in patient treatment. In neurology, an AI system was developed to restore control of movement in paralysis patients. An AI-trained computer chip can decode signals recorded from the spinal cord to stimulate a machine that initiates the correct intended behaviours. In the future, there is hope that the artificial neural network could be trained over time, through added research on the spinal cord's circuitry, to initiate more movements [79].

Outcome Prediction:

Databases for storing health record information (Figure 1)

"An adverse drug reaction (ADR) can be characterized as a side effect of taking medication. The reaction may occur either after a single dose or a long-term administration. ADR aids in outcome prediction by allowing doctors to better predict the effect of drug treatments on different individuals.This can also be possible due to the adverse reaction when two different medicines are ingested at the same time". Adverse drug reactions vary on a person to person basis and can cost patients $847 million annually [80]. In IoT-based ADR systems, a unique identifier, such as a barcode, is used to identify each medicine at the user/patient terminal. Then, the compatibility of the drug is then matched with the patient through an intelligent pharmaceutical system. Pharmaceutical intelligence is composed of databases (Figure 1) which store various information like a patient's allergy profile, health records, and active ingredients of drugs. After analyzing a patient's past health information, the system will help decide whether the drug is safe for that patient.


Telehealth

Telehealth is the distribution of health-related services and information via electronic information and telecommunication technologies [81]. Telehealth became popular during the COVID-19 pandemic as it allowed health care providers to treat a higher volume of patients remotely, which was critical to support COVID-19 patients.

Benefits of Telemedicine:

  • Improved Access to Health Care - Patients in rural and underserved communities can connect to specialists from their rural health care centers without travelling, which saves money and time.
  • High-Quality Care - Data from IoT devices can be integrated with telemedicine systems which can then be used to analyze data to help doctors identify and respond to patients' health issues.
  • Safer Environment - Patients, especially those who are immunocompromised, can avoid high-risk hospitals and clinics for their routine checkups and appointments. [82]

Babylon Health is based in the UK and uses an AI platform to connect patients with doctors and 24/7 healthcare. Its AI learns from medical datasets, patient health records, and consultation notes from its clinicians. Further, AI tools can aid in decisions on triage, the cause of symptoms, and future health predictions. The company positions its AI as a complement to clinicians, allowing more patients to be seen and brought into the system [83]. TELUS acquired Babylon Health Canada to build on TELUS Health MyCare [84].

A recent endeavour of Babylon Health is an AI chat app called GP at Hand, made for the UK's National Health Service. The app's purpose is to provide remote diagnosis, enable live monitoring of patients with chronic conditions, and offer a mobile platform for patients to connect with doctors [26]. Additional services for patients include the ability to access their health records and review previous consultation notes, book appointments, renew prescriptions, view a "digital twin" of their current state of health based on their medical history, and manage rehabilitation. With this new application, Babylon Health is taking strides to provide healthcare to those in remote areas or who need immediate attention. Through the help of IoT and AI in telehealth, the efficiency of doctors may also increase as they can virtually check on patients with less severe concerns, such as check-ups for minor surgeries [85].

Lifestyle Products

Smart Homes: Gartner's Hype Cycle

Gartner's Hype Cycle is a widely accepted model that tracks expectations for technological innovations over time. In 2016, with the introduction of smart home hubs (including Amazon's Echo, Google's Nest, Samsung's SmartThings, and Apple's HomePod) by incumbent smartphone players scaling their voice-activated operating systems, consumer expectations sat at the Peak of Inflated Expectations (Stage 2). This stage was characterized by consumers expecting smart home devices to transform every aspect of their homes within a few years. By 2018, however, smart homes had begun entering the Trough of Disillusionment, with the industry expected to reach the Plateau of Productivity within 5 to 10 years [86]. This stage was characterized by subdued consumer expectations toward the potential of smart home devices, driven primarily by the technological limitations of existing platforms and voice-activated operating systems. For example, frictions such as the inefficiency of using voice commands for menial tasks inhibited the widespread adoption of these devices. However, as technological innovation enables tasks to be increasingly automated through devices' capability to analyze user information and anticipate their needs, the industry entered the Slope of Enlightenment in 2021. By 2025, through continued decreases in power requirements along with the emergence of analytics on the edge – "the analysis of data occurring close to where the data is generated, [including] on the device generating the data" – the industry is expected to begin entering the Plateau of Productivity, fully realizing the potential of connected homes. [87]

The Connected Car

The prevalence of IoT-enabled vehicles, driven by continuous decreases in costs and power requirements (lidar sensor costs are expected to drop 65-fold over the next 16 years), has led to the democratization of advanced vehicle systems. In the past, infotainment systems, sensor-enabled safety features, and autonomous driving capabilities were offered only in high-end, luxury vehicles. In recent years, however, they have become increasingly affordable and accessible to the average car owner. One of the critical features of IoT technology is the ability for manufacturers and platform providers to regularly update their products beyond their initial intended use. Similar to software updates provided by smartphone operating systems, IoT-enabled devices can download additional features and security updates to prevent short-term obsolescence and a diminished product life cycle. One of the most salient applications is the connected car, ranging from pre-installed infotainment systems to the emerging trend of autonomous vehicles that rely on sensors and predictive algorithms to navigate roads. The importance of this feature is highlighted by the continuously increasing age of vehicles on the road, with the current US average at 12.1 years [88]. Vehicles will need continuous updates to prevent system vulnerabilities that may endanger drivers of older cars.

Workplace Safety

IoT technologies have been scaled to analyze and improve business processes in recent years, with a critical application being occupational safety and hazard detection. Through sensors, predictive algorithms, and the continuous collection of data from the environment, including from workers themselves, the safety of business operations can be substantially improved. One such process is predictive maintenance, where devices use environmental sensors to detect structural failures before accidents occur, mitigating risks in inherently dangerous occupations such as mining and underwater welding. Further, data collected from workers can be analyzed and used to improve occupational safety strategies for employees. While traditional workplace safety plans often use standardized occupational requirements (i.e., appropriate safety gear, physical ability, and training), IoT technologies allow each aspect of occupational safety to be individualized. For example, wearable health monitors (ex: an Apple Watch) can detect when a worker is distressed or at risk. In workplace accidents, rescue operations can be enhanced through device geolocation, significantly reducing the risk of death while assisting emergency responders.
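A common first-pass form of the predictive maintenance described above is flagging sensor readings that drift far from their recent baseline. The sketch below assumes hypothetical vibration readings and thresholds; real systems use far richer models, but the principle is the same.

```python
# Hedged sketch of sensor-based predictive maintenance: flag a reading
# that sits far outside the recent baseline, so a failing component can
# be serviced before an accident occurs. Values are illustrative.
from statistics import mean, stdev

def detect_anomaly(history, reading, sigma=3.0):
    """Flag a reading more than `sigma` standard deviations from the
    recent baseline of historical readings."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sd = mean(history), stdev(history)
    return sd > 0 and abs(reading - mu) > sigma * sd

# Simulated vibration readings from a haul-truck bearing sensor.
baseline = [0.50, 0.52, 0.49, 0.51, 0.50, 0.53, 0.48, 0.51]
print(detect_anomaly(baseline, 0.52))  # within normal range -> False
print(detect_anomaly(baseline, 0.95))  # abnormal spike -> True
```

In practice the baseline would be a rolling window of recent readings, and an alert would trigger a maintenance work order rather than a printout.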

Workplace Safety: Applications

Transforming mining through technology

By integrating artificial intelligence with machine learning, companies have been able to draw critical data and insights to prevent injuries in the work environment. For example, Teck Resources Ltd., a mining and mineral development company, has a digital transformation program (called RACE21) dedicated to optimizing its mining processes. One area of focus is preventing haul truck failures and further reducing workers' exposure to dangerous situations. AI has made it possible to derive key insights faster and to make effective data-driven decisions. It has allowed leadership and management teams to be more proactive and to improve the standards and performance of work. [89]

In the mining industry, wearables are being utilized to address significant health, safety, and environmental challenges such as environmental monitoring, workers’ health, training, and incident rates. Remote sensing technologies can detect factors, such as temperature and humidity, to help frontline workers anticipate potential incidents and allow companies to be proactive about the risks of overheating. Biometric devices can provide real-time alerts of overworked operators who are at risk of injury or causing accidents. Additionally, real-time virtual and augmented reality training applications are being utilized to accelerate skills and knowledge, and IoT-powered data tracking and communication technologies enable increased response capability [90].

Challenges and Risks

Cybersecurity Risk

Today, there are more than 11 billion IoT devices, and this number is expected to increase to 27 billion by 2025 [91]. A major threat of implementing IoT devices is their lack of security, as many of the devices are inexpensive and simple to build. Due to their limited computational power, they lack built-in security and secure software. In addition, these devices do not use a standard transmission technology, making it harder to update them and to establish standardized protocols [92].

IoT device users also tend to lack knowledge of cybersecurity, data privacy, and network architecture, and are often unaware of the poor security of these devices. Users often do not change default passwords, update firmware, or review permission requests before granting them. If a user has multiple IoT devices, as in a smart home, it is difficult to ensure that all devices are secured and to assess the vulnerability of each device.

Hackers can take advantage of these security flaws and users' naivety to compromise IoT devices, take control of users' networks, and gain access to other devices connected to the network. This is a major threat to companies with work-from-home programs, as hackers can reach company devices if they are connected to an employee's compromised network [93]. Hackers can also use these vulnerabilities to steal user credentials and gain access to sensitive accounts such as banking and social media. In addition, these vulnerabilities can lead to major data leaks.

Privacy Risk

Smart devices generate large volumes of sensitive data, including search history, voice recordings, locations, and contact information. These types of user data can be sold to data brokers and law enforcement agencies, which poses a major privacy risk. The constitutions of many countries protect citizens' right to privacy by protecting private property such as homes and papers, as well as digital property such as data stored on laptops and mobile phones. However, these laws do not protect sensitive user data generated by these devices and stored on cloud servers, which law enforcement agencies can simply access by buying it from data brokers without warrants. This is referred to as the “third party doctrine” [94]. Since a large amount of data is generated from smart IoT devices, law enforcement agencies can conduct unregulated mass surveillance on citizens without respecting their constitutional rights.

A recent investigation by The Washington Post found that U.S. Immigration and Customs Enforcement (ICE) has accessed private databases holding millions of phone, water, electricity, credit-report, and vehicle-registration records while pursuing immigration violations. This is an example of how law enforcement agencies can exploit commercial sources of information that they are not authorised to collect on their own. Using these databases, ICE tracked immigration violators through their utility connections: violators tend to stay off government records by not registering for government services, but still need utilities like electricity to survive [95].

Example of a Privacy Risk: Amazon Ring Doorbell - A Mass Surveillance System

Amazon Ring Doorbell

By 2020, Amazon had partnered with a total of 2,014 police departments across the U.S.A [96]. This partnership allowed law enforcement agencies to request that users release videos of their private property without a warrant. Once users agreed to release the footage, they lost any control over how it was used. In addition, Amazon developed a “Law Enforcement Neighborhood Portal”, an interactive map platform where law enforcement agencies can see active Ring cameras and request video footage directly from users. To grow this surveillance network, Amazon used the police departments to promote Ring Doorbell cameras; in exchange, police officers received free cameras when new customers signed up on the Neighbors app using their unique links [97].

IoT Interoperability Challenges

According to McKinsey & Company (2015), interoperability, which is the ability of IoT devices to work as part of a communicative network, is required to capture 40-60% of IoT's potential value [98]. This is because consumers will need to rely on data produced by the network of devices within an ecosystem to realize the total value of IoT. For example, a smart home hub calculating traffic using connected cars requires interoperability between smart home devices and GPS systems. However, through the development and wide-scale adoption of multiple platform ecosystems (i.e., Apple, Google, and Samsung) that are mutually incompatible with one another, IoT in its current state has failed to maximize value for consumers, as no single ecosystem can create all IoT products in demand. There are also inherent technological challenges that inhibit interoperability. For example, "the constant connectivity of an IoT system makes it difficult to separate a product's physical makeup from its digital components" [99], while a "lack of common software interfaces, standard data formats, and common connectivity protocols" [100] creates additional barriers.

There have been emerging approaches to mitigating the challenge of interoperability, such as vehicles connecting to smartphone operating systems wirelessly. While this may seem a relatively minor convenience feature, its impact in combination with other IoT devices can be far-reaching. Likewise, introducing standard connectivity protocols can allow vehicles to connect with other cars on the road while sharing information with other IoT devices (ex: smartwatches, home pods, and smartphones).

In a similar vein, new operating systems have emerged to be at the forefront of promoting interoperability between IoT devices. For example, RIOT is an open-source operating system specialized for IoT systems that focuses on low-power, wireless devices. Developed by a large, independent development community, RIOT is known as the Linux of IoT. However, with the movement toward IoT device interoperability being in its early stages, RIOT is constrained in the system and network-level interoperability required for full, mutual compatibility between products of various ecosystems [101].

Ethical Concerns with AI

One key constraint on the use of AI in guiding business decision-making, particularly in decisions with high potential impact, is that the reliability of the AI is contingent on the quality of the data used. The data must be free of implicit biases, which can be difficult to detect yet can have significant implications on the outcome of the AI. With the adoption of AI-enabled decision-making processes growing exponentially across industries, there are increasing ethical concerns surrounding the possibility and subsequent consequences of these biased implementations.

Example of Ethical Concern: Amazon

A critical case study that highlights this notion was Amazon’s development of an AI recruiting tool in 2014. This tool was employed to score and rank potential job candidates who were projected to be high-performing. However, a critical error in its application became apparent when “by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way” [102]. The results were particularly damaging as the historical male dominance of the technology industry ultimately resulted in the system discriminating against female job applicants. The system had taught itself that male candidates were superior, penalizing resumes that contained the word “women’s” and disregarding graduates from women-only colleges [103]. While the project was ultimately disbanded in 2017, the widespread adoption of AI in recruitment and hiring processes across diverse industries has increased these ethical concerns in parallel.

Future Considerations

Augmented Intelligence vs. Artificial Intelligence

With the displacement of human workers in recent years, many people have begun to wonder about the probability of an AI takeover. Especially with the recent pandemic, many companies are looking for ways to cut costs while increasing efficiency as a means of survival [104]. By 2025, it is estimated that robots could replace up to two million manufacturing workers. Many public figures have spoken up about their concerns regarding an AI takeover. Stephen Hawking believed that "the development of full artificial intelligence could spell the end of the human race" [105]. His concerns largely stemmed from his belief that humans, limited by slow biological evolution, could not compete against superintelligent AI. Elon Musk has also warned about the dangers of AI, stating that AI would likely overtake humans in the next five years [106]. Furthermore, he believes that "AI is far more dangerous than nukes, and the fact that there is no regulatory body overseeing its development is insane".

Artificial intelligence, the creation of machines that work and react like humans, poses a significant risk to human workers. However, augmented intelligence, which enhances the human worker and empowers individuals to work more efficiently, could mitigate that risk. For instance, in financial services, instead of completely taking over the financial planner's role, augmented intelligence can assist financial planners in offering personalized services based on the customer’s goals, capacity, and risk appetite [107]. Similarly, in the healthcare industry, the quality of patient care can be improved by incorporating augmented intelligence to reduce the likelihood of medical errors while expediting time-consuming procedures like billing and claims [108].

In recent years, augmented intelligence was ranked second place in AI technology rankings for business value, just below virtual agents. However, Gartner predicts that “Decision support and AI augmentation will surpass all other types of AI initiatives,” creeping into first place this year and then exploding as we reach 2025, becoming around twice as valuable as virtual agents [109].

Future Implications of Smart Home Ecosystems

McKinsey & Company’s research (2015) suggests that “within a decade, many [people] will live in smart homes that will feature an intelligent and coordinated ecosystem of software and devices, which will manage and perform household tasks and even establish emotional connections with us.” With this in mind, it is reasonable to project that an increasing number of household items and appliances will become interconnected within two years as part of the first stage of ecosystem development. Then, within five years, numerous devices will become connected (stage two), continuously collecting data and optimizing household activities while reducing living costs. Finally, within the next decade, connected homes will be adopted on a broad scale, featuring devices, platforms, and operating systems that leverage AI capabilities tailored to each household’s unique preferences and needs.

According to McKinsey & Company (2015), the future of smart home devices will be captured by three broad segments: Chore Automation (potential economic impact: USD $135-200B by 2025), Energy Management (potential economic impact: USD $50-110B by 2025), and Home Security (potential economic impact: USD $15-22B by 2025). These technological applications will be further supplemented by two factors: usage-based design, which is the opportunity for manufacturers to monitor the performance of their products after the sale for future product improvements; and pre-sales analytics, which is the analysis of usage data from household devices to determine whether the customer is suited for using another product or service [110].

Chore Automation in the smart home device industry will occur through sensors, computing power, and network connectivity that automate traditional home appliances. These devices will continuously collect and analyze daily usage patterns and learn household preferences to anticipate homeowners' needs. McKinsey & Company (2015) indicated that chore automation would be at the forefront of the connected home through an ecosystem of connected smart appliances that alleviate the burden of daily chores. For example, smart vacuums can learn to clean the house when homeowners are away, while coffee machines can anticipate sleep patterns to provide coffee at the most convenient time each morning.

Energy Management combines the utility of sensors and predictive algorithms to create a network of energy-efficient appliances operating around a centralized smart thermostat. Operating in partnership with local utility providers who install the smart thermostats, IoT-enabled energy management applications are expected to alleviate heating, air conditioning, and electricity use costs. Once installed, the predictive algorithms “can detect when no one is home and adjust the temperature to conserve energy. Over time the smart thermostat could learn about usage patterns and adjust heating or cooling to keep the home at the right temperature when residents are due home” while connected washers and dryers “could get information about energy prices to delay cycles during peak energy consumption periods” [111]. Google’s Nest currently claims that its smart thermostats save 20% off home energy bills through these functionalities. As sustainable practices become increasingly adopted on a global scale, the use of smart home technologies to reduce the environmental impact of homes will grow proportionally as well. According to McKinsey & Company (2015), the adoption of IoT energy-control applications could reach 25-50% in developed nations in 2025.
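The occupancy-driven setpoint logic quoted above can be sketched very simply. The setpoints, the learned return hour, and the function below are illustrative assumptions, not the behaviour of any particular vendor's thermostat.

```python
# Illustrative sketch of a smart thermostat's occupancy logic: conserve
# energy when nobody is home, and pre-heat shortly before the household's
# learned return time. Temperatures are in Celsius and are assumptions.

COMFORT_C = 21.0  # setpoint when residents are home
AWAY_C = 16.0     # energy-saving setpoint when the home is empty

def target_temperature(hour, occupied, usual_return_hour=17):
    """Pick a setpoint from occupancy and the learned usual return time."""
    if occupied:
        return COMFORT_C
    # Pre-heat one hour before the household usually returns,
    # so the home is comfortable on arrival.
    if usual_return_hour - 1 <= hour < usual_return_hour:
        return COMFORT_C
    return AWAY_C

print(target_temperature(10, occupied=False))  # mid-day, empty -> 16.0
print(target_temperature(16, occupied=False))  # pre-heat window -> 21.0
print(target_temperature(19, occupied=True))   # residents home -> 21.0
```

A real device would learn `usual_return_hour` from historical occupancy data rather than take it as a fixed parameter, which is precisely the usage-pattern learning the paragraph describes.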

Smart Home Security leverages sensors, cameras, and data analytics to protect homeowners from dangers, both immediate and future, automatically issuing alerts to emergency services when inhabitants are at risk. For example, for homeowners with young children, cameras and sensors installed near pools can detect drowning and other risks when parents are not present. In addition, “through early detection, sensors could also help reduce property damage from water leaks and fire” (McKinsey, 2015), and these functions can reduce home insurance premiums by up to 10%.

IoT Platforms

Historically, the consumer technology industry has been characterized by format wars, where a dominant platform outcompetes another to win the entire market share (ex: the videotape format war between VHS and Betamax). However, the IoT device industry has created new rivalries through platform wars, where competing device makers have attempted to lock in customers within their product ecosystems by creating high switching costs. This has resulted in the establishment and wide-scale adoption of multiple platform ecosystems, exemplified by the popularity of the IoT ecosystems created by Apple, Google, Samsung, and Amazon. With this in mind, multiple platform standards will likely evolve, rather than the most popular platform pushing its competitors to obsolescence. However, “that will present complexities both for consumers and businesses but will foster new, niche opportunities as well” [112]. Through continued technological enhancements that will increase computing power, the refined use of advanced big data analytics, and the integration of AI, the capability of IoT devices is projected to grow exponentially within the decade. With this in mind, platforms will provide the foundation to integrate different devices while providing a consistent experience for consumers [113], signifying the need for backend interoperability that is then differentiated through platform ecosystems.

Impact of AI & IoT on Job Opportunities

Impact of AI and IoT on job opportunities
Trends in job titles

While there are significant ethical concerns about the impact of AI, IoT, and automation on job opportunities across industries, the evidence suggests that these advancements have in fact created net new jobs. For example, according to Statista (2021), 1.8 million jobs will be eliminated worldwide due to AI in 2022, while 2.3 million jobs will be created alongside these losses, a net gain of roughly 500,000 jobs [114]. However, while the impact on the number of jobs is net positive, the nature of jobs is projected to change drastically. Jobs involving routine customer interactions or menial tasks will likely become increasingly obsolete in national workforces over time, while demand will grow for workers with capabilities or formal education in data analysis, technology, and engineering. According to Statista (2021), the AI-related jobs with the highest salaries in the US as of 2019 were: machine learning engineers (USD 142,859), data scientists (USD 126,927), computer vision engineers (USD 126,400), data warehouse architects (USD 126,008), and algorithm engineers (USD 109,314) [115].

Our Thoughts

What the Future Looks like

We believe that AIoT will continue to evolve as the use cases of AI and IoT begin to overlap. As an increasing number of companies start integrating AIoT, a lucrative market will emerge. Many jobs in the AIoT industry will serve niche segments, such as business security systems or manufacturing machinery. There will be an increase in data analytics companies, such as Google LLC, and in data mining jobs as the volume of data collected from consumers grows. In the future, all buildings will be equipped with AIoT capabilities, and we will no longer need to input a form of payment when purchasing merchandise or taking public transportation. Everything will be digitized through AIoT, and thousands of jobs for blue-collar workers will disappear as a result.

Future Scenario

Drew walks into a building, and facial recognition software immediately registers him as one of the occupants in the room. With his health profile completely downloaded into the system, he has little risk of health emergencies, as the robotic paramedics integrated with the building know his entire health profile. As he walks into a room to prepare for his interview, the building’s AI adjusts the room’s brightness levels according to the retina sensitivity on his profile. The AI further uses heat detection to adjust the current room temperature to ensure that Drew is comfortable. As he waits for his interview to start, he listens to the AI announce the approximate wait time until it is his turn. The AI is responsible for estimating interview wait times and knows every room an applicant is waiting in. From the moment Drew walks in the door to when he walks out, the data is stored and analyzed to improve candidate satisfaction. Once he leaves the building, the information is uploaded onto his profile and can be examined in the future.

An Illusion of Choice

The current standards for data privacy that we are entitled to will be nonexistent in approximately 100-150 years. Moving toward a futuristic world through integrating AIoT will require trading privacy for convenience, a trade-off that won’t be recognized. Currently, many people are against companies collecting and analyzing consumer data for use in marketing and research. As societal perception shifts to accept AIoT, the idea of data privacy will disappear, and having private details about every aspect of one’s life exposed will become the new normal. Because technology is built to create a more efficient world, this efficiency will leave only an illusion of choice in data privacy. Governments will have laws regulating access controls on a person’s profile rather than laws governing the type of data being collected. People will no longer protest what companies do with their data as it becomes a ubiquitous practice.


Sarah Lee Taylor Lee Erica Yu Heidi Wang Marcus Fujiwara Muzzain Wassaf
Beedie School of Business
Simon Fraser University
Burnaby, BC, Canada

