Facial Recognition

From New Media Business Blog

Facial recognition is becoming a part of our everyday lives.

Facial recognition is a form of biometric solution which uses a person’s unique facial features to identify them. Using the nodal points of the face, facial recognition software "can detect faces in images, quantify their features, and then match [these] features" to a database of stored images.[1] Although facial recognition was historically used by government border patrol and law enforcement, its growing versatility lends itself to diverse applications that can provide businesses with a competitive edge and consumers with everyday convenience.

Facial recognition offers numerous benefits on a global scale. From unlocking your smartphone to signing into online banking, it enables businesses to enhance the services they provide to customers. As the technology becomes increasingly accurate and user-friendly, companies are leveraging it by building it into their products, utilizing its security and monitoring capabilities, and strengthening it with artificial intelligence. Facial recognition is a growing technology that promises many advantages in the years to come.

This expanding technology also raises concerns among key stakeholders in society. It poses serious questions about the privacy of citizens, the future landscape of businesses, and the challenges governments face when enforcing regulations. Some organizations and activists are skeptical of facial recognition, comparing its widespread use to George Orwell’s novel Nineteen Eighty-Four.[2] The Orwellian concept asserts that increased surveillance threatens people’s freedom, and in several respects, the progression of facial recognition makes this concept a potential reality. By understanding the benefits and disadvantages of facial recognition, we can evaluate its value and impact on society.


How it Works

Simply put, facial recognition distinguishes and identifies faces by storing and analyzing different facial features.[3] The technology works in two main ways: verification and identification.

Verification System: Under this system, a photograph of a person is compared with another photograph that already exists in the database. The system compares the two photographs and gives a "Yes" or "No" answer depending on whether the two pictures match. As this system only needs to compare the presented image to the image stored in the database, it can generate results more quickly and is generally more accurate than identification systems.[4] The most common application of verification systems is unlocking a smartphone. Every time a person uses their face (or any biometric) to unlock their smartphone, the technology verifies their identity by comparing their face against the photograph they have previously stored.

Identification System: This type of system goes through a “one-to-many” matching process in order to identify who a particular person is.[5] Under this system, a photograph is run through the database, where it is matched against the pre-stored photos. The system then processes the information and returns a ranked list of the people whose features most closely match the probe image. This method is mostly used for law enforcement and border surveillance purposes.[6]
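The logic of these two modes can be sketched in a few lines of Python. The three-number “face prints”, the Euclidean distance metric, and the 0.5 threshold below are illustrative placeholders, not values any real system uses:

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors (lower = more similar)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(probe, enrolled, threshold=0.5):
    """One-to-one check: does the probe match the single enrolled template?"""
    return distance(probe, enrolled) <= threshold

def identify(probe, database):
    """One-to-many search: rank every enrolled identity by similarity."""
    return sorted(database, key=lambda name: distance(probe, database[name]))

# Toy database of enrolled face prints (hypothetical 3-number vectors).
database = {"alice": [0.1, 0.9, 0.3], "bob": [0.8, 0.2, 0.5]}
probe = [0.12, 0.88, 0.31]  # a new capture, close to alice's template

print(verify(probe, database["alice"]))  # one-to-one "yes/no"
print(identify(probe, database))         # one-to-many ranking, closest first
```

Verification answers a single yes/no question against one template, which is why it is faster; identification must score the probe against every enrolled identity before it can rank them.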

Traditional Facial Recognition Methodology

The traditional facial recognition method mainly uses supervised learning to identify a face. The supervised learning process requires a human to assign labels to each input in the dataset, and it requires both training and testing datasets. There are many different types of facial recognition software, which vary in features, but all generally involve the same series of steps: capturing, extraction, matching/comparing, and decision making.

Capturing

The first step is to capture the person’s image through a sensor (generally a camera) so that their face can be detected. This image must be of good quality for it to be useful. A picture with low resolution will make it difficult for the program to give accurate results. If you have used a camera in the past 10 years, you have seen facial detection in action. The facial detection feature in cameras is particularly important as it automatically picks out faces and makes sure they are in focus before taking a picture.[7] This same feature is also used to detect and choose faces that we want to send to the next steps of the pipeline.

Face Detection Feature on Phone Cameras

Extraction

Once the face is detected, it is localized and normalized to align it with all the other face samples in the database. This process involves cropping images so that only the oval facial region remains and converting coloured images to black and white.[8] Then the system identifies and stores information about the different facial features of the person (nodal points). These include features like the distance between the eyes, the depth of the eye sockets, the width of the nose, and the jawline. The nodal points are then used to create a numerical code that represents the face in the database.[9] This code is also called a "Face Print".[10] Another technique is to look at nodal points on the face and track how the pixels in a photograph cluster around them. This step is also called feature extraction.[11]
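As a rough sketch of this step, the code below turns a handful of hypothetical nodal-point coordinates into a numerical “face print” made of every pairwise distance, normalized by the inter-eye span so the code does not depend on image size. Real systems use far richer, proprietary encodings:

```python
import itertools
import math

def face_print(nodal_points):
    """Encode named landmark coordinates as a numerical "face print".

    The print here is simply every pairwise distance between landmarks,
    divided by the inter-eye distance so it is invariant to image scale.
    """
    names = sorted(nodal_points)
    eye_span = math.dist(nodal_points["left_eye"], nodal_points["right_eye"])
    return [math.dist(nodal_points[a], nodal_points[b]) / eye_span
            for a, b in itertools.combinations(names, 2)]

# Hypothetical pixel coordinates for four nodal points.
landmarks = {"left_eye": (30, 40), "right_eye": (70, 40),
             "nose_tip": (50, 60), "chin": (50, 95)}
code = face_print(landmarks)  # 6 normalized distances for 4 landmarks
```

Because the distances are scaled by the eye span, the same face photographed at twice the resolution produces the same print, which is the point of the normalization step described above.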

Matching/Comparing

Once all the images and their respective face prints have been stored, the software uses learning algorithms to compare a new image to the stored face prints in order to verify an individual's identity or to identify the individual (depending on the method employed and its application). Different pattern recognition techniques can be used for this step; the most commonly used fall into four categories: template matching, statistical approaches, syntactic approaches, and neural networks.[12]

Different evaluation methods are used to test the performance of different pattern recognition techniques. The two main evaluation plots are the ROC (receiver operating characteristic) curve and the PR (precision and recall) curve.[13] The ROC curve examines the relationship between the true positive rate and the false positive rate, while the PR curve examines the relationship between the detection rate (recall) and the detection precision.[14] A threshold separating positive from negative matches is then chosen; this threshold plays a major role in the decision-making step.
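The quantities behind both curves can be computed directly. The sketch below scores a small invented set of comparison results at one threshold, yielding a single point on the ROC curve (TPR, FPR) and on the PR curve (precision, recall):

```python
def evaluate(scores, threshold):
    """One ROC point (TPR, FPR) and one PR point (precision, recall) at a
    given decision threshold. Each entry in `scores` pairs a similarity
    score with whether the compared images truly show the same person."""
    tp = sum(1 for s, genuine in scores if genuine and s >= threshold)
    fp = sum(1 for s, genuine in scores if not genuine and s >= threshold)
    fn = sum(1 for s, genuine in scores if genuine and s < threshold)
    tn = sum(1 for s, genuine in scores if not genuine and s < threshold)
    tpr = tp / (tp + fn)                      # recall / detection rate
    fpr = fp / (fp + tn)
    precision = tp / (tp + fp) if tp + fp else 1.0
    return tpr, fpr, precision

# Invented comparison scores; sweeping the threshold traces out the curves.
scores = [(0.9, True), (0.8, True), (0.6, False), (0.4, True), (0.2, False)]
tpr, fpr, precision = evaluate(scores, threshold=0.5)
```

Re-running `evaluate` across many thresholds and plotting the results produces the full ROC and PR curves the text describes.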

Decision Making

After the images are compared, a decision regarding the match is made: if the similarity score exceeds the set threshold, a match is declared. Hence, the threshold plays a key role in this process, and setting it too low could jeopardize the security of the system. The picture on the right provides a summary of the facial recognition pipeline.

Modern Facial Recognition Techniques

The traditional facial recognition methodology is not very accurate, as its results are affected by factors such as the pose, lighting, and facial expressions of the person. Moreover, the traditional mechanism can be time-consuming. To mitigate these issues, researchers have developed techniques that have progressively improved in accuracy and ease of use. These techniques are explained in detail below:


Skin Texture Analysis

Skin texture analysis is an addition to the “Feature Extraction” step of the facial recognition pipeline. This system takes the approach of individually analyzing each region of pixels in the image. Instead of creating a facial imprint from measurements of nodal points (such as the distance between the eyes or the depth of the eye sockets), the computer quantifies at least three different properties within each region, such as the size and shape of, and the distance between, skin pores or wrinkles.[15] The computer assigns each characteristic a specific number, and once it has generated a number for every pixel, it pieces them together to generate the skin texture of each particular region of the face. This is like the “faceprint” in the traditional method, but for the skin. Skin texture analysis greatly improves the accuracy of facial recognition because it uses more characteristics that are unique to a person: it has been able to distinguish between identical twins and cannot be easily spoofed by a mask.
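One classical way to assign each skin region a number is a local binary pattern, sketched below on a tiny grayscale patch. Commercial skin texture analysis uses its own proprietary measures, so this is only an illustration of the pixel-coding idea:

```python
def texture_codes(image):
    """Give each interior pixel an 8-bit texture code (a local binary
    pattern): each of the 8 neighbours at least as bright as the centre
    pixel sets one bit, so the code captures the local texture around
    that pixel."""
    h, w = len(image), len(image[0])
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = {}
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            code = 0
            for bit, (dr, dc) in enumerate(neighbours):
                if image[r + dr][c + dc] >= image[r][c]:
                    code |= 1 << bit
            codes[(r, c)] = code
    return codes

# A tiny grayscale patch: a dark pore surrounded by brighter skin.
patch = [[5, 5, 5],
         [5, 1, 5],
         [5, 5, 5]]
```

Piecing these per-pixel codes together region by region gives a texture signature for each part of the face, analogous to the “faceprint for the skin” described above.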

Deep Learning
The simplest approach to facial recognition is to compare the unknown face with all the pictures stored in the database, as explained in the traditional method above. However, when comparing a picture against a database of millions or billions of people, this approach becomes unreasonable and very time-consuming. A more popular approach now is Deep Convolutional Neural Networks, or deep learning. Unlike the traditional way, where a human tells the computer which characteristics to measure (nodal points, the distance between the eyes, etc.), in this technique the computer itself figures out which measurements to collect.[16] It turns out that "deep learning does a better job than humans at figuring out which parts of a face are important to measure".[17] The network is trained to automatically identify different features of faces and to calculate numbers (an output vector) based on them.
If two different images of the same person are passed through the network, the output should be very similar, and if images of two different people are passed through the network the output should be very different. This approach is used by many big companies such as Google and Facebook.
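This comparison of network outputs can be sketched as follows. In practice the embeddings come from a trained deep network; here they are hand-made vectors, and the 0.8 cosine-similarity threshold is arbitrary:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embeddings (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def same_person(emb_a, emb_b, threshold=0.8):
    """Decide whether two embeddings depict the same face: similar network
    outputs imply the same person, dissimilar outputs imply different people."""
    return cosine_similarity(emb_a, emb_b) >= threshold
```

Two images of the same person should yield nearby vectors and pass the check; images of different people should yield distant vectors and fail it, which is exactly the property the network is trained for.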
3-D Facial Recognition

Three-dimensional facial recognition is a type of facial recognition where a 3-D model of the face is used. It has been shown that 3-D face recognition methods can achieve significantly higher accuracy than 2-D recognition, even competing with fingerprint recognition.[18] This type of technique is not affected by discrepancies in the image such as lighting, angle, and emotions of the person. 3-D Facial Recognition uses distinctive features of the face that do not change over time (depth of the eye socket, nose, and chin) to identify the subject. 3-D facial recognition can even be used in darkness as it uses depth and axis of measurement, which is not affected by lighting. [19] The main limitation of 3-D face recognition methods is the acquisition of a 3-D image. Capturing a 3-D image usually requires a range camera, which takes images from multiple angles. Alternatively, a video capturing all the angles of the person can also be used for this technique to work. [20]

Facial Recognition Timeline

1870 Bertillonage

Facial recognition first appeared when Alphonse Bertillon developed “Bertillonage”, an anthropometric system that identifies an individual based on body measurements.[21] The idea of compiling body measurements emerged when criminals changed their names to evade the justice system; taking basic measurements of the body solved the problem of aliases. For a brief time, police authorities worldwide used this system to identify criminals, until it was discovered that some individuals could share the same measurements.

1960s Semi-Automated Facial Recognition

In the 1960s, the US government hired Woodrow Bledsoe on a contract to build a semi-automated facial recognition system. This system located facial features on photographs by using coordinates on a grid and a stylus that emitted electromagnetic pulses, manually recording coordinate locations for a variety of facial features (eyes, nose, mouth, hairline).[22] The metrics had to be inserted into a database manually, after which the system could automatically match an individual’s measurements with an image in the database.

1970s 21 Facial Markers

Goldstein, Harmon and Lesk enhanced the accuracy of facial recognition by using 21 specific subjective markers to measure the face. Such markers included lip thickness and hair colour. [23]

1988 First semi-automated facial recognition system implemented

In 1988, the Los Angeles County Sheriff’s Department used composite drawings and video images of a suspect to match them to a database containing mugshots.[24]

1988 to 1991 Eigenfaces

In 1988, Sirovich and Kirby used linear algebra to show that less than one hundred values were necessary for accurately coding a normalized face image. A few years later, in 1991, Turk and Pentland built on the Eigenface approach by discovering how to detect faces within images, which was an integral step towards automatic facial recognition.[25]

1993 to 2003 FERET Program implemented

As facial recognition became more useful in practical settings, the Face Recognition Technology (FERET) program was developed by the US Department of Defense to inspire innovation and commercialization of facial recognition. The program created a database of over 2,000 facial images representing over 800 people.

2000s Face Recognition Vendor Test

The Face Recognition Vendor Tests (FRVT) are conducted to evaluate a variety of facial recognition systems that are commercially available. These evaluations assist law enforcement agencies and the US government in choosing which system to purchase and deploy. These evaluations are still being made today as more facial recognition technology becomes available.[26]

2002 FaceFinder tested at the Super Bowl

FaceFinder, a facial recognition system, was tested on approximately 100,000 fans present at the 2002 Super Bowl.[27] It detected 19 petty criminals who matched a database created by Tampa police. No one was detained or questioned, as this was an experiment to test FaceFinder’s accuracy.

2011 FaceFirst implemented in an airport

Another facial recognition system, called FaceFirst, was launched at Panama’s Tocumen airport to apprehend multiple suspects. This airport particularly benefited from the system since it was a common destination for drug smuggling and organized crime. This was the first major facial recognition system implemented at an airport.

2010 to Present Facial recognition becomes available on social media platforms

Since 2010, Facebook has made several attempts to integrate facial recognition into their platform for the purpose of identifying their users. Features include tagging suggestions in pictures and identifying the user in photos they aren’t tagged in. Facebook’s storing of facial templates has been met with scrutiny and class-action lawsuits.[28]

Market

The facial recognition market is still in its growth stage, as can be seen by analyzing the Gartner Hype Cycle. Facial recognition is placed on the Slope of Enlightenment, which consists of technologies whose uses and corporate benefits companies are beginning to understand. Providers of these technologies have also started developing next iterations, as seen on the consumer side with newer versions of facial recognition authenticating users on mobile phones. Finally, technologies at this stage attract investment in testing and pilot programs, as evidenced by many companies demoing facial recognition at trade shows and in beta programs on devices. However, the technology is still not used by more conservative companies.[29]

The market itself is quite large: according to a report by ResearchAndMarkets.com, the global market for facial recognition alone was valued at USD $3.85 billion in 2017 and is forecast to grow to $9.78 billion by 2023.[30] For biometric scan software in general, a 2017 industry report by IBISWorld indicates that in the United States, facial recognition represents 8% of a USD $5.5 billion industry.[31] The dominant biometrics software remains fingerprint recognition (48.8%), followed by retina/iris scanning software (29.8%).[32] The biometrics market is also growing: according to Statista, the U.S. market was worth $3.5 billion in 2016 and is projected to reach $7.2 billion in 2020.[33] Of that 2016 market, facial recognition made up $500 million, projected to grow to $1.1 billion in 2020.[34]
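As a sanity check on the ResearchAndMarkets forecast, the implied compound annual growth rate can be computed from the 2017 and 2023 figures (taking both as USD billions):

```python
def implied_cagr(start_value, end_value, years):
    """Compound annual growth rate implied by a start and an end value."""
    return (end_value / start_value) ** (1 / years) - 1

# Global facial recognition market: USD 3.85B (2017) -> USD 9.78B (2023).
growth = implied_cagr(3.85, 9.78, 2023 - 2017)
```

This works out to roughly 17% per year, a growth rate consistent with the "slope of enlightenment" stage described above.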

Facial Recognition on the Gartner Hype Cycle.
Size of biometrics market in the U.S. - chart from Statista.


Facial Recognition Providers

There are many companies specializing in providing facial recognition technologies. Some of these companies provide services for a wide range of purposes, such as German-based company Cognitec. Their overarching product is known as FaceVACS, a software engine that can be licensed and customized for specific use cases.[35] Cognitec lists more specific products based on FaceVACS, such as FaceVACS-DBScan ID for an ID management system[36] or FaceVACS-VideoScan to use facial recognition in security video footage.[37] Customers of Cognitec include consulting firm Accenture and software company Adobe.[38]


China is quickly advancing in facial recognition technologies, a trend reflected in the rise of startups in this field. One prominent company is Beijing-based Megvii, which provides a facial recognition platform called Face++ for “more than 300,000 developers in 150 countries”.[39] One company using Megvii’s identification platform is Xiaohua Qianbao, an online loans/offers mobile app that features facial authentication.[40] Face++ is also used by a variety of companies such as Alipay, Lenovo Group, Xiaomi, China Merchants Bank, and China Citic Bank.[41]


Mexia One demoes facial recognition at Mobile World Congress 2018
Another large player from China is Hong Kong-based SenseTime, whose platform is used in Chinese street surveillance.[1] Features range from ID verification to entertainment purposes such as beautifying photos.[2] Their line of “Sense” products includes SensePortrait-S, a static facial recognition server used to detect and analyze faces from images.[3] Another “Sense” product, SenseAR, maps augmented reality elements over faces, similar to Snapchat filters.[4] SenseTime’s partners include police authorities and more than 400 companies such as Nvidia, Qualcomm, and Xiaomi.[5]


Mexia One, part of the larger company Mexia Interactive, is a facial recognition provider based in Winnipeg, Canada. Its recognition software covers watchlists/alerts, event security/access, and mobile I.D./payments.[6] The company demonstrated its event access system at Mobile World Congress 2018, partnering with the GSM Association and PwC.[7] The demonstration was an opt-in option for VIP guests, enrolling 4,225 guests for the event.[8]


As the demand for facial recognition grows, we can anticipate a wave of facial recognition companies providing leading-edge technology to businesses and government agencies. It will be interesting to see which country will dominate the facial recognition sector as the technology becomes more widely adopted around the world.

Consumer Application

Facial recognition has become quite common in consumer use in recent years. We now see implementations of such features in mobile phones, laptop computers, and other devices equipped with cameras and related sensors. Many of these consumer implementations enhance biometric authentication, but other purposes, such as entertainment, have also been growing in popularity.

Consumer Authentication

Authenticating users on mobile devices with facial recognition has become increasingly standard in 2018, with implementations such as Microsoft’s Windows Hello and Apple’s Face ID being in the forefront. The trend of this technology has expanded as more and more manufacturers such as Samsung have introduced their own facial recognition systems in an effort to improve on the competition in accuracy, security, and speed.

Implementations

Windows Hello
Microsoft demo pictures showing the benefits of near infrared images for consistency.

Windows Hello debuted as a security feature of Windows 10 in March 2015.[9] This feature allowed unlocking Windows 10 devices via various biometrics, such as facial recognition and fingerprints.[10] In order to support facial recognition, Windows Hello requires infrared sensing cameras on the device to distinguish between a real person and a photograph.[11] These cameras take near infrared images, which are beneficial in that lighting conditions have little effect on the image, improving the consistency of facial features.[12] Windows then uses an algorithm to create a representation vector from those images based on certain landmark/alignment points, such as the eyes or mouth.[13] The representation vectors are similar to histograms, marking the dark/light areas of the face.[14] As a result, an actual image of the face is not stored.[15] Once the enrolment profile is created, it can be compared with new representations created when a user authenticates.[16] Passing a certain threshold will accept the representation and allow the user to log in.[17]

Apple Face ID
TrueDepth camera system on the iPhone X.
The iPhone X's facial recognition feature is called Face ID, which can identify a user’s face at a glance to verify purchases and unlock the phone. A TrueDepth camera system is responsible for Face ID’s recognition and authentication capabilities.[18] A dot projector (some news outlets refer to it as an infrared emitter) projects "over 30,000 dots in a known pattern onto the user's face".[19] A depth map is created from the dots and used in conjunction with an infrared image to verify that the current face is indeed the user’s face.[20] A proximity sensor is used to detect whether the user is close enough for Face ID to activate.[21] An ambient light sensor detects light levels, and in low-light situations the Flood Illuminator shines infrared light onto the face,[23] enabling a user’s face to be recognized in the dark.[22] Finally, the infrared camera is also able to pick up sub-surface features from the skin to prevent Face ID from being spoofed by an average mask or picture.[24]
Samsung Face Recognition

Android has long implemented a rudimentary version of facial authentication with Face Unlock, first released in Android 4.0 Ice Cream Sandwich with the Samsung Galaxy Nexus in 2011[25] and included in subsequent releases of the Galaxy S line. However, with the 2018 Samsung Galaxy S9, Samsung released its own facial authentication feature known as Intelligent Scan, which combines facial recognition with its existing iris scanning technology.[26] The process first utilizes the front facing camera, which creates a 2D map of the user’s face used for authentication.[27] The phone first attempts to authenticate using this image and only activates the infrared iris scanner if that process fails. If both fail, the phone combines both sensors to authenticate the user.[28]
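The fallback order described for Intelligent Scan can be sketched as a simple cascade. The check functions below are placeholders standing in for the actual sensor checks, not Samsung's API:

```python
def intelligent_scan(check_face, check_iris, check_combined):
    """Fallback cascade: try the 2D face image first, then the iris
    scanner, then both sensors combined; report which method succeeded."""
    if check_face():
        return "face"
    if check_iris():
        return "iris"
    if check_combined():
        return "combined"
    return None  # every biometric method failed

# Example: the 2D face check fails, so the iris scanner takes over.
result = intelligent_scan(lambda: False, lambda: True, lambda: False)
```

Ordering the cheap, fast check first and escalating only on failure is a common design for authentication cascades, which matches the behaviour the article describes.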

Security Comparison

Accuracy and security of the recognition system are important to ensure the intended user is authenticated. The documentation on Windows Hello reports that this is measured by two numbers: the False Accept Rate (FAR) and the True Accept Rate (TAR). FAR describes how often a false result is accepted, while TAR describes how often the correct result is accepted. Windows Hello facial recognition sensors must meet the requirement of a FAR < 0.001%, meaning accepting less than 1 false result in every 100,000 attempts, and a TAR > 95%, meaning the acceptance rate of the correct result should be greater than 95%.[29] Apple claims that Face ID's FAR is 1 in 1,000,000,[30] meaning that it may accept a false result once in 1,000,000 attempts. However, no TAR is given for Face ID, and neither number is provided for Samsung’s Intelligent Scan.
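Both rates are simple ratios of accepted attempts, as the sketch below shows. The trial counts are invented purely to illustrate the Windows Hello bar:

```python
def far(false_accepts, impostor_attempts):
    """False Accept Rate: fraction of impostor attempts wrongly accepted."""
    return false_accepts / impostor_attempts

def tar(true_accepts, genuine_attempts):
    """True Accept Rate: fraction of genuine attempts correctly accepted."""
    return true_accepts / genuine_attempts

# Windows Hello's stated bar: FAR < 0.001% (i.e. < 1e-5) and TAR > 95%.
meets_far = far(1, 200_000) < 0.00001   # e.g. 1 slip in 200,000 impostor tries
meets_tar = tar(97, 100) > 0.95         # e.g. 97 of 100 genuine tries accepted
```

Note that exactly 1 false accept in 100,000 attempts would equal, not beat, the 0.001% bar, which is why the requirement is stated as a strict inequality.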

The final component to security is blended with privacy as it refers to how biometric data is stored. According to Microsoft, Windows Hello biometric data is encrypted before being stored locally on the device[31] and “is never sent to websites or applications for authentication”.[32] With regards to Face ID, its mathematical data is encrypted and stored in the Secure Enclave[33], which is its own processor that loads separately from the phone to encrypt and secure biometric data.[34] Samsung stores its encrypted biometric data on local hardware as well, and is also protected by the Knox platform.[35] This is a security platform that “is designed to separate, isolate, encrypt, and protect work data from attackers”.[36]

Security Breaches due to Accuracy
3D printed mask that fools Face ID on iPhone X.

Despite the above claims on security, many reports have surfaced on the failures of these facial recognition systems. In December 2017, it was reported that certain Windows 10 devices that did not have an anti-spoofing feature (either because Windows 10 wasn’t updated or because the feature was disabled after updating) were vulnerable to an exploit that allowed printed infrared photos to unlock the device.[37] Reports of Face ID being spoofed were also well documented, with one iPhone X being unlocked by a mask[38] and another being unlocked by the owner’s 10-year-old son.[39] Samsung’s Intelligent Scan is reportedly not much improved from the Face Unlock implementation in the Galaxy S8, which could be fooled by a normal photograph. This is further evidenced by certain features, such as Samsung Pay and Secure Folder, not supporting Intelligent Scan and requiring the use of the fingerprint scanner.[40] These events raise significant concerns about security breaches as more apps allow facial recognition for sign-in, such as HSBC’s mobile banking app. Mobile phone companies, such as Samsung, also allow users to log on to many different websites using “Samsung Pass”, which uses biometrics for verification.[41] If spoofing occurs, it could lead to many issues for consumers and businesses using these mobile features.

Speed

One metric that is important for consumer authentication is speed, as users lock and unlock their mobile devices many times during the day. With Windows Hello, the average duration for authentication is less than two seconds.[42] In a non-scientific test, unlocking the iPhone X using Face ID took an average of 1.5 seconds, although excluding some factors, such as the time the screen takes to come on, reduces that number to 1.16 seconds.[43] While there are no exact numbers on how fast the Samsung Galaxy S9’s Intelligent Scan unlocks the phone, CNET reports that it is faster than Face ID.[44]

Consumer Entertainment

Facial recognition has been integrated into many technologies that serve as entertainment. With the mission of keeping users engaged, digital companies specializing in leisure and entertainment are using facial recognition as part of their strategy to gain a competitive edge against one another. In the US, Facebook uses facial recognition to aid the user in tagging their friends in photos. A variety of video games offer real-time facial tracking for a more immersive user experience. Snapchat uses facial recognition to map out a user’s face and apply decorative lenses and filters. It may seem like all fun and games, but companies are taking facial recognition very seriously as evidenced by their heavy investment in such technology.

Video Games

Facial recognition has elevated the gaming industry for both developers and gamers. Developers are able to integrate facial recognition into video games to better understand the gamers’ emotions as they play the game, aiding them in the testing phase. It also enables gamers to have a better gaming experience, as the game becomes more realistic and dynamic.
The room floods when the gamer gets more anxious
Affectiva and Nevermind

Developers of a video game called Nevermind teamed up with Affectiva to use facial recognition as a pivotal component of the game. The slogan of Nevermind is that “the greatest enemy is the one inside your head”[45]. The objective is to control your anxiety so you can proceed to the next level. This immersive gaming experience is achieved with the help of Affectiva, a spin-off from the MIT Media Lab which uses software that analyzes emotions through facial expressions.[46] Using Affectiva technology, the game analyzes a gamer’s reaction through a non-invasive webcam as they play. If the gamer is too anxious, the room will begin to flood or spikes might come out of the floor making it extremely difficult for the gamer to proceed until they calm themselves.[47] This is a unique example of how Affectiva’s facial recognition services can enhance different industries and provide a captivating user experience. Affectiva services are also provided to other industries such as automotive, robotics, education, healthcare, experiential marketing, retail, human resources and video communication.[48]
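Nevermind's biofeedback loop can be sketched as a mapping from an anxiety score to an in-game effect. The thresholds below are invented for illustration, as the game's actual tuning is not public:

```python
def game_response(anxiety):
    """Map an anxiety score from facial-expression analysis
    (0.0 = calm, 1.0 = panicked) to an in-game effect."""
    if anxiety >= 0.8:
        return "spikes"    # extreme anxiety: spikes come out of the floor
    if anxiety >= 0.5:
        return "flooding"  # elevated anxiety: the room begins to flood
    return "calm"          # the player has calmed down; play proceeds

# Each frame, the webcam-derived score drives the environment.
print(game_response(0.65))
```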

Cubic Motion and Hellblade

Another company elevating the way we play video games is Cubic Motion. A self-proclaimed computer vision company, Cubic Motion specializes in real-time facial tracking.[49] The technology tracks facial movements, and these movements are reflected by the character in the respective video game. Based in England, the company is sought out by well-known game developers. Since 2009, Cubic Motion has worked alongside developers to create highly acclaimed video games such as “Call of Duty: WWII”, “Spider-Man”, “Batman: Arkham Knight”, and “Hellblade: Senua’s Sacrifice”.[50] Going beyond video games, Cubic Motion is also venturing into the film industry, which further accentuates the versatility of facial recognition within the consumer entertainment industry.


Hellblade demo using Cubic Motion technology

Mobile Application Industry

Candy Crush advertises their game with a colourful Snap Lense on Snapchat
Snapchat

About Snapchat

Snapchat is a mobile application which enables its users to send photos, videos and messages to private contacts, or the broader Snapchat community. A key part of the app is its variety of “lenses”, which map detailed points on the user's face, augmenting users into different animals, their own best friends (through face swapping), or prettier versions of themselves. Snapchat uses a trained “point-mask” to match the user’s face based on the data it is getting from the user’s camera at 24 frames per second.[1] Snapchat developed its “point-mask” by training its system through the manual marking of facial features (nose, mouth, eyes) on thousands of faces.
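At its very simplest, fitting a trained mean “point-mask” to a new frame is a similarity transform that carries the template's eye positions onto the detected eyes, as in the toy sketch below. The template coordinates are made up, and real lens pipelines fit far more flexible deformations:

```python
def fit_mask(template, template_eyes, detected_eyes):
    """Map a template point-mask onto a detected face with a 2-D similarity
    transform. Points are complex numbers (x + y*1j); z -> a*z + b is the
    unique scale/rotation/translation sending the template's two eye
    anchors onto the detected eye positions."""
    t0, t1 = template_eyes
    d0, d1 = detected_eyes
    a = (d1 - d0) / (t1 - t0)   # combined scale and rotation
    b = d0 - a * t0             # translation
    return [a * p + b for p in template]

# Invented mean-shape points: left eye, right eye, mouth corner.
template = [30 + 40j, 70 + 40j, 50 + 70j]
# Eyes detected in the current camera frame (face twice as large).
fitted = fit_mask(template, (template[0], template[1]),
                  (130 + 140j, 210 + 140j))
```

Repeating this fit on every frame (at 24 frames per second, per the article) keeps the mask glued to the face as it moves.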

Ad-Revenue Business Strategy

Given that Snapchat is a free app, it must make revenue through different channels. Snap lenses have become integral to their revenue stream, as companies and artists publish lenses that promote products, songs, and events. For example, Candy Crush advertised its game on Snapchat using a fun filter which rains down confetti when the user opens their mouth. By helping companies promote their products, Snapchat earns ad-revenue while enticing users to come back to the app to check out new lenses.

Using Facial Recognition to Gain a Competitive Edge

Companies across industries are using facial recognition to gain an edge against competitors. We observe this happening with Snapchat and Facebook. In 2015, Snapchat acquired Looksery, a Ukrainian start-up, for over $150 million.[2] Looksery is the technology company responsible for creating the enticing Snapchat lenses for users to enjoy.
Snapchat may increase privacy setting capabilities
To stay competitive with Snapchat, Facebook acquired a start-up called FacioMetrics.[3] While facial recognition is not the only strategy these companies are deploying to compete with each other, they are investing millions of dollars as a tactic to keep users entertained with their respective applications.

Competing for Privacy

Snapchat’s privacy settings change very often, making people concerned that the company may begin storing their faces and using them for other purposes.[4] Although a database of faces may seem concerning, it also presents an opportunity for Snapchat to protect users’ privacy. Snapchat’s CEO, Evan Spiegel, filed a patent which outlined a process of storing users’ faces in a database, using facial recognition to identify users, and adjusting photos based on each user’s privacy settings. Using facial recognition, Snapchat would scan a newly taken picture to detect other users in it. If a user has set their privacy settings so that others can’t post photos of them without permission, an emoji may pop up to conceal their identity in their friend’s photo.[5]

Comedian Kumail Nanjiani posts his art doppelgänger on Twitter

This type of feature has not been officially launched on the platform yet, but once it is introduced, it will be interesting to see how competitors will respond. As users become increasingly aware of the privacy concerns, higher privacy settings may become the new industry standard, with Snapchat gaining first-mover advantage if they can act fast enough.

Google Arts & Culture

Although the Google Arts & Culture app was launched in 2016, it didn’t grow in popularity until a facial recognition feature was released in January 2018.[6] Before this feature was introduced, Google Arts & Culture functioned mainly as an art catalogue showcasing art pieces throughout history. Once facial recognition was introduced, it skyrocketed to become the top downloaded app, at one point surpassing Facebook Messenger and YouTube. The main appeal of the app is its ability to match a user’s selfie with their art doppelgänger. The matching process is simple: the user aligns their face with a small square and takes a selfie, and the app then uses facial recognition technology to analyze the user’s features and match them with historical artwork.[7] Its popularity can also be attributed to celebrities who publish their doppelgängers on social media. While many people happily share their artwork matches on social media, privacy experts have raised the same concerns that surround other apps using facial recognition: they wonder whether Google is using a cute application to build a database of our faces.

Peekabeat

Released in 2017, Peekabeat is an iOS app which captures a user’s facial expression and suggests Spotify playlists based on their mood.[8] Users can take selfies that express emotions like anger, sadness, disgust, happiness, and surprise. Based on the facial features in the selfie, the app detects the emotion. Once the user confirms it, the app matches the emotion with an assortment of playlists. For example, happiness suggests playlists which include the song “Fun, Fun, Fun” by Pharrell Williams and a variety of sing-along songs for children. The app was built in collaboration with Microsoft and an independent digital creative agency called AQuest.[9] AQuest developers used Microsoft Azure’s facial expression coding system to develop a variety of capabilities. As of November 2017, Peekabeat had garnered more than 4,000 downloads.

Business Application

Before facial recognition was widely adopted by consumers, businesses had started leveraging this technology to improve their internal operations and create value for their customers. As the technology is becoming more sophisticated, its use is expanding into various industries including retail, marketing, banking, and hospitality, to name a few. Companies are investigating and experimenting with how facial recognition can give them a competitive edge. This section explores how businesses in different industries are using facial recognition technology to improve their processes and add value to their customers.

Retail

Facial recognition technology is playing a huge role in the evolution of the retail industry. Some of the applications of facial recognition technology in the retail industry include theft prevention, customer experience customization, and cashier-less stores.

Theft Protection

Theft has long been a huge problem for retailers. It not only impacts a company’s bottom line but also hinders the shopping experience, as it limits product availability. In 2016 alone, retailers lost around $50 billion due to theft.[10] In recent years, a rising number of retailers have deployed facial recognition technology in their stores, as it has dramatically dropped in cost and increased in accuracy. Retailers refer to facial recognition cameras as a “game-changing technology”, as they provide nuanced information that retailers never previously had access to.[11] The technology has cut down the time it takes to identify, verify, and catch a shoplifter from an hour to a few minutes. Within a few seconds of a person entering the store, the system can identify whether the person has been caught shoplifting in the store before.[12] When combined with business analytics and statistics, facial recognition can provide crucial information about shoplifting patterns and trends, such as what time of day is most prone to shoplifting and the distribution of shoplifter activity over a given period.[13] Access to this type of information allows retailers to optimize their security procedures accordingly and save costs.

This retail loss prevention system works by maintaining a database of photos of past shoplifters. Every time a person enters the store, the camera takes their picture and compares it with the pictures stored in the database. If a match occurs, an alert is sent to an official's smartphone or computer.[14] The official can then verify whether the person was correctly identified by the system and take necessary action according to the store’s protocol. If a customer’s picture does not match anyone in the database, their picture is automatically deleted. Peter Trepp, CEO of FaceFirst, a technology leader in retail surveillance tools, describes the process: “FaceFirst detects known shoplifters and violent criminals as they walk into a retail environment. It works well in crowds and can detect and recognize multiple faces at the same time.”[15] The video below explains FaceFirst’s approach to facial recognition in more detail.


FaceFirst Introduction Video
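The screening loop described above — capture, compare against the enrolled database, alert on a match, discard otherwise — can be sketched as follows. This is an illustration only: the embedding representation, cosine-similarity comparison, and threshold value are all assumptions, and FaceFirst's actual matching pipeline is proprietary.

```python
import math

ALERT_THRESHOLD = 0.9  # similarity required to alert staff (assumed value)

def cosine(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def screen_visitor(embedding, known_offenders):
    """Compare one capture against the enrolled database.

    Returns the best-matching record above the threshold (an alert that a
    human official then verifies), or None — in which case the capture
    would be deleted, mirroring the policy described above."""
    best = None
    for record in known_offenders:
        score = cosine(embedding, record["face"])
        if score >= ALERT_THRESHOLD and (best is None or score > best[0]):
            best = (score, record)
    return best[1] if best else None
```

The human-verification step matters: the system only raises an alert, and an official decides what action the store's protocol calls for.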

Although many stores use facial recognition technology, most companies do not openly admit to using it due to the privacy concerns attached to it. Walmart first experimented with facial recognition technology in 2015 to identify and catch shoplifters, but has since claimed that it no longer uses it.[1] However, as the general public becomes more familiar with the technology, several retailers have started openly acknowledging its use and its benefits. Retailers are now developing a national shoplifter database; each additional retailer who adopts the security camera technology and shares its data incrementally increases the value of the system for all members.[2]

Customizing Services

Recent research suggests that companies’ number one priority over the next few years is to convert more leads into customers.[3] Facial recognition technology can play a leading role in achieving this by helping retailers understand the customer journey, personalize the customer experience, and anticipate customer needs. Using facial recognition technology, companies can examine each point in the customer journey and determine how it impacts the customer experience. For instance, they can use it to determine whether customers are frustrated during self check-out or confused by the layout of the store. Walmart recently patented a technology that scans all customers' faces in real time and detects consumers who look distressed, confused, or upset. The system then notifies employees, who approach the customer and offer assistance.[4] Some firms are also using facial recognition to change the store’s music according to how customers are feeling.[5]

The real promise of facial recognition technology lies in the data it can collect about users, which can then be analyzed to build a comprehensive understanding of individual customers. Facial data can also be paired with customer demographics (gender, age, ethnicity), loyalty metrics, and other product purchase information.[6] This information can be used to target demographics differently and even help identify VIP customers for preferential treatment.[7] When combined with predictive modeling, it can support a proactive approach that anticipates consumer behavior. In the past, predictive models were run almost exclusively on structured data, which had an accuracy rating of 60% to 70%. By incorporating unstructured human data from facial recognition software, social reviews, and comments, accuracy ratings can reach the 90% range.[8] Facial recognition technology can be used in innovative ways to deliver value to customers.
Tencent, the developer of China’s most popular mobile payment service, WeChat, has partnered with fashion companies Jack & Jones and Vero Moda to provide an “intelligent retail experience” to customers.[9] Customers can receive custom, suitable outfit options from the store’s bestseller product library simply by scanning their faces in front of in-store screens.[10]

Self Checkout Stores

Futuristic-sounding cashier-less stores are now a reality, with Amazon leading the way. Amazon Go is a grocery store operated by the online retailer Amazon, which currently has three locations in Seattle, New York City, and Chicago.[11] It offers customers a seamless experience by allowing them to purchase products without a cashier or a check-out system. All customers have to do is scan the Amazon Go app at the entrance, put the items they want to buy in their shopping bag, and walk out of the store once they are done.[12] They do not even need to take out a credit card; Amazon charges them directly through the Amazon Go app. According to Amazon, a customer's online shopping cart is updated in real time every time the customer grabs an item off the shelf or puts it back.[13] Amazon has achieved this by installing hundreds of small cameras all over the store, which use high-level computer vision, facial recognition, sophisticated machine learning, and sensor fusion technology to detect and update what consumers are buying in real time.[14] The technology allows Amazon Go to streamline its value chain, predict exact consumer demand, and significantly cut variable costs. The rest of the retail industry is likely to follow the path Amazon has paved. A start-up called Standard Cognition has recently raised $5.5 million to help businesses create a cashier-less experience.[15] It uses powerful cameras, machine vision, image recognition, and facial recognition to detect what a specific person is picking up and walking out the door with.[16] Customers use an app to enter and exit the store, similar to Amazon Go.
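The cart-update logic behind this "just walk out" flow can be sketched as a toy model: each sensor-fusion event (a shopper picking up or returning an item) adjusts a virtual cart, and the total is charged through the app on exit. The event names, SKUs, and prices here are illustrative assumptions, not Amazon's actual design.

```python
from collections import Counter

class VirtualCart:
    """Toy model of the real-time cart update described above."""

    def __init__(self):
        self.items = Counter()  # SKU -> quantity currently held

    def on_event(self, kind, sku):
        """Handle a camera/sensor event: 'pick' adds, 'return' removes."""
        if kind == "pick":
            self.items[sku] += 1
        elif kind == "return" and self.items[sku] > 0:
            self.items[sku] -= 1

    def walk_out(self, prices):
        """Shopper leaves; the total is charged through the app."""
        return round(sum(prices[sku] * n for sku, n in self.items.items()), 2)
```

For example, a shopper who picks up milk and bread but puts the bread back is charged only for the milk.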

Although this technology augments the customer experience, it raises many questions about privacy and ethics. Standard Cognition has stated that its system anonymizes data so that customers are not chased by targeted advertisements.[17] There is speculation about where Amazon plans to take this technology. The information Amazon Go sensors gather about customers could easily be utilized by Amazon for its other businesses, such as Whole Foods and its e-commerce platform. Some groups are worried about the effect this technology will have on the job market, as there are around 3.5 million cashiers in the United States.[18] No matter where this technology heads, it is poised to revolutionize the retail industry.


Introducing Amazon Go

Banking

Over the years, companies have tried to make payments as seamless and easy as possible for their customers. Facial recognition is the next jump in this sector, as it requires only the person’s face; customers do not need to remember passwords or carry credit cards. It is also arguably safer, as it does not require the customer to put their information on the web and helps prevent identity theft. Facial recognition is already being used in stores and at ATMs. In 2016, MasterCard launched a new selfie pay app called MasterCard Identity Check, where customers simply open the app to confirm a payment using their camera.[1] Chinese e-commerce firm Alibaba and affiliate payment software Alipay are also planning to apply the software to purchases made over the Internet. In 2017, Alipay launched a ‘Smile to Pay’ facial recognition system at a KFC outlet in Hangzhou.[2] The system recognizes a face within two seconds and is even able to differentiate between people wearing wigs and makeup.[3] To ensure a secure transaction, the system verifies the scan by sending a mobile alert to the customer.[4] CaliBurger is also piloting facial recognition in its stores as part of its loyalty program, which enables customers to quickly order and pay at kiosks. The loyalty program stores customer information, suggests meal packages the customer has ordered in the past, and allows them to pay by flashing a smile at the kiosk camera. The ordering experience is reduced to just a few seconds with this customized technique. CaliBurger has addressed privacy concerns by confirming that it only stores an individual’s unique faceprint; it does not store faces or other sensitive identification information in its systems.
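The ‘Smile to Pay’ flow described above is effectively two-factor: a face match triggers a mobile alert, and the charge is captured only once the customer confirms it. A minimal sketch, assuming face scans are modelled as unit-length feature vectors and an arbitrary similarity threshold (Alipay's real system is, of course, far more sophisticated):

```python
def smile_to_pay(live_scan, enrolled_scan, confirm_on_mobile, threshold=0.95):
    """Toy two-step authorization: face match, then mobile confirmation.

    `confirm_on_mobile` stands in for the customer responding to the
    alert sent to their phone."""
    # dot product == cosine similarity, since scans are assumed unit-length
    similarity = sum(a * b for a, b in zip(live_scan, enrolled_scan))
    if similarity < threshold:
        return "declined: face not recognized"
    if not confirm_on_mobile():
        return "declined: not confirmed on mobile"
    return "approved"
```

The mobile confirmation step is what keeps a look-alike (or a photo of the customer) from completing a payment on a face match alone.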

Automobiles

Affectiva's Emotion Recognition Technology

Affectiva, as mentioned before, is working towards humanizing AI so that it can effectively and accurately determine human emotions.[5] While the technology is currently used only for market research, Affectiva now plans to apply it in automobiles. In fact, it has developed an emotion recognition engine using deep learning that promises to “make cars more human”.[6] The engine uses a camera to detect the driver’s face in real time and computer vision algorithms to identify key landmarks on the face and analyze pixels to classify facial expressions. Combinations of these facial expressions are then mapped to emotions, including joy, surprise, anger, and fear, along with other attributes such as the driver’s engagement.[7] In the event of an accident, the technology can help determine the cause of the crash based on the driver’s facial expressions. If the driver is detected as anxious or uncomfortable, it can also adjust the autonomous driving style.[8] Affectiva also plans to use the technology to adapt environmental conditions such as lighting, heating, and music based on the passenger’s comfort and emotions, as well as provide product recommendations based on their emotions and the route they are driving.[9][10]

Hospitality

Similar to how facial recognition technology is used to augment user experience in the retail sector, it is also being used by hotels and casinos. Agilysys, a leading global provider of next-generation hospitality software solutions and services, has recently added facial recognition features to its software solution named “rGuestStay”.[11] The new feature recognizes a returning guest and displays their profile information, including room preferences, purchase history, and arrival and departure times, to name a few. This information can be used to provide a personalized welcome to guests, speed up their check-in process, and even tailor hotel services according to how much they are likely to spend.[12] Early adopters of this technology will have an edge over their competitors in building brand loyalty and increasing lifetime customer value. Hotels experimenting with this technology include the Marriott hotel chain in China and Lemon Tree Hotels in India.[13]

Public Sector Applications

Organizations in the public sector were among the first to test and integrate facial recognition technology in their practice. This encompasses organizations in the following sectors: healthcare, airports, law enforcement, military, and education.

Healthcare

Self-service kiosk in a hospital.

The biometric healthcare industry is projected to grow to $6.3 billion by 2024, with facial recognition technology leading the way.[14] Leading players in this industry include 3M Cogent Inc., BIO-key International, and Fujitsu Limited, among others, with North America expected to dominate the biometric healthcare sector.[15] The willingness to integrate facial recognition technology within this sector stems from the rise in cybercrime and fraudulent claims that cost the industry billions of dollars every year.[14] The Identity Theft Resource Center reported that 45% of identity theft breaches in 2014 occurred in the healthcare sector.[14]

Hospitals

Facial recognition in the healthcare industry, particularly in hospitals, is used to improve accuracy in identifying patients, manage medical records by eliminating duplicates, and enhance data security.[14] This technology is said to account for differences in detail such as facial hair and lighting.[16] Hospitals in the United States are currently testing self-service kiosks in the emergency ward to help simplify the registration process and flag fraudulent activity.[16]

Detecting Health Concerns

Facial recognition performed on a child to detect a genetic disorder.

Facial recognition technology is being trained to detect certain features of a patient’s face and identify similarities between the patient’s face and the faces of individuals known to have a health concern. For example, a combination of brown spots and wrinkles is indicative of excessive sun exposure. The technology shows promise, with matched results achieving an accuracy rating of 93%.[17]

Researchers in Australia used photos of over 270 ethnically diverse people to train facial recognition software to estimate physiological measures such as blood pressure and body mass index (BMI) from the face alone.[18] This application of facial recognition technology is still in its early phases, but the findings from the experiments are promising.

Physicians are also using facial recognition technology to detect the possibility of genetic disorders through apps like Face2Gene. Face2Gene has been trained on 2,000 disorders such as Down syndrome and DiGeorge syndrome.[19] DiGeorge syndrome is a rare disorder that affects people of colour such as Africans, Asians, and Latin Americans.[20] Such disorders are reflected in traits like a cleft palate, a protruding tongue, and a flattened nose bridge.[20] This technology has reported accuracy ratings of 96.6% across all ethnic groups.[20] Face2Gene and other providers of this technology offer a more feasible and accessible alternative to genetic testing, which may not be available in all regions of the world.

Future Pharmacy

Facial recognition software is also being used to dispense and pay for medicine. Alipay has launched a Future Pharmacy in China where patients can pay via facial recognition.[21] The patient must authenticate their face on the device to activate this feature and then enter their Alipay phone number.[21] This will allow for payment of prescription drugs if the patient forgets to bring their ID. Future Pharmacy is being tested in the city of Zhengzhou but will soon be expanded to other regions.

The Karie system is used to dispense medication.

Dispense Medication

Toronto-based company AceAge signed a distribution and supply agreement with Centric Health Corporation to integrate VER-ID facial recognition technology into its Karie system.[22] The system is set to launch by the end of 2018 and will improve the authentication process when dispensing medication to the user.[23] The Karie system notifies the user when to take their medication, and the camera matches the face it detects to the face print in the system. If the faces match, the Karie system dispenses the correct dosage of medication. This application of facial recognition helps ensure that patients receive the right dose of medication and helps prevent healthcare fraud.[24]
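The dispensing guard described above can be sketched in a few lines: nothing comes out of the device unless a dose is scheduled at that time and the live face print matches the enrolled one. The schedule, the face-print representation, and the tolerance are all illustrative assumptions, not details of the actual Karie/VER-ID system.

```python
# Hypothetical dose schedule: time of day -> (drug label, units to dispense)
SCHEDULE = {"08:00": ("medication A", 1), "20:00": ("medication A", 1)}

def try_dispense(time_now, live_print, enrolled_print, tolerance=0.1):
    """Return the scheduled dose if the live face print matches, else None."""
    dose = SCHEDULE.get(time_now)
    if dose is None:
        return None  # nothing scheduled at this time
    # toy match: largest per-feature difference must stay within tolerance
    distance = max(abs(a - b) for a, b in zip(live_print, enrolled_print))
    return dose if distance <= tolerance else None
```

Tying the face match to the dispensing action is what prevents both wrong-patient doses and fraudulent pickups.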

Government

Airports

The first major integration of facial recognition software at an airport was in 2011, through a pilot program at Tocumen airport.[25] This program used FaceFirst technology to decrease illegal activities (e.g., drug smuggling) in Panama.[25] Since the pilot program was very successful and led to numerous arrests, FaceFirst technology was integrated in other regions, and Tocumen airport became the largest biometric installation at an airport in the world.[25]

Many Canadian border crossings are also running pilot programs to test a variety of facial recognition technology. Many of these initiatives are inspired by the Calgary Police’s early adoption of facial recognition to solve crimes, as well as Passport Canada’s use of facial recognition technology to prevent individuals from holding multiple passports under the same name.[26]

Primary Inspection Kiosks in the Vancouver International Airport.

Airports around Canada have invested millions of dollars into Primary Inspection Kiosks (PIK).[27] The PIK takes the traveller’s picture and scans their passport. The kiosk then uses facial recognition software to compare the two photos. If the photos match, the kiosk generates a receipt which is brought to a border officer for a final check. These kiosks can be found at airports in Edmonton, Halifax, Montreal, Ottawa, Quebec City, Toronto, and Vancouver.[27] Travellers can also use the eDeclaration app in tandem with the PIK by creating a traveller profile and answering the customs declaration questions in advance; the app then generates a QR code which is scanned at a PIK.[28] Through this initiative, the Canada Border Services Agency (CBSA) aims to decrease processing times for travellers and eliminate paper customs declarations.[27]

The Canadian border agency is using facial recognition not only to increase processing efficiency for travellers but also to improve security. It is currently testing facial recognition software in various locations to determine whether it can find and track people on the watchlist in crowds.[26] This initiative is still in its testing phase; Canadian airports are partnering with international airports to improve the integration of facial recognition technology and ultimately eliminate boarding passes, reduce boarding times, and find suspects on the Watch List.

Currently, a pilot is being conducted between border crossings in Canada and the Netherlands. The Known Traveller Digital Identity program requires travellers to upload personal information such as university education, bank statements, and vaccination records to their mobile device.[29] Similar to NEXUS, the program aims to improve the efficiency of the identification process for international travellers. However, critics argue this new service will form a two-tiered system, as individuals unwilling to provide their personal information will not receive faster service and may even face neglect from service providers.[29]

Military

Military branches around the world (e.g., the United States, China, and the Philippines) have integrated facial recognition technology into their surveillance and identification practices. For instance, the United States (US) military used facial recognition, along with DNA testing, to identify the body of al Qaeda leader Osama bin Laden after the 2011 raid.[30] The US army has also developed facial recognition technology that can work in the dark or in low light by integrating it with thermal imaging and artificial intelligence.[31] The new system produces a thermal face print using heat signatures and matches the face print against a database of pictures.[32] Currently, the army has installed thermal cameras on aircraft, vehicles, and wearables, which can use this technology for real-time detection of terrorists on the Watch List.[31]
Thermal imaging uses heat signatures to form a face print.

Law Enforcement

United States

Law enforcement agencies have been using early forms of facial recognition technology for security and authorization purposes. In 2014, the Automated Regional Justice Information System (ARJIS) partnered with US law enforcement agencies to introduce facial recognition technology called FaceFirst.[33] FaceFirst software matches individuals in front of the camera against the database of faces accumulated by law enforcement agencies. If the software finds a match, an alert is sent to an officer in real time.[34] Integrating such technology allows law enforcement to instantly identify people who either do not possess ID or do not share their identity with officers. In 2017, law enforcement agencies integrated the WatchList database to work in tandem with FaceFirst technology.[33]

Current Events

One in four police agencies in the United States admit to possessing and using facial recognition technology.[35] Primarily, facial recognition is used to compare crime scene photos and videos against databases of mugshots. Recently, law enforcement agencies in Maryland used facial recognition technology to identify and arrest Capital Gazette shooter Jarrod Ramos, who killed five people and injured two after opening fire at the newspaper's office on June 28, 2018.[36] Law enforcement used Maryland’s Image Repository System, a database of pictures obtained from driver’s license photos, inmate records, and mugshots, to match the photos. County Police Chief Timothy Altomare stated that it would have taken them much longer to move forward with the investigation without their facial recognition system.[36] In another case, Pierre Martin was arrested in Chicago and sentenced to 22 years after surveillance cameras captured his face.[37] Police matched this photo to their database of mugshots, which led to his subsequent arrest. However, Martin was not sentenced using facial recognition technology alone, as an eyewitness also confirmed his identity after the 2013 armed robbery.[37]

Amazon's Rekognition Technology

Rekognition technology is provided by Amazon Web Services.

Though controversial, Amazon is selling its facial recognition technology, Rekognition, to law enforcement agencies in the United States. Rekognition provides an affordable option to index hundreds of thousands of pictures in just a few days, while allowing officers to find matches in the database within minutes. Rekognition costs $400 for the initial transfer of mugshots into the system and a variable cost between $6 and $12 per month to continue the service.[38]
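Using the figures reported above, a first-year cost estimate for an agency is straightforward (the one-time fee and monthly range come from the source; treating them as a flat yearly formula is a simplifying assumption):

```python
def first_year_cost(monthly_fee):
    """First-year total: $400 one-time indexing of the mugshot archive
    plus the reported $6-$12 monthly service fee, for 12 months."""
    assert 6 <= monthly_fee <= 12, "outside the reported monthly range"
    return 400 + 12 * monthly_fee
```

At the reported rates, a department would pay between $472 and $544 in its first year, which is why the pricing is described as affordable for law enforcement budgets.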

Rekognition is currently being used by a sheriff’s office in Oregon to check unidentified suspects against mugshots in the database. Other law enforcement officials are using it to track and locate people who have been reported as missing or abducted. Additionally, a police department in Orlando, Florida is conducting a pilot program using Rekognition for real-time detection.[38] Since Orlando has an abundance of safety cameras across the city, Rekognition would receive feeds from these cameras and search them for wanted suspects.[38]

Many parties have expressed their concerns regarding Amazon’s partnership with law enforcement agencies. Twenty-four organizations, along with the American Civil Liberties Union (ACLU), wrote a letter to Amazon protesting this partnership due to the invasion of citizens' privacy, as well as the potential dangers of targeting minority groups with this technology.[39] In a letter to CEO Jeff Bezos, Amazon employees and shareholders pleaded with the organization not to sell Rekognition to any government agency. Employees cited concerns regarding the potential use of Rekognition for Trump’s Zero Tolerance Policy at border crossings in the US, as well as future concerns about selling the technology to foreign government agencies known to violate human rights laws.[39] However, Amazon continues to support the partnership, stating that it is wrong to ban new technology simply because bad actors could misuse it.[39]

Instead of banning the technology, providers of facial recognition technology like Microsoft have called for greater government oversight. To control the use of facial recognition technology, they ask government officials to define regulations for providers and users of this technology.[40]

Federal Bureau of Investigation

The Federal Bureau of Investigation (FBI) has been building its own facial recognition database for years and reportedly possesses 411.9 million images.[41] 30 million of these images are mugshots, while the rest comprise driver’s license, passport, and visa photos.[41] This surprised many citizens due to the lack of transparency in acquiring the images. 16 states provided the FBI with driver’s license photos, while another 16 states are considering providing their images as well.[41] Currently, the FBI is negotiating exemptions from certain provisions of the Privacy Act because it did not inform citizens of its acquisition of state driver’s license photos.[41]

Canada

In 2014, the Calgary Police became the first police agency in Canada to implement facial recognition technology in searching for criminals.[26] Previously, officers would manually search the database to compare mugshots with suspect photos. Now, police agencies across Canada have contracts with facial recognition providers (e.g., NEC Corporation of America and its NeoFace technology) to improve the efficiency of their booking process by comparing crime scene photos and videos against their database of mugshots within minutes.[26] Unlike the US, Canada does not allow government agencies to share facial recognition databases with other agencies without a court order or subpoena.

United Kingdom
NeoFace technology being used in the UK for security purposes.

Numerous law enforcement agencies across the United Kingdom (UK) have been testing facial recognition pilot programs. UK police agencies use NeoFace technology to compare the faces detected in crime scene footage against 500,000 mugshots.[42] Police departments in the UK are also testing facial recognition software at large events such as carnivals and sports games. Police agencies, like the one in Leicestershire, have also recorded and archived faces using these pilot programs.[42]

However, the danger of false positives persists. Early pilot programs struggled with this, as the software used by officials produced false positives 98% of the time.[42] For instance, in June 2017, South Wales Police accurately identified only 234 true positive results, roughly 10% of the total, compared to the 2,400 false positives that were produced.[42] The technology has improved since then: in June 2018, police in South Wales used facial recognition technology to positively identify and arrest a suspect for the first time.[42]
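The South Wales figures can be checked directly. The share of flagged matches that were genuine (the system's precision) works out to roughly the one-in-ten cited above:

```python
true_positives = 234    # correct identifications reported for June 2017
false_positives = 2400  # incorrect flags reported for the same trial

# precision = correct flags / all flags
precision = true_positives / (true_positives + false_positives)
# about 0.089: fewer than 1 in 10 flags pointed at the right person
```

Numbers like these are why most deployments keep a human officer in the loop to verify every match before any action is taken.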

Recently, UK police used facial recognition technology in tandem with CCTV security footage from UK airports to identify the suspects involved in the March 4, 2018 Novichok attack.[43] Sergei Skripal, a former Russian spy who acted as a double agent for the UK intelligence service, and his daughter were poisoned using Novichok, a nerve agent developed during the Soviet era.[43] Using facial recognition, law enforcement officials checked the records of people who had entered the UK in the preceding months. The Russian government denied involvement in the case, and the suspects were found to have already left the UK.[43]

China
Facial recognition technology used for surveillance purposes in China.

Facial recognition technology has been used by a variety of actors in China, including the Chinese government through its Sharp Eyes program. Sharp Eyes is a nation-wide pilot program set to be fully active by 2020.[44] It aims to integrate security cameras across the country into a single, integrated surveillance and data-sharing network. Reports indicate that China’s current surveillance network includes 170 million CCTV cameras, with 400 million more set to be installed over the next three years.[44] Currently, smaller facial recognition programs are being tested to detect people in large crowds at festivals, subways, and concerts.[44] Officials claim Sharp Eyes will be used for smart city planning, social stability, and monitoring people who deviate from normal thought. In addition to improving public safety, the program aims to track individuals known to frequent illegal gambling dens.[45]

Dangers of the Sharp Eyes program include the targeting of activists, protestors, and minority groups. In one instance, police tracked a meeting between two activists, one of whom is known to speak out against the Tiananmen Square massacre. The police subsequently told the other activist’s landlord, who threatened to evict him if he did not disassociate himself.[45] Minority groups like the Uighurs, a Muslim minority in Xinjiang, China, are already being targeted by Chinese officials and could face further marginalization through the implementation of Sharp Eyes.[45] One limitation of the program is the servers’ capacity to filter through all the data captured by the millions of CCTV cameras. Chinese officials are therefore considering letting people watch this footage on their televisions or smartphones and report suspicious activity to the police.[45]

Facial recognition technology used to detect jaywalkers and subject them to public shaming in China.

China has also implemented facial recognition technology in different provinces to identify and shame individuals who break traffic laws. For instance, in the city of Shenzhen, law enforcement agencies use CCTV cameras and facial recognition to detect and identify jaywalkers.[46] Their images, along with their family name and a portion of their government ID, are displayed on monitors located at road crossings. The Chinese government is working with social media platforms like WeChat and Weibo to add an instant messaging feature to the current facial recognition system.[46] This new system would send a message containing a fine to the offender as soon as a traffic violation is detected. Through this, officials aim to reduce repeat offences and save the costs associated with their current public shaming tactics.[46] For instance, identifying and fining offenders instantaneously would eliminate the need to install screens at road crossings. Officials are also considering tracking the number of times someone commits an offence, which would affect their social credit score if they surpass a certain threshold.[46]

China's embrace of new technology and increased surveillance is reminiscent of science-fiction and dystopian works such as Steven Spielberg's Minority Report and George Orwell's Nineteen Eighty-Four. Minority Report centers on a protective government armed with new technology, whose aim is to preempt crime in order to protect its citizens. Similarly, Orwell's Nineteen Eighty-Four explores the dichotomy between utopian and dystopian society. Orwell contrasts the use of excessive policing, surveillance, and propaganda to achieve a safer society with the loss of free will and the destruction that occur as a result.[47] With integrated surveillance and data sharing, China's Sharp Eyes program mimics the dystopian society painted in Orwell's novel, sharing the sense that Big Brother is always watching its citizens.[47]

Insurance

In Canada, some provincial crown corporations use a variation of facial recognition technology to match photos. For example, the Insurance Corporation of British Columbia (ICBC) has been using facial recognition technology since 2008 to compare a cardholder’s photo to the photo on file.[48] Similarly, Saskatchewan Government Insurance (SGI) uses facial recognition software supplied by Veridos Canada Inc. to authenticate cardholders and prevent identity theft and fraud.[49] A photo of the individual is uploaded to the system, which creates a template of the face using pinpoint measurements. The system then compares this template with the photo on file, if one exists. If the photos do not match, the case is flagged as potential fraud and the identification is cancelled.[49]
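The template-and-compare step described above can be sketched in a few lines. This is a minimal illustration, not Veridos's actual method: the landmark measurements, the normalization, and the distance threshold are all illustrative assumptions.

```python
import math

def face_template(measurements):
    """Build a template from hypothetical landmark distances (e.g., eye
    spacing, nose width), normalized so overall photo scale does not matter."""
    total = sum(measurements)
    return [m / total for m in measurements]

def matches(template_a, template_b, threshold=0.05):
    """Declare a match when the Euclidean distance between the two
    normalized templates falls below the threshold."""
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(template_a, template_b)))
    return distance < threshold

# The same person photographed twice: measurements differ only slightly
on_file   = face_template([63.0, 41.0, 35.0, 52.0])
new_photo = face_template([63.5, 40.8, 35.2, 51.9])
assert matches(on_file, new_photo)

# A different face fails the comparison and would be flagged for review
other = face_template([70.0, 33.0, 42.0, 46.0])
assert not matches(on_file, other)
```

A production system would use far more measurements (or a learned embedding) and a threshold tuned against known false-match and false-non-match rates.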

Education

Facial recognition is also being tested in classrooms around the world in countries like China, France, and the United States. Applications of this technology in educational spaces include tracking attentiveness and attendance, curating quizzes, and improving a professor’s performance.

China

Hangzhou No. 11 Middle School is using facial recognition to track the attendance and performance of its students.[50] Cameras scan the classroom every 30 seconds to detect and record expressions such as anger, annoyance, confusion, and happiness.[51] During a lecture, the system tracks each student’s inattentiveness and can notify the instructor when it reaches a certain threshold. Evaluators can use the system to assess a teacher’s performance and suggest improvements to lecture material. The school is also trying to extend the facial recognition system to paying for lunches in the canteen and borrowing items from the school library.[51] School officials have praised the system for enabling a better educational experience; the principal compares it to an in-class teaching assistant.[51] In terms of privacy, the vice-principal has stated that the system will not save images from the classroom and that any recorded data will be stored on a local server.[51]
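The inattentiveness threshold described above can be sketched as a simple tally over the 30-second scans. The expression labels, the notion of a "distracted" set, and the 50% threshold are illustrative assumptions, not details of the Hangzhou system.

```python
# Hypothetical expression labels, one per 30-second classroom scan
DISTRACTED = {"anger", "annoyance", "confusion"}

def should_alert(scans, threshold=0.5):
    """Notify the instructor when the fraction of scans showing a
    distracted expression exceeds the threshold."""
    distracted = sum(1 for expression in scans if expression in DISTRACTED)
    return distracted / len(scans) > threshold

attentive = ["happiness", "happiness", "confusion", "happiness"]
restless  = ["confusion", "anger", "happiness", "annoyance"]
assert not should_alert(attentive)   # 1 of 4 scans distracted
assert should_alert(restless)        # 3 of 4 scans distracted
```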

  • Cameras installed in the classrooms.
  • Facial recognition technology used to track student attentiveness and expressions.
  • Facial recognition used in libraries to check-out material.
France

The Paris School of Business is also testing Nestor, facial recognition technology developed by LCA Learning, to track student inattentiveness.[52] Nestor tracks a student’s eye movements and facial biometrics to determine how focused the student is while taking a course online.[52] At the end of the lesson, the system can curate a quiz based on the areas where the student showed the highest levels of inattentiveness.[52] Using the information collected by the system, professors can determine which lesson material is engaging and which is dry and requires additional clarification.

United States
RealNetworks' SAFR technology used in schools.

North Carolina State University conducted a study in which researchers recorded and analyzed the facial expressions of students taking a computer coding class.[53] The study comprised 60 hours of footage and used expression recognition software in tandem with facial recognition technology.[53] Through this system, researchers were able to determine which students were having trouble with the material and which found it easy to understand. When the researchers compared the software’s results to reflections provided by the students at the end of class, the two aligned closely.[53] Future uses of this system include refining lecture material delivered online or in person based on facial expression data.

Seattle-based company RealNetworks is offering its facial recognition software, SAFR, to schools for free. Schools in Wyoming are set to pilot SAFR, as is a school in Seattle, to provide a safer environment for students.[54] Previously, a school in New York planned to spend millions of dollars on a facial recognition system to monitor students. However, the New York Civil Liberties Union and Legal Defense Fund cited dangers to students of colour due to existing biases, as these students may already experience over-policing in their everyday lives.[54] Potential misuse includes monitoring which students associate with each other and disciplining students inconsistently.

The school in Seattle is limiting participation in the program to students 12 years of age or older.[54] In addition to monitoring students, the facial recognition system can be used with the school’s gated-camera system to monitor parents, teachers, and visitors. The new system requires individuals to first register their faces in its database. When someone wants to enter, they must smile at the camera; if the system recognizes the individual, the gate unlocks.[54]

Many parents have expressed support for the facial recognition system in the school, but only 50% of them registered their faces with it.[54] Additional concerns with this technology include potential inaccuracy, the security of stored data, and privacy. SAFR reports an average accuracy rating of 99.8%; however, its accuracy in detecting people of colour has not been fully disclosed.[54] In terms of security, each face recognized by the SAFR software is given a unique, encrypted hash stored on local servers at the school.[54] Currently, RealNetworks only verifies that the organization is in fact a school; it does not ask what the system will be used for or how long faces will be stored.

Challenges of Facial Recognition

Regulation and Privacy Concerns

The mass integration of facial recognition technology is still new, and government regulation of it varies widely in strictness. Many countries have no explicit provisions regarding the use of biometric technology or the collection of information through it. Instead, they rely on federal privacy acts or data protection regulations to extend governance to this sector.[55] Russia and China prove to be laxer with regulations compared to entities like the United States, the European Union (EU), and Canada.

Laxer regulations: China and Russia

China and Russia are among those who possess laxer guidelines when it comes to governing the use of facial recognition technology.

China has flexible regulations and privacy laws regarding the digital sector. The Sharp Eyes program, which plans to form an integrated surveillance network, presents numerous privacy concerns for Chinese citizens and visitors travelling to China. However, government officials have yet to introduce explicit regulation restricting the use of facial recognition or the storage of the data collected by the millions of surveillance cameras around the country. The Chinese government has publicly supported facial recognition providers like Megvii and even owns stakes in these companies; in 2017, the state-owned Assets Venture Investment Fund contributed $460 million to Megvii alone.[56]

Moscow continues to integrate facial recognition technology into its surveillance practices; the city alone has installed approximately 130,000 cameras in the last six years. The footage from these cameras is stored in a database available to 22,000 officers and government officials. Previously, the Russian Communications Ministry and Central Bank launched a program that verifies bank applications using biometric checks such as facial and voice recognition. At the end of 2017, a bill was drafted requiring Russia’s state telecom provider, Rostelecom, to give the Interior Ministry and Federal Security Service (FSB) this biometric data without the bank customers’ consent.[57]

Despite these actions, government officials say limitations will be placed on law enforcement agencies, requiring them to submit official requests to obtain any biometric data; however, their actions suggest otherwise.[57] For instance, a Russian court allowed the communications authority, Roskomnadzor, to ban the Telegram messaging app because it was too secure: officials objected that the app would not allow a backdoor the Russian government could use to decrypt and view user messages.[58] The increased adoption of facial recognition and the vague enforcement surrounding its regulation in Russia pose numerous privacy concerns for its citizens.

Varied regulations: the United States

The United States has no single federal regulation governing the collection and use of biometric data such as face prints. However, some states began passing biometric privacy laws in 2017.[55] States like Florida and California have vague rules that lack explicit guidelines on the use of facial recognition technology. In contrast, Washington state is beginning to regulate the use of facial recognition, especially by law enforcement agencies. Seattle, for instance, limits law enforcement agencies to mugshot databases and requires that these agencies demonstrate reasonable suspicion before searching them.[59]

Both Florida and California use mobile facial recognition software that allows law enforcement agencies to compare people who get pulled over against mugshots in the database. In Florida, police officers can also search the system for driver's license photos of cardholders in the state.[59] Law enforcement agencies in Orlando, Florida are currently piloting Amazon’s Rekognition technology with seven officers, allowing them to track individuals using cameras installed around the city.[38] Los Angeles, California is already using 16 cameras around the city to identify people with outstanding arrest warrants.[59]

Despite building its facial recognition database for years, the Federal Bureau of Investigation (FBI) has been found to be in non-compliance with the Privacy Act in the United States. This comes after the FBI integrated state driver's license photos into its database, which reportedly contains over 411 million images.[41] As the investigation continues, the FBI is negotiating for an exemption from the provisions of the Privacy Act.

Strict regulations: the European Union and Canada

Canada and the European Union have stricter regulations concerning privacy and the use of facial recognition technology. Unlike countries with laxer regulations, these jurisdictions typically require independent oversight.

Canada’s Privacy Act protects Canadians from the exploitation and misuse of their personal information. Provincial privacy laws, such as the Freedom of Information and Protection of Privacy Act (FIPPA) in British Columbia (BC), ensure that even government agencies using facial recognition adhere to good practice.[60] For instance, after the Vancouver Stanley Cup riot in 2011, the Insurance Corporation of British Columbia (ICBC) offered law enforcement officials access to its database of driver's licence photos to match against pictures and footage captured by cameras.[60] The Vancouver Police Department declined the offer; however, an investigation into ICBC was launched regardless. The investigation, led by the BC Privacy Commissioner, determined that in offering access to its database, ICBC had violated FIPPA.[61] As stated in the Privacy Act, information collected by a government agency can only be used for the purpose for which it was collected.[60] Exceptions to this rule can be made for law enforcement agencies, but they still require a court order or subpoena. The investigation also suggested that under FIPPA, ICBC has a duty to inform driver's licence holders that the organization uses facial recognition to authenticate cardholders.[61]

Similarly, the European Union (EU) provides strict regulation and prioritizes privacy through the General Data Protection Regulation (GDPR). The GDPR explicitly addresses the collection and use of biometric data to give citizens control over their personal information. Countries in the EU may also add provisions beyond those set by the GDPR. Despite leaving the EU, the United Kingdom has agreed to comply with the basic requirements of the regulation. These requirements include the right to be forgotten at any time, the obtainment of explicit consent, and notification of data breaches within 72 hours. The EU considers biometric data a type of personal data; therefore, it cannot be shared with other parties without consent. Companies storing biometric data that do not comply with these provisions can face penalties of up to millions of dollars.[55]

Consent

Facial recognition technology detected in mall directory in Calgary.

After Facebook’s failed 2011 launch of its facial recognition tagging feature across the EU, the company is attempting it again in 2018. In 2012, German and Irish privacy regulators found that Facebook users had not given consent to have their photos scanned and labelled.[62] In 2018, Facebook is rolling out the feature in Europe and Canada with opt-in permissions instead of the opt-out permissions implemented elsewhere. Outside of Europe and Canada, users are notified of the facial recognition feature but must go into their settings to opt out of it. Those living in Europe and Canada will be asked to “accept” the facial recognition technology before any facial scanning and database storage can take place. The General Data Protection Regulation (GDPR), a regulation in EU law, requires that consent forms be written in plain English and does not permit pre-ticked boxes.[63] Furthermore, parental or guardian consent is required for users under 16 years of age. These consent requests have received criticism for being misleading, as the user must navigate past several pages before being asked to allow Facebook to recognize them in photos.[64]

With private companies such as Facebook, how consent is understood and granted by the user deserves close attention. While Canada has enacted privacy laws that require companies to ask users for consent, concerns remain. For example, Canadian law enforcement does not require consent if it can demonstrate that the loss of privacy yields a significant benefit, such as using facial recognition to reduce the chance of a terrorist attack.

Additionally, two malls in Calgary, the Chinook Centre and Market Mall, were secretly using facial recognition technology hidden inside their mall directories. A representative of CF, the malls' parent company, claimed the technology was used for traffic analysis and to record attributes such as gender and age to improve the shopper experience. The representative also stated that the malls do not store the information captured by the camera and therefore do not need to obtain consent. However, the Canadian Civil Liberties Association has disagreed with CF’s statement, arguing that any surveillance should require consent because it raises privacy rights concerns.[65]

Hacking and Spoofing

Many cases of hacking and spoofing of facial recognition technology have been reported, particularly concerning the authentication features of mobile devices. Released in 2011, Android’s original Face Unlock feature was susceptible to spoofing with photographs.[66] An additional layer of authentication called Liveness Check was therefore added to Face Unlock, requiring the user to blink during authentication.[67] Unfortunately, this too was quickly bypassed, by switching between one image edited so the user’s eyes were closed and another with their eyes open.[68]
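Conceptually, a blink-based liveness check only needs to see the eye close and reopen across successive frames. A minimal sketch, assuming the system reduces each frame to a hypothetical eye height and width (the ratio and threshold are illustrative):

```python
def blink_liveness_check(frames, closed_below=0.15):
    """Pass only if an open -> closed -> open blink occurs across the frames.
    Each frame is a hypothetical (eye_height, eye_width) measurement; a
    height/width ratio below `closed_below` counts as a closed eye."""
    seen_open = seen_closed_after_open = False
    for height, width in frames:
        eye_open = height / width >= closed_below
        if eye_open:
            if seen_closed_after_open:
                return True          # the eye reopened after closing: a blink
            seen_open = True
        elif seen_open:
            seen_closed_after_open = True
    return False

# A live face blinks partway through the sequence
assert blink_liveness_check([(10, 30), (9, 30), (2, 30), (2, 30), (10, 30)])
# A single static photograph never changes state
assert not blink_liveness_check([(10, 30)] * 5)
```

Because the check looks only for the open-closed-open pattern, alternating between an eyes-open photo and an eyes-shut photo produces exactly the same signal as a genuine blink, which is how the bypass described above works.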

More advanced facial recognition systems do exist within the Android market, such as the OnePlus 5T’s face unlock feature. This system analyzes the manner in which light reflects off the face, differentiating a real face from a photo on a phone screen, which emits light.[69] However, online forum users have reported circumventing this feature by laying a printout of a photograph over a real face.[70]

Apple's Face ID has also been spoofed, and two incidents gained significant media coverage. In the first, an iPhone X user’s son was able to unlock her iPhone using his face. The problem was fixed after the mother re-registered her face, suggesting that the lighting conditions at registration had allowed the security flaw. Comparable flaws could therefore reappear under lighting conditions similar to those in which a face is first registered.[71]

The second spoofing incident was more deliberate and labour-intensive: the Vietnamese cybersecurity firm Bkav defeated Face ID only after a week’s worth of work. Bkav created a fairly affordable $150 mask consisting of plastic, silicone, makeup, paper cut-outs, and a 3D-printed frame.[72] These examples show that no current mobile facial authentication feature is without flaw or completely secure.

Other, more sophisticated forms of facial recognition technology have also been defeated, although doing so requires more complicated tools such as virtual reality models and specific computer algorithms. The first tool, developed by security and computer vision specialists from the University of North Carolina, used 3D VR renders of faces to spoof five facial authentication systems: KeyLemon, Mobius, TrueKey, BioID, and 1D.[73] Renders built from control photos spoofed the five systems with a 100% success rate.[73] Renders built from participants’ publicly available social media photos spoofed the five systems around 55% to 85% of the time.[73]

The second tool does not dupe authentication and biometrics; instead, it disrupts facial recognition software that identifies individuals. Joey Bose, a researcher at the University of Toronto, created an algorithm that changes the pixels of an image around certain regions of the face (e.g., the eyes, mouth, or nose). These changes are nearly undetectable to the human eye yet can prevent facial recognition systems from identifying the face.[74] Bose is currently working on an app that will incorporate this algorithm and could protect users from targeted advertisements and identity theft.[75]
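The principle behind such perturbations is that recognition systems make threshold decisions, so input changes a human barely notices can still push a score across the decision boundary. The toy "recognizer" below is purely illustrative (real systems use deep networks, and real adversarial perturbations are far subtler and targeted), but it shows the boundary-crossing effect:

```python
def naive_match(template, image, threshold=0.05):
    """Toy recognizer: match when the mean absolute pixel difference
    stays below a decision threshold."""
    diff = sum(abs(a - b) for a, b in zip(template, image)) / len(template)
    return diff < threshold

enrolled = [0.50, 0.52, 0.48, 0.51]   # hypothetical eye-region pixels, 0-1 scale
photo    = [0.52, 0.50, 0.50, 0.53]   # a new photo of the same person
assert naive_match(enrolled, photo)   # recognized as the enrolled person

# Shift every pixel by 0.05 (exaggerated here for clarity) to cross the boundary
perturbed = [p + 0.05 for p in photo]
assert not naive_match(enrolled, perturbed)   # no longer recognized
```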

One expert suggests that an arms race is emerging between facial recognition technologies and anti-recognition techniques.[76] Anti-recognition algorithms can change pictures uploaded to social media sites ever so slightly, giving users more control and helping prevent spoofing incidents in the future.

Spoofing faces using pictures uploaded to social media platforms.

Bias and Inaccuracy

With the widespread integration of facial recognition software across a variety of sectors, it is imperative to address the dangers of bias and misidentification. Bias in algorithms or training sets can misidentify certain groups, which can deepen systemic discrimination and the marginalization of groups such as women and people of colour.

M.I.T. Media Lab Experiment

Faces used in the MIT Media Lab Experiment.

In 2015, Google Photos recognized and labelled African Americans as gorillas.[77] After this incident, Google promised to improve its technology so the error would not be repeated. However, the company made no real changes apart from removing the term "gorilla" and similar tags from the app.[78]

Years later, Joy Buolamwini of the M.I.T. Media Lab tested the facial recognition software of Microsoft, IBM, and Megvii, a Chinese company that provides organizations like Alibaba with its Face++ technology. The study used 1,270 faces of female and male politicians from three African countries (Rwanda, Senegal, and South Africa) and three Nordic countries (Iceland, Finland, and Sweden). The sample was broken down by sex (male/female) and skin tone (lighter-skinned/darker-skinned).[77]

The study revealed that IBM’s and Megvii’s facial recognition technologies produced the highest misidentification rates across the four groups, at 54.1% and 42% respectively, while Microsoft’s technology had the lowest at 28.5%.[77] Researchers also determined that lighter-skinned males had the lowest misidentification rate, at 1%,[77] whereas females and darker-skinned individuals were misidentified more often, with darker-skinned females receiving the highest misidentification rate of 35%.[77] After the study was published, IBM and Microsoft released statements committing to reducing the bias in their facial recognition software, but details of how they would accomplish this were not revealed.

Results of the MIT Media Lab Experiment.

Gfycat

Gfycat, a platform that lets users create and share GIFs, experienced similar results: its facial recognition technology disproportionately misidentified Asian and darker-skinned employees. Gfycat attempted to improve the identification rate by training the system with additional images, but results improved only slightly. The company therefore added another layer to matching: the system first determines whether a new image resembles a cluster of similar-looking images already in Gfycat’s database. If so, the system applies a stricter threshold, with greater sensitivity, before declaring a match for the new image. This has allowed Gfycat to improve its accuracy rating for Asian employees to 93%, while Caucasian individuals have an accuracy rating of 98%.[79]
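Gfycat's two-stage approach can be sketched as follows: score the probe against every enrolled face, and if more than one enrolled face clears the base similarity threshold (a lookalike cluster), demand a stricter score before declaring a match. The cosine-similarity templates and the threshold values here are illustrative assumptions, not Gfycat's actual parameters:

```python
def cosine(a, b):
    """Cosine similarity between two template vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def best_match(probe, gallery, base=0.90, strict=0.998):
    """Return the best-matching name, applying a stricter threshold when the
    probe resembles several enrolled faces (a lookalike cluster)."""
    scores = sorted(((cosine(probe, template), name)
                     for name, template in gallery.items()), reverse=True)
    top_score, top_name = scores[0]
    lookalikes = sum(1 for score, _ in scores if score >= base)
    threshold = strict if lookalikes > 1 else base
    return top_name if top_score >= threshold else None

gallery = {
    "alice": [1.0, 0.2, 0.1],    # alice and carol are near-lookalikes
    "carol": [1.0, 0.25, 0.15],
    "bob":   [0.1, 1.0, 0.3],
}
assert best_match([0.98, 0.21, 0.1], gallery) == "alice"   # clears strict check
assert best_match([0.12, 0.95, 0.3], gallery) == "bob"     # no cluster, base check
assert best_match([0.9, 0.35, 0.0], gallery) is None       # ambiguous: rejected
```

The last case is the point of the design: a probe that merely resembles a lookalike cluster is rejected rather than matched to the wrong person.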

The bias seen in the Gfycat case and the M.I.T. Media Lab study stems from the benchmark datasets used to train the software. This data is used to assess accuracy but may be unrepresentative.[80] The bias may not be limited to the algorithm itself, but may extend to the users of the technology as well. An investigation into bias in facial recognition software revealed that African-Americans were more likely to be stopped by police and to have facial recognition searches run on them.[81] Therefore, stricter thresholds and proper training must be in place so that particular groups are not misidentified and isolated.

Boston Bombing

In addition to misidentification, facial recognition technology has also failed to match surveillance photos to those in a database. Over the years, the Department of Homeland Security in the United States invested millions of dollars in building a facial recognition database containing pictures acquired from driver's licenses and mugshots. After the Boston bombing on April 15, 2013, law enforcement agencies obtained footage and surveillance photos of the two perpetrators.[82] Despite having the perpetrators' images in the Homeland Security database, the facial recognition software was unable to match the surveillance photos to them. Among other factors, the resolution and angles of the surveillance photos contributed heavily to the failure. As a result, identifying the perpetrators took much longer than anticipated, increasing public chaos and fear.[82]

ACLU Study

28 US Congress members misidentified by Amazon's Rekognition.

Recently, the American Civil Liberties Union (ACLU) conducted a study using Amazon Web Services’ (AWS) facial recognition technology, Rekognition. The organization spent $12.33 to build its own facial recognition database using publicly available mugshots.[83] The study found that Rekognition incorrectly matched 28 members of the US Congress with criminals in the database.[83] To substantiate the results, ACLU representatives had them verified by an independent body. Although people of colour make up only 20% of the US Congress, they accounted for a disproportionate 40% of the misidentified members.[83]

When asked to comment, an AWS representative pointed to the many benefits Rekognition has offered organizations in the public sector, namely law enforcement agencies that use it to prevent human trafficking, find missing children, and improve security. The representative also stated that the ACLU could have improved its results by raising the confidence threshold, the minimum score at which the system declares a match, and recommended a threshold of at least 95% when using Rekognition to identify people.[83] The ACLU countered that AWS never asked what the technology would be used for when the ACLU purchased it, and that Amazon’s own website recommends a threshold of 80% for detecting human faces.[83]
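The dispute comes down to where the confidence threshold is set. A minimal sketch of the filtering step, with made-up scores and mugshot IDs:

```python
# Hypothetical similarity scores for one probe photo against a mugshot set
candidates = [("mugshot_1041", 0.93), ("mugshot_2208", 0.84), ("mugshot_0007", 0.81)]

def matches_above(candidates, confidence):
    """Keep only candidates whose score meets the confidence threshold."""
    return [name for name, score in candidates if score >= confidence]

# At an 80% threshold, all three (possibly false) matches are reported...
assert matches_above(candidates, 0.80) == ["mugshot_1041", "mugshot_2208", "mugshot_0007"]
# ...while at the recommended 95%, none survive.
assert matches_above(candidates, 0.95) == []
```

The same underlying scores thus yield several "matches" or none at all depending solely on the chosen threshold, which is why the recommended setting matters so much in practice.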

Some of the misidentified Congress members have asked to meet with Amazon CEO Jeff Bezos to discuss the dangers of using Rekognition in the public sector, especially by law enforcement agencies. The ACLU has previously cited the dangers of using this technology in the current political climate, as it can be misused to target protesters and monitor immigrants, especially under Trump’s Zero Tolerance Policy at border crossings.[83]

Future considerations

When questioned, some facial recognition firms admitted that the data used to train their algorithms is often biased. However, many providers are succeeding in reducing the misidentification rates of their algorithms.[84] In the future, any company that does not improve its algorithm to correctly process diverse data will suffer in the competitive market, given the increased public attention to these issues.

Individual facial recognition providers such as Microsoft and IBM have stated that they are working to reduce bias and inaccuracy in their own algorithms, but there is no global standard for addressing these issues. Companies like Google have stated that they are ameliorating the bias in their software but share little information on what is being done.

The National Institute of Standards and Technology (NIST) publishes accuracy results from its Face Recognition Vendor Test (FRVT), which evaluates facial recognition algorithms under different scenarios. The test assesses an algorithm's accuracy in determining characteristics such as gender, age, and race.[84] The FRVT is currently used only for research, but an industry-wide test used for regulatory purposes could help raise the quality of facial recognition algorithms.

Additionally, since facial recognition lacks explicit regulation, guidelines need to be set globally to address the following: which parties can use facial recognition technology, where its use is permissible, the tasks for which it may be used, where data would be stored, and whether organizations could provide third parties with this data.

The Future of Facial Recognition

The future of facial recognition looks bright, as it is being used more widely by businesses, consumers, and governments around the world. By 2022, "the global [facial recognition] market is expected to garner $9.6 billion".[85] The regions predicted to contribute most significantly to this growth are North America and Asia-Pacific. North America will likely invest in facial recognition technology for security and criminal investigation purposes, while Asia-Pacific is expected to be the fastest-growing region due to the wide use of facial recognition in countries like China and Japan for "defense, law enforcement, physical security, and retail".[86] As the use of facial recognition spreads across societies, we can see how it impacts facial recognition vendors, integrates with other biometric technologies, and leaves its mark on society.

Expanding Facial Recognition Vendor Opportunities

Apple is looking to integrate vein recognition into FaceID

As more organizations realize the value facial recognition has to offer, we can anticipate more strategic alliances between businesses and facial recognition vendors. There is also potential for more acquisitions, as companies are in an arms race to acquire the most advanced facial recognition technologies to elevate their business models. As companies increasingly become aware of the strategic benefits facial recognition offers, they will become more eager to develop, acquire, and purchase facial recognition technologies.

Improving Facial Recognition with Vein Detection

As facial recognition continues to prove itself as a useful technology in society, companies are looking for ways to patent improvements to facial recognition. Apple patented a “pulsed radiation” feature which can detect a user’s veins.[87] Vein recognition emits pulses of infrared light through the skin and reflects light off blood vessels to generate an image that can aid identification.[88] In Apple’s case, vein authentication would be used in tandem with FaceID to unlock iPhones, strengthening accuracy so that identical twins cannot fool the iPhone into unlocking itself. It is anticipated that future generations will use this type of sub-dermal technology in their smartphones and wearables. Samsung is positioned to compete with Apple’s vein recognition, having patented a vein-recognizing smartwatch in 2016.
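The pairing described above can be sketched as a simple two-factor check: the device unlocks only when both the face match and the vein match clear their thresholds, so identical twins (similar faces, different vein patterns) fail. The thresholds and score functions here are illustrative assumptions, not Apple's actual implementation.

```python
# Hypothetical sketch of face + vein two-factor unlocking.
# Scores are assumed to be normalized similarity values in [0, 1];
# both thresholds are made-up numbers for illustration.

FACE_THRESHOLD = 0.90
VEIN_THRESHOLD = 0.85

def unlock(face_score: float, vein_score: float) -> bool:
    """Require BOTH biometric scores to clear their thresholds."""
    return face_score >= FACE_THRESHOLD and vein_score >= VEIN_THRESHOLD

# An identical twin may pass the face check but not the vein check.
print(unlock(0.97, 0.92))  # enrolled user: True
print(unlock(0.95, 0.40))  # twin: face matches, veins don't: False
```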

Combining Facial Recognition with Upcoming Recognition Technologies

New biometric techniques are being developed with the help of artificial intelligence to enhance the ability to identify an individual. Footstep recognition is an upcoming biometric which can greatly increase the accuracy of identifying a person by analyzing an individual’s gait (the manner in which someone walks).[89] SfootBD is a new system which is approximately 380 times more accurate than traditional methods like fingerprint and retinal scanners. It is also less invasive and more convenient, since it can identify an individual without them having to take off their shoes. The system was trained on roughly 20,000 footstep signals from a database of 127 people.[90] Footstep recognition uses a deep residual neural network to analyze "weight distribution, gait speed, and walking style".[91] The technology is still in its testing phases, having been tested in "airport security checkpoints, workplaces, and homes".[92] When tested, it achieved near-perfect accuracy, with an error rate of only 0.7%.
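The identification idea can be illustrated in miniature. SfootBD's actual system applies a deep residual network to raw footstep signals; the toy sketch below instead uses a nearest-neighbour match over a few hand-picked gait features (weight, speed, stride), with all names and numbers invented for illustration.

```python
# Toy stand-in for gait-based identification: nearest-neighbour matching
# over simple gait features. A real system (e.g., SfootBD) learns
# features from raw pressure signals with a deep residual network.
import math

# Hypothetical enrolled templates: (weight kg, gait speed m/s, stride m)
enrolled = {
    "alice": (62.0, 1.31, 0.70),
    "bob":   (84.5, 1.18, 0.78),
}

def identify(sample, database, max_distance=5.0):
    """Return the enrolled identity closest to the sample, if close enough."""
    best_name, best_dist = None, float("inf")
    for name, template in database.items():
        dist = math.dist(sample, template)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_distance else None

print(identify((62.4, 1.30, 0.71), enrolled))  # → alice
print(identify((120.0, 0.9, 0.5), enrolled))   # → None (unknown walker)
```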

While footstep recognition may be highly accurate, it is limited by its scalability. It is far more difficult to collect everyone’s walking style than to collect photos for facial recognition purposes. However, the future may reveal more opportunities for footstep recognition to be used in practical settings, such as airport security checkpoints. Combining footstep recognition with facial recognition may be a way of increasing accuracy in identifying an individual, while also alleviating the challenges these recognition systems encounter when used separately.
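One common way to combine two biometrics like this is score-level fusion: each modality produces a match score, and a weighted combination drives the final decision. The sketch below assumes both scores are normalized to [0, 1]; the weight and threshold are illustrative, since real systems tune them on validation data.

```python
# Hedged sketch of score-level fusion for face + footstep recognition.
# Weight and threshold values are illustrative assumptions only.

def fused_score(face: float, gait: float, w_face: float = 0.5) -> float:
    """Weighted combination of two normalized (0-1) match scores."""
    return w_face * face + (1 - w_face) * gait

def accept(face: float, gait: float, threshold: float = 0.8) -> bool:
    return fused_score(face, gait) >= threshold

# A weak face score (e.g., poor lighting) can be rescued by a strong
# gait match, which is one appeal of combining the two modalities:
print(accept(0.70, 0.95))  # 0.825 >= 0.8 → True
print(accept(0.70, 0.60))  # 0.65  <  0.8 → False
```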


Autonomy Lab's Ready-Aim-Fly in action.

What's Next: SFU

The School of Computing Science’s Autonomy Lab at Simon Fraser University is testing drones controlled by biometrics such as gesture and facial recognition.[1] Richard Vaughan, a researcher on this project, compares their facial recognition techniques to those used by the iPhone X’s Animoji feature.[2] The Autonomy Lab trains the facial recognition algorithm to learn the neutral and trigger expressions of the user. This process takes around one minute, or less for a repeat user. The trigger expressions were introduced as an additional layer of authentication after the team previously experienced false positives.[1]

Currently, the drone operates using the Ready-Aim-Fly method. In the Ready stage, the camera recognizes and registers the user’s neutral and trigger face.[2] Next, in the Aim stage, the drone uses the neutral expression, trigger expression, and location of the face to control its flight path (e.g., straight, circling, and parabolic).[1] Lastly, in the Fly stage, the drone flies along the desired path while continuing to track the user’s face.[2]
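The three stages above form a natural state machine, sketched below. The expression labels, transitions, and path-planning stub are hypothetical simplifications of the Autonomy Lab's system, which works on live camera frames and a real drone API.

```python
# Hypothetical sketch of the Ready-Aim-Fly control loop as a state
# machine. Frame processing and the drone API are stubbed out; state
# names follow the three stages described above.

class ReadyAimFly:
    def __init__(self):
        self.state = "READY"
        self.neutral = None
        self.trigger = None
        self.path = None

    def step(self, face_expression, face_location):
        if self.state == "READY":
            # Ready: register the user's neutral and trigger expressions.
            if self.neutral is None:
                self.neutral = face_expression
            elif face_expression != self.neutral:
                self.trigger = face_expression
                self.state = "AIM"
        elif self.state == "AIM":
            # Aim: use the face location to set a flight path, then
            # launch when the trigger expression is seen.
            self.path = self.plan_path(face_location)
            if face_expression == self.trigger:
                self.state = "FLY"
        elif self.state == "FLY":
            # Fly: follow the path while continuing to track the face.
            pass
        return self.state

    def plan_path(self, location):
        # Stub: a real system derives straight, circling, or parabolic
        # trajectories from the face's position.
        return ("straight", location)

ctl = ReadyAimFly()
print(ctl.step("neutral", (0, 0)))   # registers neutral → READY
print(ctl.step("eyebrows", (0, 0)))  # registers trigger → AIM
print(ctl.step("eyebrows", (5, 2)))  # trigger while aiming → FLY
```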

Conclusion

Facial recognition is here. It is part of our past, present, and future. As facial recognition and other forms of biometrics progress, we must evaluate their benefits and costs to society. Facial recognition allows for more business innovations, safety measures, and forms of entertainment. However, we must also acknowledge the potential threats it poses. Governments and businesses must hold themselves accountable for the decisions they make regarding the use of facial recognition in consumers’ everyday lives. Consumers must also be aware of these concerns. By being keenly aware of the technology that surrounds us, we can evaluate its positive and negative implications and make more informed decisions in the future.

Authors

Amanda Chor, Eugene Chak, Harasees Gill, Pihu Gosain

References

  1. http://autonomy.cs.sfu.ca/doc/bruce_crv17.pdf
  2. https://ca.finance.yahoo.com/news/controller-controller-fly-drone-facial-213825644.html