Neural Networks - D100


Artificial Neural Networks (ANNs)

Example of Simple Neural Network

Artificial neural networks (ANNs) are computer science's answer to the human brain. They are loosely based on the neural networks found in human and animal brains and the way neurons fire when learning takes place. ANNs use complex algorithms to process data and make decisions. By running these algorithms over significant amounts of data, ANNs are able to "learn" how to decode complex problems and develop solutions. Neural networks usually consist of an input layer, an output layer, and one or more "hidden layers"[1]. ANNs can be described with the analogy of baking a cake: the input layer would be the ingredients, the output layer would be the finished cake itself, and the hidden layers would be the chemical reactions that occur in the oven while the cake is baking.
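
As a rough illustration of the input/hidden/output structure (a minimal Python/NumPy sketch written for this article; the layer sizes, random weights, and sigmoid activation are arbitrary choices, not drawn from any of the sources cited here):

    import numpy as np

    # Minimal feedforward network: 3 inputs -> 4 hidden units -> 1 output.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(3, 4))   # input -> hidden weights
    b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1))   # hidden -> output weights
    b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x):
        hidden = sigmoid(x @ W1 + b1)       # "hidden layer" activations
        output = sigmoid(hidden @ W2 + b2)  # network's prediction
        return output

    x = np.array([0.2, 0.7, 0.1])  # one example input (the "ingredients")
    print(forward(x))              # the "finished cake": a value between 0 and 1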

A Brief History

Early pioneers in machine learning can be traced back as far as the 1950s[2], but the programs they created consisted only of simple algorithms. A major breakthrough occurred in the 1980s, when the backpropagation algorithm gained widespread attention. Backpropagation is essentially the backbone of training deep neural networks: it assigns "weights" to the connections in the network and, when errors occur, works backwards through the layers to determine how much each weight contributed to the mistake. This breakthrough was essential for ANNs because it allowed them to solve problems much more quickly, including problems that were previously impossible to solve[3]. Another breakthrough occurred around 2010, when advances in hardware made it feasible to train networks on large amounts of data. This improved the process of training neural networks and catalyzed many milestones achieved by companies in the field of machine learning. In 2011, IBM's Watson beat two human champions at Jeopardy!, and in 2016 Google's AlphaGo defeated one of the world's top players at the game of Go. Continual improvements in hardware and storage capacity indicate a promising future for ANNs.
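
The following toy sketch illustrates the idea of backpropagation: a tiny two-layer network learns the XOR function by propagating its output error backwards and nudging each weight in the direction that reduces the error. The network size, learning rate, and mean-squared-error loss are illustrative choices made for this example, not a description of any system mentioned above.

    import numpy as np

    # Toy backpropagation: learn XOR with a 2-4-1 sigmoid network.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(1)
    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(10000):
        # Forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: push the error from the output layer back through
        # the hidden layer to see how much each weight contributed to it.
        err_out = (out - y) * out * (1 - out)       # gradient at the output layer
        err_hid = (err_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer

        # Adjust weights in the direction that reduces the error.
        W2 -= lr * h.T @ err_out
        b2 -= lr * err_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ err_hid
        b1 -= lr * err_hid.sum(axis=0, keepdims=True)

    print(out.round(2))  # should approach [[0], [1], [1], [0]]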

Supervised vs. Unsupervised Learning

To perform tasks, ANNs generally use one of two types of machine learning: supervised or unsupervised. It is important to distinguish between these two learning methods because they shape how a neural network is trained. Supervised learning trains a machine on past examples of inputs paired with their correct outputs; in other words, the machine learns from predetermined definitions of "right" and "wrong". For example, you could feed a machine an enormous amount of mail, each message labeled as junk or normal, in order to train it to sort new mail into those two categories.
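
A minimal supervised-learning sketch of the mail-sorting example, using scikit-learn; the handful of messages and labels below are made up purely for illustration, and a real system would be trained on thousands of labeled messages:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    messages = [
        "win a free prize now", "cheap meds limited offer",   # junk
        "meeting moved to 3pm", "lunch tomorrow?",            # normal
    ]
    labels = ["junk", "junk", "normal", "normal"]             # the "right answers"

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(messages)   # turn text into word-count features

    model = MultinomialNB()
    model.fit(X, labels)                     # supervised: learn from labeled examples

    new = vectorizer.transform(["free offer just for you"])
    print(model.predict(new))                # expected: ['junk']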

On the other hand, unsupervised learning trains a machine on parallels in the data itself. An unsupervised network takes "clusters" of data and trains itself to find commonalities within those clusters. An important aspect of unsupervised learning is that the data is not "labeled, classified, or categorized"[4]. The data fed to an unsupervised network is essentially raw, and the network has to find patterns in it on its own. One well-known example of a self-taught network is AlphaGo, developed by Google DeepMind, which learned to play Go, a game that has been played in China for thousands of years[5]. AlphaGo began with no hand-coded strategies; by analyzing patterns and discovering tactics, it was able to defeat some of the most highly rated Go players in the world[6].
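
By contrast, the sketch below hands an algorithm unlabeled points and lets it discover groupings on its own. The data is synthetic and the choice of k-means clustering is an illustrative stand-in for unsupervised learning in general:

    import numpy as np
    from sklearn.cluster import KMeans

    # Unlabeled data: two loose "clusters" of 2-D points (synthetic, for illustration).
    rng = np.random.default_rng(0)
    data = np.vstack([
        rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
        rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),
    ])

    # No labels are provided; the algorithm groups points by similarity on its own.
    model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
    print(model.cluster_centers_)               # roughly [0, 0] and [5, 5]
    print(model.labels_[:5], model.labels_[-5:])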

Reinforcement Learning vs. Deep Learning

Another important distinction in machine learning is the difference between reinforcement learning and deep learning. Reinforcement learning trains a network by setting a goal and letting the algorithms work toward attaining it. When the algorithm makes a "good" decision with respect to the goal, it is rewarded; likewise, when it makes a "bad" decision, it is penalized. Through this process the machine learns what it needs to do to achieve the goal and what it needs to stop doing. Deep learning is somewhat different in that the algorithms have many layers, and each layer is usually given a specific task that contributes to the overall goal. For example, in facial recognition software, one layer of the neural network might focus on identifying the eyes and ears while another focuses on identifying the nose and mouth. Both layers run separate tasks that ultimately come together to accomplish the main goal of recognizing the entire face. These two ways of learning are not mutually exclusive; in deep reinforcement learning, the two approaches work together to speed up the process of attaining a certain goal or outcome[7]. Deep learning is relatively new compared to the rest of machine learning, and researchers are still working to fully understand its capabilities. Because it can learn from very large amounts of data, the hope is that deep learning will significantly increase efficiency in maximizing a given output or goal.
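
The reward-and-penalty loop described above can be illustrated with tabular Q-learning on a toy "corridor" task. The environment, rewards, and hyperparameters below are invented for demonstration and are far simpler than the deep reinforcement learning systems discussed in this article:

    import numpy as np

    # Tabular Q-learning on a tiny corridor: states 0..4, goal at state 4.
    # Reaching the goal earns a reward (+1); every other step costs -0.01.
    n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, epsilon = 0.5, 0.9, 0.1
    rng = np.random.default_rng(0)

    for episode in range(200):
        state = 0
        while state != n_states - 1:
            # Explore occasionally, otherwise take the best-known action.
            if rng.random() < epsilon:
                action = rng.integers(n_actions)
            else:
                action = int(Q[state].argmax())

            next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
            reward = 1.0 if next_state == n_states - 1 else -0.01   # reward / penalty

            # Update the estimate of how good (state, action) is.
            Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
            state = next_state

    print(Q.argmax(axis=1)[:-1])  # learned policy for states 0-3: expected to be all 1s ("move right")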

Technology Leaders in Deep Learning

Deep learning has rapidly emerged as a major trend in the technology industry. A significant share of human tasks is expected to be automated in the future, and the quality of image and audio processing is increasing significantly. Fields such as process automation will be transformed by the widespread adoption of this technology. Analytics increasingly relies on deep learning to proactively find patterns and gain insights from data sets, and smart chatbots and assistants such as Google Assistant point to user experiences increasingly built on neural networks.

Other major enterprise organizations such as Microsoft and IBM have fully adopted the use of neural networks, offering a software-as-a-service approach to organizations.

How Industry Will Be Affected (Source: SAP CSG analysis, McKinsey Quarterly Report July 2016, Google PR, Microsoft PR, SAP Market Model)

Google DeepMind

DeepMind is an artificial intelligence company founded in 2010 in the United Kingdom. DeepMind initially built neural networks that learned, without supervision, by playing several video games. It was acquired by Google in 2014 and is now an industry leader in neural networks, with a strong focus on integrating with and improving Google's existing architecture.[8]

DeepMind is being deployed in healthcare settings as well. Streams is an application created by DeepMind to assess kidney health by proactively analyzing a patient's profile to determine whether they may have an acute kidney injury.[9] Moorfields Eye Hospital, part of the United Kingdom's National Health Service, also gave DeepMind access to a large set of eye scans. DeepMind's network analyzed these scans for some 50 common eye conditions, such as glaucoma and diabetic retinopathy. Its recommendations were found to be 94.5% accurate, "on par" with those of clinical professionals.[10]

The aforementioned AlphaGo is another product of DeepMind. As described at length in a Netflix documentary, AlphaGo won several games against the highly ranked Go champion Lee Sedol; this level of Go play from artificial intelligence had not previously been possible. Competitors such as Facebook's DarkForest have not been able to reach the level that AlphaGo did. Compared to games like chess, Go is considered significantly more difficult for artificial intelligence to learn. Furthermore, AlphaGo Zero was announced in October 2017. This version is said to be significantly more capable than its predecessor, beating DeepMind's old world-champion "AlphaGo Lee" version 100 games to 0.[11]

DeepMind's technology has also been deployed with Google Assistant, particularly as phone processing power has increased. DeepMind developed WaveNet, a neural network that generates realistic-sounding speech. DeepMind has also increased the energy efficiency of Google's servers, and is being harnessed to automatically adjust brightness and phone performance for optimal energy consumption.[12]

Microsoft Azure

Microsoft's Azure platform is another major enterprise player in the fields of neural networks and deep learning. The platform is offered as a service, giving organizations cloud capabilities backed by Microsoft's processing power.

Applications of these technologies are wide ranging. Image recognition is one service, using artificial intelligence to classify or sort images. Factories are able to deploy predictive defect detection, with Azure helping to warn of potential issues in manufacturing. Azure AI also offers smart chatbots, which can answer first-line customer inquiries, respond conversationally, and help decrease wait times and labour costs. Other uses for Azure AI include demand forecasting for sales, price optimization, and energy supply optimization.[13]

IBM Watson playing Jeopardy.[14]

IBM Watson

IBM's Watson is likely best known for beating Jeopardy's top champions. While not necessarily a neural network itself, Watson harnesses machine learning and corresponding deep learning techniques. With these technologies, IBM offers a number of cloud-based tools built on Watson's machine learning and neural network capabilities. One major offering is Watson Studio, which includes a neural network modeler that enables users to model and run complex neural networks.[15]

Watson also offers chatbots through Watson Assistant. These chatbots can be used to quickly answer simple customer questions, and they are becoming increasingly able to handle more complex tasks such as payments and scheduling. IBM also describes how Watson can infer personality and emotional characteristics from text.[16]

Another use case described by IBM is contract review: Watson has the ability to quickly compare and verify contracts using natural language processing.[17] Other uses for this technology include text-to-speech, visual image recognition, and integration with Internet of Things (IoT) platforms.[18]

Facebook Provided Example of Deepface [19]

Facebook Deepface

Facebook harnesses machine learning and neural networks for its controversial DeepFace technology. The neural network was trained on roughly four million Facebook user photos corresponding to four thousand individual users and, as a result, is able to recognize faces in images with near-perfect accuracy. DeepFace's recognition rate, at 97.35%, is approximately the same as that of humans.[20]

Controversy is centered around the collection and use of user data. The European Union has mandated that Facebook delete its users' facial recognition data.[21]

United States Government

The United States government has revealed some of its uses of neural network and machine learning technologies. One of these is a recent collaboration between the US Transportation Security Administration (TSA) and Google, which seeks to improve the weapon detection rate of the TSA's body scanners.[22]

The FBI is using neural networks to create a facial recognition database, which currently contains more than 400 million photos. Two programs publicly discussed by the FBI are the Next Generation Identification system, in which agents manually search a list of retrieved candidate matches, and the Facial Analysis, Comparison, and Evaluation (FACE) Services Unit, which is able to search a more comprehensive database and automatically retrieve suspect results.[23] However, there are a number of limitations, such as an 85% accuracy rate and the need for an individual to be photographed front-facing. Neural networks are also used by the FBI to rapidly process other biometric data, such as fingerprints, palm prints, and iris scans.[24]

Applications for ANNs

The following applications are just a few of the potential uses for ANNs, illustrating the immense range of possibilities for ANNs across industries.

Natural Language Processing

A major application area for ANNs is natural language processing, which focuses on understanding, interpreting, and manipulating human language.[25]

One prominent platform is Google’s Neural Machine Translation (GNMT) system, which was found to reduce translation errors by up to 60% compared to Google’s phrase-based (PBMT) model [26]. In some cases, the model “approaches the accuracy achieved by average bilingual human translators.” [27]

Translation comparison: Phrase-Based Machine Translation, Google Neural Machine Translation, Human Translation.[28]

In addition to translation, natural language processing can be used to prioritize incoming messages (such as customer feedback or emails), improve help desk chatbots, and transcribe voicemails. Digital assistants, such as Apple’s Siri or Amazon’s Alexa, also use natural language processing to both comprehend and generate language.

Treating Coma Patients

In September 2018, it was reported that the Chinese Academy of Sciences and PLA General Hospital in Beijing had developed an ANN that could predict the probability of recovery for patients in a vegetative state who had been given “no hope of recovery”. With the permission of the patients’ families, the ANN analyzed brain scans, predicting with 90% accuracy whether the patients would recover. Some believe that this use of ANNs helps loved ones to navigate the difficult ethical issue of whether to prolong the life of a person in this state.[29]

Earthquake Prediction

ANNs are also being used to identify patterns in seismic data, with the objective of forecasting earthquakes. Harvard professor Brendan Meade ran an earthquake analysis using an ANN and found that it was roughly 500 times faster than his previous approach. Furthermore, by using random forests, a machine learning technique, scientists have discovered signals in a simulated fault that were previously undetectable. The seismic data used for this forecasting is reportedly similar to the audio data processed by digital assistants.[30]
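
As a rough sketch of the random-forest approach, the example below fits a forest to synthetic data standing in for laboratory fault measurements; the feature names and the relationship between signal noise and time to failure are invented for illustration:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Synthetic stand-in for lab "fault" data: acoustic-signal statistics as
    # features, time-until-failure as the target (real studies use measured signals).
    rng = np.random.default_rng(0)
    n = 1000
    signal_variance = rng.uniform(0, 1, n)
    signal_kurtosis = rng.uniform(0, 1, n)
    # Pretend failures come sooner when the signal gets noisier (toy relationship).
    time_to_failure = 10 * (1 - signal_variance) + rng.normal(0, 0.5, n)

    X = np.column_stack([signal_variance, signal_kurtosis])
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[:800], time_to_failure[:800])

    print(model.score(X[800:], time_to_failure[800:]))   # R^2 on held-out samples
    print(model.feature_importances_)                    # which signal statistic matters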

Talent Recruitment

In an experiment, McKinsey used ANNs to screen resumes. With a sample size of 10,000 candidates, the ANN’s results were similar to those from McKinsey’s existing process. In fact, the ANN selected a slightly higher percentage of female candidates, suggesting that it may help to reduce cognitive bias in the recruitment process.[32]

Before/after comparison of Yelp cover photos.[31]

Image Processing

Restaurant review website Yelp used a convolutional neural network to improve its selection of images to display for restaurant profiles. Previously, the most-viewed images were shown first; however, this method resulted in the earliest images, which were often not the best images, being selected most of the time. Based on a variety of parameters, the ANN learned from 100,000 images. With each iteration, the machine's ability to identify "good" and "bad" images improved.[33]
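
A minimal sketch of a convolutional network that scores photos as "good" or "bad", written in Keras with random stand-in images; the architecture, image size, and training data here are invented and differ from Yelp's actual model:

    import numpy as np
    import tensorflow as tf

    # Stand-in data: 200 random 64x64 RGB "photos" with random good/bad labels.
    images = np.random.rand(200, 64, 64, 3).astype("float32")
    labels = np.random.randint(0, 2, size=200)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(64, 64, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # probability the photo is "good"
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(images, labels, epochs=2, batch_size=32, verbose=0)

    scores = model.predict(images[:5], verbose=0)   # rank candidate cover photos by score
    print(scores.ravel())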

CRM (Credit Card Providers)

In one McKinsey-provided example, an international bank was able to use machine learning to analyze customer behaviour, noticing that particular customers had shifted their credit card use from daytime to nighttime while the balances in their savings accounts decreased. Customers that the bank's algorithm flagged were given financial advice and an increase in their credit limit.[34]
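
As a simplified stand-in for the kind of behavioural signal the bank's model looks for, the sketch below flags customers whose card use shifts to night-time while their savings fall. The column names, data, and thresholds are invented, and a simple rule replaces the bank's actual machine learning model:

    import pandas as pd

    # Hypothetical transaction summary per customer per month (made-up columns).
    df = pd.DataFrame({
        "customer": ["A", "A", "B", "B"],
        "month": ["2018-10", "2018-11", "2018-10", "2018-11"],
        "night_txn_share": [0.10, 0.65, 0.12, 0.15],   # fraction of card use at night
        "savings_balance": [5000, 1800, 7000, 7100],
    })

    # Flag customers whose card use shifted to night-time while savings fell.
    pivot = df.pivot(index="customer", columns="month",
                     values=["night_txn_share", "savings_balance"])
    shift_to_night = pivot["night_txn_share"]["2018-11"] - pivot["night_txn_share"]["2018-10"] > 0.3
    savings_drop = pivot["savings_balance"]["2018-11"] < pivot["savings_balance"]["2018-10"] * 0.8
    flagged = pivot.index[shift_to_night & savings_drop]
    print(list(flagged))   # ['A'] -> candidates for outreach and financial advice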

Self-Driving Cars

Self-driving cars are one major use case for machine learning and ANNs. Challenges such as the need for significant computational power are an important factor in the development of autonomous vehicles, and factors such as weather, road conditions, visibility, and hazards must all be managed. Self-driving vehicles are therefore being trained with very large datasets (in the petabytes) that take into account as many driving parameters and conditions as possible, and industry competitors such as Waymo, Ford, and General Motors are using larger fleets in order to collect a suitable amount of data. Another challenge is teaching new information to a neural network while it remains capable of remembering old information, even with a massive dataset.[35]

Financial Trading

Financial trading is already aided significantly by technology, one example being the identification of investments that beat benchmark returns. However, trading, forecasting, and other key financial services such as asset management are increasingly being aided, or completely taken over, by machine learning. Labour-intensive tasks will increasingly be automated, and human financial activities will increasingly centre on managing physical assets. Assets such as electricity and carbon credits will increasingly be traded and managed by machine learning-based technologies.[36]

Manufacturing and General Electric

As above, Azure is a major provider of neural network-based predictive services for manufacturing errors, and Watson similarly offers machine learning-based manufacturing predictions. General Electric has also been leveraging this technology to analyze data from oil wells and jet engines. This data provides helpful insights into failure rates and helps streamline maintenance. GE expects the technology to continue improving, allowing for increasingly high-quality insights for manufacturing.[37]

Ethical Considerations

Criminality Prediction and Facial Recognition

ANNs have been used to judge facial scans in order to predict criminality. Early attempts were discredited, with no statistical difference found between criminals and non-criminals. However, a recent ANN, fed with images of criminals and non-criminals in China, has reportedly been able to predict criminality with an 89.5% accuracy rate.[38] The United States government's FACE (Facial Analysis, Comparison, and Evaluation) Services Unit, mentioned above, is a major use of facial recognition, in which security footage and police databases are checked against a "probe mugshot" in order to find suspects.[39]

Deepfakes

In the era of Photoshop, it has become increasingly hard to tell whether a photo is real or modified. Video footage, meanwhile, has traditionally been assumed to be immune from this problem; it appears that this is soon to change. New technology, powered by machine learning, has made it substantially easier to overlay one person's face onto another's and track its movements so that it looks like someone else is talking. These videos are known as "deepfakes"[40] and can be convincing enough to dupe people. So far the technology has mostly been used to create explicit content, but it has the potential to be used for even more nefarious ends, posing a serious threat to society. Comedian and filmmaker Jordan Peele exemplified this danger by creating a deepfake in which he imitated former US President Barack Obama saying things he would not normally say[41]. This could have drastic and frightening societal consequences if the general public can be swayed by deepfakes. Although it is currently relatively easy to identify a deepfake, the technology is rapidly improving due to the integration of deep neural networks. As the issue of deepfakes grows, it could erode the trust traditionally placed in the legitimacy of video footage.

Impact on Jobs

One of the most widespread concerns about the rise of artificial intelligence is the cannibalization of jobs. By all indications, it is inevitable that ANNs will change the way we work. Machine learning is particularly suitable for “tasks that involve making quick decisions based on data”.[42] This could reshape the job description of entry level business analysts, for example.

Where humans are still superior is in decisions that depend on “long chains of reasoning, diverse background knowledge, or common sense”.[43] This automation of analytics could shift the focus of the workforce to more nuanced skills such as managing and creating.

Managerial Implications

McKinsey articulates the importance of tying deep learning implementation directly to strategy. “Without strategy as a starting point, machine learning risks becoming a tool buried inside a company’s routine operations: it will provide a useful service, but its long-term value will probably be limited to an endless repetition of ‘cookie cutter’ applications such as models for acquiring, stimulating, and retaining customers.”[44]

The deep integration of this type of cutting edge technology with business strategy naturally results in an increased demand for employees who can bridge the gap between the IT and business sides of an organization. These people are described by Deloitte Analytics advisor Tom Davenport as “light quants” and “analytical translators”.[45]

Looking Ahead

Quantum Neural Networks

Researchers are examining the implications of embedding ANNs within qubits, which would enable them to leverage the power of quantum computing to exponentially increase the storage and speed of ANNs. Xanadu, a Toronto-based quantum computing company, suggests the possibility of using a Hopfield network to identify a strain of flu based on a partial sequence of the virus’ genetic code.[46]
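
Classical (non-quantum) Hopfield networks already perform this kind of pattern completion. The sketch below, written for illustration with two invented toy patterns, stores them as attractor states and then recovers one from a corrupted probe, the same idea Xanadu describes scaling up with qubits:

    import numpy as np

    # Classical Hopfield network: store binary patterns as attractor states,
    # then recover a full pattern from a partial or noisy version.
    patterns = np.array([
        [1, -1, 1, -1, 1, -1, 1, -1],     # toy +1/-1 encoded "sequence" fragments
        [1, 1, 1, 1, -1, -1, -1, -1],
    ])

    # Hebbian learning: each stored pattern strengthens the weights between
    # units that are active together.
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)

    def recall(state, steps=10):
        state = state.copy()
        for _ in range(steps):
            for i in range(n):                       # asynchronous unit updates
                state[i] = 1 if W[i] @ state >= 0 else -1
        return state

    # Corrupt the first two entries of a stored pattern, then let the network settle.
    probe = patterns[0].copy()
    probe[:2] = -probe[:2]
    print(recall(probe))          # expected to match patterns[0]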

Debunking Worst Case Scenarios

Will robots take over?

The film I, Robot tells a worst-case-scenario fictional story of robots rising up against humans. Pop culture examples like this, paired with recent advancements in machine learning, have sparked fear in some of the general public of a robot uprising happening in the real world. Related to this is a somewhat unnerving instance in which two chatbots, developed by Facebook to interact with each other, invented their own language and began using it to communicate because it was more efficient[47]. The chatbots were shut down because the researchers were seeking to create chatbots that could talk to humans, not ones that invent their own language and talk to each other.

While ANN technology has made significant progress in recent years, several major limitations still exist[48]. One is that even simple ANNs require a substantial amount of memory to run and can be quite hardware intensive. Larger ANNs running numerous tasks at once would take an enormous amount of memory, far beyond what the average person currently has access to on a laptop or desktop computer.

Another limitation for ANNs is the vast amount of data required to run and train the complex algorithms. Improvements in cloud and hardware storage capacity are making data storage easier, but there is still a long way to go. For certain types of machine learning, there is a need for data scientists to label and organize the data so that the ANN does not teach itself the wrong things. This is a long and tedious process that may negate long run cost efficiency.

Finally, ANNs are currently very inept at explaining to humans the context and rationale for their decisions. For example, humans playing a platform game such as Super Mario World would have the common sense to jump over an obstacle they encounter. A machine might also jump over the obstacle, but it would not be able to explain to a human why it chose to do that.

These significant limitations indicate that the widespread presence of ANNs is still some way off. Considering the current pace at which technology is developing, it is likely that the technological landscape will look vastly different by then, making dystopian scenarios like I, Robot unlikely to happen in reality.

Deloitte’s Five Vectors of Progress

To gauge the advancement of machine learning, Deloitte Global selected the following measurable categories:[49]

  1. Automating Data Science – Machine learning involves time-consuming tasks, such as data exploration and feature engineering. Deloitte found that these tasks are increasingly being automated, saving data scientists up to 80% of their time.
  2. Reducing the Need for Training Data – Deloitte used 80% synthesized training data to build an accurate model. This has significant implications for processing speed, which can be critical in cases like earthquake prediction.
  3. Accelerating Training – A team from Microsoft accelerated the development of an advanced natural language processing system by using graphics processing units (GPUs). This cut the time required from five years to just one year.
  4. Explaining Results – Previously, most neural networks functioned as “black boxes”, making it difficult to understand the rationale for the decisions they made. New research from MIT has shown that neural networks can be trained to provide accurate predictions and explain how the decision was made. This is significant in facilitating manager adoption, particularly since many managers would be skeptical of making major decisions without rational explanation.
  5. Deploying Locally – An increasing number of applications for machine learning are being developed for mobile devices and smart sensors. With chip manufacturers creating more power-efficient AI chips, machine learning is becoming more accessible and portable.

These vectors provide a convenient framework to track progress in ANN technology and applications.

Authors

Maxwell Hanson, Vuk Petrovic, and Daniel Breedveld
Beedie School of Business
Simon Fraser University
Burnaby, BC, Canada

References

  1. https://www.edge.org/response-detail/10351
  2. https://en.wikipedia.org/wiki/Timeline_of_machine_learning
  3. http://neuralnetworksanddeeplearning.com/chap2.html
  4. https://en.wikipedia.org/wiki/Unsupervised_learning
  5. https://en.wikipedia.org/wiki/Go_(game)#Origin_in_China
  6. https://deepmind.com/research/alphago/alphago-korea/
  7. https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/
  8. https://deepmind.com/about/
  9. https://www.zdnet.com/article/deepmind-and-the-nhs-what-its-really-like-to-use-googles-kidney-health-app/
  10. https://www.wired.co.uk/article/deepmind-moorfields-ai-eye-nhs
  11. https://deepmind.com/research/alphago/
  12. https://deepmind.com/applied/deepmind-google/
  13. https://azure.microsoft.com/en-ca/solutions/architecture/?tag=artificial-intelligence&Page=2
  14. https://blog.ted.com/how-did-supercomputer-watson-beat-jeopardy-champion-ken-jennings-experts-discuss/
  15. https://www.ibm.com/cloud/watson-studio
  16. https://www.ibm.com/watson/how-to-build-a-chatbot/
  17. https://www.ibm.com/watson/contract-governance/
  18. https://www.ibm.com/watson/products-services/
  19. https://www.huffingtonpost.ca/entry/facebook-deepface-facial-recognition_n_4985925
  20. https://research.fb.com/publications/deepface-closing-the-gap-to-human-level-performance-in-face-verification/
  21. https://www.huffingtonpost.ca/entry/facebook-deepface-facial-recognition_n_4985925
  22. https://www.nytimes.com/2017/06/22/technology/homeland-security-artificial-intelligence-neural-network.html
  23. https://www.cnn.com/2018/06/29/us/facial-recognition-technology-law-enforcement/index.html
  24. https://www.fbi.gov/services/cjis/fingerprints-and-other-biometrics/ngi
  25. https://www.sas.com/en_ca/insights/analytics/what-is-natural-language-processing-nlp.html
  26. https://arxiv.org/pdf/1609.08144.pdf
  27. https://arxiv.org/pdf/1609.08144.pdf
  28. https://arxiv.org/pdf/1609.08144.pdf
  29. https://www.scmp.com/news/china/science/article/2163298/doctors-said-coma-patients-would-never-wake-ai-said-they-would
  30. https://www.nytimes.com/2018/10/26/technology/earthquake-predictions-artificial-intelligence.html
  31. https://engineeringblog.yelp.com/2016/11/finding-beautiful-yelp-photos-using-deep-learning.html
  32. https://www.mckinsey.com/industries/high-tech/our-insights/an-executives-guide-to-machine-learning
  33. https://engineeringblog.yelp.com/2016/11/finding-beautiful-yelp-photos-using-deep-learning.html
  34. https://www.mckinsey.com/industries/high-tech/our-insights/an-executives-guide-to-machine-learning
  35. https://devblogs.nvidia.com/training-self-driving-vehicles-challenge-scale/
  36. https://www.bloomberg.com/news/articles/2018-11-29/quant-chief-says-more-trading-will-be-taken-over-by-machines
  37. https://www.mckinsey.com/industries/high-tech/our-insights/an-executives-guide-to-machine-learning?fbclid=IwAR3REtCh2hCfya68qdNj28psZLgZDrMpTsUNc5tVqm0tH9rh_Dpv1qLTARw
  38. https://www.technologyreview.com/s/602955/neural-network-learns-to-identify-criminals-by-their-faces/
  39. https://www.cnn.com/2018/06/29/us/facial-recognition-technology-law-enforcement/index.html
  40. https://www.technologyreview.com/s/612501/inside-the-world-of-ai-that-forges-beautiful-art-and-terrifying-deepfakes/
  41. https://www.youtube.com/watch?v=cQ54GDm1eL0
  42. https://www.cmu.edu/news/stories/archives/2017/december/machine-learning-study.html
  43. https://www.cmu.edu/news/stories/archives/2017/december/machine-learning-study.html
  44. https://www.mckinsey.com/industries/high-tech/our-insights/an-executives-guide-to-machine-learning
  45. https://www2.deloitte.com/us/en/pages/deloitte-analytics/articles/in-praise-of-light-quants-and-analytical-translators.html
  46. https://medium.com/xanaduai/making-a-neural-network-quantum-34069e284bcf
  47. https://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html
  48. https://towardsdatascience.com/hype-disadvantages-of-neural-networks-6af04904ba5b
  49. https://www2.deloitte.com/content/dam/Deloitte/global/Images/infographics/technologymediatelecommunications/gx-deloitte-tmt-2018-intense-machine-learning-report.pdf