History and Future Implications of Machine Learning

anonymous


"Machine Learning" has become a phrase that is synonymous with Computer Science in general today. Huge technology companies like Facebook, Google, Amazon are pushing massive marketing campaigns using cool terms such as this to increase sales or activity on their websites. These campaigns have essentially made Machine Learning a marketing slogan. However, machine learning as a field is much more diverse and fascinating.

Machine learning started from an analysis of how our brains work. We have billions of neurons in our brains that pass information to one another through electrical impulses until it reaches the correct recipient. In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts wrote a paper about how neurons work and modeled a simple neural network using an electrical circuit [1]. This was the first neural network model: it receives inputs and produces an output based on which nodes are activated, reflecting their hypothesis that each neuron in our brain is responsible for recognizing certain patterns or behaviors. This research sparked public interest in the computer as a "brain" that could do far more than the machines of the 1940s, especially its potential to one day pass the Turing Test, the legendary test created by Alan Turing to see whether a machine could trick a human into thinking it was human.
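To make the idea concrete, a McCulloch-Pitts style unit can be sketched in a few lines of Python: it sums weighted binary inputs and "fires" only if the sum reaches a threshold. This is a minimal illustration of the concept, not a reconstruction of the original 1943 circuit; the weights and threshold below are made-up values.

    def mcculloch_pitts_unit(inputs, weights, threshold):
        """Fire (return 1) only if the weighted sum of binary inputs reaches the threshold."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Illustrative example: with these weights and threshold the unit behaves
    # like a logical AND of its two inputs.
    print(mcculloch_pitts_unit([1, 1], [1, 1], threshold=2))  # 1
    print(mcculloch_pitts_unit([1, 0], [1, 1], threshold=2))  # 0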

Frank Rosenblatt created the first artificial neural network, the Perceptron, in 1958 [2], but the first real commercial use of machine learning came in 1959, when Bernard Widrow and Marcian Hoff of Stanford created ADALINE and MADALINE. ADALINE was a network that could detect binary patterns: given a stream of 0s and 1s, it could predict the next bit in the sequence. MADALINE, an enhanced version of ADALINE, was used to eliminate echo on phone lines [3]. Other significant achievements include the Stanford Cart, a robot equipped with sensors and cameras that could detect and avoid obstacles [4], and Arthur Samuel's checkers program, which played better the more games it played. There were also obstacles that slowed progress in the field. Professor Sir James Lighthill was the leading voice arguing that machines could never do anything beyond simple, rudimentary functions [5], and Hollywood movies such as The Terminator painted an extremely bleak future of humans ruled by machines, inspiring fear and paranoia. These factors, along with several failed experiments, led to deep cuts in funding for AI and machine learning.
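What set the Perceptron apart from the fixed McCulloch-Pitts unit is that its weights are learned from examples. The sketch below shows the classic perceptron error-correction rule in Python; the tiny OR-gate dataset, learning rate, and epoch count are illustrative choices, not details taken from Rosenblatt's original work.

    def train_perceptron(examples, epochs=10, lr=0.1):
        """Train a single perceptron with the classic error-correction rule."""
        n_inputs = len(examples[0][0])
        weights = [0.0] * n_inputs
        bias = 0.0
        for _ in range(epochs):
            for x, target in examples:
                prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
                error = target - prediction
                # Nudge the weights toward the correct answer whenever the prediction is wrong.
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
        return weights, bias

    # Illustrative data: four labeled examples of a logical OR.
    or_examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    print(train_perceptron(or_examples))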

Fast forward to the 21st century, and significant improvements have again been made. More than 60 years after the Turing Test was proposed, "Eugene Goostman", a chatbot imitating a Ukrainian child, successfully tricked 33% of human judges into thinking it was human [6]. Personal electronic devices now incorporate many neural network applications, such as face detection, spam and content filtering, natural-language personal assistants, personalized recommendations, and deep learning to optimize system performance. Smart devices range from cars such as Teslas, which can detect traffic conditions and respond automatically without driver intervention, to smart toasters. Many applications and features we take for granted today are the result of years of research and dedication. These improvements in technique and implementation have been accompanied by improvements in architecture. NVIDIA recently introduced GPUs with "tensor cores" specialized for machine learning: each core uses a matrix processing array to accelerate the matrix math and activation functions at the heart of neural networks, increasing the number of operations completed per clock cycle [2]. Google uses a similar chip, the Tensor Processing Unit, designed solely for machine learning workloads in its services. Moreover, major tech companies such as Amazon and Google rent their servers and machine learning APIs to the public, which has removed some of the barriers to creating, implementing, and developing applications and services that can be extremely useful to the general public and to localized interests.
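The reason such hardware matters is that a neural network layer is, at bottom, a matrix multiplication followed by an activation function, so accelerating matrix math speeds up nearly everything the network does. The NumPy sketch below shows one such fully connected layer; the sizes are arbitrary examples, and nothing here is tied to any particular GPU or TPU.

    import numpy as np

    def dense_layer(x, weights, bias):
        """One fully connected layer: a matrix multiply followed by a ReLU activation.
        The matrix multiply is the kind of operation tensor cores and TPUs accelerate."""
        return np.maximum(0.0, x @ weights + bias)

    # Illustrative sizes: a batch of 32 inputs with 128 features mapped to 64 outputs.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((32, 128))
    w = rng.standard_normal((128, 64))
    b = np.zeros(64)
    print(dense_layer(x, w, b).shape)  # (32, 64)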

Improvements to the field of Computer Science and computing technology in recent years have been massive, to say the least. Computation time and hardware costs have fallen dramatically even as the complexity of the problems we can tackle has grown. These improvements give us a chance to solve significant problems we could only dream of a few years ago. Scientists at MIT have developed a reinforcement learning model that learns from patient data to make cancer treatment regimens less toxic. Vulcan, a philanthropic company based in Seattle, has been using machine learning to analyze data gathered by drones and cameras to measure coral health, track wildlife populations, identify dangers to vulnerable species, and gain insight into key species and ecosystems [7]. Scientists all over the world are gathering environmental data to assess and fight the effects of global warming. The potential of machine learning, and the range of fields that can benefit from it, is huge and diverse. Issues such as privacy are legitimate concerns that arise naturally from the way neural networks work, given the tremendous amounts of data they require, and they must be addressed. But that should not stop us from developing and improving this field. Perhaps in another 60 years, we will finally build a machine that passes the Turing test.

References

  1. "Neural Networks - History." Stanford University, cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks/History/history1.html
  2. "History of Machine Learning." www.doc.ic.ac.uk/~jce317/history-machine-learning.html
  3. "Neural Networks History: The 1940's to the 1970's." Department of Computer Science, Stanford University, cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks/History/history1.html
  4. "Autonomous Cars Through the Ages." Wired.
  5. "iWonder - AI: 15 Key Moments in the Story of Artificial Intelligence." BBC, www.bbc.com/timelines/zq376fr
  6. Aamoth, Doug. "Interview: Eugene Goostman Passes the Turing Test." Time, 9 June 2014, time.com/2847900/eugene-goostman-turing-test/
  7. "Vulcan Machine Learning Center for Impact." Vulcan, www.vulcan.com/areas-of-practice/technology-science/key-initiatives/machine-learning-center-for-impact