Machine learning is the electricity of the future
Necessity is the mother of invention. Early humans realized the need to share and socialize, so they created language. Later, we realized the need to keep account of our belongings, so we created mathematics. As time passed, the amount of information we gained skyrocketed, and the mathematics needed to explain and preserve it followed suit. Humans realized that our raw intelligence was not enough to keep us going, so we created computing machines. What began with an abacus has today evolved into mainframes and supercomputers, giving us computing speeds that were unimaginable a few centuries ago. Performing billions of tasks every second, high-end computers can outperform humans in most computation-based applications. But for all this thinking ability and extraordinary memory, these superhuman devices still need our help in every aspect: from being born to defining their purpose. What keeps them from getting ahead of us? Or, in general, what has kept them from showing us something we never asked for?
Consider this example. You give your computer some instructions: a series of “inputs”. But instead of having the computer calculate something for you, you decide to tell it some “answers”: the outputs. You never tell your computer how the outputs were obtained. Suppose the inputs and outputs you gave are as shown in the table below. What you want your computer to do is tell you the answer when you give it an input. So when you input 3, it gives out 9. When you say 7, it says 49.

| Input | Output |
| ----- | ------ |
| 3     | 9      |
| 5     | 25     |
| 7     | 49     |
Now, you ask for the output at 12. If you guessed 144, it’s probably the correct answer. But did your computer say 144? I’m sure it didn’t.
Why did your computer not know the answer? When you saw the table up there, you observed a pattern: the outputs are squares of the given inputs. When asked for the output at 12, you worked out that it is 144. Your computer, on the other hand, never noticed any such thing. All it knew was that when you ask for 5, it has to say 25; that lookup table laid the boundary of its universe.
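The computer in this example behaves like a lookup table. A minimal Python sketch (the dictionary is a stand-in for the table above, and the function name is made up for illustration) shows why it fails on an input it has never seen:

```python
# The computer's entire "knowledge": the input-output pairs it was told.
answers = {3: 9, 5: 25, 7: 49}

def memorizing_computer(x):
    # Return the stored answer, or None for inputs it was never told about.
    return answers.get(x)

print(memorizing_computer(5))   # 25, because it was told so
print(memorizing_computer(12))  # None: it never noticed the squaring pattern
```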
What we now want is to help the computer do what it couldn’t: observe patterns in the data it is given. Broadly, this field is called Artificial Intelligence.
The goal of every artificially intelligent system is to make guesses about the output when it is given a new input, based on the patterns it noticed in previous interactions. Another goal is to improve its guessing ability (by making as many correct guesses as possible) as it gets “trained” on more and more cases. Prof. Tom Mitchell of Carnegie Mellon University defines machine learning as follows: a computer program is said to learn from experience ‘E’, with respect to some class of tasks ‘T’ and performance measure ‘P’, if its performance at tasks in ‘T’, as measured by ‘P’, improves with experience ‘E’.
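Mitchell’s definition can be made concrete with a toy sketch. Here the task T is predicting squares, the experience E is a set of known input-output pairs, and the performance P is the mean error on unseen inputs. The deliberately simple nearest-neighbour guesser below is a hypothetical stand-in for a real learner, but it shows the shape of the definition: more experience, better performance.

```python
def predict(experience, x):
    # Guess by copying the answer of the closest input seen so far.
    nearest = min(experience, key=lambda seen: abs(seen - x))
    return experience[nearest]

def mean_error(experience, test_inputs):
    # Performance measure P: average absolute error on unseen inputs.
    return sum(abs(predict(experience, x) - x ** 2)
               for x in test_inputs) / len(test_inputs)

little_E = {x: x ** 2 for x in (0, 10)}        # two examples
more_E = {x: x ** 2 for x in range(0, 11, 2)}  # six examples
unseen = [1, 3, 5, 7, 9]

print(mean_error(little_E, unseen))  # 21.0
print(mean_error(more_E, unseen))    # 9.0: performance improves with experience
```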
Most people have a very hazy idea of artificial intelligence and often use the terms machine learning, deep learning and artificial intelligence interchangeably. Moreover, many people picture robots when they think of AI. Though they are not wrong, they aren’t entirely right either. In short, deep learning is a subset of machine learning, which in turn is a subset of artificial intelligence, while robotics is a related but distinct field that draws on AI.
Ubiquity of Artificial Intelligence
AI has become a significant part of almost every industry. Here are some avenues where it is extensively used.
Prices of assets like infrastructure, housing, stocks and businesses are predicted to a reasonable accuracy using ML algorithms. Businesses use machine learning internally for applications like statistical arbitrage: predicting stock prices and market conditions, and determining optimum trading parameters. E-commerce platforms like Swiggy or Amazon and infotainment services like YouTube or Netflix use recommendation systems to market their products better. By running ML algorithms on our purchase history, these platforms recommend us new, similar products (with similar names or functionality), which increases their sales.
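As a rough illustration of the idea behind such recommendation systems (the purchase data and the choice of similarity measure here are entirely made up for the sketch), items can be scored by how often the same people buy them:

```python
# Hypothetical purchase histories.
purchases = {
    "alice": {"laptop", "mouse", "keyboard"},
    "bob":   {"laptop", "mouse", "headphones"},
    "carol": {"novel", "bookmark"},
}

def buyers_of(item):
    return {user for user, items in purchases.items() if item in items}

def similarity(a, b):
    # Jaccard similarity: shared buyers relative to all buyers of either item.
    ba, bb = buyers_of(a), buyers_of(b)
    return len(ba & bb) / len(ba | bb)

def recommend(item):
    # Suggest the other item most often bought by the same people.
    others = {i for items in purchases.values() for i in items} - {item}
    return max(others, key=lambda other: similarity(item, other))

print(recommend("laptop"))  # mouse: both laptop buyers also bought a mouse
```

Real recommenders work with millions of users and far richer signals, but the core idea of scoring items by co-occurrence survives at scale.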
ML techniques help medical practitioners devise better therapeutic procedures by observing trends in patients’ vitals as they are administered a variety of treatments. They also help diagnose illnesses ahead of time in critical areas such as cancer and cardiac failure. Self-driving automobiles use deep learning models to perform a huge variety of tasks. Google DeepMind’s AlphaGo defeated world champion Lee Sedol at Go, a strategy game similar to, but far more complex than, chess. Machine learning systems are often used to decode our genome: to predict the functions of various genes, i.e., which genes are responsible for which traits. Mechatronic systems called robots, like Sophia, are built to simulate human behavior and help us in our tasks.
Smartphone cameras have evolved rapidly over time, and machine learning has a significant role to play in that. Face detection, age prediction, smile capture and gesture recognition are common features in most phones, performed by a backend trained using ML procedures. Audio and speech recognition is another popular avenue, used by personal assistant software like Google Assistant, Siri, Cortana and the software behind Amazon’s Alexa, which uses Natural Language Processing (NLP).
How are tech magnates using AI?
Companies like Google and IBM have extensively used AI for quite a while now. This is reflected in many of their products, some well known and others less so. Here are some products you may not have heard of.
Google ML Kit is a software development kit aimed mainly at iOS and Android app developers. Using it, developers can integrate machine learning models into their apps without having to go through the effort of designing the models themselves. Its many features include text recognition, face detection, barcode scanning, image labelling and landmark recognition. Besides these, one may also use the available tools to develop a custom machine learning model for integration into one’s apps.
Google Duplex is a jaw-dropping NLP and speech recognition based AI system that can convincingly mimic human conversation. At its release, Google demonstrated the system calling a salon and booking an appointment while sounding like a human making the reservation. At its heart, the system has a Recurrent Neural Network (RNN) built on TensorFlow Extended, a machine learning platform developed by Google. It combines a standard text-to-speech engine with a synthesis engine used to vary its tone of speaking. It can even pull off speech disfluencies like “um” and “hmm” by understanding when to give slow responses and when to give quick ones.
Google Photos is becoming smarter every day. This AI-driven system allows you to colorize, brighten, fix or share photos based on what it sees in them. It can detect people in your photos and share the photos with them in one click.
IBM Project Debater is a one-of-a-kind AI system designed solely to debate with humans on complex topics. It can digest massive bodies of text, construct a well-structured speech on a given topic, deliver it with clarity and purpose, and rebut its opponents. In the future, this system could help humans reason by providing compelling, evidence-based arguments while suppressing the effects of emotion, bias and ambiguity.
IBM Watson is an analytics machine that uses NLP to munge vast repositories of data and answer human questions, often in fractions of a second. The project is very significant for modern industries, as it applies machine learning techniques to the data it receives and often answers questions that we did not have answers to before. Its vast use-case space includes cyber security, predictive analysis and user-friendly applications for professionals like physicians and marketers. Though this system costs multiple millions of dollars for an industry to run internally, it is also accessible through IBM Cloud for small and midsize companies.
How can we learn and contribute?
Data science is a popular field based on statistical and probabilistic inference that extensively uses machine learning models to get insights from data that merely looking at the numbers cannot give. There are several excellent avenues on the internet to start learning and mastering machine learning.
In our institute, here are the basic courses (in suggested chronological order) that will help any enthusiast develop the required skills. (Note: some courses that are part of the IDDD Data Science program haven’t been included.)
- Probability, Statistics and Stochastic Processes (MA)
- Introduction to Data Analytics (MS)
- Pattern Recognition and Machine Learning (CS)
- Deep Learning (CS)
Going ground up, one needs a good understanding of statistics and probability theory to understand what goes on behind the scenes in these AI models. The courses offered in our institute cover these to a good degree, although new material keeps appearing as the field advances. Websites like edX, Coursera and MIT OpenCourseWare provide state-of-the-art courses on these topics.
Next, we need a programming language. The most commonly used languages for data analysis are Python and R. Platforms like DataCamp and Coursera offer primers on both. Python has one of the largest communities across the globe, which means highly optimized code for almost every application is available as packages that anyone can use in their own projects. Python is also a very versatile language, easy to learn and good for a head start. R is designed specifically for data analysis applications.
Moving forward, one must learn the basics of how machines learn. Understanding simple models such as linear regression and logistic regression often gives a good sense of the subject, since they are very intuitive and stem from basic calculus. This opens the door to studying more advanced models, which requires prerequisite knowledge of linear algebra, vector calculus, Bayesian inference and information theory. Dr Andrew Ng of Stanford University offers one of the most popular beginner-level courses on machine learning, through Stanford’s CS229. However, most people begin their journey with his free machine learning course on Coursera (taught in MATLAB/Octave).
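To see how linear regression stems from basic calculus, here is a minimal gradient-descent sketch in pure Python, fitting a line to points generated from y = 2x + 1. This is an illustration only: the learning rate and step count are arbitrary choices, and in practice one would use a library such as scikit-learn.

```python
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]  # generated from y = 2x + 1

w, b = 0.0, 0.0  # start with a blank model: y_hat = w * x + b
lr = 0.05        # learning rate
n = len(xs)

for _ in range(2000):
    # Partial derivatives of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    # Step downhill on the error surface.
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 3), round(b, 3))  # close to 2 and 1
```

The entire “learning” here is repeated application of two derivatives from first-year calculus, which is why these models make such good entry points.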
Once one has gained some knowledge of these models, it’s time to start working. Projects and competitions are the way to go for honing your skills and learning new techniques along the way. Blogs like Medium and Towards Data Science are full of articles written by experts and practitioners on using machine learning for a plethora of use cases. They explain the logic behind an algorithm, how it was developed and how it can be used in a programming language of your choice.
Kaggle is one of the most popular websites hosting data science, machine learning and deep learning competitions, partnering with various organizations around the globe. It also features a learning corner, where newcomers can pick up both basic and advanced techniques hands-on. Its discussion forums make it possible to interact with top machine learning engineers from around the world, ask questions, clear doubts and share code that does interesting things. Other popular websites include Analytics Vidhya and HackerRank.
One could also maintain a GitHub repository to store, update and share code, documents and projects authored by oneself and others.
It is immensely magical and satisfying to see new information being brought out efficiently from data, which can help mankind advance. We hope this post has shown you some paths to start your journey in machine learning and data science!
Soon enough, your computer will know it has to say 144.
Science Deconstructed is a series that aims to introduce, in simple terms, some exciting advancements made by the scientific community, with guidance on how to pursue these fields, along with feature pieces on research in the institute. Send in your requests for the same at [email protected]. Suggestions/comments are always welcome.
Series by: Sankalpa Venkatraghavan