AI now powers many of the programs you use on a daily basis, from your email spam filter to facial recognition in photos on social media. As a result, it has become quite a popular topic, and data scientists have found themselves in high demand.
Machine learning is a subcategory of AI that deals with programs that can complete a task without being explicitly programmed to do so. Ideally, and in its most evolved form, the program can analyze a dataset and actively adjust its initial algorithm to maximize success, based on the experience, instructions, and observations it accumulates while working with that data. The data is therefore central to the process and to the success of machine learning. Unsurprisingly, machine learning is set to have a huge impact because of its applicability and effectiveness.
It's helpful to look through the history of machine learning and AI over the last 60 years to gain a better understanding of how fast this field has evolved in such a short time.
A Short History of AI and Machine Learning
1950 | Alan Turing Creates the "Turing Test". Turing developed a test used to measure a machine's ability to exhibit behavior indistinguishable from a human's.
1957 | Frank Rosenblatt Designs the First Neural Network for Computers. Rosenblatt's program simulated the thought process of the human brain.
1981 | Explanation Based Learning is Born. Gerald Dejong introduces the concept of Explanation Based Learning (EBL), in which a computer analyses training data and creates a general rule it can follow by discarding unimportant data.
1990s | Data-centric Programs. During this decade, scientists began focusing more on data-driven approaches as they attempted to build programs that could analyze large amounts of data and eventually draw helpful conclusions.
2006 | Deep Learning. Computer scientist Geoffrey Hinton coined the term "deep learning" to describe recent machine learning algorithms that enable computers to distinguish objects and text in images and videos, essentially teaching the program how to "see."
- Source: Forbes
What Is Machine Learning?
Arthur Samuel defined machine learning in 1959 as "the field of study that gives computers the ability to learn without being programmed." Essentially, the computer learns through experience or teaching how to accomplish a task, rather than a developer programming the computer to accomplish it. To advance his theory, Samuel created a program that was taught to play checkers. Because the program could complete hundreds of games of checkers faster than a human could, it was able to adjust its strategy to incorporate learned knowledge about the game. Eventually, the program learned to play checkers better than most humans.
Supervised and Unsupervised Learning Algorithms, and Reinforcement Learning
Machine learning algorithms are typically understood in terms of the different ways in which learning occurs. Supervised learning, unsupervised learning, and reinforcement learning are all ways of defining how the program will interpret data.
First, let's take a look at supervised learning, which tends to be the most commonly used kind of machine learning algorithm. With supervised learning, the output has already been determined. For example, if you want to know the chances of your clients repurchasing your services, you'll most likely use a historical dataset to create future predictions, and your outcome is defined. What remains to be determined is the algorithm that will get you from input to output. Since the output is clear, if the results from the program do not match the desired outcome, the programmer can adjust the algorithm to guide the model through the analysis of the data.
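The repurchase example can be sketched in a few lines of Python. Everything below is invented for illustration: each labeled example pairs two made-up client features (months as a client, support tickets filed) with a known outcome, and a small hand-rolled logistic regression fits itself to those known outputs, which is exactly what makes this "supervised."

```python
import math

# Toy historical data (invented): [months_as_client, support_tickets] -> repurchased?
X = [[24, 1], [3, 8], [18, 2], [2, 9], [30, 0], [5, 7]]
y = [1, 0, 1, 0, 1, 0]  # the known outputs that supervise the training

def predict(weights, bias, features):
    """Logistic regression: squash a weighted sum into a 0-1 probability."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    z = max(-30.0, min(30.0, z))  # clamp to keep math.exp well-behaved
    return 1 / (1 + math.exp(-z))

# Train with plain gradient descent against the known labels.
weights, bias, lr = [0.0, 0.0], 0.0, 0.05
for _ in range(2000):
    for features, label in zip(X, y):
        error = predict(weights, bias, features) - label
        bias -= lr * error
        weights = [w - lr * error * f for w, f in zip(weights, features)]

# A long-term client should score high; a short-term one with many tickets, low.
print(predict(weights, bias, [20, 1]) > 0.5)  # loyal client: likely to repurchase
print(predict(weights, bias, [2, 9]) > 0.5)   # unhappy client: unlikely
```

Because the correct outputs are known in advance, we can check the model against them and keep adjusting (the learning rate, the features, the number of passes) until predictions line up, which is the feedback loop the paragraph above describes.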
Unsupervised learning algorithms are used when you don't yet know the output you need from the dataset. Hence, the program is tasked with finding patterns in the data. Developers typically accomplish this by using a training dataset. Then, based on the results produced by the model, more data can be added or parameters can be adjusted to guide the machine toward an end result that is useful to the organization or team. The challenge with this approach is that the output might not be useful, and the process of adjusting the algorithm can be long and tedious.
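A minimal illustration of this pattern is clustering. The spending figures below are invented; the point is that the program receives no labels at all, only the raw numbers, and a small k-means loop discovers the two groups on its own:

```python
# Pure-Python k-means sketch: group 1-D customer spending figures (invented)
# into two clusters without being told any "correct" answer.
data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]

# Start with two guessed centers, then alternate assign/update steps.
centers = [0.0, 5.0]
for _ in range(10):
    clusters = [[], []]
    for x in data:
        # Assign each point to its nearest center.
        nearest = min(range(2), key=lambda i: abs(x - centers[i]))
        clusters[nearest].append(x)
    # Move each center to the mean of its assigned points.
    centers = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]

print(sorted(round(c, 2) for c in centers))  # → [1.0, 9.07]
```

The algorithm surfaces a pattern (low spenders vs. high spenders) that nobody specified in advance. Whether that pattern is actually useful to the business is a separate question, which is precisely the challenge noted above.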
Then, there is reinforcement learning, which contains components of both supervised and unsupervised learning algorithms; however, it doesn't completely conform to either. Reinforcement learning happens when a machine is given a goal or correct outcome and is expected to monitor and adjust its actions and experiences to reach that reward. So, it's not supervised learning, because there is no defined set of data, and it's not unsupervised, because the machine is given a reward or end point. However, reinforcement learning usually requires immense amounts of data produced by the machine's own experience before it begins "learning" the decisions that bring it closer to the reward, which equates to a continuously running simulation in order to achieve the desired results. With that said, the field is growing exponentially, and in 2016, AlphaGo beat a human at the incredibly complex game of Go.
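This trial-and-error loop can be sketched with tabular Q-learning, a classic reinforcement learning algorithm. The corridor environment below is a toy of our own construction: the agent is never shown labeled examples, only a reward signal at the goal state, yet after enough simulated episodes it learns which action to prefer in every state.

```python
import random

random.seed(0)  # make the simulated experience reproducible

# Toy corridor: states 0..4, reward 1.0 only upon reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                     # step left or step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(200):                   # episodes of pure trial and error
    state = 0
    while state != GOAL:
        # Explore occasionally (or on ties); otherwise take the best-known action.
        if random.random() < epsilon or Q[state][0] == Q[state][1]:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[state][i])
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Q-update: nudge the estimate toward reward plus discounted future value.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# After training, the greedy policy should be "move right" in every state.
policy = ["right" if Q[s][1] > Q[s][0] else "left" for s in range(GOAL)]
print(policy)
```

Note how the data here is generated by the agent's own experience inside the simulation, not supplied up front, and how many episodes it takes even for a five-state world. That is the "immense amounts of data" problem at miniature scale.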
How Is Machine Learning Used Today?
Machine learning can be used to create predictive algorithms for processing big data, build recommendation platforms, identify security risks, implement natural language processing, guide autonomous cars, support medical research and disease prediction, and so much more.
If you've ever asked Siri a question or picked a movie recommendation from Netflix, you've actively interacted with services that utilize machine learning to serve you better. Pandora uses a sophisticated machine learning recommendation engine to help you discover new music based on your preferences. The engine incorporates information from users and "human curators," then applies machine learning algorithms to refine and declutter the results, enabling users to engage with a beautiful symphony of valuable recommendations.
Machine learning is exploding in usefulness and the possibilities at this stage are endless.