
Introduction to Artificial Intelligence and Machine Learning: Part 1


Some people may remember a catchy ad from the 1980s - “This is not your father’s Oldsmobile” - meant to attract a new generation to a modernized car line. Out with the old, in with the new. To borrow the phrase, this is not your father’s A.I.! Artificial Intelligence, or A.I., is no longer relegated to science fiction and film, like HAL, the domineering computer in 2001: A Space Odyssey, or Robbie, the life-saving nursemaid robot in Asimov’s I, Robot. A.I. is part of our everyday lives, right now.


If you've ever purchased something out of the ordinary with a credit card, or bought something at an unusual time or place, you may have had your credit card shut off. In this instance, A.I. identified a pattern characteristic of fraud - something that didn’t fit your normal behavior - and made a decision. A.I. is also the brain behind the scenes in your social media. It adjusts your feeds, suggests possible friends, selects ads, and more, based on what it thinks you want and how it expects you to respond. A.I. helps shape decisions you perceive as independent and highly personal.


Artificial intelligence uses multiple technologies, including machine learning algorithms, and is being developed for multiple use cases, including automation, autonomy, and advanced analytical techniques for software solutions. Computer science defines A.I. research as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. A more elaborate definition characterizes A.I. as a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.


Looking at the overlapping Venn diagram created by Nisarg Dave, first focus on the big, blue circle. That's computer science. Inside computer science there is a circle for artificial intelligence, inside that a circle for machine learning, and inside that, another circle for deep learning. Outside artificial intelligence sit databases, and then data mining, which crosses multiple boundaries: A.I., machine learning, deep learning, databases, and cloud computing. Data science encompasses all of those areas. And if you come from a statistics or mathematics background, you will see both of those fields cross those same circles of computer science, artificial intelligence, machine learning, and deep learning. This graphical representation highlights how deeply A.I. is interwoven with modern technology.


A.I. became a named discipline in the 1950s, although work had really begun several years prior. A.I. includes a component known as machine learning, which is the study of computer algorithms that improve automatically through experience. It is further defined as techniques that give computers the ability to learn without being explicitly programmed to do so. These algorithms build mathematical models based on sample data, known as training data, in order to make predictions or decisions. Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision. Deep learning is a subset of machine learning that makes the computation of multi-layer neural nets feasible. The credit card shutoff described earlier is an example of neural nets at work: they learn the patterns that characterize fraud, match your transactions against those patterns, and trigger the decision to turn off your card.
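To make "building a model from training data" concrete, here is a minimal, hypothetical sketch of a single artificial neuron (the building block of the neural nets just mentioned) learning to flag unusual transactions. The features, data values, and function names are all invented for illustration; real fraud models use far more features and many layers of such units.

```python
# Toy sketch: one artificial neuron learning from labeled training data.
# All feature values below are invented for illustration only.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights from (features, label) training examples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Predict 1 (fraud) if the weighted sum crosses zero.
            pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = y - pred
            # Nudge the weights toward the correct answer.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

# Features: (purchase amount in $1000s, distance from home in 1000s of miles)
train_x = [(0.05, 0.01), (0.02, 0.0), (3.0, 2.5), (4.0, 3.0)]
train_y = [0, 0, 1, 1]  # 0 = normal, 1 = fraud

w, b = train_perceptron(train_x, train_y)
print(predict(w, b, (0.03, 0.02)))  # small, local purchase -> 0 (normal)
print(predict(w, b, (3.5, 2.8)))    # large, distant purchase -> 1 (fraud)
```

The key idea is that no fraud rule was ever written by hand; the weights that separate normal from fraudulent behavior emerge from the training examples, which is exactly the "learning without being explicitly programmed" described above.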


Machine learning is typically broken down into three sub-categories: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning algorithms find patterns in both the input and output data of a training set, and then predict outputs for non-training data based on the learned patterns. In the credit card example, the algorithm can be fed input data describing credit card use behaviors along with output labels indicating whether each behavior resulted in fraud. In supervised learning, the training data comes with clearly defined outputs; the user essentially defines the patterns for the algorithm to follow. Unsupervised learning algorithms find patterns based solely on the input data. These can be used when you are not sure what to look for: the machine identifies the patterns in the data and provides them to the user. Reinforcement learning algorithms use positive or negative rewards to achieve certain goals.
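The unsupervised case above can be sketched with a tiny clustering example: grouping transaction amounts into two clusters with no labels at all, so the "typical" and "unusual" groups emerge purely from the input data. This is a simplified, hypothetical illustration using a two-cluster k-means on invented numbers, not a production technique.

```python
# Toy sketch of unsupervised learning: a two-cluster, one-dimensional
# k-means. No labels are provided; the groups emerge from the data alone.
# The amounts below are invented for illustration only.

def kmeans_1d(values, iters=10):
    """Split a list of numbers into two clusters by iterative averaging."""
    centers = [min(values), max(values)]  # crude initial guesses
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            # Assign each point to its nearest center.
            nearest = 0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
            groups[nearest].append(v)
        # Move each center to the mean of its group.
        centers = [sum(g) / len(g) for g in groups]
    return centers, groups

amounts = [12, 15, 9, 14, 480, 520, 505]  # everyday buys vs. big outliers
centers, groups = kmeans_1d(amounts)
print(sorted(groups[0]))  # the "typical" cluster: [9, 12, 14, 15]
print(sorted(groups[1]))  # the "unusual" cluster: [480, 505, 520]
```

Notice that nothing told the algorithm which purchases were unusual; the separation between everyday and outlier amounts is exactly the kind of pattern an unsupervised method surfaces on its own.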


If you are an operations research analyst, most of the techniques listed in the chart above are quite familiar: decision trees, naive Bayes classification, random forest, linear regression, and logistic regression. For someone with experience in statistics, many of these techniques will likewise be familiar. Even if you have little knowledge of the subject, you are, more than likely, a frequent and dedicated beneficiary of these techniques and concepts. A.I. is deeply rooted in your daily life and activities, even if not so obvious as robot maids and sentient spacecraft computers. Advancements in the field are made daily, and what is unimaginable now may soon become commonplace.
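For readers who want to see one of those familiar techniques in miniature, here is simple linear regression fit with the textbook least-squares formulas. The data points are invented for illustration; in practice you would reach for a statistics package rather than hand-rolled formulas.

```python
# Minimal sketch of simple linear regression via least squares.
# The data points below are invented for illustration only.

def fit_line(xs, ys):
    """Return the slope and intercept minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # roughly y = 2x
slope, intercept = fit_line(xs, ys)
print(round(slope, 2), round(intercept, 2))  # close to 2 and 0
```

The same "fit parameters to minimize error on training data" idea scales up, through logistic regression and on to the deep neural nets discussed earlier; only the model's flexibility changes.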


Please join Renee for the next phase of this A.I. journey in the upcoming Part II, A.I. Techniques, Platforms & Requirements.

 


Renee Carlucci is our Principal Operations Research Analyst here at CANA Advisors. You can reach her at rcarlucci@canallc.com.
