
Topic: What is Artificial Intelligence? [30 min]

1. Definition of Artificial Intelligence

Artificial intelligence is the field of computer science focused on creating programs and mechanisms that can exhibit behaviors considered intelligent. In other words, AI is the idea that "machines can think like human beings."

Typically, an AI system can analyze large amounts of data (big data), identify patterns and trends, and therefore make predictions automatically, quickly, and accurately. What matters for us is that AI makes our everyday experiences smarter, by integrating predictive analytics (we'll talk about this later) and other AI techniques into applications we use daily:

  • Siri works as a personal assistant, since it uses natural language processing
  • Facebook and Google Photos suggest tagging and grouping photos based on image recognition
  • Amazon offers product recommendations based on shopping basket models
  • Waze provides optimized traffic and navigation information in real time

The following picture shows a representation of a neural network, one of the main fields in artificial intelligence research.

2. Brief history of Artificial Intelligence

Most of us have a concept of artificial intelligence fueled by Hollywood movies: Terminators, robots with existential crises, and red and blue pills. In fact, AI has been in our imagination and in our laboratories since 1956, when a group of scientists launched the "Artificial Intelligence" research project at Dartmouth College in the United States. The term was coined there, and since then we have witnessed a roller coaster of progress ("Wow! How does Amazon know I want this book?") as well as frustrations ("this translation is completely wrong").

At the beginning of the project, the objective was to describe human intelligence so accurately that a machine could simulate it. This concept is also known as "general AI," and it is the idea that fueled the (amazing) fiction that would give us unlimited entertainment.

However, AI branched into specific fields. Over time, the science evolved into specialized areas of knowledge, and it was then that AI began to produce significant results in our lives. It was a combination of image recognition, language processing, neural networks, and automotive engineering that made the autonomous vehicle possible. The market sometimes refers to this kind of narrow progress as "weak AI."

The following table shows some important events in the history of Artificial Intelligence.

Year --> Event

  • 1842 --> Lovelace: first algorithm intended for a machine (the Analytical Engine)
  • 1950 --> Turing: the Turing test
  • 1956 --> McCarthy, Minsky, Rochester and Shannon hold the first AI conference
  • 1965 --> Weizenbaum: "ELIZA", the first conversational program (chatbot)
  • 1993 --> Horswill: "Polly" (behavior-based robotics)
  • 2005 --> TiVo: recommendation technology
  • 2011 --> Apple, Google and Microsoft: recommendations in mobile applications
  • 2013 --> Various: advances in machine learning and deep learning
  • 2016 --> Google DeepMind: AlphaGo beats Lee Sedol at the game of Go

3. Main techniques of artificial intelligence

Now that you know the definition of AI and some of its history, the best way to go deeper is to look at its main techniques, particularly the ways artificial intelligence is used in business.

Machine Learning

Machine Learning is often confused with "weak AI," and it is the field where the most important advances in AI are taking place. In practical terms, machine learning is the science of getting computers to act without being explicitly programmed. The key idea is that you can feed data to machine learning algorithms and then use them to make predictions or guide decisions.

Some examples of machine learning algorithms include decision trees, clustering algorithms, genetic algorithms, Bayesian networks, and deep learning.
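To make the "feed data, then predict" idea concrete, here is a minimal sketch in plain Python of one of the simplest possible learners, a 1-nearest-neighbor classifier. The animal data and labels below are invented for illustration, not taken from any real dataset.

```python
# A toy 1-nearest-neighbor classifier: the "learning" is simply storing
# labeled examples; a prediction is the label of the closest stored point.

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(training_data, point):
    # Return the label of the training example nearest to `point`.
    nearest = min(training_data, key=lambda example: distance(example[0], point))
    return nearest[1]

# Invented features: (weight in kg, height in cm); labels: "cat" or "dog".
training_data = [
    ((4.0, 25.0), "cat"),
    ((5.0, 28.0), "cat"),
    ((20.0, 55.0), "dog"),
    ((30.0, 60.0), "dog"),
]

print(predict(training_data, (4.5, 26.0)))   # near the cat examples
print(predict(training_data, (25.0, 58.0)))  # near the dog examples
```

Notice that no rule for telling cats from dogs was ever written down explicitly; the behavior comes entirely from the data, which is the point of the definition above.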

Deep Learning

Remember when Google announced an algorithm that found cat videos on YouTube? That was deep learning: a machine learning technique that uses neural networks (the idea that neurons can be simulated with computational units) to perform classification tasks (think of classifying an image as a cat, a dog, or a person, for example).

Some practical applications of deep learning include identifying vehicles, pedestrians, and license plates for autonomous vehicles, image recognition, translation, and natural language processing.
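To illustrate the "neurons as computational units" idea on the smallest possible scale, here is a sketch of a single artificial neuron trained by gradient descent to learn the logical AND function. The task, learning rate, and iteration count are chosen for illustration; real deep learning stacks many layers of such units.

```python
import math

# One artificial neuron (a logistic unit): weighted sum of inputs,
# squashed by the sigmoid function into a value between 0 and 1.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: inputs and target outputs for logical AND.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0  # weights and bias, starting at zero
lr = 0.5                   # learning rate

for _ in range(5000):
    for (x1, x2), target in samples:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        err = out - target        # how far the neuron's output is off
        w1 -= lr * err * x1       # nudge each weight against the error
        w2 -= lr * err * x2
        b -= lr * err

for (x1, x2), target in samples:
    out = sigmoid(w1 * x1 + w2 * x2 + b)
    print((x1, x2), "->", round(out))
```

After training, the neuron's rounded outputs match the AND truth table, even though no AND rule was programmed: the weights were adjusted from examples alone.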

Smart Data Discovery

It is the next step in Business Intelligence (BI) solutions. The idea is to fully automate the BI cycle: data ingestion and preparation, predictive analysis, and the identification of patterns and hypotheses, surfacing information that no traditional BI tool had discovered.
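A toy sketch of what "automated discovery" means in practice: instead of an analyst picking a hypothesis, the program scans every pair of numeric columns and flags the ones that move together. The column names and values below are invented for illustration.

```python
# Automatically scan all pairs of numeric columns for strong linear
# correlation, with no analyst-chosen hypothesis.

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length columns.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented dataset: three numeric columns from a hypothetical report.
columns = {
    "ad_spend": [10, 20, 30, 40, 50],
    "visits": [12, 24, 29, 43, 52],
    "temperature": [18, 30, 21, 25, 19],
}

names = list(columns)
findings = []
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r = pearson(columns[names[i]], columns[names[j]])
        if abs(r) > 0.8:  # flag only strong relationships
            findings.append((names[i], names[j], round(r, 2)))

print(findings)  # only ad_spend and visits are strongly related here
```

Production smart data discovery tools go far beyond pairwise correlation, but the principle is the same: the tool proposes the patterns, and the analyst evaluates them.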

Predictive analysis


Think about the moment when you are buying car insurance and the agent asks you a series of questions. These questions relate to the variables that influence your risk: behind them is a predictive model that estimates the probability of an accident based on your age, zip code, gender, car brand, and so on. The same principle is used in credit scoring models to identify good and bad payers. The core idea of predictive analysis (or modeling) is that you can combine a number of variables (income, zip code, age, etc.) with known outcomes (for example, good or bad payer) to build a model that produces a score, a number between 0 and 1, representing the probability of an event (payment, customer churn, an accident, etc.).
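A minimal sketch of such a scoring model: a linear combination of the applicant's variables squashed by the logistic function into a score between 0 and 1. The weights here are hand-picked for illustration, not fitted to real data; in practice they would be learned from historical outcomes.

```python
import math

# Invented scoring model for the car-insurance example in the text:
# each variable contributes to a weighted sum, and the logistic
# function maps that sum into a probability-like score in (0, 1).

def risk_score(age, income_thousands, prior_claims):
    # Illustrative weights: older age and higher income lower the score,
    # prior claims raise it. These numbers are assumptions, not data.
    z = (-0.04 * age) + (-0.02 * income_thousands) + (0.9 * prior_claims) + 1.0
    return 1.0 / (1.0 + math.exp(-z))

# A young driver with two prior claims scores as riskier than an
# older, claim-free driver.
print(round(risk_score(22, 30, 2), 3))
print(round(risk_score(55, 80, 0), 3))
```

The output is always strictly between 0 and 1, which is what lets the business read it as a probability and set a cutoff (for example, decline applicants above 0.7).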

The business use cases are broad: credit scoring models, customer segmentation (clustering) models, purchase-propensity models, and customer churn models, among others.