Klondike Glossary of Artificial Intelligence


People often picture Artificial Intelligence as robots that come to life to interact with humans. But AI is much more than that, and to understand its meaning as a whole it is first necessary to explore many related terms.

Klondike decided to create a Glossary of Artificial Intelligence to allow even “non-experts” to approach this fascinating world and better understand terms whose definitions are too often taken for granted.

The Glossary will be updated regularly, so check back to see the latest additions.

A

ALGORITHM

An algorithm is a mathematical formula, or procedure, that allows a computer to solve a given problem. In computer science it translates into a sequence of elementary operations, called instructions, that a computer can execute.

This may be a calculation, data processing or the automation of repetitive tasks. The term comes from the medieval Latin algorismus, derived from al-Khwārizmī, the name of the ninth-century mathematician Muḥammad ibn Mūsā.
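To make the idea concrete, here is a minimal Python sketch of a classic algorithm, Euclid's method for the greatest common divisor, written as a short sequence of elementary instructions a computer can execute (the function name and the example numbers are chosen purely for illustration).

```python
def greatest_common_divisor(a: int, b: int) -> int:
    """Euclid's algorithm: repeat an elementary step until the problem is solved."""
    while b != 0:
        a, b = b, a % b  # replace the pair (a, b) with (b, a mod b)
    return a

print(greatest_common_divisor(252, 105))  # prints 21
```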

 

ARTIFICIAL GENERAL INTELLIGENCE

Artificial General Intelligence (AGI) is a type of AI that possesses the ability to understand, learn, and tackle complex tasks in a manner similar to humans.

Compared to specialized Artificial Intelligence systems focused on specific tasks (Narrow AI, or ANI – Artificial Narrow Intelligence), AGI demonstrates cognitive versatility: it learns from diverse experiences, understands, and adapts to a wide range of situations without requiring specific programming for each individual task.

Although it remains far off, the ultimate goal of AGI – undoubtedly a complex undertaking – is to come as close as possible to replicating the human mind and its cognitive abilities.

 

ARTIFICIAL INTELLIGENCE

Artificial Intelligence (AI) is a field of research that aims to develop and program machines with cognitive abilities that are inspired by human learning patterns.

These software systems are able to autonomously pursue a defined goal, making decisions that are usually left to people. One current line of development is entrusting machines with complex tasks previously delegated to human beings. The term AI was coined by John McCarthy in 1956.

 

ARTIFICIAL INTELLIGENCE OF THINGS

Artificial Intelligence of Things (AIoT) is the combination of Artificial Intelligence (AI) with Internet of Things (IoT) solutions. The Internet of Things is based on the idea of “intelligent” everyday objects that are interconnected (via the Internet) and able to exchange the information they own, collect and/or process.

Thanks to this integration, AI-enabled objects can connect to the network to process data and exchange information with other objects, improving the management and analysis of huge amounts of data. Applications that integrate IoT and AI will have a radical impact on businesses and consumers. Some of the many examples? Autonomous vehicles, remote healthcare, smart office buildings, predictive maintenance.

B

BIG DATA

The term Big Data refers to the huge amount of data that companies have produced and continue to produce every day. These data can be analyzed and transformed into valuable information, allowing companies to improve their decisions and optimize process automation.

In 2001 Doug Laney, with his 3V model, described three factors that characterize Big Data:
Variety: data arrive in many different formats and from many different sources
Volume: the amount of data produced and collected is enormous
Velocity: data flow in very quickly, often in real time, and need to be used in a timely manner.
Today the picture has evolved and the model has been enriched with two more variables: Veracity (the quality and reliability of the data) and Value (the ability of the data to support more informed and timely business decisions).

Analyzing large amounts of data allows companies to make more informed decisions: describing the current and past state of their business processes, answering questions about what could happen in the future, or proposing strategic actions based on the analysis performed.

BOT

A bot, or chatbot, is one of the most popular solutions in companies: software designed to communicate in natural language with humans in order to automate particular tasks or retrieve information from databases. It is a tool that can offer 24/7 support, via text or audio, both to a company's customers and to its employees, and it lends itself to different uses across many sectors.

A bot can live inside another application such as Facebook or WhatsApp, can be integrated into websites to handle first-level call center or help desk contacts, or can automate conversations via email and SMS to provide assistance on behalf of a company or for a specific product.
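As a deliberately simplified illustration of the idea (real chatbots rely on NLP models rather than fixed keywords), the sketch below matches keywords in a user's message against canned replies; all keywords and answers are invented for the example.

```python
# Minimal keyword-based bot: maps keywords found in the user's message to canned replies.
RESPONSES = {
    "opening hours": "We are open Monday to Friday, 9:00-18:00.",
    "price": "You can find our price list on our pricing page.",
    "human": "I will forward your request to a human operator.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Sorry, I did not understand. Could you rephrase?"

print(reply("What are your opening hours?"))
print(reply("I'd like to talk to a human, please."))
```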

C

CLASSIFICATION

Classification models identify the category, or class, to which an incoming data point belongs. Starting from a set of previously labeled examples, classification models generate sets of rules that allow the class or category of future data to be predicted.

When there are only two classes (e.g. labeling an email as spam or not spam) this is called binary classification; when there are more than two classes it is called multiclass classification (e.g. determining whether an input sentence is in French, Spanish or Italian).
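A minimal sketch of binary classification, assuming the scikit-learn library is available; the tiny “spam” dataset below is invented purely for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training data: each message is labeled spam (1) or not spam (0).
messages = [
    "win a free prize now", "cheap loans click here",
    "meeting moved to 3 pm", "lunch tomorrow with the team",
]
labels = [1, 1, 0, 0]

# Turn text into word-count features, then fit a simple classifier on them.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)
model = MultinomialNB().fit(X, labels)

# Predict the class of a new, unseen message.
new = vectorizer.transform(["free prize waiting for you"])
print(model.predict(new))  # expected: [1], i.e. spam
```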

 

COMPUTER VISION

Computer Vision algorithms make it possible to analyze and understand the content of images or videos. It is not just about recognizing objects, people or animals within an image or a video, but about reconstructing the context around the image and giving it real meaning.

To function properly, Computer Vision systems need to be trained on a large number of images; this collection forms the dataset that makes the algorithm genuinely effective.

Artificial vision systems have many applications, from intelligent surveillance cameras to industrial and manufacturing applications.
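As a small, hedged example of a classic computer vision task, the sketch below uses OpenCV's pretrained Haar-cascade face detector; it assumes the opencv-python package is installed and that a local file named photo.jpg exists (both are assumptions of the example, not part of the definition above).

```python
import cv2

# Load a pretrained face detector shipped with OpenCV (a classic, pre-deep-learning approach).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# Read the image, convert it to grayscale and look for faces.
image = cv2.imread("photo.jpg")          # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(f"Detected {len(faces)} face(s)")
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)  # draw bounding boxes
cv2.imwrite("photo_with_faces.jpg", image)
```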

D

DATA MINING

Data mining is the automated process of finding information of various kinds through the analysis of large amounts of unstructured data (which can be found in databases or data banks).

Extracting this information allows computers to recognize patterns, trends and recurring schemes that can be used as a basis for decisions in areas such as marketing, economics and finance, science, and industry.

 

DATA SCIENCE

Data Science seeks to understand and analyze real-world phenomena by looking for logical correlations within data. In this way, schemes and models are developed to obtain new information that can be exploited in other areas.

Data scientists, the researchers who apply these methodologies, transform large amounts of “raw” data (Big Data) into valuable information that companies can use to improve their products or gain competitive advantages.

The field is developing rapidly alongside Big Data: many Data Science techniques have become feasible because modern computer systems can store ever larger amounts of historical data, and their computing power makes it possible to manage this data and turn it into useful information.

 

DEEP BLUE

Deep Blue was the first computer to defeat a reigning world chess champion, Garry Kasparov, in a match played under tournament time controls (1997).

Deep Blue was not an ordinary computer but a supercomputer capable of evaluating around 200 million chess positions per second. Drawing on an extensive database of played games, it stored thousands of different openings and endgames. Its computing power allowed it to evaluate possible moves and strategies many steps in advance, responding dynamically to the opponent's moves.

 

DEEPFAKE

The term deepfake refers to an artificial intelligence technique for creating synthetic content starting from real images, videos or audio recordings. Deepfake techniques make it possible to modify or recreate, in an extremely realistic way, the features and facial expressions or the voice timbre of the person depicted.

Increasingly widespread in recent years, the dissemination of deepfake material brings numerous risks: it can be used to create fake news, hoaxes and scams, or to carry out cyberbullying and other cybercrimes.

 

DEEP LEARNING

Deep Learning is a branch of Machine Learning, one of the most important and complex to understand. Deep learning techniques try to mimic the way neurons are organized in our brains.

Human learning processes are simulated through so-called ‘neural networks’ with many layers, which can solve very complex machine learning problems without the need for manually engineered features (a preparatory step that traditional Machine Learning typically requires).

E

EXPERT SYSTEMS

Expert systems artificially reproduce the performance of an expert in a given field of knowledge. After being properly instructed by a professional, such a program is able to deduce new information from a set of data and starting facts.

Expert systems can be rule-based (starting from a series of facts, the system deduces new ones following true/false logic or a cause-and-effect model) or reasoning-based (starting from a sequence of facts or decisions, the system explores possible alternatives and scenarios until a conclusion is reached).
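A minimal sketch of the rule-based flavour described above: starting from a set of known facts, simple if-then rules are applied repeatedly until no new conclusions can be drawn. The facts and rules are invented for illustration.

```python
# Each rule says: if all the premise facts are known, conclude a new fact.
rules = [
    ({"engine does not start", "battery light off"}, "battery is dead"),
    ({"battery is dead"}, "recharge or replace battery"),
]

facts = {"engine does not start", "battery light off"}  # initial observations

# Forward chaining: keep applying rules until nothing new is added.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now also contains the deduced facts
```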

F

FORECASTING ALGORITHM

A Forecasting Algorithm is a type of algorithm used to make predictions or estimates about the future based on historical patterns and trends.

In essence, these algorithms analyze past data to identify regularities that can be used to project likely future values.

These forecasting algorithms can be highly effective in anticipating future events or outcomes, enabling informed decision-making, resource planning, and risk mitigation across a wide range of sectors (economy, finance, meteorology, manufacturing, etc.).
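A minimal sketch of the idea using a simple moving-average forecast in plain Python; the sales figures are invented, and real forecasting systems typically rely on richer statistical or neural models.

```python
# Hypothetical monthly sales history.
history = [120, 132, 128, 141, 150, 158, 163]

def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    recent = series[-window:]
    return sum(recent) / len(recent)

next_month = moving_average_forecast(history)
print(f"Forecast for next month: {next_month:.1f}")  # (150 + 158 + 163) / 3 = 157.0
```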

I

IDP (INTELLIGENT DATA PROCESSING)

Intelligent Data Processing (IDP) algorithms are used to collect data, extract information from it and, on the basis of that information, initiate specific actions.

This type of AI is applied directly to both structured and unstructured data to extract relevant information, for example in systems for detecting financial fraud or in predictive analysis.

 

IMAGE PROCESSING

Image processing systems can perform various operations on images, such as enhancing an image, recognizing the people, animals and objects it contains or, more generally, extracting useful information or features from it.

Its applications range from medicine to geological data processing, as well as use cases such as assessing vehicle damage in insurance claims.

 

IMAGE RECOGNITION

Image Recognition, a subcategory of Computer Vision, is a technology that makes it possible to detect and identify places, people, objects, features and many other types of elements within an image or video.

This recognition – made possible by previously trained neural networks – can be performed to detect whether a specific element is present, or to classify an image and assign it to a category.

L

LLM

Large Language Models (LLMs) are neural networks that are highly effective at understanding and generating human language, in a manner similar to how a person would.

These models are trained on vast textual datasets collected from the web or other sources (with billions of parameters) and employ transformer neural networks to learn linguistic structures, nuances of language, and relationships between words within texts.

One of the significant advantages of these models is their ability to capture the contexts and complexities of natural language, enabling them to answer questions, complete sentences, translate texts, and perform a variety of other language-related tasks.

LLMs are built on Transformer networks (see the Transformer entry).
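As a hedged illustration, the sketch below uses the open-source Hugging Face transformers library (an assumption of the example) to load a small pretrained language model and generate a continuation of a prompt.

```python
from transformers import pipeline

# Load a small pretrained language model for text generation
# (gpt2 is used here only because it is small and freely available).
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Artificial intelligence is transforming businesses because",
    max_new_tokens=30,       # length of the generated continuation
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```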

M

MACHINE LEARNING

When we speak of Machine Learning we refer to systems able to learn from experience, with a mechanism similar (at least in appearance) to what a human being does from birth.

By analyzing large amounts of data, Machine Learning algorithms build models to explain the world and make predictions based on their experience. Such programs improve their analyses and predictions as they accumulate experience and analyze further data samples.
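A minimal sketch of “learning from experience”, assuming scikit-learn is available: a model is fitted on past examples and then used to predict a case it has never seen. The advertising and sales figures are invented.

```python
from sklearn.linear_model import LinearRegression

# Past "experience": advertising spend (in k€) and the sales observed (in k€).
X = [[10], [20], [30], [40], [50]]
y = [25, 45, 62, 82, 101]

# The model learns the relationship between spend and sales from the data...
model = LinearRegression().fit(X, y)

# ...and uses it to predict an outcome it has never seen.
print(model.predict([[60]]))  # roughly 120, extrapolating the learned trend
```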

 

MCCARTHY, JOHN

John McCarthy (Boston 1927 – Stanford 2011) is considered the father of Artificial Intelligence.

Professor of Computer Science first at the Massachusetts Institute of Technology and then at Stanford University, he carried out some of the earliest research on Artificial Intelligence, of which he is considered one of the main pioneers. He also created the programming language LISP, widely used in Artificial Intelligence to build specialized software tools.

He coined the term Artificial Intelligence in 1956, the same year as the summer conference held at Dartmouth College in the United States at which the new discipline was founded.

N

NARROW ARTIFICIAL INTELLIGENCE

Narrow Artificial Intelligence refers to specialized Artificial Intelligence systems designed for specific tasks. Unlike Artificial General Intelligence (AGI), which emulates human cognitive abilities comprehensively, Narrow AI (also called ANI, Artificial Narrow Intelligence) is designed to perform circumscribed, well-defined operations.

These systems are highly efficient in carrying out specific tasks, focusing on limited activities such as speech recognition or automatic translation, limiting their intelligence to the specific functions for which they were designed.

 

NEURAL NETWORK

Artificial neural networks are mathematical models composed of artificial neurons that are inspired by the functioning of human biological neural networks. Neural networks are now used on a daily basis and are used to solve engineering problems related to various technological fields such as computer science, electronics, simulation or other disciplines.

They are also referred to as ANNs (Artificial Neural Networks), although in recent years the simpler NN (Neural Network) has become common. In Italian, too, one speaks simply of neural networks, leaving the context to distinguish between biological and artificial networks.
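A minimal NumPy sketch of the forward pass of a tiny artificial neural network: each layer multiplies its inputs by a weight matrix, adds a bias and applies a non-linear activation. The weights here are random, so the output is meaningless; the point is only to show the structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

# A 3-input, 4-hidden-unit, 1-output network: weights and biases per layer.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(x):
    hidden = relu(x @ W1 + b1)   # the artificial "neurons" of the hidden layer
    return hidden @ W2 + b2      # output layer

print(forward(np.array([0.5, -1.2, 3.0])))
```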

 

NLP (NATURAL LANGUAGE PROCESSING)

NLP, or Natural Language Processing, refers to Artificial Intelligence (AI) algorithms able to analyze and understand natural language, that is, the language we use every day.

NLP enables communication between humans and machines. It deals with texts and sequences of words (web pages, social media posts…), but also with understanding spoken language (speech recognition). Its purposes range from simply understanding content, to translation, up to producing text autonomously from data or documents provided as input.

Although languages are constantly changing and feature idioms or expressions that are difficult to translate, NLP has numerous application areas, such as spell checkers and automatic translation systems for written text, and chatbots and voice assistants for spoken language.
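As a very small illustration of one of the first steps in many NLP pipelines, the sketch below tokenizes a sentence and builds a bag-of-words count in plain Python; real systems use far more sophisticated tokenizers and language models.

```python
import re
from collections import Counter

text = "The delivery was late, but the support team was very helpful."

# Tokenization: lowercase the text and split it into words.
tokens = re.findall(r"[a-zà-ù']+", text.lower())

# Bag-of-words: count how often each word occurs, a basic representation
# that many classic NLP models (spam filters, topic classifiers) build on.
bag_of_words = Counter(tokens)
print(bag_of_words.most_common(3))
```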

O

OCR (OPTICAL CHARACTER RECOGNITION)

Optical Character Recognition (OCR) is an area of computer vision that makes it possible to extract and reuse information contained in images of text or physical documents, detecting letters, numbers or symbols and automatically converting them into digital form.

OCR can be useful for any company that manages physical documents and has numerous applications, for example with legal documents, barcodes or banking paperwork.
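A hedged sketch of OCR in practice, assuming the pytesseract wrapper and the underlying Tesseract engine are installed and that invoice.png is a local scanned document (all assumptions of the example).

```python
from PIL import Image
import pytesseract

# Open a scanned document and let the OCR engine convert the pixels into text.
scan = Image.open("invoice.png")          # hypothetical scanned document
text = pytesseract.image_to_string(scan, lang="eng")

print(text)  # the recognized characters, now searchable and editable
```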

P

PATTERN RECOGNITION

The term pattern is used to describe a recurring model or scheme, but also to indicate repetition of behavior, actions, or situations.

Pattern recognition is the analysis and identification of patterns within raw data. These data are classified according to the knowledge already acquired or to the information extracted from the models already stored. Input data can be words or texts, images or audio files.

Pattern Recognition is useful for a multitude of applications, including image processing, speech and text recognition, optical character recognition in scanned documents such as contracts and invoices.

 

PREDICTIVE ANALYSIS

Predictive analysis involves using data, statistical algorithms and machine learning techniques to make predictions about future outcomes. Predictive models search for patterns within historical and transactional data, assessing the probability and possibility of the occurrence of certain events in the future.

By analyzing past data, companies can detect possible risks and identify potential trends to seize new opportunities.

R

RECOMMENDATION SYSTEM

Recommendation Systems are designed to suggest items and anticipate the preferences, interests and decisions of the user, based on various factors and on information provided by the user, directly or indirectly.

These systems are now a main pillar of the business model of social platforms, eCommerce and streaming services (Amazon, Netflix, Spotify, YouTube…).

The algorithms track a user's actions and, by comparing them with those of other users, learn their preferences and interests. In this way they find similarities between users and items to recommend, and the more a customer uses the platform, the more precise the suggestions become.
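A minimal NumPy sketch of the “find similar users” idea: ratings are compared with cosine similarity, and an item liked by the most similar user is suggested. The ratings matrix is invented for illustration, and real recommendation engines use far larger data and richer models.

```python
import numpy as np

# Rows = users, columns = items; values are ratings (0 = not rated).
ratings = np.array([
    [5, 4, 0, 1],   # target user
    [5, 5, 4, 1],   # user B
    [1, 0, 5, 4],   # user C
])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

target = ratings[0]
# Compare the target user with every other user.
similarities = [cosine(target, other) for other in ratings[1:]]
most_similar = ratings[1:][int(np.argmax(similarities))]

# Recommend items the similar user liked that the target has not rated yet.
recommended_items = np.where((target == 0) & (most_similar >= 4))[0]
print("Recommend item indices:", recommended_items)
```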

 

ROBOTIC PROCESS AUTOMATION (RPA)

Robotic Process Automation (RPA) covers all technologies and applications used to mimic human interaction with computer systems. Specifically, it is the automation of work processes performed using software (bots), which can automatically perform repetitive tasks and mimic human behavior.

Unlike traditional automation that relies on structured data (for example through APIs), RPA can also handle unstructured data (such as images and documents). This is possible thanks to integration with Artificial Intelligence techniques.

S

SELF-DRIVING CARS

Self-driving cars use technology to replace the human driver with systems capable of driving safely and autonomously on the road. A self-driving vehicle uses a combination of sensors, cameras, radar and AI to monitor road conditions and to travel between destinations without the need for human intervention.

Autonomous cars must pass a series of tests and receive specific authorizations before they can drive on public roads. The Society of Automotive Engineers (SAE) has established 6 levels of driving automation ranging from 0 (fully manual) to 5 (fully autonomous).

 

SENTIMENT ANALYSIS

Sentiment Analysis is a natural language processing technique (NLP) used to listen to and analyze the feelings and opinions expressed by users on social networks, forums or blogs about a product, a company or a service.

By collecting data from online content about the emotions the user has experienced in specific contexts, Sentiment Analysis focuses on polarity (positive, negative, neutral) but also on feelings, emotions (angry, happy, sad, etc.), urgency (urgent, not urgent) and intentions (interested, not interested). It is often performed to monitor customer feedback about a particular product or service, analyze their brand reputation, or understand customer needs.
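A deliberately simplified sketch of polarity scoring using a tiny hand-made lexicon; real sentiment analysis relies on trained language models, and the words and weights here are invented for illustration.

```python
# Tiny sentiment lexicon: positive words score +1, negative words score -1.
LEXICON = {"great": 1, "love": 1, "fast": 1, "terrible": -1, "slow": -1, "broken": -1}

def polarity(review: str) -> str:
    score = sum(LEXICON.get(word, 0) for word in review.lower().split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("Great product, I love it"))                       # positive
print(polarity("Delivery was slow and the box arrived broken"))   # negative
```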

 

SYNTHETIC DATA

Synthetic data are data generated artificially using generative machine learning algorithms. Starting from real datasets, a new dataset is generated that retains the same statistical properties as the original while not containing any of the real records.

Synthesis makes it possible to anonymize data and to generate it according to user-specified parameters, so that it stays as close as possible to data acquired from real-world scenarios.
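A minimal NumPy sketch of the principle: the statistical properties of a “real” column (here, invented ages) are estimated and a new synthetic column is sampled that mimics them without reusing any original record. Production tools use far richer generative models.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Real" data: customer ages (invented for the example).
real_ages = np.array([23, 31, 35, 29, 41, 38, 27, 33, 45, 30])

# Estimate the statistical properties of the real column...
mean, std = real_ages.mean(), real_ages.std()

# ...and generate a synthetic column with a similar distribution,
# containing none of the original records.
synthetic_ages = rng.normal(mean, std, size=1000).round()

print(f"real mean={mean:.1f}, synthetic mean={synthetic_ages.mean():.1f}")
```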

 

SPEECH RECOGNITION

Speech Recognition is a capability that allows a computer to understand spoken human language and convert it into written text or other data formats. Thanks to Artificial Intelligence, this technology is now able to handle not only standard language but also nuances such as accents, dialects and different languages.

Voice recognition makes it possible to automate tasks that would otherwise require repetitive manual commands: voice-enabled chatbots, call routing in contact centers, dictation and voice transcription solutions, and voice controls for PCs, mobile devices and in-car systems.

T

TURING TEST

A test developed by the English scientist Alan Turing in the 1950s that assesses a machine's ability to imitate human behavior, and thereby evaluates the presence or absence of “human” intelligence in a machine.

The test, also known as the “Imitation Game”, involves a judge sitting at a terminal through which they can communicate with two entities: a human and a computer. If the judge cannot distinguish the human from the machine, the computer has passed the test and can be called “intelligent”.

 

TRANSFORMER

Transformer neural networks are a type of neural network introduced in 2017 by Google in the paper “Attention Is All You Need.” This architecture has become one of the most widely used models in natural language processing (NLP) and other applications.

Transformer neural networks are based on attention, a mechanism that allows the network to learn relationships between different parts of an input, such as words and phrases. This makes them effective in handling relationships between words or linguistic units within a text.
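A minimal NumPy sketch of the attention mechanism at the heart of the Transformer: each position computes scaled dot-product scores against every other position and uses them as weights to mix the value vectors. The matrices are random, just to show the shapes and the formula.

```python
import numpy as np

rng = np.random.default_rng(0)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted mix of the values

# 4 tokens, each represented by an 8-dimensional vector.
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))

print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```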

Transformer networks are particularly well-suited for Natural Language Processing (NLP) tasks such as automatic translation, text generation, natural language classification, and more.

 

