AI glossary

Terms and applications

‘Artificial Intelligence’ (AI) is the generic term for a family of technologies that are all suited to processing unstructured data (e.g. text, audio, images, videos, and measurement data). Computers have been able to store and modify this type of data for a long time. However, special procedures are necessary for them to recognize the content of an image or the words in spoken language. AI technologies provide such solutions.

AI technologies can be divided into three fields:

  • Data Analytics and Big Data: These make it possible to handle large amounts of data. This includes procedures for assessing data quality, accessing various data formats, and deriving the key metrics that are relevant for the customer.
  • Evolutionary algorithms: These are optimization procedures in which the parameters of an algorithm are continuously adapted and improved.
  • Machine Learning: This refers to methods that can recognize patterns in data and generalize them. As a result, previously unknown input data can be handled correctly.
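As an illustration of the second field, the following sketch shows a minimal (1+1) evolution strategy in plain Python: a single parameter is repeatedly mutated, and the mutation is kept only if it improves the objective. The function names and settings are our own choices for illustration, not part of any particular library.

```python
import random

def evolve(f, x0, sigma=0.5, generations=200, seed=42):
    """Minimal (1+1) evolution strategy: mutate a single parameter and
    keep the mutation only if it improves the objective f."""
    rng = random.Random(seed)
    parent = x0
    for _ in range(generations):
        child = parent + rng.gauss(0, sigma)   # mutate the parameter
        if f(child) < f(parent):               # select the better candidate
            parent = child
    return parent

# Example: minimize f(x) = (x - 3)^2, whose minimum lies at x = 3.
best = evolve(lambda x: (x - 3) ** 2, x0=0.0)
print(best)
```

After a few hundred generations the surviving parameter value settles close to the optimum, which is the "continuous adaptation" described above in its simplest form.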

AI terms and use cases

Artificial neural networks

Computers can easily store and modify unstructured data such as image data or text. However, they cannot easily grasp the content of this data. For example, a computer can differentiate an image file from a document file, but cannot detect whether the image shows a cat or a dog. This is precisely the challenge faced by self-driving vehicles: They have to detect their surroundings using sensors to avoid colliding with obstacles. To do this, the vehicle’s systems must understand what information is contained in the data they receive from the sensors.

There is a whole group of methods to solve this problem, known as artificial neural networks (ANN). Artificial neural networks are technologies that provide a highly simplified imitation of the human brain using artificial neurons. Similar to the human brain, ANNs can be trained by means of learning procedures (machine learning). These procedures include ‘supervised learning’, ‘unsupervised learning’ and ‘reinforcement learning’.
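A single artificial neuron can be sketched in a few lines: it computes a weighted sum of its inputs plus a bias and passes the result through an activation function. The weights below are hand-picked for illustration; in practice they are found by the learning procedures named above.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of the inputs plus a bias,
    passed through a sigmoid activation function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))   # sigmoid squashes the output into (0, 1)

# Two inputs with hand-picked weights: the neuron only "fires" strongly
# (output near 1) when both inputs are active.
out = neuron([1.0, 1.0], weights=[4.0, 4.0], bias=-6.0)
print(out)
```

An artificial neural network is many such neurons connected in layers, with training adjusting the weights and biases.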

Machine learning

The biggest advantage of AI-based systems is their ability to absorb and interpret massive amounts of data and make complex decisions based on that data. Such systems are also fast and extremely flexible in terms of how they can be used. One category of artificial intelligence is ‘machine learning’ (ML). Machine learning refers to a set of dynamic algorithms that are able to learn ‘independently’ to improve their results or performance. This ‘autonomy’ in terms of problem solving and the system’s own development is based on training processes that are similar to the ways humans learn.

Unlike traditional programming, machine learning does not require fixed rules to be specified: developers do not have to define how the program should behave in every conceivable situation. Instead, the system regulates itself and learns behavior patterns. This makes such systems highly flexible and efficient.

One method of machine learning is ‘Deep Learning’. This method is based on the use of artificial neural networks with numerous intermediate layers between input and output (called hidden layers). The extensive internal structure means these networks are able to independently improve the capabilities of the algorithm. In other words: deep learning allows machines to learn.

These self-optimizing processes are what make deep learning so special. Deep learning methods are particularly suitable for applications that rely on large amounts of data to recognize patterns and form models.
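The layered structure described above can be sketched as a forward pass through a tiny network with one hidden layer. This is only the inference step under assumed, hand-picked weights; training (adjusting the weights from data) is what the learning procedures in the following sections address.

```python
import math

def dense(inputs, weights, biases):
    """One fully connected layer: each neuron computes a weighted sum of
    all inputs plus its bias, followed by a sigmoid activation."""
    sigmoid = lambda z: 1 / (1 + math.exp(-z))
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# Forward pass through a 2-3-1 network: input -> hidden layer -> output.
x = [0.5, -1.0]
hidden = dense(x, weights=[[1.0, -2.0], [0.5, 0.5], [-1.0, 1.0]],
               biases=[0.0, 0.1, -0.1])
output = dense(hidden, weights=[[1.0, -1.0, 0.5]], biases=[0.2])
print(output)
```

"Deep" networks simply stack many such hidden layers between input and output.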

Unsupervised learning

Unsupervised learning algorithms seek to recognize patterns in existing input data. Using a clustering procedure, the data is sorted according to characteristics and differentiated based on these characteristics. A less common application for unsupervised learning is dimensionality reduction. The algorithms are used to reduce the number of dimensions (parameters) or properties of data sets, and thus also the complexity of the underlying model. This is possible because the parameters or properties of the data sets usually correlate with each other and as such provide redundant information.
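The clustering procedure mentioned above can be illustrated with a minimal k-means implementation on one-dimensional data: points are assigned to their nearest center, and each center is then moved to the mean of its assigned points. This is a bare sketch of the idea, not a production implementation.

```python
def kmeans_1d(points, centers, iterations=10):
    """Minimal k-means on 1-D data: assign each point to its nearest
    center, then move each center to the mean of its assigned points."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two obvious groups around 1 and 10; the algorithm receives no labels,
# yet the centers converge to the two groups on its own.
data = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0]
found = sorted(kmeans_1d(data, centers=[0.0, 5.0]))
print(found)
```

Note that no "correct answers" are provided anywhere: the structure is discovered from the data alone, which is exactly what distinguishes unsupervised from supervised learning.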

Supervised learning

Supervised learning algorithms are designed to create associations. To do this, artificial neural networks are trained with the help of a ‘supervisor’, which provides the correct output value for a given input value. The input values come from a ‘training set’ that is smaller than the total amount of data available. During the training process, the artificial neural network learns to assign the correct output values to arbitrary input values. Such algorithms are used, for example, to classify data and/or to build models using regression. The latter includes ‘predictive modeling’: Based on the input data, predictions can be made about how the data might develop over time.

Supervised learning requires manual preparation in that classes or categories have to be defined and, if necessary, the data labeled so that the learning can be carried out.
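Regression, mentioned above as a typical supervised task, can be shown in its simplest form: fitting a line y = a·x + b to labeled (x, y) pairs by ordinary least squares. The training data here is invented for illustration; the point is that every input comes with a correct output supplied by the "supervisor".

```python
def fit_line(xs, ys):
    """Supervised learning in its simplest form: fit y = a*x + b by
    ordinary least squares from labeled (x, y) training pairs."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Training set: the 'supervisor' provides the correct output for each input.
# The underlying rule that generated the labels is y = 2x + 1.
a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(a, b)
```

The fitted model can then predict outputs for inputs it has never seen, which is the basis of the ‘predictive modeling’ mentioned above.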

Reinforcement learning

Reinforcement learning refers to algorithms that can learn appropriate behavior for different situations. This is done by punishing and rewarding the agent, i.e. the system with the learning component. If the agent is not successful in a certain situation, learning and adaptation are triggered. Repeated attempts and adaptation allow the agent to find a solution for the situation at hand.

Reinforcement learning is widely used for current topics such as autonomous driving and similar real-time decision problems (for example AI gaming).
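The reward-and-punishment loop described above can be sketched with tabular Q-learning (one standard reinforcement learning algorithm, chosen here for illustration) on a toy corridor: the agent starts at the left, is rewarded only for reaching the rightmost state, and gradually learns to move right. Environment, parameters, and names are our own assumptions.

```python
import random

def q_learn(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a corridor of states 0..n_states-1.
    The agent receives a reward (+1) only for reaching the rightmost
    state; trial, error, and the reward signal shape its behavior."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]   # Q[state][action]; 0=left, 1=right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Mostly exploit the best known action, sometimes explore at random.
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda a: q[s][a])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Update rule: nudge Q toward reward plus discounted future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learn()
policy = ['right' if qs[1] > qs[0] else 'left' for qs in q[:-1]]
print(policy)
```

After training, the greedy policy in every state is to move right toward the rewarded goal; the same trial-and-error principle, scaled up, underlies applications such as AI gaming and autonomous driving.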
