Digitalization, IoT and Artificial Intelligence

Updated: Apr 26

The digital wave is sweeping across the world, not only the industrialized countries but also developing countries. It seems that digitalization is reducing underdevelopment and driving progress around the world.

A second wave is following behind: programmable calculating machines are being followed by self-learning machines that are able to solve problems on their own. Artificial intelligence is still in its infancy, but it is already apparent that it, too, will change the world we live in.

There is hardly a term that stands more for technological progress and the expected upheavals in the economy and society than the term Internet of Things (IoT).

It was first used by Kevin Ashton, who, however, reduced it very strongly to the automatic identifiability of objects (Ashton, 2009). In the meantime, the perspective has broadened, so that it can now rightly be said that the IoT will be a key driver and integral part of the digital transformation in the coming years.

The Internet of Things is designed to:

  • Expand the range of possible uses and applications of otherwise unintelligent or less intelligent objects,

  • Enable innovative applications and digital services for users (both consumers and producers),

  • Conserve resources through more efficient use,

  • Make existing business models more efficient or generate new business models, and

  • Increase the productivity of economic sectors and increase user satisfaction.

Possible applications and uses of the IoT can be found in the private as well as the public and industrial-commercial sectors:

  • Smart home and smart meters for energy management,

  • Smart city concepts,

  • E-health and e-care in the medical and healthcare sectors,

  • Smart security for improving security in private and public environments,

  • Smart mobility systems and autonomous driving,

  • Smart supply chains,

  • Intelligent manufacturing and logistics systems, and

  • Intelligent maintenance systems.

The transformation of everyday objects into IoT objects consists of adding smart components (processors, sensors, communication technology) to these objects. Object-specific data can be stored on a microchip and/or forwarded via a communication interface. After the data has been transmitted over the communication channel (WLAN, Internet) to other communication partners (other objects, instances, cloud storage, etc.) in the communication system, it can be evaluated there.
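The flow described above can be sketched in a few lines of Python: a smart object packages an object-specific measurement, serializes it for transmission, and a communication partner evaluates it. All names and fields here are illustrative assumptions, not a standard or a specific product's API.

```python
import json
import time

def make_reading(device_id, sensor, value, unit):
    """Package one sensor measurement as an object-specific record."""
    return {
        "device_id": device_id,    # identifies the IoT object
        "sensor": sensor,
        "value": value,
        "unit": unit,
        "timestamp": time.time(),  # when the measurement was taken
    }

def to_payload(reading):
    """Serialize the record for transmission over WLAN/Internet."""
    return json.dumps(reading)

def evaluate(payload):
    """A communication partner (e.g. a cloud service) parses and checks it."""
    reading = json.loads(payload)
    return reading["value"] > 30.0  # e.g. flag an over-temperature condition

payload = to_payload(make_reading("meter-01", "temperature", 31.5, "°C"))
print(evaluate(payload))  # → True
```

In a real deployment the payload would travel over a transport such as MQTT or HTTP rather than a local function call, but the separation of roles (object, channel, evaluating partner) stays the same.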

New IoT software applications:

New markets are developing for higher-value services in the IoT sector (Smart Security, Smart Mobility, Smart Healthcare, global maintenance systems in the industrial sector, etc.), combined with innovative business models.

These include business models such as differentiated sharing solutions, pay-per-use concepts, agile microservices, and the sale of usage licenses with various services instead of product purchases.

They also include services that work with temporary, virtual teams of employees without fixed structures.

High-quality IoT applications require complex evaluation and processing programs.

Smart objects generate large amounts of data, which should be analyzed and used in real time wherever possible. Big data analysis requires the implementation of a large number of new, complex methods and algorithms.
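As a minimal sketch of what "analyzing data in real time" can mean, the following snippet applies one simple streaming technique: a sliding-window average over incoming sensor values that flags outliers as they arrive. The window size and threshold are arbitrary choices for the example, not recommended settings.

```python
from collections import deque

def stream_anomalies(values, window=5, threshold=2.0):
    """Flag values that deviate strongly from the recent sliding-window mean."""
    recent = deque(maxlen=window)  # keeps only the last `window` readings
    flagged = []
    for v in values:
        if len(recent) == window:
            mean = sum(recent) / window
            if abs(v - mean) > threshold:  # deviates strongly from recent past
                flagged.append(v)
        recent.append(v)
    return flagged

readings = [20.1, 20.3, 20.2, 20.4, 20.2, 27.9, 20.3, 20.2]
print(stream_anomalies(readings))  # → [27.9]
```

Production streaming systems use far more sophisticated methods, but the core constraint is the same: each value is processed once, as it arrives, with bounded memory.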

In Conclusion

The Internet of Things is a key driver and component of the future digitization of the economy and society. The IoT extends the Internet as a global communications network by integrating smart products and objects (things) as additional "participants" or "communication partners." They are able to collect an immense volume of diverse data and make it available in the network for analysis. The objects and their condition can also be addressed and controlled from anywhere via the communications network. This opens up new, innovative services and business models.

Artificial Intelligence

Often, what we think of as artificial intelligence is merely the ability to process large amounts of data quickly and effectively, such as when computers play chess or Go.

Such capabilities, which are not based on an understanding of biological intelligence, can be referred to as silicon intelligence, since they are closely linked to the capabilities of silicon-based information technology.

What is intelligence?

We do not yet understand what exactly intelligence entails. We do not know which partial abilities combine, and in what way, to enable humans and animals to behave intelligently. Many scientific disciplines are involved in the study of natural intelligence: psychology, neuroscience, behavioral biology, educational science, and philosophy. Each of these disciplines produces important insights. And yet, putting these fragments together does not yet produce a consistent overall picture.

As early as the 1980s, researchers in the field of intelligence came to the conclusion that it is relatively easy to teach computers supposedly high cognitive performance, such as playing chess. It is incomparably more difficult to give them abilities that are natural for a three-year-old child: perception, mobility, and manual manipulation of the environment. In other words, what humans do automatically and without conscious effort is the real challenge for artificial intelligence.

Examples include the program AlphaGo (Google), which defeated the world champion in the board game Go; IBM Watson, a computer program that could probably beat any human in the game Jeopardy! (it is not clear whether Watson could also win against a human who has access to large amounts of data, as Watson does); and, last but not least, Deep Blue, the program that defeated the world chess champion as early as 1997.

AI is an umbrella term for the scientific study of artificial intelligence in the broadest sense. Machine learning and large parts of robotics are subfields of AI. Although the field is very diverse, there are a number of underlying paradigms through which scientists have attempted to create artificial intelligence.

The early days of AI focused on symbol processing: intelligence was understood as the manipulation of formulas. Later, room arose for new concepts, such as connectionism. In this context, attempts were made to reproduce the assumed basic functioning of the human brain with the help of so-called neural networks in the computer.

But here, too, research stagnated and fundamental objections were raised. Neural networks then remained a niche technology for decades, until a few years ago they were brought back into the spotlight by Deep Learning. Other paradigms in AI include Bayesian probability theory (named after the English mathematician Thomas Bayes) and statistics. Both methods are still widely used. There are a variety of other approaches, such as evolutionary algorithms, which attempt to recreate evolution in the computer.
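The Bayesian paradigm mentioned above can be illustrated with a tiny worked example: updating a belief in light of new data using Bayes' rule. The scenario and numbers below are invented purely for illustration.

```python
def posterior(prior, likelihood, likelihood_alt):
    """P(H | D) from P(H), P(D | H) and P(D | not H) via Bayes' rule."""
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

# Belief that a machine is faulty: prior 1%. A warning signal appears
# 90% of the time on faulty machines and 5% of the time otherwise.
p = posterior(prior=0.01, likelihood=0.90, likelihood_alt=0.05)
print(round(p, 3))  # → 0.154
```

Even strong evidence only raises the posterior to about 15% here, because the prior is so low; this interplay between prior belief and observed data is the core of the Bayesian approach.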

Machine Learning (ML)

ML is a sub-discipline of AI and is primarily concerned with the development of learning methods. These methods are used to solve various problems. Currently, Deep Learning is of outstanding importance, but methods of statistical learning are also still attracting a great deal of attention.

Deep Learning is a further development of connectionism, i.e., machine learning with artificial neural networks. For several years, research on deep neural networks has yielded results in several application domains (image recognition, speech recognition, automatic translation of texts). This significant progress has sparked great enthusiasm in a variety of AI-related research fields. Many scientists believe that Deep Learning will massively accelerate artificial intelligence. Some even believe that Deep Learning is the only key technology needed to understand artificial intelligence.

These advances have been made possible in large part by the availability of large computing capacities and growing amounts of data. The neural networks on which Deep Learning is based were already developed in the middle of the last century. However, only today's computing capacities make it possible to use much larger neural networks.

Deep neural networks are trained with the aid of very large amounts of data. They are trained by strengthening or weakening the connections between the nodes, along which intermediate results of the calculation are forwarded. The large number of connections in deep neural networks necessitates the large amount of data.
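The idea of strengthening or weakening connections during training can be shown on the smallest possible scale: a single artificial neuron whose connection weights are adjusted step by step against the observed error. The task (learning the logical OR function) and all parameter values are chosen only for illustration; real deep networks have many layers and millions of connections.

```python
import math
import random

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]  # the "connections"
bias = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def forward(x):
    """Forward an intermediate result along the weighted connections."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-s))  # sigmoid activation

for _ in range(2000):  # training loop over the data
    for x, target in data:
        out = forward(x)
        error = out - target
        # Strengthen or weaken each connection against the error (gradient step)
        for i in range(2):
            weights[i] -= 0.5 * error * out * (1 - out) * x[i]
        bias -= 0.5 * error * out * (1 - out)

print([round(forward(x)) for x, _ in data])  # → [0, 1, 1, 1]
```

After training, the rounded outputs match the OR targets. The same principle, repeated across many layers and driven by large datasets, is what "training a deep neural network" refers to in the text above.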

Application fields of artificial intelligence

We will now look at fields of application in which artificial intelligence could become important in the near future. Where large amounts of data are already available that can also be interpreted by humans, silicon intelligence and machine learning will bring major changes, because in these fields of application computers are already surpassing the capabilities of humans.

In addition to the areas already mentioned (image recognition, speech recognition, and translation), there are applications in the legal sector and in medical image analysis. Computers are capable of routinely identifying tumors in imaging data more reliably than humans can. The same is true for complex legal contracts, which computers can draw up more conclusively and faster than lawyers. Examples of other areas of use are industrial manufacturing and logistics. In manufacturing, it is already common practice to adapt production lines to the capabilities of robots. There is an economically viable, incremental path for bringing targeted innovations from artificial intelligence into application.

The same is true for logistics. With increasing Internet trade, the distribution of goods is centralized at large logistics centers. But these can only meet the increasing demand if the work still largely carried out by humans today (in particular picking, i.e., the compilation of an individual order from the warehouse stock) is automated. As in manufacturing, this automation can take place gradually.

Manufacturing and logistics are therefore areas in which early applications of artificial intelligence are likely.

Sources: Weißbuch Digitale Plattformen: Digitale Ordnungspolitik für Wachstum, Innovation, Wettbewerb und Teilhabe.
