Top 5 Applications of AI Systems: Exemplary Use Cases in Various Fields

AI-supported systems already accompany you constantly in everyday life – so omnipresent that it is hard to imagine life without them, and so inconspicuous that they are hardly noticed anymore (Wahlster and Winterhalter, 2020, p. 11). In fact, you have probably already used several different types of AI before leaving the house on a normal workday: unlocking your smartphone with fingerprint or facial recognition, checking the chance of rain in a weather app, or checking the morning traffic situation and planning your route before departure – all of these activities rely on AI.

The use of AI is also increasing steadily and rapidly in an industrial context. Many companies already use it to optimize processes, increase productivity, or analyze and support decision-making. The great strength of AI is that, in contrast to processes that follow fixed patterns, it is capable of learning, so its results can be continuously improved. Personalized offers for customers, for example, can be generated ever more easily and quickly, which in turn increases productivity and ultimately sales.

In the following, selected fields of application of AI-supported systems – both current and in development – will be briefly presented. The selection makes no claim to completeness or comprehensive depth.

Medicine

The use of AI is becoming increasingly important in medicine and offers an ever greater variety of possible applications. Using machine learning, AI can recognize patterns and trends in large amounts of data and evaluate them in a standardized manner, enabling faster and more accurate diagnostics. For example, it can efficiently and reliably detect whether a CT image shows signs of pneumonia or bleeding in the brain. Risk factors for serious illnesses such as dementia can be identified early in order to offer targeted preventive measures or therapies. AI also helps develop personalized therapies by using patient data to identify the best treatment options for each individual (Herrmann et al., 2021).
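As a rough illustration of what such pattern recognition involves, the sketch below trains a tiny convolutional network to sort CT slices into "findings" versus "no findings". The model, the data and the labels are all invented for the example (it uses PyTorch and random tensors); a real diagnostic system would require curated clinical data, far larger models and rigorous validation.

```python
# Minimal sketch (not a clinical tool): a tiny CNN that classifies CT slices
# as "findings" vs. "no findings". All data here is synthetic.
import torch
import torch.nn as nn

class TinyCTClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * 16 * 16, 2)  # two classes: findings / none

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))

model = TinyCTClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 4 grayscale 64x64 "slices" with random labels.
images = torch.randn(4, 1, 64, 64)
labels = torch.randint(0, 2, (4,))

for _ in range(5):               # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```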

However, in an area as socially relevant and as sensitive for individual citizens as medicine, it should be borne in mind that high data protection requirements apply to the use of AI, especially when sensitive patient data could be passed on to unauthorized third parties (Brautzsch, 2021). Another problem is that the algorithmic evaluation processes of AI-based applications, in their current form, provide no clues as to how a result was reached. Especially in the medical context, however, it is fundamentally important to make a finding and the associated treatment decisions transparent, because ultimately the treating doctors still bear responsibility – not the algorithms (Meißner, 2021).

Deepfakes

AI can also be used to design or modify images and videos. This makes it possible to create content that shows events that never happened, or at least not in the form presented. Such forgeries, generated with deep neural networks, are known as deepfakes. At the moment, creating convincing deepfakes is still comparatively complex, but as the technology advances it will become ever easier to produce fake videos that are indistinguishable from real footage.

Personalization of user content

Much of the information we receive every day is personalized through the AI-based collection and analysis of our user data. This applies, among other things, to content on social media, online advertising, music and film recommendations on streaming platforms and search engine results.

This mass collection of data about our interests and the application of algorithms to that data has intensified a phenomenon known as the filter bubble. Filter bubbles arise, among other things, because users engage with less and less varied content over time – and therefore with fewer perspectives. This applies not only to news, but to information in general. Even with identical search terms, one person's online search results can differ fundamentally from another's due to the influence of algorithms, depending on where they live, what they do online and what their interests are. In the long run, this narrows one's own perception. It is problematic for the social fabric because parts of a society can develop very different ideas of reality depending on what content they consume. Tolerance towards people whose opinions differ from one's own also continues to decline.
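The dynamic behind this narrowing can be illustrated with a deliberately simplified recommender: it serves whatever topic a user has clicked most often, and every click reinforces that profile. The topic list and the small exploration rate below are made up for the example; real recommendation systems are far more complex, but the feedback effect is the same in kind.

```python
# Illustrative sketch of the feedback loop behind filter bubbles: a naive
# recommender that almost always serves the topic a user clicked most often.
import random
from collections import Counter

topics = ["politics", "sports", "science", "culture", "technology"]
clicks = Counter({t: 1 for t in topics})   # start with a balanced history

def recommend(history: Counter, explore: float = 0.0) -> str:
    """Mostly recommend the user's top topic; occasionally explore."""
    if random.random() < explore:
        return random.choice(topics)
    return history.most_common(1)[0][0]

random.seed(0)
for _ in range(200):
    topic = recommend(clicks, explore=0.05)
    clicks[topic] += 1                     # engagement reinforces the profile

print(clicks)  # one topic dominates; the variety of content shrinks over time
```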

Personnel selection

More and more companies are using AI-based applications to decide which applicants are invited to an interview (Schlupeck, 2019). At first glance, it seems sensible to use automated data analysis to make application processes more objective and efficient, for example by training AI software to evaluate certain qualifications positively or negatively.

However, one-sided training data can lead to the adoption of unwanted or unconscious patterns. This distortion, known as algorithmic bias, is then systematically applied by the AI and skews the result of the evaluation. For example, if female applicants have previously been rejected more often than their male competitors, the AI will adopt this pattern and may sort out women before the interview – not because of a lack of qualifications, but because of one-sided training data. At the moment, defining target variables is the biggest hurdle in designing a fair AI-based application process: people continue to influence what counts as the "best" application by defining the criteria used to analyze a data set. In other words, AI-based systems can only be as good as their human trainers and therefore sometimes reflect – intentionally or unintentionally – certain values or prejudices.
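A small, entirely synthetic example can make this mechanism concrete: if the historical "invited to interview" labels were influenced by gender, a model trained on them assigns lower scores to female applicants even when qualifications are identical. The data, coefficients and feature encoding below are invented for illustration (using scikit-learn); they are not drawn from any real hiring data set.

```python
# Toy demonstration of algorithmic bias: the (synthetic) training labels
# reflect past decisions that disadvantaged one group, and the model
# reproduces that pattern even for equally qualified candidates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000
qualification = rng.normal(0, 1, n)      # identical distribution for everyone
is_female = rng.integers(0, 2, n)        # 1 = female, 0 = male (encoded feature)

# Historical "invited to interview" decisions: qualification matters,
# but female applicants were systematically invited less often.
logits = 1.5 * qualification - 1.2 * is_female
invited = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X = np.column_stack([qualification, is_female])
model = LogisticRegression().fit(X, invited)

# Two candidates with the same qualification, differing only by gender:
same_quality = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(same_quality)[:, 1])  # noticeably lower score for the woman
```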

Predictive policing

Predictive policing is a method of preventive police work that uses computer models and algorithms to predict future crime patterns and direct police presence to where it appears to be needed most. The technology is based on historical crime data and is intended to help reduce crime rates and make police operations more efficient. The approach is already widely used, especially in the USA, but it is highly controversial, particularly with regard to maintaining or increasing the structural disadvantage of people of color.

As in the other application areas described here, the data basis and the way it is evaluated by the AI are often problematic. Predictive policing can lead to a kind of vicious circle: policing certain areas on the basis of existing data analysis can increase the amount of crime recorded in those areas. This higher recorded crime rate is then in turn interpreted by the predictive policing software as evidence of higher crime in these areas, leading to even more surveillance and control. Assessing a person's future risk of re-offending with "recidivism algorithms" is also problematic, because such tools are error-prone and systematically disadvantage the African-American population in particular (Zweig, 2019).
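This feedback loop can be reproduced with a deliberately minimal simulation: two districts have identical true crime rates, but patrols are always sent to the district with the most recorded incidents, and crime is only recorded where patrols are present. The numbers and the greedy allocation rule are invented for illustration; real predictive policing systems are more elaborate, but the self-reinforcing pattern is of the same kind.

```python
# Minimal sketch of the vicious circle: records attract patrols, and
# patrols produce records, so an early imbalance never corrects itself.
import numpy as np

rng = np.random.default_rng(7)
true_rate = np.array([0.5, 0.5])           # two districts, identical true crime rates
recorded = np.array([1.0, 0.0])            # one early recorded incident in district 0

for day in range(365):
    target = np.argmax(recorded)           # patrol goes where records are highest
    if rng.random() < true_rate[target]:   # crime is only *recorded* where patrols are
        recorded[target] += 1

print(recorded)  # roughly [180, 0]: district 0 appears to be the sole hotspot
```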
