
AI in business - putting an end to the daydreaming

This article is part of the dossier:

Artificial Intelligence

Is artificial intelligence changing education? How will we teach and learn in the future? What skills will tomorrow's AI world require? What opportunities and risks does the technology hold for higher education?

Artificial intelligence and "big data" as the Answer to problems of our time? We have to wake up from this dream, says Melanie Vogel. In this blog post, she goes straight to the point of commenting on the idea of ​​solutionism and explaining why we need more realism about data and AI.


For several years now, an idea called "solutionism" has been drifting from Silicon Valley into the German corporate landscape. The idea behind it is that there is a technical solution for every pressing problem of our time. Its central promise is that "big data" and AI not only show us the way to the problems, but at the same time also provide the solution for how these problems can be solved. The result would be a brave new world in which the great problems of mankind - reduced to data and algorithms - leave the chaotic, unpredictable sphere of natural laws and become manageable. On closer inspection, however, the basic idea of solutionism may be chasing a chimera ...

The problem of our obsession with data

At least since the invention of Google, data has become digital gold. It is an almost priceless raw material, which is why a certain data obsession has spread through the global economy (and thus also in Germany). And this is exactly what the idea of solutionism is built on: the unconditional belief that, with enough data, the economy can resolve many complex aspects of life - and the inefficiency of areas of life and individuals that goes with them. It is about nothing more, but also nothing less, than a data-driven transparency that enables companies to build platforms, to network infrastructures, and to control and regulate everyday life in all its facets. At first glance, this trend seems sensible and understandable: fewer frictional losses in data transfer, more transparency, for example in the fight against terrorism, or faster tracing of infection chains. Behind it, however, lie three small but decisive errors in reasoning:

  1. All data collected by machine is already out of date by the time it is written to a database. This is important for the subsequent interpretation of the data: it never depicts reality as such, only ever one reality, which can never be omnipresent. For many, this distinction may seem irrelevant. It is not irrelevant for the narrative to which we have to subordinate the collected data at some point. No matter how much data "big data" generates, it is and will remain incomplete and reductionist, because human perception is also incomplete and reductionist.
  2. Intuitive knowledge about the use and interpretation of data and algorithms is as good as non-existent, because there is no extensive experience with it and also no comprehensive interdisciplinary discourse on how data can be read and interpreted. Different fields of knowledge have learned different approaches to interpreting data. It is a delusion to believe that solutionism could deliver just one result, one technical solution, when in truth there are infinitely many ways of coping with chaotic human problems.
  3. Big data creates a (supposed) factual situation that suggests a problem that may not even exist, or that cannot be solved because the (supposed) factual situation is in reality incomplete. What we get instead is an artificially created problem paradox that can drag a previously merely complex state into chaos. The technological hubris that arises from this rests on a lack of awareness that "big data" only ever arises under laboratory conditions. Life itself, however, is chaotic, and not everything can be abstracted into data and simplified in this way as if there were no alternative.

AI in Germany - between aspiration and reality

In 2017, Bitkom surveyed what Germans expect from AI. Acceptance seemed high, because people saw meaningful uses for AI technologies in many areas of life.

  • 83% were sure that AI could help reduce traffic jams. Today, three years later, we can say that it was not AI that reduced traffic jams for months, but a virus. The truck queues that have built up at the German borders in recent weeks, and the kilometers-long lines of tankers waiting off the largest ports of the Western world, could not be solved by AI, because "big data" had not even predicted the problem. Interestingly, the people responsible at the borders, in the ports and in politics did not predict these problems either, although they should have been obvious. Where supply chains are interrupted and destroyed, and where a global patchwork of quarantine regulations makes entry and delivery traffic difficult, smooth processes can no longer take place - AI or no AI.
  • In 2017, 68% of those surveyed were certain that administrative tasks could be completed more quickly with AI. Today we know: Germany is not there yet. The republic's public health offices still rely on faxes and manual, analog work, taking however long that takes, and just last week we learned that the Bonn tax office has, believe it or not, exactly one fax machine (written out as a number: 1 fax machine), which delays the processing of urgent business matters in a grossly negligent manner. A scanner connected to a secure e-mail program would be a real quantum leap into the 21st century - one that the financial authorities are currently unable to manage.
  • In 2017, 57% saw great opportunities for AI to improve diagnoses in healthcare. It would be nice if that were the case. Whether it is the Corona app, the tracing of infection chains or the ordering of vaccines - everywhere it is evident that the AI optimism was premature. It is not primarily the AI that fails, but the people who are supposed to develop, use and trust the AI systems that already exist.
  • Finally, as the 2017 Bitkom survey tells us, 21% of those surveyed were sure that completely new things could be created in the fields of art and culture with the help of AI. Today we know: the possibilities would probably be there - only human imagination and collective acceptance are missing. The culture and events industry is dying a quiet but certain death. And why does it fail in many companies? Because of fear of technology, a lack of imagination to fill digital spaces with human life, and the reluctance of many corporate IT departments to release platforms for use. True to the motto "zoom in and be happy", other digital channels are not only not provided - their use is strictly prohibited.

In a nutshell, one unfortunately has to conclude that, at least in Germany, "digital readiness" is a nice idea, but in reality it is accompanied by implementation dramas which in most cases have nothing to do with AI and "big data", but with a general misunderstanding of ethically and economically sensible fields of application. But perhaps that is precisely an advantage that can now be put to use.

AI - out of the daydreams

Expectations of AI have not been met in many areas, especially over the last twelve months. Chasing the chimeras of Silicon Valley and indulging in solutionism makes little sense for Germany anyway, because the Central European cultural area is not America, and transhumanism (to which solutionism can be assigned) does not correspond to humanism, the basis of our canon of values. I do not see this gap as a disadvantage, however - on the contrary, it is an advantage that we could confidently use.

Based on the values of humanism - especially Humboldt's ideal of education - it would be time to free AI from the utopian and at times misanthropic daydreams of Silicon Valley and to choose a "European" or even a "German path to AI". The foundations for this are already anchored in our culture. Humboldt's ideal of education implies a holistic approach - not only in the education and training of people, but also in their ability to think and act holistically. Research and teaching should go hand in hand, just as - according to Humboldt - the person as a whole should be "in himself and regardless of his particular profession, a good, decent, enlightened person and citizen according to his status". So how could this basic idea be transferred to AI and big data, giving future technological developments an ethical-cultural grounding that has so far been almost completely missing globally?

We need a reform of the cognitive process

We have to recognize - and our Humboldtian cultural heritage gives us this access almost effortlessly - that we need an understanding of the (new) sources of error into which AI and big data tempt us. We do not have to give up collecting data and finding technological solutions to problems at all, but the way this data is interpreted must change: away from an exclusively economic approach (which carries a fundamental and dangerous "data servitude") and towards an ethical-economic approach. Four steps would be necessary for this, and they should not only find their way into colleges and universities across all departments, but also urgently need to be established as a technical-ethical standard in the corporate world.

If the process chain begins with the collection of big data, then a problem or a theory should be put on record in a second step. Is there actually a real problem behind the collected data? If so, is this problem relevant, and for whom? What narrative does the data tell? Who interprets it, how and why? In the third step, a consistent comparison with reality should take place. What human needs would a possible solution satisfy? What are the options for implementation? Which path is the most sensible, the most ethical and the most sustainable? Which actions and attitudes have to be examined ethically and morally if AI processes are developed from certain data? Only in the fourth step does the focus turn to the technology, which in turn must provide a permanent (so to speak ethical) comparison of its data with reality.
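To make the sequence of these four steps more tangible, here is a minimal sketch in Python - my own illustration, not part of the original argument - that models the chain as a gated pipeline in which technology development only begins once a real, relevant problem and a comparison with reality have been established. All names (ProblemStatement, RealityCheck, develop_ai_solution) are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class ProblemStatement:
    # Step 2: before anything is built, a problem or theory is put on record.
    description: str
    is_real: bool             # is there actually a real problem behind the data?
    relevant_for: list[str]   # for whom is it relevant?
    narrative: str            # what story does the data tell, and who reads it?


@dataclass
class RealityCheck:
    # Step 3: a consistent comparison with reality.
    human_needs_served: list[str]
    feasible: bool
    preferred_path: str       # the most sensible, ethical and sustainable option


def develop_ai_solution(raw_data: list[dict],
                        problem: ProblemStatement,
                        check: RealityCheck) -> str:
    """Step 4: only now does the technology come into focus, and only if the
    earlier steps have been passed. Here it just returns a status string."""
    # Gate after step 2: no real, relevant problem means no project.
    if not problem.is_real or not problem.relevant_for:
        return "stopped: no real, relevant problem behind the collected data"
    # Gate after step 3: the solution must serve demonstrable human needs.
    if not check.human_needs_served or not check.feasible:
        return "stopped: the idea fails the comparison with reality"
    # Step 4 proper: build the system, with a permanent ethical comparison of
    # its data with reality attached (represented here only as a reminder).
    return (f"build solution for '{problem.description}' "
            f"with continuous reality monitoring on {len(raw_data)} records")


# Step 1: machine-collected data - by the time it is stored it is already a
# snapshot of the past, which is exactly why steps 2 to 4 are needed.
data = [{"case": i} for i in range(100)]
problem = ProblemStatement("trace infection chains", True,
                           ["health offices"], "case numbers over time")
check = RealityCheck(["faster warnings for contacts"], True,
                     "a decentralised, privacy-preserving app")
print(develop_ai_solution(data, problem, check))
```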

Between the technological processes there are thus analog-creative processes, interdisciplinary discourses and humanistic-holistic considerations - what we in Germany know as technology assessment, which until now has mostly been confined to institutions and has not been consistently extended to individuals and entrepreneurial organizations. But it is precisely this paradigm shift that has to take place when it comes to sustainable technology development - something the German-speaking world in particular has always stood for and should continue to stand for in the future.

"The machine can only do what we know to tell it to do," said Ada Lovelace. It therefore takes "critical thinking" at all levels of society - Critical thinking when it comes to the use of Big Data and AI in order to identify abuse and inhumane developments at an early stage and to take countermeasures.

This text is licensed under the Creative Commons Attribution-ShareAlike 4.0 International license (CC BY-SA 4.0). When reusing it, please name the author and the University Forum Digitization as the source.

Melanie Vogel, a three-time award-winning innovator, has been a passionate entrepreneur since 1998. The award-winning "Futability®" concept she developed is her answer to VUCA - a world of radical change. As a VUCA expert, she makes people fit for a world of permanent change and provides mental rejuvenation. As a business philosopher and innovation coach, she accompanies holistic company transformations. The author of multiple books, she also writes regularly as a specialist author for the publications "PersonalEntwickeln" (German Business Service) and "Fundamentals of Further Education" (Luchterhand-Verlag).

www.WirtschaftsPhilosoph.in | www.VogelPerspektiven.gmbh

Laura Wittmann is a student assistant at the University Forum Digitization, where she supports public relations. She holds a bachelor's degree in Social and Business Communication from the Berlin University of the Arts and is now studying for a master's degree in Language - Media - Society at the European University Viadrina.