Artificial intelligence

When algorithms make the decisions: opportunities and risks of artificial intelligence

Artificial intelligence (AI) has been attracting attention since the 1960s. The emergence of deep learning technology has further consolidated expectations of algorithms and of so-called intelligent systems in general. Today, this technology lies mainly in the hands of large US companies and is accessed via their services.
This interdisciplinary TA study evaluates the opportunities and risks of AI in five areas: the world of work, education, consumption, administration and the media. Technological and ethical aspects are considered across the board, and the effects of AI in the five selected areas of application are analysed.

Opportunities and risks

Artificial intelligence is an important driver of the digital transformation and is used in an increasing variety of areas. Because its applications are so broad, the opportunities and risks of the underlying AI technology cannot be evaluated in general terms, which means that a generic ‘AI law’ is not a viable option. Instead, it is recommended that problems and undesirable developments caused by AI be addressed within the scope of prevailing laws and ordinances, or through voluntary measures.
AI systems perform many tasks faster and often more precisely than humans, and they could assist us in conducting various activities much more efficiently than is currently the case. AI systems often make it possible to better adapt services and goods to the needs and abilities of individuals. It is in this capacity to personalise that AI holds great potential.
Many AI systems have to be trained on massive data sets to attain the required skills, and this immense need for data threatens to erode our private sphere and undermine data protection rules. When the data sets fed to an AI system contain errors, the results of its calculations are flawed as well. Another problem is that imbalanced data can produce results that are mathematically correct yet distorted, which may lead to the systematic discrimination of certain groups of people. Moreover, self-learning AI systems can develop a momentum of their own and, in certain circumstances, generate results that are opaque and incomprehensible not only to their owners but above all to the persons affected.
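
To make this bias mechanism more concrete, the following minimal sketch (in Python, using entirely hypothetical hiring data and the scikit-learn library; the variables and figures are illustrative assumptions, not taken from the study) shows how a model trained on skewed historical decisions can reproduce that skew:

    # Hypothetical illustration: a classifier trained on biased historical data
    # reproduces the bias, even though its predictions fit the data it was given.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 1000

    # Feature 0: an applicant's qualification score; feature 1: group membership (0 or 1).
    qualification = rng.normal(size=n)
    group = rng.integers(0, 2, size=n)

    # Historical decisions: qualified applicants were hired, but members of
    # group 1 were frequently rejected regardless of their qualification.
    hired = ((qualification > 0) & ~((group == 1) & (rng.random(n) < 0.7))).astype(int)

    model = LogisticRegression().fit(np.column_stack([qualification, group]), hired)

    # Two equally qualified applicants who differ only in group membership:
    candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
    print(model.predict_proba(candidates)[:, 1])  # the group 1 applicant receives a markedly lower score

Both predictions are mathematically consistent with the training data, yet the second applicant is disadvantaged purely because of group membership, which is exactly the kind of systematic discrimination described above.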

Recommendations

The use of artificial intelligence must be clearly and plainly indicated, so that affected persons know when they are interacting with an AI system rather than with a human being.
Important decisions affecting individuals must not be delegated to an AI system before the advantages and disadvantages have been considered in detail. Where relevant personal matters are concerned, the results of an AI system must be reviewed and justified by a human being.
The use of AI has consequences that go far beyond the purely technical. Persons who develop AI or work with data generated by AI should therefore be informed about the ethical and legal issues involved. Moreover, they should be ready and willing to engage in interdisciplinary collaboration with representatives of other disciplines.

Organisation

Project duration

April 2018 to March 2020

Project leaders

  • Dr Markus Christen, privatdocent, UZH Digital Society Initiative, University of Zurich
  • Dr Clemens Mader, Technology and Society, Empa
  • Johann Čas, Institute for Technology Assessment, Austrian Academy of Sciences

Supervisory group

  • Dr Jean Hennebert, president of the supervisory group, Department of Informatics, University of Fribourg, member of the TA-SWISS Steering Committee
  • Benjamin Bosshard, Federal Commission for Child and Youth Affairs
  • Sabine Brenner, Digital Switzerland Office, Federal Office of Communications BAKOM
  • Dr Christian Busch, State Secretariat for Education, Research and Innovation SBFI
  • Dr Christine Clavien, Institute for Ethics, History and the Humanities, University of Geneva
  • Daniel Egloff, State Secretariat for Education, Research and Innovation SBFI
  • Andy Fitze, SwissCognitive – The Global AI Hub
  • Matthias Holenstein, Risiko-Dialog Foundation
  • Dr Marjory Hunt, Swiss National Science Foundation SNF
  • Manuel Kugler, Swiss Academy of Engineering Sciences SATW
  • Thomas Müller, editor, Swiss Radio SRF, member of the TA-SWISS Steering Committee
  • Katharina Prelicz-Huber, VPOD trade union, member of the TA-SWISS Steering Committee
  • Prof. Ursula Sury, Lucerne University of Applied Sciences and Arts HSLU
  • Dr Stefan Vannoni, Cemsuisse, member of the TA-SWISS Steering Committee

Contact

TA-SWISS
info@ta-swiss.ch