AI@BA: Artificial Intelligence in Bamberg (WT 2022/2023)
General Information
- Joint seminar with communication science and psychology.
- This seminar takes place only in winter term 2022/2023.
- It is open to bachelor's and master's students (BA AI, CitH).
- You must have passed our course Introduction to Artificial Intelligence (AI-KI-B) in order to participate in the seminar.
- There will be a limit of 10 students from computer science (BA AI, CitH).
- The course language is German by default.
- You have to apply for this seminar via a central application procedure. You can find more information in the respective virtual campus course.
- Participants should sign up for the course in the virtual campus.
- You can find more administrative information in UniVis (this link leads to the version for computer science students).
Topic: Artificial Intelligence in Bamberg
Artificial Intelligence in Bamberg? Exactly: based on the book "Rebooting AI: Building Artificial Intelligence We Can Trust" by Gary Marcus and Ernest Davis, we want to address the topic of artificial intelligence from different professional perspectives. How does AI work? How far along are we really? What role do the media play in representing AI? And how can trust in artificial intelligence be built? The seminar is interdisciplinary and open to students of computer science, communication science, and psychology from the 3rd semester onwards. Together we will work on the topic, discuss it in working groups, and take it into the city: the goal is to create an AI trail through Bamberg.
Learning objectives include:
- science communication,
- critical examination of the potentials and limits of artificial intelligence,
- communication of a vision of future AI (What should AI be like in the future?),
- artistic and graphical representation of the book's statements,
- justified trust in technology and artificial intelligence.
A German-language description of the topic can be found in UniVis.
Recommended Reading / Links / Topics:
- Marcus, G., & Davis, E. (2019). Rebooting AI: Building Artificial Intelligence We Can Trust. Vintage.
- Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138–52160.
- Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.
- Russell, S. (2022). Artificial Intelligence and the Problem of Control. In Perspectives on Digital Humanism (pp. 19–24).
- Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1, 206–215. doi.org/10.1038/s42256-019-0048-x
- Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. (2018 preprint available on arXiv.)
- Muggleton, S. H., Schmid, U., Zeller, C., Tamaddoni-Nezhad, A., & Besold, T. (2018). Ultra-Strong Machine Learning: Comprehensibility of programs learned with ILP. Machine Learning, 107(7), 1119–1140.
More literature can be found in the virtual campus.