Joint Diagnoses by Humans and AI
Suppose that a software program presents a serious diagnosis, like cancer, without providing any rationale for the decision. Would humans trust a machine’s judgement in such a case? “Machine learning processes help make diagnoses. But if their decisions are not comprehensible to doctors and patients, they have to be taken with a grain of salt and might even have to be ignored in sensitive fields like medicine,” says Dr. Ute Schmid, Professor of Cognitive Systems at the University of Bamberg. Since September 2018, Schmid’s research team has been involved in an interdisciplinary, multi-institutional project that uses specific application examples to make automated diagnoses more transparent. The resulting “Transparent Medical Expert Companion” comprises two prototypes: one model uses video material to recognise pain in patients who cannot communicate their discomfort themselves and explains its classification; another prototype, currently in development, produces verifiable colon cancer diagnoses on the basis of microscopy imaging data.
A system that learns to recognise symptoms and explain diagnoses
In order to enable the software to both recognise an illness and justify its decisions, the research team has combined various computer science methods. With the aid of deep neural networks, or “deep learning,” it is possible to classify enormous volumes of imaging material. However, these processes do not provide information on how decisions are reached. Additional processes are employed to look within the deep neural network and make crucial traits comprehensible to humans. They highlight things like conspicuous areas in the intestinal tissue or use text to explain why a particular section of the tissue structure was classified as abnormal under the microscope.
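To make this concrete, the minimal sketch below shows one widely used family of explanation techniques for image classifiers: a gradient-based relevance map that highlights the pixels a network’s decision is most sensitive to. It uses PyTorch and a generic pretrained classifier purely as stand-ins; the project’s actual models, data and explanation methods are not reproduced here.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # A generic pretrained CNN as a stand-in classifier; the project's own
    # histopathology and pain-recognition models are not publicly available.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    preprocess = T.Compose([
        T.Resize(224), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def saliency_map(image_path):
        """Relevance map: |gradient of the winning class score w.r.t. each pixel|."""
        x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
        x.requires_grad_(True)
        scores = model(x)                        # class logits
        top_class = scores.argmax(dim=1).item()
        scores[0, top_class].backward()          # backpropagate the winning logit
        # Collapse colour channels: large values mark the "conspicuous areas"
        # that could be shown to the doctor as a heat map over the image.
        return x.grad.abs().max(dim=1).values.squeeze(0)

Overlaying such a map on the original microscopy image is one simple way to point out which tissue regions drove a classification; the project combines this kind of visual highlighting with textual explanations.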
Various research groups are involved in the development of the “Transparent Medical Expert Companion.” The Fraunhofer Institute for Integrated Circuits IIS in Erlangen and the Fraunhofer Heinrich Hertz Institute HHI in Berlin are developing the software on the basis of deep learning processes. The individual use cases also draw on the expertise of the University of Erlangen’s Institute of Pathology, in collaboration with Professor Arndt Hartmann, and of Dr. Stefan Lautenbacher, pain researcher and Professor of Physiological Psychology at the University of Bamberg. “This research project calls for knowledge from various fields,” explains Privatdozent and project coordinator Dr. Thomas Wittenberg of the Fraunhofer IIS. “Thanks to the interdisciplinary cooperation, it’s possible for us to develop companions for different medical experts that meet important criteria like transparency and explainability while providing sound diagnostic results.”
Transparent companions support medical work
The Bamberg team’s principal task is to program those components which coherently explain the deep neural network’s decisions. In particular, the researchers utilise what’s known as inductive logic programming. Their goal is a system that, for example, not only reports that a person is experiencing pain, but that also displays on a monitor the reasoning behind this assessment. A text presents the rationale: the patient’s eyebrows are lowered, the cheeks are raised and the eyelids are pressed together. An image indicates the relevant parts of the face with coloration and arrows. The system would also estimate the degree of certainty of its diagnosis.
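As an illustration of how such rule-based explanations can be phrased, the hypothetical sketch below applies a single hand-written rule over facial Action Units (eyebrows lowered, cheeks raised, eyelids pressed together) and turns the satisfied conditions into a textual rationale with a rough certainty estimate. The rule, thresholds and wording are invented for illustration; they are not the rules the project’s inductive logic programming system actually learns.

    # Facial Action Unit (AU) intensities for one video frame, e.g. produced
    # by an upstream recognition model (scale 0-5); values here are invented.
    AU_TEXT = {
        "AU4": "the eyebrows are lowered",
        "AU6": "the cheeks are raised",
        "AU7": "the eyelids are pressed together",
    }
    PAIN_RULE = {"AU4": 2, "AU6": 2, "AU7": 2}   # AU -> minimum intensity

    def explain_pain(frame):
        """Apply the rule and phrase the satisfied conditions as a rationale."""
        satisfied = [au for au, threshold in PAIN_RULE.items()
                     if frame.get(au, 0) >= threshold]
        if len(satisfied) < len(PAIN_RULE):
            return "No pain detected: not all conditions of the rule are met."
        certainty = min(frame[au] for au in satisfied) / 5   # crude estimate
        reasons = ", ".join(AU_TEXT[au] for au in satisfied)
        return f"Pain detected (certainty {certainty:.0%}): {reasons}."

    print(explain_pain({"AU4": 3, "AU6": 4, "AU7": 2}))
    # Pain detected (certainty 40%): the eyebrows are lowered, the cheeks
    # are raised, the eyelids are pressed together.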
“The attending doctors decide whether or not they agree with the assessment,” says University of Bamberg research assistant Bettina Finzel. “They can influence the algorithms by making amendments and corrections in the system. In this way, the software continues to learn and incorporate the experts’ invaluable knowledge.” Ultimately, responsibility lies with the person who is being assisted – not replaced – by the transparent companion. Furthermore, transparent companions can be used to help train doctors in the future. The German Federal Ministry of Education and Research has funded the project through August 2021 with a total of 1.3 million euros, of which approximately 290,000 euros have been allocated to the University of Bamberg.
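The correction loop described above might look roughly like the following hypothetical sketch: every expert override is stored as a labelled example, and the learner is re-run so that future assessments reflect the doctors’ knowledge. The class name and the trivial “re-learning” step are invented placeholders, not the project’s actual interactive learning components.

    from collections import Counter

    class TransparentCompanion:
        def __init__(self):
            self.expert_examples = []            # (features, expert_label) pairs
            self.fallback_label = "pain"

        def predict(self, features):
            # Stand-in for the real classifier plus explanation component.
            return self.fallback_label

        def add_correction(self, features, expert_label):
            """The attending doctor overrides an assessment; keep it as ground truth."""
            self.expert_examples.append((features, expert_label))
            self._relearn()

        def _relearn(self):
            # Toy step: lean toward the label the experts assign most often.
            # The real system would re-induce its rules from these examples.
            counts = Counter(label for _, label in self.expert_examples)
            self.fallback_label = counts.most_common(1)[0][0]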
Please find additional information at www.uni-bamberg.de/en/cogsys/research/projects/bmbf-project-trameexco
Image “Mikroskopie” (1.6 MB): Microscopy images of the human large intestine illustrate the new system’s functional structure.
Source: Virtual microscopy programme of Saarland University (https://mikroskopie-uds.de/)
*The images used in this prototype are taken from Saarland University’s virtual microscopy programme. They serve purely illustrative purposes and are not actually processed in this project. The images shown feature only healthy tissue.
Further information for media representatives:
Contact for content-related queries:
Prof. Dr. Ute Schmid
Cognitive Systems
Tel.: 0951/863-2860
ute.schmid(at)uni-bamberg.de
Media contact:
Patricia Achter
Press officer
Tel.: 0951/863-1146
patricia.achter(at)uni-bamberg.de