Note: 1 ECTS = 25 to 30 h of student workload; 1 h (Semesterwochenstunde) = 15 academic hours per semester;
UG = Undergraduate, G = Graduate, PG = Postgraduate


Mini Course Medical AI (2021)

Artificial intelligence (AI) is increasingly penetrating the health field. There is no doubt that medical AI will change workflows in the future, but two properties are required to achieve trustworthy AI: robustness and explainability. Robust AI solutions must be able to handle inaccuracies and missing or incorrect information, and must be able to explain to a medical expert both the outcome and the process by which an algorithm reached a certain result. Consequently, future medical AI solutions must be technically robust, ethically responsible, and legally compliant. This Mini Course on Medical AI is an introduction to this core area of health informatics. Students learn the necessary basics and develop a sensitivity to these topics. Andreas Holzinger has taught this course in different versions, variations and durations, and at various institutions, since 2005. In four modules, students get a crash introduction to the success of statistical machine learning, the difference between data, information & knowledge, and some basics ranging from decision making to causability.


185.A83 Machine Learning for Health Informatics (3 ECTS, 2h, G, Class of 2021)

Health is evolving into a data-driven science. Health AI works to use machine learning methods to solve problems in the health and life sciences. This master’s course takes a research-centered teaching approach. Topics covered include methods for combining human intelligence and machine intelligence to support medical decision making. Since 2018, the European General Data Protection Regulation has explicitly provided for a legal “right to explanation”, and the EU Parliament recently adopted a resolution on “explainable AI” as part of the European Digitization Initiative. This calls for transformative solutions that enable medical experts to understand, replicate and comprehend machine results. The focus is on making machine decisions transparent, comprehensible and interpretable for experts, enabling them to understand the context, explore the underlying explanatory factors, and answer the question of why a particular machine decision was made. In addition, explainable AI should enable a healthcare professional to ask counterfactual (“what if?”) questions in order to gain new insights. Ultimately, such approaches foster confidence in future solutions from artificial intelligence – which will inevitably enter everyday medical practice.
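The counterfactual “what if?” question described above can be illustrated with a minimal sketch. The toy risk model, its threshold, and the patient features below are entirely made up for illustration and are not part of the course material:

```python
# Illustrative counterfactual probing: would the prediction change if one
# feature were different? The model is a hypothetical toy risk score.

def toy_risk_model(features):
    """Classify as 'high risk' if a simple weighted score exceeds a threshold."""
    score = 2.0 * features["systolic_bp"] / 120 + 1.5 * features["age"] / 50
    return "high risk" if score > 4.0 else "low risk"

def what_if(model, features, feature_name, new_value):
    """Answer a counterfactual question: prediction before vs. after the change."""
    baseline = model(features)
    counterfactual = dict(features, **{feature_name: new_value})
    return baseline, model(counterfactual)

patient = {"systolic_bp": 160, "age": 60}
before, after = what_if(toy_risk_model, patient, "systolic_bp", 120)
print(before, "->", after)  # high risk -> low risk
```

The contrast between the two predictions is exactly the kind of explanation a human expert can reason about: "had the blood pressure been normal, the decision would have been different."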


706.046 AK HCAI Mini-Projects (4,5 ECTS, 5h, G, Class of 2021)

In this research-based course, you will work with your tutors in mini-groups on concrete mini-projects on current topics in the field of Human-Centered AI. You will learn how to apply scientific methods and practical implementation in experimental software engineering. In this course, software engineering is understood as a dynamic, interactive and cooperative process that allows an optimal mix of standardization and individual solutions. The general motto of the course: science tests crazy ideas, engineering puts those ideas into practice. A general interest in the interface between human intelligence and artificial intelligence is expected. We are particularly interested in the issues of tractability, interpretability, and explainability that enable human experts to understand machine decisions, so that it becomes understandable why an algorithm produced a particular result. One goal of the course is to raise awareness of ethically and legally responsible AI and transparent, interpretable, and verifiable machine learning.


706.996 Human-Centered AI Research Seminar (compulsory 5 ECTS)

This course is for doctoral students and advanced master’s students of computer science. It is listed as LV 706.996 (5 ECTS, winter semester) and LV 706.998 (5 ECTS, summer semester) as well as LV 706.997 (2h, winter semester) and LV 706.999 (2h, summer semester). The course is mandatory. The goal is to help PhD and master’s candidates improve their strategies for conducting research in the scientific domain in the broad context of Artificial Intelligence. A particular focus is on ethical, social, legal, and privacy aspects of human-centered AI. Beyond current trends and successes of machine learning, potential threats, dangers, and adversarial effects are discussed. Our goal is to enable ethically responsible, accountable machine learning to empower humans-in-control of AI and align it with human values, privacy, security, and safety. Above all, students learn the tools of scientific research. We speak Python. Download the Course Syllabus (pdf, 108 kB).


185.A83 Machine Learning for Health Informatics (class of 2020) Start: 10th March 2020 (3 ECTS, 2 h, G)

This course follows a research-based teaching approach. Topics include methods for combining human intelligence with machine intelligence for medical decision support. Health is increasingly developing into a data science; consequently, robust medical AI solutions are needed to enable ethically responsible machine learning, so that humans and computers can jointly make the best possible medical decisions. The new General Data Protection Regulation of the European Union explicitly includes a “right to explanation”. The EU Parliament recently passed a resolution on “explainable AI”. This course focuses on making machine decisions transparent, comprehensible and thus interpretable for a medical expert. A future requirement will be to enable medical experts to understand the context and the underlying explanatory factors of why a certain machine decision was made, as well as to ask counterfactual questions, such as “what if” questions, in human-AI dialogue systems.

Course page: https://human-centered.ai/machine-learning-for-health-informatics-class-2020/


Seminar Explainable AI 2019

This course consists of 10 modules: Module 0 is voluntary, for students who need a refresher on probability and information; Module 9 is mandatory, on “Ethical, Legal and Social Issues of Explainable AI”, where we deal with bias, fairness and trust in machine learning, which is very important for ensuring robustness; Modules 1 to 8 are adaptable to the individual needs and requirements of the class and deal with methodological aspects of explainable AI and interpretable ML in a research-based teaching (RBT) style. This seminar has grown organically from various courses on interactive machine learning and decision support. We speak Python, and experiment with Kandinsky-Patterns, our “Swiss Army knife” for the study of explainable AI (watch this video to get an idea). [Syllabus xAI 2019, pdf, 85kB]
Course Homepage: https://human-centered.ai/seminar-explainable-ai-2019/
GitHub page: https://github.com/human-centered-ai-lab/cla-Seminar-explainable-AI-2019


ECML-PKDD Tutorial 2019: From interactive Machine Learning to Explainable AI

Explainability, fairness and robustness are the three magical components for successful applications of medical artificial intelligence (“medical AI”) in the future. In this tutorial we learn some methodological basics of “explainable AI”, “interpretable machine learning” and “ethically responsible machine learning”. Moreover, we will explore the necessity of answering the question “what is a good explanation?” with causability measures (see our System Causability Scale, SCS) and of determining “to whom do we need to explain”. This basic knowledge is needed for designing, developing, testing and evaluating future human‐AI interfaces, including Q/A systems and dialog systems. Hands-on, we work with an open experimental explanation environment [1], the so‐called KandinskyPatterns, which can be used to experiment in the broader field of xAI.
[1] Project Homepage: https://human‐centered.ai/project/kandinsky‐patterns


185.A83 Machine Learning for Health Informatics (class of 2019) Start: 12th March 2019 (3 ECTS, 2 h, G)

In this course we cover current topics of data-driven AI/machine learning for medicine and health. The focus is on explainability/transparency, fairness/bias and robustness/trust. We follow a human-centered AI approach and integrate ethical, legal, psychological and sociological issues into the design of interpretable, verifiable algorithms for decision support. The goal is to enable human domain experts (e.g. medical doctors) to retrace results on demand and to understand the underlying explanatory factors (causality) of why an AI decision has been made, paving the way for ethically responsible AI and transparent, verifiable machine learning for decision support. [Syllabus, pdf, 77 kB]

Course homepage: https://human-centered.ai/machine-learning-for-health-informatics-class-2019


706.315 Selected Topics on interactive Knowledge Discovery: From explainable AI to causability (class of 2019, for PhDs only, 2 h PG)
This course is an extension of last year’s “Methods of explainable AI”, which was a natural offspring of the interactive machine learning (iML) courses (glass-box approaches) and the medical decision making courses held over the last years by Andreas Holzinger. This is relevant because today the most successful AI/machine learning models, e.g. deep learning (see the difference AI-ML-DL), are often considered to be “black boxes”, making it difficult to understand the results, to replicate and re-enact them on demand, to verify the plausibility of results, and to answer the question of why a certain machine decision has been reached. Addressing this is an important step towards fairness in machine learning: understanding, reducing and avoiding bias.

706.046 AK HCI - Intelligent User Interfaces - Towards explainable-AI (class 2019) Start: 11th March 2019 (5 ECTS, 5 h, G)

Explainable AI (xAI) is actually an old field that is nowadays gaining renewed interest from science, industry and society. xAI is relevant for various application domains, particularly for medicine and health. Prestigious international academic institutions, e.g. Carnegie Mellon and MIT, but also companies such as Google, are emphasizing the importance of a more human-centered AI and the necessity to design effective and efficient human-AI interfaces, e.g. conversational interfaces. The goal is to make AI/ML (see definition) results – the “machine decisions” – re-traceable, and thus interpretable and understandable, for a domain expert on demand. One aspect includes contextual adaptive dialogue systems (so-called “explanation interfaces”), where human-understandable natural language (NLP) plays a central role for future question-answering (Q/A) dialogs.

Course Homepage: https://human-centered.ai/intelligent-user-interfaces-2019/


706.998/999 Master and PhD (Diplomandinnen und Dissertantinnen) Seminar (class of 2019, 3 ECTS, 2h UG, G)

This course is for master’s and particularly doctoral students of the Holzinger group; the seminar is compulsory. The aim is to help the master’s/doctoral candidates improve their research strategies along with their communication and presentation abilities within their scientific field. A particular focus is given to ethical, social, legal and privacy aspects of human-centered AI, discussing not only current trends and successes of machine learning but also potential threats, dangers and adversarial effects. The students primarily learn the tools of the trade, the “Handwerkszeug”, of scientific research. At the same time, the students will be sensitized to issues of fairness and trust in modern AI/machine learning in medical AI.

Course Homepage: https://human-centered.ai/scientific-working-for-students


Mini-Course: From Data Science to interpretable AI (class of 2019)

This mini-course is an introduction to (human) decision making and to how human intelligence can be supported by artificial intelligence (AI) in order to make better decisions. The workhorse of AI is data-driven machine learning (ML). After an introduction to the fundamentals of data, information and knowledge representation, the central focus of this course is on decision making and decision support systems. A special focus is given to what is now called “explainable AI”: an introduction to causality and interpretability is given, and awareness is raised for ethically responsible machine learning. This is desirable for many fields of application, but absolutely necessary for safety-critical applications and applications that impact human life, in order to foster transparency, fairness, trust and understanding, and to reduce bias in machine learning. [Syllabus, pdf, 74 kB]
Course homepage: https://human-centered.ai/mini-course-interpretable-ai-class-of-2019/


706.315 Selected Topics on interactive Knowledge Discovery: Methods of explainable AI (for PhDs only, 2 h PG)

The need for explainability and interpretability is motivated by the lack of transparency of so-called “black-box” machine learning approaches (e.g. deep learning). This is desirable in many domains, but mandatory in the medical domain, where we need to foster trust in and acceptance of future AI. Rising legal and privacy requirements (the European GDPR, in application since 25 May 2018) will make “black-box” approaches difficult to use in future business (see explainable AI), due to the (juridical) “right to explanation”. Other topics of explainable AI include the correlation fallacy and all sorts of biases, e.g. bias in learning but also in interpretation. Our goal is to ensure fairness of machine learning algorithms.

Course Homepage: https://human-centered.ai/methods-of-explainable-ai


185.A83 Machine Learning for Health Informatics (class of 2018) Start: 6th March 2018 (3 ECTS, 2 h, G)

In this course we foster a research-based teaching (RBT) approach on current topics of AI/machine learning for application in medical decision support. Due to its growing importance, this year’s course places a special focus on explainable AI (we will learn the difference between explanation and interpretation) and on the ethical, social, public and legal issues of medical AI, with the aim of solving real-world problems in health informatics for better medical decision support. We will see that what is called “explainable AI” is an old field – maybe the oldest field in computer science – because the question of “why” is central to the field of causality and is very important for modern data-driven medicine.
Course homepage: https://human-centered.ai/machine-learning-for-health-informatics-class-2018/


Mini-Course MAKE-Decisions: From Data Science to Explainable-AI (class of 2018, 3 ECTS, 2h, PG)

This course is an introduction to a core area of health informatics. It helps to understand decision making generally and how human intelligence can be supported by artificial intelligence (AI) and machine learning (ML) (see: what is the difference between AI and ML?) with decision support systems, with a focus on ethical, social and legal aspects. A very old field in AI is what is now called “explainable AI”, which is about a) using interpretable methods, or b) making “black-box” approaches interpretable for a human expert.

Course homepage: https://human-centered.ai/mini-course-make-decisions-practice/


706.046 AK HCI - Intelligent User Interfaces - Towards explainable-AI (class 2018) Start: 5th March 2018 (4,5 ECTS, 5 h, G)

Explainable AI (xAI) is of increasing importance due to legal aspects (GDPR), but also for ethical and social reasons – fairness and trust – and for knowledge discovery. The primary goal of xAI is to make AI/ML (see definition) approaches re-traceable, transparent, understandable and thus explainable to a human expert and/or end user. Ideally, this approach makes use of the remarkable cognitive capabilities of a human-in-the-loop in a glass-box setting. However, this requires new concepts for human-AI interfaces, particularly for ethically responsible, data-driven machine learning in application domains that impact human life (agriculture, climate, forestry, health, …).

Course Homepage: https://hci-kdd.org/iui-explainable-ai

 


340.300 Principles of Interaction: Interaction with Agents and Federated ML (class of 2017, 3 ECTS, 2 h, G)

In this course, Linz University computer science and software engineering students get an introduction to and an overview of the latest insights in collaborative interactive machine learning approaches, interaction with multi-agents, and the human-in-the-loop. A special focus – in the context of what is nowadays called “explainable AI” – is placed on federated machine learning approaches. The course always takes into account the social, ethical and legal aspects of artificial intelligence solutions that impact human life (agriculture, climate, forestry, health, …).

Course Homepage: https://human-centered.ai/interactive-machine-learning/
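The federated learning idea touched on above can be sketched in a few lines, assuming a FedAvg-style scheme over a one-parameter linear model; client data, the learning rate, and the model itself are illustrative assumptions, not material from the course:

```python
# Minimal federated-averaging sketch: each client trains locally on its own
# private data and only the model weights are shared and averaged.

def local_update(w, data, lr=0.02, steps=10):
    """One client's local training: gradient descent on squared error for y = w*x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(global_w, client_datasets):
    """One communication round: collect locally trained weights, average them."""
    local_ws = [local_update(global_w, data) for data in client_datasets]
    return sum(local_ws) / len(local_ws)

# Two clients whose private data both follow y = 3x; the raw data never
# leaves the client, only the trained weight does.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(5):
    w = federated_round(w, clients)
print(round(w, 2))  # converges towards 3.0
```

The privacy-relevant design choice is that only model parameters cross the network, which is why federated approaches matter for medical data.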

 

 


MAKE-Health: Machine Learning & Knowledge Extraction for Health Informatics (class of 2017, 6 ECTS, 3 h, G)

This course at the University of Verona follows a research-based teaching (RBT) approach and discusses theoretical and practical issues and experimental methods for combining human intelligence with artificial intelligence. The course has a strong focus on decision support under uncertainty, as well as on interpretability and explainability (now summarized under the term “explainable AI” and often abbreviated as xAI). While xAI is a term made popular by DARPA, the fundamental principles of explanations and counterfactual explanations go back to the early philosophy of science. This course is integrative and considers ethical, social and legal aspects of human-centered AI, e.g. evaluating bias, fairness and inclusion. The students will be sensitized to how to ensure trust in future AI systems – ultimately fostering trustworthy AI.

Course Homepage: https://human-centered.ai/mini-make-machine-learning-knowledge-extraction-health/

 


185.A83 Machine Learning for Health Informatics (class of 2017, 3 ECTS, 2h, G)
This course considers the whole pipeline (the “artificial intelligence ecosystem”) from data preprocessing and information fusion to information visualization and interaction, and fosters the HCI-KDD approach, which encompasses a synergistic combination of methods from two areas towards understanding “intelligence”: Human-Computer Interaction (HCI) and Knowledge Discovery/Data Mining (KDD), with the goal of supporting human intelligence with artificial intelligence. A particular focus is on ethically responsible machine learning and the social and legal aspects of AI.

706.046 AK HCI - Intelligent User Interfaces IUI (class of 2017, 4,5 ECTS, 5 h, G)

Intelligent User Interfaces (IUI) is where HCI meets the possibilities of Artificial Intelligence (AI), often defined as the design of intelligent agents – the core essence of machine learning. In this course, software engineering is seen as a dynamic, interactive and cooperative process which facilitates an optimal mix of standardization and tailor-made solutions. Gamification approaches are very useful for experimental work. Putting the human-in-the-loop is the main focus of this course.

Course homepage: https://human-centered.ai/iui-where-hci-meets-ai-challenge-2017/


706.315 Interactive Machine Learning - iML (class of 2016, 2 h PG)

This graduate course follows a research-based teaching (RBT) approach, provides an overview of models of human-centered AI, and discusses methods for combining human intelligence with machine intelligence (human-in-the-loop). The application focus is on the health informatics domain, where a human must always remain in control (for legal and social reasons) – but the principles are useful in any business domain.

Course homepage for 2015 and 2016: https://human-centered.ai/lv-706-315-interactive-machine-learning/


709.049 Biomedical Informatics: discovering knowledge in (big) data (2010-2017, 3 ECTS, 2 h, UG)

This course covers the computer science aspects of biomedical informatics (= medical informatics + bioinformatics). The focus is on machine learning for knowledge discovery from data, concentrating on algorithmic and methodological issues of data science. Health informatics is the field where machine learning has the greatest potential to provide benefits in improved medical diagnoses, disease analyses, decision making and drug development with high real-world economic value, ultimately contributing to cancer research for the human good. Ethical and social issues are very important aspects!

Course homepage (2010 – 2017): https://human-centered.ai/biomedical-informatics-big-data/


185.A83 Machine Learning for Health Informatics (class of 2016, 3 ECTS, 2 h, G)

Machine learning is the fastest-growing field in computer science (Jordan & Mitchell, 2015. Machine learning: Trends, perspectives, and prospects. Science, 349, (6245), 255-260), and it is well accepted that health informatics is amongst its greatest challenges (LeCun, Bengio, & Hinton, 2015. Deep learning. Nature, 521, (7553), 436-444). For the successful application of machine learning in health informatics, a comprehensive understanding of the whole HCI-KDD pipeline, ranging from the physical data ecosystem to the understanding of the end user in the problem domain, is necessary. In the medical world, the inclusion of privacy, data protection, safety and security is mandatory. Keywords: automatic machine learning, interactive machine learning, doctor-in-the-loop, subspace clustering, protein folding, k-anonymization. Go to the Course homepage
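The k-anonymization keyword above can be illustrated with a minimal sketch of the underlying idea: a table is k-anonymous with respect to chosen quasi-identifiers if every combination of their values occurs at least k times. The records and column names below are invented for illustration:

```python
# Hedged sketch of a k-anonymity check over generalized patient records.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if each quasi-identifier combination appears at least k times."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

# Already-generalized toy data: ages bucketed into ranges, zip codes truncated.
patients = [
    {"age_range": "40-49", "zip": "801*", "diagnosis": "A"},
    {"age_range": "40-49", "zip": "801*", "diagnosis": "B"},
    {"age_range": "50-59", "zip": "802*", "diagnosis": "A"},
    {"age_range": "50-59", "zip": "802*", "diagnosis": "C"},
]
print(is_k_anonymous(patients, ["age_range", "zip"], 2))  # True
```

Real anonymization pipelines also choose the generalization itself (how coarse the buckets must be), which this check alone does not do.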


706.315 Interactive Machine Learning (class of 2015, 2 h PG)

Whilst in classic ML there is usually little or no end-user feedback (a Google car is intended to drive without a human-in-the-loop), iML takes the human into the loop and lets the end user control the learning behaviour: putting the huge potential of modern, sophisticated machine learning algorithms into the hands of domain experts, so that the machine can benefit from the knowledge of these experts. Keywords: interactive learning and optimization with the human in the loop, hybrid learning systems, active learning, active preference learning, reinforcement learning. Go to the Course homepage
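The active learning keyword above is one concrete way to take the human into the loop: the model asks the expert to label the example it is least certain about. The toy probability function and pool below are illustrative assumptions only:

```python
# Minimal uncertainty-sampling sketch: query the example whose predicted
# class probability is closest to 0.5 (the decision boundary).

def predict_proba(x):
    """Toy probabilistic classifier: logistic curve centered at x = 5."""
    return 1 / (1 + 2.718281828459045 ** (-(x - 5.0)))

def most_uncertain(unlabeled):
    """Pick the unlabeled example the model is least sure about."""
    return min(unlabeled, key=lambda x: abs(predict_proba(x) - 0.5))

pool = [1.0, 4.8, 9.0, 5.1]
query = most_uncertain(pool)
print(query)  # 5.1 lies closest to the decision boundary at 5.0
```

The expert labels the queried example, the model is retrained, and the loop repeats; this is how the machine benefits most efficiently from the expert's knowledge.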


706.046 AK HCI - Intelligent User Interfaces - HCI meets AI (class of 2016, 4,5 ECTS, 5 h, G)

Intelligent User Interfaces (IUI) is where human-computer interaction (HCI) meets artificial intelligence (AI), often defined as the design of intelligent agents – the core essence of machine learning. In this practically oriented course, software engineering is seen as a dynamic, interactive and cooperative process which facilitates an optimal mixture of standardization and tailor-made solutions. Keywords: experimental software engineering, intelligent user interfaces, artificial intelligence, machine learning. Go to the Course homepage


709.049 Medical Informatics / Medizinische Informatik (2010-2015)

This course covers the computer science aspects of biomedical informatics (= medical informatics + bioinformatics), with a focus on discovering knowledge from big data and concentrating on algorithmic and methodological issues. Medicine and biology are turning more and more into data sciences; consequently, the focus of this lecture is on interactive knowledge discovery/data mining and interactive machine learning. Keywords: biomedical informatics, data, information, knowledge. Go to the course homepage. Previous lecture slides from the last semester are available via: http://genome.tugraz.at/medical_informatics.shtml


706.318 DissertantInnenseminar - Ph.D. Seminar (every year 1 h, PG)

For students of the doctoral school Computer Science (Informatik) this seminar is compulsory. The aim is to help the doctoral candidates improve their research strategies, their communication abilities, and the presentation of their scientific field. This seminar takes place every year, focusing on PhD students of the Holzinger group – but it is open to others as well. The content accompanies the current work of doctoral students and introduces international aspects, such as research-driven work, recognizing and grasping the content of international publications and engaging in scientific discussion of them (Journal Club), and, above all, enabling the students to write, publish and speak about their own work, actively promoting young talents and inspiring, challenging and encouraging interested students (Malik principle). PhD students are the most important young scientists and therefore of central strategic importance for the international research community. The support of young doctoral students is therefore an important aspect of the daily scientific routine; my motto is “If my doctoral students are good – I am good”.


706.315 Selected Topics on Interactive Knowledge Discovery 2014

Our data-centric world – from agriculture to zoology – generates tremendous amounts of complex, high-dimensional data sets. Approaches from artificial intelligence may help to understand, interpret and explain such data; the challenge, however, lies in making such approaches interactive, i.e. including the human in the loop from the very beginning of the knowledge discovery and data mining process. Many problems of our daily life cannot be solved exclusively by autonomous approaches of artificial intelligence, but only with the integration of human intelligence – this should not be overlooked.

Course details in TUGOnline

 

 

 


444.152 Medical Informatics / Medizinische Informatik

2VO, 3 ECTS, Graz University of Technology, Faculty of Electrical and Information Engineering, Institute of Genomics and Bioinformatics. The focus of this lecture is on data science, knowledge discovery and machine learning. Although this course is called Medizinische Informatik, it goes far beyond classical medical informatics. In the digital medicine of the future, artificial intelligence and machine learning have the greatest potential to provide benefits in improved medical workflows, processes, diagnoses, disease analyses, decision making and drug development with high real-world economic value, ultimately contributing to cancer research for the human good. Ethical, social and legal issues are very important aspects of our human-centred approach.

Course page: http://genome.tugraz.at/medical_informatics.shtml


706.046 AK HCI Mensch-Maschine Kommunikation: Applying User-Centered Design 2014

3 VU, 5 ECTS, selected chapters of Human-Computer Interaction & Usability Engineering, Graz University of Technology, Faculty of Informatics and Biomedical Engineering, Institute for Interactive Systems and Data Science. In this research-based teaching course (held regularly since summer term 2003), the students learn the principles of human-centered aspects of information systems, particularly with a focus on user interfaces and decision support. Based on the theory of lecture 706.021, the established findings of HCI and their application in usability engineering are implemented. Design and development are seen as dynamic processes which facilitate an optimal mixture of standardization and tailor-made solutions. Emphasis is given to so-called “User Centered Design”, which is extended into a “User Centered Development” process.


706.117 DiplomandInnen Seminar (every year, 5 ECTS, 3 h, G)

3 SE, 5 ECTS, Diplomandinnen und Diplomanden Seminar, Graz University of Technology, Faculty of Informatics and Biomedical Engineering, Institute for Interactive Systems and Data Science. This seminar takes place every year, focusing on master’s students of the Holzinger group. The content accompanies the current work of master’s students and introduces international aspects, such as research-driven work, recognizing and grasping the content of international publications and engaging in scientific discussion of them (Journal Club), and, above all, enabling students to write, publish and speak about their own work, actively promoting young talents and inspiring, challenging and encouraging interested students (Malik principle). The master’s courses of Harvard SEAS are a model for this, with the goal of promoting and motivating a future career in research for young, ambitious students already at the master’s level. The motto is “If my students are good – I am good”.


Probabilistic Decision Making and Statistical Learning Mini-Course (since 2010, blocked mini-course; 3 ECTS, 1 h, PG)

This mini-course is an introduction to a core area of health informatics and helps to understand decision making generally, and how human intelligence can be augmented by artificial intelligence (AI) and machine learning (ML) specifically. After a primer on probability and information, the students get an introduction to the overlap between the information sciences and the life sciences, and then to the fundamentals of data, information and knowledge. The central part is on (human) decision making and decision support systems, ranging from a view of early expert systems up to the hot topic of explainable AI. The course is accompanied by a practical session in which, in a simulation game, the students have to bring the latest AI technologies into the workflows of a hospital.

Course Homepage: https://human-centered.ai/mini-course-make-decisions-practice-2019
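The probabilistic decision making at the heart of this mini-course can be illustrated with Bayes' rule applied to a hypothetical diagnostic test; the prevalence and test accuracies below are made-up numbers for illustration:

```python
# Bayes' rule for a diagnostic test: how likely is disease given a positive result?

def posterior(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' rule."""
    # Total probability of a positive test: true positives + false positives.
    p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_pos

# Rare disease (1% prevalence), good test (95% sensitivity, 90% specificity):
p = posterior(prior=0.01, sensitivity=0.95, specificity=0.90)
print(round(p, 3))  # 0.088
```

The counterintuitive result, that a positive test for a rare disease still leaves under a 10% chance of disease, is exactly the kind of base-rate reasoning the course trains.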

 


400.141 Knowledge, Information and Visualization 2006-2011 (compulsory, each semester, 3 ECTS, 3 h, UG, G)

This course is mandatory for graduate medical students and postgraduate doctoral students in biomedicine, biology and the life sciences. It covers the fundamental principles of data, information and knowledge and addresses data management and information systems, especially hospital information systems (HIS, RIS, PACS) and decision support systems. Emphasis is placed on probabilistic decision making and medical decision support systems. The theoretical lectures are accompanied by practical exercises and hands-on work in seminars. In this course, students learn the underlying principles of data science, data management, information management, and knowledge management in the context of modern artificial intelligence (AI) and statistical machine learning (ML) approaches, using example problems from the life sciences.

The students’ textbook is available in the library or bookshop.