Publications

Selected Publications of Andreas Holzinger in reverse chronological order
More on Google Scholar, DBLP, Mendeley, or TUGOnline.

2016

  • [Holzinger:2016:MLfHI] A. Holzinger, “Machine Learning for Health Informatics”, in Machine Learning for Health Informatics: State-of-the-Art and Future Challenges, Lecture Notes in Artificial Intelligence LNAI 9605, A. Holzinger, Ed., Cham: Springer International Publishing, 2016, pp. 1-24.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Machine Learning (ML) studies algorithms which can learn from data to gain knowledge from experience and to make decisions and predictions. Health Informatics (HI) studies the effective use of probabilistic information for decision making. The combination of both has the greatest potential to raise the quality, efficacy and efficiency of treatment and care. Health systems worldwide are confronted with “big data” in high dimensions, where the inclusion of a human is impossible and automatic ML (aML) shows impressive results. However, sometimes we are confronted with complex data, “little data”, or rare events, where aML approaches suffer from insufficient training samples. Here interactive ML (iML) may be of help, particularly with a doctor-in-the-loop, e.g. in subspace clustering, k-anonymization, protein folding and protein design. However, successful application of ML for HI needs an integrated approach, fostering a concerted effort of four areas: (1) data science, (2) algorithms (with a focus on networks and topology (structure), and entropy (time)), (3) data visualization, and last but not least (4) privacy, data protection, safety & security.

    @incollection{Holzinger:2016:MLfHI,
       year = {2016},
       author = {Holzinger, Andreas},
       title = {Machine Learning for Health Informatics},
       booktitle = {Machine Learning for Health Informatics: State-of-the-Art and Future Challenges, Lecture Notes in Artificial Intelligence LNAI 9605},
       editor = {Holzinger, Andreas},
       publisher = {Springer International Publishing},
       address = {Cham},
       pages = {1-24},
   abstract = {Machine Learning (ML) studies algorithms which can learn from data to gain knowledge from experience and to make decisions and predictions. Health Informatics (HI) studies the effective use of probabilistic information for decision making. The combination of both has the greatest potential to raise the quality, efficacy and efficiency of treatment and care. Health systems worldwide are confronted with “big data” in high dimensions, where the inclusion of a human is impossible and automatic ML (aML) shows impressive results. However, sometimes we are confronted with complex data, “little data”, or rare events, where aML approaches suffer from insufficient training samples. Here interactive ML (iML) may be of help, particularly with a doctor-in-the-loop, e.g. in subspace clustering, k-anonymization, protein folding and protein design. However, successful application of ML for HI needs an integrated approach, fostering a concerted effort of four areas: (1) data science, (2) algorithms (with a focus on networks and topology (structure), and entropy (time)), (3) data visualization, and last but not least (4) privacy, data protection, safety & security.},
       keywords = {Machine learning, health informatics},
       doi = {10.1007/978-3-319-50478-0_1},
       url = {http://dx.doi.org/10.1007/978-3-319-50478-0_1}
    }

  • [MULTILEARN:2016:reference] D. Miljkovic, D. Aleksovski, V. Podpecan, N. Lavrac, B. Malle, and A. Holzinger, “Machine Learning and Data Mining Methods for Managing Parkinson’s Disease”, in Machine Learning for Health Informatics: State-of-the-Art and Future Challenges, Lecture Notes in Artificial Intelligence LNAI 9605, A. Holzinger, Ed., Heidelberg et al.: Springer, 2016, pp. 209-220.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Parkinson’s disease (PD) results primarily from the death of dopaminergic neurons in the Substantia Nigra, a part of the Mesencephalon (midbrain), and is not curable to date. PD medications treat symptoms only; none halt or retard dopaminergic neuron degeneration. Here machine learning methods can be of help, since one of the crucial tasks in the management and treatment of PD patients is the detection and classification of tremors. In clinical practice, tremor is one of the most common movement disorders and is typically classified using behavioral or etiological factors. Another important issue is to detect and evaluate PD-related gait patterns, gait initiation and freezing of gait, which are typical symptoms of PD. Medical studies have shown that 90% of people with PD suffer from vocal impairment; consequently, the analysis of voice data to discriminate healthy people from PD patients is relevant. This paper provides a quick overview of the state-of-the-art and some directions for future research, motivated by the ongoing PD manager project.

    @incollection{MULTILEARN:2016:reference,
       year = {2016},
       author = {Miljkovic, Dragana and Aleksovski, Darko and Podpecan, Vid and Lavrac, Nada and Malle, Bernd and Holzinger, Andreas},
   title = {Machine Learning and Data Mining Methods for Managing Parkinson’s Disease},
   booktitle = {Machine Learning for Health Informatics: State-of-the-Art and Future Challenges, Lecture Notes in Artificial Intelligence LNAI 9605},
       editor = {Holzinger, Andreas},
       publisher = {Springer},
       address = {Heidelberg et al.},
       pages = {209-220},
   abstract = {Parkinson’s disease (PD) results primarily from the death of dopaminergic neurons in the Substantia Nigra, a part of the Mesencephalon (midbrain), and is not curable to date. PD medications treat symptoms only; none halt or retard dopaminergic neuron degeneration. Here machine learning methods can be of help, since one of the crucial tasks in the management and treatment of PD patients is the detection and classification of tremors. In clinical practice, tremor is one of the most common movement disorders and is typically classified using behavioral or etiological factors. Another important issue is to detect and evaluate PD-related gait patterns, gait initiation and freezing of gait, which are typical symptoms of PD. Medical studies have shown that 90% of people with PD suffer from vocal impairment; consequently, the analysis of voice data to discriminate healthy people from PD patients is relevant. This paper provides a quick overview of the state-of-the-art and some directions for future research, motivated by the ongoing PD manager project.},
       keywords = {Machine learning, data mining, Parkinson disease},
       doi = {10.1007/978-3-319-50478-0_10},
       url = {http://rd.springer.com/chapter/10.1007/978-3-319-50478-0_10}
    }

  • [RobertBuettnerRoeckerHolzinger:2016:CollaborativeMachineLearning] S. Robert, S. Büttner, C. Röcker, and A. Holzinger, “Reasoning Under Uncertainty: Towards Collaborative Interactive Machine Learning”, in Machine Learning for Health Informatics: State-of-the-Art and Future Challenges, A. Holzinger, Ed., Cham: Springer International Publishing, 2016, pp. 357-376.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    In this paper, we present the current state-of-the-art of decision making (DM) and machine learning (ML) and bridge the two research domains to create an integrated approach to complex problem solving based on human and computational agents. We present a novel classification of ML, emphasizing the human-in-the-loop in interactive ML (iML) and more specifically collaborative interactive ML (ciML), which we understand as a deeply integrated version of iML, where humans and algorithms work hand in hand to solve complex problems. Both humans and computers have specific strengths and weaknesses, and integrating humans into machine learning processes might be a very efficient way of tackling problems. This approach bears immense research potential for various domains, e.g., in health informatics or in industrial applications. We outline open questions and name future challenges that have to be addressed by the research community to enable the use of collaborative interactive machine learning for problem solving at a large scale.

    @incollection{RobertBuettnerRoeckerHolzinger:2016:CollaborativeMachineLearning,
       year = {2016},
       author = {Robert, Sebastian and Büttner, Sebastian and Röcker, Carsten and Holzinger, Andreas},
       title = {Reasoning Under Uncertainty: Towards Collaborative Interactive Machine Learning},
       booktitle = {Machine Learning for Health Informatics: State-of-the-Art and Future Challenges},
       editor = {Holzinger, Andreas},
       publisher = {Springer International Publishing},
       address = {Cham},
       pages = {357-376},
   abstract = {In this paper, we present the current state-of-the-art of decision making (DM) and machine learning (ML) and bridge the two research domains to create an integrated approach to complex problem solving based on human and computational agents. We present a novel classification of ML, emphasizing the human-in-the-loop in interactive ML (iML) and more specifically collaborative interactive ML (ciML), which we understand as a deeply integrated version of iML, where humans and algorithms work hand in hand to solve complex problems. Both humans and computers have specific strengths and weaknesses, and integrating humans into machine learning processes might be a very efficient way of tackling problems. This approach bears immense research potential for various domains, e.g., in health informatics or in industrial applications. We outline open questions and name future challenges that have to be addressed by the research community to enable the use of collaborative interactive machine learning for problem solving at a large scale.},
   keywords = {Decision making, Reasoning, Interactive machine learning, Collaborative interactive machine learning},
       doi = {10.1007/978-3-319-50478-0_18},
       url = {http://dx.doi.org/10.1007/978-3-319-50478-0_18}
    }

  • [CaleroEtAlHolzinger:2016:HealthRecommender] A. C. Valdez, M. Ziefle, K. Verbert, A. Felfernig, and A. Holzinger, “Recommender Systems for Health Informatics: State-of-the-Art and Future Perspectives”, in Machine Learning for Health Informatics, Lecture Notes in Artificial Intelligence LNAI 9605, A. Holzinger, Ed., Heidelberg et al.: Springer, 2016, pp. 391-414.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Recommender systems are a classic example of machine learning applications; however, they have not yet been used extensively in health informatics and medical scenarios. We argue that this is due to the specifics of benchmarking criteria in medical scenarios, the multitude of drastically differing end-user groups, and the enormous context-complexity of the medical domain. Here both risk perceptions towards data security and privacy and trust in safe technical systems play a central and specific role, particularly in the clinical context. These aspects dominate the acceptance of such systems. By using a Doctor-in-the-Loop approach, some of these difficulties could be mitigated by combining human expertise with computer efficiency. We provide a three-part research framework to assess health recommender systems, suggesting the incorporation of domain understanding, evaluation and specific methodology into the development process.

    @incollection{CaleroEtAlHolzinger:2016:HealthRecommender,
       year = {2016},
       author = {Valdez, Andre Calero and Ziefle, Martina and Verbert, Katrien and Felfernig, Alexander and Holzinger, Andreas},
       title = {Recommender Systems for Health Informatics: State-of-the-Art and Future Perspectives},
       booktitle = {Machine Learning for Health Informatics, Lecture Notes in Artificial Intelligence LNAI 9605},
       editor = {Holzinger, Andreas},
       publisher = {Springer},
   address = {Heidelberg et al.},
       pages = {391-414},
   abstract = {Recommender systems are a classic example of machine learning applications; however, they have not yet been used extensively in health informatics and medical scenarios. We argue that this is due to the specifics of benchmarking criteria in medical scenarios, the multitude of drastically differing end-user groups, and the enormous context-complexity of the medical domain. Here both risk perceptions towards data security and privacy and trust in safe technical systems play a central and specific role, particularly in the clinical context. These aspects dominate the acceptance of such systems. By using a Doctor-in-the-Loop approach, some of these difficulties could be mitigated by combining human expertise with computer efficiency. We provide a three-part research framework to assess health recommender systems, suggesting the incorporation of domain understanding, evaluation and specific methodology into the development process.},
   keywords = {Health recommender systems, Human-computer interaction, Evaluation framework, Uncertainty, Trust, Risk, Privacy, Machine learning},
       doi = {10.1007/978-3-319-50478-0_20},
       url = {http://rd.springer.com/chapter/10.1007/978-3-319-50478-0_20}
    }

  • [JeanquartierEtAl:2016:MachineLearningTumorGrowth] F. Jeanquartier, C. Jean-Quartier, M. Kotlyar, T. Tokar, A. Hauschild, I. Jurisica, and A. Holzinger, “Machine Learning for In Silico Modeling of Tumor Growth”, in Machine Learning for Health Informatics, Springer Lecture Notes in Artificial Intelligence LNAI 9605, A. Holzinger, Ed., Cham: Springer International Publishing, 2016, pp. 415-434.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The various interplaying variables of tumor growth remain key questions in cancer research, in particular what makes such a growth malignant and what are possible therapies to stop the growth and prevent re-growth. Given the complexity and heterogeneity of the disease, as well as the steadily growing set of publicly available big data sets, there is an urgent need for approaches to make sense out of these open data sets. Machine learning methods for tumor growth profiles and model validation can be of great help here, particularly, discrete multi-agent approaches. In this paper we provide an overview of current machine learning approaches used for cancer research with the main focus of highlighting the necessity of in silico tumor growth modeling.

    @incollection{JeanquartierEtAl:2016:MachineLearningTumorGrowth,
       year = {2016},
       author = {Jeanquartier, Fleur and Jean-Quartier, Claire and Kotlyar, Max and Tokar, Tomas and Hauschild, Anne-Christin and Jurisica, Igor and Holzinger, Andreas},
       title = {Machine Learning for In Silico Modeling of Tumor Growth},
       booktitle = {Machine Learning for Health Informatics, Springer Lecture Notes in Artificial Intelligence LNAI 9605},
       editor = {Holzinger, Andreas},
       publisher = {Springer International Publishing},
       address = {Cham},
       pages = {415-434},
       abstract = {The various interplaying variables of tumor growth remain key questions in cancer research, in particular what makes such a growth malignant and what are possible therapies to stop the growth and prevent re-growth. Given the complexity and heterogeneity of the disease, as well as the steadily growing set of publicly available big data sets, there is an urgent need for approaches to make sense out of these open data sets. Machine learning methods for tumor growth profiles and model validation can be of great help here, particularly, discrete multi-agent approaches. In this paper we provide an overview of current machine learning approaches used for cancer research with the main focus of highlighting the necessity of in silico tumor growth modeling.},
   keywords = {Tumor growth, Cancer modeling, Machine learning, Computational biology},
       doi = {10.1007/978-3-319-50478-0_21},
       url = {http://dx.doi.org/10.1007/978-3-319-50478-0_21}
    }

  • [BloiceHolzinger:2016:PythonTutorial] M. D. Bloice and A. Holzinger, “A Tutorial on Machine Learning and Data Science Tools with Python”, in Machine Learning for Health Informatics, Lecture Notes in Artificial Intelligence LNAI 9605, A. Holzinger, Ed., Heidelberg: Springer, 2016, pp. 437-483.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    In this tutorial, we will provide an introduction to the main Python software tools used for applying machine learning techniques to medical data. The focus will be on open-source software that is freely available and cross-platform. To aid the learning experience, a companion GitHub repository is available so that you can follow the examples contained in this paper interactively using Jupyter notebooks. The notebooks will be more exhaustive than what is contained in this chapter, and will focus on medical datasets and healthcare problems. Briefly, this tutorial will first introduce Python as a language, and then describe some of the lower level, general matrix and data structure packages that are popular in the machine learning and data science communities, such as NumPy and Pandas. From there, we will move to dedicated machine learning software, such as SciKit-Learn. Finally, we will introduce the Keras deep learning and neural networks library. The emphasis of this paper is readability, with as little jargon used as possible. No previous experience with machine learning is assumed. We will use openly available medical datasets throughout.

    @incollection{BloiceHolzinger:2016:PythonTutorial,
       year = {2016},
       author = {Bloice, Marcus D. and Holzinger, Andreas},
       title = {A Tutorial on Machine Learning and Data Science Tools with Python},
       booktitle = {Machine Learning for Health Informatics, Lecture Notes in Artificial Intelligence LNAI 9605},
       editor = {Holzinger, Andreas},
       publisher = {Springer},
       address = {Heidelberg},
       pages = {437-483},
   abstract = {In this tutorial, we will provide an introduction to the main Python software tools used for applying machine learning techniques to medical data. The focus will be on open-source software that is freely available and cross-platform. To aid the learning experience, a companion GitHub repository is available so that you can follow the examples contained in this paper interactively using Jupyter notebooks. The notebooks will be more exhaustive than what is contained in this chapter, and will focus on medical datasets and healthcare problems. Briefly, this tutorial will first introduce Python as a language, and then describe some of the lower level, general matrix and data structure packages that are popular in the machine learning and data science communities, such as NumPy and Pandas. From there, we will move to dedicated machine learning software, such as SciKit-Learn. Finally, we will introduce the Keras deep learning and neural networks library. The emphasis of this paper is readability, with as little jargon used as possible. No previous experience with machine learning is assumed. We will use openly available medical datasets throughout.},
       doi = {10.1007/978-3-319-50478-0_22},
       url = {http://rd.springer.com/chapter/10.1007/978-3-319-50478-0_22}
    }
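
    To give a flavour of the workflow the tutorial walks through, here is a minimal sketch of a typical scikit-learn pipeline on an openly available medical dataset. It is my own illustration, not code from the chapter or its companion repository, and the choice of dataset and classifier is arbitrary:

    # Minimal sketch of a typical scikit-learn workflow on an openly
    # available medical dataset; illustrative only, not code from the
    # chapter's companion repository.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    # Wisconsin breast cancer dataset bundled with scikit-learn.
    X, y = load_breast_cancer(return_X_y=True)

    # Hold out 25% of the samples for evaluation.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # Fit a random forest and report held-out accuracy.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print("test accuracy: %.3f" % accuracy_score(y_test, clf.predict(X_test)))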

  • [BuccafurriHolzingerEtAl:2016:LNCS9817] F. Buccafurri, A. Holzinger, P. Kieseberg, A. M. Tjoa, and E. R. Weippl, Availability, Reliability, and Security in Information Systems – IFIP WG 8.4, 8.9, TC 5 International Cross-Domain Conference, CD-ARES 2016, and Workshop on Privacy Aware Machine Learning for Health Data Science, PAML 2016, Salzburg, Austria, August 31 – September 2, 2016 (Springer Lecture Notes in Computer Science, LNCS 9817), Heidelberg: Springer, 2016.
    [BibTeX] [DOI] [Download PDF]
    @book{BuccafurriHolzingerEtAl:2016:LNCS9817,
       year = {2016},
       author = {Buccafurri, Francesco and Holzinger, Andreas and Kieseberg, Peter and Tjoa, A Min and Weippl, Edgar R.},
       title = {Availability, Reliability, and Security in Information Systems - IFIP WG 8.4, 8.9, TC 5 International Cross-Domain Conference, CD-ARES 2016, and Workshop on Privacy Aware Machine Learning for Health Data Science, PAML 2016, Salzburg, Austria, August 31 - September 2, 2016 (Springer Lecture Notes in Computer Science, LNCS 9817)},
       publisher = {Springer},
       address = {Heidelberg},
       doi = {10.1007/978-3-319-45507-5},
       url = {http://dx.doi.org/10.1007/978-3-319-45507-5}
    }

  • [MalleEtAl:2016:PAMLRightToBeForgotten] B. Malle, P. Kieseberg, S. Schrittwieser, and A. Holzinger, “Privacy Aware Machine Learning and the ‘Right to be Forgotten’”, ERCIM News (special theme: machine learning), vol. 107, iss. 3, pp. 22-23, 2016.
    [BibTeX] [Abstract] [Download PDF]

    While machine learning is one of the fastest growing technologies in the area of computer science, the goal of analysing large amounts of data for information extraction collides with the privacy of individuals. Hence, in order to protect sensitive information, the effects of the right to be forgotten on machine learning algorithms need to be studied more extensively. The data-driven economy (and related concepts like Industry 4.0) and data-driven science, as well as big data, are the keywords most often heard in discussions on the future of high-profile industries and on the upcoming revolutions in the economic world. With the integration of modern information technology into “classical” industrial environments or services, many new opportunities can be envisioned, e.g., in the optimisation of supply chains or in on-demand production of specifically tailored goods, but even in governmental areas like health environments, where P4-medicine (predictive, preventive, personalised, participatory) is seen as a new paradigm that could revolutionise health care. With all these new opportunities, the challenges were traditionally located in the technical area, especially regarding technologies for enabling the efficient and correct analysis of the large amounts of data produced by factories and large sensor networks. In recent years, the area of machine learning has seen a surge in new technologies developed and brought to the market. In combination with the ever-increasing amount of computational power and storage that is available for a relatively reasonable price, many of these applications can now be applied in real-life environments.

    @article{MalleEtAl:2016:PAMLRightToBeForgotten,
       year = {2016},
       author = {Malle, Bernd and Kieseberg, Peter and Schrittwieser, Sebastian and Holzinger, Andreas},
       title = {Privacy Aware Machine Learning and the “Right to be Forgotten”},
       journal = {ERCIM News (special theme: machine learning)},
       volume = {107},
       number = {3},
       pages = {22-23},
   abstract = {While machine learning is one of the fastest growing technologies in the area of computer science, the goal of analysing large amounts of data for information extraction collides with the privacy of individuals. Hence, in order to protect sensitive information, the effects of the right to be forgotten on machine learning algorithms need to be studied more extensively. The data-driven economy (and related concepts like Industry 4.0) and data-driven science, as well as big data, are the keywords most often heard in discussions on the future of high-profile industries and on the upcoming revolutions in the economic world. With the integration of modern information technology into “classical” industrial environments or services, many new opportunities can be envisioned, e.g., in the optimisation of supply chains or in on-demand production of specifically tailored goods, but even in governmental areas like health environments, where P4-medicine (predictive, preventive, personalised, participatory) is seen as a new paradigm that could revolutionise health care. With all these new opportunities, the challenges were traditionally located in the technical area, especially regarding technologies for enabling the efficient and correct analysis of the large amounts of data produced by factories and large sensor networks. In recent years, the area of machine learning has seen a surge in new technologies developed and brought to the market. In combination with the ever-increasing amount of computational power and storage that is available for a relatively reasonable price, many of these applications can now be applied in real-life environments.},
       keywords = {Privacy Aware Machine Learning, Human-in-the-Loop, interactive ML},
       url = {http://ercim-news.ercim.eu/en107/special/privacy-aware-machine-learning-and-the-right-to-be-forgotten}
    }

  • [KiesebergWeipplHolzinger:2016:DocInLoop] P. Kieseberg, E. R. Weippl, and A. Holzinger, “Trust for the ‘Doctor in the Loop’”, ERCIM News (special theme: Tackling Big Data in the Life Sciences), vol. 2016, iss. 104, pp. 32-33, 2016.
    [BibTeX] [Abstract] [Download PDF]

    The "doctor in the loop" is a new paradigm in information driven medicine, picturing the doctor as authority inside a loop supplying an expert system with data and information. Before this paradigm is implemented in real environments, the trustworthiness of the system must be assured. The “doctor in the loop” is a new paradigm in information driven medicine, picturing the doctor as authority inside a loop with an expert system in order to support the (automated) decision making with expert knowledge. This information not only includes support in pattern finding and supplying external knowledge, but the inclusion of data on actual patients, as well as treatment results and possible additional (side-) effects that relate to previous decisions of this semi-automated system. The concept of the "doctor in the loop" is basically an extension of the increasingly frequent use of knowledge discovery for the enhancement of medical treatments together with the “human in the loop” concept: The expert knowledge of the doctor is incorporated into "intelligent" systems (e.g., using interactive machine learning) and enriched with additional information and expert know-how. Using machine learning algorithms, medical knowledge and optimal treatments are identified. This knowledge is then fed back to the doctor to assist him/her.

    @article{KiesebergWeipplHolzinger:2016:DocInLoop,
       year = {2016},
       author = {Kieseberg, Peter and Weippl, Edgar R. and Holzinger, Andreas},
       title = {Trust for the "Doctor in the Loop"},
       journal = {ERCIM News (special theme: Tackling Big Data in the Life Sciences)},
       volume = {2016},
       number = {104},
       pages = {32-33},
       abstract = {The "doctor in the loop" is a new paradigm in information driven medicine, picturing the doctor as authority inside a loop supplying an expert system with data and information. Before this paradigm is implemented in real environments, the trustworthiness of the system must be assured. The “doctor in the loop” is a new paradigm in information driven medicine, picturing the doctor as authority inside a loop with an expert system in order to support the (automated) decision making with expert knowledge. This information not only includes support in pattern finding and supplying external knowledge, but the inclusion of data on actual patients, as well as treatment results and possible additional (side-) effects that relate to previous decisions of this semi-automated system.
    The concept of the "doctor in the loop" is basically an extension of the increasingly frequent use of knowledge discovery for the enhancement of medical treatments together with the “human in the loop” concept: The expert knowledge of the doctor is incorporated into "intelligent" systems (e.g., using interactive machine learning) and enriched with additional information and expert know-how. Using machine learning algorithms, medical knowledge and optimal treatments are identified. This knowledge is then fed back to the doctor to assist him/her.},
       url = {http://ercim-news.ercim.eu/en104/special/trust-for-the-doctor-in-the-loop}
    }

  • [LeeHolzinger:2016:HighDim] S. Lee and A. Holzinger, “Knowledge Discovery from Complex High Dimensional Data”, in Solving Large Scale Learning Tasks. Challenges and Algorithms, Lecture Notes in Artificial Intelligence, LNAI 9580, S. Michaelis, N. Piatkowski, and M. Stolpe, Eds., Cham: Springer International Publishing, 2016, pp. 148-167.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Modern data analysis is confronted by the increasing dimensionality of problems, driven mainly by the higher resolutions available for data acquisition and by our use of larger models with more degrees of freedom to investigate complex systems more deeply. High dimensionality constitutes one aspect of “big data”, which brings us not only computational but also statistical and perceptual challenges. Most data analysis problems are solved using techniques of optimization, where large-scale optimization requires faster algorithms and implementations. Computed solutions must be evaluated for statistical quality, since otherwise false discoveries can be made. Recent papers suggest controlling and modifying algorithms themselves for better statistical properties. Finally, human perception inherently limits our understanding to three-dimensional spaces, making it almost impossible to grasp complex phenomena. As an aid, we use dimensionality reduction or other techniques, but these usually do not capture relations between interesting objects. Here graph-based knowledge representation has great potential, for instance to create perceivable and interactive representations and to perform new types of analysis based on graph theory and network topology. In this article, we show glimpses of new developments in these aspects.

    @incollection{LeeHolzinger:2016:HighDim,
       year = {2016},
       author = {Lee, Sangkyun and Holzinger, Andreas},
       title = {Knowledge Discovery from Complex High Dimensional Data},
       booktitle = {Solving Large Scale Learning Tasks. Challenges and Algorithms, Lecture Notes in Artificial Intelligence, LNAI 9580},
       editor = {Michaelis, Stefan and Piatkowski, Nico and Stolpe, Marco},
       publisher = {Springer International Publishing},
       address = {Cham},
       pages = {148-167},
   abstract = {Modern data analysis is confronted by the increasing dimensionality of problems, driven mainly by the higher resolutions available for data acquisition and by our use of larger models with more degrees of freedom to investigate complex systems more deeply. High dimensionality constitutes one aspect of “big data”, which brings us not only computational but also statistical and perceptual challenges. Most data analysis problems are solved using techniques of optimization, where large-scale optimization requires faster algorithms and implementations. Computed solutions must be evaluated for statistical quality, since otherwise false discoveries can be made. Recent papers suggest controlling and modifying algorithms themselves for better statistical properties. Finally, human perception inherently limits our understanding to three-dimensional spaces, making it almost impossible to grasp complex phenomena. As an aid, we use dimensionality reduction or other techniques, but these usually do not capture relations between interesting objects. Here graph-based knowledge representation has great potential, for instance to create perceivable and interactive representations and to perform new types of analysis based on graph theory and network topology. In this article, we show glimpses of new developments in these aspects.},
       doi = {10.1007/978-3-319-41706-6_7},
       url = {http://dx.doi.org/10.1007/978-3-319-41706-6_7}
    }
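
    As a toy illustration of the graph-based representation the abstract points to, the following sketch (mine, not code from the chapter) turns high-dimensional points into a k-nearest-neighbour graph, making pairwise relations between objects explicit for network-topological analysis. It assumes scikit-learn and networkx (version 2.8 or later) are installed; the data are synthetic:

    # Toy sketch (not code from the chapter): represent high-dimensional
    # data as a k-nearest-neighbour graph so that relations between objects
    # become explicit and analysable with graph/network-topology tools.
    import numpy as np
    import networkx as nx
    from sklearn.neighbors import kneighbors_graph

    rng = np.random.default_rng(42)
    X = rng.normal(size=(100, 50))            # 100 objects in 50 dimensions

    # Sparse 5-nearest-neighbour adjacency; Euclidean distances as weights.
    A = kneighbors_graph(X, n_neighbors=5, mode="distance")
    G = nx.from_scipy_sparse_array(A)         # requires networkx >= 2.8

    print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
    print("connected components:", nx.number_connected_components(G))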

  • [WartnerEtAl:2016:Limits-Doc-in-Loop] S. Wartner, D. Girardi, M. Wiesinger-Widi, J. Trenkler, R. Kleiser, and A. Holzinger, “Ontology-Guided Principal Component Analysis: Reaching the Limits of the Doctor-in-the-Loop”, in Information Technology in Bio- and Medical Informatics: 7th International Conference, ITBAM 2016, Porto, Portugal, September 5-8, 2016, Proceedings, E. M. Renda, M. Bursa, A. Holzinger, and S. Khuri, Eds., Cham: Springer International Publishing, 2016, pp. 22-33.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Biomedical research requires deep domain expertise to perform analyses of complex data sets, assisted by mathematical expertise provided by data scientists who design and develop sophisticated methods and tools. Such methods and tools not only require preprocessing of the data, but most of all a meaningful input selection. Usually, data scientists do not have sufficient background knowledge about the origin of the data and the biomedical problems to be solved; consequently, a doctor-in-the-loop can be of great help here. In this paper we examine the viability of integrating an analysis-guided visualization component into an ontology-guided data infrastructure, exemplified by principal component analysis. We evaluated this approach by examining the potential for intelligent support of medical experts in the case of cerebral aneurysm research.

    @incollection{WartnerEtAl:2016:Limits-Doc-in-Loop,
       year = {2016},
       author = {Wartner, Sandra and Girardi, Dominic and Wiesinger-Widi, Manuela and Trenkler, Johannes and Kleiser, Raimund and Holzinger, Andreas},
       title = {Ontology-Guided Principal Component Analysis: Reaching the Limits of the Doctor-in-the-Loop},
       booktitle = {Information Technology in Bio- and Medical Informatics: 7th International Conference, ITBAM 2016, Porto, Portugal, September 5-8, 2016, Proceedings},
       editor = {Renda, Elena M. and Bursa, Miroslav and Holzinger, Andreas and Khuri, Sami},
       publisher = {Springer International Publishing},
       address = {Cham},
       pages = {22-33},
   abstract = {Biomedical research requires deep domain expertise to perform analyses of complex data sets, assisted by mathematical expertise provided by data scientists who design and develop sophisticated methods and tools. Such methods and tools not only require preprocessing of the data, but most of all a meaningful input selection. Usually, data scientists do not have sufficient background knowledge about the origin of the data and the biomedical problems to be solved; consequently, a doctor-in-the-loop can be of great help here. In this paper we examine the viability of integrating an analysis-guided visualization component into an ontology-guided data infrastructure, exemplified by principal component analysis. We evaluated this approach by examining the potential for intelligent support of medical experts in the case of cerebral aneurysm research.},
       keywords = {Principal component analysis, PCA, Ontology, Data mining, Data warehousing, Doctor-in-the-loop},
       doi = {10.1007/978-3-319-43949-5_2},
       url = {http://dx.doi.org/10.1007/978-3-319-43949-5_2}
    }
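
    Stripped of the ontology-guided infrastructure described in the paper, the core analysis step can be sketched as plain principal component analysis on a standardized data table. The following is an illustrative sketch on synthetic data, not the authors’ implementation:

    # Minimal PCA sketch on a synthetic patient table: standardize, project
    # onto the first two principal components, and report the variance they
    # capture. Illustrative only; the ontology-guided parts are not shown.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 12))            # 200 patients, 12 measurements

    X_std = StandardScaler().fit_transform(X) # zero mean, unit variance
    pca = PCA(n_components=2)
    scores = pca.fit_transform(X_std)         # 2-D coordinates for plotting

    print("explained variance ratio:", pca.explained_variance_ratio_)
    print("first patient's 2-D coordinates:", scores[0])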

  • [ValdezEtAl:2016:graphentropy] A. C. Valdez, M. Dehmer, and A. Holzinger, “Application of Graph Entropy for Knowledge Discovery and Data Mining in Bibliometric Data”, in Mathematical Foundations and Applications of Graph Entropy, M. Dehmer, F. Emmert-Streib, Z. Chen, X. Li, and Y. Shi, Eds., New York: Wiley, 2016, pp. 259-272.
    [BibTeX] [Abstract] [Download PDF]

    Entropy, originating from statistical physics, is a fascinating and challenging concept with many diverse definitions and various applications. Considering all the diverse meanings, entropy can be used as a measure for disorder in the range between total order (structured) and total disorder (unstructured), as long as by "order" we understand that objects are segregated by their properties or parameter values. States of lower entropy occur when objects become organized, and ideally, when everything is in complete order, the entropy value is zero. These observations generated a colloquial meaning of entropy. Following the concept of the mathematical theory of communication by Shannon & Weaver (1949), entropy can be used as a measure for the uncertainty in a data set. The application of entropy became popular as a measure for system complexity with the paper by Steven Pincus (1991), who described Approximate Entropy as a statistic quantifying regularity within a wide variety of relatively short (greater than 100 points) and noisy time series data. The development of this approach was initially motivated by data length constraints, which are commonly encountered in typical biomedical signals, including heart rate, electroencephalography (EEG), etc., but also in endocrine hormone secretion data sets [6]. Hamilton et al. were the first to apply the concept of entropy to bibliometrics to measure interdisciplinarity from diversity. While Hamilton et al. work on citation data, a similar approach has been applied by Holzinger et al. using enriched meta-data for a large research cluster.

    @incollection{ValdezEtAl:2016:graphentropy,
       year = {2016},
       author = {Valdez, André Calero and Dehmer, Matthias and Holzinger, Andreas},
       title = {Application of Graph Entropy for Knowledge Discovery and Data Mining in Bibliometric Data},
       booktitle = {Mathematical Foundations and Applications of Graph Entropy},
       editor = {Dehmer, Matthias and Emmert-Streib, Frank and Chen, Zengqiang and Li, Xueliang and Shi, Yongtang},
       publisher = {Wiley},
       address = {New York},
       pages = {259-272},
   abstract = {Entropy, originating from statistical physics, is a fascinating and challenging concept with many diverse definitions and various applications. Considering all the diverse meanings, entropy can be used as a measure for disorder in the range between total order (structured) and total disorder (unstructured), as long as by "order" we understand that objects are segregated by their properties or parameter values. States of lower entropy occur when objects become organized, and ideally, when everything is in complete order, the entropy value is zero. These observations generated a colloquial meaning of entropy. Following the concept of the mathematical theory of communication by Shannon & Weaver (1949), entropy can be used as a measure for the uncertainty in a data set. The application of entropy became popular as a measure for system complexity with the paper by Steven Pincus (1991), who described Approximate Entropy as a statistic quantifying regularity within a wide variety of relatively short (greater than 100 points) and noisy time series data. The development of this approach was initially motivated by data length constraints, which are commonly encountered in typical biomedical signals, including heart rate, electroencephalography (EEG), etc., but also in endocrine hormone secretion data sets [6]. Hamilton et al. were the first to apply the concept of entropy to bibliometrics to measure interdisciplinarity from diversity. While Hamilton et al. work on citation data, a similar approach has been applied by Holzinger et al. using enriched meta-data for a large research cluster.},
       url = {http://eu.wiley.com/WileyCDA/WileyTitle/productCd-3527339094.html#}
    }
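
    As a concrete instance of the Shannon-style entropy this chapter builds on, the following sketch (mine, not the authors’ code) computes the Shannon entropy H = -Σ p_i · log2(p_i) of a graph’s degree distribution, one of the simplest graph entropy measures; the benchmark graph is an arbitrary choice:

    # Sketch (not the authors' code): Shannon entropy of a graph's degree
    # distribution, one of the simplest graph entropy measures.
    import math
    from collections import Counter
    import networkx as nx

    G = nx.karate_club_graph()                # small benchmark social network

    degrees = [d for _, d in G.degree()]
    counts = Counter(degrees)                 # frequency of each degree value
    n = G.number_of_nodes()

    H = -sum((c / n) * math.log2(c / n) for c in counts.values())
    print("degree-distribution entropy: %.3f bits" % H)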

  • [MayerEtAl:2016:Entropy] C. Mayer, M. Bachler, A. Holzinger, P. K. Stein, and S. Wassertheurer, “The Effect of Threshold Values and Weighting Factors on the Association between Entropy Measures and Mortality after Myocardial Infarction in the Cardiac Arrhythmia Suppression Trial (CAST)”, Entropy, vol. 18, iss. 4, p. 129:1-15, 2016.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Heart rate variability (HRV) is a non-invasive measurement based on the intervals between normal heart beats that characterize cardiac autonomic function. Decreased HRV is associated with increased risk of cardiovascular events. Characterizing HRV using only moment statistics fails to capture abnormalities in regulatory function that are important aspects of disease risk. Thus, entropy measures are a promising approach to quantify HRV for risk stratification. The purpose of this study was to investigate this potential for approximate, corrected approximate, sample, fuzzy, and fuzzy measure entropy and its dependency on the parameter selection. Recently published parameter sets and further parameter combinations were investigated. Heart rate data were obtained from the "Cardiac Arrhythmia Suppression Trial (CAST) RR Interval Sub-Study Database" (Physionet). Corresponding outcomes and clinical data were provided by one of the investigators. The use of previously-reported parameter sets on the pre-treatment data did not significantly add to the identification of patients at risk for cardiovascular death on follow-up. After arrhythmia suppression treatment, several parameter sets predicted outcomes for all patients and patients without coronary artery bypass grafting (CABG). The strongest results were seen using the threshold parameter as a multiple of the data’s standard deviation (r = 0.2·σ). Approximate and sample entropy provided significant hazard ratios for patients without CABG and without diabetes for an entropy maximizing threshold approximation. Additional parameter combinations did not improve the results for pre-treatment data. The results of this study illustrate the influence of parameter selection on entropy measures’ potential for cardiovascular risk stratification and support the potential use of entropy measures in future studies.

    @article{MayerEtAl:2016:Entropy,
       year = {2016},
       author = {Mayer, C. and Bachler, M. and Holzinger, A. and Stein, P.K. and Wassertheurer, S. },
       title = {The Effect of Threshold Values and Weighting Factors on the Association between Entropy Measures and Mortality after Myocardial Infarction in the Cardiac Arrhythmia Suppression Trial (CAST)},
       journal = {Entropy},
       volume = {18},
       number = {4},
       pages = {129:1-15},
   abstract = {Heart rate variability (HRV) is a non-invasive measurement based on the intervals between normal heart beats that characterize cardiac autonomic function. Decreased HRV is associated with increased risk of cardiovascular events. Characterizing HRV using only moment statistics fails to capture abnormalities in regulatory function that are important aspects of disease risk. Thus, entropy measures are a promising approach to quantify HRV for risk stratification. The purpose of this study was to investigate this potential for approximate, corrected approximate, sample, fuzzy, and fuzzy measure entropy and its dependency on the parameter selection. Recently published parameter sets and further parameter combinations were investigated. Heart rate data were obtained from the "Cardiac Arrhythmia Suppression Trial (CAST) RR Interval Sub-Study Database" (Physionet). Corresponding outcomes and clinical data were provided by one of the investigators. The use of previously-reported parameter sets on the pre-treatment data did not significantly add to the identification of patients at risk for cardiovascular death on follow-up. After arrhythmia suppression treatment, several parameter sets predicted outcomes for all patients and patients without coronary artery bypass grafting (CABG). The strongest results were seen using the threshold parameter as a multiple of the data’s standard deviation (r = 0.2·σ). Approximate and sample entropy provided significant hazard ratios for patients without CABG and without diabetes for an entropy maximizing threshold approximation. Additional parameter combinations did not improve the results for pre-treatment data. The results of this study illustrate the influence of parameter selection on entropy measures’ potential for cardiovascular risk stratification and support the potential use of entropy measures in future studies.},
       doi = {10.3390/e18040129},
       url = {http://www.mdpi.com/1099-4300/18/4/129}
    }
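
    The threshold choice r = 0.2·σ can be made concrete with a compact sample entropy computation. The sketch below follows the standard textbook formulation of SampEn with the tolerance tied to the series’ standard deviation; it is illustrative only and not the authors’ implementation, and the RR-interval series is synthetic:

    import numpy as np

    def sample_entropy(x, m=2, r_factor=0.2):
        """Sample entropy SampEn(m, r, N) of a 1-D series, with the tolerance
        r = r_factor * std(x), matching the r = 0.2*sigma parameter choice
        examined in the paper. Standard formulation, illustrative only."""
        x = np.asarray(x, dtype=float)
        r = r_factor * np.std(x)
        n = len(x)

        def pair_count(length):
            # Embed the series into overlapping templates of this length.
            t = np.array([x[i:i + length] for i in range(n - length + 1)])
            count = 0
            for i in range(len(t) - 1):
                # Chebyshev distance from template i to all later templates.
                d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
                count += int(np.sum(d <= r))
            return count

        b, a = pair_count(m), pair_count(m + 1)  # matches of length m, m+1
        return -np.log(a / b) if a > 0 and b > 0 else float("inf")

    # Example on a noisy synthetic RR-interval series (milliseconds).
    rng = np.random.default_rng(1)
    rr = 800 + 50 * np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 20, 500)
    print("SampEn(m=2, r=0.2*sigma) = %.3f" % sample_entropy(rr))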

  • [MalleEtAl:2016:forgotten] B. Malle, P. Kieseberg, E. Weippl, and A. Holzinger, “The right to be forgotten: Towards Machine Learning on perturbed knowledge bases”, in Springer Lecture Notes in Computer Science LNCS 9817, Heidelberg, Berlin, New York: Springer, 2016, pp. 251-256.
    [BibTeX] [Abstract] [DOI]

    Today’s increasingly complex information infrastructures represent the basis of the data-driven industries which are rapidly becoming the 21st century’s economic backbone. The sensitivity of those infrastructures to disturbances in their knowledge bases is therefore of crucial interest for companies, organizations, customers and regulating bodies. This holds true with respect to the direct provisioning of such information in crucial applications like clinical settings or the energy industry, but also when considering additional insights, predictions and personalized services that are enabled by the automatic processing of those data. In the light of the new EU data protection regulations applying from 2018 onwards, which give customers the right to have their data deleted on request, information processing bodies will have to react to these changing jurisdictional (and therefore economic) conditions. Their choices include a re-design of their data infrastructure as well as preventive actions like anonymization of databases per default. Therefore, insights into the effects of perturbed/anonymized knowledge bases on the quality of machine learning results are a crucial basis for successfully facing those future challenges. In this paper we introduce a series of experiments we conducted applying four different classifiers to an established dataset, as well as to several distorted versions of it, and present our initial results.

    @incollection{MalleEtAl:2016:forgotten,
       year = {2016},
       author = {Malle, Bernd and Kieseberg, Peter and Weippl, Edgar and Holzinger, Andreas},
       title = {The right to be forgotten: Towards Machine Learning on perturbed knowledge bases},
       booktitle = {Springer Lecture Notes in Computer Science LNCS 9817},
       publisher = {Springer},
       address = {Heidelberg, Berlin, New York},
       pages = {251-256},
   abstract = {Today’s increasingly complex information infrastructures represent the basis of the data-driven industries which are rapidly becoming the 21st century’s economic backbone. The sensitivity of those infrastructures to disturbances in their knowledge bases is therefore of crucial interest for companies, organizations, customers and regulating bodies. This holds true with respect to the direct provisioning of such information in crucial applications like clinical settings or the energy industry, but also when considering additional insights, predictions and personalized services that are enabled by the automatic processing of those data. In the light of the new EU data protection regulations applying from 2018 onwards, which give customers the right to have their data deleted on request, information processing bodies will have to react to these changing jurisdictional (and therefore economic) conditions. Their choices include a re-design of their data infrastructure as well as preventive actions like anonymization of databases per default. Therefore, insights into the effects of perturbed/anonymized knowledge bases on the quality of machine learning results are a crucial basis for successfully facing those future challenges. In this paper we introduce a series of experiments we conducted applying four different classifiers to an established dataset, as well as to several distorted versions of it, and present our initial results.},
   keywords = {Machine learning, knowledge bases, right to be forgotten, perturbation, anonymization, k-anonymity, SaNGreeA, information loss, structural loss, cost weighing vector, interactive machine learning},
       doi = {10.1007/978-3-319-45507-5_17}
    }
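
    The experimental idea can be sketched as follows: train the same classifier on the original training data and on perturbed versions of it, then compare held-out accuracy. This is my own construction, with simple record deletion and noise injection standing in for the paper’s SaNGreeA-based anonymization pipeline, and the dataset and classifier are arbitrary choices:

    # Sketch of the experimental idea (my own construction; the paper's
    # anonymization pipeline is not reproduced): train the same classifier
    # on the original training data, on a version with ~30% of the records
    # deleted, and on a noise-perturbed version; compare held-out accuracy.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    def fit_score(X_train, y_train):
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        clf.fit(X_train, y_train)
        return accuracy_score(y_te, clf.predict(X_te))

    rng = np.random.default_rng(0)
    keep = rng.random(len(X_tr)) > 0.3        # delete ~30% of training records
    noisy = X_tr[keep] + rng.normal(0.0, X_tr.std(axis=0), X_tr[keep].shape)

    print("original : %.3f" % fit_score(X_tr, y_tr))
    print("deleted  : %.3f" % fit_score(X_tr[keep], y_tr[keep]))
    print("perturbed: %.3f" % fit_score(noisy, y_tr[keep]))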

  • [KiesebergEtAl:2016:PrivacyMLDocInLoop] P. Kieseberg, B. Malle, P. Fruehwirt, E. Weippl, and A. Holzinger, “A tamper-proof audit and control system for the doctor in the loop”, Brain Informatics, pp. 1-11, 2016.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The “doctor in the loop” is a new paradigm in information-driven medicine, picturing the doctor as authority inside a loop supplying an expert system with information on actual patients, treatment results, and possible additional (side-)effects, including general information in order to enhance data-driven medical science, as well as giving back treatment advice to the doctor himself. While this approach can be very beneficial for new medical approaches like P4 medicine (personal, predictive, preventive, and participatory), it also relies heavily on the authenticity of the data and thus increases the need for secure and reliable databases. In this paper, we propose a solution in order to protect the doctor in the loop against responsibility derived from manipulated data, thus enabling this new paradigm to gain acceptance in the medical community. This work is an extension of the conference paper by Kieseberg et al. (Brain Informatics and Health, 2015), which includes extensions to the original concept.

    @article{KiesebergEtAl:2016:PrivacyMLDocInLoop,
       year = {2016},
       author = {Kieseberg, Peter and Malle, Bernd and Fruehwirt, Peter and Weippl, Edgar and Holzinger, Andreas},
       title = {A tamper-proof audit and control system for the doctor in the loop},
       journal = {Brain Informatics},
       pages = {1-11},
   abstract = {The “doctor in the loop” is a new paradigm in information-driven medicine, picturing the doctor as authority inside a loop supplying an expert system with information on actual patients, treatment results, and possible additional (side-)effects, including general information in order to enhance data-driven medical science, as well as giving back treatment advice to the doctor himself. While this approach can be very beneficial for new medical approaches like P4 medicine (personal, predictive, preventive, and participatory), it also relies heavily on the authenticity of the data and thus increases the need for secure and reliable databases. In this paper, we propose a solution in order to protect the doctor in the loop against responsibility derived from manipulated data, thus enabling this new paradigm to gain acceptance in the medical community. This work is an extension of the conference paper by Kieseberg et al. (Brain Informatics and Health, 2015), which includes extensions to the original concept.},
       doi = {10.1007/s40708-016-0046-2},
       url = {http://dx.doi.org/10.1007/s40708-016-0046-2}
    }

  • [JeanquartierEtAl:2016:OpenData] F. Jeanquartier, C. Jean-Quartier, T. Schreck, D. Cemernek, and A. Holzinger, “Integrating Open Data on Cancer in Support to Tumor Growth Analysis”, in Lecture Notes in Computer Science LNCS 9832, E. M. Renda, M. Bursa, A. Holzinger, and S. Khuri, Eds., Cham: Springer, 2016, pp. 49-66.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The general disease group of malignant neoplasms represents one of the leading and increasing causes of death. The underlying complexity of cancer demands abstractions to disclose an exclusive subset of information related to the disease. Our idea is to create a user interface for linking a simulation of cancer modeling to relevant additional publicly and freely available data. We not only provide a categorized list of open datasets and queryable databases for the different types of cancer and related information, we also identify a certain subset of temporal and spatial data related to tumor growth. Furthermore, we describe the integration possibilities into a simulation tool on tumor growth that incorporates the tumor’s kinetics.

    @incollection{JeanquartierEtAl:2016:OpenData,
       year = {2016},
       author = {Jeanquartier, Fleur and Jean-Quartier, Claire and Schreck, Tobias and Cemernek, David and Holzinger, Andreas},
       title = {Integrating Open Data on Cancer in Support to Tumor Growth Analysis},
       booktitle = {Lecture Notes in Computer Science LNCS 9832},
       editor = {Renda, Elena M. and Bursa, Miroslav and Holzinger, Andreas and Khuri, Sami},
       publisher = {Springer},
       address = {Cham},
       pages = {49-66},
       abstract = {The general disease group of malignant neoplasms depicts one of the leading and increasing causes for death. The underlying complexity of cancer demands for abstractions to disclose an exclusive subset of information related to the disease. Our idea is to create a user interface for linking a simulation on cancer modeling to relevant additional publicly and freely available data. We are not only providing a categorized list of open datasets and queryable databases for the different types of cancer and related information, we also identify a certain subset of temporal and spatial data related to tumor growth. Furthermore, we describe the integration possibilities into a simulation tool on tumor growth that incorporates the tumor’s kinetics.},
       keywords = {Open data, Data integration, Cancer, Tumor growth, Data Visualization, Simulation},
       doi = {10.1007/978-3-319-43949-5_4},
       url = {http://dx.doi.org/10.1007/978-3-319-43949-5_4}
    }

  • [HundEtAl:2016:VisualAnalyticsDocInLoop] M. Hund, D. Boehm, W. Sturm, M. Sedlmair, T. Schreck, T. Ullrich, D. A. Keim, L. Majnaric, and A. Holzinger, “Visual analytics for concept exploration in subspaces of patient groups: Making sense of complex datasets with the Doctor-in-the-loop“, Brain Informatics, vol. 3, iss. 4, pp. 233-247, 2016.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Medical doctors and researchers in bio-medicine are increasingly confronted with complex patient data, posing new and difficult analysis challenges. These data often comprise high-dimensional descriptions of patient conditions and measurements of the success of certain therapies. An important analysis question in such data is to compare and correlate patient conditions and therapy results along with combinations of dimensions. As the number of dimensions is often very large, one needs to map them to a smaller number of relevant dimensions to make them more amenable to expert analysis. This is because irrelevant, redundant, and conflicting dimensions can negatively affect effectiveness and efficiency of the analytic process (the so-called curse of dimensionality). However, the possible mappings from high- to low-dimensional spaces are ambiguous. For example, the similarity between patients may change by considering different combinations of relevant dimensions (subspaces). We demonstrate the potential of subspace analysis for the interpretation of high-dimensional medical data. Specifically, we present SubVIS, an interactive tool to visually explore subspace clusters from different perspectives, introduce a novel analysis workflow, and discuss future directions for high-dimensional (medical) data analysis and its visual exploration. We apply the presented workflow to a real-world dataset from the medical domain and show its usefulness with a domain expert evaluation.

    @article{HundEtAl:2016:VisualAnalyticsDocInLoop,
       year = {2016},
       author = {Hund, Michael and Boehm, Dominic and Sturm, Werner and Sedlmair, Michael and Schreck, Tobias and Ullrich, Torsten and Keim, Daniel A. and Majnaric, Ljiljana and Holzinger, Andreas},
       title = {Visual analytics for concept exploration in subspaces of patient groups: Making sense of complex datasets with the Doctor-in-the-loop},
       journal = {Brain Informatics},
       volume = {3},
       number = {4},
       pages = {233-247},
       abstract = {Medical doctors and researchers in bio-medicine are increasingly confronted with complex patient data, posing new and difficult analysis challenges. These data often comprise high-dimensional descriptions of patient conditions and measurements of the success of certain therapies. An important analysis question in such data is to compare and correlate patient conditions and therapy results along with combinations of dimensions. As the number of dimensions is often very large, one needs to map them to a smaller number of relevant dimensions to make them more amenable to expert analysis. This is because irrelevant, redundant, and conflicting dimensions can negatively affect effectiveness and efficiency of the analytic process (the so-called curse of dimensionality). However, the possible mappings from high- to low-dimensional spaces are ambiguous. For example, the similarity between patients may change by considering different combinations of relevant dimensions (subspaces). We demonstrate the potential of subspace analysis for the interpretation of high-dimensional medical data. Specifically, we present SubVIS, an interactive tool to visually explore subspace clusters from different perspectives, introduce a novel analysis workflow, and discuss future directions for high-dimensional (medical) data analysis and its visual exploration. We apply the presented workflow to a real-world dataset from the medical domain and show its usefulness with a domain expert evaluation.},
       keywords = {Knowledge discovery and exploration, Visual analytics, Subspace clustering, Subspace analysis, Subspace exploration and comparison},
       doi = {10.1007/s40708-016-0043-5},
       url = {http://dx.doi.org/10.1007/s40708-016-0043-5}
    }
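
    A toy illustration of why the subspace view above matters: the same patients may cluster cleanly in one dimension subset and poorly in another. The sketch below (a generic illustration assuming scikit-learn and NumPy, with invented data; it is not the SubVIS workflow) scores every two-dimensional subspace by silhouette value.

    import numpy as np
    from itertools import combinations
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    # Toy "patient" data: 100 patients, 6 measured dimensions.
    X = rng.normal(size=(100, 6))
    X[:50, 0] += 3.0  # dimensions 0 and 1 separate two patient groups
    X[:50, 1] += 3.0

    # Score every 2-dimensional subspace by how cleanly it clusters.
    scores = {}
    for dims in combinations(range(X.shape[1]), 2):
        sub = X[:, dims]
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(sub)
        scores[dims] = silhouette_score(sub, labels)

    best = max(scores, key=scores.get)
    print("best subspace:", best, "silhouette:", round(scores[best], 3))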

  • [Holzinger:2016:NIPSiML] A. Holzinger, M. Plass, and M. D. Kickmeier-Rust, “Interactive Machine Learning (iML): a challenge for Game-based approaches“, in Challenges in Machine Learning: Gaming and Education, 2016.
    [BibTeX] [Abstract]

    The goal of the ML-community is to design and develop algorithms which can learn from data and improve with experience over time. However, the application of such automatic machine learning (aML) approaches in the complex biomedical domain seems elusive in the near future, and a good example is Gaussian processes, where aML (e.g. standard kernel machines) struggle on function extrapolation problems – which are trivial for human learners. Psychological research indicates that human intuition is not an ’esoteric’ concept; rather, it is based on distinct behavioral and cognitive strategies that developed evolutionarily over millions of years. For improving ML, we need to identify the concrete mechanisms, and we argue that this can be done best by observing crowd behaviors and decisions in gamified situations. We have proved experimentally that the iML approach can be used to advance current Traveling Salesman Problem (TSP) solving methods. TSP is important, as it appears in a number of practical problems in biomedical informatics; e.g., the native folded three-dimensional conformation of a protein is its lowest free energy state, and both two- and three-dimensional folding processes, as free energy minimization problems, belong to a large set of computational problems assumed to be very hard (conditionally intractable).

    @inproceedings{Holzinger:2016:NIPSiML,
       year = {2016},
       author = {Holzinger, Andreas and Plass, Markus and Kickmeier-Rust, Michael D.},
       title = {Interactive Machine Learning (iML): a challenge for Game-based approaches},
       booktitle = {Challenges in Machine Learning: Gaming and Education},
       editor = {Guyon, Isabelle and Viegas, Evelyne and Escalera, Sergio and Hamner, Ben and Kegl, Balasz},
       publisher = {NIPS Workshops},
       abstract = {The goal of the ML-community is to design and develop algorithms which can learn from data and improve with experience over time. However, the application of such automatic machine learning (aML) approaches in the complex biomedical domain seems elusive in the near future, and a good example is Gaussian processes, where aML (e.g. standard kernel machines) struggle on function extrapolation problems – which are trivial for human learners. Psychological research indicates that human intuition is not an ’esoteric’ concept; rather, it is based on distinct behavioral and cognitive strategies that developed evolutionarily over millions of years. For improving ML, we need to identify the concrete mechanisms, and we argue that this can be done best by observing crowd behaviors and decisions in gamified situations. We have proved experimentally that the iML approach can be used to advance current Traveling Salesman Problem (TSP) solving methods. TSP is important, as it appears in a number of practical problems in biomedical informatics; e.g., the native folded three-dimensional conformation of a protein is its lowest free energy state, and both two- and three-dimensional folding processes, as free energy minimization problems, belong to a large set of computational problems assumed to be very hard (conditionally intractable).},
       keywords = {gaming, gamification, machine learning}
    }

  • [Jeanquartier:2016:InsilicoTumorGrowth] F. Jeanquartier, C. Jean-Quartier, D. Cemernek, and A. Holzinger, “In silico modeling for tumor growth visualization“, BMC Systems Biology, vol. 10, iss. 1, pp. 1-15, 2016.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Cancer is a complex disease. Fundamental cellular based studies as well as modeling provides insight into cancer biology and strategies to treatment of the disease. In silico models complement in vivo models. Research on tumor growth involves a plethora of models each emphasizing isolated aspects of benign and malignant neoplasms. Biologists and clinical scientists are often overwhelmed by the mathematical background knowledge necessary to grasp and to apply a model to their own research.

    @article{Jeanquartier:2016:InsilicoTumorGrowth,
       year = {2016},
       author = {Jeanquartier, Fleur and Jean-Quartier, Claire and Cemernek, David and Holzinger, Andreas},
       title = {In silico modeling for tumor growth visualization},
       journal = {BMC Systems Biology},
       volume = {10},
       number = {1},
       pages = {1-15},
       abstract = {Cancer is a complex disease. Fundamental cellular based studies as well as modeling provides insight into cancer biology and strategies to treatment of the disease. In silico models complement in vivo models. Research on tumor growth involves a plethora of models each emphasizing isolated aspects of benign and malignant neoplasms. Biologists and clinical scientists are often overwhelmed by the mathematical background knowledge necessary to grasp and to apply a model to their own research.},
       doi = {10.1186/s12918-016-0318-8},
       url = {http://dx.doi.org/10.1186/s12918-016-0318-8}
    }
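
    For readers unfamiliar with the model families surveyed above, one classic example, chosen here purely for illustration and not taken from the paper, is the Gompertz model dV/dt = a * V * ln(K / V), which describes tumor growth that decelerates toward a carrying capacity K. A minimal forward-Euler sketch:

    import math

    def gompertz_growth(v0: float, a: float, K: float, t_end: float, dt: float = 0.01):
        """Integrate dV/dt = a * V * ln(K / V) with forward Euler.

        v0: initial tumor volume, a: growth rate, K: carrying capacity.
        Returns (times, volumes).
        """
        times, volumes = [0.0], [v0]
        v, t = v0, 0.0
        while t < t_end:
            v += dt * a * v * math.log(K / v)
            t += dt
            times.append(t)
            volumes.append(v)
        return times, volumes

    ts, vs = gompertz_growth(v0=1.0, a=0.5, K=1000.0, t_end=30.0)
    print(f"V(30) ≈ {vs[-1]:.1f} (saturates toward K = 1000)")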

  • [YimamEtAl:2016:AdaptiveAnnotation] S. M. Yimam, C. Biemann, L. Majnaric, S. Sabanovic, and A. Holzinger, “An adaptive annotation approach for biomedical entity and relation recognition“, Brain Informatics, vol. 3, iss. 3, pp. 157-168, 2016.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    In this article, we demonstrate the impact of interactive machine learning: we develop a biomedical entity recognition dataset using a human-into-the-loop approach. In contrast to classical machine learning, human-in-the-loop approaches do not operate on predefined training or test sets, but assume that human input regarding system improvement is supplied iteratively. Here, during annotation, a machine learning model is built on previous annotations and used to propose labels for subsequent annotation. To demonstrate that such interactive and iterative annotation speeds up the development of quality dataset annotation, we conduct three experiments. In the first experiment, we carry out an iterative annotation experimental simulation and show that only a handful of medical abstracts need to be annotated to produce suggestions that increase annotation speed. In the second experiment, clinical doctors have conducted a case study in annotating medical terms documents relevant for their research. The third experiment explores the annotation of semantic relations with relation instance learning across documents. The experiments validate our method qualitatively and quantitatively, and give rise to a more personalized, responsive information extraction technology.

    @article{YimamEtAl:2016:AdaptiveAnnotation,
       year = {2016},
       author = {Yimam, Seid Muhie and Biemann, Chris and Majnaric, Ljiljana and Sabanovic, Sefket and Holzinger, Andreas},
       title = {An adaptive annotation approach for biomedical entity and relation recognition},
       journal = {Brain Informatics},
       volume = {3},
       number = {3},
       pages = {157-168},
       abstract = {In this article, we demonstrate the impact of interactive machine learning: we develop a biomedical entity recognition dataset using a human-into-the-loop approach. In contrast to classical machine learning, human-in-the-loop approaches do not operate on predefined training or test sets, but assume that human input regarding system improvement is supplied iteratively. Here, during annotation, a machine learning model is built on previous annotations and used to propose labels for subsequent annotation. To demonstrate that such interactive and iterative annotation speeds up the development of quality dataset annotation, we conduct three experiments. In the first experiment, we carry out an iterative annotation experimental simulation and show that only a handful of medical abstracts need to be annotated to produce suggestions that increase annotation speed. In the second experiment, clinical doctors have conducted a case study in annotating medical terms documents relevant for their research. The third experiment explores the annotation of semantic relations with relation instance learning across documents. The experiments validate our method qualitatively and quantitatively, and give rise to a more personalized, responsive information extraction technology.},
       keywords = {Interactive Machine learning, health informatics, entity recognition, relation learning},
       doi = {10.1007/s40708-016-0036-4},
       url = {http://dx.doi.org/10.1007/s40708-016-0036-4}
    }
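
    The annotation loop described above follows a generic pattern: train on whatever is labeled so far, pre-annotate the next batch, and let the human accept or correct. Below is a toy sketch of that pattern with scikit-learn; the corpus, the features, and the automatic "human" correction step are placeholders, not the authors' actual system.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Placeholder corpus: snippets to label as entity-bearing (1) or not (0).
    pool = ["diabetes mellitus type 2", "the patient arrived", "acute myeloid leukemia",
            "see attached form", "chronic kidney disease", "signature missing"]
    labeled, labels = pool[:2], [1, 0]  # tiny seed set
    unlabeled = pool[2:]

    model = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                          LogisticRegression())

    while unlabeled:
        model.fit(labeled, labels)
        batch, unlabeled = unlabeled[:2], unlabeled[2:]
        suggestions = model.predict(batch)  # pre-annotation shown to the human
        for text, suggested in zip(batch, suggestions):
            corrected = int(suggested)      # stand-in for human accept/correct
            labeled.append(text)
            labels.append(corrected)

    print(f"annotated {len(labeled)} items with model-assisted suggestions")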

  • [HolzingerEtAl:2016:iMLExperiment] A. Holzinger, M. Plass, K. Holzinger, G. Crisan, C. Pintea, and V. Palade, “Towards interactive Machine Learning (iML): Applying Ant Colony Algorithms to solve the Traveling Salesman Problem with the Human-in-the-Loop approach“, in Springer Lecture Notes in Computer Science LNCS 9817, Heidelberg, Berlin, New York: Springer, 2016, pp. 81-95.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Most Machine Learning (ML) researchers focus on automatic Machine Learning (aML) where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from the availability of "big data". However, sometimes, for example in health informatics, we are confronted with a small number of data sets or rare events, and with complex problems where aML-approaches fail or deliver unsatisfactory results. Here, interactive Machine Learning (iML) may be of help and the "human-in-the-loop" approach may be beneficial in solving computationally hard problems, where human expertise can help to reduce an exponential search space through heuristics. In this paper, experiments are discussed which help to evaluate the effectiveness of the iML-"human-in-the-loop" approach, particularly in opening the "black box", thereby enabling a human to directly and indirectly manipulate and interact with an algorithm. For this purpose, we selected the Ant Colony Optimization (ACO) framework and use it on the Traveling Salesman Problem (TSP), which is of high importance in solving many practical problems in health informatics, e.g. in the study of proteins.

    @incollection{HolzingerEtAl:2016:iMLExperiment,
       year = {2016},
       author = {Holzinger, Andreas and Plass, Markus and Holzinger, Katharina and Crisan, Gloria Cerasela and Pintea, Camelia-M. and Palade, Vasile},
       title = {Towards interactive Machine Learning (iML): Applying Ant Colony Algorithms to solve the Traveling Salesman Problem with the Human-in-the-Loop approach},
       booktitle = {Springer Lecture Notes in Computer Science LNCS 9817},
       publisher = {Springer},
       address = {Heidelberg, Berlin, New York},
       pages = {81-95},
       abstract = {Most Machine Learning (ML) researchers focus on automatic Machine Learning (aML) where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from the availability of "big data". However, sometimes, for example in health informatics, we are confronted with a small number of data sets or rare events, and with complex problems where aML-approaches fail or deliver unsatisfactory results. Here, interactive Machine Learning (iML) may be of help and the "human-in-the-loop" approach may be beneficial in solving computationally hard problems, where human expertise can help to reduce an exponential search space through heuristics. In this paper, experiments are discussed which help to evaluate the effectiveness of the iML-"human-in-the-loop" approach, particularly in opening the "black box", thereby enabling a human to directly and indirectly manipulate and interact with an algorithm. For this purpose, we selected the Ant Colony Optimization (ACO) framework and use it on the Traveling Salesman Problem (TSP), which is of high importance in solving many practical problems in health informatics, e.g. in the study of proteins.},
       keywords = {interactive Machine Learning, Human-in-the-loop, Traveling Salesman Problem, Ant Colony Optimization},
       doi = {10.1007/978-3-319-45507-5_6},
       url = {http://rd.springer.com/chapter/10.1007/978-3-319-45507-5_6}
    }
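
    For illustration, a generic textbook-style Ant Colony Optimization loop for the TSP is sketched below in plain Python on an invented toy instance; in the iML setting of the paper, the human intervention point would roughly correspond to letting a user adjust the pheromone matrix between iterations. This is a sketch of the general technique, not the authors' implementation.

    import random
    import math

    def tour_length(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def aco_tsp(dist, n_ants=20, n_iter=100, alpha=1.0, beta=3.0, rho=0.5, q=100.0):
        """Generic ACO for TSP: pheromone-biased tour construction plus evaporation."""
        n = len(dist)
        tau = [[1.0] * n for _ in range(n)]  # pheromone levels
        best_tour, best_len = None, float("inf")
        for _ in range(n_iter):
            tours = []
            for _ in range(n_ants):
                start = random.randrange(n)
                tour, unvisited = [start], set(range(n)) - {start}
                while unvisited:
                    i = tour[-1]
                    # Roulette-wheel choice weighted by pheromone and closeness.
                    weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                               for j in unvisited]
                    total = sum(w for _, w in weights)
                    r, acc = random.random() * total, 0.0
                    for j, w in weights:
                        acc += w
                        if acc >= r:
                            break
                    tour.append(j)
                    unvisited.remove(j)
                tours.append((tour, tour_length(tour, dist)))
            # Evaporation, then deposit proportional to tour quality.
            tau = [[(1 - rho) * t for t in row] for row in tau]
            for tour, length in tours:
                for i in range(len(tour)):
                    a, b = tour[i], tour[(i + 1) % len(tour)]
                    tau[a][b] += q / length
                    tau[b][a] += q / length
                if length < best_len:
                    best_tour, best_len = tour, length
        return best_tour, best_len

    # Random symmetric instance with 12 "cities".
    pts = [(random.random(), random.random()) for _ in range(12)]
    dist = [[math.dist(p, p2) or 1e-9 for p2 in pts] for p in pts]
    print(aco_tsp(dist))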

  • [Holzinger:2016:MachineLearningHealthInformatics] A. Holzinger, Machine Learning for Health Informatics: State-of-the-Art and Future Challenges, Lecture Notes in Artificial Intelligence LNAI 9605, Cham: Springer, 2016.
    [BibTeX] [Abstract]

    Machine learning (ML) is the fastest growing field in computer science, and Health Informatics (HI) is amongst the greatest application challenges, providing future benefits in improved medical diagnoses, disease analyses, and pharmaceutical development. However, successful ML for HI needs a concerted effort, fostering integrative research between experts ranging from diverse disciplines from data science to visualization. Tackling complex challenges needs both disciplinary excellence and cross-disciplinary networking without any boundaries. Following the HCI-KDD approach, in combining the best of two worlds, it is aimed to support human intelligence with machine intelligence. This state-of-the-art survey is an output of the international HCI-KDD expert network and features 22 carefully selected and peer-reviewed chapters on hot topics in machine learning for health informatics; they discuss open problems and future challenges in order to stimulate further research and international progress in this field.

    @book{Holzinger:2016:MachineLearningHealthInformatics,
       year = {2016},
       author = {Holzinger, Andreas},
       title = {Machine Learning for Health Informatics: State-of-the-Art and Future Challenges, Lecture Notes in Artificial Intelligence LNAI 9605},
       publisher = {Springer},
       address = {Cham},
       abstract = {Machine learning (ML) is the fastest growing field in computer science, and Health Informatics (HI) is amongst the greatest application challenges, providing future benefits in improved medical diagnoses, disease analyses, and pharmaceutical development. However, successful ML for HI needs a concerted effort, fostering integrative research between experts ranging from diverse disciplines from data science to visualization. Tackling complex challenges needs both disciplinary excellence and cross-disciplinary networking without any boundaries. Following the HCI-KDD approach, in combining the best of two worlds, it is aimed to support human intelligence with machine intelligence. This state-of-the-art survey is an output of the international HCI-KDD expert network and features 22 carefully selected and peer-reviewed chapters on hot topics in machine learning for health informatics; they discuss open problems and future challenges in order to stimulate further research and international progress in this field.}
    }

  • [Holzinger:2016:iML] A. Holzinger, “Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop?“, Brain Informatics, vol. 3, iss. 2, pp. 119-131, 2016.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Machine learning (ML) is the fastest growing field in computer science, and health informatics is amongst the greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic Machine Learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. However, in the health domain, sometimes we are confronted with a small number of data sets or rare events, where aML-approaches suffer of insufficient training samples. Here interactive Machine Learning (iML) may be of help, having its roots in Reinforcement Learning (RL), Preference Learning (PL) and Active Learning (AL). The term iML is not yet well used, so we define it as algorithms that can interact with agents and can optimize their learning behaviour through these interactions, where the agents can also be human. This human-in-the-loop can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem, reduces greatly in complexity through the input and the assistance of a human agent involved in the learning phase.

    @article{Holzinger:2016:iML,
       year = {2016},
       author = {Holzinger, Andreas},
       title = {Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop?},
       journal = {Brain Informatics},
       volume = {3},
       number = {2},
       pages = {119-131},
       abstract = {Machine learning (ML) is the fastest growing field in computer science, and health informatics is amongst the greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic Machine Learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. However, in the health domain, sometimes we are confronted with a small number of data sets or rare events, where aML-approaches suffer of insufficient training samples. Here interactive Machine Learning (iML) may be of help, having its roots in Reinforcement Learning (RL), Preference Learning (PL) and Active Learning (AL). The term iML is not yet well used, so we define it as algorithms that can interact with agents and can optimize their learning behaviour through these interactions, where the agents can also be human. This human-in-the-loop can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem, reduces greatly in complexity through the input and the assistance of a human agent involved in the learning phase.},
       keywords = {interactive Machine learning, health informatics},
       doi = {10.1007/s40708-016-0042-6},
       url = {http://dx.doi.org/10.1007/s40708-016-0042-6}
    }

  • [Girardi:2016:iKDDdocInLoop] D. Girardi, J. Küng, R. Kleiser, M. Sonnberger, D. Csillag, J. Trenkler, and A. Holzinger, “Interactive knowledge discovery with the doctor-in-the-loop: a practical example of cerebral aneurysms research“, Brain Informatics, vol. 3, iss. 3, pp. 133-143, 2016.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Established process models for knowledge discovery find the domain-expert in a customer-like and supervising role. In the field of biomedical research, it is necessary to move the domain-experts into the center of this process with far-reaching consequences for both their research output and the process itself. In this paper, we revise the established process models for knowledge discovery and propose a new process model for domain-expert-driven interactive knowledge discovery. Furthermore, we present a research infrastructure which is adapted to this new process model and demonstrate how the domain-expert can be deeply integrated even into the highly complex data-mining process and data-exploration tasks. We evaluated this approach in the medical domain for the case of cerebral aneurysms research.

    @article{Girardi:2016:iKDDdocInLoop,
       year = {2016},
       author = {Girardi, Dominic and Küng, Josef and Kleiser, Raimund and Sonnberger, Michael and Csillag, Doris and Trenkler, Johannes and Holzinger, Andreas},
       title = {Interactive knowledge discovery with the doctor-in-the-loop: a practical example of cerebral aneurysms research},
       journal = {Brain Informatics},
       volume = {3},
       number = {3},
       pages = {133-143},
       abstract = {Established process models for knowledge discovery find the domain-expert in a customer-like and supervising role. In the field of biomedical research, it is necessary to move the domain-experts into the center of this process with far-reaching consequences for both their research output and the process itself. In this paper, we revise the established process models for knowledge discovery and propose a new process model for domain-expert-driven interactive knowledge discovery. Furthermore, we present a research infrastructure which is adapted to this new process model and demonstrate how the domain-expert can be deeply integrated even into the highly complex data-mining process and data-exploration tasks. We evaluated this approach in the medical domain for the case of cerebral aneurysms research.},
       keywords = {Doctor-in-the-loop, Expert-in-the-loop, Interactive machine learning, Process model, Knowledge discovery, Medical research},
       doi = {10.1007/s40708-016-0038-2},
       url = {http://dx.doi.org/10.1007/s40708-016-0038-2}
    }

  • [Holzinger:2015:definitionIML] A. Holzinger, “Interactive Machine Learning (iML)“, Informatik Spektrum, vol. 39, iss. 1, pp. 64-68, 2016.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    While machine learning (ML) works very well in many domains, as the performance of self-driving cars shows, fully automated ML methods in complex domains carry the risk of modelling artifacts. One example of a complex domain is biomedicine, where we are confronted with high-dimensional, probabilistic and incomplete data sets. In such problem settings it can be advantageous not to forgo human domain knowledge, but rather to combine human intelligence with ML. Humans are often still superior to many algorithms, for example in the instinctive, almost instantaneous interpretation of complex patterns. Despite this obvious finding, there are to date hardly any quantitative evaluation studies of the effectiveness and efficiency of algorithms that interact with agents, including human agents. Moreover, there is hardly any evidence of how such interaction can actually optimize the learning behaviour of algorithms, even though such “natural” intelligent agents are available in large numbers. This opens up a wealth of interesting future research topics.

    @article{Holzinger:2015:definitionIML,
       year = {2016},
       author = {Holzinger, Andreas},
       title = {Interactive Machine Learning (iML)},
       journal = {Informatik Spektrum},
       volume = {39},
       number = {1},
       pages = {64-68},
       abstract = {While machine learning (ML) works very well in many domains, as the performance of self-driving cars shows, fully automated ML methods in complex domains carry the risk of modelling artifacts. One example of a complex domain is biomedicine, where we are confronted with high-dimensional, probabilistic and incomplete data sets. In such problem settings it can be advantageous not to forgo human domain knowledge, but rather to combine human intelligence with ML. Humans are often still superior to many algorithms, for example in the instinctive, almost instantaneous interpretation of complex patterns. Despite this obvious finding, there are to date hardly any quantitative evaluation studies of the effectiveness and efficiency of algorithms that interact with agents, including human agents. Moreover, there is hardly any evidence of how such interaction can actually optimize the learning behaviour of algorithms, even though such “natural” intelligent agents are available in large numbers. This opens up a wealth of interesting future research topics.},
       doi = {10.1007/s00287-015-0941-6},
       url = {https://pure.tugraz.at/portal/files/3107876/iML.pdf}
    }

  • [DehmerEtAl:2016:BigDataComplexNetworks] M. Dehmer, F. Emmert-Streib, S. Pickl, and A. Holzinger, Big Data of Complex Networks, Boca Raton, London, New York: CRC Press Taylor & Francis Group, 2016.
    [BibTeX] [Abstract] [Download PDF]

    Big Data of Complex Networks presents and explains the methods from the study of big data that can be used in analysing massive structural data sets, including both very large networks and sets of graphs. As well as applying statistical analysis techniques like sampling and bootstrapping in an interdisciplinary manner to produce novel techniques for analyzing massive amounts of data, this book also explores the possibilities offered by the special aspects such as computer memory in investigating large sets of complex networks. Intended for computer scientists, statisticians and mathematicians interested in the big data and networks, Big Data of Complex Networks is also a valuable tool for researchers in the fields of visualization, data analysis, computer vision and bioinformatics.

    @book{DehmerEtAl:2016:BigDataComplexNetworks,
       year = {2016},
       author = {Dehmer, Matthias and Emmert-Streib, Frank and Pickl, Stefan and Holzinger, Andreas},
       title = {Big Data of Complex Networks},
       publisher = {CRC Press Taylor & Francis Group},
       address = {Boca Raton, London, New York},
       abstract = {Big Data of Complex Networks presents and explains the methods from the study of big data that can be used in analysing massive structural data sets, including both very large networks and sets of graphs. As well as applying statistical analysis techniques like sampling and bootstrapping in an interdisciplinary manner to produce novel techniques for analyzing massive amounts of data, this book also explores the possibilities offered by the special aspects such as computer memory in investigating large sets of complex networks. Intended for computer scientists, statisticians and mathematicians interested in the big data and networks, Big Data of Complex Networks is also a valuable tool for researchers in the fields of visualization, data analysis, computer vision and bioinformatics.},
       keywords = {Big Data, network science, data science, graphs, combinatorics, machine learning},
       url = {https://www.crcpress.com/Big-Data-of-Complex-Networks/Dehmer-Emmert-Streib-Pickl-Holzinger/p/book/9781498723619}
    }
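
    One recurring theme of the book, estimating properties of a network that is too large to process in full by sampling nodes, can be illustrated generically; the sketch below assumes networkx, and a toy graph stands in for a massive one.

    import random
    import networkx as nx

    # Toy stand-in for a massive network.
    G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)

    # Estimate the mean degree from a uniform node sample instead of the full graph.
    sample = random.Random(0).sample(list(G.nodes), k=500)
    est = sum(G.degree(v) for v in sample) / len(sample)
    true = sum(d for _, d in G.degree()) / G.number_of_nodes()
    print(f"estimated mean degree {est:.2f} vs true {true:.2f}")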

  • [MAL-i1ok] P. Kieseberg, E. Weippl, and A. Holzinger, “Trust for the Doctor-in-the-Loop“, European Research Consortium for Informatics and Mathematics (ERCIM) News: Tackling Big Data in the Life Sciences, vol. 104, iss. 1, pp. 32-33, 2016.
    [BibTeX] [Abstract] [Download PDF]

    The "doctor in the loop" is a new paradigm in information driven medicine, picturing the doctor as authority inside a loop supplying an expert system with data and information. Before this paradigm is implemented in real environments, the trustworthiness of the system must be assured. The “doctor in the loop” is a new paradigm in information driven medicine, picturing the doctor as authority inside a loop with an expert system in order to support the (automated) decision making with expert knowledge. This information not only includes support in pattern finding and supplying external knowledge, but the inclusion of data on actual patients, as well as treatment results and possible additional (side-) effects that relate to previous decisions of this semi-automated system. The concept of the "doctor in the loop" is basically an extension of the increasingly frequent use of knowledge discovery for the enhancement of medical treatments together with the “human in the loop” concept (see [1], for instance): The expert knowledge of the doctor is incorporated into "intelligent" systems (e.g., using interactive machine learning) and enriched with additional information and expert know-how. Using machine learning algorithms, medical knowledge and optimal treatments are identified. This knowledge is then fed back to the doctor to assist him/her.

    @article{MAL-i1ok,
       year = {2016},
       author = {Kieseberg, Peter and Weippl, Edgar and Holzinger, Andreas},
       title = {Trust for the Doctor-in-the-Loop},
       journal = {European Research Consortium for Informatics and Mathematics (ERCIM) News: Tackling Big Data in the Life Sciences},
       volume = {104},
       number = {1},
       pages = {32-33},
       abstract = {The "doctor in the loop" is a new paradigm in information driven medicine, picturing the doctor as authority inside a loop supplying an expert system with data and information. Before this paradigm is implemented in real environments, the trustworthiness of the system must be assured. The “doctor in the loop” is a new paradigm in information driven medicine, picturing the doctor as authority inside a loop with an expert system in order to support the (automated) decision making with expert knowledge. This information not only includes support in pattern finding and supplying external knowledge, but the inclusion of data on actual patients, as well as treatment results and possible additional (side-) effects that relate to previous decisions of this semi-automated system. The concept of the "doctor in the loop" is basically an extension of the increasingly frequent use of knowledge discovery for the enhancement of medical treatments together with the “human in the loop” concept (see [1], for instance): The expert knowledge of the doctor is incorporated into "intelligent" systems (e.g., using interactive machine learning) and enriched with additional information and expert know-how. Using machine learning algorithms, medical knowledge and optimal treatments are identified. This knowledge is then fed back to the doctor to assist him/her. },
       keywords = {Privacy Aware Machine Learning, Human-in-the-Loop, interactive ML},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1391379&pCurrPk=90766}
    }

2015

  • [DAV-j56ok] F. Jeanquartier, C. Jean-Quartier, and A. Holzinger, “Integrated Web visualizations for protein-protein interaction databases“, BMC Bioinformatics, vol. 16, iss. 1, p. 195, 2015.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Background: Understanding living systems is crucial for curing diseases. To achieve this task we have to understand biological networks based on protein-protein interactions. Bioinformatics has come up with a great amount of databases and tools that support analysts in exploring protein-protein interactions on an integrated level for knowledge discovery. They provide predictions and correlations, indicate possibilities for future experimental research and fill the gaps to complete the picture of biochemical processes. There are numerous and huge databases of protein-protein interactions used to gain insights into answering some of the many questions of systems biology. Many computational resources integrate interaction data with additional information on molecular background. However, the vast number of diverse Bioinformatics resources poses an obstacle to the goal of understanding. We present a survey of databases that enable the visual analysis of protein networks. Results: We selected M = 10 out of N = 53 resources supporting visualization, and we tested against the following set of criteria: interoperability, data integration, quantity of possible interactions, data visualization quality and data coverage. The study reveals differences in usability, visualization features and quality as well as the quantity of interactions. StringDB is the recommended first choice. CPDB presents a comprehensive dataset and IntAct lets the user change the network layout. A comprehensive comparison table is available via web. The supplementary table can be accessed on tinyurl.com/PPI-DB-Comparison-2015. Conclusions: Only some web resources featuring graph visualization can be successfully applied to interactive visual analysis of protein-protein interaction. Study results underline the necessity for further enhancements of visualization integration in biochemical analysis tools. Identified challenges are data comprehensiveness, confidence, interactive feature and visualization maturing.

    @article{DAV-j56ok,
       year = {2015},
       author = {Jeanquartier, Fleur and Jean-Quartier, Claire and Holzinger, Andreas},
       title = {Integrated Web visualizations for protein-protein interaction databases},
       journal = {BMC Bioinformatics},
       volume = {16},
       number = {1},
       pages = {195},
       abstract = {Background: Understanding living systems is crucial for curing diseases. To achieve this task we have to understand biological networks based on protein-protein interactions. Bioinformatics has come up with a great amount of databases and tools that support analysts in exploring protein-protein interactions on an integrated level for knowledge discovery. They provide predictions and correlations, indicate possibilities for future experimental research and fill the gaps to complete the picture of biochemical processes. There are numerous and huge databases of protein-protein interactions used to gain insights into answering some of the many questions of systems biology. Many computational resources integrate interaction data with additional information on molecular background. However, the vast number of diverse Bioinformatics resources poses an obstacle to the goal of understanding. We present a survey of databases that enable the visual analysis of protein networks. Results: We selected M = 10 out of N = 53 resources supporting visualization, and we tested against the following set of criteria: interoperability, data integration, quantity of possible interactions, data visualization quality and data coverage. The study reveals differences in usability, visualization features and quality as well as the quantity of interactions. StringDB is the recommended first choice. CPDB presents a comprehensive dataset and IntAct lets the user change the network layout. A comprehensive comparison table is available via web. The supplementary table can be accessed on tinyurl.com/PPI-DB-Comparison-2015. Conclusions: Only some web resources featuring graph visualization can be successfully applied to interactive visual analysis of protein-protein interaction. Study results underline the necessity for further enhancements of visualization integration in biochemical analysis tools. Identified challenges are data comprehensiveness, confidence, interactive feature and visualization maturing.},
       keywords = { Visualization; Visual analysis; Network visualization; Protein-protein interaction; Systems biology, methods},
       doi = {10.1186/s12859-015-0615-z},
       url = {http://www.biomedcentral.com/content/pdf/s12859-015-0615-z.pdf}
    }
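
    Independent of any particular database surveyed above, the object of visual analysis is simply an interaction graph with confidence-weighted edges. A minimal sketch with networkx, using a hand-made, hypothetical edge list:

    import networkx as nx

    # Hypothetical protein-protein interactions (protein A, protein B, confidence).
    interactions = [("TP53", "MDM2", 0.99), ("TP53", "EP300", 0.95),
                    ("MDM2", "UBE3A", 0.60), ("EP300", "CREBBP", 0.98)]

    G = nx.Graph()
    for a, b, conf in interactions:
        G.add_edge(a, b, confidence=conf)

    # Keep only high-confidence edges, as PPI viewers typically let users do.
    strong = [(a, b) for a, b, d in G.edges(data=True) if d["confidence"] >= 0.9]
    print("high-confidence subnetwork:", strong)
    print("hub candidates:", sorted(G.degree, key=lambda nd: -nd[1])[:2])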

  • [MAL-j55ok] A. Holzinger and G. Pasi, “Introduction to the special issue on Interactive Data Analysis“, Information Processing and Management, vol. 51, iss. 2, pp. 141-143, 2015.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The way in which scientific data analysis is carried out has dramatically changed in past years, and data analysis has been recently defined as the fourth paradigm in the investigation of nature, after empiricism (describing observed natural phenomena), theory (using models and generalizations) and simulation (simulating complex phenomena) ( Bell et al., 2009 and Hey et al., 2009). Today data analysis is an own e-science that unifies theories, experiments and simulations. In the classical scientific hypothetico-deductive approach ( Holzinger, 2011) the scientist asks at first a question, forms a hypothesis, carries out an experiment and collects the data to be analyzed. In eScience it is the reverse, as the data has already been collected and scientists are asking questions to the data. However, whether in astronomy or in life sciences, the increasing flood of data requires sophisticated methods for their handling ( Hirsh, 2008). A typical example is the biomedical area, an extreme data-intensive science, where professionals are confronted with huge masses of complex, high-dimensional data sets from diverse sources; in this context interactive data analysis is a grand challenge ( Holzinger, Dehmer, & Jurisica, 2014). The “interactive” part in this context is aimed at supporting professional end-users in learning to interactively analyze information properties, thus enabling them to visualize the relevant parts of their data, possibly via specific visualization techniques ( Heer and Shneiderman, 2012 and Turkay et al., 2014). In other words, the main aim is to enable an effective human control over powerful machine intelligence, and to integrate statistical methods with information visualization, to support both human insights and decision making processes (Mueller, Reihs, Zatloukal, & Holzinger, 2014). Although we have been in the information processing and management business since more than four decades, we are now facing a tremendous lack of integrated interactive systems, tools and methods. We are still lacking methods that support in the process of interactively finding relevant data – which is essential for sensemaking. It is not sufficient to just deliver to the end user more and more data. Most professionals, such as medical professionals, are in fact primarily not interested in data – they are indeed interested in relevant information. Hence, finding relevant, usable and useful information within the data to support their knowledge and decision making – this is the grand challenge. Consequently, making data both useful and usable is a major topic for information processing and management. [Knowledge Discovery, Data Mining]

    @article{MAL-j55ok,
       year = {2015},
       author = {Holzinger, Andreas and Pasi, Gabriella},
       title = {Introduction to the special issue on Interactive Data Analysis},
       journal = {Information Processing and Management},
       volume = {51},
       number = {2},
       pages = {141-143},
       abstract = {The way in which scientific data analysis is carried out has dramatically changed in past years, and data analysis has been recently defined as the fourth paradigm in the investigation of nature, after empiricism (describing observed natural phenomena), theory (using models and generalizations) and simulation (simulating complex phenomena) ( Bell et al., 2009 and Hey et al., 2009). Today data analysis is an own e-science that unifies theories, experiments and simulations. In the classical scientific hypothetico-deductive approach ( Holzinger, 2011) the scientist asks at first a question, forms a hypothesis, carries out an experiment and collects the data to be analyzed. In eScience it is the reverse, as the data has already been collected and scientists are asking questions to the data. However, whether in astronomy or in life sciences, the increasing flood of data requires sophisticated methods for their handling ( Hirsh, 2008). A typical example is the biomedical area, an extreme data-intensive science, where professionals are confronted with huge masses of complex, high-dimensional data sets from diverse sources; in this context interactive data analysis is a grand challenge ( Holzinger, Dehmer, & Jurisica, 2014). The “interactive” part in this context is aimed at supporting professional end-users in learning to interactively analyze information properties, thus enabling them to visualize the relevant parts of their data, possibly via specific visualization techniques ( Heer and Shneiderman, 2012 and Turkay et al., 2014). In other words, the main aim is to enable an effective human control over powerful machine intelligence, and to integrate statistical methods with information visualization, to support both human insights and decision making processes (Mueller, Reihs, Zatloukal, & Holzinger, 2014). Although we have been in the information processing and management business since more than four decades, we are now facing a tremendous lack of integrated interactive systems, tools and methods. We are still lacking methods that support in the process of interactively finding relevant data – which is essential for sensemaking. It is not sufficient to just deliver to the end user more and more data. Most professionals, such as medical professionals, are in fact primarily not interested in data – they are indeed interested in relevant information. Hence, finding relevant, usable and useful information within the data to support their knowledge and decision making – this is the grand challenge. Consequently, making data both useful and usable is a major topic for information processing and management. [Knowledge Discovery, Data Mining]},
       keywords = {Knowledge Discovery, Data mining, Machine Learning, Data Integration},
       doi = {10.1016/j.ipm.2014.11.002},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1061341&pCurrPk=84336}
    }

  • [PetzEtAl:2015:Sentiment] G. Petz, M. Karpowicz, H. Fuerschuss, A. Auinger, V. Stritesky, and A. Holzinger, “Computational approaches for mining user’s opinions on the Web 2.0“, Information Processing & Management, vol. 51, iss. 4, pp. 510-519, 2015.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The emerging research area of opinion mining deals with computational methods in order to find, extract and systematically analyze people’s opinions, attitudes and emotions towards certain topics. While providing interesting market research information, the user generated content existing on the Web 2.0 presents numerous challenges regarding systematic analysis, the differences and unique characteristics of the various social media channels being one of them. This article reports on the determination of such particularities, and deduces their impact on text preprocessing and opinion mining algorithms. The effectiveness of different algorithms is evaluated in order to determine their applicability to the various social media channels. Our research shows that text preprocessing algorithms are mandatory for mining opinions on the Web 2.0 and that part of these algorithms are sensitive to errors and mistakes contained in the user generated content.

    @article{PetzEtAl:2015:Sentiment,
       year = {2015},
       author = {Petz, Gerald and Karpowicz, Michał and Fuerschuss, Harald and Auinger, Andreas and Stritesky, Vaclav and Holzinger, Andreas},
       title = {Computational approaches for mining user’s opinions on the Web 2.0},
       journal = {Information Processing & Management},
       volume = {51},
       number = {4},
       pages = {510--519},
       abstract = {The emerging research area of opinion mining deals with computational methods in order to find, extract and systematically analyze people’s opinions, attitudes and emotions towards certain topics. While providing interesting market research information, the user generated content existing on the Web 2.0 presents numerous challenges regarding systematic analysis, the differences and unique characteristics of the various social media channels being one of them. This article reports on the determination of such particularities, and deduces their impact on text preprocessing and opinion mining algorithms. The effectiveness of different algorithms is evaluated in order to determine their applicability to the various social media channels. Our research shows that text preprocessing algorithms are mandatory for mining opinions on the Web 2.0 and that part of these algorithms are sensitive to errors and mistakes contained in the user generated content.},
       keywords = {Opinion mining, Noisy text, Text preprocessing, User generated content, Data mining},
       doi = {10.1016/j.ipm.2014.07.011},
       url = {http://www.sciencedirect.com/science/article/pii/S0306457315000655}
    }
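
    The interplay of noisy user-generated text and preprocessing analyzed above can be made concrete with a generic sketch: normalize the noisy text first, then train a small polarity classifier. It assumes scikit-learn; the toy posts and labels are invented and unrelated to the paper's data.

    import re
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    def normalize(text: str) -> str:
        """Toy preprocessing for user-generated content."""
        text = text.lower()
        text = re.sub(r"(.)\1{2,}", r"\1\1", text)  # "sooooo" -> "soo"
        text = re.sub(r"http\S+", " ", text)        # strip URLs
        return text

    # Toy labeled posts (1 = positive opinion, 0 = negative).
    posts = ["Sooooo great, love it!!!", "worst app ever, crashes",
             "really good support http://t.co/x", "terrible UI, so slooow"]
    labels = [1, 0, 1, 0]

    clf = make_pipeline(CountVectorizer(preprocessor=normalize), MultinomialNB())
    clf.fit(posts, labels)
    print(clf.predict(["loooove the new update", "awful, crashes again"]))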

  • [MAL-j53ok] A. Holzinger, “Data Mining with Decision Trees: Theory and Applications“, Online Information Review, vol. 39, iss. 3, pp. 437-438, 2015.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    With the importance of exploring large and complex data sets in knowledge discovery and data mining, the application of decision trees has become a powerful and popular approach.

    @article{MAL-j53ok,
      author    = {Andreas Holzinger},
      title     = {Data Mining with Decision Trees: Theory and Applications},
      journal   = {Online Information Review},
      volume    = {39},
      number    = {3},
      pages     = {437--438},
      abstract  = {With the importance of exploring large and complex data sets in knowledge discovery and data mining, the application of decision trees has become a powerful and popular approach.},
      year      = {2015},
      url       = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1394216&pCurrPk=87192},
      doi       = {10.1108/OIR-04-2015-0121}
    }
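
    The reviewed technique itself is easy to demonstrate. A generic scikit-learn sketch on a bundled dataset, unrelated to the book's examples, shows why shallow decision trees are prized for interpretability:

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Shallow tree: depth limits keep the model interpretable.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
    print(f"test accuracy: {tree.score(X_te, y_te):.2f}")
    print(export_text(tree))  # human-readable rules, the selling point of decision trees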

  • [j52ok] P. Brauner, A. Holzinger, and M. Ziefle, “Ubiquitous computing at its best: Serious exercise games for older adults in ambient assisted living environments“, European Alliance on Innovation (EAI) Endorsed Transactions: Pervasive Games, vol. 1, iss. 4, pp. 1-12, 2015.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Ubiquitous computing and ambient assisted living environments offer promising solutions to meet the demographic change. One example is serious games for health care: Regular exercises mediated through games increase health, well-being, and autonomy of the residents whilst at the same time reducing the costs for caregiving. To understand which factors contribute to an increased acceptance of such exercise games in ambient assisted living environments, a prototypic game was evaluated with 32 younger and 32 older players. Game performance is influenced by age, need for achievement, and also gender. Acceptance and projected use are related to the belief in making the game a habit, current gaming frequency, and social influences. Notably, the game increased the perceived health of the subjects, which is an important issue. This article concludes with guidelines to successfully introduce serious exercise games into health care and future ideas to realize social inclusion in game design.

    @article{j52ok,
       year = {2015},
       author = {Brauner, Philipp and Holzinger, Andreas and Ziefle, Martina},
       title = {Ubiquitous computing at its best: Serious exercise games for older adults in ambient assisted living environments},
       journal = {European Alliance on Innovation (EAI) Endorsed Transactions: Pervasive Games},
       volume = {1},
       number = {4},
       pages = {1-12},
       abstract = {Ubiquitous computing and ambient assisted living environments offer promising solutions to meet the demographic change. One example is serious games for health care: Regular exercises mediated through games increase health, well-being, and autonomy of the residents whilst at the same time reducing the costs for caregiving. To understand which factors contribute to an increased acceptance of such exercise games in ambient assisted living environments, a prototypic game was evaluated with 32 younger and 32 older players. Game performance is influenced by age, need for achievement, and also gender. Acceptance and projected use are related to the belief in making the game a habit, current gaming frequency, and social influences. Notably, the game increased the perceived health of the subjects, which is an important issue. This article concludes with guidelines to successfully introduce serious exercise games into health care and future ideas to realize social inclusion in game design.},
       keywords = {Serious Games, Ambient Assisted Living},
       doi = {10.4108/sg.1.4.e3},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1152090&pCurrPk=85967}
    }

  • [j51ok] B. Peischl, M. Ferk, and A. Holzinger, “The fine art of user-centered software development“, Software Quality Journal, vol. 23, iss. 3, pp. 509-536, 2015.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    In this article, we report on the user-centered development of a mobile medical app under limited resources. We discuss (non-functional) quality attributes that we used to choose the platform for development of the medical app. As the major contribution, we show how to integrate user-centered design in an early stage of mobile app development under the presence of limited resources. Moreover, we present empirical results gained from our two-stage testing procedure including recommendations to provide both a useful and useable business app. In complex domains including medicine and health and particularly with mobile health devices (mHealth, eHealth Apps) usability is of vital importance and the factor for success.

    @article{j51ok,
       year = {2015},
       author = {Peischl, Bernhard and Ferk, Michaela and Holzinger, Andreas},
       title = {The fine art of user-centered software development},
       journal = {Software Quality Journal},
       volume = {23},
       number = {3},
       pages = {509-536},
       abstract = {In this article, we report on the user-centered development of a mobile medical app under limited resources. We discuss (non-functional) quality attributes that we used to choose the platform for development of the medical app. As the major contribution, we show how to integrate user-centered design in an early stage of mobile app development under the presence of limited resources. Moreover, we present empirical results gained from our two-stage testing procedure including recommendations to provide both a useful and useable business app. In complex domains including medicine and health and particularly with mobile health devices (mHealth, eHealth Apps) usability is of vital importance and the factor for success.},
       keywords = {User-centered design, Software engineering process, Usability, Mobile software quality, Mobile usability},
       doi = {10.1007/s11219-014-9239-1},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1158751&pCurrPk=85057}
    }

  • [j50ok] H. P. da Silva, S. H. Fairclough, A. Holzinger, R. J. K. Jacob, and D. S. Tan, “Introduction to the Special Issue on Physiological Computing for Human-Computer Interaction“, ACM Transactions on Computer-Human Interaction, vol. 21, iss. 6, pp. 29:1-29:4, 2015.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Physiological data in its different dimensions—bioelectrical, biomechanical, biochemical, or biophysical—and collected through existing sensors or specialized biomedical devices, image capture, or other sources is pushing the boundaries of physiological computing for human-computer interaction (HCI). Although physiological computing shows the potential to enhance the way in which people interact with digital content, systems remain challenging to design and build. The aim of this special issue is to present outstanding work related to use of physiological data in HCI, setting additional bases for next-generation computer interfaces and interaction experiences. Topics covered in this issue include methods and methodologies, human factors, the use of devices, and applications for supporting the development of emerging interfaces

    @article{j50ok,
      author    = {Hugo Pl{\'{a}}cido da Silva and
                   Stephen H. Fairclough and
                   Andreas Holzinger and
                   Robert J. K. Jacob and
                   Desney S. Tan},
      title     = {Introduction to the Special Issue on Physiological Computing for Human-Computer Interaction},
      journal   = {{ACM} Transactions on Computer-Human Interaction},
      volume    = {21},
      number    = {6},
      pages     = {29:1--29:4},
      year      = {2015},
      url       = {http://doi.acm.org/10.1145/2688203},
      doi       = {10.1145/2688203},
      abstract  = {Physiological data in its different dimensions—bioelectrical, biomechanical, biochemical, or biophysical—and collected through existing sensors or specialized biomedical devices, image capture, or other sources is pushing the boundaries of physiological computing for human-computer interaction (HCI). Although physiological computing shows the potential to enhance the way in which people interact with digital content, systems remain challenging to design and build. The aim of this special issue is to present outstanding work related to the use of physiological data in HCI, setting additional bases for next-generation computer interfaces and interaction experiences. Topics covered in this issue include methods and methodologies, human factors, the use of devices, and applications for supporting the development of emerging interfaces.}
    }

  • [MAL-c138ok] S. Yimam, C. Biemann, L. Majnaric, S. Sabanovic, and A. Holzinger, “Interactive and Iterative Annotation for Biomedical Entity Recognition“, in Brain Informatics and Health, Lecture Notes in Artificial Intelligence LNAI 9250, Y. Guo, K. Friston, F. Aldo, S. Hill, and H. Peng, Eds., Cham, Heidelberg, Berlin: Springer, 2015, pp. 347-357.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    In this paper, we demonstrate the impact of interactive machine learning for the development of a biomedical entity recognition dataset using a human-into-the-loop approach: during annotation, a machine learning model is built on previous annotations and used to propose labels for subsequent annotation. To demonstrate that such interactive and iterative annotation speeds up the development of quality dataset annotation, we conduct two experiments. In the first experiment, we carry out an iterative annotation experimental simulation and show that only a handful of medical abstracts need to be annotated to produce suggestions that increase annotation speed. In the second experiment, clinical doctors conducted a case study, annotating medical terms in documents relevant to their research. The experiments validate our method qualitatively and quantitatively, and give rise to a more personalized, responsive information extraction technology [interactive machine learning, Human-in-the-loop, Doctor-in-the-loop].

    @incollection{MAL-c138ok,
       year = {2015},
       author = {Yimam, Seid Muhie and Biemann, Chris and Majnaric, Ljiljana and Sabanovic, Sefket and Holzinger, Andreas},
       title = {Interactive and Iterative Annotation for Biomedical Entity Recognition},
       booktitle = {Brain Informatics and Health, Lecture Notes in Artificial Intelligence LNAI 9250},
       editor = {Guo, Yike and Friston, Karl and Aldo, Faisal and Hill, Sean and Peng, Hanchuan},
       publisher = {Springer},
       address = {Cham, Heidelberg, Berlin},
       pages = {347-357},
       abstract = {In this paper, we demonstrate the impact of interactive machine learning for the development of a biomedical entity recognition dataset using a human-into-the-loop approach: during annotation, a machine learning model is built on previous annotations and used to propose labels for subsequent annotation. To demonstrate that such interactive and iterative annotation speeds up the development of quality dataset annotation, we conduct two experiments. In the first experiment, we carry out an iterative annotation experimental simulation and show that only a handful of medical abstracts need to be annotated to produce suggestions that increase annotation speed. In the second experiment, clinical doctors conducted a case study, annotating medical terms in documents relevant to their research. The experiments validate our method qualitatively and quantitatively, and give rise to a more personalized, responsive information extraction technology [interactive machine learning, Human-in-the-loop, Doctor-in-the-loop].},
       keywords = {Interactive annotation, Machine learning, Knowledge discovery, Data mining, Human in the loop, Biomedical entity recognition},
       doi = {10.1007/978-3-319-23344-4_34},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1198814&pCurrPk=85959}
    }
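
    Since the interactive annotation loop above is described only in prose, a minimal Python/scikit-learn sketch may help: a classifier is retrained on all labels confirmed so far and proposes a label for each incoming text, which the annotator accepts or corrects. This is a hypothetical illustration (the data, label set, and simulated annotator are invented), not the implementation from [MAL-c138ok]:

    # Minimal sketch of an interactive/iterative annotation loop (hypothetical):
    # a classifier retrained on the labels confirmed so far proposes a label
    # for each new text; the human annotator accepts or corrects the proposal
    # (simulated here by a dictionary of corrections).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    confirmed = [("aspirin reduces fever", "drug"),
                 ("the retina was damaged", "anatomy")]
    incoming = ["insulin controls glucose", "the macula degenerates"]
    annotator = {"the macula degenerates": "anatomy"}  # simulated corrections

    vectorizer = TfidfVectorizer()
    for text in incoming:
        docs, labels = zip(*confirmed)
        model = LogisticRegression().fit(vectorizer.fit_transform(docs), labels)
        suggested = model.predict(vectorizer.transform([text]))[0]
        gold = annotator.get(text, suggested)   # accept or correct the suggestion
        confirmed.append((text, gold))          # the training set grows each round
        print(f"{text!r}: suggested={suggested!r}, confirmed={gold!r}")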

  • [MAL-c137ok] M. Hund, W. Sturm, T. Schreck, T. Ullrich, D. Keim, L. Majnaric, and A. Holzinger, “Analysis of Patient Groups and Immunization Results Based on Subspace Clustering“, in Brain Informatics and Health, Lecture Notes in Artificial Intelligence LNAI 9250, Y. Guo, K. Friston, F. Aldo, S. Hill, and H. Peng, Eds., Cham: Springer International Publishing, 2015, vol. 9250, pp. 358-368.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Biomedical experts are increasingly confronted with what is often called Big Data, an important subclass of high-dimensional data. High-dimensional data analysis can be helpful in finding relationships between records and dimensions. However, due to data complexity, experts are decreasingly capable of dealing with increasingly complex data. Mapping higher dimensional data to a smaller number of relevant dimensions is a big challenge due to the curse of dimensionality. Irrelevant, redundant, and conflicting dimensions affect the effectiveness and efficiency of analysis. Furthermore, the possible mappings from high- to low-dimensional spaces are ambiguous. For example, the similarity between patients may change by considering different combinations of relevant dimensions (subspaces). We show the potential of subspace analysis for the interpretation of high-dimensional medical data. Specifically, we analyze relationships between patients, sets of patient attributes, and outcomes of a vaccination treatment by means of a subspace clustering approach. We present an analysis workflow and discuss future directions for high-dimensional (medical) data analysis and visual exploration [machine learning, knowledge discovery, biomedical informatics].

    @incollection{MAL-c137ok,
       year = {2015},
       author = {Hund, Michael and Sturm, Werner and Schreck, Tobias and Ullrich, Torsten and Keim, Daniel and Majnaric, Ljiljana and Holzinger, Andreas},
       title = {Analysis of Patient Groups and Immunization Results Based on Subspace Clustering},
       booktitle = {Brain Informatics and Health, Lecture Notes in Artificial Intelligence LNAI 9250},
       editor = {Guo, Yike and Friston, Karl and Aldo, Faisal and Hill, Sean and Peng, Hanchuan},
       publisher = {Springer International Publishing},
       address = {Cham},
       volume = {9250},
       pages = {358-368},
       abstract = {Biomedical experts are increasingly confronted with what is often called Big Data, an important subclass of high-dimensional data. High-dimensional data analysis can be helpful in finding relationships between records and dimensions. However, due to data complexity, experts are decreasingly capable of dealing with increasingly complex data. Mapping higher dimensional data to a smaller number of relevant dimensions is a big challenge due to the curse of dimensionality. Irrelevant, redundant, and conflicting dimensions affect the effectiveness and efficiency of analysis. Furthermore, the possible mappings from high- to low-dimensional spaces are ambiguous. For example, the similarity between patients may change by considering different combinations of relevant dimensions (subspaces). We show the potential of subspace analysis for the interpretation of high-dimensional medical data. Specifically, we analyze relationships between patients, sets of patient attributes, and outcomes of a vaccination treatment by means of a subspace clustering approach. We present an analysis workflow and discuss future directions for high-dimensional (medical) data analysis and visual exploration [machine learning, knowledge discovery, biomedical informatics].},
       keywords = {Knowledge discovery and exploration, Visual Analytics, Subspace Clustering, Subspace Analysis, human-in-the-loop, doctor-in-the-loop, classification, discovery, explanation},
       doi = {10.1007/978-3-319-23344-4_35},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1198810&pCurrPk=85960}
    }
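
    The central observation of [MAL-c137ok], that the similarity between patients depends on the chosen subspace of attributes, can be made concrete in a few lines. The following toy sketch uses invented patient data and plain nearest-neighbour distances, not the paper's dataset or its subspace clustering algorithm:

    # The nearest neighbour of patient 0 changes with the attribute subspace.
    # Attribute names and values are invented for demonstration.
    import numpy as np

    attrs = ["age", "bmi", "antibody_titer", "crp"]
    patients = np.array([[60, 24.0, 1.2, 5.0],    # patient 0
                         [62, 31.0, 1.1, 5.2],    # patient 1: close in age/antibody
                         [35, 24.5, 4.8, 4.9]])   # patient 2: close in bmi/crp

    def nearest_to_first(X, dims):
        sub = X[:, dims]
        d = np.linalg.norm(sub[1:] - sub[0], axis=1)   # distances to patient 0
        return 1 + int(d.argmin())

    print(nearest_to_first(patients, [0, 2]))  # subspace {age, antibody_titer} -> 1
    print(nearest_to_first(patients, [1, 3]))  # subspace {bmi, crp}            -> 2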

  • [MAL-c136ok] P. Kieseberg, J. Schantl, P. Frühwirt, E. Weippl, and A. Holzinger, “Witnesses for the Doctor in the Loop“, in Brain Informatics and Health, Lecture Notes in Artificial Intelligence LNAI 9250, Y. Guo, K. Friston, F. Aldo, S. Hill, and H. Peng, Eds., Cham, Heidelberg, Berlin: Springer, 2015, pp. 369-378.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The “doctor in the loop” is a new paradigm in information driven medicine, picturing the doctor as authority inside a loop supplying an expert system with information on actual patients, treatment results and possible additional (side-)effects, as well as general information in order to enhance data driven medical science, as well as giving back treatment advice to the doctor himself. While this approach offers several positive aspects related to P4 medicine (personal, predictive, preventive and participatory), it also relies heavily on the authenticity of the data and increases the reliance on the security of databases, as well as on the correctness of machine learning algorithms. In this paper we propose a solution in order to protect the doctor in the loop against responsibility derived from manipulated data, thus enabling this new paradigm to gain acceptance in the medical community.

    @incollection{MAL-c136ok,
       year = {2015},
       author = {Kieseberg, Peter and Schantl, Johannes and Frühwirt, Peter and Weippl, Edgar and Holzinger, Andreas},
       title = {Witnesses for the Doctor in the Loop},
       booktitle = {Brain Informatics and Health, Lecture Notes in Artificial Intelligence LNAI 9250},
       editor = {Guo, Yike and Friston, Karl and Aldo, Faisal and Hill, Sean and Peng, Hanchuan},
       publisher = {Springer},
       address = {Cham, Heidelberg, Berlin},
       pages = {369-378},
       abstract = {The “doctor in the loop” is a new paradigm in information driven medicine, picturing the doctor as authority inside a loop supplying an expert system with information on actual patients, treatment results and possible additional (side-)effects, as well as general information in order to enhance data driven medical science, as well as giving back treatment advice to the doctor himself. While this approach offers several positive aspects related to P4 medicine (personal, predictive, preventive and participatory), it also relies heavily on the authenticity of the data and increases the reliance on the security of databases, as well as on the correctness of machine learning algorithms. In this paper we propose a solution in order to protect the doctor in the loop against responsibility derived from manipulated data, thus enabling this new paradigm to gain acceptance in the medical community.},
       keywords = {P4 medicine, Fingerprinting, Data driven science, doctor-in-the-loop, human-in-the-loop},
       doi = {10.1007/978-3-319-23344-4_36},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1198800&pCurrPk=85962}
    }
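
    The fingerprinting scheme of [MAL-c136ok] is not reproduced here; as a generic illustration of the underlying goal, making manipulations of stored clinical data detectable, the following sketches a simple hash chain, one standard building block for tamper-evident logs. All records and field names are invented:

    # Generic hash-chain sketch (NOT the authors' scheme): each log entry
    # commits to its predecessor, so any later manipulation of a stored
    # record breaks every subsequent digest.
    import hashlib, json

    def append(chain, record):
        prev = chain[-1]["digest"] if chain else "0" * 64
        payload = (prev + json.dumps(record, sort_keys=True)).encode()
        chain.append({"record": record,
                      "digest": hashlib.sha256(payload).hexdigest()})

    def verify(chain):
        prev = "0" * 64
        for entry in chain:
            payload = (prev + json.dumps(entry["record"], sort_keys=True)).encode()
            if hashlib.sha256(payload).hexdigest() != entry["digest"]:
                return False
            prev = entry["digest"]
        return True

    log = []
    append(log, {"patient": 17, "advice": "increase dose"})
    append(log, {"patient": 17, "advice": "stop treatment"})
    assert verify(log)
    log[0]["record"]["advice"] = "decrease dose"   # manipulation...
    assert not verify(log)                         # ...is detected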

  • [MAL-c135ok] S. S. Rahim, V. Palade, C. Jayne, A. Holzinger, and J. Shuttleworth, “Detection of Diabetic Retinopathy and Maculopathy in Eye Fundus Images Using Fuzzy Image Processing“, in Brain Informatics and Health, Lecture Notes in Computer Science, LNCS 9250, Y. Guo, K. Friston, F. Aldo, S. Hill, and H. Peng, Eds., Cham, Heidelberg, New York, Dordrecht, London: Springer, 2015, pp. 379-388.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Diabetic retinopathy is damage to the retina and one of the serious consequences of diabetes. Early detection of diabetic retinopathy is extremely important in order to prevent premature visual loss and blindness. This paper presents a novel automatic detection of diabetic retinopathy and maculopathy in eye fundus images using fuzzy image processing. The detection of maculopathy is essential, as it will eventually cause loss of vision if the affected macula is not treated in time. The developed system consists of image acquisition, image preprocessing with a combination of fuzzy techniques, feature extraction, and image classification using several machine learning techniques. The fuzzy-based image processing decision support system will assist in diabetic retinopathy screening and reduce the burden borne by the screening team [machine learning, aML].

    @incollection{MAL-c135ok,
       year = {2015},
       author = {Rahim, Sarni Suhaila and Palade, Vasile and Jayne, Chrisina and Holzinger, Andreas and Shuttleworth, James},
       title = {Detection of Diabetic Retinopathy and Maculopathy in Eye Fundus Images Using Fuzzy Image Processing},
       booktitle = {Brain Informatics and Health, Lecture Notes in Computer Science, LNCS 9250},
       editor = {Guo, Yike and Friston, Karl and Aldo, Faisal and Hill, Sean and Peng, Hanchuan},
       publisher = {Springer},
       address = {Cham, Heidelberg, New York, Dordrecht, London},
       pages = {379-388},
       abstract = {Diabetic retinopathy is damage to the retina and one of the serious consequences of diabetes. Early detection of diabetic retinopathy is extremely important in order to prevent premature visual loss and blindness. This paper presents a novel automatic detection of diabetic retinopathy and maculopathy in eye fundus images using fuzzy image processing. The detection of maculopathy is essential, as it will eventually cause loss of vision if the affected macula is not treated in time. The developed system consists of image acquisition, image preprocessing with a combination of fuzzy techniques, feature extraction, and image classification using several machine learning techniques. The fuzzy-based image processing decision support system will assist in diabetic retinopathy screening and reduce the burden borne by the screening team [machine learning, aML].},
       keywords = {Diabetic retinopathy, Eye screening, Colour fundus images, Fuzzy image processing, Machine learning, Classifiers, aML},
       doi = {10.1007/978-3-319-23344-4_37},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1198500&pCurrPk=86749}
    }
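
    To give a concrete flavour of what “image preprocessing with a combination of fuzzy techniques” can mean, the following sketches one classic fuzzy step, the intensification (INT) operator, on a synthetic low-contrast image. It is a stand-in chosen for illustration, not the pipeline of [MAL-c135ok]:

    # Classic fuzzy intensification: fuzzify intensities to [0, 1], push
    # values away from the crossover point 0.5, then defuzzify. Shown on a
    # synthetic low-contrast image; contrast (std. dev.) increases.
    import numpy as np

    def fuzzy_intensify(img):
        mu = img.astype(float) / 255.0                 # fuzzify
        low = mu < 0.5
        mu[low] = 2.0 * mu[low] ** 2                   # darken the dark side
        mu[~low] = 1.0 - 2.0 * (1.0 - mu[~low]) ** 2   # brighten the bright side
        return (mu * 255).astype(np.uint8)             # defuzzify

    rng = np.random.default_rng(0)
    fundus_like = rng.integers(90, 170, size=(8, 8), dtype=np.uint8)
    enhanced = fuzzy_intensify(fundus_like)
    print(fundus_like.std(), enhanced.std())           # spread grows after INT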

  • [MAL-c134ok] D. Girardi, J. Kueng, and A. Holzinger, “A Domain-Expert Centered Process Model for Knowledge Discovery in Medical Research: Putting the Expert-in-the-Loop“, in Brain Informatics and Health, Lecture Notes in Computer Science LNCS 9250, Y. Guo, K. Friston, F. Aldo, S. Hill, and H. Peng, Eds., Cham, Heidelberg, Berlin, London, Dordrecht, New York: Springer, 2015, pp. 389-398.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Established process models for knowledge discovery see the domain expert in a customer-like, supervising role. In the field of biomedical research, it is necessary for the domain experts to move into the center of this process with far-reaching consequences for their research work but also for the process itself. We revise the established process models for knowledge discovery and propose a new process model for domain-expert driven knowledge discovery. Furthermore, we present a research infrastructure which is adapted to this new process model and show how the domain expert can be deeply integrated even into the highly complex data mining and machine learning tasks. [interactive machine learning (iML), doctor-in-the-loop]

    @incollection{MAL-c134ok,
       year = {2015},
       author = {Girardi, Dominic and Kueng, Josef and Holzinger, Andreas},
       title = {A Domain-Expert Centered Process Model for Knowledge Discovery in Medical Research: Putting the Expert-in-the-Loop},
       booktitle = {Brain Informatics and Health, Lecture Notes in Computer Science LNCS 9250},
       editor = {Guo, Yike and Friston, Karl and Aldo, Faisal and Hill, Sean and Peng, Hanchuan},
       publisher = {Springer},
       address = {Cham, Heidelberg, Berlin, London, Dordrecht, New York},
       pages = {389-398},
       abstract = {Established process models for knowledge discovery see the domain expert in a customer-like, supervising role. In the field of biomedical research, it is necessary for the domain experts to move into the center of this process with far-reaching consequences for their research work but also for the process itself. We revise the established process models for knowledge discovery and propose a new process model for domain-expert driven knowledge discovery. Furthermore, we present a research infrastructure which is adapted to this new process model and show how the domain expert can be deeply integrated even into the highly complex data mining and machine learning tasks. [interactive machine learning (iML), doctor-in-the-loop]},
       keywords = {Expert-in-the-Loop, Interactive Machine Learning, Process model, Knowledge Discovery, Medical Research, Human-in-the-loop, Doctor-in-the-loop},
       doi = {10.1007/978-3-319-23344-4_38},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1198458&pCurrPk=85963}
    }

  • [DAV-c133ok] W. Sturm, T. Schaefer, T. Schreck, A. Holzinger, and T. Ullrich, “Extending the Scaffold Hunter Visualization Toolkit with Interactive Heatmaps“, in EG UK Computer Graphics & Visual Computing CGVC 2015, 2015, pp. 77-84.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    In many application areas, large amounts of data arise, which are often hard for humans to interpret or make use of. Interactive visualization can help to overview and explore large amounts of data. An example is in the life sciences, where databases of chemical compounds need to be analyzed in terms of similarities of molecular properties. Scientists then need to explore this data in an efficient way. The Scaffold Hunter framework is an Open Source software system for interactive visualization of high-dimensional data. In this paper, we present an extension of Scaffold Hunter with an interactive heatmap, which ties in tightly with a dendrogram visualization. We added specific interaction modalities and views tailored to the analysis of chemical compounds. Zooming capabilities allow the user to start from an overview of the data (showing all data elements at once) down to a detail-on-demand view which includes chemical structural views of molecules. We show how the interactive heatmap with clustered rows and columns can bring new insights into the data regarding various properties. The implementation is made available for researchers and practitioners to use.

    @inproceedings{DAV-c133ok,
       year = {2015},
       author = {Sturm, Werner and Schaefer, Till and Schreck, Tobias and Holzinger, Andreas and Ullrich, Torsten},
       title = {Extending the Scaffold Hunter Visualization Toolkit with Interactive Heatmaps},
       booktitle = {EG UK Computer Graphics & Visual Computing CGVC 2015},
       editor = {Borgo, Rita and Turkay, Cagatay},
       publisher = {Euro Graphics (EG)},
       pages = {77-84},
       abstract = {In many application areas, large amounts of data arise, which are often hard for humans to interpret or make use of. Interactive visualization can help to overview and explore large amounts of data. An example is in the life sciences, where databases of chemical compounds need to be analyzed in terms of similarities of molecular properties. Scientists then need to explore this data in an efficient way. The Scaffold Hunter framework is an Open Source software system for interactive visualization of high-dimensional data. In this paper, we present an extension of Scaffold Hunter with an interactive heatmap, which ties in tightly with a dendrogram visualization. We added specific interaction modalities and views tailored to the analysis of chemical compounds. Zooming capabilities allow the user to start from an overview of the data (showing all data elements at once) down to a detail-on-demand view which includes chemical structural views of molecules. We show how the interactive heatmap with clustered rows and columns can bring new insights into the data regarding various properties. The implementation is made available for researchers and practitioners to use.},
       keywords = {interactive visualization, interactive heatmaps, visual analytics, chemical compounds, chemical molecules},
       doi = {10.2312/cgvc.20151247},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1232781&pCurrPk=87191}
    }
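
    Scaffold Hunter itself is a Java application, so the following is only an analogous sketch of the visualization idea, an interactive heatmap tied to row and column dendrograms, using seaborn's clustermap on an invented compound-by-property matrix:

    # Clustered heatmap with row/column dendrograms over an invented
    # compound-by-property matrix (an analogy, not Scaffold Hunter code).
    import numpy as np
    import pandas as pd
    import seaborn as sns

    rng = np.random.default_rng(1)
    data = pd.DataFrame(rng.normal(size=(10, 4)),
                        index=[f"compound_{i}" for i in range(10)],
                        columns=["logP", "mol_weight", "h_donors", "tpsa"])
    g = sns.clustermap(data, method="average", metric="euclidean",
                       cmap="viridis", standard_scale=1)  # scale each property
    g.savefig("compound_heatmap.png")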

  • [DAV-c132ok] W. Sturm, T. Schreck, A. Holzinger, and T. Ullrich, “Discovering Medical Knowledge Using Visual Analytics – a survey on methods for systems biology and omics data“, in Eurographics Workshop on Visual Computing for Biology and Medicine (2015), 2015, pp. 71-81.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Due to advanced technologies, the amount of biomedical data has been increasing drastically. Such large data sets might be obtained from hospitals, medical practices or laboratories and can be used to discover unknown knowledge and to find and reflect hypotheses. Based on this fact, knowledge discovery systems can support experts to make further decisions, explore the data or to predict future events. To analyze and communicate such a vast amount of information to the user, advanced techniques such as knowledge discovery and information visualization are necessary. Visual analytics combines these fields and supports users in integrating domain knowledge into the knowledge discovery process. This article gives a state-of-the-art overview of visual analytics research with a focus on the biomedical domain, systems biology and omics data.

    @inproceedings{DAV-c132ok,
       year = {2015},
       author = {Sturm, Werner and Schreck, Tobias and Holzinger, Andreas and Ullrich, Torsten},
       title = {Discovering Medical Knowledge Using Visual Analytics – a survey on methods for systems biology and omics data},
       booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine (2015)},
       editor = {Buehler, Katja and Linsen, Lars and John, Nigel W.},
       publisher = {Eurographics EG},
       pages = {71-81},
       abstract = {Due to advanced technologies, the amount of biomedical data has been increasing drastically. Such large data sets might be obtained from hospitals, medical practices or laboratories and can be used to discover unknown knowledge and to find and reflect hypotheses. Based on this fact, knowledge discovery systems can support experts to make further decisions, explore the data or to predict future events. To analyze and communicate such a vast amount of information to the user, advanced techniques such as knowledge discovery and information visualization are necessary. Visual analytics combines these fields and supports users in integrating domain knowledge into the knowledge discovery process. This article gives a state-of-the-art overview of visual analytics research with a focus on the biomedical domain, systems biology and omics data.},
       keywords = {Visual analytics, biomedical data, big data, omics data, unknown knowledge, discovery, hypotheses generation},
       doi = {10.2312/vcbm.20151210},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1232742&pCurrPk=86565}
    }

  • [MAL-p17ok] A. Holzinger, C. Röcker, and M. Ziefle, “From Smart Health to Smart Hospitals“, in Smart Health: State-of-the-Art and Beyond, Springer Lecture Notes in Computer Science, LNCS 8700, Heidelberg, Berlin: Springer, 2015, pp. 1-20.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Prolonged life expectancy along with the increasing complexity of medicine and health services raises health costs worldwide dramatically. Advancements in ubiquitous computing applications in combination with the use of sophisticated intelligent sensor networks may provide a basis for help. Whilst the smart health concept has much potential to support the concept of the emerging P4-medicine (preventive, participatory, predictive, and personalized), such high-tech medicine produces large amounts of high-dimensional, weakly-structured data sets and massive amounts of unstructured information. All these technological approaches along with “big data” are turning the medical sciences into a data-intensive science. To keep pace with the growing amounts of complex data, smart hospital approaches are a commandment of the future, necessitating context aware computing along with advanced interaction paradigms in new physical-digital ecosystems. In such a system the medical doctors are supported by their smart mobile medical assistants on managing their floods of data semi-automatically by following the human-in-the-loop concept. At the same time patients are supported by their health assistants to facilitate a healthier life, wellness and wellbeing. [Ubiquitous Computing, computational intelligence, P4-medicine]

    @incollection{MAL-p17ok,
       year = {2015},
       author = {Holzinger, Andreas and Röcker, Carsten and Ziefle, Martina},
       title = {From Smart Health to Smart Hospitals},
       booktitle = {Smart Health: State-of-the-Art and Beyond, Springer Lecture Notes in Computer Science, LNCS 8700},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       pages = {1--20},
       abstract = {Prolonged life expectancy along with the increasing complexity of medicine and health services raises health costs worldwide dramatically. Advancements in ubiquitous computing applications in combination with the use of sophisticated intelligent sensor networks may provide a basis for help. Whilst the smart health concept has much potential to support the concept of the emerging P4-medicine (preventive, participatory, predictive, and personalized), such high-tech medicine produces large amounts of high-dimensional, weakly-structured data sets and massive amounts of unstructured information. All these technological approaches along with “big data” are turning the medical sciences into a data-intensive science. To keep pace with the growing amounts of complex data, smart hospital approaches are a commandment of the future, necessitating context aware computing along with advanced interaction paradigms in new physical-digital ecosystems. In such a system the medical doctors are supported by their smart mobile medical assistants on managing their floods of data semi-automatically by following the human-in-the-loop concept. At the same time patients are supported by their health assistants to facilitate a healthier life, wellness and wellbeing. [Ubiquitous Computing, computational intelligence, P4-medicine]},
       keywords = {Ubiquitous Computing, P4 medicine, Context Awareness, Computational Intelligence},
       doi = {10.1007/978-3-319-16226-3_1},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=972679&pCurrPk=82963}
    }

  • [MAL-p16ok] M. Duerr-Specht, R. Goebel, and A. Holzinger, “Medicine and Health Care as a Data Problem: Will Computers become better medical doctors?“, in Smart Health, State-of-the-Art SOTA Lecture Notes in Computer Science LNCS 8700, A. Holzinger, C. Roecker, and M. Ziefle, Eds., Heidelberg, Berlin, New York: Springer, 2015, pp. 21-39.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Modern medicine and health care in all parts of our world are facing formidable challenges: exploding costs, finite resources, an aging population, as well as a deluge of big, complex, high-dimensional data sets produced by modern biomedical science, which exceeds the absorptive capacity of human minds. Consequently, the question arises about whether and to what extent the advances of machine intelligence and computational power may be utilized to mitigate the consequences. After prevailing over humans in chess and popular game shows, it is postulated that the biomedical field will be the next domain in which smart computing systems will outperform their human counterparts. In this overview we examine this hypothesis by comparing data formats, data access and heuristic methods used by both humans and computer systems in the medical decision making process. We conclude that the medical reasoning process can be significantly enhanced using emerging smart computing technologies and so-called computational intelligence. However, as humans have access to a larger spectrum of data of higher complexity and continue to perform essential components of the reasoning process more efficiently, it would be unwise to sacrifice the whole human practice of medicine to the digital world; hence a major goal is to mutually exploit the best of the two worlds: We need computational intelligence to deal with big complex data, but we nevertheless – and more than ever before – need human intelligence to interpret abstracted data and information and creatively make decisions. [Medical Informatics, Clinical Decision Support, Cognitive Computing]

    @incollection{MAL-p16ok,
       year = {2015},
       author = {Duerr-Specht, Michael and Goebel, Randy and Holzinger, Andreas},
       title = {Medicine and Health Care as a Data Problem: Will Computers become better medical doctors?},
       booktitle = {Smart Health, State-of-the-Art SOTA Lecture Notes in Computer Science LNCS 8700},
       editor = {Holzinger, Andreas and Roecker, Carsten and Ziefle, Martina},
       publisher = {Springer},
       address = {Heidelberg, Berlin, New York},
       pages = {21--39},
       abstract = {Modern medicine and health care in all parts of our world are facing formidable challenges: exploding costs, finite resources, an aging population, as well as a deluge of big, complex, high-dimensional data sets produced by modern biomedical science, which exceeds the absorptive capacity of human minds. Consequently, the question arises about whether and to what extent the advances of machine intelligence and computational power may be utilized to mitigate the consequences. After prevailing over humans in chess and popular game shows, it is postulated that the biomedical field will be the next domain in which smart computing systems will outperform their human counterparts. In this overview we examine this hypothesis by comparing data formats, data access and heuristic methods used by both humans and computer systems in the medical decision making process. We conclude that the medical reasoning process can be significantly enhanced using emerging smart computing technologies and so-called computational intelligence. However, as humans have access to a larger spectrum of data of higher complexity and continue to perform essential components of the reasoning process more efficiently, it would be unwise to sacrifice the whole human practice of medicine to the digital world; hence a major goal is to mutually exploit the best of the two worlds: We need computational intelligence to deal with big complex data, but we nevertheless – and more than ever before – need human intelligence to interpret abstracted data and information and creatively make decisions. [Medical Informatics, Clinical Decision Support, Cognitive Computing]},
       keywords = {Medical Informatics, Smart Health, Cognitive Computing},
       doi = {10.1007/978-3-319-16226-3_2},
       url = {http://www.springer.com/cda/content/document/cda_downloaddocument/9783319162256-c2.pdf?SGWID=0-0-45-1494853-p177281935}
    }

  • [MAL-p15ok] K. Donsa, S. Spat, P. Beck, T. R. Pieber, and A. Holzinger, “Towards personalization of diabetes therapy using computerized decision support and machine learning: some open problems and challenges“, in Smart Health, Lecture Notes in Computer Science LNCS 8700, A. Holzinger, C. Roecker, and M. Ziefle, Eds., Heidelberg, Berlin: Springer, 2015, pp. 235-260.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Diabetes mellitus (DM) is a growing global disease which highly affects the individual patient and represents a global health burden with financial impact on national health care systems. Type 1 DM can only be treated with insulin, whereas for patients with type 2 DM a wide range of therapeutic options are available. These options include lifestyle changes such as change of diet and an increase of physical activity, but also administration of oral or injectable antidiabetic drugs. The diabetes therapy, especially with insulin, is complex. Therapy decisions include various medical and life-style related information. Computerized decision support systems (CDSS) aim to improve the treatment process in patient’s self-management but also in institutional care. Therefore, the personalization of the patient’s diabetes treatment is possible at different levels. It can provide medication support and therapy control, which aid to correctly estimate the personal medication requirements and improves the adherence to therapy goals. It also supports long-term disease management, aiming to develop a personalization of care according to the patient’s risk stratification. Personalization of therapy is also facilitated by using new therapy aids like food and activity recognition systems, lifestyle support tools and pattern recognition for insulin therapy optimization. In this work we cover relevant parameters to personalize diabetes therapy, how CDSS can support the therapy process and the role of machine learning in this context. Moreover, we identify open problems and challenges for the personalization of diabetes therapy with focus on decision support systems and machine learning technology. [Machine Learning, Clinical Decision Support]

    @incollection{MAL-p15ok,
       year = {2015},
       author = {Donsa, Klaus and Spat, Stephan and Beck, Peter and Pieber, Thomas R. and Holzinger, Andreas},
       title = {Towards personalization of diabetes therapy using computerized decision support and machine learning: some open problems and challenges},
       booktitle = {Smart Health, Lecture Notes in Computer Science LNCS 8700},
       editor = {Holzinger, Andreas and Roecker, Carsten and Ziefle, Martina},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       pages = {235--260},
       abstract = {Diabetes mellitus (DM) is a growing global disease which highly affects the individual patient and represents a global health burden with financial impact on national health care systems. Type 1 DM can only be treated with insulin, whereas for patients with type 2 DM a wide range of therapeutic options are available. These options include lifestyle changes such as change of diet and an increase of physical activity, but also administration of oral or injectable antidiabetic drugs. The diabetes therapy, especially with insulin, is complex. Therapy decisions include various medical and life-style related information. Computerized decision support systems (CDSS) aim to improve the treatment process in patient’s self-management but also in institutional care. Therefore, the personalization of the patient’s diabetes treatment is possible at different levels. It can provide medication support and therapy control, which aid to correctly estimate the personal medication requirements and improves the adherence to therapy goals. It also supports long-term disease management, aiming to develop a personalization of care according to the patient’s risk stratification. Personalization of therapy is also facilitated by using new therapy aids like food and activity recognition systems, lifestyle support tools and pattern recognition for insulin therapy optimization. In this work we cover relevant parameters to personalize diabetes therapy, how CDSS can support the therapy process and the role of machine learning in this context. Moreover, we identify open problems and challenges for the personalization of diabetes therapy with focus on decision support systems and machine learning technology. [Machine Learning, Clinical Decision Support]},
       keywords = {Machine Learning, Biomedical Decision Support},
       doi = {10.1007/978-3-319-16226-3_10},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1061561&pCurrPk=84342}
    }
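
    As a taste of the kind of rule such a decision support system automates, the following implements the standard textbook carb-ratio/correction-factor bolus advice formula. The formula is common clinical teaching, not taken from [MAL-p15ok], and all parameters are illustrative per-patient quantities that a CDSS would personalize:

    # Standard bolus-advice rule: meal component (carbs / carb ratio) plus
    # correction component ((glucose - target) / correction factor).
    # All parameter values are illustrative, not clinical recommendations.
    def bolus_advice(carbs_g, glucose_mgdl, target_mgdl=110,
                     carb_ratio=10.0, correction_factor=40.0):
        """Return suggested insulin units for a meal and glucose reading."""
        meal = carbs_g / carb_ratio                    # 1 U per carb_ratio grams
        correction = max(0.0, (glucose_mgdl - target_mgdl) / correction_factor)
        return round(meal + correction, 1)

    print(bolus_advice(carbs_g=60, glucose_mgdl=190))  # -> 8.0 units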

  • [DAT-p14ok] H. Müller, R. Reihs, K. Zatloukal, F. Jeanquartier, R. Merino-Martinez, D. van Enckevort, M. A. Swertz, and A. Holzinger, “State-of-the-Art and Future Challenges in the Integration of Biobank Catalogues“, in Smart Health, A. Holzinger, C. Röcker, and M. Ziefle, Eds., Springer International Publishing, 2015, vol. 8700, pp. 261-273.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Biobanks are essential for the realization of P4-medicine, hence indispensable for smart health. One of the grand challenges in biobank research is to close the research cycle in such a way that all the data generated by one research study can be consistently associated with the original samples, so that data and knowledge can be reused in other studies. A catalogue must provide the information hub connecting all relevant information sources. The key knowledge embedded in a biobank catalogue is the availability and quality of proper samples to perform a research project. Depending on the study type, the samples can reflect a healthy reference population, a cross-sectional representation of a certain group of people (healthy or with various diseases) or a certain disease type or stage. To overview and compare collections from different catalogues, we introduce visual analytics techniques, especially glyph-based visualization techniques, which were successfully applied for knowledge discovery of single biobank catalogues. In this paper, we describe the state-of-the-art in the integration of biobank catalogues, addressing the challenge of combining heterogeneous data sources in a unified and meaningful way, consequently enabling the discovery and visualization of data from different sources. Finally, we present open questions both in data integration and visualization of unified catalogues and propose future research in data integration with a linked data approach and the fusion of multi-level glyph and network visualization. [Knowledge Discovery, Data Management, Data integration in the life sciences]

    @incollection{DAT-p14ok,
       year = {2015},
       author = {Müller, Heimo and Reihs, Robert and Zatloukal, Kurt and Jeanquartier, Fleur and Merino-Martinez, Roxana and van Enckevort, David and Swertz, Morris A and Holzinger, Andreas},
       title = {State-of-the-Art and Future Challenges in the Integration of Biobank Catalogues},
       booktitle = {Smart Health},
       editor = {Holzinger, Andreas and Röcker, Carsten and Ziefle, Martina},
       publisher = {Springer International Publishing},
       volume = {8700},
       pages = {261--273},
       abstract = {Biobanks are essential for the realization of P4-medicine, hence indispensable for smart health. One of the grand challenges in biobank research is to close the research cycle in such a way that all the data generated by one research study can be consistently associated with the original samples, so that data and knowledge can be reused in other studies. A catalogue must provide the information hub connecting all relevant information sources. The key knowledge embedded in a biobank catalogue is the availability and quality of proper samples to perform a research project. Depending on the study type, the samples can reflect a healthy reference population, a cross-sectional representation of a certain group of people (healthy or with various diseases) or a certain disease type or stage. To overview and compare collections from different catalogues, we introduce visual analytics techniques, especially glyph-based visualization techniques, which were successfully applied for knowledge discovery of single biobank catalogues. In this paper, we describe the state-of-the-art in the integration of biobank catalogues, addressing the challenge of combining heterogeneous data sources in a unified and meaningful way, consequently enabling the discovery and visualization of data from different sources. Finally, we present open questions both in data integration and visualization of unified catalogues and propose future research in data integration with a linked data approach and the fusion of multi-level glyph and network visualization. [Knowledge Discovery, Data Management, Data integration in the life sciences]},
       keywords = {Knowledge discovery, Data Management, Data Integration},
       doi = {10.1007/978-3-319-16226-3_11},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1070550&pCurrPk=84459}
    }
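
    The integration challenge described in [DAT-p14ok], combining heterogeneous catalogue sources into one meaningful view, can be illustrated with a toy pandas sketch; both schemas and all values are invented:

    # Two biobanks describe the same facts with different field names, units
    # and vocabularies; they must be mapped onto one unified schema before
    # any joint visualization. Everything here is invented for illustration.
    import pandas as pd

    biobank_a = pd.DataFrame({"collection": ["A1"], "n_samples": [1200],
                              "material": ["serum"]})
    biobank_b = pd.DataFrame({"name": ["B7"], "size": ["1.5k"],
                              "sample_type": ["SERUM"]})

    unified = pd.concat([
        biobank_a.rename(columns={"collection": "id", "n_samples": "samples"}),
        biobank_b.assign(size=lambda d: (d["size"].str.rstrip("k")
                                          .astype(float) * 1000).astype(int))
                 .rename(columns={"name": "id", "size": "samples",
                                  "sample_type": "material"}),
    ], ignore_index=True)
    unified["material"] = unified["material"].str.lower()  # harmonize vocabulary
    print(unified)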

  • [DAT-e30ok] “DATA 2015 – Proceedings of 4th International Conference on Data Management Technologies and Applications, Colmar, Alsace, France, 20-22 July, 2015“, in DATA 2015, M. Helfert, A. Holzinger, O. Belo, and C. Francalanci, Eds., Lisbon: SciTePress, 2015.
    [BibTeX] [Abstract] [Download PDF]

    This volume contains the proceedings of the 4th International Conference on Data Technologies and Applications (DATA 2015), which is sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC), co-organized by the University of Haute Alsace, and held in cooperation with ACM SIGMIS, the ACM Special Interest Group on Management Information Systems. This conference brings together researchers and practitioners interested in databases, data warehousing, data mining, data management, data security and other aspects of information systems and technology involving advanced applications of data. The high quality of the DATA 2015 program is enhanced by the three keynote lectures, delivered by distinguished speakers who are renowned experts in their fields: Michele Sebag (Laboratoire de Recherche en Informatique, CNRS, France), John Domingue (The Open University, United Kingdom) and Paul Longley (University College London, United Kingdom). The meeting is complemented with the Special Session on Knowledge Discovery meets Information Systems: Applications of Big Data Analytics and BI – methodologies, techniques and tools (KomIS). DATA 2015 received 70 paper submissions, including the special session, from 32 countries on all continents, of which 44% were orally presented (20% as full papers). In order to evaluate each submission, a double-blind paper review was performed by the Program Committee. The program for this conference required the dedicated effort of many people. Firstly, we must thank the authors, whose research efforts are herewith recorded. Next, we thank the members of the Program Committee and the auxiliary reviewers for their diligent and professional reviewing. We would also like to deeply thank the invited speakers for their invaluable contribution and for taking the time to prepare their talks. Finally, a word of appreciation for the hard work of the INSTICC team; organizing a conference of this level is a task that can only be achieved by the collaborative effort of a dedicated and highly capable team. A successful conference involves more than paper presentations; it is also a meeting place, where ideas about new research projects and other ventures are discussed and debated. Therefore, a social event & banquet was arranged for the evening of July 21st (Tuesday) in order to promote this kind of social networking.

    @inproceedings{DAT-e30ok,
       year = {2015},
       title = {DATA 2015 - Proceedings of 4th International Conference on Data Management Technologies and Applications, Colmar, Alsace, France, 20-22 July, 2015},
       booktitle = {DATA 2015},
       editor = {Helfert, Markus and Holzinger, Andreas and Belo, Orlando and Francalanci, Chiara},
       publisher = {SciTePress},
       address = {Lisbon},
       abstract = {This volume contains the proceedings of the 4th International Conference on Data Technologies and Applications (DATA 2015), which is sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC), co-organized by the University of Haute Alsace, and held in cooperation with ACM SIGMIS, the ACM Special Interest Group on Management Information Systems. This conference brings together researchers and practitioners interested in databases, data warehousing, data mining, data management, data security and other aspects of information systems and technology involving advanced applications of data. The high quality of the DATA 2015 program is enhanced by the three keynote lectures, delivered by distinguished speakers who are renowned experts in their fields: Michele Sebag (Laboratoire de Recherche en Informatique, CNRS, France), John Domingue (The Open University, United Kingdom) and Paul Longley (University College London, United Kingdom). The meeting is complemented with the Special Session on Knowledge Discovery meets Information Systems: Applications of Big Data Analytics and BI - methodologies, techniques and tools (KomIS). DATA 2015 received 70 paper submissions, including the special session, from 32 countries on all continents, of which 44\% were orally presented (20\% as full papers). In order to evaluate each submission, a double-blind paper review was performed by the Program Committee. The program for this conference required the dedicated effort of many people. Firstly, we must thank the authors, whose research efforts are herewith recorded. Next, we thank the members of the Program Committee and the auxiliary reviewers for their diligent and professional reviewing. We would also like to deeply thank the invited speakers for their invaluable contribution and for taking the time to prepare their talks. Finally, a word of appreciation for the hard work of the INSTICC team; organizing a conference of this level is a task that can only be achieved by the collaborative effort of a dedicated and highly capable team. A successful conference involves more than paper presentations; it is also a meeting place, where ideas about new research projects and other ventures are discussed and debated. Therefore, a social event & banquet was arranged for the evening of July 21st (Tuesday) in order to promote this kind of social networking.},
       url = {http://dblp.uni-trier.de/db/conf/data/data2015.html}
    }

  • [DAT-e29ok] M. S. Obaidat, A. Holzinger, and J. Filipe, Eds., E-Business and Telecommunications – 11th International Joint Conference, ICETE 2014, Vienna, Austria, August 28-30, 2014, Revised Selected Papers, Communications in Computer and Information Science vol. 554, Springer, 2015.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The present book includes extended and revised versions of a set of selected best papers from the 11th International Joint Conference on e-Business and Telecommunications (ICETE), which was held in August 2014, in Vienna, Austria. This conference reflects a continuing effort to increase the dissemination of recent research results among professionals who work in the areas of e-business and telecommunications. ICETE is a joint international conference integrating four major areas of knowledge that are divided into six corresponding conferences: DCNET (International Conference on Data Communication Networking), ICE-B (International Conference on e-Business), OPTICS (International Conference on Optical Communication Systems), SECRYPT (International Conference on Security and Cryptography), SIGMAP (International Conference on Signal Processing and Multimedia) and WINSYS (International Conference on Wireless Information Systems). The program of this joint conference included several outstanding keynote lectures presented by internationally renowned distinguished researchers who are experts in the various ICETE areas. Their keynote speeches contributed to heighten the overall quality of the program and the significance of the theme of the conference. The conference topics encompass a broad spectrum in the key areas of e-business and telecommunications. This wide-view reporting made ICETE appealing to a global audience of engineers, scientists, business practitioners, ICT managers, and policy experts. The papers accepted and presented at the conference demonstrated a number of new and innovative solutions for e-business and telecommunication networks and systems, showing that the technical problems in these closely related fields are challenging and worth approaching in an interdisciplinary perspective such as that promoted by ICETE. ICETE 2014 received 328 papers in total, with contributions from 56 different countries on all continents, which demonstrates its success and global dimension. A double-blind paper evaluation method was used: each paper was blindly reviewed by at least two experts from the International Program Committee. In fact, most papers had three reviews or more. The selection process followed strict criteria for all tracks. As a result, only 37 papers were accepted and orally presented at ICETE as full papers (11% of submissions) and 59 as short papers (18% of submissions). Additionally, 66 papers were accepted for poster presentation. With these acceptance ratios, ICETE 2014 continues the tradition of previous ICETE conferences as a distinguished and high-quality conference. We hope that you will find this collection of the extended versions of the best ICETE 2014 papers an excellent source of inspiration as well as a helpful reference for research in the aforementioned areas.

    @proceedings{DAT-e29ok,
      editor    = {Mohammad S. Obaidat and
                   Andreas Holzinger and
                   Joaquim Filipe},
      title     = {E-Business and Telecommunications - 11th International Joint Conference,
                   {ICETE} 2014, Vienna, Austria, August 28-30, 2014, Revised Selected
                   Papers},
      series    = {Communications in Computer and Information Science},
      volume    = {554},
      publisher = {Springer},
      year      = {2015},
      url       = {http://dx.doi.org/10.1007/978-3-319-25915-4},
      doi       = {10.1007/978-3-319-25915-4},
      isbn      = {978-3-319-25914-7},
      abstract  = {The present book includes extended and revised versions of a set of selected best
    papers from the 11th International Joint Conference on e-Business and Telecommunications (ICETE), which was held in August 2014, in Vienna, Austria. This conference reflects a continuing effort to increase the dissemination of recent research results
    among professionals who work in the areas of e-business and telecommunications.
    ICETE is a joint international conference integrating four major areas of knowledge
    that are divided into six corresponding conferences: DCNET (International Conference
    on Data Communication Networking), ICE-B (International Conference on
    e-Business), OPTICS (International Conference on Optical Communication Systems),
    SECRYPT (International Conference on Security and Cryptography), SIGMAP
    (International Conference on Signal Processing and Multimedia) and WINSYS
    (International Conference on Wireless Information Systems).
    The program of this joint conference included several outstanding keynote lectures
    presented by internationally renowned distinguished researchers who are experts in the
    various ICETE areas. Their keynote speeches contributed to heighten the overall
    quality of the program and the significance of the theme of the conference.
    The conference topics encompass a broad spectrum in the key areas of e-business and
    telecommunications. This wide-view reporting made ICETE appealing to a global audience of engineers, scientists, business practitioners, ICT managers, and policy experts.
    The papers accepted and presented at the conference demonstrated a number of new and
    innovative solutions for e-business and telecommunication networks and systems,
    showing that the technical problems in these closely related fields are challenging and
    worth approaching in an interdisciplinary perspective such as that promoted by ICETE.
    ICETE 2014 received 328 papers in total, with contributions from 56 different
    countries, in all continents, which demonstrate its success and global dimension.
    A double-blind paper evaluation method was used: Each paper was blindly reviewed
    by at least two experts from the International Program Committee. In fact, most papers
    had three reviews or more. The selection process followed strict criteria for all tracks.
    As a result only 37 papers were accepted and orally presented at ICETE as full papers
    (11 \% of submissions) and 59 as short papers (18 \% of submissions). Additionally, 66
    papers were accepted for poster presentation. With these acceptance ratios, ICETE
    2014 continues the tradition of previous ICETE conferences as a distinguished and
    high-quality conference.
    We hope that you will find this collection of the extended versions of the best
    ICETE 2014 papers an excellent source of inspiration as well as a helpful reference for
    research in the aforementioned areas}
    }

  • [DAT-e28ok] A. Holzinger, J. Cardoso, J. Cordeiro, T. Libourel, L. A. Maciaszek, and M. van Sinderen, Eds., Software Technologies – 9th International Joint Conference, ICSOFT 2014, Vienna, Austria, August 29-31, 2014, Revised Selected Papers, Communications in Computer and Information Science vol. 555, Springer, 2015.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The present book includes extended and revised versions of a set of selected papers from the 9th International Joint Conference on Software Technologies (ICSOFT 2014), which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and co-organized by the Austrian Computer Society and the Vienna University of Technology–TU Wien (TUW). ICSOFT 2014 was held in cooperation with the IEICE Special Interest Group on Software Interprise Modelling (SWIM) and technically co-sponsored by the IEEE Computer Society and IEEE Computer Society’s Technical Council on Software Engineering (TCSE). The purpose of ICSOFT is to bring together researchers, engineers, and practitioners working in areas that are related to software engineering and applications. ICSOFT is composed of two co-located conferences: ICSOFT-PT, which specializes in new software paradigm trends, and ICSOFT-EA, which specializes in mainstream software engineering and applications. Together, these conferences aim at becoming a major meeting point for software engineers worldwide. ICSOFT-PT (9th International Conference on Software Paradigm Trends) focused on four main paradigms that have been intensively studied during the last decade for software and system design, namely, Models, Aspects, Services, and Context. ICSOFT-EA (9th International Conference on Software Engineering and Applications) had a practical focus on software engineering and applications. The conference tracks were Enterprise Software Technologies, Software Engineering and Systems Security, Distributed Systems, and Software Project Management. ICSOFT 2014 received 145 paper submissions from 46 countries on all continents, of which 14% were presented as full papers. To evaluate each submission, a double-blind paper evaluation method was used: each paper was reviewed by at least two internationally known experts from the ICSOFT Program Committee. The quality of the papers presented here stems directly from the dedicated effort of the Steering and Scientific Committees and the INSTICC team responsible for handling all secretariat and logistics details. We are further indebted to the conference keynote speakers, who presented their valuable insights and visions regarding areas of interest to the conference. Finally, we would like to thank all authors and attendants for their contribution to the conference and the scientific community. We hope that you will find these papers interesting and consider them a helpful reference in the future when addressing any of the aforementioned research areas.

    @proceedings{DAT-e28ok,
      editor    = {Andreas Holzinger and
                   Jorge Cardoso and
                   Jos{\'{e}} Cordeiro and
                   Th{\'{e}}r{\`{e}}se Libourel and
                   Leszek A. Maciaszek and
                   Marten van Sinderen},
      title     = {Software Technologies - 9th International Joint Conference, {ICSOFT}
                   2014, Vienna, Austria, August 29-31, 2014, Revised Selected Papers},
      series    = {Communications in Computer and Information Science},
      volume    = {555},
      publisher = {Springer},
      year      = {2015},
      url       = {http://dx.doi.org/10.1007/978-3-319-25579-8},
      doi       = {10.1007/978-3-319-25579-8},
      isbn      = {978-3-319-25578-1},
      abstract  = {The present book includes extended and revised versions of a set of selected papers
    from the 9th International Joint Conference on Software Technologies (ICSOFT 2014),
    which was sponsored by the Institute for Systems and Technologies of Information,
    Control and Communication (INSTICC) and co-organized by the Austrian Computer
    Society and the Vienna University of Technology–TU Wien (TUW). ICSOFT 2014
    was held in cooperation with the IEICE Special Interest Group on Software Interprise
    Modelling (SWIM) and technically co-sponsored by the IEEE Computer Society and
    IEEE Computer Society’s Technical Council on Software Engineering (TCSE).
    The purpose of ICSOFT is to bring together researchers, engineers, and practitioners
    working in areas that are related to software engineering and applications. ICSOFT is
    composed of two co-located conferences: ICSOFT-PT, which specializes in new
    software paradigm trends, and ICSOFT-EA, which specializes in mainstream software
    engineering and applications. Together, these conferences aim at becoming a major
    meeting point for software engineers worldwide.
    ICSOFT-PT (9th International Conference on Software Paradigm Trends) focused
    on four main paradigms that have been intensively studied during the last decade for
    software and system design, namely, Models, Aspects, Services, and Context.
    ICSOFT-EA (9th International Conference on Software Engineering and Applications) had a practical focus on software engineering and applications. The conference
    tracks were Enterprise Software Technologies, Software Engineering and Systems
    Security, Distributed Systems, and Software Project Management.
    ICSOFT 2014 received 145 paper submissions from 46 countries in all continents,
    of which 14 % were presented as full papers. To evaluate each submission, a
    double-blind paper evaluation method was used: each paper was reviewed by at least
    two internationally known experts from the ICSOFT Program Committee.
    The quality of the papers presented here stems directly from the dedicated effort
    of the Steering and Scientific Committees and the INSTICC team responsible for
    handling all secretariat and logistics details. We are further indebted to the conference
    keynote speakers, who presented their valuable insights and visions regarding areas of
    interest to the conference. Finally, we would like to thank all authors and attendants for
    their contribution to the conference and the scientific community.
    We hope that you will find these papers interesting and consider them a helpful
    reference in the future when addressing any of the aforementioned research areas}
    }

  • [DAT-e27ok] M. Helfert, A. Holzinger, M. Ziefle, A. Fred, J. O’Donoghue, and C. Röcker, Information and Communication Technologies for Ageing Well and e-Health, Cham, New York, Tokyo: Springer International, 2015.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    This book contains the proceedings of the International Conference on Information and Communication Technologies for Ageing Well and e-Health (ICT4AgeingWell 2015), which was organized and sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in cooperation with the International Society for Telemedicine & eHealth – ISfTeH, European Health Telematics Association – EHTEL and AAL Programme. ICT4AgeingWell aims to be an annual meeting point for those who study and apply Information and Communication Technologies for improving the quality of life of the elderly and for helping people stay healthy, independent and active at work or in their community along their whole life. ICT4AgeingWell facilitates the exchange of information and dissemination of best practices, innovation and technical improvements in the fields of age-related health care, education, social coordination and ambient assisted living. This conference brought together researchers and practitioners interested in methodologies and applications related to the Information and Communication Technologies for Ageing Well and e-Health fields. It had five main topic areas, covering different aspects, including Ambient Assisted Living, Telemedicine and E-Health, Monitoring, Accessibility and User Interfaces, Robotics and Devices for Independent Living and HCI for Ageing Populations. We believe these proceedings demonstrate new and innovative solutions, and highlight technical problems in each field that are challenging and worthwhile. ICT4AgeingWell 2015 received 45 paper submissions from 28 countries in all continents, of which 27 % were accepted as full papers. The high quality of the papers received imposed difficult choices in the review process. To evaluate each submission, a double-blind paper review was performed by the Program Committee, whose members are highly qualified independent researchers in the ICT4AgeingWell 2015 topic areas. The high quality of the ICT4AgeingWell 2015 programme was enhanced by four keynote lectures, delivered by experts in their fields, including (alphabetically): Juan Carlos Augusto (School of Science and Technology, Middlesex University, United Kingdom), Thomas Hermann (CITEC – Center of Excellence Cognitive Interaction Technology, Bielefeld University, Germany), Victor Chang (Leeds Beckett University, United Kingdom) and William Molloy (Centre for Gerontology and Rehabilitation, School of Medicine, UCC, Ireland). The meeting was also complemented with a Doctoral Consortium on “Information and Communication Technologies for Ageing Well and e-Health” chaired by Ana Fred, Instituto de Telecomunicações / IST, Portugal. All presented papers will be submitted for indexation by Thomson Reuters Conference Proceedings Citation Index (ISI), INSPEC, DBLP, EI (Elsevier Index) and Scopus, as well as being made available at the SCITEPRESS Digital Library. Furthermore, a short list of presented papers will be selected and their authors will be invited to submit an extended version of their papers to be included in a forthcoming book of ICT4AgeingWell Selected Papers to be published in CCIS Series by Springer Verlag during 2015.

    @book{DAT-e27ok,
       year = {2015},
       author = {Helfert, Markus and Holzinger, Andreas and Ziefle, Martina and Fred, Ana and O'Donoghue, John and Röcker, Carsten},
       title = {Information and Communication Technologies for Ageing Well and e-Health},
       publisher = {Springer International},
       address = {Cham, New York, Tokyo},
       abstract = {This book contains the proceedings of the International Conference on Information and Communication Technologies for Ageing Well and e-Health (ICT4AgeingWell 2015) which was organized and sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in cooperation with the International Society for Telemedicine & eHealth - ISfTeH, European Health Telematics Association - EHTEL and AAL Programme.
    ICT4AgeingWell aims to be an annual meeting point for those that study and apply Information and Communication Technologies for improving the quality of life of the elderly and for helping people stay healthy, independent and active at work or in their community along their whole life. ICT4AgeingWell facilitates the exchange of information and dissemination of best practices, innovation and technical improvements in the fields of age-related health care, education, social coordination and ambient assisted living.
    This conference brought together researchers and practitioners interested in methodologies and applications related to the Information and Communication Technologies for Ageing Well and e-Health fields. It had five main topic areas, covering different aspects, including Ambient Assisted Living, Telemedicine and E-Health, Monitoring, Accessibility and User Interfaces, Robotics and Devices for Independent Living and HCI for Ageing Populations. We believe these proceedings demonstrate new and innovative solutions, and highlight technical problems in each field that are challenging and worthwhile.
    ICT4AgeingWell 2015 received 45 paper submissions from 28 countries in all continents, of which 27\% were accepted as full papers. The high quality of the papers received imposed difficult choices in the review process. To evaluate each submission, a double-blind paper review was performed by the Program Committee, whose members are highly qualified independent researchers in the ICT4AgeingWell 2015 topic areas.
    The high quality of the ICT4AgeingWell 2015 programme was enhanced by four keynote lectures, delivered by experts in their fields, including (alphabetically): Juan Carlos Augusto (School of Science and Technology, Middlesex University, United Kingdom), Thomas Hermann (CITEC - Center of Excellence Cognitive Interaction Technology, Bielefeld University, Germany), Victor Chang (Leeds Beckett University, United Kingdom) and William Molloy (Centre for Gerontology and Rehabilitation, School of Medicine, UCC, Ireland).
    The meeting was also complemented with a Doctoral Consortium on “Information and Communication Technologies for Ageing Well and e-Health” chaired by Ana Fred, Instituto de Telecomunicações / IST, Portugal.
    All presented papers will be submitted for indexation by Thomson Reuters Conference Proceedings Citation Index (ISI), INSPEC, DBLP, EI (Elsevier Index) and Scopus, as well as being made available at the SCITEPRESS Digital Library. Furthermore, a short list of presented papers will be selected and their authors will be invited to submit an extended version of their papers to be included in a forthcoming book of ICT4AgeingWell Selected Papers to be published in CCIS Series by Springer Verlag during 2015.},
       keywords = {Health Informatics, artificial intelligence, user interfaces and human-computer interaction},
       doi = {10.1007/978-3-319-27695-3},
       url = {http://link.springer.com/book/10.1007/978-3-319-27695-3}
    }

  • [MAL-e26ok] M. E. Renda, M. Bursa, A. Holzinger, and S. Khuri, Information Technology in Bio- and Medical Informatics, Lecture Notes in Computer Science LNCS 9267, Heidelberg, Berlin: Springer, 2015.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Biomedical engineering and medical informatics represent challenging and rapidly growing areas. Applications of information technology in these areas are of paramount importance. Building on the success of ITBAM 2010, ITBAM 2011, ITBAM 2012, ITBAM 2013, and ITBAM 2014, the aim of the sixth ITBAM conference was to continue bringing together scientists, researchers, and practitioners from different disciplines, namely, from mathematics, computer science, bioinformatics, biomedical engineering, medicine, biology, and different fields of life sciences, so they can present and discuss their research results in bioinformatics and medical informatics. We hope that ITBAM will continue serving as a platform for fruitful discussions between all attendees, where participants can exchange their recent results, identify future directions and challenges, initiate possible collaborative research, and develop common languages for solving problems in the realm of biomedical engineering, bioinformatics, and medical informatics. The importance of computer-aided diagnosis and therapy continues to draw attention worldwide and has laid the foundations for modern medicine with excellent potential for promising applications in a variety of fields, such as telemedicine, Web-based healthcare, analysis of genetic information, and personalized medicine. Following a thorough peer-review process, we selected nine long papers for oral presentation and two short papers for the poster session for the sixth annual ITBAM conference. The organizing committee would like to thank the reviewers for their excellent job. The articles can be found in the proceedings and are divided into the following sections: Medical Terminology and Clinical Processes and Machine Learning in Biomedicine. The papers show how broad the spectrum of topics in applications of information technology to biomedical engineering and medical informatics is. The editors would like to thank all the participants for their high-quality contributions and Springer for publishing the proceedings of this conference.

    @book{MAL-e26ok,
       year = {2015},
       author = {Renda, M. Elena and Bursa, Miroslav and Holzinger, Andreas and Khuri, Sami},
       title = {Information Technology in Bio- and Medical Informatics, Lecture Notes in Computer Science LNCS 9267},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
   abstract = {Biomedical engineering and medical informatics represent challenging and rapidly growing areas. Applications of information technology in these areas are of paramount importance. Building on the success of ITBAM 2010, ITBAM 2011, ITBAM 2012, ITBAM 2013, and ITBAM 2014, the aim of the sixth ITBAM conference was to continue bringing together scientists, researchers, and practitioners from different disciplines, namely, from mathematics, computer science, bioinformatics, biomedical engineering, medicine, biology, and different fields of life sciences, so they can present and discuss their research results in bioinformatics and medical informatics. We hope that ITBAM will continue serving as a platform for fruitful discussions between all attendees, where participants can exchange their recent results, identify future directions and challenges, initiate possible collaborative research, and develop common languages for solving problems in the realm of biomedical engineering, bioinformatics, and medical informatics. The importance of computer-aided diagnosis and therapy continues to draw attention worldwide and has laid the foundations for modern medicine with excellent potential for promising applications in a variety of fields, such as telemedicine, Web-based healthcare, analysis of genetic information, and personalized medicine. Following a thorough peer-review process, we selected nine long papers for oral presentation and two short papers for the poster session for the sixth annual ITBAM conference. The organizing committee would like to thank the reviewers for their excellent job. The articles can be found in the proceedings and are divided into the following sections: Medical Terminology and Clinical Processes and Machine Learning in Biomedicine. The papers show how broad the spectrum of topics in applications of information technology to biomedical engineering and medical informatics is. The editors would like to thank all the participants for their high-quality contributions and Springer for publishing the proceedings of this conference.},
       doi = {10.1007/978-3-319-22741-2},
       url = {http://dx.doi.org/10.1007/978-3-319-22741-2}
    }

  • [DAT-e25ok] PhyCS 2015 – Proceedings of the 2nd International Conference on Physiological Computing Systems, ESEO, Angers, Loire Valley, France, 11 – 13 February, 2015, SciTePress, 2015.
    [BibTeX]
    @proceedings{DAT-e25ok,
      editor    = {Hugo Pl{\'{a}}cido da Silva and
                   Pierre Chauvet and
                   Andreas Holzinger and
                   Stephen H. Fairclough and
                   Dennis Majoe},
      title     = {PhyCS 2015 - Proceedings of the 2nd International Conference on Physiological
                   Computing Systems, ESEO, Angers, Loire Valley, France, 11 - 13 February,
                   2015},
      publisher = {SciTePress},
      year      = {2015},
      isbn      = {978-989-758-085-7}
    }

2014

  • [j49ok] I. Kožuh, M. Hintermair, A. Holzinger, Z. Volčič, and M. Debevc, “Enhancing universal access: deaf and hard of hearing people on social networking sites“, Universal Access in the Information Society, vol. 14, iss. 4, pp. 537-545, 2014.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Despite numerous studies into the online activities of deaf and hard of hearing (D/HH) users, there has been limited research into their experiences on social networking sites (SNSs), a domain where psychology and computer science intersect. The purpose of this study is to illustrate how one can enhance universal access for D/HH users using the example of SNSs. A model for examining the experiences and preferences of D/HH users of SNSs has been proposed. The model consists of three identity-relevant aspects: (1) belonging to online Deaf communities, (2) communication affinity/preferences for sign and/or written language, and (3) the stigma associated with hearing loss. Based on these aspects, a questionnaire was developed and applied to a study with 46 participants. The findings revealed that the motivation to communicate on SNSs is positively associated with identification with online Deaf communities, an affinity for communication in written language and an affinity/preference for communication in sign language. Better reading comprehension skills, crucial for written communication, are associated with less stigmatic experiences with regard to hearing loss. The model and the findings of this study can help improve understanding of D/HH users’ online social interactions and can be used for educational purposes. It may contribute to the discussion of integrating SNSs as communication tools in personal learning environments, which can be an advantage for universal access.

    @article{j49ok,
       year = {2014},
       author = {Kožuh, Ines and Hintermair, Manfred and Holzinger, Andreas and Volčič, Zala and Debevc, Matjaž},
       title = {Enhancing universal access: deaf and hard of hearing people on social networking sites},
       journal = {Universal Access in the Information Society},
       volume = {14},
       number = {4},
       pages = {537-545},
   abstract = {Despite numerous studies into the online activities of deaf and hard of hearing (D/HH) users, there has been limited research into their experiences on social networking sites (SNSs), a domain where psychology and computer science intersect. The purpose of this study is to illustrate how one can enhance universal access for D/HH users using the example of SNSs. A model for examining the experiences and preferences of D/HH users of SNSs has been proposed. The model consists of three identity-relevant aspects: (1) belonging to online Deaf communities, (2) communication affinity/preferences for sign and/or written language, and (3) the stigma associated with hearing loss. Based on these aspects, a questionnaire was developed and applied to a study with 46 participants. The findings revealed that the motivation to communicate on SNSs is positively associated with identification with online Deaf communities, an affinity for communication in written language and an affinity/preference for communication in sign language. Better reading comprehension skills, crucial for written communication, are associated with less stigmatic experiences with regard to hearing loss. The model and the findings of this study can help improve understanding of D/HH users’ online social interactions and can be used for educational purposes. It may contribute to the discussion of integrating SNSs as communication tools in personal learning environments, which can be an advantage for universal access.},
       keywords = {universal access, health informatics},
       doi = {10.1007/s10209-014-0354-3},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1394253&pCurrPk=87193}
    }

  • [MAL-e24ok] A. Holzinger, C. Röcker, and M. Ziefle, Smart Health – state-of-the-art and beyond. Springer Lecture Notes in Computer Science LNCS 8700, Heidelberg, Berlin: Springer, 2014.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Prolonged life expectancy along with the increasing complexity of medicine and health services raises health costs worldwide dramatically. Whilst the smart health concept has much potential to support the concept of the emerging P4-medicine (preventive, participatory, predictive, and personalized), such high-tech medicine produces large amounts of high-dimensional, weakly-structured data sets and massive amounts of unstructured information. All these technological approaches along with “big data” are turning the medical sciences into a data-intensive science. To keep pace with the growing amounts of complex data, smart hospital approaches are a commandment of the future, necessitating context aware computing along with advanced interaction paradigms in new physical-digital ecosystems. The very successful synergistic combination of methodologies and approaches from Human-Computer Interaction (HCI) and Knowledge Discovery and Data Mining (KDD) offers ideal conditions for the vision to support human intelligence with machine learning. The papers selected for this volume focus on hot topics in smart health; they discuss open problems and future challenges in order to provide a research agenda to stimulate further research and progress.

    @book{MAL-e24ok,
       year = {2014},
       author = {Holzinger, Andreas and Röcker, Carsten and Ziefle, Martina},
       title = {Smart Health - state-of-the-art and beyond. Springer Lecture Notes in Computer Science LNCS 8700},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       abstract = {Prolonged life expectancy along with the increasing complexity of medicine and health services raises health costs worldwide dramatically. Whilst the smart health concept has much potential to support the concept of the emerging P4-medicine (preventive, participatory, predictive, and personalized), such high-tech medicine produces large amounts of high-dimensional, weakly-structured data sets and massive amounts of unstructured information. All these technological approaches along with “big data” are turning the medical sciences into a data-intensive science. To keep pace with the growing amounts of complex data, smart hospital approaches are a commandment of the future, necessitating context aware computing along with advanced interaction paradigms in new physical-digital ecosystems.
    The very successful synergistic combination of methodologies and approaches from Human-Computer Interaction (HCI) and Knowledge Discovery and Data Mining (KDD) offers ideal conditions for the vision to support human intelligence with machine learning.
    The papers selected for this volume focus on hot topics in smart health; they discuss open problems and future challenges in order to provide a research agenda to stimulate further research and progress.},
       doi = {10.1007/978-3-319-16226-3},
       url = {http://dblp.uni-trier.de/db/series/lncs/lncs8700.html}
    }

  • [j48] A. Holzinger, “Trends in Interactive Knowledge Discovery for Personalized Medicine: Cognitive Science meets Machine Learning“, IEEE Intelligent Informatics Bulletin, vol. 15, iss. 1, pp. 6-14, 2014.
    [BibTeX] [Abstract] [Download PDF]

    A grand goal of future medicine is modelling the complexity of patients to tailor medical decisions, health practices and therapies to the individual patient. This trend towards personalized medicine produces unprecedented amounts of data, and even though human experts are excellent at pattern recognition in dimensions smaller than three, the problem is that most biomedical data is in dimensions much higher than three, making manual analysis difficult and often impossible. Experts in daily medical routine are decreasingly capable of dealing with the complexity of such data. Moreover, they are not interested in the data; they need knowledge and insight in order to support their work. Consequently, a big trend in computer science is to provide efficient, usable and useful computational methods, algorithms and tools to discover knowledge and to interactively gain insight into high-dimensional data. A synergistic combination of methodologies of two areas may be of great help here: Human–Computer Interaction (HCI) and Knowledge Discovery/Data Mining (KDD), with the goal of supporting human intelligence with machine learning. A trend in both disciplines is the acquisition and adaptation of representations that support efficient learning. Mapping higher dimensional data into lower dimensions is a major task in HCI, and a concerted effort of computational methods including recent advances from graph theory and algebraic topology may contribute to finding solutions. Moreover, much biomedical data is sparse, noisy and time-dependent, hence entropy is also amongst the promising topics. This paper provides a rough overview of the HCI-KDD approach and focuses on three future trends: graph-based mining, topological data mining and entropy-based data mining. [interactive machine learning] [a minimal entropy sketch follows the BibTeX entry below]

    @article{j48,
       year = {2014},
       author = {Holzinger, Andreas},
       title = {Trends in Interactive Knowledge Discovery for Personalized Medicine: Cognitive Science meets Machine Learning},
       journal = {IEEE Intelligent Informatics Bulletin},
       volume = {15},
       number = {1},
       pages = {6--14},
   abstract = {A grand goal of future medicine is modelling the complexity of patients to tailor medical decisions, health practices and therapies to the individual patient. This trend towards personalized medicine produces unprecedented amounts of data, and even though human experts are excellent at pattern recognition in dimensions smaller than three, the problem is that most biomedical data is in dimensions much higher than three, making manual analysis difficult and often impossible. Experts in daily medical routine are decreasingly capable of dealing with the complexity of such data. Moreover, they are not interested in the data; they need knowledge and insight in order to support their work. Consequently, a big trend in computer science is to provide efficient, usable and useful computational methods, algorithms and tools to discover knowledge and to interactively gain insight into high-dimensional data. A synergistic combination of methodologies of two areas may be of great help here: Human–Computer Interaction (HCI) and Knowledge Discovery/Data Mining (KDD), with the goal of supporting human intelligence with machine learning. A trend in both disciplines is the acquisition and adaptation of representations that support efficient learning. Mapping higher dimensional data into lower dimensions is a major task in HCI, and a concerted effort of computational methods including recent advances from graph theory and algebraic topology may contribute to finding solutions. Moreover, much biomedical data is sparse, noisy and time-dependent, hence entropy is also amongst the promising topics. This paper provides a rough overview of the HCI-KDD approach and focuses on three future trends: graph-based mining, topological data mining and entropy-based data mining. [interactive machine learning]},
       url = {http://www.comp.hkbu.edu.hk/~cib/2014/Dec/article2/iib_vol15no1_article2.pdf}
    }
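
    As a toy illustration of the entropy-based data mining named above (a sketch of the general idea, not code from the paper): Shannon entropy can be estimated by discretizing a signal into bins and summing -p log2 p over the empirical bin probabilities. The noisy sine wave and the bin count below are invented stand-ins for real biomedical data.

        import numpy as np

        def shannon_entropy(samples, bins=16):
            # Estimate Shannon entropy (in bits) of a 1-D signal via histogram binning.
            counts, _ = np.histogram(samples, bins=bins)
            p = counts / counts.sum()      # empirical bin probabilities
            p = p[p > 0]                   # drop empty bins (0 * log 0 := 0)
            return -np.sum(p * np.log2(p))

        # Invented stand-in for a biomedical time series: a noisy sine wave.
        rng = np.random.default_rng(42)
        t = np.linspace(0, 4 * np.pi, 1000)
        signal = np.sin(t) + 0.1 * rng.standard_normal(t.size)
        print(f"estimated entropy: {shannon_entropy(signal):.2f} bits")

    Time-dependent variants (e.g. sliding-window or sample entropy) follow the same pattern: discretize, estimate probabilities, and sum -p log p over a moving window.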

  • [j47] A. Holzinger, M. Dehmer, and I. Jurisica, “Knowledge Discovery and interactive Data Mining in Bioinformatics – State-of-the-Art, future challenges and research directions“, BMC Bioinformatics, vol. 15, iss. S6, p. I1, 2014.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The life sciences, biomedicine and health care are increasingly turning into data-intensive sciences. Particularly in bioinformatics and computational biology we face not only increased volume and a diversity of highly complex, multi-dimensional and often weakly-structured and noisy data, but also the growing need for integrative analysis and modeling. Due to the increasing trend towards personalized and precision medicine (P4 medicine: Predictive, Preventive, Participatory, Personalized), biomedical data today results from various sources in different structural dimensions, ranging from the microscopic world, and in particular from the omics world (e.g., from genomics, proteomics, metabolomics, lipidomics, transcriptomics, epigenetics, microbiomics, fluxomics, phenomics, etc.) to the macroscopic world (e.g., disease spreading data of populations in public health informatics). The challenge is not only to extract meaningful information from this data, but to gain knowledge, to discover previously unknown insight, look for patterns, and to make sense of the data.

    @article{j47,
       year = {2014},
       author = {Holzinger, Andreas and Dehmer, Matthias and Jurisica, Igor},
       title = {Knowledge Discovery and interactive Data Mining in Bioinformatics - State-of-the-Art, future challenges and research directions},
       journal = {BMC Bioinformatics},
       volume = {15},
       number = {S6},
       pages = {I1},
   abstract = {The life sciences, biomedicine and health care are increasingly turning into data-intensive sciences. Particularly in bioinformatics and computational biology we face not only increased volume and a diversity of highly complex, multi-dimensional and often weakly-structured and noisy data, but also the growing need for integrative analysis and modeling. Due to the increasing trend towards personalized and precision medicine (P4 medicine: Predictive, Preventive, Participatory, Personalized), biomedical data today results from various sources in different structural dimensions, ranging from the microscopic world, and in particular from the omics world (e.g., from genomics, proteomics, metabolomics, lipidomics, transcriptomics, epigenetics, microbiomics, fluxomics, phenomics, etc.) to the macroscopic world (e.g., disease spreading data of populations in public health informatics). The challenge is not only to extract meaningful information from this data, but to gain knowledge, to discover previously unknown insight, look for patterns, and to make sense of the data.},
   doi = {10.1186/1471-2105-15-S6-I1},
   url = {http://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-15-S6-I1}
    }

  • [p2] A. Holzinger, “On Topological Data Mining“, in Interactive Knowledge Discovery and Data Mining in Biomedical Informatics, Lecture Notes in Computer Science, LNCS 8401, A. Holzinger and I. Jurisica, Eds., Berlin Heidelberg: Springer, 2014, pp. 331-356.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Humans are very good at pattern recognition in dimensions of ≤ 3. However, most data, e.g. in the biomedical domain, is in dimensions much higher than 3, which makes manual analyses awkward, sometimes practically impossible. Actually, mapping higher dimensional data into lower dimensions is a major task in Human–Computer Interaction and Interactive Data Visualization, and a concerted effort including recent advances in computational topology may contribute to making sense of such data. Topology has its roots in the works of Euler and Gauss; however, for a long time it was part of theoretical mathematics. Within the last ten years computational topology has rapidly gained much interest amongst computer scientists. Topology is basically the study of abstract shapes and spaces and mappings between them. It originated from the study of geometry and set theory. Topological methods can be applied to data represented by point clouds, that is, finite subsets of the n-dimensional Euclidean space. We can think of the input as a sample of some unknown space which one wishes to reconstruct and understand, and we must distinguish between the ambient (embedding) dimension n, and the intrinsic dimension of the data. Whilst n is usually high, the intrinsic dimension, being of primary interest, is typically small. Therefore, knowing the intrinsic dimensionality of data can be seen as a first step towards understanding its structure. Consequently, applying topological techniques to data mining and knowledge discovery is a hot and promising future research area. [Advanced Knowledge Discovery] [a minimal point-cloud sketch follows the BibTeX entry below]

    @incollection{p2,
       year = {2014},
       author = {Holzinger, Andreas},
       title = {On Topological Data Mining},
       booktitle = {Interactive Knowledge Discovery and Data Mining in Biomedical Informatics, Lecture Notes in Computer Science, LNCS 8401},
       editor = {Holzinger, Andreas and Jurisica, Igor},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       pages = {331--356},
       keywords = {Computational Topology, Algebraic Topology, Data Mining, Topological Data Mining, Topological Text Mining, Graph-based Text Mining},
   abstract = {Humans are very good at pattern recognition in dimensions of ≤ 3. However, most data, e.g. in the biomedical domain, is in dimensions much higher than 3, which makes manual analyses awkward, sometimes practically impossible. Actually, mapping higher dimensional data into lower dimensions is a major task in Human–Computer Interaction and Interactive Data Visualization, and a concerted effort including recent advances in computational topology may contribute to making sense of such data. Topology has its roots in the works of Euler and Gauss; however, for a long time it was part of theoretical mathematics. Within the last ten years computational topology has rapidly gained much interest amongst computer scientists. Topology is basically the study of abstract shapes and spaces and mappings between them. It originated from the study of geometry and set theory. Topological methods can be applied to data represented by point clouds, that is, finite subsets of the n-dimensional Euclidean space. We can think of the input as a sample of some unknown space which one wishes to reconstruct and understand, and we must distinguish between the ambient (embedding) dimension n, and the intrinsic dimension of the data. Whilst n is usually high, the intrinsic dimension, being of primary interest, is typically small. Therefore, knowing the intrinsic dimensionality of data can be seen as a first step towards understanding its structure. Consequently, applying topological techniques to data mining and knowledge discovery is a hot and promising future research area. [Advanced Knowledge Discovery]},
       doi = {10.1007/978-3-662-43968-5_19},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=755111&pCurrPk=79003}
    }
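
    As a toy illustration of the point-cloud view described above (a generic sketch, not code from the chapter): the 0-dimensional information of a Vietoris-Rips complex at scale eps is simply the set of connected components obtained by joining all points closer than eps, which a union-find structure computes directly. The two-blob cloud and the scales below are invented assumptions.

        import numpy as np

        def component_count(points, eps):
            # Number of connected components when points within eps are joined --
            # the 0-dimensional picture of a Vietoris-Rips complex at scale eps.
            n = len(points)
            parent = list(range(n))
            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]  # path halving
                    i = parent[i]
                return i
            for i in range(n):
                for j in range(i + 1, n):
                    if np.linalg.norm(points[i] - points[j]) <= eps:
                        parent[find(i)] = find(j)  # merge the two components
            return len({find(i) for i in range(n)})

        # Invented point cloud: two well-separated noisy blobs in the plane.
        rng = np.random.default_rng(0)
        cloud = np.vstack([rng.normal(0.0, 0.2, (30, 2)),
                           rng.normal(3.0, 0.2, (30, 2))])
        for eps in (0.3, 1.0, 4.0):
            print(f"eps={eps}: {component_count(cloud, eps)} component(s)")

    Tracking how the component count changes as eps grows is exactly the idea behind 0-dimensional persistent homology.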

  • [PRE-SD02] A. Holzinger, C. Stocker, and M. Dehmer, “Big Complex Biomedical Data: Towards a Taxonomy of Data“, in Communications in Computer and Information Science CCIS 455, M. S. Obaidat and J. Filipe, Eds., Berlin Heidelberg: Springer, 2014, pp. 3-18.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Professionals in the Life Sciences are faced with increasing masses of complex data sets. Very little of this data is structured, which is where traditional information retrieval methods work perfectly. A large portion of data is weakly structured; however, the majority falls into the category of unstructured data. To discover previously unknown knowledge from this data, we need advanced and novel methods to deal with the data from two aspects: time (e.g. information entropy) and space (e.g. computational topology). In this paper we show some examples of biomedical data and discuss a taxonomy of data with specifics on medical data sets. [Preprocessing Data, Physics of Data, Specifics of biomedical data, fundamentals of data]

    @incollection{PRE-SD02,
       year = {2014},
       author = {Holzinger, Andreas and Stocker, Christof and Dehmer, Matthias},
       title = {Big Complex Biomedical Data: Towards a Taxonomy of Data},
       booktitle = {Communications in Computer and Information Science CCIS 455},
       editor = {Obaidat, Mohammad S. and Filipe, Joaquim},
   publisher = {Springer},
       address = {Berlin Heidelberg},
       pages = {3--18},
   abstract = {Professionals in the Life Sciences are faced with increasing masses of complex data sets. Very little of this data is structured, which is where traditional information retrieval methods work perfectly. A large portion of data is weakly structured; however, the majority falls into the category of unstructured data. To discover previously unknown knowledge from this data, we need advanced and novel methods to deal with the data from two aspects: time (e.g. information entropy) and space (e.g. computational topology). In this paper we show some examples of biomedical data and discuss a taxonomy of data with specifics on medical data sets. [Preprocessing Data, Physics of Data, Specifics of biomedical data, fundamentals of data]},
       keywords = {Complex data, Biomedical data, Weakly-structured data},
       doi = {10.1007/978-3-662-44791-8_1},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=869869&pCurrPk=80434}
    }

  • [p13] A. Holzinger and I. Jurisica, “Knowledge Discovery and Data Mining in Biomedical Informatics: The future is in Integrative, Interactive Machine Learning Solutions “, in Interactive Knowledge Discovery and Data Mining in Biomedical Informatics: State-of-the-Art and Future Challenges. Lecture Notes in Computer Science LNCS 8401, A. Holzinger and I. Jurisica, Eds., Heidelberg, Berlin: Springer, 2014, pp. 1-18.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Biomedical research is drowning in data, yet starving for knowledge. Current challenges in biomedical research and clinical practice include information overload – the need to combine vast amounts of structured, semi-structured, weakly structured data and vast amounts of unstructured information – and the need to optimize workflows, processes and guidelines, to increase capacity while reducing costs and improving efficiencies. In this paper we provide a very short overview of interactive and integrative solutions for knowledge discovery and data mining. In particular, we emphasize the benefits of including the end user in the “interactive” knowledge discovery process. We describe some of the most important challenges, including the need to develop and apply novel methods, algorithms and tools for the integration, fusion, pre-processing, mapping, analysis and interpretation of complex biomedical data with the aim of identifying testable hypotheses and building realistic models. The HCI-KDD approach, which is a synergistic combination of methodologies and approaches of two areas, Human–Computer Interaction (HCI) and Knowledge Discovery and Data Mining (KDD), offers ideal conditions towards solving these challenges: with the goal of supporting human intelligence with machine intelligence. There is an urgent need for integrative and interactive machine learning solutions, because no medical doctor or biomedical researcher can keep pace today with the increasingly large and complex data sets – often called “Big Data".

    @incollection{p13,
       year = {2014},
       author = {Holzinger, Andreas and Jurisica, Igor},
       title = {Knowledge Discovery and Data Mining in Biomedical Informatics: The future is in Integrative, Interactive Machine Learning Solutions },
       booktitle = {Interactive Knowledge Discovery and Data Mining in Biomedical Informatics: State-of-the-Art and Future Challenges. Lecture Notes in Computer Science LNCS 8401},
       editor = {Holzinger, Andreas and Jurisica, Igor},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       pages = {1--18},
   abstract = {Biomedical research is drowning in data, yet starving for knowledge. Current challenges in biomedical research and clinical practice include information overload – the need to combine vast amounts of structured, semi-structured, weakly structured data and vast amounts of unstructured information – and the need to optimize workflows, processes and guidelines, to increase capacity while reducing costs and improving efficiencies. In this paper we provide a very short overview of interactive and integrative solutions for knowledge discovery and data mining. In particular, we emphasize the benefits of including the end user in the “interactive” knowledge discovery process. We describe some of the most important challenges, including the need to develop and apply novel methods, algorithms and tools for the integration, fusion, pre-processing, mapping, analysis and interpretation of complex biomedical data with the aim of identifying testable hypotheses and building realistic models. The HCI-KDD approach, which is a synergistic combination of methodologies and approaches of two areas, Human–Computer Interaction (HCI) and Knowledge Discovery and Data Mining (KDD), offers ideal conditions towards solving these challenges: with the goal of supporting human intelligence with machine intelligence. There is an urgent need for integrative and interactive machine learning solutions, because no medical doctor or biomedical researcher can keep pace today with the increasingly large and complex data sets – often called “Big Data".},
       doi = {10.1007/978-3-662-43968-5_1},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=974382&pCurrPk=82997}
    }

  • [p10] A. Holzinger, B. Malle, M. Bloice, M. Wiltgen, M. Ferri, I. Stanganelli, and R. Hofmann-Wellenhof, “On the Generation of Point Cloud Data Sets: Step One in the Knowledge Discovery Process“, in Interactive Knowledge Discovery and Data Mining in Biomedical Informatics, Lecture Notes in Computer Science, LNCS 8401, A. Holzinger and I. Jurisica, Eds., Berlin Heidelberg: Springer, 2014, vol. 8401, pp. 57-80.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Computational geometry and topology are areas which have much potential for the analysis of arbitrarily high-dimensional data sets. In order to apply geometric or topological methods one must first generate a representative point cloud data set from the original data source, or at least a metric or distance function, which defines a distance between the elements of a given data set. Consequently, the first question is: How to get point cloud data sets? Or, more precisely: What is the optimal way of generating such data sets? The solution to these questions is not trivial. If a natural image is taken as an example, we are concerned more with the content, with the shape of the relevant data represented by this image, than with its mere matrix of pixels. Once a point cloud has been generated from a data source, it can be used as input for the application of graph theory and computational topology. In this paper we first describe the case for natural point clouds, i.e. where the data are already represented by points; we then provide some fundamentals of medical images, particularly dermoscopy, confocal laser scanning microscopy, and total-body photography; we describe the use of graph theoretic concepts for image analysis, give some medical background on skin cancer and concentrate on the challenges when dealing with lesion images. We discuss some relevant algorithms, including the watershed algorithm, region splitting (graph cuts), region merging (minimum spanning tree) and finally describe some open problems and future challenges. [Graph-based data mining] [a minimal image-to-point-cloud sketch follows the BibTeX entry below]

    @incollection{p10,
       year = {2014},
       author = {Holzinger, Andreas and Malle, Bernd and Bloice, Marcus and Wiltgen, Marco and Ferri, Massimo and Stanganelli, Ignazio and Hofmann-Wellenhof, Rainer},
       title = {On the Generation of Point Cloud Data Sets: Step One in the Knowledge Discovery Process},
       booktitle = {Interactive Knowledge Discovery and Data Mining in Biomedical Informatics, Lecture Notes in Computer Science, LNCS 8401},
       editor = {Holzinger, Andreas and Jurisica, Igor},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       volume = {8401},
       pages = {57--80},
   abstract = {Computational geometry and topology are areas which have much potential for the analysis of arbitrarily high-dimensional data sets. In order to apply geometric or topological methods one must first generate a representative point cloud data set from the original data source, or at least a metric or distance function, which defines a distance between the elements of a given data set. Consequently, the first question is: How to get point cloud data sets? Or, more precisely: What is the optimal way of generating such data sets? The solution to these questions is not trivial. If a natural image is taken as an example, we are concerned more with the content, with the shape of the relevant data represented by this image, than with its mere matrix of pixels. Once a point cloud has been generated from a data source, it can be used as input for the application of graph theory and computational topology. In this paper we first describe the case for natural point clouds, i.e. where the data are already represented by points; we then provide some fundamentals of medical images, particularly dermoscopy, confocal laser scanning microscopy, and total-body photography; we describe the use of graph theoretic concepts for image analysis, give some medical background on skin cancer and concentrate on the challenges when dealing with lesion images. We discuss some relevant algorithms, including the watershed algorithm, region splitting (graph cuts), region merging (minimum spanning tree) and finally describe some open problems and future challenges. [Graph-based data mining]},
       doi = {10.1007/978-3-662-43968-5_4},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=974579&pCurrPk=83005}
    }
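
    To make “step one” from the abstract above concrete, here is a minimal, purely illustrative sketch (not the chapter’s method): a grayscale image becomes a point cloud by taking the coordinates of all pixels above an intensity threshold. The synthetic bright disc below is an invented stand-in for a real lesion image.

        import numpy as np

        def image_to_point_cloud(image, threshold=0.5):
            # Point cloud = (row, col) coordinates of all pixels above the threshold.
            rows, cols = np.nonzero(image > threshold)
            return np.column_stack([rows, cols]).astype(float)

        # Invented stand-in for a lesion image: a bright disc on a dark background.
        yy, xx = np.mgrid[0:64, 0:64]
        image = (((yy - 32) ** 2 + (xx - 32) ** 2) < 15 ** 2).astype(float)
        cloud = image_to_point_cloud(image)
        print(cloud.shape)  # (number of foreground pixels, 2)

    Such a cloud can then feed the graph-theoretic steps discussed in the chapter, e.g. a minimum spanning tree for region merging.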

  • [p11] K. Holzinger, V. Palade, R. Rabadan, and A. Holzinger, “Darwin or Lamarck? Future Challenges in Evolutionary Algorithms for Knowledge Discovery and Data Mining“, in Interactive Knowledge Discovery and Data Mining in Biomedical Informatics: State-of-the-Art and Future Challenges. Lecture Notes in Computer Science LNCS 8401, A. Holzinger and I. Jurisica, Eds., Heidelberg, Berlin: Springer, 2014, pp. 35-56.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Evolutionary Algorithms (EAs) are a fascinating branch of computational intelligence with much potential for use in many application areas. The fundamental principle of EAs is to use ideas inspired by the biological mechanisms observed in nature, such as selection and genetic changes, to find the best solution for a given optimization problem. Generally, EAs use iterative processes: a population of solutions is grown in a guided random search, using parallel processing, in order to achieve a desired result. Such population-based approaches, for example particle swarm and ant colony optimization (inspired by biology), are among the most popular metaheuristic methods used in machine learning, along with others such as simulated annealing (inspired by thermodynamics). In this paper, we provide a short survey on the state-of-the-art of EAs, beginning with some background on the theory of evolution and contrasting the original ideas of Darwin and Lamarck; we then continue with a discussion on the analogy between biological and computational sciences, and briefly describe some fundamentals of EAs, including Genetic Algorithms, Genetic Programming, Evolution Strategies, Swarm Intelligence Algorithms (i.e., Particle Swarm Optimization, Ant Colony Optimization, Bacteria Foraging Algorithms, Bees Algorithm, Invasive Weed Optimization), Memetic Search, Differential Evolution Search, Artificial Immune Systems, Gravitational Search Algorithm, and the Intelligent Water Drops Algorithm. We conclude with a short description of the usefulness of EAs for Knowledge Discovery and Data Mining tasks and present some open problems and challenges to further stimulate research. [a minimal genetic-algorithm sketch follows the BibTeX entry below]

    @incollection{p11,
       year = {2014},
       author = {Holzinger, Katharina and Palade, Vasile and Rabadan, Raul and Holzinger, Andreas},
       title = {Darwin or Lamarck? Future Challenges in Evolutionary Algorithms for Knowledge Discovery and Data Mining},
       booktitle = {Interactive Knowledge Discovery and Data Mining in Biomedical Informatics: State-of-the-Art and Future Challenges. Lecture Notes in Computer Science LNCS 8401},
       editor = {Holzinger, Andreas and Jurisica, Igor},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       pages = {35--56},
   abstract = {Evolutionary Algorithms (EAs) are a fascinating branch of computational intelligence with much potential for use in many application areas. The fundamental principle of EAs is to use ideas inspired by the biological mechanisms observed in nature, such as selection and genetic changes, to find the best solution for a given optimization problem. Generally, EAs use iterative processes: a population of solutions is grown in a guided random search, using parallel processing, in order to achieve a desired result. Such population-based approaches, for example particle swarm and ant colony optimization (inspired by biology), are among the most popular metaheuristic methods used in machine learning, along with others such as simulated annealing (inspired by thermodynamics). In this paper, we provide a short survey on the state-of-the-art of EAs, beginning with some background on the theory of evolution and contrasting the original ideas of Darwin and Lamarck; we then continue with a discussion on the analogy between biological and computational sciences, and briefly describe some fundamentals of EAs, including Genetic Algorithms, Genetic Programming, Evolution Strategies, Swarm Intelligence Algorithms (i.e., Particle Swarm Optimization, Ant Colony Optimization, Bacteria Foraging Algorithms, Bees Algorithm, Invasive Weed Optimization), Memetic Search, Differential Evolution Search, Artificial Immune Systems, Gravitational Search Algorithm, and the Intelligent Water Drops Algorithm. We conclude with a short description of the usefulness of EAs for Knowledge Discovery and Data Mining tasks and present some open problems and challenges to further stimulate research.},
       doi = {10.1007/978-3-662-43968-5_3},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=974555&pCurrPk=83004}
    }
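
    The fundamental EA loop sketched in the abstract above (population, selection, genetic change, iteration) fits in a few lines. The following is a generic illustration rather than code from the chapter: a minimal genetic algorithm over bit strings with tournament selection, one-point crossover and bit-flip mutation, applied to the toy OneMax problem (maximize the number of 1-bits); all parameter values are arbitrary assumptions.

        import random

        def genetic_algorithm(fitness, length=20, pop_size=50, generations=100,
                              mutation_rate=0.02):
            # Minimal GA: tournament selection, one-point crossover, bit-flip mutation.
            pop = [[random.randint(0, 1) for _ in range(length)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                def tournament():
                    a, b = random.sample(pop, 2)
                    return a if fitness(a) >= fitness(b) else b
                offspring = []
                while len(offspring) < pop_size:
                    p1, p2 = tournament(), tournament()
                    cut = random.randrange(1, length)  # one-point crossover
                    child = p1[:cut] + p2[cut:]
                    child = [bit ^ (random.random() < mutation_rate)  # bit-flip mutation
                             for bit in child]
                    offspring.append(child)
                pop = offspring
            return max(pop, key=fitness)

        # Toy OneMax problem: the fitness of a bit string is its number of 1-bits.
        best = genetic_algorithm(fitness=sum)
        print(best, "fitness:", sum(best))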

  • [p12] D. Otasek, C. Pastrello, A. Holzinger, and I. Jurisica, “Visual Data Mining: Effective Exploration of the Biological Universe“, in Interactive Knowledge Discovery and Data Mining in Biomedical Informatics: State-of-the-Art and Future Challenges. Lecture Notes in Computer Science LNCS 8401, A. Holzinger and I. Jurisica, Eds., Heidelberg, Berlin: Springer, 2014, pp. 19-34.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Visual Data Mining (VDM) is supported by interactive and scalable network visualization and analysis, which in turn enables effective exploration and communication of ideas within multiple biological and biomedical fields. Large networks, such as the protein interactome or transcriptional regulatory networks, contain hundreds of thousands of objects and millions of relationships. These networks are continuously evolving as new knowledge becomes available, and their content is richly annotated and can be presented in many different ways. Attempting to discover knowledge and new theories within these complex data sets can involve many workflows, such as accurately representing many formats of source data, merging heterogeneous and distributed data sources, complex database searching, integrating results from multiple computational and mathematical analyses, and effectively visualizing properties and results. Our experience with biology researchers has required us to address their needs and requirements in the design and development of a scalable and interactive network visualization and analysis platform, NAViGaTOR, now in its third major release.

    @incollection{p12,
       year = {2014},
       author = {Otasek, David and Pastrello, Chiara and Holzinger, Andreas and Jurisica, Igor},
       title = {Visual Data Mining: Effective Exploration of the Biological Universe},
       booktitle = {Interactive Knowledge Discovery and Data Mining in Biomedical Informatics: State-of-the-Art and Future Challenges. Lecture Notes in Computer Science LNCS 8401},
       editor = {Holzinger, Andreas and Jurisica, Igor},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
   pages = {19--34},
   abstract = {Visual Data Mining (VDM) is supported by interactive and scalable network visualization and analysis, which in turn enables effective exploration and communication of ideas within multiple biological and biomedical fields. Large networks, such as the protein interactome or transcriptional regulatory networks, contain hundreds of thousands of objects and millions of relationships. These networks are continuously evolving as new knowledge becomes available, and their content is richly annotated and can be presented in many different ways. Attempting to discover knowledge and new theories within these complex data sets can involve many workflows, such as accurately representing many formats of source data, merging heterogeneous and distributed data sources, complex database searching, integrating results from multiple computational and mathematical analyses, and effectively visualizing properties and results. Our experience with biology researchers has required us to address their needs and requirements in the design and development of a scalable and interactive network visualization and analysis platform, NAViGaTOR, now in its third major release.},
       doi = {10.1007/978-3-662-43968-5_2},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=974538&pCurrPk=83003}
    }

  • [p5] A. Holzinger, J. Schantl, M. Schroettner, C. Seifert, and K. Verspoor, “Biomedical Text Mining: State-of-the-Art, Open Problems and Future Challenges“, in Interactive Knowledge Discovery and Data Mining in Biomedical Informatics, Lecture Notes in Computer Science LNCS 8401, A. Holzinger and I. Jurisica, Eds., Berlin Heidelberg: Springer, 2014, vol. 8401, pp. 271-300.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Text is a very important type of data within the biomedical domain. For example, patient records contain large amounts of text which has been entered in a non-standardized format, consequently posing many challenges for the processing of such data. For the clinical doctor, the written text of the medical findings – not images or multimedia data – is still the basis for decision making. However, the steadily increasing volumes of unstructured information require machine learning approaches for data mining, i.e. text mining. This paper provides a short, concise overview of some selected text mining methods, focusing on statistical methods, i.e. Latent Semantic Analysis, Probabilistic Latent Semantic Analysis, Latent Dirichlet Allocation, Hierarchical Latent Dirichlet Allocation, Principal Component Analysis, and Support Vector Machines, along with some examples from the biomedical domain. Finally, we provide some open problems and future challenges, particularly from the clinical domain, that we expect to stimulate future research. [a minimal LSA sketch follows the BibTeX entry below]

    @incollection{p5,
       year = {2014},
       author = {Holzinger, Andreas and Schantl, Johannes and Schroettner, Miriam and Seifert, Christin and Verspoor, Karin},
       title = {Biomedical Text Mining: State-of-the-Art, Open Problems and Future Challenges},
       booktitle = {Interactive Knowledge Discovery and Data Mining in Biomedical Informatics, Lecture Notes in Computer Science LNCS 8401},
       editor = {Holzinger, Andreas and Jurisica, Igor},
   publisher = {Springer},
       address = {Berlin Heidelberg},
       volume = {8401},
       pages = {271--300},
   abstract = {Text is a very important type of data within the biomedical domain. For example, patient records contain large amounts of text which has been entered in a non-standardized format, consequently posing many challenges for the processing of such data. For the clinical doctor, the written text of the medical findings – not images or multimedia data – is still the basis for decision making. However, the steadily increasing volumes of unstructured information require machine learning approaches for data mining, i.e. text mining. This paper provides a short, concise overview of some selected text mining methods, focusing on statistical methods, i.e. Latent Semantic Analysis, Probabilistic Latent Semantic Analysis, Latent Dirichlet Allocation, Hierarchical Latent Dirichlet Allocation, Principal Component Analysis, and Support Vector Machines, along with some examples from the biomedical domain. Finally, we provide some open problems and future challenges, particularly from the clinical domain, that we expect to stimulate future research.},
       doi = {10.1007/978-3-662-43968-5_16},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=974640&pCurrPk=83009}
    }
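
     A minimal sketch of Latent Semantic Analysis, one of the methods surveyed in this chapter; the toy corpus, the choice of scikit-learn, and all parameters below are illustrative assumptions, not taken from the paper:

        # LSA sketch: tf-idf term-document matrix + truncated SVD
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD

        docs = [
            "patient presents with chest pain and elevated troponin",
            "ecg shows st elevation suggesting myocardial infarction",
            "mri of the knee reveals a torn meniscus",
            "arthroscopic repair of the torn meniscus was performed",
        ]

        X = TfidfVectorizer().fit_transform(docs)   # sparse tf-idf matrix
        Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
        print(Z.round(2))  # the cardiology and orthopedics texts separate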

  • [p6] A. Holzinger, B. Ofner, and M. Dehmer, “Multi-touch Graph-Based Interaction for Knowledge Discovery on Mobile Devices: State-of-the-Art and Future Challenges“, in Interactive Knowledge Discovery and Data Mining in Biomedical Informatics, Lecture Notes in Computer Science, LNCS 8401, A. Holzinger and I. Jurisica, Eds., Berlin Heidelberg: Springer, 2014, pp. 241-254.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     Graph-based knowledge representation has been a hot topic for some years and still holds considerable research potential, particularly regarding advances in the application of graph theory for the benefit of the biomedical domain. Graphs are among the most powerful tools for mapping structures within a given data set and for recognizing relationships between specific data objects. Many advantages of graph-based data structures can be found in the applicability of methods from network analysis, topology and data mining (e.g. the small-world phenomenon, cluster analysis). In this paper we present the state-of-the-art in graph-based approaches for multi-touch interaction on mobile devices and we highlight some open problems to stimulate further research and future developments. This is particularly important in the medical domain, as a conceptual graph analysis may provide novel insights on hidden patterns in data, hence supporting interactive knowledge discovery.

    @incollection{p6,
       year = {2014},
       author = {Holzinger, Andreas and Ofner, Bernhard and Dehmer, Matthias},
       title = {Multi-touch Graph-Based Interaction for Knowledge Discovery on Mobile Devices: State-of-the-Art and Future Challenges},
       booktitle = {Interactive Knowledge Discovery and Data Mining in Biomedical Informatics, Lecture Notes in Computer Science, LNCS 8401},
       editor = {Holzinger, Andreas and Jurisica, Igor},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       pages = {241--254},
       abstract = {Graph-based knowledge representation has been a hot topic for some years and still holds considerable research potential, particularly regarding advances in the application of graph theory for the benefit of the biomedical domain. Graphs are among the most powerful tools for mapping structures within a given data set and for recognizing relationships between specific data objects. Many advantages of graph-based data structures can be found in the applicability of methods from network analysis, topology and data mining (e.g. the small-world phenomenon, cluster analysis). In this paper we present the state-of-the-art in graph-based approaches for multi-touch interaction on mobile devices and we highlight some open problems to stimulate further research and future developments. This is particularly important in the medical domain, as a conceptual graph analysis may provide novel insights on hidden patterns in data, hence supporting interactive knowledge discovery.},
       doi = {10.1007/978-3-662-43968-5_14},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=974629&pCurrPk=83008}
    }
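
     The small-world and clustering measures named in the abstract can be computed directly; the Watts-Strogatz graph below is a synthetic stand-in (networkx assumed), not data from the paper:

        import networkx as nx

        # synthetic small-world network, guaranteed connected
        G = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=42)

        print("average clustering:", round(nx.average_clustering(G), 3))
        print("average shortest path:",
              round(nx.average_shortest_path_length(G), 3))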

  • [b11] A. Holzinger, Biomedical Informatics: Discovering Knowledge in Big Data, New York: Springer, 2014.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     This book provides a broad overview of the topic of biomedical informatics (medical informatics + biological information) with a focus on data, information and knowledge. From data acquisition and storage to visualization, privacy, regulatory, and other practical and theoretical topics, the author touches on several fundamental aspects of the innovative interface between the medical and computational domains that form biomedical informatics. Each chapter starts by providing a useful inventory of definitions and commonly used acronyms for each topic, and throughout the text, the reader finds several real-world examples, methodologies, and ideas that complement the technical and theoretical background. Also, at the beginning of each chapter, a new section called “key problems” has been added, where the author discusses possible traps and unsolvable or major problems. This new edition includes new sections at the end of each chapter, called future outlook and research avenues, providing pointers to future challenges.

    @book{b11,
       year = {2014},
       author = {Holzinger, Andreas},
       title = {Biomedical Informatics: Discovering Knowledge in Big Data},
       publisher = {Springer},
       address = {New York},
       abstract = {This book provides a broad overview of the topic of biomedical informatics (medical informatics + biological information) with a focus on data, information and knowledge. From data acquisition and storage to visualization, privacy, regulatory, and other practical and theoretical topics, the author touches on several fundamental aspects of the innovative interface between the medical and computational domains that form biomedical informatics. Each chapter starts by providing a useful inventory of definitions and commonly used acronyms for each topic, and throughout the text, the reader finds several real-world examples, methodologies, and ideas that complement the technical and theoretical background. Also, at the beginning of each chapter, a new section called “key problems” has been added, where the author discusses possible traps and unsolvable or major problems. This new edition includes new sections at the end of each chapter, called future outlook and research avenues, providing pointers to future challenges.},
       doi = {10.1007/978-3-319-04528-3},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=974906&pCurrPk=78579}
    }

  • [bookchapter4] C. Röcker, M. Ziefle, and A. Holzinger, “From Computer Innovation to Human Integration: Current Trends and Challenges for Pervasive Health Technologies“, in Pervasive Health, A. Holzinger, M. Ziefle, and C. Röcker, Eds., Springer London, 2014, pp. 1-17.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     This chapter starts with an overview of the technical innovations and societal transformation processes of the last decades, as well as the consequences those changes have for the design of pervasive healthcare systems. Based on this theoretical foundation, emerging design requirements and research challenges are outlined, which must be addressed when developing future health technologies.

    @incollection{bookchapter4,
       author = {Röcker, Carsten and Ziefle, Martina and Holzinger, Andreas},
       title = {From Computer Innovation to Human Integration: Current Trends and Challenges for Pervasive Health Technologies},
       booktitle = {Pervasive Health},
       editor = {Holzinger, Andreas and Ziefle, Martina and Röcker, Carsten},
       publisher = {Springer London},
       pages = {1--17},
       year = {2014},
       abstract = {This chapter starts with an overview of the technical innovations and societal transformation processes of the last decades, as well as the consequences those changes have for the design of pervasive healthcare systems. Based on this theoretical foundation, emerging design requirements and research challenges are outlined, which must be addressed when developing future health technologies.},
       doi = {10.1007/978-1-4471-6413-5_1},
       url = {http://dx.doi.org/10.1007/978-1-4471-6413-5_1}
    }

  • [23] A. Holzinger, M. Ziefle, and C. Röcker, Pervasive Health, London: Springer, 2014.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     Providing a comprehensive introduction to and overview of the field of pervasive healthcare applications, this volume incorporates a variety of timely topics ranging from medical sensors and hardware infrastructures, to software platforms and applications, and addresses issues of user experience and technology acceptance. The recent developments in the area of information and communication technologies have laid the groundwork for new patient-centred healthcare solutions. While the majority of computer-supported healthcare tools designed in the last decades focused mainly on supporting care-givers and medical personnel, this trend changed with the introduction of pervasive healthcare technologies, which provide supportive and adaptive services for a broad and diverse set of end users. With contributions from key researchers, the book integrates the various aspects of pervasive healthcare systems, including application design, hardware development, system implementation, and hardware and software infrastructures, as well as end-user aspects, providing an excellent overview of this important and evolving field.

    @book{23,
       author = {Holzinger, Andreas and Ziefle, Martina and Röcker, Carsten},
       title = {Pervasive Health},
       publisher = {Springer},
       address = {London},
       year = {2014},
       abstract = {Providing a comprehensive introduction to and overview of the field of pervasive healthcare applications, this volume incorporates a variety of timely topics ranging from medical sensors and hardware infrastructures, to software platforms and applications, and addresses issues of user experience and technology acceptance. The recent developments in the area of information and communication technologies have laid the groundwork for new patient-centred healthcare solutions. While the majority of computer-supported healthcare tools designed in the last decades focused mainly on supporting care-givers and medical personnel, this trend changed with the introduction of pervasive healthcare technologies, which provide supportive and adaptive services for a broad and diverse set of end users. With contributions from key researchers, the book integrates the various aspects of pervasive healthcare systems, including application design, hardware development, system implementation, and hardware and software infrastructures, as well as end-user aspects, providing an excellent overview of this important and evolving field.},
       doi = {10.1007/978-1-4471-6413-5},
       url = {http://dx.doi.org/10.1007/978-1-4471-6413-5}
    }

  • [2] S. Juric, V. Flis, M. Debevc, A. Holzinger, and B. Zalik, “Towards a Low-Cost Mobile Subcutaneous Vein Detection Solution Using Near-Infrared Spectroscopy“, The Scientific World Journal, vol. 2014, p. 15, 2014.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Excessive venipunctures are both time- and resource-consuming events, which cause anxiety, pain, and distress in patients, or can lead to severe harmful injuries. We propose a low-cost mobile health solution for subcutaneous vein detection using near-infrared spectroscopy, along with an assessment of the current state of the art in this field. The first objective of this study was to get a deeper overview of the research topic, through the initial team discussions and a detailed literature review (using both academic and grey literature). The second objective, that is, identifying the commercial systems employing near-infrared spectroscopy, was conducted using the PubMed database. The goal of the third objective was to identify and evaluate (using the IEEE Xplore database) the research efforts in the field of low-cost near-infrared imaging in general, as a basis for the conceptual model of the upcoming prototype. Although the reviewed commercial devices have demonstrated usefulness and value for peripheral veins visualization, other evaluated clinical outcomes are less conclusive. Previous studies regarding low-cost near-infrared systems demonstrated the general feasibility of developing cost-effective vein detection systems; however, their limitations are restricting their applicability to clinical practice. Finally, based on the current findings, we outline the future research direction.

    @article{2,
       author = {Juric, Simon and Flis, Vojko and Debevc, Matjaz and Holzinger, Andreas and Zalik, Borut},
       title = {Towards a Low-Cost Mobile Subcutaneous Vein Detection Solution Using Near-Infrared Spectroscopy},
       journal = {The Scientific World Journal},
       volume = {2014},
       pages = {15},
       year = {2014},
       abstract = {Excessive venipunctures are both time- and resource-consuming events, which cause anxiety, pain, and distress in patients, or can lead to severe harmful injuries. We propose a low-cost mobile health solution for subcutaneous vein detection using near-infrared spectroscopy, along with an assessment of the current state of the art in this field. The first objective of this study was to get a deeper overview of the research topic, through the initial team discussions and a detailed literature review (using both academic and grey literature). The second objective, that is, identifying the commercial systems employing near-infrared spectroscopy, was conducted using the PubMed database. The goal of the third objective was to identify and evaluate (using the IEEE Xplore database) the research efforts in the field of low-cost near-infrared imaging in general, as a basis for the conceptual model of the upcoming prototype. Although the reviewed commercial devices have demonstrated usefulness and value for peripheral veins visualization, other evaluated clinical outcomes are less conclusive. Previous studies regarding low-cost near-infrared systems demonstrated the general feasibility of developing cost-effective vein detection systems; however, their limitations are restricting their applicability to clinical practice. Finally, based on the current findings, we outline the future research direction.},
       doi = {10.1155/2014/365902},
       url = {http://dx.doi.org/10.1155/2014/365902}
    }
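
     As a hedged illustration of low-cost NIR vein post-processing (not the authors' pipeline), the following applies CLAHE and adaptive thresholding to a synthetic frame; OpenCV and all chosen parameters are assumptions:

        import numpy as np
        import cv2

        # synthetic stand-in for an NIR frame: bright tissue, darker vein
        img = np.full((200, 200), 180, dtype=np.uint8)
        ys = np.arange(200)
        xs = (100 + 30 * np.sin(ys / 25.0)).astype(int)
        for y, x in zip(ys, xs):
            cv2.line(img, (x - 3, y), (x + 3, y), color=120, thickness=1)
        img = cv2.GaussianBlur(img, (7, 7), 0)

        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        enhanced = clahe.apply(img)              # boost local vein contrast
        veins = cv2.adaptiveThreshold(enhanced, 255,
                                      cv2.ADAPTIVE_THRESH_MEAN_C,
                                      cv2.THRESH_BINARY_INV, 15, 5)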

  • [3] B. Peischl, M. Ferk, and A. Holzinger, “The fine art of user-centered software development“, Software Quality Journal, pp. 1-28, 2014.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    In this article, we report on the user-centered development of a mobile medical app under limited resources. We discuss (non-functional) quality attributes that we used to choose the platform for development of the medical app. As the major contribution, we show how to integrate user-centered design in an early stage of mobile app development under the presence of limited resources. Moreover, we present empirical results gained from our two-stage testing procedure including recommendations to provide both a useful and useable business app.

    @article{3,
       author = {Peischl, Bernhard and Ferk, Michaela and Holzinger, Andreas},
       title = {The fine art of user-centered software development},
       journal = {Software Quality Journal},
       pages = {1-28},
       year = {2014},
       abstract = {In this article, we report on the user-centered development of a mobile medical app under limited resources. We discuss (non-functional) quality attributes that we used to choose the platform for development of the medical app. As the major contribution, we show how to integrate user-centered design in an early stage of mobile app development under the presence of limited resources. Moreover, we present empirical results gained from our two-stage testing procedure including recommendations to provide both a useful and useable business app.},
       doi = {10.1007/s11219-014-9239-1},
       url = {http://dx.doi.org/10.1007/s11219-014-9239-1}
    }

  • [c116] A. Holzinger, D. Blanchard, M. Bloice, K. Holzinger, V. Palade, and R. Rabadan, “Darwin, Lamarck, or Baldwin: Applying Evolutionary Algorithms to Machine Learning Techniques“, in IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2014, pp. 449-453.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     Evolutionary Algorithms (EAs), inspired by biological mechanisms observed in nature, such as selection and genetic changes, have much potential to find the best solution for a given optimisation problem. Contrary to Darwin, and according to Lamarck and Baldwin, organisms in natural systems learn to adapt over their lifetime and pass these adaptations on over generations. Whereas earlier research was rather reserved, more recent research, underpinned by the work of Lamarck and Baldwin, finds that these theories have much potential, particularly in upcoming fields such as epigenetics. In this paper, we report on some experiments with different evolutionary algorithms with the purpose of improving the accuracy of data mining methods. We explore whether and to what extent an optimisation goal can be reached through a calculation of certain parameters or attribute weightings by use of such evolutionary strategies. We provide a look at different EAs inspired by the theories of Darwin, Lamarck, and Baldwin, as well as the problem-solving methods of certain species. In this paper we demonstrate that well-established machine learning techniques can be modified to include methods from genetic algorithm theory without extensive programming effort. Our results pave the way for much further research at the intersection of machine learning optimisation techniques and evolutionary algorithm research.

    @inproceedings{c116,
       author = {Holzinger, A. and Blanchard, D. and Bloice, M. and Holzinger, K. and Palade, V. and Rabadan, R.},
       title = {Darwin, Lamarck, or Baldwin: Applying Evolutionary Algorithms to Machine Learning Techniques},
       booktitle = {IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT)},
       editor = {Ślęzak, Dominik and Dunin-Kęplicz, Barbara and Lewis, Mike and Terano, Takao},
       publisher = {IEEE},
       pages = {449-453},
       year = {2014},
       abstract = {Evolutionary Algorithms (EAs), inspired by biological mechanisms observed in nature, such as selection and genetic changes, have much potential to find the best solution for a given optimisation problem. Contrary to Darwin, and according to Lamarck and Baldwin, organisms in natural systems learn to adapt over their lifetime and pass these adaptations on over generations. Whereas earlier research was rather reserved, more recent research, underpinned by the work of Lamarck and Baldwin, finds that these theories have much potential, particularly in upcoming fields such as epigenetics. In this paper, we report on some experiments with different evolutionary algorithms with the purpose of improving the accuracy of data mining methods. We explore whether and to what extent an optimisation goal can be reached through a calculation of certain parameters or attribute weightings by use of such evolutionary strategies. We provide a look at different EAs inspired by the theories of Darwin, Lamarck, and Baldwin, as well as the problem-solving methods of certain species. In this paper we demonstrate that well-established machine learning techniques can be modified to include methods from genetic algorithm theory without extensive programming effort. Our results pave the way for much further research at the intersection of machine learning optimisation techniques and evolutionary algorithm research.},
       doi = {10.1109/WI-IAT.2014.132},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=782096&pCurrPk=79357}
    }
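
     The paper's central idea, evolving attribute weightings to improve a learner, can be sketched with a small (mu + lambda) evolutionary strategy; the data, the nearest-centroid fitness function and all parameters are toy assumptions, not the authors' experimental setup:

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 4))          # 2 informative, 2 noise features
        y = (X[:, 0] + X[:, 1] > 0).astype(int)

        def accuracy(w):
            """Nearest-centroid accuracy under feature weights w."""
            c0, c1 = X[y == 0].mean(0), X[y == 1].mean(0)
            d0 = ((w * (X - c0)) ** 2).sum(1)
            d1 = ((w * (X - c1)) ** 2).sum(1)
            return ((d1 < d0) == y).mean()

        pop = rng.random((20, 4))              # initial weight vectors
        for _ in range(50):
            children = np.clip(pop + rng.normal(scale=0.1, size=pop.shape), 0, 1)
            both = np.vstack([pop, children])
            fitness = np.array([accuracy(w) for w in both])
            pop = both[np.argsort(fitness)[-20:]]   # keep the fittest 20

        print("best weights:", pop[-1].round(2), "accuracy:", accuracy(pop[-1]))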

  • [c119] F. Babic, L. Majnaric, A. Lukacova, J. Paralic, and A. Holzinger, “On Patient’s Characteristics Extraction for Metabolic Syndrome Diagnosis: Predictive Modelling Based on Machine Learning“, in Information Technology in Bio- and Medical Informatics, M. Bursa, S. Khuri, and M. E. Renda, Eds., Springer International Publishing, 2014, vol. 8649, pp. 118-132.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    @incollection{c119,
       author = {Babic, Frantisek and Majnaric, Ljiljana and Lukacova, Alexandra and Paralic, Jan and Holzinger, Andreas},
       title = {On Patient’s Characteristics Extraction for Metabolic Syndrome Diagnosis: Predictive Modelling Based on Machine Learning},
       booktitle = {Information Technology in Bio- and Medical Informatics},
       editor = {Bursa, Miroslav and Khuri, Sami and Renda, M. Elena},
       publisher = {Springer International Publishing},
       volume = {8649},
       pages = {118-132},
       year = {2014},
       doi = {10.1007/978-3-319-10265-8_11},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=974325&pCurrPk=82995}
    }

  • [c121] A. Calero Valdez, A. Schaar, M. Ziefle, and A. Holzinger, “Enhancing Interdisciplinary Cooperation by Social Platforms“, in Human Interface and the Management of Information. Information and Knowledge Design and Evaluation, S. Yamamoto, Ed., Springer International Publishing, 2014, vol. 8521, pp. 298-309.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     In large-scale research projects, active management of the cooperation process is necessary, e.g. to ensure systematic transfer of knowledge, alignment of research goals, or appropriate dissemination of research efforts. In a large-scale research cluster at RWTH Aachen University, a cybernetic management approach is applied. As a planned measure, publishing efforts (i.e. bibliometric data) will be visualized on a social software platform accessible by researchers and the steering committee. But do researchers agree with the chosen style of visualization of their publications? As part of a user-centered design, this paper presents the results of an interview study with researchers (n=22) addressing the usefulness and applicability of this approach. As central findings, arguments for using the publication visualization are identified, such as enabling retrospective analysis, acquiring new information about the team, and improved dissemination planning; these are contrasted by arguments against the approach, such as missing information, a possibly negative influence on the workflow of researchers, and the poor legibility of the visualization. Additionally, requirements and suggested improvements are presented.

    @incollection{c121,
       author = {Calero Valdez, André and Schaar, Anne Kathrin and Ziefle, Martina and Holzinger, Andreas},
       title = {Enhancing Interdisciplinary Cooperation by Social Platforms},
       booktitle = {Human Interface and the Management of Information. Information and Knowledge Design and Evaluation},
       editor = {Yamamoto, Sakae},
       publisher = {Springer International Publishing},
       volume = {8521},
       pages = {298-309},
       year = {2014},
       abstract = {In large-scale research projects, active management of the cooperation process is necessary, e.g. to ensure systematic transfer of knowledge, alignment of research goals, or appropriate dissemination of research efforts. In a large-scale research cluster at RWTH Aachen University, a cybernetic management approach is applied. As a planned measure, publishing efforts (i.e. bibliometric data) will be visualized on a social software platform accessible by researchers and the steering committee. But do researchers agree with the chosen style of visualization of their publications? As part of a user-centered design, this paper presents the results of an interview study with researchers (n=22) addressing the usefulness and applicability of this approach. As central findings, arguments for using the publication visualization are identified, such as enabling retrospective analysis, acquiring new information about the team, and improved dissemination planning; these are contrasted by arguments against the approach, such as missing information, a possibly negative influence on the workflow of researchers, and the poor legibility of the visualization. Additionally, requirements and suggested improvements are presented.},
       doi = {10.1007/978-3-319-07731-4_31},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=943835&pCurrPk=82138}
    }

  • [c122] C. Stocker, L. Marzi, C. Matula, J. Schantl, G. Prohaska, A. Brabenetz, and A. Holzinger, “Enhancing Patient Safety through Human-Computer Information Retrieval on the Example of German-speaking Surgical Reports“, in TIR 2014 – 11th International Workshop on Text-based Information Retrieval, 2014, pp. 1-5.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     In view of the high number of deaths and complication rates of major surgical procedures worldwide, surgical safety is described as a substantial global public-health concern. Naturally, patient safety has become an international priority. The increasing amount of electronically available clinical documents holds great potential for the computational analysis of large repositories. However, most of this data is in textual form, and the clinical domain is a challenging field for the application of natural language processing. This is particularly the case when dealing with a language other than English, which receives little attention from the international research community. In this project, we are concerned with the utilization of a German-speaking operative report repository for the purpose of risk management and patient safety research. In this particular paper we focus on the description of our information retrieval approach. We investigated the thought process of a domain expert in order to derive his information of interest, and we describe a facet-based way to navigate this kind of information in the form of extracted phrases. Initial results and feedback have been very promising, but a formal evaluation is still missing.

    @inproceedings{c122,
       author = {Stocker, Christof and Marzi, Leopold-Michael and Matula, Christian and Schantl, Johannes and Prohaska, Gottfried and Brabenetz, Alberto and Holzinger, Andreas},
       title = {Enhancing Patient Safety through Human-Computer Information Retrieval on the Example of German-speaking Surgical Reports},
       booktitle = {TIR 2014 - 11th International Workshop on Text-based Information Retrieval},
       publisher = {IEEE},
       pages = {1-5},
       year = {2014},
       abstract = {In view of the high number of deaths and complication rates of major surgical procedures worldwide, surgical safety is described as a substantial global public-health concern. Naturally, patient safety has become an international priority. The increasing amount of electronically available clinical documents holds great potential for the computational analysis of large repositories. However, most of this data is in textual form, and the clinical domain is a challenging field for the application of natural language processing. This is particularly the case when dealing with a language other than English, which receives little attention from the international research community. In this project, we are concerned with the utilization of a German-speaking operative report repository for the purpose of risk management and patient safety research. In this particular paper we focus on the description of our information retrieval approach. We investigated the thought process of a domain expert in order to derive his information of interest, and we describe a facet-based way to navigate this kind of information in the form of extracted phrases. Initial results and feedback have been very promising, but a formal evaluation is still missing.},
       doi = {10.1109/DEXA.2014.53},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=974272&pCurrPk=82991}
    }
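
     A minimal sketch of facet-based navigation over extracted phrases, in the spirit of the approach described above; the facets, keyword patterns and English example text are invented (the original reports are German):

        import re
        from collections import defaultdict

        FACETS = {                      # hypothetical facets and patterns
            "complication": r"\b(bleeding|infection|lesion)\b",
            "anatomy":      r"\b(dura|cortex|vertebra)\b",
            "procedure":    r"\b(craniotomy|laminectomy)\b",
        }

        report = ("Craniotomy was performed; minor bleeding near the dura "
                  "was controlled. No infection observed.")

        index = defaultdict(list)
        for facet, pattern in FACETS.items():
            for m in re.finditer(pattern, report, flags=re.IGNORECASE):
                index[facet].append(m.group(0))

        for facet, phrases in index.items():
            print(facet, "->", phrases)   # navigate reports facet by facet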

  • [c123] M. Bachler, M. Hörtenhuber, C. Mayer, A. Holzinger, and S. Wassertheurer, “Entropy-Based Data Mining on the Example of Cardiac Arrhythmia Suppression“, in Brain Informatics and Health, Lecture Notes in Artificial Intelligence LNAI 8609, D. Ślȩzak, A. Tan, J. Peters, and L. Schwabe, Eds., Heidelberg, Berlin: Springer, 2014, pp. 574-585.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Heart rate variability (HRV) is the variation of the time interval between consecutive heartbeats and depends on the extrinsic regulation of the heart rate. It can be quantified using nonlinear methods such as entropy measures, which determine the irregularity of the time intervals. In this work, approximate entropy (ApEn), sample entropy (SampEn), fuzzy entropy (FuzzyEn) and fuzzy measure entropy (FuzzyMEn) were used to assess the effects of three different cardiac arrhythmia suppressing drugs on the HRV after a myocardial infarction. The results show that the ability of all four entropy measures to distinguish between pre- and post-treatment HRV data is highly significant (p < 0.01). Furthermore, approximate entropy and sample entropy are able to differentiate significantly (p < 0.05) between the tested arrhythmia suppressing agents.

    @incollection{c123,
       author = {Bachler, Martin and Hörtenhuber, Matthias and Mayer, Christopher and Holzinger, Andreas and Wassertheurer, Siegfried},
       title = {Entropy-Based Data Mining on the Example of Cardiac Arrhythmia Suppression},
       booktitle = {Brain Informatics and Health, Lecture Notes in Artificial Intelligence LNAI 8609},
       editor = {Ślȩzak, Dominik and Tan, Ah-Hwee and Peters, James F. and Schwabe, Lars},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       pages = {574-585},
       year = {2014},
       abstract = {Heart rate variability (HRV) is the variation of the time interval between consecutive heartbeats and depends on the extrinsic regulation of the heart rate. It can be quantified using nonlinear methods such as entropy measures, which determine the irregularity of the time intervals. In this work, approximate entropy (ApEn), sample entropy (SampEn), fuzzy entropy (FuzzyEn) and fuzzy measure entropy (FuzzyMEn) were used to assess the effects of three different cardiac arrhythmia suppressing drugs on the HRV after a myocardial infarction.
    The results show that the ability of all four entropy measures to distinguish between pre- and post-treatment HRV data is highly significant (p < 0.01). Furthermore, approximate entropy and sample entropy are able to differentiate significantly (p < 0.05) between the tested arrhythmia suppressing agents.},
       doi = {10.1007/978-3-319-09891-3_52},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=974216&pCurrPk=82990}
    }
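
     Sample entropy (SampEn), one of the four measures applied in the paper, can be computed compactly as follows; m = 2 and r = 0.2 * std follow common convention, and the RR-interval series is synthetic:

        import numpy as np

        def sample_entropy(x, m=2, r=None):
            """SampEn = -ln(A/B), with A and B counting template matches of
            length m+1 and m under the Chebyshev distance tolerance r."""
            x = np.asarray(x, dtype=float)
            if r is None:
                r = 0.2 * x.std()
            def count(mlen):
                templ = np.array([x[i:i + mlen] for i in range(len(x) - mlen)])
                total = 0
                for i in range(len(templ) - 1):
                    d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
                    total += int(np.sum(d <= r))
                return total
            return -np.log(count(m + 1) / count(m))

        rng = np.random.default_rng(1)
        rr = 0.8 + 0.05 * rng.standard_normal(300)   # synthetic RR intervals
        print("SampEn:", round(sample_entropy(rr), 3))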

  • [c124] M. Preuss, M. Dehmer, S. Pickl, and A. Holzinger, “On Terrain Coverage Optimization by Using a Network Approach for Universal Graph-based Data Mining and Knowledge Discovery“, in Brain Informatics and Health, BIH 2014, Lecture Notes in Artificial Intelligence LNAI 8609, D. Slezak, A. Tan, J. F. Peters, and L. Schwabe, Eds., Heidelberg, Berlin: Springer, 2014, pp. 564-573.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     This conceptual paper discusses a graph-based approach for on-line terrain coverage, which has many important research aspects and a wide range of application possibilities, e.g. in multi-agent systems. Such approaches can be used in different application domains, e.g. in medical image analysis. In this paper we discuss how the graphs are generated and analyzed. In particular, the analysis is important for improving the estimation of the parameter set for the heuristic used in route planning. Moreover, we describe some methods from quantitative graph theory and outline a few potential research routes.

    @incollection{c124,
       author = {Preuss, Michael and Dehmer, Matthias and Pickl, Stefan and Holzinger, Andreas},
       title = {On Terrain Coverage Optimization by Using a Network Approach for Universal Graph-based Data Mining and Knowledge Discovery},
       booktitle = {Brain Informatics and Health, BIH 2014, Lecture Notes in Artificial Intelligence LNAI 8609},
       editor = {Slezak, Dominik and Tan, Ah-Hwee and Peters, James F. and Schwabe, Lars},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       pages = {564-573},
       year = {2014},
       abstract = {This conceptual paper discusses a graph-based approach for on-line terrain coverage, which has many important research aspects and a wide range of application possibilities, e.g. in multi-agent systems. Such approaches can be used in different application domains, e.g. in medical image analysis. In this paper we discuss how the graphs are generated and analyzed. In particular, the analysis is important for improving the estimation of the parameter set for the heuristic used in route planning. Moreover, we describe some methods from quantitative graph theory and outline a few potential research routes.},
       doi = {10.1007/978-3-319-09891-3_51},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=974170&pCurrPk=82989}
    }
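
     One concrete quantitative graph measure of the kind mentioned above is the Shannon entropy of a graph's degree distribution; the grid graph below is an invented stand-in for a discretized terrain (networkx assumed):

        import math
        from collections import Counter
        import networkx as nx

        G = nx.grid_2d_graph(10, 10)          # 10x10 "terrain" grid

        degrees = [d for _, d in G.degree()]
        freq = Counter(degrees)
        n = G.number_of_nodes()
        H = -sum((c / n) * math.log2(c / n) for c in freq.values())
        print("degree-distribution entropy (bits):", round(H, 3))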

  • [c125] A. Holzinger, B. Malle, and N. Giuliani, “On Graph Extraction from Image Data“, in Brain Informatics and Health, BIH 2014, Lecture Notes in Artificial Intelligence, LNAI 8609, D. Slezak, J. F. Peters, A. Tan, and L. Schwabe, Eds., Heidelberg, Berlin: Springer, 2014, pp. 552-563.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     Hot topics in knowledge discovery and interactive data mining from natural images include the application of topological methods and machine learning algorithms. For any such approach one first needs a relevant and robust digital content representation of the image data. However, traditional pixel-based image analysis techniques do not effectively extract, and hence represent, the content. A very promising approach is to extract graphs from images, which is not an easy task. In this paper we present a novel approach for knowledge discovery by extracting graph structures from natural image data. For this purpose, we created a framework built upon modern Web technologies, utilizing the HTML canvas and pure JavaScript inside a Web browser, which is a very promising engineering approach. Following a short description of some popular image classification and segmentation methodologies, we outline a specific data processing pipeline suitable for carrying out future scientific research. A demonstration of our implementation, compared to the results of a traditional watershed transformation performed in Matlab, showed very promising results in both quality and runtime, despite some remaining challenges. Finally, we provide a short discussion of a few open problems and outline some of our future research routes.

    @incollection{c125,
       author = {Holzinger, Andreas and Malle, Bernd and Giuliani, Nicola},
       title = {On Graph Extraction from Image Data},
       booktitle = {Brain Informatics and Health, BIH 2014, Lecture Notes in Artificial Intelligence, LNAI 8609},
       editor = {Slezak, Dominik and Peters, James F. and Tan, Ah-Hwee and Schwabe, Lars},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       pages = {552-563},
       year = {2014},
       abstract = {Hot topics in knowledge discovery and interactive data mining from natural images include the application of topological methods and machine learning algorithms. For any such approach one first needs a relevant and robust digital content representation of the image data. However, traditional pixel-based image analysis techniques do not effectively extract, and hence represent, the content. A very promising approach is to extract graphs from images, which is not an easy task. In this paper we present a novel approach for knowledge discovery by extracting graph structures from natural image data. For this purpose, we created a framework built upon modern Web technologies, utilizing the HTML canvas and pure JavaScript inside a Web browser, which is a very promising engineering approach. Following a short description of some popular image classification and segmentation methodologies, we outline a specific data processing pipeline suitable for carrying out future scientific research. A demonstration of our implementation, compared to the results of a traditional watershed transformation performed in Matlab, showed very promising results in both quality and runtime, despite some remaining challenges. Finally, we provide a short discussion of a few open problems and outline some of our future research routes.},
       doi = {10.1007/978-3-319-09891-3_50},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=868952&pCurrPk=80830}
    }
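
     The core step, turning segmented regions into a graph, can be sketched as follows; the paper's implementation uses the HTML canvas and JavaScript, so this numpy/networkx version on a synthetic image only illustrates the idea:

        import numpy as np
        from scipy import ndimage
        import networkx as nx

        # synthetic "image": two bright blobs on a dark background
        img = np.zeros((60, 60))
        img[10:25, 10:25] = 1.0
        img[30:50, 28:50] = 1.0

        labels, nregions = ndimage.label(img > 0.5)   # connected regions

        # region-adjacency graph: link labels of 4-connected neighbor pixels
        G = nx.Graph()
        G.add_nodes_from(range(nregions + 1))         # label 0 = background
        for a, b in [(labels[:, :-1], labels[:, 1:]),
                     (labels[:-1, :], labels[1:, :])]:
            pairs = np.stack([a.ravel(), b.ravel()], axis=1)
            for u, v in pairs[pairs[:, 0] != pairs[:, 1]]:
                G.add_edge(int(u), int(v))

        print("regions:", nregions, "adjacencies:", sorted(G.edges()))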

  • [c126] A. Holzinger, “Extravaganza Tutorial on Hot Ideas for Interactive Knowledge Discovery and Data Mining in Biomedical Informatics“, in Brain Informatics and Health. Lecture Notes in Artificial Intelligence LNAI 8609, D. Ślȩzak, A. Tan, J. Peters, and L. Schwabe, Eds., Heidelberg, Berlin, New York: Springer, 2014, pp. 502-515.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     Biomedical experts are confronted with “Big data”, driven by the trend towards precision medicine. Despite the fact that humans are excellent at pattern recognition in dimensions of ≤ 3, most biomedical data is in dimensions much higher than 3, making manual analysis often impossible. Experts in daily routine are decreasingly capable of dealing with such data. Efficient, useable and useful computational methods, algorithms and tools to interactively gain insight into such data are a commandment of the time. A synergistic combination of methodologies of two areas may be of great help here: Human–Computer Interaction (HCI) and Knowledge Discovery/Data Mining (KDD), with the goal of supporting human intelligence with machine learning. Mapping higher-dimensional data into lower dimensions is a major task in HCI, and a concerted effort including recent advances from graph theory and algebraic topology may contribute to finding solutions. Moreover, much biomedical data is sparse, noisy and time-dependent, hence entropy is also amongst the promising topics. This tutorial gives an overview of the HCI-KDD approach and focuses on 3 topics: graphs, topology and entropy. The goal of this introductory tutorial is to motivate and stimulate further research.

    @incollection{c126,
       author = {Holzinger, Andreas},
       title = {Extravaganza Tutorial on Hot Ideas for Interactive Knowledge Discovery and Data Mining in Biomedical Informatics},
       booktitle = {Brain Informatics and Health. Lecture Notes in Artificial Intelligence LNAI 8609},
       editor = {Ślȩzak, Dominik and Tan, Ah-Hwee and Peters, James F. and Schwabe, Lars},
       publisher = {Springer},
       address = {Heidelberg, Berlin, New York},
       pages = {502-515},
       year = {2014},
       abstract = {Biomedical experts are confronted with “Big data”, driven by the trend towards precision medicine. Despite the fact that humans are excellent at pattern recognition in dimensions of ≤ 3, most biomedical data is in dimensions much higher than 3, making manual analysis often impossible. Experts in daily routine are decreasingly capable of dealing with such data. Efficient, useable and useful computational methods, algorithms and tools to interactively gain insight into such data are a commandment of the time. A synergistic combination of methodologies of two areas may be of great help here: Human–Computer Interaction (HCI) and Knowledge Discovery/Data Mining (KDD), with the goal of supporting human intelligence with machine learning. Mapping higher-dimensional data into lower dimensions is a major task in HCI, and a concerted effort including recent advances from graph theory and algebraic topology may contribute to finding solutions. Moreover, much biomedical data is sparse, noisy and time-dependent, hence entropy is also amongst the promising topics. This tutorial gives an overview of the HCI-KDD approach and focuses on 3 topics: graphs, topology and entropy. The goal of this introductory tutorial is to motivate and stimulate further research.},
       doi = {10.1007/978-3-319-09891-3_46},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=764238&pCurrPk=79139}
    }

  • [c127] A. Holzinger, M. Schwarz, B. Ofner, F. Jeanquartier, A. Calero-Valdez, C. Roecker, and M. Ziefle, “Towards Interactive Visualization of Longitudinal Data to Support Knowledge Discovery on Multi-touch Tablet Computers“, in Availability, Reliability, and Security in Information Systems, LNCS 8708, S. Teufel, M. A. Tjoa, I. You, and E. Weippl, Eds., Heidelberg, Berlin: Springer, 2014, pp. 124-137.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     A major challenge in modern data-centric medicine is the increasing amount of time-dependent data, which requires efficient user-friendly solutions for dealing with such data. To create an effective and efficient knowledge discovery process, it is important to support common data manipulation tasks by creating quick, responsive and intuitive interaction methods. In this paper we describe some methods for interactive longitudinal data visualization, with a focus on the use of mobile multi-touch devices as the interaction medium, based on our design and development experiences. We argue that when it comes to longitudinal data, this device category offers remarkable additional interaction benefits compared to standard point-and-click desktop computer devices. An important advantage of multi-touch devices arises when interacting with particularly large longitudinal data sets: complex, coupled interactions, such as zooming into a region while scrolling around, are more easily achieved on a multi-touch device than with a regular mouse-based interaction device.

    @incollection{c127,
       author = {Holzinger, Andreas and Schwarz, Michael and Ofner, Bernhard and Jeanquartier, Fleur and Calero-Valdez, Andre and Roecker, Carsten and Ziefle, Martina},
       title = {Towards Interactive Visualization of Longitudinal Data to Support Knowledge Discovery on Multi-touch Tablet Computers},
       booktitle = {Availability, Reliability, and Security in Information Systems, LNCS 8708},
       editor = {Teufel, Stephanie and Tjoa, A Min and You, Ilsun and Weippl, Edgar},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       pages = {124-137},
       year = {2014},
       abstract = {A major challenge in modern data-centric medicine is the increasing amount of time-dependent data, which requires efficient user-friendly solutions for dealing with such data. To create an effective and efficient knowledge discovery process, it is important to support common data manipulation tasks by creating quick, responsive and intuitive interaction methods. In this paper we describe some methods for interactive longitudinal data visualization, with a focus on the use of mobile multi-touch devices as the interaction medium, based on our design and development experiences. We argue that when it comes to longitudinal data, this device category offers remarkable additional interaction benefits compared to standard point-and-click desktop computer devices. An important advantage of multi-touch devices arises when interacting with particularly large longitudinal data sets: complex, coupled interactions, such as zooming into a region while scrolling around, are more easily achieved on a multi-touch device than with a regular mouse-based interaction device.},
       doi = {10.1007/978-3-319-10975-6_9},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=865900&pCurrPk=80773}
    }

  • [c128] A. Holzinger, B. Sommerauer, P. Spitzer, S. Juric, B. Zalik, M. Debevc, C. Lidynia, A. Calero Valdez, C. Roecker, and M. Ziefle, “Mobile Computing is not Always Advantageous: Lessons Learned from a Real-World Case Study in a Hospital“, in Availability, Reliability, and Security in Information Systems, LNCS 8708, S. Teufel, M. A. Tjoa, I. You, and E. Weippl, Eds., Heidelberg, Berlin, London, New York: Springer, 2014, pp. 110-123.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     The use of mobile computing has been expanding dramatically in recent years, and trends indicate that “the future is mobile”. Nowadays, mobile computing plays an increasingly important role in the biomedical domain, and particularly in hospitals. The benefits of using mobile devices in hospitals are no longer disputed, and many applications for medical care are already available. Many studies have proven that mobile technologies can bring various benefits for enhancing information management in the hospital. But is mobility a solution for every problem? In this paper, we will demonstrate that mobility is not always an advantage. On the basis of a field study at the pediatric surgery department of a large university hospital, carried out within a two-year mobile computing project, we have learned that mobile devices have indeed many disadvantages, particularly in stressful and hectic situations, and we conclude that mobile computing is not always advantageous.

    @incollection{c128,
       author = {Holzinger, Andreas and Sommerauer, Bettina and Spitzer, Peter and Juric, Simon and Zalik, Borut and Debevc, Matjaz and Lidynia, Chantal and Calero Valdez, André and Roecker, Carsten and Ziefle, Martina},
       title = {Mobile Computing is not Always Advantageous: Lessons Learned from a Real-World Case Study in a Hospital},
       booktitle = {Availability, Reliability, and Security in Information Systems, LNCS 8708},
       editor = {Teufel, Stephanie and Tjoa, A Min and You, Ilsun and Weippl, Edgar},
       publisher = {Springer},
       address = {Heidelberg, Berlin, London, New York},
       pages = {110-123},
       year = {2014},
       abstract = {The use of mobile computing has been expanding dramatically in recent years, and trends indicate that “the future is mobile”. Nowadays, mobile computing plays an increasingly important role in the biomedical domain, and particularly in hospitals. The benefits of using mobile devices in hospitals are no longer disputed, and many applications for medical care are already available. Many studies have proven that mobile technologies can bring various benefits for enhancing information management in the hospital. But is mobility a solution for every problem? In this paper, we will demonstrate that mobility is not always an advantage. On the basis of a field study at the pediatric surgery department of a large university hospital, carried out within a two-year mobile computing project, we have learned that mobile devices have indeed many disadvantages, particularly in stressful and hectic situations, and we conclude that mobile computing is not always advantageous.},
       doi = {10.1007/978-3-319-10975-6_8},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=865836&pCurrPk=80772}
    }

  • [book12] A. Holzinger and I. Jurisica, Knowledge Discovery and Data Mining in Biomedical Informatics: State-of-the-Art and Future Challenges, LNCS 8401, Berlin Heidelberg: Springer, 2014.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     One of the grand challenges in our digital world is the large, complex and often weakly structured data sets, together with the massive amounts of unstructured information. This “big data” challenge is most evident in biomedical informatics: the trend towards precision medicine has resulted in a sheer explosion in the amount of generated biomedical data sets. Despite the fact that human experts are very good at pattern recognition in dimensions of ≤ 3, most of the data is high-dimensional and complex, which makes manual analysis often impossible; neither the medical doctor nor the biomedical researcher can memorize all these facts. Moreover, there is always the danger of modelling artifacts. A synergistic combination of methodologies and approaches of two fields offers ideal conditions for unraveling these problems: Human–Computer Interaction (HCI) and Knowledge Discovery/Data Mining (KDD), with the goal of supporting human capabilities with machine learning and a human-in-the-loop. This state-of-the-art survey is an output of the HCI-KDD expert network and features 19 carefully selected and peer-reviewed papers related to seven hot and promising research areas: Area 1: Data Integration, Data Pre-processing and Data Mapping; Area 2: Data Mining Algorithms; Area 3: Graph-based Data Mining; Area 4: Entropy-Based Data Mining; Area 5: Topological Data Mining; Area 6: Data Visualization; and Area 7: Privacy, Data Protection, Safety and Security.

    @book{book12,
       year = {2014},
       author = {Holzinger, Andreas and Jurisica, Igor},
       title = {Knowledge Discovery and Data Mining in Biomedical Informatics: State-of-the-Art and Future Challenges, LNCS 8401},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       abstract = {One of the grand challenges in our digital world is the large, complex and often weakly structured data sets, together with the massive amounts of unstructured information. This “big data” challenge is most evident in biomedical informatics: the trend towards precision medicine has resulted in a sheer explosion in the amount of generated biomedical data sets. Despite the fact that human experts are very good at pattern recognition in dimensions of ≤ 3, most of the data is high-dimensional and complex, which makes manual analysis often impossible; neither the medical doctor nor the biomedical researcher can memorize all these facts. Moreover, there is always the danger of modelling artifacts. A synergistic combination of methodologies and approaches of two fields offers ideal conditions for unraveling these problems: Human–Computer Interaction (HCI) and Knowledge Discovery/Data Mining (KDD), with the goal of supporting human capabilities with machine learning and a human-in-the-loop. This state-of-the-art survey is an output of the HCI-KDD expert network and features 19 carefully selected and peer-reviewed papers related to seven hot and promising research areas: Area 1: Data Integration, Data Pre-processing and Data Mapping; Area 2: Data Mining Algorithms; Area 3: Graph-based Data Mining; Area 4: Entropy-Based Data Mining; Area 5: Topological Data Mining; Area 6: Data Visualization; and Area 7: Privacy, Data Protection, Safety and Security.},
       keywords = {Knowledge Discovery, Data Mining, Machine Learning, Data Integration, Life Sciences, Medicine, Biology, Biomedicine, Big Data, HCI-KDD},
       doi = {10.1007/978-3-662-43968-5},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1152268&pCurrPk=85975}
    }

  • [j40] M. Bloice, K. Simonic, and A. Holzinger, “Casebook: a virtual patient iPad application for teaching decision-making through the use of electronic health records“, BMC Medical Informatics and Decision Making, vol. 14, iss. 1, p. 66, 2014.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    BACKGROUND: Virtual Patients are a well-known and widely used form of interactive software used to simulate aspects of patient care that students are increasingly less likely to encounter during their studies. However, to take full advantage of the benefits of using Virtual Patients, students should have access to multitudes of cases. In order to promote the creation of collections of cases, a tablet application was developed which makes use of electronic health records as material for Virtual Patient cases. Because electronic health records are abundantly available on hospital information systems, this results in much material for the basis of case creation. RESULTS: An iPad-based Virtual Patient interactive software system was developed entitled Casebook. The application has been designed to read specially formatted patient cases that have been created using electronic health records, in the form of X-ray images, electrocardiograms, lab reports, and physician notes, and present these to the medical student. These health records are organised into a timeline, and the student navigates the case while answering questions regarding the patient along the way. Each health record can also be annotated with meta-information by the case designer, such as insight into the thought processes and the decision-making rationale of the physician who originally worked with the patient. Students learn decision-making skills by observing and interacting with real patient cases in this simulated environment. This paper discusses our approach in detail. CONCLUSIONS: Our group is of the opinion that Virtual Patient cases, targeted at undergraduate students, should concern patients who exhibit prototypical symptoms of the kind students may encounter when beginning their first medical jobs. Learning theory research has shown that students learn decision-making skills best when they have access to multitudes of patient cases and it is this plurality that allows students to develop their illness scripts effectively. Casebook emphasises the use of pre-existing electronic health record data as the basis for case creation, thus, it is hoped, making it easier to produce cases in larger numbers. By creating a Virtual Patient system where cases are built from abundantly available electronic health records, collections of cases can be accumulated by institutions.

    @article{j40,
       author = {Bloice, Marcus and Simonic, Klaus-Martin and Holzinger, Andreas},
       title = {Casebook: a virtual patient iPad application for teaching decision-making through the use of electronic health records},
       journal = {BMC Medical Informatics and Decision Making},
       volume = {14},
       number = {1},
       pages = {66},
       year = {2014},
       abstract = {BACKGROUND: Virtual Patients are a well-known and widely used form of interactive software used to simulate aspects of patient care that students are increasingly less likely to encounter during their studies. However, to take full advantage of the benefits of using Virtual Patients, students should have access to multitudes of cases. In order to promote the creation of collections of cases, a tablet application was developed which makes use of electronic health records as material for Virtual Patient cases. Because electronic health records are abundantly available on hospital information systems, this results in much material for the basis of case creation. RESULTS: An iPad-based Virtual Patient interactive software system was developed entitled Casebook. The application has been designed to read specially formatted patient cases that have been created using electronic health records, in the form of X-ray images, electrocardiograms, lab reports, and physician notes, and present these to the medical student. These health records are organised into a timeline, and the student navigates the case while answering questions regarding the patient along the way. Each health record can also be annotated with meta-information by the case designer, such as insight into the thought processes and the decision-making rationale of the physician who originally worked with the patient. Students learn decision-making skills by observing and interacting with real patient cases in this simulated environment. This paper discusses our approach in detail. CONCLUSIONS: Our group is of the opinion that Virtual Patient cases, targeted at undergraduate students, should concern patients who exhibit prototypical symptoms of the kind students may encounter when beginning their first medical jobs. Learning theory research has shown that students learn decision-making skills best when they have access to multitudes of patient cases and it is this plurality that allows students to develop their illness scripts effectively. Casebook emphasises the use of pre-existing electronic health record data as the basis for case creation, thus, it is hoped, making it easier to produce cases in larger numbers. By creating a Virtual Patient system where cases are built from abundantly available electronic health records, collections of cases can be accumulated by institutions.},
       doi = {10.1186/1472-6947-14-66},
       url = {http://www.biomedcentral.com/1472-6947/14/66}
    }
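
    A hedged illustration to accompany [j40]: the paper does not publish Casebook’s internal data model, but the timeline-of-annotated-records idea described in the abstract can be pictured roughly as follows (all class and field names are hypothetical; Python is used here purely as sketch material):

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class HealthRecord:
        recorded: date
        kind: str              # e.g. "x-ray", "ecg", "lab-report", "note"
        source: str            # reference to the underlying EHR document/image
        annotation: str = ""   # case designer's meta-information (rationale)

    @dataclass
    class VirtualPatientCase:
        patient_id: str
        records: list = field(default_factory=list)

        def timeline(self):
            # students navigate the records in chronological order
            return sorted(self.records, key=lambda r: r.recorded)

    case = VirtualPatientCase("anon-001")
    case.records.append(HealthRecord(date(2013, 5, 2), "lab-report", "cbc.pdf",
                                     "Note the elevated white cell count."))
    case.records.append(HealthRecord(date(2013, 5, 1), "x-ray", "chest.png"))
    for rec in case.timeline():
        print(rec.recorded, rec.kind, rec.annotation)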

  • [j41] G. Petz, M. Karpowicz, H. Fürschuß, A. Auinger, V. Stříteský, and A. Holzinger, “Computational approaches for mining user’s opinions on the Web 2.0“, Information Processing and Management, vol. 50, iss. 6, pp. 899-908, 2014.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The emerging research area of opinion mining deals with computational methods in order to find, extract and systematically analyze people’s opinions, attitudes and emotions towards certain topics. While providing interesting market research information, the user generated content existing on the Web 2.0 presents numerous challenges regarding systematic analysis, the differences and unique characteristics of the various social media channels being one of them. This article reports on the determination of such particularities, and deduces their impact on text preprocessing and opinion mining algorithms. The effectiveness of different algorithms is evaluated in order to determine their applicability to the various social media channels. Our research shows that text preprocessing algorithms are mandatory for mining opinions on the Web 2.0 and that some of these algorithms are sensitive to errors and mistakes contained in the user generated content.

    @article{j41,
       author = {Petz, Gerald and Karpowicz, Michał and Fürschuß, Harald and Auinger, Andreas and Stříteský, Václav and Holzinger, Andreas},
       title = {Computational approaches for mining user’s opinions on the Web 2.0},
       journal = {Information Processing and Management},
       volume = {50},
       number = {6},
       pages = {899-908},
       year = {2014},
       abstract = {The emerging research area of opinion mining deals with computational methods in order to find, extract and systematically analyze people’s opinions, attitudes and emotions towards certain topics. While providing interesting market research information, the user generated content existing on the Web 2.0 presents numerous challenges regarding systematic analysis, the differences and unique characteristics of the various social media channels being one of them. This article reports on the determination of such particularities, and deduces their impact on text preprocessing and opinion mining algorithms. The effectiveness of different algorithms is evaluated in order to determine their applicability to the various social media channels. Our research shows that text preprocessing algorithms are mandatory for mining opinions on the Web 2.0 and that some of these algorithms are sensitive to errors and mistakes contained in the user generated content.},
       doi = {10.1016/j.ipm.2014.07.005},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=873559&pCurrPk=81027}
    }
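
    A rough sketch of the kind of text preprocessing [j41] argues is mandatory for Web 2.0 content; the concrete rules and the tiny slang lexicon below are illustrative assumptions, not the authors’ pipeline:

    import re

    SLANG = {"u": "you", "gr8": "great", "thx": "thanks"}  # toy lexicon (assumed)

    def preprocess(post: str) -> list:
        post = post.lower()
        post = re.sub(r"https?://\S+", "", post)     # strip URLs
        post = re.sub(r"(.)\1{2,}", r"\1\1", post)   # squash letter repeats: coooool -> cool
        tokens = re.findall(r"[a-z0-9']+", post)
        return [SLANG.get(t, t) for t in tokens]     # normalize common slang

    print(preprocess("Thx, this phone is soooo gr8!! http://t.co/x"))
    # ['thanks', 'this', 'phone', 'is', 'soo', 'great']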

  • [j42] M. Debevc, Z. Stjepanovic, and A. Holzinger, “Development and evaluation of an e-learning course for deaf and hard of hearing based on the advanced Adapted Pedagogical Index (AdaPI) method“, Interactive Learning Environments, vol. 22, iss. 1, pp. 35-50, 2014.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Web-based and adapted e-learning materials provide alternative methods of learning to those used in a traditional classroom. Within the study described in this article, deaf and hard of hearing people used an adaptive e-learning environment to improve their computer literacy. This environment included streaming video with sign language interpreter video and subtitles. The courses were based on the learning management system Moodle, which also includes sign language streaming videos and subtitles. A different approach is required when adapting e-learning courses for the deaf and hard of hearing: new guidelines must be developed concerning the loading and display of video material. This is shown in the example of the e-learning course, ECDL (European Computer Driving Licence). The usability of the e-learning course is analyzed and confirmed using two methods: first, the Software Usability Measurement Inventory (SUMI) evaluation method, and second, the Adapted Pedagogical Index (AdaPI), which was developed as part of this study, and gives an index to measure the pedagogical effectiveness of e-learning courses adapted for people with disabilities. With 116 participants, of whom 22 are deaf or hard of hearing, the e-learning course for the target group has been found suitable and appropriate according to both evaluation methods.

    @article{j42,
       year = {2014},
       author = {Debevc, Matjaz and Stjepanovic, Zoran and Holzinger, Andreas},
       title = {Development and evaluation of an e-learning course for deaf and hard of hearing based on the advanced Adapted Pedagogical Index (AdaPI) method},
       journal = {Interactive Learning Environments},
       volume = {22},
       number = {1},
       pages = {35-50},
       abstract = {Web-based and adapted e-learning materials provide alternative methods of learning to those used in a traditional classroom. Within the study described in this article, deaf and hard of hearing people used an adaptive e-learning environment to improve their computer literacy. This environment included streaming video with sign language interpreter video and subtitles. The courses were based on the learning management system Moodle, which also includes sign language streaming videos and subtitles. A different approach is required when adapting e-learning courses for the deaf and hard of hearing: new guidelines must be developed concerning the loading and display of video material. This is shown in the example of the e-learning course, ECDL (European Computer Driving Licence). The usability of the e-learning course is analyzed and confirmed using two methods: first, the Software Usability Measurement Inventory (SUMI) evaluation method, and second, the Adapted Pedagogical Index (AdaPI), which was developed as part of this study, and gives an index to measure the pedagogical effectiveness of e-learning courses adapted for people with disabilities. With 116 participants, of whom 22 are deaf or hard of hearing, the e-learning course for the target group has been found suitable and appropriate according to both evaluation methods.},
       keywords = {Information Systems},
       doi = {10.1080/10494820.2011.641673},
       url = {http://dx.doi.org/10.1080/10494820.2011.641673}
    }

  • [j43] P. Yildirim, L. Majnaric, O. Ekmekci, and A. Holzinger, “Knowledge discovery of drug data on the example of adverse reaction prediction“, BMC Bioinformatics, vol. 15, iss. Suppl 6, p. S7, 2014.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    BACKGROUND: Antibiotics are among the most widely prescribed drugs for children and among the most likely to be associated with adverse reactions. Records of adverse reactions and allergies to antibiotics considerably affect prescription choices. We consider this a biomedical decision-making problem and explore hidden knowledge in survey results on data extracted from a big data pool of health records of children from the Health Center of Osijek, Eastern Croatia. RESULTS: We applied a k-means algorithm to the dataset to generate clusters of records with similar features and evaluated the results. Our results highlight that some types of antibiotics form distinct clusters, an insight that helps clinicians make better-supported decisions. CONCLUSIONS: Medical professionals can investigate the clusters our study revealed, thus gaining useful knowledge and insight into this data for their clinical studies.

    @article{j43,
       author = {Yildirim, Pinar and Majnaric, Ljiljana and Ekmekci, Ozgur and Holzinger, Andreas},
       title = {Knowledge discovery of drug data on the example of adverse reaction prediction},
       journal = {BMC Bioinformatics},
       volume = {15},
       number = {Suppl 6},
       pages = {S7},
       year = {2014},
       abstract = {BACKGROUND: Antibiotics are among the most widely prescribed drugs for children and among the most likely to be associated with adverse reactions. Records of adverse reactions and allergies to antibiotics considerably affect prescription choices. We consider this a biomedical decision-making problem and explore hidden knowledge in survey results on data extracted from a big data pool of health records of children from the Health Center of Osijek, Eastern Croatia. RESULTS: We applied a k-means algorithm to the dataset to generate clusters of records with similar features and evaluated the results. Our results highlight that some types of antibiotics form distinct clusters, an insight that helps clinicians make better-supported decisions. CONCLUSIONS: Medical professionals can investigate the clusters our study revealed, thus gaining useful knowledge and insight into this data for their clinical studies.},
       doi = {10.1186/1471-2105-15-S6-S7},
       url = {http://www.biomedcentral.com/1471-2105/15/S6/S7}
    }
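
    To make the clustering step in [j43] concrete, a minimal k-means run over an invented binary reaction matrix; the antibiotics, features and values are illustrative assumptions, since the paper’s dataset is not public:

    import numpy as np
    from sklearn.cluster import KMeans

    # rows = antibiotics, columns = binary adverse-reaction survey features
    X = np.array([
        [1, 0, 1, 0],   # amoxicillin   (all values invented)
        [1, 0, 1, 1],   # ampicillin
        [0, 1, 0, 0],   # azithromycin
        [0, 1, 0, 1],   # cefalexin
    ])
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_)   # antibiotics sharing a label have similar reaction profiles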

  • [j44] F. Emmert-Streib, R. de Matos Simoes, G. Glazko, S. McDade, B. Haibe-Kains, A. Holzinger, M. Dehmer, and F. Campbell, “Functional and genetic analysis of the colon cancer network“, BMC Bioinformatics, vol. 15, iss. Suppl 6, p. S6, 2014.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Cancer is a complex disease that has proven to be difficult to understand on the single-gene level. For this reason a functional elucidation needs to take interactions among genes on a systems-level into account. In this study, we infer a colon cancer network from a large-scale gene expression data set by using the method BC3Net. We provide a structural and a functional analysis of this network and also connect its molecular interaction structure with the chromosomal locations of the genes enabling the definition of cis- and trans-interactions. Furthermore, we investigate the interaction of genes that can be found in close neighborhoods on the chromosomes to gain insight into regulatory mechanisms. To our knowledge this is the first study analyzing the genome-scale colon cancer network.

    @article{j44,
       author = {Emmert-Streib, Frank and de Matos Simoes, Ricardo and Glazko, Galina and McDade, Simon and Haibe-Kains, Benjamin and Holzinger, Andreas and Dehmer, Matthias and Campbell, Frederick},
       title = {Functional and genetic analysis of the colon cancer network},
       journal = {BMC Bioinformatics},
       volume = {15},
       number = {Suppl 6},
       pages = {S6},
       year = {2014},
       abstract = {Cancer is a complex disease that has proven to be difficult to understand on the single-gene level. For this reason a functional elucidation needs to take interactions among genes on a systems-level into account. In this study, we infer a colon cancer network from a large-scale gene expression data set by using the method BC3Net. We provide a structural and a functional analysis of this network and also connect its molecular interaction structure with the chromosomal locations of the genes enabling the definition of cis- and trans-interactions. Furthermore, we investigate the interaction of genes that can be found in close neighborhoods on the chromosomes to gain insight into regulatory mechanisms. To our knowledge this is the first study analyzing the genome-scale colon cancer network.},
       doi = {10.1186/1471-2105-15-S6-S6},
       url = {http://www.biomedcentral.com/1471-2105/15/S6/S6}
    }
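
    BC3Net, the method used in [j44], is distributed as an R package, so the following is only a loose, simplified analogue and not the paper’s method: inferring a small gene relevance network by thresholding pairwise correlations of synthetic expression profiles:

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    expr = rng.normal(size=(5, 100))      # 5 genes x 100 samples, synthetic
    genes = ["G1", "G2", "G3", "G4", "G5"]

    corr = np.corrcoef(expr)              # gene-by-gene correlation matrix
    G = nx.Graph()
    G.add_nodes_from(genes)
    for i in range(len(genes)):
        for j in range(i + 1, len(genes)):
            if abs(corr[i, j]) > 0.3:     # arbitrary relevance threshold
                G.add_edge(genes[i], genes[j], weight=float(corr[i, j]))
    print(G.number_of_edges(), "edges inferred")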

  • [j45] H. Mueller, R. Reihs, K. Zatloukal, and A. Holzinger, “Analysis of biomedical data with multilevel glyphs“, BMC Bioinformatics, vol. 15, iss. Suppl 6, p. S5, 2014.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    BACKGROUND: This paper presents multilevel data glyphs optimized for the interactive knowledge discovery and visualization of large biomedical data sets. Data glyphs are three-dimensional objects defined by multiple levels of geometric descriptions (levels of detail) combined with a mapping of data attributes to graphical elements and methods, which specify their spatial position. METHODS: In the data mapping phase, which is done by a biomedical expert, meta information about the data attributes (scale, number of distinct values) is compared with the visual capabilities of the graphical elements in order to give the user feedback about the correctness of the variable mapping. The spatial arrangement of glyphs is done in a dimetric view, which leads to high data density, simplifies 3D navigation and avoids perspective distortion. RESULTS: We show the usage of data glyphs in the disease analyser, a visual analytics application for personalized medicine, and provide an outlook on a biomedical web visualization scenario. CONCLUSIONS: Data glyphs can be successfully applied in the disease analyser for the analysis of big medical data sets. In particular, the automatic validation of the data mapping, the selection of subgroups within histograms and the visual comparison of value distributions were seen by experts as important functionality.

    @article{j45,
       author = {Mueller, Heimo and Reihs, Robert and Zatloukal, Kurt and Holzinger, Andreas},
       title = {Analysis of biomedical data with multilevel glyphs},
       journal = {BMC Bioinformatics},
       volume = {15},
       number = {Suppl 6},
       pages = {S5},
       year = {2014},
       abstract = {BACKGROUND: This paper presents multilevel data glyphs optimized for the interactive knowledge discovery and visualization of large biomedical data sets. Data glyphs are three-dimensional objects defined by multiple levels of geometric descriptions (levels of detail) combined with a mapping of data attributes to graphical elements and methods, which specify their spatial position. METHODS: In the data mapping phase, which is done by a biomedical expert, meta information about the data attributes (scale, number of distinct values) is compared with the visual capabilities of the graphical elements in order to give the user feedback about the correctness of the variable mapping. The spatial arrangement of glyphs is done in a dimetric view, which leads to high data density, simplifies 3D navigation and avoids perspective distortion. RESULTS: We show the usage of data glyphs in the disease analyser, a visual analytics application for personalized medicine, and provide an outlook on a biomedical web visualization scenario. CONCLUSIONS: Data glyphs can be successfully applied in the disease analyser for the analysis of big medical data sets. In particular, the automatic validation of the data mapping, the selection of subgroups within histograms and the visual comparison of value distributions were seen by experts as important functionality.},
       doi = {10.1186/1471-2105-15-S6-S5},
       url = {http://www.biomedcentral.com/1471-2105/15/S6/S5}
    }
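
    The mapping validation described in the METHODS of [j45] can be pictured as a simple capacity check; the channel capacities and function below are invented for illustration and are not values from the paper:

    # how many distinct values each graphical element can plausibly encode (assumed)
    CHANNEL_CAPACITY = {"color_hue": 8, "shape": 6, "size": 5}

    def check_mapping(attr_name, distinct_values, channel):
        if distinct_values > CHANNEL_CAPACITY[channel]:
            return f"'{attr_name}': {distinct_values} values exceed what '{channel}' can show"
        return f"'{attr_name}' -> '{channel}' looks feasible"

    print(check_mapping("tumor grade", 4, "shape"))
    print(check_mapping("ICD-10 chapter", 22, "color_hue"))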

  • [j46] C. Mayer, M. Bachler, M. Hortenhuber, C. Stocker, A. Holzinger, and S. Wassertheurer, “Selection of entropy-measure parameters for knowledge discovery in heart rate variability data“, BMC Bioinformatics, vol. 15, iss. Suppl 6, p. S2, 2014.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    BACKGROUND: Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. METHODS: This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors’ composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. RESULTS: The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. CONCLUSIONS: Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary.

    @article{j46,
       author = {Mayer, Christopher and Bachler, Martin and Hortenhuber, Matthias and Stocker, Christof and Holzinger, Andreas and Wassertheurer, Siegfried},
       title = {Selection of entropy-measure parameters for knowledge discovery in heart rate variability data},
       journal = {BMC Bioinformatics},
       volume = {15},
       number = {Suppl 6},
       pages = {S2},
       year = {2014},
       abstract = {BACKGROUND: Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. METHODS: This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. RESULTS: The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. CONCLUSIONS: Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary.},
       doi = {10.1186/1471-2105-15-S6-S2},
       url = {http://www.biomedcentral.com/1471-2105/15/S6/S2}
    }
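
    To make the parameters discussed in [j46] concrete, here is a compact sample entropy (SampEn) sketch in which m is the template length and the tolerance r defaults to 0.2 times the sample standard deviation, one of the settings examined in the paper; this is a common textbook formulation, not the authors’ code:

    import numpy as np

    def sampen(x, m=2, r=None):
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * x.std()                 # r = 0.2 sigma, as discussed above
        def pairs_within_r(mm):
            t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            # Chebyshev distance between all template pairs, self-matches excluded
            d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
            return (np.sum(d <= r) - len(t)) / 2
        B, A = pairs_within_r(m), pairs_within_r(m + 1)
        return -np.log(A / B)

    rng = np.random.default_rng(1)
    rr = rng.normal(0.8, 0.05, 300)   # synthetic RR-interval series (seconds)
    print(sampen(rr))                 # higher values indicate less regularity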

  • [NN] M. Bloice, K. Simonic, and A. Holzinger, “On the Usage of Health Records for the Teaching of Decision-Making to Students of Medicine“, in The New Development of Technology Enhanced Learning, R. Huang, Kinshuk, and N. Chen, Eds., Springer Berlin Heidelberg, 2014, pp. 185-201.
    [BibTeX] [DOI] [Download PDF]
    @incollection{NN,
       author = {Bloice, Marcus D. and Simonic, Klaus-Martin and Holzinger, Andreas},
       title = {On the Usage of Health Records for the Teaching of Decision-Making to Students of Medicine},
       booktitle = {The New Development of Technology Enhanced Learning},
       editor = {Huang, Ronghuai and Kinshuk and Chen, Nian-Shing},
       publisher = {Springer Berlin Heidelberg},
       pages = {185-201},
       year = {2014},
       doi = {10.1007/978-3-642-38291-8_11},
       url = {http://dx.doi.org/10.1007/978-3-642-38291-8_11}
    }

  • [p3] B. Huppertz and A. Holzinger, “Biobanks – A Source of Large Biological Data Sets: Open Problems and Future Challenges“, in Interactive Knowledge Discovery and Data Mining: State-of-the-Art and Future Challenges in Biomedical Informatics, Lecture Notes in Computer Science, LNCS 8401, A. Holzinger and I. Jurisica, Eds., Heidelberg, Berlin: Springer, 2014, pp. 317-330.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Biobanks are collections of biological samples (e.g. tissues, blood and derivatives, other body fluids, cells, DNA, etc.) and their associated data. Consequently, human biobanks represent collections of human samples and data and are of fundamental importance for scientific research as they are an excellent resource to access and measure biological constituents that can be used to monitor the status and trends of both health and disease. Most -omics research relies on secure access to these collections of stored human samples to provide the basis for establishing the ranges and frequencies of expression. However, there are many open questions and future challenges associated with the large amounts of heterogeneous data, ranging from pre-processing, data integration and data fusion to knowledge discovery and data mining, along with a strong focus on privacy, data protection, safety and security.

    @incollection{p3,
       author = {Huppertz, Berthold and Holzinger, Andreas},
       title = {Biobanks – A Source of Large Biological Data Sets: Open Problems and Future Challenges},
       booktitle = {Interactive Knowledge Discovery and Data Mining: State-of-the-Art and Future Challenges in Biomedical Informatics, Lecture Notes in Computer Science, LNCS 8401},
       editor = {Holzinger, Andreas and Jurisica, Igor},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       pages = {317-330},
       year = {2014},
       abstract = {Biobanks are collections of biological samples (e.g. tissues, blood and derivatives, other body fluids, cells, DNA, etc.) and their associated data. Consequently, human biobanks represent collections of human samples and data and are of fundamental importance for scientific research as they are an excellent resource to access and measure biological constituents that can be used to monitor the status and trends of both health and disease. Most -omics research relies on secure access to these collections of stored human samples to provide the basis for establishing the ranges and frequencies of expression. However, there are many open questions and future challenges associated with the large amounts of heterogeneous data, ranging from pre-processing, data integration and data fusion to knowledge discovery and data mining, along with a strong focus on privacy, data protection, safety and security.},
       doi = {10.1007/978-3-662-43968-5_18},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=974699&pCurrPk=83010}
    }

  • [p4] P. Kieseberg, H. Hobel, S. Schrittwieser, E. Weippl, and A. Holzinger, “Protecting Anonymity in Data-Driven Biomedical Science“, in Interactive Knowledge Discovery and Data Mining in Biomedical Informatics, Lecture Notes in Computer Science, LNCS 8401, A. Holzinger and I. Jurisica, Eds., Berlin Heidelberg: Springer, 2014, pp. 301-316.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    With formidable recent improvements in data processing and information retrieval, knowledge discovery/data mining, business intelligence, content analytics and other upcoming empirical approaches have an enormous potential, particularly for the data-intensive biomedical sciences. For results derived using empirical methods, the underlying data set should be made available, at least to the reviewers during the review process, to ensure the quality of the research, prevent fraud or errors, and enable the replication of studies. However, particularly in medicine and the life sciences, this leads to a discrepancy: the disclosure of research data raises considerable privacy concerns, as researchers of course have full responsibility to protect their (volunteer) subjects and hence must adhere to the respective ethical policies. One solution to this problem lies in the protection of sensitive information in medical data sets by applying appropriate anonymization. This paper provides an overview of the most important and well-researched approaches and discusses open research problems in this area, with the goal of acting as a starting point for further investigation.

    @incollection{p4,
       author = {Kieseberg, Peter and Hobel, Heidelinde and Schrittwieser, Sebastian and Weippl, Edgar and Holzinger, Andreas},
       title = {Protecting Anonymity in Data-Driven Biomedical Science},
       booktitle = {Interactive Knowledge Discovery and Data Mining in Biomedical Informatics, Lecture Notes in Computer Science, LNCS 8401},
       editor = {Holzinger, Andreas and Jurisica, Igor},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       pages = {301-316},
       year = {2014},
       abstract = {With formidable recent improvements in data processing and information retrieval, knowledge discovery/data mining, business intelligence, content analytics and other upcoming empirical approaches have an enormous potential, particularly for the data-intensive biomedical sciences. For results derived using empirical methods, the underlying data set should be made available, at least to the reviewers during the review process, to ensure the quality of the research, prevent fraud or errors, and enable the replication of studies. However, particularly in medicine and the life sciences, this leads to a discrepancy: the disclosure of research data raises considerable privacy concerns, as researchers of course have full responsibility to protect their (volunteer) subjects and hence must adhere to the respective ethical policies. One solution to this problem lies in the protection of sensitive information in medical data sets by applying appropriate anonymization. This paper provides an overview of the most important and well-researched approaches and discusses open research problems in this area, with the goal of acting as a starting point for further investigation.},
       doi = {10.1007/978-3-662-43968-5_17},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=935397&pCurrPk=82007}
    }
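
    One of the best-known notions surveyed in [p4], k-anonymity, is easy to state in code: every combination of quasi-identifiers must occur at least k times. The records and attributes below are invented for illustration:

    from collections import Counter

    # quasi-identifier tuples: (sex, ZIP prefix, birth year) -- made-up data
    records = [
        ("male", "8047", "1975"),
        ("male", "8047", "1975"),
        ("female", "8010", "1980"),
        ("female", "8010", "1980"),
        ("female", "8010", "1981"),
    ]

    def is_k_anonymous(rows, k):
        # every equivalence class of quasi-identifiers needs >= k members
        return min(Counter(rows).values()) >= k

    print(is_k_anonymous(records, 2))   # False: ("female", "8010", "1981") is unique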

  • [p7] A. Holzinger, M. Hörtenhuber, C. Mayer, M. Bachler, S. Wassertheurer, A. Pinho, and D. Koslicki, “On Entropy-Based Data Mining“, in Interactive Knowledge Discovery and Data Mining in Biomedical Informatics, Lecture Notes in Computer Science, LNCS 8401, A. Holzinger and I. Jurisica, Eds., Berlin Heidelberg: Springer, 2014, pp. 209-226.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    In the real world, we are confronted not only with complex and high-dimensional data sets, but usually with noisy, incomplete and uncertain data, where the application of traditional methods of knowledge discovery and data mining always entails the danger of modeling artifacts. Information entropy was originally introduced by Shannon (1949) as a measure of uncertainty in the data. Since then, many different types of entropy methods have emerged, with a large number of different purposes and possible application areas. In this paper, we briefly discuss the applicability of entropy methods for use in knowledge discovery and data mining, with particular emphasis on biomedical data. We present a very short overview of the state of the art, with focus on four methods: Approximate Entropy (ApEn), Sample Entropy (SampEn), Fuzzy Entropy (FuzzyEn), and Topological Entropy (FiniteTopEn). Finally, we discuss some open problems and future research challenges.

    @incollection{p7,
       author = {Holzinger, Andreas and Hörtenhuber, Matthias and Mayer, Christopher and Bachler, Martin and Wassertheurer, Siegfried and Pinho, Armando J. and Koslicki, David},
       title = {On Entropy-Based Data Mining},
       booktitle = {Interactive Knowledge Discovery and Data Mining in Biomedical Informatics, Lecture Notes in Computer Science, LNCS 8401},
       editor = {Holzinger, Andreas and Jurisica, Igor},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       pages = {209-226},
       year = {2014},
       abstract = {In the real world, we are confronted not only with complex and high-dimensional data sets, but usually with noisy, incomplete and uncertain data, where the application of traditional methods of knowledge discovery and data mining always entails the danger of modeling artifacts. Information entropy was originally introduced by Shannon (1949) as a measure of uncertainty in the data. Since then, many different types of entropy methods have emerged, with a large number of different purposes and possible application areas. In this paper, we briefly discuss the applicability of entropy methods for use in knowledge discovery and data mining, with particular emphasis on biomedical data. We present a very short overview of the state of the art, with focus on four methods: Approximate Entropy (ApEn), Sample Entropy (SampEn), Fuzzy Entropy (FuzzyEn), and Topological Entropy (FiniteTopEn). Finally, we discuss some open problems and future research challenges.},
       doi = {10.1007/978-3-662-43968-5_12},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=765728&pCurrPk=79159}
    }
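
    As a baseline for the entropy family surveyed in [p7], Shannon’s measure of uncertainty for a discrete distribution takes only a few lines (illustrative only):

    import numpy as np

    def shannon_entropy(counts):
        p = np.asarray(counts, dtype=float)
        p = p[p > 0] / p.sum()             # normalize, drop zero-probability bins
        return -np.sum(p * np.log2(p))     # entropy in bits

    print(shannon_entropy([8, 8]))    # 1.0 bit (fair coin)
    print(shannon_entropy([15, 1]))   # ~0.34 bits (highly regular source)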

  • [p8] C. Turkay, F. Jeanquartier, A. Holzinger, and H. Hauser, “On Computationally-enhanced Visual Analysis of Heterogeneous Data and its Application in Biomedical Informatics“, in Interactive Knowledge Discovery and Data Mining: State-of-the-Art and Future Challenges in Biomedical Informatics. Lecture Notes in Computer Science LNCS 8401, A. Holzinger and I. Jurisica, Eds., Berlin, Heidelberg: Springer, 2014, pp. 117-140.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    With the advance of new data acquisition and generation technologies, the biomedical domain is becoming increasingly data-driven. Thus, understanding the information in large and complex data sets has been the focus of several research fields such as statistics, data mining, machine learning, and visualization. While the first three fields predominantly rely on computational power, visualization relies mainly on human perceptual and cognitive capabilities for extracting information. Data visualization, similar to Human–Computer Interaction, attempts an appropriate interaction between human and data to interactively exploit data sets. Specifically within the analysis of complex data sets, visualization researchers have integrated computational methods to enhance the interactive processes. In this state-of-the-art report, we investigate how such an integration is carried out. We study the related literature with respect to the underlying analytical tasks and methods of integration. In addition, we focus on how such methods are applied to the biomedical domain and present a concise overview within our taxonomy. Finally, we discuss some open problems and future challenges.

    @incollection{p8,
       author = {Turkay, Cagatay and Jeanquartier, Fleur and Holzinger, Andreas and Hauser, Helwig},
       title = {On Computationally-enhanced Visual Analysis of Heterogeneous Data and its Application in Biomedical Informatics},
       booktitle = {Interactive Knowledge Discovery and Data Mining: State-of-the-Art and Future Challenges in Biomedical Informatics. Lecture Notes in Computer Science LNCS 8401},
       editor = {Holzinger, Andreas and Jurisica, Igor},
       publisher = {Springer},
       address = {Berlin, Heidelberg},
       pages = {117-140},
       year = {2014},
       abstract = {With the advance of new data acquisition and generation technologies, the biomedical domain is becoming increasingly data-driven. Thus, understanding the information in large and complex data sets has been the focus of several research fields such as statistics, data mining, machine learning, and visualization. While the first three fields predominantly rely on computational power, visualization relies mainly on human perceptual and cognitive capabilities for extracting information. Data visualization, similar to Human–Computer Interaction, attempts an appropriate interaction between human and data to interactively exploit data sets. Specifically within the analysis of complex data sets, visualization researchers have integrated computational methods to enhance the interactive processes. In this state-of-the-art report, we investigate how such an integration is carried out. We study the related literature with respect to the underlying analytical tasks and methods of integration. In addition, we focus on how such methods are applied to the biomedical domain and present a concise overview within our taxonomy. Finally, we discuss some open problems and future challenges.},
       doi = {10.1007/978-3-662-43968-5_7},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=764244&pCurrPk=79140}
    }

  • [p9] P. Yildirim, M. Bloice, and A. Holzinger, “Knowledge Discovery and Visualization of Clusters for Erythromycin Related Adverse Events in the FDA Drug Adverse Event Reporting System“, in Interactive Knowledge Discovery and Data Mining in Biomedical Informatics: State-of-the-Art and Future Challenges. Lecture Notes in Computer Science LNCS 8401, A. Holzinger and I. Jurisica, Eds., Heidelberg, Berlin: Springer, 2014, pp. 101-116.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    In this paper, a research study to discover hidden knowledge in the reports of the public release of the Food and Drug Administration (FDA)’s Adverse Event Reporting System (FAERS) for erythromycin is presented. Erythromycin is an antibiotic used to treat certain infections caused by bacteria. Bacterial infections can cause significant morbidity and mortality, and the costs of treatment are a heavy burden on health institutions around the world. Since erythromycin is of great interest in medical research, the relationships between patient demographics, adverse event outcomes, and the adverse events of this drug were analyzed. The FDA’s FAERS database was used to create a dataset for cluster analysis in order to gain some statistical insights. The reports contained within the dataset concern 3792 (44.1%) female and 4798 (55.8%) male patients. The mean patient age is 41.759 years. The most frequent adverse event reported is oligohydramnios and the most frequent adverse event outcome is OT (Other). The dataset was analyzed using the DBSCAN clustering algorithm, and according to the results, a number of clusters and associations were obtained, which are reported here. It is believed medical researchers and pharmaceutical companies can utilize these results and test these relationships within their clinical studies.

    @incollection{p9,
       author = {Yildirim, Pinar and Bloice, Marcus and Holzinger, Andreas},
       title = {Knowledge Discovery and Visualization of Clusters for Erythromycin Related Adverse Events in the FDA Drug Adverse Event Reporting System},
       booktitle = {Interactive Knowledge Discovery and Data Mining in Biomedical Informatics: State-of-the-Art and Future Challenges. Lecture Notes in Computer Science LNCS 8401},
       editor = {Holzinger, Andreas and Jurisica, Igor},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       pages = {101-116},
       year = {2014},
       abstract = {In this paper, a research study to discover hidden knowledge in the reports of the public release of the Food and Drug Administration (FDA)’s Adverse Event Reporting System (FAERS) for erythromycin is presented. Erythromycin is an antibiotic used to treat certain infections caused by bacteria. Bacterial infections can cause significant morbidity and mortality, and the costs of treatment are a heavy burden on health institutions around the world. Since erythromycin is of great interest in medical research, the relationships between patient demographics, adverse event outcomes, and the adverse events of this drug were analyzed. The FDA’s FAERS database was used to create a dataset for cluster analysis in order to gain some statistical insights. The reports contained within the dataset concern 3792 (44.1\%) female and 4798 (55.8\%) male patients. The mean patient age is 41.759 years. The most frequent adverse event reported is oligohydramnios and the most frequent adverse event outcome is OT (Other). The dataset was analyzed using the DBSCAN clustering algorithm, and according to the results, a number of clusters and associations were obtained, which are reported here. It is believed medical researchers and pharmaceutical companies can utilize these results and test these relationships within their clinical studies.},
       doi = {10.1007/978-3-662-43968-5_6},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=974587&pCurrPk=83006}
    }
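
    A minimal DBSCAN run in the spirit of the analysis in [p9], over an invented one-hot report-by-event matrix; eps and min_samples here are assumptions, not the paper’s settings:

    import numpy as np
    from sklearn.cluster import DBSCAN

    # rows = adverse event reports, columns = presence of an event term (invented)
    X = np.array([
        [1, 1, 0, 0],
        [1, 1, 0, 0],
        [1, 0, 0, 0],
        [0, 0, 1, 1],
        [0, 0, 1, 1],
        [0, 1, 1, 0],
    ])
    labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(X)
    print(labels)   # -1 marks noise reports outside any dense cluster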

2013

  • [c107] M. Ziefle, L. Klack, W. Wilkowska, and A. Holzinger, “Acceptance of Telemedical Treatments – A Medical Professional Point of View“, in Human Interface and the Management of Information. Information and Interaction for Health, Safety, Mobility and Complex Environments, Lecture Notes in Computer Science LNCS 8017, S. Yamamoto, Ed., Berlin Heidelberg: Springer, 2013, pp. 325-334.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The demographic change has tremendous consequences for health care availability, with a growing mismatch between rising numbers of patients and the declining number of care personnel. As a consequence, considerable shortcomings in the availability, accessibility, and quality of health care can be expected. Telemedicine and telemonitoring services are promising approaches to compensating for this gap, especially for long-term monitoring, but also within the health care supply chain. Despite this potential, the acceptance of telemedicine is quite low. In this paper we report on two studies focusing on the acceptance of telemedical services. First, chronically ill persons were studied experimentally with respect to their acceptance of telemedical systems. Second, a survey was conducted to assess medical professionals’ points of view. Findings reveal perceived benefits in the context of telemedical services, but also considerable barriers, especially on the medical doctors’ side. Outcomes may contribute to the development of a sensitive and transparent communication and information strategy for stakeholders, as well as public awareness of the benefits and drawbacks of telemedical services.

    @incollection{c107,
       year = {2013},
       author = {Ziefle, Martina and Klack, Lars and Wilkowska, Wiktoria and Holzinger, Andreas},
       title = {Acceptance of Telemedical Treatments – A Medical Professional Point of View},
       booktitle = {Human Interface and the Management of Information. Information and Interaction for Health, Safety, Mobility and Complex Environments, Lecture Notes in Computer Science LNCS 8017},
       editor = {Yamamoto, Sakae},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       pages = {325-334},
       abstract = {The demographic change has tremendous consequences for health care availability, with a growing mismatch between rising numbers of patients and the declining number of care personnel. As a consequence, considerable shortcomings in the availability, accessibility, and quality of health care can be expected. Telemedicine and telemonitoring services are promising approaches to compensating for this gap, especially for long-term monitoring, but also within the health care supply chain. Despite this potential, the acceptance of telemedicine is quite low. In this paper we report on two studies focusing on the acceptance of telemedical services. First, chronically ill persons were studied experimentally with respect to their acceptance of telemedical systems. Second, a survey was conducted to assess medical professionals’ points of view. Findings reveal perceived benefits in the context of telemedical services, but also considerable barriers, especially on the medical doctors’ side. Outcomes may contribute to the development of a sensitive and transparent communication and information strategy for stakeholders, as well as public awareness of the benefits and drawbacks of telemedical services.},
       doi = {10.1007/978-3-642-39215-3_39},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=985276&pCurrPk=71939}
    }

  • [c112] P. Yildirim, L. Majnarić, O. I. Ekmekci, and A. Holzinger, “On the Prediction of Clusters for Adverse Reactions and Allergies on Antibiotics for Children to Improve Biomedical Decision Making“, in Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 8127, A. Cuzzocrea, C. Kittl, D. E. Simos, E. Weippl, and L. Xu, Eds., Heidelberg, Berlin, New York: Springer, 2013, pp. 431-445.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    In this paper, we report on a study to discover hidden patterns in survey results on adverse reactions and allergy (ARA) to antibiotics for children. Antibiotics are the most commonly prescribed drugs in children and the most likely to be associated with adverse reactions. Records of adverse reactions and allergy to antibiotics considerably affect prescription choices. We consider this a biomedical decision problem and explore hidden knowledge in survey results on data extracted from the health records of children from the Health Center of Osijek, Eastern Croatia. We apply the K-means algorithm to the data in order to generate clusters and evaluate the results. As a result, some antibiotics form their own clusters. Consequently, medical professionals can investigate these clusters, thus gaining useful knowledge and insight into this data for their clinical studies.

    @incollection{c112,
       author = {Yildirim, Pinar and Majnarić, Ljiljana and Ekmekci, Ozgur Ilyas and Holzinger, Andreas},
       title = {On the Prediction of Clusters for Adverse Reactions and Allergies on Antibiotics for Children to Improve Biomedical Decision Making},
       booktitle = {Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 8127},
       editor = {Cuzzocrea, Alfredo and Kittl, Christian and Simos, Dimitris E. and Weippl, Edgar and Xu, Lida},
       publisher = {Springer},
       address = {Heidelberg, Berlin, New York},
       pages = {431-445},
       year = {2013},
       abstract = {In this paper, we report on a study to discover hidden patterns in survey results on adverse reactions and allergy (ARA) to antibiotics for children. Antibiotics are the most commonly prescribed drugs in children and the most likely to be associated with adverse reactions. Records of adverse reactions and allergy to antibiotics considerably affect prescription choices. We consider this a biomedical decision problem and explore hidden knowledge in survey results on data extracted from the health records of children from the Health Center of Osijek, Eastern Croatia. We apply the K-means algorithm to the data in order to generate clusters and evaluate the results. As a result, some antibiotics form their own clusters. Consequently, medical professionals can investigate these clusters, thus gaining useful knowledge and insight into this data for their clinical studies.},
       doi = {10.1007/978-3-642-40511-2_31},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=411644&pCurrPk=72408}
    }

  • [c103] P. Yildirim, I. Ekmekci, and A. Holzinger, “On Knowledge Discovery in Open Medical Data on the Example of the FDA Drug Adverse Event Reporting System for Alendronate (Fosamax)“, in Human-Computer Interaction and Knowledge Discovery in Complex, Unstructured, Big Data, Lecture Notes in Computer Science, LNCS 7947, A. Holzinger and G. Pasi, Eds., Berlin Heidelberg: Springer, 2013, pp. 195-206.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    In this paper, we present a study to discover hidden patterns in the reports of the public release of the Food and Drug Administration (FDA)’s Adverse Event Reporting System (AERS) for the drug alendronate (fosamax). Alendronate (fosamax) is a widely used medication for the treatment of osteoporosis. Osteoporosis is recognised as an important public health problem because of the significant morbidity, mortality and costs of treatment. Given the importance of alendronate (fosamax) for medical research, we explore the relationships between patient demographics, adverse event outcomes and the drug’s adverse events. We analyze the FDA’s AERS reports, which cover the period from the third quarter of 2005 through the second quarter of 2012, and create a dataset for association analysis. Both the Apriori and Predictive Apriori algorithms are used to generate rules, and the results are interpreted and evaluated. According to the results, some interesting rules and associations are obtained from the dataset. We believe that our results can be useful for medical researchers and for decision making at pharmaceutical companies.

    @incollection{c103,
       author = {Yildirim, Pinar and Ekmekci, Ilyas Ozgur and Holzinger, Andreas},
       title = {On Knowledge Discovery in Open Medical Data on the Example of the FDA Drug Adverse Event Reporting System for Alendronate (Fosamax)},
       booktitle = {Human-Computer Interaction and Knowledge Discovery in Complex, Unstructured, Big Data, Lecture Notes in Computer Science, LNCS 7947},
       editor = {Holzinger, Andreas and Pasi, Gabriella},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       pages = {195-206},
       year = {2013},
       abstract = {In this paper, we present a study to discover hidden patterns in the reports of the public release of the Food and Drug Administration (FDA)’s Adverse Event Reporting System (AERS) for the drug alendronate (fosamax). Alendronate (fosamax) is a widely used medication for the treatment of osteoporosis. Osteoporosis is recognised as an important public health problem because of the significant morbidity, mortality and costs of treatment. Given the importance of alendronate (fosamax) for medical research, we explore the relationships between patient demographics, adverse event outcomes and the drug’s adverse events. We analyze the FDA’s AERS reports, which cover the period from the third quarter of 2005 through the second quarter of 2012, and create a dataset for association analysis. Both the Apriori and Predictive Apriori algorithms are used to generate rules, and the results are interpreted and evaluated. According to the results, some interesting rules and associations are obtained from the dataset. We believe that our results can be useful for medical researchers and for decision making at pharmaceutical companies.},
       doi = {10.1007/978-3-642-39146-0_18},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=511050&pCurrPk=71943}
    }
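
    To illustrate the association analysis named in [c103], a toy level-wise Apriori pass over invented report transactions (frequent itemsets only; rule generation and the actual AERS data are omitted):

    from itertools import combinations

    transactions = [   # invented report attributes, not AERS records
        {"fosamax", "female", "age>65", "oesophagitis"},
        {"fosamax", "female", "age>65", "femur-fracture"},
        {"fosamax", "male", "age>65", "oesophagitis"},
        {"fosamax", "female", "oesophagitis"},
    ]

    def frequent_itemsets(db, min_sup, max_len=3):
        n = len(db)
        level = [frozenset([i]) for i in {i for t in db for i in t}]
        freq, k = {}, 1
        while level and k <= max_len:
            support = {c: sum(c <= t for t in db) / n for c in level}
            survivors = {c: s for c, s in support.items() if s >= min_sup}
            freq.update(survivors)
            # join step: combine frequent k-itemsets into (k+1)-candidates
            level = list({a | b for a, b in combinations(survivors, 2)
                          if len(a | b) == k + 1})
            k += 1
        return freq

    for itemset, sup in frequent_itemsets(transactions, 0.5).items():
        print(sorted(itemset), sup)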

  • [c108] S. Xie, M. Helfert, A. Lugmayr, R. Heimgärtner, and A. Holzinger, “Influence of Organizational Culture and Communication on the Successful Implementation of Information Technology in Hospitals“, in Cross-Cultural Design. Cultural Differences in Everyday Life, Lecture Notes in Computer Science, LNCS 8024, P. P. L. Rau, Ed., Heidelberg, Berlin: Springer, 2013, pp. 165-174.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    In this paper, we report on a case study examining types of organizational culture influencing communication as an important factor in the study of successful IT adoption and implementation in health care. We observed a hospital organization and focused on technological innovations and the accompanying communication factors in the successful implementation of IT. The results demonstrate the importance of the organizational culture as an important factor in establishing well-balanced communication as a primary influence factor in the implementation of new technologies. Based on theoretical and empirical insights, we propose a model describing the relationship of organizational culture, communication, and the level of success in the implementation and adaptation of new IT systems in hospitals.

    @incollection{c108,
       author = {Xie, Shuyan and Helfert, Markus and Lugmayr, Artur and Heimgärtner, Rüdiger and Holzinger, Andreas},
       title = {Influence of Organizational Culture and Communication on the Successful Implementation of Information Technology in Hospitals},
       booktitle = {Cross-Cultural Design. Cultural Differences in Everyday Life, Lecture Notes in Computer Science, LNCS 8024},
       editor = {Rau, P. L. Patrick},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       pages = {165-174},
       year = {2013},
       abstract = {In this paper, we report on a case study examining types of organizational culture influencing communication as an important factor in the study of successful IT adoption and implementation in health care. We observed a hospital organization and focused on technological innovations and the accompanying communication factors in the successful implementation of IT. The results demonstrate the importance of the organizational culture as an important factor in establishing well-balanced communication as a primary influence factor in the implementation of new technologies. Based on theoretical and empirical insights, we propose a model describing the relationship of organizational culture, communication, and the level of success in the implementation and adaptation of new IT systems in hospitals.},
       doi = {10.1007/978-3-642-39137-8_19},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=985275&pCurrPk=70690}
    }

  • [j35] B. Taraghi, M. Grossegger, M. Ebner, and A. Holzinger, “Web Analytics of user path tracing and a novel algorithm for generating recommendations in Open Journal Systems“, Online Information Review, vol. 37, iss. 5, pp. 672-691, 2013.
    [BibTeX] [DOI] [Download PDF]
    @article{j35,
       author = {Taraghi, Behnam and Grossegger, Martin and Ebner, Martin and Holzinger, Andreas},
       title = {Web Analytics of user path tracing and a novel algorithm for generating recommendations in Open Journal Systems},
       journal = {Online Information Review},
       volume = {37},
       number = {5},
       pages = {672-691},
       year = {2013},
       doi = {10.1108/OIR-09-2012-0152},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=978079&pCurrPk=72720}
    }

  • [c104] G. Petz, M. Karpowicz, H. Fürschuß, A. Auinger, V. Stříteský, and A. Holzinger, “Opinion Mining on the Web 2.0 – Characteristics of User Generated Content and Their Impacts“, in Lecture Notes in Computer Science LNCS 7947, Heidelberg, Berlin: Springer, 2013, pp. 35-46.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The field of opinion mining provides a multitude of methods and techniques to be utilized to find, extract and analyze subjective information, such as that found on social media channels. Because of the differences between these channels as well as their unique characteristics, not all approaches are suitable for each source; there is no “one-size-fits-all” approach. This paper aims at identifying and determining these differences and characteristics by performing an empirical analysis as a basis for a discussion of which opinion mining approach is applicable to which social media channel.

    @incollection{c104,
       author = {Petz, Gerald and Karpowicz, Michał and Fürschuß, Harald and Auinger, Andreas and Stříteský, Václav and Holzinger, Andreas},
       title = {Opinion Mining on the Web 2.0 – Characteristics of User Generated Content and Their Impacts},
       booktitle = {Lecture Notes in Computer Science LNCS 7947},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       pages = {35-46},
       year = {2013},
       abstract = {The field of opinion mining provides a multitude of methods and techniques to be utilized to find, extract and analyze subjective information, such as that found on social media channels. Because of the differences between these channels as well as their unique characteristics, not all approaches are suitable for each source; there is no “one-size-fits-all” approach. This paper aims at identifying and determining these differences and characteristics by performing an empirical analysis as a basis for a discussion of which opinion mining approach is applicable to which social media channel.},
       doi = {10.1007/978-3-642-39146-0_4},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=336152&pCurrPk=71378}
    }

  • [c105] B. Peischl, M. Ferk, and A. Holzinger, On the Success Factors for Mobile Data Acquisition in Healthcare, British Computer Society, 2013.
    [BibTeX]
    @misc{c105,
       author = {Peischl, Bernhard and Ferk, Michaela and Holzinger, Andreas},
       title = {On the Success Factors for Mobile Data Acquisition in Healthcare},
       publisher = {British Computer Society},
       year = {2013},
    }

  • [c106] B. Peischl, M. Ferk, and A. Holzinger, Integrating User-Centred Design in an Early Stage of Mobile Medical Application Prototyping: A case study on Data Acquisition in Health Organisations, 2013.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    This paper reports on collaborative work with an SME, developing a system for data acquisition in health care organisations, providing mobile data support. We briefly introduce the ICF and the ICD classification scheme from the WHO as a foundation for our mobile application. A two-staged usability evaluation in a very early stage of development allows us to integrate user-centred design in the mobile application development process. Our procedure comprises interviews and usability tests with a limited number of users and thus can even be performed within a resource-constrained setting as it is typically found in smaller software development teams. We discuss the consolidated results of the usability tests quantitatively and qualitatively. From these results we deduce recommendations (and open issues) concerning the user interface design of the mobile application.

    @misc{c106,
       author = {Peischl, Bernhard and Ferk, Michaela and Holzinger, Andreas},
       title = {Integrating User-Centred Design in an Early Stage of Mobile Medical Application Prototyping: A case study on Data Acquisition in Health Organisations},
       pages = {185-195},
       abstract = {This paper reports on collaborative work with an SME, developing a system for data acquisition in health care organisations, providing mobile data support. We briefly introduce the ICF and the ICD classification scheme from the WHO as a foundation for our mobile application. A two-staged usability evaluation in a very early stage of development allows us to integrate user-centred design in the mobile application development process. Our procedure comprises interviews and usability tests with a limited number of users and thus can even be performed within a resource-constrained setting as it is typically found in smaller software development teams. We discuss the consolidated results of the usability tests quantitatively and qualitatively. From these results we deduce recommendations (and open issues) concerning the user interface design of the mobile application. },
       keywords = {Biomedical Informatics, software engineering},
       year = {2013},
       doi = {10.5220/0004493901850195},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=985282&pCurrPk=83220}
    }

  • [c110] F. Jeanquartier and A. Holzinger, “On Visual Analytics And Evaluation In Cell Physiology: A Case Study“, in Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 8127, A. Cuzzocrea, C. Kittl, D. E. Simos, E. Weippl, and L. Xu, Eds., Heidelberg, Berlin: Springer, 2013, pp. 495-502.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     In this paper we present a case study on a visual analytics (VA) process on the example of cell physiology. Following the model of Keim, we illustrate the steps required within an exploration and sense-making process. Moreover, we demonstrate the applicability of this model and show several shortcomings in the analysis tools’ functionality and usability. The case study highlights the need for conducting evaluation and improvements in VA in the domain of biomedical science. The main issue is the absence of a complete toolset that supports all analysis tasks including the many steps of data preprocessing as well as end-user development. Another important issue is to enable collaboration by creating the possibility of evaluating and validating datasets and comparing them with data from other, similar research groups.

    @incollection{c110,
       author = {Jeanquartier, Fleur and Holzinger, Andreas},
       title = {On Visual Analytics And Evaluation In Cell Physiology: A Case Study},
       booktitle = {Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 8127},
       editor = {Cuzzocrea, Alfredo and Kittl, Christian and Simos, Dimitris E. and Weippl, Edgar and Xu, Lida},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       pages = {495-502},
       year = {2013},
       abstract = {In this paper we present a case study on a visual analytics (VA) process on the example of cell physiology. Following the model of Keim, we illustrate the steps required within an exploration and sense-making process. Moreover, we demonstrate the applicability of this model and show several shortcomings in the analysis tools’ functionality and usability. The case study highlights the need for conducting evaluation and improvements in VA in the domain of biomedical science. The main issue is the absence of a complete toolset that supports all analysis tasks including the many steps of data preprocessing as well as end-user development. Another important issue is to enable collaboration by creating the possibility of evaluating and validating datasets and comparing them with data from other, similar research groups.},
       doi = {10.1007/978-3-642-40511-2_36},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=382976&pCurrPk=71926}
    }

  • [j38] A. Holzinger and M. Zupan, “KNODWAT: A scientific framework application for testing knowledge discovery methods for the biomedical domain“, BMC Bioinformatics, vol. 14, iss. 1, p. 191, 2013.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     BACKGROUND: Professionals in the biomedical domain are confronted with an increasing mass of data. Developing methods to assist professional end users in the field of Knowledge Discovery to identify, extract, visualize and understand useful information from these huge amounts of data is a major challenge. However, there are so many diverse methods and methodologies available that, for biomedical researchers who are inexperienced in the use of even relatively popular knowledge discovery methods, it can be very difficult to select the most appropriate method for their particular research problem. RESULTS: A web application, called KNODWAT (KNOwledge Discovery With Advanced Techniques), has been developed using Java on the Spring framework 3.1 and following a user-centered approach. The software runs on Java 1.6 and above and requires a web server such as Apache Tomcat and a database server such as the MySQL Server. For frontend functionality and styling, Twitter Bootstrap was used, as well as jQuery for interactive user interface operations. CONCLUSIONS: The framework presented is user-centric, highly extensible and flexible. Since it enables methods for testing using existing data to assess suitability and performance, it is especially suitable for inexperienced biomedical researchers, new to the field of knowledge discovery and data mining. For testing purposes two algorithms, CART and C4.5, were implemented using the WEKA data mining framework.

    @article{j38,
       author = {Holzinger, Andreas and Zupan, Mario},
       title = {KNODWAT: A scientific framework application for testing knowledge discovery methods for the biomedical domain},
       journal = {BMC Bioinformatics},
       volume = {14},
       number = {1},
       pages = {191},
       year = {2013},
       abstract = {BACKGROUND: Professionals in the biomedical domain are confronted with an increasing mass of data. Developing methods to assist professional end users in the field of Knowledge Discovery to identify, extract, visualize and understand useful information from these huge amounts of data is a major challenge. However, there are so many diverse methods and methodologies available that, for biomedical researchers who are inexperienced in the use of even relatively popular knowledge discovery methods, it can be very difficult to select the most appropriate method for their particular research problem. RESULTS: A web application, called KNODWAT (KNOwledge Discovery With Advanced Techniques), has been developed using Java on the Spring framework 3.1 and following a user-centered approach. The software runs on Java 1.6 and above and requires a web server such as Apache Tomcat and a database server such as the MySQL Server. For frontend functionality and styling, Twitter Bootstrap was used, as well as jQuery for interactive user interface operations. CONCLUSIONS: The framework presented is user-centric, highly extensible and flexible. Since it enables methods for testing using existing data to assess suitability and performance, it is especially suitable for inexperienced biomedical researchers, new to the field of knowledge discovery and data mining. For testing purposes two algorithms, CART and C4.5, were implemented using the WEKA data mining framework.},
       doi = {10.1186/1471-2105-14-191},
       url = {http://www.biomedcentral.com/1471-2105/14/191}
    }
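
     As an aside on the two learners named in this abstract: KNODWAT wraps WEKA’s Java implementations of CART and C4.5. For readers who want a quick feel for what CART-style decision-tree induction does, here is a language-swapped sketch using scikit-learn. This is an analogy for illustration only, not KNODWAT code; the dataset and parameters are placeholders.

         # Analogy only: CART-style decision-tree induction in scikit-learn.
         # KNODWAT itself wraps WEKA's Java implementations of CART and C4.5.
         from sklearn.datasets import load_breast_cancer
         from sklearn.model_selection import train_test_split
         from sklearn.tree import DecisionTreeClassifier

         X, y = load_breast_cancer(return_X_y=True)    # small biomedical dataset
         X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

         clf = DecisionTreeClassifier(criterion="gini", random_state=0)  # CART splits on Gini impurity
         clf.fit(X_tr, y_tr)
         print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")

     The closest scikit-learn stand-in for C4.5 is the same class with criterion="entropy" (information gain); a faithful C4.5 port is not part of scikit-learn.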

  • [e11] A. Holzinger, M. Ziefle, M. Hitz, and M. Debevc, Human Factors in Computing and Informatics, Lecture Notes in Computer Science, LNCS 7946, Heidelberg, Berlin: Springer, 2013.
    [BibTeX] [DOI] [Download PDF]
    @book{e11,
       author = {Holzinger, Andreas and Ziefle, Martina and Hitz, Martin and Debevc, Matjaž},
       title = {Human Factors in Computing and Informatics, Lecture Notes in Computer Science, LNCS 7946},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       year = {2013},
       doi = {10.1007/978-3-642-39062-3},
       url = {http://link.springer.com/book/10.1007%2F978-3-642-39062-3}
    }

  • [p1] A. Holzinger, P. Yildirim, M. Geier, and K. Simonic, “Quality-Based Knowledge Discovery from Medical Text on the Web“, in Quality Issues in the Management of Web Information, Intelligent Systems Reference Library, ISRL 50, G. Pasi, G. Bordogna, and L. C. Jain, Eds., Berlin Heidelberg: Springer, 2013, pp. 145-158.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     The MEDLINE database (Medical Literature Analysis and Retrieval System Online) contains an enormously increasing volume of biomedical articles. Consequently, there is a need for techniques which enable the quality-based discovery, the extraction, the integration and the use of hidden knowledge in those articles. Text mining helps to cope with the interpretation of these large volumes of data. Co-occurrence analysis is a technique applied in text mining. Statistical models are used to evaluate the significance of the relationship between entities such as disease names, drug names, and keywords in titles, abstracts or even entire publications. In this paper we present a selection of quality-oriented Web-based tools for analyzing biomedical literature, and specifically discuss PolySearch, FACTA and Kleio. Finally we discuss Pointwise Mutual Information (PMI), which is a measure to discover the strength of a relationship. PMI provides an indication of how much more often the query and concept co-occur than expected by chance. The results reveal hidden knowledge in articles regarding rheumatic diseases indexed by MEDLINE, thereby exposing relationships that can provide important additional information for medical experts and researchers for medical decision-making and quality enhancement.

    @incollection{p1,
       author = {Holzinger, Andreas and Yildirim, Pinar and Geier, Michael and Simonic, Klaus-Martin},
       title = {Quality-Based Knowledge Discovery from Medical Text on the Web},
       booktitle = {Quality Issues in the Management of Web Information, Intelligent Systems Reference Library, ISRL 50},
       editor = {Pasi, Gabriella and Bordogna, Gloria and Jain, Lakhmi C.},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       pages = {145-158},
       year = {2013},
       abstract = {The MEDLINE database (Medical Literature Analysis and Retrieval System Online) contains an enormously increasing volume of biomedical articles. Consequently, there is a need for techniques which enable the quality-based discovery, the extraction, the integration and the use of hidden knowledge in those articles. Text mining helps to cope with the interpretation of these large volumes of data. Co-occurrence analysis is a technique applied in text mining. Statistical models are used to evaluate the significance of the relationship between entities such as disease names, drug names, and keywords in titles, abstracts or even entire publications. In this paper we present a selection of quality-oriented Web-based tools for analyzing biomedical literature, and specifically discuss PolySearch, FACTA and Kleio. Finally we discuss Pointwise Mutual Information (PMI), which is a measure to discover the strength of a relationship. PMI provides an indication of how much more often the query and concept co-occur than expected by chance. The results reveal hidden knowledge in articles regarding rheumatic diseases indexed by MEDLINE, thereby exposing relationships that can provide important additional information for medical experts and researchers for medical decision-making and quality enhancement.},
       doi = {10.1007/978-3-642-37688-7_7},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=301679&pCurrPk=64537}
    }
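
     For reference, the Pointwise Mutual Information measure discussed above has the standard textbook definition (the chapter’s own notation may differ):

         \mathrm{PMI}(x, y) = \log_2 \frac{p(x, y)}{p(x)\, p(y)}

     where p(x, y) is the probability that query x and concept y co-occur and p(x), p(y) are their marginal probabilities. A value above zero means the pair co-occurs more often than expected under independence, which is exactly the “strength of a relationship” reading used in the abstract.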

  • [c105] A. Holzinger, C. Stocker, B. Ofner, G. Prohaska, A. Brabenetz, and R. Hofmann-Wellenhof, “Combining HCI, Natural Language Processing, and Knowledge Discovery – Potential of IBM Content Analytics as an assistive technology in the biomedical domain“, in Springer Lecture Notes in Computer Science LNCS 7947, Heidelberg, Berlin, New York: Springer, 2013, pp. 13-24.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     Medical professionals are confronted with a flood of big data, most of it containing unstructured information. Such unstructured information is the subset of information where the information itself describes parts of what constitutes significance within it, or in other words – structure and information are not completely separable. The best example for such unstructured information is text. For many years, text mining has been an essential area of medical informatics. Although text can easily be created by medical professionals, the support of automatic analyses for knowledge discovery is extremely difficult. We follow the definition that knowledge consists of a set of hypotheses, and knowledge discovery is the process of finding or generating new hypotheses by medical professionals with the aim of getting insight into the data. In this paper we present, for the first time, some lessons learned from using IBM Content Analytics (ICA) for dermatological knowledge discovery. We follow the HCI-KDD approach, i.e. with the human expert in the loop, matching the best of two worlds: human intelligence with computational intelligence.

    @incollection{c105,
       author = {Holzinger, Andreas and Stocker, Christof and Ofner, Bernhard and Prohaska, Gottfried and Brabenetz, Alberto and Hofmann-Wellenhof, Rainer},
       title = {Combining HCI, Natural Language Processing, and Knowledge Discovery - Potential of IBM Content Analytics as an assistive technology in the biomedical domain},
       booktitle = {Springer Lecture Notes in Computer Science LNCS 7947},
       publisher = {Springer},
       address = {Heidelberg, Berlin, New York},
       pages = {13-24},
       year = {2013},
       abstract = {Medical professionals are confronted with a flood of big data, most of it containing unstructured information. Such unstructured information is the subset of information where the information itself describes parts of what constitutes significance within it, or in other words - structure and information are not completely separable. The best example for such unstructured information is text. For many years, text mining has been an essential area of medical informatics. Although text can easily be created by medical professionals, the support of automatic analyses for knowledge discovery is extremely difficult. We follow the definition that knowledge consists of a set of hypotheses, and knowledge discovery is the process of finding or generating new hypotheses by medical professionals with the aim of getting insight into the data. In this paper we present, for the first time, some lessons learned from using IBM Content Analytics (ICA) for dermatological knowledge discovery. We follow the HCI-KDD approach, i.e. with the human expert in the loop, matching the best of two worlds: human intelligence with computational intelligence.},
       doi = {10.1007/978-3-642-39146-0_2},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=319096&pCurrPk=71082}
    }

  • [e10] A. Holzinger and G. Pasi, Human-Computer Interaction and Knowledge Discovery in Complex, Unstructured, Big Data, Lecture Notes in Computer Science, LNCS 7947, Heidelberg, Berlin: Springer, 2013.
    [BibTeX] [DOI] [Download PDF]
    @book{e10,
       author = {Holzinger, Andreas and Pasi, Gabriella},
       title = {Human-Computer Interaction and Knowledge Discovery in Complex, Unstructured, Big Data, Lecture Notes in Computer Science, LNCS 7947},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       year = {2013},
       doi = {10.1007/978-3-642-39146-0},
       url = {http://link.springer.com/book/10.1007/978-3-642-39146-0/page/1}
    }

  • [c113] A. Holzinger, B. Ofner, C. Stocker, A. C. Valdez, A. K. Schaar, M. Ziefle, and M. Dehmer, “On Graph Entropy Measures for Knowledge Discovery from Publication Network Data“, in Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 8127, A. Cuzzocrea, C. Kittl, D. E. Simos, E. Weippl, and L. Xu, Eds., Heidelberg, Berlin: Springer, 2013, pp. 354-362.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Many research problems are extremely complex, making interdisciplinary knowledge a necessity; consequently cooperative work in mixed teams is a common and increasing research procedure. In this paper, we evaluated information-theoretic network measures on publication networks. For the experiments described in this paper we used the network of excellence from the RWTH Aachen University, described in [1]. Those measures can be understood as graph complexity measures, which evaluate the structural complexity based on the corresponding concept. We see that it is challenging to generalize such results towards different measures as every measure captures structural information differently and, hence, leads to a different entropy value. This calls for exploring the structural interpretation of a graph measure [2] which has been a challenging problem.

    @incollection{c113,
       author = {Holzinger, Andreas and Ofner, Bernhard and Stocker, Christof and Valdez, Andre Calero and Schaar, Anne Kathrin and Ziefle, Martina and Dehmer, Matthias},
       title = {On Graph Entropy Measures for Knowledge Discovery from Publication Network Data},
       booktitle = {Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 8127},
       editor = {Cuzzocrea, Alfredo and Kittl, Christian and Simos, Dimitris E. and Weippl, Edgar and Xu, Lida},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       pages = {354-362},
       year = {2013},
       abstract = {Many research problems are extremely complex, making interdisciplinary knowledge a necessity; consequently cooperative work in mixed teams is a common and increasing research procedure. In this paper, we evaluated information-theoretic network measures on publication networks. For the experiments described in this paper we used the network of excellence from the RWTH Aachen University, described in [1]. Those measures can be understood as graph complexity measures, which evaluate the structural complexity based on the corresponding concept. We see that it is challenging to generalize such results towards different measures as every measure captures structural information differently and, hence, leads to a different entropy value. This calls for exploring the structural interpretation of a graph measure [2] which has been a challenging problem.},
       doi = {10.1007/978-3-642-40511-2_25},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=381912&pCurrPk=71843}
    }
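
     To make the notion of an information-theoretic network measure concrete, the following minimal sketch computes the Shannon entropy of a graph’s degree distribution, one simple member of the family of graph entropy measures discussed above. It is a hypothetical illustration (the paper evaluates several, more refined measures) and assumes the third-party package networkx.

         # Minimal sketch: Shannon entropy of a graph's degree distribution,
         # one simple information-theoretic network measure. Illustration
         # only; not the specific measures evaluated in the paper.
         import math
         from collections import Counter

         import networkx as nx

         def degree_entropy(G: nx.Graph) -> float:
             """Shannon entropy (in bits) of the degree distribution of G."""
             degrees = [d for _, d in G.degree()]
             counts = Counter(degrees)
             n = len(degrees)
             return -sum((c / n) * math.log2(c / n) for c in counts.values())

         # Toy publication network: nodes are authors, edges co-authorships.
         G = nx.Graph()
         G.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")])
         print(f"degree entropy: {degree_entropy(G):.3f} bits")   # -> 1.500 bits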

  • [c111] A. Holzinger, M. Bruschi, and W. Eder, “On Interactive Data Visualization of Physiological Low-Cost-Sensor Data with Focus on Mental Stress“, in Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 8127, A. Cuzzocrea, C. Kittl, D. E. Simos, E. Weippl, and L. Xu, Eds., Heidelberg, Berlin: Springer, 2013, pp. 469-480.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     Emotions are important mental and physiological states influencing perception and cognition and have been a topic of interest in Human-Computer Interaction (HCI) for some time. Popular examples include stress detection or affective computing. The use of emotional effects for various applications in decision support systems is of increasing interest. Emotional and affective states represent very personal data and could be used for burn-out prevention. In this paper we report on first results and experiences of our EMOMES project, where the goal was to design and develop an end-user centered mobile software for interactive visualization of physiological data. Our solution was a star-plot visualization, which has been tested with data from N=50 managers (aged 25-55) taken during a burn-out prevention seminar. The results demonstrate that the leading psychologist could obtain insight into the data appropriately, thereby providing support in the prevention of stress and burnout syndromes.

    @incollection{c111,
       author = {Holzinger, Andreas and Bruschi, Manuel and Eder, Wolfgang},
       title = {On Interactive Data Visualization of Physiological Low-Cost-Sensor Data with Focus on Mental Stress},
       booktitle = {Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 8127},
       editor = {Cuzzocrea, Alfredo and Kittl, Christian and Simos, Dimitris E. and Weippl, Edgar and Xu, Lida},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       pages = {469-480},
       year = {2013},
       abstract = {Emotions are important mental and physiological states influencing perception and cognition and have been a topic of interest in Human-Computer Interaction (HCI) for some time. Popular examples include stress detection or affective computing. The use of emotional effects for various applications in decision support systems is of increasing interest. Emotional and affective states represent very personal data and could be used for burn-out prevention. In this paper we report on first results and experiences of our EMOMES project, where the goal was to design and develop an end-user centered mobile software for interactive visualization of physiological data. Our solution was a star-plot visualization, which has been tested with data from N=50 managers (aged 25-55) taken during a burn-out prevention seminar. The results demonstrate that the leading psychologist could obtain insight into the data appropriately, thereby providing support in the prevention of stress and burnout syndromes.},
       doi = {10.1007/978-3-642-40511-2_34},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=411691&pCurrPk=71928}
    }
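
     A star plot of the kind described above is straightforward to sketch with matplotlib’s polar axes; the feature names and values below are invented for illustration and are not taken from the EMOMES project.

         # Hypothetical star-plot (radar chart) of physiological features,
         # in the spirit of the visualization described in the paper.
         import numpy as np
         import matplotlib.pyplot as plt

         features = ["heart rate", "HRV", "skin conductance", "respiration", "temperature"]
         values = [0.8, 0.4, 0.7, 0.5, 0.3]            # normalized to [0, 1]

         angles = np.linspace(0, 2 * np.pi, len(features), endpoint=False).tolist()
         values_closed = values + values[:1]           # close the polygon
         angles_closed = angles + angles[:1]

         ax = plt.subplot(111, polar=True)
         ax.plot(angles_closed, values_closed, linewidth=1.5)
         ax.fill(angles_closed, values_closed, alpha=0.25)
         ax.set_xticks(angles)
         ax.set_xticklabels(features)
         ax.set_ylim(0, 1)
         plt.show()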

  • [c114] A. Holzinger, “Human–Computer Interaction and Knowledge Discovery (HCI-KDD): What is the benefit of bringing those two fields to work together?“, in Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 8127, A. Cuzzocrea, C. Kittl, D. E. Simos, E. Weippl, and L. Xu, Eds., Heidelberg, Berlin, New York: Springer, 2013, pp. 319-328.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     A major challenge in our networked world is the increasing amount of data, which requires efficient and user-friendly solutions. A timely example is the biomedical domain: the trend towards personalized medicine has resulted in a sheer mass of generated (-omics) data. In the life sciences domain, most data models are characterized by complexity, which makes manual analysis very time-consuming and frequently practically impossible. Computational methods may help; however, we must acknowledge that the problem-solving knowledge is located in the human mind – not in machines. A strategic aim to find solutions for data intensive problems could lie in the combination of two areas, which bring ideal pre-conditions: Human–Computer Interaction (HCI) and Knowledge Discovery (KDD). HCI deals with questions of human perception, cognition, intelligence, decision-making and interactive techniques of visualization, so it centers mainly on supervised methods. KDD deals mainly with questions of machine intelligence and data mining, in particular with the development of scalable algorithms for finding previously unknown relationships in data, thus centers on automatic computational methods. A proverb, perhaps incorrectly attributed to Albert Einstein, illustrates this perfectly: “Computers are incredibly fast, accurate, but stupid. Humans are incredibly slow, inaccurate, but brilliant. Together they may be powerful beyond imagination”. Consequently, a novel approach is to combine HCI and KDD in order to enhance human intelligence by computational intelligence.

    @incollection{c114,
       author = {Holzinger, Andreas},
       title = {Human–Computer Interaction and Knowledge Discovery (HCI-KDD): What is the benefit of bringing those two fields to work together?},
       booktitle = {Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 8127},
       editor = {Cuzzocrea, Alfredo and Kittl, Christian and Simos, Dimitris E. and Weippl, Edgar and Xu, Lida},
       publisher = {Springer},
       address = {Heidelberg, Berlin, New York},
       pages = {319-328},
       year = {2013},
       abstract = {A major challenge in our networked world is the increasing amount of data, which requires efficient and user-friendly solutions. A timely example is the biomedical domain: the trend towards personalized medicine has resulted in a sheer mass of generated (-omics) data. In the life sciences domain, most data models are characterized by complexity, which makes manual analysis very time-consuming and frequently practically impossible. Computational methods may help; however, we must acknowledge that the problem-solving knowledge is located in the human mind – not in machines. A strategic aim to find solutions for data intensive problems could lie in the combination of two areas, which bring ideal pre-conditions: Human–Computer Interaction (HCI) and Knowledge Discovery (KDD). HCI deals with questions of human perception, cognition, intelligence, decision-making and interactive techniques of visualization, so it centers mainly on supervised methods. KDD deals mainly with questions of machine intelligence and data mining, in particular with the development of scalable algorithms for finding previously unknown relationships in data, thus centers on automatic computational methods. A proverb, perhaps incorrectly attributed to Albert Einstein, illustrates this perfectly: “Computers are incredibly fast, accurate, but stupid. Humans are incredibly slow, inaccurate, but brilliant. Together they may be powerful beyond imagination”. Consequently, a novel approach is to combine HCI and KDD in order to enhance human intelligence by computational intelligence.},
       doi = {10.1007/978-3-642-40511-2_22},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=382991&pCurrPk=72064}
    }

  • [c115] S. Himmel, M. Ziefle, C. Lidynia, and A. Holzinger, “Older Users’ Wish List for Technology Attributes“, in Availability, Reliability, and Security in Information Systems and HCI, Lecture Notes in Computer Science LNCS 8127, Heidelberg, Berlin: Springer, 2013, pp. 16-27.
    [BibTeX] [DOI] [Download PDF]
    @incollection{c115,
       author = {Himmel, Simon and Ziefle, Martina and Lidynia, Chantal and Holzinger, Andreas},
       title = {Older Users’ Wish List for Technology Attributes},
       booktitle = {Availability, Reliability, and Security in Information Systems and HCI, Lecture Notes in Computer Science LNCS 8127},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       pages = {16-27},
       year = {2013},
       doi = {10.1007/978-3-642-40511-2_2},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=978083&pCurrPk=83056}
    }

  • [j37] B. Hametner, S. Wassertheurer, J. Kropf, C. Mayer, A. Holzinger, B. Eber, and T. Weber, “Wave reflection quantification based on pressure waveforms alone—Methods, comparison, and clinical covariates“, Computer Methods and Programs in Biomedicine, vol. 109, iss. 3, pp. 250-259, 2013.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     Within the last decade the quantification of pulse wave reflections mainly focused on measures of central aortic systolic pressure and its augmentation through reflections based on pulse wave analysis (PWA). A complementary approach is the wave separation analysis (WSA), which quantifies the total amount of arterial wave reflection considering both aortic pulse and flow waves. The aim of this work is the introduction and comparison of aortic blood flow models for WSA assessment. To evaluate the performance of the proposed modeling approaches (Windkessel, triangular and averaged flow), comparisons against Doppler measurements are made for 148 patients with preserved ejection fraction. Stepwise regression analysis between WSA and PWA parameters is performed to provide determinants of methodological differences. Against Doppler measurements, mean difference and standard deviation of the amplitudes of the decomposed forward and backward pressure waves are comparable for the Windkessel and averaged flow models. Stepwise regression analysis shows similar determinants between Doppler and the Windkessel model only. The results indicate that the Windkessel method provides accurate estimates of wave reflection in subjects with preserved ejection fraction. The comparison with waveforms derived from Doppler ultrasound as well as recently proposed simple triangular and averaged flow waves showed that this approach may reduce variability and provide realistic results.

    @article{j37,
       author = {Hametner, Bernhard and Wassertheurer, Siegfried and Kropf, Johannes and Mayer, Christopher and Holzinger, Andreas and Eber, Bernd and Weber, Thomas},
       title = {Wave reflection quantification based on pressure waveforms alone—Methods, comparison, and clinical covariates},
       journal = {Computer Methods and Programs in Biomedicine},
       volume = {109},
       number = {3},
       pages = {250-259},
       year = {2013},
       abstract = {Within the last decade the quantification of pulse wave reflections mainly focused on measures of central aortic systolic pressure and its augmentation through reflections based on pulse wave analysis (PWA). A complementary approach is the wave separation analysis (WSA), which quantifies the total amount of arterial wave reflection considering both aortic pulse and flow waves. The aim of this work is the introduction and comparison of aortic blood flow models for WSA assessment. To evaluate the performance of the proposed modeling approaches (Windkessel, triangular and averaged flow), comparisons against Doppler measurements are made for 148 patients with preserved ejection fraction. Stepwise regression analysis between WSA and PWA parameters is performed to provide determinants of methodological differences. Against Doppler measurements, mean difference and standard deviation of the amplitudes of the decomposed forward and backward pressure waves are comparable for the Windkessel and averaged flow models. Stepwise regression analysis shows similar determinants between Doppler and the Windkessel model only. The results indicate that the Windkessel method provides accurate estimates of wave reflection in subjects with preserved ejection fraction. The comparison with waveforms derived from Doppler ultrasound as well as recently proposed simple triangular and averaged flow waves showed that this approach may reduce variability and provide realistic results.},
       doi = {10.1016/j.cmpb.2012.10.005},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=304382&pCurrPk=67133}
    }
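
     For readers unfamiliar with wave separation analysis, the classical linear decomposition (standard in the pulse-wave literature; the notation here is generic, not necessarily the paper’s) splits measured aortic pressure P and flow Q into forward and backward components via the characteristic impedance Z_c:

         P_f = \frac{P + Z_c\, Q}{2}, \qquad P_b = \frac{P - Z_c\, Q}{2}

     The flow models compared in the paper (Windkessel, triangular, averaged) differ in how Q is supplied when no measured flow waveform is available, which is what permits WSA “based on pressure waveforms alone”.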

  • [c109] M. Ebner, J. Wachtler, and A. Holzinger, “Introducing an Information System for Successful Support of Selective Attention in Online Courses“, in Universal Access in Human-Computer Interaction. Applications and Services for Quality of Life, Lecture Notes in Computer Science LNCS 8011, C. Stephanidis and M. Antona, Eds., Berlin Heidelberg: Springer, 2013, pp. 153-162.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     Human learning processes depend strongly on the attention of each individual learner. Because of this, any measure that helps to increase students’ attention is of high importance. Until now, developments known as Audience-Response-Systems have been available only for face-to-face education, even for large audiences. In this publication we introduce a web-based information system which is also usable for online systems. Students’ attention is sustained through different forms of interaction during the live stream of a lecture. The evaluation pointed out that the system helps to increase the attention of each individual participant.

    @incollection{c109,
       author = {Ebner, Martin and Wachtler, Josef and Holzinger, Andreas},
       title = {Introducing an Information System for Successful Support of Selective Attention in Online Courses},
       booktitle = {Universal Access in Human-Computer Interaction. Applications and Services for Quality of Life, Lecture Notes in Computer Science LNCS 8011},
       editor = {Stephanidis, Constantine and Antona, Margherita},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       pages = {153-162},
       year = {2013},
       abstract = {Human learning processes depend strongly on the attention of each individual learner. Because of this, any measure that helps to increase students’ attention is of high importance. Until now, developments known as Audience-Response-Systems have been available only for face-to-face education, even for large audiences. In this publication we introduce a web-based information system which is also usable for online systems. Students’ attention is sustained through different forms of interaction during the live stream of a lecture. The evaluation pointed out that the system helps to increase the attention of each individual participant.},
       doi = {10.1007/978-3-642-39194-1_18},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=985280&pCurrPk=72165}
    }

  • [j36] M. Bloice, K. Simonic, and A. Holzinger, “On the usage of health records for the design of virtual patients: a systematic review“, BMC Medical Informatics and Decision Making, vol. 13, iss. 1, p. 103, 2013.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    BACKGROUND: The process of creating and designing Virtual Patients for teaching students of medicine is an expensive and time-consuming task. In order to explore potential methods of mitigating these costs, our group began exploring the possibility of creating Virtual Patients based on electronic health records. This review assesses the usage of electronic health records in the creation of interactive Virtual Patients for teaching clinical decision-making. METHODS: The PubMed database was accessed programmatically to find papers relating to Virtual Patients. The returned citations were classified and the relevant full text articles were reviewed to find Virtual Patient systems that used electronic health records to create learning modalities. RESULTS: A total of n=362 citations were found on PubMed and subsequently classified, of which n=28 full-text articles were reviewed. Few articles used unformatted electronic health records other than patient CT or MRI scans. The use of patient data, extracted from electronic health records or otherwise, is widespread. The use of unformatted electronic health records in their raw form is less frequent. Patient data use is broad and spans several areas, such as teaching, training, 3D visualisation, and assessment. CONCLUSIONS: Virtual Patients that are based on real patient data are widespread, yet the use of unformatted electronic health records, abundant in hospital information systems, is reported less often. The majority of teaching systems use reformatted patient data gathered from electronic health records, and do not use these electronic health records directly. Furthermore, many systems were found that used patient data in the form of CT or MRI scans. Much potential research exists regarding the use of unformatted electronic health records for the creation of Virtual Patients.

    @article{j36,
       author = {Bloice, Marcus and Simonic, Klaus-Martin and Holzinger, Andreas},
       title = {On the usage of health records for the design of virtual patients: a systematic review},
       journal = {BMC Medical Informatics and Decision Making},
       volume = {13},
       number = {1},
       pages = {103},
       year = {2013},
       abstract = {BACKGROUND: The process of creating and designing Virtual Patients for teaching students of medicine is an expensive and time-consuming task. In order to explore potential methods of mitigating these costs, our group began exploring the possibility of creating Virtual Patients based on electronic health records. This review assesses the usage of electronic health records in the creation of interactive Virtual Patients for teaching clinical decision-making. METHODS: The PubMed database was accessed programmatically to find papers relating to Virtual Patients. The returned citations were classified and the relevant full text articles were reviewed to find Virtual Patient systems that used electronic health records to create learning modalities. RESULTS: A total of n=362 citations were found on PubMed and subsequently classified, of which n=28 full-text articles were reviewed. Few articles used unformatted electronic health records other than patient CT or MRI scans. The use of patient data, extracted from electronic health records or otherwise, is widespread. The use of unformatted electronic health records in their raw form is less frequent. Patient data use is broad and spans several areas, such as teaching, training, 3D visualisation, and assessment. CONCLUSIONS: Virtual Patients that are based on real patient data are widespread, yet the use of unformatted electronic health records, abundant in hospital information systems, is reported less often. The majority of teaching systems use reformatted patient data gathered from electronic health records, and do not use these electronic health records directly. Furthermore, many systems were found that used patient data in the form of CT or MRI scans. Much potential research exists regarding the use of unformatted electronic health records for the creation of Virtual Patients.},
       doi = {10.1186/1472-6947-13-103},
       url = {http://www.biomedcentral.com/1472-6947/13/103}
    }
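
     The abstract notes that PubMed was accessed programmatically; a minimal sketch of such access through NCBI’s public E-utilities API is shown below. The query term is a placeholder, not the review’s actual search string.

         # Sketch: programmatic PubMed search via the NCBI E-utilities API.
         # The query term is a placeholder, not the review's search string.
         import requests

         ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
         params = {
             "db": "pubmed",
             "term": "virtual patient",   # placeholder query
             "retmode": "json",
             "retmax": 20,
         }
         resp = requests.get(ESEARCH, params=params, timeout=30)
         resp.raise_for_status()
         ids = resp.json()["esearchresult"]["idlist"]
         print(f"{len(ids)} PMIDs returned, e.g. {ids[:3]}")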

  • [c102] M. Belk, P. Germanakos, C. Fidas, A. Holzinger, and G. Samaras, “Towards the Personalization of CAPTCHA Mechanisms Based on Individual Differences in Cognitive Processing“, in Human Factors in Computing and Informatics, Lecture Notes in Computer Science, LNCS 7946, A. Holzinger, M. Ziefle, M. Hitz, and M. Debevc, Eds., Berlin Heidelberg: Springer, 2013, pp. 409-426.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     This paper studies the effect of individual differences on user performance related to text-recognition CAPTCHA challenges. In particular, a text-recognition CAPTCHA mechanism was deployed in a three-month user study to investigate the effect of individuals’ different cognitive processing abilities, targeting speed of processing, controlled attention and working memory capacity, on efficiency and effectiveness with regard to different levels of complexity in text-recognition CAPTCHA tasks. A total of 107 users interacted with CAPTCHA challenges between September and November 2012, indicating that the usability of CAPTCHA mechanisms may be supported by personalization techniques based on individual differences in cognitive processing.

    @incollection{c102,
       author = {Belk, Marios and Germanakos, Panagiotis and Fidas, Christos and Holzinger, Andreas and Samaras, George},
       title = {Towards the Personalization of CAPTCHA Mechanisms Based on Individual Differences in Cognitive Processing},
       booktitle = {Human Factors in Computing and Informatics, Lecture Notes in Computer Science, LNCS 7946},
       editor = {Holzinger, Andreas and Ziefle, Martina and Hitz, Martin and Debevc, Matjaž},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       pages = {409-426},
       year = {2013},
       abstract = {This paper studies the effect of individual differences on user performance related to text-recognition CAPTCHA challenges. In particular, a text-recognition CAPTCHA mechanism was deployed in a three-month user study to investigate the effect of individuals’ different cognitive processing abilities, targeting speed of processing, controlled attention and working memory capacity, on efficiency and effectiveness with regard to different levels of complexity in text-recognition CAPTCHA tasks. A total of 107 users interacted with CAPTCHA challenges between September and November 2012, indicating that the usability of CAPTCHA mechanisms may be supported by personalization techniques based on individual differences in cognitive processing.},
       doi = {10.1007/978-3-642-39062-3_26},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=985285&pCurrPk=71941}
    }

  • [c86] M. Bachler, C. Mayer, B. Hametner, S. Wassertheurer, and A. Holzinger, “Online and Offline Determination of QT and PR Interval and QRS Duration in Electrocardiography“, in Pervasive Computing and the Networked World, Lecture Notes in Computer Science LNCS 7719, Q. Zu, B. Hu, and A. Elçi, Eds., Berlin Heidelberg: Springer, 2013, pp. 1-15.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     Duration and dynamic changes of QT and PR intervals as well as QRS complexes of ECG measurements are well established parameters in monitoring and diagnosis of cardiac diseases. Since automated annotations show numerous advantages over manual methods, the aim was to develop an algorithm suitable for online (real time) and offline ECG analysis. In this work we present this algorithm, its verification and the development process. The algorithm detects R peaks based on the amplitude, the first derivative and local statistical characteristics of the signal. Classification is performed to distinguish premature ventricular contractions from normal heartbeats. To improve the accuracy of the subsequent detection of QRS complexes, P and T waves, templates are built for each class of heartbeats. Using a continuous integration system, the algorithm was automatically verified against PhysioNet databases and achieved a sensitivity of 98.2% and a positive predictive value of 98.7%, respectively.

    @incollection{c86,
       author = {Bachler, Martin and Mayer, Christopher and Hametner, Bernhard and Wassertheurer, Siegfried and Holzinger, Andreas},
       title = {Online and Offline Determination of QT and PR Interval and QRS Duration in Electrocardiography},
       booktitle = {Pervasive Computing and the Networked World, Lecture Notes in Computer Science LNCS 7719},
       editor = {Zu, Qiaohong and Hu, Bo and Elçi, Atilla},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       pages = {1-15},
       year = {2013},
       abstract = {Duration and dynamic changes of QT and PR intervals as well as QRS complexes of ECG measurements are well established parameters in monitoring and diagnosis of cardiac diseases. Since automated annotations show numerous advantages over manual methods, the aim was to develop an algorithm suitable for online (real time) and offline ECG analysis. In this work we present this algorithm, its verification and the development process. The algorithm detects R peaks based on the amplitude, the first derivative and local statistical characteristics of the signal. Classification is performed to distinguish premature ventricular contractions from normal heartbeats. To improve the accuracy of the subsequent detection of QRS complexes, P and T waves, templates are built for each class of heartbeats. Using a continuous integration system, the algorithm was automatically verified against PhysioNet databases and achieved a sensitivity of 98.2% and a positive predictive value of 98.7%, respectively.},
       doi = {10.1007/978-3-642-37015-1_1},
       url = {http://dx.doi.org/10.1007/978-3-642-37015-1_1}
    }
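
     The detection principle summarized above (amplitude, first derivative, and local signal statistics) can be illustrated with a deliberately simplified threshold detector. This is a generic sketch, not the paper’s verified algorithm; the threshold factors and refractory period are arbitrary assumptions.

         # Simplified illustration of R-peak detection from amplitude and
         # first derivative; NOT the paper's verified algorithm.
         import numpy as np

         def detect_r_peaks(ecg, fs, refractory_s=0.2):
             """Return sample indices of candidate R peaks in a 1-D ECG array."""
             diff = np.diff(ecg)
             amp_thr = ecg.mean() + 2.0 * ecg.std()    # amplitude criterion (assumed factor)
             slope_thr = 2.0 * np.abs(diff).mean()     # derivative criterion (assumed factor)
             refractory = int(refractory_s * fs)       # skip ~200 ms after each peak

             peaks, last = [], -refractory
             for i in range(1, len(ecg) - 1):
                 local_max = ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]
                 steep = abs(diff[i - 1]) > slope_thr or abs(diff[i]) > slope_thr
                 if local_max and steep and ecg[i] > amp_thr and i - last >= refractory:
                     peaks.append(i)
                     last = i
             return peaks

         # Usage on a synthetic signal: 5 s of baseline with unit spikes at 1 Hz.
         fs = 250.0
         sig = np.zeros(int(5 * fs))
         sig[::int(fs)] = 1.0
         print(detect_r_peaks(sig, fs))   # -> [250, 500, 750, 1000] (spike at index 0 is skipped)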

2012

  • [c83b] M. Ziefle, S. Himmel, and A. Holzinger, “How usage context shapes evaluation and adoption in different technologies“, in Advances in Usability Evaluation Part II, F. Rebelo and M. M. Soares, Eds., Boca Raton (FL): CRC Press, 2012, pp. 2812-2821.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     Although lots of technical devices are quite indispensable, the willingness to use the technologies is not necessarily given in all users – some devices are regarded as helpful, others are not perceived as trustworthy. Technical progress proceeds in every type of technology and new devices have to meet the needs of multiple users. Consequently, user diversity is a key research factor. This paper examines the effects of gender and age on technical interest in specific technological fields, their effects on purchasing criteria for car, medical and ICT and their influence on motivation for usage of these technologies. 92 respondents (age 21-80 years) participated in an exploratory survey, revealing that general interest and interest in specific technology branches is significantly influenced by both gender and age. While purchase criteria and motivation for usage differ with the technological context (automobile, medical, ICT), user diversity (gender, age) plays a minor role for adoption and evaluation criteria. (technology context)

    @incollection{c83b,
       author = {Ziefle, Martina and Himmel, Simon and Holzinger, Andreas},
       title = {How usage context shapes evaluation and adoption in different technologies},
       booktitle = {Advances in Usability Evaluation Part II},
       editor = {Rebelo, Francesco and Soares, Marcelo M.},
       publisher = {CRC Press},
       address = {Boca Raton (FL)},
       pages = {2812-2821},
       year = {2012},
       abstract = {Although lots of technical devices are quite indispensable, the willingness to use the technologies is not necessarily given in all users - some devices are regarded as helpful, others are not perceived as trustworthy. Technical progress proceeds in every type of technology and new devices have to meet the needs of multiple users. Consequently, user diversity is a key research factor. This paper examines the effects of gender and age on technical interest in specific technological fields, their effects on purchasing criteria for car, medical and ICT and their influence on motivation for usage of these technologies. 92 respondents (age 21-80 years) participated in an exploratory survey, revealing that general interest and interest in specific technology branches is significantly influenced by both gender and age. While purchase criteria and motivation for usage differ with the technological context (automobile, medical, ICT), user diversity (gender, age) plays a minor role for adoption and evaluation criteria. (technology context)},
       doi = {10.1201/b12324-25},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1004336&pCurrPk=84000}
    }

  • [c98] G. Petz, M. Karpowicz, H. Fürschuß, A. Auinger, S. Winkler, S. Schaller, and A. Holzinger, “On Text Preprocessing for Opinion Mining Outside of Laboratory Environments“, in Active Media Technology, Lecture Notes in Computer Science, LNCS 7669, R. Huang, A. Ghorbani, G. Pasi, T. Yamaguchi, N. Yen, and B. Jin, Eds., Berlin Heidelberg: Springer, 2012, pp. 618-629.
    [BibTeX] [Abstract] [DOI] [Download PDF]

     Opinion mining deals with scientific methods in order to find, extract and systematically analyze subjective information. When performing opinion mining to analyze content on the Web, challenges arise that usually do not occur in laboratory environments where prepared and preprocessed texts are used. This paper discusses preprocessing approaches that help to cope with the emerging problems of sentiment analysis in real-world situations. After outlining the identified shortcomings and presenting a general process model for opinion mining, promising solutions for language identification, content extraction and dealing with Internet slang are discussed.

    @incollection{c98,
       author = {Petz, Gerald and Karpowicz, Michał and Fürschuß, Harald and Auinger, Andreas and Winkler, Stephan M. and Schaller, Susanne and Holzinger, Andreas},
       title = {On Text Preprocessing for Opinion Mining Outside of Laboratory Environments},
       booktitle = {Active Media Technology, Lecture Notes in Computer Science, LNCS 7669},
       editor = {Huang, Runhe and Ghorbani, Ali A. and Pasi, Gabriella and Yamaguchi, Takahira and Yen, Neil Y. and Jin, Beijing},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       pages = {618-629},
       year = {2012},
       abstract = {Opinion mining deals with scientific methods in order to find, extract and systematically analyze subjective information. When performing opinion mining to analyze content on the Web, challenges arise that usually do not occur in laboratory environments where prepared and preprocessed texts are used. This paper discusses preprocessing approaches that help to cope with the emerging problems of sentiment analysis in real-world situations. After outlining the identified shortcomings and presenting a general process model for opinion mining, promising solutions for language identification, content extraction and dealing with Internet slang are discussed.},
       doi = {10.1007/978-3-642-35236-2_62},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=989400&pCurrPk=66883}
    }
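
     Two of the preprocessing steps named above, language identification and handling of Internet slang, can be sketched as follows. The slang table is a toy example, and langdetect is an assumed third-party dependency; neither reflects the toolchain actually used in the paper.

         # Sketch of two opinion-mining preprocessing steps: language
         # identification and Internet-slang normalization. Toy example.
         import re
         from langdetect import detect   # pip install langdetect (assumed dependency)

         SLANG = {"gr8": "great", "imho": "in my humble opinion", "u": "you"}

         def normalize_slang(text):
             """Replace known slang tokens, leaving all other text intact."""
             def repl(match):
                 token = match.group(0)
                 return SLANG.get(token.lower(), token)
             return re.sub(r"\b\w+\b", repl, text)

         post = "imho this phone is gr8, u should buy it"
         print(detect(post))            # e.g. 'en'
         print(normalize_slang(post))   # 'in my humble opinion this phone is great, you should buy it'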

  • [c88] B. Peischl, M. Ziefle, and A. Holzinger, “A Mobile Information System for Improved Navigation in Public Transport-User Centered Design, Development, Evaluation and e-Business Scenarios of a Mobile Roadmap Application“, in DCNET/ICE-B/OPTICS, 2012, pp. 217-221.
    [BibTeX] [Abstract] [Download PDF]

     End-user friendly interface design is of tremendous importance for the success of mobile applications which are of increasing interest in the e-Business area. In this paper, we present an empirical evaluation of a mobile information system for improving navigation of public transport. High air pollution and respiratory dust, along with other threats to environmental conditions in urban areas, make the use of the public transport system less and less a matter of choice. The central hypothesis of this study is that useful, useable and accessible navigation contributes towards making public transport systems more attractive. [Information Systems, Mobile Computing]

    @inproceedings{c88,
       year = {2012},
       author = {Peischl, Bernhard and Ziefle, Martina and Holzinger, Andreas},
       title = {A Mobile Information System for Improved Navigation in Public Transport-User Centered Design, Development, Evaluation and e-Business Scenarios of a Mobile Roadmap Application},
       booktitle = {DCNET/ICE-B/OPTICS},
       editor = {Obaidat, Mohammad S. and Sevillano, José Luis and Zhang, Zhaoyang and Marca, David A. and Sinderen, Marten van and Marzo, José-Luis and Nicopolitidis, Petros},
       pages = {217-221},
       abstract = {End-user friendly interface design is of tremendous importance for the success of mobile applications which are of increasing interest in the e-Business area. In this paper, we present an empirical evaluation of a mobile information system for improving navigation of public transport. High air pollution and respiratory dust, along with other threats to environmental conditions in urban areas, make the use of the public transport system less and less a matter of choice. The central hypothesis of this study is that useful, useable and accessible navigation contributes towards making public transport systems more attractive. [Information Systems, Mobile Computing]},
       keywords = {Information Systems, Mobile computing},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=258374&pCurrPk=65922}
    }

  • [c83] J. Novak, J. Ziegler, U. Hoppe, A. Holzinger, C. Heintze, and M. Böckle, “Mobile Anwendungen für Medizin und Gesundheit“, in Mensch & Computer Workshopband: interaktiv informiert – allgegenwärtig und allumfassend!?, H. Reiterer and O. Deussen, Eds., München: Oldenbourg Verlag, 2012, pp. 227-230.
    [BibTeX] [Abstract] [Download PDF]

     The goal of the workshop is to present and discuss innovative applications of mobile technologies in medicine. This covers both the “classical” areas of optimizing hospital workflows or providing mobile support for the electronic patient record, and the new possibilities opened up by the latest generation of mobile ubiquitous devices (tablets, smartphones, smart pens).

    @incollection{c83,
       author = {Novak, Jasminko and Ziegler, Jürgen and Hoppe, Ulrich and Holzinger, Andreas and Heintze, Christoph and Böckle, Martin},
       title = {Mobile Anwendungen für Medizin und Gesundheit},
       booktitle = {Mensch \& Computer Workshopband: interaktiv informiert – allgegenwärtig und allumfassend!?},
       editor = {Reiterer, H. and Deussen, O.},
       publisher = {Oldenbourg Verlag},
       address = {München},
       pages = {227-230},
       year = {2012},
       abstract = {Das Ziel des Workshops ist es innovative Anwendungen mobiler Technologien in der Medizin vorzustellen und zu diskutieren. Dies umfasst sowohl die „klassischen“ Bereiche der Optimierung von Krankenhausabläufen oder der mobilen Unterstützung der elektronischen Patientenakte als auch die neuen Einsatzmöglichkeiten, die sich mit der neuen Generation mobiler ubiquitärer Geräte eröffnen (Tablets, SmartPhones, SmartPens).},
       url = {http://dl.mensch-und-computer.de/handle/123456789/2965}
    }

  • [c92] D. Nedbal, A. Auinger, A. Hochmeier, and A. Holzinger, “A Systematic Success Factor Analysis in the Context of Enterprise 2.0: Results of an Exploratory Analysis Comprising Digital Immigrants and Digital Natives“, in E-Commerce and Web Technologies, Lecture Notes in Business Information Processing, LNBIP 123, C. Huemer and P. Lops, Eds., Heidelberg, Berlin: Springer, 2012, pp. 163-175.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Organizations are increasingly investing in social collaboration and communication platforms for integrated exchange of information within and between enterprises. These Enterprise 2.0 projects always have a deep impact on organizational and cultural changes and need a critical mass of user involvement across all different groups. Users that grew up in the digital age and use new forms of collaborative platforms within their daily activities are often more technologically adept and more willing to share information. This leads to a digital divide between Digital Natives and Digital Immigrants, which needs to be addressed within such projects. The main objective of this paper is to investigate the perceived differences in success factors for Enterprise 2.0 seen by Digital Natives and Digital Immigrants and its implications on the implementation of a process oriented methodology for Enterprise 2.0 projects.

    @incollection{c92,
       author = {Nedbal, Dietmar and Auinger, Andreas and Hochmeier, Alexander and Holzinger, Andreas},
       title = {A Systematic Success Factor Analysis in the Context of Enterprise 2.0: Results of an Exploratory Analysis Comprising Digital Immigrants and Digital Natives},
       booktitle = {E-Commerce and Web Technologies, Lecture Notes in Business Information Processing, LNBIP 123},
       editor = {Huemer, Christian and Lops, Pasquale},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       pages = {163-175},
       year = {2012},
       abstract = {Organizations are increasingly investing in social collaboration and communication platforms for integrated exchange of information within and between enterprises. These Enterprise 2.0 projects always have a deep impact on organizational and cultural changes and need a critical mass of user involvement across all different groups. Users that grew up in the digital age and use new forms of collaborative platforms within their daily activities are often more technologically adept and more willing to share information. This leads to a digital divide between Digital Natives and Digital Immigrants, which needs to be addressed within such projects. The main objective of this paper is to investigate the perceived differences in success factors for Enterprise 2.0 seen by Digital Natives and Digital Immigrants and its implications on the implementation of a process oriented methodology for Enterprise 2.0 projects.},
       doi = {10.1007/978-3-642-32273-0_14},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=262756&pCurrPk=65376}
    }

  • [j33] S. Mujacic, M. Debevc, P. Kosec, M. Bloice, and A. Holzinger, “Modeling, design, development and evaluation of a hypervideo presentation for digital systems teaching and learning“, Multimedia Tools and Applications MTAP, vol. 58, iss. 2, pp. 435-452, 2012.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Hypervideos are multimedia files, which differ from traditional video files in that they can be navigated by using links that are embedded in them. Students can therefore easily access content that explains and clarifies certain points of the lectures that are difficult to understand, while at the same time not interrupting the flow of the original video presentation. In this paper we report on the design, development and evaluation of a hypermedia e-Learning tool for university students. First, the structure of the hypervideo model is presented; once the structure is known, the process of creating hypervideo content is described in detail, as are the various ways in which content can be linked together. Finally, an evaluation is presented, which has been carried out in the context of an engineering class by use of an interactive experiment, involving N = 88 students from a digital systems course. In this study the students were randomly assigned to two groups; one group participated in the course as usual, whilst the second group participated in the same course while also combining the conventional learning with the hypervideo content developed for the course. The students’ learning results showed that the students who had access to the hypervideo content performed significantly better than the comparison group.

    @article{j33,
       author = {Mujacic, S. and Debevc, M. and Kosec, P. and Bloice, M. and Holzinger, A.},
       title = {Modeling, design, development and evaluation of a hypervideo presentation for digital systems teaching and learning},
       journal = {Multimedia Tools and Applications MTAP},
       volume = {58},
       number = {2},
       pages = {435-452},
       year = {2012},
       abstract = {Hypervideos are multimedia files, which differ from traditional video files in that they can be navigated by using links that are embedded in them. Students can therefore easily access content that explains and clarifies certain points of the lectures that are difficult to understand, while at the same time not interrupting the flow of the original video presentation. In this paper we report on the design, development and evaluation of a hypermedia e-Learning tool for university students. First, the structure of the hypervideo model is presented; once the structure is known, the process of creating hypervideo content is described in detail, as are the various ways in which content can be linked together. Finally, an evaluation is presented, which has been carried out in the context of an engineering class by use of an interactive experiment, involving N = 88 students from a digital systems course. In this study the students were randomly assigned to two groups; one group participated in the course as usual, whilst the second group participated in the same course while also combining the conventional learning with the hypervideo content developed for the course. The students’ learning results showed that the students who had access to the hypervideo content performed significantly better than the comparison group.},
       doi = {10.1007/s11042-010-0665-1},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=188969&pCurrPk=50195}
    }
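
    The linked structure described in [j33] can be pictured as video nodes carrying time-anchored links to clarifying content. The following Python sketch is purely illustrative – the class and field names are hypothetical stand-ins, not the hypervideo model from the paper:

    from dataclasses import dataclass, field

    @dataclass
    class Link:
        # Clickable region active during [start, end) seconds of playback,
        # pointing at supplementary content (names are hypothetical).
        start: float
        end: float
        target: str  # URL or id of the clarifying clip or document

    @dataclass
    class HypervideoNode:
        video: str
        links: list = field(default_factory=list)

        def active_links(self, t: float):
            # Links selectable at playback time t, without interrupting playback.
            return [l for l in self.links if l.start <= t < l.end]

    lecture = HypervideoNode("digital_systems_01.mp4",
                             [Link(120.0, 150.0, "flipflop_explainer.mp4")])
    print(lecture.active_links(130.0))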

  • [c90] K. Holzinger, G. Koiner-Erath, P. Kosec, M. Fassold, and A. Holzinger, “ArchaeoApp Rome Edition (AARE): Making Invisible Sites Visible – e-Business Aspects of Historic Knowledge Discovery via Mobile Devices“, SciTePress, 2012, pp. 115-122.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Rome is visited by 7 to 10 million tourists per year, many of them interested in historical sites. Most sites that are described in tourist guides (printed or online) are archaeological sites; we can call them visible archaeological sites. Unfortunately, even visible archaeological sites in Rome are barely marked – and invisible sites are completely ignored. In this paper, we present the ArchaeoApp Rome Edition (AARE). The novelty is not just to mark the important, visible, barely known sites, but to mark the invisible sites, consequently introducing a completely novel type of site to the tourist guidance: historical invisible sites. One challenge is to obtain reliable historic information on demand. A possible approach is to retrieve the information from Wikipedia directly. The second challenge is that most of the end users have no Web access due to the high roaming costs. The third challenge is to strike a balance between the best platform available and the most used platform. For e-Business purposes, it is of course necessary to support as many mobile platforms as possible (Android, iOS and Windows Phone). The advantages of AARE include: no roaming costs, data update on demand (when connected to Wi-Fi, e.g. at a hotel, at a public hotspot, etc. … for free), and automatic nearby notification of invisible sites (markers) with a visual-auditory-tactile technique to make invisible sites visible.

    @inproceedings{c90,
       author = {Holzinger, Katharina and Koiner-Erath, Gabriele and Kosec, Primoz and Fassold, Markus and Holzinger, Andreas},
       title = {ArchaeoApp Rome Edition (AARE): Making Invisible Sites Visible - e-Business Aspects of Historic Knowledge Discovery via Mobile Devices},
       publisher = {SciTePress},
       pages = {115-122},
       year = {2012},
       abstract = {Rome is visited by 7 to 10 million tourists per year, many of them interested in historical sites. Most sites that are described in tourist guides (printed or online) are archaeological sites; we can call them visible archaeological sites. Unfortunately, even visible archaeological sites in Rome are barely marked – and invisible sites are completely ignored. In this paper, we present the ArchaeoApp Rome Edition (AARE). The novelty is not just to mark the important, visible, barely known sites, but to mark the invisible sites, consequently introducing a completely novel type of site to the tourist guidance: historical invisible sites. One challenge is to obtain reliable historic information on demand. A possible approach is to retrieve the information from Wikipedia directly. The second challenge is that most of the end users have no Web access due to the high roaming costs. The third challenge is to strike a balance between the best platform available and the most used platform. For e-Business purposes, it is of course necessary to support as many mobile platforms as possible (Android, iOS and Windows Phone). The advantages of AARE include: no roaming costs, data update on demand (when connected to Wi-Fi, e.g. at a hotel, at a public hotspot, etc. … for free), and automatic nearby notification of invisible sites (markers) with a visual-auditory-tactile technique to make invisible sites visible.},
       keywords = {Information Retrieval on Mobile Devices, Knowledge Management, e-Business},
       doi = {10.5220/0004074801150122},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=246240&pCurrPk=64539}
    }
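
    The “automatic nearby notification” mentioned in the abstract boils down to a periodic distance test between the user's position and the offline-cached site markers. A minimal sketch, assuming WGS84 coordinates and a hypothetical 50 m trigger radius (the paper does not specify one):

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in metres between two WGS84 points.
        R = 6371000.0
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
             * math.sin(dl / 2) ** 2)
        return 2 * R * math.asin(math.sqrt(a))

    def near_invisible_site(user, site, radius_m=50.0):
        # Hypothetical trigger: notify when the user is within radius_m
        # of an offline-cached marker of an invisible site.
        return haversine_m(*user, *site) <= radius_m

    print(near_invisible_site((41.8902, 12.4922), (41.8905, 12.4920)))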

  • [c100] A. Holzinger, P. Treitler, and W. Slany, “Making Apps Useable on Multiple Different Mobile Platforms: On Interoperability for Business Application Development on Smartphones“, in Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 7465, G. Quirchmayr, J. Basl, I. You, L. Xu, and E. Weippl, Eds., Berlin, Heidelberg: Springer, 2012, pp. 176-189.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The relevance of enabling mobile access to business enterprise information systems for experts working in the field has grown significantly in recent years due to the increasing availability of smartphones; the shipment of smartphones exceeded that of personal computers in 2011. However, the screen sizes and display resolutions of different devices vary to a large degree, along with different aspect ratios and the complexity of mobile tasks. These obstacles are a major challenge for software developers, especially when they try to reach the largest possible audience and develop for multiple mobile platforms or device types. On the other hand, the end users’ expectations regarding the usability of the applications are increasing. Consequently, for a successful mobile application the user interface needs to be well-designed, thus justifying research to overcome these obstacles. In this paper, we report on experiences during an industrial project on building user interfaces for database access to a business enterprise information system for professionals in the field. We discuss our systematic analysis of standards and conventions for the design of user interfaces for various mobile platforms, as well as scaling methods operational on different physical screen sizes. The interoperability of different systems, including HTML5, Java and .NET, is also within the focus of this work.

    @incollection{c100,
       author = {Holzinger, Andreas and Treitler, Peter and Slany, Wolfgang},
       title = {Making Apps Useable on Multiple Different Mobile Platforms: On Interoperability for Business Application Development on Smartphones},
       booktitle = {Multidisciplinary Research and Practice for Information Systems, Springer Lecture Notes in Computer Science LNCS 7465},
       editor = {Quirchmayr, Gerald and Basl, Josef and You, Ilsun and Xu, Lida and Weippl, Edgar},
       publisher = {Springer},
       address = {Berlin, Heidelberg},
       pages = {176-189},
       year = {2012},
       abstract = {The relevance of enabling mobile access to business enterprise information systems for experts working in the field has grown significantly in recent years due to the increasing availability of smartphones; the shipment of smartphones exceeded that of personal computers in 2011. However, the screen sizes and display resolutions of different devices vary to a large degree, along with different aspect ratios and the complexity of mobile tasks. These obstacles are a major challenge for software developers, especially when they try to reach the largest possible audience and develop for multiple mobile platforms or device types. On the other hand, the end users’ expectations regarding the usability of the applications are increasing. Consequently, for a successful mobile application the user interface needs to be well-designed, thus justifying research to overcome these obstacles. In this paper, we report on experiences during an industrial project on building user interfaces for database access to a business enterprise information system for professionals in the field. We discuss our systematic analysis of standards and conventions for the design of user interfaces for various mobile platforms, as well as scaling methods operational on different physical screen sizes. The interoperability of different systems, including HTML5, Java and .NET, is also within the focus of this work.},
       doi = {10.1007/978-3-642-32498-7_14},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=253844&pCurrPk=65315}
    }
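
    One common instance of the scaling methods discussed in [c100] is density-independent sizing: UI dimensions are specified in abstract units and converted to physical pixels per device. A minimal sketch using Android's 160 dpi baseline convention; the concrete method used in the project may differ:

    def dp_to_px(dp: float, dpi: float) -> int:
        # Android convention: 1 dp equals one physical pixel at 160 dpi.
        return round(dp * dpi / 160.0)

    # The same 48 dp touch target on three hypothetical screen densities:
    for dpi in (160, 240, 480):
        print(f"{dpi} dpi -> {dp_to_px(48, dpi)} px")   # 48, 72, 144 px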

  • [j34] A. Holzinger, C. Stocker, B. Peischl, and K. Simonic, “On Using Entropy for Enhancing Handwriting Preprocessing“, Entropy, vol. 14, iss. 11, pp. 2324-2350, 2012.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Handwriting is an important modality for Human-Computer Interaction. For medical professionals, handwriting is (still) the preferred natural method of documentation. Handwriting recognition has long been a primary research area in Computer Science. With the tremendous ubiquity of smartphones, along with the renaissance of the stylus, handwriting recognition has received new impetus. However, recognition rates are still not perfect, and researchers are constantly improving handwriting recognition algorithms. In this paper we evaluate the performance of entropy-based slant and skew correction, and compare the results to other methods. We selected 3700 words from 23 writers out of the Unipen-ICROW-03 benchmark set, which we annotated with their associated error angles by hand. Our results show that the entropy-based slant correction method outperforms a window-based approach, with an average precision of ±6.02° for the entropy-based method compared with ±7.85° for the alternative. On the other hand, the entropy-based skew correction yields a lower average precision of ±2.86°, compared with the average precision of ±2.13° for the alternative LSM-based approach.

    @article{j34,
       author = {Holzinger, Andreas and Stocker, Christof and Peischl, Bernhard and Simonic, Klaus-Martin},
       title = {On Using Entropy for Enhancing Handwriting Preprocessing},
       journal = {Entropy},
       volume = {14},
       number = {11},
       pages = {2324-2350},
       year = {2012},
       abstract = {Handwriting is an important modality for Human-Computer Interaction. For medical professionals, handwriting is (still) the preferred natural method of documentation. Handwriting recognition has long been a primary research area in Computer Science. With the tremendous ubiquity of smartphones, along with the renaissance of the stylus, handwriting recognition has received new impetus. However, recognition rates are still not perfect, and researchers are constantly improving handwriting recognition algorithms. In this paper we evaluate the performance of entropy-based slant and skew correction, and compare the results to other methods. We selected 3700 words from 23 writers out of the Unipen-ICROW-03 benchmark set, which we annotated with their associated error angles by hand. Our results show that the entropy-based slant correction method outperforms a window-based approach, with an average precision of ±6.02° for the entropy-based method compared with ±7.85° for the alternative. On the other hand, the entropy-based skew correction yields a lower average precision of ±2.86°, compared with the average precision of ±2.13° for the alternative LSM-based approach.},
       doi = {10.3390/e14112324},
       url = {http://www.mdpi.com/1099-4300/14/11/2324/pdf}
    }
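
    The core idea of entropy-based slant correction is to shear the word image over candidate angles and keep the angle whose vertical projection profile has minimal entropy: upright strokes concentrate ink into few columns, giving a peaky, low-entropy profile. A minimal sketch of that idea, assuming a binary image with ink pixels set to 1; this is not the paper's exact algorithm or parameterization:

    import numpy as np

    def entropy_slant_angle(img, angles=np.arange(-45.0, 46.0, 1.0)):
        # img: binary word image (rows x cols), ink pixels = 1.
        rows, cols = img.shape
        r, c = np.nonzero(img)                       # ink pixel coordinates
        best_angle, best_h = 0.0, np.inf
        for a in angles:
            shear = np.tan(np.deg2rad(a))
            # Shift each pixel horizontally in proportion to its height,
            # then build the vertical projection profile of the sheared image.
            x = np.round(c + shear * (rows - 1 - r)).astype(int) + rows
            profile = np.bincount(x, minlength=cols + 2 * rows)
            p = profile[profile > 0] / profile.sum()
            h = -(p * np.log2(p)).sum()              # Shannon entropy of profile
            if h < best_h:                           # peaky profile = low entropy
                best_angle, best_h = a, h
        return best_angle                            # estimated slant, degrees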

  • [c97] A. Holzinger, C. Stocker, M. Bruschi, A. Auinger, H. Silva, H. Gamboa, and A. Fred, “On Applying Approximate Entropy to ECG Signals for Knowledge Discovery on the Example of Big Sensor Data“, in Active Media Technology, Lecture Notes in Computer Science, LNCS 7669, R. Huang, A. Ghorbani, G. Pasi, T. Yamaguchi, N. Yen, and B. Jin, Eds., Berlin Heidelberg: Springer, 2012, pp. 646-657.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Information entropy as a universal and fascinating statistical concept is helpful for numerous problems in the computational sciences. Approximate entropy (ApEn), introduced by Pincus (1991), can classify complex data in diverse settings. The capability to measure complexity from a relatively small amount of data holds promise for applications of ApEn in a variety of contexts. In this work we apply ApEn to ECG data. The data was acquired through an experiment to evaluate human concentration from 26 individuals. The challenge is to gain knowledge with only small ApEn windows while avoiding modeling artifacts. Our central hypothesis is that for intra-subject information (e.g. tendencies, fluctuations) the ApEn window size can be significantly smaller than for inter-subject classification. For that purpose we propose the term truthfulness to complement the statistical validity of a distribution, and show how truthfulness is able to establish trust in their local properties.

    @incollection{c97,
       author = {Holzinger, Andreas and Stocker, Christof and Bruschi, Manuel and Auinger, Andreas and Silva, Hugo and Gamboa, Hugo and Fred, Ana},
       title = {On Applying Approximate Entropy to ECG Signals for Knowledge Discovery on the Example of Big Sensor Data},
       booktitle = {Active Media Technology, Lecture Notes in Computer Science, LNCS 7669},
       editor = {Huang, Runhe and Ghorbani, Ali and Pasi, Gabriella and Yamaguchi, Takahira and Yen, Neil and Jin, Beijing},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       pages = {646-657},
       year = {2012},
       abstract = {Information entropy as a universal and fascinating statistical concept is helpful for numerous problems in the computational sciences. Approximate entropy (ApEn), introduced by Pincus (1991), can classify complex data in diverse settings. The capability to measure complexity from a relatively small amount of data holds promise for applications of ApEn in a variety of contexts. In this work we apply ApEn to ECG data. The data was acquired through an experiment to evaluate human concentration from 26 individuals. The challenge is to gain knowledge with only small ApEn windows while avoiding modeling artifacts. Our central hypothesis is that for intra-subject information (e.g. tendencies, fluctuations) the ApEn window size can be significantly smaller than for inter-subject classification. For that purpose we propose the term truthfulness to complement the statistical validity of a distribution, and show how truthfulness is able to establish trust in their local properties.},
       doi = {10.1007/978-3-642-35236-2_64},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=989401&pCurrPk=66820}
    }
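
    Approximate entropy itself is well defined (Pincus, 1991): compare how often length-m templates of the signal match within a tolerance r against how often the matches persist at length m+1. A compact NumPy version; m = 2 and r = 0.2·SD below are common defaults, not necessarily the settings used in the study:

    import numpy as np

    def approximate_entropy(x, m=2, r=None):
        # ApEn(m, r) of a 1-D signal (Pincus, 1991).
        x = np.asarray(x, dtype=float)
        n = len(x)
        if r is None:
            r = 0.2 * np.std(x)            # common heuristic tolerance

        def phi(m):
            # All overlapping length-m templates of the signal.
            t = np.array([x[i:i + m] for i in range(n - m + 1)])
            # Chebyshev distance between every pair of templates.
            d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
            # Fraction of templates within tolerance (self-matches included).
            c = np.mean(d <= r, axis=1)
            return np.mean(np.log(c))

        return phi(m) - phi(m + 1)

    rng = np.random.default_rng(0)
    print(approximate_entropy(rng.standard_normal(500)))   # irregular -> larger ApEn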

  • [c94] A. Holzinger, K. M. Simonic, and P. Yildirim, “Disease-Disease Relationships for Rheumatic Diseases: Web-Based Biomedical Textmining and Knowledge Discovery to Assist Medical Decision Making“, in IEEE 36th Annual Computer Software and Applications Conference (COMPSAC), 2012, pp. 573-580.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The MEDLINE database (Medical Literature Analysis and Retrieval System Online) contains an enormously increasing volume of biomedical articles. There is an urgent need for techniques which enable the discovery, the extraction, the integration and the use of hidden knowledge in those articles. Text mining aims at developing technologies to help cope with the interpretation of these large volumes of publications. Co-occurrence analysis is a technique applied in text mining; the methodologies and statistical models are used to evaluate the significance of the relationship between entities such as disease names, drug names, and keywords in titles, abstracts or even entire publications. In this paper we present a method and an evaluation on knowledge discovery of disease-disease relationships for rheumatic diseases. This has huge medical relevance, since rheumatic diseases affect hundreds of millions of people worldwide and lead to substantial loss of functioning and mobility. In this study, we interviewed medical experts and searched the ACR (American College of Rheumatology) web site in order to select the most observed rheumatic diseases to explore disease-disease relationships. We used a web-based text-mining tool to find disease names and their co-occurrence frequencies in MEDLINE articles for each disease. After finding disease names and frequencies, we normalized the names by interviewing medical experts and by utilizing biomedical resources. Frequencies are normally a good indicator of the relevance of a concept but they tend to overestimate the importance of common concepts. We also used the Pointwise Mutual Information (PMI) measure to discover the strength of a relationship. PMI provides an indication of how much more often the query and concept co-occur than expected by chance. After finding PMI values for each disease, we ranked these values and frequencies together. The results reveal hidden knowledge in articles regarding rheumatic diseases indexed by MEDLINE, thereby exposing relationships that can provide important additional information for medical experts and researchers for medical decision-making.

    @inproceedings{c94,
       author = {Holzinger, A. and Simonic, K. M. and Yildirim, P.},
       title = {Disease-Disease Relationships for Rheumatic Diseases: Web-Based Biomedical Textmining and Knowledge Discovery to Assist Medical Decision Making},
       booktitle = {IEEE 36th Annual Computer Software and Applications Conference (COMPSAC)},
       editor = {Bai, Xiaoying and Belli, Fevzi and Bertino, Elisa and Chang, Carl K. and Elçi, Atilla and Seceleanu, Cristina and Xie, Haihua and Zulkernine, Mohammad},
       pages = {573-580},
       year = {2012},
       abstract = {The MEDLINE database (Medical Literature Analysis and Retrieval System Online) contains an enormously increasing volume of biomedical articles. There is an urgent need for techniques which enable the discovery, the extraction, the integration and the use of hidden knowledge in those articles. Text mining aims at developing technologies to help cope with the interpretation of these large volumes of publications. Co-occurrence analysis is a technique applied in text mining; the methodologies and statistical models are used to evaluate the significance of the relationship between entities such as disease names, drug names, and keywords in titles, abstracts or even entire publications. In this paper we present a method and an evaluation on knowledge discovery of disease-disease relationships for rheumatic diseases. This has huge medical relevance, since rheumatic diseases affect hundreds of millions of people worldwide and lead to substantial loss of functioning and mobility. In this study, we interviewed medical experts and searched the ACR (American College of Rheumatology) web site in order to select the most observed rheumatic diseases to explore disease-disease relationships. We used a web-based text-mining tool to find disease names and their co-occurrence frequencies in MEDLINE articles for each disease. After finding disease names and frequencies, we normalized the names by interviewing medical experts and by utilizing biomedical resources. Frequencies are normally a good indicator of the relevance of a concept but they tend to overestimate the importance of common concepts. We also used the Pointwise Mutual Information (PMI) measure to discover the strength of a relationship. PMI provides an indication of how much more often the query and concept co-occur than expected by chance. After finding PMI values for each disease, we ranked these values and frequencies together. The results reveal hidden knowledge in articles regarding rheumatic diseases indexed by MEDLINE, thereby exposing relationships that can provide important additional information for medical experts and researchers for medical decision-making.},
       doi = {10.1109/COMPSAC.2012.77},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=258875&pCurrPk=64129}
    }
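
    The PMI score used in [c94] is a standard corpus statistic: it compares the observed co-occurrence probability of two terms with the probability expected if they were independent. A minimal sketch with hypothetical MEDLINE-style counts:

    import math

    def pmi(n_xy, n_x, n_y, n):
        # PMI = log2( p(x,y) / (p(x) * p(y)) ), estimated from the
        # co-occurrence count n_xy and marginal counts n_x, n_y over n documents.
        return math.log2((n_xy / n) / ((n_x / n) * (n_y / n)))

    # Hypothetical counts: two diseases co-occur in 40 of 100,000 abstracts.
    print(pmi(n_xy=40, n_x=500, n_y=800, n=100_000))   # > 0: more often than chance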

  • [Holzinger2012] A. Holzinger, G. Searle, B. Peischl, and M. Debevc, “An Answer to “Who needs a stylus?” On Handwriting Recognition on Mobile Devices“, in e-Business and Telecommunications, Communications in Computer and Information Science, CCIS 314, Heidelberg, Berlin, New York: Springer, 2012, pp. 156-167.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    “Who needs a stylus?” asked the late Steve Jobs during his introduction of the iPhone. Interestingly, just at this time, Apple had filed a patent application for handwriting and input recognition via pen, and Google and Nokia followed. So, “who needs a stylus then?” According to our experience from real-world projects with mobile devices, handwriting is still an issue, e.g. in the medical domain. Medical professionals are very accustomed to using a pen, whereas touch devices are rather used by non-medical professionals and definitely preferred by elderly people. During our projects on mobile devices, we noticed that both handwriting and touch have certain advantages and disadvantages, and that both are of equal importance. So to concretely answer “Who needs a stylus?”: medical professionals, for example. And this is definitely a large group of users.

    @incollection{Holzinger2012,
       author = {Holzinger, A. and Searle, G. and Peischl, B. and Debevc, M.},
       title = {An Answer to “Who needs a stylus?” On Handwriting Recognition on Mobile Devices},
       booktitle = {e-Business and Telecommunications, Communications in Computer and Information Science, CCIS 314},
       publisher = {Springer},
       address = {Heidelberg, Berlin, New York},
       pages = {156-167},
       year = {2012},
       abstract = {“Who needs a stylus?” asked the late Steve Jobs during his introduction of the iPhone. Interestingly, just at this time, Apple had filed a patent application for handwriting and input recognition via pen, and Google and Nokia followed. So, “who needs a stylus then?” According to our experience from real-world projects with mobile devices, handwriting is still an issue, e.g. in the medical domain. Medical professionals are very accustomed to using a pen, whereas touch devices are rather used by non-medical professionals and definitely preferred by elderly people. During our projects on mobile devices, we noticed that both handwriting and touch have certain advantages and disadvantages, and that both are of equal importance. So to concretely answer “Who needs a stylus?”: medical professionals, for example. And this is definitely a large group of users.},
       doi = {10.1007/978-3-642-35755-8_12},
       url = {http://link.springer.com/chapter/10.1007/978-3-642-35755-8_12}
    }

  • [c000] A. Holzinger, M. Schlögl, B. Peischl, and M. Debevc, “Optimization of a Handwriting Recognition Algorithm for a Mobile Enterprise Health Information System on the Basis of Real-Life Usability Research“, in e-Business and Telecommunications, Communications in Computer and Information Science, CCIS 222, M. S. Obaidat, G. A. Tsihrintzis, and J. Filipe, Eds., Berlin Heidelberg: Springer, 2012, vol. 222, pp. 97-111.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Optimizing data acquisition in mobile health care in order to increase accuracy and efficiency can benefit the patient. The software company FERK-Systems has been providing enterprise mobile health care information systems for various medical services in Germany for many years. Consequently, a usable front-end for handwriting recognition, particularly for use in ambulances, was needed. While handwriting recognition has been a classical topic of computer science for many years, numerous problems still need to be solved. In this paper, we report on the study and resulting improvements achieved by the adaptation of an existing handwriting algorithm, based on experiences made during medical rescue missions. By improving accuracy and error correction the performance of an available handwriting recognition algorithm was increased. However, the end user studies showed that the virtual keyboard is still the preferred method compared to handwriting, especially among participants with a computer usage of more than 30 hours a week. This is possibly due to the wide availability of the QWERTY/QWERTZ keyboard.

    @incollection{c000,
       author = {Holzinger, Andreas and Schlögl, Martin and Peischl, Bernhard and Debevc, Matjaz},
       title = {Optimization of a Handwriting Recognition Algorithm for a Mobile Enterprise Health Information System on the Basis of Real-Life Usability Research},
       booktitle = {e-Business and Telecommunications, Communications in Computer and Information Science, CCIS 222},
       editor = {Obaidat, Mohammad S. and Tsihrintzis, George A. and Filipe, Joaquim},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       volume = {222},
       pages = {97-111},
       year = {2012},
       abstract = {Optimizing data acquisition in mobile health care in order to increase accuracy and efficiency can benefit the patient. The software company FERK-Systems has been providing enterprise mobile health care information systems for various medical services in Germany for many years. Consequently, a usable front-end for handwriting recognition, particularly for use in ambulances, was needed. While handwriting recognition has been a classical topic of computer science for many years, numerous problems still need to be solved. In this paper, we report on the study and resulting improvements achieved by the adaptation of an existing handwriting algorithm, based on experiences made during medical rescue missions. By improving accuracy and error correction the performance of an available handwriting recognition algorithm was increased. However, the end user studies showed that the virtual keyboard is still the preferred method compared to handwriting, especially among participants with a computer usage of more than 30 hours a week. This is possibly due to the wide availability of the QWERTY/QWERTZ keyboard.},
       doi = {10.1007/978-3-642-25206-8_6},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=262897&pCurrPk=63527}
    }

  • [c84] A. Holzinger, R. Scherer, M. Seeber, J. Wagner, and G. Müller-Putz, “Computational Sensemaking on Examples of Knowledge Discovery from Neuroscience Data: Towards Enhancing Stroke Rehabilitation“, in Information Technology in Bio- and Medical Informatics, Lecture Notes in Computer Science, LNCS 7451, C. Böhm, S. Khuri, L. Lhotská, and M. Renda, Eds., Heidelberg, New York: Springer, 2012, pp. 166-168.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Strokes are often associated with persistent impairment of a lower limb. Functional brain mapping is a set of techniques from neuroscience for mapping biological quantities (computational maps) into spatial representations of the human brain as functional cortical tomography, generating massive data. Our goal is to understand cortical reorganization after a stroke and to develop models for optimizing rehabilitation with non-invasive electroencephalography. The challenge is to obtain insight into brain functioning, in order to develop predictive computational models to improve patient outcomes. There are many EEG features that still need to be explored with respect to cortical reorganization. In the present work we use independent component analysis (ICA) and data visualization mapping as tools for sensemaking. Our results show activity patterns over the sensorimotor cortex, involved in the execution and association of movements; our results further support the usefulness of inverse mapping methods and generative models for functional brain mapping in the context of non-invasive monitoring of brain activity.

    @incollection{c84,
       author = {Holzinger, Andreas and Scherer, Reinhold and Seeber, Martin and Wagner, Johanna and Müller-Putz, Gernot},
       title = {Computational Sensemaking on Examples of Knowledge Discovery from Neuroscience Data: Towards Enhancing Stroke Rehabilitation},
       booktitle = {Information Technology in Bio- and Medical Informatics, Lecture Notes in Computer Science, LNCS 7451},
       editor = {Böhm, Christian and Khuri, Sami and Lhotská, Lenka and Renda, M.},
       publisher = {Springer},
       address = {Heidelberg, New York},
       pages = {166-168},
       year = {2012},
       abstract = {Strokes are often associated with persistent impairment of a lower limb. Functional brain mapping is a set of techniques from neuroscience for mapping biological quantities (computational maps) into spatial representations of the human brain as functional cortical tomography, generating massive data. Our goal is to understand cortical reorganization after a stroke and to develop models for optimizing rehabilitation with non-invasive electroencephalography. The challenge is to obtain insight into brain functioning, in order to develop predictive computational models to improve patient outcomes. There are many EEG features that still need to be explored with respect to cortical reorganization. In the present work we use independent component analysis (ICA) and data visualization mapping as tools for sensemaking. Our results show activity patterns over the sensorimotor cortex, involved in the execution and association of movements; our results further support the usefulness of inverse mapping methods and generative models for functional brain mapping in the context of non-invasive monitoring of brain activity.},
       doi = {10.1007/978-3-642-32395-9_13},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=254454&pCurrPk=65375}
    }
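
    The independent component analysis step in [c84] can be reproduced in spirit with scikit-learn's FastICA. A minimal sketch on random stand-in data; the study's actual EEG montage, preprocessing and ICA variant are not specified here:

    import numpy as np
    from sklearn.decomposition import FastICA

    # Stand-in EEG block: 10 s at 250 Hz, 32 channels (samples x channels).
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((2500, 32))

    ica = FastICA(n_components=16, random_state=0, max_iter=1000)
    sources = ica.fit_transform(eeg)       # estimated independent time courses
    patterns = ica.mixing_                 # per-channel weights (scalp patterns)
    print(sources.shape, patterns.shape)   # (2500, 16) (32, 16)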

  • [c101] A. Holzinger, E. Popova, B. Peischl, and M. Ziefle, “On Complexity Reduction of User Interfaces for Safety-Critical Systems“, in Multidisciplinary Research and Practice for Information Systems, Lecture Notes in Computer Science, LNCS 7465, G. Quirchmayr, J. Basl, I. You, L. Xu, and E. Weippl, Eds., Berlin, Heidelberg: Springer, 2012, pp. 108-122.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Control and communication systems used at power plants or incineration facilities offer various graphical visualizations of the physical parts of the site; however, they rarely provide sufficient visualization of the signal data. The problem is that such facilities contain 10,000 or more data acquisition points, each of them continuously sending data updates to the control system (every 20 ms or less). This huge load of data can be analyzed by a human expert only if appropriately visualized. Such a visualization tool is AutoDyn, developed by the company Technikgruppe, which allows processing and visualizing complex data and supports decision making. In order to configure this tool, a user interface is necessary, called TGtool. It was originally developed following a system-centered approach; consequently, it is difficult to use. Wrong configuration can lead to incorrect signal data visualization, which may lead to wrong decisions by the power plant personnel. An unintentional mistake could have dramatic consequences. The challenge was to re-design this tool, applying a user-centered approach. In this paper we describe the re-design of the configuration tool, following the hypothesis that a user-centered cognitive map structure helps to deal with the complexity without excessive training. The results of the evaluation support this hypothesis.

    @incollection{c101,
       author = {Holzinger, Andreas and Popova, Evgenia and Peischl, Bernhard and Ziefle, Martina},
       title = {On Complexity Reduction of User Interfaces for Safety-Critical Systems},
       booktitle = {Multidisciplinary Research and Practice for Information Systems, Lecture Notes in Computer Science, LNCS 7465},
       editor = {Quirchmayr, Gerald and Basl, Josef and You, Ilsun and Xu, Lida and Weippl, Edgar},
       publisher = {Springer},
       address = {Berlin, Heidelberg},
       pages = {108-122},
       year = {2012},
       abstract = {Control and communication systems used at power plants or incineration facilities offer various graphical visualizations of the physical parts of the site; however, they rarely provide sufficient visualization of the signal data. The problem is that such facilities contain 10,000 or more data acquisition points, each of them continuously sending data updates to the control system (every 20 ms or less). This huge load of data can be analyzed by a human expert only if appropriately visualized. Such a visualization tool is AutoDyn, developed by the company Technikgruppe, which allows processing and visualizing complex data and supports decision making. In order to configure this tool, a user interface is necessary, called TGtool. It was originally developed following a system-centered approach; consequently, it is difficult to use. Wrong configuration can lead to incorrect signal data visualization, which may lead to wrong decisions by the power plant personnel. An unintentional mistake could have dramatic consequences. The challenge was to re-design this tool, applying a user-centered approach. In this paper we describe the re-design of the configuration tool, following the hypothesis that a user-centered cognitive map structure helps to deal with the complexity without excessive training. The results of the evaluation support this hypothesis.},
       doi = {10.1007/978-3-642-32498-7_9},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=253848&pCurrPk=65316}
    }

  • [c89] A. Holzinger, M. Geier, and P. Germanakos, “On the development of smart adaptive user interfaces for mobile e-Business applications: Towards enhancing User Experience – some lessons learned“, in ICE-B Conference (10.5220/0004067002050214), SciTePress, 2012, pp. 205-214.
    [BibTeX] [Abstract] [Download PDF]

    Mobile end users usually work in complex and hectic environments; consequently, for mobile e-Business applications the design and development of context-aware, smart, adaptive user interfaces is becoming more and more important. The main goal is to make the user interface so simple that the end users can concentrate on their tasks – not on the handling of the application; the main challenge is its adaptation to the context. A possible solution is smart adaptation. Consequently, developers need to know the limits of both context and systems and must be aware of the different ways mobile end users interact. In this paper, we follow the hypothesis that simple user interfaces enhance performance and we report on some lessons learned during the design, development and evaluation of a smart, adaptive user interface for an e-Business application.

    @incollection{c89,
       author = {Holzinger, A. and Geier, M. and Germanakos, P.},
       title = {On the development of smart adaptive user interfaces for mobile e-Business applications: Towards enhancing User Experience – some lessons learned},
       booktitle = {ICE-B Conference (10.5220/0004067002050214)},
       publisher = {SciTePress},
       pages = {205-214},
       year = {2012},
       abstract = {Mobile end users usually work in complex and hectic environments; consequently, for mobile e-Business applications the design and development of context-aware, smart, adaptive user interfaces is becoming more and more important. The main goal is to make the user interface so simple that the end users can concentrate on their tasks – not on the handling of the application; the main challenge is its adaptation to the context. A possible solution is smart adaptation. Consequently, developers need to know the limits of both context and systems and must be aware of the different ways mobile end users interact. In this paper, we follow the hypothesis that simple user interfaces enhance performance and we report on some lessons learned during the design, development and evaluation of a smart, adaptive user interface for an e-Business application.},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=999579&pCurrPk=64538}
    }

  • [Bod3] A. Holzinger, Biomedical Informatics: Computational Sciences meets Life Sciences, Norderstedt: BoD, 2012.
    [BibTeX] [Abstract] [Download PDF]

    Computational Sciences meets Life Sciences Medical Informatics is defined as an interdisciplinary field studying the effective use of biomedical data, information and knowledge for scientific inquiry, problem solving, and decision making, motivated by efforts to improve human health. To emphasize the broad character it is called Biomedical Informatics. The course LV 444.152 consists of the following 12 lectures: 1. Introduction: Computer Science meets Life Sciences, challenges and future directions; 2. Back to the future: Fundamentals of Data, Information and Knowledge; 3. Structured Data: Coding, Classification (ICD, SNOMED, MeSH, UMLS); 4. Biomedical Databases: Acquisition, Storage, Information Retrieval and Use; 5. Semi structured and weakly structured data; 6. Multimedia Data Mining and Knowledge Discovery; 7. Knowledge and Decision: Cognitive Science and Human-Computer Interaction; 8. Biomedical Decision Making: Reasoning and Decision Support; 9. Intelligent Information Visualization and Visual Analytics; 10. Biomedical Information Systems and Medical Knowledge Management; 11. Biomedical Data: Privacy, Safety and Security 12. Methodology for Information Systems: System Design, Usability and Evaluation

    @book{Bod3,
       author = {Holzinger, Andreas},
       title = {Biomedical Informatics: Computational Sciences meets Life Sciences},
       publisher = {BoD},
       address = {Norderstedt},
       year = {2012},
       abstract = {Computational Sciences meets Life Sciences
    Medical Informatics is defined as an interdisciplinary field studying the effective use of biomedical data, information and knowledge for scientific inquiry, problem solving, and decision making, motivated by efforts to improve human health. To emphasize the broad character it is called Biomedical Informatics. The course LV 444.152 consists of the following 12 lectures:
    1. Introduction: Computer Science meets Life Sciences, challenges and future directions;
    2. Back to the future: Fundamentals of Data, Information and Knowledge;
    3. Structured Data: Coding, Classification (ICD, SNOMED, MeSH, UMLS);
    4. Biomedical Databases: Acquisition, Storage, Information Retrieval and Use;
    5. Semi structured and weakly structured data;
    6. Multimedia Data Mining and Knowledge Discovery;
    7. Knowledge and Decision: Cognitive Science and Human-Computer Interaction;
    8. Biomedical Decision Making: Reasoning and Decision Support;
    9. Intelligent Information Visualization and Visual Analytics;
    10. Biomedical Information Systems and Medical Knowledge Management;
    11. Biomedical Data: Privacy, Safety and Security
    12. Methodology for Information Systems: System Design, Usability and Evaluation},
       url = {http://www.bod.de/index.php?id=1132&objk_id=859299}
    }

  • [c93] A. Holzinger, “On Knowledge Discovery and Interactive Intelligent Visualization of Biomedical Data – Challenges in Human–Computer Interaction and Biomedical Informatics“, in DATA 2012, International Conference on Data Technologies and Applications, 2012, pp. 5-16.
    [BibTeX] [Abstract] [Download PDF]

    Biomedical Informatics can be defined as “the interdisciplinary field that studies and pursues the effective use of biomedical data, information and knowledge for scientific inquiry, problem solving, and decision making, motivated by efforts to improve human health.” However, professionals in the life sciences are facing an increasing quantity of highly complex, multi-dimensional and weakly structured data. While researchers in Human-Computer Interaction (HCI) and Knowledge Discovery in Databases (KDD) have long been working independently to develop methods that can support expert end users to identify, extract and understand information out of this data, it is obvious that an interdisciplinary approach to bring these two fields closer together can yield synergies in the application of these methods to weakly structured complex medical data sets. The aim is to support end users in learning how to interactively analyse information properties and to visualize the most relevant parts – in order to gain knowledge, and finally wisdom, to support smarter decision making. The danger is not only that of being overwhelmed by increasing masses of data; moreover, there is the risk of modelling artifacts. [Knowledge Discovery, Interactive Visualization]

    @inproceedings{c93,
       year = {2012},
       author = {Holzinger, Andreas},
       title = {On Knowledge Discovery and Interactive Intelligent Visualization of Biomedical Data - Challenges in Human–Computer Interaction and Biomedical Informatics},
       booktitle = {DATA 2012, International Conference on Data Technologies and Applications},
       editor = {Helfert, Markus and Francalanci, Chiara and Filipe, Joaquim},
       pages = {5-16},
       abstract = {Biomedical Informatics can be defined as “the interdisciplinary field that studies and pursues the effective use of biomedical data, information and knowledge for scientific inquiry, problem solving, and decision making, motivated by efforts to improve human health.” However, professionals in the life sciences are facing an increasing quantity of highly complex, multi-dimensional and weakly structured data. While researchers in Human-Computer Interaction (HCI) and Knowledge Discovery in Databases (KDD) have long been working independently to develop methods that can support expert end users to identify, extract and understand information out of this data, it is obvious that an interdisciplinary approach to bring these two fields closer together can yield synergies in the application of these methods to weakly structured complex medical data sets. The aim is to support end users in learning how to interactively analyse information properties and to visualize the most relevant parts – in order to gain knowledge, and finally wisdom, to support smarter decision making. The danger is not only that of being overwhelmed by increasing masses of data; moreover, there is the risk of modelling artifacts. [Knowledge Discovery, Interactive Visualization]},
       keywords = {HCI-KDD, Knowledge Discovery, Interactive Visualization},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=258208&pCurrPk=64857}
    }

  • [c91] M. Debevc, I. Kožuh, P. Kosec, M. Rotovnik, and A. Holzinger, “Sign Language Multimedia Based Interaction for Aurally Handicapped People“, in Computers Helping People with Special Needs, Lecture Notes in Computer Science, LNCS 7383, K. Miesenberger, A. Karshmer, P. Penaz, and W. Zagler, Eds., Berlin Heidelberg: Springer, 2012, pp. 213-220.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    People with hearing disabilities still do not have satisfactory access to Internet services. Since sign language is the mother tongue of deaf people, and 80% of this social group cannot successfully understand written content, different ways of using sign language to deliver information via the Internet should be considered. In this paper, we provide a technical overview of solutions to this problem that we have designed and tested in recent years, along with the evaluation results and users’ experience reports. The solutions discussed prioritize sign language on the Internet for the deaf and hard of hearing, using a multimodal approach to delivering information, including video, audio and captions.

    @incollection{c91,
       author = {Debevc, Matjaž and Kožuh, Ines and Kosec, Primož and Rotovnik, Milan and Holzinger, Andreas},
       title = {Sign Language Multimedia Based Interaction for Aurally Handicapped People},
       booktitle = {Computers Helping People with Special Needs, Lecture Notes in Computer Science, LNCS 7383},
       editor = {Miesenberger, Klaus and Karshmer, Arthur and Penaz, Petr and Zagler, Wolfgang},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       pages = {213-220},
       year = {2012},
       abstract = {People with hearing disabilities still do not have satisfactory access to Internet services. Since sign language is the mother tongue of deaf people, and 80% of this social group cannot successfully understand written content, different ways of using sign language to deliver information via the Internet should be considered. In this paper, we provide a technical overview of solutions to this problem that we have designed and tested in recent years, along with the evaluation results and users’ experience reports. The solutions discussed prioritize sign language on the Internet for the deaf and hard of hearing, using a multimodal approach to delivering information, including video, audio and captions.},
       doi = {10.1007/978-3-642-31534-3_33},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=255570&pCurrPk=64130}
    }

  • [c99] A. Calero Valdez, A. Schaar, M. Ziefle, A. Holzinger, S. Jeschke, and C. Brecher, “Using Mixed Node Publication Network Graphs for Analyzing Success in Interdisciplinary Teams“, in Active Media Technology, Lecture Notes in Computer Science LNCS 7669, R. Huang, A. A. Ghorbani, G. Pasi, T. Yamaguchi, N. Y. Yen, and B. Jin, Eds., Heidelberg, Berlin: Springer , 2012, pp. 606-617.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Large-scale research problems (e.g. health and aging, economics and production in high-wage countries) are typically complex, needing competencies and research input of different disciplines [1]. Hence, cooperative working in mixed teams is a common research procedure to meet multi-faceted research problems. However, interdisciplinarity is – socially and scientifically – a challenge, not only in steering cooperation quality, but also in evaluating interdisciplinary performance. In this paper we demonstrate how mixed-node publication network graphs can be used to gain insights into the social structures of research groups. Explicating the published element of cooperation in a network graph reveals more than simple co-authorship graphs. The validity of the approach was tested on the 3-year publication outcome of an interdisciplinary research group. The approach was highly useful not only in demonstrating network properties like propinquity and homophily, but also in proposing a performance metric of interdisciplinarity. Furthermore, we suggest applying the approach to a large research cluster as a method of self-management and enriching the graph with sociometric data to improve the intelligibility of the graph.

    @incollection{c99,
       author = {Calero Valdez, André and Schaar, Anne Kathrin and Ziefle, Martina and Holzinger, Andreas and Jeschke, Sabina and Brecher, Christian},
       title = {Using Mixed Node Publication Network Graphs for Analyzing Success in Interdisciplinary Teams},
       booktitle = {Active Media Technology, Lecture Notes in Computer Science LNCS 7669},
       editor = {Huang, Runhe and Ghorbani, Ali A and Pasi, Gabriella and Yamaguchi, Takahira and Yen, Neil Y and Jin, Beijing},
       publisher = {Springer },
       address = {Heidelberg, Berlin},
       pages = {606-617},
       year = {2012},
       abstract = {Large-scale research problems (e.g. health and aging, economics and production in high-wage countries) are typically complex, needing competencies and research input of different disciplines [1]. Hence, cooperative working in mixed teams is a common research procedure to meet multi-faceted research problems. However, interdisciplinarity is – socially and scientifically – a challenge, not only in steering cooperation quality, but also in evaluating interdisciplinary performance. In this paper we demonstrate how mixed-node publication network graphs can be used to gain insights into the social structures of research groups. Explicating the published element of cooperation in a network graph reveals more than simple co-authorship graphs. The validity of the approach was tested on the 3-year publication outcome of an interdisciplinary research group. The approach was highly useful not only in demonstrating network properties like propinquity and homophily, but also in proposing a performance metric of interdisciplinarity. Furthermore, we suggest applying the approach to a large research cluster as a method of self-management and enriching the graph with sociometric data to improve the intelligibility of the graph.},
       doi = {10.1007/978-3-642-35236-2_61},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=989398&pCurrPk=71844}
    }
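
    A mixed-node publication network is essentially a bipartite graph of author and publication nodes; projecting it onto the author set recovers the plain weighted co-authorship graph that the mixed representation enriches. A minimal sketch with networkx and made-up nodes:

    import networkx as nx
    from networkx.algorithms import bipartite

    # Made-up mixed-node graph: authors connected to the papers they wrote.
    G = nx.Graph()
    authors, papers = ["A", "B", "C"], ["p1", "p2"]
    G.add_nodes_from(authors, kind="author")
    G.add_nodes_from(papers, kind="publication")
    G.add_edges_from([("A", "p1"), ("B", "p1"), ("B", "p2"), ("C", "p2")])

    # Projection onto authors: a co-authorship graph with edge weights
    # counting shared publications (information the mixed-node graph keeps
    # explicit but the projection compresses).
    coauthors = bipartite.weighted_projected_graph(G, authors)
    print(list(coauthors.edges(data=True)))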

  • [c85] C. Breitwieser, O. Terbu, A. Holzinger, C. Brunner, S. Lindstaedt, and G. Müller-Putz, “iScope – Viewing Biosignals on Mobile Devices“, in Pervasive Computing and the Networked World, Lecture Notes in Computer Science LNCS 7719, Q. Zu, B. Hu, and A. Elçi, Eds., Berlin Heidelberg: Springer , 2012, pp. 50-56.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    We developed an iOS based application called iScope to monitor biosignals online. iScope is able to receive different signal types via a wireless network connection and is able to present them in the time or the frequency domain. Thus it is possible to inspect recorded data immediately during the recording process and detect potential artifacts early without the need to carry around heavy equipment like laptops or complete PC workstations. The iScope app has been tested during various measurements on the iPhone 3GS as well as on the iPad 1 and is fully functional.

    @incollection{c85,
       author = {Breitwieser, Christian and Terbu, Oliver and Holzinger, Andreas and Brunner, Clemens and Lindstaedt, Stefanie and Müller-Putz, Gernot},
       title = {iScope – Viewing Biosignals on Mobile Devices},
       booktitle = {Pervasive Computing and the Networked World, Lecture Notes in Computer Science LNCS 7719},
       editor = {Zu, Qiaohong and Hu, Bo and Elçi, Atilla},
       publisher = {Springer },
       address = {Berlin Heidelberg},
       pages = {50-56},
       year = {2012},
       abstract = {We developed an iOS based application called iScope to monitor biosignals online. iScope is able to receive different signal types via a wireless network connection and is able to present them in the time or the frequency domain. Thus it is possible to inspect recorded data immediately during the recording process and detect potential artifacts early without the need to carry around heavy equipment like laptops or complete PC workstations. The iScope app has been tested during various measurements on the iPhone 3GS as well as on the iPad 1 and is fully functional.},
       doi = {10.1007/978-3-642-37015-1_5},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=289657&pCurrPk=69931}
    }
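
    Presenting a streamed biosignal “in the time or the frequency domain”, as iScope does, amounts to plotting either the raw samples or their spectrum. A minimal NumPy sketch of the frequency-domain view, using a synthetic 10 Hz rhythm as stand-in data and an assumed sampling rate:

    import numpy as np

    fs = 256                                   # assumed sampling rate, Hz
    t = np.arange(0, 2, 1 / fs)                # 2 s chunk of signal
    rng = np.random.default_rng(0)
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

    # One-sided amplitude spectrum of the chunk (the frequency-domain view).
    spectrum = np.abs(np.fft.rfft(x)) / t.size
    freqs = np.fft.rfftfreq(t.size, d=1 / fs)
    print(freqs[np.argmax(spectrum[1:]) + 1])  # ~10.0 Hz, the dominant peak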

  • [c96] M. Billinger, C. Brunner, R. Scherer, A. Holzinger, and G. Müller-Putz, “Towards a Framework Based on Single Trial Connectivity for Enhancing Knowledge Discovery in BCI“, in Active Media Technology, Lecture Notes in Computer Science, LNCS 7669, R. Huang, A. Ghorbani, G. Pasi, T. Yamaguchi, N. Yen, and B. Jin, Eds., Berlin Heidelberg: Springer, 2012, pp. 658-667.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    We developed a framework for systematic evaluation of BCI systems. This framework is intended to compare features extracted from a variety of spectral measures related to functional connectivity, effective connectivity, or instantaneous power. Different measures are treated in a consistent manner, allowing fair comparison within a repeated measures design. We applied the framework to BCI data from 14 subjects recorded on two days each, and demonstrated the framework’s feasibility by confirming results from the literature. Furthermore, we could show that electrode selection becomes more focal in the second BCI session, but classification accuracy stays unchanged.

    @incollection{c96,
       author = {Billinger, Martin and Brunner, Clemens and Scherer, Reinhold and Holzinger, Andreas and Müller-Putz, Gernot R.},
       title = {Towards a Framework Based on Single Trial Connectivity for Enhancing Knowledge Discovery in BCI},
       booktitle = {Active Media Technology, Lecture Notes in Computer Science, LNCS 7669},
       editor = {Huang, Runhe and Ghorbani, Ali A. and Pasi, Gabriella and Yamaguchi, Takahira and Yen, Neil Y. and Jin, Beijing},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       pages = {658-667},
       year = {2012},
       abstract = {We developed a framework for systematic evaluation of BCI systems. This framework is intended to compare features extracted from a variety of spectral measures related to functional connectivity, effective connectivity, or instantaneous power. Different measures are treated in a consistent manner, allowing fair comparison within a repeated measures design. We applied the framework to BCI data from 14 subjects recorded on two days each, and demonstrated the framework’s feasibility by confirming results from the literature. Furthermore, we could show that electrode selection becomes more focal in the second BCI session, but classification accuracy stays unchanged.},
       doi = {10.1007/978-3-642-35236-2_65},
       url = {http://rd.springer.com/content/pdf/10.1007%2F978-3-642-35236-2_65.pdf}
    }
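
    One of the spectral connectivity measures such a framework might compare is magnitude-squared coherence between two channels. A minimal sketch with SciPy on synthetic single-trial data; the paper's actual feature set and parameters are broader than this:

    import numpy as np
    from scipy.signal import coherence

    fs = 250                                    # assumed sampling rate, Hz
    rng = np.random.default_rng(1)
    common = rng.standard_normal(4 * fs)        # shared source, 4 s trial
    c3 = common + 0.5 * rng.standard_normal(4 * fs)
    c4 = common + 0.5 * rng.standard_normal(4 * fs)

    # Magnitude-squared coherence: frequency-resolved linear coupling between
    # the two channels, one functional-connectivity measure among those compared.
    f, cxy = coherence(c3, c4, fs=fs, nperseg=256)
    print(f[np.argmax(cxy)], cxy.max())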

  • [c87] A. Auinger, P. Brandtner, P. Großdeßner, and A. Holzinger, “Search Engine Optimization Meets e-Business-A Theory-based Evaluation: Findability and Usability as Key Success Factors“, in DCNET/ICE-B/OPTICS, 2012, pp. 237-250.
    [BibTeX] [Abstract] [Download PDF]

    What cannot be found cannot be used. Consequently, the success of a Website depends, apart from its content, on two main criteria: its top-listing by search engines and its usability. Hence, Website usability and search engine optimization (SEO) are two topics of great relevance. This paper focusses on analysing the extent to which selected SEO criteria, experimentally applied to a Website, affect the Website’s usability, measured by DIN EN ISO 9241-110 criteria. Our approach followed (i) a theory-based comparison of usability recommendations and SEO measures and (ii) a scenario- and questionnaire-based usability evaluation study combined with an eye-tracking analysis. The findings clearly show that Website usability and SEO are closely connected and compatible to a wide extent. The theory-based measures for SEO and Web usability could be confirmed by the results of the usability evaluation study, and a positive correlation between search engine optimization and Website usability could be demonstrated.

    @inproceedings{c87,
       author = {Auinger, Andreas and Brandtner, Patrick and Großdeßner, Petra and Holzinger, Andreas},
       title = {Search Engine Optimization Meets e-Business-A Theory-based Evaluation: Findability and Usability as Key Success Factors},
       booktitle = {DCNET/ICE-B/OPTICS},
       pages = {237-250},
       year = {2012},
       abstract = {What can not be found, cannot be used. Consequently, the success of a Website depends, apart from its content, on two main criteria: its top-listing by search engines and its usability. Hence, Website usability and search engine optimization (SEO) are two topics of great relevance. This paper focusses on analysing the extent that selected SEO-criteria, which were experimentally applied to a Website, affect the website’s usability,  measured  by  DIN  EN  ISO  9241-110  criteria.  Our approach  followed (i)  a  theory-based  comparison of usability-recommendations and SEO-measures and (ii) a scenario- and questionnaire-based  usability evaluation study combined with an eye-tracking analysis. The findings clearly show that Website  usability and SEO are closely connected and compatible to a wide extent.  The theory-based measures for  SEO and Web Usability could be confirmed by the results of the conducted usability evaluation study and a  positive correlation between search engine optimization and Website usability could be demonstrated.},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=258768&pCurrPk=64574}
    }

  • [c95] M. Al-Smadi, G. Wesiak, C. Guetl, and A. Holzinger, “Assessment for/as Learning: Integrated Automatic Assessment in Complex Learning Resources for Self-Directed Learning“, in Complex, Intelligent and Software Intensive Systems (CISIS), 2012 Sixth International Conference on, 2012, pp. 929-934.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    In the so-called ‘New Culture for Assessment’, assessment has become a tool for learning. Assessment is no longer considered isolated from the learning process; instead it is provided in embedded forms. Moreover, students have more responsibility in the learning process in general and in assessment activities in particular. They become more engaged in developing assessment criteria, participating in self- and peer-assessment, reflecting on their own learning, monitoring their performance, and utilizing feedback to adapt their knowledge, skills, and behavior. Consequently, assessment tools have evolved from stand-alone, monolithic systems, through modular assessment tools, to a more flexible and interoperable generation that adopts service-oriented architecture and modern learning specifications and standards. This new generation holds great promise when it comes to having interoperable learning services and tools within more personalized and adaptive e-learning platforms. In this paper, integrated automated assessment forms provided through flexible, SOA-based tools are discussed. Moreover, the paper presents a showcase of how these forms have been integrated with a Complex Learning Resource (CLR) and used for self-directed learning. The results of the study show that the developed tool for self-directed learning supports students in their learning process.

    @inproceedings{c95,
       author = {Al-Smadi, Mohammad and Wesiak, Gudrun and Guetl, Christian and Holzinger, Andreas},
       title = {Assessment for/as Learning: Integrated Automatic Assessment in Complex Learning Resources for Self-Directed Learning},
       booktitle = {Complex, Intelligent and Software Intensive Systems (CISIS), 2012 Sixth International Conference on},
       pages = {929-934},
       year = {2012},
       abstract = {In the so-called 'New Culture for Assessment' assessment has become a tool for Learning. Assessment is no more considered to be isolated from the learning process and provided as embedded assessment forms. Nevertheless, students have more responsibility in the learning process in general and in assessment activities in particular. They become more engaged in: developing assessment criteria, participating in self, peer-assessments, reflecting on their own learning, monitoring their performance, and utilizing feedback to adapt their knowledge, skills, and behavior. Consequently, assessment tools have emerged from being stand-alone represented by monolithic systems through modular assessment tools to more flexible and interoperable generation by adopting the service-oriented architecture and modern learning specifications and standards. The new generation holds great promise when it comes to having interoperable learning services and tools within more personalized and adaptive e-learning platforms. In this paper, integrated automated assessment forms provided through flexible and SOA-based tools are discussed. Moreover, it presents a show case of how these forms have been integrated with a Complex Learning Resource (CLR) and used for self-directed learning. The results of the study show, that the developed tool for self-directed learning supports students in their learning process.},
       doi = {10.1109/cisis.2012.210},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=258758&pCurrPk=65947}
    }

2011

  • [c72a] M. Ziefle, C. Röcker, and A. Holzinger, “Perceived usefulness of assistive technologies and electronic services for ambient assisted living“, in 5th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth), 2011, pp. 585-592.
    [BibTeX] [Abstract] [Download PDF]

    This paper reports on a study analyzing the attitudes of users towards different types of Ambient Assisted Living (AAL) services. The study explores the acceptance and terms of use of large interactive screens for the most common application types: health, social and convenience services. In order to understand the impact of user diversity, we explored age, gender, health status, social contact, interest in technology, and the reported ease of use, as well as their relation to acceptance. Using the questionnaire method, 30 women and 30 men aged 17-95 years were examined. The results show that users are not yet very familiar with the vision of smart technology at home and report considerable diffidence and aloofness towards using such technologies. Persons with many social contacts and a high interest in technology show the highest acceptance of electronic services at home. Astonishingly, the results for the different applications were insensitive to gender and age, which indicates that the cautious attitude towards AAL applications represents a universal phenomenon. Consequently, acceptance criteria as well as users’ needs and wants should be seriously considered in order to successfully design smart home technologies. [Smart Health, Ubiquitous Computing]

    @inproceedings{c72a,
       year = {2011},
       author = {Ziefle, M. and Röcker, C. and Holzinger, A.},
       title = {Perceived usefulness of assistive technologies and electronic services for ambient assisted living},
       booktitle = {5th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth)},
       publisher = {IEEE},
       pages = {585-592},
       abstract = {This paper reports on a study analyzing the attitudes of users towards different types of Ambient Assisted Living (AAL) services. The study explores the acceptance and terms of use of large interactive screens for the most common applications types: health, social and convenience services. In order to understand the impact of user diversity, we explored age, gender, health status, social contact, interest in technology, and the reported ease of use as well as their relation to acceptance. Using the questionnaire method, 30 women and 30 men between 17-95 years were examined. The results show that users are not yet very familiar with the vision of smart technology at home and report a considerable diffidence and aloofness towards using such technologies. Persons with many social contacts and a high interest in technology show the highest acceptance for electronic services at home. Astonishingly, the results for the different applications were insensitive to gender and age, which indicates that the precautious attitude towards AAL applications represents a universal phenomenon. Consequently, acceptance criteria as well as users' needs and wants should be seriously considered in order to successfully design smart home technologies. [Smart Health, Ubiquitous Computing]},
       keywords = {Smart Health, Ubiquitous Computing},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=246285&pCurrPk=61395}
    }

  • [c82] M. Ziefle, C. Röcker, and A. Holzinger, “Medical Technology in Smart Homes: Exploring the User’s Perspective on Privacy, Intimacy and Trust“, in 35th Annual IEEE Computer Software and Applications Conference Workshops COMPSAC 2011, Munich: IEEE, 2011, pp. 410-415.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    This paper reports on a study exploring the attitudes of users towards video-based monitoring systems for long-term care of elderly or disabled people in smart home environments. The focus of the study was on investigating the willingness of users to accept medical technology in their homes and the specific conditions under which continuous monitoring would be acceptable. Using the questionnaire method, a total of 165 users (17-95 years) were examined regarding privacy, intimacy and trust issues for medical technology in homes. The results highlight trust and privacy as central requirements, especially when such technology is implemented within private spaces. The reported concerns were mostly insensitive to gender and age. Overall, it was revealed that acceptance issues and users’ needs and wants should be seriously considered in order to successfully design new medical technologies.

    @incollection{c82,
       year = {2011},
       author = {Ziefle, M. and Röcker, C. and Holzinger, A.},
       title = {Medical Technology in Smart Homes: Exploring the User's Perspective on Privacy, Intimacy and Trust},
       booktitle = {35th Annual IEEE Computer Software and Applications Conference Workshops COMPSAC 2011},
       publisher = {IEEE},
       address = {Munich},
       pages = {410-415},
       abstract = {This paper reports on a study exploring the attitudes of users towards video-based monitoring systems for long-term care of elderly or disabled people in smart home environments. The focus of the study was on investigating the willingness of users to accept medical technology in their homes and the specific conditions under which continuous monitoring would be acceptable. Using the questionnaire method, a total of 165 users (17-95 years) were examined regarding privacy, intimacy and trust issues for medical technology in homes. The results highlight trust and privacy as central requirements, especially when implemented within private spaces. The reported concerns were mostly insensitive to gender and age. Overall, it was revealed that acceptance issues and users' needs and wants should be seriously considered in order to successfully design new medical technologies .},
       keywords = {Smart Health, Ubiquitous Computing, Data Privacy},
       doi = {10.1109/COMPSACW.2011.75},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=234695&pCurrPk=59215}
    }

  • [c71a] W. B. L. Wong, K. Xu, and A. Holzinger, “Interactive Visualization for Information Analysis in Medical Diagnosis“, in Information Quality in e-Health, Lecture Notes in Computer Science, LNCS 7058, A. Holzinger and K. Simonic, Eds., Springer Berlin Heidelberg, 2011, pp. 109-120.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    This paper investigates to what extent the findings and solutions of information analysis in intelligence analysis can be applied and transferred to the medical diagnosis domain. Interactive visualization is proposed to address some of the problems faced by both domains. Design issues related to selected common problems are then discussed in detail. Finally, the visual sensemaking system INVISQUE is used as an example to illustrate how interactive visualization can support information analysis and medical diagnosis.

    @incollection{c71a,
       author = {Wong, B. L. William and Xu, Kai and Holzinger, Andreas},
       title = {Interactive Visualization for Information Analysis in Medical Diagnosis},
       booktitle = {Information Quality in e-Health, Lecture Notes in Computer Science, LNCS 7058},
       editor = {Holzinger, Andreas and Simonic, Klaus-Martin},
       publisher = {Springer Berlin Heidelberg},
       pages = {109-120},
       year = {2011},
       abstract = {This paper investigates to what extend the findings and solutions of information analysis in intelligence analysis can be applied and transferred into the medical diagnosis domains. Interactive visualization is proposed to address some of the problems faced by both domain. Its design issues related to selected common problems are then discussed in details. Finally, a visual sense making system INVISQUE is used as an example to illustrate how the interactive visualization can be used to support information analysis and medical diagnosis.},
       doi = {10.1007/978-3-642-25364-5_11},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=231643&pCurrPk=60972}
    }

  • [c79] C. Stickel, M. Ebner, and A. Holzinger, “Shadow Expert Technique (SET) for Interaction Analysis in Educational Systems“, in Universal Access in Human-Computer Interaction. Applications and Services, Springer Lecture Notes in Computer Science LNCS 6768, C. Stephanidis, Ed., Berlin Heidelberg: Springer, 2011, pp. 642-651.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    This paper describes a novel usability method called the Shadow Expert Technique (SET), which was applied to the learning management system of Graz University of Technology in two different trials, with a focus on consistency and visual complexity. The paper summarizes the development of this new method and the approach to generalizing it as a new way to gain deeper insight into interaction processes (Methodology).

    @incollection{c79,
       author = {Stickel, Christian and Ebner, Martin and Holzinger, Andreas},
       title = {Shadow Expert Technique (SET) for Interaction Analysis in Educational Systems},
       booktitle = {Universal Access in Human-Computer Interaction. Applications and Services, Springer Lecture Notes in Computer Science LNCS 6768},
       editor = {Stephanidis, Constantine},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       pages = {642-651},
       year = {2011},
       abstract = {This paper describes a novel usability method called Shadow Expert Technique (SET), which was applied on the learning management system of the Graz University of Technology in two different trials, with focus on consistency and visual complexity. This is the summary of the development of this new method and the approach to generalize it as a new way to get deeper insight into interaction processes (Methodology).},
       doi = {10.1007/978-3-642-21657-2_69},
       url = {http://rd.springer.com/content/pdf/10.1007%2F978-3-642-21657-2_69.pdf}
    }

  • [B89] K. M. Simonic, A. Holzinger, M. Bloice, and J. Hermann, “Optimizing Long-Term Treatment of Rheumatoid Arthritis with Systematic Documentation“, in Proceedings of Pervasive Health – 5th International Conference on Pervasive Computing Technologies for Healthcare, 2011, pp. 550-554.
    [BibTeX] [Abstract] [Download PDF]

    About 1% of the population suffers from rheumatoid arthritis. These patients not only experience pain; during the course of the disease their mobility is reduced due to a deterioration of their joints. To retard this destructive process, an assortment of drugs is available today; however, for optimal results both medication and dosage have to be tailored to each individual patient. RCQM is a clinical information system that moderates this process: within the confines of the examination routine, physicians gather more than 100 clinical and functional parameters (time needed < 10 minutes). The amassed data are turned into more usable information by applying scoring algorithms (e.g. the Disease Activity Score (DAS) and the Health Assessment Questionnaire (HAQ)), which is subsequently interpreted as a function of time. The resulting DAS trends and patterns are ultimately used for treatment optimization and as a measure of the quality of patient outcome. Graphical data acquisition and information visualization support the entire interaction between doctor and patient. Both are equally informed of the course of the disease and, in practice, treatment decisions are made jointly. The task of documentation becomes an integral part of the dialog with the patient. This yields an increased level of decision quality, higher compliance, and verifiable patient empowerment.

    @inproceedings{B89,
       author = {Simonic, K.M. and Holzinger, A. and Bloice, M. and Hermann, J.},
       title = {Optimizing Long-Term Treatment of Rheumatoid Arthritis with Systematic Documentation},
       booktitle = {Proceedings of Pervasive Health - 5th International Conference on Pervasive Computing Technologies for Healthcare},
       publisher = {IEEE},
       pages = {550-554},
       year = {2011},
       abstract = {About 1% of the population suffers from rheumatoid arthritis. They not only experience pain, but during the course of the disease their mobility is reduced due to a deterioration of their joints. To retard this destructive process an assortment of drugs are available today, however, for optimal results both medication and dosage have to be tailored for each individual patient. RCQM is a clinical information system that moderates this process: within the confines of the examination routine, physicians gather more than 100 clinical and functional parameters (time needed < 10 minutes). The amassed data are morphed into more useable information by applying scoring algorithms (e.g. Disease Activity Score (DAS), Health Assessment Questionnaire (HAQ)), which is subsequently interpreted as a function of time. The resulting DAS trends and patterns are ultimately used for treatment optimization and as a measure for the quality of patient outcome. Graphical data acquisition and information visualization support the entire interaction between doctor and patient. Both are equally informed of the course of the disease and, in practice, treatment decisions are made jointly. The task of documentation becomes an integral part of the dialog with the patient. This yields an increased level of decision quality, higher compliance, and verifiable patient empowerment.},
       url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6038866}
    }
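
    The Disease Activity Score mentioned in [B89] is commonly computed as DAS28. A minimal sketch of the published DAS28-ESR formula with the standard activity cutoffs follows; this is an illustration, not code from the RCQM system described above.

    # Hedged sketch of the published DAS28-ESR score used for monitoring
    # rheumatoid arthritis (not the RCQM implementation from [B89]).
    import math

    def das28_esr(tender28: int, swollen28: int, esr: float, global_health: float) -> float:
        """tender28/swollen28: joint counts (0-28); esr in mm/h;
        global_health: patient global assessment on a 0-100 VAS."""
        return (0.56 * math.sqrt(tender28)
                + 0.28 * math.sqrt(swollen28)
                + 0.70 * math.log(esr)
                + 0.014 * global_health)

    def activity(das28: float) -> str:
        """Standard cutoffs: <2.6 remission, <=3.2 low, <=5.1 moderate."""
        if das28 < 2.6:  return "remission"
        if das28 <= 3.2: return "low"
        if das28 <= 5.1: return "moderate"
        return "high"

    score = das28_esr(tender28=6, swollen28=4, esr=28.0, global_health=45.0)
    print(f"DAS28 = {score:.2f} ({activity(score)} disease activity)")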

  • [B87] C. Röcker, M. Ziefle, and A. Holzinger, “Social Inclusion in AAL Environments: Home Automation and Convenience Services for Elderly Users“, in Proceedings of the International Conference on Artificial Intelligence (ICAI 2011), New York: CSERA Press, 2011, pp. 55-59.
    [BibTeX] [Abstract] [Download PDF]

    Traditionally, Ambient Assisted Living applications focus on health-related services, like the detection of emergency situations, long-term treatment of chronic diseases, or the prevention and early detection of illnesses. Over the last years, more and more projects started to extend these classical healthcare scenarios by designing applications that explicitly aim at increasing well-being and social inclusion for elderly users. With the transition away from purely medical services towards integrated homecare environments, holistic design concepts and evaluation approaches will become necessary. This paper takes a detailed look at state-of-the-art applications in this field and illustrates emerging challenges for the design and development of future homecare systems.

    @incollection{B87,
       author = {Röcker, Carsten and Ziefle, Martina and Holzinger, Andreas},
       title = {Social Inclusion in AAL Environments: Home Automation and Convenience Services for Elderly Users},
       booktitle = {Proceedings of the International Conference on Artificial Intelligence (ICAI 2011)},
       publisher = {CSERA Press},
       address = {New York},
       pages = {55-59},
       year = {2011},
       abstract = {Traditionally,  Ambient  Assisted  Living  applications  focus on  health-related  services,  like  the  detection  of  emergency  situations,  long-term  treatment  of  chronic  diseases,  or  the  prevention  and  early detection  of  illnesses.  Over  the  last  years,  more  and  more  projects  started  to  extend  these  classical  healthcare  scenarios  by  designing  applications  that  explicitly  aim  at  increasing  well-being  and  social  inclusion  for  elderly  users.  With  the  transition  away  from  purely  medical  services  towards  integrated  homecare  environments,  holistic  design  concepts  and  evaluation  approaches  will  become  necessary.  This  paper  takes  a  detailed  look  at  state-of-the-art  applications  in  this  field  and  illustrates  emerging  challenges  for  the  design  and  development  of  future  homecare systems.},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=219571&pCurrPk=60718}
    }

  • [j30] M. Kreuzthaler, M. D. Bloice, L. Faulstich, K. M. Simonic, and A. Holzinger, “A Comparison of Different Retrieval Strategies Working on Medical Free Texts“, Journal of Universal Computer Science, vol. 17, iss. 7, pp. 1109-1133, 2011.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Patient information in health care systems mostly consists of textual data, and free text in particular makes up a significant amount of it. Information retrieval systems that concentrate on these text types have to deal with the particular challenges that medical free texts pose in order to achieve acceptable performance. This paper describes the evaluation of four different types of information retrieval strategies: keyword search, search performed by a medical domain expert, a semantics-based information retrieval tool, and a purely statistical information retrieval method. The different methods are evaluated and compared with respect to their application in medical health care systems. (unstructured information, text mining)

    @article{j30,
       author = {Kreuzthaler, M. and Bloice, M.D. and Faulstich, L. and Simonic, K.M. and Holzinger, A.},
       title = {A Comparison of Different Retrieval Strategies Working on Medical Free Texts},
       journal = {Journal of Universal Computer Science},
       volume = {17},
       number = {7},
       pages = {1109-1133},
       year = {2011},
       abstract = {Patient information in health care systems mostly consists of textual data, and free text in particular makes up a significant amount of it. Information retrieval systems that concentrate on these text types have to deal with the different challenges these medical free texts pose to achieve an acceptable performance. This paper describes the evaluation of four different types of information retrieval strategies: keyword search, search performed by a medical domain expert, a semantic based information retrieval tool, and a purely statistical information retrieval method. The different methods are evaluated and compared with respect to its appliance in medical health care systems. (unstructured information, text mining)},
       doi = {10.3217/jucs-017-07-1109},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=195700&pCurrPk=58139}
    }
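
    To make the strategy comparison in [j30] concrete, here is a hedged sketch contrasting two of the four strategy types – plain keyword matching and purely statistical TF-IDF ranking – on toy documents; the corpus and query are invented, and the scikit-learn usage is an assumption, not the paper’s setup.

    # Hedged sketch: keyword search vs. statistical (TF-IDF) retrieval,
    # illustrating two of the strategy types compared in [j30].
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [                       # toy stand-ins for medical free texts
        "patient reports chest pain and shortness of breath",
        "no chest pain; mild headache after medication",
        "follow-up for rheumatoid arthritis, joints swollen",
    ]
    query = "chest pain"

    # Strategy 1: boolean keyword match (naive substring containment).
    keyword_hits = [i for i, d in enumerate(docs)
                    if all(term in d for term in query.split())]

    # Strategy 2: purely statistical ranking by TF-IDF cosine similarity.
    vec = TfidfVectorizer()
    doc_matrix = vec.fit_transform(docs)
    scores = cosine_similarity(vec.transform([query]), doc_matrix)[0]
    ranked = sorted(range(len(docs)), key=lambda i: -scores[i])

    print("keyword hits:", keyword_hits)   # unranked result set
    print("tf-idf ranking:", ranked)       # graded relevance ordering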

  • [c69a] M. Kreuzthaler, M. Bloice, K. Simonic, and A. Holzinger, “Navigating through Very Large Sets of Medical Records: An Information Retrieval Evaluation Architecture for Non-standardized Text“, in Information Quality in e-Health, Lecture Notes in Computer Science, LNCS 7058, A. Holzinger and K. Simonic, Eds., Springer Berlin Heidelberg, 2011, pp. 455-470.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Despite the prevalence of informatics and advanced information systems, there exist large amounts of unstructured text data. This is especially true in medicine and health care, where free text is an indispensable part of information representation. In this paper, the motivation behind developing information retrieval systems in medicine and health care is described. An overview of information retrieval evaluation is given, before describing the architecture and the development of an extendible information retrieval evaluation framework. This framework allows different information retrieval tools to be compared to a gold standard in order to test their effectiveness. The paper also gives a review of available gold standards which can be used for research purposes in the area of information retrieval on medical free texts.

    @incollection{c69a,
       author = {Kreuzthaler, Markus and Bloice, Marcus and Simonic, Klaus-Martin and Holzinger, Andreas},
       title = {Navigating through Very Large Sets of Medical Records: An Information Retrieval Evaluation Architecture for Non-standardized Text},
       booktitle = {Information Quality in e-Health, Lecture Notes in Computer Science, LNCS 7058},
       editor = {Holzinger, Andreas and Simonic, Klaus-Martin},
       publisher = {Springer Berlin Heidelberg},
       pages = {455-470},
       year = {2011},
       abstract = {Despite the prevalence of informatics and advanced information systems, there exists large amounts of unstructured text data. This is especially true in medicine and health care, where free text is an indispensable part of information representation. In this paper, the motivation behind developing information retrieval systems in medicine and health care is described. An overview of information retrieval evaluation is given, before describing the architecture and the development of an extendible information retrieval evaluation framework. This framework allows different information retrieval tools to be compared to a gold standard in order to test its effectiveness. The paper also gives a review of available gold standards which can be used for research purposes in the area of information retrieval of medical free texts.},
       doi = {10.1007/978-3-642-25364-5_32},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=231641&pCurrPk=60974}
    }
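
    The core step of the evaluation framework in [c69a] is comparing a tool’s retrieval output against a gold standard. A minimal sketch of that comparison (set-based precision, recall and F1 over assumed document IDs) might look as follows:

    # Hedged sketch: scoring one retrieval run against a gold standard,
    # the core comparison step of the framework described in [c69a].
    def evaluate(retrieved: set, relevant: set) -> dict:
        tp = len(retrieved & relevant)            # correctly retrieved docs
        precision = tp / len(retrieved) if retrieved else 0.0
        recall = tp / len(relevant) if relevant else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return {"precision": precision, "recall": recall, "f1": f1}

    gold = {"doc1", "doc4", "doc7"}               # gold-standard relevant set
    run = {"doc1", "doc2", "doc4"}                # one tool's retrieval output
    print(evaluate(run, gold))                    # precision = recall = 2/3 here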

  • [c75] K. Holzinger, M. Lehner, M. Fassold, and A. Holzinger, “Archaeological Scavenger Hunt on mobile devices: from Education to e-Business: A triple adaptive mobile application for supporting Experts, Tourists and Children“, in ICEB-2011, Los Alamitos: IEEE, 2011, pp. 131-136.
    [BibTeX] [Abstract] [Download PDF]

    This paper reports on the design and development of a mobile application to support archaeological education and to raise awareness for our cultural heritage by making use of the powerful notion of play. The application reads information from Quick-Response Codes (QR-Codes) on paper sheets, which can be placed directly at the points of interest. Users can now follow an archaeological scavenger hunt along those points of interest: they start at one point of interest and get hints on how to find the others. This makes use of collective intelligence, i.e. using the mobile devices amongst the group of users as social communicators in order to get specific information on the target; through these additional discussions, both the one who poses questions and the one who receives the answer can learn incidentally. Although this app has been developed for educational purposes, it can be used just for fun, e.g. for a children’s birthday party: hiding treasures in various spots in the garden and delivering information on QR-Codes showing hints on how to find the spots. Moreover, the use of the ArchaeoApp in Tourism mode is a challenge for e-Business. (collective intelligence)

    @incollection{c75,
       author = {Holzinger, Katharina and Lehner, M. and Fassold, M. and Holzinger, A.},
       title = {Archaeological Scavenger Hunt on mobile devices: from Education to e-Business: A triple adaptive mobile application for supporting Experts, Tourists and Children},
       booktitle = {ICEB-2011},
       publisher = {IEEE},
       address = {Los Alamitos},
       pages = {131-136},
       year = {2011},
       abstract = {This paper reports on the design and development of a mobile application to support archaeological education and to raise awareness for our cultural heritage by making use of the powerful notion of play. The application reads information from Quick-Response Codes (QR-Codes) on paper sheets, which can be placed directly at the points of interest. Users can now follow an archaeological scavenger hunt along those points of interest. They start at one point of interest and get hints on how to find the others. This makes use of collective intelligence, i.e. using the mobile devices amongst the group of users as social communicators in order to get specific information on the target; through these additional discussions both the one who states questions and the one who gets the answer can learn incidentally. Although this App has been developed for educational purposes, it can be used just for fun, e.g. for a children’s birthday party: Hiding treasures in various spots in the garden and delivering information on QR-codes showing hints on how to find the spots. Moreover, the use of the ArchaeoApp in the Tourism modus is a challenge for e-Business. (collective intelligence)},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=202016&pCurrPk=57970}
    }
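
    A hedged sketch of the “triple adaptive” idea in [c75]: a decoded QR payload is resolved to mode-specific content for experts, tourists or children. The payload scheme, the point-of-interest table and the function name are invented for illustration; the actual ArchaeoApp is an iPhone application.

    # Hedged sketch: dispatching a decoded QR-code payload to one of three
    # user modes, mirroring the triple-adaptive idea in [c75]. The payload
    # scheme "poi:<id>" and the content table are illustrative assumptions.
    POIS = {
        13: {
            "expert":  "Stratigraphy report and find inventory for site 13.",
            "tourist": "Roman-era foundation, discovered during construction, now built over.",
            "child":   "Treasure hint: walk 20 steps toward the old fountain!",
        },
    }

    def content_for(payload: str, mode: str) -> str:
        """Resolve a scanned QR payload like 'poi:13' to mode-specific text."""
        scheme, _, poi_id = payload.partition(":")
        if scheme != "poi" or not poi_id.isdigit():
            return "Unknown code - not part of this scavenger hunt."
        poi = POIS.get(int(poi_id))
        return poi[mode] if poi and mode in poi else "No content for this stop."

    print(content_for("poi:13", "child"))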

  • [UAT] K. Holzinger, M. Lehner, M. Fassold, and A. Holzinger, “Ubiquitous Computing for Augmenting the Learning Experience within an Urban Archaeological Tour“, in 15th International Conference on Cultural Heritage and New Technologies, W. Boerner and S. Uhlirz, Eds., Vienna: Stadtarchäologie, 2011, pp. 348-356.
    [BibTeX] [Abstract] [Download PDF]

    A particular problem for students of urban archaeology is that objects found at archaeological excavations have been removed to a museum or depot and the site is built over, thus no longer visible. Methods of labelling these sites and providing information about their history and contents are available and should be made easily accessible using ubiquitous/mobile devices (e.g. iPhone, iPad etc.). We first applied Radio Frequency Identification (RFID); however, although handling both the transponder and the receiver along with a mobile device worked technologically, it had some usability disadvantages. Based on these experiences, we developed an iPhone application (App) and used Quick-Response Codes (QR-Codes) instead of RFID. The current ArchaeoApp is designed for learning purposes along a route of 13 points of interest for a lecture on urban archaeology. (Ubiquitous Computing)

    @incollection{UAT,
       author = {Holzinger, Katharina and Lehner, Manfred and Fassold, Markus and Holzinger, Andreas},
       title = {Ubiquitous Computing for Augmenting the Learning Experience within an Urban Archaeological Tour},
       booktitle = {15th International Conference on Cultural Heritage and New Technologies},
       editor = {Boerner, Wolfgang and Uhlirz, Susanne},
       publisher = {Stadtarchäologie},
       address = {Vienna},
       pages = {348-356},
       year = {2011},
       abstract = {A particular problem for students of urban archaeology is that objects found at archaeological excavations have been removed to a museum or depot and the site is built over, thus no longer visible. Methods of labelling these sites and providing information about history and contents are available and should be made easily accessible using ubiquitous/mobile devices (e.g. iPhone, iPad etc.). We applied Radio Frequency Identification (RFID) first, however, the handling of both the transponder and the receiver along with a mobile device was technologically working, but had some usability disadvantages. Based on this experiences, we developed an application for an iPhone (iPhone App) and used Quick-Response Codes (QR-Codes) instead of RFID. The current ArchaeoApp is developed to use it for learning purposes along a route of 13 points of interest for a lecture on urban archaeology. (Ubiquitous Computing)},
       url = {http://www.stadtarchaeologie.at/?page_id=4268}
    }

  • [c77] A. Holzinger, O. Waclick, F. Kappe, S. Lenhart, G. Orasche, and B. Peischl, “Rapid Prototyping on the example of Software Development in the automotive industry: The Importance of their Provision for Software Projects at the Correct Time“, in ICE-B 2011, Los Alamitos: IEEE, 2011, pp. 57-61.
    [BibTeX] [Abstract] [Download PDF]

    Software prototyping is a powerful method for the identification of usability problems at the very beginning of software development. This paper deals with the development of a prototype used for usability testing and with presenting it to stakeholders at the correct time. A low-fidelity (lo-fi) prototype was created for a software product in the automotive industry; however, the usability test was shifted so that it was conducted with the latest build of the software application. This paper emphasizes the effectiveness of prototypes together with usability studies. It gives an overview of the experiences with usability testing on a high-fidelity (hi-fi) prototype late in the software development process. The main conclusion is that we assume that solving the usability findings of a hi-fi prototype is more difficult and expensive than using results from a lo-fi prototype earlier. In future work, we will conduct a lo-fi prototype usability study to confirm this assumption. (Software Engineering)

    @incollection{c77,
       author = {Holzinger, A. and Waclick, O. and Kappe, F. and Lenhart, S. and Orasche, G. and Peischl, B.},
       title = {Rapid Prototyping on the example of Software Development in the automotive industry: The Importance of their Provision for Software Projects at the Correct Time},
       booktitle = {ICE-B 2011},
       publisher = {IEEE},
       address = {Los Alamitos},
       pages = {57-61},
       year = {2011},
       abstract = {Software prototyping is a powerful method for the identification of usability problems at the very beginning of software development. This paper deals with the development of a prototype used for usability testing and presenting it to stakeholders at the correct time. A low-fidelity (lo-fi) prototype was created for a software product in the automotive industry, however the usability test was shifted to conduct it with the latest build of the software application. This paper emphasizes on the effectiveness of prototypes together with usability studies. It gives an overview about the experiences with usability testing on a high-fidelity (hi-fi) prototype late in the software development process. The main conclusion is that we assume that solving the usability findings of a hi-fi prototype is more difficult and expensive than using results from a  lo-fi prototype earlier. In future, we will conduct  a lo-fi prototype usability study to confirm this assumption. (Software Engineering)},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=234420&pCurrPk=57667}
    }

  • [j28a] A. Holzinger, K. Simonic, and J. Steyrer, “Information Overload – stößt die Medizin an ihre Grenzen?“, Wissensmanagement, vol. 13, iss. 1, pp. 10-12, 2011.
    [BibTeX] [Abstract] [Download PDF]

    Modern information technology enables rapid access to ever larger volumes of data. More data, however, does not mean more information, and certainly not more knowledge. Today’s notion of information is shaped by the ubiquitous, always-available mass media and is increasingly becoming a synonym for the theoretical possibility of all-encompassing informedness. While technical performance is rising rapidly, the cognitive "performance" of the end users is reaching its limits. Medical documentation illustrates this well: an electronic patient record can contain several hundred individual documents. Information systems deliver this information to the medical workstation at the push of a button, yet little time remains there for decision making – around five minutes on average. [1] Under these tight time constraints, grasping the relevant information itself becomes the critical factor.

    @article{j28a,
       author = {Holzinger, Andreas and Simonic, Klaus-Martin and Steyrer, Johannes},
       title = {Information Overload - stößt die Medizin an ihre Grenzen?},
       journal = {Wissensmanagement},
       volume = {13},
       number = {1},
       pages = {10-12},
       year = {2011},
       abstract = {Moderne Informationstechnologie ermöglicht raschen Zugriff auf immer größere Datenmengen. Mehr Daten heißt aber nicht mehr Information und schon gar nicht mehr Wissen. Der heutige Informationsbegriff ist geprägt von den allgegenwärtigen und stets verfügbaren Massenmedien und wandelt sich mehr und mehr zum Synonym für die theoretische Möglichkeit einer allumfassenden Informiertheit. Während die technische Performanz rapide steigt, stößt die kognitive "Performance" der End-Benutzer an ihre Grenzen. Beispielhaft sei dies an Hand der medizinischen Dokumentation verdeutlicht: Eine elektronische Patientenakte kann mehrere hundert Einzeldokumente enthalten. Informationssysteme bringen diese Informationen auf Knopfdruck an den medizinischen Arbeitsplatz. Doch dort bleibt für die Entscheidungsfindung nur wenig Zeit. Rund fünf Minuten sind es im Durchschnitt. [1] Unter diesen engen zeitlichen Rahmenbedingungen wird das Erfassen der relevanten Information selbst zum kritischen Faktor.},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=222615&pCurrPk=54589}
    }

  • [e9] A. Holzinger and K. Simonic, Information Quality in e-Health. Lecture Notes in Computer Science LNCS 7058, Heidelberg, Berlin, New York: Springer, 2011.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Medical information systems are already highly sophisticated; however, while computer performance has increased exponentially, human cognitive evolution cannot advance at the same speed. Consequently, the focus on interaction and communication between humans and computers is of increasing importance in medicine and healthcare. The daily actions of medical professionals must be the central concern of any innovation. Simply surrounding and supporting them with new and emerging technologies is not sufficient if these increase rather than decrease the workload. Information systems are a central component of modern knowledge-based medicine and health services; therefore, it is necessary for knowledge management to continually be adapted to the needs and demands of medical professionals within this environment of steadily increasing high-tech medicine. Information processing, in particular its potential effectiveness in modern health services and the optimization of processes and operational sequences, is also of increasing interest. It is particularly important for medical information systems (e.g., hospital information systems and decision support systems) to be designed with the daily schedules, responsibilities and exigencies of the medical professionals in mind. Within the context of this symposium our end users are medical professionals and justifiably expect the software technology to provide a clear benefit: to support them efficiently and effectively in their daily activities. In biomedicine, healthcare, clinical medicine and the life sciences, professional end users are confronted with an increased mass of data. Research in human-computer interaction (HCI) and information retrieval (IR) or knowledge discovery in databases and data mining (KDD) has long been working to develop methods that help users to identify, extract, visualize and understand useful information from these masses of high-dimensional and mostly weakly structured data. HCI and IR/KDD, however, take very different perspectives in tackling this challenge; and historically, they have had little collaboration. Our goal is to combine these efforts to support professionals in interactively analyzing information properties and visualizing the relevant information without becoming overwhelmed. The challenge is to bring HCI and IR/KDD researchers to work together and hence reap the benefits that computer science/informatics can provide to the areas of medicine, healthcare and the life sciences.

    @book{e9,
       author = {Holzinger, Andreas and Simonic, Klaus-Martin},
       title = {Information Quality in e-Health. Lecture Notes in Computer Science LNCS 7058},
       publisher = {Springer},
       address = {Heidelberg, Berlin, New York},
       year = {2011},
       abstract = {Medical information systems are already highly sophisticated; however, while computer performance has increased exponentially, human cognitive evolution cannot advance at the same speed. Consequently, the focus on interaction and communication between humans and computers is of increasing importance in medicine and healthcare. The daily actions of medical professionals must be the central concern of any innovation. Simply surrounding and supporting them with new and emerging technologies is not sufficient if these increase rather than decrease the workload. Information systems are a central component of modern knowledge-based medicine and health services; therefore, it is necessary for knowledge management to continually be adapted to the needs and demands of medical professionals within this environment of steadily increasing high-tech medicine. Information processing, in particular its potential effectiveness in modern health services and the optimization of processes and operational sequences, is also of increasing interest. It is particularly important for medical information systems (e.g., hospital information systems and decision support systems) to be designed with the daily schedules, responsibilities and exigencies of the medical professionals in mind. Within the context of this symposium our end users are medical professionals and justifiably expect the software technology to provide a clear benefit: to support them efficiently and effectively in their daily activities. In biomedicine, healthcare, clinical medicine and the life sciences, professional end users are confronted with an increased mass of data. Research in human-computer interaction (HCI) and information retrieval (IR) or knowledge discovery in databases and data mining (KDD) has long been working to develop methods that help users to identify, extract, visualize and understand useful information from these masses of high-dimensional and mostly weakly structured data. HCI and IR/KDD, however, take very different perspectives in tackling this challenge; and historically, they have had little collaboration. Our goal is to combine these efforts to support professionals in interactively analyzing information properties and visualizing the relevant information without becoming overwhelmed. The challenge is to bring HCI and IR/KDD researchers to work together and hence reap the benefits that computer science/informatics can provide to the areas of medicine, healthcare and the life sciences},
       doi = {10.1007/978-3-642-25364-5},
       url = {http://rd.springer.com/content/pdf/10.1007%2F978-3-642-25364-5.pdf}
    }

  • [j28] A. Holzinger, G. Searle, and M. Wernbacher, “The effect of Previous Exposure to Technology (PET) on Acceptance and its importance in Usability Engineering“, Springer Universal Access in the Information Society International Journal, vol. 10, iss. 3, pp. 245-260, 2011.
    [BibTeX] [Abstract] [Download PDF]

    In Usability and Accessibility Engineering, metric standards are vital. However, the development of a set of reciprocal metrics—which can serve as an extension of, and supplement to, current standards—becomes indispensable when the specific needs of end-user groups, such as the elderly and people with disabilities, are concerned. While ISO 9126 remains critical to the usability of a product, the needs of the elderly population are forcing the integration of other factors. Familiarity and recognisability are not relevant to someone with no experience and therefore no referent; however, acceptance becomes a major factor in their willingness to learn something new, and this acceptance requires trust based on association. Readability and legibility are of less relevance to a blind person than to someone with failing eyesight. This paper describes some usability metrics ascertained on the basis of experiments made with applications for elderly people throughout the summer term of 2007. The factors that influence older users’ acceptance of software, including the extent of their previous exposure to technology, are evaluated in order to provide short guidelines for software developers on how to design and develop software for the elderly. The evaluation of the expectations, behavior, abilities, and limitations of prospective end-users is considered of primary importance for the development of technology. A total of N = 31 participants (22 women/9 men) took part in various tests. The participants’ ages ranged from 49 to 96 years, with an average age of 79. Five of the tests were designed for a PDA or cellular phone; one test was designed for a laptop PC. Of the total of 55 tests, 52 provided sufficient data to evaluate the results. In 23 of the tests, all tasks were completed. As a main outcome, it could be shown experimentally that acceptance is related to a factor which in this paper is called PET (Previous Exposure to Technology). This is discussed in light of the aforementioned metrics. (Previous experience, Previous knowledge, Technology Acceptance Model, TAM)

    @article{j28,
       author = {Holzinger, Andreas and Searle, Gig and Wernbacher, Michaela},
       title = {The effect of Previous Exposure to Technology (PET) on Acceptance and its importance in Usability Engineering},
       journal = {Springer Universal Access in the Information Society International Journal},
       volume = {10},
       number = {3},
       pages = {245-260},
       year = {2011},
       abstract = {In Usability and Accessibility Engineering, metric standards are vital. However, the development of a set of reciprocal metrics—which can serve as an extension of, and supplement to, current standards—becomes indispensable when the specific needs of end-user groups, such as the elderly and people with disabilities, are concerned. While ISO 9126 remains critical to the usability of a product, the needs of the elderly population are forcing the integration of other factors. Familiarity and recognisability are not relevant to someone with no experience and therefore no referent; however, acceptance becomes a major factor in their willingness to learn something new and this acceptance requires trust based on association. Readability and legibility are of less relevance to a blind person than to someone with failing eyesight. This paper describes some usability metrics ascertained on the basis of experiments made with applications for elderly people throughout the summer term of 2007. The factors that influence the older users’ acceptance of software, including the extent of their previous exposure to technology, are evaluated in order to provide short guidelines for software developers on how to design and develop software for the elderly. The evaluation of the expectations, behavior, abilities, and limitations of prospective end-users is considered of primary importance for the development of technology. A total of N = 31 participants (22 women/9 men) took part in various tests. The participants’ ages ranged from 49 to 96 years with an average age of 79. Five of the tests were designed for a PDA or cellular phone, one test was designed for a laptop PC. Of the total of 55 tests, 52 tests provided sufficient data to evaluate the results. In 23 of the tests, all tasks were completed. As a main outcome, it can be experimentally proved that the acceptance is related to a factor, which is this paper is called PET (Previous Exposure to Technology). This is discussed in light of the aforementioned metrics. (Previous experience, Previous knowledge, Technology Acceptance Model, TAM)},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=201710&pCurrPk=46893}
    }

  • [c80] A. Holzinger, G. Searle, A. Auinger, and M. Ziefle, “Informatics as Semiotics Engineering: Lessons Learned from Design, Development and Evaluation of Ambient Assisted Living Applications for Elderly People“, in Universal Access in Human-Computer Interaction. Context Diversity, Lecture Notes in Computer Science, LNCS 6767, C. Stephanidis, Ed., Berlin, Heidelberg: Springer, 2011, pp. 183-192.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Assisted Living Systems with Ambient Intelligence technology raise new challenges for system and software engineering. The development of Assisted Living applications requires domain-oriented interdisciplinary research – it is essential to know both the domain and the context. It is also important that context-descriptive prototypes are: (1) an integrated description of the system, the work processes and the context of use; and (2) a formal description. Because of (1), designers, including end users, are provided with a means to investigate the system in the context of the envisioned work processes. Because of (2), investigations into questions of formalization and automation, not only of the system but also of the work processes, can be made explicit and become subject to discussion and further elaboration. Adapted engineering approaches are required to cope with the specific characteristics of ambient intelligent systems. Elderly people are the most demanding stakeholders for IT development – even highly sophisticated systems will not be accepted if they do not address the real needs of the elderly and are not easily accessible and usable. Communication processes are essential in that respect. The evolution and, in particular, the spread of unambiguous symbols were a necessary precondition for the transfer of information, as for example in sign language, speech, writing, etc. In this paper, we report on our experiences in the design, development and evaluation of computer applications in the area of ambient assisted living for elderly people, where, in our experience, engineers greatly underestimate the power of appropriate knowledge of semiotics, and we demonstrate how we can emphasize universal access by thinking of informatics as semiotics engineering.

    @incollection{c80,
       author = {Holzinger, Andreas and Searle, Gig and Auinger, Andreas and Ziefle, Martina},
       title = {Informatics as Semiotics Engineering: Lessons Learned from Design, Development and Evaluation of Ambient Assisted Living Applications for Elderly People},
       booktitle = {Universal Access in Human-Computer Interaction. Context Diversity, Lecture Notes in Computer Science, LNCS 6767},
       editor = {Stephanidis, Constantine},
       publisher = {Springer},
       address = {Berlin, Heidelberg},
       pages = {183-192},
       year = {2011},
       abstract = {Assisted Living Systems with Ambient Intelligence technology raise new challenges to system and software engineering. The development of Assisted Living applications requires domain-oriented interdisciplinary research – it is essential to know both the domain and the context. It is also important that context-descriptive prototypes are: (1) an integrated description that describes system, work processes, context of use; and (2) a formal description. Because (1), designers, including end users, are provided with a means to investigate the system in the context of the envisioned work processes. Because (2), investigations into questions of formalization and automation, not only of the system, but also of the work processes, can be made explicitly and become subject for discussions and further elaboration. Adapted engineering approaches are required to cope with the specific characteristics of ambient intelligent systems. Elderly are the most demanding stakeholders for IT-development – even highly sophisticated systems will not be accepted when they do not address the real needs of the elderly and are not easily accessible and usable. Communication processes are essential in that respect. The evolution and, in particular, the spread of unambiguous symbols were an necessary postulate for the transfer of information, as for example in sign language, speech, writing, etc. In this paper, we report on our experiences in design, development and evaluation of computer applications in the area of ambient assisted living for elderly people, where, to our experiences, engineers highly underestimate the power of appropriate knowledge on semiotics and we demonstrate how we can emphasize universal access by thinking of informatics as semiotics engineering.},
       doi = {10.1007/978-3-642-21666-4_21},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=231646&pCurrPk=56673}
    }

  • [B83] A. Holzinger, R. Scherer, and M. Ziefle, “Navigational User Interface Elements on the Left Side: Intuition of Designers or Experimental Evidence?“, in Human-Computer Interaction – INTERACT 2011, Springer Lecture Notes in Computer Science LNCS 6947, Heidelberg, Berlin, New York: Springer, 2011, pp. 162-177.
    [BibTeX]
    @incollection{B83,
       author = {Holzinger, Andreas and Scherer, Reinhold and Ziefle, Martina},
       title = {Navigational User Interface Elements on the Left Side: Intuition of Designers or Experimental Evidence?},
       booktitle = {Human-Computer Interaction – INTERACT 2011, Springer Lecture Notes in Computer Science LNCS 6947},
       publisher = {Springer},
       address = {Heidelberg, Berlin, New York},
       pages = {162-177},
       year = {2011}
    }

  • [j31] A. Holzinger, P. Kosec, G. Schwantzer, M. Debevc, R. Hofmann-Wellenhof, and J. Frühauf, “Design and Development of a Mobile Computer Application to Reengineer Workflows in the Hospital and the Methodology to evaluate its Effectiveness“, Journal of Biomedical Informatics, vol. 44, iss. 6, pp. 968-977, 2011.
    [BibTeX] [Abstract] [DOI]

    This paper describes a new method of collecting additional data for skin cancer research from patients in the hospital, using the system Mobile Computing in Medicine Graz (MoCoMed-Graz). This system departs from the traditional paper-based questionnaire data collection methods and implements a new composition of evaluation methods to demonstrate its effectiveness. The patients fill out a questionnaire on a Tablet-PC (or iPad device) and the resulting medical data is integrated into the electronic patient record for display when the patient enters the doctor’s examination room. Since the data is now part of the electronic patient record, the doctor can discuss the data together with the patient, making corrections or completions where necessary, thus enhancing data quality and patient empowerment. A further advantage is that all questionnaires are in the system at the end of the day – manual entry is no longer necessary – consequently raising data completeness. The front end was developed using a User Centered Design process for touch tablet computers and transfers the data in XML to the SAP-based enterprise hospital information system. The system was evaluated at the Graz University Hospital – where about 30 outpatients consult the pigmented lesion clinic each day – following Bronfenbrenner’s three-level perspective of microlevel, mesolevel and macrolevel: On the microlevel, the questionnaire responses of 194 outpatients, evaluated with the System Usability Scale (SUS), resulted in a median score of 97.5 (min: 50, max: 100), which showed that the system is easy to use. On the mesolevel, the time spent by medical doctors was measured before and after the implementation of the system; the medical task performance time of 20 doctors (age median 43 (min: 29; max: 50)) showed a reduction of 90%. On the macrolevel, a cost model was developed to show how much money can be saved by the hospital management. This showed that, for an average of 30 patients per day, on a 250-day basis per year in this single clinic, the hospital management can save up to 40,000 EUR per annum, proving that mobile computers can successfully contribute to workflow optimization. (Mobile Computing, Smart Hospital)

    @article{j31,
       author = {Holzinger, A. and Kosec, P. and Schwantzer, G. and Debevc, M. and Hofmann-Wellenhof, R. and Frühauf, J.},
       title = {Design and Development of a Mobile Computer Application to Reengineer Workflows in the Hospital and the Methodology to evaluate its Effectiveness},
       journal = {Journal of Biomedical Informatics},
       volume = {44},
       number = {6},
       pages = {968-977},
       year = {2011},
       abstract = {This paper describes a new method of collecting additional data for the purpose of skin cancer research from the patients in the hospital using the system Mobile Computing in Medicine Graz (MoCoMed-Graz). This system departs from the traditional paper-based questionnaire data collection methods and implements a new composition of evaluation methods to demonstrate its effectiveness.
    The patients fill out a questionnaire on a Tablet-PC (or iPad device) and the resulting medical data is integrated into the electronic patient record for display when the patient enters the doctor’s examination room. Since the data is now part of the electronic patient record, the doctor can discuss the data together with the patient, making corrections or completions where necessary, thus enhancing data quality and patient empowerment. A further advantage is that all questionnaires are in the system at the end of the day – manual entry is no longer necessary – consequently raising data completeness. The front end was developed using a User Centered Design Process for touch tablet computers and transfers the data in XML to the SAP-based enterprise hospital information system. The system was evaluated at the Graz University Hospital – where about 30 outpatients consult the pigmented lesion clinic each day – following Bronfenbrenner’s three-level perspective of microlevel, mesolevel and macrolevel. On the microlevel, the questionnaires answered by 194 outpatients and evaluated with the System Usability Scale (SUS) resulted in a median score of 97.5 (min: 50, max: 100), which showed that the system is easy to use. On the mesolevel, the time spent by medical doctors was measured before and after the implementation of the system; the medical task performance time of 20 doctors (age median 43; min: 29; max: 50) showed a reduction of 90%. On the macrolevel, a cost model was developed to show how much money the hospital management can save. It showed that, for an average of 30 patients per day over 250 clinic days per year in this single clinic, the hospital management can save up to 40,000 EUR per annum, demonstrating that mobile computers can successfully contribute to workflow optimization. (Mobile Computing, Smart Hospital)},
       doi = {10.1016/j.jbi.2011.07.003}
    }
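
    The cost figures in this abstract (30 patients per day, 250 clinic days per year, up to 40,000 EUR per annum) invite a quick plausibility check. The Python sketch below parameterizes the calculation; the minutes saved per questionnaire and the hourly staff rate are illustrative assumptions chosen so that the result lands at the reported upper bound, not figures taken from the paper.

    # Back-of-the-envelope version of the cost model described above.
    # Patients/day and clinic days come from the abstract; the time saving
    # per questionnaire and the hourly staff rate are hypothetical.
    PATIENTS_PER_DAY = 30
    CLINIC_DAYS_PER_YEAR = 250
    MINUTES_SAVED_PER_PATIENT = 8.0   # assumed: no manual transcription needed
    STAFF_COST_PER_HOUR_EUR = 40.0    # assumed blended staff rate

    hours_saved = (PATIENTS_PER_DAY * CLINIC_DAYS_PER_YEAR
                   * MINUTES_SAVED_PER_PATIENT) / 60.0
    annual_savings = hours_saved * STAFF_COST_PER_HOUR_EUR
    print(f"{hours_saved:.0f} h/year saved -> {annual_savings:,.0f} EUR/year")
    # -> 1000 h/year saved -> 40,000 EUR/year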

  • [c78] A. Holzinger, M. Brugger, and W. Slany, “Applying Aspect Oriented Programming (AOP) in Usability Engineering processes: On the example of Tracking Usage Information for Remote Usability Testing“, in Proceedings of the 8th International Conference on electronic Business and Telecommunications, 2011, pp. 53-56.
    [BibTeX] [Abstract] [Download PDF]

    Usability Engineering can be seen as a crosscutting concern within the software development process. Aspect Oriented Programming (AOP) on the other hand is a technology to support separation of concerns in software engineering. Therefore it stands to reason to support usability engineering by applying a technology designed to handle distinct concerns in one single application. Remote usability testing has been proven to deliver good results and AOP is the technology that can be used to streamline the process of testing various software products without mixing concerns by separating the generation of test data from program execution. In this paper we present a sample application, discuss our practical experiences with this approach, and provide recommendations for further development. (Software Engineering)

    @inproceedings{c78,
       author = {Holzinger, Andreas and Brugger, Martin and Slany, Wolfgang},
       title = {Applying Aspect Oriented Programming (AOP) in Usability Engineering processes: On the example of Tracking Usage Information for Remote Usability Testing},
       booktitle = {Proceedings of the 8th International Conference on electronic Business and Telecommunications},
       editor = {Marca, David A. and Shishkov, Boris and Sinderen, Marten van},
       publisher = {SciTePress},
       address = {Setubal},
       pages = {53-56},
       year = {2011},
       abstract = {Usability Engineering can be seen as a crosscutting concern within the software development process. Aspect Oriented Programming (AOP) on the other hand is a technology to support separation of concerns in software engineering. Therefore it stands to reason to support usability engineering by applying a technology designed to handle distinct concerns in one single application. Remote usability testing has been proven to deliver good results and AOP is the technology that can be used to streamline the process of testing various software products without mixing concerns by separating the generation of test data from program execution. In this paper we present a sample application, discuss our practical experiences with this approach, and provide recommendations for further development. (Software Engineering)},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=231626&pCurrPk=57666}
    }
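
    Since AspectJ-style pointcuts have no direct counterpart in Python, the following sketch only approximates the paper’s idea with a decorator: the tracking “aspect” wraps event handlers without touching their bodies, keeping usage logging for remote usability testing separate from the program logic. All identifiers (track_usage, USAGE_LOG, on_save_clicked) are hypothetical.

    # AOP-style separation of concerns, approximated with a decorator:
    # the tracking code never appears inside the handler bodies.
    import functools
    import time

    USAGE_LOG = []  # stand-in for a remote usability-testing sink

    def track_usage(handler):
        """'Advice': record handler name, timestamp and duration."""
        @functools.wraps(handler)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return handler(*args, **kwargs)
            finally:
                USAGE_LOG.append({
                    "event": handler.__name__,
                    "at": time.time(),
                    "duration_s": time.perf_counter() - start,
                })
        return wrapper

    @track_usage
    def on_save_clicked():
        pass  # the actual UI logic stays free of tracking code

    on_save_clicked()
    print(USAGE_LOG)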

  • [c74] A. Holzinger, L. Basic, B. Peischl, and M. Debevc, “Handwriting Recognition on Mobile Devices: State of the art technology, usability and business analysis“, in Proceedings of the 8th International Conference on electronic Business and Telecommunications, Los Alamitos: IEEE, 2011, pp. 219-227.
    [BibTeX] [Abstract] [Download PDF]

    The software company FERK-Systems has been providing mobile health care information systems for various German medical services (e.g. Red Cross) for many years. Since handwriting is an issue in the medical and health care domain, a system for handwriting recognition on mobile devices has been developed within the last few years. While we have been continually improving the degree of recognition within the system, there are still changes necessary to ensure the reliability that is imperative in this critical domain. In this paper, we present the major improvements made since our presentation at ICE-B 2010, along with a recent real-life usability evaluation. Moreover, we discuss some of the advantages and disadvantages of current systems, along with some business aspects of the vast and growing mobile handwriting recognition market. (mobile computing, handwriting recognition, data pre-processing)

    @inproceedings{c74,
       author = {Holzinger, A. and Basic, L. and Peischl, B. and Debevc, M.},
       title = {Handwriting Recognition on Mobile Devices: State of the art technology, usability and business analysis},
       booktitle = {Proceedings of the 8th International Conference on electronic Business and Telecommunications},
       publisher = {IEEE},
       address = {Los Alamitos},
       pages = {219-227},
       year = {2011},
       abstract = {The software company FERK-Systems has been providing mobile health care information systems for various German medical services (e.g. Red Cross) for many years. Since handwriting is an issue in the medical and health care domain, a system for handwriting recognition on mobile devices has been developed within the last few years. While we have been continually improving the degree of recognition within the system, there are still changes necessary to ensure the reliability that is imperative in this critical domain. In this paper, we present the major improvements made since our presentation at ICE-B 2010, along with a recent real-life usability evaluation. Moreover, we discuss some of the advantages and disadvantages of current systems, along with some business aspects of the vast and growing mobile handwriting recognition market. (mobile computing, handwriting recognition, data pre-processing)},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=234693&pCurrPk=57665}
    }

  • [j32] A. Holzinger, M. Baernthaler, W. Pammer, H. Katz, V. Bjelic-Radisic, and M. Ziefle, “Investigating paper vs. screen in real-life hospital workflows: Performance contradicts perceived superiority of paper in the user experience“, International Journal of Human-Computer Studies, vol. 69, iss. 9, pp. 563-570, 2011.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Introduction: All hospitals in the province of Styria (Austria) are well equipped with sophisticated information technology, which provides all-encompassing on-screen patient information. Previous research on the theoretical properties, advantages and disadvantages of reading from paper vs. reading from a screen resulted in the assumption that reading from a screen is slower, less accurate and more tiring. However, recent flat screen technology, especially on the basis of LCD, is of such high quality that this assumption should now be challenged. As the electronic storage and presentation of information has many advantages in addition to faster transfer and processing of the information, the usage of electronic screens in clinics should outperform the traditional hardcopy in both execution and preference ratings. This study took place in a county hospital in Styria, Austria, with 111 medical professionals working in a real-life setting. They were each asked to read original and authentic diagnosis reports, a gynecological report and an internal medical document, on both screen and paper in a randomly assigned order. Reading comprehension was measured by the Chunked Reading Test, and speed and accuracy of reading performance were quantified. In order to get a full understanding of the clinicians’ preferences, subjective ratings were also collected. Results: Wilcoxon signed-rank tests showed no significant differences in reading performance between paper and screen. However, medical professionals showed a significant (90%) preference for reading from paper. Despite the high quality and the benefits of electronic media, paper still has some qualities which cannot be provided electronically to date. (Textual Information, Chunked Reading Test)

    @article{j32,
       author = {Holzinger, Andreas and Baernthaler, Markus and Pammer, Walter and Katz, Herman and Bjelic-Radisic, Vesna and Ziefle, Martina},
       title = {Investigating paper vs. screen in real-life hospital workflows: Performance contradicts perceived superiority of paper in the user experience},
       journal = {International Journal of Human-Computer Studies},
       volume = {69},
       number = {9},
       pages = {563-570},
       year = {2011},
       abstract = {Introduction: All hospitals in the province of Styria (Austria) are well equipped with sophisticated information technology, which provides all-encompassing on-screen patient information. Previous research on the theoretical properties, advantages and disadvantages of reading from paper vs. reading from a screen resulted in the assumption that reading from a screen is slower, less accurate and more tiring. However, recent flat screen technology, especially on the basis of LCD, is of such high quality that this assumption should now be challenged. As the electronic storage and presentation of information has many advantages in addition to faster transfer and processing of the information, the usage of electronic screens in clinics should outperform the traditional hardcopy in both execution and preference ratings. This study took place in a county hospital in Styria, Austria, with 111 medical professionals working in a real-life setting. They were each asked to read original and authentic diagnosis reports, a gynecological report and an internal medical document, on both screen and paper in a randomly assigned order. Reading comprehension was measured by the Chunked Reading Test, and speed and accuracy of reading performance were quantified. In order to get a full understanding of the clinicians' preferences, subjective ratings were also collected. Results: Wilcoxon signed-rank tests showed no significant differences in reading performance between paper and screen. However, medical professionals showed a significant (90%) preference for reading from paper. Despite the high quality and the benefits of electronic media, paper still has some qualities which cannot be provided electronically to date. (Textual Information, Chunked Reading Test)},
       doi = {10.1016/j.ijhcs.2011.05.002},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=233181&pCurrPk=57994}
    }
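
    The statistical comparison named in the abstract, a Wilcoxon signed-rank test on paired paper/screen readings, looks as follows in Python with SciPy; the reading scores below are invented placeholders purely to show the call, not data from the study.

    # Paired paper-vs-screen comparison with a Wilcoxon signed-rank test.
    from scipy.stats import wilcoxon

    paper_scores  = [41, 38, 45, 40, 39, 44, 42, 37, 43, 40]  # placeholder data
    screen_scores = [40, 39, 44, 41, 38, 45, 41, 38, 42, 41]  # placeholder data

    stat, p = wilcoxon(paper_scores, screen_scores)
    print(f"W = {stat:.1f}, p = {p:.3f}")  # p > 0.05 -> no significant difference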

  • [c74b] A. Holzinger, “Weakly Structured Data in Health-Informatics: The Challenge for Human-Computer Interaction“, in Proceedings of INTERACT 2011 Workshop: Promoting and supporting healthy living by design, N. Baghaei, G. Baxter, L. Dow, and S. Kimani, Eds., Lisbon (Portugal): IFIP, 2011, pp. 5-7.
    [BibTeX]
    @incollection{c74b,
       author = {Holzinger, Andreas},
       title = {Weakly Structured Data in Health-Informatics: The Challenge for Human-Computer Interaction},
       booktitle = {Proceedings of INTERACT 2011 Workshop: Promoting and supporting healthy living by design},
       editor = {Baghaei, Nilufar and Baxter, Gordon and Dow, Lisa and Kimani, Stephen},
       publisher = {IFIP},
       address = {Lisbon (Portugal)},
       pages = {5-7},
       year = {2011}
    }

  • [c66c] A. Holzinger, “Interacting with Information: Challenges in Human-Computer Interaction and Information Retrieval (HCI-IR)“, in Multiconference on Computer Science and Information Systems (MCCSIS), Interfaces and Human-Computer Interaction, Rome: IADIS, 2011, pp. 13-17.
    [BibTeX] [Download PDF]
    @incollection{c66c,
       author = {Holzinger, Andreas},
       title = {Interacting with Information: Challenges in Human-Computer Interaction and Information Retrieval (HCI-IR)},
       booktitle = {Multiconference on Computer Science and Information Systems (MCCSIS), Interfaces and Human-Computer Interaction},
       publisher = {IADIS},
       address = {Rome},
       pages = {13-17},
       year = {2011},
       url = {http://www.ihci-conf.org/keynotes.asp}
    }

  • [bod2] A. Holzinger, Successful Management of Research and Development, Norderstedt: BoD, 2011.
    [BibTeX] [Abstract]

    Establishment, development and management of a successful research and development group require systematic knowledge and skills and a target-oriented process model. It begins with a vision and requires a clear mission and a corresponding strategy in order to achieve these goals. The people involved in the team work are of primary importance; everything depends on the interaction of this team. To create this team, and to develop, scaffold, advance and lead it, is a challenge. However, even the best team is ineffective if there is no funding. Money is not everything, but without money everything is nothing. A substantial budget is required to cover staff costs, premises and basic equipment, travel, computers and basic software, a scientific software portfolio, hosting, special equipment, literature, workshop organization, visiting researcher invitations, etc. In an environment of decreasing public budgets, external funding becomes increasingly important in order to sustain international competitiveness and quality and to maintain excellence. Ultimately, the team is assessed by output, which is composed of measurable, published “items”. “If you ask what real knowledge is, I answer, that which enables action” (Hermann von Helmholtz).

    @book{bod2,
       author = {Holzinger, Andreas},
       title = {Successful Management of Research and Development},
       publisher = {BoD},
       address = {Norderstedt},
       year = {2011},
       abstract = {Establishment, development and management of a successful research and development group require systematic knowledge and skills and a target-oriented process model. It begins with a vision and requires a clear mission and a corresponding strategy in order to achieve these goals. The people involved in the team work are of primary importance; everything depends on the interaction of this team. To create this team, and to develop, scaffold, advance and lead it, is a challenge. However, even the best team is ineffective if there is no funding. Money is not everything, but without money everything is nothing. A substantial budget is required to cover staff costs, premises and basic equipment, travel, computers and basic software, a scientific software portfolio, hosting, special equipment, literature, workshop organization, visiting researcher invitations, etc. In an environment of decreasing public budgets, external funding becomes increasingly important in order to sustain international competitiveness and quality and to maintain excellence. Ultimately, the team is assessed by output, which is composed of measurable, published “items”. “If you ask what real knowledge is, I answer, that which enables action” (Hermann von Helmholtz).}
    }

  • [B81] B. Höll, S. Spat, J. Plank, L. Schaupp, K. Neubauer, T. Pieber, and A. Holzinger, “Design einer mobilen Anwendung für das stationäre Glukosemanagement“, in eHealth2011: Health Informatics meets eHealth – von der Wissenschaft zur Anwendung und zurück, Wien: Austrian Computer Society, 2011, pp. 51-57.
    [BibTeX]
    @incollection{B81,
       author = {Höll, Bernhard and Spat, Stephan and Plank, Johannes and Schaupp, Lukas and Neubauer, Katharina and Pieber, Thomas and Holzinger, Andreas},
       title = {Design einer mobilen Anwendung für das stationäre Glukosemanagement},
       booktitle = {eHealth2011: Health Informatics meets eHealth - von der Wissenschaft zur Anwendung und zurück},
       publisher = {Austrian Computer Society},
       address = {Wien},
       pages = {51-57},
       year = {2011}
    }

  • [c76] M. Ebner, A. Holzinger, N. Scerbakov, and P. Tsang, “EduPunks and Learning Management Systems – Conflict or Chance?“, in Hybrid Learning, Lecture Notes in Computer Science, LNCS 6837, R. Kwan, J. Fong, L. Kwok, and J. Lam, Eds., Berlin, Heidelberg: Springer, 2011, pp. 224-238.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The term Edupunk, coined by Jim Groom, defines a do-it-yourself concept of using the most recent Web tools available for teaching, instead of relying only on commercial learning platforms – it is the information, the content, the knowledge which matters. Technology itself does not make education valuable per se; it is the creation of individual knowledge which is of paramount importance. Today, much free technology is available which can be used as a hands-on tool to enhance the learning and teaching of students. In this article, we demonstrate that such issues can also be included in a large university-wide LMS, which has been developed at Graz University of Technology (TU Graz) over the last few years. The development was initiated by the necessity to emphasize and implement three crucial factors for learning: communication, active participation and social interaction. We assess the potential of current Web 2.0 technologies for implementing such factors. We show that the development process was not technology driven; on the contrary, the requirements of all end user groups engaged in university learning (students, teachers and administrators) were thoroughly investigated and mapped onto functional components of the LMS. Finally, we provide an overview of the platform functionalities with an emphasis on Web 2.0 elements and EduPunk concepts. (Web 2.0)

    @incollection{c76,
       author = {Ebner, Martin and Holzinger, Andreas and Scerbakov, Nick and Tsang, Philip},
       title = {EduPunks and Learning Management Systems – Conflict or Chance?},
       booktitle = {Hybrid Learning, Lecture Notes in Computer Science, LNCS 6837},
       editor = {Kwan, Reggie and Fong, Joseph and Kwok, Lam-for and Lam, Jeanne},
       publisher = {Springer},
       address = {Berlin, Heidelberg},
       pages = {224-238},
       year = {2011},
       abstract = {The term Edupunk, coined by Jim Groom, defines a do-it-yourself concept of using the most recent Web tools available for teaching, instead of relying only on commercial learning platforms – it is the information, the content, the knowledge which matters. Technology itself does not make education valuable per se; it is the creation of individual knowledge which is of paramount importance. Today, much free technology is available which can be used as a hands-on tool to enhance the learning and teaching of students. In this article, we demonstrate that such issues can also be included in a large university-wide LMS, which has been developed at Graz University of Technology (TU Graz) over the last few years. The development was initiated by the necessity to emphasize and implement three crucial factors for learning: communication, active participation and social interaction. We assess the potential of current Web 2.0 technologies for implementing such factors. We show that the development process was not technology driven; on the contrary, the requirements of all end user groups engaged in university learning (students, teachers and administrators) were thoroughly investigated and mapped onto functional components of the LMS. Finally, we provide an overview of the platform functionalities with an emphasis on Web 2.0 elements and EduPunk concepts. (Web 2.0)},
       doi = {10.1007/978-3-642-22763-9_21},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=231645&pCurrPk=58237}
    }

  • [j29] M. Debevc, P. Kosec, and A. Holzinger, “Improving multimodal web accessibility for deaf people: sign language interpreter module“, Multimedia Tools and Applications (MTAP), vol. 54, iss. 1, pp. 181-199, 2011.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The World Wide Web is becoming increasingly necessary for everybody regardless of age, gender, culture, health and individual disabilities. Unfortunately, there are evidently still problems for some deaf and hard of hearing people trying to use certain web pages. These people require the translation of existing written information into their first language, which can be one of many sign languages. In previous technological solutions, the video window dominates the screen, interfering with the presentation and thereby distracting the general public, who have no need of a bilingual web site. One solution to this problem is the development of transparent sign language videos which appear on the screen on request. Therefore, we have designed and developed a system to enable the embedding of selective interactive elements into the original text in appropriate locations, which act as triggers for the video translation into sign language. When the short video clip terminates, the video window is automatically closed and the original web page is shown. In this way, the system significantly simplifies the expansion and availability of additional accessibility functions to web developers, as it preserves the original web page with the addition of a web layer of sign language video. Quantitative and qualitative evaluation has demonstrated that information presented through a transparent sign language video increases the users’ interest in the content of the material by interpreting terms, phrases or sentences, and therefore facilitates the understanding of the material and increases its usefulness for deaf people. (HCI, human-computer interaction, interactive video)

    @article{j29,
       author = {Debevc, M. and Kosec, P. and Holzinger, A.},
       title = {Improving multimodal web accessibility for deaf people: sign language interpreter module},
       journal = {Multimedia Tools and Applications (MTAP)},
       volume = {54},
       number = {1},
       pages = {181-199},
       year = {2011},
       abstract = {The World Wide Web is becoming increasingly necessary for everybody regardless of age, gender, culture, health and individual disabilities. Unfortunately, there are evidently still problems for some deaf and hard of hearing people trying to use certain web pages. These people require the translation of existing written information into their first language, which can be one of many sign languages. In previous technological solutions, the video window dominates the screen, interfering with the presentation and thereby distracting the general public, who have no need of a bilingual web site. One solution to this problem is the development of transparent sign language videos which appear on the screen on request. Therefore, we have designed and developed a system to enable the embedding of selective interactive elements into the original text in appropriate locations, which act as triggers for the video translation into sign language. When the short video clip terminates, the video window is automatically closed and the original web page is shown. In this way, the system significantly simplifies the expansion and availability of additional accessibility functions to web developers, as it preserves the original web page with the addition of a web layer of sign language video. Quantitative and qualitative evaluation has demonstrated that information presented through a transparent sign language video increases the users' interest in the content of the material by interpreting terms, phrases or sentences, and therefore facilitates the understanding of the material and increases its usefulness for deaf people. (HCI, human-computer interaction, interactive video)},
       doi = {10.1007/s11042-010-0529-8},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=137845&pCurrPk=47669}
    }

  • [B79] M. Bloice, K. Simonic, M. Kreuzthaler, and A. Holzinger, “Development of an Interactive Application for Learning Medical Procedures and Clinical Decision Making“, in Information Quality in e-Health (Lecture Notes in Computer Science LNCS 7058), A. Holzinger and K. Simonic, Eds., Berlin, Heidelberg, New York: Springer, 2011, vol. 7058, pp. 211-224.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    This paper outlines the development of a Virtual Patient style tablet application for the purpose of teaching decision making to undergraduate students of medicine. In order to objectively compare some of the various technologies available, the application was written using two different languages: one as a native iPad app written in Objective-C, the other as a web-based app written in HTML5, CSS3, and JavaScript. The requirements for both applications were identical, and this paper will discuss the relative advantages and disadvantages of both technologies from both an HCI point of view and a technological point of view. Application deployment, user-computer interaction, usability, security, and cross-platform interoperability are also discussed. The motivation for developing this application, entitled Casebook, was to create a platform to test the novel approach of using real patient records to teach undergraduate students. These medical records form patient cases, and these cases are navigated using the Casebook application with the goal of teaching decision making and clinical reasoning; the pretext being that real cases more closely match the context of the hospital ward and thereby increase authentic activity. Of course, patient cases must possess a certain level of quality to be useful. Therefore, the quality of documentation and, most importantly, quality’s impact on healthcare is also discussed.

    @incollection{B79,
       author = {Bloice, Marcus and Simonic, Klaus-Martin and Kreuzthaler, Markus and Holzinger, Andreas},
       title = {Development of an Interactive Application for Learning Medical Procedures and Clinical Decision Making},
       booktitle = {Information Quality in e-Health (Lecture Notes in Computer Science LNCS 7058)},
       editor = {Holzinger, Andreas and Simonic, Klaus-Martin},
       publisher = {Springer},
       address = {Berlin, Heidelberg, New York},
       volume = {7058},
       pages = {211-224},
       year = {2011},
       abstract = {This paper outlines the development of a Virtual Patient style tablet application for the purpose of teaching decision making to undergraduate students of medicine. In order to objectively compare some of the various technologies available, the application was written using two different languages: one as a native iPad app written in Objective-C, the other as a web-based app written in HTML5, CSS3, and JavaScript. The requirements for both applications were identical, and this paper will discuss the relative advantages and disadvantages of both technologies from both an HCI point of view and a technological point of view. Application deployment, user-computer interaction, usability, security, and cross-platform interoperability are also discussed. The motivation for developing this application, entitled Casebook, was to create a platform to test the novel approach of using real patient records to teach undergraduate students. These medical records form patient cases, and these cases are navigated using the Casebook application with the goal of teaching decision making and clinical reasoning; the pretext being that real cases more closely match the context of the hospital ward and thereby increase authentic activity. Of course, patient cases must possess a certain level of quality to be useful. Therefore, the quality of documentation and, most importantly, quality’s impact on healthcare is also discussed.},
       doi = {10.1007/978-3-642-25364-5_17},
       url = {http://dx.doi.org/10.1007/978-3-642-25364-5_17}
    }

  • [c73a] M. Bloice, K. Simonic, and A. Holzinger, “Using Patient Records to Teach Medical Students“, in Association for Medical Education in Europe, Dundee: AMEE, 2011, pp. 230-231.
    [BibTeX]
    @incollection{c73a,
       author = {Bloice, Marcus and Simonic, Klaus-Martin and Holzinger, Andreas},
       title = {Using Patient Records to Teach Medical Students},
       booktitle = {Association for Medical Education in Europe},
       publisher = {AMEE},
       address = {Dundee},
       pages = {230-231},
       year = {2011}
    }

  • [c78c] A. Auinger, D. Nedbal, A. Hochmeier, and A. Holzinger, “User-centric Usability Evaluation for Enterprise 2.0 Platforms – A Complementary Multi-method Approach“, in ICE-B 2011, Los Alamitos: IEEE, 2011, pp. 119-124.
    [BibTeX] [Download PDF]
    @incollection{c78c,
       author = {Auinger, Andreas and Nedbal, Dietmar and Hochmeier, Alexander and Holzinger, Andreas},
       title = {User-centric Usability Evaluation for Enterprise 2.0 Platforms - A Complementary Multi-method Approach},
       booktitle = {ICE-B 2011},
       publisher = {IEEE},
       address = {Los Alamitos},
       pages = {119-124},
       year = {2011},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=264158&pCurrPk=57668}
    }

  • [c81] A. Auinger, A. Aistleithner, H. Kindermann, and A. Holzinger, “Conformity with User Expectations on the Web: Are There Cultural Differences for Design Principles?“, in Design, User Experience, and Usability. Theory, Methods, Tools and Practice. Lecture Notes in Computer Science LNCS 6769, A. Marcus, Ed., Heidelberg, Berlin, New York: Springer, 2011, pp. 3-12.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    User-centered Web design essentially impacts a website’s success and therefore directly or indirectly influences a classic or digital enterprise’s prosperity. “Conformity with user expectations” as one of seven dialogue principles according to the ISO 9241-110 standard is one critical success factor as it regards efficient and effective task completion. Over the past ten years, numerous recommendations for designing Web elements have been published, and some of them deal with conformity of user expectations. However, there are cultural differences concerning how design principles should be applied on Web elements. In this paper, we outline examples of their implementation, followed by discussing the results of an eye tracking study, which indicates that not all recommendations for design principles provided in related work – especially from the Anglo-American area – are valid for European end users and, finally, that their validity may change over time.

    @incollection{c81,
       author = {Auinger, Andreas and Aistleithner, Anna and Kindermann, Harald and Holzinger, Andreas},
       title = {Conformity with User Expectations on the Web: Are There Cultural Differences for Design Principles?},
       booktitle = {Design, User Experience, and Usability. Theory, Methods, Tools and Practice. Lecture Notes in Computer Science LNCS 6769},
       editor = {Marcus, Aaron},
       publisher = {Springer},
       address = {Heidelberg, Berlin, New York},
       pages = {3-12},
       year = {2011},
       abstract = {User-centered Web design essentially impacts a website’s success and therefore directly or indirectly influences a classic or digital enterprise’s prosperity. “Conformity with user expectations” as one of seven dialogue principles according to the ISO 9241-110 standard is one critical success factor as it regards efficient and effective task completion. Over the past ten years, numerous recommendations for designing Web elements have been published, and some of them deal with conformity of user expectations. However, there are cultural differences concerning how design principles should be applied on Web elements. In this paper, we outline examples of their implementation, followed by discussing the results of an eye tracking study, which indicates that not all recommendations for design principles provided in related work - especially from the Anglo-American area - are valid for European end users and, finally, that their validity may change over time.},
       doi = {10.1007/978-3-642-21675-6_1},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=231647&pCurrPk=63533}
    }

2010

  • [c58] C. Stickel, M. Ebner, and A. Holzinger, “The XAOS Metric – Understanding Visual Complexity as Measure of Usability“, in HCI in Work and Learning, Life and Leisure, Lecture Notes in Computer Science, LNCS 6389, G. Leitner, M. Hitz, and A. Holzinger, Eds., Berlin, Heidelberg: Springer, 2010, pp. 278-290.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The visual complexity of an interface is a crucial factor for usability, since it influences the cognitive load and forms expectations about the subjacent software or system. In this paper we propose a novel method that uses entropy, structure and functions to calculate the visual complexity of a website. Our method is evaluated against a well-known approach of using the file size of color JPEG images for determining visual complexity. Both methods were applied to a dataset consisting of images of 30 different websites. These websites were also evaluated with a web survey. We found a strong correlation between both methods and subjective ratings of visual complexity and structure. This suggests that both methods are reliable for the determination of visual complexity. [Information Systems, Entropy, Visual Complexity]

    @incollection{c58,
       author = {Stickel, Christian and Ebner, Martin and Holzinger, Andreas},
       title = {The XAOS Metric – Understanding Visual Complexity as Measure of Usability},
       booktitle = {HCI in Work and Learning, Life and Leisure, Lecture Notes in Computer Science, LNCS 6389},
       editor = {Leitner, Gerhard and Hitz, Martin and Holzinger, Andreas},
       publisher = {Springer},
       address = {Berlin, Heidelberg},
       pages = {278-290},
       year = {2010},
       abstract = {The visual complexity of an interface is a crucial factor for usability, since it influences the cognitive load and forms expectations about the subjacent software or system. In this paper we propose a novel method that uses entropy, structure and functions to calculate the visual complexity of a website. Our method is evaluated against a well-known approach of using the file size of color JPEG images for determining visual complexity. Both methods were applied to a dataset consisting of images of 30 different websites. These websites were also evaluated with a web survey. We found a strong correlation between both methods and subjective ratings of visual complexity and structure. This suggests that both methods are reliable for the determination of visual complexity. [Information Systems, Entropy, Visual Complexity]},
       doi = {10.1007/978-3-642-16607-5_18},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=224212&pCurrPk=52574}
    }
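
    The baseline the XAOS metric is compared against, JPEG file size as a proxy for visual complexity, is straightforward to reproduce. A minimal Python sketch follows, assuming Pillow is installed and that a screenshot of the page already exists on disk; 'screenshot.png' is a hypothetical file name.

    # Baseline from the comparison above: compress a screenshot as JPEG and
    # take the byte count. Visually busier pages compress worse, so a larger
    # file serves as a proxy for higher visual complexity.
    import io
    from PIL import Image  # Pillow

    def jpeg_complexity(path, quality=75):
        buf = io.BytesIO()
        Image.open(path).convert("RGB").save(buf, format="JPEG", quality=quality)
        return buf.tell()  # bytes written

    print(jpeg_complexity("screenshot.png"))  # hypothetical input file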

  • [c56c] H. Milchrahm, W. Slany, and A. Holzinger, Process Patterns for Agile Usability, Saint Maarten, Netherlands: IEEE, 2010.
    [BibTeX] [Abstract] [Download PDF]

    The main benefit of Agile Process Definitions is to be lightweight and flexible. Nevertheless, often the opposite is true, as they can be exhaustive and verbose, particularly when integrated with usability. One way to alleviate this problem is capturing processes in the form of patterns. This paper presents three Agile Usability Process patterns extending an already existing agile process pattern collection. The patterns are derived from the agile usability process of a scientific project. For comparison purposes, the process was evaluated by means of an Extreme Programming Evaluation Framework. The implementation results showed that the usability and the overall user experience of the developed system significantly improved. A large number of usability issues found in Usability Expert Evaluations and User Tests were fixed by means of automated usability tests. This, as well as the application of Automated Usability Evaluation metrics, increased the usability quality of the developed system over time to a very high degree. [Software Engineering]

    @book{c56c,
       author = {Milchrahm, H. and Slany, W. and Holzinger, A.},
       title = {Process Patterns for Agile Usability},
       publisher = {IEEE},
       address = {Saint Maarten, Netherlands},
       series = {ACHI 2010. Third International Conference on Advances in Computer-Human Interactions},
       year = {2010},
       abstract = {The main benefit of Agile Process Definitions is to be lightweight and flexible. Nevertheless, often the opposite is true, as they can be exhaustive and verbose, particularly when integrated with usability. One way to alleviate this problem is capturing processes in the form of patterns. This paper presents three Agile Usability Process patterns extending an already existing agile process pattern collection. The patterns are derived from the agile usability process of a scientific project. For comparison purposes, the process was evaluated by means of an Extreme Programming Evaluation Framework. The implementation results showed that the usability and the overall user experience of the developed system significantly improved. A large number of usability issues found in Usability Expert Evaluations and User Tests were fixed by means of automated usability tests. This, as well as the application of Automated Usability Evaluation metrics, increased the usability quality of the developed system over time to a very high degree. [Software Engineering]},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=132634&pCurrPk=50237}
    }

  • [e8] G. Leitner, M. Hitz, and A. Holzinger, HCI in Work and Learning, Life and Leisure: 6th Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society (OCG), USAB 2010, Lecture Notes in Computer Science (LNCS 6389), Heidelberg, Berlin: Springer, 2010.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The contributions for USAB 2010 provide important insights into current research activities in the field and support the interested audience by presenting the state of the art in HCI research as well as giving valuable input on questions arising when planning or designing research projects. Because of the increasing breadth of the field of HCI research, it is not possible to address all areas within a small conference; however, this is not the goal of USAB 2010—it should be seen as the metaphorical counterpart of a wholesaler: an HCI delicatessen shop providing a tasting menu with different courses (hopefully) catering to all tastes. As a kind of appetizer, the session “Psychological Factors of HCI” puts a focus on psychological and social aspects to be considered in the development of end user applications. Based on the example of the participatory design of visual analytics, Mayr et al. illustrate the importance of human problem-solving strategies. In their first paper, Pommeranz et al. show how the quality of decision support systems can influence the elicitation of user preferences. Arning et al. focus their contribution on usage motives and usage barriers related to the use of mobile technologies. In their second contribution, Pommeranz et al. address the relevance of context and subjective norm for the acceptance of a mobile negotiation support system. The session “e-Health and HCI” illustrates that although the health of the elderly is a central issue in today’s discussion on demography, they are not the only group who can benefit from ICT research. Holzinger et al. sketch an alarming picture of the health status of the youth in Austria, but also show how the hype around mobile devices and Web 2.0 can be combined to change health awareness among youths. Wilkowska et al. focus their contribution on the role of gender in the acceptance of medical devices and show that there are indeed differences in specific situations. The health system of Western countries is prototypical for high public expenditure; therefore, financing usability engineering activities seems to be a difficult task. Verhoeven and Gemert-Pijnen show that discount usability methods can even be applied to health care settings with very low costs (which, invested in usability, exhibit a high return on investment, as illustrated by Bias & Mayhew). Another way of efficient HCI application is the re-use of existing knowledge, e.g., on the basis of HCI patterns. Doyle et al. present an approach to sharing knowledge in the health care sector by establishing a customized pattern language structured on the needs of the area of application. Since the group of the elderly plays an important role in today’s HCI research, it is also considered at USAB. The session “Enhancing the Quality of Life of Elderly People” is motivated by the fact that current and future generations of the elderly are more active than the generations of the elderly in the past. To support their activity, HCI and UE research has to focus on their needs. Schaar and Ziefle show how e-travel services could be enhanced for this special target group. To enhance the activity of the elderly at home, Harley et al. present the possibilities of game playing based on the Nintendo Wii console in a sheltered home. But even when activity is already reduced, there are possibilities to support the elderly with technology, which, however, has to fulfill certain usability requirements. Otjacques et al. present the system SAMMY, which supports the daily life of the elderly in a retirement home. Not only the elderly, but all user groups not optimally supported by ICT are in the focus of HCI research in order to make e-inclusion not an empty phrase. The session “Supporting Fellow Humans with Special Needs” is therefore devoted to this heterogeneous group of users. Kranjc and his colleagues address the possibility of applying the user-centered design approach to enhance mobile devices for visually impaired people, whereas Debevc et al. focus on the respective possibilities for hearing-impaired people. Finally, Curatelli and Martinengo address motor-impaired users and present a keyboard with a specific layout based on pseudo-syllables. Besides e-health for different groups of people, e-learning includes various challenges for HCI researchers. The authors’ contributions to the session “Teaching and Virtual/Mobile Learning” face these challenges. Safta and Gorgan analyze the characteristics and structure of the teaching process and show how to implement these in a system for computer-based learning. De Troyer et al. discuss the possibilities of adaptive virtual learning environments. Gil-Rodriguez and Rebaque-Rivas focus their contribution on online learning with mobile devices while commuting. Another variation of HCI is presented in the session “Enhanced and New Methods in HCI Research.” Stickel et al. as well as Stork et al. focus their contributions on visual aspects and show possible enhancements to existing approaches. Stickel et al. propose a metric which can be used for measuring the visual complexity of websites and can therefore be used as a kind of automated evaluation criterion, whereas Stork et al. show how contextual cues can support the quality and efficiency of visual search. Schrammel et al. illustrate an extraordinary approach and propose including body motion in HCI research. The dessert of our menu can be chosen from the special thematic sessions UXFUL and WIMA, which put a focus on the cutting-edge research topics of user experience and multimedia applications, respectively. The program is rounded off by a tutorial given by Ebner et al. on the usage of the iPad, iPhone, etc. USAB 2010 received a total of 55 submissions. We followed a careful and rigorous review process, assigning each paper to a minimum of three and a maximum of five reviewers. On the basis of the reviewers’ results, 10 full papers and 10 short papers were accepted in the main track of the conference. The two special thematic sessions, UXFUL and WIMA, were established with the intensive support of the organizing colleagues and contributed a further 13 papers to the program. Additionally, to give selected authors the opportunity to show their work in progress, a poster presentation section was created. The scientific program, the proximity to a melting pot of ICT research, development and application (Lakeside Science and Technology Park) and the involvement of the local industry made USAB 2010 a platform that brought together the scientific community focused on HCI and usability with interested people from industry, business, or government as well as from other scientific disciplines. The final product can be seen as a valuable piece of the mosaic of further development of the HCI & UE community.
The credit for this belongs to each and every person who contributed to making USAB 2010 a great success: the authors, reviewers, sponsors, organizations, supporters, the members of the organization team, and all the volunteers, without whose help this deli would never have been built.

    @book{e8,
       author = {Leitner, G. and Hitz, M.  and Holzinger, A.},
       title = {HCI in Work and Learning, Life and Leisure: 6th Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society (OCG), USAB 2010, Lecture Notes in Computer Science (LNCS 6389)},
       publisher = {Springer},
       address = {Heidelberg, Berlin},
       year = {2010},
       abstract = {The contributions for USAB 2010 provide important insights into current research activities in the field and support the interested audience by presenting the state of the art in HCI research as well as giving valuable input on questions arising when planning or designing research projects. Because of the increasing breadth of the field of HCI research, it is not possible to address all areas within a small conference; however, this is not the goal of USAB 2010—it should be seen as the metaphorical counterpart of a wholesaler: an HCI delicatessen shop providing a tasting menu with different courses (hopefully) catering to all tastes. As a kind of appetizer, the session “Psychological Factors of HCI” puts a focus on psychological and social aspects to be considered in the development of end user applications. Based on the example of the participatory design of visual analytics, Mayr et al. illustrate the importance of human problem-solving strategies. In their first paper, Pommeranz et al. show how the quality of decision support systems can influence the elicitation of user preferences. Arning et al. focus their contribution on usage motives and usage barriers related to the use of mobile technologies. In their second contribution, Pommeranz et al. address the relevance of context and subjective norm for the acceptance of a mobile negotiation support system. The session “e-Health and HCI” illustrates that although the health of the elderly is a central issue in today’s discussion on demography, they are not the only group who can benefit from ICT research. Holzinger et al. sketch an alarming picture of the health status of the youth in Austria, but also show how the hype around mobile devices and Web 2.0 can be combined to change health awareness among youths. Wilkowska et al. focus their contribution on the role of gender in the acceptance of medical devices and show that there are indeed differences in specific situations. The health system of Western countries is prototypical for high public expenditure; therefore, financing usability engineering activities seems to be a difficult task. Verhoeven and Gemert-Pijnen show that discount usability methods can even be applied to health care settings with very low costs (which, invested in usability, exhibit a high return on investment, as illustrated by Bias & Mayhew). Another way of efficient HCI application is the re-use of existing knowledge, e.g., on the basis of HCI patterns. Doyle et al. present an approach to sharing knowledge in the health care sector by establishing a customized pattern language structured on the needs of the area of application. Since the group of the elderly plays an important role in today’s HCI research, it is also considered at USAB. The session “Enhancing the Quality of Life of Elderly People” is motivated by the fact that current and future generations of the elderly are more active than the generations of the elderly in the past. To support their activity, HCI and UE research has to focus on their needs. Schaar and Ziefle show how e-travel services could be enhanced for this special target group. To enhance the activity of the elderly at home, Harley et al. present the possibilities of game playing based on the Nintendo Wii console in a sheltered home. But even when activity is already reduced, there are possibilities to support the elderly with technology, which, however, has to fulfill certain usability requirements. Otjacques et al. present the system SAMMY, which supports the daily life of the elderly in a retirement home. Not only the elderly, but all user groups not optimally supported by ICT are in the focus of HCI research in order to make e-inclusion not an empty phrase. The session “Supporting Fellow Humans with Special Needs” is therefore devoted to this heterogeneous group of users. Kranjc and his colleagues address the possibility of applying the user-centered design approach to enhance mobile devices for visually impaired people, whereas Debevc et al. focus on the respective possibilities for hearing-impaired people. Finally, Curatelli and Martinengo address motor-impaired users and present a keyboard with a specific layout based on pseudo-syllables. Besides e-health for different groups of people, e-learning includes various challenges for HCI researchers. The authors’ contributions to the session “Teaching and Virtual/Mobile Learning” face these challenges. Safta and Gorgan analyze the characteristics and structure of the teaching process and show how to implement these in a system for computer-based learning. De Troyer et al. discuss the possibilities of adaptive virtual learning environments. Gil-Rodriguez and Rebaque-Rivas focus their contribution on online learning with mobile devices while commuting. Another variation of HCI is presented in the session “Enhanced and New Methods in HCI Research.” Stickel et al. as well as Stork et al. focus their contributions on visual aspects and show possible enhancements to existing approaches. Stickel et al. propose a metric which can be used for measuring the visual complexity of websites and can therefore be used as a kind of automated evaluation criterion, whereas Stork et al. show how contextual cues can support the quality and efficiency of visual search. Schrammel et al. illustrate an extraordinary approach and propose including body motion in HCI research. The dessert of our menu can be chosen from the special thematic sessions UXFUL and WIMA, which put a focus on the cutting-edge research topics of user experience and multimedia applications, respectively. The program is rounded off by a tutorial given by Ebner et al. on the usage of the iPad, iPhone, etc. USAB 2010 received a total of 55 submissions. We followed a careful and rigorous review process, assigning each paper to a minimum of three and a maximum of five reviewers. On the basis of the reviewers’ results, 10 full papers and 10 short papers were accepted in the main track of the conference. The two special thematic sessions, UXFUL and WIMA, were established with the intensive support of the organizing colleagues and contributed a further 13 papers to the program. Additionally, to give selected authors the opportunity to show their work in progress, a poster presentation section was created. The scientific program, the proximity to a melting pot of ICT research, development and application (Lakeside Science and Technology Park) and the involvement of the local industry made USAB 2010 a platform that brought together the scientific community focused on HCI and usability with interested people from industry, business, or government as well as from other scientific disciplines. The final product can be seen as a valuable piece of the mosaic of further development of the HCI & UE community.
The credit for this belongs to each and every person who contributed to making USAB 2010 a great success: the authors, reviewers, sponsors, organizations, supporters, the members of the organization team, and all the volunteers, without whose help this deli would never have been built. },
       doi = {10.1007/978-3-642-16607-5},
       url = {http://rd.springer.com/content/pdf/10.1007%2F978-3-642-16607-5.pdf}
    }

  • [c56b] M. Kreuzthaler, M. D. Bloice, K. -M. Simonic, and A. Holzinger, “On the Need for Open-Source Ground Truths for Medical Information Retrieval Systems“, in I-KNOW 2010, 10th International Conference on Knowledge Management and Knowledge Technologies, 2010, pp. 371-381.
    [BibTeX] [Abstract] [Download PDF]

    Smart information retrieval systems are becoming increasingly prevalent due to the rate at which the amount of digitized raw data has increased, and continues to increase. This is especially true in the medical domain, as much data is stored in unstructured formats which contain “hidden” information within them. By hidden, we mean information that cannot ordinarily be found by performing a simple text search. To test the information retrieval systems that handle such data, a ground truth, or gold standard, is normally required in order to obtain performance values according to an information need. In this paper we emphasize the lack of freely available, annotated medical data and wish to encourage the community of developers working in this area to make available whatever data they can. Also, the importance of such annotated medical data is raised, especially its potential impact on teaching and training in medicine. This paper also points out some of the advantages that access to a freely available pool of annotated medical objects would provide to several areas of medicine and informatics. The paper then discusses some of the considerations that would have to be made for any future systems developed that would provide a service to make the creating, sharing, and annotating of such data easy to perform (by using an online, web-based interface, for example). Finally, the paper discusses in detail the benefits of such a system for teaching and examining medical students. [Information Systems]

    @inproceedings{c56b,
       author = {Kreuzthaler, M. and Bloice, M. D. and Simonic, K.-M. and Holzinger, A.},
       title = {On the Need for Open-Source Ground Truths for Medical Information Retrieval Systems},
       booktitle = {I-KNOW 2010, 10th International Conference on Knowledge Management and Knowledge Technologies},
       editor = {Tochtermann, Klaus and Maurer, Hermann},
       pages = {371-381},
    year = {2010},
       abstract = {Smart information retrieval systems are becoming increasingly prevalent due to the rate at which the amount of digitized raw data has increased, and continues to increase. This is especially true in the medical domain, as there is much data stored in unstructured formats which contain “hidden” information within them. By hidden, we mean information that cannot ordinarily be found by performing a simple text search. To test the information retrieval systems that handle such data, a ground truth, or gold standard, is normally required in order to gain performance values according to an information need. In this paper we emphasize the lack of freely available, annotated medical data and wish to encourage the community of developers working in this area to make available whatever data they can. The importance of such annotated medical data is also raised, especially its potential impact on teaching and training in medicine. This paper also points out some of the advantages that access to a freely available pool of annotated medical objects would provide to several areas of medicine and informatics. The paper then discusses some of the considerations that would have to be made for any future systems developed that would provide a service to make the creating, sharing, and annotating of such data easy to perform (by using an online, web-based interface, for example). Finally, the paper discusses in detail the benefits of such a system to teaching and examining medical students. [Information Systems]},
       url = {https://i-know.tugraz.at/?paper=on-the-need-for-open-source-ground-truths-for-medical-information-retrieval-systems}
    }

  • [c68] P. Kosec, M. Debevc, and A. Holzinger, “Sign Language Interpreter Module: Accessible Video Retrieval with Subtitles“, in Computers Helping People with Special Needs, Lecture Notes in Computer Science, LNCS 6180, K. Miesenberger, J. Klaus, W. Zagler, and A. Karshmer, Eds., Berlin, Heidelberg: Springer , 2010, pp. 221-228.
    [BibTeX] [Abstract] [DOI]

    In this paper, we introduce a new approach to the integration of sign language on the Web. Written information is presented by a Sign Language Interpreter Module (SLI Module). The improvement in comparison to state-of-the-art solutions on the Web is that our sign language video has a transparent background and is shown over the existing web page. The end user can activate the video playback by clicking on an interactive icon. The mechanism also provides a simplified approach to enable accessibility requirements of existing web pages. In addition, the subtitles are stored externally in the Timed Text Authoring Format (TTAF), which is a candidate for recommendation by the W3C community. Empirical results from our evaluation study showed that the prototype was well accepted and was pleasant to use.

    @incollection{c68,
       author = {Kosec, Primož and Debevc, Matjaž and Holzinger, Andreas},
       title = {Sign Language Interpreter Module: Accessible Video Retrieval with Subtitles},
       booktitle = {Computers Helping People with Special Needs, Lecture Notes in Computer Science, LNCS 6180},
       editor = {Miesenberger, Klaus and Klaus, Joachim and Zagler, Wolfgang and Karshmer, Arthur},
       publisher = {Springer },
       address = {Berlin, Heidelberg},
       pages = {221-228},
       year = {2010},
       abstract = {In this paper, we introduce a new approach to the integration of sign language on the Web. Written information is presented by a Sign Language Interpreter Module (SLI Module). The improvement in comparison to state-of-the-art solutions on the Web is that our sign language video has a transparent background and is shown over the existing web page. The end user can activate the video playback by clicking on an interactive icon. The mechanism also provides a simplified approach to enable accessibility requirements of existing web pages. In addition, the subtitles are stored externally in the Timed Text Authoring Format (TTAF), which is a candidate for recommendation by the W3C community. Empirical results from our evaluation study showed that the prototype was well accepted and was pleasant to use.},
       doi = {10.1007/978-3-642-14100-3_33}
    }
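
    The overlay mechanism described above predates HTML5 video, so purely as an illustration: a rough modern analogue of the SLI Module idea could place a transparent sign-language video over the existing page and attach externally stored timed text. Note that browsers today ship WebVTT rather than TTAF/TTML for the track element, and that all element ids and file names in this TypeScript sketch are assumptions, not taken from the paper.

    // Hypothetical browser-side sketch, not the paper's implementation.
    function showInterpreter(videoUrl: string, subtitleUrl: string): void {
      const video = document.createElement("video");
      video.src = videoUrl;            // e.g. a WebM file with an alpha channel
      video.style.position = "fixed";  // shown over the existing web page
      video.style.right = "1em";
      video.style.bottom = "1em";
      video.style.background = "transparent";

      const track = document.createElement("track");
      track.kind = "subtitles";
      track.src = subtitleUrl;         // subtitles kept external to the page (WebVTT here)
      track.default = true;
      video.appendChild(track);

      document.body.appendChild(video);
      void video.play();
    }

    // As in the paper, playback is triggered from an interactive icon.
    document.getElementById("sli-icon")?.addEventListener("click", () =>
      showInterpreter("interpreter.webm", "subtitles.vtt"));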

  • [91c] K. Holzinger, M. Lehner, M. Fassold, and A. Holzinger, “Ubiquitous Computing and Urban Archaeology: Experiences with Augmenting the Learning Experience by Smart Objects“, in International Congress Cultural Heritage and New Technologies, W. Boerner and S. Uhlirz, Eds., Vienna, Austria: Stadtarchaeology Vienna, 2010, p. 64.
    [BibTeX] [Download PDF]
    @incollection{91c,
       author = {Holzinger, Katharina and Lehner, Manfred and Fassold, Markus and Holzinger, Andreas},
       title = {Ubiquitous Computing and Urban Archaeology: Experiences with Augmenting the Learning Experience by Smart Objects},
       booktitle = {International Congress Cultural Heritage and New Technologies},
       editor = {Boerner, Wolfgang and Uhlirz, Susanne},
       publisher = {Stadtarchaeology Vienna},
       address = {Vienna, Austria},
       pages = {64},
       year = {2010},
       url = {http://www.stadtarchaeologie.at/?page_id=559}
    }

  • [c63] K. Holzinger, A. Holzinger, C. Safran, G. Koiner, and E. Weippl, “Use of Wiki Systems in Archaeology: Privacy, Security and Data Protection as Key Problems“, in ICE-B 2010 – ICETE The International Joint Conference on e-Business and Telecommunications, 2010, pp. 120-123.
    [BibTeX] [Abstract] [Download PDF]

    Wikis are powerful, collaborative tools and can be used for educational purposes in many ways. The original idea of a Wiki is to make information accessible to all. However, it is very interesting that experiences in the use of Wikis in educational settings showed that security and data protection of wiki contents is definitely an issue. In this paper, we discuss problems and solutions on the basis of use cases from archaeological education. Interestingly, archaeologists are extremely worried about online accessible information due to the serious danger of archaeological looting. “Tomb raiders”, i.e. people who excavate artefacts of cultural heritage on the basis of information stored in Geowikis, so-called archaeological looters, are not aware of the value of cultural heritage, are interested only in the artefacts, and destroy the cultural context, which is of enormous interest for archaeological research. Consequently, the protection of archaeological information is one of the most urgent tasks in the preservation of cultural heritage. [Information Systems, data protection, safety, security]

    @inproceedings{c63,
       author = {Holzinger, Katharina and Holzinger, Andreas and Safran, Christian and Koiner, Gabriele and Weippl, Edgar},
       title = {Use of Wiki Systems in Archaeology: Privacy, Security and Data Protection as Key Problems},
       booktitle = {ICE-B 2010 - ICETE The International Joint Conference on e-Business and Telecommunications},
       publisher = {INSTICC IEEE},
       pages = {120-123},
       year = {2010},
       abstract = {Wikis are powerful, collaborative tools and can be used for educational purposes in many ways. The original idea of a Wiki is to make information accessible to all. However, it is very interesting that experiences in the use of Wikis in educational settings showed that security and data protection of wiki contents is definitely an issue. In this paper, we discuss problems and solutions on the basis of use cases from archaeological education. Interestingly, archaeologists are extremely worried about online accessible information due to the serious danger of archaeological looting. “Tomb raiders”, i.e. people who excavate artefacts of cultural heritage on the basis of information stored in Geowikis, so-called archaeological looters, are not aware of the value of cultural heritage, are interested only in the artefacts, and destroy the cultural context, which is of enormous interest for archaeological research. Consequently, the protection of archaeological information is one of the most urgent tasks in the preservation of cultural heritage. [Information Systems, data protection, safety, security]},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=555476&pCurrPk=52078}
    }

  • [c67] A. Holzinger, M. Ziefle, and C. Röcker, “Human-Computer Interaction and Usability Engineering for Elderly (HCI4AGING): Introduction to the Special Thematic Session“, in Computers Helping People with Special Needs, Lecture Notes in Computer Science, LNCS 6180, K. Miesenberger, J. Klaus, W. Zagler, and A. Karshmer, Eds., Berlin, Heidelberg: Springer, 2010, pp. 556-559.
    [BibTeX] [Abstract] [DOI]

    In most countries demographic developments tend towards more and more elderly people in single households. Improving the quality of life for elderly people is an emerging issue within our information society. Good user interfaces have tremendous implications for appropriate accessibility. However, user interfaces should not only be easily accessible, they should also be useful, usable and most of all enjoyable and a benefit for people. Traditionally, Human–Computer Interaction (HCI) bridges Natural Sciences (Psychology) and Engineering (Informatics/Computer Science), whilst Usability Engineering (UE) is anchored in Software Technology and supports the actual implementation. Together, HCI and UE have a powerful potential to help towards making technology a little bit more accessible, useful, useable and enjoyable for everybody.

    @incollection{c67,
       author = {Holzinger, Andreas and Ziefle, Martina and Röcker, Carsten},
       title = {Human-Computer Interaction and Usability Engineering for Elderly (HCI4AGING): Introduction to the Special Thematic Session},
       booktitle = {Computers Helping People with Special Needs, Lecture Notes in Computer Science, LNCS 6180},
       editor = {Miesenberger, Klaus and Klaus, Joachim and Zagler, Wolfgang and Karshmer, Arthur},
       publisher = {Springer},
       address = {Berlin, Heidelberg},
       pages = {556-559},
       year = {2010},
       abstract = {In most countries demographic developments tend towards more and more elderly people in single households. Improving the quality of life for elderly people is an emerging issue within our information society. Good user interfaces have tremendous implications for appropriate accessibility. However, user interfaces should not only be easily accessible, they should also be useful, usable and most of all enjoyable and a benefit for people. Traditionally, Human–Computer Interaction (HCI) bridges Natural Sciences (Psychology) and Engineering (Informatics/Computer Science), whilst Usability Engineering (UE) is anchored in Software Technology and supports the actual implementation. Together, HCI and UE have a powerful potential to help towards making technology a little bit more accessible, useful, useable and enjoyable for everybody.},
       doi = {10.1007/978-3-642-14100-3_83}
    }

  • [j26] A. Holzinger, H. Thimbleby, and R. Beale, “Human–Computer Interaction for Medicine and Health Care (HCI4MED): Towards making Information usable“, International Journal of Human-Computer Studies (IJHCS), vol. 68, iss. 6, pp. 325-327, 2010.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Making IT/Informatics useful, useable and enjoyable can be seen as a key success factor in our future digital world: technology must support and enhance people. Medicine and Health Care in particular are currently subject to exceedingly rapid technological change. They are also a vital area of our economy; consequently, in our modern society Medicine and Health Care issues involve everybody and are a great challenge for Human–Computer Interaction (HCI) research. However, it is of vital importance that the findings are integrated into engineering at a systemic level. Information Processing, in particular its potential effectiveness in modern Health Services and the optimization of processes and operational sequences, is of increasing importance. Therefore, we need to ensure that we engineer effective solutions, as well as understanding the stakeholders and the issues they can and do encounter. It is particularly important for Medical Information Systems (e.g. Hospital Information Systems and Decision Support Systems) to be designed from the perspective of the end users, especially given that this is a diverse set of people. This diversity implies that a solution for everybody is not achievable and compromise may be necessary. Various adaptive solutions for specific end user groups can ease this dilemma, whereby, knowing the end users, the context and the workflows is of vital importance. Meanwhile, Information Systems are extremely sophisticated and their technological performance increases exponentially, resulting in a mass of information (Beale, 2007 and Edmondson and Beale, 2008); however, human cognitive evolution has not advanced at the same speed, consequently this gap results in a possible information overload (Holzinger et al., 2007).

    @article{j26,
       author = {Holzinger, Andreas and Thimbleby, Harold and Beale, Russell},
       title = {Human–Computer Interaction for Medicine and Health Care (HCI4MED): Towards making Information usable},
       journal = {International Journal of Human-Computer Studies (IJHCS)},
       volume = {68},
       number = {6},
       pages = {325-327},
       year = {2010},
       abstract = {Making IT/Informatics useful, useable and enjoyable can be seen as a key success factor in our future digital world: technology must support and enhance people. Medicine and Health Care in particular are currently subject to exceedingly rapid technological change. They are also a vital area of our economy; consequently, in our modern society Medicine and Health Care issues involve everybody and are a great challenge for Human–Computer Interaction (HCI) research. However, it is of vital importance that the findings are integrated into engineering at a systemic level. Information Processing, in particular its potential effectiveness in modern Health Services and the optimization of processes and operational sequences, is of increasing importance. Therefore, we need to ensure that we engineer effective solutions, as well as understanding the stakeholders and the issues they can and do encounter. It is particularly important for Medical Information Systems (e.g. Hospital Information Systems and Decision Support Systems) to be designed from the perspective of the end users, especially given that this is a diverse set of people. This diversity implies that a solution for everybody is not achievable and compromise may be necessary. Various adaptive solutions for specific end user groups can ease this dilemma, whereby, knowing the end users, the context and the workflows is of vital importance. Meanwhile, Information Systems are extremely sophisticated and their technological performance increases exponentially, resulting in a mass of information (Beale, 2007 and Edmondson and Beale, 2008); however, human cognitive evolution has not advanced at the same speed, consequently this gap results in a possible information overload (Holzinger et al., 2007).},
       doi = {10.1016/j.ijhcs.2010.03.001},
       url = {http://www.sciencedirect.com/science/article/pii/S1071581910000297/pdfft?md5=639cfd0d81ed17d35ca6f5d8fc9bb3fb&pid=1-s2.0-S1071581910000297-main.pdf}
    }

  • [c64] A. Holzinger, K. Struggl, and M. Debevc, “Applying Model-View-Controller (MVC) in Design and Development of Information Systems: An example of smart assistive script breakdown in an e-Business Application“, in ICE-B 2010 – ICETE The International Joint Conference on e-Business and Telecommunications, 2010, pp. 63-68.
    [BibTeX] [Abstract] [Download PDF]

    Information systems support professionals in all areas of e-Business. In this paper we concentrate on our experiences in the design and development of information systems for use in film production processes. Professionals working in this area are neither computer experts, nor interested in spending much time on information systems. Consequently, to provide a useful, useable and enjoyable application the system must be extremely well suited to the requirements and demands of those professionals. One of the most important tasks at the beginning of a film production is to break down the movie script into its elements and aspects, and create a solid estimate of production costs based on the resulting breakdown data. Several film production software applications provide interfaces to support this task. However, most attempts suffer from numerous usability deficiencies. As a result, many film producers still use script printouts and text markers to highlight script elements, and transfer the data manually into their film management software. This paper presents a novel approach for unobtrusive and efficient script breakdown using a new way of breaking down text into its relevant elements. We demonstrate how the implementation of this interface benefits from employing the Model-View-Controller (MVC) as the underlying software design paradigm in terms of both software development confidence and user satisfaction. [Information Systems, Model-View Controller]

    @inproceedings{c64,
       author = {Holzinger, Andreas and Struggl, Karl-Heinz and Debevc, Matjaz},
       title = {Applying Model-View-Controller (MVC) in Design and Development of Information Systems: An example of smart assistive script breakdown in an e-Business Application},
       booktitle = {ICE-B 2010 - ICETE The International Joint Conference on e-Business and Telecommunications},
       publisher = {INSTICC IEEE},
       pages = {63-68},
       year = {2010},
       abstract = {Information systems support professionals in all areas of e-Business. In this paper we concentrate on our experiences in the design and development of information systems for use in film production processes. Professionals working in this area are neither computer experts, nor interested in spending much time on information systems. Consequently, to provide a useful, useable and enjoyable application the system must be extremely well suited to the requirements and demands of those professionals. One of the most important tasks at the beginning of a film production is to break down the movie script into its elements and aspects, and create a solid estimate of production costs based on the resulting breakdown data. Several film production software applications provide interfaces to support this task. However, most attempts suffer from numerous usability deficiencies. As a result, many film producers still use script printouts and text markers to highlight script elements, and transfer the data manually into their film management software. This paper presents a novel approach for unobtrusive and efficient script breakdown using a new way of breaking down text into its relevant elements. We demonstrate how the implementation of this interface benefits from employing the Model-View-Controller (MVC) as the underlying software design paradigm in terms of both software development confidence and user satisfaction. [Information Systems, Model-View Controller]},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=137848&pCurrPk=50883}
    }
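
    As a hedged illustration of the design pattern the entry above builds on (not the paper's own code), the following minimal TypeScript sketch separates the breakdown data (model), its rendering (view), and the handling of user actions (controller); all class and method names are invented for this example.

    // Model: holds the breakdown elements and notifies observers on change.
    class ScriptModel {
      private elements: string[] = [];
      private listeners: Array<(elements: string[]) => void> = [];
      onChange(listener: (elements: string[]) => void): void {
        this.listeners.push(listener);
      }
      addElement(element: string): void {
        this.elements.push(element);
        this.listeners.forEach(l => l([...this.elements]));
      }
    }

    // View: renders the current breakdown; here it simply prints to the console.
    class BreakdownView {
      render(elements: string[]): void {
        console.log(`Breakdown (${elements.length} elements): ${elements.join(", ")}`);
      }
    }

    // Controller: translates user actions (marking a script element) into model updates.
    class BreakdownController {
      constructor(private model: ScriptModel, view: BreakdownView) {
        model.onChange(elements => view.render(elements));
      }
      markElement(text: string): void {
        this.model.addElement(text);
      }
    }

    const controller = new BreakdownController(new ScriptModel(), new BreakdownView());
    controller.markElement("PROP: revolver");
    controller.markElement("LOCATION: warehouse");

    The point of the separation is the one the abstract credits: the view can be reworked during usability iterations without touching the model or controller logic.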

  • [j27] A. Holzinger, S. Softic, C. Stickel, M. Ebner, M. Debevc, and B. Hu, “Nintendo Wii Remote Controller in Higher Education: Development and Evaluation of a Demonstrator Kit for e-Teaching“, Computing and Informatics / Computers and Artificial Intelligence, vol. 29, iss. 4, pp. 1001-1015, 2010.
    [BibTeX] [Abstract] [Download PDF]

    The increasing availability of game based technologies together with advances in Human-Computer Interaction (HCI) and usability engineering provides new challenges and opportunities to virtual environments in the context of e-Teaching. Consequently, an evident trend is to offer learners the equivalent of practical learning experiences, whilst supporting creativity for both teachers and learners. Current market surveys showed, surprisingly, that the Wii remote controller (Wiimote) is more widespread than standard PCs and is the most used computer input device worldwide, which, given its collection of sensors, accelerometers and Bluetooth technology, makes it of great interest for HCI experiments in e-Learning/e-Teaching. In this paper we discuss the importance of gestures for teaching and describe the design and development of a low-cost demonstrator kit based on the Wiimote, enhancing the quality of lecturing with gestures. [ubiquitous computing, sensors]

    @article{j27,
       author = {Holzinger, A. and Softic, S. and Stickel, C. and Ebner, M. and Debevc, M. and Hu, B.},
       title = {Nintendo Wii Remote Controller in Higher Education: Development and Evaluation of a Demonstrator Kit for e-Teaching},
       journal = {Computing and Informatics / Computers and Artificial Intelligence},
       volume = {29},
       number = {4},
       pages = {1001-1015},
       year = {2010},
       abstract = {The increasing availability of game based technologies together with advances in Human-Computer Interaction (HCI) and usability engineering provides new challenges and opportunities to virtual environments in the context of e-Teaching. Consequently, an evident trend is to offer learners the equivalent of practical learning experiences, whilst supporting creativity for both teachers and learners. Current market surveys showed, surprisingly, that the Wii remote controller (Wiimote) is more widespread than standard PCs and is the most used computer input device worldwide, which, given its collection of sensors, accelerometers and Bluetooth technology, makes it of great interest for HCI experiments in e-Learning/e-Teaching. In this paper we discuss the importance of gestures for teaching and describe the design and development of a low-cost demonstrator kit based on the Wiimote, enhancing the quality of lecturing with gestures. [ubiquitous computing, sensors]},
       url = {http://www.cai.sk/ojs/index.php/cai/article/viewFile/103/85}
    }

  • [c61] A. Holzinger, G. Searle, S. Prückner, S. Steinbach-Nordmann, T. Kleinberger, E. Hirt, and J. Temnitzer, “Perceived usefulness among elderly people: Experiences and lessons learned during the evaluation of a wrist device“, in IEEE International Conference on Pervasive Computing Technologies for Healthcare (Pervasive Health 2010), 2010, pp. 1-5.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    In this paper, we present and discuss the evaluation of end user acceptance of a wrist device, designed to monitor vital signs and to detect adverse situations, such as falls, unconsciousness etc. and, if necessary, to alert emergency services to the wearer's need. The goals of all concerned must be taken into account if the technological advances are to be of benefit to those for whom they are being designed. After the technical assessment was made, a further study of the end users' views aimed to show the acceptance levels of elderly end users to the idea of personal monitoring, its perceived usefulness in their everyday lives, and their judgment of the design. This was made in the form of a questionnaire divided into five main areas: usefulness, attractiveness, usability, comfort and acceptance, and each end user was interviewed regarding their goals. Each of the interviewees regarded their own continuing independence as a primary goal; however their views as to the possibility of achieving this goal by the use of advanced technology differed. This work was completed as part of the EMERGE project, aimed at the support of elderly people in everyday life using innovative monitoring and assistance systems, with the use of ambient and unobtrusive sensors in order to increase their safety, thereby promoting a longer period of independence, a step made necessary by the demographic increase in the elderly population in Europe. [Information Systems, Smart health]

    @inproceedings{c61,
       author = {Holzinger, A. and Searle, G. and Prückner, S. and Steinbach-Nordmann, S. and Kleinberger, T. and Hirt, E.  and Temnitzer, J.},
       title = {Perceived usefulness among elderly people: Experiences and lessons learned during the evaluation of a wrist device},
       booktitle = {IEEE International Conference on Pervasive Computing Technologies for Healthcare (Pervasive Health 2010)},
       publisher = {IEEE},
       pages = {1-5},
       year = {2010},
       abstract = {In this paper, we present and discuss the evaluation of end user acceptance of a wrist device, designed to monitor vital signs and to detect adverse situations, such as falls, unconsciousness etc. and, if necessary, to alert emergency services to the wearer's need. The goals of all concerned must be taken into account if the technological advances are to be of benefit to those for whom they are being designed. After the technical assessment was made, a further study of the end users' views aimed to show the acceptance levels of elderly end users to the idea of personal monitoring, its perceived usefulness in their everyday lives, and their judgment of the design. This was made in the form of a questionnaire divided into five main areas: usefulness, attractiveness, usability, comfort and acceptance, and each end user was interviewed regarding their goals. Each of the interviewees regarded their own continuing independence as a primary goal; however their views as to the possibility of achieving this goal by the use of advanced technology differed. This work was completed as part of the EMERGE project, aimed at the support of elderly people in everyday life using innovative monitoring and assistance systems, with the use of ambient and unobtrusive sensors in order to increase their safety, thereby promoting a longer period of independence, a step made necessary by the demographic increase in the elderly population in Europe. [Information Systems, Smart health]},
       doi = {10.4108/ICST.PERVASIVEHEALTH2010.8912},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=149232&pCurrPk=50002}
    }

  • [c65] A. Holzinger, M. Schlögl, B. Peischl, and M. Debevc, “Preferences of Handwriting Recognition on Mobile Information Systems in Medicine: Improving handwriting algorithm on the basis of real-life usability research (Best Paper Award)“, in ICE-B 2010 – ICETE The International Joint Conference on e-Business and Telecommunications, 2010, pp. 14-21.
    [BibTeX] [Abstract] [Download PDF]

    Streamlining data acquisition in mobile health care in order to increase accuracy and efficiency can only benefit the patient. The company FERK-Systems has been providing health care information systems for various German medical services for many years. The design and development of a compatible front-end system for handwriting recognition, particularly for use in ambulances, was clearly needed. While handwriting recognition has been a classical topic of computer science for many years, many problems still need to be solved. In this paper, we report on the study and resulting improvements achieved by the adaptation of an existing handwriting algorithm, based on experiences made during medical rescue missions. By improving accuracy and error correction the performance of an available handwriting recognition algorithm was increased. However, the end user studies showed that the virtual keyboard is still the overall preferred method compared to handwriting, especially among participants with a computer usage of more than 30 hours a week. This is possibly due to the wide availability of the QWERTY/QWERTZ keyboard.

    @inproceedings{c65,
       author = {Holzinger, A. and Schlögl, M. and Peischl, B. and Debevc, M. },
       title = {Preferences of Handwriting Recognition on Mobile Information Systems in Medicine: Improving handwriting algorithm on the basis of real-life usability research (Best Paper Award)},
       booktitle = {ICE-B 2010 - ICETE The International Joint Conference on e-Business and Telecommunications},
       publisher = {INSTICC},
       pages = {14-21},
       year = {2010},
       abstract = {Streamlining data acquisition in mobile health care in order to increase accuracy and efficiency can only benefit the patient. The company FERK-Systems has been providing health care information systems for various German medical services for many years. The design and development of a compatible front-end system for handwriting recognition, particularly for use in ambulances, was clearly needed. While handwriting recognition has been a classical topic of computer science for many years, many problems still need to be solved. In this paper, we report on the study and resulting improvements achieved by the adaptation of an existing handwriting algorithm, based on experiences made during medical rescue missions. By improving accuracy and error correction the performance of an available handwriting recognition algorithm was increased. However, the end user studies showed that the virtual keyboard is still the overall preferred method compared to handwriting, especially among participants with a computer usage of more than 30 hours a week. This is possibly due to the wide availability of the QWERTY/QWERTZ keyboard.},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=137846&pCurrPk=50881}
    }

  • [j25] A. Holzinger, A. Nischelwitzer, S. Friedl, and B. Hu, “Towards life long learning: three models for ubiquitous applications“, Wireless Communications and Mobile Computing, vol. 10, iss. 10, pp. 1350-1365, 2010.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    In this paper, we present three experimental proof-of-concepts: first, we demonstrate a ubiquitous computing framework (UCF), which is a network of interacting technologies that support humans ubiquitously. We then present practical work based on this UCF framework: TalkingPoints, which was originally developed for use at trading fairs in order to identify each participant and company via transponder and provide specific information during and after use. Finally, we propose GARFID, a concept for using advanced technologies for teaching young children. The main outcome of this research is that the concept of UCF raises a lot of possibilities, which can bring value and benefits for end-users. When one follows the working-is-learning paradigm, it can be seen that the implementation of this type of technology can support life long learning (LLL), thereby providing evidence that technology can benefit everybody and make life easier.

    @article{j25,
       author = {Holzinger, Andreas and Nischelwitzer, Alexander and Friedl, Silvia and Hu, Bo},
       title = {Towards life long learning: three models for ubiquitous applications},
       journal = {Wireless Communications and Mobile Computing},
       volume = {10},
       number = {10},
       pages = {1350-1365},
       year = {2010},
       abstract = {In this paper, we present three experimental proof-of-concepts: first, we demonstrate a ubiquitous computing framework (UCF), which is a network of interacting technologies that support humans ubiquitously. We then present practical work based on this UCF framework: TalkingPoints, which was originally developed for use at trading fairs in order to identify each participant and company via transponder and provide specific information during and after use. Finally, we propose GARFID, a concept for using advanced technologies for teaching young children. The main outcome of this research is that the concept of UCF raises a lot of possibilities, which can bring value and benefits for end-users. When one follows the working-is-learning paradigm, it can be seen that the implementation of this type of technology can support life long learning (LLL), thereby providing evidence that technology can benefit everybody and make life easier. },
       doi = {10.1002/wcm.715},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=231938&pCurrPk=36725}
    }

  • [c62] A. Holzinger, S. Mayr, W. Slany, and M. Debevc, “The influence of AJAX on Web Usability“, in ICE-B 2010 – ICETE The International Joint Conference on e-Business and Telecommunications, 2010, pp. 124-127.
    [BibTeX] [Abstract] [Download PDF]

    In this paper we discuss some pros and cons of using AJAX for increasing the usability of Web applications. As AJAX allows Web applications to look like desktop applications, it can increase the learnability of a Web application. Nevertheless, AJAX can also be the source of end user frustration if the XMLHttpRequest is not supported by the browser, Javascript is not available, or an Internet connection is missing. We also provide some workarounds for server response time gaps, for example by providing visible user feedback messages, and by enabling the back button to work properly. [Information Systems]

    @inproceedings{c62,
       author = {Holzinger, Andreas and Mayr, Stefan and Slany, Wolfgang and Debevc, Matjaz},
       title = {The influence of AJAX on Web Usability},
       booktitle = {ICE-B 2010 - ICETE The International Joint Conference on e-Business and Telecommunications},
       publisher = {INSTICC IEEE},
       pages = {124-127},
       year = {2010},
       abstract = {In this paper we discuss some pros and cons of using AJAX for increasing the usability of Web applications. As AJAX allows Web applications to look like desktop applications, it can increase the learnability of a Web application. Nevertheless, AJAX can also be the source of end user frustration if the XMLHttpRequest is not supported by the browser, Javascript is not available, or an Internet connection is missing. We also provide some workarounds for server response time gaps, for example by providing visible user feedback messages, and by enabling the back button to work properly. [Information Systems]},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=137847&pCurrPk=50882}
    }
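
    As a hedged illustration of the workarounds the abstract above mentions (visible feedback during server response gaps, a working back button, graceful degradation when XMLHttpRequest is unavailable), here is a minimal browser-side TypeScript sketch; the element ids and the history.pushState call (an HTML5 API that postdates the paper) are assumptions.

    // Illustrative only; the ids "status" and "content" are assumed.
    function loadFragment(url: string, pushHistory = true): void {
      if (typeof XMLHttpRequest === "undefined") {
        window.location.href = url;  // no AJAX support: fall back to a full page load
        return;
      }
      const status = document.getElementById("status");
      if (status) status.textContent = "Loading...";  // visible user feedback

      const xhr = new XMLHttpRequest();
      xhr.open("GET", url);
      xhr.onload = () => {
        if (status) status.textContent = "";
        document.getElementById("content")!.innerHTML = xhr.responseText;
        if (pushHistory) history.pushState({ url }, "", url);  // keep the back button meaningful
      };
      xhr.onerror = () => {
        if (status) status.textContent = "Connection lost - please try again.";
      };
      xhr.send();
    }

    // Restore the previous fragment when the user navigates back.
    window.onpopstate = (e: PopStateEvent) => {
      if (e.state && e.state.url) loadFragment(e.state.url, false);
    };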

  • [c60] A. Holzinger, S. Dorner, M. Födinger, A. Valdez, and M. Ziefle, “Chances of Increasing Youth Health Awareness through Mobile Wellness Applications“, in HCI in Work and Learning, Life and Leisure, Lecture Notes in Computer Science, LNCS 6389, G. Leitner, M. Hitz, and A. Holzinger, Eds., Berlin, Heidelberg: Springer, 2010, pp. 71-81.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The poor general state of health of the Austrian youth – which is possibly representative for the western industrial world – will have dramatic effects on our health care system in years to come. Health risks among adolescents, including smoking, alcohol, obesity, lack of physical activity and an unhealthy diet, will lead to an increase in chronic diseases. A preventive measure against such a development could be to reinforce health awareness through the use of web and mobile applications supporting self-observation and behavior change. In this paper, we present an overview of the latest developments in the area of mobile wellness and take a look at the features of applications that constitute the current state of the art, as well as their shortcomings and ways of overcoming these. Finally, we discuss the possibilities offered by new technological developments in the area of mobile devices and by incorporating the characteristics that make up the Web 2.0. [Information Systems, Smart Health]

    @incollection{c60,
       author = {Holzinger, Andreas and Dorner, Stefan and Födinger, Manuela and Calero Valdez, André and Ziefle, Martina},
       title = {Chances of Increasing Youth Health Awareness through Mobile Wellness Applications},
       booktitle = {HCI in Work and Learning, Life and Leisure, Lecture Notes in Computer Science, LNCS 6389},
       editor = {Leitner, Gerhard and Hitz, Martin and Holzinger, Andreas},
       publisher = {Springer},
       address = {Berlin, Heidelberg},
       pages = {71-81},
       year = {2010},
       abstract = {The poor general state of health of the Austrian youth – which is possibly representative for the western industrial world – will have dramatic effects on our health care system in years to come. Health risks among adolescents, including smoking, alcohol, obesity, lack of physical activity and an unhealthy diet, will lead to an increase in chronic diseases. A preventive measure against such a development could be to reinforce health awareness through the use of web and mobile applications supporting self-observation and behavior change. In this paper, we present an overview of the latest developments in the area of mobile wellness and take a look at the features of applications that constitute the current state of the art, as well as their shortcomings and ways of overcoming these. Finally, we discuss the possibilities offered by new technological developments in the area of mobile devices and by incorporating the characteristics that make up the Web 2.0. [Information Systems, Smart Health]},
       doi = {10.1007/978-3-642-16607-5_5},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=163015&pCurrPk=52640}
    }

  • [e16] A. Holzinger, Process Guide for Students for Interdisciplinary Work in Computer Science/Informatics. Second Edition, Norderstedt: BoD, 2010.
    [BibTeX] [Abstract] [Download PDF]

    The process of producing an academic work, whether a mini-project, diploma thesis, master’s thesis or PhD thesis, requires systematic knowledge and skills in order to answer the following questions: “How do I find a topic?”, “How do I obtain funding money?”, “How do I write a project proposal?”, “What is the organisational workflow?”, “How do I search the literature systematically?”, “Why should I read patents?”, “How can I organize my references?”, “Why English as a working language?”, “What is the formal structure of a thesis like?”, “What is the classical hypothetico-deductive research process?”, “Which research methods could I use?”, “How will my posters, my presentations and my written work be graded?”, “How do I contribute to a conference?”, “How do I contribute to an archival journal?”. These questions are discussed on the basis of the subjects Engineering (Computer Science/Informatics), Natural Sciences (Psychology) and Business (Software Engineering/Business), which can be bridged by the subject “Human-Computer Interaction and Usability Engineering (HCI&UE)”. Since science is trans-cultural, inter-subjective and reproducible, these fundamentals can be further applied to almost any subject.

    @book{e16,
       author = {Holzinger, Andreas},
       title = {Process Guide for Students for Interdisciplinary Work in Computer Science/Informatics. Second Edition},
       publisher = {BoD},
       address = {Norderstedt},
       year = {2010},
       abstract = {The process of producing an academic work, whether a mini-project, diploma thesis, master’s thesis or PhD thesis, requires systematic knowledge and skills in order to answer the following questions: “How do I find a topic?”, “How do I obtain funding money?”, “How do I write a project proposal?”, “What is the organisational workflow?”, “How do I search the literature systematically?”, “Why should I read patents?”, “How can I organize my references?”, “Why English as a working language?”, “What is the formal structure of a thesis like?”, “What is the classical hypothetico-deductive research process?”, “Which research methods could I use?”, “How will my posters, my presentations and my written work be graded?”, “How do I contribute to a conference?”, “How do I contribute to an archival journal?”. These questions are discussed on the basis of the subjects Engineering (Computer Science/Informatics), Natural Sciences (Psychology) and Business (Software Engineering/Business), which can be bridged by the subject “Human-Computer Interaction and Usability Engineering (HCI&UE)”. Since science is trans-cultural, inter-subjective and reproducible, these fundamentals can be further applied to almost any subject.},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1062118&pCurrPk=52615}
    }

  • [c57] S. Graf, Kinshuk, and A. Holzinger, “International Workshop on Enabling User Experience with Future Interactive Learning Systems (UXFUL 2010)“, in HCI in Work and Learning, Life and Leisure, Lecture Notes in Computer Science, LNCS 6389, G. Leitner, M. Hitz, and A. Holzinger, Eds., Berlin, Heidelberg: Springer, 2010, pp. 318-321.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Nowadays most educational institutions use learning systems in order to provide blended or fully online courses in a formal setting with high similarity to learning in a classroom. However, new technologies such as mobile, pervasive and ubiquitous technologies can enable learners to have richer learning experiences through learning that can take place whenever learners are interested in learning, at any time and anywhere. Multimodal, smart and intelligent devices make the interaction between the learners and the system more natural and intuitive, and considering the learners’ current situation and characteristics allows the personalization and adaptation of learning material and activities, leading to more effective learning by providing learners with information that is relevant for them. This workshop brings together researchers from Psychology and Computer Science, aiming at discussing research on using and incorporating such new technologies in learning systems and, therefore, on providing learners with rich learning experiences at any time and anywhere, in a more intuitive and personalized way. [Information Systems, Smart Computing]

    @incollection{c57,
       author = {Graf, Sabine and Kinshuk and Holzinger, Andreas},
       title = {International Workshop on Enabling User Experience with Future Interactive Learning Systems (UXFUL 2010)},
       booktitle = {HCI in Work and Learning, Life and Leisure, Lecture Notes in Computer Science, LNCS 6389},
       editor = {Leitner, Gerhard and Hitz, Martin and Holzinger, Andreas},
       publisher = {Springer},
       address = {Berlin, Heidelberg},
       pages = {318-321},
       year = {2010},
       abstract = {Nowadays most educational institutions use learning systems in order to provide blended or fully online courses in a formal setting with high similarity to learning in a classroom. However, new technologies such as mobile, pervasive and ubiquitous technologies can enable learners to have richer learning experiences through learning that can take place whenever learners are interested in learning, at any time and anywhere. Multimodal, smart and intelligent devices make the interaction between the learners and the system more natural and intuitive, and considering the learners’ current situation and characteristics allows the personalization and adaptation of learning material and activities, leading to more effective learning by providing learners with information that is relevant for them. This workshop brings together researchers from Psychology and Computer Science, aiming at discussing research on using and incorporating such new technologies in learning systems and, therefore, on providing learners with rich learning experiences at any time and anywhere, in a more intuitive and personalized way. [Information Systems, Smart Computing]},
       doi = {10.1007/978-3-642-16607-5_21},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1061923&pCurrPk=52575}
    }

  • [c59] M. Debevc, P. Kosec, and A. Holzinger, “E-Learning Accessibility for the Deaf and Hard of Hearing – Practical Examples and Experiences“, in HCI in Work and Learning, Life and Leisure, Lecture Notes in Computer Science, LNCS 6389, G. Leitner, M. Hitz, and A. Holzinger, Eds., Berlin, Heidelberg: Springer, 2010, pp. 203-213.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The development of information and communication technology has offered new horizons to the deaf and hard of hearing for their integration into the working, social and economic environment. Despite the positive attitude of international guidelines, the lack of accessibility of e-learning material is still noticeable for these users. The process of adapting e-learning materials for the deaf and hard of hearing requires a different approach and guidelines for properly displaying sign language video. This paper presents basic e-learning accessibility guidelines for the deaf and hard of hearing and basic directions for the suitable design of accessible e-learning sites. An e-learning course (European Computer Driving License Course – ECDL) for the deaf, an automated video recording system and the transparent presentation of a sign language interpreter within the e-learning material are used as examples of good practice. Evaluations of these examples show a high degree of satisfaction, ease of use and comprehension. [Information Systems, Smart Health]

    @incollection{c59,
       author = {Debevc, Matjaž and Kosec, Primož and Holzinger, Andreas},
       title = {E-Learning Accessibility for the Deaf and Hard of Hearing - Practical Examples and Experiences},
       booktitle = {HCI in Work and Learning, Life and Leisure, Lecture Notes in Computer Science, LNCS 6389},
       editor = {Leitner, Gerhard and Hitz, Martin and Holzinger, Andreas},
       publisher = {Springer},
       address = {Berlin, Heidelberg},
       pages = {203-213},
       year = {2010},
       abstract = {The development of information and communication technology has offered new horizons to the deaf and hard of hearing for their integration into the working, social and economic environment. Despite the positive attitude of international guidelines, the lack of accessibility of e-learning material is still noticeable for these users. The process of adapting e-learning materials for the deaf and hard of hearing requires a different approach and guidelines for properly displaying sign language video. This paper presents basic e-learning accessibility guidelines for the deaf and hard of hearing and basic directions for the suitable design of accessible e-learning sites. An e-learning course (European Computer Driving License Course – ECDL) for the deaf, an automated video recording system and the transparent presentation of a sign language interpreter within the e-learning material are used as examples of good practice. Evaluations of these examples show a high degree of satisfaction, ease of use and comprehension. [Information Systems, Smart Health]},
       doi = {10.1007/978-3-642-16607-5_13},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=233173&pCurrPk=52576}
    }

  • [c66] A. Calero Valdez, M. Ziefle, F. Alagöz, and A. Holzinger, “Mental Models of Menu Structures in Diabetes Assistants“, in Computers Helping People with Special Needs, Lecture Notes in Computer Science, LNCS 6180, K. Miesenberger, J. Klaus, W. Zagler, and A. Karshmer, Eds., Berlin, Heidelberg: Springer, 2010, pp. 584-591.
    [BibTeX] [Abstract] [DOI]

    Demographic change with regard to an aging population with an increasing number of diabetes patients will put a strain on the financial sustainability of health care in all modern societies. Electronic living assistants for diabetes patients might help lift the burden on taxpayers, if they are usable for the heterogeneous user group. Research has shown that correct mental models of device menu structures might help users in handling electronic devices. This exploratory study investigates the construction and facilitation of spatial mental models for a menu structure of a diabetes living assistant and relates them to performance in the usage of a device. Furthermore, the impact of age, domain knowledge and technical expertise on the complexity and quality of the mental model is evaluated. Results indicate that even having a simplified spatial representation of the menu structure increases navigation performance. Interestingly, it was not the overall correctness of the model that was important for task success but rather the amount of route knowledge within the model.

    @incollection{c66,
       author = {Calero Valdez, André and Ziefle, Martina and Alagöz, Firat and Holzinger, Andreas},
       title = {Mental Models of Menu Structures in Diabetes Assistants},
       booktitle = {Computers Helping People with Special Needs, Lecture Notes in Computer Science, LNCS 6180},
       editor = {Miesenberger, Klaus and Klaus, Joachim and Zagler, Wolfgang and Karshmer, Arthur},
       publisher = {Springer},
       address = {Berlin, Heidelberg},
       pages = {584-591},
       year = {2010},
       abstract = {Demographic change with regard to an aging population with an increasing number of diabetes patients will put a strain on the financial sustainability of health care in all modern societies. Electronic living assistants for diabetes patients might help lift the burden on taxpayers, if they are usable for the heterogeneous user group. Research has shown that correct mental models of device menu structures might help users in handling electronic devices. This exploratory study investigates the construction and facilitation of spatial mental models for a menu structure of a diabetes living assistant and relates them to performance in the usage of a device. Furthermore, the impact of age, domain knowledge and technical expertise on the complexity and quality of the mental model is evaluated. Results indicate that even having a simplified spatial representation of the menu structure increases navigation performance. Interestingly, it was not the overall correctness of the model that was important for task success but rather the amount of route knowledge within the model.},
       doi = {10.1007/978-3-642-14100-3_87}
    }

  • [c56] M. Bloice, M. Kreuzthaler, K. Simonic, and A. Holzinger, “On the Paradigm Shift of Search on Mobile Devices: Some Remarks on User Habits“, in HCI in Work and Learning, Life and Leisure, Lecture Notes in Computer Science, LNCS 6389, G. Leitner, M. Hitz, and A. Holzinger, Eds., Berlin, Heidelberg: Springer , 2010, pp. 493-496.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    This paper addresses a paradigm shift in the way the web is being searched. This shift is occurring due to the increasing percentage of search requests being made from mobile devices, changing the way users search the web. This change is occurring for two reasons: first, users of smart phones are no longer searching the web relying on generic, horizontal search engines as they do on the desktop, and second, smart phones are far more aware of the user’s context than desktop machines. Smart phones typically include multiple sensors that can describe the user’s current context in a very accurate way, something the standard desktop machine cannot normally do. This shift will mean changes for the information retrieval community, the developers of applications, the developers of online services, usability engineers, and the developers of search engines themselves.

    @incollection{c56,
       author = {Bloice, Marcus and Kreuzthaler, Markus and Simonic, Klaus-Martin and Holzinger, Andreas},
       title = {On the Paradigm Shift of Search on Mobile Devices: Some Remarks on User Habits},
       booktitle = {HCI in Work and Learning, Life and Leisure, Lecture Notes in Computer Science, LNCS 6389},
       editor = {Leitner, Gerhard and Hitz, Martin and Holzinger, Andreas},
       publisher = {Springer },
       address = {Berlin, Heidelberg},
       pages = {493-496},
       year = {2010},
       abstract = {This paper addresses a paradigm shift in the way the web is being searched. This shift is occurring due to the increasing percentage of search requests being made from mobile devices, changing the way users search the web. This change is occurring for two reasons: first, users of smart phones are no longer searching the web relying on generic, horizontal search engines as they do on the desktop, and second, smart phones are far more aware of the user’s context than desktop machines. Smart phones typically include multiple sensors that can describe the user’s current context in a very accurate way, something the standard desktop machine cannot normally do. This shift will mean changes for the information retrieval community, the developers of applications, the developers of online services, usability engineers, and the developers of search engines themselves.},
       doi = {10.1007/978-3-642-16607-5_35},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=231640&pCurrPk=52642}
    }
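
    To make the context-awareness point of the entry above concrete, a small hedged sketch: a mobile client can attach sensed context, such as position from the standard Geolocation API, to a search request, and degrade to a plain query when the sensor is unavailable. The endpoint and parameter names are invented for illustration.

    // Illustrative only; "/search", "lat" and "lon" are assumed, not a real API.
    function contextualSearch(query: string): void {
      const send = (params: URLSearchParams): void => {
        fetch(`/search?${params.toString()}`)
          .then(r => r.json())
          .then(results => console.log(results));
      };

      const params = new URLSearchParams({ q: query });
      if ("geolocation" in navigator) {
        navigator.geolocation.getCurrentPosition(
          pos => {
            params.set("lat", String(pos.coords.latitude));   // sensed context
            params.set("lon", String(pos.coords.longitude));
            send(params);
          },
          () => send(params)  // no permission or no fix: search without location
        );
      } else {
        send(params);
      }
    }

    contextualSearch("pharmacy");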

  • [c56a] F. Alagoez, A. C. Valdez, W. Wilkowska, M. Ziefle, S. Dorner, and A. Holzinger, “From cloud computing to mobile Internet, from user focus to culture and hedonism: The crucible of mobile health care and Wellness applications“, in 5th International Conference on Pervasive Computing and Applications (ICPCA), 2010, pp. 38-45.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    With the rise of mobile Internet and cloud computing, new ubiquitous medical services will emerge, coinciding with changes in demographics and social structures. Mobile e-health and wellness applications can help relieve the burden of accelerating health care costs due to aging societies. In order to leverage these new innovations, a holistic approach must be considered. Facilitating user-centered design, acceptance models for user diversity, and cultural as well as hedonic aspects can lead to the development of services that improve therapy compliance and can even change the youth’s lifestyle. An overview of such applications is presented and put into a cultural context. [Information Systems, Mobile Computing, Smart Health]

    @inproceedings{c56a,
       author = {Alagoez, F. and Valdez, A. C. and Wilkowska, W. and Ziefle, M. and Dorner, S. and Holzinger, A.},
       title = {From cloud computing to mobile Internet, from user focus to culture and hedonism: The crucible of mobile health care and Wellness applications},
       booktitle = {5th International Conference on Pervasive Computing and Applications (ICPCA)},
       publisher = {IEEE},
       pages = {38-45},
       year = {2010},
       abstract = {With the rise of mobile Internet and cloud computing, new ubiquitous medical services will emerge, coinciding with changes in demographics and social structures. Mobile e-health and wellness applications can help relieve the burden of accelerating health care costs due to aging societies. In order to leverage these new innovations, a holistic approach must be considered. Facilitating user-centered design, acceptance models for user diversity, and cultural as well as hedonic aspects can lead to the development of services that improve therapy compliance and can even change the youth's lifestyle. An overview of such applications is presented and put into a cultural context. [Information Systems, Mobile Computing, Smart Health]},
       doi = {10.1109/ICPCA.2010.5704072},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1062108&pCurrPk=52641}
    }

2009

  • [j24] A. Holzinger, M. D. Kickmeier-Rust, S. Wassertheurer, and M. Hessinger, “Learning performance with interactive simulations in medical education: Lessons learned from results of learning complex physiological models with the HAEMOdynamics SIMulator“, Computers and Education, vol. 52, iss. 2, pp. 292-301, 2009.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Objective: Since simulations are often accepted uncritically, with excessive emphasis being placed on technological sophistication at the expense of underlying psychological and educational theories, we evaluated the learning performance of simulation software, in order to gain insight into the proper use of simulations for application in medical education. Design: The authors designed and evaluated a software package, following user-centered development, which they call the Haemodynamics Simulator (HAEMOSIM), for the simulation of complex physiological models, e.g. the modeling of arterial blood flow dependent on the pressure gradient, radius and bifurcations, and shear-stress and blood flow profiles depending on viscosity and radius. Measurements: In a quasi-experimental real-life setup, the authors compared the learning performance of 96 medical students for three conditions: (1) conventional text-based lesson; (2) HAEMOSIM alone and (3) HAEMOSIM with a combination of additional material and support, found necessary during user-centered development. The individual student’s learning time was unvarying in all three conditions. Results: While the first two settings produced equivalent results, the combination of additional support and HAEMOSIM yielded a significantly higher learning performance. These results are discussed regarding Mayer’s multimedia learning theory, Sweller’s cognitive load theory, and claims of prior research on utilizing interactive simulations for learning. Conclusion: The results showed that simulations can be beneficial for learning complex concepts; however, interacting with sophisticated simulations strains the limitations of cognitive processes. Therefore, the successful application of simulations requires careful additional guidance from medical professionals and a certain amount of previous knowledge on the part of the learners. The inclusion of pedagogical and psychological expertise into the design and development of educational software is essential. [Information Systems, Computational Simulation]

    @article{j24,
       year = {2009},
       author = {Holzinger, A. and Kickmeier-Rust, M. D. and Wassertheurer, S. and Hessinger, M.},
       title = {Learning performance with interactive simulations in medical education: Lessons learned from results of learning complex physiological models with the HAEMOdynamics SIMulator},
       journal = {Computers & Education},
       volume = {52},
       number = {2},
       pages = {292-301},
       abstract = {Objective: Since simulations are often accepted uncritically, with excessive emphasis being placed on technological sophistication at the expense of underlying psychological and educational theories, we evaluated the learning performance of simulation software, in order to gain insight into the proper use of simulations for application in medical education. Design: The authors designed and evaluated a software package, following user-centered development, which they call the Haemodynamics Simulator (HAEMOSIM), for the simulation of complex physiological models, e.g., the modeling of arterial blood flow dependent on the pressure gradient, radius and bifurcations, and shear-stress and blood flow profiles depending on viscosity and radius. Measurements: In a quasi-experimental real-life setup, the authors compared the learning performance of 96 medical students under three conditions: (1) a conventional text-based lesson; (2) HAEMOSIM alone; and (3) HAEMOSIM with a combination of additional material and support, found necessary during user-centered development. The individual student's learning time was constant in all three conditions. Results: While the first two settings produced equivalent results, the combination of additional support and HAEMOSIM yielded a significantly higher learning performance. These results are discussed with regard to Mayer's multimedia learning theory, Sweller's cognitive load theory, and claims of prior research on utilizing interactive simulations for learning. Conclusion: The results showed that simulations can be beneficial for learning complex concepts; however, interacting with sophisticated simulations strains the limits of cognitive processes. Successful application of simulations therefore requires careful additional guidance from medical professionals and a certain amount of previous knowledge on the part of the learners. The inclusion of pedagogical and psychological expertise in the design and development of educational software is essential. [Information Systems, Computational Simulation]},
       keywords = {Information Systems, Simulation, Modeling},
       doi = {10.1016/j.compedu.2008.08.008},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=92360&pCurrPk=38022}
    }

  • [c55] A. Holzinger, M. D. Kickmeier-Rust, and M. Ebner, “Interactive Technology for Enhancing Distributed Learning: A Study on Weblogs“, in HCI 2009 23rd British HCI Group Annual Conference on People and Computers: Celebrating People and Technology, Cambridge University (UK): British Computer Society, 2009, pp. 309-312.
    [BibTeX] [Abstract] [Download PDF]

    In this study, it was investigated whether, and to what extent, Web 2.0 technologies, specifically Weblogs, can be a suitable instrument for enhancing the practice of distributed learning. In educational settings based on traditional lectures, many students begin serious study only shortly before the exam. However, from previous empirical research it is known that the practice of distributed learning is much more conducive to retaining knowledge than massed learning. A 2×2 factorial design (within — repeated measures) with pre-test and post-test in a real-life setting was applied; the study lasted for the whole summer term 2007. Participants were N=28 computer science undergraduates of Graz University of Technology. We randomly assigned them to two groups of equal size: the experimental group given the Weblog treatment is referred to as Group W, whereas the control group with no access is referred to as Group C. Students of Group W were instructed to use the Weblog for developing their paper and studying during the lecture, and they were requested not to reveal their group affiliation. The results showed that the performance scores of Group W were significantly higher than those of Group C. This demonstrates that Weblogs can be an appropriate instrument to supplement a classical lecture in order to enable deeper processing of information over a longer period of time, consequently resulting in enhanced learning performance. [Web 2.0]

    @incollection{c55,
       year = {2009},
       author = {Holzinger, A. and Kickmeier-Rust, M.D.  and Ebner, M.},
       title = {Interactive Technology for Enhancing Distributed Learning: A Study on Weblogs},
       booktitle = {HCI 2009 23rd British HCI Group Annual Conference on People and Computers: Celebrating People and Technology},
       publisher = {British Computer Society},
       address = {Cambridge University (UK)},
       pages = {309-312},
       abstract = {In this study, it was investigated whether, and to what extent, Web 2.0 technologies, specifically Weblogs, can be a suitable instrument for enhancing the practice of distributed learning. In educational settings based on traditional lectures, many students begin serious study only shortly before the exam. However, from previous empirical research it is known that the practice of distributed learning is much more conducive to retaining knowledge than massed learning. A 2x2 factorial design (within -- repeated measures) with pre-test and post-test in a real-life setting was applied; the study lasted for the whole summer term 2007. Participants were N=28 computer science undergraduates of Graz University of Technology. We randomly assigned them to two groups of equal size: the experimental group given the Weblog treatment is referred to as Group W, whereas the control group with no access is referred to as Group C. Students of Group W were instructed to use the Weblog for developing their paper and studying during the lecture, and they were requested not to reveal their group affiliation. The results showed that the performance scores of Group W were significantly higher than those of Group C. This demonstrates that Weblogs can be an appropriate instrument to supplement a classical lecture in order to enable deeper processing of information over a longer period of time, consequently resulting in enhanced learning performance. [Web 2.0]},
       keywords = {e-Learning, e-Teaching, Collaborative Learning, Distributed Learning, Social Software},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=106842&pCurrPk=43785}
    }

  • [c54] M. Debevc, P. Kosec, M. Rotovnik, and A. Holzinger, “Accessible Multimodal Web Pages with Sign Language Translations for Deaf and Hard of Hearing Users“, in 20th International Conference on Database and Expert Systems Application, DEXA 2009, 2009, pp. 279-283.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    In this paper, we introduce a sign language interpreter module (SLIM), which delivers transparent sign language videos to deaf and hard of hearing users. Since their first language is sign language, they rely on the visual modality with some speech input. Therefore, in addition to text and images, a video of a sign language interpreter should be provided. The SLIM system uses layers for exposing videos over existing Web pages, which preserves the layout structure. Our evaluation study has shown that such a system is highly acceptable to deaf and hard of hearing users. Therefore, our proposal is to enhance the Web Content Accessibility Guidelines by adding an additional multimodal aspect for presenting existing Web information with transparent videos for deaf and hard of hearing users.

    @inproceedings{c54,
       year = {2009},
       author = {Debevc, M. and Kosec, P. and Rotovnik, M. and Holzinger, A.},
       title = {Accessible Multimodal Web Pages with Sign Language Translations for Deaf and Hard of Hearing Users},
       booktitle = {20th International Conference on Database and Expert Systems Application, DEXA 2009},
       editor = {Tjoa, A Min and Wagner, Roland},
       publisher = {IEEE},
       pages = {279-283},
       abstract = {In this paper, we introduce a sign language interpreter module (SLIM), which delivers transparent sign language videos to deaf and hard of hearing users. Since their first language is sign language, they rely on the visual modality with some speech input. Therefore, in addition to text and images, a video of a sign language interpreter should be provided. The SLIM system uses layers for exposing videos over existing Web pages, which preserves the layout structure. Our evaluation study has shown that such a system is highly acceptable to deaf and hard of hearing users. Therefore, our proposal is to enhance the Web Content Accessibility Guidelines by adding an additional multimodal aspect for presenting existing Web information with transparent videos for deaf and hard of hearing users.},
       keywords = {Web Information System},
       doi = {10.1109/DEXA.2009.92},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1074854&pCurrPk=43776}
    }

  • [c53] A. Auinger, M. Ebner, D. Nedbal, and A. Holzinger, “Mixing Content and Endless Collaboration – MashUps: Towards Future Personal Learning Environments“, in Universal Access in Human-Computer Interaction HCI, Part III: Applications and Services, Lecture Notes in Computer Science LNCS 5616, C. Stephanidis, Ed., Berlin, Heidelberg, New York: Springer, 2009, pp. 14-23.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The recent movement by major Web services towards making many application programming interfaces (APIs) available for public use has led to the development of the new MashUp technology, a method of merging content, services and applications from multiple web sites. The new technology is now being successfully applied in the academic community to enrich and improve learning and teaching applications. This paper examines its implementation and use, discusses methods and styles of usage and highlights the advantages and disadvantages of client and server applications, based on related work and recent experiences gathered with a large university-wide open learning management system (WBT-Master/TeachCenter of Graz University of Technology), which allows lecturers to use diverse web resources. [Information Systems]

    @incollection{c53,
       year = {2009},
       author = {Auinger, A. and Ebner, M. and Nedbal, D. and Holzinger, A.},
       title = {Mixing Content and Endless Collaboration – MashUps: Towards Future Personal Learning Environments},
       booktitle = {Universal Access in Human-Computer Interaction HCI, Part III: Applications and Services, Lecture Notes in Computer Science LNCS 5616},
       editor = {Stephanidis, C.},
       publisher = {Springer},
       address = {Berlin, Heidelberg, New York},
       pages = {14-23},
       abstract = {The recent movement by major Web services towards making many application programming interfaces (APIs) available for public use has led to the development of the new MashUp technology, a method of merging content, services and applications from multiple web sites. The new technology is now being successfully applied in the academic community to enrich and improve learning and teaching applications. This paper examines its implementation and use, discusses methods and styles of usage and highlights the advantages and disadvantages of client and server applications, based on related work and recent experiences gathered with a large university-wide open learning management system (WBT-Master/TeachCenter of Graz University of Technology), which allows lecturers to use diverse web resources. [Information Systems]},
       keywords = {Application Programming Interfaces, MashUp technologies, content integration, content fusion},
       doi = {10.1007/978-3-642-02713-0_2},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=108510&pCurrPk=42744}
    }

  • [c52] M. Ebner, C. Stickel, N. Scerbakov, and A. Holzinger, “A Study on the Compatibility of Ubiquitous Learning (u-Learning) Systems at University Level“, in Universal Access in Human-Computer Interaction. Applications and Services, Lecture Notes in Computer Science, LNCS 5616, C. Stephanidis, Ed., Berlin, Heidelberg: Springer, 2009, pp. 34-43.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Graz University of Technology has a long tradition in the design, development and research of university-wide Learning Management Systems (LMS). Inspired by the iPhone style, the available system has now been extended by the addition of a mobile viewer, which grants students mobile access to all available online content. In this paper, we report on the lessons learned within a study on user experience with this specially designed LMS mobile viewer. The User Experience (UX) was measured by application of a 26-item questionnaire covering the six factors Attractiveness, Perspicuity, Efficiency, Dependability, Stimulation and Novelty, according to Laugwitz et al. (2008). The results showed high rates of acceptance, although the novelty of our approach received a surprisingly low rating amongst the novice end users. [Information Systems]

    @incollection{c52,
       year = {2009},
       author = {Ebner, Martin and Stickel, Christian and Scerbakov, Nick and Holzinger, Andreas},
       title = {A Study on the Compatibility of Ubiquitous Learning (u-Learning) Systems at University Level},
       booktitle = {Universal Access in Human-Computer Interaction. Applications and Services, Lecture Notes in Computer Science, LNCS 5616},
       editor = {Stephanidis, Constantine},
       publisher = {Springer},
       address = {Berlin, Heidelberg},
       pages = {34-43},
       abstract = {Graz University of Technology has a long tradition in the design, development and research of university-wide Learning Management Systems (LMS). Inspired by the iPhone style, the available system has now been extended by the addition of a mobile viewer, which grants students mobile access to all available online content. In this paper, we report on the lessons learned within a study on user experience with this specially designed LMS mobile viewer. The User Experience (UX) was measured by application of a 26-item questionnaire covering the six factors Attractiveness, Perspicuity, Efficiency, Dependability, Stimulation and Novelty, according to Laugwitz et al. (2008). The results showed high rates of acceptance, although the novelty of our approach received a surprisingly low rating amongst the novice end users. [Information Systems]},
       keywords = {Mobile Usability, Factor analysis, User Experience},
       doi = {10.1007/978-3-642-02713-0_4},
       url = {http://rd.springer.com/content/pdf/10.1007%2F978-3-642-02713-0_4.pdf}
    }

  • [c51] A. Holzinger, S. Softic, C. Stickel, M. Ebner, and M. Debevc, “Intuitive E-Teaching by Using Combined HCI Devices: Experiences with Wiimote Applications“, in Universal Access in Human-Computer Interaction. Applications and Services, Lecture Notes in Computer Science, LNCS 5616, C. Stephanidis, Ed., Berlin, Heidelberg: Springer, 2009, pp. 44-52.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The wide availability of game-based technologies and sophisticated e-Learning possibilities creates new demands and challenges for Human–Computer Interaction and Usability Engineering (HCI&UE). Solid research in HCI must support improvement in learning ability and creativity for both teachers and students. According to recent market surveys, the Wii remote controller, or Wiimote, is currently more widespread than standard Tablet PCs and is the most used computer input device worldwide. As a collection of many sensors, including Bluetooth technology, accelerometers and IR sensors, the Wiimote is of great interest for HCI experiments, especially in the area of e-Learning and e-Teaching. In this paper, we present results gained from an investigation of the potential of the Wiimote both as a standard input device – such as a mouse or presenter – and as a gesture and finger tracking sensor. We demonstrate, on the basis of examples from e-Teaching, how easily everyday gestures can be interpreted in regular computer applications utilizing the Wiimote’s hardware modules and some additional software modules. [Ubiquitous computing]

    @incollection{c51,
       year = {2009},
       author = {Holzinger, Andreas and Softic, Selver and Stickel, Christian and Ebner, Martin and Debevc, Matjaz},
       title = {Intuitive E-Teaching by Using Combined HCI Devices: Experiences with Wiimote Applications},
       booktitle = {Universal Access in Human-Computer Interaction. Applications and Services,  Lecture Notes in Computer Science, LNCS 5616},
       editor = {Stephanidis, Constantine},
       publisher = {Springer },
       address = {Berlin Heidelberg},
       pages = {44-52},
       abstract = {The wide availability of game-based technologies and sophisticated e-Learning possibilities creates new demands and challenges for Human–Computer Interaction and Usability Engineering (HCI&UE). Solid research in HCI must support improvement in learning ability and creativity for both teachers and students. According to recent market surveys, the Wii remote controller, or Wiimote, is currently more widespread than standard Tablet PCs and is the most used computer input device worldwide. As a collection of many sensors, including Bluetooth technology, accelerometers and IR sensors, the Wiimote is of great interest for HCI experiments, especially in the area of e-Learning and e-Teaching. In this paper, we present results gained from an investigation of the potential of the Wiimote both as a standard input device – such as a mouse or presenter – and as a gesture and finger tracking sensor. We demonstrate, on the basis of examples from e-Teaching, how easily everyday gestures can be interpreted in regular computer applications utilizing the Wiimote’s hardware modules and some additional software modules. [Ubiquitous computing]},
       keywords = {Ubiquitous computing},
       doi = {10.1007/978-3-642-02713-0_5},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=108154&pCurrPk=42743}
    }

  • [c50] C. Stickel, M. Ebner, S. Steinbach-Nordmann, G. Searle, and A. Holzinger, “Emotion Detection: Application of the Valence Arousal Space for Rapid Biological Usability Testing to Enhance Universal Access“, in Universal Access in Human-Computer Interaction. Addressing Diversity, Lecture Notes in Computer Science, LNCS 5614, C. Stephanidis, Ed., Berlin, Heidelberg: Springer, 2009, pp. 615-624.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Emotion is an important mental and physiological state, influencing cognition, perception, learning, communication, decision making, etc. It is considered a definitively important aspect of user experience (UX), although it is as yet not well developed and, above all, lacking experimental evidence. This paper deals with an application for emotion detection in the usability testing of software. It describes an approach that utilizes the valence arousal space for emotion modeling in a formal experiment. Our study revealed correlations between low performance and negative emotional states. Reliable emotion detection in usability tests will help to prevent negative emotions and attitudes in the final products. This can be a great advantage in enhancing Universal Access. [Physiological Computing]

    @incollection{c50,
       year = {2009},
       author = {Stickel, Christian and Ebner, Martin and Steinbach-Nordmann, Silke and Searle, Gig and Holzinger, Andreas},
       title = {Emotion Detection: Application of the Valence Arousal Space for Rapid Biological Usability Testing to Enhance Universal Access},
       booktitle = {Universal Access in Human-Computer Interaction. Addressing Diversity, Lecture Notes in Computer Science, LNCS 5614},
       editor = {Stephanidis, Constantine},
       publisher = {Springer},
       address = {Berlin, Heidelberg},
       pages = {615-624},
       abstract = {Emotion is an important mental and physiological state, influencing cognition, perception, learning, communication, decision making, etc. It is considered a definitively important aspect of user experience (UX), although it is as yet not well developed and, above all, lacking experimental evidence. This paper deals with an application for emotion detection in the usability testing of software. It describes an approach that utilizes the valence arousal space for emotion modeling in a formal experiment. Our study revealed correlations between low performance and negative emotional states. Reliable emotion detection in usability tests will help to prevent negative emotions and attitudes in the final products. This can be a great advantage in enhancing Universal Access. [Physiological Computing]},
       keywords = {Biological Rapid Usability Testing, Valence, Arousal, Emotion},
       doi = {10.1007/978-3-642-02707-9_70},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=108496&pCurrPk=42742}
    }

  • [c49] M. D. Bloice, F. Wotawa, and A. Holzinger, “Java’s Alternatives and the Limitations of Java when Writing Cross-Platform Applications for Mobile Devices in the Medical Domain“, in 31st International Conference on Information Technology Interfaces (ITI 2009), 2009, pp. 47-54.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    In this paper we discuss alternatives to Java ME when writing medical applications for mobile devices across multiple platforms. The Java virtual machine, which runs Java programs, is not available for the majority of handheld devices, such as Palm PDAs, Windows Mobile based devices, or the Apple iPhone. Moreover, we conclude that full GUI interaction, such as the interaction provided by Java programs, is not an absolute requirement for making a program useful, and we developed an HTML-based medical information application to illustrate this. This program displays various sample patient parameters to the user in graph form, and was tested on multiple platforms and operating systems to demonstrate its platform/OS independence and usefulness. [Information Systems, Software Engineering]

    @inproceedings{c49,
       year = {2009},
       author = {Bloice, M.D. and Wotawa, F. and Holzinger, A.},
       title = {Java’s Alternatives and the Limitations of Java when Writing Cross-Platform Applications for Mobile Devices in the Medical Domain},
       booktitle = {31st International Conference on Information Technology Interfaces (ITI 2009)},
       editor = {Luzar-Stiffler, V. and Dobric, V. H. and Bekic, Z.},
       publisher = {IEEE},
       pages = {47-54},
       abstract = {In this paper we discuss alternatives to Java ME when writing medical applications for mobile devices across multiple platforms. The Java virtual machine, which runs Java programs, is not available for the majority of handheld devices, such as Palm PDAs, Windows Mobile based devices, or the Apple iPhone. Moreover, we conclude that full GUI interaction, such as the interaction provided by Java programs, is not an absolute requirement for making a program useful, and we developed an HTML-based medical information application to illustrate this. This program displays various sample patient parameters to the user in graph form, and was tested on multiple platforms and operating systems to demonstrate its platform/OS independence and usefulness. [Information Systems, Software Engineering]},
       keywords = {Software Engineering, Programming, Java},
       doi = {10.1109/ITI.2009.5196053},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1076905&pCurrPk=43270}
    }

  • [c48] C. Stickel, K. Maier, M. Ebner, and A. Holzinger, “The Modelling of Harmonious Colour Combinations for improved Usability and User Experience (UX)“, in 31st International Conference on Information Technology Interfaces (ITI 2009), 2009, pp. 323-328.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    This study compares three different models for the calculation and prediction of harmonious color combinations. For this purpose, a dataset of user-rated color combinations was taken from a large online database. The user ratings were compared to the outcome of the three models on this dataset in order to test the performance of the models. The first model is based on the idea that color combinations are more pleasing the greater their difference in brightness. The second model is a slightly modified version of Ou & Luo (2006) using chromatic difference, lightness sum, lightness difference and hue effect. The last model was developed by us and is based on an experiment of Polzella & Montgomery (1993). From the outcome of their experiment we generated a lookup table for single-color ratings. These ratings are then used in a formula which is able to evaluate the color harmony of combinations of up to five colors. This model also performed best in the overall comparison between the three color harmony models. [Usability Engineering]

    @inproceedings{c48,
       year = {2009},
       author = {Stickel, C. and Maier, K. and Ebner, M.  and Holzinger, A. },
       title = {The Modelling of Harmonious Colour Combinations for improved Usability and User Experience (UX)},
       booktitle = {31st International Conference on Information Technology Interfaces (ITI 2009)},
       editor = {Luzar-Stiffler, V. and Dobric, V. H. and Bekic, Z. },
       publisher = {IEEE},
       pages = {323-328},
       abstract = {This study compares three different models for the calculation and prediction of harmonious color combinations. For this purpose, a dataset of user-rated color combinations was taken from a large online database. The user ratings were compared to the outcome of the three models on this dataset in order to test the performance of the models. The first model is based on the idea that color combinations are more pleasing the greater their difference in brightness. The second model is a slightly modified version of Ou & Luo (2006) using chromatic difference, lightness sum, lightness difference and hue effect. The last model was developed by us and is based on an experiment of Polzella & Montgomery (1993). From the outcome of their experiment we generated a lookup table for single-color ratings. These ratings are then used in a formula which is able to evaluate the color harmony of combinations of up to five colors. This model also performed best in the overall comparison between the three color harmony models. [Usability Engineering]},
       keywords = {Usability Engineering, User Experience, Methods},
       doi = {10.1109/ITI.2009.5196102},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1076917&pCurrPk=43272}
    }

  • [c47] E. L.-C. Law, T. Gamble, D. Schwarz, M. D. Kickmeier-Rust, and A. Holzinger, “A Mixed-Method Approach on Digital Educational Games for K12: Gender, Attitudes and Performance“, in Human-Computer Interaction and Usability for e-Inclusion. 5th Symposium of the Austrian Computer Society, USAB 2009, Lecture Notes in Computer Science (LNCS 5889), A. Holzinger and K. Miesenberger, Eds., Berlin, Heidelberg, New York: Springer, 2009, pp. 42-54.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Research on the influence of gender on attitudes towards and performance in digital educational games (DEGs) has quite a long history. Generally, males tend to play such games more engagingly than females; consequently, the attitude and performance of males using DEGs should presumably be higher than those of females. This paper reports an investigation of a DEG, developed to enhance the acquisition of geographical knowledge, carried out with British, German and Austrian K12 students aged between 11 and 14. Methods included a survey on initial design concepts, user tests on the system and two single-gender focus groups. Gender and cultural differences in gameplay habits, game type preferences and game character perceptions were observed. The results showed that both genders similarly improved their geographical knowledge, although boys tended to have a higher level of positive user experience than girls. The qualitative data from the focus groups illustrated some interesting gender differences in perceiving various aspects of the game. [Gamification]

    @incollection{c47,
       year = {2009},
       author = {Law, E. L.-C. and Gamble, T. and Schwarz, D. and Kickmeier-Rust, M.D. and Holzinger, A.},
       title = {A Mixed-Method Approach on Digital Educational Games for K12: Gender, Attitudes and Performance},
       booktitle = {Human-Computer Interaction and Usability for e-Inclusion. 5th Symposium of the Austrian Computer Society, USAB 2009, Lecture Notes in Computer Science (LNCS 5889)},
       editor = {Holzinger, A. and Miesenberger, K.},
       publisher = {Springer},
       address = {Berlin, Heidelberg, New York},
       pages = {42-54},
       abstract = {Research on the influence of gender on attitudes towards and performance in digital educational games (DEGs) has quite a long history. Generally, males tend to play such games more engagingly than females; consequently, the attitude and performance of males using DEGs should presumably be higher than those of females. This paper reports an investigation of a DEG, developed to enhance the acquisition of geographical knowledge, carried out with British, German and Austrian K12 students aged between 11 and 14. Methods included a survey on initial design concepts, user tests on the system and two single-gender focus groups. Gender and cultural differences in gameplay habits, game type preferences and game character perceptions were observed. The results showed that both genders similarly improved their geographical knowledge, although boys tended to have a higher level of positive user experience than girls. The qualitative data from the focus groups illustrated some interesting gender differences in perceiving various aspects of the game. [Gamification]},
       keywords = {User experience (UX), gender differences, digital educational game (DEG), performance, attitude, gamification},
       doi = {10.1007/978-3-642-10308-7_3},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=95372&pCurrPk=42747}
    }

  • [c46] A. Holzinger, C. Stickel, M. Fassold, and M. Ebner, “Seeing the System through the End Users’ Eyes: Shadow Expert Technique for Evaluating the Consistency of a Learning Management System“, in HCI and Usability for e-Inclusion, Lecture Notes in Computer Science, LNCS 5889, A. Holzinger and K. Miesenberger, Eds., Berlin, Heidelberg: Springer, 2009, pp. 178-192.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Interface consistency is an important basic concept in web design and has an effect on the performance and satisfaction of end users. Consistency also has significant effects on the learning performance of both expert and novice end users. Consequently, the evaluation of consistency within an e-learning system, and the ensuing eradication of irritating discrepancies in the user interface redesign, is a big issue. In this paper, we report on our experiences with the Shadow Expert Technique (SET) during the evaluation of the consistency of the user interface of a large university learning management system. The main objective of this new usability evaluation method is to understand the interaction processes of end users with a specific system interface. Two teams of usability experts worked independently from each other in order to maximize the objectivity of the results. The outcome of this SET method is a list of recommended changes to improve the user interaction processes and hence to facilitate high consistency. [Method]

    @incollection{c46,
       year = {2009},
       author = {Holzinger, Andreas and Stickel, Christian and Fassold, Markus and Ebner, Martin},
       title = {Seeing the System through the End Users’ Eyes: Shadow Expert Technique for Evaluating the Consistency of a Learning Management System},
       booktitle = {HCI and Usability for e-Inclusion, Lecture Notes in Computer Science, LNCS 5889},
       editor = {Holzinger, Andreas and Miesenberger, Klaus},
       publisher = {Springer},
       address = {Berlin, Heidelberg},
       pages = {178-192},
       abstract = {Interface consistency is an important basic concept in web design and has an effect on the performance and satisfaction of end users. Consistency also has significant effects on the learning performance of both expert and novice end users. Consequently, the evaluation of consistency within an e-learning system, and the ensuing eradication of irritating discrepancies in the user interface redesign, is a big issue. In this paper, we report on our experiences with the Shadow Expert Technique (SET) during the evaluation of the consistency of the user interface of a large university learning management system. The main objective of this new usability evaluation method is to understand the interaction processes of end users with a specific system interface. Two teams of usability experts worked independently from each other in order to maximize the objectivity of the results. The outcome of this SET method is a list of recommended changes to improve the user interaction processes and hence to facilitate high consistency. [Method]},
       keywords = {Shadow Expert Technique, Usability Test, Method, Performance Measurement},
       doi = {10.1007/978-3-642-10308-7_12},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1076918&pCurrPk=46021}
    }

  • [c45] Z. Hussain, W. Slany, and A. Holzinger, “Investigating Agile User-Centered Design in Practice: A Grounded Theory Perspective“, in HCI and Usability for e-Inclusion, A. Holzinger and K. Miesenberger, Eds., Berlin Heidelberg: Springer, 2009, vol. 5889, pp. 279-289.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    This paper investigates how the integration of agile methods and User-Centered Design (UCD) is carried out in practice. For this study, we applied grounded theory as a suitable qualitative approach to determine what is happening in actual practice. The data was collected through semi-structured interviews with professionals who have already worked with an integrated agile UCD methodology. Further data was collected by observing these professionals in their working context and by studying their documents, where possible. The themes that emerged from the study show that there is an increasing realization of the importance of usability in software development among agile team members. Requirements emerge over time, and usability tests based on both low- and high-fidelity prototypes are widely used in agile teams. UCD professionals and developers appreciate each other’s work, and both sides can learn from each other. [Software Engineering, User-Centered Design]

    @incollection{c45,
       year = {2009},
       author = {Hussain, Zahid and Slany, Wolfgang and Holzinger, Andreas},
       title = {Investigating Agile User-Centered Design in Practice: A Grounded Theory Perspective},
       booktitle = {HCI and Usability for e-Inclusion},
       editor = {Holzinger, Andreas and Miesenberger, Klaus},
       publisher = {Springer},
       address = {Berlin Heidelberg},
       volume = {5889},
       pages = {279-289},
       abstract = {This paper investigates how the integration of agile methods and User-Centered Design (UCD) is carried out in practice. For this study, we applied grounded theory as a suitable qualitative approach to determine what is happening in actual practice. The data was collected through semi-structured interviews with professionals who have already worked with an integrated agile UCD methodology. Further data was collected by observing these professionals in their working context and by studying their documents, where possible. The themes that emerged from the study show that there is an increasing realization of the importance of usability in software development among agile team members. Requirements emerge over time, and usability tests based on both low- and high-fidelity prototypes are widely used in agile teams. UCD professionals and developers appreciate each other’s work, and both sides can learn from each other. [Software Engineering, User-Centered Design]},
       keywords = {Agile Methods, Extreme Programming, Scrum, Usability, User-Centered Design, Grounded Theory},
       doi = {10.1007/978-3-642-10308-7_19},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=114877&pCurrPk=46006}
    }

  • [c44] Z. Hussain, W. Slany, and A. Holzinger, “Current State of Agile User-Centered Design: A Survey“, in HCI and Usability for e-Inclusion, USAB 2009, Lecture Notes in Computer Science, LNCS 5889, Berlin, Heidelberg: Springer, 2009, pp. 416-427.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Agile software development methods are quite popular nowadays and are being adopted at an increasing rate in industry every year. However, these methods still lack usability awareness in their development lifecycle, and the integration of usability/User-Centered Design (UCD) into agile methods is not adequately addressed. This paper presents the preliminary results of a recently conducted online survey regarding the current state of the integration of agile methods and usability/UCD. A worldwide response of 92 practitioners was received. The results show that the majority of practitioners perceive that the integration of agile methods with usability/UCD has added value to their adopted processes and to their teams; has resulted in the improvement of the usability and quality of the product developed; and has increased the satisfaction of the end-users of the product developed. The most frequently used HCI techniques are low-fidelity prototyping, conceptual designs, observational studies of users, usability expert evaluations, field studies, personas, rapid iterative testing, and laboratory usability testing. [Software Engineering, Usability Engineering]

    @incollection{c44,
       year = {2009},
       author = {Hussain, Zahid and Slany, Wolfgang and Holzinger, Andreas},
       title = {Current State of Agile User-Centered Design: A Survey},
       booktitle = {HCI and Usability for e-Inclusion, USAB 2009, Lecture Notes in Computer Science, LNCS 5889},
       publisher = {Springer },
       address = {Berlin, Heidelberg},
       pages = {416-427},
       abstract = {Agile software development methods are quite popular nowadays and are being adopted at an increasing rate in industry every year. However, these methods still lack usability awareness in their development lifecycle, and the integration of usability/User-Centered Design (UCD) into agile methods is not adequately addressed. This paper presents the preliminary results of a recently conducted online survey regarding the current state of the integration of agile methods and usability/UCD. A worldwide response of 92 practitioners was received. The results show that the majority of practitioners perceive that the integration of agile methods with usability/UCD has added value to their adopted processes and to their teams; has resulted in the improvement of the usability and quality of the product developed; and has increased the satisfaction of the end-users of the product developed. The most frequently used HCI techniques are low-fidelity prototyping, conceptual designs, observational studies of users, usability expert evaluations, field studies, personas, rapid iterative testing, and laboratory usability testing. [Software Engineering, Usability Engineering]},
       keywords = {Agile Methods, Extreme Programming, Scrum, Usability, User-Centered Design},
       doi = {10.1007/978-3-642-10308-7_30},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=114876&pCurrPk=46005}
    }

  • [e7] A. Holzinger and K. Miesenberger, HCI and Usability for e-Inclusion. 5th Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society, USAB 2009, Lecture Notes in Computer Science LNCS 5889, Heidelberg, Berlin, New York: Springer, 2009.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    The term e-Inclusion, also known as digital inclusion, is used within the European Union to encompass all activities related to the achievement of an inclusive information society. New information technologies always bring the risk of a digital divide; consequently, e-Inclusion puts emphasis on digital cohesion and on extending the opportunities of IT to all segments of the European population, including disadvantaged people, e.g., due to lack of education (e-Competences, e-Learning), age (e-Ageing), gender (equality = e-Quality), disabilities (e-Accessibility), ill health (e-Health) etc. At the European level, e-Inclusion is part of the third pillar of the i2010 policy initiative, managed by the Directorate General for Information Society and Media of the European Commission. We are convinced that the solution to these challenges can be found at the intersection of the disciplines of pedagogy, psychology and computer science (informatics). The interface of this trinity encompasses thinking, concepts and methods from the humanities, from natural science and from engineering science. Engineering science carries the most responsibility and ethical demands, since engineers are the ones who ensure the appropriate development. The daily actions of the end users must be the central concern, supporting them with newly available and rapidly emerging, ubiquitous and pervasive technologies. Obviously, an interdisciplinary view produces specific problems. On the one hand, government, universities and industry require interdisciplinary work. However, younger researchers especially, being new to their field and not yet firmly anchored in one single discipline, are still in danger of “falling between two seats.” It is certainly easier for researchers to gain depth and acknowledgement in a narrow scientific community by remaining within one single field. Everybody accepts the necessity of interdisciplinary work; however, it is difficult to gain honor for it. We are of the firm opinion that innovation and new insights often take place at the junction of two or more disciplines; consequently, this requires a much broader basis of knowledge, openness to other fields and more acceptance. [Information Systems]

    @book{e7,
       year = {2009},
       author = {Holzinger, Andreas and Miesenberger, Klaus},
       title = {HCI and Usability for e-Inclusion. 5th Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society, USAB 2009, Lecture Notes in Computer Science LNCS 5889},
       publisher = {Springer},
       address = {Heidelberg, Berlin, New York},
       abstract = {The term e-Inclusion, also known as digital inclusion, is used within the European Union to encompass all activities related to the achievement of an inclusive information society. New information technologies always bring the risk of a digital divide; consequently, e-Inclusion puts emphasis on digital cohesion and on extending the opportunities of IT to all segments of the European population, including disadvantaged people, e.g., due to lack of education (e-Competences, e-Learning), age (e-Ageing), gender (equality = e-Quality), disabilities (e-Accessibility), ill health (e-Health) etc. At the European level, e-Inclusion is part of the third pillar of the i2010 policy initiative, managed by the Directorate General for Information Society and Media of the European Commission. We are convinced that the solution to these challenges can be found at the intersection of the disciplines of pedagogy, psychology and computer science (informatics). The interface of this trinity encompasses thinking, concepts and methods from the humanities, from natural science and from engineering science. Engineering science carries the most responsibility and ethical demands, since engineers are the ones who ensure the appropriate development. The daily actions of the end users must be the central concern, supporting them with newly available and rapidly emerging, ubiquitous and pervasive technologies. Obviously, an interdisciplinary view produces specific problems. On the one hand, government, universities and industry require interdisciplinary work. However, younger researchers especially, being new to their field and not yet firmly anchored in one single discipline, are still in danger of “falling between two seats.” It is certainly easier for researchers to gain depth and acknowledgement in a narrow scientific community by remaining within one single field. Everybody accepts the necessity of interdisciplinary work; however, it is difficult to gain honor for it. We are of the firm opinion that innovation and new insights often take place at the junction of two or more disciplines; consequently, this requires a much broader basis of knowledge, openness to other fields and more acceptance. [Information Systems]},
       keywords = {e-Inclusion, European Information Society},
       doi = {10.1007/978-3-642-10308-7},
       url = {http://rd.springer.com/book/10.1007%2F978-3-642-10308-7}
    }

2008

  • [j23] C. Stickel, M. Ebner, and A. Holzinger, “Useful Oblivion Versus Information Overload in e-Learning Examples in the Context of Wiki Systems“, Journal of Computing and Information Technology (CIT), vol. 16, iss. 4, pp. 271-277, 2008.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Information overload refers to the state of having too much information to make a decision or remain informed about a topic. We present a novel approach to filtering, adapting and visualizing content inside a Wiki knowledge base. Thereby we pursue the question of how to optimize the process of learning, with respect to shorter time and higher quality, in the face of increasing and changing information. Our work adopts a consolidation mechanism of the human memory in order to reveal and shape key structures of a Wiki hypergraph. Our hypothesis so far is that visualization of these structures enables more efficient learning.

    @article{j23,
       year = {2008},
       author = {Stickel, Christian and Ebner, Martin and Holzinger, Andreas},
       title = {Useful Oblivion Versus Information Overload in e-Learning Examples in the Context of Wiki Systems},
       journal = {Journal of Computing and Information Technology (CIT)},
       volume = {16},
       number = {4},
       pages = {271-277},
       abstract = {Information overload refers to the state of having too much information to make a decision or remain informed about a topic. We present a novel approach to filtering, adapting and visualizing content inside a Wiki knowledge base. Thereby we pursue the question of how to optimize the process of learning, with respect to shorter time and higher quality, in the face of increasing and changing information. Our work adopts a consolidation mechanism of the human memory in order to reveal and shape key structures of a Wiki hypergraph. Our hypothesis so far is that visualization of these structures enables more efficient learning.},
       keywords = {Neural Networks, information visualization, mental models},
       doi = {10.2498/cit.1001677},
       url = {http://cit.srce.unizg.hr/index.php/CIT/article/download/1677/1381}
    }

  • [j22] A. Holzinger, W. Emberger, S. Wassertheurer, and L. Neal, “Design, development and evaluation of online interactive simulation software for learning human genetics“, Elektrotechnik und Informationstechnik, vol. 125, iss. 5, pp. 190-196, 2008.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    OBJECTIVE: In this paper, the authors describe the design, development and evaluation of specific simulation software for Cytogenetics training in order to demonstrate the usefulness of computer simulations for both teaching and learning of complex educational content. BACKGROUND: Simulations have a long tradition in medicine and can be very helpful for learning complex content, for example Cytogenetics, which is an integral part of diagnostics in dysmorphology, syndromology, prenatal and developmental diagnosis, reproductive medicine, neuropediatrics, hematology and oncology. METHODS AND MATERIALS: The simulation software was developed as an Interactive Learning Object (ILO) in Java2, following a user-centered approach. The simulation was tested on various platforms (Windows, Linux, Mac-OSX, HP-UX) without any limitations; the evaluation was based on questionnaires and interviews amongst 600 students in 15 groups. CONCLUSION: This simulation has proved its worth in daily teaching since 2002 and further demonstrates that computer simulations can be helpful for both teaching and learning of complex content in Cytogenetics. ZIELSETZUNG: In dieser Arbeit beschreiben die Autoren die Entwicklung und Evaluierung einer internetfähigen Lernsoftware zur interaktiven Karyotypisierung für den Einsatz im Medizinstudium. Dabei wird auch der Frage nachgegangen, ob Computersimulationen den hohen Anforderungen bei der Vermittlung komplexer Inhalte gerecht werden können. Es wird gezeigt, dass dieser Ansatz sowohl für Lernende als auch für Lehrende Mehrwerte bringt und den traditionellen Methoden in diesem Bereich überlegen ist, wenn die Simulation didaktisch richtig eingesetzt wird. HINTERGRUND: Simulationen haben in der Medizin eine lange Tradition und erweisen sich insbesondere dann als hilfreich, wenn es darum geht, hochkomplexe Zusammenhänge verständlicher darzustellen, wie dieses Beispiel aus der Zytogenetik, einem Spezialgebiet der Humangenetik, zeigt. Die zytogenetische Diagnostik beschäftigt sich mit reproduzierbaren strukturellen und numerischen Veränderungen der menschlichen Chromosomen und kommt in der Dysmorphologie, Syndromologie, Pränatal- und Entwicklungsdiagnostik, Reproduktionsmedizin, Neuropädiatrie, Hämatologie und Onkologie zum Einsatz. MATERIAL UND METHODEN: Die Simulationssoftware wurde als interaktives Lern-Objekt (ILO) entwickelt. Als Entwicklungsumgebung wurde die Java2-Plattform gewählt; die Entwicklung selbst erfolgte nach den Grundsätzen des User-Centered-Design. Die Software wurde Plattform-unabhängig ausgelegt und konnte auf verschiedenen Systemarchitekturen (Windows, Linux, MacOSX, HP-UX) erfolgreich und ohne Beschränkungen getestet werden. Die durchgeführte Evaluierung basierte auf Fragebögen und Interviews mit rund 600 Studenten in 15 Gruppen. SCHLUSSFOLGERUNGEN: Die Simulationssoftware hat ihre Alltagstauglichkeit seit 2002 im Lehrbetrieb unter Beweis gestellt. Anhand der Evaluierung konnte an diesem Beispiel gezeigt werden, dass diese Lernsoftware sehr gut geeignet ist, den diagnostisch-analytischen Prozess in der Zytogenetik anschaulicher zu vermitteln als die traditionelle papierbasierte Methode. Insbesondere wirkt sich die Einbettung in ein integratives Unterrichtskonzept positiv, sowohl auf Lernende als auch auf Lehrende, aus.

    @article{j22,
       year = {2008},
       author = {Holzinger, A. and Emberger, W. and Wassertheurer, S. and Neal, L.},
       title = {Design, development and evaluation of online interactive simulation software for learning human genetics},
       journal = {Elektrotechnik und Informationstechnik},
       volume = {125},
       number = {5},
       pages = {190-196},
       abstract = {OBJECTIVE: In this paper, the authors describe the design, development and evaluation of specific simulation software for Cytogenetics training in order to demonstrate the usefulness of computer simulations for both teaching and learning of complex educational content. BACKGROUND: Simulations have a long tradition in medicine and can be very helpful for learning complex content, for example Cytogenetics, which is an integral part of diagnostics in dysmorphology, syndromology, prenatal and developmental diagnosis, reproductive medicine, neuropediatrics, hematology and oncology. METHODS AND MATERIALS: The simulation software was developed as an Interactive Learning Object (ILO) in Java2, following a user-centered approach. The simulation was tested on various platforms (Windows, Linux, Mac-OSX, HP-UX) without any limitations; the evaluation was based on questionnaires and interviews amongst 600 students in 15 groups. CONCLUSION: This simulation has proved its worth in daily teaching since 2002 and further demonstrates that computer simulations can be helpful for both teaching and learning of complex content in Cytogenetics. ZIELSETZUNG: In dieser Arbeit beschreiben die Autoren die Entwicklung und Evaluierung einer internetfähigen Lernsoftware zur interaktiven Karyotypisierung für den Einsatz im Medizinstudium. Dabei wird auch der Frage nachgegangen, ob Computersimulationen den hohen Anforderungen bei der Vermittlung komplexer Inhalte gerecht werden können. Es wird gezeigt, dass dieser Ansatz sowohl für Lernende als auch für Lehrende Mehrwerte bringt und den traditionellen Methoden in diesem Bereich überlegen ist, wenn die Simulation didaktisch richtig eingesetzt wird. HINTERGRUND: Simulationen haben in der Medizin eine lange Tradition und erweisen sich insbesondere dann als hilfreich, wenn es darum geht, hochkomplexe Zusammenhänge verständlicher darzustellen, wie dieses Beispiel aus der Zytogenetik, einem Spezialgebiet der Humangenetik, zeigt. Die zytogenetische Diagnostik beschäftigt sich mit reproduzierbaren strukturellen und numerischen Veränderungen der menschlichen Chromosomen und kommt in der Dysmorphologie, Syndromologie, Pränatal- und Entwicklungsdiagnostik, Reproduktionsmedizin, Neuropädiatrie, Hämatologie und Onkologie zum Einsatz. MATERIAL UND METHODEN: Die Simulationssoftware wurde als interaktives Lern-Objekt (ILO) entwickelt. Als Entwicklungsumgebung wurde die Java2-Plattform gewählt; die Entwicklung selbst erfolgte nach den Grundsätzen des User-Centered-Design. Die Software wurde Plattform-unabhängig ausgelegt und konnte auf verschiedenen Systemarchitekturen (Windows, Linux, MacOSX, HP-UX) erfolgreich und ohne Beschränkungen getestet werden. Die durchgeführte Evaluierung basierte auf Fragebögen und Interviews mit rund 600 Studenten in 15 Gruppen. SCHLUSSFOLGERUNGEN: Die Simulationssoftware hat ihre Alltagstauglichkeit seit 2002 im Lehrbetrieb unter Beweis gestellt. Anhand der Evaluierung konnte an diesem Beispiel gezeigt werden, dass diese Lernsoftware sehr gut geeignet ist, den diagnostisch-analytischen Prozess in der Zytogenetik anschaulicher zu vermitteln als die traditionelle papierbasierte Methode. Insbesondere wirkt sich die Einbettung in ein integratives Unterrichtskonzept positiv, sowohl auf Lernende als auch auf Lehrende, aus.},
       keywords = {Simulation, Interactive learning, Simulation-based Learning},
       doi = {10.1007/s00502-008-0537-9},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=78595&pCurrPk=36478}
    }

  • [e09x] K. S. Mukasa, A. Holzinger, and A. I. Karshmer, Intelligent User Interfaces for Ambient Assisted Living (IUI4AAL 2008), Stuttgart: Fraunhofer IRB Verlag, 2008.
    [BibTeX] [Abstract] [Download PDF]

    Intelligent User Interfaces (IUI) for Ambient Assisted Living (AAL) should use intelligent technologies to support elderly and impaired users. Intuitive and nearly natural interaction for the user is an important requirement. Knowledge from different fields, including AmI technology, cognitive psychology, user interface design and context awareness, is required. This workshop was a step towards establishing an IUI4AAL community specifically for this purpose.

    @book{e09x,
       year = {2008},
       author = {Mukasa, Kizito Ssamula and Holzinger, Andreas and Karshmer, Arthur I.},
       title = {Intelligent User Interfaces for Ambient Assisted Living (IUI4AAL 2008)},
       publisher = {Fraunhofer IRB Verlag},
       address = {Stuttgart},
       abstract = {Intelligent User Interfaces (IUI) for Ambient Assisted Living (AAL) should use intelligent technologies to support elderly and impaired users. Intuitive and nearly natural interaction for the user is an important requirement. Knowledge from different fields, including AmI technology, cognitive psychology, user interface design and context awareness, is required. This workshop was a step towards establishing an IUI4AAL community specifically for this purpose.},
       url = {http://www.irb.fraunhofer.de/bookshop/artikel.jsp?v=225136}
    }

  • [j21] A. Holzinger, M. Kickmeier-Rust, and D. Albert, “Dynamic Media in Computer Science Education; Content Complexity and Learning Performance: Is Less More?“, Educational Technology & Society, vol. 11, iss. 1, pp. 279-290, 2008.
    [BibTeX] [Abstract] [Download PDF]

    With the increasing use of dynamic media in multimedia learning material, it is important to consider not only the technological but also the cognitive aspects of its application. A large body of previous research gives no clear preference to either static or dynamic media for educational purposes, and a considerable number of studies found positive, negative or even no effects of dynamic media on learning performance. Consequently, it is still necessary to discern which factors contribute to the success or failure of static or dynamic media. The study presented here can be seen as another brick in the wall of understanding students’ learning supported by dynamic media. In this study, aspects of cognitive load and the ability to generate mental representations for the purpose of appropriate animation design and development are considered. The learning performance of static versus dynamic media amongst a total of 129 Computer Science students, including a control group, was investigated. The results showed that learning performance using dynamic media was significantly higher than that of the static textbook lesson when the learning material had a certain level of complexity; the more complex the learning material, the larger the benefit of using animations. The results were examined for possible factors that contributed to the success or failure of dynamic media in education. In conclusion, this study has successfully confirmed the theory that dynamic media can support learning when cognitive load and learners’ mental representations are taken into account during the design and development of learning material containing dynamic media.

    @article{j21,
       year = {2008},
       author = {Holzinger, A. and Kickmeier-Rust, M. and Albert, D. },
       title = {Dynamic Media in Computer Science Education; Content Complexity and Learning Performance: Is Less More?},
       journal = {Educational Technology & Society},
       volume = {11},
       number = {1},
       pages = {279-290},
       abstract = {With the increasing use of dynamic media in multimedia learning material, it is important to consider not only the technological but also the cognitive aspects of its application. A large body of previous research gives no clear preference to either static or dynamic media for educational purposes, and a considerable number of studies found positive, negative or even no effects of dynamic media on learning performance. Consequently, it is still necessary to discern which factors contribute to the success or failure of static or dynamic media. The study presented here can be seen as another brick in the wall of understanding students’ learning supported by dynamic media. In this study, aspects of cognitive load and the ability to generate mental representations for the purpose of appropriate animation design and development are considered. The learning performance of static versus dynamic media amongst a total of 129 Computer Science students, including a control group, was investigated. The results showed that learning performance using dynamic media was significantly higher than that of the static textbook lesson when the learning material had a certain level of complexity; the more complex the learning material, the larger the benefit of using animations. The results were examined for possible factors that contributed to the success or failure of dynamic media in education. In conclusion, this study has successfully confirmed the theory that dynamic media can support learning when cognitive load and learners’ mental representations are taken into account during the design and development of learning material containing dynamic media.},
       keywords = {Static media, Dynamic media, Animations, Learning performance, Cognitive load},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=78937&pCurrPk=36241}
    }

  • [j20] A. Holzinger, R. Geierhofer, F. Modritscher, and R. Tatzl, “Semantic Information in Medical Information Systems: Utilization of Text Mining Techniques to Analyze Medical Diagnoses“, Journal of Universal Computer Science, vol. 14, iss. 22, pp. 3781-3795, 2008.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Most information in hospitals is still only available in text format, and the amount of this data is increasing immensely. Consequently, text mining is an essential area of medical informatics. With the aid of statistical and linguistic procedures, text mining software attempts to dig out (mine) information from plain text. The aim is to transform data into information. However, for the efficient support of end users, facets of computer science alone are insufficient; the next step consists of making the information both usable and useful. Consequently, aspects of cognitive psychology must be taken into account in order to enable the transformation of information into knowledge for the end users. In this paper we describe the design and development of an application for analyzing expert comments on magnetic resonance imaging (MRI) diagnoses by applying a text mining method in order to scan them for regional correlations. Consequently, we propose a calculation of significant co-occurrences of diseases and defined regions of the human body, in order to identify possible health risks. (An illustrative sketch of such a co-occurrence test follows this entry.)

    @article{j20,
       year = {2008},
       author = {Holzinger, Andreas and Geierhofer, Regina and Modritscher, Felix and Tatzl, Roland},
       title = {Semantic Information in Medical Information Systems: Utilization of Text Mining Techniques to Analyze Medical Diagnoses},
       journal = {Journal of Universal Computer Science},
       volume = {14},
       number = {22},
       pages = {3781-3795},
       abstract = {Most information in hospitals is still only available in text format, and the amount of this data is increasing immensely. Consequently, text mining is an essential area of medical informatics. With the aid of statistical and linguistic procedures, text mining software attempts to dig out (mine) information from plain text. The aim is to transform data into information. However, for the efficient support of end users, facets of computer science alone are insufficient; the next step consists of making the information both usable and useful. Consequently, aspects of cognitive psychology must be taken into account in order to enable the transformation of information into knowledge for the end users. In this paper we describe the design and development of an application for analyzing expert comments on magnetic resonance imaging (MRI) diagnoses by applying a text mining method in order to scan them for regional correlations. Consequently, we propose a calculation of significant co-occurrences of diseases and defined regions of the human body, in order to identify possible health risks.},
       keywords = {Information Retrieval, Text Mining, Performance, Medical Documentation, Hypotheses, Text data, unstructured information, data mining},
       doi = {10.3217/jucs-014-22-3781},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=231943&pCurrPk=43193}
    }
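
    The abstract above proposes computing significant co-occurrences of diseases and body regions, without fixing a particular statistic. The following Python sketch is one plausible reading, a chi-square test on a 2x2 contingency table per (disease, region) pair; the function name, the plain substring matching, and the alpha threshold are illustrative assumptions, not details taken from the paper.

    from itertools import product
    from scipy.stats import chi2_contingency

    def significant_cooccurrences(reports, diseases, regions, alpha=0.01):
        """reports: lower-cased free-text diagnosis strings.
        Returns (disease, region, joint count, p-value), most significant first."""
        n = len(reports)
        occurs = {t: [t in r for r in reports] for t in diseases + regions}
        findings = []
        for d, g in product(diseases, regions):
            both = sum(a and b for a, b in zip(occurs[d], occurs[g]))
            d_only = sum(occurs[d]) - both
            g_only = sum(occurs[g]) - both
            neither = n - both - d_only - g_only
            # skip degenerate tables (a term that never or always occurs)
            if 0 in (both + d_only, g_only + neither, both + g_only, d_only + neither):
                continue
            _, p, _, _ = chi2_contingency([[both, d_only], [g_only, neither]])
            if p < alpha and both > 0:
                findings.append((d, g, both, p))
        return sorted(findings, key=lambda x: x[3])

    The substring match stands in for the linguistic preprocessing (negation handling, German compound splitting, abbreviation expansion) that a production medical text-mining pipeline would require.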

  • [DAT-j19ok] M. Hessinger, A. Holzinger, D. Leitner, and S. Wassertheurer, “Hemodynamic models for education in physiology“, Mathematics and Computers in Simulation, vol. 79, iss. 4, pp. 1039-1047, 2008.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    By application of case-based learning (CBL) various effects can be analyzed and demonstrated more easily. In the area of medicine one rapidly reaches boundaries in the visualization of complex information [J.L.M. Poiseuille, Recherches experimentales sur le mouvement des liquids dans les tubes de tres petits diametres, Memoires Savant des Etrangers 9 (1846) 433–544]. Learning and teaching without recourse to patients is difficult. Consequently, the use of models and simulations is useful. In this paper the authors report on experiences gained with HAEMOSIM, a web-based project in medical education. The goal of this project is the design and development of interactive simulations in local hemodynamics by the application of mathematical–physiological models. These include the modelling of arterial blood flow dependent on the pressure gradient, radius and bifurcations, as well as blood flow profiles in dependence on viscosity, density and radius, and finally pulse-wave dynamics with regard to local and global compliance. [computational model] (A worked example of the cited Poiseuille relation follows this entry.)

    @article{DAT-j19ok,
       year = {2008},
       author = {Hessinger, M. and Holzinger, A. and Leitner, D. and Wassertheurer, S.},
       title = {Hemodynamic models for education in physiology},
       journal = {Mathematics and Computers in Simulation},
       volume = {79},
       number = {4},
       pages = {1039-1047},
       abstract = {By application of case-based learning (CBL) various effects can be analyzed and demonstrated more easily. In the area of medicine one rapidly reaches boundaries in the visualization of complex information [J.L.M. Poiseuille, Recherches experimentales sur le mouvement des liquids dans les tubes de tres petits diametres, Memoires Savant des Etrangers 9 (1846) 433–544]. Learning and teaching without recourse to patients is difficult. Consequently, the use of models and simulations is useful. In this paper the authors report on experiences gained with HAEMOSIM, a web-based project in medical education. The goal of this project is the design and development of interactive simulations in local hemodynamics by the application of mathematical–physiological models. These include the modelling of arterial blood flow dependent on the pressure gradient, radius and bifurcations, as well as blood flow profiles in dependence on viscosity, density and radius, and finally pulse-wave dynamics with regard to local and global compliance. [computational model]},
       keywords = {Simulation-based learning, simulation, hemodynamic modelling, haemosym, haemodynamics, biomedical modelling},
       doi = {10.1016/j.matcom.2008.02.015},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=92361&pCurrPk=21746}
    }
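
    As a worked complement to the cited Poiseuille reference: for stationary laminar flow in a rigid tube, the volumetric flow rate is Q = pi * r^4 * dp / (8 * mu * L), with the parabolic axial velocity profile v(rho) = dp * (r^2 - rho^2) / (4 * mu * L). A minimal Python sketch of these two textbook relations follows; HAEMOSIM's actual models are richer (bifurcations, elastic walls, pulse waves), so this only illustrates the baseline law.

    import math

    def poiseuille_flow(delta_p, radius, length, viscosity):
        """Volumetric flow rate Q = pi * r**4 * dp / (8 * mu * L), SI units."""
        return math.pi * radius**4 * delta_p / (8.0 * viscosity * length)

    def velocity_profile(delta_p, radius, length, viscosity, rho):
        """Axial velocity at radial position rho, 0 <= rho <= radius."""
        return delta_p * (radius**2 - rho**2) / (4.0 * viscosity * length)

    # Example: 2 mm vessel radius, 10 cm segment, blood viscosity ~3.5 mPa*s
    q = poiseuille_flow(delta_p=400.0, radius=2e-3, length=0.1, viscosity=3.5e-3)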

  • [DAT-j18ok] A. Holzinger, “Universal access to technology-enhanced learning“, Springer Universal Access in the Information Society International Journal, vol. 7, iss. 4, pp. 195-197, 2008.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Within the context of human–computer interaction (HCI), the concept of universal access introduced a new perspective that promotes the accommodation of a wide range of human abilities, skills, requirements, and preferences in the design of information technology. This automatically reduces the need for many special features, while fostering individualization, quality of interaction, and ultimately, end-user acceptability. The notion of universal access reflects the concept of an information society in which anyone can potentially interact with information technology, at any time and at any place, in any context of use, and for virtually any task. Consequently, technology-enhanced learning (TEL) is an extremely important part of this context. However, designers and developers of this type of technology often ignore the needs, demands, and requirements of the end users, and consequently fail to examine how the end users learn, work, and communicate with this technology. This is often related to a lack of general usability engineering methods, such as end-user-centered methods. [context, adaptation, personalization, acceptance]

    @article{DAT-j18ok,
       year = {2008},
       author = {Holzinger, Andreas},
       title = {Universal access to technology-enhanced learning},
       journal = {Springer Universal Access in the Information Society International Journal},
       volume = {7},
       number = {4},
       pages = {195-197},
       abstract = {Within the context of human–computer interaction (HCI), the concept of universal access introduced a new perspective that promotes the accommodation of a wide range of human abilities, skills, requirements, and preferences in the design of information technology. This automatically reduces the need for many special features, while fostering individualization, quality of interaction, and ultimately, end-user acceptability. The notion of universal access reflects the concept of an information society in which anyone can potentially interact with information technology, at any time and at any place, in any context of use, and for virtually any task. Consequently, technology-enhanced learning (TEL) is an extremely important part of this context. However, designers and developers of this type of technology often ignore the needs, demands, and requirements of the end users, and consequently fail to examine how the end users learn, work, and communicate with this technology. This is often related to a lack of general usability engineering methods, such as end-user-centered methods. [context, adaptation, personalization, acceptance]},
       keywords = {Technology Enhanced Learning, Life-Long Learning, Usability in e-Learning},
       doi = {10.1007/s10209-008-0120-5},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=88542&pCurrPk=39420}
    }

  • [DAT-j17ok] M. Ebner, M. Kickmeier-Rust, and A. Holzinger, “Utilizing Wiki-Systems in higher education classes: a chance for universal access?“, Universal Access in the Information Society, vol. 7, iss. 4, pp. 199-207, 2008.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Wikis are a website technology for mass collaborative authoring. Today, wikis are increasingly used for educational purposes. Basically, the most important asset of wikis is free and easy access for end users: everybody can contribute, comment and edit—following the principles of universal access. Consequently, wikis are ideally suited for collaborative learning, and a number of studies reported great success of wikis in terms of active participation, collaboration, and rapidly growing content. However, the wikis’ success in education was often linked either to direct incentives or even to pressure. This paper strongly argues that this contradicts the original intentions of wikis and, furthermore, weakens the psycho-pedagogical impact. A study is presented which focuses on investigating the success of wikis in higher education when students are neither forced to contribute nor directly rewarded, similar to the principles of Wikipedia. Amazingly, the results show that, in total, none of the N = 287 students created new articles or edited existing ones during a whole semester. It is concluded that the use of Wiki-Systems in educational settings is much more complicated than expected, and that more time is needed to develop a kind of “give-and-take” generation.

    @article{DAT-j17ok,
       year = {2008},
       author = {Ebner, Martin and Kickmeier-Rust, Michael and Holzinger, Andreas},
       title = {Utilizing Wiki-Systems in higher education classes: a chance for universal access?},
       journal = {Universal Access in the Information Society},
       volume = {7},
       number = {4},
       pages = {199-207},
       abstract = {Wikis are a website technology for mass collaborative authoring. Today, wikis are increasingly used for educational purposes. Basically, the most important asset of wikis is free and easy access for end users: everybody can contribute, comment and edit—following the principles of universal access. Consequently, wikis are ideally suited for collaborative learning, and a number of studies reported great success of wikis in terms of active participation, collaboration, and rapidly growing content. However, the wikis’ success in education was often linked either to direct incentives or even to pressure. This paper strongly argues that this contradicts the original intentions of wikis and, furthermore, weakens the psycho-pedagogical impact. A study is presented which focuses on investigating the success of wikis in higher education when students are neither forced to contribute nor directly rewarded, similar to the principles of Wikipedia. Amazingly, the results show that, in total, none of the N = 287 students created new articles or edited existing ones during a whole semester. It is concluded that the use of Wiki-Systems in educational settings is much more complicated than expected, and that more time is needed to develop a kind of “give-and-take” generation.},
       keywords = {Technology enhanced learning, Wiki, Wikipedia, Higher education, Collaborative learning},
       doi = {10.1007/s10209-008-0115-2},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1198871&pCurrPk=86754}
    }

  • [DAT-j16ok] T. Kleinberger, A. Holzinger, and P. Müller, “Adaptive multimedia presentations enabling universal access in technology enhanced situational learning“, Universal Access in the Information Society, vol. 7, iss. 4, pp. 223-245, 2008.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Successful situational learning, with continuous media support, requires both sophisticated technological and appropriate psychological concepts to enable learners, independently of age, to easily access continuous media learning objects (CMLO), which must be properly adapted to their actual needs, demands, requirements and previous knowledge. Current technological approaches fail to cover all relevant aspects concurrently. For example, systems providing adequate media management are insufficiently adaptable, while learning management systems lack sufficient support for continuous media. This paper addresses three main issues: (1) an analysis of adaptive situational learning with continuous media, identifying the shortcomings of some current solutions; (2) outline of an integrated approach for adaptive multimedia presentations enabling universal access for situational learning; and (3) a description of the multimedia module repository (MEMORY) system implementing this approach, the basic idea being to define multimedia presentations as dynamic processes, comparable to a computer program.

    @article{DAT-j16ok,
       year = {2008},
       author = {Kleinberger, Thomas and Holzinger, Andreas and Müller, Paul},
       title = {Adaptive multimedia presentations enabling universal access in technology enhanced situational learning},
       journal = {Universal Access in the Information Society},
       volume = {7},
       number = {4},
       pages = {223-245},
       abstract = {Successful situational learning, with continuous media support, requires both sophisticated technological and appropriate psychological concepts to enable learners, independently of age, to easily access continuous media learning objects (CMLO), which must be properly adapted to their actual needs, demands, requirements and previous knowledge. Current technological approaches fail to cover all relevant aspects concurrently. For example, systems providing adequate media management are insufficiently adaptable, while learning management systems lack sufficient support for continuous media. This paper addresses three main issues: (1) an analysis of adaptive situational learning with continuous media, identifying the shortcomings of some current solutions; (2) outline of an integrated approach for adaptive multimedia presentations enabling universal access for situational learning; and (3) a description of the multimedia module repository (MEMORY) system implementing this approach, the basic idea being to define multimedia presentations as dynamic processes, comparable to a computer program.},
       keywords = {Continuous media, Adaptive multimedia systems, Technology enhanced learning, Life-long learning, Situational learning},
       doi = {10.1007/s10209-008-0122-3},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=88544&pCurrPk=36242}
    }

  • [c33] A. Holzinger, M. Höller, M. Schedlbauer, and B. Urlesberger, “An Investigation of Finger versus Stylus Input in Medical Scenarios“, IEEE, 2008, pp. 433-438.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    An in-situ study of the routine work of clinicians at Graz University Hospital was carried out in order to evaluate input method preferences. We conducted several experiments consisting of selection tasks on two different types of tablet PCs, with the end users in three experimental conditions: sitting, standing and walking. The results show that the medical staff performed better when using the stylus-operated device. In almost all tests, subjects performed the selection tasks significantly faster and more accurately (p < 0.001) with the stylus-operated device, even though it had a smaller screen and therefore the targets were smaller. The only exception was the selection performance when seated, where no significant difference was found (p = 0.06). However, the error rate was significantly lower for stylus input in all experimental conditions. This result is also supported by the analysis of the questionnaires, where it was found that almost all subjects preferred stylus input.

    @incollection{c33,
       year = {2008},
       author = {Holzinger, Andreas and Höller, Martin and Schedlbauer, Martin and Urlesberger, Berndt},
       title = {An Investigation of Finger versus Stylus Input in Medical Scenarios},
       publisher = {IEEE},
       pages = {433-438},
       abstract = {An in-situ study of the routine work of clinicians at Graz University Hospital was carried out in order to evaluate input method preferences. We conducted several experiments consisting of selection tasks on two different types of tablet PCs, with the end users in three experimental conditions: sitting, standing and walking. The results show that the medical staff performed better when using the stylus-operated device. In almost all tests, subjects performed the selection tasks significantly faster and more accurately (p < 0.001) with the stylus-operated device, even though it had a smaller screen and therefore the targets were smaller. The only exception was the selection performance when seated, where no significant difference was found (p = 0.06). However, the error rate was significantly lower for stylus input in all experimental conditions. This result is also supported by the analysis of the questionnaires, where it was found that almost all subjects preferred stylus input.},
       doi = {10.1109/ITI.2008.4588449},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=77890&pCurrPk=36512}
    }

2007

  • [j15] M. Ebner and A. Holzinger, “Successful implementation of user-centered game based learning in higher education: An example from civil engineering“, Computers and Education, vol. 49, iss. 3, pp. 873-890, 2007.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Goal: The use of an online game for learning in higher education aims to make complex theoretical knowledge more approachable. Permanent repetition will lead to more in-depth learning. Objective: To gain insight into whether, and to what extent, online games have the potential to contribute to student learning in higher education. Experimental setting: The online game was used for the first time during a lecture on Structural Concrete at Master’s level, involving 121 seventh-semester students. Methods: Pre-test/post-test experimental control group design with questionnaires and an independent online evaluation. Results: The minimum learning result of playing the game was equal to that achieved with traditional methods. A factor called “joy” was introduced, according to [Nielsen, J. (2002): User empowerment and the fun factor. In Jakob Nielsen’s Alertbox, July 7, 2002. Available from http://www.useit.com/alertbox/20020707.html.], which was amazingly high. Conclusion: The experimental findings support the efficacy of game playing. Students enjoyed this kind of e-learning.

    @article{j15,
       year = {2007},
       author = {Ebner, Martin and Holzinger, Andreas},
       title = {Successful implementation of user-centered game based learning in higher education: An example from civil engineering},
       journal = {Computers and Education},
       volume = {49},
       number = {3},
       pages = {873-890},
       abstract = {Goal: The use of an online game for learning in higher education aims to make complex theoretical knowledge more approachable. Permanent repetition will lead to more in-depth learning. Objective: To gain insight into whether, and to what extent, online games have the potential to contribute to student learning in higher education. Experimental setting: The online game was used for the first time during a lecture on Structural Concrete at Master’s level, involving 121 seventh-semester students. Methods: Pre-test/post-test experimental control group design with questionnaires and an independent online evaluation. Results: The minimum learning result of playing the game was equal to that achieved with traditional methods. A factor called “joy” was introduced, according to [Nielsen, J. (2002): User empowerment and the fun factor. In Jakob Nielsen’s Alertbox, July 7, 2002. Available from http://www.useit.com/alertbox/20020707.html.], which was amazingly high. Conclusion: The experimental findings support the efficacy of game playing. Students enjoyed this kind of e-learning.},
       keywords = {Game-based learning, Gamification},
       doi = {10.1016/j.compedu.2005.11.026},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=1138885&pCurrPk=85709}
    }

  • [j13] A. Holzinger, R. Geierhofer, and M. Errath, “Semantische Informationsextraktion in medizinischen Informationssystemen“, Informatik Spektrum, vol. 30, iss. 2, pp. 69-78, 2007.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    This article describes some experiences and typical problems with text mining in medicine and gives an insight into current and future challenges in research & development. Interestingly, even in the “multimedia age” most information is still available as “text”. With the aid of statistical and linguistic methods, so-called “text mining software” attempts to dig information out of free text (hence “text mining”). However, this alone is not enough. The next step consists of making the information both usable and useful. The respective end users must be put in a position to extend their knowledge on the basis of the information obtained. In our concrete case, the aim is to support decisions in the context of medical practice. Solving problems in this area requires a holistic view and approach. It is therefore becoming increasingly important to merge findings from computer science and psychology and to implement them technologically at the system level. [Data Mining, Text Mining, Semantic Information Extraction]

    @article{j13,
       year = {2007},
       author = {Holzinger, Andreas and Geierhofer, Regina and Errath, Maximilian},
       title = {Semantische Informationsextraktion in medizinischen Informationssystemen},
       journal = {Informatik Spektrum},
       volume = {30},
       number = {2},
       pages = {69-78},
       abstract = {This article describes some experiences and typical problems with text mining in medicine and gives an insight into current and future challenges in research & development. Interestingly, even in the “multimedia age” most information is still available as “text”. With the aid of statistical and linguistic methods, so-called “text mining software” attempts to dig information out of free text (hence “text mining”). However, this alone is not enough. The next step consists of making the information both usable and useful. The respective end users must be put in a position to extend their knowledge on the basis of the information obtained. In our concrete case, the aim is to support decisions in the context of medical practice. Solving problems in this area requires a holistic view and approach. It is therefore becoming increasingly important to merge findings from computer science and psychology and to implement them technologically at the system level. [Data Mining, Text Mining, Semantic Information Extraction]},
       keywords = {Semantic Information extraction, text mining, data mining},
       doi = {10.1007/s00287-007-0139-7},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=73380&pCurrPk=29007}
    }

  • [j12] A. Holzinger and M. Errath, “Mobile computer Web-application design in medicine: some research based guidelines“, Universal Access in the Information Society, vol. 6, iss. 1, pp. 31-41, 2007.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Designing Web-applications is considerably different for mobile computers (handhelds, Personal Digital Assistants) than for desktop computers. The screen size and system resources are more limited and end-users interact differently. Consequently, detecting handheld-browsers on the server side and delivering pages optimized for a small client form factor is inevitable. The authors discuss their experiences during the design and development of an application for medical research, which was designed for both mobile and personal desktop computers. The investigations presented in this paper highlight some ways in which Web content can be adapted to make it more accessible to mobile computing users. As a result, the authors summarize their experiences in design guidelines and provide an overview of those factors which have to be taken into consideration when designing software for mobile computers. “The old computing is about what computers can do, the new computing is about what people can do” (Leonardo’s laptop: human needs and the new computing technologies, MIT Press, 2002). (A minimal sketch of such server-side detection follows this entry.)

    @article{j12,
       year = {2007},
       author = {Holzinger, Andreas and Errath, Maximilian},
       title = {Mobile computer Web-application design in medicine: some research based guidelines},
       journal = {Universal Access in the Information Society},
       volume = {6},
       number = {1},
       pages = {31-41},
       abstract = {Designing Web-applications is considerably different for mobile computers (handhelds, Personal Digital Assistants) than for desktop computers. The screen size and system resources are more limited and end-users interact differently. Consequently, detecting handheld-browsers on the server side and delivering pages optimized for a small client form factor is inevitable. The authors discuss their experiences during the design and development of an application for medical research, which was designed for both mobile and personal desktop computers. The investigations presented in this paper highlight some ways in which Web content can be adapted to make it more accessible to mobile computing users. As a result, the authors summarize their experiences in design guidelines and provide an overview of those factors which have to be taken into consideration when designing software for mobile computers. “The old computing is about what computers can do, the new computing is about what people can do” (Leonardo’s laptop: human needs and the new computing technologies, MIT Press, 2002).},
       keywords = {Information interfaces and representation, Interface design, Mobile computing, Life and medical sciences, Internet applications},
       doi = {10.1007/s10209-007-0074-z},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=73379&pCurrPk=30724}
    }
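
    A minimal sketch of the server-side detection described in the abstract: inspect the User-Agent request header and choose a small-form-factor page. The token list and template names are illustrative assumptions; systems of that era typically relied on maintained device databases (e.g. WURFL) rather than a hand-written list.

    HANDHELD_TOKENS = ("windows ce", "palm", "symbian", "blackberry", "mobile")

    def is_handheld(user_agent: str) -> bool:
        """Rough handheld-browser heuristic based on User-Agent substrings."""
        ua = user_agent.lower()
        return any(token in ua for token in HANDHELD_TOKENS)

    def select_template(user_agent: str) -> str:
        # Deliver a layout optimized for small screens to handheld browsers.
        return "report_mobile.html" if is_handheld(user_agent) else "report_desktop.html"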

  • [c28] T. Kleinberger, M. Becker, E. Ras, A. Holzinger, and P. Müller, “Ambient Intelligence in Assisted Living: Enable Elderly People to Handle Future Interfaces“, in Lecture Notes in Computer Science LNCS 4555, C. Stephanidis, Ed., Heidelberg, Berlin, New York: Springer, 2007, pp. 103-112.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Ambient Assisted Living is currently one of the important research and development areas, where accessibility, usability and learning play a major role and where future interfaces are an important concern for applied engineering. The general goal of ambient assisted living solutions is to apply ambient intelligence technology to enable people with specific demands, e.g. handicapped or elderly, to live in their preferred environment longer. Due to the high potential for emergencies, sound emergency assistance is required. Assisting elderly people with comprehensive ambient assisted living solutions sets high demands on the overall system quality and consequently on software and system engineering; user acceptance and support by various user interfaces is an absolute necessity. In this article, we present an Assisted Living Laboratory that is used to train elderly people to handle modern interfaces for Assisted Living and to evaluate the usability and suitability of these interfaces in specific situations, e.g., emergency cases.

    @incollection{c28,
       year = {2007},
       author = {Kleinberger, Thomas and Becker, Martin and Ras, Eric and Holzinger, Andreas and Müller, Paul},
       title = {Ambient Intelligence in Assisted Living: Enable Elderly People to Handle Future Interfaces},
       booktitle = {Lecture Notes in Computer Science LNCS 4555},
       editor = {Stephanidis, Constantine},
       publisher = {Springer},
       address = {Heidelberg, Berlin, New York},
       pages = {103-112},
       abstract = {Ambient Assisted Living is currently one of the important research and development areas, where accessibility, usability and learning play a major role and where future interfaces are an important concern for applied engineering. The general goal of ambient assisted living solutions is to apply ambient intelligence technology to enable people with specific demands, e.g. handicapped or elderly, to live in their preferred environment longer. Due to the high potential for emergencies, sound emergency assistance is required. Assisting elderly people with comprehensive ambient assisted living solutions sets high demands on the overall system quality and consequently on software and system engineering; user acceptance and support by various user interfaces is an absolute necessity. In this article, we present an Assisted Living Laboratory that is used to train elderly people to handle modern interfaces for Assisted Living and to evaluate the usability and suitability of these interfaces in specific situations, e.g., emergency cases.},
       keywords = {Ambient Intelligence, Assisted Living, User-Interfaces, Learning, Elderly People},
       doi = {10.1007/978-3-540-73281-5_11},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=85340&pCurrPk=30727}
    }

  • [c20] M. Wiltgen, A. Holzinger, and G. P. Tilz, “Interactive Analysis and Visualization of Macromolecular Interfaces between Proteins“, in Lecture Notes in Computer Science LNCS 4799, A. Holzinger, Ed., Heidelberg, Berlin, New York: Springer, 2007, pp. 199-212.
    [BibTeX] [Abstract] [DOI] [Download PDF]

    Molecular interfaces between proteins are of high importance for understanding their interactions and functions. In this paper protein complexes in the PDB database are used as input to calculate an interface contact matrix between two proteins, based on the distance between individual residues and atoms of each protein. The interface contact matrix is linked to a 3D visualization of the macromolecular structures in such a way that clicking on the appropriate part of the interface contact matrix highlights the corresponding residues in the 3D structure. Additionally, the identified residues in the interface contact matrix are used to define the molecular surface at the interface. The interface contact matrix gives the end user an overview of the distribution of the involved residues and allows an evaluation of interfacial binding hot spots. The interactive visualization of the selected residues in a 3D view via interacting windows allows realistic analysis of the macromolecular interface. (A short sketch of such a distance-based contact matrix follows this entry.)

    @incollection{c20,
       year = {2007},
       author = {Wiltgen, Marco and Holzinger, Andreas and Tilz, Gernot P},
       title = {Interactive Analysis and Visualization of Macromolecular Interfaces between Proteins},
       booktitle = {Lecture Notes in Computer Science LNCS 4799},
       editor = {Holzinger, Andreas},
       publisher = {Springer},
       address = {Heidelberg, Berlin, New York},
       pages = {199-212},
       abstract = {Molecular interfaces between proteins are of high importance for understanding their interactions and functions. In this paper protein complexes in the PDB database are used as input to calculate an interface contact matrix between two proteins, based on the distance between individual residues and atoms of each protein. The interface contact matrix is linked to a 3D visualization of the macromolecular structures in such a way that clicking on the appropriate part of the interface contact matrix highlights the corresponding residues in the 3D structure. Additionally, the identified residues in the interface contact matrix are used to define the molecular surface at the interface. The interface contact matrix gives the end user an overview of the distribution of the involved residues and allows an evaluation of interfacial binding hot spots. The interactive visualization of the selected residues in a 3D view via interacting windows allows realistic analysis of the macromolecular interface.},
       keywords = {Interface Contact Matrix, Bioinformatics, Macromolecular Interfaces, Human–Computer Interaction, Tumour Necrosis Factor},
       doi = {10.1007/978-3-540-76805-0_17},
       url = {https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=305016&pCurrPk=40182}
    }
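
    The contact-matrix computation described above reduces to thresholding inter-atomic distances between residues of the two chains. A short NumPy sketch under assumptions the paper leaves open (per-residue atom coordinate arrays as input, a 5 Angstrom cutoff):

    import numpy as np

    def contact_matrix(chain_a, chain_b, cutoff=5.0):
        """chain_a, chain_b: lists of (n_atoms, 3) coordinate arrays, one per residue.
        M[i, j] is True if any atom of residue i (chain A) lies within
        `cutoff` of any atom of residue j (chain B)."""
        m = np.zeros((len(chain_a), len(chain_b)), dtype=bool)
        for i, res_a in enumerate(chain_a):
            for j, res_b in enumerate(chain_b):
                # all pairwise atom distances between the two residues
                d = np.linalg.norm(res_a[:, None, :] - res_b[None, :, :], axis=-1)
                m[i, j] = bool((d < cutoff).any())
        return m

    def interface_residues(m):
        """Index arrays of residues with at least one cross-chain contact."""
        return np.nonzero(m.any(axis=1))[0], np.nonzero(m.any(axis=0))[0]

    In the paper's tool, clicking a cell of the matrix highlights the corresponding residue pair in the linked 3D view; a boolean matrix like the one above would supply exactly that lookup.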

  • [DAT-c19] D. Leitner, S. Wassertheurer, M. Hessinger, A. Holzinger, and F. Breitenecker, “Modeling Elastic Vessels with the LBGK Method in Three Dimensions“, in HCI and Usability for Medicine and Health Care, Third Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society, USAB 2007, Graz, Austria, November 22, 2007, Proceedings, 2007, pp. 213-226.
    [BibTeX] [DOI] [Download PDF]
    @incollection{DAT-c19,
      author    = {Daniel Leitner and
                   Siegfried Wassertheurer and
                   Michael Hessinger and
                   Andreas Holzinger and
                   Felix Breitenecker},
      title     = {Modeling Elastic Vessels with the {LBGK} Method in Three Dimensions},
      booktitle = {{HCI} and Usability for Medicine and Health Care, Third Symposium
                   of the Workgroup Human-Computer Interaction and Usability Engineering
                   of the Austrian Computer Society, {USAB} 2007, Graz, Austria, November,
                   22, 2007, Proceedings},
      pages     = {213--226},
      year      = {2007},
      crossref  = {DBLP:conf/usab/2007},
      url       = {http://dx.doi.org/10.1007/978-3-540-76805-0_18},
      doi       = {10.1007/978-3-540-76805-0_18},
      timestamp = {Fri, 09 Nov 2007 12:40:35 +0100},
      biburl    = {http://dblp.uni-trier.de/rec/bib/conf/usab/LeitnerWHHB07},
      bibsource = {dblp computer science bibliography, http://dblp.org}
    }

  • [c18] R. Behringer, J. Christian, A. Holzinger, and S. Wilkinson, “Some Usability Issues of Augmented and Mixed Reality for e-Health Applications in the Medical Domain“, in HCI and Usability for Medicine and Health Care, Third Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society, USAB 2007, Graz, Austria, November 22, 2007, Proceedings, 2007, pp. 255-266.
    [BibTeX] [DOI] [Download PDF]
    @incollection{c18,
      author    = {Reinhold Behringer and
                   Johannes Christian and
                   Andreas Holzinger and
                   Steve Wilkinson},
      title     = {Some Usability Issues of Augmented and Mixed Reality for e-Health
                   Applications in the Medical Domain},
      booktitle = {{HCI} and Usability for Medicine and Health Care, Third Symposium
                   of the Workgroup Human-Computer Interaction and Usability Engineering
                   of the Austrian Computer Society, {USAB} 2007, Graz, Austria, November,
                   22, 2007, Proceedings},
      pages     = {255--266},
      year      = {2007},
      crossref  = {DBLP:conf/usab/2007},
      url       = {http://dx.doi.org/10.1007/978-3-540-76805-0_21},
      doi       = {10.1007/978-3-540-76805-0_21},
      timestamp = {Fri, 09 Nov 2007 12:40:35 +0100},
      biburl    = {http://dblp.uni-trier.de/rec/bib/conf/usab/BehringerCHW07},
      bibsource = {dblp computer science bibliography, http://dblp.org}
    }

  • [e5] A. Holzinger, Ed., HCI and Usability for Medicine and Health Care, Third Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society, USAB 2007, Graz, Austria, November 22, 2007, Proceedings, Lecture Notes in Computer Science LNCS 4799, Springer, 2007.
    [BibTeX]
    @proceedings{e5,
      editor    = {Andreas Holzinger},
      title     = {{HCI} and Usability for Medicine and Health Care, Third Symposium
                   of the Workgroup Human-Computer Interaction and Usability Engineering
                   of the Austrian Computer Society, {USAB} 2007, Graz, Austria, November,
                   22, 2007, Proceedings},
      series    = {Lecture Notes in Computer Science, LNCS 4799},
      volume    = {4799},
      publisher = {Springer},
      year      = {2007},
      isbn      = {978-3-540-76804-3},
      timestamp = {Fri, 09 Nov 2007 12:40:35 +0100},
      biburl    = {http://dblp.uni-trier.de/rec/bib/conf/usab/2007},
      bibsource = {dblp computer science bibliography, http://dblp.org}
    }