Topics

Analytic Computing

We offer a range of topics for Bachelor's and Master's theses. The list is updated regularly, so please check back for new topics.

The focus group Artificial Intelligence and Knowledge Graphs pays special attention to symbolic methods for AI. Symbolic methods are based on explicit, human-readable representations. They usually offer more transparency and better explainability than subsymbolic methods, but can be more difficult to maintain and to learn from data. Knowledge graphs are currently one of the most popular symbolic methods in industry and one of the driving forces behind intelligent software products by companies like eBay, Facebook, Google, IBM and Microsoft[1]. Challenges include building knowledge graphs from unstructured data, resolving inconsistent and redundant information, and efficiently processing increasingly expressive queries over ever-larger knowledge bases.
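
To make the idea of an explicit, queryable representation concrete, here is a minimal sketch of a knowledge graph as subject-predicate-object triples, built and queried with the Python library rdflib. The entities and facts are invented purely for illustration and do not come from any of the projects described here.

```python
# Minimal sketch of a knowledge graph as explicit subject-predicate-object
# triples, queried with SPARQL via rdflib. Entities and relations are invented
# for illustration only.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()

# Explicit, human-readable facts.
g.add((EX.Ada, RDF.type, EX.Person))
g.add((EX.Ada, EX.worksFor, EX.AcmeCorp))
g.add((EX.AcmeCorp, RDF.type, EX.Company))
g.add((EX.AcmeCorp, EX.locatedIn, Literal("Stuttgart")))

# An expressive query: which persons work for a company located in Stuttgart?
query = """
SELECT ?person WHERE {
    ?person a ex:Person ;
            ex:worksFor ?org .
    ?org a ex:Company ;
         ex:locatedIn "Stuttgart" .
}
"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.person)
```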

Some of our current projects include:

KnowGraphs: scale knowledge graphs to be accessible to a wide audience of users across multiple domains, including companies of all sizes (in domains such as Industry 4.0, biomedicine, finance, and law) and even end users (e.g., through personal assistants and web search). The KnowGraphs team focuses on four facets of knowledge graph management: representation, construction and maintenance, operation, and exploitation.

COFFEE: support the reuse of designs and processes in product development through intentional forgetting methods. Intentional forgetting aims at simulating human forgetting and can be seen as a natural counterpart of machine learning: while machine learning deals with adding new knowledge, forgetting deals with removing irrelevant information.
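
As a toy illustration of the idea (not the actual forgetting mechanism developed in COFFEE), intentional forgetting can be pictured as pruning stored facts whose estimated relevance has decayed below a threshold:

```python
# Toy sketch of intentional forgetting as relevance-based pruning.
# The scoring rule and threshold are invented for illustration; they are not
# the forgetting mechanism used in the COFFEE project.
import math
import time

def relevance(last_used: float, use_count: int, now: float,
              half_life: float = 30 * 86400) -> float:
    """Relevance decays exponentially since last use, weighted by how often the fact was used."""
    return use_count * math.exp(-(now - last_used) * math.log(2) / half_life)

def forget(knowledge: dict, threshold: float = 0.1) -> dict:
    """Keep only those facts whose current relevance is still above the threshold."""
    now = time.time()
    return {fact: meta for fact, meta in knowledge.items()
            if relevance(meta["last_used"], meta["use_count"], now) >= threshold}
```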

[1] Noy, N., Gao, Y., Jain, A., Narayanan, A., Patterson, A., & Taylor, J. (2019). Industry-scale knowledge graphs: Lessons and challenges. Queue, 17(2), 48-75.

The focus group Artificial Intelligence and Human-Computer Interaction pays special attention to computational methodologies from data science and AI that enhance HCI research and applications, making digital technology more effective, efficient, and expressive to use. In particular, we work on multimodal and natural interaction modalities and styles (e.g., eye gaze, touch, VR, AR, conversational voice or text user interfaces) and on the enhancement of traditional user interfaces (e.g., Web, GUI). The goal of the focus group is twofold: to enhance the accessibility and the usability of interfaces. We aim to enhance accessibility through novel interaction channels and their adaptation with AI-enabled components. To enhance usability, we model user behaviour patterns and the semantics of interfaces using machine learning methods. We believe that the intersection of AI and HCI enables users to better explore, steer, and extend their capabilities in challenging tasks.

Some of our current projects include:

The focus group Machine Learning for Natural Language Processing builds natural language processing tools in multiple application domains such as question answering, named entity recognition and linking, and argument mining. While much of the work is empirical and aimed at improving the quality of the systems developed for each application domain, we are also interested in providing insights into the decision-making process of the various algorithms through the use of interpretability measures.

Current challenges in all of these applications include: (i) correctly identifying entities that have never been mentioned before (zero-shot recognition); (ii) combining the information captured by transformer-based models through pre-training on large collections of text with the structured information available in knowledge graphs, ontologies and thesauri; and (iii) modelling changes in the underlying data sources.
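
As a small, self-contained example of the transformer-based starting point for such work, the sketch below runs an off-the-shelf named entity recognition model with the Hugging Face transformers library. The checkpoint is a publicly available example model, not necessarily one used in the group's systems, and linking the recognized entities to a knowledge graph would be a separate step.

```python
# Sketch of transformer-based named entity recognition using the Hugging Face
# transformers library. The checkpoint is a public example model; linking the
# recognized entities to a knowledge graph is a separate, harder step.
from transformers import pipeline

ner = pipeline("ner",
               model="dslim/bert-base-NER",
               aggregation_strategy="simple")

text = "The University of Stuttgart collaborates with IBM on knowledge graphs."
for entity in ner(text):
    # Each prediction has a surface span, a coarse entity type and a confidence score.
    print(entity["word"], entity["entity_group"], round(float(entity["score"]), 2))
```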


Some of our current projects include:

ServiceMeister: The research at the University of Stuttgart focuses on bringing together multiple knowledge sources (e.g. textual knowledge, information from knowledge graphs, common-sense knowledge, master knowledge) into a unified system powered by deep learning techniques. Aside from answering the questions posed by service technicians, the system will also provide explanations for its predictions, making them easier to interpret and use.
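
For illustration, the extractive question-answering building block of such a system can be sketched with the transformers library as follows. The passage and question are invented, the default model is a generic public one, and the full ServiceMeister system additionally fuses several knowledge sources and generates explanations, which this toy example does not.

```python
# Minimal sketch of extractive question answering over a single passage with
# the transformers library. The real system fuses several knowledge sources
# and produces explanations; this only shows the basic QA building block.
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default extractive QA model

context = ("The coolant pump must be replaced when the pressure drops "
           "below 2 bar for more than five minutes.")
question = "When must the coolant pump be replaced?"

answer = qa(question=question, context=context)
print(answer["answer"], round(answer["score"], 2))
```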

Outcite: Making bibliographic data available is important in all disciplines to ensure easy and fast access to the literature and to other scientific resources such as research datasets. To this end, many publishers strive to index their publications in bibliographic databases, enabling publications to be linked in a citation graph. Still, a significant part of citation data in disciplines such as the social sciences is not accessible via bibliographic databases. The main goal of OUTCITE is to research, develop and deploy a toolchain that follows up on the output produced by the EXCITE pipeline in order to link non-source items to their sources.
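
One ingredient of such a toolchain is matching an extracted reference string against candidate records in a bibliographic database. The following is a minimal, illustrative sketch of that matching idea using simple string similarity; the actual EXCITE/OUTCITE pipeline is considerably more sophisticated.

```python
# Illustrative sketch of linking an extracted reference string to a candidate
# bibliographic record via fuzzy title matching. The real OUTCITE toolchain is
# far more involved; this only shows the basic matching idea.
from difflib import SequenceMatcher

def best_match(reference, candidate_titles, min_score=0.8):
    """Return the most similar candidate title, or None if nothing scores well enough."""
    scored = [(SequenceMatcher(None, reference.lower(), title.lower()).ratio(), title)
              for title in candidate_titles]
    score, title = max(scored)
    return (title, score) if score >= min_score else (None, score)

reference = "Industry-scale knowledge graphs: Lessons and challenges. Queue 17(2)"
candidates = ["Industry-Scale Knowledge Graphs: Lessons and Challenges",
              "An introduction to citation analysis"]
print(best_match(reference, candidates))
```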

Open Argument Mining: Open debates include so many arguments that sound decision making exceeds the cognitive capabilities of the interested public or the responsible experts. New arguments are continuously contributed, are often incomplete, and require knowledge about common facts or previous arguments to be understood. This project aims at investigating computational methods that (1) continuously improve their capability to recognize arguments in ongoing debates, (2) align incomplete arguments with previous arguments and enrich them with automatically acquired background knowledge, and (3) constantly extend semantic knowledge bases with the information required to understand arguments.

Co-Inform: Misinformation generates misperceptions, which have affected policies in many domains, including the economy, health, the environment, and foreign policy. Co-Inform brings together a multidisciplinary team of scientists and practitioners to foster co-creational methodologies and practices for engaging stakeholders in combating misinformation in posts and news articles, combined with advanced intelligent methods for misinformation detection, misinformation flow prediction, and real-time processing and measurement of crowds' acceptance or refusal of misinformation.

The availability of measurements for the motion prediction of vehicles and the success of deep learning in handling sequential data have led to techniques for identifying non-linear systems with deep learning. The resulting models, however, do not provide stability certificates for the input-output behavior. We therefore want to combine system theory, applied to first-principles models, with neural networks to design systems that are flexible, deliver good prediction results, and satisfy safety guarantees.

First-principles models are based on expert knowledge. This raises the question of how to integrate and exploit prior scientific knowledge in machine learning in the general case.
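
A common pattern for such a combination is to let a neural network learn only a residual correction on top of a known first-principles model. The following PyTorch code is an illustrative sketch of that pattern, not the specific architecture investigated here, and it does not by itself provide the stability certificates mentioned above; the dynamics matrices are made up for the example.

```python
# Sketch of a hybrid model: known (first-principles) linear dynamics plus a
# small neural network that learns a residual correction for unmodelled effects.
# Illustrative only; the matrices below are placeholders, and no stability
# certificate is provided by this construction as such.
import torch
import torch.nn as nn

class HybridDynamics(nn.Module):
    def __init__(self, state_dim: int, input_dim: int, hidden: int = 32):
        super().__init__()
        # First-principles part: fixed linear dynamics x+ = A x + B u.
        self.A = nn.Parameter(torch.eye(state_dim) * 0.9, requires_grad=False)
        self.B = nn.Parameter(torch.ones(state_dim, input_dim) * 0.1, requires_grad=False)
        # Learned part: a small network capturing unmodelled effects.
        self.residual = nn.Sequential(
            nn.Linear(state_dim + input_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, x: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
        physics = x @ self.A.T + u @ self.B.T
        correction = self.residual(torch.cat([x, u], dim=-1))
        return physics + correction

model = HybridDynamics(state_dim=2, input_dim=1)
x, u = torch.randn(5, 2), torch.randn(5, 1)
print(model(x, u).shape)  # torch.Size([5, 2])
```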

Some of our current projects include:

Description in preparation.
