Open Theses
Topic | Supervisor | Type | Partner |
Explainable AI and Large Language Models (LLM)
Description: Artificial Intelligence has experienced tremendous advancements in recent years, with the development of Large Language Models (LLMs) such as GPT-4, which are capable of generating human-like text. However, one of the significant challenges these complex models pose is their "black box" nature: the exact workings of these models are often opaque and difficult to understand, even for experts in the field. This thesis aims to investigate this critical issue through the lens of Explainable AI (XAI). The goal of this research is twofold:
This research has the potential to contribute significantly to the AI and machine learning community by enhancing our understanding of the complex decision-making processes of LLMs and improving the trustworthiness and accountability of these models. Prerequisites: Candidates should have a strong background in machine learning, artificial intelligence, and natural language processing. Good programming skills (Python preferred) and experience with deep learning frameworks such as TensorFlow or PyTorch are highly recommended. |
Tobias Clement | Bachelor/Master | |
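One model-agnostic XAI technique relevant to the topic above is occlusion-based attribution: a token's importance is estimated by how much the model's output changes when that token is removed. The sketch below is a minimal, hypothetical illustration; `toy_score` is a stand-in for a real LLM's scoring function, not part of any actual system.

```python
def toy_score(tokens):
    """Toy 'sentiment' score: counts positive words (stand-in for a model)."""
    positive = {"great", "good", "excellent"}
    return sum(1.0 for t in tokens if t in positive)

def occlusion_attribution(tokens, score_fn):
    """Attribute the score to each token by leave-one-out occlusion."""
    base = score_fn(tokens)
    attributions = []
    for i in range(len(tokens)):
        occluded = tokens[:i] + tokens[i + 1:]
        # A large drop means the occluded token mattered for the score.
        attributions.append(base - score_fn(occluded))
    return attributions

tokens = ["the", "movie", "was", "great"]
print(occlusion_attribution(tokens, toy_score))  # [0.0, 0.0, 0.0, 1.0]
```

The same leave-one-out loop works unchanged with any black-box scoring function, which is why occlusion is a common baseline against which gradient-based attribution methods are compared.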
Explainable AI for Image Segmentation and Quality Inspection
Description: Image segmentation and quality inspection are crucial tasks in various domains, such as manufacturing, healthcare, and autonomous vehicles. Deep learning-based approaches have shown remarkable performance in these tasks; however, they often lack interpretability, making it challenging to understand how the models make decisions. Explainable AI (XAI) can help address this issue by providing insights into the decision-making process of AI models. This thesis will focus on developing an XAI approach for image segmentation and quality inspection tasks. The goal is to design and implement an XAI system that can accurately segment images and inspect their quality while providing interpretability into the model’s decision-making process. The thesis will involve conducting a comprehensive literature review, designing and implementing a deep learning-based image segmentation and quality inspection model, developing an XAI approach to explain the model’s decision-making process, and evaluating the effectiveness of the XAI system in a given use case based on the TTPLA dataset. The thesis will also involve exploring the potential of integrating different XAI techniques, such as attention mechanisms and visualization, to enhance the interpretability of the model. This research will contribute to the advancement of XAI and help organizations develop more interpretable and trustworthy image segmentation and quality inspection systems. |
Tobias Clement | Master | |
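Evaluating a segmentation model, as the thesis above requires for the TTPLA use case, typically starts from Intersection over Union (IoU). A minimal sketch, with binary masks represented as flat 0/1 lists for simplicity:

```python
def iou(pred, truth):
    """IoU of two binary masks given as equal-length 0/1 sequences."""
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, truth) if p == 1 or t == 1)
    return inter / union if union else 1.0  # two empty masks agree perfectly

pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 0, 1, 1, 1, 0]
print(iou(pred, truth))  # 2 overlapping pixels / 4 in union = 0.5
```

In practice the same computation runs per class on 2D mask arrays and is averaged (mIoU); comparing IoU before and after applying an XAI-guided correction is one way to quantify whether the explanations are actionable.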
Theses in the Context of Trustworthy AI
Machine learning (ML) methods are used in numerous application areas nowadays. Given the real-world implications, there is a high demand for the trustworthiness of ML-based systems. The research area "Trustworthy AI" investigates how a safe, transparent and responsible use of AI or ML can be guaranteed. Various requirements have to be fulfilled. One aspect is that predictions of ML models are fair, i.e., do not discriminate against certain individuals or groups. Additionally, ML systems should be robust in various situations, e.g., when real data deviate from training data (such as different lighting conditions in pictures). Privacy refers to the protection of personal information used by ML models for decision-making purposes, and explainability deals with the ability to communicate the decision-making process of a model in a way that is understandable to humans. Possible topics:
Please apply with a brief motivation for your selected topic and CV (in German or English). |
Annika Schreiner | Bachelor/Master | – |
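Of the trustworthiness requirements listed above, fairness is the easiest to make concrete with a short sketch. One standard criterion is demographic parity: the rate of positive predictions should be similar across groups. The data below is purely illustrative, not from any real system.

```python
from collections import defaultdict

def positive_rates(predictions, groups):
    """Rate of positive (1) predictions per group."""
    pos, total = defaultdict(int), defaultdict(int)
    for pred, grp in zip(predictions, groups):
        total[grp] += 1
        pos[grp] += pred
    return {g: pos[g] / total[g] for g in total}

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
rates = positive_rates(preds, groups)
print(rates)                                      # {'a': 0.75, 'b': 0.25}
print(max(rates.values()) - min(rates.values()))  # parity gap: 0.5
```

A parity gap near zero indicates the model treats the groups similarly on this criterion; other fairness notions (e.g., equalized odds) additionally condition on the true labels.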
Conceptual Design and Implementation of a Procedure for Evaluating the Robustness of Machine Learning Systems
Machine learning (ML) methods are used in numerous application areas nowadays. Considering the real (societal) impact, there is a high demand for the trustworthiness of ML-based systems. The research area "Trustworthy AI" investigates how a safe, transparent and responsible use of AI or ML can be designed. One requirement of ML systems is to be reliable and robust, e.g., against deviations of real data from training data (distribution shift, such as different lighting conditions in images) or spurious correlations. The aim is to develop a concept for evaluating the robustness of ML systems and to implement a concrete example based on images. Please apply with a brief motivation and CV (in German or English). |
Annika Schreiner | Master | – |
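The evaluation idea in the topic above can be sketched in a few lines: measure how a classifier's accuracy degrades as test inputs drift from the training distribution, here simulated as a global brightness shift on toy "images" (flat lists of pixel intensities in [0, 1]). The threshold classifier is a hypothetical stand-in for a trained model.

```python
def classify(image, threshold=0.5):
    """Toy classifier: label 1 if mean intensity exceeds the threshold."""
    return 1 if sum(image) / len(image) > threshold else 0

def accuracy(images, labels, shift=0.0):
    """Accuracy after adding a brightness shift to every pixel (clipped at 1)."""
    preds = [classify([min(1.0, p + shift) for p in img]) for img in images]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

images = [[0.2, 0.3], [0.8, 0.9], [0.1, 0.2], [0.7, 0.6]]
labels = [0, 1, 0, 1]
print(accuracy(images, labels, shift=0.0))  # 1.0 on clean data
print(accuracy(images, labels, shift=0.4))  # 0.5 under the shift
```

Sweeping the shift magnitude and plotting accuracy against it yields a simple robustness curve; the same harness generalizes to other corruptions (blur, noise, contrast) applied to real image data.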
Thesis on Cloud Architectures and Design Patterns at XITASO GmbH
Applications can be developed and operated in many different ways. From on-premises monoliths to serverless and cloud-native microservices, much is possible, though not everything is always sensible. Each pattern has its own challenges regarding:
The goal of the thesis is to analyze these factors in the concrete use case of GovRadar. Founded in spring 2020, this tech startup drives the digitalization of Germany by using state-of-the-art technologies to automate or completely replace established processes in public authorities. Among other things, an innovative, data-driven procurement platform for the public sector is currently being developed. We are always looking for committed colleagues with an agile mindset who are passionate about building outstanding software solutions and want to help shape the XITASO way with us. At XITASO, you can directly apply the knowledge acquired during your studies. Working through the topic will give you both foundational and in-depth practical knowledge. Job posting: https://xitaso.com/karriere/jobs/abschlussarbeit-cloud-architekturen-und-design-patterns/ Please apply with a short cover letter (e.g., experience in software development) and CV. |
Annika Schreiner | Bachelor/Master | XITASO GmbH |
State of the Art of a Selected Machine Learning Research Direction
Machine learning is a topic of great research interest. For current trends see, e.g., the Google AI Blog. Please apply only with a topic of your own or a concrete research question. Example: the CO2 footprint of ML and energy-saving measures |
Stefan Arnold | Bachelor/Master | – |
Evaluating Architectures for Computer Vision w.r.t. Spurious Correlations in Images: A Fairness Perspective
Computer vision is a subfield of Machine Learning (ML) that typically uses Convolutional Neural Networks (CNNs) to process images. Instead of CNNs, Google recently suggested new architectures (see, e.g., Transformer and Mixer) to deal with spatial data. While their top-line generalization accuracy has been comparable to CNNs, the fairness (measured as the parity of accuracy) among subgroups remains an open question. Using the publicly available CelebA dataset, the goal of this thesis is to compare ResNet18 (representative of CNNs), ViT, and MLP-Mixer in terms of their accuracy within demographic subgroups (i.e., gender and age). See Sagawa et al. (2020) for the methodological setup (code snippets also available upon request) and Hooker et al. (2020) for a rigorous assessment of algorithmic bias. An NVIDIA GPU or Google Colab Premium is recommended. |
Stefan Arnold | Master | – |
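The evaluation this topic calls for, fairness as parity of accuracy across subgroups, reduces to computing per-group accuracies and reporting the worst one (the worst-group accuracy used by Sagawa et al., 2020). The predictions and subgroup labels below are illustrative only, not CelebA data.

```python
from collections import defaultdict

def group_accuracies(preds, labels, groups):
    """Per-subgroup accuracy of a classifier."""
    correct, total = defaultdict(int), defaultdict(int)
    for p, y, g in zip(preds, labels, groups):
        total[g] += 1
        correct[g] += int(p == y)
    return {g: correct[g] / total[g] for g in total}

preds  = [1, 1, 0, 1, 0, 0, 1, 1]
labels = [1, 1, 0, 0, 0, 1, 1, 0]
groups = ["young"] * 4 + ["old"] * 4

accs = group_accuracies(preds, labels, groups)
print(accs)                # {'young': 0.75, 'old': 0.5}
print(min(accs.values()))  # worst-group accuracy: 0.5
```

Comparing ResNet18, ViT, and MLP-Mixer on this statistic (rather than overall accuracy alone) reveals whether an architecture trades subgroup performance for aggregate performance, which is exactly the question the thesis poses.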