Friedrich-Alexander-Universität Lehrstuhl für Wirtschaftsinformatik IT-Management

TrustOps

Nils Kemmerzell

Room: 5.441
Lange Gasse 20
90403 Nürnberg
  • Phone: +49 911 5302-95789
  • E-Mail: nils.kemmerzell@fau.de
  • LinkedIn: Nils Kemmerzell's profile

Office hours

By appointment

Annika Schreiner

Room: 5.437
Lange Gasse 20
90403 Nürnberg
  • Phone: +49 911 5302-95863
  • E-Mail: annika.schreiner@fau.de

Office hours

By appointment

Many application areas of machine learning (ML) place strong demands on trustworthiness, which in turn imposes high requirements on ML systems in terms of privacy, explainability, fairness, and robustness. To date, these facets have mostly been addressed in isolation: Explainable AI (XAI) aims to improve the explainability of ML models, while Privacy-Preserving Machine Learning (PPML) aims to strengthen data protection. Holistic approaches to trustworthiness that give equal weight to all facets are lacking. In addition, companies need concrete tools to account for trustworthiness not only during the development but also during the operation of ML systems.

The overall goal of this research project is to investigate trustworthiness in the development and operation of ML systems and to derive tools and best practices that support companies in implementing it. Privacy, explainability, fairness, and robustness, as essential facets of trustworthiness, are considered in a balanced and comprehensive way to enable a multi-perspective design of these tools and best practices. The tools cover the entire ML lifecycle in order to explicitly address the intertwining of development and operations (MLOps). Specifically, they aim to make trustworthiness evaluable, testable, and monitorable. The tools and best practices will be validated against numerous use cases provided by the corporate partner. Finally, the findings are expected to transfer to other ML applications in highly sensitive domains.
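To illustrate what "evaluable, testable and monitorable" trustworthiness can mean in practice, the following minimal sketch computes one common fairness metric (demographic parity difference) and wraps it in a pass/fail gate, as such a check might run in an MLOps pipeline. This is an illustrative example only, not a TrustOps project deliverable; the function names and the threshold are hypothetical.

```python
# Illustrative sketch (hypothetical names, not a TrustOps deliverable):
# a fairness metric that an ML pipeline could evaluate, test against a
# threshold, and monitor over time.

def demographic_parity_difference(predictions, groups):
    """Absolute difference between the highest and lowest
    positive-prediction rate across groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels (e.g. "A"/"B"), aligned with predictions
    """
    counts = {}
    for pred, grp in zip(predictions, groups):
        total, positives = counts.get(grp, (0, 0))
        counts[grp] = (total + 1, positives + pred)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

def fairness_gate(predictions, groups, threshold=0.1):
    """Return True if the disparity stays within the (hypothetical) threshold."""
    return demographic_parity_difference(predictions, groups) <= threshold
```

Run periodically on production predictions, the same metric turns an evaluation into a monitor: a gate failure can trigger an alert or block a deployment.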


Partner: Veridos GmbH
