Keynote Talks
DOI: https://doi.org/10.11576/dataninja-1184
Abstract
Tuesday, 25.06.2024
Prof. Dr. Anand Subramoney
Scalable Architectures for Neuromorphic Machine Learning
I will discuss how to design architectures for neuromorphic machine learning from first principles. These architectures take inspiration from biology without being constrained by biological details. Two major themes will be sparsity and asynchrony, and their significant role in scalable neuromorphic systems. I will present recent work from my group on using various forms of sparsity and distributed learning to improve the scalability and efficiency of neuromorphic deep learning models.
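To make the role of activation sparsity concrete, here is a minimal sketch (not taken from the talk) of a generic top-k sparsity layer in PyTorch; the function name and the 16-of-256 setting are illustrative assumptions, not the architectures the speaker presents.

```python
# Illustrative sketch only: a generic top-k activation-sparsity layer,
# not the specific neuromorphic architectures discussed in the talk.
import torch


def topk_sparse(x: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the k largest activations per sample and zero out the rest."""
    values, indices = torch.topk(x, k, dim=-1)
    mask = torch.zeros_like(x).scatter_(-1, indices, 1.0)
    return x * mask


# Example: only 16 of 256 hidden units stay active per input, so
# downstream computation (and communication) can be event-driven.
hidden = torch.randn(32, 256)               # batch of dense activations
sparse_hidden = topk_sparse(hidden, k=16)
print((sparse_hidden != 0).float().mean().item())  # ~16/256 active
```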
Prof. Dr. Kerstin Bunte
Scientific Machine Learning for Partially Observed Dynamical Systems
Nowadays, most successful machine learning (ML) techniques for the analysis of complex interdisciplinary data use large amounts of measurements as input to a statistical system. The domain expert knowledge is often only used in data preprocessing. The subsequently trained technique appears as a “black box”, which is difficult to interpret and rarely allows insight into the underlying natural process. Especially in critical domains such as medicine and engineering, the analysis of dynamic data in the form of sequences and time series is often difficult. Due to natural or cost limitations and ethical considerations, data is often irregularly and sparsely sampled, and the underlying dynamic system is complex. Therefore, domain experts currently enter a time-consuming and laborious cycle of mechanistic model construction and simulation, often without direct use of the experimental data or the task at hand. We now combine the predictive power of ML with the explanatory power of mechanistic models.
Therefore, we perform learning in the space of dynamic models that represent the complex underlying natural processes, with potentially very few and limited measurements. We use principles of dimensionality reduction, such as subspace learning, to determine relevant areas in the parameter space of the underlying model as a first step towards task-driven model reduction. We furthermore incorporate identifiability analysis for informed posterior construction to improve learning with ill-posed systems caused by data limitations. Findings indicate the possibility of an alternative handling of epistemic uncertainties for scientific machine learning techniques, applicable to all linear mechanistic models and to classes of non-linear mechanistic models based on Lie symmetries.
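As a toy illustration of learning in the space of mechanistic models from sparse, irregular measurements, the following sketch fits the parameters of a simple ODE to a handful of observations; the logistic model, the data values, and the least-squares setup are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only (not the authors' method): fitting the parameters
# of a simple mechanistic ODE model to a few irregularly sampled observations.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares


def logistic_growth(t, x, r, K):
    """Toy mechanistic model: logistic growth with rate r and capacity K."""
    return r * x * (1.0 - x / K)


# Sparse, irregular measurement times and hypothetical noisy observations.
t_obs = np.array([0.0, 0.7, 2.3, 5.1, 9.4])
y_obs = np.array([0.11, 0.19, 0.48, 0.83, 0.97])


def residuals(theta):
    r, K = theta
    sol = solve_ivp(logistic_growth, (t_obs[0], t_obs[-1]), [y_obs[0]],
                    t_eval=t_obs, args=(r, K))
    return sol.y[0] - y_obs


fit = least_squares(residuals, x0=[0.5, 1.0], bounds=([0.0, 0.1], [5.0, 10.0]))
print("estimated rate and capacity:", fit.x)
```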
Joint work of:
Bunte, Kerstin; Tino, Peter; Oostwal, Elisa; Norden, Janis; Chappell, Michael; Smith, Dave
Prof. Dr. Holger Hoos
How and Why AI Will Shape the Future of Science and Engineering
Recent progress in artificial intelligence has elevated what used to be a highly specialised research area to a topic of public discourse and debate. In this presentation, I will discuss why, beyond the hype, there are good reasons to be excited, but also concerned, about AI. Specifically, I will explain how and why AI will have a transformative impact on all sciences and engineering disciplines. Based on my own research on the robustness of neural networks, I will discuss some of the fundamental strengths, weaknesses and limitations of current AI systems. Finally, I will share some thoughts on the most serious risks of deploying these systems quickly and broadly, as well as on what needs to be done in order to manage these risks and to realise the benefits AI can bring.
Prof. Dr. Christian Igel
Deep Learning for Large-Scale Tree Carbon Stock Estimation From Satellite Imagery
Trees play an important role in carbon sequestration and biodiversity, as well as in timber and food production. We need a better characterization of woody resources at global scale to understand how they are affected by climate change and human management. Recent advances in satellite remote sensing and machine learning (ML) based computer vision make this possible. This talk discusses large-scale mapping of individual trees using deep learning applied to high-resolution satellite imagery. The biomass of each tree, and thereby its carbon content, is estimated from the crown size using allometric equations. The parameters of these equations are learned from data. The functional relation is assumed to be non-decreasing. Such monotonicity constraints are powerful regularizers in ML in general. They can support fairness in computer-aided decision making and increase plausibility in data-driven scientific models. This talk introduces a conceptually simple and efficient neural network architecture for monotonic modelling that compares favorably to state-of-the-art alternatives. After this technical excursion, we present an application of our tree monitoring in Rwanda, where it helps to quantify the progress of restoration projects and to develop a pathway to reach the country’s goal of net zero emissions by 2050.
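For concreteness, the sketch below fits a hypothetical allometric power law relating crown area to biomass under non-negativity constraints that keep the fit non-decreasing; the data values, the functional form, and the fitting routine are illustrative assumptions and not the monotonic neural network architecture presented in the talk.

```python
# Illustrative sketch only: fitting an allometric power law
# biomass ~ a * crown_area**b with a non-decreasing constraint (a, b >= 0).
# Data and functional form are hypothetical, not from the talk.
import numpy as np
from scipy.optimize import curve_fit


def allometric(crown_area, a, b):
    return a * np.power(crown_area, b)


crown_area = np.array([2.0, 5.0, 9.0, 14.0, 22.0, 35.0])   # m^2 (made up)
biomass = np.array([4.1, 12.8, 27.5, 48.0, 86.3, 150.2])   # kg  (made up)

# Non-negative bounds on (a, b) enforce a monotonically non-decreasing fit.
params, _ = curve_fit(allometric, crown_area, biomass,
                      p0=[1.0, 1.0], bounds=([0.0, 0.0], [np.inf, np.inf]))
a_hat, b_hat = params
print(f"biomass ~ {a_hat:.2f} * crown_area^{b_hat:.2f}")
```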
Wednesday, 26.06.2024
Prof. Dr. Lucie Flek
Perspective Taking in Large Language Models
Perspective-taking, the process of conceptualizing the point of view of another person, remains a challenge for LLMs. Understanding the mental state of others – emotions, beliefs, intentions – is central to the ability to empathize in social interactions. It is also key to choosing the best action to take next.
Enhancing the perspective-taking capabilities of LLMs can unlock their potential to react better and more safely to hints of distress, to engage in more receptive argumentation, or to tailor an explanation to its audience. In this talk, I will present our recent perspective-taking experiments and discuss further opportunities for bringing the human-centered perspectivist paradigm into LLMs.
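As a rough illustration of persona-conditioned prompting for perspective-taking, the sketch below assembles a prompt from a persona, a situation, and a question; the template and helper name are hypothetical and not the experimental setup described in the talk.

```python
# Illustrative sketch only: conditioning a language model on an explicit
# persona/perspective before answering. Template and helper are hypothetical,
# not the prompting setup used in the speaker's experiments.
def perspective_prompt(persona: str, situation: str, question: str) -> str:
    return (
        f"You are {persona}.\n"
        f"Situation: {situation}\n"
        "From this person's point of view (their emotions, beliefs and "
        f"intentions), answer: {question}"
    )


prompt = perspective_prompt(
    persona="a first-year student who just failed an important exam",
    situation="A friend casually jokes about how easy the exam was.",
    question="How does this remark feel, and what response would help?",
)
print(prompt)  # pass this string to any chat LLM of choice
```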
Prof. Dr. Henning Wachsmuth
LLM-based Argument Quality Improvement
Natural language processing (NLP) has recently seen a revolutionary breakthrough, due to the impressive capabilities of large language models (LLMs). This also affects NLP research on computational argumentation: the computational analysis and synthesis of natural language arguments. While one of the core tasks studied in computational argumentation is the assessment of an argument’s quality, in this talk I look one step further, namely at how to improve argument quality. Starting from the basics of argumentation, I present insights from selected research of my group involving LLMs for improving argument quality. As part of this, I also look at the recent breakthroughs of LLMs and the paradigm shift that comes with them for computational argumentation in particular and for NLP in general.
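To sketch what an assess-then-revise pipeline for argument quality could look like, the hedged example below abstracts the LLM call as a callable; the prompts, function names, and interface are hypothetical and not the systems presented in the talk.

```python
# Illustrative sketch only: a generic assess-then-revise loop for argument
# quality, with the LLM call abstracted as a callable. Prompts and the `llm`
# interface are hypothetical, not the systems described in the talk.
from typing import Callable


def improve_argument(argument: str, llm: Callable[[str], str]) -> str:
    """Ask for a quality assessment first, then a revision guided by it."""
    feedback = llm(
        "Assess the quality of this argument (cogency, clarity, "
        f"appropriateness) and list concrete weaknesses:\n{argument}"
    )
    revised = llm(
        "Rewrite the argument so that it addresses the weaknesses below, "
        f"without changing its stance.\nArgument: {argument}\n"
        f"Weaknesses: {feedback}"
    )
    return revised


# Usage: plug in any chat-completion client, e.g.
# improved = improve_argument("School uniforms should be mandatory because...",
#                             llm=my_chat_model)
```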
Thursday, 27.06.2024
Prof. Dr. Sebastian Trimpe
Trustworthy AI for Physical Machines: Integrating Machine Learning and Control
AI promises significant advancements in engineering, enhancing both design and operation processes. Given that engineering focuses on physical machines like vehicles or robots, ensuring trustworthy solutions is crucial. This talk will explore how combining classical control methods with modern machine learning can create reliable algorithms for real-world applications. Specifically, we will discuss some of our recent research on (i) Bayesian optimization for controller learning, (ii) deep reinforcement learning, and (iii) approximate model-predictive control via imitation learning. The effectiveness of the developed algorithms will be demonstrated through experimental results on robotic hardware.
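As a minimal illustration of Bayesian optimization for controller tuning, the sketch below optimizes a single gain against a made-up closed-loop cost using a Gaussian-process surrogate and a lower-confidence-bound acquisition; the cost function, search range, and acquisition choice are illustrative assumptions, not the methods from the talk.

```python
# Illustrative sketch only: Bayesian optimization of one controller gain with
# a Gaussian-process surrogate; the cost is a stand-in for a real experiment.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def closedloop_cost(gain: float) -> float:
    """Hypothetical experiment: tracking error plus a small effort penalty."""
    return (gain - 2.3) ** 2 + 0.1 * np.sin(5.0 * gain)


gains = [0.5, 4.0]                          # initial experiments
costs = [closedloop_cost(g) for g in gains]
candidates = np.linspace(0.0, 5.0, 200)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.array(gains).reshape(-1, 1), np.array(costs))
    mean, std = gp.predict(candidates.reshape(-1, 1), return_std=True)
    # Lower-confidence-bound acquisition: favor low predicted cost and
    # regions where the surrogate is still uncertain.
    next_gain = float(candidates[np.argmin(mean - 2.0 * std)])
    gains.append(next_gain)
    costs.append(closedloop_cost(next_gain))

print("best gain found:", gains[int(np.argmin(costs))])
```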
Prof. Dr. Malte Schilling
Biological Biases for Learning Robust Robot Behavior: Does Deep Reinforcement Learning Run into the Alignment Problem?
License
Copyright (c) 2024 Kuhl Ulrike
This work is licensed under a Creative Commons Attribution 4.0 International License.