Livestream & Recordings

The whole event was live streamed on YouTube, free for everyone, with no registration needed. Recordings are now available here:
Participant survey

If you are attending the event, we kindly ask you to complete a very short survey: https://forms.gle/rbnLqedJ9dy7tgBJ6
| Time | Speaker | Title of the talk |
| --- | --- | --- |
| 08:30-09:30 | Fabian Theis | Latent space learning in single cell genomics |
| 09:45-10:45 | Mathias Niepert | Neural-Relational Learning and some Biomedical Applications |
| 11:00-12:00 | Nataša Pržulj | Between viral targets and differentially expressed genes in COVID-19: the sweet spot for therapeutic intervention |
| 13:30-14:30 | Michael Bronstein | Geometric Deep Learning: from Euclid to drug design |
| 14:45-15:45 | Dana Pe’er | Machine Learning Meets Single Cell Biology: Insights and Challenges |
| 16:00-17:00 | David Sontag | Using machine learning to guide treatment suggestions |
Invited talks

Latent space learning in single cell genomics
Fabian Theis (Institute of Computational Biology, Helmholtz Munich)
Thursday, March 11, 08:30-09:30

Abstract: Modeling cellular state as well as dynamics, e.g. during differentiation or in response to perturbations, is a central goal of computational biology. Single-cell technologies now give us easy and large-scale access to state observations at the transcriptomic, epigenomic and, more recently, also the spatial level. In particular, they allow resolving potential heterogeneities due to asynchronicity of differentiating or responding cells, and profiles across multiple conditions such as time points, space and replicates are being generated, with a series of implications across biology and medicine. Most computational methods for single-cell genomics operate on an intermediate, often nonlinear, representation of the high-dimensional data, such as a cell-cell kNN graph or a more general latent space. Interpretation of these representations led early on to models of cellular differentiation, for example via pseudotemporal ordering or by mapping time information. Hence latent space modeling and manifold learning have become popular tools for learning the overall variation in single-cell gene expression, more recently also across data sets and modalities. After a short review of these approaches, I will discuss how latent space learning can be achieved using variants of autoencoders, with applications ranging from denoising and imputation to learning perturbations. I will then show how it can be used to integrate single-cell RNA-seq data sets across multiple labs in a privacy-aware manner, and demonstrate mapping disease variation by querying COVID-19 patients on top of a healthy immune reference atlas. I will present our recent resource Sfaira, a collection of data loaders and shared latent spaces across tissues, and finish with a short outlook towards spatial modeling and interpretability of latent projections under perturbations.
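To make the idea of a latent space for expression data concrete, here is a minimal toy sketch (not the speaker's actual method): for a *linear* autoencoder, the optimal encoder/decoder coincide with PCA, so we can compute them in closed form via SVD on a synthetic, illustrative "cells × genes" matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "expression matrix": 200 cells x 50 genes driven by 2 latent factors
Z_true = rng.normal(size=(200, 2))
W_true = rng.normal(size=(2, 50))
X = Z_true @ W_true + 0.1 * rng.normal(size=(200, 50))
X = X - X.mean(axis=0)               # center each gene

# Optimal linear autoencoder = PCA: encoder/decoder from the top-2
# right singular vectors of the data matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
encoder = Vt[:2].T                   # 50 genes -> 2 latent dims
decoder = Vt[:2]                     # 2 latent dims -> 50 genes

Z = X @ encoder                      # latent representation of each cell
X_hat = Z @ decoder                  # reconstruction from the latent space

mse0 = float(np.mean(X ** 2))        # baseline error of predicting zeros
mse = float(np.mean((X_hat - X) ** 2))
print(Z.shape, mse < mse0)           # -> (200, 2) True
```

The autoencoders discussed in the talk replace the linear maps with nonlinear neural networks, but the structure is the same: compress each cell to a low-dimensional code, then reconstruct.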
Neural-Relational Learning and some Biomedical Applications
Mathias Niepert (NEC Labs Europe)
Thursday, March 11, 09:45-10:45

Abstract: The talk will provide an overview of graph-based machine learning research conducted at NEC Labs Europe. The biomedical applications include, among others, cancer vaccine development, variant calling, and drug side effect prediction.
Between viral targets and differentially expressed genes in COVID-19: the sweet spot for therapeutic intervention
Nataša Pržulj (Catalan Institution for Research and Advanced Studies (ICREA) & Barcelona Supercomputing Center & University College London)
Thursday, March 11, 11:00-12:00

Abstract: The COVID-19 pandemic is raging. It revealed the importance of rapid scientific advancement towards understanding and treating new diseases. To address this challenge, we build on our previous methods for extracting new biomedical knowledge from the wiring patterns of systems-level, heterogeneous biomedical networks. These methods are needed due to the flood of molecular and clinical data measuring interactions between various biomolecules in and around a cell, which form large, complex systems. These systems-level network data provide heterogeneous but complementary information about cells, tissues and diseases. The challenge is how to mine them collectively to answer fundamental biological and medical questions. This is nontrivial because of the computational intractability of many underlying problems on networks (also called graphs), necessitating the development of heuristic methods for finding approximate solutions. We will give an overview of the lab’s work and then focus on explaining how we adapt an explainable artificial intelligence algorithm for data fusion and apply it to new omics data on viral-host interactions, human protein interactions, and drugs to better understand SARS-CoV-2 infection mechanisms and predict new drug-target interactions for COVID-19. We discover that in the human interactome, the human proteins targeted by SARS-CoV-2 proteins and the genes that are differentially expressed after the infection have common neighbors central in the interactome that may be key to the disease mechanisms. We uncover 185 new drug-target interactions targeting 49 of these key genes and suggest re-purposing of 149 FDA-approved drugs, including drugs targeting VEGF and nitric oxide signaling, whose pathways coincide with the observed COVID-19 symptoms. Our integrative methodology is universal and can enable insight into this and other serious diseases, as well as personalize treatment.
Geometric Deep Learning: from Euclid to drug design
Michael Bronstein (Imperial College London & University of Lugano & Twitter)
Thursday, March 11, 13:30-14:30

Abstract: For nearly two millennia, the word “geometry” was synonymous with Euclidean geometry, as no other types of geometry existed. Euclid’s monopoly came to an end in the 19th century, when multiple examples of non-Euclidean geometries were shown. However, these studies quickly diverged into disparate fields, with mathematicians debating the relations between different geometries and what defines one. A way out of this pickle was shown by Felix Klein in his Erlangen Programme, which proposed approaching geometry as the study of invariants or symmetries using the language of group theory. In the 20th century, these ideas were fundamental in developing modern physics, culminating in the Standard Model. The current state of deep learning somewhat resembles the situation in the field of geometry in the 19th century: on the one hand, in the past decade, deep learning has brought a revolution in data science and made possible many tasks previously thought to be beyond reach, including computer vision, playing Go, and protein folding. At the same time, we have a zoo of neural network architectures for various kinds of data, but few unifying principles. As in times past, it is difficult to understand the relations between different methods, inevitably resulting in the reinvention and re-branding of the same concepts. Geometric Deep Learning aims to bring geometric unification to deep learning in the spirit of the Erlangen Programme. Such an endeavour serves a dual purpose: it provides a common mathematical framework to study the most successful neural network architectures, such as CNNs, RNNs, GNNs, and Transformers, and gives a constructive procedure to incorporate prior knowledge into neural networks and build future architectures in a principled way. In this talk, I will give an overview of the mathematical principles underlying Geometric Deep Learning on grids, graphs, and manifolds, and show some of the exciting applications of these methods in the domains of healthcare, biology, and drug design.
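As a concrete taste of deep learning on graphs (one of the geometric domains the abstract mentions), here is a toy sketch of a single graph convolutional (GCN-style) layer with numpy; the graph, features, and weights are all made up for illustration, and a real model would learn the weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy undirected graph on 4 nodes (e.g. atoms in a tiny molecule)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 8))          # node feature vectors
W = rng.normal(size=(8, 16))         # layer weights (random here, learned in practice)

# Symmetric normalization with self-loops: A_hat = D^{-1/2} (A + I) D^{-1/2}
A_tilde = A + np.eye(4)
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))

# One layer: each node averages its neighbors' features, then a shared
# linear map and ReLU are applied -- permutation-equivariant by construction
H = np.maximum(A_hat @ X @ W, 0.0)
print(H.shape)                       # -> (4, 16)
```

Because the same weights `W` are shared across all nodes and aggregation only uses graph structure, relabeling the nodes permutes the output rows accordingly, which is the symmetry-based design principle the talk formalizes.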
Machine Learning Meets Single Cell Biology: Insights and Challenges
Dana Pe’er (Memorial Sloan Kettering Cancer Center)
Thursday, March 11, 14:45-15:45

Using machine learning to guide treatment suggestions
David Sontag (MIT)
Thursday, March 11, 16:00-17:00

Abstract: The next decade will see a shift in the focus of machine learning in healthcare from models for diagnosis and prognosis to models that directly guide treatment decisions. We show how to learn treatment policies from electronic medical records, doing a deep dive into our recent work on learning to recommend antibiotics for women with uncomplicated urinary tract infections (Kanjilal et al., Science Translational Medicine ’20). We then discuss bigger-picture questions for the field, such as how to do rigorous retrospective evaluations that compare fairly to existing clinical practice, and how to optimally design for clinician-AI interaction, including how to build trust and how to decide when to defer decisions to clinicians. We find that, relative to clinicians, our best models reduce inappropriate antibiotic prescriptions from 11.9% to 9.5% while at the same time using 50% fewer second-line antibiotics.