Plenary Speakers

Lauren Gardner

Tracking COVID-19 in Real-time: Challenges Faced and Lessons Learned

In response to the COVID-19 public health emergency, we developed an online interactive dashboard, first released publicly on January 22, 2020, hosted by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University. The dashboard visualizes and tracks the number of reported confirmed cases, deaths and recoveries for all countries affected by COVID-19. Further, all the data collected and displayed on the dashboard is made freely available in a GitHub repository, along with the live feature layers of the dashboard. The motivation behind the development of the dashboard was to provide researchers, public health authorities and the general public with a user-friendly tool to track the outbreak as it unfolds and, critically, with access to the data underlying it. The demand for such a service became evident in the first weeks the dashboard was online: by the end of February we were receiving over one billion requests for the dashboard feature layers every day, a figure that has since grown to between 3 and 4.5 billion requests per day. The dashboard has been featured on most major national and international media outlets (NYT, Washington Post, CNN, NPR, etc.), which either embed it directly in their websites or use it as the data source for in-house mapping efforts. Further, members of the public health community, including local and national governmental organizations, emergency response teams, public health agencies, and infectious disease researchers around the world, rely on the dashboard and its data to inform and plan their COVID-19 response. In this talk I will give a brief overview of the evolution of the dashboard, discuss some of the challenges we faced along the way, and suggest some methods by which disease tracking could be done better in the future.

Biography: Lauren Gardner is the Alton and Sandra Cleveland Professor in the Department of Civil and Systems Engineering at Johns Hopkins Whiting School of Engineering, holds a joint appointment in the Bloomberg School of Public Health, and is Director of the Center for Systems Science and Engineering. She is the creator of the interactive web-based dashboard being used by public health authorities, researchers, and the general public around the globe to track the outbreak of the novel coronavirus. Because of her expertise and leadership, Gardner was one of six Johns Hopkins experts who briefed congressional staff about the outbreak during a Capitol Hill event in early March 2020. She was awarded the 2022 Lasker-Bloomberg Public Service Award, America’s top medical research prize, for creating the COVID-19 dashboard that became the world’s most trusted source for reliable, real-time data about the pandemic. She was also named one of TIME’s 100 Most Influential People of 2020, included on the BBC’s 100 Women List 2020 (women who led change), and named one of Fast Company’s Most Creative People in Business for 2020. Her research expertise is in integrated transport and epidemiological modeling. Gardner has previously led related interdisciplinary research projects which utilize network optimization and mathematical modeling to progress the state of the art in global epidemiological risk assessment. Beyond mobility, her work focuses more holistically on virus diffusion as a function of climate, land use, mobility, and other contributing risk factors. On these topics Gardner has received research funding from organizations including NIH, NSF, NASA, and the CDC. Outcomes from her research projects have led to publications in leading interdisciplinary and infectious disease journals, presentations at international academic conferences, as well as invited seminars and keynote talks at universities and various events.
Gardner is also an invited member of multiple international professional committees, a reviewer for top-tier journals and grant funding organizations, and an invited participant in various Scientific Advisory Committees. She has also supervised more than 30 graduate students and post-docs, and teaches courses on network science and systems engineering.


Ken R. Duffy

Guess What? Guessing Random Additive Noise Decoding

Shannon’s 1948 opus established that the highest communication rate that a noisy channel can reliably support is achieved as error correcting codes become long. Since 1978 it has been known that Maximum Likelihood (ML) decoding of linear codes is NP-complete. Those results drove the existing paradigm of co-designing restricted classes of codebooks with code-specific methods that exploit code structure to enable computationally efficient approximate-ML decoding of high-redundancy codes. Contemporary applications are driving demand for low-latency communication, and realizing them requires shorter, higher-rate codes, obviating the computational complexity issues associated with long codes and motivating the possibility of revisiting the creation of practical, accurate universal decoders. In this talk, we introduce Guessing Random Additive Noise Decoding (GRAND), a class of universal hard- and soft-detection decoders suitable for use with any moderate-redundancy code of any length. Mathematically, GRAND offers a new approach to establishing error exponents, and introduces success exponents, the likelihood of correctly decoding when code rate is above capacity. Despite being first published in 2018, GRAND has already resulted in circuit designs and taped-out chips that demonstrate its suitability and efficiency in hardware. In this talk, we explain the theoretical rationale behind GRAND, recent developments, and future possibilities. The talk is based on joint work with Muriel Médard (MIT), with the circuits work performed in collaboration with Rabia Yazicigil (BU).
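The abstract does not spell out the decoding procedure, but the core idea of hard-detection GRAND can be sketched in a few lines: instead of examining the codebook, guess putative noise patterns from most to least likely, "subtract" each from the received word, and query whether the result is a codeword; the first hit is the maximum-likelihood decode on channels where noise likelihood decreases with Hamming weight. The sketch below is an illustrative toy (the [7,4] Hamming code and all parameter choices are my own, not from the talk or the GRAND hardware):

```python
# Minimal hard-detection GRAND sketch for a binary linear code (an
# illustrative toy, not the speakers' implementation). On a binary symmetric
# channel with crossover probability < 1/2, noise patterns become less
# likely as Hamming weight grows, so querying patterns in weight order
# returns the maximum-likelihood codeword on the first codebook hit.
import itertools
import numpy as np

# Parity-check matrix of the [7,4] Hamming code, used here only as a demo;
# GRAND is universal and works with any moderate-redundancy code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def is_codeword(word):
    """Codebook membership query: zero syndrome over GF(2)."""
    return not np.any(H @ word % 2)

def noise_patterns(n):
    """Yield length-n binary noise patterns in order of increasing Hamming
    weight (patterns of equal weight are equally likely on a BSC, so their
    relative order is arbitrary)."""
    for w in range(n + 1):
        for flips in itertools.combinations(range(n), w):
            z = np.zeros(n, dtype=int)
            z[list(flips)] = 1
            yield z

def grand_decode(y):
    """Guess noise patterns in likelihood order; first codeword hit is ML."""
    for z in noise_patterns(len(y)):
        candidate = (y + z) % 2          # invert the guessed additive noise
        if is_codeword(candidate):
            return candidate

# Flip one bit of a valid codeword and recover it.
c = np.array([1, 0, 1, 0, 1, 0, 1])      # a Hamming(7,4) codeword
y = c.copy()
y[2] ^= 1                                 # channel flips bit 2
print(grand_decode(y))                    # recovers c
```

Note that the worst-case number of queries is exponential in the block length, which is why GRAND targets the short, high-rate codes the abstract describes: for such codes the expected number of guesses before a hit is small.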

Biography: Since 2023, Ken R. Duffy has been a professor at Northeastern University with a joint appointment in the Department of Electrical & Computer Engineering and the Department of Mathematics. He was previously a professor at Maynooth University in Ireland, where he was the Director of the Hamilton Institute, an interdisciplinary research centre with 40 affiliated faculty, from 2016 to 2022. He is one of three co-Directors of the Science Foundation Ireland Centre for Research Training in Foundations of Data Science, which currently funds more than 110 PhD students. He obtained a B.A. in mathematics in 1996 and Ph.D. in probability theory in 2000, both awarded by Trinity College Dublin. He works in collaborative multi-disciplinary teams to design, analyse and realise algorithms using tools from probability, statistics, and machine learning. Algorithms he has developed have been implemented in digital circuits and in DNA. He is a co-founder of the Royal Statistical Society’s Applied Probability Section (2011), co-authored a cover article of Trends in Cell Biology (2012), and is a winner of a best paper award at the IEEE International Conference on Communications (2015), the best paper award from IEEE Transactions on Network Science and Engineering (2019), the best research demo award from COMSNETS (2022), and the best demo award from COMSNETS (2023).


Mahdi Soltanolkotabi

Foundations for feature learning via gradient descent

One of the key mysteries in modern learning is that a variety of models such as deep neural networks, when trained via (stochastic) gradient descent, can extract useful features and learn high-quality representations directly from data simultaneously with fitting the labels. This feature learning capability is also at the forefront of the recent success of a variety of contemporary paradigms such as transformer architectures, self-supervised and transfer learning. Despite a flurry of exciting activity over the past few years, existing theoretical results are often too crude and/or pessimistic to explain feature/representation learning in practical regimes of operation or serve as a guiding principle for practitioners. Indeed, the existing literature often requires unrealistic hyperparameter choices (e.g. very small step sizes, large initialization, or wide models). In this talk I will focus on demystifying this feature/representation learning phenomenon for a variety of problems spanning single index models, low-rank factorization, matrix reconstruction, and neural networks. Our results are based on an intriguing spectral bias phenomenon for gradient descent that puts the iterations on a particular trajectory towards solutions that are not only globally optimal but also generalize well, by simultaneously finding good features/representations of the data while fitting the labels. The proofs combine ideas from high-dimensional probability/statistics, optimization and nonlinear control to develop a precise analysis of model generalization along the trajectory of gradient descent. Time permitting, I will explain the implications of these theoretical results for more contemporary use cases including transfer learning, self-attention, prompt-tuning via transformers and simple self-supervised learning settings.
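The spectral bias the abstract alludes to can be seen in a toy low-rank factorization experiment (the setup below, including dimensions, step size, and initialization scale, is my own illustration, not the speaker's): gradient descent on an overparameterized factorization U Uᵀ, started from a small random initialization, fits the dominant spectral directions of the target first and converges to a solution whose effective rank matches the ground truth, even though the parameterization could represent any full-rank matrix.

```python
# Toy illustration of gradient descent's spectral bias in low-rank matrix
# factorization (my own demo setup, not the speaker's experiments).
# We minimize f(U) = ||U U^T - M||_F^2 over a full n x n factor U, where M
# is a rank-2 PSD target. With a small random initialization, gradient
# descent recovers M and the learned U U^T has effective rank 2.
import numpy as np

rng = np.random.default_rng(0)
n, r = 20, 2                              # ambient dimension, true rank
G = rng.standard_normal((n, r))
M = G @ G.T                               # rank-2 PSD ground truth

U = 1e-3 * rng.standard_normal((n, n))    # overparameterized, SMALL init
lr = 5e-4                                 # small constant step size

for _ in range(5000):
    R = U @ U.T - M                       # residual (symmetric)
    U -= lr * 4 * R @ U                   # gradient of ||U U^T - M||_F^2

# U U^T now closely matches M; components of U outside the column space of
# M decay during training, so the recovered matrix is effectively rank 2.
rel_err = np.linalg.norm(U @ U.T - M) / np.linalg.norm(M)
print(rel_err)                            # small relative error
```

Early in training the residual is approximately -M, so each column of U grows roughly like (I + 4·lr·M)ᵗ applied to the tiny initialization: directions with larger eigenvalues of M are amplified fastest, which is the spectral bias at work.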

Biography: Mahdi Soltanolkotabi is the director of the center on AI Foundations for the Sciences (AIF4S) at the University of Southern California. He is also an associate professor in the Departments of Electrical and Computer Engineering, Computer Science, and Industrial and Systems Engineering, where he holds an Andrew and Erna Viterbi Early Career Chair. Prior to joining USC, he completed his PhD in electrical engineering at Stanford in 2014. He was a postdoctoral researcher in the EECS department at UC Berkeley during the 2014-2015 academic year. Mahdi is the recipient of the Information Theory Society Best Paper Award, a Packard Fellowship in Science and Engineering, a Sloan Research Fellowship, an NSF CAREER award, an Air Force Office of Scientific Research Young Investigator award (AFOSR YIP), the Viterbi School of Engineering junior faculty research award, and faculty awards from Google and Amazon. His research focuses on developing the mathematical foundations of modern data science via characterizing the behavior and pitfalls of contemporary nonconvex learning and optimization algorithms, with applications in deep learning, large-scale distributed training, federated learning, computational imaging, and AI for scientific applications.