Past Keynote Speakers (CVIS 2020)
Title: How Experimental Psychology Can Help Explainable Artificial Intelligence
Date and time: Wednesday, November 25th, 2020 at 9:00 am - 10:15 am (EST)
URL: Please register to receive the URL via Email
Abstract: Artificial intelligence powered by deep neural networks has reached a level of complexity where it can be difficult or impossible to express how a model makes its decisions. This black-box problem is especially concerning when the model makes decisions with consequences for human well-being. In response, an emerging field called explainable artificial intelligence (XAI) aims to increase the interpretability, fairness, and transparency of machine learning. In this talk I will describe how cognitive psychologists can contribute to XAI. The human mind is also a black box, and cognitive psychologists have over one hundred and fifty years of experience modeling it through experimentation. We aim to translate the methods and rigour of cognitive psychology to the study of artificial black boxes in the service of explainability. The outline of the talk is as follows: I will provide a review of XAI for non-experts, arguing that current methods have a blind spot that the experimental cognitive tradition can complement. I will then provide a framework for research in XAI and highlight exemplary cases of experimentation within XAI inspired by psychological science. Today's XAI methods rely on access to model architecture and parameters that most users, practitioners, and regulators do not have. I will end the talk by describing a psychology-inspired technique that uses response times (RTs). RTs are observable without privileged access to the model. Moreover, dynamic inference models performing conditional computation generate variable RTs for visual learning tasks, which depend on how the hierarchy of features learned by the network is used in prediction. I will show how this method can shed light on deep neural nets without opening the black box.
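To make the RT idea above concrete: a dynamic-inference (early-exit) network spends less computation on "easy" inputs, so its wall-clock response time is a signal observable entirely from the outside. The sketch below is an illustrative toy, not the speaker's actual method; `early_exit_model` and `response_time` are hypothetical stand-ins that simulate conditional computation with timed work.

```python
import time

def early_exit_model(x):
    """Toy stand-in for a dynamic-inference network: 'easy' inputs
    exit after a few layers, 'hard' inputs engage more of the hierarchy."""
    depth = 2 if x < 0.5 else 8   # conditional computation depth
    for _ in range(depth):
        time.sleep(0.001)          # simulate per-layer work
    return int(x >= 0.5)

def response_time(model, x, trials=5):
    """Median wall-clock latency -- measurable without model internals."""
    times = []
    for _ in range(trials):
        t0 = time.perf_counter()
        model(x)
        times.append(time.perf_counter() - t0)
    times.sort()
    return times[len(times) // 2]

easy_rt = response_time(early_exit_model, 0.1)
hard_rt = response_time(early_exit_model, 0.9)
# Hard inputs take measurably longer: the RT distribution reveals how
# much computation different inputs trigger, with no privileged access.
```

The black-box observer sees only inputs and latencies, which is exactly the position of a regulator or end user without access to weights.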
Graham Taylor
Associate Professor of Engineering at the University of Guelph
Faculty Member, Vector Institute
Canada CIFAR AI Research Chair
Bio: Graham Taylor is a Canada Research Chair and Associate Professor of Engineering at the University of Guelph. He directs the University of Guelph Centre for Advancing Responsible and Ethical AI and is a member of the Vector Institute for AI. He has co-organized the annual CIFAR Deep Learning Summer School and trained more than 60 students and researchers on AI-related projects. In 2016 he was named one of 18 inaugural CIFAR Azrieli Global Scholars. In 2018 he was honoured as one of Canada's Top 40 Under 40. In 2019 he was named a Canada CIFAR AI Chair. He spent 2018-2019 as a Visiting Faculty member at Google Brain, Montreal. Graham co-founded Kindred, which was featured at number 29 on MIT Technology Review's 2017 list of smartest companies in the world. He is the Academic Director of NextAI, a non-profit accelerator for AI-focused entrepreneurs.
Title: Radiomics and Radio-genomics: Opportunities for Precision Medicine
Date and time: Thursday, November 26th, 2020 at 9:00 am - 10:15 am (EST)
URL: Please register to receive the URL via Email
Abstract: In this talk, Dr. Tiwari will focus on her lab's recent efforts in developing radiomic (extracting computerized sub-visual features from radiologic imaging), radiogenomic (identifying radiologic features associated with molecular phenotypes), and radiopathomic (identifying radiologic features associated with pathologic phenotypes) techniques to capture insights into the underlying tumor biology as observed on non-invasive routine imaging. She will focus on applications of this work for predicting disease outcome, recurrence, progression, and response to therapy, specifically in the context of brain tumors. She will also discuss current efforts in developing new radiomic features for post-treatment evaluation and for predicting response to chemo-radiation treatment. Dr. Tiwari will conclude her talk with a discussion of some of the translational aspects of her work from a clinical perspective.
Pallavi Tiwari
Assistant Professor, Department of Biomedical Engineering, School of Medicine
Case Center for Imaging Research, Case Western Reserve University, USA
Bio: Dr. Pallavi Tiwari is an Assistant Professor of Biomedical Engineering and the director of the Brain Image Computing Laboratory at Case Western Reserve University. She is also a member of the Case Comprehensive Cancer Center. Her research interests lie in machine learning, data mining, and image analysis for personalized-medicine solutions in oncology and neurological disorders. Her research has so far resulted in over 50 peer-reviewed publications, 50 peer-reviewed abstracts, and 9 patents (3 issued, 6 pending). Dr. Tiwari has received several scientific awards, most notably being named one of 100 women achievers by the Government of India for making a positive impact in the field of Science and Innovation. In 2018, she was selected as one of Crain's Cleveland Business Forty Under 40. In 2020, she was awarded the J&J Women in STEM (WiSTEM2D) Scholar Award in Technology. Her research is funded by the National Cancer Institute, the Department of Defense, Johnson & Johnson, the V Foundation (Translational Award), the Dana Foundation, the State of Ohio, and the Case Comprehensive Cancer Center.
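For readers new to the term, "radiomic features" are quantitative descriptors computed from a region of interest in a medical image. The sketch below computes a minimal first-order subset (mean, variance, histogram entropy) from a flat list of voxel intensities; the feature set and 8-bin histogram are illustrative assumptions, not Dr. Tiwari's actual pipeline, which also uses texture, shape, and other higher-order features.

```python
import math
from statistics import mean, pvariance

def first_order_features(roi):
    """First-order radiomic features of a tumor region of interest (ROI),
    given as a flat list of voxel intensities. Illustrative subset only."""
    mu = mean(roi)
    var = pvariance(roi, mu)
    # Shannon entropy over a coarse 8-bin intensity histogram
    lo, hi = min(roi), max(roi)
    width = (hi - lo) / 8 or 1          # guard against a constant ROI
    counts = [0] * 8
    for v in roi:
        counts[min(int((v - lo) / width), 7)] += 1
    n = len(roi)
    entropy = -sum(c / n * math.log2(c / n) for c in counts if c)
    return {"mean": mu, "variance": var, "entropy": entropy}

# Hypothetical ROI with three intensity clusters (e.g. necrosis,
# enhancing tumor, edema) -- entropy captures this heterogeneity.
feats = first_order_features([10, 12, 11, 40, 42, 41, 90, 95, 88])
```

Feature vectors like this, computed per lesion, are what downstream models correlate with molecular or pathologic phenotypes.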
Title: Probabilistic Object Detection for Autonomous Driving: Moving Beyond Detection Accuracy
Date and time: Thursday, November 26th, 2020 at 1:00 pm - 2:15 pm (EST)
URL: Please register to receive the URL via Email
Abstract: Modern object detection has relentlessly pursued perfection on a single metric, mean average precision, with impressive gains in performance across datasets and sensor types over the last few years. I will discuss our multiple contributions in this domain, particularly in 3D object detection with monocular, stereo, and LIDAR/camera fusion. Our work has regularly topped the KITTI Vision Benchmark and emphasizes the value of attention focused on object shape to enhance localization and extent estimation. Despite the strong progress in this domain, current network outputs are primarily deterministic, providing little visibility beyond class confidence as to the probability that a detection is accurate. This makes current object detectors a black box for downstream processes such as tracking and prediction, and can lead to overconfidence in low-quality detections. To help address this challenge, I will discuss our recent work on probabilistic object detectors (PODs) on two fronts. First, I will describe efforts to place the evaluation of PODs on a secure footing by introducing proper scoring rules with both local and global extent that can determine whether a predictive distribution is both well calibrated and discriminative. Then, I will discuss our work on BayesOD, a novel probabilistic object detector that exhibits strong output distribution prediction capabilities and outperforms existing PODs in terms of calibration and sharpness.
Steven L. Waslander, PhD
Associate Professor, Institute for Aerospace Studies
Director, Toronto Robotics and AI Laboratory
University of Toronto
Bio: Prof. Steven Waslander is a leading authority on autonomous aerial and ground vehicles, including multirotor drones and autonomous driving vehicles, Simultaneous Localization and Mapping (SLAM), and multi-vehicle systems. He received his B.Sc.E. in 1998 from Queen's University, his M.S. in 2002, and his Ph.D. in 2007, both from Stanford University in Aeronautics and Astronautics, where as a graduate student he created the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC), the world's most capable outdoor multi-vehicle quadrotor platform at the time. He was a Control Systems Analyst for Pratt & Whitney Canada from 1998 to 2001. He was recruited to Waterloo from Stanford in 2008, where he founded and directed the Waterloo Autonomous Vehicle Laboratory (WAVELab), extending the state of the art in autonomous drones and autonomous driving through advances in localization and mapping, object detection and tracking, integrated planning and control methods, and multi-robot coordination. In 2018, he joined the University of Toronto Institute for Aerospace Studies (UTIAS) and founded the Toronto Robotics and Artificial Intelligence Laboratory (TRAILab). Prof. Waslander's innovations were recognized by the Ontario Centres of Excellence Mind to Market Award for the best industry/academia collaboration (2012, with Aeryon Labs), best paper and best poster awards at the Conference on Computer and Robot Vision (2018), and two Outstanding Performance Awards and two Distinguished Performance Awards while at the University of Waterloo. His work on autonomous vehicles has resulted in the Autonomoose, the first autonomous vehicle created at a Canadian university to drive on public roads. His insights into autonomous driving have been featured in the Globe and Mail, the Toronto Star, the National Post, the Rick Mercer Report, and on national CBC Radio. He is an Associate Editor of the IEEE Transactions on Aerospace and Electronic Systems, and has served as General Chair of the International Autonomous Robot Racing Competition (IARRC 2012-15), Program Chair of the 13th and 14th Conferences on Computer and Robot Vision (CRV 2016-17), and Competitions Chair of the International Conference on Intelligent Robots and Systems (IROS 2017).
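For readers unfamiliar with proper scoring rules, the sketch below evaluates three hypothetical predictive distributions for a single bounding-box coordinate under the Gaussian negative log-likelihood, a standard proper scoring rule that rewards both calibration and sharpness. The numbers are invented for illustration; this is not the specific local/global rule proposed in the talk, only the general mechanism it builds on.

```python
import math

def gaussian_nll(mu, sigma, y):
    """Negative log-likelihood of observation y under N(mu, sigma^2):
    a proper scoring rule -- minimized in expectation only by the
    true predictive distribution (lower is better)."""
    return 0.5 * math.log(2 * math.pi * sigma**2) + (y - mu)**2 / (2 * sigma**2)

# Ground-truth box edge (in pixels) and three predictive distributions:
y = 100.0
sharp_accurate  = gaussian_nll(101.0, 2.0, y)    # close mean, tight variance
broad_underconf = gaussian_nll(101.0, 50.0, y)   # close mean, huge variance
sharp_overconf  = gaussian_nll(110.0, 2.0, y)    # wrong mean, tight variance

# The sharp, accurate predictor scores best; overconfidence is penalized
# quadratically via the error term, vagueness via the log-variance term.
```

This is the failure mode the abstract warns about: a deterministic detector behaves like the overconfident predictor, and only a scoring rule of this kind exposes it.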
Title: Real-time Sport Analytics from Broadcast Feeds
Date and time: Friday, November 27th, 2020 at 1:00 pm - 2:15 pm (EST)
URL: Please register to receive the URL via Email
Abstract: Sports analytics is about observing, understanding, and describing the game in an intelligent manner. In practice, this means designing a fully automated, robust, end-to-end pipeline: from visual input, to player and group activities, to player and team evaluation, to planning. Despite major advancements in computer vision and machine learning, sports analytics is still in its infancy and relies mostly on manually collected data. This talk focuses on the use of broadcast feeds for sports analytics. It covers the components of a vision system for data acquisition, provides examples of how Sportlogiq captures data from broadcast videos, and finally describes the challenges of deploying vision systems at scale and what Sportlogiq has learned by processing games from more than 20 different soccer leagues.
Mehrsan Javan, PhD, MBA
Chief Technology Officer at Sportlogiq
Adjunct Professor, Department of Electrical and Computer Engineering, McGill University
Bio: Mehrsan Javan is the co-founder and CTO of Sportlogiq. He holds a PhD in electrical engineering (McGill University, 2014) and an MBA (2017), with over a decade of experience building intelligent systems. His passion is new technologies, with a particular interest in intelligent systems and their positive impacts on our daily lives. He is also an adjunct faculty member in the ECE Department at McGill University, has published numerous research articles in the fields of computer vision, machine learning, and sports analytics, and holds several issued and pending patents. His main research interest is explainability and causal inference in artificial intelligence.
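The end-to-end pipeline described in the abstract can be pictured as a chain of stages, each enriching a shared state: frames in, tracks, events, and finally per-player metrics out. The stage names and data shapes below are hypothetical placeholders for illustration, not Sportlogiq's system.

```python
from typing import Callable, Dict, List

Stage = Callable[[Dict], Dict]

def detect_and_track(state: Dict) -> Dict:
    # placeholder: locate players in each frame, link them over time
    state["tracks"] = [{"player": i, "positions": []} for i in range(2)]
    return state

def recognize_events(state: Dict) -> Dict:
    # placeholder: classify passes, shots, zone entries from track geometry
    state["events"] = [{"type": "pass", "player": 0}]
    return state

def evaluate_players(state: Dict) -> Dict:
    # placeholder: aggregate recognized events into per-player metrics
    metrics: Dict[int, int] = {}
    for e in state["events"]:
        metrics[e["player"]] = metrics.get(e["player"], 0) + 1
    state["metrics"] = metrics
    return state

def run_pipeline(frames: List, stages: List[Stage]) -> Dict:
    """Run each stage in order; every stage reads and extends the state."""
    state: Dict = {"frames": frames}
    for stage in stages:
        state = stage(state)
    return state

result = run_pipeline([], [detect_and_track, recognize_events, evaluate_players])
```

Structuring the system as composable stages is what makes it possible to swap a detector or event model without touching the rest, a key concern when deploying across 20+ leagues with different broadcast styles.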