Past Keynote Speakers (CVIS 2018)
Colour and Consumer Cameras: The Good, the Bad, and the Ugly
Cameras are now used for many purposes beyond taking photographs. Example applications include remote medical diagnosis, crop monitoring, 3D reconstruction, document recognition, and many more. For such applications, it is desirable to have a camera act as a sensor that directly measures scene light. The problem, however, is that most commodity cameras apply a number of camera-specific processing steps to the captured image in order to produce visually pleasing photos. As a result, different cameras produce noticeably different colors when imaging the exact same scene. This is problematic for applications relying on color, because algorithms developed using images from one camera often will not work with images captured on another camera due to color differences. In this talk, I'll discuss the current state of affairs for color on commodity cameras, common incorrect assumptions made in the scientific literature regarding image color, and recent developments that are helping to improve the situation.

Michael S. Brown, Canada Research Chair in Computer Vision, York University

Michael S. Brown is a professor and Canada Research Chair in Computer Vision at York University in Toronto. Before joining York in 2016, he spent 14 years in Asia, working at the Hong Kong University of Science and Technology, Nanyang Technological University, and the National University of Singapore. Dr. Brown's research focuses on computer vision, image processing, and graphics. He routinely serves as an area chair for the major computer vision conferences (CVPR, ICCV, ECCV, and ACCV) and served as an associate editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) from 2011 to 2016. He is currently an associate editor for the International Journal of Computer Vision.
Objectives for Image Segmentation
Segmentation of images is an essential part of many computer vision problems. It is important for biomedical imaging, 3D reconstruction, motion analysis, scene understanding, autonomous driving, and more. In particularly simple cases, the problem may be trivially solved by independent processing of pixels (e.g. thresholding), but more typically segmentation is an ill-posed problem requiring additional constraints to compensate for data ambiguities. Such regularization constraints may represent prior knowledge about the shape, geometry, topology, structure, or physical properties of the segmented objects. This talk discusses standard and newer objectives (loss functions) for image segmentation, along with their optimization methods and limitations, and reviews their use in unsupervised and weakly supervised settings, including semantic CNN segmentation.

Yuri Boykov, University of Waterloo

Yuri Boykov is a Professor at the Cheriton School of Computer Science at the University of Waterloo. He is also an adjunct Professor of Computer Science at Western University. His research is concentrated in the area of computer vision and biomedical image analysis, with a focus on modeling and optimization for structured segmentation, restoration, registration, stereo, motion, model fitting, recognition, photo/video editing, and other data analysis problems. He is an editor for the International Journal of Computer Vision (IJCV). His work was listed among the 10 most influential papers in the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI Top Picks for 30 years). In 2017, Google Scholar listed his 2006 work on segmentation as a "classic paper in computer vision and pattern recognition". In 2011 he received the Helmholtz Prize (Test of Time Award) from the IEEE International Conference on Computer Vision. The Faculty of Science at the University of Western Ontario recognized his work with a Distinguished Research Professorship in 2014 and the Florence Bucke Prize in 2008.
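As a minimal illustration of the "particularly simple case" the abstract mentions, per-pixel thresholding can be sketched in a few lines. This is only a hedged sketch of the baseline idea, not anything from the talk itself; the function name and the threshold value are illustrative choices:

```python
import numpy as np

def threshold_segment(image, t):
    """Segment a grayscale image by independent per-pixel thresholding.

    Each pixel is labeled foreground (1) if its intensity exceeds t,
    and background (0) otherwise. Pixels are processed independently,
    with no spatial regularization -- which is exactly why this only
    works when the data is unambiguous.
    """
    return (image > t).astype(np.uint8)

# Toy example: a bright 2x2 square on a dark background.
img = np.zeros((6, 6))
img[2:4, 2:4] = 0.9
mask = threshold_segment(img, 0.5)
```

The ill-posed cases the talk addresses arise precisely when no such global threshold separates the objects, and regularization terms (on shape, boundary length, topology, etc.) must be added to the segmentation objective.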