|9:45||Keynote Speaker - Kristen Grauman|
|11:00||Spotlight Presentations|
|11:45||Invited speaker - Ariel Shamir|
|12:15||Invited speaker - Kavita Bala|
|12:45||Lunch sponsored by Evolv Technology|
|2:15||Sponsor talk - Brendan McCord|
|2:30||Invited speaker - Sanja Fidler|
|3:00||Invited speaker - Kotaro Hara|
|5:15||Best Paper Award (sponsored by Evolv Technology) & Closing Comments|
- M Sameki, M Gentil, D Gurari, E Saraee, E Hasenberg, J Wong and M Betke. CrowdTrack: Interactive Tracking of Cells in Microscopy Image Sequences with Crowdsourcing Support
- Shay Sheinfeld, Yotam Gingold and Ariel Shamir. Video Summarization using Crowdsourced Causality Graphs
- Apeksha Kumavat and Alexander J. Quinn. Show Me More! Worker-guided Privacy Filtering for Crowd Video Annotation
- Sneha Mehta, Chris North and Kurt Luther. An Exploratory Study of Human Performance in Image Geolocation Tasks
- Darius Lam and Genevieve Patterson. Kaizen: the Crowd Pathologist
- Abhisek Dash, Sujoy Chatterjee, Tripti Prasad and Malay Bhattacharyya. Image Clustering without Ground Truth
Encore Track Papers (previously appearing at other venues)
- Suyog Jain and Kristen Grauman. Active Image Segmentation Propagation
- Wai-Tat Fu, Huaming Rao and Shih-Wen Huang. Leveraging Human Computations to Improve Schematization of Spatial Relations from Imagery
- Gunnar A. Sigurdsson, Olga Russakovsky, Ali Farhadi, Ivan Laptev and Abhinav Gupta. Much Ado About Time: Exhaustive Annotation of Temporal Data
- Ting-Hao (Kenneth) Huang et al. Visual Storytelling
University of Texas at Austin
"My research interests are in computer vision and machine learning. In general, the goal of computer vision is to develop the algorithms and representations that will allow a computer to autonomously analyze visual information. I am especially interested in learning and recognizing visual object categories, and scalable methods for content-based retrieval and visual search."
Crowdsourcing for Material Recognition in the Wild
Human beings are good at perceiving subtle distinctions in material appearance (e.g., is this fabric silk or velvet?), but computers lag far behind in the task of material recognition. I will describe how we collected several large-scale, crowdsourced datasets of materials and reflectance comparisons from consumer photographs, and how we used these datasets to achieve state-of-the-art results in material recognition, intrinsic image decomposition, and material-based image browsing and design.
Kavita Bala is a Professor in the Computer Science Department and Program of Computer Graphics at Cornell University. Prof. Bala specializes in computer graphics and computer vision, leading research projects in material perception, recognition, and acquisition; realistic rendering; perception; and computational lighting design.
Efi Arazi School of Computer Science, The Interdisciplinary Center (IDC) Herzliya, Israel
Passive Human Computation
Image and video processing have come a long way in recent years with automatic algorithms. However, some simple tasks that even a child can perform still pose a challenge for automatic algorithms, especially when they involve semantic understanding. People often learn by observing others. In this talk I will present examples where algorithms for image and video processing learn semantic understanding by observing human behavior. This can be seen as a form of passive human computation: humans must participate in the computation process, but they are not actively computing anything.
Ariel Shamir is interested in geometric modeling, computer graphics, fabrication, visualization, and machine learning. Beyond his professorial work, he has been affiliated with Mitsubishi Electric Research Labs, Disney Research, NASA, Lawrence Livermore National Labs, and a number of high-tech companies in Israel. Prof. Shamir has been part of many seminal projects in computer vision and computational photography, such as Seam Carving, Sketch2Photo, and 3-Sweep.
University of Toronto
"My work is in the area of Computer Vision. My main research interests are 2D and 3D object detection, particularly scalable multi-class detection, object segmentation and image labeling, and (3D) scene understanding. I am also interested in the interplay between language and vision: generating sentential descriptions about complex scenes, as well as using textual descriptions for better scene parsing (e.g., in the scenario of the human-robot interaction)."
University of Maryland, College Park
Using Crowdsourcing, Computer Vision, and Google Street View to Collect Sidewalk Accessibility Data
Poorly maintained sidewalks pose considerable accessibility challenges for people with mobility impairments. Despite comprehensive civil rights legislation such as the Americans with Disabilities Act, many city streets and sidewalks in the U.S. remain inaccessible. The problem is not just that sidewalk accessibility fundamentally affects where and how people travel in cities, but also that there are few, if any, mechanisms to determine accessible areas of a city a priori.
In this talk, I will introduce a scalable data collection method for acquiring street-level accessibility information using a combination of crowdsourcing, computer vision, an automatic workflow controller, and Google Street View. Our work shows that by combining crowdsourcing and automated methods, we can increase data collection efficiency by 13% compared to a fully manual approach, without sacrificing accuracy. Our overarching goal is to transform the ways in which accessibility information is collected and visualized for every sidewalk, street, and building facade in the world.
"My research focuses on design of systems to collect and deliver information about accessibility of streets and sidewalks that help people with mobility impairments. Using geo-tagged street-level imagery like Google Street View as data source, we locate sidewalk accessibility problems with crowdsourcing and computer vision technologies."
Director at Evolv Technology
AI + IQ: Building Best of Breed Security Systems
Evolv, a Boston-based startup with a mission to keep people safe, is developing a new real-time threat detection and prevention platform called Mosaiq that utilizes the judgment of distributed human "Agents" to augment and train state-of-the-art deep neural networks. We'll review the challenges in dividing tasks between AI and IQ, as well as in obtaining high-quality real-time human judgments. Mr. McCord and Evolv Principal Software Engineer Mr. Brandon Wolfe will share lessons learned from product development and pose open-ended technical problems to the audience.
Brendan McCord is Director at Evolv Technology, a startup backed by Bill Gates, General Catalyst, and Lux Capital that comprises a multi-disciplinary team of experts. Evolv's goal is to build the most advanced threat detection system the world has ever known by combining powerful sensors, AI, and human IQ. Mosaiq was built for today's increasingly dangerous world, and it introduces a proactive approach to security that shatters the status quo.