Keynote Speakers

Unfortunately, Eric Cheng is not able to join us.


    Edward H. Adelson (MIT)

    Living in Shadeworld

    Shadeworld is an imaginary place populated with opaque surfaces that are smoothly shaded. The real world is more complex, but Shadeworld contains some of the key properties that make images compelling. Scanning electron microscopy (SEM) images live in Shadeworld even though the math and physics are entirely unlike optical shading. The human visual system loves this kind of image, and microscopists have invented various other methods (e.g., freeze fracture and Nomarski) to provide pseudo-shaded images that are attractive and informative. Normally, Shadeworld is most relevant to medium-sized things. Very big things, like galaxies, and very small things, like bacteria, don’t usually look this way. Paper is a human-scale example of a matte material, but when viewed through an optical microscope, the truth is revealed: the fibers are shiny and clear, not white. Shadeworld is a convenient fiction nonetheless. The opacity and occlusion of Shadeworld set up certain image statistics. A plenoptic camera ignores these statistics and is therefore very inefficient (in bits captured per pixel). Other 3D cameras, like stereo cameras or coded aperture cameras, are better tuned to the statistics, but have other costs. Getting 3D data can be tough in the real world, due to the optical complexity of real materials, but it is easy in Shadeworld; in particular, photometric stereo works great. Wouldn’t it be nice if you could force real-world surfaces into Shadeworld? Our lab has developed a system called “GelSight” which does just that. A slab of clear elastomer covered with a reflective membrane is pressed against the surface of interest, and multiple shaded images of the microstructure are captured. This system lets us make beautiful SEM-like pictures of challenging subjects like human skin; we also get 3D topography. We are limited to optical resolution, but we have some advantages: our process takes seconds rather than hours, and the subject’s skin stays alive.
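
    For readers curious how photometric stereo exploits Shadeworld's assumptions, the sketch below is a minimal Lambertian formulation, not the GelSight pipeline itself: it recovers a per-pixel albedo and surface normal from a stack of images taken under known lighting directions. The function and variable names are illustrative.

```python
# Minimal Lambertian photometric stereo sketch (not the GelSight pipeline):
# given K images of the same surface lit from K known directions, recover a
# per-pixel albedo and surface normal by least squares. Assumes matte shading
# with no shadows or specular highlights -- the "Shadeworld" idealization.
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (K, H, W) grayscale stack; light_dirs: (K, 3) unit vectors."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                    # (K, H*W) intensities
    L = np.asarray(light_dirs, dtype=float)      # (K, 3) lighting matrix
    # Solve I = L @ G for G = albedo * normal at every pixel simultaneously.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)    # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)           # per-pixel reflectance
    normals = G / np.maximum(albedo, 1e-8)       # unit surface normals
    return albedo.reshape(H, W), normals.reshape(3, H, W)

# Toy usage with synthetic data: a flat patch tilted toward +x.
if __name__ == "__main__":
    n = np.array([0.3, 0.0, 0.95]); n /= np.linalg.norm(n)
    L = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 0.87], [0.0, 0.5, 0.87]])
    imgs = np.clip(L @ n, 0, None)[:, None, None] * np.ones((3, 4, 4))
    rho, N = photometric_stereo(imgs, L)
    print(np.round(N[:, 0, 0], 3))   # approximately the true normal n
```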

    Edward Adelson is the John and Dorothy Wilson Professor of Vision Science at MIT, in the Department of Brain and Cognitive Sciences, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). He was elected to the National Academy of Sciences in 2007. Prof. Adelson has over 100 publications on topics in human vision, machine vision, computer graphics, neuroscience, and computational photography. He is well known for contributions to multiscale image representation (such as the Laplacian pyramid) and basic concepts in early vision such as steerable filters and motion energy models. His work on layered representations for motion won the IEEE Computer Society’s Longuet-Higgins Award (2005). Prof. Adelson introduced the plenoptic function, and built the first plenoptic camera. His work on the neural mechanisms of motion perception was honored with the Rank Prize in Optoelectronics (1992). He currently works on perceptual and computational aspects of material perception including the perception of gloss, shading, and shape. He has produced some well known illusions such as the Checker-Shadow Illusion. He has recently developed a new elastomeric technology for tactile sensing, called GelSight, which converts touch to images, and which opens up new possibilities in sensing 3D microscale topography.


    Nader Engheta (University of Pennsylvania)

    Seeing the Unseen: From Polarization-sensitive Eyes in Nature to Man-made Polarization Cameras

    Certain animal species have visual systems that are sensitive to light’s polarization – a capability that human eyes lack. Species with polarization vision can detect this characteristic of image-forming light and extract its information. Polarization is an important feature of optical signals: it is affected by surface shapes, materials, local curvature, and the relative locations of sources and objects, and thus it carries useful information about the observed scene and objects. What can one learn from this ability of polarization sensing and detection, which has evolved in certain biological visual systems? Understanding the biophysical mechanism behind polarization vision and reverse-engineering its functionality leads to exciting novel methods and techniques in sensing and imaging with various applications. Inspired by the features of polarization-sensitive visual systems in nature, our group has been developing man-made, non-invasive imaging, sensing, visualization, and display schemes that have shown promising outcomes, with useful applications in system design in the optical and microwave domains. These techniques provide better target detection, enhanced visibility in otherwise low-contrast conditions, longer detection range in scattering media, polarization-sensitive adaptation to changing environments, detection of surface deformation and variation, “seeing” objects in shadows, and other novel capabilities. In this talk, I will discuss several optical aspects of the biophysical mechanisms of polarization vision and present sample results of our bio-inspired imaging methodologies.
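
    As a concrete illustration of what a man-made polarization camera measures, the sketch below uses a generic textbook formulation, not the specific methods developed in the speaker's group: it estimates the linear Stokes parameters and the degree and angle of linear polarization from four images captured through a linear polarizer at 0°, 45°, 90°, and 135°.

```python
# A common starting point for polarization imaging (illustrative only):
# estimate the linear Stokes parameters and the degree/angle of linear
# polarization from four images taken through a linear polarizer at
# 0, 45, 90, and 135 degrees. Low-contrast scenes often become far more
# legible when displayed as DoLP or AoLP maps.
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Each input is an (H, W) intensity image; returns S0, S1, S2."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)    # total intensity
    s1 = i0 - i90                          # horizontal vs. vertical component
    s2 = i45 - i135                        # +45 vs. -45 degree component
    return s0, s1, s2

def dolp_aolp(s0, s1, s2):
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-8)  # degree of linear pol.
    aolp = 0.5 * np.arctan2(s2, s1)                        # angle of linear pol.
    return dolp, aolp
```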

    Winner of the 2012 IEEE Electromagnetics Award, Nader Engheta is the H. Nedwill Ramsey Professor at the University of Pennsylvania with affiliations in the Departments of Electrical and Systems Engineering, Bioengineering, and Physics and Astronomy. He received his B.S. degree from the University of Tehran, and his M.S. and Ph.D. degrees from Caltech. Selected as one of Scientific American magazine’s 50 Leaders in Science and Technology in 2006 for developing the concept of optical lumped nanocircuits, he is a Guggenheim Fellow, an IEEE Third Millennium Medalist, a Fellow of IEEE, the American Physical Society (APS), the Optical Society of America (OSA), the American Association for the Advancement of Science (AAAS), and SPIE-The International Society for Optical Engineering, and the recipient of the 2008 George H. Heilmeier Award for Excellence in Research, the Fulbright Naples Chair Award, an NSF Presidential Young Investigator award, the UPS Foundation Distinguished Educator Term Chair, and several teaching awards, including the Christian F. and Mary R. Lindback Foundation Award, the S. Reid Warren, Jr. Award, and the W. M. Keck Foundation Award. His current research activities span a broad range of areas including metamaterials and plasmonics, nano-optics and nanophotonics, biologically inspired sensing and imaging, miniaturized antennas and nanoantennas, the physics and reverse-engineering of polarization vision in nature, the mathematics of fractional operators, and the physics of fields and waves phenomena. He co-edited the book Metamaterials: Physics and Engineering Explorations (Wiley-IEEE Press, 2006). He was the Chair of the Gordon Research Conference on Plasmonics in June 2012.


    Hany Farid (Dartmouth College)

    Photo Forensics

    From tabloid magazines to mainstream media outlets, political campaigns, courtrooms, and the photo hoaxes that land in our email, doctored photographs are appearing with growing frequency and sophistication. The resulting lack of trust is affecting law enforcement, national security, the media, e-commerce, and more. The field of digital photo forensics has emerged to help restore some trust in digital photographs. In the absence of any digital watermark or signature, we work on the assumption that most forms of tampering will disturb some statistical or geometric property of an image. To the extent that these perturbations can be quantified and detected, they can be used to invalidate a photo. I will describe forensic techniques that can determine whether the lighting, cast shadows, and attached shadows in a single photo are physically plausible. These techniques operate by specifying a collection of strong and weak constraints on the position of the light source. The resulting constraints are cast as a linear programming problem and hence lend themselves to an efficient solution.
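
    To illustrate how lighting constraints can be cast as a linear program, the sketch below is a simplified feasibility check under assumed half-plane constraints, not the exact formulation used in the talk: each shadow or shading observation is modeled as a linear inequality on the projected 2-D light-source position, and the photo is flagged as inconsistent when no position satisfies them all.

```python
# Illustrative sketch: each cast/attached shadow or shading cue contributes a
# linear (half-plane) constraint on the projected 2-D light-source location.
# Physical consistency of the photo then reduces to whether the constraint
# set is feasible, which a linear program decides efficiently.
import numpy as np
from scipy.optimize import linprog

def light_constraints_feasible(A, b):
    """Return True if some light position x satisfies A @ x <= b.

    A: (m, 2) array, one row per constraint derived from a shadow or
       shading observation; b: (m,) right-hand sides.
    """
    # Any objective works for a pure feasibility test; minimize 0.
    res = linprog(c=[0.0, 0.0], A_ub=A, b_ub=b,
                  bounds=[(None, None)] * 2, method="highs")
    return res.status == 0   # 0 = feasible (optimal found), 2 = infeasible

# Toy example: three mutually consistent constraints, then a contradictory one.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])
b = np.array([10.0, -2.0, 5.0])            # 2 <= x <= 10, y <= 5 : feasible
print(light_constraints_feasible(A, b))    # True
print(light_constraints_feasible(np.vstack([A, [1.0, 0.0]]),
                                 np.append(b, -20.0)))   # adds x <= -20 : False
```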

    Hany Farid received his undergraduate degree in Computer Science and Applied Mathematics from the University of Rochester in 1989. He received his Ph.D. in Computer Science from the University of Pennsylvania in 1997. Following a two-year post-doctoral position in Brain and Cognitive Sciences at MIT, he joined the faculty at Dartmouth in 1999, where he is currently a Professor of Computer Science. Hany is also the Chief Technology Officer and co-founder of Fourandsix Technologies, Inc. He is the recipient of an NSF CAREER award, a Sloan Fellowship, and a Guggenheim Fellowship.


    Marc Levoy (Stanford University)

    What Google Glass Means for the Future of Photography

    Although head-mounted cameras (and displays) are not new, Google Glass has the potential to make these devices commonplace. This has implications for the practice, art, and uses of photography. So what's different about doing photography with Glass? First, Glass doesn't work like a conventional camera; it's hands-free, point-of-view, always available, and instantly triggerable. Second, Glass facilitates different uses than a conventional camera: recording documents, making visual to-do lists, logging your life, and swapping eyes with other Glass users. Third, Glass will be an open platform, unlike most cameras. This is not easy, because Glass is a heterogeneous computing platform, with multiple processors having different performance, efficiency, and programmability. The challenge is to invent software abstractions that allow control over the camera as well as access to these specialized processors. Finally, devices like Glass that are head-mounted and perform computational photography in real time have the potential to give wearers "superhero vision", like seeing in the dark, or magnifying subtle motion or changes. If such devices can also perform computer vision in real time and are connected to the cloud, then they can do face recognition, live language translation, and information recall. The hard part is not imagining these capabilities, but deciding which ones are feasible, useful, and socially acceptable.

    Marc Levoy is the VMware Founders Professor of Computer Science at Stanford University, with a joint appointment in the Department of Electrical Engineering. He received degrees in Architecture from Cornell University in 1976 and 1978 and a PhD in Computer Science from the University of North Carolina in 1989. In previous lives he worked on computer-assisted cartoon animation (1970s), volume rendering (1980s), and 3D scanning (1990s). His current interests include light field sensing and display, computational photography, and computational microscopy. At Stanford he teaches computer graphics, photography, and the science of art. Outside of academia, Levoy co-designed the Google book scanner, launched Google's Street View project, and currently works on Google's Project Glass. He is an NSF Presidential Young Investigator, the 1996 winner of the SIGGRAPH Achievement Award, and a Fellow of the ACM.


    Austin Roorda (University of California, Berkeley)

    How the Eye Sees a Stable and Moving World

    Human eyes, even while fixating, are in constant motion. Even though the range of this motion can be much larger than the smallest features that we can resolve, we are unaware of it. At the same time, we remain exquisitely sensitive to actual motion of objects in the world with an ability to detect motion that is smaller than a single foveal cone.  Results from experiments and simulations suggest that this adaptation is not a way to cope with uncontrolled eye motion, but is actually a mechanism intended to optimize spatial vision.

    Austin Roorda received his Ph.D. in Vision Science & Physics from the University of Waterloo, Canada in 1996. In his postdoctoral appointment at the University of Rochester, he used the world's first adaptive optics ophthalmoscope to measure the properties of human photoreceptors, which included generating the first-ever maps of the trichromatic cone mosaic. From 1998 to 2004, he was at the University of Houston College of Optometry, where he designed and built the first Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO). Since January 2005, he’s been at the UC Berkeley School of Optometry, where he is the current chair of the Vision Science Graduate Program. He is a Fellow of the Optical Society of America and of the Association for Research in Vision and Ophthalmology and is a recipient of the Glenn A. Fry Award, the highest research honor from the American Academy of Optometry. His current research involves the development and use of adaptive optics and other advanced technology for clinical and basic applications.


    J. Kim Vandiver (MIT)

    Stopping Time: The Life Work of Prof. Harold "Doc" Edgerton

    Doc Edgerton is the father of modern high-speed photography. Although stroboscopic phenomena have been used for centuries and high-speed photographs using spark discharges date back to 1859, Harold Edgerton introduced the modern electronic circuits and controls that revolutionized stop-motion photography, and he perfected high-quality, short-duration xenon flash tubes. His iconic photographs of milk drops, bullets and apples, and people doing everyday things changed the way we see the world. His apparatus took the first deep-sea photographs and evolved into a decades-long engagement in marine archaeology with Jacques Cousteau and others. The Army Air Forces used his 50,000 watt-second strobes to take reconnaissance photographs before D-Day in 1944. His side-scan sonar helped find the Civil War ironclad Monitor. This and more...

    Prof. J. Kim Vandiver is MIT’s Dean for Undergraduate Research and the Director of the Edgerton Center. He began his association with “Doc” Edgerton in 1972, first as a student in “Strobe Project Lab” and then as Edgerton’s teaching assistant in 1972-73. While a TA, he set up a high-speed color schlieren system at Strobe Alley and, with “Doc”, published many of the resulting photos. After Edgerton’s death in 1990, Prof. Vandiver, in collaboration with Prof. Paul Penfield (then head of the Department of Electrical Engineering and Computer Science), founded the Edgerton Center at MIT, which provides resources for MIT students engaged in hands-on educational projects. The Center also runs a K-12 outreach program for local teachers and their classrooms and is the home of MIT’s D-Lab, which engages MIT students in humanitarian engineering projects in the developing world: http://web.mit.edu/edgerton/. Over the last five years, Prof. Vandiver has worked with the MIT Museum and the MIT Archives to assemble the photographs, film, and notebooks of Harold Edgerton into a searchable online archive: http://edgerton-digital-collections.org

    Throughout his teaching career, Prof. Vandiver has stressed the importance of hands-on learning. He has worked to enliven the MIT core curriculum, incorporating more and earlier opportunities for students to solve real-life problems, engage in research, and develop relationships with faculty. In 1998 he was the recipient of the MIT President's Award for Community Service for the Edgerton Center's work with the Cambridge Public Schools. In 2001 he was honored as a MacVicar Fellow for excellence in teaching.

    Prof. Vandiver joined the faculty of the Department of Ocean Engineering in 1975 and is now a professor of Mechanical and Ocean Engineering. His research focuses on the dynamics of offshore structures and flow-induced vibration. He teaches dynamics and mechanical vibration at the graduate and undergraduate level.

    Prof. Vandiver received his bachelor’s degree in engineering in 1968 from Harvey Mudd College of Science and Engineering, his master’s degree in Ocean Engineering from MIT, and a Ph.D. in Oceanographic Engineering from the MIT and Woods Hole Oceanographic Institution Joint Program in 1975. He is a Registered Mechanical Engineer in the state of Massachusetts and is an active consultant in structural dynamics with the offshore engineering industry. For fun, he volunteers as a certified flight instructor in gliders.