Monday, September 21, 2009

Google Talks: Photo Tech EDU - A lecture series.


The goal of PhotoTechEDU is to provide a short course in photographic technology for engineers.
The course will teach:
  • useful properties of light and image formation
  • theory and techniques of photographic optics and image capture
  • theory of colorimetry and techniques of color reproduction
  • and lots more...

Day 1: Photo Technology Overview  Speaker: Richard Lyon     SLIDES

Day 2: Photo Technology Overview Continued Speaker: Richard Lyon  SLIDES
Overview of front-end photographic technology with more optics, waves, diffraction, silicon photosensors, and color sensing methods.


Day 3: Ray Tracing, Lenses, and Mirrors Speaker: Rom Clement   SLIDES
Principles of ray tracing for simple optical components: thin spherical lenses and spherical mirrors. The notions of real/virtual objects and real/virtual images will be highlighted, as well as the computation of the position and magnification of the image. The case of thick lenses will also be mentioned, including the notion of nodal and cardinal points.
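
For the thin-lens case, the computation reduces to the Gaussian lens formula 1/f = 1/d_o + 1/d_i with lateral magnification m = -d_i/d_o. A short sketch (my own illustration, not course material):

```python
# Thin-lens image position and magnification (Gaussian lens formula).
# Sign convention: object distance d_o > 0 in front of the lens,
# image distance d_i > 0 behind the lens (real image), < 0 (virtual).

def thin_lens(f, d_o):
    """Return (image distance, lateral magnification) for focal length f."""
    if d_o == f:
        raise ValueError("object at focal point: image at infinity")
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)
    m = -d_i / d_o
    return d_i, m

# Example: 50 mm lens, object 2 m away.
d_i, m = thin_lens(f=50.0, d_o=2000.0)   # millimetres
print(f"image {d_i:.2f} mm behind lens, magnification {m:.4f}")
# image 51.28 mm behind lens, magnification -0.0256 (real, inverted)
```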


Day 4: Contrast, MTF, Flare, and Noise Speaker: Iain McClatchie   SLIDES
Most consumers know that more megapixels is better. But why does a 6MP DSLR take nicer pictures than a 10MP point-and-shoot? This talk will follow light from the surface to the sensor, exploring some of the things that degrade images along the way.
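
Even a flawless lens degrades contrast. As a concrete reference point (my own sketch, not from the talk), the diffraction-limited MTF of an ideal circular aperture falls to zero at a cutoff frequency of 1/(λN) for wavelength λ and f-number N:

```python
import numpy as np

# Diffraction-limited MTF of an ideal circular aperture at f-number N.
# Cutoff frequency nu_c = 1 / (lambda * N); beyond it no contrast survives.

def diffraction_mtf(nu, wavelength_mm, f_number):
    nu_c = 1.0 / (wavelength_mm * f_number)    # cycles/mm
    x = np.clip(nu / nu_c, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x * x))

# Example: green light (550 nm) at f/8 -> cutoff about 227 cycles/mm.
for nu in (10, 50, 100, 200):
    print(f"{nu:3d} cycles/mm: MTF = {diffraction_mtf(nu, 550e-6, 8.0):.3f}")
```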


Day 5: Silicon Image Sensors Speaker: Richard Lyon  SLIDES
In this session we examine how digital cameras capture images via the interaction of light with silicon in CCD and CMOS image sensors, including sampling and aliasing effects, noise effects, etc.
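
One of those noise effects is easy to demonstrate: photon arrivals are Poisson-distributed, so a pixel's signal-to-noise ratio grows only as the square root of the collected light. A quick simulation (my own sketch, not course material):

```python
import numpy as np

# Photon shot noise: the electron count in a pixel is Poisson-distributed,
# so SNR grows as the square root of the signal.

rng = np.random.default_rng(0)
for mean_electrons in (10, 100, 1000, 10000):
    samples = rng.poisson(mean_electrons, size=100_000)
    snr = samples.mean() / samples.std()
    print(f"{mean_electrons:6d} e-  SNR = {snr:6.1f}  (sqrt = {mean_electrons**0.5:6.1f})")
```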


Day 6: Digital Camera Image Processing Speaker: Richard Lyon  SLIDES
In this session we examine the steps that a digital camera goes through to take raw data from an image sensor and make a photograph out of it. There are more steps than you might imagine, arranged in what is usually termed a pipeline (and sometimes implemented on pipelined hardware), to get to a pleasing photographic rendering of the scene.
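
A minimal skeleton of such a pipeline (a hypothetical sketch; real cameras differ in stage order and detail) might look like this, with black-level subtraction, white balance, demosaicking, color correction, and gamma encoding as representative stages:

```python
import numpy as np

# Skeleton of a raw-to-display pipeline, one common stage arrangement.
# The demosaic and color-matrix stages are left as placeholders.

def process_raw(raw, black=64, wb_gains=(2.0, 1.0, 1.5), gamma=2.2):
    x = np.clip(raw.astype(np.float64) - black, 0, None)   # black-level subtract
    x /= x.max()                                           # normalize to [0, 1]
    # ... demosaic the Bayer mosaic to RGB here (bilinear, gradient-based, ...)
    rgb = np.stack([x, x, x], axis=-1)                     # placeholder: grayscale
    rgb *= np.asarray(wb_gains)                            # white-balance gains
    # ... apply a 3x3 color-correction matrix here
    rgb = np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)          # gamma encode
    return (rgb * 255).astype(np.uint8)

raw = np.random.default_rng(1).integers(64, 4096, (4, 4))  # fake 12-bit raw
print(process_raw(raw).shape)                              # (4, 4, 3)
```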


Day 7: Lossy Image Compression Speaker: Ashok Popat   SLIDES
This session covers lossy image compression of many forms, including: Transform coding: DCT and relationship to KLT and filter banks; Energy compaction, zonal sampling, and bit allocation; Scalar quantization: Lloyd-Max, entropy constrained; Wavelets and embedded zerotrees; Vector quantization: k-means, Lloyd algorithm.
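
A toy transform coder makes the ideas concrete (my own sketch in the JPEG spirit, not the talk's material): an 8x8 DCT compacts the block's energy into a few coefficients, uniform scalar quantization discards precision, and the inverse transform reconstructs an approximation:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Toy transform coder: 8x8 DCT, uniform scalar quantization, inverse DCT.
# Coarser steps discard more high-frequency energy for fewer bits.

def code_block(block, step=16):
    coeffs = dctn(block - 128.0, norm="ortho")     # decorrelating transform
    q = np.round(coeffs / step)                    # scalar quantization
    return idctn(q * step, norm="ortho") + 128.0   # reconstruction

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(np.float64)
rec = code_block(block, step=32)
print("RMS error:", np.sqrt(np.mean((block - rec) ** 2)))
```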


Day 8: Diffraction and Interference in Imaging Speaker: Rom Clement   SLIDES
This session addresses effects of the wave nature of light, which lets us talk about the phenomena of interference as well as diffraction. Understanding diffraction will be used to derive the Rayleigh criterion and, from it, the resolving power of an optical system. In the second part of the lecture, we will study gratings using the wave approach. An example of an amateur spectroscope for astronomy using a reflective grating will be shown.
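
The resulting Rayleigh criterion for a circular aperture is θ = 1.22 λ/D. A worked example (my own illustration, not course material):

```python
import math

# Rayleigh criterion: two point sources are just resolved when their
# angular separation is theta = 1.22 * lambda / D (circular aperture).

def rayleigh_limit(wavelength_m, aperture_m):
    return 1.22 * wavelength_m / aperture_m   # radians

# Example: 100 mm telescope aperture at 550 nm.
theta = rayleigh_limit(550e-9, 0.100)
arcsec = math.degrees(theta) * 3600
print(f"{theta:.3e} rad = {arcsec:.2f} arcsec")   # about 1.38 arcsec
```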


Day 9: Amateur Astrophotography Speaker: Ben Lutch   SLIDES
This session covers amateur astrophotography, particularly automation, gear (cameras, telescopes) and some of the technical challenges of photographing dim objects across the universe from your backyard or remote observatory.


Day 10: Image Compression Part 2 Speaker: Ashok Popat   SLIDES
Continues discussion of lossy image compression and introduces lossless image compression. Recap of transform coding; intro to vector quantization and wavelet-based approaches; basic theory of lossless compression; entropy coding techniques; context-based lossless image compression. Will emphasize underlying principles rather than details of specific methods.
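
The basic theory of lossless compression rests on Shannon entropy: no lossless coder can average fewer bits per symbol than the entropy of the source. A few lines compute it (my own sketch, not course material):

```python
import math
from collections import Counter

# Shannon entropy of a symbol stream: the lower bound, in bits/symbol,
# that any lossless entropy coder (Huffman, arithmetic) can approach.

def entropy_bits(symbols):
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(entropy_bits(b"aaaaabbbc"))   # about 1.35 bits/symbol vs 8 for raw bytes
```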




Day 11: Document Image Analysis with Leptonica Speaker: Dan Bloomberg   SLIDES
Graphics typically takes a representation of an image or scene and renders it in raster form. This normally occurs through a well-specified process. What happens when you try to go the other way, from a raster image to a description of its contents? The process, so easy for humans, is not easy for machines, because the input raster data can be highly variable and the interpretation of the contents somewhat arbitrary. We'll talk about how this 'inverse graphics' process can be accomplished quickly and usually with sufficient accuracy for most applications, using rasters of document images as input. The 'trick' is to use the image as the primary...


Day 12: High Dynamic Range Image Capture Speaker: Greg Ward   SLIDES
Although digital imaging has been around for nearly 40 years, the past decade has seen the nearly complete replacement of analog film by digital sensors, most of which capture only a tenth the dynamic range of the black and white film Ansel Adams worked with 80 years ago. Using multiple exposure techniques (or advanced sensors), it is in fact possible to turn this around and capture ten times the range of film, equaling or surpassing the capacity of human vision. Greg will introduce some of the concepts and present a few details of HDR imaging in the context of digital photography, and demonstrate interactive software he has developed to assist in the...
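
The multiple-exposure idea is simple enough to sketch (an illustrative toy, not Greg's software): divide each linear exposure by its exposure time and average the results, weighting mid-range pixel values most so that clipped and noisy values contribute little:

```python
import numpy as np

# Minimal multi-exposure HDR merge on linear-light data.

def merge_hdr(images, exposure_times):
    """images: list of linear arrays in [0, 1]; returns a relative radiance map."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)        # hat weight: peaks at 0.5
        num += w * img / t                       # exposure-normalized radiance
        den += w
    return num / np.maximum(den, 1e-8)

# Synthetic exposures of the same scene at three shutter times.
scene = np.linspace(0.01, 10.0, 5)               # "true" radiances
times = (0.05, 0.2, 0.8)
exposures = [np.clip(scene * t, 0, 1) for t in times]
print(merge_hdr(exposures, times))               # recovers approximately `scene`
```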


Day 13: Light field (plenoptic) photography – Not Available

Day 14: Exposing Digital Forgeries from Inconsistencies in Lighting Speaker: Hany Farid
With the advent of high-resolution digital cameras, powerful personal computers and sophisticated photo-editing software, the manipulation of digital images is becoming more common. To this end, we have been developing a suite of tools to detect tampering in digital images. I will discuss two related techniques for exposing forgeries from inconsistencies in lighting. In each case we show how to estimate the direction to a light source from only a single image: inconsistencies across the image are then used as evidence of tampering.
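
The core estimation step can be sketched in a few lines (my simplified illustration in the spirit of the technique, not Prof. Farid's code): under a Lambertian model, observed intensity is N·L plus an ambient term, so given 2-D surface normals along an occluding contour, the light direction falls out of a linear least-squares fit. Mismatched estimates across objects in one photo are evidence of compositing.

```python
import numpy as np

# Lambertian model: intensity = N . L + ambient. Solve for L and the
# ambient term by linear least squares over contour points.

def estimate_light(normals, intensities):
    """normals: (k, 2) unit vectors; returns (Lx, Ly, ambient)."""
    A = np.hstack([normals, np.ones((len(normals), 1))])
    x, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return x

# Synthetic check: light from 30 degrees, ambient 0.2.
angles = np.linspace(0, np.pi, 20)
N = np.stack([np.cos(angles), np.sin(angles)], axis=1)
L_true = np.array([np.cos(np.pi / 6), np.sin(np.pi / 6)])
I = N @ L_true + 0.2
print(estimate_light(N, I))    # about [0.866, 0.5, 0.2]
```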


Day 15: The Gigapxl Project – Not Available

Day 16: Multi-View Image Compositions Speaker: Lihi Zelnik   SLIDES
Pictures taken by a rotating camera can be registered and blended on the sphere into a smooth panorama. What is left to be done, in order to obtain a flat panorama, is projecting the spherical image onto a picture plane. This step is unfortunately not obvious: the surface of the sphere cannot be flattened onto a page without some form of distortion. Distortions are also unavoidable when mosaicking images taken while the point of view changes and/or the scene changes (e.g., due to objects moving). In such cases no geometrically consistent mosaic may be obtained. Artists have explored this problem and demonstrated that the geometrical consistency is not the...
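
As a concrete illustration of the projection step (a hypothetical sketch, not from the talk): a rectilinear view is extracted from an equirectangular panorama by casting a ray through each output pixel, converting it to longitude/latitude, and sampling the source. The stretching near the edges at wide fields of view is exactly the distortion at issue.

```python
import numpy as np

# Rectilinear (perspective) view from an equirectangular panorama,
# using nearest-neighbor sampling for brevity.

def rectilinear_from_equirect(pano, fov_deg=90.0, out_size=(256, 256)):
    h_out, w_out = out_size
    f = (w_out / 2) / np.tan(np.radians(fov_deg) / 2)   # focal length in pixels
    ys, xs = np.indices((h_out, w_out), dtype=np.float64)
    x = xs - w_out / 2
    y = ys - h_out / 2
    lon = np.arctan2(x, f)                              # longitude of each ray
    lat = np.arctan2(y, np.hypot(x, f))                 # latitude of each ray
    H, W = pano.shape[:2]
    u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (H - 1)).astype(int)
    return pano[v, u]

pano = np.random.default_rng(0).random((180, 360))
print(rectilinear_from_equirect(pano).shape)            # (256, 256)
```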


Day 17: Color Management - Not Available

Day 18: Non-destructive, Selective, Non-linear, and Non-modal Editing of Photographs Speakers: Uwe Steinmueller & Fabio Riccardi  SLIDES
For photo editing tools there is a lot of buzz about non-destructive or parametric editing. While this is quite standard for today's RAW converters, it is not sufficient for a more refined imaging workflow. The talk will demonstrate the need for a non-destructive, selective, non-linear, and non-modal editing workflow.


Day 19: Inkjet printing coming of age  Speaker: Uwe Steinmueller   SLIDES
Today's fine-art printing is dominated mainly by pigment-based inkjet printers. Why do some photographers find inkjet printing so easy and others so hard? The talk presents an overview of inkjet printing in 2007 (printers, inks, papers, software, color management).


Day 20: (topic not yet cleared for release from Google) Not Available

Day 21: Visualization Via Matlab: Color Profiles, Ray Tracing, Diffraction Speaker: Richard Lyon SLIDES

Some topics that we've heard about from other points of view are fruitfully re-examined through some explorations using Matlab. First, we have a bit of code that reads ICC profiles and shows us what's in them, including some plots so that we can see what sorts of data are used to represent colorspace transformations. Second, we look at ray tracing for a non-Gaussian, non-paraxial, real-world situation, in which tracing rays numerically leads to an understanding of an aberration that we always need to deal with in digital cameras: the effect of a flat filter in the optical path. Third, brute-force numerical simulation of light's propagation as waves leads...
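The flat-filter aberration lends itself to exactly this kind of numerical check. Below is a sketch (in Python rather than Matlab, and my own illustration rather than the talk's code) that applies Snell's law to converging rays passing through a plane-parallel plate: the focus shifts by roughly t*(1 - 1/n), and the variation of that shift with ray angle is the aberration cameras must design around.

```python
import numpy as np

# Longitudinal focus shift introduced by a flat plate of thickness t and
# index n placed in a converging beam, for rays at angle theta to the axis.

n, t = 1.5, 2.0                        # refractive index, thickness (mm)
for theta in np.radians([2, 10, 20]):  # ray angle to the optical axis
    theta_g = np.arcsin(np.sin(theta) / n)            # Snell at the flat face
    shift = t * (1 - np.tan(theta_g) / np.tan(theta)) # focus shift along axis
    print(f"{np.degrees(theta):4.0f} deg: focus shift {shift:.4f} mm")
# Paraxial prediction: t * (1 - 1/n) = 0.6667 mm; steeper rays differ,
# i.e. the plate introduces spherical aberration.
```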

Day 22: Measuring, Interpreting and Correcting Optical Aberrations in the Human Eye Speaker: Eric Gross
Every lens has flaws. The eye—arguably the most important lens—has more than its share. This talk is about recent technology that lets physicians measure and understand these wavefront aberrations. Some of these aberrations degrade our vision, while others seem to enhance it. Finally, I'll talk about efforts and technology to correct those aberrations.


Day 23: Raw Files and Formats Speaker: David Cardinal
David Cardinal, veteran nature photographer and software developer, discusses raw files: their history, contents, and politics, including some of the do's and don'ts for raw file reading and writing. David has written extensively on raw files for magazines including PC Magazine and Outdoor Photographer, and in his books on the Nikon D1 series of cameras; he also wrote DigitalPro for Windows, an image management system including a raw file reader and writer that is featured in this month's (June 2007) Dr. Dobb's Journal.


Day 24: Dequantization, Texture, and Superresolution  Speakers: Lance Williams & Diego Rother  SLIDES  and more SLIDES

Quantization is an intrinsic feature of digital signals, an unavoidable nonlinearity in representation. It's instructive to analyze scalar quantization as part of the signal sampling process, how it characterizes the signal, and how signals so characterized can best be reconstructed. The resulting process has many advantages over conventional interpolation. It can reconstruct image structures such as edges and contours with sub-sample precision, an important ingredient of "superresolution."

An open question is whether analogous techniques have a role to play in vector quantization. In images, vector quantization is often used spatially, to encode groups of pixels at a time. Typically this is used for data compression. Recent work in texture synthesis, however, appeals to vector quantization to characterize and reproduce the joint statistics of pixel values in a "random" field. In current usage, this is an optimization for what would otherwise be a big search; vector quantization is used to index spatial blocks of "texture." In an early effort in this area, vector quantization was used to synthesize texture by capturing correlations among spatial VQ blocks at different scales. This idea leads to a characterization and interpolation of texture that delivers fine detail at all scales, in the manner of a fractal.

After a very brief review of leading texture synthesis techniques, some recent work in continuous scaling of image textures will be demonstrated, and the prospects for combining the superresolution of image structures and image textures will be discussed.
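
For readers who want the VQ building block in code, here is a minimal Lloyd/k-means vector quantizer (my own sketch, not the speakers' implementation), treating small pixel blocks as vectors and assigning each to its nearest codeword:

```python
import numpy as np

# Vector quantization via Lloyd's algorithm (k-means): alternate between
# nearest-codeword assignment and centroid update.

def lloyd_vq(vectors, k=4, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(vectors[:, None] - codebook[None], axis=2)
        assign = d.argmin(axis=1)                        # nearest codeword
        for j in range(k):                               # centroid update
            if np.any(assign == j):
                codebook[j] = vectors[assign == j].mean(axis=0)
    return codebook, assign

data = np.random.default_rng(1).random((200, 4))         # 2x2 pixel blocks
codebook, assign = lloyd_vq(data)
print(codebook.shape, np.bincount(assign))
```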

Day 25: Open source based high resolution cameras Speaker: Andrey N. Filippov
Andrey will explain the designs and applications of Elphel, Inc.'s intelligent, network-enabled cameras based on open source hardware and software. Google currently uses Elphel cameras for book scanning and for capturing street imagery in Google Maps. Andrey hopes Elphel's newest modular cameras, the Model 353 and the Model 363, will attract software engineers and FPGA hardware engineers interested in exploring high-definition videography and other innovative applications.


Day 26: Image quality testing & real-world challenges Speaker: Norman Koren   SLIDES



Day 27: Focus on Resolution: Degradations in Image Acquisition Speaker: Ken Turkowski  SLIDES
We investigate the digital image acquisition process for factors that adversely affect the quality of acquired imagery. The physical properties of optical systems impose a fundamental limit to the resolution of captured imagery. Real-world optics manifest several types of aberrations, and we show how these types of aberrations affect resolution. Image acquisition devices are imperfect, and have aliasing and other sampling artifacts that affect resolution. We derive the 70% rule for digital imagery, and verify it experimentally. There is a distinction between resolution and...


Day 28: Capturing more Light: Pragmatic use of HDR Capture and Tonemapping Speaker: Uwe Steinmueller
The high contrast of normal sunny daylight scenes is often hard to capture with a single shot. HDR images try to work around these limitations by combining multiple images. This talk is about the ins and outs of handling a higher dynamic range and, of course, also the limitations of this technique. The content is the result of extensive practical work photographing urban scenes.



Day 29: Photographing VR Panoramas Speaker: Scott Highton
Scott, one of the pioneers of virtual reality photography, will present an overview of methods and techniques for photographing VR panoramas. While VR panoramas have become common for online tours in the real estate and travel industries, where low-quality point-and-shoot technique seems to prevail, Scott focuses on the higher-end and higher-quality approaches that yield memorable and evocative imagery. He was the first independent photographer contracted by Apple to work with and test QuickTime VR, as well as an early photographic consultant and contract photographer during the development of IPIX's PhotoBubble technology. Specializing in photography of extreme locations and environments, he was the first to use both technologies underwater.  Scott has been a commercial photographer, documentary cinematographer, and writer for close to 30 years, and is in the process of finishing his long-awaited book on Virtual Reality Photography techniques. He has lectured at a number of photo industry events, and produces the Virtu...


Day 30: Imaging optics for the next decade Speaker: Robert E. Fischer  (THIS IS THE BEST TALK)
Digital cameras in their many forms will continue to be one of the primary drivers towards new technologies in optics as well as improvements of classical technologies. This has been well illustrated in the past 5-10 years, which have seen, for example, the development of compression molded glass aspheric lenses for improved performance and packaging. The incorporation of injection molded plastic lenses and possibly hybrid refractive/diffractive surfaces will grow. Furthermore, as the trend continues towards smaller pixels as well as more pixels in a given sensor, the imaging optics will be further driven towards higher image quality. Zoom lenses will increase in their zoom range, yet there will be a continuing emphasis on smaller and smaller packaging. The optics and their associated mechanics will need to be more robust with respect to stray light such as flare, glare, ghost images, and other undesirable image anomalies. And our optics must be more robust with respect to environmental effects such as thermal soaks and gradients. And with all of the above, customers will want lower cost too. It is going to be a fun ride over the next 5-10 years, so fasten your seat belt and hold on real tight to the safety bar!

Robert is CEO of Optics 1 in Westlake Village, CA, a past president of the SPIE, and a winner of that society's highest award, the Gold Medal for outstanding engineering or scientific accomplishments in optics and electro-optics. Mr. Fischer's technical interests are in optical system design and engineering, in particular lens design. He is also interested in optical component and system manufacturing, assembly, and testing. His interests extend from the deep UV through the visible and on to the thermal infrared. He is known for his tireless efforts to advance optical science, engineering and scholarship. He served as a book editor of the McGraw-Hill Series on Optical and Electro-Optical Engineering, and as executive editor of OE Reports, bringing timely and practical information to professionals in the field.


Day 31: Color Balance: Babies, Rugs & Sunsets Speaker: Paul Hubel
Achieving pleasing color balance is one of the most important and difficult problems in photographic systems; if the color balance is off, other image quality attributes drop in importance and the result is unacceptable. In this talk I will discuss the differences between color balance and white balance for both photographic and machine vision applications and I will outline the literature of the subject. I will explain some of the more basic and some of the more advanced methods and relate these to complexity and system calibration issues. I will give several examples that show how some methods can fail and why some images can be extremely difficult. I will touch on the relationship between color balance and color perception and how this differentiates photographic systems from machine vision systems.
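
As a taste of the basic methods the talk surveys, here is the gray-world assumption in a few lines (my illustrative sketch, not Dr. Hubel's algorithm): assume the scene averages to neutral and scale each channel to match. Babies, rugs, and sunsets are exactly the scenes that break it, because a dominant color biases the channel means.

```python
import numpy as np

# Gray-world white balance on linear RGB data: scale each channel so the
# channel means are equal.

def gray_world(rgb_linear):
    means = rgb_linear.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means          # push channel means toward gray
    return np.clip(rgb_linear * gains, 0.0, 1.0)

img = np.random.default_rng(0).random((8, 8, 3)) * [1.0, 0.8, 0.5]  # warm cast
balanced = gray_world(img)
print(balanced.reshape(-1, 3).mean(axis=0))   # channel means now roughly equal
```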

Dr. Paul M. Hubel has been working as Chief Image Scientist at Foveon, Inc. since 2002. His work includes the design of image processing algorithms and sensor architectures for high, middle, and low end cameras. Before joining Foveon, Dr. Hubel worked for ten years as a Principal Project Scientist at Hewlett-Packard Laboratories working on color algorithms for digital cameras, photofinishing, scanners, copiers, and printers.

Dr. Hubel received his B.Sc. in Optics from The University of Rochester in 1986, and his D.Phil. from Oxford University in 1990. His D.Phil thesis is titled "Colour Reflection Holography". As a graduate student, Dr. Hubel worked part-time at the Rowland Institute for Science under Dr. E.H. Land and later as a Post Doctoral Fellow at the MIT-Media Laboratory. Dr. Hubel has published over 30 technical papers, book chapters, and authored 25 patents. Dr. Hubel is a member of IS&T and SPIE, and has served as the technical and general chair of the IS&T/SID Color Imaging Conference.


Day 32: Art, Science and Reality of High Dynamic Range (HDR) Imaging Speaker: John McCann
High Dynamic Range (HDR) image capture and display has become an important engineering topic. The discipline of reproducing scenes with a high range of luminances has a five-century history that includes painting, photography, electronic imaging and image processing. HDR images are superior to conventional images. There are two fundamental scientific issues that control HDR image capture and reproduction. The first is the range of information that can be measured using different techniques. The second is the range of image information that can be utilized by humans. Optical veiling glare severely limits the range of luminance that can be captured and seen.

In recent experiments, we measured camera and human responses to calibrated HDR test targets. We calibrated a 4.3-log-unit test target, with minimal and maximal glare from a changeable surround. Glare is an uncontrolled spread of an image-dependent fraction of scene luminance in cameras and in the eye. We use this standard test target to measure the range of luminances that can be captured on a camera's image plane. Further, we measure the appearance of these test luminance patches. It is the improved quantization of digital data and the preservation of the scene's spatial information that cause the improvement in quality in HDR reproductions. HDR is better than conventional imaging, despite the fact that the multiple-exposure HDR reproduction of luminance is inaccurate. This talk describes the history of HDR image processing techniques including painting, photography, and electronic image processing (analog and digital) over the past 40 years. It reviews the development of Retinex theory and other spatial-image-processing algorithms that calculate appearance in images from arrays of radiances.

John McCann received a B.A. degree in Biology from Harvard University in 1964. He worked in, and later managed, the Vision Research Laboratory at Polaroid from 1961 to 1996. He has studied human color vision, digital image processing, large format instant photography and the reproduction of fine art. He is a Fellow of IS&T. He is a past President of IS&T and the Artists Foundation, Boston. He is currently consulting and continuing his research on color vision. He is the IS&T/OSA 2002 Edwin H. Land Medalist and IS&T 2005 Honorary Member and will be a 2008 Fellow of the Optical Society of America. 


-------------------------------------


A related talk by Dick Lyon at a Berkeley pictures meeting: The Ideal Camera – An Outside-the-Box Analysis – slides


Richard Lyon's site on Phototech talks.
