1
Computational Photography
2
The ultimate camera
  • What does it do?
3
The ultimate camera
  • Infinite resolution


  • Infinite zoom control


  • Desired object(s) are in focus


  • No noise


  • No motion blur


  • Infinite dynamic range (can see dark and bright things)


  • ...
4
Creating the ultimate camera
  • The “analog” camera has changed very little in >100 yrs
    • we’re unlikely to get there following this path


  • More promising is to combine “analog” optics with computational techniques
    • “Computational cameras” or “Computational photography”


  • This lecture will survey techniques for producing higher quality images by combining optics and computation


  • Common themes:
    • take multiple photos
    • modify the camera
5
Noise reduction
  • Take several images and average them


  • Why does this work?


  • Basic statistics:
    • variance of the mean of n independent shots decreases with n:  Var(x̄) = σ²/n
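This can be checked numerically. A minimal sketch, with a made-up constant scene, noise level, and frame count, showing that averaging n frames shrinks the noise standard deviation by √n:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((64, 64), 100.0)   # idealized constant scene (hypothetical)
sigma = 10.0                       # per-frame noise std dev (assumed)
n = 16                             # number of frames averaged

# n noisy shots of the same scene, then their pixelwise average
frames = scene + rng.normal(0.0, sigma, size=(n, 64, 64))
average = frames.mean(axis=0)

single_noise = frames[0].std()     # ≈ sigma
avg_noise = average.std()          # ≈ sigma / sqrt(n)
```

With n = 16 the measured noise of the average comes out close to σ/4, as the formula predicts.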


6
Field of view
  • We can artificially increase the field of view by compositing several photos together (project 2).
7
Improving resolution:  Gigapixel images
  • A few other notable examples:
    • Obama inauguration (gigapan.org)
    • HDView (Microsoft Research)
8
Improving resolution:  super resolution
  • What if you don’t have a zoom lens?


9
Intuition (slides from Yossi Rubner & Miki Elad)
10
 
11
 
12
 
13
 
14
Intuition
15
 
16
Handling more general 2D motions
17
Super-resolution
  • Basic idea:
    • define a destination (dst) image of desired resolution
    • assume mapping from dst to each input image is known
      • usually a combination of a 2D motion/warp and an average (point-spread function)
      • can be expressed as a set of linear constraints
      • sometimes the mapping is solved for as well
    • add some form of regularization (e.g., “smoothness assumption”)
      • can also be expressed using linear constraints
      • but L1 and other nonlinear methods work better
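The linear-constraint view above can be sketched on a toy 1D signal. The shifts, the 2-tap box point-spread function, and the regularizer weight below are illustrative assumptions, and an L2 smoothness term stands in for the stronger L1 priors the slide mentions:

```python
import numpy as np

n = 8
x_true = np.array([0., 0., 1., 1., 0., 0., 1., 0.])   # unknown high-res signal

def constraint_row(shift, k):
    """Low-res sample k of an input shifted by `shift` high-res pixels:
    a 2-tap box point-spread function (average of two neighbors)."""
    row = np.zeros(n)
    row[(2 * k + shift) % n] = 0.5
    row[(2 * k + shift + 1) % n] = 0.5
    return row

# Two 4-sample input images with known integer shifts (0 and 1)
A = np.array([constraint_row(s, k) for s in (0, 1) for k in range(4)])
b = A @ x_true                                        # simulated low-res data

# Smoothness regularization expressed as extra linear constraints
# on (cyclic) first differences of the destination image
D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)
lam = 1e-3
x_hat = np.linalg.lstsq(np.vstack([A, lam * D]),
                        np.concatenate([b, np.zeros(n)]), rcond=None)[0]
```

Here the data constraints alone leave one ambiguous direction (an alternating pattern that every 2-tap average cancels); the smoothness term is what pins the reconstruction down, which is exactly the regularization role described above.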
18
How does this work?  [Baker & Kanade, 2002]
19
Limits of super-resolution  [Baker & Kanade, 2002]
  • Performance degrades significantly beyond 4x or so
  • Doesn’t matter how many new images you add
    • space of possible (ambiguous) solutions explodes quickly
  • Major cause
    • quantizing pixels to 8-bit gray values


  • Possible solutions:
    • nonlinear techniques (e.g., L1)
    • better priors (e.g., using domain knowledge)
      • Baker & Kanade “Hallucination”, 2002
      • Freeman et al. “Example-based super-resolution”


20
Dynamic Range
21
HDR images — merge multiple inputs
22
HDR images — merged
23
Camera is not a photometer!
  • Limited dynamic range
    • 8 bits captures only 2 orders of magnitude of light intensity
    • We can see ~10 orders of magnitude of light intensity

  • Unknown, nonlinear response
    • pixel intensity ≠ amount of light (# photons, or “radiance”)

  • Solution:
    • Recover response curve from multiple exposures, then reconstruct the radiance map
24
 
25
Calculating response function
26
Debevec & Malik [SIGGRAPH 1997]
27
The Math
  • Let g(z) be the discrete inverse response function
  • For each pixel site i in each image j, want:
      g(Zij) = ln Ei + ln Δtj
  • Solve the over-determined linear system (least squares, with a smoothness term on g):
      min Σi,j [ g(Zij) − ln Ei − ln Δtj ]² + λ Σz g″(z)²
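A compact sketch of this system in code. The unknowns are g(0..255) and the ln Eᵢ; each usable pixel contributes one data equation, plus second-difference smoothness rows and one row to fix the arbitrary offset. The pixel data here are synthetic (a hypothetical square-root response), and the paper's hat weighting w(z) is omitted for brevity:

```python
import numpy as np

n_levels = 256
dt = np.array([1 / 16, 1 / 4, 1.0, 4.0])       # exposure times (assumed)
E_true = np.geomspace(0.05, 2.0, 10)           # hypothetical scene irradiances
# Synthetic 8-bit pixels from a square-root response: Z = round(255*sqrt(E*dt))
Z = np.round(255 * np.sqrt(np.clip(np.outer(E_true, dt), 0, 1))).astype(int)

rows, cols, vals, rhs = [], [], [], []
def add_eq(indices, coeffs, b):
    r = len(rhs)
    for idx, c in zip(indices, coeffs):
        rows.append(r); cols.append(idx); vals.append(c)
    rhs.append(b)

n_px = len(E_true)
for i in range(n_px):
    for j in range(len(dt)):
        z = Z[i, j]
        if 0 < z < 255:                        # skip clipped/saturated pixels
            add_eq([z, n_levels + i], [1.0, -1.0], np.log(dt[j]))  # g(Z)-lnE=ln dt
lam = 1.0
for z in range(1, n_levels - 1):               # smoothness: g''(z) ≈ 0
    add_eq([z - 1, z, z + 1], [lam, -2 * lam, lam], 0.0)
add_eq([127], [10.0], 0.0)                     # fix the gauge: g(127) = 0

A = np.zeros((len(rhs), n_levels + n_px))
A[rows, cols] = vals
sol = np.linalg.lstsq(A, np.array(rhs), rcond=None)[0]
g_hat, lnE_hat = sol[:n_levels], sol[n_levels:]
```

The recovered g is monotone and satisfies the data constraints up to quantization, which is all the radiance-map reconstruction needs.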


28
Capture and composite several photos
  • Same trick works for
    • field of view
    • resolution
    • signal-to-noise ratio
    • dynamic range
    • focus


  • But sometimes you can do better by modifying the camera…
29
Focus
  • Suppose we want to produce images where the desired object is guaranteed to be in focus?






  • Or suppose we want everything to be in focus?
30
Light field camera [Ng et al., 2005]
31
Conventional vs. light field camera
32
Prototype camera
  • 4000 × 4000 pixels  ÷  292 × 292 lenses  =  14 × 14 pixels per lens
33
 
34
Simulating depth of field
  • stopping down aperture  =  summing only the central portion of each microlens
35
Digital refocusing
  • refocusing  =  summing windows extracted from several microlenses
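A toy 1D version of this shift-and-add operation: the light field L[u, x] stores, for each microlens x, samples across aperture positions u, and refocusing sums along sheared lines x + αu. The two point sources, their light-field slopes, and the shear values below are made-up illustrations, not Ng's exact parameterization:

```python
import numpy as np

n_u, n_x = 9, 64
us = np.arange(n_u) - n_u // 2            # aperture sample positions
L = np.zeros((n_u, n_x))
for u_idx, u in enumerate(us):
    # A point source at one depth traces a line of one slope in (u, x);
    # a source at a different depth traces a different slope.
    L[u_idx, (20 + u) % n_x] += 1.0       # source A (slope +1)
    L[u_idx, (44 - 2 * u) % n_x] += 1.0   # source B (slope -2)

def refocus(L, alpha):
    """Shift each aperture row by alpha*u, then average (synthetic focus)."""
    out = np.zeros(n_x)
    for u_idx, u in enumerate(us):
        out += np.roll(L[u_idx], int(round(alpha * u)))
    return out / len(us)

img_A = refocus(L, -1)   # shear matched to source A: sharp peak at x = 20
img_B = refocus(L, 2)    # shear matched to source B: sharp peak at x = 44
```

Each choice of shear brings one source to a sharp peak while smearing the other across the sensor, mirroring the refocused photographs on the next slide.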
36
Example of digital refocusing
37
All-in-focus
  • If you only want to produce an all-focus image, there are simpler alternatives


  • E.g.,
    • Wavefront coding [Dowski 1995]
    • Coded aperture [Levin SIGGRAPH 2007],  [Raskar SIGGRAPH 2007]
      • can also produce change in focus (à la Ng’s light field camera)
38
 
39
 
40
 
41
 
42
Close-up
43
Motion blur removal
  • Instead of coding the aperture, code the...
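The motivation for coding the exposure (as in Raskar et al.'s flutter-shutter work) can be seen in one dimension: an ordinary box-shaped exposure zeroes out some frequencies of the motion-blur kernel, while a broadband on/off code keeps every frequency nonzero, so deconvolution stays well-posed. The binary code below is an arbitrary illustration, not a published sequence:

```python
import numpy as np

n = 64
box = np.zeros(n)
box[:8] = 1.0                                  # plain open shutter (box blur)
coded = np.zeros(n)
coded[:8] = [1, 0, 1, 1, 0, 0, 1, 1]           # fluttered (coded) shutter

# Deblurring divides by the blur's spectrum, so its smallest magnitude matters
box_min = np.abs(np.fft.fft(box)).min()        # ~0: those frequencies are lost
coded_min = np.abs(np.fft.fft(coded)).min()    # bounded away from 0: invertible
```

The box blur's spectrum hits (numerical) zero, so those frequencies of the moving object are unrecoverable; the coded blur's spectrum never does, which is what makes stable motion-blur removal possible.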
44
 
45
 
46
Many more possibilities
  • Seeing through/behind objects
    • Using a camera array (“synthetic aperture”)
    • Levoy et al., SIGGRAPH 2004

  • Removing interreflections
    • Nayar et al., SIGGRAPH 2006

  • Family portraits where everyone’s smiling
    • Photomontage (Agarwala et al., SIGGRAPH 2004)


  • …



47
More on computational photography
  • SIGGRAPH course notes and video
  • Other courses
    • MIT course
    • CMU course
    • Stanford course
    • Columbia course
  • Wikipedia page
  • Symposium on Computational Photography
  • ICCP 2009 (conference)