|
1
|
- Computer Vision
CSE576, Spring 2005
Richard Szeliski
|
|
2
|
- Image-Based Rendering
- Light Fields and Lumigraphs
- Panoramas and Concentric Mosaics
- Environment Matting
- Image-Based models
|
|
3
|
- Video-Based Rendering
- Facial animation
- Video matting and shadow matting
- Video Textures and Animating Stills
- Video-based tours
|
|
4
|
- S. J. Gortler, R. Grzeszczuk, R. Szeliski and M. F. Cohen, The
Lumigraph, SIGGRAPH'96.
- M. Levoy and P. Hanrahan, Light field rendering, SIGGRAPH'96.
- H.-Y. Shum and L.-W. He. Rendering with concentric mosaics, SIGGRAPH’99.
|
|
5
|
- D. E. Zongker et al. Environment matting and compositing,
SIGGRAPH'99.
- Y.-Y. Chuang et al. Environment matting extensions: Towards higher
accuracy and real-time capture. SIGGRAPH'2000, pp. 121-130, 2000.
- P. E. Debevec, C. J. Taylor and J. Malik, Modeling and rendering
architecture from photographs:…, SIGGRAPH'96.
|
|
6
|
- Y.-Y. Chuang et al. Video matting of complex scenes. ACM Trans. on
Graphics, 21(3):243-248, July 2002.
- Y.-Y. Chuang et al. Shadow matting. ACM Transactions on Graphics,
22(3):494-500, July 2003.
- A. Schödl et al., Video textures. SIGGRAPH'2000, pp. 489-498, 2000.
- M. Uyttendaele et al. Image-based interactive exploration of real-world
environments. IEEE Comp. Graphics and Applications, 24(3), May/June
2004.
|
|
7
|
- (with lots of slides from Michael Cohen)
|
|
8
|
- How do we generate new scenes and animations from existing ones?
- Classic “3D Vision + Graphics”:
- take (lots of) pictures
- recover camera pose
- build 3D model
- extract texture maps / BRDFs
- synthesize new views
|
|
16
|
- Plenoptic Function:
- all possible images
- too much stuff!
|
|
17
|
- Infinite line
- 4D
- 2D direction
- 2D position
- non-dispersive medium
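The 4D parameterization above is usually realized as a two-plane "light slab": a ray is recorded by where it pierces two parallel planes. A minimal sketch; the plane positions z = 0 and z = 1 are illustrative assumptions, not values fixed by the papers:

```python
def ray_to_stuv(origin, direction, z_st=0.0, z_uv=1.0):
    """Encode a ray by its intersections with two parallel planes.

    The planes z = z_st and z = z_uv are an illustrative choice; the
    Lumigraph/Light Field papers place the slab planes around the object.
    Returns (s, t, u, v).
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz == 0:
        raise ValueError("ray is parallel to the parameterization planes")
    a = (z_st - oz) / dz  # ray parameter at the (s, t) plane
    b = (z_uv - oz) / dz  # ray parameter at the (u, v) plane
    return (ox + a * dx, oy + a * dy, ox + b * dx, oy + b * dy)
```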
|
|
18
|
- Discretize, then interpolate
- Distance between 2 rays
- Which is closer together?
|
|
19
|
- What is an image?
- All rays through a point
|
|
20
|
- Convert panoramic image sequence into a cylindrical image
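The cylindrical step can be sketched as a per-pixel coordinate warp. The function name and the focal-length parameter `f` below are illustrative, not code from the original stitchers:

```python
import math

def plane_to_cylinder(x, y, f):
    """Map an image-plane point (x, y), measured from the principal point
    with focal length f (in pixels), to cylindrical coordinates (theta, h)
    used when compositing the sequence onto a cylinder.
    """
    theta = math.atan2(x, f)               # angle around the cylinder axis
    h = y / math.sqrt(x * x + f * f)       # height on the unit cylinder
    return theta, h
```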
|
|
22
|
- Light leaving towards “eye”
- 2D
|
|
36
|
- For each output pixel
- determine s,t,u,v
- either
- find closest discrete RGB
- interpolate near values
|
|
38
|
- Nearest
- closest s
- closest u
- draw it
- Blend 16 nearest
- quadrilinear interpolation
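Blending the 16 nearest samples can be sketched as follows; the nested-list light field `lf`, indexed as `lf[si][ti][ui][vi]`, is a stand-in for the real discretized light slab:

```python
import math

def quadrilinear(lf, s, t, u, v):
    """Quadrilinear interpolation: blend the 16 discrete light-field
    samples surrounding the continuous coordinates (s, t, u, v)."""
    out = 0.0
    s0, t0, u0, v0 = (math.floor(x) for x in (s, t, u, v))
    for si in (s0, s0 + 1):
        ws = 1.0 - abs(s - si)             # linear weight along s
        for ti in (t0, t0 + 1):
            wt = 1.0 - abs(t - ti)
            for ui in (u0, u0 + 1):
                wu = 1.0 - abs(u - ui)
                for vi in (v0, v0 + 1):
                    wv = 1.0 - abs(v - vi)
                    out += ws * wt * wu * wv * lf[si][ti][ui][vi]
    return out
```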
|
|
39
|
- Depth Correction
- closest s
- intersection with “object”
- best u
- closest u
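In flatland (one s plane at z = 0, one u plane at z = 1, both illustrative), depth correction can be sketched as: snap s to the nearest stored camera, intersect the desired ray with the object's approximate depth, then re-aim to find the best u:

```python
def depth_corrected_u(s_desired, u_desired, s_grid_step, z_obj):
    """2D sketch of depth correction: choose the closest stored camera on
    the s plane, then pick the u that views the same object point.

    z_obj is the approximate object depth; the plane positions (s at z = 0,
    u at z = 1) are illustrative assumptions.
    """
    # Nearest stored camera on the discretized s plane.
    s_near = round(s_desired / s_grid_step) * s_grid_step
    # Point where the desired ray meets the object plane z = z_obj.
    x_obj = s_desired + (u_desired - s_desired) * z_obj
    # Re-aim: ray from s_near through the object point, evaluated at z = 1.
    return s_near + (x_obj - s_near) / z_obj
```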
|
|
40
|
- Depth Correction
- quadrilinear interpolation
- new “closest”
- like focus
- [Dynamically Reparameterized Light Fields, Isaksen et al., SG'2000]
|
|
41
|
- Fast s,t,u,v finding
- scanline interpolate
- texture mapping
- shear warp
|
|
42
|
- 3D space ↔ ray space
- surface depth ↔ slope in ray space
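The depth-slope correspondence can be checked with a tiny sketch: a scene point at depth z traces a line in (s, u) ray space whose slope is 1 − 1/z (planes s at z = 0 and u at z = 1 are illustrative assumptions):

```python
def epi_u(s, x_point, z_point):
    """u coordinate of the ray from camera position s (on the z = 0 plane)
    through a scene point at (x_point, z_point), evaluated at z = 1.
    As s varies, (s, u) sweeps a line whose slope encodes the depth."""
    return s + (x_point - s) / z_point
```

For a point at depth 2, moving the camera by Δs = 2 moves u by 1, i.e. slope 1 − 1/2 = 0.5.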
|
|
43
|
- Image effects:
- parallax
- occlusion
- transparency
- highlights
|
|
45
|
- Digital Michelangelo Project
- Marc Levoy, Stanford University
- Lightfield (“night”) assembled by Jon Shade
|
|
46
|
- What if the images aren’t sampled on a regular 2D grid?
- can still re-sample rays
- ray weighting becomes more complex
[Buehler et al., SIGGRAPH’2001]
|
|
47
|
- Turn 4D parameterization around:
- image @ every surface pt.
- Leverage coherence:
- compress radiance fn
(BRDF * illumination)
after rotation by n
|
|
48
|
- [Wood et al, SIGGRAPH 2000]
|
|
49
|
- Images (and panoramas) are 2D
- Lumigraph is 4D
- What happened to 3D?
- 3D Lumigraph subset
- Concentric mosaics
|
|
51
|
- One row of s,t plane
- i.e., hold t constant
- thus s,u,v
- a “row of images”
- [Sloan et al., Symp. I3DG 97]
|
|
52
|
- Replace “row” with “circle” of images
- [Shum & He, SIGGRAPH’99]
|
|
54
|
- Rendering (as seen from above)
|
|
57
|
- Image is 2D
- Lumigraph is 4D
- 3D
- 3D Lumigraph subset
- Concentric mosaics
- 2.5D
- Layered Depth Images
- Sprites with Depth (impostors)
- View Dependent Surfaces (see Façade)
|
|
61
|
- Represent scene as collection of cutouts with depth (planes + parallax)
- Render back to front with fwd/inverse warping [Shade et al.,
SIGGRAPH’98]
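Back-to-front compositing of one LDI pixel's depth samples can be sketched with the standard over operator; real LDI rendering also warps each sample into the novel view first, which this 1D sketch omits:

```python
def render_ldi_pixel(layers):
    """Composite one layered-depth-image pixel.

    `layers` is a list of (depth, color, alpha) samples (scalar color for
    simplicity). Compositing proceeds far-to-near with the over operator.
    """
    out = 0.0
    for depth, color, alpha in sorted(layers, reverse=True):  # far to near
        out = alpha * color + (1.0 - alpha) * out
    return out
```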
|
|
62
|
- D. E. Zongker, D. M. Werner,
B. Curless and D. H. Salesin. SIGGRAPH'99
|
|
63
|
- Capture the reflections and refractions of a real-world object
- Composite object over a novel background
|
|
65
|
- Capture the mapping from each image pixel to real-world ray direction(s)
|
|
66
|
- Use several monitors with stripes
|
|
67
|
- Captures foreground color
and background directions
|
|
69
|
- [Chuang et al., SIGGRAPH’2001]
- accurate (multiple refractions):
- fast (video rate):
|
|
72
|
- Create 3D model (and texture maps) from images
- automated (structure from motion, stereo)
- interactive
|
|
73
|
- Select building blocks
- Align them in each image
- Solve for camera pose and block parameters (using constraints)
|
|
74
|
- Determine visible cameras for each surface element
- Blend textures (images) depending on distance between original camera
and novel viewpoint
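One common way to weight the visible cameras is by angular proximity of their viewing directions to the novel view; the inverse-angle falloff below is a generic view-dependent-texturing heuristic, not the exact weighting used in Façade:

```python
import math

def view_blend_weights(novel_dir, camera_dirs, eps=1e-6):
    """Normalized blending weights: cameras whose viewing direction is
    closer (in angle) to the novel view's direction get more weight."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def unit(a):
        n = math.sqrt(dot(a, a))
        return tuple(x / n for x in a)
    nd = unit(novel_dir)
    raw = []
    for d in camera_dirs:
        # Clamp to avoid domain errors from floating-point round-off.
        angle = math.acos(max(-1.0, min(1.0, dot(nd, unit(d)))))
        raw.append(1.0 / (angle + eps))
    total = sum(raw)
    return [w / total for w in raw]
```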
|
|
75
|
- Compute offset from block model
- Some more results:
|
|
76
|
- Estimate shape from images
- Match metrics to shape
- Project video onto shape
- Animate
- [Z. Liu et al., MSR-TR-2000-11]
|
|
77
|
- 8D: Refractive/reflective environment
- 5D: Plenoptic Function (Ray)
- 4D: Lumigraph / Lightfield
- 4D*: Environment Matte (single view)
- 3D: Lumigraph Subset
- 3D: Concentric Mosaics
- 2.5D: Layered Depth Image
- 2.5D: Image Based Models
- 2D: Images and Panoramas
|
|
80
|
- Image-Based Rendering:
- render from (real-world) images for efficiency, quality, and photo-realism
- Video-Based Rendering
- use video instead of still images for dynamic elements and source footage
- generate computer video instead of computer graphics
|
|
81
|
- Facial animation
- Layer/matte extraction
- Dynamic (stochastic) elements
- 3-D world navigation
|
|
82
|
- Modeling from still images
- Lip-synching from video
- Video Rewrite
[Bregler et al., SG’97]
- [Ezzat et al., SG’02]
|
|
85
|
- Pull dynamic α-matte from video with complex backgrounds
- [Chuang et al. @ UW, SIGGRAPH’2002]
|
|
87
|
- Transfer a shadow from one background to another:
- Extract and model photometry (darkening)
- Extract and model geometry (deformation)
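The photometric half of the transfer can be sketched with a simple per-pixel model: observed = β·lit + (1 − β)·shadowed, where β is the extracted shadow density. This is a simplified reading of the shadow-matting model; the geometric deformation step is omitted:

```python
def composite_shadow(beta, lit, shadow):
    """Composite a transferred shadow onto a new background, per pixel.

    beta:   extracted shadow density in [0, 1] (1 = fully lit)
    lit:    new background photographed fully lit
    shadow: new background photographed fully in shadow
    """
    return [b * l + (1.0 - b) * s for b, l, s in zip(beta, lit, shadow)]
```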
|
|
92
|
- How can we turn a short video clip into an infinite amount of continuous video?
- dynamic elements in 3D games and presentations
- alternative to 3D graphics animation?
- [Schödl, Szeliski, Salesin, Essa, SG’2000]
|
|
93
|
- Find cyclic structure in the video
- (Optional) region-based analysis
- Play frames with random shuffle
- Smooth over discontinuities (morph)
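The "random shuffle" step can be sketched as a Markov chain over frames: after showing frame i, jump to frame j with probability proportional to exp(−D[i+1][j]/σ), following the video-textures idea. The frames below are flat feature vectors standing in for real images:

```python
import math

def transition_probs(frames, sigma=1.0):
    """Video-texture transition table: probs[i][j] is the probability of
    showing frame j after frame i, favoring j that resembles frame i+1."""
    n = len(frames)
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    probs = []
    for i in range(n - 1):  # a transition leaves every frame but the last
        row = [math.exp(-dist(frames[i + 1], frames[j]) / sigma)
               for j in range(n)]
        z = sum(row)
        probs.append([p / z for p in row])
    return probs
```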
|
|
97
|
- Live waterfall in static panorama
|
|
101
|
- Move camera along a rail (“dolly track”) and play back a 360° video
- Applications:
- Homes and architecture
- Outdoor locations
(tourist destinations)
|
|
103
|
- Built by Point Grey Research (Ladybug)
- Six camera head
- Portable hard drives, fiber-optic link
- Resolution per image: 1024 x 768
- FOV: ~100° × ~80°
- Acquisition speed: 15 fps uncompressed
|
|
106
|
- How to best sample and interpolate Light Field
- (sub-?) pixel accurate stereo
- reflections, refractions, …
- Compositing
- how to insert Light Field into new environment
- relighting
- …?
|
|
107
|
- Image-Based Rendering
- Light Fields and Lumigraphs
- Panoramas and Concentric Mosaics
- Matting: natural, environment, and shadows
- Image-Based models
- Video-Based Rendering
- Facial animation
- Video Textures and Animating Stills
- Video-based tours
|
|
108
|
- Image-Based Rendering
- Light Fields and Lumigraphs
- Panoramas and Concentric Mosaics
- Environment Matting
- Image-Based models
- Video-Based Rendering
- Facial animation
- Video matting
- Video Textures and Animating Stills
- Video-based tours
|