CS395/495(26)  Image Based Modeling and Rendering

Northwestern University Spring 2002  Jack Tumblin

Grading:   5 Self-Contained Programming Projects,
                  1 Written Midterm Exam.
                  (I changed my mind: no final exam.)

Filename Prefix      Topic                             Due Date         Weight
p1<your last name>   Image Grid Viewer                 Tues April 16      5%
p2<your last name>   2D Projective Image Warper        Tues May 2        15%
(handouts, paper)    Homework                          alternate weeks   20%
p3<your last name>   3D Projective Image Warper        Tues May 14       20%
p4<your last name>   Monocular Camera Calibrator       Tues May 28       20%
p5<your last name>   2-Image Epipolar Geometry Finder  Exam Day          20%

--Please don't skip class to work on your project:
    instead, come to class and ask/learn how to do the hard parts. It's far more efficient.


Project 3: 3D Projective Warper 

DRAFT:

    Now we will add some depth data to our mesh image.
1) Use this image of a shaded cube (or tetrahedron, or make your own) as the texture map for a mesh image.  
2) Set some depth values.  Choose a few mesh points at the corners of the cube in the image and manually give them plausible non-zero depth values. Use linear interpolation between these depths to find depths for all the in-between mesh points. Set large depth values for the background; for example, you might set them all to twice the initial distance from the camera to the cube center.
3) Set the camera to view the mesh image as we did in Project 1: 'head-on', so the mesh image = screen image.
4) Without moving the camera, move the mesh image vertices in 2D (no change in depth), so as to give the appearance of viewing the image from a different camera position (position B).
5) As with Project 1, repeat step 4, but now move the camera around to a different position (position B will be interesting!) so you can see what your warping does from a different angle.
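The steps above can be sketched as follows. This is a minimal sketch, not the course's required API: it assumes a simple pinhole camera (made-up focal length f and image center cx, cy), interpolates a depth for an in-between vertex as in step 2, then unprojects that vertex to 3D and reprojects it from a second camera position to get the warped 2D location of step 4. A full solution would also apply a camera rotation; it is omitted here for brevity.

```python
# Hypothetical pinhole parameters (f, cx, cy) chosen for illustration only.

def lerp(a, b, t):
    """Linear interpolation, used to fill depths between marked corners."""
    return a + (b - a) * t

def unproject(u, v, depth, f=500.0, cx=256.0, cy=256.0):
    """Pixel (u, v) with a known depth -> 3D point in camera A's frame."""
    x = (u - cx) * depth / f
    y = (v - cy) * depth / f
    return (x, y, depth)

def project(p, cam_pos, f=500.0, cx=256.0, cy=256.0):
    """3D point -> pixel in a camera translated to cam_pos (no rotation,
    for brevity)."""
    x, y, z = (p[i] - cam_pos[i] for i in range(3))
    return (f * x / z + cx, f * y / z + cy)

# Step 2: depths at two marked cube corners; the vertex halfway between
# them gets the interpolated depth.
d = lerp(10.0, 14.0, 0.5)

# Steps 3-4: view head-on from the origin, then re-render without moving
# the mesh; each vertex shifts by a depth-dependent parallax.
P = unproject(300.0, 256.0, d)
head_on = project(P, (0.0, 0.0, 0.0))   # recovers the original pixel
from_B  = project(P, (1.0, 0.0, 0.0))   # camera moved right: vertex shifts left
```

Note that nearer vertices (smaller depth) shift farther across the screen than distant ones, which is exactly the parallax that makes the warped image look 3D.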


Project 4: Single Image Camera Finder

DRAFT:

Read in a mesh image of a cube again (or of anything with at least three large non-coplanar polygons with visible vertices).  Manually mark the mesh-image corners of each visible cube face.  From these marked points, find the camera matrix that was used to make the image!  Extra credit: estimate error bounds (error ellipsoids, etc.).
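One standard way to set this up (the Direct Linear Transform, not something the handout prescribes) is to note that each 3D point (X, Y, Z) marked at pixel (u, v) gives two linear equations in the 12 entries of the 3x4 camera matrix P; stacking them gives A p = 0, which is solved in practice by SVD (the singular vector of the smallest singular value). A sketch of building those constraint rows, with a sanity check that the true camera satisfies them exactly:

```python
def dlt_rows(X, Y, Z, u, v):
    """Two constraint rows (length 12) for one 3D<->2D correspondence."""
    return [
        [X, Y, Z, 1,  0, 0, 0, 0,  -u*X, -u*Y, -u*Z, -u],
        [0, 0, 0, 0,  X, Y, Z, 1,  -v*X, -v*Y, -v*Z, -v],
    ]

def project(P, X, Y, Z):
    """Apply a 3x4 camera matrix to a 3D point; return pixel (u, v)."""
    h = [P[r][0]*X + P[r][1]*Y + P[r][2]*Z + P[r][3] for r in range(3)]
    return h[0] / h[2], h[1] / h[2]

# Sanity check: the true camera's 12 entries, flattened into a vector,
# must satisfy every constraint row exactly (A p = 0).
P_true = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]   # trivial camera
p_vec = [e for row in P_true for e in row]
u, v = project(P_true, 2.0, 3.0, 4.0)
A = dlt_rows(2.0, 3.0, 4.0, u, v)
residuals = [sum(a*b for a, b in zip(row, p_vec)) for row in A]
```

Since P has 11 degrees of freedom (it is defined only up to scale), you need at least six non-coplanar marked points; the cube's visible corners supply them.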


Project 5: Epipolar Geometry: 
                    Find Depth, Calibrate Cameras

DRAFT:

 Now read in TWO mesh images A and B that are photos of the same 3D scene taken from mildly different camera positions (most of the things visible in image A are also visible in image B).  Mark a few corresponding points in the two images. Find the two camera positions and the fundamental matrix, and display them as objects in 3D.  Extra credit: move the image meshes to their camera positions in 3D and set depth values for the vertex correspondences. Show the epipolar planes. Search along epipolar lines in the mesh images to find more correspondences.
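The key relation here is standard two-view geometry (not a course-specified interface): with cameras P = [I|0] and P' = [R|t], the fundamental matrix is F = [t]x R, every true correspondence x <-> x' satisfies x'^T F x = 0, and l' = F x is the epipolar line in image B along which the match for x must lie. A minimal sketch, assuming a pure horizontal translation between the two cameras (R = I), verifies both facts:

```python
def skew(t):
    """Cross-product matrix [t]x, so that matvec(skew(t), v) == t x v."""
    return [[0.0, -t[2], t[1]],
            [t[2], 0.0, -t[0]],
            [-t[1], t[0], 0.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Camera B is camera A translated by t, with no rotation (R = I),
# so F = [t]x.  This models a simple side-by-side stereo pair.
t = (1.0, 0.0, 0.0)
F = skew(t)

X = (1.0, 2.0, 5.0)                       # a 3D scene point
xA = (X[0]/X[2], X[1]/X[2], 1.0)          # its projection in image A
XB = [X[i] + t[i] for i in range(3)]      # same point in camera B's frame
xB = (XB[0]/XB[2], XB[1]/XB[2], 1.0)      # its projection in image B

line_B = matvec(F, xA)      # epipolar line of xA in image B: l' = F xA
constraint = dot(xB, line_B)  # x'^T F x; zero for a true correspondence
```

For the extra credit, this is the search you would run: instead of checking a known xB, walk along line_B in mesh image B and compare texture against the neighborhood of xA to find new correspondences.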