Project 3: Subdivision Surfaces
Date Assigned: 7 May 1999
Date Due: 20 May 1999
Artifact Due: 25 May 1999
In this project, you will build a program that will let
you edit and view several types of subdivision surfaces.
Implement Doo-Sabin or Catmull-Clark subdivision surfaces (or both).
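For reference, the Catmull-Clark rule for repositioning an existing interior vertex can be sketched as below. This is a minimal Python sketch; the function names and tuple representation are our own, and all mesh-connectivity bookkeeping (finding the surrounding face points and edge midpoints) is omitted:

```python
def average(points):
    """Component-wise average of a list of 3D points (as tuples)."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def cc_vertex_point(v, adjacent_face_points, adjacent_edge_midpoints):
    """Catmull-Clark vertex rule: v' = (Q + 2R + (n - 3) v) / n, where n is
       the valence, Q is the average of the new face points around v, and
       R is the average of the midpoints of the edges incident to v."""
    n = len(adjacent_face_points)          # valence of the vertex
    Q = average(adjacent_face_points)      # average of surrounding face points
    R = average(adjacent_edge_midpoints)   # average of incident edge midpoints
    return tuple((Q[i] + 2 * R[i] + (n - 3) * v[i]) / n for i in range(3))
```

A quick sanity check on the rule: if the vertex and all of its surrounding face points and edge midpoints coincide (a locally flat degenerate case), the mask is affine and leaves the vertex in place.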
Add
support for surfaces with boundaries. Intuitively, a point lies on
the boundary if it has a surface neighborhood on one side, but not the
other. In a triangle mesh, a boundary edge has a triangle on one side
but not the other.
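The boundary test described above is easy to implement by counting face incidences per edge; a sketch, assuming faces are given as vertex-index triples:

```python
from collections import Counter

def boundary_edges(faces):
    """Return the set of boundary edges of a triangle mesh.
       faces: list of (i, j, k) vertex-index triples.
       An edge is on the boundary iff exactly one face uses it."""
    counts = Counter()
    for f in faces:
        for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            counts[tuple(sorted((a, b)))] += 1   # undirected edge key
    return {e for e, c in counts.items() if c == 1}
```

For two triangles sharing an edge, only the shared edge is interior; the four outer edges are reported as boundary.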
Add support for
texture mapping. This requires defining (u,v) coordinates for the
control vertices and applying the subdivision rules to the (u,v)
coordinates as well as the vertex positions. See [DeRose
98] for details. Of course, you need to read in an image and show
that you can map it over the surface.
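Because every subdivision rule is an affine combination of control vertices (weights summing to one), the same mask applies to (u,v) just as it does to position. One simple scheme stores each control vertex as a 5-vector and averages all five channels together; the representation below is our own choice:

```python
def affine_combination(points, weights):
    """Apply one subdivision mask to vertices carrying both position and
       texture coordinates, stored as 5-tuples (x, y, z, u, v).  Because
       subdivision masks are affine (weights sum to 1), the same weights
       average the (u, v) channels along with the positions."""
    assert abs(sum(weights) - 1.0) < 1e-9, "mask weights must sum to 1"
    return tuple(sum(w * p[i] for w, p in zip(weights, points))
                 for i in range(5))
```

For example, an edge-midpoint mask (weights 1/2, 1/2) interpolates the texture coordinates along with the geometry.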
Add
support for semi-sharp features as in [DeRose
98] or [Sederberg
et al 98]. This amounts to varying the subdivision masks across
subdivision levels and spatially for a few iterations, and then pushing
the vertices to the limit. Extra credit for a nice user interface.
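One way to vary the masks across levels, loosely in the spirit of [DeRose 98]: tag each crease edge with a sharpness value, use the sharp (crease) rule while the sharpness is at least one, blend linearly at the fractional level, and decrement the sharpness at each subdivision step. The blend below is an illustrative sketch, not the exact published rule:

```python
def semi_sharp_edge_point(smooth_pt, sharp_pt, sharpness):
    """Blend between the smooth subdivision edge point and the sharp
       (crease) edge point according to the edge's remaining sharpness.
       sharpness >= 1: fully sharp; 0 <= sharpness < 1: linear blend."""
    t = min(max(sharpness, 0.0), 1.0)      # clamp blend factor to [0, 1]
    return tuple((1 - t) * s + t * c for s, c in zip(smooth_pt, sharp_pt))

def step_sharpness(sharpness):
    """Sharpness decays by one per subdivision level, bottoming out at 0,
       after which the ordinary smooth rules take over."""
    return max(sharpness - 1.0, 0.0)
```

An edge with sharpness 2.5 is thus treated as sharp for two levels, blended at the third, and smooth thereafter.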
Improve the speed of editing operations by only
resubdividing the area of the surface that is affected when you move a
control vertex (the region of influence of the control point
being moved).
Improve
the included "lo res" editing switch so that the surface displayed
while editing is subdivided as far as it can be while still remaining
responsive. Therefore, if you move a control point slowly you will
see a higher resolution model than if you move a control point
quickly. And if you stop moving the control point for long enough the
display will update with a higher resolution version. (Remember that
if you use an evaluation mask to send vertices to their limit
positions you can't continue to subdivide!)
Combining these last two bells would be worth three bells.
Provide an interface for snapping vertices to a grid. It's common to
want to create simple "canonical" models that have symmetry, e.g.,
cylinder-like and donut-like shapes. Snapping the control points to a
grid helps ensure the desired regularity.
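The snap itself is a one-liner per coordinate; a sketch, assuming a uniform grid with a user-chosen spacing:

```python
def snap_to_grid(p, spacing):
    """Snap a 3D point to the nearest lattice point of a uniform grid
       with the given spacing."""
    return tuple(round(c / spacing) * spacing for c in p)
```

For example, with a spacing of 0.25, a roughly-placed control point lands exactly on the grid, so mirrored control points end up exactly symmetric.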
Allow the user to add more control points to the base mesh in
places where they want more control. A simple implementation
of this would let them choose an edge and split it. Better
would be to let them click anywhere and add a vertex there.
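The edge-split option can be sketched as follows for triangle meshes, splitting at the midpoint. The data layout is our own; a production version would also carry along any per-edge attributes such as sharpness tags:

```python
def split_edge(vertices, faces, a, b):
    """Split edge (a, b) of a triangle mesh at its midpoint.
       Appends the midpoint vertex, replaces every face containing the
       edge with two smaller faces (preserving winding order), and
       returns the index of the new vertex."""
    m = len(vertices)
    vertices.append(tuple((vertices[a][i] + vertices[b][i]) / 2
                          for i in range(3)))
    new_faces = []
    for f in faces:
        if a in f and b in f:
            i = f.index(a)
            # orient the split so winding matches the original face
            p, q = (a, b) if f[(i + 1) % 3] == b else (b, a)
            c = next(v for v in f if v not in (a, b))  # opposite vertex
            new_faces.append((p, m, c))
            new_faces.append((m, q, c))
        else:
            new_faces.append(f)
    faces[:] = new_faces
    return m
```

Splitting the shared edge of a two-triangle mesh, for instance, yields four triangles fanning around the new midpoint.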
Let the user change the genus of the base mesh. Then he or she
can, for example, turn a cube into a donut. Just figuring out a good
interface is a challenge. Perhaps selecting two faces and "joining"
them?
Let the user select several control vertices and move them as a group.
Add a
simple form of multiresolution edit. Here's the idea: when moving a
control point, a neighborhood (adjustable in size) of control points
around it also move. The amount that the neighboring points move
depends on their distance from the selected control point. The
distance metric could be based on number of edge steps from the
control point to the neighbor, or it could be the Euclidean distance
in 3-space, or even based on an approximation to the shortest distance
measured over the surface (geodesic distance).
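With the edge-step metric, the neighborhood and its weights fall out of a breadth-first search; a sketch with a linear falloff (the linear shape is our choice here; a Gaussian or smoothstep profile also works):

```python
from collections import deque

def edge_distances(adjacency, source, max_steps):
    """Breadth-first search: number of edge steps from `source` to every
       control vertex within `max_steps`.  adjacency maps each vertex to
       the list of its neighboring vertices."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        if dist[v] == max_steps:
            continue                        # don't expand past the radius
        for w in adjacency[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def falloff_weights(adjacency, source, radius):
    """Linear falloff: weight 1 at the selected vertex, approaching 0 at
       `radius` edge steps.  Move each neighbor by weight * displacement."""
    return {v: 1.0 - d / radius
            for v, d in edge_distances(adjacency, source, radius - 1).items()}
```

Each affected control point is then moved by its weight times the displacement applied to the selected point.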
Implement direct manipulation of the surface, i.e. let the user grab
any point on the surface and move it to where they want it. What should
the rest of the surface do?
Automatically
adjust the amount of subdivision for the model using some measure of error,
and a tolerance (or target mesh size).
For your error measure you could fit a plane to a vertex neighborhood
and compute the distance from the vertex to that plane. If any vertex
exceeds the tolerance, subdivide up to the user-specified limit.
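One concrete version of this error measure, using the centroid of the vertex's one-ring as the plane point and Newell's method for the plane normal (a least-squares plane fit would be an equally valid choice):

```python
def fit_plane_error(vertex, ring):
    """Flatness error at a vertex: distance from the vertex to a plane
       through the centroid of its one-ring, whose normal comes from
       Newell's method applied to the ring (in cyclic order)."""
    n = len(ring)
    centroid = tuple(sum(p[i] for p in ring) / n for i in range(3))
    nx = ny = nz = 0.0
    for i in range(n):                      # Newell's method for the normal
        (x0, y0, z0), (x1, y1, z1) = ring[i], ring[(i + 1) % n]
        nx += (y0 - y1) * (z0 + z1)
        ny += (z0 - z1) * (x0 + x1)
        nz += (x0 - x1) * (y0 + y1)
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    d = tuple(vertex[i] - centroid[i] for i in range(3))
    return abs(d[0] * nx + d[1] * ny + d[2] * nz) / length
```

A vertex lifted 0.5 above a planar ring, for example, reports an error of exactly 0.5; a vertex lying in the ring's plane reports 0.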
Implement a
view-dependent error metric. For example, to keep the silhouettes
accurate, you could compute the distance between a vertex and the
nearest point on the planar fit as before, but do the distance
calculation in projected screen space. Then, for a specified
tolerance, you can use fewer polygons if the object is moved further
from the camera. More credit is available if you take apparent errors
in shading into account (see, e.g., Hoppe's SIGGRAPH '97 paper on
view-dependent progressive meshes).
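The screen-space distance calculation can be sketched with a simple pinhole projection; the focal length and camera setup below are illustrative assumptions:

```python
def project(p, focal=500.0):
    """Pinhole projection to screen units (camera at the origin,
       looking down +z); focal controls the scale of screen space."""
    return (focal * p[0] / p[2], focal * p[1] / p[2])

def screen_space_error(p, q, focal=500.0):
    """Distance in projected screen units between two 3D points, e.g. a
       vertex and the nearest point on its fitted plane."""
    (ax, ay), (bx, by) = project(p, focal), project(q, focal)
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
```

Note how the same world-space deviation projects to a smaller screen-space error as the object recedes from the camera, which is exactly why a fixed screen-space tolerance lets distant objects use fewer polygons.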
Implement adaptive subdivision. Either style of LOD can
be enhanced by subdividing different areas of the model to different
levels. With an object-based error metric, the higher curvature, more
detailed areas of the object will require more subdivision. With
a view-dependent error metric, silhouettes will be more highly
subdivided.