
Paint Programs and One-Shot Images

Research by Meier [27], Haeberli [17], Curtis [10], Salisbury et al. [30,31,32], and Winkenbach et al. [38] focused on creating sophisticated paint programs that generate single images and emulate techniques artists have used for centuries. However, conveying shape and structure is not the goal of these images.

     
Figure 2.1: Non-photorealistic one-shot images with a high level of abstraction. Copyright 1996 Barbara Meier [27], copyright 1990 Paul Haeberli [17], and copyright 1997 Cassidy Curtis [10]. Used by permission.

Meier [27] presents a technique for rendering animations in which every frame looks as though it were painted by an artist. She models the surfaces of 3D objects as 3D particle sets. The surface area of each triangle is computed, and particles are randomly distributed so that the number of particles per triangle is proportional to the ratio of that triangle's surface area to the surface area of the whole object. To maintain coherence from one frame to the next, the random seed is stored for each particle. Particles are transformed into screen space and sorted by distance from the viewer. The particles are then painted with 2D paint strokes, starting farthest from the viewer and moving forward, until everything is painted. The user makes artistic decisions such as light, color, and brush stroke, as in most paint programs. The geometry and lighting properties of the surface control the appearance of the brush strokes.
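The placement scheme can be sketched as follows (a minimal illustration with my own function names and data layout, not Meier's code). Each particle keeps the seed that regenerates its barycentric position, which is what makes the placement repeatable from frame to frame:

    import random

    def triangle_area(p0, p1, p2):
        # Area of a 3D triangle: half the magnitude of the edge cross product.
        u = [p1[i] - p0[i] for i in range(3)]
        v = [p2[i] - p0[i] for i in range(3)]
        cx = u[1] * v[2] - u[2] * v[1]
        cy = u[2] * v[0] - u[0] * v[2]
        cz = u[0] * v[1] - u[1] * v[0]
        return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

    def place_particles(triangles, total_particles):
        # Allot particles to each triangle in proportion to its share of
        # the object's total surface area.
        areas = [triangle_area(*t) for t in triangles]
        total_area = sum(areas)
        particles = []
        for tri, area in zip(triangles, areas):
            count = round(total_particles * area / total_area)
            for _ in range(count):
                seed = random.randrange(2 ** 31)
                rng = random.Random(seed)
                # Uniform barycentric sample of a point on the triangle.
                s = rng.random() ** 0.5
                u, v = 1.0 - s, s * rng.random()
                w = 1.0 - u - v
                pos = tuple(u * tri[0][i] + v * tri[1][i] + w * tri[2][i]
                            for i in range(3))
                particles.append((seed, pos))
        return particles

At render time, these particles would then be transformed to screen space, sorted farthest-first, and painted with 2D strokes in that order.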

Haeberli [17] created a paint program that allows the user to manipulate an image using tools that can change the color and size of a brush, as well as the direction and shape of the stroke. The goal of his program is to let the user communicate surface color, surface curvature, center of focus, and the location of edges; eliminate distracting detail; provide cues about surface orientation; and influence the viewer's perception of the subject. Haeberli studied the techniques of several artists and observed that traditional artists exaggerate important edges: where dark areas meet light areas, the dark region is drawn darker and the light region lighter. This makes the depth relationships between overlapping objects in a scene more explicit. Haeberli also notes that artists use color to provide depth cues because, perceptually, cool-colored shapes (green, cyan, blue) recede, whereas warm-colored shapes (red, orange, yellow, magenta) advance. He commented in his paper that he used these color depth cues and other techniques to enhance digital images before painting begins, but he never provided details on how they could be applied algorithmically.
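Although Haeberli gives no algorithm, one plausible reading of the ``dark drawn darker, light drawn lighter'' rule is an unsharp-mask-style boost against the local mean, sketched below; the neighborhood radius and gain parameters are my assumptions, and this illustrates the observed effect rather than Haeberli's method:

    def exaggerate_edges(gray, radius=2, gain=0.5):
        # Exaggerate luminance edges: where dark meets light, push the dark
        # side darker and the light side lighter.  `gray` is a 2D list of
        # floats in [0, 1].  Not Haeberli's algorithm; an illustration only.
        h, w = len(gray), len(gray[0])
        out = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                # Local mean over a (2*radius+1)^2 neighborhood.
                total, n = 0.0, 0
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < h and 0 <= xx < w:
                            total += gray[yy][xx]
                            n += 1
                mean = total / n
                # Darker-than-surround pixels get darker; lighter get lighter.
                v = gray[y][x] + gain * (gray[y][x] - mean)
                out[y][x] = min(1.0, max(0.0, v))
        return out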

Curtis et al. [10] created a high-end paint program that generates computer watercolor through interactive painting, automatic image ``watercolorization,'' or 3D non-photorealistic rendering. Given a 3D geometric scene, they generate mattes isolating each object and then use a photorealistic rendering of the scene as the target image. The authors studied the techniques and physics of watercolor painting to develop algorithms that depend on the behavior of the paint, water, and paper. They describe watercolor materials and the effects of dry-brush, edge darkening, backruns, granulation and separation of pigments, flow patterns, glazing, and washes.
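The matte step is simple to illustrate. Assuming the renderer can emit an object-ID image (an assumption on my part; Curtis et al. do not specify this interface), per-object binary mattes fall out directly:

    def object_mattes(id_image):
        # `id_image` is a 2D list of integer object IDs (0 = background).
        # Each matte marks the pixels covered by one object, so the painter
        # can be run per object against the photorealistic target image.
        ids = {oid for row in id_image for oid in row if oid != 0}
        return {oid: [[1 if pid == oid else 0 for pid in row]
                      for row in id_image]
                for oid in ids}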

Salisbury et al. [32,30,31] designed an interactive system that allows users to paint with ``stroke textures'' to create pen-and-ink style illustrations, as shown in Figure 2.2(a). By placing stroke textures, the user can interactively create images similar to the pen-and-ink drawings of an illustrator. Their system supports scanned or rendered images, which the user can reference as guides for outline and tone (intensity) [32]. In ``Scale-Dependent Reproduction of Pen-and-Ink Illustrations'' [30], they gave a new reconstruction algorithm that magnifies a low-resolution image while keeping the result sharp along discontinuities; this scalability makes it much easier to incorporate pen-and-ink style images in printed media. Their ``Orientable Textures for Image-Based Pen-and-Ink Illustration'' [31] added high-level tools that let the user specify texture orientation, as well as new stroke textures. The end result is a compelling 2D pen-and-ink illustration.
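The tone-matching idea behind such stroke textures can be sketched minimally (the data layout is mine): strokes are drawn in priority order until their accumulated ink coverage reaches the tone sampled from the reference image.

    def strokes_for_tone(texture, target_tone):
        # `texture` is a list of (stroke, coverage) pairs ordered from
        # highest to lowest priority; `coverage` is the fractional ink
        # each stroke adds.  Draw until the target tone is reached.
        drawn, tone = [], 0.0
        for stroke, coverage in texture:
            if tone >= target_tone:
                break
            drawn.append(stroke)
            tone += coverage
        return drawn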

Winkenbach et al. [38] itemized rules and traditions of hand-drawn black-and-white illustration and incorporated a large number of those principles into an automated rendering system. To render a scene, visible surfaces and shadow polygons are computed. The polygons are projected to normalized device coordinate space and used to build a 2D BSP (binary space partitioning) tree and a planar map. Visible surfaces are rendered, and textures and strokes are applied to surfaces using set operations on the 2D BSP tree. Afterwards, outline strokes are added. Their system allows the user to specify where the detail lies, and it takes into account the viewing direction of the user in addition to the light source. The system is limited by its library of ``stroke textures,'' and the process takes about 30 minutes to compute and print the resulting image, as shown in Figure 2.2(b).
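One of the itemized rules gives the flavor of the system: a boundary outline is drawn only where the tones of the two adjoining surfaces are too similar for the edge to read on its own. The sketch below paraphrases that rule; the threshold value is my assumption.

    def needs_boundary_outline(tone_left, tone_right, threshold=0.1):
        # Tones are in [0, 1]; similar tones on both sides mean the edge
        # would otherwise vanish, so an outline stroke is warranted.
        return abs(tone_left - tone_right) < threshold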


   
Figure 2.2: Computer-generated pen-and-ink illustrations. (a) Pen-and-ink illustration, copyright 1996 Michael Salisbury [30]. Used by permission. (b) Pen-and-ink illustration, copyright 1994 Georges Winkenbach [38]. Used by permission.

