The earliest computer displays, which grew out of oscilloscope technology, plotted individual points and drew all higher-level objects as sequences of dots.
Later generations of displays were controlled by a display list: a list of instructions to the display, typically encoding the (x, y) coordinates of each point and its intensity. Subroutines in the display list allowed more complicated pictures — even beyond the capability of the display to draw quickly — to be displayed and dynamically changed.
An obvious improvement was to interpret descriptions of lines and other simple curves directly by the hardware, so the display list was more compact and less expensive to compute. From this background has grown the vector graphics or display segment model of computer graphics, described in Newman and Sproull and currently represented by the GKS standard.
The frame buffer model, also called raster graphics, is rooted in the television industry, and instead represents an image as a two-dimensional array of intensities mapped onto a television tube to create the image. Frame buffers have one great advantage: the complexity of the displayed image does not affect the amount of memory consumed to display it.
They have disadvantages, too, of course. The most obvious is economic: to store a reasonably complex picture might take 8 bits per pixel on a 512x512 display, which requires a quarter megabyte of relatively expensive high-speed memory. Also, the processing required for dynamic graphics on a frame buffer demands a dedicated CPU for reasonable performance, again because of the large amount of memory that must be updated. In time, though, the cost of TV tubes and memories has dropped, so a frame buffer is now less expensive to make than a vector display, and personal computers with frame buffers are becoming common.
During the early 1970's, researchers at Xerox PARC built a small personal computer called the Alto, and gave it a frame buffer with only a single bit per pixel — a bitmap display. Although binary pixels are clearly incapable of high-quality graphics, at least at typical frame buffer resolutions, the Alto's frame buffer was intended to simulate paper, for which only two values are required (print and background, or black and white). Unfortunately, the simple segmented graphics model that works well on vector displays is clumsy on bitmap displays. Perhaps surprisingly, the graphics model that has arisen to take the place of segments on bitmap displays capitalizes upon the lowest level of representation of single-bit-per-pixel images, rather than hiding it.
Traditionally, bitmaps have been viewed as imperfect approximations to the ideal images one would like to display. Such ideal images are assumed to be described by continuous variation in color, intensity and so forth. The imperfection is brought about by the discrete nature of the imaging devices we possess, as well as the digital nature of the computers themselves, which forces us to quantize these continuous variables into a discrete (but possibly very large) set of values. In particular, bitmap coordinates are integral, as are the pixel values within them. From this 'imperfect' viewpoint, bitmaps are essentially an implementation artifact, and the graphics programmer should not be exposed to them. Instead, the programmer should be given access to the ideal shapes of plane geometry, plus continuous functions for describing color and intensity modulation. The operators available in such a package always correspond to operations on these ideal objects. It is the responsibility of the package's implementor to discretize all these continuous functions internally and transform the ideal continuous operators into their discrete counterparts.
For a variety of reasons, though, no such package can hide fully the discrete representation involved beneath the surface. Perhaps the most fundamental reason is that certain laws that hold in the continuous domain cannot be made to hold in the discrete domain, no matter how careful one is about quantization. (We will have more to say about this shortly.) Also, good quantized approximations to continuous tone images naturally involve very large bitmaps. Such large bitmaps are expensive to store and manipulate. Thus efficiency considerations often force the graphics programmer to be aware of the implementation underneath.
As anyone who has taken a course in numerical methods knows, floating-point numbers do not satisfy many of the identities that hold true for real numbers (in the mathematical sense of 'real'). The science of numerical computing is largely devoted to compensating for these fundamental imperfections in floating-point arithmetic. Similarly, when images are discretized, certain errors are unavoidable. For example, consider the picket fence example shown in Figure 2. Suppose each picket is 1 pixel wide, but the pickets are spaced 2.5 pixels apart. One reasonable quantization method would rasterize the picket spacings to alternating thicknesses of 2 and 3 pixels, so as to make the overall length of the fence as close to its real value as possible, thereby maintaining the criterion of global faithfulness. Another method would simply choose either 2 or 3 pixels and make all spacings that thickness, maintaining local faithfulness. It is impossible to satisfy both criteria simultaneously.
[…]
Figure 2. A picket fence image (a) and two discretized versions of it: (b) globally faithful, and (c) locally faithful.
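To make the two criteria concrete, here is a minimal sketch (purely illustrative, not part of the original discussion) that computes the picket positions under each rounding policy, assuming 1-pixel pickets on an ideal pitch of 3.5 pixels (1-pixel picket plus 2.5-pixel spacing).

```c
/* Illustrative sketch: the two quantization criteria for the picket fence
 * of Figure 2.  Each picket is 1 pixel wide; the ideal pitch from one
 * picket's left edge to the next is 3.5 pixels. */
#include <stdio.h>
#include <math.h>

#define NPICKETS 8

int main(void)
{
    double pitch = 3.5;   /* ideal distance between picket left edges */

    /* Globally faithful: round the accumulated ideal position of each
     * picket, so the gaps alternate between 3 and 2 pixels and the total
     * length of the fence stays close to its ideal value. */
    printf("globally faithful picket starts:");
    for (int i = 0; i < NPICKETS; i++)
        printf(" %d", (int)floor(i * pitch + 0.5));
    printf("\n");

    /* Locally faithful: pick one integer spacing (here 2 pixels, giving a
     * pitch of 3) and use it everywhere, so every gap looks identical but
     * the fence as a whole drifts away from its ideal length. */
    printf("locally faithful picket starts: ");
    for (int i = 0; i < NPICKETS; i++)
        printf(" %d", i * 3);
    printf("\n");
    return 0;
}
```

The globally faithful starts come out as 0, 4, 7, 11, 14, 18, 21, 25, so the gaps alternate between 3 and 2 pixels while the overall length is preserved; the locally faithful starts 0, 3, 6, ... keep every gap the same but let the total length of the fence drift from its ideal value.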
Another reason bitmaps seep through to higher levels of system design is efficiency. Dynamic raster graphics require very high-speed manipulations of the raster memory, where the image being displayed is stored. In every case, the contents of the raster memory can be computed by scan conversion algorithms from higher-level shape, illumination and color descriptions. However, such algorithms are rarely fast enough to cope with incremental updating of the display, as is often required in interactive applications.
Although, as we remarked earlier, ideal image manipulations do not always have exact counterparts in discrete form, they sometimes do. For instance, the operation of copying or translating a subimage on the screen has an exact counterpart in the discretized form. There are immense speed advantages to be gained by doing these manipulations in the raster representations, rather than in the ideal ones and then repeating the scan conversion.
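As a concrete illustration of such a raster-level manipulation, the sketch below scrolls a bitmap up by a number of scan lines with plain memory moves, with no re-scan-conversion of the underlying ideal image. The bitmap layout (one row of 16-bit words per scan line) is an assumption made for illustration, not any particular system's representation.

```c
/* Hypothetical sketch: scroll a bitmap up by dy scan lines by moving rows
 * directly in the raster memory, the exact discrete counterpart of a
 * vertical translation of the subimage. */
#include <string.h>
#include <stdint.h>

void scroll_up(uint16_t *raster, int words_per_row, int height, int dy)
{
    if (dy <= 0 || dy >= height)
        return;
    /* Translate rows dy..height-1 up to rows 0..height-1-dy. */
    memmove(raster,
            raster + (size_t)dy * words_per_row,
            (size_t)(height - dy) * words_per_row * sizeof raster[0]);
    /* Clear the rows vacated at the bottom. */
    memset(raster + (size_t)(height - dy) * words_per_row, 0,
           (size_t)dy * words_per_row * sizeof raster[0]);
}
```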
Any attempt to ban bitmaps from anything other than a temporary low-level representation for computer images is bound to encounter difficulties.
To facilitate raster manipulations, several novel computer systems have used specialized instructions dealing with the raster memory. Typically these systems have been personal computers, such as the Xerox Alto or the M.I.T. Lisp Machine, where a premium is placed on interactive graphics facilities. Their raster instructions are powerful primitives, often implemented in a combination of hardware and micro-code. Such an instruction is known as bitblt (bit boundary block transfer), or RasterOp in Newman and Sproull's terminology.
The most common form of this instruction is a bitwise Boolean operation between two conformable (i.e., having the same dimensions), possibly overlapping, rectangles of pixels. If we call one rectangle S for source and the other D for destination, then bitblt performs the operation d ← d * s for corresponding bits d and s in D and S. Here * denotes some two-argument Boolean operation, which is a parameter to bitblt. Such an instruction can obviously be used to move rectangles of bits around the screen, by setting both D and S to the display bitmap.
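The sketch below shows the shape of such an operation in C. It is a deliberately naive, pixel-at-a-time rendering of d ← d * s, not the word-parallel microcode of the Alto or the Lisp Machine; the Bitmap layout and helper names are assumptions introduced only for illustration.

```c
/* Simplified, hypothetical sketch of bitblt: combine two conformable
 * rectangles of pixels with a Boolean operation.  Real implementations
 * work a word at a time and must handle bit alignment and overlap. */
#include <stdint.h>

typedef struct {
    uint8_t *bits;        /* packed rows, 8 pixels per byte */
    int width, height;    /* in pixels */
    int bytes_per_row;
} Bitmap;

typedef int (*BoolOp)(int d, int s);   /* the '*' parameter of bitblt */

static int get_pixel(const Bitmap *b, int x, int y)
{
    return (b->bits[y * b->bytes_per_row + x / 8] >> (7 - x % 8)) & 1;
}

static void set_pixel(Bitmap *b, int x, int y, int v)
{
    uint8_t *p = &b->bits[y * b->bytes_per_row + x / 8];
    uint8_t mask = (uint8_t)(1 << (7 - x % 8));
    *p = v ? (*p | mask) : (*p & ~mask);
}

/* Copy a w-by-h rectangle from (sx, sy) in src onto (dx, dy) in dst,
 * combining corresponding pixels with op.  Overlapping source and
 * destination rectangles are not handled correctly by this naive loop. */
void bitblt(Bitmap *dst, int dx, int dy,
            const Bitmap *src, int sx, int sy,
            int w, int h, BoolOp op)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            set_pixel(dst, dx + x, dy + y,
                      op(get_pixel(dst, dx + x, dy + y),
                         get_pixel(src, sx + x, sy + y)));
}
```

With * chosen as "return the source bit" the call is a plain copy; OR paints the source onto the destination, AND with a complemented source erases, and XOR gives the familiar reversible highlight.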
There are many subtle issues regarding bitblt. How are the two rectangles to be specified? In whose coordinate system? What if they are not conformable? Useful stipple and grey-scale patterns can be obtained by replicating the image in a small rectangle across a large one. There are also several interesting issues about the implementation of the bitblt instruction itself.
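The pattern-replication use just mentioned reduces, in the simplest view, to a loop of bitblts. The sketch below is again hypothetical and reuses the illustrative Bitmap, BoolOp and bitblt of the previous sketch; it tiles a small stipple rectangle across a larger destination rectangle, clipping the partial tiles at the edges.

```c
/* Hypothetical sketch: replicate a small pattern (say a 4x4 stipple)
 * across a w-by-h destination rectangle with repeated bitblt calls. */
void tile_pattern(Bitmap *dst, int dx, int dy, int w, int h,
                  const Bitmap *pat, BoolOp op)
{
    for (int y = 0; y < h; y += pat->height)
        for (int x = 0; x < w; x += pat->width) {
            int tw = (w - x < pat->width)  ? w - x : pat->width;
            int th = (h - y < pat->height) ? h - y : pat->height;
            bitblt(dst, dx + x, dy + y, pat, 0, 0, tw, th, op);
        }
}
```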
Bitblt has been found to have an amazing number of uses, far beyond simple rectangle copying. (This fact has been part of the raster graphics folklore for some time.) For example, it is possible to rotate or transpose an n × n bitmap using a small constant times n bitblts, each of which touches O(n) bits.
Bitblt can also be used to fill in areas, count connected components, and do other interesting and useful bitmap computations in ingenious ways. Part of the reason for this power is that many interesting bitmap computations can be done entirely through local operations, that is, by uniformly replacing each pixel with a function of itself and its neighboring pixels. Such algorithms have been used to a certain extent in the theory of iterative arrays (though not especially in a graphics context) and correspond naturally to invocations of bitblt. We will see some examples later.
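To anticipate one such example, the sketch below fills a region by repeated neighborhood propagation from a seed, constrained by a region mask. Each pass is exactly the kind of uniform local operation that a few shifted bitblts with OR and AND can apply to a whole bitmap at once; it is written here pixel by pixel, with one byte per pixel, purely for readability, and the function and parameter names are illustrative rather than taken from any of the systems mentioned.

```c
/* Illustrative sketch: fill the part of `region` reachable from the seed
 * held in `fill`, using only local (neighborhood) operations.  Both
 * arrays are h rows of w pixels, one unsigned char (0 or 1) per pixel. */
void region_fill(unsigned char *fill, const unsigned char *region,
                 int w, int h)
{
    int changed = 1;
    while (changed) {
        changed = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int i = y * w + x;
                if (fill[i] || !region[i])
                    continue;
                /* Set this pixel if any 4-neighbor is already filled. */
                if ((x > 0     && fill[i - 1]) ||
                    (x < w - 1 && fill[i + 1]) ||
                    (y > 0     && fill[i - w]) ||
                    (y < h - 1 && fill[i + w])) {
                    fill[i] = 1;
                    changed = 1;
                }
            }
    }
}
```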
Although bitblt was born of necessity, it has become the center of a powerful and convenient graphics model. The most important attribute of this model is conceptual economy. Bitmaps are a general representation of an image, and bitblt is a general primitive for manipulating them. Graphical algorithms based on bitblt work identically on bitmaps containing characters, synthetic shapes, lines, scanned images or any combination thereof.
Because bitblt operates at the lowest level of the representation, the hardware for a bitmap graphics system can be very simple. This simplicity can result in high performance; indeed, some of the most dynamic, interactive graphics systems have been developed for bitmap displays. And although bitblt uses the image representation directly, it hides that representation from the programmer and is general enough to be the only access to the display on well-designed systems.
There are other advantages to doing all raster manipulations through a single primitive. There is an economy of implementation: the details of accessing the bitmap need be expressed in only one place, and any performance improvements in bitblt benefit every application. Also, applications that use bitblt are portable from one display to another, and can take advantage, without change, of better hardware as it becomes available. From a software engineering viewpoint, if all the applications are based on a common data structure and operation, different applications coexist and compose more effectively.