The Sierra Hicolor DAC
Journal: Dr. Dobb's Journal Oct 1991 v16 n10 p155(6)
-----------------------------------------------------------------------------
Title: The virtues of affordable technology: the Sierra Hicolor DAC.
(Sierra Semiconductor Corp.'s Hicolor digital-to-analog converter)
(Hardware Review) (evaluation)
Author: Abrash, Michael.
AttFile: Program: GP-OCT91.ASC Source code listing.
Summary: Sierra Semiconductor Corp.'s Hicolor digital-to-analog converter
         (DAC) is easy to program and supports an 800x600, 32,768-color
         mode. The Hicolor DAC costs a board manufacturer less than $10
         more than a standard VGA DAC, and some Hicolor-based SuperVGAs
         are priced at less than $200. On the downside, the Hicolor DAC
         needs twice as much memory at a given resolution as an
         equivalent CEG/DAC, graphics operations can take longer, and it
         neither performs gamma correction in hardware nor provides a
         built-in lookup table for programmable gamma correction.
         However, the Hicolor DAC makes general antialiasing possible in
         the VGA market for the first time.
-----------------------------------------------------------------------------
Descriptors..
Company: Sierra Semiconductor Corp. (Products).
Ticker: SERA.
Product: Sierra Semiconductor HiColor (Digital-to-analog converter)
(evaluation).
Topic: Digital to Analog Converters
Evaluation.
Feature: illustration
chart
program.
Caption: Mappings of sets of four double-resolution pixels to single screen
pixels. (chart)
Mappings from double-resolution buffer pixels. (chart)
Listing one. (program)
-----------------------------------------------------------------------------
Full Text:
My, how quickly the PC world changes! Six months ago, I described the Edsun
CEG/DAC as a triumph of inexpensive approximation. That chip was and is an
ingenious bridge between SuperVGA and true color that requires no
modifications to VGA chips or additional memory, yet achieves often-stunning
results. Six months ago, the CEG/DAC was the only affordable path beyond
SuperVGA.
Time and technology march on, and, in this case, technology has marched much
the faster. I have on my desk a SuperVGA card, built around the Tseng Labs
ET4000 VGA chip, 1 Mbyte of RAM, and the Sierra Semiconductor Hicolor DAC
(digital-to-analog converter, the chip that converts pixel values from the
VGA into analog signals for the monitor), that supports an 800x600,
32,768-color mode. The added cost of the Hicolor DAC over a standard VGA DAC
(of which the Hicolor DAC is a fully compatible superset) to the board
manufacturer is less than $10; I have already seen a Hicolor-based SuperVGA
listed in Computer Shopper for under $200.
To those of us who remember buying IBM EGAs for $1000, there's a certain
degree of unreality to the thought of an 800x600 32K-color VGA for less than
$200.
Understand, now, that I'm not talking about clever bitmap encoding or other
tricky ways of boosting color here. This is the real, 15-bpp, almost
true-color McCoy, beautifully suited to imaging, antialiasing, and virtually
any sort of high-color graphics you might imagine. The Hicolor DAC supports
normal bitmaps that are just like 256-color bitmaps, except that each pixel
is composed of 15 bits spread across 2 bytes. If you know how to program
800x600 256-color mode, you should have no trouble at all programming 800x600
32K-color mode; for the most part, just double the horizontal byte counts.
(Lower-resolution 32K-color modes, such as 640x480, are available. No
1024x768 32K-color mode is supported, not due to any limitation of the
Hicolor DAC, but because no VGA chip currently supports the 1.5 Mbyte of
memory that mode requires. Expect that to change soon.) The 32K-color
banking schemes are the same as in 256-color modes, except that there are
half as many pixels in each bank. Even the complexities of the DAC's
programmable palette go away in 32K-color mode, because there is no
programmable palette.
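For concreteness, here is a minimal sketch of how a 32K-color pixel might be
built and stored, assuming the usual 5:5:5 layout (red in bits 14-10, green in
bits 9-5, blue in bits 4-0) with the low byte at the lower display-memory
address; consult the Hicolor DAC's data sheet before relying on the exact bit
positions.

#include <stdio.h>

typedef unsigned short Pixel15;   /* one 32K-color pixel: two bytes */

/* Pack 5-bit red, green, and blue components into a single pixel. */
Pixel15 PackRGB15(unsigned int Red, unsigned int Green, unsigned int Blue)
{
   return (Pixel15)(((Red & 0x1F) << 10) | ((Green & 0x1F) << 5) |
                    (Blue & 0x1F));
}

int main(void)
{
   Pixel15 Gray = PackRGB15(16, 16, 16);   /* mid-level gray */
   printf("gray pixel = 0x%04X\n", Gray);
   /* In display memory the pixel occupies two consecutive bytes, which
      is why horizontal byte counts double relative to 256-color code. */
   return 0;
}

Drawing a horizontal run of such pixels is then just a matter of storing that
16-bit value repeatedly, exactly as a 256-color fill stores bytes.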
And therein lies the strength of the Hicolor DAC: It's easy to program.
Theoretically, the CEG/DAC can produce higher-color and more precise images
using less display memory than the Hicolor DAC, because CEG color resolutions
of 24-bpp and even higher are possible. Practically speaking, it's hard to
write software -- especially real-time software -- that takes full advantage
of the CEG/DAC's capabilities. On the other hand, it's very easy to extend
existing 256-color SuperVGA code to support the Hicolor DAC, and although 32K
colors is not the same as true color (24-bpp), it's close enough for most
purposes, and astonishingly better than 256 colors. Digitized and rendered
images look terrific on the Hicolor DAC, just as they do on the CEG/DAC --
and it's a lot easier and much faster to generate such images for the Hicolor
DAC.
The Hicolor DAC has three disadvantages. First, it requires twice as much
memory at a given resolution as does an equivalent 256-color or CEG/DAC mode.
This is no longer a significant problem (apart from temporarily precluding a
1024x768 32K-color mode, as explained earlier); memory is cheap, and 1 Mbyte
is becoming standard on SuperVGAs. Secondly, graphics operations can take
considerably longer, simply because there are twice as many bytes of display
memory to be dealt with; however, the latest generation of SuperVGAs provides
for such fast memory access that 32K-color software will probably run faster
than 256-color software did on the first generation of SuperVGAs. Finally,
the Hicolor DAC neither performs gamma correction in hardware nor provides a
built-in look-up table to allow programmable gamma correction.
To refresh your memory, gamma correction is the process of compensating for
the nonlinear response of pixel brightness to input voltage. A pixel with a
green value of 60 is much more than twice as bright as a pixel of value 30.
The Hicolor DAC's lack of built-in gamma correction puts the burden on
software to perform the correction so that antialiasing will work properly,
and images such as digitized photographs will display with the proper
brightness. Software gamma correction is possible, but it's a time-consuming
nuisance; it also decreases the effective color resolution of the Hicolor DAC
for bright colors, because the bright colors supported by the Hicolor DAC are
spaced relatively farther apart than the dim colors.
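By way of illustration, here is a minimal sketch of a software
gamma-correction table for 5-bit components, assuming a display gamma of
about 2.2 (a typical figure; the column doesn't specify one). The table maps
a desired linear intensity to the component value actually written to the
pixel.

#include <math.h>
#include <stdio.h>

#define LEVELS 32       /* a 5-bit component ranges over 0..31 */
#define GAMMA  2.2      /* assumed display gamma */

static unsigned char GammaTable[LEVELS];

/* Build a table mapping desired linear intensity to the (nonlinear)
   component value that produces that intensity on the screen. */
void BuildGammaTable(void)
{
   int i;
   for (i = 0; i < LEVELS; i++)
      GammaTable[i] = (unsigned char)
         (pow((double)i / (LEVELS - 1), 1.0 / GAMMA) * (LEVELS - 1) + 0.5);
}

int main(void)
{
   int i;
   BuildGammaTable();
   for (i = 0; i < LEVELS; i++)
      printf("linear %2d -> component %2d\n", i, GammaTable[i]);
   return 0;
}

Note how the bright end of the table repeats values; that's the loss of
effective color resolution for bright colors mentioned above.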
The lack of gamma correction is, however, a manageable annoyance. On
balance, the Hicolor DAC is true to its heritage; a logical, inexpensive, and
painless extension of SuperVGA. The obvious next steps are 1024x768 in 32K
colors, and 800x600 with 24 bpp; heck, 4 Mbytes of display memory (eight
4-Mbit RAMs) would be enough for 1024x768 24-bpp with room to spare. In
short, the Hicolor DAC appears to be squarely in the mainstream of VGA
evolution. (Note that although most of the first generation of Hicolor
boards are built around the ET4000, which has quietly and for good reason
become the preeminent SuperVGA chip, the Hicolor DAC works with other VGA
chips and will surely appear on SuperVGAs of all sorts in the near future.)
Does that mean that the Hicolor DAC will become a standard? Beats me. I'm
out of the forecasting business; the world changes too fast. The CEG/DAC has
a head start and is showing up in a number of systems, and who knows what
else is in the pipeline? Still, programmers love the Hicolor DAC, and I
would be astonished if there were not an installed base of at least 100,000
by the end of the year. Draw your own conclusions; but me, I can't wait to
do some antialiased drawing on the Hicolor DAC (and I will, in this column,
next month).
If the CEG/DAC is a triumph of inexpensive approximation, the Hicolor DAC is
a masterpiece of affordable technology. I'd have to call a 1-Mbyte Hicolor
SuperVGA for around $200 the ultimate in graphics cost effectiveness at this
moment -- but don't expect it to hold that title for more than six months.
Things change fast in this industry; $200 true-color in a year, anyone?
Polygon Antialiasing
To my mind, the best thing about the Hicolor DAC is that, for the first time
in the VGA market, it makes fast, general antialiasing possible -- and the
readers of this column will soon see the fruits of that. You see, what I've
been working toward in this column is real-time 3-D, perspective drawing on a
standard PC, without the assistance of any expensive hardware. The object
model I'll be using is polygon-based; hence the fast polygon fill code I've
presented. With mode X (320x240, 256 colors, undocumented by IBM), we now
have a fast, square-pixel, page-flipped, 256-color mode, the best that
standard VGA has to offer. In this mode, it's possible to do not only
real-time, polygon-based perspective drawing and animation, but also
relatively sophisticated effects such as lighting sources, smooth shading,
and hidden surface removal. That's everything we need for real-time 3-D --
but things could still be better.
Pixels are so large in mode X that polygons have very visibly jagged edges.
These jaggies are the result of the aliasing of which I spoke back in April
and May; that is, distortion of the true image that results from
undersampling at the low pixel rate of the screen. Jaggies are a serious
problem; the whole point of real-time 3-D is to create the illusion of
reality, but jaggies quickly destroy that illusion, particularly when they're
crawling along the edges of moving objects. More frequent sampling (higher
resolution) helps, but not as much as you'd think. What's really needed is
the ability to blend colors arbitrarily within a single pixel, the better to
reflect the nature of the true image in the neighborhood of that pixel --
that is, antialiasing. The pixels are still as large as ever, but with the
colors blended properly, the eye processes the screen as a continuous image,
rather than as a collection of discrete pixels, and perceives the image at
much higher resolution than the display actually supports.
There are many ways to antialias, some of them fast enough for real-time
processing, and they can work wonders in improving image appearance -- but
they all require a high degree of freedom in choosing colors. For many sorts
of graphics, 256 simultaneous colors is fine, but it's not enough for
generally useful antialiasing (although we will shortly see an interesting
sort of special-case antialiasing with 256 colors). Therefore, the one
element lacking in my quest for affordable real-time 3-D has been good
antialiasing.
No longer. The Hicolor DAC provides plenty of colors (although I sure do
wish the software didn't have to do gamma correction!), and makes them
available in a way that allows for efficient programming. In a couple of
months, I'm going to start presenting 3-D code; initially, this code will be
for mode X, but you can expect to see some antialiasing code for the Hicolor
DAC soon.
256-Color Antialiasing
Next month, I'll explain how the Hicolor DAC works -- how to detect it, how
to initialize it, the pixel format, banking, and so on -- and then I'll
demonstrate Hicolor antialiasing. This month, I'm going to demonstrate
antialiasing on a standard VGA, partly to introduce the uncomplicated but
effective antialiasing technique that I'll use next month, partly so you
can see the improvement that even quick and dirty antialiasing produces, and
partly to show the sorts of interesting things that can be done with the
palette in 256-color mode.
I'm going to draw a cube in perspective. For reference, Listing One (page
173) draws the cube in mode 13h (320x200, 256 colors) using the standard
polygon fill routine that I developed back in February and March. No, the
perspective calculations aren't performed in Listing One; I just got the
polygon vertices out of 3-D software that I'm developing and hardwired them
into Listing One. Never fear, though; we'll get to true 3-D soon enough.
Listing One draws a serviceable cube, but the edges of the cube are very
jagged. Imagine the cube spinning, and the jaggies rippling along its edges,
and you'll see the full dimensions of the problem.
Listings Two (page 173) and Three (page 173) together draw the same cube, but
with simple, unweighted antialiasing. The results are much better than those
of Listing One; there's no question in my mind as to which cube I'd rather see
in my graphics software.
The antialiasing technique used in Listing Two is straightforward. Each
polygon is scanned out in the usual way, but at twice the screen's resolution
both horizontally and vertically (which I'll call "double-resolution,"
although it produces four times as many pixels), with the double-resolution
pixels drawn to a memory buffer, rather than directly to the screen. Then,
after all the polygons have been drawn to the memory buffer, a second pass is
performed; this pass looks at the colors stored in each set of four
double-resolution pixels, and draws to the screen a single pixel that
reflects the colors and intensities of the four double-resolution pixels that
make it up, as shown in Figure 1. In other words, Listing Two temporarily
draws the polygons at double resolution, then uses the extra information from
the double-resolution bitmap to generate an image with an effective
resolution considerably higher than the screen's actual 320x200 capabilities.
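Here is a minimal sketch of that second pass, generalized to a buffer of
8-bit R, G, B subpixels rather than the palette-coded bytes Listing Two
actually uses; the type and function names are hypothetical.

typedef struct { unsigned char R, G, B; } SubPixel;

/* Average each 2x2 block of double-resolution subpixels down to one
   screen pixel (an unweighted box filter). SubBuf holds two consecutive
   double-resolution scan lines; ScreenLine receives one screen line. */
void ShrinkBand(const SubPixel *SubBuf, SubPixel *ScreenLine,
                int ScreenWidth)
{
   int x;
   int SubWidth = ScreenWidth * 2;
   for (x = 0; x < ScreenWidth; x++) {
      const SubPixel *Top = &SubBuf[x * 2];             /* top pair */
      const SubPixel *Bot = &SubBuf[SubWidth + x * 2];  /* bottom pair */
      ScreenLine[x].R = (Top[0].R + Top[1].R + Bot[0].R + Bot[1].R) / 4;
      ScreenLine[x].G = (Top[0].G + Top[1].G + Bot[0].G + Bot[1].G) / 4;
      ScreenLine[x].B = (Top[0].B + Top[1].B + Bot[0].B + Bot[1].B) / 4;
   }
}

With the Hicolor DAC, the same pass would simply end by packing the averaged
components into a 15-bpp pixel rather than looking up a palette entry.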
Two interesting tricks are employed in Listing Two. First, it would be best,
from the standpoint of speed, if the entire screen could be drawn to the
double-resolution intermediate buffer in a single pass. Unfortunately, a
buffer capable of holding one full 640x400 screen would be 64,000 or more
bytes in size -- too much memory for most programs to spare. Consequently,
Listing Two instead scans out the image just two double-resolution scan lines
(corresponding to one screen scan line) at a time. That is, the entire image
is scanned once for every two double-resolution scan lines, and all
information not concerning the two lines of current interest is thrown away.
This banding is implemented in Listing Three, which accepts a full list of
scan lines to draw, but actually draws only those lines within the current
scan line band. Listing Three also draws to the intermediate buffer, rather
than to the screen.
The polygon-scanning code from February was hard-wired to call the function
DrawHorizontalLineList, which drew to the display; this is the
polygon-drawing code called by Listing One. That was fine so long as there
was only one possible drawing target, but now we have two possible targets --
the display (for nonantialiased drawing), and the intermediate buffer (for
antialiased drawing). It's desirable to be able to mix the two, even within
a single screen, because antialiased drawing looks better but nonantialiased
is faster. Consequently, I have modified Listing One from February -- the
function FillConvexPolygon -- to create FillCnvxPolyDrvr, which is the same
as FillConvexPolygon, except that it accepts as a parameter the name of the
function to be used to draw the scanned-out polygon. FillCnvxPolyDrvr is so
similar to FillConvexPolygon that it's not worth taking up printed space to
show it in its entirety; Listing Four (page 174) shows the differences
between the two; and the new module will be available in its entirety as part
of the code from this issue, under the name FILCNVXD.C.
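The following self-contained sketch, built on simplified stand-in types
rather than the actual structures in FILCNVXD.C and the February listings,
just illustrates the driver-function idea.

#include <stdio.h>

/* Stand-in for a scanned-out polygon: a list of horizontal lines. */
struct HLineList { int YStart, Length; /* plus the per-line X extents */ };

/* The caller supplies whichever line-drawing routine it wants used. */
typedef void (*DrawLineListFn)(struct HLineList *LineList, int Color);

void DrawToScreen(struct HLineList *LineList, int Color)
{ printf("screen: %d lines, color %d\n", LineList->Length, Color); }

void DrawToBuffer(struct HLineList *LineList, int Color)
{ printf("buffer: %d lines, color %d\n", LineList->Length, Color); }

/* FillCnvxPolyDrvr-style entry point: scan out the polygon (omitted
   here) and hand the resulting line list to the supplied function. */
void FillPolyWithDriver(struct HLineList *Scanned, int Color,
                        DrawLineListFn Draw)
{
   Draw(Scanned, Color);
}

int main(void)
{
   struct HLineList Scanned = { 10, 50 };
   FillPolyWithDriver(&Scanned, 1, DrawToScreen);   /* nonantialiased */
   FillPolyWithDriver(&Scanned, 1, DrawToBuffer);   /* antialiased path */
   return 0;
}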
The second interesting trick in Listing Two is the way in which the palette
is stacked to allow unweighted antialiasing. Listing Two arranges the
palette so that rather than 256 independent colors, we'll work with four-way
combinations within each pixel of three independent colors (red, green, and
blue), with each pixel accurately reflecting the intensities of each of the
four color components that it contains. This allows fast and easy mapping
from four double-resolution pixels to the single screen pixel to which they
correspond. Figure 2 illustrates the mapping of subpixels (double-resolution
pixels) through the palette to screen pixels. This palette organization
converts mode 13h from a 256-color mode to a four-color antialiasing mode.
It's worth noting that many palette registers are set to identical values by
Listing Two, because only the values of the subpixels matter, not the
arrangement of those values. For example, the pixel values 0x01, 0x04,
0x10, and 0x40 all map to 25 percent blue. By using a table look-up to map
sets of four
double-resolution pixels to screen pixel values, more than half the palette
could be freed up for drawing with other colors.
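As a rough sketch of how such a palette might be programmed -- assuming
subpixel code 1 means blue (as the 0x01/0x04/0x10/0x40 example shows), and
guessing that codes 2 and 3 mean green and red; Listing Two may assign them
differently -- each of the 256 DAC entries is set to the average of the
colors of the four 2-bit fields in its index:

#include <conio.h>   /* inp/outp, as provided by DOS C compilers */

/* Program the VGA DAC so that each of the 256 pixel values displays the
   average of the four 2-bit subpixel color codes packed into it. */
void SetStackedPalette(void)
{
   /* 0, 25, 50, 75, and 100 percent as 6-bit DAC values (no gamma). */
   static const unsigned char Level[5] = { 0, 16, 32, 48, 63 };
   int Entry, Field, Code, Red, Green, Blue;

   for (Entry = 0; Entry < 256; Entry++) {
      Red = Green = Blue = 0;
      /* Count how many of the four fields hold each color code. */
      for (Field = 0; Field < 4; Field++) {
         Code = (Entry >> (Field * 2)) & 3;
         if (Code == 1) Blue++;
         else if (Code == 2) Green++;
         else if (Code == 3) Red++;
      }
      outp(0x3C8, Entry);            /* select the DAC entry to write */
      outp(0x3C9, Level[Red]);       /* then write red, green, blue */
      outp(0x3C9, Level[Green]);
      outp(0x3C9, Level[Blue]);
   }
}

In practice, the Level[] table is exactly where the software gamma correction
discussed earlier would go.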
Unweighted Antialiasing: How Good?
Is the antialiasing used in Listing Two the finest possible antialiasing
technique? It is not. It is an unweighted antialiasing technique, meaning
that no accounting is made for how close to the center of a pixel a polygon
edge might be. The edges are also biased a half-pixel or so in some cases,
so registration with the underlying image isn't perfect. Nonetheless, the
technique used in Listing Two produces attractive results, which is what
really matters; all screen displays are approximations, and unweighted
antialiasing is certainly good enough for PC animation applications.
Unweighted antialiasing can also support good performance, although this is
not the case in Listings Two and Three, where I have opted for clarity rather
than performance. Increasing the number of lines drawn on each pass, or
reducing the area processed to the smallest possible bounding rectangle, would
help improve performance, as, of course, would the use of assembly language.
If there's room, I'll demonstrate some of these techniques next month.
For further information on antialiasing, you might check out the standard
reference: Computer Graphics, by Foley and van Dam. Michael Covington's
"Smooth Views," in the May, 1990 Byte, provides a short but meaty discussion
of unweighted line antialiasing.
As relatively good as it looks, Listing Two is still watered-down
antialiasing, even of the unweighted variety. For all our clever palette
stacking, we have only five levels of each color component available; that's
a far cry from the 32 levels of the Hicolor DAC, or the 256 levels of true
color. The limitations of 256-color modes, even with the palette, are showing
through.
Next month, 15-bpp antialiasing.
The Mode X Mode Set Bug, Revisited
Two months back, I added a last-minute note to this column describing a fix
to the mode X mode set code that I presented in the July column. I'd like to
describe how this bug slipped past me, as an illustration of why it's so
difficult to write flawless software nowadays. The key is this: The PC world
is so huge and diverse that it's a sure thing that someone, somewhere, will
eventually get clobbered by even the most innocuous bug -- a bug that you
might well not have found if you had spent the rest of your life doing
nothing but beta testing. It's like the thought that 100 monkeys, typing
endlessly, would eventually write the complete works of Shakespeare; there
are 50,000,000 monkeys out there banging on keyboards and mousing around, and
they will inevitably find any holes you leave in your software.
In writing the mode X mode set code, I started by modifying known-good code.
I tried the final version of the code on both of my computers with five
different VGAs, and I had other people test it out on their systems. In
short, I put the code through all the hoops I had available, and then I sent
it out to be beaten on by 100,000 DDJ readers. It took all of one day for
someone to find a bug.
The code I started with used the VGA's 28-MHz clock. Mode X should have used
the 25-MHz clock, a simple matter of setting bit 2 of the Miscellaneous
Output register (3C2h) to 0 instead of 1.
Alas, I neglected to change that single bit, so frames were drawn at a faster
rate than they should have been; however, both of my monitors are
multifrequency types, and they automatically compensated for the faster frame
rate. Consequently, my clock-selection bug was invisible and innocuous --
until all those monkeys started banging on it.
IBM makes only fixed-frequency VGA monitors, which require very specific
frame rates; if they don't get the frame rate they expect, the image
rolls -- and that's what the July mode X mode set code did on fixed-frequency
monitors. The corrected version, shown in Listing Five (page 174), selects
the 25-MHz clock, and works just fine on fixed-frequency monitors.
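The heart of that fix, sketched below rather than reproduced from Listing
Five, is simply to force the Miscellaneous Output register's clock-select
field to the 25-MHz setting; the register is written at 3C2h and read back at
3CCh. This sketch assumes a DOS C compiler that provides inp and outp.

#include <conio.h>   /* inp/outp */

#define MISC_OUTPUT_READ  0x3CC
#define MISC_OUTPUT_WRITE 0x3C2

/* Select the 25.175-MHz dot clock by forcing the clock-select field
   (bits 3-2 of the Miscellaneous Output register) to 00. */
void Select25MHzClock(void)
{
   unsigned char Misc = (unsigned char)inp(MISC_OUTPUT_READ);
   Misc &= 0xF3;                     /* clear bits 3-2: clock select = 00 */
   outp(MISC_OUTPUT_WRITE, Misc);
}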
Why didn't I catch this bug? Neither I nor a single one of my testers had a
fixed-frequency monitor! This nicely illustrates how difficult it is these
days to test code in all the PC-compatible environments in which it might
run. The problem is particularly severe for small developers, who can't
afford to buy every model from every manufacturer; just imagine trying to
test network-aware software in all possible configurations.
When people ask why software isn't bulletproof; why it crashes or doesn't
coexist with certain programs; why PC clones aren't always compatible; why,
in short, the myriad irritations of using a PC exist -- this is a big part
of the reason. That's just the price we pay for unfettered creativity and
vast choice in the PC market.
Unfettered for the moment; but consider AT&T's patent on backing store, the
"esoteric" idea of storing an obscured area of a window in a buffer so as to
be able to redraw it quickly. It took me all of ten minutes to independently
invent that one five years ago. Better yet, check out the letters to the
editor in the July Programmer's Journal, about which I will say no more
because it sets my teeth on edge. We'd better hope that no one patents
"patterned tactile-pressure information input," that is, typing. Trust
50,000,000 monkeys to come up with a system as ridiculous as this.