Saturday, July 02, 2005

Newfangled Lighting


I'm going to assume that most of the readers who stumble upon my blog are already aware of High Dynamic Range Lighting. Many of them may even know exactly what it's all about. Some may even be brushing up on their leet programming skills in anticipation of the new flock of GPUs that will be coming out to support it.

But I just started hearing about the idea a few months ago when Valve announced their plans to release a Half-Life 2 add-on dubbed "The Lost Coast" which would show off all the beautiful benefits of using HDRL (assuming, of course, that your computer was actually a fragment from a future sentient quantum computer sent back in time). I thought, "Cool," and then went on with my life.

And then the other day, while updating my Steam files in anticipation of fucking around with Hammer, I noticed that they were showing a video which had half the screen using the normal Source lighting model and the other half using this High Dynamic Range thing. Excitedly, I clicked on the link.

The movie did not impress. Truth be told, I couldn't really discern exactly what was so different. The HDRL side was a little shinier, the reflections a bit brighter. That's it?

Well, no. That's not it. And, yeah, that's kinda it. But I had to do a little searching to find decent explanations of this no-doubt-soon-to-be-widely-implemented technique.

Okay, here's how it boils down:

Computers use a certain number of bits to represent color. Right now it's typical to see 24 or 32 bits used. 24-bit color uses 8 bits per channel, which means 256 different shades per channel. 32-bit color usually uses the extra 8 bits for an alpha channel, which governs transparency.
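If you want to see that concretely, here's a quick Python sketch of how a 32-bit pixel gets carved up (the channel order varies by platform and API, so take the exact layout with a grain of salt):

    # A toy sketch: four 8-bit channels packed into one 32-bit pixel.
    def pack_rgba8(r, g, b, a=255):
        """Pack four 0-255 channel values into a single 32-bit integer."""
        return (a << 24) | (r << 16) | (g << 8) | b

    def unpack_rgba8(pixel):
        """Split a 32-bit pixel back into its four 8-bit channels."""
        return ((pixel >> 16) & 0xFF,   # red
                (pixel >> 8) & 0xFF,    # green
                pixel & 0xFF,           # blue
                (pixel >> 24) & 0xFF)   # alpha

    brightest = pack_rgba8(255, 255, 255)   # "white": as bright as 8 bits get

The point being that 255 is a hard ceiling per channel. Anything brighter than "white" simply can't be expressed.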

Sooo . . . basically HDRL, the way I understand it, uses more than 32 bits (64 seems to be the target number) to represent a greater range of colors, leading to, as some detergents claim, brighter whites and bolder darks. And there are even leftover bits with which programmers can do, well, weird and wonderful things.

At least, that's what Carmack seems to be babbling about.
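If code helps, here's the gist as a toy Python sketch (the "paper" and "sun" numbers are totally made up, and real hardware would use something like 16-bit floats per channel rather than Python's doubles):

    # With 8 bits per channel, everything brighter than "white" clamps to 255.
    def to_8bit(luminance):
        """Squash a linear luminance value into the 0-255 LDR range."""
        return min(255, max(0, int(round(luminance * 255))))

    paper = 1.0     # an arbitrary "white" reference
    sun = 500.0     # enormously brighter than paper in the real world

    print(to_8bit(paper), to_8bit(sun))   # -> 255 255: indistinguishable

    # A floating-point (HDR) buffer just stores the values as-is,
    # so 1.0 and 500.0 remain two very different brightnesses.
    hdr_buffer = [paper, sun]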

Now, some people are calling for HDRL because it makes rendered scenes more realistic. And it does, if by realistic we mean that the renderings can more closely resemble the world around us as viewed by our eyes.

And that is an important point, because the renderings I've seen are simply amazing in terms of visual clarity and how closely they resemble not just photographs, but the experience of actually standing in that place looking at those specific things.

Of course, that sort of realism can be useful and interesting. But for the videogame world it's also largely irrelevant. What game developers should focus on (and, yeah, get all excited about) is that HDRL gives them a significant increase in the tools available to present a visually interesting world to a gamer.

Yes, going from 16-bit color to 32-bit color allowed some games to become more realistic (as per our previous definition), but it also allowed many games to become more cartoony, and more fantastical, and more stylized, and more whatever. It allowed more artistic styles to present themselves, because artists were given a larger palette and more control over that palette.

I'm not saying that you can't do some great things with one or two colors. Lots of painters have.

But it's useful to have choice. A painter may decide to use only blue and white, but she made that decision with the realization that with a set of a few basic colors she could mix and thin and thicken and produce thousands and thousands of different colors.

The whole High Dynamic Range thing affects all manner of artwork that in some way utilizes the digital realm.

Photography, for instance.

Terrain rendering, which I totally dig.

Any digital process that might possibly benefit from a larger palette.

With non-realtime rendering, the important thing to note is the encoding of the data. If you use a digital camera with low range, then your pictures will display low range. If you put your movie into a lossy compression format, then you're going to lose color clarity, sometimes horrendously so. Images from cameras that can capture HDR data are going to look great (well, displays need some improvement, but even current cathode-ray monitors have a slightly greater range than 32-bit color . . . as far as I can discover).
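Here's a little Python sketch of why the encoding matters (the luminance numbers are arbitrary): once you quantize down to 8 bits per channel, no amount of post-processing gets the original range back.

    # Four linear HDR luminance samples, captured into 8-bit storage
    # and then recovered as best we can.
    original = [0.2, 1.0, 7.5, 120.0]
    stored = [min(255, int(round(v * 255))) for v in original]   # low-range encode
    restored = [v / 255 for v in stored]                         # best-case decode

    print(restored)   # -> [0.2, 1.0, 1.0, 1.0]: the bright detail is gone for good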

On the realtime rendering side, video games/simulations are going to be able to pack more visual information into a scene that uses the HDR methods. Rendering that well in realtime, though, requires new video cards designed with HDR in mind.
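One piece of that puzzle: the scene still has to land on a low-range display every frame, which is where "tone mapping" comes in. Here's a toy Python version of the simple Reinhard operator, one common choice from the research papers (a real engine would also adapt exposure over time, which I'm skipping):

    # Reinhard tone mapping: compress unbounded HDR luminance into 0..1
    # for display, instead of just clipping everything above white.
    def reinhard(luminance, exposure=1.0):
        scaled = luminance * exposure
        return scaled / (1.0 + scaled)

    for sample in (0.1, 1.0, 10.0, 1000.0):
        print(sample, "->", round(reinhard(sample), 4))
    # 0.1 -> 0.0909, 1.0 -> 0.5, 10.0 -> 0.9091, 1000.0 -> 0.999
    # Bright areas compress smoothly toward 1.0, so highlight detail
    # survives on screen instead of blowing out.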

There are even ways to sort of shoehorn High Dynamic Range data through current GPUs using complicated workarounds that I understand even less than the regular techniques. But I definitely won't get into those.

Did I simplify things? At all?

Anyway, take a look at Paul Debevec's Home Page, which is just chock-full of assorted papers on lighting and whatnot. There are some great videos that really help illustrate the jump in quality these new techniques represent.
