
About this blog

C++, OpenGL, Blender, Lua, Bash, and interactive stories

Entries in this blog

In another ShaderX6 article, Marco Salvi presents "Exponential Shadow Maps". Like variance shadow maps, they can be hardware filtered, but they only require storing a single moment. They're also faster, with a simpler depth-comparison equation.

...but... to avoid overflow issues when storing the exponentiated depth, the paper stores linear depth instead, deferring the exponentiation until the depth comparison and using ln to recover the proper value for filtering. This means emulating (separable) filtering in a shader. :(

Nevertheless, my current implementation uses regular hardware filtering with linear depth, which is likely the cause of the massive light bleeding when the occluder is near the receiver, and poor filtering when they're far apart.



The depth comparison is e^(k(o - r)), where o is the occluder depth, r is the receiver depth, and k is the slope coefficient; raising k reduces light bleeding at the expense of poorer filtering. k = 32 for the above screenshot. [wow]

uniform sampler2D shadowSampler;

varying vec4 shadowCoord;

const float k = 32.0;

float GetInvShadowImpl()
{
    // Single moment: linear occluder depth stored in the shadow map.
    float moment = texture2D(shadowSampler, shadowCoord.xy).r;
    // Exponential test: ~1 when the receiver is at the occluder depth,
    // falling off exponentially as the receiver moves behind it.
    return clamp(exp(k * (moment - shadowCoord.z)), 0.0, 1.0);
}

I saved 8ms per frame by using this algorithm, so it shouldn't be a big deal to sacrifice a few of those ms to emulate filtering for better quality.
I implemented 3x3 median-filter post-processing (yes, I'm jumping all over the place). I hope you can see the slightly more fluid appearance of the screenshot below. I'm using the method presented by Morgan McGuire in ShaderX6.



You may remember that I had image-space outlines a while ago. There's a big problem with such outlines: because they overlay the rendering, they can occlude the precious screen-space of small objects. A better solution might be rim highlights (used in some form by Valve), where the fragment brightens as the normal approaches perpendicularity with the eye vector. For additional contrast, you can use drop-shadows, calculated from the depth-difference of a fragment neighbourhood. It makes sense that receding fragments should be darker and occluded by brighter advancing fragments.
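The rim term itself is just something along these lines (the exponent here is an arbitrary illustrative value, not what the engine actually uses):

#include <algorithm>
#include <cmath>

// n and v are assumed normalized: the surface normal and the direction from
// the fragment toward the eye.
float RimFactor(float nx, float ny, float nz,
                float vx, float vy, float vz,
                float power)   // e.g. 4.0f, purely illustrative
{
    float facing = std::max(0.0f, nx * vx + ny * vy + nz * vz);
    return std::pow(1.0f - facing, power);   // 1 at grazing angles, 0 head-on
}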

I already have some form of rim-highlights (could be better though). I plan to swap the existing (but broken) outline code for drop-shadows soon.



Fun fact: the following is valid code. Call me a n00b, but I didn't know you could pass a class template name as a template argument like this.

template <typename T> struct A
{
    typedef T type;
};
template <template <typename> class T, typename U> struct B
{
    typename T<U>::type value;
};
int main()
{
    B<A, int> b;
}
I've made the leap into Lua scripting (finally). This has been the biggest hole in the project for a long time. So, here's what I have now:



That guy stuck in the ground is an NPC(!). The exclamation mark is because he's the first ever NPC in the engine. He's at position [0,0,0] because I haven't implemented the Character.position property in Lua yet. He might be participating in collision detection, so I'm really not sure at the moment why he's in the ground.

Here's the Lua script that makes this happen:

print("Hello world!", _VERSION)
coroutine.yield()
print("this is the second frame")
local Fred = Character:new("character/male-1/male.char");
coroutine.wait("false");
print("we should never get here")
Quit()



Anyway, it's really beautiful outside, so I'm leaving to enjoy it!
So one of the classic problems with Win32 is the macro pollution.

As some of you may remember, near and far used to have significance as keywords on the segmented x86 architecture. They remain, defined as empty macros.

Enter: me. I want to make a frustum class with near and far members. Unfortunately, when <windows.h> is in view, my members get hit by the preprocessor. What do I do?

One option is to change the member names. If the members are public, this results in a frustum class with a non-obvious interface.

GCC provides an alternate solution: #include_next. It works like this:
  1. Create a special header in a subdirectory of your project tree. I use "src/extern/windows.h".

    #pragma GCC system_header

    #ifndef page_extern_windows_inc
    #define page_extern_windows_inc

    #include_next <windows.h>

    #undef far
    #undef near

    #endif

  2. Add -Isrc/extern to your CFLAGS.
When you include <windows.h>, you'll get "src/extern/windows.h" first, which will itself include the real <windows.h> and undefine the offending macros.
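With the wrapper in place, something like this hypothetical struct (not PAGE's actual frustum class) compiles cleanly:

#include <windows.h>   // picks up src/extern/windows.h first, thanks to -Isrc/extern

struct Frustum
{
    float left, right, bottom, top;
    float near, far;   // safe now that the empty macros are undefined
};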

Dynamic AABB

I finished implementing dynamic axis-aligned bounding boxes for skinned meshes. I use a two-stage approach, where each stage is cached and only updated when necessary.
  1. The first stage calculates static bounding information. This includes an OBB for static vertices and a bounding capsule for each bone. Each vertex affects the bounding capsule of its most influential bone. At this point, the OBB is essentially an AABB before the object transformation.
  2. The second stage uses the information from the first stage to construct the AABB. The bounding capsules are transformed by the current pose and added to the static OBB. Then the OBB is transformed by the current object transformation and converted to the AABB (a rough sketch follows below).
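In code, the second stage boils down to something like this; the types and helper functions are simplified stand-ins rather than the engine's actual math classes:

#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative types only.
struct Vec3    { float x, y, z; };
struct Mat4    { float m[16]; };                 // column-major, OpenGL style
struct Capsule { Vec3 a, b; float radius; };     // per-bone bound in bind pose
struct Box     { Vec3 min, max; };               // axis-aligned box

Vec3 TransformPoint(const Mat4 &t, const Vec3 &p)
{
    Vec3 r = {
        t.m[0] * p.x + t.m[4] * p.y + t.m[8]  * p.z + t.m[12],
        t.m[1] * p.x + t.m[5] * p.y + t.m[9]  * p.z + t.m[13],
        t.m[2] * p.x + t.m[6] * p.y + t.m[10] * p.z + t.m[14]};
    return r;
}

void Grow(Box &box, const Vec3 &p, float r)
{
    box.min.x = std::min(box.min.x, p.x - r);  box.max.x = std::max(box.max.x, p.x + r);
    box.min.y = std::min(box.min.y, p.y - r);  box.max.y = std::max(box.max.y, p.y + r);
    box.min.z = std::min(box.min.z, p.z - r);  box.max.z = std::max(box.max.z, p.z + r);
}

// Stage 1 output, cached until the mesh itself changes.
struct StaticBounds
{
    Box                  staticVertices;   // bounds of the unskinned vertices
    std::vector<Capsule> boneCapsules;     // one capsule per bone
};

// Stage 2: grow the model-space box by the posed capsules, then take the
// world-space AABB of the transformed box corners.
Box ComputeDynamicAabb(const StaticBounds &stat,
                       const std::vector<Mat4> &pose,    // current per-bone matrices
                       const Mat4 &objectTransform)
{
    Box box = stat.staticVertices;
    for (std::size_t i = 0; i < stat.boneCapsules.size() && i < pose.size(); ++i)
    {
        const Capsule &c = stat.boneCapsules[i];
        Grow(box, TransformPoint(pose[i], c.a), c.radius);
        Grow(box, TransformPoint(pose[i], c.b), c.radius);
    }

    Vec3 corners[8] = {
        {box.min.x, box.min.y, box.min.z}, {box.max.x, box.min.y, box.min.z},
        {box.min.x, box.max.y, box.min.z}, {box.max.x, box.max.y, box.min.z},
        {box.min.x, box.min.y, box.max.z}, {box.max.x, box.min.y, box.max.z},
        {box.min.x, box.max.y, box.max.z}, {box.max.x, box.max.y, box.max.z}};
    Box aabb = {TransformPoint(objectTransform, corners[0]),
                TransformPoint(objectTransform, corners[0])};
    for (int i = 1; i < 8; ++i)
        Grow(aabb, TransformPoint(objectTransform, corners[i]), 0.0f);
    return aabb;
}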

Click the picture to see the video.



Now I can start thinking about doing some scene partitioning. I also want to work on some atmospheric effects.

Greetings

Updating a journal is a lot like committing to a Subversion repository; if you don't do it often enough, the updates pile up and you lose track of your changes. But I'll try my best.

I scored a contract position doing anatomical 3D modeling for medical education. I'm looking forward to doing more DCC instead of just programming.

I made the leap to C++0x today. I'm using GCC 4.3.0 with this patch for delegating constructors. The patch is a year old, but it appears to be working after some minor tweaks.

I'm in the process of implementing dynamic AABBs for skinned objects. This is needed for two things: spatial partitioning and calculation of center points. I'm also working on a "hero cam" a la Fable. I can't remember everything I've done, so here's the Subversion log dump! You'll notice that my commit logging habits have improved over time.

I have failed miserably to follow through on my promise of frequent releases.

Edit: It's about time I threw some kudos over to dbaumgart, whom I know personally to be an excellent digital artist.
I spent some time implementing Theora video recording via libtheora. My implementation is currently quite slow, mainly because of two things:
  1. I'm rendering off-screen at the specified video resolution, rather than simply grabbing the current framebuffer and scaling. This yields higher quality, but at a much reduced recording speed.
  2. My RGB to Y'CbCr 4:2:0 conversion uses floating-point calculations and no SIMD (a rough sketch of that kind of conversion follows below).
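Here's roughly the per-pixel part of such a conversion, using BT.601 studio-range coefficients for illustration; the real code differs in the details, and the 2x2 chroma averaging for 4:2:0 happens on top of this:

struct YCbCr { unsigned char y, cb, cr; };

// r, g, b in [0, 1]; BT.601 studio-range conversion.
YCbCr RgbToYCbCr601(float r, float g, float b)
{
    YCbCr out;
    out.y  = static_cast<unsigned char>( 16.0f +  65.481f * r + 128.553f * g +  24.966f * b);
    out.cb = static_cast<unsigned char>(128.0f -  37.797f * r -  74.203f * g + 112.000f * b);
    out.cr = static_cast<unsigned char>(128.0f + 112.000f * r -  93.786f * g -  18.214f * b);
    return out;
}
// For 4:2:0, Cb and Cr are then averaged (and stored once) per 2x2 block of pixels.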
As a result, my frame-rate drops to about 4 fps during recording. Here's the YouTube video (poor quality), and the original video (16 MB). The original video seems to crash the Theora DirectShow filter, but it plays fine under MPlayer.

Bug-fix update

I've fixed a number of bugs since the first release, perhaps the most important being ATI card support. The problems were partly incorrect GLSL code on my side (as we all know, nVidia's GLSL compiler is rather permissive), and partly a subtle GLSL incompatibility on ATI's part regarding the linkage of multiple const variable declarations with only one definition, which is valid according to the spec.

Here is the new version: [zip | tar.gz] (24 MB)

Note that this is a bug-fix release; there are no new features or art. I intend to make frequent releases, so don't feel like you have to download this one.

After overhauling the package (library) selection part of the build system, I'm currently working on OpenGL optimization and X11 support (for Linux).

First demo release!



Download: [zip | tar.bz2 | tar.gz] (23 MB)

The demo requires OpenAL. If you don't have it already, you can grab it here.

To play with the options, open up the "conf" file in a text editor. If you have a faster machine, you might try reducing vid.shadow.down and enabling vid.shadow.blur.

Please post any comments, especially with regard to what you thought was good and what was lacking. I'm also interested to hear what kind of performance you get. I get 20-25 fps on a Geforce 6600. If you get errors or crashes, send me the log file, or post the last few lines.

Current issues:
  • No scene partitioning!
  • Skinning is entirely on CPU, without SIMD
  • Lots of glGet calls
  • No object collision detection
  • Insufficient toolset == slow content development
  • Lack of interface
I'm in the process of fixing up a lot of rendering code. I've been working on it since about 6:00pm yesterday and I'm quite tired, so I've lost the ability to write good code and I'm just fixing syntax errors before I hit the sack. Anyway, I fixed them all and got this interesting shot:



I've been implementing some more flexible shader pipeline and fixed-function fallback code, which is probably the cause of this funky result.

EDIT 4:00pm: Actually, I now realize my texture coordinates are probably being lost, combined with the emissive pass happening for everything despite there being no emissive objects. Although the emissive pass shouldn't cause dark halos around the trees, since it's additive...

EDIT 4:20pm: I fixed the texture coordinate issue. I had previously added some code to assign the texture coordinate set for each texture unit using an index vector during the glXXXPointer setup, but I wasn't passing it in the shader path, so the pointer-setup function thought there weren't any textures. A similar issue was also causing a missing normal: the function thought there wasn't any lighting, so it didn't bother setting the normal pointer. Next up is the emissive bug.

The last two days have been spent working on shadow mapping. I implemented variance shadow mapping pretty quickly, owing to the excellent paper and demo from AndyTX and Pragma. The rest of the time was spent tweaking the appearance and fixing my own errors.

I'm only doing shadows from the sun, which is directional, meaning orthographic projection. Since the engine is meant for fairly close quarters with a top-down-ish camera orientation, I can use a moveable shadow window of constant size that fits over the visible scene, as well as relatively close near/far planes for high depth precision. This moveable window is the last major shadow mapping feature left to implement for the time being. Essentially, I need to determine the position of the orthographic frustum and calculate its near/far planes from its intersection with the viewing frustum.

For determining its position, I could either have it pass through the player's origin (since the camera will typically be orientated toward the player), or perhaps read back the depth at the center of the screen and have it pass through whatever the camera is aiming at, although this last option would be slower and would cause the shadow map to remain one frame behind the current frame. Any other ideas?
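For the first option, the setup could look roughly like this; gluLookAt/glOrtho stand in for however the matrices actually get built, and the window size and near/far distances here are placeholders rather than values derived from the view frustum:

#include <GL/gl.h>
#include <GL/glu.h>

void SetupSunShadowMatrices(float playerX, float playerY, float playerZ,
                            float sunDirX, float sunDirY, float sunDirZ,  // normalized, sun toward scene
                            float windowSize,                             // e.g. 40 metres
                            float nearDist, float farDist)
{
    float h = windowSize * 0.5f;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-h, h, -h, h, nearDist, farDist);   // constant-size orthographic window

    // Back the "light camera" away from the player along the sun direction so
    // the player sits between the near and far planes. Assumes the sun isn't
    // directly overhead, or the up vector would degenerate.
    float dist = 0.5f * (nearDist + farDist);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(playerX - sunDirX * dist, playerY - sunDirY * dist, playerZ - sunDirZ * dist,
              playerX, playerY, playerZ,
              0.0, 1.0, 0.0);
}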

Here are some images of the variance shadow mapping. Forgive the small size, but note that the in-game characters will likely be even smaller on screen than this. I will take some larger screenshots later. The first two shots use a 2048x2048 shadow map spread over a 40x40 metre area with a Gaussian blur. The last shot shows a 512x512 shadow map over the same area with hardware linear filtering and no blur.



I also spent some time before this working on the image-space outlines. I'm using the basic technique presented in GPU Gems 2 (Blueprint Rendering and "Sketchy Drawings"), minus the depth peeling and sketchy rendering. Essentially, it involves rendering the normal and depth to an FBO. From there you can extract the edges using 4 samples and some dot products. I avoided floating-point textures because they're slow on my hardware, and I failed to find a good method of packing the depth, so I ended up only using the normals. I'm wondering whether packing the 32-bit depth into an RGBA8 texture would allow for hardware filtering without corrupting the encoded value. I will likely take another stab at this sometime in the future, when time allows.

I'm also running into the inevitable combinatorial explosion in my rendering shader. Therefore, I'm planning to have the engine automatically create programs by combining the proper fragment/vertex shaders together, using information from a compile-time database with categories and dependencies. This is possible because GLSL doesn't resolve functions until linking, which means that if I'm not using a diffuse texture for instance, I can link with a GetDiffuse function that returns vec4(1.). My intention is to load all possible combinations on initialization, which should be less than 100 for my purposes. For accessing the programs at runtime, I'll need a way to specify the required features (diffuse texture?, normal mapping?, shadowing?, etc) via function parameters, maybe flags. Then the implementation will fetch the shader that provides the required features. How does this sound?
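To illustrate the link-time substitution with raw GL 2.0 calls (just a sketch; the real engine will pick its sources from the compile-time database, and this assumes the GL 2.0 shader entry points are available):

#include <GL/gl.h>

static GLuint CompileFragment(const char *src)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &src, 0);
    glCompileShader(shader);
    return shader;
}

GLuint BuildProgram(bool useDiffuseTexture)
{
    // The main shader only declares GetDiffuse(); the call is resolved at link time.
    const char *mainSrc =
        "vec4 GetDiffuse();\n"
        "void main() { gl_FragColor = GetDiffuse(); }\n";
    const char *texturedSrc =
        "uniform sampler2D diffuseSampler;\n"
        "vec4 GetDiffuse() { return texture2D(diffuseSampler, gl_TexCoord[0].xy); }\n";
    const char *stubSrc =
        "vec4 GetDiffuse() { return vec4(1.0); }\n";

    GLuint program = glCreateProgram();
    glAttachShader(program, CompileFragment(mainSrc));
    glAttachShader(program, CompileFragment(useDiffuseTexture ? texturedSrc : stubSrc));
    glLinkProgram(program);
    return program;
}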

My avatar is quickly regaining its relevance. It's been almost a year since I added the Christmas theme, but I think it looks better this way anyway.

Erm...

Working on the textures a bit. What do you think: exquisite taste or fashion disaster?



I want to go with a hand-painted look, but that would mean creating all of the textures by hand, which would take forever. So for the time being, I'm only going to paint textures for which I don't have an existing alternative. After all, I think I've overshot my desired first demo release date by a fair bit already.

By the way, Ben Cloward has some useful textures available for use in "any project, personal or professional". They appear to be raw photos, so you may have to spend some effort making them seamless and equalizing the dark/light spots.

Tool maintenance

Since I'm using an unmaintained open-source tool in my pipeline (Wings 3D), it can involve some extra work when I run into functionality that I need but that isn't available. Such was the case today when I tried to do some UV mapping. The UV tools in Wings are missing a lot of the functionality of the modelling tools. I like to use scale-from-point as an axis-constrained absolute scale, but it was missing from the UV toolset.

It ended up taking me about 7 hours to add it to the UV tools because Erlang, as used in Wings 3D, can be confusing for me, particularly with regard to the menu system. I don't like wasting so much time on such an insignificant issue, but I really wanted it. Anyway, I released the patch here if anybody thinks it could be useful to them.

Environment art

I'm working on some buildings for the demo. This inn is the third attempt; the previous ones were learning experiences and not really presentable. It is not complete; the rooms have been fleshed out, with a stairway and administration desk, but it needs some additional architectural details and objects like dining tables, lanterns, and beds.



Some things I've learned in the process:
  • Since the camera is third person, the buildings need to be fairly spacious to maintain some distance between the character and the camera.

  • One should have some kind of floor plan concept before one begins modelling.
One book I've found useful is "The English Mediaeval House" by Margaret Wood. It contains details about various aspects of domestic architecture from around the 12th to 15th centuries, with lots of pictures. It seems to be out of print.

Music

Music is all finished for the demo, but I haven't posted anything about it. I thought I would do so now. Since I'm not the most capable composer, and I don't have a lot of money to contract it out, I've been using public domain sheet music. There are multiple sites with large archives of it, dating back to the 14th century and beyond. And, AFAICT, music notation hasn't changed much since then. I was using the International Music Score Library Project before it recently got shut down. Now I'm using the Werner Icking Music Archive.

My workflow goes as follows. Browse the site until I find a piece that would be appropriate for the required atmosphere. Fire up Rosegarden and enter the music by hand. Select the instruments according to my General MIDI instrument range guide, which is probably of questionable technical accuracy. Play with the levels and export to MIDI. Use Timidity with Frank Wen's Fluid R3 soundfont to render to WAV. Compress with FLAC and Ogg Vorbis.

Of course, custom music by an indie with good synths would be preferable, but this solution provides reasonable quality without any expense.
I think my current user interface theme is too big; it takes up too much screen space. Themes in PAGE use a base vertical resolution specified in the theme resource to determine the scale of the interface elements. So I've changed it from 864 to 1200 and modified the font sizes to match.
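As a rough illustration of why (the actual theme code may compute the scale differently):

// Hypothetical: interface elements scale by the ratio of the actual vertical
// resolution to the theme's base vertical resolution.
float ThemeScale(float screenHeight, float themeBaseHeight)
{
    return screenHeight / themeBaseHeight;
}
// With 1080 visible lines: 1080 / 864 is about 1.25, but 1080 / 1200 = 0.9,
// so raising the base resolution shrinks the interface.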

I've decided to use circular icons instead of square ones. I'm still going with the metallic border and wooden base, but I've decided to try gold embossing for the embedded icon shape. I still have to texture the wood, but here is the current main menu icon. I think I may have to tone down the highlights. [wink]



I used Inkscape to create the paths before importing them into GIMP.

Working on icons

I'm working on the icons now. I need 4 icons for the in-game command bar. I looked at the interfaces in some other games (Dreamfall, Guild Wars, and Warcraft 3) for inspiration. Dreamfall uses simple monochromatic icons...




Guild Wars and Warcraft 3 use color icons on a dark background surrounded by a stone/metallic border. I don't have any images from Warcraft, but I think most of you are familiar with it. From Guild Wars...




I tried a few different styles. First, I tried a clean, rounded border like in Dreamfall. However, this creates a futuristic look and I'm going for a medieval style. I'm currently trying out a wooden base with a clean metallic border and monochromatic Dreamfall-style engravings...



The walking animation is complete and working in the engine. I made the arm animation a little less stiff and more noodle-like. The offset bone that I mentioned earlier made it easy to avoid foot sliding.

I spent a few days on the exporter, making it possible to export animations with constraints without baking. To avoid generating keyframes for every frame of the animation, I build a list of dependencies for each bone. The dependencies can be constraints or non-exported parent bones that get collapsed. We can then say that a keyframe will be exported for a bone whenever one of its dependencies sets a keyframe. It ended up taking so long because I had some difficulties converting local bone matrices from Blender's coordinate system (Z axis up, inverted X axis) to PAGE's (Y axis up).
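The rule itself is simple; roughly (with placeholder types rather than the exporter's real structures):

#include <cstddef>
#include <map>
#include <set>
#include <string>
#include <vector>

struct BoneExportInfo
{
    std::vector<std::string> dependencies;   // constraints and collapsed parent bones
};

// A keyframe is written for a bone at every frame where any of its
// dependencies sets a keyframe.
std::set<int> ExportedFrames(const BoneExportInfo &bone,
                             const std::map<std::string, std::set<int> > &framesByDependency)
{
    std::set<int> frames;
    for (std::size_t i = 0; i < bone.dependencies.size(); ++i)
    {
        std::map<std::string, std::set<int> >::const_iterator it =
            framesByDependency.find(bone.dependencies[i]);
        if (it != framesByDependency.end())
            frames.insert(it->second.begin(), it->second.end());
    }
    return frames;
}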

Quick update

I worked on the arm rigging today. I failed to find a way to control the tracking of the elbow, so the IK is of limited use. Because of this, I used FK to animate the arms for the walk cycle. Here is the final walk-cycle for now. Tomorrow I will export it into the engine and write the little bit of code to move through the walk animation depending on the player's speed. I will also need a turning animation someday, but I think I will do without that for now.

More walk cyclage

Here is the second walk cycle attempt. With the new spine rigging it actually has some hip and shoulder movement. I also decreased the stride distance.

Here is the new spine rigging. The bone that sticks out from the base of the neck controls the upper half of the spine, while the bone sticking out of the pelvis controls the lower half. Conceptually, the spine is supposed to bend elastically to fit between those two control bones. This was accomplished using some sneaky constraints, so it ends up not being quite as versatile as that.



By the way, in my last entry I suggested that the offset bone should be the parent of world-space IK bones. I've since discovered that this is wrong.
...and it's harder than I thought, even with the rigging. But here is my first attempt. I've only done the legs for now, and it seems a little... stiff.

I've learned about a few issues I need to take care of while doing it. I need an offset bone that is the root of everything else, even the theoretically world-space IK stuff. I also need to write a quick script for translating multiple bones to the cursor from a reference bone for foot registration when copying the flipped pose.

Here's my current human rig. The arms are straight up IK with constraints. The legs are more complex and are based on a few different rigging tutorials and examples from online (I will dig them up if anyone wants them). toe.rotation rotates the toe around the X axis. heel.rotation handles the lifting/planting of the heel/toes, depending on the angle. And foot.track handles the rotation of the foot overall (to match the angle of the floor), as well as its position.

Quick update

So, um... long time since my last post (as usual). I figured that staying away from GDnet would help me get more done.

Lots of things have been added to PAGE and I'm just working on the art a bit before I release a demo. I could list the changes, but I think I'll just wait a week and hopefully by then you can see for yourself.

Here's a screenshot of the wharf that I'm working on now. I used some information from here for the design. The man is for scale.



I'm currently finding and deriving textures from environment-textures.com for the wharf. The steps and the piles need better wood textures, and I'm not happy with the current deck wood texture either.

TODO before release:
  • icon graphics
  • more environment art
  • create the walking animation in Blender (the feet and arms are rigged with IK and constraints, so it should be straightforward, but I've been avoiding it anyway)

Linux

PAGE finally runs under Linux! I had to add support for .conf configuration files, a POSIX implementation of the sys/file, sys/proc, and sys/Timer modules, an X11 implementation of env/Window, and an X11 adapter for vid/opengl/Driver. The X11 code is very minimal right now, and some things don't work yet. But the important thing at this point is that it runs.



I also went on a bug hunt for a few days and slaughtered some particularly devious troublemakers. I'm in the process of trying to get everything ready for a demo release. This means I need to fix my renderer, which is theoretically almost working, get collision working again (it broke a long time ago during an overhaul), and finish some of the animation controllers. I also seriously need to put some time into the art, but I find it hard to do that when the code is crying out to me.
I'm working on some shadow mapping right now. It's the first time I've done this, so it's taking me longer than I hoped. Nevertheless, here are my first shadow mapping screenshots.




I'm using cube mapping with the depth packed into RGBA as described by Phantom here. Before this I had tried virtual shadow depth cube maps (apparently described somewhere in ShaderX 3, although I don't have the book), but failed because indirect cubemap sampling was causing an error for some reason. I'm planning to try variance shadow mapping with 2-component floating-point cube maps once I finish this implementation.

Obviously there are problems here. The first thing I'm going to do is render backfaces and try glPolygonOffset for bias. Once the noise is gone, I'll try some manual bilinear filtering, or maybe some cubic filtering (described in GPU Gems 2, I think).
Worked on some compositing. Alpha compositing is going to be used to make windows/widgets fade in/out. Glow compositing is for showing that something is clickable when the mouse is over it, or showing what is logically selected if the arrow keys are used.




I'm putting off implementing the rest of the controls for now, since it should be really straightforward (and it's getting boring). I need to clean up a bit of code and then move on to interaction right away.

I also met Ravuya, Moe, and Tape_Worm yesterday. It's good to have faces to put to the names now. We went and saw Transformers: lots of Hollywoodisms, but not completely unfortunate looking. Anyway, back to work.
There's a storm rolling in, so I'll just post this before switching to the laptop. I've been fixing bugs, cleaning stuff up, and implementing some minor new features. I haven't got as much code done as I wanted because I've been trying to get into more digital painting. Stahlberg's paint-over thread at CGTalk is pretty inspirational. Here's what I have for now. The leaf decorations were done a long time ago; I'm working on some better ones.
