
mark ds

Member Since 07 Jan 2010

Topics I've Started

DXT for terrain normals?

10 July 2014 - 03:09 PM

This is just an exploratory question really, but here's my situation.

 

I have a very large terrain, with all the textures stuffed into a texture array, addressable by their array index. Eventually there will be quite a lot of textures (I envisage around 30 or more in total), but any given 'chunk' will only have a subset of the total - no more than, say, 16, which conveniently fits into 4 bits. I can then use a base offset for each chunk to determine which of the consecutive textures I can access in the array. The simplest approach would be to use an R8 texture, but that requires 8 bits per pixel when 4 bits would do.
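
To make that concrete, here's a rough sketch of the addressing I have in mind (plain C++, placeholder names - just an illustration):

```cpp
// Each chunk stores a base offset into the texture array; each texel stores a
// 4-bit local index (0..15) relative to that base.
#include <cstdint>

uint32_t ResolveArraySlice(uint32_t chunkBaseOffset, uint8_t localIndex4Bit)
{
    return chunkBaseOffset + (localIndex4Bit & 0x0F); // final slice in the texture array
}
```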

 

DXT3 textures handily provide 4 bits per pixel in the alpha, leaving RGB empty, which would be ideal for terrain vertex normals. Now, I understand that DXT is generally a crappy way to store normals, but I was wondering if anyone knows how bad it would be for terrains, bearing in mind that most normals point more or less up, rather than every-which-way.

 

Maybe there is a way of using RG & B to better encode XY normals? Or a better compression method that allows 4 bits per pixel.

 

I suspect, however, that I'll just have to use bytes and bit shifting to get at the data. A 'built-in' method (like DXT) would be nicer/easier though, and would avoid "if( mod(...) )" type statements.
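
For reference, this is roughly what the 'bytes and bit shifting' fallback would look like on the CPU side - a sketch with placeholder names, with the shader simply mirroring the shift/mask:

```cpp
// Pack two 4-bit texture-array offsets into each byte, and unpack one texel's offset.
#include <cstdint>
#include <vector>

std::vector<uint8_t> PackOffsets4Bit(const std::vector<uint8_t>& offsets)
{
    std::vector<uint8_t> packed((offsets.size() + 1) / 2, 0);
    for (size_t i = 0; i < offsets.size(); ++i)
    {
        const uint8_t nibble = offsets[i] & 0x0F;
        if (i & 1)
            packed[i / 2] |= nibble << 4; // odd texel -> high nibble
        else
            packed[i / 2] |= nibble;      // even texel -> low nibble
    }
    return packed;
}

uint8_t UnpackOffset4Bit(const std::vector<uint8_t>& packed, size_t texelIndex)
{
    const uint8_t byte = packed[texelIndex / 2];
    return (texelIndex & 1) ? (byte >> 4) : (byte & 0x0F);
}
```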

 

Any suggestions?


GPU based heightmap editor

08 June 2014 - 04:28 PM

Like many others, I'm currently going through tool creation hell!

 

Part of that includes a heightmap editor, which I'd gotten quite a way into - but it was CPU-based and I was really unhappy with it. So I've deleted it, lock, stock and barrel, and I'm going to rethink the whole thing and do it on the GPU.

 

Part of the problem was to do with vertex picking, and applying texture-based brushes centred on the 'picked' vertex. Whilst quadtrees are ideal for this, as the map grows (and I want very large maps) edits become quite a problem: the quad tree is constantly being updated before I can use it to find the vertex currently under the brush, and of course I have to guarantee that the data on the CPU and GPU stay in sync. There were several other minor issues which also made the performance inconsistent.

 

My idea is essentially to write a traditional event-driven Win32 program (GetMessage with vsync off, rather than PeekMessage with vsync on) which triggers the GPU to update the heightmap on 'mouse down' events at fixed time intervals - maybe 20 times per second.

For vertex picking, I intend to write to two colour buffers - a normal colour buffer, and a 32-bit RG floating-point buffer using colours interpolated from 0.0f, 0.0f up to the heightmap width and height (e.g. 8192.0f). On mouse move, I can then use the mouse XY coordinates to copy the pixel value under the mouse into a 1x1 texture, which can then be read back by a shader (rounding the RG colour to the nearest integers gives me a precise XY vertex), which in turn can modify heightmap values around those coordinates using another texture 'brush'. This also allows for a really simple undo/redo mechanism entirely on the GPU, by pre-copying an area into a buffer before modifying the original data.
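
To make the picking and brush step concrete, here's a rough CPU-side sketch of the logic the shaders would implement - names and the brush falloff are just placeholders:

```cpp
// Resolve the picked vertex from the 1x1 RG readback, then apply a radial brush
// centred on it (the GPU version would do the same per-texel work in a shader).
#include <algorithm>
#include <cmath>
#include <vector>

struct PickResult { int x, y; };

// Rounding the interpolated RG colour to the nearest integers recovers the exact
// heightmap vertex under the mouse.
PickResult ResolvePick(float r, float g)
{
    return { static_cast<int>(std::lround(r)), static_cast<int>(std::lround(g)) };
}

// Simple linear-falloff brush applied around the picked vertex.
void ApplyBrush(std::vector<float>& heightmap, int mapSize,
                PickResult centre, int radius, float strength)
{
    for (int y = std::max(0, centre.y - radius); y <= std::min(mapSize - 1, centre.y + radius); ++y)
        for (int x = std::max(0, centre.x - radius); x <= std::min(mapSize - 1, centre.x + radius); ++x)
        {
            const float dist = std::hypot(float(x - centre.x), float(y - centre.y));
            if (dist <= radius)
                heightmap[y * mapSize + x] += strength * (1.0f - dist / radius);
        }
}
```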

 

The benefits include keeping only one GPU-side set of data, and better performance to boot.

 

 

 

I spent many, many weeks on the original version, so I thought it wise to post here before I embark on version 2!

 

My question is, can anyone think of any problems I may encounter with the above approach, or maybe suggest a totally different approach to a heightmap editor?

 


timeBeginPeriod under Windows 8.1 is always 15ms using GetTickCount?

07 December 2013 - 09:25 AM

I was playing around with some code earlier, and I noticed that whatever value I use for timeBeginPeriod, the OS seems to ignore it and always uses 15ms increments for GetTickCount, but honours it for timeGetTime. Under Windows 7, timeBeginPeriod applied to both GetTickCount and timeGetTime.
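
For reference, this is roughly the kind of test I was running - not the exact code, just a sketch:

```cpp
// Request 1 ms timer resolution, then watch how each timer advances.
#include <windows.h>
#include <mmsystem.h> // timeBeginPeriod / timeGetTime
#include <cstdio>
#pragma comment(lib, "winmm.lib")

int main()
{
    timeBeginPeriod(1); // ask for 1 ms resolution

    const DWORD startTick = GetTickCount();
    const DWORD startTime = timeGetTime();

    for (int i = 0; i < 20; ++i)
    {
        Sleep(1);
        // On Windows 8.1 (as described above), GetTickCount only advances in
        // ~15-16 ms steps, while timeGetTime honours the 1 ms period.
        printf("GetTickCount: +%lu ms   timeGetTime: +%lu ms\n",
               GetTickCount() - startTick, timeGetTime() - startTime);
    }

    timeEndPeriod(1); // always pair with timeBeginPeriod
    return 0;
}
```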

 

It's *probably* the same using Win8, but I can't test it.

 

Can anyone else confirm this behaviour?


Self-shadowing terrain idea - asking for feedback

24 September 2013 - 10:07 AM

I've been wrestling for some time on how to produce decent quality, self-shadowing terrain that effectively shadows all other objects in the world. A couple of days ago I had an idea.

 

For any given point on the terrain, the direct sunlight contribution can be calculated thus:

 

While the sun is below the horizon or behind occluders (hills, mountains, etc.) its contribution is 0. We can describe this as a function of time - between 9pm and 10am, for example, a point on the terrain is NOT lit by the sun.

 

While the sun is fully visible its contribution is 1. This can happen between, say, 10.30am and 8.30pm.

 

The in-between times (as the sun appears over distant peaks) would essentially represent the penumbra period, where we could interpolate, based on the time of day, a contribution factor between 0 and 1.

 

However, whilst the above would work well for the ground, no information would be available to correctly light buildings or other objects.

 

So my idea is to represent sunlight using two height values per point on the terrain (probably per vertex in a heightmap). The first value is the height at which the sun begins to fully light a point above the terrain, and the second value is the height below which no sunlight has an effect (the first value would always be higher than the second).

 

So a fragment shader would effectively be:

 

if the fragment is above the first value, sunlight contribution = 1;
else if the fragment is below the second value, sunlight contribution = 0;
else interpolate between the two.
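
In actual code that boils down to something like this (plain C++ rather than HLSL, purely for illustration):

```cpp
// fragmentHeight: world-space height of the point being shaded.
// litAbove:       first value  - fully sunlit above this height.
// shadowedBelow:  second value - no direct sunlight below this height.
float SunContribution(float fragmentHeight, float litAbove, float shadowedBelow)
{
    if (fragmentHeight >= litAbove)      return 1.0f;
    if (fragmentHeight <= shadowedBelow) return 0.0f;
    // Linear interpolation through the penumbra band; a smoothstep would also work.
    return (fragmentHeight - shadowedBelow) / (litAbove - shadowedBelow);
}
```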

 

This contribution would be included in a g-buffer (after a depth-only pass).

 

The sun changes position slowly over time, so I think it's perfectly possible to recalculate new values by dealing with a fairly small sector of the 'sunlightmap' each frame and storing the results in a 16-bit RG texture.

 

I haven't really fleshed out this idea, but I thought I'd throw it out there and hopefully get some feedback.

 

Cheers.

 

 

 

 


Events+WaitForMultipleObjects vs IO completion ports?

10 July 2012 - 09:33 AM

I'm frustum culling terrain using a quad tree, which is an obvious candidate for multithreading. In my case, I'm using a precalculated PVS pre-sorted front to back. I'm looking to chop up the PVS and multithread the culling.

Would using completion ports be advantageous over events and wait functions, performance-wise? There seem to be mixed opinions on this, with many saying that the maximum number of events is the only downside, which isn't relevant to this example.

Another possibility would be to use PostThreadMessage, though that would probably (?) be the least efficient.
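
For context, here's a rough sketch of how a completion port could be used purely as a work queue for the culling jobs (no file I/O involved; the names and job layout are just placeholders):

```cpp
// Workers block in GetQueuedCompletionStatus; each posted packet carries one
// slice of the PVS to cull. A zero completion key is used as a shutdown sentinel.
#include <windows.h>

struct CullJob { int firstNode; int nodeCount; }; // a slice of the PVS to cull

static HANDLE g_port;

DWORD WINAPI WorkerThread(LPVOID)
{
    DWORD bytes;
    ULONG_PTR key;
    LPOVERLAPPED overlapped;
    while (GetQueuedCompletionStatus(g_port, &bytes, &key, &overlapped, INFINITE))
    {
        if (key == 0) break; // sentinel: shut this worker down
        CullJob* job = reinterpret_cast<CullJob*>(key);
        // ... frustum-cull job->nodeCount PVS entries starting at job->firstNode ...
        delete job;
    }
    return 0;
}

int main()
{
    g_port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, nullptr, 0, 0);
    HANDLE worker = CreateThread(nullptr, 0, WorkerThread, nullptr, 0, nullptr);

    // Each PostQueuedCompletionStatus wakes exactly one waiting worker.
    PostQueuedCompletionStatus(g_port, 0, reinterpret_cast<ULONG_PTR>(new CullJob{ 0, 1024 }), nullptr);

    PostQueuedCompletionStatus(g_port, 0, 0, nullptr); // sentinel to stop the worker
    WaitForSingleObject(worker, INFINITE);
    CloseHandle(worker);
    CloseHandle(g_port);
    return 0;
}
```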

Cheers
