

Rectangle

Member Since 26 Aug 2012
Last Active Jul 21 2014 05:21 PM

Topics I've Started

GLSL texture "tile selection" algorithm?

06 July 2014 - 08:25 PM

In theory, let's say I have a vertex shader which knows:

  • The dimensions of a 2D texture/tilesheet (in pixels)
  • The dimensions of each tile within that texture (in pixels)
  • The number of tiles within the texture (not necessarily RowCount * ColumnCount; the final row may fall short by some number of tiles)
  • A given "TileIndex", representing a desired tile # which increments from left to right, top to bottom (in that order)

Using the built-in gl_VertexID input variable to help create the proper texture coordinates, what sort of math/algorithm must be used to get this shader to "select" the correct tile and generate its corresponding texture coordinates (to be passed on to the fragment shader)?

I believe it has something to do with X + Y * Width, or perhaps the use of a modulus operation somewhere, but it becomes complicated when the only given is a single tile index value...

 

So, I've only gotten as far as the following:

#version 330 core

layout(location=0) in vec3 in_pos;      // Vertex Position
layout(location=1) in int in_tileIndex; // Desired Tile Index
out vec2 out_texCoord;                  // Resulting Texture Coordinates
                                        //  (passed on to the frag shader)

void main()
{
        // The following variables will eventually be uniforms/attributes, and are only
        //  used here for more immediate debugging while I figure this algorithm out...

	float tileWidth = 32;               // In pixels
	float tileHeight = 32;              // In pixels
	float tileCount = 88;               // Not always rows * cols
	float imgWidth = 255;               // In pixels
	float imgHeight = 351;              // In pixels
	float width = 1.0f / tileWidth;     // In texels (To be used as a scalar down below?)
	float height = 1.0f / tileHeight;   // In texels (To be used as a scalar down below?)

	///////////////////////////////////////
	// NOT SURE WHAT TO PUT HERE...
	// (need to find a way to use the
	//  above info to generate correct
	//  texture coordinates below...)
	///////////////////////////////////////

	int vid = gl_VertexID % 4;
	switch(vid) {
		case 0:
			out_texCoord = vec2(0, 1);  // Currently set to the entire image (instead of a single tile within it)
			break;
		case 1:
			out_texCoord = vec2(1, 1);  // Currently set to the entire image (instead of a single tile within it)
			break;
		case 2:
			out_texCoord = vec2(0, 0);  // Currently set to the entire image (instead of a single tile within it)
			break;
		case 3:
			out_texCoord = vec2(1, 0);  // Currently set to the entire image (instead of a single tile within it)
			break;
	}
	gl_Position = vec4(in_pos, 1);
}
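
To make the math I'm imagining more concrete, here is an untested CPU-side sketch (plain C++; the struct and function names are just placeholders of mine) of what I believe belongs in the section marked above, which I would then translate into GLSL:

// Untested sketch: map a tile index to the normalized UV rectangle of that tile.
// Assumes tiles are packed left to right, top to bottom, with no padding.
struct UVRect { float u0, v0, u1, v1; };

UVRect TileUVs(int tileIndex,
               float imgWidth, float imgHeight,   // texture dimensions, in pixels
               float tileWidth, float tileHeight) // tile dimensions, in pixels
{
    int tilesPerRow = int(imgWidth / tileWidth);  // whole tiles that fit across one row

    int col = tileIndex % tilesPerRow;            // column within the sheet (left to right)
    int row = tileIndex / tilesPerRow;            // row within the sheet (top to bottom)

    UVRect r;
    r.u0 = (col * tileWidth)        / imgWidth;   // left edge, normalized
    r.v0 = (row * tileHeight)       / imgHeight;  // top edge, normalized
    r.u1 = ((col + 1) * tileWidth)  / imgWidth;   // right edge, normalized
    r.v1 = ((row + 1) * tileHeight) / imgHeight;  // bottom edge, normalized
    return r;
}

If that's the right idea, the switch statement would then pick between (u0, v0), (u1, v0), (u0, v1) and (u1, v1) per corner instead of the hard-coded 0/1 values (possibly with V flipped, depending on how the image is loaded), and tileCount would only be needed to clamp or wrap an out-of-range index... Does that sound right?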

Photoshop-style "Layer View" panel using SDL and C++/MFC?

25 June 2014 - 07:24 AM

Overview

 

Okay so first, a little background about my project...

I have an MSVC solution containing two separate projects:

  1. Tile Editor ("OnionEditor")
  2. Tile Engine ("OnionEngine")

... And in OnionEngine, I have the following objects:

  1. TileSheet (Responsible for loading an image and defining the SDL_Rects of each individual tile)
  2. TileLayer (Responsible for keeping track of tile data: image index, XY coords, row count, column count, etc)

In OnionEditor, I have a "main" editor view which subclasses CView, and another class which subclasses CDockablePane and is responsible for displaying thumbnail views of each individual tile layer. In OnionEngine, each TileSheet object is (simply put) an SDL_Texture, created using the SDL_Renderer from the main editor view, plus an array of SDL_Rects covering the area of each individual tile within the loaded texture; this allows a simple SDL_RenderCopy to the main view, using a tile index to specify the source rect to render. By pre-assigning an array of tile indices to a TileLayer object and iterating over them with a simple for loop, rendering individual tile layers becomes very efficient, since each TileSheet is a single hardware-accelerated texture...
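
In code, the inner render loop for a single layer boils down to something like the following (condensed, untested C++ sketch; the parameter names here are simplified stand-ins for my actual members):

#include <SDL.h>

// One SDL_RenderCopy per map cell, using the pre-assigned tile index to pick
// the source rect out of the TileSheet's texture.
void RenderLayer(SDL_Renderer* renderer, SDL_Texture* sheetTexture,
                 const SDL_Rect* tileRects,  // one source rect per tile in the sheet
                 const int* tileIndices,     // pre-assigned tile index per map cell
                 const SDL_Rect* destRects,  // pre-calculated destination per map cell
                 int cellCount)
{
    for(int i = 0; i < cellCount; ++i)
        SDL_RenderCopy(renderer, sheetTexture, &tileRects[tileIndices[i]], &destRects[i]);
}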

 

Issues

 

And now comes the tricky part... In my main editor view, I've set up a render loop which uses its own SDL_Window and SDL_Renderer.

In the CLayerView object, I create new classes (each of which subclasses CWnd) containing their own SDL_Window and SDL_Renderer, responsible for displaying scaled-down versions of each individual tile layer. The issue here, aside from a possible performance hit, is that SDL ties a texture to the single renderer it was created with, and that texture seemingly cannot be displayed using another renderer. I have tried using render targets, which work reasonably well, but they still give me no way to get the rendered output into another view; creating a texture with SDL_TEXTUREACCESS_TARGET doesn't allow texture locking, and creating a texture with SDL_TEXTUREACCESS_STREAMING doesn't allow the use of SDL_SetRenderTarget...

 

Even trickier, I need some way to render each individual layer separately from the others into its own offscreen surface, as well as composite all of those layers into the main editor view, without causing the view to flicker from multiple calls to SDL_RenderClear and SDL_RenderCopy... Which would basically mean that I need to use render targets anyway, right?

 

Question

 

So I suppose the real question here is 50% design, and 50% technical...

What can I do to achieve the following:

  1. Full-scale, hardware-accelerated textures of each individual layer (to somehow be scaled down and copied into a separate renderer)
  2. Scaled-down, hardware-accelerated copies of each individual layer (to render into each individual layer thumbnail view)
  3. Composited texture of each full-scale layer (to render into the main editor view)

... Or am I just dreaming that this is even remotely possible? I'm willing to take whatever I can get here.

Hell, I've even tried just using simple CDC and CBitmap objects at strategic moments, but failed to separate the layers without flickering the editor view. Please help! Thanks...
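
For what it's worth, the closest thing I have to a plan right now is to read the pixels of a render-target texture back through the main renderer and upload them into a texture owned by the thumbnail renderer. A rough, untested sketch (the function name and details are placeholders, not working code):

#include <SDL.h>
#include <vector>

// Untested: copy the contents of a render-target texture (owned by the main
// renderer) into a new streaming texture owned by the thumbnail renderer.
SDL_Texture* CopyLayerToThumbnail(SDL_Renderer* mainRenderer,
                                  SDL_Texture*  layerTarget,   // created with SDL_TEXTUREACCESS_TARGET
                                  SDL_Renderer* thumbRenderer,
                                  int w, int h)                // dimensions of layerTarget
{
    // 1) Read the pixels back while the layer is bound as the main renderer's target.
    std::vector<Uint32> pixels(static_cast<size_t>(w) * h);
    SDL_SetRenderTarget(mainRenderer, layerTarget);
    SDL_RenderReadPixels(mainRenderer, NULL, SDL_PIXELFORMAT_ARGB8888,
                         pixels.data(), w * 4);                // pitch: 4 bytes per pixel
    SDL_SetRenderTarget(mainRenderer, NULL);

    // 2) Upload those pixels into a texture created by the other renderer.
    SDL_Texture* thumb = SDL_CreateTexture(thumbRenderer, SDL_PIXELFORMAT_ARGB8888,
                                           SDL_TEXTUREACCESS_STREAMING, w, h);
    SDL_UpdateTexture(thumb, NULL, pixels.data(), w * 4);
    return thumb;  // caller scales it down via SDL_RenderCopy into the thumbnail view
}

I realize the GPU-to-CPU readback makes this far from free (and it would have to happen every time a layer changes), which is a big part of why I'm asking whether there's a better overall design.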


Copying render target data to D3DPOOL_MANAGED?

30 April 2013 - 02:35 PM

The engine I am using creates render targets like so:

D3DXCreateTexture(pD3DDevice, width, height, 1, D3DUSAGE_RENDERTARGET, D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT, &pRT)

 

It also creates textures with:

D3DXCreateTexture(pD3DDevice, width, height, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &pTex)

 

Is it possible to copy the render target data to one of these D3DPOOL_MANAGED textures? In DX8, we used CopyRects to get the job done. In DX9, it appears to have been replaced with UpdateTexture, UpdateSurface, GetRenderTargetData, LockRect, etc. But since the formats slightly differ, as well as the pools, most attempts I have made at using these functions have failed at one point or another with either E_FAIL or D3DERR_INVALIDCALL.
 
I did eventually get this to work via:
inline PVOID LockedBits(LPDIRECT3DSURFACE9 surface, UINT w, UINT h, INT* size)
{
    D3DLOCKED_RECT lr;
    RECT rc = {0, 0, w, h};
    surface->LockRect(&lr, &rc, 0);
    if(size) *size = (w*h) * 4;
    return lr.pBits;
}
inline PVOID LockedBits(LPDIRECT3DTEXTURE9 texture, UINT w, UINT h, INT* size)
{
    D3DLOCKED_RECT lr;
    RECT rc = {0, 0, w, h};
    texture->LockRect(0, &lr, &rc, 0);
    if(size) *size = (w*h) * 4;
    return lr.pBits;
}

LPDIRECT3DTEXTURE9 CloneTextureFromTarget(LPDIRECT3DDEVICE9 device, LPDIRECT3DTEXTURE9 target, UINT w, UINT h)
{
    D3DDISPLAYMODE dm;
    device->GetDisplayMode(0, &dm);

    // Create source and destination surfaces and copy rendertarget
    LPDIRECT3DSURFACE9 dstSurf = NULL, srcSurf = NULL;
    device->CreateOffscreenPlainSurface(w, h, dm.Format, D3DPOOL_SYSTEMMEM, &dstSurf, NULL);
    target->GetSurfaceLevel(0, &srcSurf);
    device->GetRenderTargetData(srcSurf, dstSurf);
    SafeRelease(&srcSurf);

    // Create destination texture
    LPDIRECT3DTEXTURE9 dstTexture = NULL;
    D3DXCreateTexture(device, w, h, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &dstTexture);

    // Get bits for destination surface and texture
    INT dwSrc, dwDst;
    PVOID pBitsSrc = LockedBits(dstSurf, w, h, &dwSrc);
    PVOID pBitsDst = LockedBits(dstTexture, w, h, &dwDst);

    // Copy bits from surface to texture
    RtlCopyMemory(pBitsSrc, pBitsDst, dwSrc);
    dstTexture->UnlockRect(0);
    dstSurf->UnlockRect();
    SafeRelease(&dstSurf);

    /* Just to double-check if it worked... */
    D3DXSaveTextureToFileA("C:\\outSrc.png", D3DXIFF_PNG, target, NULL);
    D3DXSaveTextureToFileA("C:\\outDst.png", D3DXIFF_PNG, dstTexture, NULL);

    // Return the result
    return dstTexture;
}

 

...But even though both textures save to disk and appear correct, the returned texture refuses to render (or only renders black pixels).
And yet, all other textures created by the engine in this manner seem to render just fine. So what am I doing wrong?
And what would be the most efficient way of getting this to work?
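
One thing I keep second-guessing is the copy step itself, since RtlCopyMemory takes the destination as its first argument and the locked pitch isn't necessarily w * 4. Here is the row-by-row version I have been meaning to try instead (untested; CopyLockedRows is just my own placeholder name):

// Untested alternative to the LockedBits/RtlCopyMemory block above: copy row by
// row, honoring each surface's pitch, with the destination passed first.
inline void CopyLockedRows(LPDIRECT3DTEXTURE9 dstTexture, LPDIRECT3DSURFACE9 srcSurf, UINT w, UINT h)
{
    D3DLOCKED_RECT lrSrc, lrDst;
    srcSurf->LockRect(&lrSrc, NULL, D3DLOCK_READONLY);
    dstTexture->LockRect(0, &lrDst, NULL, 0);

    const BYTE* src = static_cast<const BYTE*>(lrSrc.pBits);
    BYTE*       dst = static_cast<BYTE*>(lrDst.pBits);
    for(UINT y = 0; y < h; ++y)
    {
        RtlCopyMemory(dst + y * lrDst.Pitch,  // destination row in the managed texture
                      src + y * lrSrc.Pitch,  // source row in the sysmem surface
                      w * 4);                 // 4 bytes per pixel (X8R8G8B8 / A8R8G8B8)
    }

    dstTexture->UnlockRect(0);
    srcSurf->UnlockRect();
}

In CloneTextureFromTarget, that would take the place of the LockedBits calls, the RtlCopyMemory line and the two UnlockRect calls (with dstSurf passed in as the source surface), though I honestly don't know yet whether that alone explains the black output.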

Texture splatting on the GPU?

24 April 2013 - 11:35 AM

I'm using HGEPP to handle the rendering of a 2D, top-down tile map.

To create a more organic feel, I decided to implement texture splatting.

When a tileset is allocated, it checks each tile for a property which determines if it can be splatted against other tiles.

Once an array of "splattable" tile graphics has been made, I create new tiles which implement transitions in every direction for each other "splattable" graphic.

This is incredibly time-consuming, and I was wondering if it would be possible to move this functionality into a DX9-compatible pixel shader.

 

Just as a quick example, I'm trying to do the following:

ZIyNE.png (first tile texture) + 2PWKNbp.png (second tile texture) + ygbKah7.png (alpha map) = t6aOMf8.png (desired blended result)

 

With the third image as an alpha map, how could this be implemented in a DX9-compatible pixel shader to "blend" between the first two images, creating an effect similar to the fourth image?

Furthermore, how could this newly created texture be given back to the CPU, where it could be placed back inside the original array of textures?
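
For reference, the per-texel blend I'm doing on the CPU today boils down to something like this (simplified, untested C++ sketch; BlendTexel and the Color layout are placeholders for my actual code):

// Simplified version of the per-texel CPU blend: the alpha map decides how far
// to blend from tile A toward tile B. This is the math I'd like the shader to do.
struct Color { unsigned char b, g, r, a; };   // assuming 32-bit BGRA texels

inline Color BlendTexel(Color texA, Color texB, unsigned char alpha) // alpha map value, 0..255
{
    Color out;
    out.r = static_cast<unsigned char>((texA.r * (255 - alpha) + texB.r * alpha) / 255);
    out.g = static_cast<unsigned char>((texA.g * (255 - alpha) + texB.g * alpha) / 255);
    out.b = static_cast<unsigned char>((texA.b * (255 - alpha) + texB.b * alpha) / 255);
    out.a = 255;
    return out;
}

My understanding is that in a pixel shader this is just a lerp between two samplers driven by a third, but I'm not sure how to set that up for DX9, or how to get the blended result back to the CPU afterward.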


Looping ONLY through visible tiles in a 2D engine?

19 April 2013 - 12:21 AM

I'm using HGE to render a one-dimensional array of tiles. Each tile in this array stores vertex, color and texture information, which is pre-calculated (only once, before the first render cycle) and does NOT change. Instead, I have implemented a virtual camera system, storing an X/Y pixel offset + view width/height information, allowing a tilemap to be rendered at any given offset without actually changing the vertices of any tiles. This means that tilemaps are always located at 0,0 world-space coordinates, but the user is allowed to scroll through the map using the camera system.
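
(In other words, the tile vertices never move; the camera offset is simply subtracted at draw time, along the lines of this trivial sketch with placeholder names:)

// World-space tile position + camera offset -> screen-space draw position.
inline void WorldToScreen(int worldX, int worldY, int camX, int camY,
                          int& screenX, int& screenY)
{
    screenX = worldX - camX;  // camera scroll is applied here at draw time...
    screenY = worldY - camY;  // ...so the pre-calculated vertices never change
}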

 

I had this system working rather nicely for a while, until it was time to apply a few optimizations... One of these was calculating which tiles are within range of the camera view, and only rendering those tiles. This optimization nearly doubled my rendering speeds, but I realized the downside is that I am still looping through each and every tile in the array. For smaller maps, this wasn't a big deal. But for larger maps, the performance hit is almost unbearable... And this is only for a single layer of tiles!

 

So, given the following variables:

  • One-dimensional Tile Array (stores vertex info)
  • Camera X Offset (pixel, world-space)
  • Camera Y Offset (pixel, world-space)
  • Camera View Width (pixel, screen-space)
  • Camera View Height (pixel, screen-space)
  • Tilemap Size (logical # of columns & rows)
  • Tile Dimensions (logical width & height, in pixels)

How could I create a loop which ONLY iterates through tiles within range of the camera's current view?
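
Just to make it concrete, here's the shape of the loop I think I'm after (untested C++ sketch; all of the names are placeholders rather than my real members):

// Untested: derive the visible column/row range from the camera, then iterate
// over only those tiles in the one-dimensional array. Tile and RenderTile are
// stand-ins for my actual tile struct and per-tile HGE render call.
struct Tile { /* pre-calculated vertex, color and texture info */ };
void RenderTile(const Tile& tile);

void RenderVisibleTiles(const Tile* tiles, int mapCols, int mapRows,
                        int tileWidth, int tileHeight,
                        int camX, int camY, int viewWidth, int viewHeight)
{
    int firstCol = camX / tileWidth;                     // leftmost visible column
    int firstRow = camY / tileHeight;                    // topmost visible row
    int lastCol  = (camX + viewWidth  - 1) / tileWidth;  // rightmost visible column
    int lastRow  = (camY + viewHeight - 1) / tileHeight; // bottommost visible row

    // Clamp to the bounds of the map.
    if(firstCol < 0) firstCol = 0;
    if(firstRow < 0) firstRow = 0;
    if(lastCol >= mapCols) lastCol = mapCols - 1;
    if(lastRow >= mapRows) lastRow = mapRows - 1;

    for(int row = firstRow; row <= lastRow; ++row)
        for(int col = firstCol; col <= lastCol; ++col)
            RenderTile(tiles[row * mapCols + col]);      // index into the 1D array
}

Is that roughly the right approach, or is there a smarter way to handle it?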

