Community Reputation

161 Neutral

About Rectangle

  1. Using the following code (and with help from Kaptein), I could've sworn I had this whole thing solved... Guess not. It just shows a solid color from the texture...

     #version 330 core
     layout(location = 0) in vec3 in_pos;
     out vec2 out_texCoord;

     void main()
     {
         int tileNum = 1;
         int tileCount = 88;
         int tileIndex = tileNum % tileCount;
         float tileWidth = 32;
         float tileHeight = 32;
         float imgWidth = 255;
         float imgHeight = 351;
         int columnCount = int(imgWidth / tileWidth);
         int rowCount = int(imgHeight / tileHeight); // Unused
         int tileX = tileIndex % columnCount;
         int tileY = int(float(tileIndex) / float(columnCount));
         float startX = float(tileX) * tileWidth;
         float startY = float(tileY) * tileHeight;
         float endX = 1.0f / (startX + tileWidth);
         float endY = 1.0f / (startY + tileHeight);
         startX = 1.0f / startX;
         startY = 1.0f / startY;

         int vid = gl_VertexID % 4;
         switch(vid)
         {
             case 0: out_texCoord = vec2(startX, endY); break;
             case 1: out_texCoord = vec2(endX, endY); break;
             case 2: out_texCoord = vec2(startX, startY); break;
             case 3: out_texCoord = vec2(endX, startY); break;
         }

         gl_Position = vec4(in_pos, 1);
     }
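For what it's worth, the normalization in the shader above looks inverted: `startX = 1.0f / startX` takes the reciprocal of a pixel coordinate, but a texture coordinate is the pixel coordinate divided by the image dimension. A minimal CPU-side sketch of the intended math (`TileCoords` and `TileUV` are hypothetical names; the 256x128 sheet with 32x32 cells is just the example from elsewhere in this thread):

```cpp
#include <cassert>
#include <cmath>

struct TileUV { float startX, startY, endX, endY; };

// Normalized corner coordinates for a tile index, counting tiles
// left-to-right, top-to-bottom. The key difference from the shader
// above: pixel coordinates are divided BY the image size, rather
// than 1.0 being divided by the pixel coordinate.
TileUV TileCoords(int tileIndex, float tileW, float tileH,
                  float imgW, float imgH)
{
    int columnCount = int(imgW / tileW);
    int tileX = tileIndex % columnCount;
    int tileY = tileIndex / columnCount;
    TileUV uv;
    uv.startX =  (tileX * tileW)          / imgW;
    uv.startY =  (tileY * tileH)          / imgH;
    uv.endX   = ((tileX * tileW) + tileW) / imgW;
    uv.endY   = ((tileY * tileH) + tileH) / imgH;
    return uv;
}
```

With a 256x128 sheet of 32x32 cells (8 columns), tile 9 lands at column 1, row 1, so its top-left corner comes out as (0.125, 0.25).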
  2.   Hrmm... Strangely enough, your post didn't show up until after I made my last post... Anyhow, yes there is a use for a maximum tile count (see my last post for details). And could you please elaborate on what you mean by "You can also use a static array of texture coordinates to add based on gl_VertexID % 4"? I'm only using gl_VertexID to verify which "corner" of the image we are currently dealing with. Am I missing an alternative, more efficient solution to this?   Thanks!
  3. Here's a more visual example of what I am trying to accomplish... Let's say I have the following texture: [tilesheet image] You might notice that the tile index runs from left to right, top to bottom. No X/Y or row/column variables are necessary, as long as we know that each cell is 32x32 pixels and that the overall image is 256x128... You may also notice that the last 4 tiles are not part of this tilesheet, making a total of only 28 tiles in this image (out of a possible 32, in this case).

But when dealing with shaders, we are working with normalized texels which represent the entire image. Effectively, this means we need to scale down the width & height of each cell to some value between 0 and 1, depending on the actual dimensions of the image AND the dimensions of each cell within it, and somehow use that information (along with the current tile index and the maximum number of tiles within the image) to come up with all four corners (in texels) of the desired cell. Then, depending on which vertex # we are working on, we need to pass the correct values to the fragment shader, where the sampler can make use of these newly calculated texture coordinates, effectively displaying a single tile/cell of the texture.

Any idea how this can be achieved?
  4. In theory, let's say I have a vertex shader which knows:
     • The dimensions of a 2D texture/tilesheet (in pixels)
     • The dimensions of each tile within that texture (in pixels)
     • The number of tiles within the texture (not necessarily RowCount * ColumnCount... the final row may fall short X number of tiles)
     • A given "TileIndex", representing a desired tile # which increments from left to right, top to bottom (in that order)

Using the built-in gl_VertexID input variable to aid in the creation of proper texture coordinates, what sort of math/algorithm must be used to get this shader to "select" the correct tile index and corresponding texture coordinates (to be passed on to the fragment shader)? I believe it has something to do with X + Y * Width, or perhaps the use of a modulus operation somewhere, but it becomes complicated when the only given is a single tile index value...

So, I've only gotten as far as the following:

     #version 330 core
     layout(location = 0) in vec3 in_pos;       // Vertex Position
     layout(location = 1) in int in_tileIndex;  // Desired Tile Index
     out vec2 out_texCoord;                     // Resulting Texture Coordinates
                                                // (passed on to the frag shader)
     void main()
     {
         // The following variables will eventually be uniforms/attributes, and are only
         // used here for more immediate debugging while I figure this algorithm out...
         float tileWidth = 32;   // In pixels
         float tileHeight = 32;  // In pixels
         float tileCount = 88;   // Not always rows * cols
         float imgWidth = 255;   // In pixels
         float imgHeight = 351;  // In pixels
         float width = 1.0f / tileWidth;   // In texels (To be used as a scalar down below?)
         float height = 1.0f / tileHeight; // In texels (To be used as a scalar down below?)

         ///////////////////////////////////////
         // NOT SURE WHAT TO PUT HERE...
         // (need to find a way to use the
         // above info to generate correct
         // texture coordinates below...)
         ///////////////////////////////////////

         int vid = gl_VertexID % 4;
         switch(vid)
         {
             // Currently set to the entire image (instead of a single tile within it):
             case 0: out_texCoord = vec2(0, 1); break;
             case 1: out_texCoord = vec2(1, 1); break;
             case 2: out_texCoord = vec2(0, 0); break;
             case 3: out_texCoord = vec2(1, 0); break;
         }

         gl_Position = vec4(in_pos, 1);
     }
  5. Meh. Guess I'll just try my hand at SFML and see if it has better support for this sort of thing.
  6. SlimDX Screenshot

    There's really not enough information here, and it looks like you're not doing any error checking either (unless, of course, you've just simplified the code for posting)... Does CreateOffscreenPlain succeed and give you a valid pointer? Or does it generate an error? Is g_device3D a valid pointer to an initialized D3D::Device you've created?
  7. Overview

Okay so first, a little background about my project... I have an MSVC solution containing two separate projects:
     • Tile Editor ("OnionEditor")
     • Tile Engine ("OnionEngine")

... And in OnionEngine, I have the following objects:
     • TileSheet (responsible for loading an image and defining the SDL_Rects of each individual tile)
     • TileLayer (responsible for keeping track of tile data: image index, XY coords, row count, column count, etc.)

In OnionEditor, I have a "main" editor view which subclasses CView, and another class which subclasses CDockablePane and is responsible for displaying thumbnail views of each individual tile layer. In OnionEngine, each TileSheet object is (simply put) an SDL_Texture, created using the SDL_Renderer from the main editor view, and containing an array of SDL_Rects which cover the area of each individual tile within the loaded texture, allowing a simple SDL_RenderCopy to the main view (by using a tile index to specify the source rect to render). By pre-assigning an array of tile indices to a TileLayer object, and using a simple for loop to iterate over each one, this becomes very efficient for rendering individual tile layers using a single hardware-accelerated texture for each TileSheet...

Issues

And now comes the tricky part... In my main editor view, I've set up a render loop which uses its own SDL_Window and SDL_Renderer. In the CLayerView object, I create new classes (each of which subclasses CWnd) containing their own SDL_Window and SDL_Renderer, responsible for displaying scaled-down versions of each individual tile layer. The issue here, other than a possible performance hit, is that SDL requires a texture to be associated with only one renderer upon creation, and seemingly cannot display it using another renderer.

I have tried using render targets, which work reasonably well, but still provide me with no way to get that rendered output into another view; creating a texture with SDL_TEXTUREACCESS_TARGET doesn't allow texture locking, and creating a texture with SDL_TEXTUREACCESS_STREAMING doesn't allow the use of SDL_SetRenderTarget...

Even trickier, I need some way to render each individual layer separately from the others into its own offscreen surface, as well as composite each of those into the main editor view, without causing the view to flicker from multiple calls to SDL_RenderClear and SDL_RenderCopy... Which would basically mean that I need to use render targets anyway, right?

Question

So I suppose the real question here is 50% design, and 50% technical... What can I do to achieve the following:
     • Full-scale, hardware-accelerated textures of each individual layer (to somehow be scaled down and copied into a separate renderer)
     • Scaled-down, hardware-accelerated copies of each individual layer (to render into each individual layer thumbnail view)
     • A composited texture of all the full-scale layers (to render into the main editor view)

... Or am I just dreaming that this is even remotely possible? I'll be willing to take whatever I can get here. Hell, I've even tried just using simple CDC and CBitmap objects at strategic moments, but failed to separate the layers without flickering the editor view. Please help! Thanks...
  8. The engine I am using creates render targets like so:

     D3DXCreateTexture(pD3DDevice, width, height, 1, D3DUSAGE_RENDERTARGET,
                       D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT, &pRT)

It also creates textures with:

     D3DXCreateTexture(pD3DDevice, width, height, 1, 0,
                       D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &pTex)

Is it possible to copy the render target data to one of these D3DPOOL_MANAGED textures? In DX8, we used CopyRects to get the job done. In DX9, it appears to have been replaced with UpdateTexture, UpdateSurface, GetRenderTargetData, LockRect, etc. But since the formats slightly differ, as well as the pools, most attempts I have made at using these functions have failed at one point or another with either E_FAIL or D3DERR_INVALIDCALL.

I did eventually get this to work via:

     inline PVOID LockedBits(LPDIRECT3DSURFACE9 surface, UINT w, UINT h, INT* size)
     {
         D3DLOCKED_RECT lr;
         RECT rc = {0, 0, w, h};
         surface->LockRect(&lr, &rc, 0);
         if(size) *size = (w * h) * 4;
         return lr.pBits;
     }

     inline PVOID LockedBits(LPDIRECT3DTEXTURE9 texture, UINT w, UINT h, INT* size)
     {
         D3DLOCKED_RECT lr;
         RECT rc = {0, 0, w, h};
         texture->LockRect(0, &lr, &rc, 0);
         if(size) *size = (w * h) * 4;
         return lr.pBits;
     }

     LPDIRECT3DTEXTURE9 CloneTextureFromTarget(LPDIRECT3DDEVICE9 device, LPDIRECT3DTEXTURE9 target, UINT w, UINT h)
     {
         D3DDISPLAYMODE dm;
         device->GetDisplayMode(0, &dm);

         // Create source and destination surfaces and copy rendertarget
         LPDIRECT3DSURFACE9 dstSurf = NULL, srcSurf = NULL;
         device->CreateOffscreenPlainSurface(w, h, dm.Format, D3DPOOL_SYSTEMMEM, &dstSurf, NULL);
         target->GetSurfaceLevel(0, &srcSurf);
         device->GetRenderTargetData(srcSurf, dstSurf);
         SafeRelease(&srcSurf);

         // Create destination texture
         LPDIRECT3DTEXTURE9 dstTexture = NULL;
         D3DXCreateTexture(device, w, h, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &dstTexture);

         // Get bits for destination surface and texture
         INT dwSrc, dwDst;
         PVOID pBitsSrc = LockedBits(dstSurf, w, h, &dwSrc);
         PVOID pBitsDst = LockedBits(dstTexture, w, h, &dwDst);

         // Copy bits from surface to texture
         RtlCopyMemory(pBitsSrc, pBitsDst, dwSrc);
         dstTexture->UnlockRect(0);
         dstSurf->UnlockRect();
         SafeRelease(&dstSurf);

         /* Just to double-check if it worked... */
         D3DXSaveTextureToFileA("C:\\outSrc.png", D3DXIFF_PNG, target, NULL);
         D3DXSaveTextureToFileA("C:\\outDst.png", D3DXIFF_PNG, dstTexture, NULL);

         // Return the result
         return dstTexture;
     }

...But even though both textures save to disk and appear correct, the returned texture refuses to render (or only renders black pixels). And yet, all other textures created by the engine in this manner seem to render just fine. So what am I doing wrong? And what would be the most efficient way of getting this to work?
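For what it's worth, two things in the code above look suspicious to me. First, RtlCopyMemory's parameter order is (Destination, Source, Length), so `RtlCopyMemory(pBitsSrc, pBitsDst, dwSrc)` would copy the freshly created (empty) texture over the surface data rather than the other way around. Second, the LockedBits helpers assume each row is exactly w*4 bytes, but D3DLOCKED_RECT::Pitch can be larger than that. A row-by-row copy that respects both pitches, sketched here with plain memcpy (same argument order as RtlCopyMemory) over raw buffers:

```cpp
#include <cassert>
#include <cstring>

// Copy a w x h block of 32-bit pixels between two locked rects whose
// rows may be padded. srcPitch/dstPitch are in bytes, as reported by
// D3DLOCKED_RECT::Pitch. Note the destination comes first, matching
// both memcpy and RtlCopyMemory.
void CopyLockedRect(unsigned char* dst, int dstPitch,
                    const unsigned char* src, int srcPitch,
                    int w, int h)
{
    for(int y = 0; y < h; ++y)
        std::memcpy(dst + y * dstPitch, src + y * srcPitch, w * 4);
}
```

This is only a sketch of the copy step; the surrounding lock/unlock calls would stay as in the post, but with both D3DLOCKED_RECT pitches passed through instead of assuming w*4.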
  9. Texture splatting on the GPU?

    If you look at my original (unedited) post, you will see that I've spent countless hours researching, experimenting, and trying to figure this out. However, I wasn't using the correct search terms, and only found examples in GLSL. To complicate matters even further, I had to make it specific to HGE, since there is limited support for shaders and other D3D functions without modifying the HGE source. I also had no knowledge of a linear interpolation function, and even had I known of one, it would've failed to render properly in FX Composer without a proper vertex shader (which would seem pointless for my purposes). After someone from StackOverflow mentioned a similar answer, I tested it in Visual Studio's Shader Designer and was able to produce the desired output. I was coming here to confirm that it works, but you beat me to the punch.

In any regard, thanks for the help.
  10. I'm using HGEPP to handle the rendering of a 2D, top-down tile map. To create a more organic feel, I decided to implement texture splatting. When a tileset is allocated, it checks each tile for a property which determines if it can be splatted against other tiles. Once an array of "splattable" tile graphics has been made, I create new tiles which implement transitions in each and every direction for every other "splattable" graphic. This is incredibly time consuming, and I was wondering if it would be possible to move this functionality into a DX9-compatible pixel shader.

Just as a quick example, I'm trying to do the following: [texture A] + [texture B] + [alpha map] = [blended result]

With the third image as an alpha map, how could this be implemented in a DX9-compatible pixel shader to "blend" between the first two images, creating an effect similar to the fourth image? Furthermore, how could this newly created texture be given back to the CPU, where it could be placed back inside the original array of textures?
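The blend being described is a straight per-pixel linear interpolation, which is what HLSL's lerp() intrinsic does in a ps_2_0 pixel shader (roughly `lerp(tex2D(base, uv), tex2D(overlay, uv), tex2D(alphaMap, uv).a)`, with hypothetical sampler names). The same math on the CPU, as a sanity check of what the shader would compute:

```cpp
#include <cassert>
#include <cmath>

struct Pixel { float r, g, b; };

// Blend two texels through an alpha-map value in [0, 1]:
// alpha = 0 yields the base texel, alpha = 1 yields the overlay,
// and values in between mix the two linearly per channel.
Pixel Splat(const Pixel& base, const Pixel& overlay, float alpha)
{
    Pixel out;
    out.r = base.r + (overlay.r - base.r) * alpha;
    out.g = base.g + (overlay.g - base.g) * alpha;
    out.b = base.b + (overlay.b - base.b) * alpha;
    return out;
}
```

Getting the blended result back to the CPU would mean rendering into a render target and reading it back, which is essentially the render-target copy problem discussed elsewhere in this post history.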
  11. Fly from one camera to another

    Just a curious thing to note... How would you handle situations where the target camera's orientation is the polar opposite of the current camera's orientation? For example, camera A is rotated at 90 degrees while camera B is rotated at 270 degrees... It seems lengthy to create a spline which keeps this transition "moving forward", since it would have to first choose a direction and then fulfill a 180 degree "S-spline" interpolation. Just something to keep in mind.

That being said, I do agree with Hodgman's suggestion over my own, but depending on what this is for, there may be certain scenarios where a more 'immediate' response to orientation could benefit the whole package.
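For the simple yaw-only case, one common way to sidestep the "which way around?" problem is to wrap the angular difference into [-180, 180) before interpolating, so the camera always takes the shorter arc and resolves the exact-opposite case deterministically. A sketch, assuming headings in degrees (`LerpAngle` is a hypothetical helper, not from any library mentioned here):

```cpp
#include <cassert>
#include <cmath>

// Interpolate between two headings (in degrees) along the shorter
// arc. The difference is wrapped into [-180, 180) first; for polar
// opposites (a 180-degree difference) the wrap deterministically
// picks one direction instead of flip-flopping per frame.
float LerpAngle(float from, float to, float t)
{
    float diff = std::fmod(to - from, 360.0f);
    if(diff < -180.0f) diff += 360.0f;
    if(diff >= 180.0f) diff -= 360.0f;
    return from + diff * t;
}
```

For full 3D orientations the same idea is usually done with quaternion slerp, which likewise picks the shorter arc by negating one endpoint when the dot product is negative.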
  12. Fly from one camera to another

    Sounds like a fun little task. Well first off, I would attach a target vector to the camera class, as well as some sort of maximum speed limit to act as a scalar value for each axis. Next, I would have an update function in the camera class which takes the delta time between frames as a parameter, and call that function each frame. In the update function, I would use the typical formula of:

     camPos.x += ((camTarget.x - camPos.x) * deltaTime) * speedScalar.x;
     camPos.y += ((camTarget.y - camPos.y) * deltaTime) * speedScalar.y;
     camPos.z += ((camTarget.z - camPos.z) * deltaTime) * speedScalar.z;

This will animate the camera's current position to eventually match the target camera's position, at a speed based upon the scalar value. You can raise or lower this value as needed until it feels right to you.

As for making the camera "point" in the direction it needs to go, just do a simple "LookAt" transformation (i.e., "gluLookAt" or the "Matrix.LookAt*" functions) from your camera's current position to the target camera's position, and either set that as the first target camera (to animate the rotation sequence) or determine at which points along the animation you would like to "combine" these animations, and I believe you should get the effect you're looking for. Or at least it should get you off to a pretty good start.

Of course, this is just one way to do it, though.
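One caveat worth adding to the formula above: multiplying the remaining distance by deltaTime makes the motion depend on the frame rate (two half-steps don't add up to one full step, and a large deltaTime spike can overshoot the target). An exponential form of the same damped follow avoids that; a one-axis sketch (`Approach` is a hypothetical helper):

```cpp
#include <cassert>
#include <cmath>

// Frame-rate-independent damped follow: equivalent in spirit to
// pos += (target - pos) * deltaTime * speed, but slicing a timestep
// into smaller pieces yields exactly the same end position, and the
// factor (1 - exp(-rate * dt)) never exceeds 1, so it cannot overshoot.
float Approach(float pos, float target, float rate, float dt)
{
    return pos + (target - pos) * (1.0f - std::exp(-rate * dt));
}
```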
  13. Thanks! With a couple of minor tweaks, I was able to get this algorithm to work:

     void Render(HGE* hge, float cameraX, float cameraY, float cameraWidth, float cameraHeight)
     {
         int firstcolumn = int( floor(cameraX / m_tileWidth) );
         int firstrow = int( floor(cameraY / m_tileHeight) );
         int lastcolumn = int( ceil((cameraX + cameraWidth) / m_tileWidth) );
         int lastrow = int( ceil((cameraY + cameraHeight) / m_tileHeight) );

         for(int r = firstrow; r < lastrow; r++)
         {
             for(int c = firstcolumn; c < lastcolumn; c++)
             {
                 hge->Gfx_RenderQuad(&m_tiles[r * m_nColumns + c].quad);
             }
         }
     }
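One guard worth adding to the range computation above (purely a suggestion, and only needed if the camera is allowed to scroll past the map edges): clamp the computed range before looping, otherwise a negative firstrow or an oversized lastcolumn would index outside m_tiles. A minimal sketch of the clamp (`ClampRange` is a hypothetical helper):

```cpp
#include <algorithm>
#include <cassert>

// Clamp a [first, last) tile range to [0, count) so a camera that
// hangs past the map edge can't produce out-of-bounds tile indices.
void ClampRange(int& first, int& last, int count)
{
    first = std::max(first, 0);
    last  = std::min(last, count);
}
```

Applied once to the row range (with the map's row count) and once to the column range (with m_nColumns) before entering the loops.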
  14. I'm using HGE to render a one-dimensional array of tiles. Each tile in this array stores vertex, color, and texture information, which is pre-calculated (only once, before the first render cycle) and does NOT change. Instead, I have implemented a virtual camera system, storing an X/Y pixel offset plus view width/height information, allowing a tilemap to be rendered at any given offset without actually changing the vertices of any tiles. This means that tilemaps are always located at 0,0 world-space coordinates, but the user is allowed to scroll through the map using the camera system.

I had this system working rather nicely for a while, until it was time to apply a few optimizations... One of which included calculating which tiles are within range of the camera view, and only rendering those tiles. This optimization nearly doubled my rendering speeds, but I realized the downfall is that I am still looping through each & every tile in the array. For smaller maps, this wasn't a big deal. But for larger maps, the performance hit is almost unbearable... And this is only for a single layer of tiles!

So, given the following variables:
     • One-dimensional tile array (stores vertex info)
     • Camera X offset (pixels, world-space)
     • Camera Y offset (pixels, world-space)
     • Camera view width (pixels, screen-space)
     • Camera view height (pixels, screen-space)
     • Tilemap size (logical # of columns & rows)
     • Tile dimensions (logical width & height, in pixels)

How could I create a loop which ONLY iterates through tiles within range of the camera's current view?