About irreversible

  1. C++ Get HWND of another application

    Read the reply in that thread. It outlines the method and mentions that you may not be dealing with a single main window. Once you can list the windows that belong to a process, figure out how to do the same for a different process. The first logical step here would be to substitute the current process for whatever process you need. Again, first reply. Note that there may be more than one instance of a single executable running, so you probably need to list all processes called notepad.exe, open each one and list all of its windows, doing your best to figure out which one is the main window. This may be trivial for notepad.exe, but not so much for something messier, like Gimp. This will only give you a valid result if the user is editing an unnamed and likely unsaved document, and only if there is one instance of Notepad running.
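As a rough illustration of the enumeration described above, here is what listing process instances and picking a window might look like using the Toolhelp snapshot and EnumWindows APIs. `FindProcessIds`, `FindMainWindow`, and the "visible, unowned, top-level" heuristic are my own placeholder choices, not a definitive main-window test:

```cpp
#ifdef _WIN32
#include <windows.h>
#include <tlhelp32.h>
#include <string>
#include <vector>

// Collect the PIDs of every running instance of an executable,
// e.g. L"notepad.exe" (remember: there may be more than one).
static std::vector<DWORD> FindProcessIds(const wchar_t* exeName)
{
    std::vector<DWORD> pids;
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if (snap == INVALID_HANDLE_VALUE)
        return pids;
    PROCESSENTRY32W entry = { sizeof(entry) };
    if (Process32FirstW(snap, &entry))
        do {
            if (_wcsicmp(entry.szExeFile, exeName) == 0)
                pids.push_back(entry.th32ProcessID);
        } while (Process32NextW(snap, &entry));
    CloseHandle(snap);
    return pids;
}

// Heuristic "main window" test: owned by the PID, visible, unowned.
struct EnumContext { DWORD pid; HWND found; };

static BOOL CALLBACK EnumProc(HWND hwnd, LPARAM lp)
{
    EnumContext* ctx = reinterpret_cast<EnumContext*>(lp);
    DWORD pid = 0;
    GetWindowThreadProcessId(hwnd, &pid);
    if (pid == ctx->pid && IsWindowVisible(hwnd) &&
        GetWindow(hwnd, GW_OWNER) == NULL)
    {
        ctx->found = hwnd;
        return FALSE; // stop enumeration
    }
    return TRUE;
}

static HWND FindMainWindow(DWORD pid)
{
    EnumContext ctx = { pid, NULL };
    EnumWindows(EnumProc, reinterpret_cast<LPARAM>(&ctx));
    return ctx.found;
}
#endif
```

For something like Gimp you would likely need a smarter heuristic than "first visible unowned window", e.g. comparing window titles or class names.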
  2. C++ Get HWND of another application

    First result.
  3. 3rd person camera on "rails"

    Based on your description I'm assuming you're not having trouble with setting up the view matrix so the camera looks at the player. It's not really clear, though, whether your problem is how to place the camera in the scene so it isn't obscured by objects, or if you're simply concerned with the transition from one angle to another. The former is a fairly complex problem and likely requires either manual camera placement or allowing the camera to see through geometry, as for instance in Divinity: Original Sin. The latter kinda depends on the context. Alone In The Dark simply jumps from one angle to another - the effect is jarring and highly effective at briefly disorienting the player. If your locations are more tightly knit together, you might consider a fast transition and obscure it with something like motion blur or lens distortion (you're likely already using a quaternion to interpolate the position and lookat vector anyway), or if you want to move really slowly and cinematically, you'll need to set up your camera movement manually - e.g. have it follow a spline and stop at specific locations based on where the player is.
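For the spline-following option mentioned above, here is a minimal position-interpolation sketch. `Vec3` and `CatmullRom` are hypothetical names, and a real camera would also interpolate orientation (e.g. with quaternion slerp), not just position:

```cpp
struct Vec3 { float x, y, z; };

static Vec3 operator*(const Vec3& v, float s) { return { v.x * s, v.y * s, v.z * s }; }
static Vec3 operator+(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

// Catmull-Rom segment between p1 and p2; p0 and p3 are the neighboring
// control points, t is in [0, 1]. The curve passes exactly through p1
// at t=0 and p2 at t=1, which makes it convenient for camera rails.
static Vec3 CatmullRom(const Vec3& p0, const Vec3& p1,
                       const Vec3& p2, const Vec3& p3, float t)
{
    const float t2 = t * t, t3 = t2 * t;
    return (p1 * 2.0f
          + (p2 + p0 * -1.0f) * t
          + (p0 * 2.0f + p1 * -5.0f + p2 * 4.0f + p3 * -1.0f) * t2
          + (p1 * 3.0f + p0 * -1.0f + p2 * -3.0f + p3) * t3) * 0.5f;
}
```

Driving `t` from the player's position along the track rather than from time is what produces the "camera on rails" feel.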
    This is actually the problem - yes, the encoding is identical, but all blocks are stored as 4x4 pixels, which are encoded using the original scanline order. In order to conform a regular DDS texture to GL, the order of individual scanlines (or UVs, as was pointed out) needs to be flipped vertically. Which is to say, after a texture is encoded, the flip also needs to be performed within each block, or you'll end up with a texture where each four-scanline horizontal slice is flipped vertically. Simply flipping the rows of blocks would effectively flip the order of the blocks, but not the individual scanlines. To fix that, the pixels in a block can be swizzled during loading. For BC3 that looks something like this:

        // (C) 2009 Cruden BV
        static void FlipDXT3BlockFull(unsigned char* block)
        {
            // swap the block's scanlines: rows 0<->3 and 1<->2, 2 bytes per row
            unsigned char tmp = block[0]; block[0] = block[6]; block[6] = tmp;
            tmp = block[1]; block[1] = block[7]; block[7] = tmp;
            tmp = block[2]; block[2] = block[4]; block[4] = tmp;
            tmp = block[3]; block[3] = block[5]; block[5] = tmp;
        }

    The problem is that while flipping encoded blocks is fairly easy for BC versions 1-5, the process is not as straightforward for BC 6/7 (and likely also ASTC), which AFAIK necessitates flipping the source texture and then re-encoding it. Encoding a large BC7 texture can take on the order of minutes, so as far as I can tell the only realistic solution is to perform this during cooking. This isn't something I'm just throwing out there, but rather something I'm currently dealing with in my own code.

    As far as flipping the V coordinate goes, I'm still not sure how that would work in all tiled cases (see below). Suppose you have a splatted surface or some sort of terrain and your V coordinate runs from 0.2 to 18.6 or some other similarly arbitrary tiling. The only way to flip that would be to know the UV bounds, which in itself can be cumbersome if not outright difficult in a shader. Now, what if the texture coordinates are animated?
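To sketch how a per-block swizzle like the one above fits into a whole-texture vertical flip: reverse the rows of blocks, then flip the scanlines inside each block. This is a sketch under assumptions - `FlipBlock8` and `FlipCompressedImage` are hypothetical names, and the 2-bytes-per-row swap only applies to the 8-byte explicit-alpha/row layout shown above; BC1 color blocks and BC3's interpolated-alpha half each need their own bit shuffles:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Per-block scanline flip for an 8-byte block with four 2-byte rows
// (rows 0<->3 and 1<->2), matching the swizzle quoted above.
static void FlipBlock8(unsigned char* b)
{
    std::swap(b[0], b[6]); std::swap(b[1], b[7]);
    std::swap(b[2], b[4]); std::swap(b[3], b[5]);
}

// Vertically flip a block-compressed image: first reverse the rows of
// 4x4 blocks, then flip the scanlines within each block. Dimensions
// are given in blocks, not pixels; blockBytes is 8 here.
static void FlipCompressedImage(unsigned char* data,
                                size_t widthBlocks, size_t heightBlocks,
                                size_t blockBytes)
{
    const size_t rowBytes = widthBlocks * blockBytes;
    std::vector<unsigned char> tmp(rowBytes);
    for (size_t top = 0, bottom = heightBlocks - 1; top < bottom; ++top, --bottom)
    {
        unsigned char* a = data + top * rowBytes;
        unsigned char* b = data + bottom * rowBytes;
        std::copy(a, a + rowBytes, tmp.data());
        std::copy(b, b + rowBytes, a);
        std::copy(tmp.data(), tmp.data() + rowBytes, b);
    }
    for (size_t i = 0; i < widthBlocks * heightBlocks; ++i)
        FlipBlock8(data + i * blockBytes);
}
```

Skipping either step reproduces the artifact described above: flipped slices every four scanlines, or blocks in the right rows with upside-down contents.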
  5. This may work in trivial cases, but not for geometry that uses texture tiling.
    I meant the vertical UV coordinate difference between D3D and OpenGL. Unless I'm uninformed and D3D allows setting (0, 0) to the bottom left corner as in GL, you need to flip your textures vertically. For block-compressed data this means not flipping scanlines across the entire texture, but rather within each block. I'm unaware of a way to accomplish this for BC 6/7 post-compression - it might be possible, but it seems to be easier to just re-compress, which is too expensive. This isn't an issue when targeting a single API, but unless I'm missing something, it seems like a problem when trying to support both.

    Hm - I'll give it a shot. The decompression can reasonably be performed once at first run, so that doesn't seem like too much of an issue.

    This makes sense. So, ship at max resolution, but during loading simply feed data to the GPU from a different mip offset. I have to admit I was overthinking it.
    Things are pretty straightforward if you only target a single tier (which, let's be honest, is my case), but I've still been pondering how to go about basic scalability.

    Assumption: all textures are encoded with BC (DDS) 1/(3)/5/6/7, or in the future ASTC once that reaches the mainstream. The target is the PC market.

    Building BC-compressed textures is trivial for 1/3/5, but becomes a strictly offline process for versions 6 and 7. Moreover, while cooking textures for a single API (D3D or OpenGL/Vulkan in this case) is a fixed process, switching between the two requires swizzling the blocks in the encoded texture. Again, this is fairly trivial for 1/3/5, but I'm not really aware of any publicly available implementation of how to do it for 6 and 7. In practice this means that the texture needs to be read and re-encoded for whichever API it wasn't cooked for. I'm assuming (sic!) this is also true for ASTC.

    The same problem applies to resolution - scaling BC 1/3/5 textures down on the user's machine probably entails a fairly short preprocessing step during first run or installation, but re-encoding a couple of hundred or more BC 6/7 textures will probably end with the game getting uninstalled before it is even run.

    So here are the options I can think of:
    - target only one API and don't care about supporting both
    - target both APIs and ship all high quality textures for either platform (or, you know, figure out how to swizzle BC 6/7 blocks); reorder blocks for BC 1/3/5 for your non-preferred API
    - ship textures in 2-3 sizes (eg 4k, 2k, 1k), turning a blind eye to space requirements
    - don't use texture compression for high quality textures

    Any thoughts on a semi-automatic "best case" implementation?
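One piece of the resolution-scaling problem that needs no re-encoding at all is skipping the top mips of a shipped mip chain at load time. A sketch of the bookkeeping, assuming a tightly packed chain with no alignment padding (`BcMipSize` and `BcMipOffset` are hypothetical names):

```cpp
#include <cstddef>

// Byte size of one mip level of a block-compressed texture.
// blockBytes is 8 for BC1/BC4 and 16 for BC2/3/5/6/7.
static size_t BcMipSize(size_t w, size_t h, size_t blockBytes)
{
    return ((w + 3) / 4) * ((h + 3) / 4) * blockBytes;
}

// Byte offset of mip level `skip` inside a tightly packed mip chain,
// so the loader can start uploading at a lower resolution ("ship at
// max resolution, feed the GPU from a different mip offset").
static size_t BcMipOffset(size_t w, size_t h, size_t blockBytes, unsigned skip)
{
    size_t offset = 0;
    for (unsigned i = 0; i < skip; ++i)
    {
        offset += BcMipSize(w, h, blockBytes);
        w = w > 1 ? w / 2 : 1;
        h = h > 1 ? h / 2 : 1;
    }
    return offset;
}
```

For a 256x256 BC1 texture, skipping one mip means starting the upload 32768 bytes in and treating the texture as 128x128.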
  8. What could be causing this crash (?)

    Hehe - thanks, but it's not that. My laptop lacks the extra proprietary-looking SATA connector and I'm holding out for a substantial upgrade when Ryzen versions of the new LG and/or Samsung models hopefully come out next year :).

    PS - the DLL is cached into memory, at which point it is initialized. By re-linking, the previous file gets overwritten, causing it to be reinitialized when the program is run again after a build. It's just strange to see it stall the main application like that.
  9. What could be causing this crash (?)

    Welp. I just had a walk outside and realized it's probably just ispc_texcomp.dll caching stuff when it gets initialized. The cache gets invalidated when the module is overwritten. Yes. Hard drive space.
    I'm adapting the Intel ISPC Texture Compressor into my workflow and I'm experiencing a strange issue. Basically, the first time after building the solution the executable crashes. Except that it doesn't. The following does NOT seem to happen on consecutive runs, but invariably occurs every time I relink:

    1) the binary seems to take a fair bit longer to start up (10 or even 15+ seconds). This happens for every instance of the binary (eg the delay occurs again when I copy the exe to a new location). Note that no external files are being referenced other than ispc_texcomp.dll.
    2) it then throws the "This program has stopped responding" error for about another 10 seconds. I can close this without stopping execution and it's barely enough time to start up a debug session, which seems to get me nowhere. The notification then goes away and the program executes normally, as if it had been stuck in a tight loop or was run a second time.
    3) steps 1 and 2 seem to happen before WinMain is called. Which is to say I can't even step into anything in the debugger before the end of the stall.

    Now, the number one thing here is that the code is DX-heavy, which is quite foreign to me. If I had to guess, I'd surmise the problem has to do with some kind of (shader) caching or whatnot by the DX API itself. Except that this seems to (so far) only happen prior to the first run and before any actual program code is called.

    The main reason I'm making a fuss about this is because I'm unsure about DX versions and might well be linking against bad modules. Also, the problem could potentially be exacerbated once I start using the tool for batch conversions. I'm using the June 2010 SDK libraries in VS2013, which I fed manually into the project.

    Can someone maybe download it and see if they get similar behavior, or suggest what might be causing this?
  11. Looking for tips in 3D

    Actually - writing a WAD (the Doom data container format) loader and a custom 3D level renderer from scratch is a fairly enlightening hobby project. The format is well documented, a lot of clones already exist for side-by-side comparison, including the original source code in case you get stuck, and there's the instant gratification of actually seeing stuff as soon as you figure out how to access the level data and get it on the screen. Not to mention that the payoff is doubled if you're a fan of the games. Writing a BSP loader (the level format that Quake and co use) is somewhat more involved*.

    * which is not to say the Doom level format isn't a form of binary space partitioning. I'm referring to the file extension here
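For a sense of how approachable the format is, here is a minimal WAD directory parser based on the documented layout: a 12-byte header ("IWAD"/"PWAD" tag, lump count, directory offset) followed by 16-byte directory entries (file offset, size, 8-character name). `ReadWadDirectory` and `WadLump` are hypothetical names; a little-endian host is assumed and lump payloads aren't validated:

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

struct WadLump
{
    std::string name;
    uint32_t offset; // byte offset of the lump's data within the file
    uint32_t size;   // lump size in bytes
};

static bool ReadWadDirectory(const std::vector<unsigned char>& wad,
                             std::vector<WadLump>& out)
{
    if (wad.size() < 12)
        return false;
    if (std::memcmp(wad.data(), "IWAD", 4) != 0 &&
        std::memcmp(wad.data(), "PWAD", 4) != 0)
        return false;

    uint32_t numLumps, dirOfs;
    std::memcpy(&numLumps, wad.data() + 4, 4);
    std::memcpy(&dirOfs, wad.data() + 8, 4);
    if (dirOfs + numLumps * 16ull > wad.size())
        return false;

    for (uint32_t i = 0; i < numLumps; ++i)
    {
        const unsigned char* e = wad.data() + dirOfs + i * 16;
        WadLump lump;
        std::memcpy(&lump.offset, e, 4);
        std::memcpy(&lump.size, e + 4, 4);
        char name[9] = {};                // names are zero-padded to 8 chars
        std::memcpy(name, e + 8, 8);
        lump.name = name;
        out.push_back(lump);
    }
    return true;
}
```

From there, rendering a level is mostly a matter of finding the map marker lump (e.g. E1M1) and reading the VERTEXES, LINEDEFS, and SECTORS lumps that follow it.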
  12. Extracting face/hit data after a GJK step

    Thanks for the explanations, Dirk and Randy!

    First off - I've read horror stories about EPA in the past, which is why I was hoping I wouldn't need to spend time on it. In my case I really only need feature information in order to figure out what type of surface my actor is in contact with.

    I've now completely rethought my approach. To give a little bit of perspective and describe what I ended up doing - the initial problem I was tackling was basically handling a car-like vehicle, which I was maintaining as a dynamic actor, applying impulses to steer it as needed. I only need it to report contact with walls, which basically terminate the game or throw the actor off in some direction, or with the ground, which can have a number of surface types that affect gameplay. I could sample this data directly, but I was naively hoping I might be able to integrate surface queries into my physics code. I gave up on that.

    Ultimately I ended up converting my dynamic player into a separately simulated kinematic actor. I'm now emulating all forces on it as a separate step and completely bypass the physics loop. Instead, I'm colliding it manually with a dynamically built poly soup, which already contains surface information.

    Just to clarify - you mean "make sure to take advantage of temporal coherence", right?
  13. Extracting face/hit data after a GJK step

    This is actually what I was kind of afraid of, as alluded to in my original post. Thanks for clearing it up. In any case - not being able to access more specific information largely defeats the purpose of my current narrowphase. I simply need surface features to perform things like sliding. Other than that I only need collision data for stuff like trigger overlaps - e.g. no actual physics.
  14. Extracting face/hit data after a GJK step

    Hm. My sets can be disjoint or touching, but not penetrating at the start of the measurement. In either case, the distance is calculated to determine the time of impact, with the assumption that a collision will occur during the current step. I need the surface features to know how to respond to this contact (calculate the TOI and then stop or slide, in my case). The next best thing I can think of is raycasting to determine the faces involved, but I can't do that either without something giving me a direction to test. GJK gives me a direction in most cases.
  15. Extracting face/hit data after a GJK step

    A followup. This thread has a fair amount of useful information on the issue. However, I'm still struggling with the edge/edge case. Here's a top-down view of my test case:

    The purple boxes are the floor. The small pink box is the subject resting on top of it, affected by gravity - hence it is falling down the z axis (away from the screen). It started centered exactly where the two floor meshes meet. Then it moved up along the y axis until it reached the edge of the floor meshes as shown, causing an edge/edge simplex.

    This is an extremely unfortunate case, because the box is clearly resting on the floor, but the edge collision generates vertex pairs that, on closer inspection, run along the positive and the negative x axis, respectively. This means the cross product between the two is zero, completely failing to extract any meaningful surface features. This causes the box to clip through the floor. The points in the simplex do not follow a consistent winding, which means that while I could otherwise use the vertex indices to identify the face, that is not a reliable solution.

    I don't see any way to handle this case, except:
    a) if an edge/edge case is detected and a previous non-edge/edge contact exists between the two shapes, ignore the current case and reuse those features
    b) if no previous contact exists, test whether the edges are parallel, as in this case:
    b1) if not, extract the normal as Dirk suggests in the linked thread
    b2) if yes, then the shapes are meeting for the first time exactly edge-to-edge, so pick a random face on either object that contains the vertices [a0, a1] or [b0, b1], respectively

    Anyways - I'm looking for a more robust solution, so suggestions are very much welcome
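The "are the edges parallel" test from option b) above can be sketched as a cross-product magnitude check: if the witness edges are (anti)parallel, the cross product vanishes and cannot serve as a contact normal, so the caller should fall back to cached features or a face normal. `Vec3`, `Cross`, and `EdgesParallel` are my own names and the epsilon is an arbitrary tolerance:

```cpp
struct Vec3 { float x, y, z; };

static Vec3 Cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

static float LengthSq(const Vec3& v) { return v.x * v.x + v.y * v.y + v.z * v.z; }

// Degenerate edge/edge detection: true when the cross product of the
// two edge directions is (near) zero relative to the edge lengths,
// i.e. no meaningful normal can be extracted from this pair.
static bool EdgesParallel(const Vec3& edgeA, const Vec3& edgeB,
                          float epsilon = 1e-6f)
{
    return LengthSq(Cross(edgeA, edgeB)) <
           epsilon * LengthSq(edgeA) * LengthSq(edgeB);
}
```

Scaling the tolerance by both edge lengths keeps the test independent of mesh scale, which matters when the same code handles both small props and large floor meshes.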