Hodgman

Moderator
  • Content Count

    15052
  • Joined

  • Last visited

  • Days Won

    27

Hodgman last won the day on October 12

Hodgman had the most liked content!

Community Reputation

52376 Excellent

About Hodgman

  • Rank
    Moderator - APIs & Tools

Personal Information

  • Website
  • Role
    Game Designer
    Programmer
    Technical Artist
    Technical Director
  • Interests
    Programming

Social

  • Twitter
    @BrookeHodgman
  • Github
    hodgman

Recent Profile Visitors

94217 profile views
  1. Hodgman

    Missing pixels in mesh seams

    Firstly, it's very similar to view-space shading. In view-space shading, you send model-to-view (aka worldview) and view-to-projection matrices through to the shader (or model-to-view and model-to-projection (aka worldviewproj) matrices), and do all your shading using view-space positions. If the CPU sends through lighting data, it should be pre-transformed into view space on the CPU as well.

    The "floating origin" technique is basically the same, but it only uses the camera's origin and not its orientation. "model-to-view" (aka worldview) is typically "world * view", which is also "world * inverse(camera)". "model-to-floating-origin" is the same, but only uses the camera's position data -- "world * inverse( TranslationMatrix(camera.position) )".

    As with view-space shading, the CPU will have to pre-transform all the lighting data into "floating origin space". This is slightly easier because only position data needs to be transformed -- directional data doesn't need any changes. A rough sketch of this follows below.
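
    For illustration, a minimal sketch of that matrix construction in C++ using DirectXMath -- the world/camPos/light names are just placeholders for whatever your engine already passes around:

        #include <DirectXMath.h>
        using namespace DirectX;

        // "Floating origin" space = world space translated so the camera sits at the origin.
        // 'world' is the model's world matrix, 'camPos' is the camera's position.
        XMMATRIX BuildModelToFloatingOrigin(const XMMATRIX& world, const XMFLOAT3& camPos)
        {
            // world * inverse( TranslationMatrix(camera.position) ) -- the inverse of a
            // pure translation is just the negated translation.
            XMMATRIX worldToFloating = XMMatrixTranslation(-camPos.x, -camPos.y, -camPos.z);
            return XMMatrixMultiply(world, worldToFloating);
        }

        // Light *positions* sent to the shader must be moved into the same space;
        // light *directions* are unaffected because the transform is a pure translation.
        XMVECTOR LightPosToFloatingOrigin(const XMVECTOR& lightPosWorld, const XMFLOAT3& camPos)
        {
            return XMVectorSubtract(lightPosWorld, XMVectorSet(camPos.x, camPos.y, camPos.z, 0.0f));
        }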
  2. Hodgman

    Capture constantbuffer in commandlist

    I'm confused -- if you know that you're submitting commands in the wrong order, can't you just submit them in the right order?

    Using multiple command lists / multiple threads to record commands in D3D11 actually increases the amount of work that the D3D runtime/driver have to do! When recording the commands, extra work is required to serialise them into a temporary command-list format. At a later point in time, a single thread belonging to the driver has to actually deserialise and process the commands from the command list. Using a single thread to record all commands is the fastest option in D3D11, if that's possible for you.

    Also, updating resources using a deferred context in D3D11 is especially wasteful. When you map/update a resource from a deferred context, the runtime has to allocate a new memory buffer to store your updated data. You copy your data into that buffer. Later, when you submit the deferred commands, the runtime actually maps the resource and copies that temporary buffer into the mapped memory (and frees the temporary allocation)...

    Using multiple command lists / threads only helps to increase performance if it allows you to offload your own application's work onto those threads. For example, if your application has to do a lot of work to traverse the scene when drawing objects -- perhaps it interleaves frustum culling work with D3D drawing commands -- then by using a deferred context you can move that workload onto another thread. The performance improvement in that example does not come from moving the D3D commands onto another thread; it comes from moving the frustum culling work onto another thread. A rough sketch of that pattern is below.
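
    To illustrate that last point, here's a rough sketch of the only pattern where a deferred context tends to pay off in D3D11. The Scene/Camera/CullAndRecordDraws names are invented for the example; the win comes from the culling running on the worker thread, not from where the D3D calls happen:

        #include <d3d11.h>
        #include <thread>

        struct Scene {};   // placeholders for your own types
        struct Camera {};

        // Hypothetical: walks the scene, does frustum culling, and records draw calls into
        // whatever context it's given -- this is the expensive CPU work worth offloading.
        void CullAndRecordDraws(ID3D11DeviceContext* ctx, const Scene& scene, const Camera& cam);

        void RenderFrame(ID3D11Device* device, ID3D11DeviceContext* immediate,
                         const Scene& scene, const Camera& cam)
        {
            ID3D11DeviceContext* deferred = nullptr;
            device->CreateDeferredContext(0, &deferred);

            ID3D11CommandList* cmdList = nullptr;
            std::thread worker([&] {
                CullAndRecordDraws(deferred, scene, cam);     // the app's own work moves off the main thread
                deferred->FinishCommandList(FALSE, &cmdList); // serialise into a temporary command list
            });

            // ... the main thread can record other work on the immediate context here ...

            worker.join();
            immediate->ExecuteCommandList(cmdList, FALSE);    // a single driver thread replays it later
            cmdList->Release();
            deferred->Release();
        }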
  3. Hodgman

    Snow physics in Red Dead Redemption 2

    This might be useful to you too:
  4. Yeah, the swap-chain is a bit more of a high-level object, different from most of D3D12's low-level design. Partly, this is because it's a DXGI object, not a D3D12-only object. It actually creates a collection of resources (the chain of buffers that alternate between being rendered to and being displayed), manages some synchronization of that buffer-swapping work, and submits the actual commands to display a buffer to the screen. A small sketch of that is below.
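
    A small sketch of what that looks like from the application side (assuming a swap chain created elsewhere with two buffers; error handling omitted):

        #include <d3d12.h>
        #include <dxgi1_4.h>
        #include <wrl/client.h>
        using Microsoft::WRL::ComPtr;

        static const UINT kBufferCount = 2;
        ComPtr<ID3D12Resource> g_backBuffers[kBufferCount];

        // The swap chain owns the chain of colour buffers; we just retrieve them to build RTVs.
        void GrabBackBuffers(IDXGISwapChain3* swapChain)
        {
            for (UINT i = 0; i < kBufferCount; ++i)
                swapChain->GetBuffer(i, IID_PPV_ARGS(&g_backBuffers[i]));
        }

        void EndFrame(IDXGISwapChain3* swapChain)
        {
            // Present() internally submits the work needed to flip/display a buffer --
            // it isn't something you record into your own ID3D12GraphicsCommandList.
            swapChain->Present(1, 0);

            // Which buffer should be rendered into next is also the swap chain's call.
            UINT nextIndex = swapChain->GetCurrentBackBufferIndex();
            (void)nextIndex; // hand this to your per-frame resource tracking
        }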
  5. Hodgman

    Capture constantbuffer in commandlist

    In D3D11, resource updates do occur in the order that you submit them in (as determined by the immediate context), along with draws. So, if you update a resource to contain the value "1", then draw a mesh "A" that uses the resource, then update that resource to contain the value "2" and then draw mesh "B", and then submit these commands to the GPU, then mesh A will see the value 1 and mesh B will see the value 2 (a small sketch of this is below).

    If you're getting weird results, you're likely submitting your command lists and draws to the immediate context in the wrong order, are doing something invalid (make sure you routinely test with the D3D debug layer enabled to check for API usage errors!), or have some kind of threading bug like a race condition, such as two threads using the immediate context simultaneously.

    This is one of the benefits of D3D11 vs 12 -- 11 does a lot of work to make resource updates become visible in command order. In 12 you have to do a lot of work manually to implement those semantics yourself.
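
    For example, something like this -- cb is assumed to be a dynamic constant buffer, and DrawMeshA/DrawMeshB are stand-ins for your own draw code:

        #include <d3d11.h>
        #include <cstring>

        // Hypothetical draw routines -- each binds 'cb' and issues a draw call.
        void DrawMeshA(ID3D11DeviceContext* ctx);
        void DrawMeshB(ID3D11DeviceContext* ctx);

        void UpdateConstant(ID3D11DeviceContext* ctx, ID3D11Buffer* cb, float value)
        {
            D3D11_MAPPED_SUBRESOURCE mapped = {};
            ctx->Map(cb, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
            std::memcpy(mapped.pData, &value, sizeof(value));
            ctx->Unmap(cb, 0);
        }

        void DrawFrame(ID3D11DeviceContext* immediate, ID3D11Buffer* cb)
        {
            UpdateConstant(immediate, cb, 1.0f); // cb contains "1"
            DrawMeshA(immediate);                // mesh A sees 1
            UpdateConstant(immediate, cb, 2.0f); // cb contains "2"
            DrawMeshB(immediate);                // mesh B sees 2
            // Because everything went through the immediate context in this order,
            // the runtime/driver guarantee the draws observe the values in this order.
        }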
  6. Yes, the width, height and format (RGB is 3 bytes, RGBA is 4 bytes). Your code might be incorrect if you load a greyscale image -- it would load one byte per pixel, and then GL would try to read 3 or 4 bytes per pixel. The final parameter of stbi_load (0 in your code) tells it how many channels you want the loaded data to have -- you should probably be using something like "trans ? 4 : 3" instead of "0" there, as in the sketch below.
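
    Something along these lines -- a sketch rather than your exact code, with 'trans' meaning "load with alpha" as in your snippet, and glad standing in for whatever GL loader you use:

        #include <glad/glad.h>          // or whichever GL loader/header you already include
        #define STB_IMAGE_IMPLEMENTATION
        #include "stb_image.h"

        GLuint LoadTexture(const char* path, bool trans)
        {
            int width = 0, height = 0, channelsInFile = 0;
            // Force stb_image to hand back exactly the channel count GL will read,
            // instead of whatever happens to be in the file (0 = "use the file's own count").
            int desired = trans ? 4 : 3;
            unsigned char* pixels = stbi_load(path, &width, &height, &channelsInFile, desired);
            if (!pixels) return 0;

            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // 3-byte pixels make rows that aren't 4-byte aligned
            glTexImage2D(GL_TEXTURE_2D, 0,
                         trans ? GL_RGBA : GL_RGB, width, height, 0,
                         trans ? GL_RGBA : GL_RGB, GL_UNSIGNED_BYTE, pixels);
            stbi_image_free(pixels);
            return tex;
        }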
  7. If you get a crash inside a DLL with a name like "nv something gl something dll" or "amd something gl something dll", then yeah, that's inside your GL driver. 99% of the time, it's your bug that's caused the crash, not theirs. For example, if you've passed a pointer to a buffer into the driver (e.g. to tell it to copy some pixel data out of that buffer), but the pointer isn't actually valid to read from, then the driver will crash when it tries to read from it. The crash occurs in the driver, but the bug is in your code (passing invalid pointers to the driver). Seeing as this is to do with texture loading, I would guess that you've somehow asked the driver to read more pixels than actually exist within your memory allocation.
  8. I've seen people within the Kerbal Space Program community doing this kind of math by hand, and perhaps some of the tech-savvy ones have put together basic programs/scripts or even GUIs to automate the process somewhat? It's worth checking out what their player community and modding community are doing.
  9. Nothing eliminates risk, as anyone can sue you for anything at any time. Technically, it's about the process of creation -- whether you've made a derived work or not. If you traced the original car directly, this work (the tracing) is derived from the original, so it's infringing. Any modifications you make from that point are also a derived work. However, in practice, if you change it enough that no one can tell that it started off as a derived work, then you won't get caught out. If you drew it all yourself without copying from another design directly, then you're good. There's a bit of a grey area if you were inspired by something and recreated its style. That's mostly safe, but at the most legally paranoid place where I worked, the artists were banned from using Google Images and were only allowed to look at reference images from collections that we'd bought the rights to... At every other job I've ever worked, gathering reference/inspiration material from Google was common/fine.
  10. Yeah, you can almost re-frame this article as a checklist of signs that you're doing OOP wrong. My quick feedback / comments on it:

    • Data is more important than code -- yep, people often write stupid class structures without considering the data first. That's a flaw in their execution, not the tools they have at hand. Once the data model is well designed, OO tools can be used to ensure that the program invariants are kept in check and that the code is maintainable at scale.

    • Encouraging complexity -- yep, "enterprise software" written by 100 interns is shitty. KISS is life. One of the strengths of OO, if done right, is managing complexity and allowing software to continue to be maintainable. The typical "enterprise software" crap is simply failing at using the theory.

    • Bad performance -- as above, if you structure your data first, and then use OO to do the things it's meant to do (enforce data model invariants, decouple the large-scale architecture, etc.)... then this just isn't true. If you make everything an object, just because, and write crap with no structure, then yes, you get bad performance. You often see Pitfalls of OOP cited in this area, but IMHO it's actually a great tutorial on how you should be implementing your badly written OO code.

    • Graphs everywhere -- this has nothing to do with OO. You can have the same object relations in OO, procedural or relational data models. The actual optimal data model is probably exactly the same in all three paradigms... While we're here though, handles are the better pointers, and that applies to OO coders too.

    • Cross-cutting concerns -- if the data was designed properly, then cross-cutting concerns aren't an issue. Also, the argument about where a function should be placed is more valid in languages like Java or C#, which force everything into an object, but not in C++, where the use of free functions is actually considered best practice (even in OO designs). OO is an extension of procedural programming after all, so there's no conflict with continuing to use procedures that don't belong to a single class of object.

    • Object encapsulation is schizophrenic -- this whole thing smacks of incorrect usage. Getters and setters are a code smell -- they exist when there's encapsulation but zero abstraction. There's no conflict in using plain-old-data structures with public, primitive-type members in an OO program -- it's actually a common solution when employing OO's DIP rule. A simple data structure can be a common interface between modules (see the small sketch after this list)! If you're creating encapsulation at the wrong level, then just don't create encapsulation at that level... This section is honestly an argument against enterprise zombies who dogmatically apply the methods their school taught them without any original thought of their own.

    • There are multiple ways to look at the same data -- IMHO it's common for an underlying data model to be tabular, as in the relational style, with multiple different OO 'views' of that data, exposing it to different modules, with different restrictions, for different purposes, with zero copies/overhead. So, this section is false in my experience.

    • What to do instead? -- Learn what OO is actually meant to solve / is actually good at, and use it sparingly for those purposes only.
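
    A tiny illustration of that "plain data structure as the common interface" point (all names invented for the example):

        #include <cstddef>

        // Shared between modules. No methods, no getters/setters --
        // the plain data structure itself is the interface/abstraction boundary.
        struct SpriteInstance
        {
            float x, y;
            float rotation;
            int   textureId;
        };

        // Free functions are idiomatic C++ even in an OO design: each module provides
        // operations over the shared data without depending on the other module's internals.
        namespace renderer { void Draw(const SpriteInstance* sprites, std::size_t count); }
        namespace gameplay { void Animate(SpriteInstance* sprites, std::size_t count, float dt); }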
  11. IMHO the presence of assertions is the biggest difference between code written by beginners and advanced programmers. Assertions act as documentation of each of the assumptions that you were making when writing the code, plus documentation and checking of any pre-conditions and post-conditions of a function (which otherwise aren't formally specified in languages like C). Assertions then also check that all of your assumptions are correct and that invariants are adhered to. IMHO, they are vital in any kind of serious code.

    Yeah, I'd agree with that. I'd also point out that exceptions are designed for cases where you know that the next bit of code to run (flow-control choice) is going to be higher up the stack than your immediate caller -- things that will abort some kind of larger operation. e.g. a file IO error will likely abort an entire save-game loading operation, and not just the function that was trying to write one field into the save file.

    For D3D in particular, there are some functions that can fail at any time for any reason. e.g. Present can fail because the user physically removed their GPU. In D3D9 there were a LOT of functions that would only fail if you passed them incorrect arguments (i.e. a programming error) -- these are the kind of ones that you should handle with assertions instead of error-handling code. If your code is correct, then they can't occur. In D3D10/11, most of that category have changed to a void return type instead. There are still a few, such as CreateBlendState, etc., which can only fail due to programming error (bad arguments) or out of memory, which is unlikely to happen in practice, and to which your only option really is to crash anyway...

    So, if you know that a function can't fail as long as you've satisfied all of its conditions, then check for errors using assertions (which act as documentation of those conditions, and runtime validation of correctness). If you know that a function can possibly fail, then you've got to pass that up the chain... For D3D, most failures (besides programming errors) are that the GPU was removed, the GPU driver was rebooted, or you ran out of RAM... You could be lazy and just quit in all these cases, popping up a dialog box telling the user that something went wrong ...but on dual-GPU laptops, I have actually seen the "device physically removed" error occur when the driver has decided to switch from the Intel to the NVidia GPU moments after starting the game... so it would definitely be useful to handle it gracefully. (A small sketch of this split is below.)

    My personal philosophy is to write most of your code in a way that there are no failure conditions. Be strict about what the pre/post-conditions and other invariants of each of your functions are. Validate/document these invariants with assertions everywhere. If something has a "failure" / "error" case, treat it as just another unique program feature / branch, not some kind of uniform item that needs a singular, common, one-size-fits-all "error handling" methodology applied to it. Just say no to "error handling". Say yes to features.

    My other philosophy around programming errors is that if you detect one, you should not try to "handle" it and allow the program to limp onward. You've just discovered that the rules of the program are not being followed -- invariants are being violated! At this point, everything is up in the air (especially in an unsafe language such as C -- memory corruption at this point means the program could do anything next)... So the best course of action is to halt the program and exit quickly. On your way out, flush the program's log to the disk, save a minidump file, and pop up a dialog box asking the user to email the log and minidump to you, so that you can fix the programming error.
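
    A small sketch of that split in D3D11 terms -- HandleDeviceLoss and FatalExit are placeholders for however your engine recreates the device or shuts down:

        #include <windows.h>
        #include <cassert>
        #include <d3d11.h>

        void HandleDeviceLoss(); // hypothetical: recreate the device, or tell the user
        void FatalExit();        // hypothetical: flush the log, write a minidump, quit

        // Can only fail on bad arguments (a programming error) or out-of-memory, so the
        // precondition is documented and validated with an assert rather than "handled".
        ID3D11BlendState* CreateAlphaBlend(ID3D11Device* device)
        {
            D3D11_BLEND_DESC desc = {};
            desc.RenderTarget[0].BlendEnable           = TRUE;
            desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
            desc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
            desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
            desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
            desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
            desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
            desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

            ID3D11BlendState* state = nullptr;
            HRESULT hr = device->CreateBlendState(&desc, &state);
            assert(SUCCEEDED(hr) && "CreateBlendState failed -- programming error or OOM");
            return state;
        }

        // Present genuinely can fail at runtime, so that branch is treated as a normal
        // program feature, not routed through a generic "error handling" layer.
        void EndFrame(IDXGISwapChain* swapChain)
        {
            HRESULT hr = swapChain->Present(1, 0);
            if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
                HandleDeviceLoss();
            else if (FAILED(hr))
                FatalExit(); // invariants are broken -- stop quickly rather than limp onward
        }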
  12. I'll dissent and say that this can be perfectly legal, but is still extremely risky. Unauthorized use of trademarks can still be legal and non-infringing. Using the trademarks (Ford badge, Mustang GT name) is fine for descriptive purposes as long as you don't imply endorsement by Ford, are accurate in your description of the product, and only use the trademarks to identify their product. It's risky because their lawyers can simply claim that you are implying endorsement, or that your description is damaging, or some other small detail like that, and then sue you for damages. You'd have to hire a lawyer to go and prove to a judge that what you did was fair and non-damaging. As for the photo of the car, that's likely owned by some photographer (or someone who hired the photographer). You can't go reproducing other people's photos without their permission -- that would be copyright infringement.
  13. Hodgman

    Missing pixels in mesh seams

    Glad it worked. Going from nice, small, round model coordinates to nice, small, round world coordinates is unlikely to encounter floating-point precision issues. So, say you've got two instances, A and B, of a mesh with 2 vertices at local coords 0 and 1, and their world matrices place them at positions 10 and 11. Instance A ends up with verts at 10 and 11. Instance B ends up with verts at 11 and 12. Seeing as both instances agree that the edge is at 11, there's no crack. Both of those two different vertices at position 11 are transformed to the screen using the exact same projection matrix, so they'll end up with the bitwise-same screen position and still have no crack.

    In your original code, where you go directly from local vertex coordinates to screen coordinates (by using a world-view-projection matrix), the two instances are being transformed to the screen (which involves a lot of very precise, long, fractional numbers) using a completely different set of numbers. If you're unlucky (which it seems you were), the left vertex of one instance won't be bitwise exact with the right vertex of the other instance. Doh.

    This is kind of a funny problem though, because in general, the "fixed" version (local to world, followed by world to screen) has some bad precision implications -- because you're using world space as an intermediary, any vertices that end up being a long way from the origin can suffer quantisation problems. In a planetary-scale renderer, this would absolutely destroy most of your data quality... Going directly from local coords to screen coords works well even for solar-system-sized scenes (assuming your code that constructed the 32-bit float matrices was using 64-bit double input data on the CPU).

    So: use local to world, then world to screen, if you want edges of different models to perfectly match up. Use local to screen if you want the best precision within a single model. There's also one other solution that's popular -- use "camera-relative world space" as an intermediate coordinate system, where it's world space, but the origin is relocated to be wherever your main camera is. This can give a blend of both of the other solutions' strengths and weaknesses. A rough sketch of all three is below.
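
    For reference, a rough sketch of how the three options look when building the per-object matrices on the CPU with DirectXMath -- the names are illustrative, and your engine's math types will differ:

        #include <DirectXMath.h>
        using namespace DirectX;

        struct PerObjectMatrices
        {
            XMMATRIX localToWorld;       // option 1: shader goes local -> world, then world -> screen.
            XMMATRIX worldToScreen;      //           Shared by all instances, so shared edges match bitwise.
            XMMATRIX localToScreen;      // option 2: local -> screen in one step; best precision within one model.
            XMMATRIX localToCamRelative; // option 3: world space re-centred on the camera as the intermediate space.
        };

        PerObjectMatrices BuildMatrices(const XMMATRIX& world, const XMMATRIX& view,
                                        const XMMATRIX& proj, const XMFLOAT3& camPos)
        {
            PerObjectMatrices m;
            m.localToWorld  = world;
            m.worldToScreen = XMMatrixMultiply(view, proj);
            m.localToScreen = XMMatrixMultiply(world, m.worldToScreen); // ideally composed in doubles on the CPU

            // Camera-relative world space: subtract the camera position, keep the world orientation.
            XMMATRIX worldToCamRelative = XMMatrixTranslation(-camPos.x, -camPos.y, -camPos.z);
            m.localToCamRelative = XMMatrixMultiply(world, worldToCamRelative);
            // The matching camRelative -> screen matrix is the view matrix with its
            // translation zeroed out, multiplied by proj.
            return m;
        }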
  14. It's somewhat of a grey area, because you can be sued for anything at any time, regardless of whether the person launching a legal attack against you is correct or not. Even if they're not in the right, can you afford to hire an IP lawyer to defend you in court against such an attack? So the question isn't "is this legal" (a black-and-white yes/no answer), it's "how risky is this" (a grey-area answer).

    Using real-world products within fiction has a very long precedent as a way to ground the fiction within the real world. Usage of a product in its intended manner, in a way that doesn't imply endorsement by the product's maker or unfairly attribute flaws to the product, is legally allowed. However, many games which have tried to do this have found themselves the target of lawsuits anyway. I'm not aware of any game that has actually gone to court to successfully argue that they have done no damage to the brands involved and are innocent -- in every case I've seen, the game developer has backed down and paid a settlement fee, or changed their game, etc...

    The actual design (shapes, styles) of the cars is covered by copyright. Straight-up copying those designs opens you up to legal attacks based around copyright infringement. The names (brands, logos, etc. too) of the cars are covered by trademark. Using them at all opens you up to legal attacks based around trademark infringement. So, it actually can be legal if done carefully, but it's always extremely risky, especially if you don't have the money to fight for it in court...
  15. Hodgman

    Depth buffer resource, mipmaps

    You're correct. From memory, mip maps aren't compatible with depth-stencil resources either, so you'd have to copy it to a mippable resource anyway.