Posts posted by Promit


  1. I apologize for not having the time to really review the discussion in this thread, although I'm very pleased with how the community seems to have handled it. Very little black and white, which is good. For my part, I set out building our own tech eight years ago for a variety of reasons, and now I run a funded company with shipping products built on top of that engine. And for as many things as that engine lacks, it has a few key things which allow us to do things that we would have had significant trouble pulling off in UE and that are not actually possible in Unity. We're redoubling our effort to bring that engine up to 2020 standards, in our own way and with a mix of custom and off-the-shelf components. I have seven figures of investment (past and future) banking on this project; it's not a toy by any means, although it is quite domain-specific.

     


  2. As the above poster says, you're basically telling it to copy and then immediately forcing a full stall until that copy completes. Ideally, set up a rotating buffer of three read targets, and always map the oldest one. Your pick results will be slightly out of date, but you won't stall.
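
    A minimal D3D11 sketch of that rotation, just to make it concrete (the names, slot count, and setup are illustrative assumptions, not a definitive implementation):

        #include <d3d11.h>

        // Sketch: a ring of three staging textures for pick readback.
        // Copy into the newest slot each frame, then map the oldest slot;
        // its copy was issued two frames ago, so Map() no longer stalls.
        static const UINT kSlots = 3;
        ID3D11Texture2D* g_stagingRing[kSlots]; // D3D11_USAGE_STAGING + D3D11_CPU_ACCESS_READ
        UINT g_frame = 0;

        void ReadbackPick(ID3D11DeviceContext* ctx, ID3D11Texture2D* pickTarget)
        {
            // Kick off this frame's copy into the newest slot.
            ctx->CopyResource(g_stagingRing[g_frame % kSlots], pickTarget);

            // Map the oldest slot in the ring.
            UINT oldest = (g_frame + 1) % kSlots;
            D3D11_MAPPED_SUBRESOURCE mapped;
            if (SUCCEEDED(ctx->Map(g_stagingRing[oldest], 0, D3D11_MAP_READ, 0, &mapped)))
            {
                // mapped.pData holds pick results from a couple of frames back.
                ctx->Unmap(g_stagingRing[oldest], 0);
            }
            ++g_frame;
        }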


  3. Computer science would be the conventional degree to get; if your university offers a software engineering degree then that is a reasonable choice too. Computer engineering is not generally going to be a good idea unless your interests lie much more with hardware. There are also game programming specific programs (either minors or majors) and the opinions on those are distinctly mixed. I think a minor is not a bad choice at all if offered, but majoring in games specifically is a very dangerous career choice to make.

    It is worth noting at this stage that you cannot and should not rely on the university curriculum to teach you game programming. You're much better served using the university to get a wide-ranging foundational base of knowledge, and pursuing game programming specifically on the side during free time and breaks.


  4. On a conventional desktop GPU, pixels that are not on screen (more specifically, not in the viewport/scissor region) will not have their pixel shader executed at all. The triangles are clipped to the screen edge as part of rasterization. However, this does require complete vertex shader execution and rasterization, so it can be beneficial to cull groups of polygons before ever submitting them to the GPU (see the sketch at the end of this post). Note also that pixels can be computed fully, and then covered up by geometry in front of them that arrives later.

    On a tile based GPU (common in mobile devices), the complete set of opaque polygons is processed as a batch for a given render target and the polys are clipped not only against the screen edge but against each other. This creates close to zero percent overdraw (near perfect pixel shader utilization) regardless of what order things are drawn in.
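
    As a sketch of the CPU-side culling mentioned above, the classic cheap filter is a bounding-sphere versus frustum-plane test (plain C++, all names illustrative):

        struct Plane  { float nx, ny, nz, d; };      // n·p + d >= 0 means "inside"
        struct Sphere { float cx, cy, cz, radius; }; // bounds of a polygon group

        // Returns true if the sphere is at least partially inside all six planes.
        bool SphereInFrustum(const Sphere& s, const Plane frustum[6])
        {
            for (int i = 0; i < 6; ++i)
            {
                float dist = frustum[i].nx * s.cx + frustum[i].ny * s.cy
                           + frustum[i].nz * s.cz + frustum[i].d;
                if (dist < -s.radius)
                    return false; // fully outside one plane: skip the whole group
            }
            return true; // visible or intersecting: submit it to the GPU
        }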


  5. On 3/24/2019 at 9:48 PM, Psychopathetica said:

    Hey guys. For nearly every game I run on my laptop, the GeForce Experience tag pops up asking you to enable it with Alt+Z or something. But for some reason, it never pops up in any of my C++ DirectX apps. Which means programmatically there has got to be a way to enable it. I tried Googling this and got nothing. What do the pros do? Would be nice to have for FPS counters without having to do your own or use a graphics diagnostics tool. I use Visual Studio 2017. Thanks in advance.

    There's an API and developer page. https://developer.nvidia.com/geforce-settings-api


  6. 7 hours ago, Fulcrum.013 said:

    Nobody has promised that it is simpler to implement than triangles, especially in the current 2-stage tessellation architecture (in the original de Casteljau algorithm, the number of subdivisions is determined at the vertex position computation stage). Splines and other curved surfaces are also much better and more efficient for ray tracing (which is really the next gen of game realism) than triangles. All CADs have ray-traced renderers for photo-realistic image generation.

    It's not a question of simpler or more complex. I get the impression you feel that game graphics programmers don't know what they're doing or haven't heard about splines in the forty-five years since Catmull-Rom or the nearly sixty years since NURBS. Our job is to provide the maximum amount of visual quality on the hardware that consumers actually have in hand at the current time. Splines and patches do not accomplish that goal. You keep bringing up CAD as if it's somehow relevant, but their goals and priorities are very different from ours.

    You're describing a bunch of things which are mathematically sound on paper, but simply do not reflect the reality of achieving visual quality on a consumer GPU. And to the extent we can push the GPU designers for more powerful tools, geometric handling just isn't that interesting or important anymore. Our triangle budgets are high enough nowadays that there are far more pressing problems than continuous level of detail or analytical ray intersection tests or analytical solutions to normals and tangents.


  7. Just to clarify, the primary reason that game developers don't use spline- or patch-based meshes at runtime is that GPU throughput on them is relatively poor. In most cases GPUs are much better at pushing triangles than trying to do adaptive detail tessellation, as there are challenges across different hardware with how much expansion is actually viable and how on-chip buffers for tessellation outputs get sized. It can be more useful to use compute shaders to expand the tessellation ahead of time, but it's not really that helpful at the end of the day for most of the models we actually want in a game.

    Please don't take Fulcrum's ignorance as representative of where game developers are at technically. In general, I would put runtime polygonal detail levels waaaay down the list of challenges graphics programmers should be spending their time on. It wouldn't even be close to making my top ten.


  8. 2 hours ago, Vilem Otte said:

    Out of my curiosity - Can you elaborate a bit on that 'from scratch'?

    I don't care so much about helper libraries (SDL or GLFW or something), but I've really found it much more interesting to see something where the functional components are substantially homegrown. That means all of the architecture, design, and problem solving is really the dev's own solo work, which is not possible in Unity or Unreal and is often not the case with other large engines (Godot, Ogre, what have you). There's real value in being forced to build something (almost) entirely on your own, and not basing on tens or hundreds of thousands of lines of other people's work.

    (And yes, you could twist this to talk about standard libraries or operating systems or whatever as being other people's code. I don't think the comparison is valid.)


  9. On 2/1/2019 at 3:59 PM, Vilem Otte said:

    One piece of advice, though: make SOMETHING which will prove that you can finish something. I'd say that even simple games for game jams count (like Ludum Dare).

    I'm afraid I have to disagree. This was good advice once upon a time when making finished looking projects was really difficult. That's no longer the case, to be candid about it. Between the big engines and asset stores for those engines and plentiful samples, making a finished simple game is about the least useful thing. It's no longer a good marker of competence or tenacity, and employers are beginning to catch on to this fact.

    If your goal is to just get a job on simple games-related work (mobile games or entertainment app type stuff), go ahead and learn Unity and throw some stuff together. But if your goal is serious AAA game development, do something that demonstrates heavy hitting technical competence. Don't write a game - write something that is legitimately challenging. Show off some sophisticated graphics techniques, maybe something with advanced custom physics, or something with interesting complex gameplay.

    I'm not going to speak for hiring practices across what's a very diverse industry these days. But we develop our own engine and tech, and at this point I'm outright discarding any candidate who can't show me a track record of building interesting technical work from scratch.


  10. You're computing your fragment position in view space, but your light is presumably defined in world space, and that's where you leave it. Personally I find it much easier and more intuitive to do all my lighting in world space rather than the old-school view space tradition. Just multiply position by the world matrix and output that to the shader; then everything else will generally already be in world space.
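
    As a rough illustration of the math (written with DirectXMath on the CPU for clarity; in a real renderer this lives in the vertex and pixel shaders, and all names here are assumptions):

        #include <DirectXMath.h>
        using namespace DirectX;

        // Sketch: a diffuse term computed entirely in world space.
        float DiffuseWorldSpace(FXMVECTOR localPos, FXMVECTOR localNormal,
                                FXMVECTOR lightPosWS, CXMMATRIX world)
        {
            // Position and normal go local -> world. No view matrix involved.
            XMVECTOR posWS    = XMVector3Transform(localPos, world);
            XMVECTOR normalWS = XMVector3Normalize(
                XMVector3TransformNormal(localNormal, world));

            // The light was defined in world space, so it needs no transform.
            XMVECTOR toLight = XMVector3Normalize(XMVectorSubtract(lightPosWS, posWS));

            return XMVectorGetX(XMVector3Dot(normalWS, toLight)); // clamp at 0 in real use
        }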


  11. There are a couple of ways to approach this. The simplest, as mentioned above, is to simply implement the deformation effect in the vertex shader. If you're dealing with a simple one-in, one-out style of effect, then this is a great way to do it, and this is how skinning, for example, is done.

    The next step up in sophistication is to not supply the vertex directly to the vertex shader, but to give it access to the entire buffer and use the vertex index to look up the vertices in a more flexible format. (Some GPUs only work this way internally.) That way your vertex shader can use multiple vertices or arbitrary vertices to compute its final output.

    The most complex version of this is to write a buffer-to-buffer transformation of the vertices, which can be done either via stream-out in the simple cases or a compute shader in the advanced cases. This lets you store the results for later, not just compute them on the fly for that frame.
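
    A hedged C++/D3D11 sketch of the compute route, showing just the dispatch plumbing (the deformation shader itself, the buffer setup, and the 64-thread group size are all assumed):

        #include <d3d11.h>

        // Sketch: run a precompiled deform compute shader over an input
        // vertex buffer (bound via SRV) into an output buffer (bound via UAV).
        void DeformOnGPU(ID3D11DeviceContext* ctx,
                         ID3D11ComputeShader* deformCS,
                         ID3D11ShaderResourceView* inVertsSRV,
                         ID3D11UnorderedAccessView* outVertsUAV,
                         UINT vertexCount)
        {
            ctx->CSSetShader(deformCS, nullptr, 0);
            ctx->CSSetShaderResources(0, 1, &inVertsSRV);
            ctx->CSSetUnorderedAccessViews(0, 1, &outVertsUAV, nullptr);

            // One thread per vertex, 64 threads per group (assumed in the CS).
            ctx->Dispatch((vertexCount + 63) / 64, 1, 1);

            // Unbind the UAV so the result can feed later draw calls.
            ID3D11UnorderedAccessView* nullUAV = nullptr;
            ctx->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);
        }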


  12. 1 hour ago, turanszkij said:

    Should be handled by the driver, but another thing is that if you still have a pixel shader which wants to write output and no render target is bound, the DX debug layer will begin spamming warning messages, so it's still a good idea to have a null or void PS.

    Oh yeah I forgot about that. I'd blank the shader for that reason alone 🤣


  13. 6 hours ago, Hodgman said:

    You can also pass 0 for the number of views in OMSetRenderTargets -- no need to have a color-render-target bound at all, just use a depth one.

    On top of that, having a NULL pixel shader bound can improve performance further, as mentioned above :D 

    Will setting a NULL pixel shader improve performance even if no color RT is bound? That seems like an easy optimization for a driver to integrate automatically.
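
    For reference, the depth-only setup being discussed looks roughly like this in D3D11 (ctx and dsv are assumed to exist already):

        // Depth-only pass: zero color render targets, depth-stencil view only.
        ctx->OMSetRenderTargets(0, nullptr, dsv);

        // Explicitly bind no pixel shader, as suggested above. Depth writes
        // still happen from the rasterizer, and the debug layer stays quiet.
        ctx->PSSetShader(nullptr, nullptr, 0);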


  14. As we said on GameDev's Discord (which everyone should join!), what really happened here is that Unity tried to strong-arm a company for a license fee, didn't get it, and then lost the ensuing PR war when they tried to force the issue. While the resolution is probably a good thing and makes Unity a more transparent business, it's alarming that things went this way in the first place and that Unity tried to leverage someone that hard. Improbable really came out on top by being savvy with social media and forging a strong alliance with Unreal and Tim Sweeney to back them up on it.

    I'm not inclined toward the charitable interpretation that Unity made a mistake here, or that Spatial did something that was actually an offense. I think Unity wanted a cut from a certain class of customer and thought they could get it.


  15. The absolute easiest thing to do is to use an archive library like PhysFS to read files like those, maybe with password protection enabled, although it's limited to some common formats that people will generally know how to work with. You could go in and make a modified version of one of the formats that is similar but different enough to require different parsing, though now you'll need to make a tool to output that too. Another option would be to layer some encryption into the files inside the package, and that can range from simple to complex.
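
    A minimal sketch of the PhysFS route (the archive name and file path are placeholders):

        #include <physfs.h>
        #include <cstdio>
        #include <vector>

        int main(int argc, char** argv)
        {
            PHYSFS_init(argv[0]);

            // Mount the archive (a renamed zip works fine) at the virtual root.
            if (PHYSFS_mount("assets.pak", "/", 1))
            {
                PHYSFS_File* file = PHYSFS_openRead("textures/player.png");
                if (file)
                {
                    std::vector<char> data((size_t)PHYSFS_fileLength(file));
                    PHYSFS_readBytes(file, data.data(), data.size());
                    PHYSFS_close(file);
                    std::printf("Read %zu bytes\n", data.size());
                }
            }

            PHYSFS_deinit();
            return 0;
        }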


  16. I see you're in India, so I am not entirely sure how much of this advice will apply. But typically, at least in the US and several other countries, there are programs that let you earn a graduate degree through some combination of evening and online classes. So you might have classes Monday and Wednesday 7-9 pm. From there, it's just about discipline and busting your ass doing all the work, because it doesn't come easy. You would likely take one or two courses each semester, which keeps the workload sane alongside the pressures of a full-time job, and probably requires 2-3 years to complete.


  17. NVIDIA has announced a Christmas present with the release of PhysX SDK 4.0 later this month, open-sourced under the 3-clause BSD license. The big technical change in the physics engine appears to be the temporal Gauss-Seidel solver, enabling faster and more robust handling of contact points and constraints.

    For full details, please see https://news.developer.nvidia.com/announcing-physx-sdk-4-0-an-open-source-physics-engine/




  18. Just now, Lactose said:

    That would work if you're wanting just the direction, but surely it would prevent being able to use the magnitude as e.g. speed or similar?

    I would say stuff x and y into a vector, and normalize the vector only if its magnitude is larger than 1 (something I've seen with an old/flimsy/faulty gamepad). Like Promit says, atan2 will give you the angle.

    We have a lot of code that computes the length of a vector into a separate variable and then divides the vector through to normalize it. Enough that I'm tempted to make an extra helper function just for that usage pattern.
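
    Something like this hypothetical helper, for instance:

        #include <cmath>

        struct Vec2 { float x, y; };

        // Normalize v in place and hand back its original length, covering
        // the "length into a variable, then divide through" pattern in one call.
        float NormalizeWithLength(Vec2& v)
        {
            float len = std::sqrt(v.x * v.x + v.y * v.y);
            if (len > 0.0f)
            {
                v.x /= len;
                v.y /= len;
            }
            return len;
        }

        void ApplyStick(Vec2 stick)
        {
            float speed = NormalizeWithLength(stick); // stick is now a unit direction
            // ... move by stick * min(speed, 1.0f), keeping the magnitude as speed
        }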
