Popular Content

Showing content with the highest reputation on 09/16/18 in all areas

  1. 2 points
    It looks like you have some gameplay. Good job on that. There are many things that can be added. I'm wary of describing them because of the way you ask questions on the site, but I'll still cover them: You can add more gameplay. You can have scores, and you can have a victory or loss condition. You could have teams, multiple players, computer-controlled players, networked players. You can have a variety of ship types and a variety of weapon types. You can have goals and objectives. Games usually have menus and a front-end. That's a place for options, high-score screens, player profile management, and a way to exit the game. Games usually have sound, and I don't see anything about that in your post. Adding features to a game increases the complexity. Often adding one feature will break previous features. The larger a game gets, the more difficult it becomes to implement new features. Increased complexity means more difficulty in reasoning about features, and an ever-growing number of unexpected side effects. I worry that in your situation the added code complexity and difficulty would be too much for you.
  2. 1 point
    So there are two separate (but related) topics here: HDR rendering, and HDR output for displays. Depending on your exact Google queries you might find information about one or both of these topics.

    HDR rendering has been popular in games ever since the last generation of consoles (PS3/XB360) came out. The basic idea is to perform lighting and shading calculations internally using values that can be outside the [0, 1] range, which is most easily done using floating-point values. Performing lighting without floats seems silly now, but historically GPUs did a lot of lighting calculations with limited-precision fixed-point numbers. Support for storing floating-point values (including writing, reading, filtering, and blending) was also very patchy 10-12 years ago, but is now ubiquitous. Storing floating-point values isn't strictly necessary for HDR rendering (Valve famously used a setup that didn't require it), but it certainly makes things much simpler (particularly performing post-processing like bloom and depth of field in HDR). You can find a lot of information about this out there now that it's very common.

    HDR output for displays is a relatively new topic. This is all about how the application sends its data to be displayed, and the format of that data. With older displays you would typically have a game render with a rather wide HDR range (potentially going from the dark of night to full daytime brightness if using a physical intensity scale) and then use a set of special mapping functions (usually consisting of exposure + tone mapping) to squish that down into the limited range of a display. The basic idea of HDR displays is that you remove the need for "squishing things down", and have the display take a wide range of intensity values in a specially-coded format (like HDR10). In practice that's not really the case, since these displays have a wider intensity range than previous displays, but still nowhere near wide enough to represent the full range of possible intensity values (imagine watching a TV as bright as the sun!). So either the application or the display itself still needs to compress the dynamic range somehow, with each approach having various trade-offs. I would recommend reading or watching this presentation by Paul Malin for a good overview of how all of this works.

    As for actually sending HDR data to a display on a PC, it depends on whether the OS and display driver support it. I know that Nvidia and Windows definitely support it, with DirectX having native API support. For OpenGL I believe that you have to use Nvidia's extension API (NVAPI). Nvidia has some information here and here.

    Be aware that using HDR output isn't necessarily going to fix your banding issues. If fixing banding is your main priority, I would suggest making sure that your entire rendering pipeline is set up in a way that avoids common sources of banding. The most common source is usually storing color data without the sRGB transfer curve applied to it, which acts like a sort of compression function that ensures darker color values have sufficient precision in an 8-bit encoding. It's also possible to mask banding through careful use of dithering. A small sketch of that exposure, tone-map, and sRGB chain follows below.
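    To make that exposure/tone-map/sRGB chain concrete, here is a minimal CPU-side C++ sketch. The Reinhard operator, the random dither, and all the function names are illustrative choices of the editor, not the poster's actual setup; in a real engine this math normally lives in shaders.

      // Minimal sketch: map an HDR linear value to an 8-bit sRGB output.
      // Reinhard is just one example tone-map operator; real pipelines vary.
      #include <algorithm>
      #include <cmath>
      #include <cstdint>
      #include <cstdlib>

      // Scale scene-referred intensity by camera exposure.
      float ApplyExposure(float hdr, float exposure) { return hdr * exposure; }

      // Reinhard tone map: squishes [0, inf) into [0, 1).
      float ToneMapReinhard(float x) { return x / (1.0f + x); }

      // sRGB transfer curve: gives dark values more precision in 8 bits.
      float LinearToSrgb(float x)
      {
          return (x <= 0.0031308f) ? 12.92f * x
                                   : 1.055f * std::pow(x, 1.0f / 2.4f) - 0.055f;
      }

      // Quantize to 8 bits with a +/- half-LSB random dither to mask banding.
      uint8_t Quantize8(float x)
      {
          float dither = (std::rand() / float(RAND_MAX) - 0.5f) / 255.0f;
          float v = std::clamp(x + dither, 0.0f, 1.0f);
          return static_cast<uint8_t>(v * 255.0f + 0.5f);
      }

      uint8_t HdrToDisplay(float hdrLinear, float exposure)
      {
          return Quantize8(LinearToSrgb(ToneMapReinhard(ApplyExposure(hdrLinear, exposure))));
      }

    The ordering is the important part: exposure and tone mapping operate on linear HDR values, and the sRGB encode plus dither happen only at the very end, right before quantizing to 8 bits.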
  3. 1 point
    Unfortunately a lot of information on the internet is woefully out of date. For example, Valve's original HDR implementation was for their Lost Coast tech demo, and that ran on consumer cards over a decade ago.
  4. 1 point
    I haven't used WinRT myself, so I can't answer those questions, sorry. I'd guess that since WinRT is the groundwork underneath everything else, the same restrictions for UWP and the swap chain apply there as well. From the looks of it, it can also be used to make non-UWP apps/games.
  5. 1 point
    WinRT is the new Windows Runtime. It's a base for UWP apps. Lookie here for a better explanation: https://docs.microsoft.com/en-us/windows/uwp/cpp-and-winrt-apis/ Based on their explanation it seems to boast better performance.
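    For a feel of what C++/WinRT code looks like, here is a minimal console sketch along the lines of the linked Microsoft docs. It assumes the C++/WinRT headers from the Windows SDK, C++17, and linking against windowsapp.lib; the Uri usage is just an illustrative example picked by the editor.

      // Minimal C++/WinRT console sketch (link with windowsapp.lib, /std:c++17).
      #include <winrt/Windows.Foundation.h>
      #include <iostream>

      using namespace winrt;
      using namespace Windows::Foundation;

      int main()
      {
          init_apartment();                    // initialize the Windows Runtime for this thread
          Uri uri(L"https://www.gamedev.net"); // a WinRT type projected into standard C++
          std::wcout << uri.Domain().c_str() << std::endl;
      }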
  6. 1 point
    I'm kind of working on the same thing you are. Currently I'm also doing noise on the CPU. I did implement Simplex noise in HLSL on DirectX 9 a few years back and it worked OK. However, I only used it for shading and didn't use the results on the CPU. For that I worry about the numeric stability. First off, planets tend to get big, so that typically means double precision, which GPUs aren't as optimized for. Second, if the results from different GPUs are a little different, that might present a problem, especially for online games. However, I'm not sure if these are serious issues or not. I'm pretty much a novice on the GPU side of things, so maybe someone else can chime in. There's a small illustration of the precision concern below.
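    To illustrate the double-precision point, here is a tiny C++ check of how coarse float becomes at planetary distances; the Earth-like radius is just a number picked for the example.

      // Illustrates why planet-scale coordinates push you toward doubles:
      // near a ~6,400 km radius, adjacent float values are half a meter apart.
      #include <cmath>
      #include <cstdio>

      int main()
      {
          double radiusMeters = 6'371'000.0;            // roughly Earth-sized planet
          float  asFloat      = static_cast<float>(radiusMeters);

          // Distance to the next representable value at this magnitude.
          float  floatGap  = std::nextafter(asFloat, 1e12f) - asFloat;
          double doubleGap = std::nextafter(radiusMeters, 1e12) - radiusMeters;

          std::printf("float spacing near the surface:  %.3f m\n", floatGap);   // ~0.5 m
          std::printf("double spacing near the surface: %.9f m\n", doubleGap);  // ~1e-9 m
      }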
  7. 1 point
    Typically malloc returns memory aligned to (at least) the most restrictive type on a given architecture. You really don't have to worry about it unless you are doing something special. For instance, it might return 4-byte-aligned memory on a 32-bit machine and 8-byte-aligned memory on a 64-bit machine. Since malloc has no idea what you are going to do with the memory, it has to do this. If I remember correctly, Intel CPUs don't strictly require alignment, but there used to be a performance penalty if your data wasn't aligned correctly. If you did this on some RISC processors, however, you would get the dreaded "bus error" and your program would crash, so in general it's best to make sure everything is aligned; but again, unless you are doing something odd, it shouldn't be a problem.

    If you are doing a custom allocator, like a slab allocator or some such, then you have to worry about such things. Generally, however, you again just align to the natural architecture alignment. As for pool sizes, that's going to vary from machine to machine, depending on page sizes and probably other things. One trick I use on Windows 64 is to use VirtualAlloc. This lets you reserve a giant chunk of contiguous address space, yet not actually allocate the memory until needed. You can get away with this because the address space is so huge, even if your physical memory isn't; a short sketch is below. In any case, any time you are allocating a lot of small objects you should generally do your own memory pool stuff for performance reasons, assuming you care about that.
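    Here is a minimal sketch of that reserve-then-commit pattern, assuming Win64; the sizes are arbitrary values chosen for the example.

      // Reserve a large contiguous range of address space up front, then
      // commit actual memory only as the pool grows.
      #include <windows.h>
      #include <cstdio>

      int main()
      {
          const SIZE_T reserveSize = 16ull * 1024 * 1024 * 1024; // 16 GB of address space
          const SIZE_T commitSize  = 64 * 1024;                  // commit 64 KB to start

          // Reserve contiguous address space without backing it with memory.
          void* base = VirtualAlloc(nullptr, reserveSize, MEM_RESERVE, PAGE_NOACCESS);
          if (!base) return 1;

          // Commit pagefile-backed pages only for the part of the pool in use.
          void* pool = VirtualAlloc(base, commitSize, MEM_COMMIT, PAGE_READWRITE);
          if (!pool) return 1;

          std::printf("pool starts at %p\n", pool);

          VirtualFree(base, 0, MEM_RELEASE); // release the whole reservation
          return 0;
      }

    As the pool fills up you commit further pages inside the reserved range, so pointers into the pool never move and you only pay for the memory you have actually committed.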
  8. 1 point
    In my experience, I don't think I've been on a project that's strictly followed a formal model. Typically it's some iterative stuff during pre-production. Then to actually budget the project you pretend it will be waterfall and draw out the Gantt charts. Then month to month / week to week you do some stuff that riffs off scrum, agile, kanban, etc. And then occasionally a producer reconciles actual progress with that fictional waterfall plan. Each team will usually have a lead directing them in their own style, and a producer overseeing and coordinating the leads. The project will usually be split up into many milestones (which are tied to payments if the game is being externally funded, to reduce investor risk). Planning / communication styles may change depending on which milestone you're up to, whether you're behind schedule, and whether you're focused on features, bug fixing, polishing, etc...
  9. 1 point
    I'd start with Morgan McGuire's article from 2014. It's still close to the state-of-the-art. vterrain.org is a virtual treasure trove of older techniques, but they don't seem to have updated much since 2010.
  10. -1 points
    ... erase with whatever you want? Google "texture filtering".