Swartz27

Member

  • Content Count: 9
  • Joined
  • Last visited
  • Community Reputation: 144 Neutral

About Swartz27

  • Rank: Newbie

Personal Information

  • Interests: Programming


  1. Swartz27

    Ideal rendering engine?

    Thank you so much @Hodgman, I truly appreciate it. I'm currently trying to play catch-up with your post.

    [quote]RTX is just NVidia's marketing buzzword for "supports RTRT APIs"[/quote]

    Yeah, I suspected as much. I want to strangle the people at Nvidia (I ask very simple questions once in a while, but since I'm an indie dev I guess I don't matter to them). I'll try both approaches and then map out the results for others in a blog post. For the record, my understanding of C++ is terrible; HLSL makes a lot more sense to me. Have you taken a look at Unity's upcoming Scriptable Render Pipeline? I have to admit it seems impressive, giving the graphics programmer a lot more control, even over the rendering order.
  2. Swartz27

    Ideal rendering engine?

    I just realized I typed all that without explaining what type of game it would be. It would be a first-person shooter/survival game set in a somewhat open-world environment. Single player only.
  3. I'm looking to create a small game engine, though my main focus is the renderer. I'm trying to decide which of these techniques I like better: Deferred Texturing or Volume Tiled Forward Shading ( https://github.com/jpvanoosten/VolumeTiledForwardShading ). Which would you choose, if not something else? Here are my current goals:

     • I want to keep middleware to a minimum.
     • I want to use either D3D12 or Vulkan. However, I understand D3D best, so that is where I'm currently siding.
     • I want to design for today's high-end GPUs and not worry too much about compatibility, as I'm assuming this is going to take a long time anyway.
     • I'm only interested in real-time ray tracing if/when it can be done without an RTX-enabled card.
     • A PBR pipeline that DOES NOT INCLUDE METALNESS. I feel there are better ways of doing this (hint: I like cavity maps).
     • I want dynamic resolution scaling. I know it's related to super-sampling, but I haven't found many sources that explain super-sampling in a way I would understand.
     • I don't want to use any static lighting. I have good reasons, which I'd be happy to explain.

     So I guess what I'm asking you fine people is: if time and money were not a concern, what type of renderer would you write, and more importantly "WHY"? Thank you for your time.
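On the dynamic-resolution goal above: a common approach (sketched here on the CPU, with all names and constants illustrative rather than from any particular engine) is to adjust the render-target scale each frame based on the previous frame's GPU time. Because GPU cost is roughly proportional to pixel count, i.e. to scale squared, the correction uses the square root of the time ratio.

```cpp
#include <algorithm>
#include <cmath>

// Hedged sketch, not from any engine: pick a per-axis render-resolution
// scale from the previous frame's GPU time so frames fit a 60 FPS budget.
struct ResolutionScaler {
    float scale = 1.0f;  // fraction of native resolution per axis

    static constexpr float kTargetMs = 16.6f;  // 60 FPS budget
    static constexpr float kMinScale = 0.5f;   // never drop below half res

    void update(float gpuFrameMs) {
        // Cost ~ scale^2, so correct by sqrt of the time ratio.
        float next = scale * std::sqrt(kTargetMs / gpuFrameMs);
        // Damp the change to avoid visible oscillation in resolution.
        scale += 0.25f * (next - scale);
        scale = std::clamp(scale, kMinScale, 1.0f);
    }
};
```

With these constants, a frame twice over budget pulls the scale from 1.0 down to about 0.93 on the first update, and fast frames let it climb back to 1.0; the damping factor trades responsiveness against visible resolution "pumping".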
  4. I don't care if you're a moderator: you're a dick.
  5. Hi, yes, I likely butchered the terms. Basically, what I am looking for is to be able to assign a special shader to a given material that I have; like you said, PBR.

     I'll just go ahead and say that the project I'm working on uses X-Ray Engine 1.6 (note: I have "unofficial" permission to be using their source code; I am not doing anything illegal/immoral, GSC just won't come right out and say it's OK due to 3rd-party code it contains [which is being removed]). To clarify, I did contact GSC about it. The whole renderer, and really the whole engine, is a total mess.

     So I have a question for a moderator rather than creating a brand-new topic: would it be OK if I made a topic asking someone with experience to look through the rendering code and let me know what needs to be fixed and improved in its DX11 renderer? I don't expect anyone to do any work for me; I'd just like some hints in the right direction.
  6. [quote]It's not "Diffuse light" and "Specular light", but material properties of a pixel. On this stage you are writing the G-buffer properties for each pixel. These MATERIAL properties will be used with the light-related data in step 2.

     I've done it via a packed structured buffer. Each type of light (directional, ambient, point, and spot) will be written to this kind of structure:

     struct LightData
     {
         uint   type;          // All lights
         float3 color;         // All lights
         float3 dirToLightVS;  // Directional, Spot
         float  innerAngleCos; // Spot
         float3 posVS;         // Point, Spot
         float  outerAngleCos; // Spot
     }; // 48 bytes

     StructuredBuffer<LightData> Lights;

     The buffer will contain n elements (light count + 1: a last "LIGHT_NO" stub to break the loop). The shader unpacks the data, calculates the amount of light from each light source, and sums the contributions of the different lights:

     // for each pixel/sample:
     for (uint i = 0; i < 102400; ++i) // large bound; real exit is the LIGHT_NO stub
     {
         if (Lights[i].type == LIGHT_NO)
             break; // No more lights

         DiffSpec curr = (DiffSpec)0;
         switch (Lights[i].type)
         {
         // Sorted by occurrence
         case LIGHT_POINT:
             curr = CalcPoint(Lights[i], ...);
             break;
         case LIGHT_SPOT:
             curr = CalcSpot(Lights[i], ...);
             break;
         case LIGHT_DIRECTIONAL:
             curr = CalcDirectional(Lights[i], ...);
             break;
         case LIGHT_AMBIENT:
             curr = CalcAmbient(Lights[i], ...);
             break;
         case LIGHT_SHADOW_PRIM:
             curr = CalcDirectional(Lights[i], ...);
             break;
         }
         totalLight += curr;
     }[/quote]

     You're awesome (same with Hodgman) :) This is very helpful, thank you so much!
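The quoted HLSL layout above has a CPU-side counterpart: the application fills the structured buffer with tightly packed 48-byte elements and appends the LIGHT_NO stub. A hedged C++ sketch of that packing (the enum values and the buildLightBuffer helper are assumptions for illustration; only the struct layout and the stub convention come from the post):

```cpp
#include <cstdint>
#include <vector>

// Assumed enum values; the post only names the constants, not their values.
enum LightType : uint32_t {
    LIGHT_NO = 0, LIGHT_POINT, LIGHT_SPOT,
    LIGHT_DIRECTIONAL, LIGHT_AMBIENT, LIGHT_SHADOW_PRIM
};

struct float3 { float x, y, z; };

// CPU mirror of the shader-side LightData. HLSL structured buffers are
// tightly packed, so uint + 3*float3 + 2*float = 48 bytes on both sides.
struct LightData {
    uint32_t type;          // all lights
    float3   color;         // all lights
    float3   dirToLightVS;  // directional, spot
    float    innerAngleCos; // spot
    float3   posVS;         // point, spot
    float    outerAngleCos; // spot
};
static_assert(sizeof(LightData) == 48, "must match the HLSL layout");

// Hypothetical helper: buffer contents are the n lights followed by one
// LIGHT_NO stub, so the shader loop knows where to stop.
std::vector<LightData> buildLightBuffer(const std::vector<LightData>& lights) {
    std::vector<LightData> buf = lights;
    LightData stub{};
    stub.type = LIGHT_NO;
    buf.push_back(stub);
    return buf;
}
```

The resulting vector's raw bytes can be uploaded directly as the contents of the StructuredBuffer.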
  7. Thanks :) I just want to make sure I understand this 100%.

     Step 1: Create a thin G-buffer. By "thin" I suppose you mean either fewer things in the G-buffer, a lower-precision DXGI format, or both? The current G-buffer consists of Position, Normal, Diffuse Light, Specular Light, and Material type.

     Step 2: Create an intermediate lighting buffer, so include all lights here, and then have it read the necessary info from the thin G-buffer?

     Step 3: Draw opaque objects to the light buffer: straightforward.

     As for how neat the code is, even though I'm a beginner I can tell that this game engine is pretty much held together by duct tape. The renderer is a huge mess as well, but I know how to navigate around it.
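One concrete way a G-buffer gets "thin", sketched here as an illustration rather than as the advice given in the thread: drop Position entirely (it can be rebuilt from the depth buffer) and store the normal octahedrally in two 16-bit channels instead of a full float3. The encoding below is the standard octahedral mapping, shown on the CPU for clarity.

```cpp
#include <cmath>
#include <cstdint>

struct Vec3 { float x, y, z; };

static float signNotZero(float v) { return v >= 0.0f ? 1.0f : -1.0f; }

// Map a unit normal onto [-1,1]^2 (octahedral), then quantize each axis
// to 16 bits, so the whole normal fits one 32-bit G-buffer channel.
uint32_t encodeNormal(Vec3 n) {
    float l1 = std::fabs(n.x) + std::fabs(n.y) + std::fabs(n.z);
    float u = n.x / l1, v = n.y / l1;
    if (n.z < 0.0f) {                      // fold the lower hemisphere
        float pu = (1.0f - std::fabs(v)) * signNotZero(u);
        float pv = (1.0f - std::fabs(u)) * signNotZero(v);
        u = pu; v = pv;
    }
    auto q = [](float f) {                 // [-1,1] -> [0,65535]
        return (uint32_t)std::lround((f * 0.5f + 0.5f) * 65535.0f);
    };
    return q(u) | (q(v) << 16);
}

Vec3 decodeNormal(uint32_t packed) {
    float u = (packed & 0xFFFF) / 65535.0f * 2.0f - 1.0f;
    float v = (packed >> 16)    / 65535.0f * 2.0f - 1.0f;
    Vec3 n{u, v, 1.0f - std::fabs(u) - std::fabs(v)};
    if (n.z < 0.0f) {                      // unfold the lower hemisphere
        float nx = (1.0f - std::fabs(v)) * signNotZero(u);
        float ny = (1.0f - std::fabs(u)) * signNotZero(v);
        n.x = nx; n.y = ny;
    }
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    return {n.x/len, n.y/len, n.z/len};
}
```

A round trip through encode/decode reproduces the normal to within quantization error (well under a degree), which is why this is a popular replacement for a fat RGBA16F normal target.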
  8. Hi, unfortunately I haven't had much luck searching for information on this. Basically, I'm looking to take the source code to a game, take its deferred shading renderer, and switch it over to deferred lighting, or better yet, Tiled Indexed Deferred Lighting. I'm very new to all this (still working through DirectX 11 tutorials), but I've learned a lot in that time.

     There are many reasons for wanting to switch the method of rendering, the main one being adding a more complex BRDF system to the game. Any help or useful links on this subject would be very much appreciated.
  9. Hi, one of the biggest problems in the game I'm talking about is shadow aliasing. I tried enabling jittering in the shader, but it still doesn't mask the problem enough. I recently came across a great GPU Gems article that sounds like it would solve the problem: http://http.developer.nvidia.com/GPUGems/gpugems_ch11.html (specifically, the "brute-force method").

     Here is the shadow shader for the game: http://pastebin.com/SghPgsTq

     The problem is I don't really understand HLSL that well yet, so I was hoping someone would be willing to take the time to walk me through it. I have access to the engine code in addition to that shader, so if there is any information about the engine's shadowing that you need, let me know and I'll post it. Thank you for your time.
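For readers following the GPU Gems link: the "brute-force" method there is percentage-closer filtering, i.e. averaging many binary shadow-map comparisons instead of doing one. A software sketch of the idea (the 4x4 kernel, the toy 8x8 depth map, and all function names are illustrative; the real thing runs in the pixel shader against a depth texture):

```cpp
#include <algorithm>

const int kMapSize = 8;

// Toy shadow map: left half holds an occluder at depth 0.3,
// right half is empty (far plane at depth 1.0).
float shadowDepth(int x, int y) {
    (void)y;
    return (x < kMapSize / 2) ? 0.3f : 1.0f;
}

// Brute-force PCF: fraction of a 4x4 texel neighbourhood that is lit for
// a receiver projected to texel (x, y) at light-space depth receiverDepth.
// Averaging the comparisons turns hard, aliased edges into soft gradients.
float pcf4x4(int x, int y, float receiverDepth) {
    float lit = 0.0f;
    for (int dy = -1; dy <= 2; ++dy)
        for (int dx = -1; dx <= 2; ++dx) {
            int sx = std::clamp(x + dx, 0, kMapSize - 1);
            int sy = std::clamp(y + dy, 0, kMapSize - 1);
            // Lit if nothing in the map is closer to the light than we are.
            lit += (receiverDepth <= shadowDepth(sx, sy)) ? 1.0f : 0.0f;
        }
    return lit / 16.0f;
}
```

Deep inside the occluder the result is 0, far from it the result is 1, and texels near the shadow boundary get fractional values, which is exactly the anti-aliasing effect the article describes; jittering the sample offsets trades the remaining banding for noise.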

GameDev.net is your game development community. Create an account for your GameDev Portfolio and participate in the largest developer community in the games industry.
