piluve

Members

  • Content count: 43
  • Joined
  • Last visited

Community Reputation: 286 Neutral

About piluve

  • Rank: Member
  1. Hey guys, I started working on a port from DX11 to DX12. The first thing was to set everything up to work with D3D11On12, and I've got that done. Basically, the render frame goes as follows:

         D3D12Prepare();            // Sets up the command list and command allocators
                                    // (as well as basic clear and set render targets)
         GetWrappedResources();     // D3D11On12 step to acquire the wrapped resources
         Render();                  // Basically all the DX11 rendering code, etc.
         D3D12End();                // D3D12Prepare left the command list open so we could add
                                    // additional commands; now close and execute it
         ReleaseWrappedResources();
         Flush();                   // Flush all the DX11 work
         Dx12Sync();                // Wait for the fence
         Dx12Present();

     That setup is working, and I've changed some commands inside Render() from DX11 to DX12 (basic stuff like setting the viewport). Now I want to start porting more of the code inside Render(). For example, we have a simple method that draws a quad without vertex or index buffers (we use the vertex ID inside the shader). Basically, it should translate to this:

         mCmdList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
         mCmdList->DrawInstanced(4, 1, 0, 0);

     But even that simple piece of code just isn't working. I would like some advice from someone who has done a similar process (using D3D11On12): what are the limitations, what won't work, etc.? My main concern right now is that if I want to start issuing commands that touch the IA, I will also have to create the PSO, root signatures, etc. Thanks.
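     For reference, a minimal sketch of that missing state -- hedged, not engine-specific: device, vsBlob, psBlob, and the RTV format below are assumptions, and error checking is omitted. A raw D3D12 draw needs at least a root signature and a PSO:

         #include <d3d12.h>
         #include <wrl/client.h>
         using Microsoft::WRL::ComPtr;

         // Hypothetical helper: empty root signature + PSO for a bufferless
         // quad drawn as a triangle strip from SV_VertexID.
         void CreateQuadPipeline(ID3D12Device* device,
                                 ID3DBlob* vsBlob, ID3DBlob* psBlob,  // precompiled shaders (assumed)
                                 ComPtr<ID3D12RootSignature>& rootSig,
                                 ComPtr<ID3D12PipelineState>& pso)
         {
             // No buffers or tables are needed when the shader only uses SV_VertexID.
             D3D12_ROOT_SIGNATURE_DESC rsDesc = {};
             rsDesc.Flags = D3D12_ROOT_SIGNATURE_FLAG_NONE;
             ComPtr<ID3DBlob> rsBlob, rsErrors;
             D3D12SerializeRootSignature(&rsDesc, D3D_ROOT_SIGNATURE_VERSION_1, &rsBlob, &rsErrors);
             device->CreateRootSignature(0, rsBlob->GetBufferPointer(), rsBlob->GetBufferSize(),
                                         IID_PPV_ARGS(&rootSig));

             D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {};
             psoDesc.pRootSignature = rootSig.Get();
             psoDesc.VS = { vsBlob->GetBufferPointer(), vsBlob->GetBufferSize() };
             psoDesc.PS = { psBlob->GetBufferPointer(), psBlob->GetBufferSize() };
             psoDesc.InputLayout = { nullptr, 0 };  // no vertex buffers at all
             psoDesc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE; // strips count as TRIANGLE
             psoDesc.RasterizerState.FillMode = D3D12_FILL_MODE_SOLID;
             psoDesc.RasterizerState.CullMode = D3D12_CULL_MODE_NONE;
             psoDesc.BlendState.RenderTarget[0].RenderTargetWriteMask = D3D12_COLOR_WRITE_ENABLE_ALL;
             psoDesc.SampleMask = 0xFFFFFFFF;
             psoDesc.NumRenderTargets = 1;
             psoDesc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM;  // must match your render target (assumption)
             psoDesc.SampleDesc.Count = 1;
             device->CreateGraphicsPipelineState(&psoDesc, IID_PPV_ARGS(&pso));
         }

     Then, per frame and before the two draw lines above: mCmdList->SetGraphicsRootSignature(rootSig.Get()); mCmdList->SetPipelineState(pso.Get()); plus viewports, scissor rects, and OMSetRenderTargets, since D3D12 command lists inherit almost no state.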
  2. Hello! Thanks for the tip, I'll give it a try ;)
  3. Hello! I implemented the god rays algorithm presented in GPU Gems 3; here is the link to the article: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch13.html. I have a few questions. How can I remove the rays when I'm not facing the sun? If the effect is applied while the sun is behind the view, it gives weird results. Right now I'm checking the projected z value, and if it is less than 0 I just don't perform the effect; this works OK, but the change is really abrupt. As I'm applying the god rays to the sun, I do the following math to get the sun in NDC space:

         glm::vec3 sunEyePos = mCamera.GetPosition() + (mSunDirection);
         glm::vec4 sunProj = mCamera.Projection * mCamera.View * glm::vec4(sunEyePos, 1.0f);
         sunProj /= sunProj.w;
         sunProj = (sunProj + 1.0f) * 0.5f;

     I'm a bit worried about how I get the sun position; the idea is to have a global light source. Is that OK? Thanks!
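     One hedged idea for the abrupt cutoff (my own suggestion, not from the article): fade the effect with the angle between the camera's forward vector and the sun direction instead of a hard z test. A minimal glm sketch, assuming mSunDirection points from the camera toward the sun:

         #include <glm/glm.hpp>

         // 1 when looking straight at the sun, easing to 0 as it leaves the view.
         float GodRayFade(const glm::vec3& camForward, const glm::vec3& sunDirection)
         {
             float facing = glm::dot(glm::normalize(camForward), glm::normalize(sunDirection));
             // Remap [0.0, 0.3] -> [0, 1] so the rays ease out instead of popping.
             return glm::smoothstep(0.0f, 0.3f, facing);
         }

     Multiplying the god-ray contribution by this factor keeps the transition smooth while still killing the effect once the sun is behind the camera.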
  4. Alright, I found a way to mix it with my scene: I render the effect first with the depth mask disabled and then render everything else. That part is working! I still have a problem with how I calculate the ray, though; you can see it here: https://i.gyazo.com/745305ef1e04aab74b3c4cffaf0812f1.mp4. As you can see, it's moving up and down... I calculate the ray using:

         vec3 GetRayDir(vec2 uv)
         {
             mat4 rotMatrix = mat4(uView);
             rotMatrix[3] = vec4(0.0f, 0.0f, 0.0f, 1.0f);
             mat4 iView = inverse(rotMatrix);
             mat4 iProj = inverse(uProjection);
             mat4 iVp = iProj * iView;
             vec3 rd = (iVp * vec4(uv.x * uAspect, uv.y, -1.0f, 1.0f)).xyz;
             return normalize(rd);
         }

         vec3 rayDir = GetRayDir(iPos.xy);

     iPos is the position attribute of the fullscreen quad. Any ideas? Thanks!
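     For future readers: the wobble is most likely the matrix order. iProj * iView applies the inverse view before the inverse projection, but unprojection goes clip -> view -> world, i.e. inverse(view) * inverse(proj). A hedged CPU-side sketch of the usual recipe with glm (view/proj mirror the uView/uProjection uniforms; the inverse projection already accounts for aspect, so no extra uAspect factor is needed):

         #include <glm/glm.hpp>

         // World-space ray direction from NDC coordinates in [-1, 1].
         glm::vec3 GetRayDir(glm::vec2 ndc, const glm::mat4& view, const glm::mat4& proj)
         {
             // NDC point on the near plane -> view space.
             glm::vec4 eye = glm::inverse(proj) * glm::vec4(ndc, -1.0f, 1.0f);
             eye = glm::vec4(eye.x, eye.y, -1.0f, 0.0f);   // treat it as a direction, not a point
             // View space -> world space using the rotation-only inverse view.
             glm::mat4 rot = view;
             rot[3] = glm::vec4(0.0f, 0.0f, 0.0f, 1.0f);   // drop the camera translation
             return glm::normalize(glm::vec3(glm::inverse(rot) * eye));
         }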
  5. Okay, I found a way to get a ray and also to use the current view matrix. Now I'll try to figure out how to mix the postprocess and the scene.
  6. Quote: "Yep. Standard ray/sphere intersection. My implementation is buried somewhere, but it looks like someone wrote the same thing on shadertoy."

     Hey! I found this project: https://github.com/wwwtyro/glsl-atmosphere and it's basically what I wanted. I implemented it on Shadertoy (https://www.shadertoy.com/view/4sXcRS) so I can test with it first. The doubt I have right now is how to transform the ray directions:

         vec3 rayDir = normalize(vec3(uv, -1.0));

     As written, the effect is stuck in camera space. Should I multiply the direction by the view matrix? I'm not sure about that. See you!
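     A hedged sketch of that transform (assuming uv is in NDC and the view matrix has no scale): rotate the camera-space ray by the inverse of the view rotation to get a world-space ray.

         #include <glm/glm.hpp>

         // Camera-space ray -> world-space ray. A direction ignores translation,
         // so only the rotation part of the view matrix matters; for a pure
         // rotation, the inverse is the transpose.
         glm::vec3 ToWorldRay(glm::vec2 uv, const glm::mat4& view)
         {
             glm::vec3 camRay = glm::normalize(glm::vec3(uv, -1.0f));
             glm::mat3 invViewRot = glm::transpose(glm::mat3(view));
             return glm::normalize(invViewRot * camRay);
         }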
  7. Quote: "Yep. Standard ray/sphere intersection. My implementation is buried somewhere, but it looks like someone wrote the same thing on shadertoy."

     Alright, I'll give it a try. Thanks!
  8. Quote: "I implemented O'Neil's scattering as a post-process a long time ago. It works. Planets are pretty big, so broadly sorting rendered elements into [ground, ocean, clouds, atmosphere, space] makes it pretty easy to ensure the atmosphere is rendered in the right order, such that alpha blending is resolved beforehand."

     Did you use raycasting to fake the atmosphere?
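     Since "standard ray/sphere intersection" keeps coming up in this thread, here is a hedged sketch of it (generic textbook form, not anyone's engine code):

         #include <glm/glm.hpp>
         #include <cmath>

         // Ray/sphere intersection via the quadratic formula. Returns true and
         // the two ray parameters t0 <= t1 when the ray hits the sphere.
         bool RaySphere(const glm::vec3& origin, const glm::vec3& dir,  // dir must be normalized
                        const glm::vec3& center, float radius,
                        float& t0, float& t1)
         {
             glm::vec3 oc = origin - center;
             float b = glm::dot(oc, dir);                   // half of the usual b, since a == 1
             float c = glm::dot(oc, oc) - radius * radius;
             float disc = b * b - c;
             if (disc < 0.0f) return false;                 // the ray misses the sphere
             float s = std::sqrt(disc);
             t0 = -b - s;
             t1 = -b + s;
             return t1 >= 0.0f;                             // some part of the sphere is in front
         }

     For an atmosphere post-process, you would intersect each camera ray with the outer atmosphere sphere and march the scattering integral between t0 and t1.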
  9. Update 3: I started working again on the atmospheric scattering shader, and I'm still figuring out the best way to use it without having to render a sphere the size of the atmosphere. I've been using a sphere with radius 1 centered on the camera (like you would do with a skybox), but I have to hack some values. For example, I have to displace the sphere downwards; if I don't do that, the atmosphere appears at the north pole of the sphere... Maybe the best way would be to implement it as a postprocess, but I'm not sure how it will fit with other elements (with alpha). Any ideas? Thanks!
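     One hedged guess about the north-pole artifact (an assumption, without seeing the full shader): O'Neil's equations expect the camera to sit near the planet surface, roughly innerRadius above the planet center, while a unit sphere centered on the camera effectively puts it at the center. Feeding the shader a virtual camera position may replace the displacement hack; the uniform names below follow the GPU Gems 2 sample, the rest is illustrative:

         #include <glm/glm.hpp>

         struct ScatteringUniforms
         {
             glm::vec3 v3CameraPos;    // virtual camera used only by the scattering math
             float     fCameraHeight;
         };

         ScatteringUniforms MakeScatteringUniforms(float innerRadius, float heightAboveGround)
         {
             ScatteringUniforms u;
             // Pretend the camera stands on the planet, regardless of where the
             // unit skybox sphere actually is.
             u.v3CameraPos   = glm::vec3(0.0f, innerRadius + heightAboveGround, 0.0f);
             u.fCameraHeight = innerRadius + heightAboveGround;
             return u;
         }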
  10. Quote: "Source: https://developer.nvidia.com/implementing-hdr-rise-tomb-raider"

      Hello! I'll check how OpenGL handles sRGB textures. Thanks for that article, it's really informative!
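      For whoever lands here later, a hedged sketch of the two OpenGL pieces involved (standard GL calls; the loader and filtering choices are assumptions):

          #include <GL/glew.h>   // any GL loader works; GLEW is just an example

          // Upload an 8-bit color texture as sRGB so texture() returns
          // linear-space values in the shader.
          GLuint CreateSRGBTexture(int width, int height, const unsigned char* pixels)
          {
              GLuint tex = 0;
              glGenTextures(1, &tex);
              glBindTexture(GL_TEXTURE_2D, tex);
              glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
                           GL_RGBA, GL_UNSIGNED_BYTE, pixels);
              glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
              glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
              return tex;
          }

      And when writing the final LDR image, glEnable(GL_FRAMEBUFFER_SRGB) lets the hardware do the linear-to-sRGB encode on an sRGB-capable framebuffer.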
  11. Quote: "IMHO, it should be the whole image, not just the bright parts. In the versions of bloom that appeared before HDR, you had to use some kind of threshold value to extract only the bright parts, but that makes no physical sense. Bloom is light being blurred by imperfections in the lens (either your eye, or smudges etc. on the camera lens), and it's impossible to construct a lens that will let through X number of photons perfectly and then blur all other photons. Natural lighting effects are additive and multiplicative, but thresholding is a subtractive (unnatural) effect. In my HDR pipeline, I just multiply the scene by a small value, such as 1%, instead of thresholding -- e.g. 1% of the light refracts through the smudges, taking blurry paths to the sensor, and 99% takes a direct path. Changing that multiplier will change how smudged or imperfect your lens is. However, after doing this, the end result is similar: you only notice the bloom effect on bright areas :D

      You should do tone-mapping and output gamma correction at the same time. You never want to go from one 8-bit colour space to another 8-bit colour space, because doing so will lose information in the process, and you don't have enough precision to spare at only 8 bits. So when you go from 16-bit HDR down to LDR in the tone-mapper and are about to output the results to an 8-bit buffer, that's the perfect time to convert to the output colour space.

      As well as adjusting the output curve of your tone-mapper, you also have to adjust your input textures. Your textures are (hopefully) being authored by an artist on an sRGB monitor, so they're visualizing those 0-255 values in the sRGB colour space (which is approximately the same as gamma 2.2). In your shaders, you need to make sure that this texture data is converted from sRGB/gamma 2.2 to linear RGB/gamma 1.0 before you do any lighting calculations with it. You can do that in your shader with code, or use the HW-accelerated sRGB decoding (e.g. the *_SRGB texture formats in D3D, or the sampler state in GL)."

      Hey! I like your approach to the bloom; it seems more natural that way. For the gamma correction, I should only apply it when I sample an albedo texture, right?
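      To make the two quoted ideas concrete, a hedged glm sketch (the 1% value is the one mentioned above; the names are mine). On the question in the reply: the decode applies to any texture authored as color in sRGB (albedo, emissive), while data textures such as normal maps stay linear.

          #include <glm/glm.hpp>

          // Approximate sRGB -> linear decode for color textures (the HW _SRGB
          // formats implement the exact piecewise curve for you).
          glm::vec3 SrgbToLinear(const glm::vec3& c) { return glm::pow(c, glm::vec3(2.2f)); }

          // Bloom input without a threshold: a fixed fraction of *all* light
          // takes the blurry path through the lens, as the quote describes.
          const float kLensSmudge = 0.01f;   // 1%; larger means a dirtier lens

          glm::vec3 BloomInput(const glm::vec3& hdrScene)  { return hdrScene * kLensSmudge; }
          glm::vec3 DirectInput(const glm::vec3& hdrScene) { return hdrScene * (1.0f - kLensSmudge); }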
  12. Hello guys, hope you are doing well.

      This week I'll start working on postprocessing for my demo scene, and I would like to add some effects like bloom and lens flares. I've been reading some blogs and articles related to these effects, and most of the sources say you should apply them in a specific order. This is what I understand of each effect:

      + HDR: basic render targets store color in the 0-255 (0-1) range, but many scenes generate colors beyond that mark, for example specular highlights. We can use floating-point render targets so values won't be clamped.
      + Bloom: takes the bright parts of the image, blurs them, and adds them back to the final scene (it gives a cool glare effect). It works well with HDR, as the bright values can be bigger than 1.
      + Lens flares: an effect that happens when light interacts with the lens of a camera.
      + Tone mapping: transforms the HDR values to the expected LDR values.
      + Gamma correction: adjusts the color curve so it matches the monitor.

      In my engine I'm using a forward rendering pipeline. At the end of the frame, I have a framebuffer with all the stuff rendered that frame. This is how I would do the postprocessing: bloom -> lens flares -> tone mapping -> gamma correction (all of the steps using floating-point textures).

      What do you think? Thanks!
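      A hedged sketch of that ordering as a pass chain; the pass bodies are stubs standing in for fullscreen shaders, and only the sequencing and the keep-everything-HDR-until-the-end rule are the point:

          #include <glm/glm.hpp>
          #include <vector>

          using Image = std::vector<glm::vec3>;   // stand-in for a floating-point render target

          // Stub passes; each would really be a shader over a float texture.
          Image Bloom(const Image& hdr)      { return hdr; }  // blur bright light, add back
          Image LensFlares(const Image& hdr) { return hdr; }  // ghosts/halos from bright spots
          glm::vec3 ToneMap(glm::vec3 c)     { return c / (c + glm::vec3(1.0f)); }          // placeholder Reinhard
          glm::vec3 Gamma(glm::vec3 c)       { return glm::pow(c, glm::vec3(1.0f / 2.2f)); }

          Image PostProcess(const Image& sceneHdr)
          {
              Image hdr = LensFlares(Bloom(sceneHdr));        // stay in HDR through the effects
              Image ldr(hdr.size());
              for (size_t i = 0; i < hdr.size(); ++i)
                  ldr[i] = Gamma(ToneMap(hdr[i]));            // tone-map and encode in one step
              return ldr;
          }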
  13. Update 2: I've updated all the variables in the shader that were hardcoded; now I'm using uniforms to set them. I found a problem with the SunDirection that I send from the vertex to the fragment shader: I must normalize it. Now the sun is no longer black, but it has a strange shape.
  14. Hello once again.

      I've been adding the atmosphere model explained by Sean O'Neil (http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter16.html) and I have it about 70% working. I'm thinking about the best way to use it in a scenario where I won't be using a planet: I'll just render a terrain (let's say 20x20 km) and a few other elements. My first idea was to render a 3D sphere for the sky with a radius of 4 km, then specify the outer and inner radius accordingly in the shader. This is not a really good idea, as I have to hack a few values to move up all the elements of my scene (so they are placed at the "earth" surface). The other idea is to make it a postprocessing effect: define a sphere and raycast it using the scattering shader. I think it will work, but I'm not sure how well it will fit in my scene (water, terrain, clouds), which uses forward rendering with some transparent objects.

      Any ideas? See you!

      Edit: here are the vertex and fragment shaders I've been using with the 3D sphere:
      Vert: http://pastebin.com/p00jTSSy
      Frag: http://pastebin.com/KmcXcGc7

      In those shaders I render the sky sphere like a skybox (depth mask disabled, rendered before the scene). The sphere has a 1 m radius, and I'm hardcoding the camera position.

      Update: I've been working on the skybox-sphere solution, and I'm having some problems with the algorithm itself. The sun turns black when its position does not lie on the z axis, i.e. (0.0f, 0.0f, 1.0f) or (0.0f, 0.0f, -1.0f). This is how it looks: https://i.gyazo.com/47778fc56b41eeccef798ff79a8a7379.png. For example, if I set the sun position (sun position in the shader = sun direction) to (0.0f, 0.5f, 1.0f): https://i.gyazo.com/22c556be20360b32e2f4431dc22816a0.png

      I updated the vertex and fragment code:
      Vert: http://pastebin.com/shuL2iL5
      Frag: http://pastebin.com/V7Pdi3Lq

      I would like to mention that I'm ignoring the alpha value in the fragment shader.
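      On the "how will it fit with transparent objects" question, post 4 above eventually settled on an ordering that can be sketched like this (hedged; the Draw* callbacks are hypothetical stand-ins for the engine's passes, and the GL calls are standard):

          #include <GL/glew.h>   // any GL loader; GLEW is an assumption

          // Hypothetical draw callbacks for the engine's passes.
          void DrawAtmosphere();
          void DrawOpaques();
          void DrawTransparents();

          // Atmosphere first with depth writes off, then the opaque scene,
          // then blended transparents sorted back to front.
          void RenderFrame()
          {
              glDepthMask(GL_FALSE);
              DrawAtmosphere();
              glDepthMask(GL_TRUE);
              DrawOpaques();                 // terrain, water, clouds, etc.
              glEnable(GL_BLEND);
              glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
              DrawTransparents();
              glDisable(GL_BLEND);
          }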
  15. Sometimes I just need to start writing a post here or on Reddit to find the solution while I write it. I had a problem at the UV generation stage; I replaced it with something more naive, but it works.

      See you! Thanks for being my rubber duck.