Chetanhl

Members

  • Content count: 140
  • Joined
  • Last visited

Community Reputation: 354 Neutral

About Chetanhl

  • Rank: Member
  1. I think I shouldn't have used the word "3D renderer" there. Basically, I created a sort of 3D-looking scene based on my observations while playing games like F1 Race, 3D scooter racing, etc. I did it using Turbo C++ graphics functions like line and circle, plus some crazy equations based on 2D geometry maths. Here's an example image (created in paint.net, not an actual screenshot) -

     [attachment=28149:3Dsrc.png]

     That 'T' was the player object, and we could move it right/left based on input. The road had a scrolling animation, and I could increase/decrease its width to simulate downhill/uphill. Circular objects got bigger as they came closer. I don't really know what to call it. Maybe a "hacky 3D wannabe simulation"? Whatever it was, the joy of creating a 3D-looking scene for the first time was beyond words.

     This was done even before I studied matrices, so no, there's no way I had any knowledge of transforms, let alone the 3D pipeline. I implemented a proper software renderer later, in my 2nd year of college, after learning about the graphics pipeline and all. It was a wireframe renderer tested only with some simple objects like cubes, planes, etc.

     Regarding maths in school - yes, we were taught only basic operations on matrices like add/multiply, inverse, etc. No transformations, and not even a hint of why matrices were important. But we did understand vectors, the dot and cross products, and their uses; they were necessary even for physics problems.

     I don't remember covering Bezier curves either in school or college, but I do remember solving problems where, given an arbitrary function graph, we had to solve for the function's equation using some basic trig functions (sin θ, cos θ, etc.) - but that's something closer to SH (spherical harmonics), I guess.
  2. I am a self-taught graphics programmer.

     We had C++ as part of our course in high school (2005-2006). After getting bored of coding calculators, I coded some text-based games and then tried a basic 3D renderer using high school maths, geometry, and Turbo C++ graphics (lines). I bought my first PC in 2006, a P4, because I needed it to complete my TicTacToe game; cyber cafes were not cheap in those days, and it was hard to find good cafe PCs with USB drives where I could install Turbo C++. After I bought my PC, a devil (a friend of mine) introduced me to PC games like Doom95, Doom 2, and Half-Life 1, and I was transformed from a sportsman into a nerd within a month. I finally knew what I must become. As if distracting me over the whole year wasn't enough for him, he gave me Doom 3 and HL2 just before the final exams. Yes, he was a monster.

     2007 - After school, I took a Computer Engineering course. During the first year I made some basic 2D games using Allegro. Then I discovered a wonderful website called http://www.gamedev.net/, which helped me a lot in learning different aspects of game development, including graphics. I always liked graphics more than the other parts of game development. In the 3rd year of college I started developing my own hobby engine (JustAnotherGameEngine), and I'm still working on it. The current version 4 is based on Direct3D 11, and that's where all the fun stuff happens.

     I graduated from college in 2010. Since then I've been hopping between jobs, working on small to medium-sized games, for over 5 years now. I have worked on almost all aspects of game development in the past, including gameplay, graphics, AI, multiplayer, etc. We don't have any hardcore game development in my country, India, so I never got the chance to work on a game that I can be proud of. But I scratch that itch by working on my own game engine.

     I made a tech demo 2 years back. It was a demo of all the different features of version 3 of my engine, excluding high-quality graphics or fancy shaders - https://www.youtube.com/watch?v=XvAIgO2bs5A

     I am planning to make another demo with the latest version of my engine, and its main focus will be on graphics this time. I don't want all of my hard work to go to waste, so I am still trying for an opportunity to get into AAA as a graphics developer.

     P.S. - Once upon a time I ran Crysis on a Pentium 4 with 1 GB of DDR1 RAM and an Nvidia 6400, on ultra settings, and left it running for an hour. Except for a few graphics glitches and slideshow gaming, everything was fine. After 60 minutes, my PC was still alive.
  3. Thanks MJP and Radikalizm, I get it now. I understood the n = v = r assumption, but I had mixed up the reference images in the Unreal notes and was wondering why my results were so different from the middle row in that image.
  4. I think I have misinterpreted the term "split-sum approximation".

     (Screenshot from the Unreal notes) [attachment=27829:UEscreenshot.jpg]

     My understanding is that -

     Reference (top row) - it is being rendered in real time.
     Split-Sum Approximation (middle row) - it is the method of computing the BRDF texture and pre-convoluting the cubemap, then sampling and combining them in real time.
     Full Approximation (bottom row) - it is the method of computing the whole equation, storing it in a cubemap, and just sampling that cubemap based on roughness and normal in real time.

     Am I right, or is it something different?
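     For reference, my reading of the split sum from the course notes is that it factors the importance-sampled lighting integral into two sums that can be precomputed independently - the first baked into the cubemap mips, the second into the 2D BRDF texture:

        \int_{\Omega} L_i(\mathbf{l})\, f(\mathbf{l},\mathbf{v})\, \cos\theta_{l}\, d\mathbf{l}
        \;\approx\;
        \left( \frac{1}{N}\sum_{k=1}^{N} L_i(\mathbf{l}_k) \right)
        \left( \frac{1}{N}\sum_{k=1}^{N} \frac{f(\mathbf{l}_k,\mathbf{v})\, \cos\theta_{l_k}}{p(\mathbf{l}_k,\mathbf{v})} \right)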
  5. Here's another good old link on the topic - http://http.developer.nvidia.com/GPUGems3/gpugems3_ch20.html
  6. Hello! I have been trying to fix a bug in my image-based lighting implementation (using the GGX BRDF). The problem is that when I use the pre-convoluted probes and the BRDF texture, the output has some extra brightness near the edges / at extreme angles (easily visible at the bottom of the roughest spheres) -

     [attachment=27814:IBLError_Probe.jpg]

     Generated BRDF texture - [attachment=27813:IBLErrorBRDF.png]
     Pre-convoluted probe - https://dl.dropboxusercontent.com/u/71235621/IBLErrorSPEC_CONV.DDS

     But if I run the convolution in real time (at shading time), everything looks fine:

     [attachment=27811:IBL_Realtime.jpg]

     According to Unreal's SIGGRAPH 2013 presentation and notes, the split-sum approximation was used to rectify this problem, but in my case I am getting those errors even with the split-sum approximation. I am using the same equations in both cases (realtime and pre-convoluted), and I double-checked that the split-sum implementation combines into the same equations, but the output is still different.

     Here's the HLSL code I am using for the realtime version -

        float2 hammersley_seq(uint i, uint N)
        {
            float den = reversebits(i) * 2.3283064365386963e-10f;
            return float2(float(i) / float(N), den);
        }

        float3 ImportanceSampleGGX(float2 xi, float roughness, float3 N)
        {
            // GGX importance sampling: pick a half-vector around N.
            float alpha2 = roughness * roughness * roughness * roughness;
            float phi = 2.0f * CH_PI * xi.x;
            float cosTheta = sqrt((1.0f - xi.y) / (1.0f + (alpha2 - 1.0f) * xi.y));
            float sinTheta = sqrt(1.0f - cosTheta * cosTheta);

            float3 h;
            h.x = sinTheta * cos(phi);
            h.y = sinTheta * sin(phi);
            h.z = cosTheta;

            // Build an orthonormal basis around N and transform h into it.
            float3 up = abs(N.z) < 0.999 ? float3(0,0,1) : float3(1,0,0);
            float3 tangentX = normalize(cross(up, N));
            float3 tangentY = cross(N, tangentX);
            return (tangentX * h.x + tangentY * h.y + N * h.z);
        }

        float3 SpecularIBLRealtime(TextureCube envMap, sampler samEnv,
                                   float3 normal, float3 toEye,
                                   float roughness, float3 specColor)
        {
            float3 res = (float3)0.0f;
            normal = normalize(normal);
            //roughness = max(0.02f,roughness);

            static const uint NUM_SAMPLES = 256;
            for (uint i = 0; i < NUM_SAMPLES; ++i)
            {
                float2 xi = hammersley_seq(i, NUM_SAMPLES);
                float3 halfway = ImportanceSampleGGX(xi, roughness, normal);
                float3 lightVec = 2.0f * dot(toEye, halfway) * halfway - toEye;

                float NdotL = saturate(dot(normal, lightVec));
                float NdotV = saturate(dot(normal, toEye));
                float NdotH = saturate(dot(normal, halfway));
                float HdotV = saturate(dot(halfway, toEye));

                if (NdotL > 0)
                {
                    float V = V_SmithJoint(roughness, NdotV, NdotL);
                    float fc = pow(1.0f - HdotV, 5.0f);
                    float3 F = (1.0f - fc) * specColor + fc;

                    // Incident light = SampleColor * NoL
                    // Microfacet specular = D*G*F / (4*NoL*NoV)
                    // pdf = D * NoH / (4 * VoH)
                    float D = DFactor(roughness, NdotH);
                    float pdf = (D * NdotH / (4 * HdotV)) + 0.0001f;

                    float saTexel = 4.0f * CH_PI / (6.0f * CONV_SPEC_TEX_WIDTH * CONV_SPEC_TEX_WIDTH);
                    float saSample = 1.0f / (NUM_SAMPLES * pdf);
                    float mipLevel = roughness == 0.0f ? 0.0f : 0.5f * log2(saSample / saTexel);

                    float3 col = envMap.SampleLevel(samEnv, lightVec, mipLevel + 1).rgb;
                    res += col * F * V * NdotL * HdotV * 4.0f / (NdotH);
                }
            }
            return res / NUM_SAMPLES;
        }

     And here's the code for the pre-convolution -

        float3 PreFilterEnvMap(TextureCube envMap, sampler samEnv, float roughness, float3 R)
        {
            float3 res = (float3)0.0f;
            float totalWeight = 0.0f;
            float3 normal = normalize(R);
            float3 toEye = normal;   // the n = v = r assumption
            //roughness = max(0.02f,roughness);

            static const uint NUM_SAMPLES = 1024;
            for (uint i = 0; i < NUM_SAMPLES; ++i)
            {
                float2 xi = hammersley_seq(i, NUM_SAMPLES);
                float3 halfway = ImportanceSampleGGX(xi, roughness, normal);
                float3 lightVec = 2.0f * dot(toEye, halfway) * halfway - toEye;

                float NdotL = saturate(dot(normal, lightVec));
                //float NdotV = saturate(dot(normal, toEye));
                float NdotH = saturate(dot(normal, halfway));
                float HdotV = saturate(dot(halfway, toEye));

                if (NdotL > 0)
                {
                    float D = DFactor(roughness, NdotH);
                    float pdf = (D * NdotH / (4 * HdotV)) + 0.0001f;

                    float saTexel = 4.0f * CH_PI / (6.0f * CONV_SPEC_TEX_WIDTH * CONV_SPEC_TEX_WIDTH);
                    float saSample = 1.0f / (NUM_SAMPLES * pdf + 0.00001f);
                    float mipLevel = roughness == 0.0f ? 0.0f : 0.5f * log2(saSample / saTexel);

                    res += envMap.SampleLevel(samEnv, lightVec, mipLevel + 1).rgb * NdotL;
                    totalWeight += NdotL;
                }
            }
            return res / max(totalWeight, 0.001f);
        }

     Code for computing the BRDF texture -

        float2 IntegrateEnvBRDF(float roughness, float NdotV)
        {
            float2 res = (float2)0.0f;
            //roughness = max(0.02f,roughness);
            float3 toEye = float3(sqrt(1.0f - NdotV * NdotV), 0.0f, NdotV);
            float3 normal = float3(0.0f, 0.0f, 1.0f);

            static const uint NUM_SAMPLES = 1024;
            for (uint i = 0; i < NUM_SAMPLES; ++i)
            {
                float2 xi = hammersley_seq(i, NUM_SAMPLES);
                float3 halfway = ImportanceSampleGGX(xi, roughness, normal);
                float3 lightVec = 2.0f * dot(toEye, halfway) * halfway - toEye;

                float NdotL = saturate(lightVec.z);
                float NdotH = saturate(halfway.z);
                float HdotV = saturate(dot(halfway, toEye));
                //NdotV = saturate(dot(normal, toEye));

                if (NdotL > 0)
                {
                    float D = DFactor(roughness, NdotH);
                    float pdf = (D * NdotH / (4 * HdotV)) + 0.0001f;
                    float V = V_SmithJoint(roughness, NdotV, NdotL);
                    float Vis = V * NdotL * 4.0f * HdotV / NdotH;
                    float fc = pow(1.0f - HdotV, 5.0f);
                    res.x += (1.0f - fc) * Vis;
                    res.y += fc * Vis;
                }
            }
            return res / (float)NUM_SAMPLES;
        }

        [numthreads(16,16,1)]
        void mainCS(uint3 dispatchThreadID : SV_DispatchThreadID)
        {
            float roughness = (float)(dispatchThreadID.y + 0.5f) / 256.0f;
            float NdotV = (float)(dispatchThreadID.x + 0.5f) / 256.0f;
            float2 res = IntegrateEnvBRDF(roughness, NdotV);
            gOutputTex[int2(dispatchThreadID.x, 255 - dispatchThreadID.y)] = res;
        }

     Here are the BRDF equations I am using -

        float3 SpecularIBL(float3 normal, float3 toEye, float roughness, float3 specColor)
        {
            float NdotV = saturate(dot(normal, toEye));
            float mipLevel = IBL_MIP_FROM_ROUGHNESS(roughness);
            float2 brdfVal = gTexBrdf.SampleLevel(gSamBrdf, float2(NdotV, 1.0f - roughness), 0).rg;
            float3 vecReflect = normalize(reflect(-toEye, normal));
            float3 preColor = gTexSkyBox.SampleLevel(gSamSkyBox, vecReflect, mipLevel).rgb;
            return preColor * (specColor * brdfVal.x + brdfVal.y);
        }

        float V_Smith(float roughness, float NdotV, float NdotL)
        {
            float alpha2 = roughness * roughness * roughness * roughness;
            float Gv = (NdotV + sqrt(alpha2 + (1.0f - alpha2) * (NdotV * NdotV)));
            float Gl = (NdotL + sqrt(alpha2 + (1.0f - alpha2) * (NdotL * NdotL)));
            return 1.0f / (Gv * Gl);
        }

        float D_GGX(float roughness, float NdotH)
        {
            float alpha2 = roughness * roughness;
            float den = NdotH * NdotH;
            den *= (alpha2 - 1.0f);
            den += 1.0f;
            return (alpha2) / (CH_PI * den * den);
        }

     I have been trying to debug this for a couple of days now, but no luck so far. I tried changing the BRDF equations to Schlick, etc. I have also tried different methods of generating the normal vectors for the pre-convolution, and I have manually worked through the equations on paper to confirm that both methods resolve to the same equations in the end.

     Any suggestions or pointers on how to debug or solve this would be of great help.

     Thanks.
  7. @Mahuiztaccihuatl Depending on your needs, here's another UI library - AntTweakBar. It's very easy to integrate and use in your own code base. It's more suitable for runtime parameter tweaking/modification than as a GUI solution for a game. A minimal integration sketch is below.
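     Just to show how little glue it needs, here's a minimal sketch of hooking AntTweakBar into a D3D11 app; the device/size arguments and the tweaked variable are placeholders, not from any particular codebase:

        #include <d3d11.h>
        #include <AntTweakBar.h>

        float gRoughness = 0.5f;  // example runtime parameter to expose

        void InitTweakBar(ID3D11Device* device, int width, int height)
        {
            TwInit(TW_DIRECT3D11, device);  // also supports TW_OPENGL, etc.
            TwWindowSize(width, height);

            TwBar* bar = TwNewBar("Settings");
            TwAddVarRW(bar, "Roughness", TW_TYPE_FLOAT, &gRoughness,
                       "min=0 max=1 step=0.01");
        }

        // Each frame, after rendering the scene and before Present(), call
        // TwDraw(); and forward window messages through TwEventWin() so the
        // bar receives mouse/keyboard input.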
  8. Hi, for debugging your shaders you can use these compiler flags - D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION | D3DCOMPILE_PREFER_FLOW_CONTROL (see the snippet below for how they plug into the compile call). More info here.

     Also, I have noticed many times in the graphics debugger that variables that are no longer needed get assigned NaN.

     You might want to try RenderDoc for debugging. It's very easy to use and similar to PIX and the Visual Studio graphics debugger, and I find it much better than the latter. For example, one benefit is that it will show you the values in registers, which are retained unless overwritten by some other operation. You can also step both backwards and forwards while debugging.
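     For completeness, here's how those flags fit into a debug compile path (the file name and entry point are just placeholders):

        #include <windows.h>
        #include <d3dcompiler.h>
        #pragma comment(lib, "d3dcompiler.lib")

        ID3DBlob* CompileForDebugging()
        {
            UINT flags = D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION |
                         D3DCOMPILE_PREFER_FLOW_CONTROL;

            ID3DBlob* code = nullptr;
            ID3DBlob* errors = nullptr;
            HRESULT hr = D3DCompileFromFile(L"MyShader.hlsl", nullptr,
                                            D3D_COMPILE_STANDARD_FILE_INCLUDE,
                                            "mainPS", "ps_5_0", flags, 0,
                                            &code, &errors);
            if (FAILED(hr) && errors)
                OutputDebugStringA((const char*)errors->GetBufferPointer());
            return code;  // debug info embedded, optimizations disabled
        }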
  9. Hello, I am working on improving tone mapping in my engine by implementing a histogram-based method (my current implementation is based on average scene luminance). But I am a little confused about how to generate the histograms. The following are the main questions bugging me; I have been googling and checking blogs/papers for a couple of hours now, but I can't seem to find anything good on the subject -

     1) The basic idea of a histogram is to calculate the luminance of each pixel and increment its corresponding bucket counter (a sketch of what I mean is below). But how do we construct the buckets? Based on what data?

     2) Do we just choose an arbitrary range of luminance values, say 0-10, and divide the range into equal-sized buckets? If yes, how do we choose the min and max luminance for this range?

     3) Do we keep this range fixed, or should we change it depending on scene luminance or something?

     4) Is it better to use max(r, max(g, b)) or the actual luminance of the pixel for constructing the buckets?

     Thanks!
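     To make question 1 concrete, here's a minimal CPU-side sketch of the bucket increment I have in mind, assuming an arbitrary fixed log2-luminance range (minLogLum/maxLogLum are exactly the bounds question 2 asks how to choose):

        #include <array>
        #include <algorithm>
        #include <cmath>
        #include <cstddef>

        std::array<int, 64> BuildLuminanceHistogram(const float* lum, std::size_t count,
                                                    float minLogLum, float maxLogLum)
        {
            std::array<int, 64> bins{};  // 64 equal-width buckets in log2 space
            for (std::size_t i = 0; i < count; ++i)
            {
                // Map log2(luminance) into [0,1], then into a bucket index.
                float t = (std::log2(std::max(lum[i], 1e-6f)) - minLogLum)
                          / (maxLogLum - minLogLum);
                int b = static_cast<int>(std::clamp(t, 0.0f, 1.0f) * 63.0f);
                ++bins[b];
            }
            return bins;
        }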
  10. I don't think PIX works with the Windows 8 SDK version of Direct3D 11 and 11.1.

     @ericrrichards22 I didn't know about Classic Shell - sounds useful, but I guess I'll just wait for Windows 10. lol @ the Metro start screen in a server edition
  11. @theflamingskunk Yes, RenderDoc is awesome. I have been using it for months now, and I find it way more useful than the Visual Studio graphics debugger. It has saved me nights on many occasions :) That being said, I never realized we could use it for profiling as well (feeling really stupid right now). It works great, especially with GPU events.

     The Visual Studio graphics debugger works only on Windows 8 (Windows 8 sux), so that's not an option for me.

     Remotery sounds cool. Even though RenderDoc works for now, I will check it out for sure, maybe on the weekend or something. It looks like an open-source, commercial-grade profiler and seems very easy to use.

     I tried AMD PerfStudio, but my app kept crashing; maybe I have to go through the documentation and figure out how to set it up properly. I think I will keep it and Intel GPA at the top of my list for later, since RenderDoc serves my purpose for now.

     Thank you all.
  12. Hi, I am looking for a good profiler for Direct3D 11 that works on Win7. Basically, what I am looking for is something that shows me a timeline of rendered frames and a detailed analysis of selected frames - a trace with the time spent in each section of code, etc. It would be great if I could also get timings between GPU events (failing a proper tool, a manual timestamp-query fallback is sketched below).

     I tried Nvidia Nsight Visual Studio Edition a couple of months back, but it kept crashing, so I deleted it; also, I don't think it provides (or maybe I failed to find) detailed frame analysis in terms of a trace and the time spent in each section.
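     In case it helps frame what I'm after: GPU event timings can at least be measured by hand on Win7 with D3D11 timestamp queries. A rough sketch (device/context and the measured section are placeholders; real code would read the results a frame or two later instead of spinning):

        #include <d3d11.h>

        double TimeSectionMs(ID3D11Device* device, ID3D11DeviceContext* context)
        {
            D3D11_QUERY_DESC qd = {};
            ID3D11Query *disjoint, *tsBegin, *tsEnd;
            qd.Query = D3D11_QUERY_TIMESTAMP_DISJOINT;
            device->CreateQuery(&qd, &disjoint);
            qd.Query = D3D11_QUERY_TIMESTAMP;
            device->CreateQuery(&qd, &tsBegin);
            device->CreateQuery(&qd, &tsEnd);

            context->Begin(disjoint);
            context->End(tsBegin);          // timestamp before the section
            // ... issue the draw/dispatch calls to measure here ...
            context->End(tsEnd);            // timestamp after the section
            context->End(disjoint);

            D3D11_QUERY_DATA_TIMESTAMP_DISJOINT dj;
            while (context->GetData(disjoint, &dj, sizeof(dj), 0) != S_OK) {}
            UINT64 t0 = 0, t1 = 0;
            context->GetData(tsBegin, &t0, sizeof(t0), 0);
            context->GetData(tsEnd, &t1, sizeof(t1), 0);

            disjoint->Release(); tsBegin->Release(); tsEnd->Release();
            return dj.Disjoint ? -1.0 : double(t1 - t0) / double(dj.Frequency) * 1000.0;
        }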
  13. (Replying to: "Yea same deal, but sorry, missed that part. I'm trying to implement rotation the way a first-person camera would rotate. The yaw for a first-person camera is unaffected by the previous pitch. Looking up and then rotating wouldn't make your vision go sideways.")

     If all you want is a first-person camera, a simple way to do it is to store yaw and pitch in some variables and update them on mouse movement. Then, during the update/render call, we can easily derive the matrix or quaternion from yaw and pitch (a small sketch of this is below).

     Or you can maintain 3 vectors - vecLook, vecUp, and vecRight - and update them whenever you move the camera. These 3 vectors are basically the rows of your rotation matrix. You also need to normalize them before forming the matrix.

     You might find this link useful: http://r3dux.org/2012/12/a-c-camera-class-for-simple-opengl-fps-controls/ (even though it's for OpenGL, it can easily be converted for DirectX).
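     A minimal sketch of the yaw/pitch approach (conventions and names are illustrative; here +y is up and the angles are in radians):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Yaw rotates around the fixed world up axis and pitch around the
        // camera's right axis, so yaw is never affected by the current pitch.
        Vec3 LookFromYawPitch(float yaw, float pitch)
        {
            return Vec3{ std::cos(pitch) * std::sin(yaw),
                         std::sin(pitch),
                         std::cos(pitch) * std::cos(yaw) };
        }

        // Per frame: accumulate mouse deltas into yaw/pitch, clamp pitch to
        // just under +/-90 degrees to avoid flipping, then build the view
        // matrix from the look vector and the world up vector.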
  14. Many of my test scenes had this non-uniform scaling (it's so easy to create a non-uniformly scaled mesh, even when making a simple cube in 3ds Max, etc.). I wasn't really sure whether I should worry about this scaling stuff or not, but after reading your post, yes, I think the models have to be fixed in these cases. I just realized I haven't really seen stretched textures in any game, because it doesn't really make sense to stretch them like that, and tiling solves the issue automatically.

     Thanks.
  15. I checked everything again, this time using a cube generated through code to rule out the possibility of any funky stuff going on while importing through the Assimp library. And I found the root cause: it's due to different scaling along u and v, either in object space or texture space.

     So this bug happens in 2 cases -

     1) If we have a quad and our transformation matrix has a scaling of x = 2 and y = 3, this stretching will show up.
     2) If you have different distances between the vertices, i.e. (b - a) is not equal to (c - a), stretching will show up again while rendering triangle abc.

     And it sort of makes sense, because when we transform the viewRay into tangent space we don't take the scaling into account: we form the TBN from normalized vectors (that's how all the samples worked too).

     The solution for case 1 is easy - either apply the inverse scaling to the viewRay or multiply by Vector3D(1/scaleX, 1/scaleY, 1/scaleZ) (a small host-side sketch of this is below).

     The problem is the 2nd case - I can't figure out a reliable way to detect this in shaders. If we have 3 vertices -

            (pos)        (texcoord)
     A = (0, 0, 0)      (0, 0)
     B = (2, 0, 0)      (1, 0)
     C = (0, 0, -1)     (0, 1)

     the solution is to multiply the viewRay by a factor of float3(0.5f, 1.0f, 1.0f) in this case. But the problem is: how do we detect that factor for each case in the pixel shader? It feels like a limitation of ray marching? Or maybe I am missing something?

     @JasonZ Have you encountered this case in your implementation?
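     For case 1, a hypothetical host-side helper could compute the reciprocal scale and upload it per object, so the shader can multiply it into the tangent-space viewRay (the cbuffer layout and names are illustrative, not from my engine):

        #include <DirectXMath.h>
        using namespace DirectX;

        struct PerObjectCB
        {
            XMFLOAT3 invScale;  // multiplied into the tangent-space viewRay
            float    pad;       // keep 16-byte alignment for the cbuffer
        };

        PerObjectCB MakeInvScaleCB(float scaleX, float scaleY, float scaleZ)
        {
            PerObjectCB cb;
            cb.invScale = XMFLOAT3(1.0f / scaleX, 1.0f / scaleY, 1.0f / scaleZ);
            cb.pad = 0.0f;
            return cb;
        }

     Case 2 is still the open question above - this only covers the per-object part.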