cifa

Members
  • Content count: 33
  • Community Reputation: 348 Neutral

About cifa
  • Rank: Member
  1. If your goal is just to make it "blurrier" you don't really need multiple passes (well, no more than 2: one horizontal and one vertical), but you can space the samples you fetch further apart. Suppose you want a Gaussian blur with 7 taps. This is not actual code, it's just to give the idea:

         // Weights of a 7-tap Gaussian kernel, equivalent to a standard deviation of 1
         const float weights[7] = {0.006, 0.061, 0.242, 0.382, 0.242, 0.061, 0.006};
         const int offsets[7] = {-3, -2, -1, 0, 1, 2, 3};

         // direction is (0,1) or (1,0) for the vertical or horizontal pass
         float2 width = direction * gaussWidth * pixelSize;
         for (int i = 0; i < 7; i++) {
             OutColor += TextureSample(sceneColor, sceneColorSampler, UV + offsets[i] * width) * weights[i];
         }

     Increasing gaussWidth will increase the amount of blur. This will introduce visible artefacts if you choose a very high width, but it works well for reasonable blurs.
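     For reference, here is what one pass of the same idea could look like as a runnable GLSL fragment shader. This is a minimal sketch, not the poster's actual code; the uniform names (uSceneColor, uDirection, uGaussWidth, uPixelSize) are assumptions. Run it twice, once per direction, to get the full separable blur.

         #version 330 core

         // Minimal sketch of one pass of the separable 7-tap Gaussian blur.
         // All uniform names are illustrative assumptions.
         uniform sampler2D uSceneColor; // scene color texture
         uniform vec2 uDirection;       // (1,0) horizontal pass, (0,1) vertical pass
         uniform float uGaussWidth;     // spacing multiplier between taps
         uniform vec2 uPixelSize;       // 1.0 / texture resolution

         in vec2 vUV;
         out vec4 fragColor;

         void main() {
             const float weights[7] = float[7](0.006, 0.061, 0.242, 0.382, 0.242, 0.061, 0.006);
             vec2 texelStep = uDirection * uGaussWidth * uPixelSize;
             vec3 color = vec3(0.0);
             for (int i = 0; i < 7; i++) {
                 color += texture(uSceneColor, vUV + float(i - 3) * texelStep).rgb * weights[i];
             }
             fragColor = vec4(color, 1.0);
         }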
  2. I think you should at least make your email address public. After all, it might happen that someone stumbles across your page and wants to contact you. To prevent spambots from reading the email address you can use a mix of JavaScript and <noscript>. I use the following code on my homepage and have not received a single spam mail so far (but there are tons of bots on the website).

         <script type="text/javascript">
             var string1 = "mail";
             var string2 = "@";
             var string3 = "domain.com";
             var string4 = string1 + string2 + string3;
             document.write("<a href=" + "mail" + "to:" + string1 + string2 + string3 + ">" + string4 + "</" + "a>");
         </script>
         <noscript>mail [at] domain [dot] com</noscript>

     Thanks for the script, I will use it! However, when I send the page out with applications I will make the address public for sure. It's just that right now the page is not yet being used for its purpose.
  3. Hi all! I am on the verge of sending out my first applications and I have recently put up a portfolio website. My goal is to try and get a job as a (junior) graphics/engine programmer. Any kind of feedback would be greatly appreciated; here's the website: http://fcifariellociardi.com/ (Note: in the CV section all the personal data, apart from my name, has been removed, as I am still unsure about making it public.) Thank you!
  4. Thank you very much! I don't know why I didn't think of Gram-Schmidt.
  5. Hi there, I was wondering if it is somehow possible to find two orthogonal vectors on a mesh, starting from screen space. I know that I can bring two points (e.g. currPixel and currPixel + (1,0)) to object space if I also have depth info. In this way I can find a vector that lies on the mesh in object space. Now, in 3D there are infinitely many vectors perpendicular to a given one, so if I just take one of them I have no guarantee it lies on the surface of the mesh. Taking perpendicular vectors in screen space is of no help, as they may well map to non-orthogonal vectors in object space. Is it possible, starting from the data I have (perspective matrix, view matrix, model matrix, depth info and screen-space info), to obtain such a vector, or is it an impossible task? Thank you!
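     The Gram-Schmidt answer thanked in post 4 above is one way to resolve this. Below is a minimal sketch of how it might apply, assuming hypothetical names (uDepth, uInvProjection, uPixelSize) and a standard GL depth range; this is an illustration, not the thread's actual solution.

         // Hypothetical sketch: two orthogonal on-surface vectors from depth.
         uniform sampler2D uDepth;     // depth buffer (assumption)
         uniform mat4 uInvProjection;  // inverse projection matrix (assumption)
         uniform vec2 uPixelSize;      // 1.0 / resolution (assumption)

         // Unproject a pixel's depth back to view space
         vec3 viewPosFromDepth(vec2 uv) {
             float z = texture(uDepth, uv).r * 2.0 - 1.0; // [0,1] depth to NDC
             vec4 ndc = vec4(uv * 2.0 - 1.0, z, 1.0);
             vec4 view = uInvProjection * ndc;
             return view.xyz / view.w;
         }

         // Two vectors that lie (approximately) on the surface, made orthogonal
         void surfaceBasis(vec2 uv, out vec3 t1, out vec3 t2) {
             vec3 p  = viewPosFromDepth(uv);
             vec3 dx = viewPosFromDepth(uv + vec2(uPixelSize.x, 0.0)) - p;
             vec3 dy = viewPosFromDepth(uv + vec2(0.0, uPixelSize.y)) - p;
             // Gram-Schmidt: remove from dy its component along dx
             t1 = normalize(dx);
             t2 = normalize(dy - dot(dy, t1) * t1);
         }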
  6. ************** UPDATE **********************
     I had a stupid issue due to the resolution of the texture; now the code below works as it should.
     *********************************************

     Hi there! I'm trying to implement the IBL technique described here: http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf

     I'm still trying to get the envBRDF LUT (roughness / NdotV) to render properly as shown in the paper. What I have now is the usual Hammersley functions:

         // http://holger.dammertz.org/stuff/notes_HammersleyOnHemisphere.html
         float radicalInverse_VdC(uint bits) {
             bits = (bits << 16u) | (bits >> 16u);
             bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
             bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
             bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
             bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
             return float(bits) * 2.3283064365386963e-10; // / 0x100000000
         }

         vec2 Hammersley(uint i, uint N) {
             return vec2(float(i) / float(N), radicalInverse_VdC(i));
         }

     ImportanceSampleGGX, practically as reported in the paper:

         vec3 ImportanceSampleGGX(vec2 Xi, float m, vec3 N) {
             float phi = Xi.x * 2.0 * PI;
             float cosTheta = sqrt((1.0 - Xi.y) / (1.0 + (m * m - 1.0) * Xi.y));
             float sinTheta = sqrt(1.0 - cosTheta * cosTheta);
             vec3 h = vec3(sinTheta * cos(phi), sinTheta * sin(phi), cosTheta);
             vec3 up = abs(N.z) < 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
             vec3 tangentX = normalize(cross(up, N));
             vec3 tangentY = normalize(cross(N, tangentX));
             // Project from tangent space onto N's frame
             return tangentX * h.x + tangentY * h.y + N * h.z;
         }

     And IntegrateBRDF:

         vec2 IntegrateBRDF(float Roughness, float NoV) {
             vec3 V;
             V.x = sqrt(1.0 - NoV * NoV); // sin
             V.y = 0.0;
             V.z = NoV;                   // cos
             float A = 0.0;
             float B = 0.0;
             for (uint i = 0u; i < NUMBER_OF_SAMPLES; i++) {
                 vec2 Xi = Hammersley(i, NUMBER_OF_SAMPLES);
                 vec3 H = ImportanceSampleGGX(Xi, Roughness, vec3(0.0, 0.0, 1.0));
                 vec3 L = 2.0 * dot(V, H) * H - V;
                 float NoL = max(L.z, 0.0);
                 float NoH = max(H.z, 0.0);
                 float VoH = max(dot(V, H), 0.0);
                 if (NoL > 0.0) {
                     float G = G_Smith(Roughness, NoV, NoL);
                     float G_Vis = G * VoH / (NoH * NoV);
                     float Fc = pow(1.0 - VoH, 5.0);
                     A += (1.0 - Fc) * G_Vis;
                     B += Fc * G_Vis;
                 }
             }
             return vec2(A / float(NUMBER_OF_SAMPLES), B / float(NUMBER_OF_SAMPLES));
         }

     As you can see, they're almost a copy-paste from the reference, but what I got was completely different. Roughness and NoV come from the uv.x and uv.y of a full-screen quad, calculated in the VS as:

         vTextureCoordinate = (aVertexPosition.xy + vec2(1, 1)) / 2.0;

     Am I missing something very stupid?
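     For context, this is roughly how the resulting LUT is meant to be consumed at shading time under the split-sum approximation from the linked Karis notes. A minimal sketch; the names here (uEnvBRDF, uPrefilteredEnv, MAX_MIP) are illustrative assumptions, not from the post above.

         uniform sampler2D uEnvBRDF;          // the 2D LUT integrated above (assumption)
         uniform samplerCube uPrefilteredEnv; // pre-filtered environment map (assumption)
         const float MAX_MIP = 4.0;           // assumed number of pre-filtered mip levels

         vec3 approximateSpecularIBL(vec3 F0, float roughness, vec3 N, vec3 V) {
             float NoV = max(dot(N, V), 0.0);
             vec3 R = reflect(-V, N);
             // First sum: pre-filtered environment, mip chosen by roughness
             vec3 prefiltered = textureLod(uPrefilteredEnv, R, roughness * MAX_MIP).rgb;
             // Second sum: scale and bias applied to F0, read from the LUT
             vec2 envBRDF = texture(uEnvBRDF, vec2(NoV, roughness)).rg;
             return prefiltered * (F0 * envBRDF.x + envBRDF.y);
         }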
  7. Hi everyone! I've been trying to add HDR support to my system, but I have a couple of questions. I believe what I'm doing before tone mapping is fine:

     - Set up an RGBA16F render target
     - Render to that RT
     - Call the post-processing tone mapping on that RT and display the result on screen

     Now, to test this I've cranked my light intensity up a bit. If I apply the following tone mapping function (either one, the result is similar):

         col *= exposure;
         col = max(vec3(0.0), col - 0.004);
         col = (col * (6.2 * col + 0.5)) / (col * (6.2 * col + 1.7) + 0.06);
         return vec4(col, 1.0);

     or

         col *= exposure;
         col = col / (1.0 + col);
         return vec4(col, 1.0);

     then if the exposure is more than 0.5 for the test image the result is pretty flat, whereas for an exposure value of about 0.4 the result is OK, although it's very darkish (unsurprisingly, as the exposure is pretty low).

     The same issue (flatness of the tone-mapped image) is there for a "normal" light intensity, which looks fine even without HDR. To get a decent result I have to lower the exposure even in that case, resulting, again, in a darkish image.

     My questions are then two:

     - Am I missing something important here? I feel that the result I get is somewhat wrong.
     - Is there a way to "automatically" select what's considered a good exposure value for a given scene?

     Thank you very much
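     On the second question, a common approach (Reinhard-style photographic tone reproduction) is to derive the exposure from the scene's average log luminance. A minimal sketch, assuming the log-luminance is rendered to its own float target and mipmapped down to 1x1 to obtain the average; all names here are illustrative assumptions, not from the post above.

         // Step 1: render log-luminance into a small float target, then
         // generate mipmaps so the lowest mip holds the scene average.
         float logLuminance(vec3 col) {
             float lum = dot(col, vec3(0.2126, 0.7152, 0.0722)); // Rec. 709 luminance
             return log(0.0001 + lum); // small delta avoids log(0)
         }

         // Step 2: in the tone-mapping pass, turn the average into an exposure.
         // A key of ~0.18 is the usual "middle grey" choice.
         float autoExposure(sampler2D avgLogLumTex, float key) {
             float avgLogLum = textureLod(avgLogLumTex, vec2(0.5), 100.0).r; // 1x1 mip
             return key / exp(avgLogLum);
         }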
  8. ********** EDIT: I've found a bug on the CPU side that wasn't my fault. I'm investigating and will then update the topic; for now the following description is not valid anymore. If mods want to close the topic, that's fine with me. **********

     I'm implementing a VSM algorithm. My code is pretty much like everything found online:

         // coords is after the w divide and the 0.5 + 0.5 * corrections
         float VSM(vec3 coords, float dist) {
             vec2 moments = texture(shadowTexture, coords.xy).rg;
             if (dist <= moments.x)
                 return 1.0;
             float variance = moments.y - (moments.x * moments.x);
             variance = max(variance, 0.001);
             float d = dist - moments.x;
             float pMax = variance / (variance + d * d);
             return pMax;
         }

     If I use a small value to clamp the variance, as done in almost every source I found, the result is terrible acne. If I start increasing the variance clamp value the acne diminishes, although it's still evident, and the proper shadow is definitely lightened (too much). Eventually, if I keep increasing that value, the proper shadow disappears. Moreover, the clamp value needed at that point is 6.0! Way higher than anything I saw around.

     Similarly, I've tried something like:

         float d = (dist + bias) - moments.x;

     but I found no value that solves the problem, although I haven't tried that many. The depth map is, I think, fine, because with PCF I get a good result. What can the issue be here? Thanks!
  9. OK, I'm sorry for the terrible title, but I'm open to suggestions for a better one :)

     I am trying to write a small framework for a project I have to do soon. I have never done anything more than "one cpp file" OpenGL stuff up to now, so I was trying to come up with a decent class structure for what I have to do. What is relevant to the following question is pretty basic:

     - A Mesh class that has:
       - A ShaderProgram class instance (with tables linking attribute/uniform IDs to more user-friendly names) that is "used" when rendering the mesh
       - Various attributes (VBO etc.)
       - A render function

     I had planned to add a textures vector to the Mesh class to relate all the textures that may be useful (e.g. normal map, occlusion, diffuse etc.) to the mesh they belong to (I have no re-use of textures across different models). The big problem is that for the application I have to build in the end, there's no certainty about how many textures a Mesh will use: I could have just diffuse and normal maps, or I could have a dozen texture maps for various parameters. That would be fine if every ShaderProgram used all those textures, but that's not the case; I may need the same Mesh to be drawn with a fairly complex fragment shader, or by just drawing its alpha matte.

     How can I handle such a situation? Is there any way I can achieve such a degree of generality? Also, I'm a tad confused, so I'm afraid this question comes across like a mess; I'm sorry about that and I'm extremely happy to clarify anything unclear.

     Thank you very much
  10. Oh, thank you very much! I indeed rewrote everything (that snippet is an extract) so as to avoid division as much as possible, and it now works like a charm. As for the NdotL issue, thank you for pointing it out; I knew that in the back of my head but I probably got confused by an old piece of code of mine!
  11. Hi all, after reading about BRDFs I tried to implement one in a shader, but things apparently go wrong and I don't know why. It's my first attempt to go beyond basic Phong shading, so sorry if the following is a silly question.

      With just the diffuse component the model looks fine. If I activate the specular component as well, the result is strange: the right side of the face is completely blackened, and overall it looks off. This was with the Beckmann distribution; if I switch to Blinn, same issue.

      My frag shader (relevant part) is:

          float beckmannD(float roughness, float NdotH) {
              float roughness2 = roughness * roughness;
              float roughness4 = roughness2 * roughness2;
              float NdotH2 = NdotH * NdotH;
              float term1 = 1.0 / (roughness4 * NdotH2 * NdotH2);
              float expTerm = exp((NdotH2 - 1.0) / (roughness4 * NdotH2));
              return term1 * expTerm;
          }

          float blinnD(float roughness, float NdotH) {
              return (roughness + 2.0) / 2.0 * pow(NdotH, roughness);
          }

          float kelemenG(float NdotL, float NdotV, float LdotV) {
              float numerator = 2.0 * NdotL * NdotV;
              float denominator = 1.0 + LdotV;
              return numerator / denominator;
          }

          float implicitG(float NdotL, float NdotV) {
              return NdotL * NdotV;
          }

          ....

          void main(void) {
              vec3 N = texture2D(uNormalSampler, vTextureCoord).xyz;
              N = N * 2.0 - 1.0;
              N = normalize(N * uNMatrix);
              vec3 L = normalize(uLightPosition - vPosition.xyz);
              vec3 V = normalize(-vPosition.xyz);
              ...
              vec3 H = normalize(V + L);
              float NdotH = max(dot(N, H), 0.0);
              float NdotL = max(dot(N, L), 0.0);
              float NdotV = max(dot(N, V), 0.0);
              D = beckmannD(roughness, NdotH); // or D = blinnD(roughness, NdotH);
              float LdotV = max(dot(L, V), 0.0);
              G = kelemenG(NdotL, NdotV, LdotV); // or G = implicitG(NdotL, NdotV);
              // Note: I'm not considering Fresnel yet
              specularWeighting = D * G / (3.14 * NdotL * NdotV);
              vec3 diffuseColor = vec3(1.0, 0.6, 0.6);
              vec3 diffuseReflection = uLightColor * diffuseColor * diffuseWeighting;
              vec3 specularReflection = specularColor * specularWeighting;
              fragmentColor = vec4(uAmbientColor + diffuseReflection + specularReflection, 1.0);
          }

      What can it be? Thank you very much, and sorry for the probably silly issue.
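      As post 10 above mentions, the eventual fix was rewriting the math to avoid the division, which goes to zero at grazing angles and blackens those pixels. A minimal sketch of that kind of rewrite, assuming the standard Cook-Torrance denominator (4 * NdotL * NdotV, where the snippet above uses 3.14); this is an illustration, not the author's final code.

          // With the implicit geometry term G = NdotL * NdotV, the denominator
          // cancels algebraically, so no division is needed at all:
          //   spec = D * G / (4.0 * NdotL * NdotV) = D / 4.0
          float specularImplicit(float D) {
              return D / 4.0;
          }

          // For other G terms, clamp the denominator away from zero instead of
          // dividing by a value that reaches 0 at grazing angles:
          float specularClamped(float D, float G, float NdotL, float NdotV) {
              return D * G / max(4.0 * NdotL * NdotV, 1e-4);
          }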
  12. An enlightening reply! Thank you so much! I'll try some of the things you cited. As far as learning is concerned, I'm always extremely eager to learn! Again, I sincerely thank you for this answer!
  13. Thank you very much! Do you have any idea of the scope of the portfolio work they want?

      Oh, OK! So basically keep sending applications while building the portfolio! Thanks! Do you have any suggestion about what would be a good project to work on, on my own, to increase my chances of being considered as a graphics programmer?

      (I know an internship is a job; by "job position" I meant everything from intern to junior programmer, whereas with the first option I don't think I can do much more than an internship.)
  14. Fortunately I had a scholarship, and the rest I was able to cover with my own finances, without loans. Although I can't afford to stand still forever, I definitely can for a couple of months, as I can go back home and work from there. In that situation I would definitely try to work hard to get the job I want; is this delusional? The whole internship system is not usual in Italy, so I never had the chance.
  15. Hi all! I start by saying that this post is maybe premature, but these questions have been puzzling me a lot lately, so I prefer to ask now. My life goal is to become a graphics programmer in the games industry, but before jumping to the question let me post a short bio to frame my situation.

      I completed my undergraduate studies in computer science in Italy, obtaining the highest mark. However, I don't feel like I programmed a lot during them, and I barely touched graphics. For my thesis I finally did something, implementing a very simple mesh viewer with Phong shading in OpenGL and WebGL, plus an .obj to JSON converter. So, actually, not much. I'm now finishing an MSc at UCL in the UK. While this master's is computer graphics and imaging related (Image Processing, Virtual Environments, Computational Photography, Geometry Processing, and Computer Graphics are the most relevant modules), I feel like the rendering part wasn't that substantial, just pretty basic stuff, at least in practice. On the image processing side I've done way more, but I'm not sure if/how this will help me with my life goal (that's actually question 1: will it?), and everything was implemented in MATLAB. Every coursework so far has received very good marks, but due to the completely different system here I'm not sure I will do that well in exams.

      Unfortunately, during my undergraduate years I didn't have clear enough ideas to start a side project of my own, and now I definitely don't have the time to start one before uni finishes. Given this, what I have in my hands is just some coursework and what will be my final thesis (a screen-space subsurface scattering implementation for skin rendering). On the theoretical side of rendering I feel that I have some knowledge at an intermediate-advanced level, although not that complete. I tend to have impostor syndrome, but in this case I really feel inadequate to start looking for a job. Now, I don't really mind starting my working experience with an internship, but even so I don't think I have enough to show, or that I'm good enough. So this leads to my central question:

      Wanting to be a graphics programmer, what is in your opinion the best path to follow in my situation?

      - Start the search for an internship straight after graduation, using just the coursework and the final thesis (which hopefully will be good in terms of results, and not that basic) as material to show.
      - Stop for a while and produce something by myself before trying to apply for job positions. I think this is the only way possible for me, but in this case, what do you think is a good project to start with that would be well appreciated by a possible employer?

      Thank you very much, and sorry for the long post :)