
Daniel Bowler

Members · Rank: Member · Content count: 17 · Community Reputation: 166 Neutral
  1. Hi. Firstly, could I ask the OP what paper he is reading? I didn't actually find anything on the tile-based deferred system whilst hunting around the internet about a year back, and it's something I wish I knew more about (beyond the BF3 PowerPoint presentation).

     Secondly, I recently did a tile-based forward renderer for my final year project at uni and found the book GPU Pro 4 extremely handy (two chapters on the system, one of them from the guys at AMD). There's also some very useful source code on the AMD website (they call it Forward+).

     My system (and the AMD system) did these light (bounding sphere) to tile intersection tests in view space; a sketch of that test is below.
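     For reference, the per-tile test boils down to something like this. This is a sketch only - it assumes the four side planes of the tile's sub-frustum have already been built from the tile's corner rays and normalised (normals pointing inwards), and that the light position has already been transformed into view space:

[source lang="cpp"]
//Sketch: sphere vs. tile-frustum test in view space. frustumPlanes are the
//four side planes of the tile's sub-frustum; minZ/maxZ are the tile's
//view-space depth bounds.
bool SphereIntersectsTile(float3 lightPosView, float lightRadius,
                          float4 frustumPlanes[4], float minZ, float maxZ)
{
    bool inside = true;

    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        //Signed distance from the sphere centre to the plane.
        float d = dot(frustumPlanes[i].xyz, lightPosView) + frustumPlanes[i].w;
        inside = inside && (d >= -lightRadius);
    }

    //Reject lights entirely in front of or behind the tile's depth bounds.
    inside = inside && (lightPosView.z + lightRadius >= minZ)
                    && (lightPosView.z - lightRadius <= maxZ);
    return inside;
}
[/source]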
  2. Thank you MJP (again :P) - that cleared things up perfectly and plugged a knowledge gap of mine! :)
  3. Hi all. Buffer creation:

[source lang="cpp"]
//Create LLSEB
D3D11_BUFFER_DESC llsebBD;
llsebBD.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
llsebBD.ByteWidth = sizeof(UINT) * totalTiles * 2; //2 values per tile (start and end)
llsebBD.CPUAccessFlags = 0;
llsebBD.MiscFlags = 0;
llsebBD.StructureByteStride = 0;
llsebBD.Usage = D3D11_USAGE_DEFAULT;

//Create buffer - no initial data again.
HR(d3dDevice->CreateBuffer(&llsebBD, 0, &lightListStartEndBuffer));
[/source]

     UAV creation:

[source lang="cpp"]
D3D11_UNORDERED_ACCESS_VIEW_DESC llsebUAVDesc;
llsebUAVDesc.Format = DXGI_FORMAT_R32G32_UINT;
llsebUAVDesc.Buffer.FirstElement = 0;
llsebUAVDesc.Buffer.Flags = 0;
llsebUAVDesc.Buffer.NumElements = totalTiles;
llsebUAVDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;

HR(d3dDevice->CreateUnorderedAccessView(lightListStartEndBuffer, &llsebUAVDesc, &lightListStartEndBufferUAV));
[/source]

     SRV creation:

[source lang="cpp"]
D3D11_SHADER_RESOURCE_VIEW_DESC llsebSRVDesc;
//2 32-bit UINTs per entry (xy)
llsebSRVDesc.Format = DXGI_FORMAT_R32G32_UINT;
llsebSRVDesc.Buffer.FirstElement = 0;
llsebSRVDesc.Buffer.ElementOffset = 0;
llsebSRVDesc.Buffer.NumElements = totalTiles;
llsebSRVDesc.Buffer.ElementWidth = sizeof(UINT) * 2;
llsebSRVDesc.ViewDimension = D3D11_SRV_DIMENSION_BUFFER;

HR(d3dDevice->CreateShaderResourceView(lightListStartEndBuffer, &llsebSRVDesc, &lightListStartEndBufferSRV));
[/source]

     The idea here is to create a Light List Start End Buffer (LLSEB) for a tile-based forward renderer. In my solution, I have a buffer of 32-bit uints (2 per tile). Creating the buffer works fine, as does the UAV (writes in the compute shader seem to work perfectly fine). It is, however, the SRV I have problems with (reading the buffer in a pixel shader in order to visualise how many lights affect a tile) - specifically, this line:

[source lang="cpp"]
llsebSRVDesc.Buffer.ElementWidth = sizeof(UINT) * 2;
[/source]

     With this configuration, visualising the buffer produces garbage (note that I have faked the output from the compute shader - I'm also having some issues with lights not passing the frustum/sphere intersection test, but I'll be working on that later). This clearly isn't what we want. :/ However, if we change the line in question to:

[source lang="cpp"]
llsebSRVDesc.Buffer.ElementWidth = totalTiles;
[/source]

     We get output indicating that we are reading the buffer correctly. This seems right, but I am seriously at a loss as to why it works. According to MSDN: "ElementWidth - Type: UINT - The width of each element (in bytes). This can be determined from the format stored in the shader-resource-view description." That seems to indicate I was right in my first example (or at least sort of correct). Or am I misunderstanding this completely?

     Many thanks,

     Dan
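     EDIT: If I'm reading d3d11.h correctly, D3D11_BUFFER_SRV declares FirstElement/ElementOffset and NumElements/ElementWidth as unions, so writing ElementWidth after NumElements simply overwrites the element count - which would explain why sizeof(UINT) * 2 = 8 "elements" produced garbage and totalTiles worked. Under that assumption, a minimal typed-buffer SRV setup would look like this:

[source lang="cpp"]
//Sketch: minimal SRV description for a typed buffer of one uint2 per tile.
//FirstElement/ElementOffset and NumElements/ElementWidth are unions in
//D3D11_BUFFER_SRV, so only one member of each pair should be set.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
ZeroMemory(&srvDesc, sizeof(srvDesc));
srvDesc.Format              = DXGI_FORMAT_R32G32_UINT;    //One uint2 per tile.
srvDesc.ViewDimension       = D3D11_SRV_DIMENSION_BUFFER;
srvDesc.Buffer.FirstElement = 0;
srvDesc.Buffer.NumElements  = totalTiles;                 //Element count, not bytes.

HR(d3dDevice->CreateShaderResourceView(lightListStartEndBuffer, &srvDesc, &lightListStartEndBufferSRV));
[/source]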
  4. Thanks to the suggestions above and some source code from AMD, I have redesigned my code a little this morning:

[source lang="cpp"]
//
//Shared memory
//
//Groupshared memory which holds the maximum depth value of all pixels within a
//tile - which, conveniently, is also one of our thread groups. We use the
//atomic InterlockedMax function to write to this in a thread-safe manner. Note
//this is a uint - the atomic functions only work with uints or ints and NOT
//floats. Happily, asuint() and asfloat() will convert between the data types
//with no extra work (they operate on the bit pattern).
//
//Groupshared memory is memory that is read/write between threads within a
//single thread group. Relatively fast - especially compared to resource
//reads/writes.
//
//Stored in the [0,1] depth range, as written by the hardware to the depth
//buffer; my ConvertSampledDepthFromDepthBufferToViewSpace() function expects
//it in that range.
//
//Note: groupshared variables cannot be given an initialiser, so the first
//thread in each group zeroes this at the top of CSMain() before the barrier.
groupshared uint MaxGroupSharedDepthNDC;

//
//Compute shader helper functions
//

//Converts a sampled depth value back to view-space depth. Please note: the
//depth value passed in to this function (z) should be taken directly from the
//depth buffer.
//
//With view-space depth acquired, it is possible (given screen-space xy coords)
//to reconstruct the full 3D view-space position of any pixel in our scene.
//
//Credit to the Forward+ source code for this solution. I did actually test it
//out in my little maths demo application and it looks right.
float ConvertSampledDepthFromDepthBufferToViewSpace(float z)
{
    //From [0,1] to [-1,1]
    float depth = z * 2.0f - 1.0f;

    //Calculate and return the view-space depth.
    float viewDepth = 1.0f / (depth*invProjection._34 + invProjection._44);
    return viewDepth;
}

//Function that samples the depth buffer (or MSAA depth buffer) and uses the
//InterlockedMax atomic function to store it. (Note: stored as a uint - the
//HLSL functions asuint() and asfloat() are used to store the data here and to
//convert the result back from uint to float. These functions work on the bit
//pattern.)
void ThreadCalculateMaxDepth(int3 dispatchID, uniform bool msaa, uint msaaSampleIndex)
{
    //Location used to sample the depth buffer.
    int3 sampleVal = int3(dispatchID.x, dispatchID.y, 0);

    //We sample our depth buffer and reinterpret the result as a uint. This can
    //then be stored (via the atomic InterlockedMax()) in group shared memory.
    uint samp = 0;

    [flatten]
    if (msaa)
        //MSAA enabled. Sample the buffer using msaaSampleIndex as the 2nd
        //param in the Load function.
        samp = asuint(ZPrePassDepthBufferMSAA.Load(sampleVal.xy, msaaSampleIndex).r);
    else
        //MSAA disabled - sample the standard buffer.
        samp = asuint(ZPrePassDepthBuffer.Load(sampleVal).r);

    //Write to shared memory.
    InterlockedMax(MaxGroupSharedDepthNDC, samp);
}

//
//Compute shader entry function
//

//Number of threads per thread group - one thread per pixel. This is a 2D
//thread group. Shared memory is used (shared between threads in the same
//thread group) to cache the depth value from the depth buffer. For this pass,
//we have one thread group per tile and a thread per pixel in the tile.
[numthreads (TILE_PIXEL_RESOLUTION, TILE_PIXEL_RESOLUTION, 1)]
void CSMain(
    in int3 groupID            : SV_GroupID,           //Uniquely identifies each thread group
    in int3 groupThreadID      : SV_GroupThreadID,     //Uniquely identifies a thread inside a thread group
    in int3 dispatchThreadID   : SV_DispatchThreadID,  //Uniquely identifies a thread relative to ALL threads generated in a Dispatch() call
    uniform bool useMSAA)                              //MSAA enabled? Sample the MSAA depth buffer
{
    //Stage 1 - we sample the depth buffer and work out what the maximum Z
    //value is for every tile. This is done by looping through all the depth
    //values of the pixels that share the same tile and comparing them.
    //
    //We then (optionally) write this data to the MaxZTileBuffer RWBuffer. The
    //shared data is handy for stage 2, where we can cull more lights based on
    //this maximum depth value. (Assume min depth is 0.0f so that we can
    //accommodate transparent objects - there is a much more efficient solution
    //available which I will explore; if this comment is still here, I left it
    //out for whatever reason - check my report, where I will have made some
    //comment regarding said feature.)

    //groupThreadIndex - 1D index uniquely identifying threads within a thread
    //group - can be used to determine if a thread in the thread group is the
    //first (0,0).
    int groupThreadIndex = groupThreadID.x + (groupThreadID.y * TILE_PIXEL_RESOLUTION);

    //Groupshared memory cannot be initialised at declaration, so let the first
    //thread zero it, and sync before any thread calls InterlockedMax().
    if (groupThreadIndex == 0)
        MaxGroupSharedDepthNDC = 0;
    GroupMemoryBarrierWithGroupSync();

    //Sample the depth buffer and write to shared memory.
    [flatten]
    if (useMSAA)
        [unroll]
        for (uint i = 0; i < 4; i++)
            //Sample the MSAA buffer. Note: we could actually ask the texture
            //what MSAA count said resource was created with, but since I only
            //support 4xMSAA, I'm not too fussed...
            ThreadCalculateMaxDepth(dispatchThreadID, true, i);
    else
        //Sample the standard buffer.
        ThreadCalculateMaxDepth(dispatchThreadID, false, 0);

    //Wait for the threads to complete their work (i.e. work out their max
    //depth value and perform the write). This is important to do before we
    //write to the (optional) max depth buffer or use it in our calculations.
    //
    //If we don't have this, we end up with some very odd flashing, as we write
    //to the buffer before all the threads have completed their InterlockedMax
    //call.
    GroupMemoryBarrierWithGroupSync();

    //Write to the Max Z Tile Buffer for visualisation, if the option is
    //enabled.
    //
    //Note: we can turn this feature off (buffer writes are very, very, very
    //expensive, and since this is not actually required by the algorithm -
    //though it is needed if we want to visualise the tiles' max depth values -
    //a #define has been used to enable/disable the buffer write).
    //
    //NOTE: the first thread only, to reduce buffer writes (well, the first
    //thread warp in reality - 32 threads will actually run this code block (32
    //threads in a warp on NVIDIA cards, 64 on ATI); every other thread in the
    //thread group - i.e. any thread in a different warp - will not).
    //
    //This advice came from the author of Practical Rendering and Computation
    //with Direct3D 11 on GameDev.net, so I will not be questioning his wisdom.
    //Much thanks for your time MJP!
#ifdef SHOULD_WRITE_TO_MAX_Z_TILE_BUFFER
    if (groupThreadIndex == 0)
    {
        //Work out the index in to our buffer. One entry in the buffer per
        //tile:
        //
        //------------------->
        //------------------->
        //------------------->
        int tilesX = ceil(rtWidth / (float)TILE_PIXEL_RESOLUTION);
        int maxZTileIndex = groupID.x + (groupID.y * tilesX);

        //Reinterpret as a floating point value.
        MaxZTileBuffer[maxZTileIndex] = asfloat(MaxGroupSharedDepthNDC);
    }
#endif

    //Stage 2 - in this stage, we will build our LLIB (Light List Index Buffer
    //- essentially a list which indexes in to the Light List Buffer and tells
    //us which lights affect a given tile) and our LLSEB (Light List Start End
    //Buffer - a list which indexes in to the LLIB).
}//End CSMain()
[/source]

     (Sorry for the wall of comments - being an assignment, you are expected to make decent comments.) Just wondering if anyone has any extra feedback on the solution (apart from the God-awful spelling in some of my comments - I've changed a few of the inaccurate ones just now)? And if it is an acceptable solution, hopefully the above code can become somewhat useful for others.

     Cheers.
  5. Hi,

     Thanks for your reply. In response to 1): the project is only meant to take 400 hours total, which includes a 25k-word report, all the preliminary education (D3D11) and the application framework. I was lucky enough to put in some time over the summer to learn D3D11 and build a decent enough framework (for this purpose, anyway) - so I don't have any time issues yet, but I have still had to make some decisions to simplify the program, this being one.

     I will be using the suggestion from GPU Pro 4 - just assuming the minimum depth is 0.0f, so the bounding volume per tile will extend from the origin up until the maximum depth of the opaque objects (thus supporting transparency - although inefficiently, as you say).

     However, I'm yet to decide which marking scheme to go with - the other places more emphasis on the product (i.e. a more complete product, more design documentation, etc., but a much smaller report) and requires that we follow an accepted development path (e.g. an iterative method). So there isn't anything stopping me changing mark schemes for another few weeks. Therefore, if I develop the simple solution now (which is more than enough for the original mark scheme - so long as it can demonstrate support for transparency and MSAA, we're golden; plus the method you suggested (individual tile lists for opaque and transparent objects) can go in to my report as part of the improvements section), I can then extend the functionality at a later date (I'll claim this as an iterative method of some sort).

     I'm going to give InterlockedMax a proper look tomorrow. I have some source code from an AMD demo on Forward+ (I think) which makes use of InterlockedMax and InterlockedMin, and demonstrates the ways to convert from float to uint and back again. If this solution is good enough for AMD, it should be fine for my needs. :)

     Thanks for clearing up the branching - that has helped a lot! :)

     Also, I'm a rather big fan of (I presume) your book - Practical Rendering and Computation. It's actually one of the major influences in my decision to do a rendering system as my final year project, after implementing the deferred renderer in the book! A lot of good stuff in that book, I must say! :)
  6. Thank you very much for your reply. I'll be looking right in to the Interlocked functions - I have seen source code for a tile-based forward renderer that made use of them and had a solution which seemed extremely clean (as one would expect from the developers over at AMD) and didn't have my issues above.

     Also, thanks for the link to the paper/source code - I haven't had a look yet, but I would be surprised if it didn't feature somewhere in the report, at the very least!

     Once again, thanks for the help! :)
  7. Hi, I originally posted this in the graphics programming section of the forum, but it probably also applies to this portion of the forum as well.
  8. Hi,

     I'm currently in the process of developing a tile-based forward renderer (like Forward+) for a university project, and this week I have begun work on the light culling stage utilising the compute shader. I am a little inexperienced with regards to the compute shader and parallel programming, but I do know that you should avoid dynamic branching as much as possible.

     The following code is what I have so far. In the application code (not shown - see EDIT 2 at the bottom for a sketch of the dispatch), I launch enough thread groups to cover the entire screen, with each thread group containing (n, n, 1) threads (n is 8 in my example, although you can change this). Thus, a thread per pixel.

     The idea is for each thread in a thread group to sample the depth buffer (MSAA depth buffer supported) and store the result in shared memory, then loop through all these depth values and work out which is the largest. (I am supporting transparency with the trivial solution of having the minimum depth as 0. This was suggested in GPU Pro 4 as a potential solution for the time being. I have an idea which uses 2 depth buffers to better support transparency, but for now we will stick with what I've got.)

     However, in order to do this, I have had to add an if statement. This if statement checks the group thread ID to ensure that only the first thread in every thread group executes the code - or at least, that was the idea. (EDIT: bold and enlarge didn't work - you are hunting for this line: "if (groupThreadID.x == 0 && groupThreadID.y == 0 && groupThreadID.z == 0)".)

[source lang="cpp"]
//Depth cache - groupshared array, one depth value per pixel in the tile.
//(Declared alongside the shader; included here so the snippet is complete.)
groupshared float depthCache[TILE_PIXEL_RESOLUTION][TILE_PIXEL_RESOLUTION];

//Number of threads per thread group - one thread per pixel. This is a 2D
//thread group. Shared memory is used (shared between threads in the same
//thread group) to cache the depth value from the depth buffer. For this pass,
//we have one thread group per tile and a thread per pixel in the tile.
[numthreads (TILE_PIXEL_RESOLUTION, TILE_PIXEL_RESOLUTION, 1)]
void CSMain(
    in int3 groupID            : SV_GroupID,           //Uniquely identifies each thread group
    in int3 groupThreadID      : SV_GroupThreadID,     //Uniquely identifies a thread inside a thread group
    in int3 dispatchThreadID   : SV_DispatchThreadID,  //Uniquely identifies a thread relative to ALL threads generated in a Dispatch() call
    uniform bool useMSAA)                              //MSAA enabled? Sample the MSAA depth buffer
{
    //Stage 1 - we sample the depth buffer and work out what the maximum Z
    //value is for every tile. This is done by looping through all the depth
    //values of the pixels that share the same tile and comparing them.
    //
    //We then write this data to the MaxZTileBuffer RWBuffer (optional). This
    //data is handy for stage 2, where we can cull more lights based on this
    //maximum depth value.

    //Location used to sample the depth buffer.
    int3 sampleVal = int3(dispatchThreadID.x, dispatchThreadID.y, 0);

    //This is the sampled depth value from the depth buffer for this given
    //thread. If MSAA is used (i.e. an MSAA-enabled depth buffer), this will
    //represent the average of the 4 samples.
    float sampledDepth = 0.0f;

    //Sample the MSAA buffer if MSAA is enabled.
    [flatten]
    if (useMSAA)
    {
        //Sample the buffer (4 times).
        float s0 = ZPrePassDepthBufferMSAA.Load(sampleVal.xy, 0).r;
        float s1 = ZPrePassDepthBufferMSAA.Load(sampleVal.xy, 1).r;
        float s2 = ZPrePassDepthBufferMSAA.Load(sampleVal.xy, 2).r;
        float s3 = ZPrePassDepthBufferMSAA.Load(sampleVal.xy, 3).r;

        //Average out.
        sampledDepth = (s0 + s1 + s2 + s3) / 4.0f;
    }
    else
        //Sample the standard buffer.
        sampledDepth = ZPrePassDepthBuffer.Load(sampleVal).r;

    //Write to the (thread group) shared memory and wait for all threads to
    //complete their work.
    depthCache[groupThreadID.x][groupThreadID.y] = sampledDepth;
    GroupMemoryBarrierWithGroupSync();

    //Only one thread in the thread group should perform this check and then
    //the write to our MaxZTileBuffer.
    if (groupThreadID.x == 0 && groupThreadID.y == 0 && groupThreadID.z == 0)
    {
        //Loop through the shared pool (essentially a 2D array) and work out
        //what the maximum value is for this thread group (tile). Store the
        //maximum in the following float - initialised to 0.0f.
        float maxDepthVal = 0.0f;

        //Unroll - i and j are known at compile time. The compiler will
        //happily do this for us, but just in case.
        [unroll]
        for (int i = 0; i < TILE_PIXEL_RESOLUTION; i++)
        {
            for (int j = 0; j < TILE_PIXEL_RESOLUTION; j++)
            {
                //Extract value from the depth cache.
                float depthToTest = depthCache[i][j];

                //Test and update if larger than the already stored value.
                if (depthToTest > maxDepthVal)
                    maxDepthVal = depthToTest;
            }//End for j
        }//End for i

        //Write to the Max Z Tile Buffer for use in the second pass - only one
        //thread in a thread group should do this.
        //
        //Note: we can turn this feature off (buffer writes are expensive;
        //since this is not actually required - though it is needed if we want
        //to visualise the tiles' max depth values - a #define has been used to
        //enable/disable the buffer write).
#ifdef SHOULD_WRITE_TO_MAX_Z_TILE_BUFFER
        int tilesX = ceil(rtWidth / (float)TILE_PIXEL_RESOLUTION);
        int maxZTileIndex = groupID.x + (groupID.y * tilesX);
        MaxZTileBuffer[maxZTileIndex] = maxDepthVal;
#endif

        //Stage 2 - in this stage, we will build our LLIB (Light List Index
        //Buffer - essentially a list which indexes in to the Light List Buffer
        //and tells us which lights affect a given tile) and our LLSEB (Light
        //List Start End Buffer - a list which indexes in to the LLIB).
    }//End if(...)
}//End CSMain()
[/source]

     Now, my limited understanding of dynamic branching in shaders suggests this may not be a good move - each thread will execute the code within the code block and then decide whether the result should be kept or discarded later (in order to preserve parallelism?). Not ideal, particularly when I am going to do >3000 sphere/frustum intersection tests in stage 2. Or, since all but one thread in the thread group will not actually execute the code (63 threads not doing it in our example), does the hardware actually do a pretty good job of handling this?

     (My test GPU is a 650M (the laptop that I work on at uni) or a 570 (at home - will be upgrading to a 770/680 in the near future). I am led to believe that on modern GPUs dynamic branching is less of a concern, although I don't really understand why. :P)

     Many thanks,

     Dan.
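     EDIT 2: For completeness, the host-side dispatch mentioned above ("enough thread groups to cover the entire screen") would look roughly like this - a sketch only, assuming rtWidth/rtHeight and TILE_PIXEL_RESOLUTION match the values compiled into the shader, and lightCullCS is the compiled compute shader:

[source lang="cpp"]
//One thread group per tile, one thread per pixel; round up so that partially
//covered edge tiles still get a group.
UINT tilesX = (UINT)ceilf(rtWidth  / (float)TILE_PIXEL_RESOLUTION);
UINT tilesY = (UINT)ceilf(rtHeight / (float)TILE_PIXEL_RESOLUTION);

d3dImmediateContext->CSSetShader(lightCullCS, NULL, 0);
d3dImmediateContext->Dispatch(tilesX, tilesY, 1);
[/source]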
  9. Thanks for the suggestion on Forward+ - I'm giving it a look now, though the only (full) resource I can find on that technique specifically is in GPU Pro 4 - an expensive purchase for just 15 pages, with no guarantee that the resource is actually useful (for me). Actually, I found a PowerPoint from GDC 2012 which I'll give a watch tonight. On the other hand, there is plenty on tile-based rendering, which I suspect is the same general idea. Either way, I shall research further (as I'm meant to be doing anyway) and see if anything comes of it.

     Also, thanks for the tips regarding the paper. It's good to know what type of resources I should be looking at - big-budget research papers over blogs and the like. Will certainly look at SIGGRAPH - may even download a few papers and read them for leisure!

     Thanks for your feedback. It's good to know that it won't instantly be rejected as too easy - especially, as you say, since I've had to learn the API with no assistance from university staff.

     With regards to the amount of work in the renderer: primarily, I'm concentrating on the actual underlying system rather than the interface a programmer uses (i.e. my framework is less important), which cuts time, since I'm happy enough to use the API directly rather than abstract everything I want to use (partially time, partially inexperience, partially the fact that the more I use the API, the more comfortable things become).

     I had also built a solid framework (scene management, shader manager, Win32 code, D3D11 init code, a system to create/use render states, etc.) during my time learning the API, so I've ported that across to my deferred rendering project.

     Last night I managed to implement the G-Buffer pass (and you can call functions to visualise the data in each RTV). Simple and unoptimised as the system currently is, I don't imagine the light pass (in its simplest form) should be too hard. Plus, one of my books has a solid few pages dedicated to optimising a deferred renderer. Enhancements could be fun, but I suspect I'll simply employ a blur algorithm for AA, ignore the transparency issue (just use a forward renderer), and have an indexing system for multiple materials. Nothing revolutionary, yes, but it is just an undergrad project, and I would bet that the system I create could be used for some small-scale PSN/XBL-type games.
  10. "me becoming employed at a AAA studio, and actually needing c++, is really unlikely to happen"

      I don't work in the industry (still a student), but I would have to say I disagree with the above statement.

      I have no idea what your physics degree consisted of, but I guess you have a solid understanding of motion, forces, vectors, etc. In addition, you may have all sorts of maths knowledge which applies to 3D games / graphics / physics engines. At the very least, you will easily be able to learn the 3D maths - and at a *MUCH* better level than a student. Maths is a key skill in the games industry, without a doubt. Mathematics for 3D Game Programming and Computer Graphics by Eric Lengyel is a good place to start looking and comparing your skill set.

      Personally (as I did in formal education), I would start with C (being able to make some console apps/games) and then learn C++. C and C++ are similar, and most books don't go in to classes from chapter one (unlike Objective-C, for example, which is a class-heavy language).

      From there, the world is your oyster. You can apply your degree and aim to develop some physics systems, learn Direct3D/OpenGL and go down the graphics route, learn sound programming, networking, etc.

      No games programming or computer science degree can teach everything. Mine teaches C++ and that's literally about it (and we have a decent employment rate within the first 6 months). The best games programming degree in the UK (Teesside, if I'm not mistaken) hasn't taught any content that I haven't been able to cover in 4-5 months of studying Direct3D 11 by myself. (And I would say that learning something yourself is more important than the degree - indeed, everyone I have spoken to says the first question in an interview centres around things you do outside of education. Plus, IMO, teaching yourself something gives you a much more rounded knowledge by the time you finally get it. Uni has wayyyyy too much hand-holding to be good for you.)

      My mate did Computer Science (Newcastle) and, well... I wasn't impressed, to be honest. Definitely not for games programming, anyway.

      So yeah, why not? If you're looking for a career change, then you're not in a terrible position. Most junior jobs state that they are looking for people with a "Computer Science (and games programming) or maths-related degree" - and you do have a maths-related degree.

      Finally, learning C++ and games-related programming can take time to do properly. C++ is a hard language, though it's worth noting that learning C++ as a second language is MUCH easier than it being your first. I have heard C#/Java are a good start if you want another language first (I did Objective-C, but I'm a bit of a Mac fanboy).
  11. Hi all,

      *tl;dr is at the bottom - sorry for the long one!

      I'm about to begin my third and final year at uni (studying Games Programming), and it's certainly getting close to the time I need to be thinking of a thesis for my final year individual project (400 hours, which includes a major piece of software (60%), a 10k-word report (30%) and an hour-long demo/viva (10%)).

      My degree, sadly, doesn't do all that much in the way of the programmable pipeline - actually, it does absolutely nothing! So I have spent the past 3-4 months studying and developing a rendering demo application using Direct3D 11 via Frank D. Luna's (excellent) book, Introduction to 3D Game Programming with DirectX 11. That is now complete, and I'm looking to get started on the actual project as soon as physically possible (the code, anyway). But I'm a little out of ideas on what would make a reasonable undergrad project.

      I've had discussions with one of my lecturers (it's worth noting he's a physics kind of guy, not a 3D graphics one - but no one at the uni, as far as I know, is a graphics kind of guy, so I'm stuck with it like so - which doesn't bother me... I like learning about this kind of stuff!) and proposed an idea: the original idea was to develop a forward renderer, a classic deferred renderer and a light pre-pass renderer, and compare the performance, limitations, etc. of the three renderers in a series of different scenes (e.g. low poly + high light count, high poly + low light count, etc.). He liked the idea - it did need some refining, however.

      I was told that the whole idea of a thesis is to pick a subject, explore it, and come up with a project based on an academic survey (looking at posts on the internet and then developing an idea for a project based on, for example, known issues that could be fixed). I have found very little in the way of data that compares these systems around the web - just general statements. This I did, and I came up with a slightly better idea (in my opinion; given that this is a software engineering project, I should be focusing more on producing an (almost) complete system rather than producing basic systems and comparing them).

      The current idea is to develop an (almost) complete deferred renderer, in several stages:

      Stage 1: Develop a (classic) deferred renderer from scratch (and a basic framework to make it semi-user-friendly, although I don't fancy spending too much time on this - e.g. abstracting ID3D11Buffers is a waste of time). This would be a bog-standard system with a larger G-Buffer (if you have read Practical Rendering and Computation with Direct3D 11, it's pretty much going to be the exact same system: 4 RTVs, full-screen quads for the lighting pass - see the EDIT at the bottom for the kind of layout I mean). I've begun this stage anyway, as I feel it would be interesting to get a deferred renderer done even if it doesn't form the basis of a thesis.

      Stage 2: Optimise the G-Buffer pass (reducing the size of the G-Buffer).

      Stage 3: Optimise the lighting pass (reducing the number of pixel shader invocations).

      Stage 4: Enhancements - solving the multiple-material, AA and transparency issues (for this one, I may simply use a forward renderer unless I find a very good resource on a good and simple system).

      Stage 5: Other enhancements - I'll try and get a lovely original scene made which uses stuff from my forward rendering demo, ported in to the deferred system (displacement mapping, character animation, maybe SSAO, etc.). This proves that my deferred system isn't limited to texture mapping, interpolated colours and normal mapping.

      I'll speak to my lecturer tomorrow and suggest the idea to him. As this has come from the academic research (i.e. there's a lot of talk about deferred rendering systems and their cons), it should (*fingers crossed*) be well received.

      I feel the above idea is a decent one, given that I have had to learn Direct3D 11 by myself (which itself took ~200 of the 400 hours)... but it's not too far off what people are doing in second year at Teesside University (a single module). Although, they are taught Direct3D 10, taught deferred rendering systems, and given a framework.

      So my two questions are: 1) Do you think the above system is a good project idea for an undergrad thesis? Any amendments? Comments? And 2) Do you have any other suggestions for a thesis which relates to graphics programming in some way? (Personally, I would like to stay away from a pure GPGPU programming thesis, but I'm open to suggestions.)

      tl;dr - Just starting third year, looking for ideas/suggestions for a graphics programming thesis. The current idea is to develop a deferred renderer (and basic framework) in 4-5 stages: basic implementation, optimisation (G-Buffer pass and lighting pass), and enhancements (solve the multiple-material, AA and transparency issues in some way). Is this a good idea, bearing in mind I have had to learn Direct3D 11 by myself (not taught at uni - and thus ~200 hours spent learning it should be credited against the 400 hours allocated to the project)? Would it be respected by games companies looking to employ me as a junior/intern graphics programmer? Suggestions/comments on the deferred system welcomed; suggestions for something else related to graphics programming also welcome.

      Many thanks,

      Dan
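      EDIT: For the curious, the kind of "larger G-Buffer" I mean for stage 1 is sketched below. This is just one possible layout under my own assumptions - not necessarily the book's exact formats - and stage 2 is precisely about shrinking it:

[source lang="cpp"]
//Sketch: a deliberately fat classic G-Buffer - four render targets written in
//one geometry pass, plus the usual depth-stencil buffer.
DXGI_FORMAT gBufferFormats[4] =
{
    DXGI_FORMAT_R16G16B16A16_FLOAT, //RT0: view-space position (to be replaced by depth reconstruction in stage 2)
    DXGI_FORMAT_R16G16B16A16_FLOAT, //RT1: normal (to be encoded/compacted in stage 2)
    DXGI_FORMAT_R8G8B8A8_UNORM,     //RT2: diffuse albedo (rgb) + specular intensity (a)
    DXGI_FORMAT_R8G8B8A8_UNORM,     //RT3: specular colour (rgb) + specular power (a)
};
[/source]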
  12. [quote name='Dbowler92' timestamp='1347264374' post='4978509'] Yeah, I call Present() :) My rendering code is like so:

Device->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0,0,0), 1.0f, 0)
Device->BeginScene()
SpriteObject->Begin(D3DXSPRITE_ALPHABLEND)
//Rendering Textures
SpriteObject->End()
Device->EndScene()
Device->Present(NULL,NULL,NULL,NULL)

Still no luck :( [/quote]

      Found a solution to the issue... At home I use a two-monitor setup - when I unplug one monitor from the PC and run the program, everything happily renders both in windowed mode and in full screen. So the issue is with the second monitor. I haven't had a chance to search for a fix yet (will do so now), but if anyone has any experience fixing this particular issue, please feel free to share.
  13. [quote name='angelmu88' timestamp='1347203913' post='4978313'] [quote name='Dbowler92' timestamp='1347195175' post='4978287'] So I have done some testing... Where I clear my back buffer, I changed the code to:

[source lang="cpp"]
d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(100,100,100), 1.0f, 0);
[/source]

Which gives a nice grey background. However, in full screen... I still see a black background - this must be where my issue lies? [/quote] Do you call device->Present() after drawing the scene? Your main drawing method should look like this:

dd3dDevice->BeginScene();
//Here you draw all your textures
dd3dDevice->EndScene();
dd3dDevice->Present(0, 0, 0, 0);

When you're drawing in D3D you're drawing into something called the back buffer. If you want to show the back buffer content on your screen, you have to call the Present() method so that the buffer currently being displayed swaps with the back buffer. As you point out, if you clear the back buffer that way you should see a grey screen, not a black one, so the problem probably has nothing to do with your textures. [/quote]

      Hi, yeah, I call Present() :) My rendering code is like so:

Device->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0,0,0), 1.0f, 0)
Device->BeginScene()
SpriteObject->Begin(D3DXSPRITE_ALPHABLEND)
//Rendering Textures
SpriteObject->End()
Device->EndScene()
Device->Present(NULL,NULL,NULL,NULL)

      Still no luck :(
  14. [quote name='angelmu88' timestamp='1347193151' post='4978280'] [quote name='Dbowler92' timestamp='1347189347' post='4978258'] One last question regarding full screen mode... My textures and such render perfectly happily in windowed mode - yet in full screen mode, they don't render at all... :S Any ideas? (The same applies even if I start the game in full screen mode and then Alt+Enter to windowed mode, where the textures appear again.) [/quote] Can you post your texture drawing code? The only thing I can think of without seeing the code is that you might be using invalid coordinates, i.e. if the texture rendering method is something like drawTextures(originX, originY, sizeX, sizeY), then (originX + sizeX) is outside the window. Anyway, is this only happening with textures, or is it happening with 3D models too? [/quote]

      Hi. Currently I have no support for 3D models - I haven't got to that point in the book yet, and to be honest, I don't see myself implementing support just yet (2D game engine only at the moment), so I can't test that. So at the moment, just textures. Surfaces - well, they don't support POOL_MANAGED, so I'm basically giving up on them. :P

      Here is my rendering code for textures - it makes use of the transform matrix:

[source lang="cpp"]
void Texture2D::RenderAtPointWithTransformations(int locationX, int locationY, float scaleHorizontal, float scaleVerticle, float rotation, D3DCOLOR color)
{
    ManipulateMatrix(locationX, locationY, rotation, scaleHorizontal, scaleVerticle);
    spriteHandler->SetTransform(&matrix);
    spriteHandler->Draw(texture, NULL, NULL, NULL, color);
}

void Texture2D::RenderSubImageAtPointWithTransformations(RECT *subImage, int locationX, int locationY, float scaleHorizontal, float scaleVerticle, float rotation, D3DCOLOR color)
{
    ManipulateMatrix(locationX, locationY, rotation, scaleHorizontal, scaleVerticle);
    spriteHandler->SetTransform(&matrix);
    spriteHandler->Draw(texture, subImage, NULL, NULL, color);
}

void Texture2D::RenderWithMatrix(D3DCOLOR color)
{
    spriteHandler->SetTransform(&matrix);
    spriteHandler->Draw(texture, NULL, NULL, NULL, color);
}

void Texture2D::RenderSubImageWithMatrix(RECT *subImage, D3DCOLOR color)
{
    spriteHandler->SetTransform(&matrix);
    spriteHandler->Draw(texture, subImage, NULL, NULL, color);
}

void Texture2D::ManipulateMatrix(int transX, int transY, float rotation, float scaleHorizontal, float scaleVerticle)
{
    //Scale
    D3DXVECTOR2 scale(scaleHorizontal, scaleVerticle);

    //Centre point
    D3DXVECTOR2 cent((float)(imageWidth * scaleHorizontal) / 2, (float)(imageHeight * scaleVerticle) / 2);

    //Translate (position) (centred)
    D3DXVECTOR2 trans((float)transX-cent.x, (float)transY-cent.y);
    //Non-centred
    //D3DXVECTOR2 trans((float)transX, (float)transY);

    //Update the location points
    locationX = transX;
    locationY = transY;

    //Fill matrix
    D3DXMatrixTransformation2D(&matrix, NULL, 0, &scale, &cent, rotation, &trans);
}
[/source]

      EDIT: No luck with the non-centred D3DXVECTOR2 trans(...) either.

      EDIT 2: The rest of the rendering is as follows. This is called from my game loop:

[source lang="cpp"]
void GameController::RenderCurrentScene()
{
    //Renders the current scene (calling Scene->Render();).
    //
    //Rendering - will clear the back buffer, BeginScene for rendering, pass
    //over to the current scene to render its bits and bobs (Surface2D/
    //Texture2D will handle the rendering for the game objects and such),
    //regain control and present the view.

    //Clear back buffer (black) (0,0,0)
    d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0,0,0), 1.0f, 0);

    d3ddev->BeginScene();

    //Current scene will render
    currentScene->RenderScene();

    d3ddev->EndScene();

    //Present the back buffer.
    d3ddev->Present(NULL,NULL,NULL,NULL);
}
[/source]

      This then calls Render() on my current and active scene here:

[source lang="cpp"]
void TestScene::RenderScene()
{
    //Put your rendering code in here.
    //
    //It's best if you abstract and ask each object in your game (a class) to
    //render itself. (E.g. whilst your player could be represented by a
    //Texture2D object (and you can use Texture2D->Render()), it would be MUCH
    //better to make a new Player class so you can keep the scene code nice
    //and streamlined. Some things (e.g. the background - a Surface2D object)
    //could be created and rendered here rather than via a class.)
    //OutputDebugStringW(L"Rendering Current Scene \n");

    //***RENDER SURFACES***
    //(Note it is possible to render surfaces whilst rendering the textures -
    //just unlock (End();) Direct3D - render - and lock (Begin();) again.)

    //Set up spriteHandler to allow for textures to be rendered (lock).
    //Default to using ALPHABLEND.
    spriteHandler->Begin(D3DXSPRITE_ALPHABLEND);

    //***RENDER TEXTURES***
    shuttle->RenderAtPointWithTransformations(1024,768);
    ss->RenderSubImageFromSpriteSheet(1,2,100,100,rotation,1,1);
    //sf->RenderAtPointWithTransformations(250,250);

    //Clean up spriteHandler (unlock).
    spriteHandler->End();
}
[/source]

      ss is a spritesheet object... which does no rendering itself and rather lets the Texture class do it.

      EDIT 3: (I'll stop editing soon xD) So I have done some testing... Where I clear my back buffer, I changed the code to:

[source lang="cpp"]
d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(100,100,100), 1.0f, 0);
[/source]

      Which gives a nice grey background. However, in full screen... I still see a black background - this must be where my issue lies?
  15. [quote name='Dbowler92' timestamp='1347187186' post='4978246'] [quote name='angelmu88' timestamp='1347179483' post='4978212'] Hi, I'll try to answer your question. [quote name='Dbowler92' timestamp='1347171217' post='4978190'] I have read that I need to change the D3DPRESENT_PARAMETERS and call Reset() via the Direct3D device object... But this blew up my game and I'm not sure why. [/quote] That's true, you have to call Reset(), but you have to keep some things in mind. Take a look at my enableFullScreenMode() function:

[source lang="cpp"]
void D3DApp::enableFullScreenMode()
{
    // Are we already in fullscreen mode?
    if( !md3dPP.Windowed )
        return;

    int width  = GetSystemMetrics(SM_CXSCREEN);
    int height = GetSystemMetrics(SM_CYSCREEN);

    md3dPP.BackBufferFormat = D3DFMT_X8R8G8B8;
    md3dPP.BackBufferWidth  = width;
    md3dPP.BackBufferHeight = height;
    md3dPP.Windowed         = false;

    // Reset the device with the changes.
    onLostDevice();
    HR(gd3dDevice->Reset(&md3dPP));
    onResetDevice();
}
[/source]

md3dPP is a D3DPRESENT_PARAMETERS (like the one you use to create the device). Notice that before calling device->Reset() I call a method named onLostDevice(), and after calling device->Reset() I call onResetDevice(). What onLostDevice() does is prepare all your game assets (textures, meshes, etc.) for a device reset. For instance, any D3D asset has to be placed in what is called a memory pool; let's say you create a texture and specify that it uses D3DPOOL_DEFAULT - objects placed in that pool need to be destroyed before calling device->Reset() and reinitialised after it. Some other objects, like dynamic render targets, need to be destroyed too. I suppose that's why your game is crashing when you call Reset(): you're not taking this into account. (In the previous example you can place your texture in D3DPOOL_MANAGED so that you don't have to destroy it.) [quote name='Dbowler92' timestamp='1347171217' post='4978190'] How do you guys handle rendering? For example, say my window is of size 1024 by 768. Let's say I render my texture (with the transformation matrix) at point 1024, 768 (centred) - this obviously renders right at the bottom right of the screen... but let's say the user changes the resolution settings to 1920 by 1080. That image is no longer rendered at the bottom right of the screen. [/quote] That's true; for 2D objects you have to take the window size into account when rendering them, so if you were drawing your texture like this: texture->Draw(0,0,1024,768), now you have to draw like this: texture->Draw(0,0,1920,1080). (The same goes for other 2D objects, like plain text.) For 3D objects you don't have to worry; the only thing to know is that at a higher resolution you're going to see more of your scene, i.e. your field of view is going to become bigger (usually you specify a vertical field of view, so it's the horizontal field of view that changes). [/quote]

      Hi, many thanks for the detailed response. :) As you guessed, I had failed to release my game assets, and that was causing the crash. I have, however, upgraded my Texture2D and Surface2D classes so that they are all created in the MANAGED pool. That has stopped the crashing - which is good - but right now, when I press Alt+Tab, I end up with a white box where my drawing area should be. :/

[source lang="cpp"]
void ChangeScreenMode(HWND window)
{
    OutputDebugStringW(L"ChangeScreenmode\n");

    if (isInWindowedMode)
    {
        OutputDebugStringW(L"To FS\n");

        //Change to FS
        spriteHandler->OnLostDevice();
        d3dpp.Windowed = FALSE;
        HRESULT hResult = d3ddev->Reset(&d3dpp);

        SetWindowLong(d3dpp.hDeviceWindow, GWL_STYLE, WS_POPUPWINDOW);
        SetWindowPos(d3dpp.hDeviceWindow, HWND_TOP, 0, 0, 0, 0,
            SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_SHOWWINDOW);

        isInWindowedMode = false;
        spriteHandler->OnResetDevice();
    }
    else
    {
        OutputDebugStringW(L"To Windowed\n");

        //Change to windowed
        spriteHandler->OnLostDevice();
        d3dpp.Windowed = TRUE;
        d3ddev->Reset(&d3dpp);

        SetWindowLong(d3dpp.hDeviceWindow, GWL_STYLE, WS_OVERLAPPEDWINDOW);
        SetWindowPos(d3dpp.hDeviceWindow, HWND_TOP, 0, 0, SCREENW, SCREENH,
            SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_SHOWWINDOW);

        isInWindowedMode = true;
        spriteHandler->OnResetDevice();
    }
}
[/source]

      This is the current function... First things first - it should be noted that this function is ugly... I've yet to clean it up. Personally, I prefer things to work before I make them look good/optimised. Second, the SetWindowLong & SetWindowPos calls were copied and pasted from Google, so they are far from finished. Any idea why my beautiful black Direct3D window is replaced by an ugly white box? [/quote]

      Turns out that I had lost the back buffer. :) So the solution is like this - again, not a clean bit of code:

[source lang="cpp"]
void ChangeScreenMode(HWND window)
{
    OutputDebugStringW(L"ChangeScreenmode\n");

    backbuffer->Release();

    if (isInWindowedMode)
    {
        OutputDebugStringW(L"To FS\n");

        //Change to FS
        spriteHandler->OnLostDevice();
        d3dpp.Windowed = FALSE;
        HRESULT hResult = d3ddev->Reset(&d3dpp);

        SetWindowLong(d3dpp.hDeviceWindow, GWL_STYLE, WS_EX_TOPMOST | WS_VISIBLE | WS_POPUP);
        SetWindowPos(d3dpp.hDeviceWindow, HWND_TOP, 0, 0, SCREENW, SCREENH,
            SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_SHOWWINDOW);

        isInWindowedMode = false;
        spriteHandler->OnResetDevice();
    }
    else
    {
        OutputDebugStringW(L"To Windowed\n");

        //Change to windowed
        spriteHandler->OnLostDevice();
        d3dpp.Windowed = TRUE;
        d3ddev->Reset(&d3dpp);

        SetWindowLong(d3dpp.hDeviceWindow, GWL_STYLE, WS_OVERLAPPEDWINDOW);
        SetWindowPos(d3dpp.hDeviceWindow, HWND_TOP, 0, 0, SCREENW, SCREENH,
            SWP_FRAMECHANGED | SWP_NOMOVE | SWP_NOSIZE | SWP_SHOWWINDOW);

        isInWindowedMode = true;
        spriteHandler->OnResetDevice();
    }

    d3ddev->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backbuffer);
    completedTransition = true;
}
[/source]

      One last question regarding full screen mode... My textures and such render perfectly happily in windowed mode - yet in full screen mode, they don't render at all... :S Any ideas? (The same applies even if I start the game in full screen mode and then Alt+Enter to windowed mode, where the textures appear again.)
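      EDIT: A related note for anyone finding this later - alongside the Reset() path above, D3D9 apps normally also poll the device each frame so that a lost device (e.g. after Alt+Tab away from exclusive fullscreen) is recovered cleanly. A minimal sketch, assuming the same d3ddev/d3dpp/spriteHandler objects as above and a hypothetical RenderFrame() for the normal render path:

[source lang="cpp"]
//Run once per frame, before rendering.
HRESULT hr = d3ddev->TestCooperativeLevel();
if (hr == D3DERR_DEVICELOST)
{
    //Device is lost and cannot be reset yet - skip rendering and wait.
    Sleep(50);
}
else if (hr == D3DERR_DEVICENOTRESET)
{
    //Device can now be reset - release D3DPOOL_DEFAULT resources first.
    spriteHandler->OnLostDevice();
    if (SUCCEEDED(d3ddev->Reset(&d3dpp)))
        spriteHandler->OnResetDevice();
}
else if (SUCCEEDED(hr))
{
    RenderFrame(); //Hypothetical: the normal Clear/BeginScene/.../Present path.
}
[/source]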