
Hodgman

Member Since 14 Feb 2007

#5037482 Geometry based lighting

Posted by Hodgman on 28 February 2013 - 12:30 AM

The projected light texture that Hodgman is talking about gives you similar, and probably way more, control over the shape of the emission, but it is not the same as clipping the light's shape with geometry. You would be controlling the light shape with a texture instead of geometry. It works a lot like a decal.

There are two main ways to apply lights in a deferred renderer -- either with screen-space quads that cover the screen-space area of the light, or with 3D geometry that covers the volume of the light. Any deferred rendering tutorial that uses the latter technique will teach you how to apply light only to the areas inside a mesh's volume using the stencil buffer. You can use any arbitrary, closed mesh to apply the lighting if you want to.
No matter which lighting technique you're using (deferred with quads, deferred with volumes, forward...), you can also apply a projected gobo texture as part of the same lighting effect.
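For illustration, here's a rough sketch of the CPU-side setup for such a projected gobo, using D3D9-era D3DX math (the function and variable names here are mine, not from any particular engine). The light's pixel shader would transform each surface's world position by this matrix, divide by w, and sample the gobo texture at the resulting .xy -- exactly like a spotlight "projector":

    #include <d3dx9.h>

    // Builds a world-space -> gobo-texture-space matrix for a spotlight-style light.
    D3DXMATRIX BuildGoboMatrix(const D3DXVECTOR3& lightPos,
                               const D3DXVECTOR3& lightTarget,
                               float coneAngle, float nearZ, float farZ)
    {
        D3DXMATRIX view, proj, clipToTexture;
        D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);

        // Treat the light as a camera looking at its target.
        D3DXMatrixLookAtLH(&view, &lightPos, &lightTarget, &up);

        // Perspective projection matching the light's cone.
        D3DXMatrixPerspectiveFovLH(&proj, coneAngle, 1.0f, nearZ, farZ);

        // Remap clip space [-1,1] into texture space [0,1], flipping Y.
        D3DXMatrixIdentity(&clipToTexture);
        clipToTexture._11 = 0.5f;  clipToTexture._22 = -0.5f;
        clipToTexture._41 = 0.5f;  clipToTexture._42 =  0.5f;

        return view * proj * clipToTexture; // pass this to the light's shader
    }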

Is there a tutorial about how to do something like this?

Googling "deferred rendering tutorial" and "projected texture tutorial" brings up a lot of hits.


#5037416 OpenGL vs DirectX

Posted by Hodgman on 27 February 2013 - 06:54 PM

To generalize... on Windows, D3D is more stable/reliable because it's largely implemented by one entity (Microsoft), with the rest implemented by the driver vendors (nVidia/AMD/Intel/etc.) according to a strict D3D driver specification from Microsoft.

On the other hand, GL is almost entirely implemented by the driver, and Khronos does not test implementations for compliance with the specification.

 

To be fair, you do find driver bugs occasionally in both D3D and GL, but in my experience GL has the worse reputation in this regard.

 

With features/extensions, sometimes one API provides more capabilities than the other, and it goes both ways. Intel in particular have often lagged behind with their GL driver support, with their D3D driver providing more capabilities than their GL driver... but there are also cases of the opposite.

 

In the D3D9 vs GL days, GL's CPU-side overheads tended to be slightly smaller... however, D3D10/11 are much more efficient than 9 in this regard, so I'd guess that they've closed this gap with GL, and probably even beaten it by now, seeing as GL is much more complex due to backwards compatibility, whereas D3D makes a clean break with each version, discarding old cruft. In any case, these performance differences should be very small (e.g. a millisecond per frame...).

 

There is one annoying case with GL that can cause poor performance -- when asked to perform a function that is not supported in hardware, many drivers will resort to CPU emulation of that feature. For example, I once added array indexing to a pixel shader, but my GPU only supported this feature in vertex shaders, so my driver decided that instead of failing to draw anything, it would run my pixel shader in software emulation on the CPU, at 1 frame per second... GL has a lot more of these kinds of fast-path/slow-path pitfalls compared to D3D.




#5037354 Geometry based lighting

Posted by Hodgman on 27 February 2013 - 03:58 PM

Many tutorials on deferred rendering will do this. The projected light texture is sometimes called a 'gobo'.


#5037161 FPS Movie Based Game

Posted by Hodgman on 27 February 2013 - 08:31 AM

I didn't include it in my last post because I'd forgotten the name of it, but you should look at Façade. It's just a short drama rather than a complex feature-length action movie, but it is a dynamic story presented in first person, just like you're describing.

It must've been incredibly complicated to make, and I'm guessing that the only reason the author managed to finish it is by constraining it to just a few scenes...

They've published several papers/presentations on the technology behind it too.




#5037124 FPS Movie Based Game

Posted by Hodgman on 27 February 2013 - 06:57 AM

I think lots of people working on "dynamic narrative" (I think that's what they call it?) have had this same idea. The project would largely revolve around your ability to make an AI that can direct a story in the face of a character that it can't directly control. As you said, if you don't pick up a clue, or don't stop the bad guy, then it's got to change the storyline so that it still has decent pacing, still follows a dramatic structure that we recognize as "a story", and is interesting enough to keep you engaged for 2 hours.

If someone makes such a thing, it would be an awesome game, because despite only being 2 hours long, it would have amazing replay value.

 

Another issue is that if you tried to make a film following the structure you've set out, it would be very hard. When was the last time you saw a 2 hour film where 1 particular character was the centre of every scene? Now out of those, how many of them were shot in the first person from that character's point of view?

I think to pull off being able to tell a film-worthy story, you'd probably have to have more than one central character, and/or use other points of view.

 

Games like Fahrenheit did this fairly well: at the beginning of the game you alternate between controlling two detectives hunting a killer, and controlling the killer himself. It allowed much more flexibility in the storytelling, and would probably allow things to be more dynamic as well. e.g. if you refuse to pick up a keycard off a table, and instead just stand in the corner, the "AI director" can just cut to the next scene, which is about other people. Later on, the "director" can explain that the keycard was picked up off the table after the scene that you saw. It also means that you can die; if you make stupid decisions that get your current character killed, the "AI director" still has options to keep the story going, by using other characters. It might have to re-work its dramatic structure quite a bit, hacking a tragedy plot into the story, or any number of appropriate tropes -- e.g. if your character is a thief and he dies defending someone who's being robbed, then some kind of redemption theme could be woven in at that point...




#5037087 PS3 games in C++..

Posted by Hodgman on 27 February 2013 - 04:54 AM

You don't rent SDK kits, btw -- you have to buy them. It is often the publisher that owns the kit, though, and they then lend it to the studio for use.

Technically, you neither rent nor buy them. You do basically "buy" them, but they remain the property of Sony/MS, and are supposed to be returned when your license for them expires or is invalidated (such as if you go bankrupt). If you actually owned them, you would see more of them on eBay.

 

But after finishing the PC game, if I then get Sony to take my game, I have to change the code, right? And also, how about Xbox games?

Yes, you have to change some of the code -- basically any part that interacts with the operating system (e.g. Windows APIs in your PC game) or the GPU (e.g. D3D/GL), or anything that you've written in assembly will have to be re-written.

 

...however, I hate to be a nay-sayer, but this isn't something that concerns you, because you are not a licensed Sony/MS PS3/360 developer. You have to be a large company with tens of thousands of dollars lying around to spend on licences and dev-kits in order for that to happen. You also have to have hundreds of thousands of dollars lying around when it comes to actually shipping your game (verification and printing costs)...

 

Making professional C++ console games on current-gen consoles is not an achievable goal for a hobbyist. Either stick with the PC, make use of one of the console hobbyist programs (like XNA), or work with older consoles via homebrew kits if you're really eager for some masochism.




#5036958 mouse input without windows messages ?

Posted by Hodgman on 26 February 2013 - 07:40 PM

There are some Windows hotkeys that can get in the way of a game. I have problems with the Windows key and Oblivion, for example. Alt-tab might interfere with gameplay in Oblivion too. Tab is your inventory, and you use it a LOT! Even in the middle of combat, to drink healing potions and select spells.

So add an option to your configuration screen where people can opt-in to disabling these keys.
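One common way to implement that opt-in on Windows (not spelled out above, so treat this as an illustrative sketch) is a low-level keyboard hook that swallows the Windows keys only while the option is enabled:

    #include <windows.h>

    static HHOOK g_keyboardHook = NULL;

    static LRESULT CALLBACK KeyboardProc(int code, WPARAM wParam, LPARAM lParam)
    {
        if (code == HC_ACTION)
        {
            const KBDLLHOOKSTRUCT* kb = reinterpret_cast<KBDLLHOOKSTRUCT*>(lParam);
            if (kb->vkCode == VK_LWIN || kb->vkCode == VK_RWIN)
                return 1;   // eat the key press so Windows never sees it
        }
        return CallNextHookEx(g_keyboardHook, code, wParam, lParam);
    }

    // Call this from the game's configuration screen.
    void SetWindowsKeyDisabled(bool disabled)
    {
        if (disabled && !g_keyboardHook)
        {
            g_keyboardHook = SetWindowsHookEx(WH_KEYBOARD_LL, KeyboardProc,
                                              GetModuleHandle(NULL), 0);
        }
        else if (!disabled && g_keyboardHook)
        {
            UnhookWindowsHookEx(g_keyboardHook);
            g_keyboardHook = NULL;
        }
    }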

 

However, unless the "restore" from system mem to vidram is triggered explicitly by the developer (and I believe it's automatic), that would seem to require a check before using a resource in vidram, to make sure it was in sync with the system-mem copy first.

No, it can use the same mechanism that you'd use on the application side to deal with vidram losses -- only when the lost-device flag is raised (checked once per frame upon present) does it iterate the list of managed resources and restore them from their sysram copies.
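As a rough sketch of that mechanism in D3D9 (the ManagedResource interface here is made up -- it just stands in for "a resource that keeps a system-memory copy of itself"):

    #include <d3d9.h>
    #include <vector>

    // Hypothetical interface for a resource backed by a system-memory copy.
    struct ManagedResource
    {
        virtual void ReleaseVidramCopy() = 0;                      // free the default-pool object
        virtual void RestoreFromSysramCopy(IDirect3DDevice9*) = 0; // re-create it and re-upload the data
    };

    void PresentAndHandleDeviceLoss(IDirect3DDevice9* device,
                                    D3DPRESENT_PARAMETERS& pp,
                                    std::vector<ManagedResource*>& resources)
    {
        if (device->Present(NULL, NULL, NULL, NULL) == D3DERR_DEVICELOST)
        {
            // Only reset once the device is actually ready to be reset.
            if (device->TestCooperativeLevel() == D3DERR_DEVICENOTRESET)
            {
                for (size_t i = 0; i < resources.size(); ++i)
                    resources[i]->ReleaseVidramCopy();   // Reset() requires default-pool objects to be freed
                device->Reset(&pp);
                for (size_t i = 0; i < resources.size(); ++i)
                    resources[i]->RestoreFromSysramCopy(device); // rebuild from the sysram copies
            }
        }
    }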

Lost device wouldn't be an issue if I didn't have to use Windows to talk to the vidcard... but then I wouldn't get the benefits of using Windows to talk to the vidcard.

Exactly. On the PS3 I had the luxury of not having to go through the OS to talk to the GPU... and it was a nightmare. I had the option of using higher level APIs, but I was writing an engine, so I may as well go as close to the metal as I could, right?

* Packing bits and bytes manually into structures in order to construct command packets, instead of just calling SetBlahState() -- not fun. Yeah, slightly fewer clock cycles, but not enough to matter. Profiling showed that this code wasn't a hot-spot, so time-consuming micro-optimisations there are a waste. I'm talking about boosting the framerate from 30FPS up to 30.09FPS, at a huge development cost. I could've spent that time optimising an actual bottleneck. Also, any malformed command packets would simply crash the GPU, without any nice OS notification of the device failure, or device restarts, or debug logs... The amount of time required to debug these systems was phenomenal, which again means less time that I could use to optimise parts that actually mattered.

* Dealing with vidram resource management myself -- not fun. Did you know that any of your GPU resources, such as textures, may exist as multiple allocations? In order to achieve a good level of parallelism without stalls, the driver programmer (or poor console programmer) often intentionally introduces a large amount of latency between the CPU and GPU. When you ask the GPU to draw something, the driver puts this command into a queue that might not be read for upwards of 30ms. This means that if you want to CPU-update a resource that's in use by the GPU, you can either stall for 30ms (no thanks), or allocate a 2nd block of memory for it. Then you need to do all of the juggling that makes n different vidram objects appear to be a single object to the application developer (see the sketch after this list). The guys who write drivers for your PC are really good at this stuff and know how to do it efficiently. There are also lots of sub-optimal strategies that seem like a good idea to everyone else (i.e. your GPU driver probably solves these issues more efficiently than you would anyway).

* Then there's porting. Repeat the above work for every single GPU that you want to support...
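To illustrate the "multiple allocations behind one resource" point from the second bullet above (this is just a conceptual sketch, not code from any console SDK): keep a small ring of physical buffers and let the CPU write into one that the GPU has already finished reading, instead of stalling.

    #include <cstring>

    static const int kFramesOfLatency = 2;   // how far the CPU may run ahead of the GPU

    // One logical buffer that is secretly several physical allocations.
    struct DynamicBuffer
    {
        void*       physicalCopies[kFramesOfLatency]; // separate memory blocks
        std::size_t sizeInBytes;
        int         writeIndex;

        // Returns a block the GPU is no longer reading, so the CPU can
        // overwrite it without waiting for in-flight draw commands.
        void* BeginWrite()
        {
            writeIndex = (writeIndex + 1) % kFramesOfLatency;
            return physicalCopies[writeIndex];
        }
    };

    void UpdateEveryFrame(DynamicBuffer& buffer, const void* newData)
    {
        void* dst = buffer.BeginWrite();  // no stall: the GPU still reads last frame's copy
        std::memcpy(dst, newData, buffer.sizeInBytes);
        // ...submit this frame's draw calls using physicalCopies[writeIndex]...
    }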

 

Giving up a few clock cycles to the API has turned out to be a necessary evil. The alternative just isn't feasible any more.

Profile your code and optimize the bits that matter. Also, your obsession with clock cycles as a measure of performance is a bit outdated. Fetching a variable from RAM into a register can stall the CPU for hundreds of clock cycles if your memory organization and access patterns aren't optimized -- on a CPU that I used recently, reading a variable from RAM could potentially be as costly as 800 float multiplications if you had sub-optimal memory access patterns!

The number one optimisation target these days is memory bandwidth, not ALU operations.
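A tiny illustration of that point (the struct and numbers are made up for the example): both loops below do the same arithmetic, but the second one drags far less memory through the cache per element, so it is limited much less by bandwidth and cache misses.

    #include <cstddef>

    struct ParticleAoS { float x, y, z; float misc[13]; }; // 64 bytes per particle

    // Array-of-structures: updating y pulls the whole 64-byte struct into cache.
    void UpdateY_AoS(ParticleAoS* p, std::size_t count, float dy)
    {
        for (std::size_t i = 0; i < count; ++i)
            p[i].y += dy;
    }

    // Structure-of-arrays: the y values are contiguous, so every cache line
    // that is fetched is fully used.
    void UpdateY_SoA(float* y, std::size_t count, float dy)
    {
        for (std::size_t i = 0; i < count; ++i)
            y[i] += dy;
    }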




#5036688 Full screen render targets - how many?

Posted by Hodgman on 26 February 2013 - 07:33 AM

In my last game (forward rendered), at a guess: 1 fullscreen FP16 rgba, 2 fullscreen 8-bit rgba, 2 half-resolution 8-bit rgba, and the 8-bit rgba back-buffer.
[edit] Also a full and half res D24S8.


#5036673 IDirect3DDevice9::SetRenderTarget

Posted by Hodgman on 26 February 2013 - 06:25 AM

That's not what the index is for. When drawing, the triangles are drawn to the currently bound render target(s). Usually you only have a single render target bound at a time, at index 0; however, by using different indices, you can bind more than one render target simultaneously. This allows your pixel shader to output multiple colour values -- the triangle is drawn at the same position in every bound render target, but different colour values are written to each. This is known as MRT (multiple render targets) and is usually how deferred rendering is implemented.
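A minimal sketch of the MRT case in D3D9 (the surface variables are assumed to have been created elsewhere, e.g. via GetSurfaceLevel on two render-target textures); the pixel shader would then declare one output per target (COLOR0, COLOR1, ...):

    #include <d3d9.h>

    // Assumed to point at two render-target surfaces created elsewhere.
    extern IDirect3DSurface9* g_albedoSurface;
    extern IDirect3DSurface9* g_normalSurface;

    void BindGBufferTargets(IDirect3DDevice9* device)
    {
        device->SetRenderTarget(0, g_albedoSurface); // index 0: first colour output
        device->SetRenderTarget(1, g_normalSurface); // index 1: second colour output
        // ...draw the scene: every triangle is rasterised once, but each pixel
        // writes a different value into each bound target...
    }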

 

You use the Present function to make swap-chains appear on the screen, or GetBackBuffer to retrieve a swap-chain object's internal render-target that will be presented.




#5036516 Infringement of remakes and their component parts (engine, plugins, data)

Posted by Hodgman on 25 February 2013 - 05:51 PM

Also note that copyright infringement is not a criminal offence -- you don't commit a crime at some point and then get arrested for it. It's a civil offence -- other citizens take you to court to try and prove that you've done harm to them.

Your obfuscated-with-delay distribution hypothesis just moves the point in time where you get caught. Maybe the copyright holder doesn't find out for years anyway, at which point they try and prove that you've harmed them with your game...


#5036304 Writing to Render Target from Itself

Posted by Hodgman on 25 February 2013 - 06:27 AM

1) I'm writing to what is currently being read by the shader.

2) Should I be using two FBOs, and be ping-ponging back and forth?

1) In my experience, that's asking for trouble. Some GPUs do actually support this (as long as other pixels aren't reading from the one that you're writing to, which would cause a race-condition), but many GPUs don't support it at all and will do god-knows-what if asked to.

2) Yes.
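A minimal ping-pong sketch in C++/OpenGL (the FBOs and textures are assumed to have been created and attached elsewhere): each pass renders into one target while sampling the other, then the two swap roles, so nothing is ever read and written in the same pass.

    #include <GL/glew.h>   // or whichever GL loader you use
    #include <utility>

    // Assumed to be created elsewhere: two FBOs, each with one colour texture attached.
    extern GLuint g_fbo[2];
    extern GLuint g_colorTex[2];

    static int g_src = 0, g_dst = 1;

    void RunPingPongPass()
    {
        glBindFramebuffer(GL_FRAMEBUFFER, g_fbo[g_dst]);  // render into the destination
        glBindTexture(GL_TEXTURE_2D, g_colorTex[g_src]);  // sample from the source
        // ...draw a fullscreen quad with the effect's shader...

        std::swap(g_src, g_dst);   // the next pass reads what this pass just wrote
    }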




#5036303 Infringement of remakes and their component parts (engine, plugins, data)

Posted by Hodgman on 25 February 2013 - 06:15 AM

As such, it seems to me that only a combination package of both the engine containing the necessary C++ modules and the game XML file together can possibly be deemed to be infringing.

By that logic, pirating movies isn't infringement unless VLC/media-player/etc are bundled into the torrent along with the MPEG... which isn't the case.

The copyright enforcement barons push the line far enough that, in their eyes, even if you encode some copyrighted information into a number and then share that number with someone (even if you don't tell this person what it represents, or how the number can be used in an infringing way), you are infringing the rights of the copyright holder.
 

If I create an XML file that mimics the game mechanics, but using different sprites, textures, sounds and map layout, is the project still an infringement?  How about just using a different map layout?

Don't copy any of their creative works, period. You can't use their assets (but you can be compatible with the original assets, so that someone who owns the original game can import those assets into your game, seeing as that user has a license to use said assets), and you can't copy their layouts.
Game rules are an exception: in the US, the rules of a game aren't covered by copyright, but a particular expression of those rules is.

So yes -- steal the ideas behind the mechanics, but make a new game with those ideas.
 

then it will be fairly trivial for any user to rebuild the original map layout of the game in question using my project as a basis

As long as you don't host user-generated content yourself, then you're in the clear (however, you're never at 0% risk of being spuriously sued!).
Minecraft has handled this brilliantly -- there are so many grey-area creations and mods for it, but Mojang don't host any of them. If anyone does want to go after some infringing bit of user-created content, they have to go after some small community website, instead of Notch the millionaire.


#5036265 Your preferred or desired BRDF?

Posted by Hodgman on 24 February 2013 - 10:42 PM

The features that I think I need so far are: Non-lambertian diffuse, IOR/F(0º)/spec-mask, anisotropic roughness, metal/non-metal, retro-reflectiveness and translucency.

I took Chris_F's BRDF containing Cook-Torrance/Schlick/GGX/Smith and Oren-Nayar, and re-implemented it with hacked support for anisotropy (based roughly on Ashikhmin-Shirley) and retroreflectivity.

If both the roughness factors are equal (or if the isotropic bool is true), then the distribution should be the same as GGX; otherwise it behaves a bit like Ashikhmin-Shirley. However, the distribution is no longer properly normalized when using anisotropic roughness.

The retro-reflectivity is a complete hack and won't be energy conserving. When the retro-reflectivity factor is set to 0.5, you get two specular lobes -- a regular reflected one, and one reflected back at the light source -- without any attempt to split the energy between them. At 0 you just get the regular specular lobe, and at 1 you only get the retro-reflected one.
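To make the lobe-blending idea concrete, here's a toy C++ version of just that part (not the code from the pastebin link below, and using a simple Phong-style lobe instead of GGX to keep it short): the "regular" lobe peaks around the mirror direction, the "retro" lobe peaks when the view direction points back at the light, and the retro factor blends between them with no attempt at energy conservation.

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    static float dot(Vec3 a, Vec3 b)     { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  scale(Vec3 v, float s)  { return Vec3{ v.x*s, v.y*s, v.z*s }; }
    static Vec3  sub(Vec3 a, Vec3 b)     { return Vec3{ a.x-b.x, a.y-b.y, a.z-b.z }; }
    static Vec3  reflect(Vec3 i, Vec3 n) { return sub(i, scale(n, 2.0f*dot(n, i))); }

    // N = surface normal, L = direction to light, V = direction to viewer (all unit length).
    float SpecularWithRetro(Vec3 N, Vec3 L, Vec3 V, float power, float retro)
    {
        Vec3  R       = reflect(scale(L, -1.0f), N);                 // mirror direction
        float regular = std::pow(std::max(dot(R, V), 0.0f), power);  // ordinary specular lobe
        float back    = std::pow(std::max(dot(L, V), 0.0f), power);  // lobe aimed back at the light
        return (1.0f - retro) * regular + retro * back;              // 0 = regular only, 1 = retro only
    }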
 
BRDF Explorer file for anyone interested: http://pastebin.com/6ZpQGgpP
 
Thanks again for sending me on a weekend BRDF exploration quest, Chris and Promit.


#5036249 Hardware

Posted by Hodgman on 24 February 2013 - 08:23 PM

A beefy workstation can be handy during development -- e.g. if an artist is working on a million polygon mesh in 3DS Max, then they'll want a good GPU and lots of RAM, etc... I personally recommend getting an SSD in your development PC, because it makes loading large amounts of data (or many different applications) very fast.

 

However, it's also handy to have a crappy old PC around the place (even if it's not your main one) for testing your game. For compatibility testing, it's also good to have PCs with Intel, nVidia and ATI graphics cards, etc, etc...

Sometimes if your main PC is too powerful, you won't realise that your game is a resource hog, and when you run it on an "average" PC, you'll be getting 15 frames per second rather than 60.

 

For really intensive jobs, like rendering/baking out data from 3D programs, which can take hours, you'd usually use a "farm" of PCs rather than a single PC with 100 CPUs. At my last job, all the artists would leave their PCs on overnight while running a program that connected them to a "render farm" network. A main server would go through a queue of jobs that needed to be rendered, pass out small chunks of work to all the different computers that made up the farm, and then merge their results together. This gave us the equivalent of a giant super-computer without having to actually buy one, plus we could add more power just by connecting more regular PCs to the network.




#5036230 Your preferred or desired BRDF?

Posted by Hodgman on 24 February 2013 - 07:32 PM

What's the computation for the X and Y parameters that BRDF Explorer uses for aniso distributions? Are they tangent/bitangent vectors?

Yeah, you can peek into their "shaderTemplates" files:

    vec3 normal    = normalize( gl_TexCoord[0].xyz );           // N: the surface normal
    vec3 tangent   = normalize( cross( vec3(0,1,0), normal ) );  // X: an arbitrary tangent
    vec3 bitangent = normalize( cross( normal, tangent ) );      // Y: completes the basis

These are then passed into your BRDF function as N, X and Y.

 

[edit] Oh man, this new IPB text box is screwing up hardcore lately... It just deleted half my post that preceded a code block, again...

 

Shaders do need NdotL

He was talking about dividing by NdotL.

BRDF Explorer will multiply by NdotL outside of the BRDF, so if you've included NdotL inside your BRDF (as we usually do in games), then you need to divide by NdotL at the end to cancel it out.





