

Member Since 13 Sep 2012

#5210000 Deferred optimization

Posted by TheChubu on 11 February 2015 - 05:43 AM

Actually, you mentioned that you already tried stencilling your point lights... Did you use sphere meshes for this? If you used fullscreen quads to initialize the stencil buffer, that might explain why you didn't gain any performance from it...
Yeah, this. I was assuming the OP was using regular spheres for point lights. If you use a fullscreen quad for each light indiscriminately, it's bound to run slow, and even slower doing the stencil pass (hell, I don't think it would even work; you can't mark any pixels in front of the volume).

#5209895 Deferred optimization

Posted by TheChubu on 10 February 2015 - 05:34 PM

Am I just hitting the wall on my laptop ? What are some common tricks I can use to reduce the time required for the lighting passes?

Don't store position; reconstruct it from depth. Don't store specular color, as kalle_h said.
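A minimal sketch of the math behind "reconstruct it from depth", assuming a standard OpenGL perspective projection; in a real renderer this lives in the lighting shader, and the class and method names here are mine, not from the thread.

```java
// Sketch: recover linear view-space depth from a [0,1] depth-buffer value,
// assuming a standard OpenGL perspective projection. From viewZ (plus the
// pixel's NDC x/y and the projection parameters) you can rebuild the full
// view-space position, so the G-buffer doesn't need a position target.
public class DepthReconstruction {
    /** near/far are the projection's clip planes; d is the stored depth in [0,1]. */
    static float linearizeDepth(float d, float near, float far) {
        float ndcZ = 2.0f * d - 1.0f; // [0,1] -> [-1,1]
        return (2.0f * near * far) / (far + near - ndcZ * (far - near));
    }

    public static void main(String[] args) {
        float near = 0.1f, far = 100.0f;
        // Sanity checks: depth 0 maps back to the near plane, depth 1 to the far plane.
        System.out.println(linearizeDepth(0.0f, near, far)); // ~0.1
        System.out.println(linearizeDepth(1.0f, near, far)); // ~100.0
    }
}
```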


I don't know what exact stenciling method you use, but the one from Killzone 2 is pretty straightforward; lighting is done in two passes: first mark all pixels in front of all the light volumes, then compute lighting on those marked pixels. Say, for point lights, it would be two draw calls in total, with instancing for example.


If you're okay with limiting yourself to at most 255 lights per pixel (and you're not using the rest of the stencil bits), you could use a different trick: instead of marking each pixel, increment the stencil, then in the second pass decrement it and test for stencil values greater than 0. That way you can save some computation where lights overlap (from the camera's perspective).
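A toy 1-D CPU model of the counting idea, not real GL code; the light ranges and screen width are made up for illustration. It also shows where the 255-light cap comes from: a standard stencil buffer is 8 bits per pixel.

```java
// Toy 1-D model of the stencil-counting trick (not real GL code): pass 1
// increments a stencil cell wherever a light volume covers a pixel; pass 2
// then only shades pixels whose stencil is > 0. An 8-bit stencil buffer is
// why the count of overlapping lights per pixel caps out at 255.
public class StencilCountModel {
    /** Pass 1 of the trick: increment a stencil cell under every covering light. */
    static int[] incrementPass(int width, int[][] lights) {
        int[] stencil = new int[width];
        for (int[] light : lights)
            for (int x = light[0]; x < light[1]; x++) // light covers [from, to)
                stencil[x]++;
        return stencil;
    }

    /** Pass 2: count how many pixels the lighting shader would actually touch. */
    static int litPixels(int[] stencil) {
        int lit = 0;
        for (int s : stencil) if (s > 0) lit++;
        return lit;
    }

    public static void main(String[] args) {
        int[][] lights = {{2, 5}, {4, 7}};      // two overlapping screen-space ranges
        int[] stencil = incrementPass(10, lights);
        System.out.println(litPixels(stencil)); // union of 2..4 and 4..6 -> 5 pixels
        System.out.println(stencil[4]);         // pixel 4 is under both lights -> 2
    }
}
```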


 I think compressing this to 3rts is do-able, reconstruction position from z + screen space, compressing normals to 2 channels, etc, but am worried that this will be a wasted effort if it increases the pixel shader complexity.

Often the issue is not number crunching but jumping through hoops in memory to sample various textures. Thin G-buffer -> fewer memory trips.
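On "compressing normals to 2 channels": octahedral mapping is one common choice (not necessarily what anyone in the thread used). A sketch of the math in Java; in practice both functions would live in GLSL.

```java
// Sketch of octahedral normal encoding, one common way to pack a unit normal
// into two G-buffer channels. Encode maps the unit sphere onto a [-1,1]^2
// square; decode inverts it and renormalizes.
public class OctNormal {
    static float signNonZero(float v) { return v >= 0.0f ? 1.0f : -1.0f; }

    /** Unit normal (x,y,z) -> two channels in [-1,1]. */
    static float[] encode(float x, float y, float z) {
        float invL1 = 1.0f / (Math.abs(x) + Math.abs(y) + Math.abs(z));
        float px = x * invL1, py = y * invL1;
        if (z < 0.0f) { // fold the lower hemisphere over the diagonals
            float ox = (1.0f - Math.abs(py)) * signNonZero(px);
            float oy = (1.0f - Math.abs(px)) * signNonZero(py);
            px = ox; py = oy;
        }
        return new float[]{px, py};
    }

    /** Two channels -> unit normal. */
    static float[] decode(float px, float py) {
        float z = 1.0f - Math.abs(px) - Math.abs(py);
        float x = px, y = py;
        if (z < 0.0f) { // unfold the lower hemisphere
            x = (1.0f - Math.abs(py)) * signNonZero(px);
            y = (1.0f - Math.abs(px)) * signNonZero(py);
        }
        float len = (float) Math.sqrt(x * x + y * y + z * z);
        return new float[]{x / len, y / len, z / len};
    }

    public static void main(String[] args) {
        float[] e = encode(0.57735f, 0.57735f, -0.57735f);
        float[] n = decode(e[0], e[1]); // round-trips to the original normal
        System.out.println(n[0] + " " + n[1] + " " + n[2]);
    }
}
```

The round trip recovers the normal to float precision; even quantized to 16 bits per channel it generally beats storing x/y and reconstructing z, which loses the sign of z.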


EDIT: And I insist the text editor should eat a bag of dicks for still messing up the quote blocks. Here is the thread where I reported this issue.

#5209889 What version of opengl should i learn?

Posted by TheChubu on 10 February 2015 - 05:15 PM

my graphics card supports opengl 3.0

What's your graphics card?


What about that, is it that difficult? Are there any alternatives to doing that?


Kinda. Thing is, do you want to learn to do cool stuff or not?


EDIT: The text editor should eat a bag of dicks for messing up the quote blocks AND adding whitespace out of nowhere.


Here is the thread where I reported the editor issue: http://www.gamedev.net/topic/665337-editor-messes-up-quote-blocks/

#5209705 ASTC compressed textures on normal OpenGL

Posted by TheChubu on 09 February 2015 - 06:04 PM

Look here: http://www.g-truc.net/project-0034.html#menu


Grab the January 2015 matrix PDF, ASTC isn't supported on desktop. So DXT all the things.

#5209508 Article suggestions

Posted by TheChubu on 08 February 2015 - 05:42 PM

From the engine building standpoint I often find the following thing: I don't know what I'm designing for exactly.


See, when you have little experience building actual games, it's kinda hard to say "well, I'll start with the rendering system", because you have no idea how exactly such a system will be used in the future. You can have a vague idea of what you want, but since you haven't implemented those parts yet, you simply lack the knowledge to identify all of the concrete requirements the subsystem must fulfill to be useful.


In that regard I often end up refactoring a lot of code simply because, at the time of writing it, I didn't imagine my current use case would need feature XYZ, or would need to do thing ZYX in a different way.


A guide on what you would normally expect of each subsystem would be nice. My most recent example is a physics system: I had to dig out an old Gamasutra article to find out what kind of things a physics subsystem would need to offer to the rest of the engine for it to be useful.


Having that kind of insight into what you might need from certain parts of the engine in the future is valuable knowledge, since those requirements are usually discovered by experience or trial and error, which are at the very least time consuming.

#5209445 Programming

Posted by TheChubu on 08 February 2015 - 12:20 PM

That's what I'm saying.  And it's not like I care.  It doesn't matter, except I don't want to insult anyone.  
Oh don't worry, she doesn't mind.

#5209391 calculating z coordinate of camera

Posted by TheChubu on 08 February 2015 - 05:44 AM

I don't see how just drawing a fullscreen quad (with simple hardcoded screen space coordinates in a shader) and sampling from the texture wouldn't be enough.

#5209281 Article suggestions

Posted by TheChubu on 07 February 2015 - 11:20 AM

* OpenGL and Java - It is such a pain to get this working.  Mac, Windows, Linux, different versions, jar files and native libraries.  I think I spent two days trying to get a 3D display in our app at work.  Then I got home and found out I used the wrong GL interface and it wouldn't run on my Mac.

Really? I found LWJGL much easier to use than GLEW, freeGLUT and others.


Just download the lib, link the natives in the IDE, and create a window (LWJGL 2 uses its own Display class, LWJGL 3 uses GLFW 3 to handle windowing). Got it working on Windows and Linux. No idea about OS X, but it didn't support OpenGL 3.3 for a long time so I didn't bother much.


The only issue I had was an actual platform-specific one: you have to ask for 32 bit color on Windows and 24 bit color on Linux for the default framebuffer.


You can also control which native libraries get loaded depending on the OS the user is running: LWJGL uses Java's Properties API to fetch a specific path if it's specified.
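A sketch of what that looks like. The property name (org.lwjgl.librarypath) is the one LWJGL checks for its natives; the natives/<os> directory layout is just an example I made up, use whatever layout your build produces.

```java
import java.io.File;

// Sketch of per-OS native loading via LWJGL's system property. The property
// name (org.lwjgl.librarypath) is real; the natives/<os> folder layout is a
// made-up example.
public class NativeSetup {
    /** Picks a natives subfolder from the os.name system property value. */
    static String nativesDirFor(String osName) {
        String os = osName.toLowerCase();
        if (os.contains("win")) return "windows";
        if (os.contains("mac")) return "macosx";
        return "linux";
    }

    public static void main(String[] args) {
        String dir = "natives/" + nativesDirFor(System.getProperty("os.name"));
        // Must run before any LWJGL class is loaded, or the default lookup wins.
        System.setProperty("org.lwjgl.librarypath", new File(dir).getAbsolutePath());
        System.out.println(System.getProperty("org.lwjgl.librarypath"));
    }
}
```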

#5208534 CoreCLR (.NET Core) in GitHub right now

Posted by TheChubu on 03 February 2015 - 09:56 PM

Well this is fun, here: https://github.com/dotnet/coreclr

#5208225 SDL2 and Linux [RANT]

Posted by TheChubu on 02 February 2015 - 11:24 AM

I think it's a little silly to be angry about a product which was provided to you to use for free.
Free software isn't above criticism. That you think you have to pay for something before you can criticize it tells me more about you than about how silly the idea might be.

#5207954 Problem with FloatBuffer Object as a paramerer

Posted by TheChubu on 31 January 2015 - 04:34 PM

But this smart person would not highlight (quote) the exact flaws directly in 1 or 2 simple line's effort and show how to correct them (and thats the problem)

Or you could, you know, do the reasonable thing and ask: "I don't understand why you say I'm copying references needlessly, could you specify what you mean?" Making a thread doesn't automatically make you worthy of anyone's time. Which means you're the arrogant one here, in case the point didn't get across.


And you can easily Google Java's code conventions to see what he means.


Anyway, enough of this, I see a problem here:




Don't set the position back to zero; put your data in the buffer, then call buffer.flip(). Buffers internally hold a position and a limit, and the OGL functions read from the position up to the limit to know how much data there is. Right now, after your puts, the position sits at the end of the data with nothing remaining to read, since you never called flip() (which sets the limit to the current position and rewinds the position to zero).
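You can see the effect with plain NIO, no GL context needed; the class name here is just for illustration.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// What flip() actually does: LWJGL's GL bindings read from position up to
// limit (i.e. remaining()), so forgetting flip() after put() means the GL
// call sees zero elements.
public class FlipDemo {
    /** Returns remaining() right after put() and again after flip(). */
    static int[] remainingBeforeAndAfterFlip() {
        FloatBuffer buf = ByteBuffer.allocateDirect(4 * Float.BYTES)
                                    .order(ByteOrder.nativeOrder())
                                    .asFloatBuffer();
        buf.put(new float[]{1f, 2f, 3f, 4f});
        int before = buf.remaining(); // position == limit == 4: nothing left to read
        buf.flip();                   // limit = old position, position = 0
        int after = buf.remaining();  // 4: GL would now see all four floats
        return new int[]{before, after};
    }

    public static void main(String[] args) {
        int[] r = remainingBeforeAndAfterFlip();
        System.out.println(r[0]); // 0 -> a GL upload here would copy nothing
        System.out.println(r[1]); // 4
    }
}
```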


Moreover, you're probably going to start leaking memory; Google around for direct buffers in Java (Android, whatever) and how to free them. They're memory outside the JVM's managed heap, and they're not guaranteed to be freed at any specific time, so it's best if you free them manually in a timely manner.


EDIT: And the editor fucked up the quotes again, *sigh*

#5207756 LGPL ugliness and LZMA

Posted by TheChubu on 30 January 2015 - 03:44 PM

You also need to allow the user to be able to drop in a new version of the lib and the intention to be for it just to work with the new version. Usually this isn't technically possible without recompiling as open source authors don't really think with backwards binary compatibility in mind.
That's why the reverse-engineering part is there: if a drop-in replacement doesn't work, the user needs to be able to reverse engineer the application to make it work. It doesn't demand that you keep it compatible forever.

#5207751 LGPL ugliness and LZMA

Posted by TheChubu on 30 January 2015 - 03:21 PM

What's wrong with LGPL?

Technically the annoying part is that the LGPL requires you to allow the reverse engineering of your software for the purpose of switching out the LGPL library. Now, how much reverse engineering would be considered "enough" for that? That's the tricky part.


You'd need to architect your code so the boundaries between the LGPL lib and the rest of the application are well defined and publicly documented. Otherwise people can claim they were reverse engineering your application to replace the LGPL lib with another version and just happened to come across, say, your super secret map generation code, or your DRM routines.


Now, I haven't come across these things actually happening, and it's not like crackers first check whether there is any LGPL-licensed software being used before cracking an application. So I'm not going to say it's something you really need to worry about; just keep it in mind.

#5207267 Some programmers actually hate OOP languages? WHAT?!

Posted by TheChubu on 28 January 2015 - 02:44 PM

in other words, many of the things people call "typical C++ bullshit" or "typical OOP bullshit" is actually "typical Java bullshit" that's been drug into C++ by recent grads or Java refuges who don't know how to write idiomatic C++ and so write idiomatic Java in C++ instead.
Now that might be just me but it sounds awfully like a "dey took our jibs!" reasoning there. Must be those filthy Java programmers, et cetera. Also you seem to assume colleges teach "idiomatic Java", I'm going to tell you they don't. They teach generic OO concepts, often badly, no matter the language they end up actually using. 


I'd say that the main reason few people write "idiomatic" C++ is that C++ is an impenetrable mess, rather than other languages having spoiled your programmers. Moreover, "idiomatic C++" that takes advantage of all the features of the language guarantees the codebase will become an impenetrable mess. There is no such thing as "idiomatic C++"; you can only be as idiomatic as the subset of features your project limits itself to allows, and doing anything else ends in madness and despair.


Also, it turns out that many of your so-called C++ "idioms" are actually beneficial in Java and in any language: avoid heap allocations, avoid virtual function calls*, branching has a cost, pointer indirection has a cost, and so on. I agree about exceptions, they're just annoying; then again, I'd say exceptions are a big thing in "enterprisey" software regardless of the language.


Now if these people don't even know generic beneficial idioms that would help them no matter the language, then I don't see why you would put that on Java's shoulders.


*Although this one is specific to OO languages. There are also Java-specific considerations here: the JIT can inline a virtual call if it only sees one or two concrete implementations.

#5206809 FBO only renders to the first target

Posted by TheChubu on 26 January 2015 - 07:32 PM

They aren't relative to each other for some reason in lwjgl
They are, from what I've seen in LWJGL 2.9.2 and LWJGL 3. They take the values directly from the OGL headers.


This is from LWJGL 3:

GL_COLOR_ATTACHMENT0 = 0x8CE0, // That's 36064
GL_COLOR_ATTACHMENT1 = 0x8CE1, // 36065
GL_COLOR_ATTACHMENT2 = 0x8CE2, // 36066
... etc
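Since the enums are consecutive in the headers, attachment i is just GL_COLOR_ATTACHMENT0 + i; a quick standalone check of that arithmetic (constants copied from the GL headers, no LWJGL needed to run it):

```java
// The color attachment enums are consecutive in the GL headers, so you can
// compute attachment i as GL_COLOR_ATTACHMENT0 + i instead of naming each one.
public class Attachments {
    static final int GL_COLOR_ATTACHMENT0  = 0x8CE0; // 36064
    static final int GL_COLOR_ATTACHMENT15 = 0x8CEF; // last one in GL 3.x headers

    static int colorAttachment(int i) {
        return GL_COLOR_ATTACHMENT0 + i;
    }

    public static void main(String[] args) {
        System.out.println(colorAttachment(2) == 0x8CE2);                 // true: 36066
        System.out.println(colorAttachment(15) == GL_COLOR_ATTACHMENT15); // true
    }
}
```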