Member Since 19 Dec 2007
Offline Last Active Apr 19 2016 12:32 PM

Posts I've Made

In Topic: "Getters" and "Setters" - do you write them if you don't...

09 October 2012 - 07:36 AM

I often write getters and setters. Not every time I declare a variable, of course.

First of all, I think we should not associate getters and setters so tightly.

A setter is potentially a lot more dangerous than a getter. On the other hand, I think a getter can be convenient to implement, especially if you are subclassing.

For this reason in my code a setter is often declared as protected.

While it's true that a constant might be enough, and I totally understand the point behind YAGNI, for any non-trivial project I usually have data loaded from an external file. I want to be able to modify the data file and restart, with no recompilation needed.

In that case a public getter and a protected setter are a useful combination.

The point is that if you have been programming for years, you likely have self-contained components that can be reused across different projects. So it makes sense, from my point of view, to take a component I already have in my toolbox, create new classes according to my needs, tweak the data files, and see what happens.
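To make the combination concrete, here's a minimal C++ sketch (the class and member names are made up for illustration): a public getter anyone can call, a protected setter only subclasses can reach, with the value coming from a data file at startup.

```cpp
// Hypothetical tunable value loaded from a data file at startup;
// subclasses may adjust it, outside code may only read it.
class Enemy {
public:
    float maxSpeed() const { return maxSpeed_; }   // public getter

protected:
    void setMaxSpeed(float v) { maxSpeed_ = v; }   // protected setter:
                                                   // only Enemy and its
                                                   // subclasses can write

private:
    float maxSpeed_ = 0.0f;
};

// A subclass applies the value it parsed from the data file;
// tweaking the file and restarting needs no recompilation of callers.
class FastEnemy : public Enemy {
public:
    explicit FastEnemy(float speedFromDataFile) {
        setMaxSpeed(speedFromDataFile);
    }
};
```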


In Topic: Is using a debugger lazy?

19 September 2012 - 07:59 AM

Assuming the professor has some knowledge, and assuming he has met many unskilled people, those who try to code "randomly"... well, he might have a point.

I mean, if you write something and then debug by default because there's surely something wrong you missed in your own code, then you're better off without a debugger, just learning how a while or for loop works.

But in the end, from a broader point of view, a printf is also debugging functionality. Debugging is also mentally parsing your code, trying to figure out what's wrong with your algorithm.

But developing any non-trivial software in 2012 requires good debugging tools.

Those who fail to realize it either live 40 years in the past or have never written (and shipped) any non-trivial product.


In Topic: android memory restrictions

19 September 2012 - 04:15 AM

Just a few additions. Certain apps, like live wallpapers, can have TWO instances running at the same time. This happens when your live wallpaper is already set and the user decides to change its settings: to do that, he has to go into the live wallpapers menu and select the wallpaper.
When he does, another instance of your app is created as the preview starts, effectively creating a second instance before the user can even press "settings".
This happens because the "engine generator" in a live wallpaper is unique (technically, it is the wallpaper service), so multiple instances are generated sharing the same heap space.

In situations like that, if you have a decent screen resolution and you are loading multiple images, you can run into problems (a single 1200x800, 32-bit image is 3.5-4 MB). In such cases you might want to give Bitmap.recycle() a try, to quickly free heap space before loading another image...
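The arithmetic behind that figure, as a quick sketch (plain C++, not the Android API; 4 bytes per pixel for a 32-bit bitmap):

```cpp
#include <cstddef>

// Uncompressed heap cost of a 32-bit bitmap: width * height * 4 bytes.
constexpr std::size_t bitmapBytes(std::size_t width, std::size_t height) {
    return width * height * 4;   // 4 bytes per ARGB_8888 pixel
}

// 1200x800 at 32 bpp: 3,840,000 bytes, roughly 3.7 MB of heap
// gone before you even decode a second image.
static_assert(bitmapBytes(1200, 800) == 3840000, "one screen-sized image");
```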


In Topic: Indoor rendering

03 November 2011 - 01:12 AM

for outdoor rendering, you are usually limited by 1.: with a long view range you see tons of objects, so it's not really important that they have perfect shading; in addition, most parts are lit just by the sun, and usually a big part of the frame is covered by sky.

for indoor rendering, you will rather be limited by 2.: you cannot throw 5000 individual draw calls at the level in every room, as that would mean about 200 pixels per object, or 16x12. it would be like building your level out of tiny bricks.

so, while you are right about those two limitations, what you need to optimize for is very context dependent.

I agree; I was posting those as general optimization rules.

that's how it was before the year 2000. since about the GeForce 256 (GeForce 1) we stopped touching individual polygons; it's just way faster to push a whole, maybe hidden, object than to update thousands of polys on the CPU side. per-pixel graphics was too slow at that time to do anything fancy anyway (even one fullscreen bump-mapping effect would drop your framerate below 30).

Exactly, rendering potentially hidden geometry is faster on modern GPUs.

the problem with optimizing for a theoretical situation is that you cannot know what would help and what would make things worse. 'optimizing' is just exploiting a special context; it's trying to find cheats that nobody will notice, at least not as visual artifacts. while your ideas are valid, they might not change the framerate at all in the real world; they might make it faster, just as everything might become slower.

I think, if I had 200+ draw calls for an indoor scene, I'd probably not care about draw calls at all. if I'm draw-call-bound with just 200, something must be seriously wrong.

Well, the problem isn't that 200 draw calls are limiting; my point is that if, in a simple scenario like that, there's a solution that submits 10% of the draw calls, it just looks like a good solution.

considering this, the situation is way simpler. you might observe that the geometry is not your problem, and neither is the actual surface shading; you will probably be limited by lighting and by other deferred passes (e.g. fog, decals, post-processing like motion blur).

so, it might be smart to go deferred like you said. you don't need a z-pass for that; you'll probably do best with simple portal culling in combination with scissor rects and depth-bounds checks.

now all you want to optimize is finding the minimal area your lights have to touch, to modify as few pixels as possible, and you need to solve a problem (nearly) completely unrelated to BSP/portals/PVS/etc.

you might want to:

- portal-cull lights and/or use occlusion culling
- do depth carving using light volumes (similar to Doom 3's shadow volumes)
- fuse deferred + light-indexed rendering, similar to Frostbite 2
- reduce resolution like most console games do, maybe just for special cases, e.g. distant lights, particles, motion blur (check out the UDK wiki), with some smart upsampling
- try scissor and depth-bounds culling per light
- decrease quality based on frame time (e.g. fewer samples for your SSAO)
- add a special 'classification' for lights, to decide how to render which type of light, under what conditions, with which optimizations; e.g. it might make sense to batch a lot of tiny lights into one draw call, handling them like a particle system, and to do depth carving just on nearby lights, as distant lights might be fully limited by the carving itself and a simple light object would do the job already.
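The 'classification' idea in the last point could be sketched like this; the thresholds and path names are invented for illustration only:

```cpp
// Hypothetical light classification: pick a render path per light
// from its rough projected size, radius divided by camera distance.
enum class LightPath { Batched, Simple, DepthCarved };

LightPath classifyLight(float radius, float distanceToCamera) {
    float dist = (distanceToCamera > 0.0f) ? distanceToCamera : 0.001f;
    float projected = radius / dist;        // crude screen-coverage proxy
    if (projected < 0.01f)
        return LightPath::Batched;          // tiny/distant: batch like particles
    if (projected < 0.25f)
        return LightPath::Simple;           // medium: plain light volume
    return LightPath::DepthCarved;          // near/big: worth depth carving
}
```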

what I basically want to show is that your scene handling is nowadays not as big a deal as it was 10 years ago. you still don't want to waste processing power, of course, but you won't implement a BSP system to get a perfect geometry set; I've even stopped using portal culling nowadays. it's rather important to have a very stable system that is flexible and doesn't need much maintenance, while giving good 'pre-processing' for the actually expensive stage nowadays, which is the rendering. (as an example, "Resistance" used just some kind of grid with a PVS; no portals, BSP, etc.)

and like you said, there are two points, draw calls and fillrate, and you deal with them mostly after the pre-processing (culling). you have a bunch of draw calls and you need to organize them in the most optimal way you can, not just by sorting or batching. for indoor you'll probably spend 10% of your time generating the g-buffer and 10%-30% creating shadow maps; the majority of the frame time will be spent on lighting and post-processing effects.

Yes, that was exactly my point. In my experience static (and opaque) geometry is just submitted to the g-buffer (I go deferred) with no spatial-structure traversal (I can generate an octree if needed, but just submitting the geometry usually turns out to be faster). Then all the optimizations are about dynamic objects and lights/shadows. I also spent some time optimizing shadows when it comes down to static vs. dynamic geometry, shadow-map resolution, distance, etc.

And since all my shaders are assembled and generated on the fly according to the effects each material requires, I can also generate simpler shaders if there's not enough horsepower available.
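A minimal sketch of what I mean by assembling shaders on the fly; the feature flags and the fallback rule here are simplified illustrations, not my actual pipeline:

```cpp
#include <string>

// Hypothetical material feature flags driving shader generation.
struct MaterialFeatures {
    bool normalMap = false;
    bool specular  = false;
};

// Emit #defines from the flags; a low-end fallback simply drops
// the expensive features and reuses the same shader body.
std::string buildShaderSource(const MaterialFeatures& f, bool lowEndFallback) {
    std::string src = "#version 120\n";
    if (f.normalMap && !lowEndFallback) src += "#define USE_NORMAL_MAP\n";
    if (f.specular  && !lowEndFallback) src += "#define USE_SPECULAR\n";
    src += "// ...common shader body appended here...\n";
    return src;
}
```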

The only reason why I still use a spatial structure is for scenes that heavily use transparent static objects. I use a BSP, but that was a very specific scenario in which I had to use the engine to render a real-world building that was 70% glass, with different colors and opacity levels. In that case I needed a perfect geometry set and perfect sorting, so I went for a BSP. Of course it's a performance killer, but I couldn't come up with a better solution at the time.

In Topic: Indoor rendering

02 November 2011 - 07:20 AM

In which way can any acceleration structure render something faster than that?

You'll have to profile whether the benefit of the reduced draw call count outweighs the penalty from the vertex buffer updates needed to dynamically merge your visible set of objects. My gut feeling is that draw calls are becoming less expensive as CPU speeds go up, while updating large vertex buffers can be costly, so there might be "hiccups" as you, for example, rotate your camera and the visible set changes.

For reducing overdraw, you can also do something like setting a threshold distance where you render the closest objects front-to-back without state-sorting, then switch to state-sorting for the rest.
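That threshold scheme could be sketched like this; a minimal illustration, with the DrawItem fields made up:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical draw-list entry: sort the closest items front-to-back
// for early-z rejection, then sort the rest by material to minimize
// state changes.
struct DrawItem { float distance; int materialId; };

void sortForOverdraw(std::vector<DrawItem>& items, float threshold) {
    // Move near items to the front of the list.
    auto mid = std::partition(items.begin(), items.end(),
        [&](const DrawItem& d) { return d.distance < threshold; });
    // Near part: front-to-back by distance.
    std::sort(items.begin(), mid,
        [](const DrawItem& a, const DrawItem& b) { return a.distance < b.distance; });
    // Far part: state-sorted by material.
    std::sort(mid, items.end(),
        [](const DrawItem& a, const DrawItem& b) { return a.materialId < b.materialId; });
}
```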

Octrees can also be used without splitting objects at all; this is commonly accomplished with so-called "loose octrees", where objects can stick out halfway from the octree cell they belong to.
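The loose-octree fit rule can be written as a single predicate. This is just the geometric idea, assuming a looseness factor k and cell half-size h; a real tree also checks that the object's center lies inside the tight cell bounds:

```cpp
// With looseness factor k (typically 2), a cell of half-size h has
// "loose" bounds extending to k*h. An object centered anywhere in the
// tight cell fits the loose bounds if h + radius <= k*h, i.e. if
// radius <= (k-1)*h. For k = 2 that means radius <= h: the object may
// stick out halfway, exactly the "halfway" rule described above.
bool fitsLooseCell(float objectRadius, float cellHalfSize, float looseness = 2.0f) {
    return objectRadius <= (looseness - 1.0f) * cellHalfSize;
}
```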

Well, my idea is to logically divide your level into object types.

Let's consider 4 different object types:

- static, unsorted (a static indoor level)
- static, sorted (static translucent objects)
- not static, unsorted (a character)
- not static, sorted (a moveable glass)

Of course my approach is intended only for static objects that don't need sorting. In that case there's no need to update the vertex buffer, no special work to perform when the camera moves, etc. Yes, it's brute force and inelegant, but I don't see how an octree (loose or standard) can be faster than just merging the static geometry.

And even if your geometry won't fit into a single vertex buffer, you can still use multiple VBs containing geometry grouped by the same criteria (for example: materials 1 to 50 go into the first vertex buffer, materials 51 to 70 into the second, etc.).
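That grouping is trivial to express; a minimal sketch, with the material ranges made up to match the example:

```cpp
// Hypothetical bucketing: map a material ID to the vertex buffer that
// holds all static geometry using it, so geometry sharing a buffer can
// be merged and drawn with few calls.
int vertexBufferIndexForMaterial(int materialId) {
    if (materialId <= 50) return 0;   // materials 1..50  -> first VB
    if (materialId <= 70) return 1;   // materials 51..70 -> second VB
    return 2;                         // everything else  -> overflow VB
}
```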

As for transparency, unless you use order-independent transparency, a portal/octree isn't enough to accurately resolve sorting, which in theory should be performed per polygon.
BSPs can be useful in this case.
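For reference, the back-to-front BSP traversal that makes this work is short; a minimal sketch with a simplified node payload (real nodes store polygon lists, not a single ID):

```cpp
#include <vector>

// Each node splits space with a plane. Visiting the far side first,
// then the node's polygons, then the near side yields a strict
// back-to-front order for any camera position.
struct Plane { float a, b, c, d; };            // a*x + b*y + c*z + d = 0
struct BspNode {
    Plane plane;
    int polygonId;                             // simplified payload
    BspNode* front = nullptr;
    BspNode* back  = nullptr;
};

float side(const Plane& p, float x, float y, float z) {
    return p.a * x + p.b * y + p.c * z + p.d;
}

void backToFront(const BspNode* n, float cx, float cy, float cz,
                 std::vector<int>& order) {
    if (!n) return;
    if (side(n->plane, cx, cy, cz) >= 0.0f) {  // camera on front side
        backToFront(n->back, cx, cy, cz, order);   // far half first
        order.push_back(n->polygonId);
        backToFront(n->front, cx, cy, cz, order);  // near half last
    } else {                                   // camera on back side
        backToFront(n->front, cx, cy, cz, order);
        order.push_back(n->polygonId);
        backToFront(n->back, cx, cy, cz, order);
    }
}
```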

My point being, as far as I can see, the OP has a good portal prototype working on static geometry, which might be more useful for objects than for the level itself.

Maybe I am missing something... :unsure: