Hodgman

Member Since 14 Feb 2007

#5207377 Some programmers actually hate OOP languages? WHAT?!

Posted by Hodgman on 29 January 2015 - 02:42 AM

Even if you're going to stick with a language like C++, it's immensely beneficial to learn many languages of different styles, with their own paradigms and idioms.

e.g. My personal C++ style involves OOP's focus on encapsulation and invariant-enforcement, Functional's focus on immutable state, Procedural's ability to KISS with transparent data structures and Flow/DOD's focus on transform graphs. :lol:
You'll find that your idea of good code will drastically change with new experiences :D


#5207370 Is there a way to draw super precise lines?

Posted by Hodgman on 29 January 2015 - 02:07 AM

If you want thicker/softer lines, the foolproof solution is to render a quad that bounds every pixel-center that could possibly need shading, and then in the pixel shader, derive an opacity/alpha value from that pixel's distance to the mathematical line.
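For illustration, here's a minimal sketch of that distance-to-line coverage idea, written as plain C++ rather than shader code (the Vec2 type, the function names and the one-pixel fade width are all just assumptions for the example):

#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

// Distance from point p to the segment a-b.
float DistanceToSegment(Vec2 p, Vec2 a, Vec2 b)
{
    Vec2 ab = { b.x - a.x, b.y - a.y };
    Vec2 ap = { p.x - a.x, p.y - a.y };
    float t = (ap.x * ab.x + ap.y * ab.y) / (ab.x * ab.x + ab.y * ab.y);
    t = std::clamp(t, 0.0f, 1.0f);                      // clamp to the segment
    Vec2 closest = { a.x + ab.x * t, a.y + ab.y * t };
    float dx = p.x - closest.x, dy = p.y - closest.y;
    return std::sqrt(dx * dx + dy * dy);
}

// Per-pixel alpha: 1 inside the line, fading to 0 over ~1 pixel at the edge.
float LineCoverage(Vec2 pixelCentre, Vec2 a, Vec2 b, float halfWidth)
{
    float d = DistanceToSegment(pixelCentre, a, b);
    return std::clamp(halfWidth + 0.5f - d, 0.0f, 1.0f);
}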


#5207323 Questions about D3D9 and D3D11

Posted by Hodgman on 28 January 2015 - 06:46 PM

With #1, you're trying to emulate D3D9's philosophy on D3D11. It's much easier to do the opposite, and emulate D3D11's philosophy on D3D9. Allow the user to create rasterizer states - on D3D11 this is a simple wrapper around the API, and on D3D9 you can make your own struct that contains values for all the same/separate render states.
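For example, a rough sketch of such a wrapper on the D3D9 side (the struct and member names are made up, and it only covers a few of the relevant render states):

#include <d3d9.h>

struct RasterizerStateD3D9
{
    D3DFILLMODE fillMode      = D3DFILL_SOLID;
    D3DCULL     cullMode      = D3DCULL_CCW;
    BOOL        scissorEnable = FALSE;
    float       depthBias     = 0.0f;

    void Apply(IDirect3DDevice9* device) const
    {
        device->SetRenderState(D3DRS_FILLMODE, fillMode);
        device->SetRenderState(D3DRS_CULLMODE, cullMode);
        device->SetRenderState(D3DRS_SCISSORTESTENABLE, scissorEnable);
        // D3DRS_DEPTHBIAS takes a float passed through a DWORD.
        device->SetRenderState(D3DRS_DEPTHBIAS, *reinterpret_cast<const DWORD*>(&depthBias));
    }
};

On D3D11 the equivalent wrapper would simply hold an ID3D11RasterizerState* created up front.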

#2 - on top of what LS pointed out above, you don't HAVE to use a multisampled swapchain; you can create a separate multisampled rendertarget, which you resolve onto the swapchain's rendertarget.
This is useful when the 3d scene should be multisampled, but the HUD/etc doesn't need it.
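For example, assuming you've already created a multisampled colour texture and fetched the swapchain's backbuffer texture (with matching size/format), the resolve step is a single call:

#include <d3d11.h>

void ResolveSceneToBackbuffer(ID3D11DeviceContext* context,
                              ID3D11Texture2D* backbufferTex,
                              ID3D11Texture2D* msaaColourTex,
                              DXGI_FORMAT format)
{
    context->ResolveSubresource(backbufferTex, 0,   // dest (non-MSAA), subresource
                                msaaColourTex, 0,   // source (MSAA), subresource
                                format);
    // The HUD etc. can now be drawn straight onto the non-MSAA backbuffer.
}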

#3 - what does the draw function do?

#4 - my personal preference is to have no in-built shaders, and allow the user to provide and choose the shader for each draw call.


#5207318 Encapsulation through anonymous namespaces

Posted by Hodgman on 28 January 2015 - 06:34 PM

Generally, if you've obtained an object via something other than new, you should release it with something other than delete.

This means that as well as your Create function, you should have a matching Destroy/etc function.

If you want to use smart pointers, you can probably then configure them to call your custom release function, rather than the global delete operator.
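For example, a minimal sketch using std::unique_ptr with a custom deleter - the Widget/CreateWidget/DestroyWidget names are hypothetical stand-ins for whatever your Create/Destroy pair actually is:

#include <memory>

struct Widget;                    // opaque type owned by the library
Widget* CreateWidget();           // assumed factory function
void    DestroyWidget(Widget*);   // assumed matching release function

struct WidgetDeleter
{
    void operator()(Widget* w) const { DestroyWidget(w); }  // not 'delete'!
};

using WidgetPtr = std::unique_ptr<Widget, WidgetDeleter>;

// Usage: WidgetPtr w(CreateWidget());  // DestroyWidget is called automatically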


#5207134 Desura is not paying me my money

Posted by Hodgman on 28 January 2015 - 04:42 AM

Do you have a copy of the contract you agreed to when you licensed your game to them?

How much money do they owe you? Is it above any thresholds for payments?


#5207098 Row v Column Majors

Posted by Hodgman on 28 January 2015 - 01:51 AM

Yet the in-memory layout of an OpenGL matrix is:

x.x x.y x.z 0 y.x y.y y.z 0 z.x z.y z.z 0 p.x p.y p.z 1
Which is a row-major layout!

No, not necessarily!

Given a matrix:
A,B,C,D
E,F,G,H
I,J,K,L
M,N,O,P

If the *memory layout* is row-major, it will be stored as:
ABCDEFGHIJKLMNOP
If the memory layout is column-major, it will be
AEIMBFJNCGKODHLP

Now leaving computers and going to pure math:
If the mathematical conventions are row-major, then the X-axis basis vector will be stored in ABC.
If the mathematical conventions are column-major, then the X-axis basis vector will be stored in AEI.

So, the data you've observed is simply using the same convention for both its memory storage and its maths.
It is either using row-major memory layout with row-major maths, or it's using column-major memory layout with column-major maths.

From the data you've presented, you can't tell which it is.
With the further information that "GL is column major" we can then deduce that it must use both column-major memory layouts and column-major maths notations.
That means that your 2d visualization is wrong. The 1d dataset you posted should actually be visualised as:
x.x y.x z.x p.x
x.y y.y z.y p.y
x.z y.z z.z p.z
  0   0   0   1
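In code, the only difference between the two memory conventions is which index varies fastest - a quick sketch:

// 'm' holds the 16 floats exactly as they appear in memory.
float RowMajorElement(const float m[16], int row, int col)
{
    return m[row * 4 + col];   // rows are contiguous in memory
}

float ColMajorElement(const float m[16], int row, int col)
{
    return m[col * 4 + row];   // columns are contiguous in memory
}

// With GL's convention (column-major memory + column-major maths), the
// X-axis basis vector is the first column: m[0], m[1], m[2].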



#5207036 Current-Gen Lighting

Posted by Hodgman on 27 January 2015 - 06:49 PM

@Hodgman, after dissecting your post, I don't think I have as much of a grasp on lighting as I thought. Before we begin, I just want to clarify:
Gouraud Shading: calculating the diffuse component of light
Phong Shading: calculating the basic specular component of light
Blinn-Phong Shading: calculating the specular component of light with account to reflection and refraction
Normalized Blinn-Phong Shading: Blinn-Phong shading with energy conservation

The kind of lighting that only uses N*L (and a diffuse color/texture) and nothing else is called Lambert / Lambertian.

Despite the name, Gouraud shading had more to do with interpolation than lighting. Basically, pre-Gouraud, you had one normal per triangle, and computed N*L per triangle - AKA flat shading.
With Gouraud shading, you compute N*L per-vertex, and then interpolate the results - AKA per vertex lighting.
With Phong shading (unrelated to Phong lighting!), you interpolate 3 vertex normal values to get a per-pixel N value, and then calculate N*L for each pixel - AKA per pixel lighting.

These days you don't often see Gouraud shading / Phong shading mentioned much. Instead, people usually just say flat/vertex/pixel shading.

Phong's specular lighting formula isn't used much any more. To produce highlights, Phong just completely made up his lighting formula, with no physical basis - just the observation that highlights appear when the reflection vector is similar to the view vector, so he used a dot product to compare those two vectors for similarity, and raised the result to a power to allow you to tune the size of the highlight: pow(dot(R,V),power).

Blinn then helped recreate this same effect based on physical principles, producing the improved Blinn-Phong specular lighting formula.
"Microfacet theory" comes up a bit here - this is the idea that we can pretend that surfaces are made up of uncountable numbers of microscopic perfect mirrors. Imagine a 2d line with lots of V's in it - the roughness/spec-power corresponds to how deep or shallow those V's are.
Blinn states that for a micro-mirror to show us a reflection, its (micro-)normal needs to be perfectly aligned with H (half way between L and V). If it's perfectly aligned like this it will reflect light to the camera, otherwise it won't -- it's a boolean/binary operation.
His formula - pow(dot(N,H),power) - is actually a probability distribution, describing what percentage of the billions of micro-mirrors within the pixel's area have this perfect alignment.
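In code, the two formulas differ only in which vectors get compared - a rough C++ sketch (the little Vec3 type and helpers are just scaffolding for the example):

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  operator-(Vec3 a)         { return {-a.x, -a.y, -a.z}; }
static Vec3  operator*(Vec3 a, float s){ return {a.x * s, a.y * s, a.z * s}; }
static float Dot(Vec3 a, Vec3 b)       { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Normalize(Vec3 v)         { return v * (1.0f / std::sqrt(Dot(v, v))); }
static Vec3  Reflect(Vec3 i, Vec3 n)   { return i - n * (2.0f * Dot(n, i)); }

// Phong: compare the reflected light direction with the view direction.
float PhongSpecular(Vec3 N, Vec3 L, Vec3 V, float power)
{
    Vec3 R = Reflect(-L, N);                              // mirror L about the normal
    return std::pow(std::max(Dot(R, V), 0.0f), power);
}

// Blinn-Phong: compare the half-vector with the normal instead.
float BlinnPhongSpecular(Vec3 N, Vec3 L, Vec3 V, float power)
{
    Vec3 H = Normalize(L + V);                            // halfway between L and V
    return std::pow(std::max(Dot(N, H), 0.0f), power);
}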

 

This is one of the first specular lighting formulas that is actually based on some simple physics, instead of being completely made up based on guesswork.

 

Yep, normalized Blinn-Phong then just extends this to add conservation of energy. 
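As a rough sketch, the normalized version just scales the same pow(N.H, power) shape so that sharpening the highlight doesn't change the total reflected energy. (power + 8) / (8*pi) is one commonly used approximation of that factor - treat the exact constant as an assumption here rather than the one true value:

#include <algorithm>
#include <cmath>

float NormalizedBlinnPhongSpecular(float NdotH, float power)
{
    const float pi   = 3.14159265f;
    const float norm = (power + 8.0f) / (8.0f * pi);   // approximate energy-conservation factor
    return norm * std::pow(std::max(NdotH, 0.0f), power);
}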
 

Does the Cook-Torrance BRDF only deal with specular lighting?

Yes.
Phong/Blinn-phong/Cook-Torrance only deal with specular lighting. Lambert only deals with diffuse lighting.
Each of these is a BRDF - a function that describes how light leaves a surface after arriving at it. "Reflectance distribution" basically means: how does the light that leaves the surface vary with viewing angle? Lambertian diffuse's BRDF is just "1.0" (or "diffuseColor") because it doesn't vary by viewing angle. Specular BRDFs on the other hand always take the view angle (V) into account within their formulas.
To make a useful BRDF for a game, you'll usually add together a diffuse BRDF and a specular BRDF -- e.g. Lambert + Blinn-Phong is the most common choice.

It turns out that basically, diffuse lighting is refraction (in an opaque surface) and specular lighting is reflection.
When light hits a surface, Fresnel's laws state how much is reflected and how much is refracted. The reflected percentage should go into your specular BRDF. The refracted percentage goes into your diffuse BRDF.
In an opaque surface, when light refracts into it, it continues to bounce around inside that surface. Some percentage of it will be absorbed (1-diffuseColor) and some percentage will manage to bounce back out of the surface (diffuseColour).
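Putting those pieces together, a simplified (monochrome) sketch of Lambert + Blinn-Phong weighted by Schlick's Fresnel approximation might look like the code below - the inputs are assumed to be precomputed dot products, and 0.04 is the usual f0 for common dielectrics:

#include <algorithm>
#include <cmath>

// Schlick's approximation of Fresnel: f0 is the reflectance at normal incidence.
float FresnelSchlick(float VdotH, float f0)
{
    return f0 + (1.0f - f0) * std::pow(1.0f - VdotH, 5.0f);
}

float LambertPlusBlinnPhong(float NdotL, float NdotH, float VdotH,
                            float diffuseColour, float f0, float specPower)
{
    float fresnel  = FresnelSchlick(std::max(VdotH, 0.0f), f0);
    float diffuse  = diffuseColour;                               // Lambert: constant w.r.t. view angle
    float specular = std::pow(std::max(NdotH, 0.0f), specPower);  // Blinn-Phong
    // Reflected fraction feeds the specular term, refracted fraction the diffuse term.
    float brdf = (1.0f - fresnel) * diffuse + fresnel * specular;
    return brdf * std::max(NdotL, 0.0f);                          // then scale by the incoming light's N.L
}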
 

You also talk about geometry visibility, and this is something I haven't gotten into yet. Is this linked with AO or occlusion querying? I haven't had the time to dive into these concepts just yet.

Cook-Torrance extends Blinn-Phong with Fresnel, Geometry and Visibility terms.
The Geometry/Visibility terms are part of the BRDF (and don't have anything to do with AO/etc), and they basically describe the microscopic self-shadowing of those tiny V shapes that the surface is made out of.
With a rough surface, the V's are very deep, so if light hits the surface side-on, then most of the interior of the V itself will be in shadow! Likewise, even if light does make it down into a V, maybe when it reflects off one side, it then bumps into the other side of the V instead of making it out the top.
These effects are subtle, but the extra darkening of rough surfaces at different viewing angles is a nice quality improvement over regular Blinn-Phong.
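Structurally, a Cook-Torrance style specular term ends up looking something like the sketch below; D, F and G stand for whichever distribution, Fresnel and geometry/visibility functions you pick (many variants exist), so treat this as the shape of the formula rather than a specific implementation:

#include <algorithm>

float CookTorranceSpecular(float D, float F, float G, float NdotL, float NdotV)
{
    // D: micro-mirror distribution (e.g. Blinn-Phong's pow(N.H, power))
    // F: Fresnel term, G: micro-scale self-shadowing/masking
    float denom = 4.0f * std::max(NdotL, 1e-4f) * std::max(NdotV, 1e-4f);
    return (D * F * G) / denom;
}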
 

What exactly is the BRDF? I know it stands for Bi-directional Reflectance Distribution Function, and I always thought they were referring to the vector operations used to compute lighting such as NdotL, reflection, refraction, etc. Then, the rendering equation is the combination of all of that

"The rendering equation" is a huge formula that describes how light bounces around a scene, from light sources to cameras.
Within the middle of this huge formula is the BRDF, which only deals with tiny surfaces. The rules for how the surface absorbs, refracts, and reflects any arriving light is enclosed in the BRDF.
The rules for how light actually arrives at the surface in the first place (shadows, ambient occlusion, etc.) are external to the BRDF.
 

For example, in the ad-hoc lighting model of last gen, ambient lighting was a constant color/texture, diffuse lighting was a color/texture multiplied by NdotL, and specular lighting was NdotL multiplied by an exponent channel in the spec map that was then multiplied by the spec map's RGB channels that were used for the mask. If I understand this correctly, the lighting equation would be:

foreach light in scene:
     frag_color += ambientFactor + diffuseFactor + specularFactor;
Then, the BRDF would be the devil in the details of how ambientFactor, diffuseFactor and specularFactor were calculated:
ambientFactor = ambientColor;
diffuseFactor = diffuseColor * dot(n, l);
specularFactor = specularMask * dot(h, l) * dot(n, l);
This doesn't account for reflection/refraction, but the factors being calculated above would be the BRDFs. Is this correct?

 

Firstly, this kind of ambient lighting is a complete hack :D To make sense of it, we can say that there's always a magic light that's aligned with the surface normal (so that N*L==1 and can be ignored) and that this light only ever causes refractions (no specular reflections).
Ignoring the magic ambient light, the BRDF can be split into diffuse sub-BRDF and the specular sub-BRDF -- the former is simulating refraction -> internal scattering -> re-emission, and the latter is simulating reflection.
Ideally, you'd use Fresnel's law to weight the two sub-BRDFs.
 

Is specMask a vector that's treated as a color where the RGB components are set to 0.04 when dielectric?

If you're using the same shader for metals and dielectrics, yes.
If you're using forward shading, you might have two different lighting shaders -- in that case you could optimize the dielectric one by only using a float instead of float3.
 

Was energy conservation common back in the ad-hoc days, or is this a new-ish take on specular lighting?

It started to gain popularity maybe half way through the 360/PS3 generation. The later games on those systems with amazing lighting might be making use of slightly more physically based techniques, including better energy conservation.

IMHO it's much easier to paint spec-power/spec-mask maps when using the normalized version!
 

A few years ago, I used to think specular lighting was just a shininess factor. I learned that specular was much more than that... It's actually more about the very reflectance of light than just "highlights". It also plays an important role with how reflections work with environment maps, right?

Old-school environment maps are just another hack like "ambient lights" :) They're meant to represent the specular lighting on an object, coming from a complex light source (e.g. bounced lighting from a scene).
These days, IBL is the modern replacement for environment maps -- you can think of them pretty much the same way, except they usually account for roughness now (rougher surfaces get blurrier reflections).




#5206903 Some programmers actually hate OOP languages? WHAT?!

Posted by Hodgman on 27 January 2015 - 06:38 AM

Honestly, just reading through the thread and all the links provided by you guys, I'm actually starting to get anxious. It's like when I was a kid and some teacher starts screaming at me for doing something wrong, when I didn't know it was wrong, and instead of teaching me that this is the wrong way of doing things, they just keep screaming at me and telling me how stupid I am (probably not a good teacher to learn from, but still).

After reading this thread, I'm honestly more scared to share my source code with the public than ever. Because I might be doing something stupid and someone will just come along and start bashing me for writing shit code. Problem is, I will never learn the right way to code from the wrong way if I don't share my code and no one explains it to me.

TL;DR - unless you've been telling everyone that you're the greatest software engineer who's ever lived, you should never feel like that.

Often this is something that goes away with age, but it's also a cultural issue - the importance placed on protecting one's ego.
E.g. In America it's common for people to not want to admit they don't know something - from lying to your friends, saying you've heard of [obscure band]... to dancing around your co-workers' questions with qualified stalling like:
"Now I would say... that this system X probably works similar to system Y"
instead of being straightforward with:
"I haven't used system X. Greg wrote it. You should ask him, or I can look at it with you to try and figure it out".

It's ok to not know things. It's good to know that you don't know things!
"Ignorant" is not an insult, except when it's used to mean that you're choosing not to learn.
If you discover that you're ignorant of something, and then follow up with more learning, then you're now a better person.
Without first admitting and accepting that you need to learn, this self-improvement is impossible.

The internet (and professional software studios) are amazing as they can surround you with lots of really knowledgeable programmers. That's an amazing resource that you can use to become an amazing programmer yourself.
The only catch is that you've got to first admit that you're a shitty programmer. Even now, with over a decade of making games, I'm willing to humble myself before other programmers if they're more knowledgeable in an area than me. Hell, even if they're a newbie student who knows some neat trick that I don't, it's worthwhile letting them momentarily be the master and giving them your full attention as an eager-to-learn student.

The second part is learning to accept criticism. The Wikipedia page for that word is a good start in explaining it - that a critique is not a personal attack and can/should be objective. If you get defensive when provided with a critique of your work, then you're choosing to be a bad student, choosing permanent ignorance over learning.
Sometimes people are bad at providing critique / some people are lacking in tact. These people's critiques may sound offensive, may sound like they're attacking you personally. Just remember that they're really only there to show you where you can continue learning. If they've done this in a seemingly hurtful way, it is not your problem to deal with - it's actually their own problem, their own bad interpersonal skills. They need to learn how to interact with people better ;)
Always assume good faith and mentally translate people's words into the best meaning before taking them to heart. This can mean translating "OMG U SUCK! Why is this code 100 lines?" to "You could have done this with less lines. Ask me how." :lol:

Also, when people suck at communication like this, keep in mind that (ironically) it's usually tactful not to tell them, even though without this feedback, they themselves are being robbed of an opportunity to learn/improve their communication skills :lol:


#5206884 Some programmers actually hate OOP languages? WHAT?!

Posted by Hodgman on 27 January 2015 - 03:58 AM

In my experience, people who "hate on" OOP fit into these groups:

1) The old guru. You've been programming the same way since 1983 and you resist change. You're proficient in procedural C, or FORTH or FORTRAN and you see no reason to learn anything else.

2) The elitist. You use your choice of language as a filter to form an exclusive club. You feel that 10% of people who program in X are bad programmers, but 90% of people who program in Y are bad programmers. Therefore you must shun Y, in order to keep the bad programmers away from your club.

3) The enterprise defector. You've been raised on Java in a large company, and have been forced to write one too many SolutionFactoryAdaptorPatternVisitorFlyweightProxyFactoryProviders and other wtfs, with coworkers who buy into this tripe, probably just for political gain within the enterprise.

4) The tried it once-er. You tried learning C++/Java/SmallTalk back in 1998, wrote tonnes of inheritance trees, realized your code was terrible, and decided to forever shun this thing that you barely understood in the first place.

5) The old-school optimizer. You wrote a better version of virtual in C 20 years ago, so now you shun all C++ compilers, even though you've never bothered to re-run your tests with modern compilers. You're probably also an old guru.

6) The hardcore game-dev. You actually understand all the points being made in Mike Acton's rants, and therefore you subscribe to point #2 -- you're sick of seeing bad programmers write bad code, so you want to take away the tools they're abusing.
 

And he was talking about OOP and how bad it is.

If anyone is making rash, absolute statements like that, then they should not be teaching other people.

There is a great blog entry where Casey explains his coding style.
http://mollyrocket.com/casey/stream_0019.html

That falls into the straw-man argument category. To paraphrase:
"I don't understand OOP, so based off the badly taught perversion of it that I was using, I'm going to declare that it's all horseshit."
 
The big straw-man there is he bases all the opening paragraphs on an example where a problem is solved using a faulty inheritance hierarchy, implying that this is the OO way to do things... when in fact OO teaches the opposite of this -- to prefer composition over inheritance -- and thus the opening paragraphs are either ignorantly or deliberately presenting a terrible and/or contrived solution to generate a false impression.

All the ideas he's putting forth in that post can be combined with the useful ideas from OO theory... i.e. they're mostly orthogonal.

It's very arrogant to just dismiss it (and harmful to teach such arrogance to others) because you had a bad experience in the past.




#5206801 Should I Port to D3D10 or D3D11 ?

Posted by Hodgman on 26 January 2015 - 07:06 PM

If I choose D3D11, will the end user need Windows 8 and/or a D3D11 video card?

No, they'll need Windows Vista (or above) and a D3D9 (or above) video card.

When you create a D3D11 device, it can be one of many "feature levels". There's feature levels for D3D9, D3D10 and D3D11 era video cards. 

e.g. If you choose to support D3D_FEATURE_LEVEL_10_0 as your minimum, then your users would require a D3D10-compatible video card.
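For example, a sketch of creating a device that accepts anything from a 10_0-class card upward (error handling omitted):

#include <d3d11.h>

HRESULT CreateDeviceDownLevel(ID3D11Device** outDevice,
                              ID3D11DeviceContext** outContext,
                              D3D_FEATURE_LEVEL* outChosenLevel)
{
    const D3D_FEATURE_LEVEL requested[] =
    {
        D3D_FEATURE_LEVEL_11_0,   // use the best level the hardware supports...
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,   // ...but accept anything down to 10_0
    };

    return D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                             requested, UINT(sizeof(requested) / sizeof(requested[0])),
                             D3D11_SDK_VERSION,
                             outDevice, outChosenLevel, outContext);
}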

 

Because of this, there's no point in ever using D3D10 any more -- you may as well just use D3D11.




#5206617 Unbinding a constant buffer

Posted by Hodgman on 25 January 2015 - 04:02 PM

So on that note, if you have slots that are rarely used (e.g. only one object uses VS cbuffer slot #13) then you might want to periodically set all slots to NULL (e.g. once a frame) to ensure that Released buffers actually get released :)
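For example, a sketch of clearing every VS cbuffer slot (14 is the D3D11 per-stage limit):

#include <d3d11.h>

void ClearVSConstantBuffers(ID3D11DeviceContext* context)
{
    ID3D11Buffer* nullBuffers[D3D11_COMMONSHADER_CONSTANT_BUFFER_API_SLOT_COUNT] = {};
    context->VSSetConstantBuffers(0, D3D11_COMMONSHADER_CONSTANT_BUFFER_API_SLOT_COUNT, nullBuffers);
}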

Going off topic - this hidden reference counting is slow, so I'd expect them to get rid of it in D3D12.


#5206347 OO where do entity type definitions go?

Posted by Hodgman on 24 January 2015 - 01:28 AM

If you want to make something that's pure OOP and want classes for Orc, Elf and Goblin then they should be classes that all inherit from the same base class.

An important rule of OO is to prefer composition over inheritance.
So an OO solution would probably have:
- a Monster class for instances of monster entities in the world, with a pointer to:
- A MonsterDescriptor class containing values that are common to one 'class' of monsters.
- 3 instances of MonsterDescriptor, containing values for Orcs, Elves and Goblins.
- A utility for loading these descriptors from disc, DB, etc.
- Many instances of Monster, which must be passed a MonsterDescriptor in their constructors.
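A minimal C++ sketch of that layout (the member names are just illustrative, and loading the descriptors from disc/DB is left out):

#include <string>

struct MonsterDescriptor              // shared, per-"class" data
{
    std::string name;
    int   maxHealth;
    float moveSpeed;
};

class Monster                          // per-instance data in the world
{
public:
    explicit Monster(const MonsterDescriptor& desc)
        : m_desc(&desc), m_health(desc.maxHealth) {}

    void TakeDamage(int amount) { m_health -= amount; }

private:
    const MonsterDescriptor* m_desc;   // non-owning: descriptors outlive monsters
    int m_health;
};

// Usage: the three descriptors would normally come from your data files.
// MonsterDescriptor orc{"Orc", 120, 3.0f}, elf{"Elf", 80, 5.0f}, goblin{"Goblin", 40, 4.5f};
// Monster m1(orc), m2(elf), m3(goblin);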


#5206183 a better fix your timestep?

Posted by Hodgman on 23 January 2015 - 06:49 AM

Let's look at some example code...

First - assumptions:

Due to users being able to override their driver settings, you never know if vsync is on or not. Most monitors are 60Hz, so you should assume that the Present/Swap function may block for up to 16ms.

We can put games into 3 categories:
A) update works fine with variable elapsed-time values. Integration errors are tolerable.
B) integration errors need to be avoided (and there's not an analytical solution), so we need a fixed 'elapsed time' value.
B.1) the game is simulating very precise or fast moving objects, so the fixed step rate must be very small, e.g. race car tyres at 1000Hz.
B.2) the simulation is coarser and/or costly, so the time step can be huge, e.g. RTS games with hundreds of tanks shooting each other.

Type (A) can use a traditional unlocked loop.
While(1):
..state=Update(state, elapsed);
..Render(state);

Type (B.2) works great with FYT and interpolation.
While(1):
..buffer += elapsed;
..While(buffer >= step):
....buffer -= step;
....states[!i] = Update(states[i], step); i = !i;
..state = Tween(states, buffer);
..Render(state);

This also works fine for type (B.1) games, where the update rate is smaller than the render rate.
Except for when the vsync cost becomes an issue!

Let's say Render takes 1ms, Update takes 30ms, but within Render we have Present, which is blocking for long periods when vsync is enabled (15.6667ms on tweened frames).

*Without* vsync we get 30 tweened renders, 1ms apart, then an update which halts rendering for 30ms, then another 30 1ms frames.
That's an FPS that's constantly alternating between 1000fps and 33fps, which is very unpleasant.

When we turn on vsync, now it just constantly alternates between 60fps and 30fps, which is also very unpleasant.

So, what I think Norman is suggesting is that we employ cooperative multitasking in the Update method, so it can be yielded/resumed over multiple frames.
Instead of Update() taking 30ms per call, we can have Update0() take 15ms and Update1() also take 15ms.
On frames where we call Update0, the game-state used by the renderer IS NOT UPDATED, meaning we don't see a half-updated frame - everything is the same as far as the renderer is concerned - it continues to Tween between the last two COMPLETED states. All we've done is apply the standard game-dev solution for long-running tasks - do a small bit per frame and get the results at the end. After Update1() returns, it's added to the tweening list.

Now, we have constant 60fps rendering -- every frame does 15ms updating, 1ms rendering, and vsync only blocks for 0.6667ms now!
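A very rough sketch of that loop structure, assuming a fixed 60Hz display, a 30ms step split into two halves, and hypothetical UpdateFirstHalf/UpdateSecondHalf/Tween/Render functions (placeholders, not a real API):

struct GameState { /* positions, velocities, etc. */ };

void      UpdateFirstHalf(GameState& scratch);   // ~15ms: first half of the step
void      UpdateSecondHalf(GameState& scratch);  // ~15ms: completes the 30ms step
GameState Tween(const GameState& prev, const GameState& next, float blend);
void      Render(const GameState& view);         // ~1ms + Present (vsync)

void RunLoop()
{
    GameState completed[2] = {};           // the two most recent *completed* states
    GameState scratch = completed[1];      // the state currently being updated
    bool firstHalf = true;

    for (;;)
    {
        if (firstHalf)
        {
            UpdateFirstHalf(scratch);      // the renderer never sees this half-done state
        }
        else
        {
            UpdateSecondHalf(scratch);     // the step is now complete...
            completed[0] = completed[1];   // ...so publish it for tweening
            completed[1] = scratch;
        }

        // Tween between the last two completed states, exactly as in plain
        // fix-your-timestep: displayed time advances by half a step per frame.
        float blend = firstHalf ? 0.5f : 0.0f;
        Render(Tween(completed[0], completed[1], blend));

        firstHalf = !firstHalf;
    }
}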

The tweening logic from FYT has to stay, otherwise there's no point rendering faster than the update frequency! The extra rendered frames would just be duplicates!
But besides his inability to accept tweening, if I've understood him correctly, then this approach does actually double the minimum framerate (and increase the average by a third, and decrease variance to zero) in my specific example.


#5206146 Options for GPU debugging DX11 on Windows 7

Posted by Hodgman on 23 January 2015 - 12:42 AM

Ah I didn't know about that restriction. The author is very responsive - maybe email and ask if he has plans for feature level 10_1 support in the near future...
Otherwise, try Visual Studio or one of the vendors' tools.


#5206127 Would this be considered duplicate code?

Posted by Hodgman on 22 January 2015 - 10:28 PM

You can probably remove the duplication with something like:
if(state == ActionState.RUNNING)
{
	ASSERT( direction == Direction.LEFT || direction == Direction.RIGHT );
	runAnim[direction].update();
	if(runKeys[direction] == RELEASED)
		state = ActionState.IDLE;
}




