dpadam450

Member Since 18 Nov 2005

#5289120 Render to screen buffer vs. to texture problem

Posted by dpadam450 on 28 April 2016 - 12:28 PM

No, hodgman is correct.

d.) Render target texture color = ARGB(1, 1, 1, 1) (then when I render this texture, all of its pixels have alpha == 1)

 

This is wrong. Your target has an alpha channel, and will therefore write to the alpha component, which in this case will be 0.3.

If you want to do that properly, create the render texture as RGB, not ARGB.
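
In OpenGL terms, a minimal sketch of what I mean (assuming a GLEW-style setup; the helper name is just illustrative) of creating a color target with no alpha channel, so anything the shader writes to alpha is simply discarded:

#include <GL/glew.h>

GLuint CreateRGBRenderTexture(int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // GL_RGB8 internal format: the target stores no alpha component at all.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;  // attach this to your FBO as the color attachment
}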




#5288167 How can I get the rotation in PhysX in OpenGL

Posted by dpadam450 on 22 April 2016 - 12:06 PM

[ X.x  Y.x  Z.x  Tx ]
[ X.y  Y.y  Z.y  Ty ]
[ X.z  Y.z  Z.z  Tz ]
[  0    0    0    1 ]

 

You can get the helicopter's right, up, and forward vectors from the matrix itself. Those are three vectors in world coordinates. Whatever you are doing with heading would use the (Z.x, Z.y, Z.z) vector.
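
A minimal sketch of pulling those out (assuming a column-major 4x4 float array, which is what OpenGL expects; the Vec3 type and function are placeholders for whatever your PhysX glue code uses):

struct Vec3 { float x, y, z; };

void ExtractBasis(const float m[16], Vec3& right, Vec3& up, Vec3& forward, Vec3& position)
{
    right    = { m[0],  m[1],  m[2]  };  // X axis (first column)
    up       = { m[4],  m[5],  m[6]  };  // Y axis (second column)
    forward  = { m[8],  m[9],  m[10] };  // Z axis (third column) -- the "heading"
    position = { m[12], m[13], m[14] };  // translation (fourth column)
}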
 




#5287136 Computing an optimized mesh from a number of spheres?

Posted by dpadam450 on 15 April 2016 - 10:49 PM

In Blender this is called a Boolean operation. That may lead you to something.

 

Your algorithm, however, doesn't sound useful. I'm assuming you want to connect a bunch of mountains or hills that are all spheres? It would be better to use a heightmap or some other scheme.
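
A minimal sketch of the heightmap idea, one height value per grid cell instead of merging sphere meshes (the types and function here are just illustrative):

#include <vector>

struct Vertex { float x, y, z; };

std::vector<Vertex> BuildTerrain(const std::vector<float>& heights,
                                 int width, int depth, float cellSize)
{
    std::vector<Vertex> verts;
    verts.reserve(width * depth);
    for (int z = 0; z < depth; ++z)
        for (int x = 0; x < width; ++x)
            // Y comes straight from the heightmap; X/Z come from the grid position.
            verts.push_back({ x * cellSize, heights[z * width + x], z * cellSize });
    return verts;
}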




#5286983 How to scale a model in OpenGL?

Posted by dpadam450 on 15 April 2016 - 12:35 AM

Trying or doing? glScalef is what you want. It adds to the matrix stack used to multiply all incoming vertices. Make sure you call glLoadIdentity first; otherwise it will keep multiplying in the scale matrix every frame, inflating it until nothing works.
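
A minimal sketch in legacy fixed-function OpenGL (the draw call is a placeholder):

#include <GL/gl.h>

void DrawScaledModel()
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();            // reset every frame so the scale doesn't accumulate
    glScalef(2.0f, 2.0f, 2.0f);  // draw the model at twice its size
    // drawModel();              // placeholder for your actual draw call
}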




#5286945 Which per-frame operations are expensive in terms of performance in OpenGL?

Posted by dpadam450 on 14 April 2016 - 06:03 PM

You are asking about a problem that may not matter for you for many years to come. A lot of people ask this, and I tell everyone: optimize when you need to. Hardware is so fast nowadays that this shouldn't be a concern. Focus on your game if that is what you are building. If you just want to build the best tech in the world, then that is a different story.




#5285033 How to find the center of a modelview matrix?

Posted by dpadam450 on 04 April 2016 - 10:20 AM

To make things simpler, everything is always in world coordinates. Any time a matrix is present, it takes the points in your 3D model as vectors, stretches those vectors to point somewhere else, and gives you new vectors. Any matrix operations afterwards are applied to those new vectors.

 

If you have played Zelda, or any similar game, you might have a light or creature that circles around the main character. It is circling relative to the player. The keyword is relative, so the offset relative to the player has to happen first. So translate some distance from the origin of the world (treating 0,0,0 as the character's position even though he is thousands of units from the origin), apply your rotation treating the origin as the player, and then finally translate to the player's position.

 

If you don't think locally, you might translate to the player first and then apply the rotation, but every rotation takes vectors from 0,0,0 and rotates them. So if you translate thousands of units away first, you are then rotating vectors around the origin that are thousands of units long (centered around the origin).
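
A minimal sketch of that order in legacy fixed-function OpenGL (ignoring the camera part of the modelview; names are placeholders). Remember the stack post-multiplies, so the call closest to the draw is applied to the vertices first: offset, then rotate, then move to the player.

#include <GL/gl.h>

void DrawOrbiter(float playerX, float playerY, float playerZ,
                 float orbitAngleDegrees, float orbitRadius)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(playerX, playerY, playerZ);          // finally: move everything to the player
    glRotatef(orbitAngleDegrees, 0.0f, 1.0f, 0.0f);   // then: rotate around the player (treated as the origin)
    glTranslatef(orbitRadius, 0.0f, 0.0f);            // first: offset relative to the player
    // drawCreatureOrLight();                          // placeholder draw call
}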

 

Hope that clears things up. A local matrix is simply more of a logical convenience.




#5284554 When would you want to use Forward+ or Deferred Rendering?

Posted by dpadam450 on 31 March 2016 - 06:59 PM

Forward+ allows more material shaders for specific needs; with deferred you are limiting yourself to running one shader over the entire scene. Forward+ also doesn't have to write a G-buffer, so bandwidth is lower, and MSAA is cheaper. You also don't have to perform the blending operations that deferred requires. I think most people will be moving to Forward+ if possible.
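
To illustrate the tile-culling idea behind Forward+, here is a minimal CPU-side sketch of binning lights into screen tiles so each pixel only loops over the lights in its tile; real implementations do this in a compute shader, and the types and names here are just illustrative:

#include <algorithm>
#include <vector>

struct ScreenLight { float x, y, radius; };   // light already projected to screen space

std::vector<std::vector<int>> BinLights(const std::vector<ScreenLight>& lights,
                                        int screenW, int screenH, int tileSize)
{
    int tilesX = (screenW + tileSize - 1) / tileSize;
    int tilesY = (screenH + tileSize - 1) / tileSize;
    std::vector<std::vector<int>> bins(tilesX * tilesY);

    for (int i = 0; i < (int)lights.size(); ++i)
    {
        const ScreenLight& l = lights[i];
        // Find the range of tiles the light's screen-space bounds overlap.
        int minX = std::max(0, (int)((l.x - l.radius) / tileSize));
        int maxX = std::min(tilesX - 1, (int)((l.x + l.radius) / tileSize));
        int minY = std::max(0, (int)((l.y - l.radius) / tileSize));
        int maxY = std::min(tilesY - 1, (int)((l.y + l.radius) / tileSize));
        for (int ty = minY; ty <= maxY; ++ty)
            for (int tx = minX; tx <= maxX; ++tx)
                bins[ty * tilesX + tx].push_back(i);  // light i affects this tile
    }
    return bins;
}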




#5282452 Per Triangle Culling (GDC Frostbite)

Posted by dpadam450 on 21 March 2016 - 04:05 PM

I came across this presentation and they are talking about using compute for per-triangle culling (standard backface culling, standard hi-Z). I'm not sure what exactly they are talking about. Is this meant to rewrite that part of the pipeline completely, and then when it comes time to draw, just disable backface culling and the other built-in pipeline stages? I'm not getting why you would write a compute shader to determine which triangles are visible when the pipeline does that already. Even if you turn that stuff off, is this really that much better? Can you even tell the GPU to turn off hi-Z culling?

 

Slides 41-44

http://www.wihlidal.ca/Presentations/GDC_2016_Compute.pdf

 




#5273377 Combining shadows from multiple light sources

Posted by dpadam450 on 30 January 2016 - 12:34 PM

 

Yes, I'm not getting unwanted artifacts anymore. But I have another question; take a look at this screenshot: http://s16.postimg.org/rzbnd4rv9/ss14538.png Is it normal that the point light shadow is brighter than the directional light shadow?

 

To this question and your original post: shadows are the lack of light received. Most people think doing Phong shading and then multiplying in shadows afterwards works. As pointed out, the shadow should be multiplied by the N*L diffuse calculation, because that calculation is basically asking "Is the surface facing the light? (If yes, it is receiving photons.)" and "If it is facing the light, is something blocking it? (If so, it receives no photons.)" If you perform all your lighting equations and texture lookups as if every photon in the world hit the surface, it gets fully lit, and then some arbitrary multiplication happens afterwards to "darken" the image. It should instead be darkened by the fact that no light was hitting it inside the lighting equation.

 

So this leads to your second question. Light is additive; without light, every area in the world is completely black. So if your sun hits any surfaces not in shadow, those surfaces will receive a lot of light (lots of photons hitting the surface). Then, for every other light, more photons are added.

 

So you have areas with:

Sun + point light (which you can see is very white)

Sun only (your brighter shadow, because it still receives quite a lot of photons, just not the extra ones from the point light)

Point light only (your darker shadow; only a few photons from the point light hit the surface, so it is still pretty dark)
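
A minimal sketch of folding the shadow term into each light's N*L contribution and then adding the lights together (the Vec3 type and helpers are placeholders for your math library):

#include <algorithm>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  Scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }

struct Light { Vec3 dirToLight; Vec3 color; float shadow; };  // shadow: 0 = fully blocked, 1 = fully lit

Vec3 ShadeSurface(Vec3 normal, Vec3 albedo, const Light* lights, int count)
{
    Vec3 result = { 0, 0, 0 };                       // no light -> completely black
    for (int i = 0; i < count; ++i)
    {
        float nDotL = std::max(0.0f, Dot(normal, lights[i].dirToLight));
        float received = nDotL * lights[i].shadow;   // shadow scales the diffuse term itself
        result = Add(result, Scale(lights[i].color, received));  // lights add together
    }
    return { result.x * albedo.x, result.y * albedo.y, result.z * albedo.z };
}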




#5271641 Is my texture mapper correct?

Posted by dpadam450 on 17 January 2016 - 09:42 PM

 

I've been working on subpixel accuracy

What do you mean?

 

I ran the demo, and it looks like you aren't using any mip-mapping/filtering. The texture flickers a lot (not just at the edges). Are you using nearest/point filtering?
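
For comparison, a minimal sketch of bilinear filtering for a software rasterizer, which should cut down the flicker compared to point sampling (the Texture type and Fetch helper are placeholders for your own code):

#include <algorithm>
#include <cmath>

struct Color   { float r, g, b; };
struct Texture { int width, height; const Color* texels; };

static Color Fetch(const Texture& t, int x, int y)
{
    // Clamp to the texture edges so neighboring samples stay in bounds.
    x = std::min(std::max(x, 0), t.width - 1);
    y = std::min(std::max(y, 0), t.height - 1);
    return t.texels[y * t.width + x];
}

Color SampleBilinear(const Texture& t, float u, float v)
{
    float fx = u * t.width - 0.5f, fy = v * t.height - 0.5f;
    int x0 = (int)std::floor(fx), y0 = (int)std::floor(fy);
    float tx = fx - x0, ty = fy - y0;

    Color c00 = Fetch(t, x0, y0),     c10 = Fetch(t, x0 + 1, y0);
    Color c01 = Fetch(t, x0, y0 + 1), c11 = Fetch(t, x0 + 1, y0 + 1);

    auto lerp = [](float a, float b, float s) { return a + (b - a) * s; };
    // Blend the four nearest texels horizontally, then vertically.
    return { lerp(lerp(c00.r, c10.r, tx), lerp(c01.r, c11.r, tx), ty),
             lerp(lerp(c00.g, c10.g, tx), lerp(c01.g, c11.g, tx), ty),
             lerp(lerp(c00.b, c10.b, tx), lerp(c01.b, c11.b, tx), ty) };
}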




#5271500 Is my texture mapper correct?

Posted by dpadam450 on 16 January 2016 - 09:03 PM

d3dx9d_43.dll is missing; I don't think I have the DX SDK installed. Not sure what you are asking, though.




#5268796 Non-manual Texture positions addition.

Posted by dpadam450 on 02 January 2016 - 01:26 AM

"all my meshes are automatically generated."

 

In what sense? Surely they can't be purely random vertices making some globs. Can you post an image of a bunch of these generated objects? You can always write an unwrapper that will unwrap 3D meshes; how good it will be depends on your goal. I don't know what it is called in other tools, but Blender has one called Smart UV Project, which is probably the best method you could use for an arbitrary 3D mesh. I don't know all the inner details of the algorithm, but I have an idea.
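
As a much cruder illustration of automatic unwrapping, here's a minimal sketch of a box-style projection that picks UVs from the two axes most perpendicular to the face normal (names are placeholders, and this is not Blender's algorithm):

#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

Vec2 ProjectUV(Vec3 position, Vec3 faceNormal, float scale)
{
    float ax = std::fabs(faceNormal.x);
    float ay = std::fabs(faceNormal.y);
    float az = std::fabs(faceNormal.z);
    // Project along the dominant axis of the normal onto the other two axes.
    if (ax >= ay && ax >= az) return { position.y * scale, position.z * scale }; // project along X
    if (ay >= ax && ay >= az) return { position.x * scale, position.z * scale }; // project along Y
    return { position.x * scale, position.y * scale };                           // project along Z
}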




#5265394 Blending settings question

Posted by dpadam450 on 08 December 2015 - 12:25 AM

Swiftcoder's method will work.




#5264837 Indie Game Company Names

Posted by dpadam450 on 04 December 2015 - 12:36 AM

 

Doesn't seem very game-like...

Neither do Bungie, Valve, Naughty Dog, probably a billion others.




#5260160 Per pixel sprite depth

Posted by dpadam450 on 02 November 2015 - 11:56 AM

 

because the sprite depth is in range [0,1] for the given tile, but the camera depth buffer range [0,1] is used for the whole camera frustum.

If you are exporting 2D images with a depth of 0 to 1, we will call this sprite-relative depth.

Camera space will use 0 to 1, yes.

 

So the obvious solution: given a sprite's depth location, we add (or subtract) depth from the sprite's location in the camera. We need a way to translate sprite-relative depth into the world: you scale the relative sprite depth by the real-world depth range the 2D sprite covers.

 

If you are rendering a couch and it spans 3 tiles in depth, which equates to the orthographic depth range 0.5 to 0.7, then obviously your 2D sprite is going to map between 0.5 and 0.7:

OutputDepth = 0.5 + (0.7 - 0.5) * pixelSpriteDepth

 

So the couch is 0.2 long in the depth dimension; your 2D sprite then represents a 0.2 range in depth values.
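
A minimal sketch of that remapping (names are illustrative):

float RemapSpriteDepth(float spriteDepth, float sliceNear, float sliceFar)
{
    // e.g. sliceNear = 0.5, sliceFar = 0.7 for the couch example above;
    // spriteDepth is the 0..1 value stored in the sprite's depth image.
    return sliceNear + (sliceFar - sliceNear) * spriteDepth;
}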

 

Seems pretty simple; not sure what else you would be asking.





