lordikon

Members

  • Content count: 111
  • Community Reputation: 169 Neutral
  • Rank: Member
  1. Thanks for all the replies everyone, I definitely have some design to think about.
  2. [quote name='SiCrane' timestamp='1325183835' post='4897861']
     This is usually done by using a factory function. The factory creates the object and after it's created the factory calls the virtual function. Generally the constructors are all made protected and the factory is friended.
     [/quote]

     I've done it this way in the past; I was hoping there was some other way. I might be able to go with a factory this time around as well, I guess. I'll have to consider that.

     [quote name='MikeNicolella' timestamp='1325185609' post='4897874']
     Is there a good reason the constructor itself isn't initializing the object?
     [/quote]

     I've made it a habit to leave certain initializations out of the constructor for some of my classes. I often have cases where I want to reset an object, or re-initialize it. If I have an Initialize method, I can call it to reset the object as well as to set up the object when it is first created. I'll usually have constructors call the Initialize method.
  3. What I'm trying to achieve is that after my object has finished constructing, the base class makes a call to a virtual function named Initialize(). However, the only place I can see to call that method is in the base class's constructor, and if I call it there it will run before the derived class's constructor has finished. Not to mention that calling a virtual function during a constructor just seems dangerous; even if it worked, the derived class(es) wouldn't be finished initializing. What I'm trying to avoid is requiring every programmer writing a derived class to remember to call Initialize(); I would prefer that it "just happen" as far as they're concerned. Here's an example of the flow I would like to achieve, where class B derives from A:

     // Programmer only does this
     MyObj B = new B();

     1.) A's constructor runs
     2.) B's constructor runs
     3.) A calls the virtual Initialize() method
     // ===============================
     // This part occurs if B overrides Initialize()
     3a.) B receives the call to Initialize()
     3b.) B calls the base class Initialize()
     // ===============================
     4.) A receives the call to Initialize()
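     A minimal C++ sketch of the factory-based version of this flow (class and function names here are illustrative, not from an actual codebase): the factory constructs the object first and only then makes the virtual call, so the steps happen in exactly the order listed above.

```cpp
#include <memory>
#include <utility>

// Sketch of the factory approach SiCrane describes: the factory function
// constructs the object and only then calls the virtual Initialize(), so the
// call is dispatched after the most-derived constructor has finished.
class Entity
{
public:
    virtual ~Entity() = default;

    // The factory: the only way client code can make an entity.
    template <typename T, typename... Args>
    static std::unique_ptr<T> Create(Args&&... args)
    {
        std::unique_ptr<T> obj(new T(std::forward<Args>(args)...));
        obj->Initialize();   // virtual dispatch works: construction is complete
        return obj;
    }

    bool baseInitialized = false;

protected:
    Entity() = default;                      // protected ctor forces Create()
    virtual void Initialize() { baseInitialized = true; }
};

class Player : public Entity
{
    friend class Entity;                     // lets Create() reach the ctor
public:
    bool playerInitialized = false;

protected:
    Player() = default;
    void Initialize() override
    {
        Entity::Initialize();                // chain to the base (step 3b)
        playerInitialized = true;
    }
};
```

     With this, `auto p = Entity::Create<Player>();` runs Player's constructor and then both Initialize() overrides, without the caller ever mentioning Initialize().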
  4. Shader divide by zero?

    MJP, not sure why this wouldn't work; it worked fine for me, no errors:

    [code]float4 ldir = (0.0f, 0.7f, 0.0f, 1.0f)[/code]

    I did have to change the W component to 1.0f, however, to get rid of my divide-by-zero error. In the end that shader just didn't seem to work right at all; I ended up moving the matrix calculations to the pixel shader. Anyway, here's the final result:

    [code]
float4x4 LightViewProjection : LIGHTVIEWPROJECTION;
float4x4 WorldViewProjection : WORLDVIEWPROJECTION;
float4x4 World : WORLD;
float4x4 view : VIEW;
float4 LightDir : LIGHT_DIRECTION;
float4 CameraForward : VIEW_FORWARD;
float4 CameraPos : VIEW_POS;
float SpecularPower : SPECULAR_POWER;
float4 SpecularColor : SPECULAR_COLOR;
float4 AmbientColor : AMBIENT_COLOR;
texture diffuseMap : DIFFUSE_MAP;
texture normalMap : NORMAL_MAP;
bool UsingClippingPlane : USING_CLIPPING_PLANE;
float4 ClippingPlane : CLIPPING_PLANE;
float DepthBias = 0.0015f;

sampler diffuseSampler = sampler_state
{
    Texture = (diffuseMap);
    ADDRESSU = CLAMP;
    ADDRESSV = CLAMP;
    MAGFILTER = LINEAR;
    MINFILTER = LINEAR;
    MIPFILTER = LINEAR;
};

sampler normalSampler = sampler_state
{
    Texture = (normalMap);
    ADDRESSU = CLAMP;
    ADDRESSV = CLAMP;
    MAGFILTER = LINEAR;
    MINFILTER = LINEAR;
    MIPFILTER = LINEAR;
};

Texture ShadowMap : SHADOW_MAP;
sampler ShadowMapSampler = sampler_state
{
    texture = <ShadowMap>;
    MinFilter = POINT;
    MagFilter = POINT;
    MipFilter = NONE;
    AddressU = Clamp;
    AddressV = Clamp;
    AddressW = Wrap;
};

float4 ComputeShadowColor(float4 worldPos, float4 Color)
{
    // Find the position of this pixel in light space
    float4 lightingPosition = mul(worldPos, LightViewProjection);

    // Find the position in the shadow map for this pixel
    float2 ShadowTexCoord = 0.5 * lightingPosition.xy / lightingPosition.w + float2(0.5, 0.5);
    ShadowTexCoord.y = 1.0f - ShadowTexCoord.y;

    // Get the current depth stored in the shadow map
    float4 shadowInfo = tex2D(ShadowMapSampler, ShadowTexCoord);
    float shadowdepth = shadowInfo.r;
    float shadowOpacity = 0.5f + 0.5f * (1 - shadowInfo.g);

    // Calculate the current pixel depth
    // The bias is used to prevent floating point errors that occur when
    // the pixel of the occluder is being drawn
    float ourdepth = (lightingPosition.z / lightingPosition.w) - DepthBias;

    // Check to see if this pixel is in front of or behind the value in the shadow map
    if (shadowdepth < ourdepth)
    {
        // Shadow the pixel by lowering the intensity
        Color *= float4(shadowOpacity, shadowOpacity, shadowOpacity, 1);
    }

    return Color;
}

struct VertexInput
{
    float4 position : POSITION;
    float2 texCoords : TEXCOORD0;
    float3 normal : NORMAL0;
    float3 binormal : BINORMAL0;
    float3 tangent : TANGENT0;
};

struct VertexToPixel
{
    float4 position : POSITION;
    float2 texCoords : TEXCOORD0;
    float3 lightDir : TEXCOORD1;
    float3 viewDir : TEXCOORD2;
    float4 ClipDistances : TEXCOORD3;
    float4 WorldPos : TEXCOORD4;
};

VertexToPixel VertexShaderFunction(VertexInput input)
{
    VertexToPixel output;

    // Transform vertex by world-view-projection matrix
    output.position = mul(input.position, WorldViewProjection);
    output.WorldPos = mul(input.position, World);

    float3 viewPos = mul(-view._m30_m31_m32, transpose(view));
    float3 viewDir = viewPos - output.WorldPos.xyz;

    float3 normNormal = normalize(input.normal);
    float3 normBinormal = normalize(input.binormal);
    float3 normTangent = normalize(input.tangent);

    output.viewDir.x = dot(normTangent, viewDir);
    output.viewDir.y = dot(normBinormal, viewDir);
    output.viewDir.z = dot(normNormal, viewDir);

    output.lightDir.x = dot(normTangent, -LightDir);
    output.lightDir.y = dot(normBinormal, -LightDir);
    output.lightDir.z = dot(normNormal, -LightDir);

    output.texCoords = input.texCoords;
    output.ClipDistances = dot(input.position, ClippingPlane);

    return output;
}

float4 PixelShaderFunction(VertexToPixel input) : COLOR0
{
    float4 diffuseColor = tex2D(diffuseSampler, input.texCoords);
    float3 normal = normalize((tex2D(normalSampler, input.texCoords).xyz * 2.0f) - 1.0f);

    float3 normLightDir = normalize(input.lightDir);
    float3 normViewDir = normalize(input.viewDir);

    float nDotL = dot(normal, normLightDir);
    float3 refl = normalize(((2.0f * normal) * nDotL) - normLightDir);
    float rDotV = max(dot(refl, normViewDir), 0);

    float4 retColor = (AmbientColor * diffuseColor + (diffuseColor * rDotV) + (SpecularColor * pow(rDotV, SpecularPower)));
    float4 finalColor = ComputeShadowColor(input.WorldPos, retColor);

    if (UsingClippingPlane)
    {
        clip(input.ClipDistances);
    }

    return float4(finalColor.xyz, 1.0f);
}

technique NormalMapSpecular
{
    pass Single_Pass
    {
        VertexShader = compile vs_1_1 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
    [/code]

    The result is a normal-mapped, textured object with specularity that can receive shadows cast onto it. There is more to the shader, which is the part that allows this object to also cast shadows.
  5. Shader divide by zero?

    [quote name='NicoLaCrevette' timestamp='1322647051' post='4889014']
    Hi,

    As suggested by bwhiting, lightDir.w must be explicitly set to zero, since lightDir is a vector. I also think it's a problem of normalizing a null vector: lightDir is obviously not null, so the problem can come from your worldToTangentSpace matrix: it probably transforms lightDir into a null vector. Maybe you can check this worldToTangentSpace matrix?

    Nico

    EDIT: Personally, I transform the tangent, binormal, and normal vectors (into world space) in the vertex shader; in the pixel shader, I just gather the 3 vectors into a 'TBN' matrix and compute the normal accordingly.

    EDIT: Could you post your vertex shader code so that we can see how tangent, normal, and binormal are computed?
    [/quote]

    The tangent, binormal, and normal are computed in a content processor and stored within the vertex buffer. That part is done by XNA, but the content processor was written custom, so there could theoretically be a problem there. However, I have taken a look at the data in each vertex, and all three vectors appear to be correct and orthogonal.

    The problem with the divide-by-zero was indeed that I wasn't defining a .w component. That's what I get for programming at 2am. I have thought about computing the normal in the pixel shader, but as pixel shaders tend to be a game's bottleneck, I first try to put everything I can into the vertex shader.

    EDIT: There are still other things wrong with my shader. I guess the main problem is that I'm not a graphics programmer, but since I have no graphics programmer to donate time to this open-source project, I'm doing what I can. I managed to get shadow mapping and reflective/refractive water working so far, which is a big feat for me.

    EDIT: I'm working off of this tutorial though: [url="http://digitalerr0r.wordpress.com/2009/03/23/xna-shader-programming-tutorial-4-normal-mapping/"]http://digitalerr0r.wordpress.com/2009/03/23/xna-shader-programming-tutorial-4-normal-mapping/[/url]
  6. I am trying to implement a normal-mapped, textured, specular, and ambient shader. Within my vertex shader I'm getting a divide-by-zero error on the line below that is in bold:

     VertexToPixel GenericTransform_VS(VertexInput input)
     {
         VertexToPixel output;

         // Transform vertex by world-view-projection matrix
         output.position = mul(input.position, WorldViewProjection);
         output.WorldPos = mul(input.position, World);

         float4 lightDir = (0.0f, 0.7f, 0.0f);

         float3x3 worldToTangentSpace;
         worldToTangentSpace[0] = mul(normalize(input.tangent), World);
         worldToTangentSpace[1] = mul(normalize(input.binormal), World);
         worldToTangentSpace[2] = mul(normalize(input.normal), World);

         [b]output.lightDir = normalize( mul( worldToTangentSpace, lightDir ) );[/b]
         output.viewDir = normalize( mul( worldToTangentSpace, CameraPos - output.WorldPos ) );

         output.texCoords = input.texCoords;
         output.ClipDistances = dot(input.position, ClippingPlane);

         return output;
     }

     I get two warnings and an error on the line in bold, but only if lightDir's Z component is zero. It doesn't give this error if the X or Y components are zero. The warnings and error are:

     (126,18): warning X4008: floating point division by zero
     (126,18): warning X4008: floating point division by zero
     (126,18): error X4579: NaN and infinity literals not allowed by shader model

     The only division I see on that line would be within the normalize method, but it shouldn't run into that problem unless the lightDir vector were all zeros. It's almost as if it only cares about the Z component, and the X and Y are treated as zero no matter what I do. This explains why I'm getting another issue, which is that the Z variable is the only one that seems to affect the lighting of my model; my changes to X and Y are completely ignored.

     Here's the vertex shader input and output structures, in case it helps:

     [code]
struct VertexInput
{
    float4 position : POSITION;
    float2 texCoords : TEXCOORD0;
    float3 normal : NORMAL0;
    float3 binormal : BINORMAL0;
    float3 tangent : TANGENT0;
};

struct VertexToPixel
{
    float4 position : POSITION;
    float2 texCoords : TEXCOORD0;
    float3 lightDir : TEXCOORD1;
    float3 viewDir : TEXCOORD2;
    float4 ClipDistances : TEXCOORD3;
    float4 WorldPos : TEXCOORD4;
};
     [/code]

     Thanks in advance for any help.
  7. Good stuff guys, thanks. I do believe that if all four points of the rectangle are on the outside of any one of the planes, then it's not colliding. Thanks!
  8. I'm basically just trying to check whether a flat quad rectangle lies within the view frustum. The rectangle is not axis-aligned; it can have any scale or transform (position, rotation). The current frustum information I have is just the 6 planes that make up the frustum, although if I truly needed to, I could probably calculate the 8 corners of the frustum. I searched all over Google and most of the results seem to be plane checks rather than rectangles.

     EDIT: I think I might have it. Let me know if this sounds like it would work: Check each point on the rectangle against the frustum. If any point is in the frustum, then it's in view. If no points are within the frustum, check if any edge of the rectangle intersects any of the frustum planes. If any edge intersects a frustum plane, then it's in view. I think this will work, and I already have the algorithm for checking points against the frustum, so I should be able to use that algorithm, plus the fact that all the points are outside of the frustum, to handle either case.
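     For illustration, here is a sketch of the conservative rejection test mentioned in this thread: if all four corners lie behind any single frustum plane, the rectangle cannot be visible. The Vec3/Plane types and the inward-facing-normal convention are assumptions, not the poster's actual code.

```cpp
#include <array>

// Minimal vector/plane types for illustration; a real engine would use its own.
struct Vec3 { float x, y, z; };

struct Plane {
    Vec3 n;    // plane normal, assumed to point toward the inside of the frustum
    float d;   // plane equation: dot(n, p) + d >= 0 means "inside" this plane
};

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Conservative test: the rectangle is definitely not visible if all four of
// its corners lie on the outside of any single frustum plane. (Like all
// plane-rejection tests, it can report "possibly visible" for a rectangle
// near a frustum corner that is actually outside, but it never culls a
// rectangle that is really in view.)
bool RectanglePossiblyInFrustum(const std::array<Vec3, 4>& corners,
                                const std::array<Plane, 6>& frustum)
{
    for (const Plane& plane : frustum)
    {
        int outsideCount = 0;
        for (const Vec3& corner : corners)
        {
            if (Dot(plane.n, corner) + plane.d < 0.0f)
                ++outsideCount;
        }
        if (outsideCount == 4)
            return false;  // all corners behind one plane: definitely culled
    }
    return true;           // not trivially rejected; treat as visible
}
```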
  9. I'm working on a Windows DirectX app, and I want to handle an event for a file being dragged and dropped into my app. I'm sure there is some Windows message sent for this; I just have no idea what it is. Any help is appreciated, thanks. [Edited by - lordikon on July 30, 2010 12:26:52 AM]
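     For reference, the message in question is WM_DROPFILES. A minimal Win32 sketch (window creation and the app-specific handling are elided; only the drop plumbing is shown): the window opts in with DragAcceptFiles, then queries and releases the drop handle in its window procedure.

```cpp
#include <windows.h>
#include <shellapi.h>   // DragAcceptFiles, DragQueryFile, DragFinish

// Call once after creating the window so the OS will send WM_DROPFILES:
//     DragAcceptFiles(hwnd, TRUE);

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_DROPFILES:
    {
        HDROP hDrop = reinterpret_cast<HDROP>(wParam);

        // Passing 0xFFFFFFFF as the index returns the number of dropped files.
        UINT fileCount = DragQueryFile(hDrop, 0xFFFFFFFF, nullptr, 0);
        for (UINT i = 0; i < fileCount; ++i)
        {
            TCHAR path[MAX_PATH];
            if (DragQueryFile(hDrop, i, path, MAX_PATH))
            {
                // ... hand 'path' to the app's loading code here ...
            }
        }
        DragFinish(hDrop);  // release the memory the shell allocated
        return 0;
    }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```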
  10. You might get more help on this one if you put it in the graphics programming and theory section.
  11. Quote:Original post by SiS-Shadowman
      Quote:Original post by s.kwee
      Anyway I think you are not familiar with component based systems.
      I just skimmed through your post, but programmermattc has a valid point. Why are you introducing a hierarchy to the Entity class? An entity can only be visible if there's a position component, so the visibility could fall under the position component's responsibility. The same thing goes for the Movable and NPC subclasses: they can easily be expressed through components. Apart from the Entity hierarchy, it looks good to me though.

      Agreed, I see no need for an Entity hierarchy. Whether or not an entity is visible could be determined by whether or not it has a Render component. Whether or not it is an NPC could be determined by an NPC component, or possibly an AI component. Whether or not it is movable could be based on a Physics component. Rather than have a long chain of inheritance to get a movable, visible NPC, you simply have an entity with three components: Render, Physics, and AI.
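      A minimal sketch of that composition approach (all class names here are hypothetical, not from any poster's code): an entity is just a bag of components, and "visible / movable / NPC" become questions about which components are present.

```cpp
#include <memory>
#include <string>
#include <typeindex>
#include <unordered_map>

// Instead of an Entity -> Movable -> NPC inheritance chain, an entity holds
// components keyed by type, and capability checks test for their presence.
struct Component { virtual ~Component() = default; };

struct RenderComponent  : Component { std::string mesh; };
struct PhysicsComponent : Component { float vx = 0, vy = 0; };
struct AIComponent      : Component { std::string behavior; };

class Entity
{
public:
    template <typename T>
    void Add(std::unique_ptr<T> c) { components_[typeid(T)] = std::move(c); }

    template <typename T>
    bool Has() const { return components_.count(typeid(T)) != 0; }

    // Capabilities fall out of composition, not inheritance.
    bool IsVisible() const { return Has<RenderComponent>(); }
    bool IsMovable() const { return Has<PhysicsComponent>(); }
    bool IsNPC()     const { return Has<AIComponent>(); }

private:
    std::unordered_map<std::type_index, std::unique_ptr<Component>> components_;
};
```

      A movable, visible NPC is then one entity with all three components attached, and entities that need only some of the behavior just omit the rest.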
  12. I'm confused; if you know what game you want and you're an expert programmer, then I'd assume you could structure the code however you need for your game. Some game architectures may be more common than others, I guess, but I've never needed one before making a game; I just sat down and designed it based on my needs.
  13. Knowing when an object is "squished"

    Quote:Original post by Emergent
    I haven't used Havok either, but... I wonder... For each contact point, there has got to be an associated Lagrange multiplier somewhere inside Havok. Will it give you either the multiplier or the contact force? It would seem that killing objects when this becomes very large would be the most natural way of dealing with your problem.

    I should be able to get the depth of the contact and the separating velocity. I'm not sure I can simply use a single contact point to determine "squishing", though. A high-speed collision (like one pool ball hitting another) may give a deep, fast-moving contact point, but it wouldn't constitute "squishing". I need to know not only that the body is being contacted, but also that it cannot move away from the contact.
  14. Knowing when an object is "squished"

    Quote:Original post by Buckeye
    Again, I'm not sure about Havok, but does the contact data have references or pointers to the two objects in contact? Also, each object should be able to be assigned a type, or a combination of types. If the above is true for Havok, you can check the contacts for "squishables." If a "squishable" is in contact with an object that's not a "valid-pusher" and the penetration depth is > epsilon, kill the "squishable." In pseudo-code:

    set_kill = NULL
    for each contact C
        if ( ( (C.obj1.type & SQUISHABLE) && (C.obj2.type & SQUISHER) )
          || ( (C.obj1.type & SQUISHER) && (C.obj2.type & SQUISHABLE) ) )
          && C.depth > 0.1 then
            if ( C.obj1.type & SQUISHABLE ) set_kill = C.obj1
            else set_kill = C.obj2
    if ( set_kill != NULL ) then kill set_kill

    The contact points themselves do not have the data; however, Havok lets you create a listener that will fire whenever a contact point is created, and within that listener it includes the two bodies that are colliding. So I should be able to try something like this.
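    An engine-agnostic sketch of that pseudo-code wired into a contact-point callback: the Body and ContactPointEvent types, the flag values, and the OnContactPointAdded hook are all hypothetical stand-ins for the physics engine's actual listener API (Havok's real types differ).

```cpp
#include <cstdint>

// Hypothetical type flags assigned to bodies, as in Buckeye's pseudo-code.
enum TypeFlags : std::uint32_t {
    SQUISHABLE = 1u << 0,
    SQUISHER   = 1u << 1,
};

struct Body {
    std::uint32_t type = 0;
    bool alive = true;
};

// Stand-in for the event a contact-point listener would receive.
struct ContactPointEvent {
    Body* bodyA;
    Body* bodyB;
    float penetrationDepth;  // how far the two bodies overlap
};

// Fired by the physics engine whenever a contact point is created.
void OnContactPointAdded(const ContactPointEvent& e)
{
    const float kSquishDepth = 0.1f;  // the epsilon from the pseudo-code
    if (e.penetrationDepth <= kSquishDepth)
        return;

    // Kill whichever body is the squishable one, if the other can squish it.
    if ((e.bodyA->type & SQUISHABLE) && (e.bodyB->type & SQUISHER))
        e.bodyA->alive = false;
    else if ((e.bodyB->type & SQUISHABLE) && (e.bodyA->type & SQUISHER))
        e.bodyB->alive = false;
}
```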
  15. 2D Helicopter Motion

    Quote:Original post by Fire Lancer
    Why would you need to make it exactly 0?
    Quote:Original post by lordikon
    If you want smooth motion you should probably incorporate time into your movements (unless you can guarantee a perfect framerate, which most OSes won't even allow you to do).
    So what if your fixed time step isn't exactly every (say) 20ms? Any decent implementation would make up for it and might run at, say, 20, 18, 23, 19, 19, 19, 22 ms intervals, which is perfectly correct and doesn't lose or gain time overall. Using a fixed time step for logic (and a variable timestep for rendering, with, say, interpolation) is far better than a variable one. As I found in another project, even simple things don't always work the same with a variable step, e.g.:
    *** Source Snippet Removed ***
    As I found at 20fps and 50fps, the position will get a different result over time (in a test I did it was nearly 2.5% after one second). There are some equations that will give the same results for constant acceleration, but AFAIK there is no general solution to this divergence. Far better to use a fixed time step in the first place.

    If you use a fixed time step, you don't have to factor the time delta into each movement. However, if you ever decide to change the time step length, it affects all of your movements, and you must go and adjust all movement speeds accordingly, which can be a maintenance nightmare. Additionally, if you use a fixed time step for movement, you often need the same time step for rendering, or you will see jittery movement of objects, where you render on frames in which the object doesn't move.
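    The fixed-logic-step, interpolated-render pattern Fire Lancer describes can be sketched as follows (an illustrative sketch, not either poster's code; the World type and 50 Hz step are assumptions): leftover frame time is carried in an accumulator so logic always advances in fixed dt slices, and rendering blends the last two states to avoid the jitter mentioned above.

```cpp
// Toy simulation state: one object moving at constant velocity.
struct World {
    double position = 0.0;
    double previousPosition = 0.0;
    double velocity = 1.0;  // units per second

    void Step(double dt)
    {
        previousPosition = position;
        position += velocity * dt;
    }

    // Blend between the last two logic states for smooth rendering.
    double RenderPosition(double alpha) const
    {
        return previousPosition * (1.0 - alpha) + position * alpha;
    }
};

// One frame of the game loop: consume the real elapsed time in fixed slices.
// Returns the interpolation factor (fraction of a step left over) that the
// renderer should use this frame.
double Advance(World& world, double frameTime, double& accumulator,
               double fixedDt = 1.0 / 50.0)
{
    accumulator += frameTime;
    while (accumulator >= fixedDt)
    {
        world.Step(fixedDt);     // logic always sees the same dt
        accumulator -= fixedDt;
    }
    return accumulator / fixedDt;
}
```

    Because the accumulator never discards time, frames of 20, 18, 23, ... ms still advance the simulation by exactly the elapsed total, which is the "doesn't lose or gain time overall" property from the quote.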