derek_of_bodom@hotmail.com

Members
  • Content count

    53
  • Joined

  • Last visited

Community Reputation

104 Neutral

About derek_of_bodom@hotmail.com

  • Rank
    Member
  1. Rotating camera around player origin

    I'm glad I was able to help. I wrote all that code from memory, so I'm actually a little surprised that you were able to get it to do what you want.
  2. "Dark Shadows" with Johnny Depp is seriously one of the shittiest movies I have EVER seen. I would not recommend it to anyone. I seriously would have rather watched Twilight than that shit.
  3. Rotating camera around player origin

    I always keep yaw, pitch, and roll variables. Every frame I update them from the controller (or whatever input you use), build a quaternion from them, and transform the "backward" vector with that quaternion. Scale that vector by whatever distance you want the camera to sit from the player, add the player position to it, and that's your camera position; the look-at target is just the character position. It's really not hard to do. I was going to be lazy and skip the sample code, but aww, hell, here's some pseudo-code: [code]
float yaw, pitch, roll; // camera state, kept in your camera class or wherever you like
// You could store a quaternion directly and multiply quaternions each frame instead,
// but separate yaw/pitch/roll makes it easier to track (and clamp) the camera's rotation.
// Yaw rotates around the up axis (left/right), pitch looks up/down, roll tilts the view.
// Update them each frame from controller/keyboard input, then build the quaternion:
Quaternion rotator = Quaternion.CreateFromYawPitchRoll(yaw, pitch, roll);
float camDistance = 5.0f;
Vector3 back = Vector3.Backward;
back = Vector3.Transform(back, rotator); // rotate our vector
back *= camDistance;                     // push it out a distance from the player
back += playerPosition;                  // move it relative to the player
View = Matrix.CreateLookAt(back, playerPosition, upVector);
[/code]
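    For reference, here is roughly what a per-frame version of that looks like in XNA; the field names, the 2.0f turn speed, and the 5-unit camera distance below are placeholders of mine, not anything from the original thread:
[CODE]
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input;

// Somewhere in your camera/game class:
float yaw, pitch;
Vector3 playerPosition;
Matrix View;

void UpdateCamera(GameTime gameTime)
{
    float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
    GamePadState pad = GamePad.GetState(PlayerIndex.One);

    // Right stick orbits the camera around the player.
    yaw   -= pad.ThumbSticks.Right.X * 2.0f * dt;
    pitch += pad.ThumbSticks.Right.Y * 2.0f * dt;
    pitch  = MathHelper.Clamp(pitch, -MathHelper.PiOver2 + 0.1f, MathHelper.PiOver2 - 0.1f);

    Quaternion rotator = Quaternion.CreateFromYawPitchRoll(yaw, pitch, 0.0f);
    Vector3 camPos = Vector3.Transform(Vector3.Backward, rotator) * 5.0f + playerPosition;
    View = Matrix.CreateLookAt(camPos, playerPosition, Vector3.Up);
}
[/CODE]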
  4. I'm working on a library for collision detection/response and maybe some slightly realistic physics. It's going well, but I want the library to have some fairly advanced features. I honestly think I could work out a solution given enough time, but I was wondering if there are any good resources on decomposing concave polygons into convex ones. I thought about triangulation, which would technically work, but ideally I'd end up with convex polygons with more than three sides (when the shape allows it) rather than a pile of triangles forming a single convex region. I haven't written a triangulation algorithm yet, but after reading up on it online I think I have a good idea of how to write one (see the sketch below). Should I triangulate first and then merge triangles into convex pieces, or is it easier to extract each convex shape directly?
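    Since the triangulate-first route came up, here is a minimal ear-clipping sketch in C#/XNA; the EarClipper class, the Triangulate signature, and the assumption of a simple counter-clockwise polygon are all mine, not from the post. Merging neighbouring triangles whose union stays convex would then give the larger convex pieces:
[CODE]
using System.Collections.Generic;
using Microsoft.Xna.Framework;

public static class EarClipper
{
    // z-component of (b - a) x (c - a); positive means the turn a->b->c is counter-clockwise.
    static float Cross(Vector2 a, Vector2 b, Vector2 c)
    {
        return (b.X - a.X) * (c.Y - a.Y) - (b.Y - a.Y) * (c.X - a.X);
    }

    // For a counter-clockwise triangle (a, b, c), p is inside if it lies left of every edge.
    static bool PointInTriangle(Vector2 p, Vector2 a, Vector2 b, Vector2 c)
    {
        return Cross(a, b, p) >= 0 && Cross(b, c, p) >= 0 && Cross(c, a, p) >= 0;
    }

    // Triangulates a simple CCW polygon, returning index triples into the input list.
    public static List<int[]> Triangulate(List<Vector2> poly)
    {
        var remaining = new List<int>();
        for (int i = 0; i < poly.Count; i++) remaining.Add(i);

        var triangles = new List<int[]>();
        while (remaining.Count > 3)
        {
            bool clipped = false;
            for (int i = 0; i < remaining.Count; i++)
            {
                int iPrev = remaining[(i + remaining.Count - 1) % remaining.Count];
                int iCurr = remaining[i];
                int iNext = remaining[(i + 1) % remaining.Count];
                Vector2 a = poly[iPrev], b = poly[iCurr], c = poly[iNext];

                if (Cross(a, b, c) <= 0) continue; // reflex vertex, cannot be an ear

                bool containsOther = false;
                foreach (int j in remaining)
                {
                    if (j == iPrev || j == iCurr || j == iNext) continue;
                    if (PointInTriangle(poly[j], a, b, c)) { containsOther = true; break; }
                }
                if (containsOther) continue;

                triangles.Add(new[] { iPrev, iCurr, iNext }); // clip the ear
                remaining.RemoveAt(i);
                clipped = true;
                break;
            }
            if (!clipped) break; // non-simple or degenerate polygon, give up
        }
        if (remaining.Count == 3)
            triangles.Add(new[] { remaining[0], remaining[1], remaining[2] });
        return triangles;
    }
}
[/CODE]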
  5. Quadtree problem.

    Are you not releasing any memory? It looks like you're allocating, but not releasing. That might be the problem.
  6. Diffuse color?

    [quote name='Bacterius' timestamp='1335043507' post='4933611'] Then you can just go like this in your pixel shader (in pseudocode): [CODE]outputcolor = (texColor + blendColor) * 0.5;[/CODE] Or you can blend it more or less strongly like this: [CODE]outputcolor = lerp(texColor, blendColor, t);[/CODE] If t = 0, the output color will just be texColor, if t = 1 then it will be blendColor. Values in between give various blend levels. t = 0.5 corresponds to the blend before. [/quote] I actually figured it out myself. I tried the code you provided, and it didn't do what I wanted. It did some weird blending. [CODE]texColor * blendColor[/CODE] does exactly what I want.
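    For completeness, the C#/XNA side of feeding that blend colour into the shader looks something like this; the effect file name, the parameter names, and the texture asset are placeholders of mine, not from the thread:
[CODE]
// Load the effect and set the tint once (red here):
Effect tintEffect = Content.Load<Effect>("TintEffect");
Texture2D texture = Content.Load<Texture2D>("MyTexture"); // whatever texture you want tinted
tintEffect.Parameters["BlendColor"].SetValue(new Vector4(1f, 0f, 0f, 1f));
tintEffect.Parameters["DiffuseTexture"].SetValue(texture);

// XNA 3.1-style draw; in 4.0 the Begin/End pairs become pass.Apply().
tintEffect.Begin();
foreach (EffectPass pass in tintEffect.CurrentTechnique.Passes)
{
    pass.Begin();
    // draw the textured geometry here, e.g. with DrawUserIndexedPrimitives
    pass.End();
}
tintEffect.End();
[/CODE]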
  7. Diffuse color?

    With XNA you have BasicEffect, which has a DiffuseColor property. I haven't touched HLSL in a while, and I'm wondering how to write the code that tints a texture with a color. For example: I have a texture and I want to draw it tinted red. What would I do with a float4 blendColor and a float4 texColor (where texColor is the sampled texture color for that pixel)?
  8. [HLSL] Half vectors in shaders?

    [quote name='MJP' timestamp='1334440602' post='4931274'] [quote name='Nyxenon' timestamp='1334437414' post='4931263'] [quote name='MJP' timestamp='1334432743' post='4931248'] [quote name='Nyxenon' timestamp='1334411063' post='4931186'] I'm not entirely sure of how to get the data in the shader. If it were normal floats and vectors, I would have no problem, but it's not. Would it be alright to write a shader as if it's going to be accessing normal float data, or will I need to use half types, and somehow convert them to float? [/quote] You can use float types, and the GPU will automatically convert them to 32-bit float for the vertex shader. [/quote] So what you're saying is that for the above structure that I provided, I could write the following in HLSL? [CODE] struct VoxVertexIn { float4 Position : POSITION; float4 Normal : NORMAL; float2 UV : TEXCOORD; float4 Color : COLOR; }; [/CODE] [/quote] Yes, that will work. [/quote] That's definitely a relief. Thanks for the help.
  9. How to read PNG images without a library?

    [quote name='Krohm' timestamp='1333435972' post='4927806'] [quote name='Nyxenon' timestamp='1333387330' post='4927562'] I am writing an engine. It's not so hard, most of it is pretty easy for me[/quote]I still don't understand why cannot you just focus on another problem. [/quote] Well, it's mostly because I want to get to work on the renderer, and I want to use a texture for testing. I COULD just save a texture to the raw format, and load it that way, but eventually I'd need to load PNGs. I've decided to just use XNA to write a test renderer to see how well it performs so I can optimize it, then I'll write it in C++.
  10. [HLSL] Half vectors in shaders?

    [quote name='MJP' timestamp='1334432743' post='4931248'] [quote name='Nyxenon' timestamp='1334411063' post='4931186'] I'm not entirely sure of how to get the data in the shader. If it were normal floats and vectors, I would have no problem, but it's not. Would it be alright to write a shader as if it's going to be accessing normal float data, or will I need to use half types, and somehow convert them to float? [/quote] You can use float types, and the GPU will automatically convert them to 32-bit float for the vertex shader. [/quote] So what you're saying is that for the above structure that I provided, I could write the following in HLSL? [CODE] struct VoxVertexIn { float4 Position : POSITION; float4 Normal : NORMAL; float2 UV : TEXCOORD; float4 Color : COLOR; }; [/CODE]
  11. [HLSL] Half vectors in shaders?

    [quote name='Matias Goldberg' timestamp='1334419208' post='4931212'] If it's for memory saving, OK. If it's for performance, forget about it, since the CPU has no native support for 16-bit floats; operations must be emulated. Most likely, even to fill your values you'll have to fill the Half16 by converting floats. [/quote] It's for memory. Technically, I only need the position and the texture coordinates. The diffuse is for lighting (which I don't plan on implementing yet anyway). The vertices won't be updated very often, and XNA has constructors for half vectors, so the XNA side is fine; it's the HLSL side I'm unsure of. Ideally, I'd like to store the diffuse as a 16-bit unsigned integer, but I'm not sure how to do that with XNA and HLSL. (There will only be one color of light, so the value is just a grayscale intensity.)
  12. [HLSL] Half vectors in shaders?

    I'm working on a prototype of sorts that will use voxels, and I'm trying to make the renderer. For performance and memory preservation, I've written a struct using XNA types like this: [CODE]
// 32 bytes
[StructLayout(LayoutKind.Sequential)]
public struct VoxVertex
{
    // 12 bytes
    Vector3 Position;
    // 8 bytes
    HalfVector4 Normal;
    // 4 bytes
    HalfVector2 UV;
    // 8 bytes
    HalfVector4 Diffuse;

    public static int Size { get { return 32; } }

    public static readonly VertexElement[] Format = new VertexElement[]
    {
        new VertexElement(0, 0,  VertexElementFormat.Vector3,     VertexElementMethod.Default, VertexElementUsage.Position,          0),
        new VertexElement(0, 12, VertexElementFormat.HalfVector4, VertexElementMethod.Default, VertexElementUsage.Normal,            0),
        new VertexElement(0, 20, VertexElementFormat.HalfVector2, VertexElementMethod.Default, VertexElementUsage.TextureCoordinate, 0),
        new VertexElement(0, 24, VertexElementFormat.HalfVector4, VertexElementMethod.Default, VertexElementUsage.Color,             0)
    };
}
[/CODE] I'm not entirely sure of how to get the data in the shader. If it were normal floats and vectors, I would have no problem, but it's not. Would it be alright to write a shader as if it's going to be accessing normal float data, or will I need to use half types, and somehow convert them to float?
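    Not from the original post, but a sketch of how a struct like that gets hooked up on the C# side (XNA 3.1-style API assumed; BuildChunkVertices is a placeholder of mine). The point made later in the thread is that the half-to-float conversion happens in the input assembler, so the HLSL input struct can still declare plain float4/float2:
[CODE]
// Create the declaration straight from the element array in the struct:
VertexDeclaration voxDecl = new VertexDeclaration(GraphicsDevice, VoxVertex.Format);

// Fill a buffer with the packed vertices:
VoxVertex[] vertices = BuildChunkVertices(); // placeholder for your voxel mesher
VertexBuffer vb = new VertexBuffer(GraphicsDevice,
                                   VoxVertex.Size * vertices.Length,
                                   BufferUsage.WriteOnly);
vb.SetData(vertices);

// Bind before drawing:
GraphicsDevice.VertexDeclaration = voxDecl;
GraphicsDevice.Vertices[0].SetSource(vb, 0, VoxVertex.Size);
[/CODE]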
  13. An atheist, communist, ACLU lawyer professor was teaching a class on Karl Marx. “Before the class begins, you must get on your knees and accept that Marx was the most highly-evolved being the world has ever known, greater than Jesus Christ!” At this moment, a brave, patriotic, pro-life Navy SEAL stood up, holding a rock. “How old is this rock, professor?” The arrogant professor smirked and smugly replied “4.6 billion years, you stupid Christian.” “Wrong. It’s been 5,000 years since God create...
  14. Frequently setting data to a vertex buffer?

    [quote name='bullfrog' timestamp='1333936156' post='4929435'] Graphics cards have lots of memory, so it depends on how much you need. If your vertices are 24 bytes each (that's x, y, z position, texture coords, and diffuse), you can store over 400,000 vertices in 10MB of video memory. Creating a buffer big enough up front will save programming time and avoid lag spikes. But in the end, you need to make what you need to make, to make your game : ) [/quote] I understand that bit, but I'm wondering whether the buffer needs to be created with the predicted size, or whether I could create it empty and just keep putting data into it. I don't really think that's how it works, but it would certainly be memory efficient.
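    The usual pattern, sketched below; the capacity, vertex type, and FillFrameVertices helper are assumptions of mine, not from the thread. The buffer is created once at a worst-case size and only partially refilled each frame; XNA will not grow it for you, so a too-small buffer has to be thrown away and recreated:
[CODE]
// One-time setup (XNA 3.1-style API assumed):
const int MaxVertices = 65536; // pick a worst case that fits your scene
DynamicVertexBuffer dvb = new DynamicVertexBuffer(
    GraphicsDevice,
    VertexPositionColor.SizeInBytes * MaxVertices,
    BufferUsage.WriteOnly);

VertexPositionColor[] scratch = new VertexPositionColor[MaxVertices];

// Each frame, write only the vertices you actually generated:
int count = FillFrameVertices(scratch); // placeholder: fills 'scratch', returns how many
dvb.SetData(scratch, 0, count, SetDataOptions.Discard);
[/CODE]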
  15. Frequently setting data to a vertex buffer?

    Thanks for the reply. Now I only have one more question about it. Should I allocate the vertex buffer with plenty of space for everything to fit, or will it automatically resize?