

Member Since 02 Nov 2012

#5119273 Some things about memory management

Posted by C0lumbo on 26 December 2013 - 03:40 AM

If I understood it right, you have to take the memory you want to use over the lifetime of the game as a static field.

For example, I have a linear allocator. I would do something like this:

char subsystemMemory[2048];

linearAllocator.start = subsystemMemory;

char* renderSystemMemory = linearAllocator.Allocate(sizeof(RenderSystem), 4);

So to say, I could have a file where I declare all of the memory as static and use it throughout the game?


I think this is a perfectly valid way to do it, but I don't think you "have" to do it that way. I've always acquired the memory for my memory allocators by calling the system's memory allocation function (e.g. malloc).


In fact, in some cases your approach won't work. For example, some consoles have multiple memory 'arenas', and static allocations always come from the default arena. So if you want to make an allocator for MEM2 on the Wii, for instance, you would need a different approach.


Regarding alignment, it's not unusual for the caller to have an alignment requirement, so any general-purpose allocator should have that functionality built in. It's reasonable to set the minimum alignment to 4 bytes on a 32-bit platform and 8 bytes on a 64-bit platform, or just to round everything up to 16-byte alignment to make life easier for SIMD.
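To make the Allocate(size, alignment) call above concrete, here is a minimal sketch of a linear (bump) allocator with per-call alignment. The class name and fields are illustrative, not from any particular codebase; the backing memory can equally well be a static array or come from malloc.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative linear allocator: hands out memory from a fixed block,
// rounding each allocation up to the caller's alignment requirement.
struct LinearAllocator
{
    char*  start;    // base of the backing memory
    size_t capacity; // total bytes available
    size_t offset;   // current bump position

    void* Allocate(size_t size, size_t alignment)
    {
        // Round the current address up to the requested alignment
        // (alignment must be a power of two).
        uintptr_t current = reinterpret_cast<uintptr_t>(start) + offset;
        uintptr_t aligned = (current + alignment - 1) & ~static_cast<uintptr_t>(alignment - 1);
        size_t newOffset = (aligned - reinterpret_cast<uintptr_t>(start)) + size;
        if (newOffset > capacity)
            return nullptr; // out of memory
        offset = newOffset;
        return reinterpret_cast<void*>(aligned);
    }
};
```

A linear allocator like this can't free individual allocations; you reset the whole thing (offset = 0) at a well-defined point, e.g. end of level or end of frame.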

#5119081 Texture problem on a 3d model

Posted by C0lumbo on 24 December 2013 - 01:36 PM

Is it possible the UVs need to be normalised?


DirectX and OpenGL use normalised texture coordinates. That is, regardless of the texture's width and height, (0,0) and (1,1) always represent opposite corners of the texture image. It's possible that the ms3d model format (or the loading code you use) uses non-normalised texture coordinates, so that a 512x512 texture would have (0,0) and (512,512) as its corners.


Try dividing the UV coordinates by the texture dimensions and see if that gets you anywhere.
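The fix is a one-liner at load time; a sketch (the UV struct and helper name are illustrative):

```cpp
// Convert pixel-space UVs (as an ms3d-style loader might supply) into
// the normalised [0,1] range that DirectX/OpenGL expect.
struct UV { float u, v; };

UV NormaliseUV(UV uvPixels, float texWidth, float texHeight)
{
    return UV{ uvPixels.u / texWidth, uvPixels.v / texHeight };
}
```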

#5118911 OpenGL using one texture as the alpha mask of another

Posted by C0lumbo on 23 December 2013 - 04:31 PM

A) Does OpenGL already have functionality built-in for using one texture as the alpha mask of another?

B) When passing in a second texture to GLSL, are there features built in for rotating the texture coordinate for the second texture separate from the texture coordinate of the first texture?

C) This simple RotateTexCoord() function has 5 if() statements... that's 5 branches per fragment drawn for a trivial operation. How could it be optimized?


A) Don't think so

B) Don't think so

C) Use a matrix to represent the transformation. Calculate the 3x2 transformation matrix on the CPU and pass it into the vertex shader as a couple of vec3s; transforming the UVs then boils down to a couple of dot products. If you're not already doing so, make sure you transform the UVs in the vertex shader, not in the pixel shader.
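A sketch of the CPU side: build a 3x2 transform that rotates UVs by some angle around a pivot, packed as two rows you could upload as vec3 uniforms. In the vertex shader each output coordinate is then a single dot product, u' = dot(row0, vec3(u, v, 1)). All names here are illustrative.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Build the two rows of a 3x2 UV rotation about (pivotU, pivotV).
void BuildUVRotation(float angle, float pivotU, float pivotV,
                     Vec3* row0, Vec3* row1)
{
    float c = std::cos(angle);
    float s = std::sin(angle);
    // Translate pivot to origin, rotate, translate back - folded into
    // the third (translation) column of each row.
    *row0 = Vec3{ c, -s, pivotU - c * pivotU + s * pivotV };
    *row1 = Vec3{ s,  c, pivotV - s * pivotU - c * pivotV };
}

// The same dot product the vertex shader would do per coordinate.
float Dot(const Vec3& r, float u, float v) { return r.x * u + r.y * v + r.z; }
```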

#5118312 Camera shake

Posted by C0lumbo on 20 December 2013 - 12:32 AM

I've started using Perlin noise for camera shakes instead of spring-like solutions. It gives a similar result visually and is completely frame-rate independent.


Spring/physics-style solutions have a habit of behaving differently, or even catastrophically falling apart, when you run at very low or very high frame rates. That can be fixed by running that bit of the simulation at a fixed rate, but that didn't fit nicely with my camera code, which sat outside the fixed-rate update part of my simulation, so for me Perlin noise was less hassle.


I suppose you lose some control, though - e.g. with a physics-based solution you could push the camera in a specific direction (say, in response to an explosion or bullet impact) and have it spring back. With Perlin noise, my only control is fading the shakiness in and out.
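The key property is that the shake is a pure function of absolute time, so it doesn't matter how many frames you render in between samples. Here's a minimal sketch using smoothly interpolated value noise as a simple stand-in for Perlin noise; all names and constants are illustrative.

```cpp
#include <cmath>
#include <cstdint>

// Deterministic hash of an integer lattice point to [-1, 1].
static float LatticeValue(int32_t i)
{
    uint32_t h = static_cast<uint32_t>(i) * 2654435761u;
    h ^= h >> 13; h *= 2246822519u; h ^= h >> 16;
    return (h / 4294967295.0f) * 2.0f - 1.0f;
}

// Smoothly interpolated 1D noise in [-1, 1].
float SmoothNoise1D(float t)
{
    float f = std::floor(t);
    int32_t i = static_cast<int32_t>(f);
    float x = t - f;                     // fractional part in [0, 1)
    float s = x * x * (3.0f - 2.0f * x); // smoothstep easing
    float a = LatticeValue(i);
    float b = LatticeValue(i + 1);
    return a + (b - a) * s;
}

// Camera offset as a pure function of time: frame-rate independent.
float ShakeOffsetX(float timeSeconds, float amplitude, float frequency)
{
    return amplitude * SmoothNoise1D(timeSeconds * frequency);
}
```

In practice you'd sample separate noise channels (different lattice offsets) for the X, Y and roll components so they don't move in lockstep.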

#5117120 Convex hulls in modern games

Posted by C0lumbo on 15 December 2013 - 12:08 PM

I think most modern games use the simplest representation they can get away with for physics, which means there is almost always a separation between the collision mesh and the render mesh. Objects are often represented with a single convex hull or a few attached convex hulls, or even with simple geometric shapes like boxes, spheres and cylinders when they can get away with it.


Typically for environments it'll be a polygon soup, but lower detail than the render mesh.

For objects/characters, in order of preference, it'll be geometric shapes, single convex hulls, multiple convex hulls, then, quite rarely, non-convex polygonal shapes.

#5112912 Blending for a Particle System

Posted by C0lumbo on 29 November 2013 - 01:58 AM

BC2_UNORM (aka DXT3) only gives 16 possible values for alpha, so it's pretty poor for smooth gradients; dithering at compression time might help a little.


You're better off using BC3_UNORM (aka DXT5): it stores two alpha endpoints per block and interpolates between them, which generally produces much better results for smooth alpha gradients.
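To see why BC2/DXT3 bands smooth gradients, here's the quantisation its explicit 4-bit alpha puts every pixel through (a sketch of the encoding maths, not any particular compressor's code):

```cpp
#include <cstdint>

// BC2/DXT3 stores alpha as an explicit 4-bit value per pixel, so only
// 16 distinct alpha levels survive compression.
uint8_t QuantiseAlphaBC2(uint8_t alpha)
{
    // Round to the nearest 4-bit level, then expand back to 8 bits
    // (multiplying by 17 maps 0..15 onto 0..255).
    uint8_t q = static_cast<uint8_t>((alpha * 15 + 127) / 255);
    return static_cast<uint8_t>(q * 17);
}
```

Run a 0-255 gradient through this and you get 16 flat steps of 17, which is exactly the banding you see on soft particle textures.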

#5112385 Blending for a Particle System

Posted by C0lumbo on 27 November 2013 - 01:57 AM

I believe the problem with your video is that you have depth write and depth testing enabled. Usually, you want depth write disabled for particles.

#5109891 Fresh Graduate, Looking for Advice and Feedback

Posted by C0lumbo on 17 November 2013 - 02:58 AM

My recommendations are:


1. 10 rejections (well, 1 rejection and 9 no-replies) isn't that many. You should apply to a lot more places.

2. Working on a new from-scratch RTS might not be the best way to create a resume piece: it'll take a long time before there's something showable, and even then it'll probably look pretty rough. Why not consider making a map or a minor mod for a popular game that you particularly love?

#5109654 Inverse fog, Ugly interpolation

Posted by C0lumbo on 16 November 2013 - 01:51 AM

When you pass data from your vertex shader to your pixel shader, the GPU will interpolate values linearly. In many cases (e.g. interpolating UV coordinates), this linear interpolation is correct. However, for other functions, the linear interpolation will introduce some errors. Calculations done on the vertex shader are much cheaper and part of writing shaders well is balancing the performance versus accuracy of doing calculations per vertex.


I suggested passing distance from camera to the pixel shader and doing the remaining calculations on the pixel shader. This is because the distance from camera is a big part of the fog calculation and it's pretty linear (although not perfectly so, imagine the case where you're standing on a very large triangle). If you really needed complete accuracy you could pass the vertex positions through to the pixel shader and do the entire calculation per pixel.
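A small numeric illustration of the difference (the distances and fog density here are made up for the example; the fog formula is a generic exponential, not necessarily the one in your shader):

```cpp
#include <cmath>

// Exponential fog factor for a given eye distance.
float FogFactor(float distance, float density)
{
    return std::exp(-distance * density);
}

// Per-vertex fog: the GPU linearly interpolates the *factor* itself
// across the triangle.
float FogInterpolatedPerVertex(float d0, float d1, float t, float density)
{
    float f0 = FogFactor(d0, density);
    float f1 = FogFactor(d1, density);
    return f0 + (f1 - f0) * t;
}

// Per-pixel fog: interpolate the (nearly linear) distance, then apply
// the nonlinear fog curve in the pixel shader.
float FogPerPixel(float d0, float d1, float t, float density)
{
    float d = d0 + (d1 - d0) * t;
    return FogFactor(d, density);
}
```

Halfway along an edge spanning distances 10 and 100 with density 0.05, the per-vertex version gives roughly 0.31 while the per-pixel version gives roughly 0.06 - a very visible error on large triangles, and exactly zero difference at the vertices themselves.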

#5109544 Inverse fog, Ugly interpolation

Posted by C0lumbo on 15 November 2013 - 01:38 PM

It looks like the fog values are being calculated per vertex. If you have control of the vertex/pixel shaders then you could try changing it so that the distance from the eye position is calculated on the vertex shader (as this will interpolate linearly quite well), and then the fog equation is applied on a per pixel basis.


If you're using some technology that doesn't let you control the shaders then it might be trickier; you'll probably end up needing more triangles, or a fog that blends in more gently so the artifacts are less visible.

#5109380 Boolean issues

Posted by C0lumbo on 15 November 2013 - 02:10 AM

The array Vertex_List is only 3 elements long, but Draw_Blizzard_Logo writes to its fourth element. You are writing off the end of the array and splatting the bool value that happens to sit next to it in memory.


Edit: Bugs like this can be very tricky to track down. The technique I'd recommend you learn is how to set hardware breakpoints in your IDE. It's possible to create a breakpoint that fires when a variable changes; in this situation you could have set a hardware breakpoint on 'Enabled' and it would have led you straight to the bug.

#5105130 Programmatic 2d outline for textures, but with some z-offset

Posted by C0lumbo on 28 October 2013 - 01:58 PM

How about rendering the black outlines for all three segments first, then drawing the three segments' coloured parts over the top?


To render high-quality black outlines without rendering/sampling the texture 24 times, you could preprocess the alpha channel of the texture into a signed distance field. That way you only need a simple shader modification, and as an added bonus you can tweak the thickness of the silhouette easily at runtime.
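A brute-force sketch of the preprocessing step, assuming a binary alpha mask (for simplicity this computes unsigned distance to the nearest opaque texel rather than a true signed field, and an O(n^2)-per-texel search; real tools use faster sweeps):

```cpp
#include <cmath>
#include <vector>

// For each texel, store the distance to the nearest opaque texel.
// The mask is 1 for opaque, 0 for transparent; grid layout is row-major.
std::vector<float> DistanceField(const std::vector<int>& opaque, int w, int h)
{
    std::vector<float> dist(w * h, 1e9f);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            for (int oy = 0; oy < h; ++oy)
                for (int ox = 0; ox < w; ++ox)
                    if (opaque[oy * w + ox])
                    {
                        float dx = float(x - ox), dy = float(y - oy);
                        float d = std::sqrt(dx * dx + dy * dy);
                        if (d < dist[y * w + x]) dist[y * w + x] = d;
                    }
    return dist;
}

// Mirrors what the fragment shader would do: texels just outside the
// shape within `thickness` of it form the outline.
bool IsOutline(float distance, bool opaque, float thickness)
{
    return !opaque && distance < thickness;
}
```

The distance field gets baked into the texture's alpha channel offline, so at runtime a single texture sample plus a threshold gives you the silhouette, and changing the threshold changes the outline thickness for free.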

#5104960 Simulation accuracy across platforms

Posted by C0lumbo on 28 October 2013 - 02:04 AM

That is indeed an excellent article, read it carefully.


You don't mention which platforms you're targeting. I believe that by being incredibly careful and putting in a huge amount of effort you can eventually achieve consistency across AMD/Intel PCs, but throw in different platforms like Linux, OSX, iPhone and Android, and I think it's just not a viable approach. (I know Linux/OSX run on the same architecture, but you'll probably be using different compilers, which will optimise differently.)


Fixed point is the only way to be sure. I believe someone did a fixed-point version of Box2D for the NDS (http://code.google.com/p/box2d/wiki/FAQ) - might it be an option to switch to that?
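The core of the fixed-point idea, as a minimal Q16.16 sketch: everything is plain integer arithmetic, which behaves identically on every compiler and CPU, and that's what makes the simulation deterministic. The type and function names are illustrative.

```cpp
#include <cstdint>

// Q16.16 fixed point: 16 integer bits, 16 fractional bits.
typedef int32_t fixed16;

static const fixed16 FIXED_ONE = 1 << 16;

fixed16 FixedFromInt(int32_t i) { return i << 16; }
int32_t FixedToInt(fixed16 f)   { return f >> 16; }

fixed16 FixedMul(fixed16 a, fixed16 b)
{
    // Widen to 64 bits so the intermediate product doesn't overflow,
    // then shift back down to Q16.16.
    return static_cast<fixed16>((static_cast<int64_t>(a) * b) >> 16);
}

fixed16 FixedDiv(fixed16 a, fixed16 b)
{
    // Pre-shift the numerator up so the quotient lands in Q16.16.
    return static_cast<fixed16>((static_cast<int64_t>(a) << 16) / b);
}
```

Addition and subtraction need no special handling (they're just integer + and -); the cost is reduced range and precision compared with floats, and the need to audit every formula for overflow.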

Another possible option is a software floating-point implementation. This will likely be even slower than fixed point, and when I last looked into it I couldn't find a good free implementation, but if you've already gone a long way down the road of using floats it might be quicker to replace the floats than to refactor the codebase.


I think that if all you're doing it for is cheat prevention, then maybe you should take some other approach.

#5104161 Direct3D 11 Present makes up 98% of FPS ?

Posted by C0lumbo on 24 October 2013 - 12:35 PM

The Present function will be doing nothing on the CPU except waiting for the GPU to finish rendering your mesh. Your GPU is taking about 5ms to render it.


Once you add some work to the CPU (the game), you'll find that work is happening in parallel to the GPU, so you'll effectively get that 'present' time back.

#5100956 Connection minutes a problem much ?

Posted by C0lumbo on 13 October 2013 - 12:45 AM

I think a continuous connection is fine on a smartphone. Plenty of big-name Android/iOS single-player games keep connections open and regularly send metrics about your play session to their analytics servers, so it seems perfectly reasonable for an MMORPG-related app to do the same.


Obviously making efforts to minimise any unnecessary traffic is in the interests of both your customer's data plan and your own server's workload, but I don't see any need to tear down the socket regularly.