
Member Since 20 Mar 2012

#5309025 Need a very simple collision detection for walls. ( 3d )

Posted by on 01 September 2016 - 11:26 AM

Depending on your game design, you might be able to use a navigation mesh:



This can greatly simplify and accelerate your collision detection, and allow path finding too.
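At its simplest, checking a navmesh comes down to testing whether the point you want to move to lies inside one of the mesh's walkable triangles. Below is a minimal sketch of that point-in-triangle test, assuming a flat 2D mesh (the `Vec2`, `edgeSide`, and `pointInTriangle` names are mine, purely for illustration):

```cpp
#include <cassert>

// Hypothetical 2D point type for a flat navmesh.
struct Vec2 { float x, y; };

// Sign of the 2D cross product tells which side of edge ab the point p lies on.
static float edgeSide(Vec2 a, Vec2 b, Vec2 p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// A point is inside (or on the border of) the triangle if it is not on
// opposite sides of any two edges. If the destination fails this test for
// every triangle, the move would leave the walkable area, so reject it.
bool pointInTriangle(Vec2 p, Vec2 a, Vec2 b, Vec2 c)
{
    float d1 = edgeSide(a, b, p);
    float d2 = edgeSide(b, c, p);
    float d3 = edgeSide(c, a, p);
    bool hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
    bool hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
    return !(hasNeg && hasPos);
}
```

A real navmesh implementation would also track which triangle the character is currently in and only test neighbours, rather than scanning every triangle each move.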

#5304832 Adequate Windows Operating System Testing

Posted by on 09 August 2016 - 01:11 AM

To start with, I'd get an old PC / emulator running your lowest target, and thoroughly debug on that. That won't cost much, if anything, and should identify a lot of possible problems.

#5298851 Preparing a 3d game as a demo for my CV.

Posted by on 03 July 2016 - 01:11 AM

braindigitalis, collada supports animation, too. The problem is that I don't know how to load it into OpenGL.

lawnjelly, I already have the animation data from Blender, I have the file, I just don't have any idea how to parse it.


And I can't use vertex tweening, I need to use skeletal animation, because this is how I make my animations in Blender.

'Having a file' means nothing *unless* you can parse it. A quick google of collada suggests it may be overly difficult to parse.



Hint: Skeletal animation is often used to create the data for vertex tweening.

#5298814 Preparing a 3d game as a demo for my CV.

Posted by on 02 July 2016 - 11:00 AM

But to load it into OpenGL is really above my skill level. I didn't find any code on the internet that loads collada files into opengl. I can't find sufficient info on the internet about skeletal animation and how exactly everything works, there are just a few articles that touch the surface of the subject, but nothing more. (Or I'm just stupid.)


To state the obvious, but one of the best ways to increase your skill level is to try new things out. Programming is all about this. :D


Think about things logically:

  1. Are you being realistic about what you can achieve in the time frames? If yes...
  2. You need animation data from blender. Either you need to parse some 3rd party format, or export the data into your own format. Have a look at the source code for blender python addons for exporting. It isn't a great stretch to learn some basic python (which is icky!) and export what you need.
  3. Consider what kind of animation you want. If you are starting out, I'd recommend starting just exporting 'vertex tweening animation', which means just export the position of each vertex on each frame, rather than the bones / skinning stuff. Vertex tweening is much easier to get working, and is not too daunting a task to write a shader for in the game.


Getting bones animation working is usually the final solution used for character animation, but is quite tricky / finicky to debug, and not something I'd recommend until you understand tweening. Tweening can of course be used for character animation too, and was used in many games back in the day (quake 3 etc), it just has some drawbacks such as memory use and lack of flexibility for animation blending.
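To make the tweening suggestion concrete, here is a minimal CPU-side sketch of what playback looks like: each animation frame is a baked snapshot of every vertex position, and the renderer just linearly interpolates between the two nearest frames. (The `Vec3` and `tweenFrames` names are my own; in a real game you'd do this in a vertex shader, binding both frames as attributes and passing the blend factor as a uniform.)

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Blend two baked keyframes of vertex positions.
// frameA / frameB hold one position per vertex; t in [0, 1] is how far
// playback has progressed from frameA towards frameB.
std::vector<Vec3> tweenFrames(const std::vector<Vec3>& frameA,
                              const std::vector<Vec3>& frameB, float t)
{
    std::vector<Vec3> out(frameA.size());
    for (std::size_t i = 0; i < frameA.size(); ++i)
    {
        out[i].x = frameA[i].x + (frameB[i].x - frameA[i].x) * t;
        out[i].y = frameA[i].y + (frameB[i].y - frameA[i].y) * t;
        out[i].z = frameA[i].z + (frameB[i].z - frameA[i].z) * t;
    }
    return out;
}
```

The memory cost mentioned above is visible here: you store a full copy of the vertex positions per frame, which is why tweening trades memory for simplicity compared to skinning.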

#5297343 Artefacts in triangle mesh

Posted by on 20 June 2016 - 12:51 PM

OK, with a .obj I could finally load it in blender to see what was going on.


Here is the before pic:




Here, I cutaway all the polys around the smoothing group of interest. This was the diagnostic test I suggested earlier. It cures the problem. This suggests it is because of shared vertex normals between faces across the blade. :)




The solution in blender, along with 'make sharp' (which you have already done successfully), was to add an edge split modifier, set to split edges on the 'sharp' marked edges. This forces the geometry to be split, so there is more than one vertex normal along the edge. Thus each vertex normal can be closer to its own face, and doesn't have to approximate between 2 wildly different faces with different face normals:



I'm sure there must be some setting you need to make in Max to get the smoothing group to 'do its thing' (I am not a 3d studio user), because it doesn't seem to be.


And here is a clearer explanation; I've set blender to show vertex normals. Here is the original file. The normals at the side of the blade are *shared* between faces pointing in opposite directions, so the normal just averages, which makes it look icky on a sharp edge.



This is after the edge split, which splits the geometry so there is a separate vertex normal for each face on the edge. Notice how the normal now points more towards the face, so you get the sharp edge you are looking for.
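The averaging behaviour described above is easy to see in code. Here's a small sketch (the `Vec3` and `sharedVertexNormal` names are mine, just for illustration): a shared vertex normal is typically the normalised average of the adjacent face normals, which is exactly what you want on a smooth surface, but across a 90-degree edge the average points "between" the two faces, so neither face lights correctly.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Shared vertex normal = normalised sum of the adjacent face normals.
// After an edge split, each side keeps its own vertex and can simply use
// its own face normal instead, giving the sharp edge back.
Vec3 sharedVertexNormal(Vec3 faceNormalA, Vec3 faceNormalB)
{
    Vec3 sum = { faceNormalA.x + faceNormalB.x,
                 faceNormalA.y + faceNormalB.y,
                 faceNormalA.z + faceNormalB.z };
    float len = std::sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
    return { sum.x / len, sum.y / len, sum.z / len };
}
```

For example, a face pointing up (0,1,0) next to a face pointing sideways (1,0,0) ends up with a single shared normal at 45 degrees to both, which is what smears the lighting across the blade edge.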


#5296937 Artefacts in triangle mesh

Posted by on 17 June 2016 - 05:09 AM

For one, this is why we have normal maps. You can bake nice normals at a super high poly resolution. Your real issue is just not enough edge loops.


Any time you have a vertex that is on an edge that is 90 degrees, or close to it, you need to create loops around it, otherwise it is interpolating lighting over such high angles. Also, this is why you usually apply subdivision surfaces and then bake normals onto low poly meshes, for this same type of reason. More polys = better surface representation and lighting.



Isn't the whole point of smoothing groups so you avoid having to manually create an edge loop though? Note I've only used blender, and 'make sharp' on an edge does (I think) the same thing, it internally forces a duplicate of vertices on edges so they can have different normals and not share the same normal.


To figure out what is happening, you could delete all the polys outside the smoothing group you have marked in red, and see if the 'artefacts' persist. At least you will know then whether it has been caused by the shared normals, or some issue in the red group mesh.


[just checked in blender and you also need the edge split modifier in addition to 'make sharp', sorry for any confusion :) ]

#5296682 Real find job in game industry with my 2d skill level ?

Posted by on 15 June 2016 - 10:35 AM

I think they are very good, show a lot of promise. :)


In my opinion as well as 2d, you should buy / beg / steal something like zbrush, mudbox, mari, maya, 3d studio max or even blender, so you can familiarise yourself with how your skills would translate to an asset creation pipeline. Don't get me wrong, there is often a need for straight 2d art, but you will be so much more marketable if you can use the same techniques with tools for texture painting / sculpting.

#5067111 Difference between software engineer and programmer

Posted by on 03 June 2013 - 09:15 AM

Job title inflation, imo. ;)




It's like calling a toilet cleaner a 'sanitation engineer'.

#5034102 Game engine Memory Manager

Posted by on 19 February 2013 - 04:47 AM




(don't hate me for using singletons :) the engine only uses two)


Two too many.

#5034100 Game engine Memory Manager

Posted by on 19 February 2013 - 04:44 AM

In my experience, tight memory management can be incredibly useful in certain situations, on certain platforms, and less necessary in others.


Keeping tight control of memory is particularly useful on things like consoles, devices with limited memory, and especially situations where there is no swapping to disk to page in 'extra memory' when you run out. In these situations, if you run out of memory, either your application handles it gracefully (says 'cannot do this' or whatever) or you get a crash.


Certain applications, like games, rocket control software, or plane autopilots, I see as more 'mission critical', so I don't want them to fail or crash under any circumstances. Whereas for, e.g., a word processor, it is more acceptable if, when trying to load a document, it says 'cannot load document, not enough memory on this device' (although obviously you'd try to design to prevent this happening). But when playing a game it's no good if it says 'cannot load level 5, out of memory', as you cannot progress in the game.


So for games that are anything other than very simple ones, I myself would tend to use a memory manager. However, for general applications / editors etc. which have to adapt to whatever particular document / documents they are editing, where they are allowed to 'fail' due to out of memory errors, I'm much more likely to just use, directly or indirectly, the OS allocators. :)


If you do preallocate blocks of memory for each of your game 'modules', you are right in saying it is useful to know in advance how much memory to allocate. Preallocating blocks for different modules can be very useful when you need to work to a memory budget, particularly with a team of programmers, rather than just putting it all together and 'hoping it doesn't run out of memory'. For some areas, this will be easy to work out (e.g. max number of sound buffers, things like that).


For others, particularly game level resources, the memory requirements may change from level to level. You may want certain levels to have more sound data than others, some more texture data, etc. However, a way around this, rather than having set limits for sound data / textures / geometry etc., is to have these data shared in a 'level file', and have a certain maximum size for your level file data.


For tracking memory leaks, as the others say, just because you are using your own allocator it doesn't automatically 'fix' leaks. However, you should design your allocator so that along with the allocation it can store things like the line number and filename (in some type of debug build). Then on exit, you can report any allocations unfreed after cleanup, and other statistics, like the maximum memory used in each module etc.


You can also put 'check' regions of a few bytes around allocations, to detect when you have written outside of bounds, off the end of arrays etc.
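Both ideas (file/line tracking and check regions) fit in a fairly small amount of code. Below is a minimal sketch, under my own hypothetical names (`debugAlloc`, `debugFree`, `reportLeaks`); a real engine version would also need thread safety, alignment handling, and to be compiled out of release builds:

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <map>

// Per-allocation record: where it came from and how big it is.
struct AllocInfo { const char* file; int line; size_t size; };
static std::map<void*, AllocInfo> g_allocs;
static const unsigned char GUARD = 0xFD;  // fill pattern for check regions
static const size_t GUARD_SIZE = 4;

// Allocate with guard bytes either side of the user block, and record
// the source location (in real use, wrapped in a macro passing __FILE__, __LINE__).
void* debugAlloc(size_t size, const char* file, int line)
{
    unsigned char* raw = (unsigned char*)std::malloc(size + 2 * GUARD_SIZE);
    std::memset(raw, GUARD, GUARD_SIZE);                     // front guard
    std::memset(raw + GUARD_SIZE + size, GUARD, GUARD_SIZE); // back guard
    void* user = raw + GUARD_SIZE;
    g_allocs[user] = { file, line, size };
    return user;
}

// Free, returning false if either guard region was overwritten
// (i.e. the caller wrote out of bounds at some point).
bool debugFree(void* user)
{
    AllocInfo info = g_allocs[user];
    unsigned char* raw = (unsigned char*)user - GUARD_SIZE;
    bool ok = true;
    for (size_t i = 0; i < GUARD_SIZE; ++i)
        ok = ok && (raw[i] == GUARD) && (raw[GUARD_SIZE + info.size + i] == GUARD);
    g_allocs.erase(user);
    std::free(raw);
    return ok;
}

// Called at shutdown: anything still in the map was never freed.
void reportLeaks()
{
    for (const auto& a : g_allocs)
        std::printf("Leak: %zu bytes from %s:%d\n",
                    a.second.size, a.second.file, a.second.line);
}
```

The same map can also be extended to track peak usage per module, as described above.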


There are also 3rd party systems you can use for most of this leak detection and bounds checking. Although these may not be available on your target platform .. so having your own can be very useful. It's the kind of thing you can write once and reuse in other projects, and great to have in your 'toolbox'.

#5030400 pointers while serializing

Posted by on 09 February 2013 - 10:32 AM

You can also store the pointers as offsets from the start of a structure.


Then when you load this structure into memory, you can 'fixup' the pointers by adding the offset to the actual memory address of the start of the loaded structure and saving the result back, and voila, your pointers are valid again.


0 Start of structure
1 Pointer to Chicken (offset 3)
2 Pointer to Duck (offset 6)
3 Chicken
4 ..
5 ..
6 Duck
7 ..
8 ..
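The fixup pass itself is just "add the base address to each stored offset". Here's a small sketch using hypothetical `Block` / `Bird` types mirroring the chicken/duck layout above (not any particular engine's format):

```cpp
#include <cassert>
#include <cstdint>

struct Bird { char name[8]; };

// On disk, the "pointers" in this header are byte offsets from the start
// of the block, exactly as in the diagram above.
struct Block
{
    Bird* chicken;  // stored as offset 16 (say), fixed up after load
    Bird* duck;
};

// After loading the whole block into memory, turn each stored offset
// back into a real pointer by adding the block's base address.
void fixupPointers(Block* block)
{
    uintptr_t base = (uintptr_t)block;
    block->chicken = (Bird*)(base + (uintptr_t)block->chicken);
    block->duck    = (Bird*)(base + (uintptr_t)block->duck);
}
```

One nice property of this scheme is that the whole structure can be loaded with a single read, with no per-object allocation or pointer patching table needed beyond the fixup pass itself.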


#5010964 "Correct" Music Note Structure?

Posted by on 15 December 2012 - 09:43 AM

More stuff:

Note pitch: I'd stick with just a note number like MIDI for now, and the 12 note western scale. 99% of music is written like this, and handling other systems is a bit more advanced and something you can tack on later. I wouldn't recommend storing notes as float frequencies, for several reasons: accuracy (say you transpose down, then back up later), and frequencies don't have a linear relationship with note number. You might want to do operations based on the relative pitches of notes, or detect chords etc. All of this would be stupidly difficult if you were just storing wavelengths / frequencies. Besides, your source instruments may have different base frequencies anyway, and these would need to be compensated for.

Pan: Why limit yourself to stereo pan? What about surround sound?

Channels / instrument info on a note: Would you want the note to determine this, or the track and / or pattern? Having a 'grouping' feature for notes can be useful though. Remember you are going to want to be able to do stuff like edit the instruments you are using quickly and easily, and not change this for every note.

What happens when, by accident, you set 2 bunches of notes to the same instrument ID (if storing it on the notes)? You have then lost their 'individuality'. Better to store something else that then maps to the instrument.

Volume: This is usually key velocity rather than volume (there is midi volume as well, but you wouldn't store this per note, but as a separate event), which in midi is 0-127. There is also release velocity, which may or may not be used by the instrument.

There's also other stuff like pitch bend, aftertouch etc, which you can store as a separate event.

Note name / ID: Why try and store this on the note? If your pattern has e.g. an array or vector of 35 notes, then you know its ID as you access it.

An example to start with might be something like this:

class Note
{
public:
    int m_iStartTime;                 // in PPQN; could be negative, if you want some notes to start before the official start of the pattern
    unsigned int m_uiLength;          // in PPQN
    unsigned int m_uiKey;             // e.g. like MIDI, middle C is 60
    unsigned int m_uiVelocity;        // 0-127?
    unsigned int m_uiReleaseVelocity; // 0-127?
};

Once you have a simple system working then it will become more obvious where to add things.

To reiterate on the notes side of things, don't worry so much about space saving, just concentrate on simplicity. Note data doesn't tend to be that large. It's more when you get to the audio side you need to pay attention to the data structures / bottlenecks.

And rather than just having a struct-like class you can use accessor functions so the actual data underneath can be anything you want.

#5010955 "Correct" Music Note Structure?

Posted by on 15 December 2012 - 08:55 AM

Bit of a brainfart here, but hopefully something useful:

As L. Spiro says, don't store your times as floats or something like that, store them as ticks.

Sequencers commonly work on a scale of PPQN (pulses per quarter note), so if you adjust tempo (once off or gradually throughout a song) it just *works*. The PPQN values are usually things like 48,96,192 etc.

Bear in mind that if you are doing 4/4 music that's all good, but if you are using triplets, or groove, you'll want the PPQN divisible by 3, and with enough precision for your 'groove'.

You'll also probably want to store your note timings as offsets from the start of a pattern, rather than the start of the song. This way you can have several instances of the same pattern at different parts in the song.

Also instead of storing things as e.g. char[3] to save space, it's probably more sensible just to make them 4 byte unsigned int / ints and keep your structures 4 byte aligned so you (or the processor) aren't faffing about for no reason. You can always compress them on import / export, if you really need to.

Another reason for PPQN is so you easily change the output sample rate (assuming you are going to do some audio instead of purely MIDI).

I've done several audio / sequencing apps and don't think I stored anything as floats. PPQN can be used to calculate the exact sample for an instrument to start / end (and you might precache this kind of info). You could possibly use something more accurate to get within sample accuracy for timing, but I've never bothered myself.
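The tick-to-sample conversion mentioned above is a one-liner once you fix the tempo. A minimal sketch (the `tickToSample` name is mine; this assumes a constant tempo, whereas gradual tempo changes would need to integrate over the tempo map):

```cpp
#include <cassert>

// Convert a PPQN tick position to a sample offset.
// At 120 BPM a quarter note lasts 60/120 = 0.5 seconds, so with 96 PPQN
// each tick is 0.5/96 seconds; multiply by the sample rate to get samples.
long long tickToSample(long long tick, int ppqn, double bpm, double sampleRate)
{
    double secondsPerTick = 60.0 / (bpm * ppqn);
    return (long long)(tick * secondsPerTick * sampleRate + 0.5);  // round to nearest
}
```

Because the song data itself stays in ticks, changing the output sample rate or the tempo only changes this conversion, not the stored notes, which is exactly the benefit described above.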

It's really worth using a plugin architecture for different components of a sequencing / audio app; I'd highly recommend it. You can make effects plugins (reverb, delay, chorus etc), and instrument plugins. You could potentially also use VST plugins or similar, if you can work out their interface (you may find some open source apps that have managed this).

I'm currently rewriting a sequencer / audio app I wrote a few years ago, and have actually moved to using plugins for things like quantization / legato / groove / arpeggios. Have a think about whether you want to be able to do stuff like 'undo' quantization, keep original values, or have a modification 'stack' applied to notes.

I don't think you'll get the exact structures bang on first time, it's the kind of thing you write a first version, then realise there's a better way of doing it, redo it, etc etc. But it is fairly easy to get something usable. You may also spend as much time on user interface / editing features as the stuff 'under the hood'.

As for APIs, I have so far cheated and don't actually use MIDI input or output (although I have done that in the distant past, and it wasn't that difficult, I don't think). I have just been writing a MIDI file importer though, refreshing my memory lol.

If you want realtime MIDI input you'll have to pay much more attention to latency and the APIs you use. I was just getting by with the old Win32 audio functions for primary / secondary buffers, but the latency is awful, so using DirectSound (or I think there may be a newer API in Windows 7) would be better. Sorry I can't help more there, as I haven't researched it myself yet.

Also I'd add: consider using Direct3D or (in my case) OpenGL to accelerate the graphics side. This way you can easily show the position within a song without overloading the CPU, causing stalls and having your audio stutter.

Once you start doing the audio side a bit of SSE / SIMD stuff helps. And you have to think carefully about how you'll structure your tracks / sends to effects, to make it efficient but also customizable.

#5008463 A game made solely to tell a story

Posted by on 08 December 2012 - 05:27 AM

Sure you can do this. There is a continuum between zero-user-choice audio visuals, say a 3d movie like Toy Story, and full free choice games. A lot of game artists also work on movies.

When there's no user interaction, there's a lot of shortcuts you can take. Objects may only need to be built with one viewing angle for instance. You don't tend to need physics representations in the same way. And the whole thing can be prerendered with smoother curves and effects. On the other hand the detail expected tends to be higher for movies.

A lot of games have scripted elements. This would be more akin to a movie sequence. Game designers have to choose how scripted a game is (which is good for storytelling), and how much choice there is. Sometimes it can be difficult to force the intended story without giving the player the impression that the game is linear.

As the others say, there is a cutoff point, where your game is so linear, that you might as well make it pre-rendered or a movie, so you can take advantage of those techniques.

#4985394 Should I give up?

Posted by on 30 September 2012 - 10:33 AM

No matter what language you use, there is always something that has to be distributed with it.

With C/C++ programs, you'll have to distribute the proper library DLLs and make sure the user has the right version of the C/C++ runtime installed. Then there are video card drivers. The DirectX user redistributable, etc...

So nothing will change. Just the name of the language!

I see what you are saying, but I'm not sure it's strictly always true, it depends on the type of game / app you are writing, and the market (think e.g. casual games, apps, versus AAA games).

If you write c / c++ and statically link to the runtimes, there are no dependencies due to the language, only core OS dlls (which will always be present).

The need for more and more sprawling dependencies is up to you, your choice of language / tech and what third party stuff you decide to pull in. (Sometimes you can statically link to third party stuff though).

I think azonicrider is right to an extent .. end users are easily put off installing stuff. If your game needs a 15 minute download from a third party and separate installation just to run, they'll probably move on. I know I do. If I see a 'your java needs to be updated' or flash or whatever, I'm like 'forget it'. So there is a good argument for considering your market before deciding what dependencies to rely on.