CukyDoh

Members
  • Content count

    42
Community Reputation

220 Neutral

About CukyDoh

  • Rank
    Member
  1. Particles with DOF

    I'll agree with everyone else and say the more interesting approaches would be:

    • [b][i]Inferred-lighting style dithered depth[/i][/b]
      - Only draw transparency to some pixels and then resolve after DoF.
      - Should "just work" and give particle lighting (see Volition's rain for an example), but likely low quality.
    • [b][i]Draw particles after DoF and apply manually[/i][/b]
      - I've played with this myself but never had the best results. It does solve two issues, though: the DoF destroying your particles, and your particles not getting the correct DoF.
      - Draw the particles, then scale and blur them (or cheat with mips and pre-blurred textures) after your DoF pass.
      - You can have some fun putting e.g. a hexagon in the lower mips of a flame ember particle; it gives interesting results!
    • [b][i]DoF pass per render batch[/i][/b]
      - Another I've played with, with interesting (but expensive) results. This works well for larger, rarer objects, e.g. a glass window.
      - When drawing the window, apply DoF to the background based on the window-to-previous-depth distance. Allow the window to depth write and then apply post DoF as usual.
      - This gives fairly correct layered DoF, as the original far layer accumulates layers of DoF.
      - Using a layered depth buffer approach is likely much cheaper, more correct and more flexible, though! ;-)

    For the most part this is still an "unsolved" issue in most major engines; e.g. CryEngine and Unity both exhibit the artifact. Even the latest Unreal 4 screenshots show this problem, which means they haven't found a robust real-time solution yet. Note how most of the particles below are actually in the foreground: [img]http://attackofthefanboy.com/wp-content/uploads/2012/05/unreal-engine-4-screenshots.jpg[/img] [img]http://attackofthefanboy.com/wp-content/uploads/2012/05/f_unreal43_ss.jpg[/img]
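    The "draw particles after DoF" idea needs a per-particle blur amount derived from depth. A minimal sketch of how one might compute it; the function name, the `focusDepth`/`focusRange` parameters and the linear falloff are all my own illustrative assumptions, not any engine's API:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical circle-of-confusion estimate: picks a blur radius (or a
// pre-blurred mip level) for a particle drawn after the main DoF pass.
// Linear falloff from the focus plane, clamped to a maximum radius.
float CocRadius(float depth, float focusDepth, float focusRange, float maxRadius)
{
    float coc = std::fabs(depth - focusDepth) / focusRange; // 0.0 at the focus plane
    return std::min(coc, 1.0f) * maxRadius;                 // clamp to the largest blur
}
```

    A real implementation would match whatever CoC model the main DoF pass uses, so the manually blurred particles blend in with the rest of the frame.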
  2. Just an FYI, jump to page 43 of this: http://fumufumu.q-games.com/gdc2010/shooterGDC.pdf to get a better look at PixelJunk's approach(es). They settle on some pretty SPU-intensive algorithms, so they may be of limited use to you, but they do reference a few other methods. If you don't need anything that accurate (i.e. just edge detection), you can always do a simple blur for the field approximation near edges, or run the iterative style (only comparing immediately neighbouring cells) for just a few repeats, which can give a similar edge detection. Basically, it's worth asking yourself: do I *[i]need[/i]* the entire field, or just the edges?
  3. [quote name='circassia' timestamp='1320253706' post='4879758'] Am i understanding correct, that i am doing 4 lookups like this and get data of 16 pixels?: (middle point = actual pixel) [/quote] Yes, it should indeed!
  4. From the effect it gives, I'd guess that you're using a fixed kernel size, which then steps over pixels and misses some data. As you gradually increase the size of the filter, you "randomly" get different pixels factored in, seen as a jitter. I'll try to explain what I mean in this image: [img]http://img847.imageshack.us/img847/5807/samplesa.png[/img] That's not the best example, I know, but hopefully it illustrates my point. As you move your sample points, you get different data coming in (under each arrow), and in the best case will interpolate between the nearest pixels. The fully blurred version of the above colour strip can "only" be obtained by sampling [i]every [/i]pixel. The problem with using 3 samples is that (at most) you can only sample 6 out of those 9 pixels, which leads to a very different average depending on where the samples are taken. Assuming this is the case, you'd be better off either increasing your sample count/filter size when trying to get a more blurred image, or running multiple blur passes to blur the blur (taking averages of the averages!). Another option would be to look into downsampling the image and blending between the results, e.g. a full-size and half-size version.
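    To illustrate the "blur the blur" suggestion on a 1D strip of values, here is a tiny sketch of one small-kernel pass that can be repeated; a 3-tap box filter with clamped edges, purely illustrative rather than anyone's actual shader:

```cpp
#include <vector>

// One pass of a 3-tap box blur over a 1D strip, clamping at the edges.
// Running it repeatedly widens the effective kernel: averages of averages.
std::vector<float> BoxBlurPass(const std::vector<float>& src)
{
    std::vector<float> dst(src.size());
    for (std::size_t i = 0; i < src.size(); ++i)
    {
        float left  = src[i == 0 ? i : i - 1];              // clamp at left edge
        float right = src[i + 1 < src.size() ? i + 1 : i];  // clamp at right edge
        dst[i] = (left + src[i] + right) / 3.0f;
    }
    return dst;
}
```

    Each extra pass spreads every pixel's contribution further, which is why a few cheap passes can stand in for one wide (and expensive) kernel.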
  5. Simple but clever little technique, good job!
  6. UK Universities

    Both my brother and I have just finished degrees in Computer Games Development at the University of Central Lancashire; well, almost, we technically graduate in July. We have both already signed contracts for non-graduate roles at some pretty major UK-based studios, though. So if you're still looking around, I'd recommend checking that course out. The programming taught is mainly based on C++, although you will cover a fair bit of C# and Java, plus various other languages like Python, XML, XHTML and Lua during some of the modules. There's a strong emphasis on games, and you will learn a lot about AI and graphics (mainly DirectX-based for the actual coding, but the information transfers to any other API), with half the course devoted to these areas. The other half is more general software development, methodologies, design and theory. Having just gone for a bunch of interviews up and down the country, I got offered nearly every position I went for because the course had prepared me so well. I was well aware of everything I was being asked, and in some cases knew a lot more than I was expected to know! As for advice for your courses now, I'd just reiterate what everyone else has said. Anything related to programming, science and maths will help down the line, but is not essential. Any good course will teach you everything you need to know anyway, but prior experience can make a big difference. Particularly look into vector and matrix maths.
  7. float issues

    You could always just try adding 0.5 instead of 1.0. That way it won't tip over to the next number unless it was already going to round up there anyway. Or, as everyone else said, just use a double for the accuracy.
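    A quick sketch of the add-0.5 trick; truncation then rounds to the nearest whole number instead of always rounding down (valid for non-negative values only):

```cpp
// Adding 0.5 before truncating to int rounds to the nearest whole number
// instead of always rounding down. Only correct for non-negative values;
// negative values would need 0.5 subtracted instead.
int RoundToNearest(float v)
{
    return static_cast<int>(v + 0.5f);
}
```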
  8. Cosine Interpolation

    Quote:Original post by Eagle11 Ok, so if I understand correctly, what this function does is find a point on the curve between v1 and v2. The greater 'a' is the closer the point is to v2. Is that correct? Yep, that's exactly right! Sorry, I don't have time to read the code, but I'd assume it's used when generating a heightmap because: if you imagine a linear interpolation between points (or the noise, or whatever is used to generate the points; noise is generally... well, noisy, with lots of randomly spiking values), you'd end up with very sharp lines, points going up and down in big spikes everywhere, kind of like /\/\/\/\/ if you sampled along the surface. But if you use a cosine interpolation to generate the points instead, you'd get smooth tops and bottoms to the generated line; something like the above, but curved, instead of sharp points with a sudden change in direction. So basically it's to give a more pleasing visual result, I'd assume. Again, sorry I don't have time to read it and look more into the actual reasoning, but just as a comparison, try changing the function yourself to linear interpolation instead and see how the heightmap looks. The cosine interpolation function from before would become a simplified: float LinearInterpolate(float v1, float v2, float a) { return v1*(1.0f - a) + v2*a; } Presumably with a heightmap it's something along the lines of what I said, but changing it yourself is probably the quickest way to see! And of course, if you're curious, try writing your own other interpolation functions and see what happens.
  9. Cosine Interpolation

    Firstly, if you haven't already, I'd suggest you read up on and play around with interpolation, as it's an incredibly useful and pretty common technique. Linear interpolation is where you have two values, in this case v1 and v2, and you specify a value between 0.0 and 1.0 that says how far to interpolate between them. E.g. interpolating 0.0 between v1 and v2 would be equal to v1. Interpolating 1.0 would be equal to v2. Interpolating 0.5 would be equal to 0.5 * v1 added to 0.5 * v2. This can be written in code as: float v1, v2; // Interpolating values float i; // Interpolation amount float result = v1 * (1.0f - i) + v2 * i; Essentially you take the interpolation amount of one value, and the opposite amount of the other value, assuming the interpolation amount runs from 0.0 to 1.0. What your function does is the same process, but rather than following a straight line between the two values, it follows the cosine curve instead. The first two lines turn your interpolation value into an angle to sample the cosine wave at, then sample the wave, converting its range to run between 0 and 1 instead of the usual -1 to 1. The cosine wave goes from -1 to 1; adding 1 changes that to 0 to 2; halving changes it to 0 to 1. The last part of the function then performs a normal linear interpolation, but using the value from the cosine wave instead of the value "a" that you passed into the function. Sorry if that's a little unclear. I'd recommend you experiment a little with linear interpolation, and have a look at the cosine wave, and hopefully the point of the function should become a little clearer.
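    Putting the steps above together, the whole function might look like this. This is a common form of cosine interpolation; the exact naming and layout are my assumption, since the original code isn't quoted here:

```cpp
#include <cmath>

// Cosine interpolation: remap "a" through the cosine curve, then do a
// normal linear interpolation with the remapped value.
float CosineInterpolate(float v1, float v2, float a)
{
    float angle = a * 3.14159265f;              // interpolation value -> angle in [0, pi]
    float t = (1.0f - std::cos(angle)) * 0.5f;  // cosine's -1..1 range mapped to 0..1
    return v1 * (1.0f - t) + v2 * t;            // standard linear interpolation
}
```

    At a = 0 this returns v1, at a = 1 it returns v2, and in between the curve eases in and out instead of moving at a constant rate, which is exactly what smooths the spikes out of a heightmap.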
  10. ok so im' a bit new

    In my experience I've found that it's generally best to read and try to understand the entire book. I've also found that although I could follow the material, I didn't fully realise where to use all the concepts I'd learned; that just came with experience. So it is worth reading everything, IMO, but a lot will just come with experience, and you will most likely come back to your books for reference. I've been programming for around 2 1/2 years now, which I know isn't that long, but it's more than enough to get all the basics down, and I still go back to some of my early C++ books occasionally for reference. Also, I've heard great things about the Sams books too. My brother used to have one and always said how great it was!
  11. game ending itself problem...help?

    Quote:Original post by BlueBan007 Is it an option to move my project and its settings from my computer in vc++ to another computers vc++? like can you just use like a flash drive and drag and drop projects from the same program between computers? Generally you can do just that. Make sure you get all the project files too and you should be able to move it wherever you want to with settings intact!
  12. Help, game is killing my computer

    After quickly scanning your code, the problem seems to be what everyone else has already suggested: you're loading images into pointers and then never freeing them up again. I'll try to show what I mean in an example. All of your images are cycling the same few pointers:

    SDL_Surface *img1 = NULL;
    SDL_Surface *img2 = NULL;
    SDL_Surface *img3 = NULL;
    SDL_Surface *img4 = NULL;
    SDL_Surface *img5 = NULL;

    Each time you load a new area, you're trying to overwrite the contents of these images. In reality, however, you're loading a new image and telling the pointer "okay, please point to the new image now". It does do that successfully, but the old image is left loaded in memory with no pointer pointing at it, and so there is no way to access it! Each time you move between zones you repeat this, so you slowly build up more and more images in memory that you're no longer using, but can no longer access to remove either. What you need to do is alter your code so that each time you load a new image into the same pointer, you remove the old one first (or use separate pointers for all images):

    // Check if "img1" is pointing to an image already
    if (img1 != NULL)
    {
        // Free the image out of memory
        SDL_FreeSurface(img1);

        // Set to NULL so we know it's okay to reuse,
        // and avoid accessing the now-freed memory
        // by accident
        img1 = NULL;
    }

    Now, I'm not quite sure of the exact code as I don't use SDL, but I saw that function in your code and I'm assuming it's the correct one. For every pointer you're about to use that may already be pointing to something, check if that's the case; if it is, free it up first before loading something new, as in the code above. So go through each of your load functions and, for each pointer you use, make sure it's okay to use it first.
Alternatively, you could build a separate "Unload All" function that you call each time, which checks all the image pointers you're using. Just double-check that this is the right way to release the images first, and correct the code if not. Other than that, this should hopefully help you out! As a general rule of thumb: as soon as you write code that allocates something through a pointer, write the code to de-allocate it straight away, so you don't forget!
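    The free-before-reload pattern described above, sketched with a stand-in Image type rather than SDL itself; LoadImg and FreeImg here are placeholders for IMG_Load/SDL_FreeSurface, not SDL's real API:

```cpp
// Stand-in types to show the pattern: free whatever the pointer currently
// holds before pointing it at a newly loaded resource.
struct Image { int id; };

Image* LoadImg(int id)     { return new Image{id}; } // placeholder loader
void   FreeImg(Image* img) { delete img; }           // placeholder for SDL_FreeSurface

// Reuse one pointer "slot" across zone loads without leaking the old image.
void Reload(Image*& slot, int newId)
{
    if (slot != nullptr)     // an old image is still loaded
    {
        FreeImg(slot);       // release it before losing the pointer
        slot = nullptr;      // mark the slot as safe to reuse
    }
    slot = LoadImg(newId);
}
```

    Calling Reload every time a zone changes keeps exactly one image alive per slot, which is the behaviour the original code was missing.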
  13. Out of curiosity, and going purely from the screenshots, try removing the specular lighting component and see if that (temporarily) fixes it, i.e. gl_FragColor = texel * (diffuseLight + gl_LightSource[0].ambient); If that works, then the white shine is just your specular lighting, which from a directional light can be a bit unwieldy. Essentially, as soon as you look towards the light, almost every piece of geometry with normals facing the right direction (in this case facing upwards) will receive full specular. Try increasing the value of "shine" and decreasing the value of "gl_LightSource[0].specular". This should make your highlights smaller and sharper, although that might not be very visible in your particular scene on large flat objects like the ground. By decreasing the specular strength/colour, you'll also make the remaining specular highlight less visible, which should avoid the problem you have at the moment. From your screenshots, I suspect that you're already getting a large value close to white for your "diffuse + texture + ambient", and then another large value for your specular, which of course will saturate everything to white in the final image. Of course, if removing the specular lighting didn't change anything, then just ignore my post! =) [Edited by - CukyDoh on April 18, 2009 6:31:32 AM]
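    To see why increasing "shine" tightens the highlight, here is the standard Phong-style specular term in isolation, written as plain C++ rather than GLSL purely for illustration:

```cpp
#include <algorithm>
#include <cmath>

// Phong-style specular term: cosAngle is the dot product between the
// reflection and view vectors (clamped to zero). A larger "shine"
// exponent makes the falloff steeper away from the perfect-reflection
// direction, so the highlight gets smaller and sharper.
float SpecularTerm(float cosAngle, float shine)
{
    return std::pow(std::max(cosAngle, 0.0f), shine);
}
```

    At cosAngle slightly below 1.0, raising the exponent drops the term quickly toward zero, which is exactly the "smaller, sharper highlight" effect described above.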
  14. I'm just wondering if you've forgotten to transform your rays into the local mesh space, or does your SetTransform(D3DTS_WORLD, GetWM()) line take care of that already? Or have you already done something like that elsewhere? Transforming the origin point and ray vector by the inverse world matrix for each object should do the trick if not. As far as I understand the function, the mesh you test against will be in its default local space, defined directly in its vertex buffers etc., and will be tested directly against whichever ray data you pass in; the function will not take anything else into account. Because of this, you have to transform your ray to be relative to that local mesh space, rather than the global world space I'm assuming your ray is in, or else you won't be testing for a collision at the correct location. I can't imagine you would get any correct results at all if not, but I figured it was worth mentioning anyway, as I looked around and couldn't find anything in D3DXIntersect's documentation about auto-transforming the ray by setting a world matrix, nor could I find other examples using that method. It'd be nice to know if that is the case, though!
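    As a minimal illustration of moving a ray into a mesh's local space: when the object's world transform is only a translation, the transform reduces to subtracting the object's position from the ray origin, with the direction unchanged. A full solution would invert the whole world matrix (e.g. D3DXMatrixInverse) and transform the origin and direction through it. The types and names below are my own, not D3DX's:

```cpp
// Minimal vector type for the sketch.
struct Vec3 { float x, y, z; };

// Translation-only case: bring the ray origin into the object's local
// space by subtracting the object's world position. The ray direction
// needs no change when there is no rotation or scale.
Vec3 RayOriginToLocal(const Vec3& rayOrigin, const Vec3& objectPos)
{
    return { rayOrigin.x - objectPos.x,
             rayOrigin.y - objectPos.y,
             rayOrigin.z - objectPos.z };
}
```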
  15. [FMod C++] Error (23) File not found...

    Have you tried createSound() instead of createStream(), by the way? Other than that one function, I use almost identical code to play sounds just fine with FMod.