deftware

Members
  • Content count: 529
  • Joined

  • Last visited

  1. @7thbeat @cartoonnetwork @kylelabriola @bikimbap Holy crap, what asshats.

  2. Have you tried outputting csDir as the fragment color to see if it is in fact correct? I just managed to get SSR running in my game project a few weeks ago after some trial and error.
  3. Mind blow Ring Puzzle Toy

    These were all over the place when I was a kid, before the internet made everyone forget about them.
  4. How to start Working towards making games.

    I started out modding existing games back in the 90s, which gave me a pretty complete idea of what a game is composed of.
  5. I'm trying to slap screen-space reflections into my engine, and in theory it should work fine, but I'm having trouble figuring out how to properly generate the ray vectors. I have the fragment surface normals in world space, and I also have the vectors from the camera to each fragment in world space. I can therefore generate the reflection vector itself using:

        reflect(camtofrag, fragnormal)

    However, I am trying to transform that into screen space properly for output to a framebuffer texture, which is then fed into a postfx shader to perform the actual ray tracing against it. My inclination was to do this:

        output.xyz = inverse(transpose(modelview)) * reflect(camtofrag, fragnormal);

    But the problem is that the reflection vectors shift around a lot when the camera rotates. Surfaces only appear to reflect properly if the camera is at a 45 degree angle to them. A shallower angle results in squished reflections at the edge of the surface where the reflected geometry is connected. Conversely, looking straight at the surface (allowing the shader to reflect whatever is on the outside edges, like a mirror), the reflection stretches deep 'into' the reflecting surface.

    Here's a YouTube video of what I've described (it may still be uploading at the moment): https://www.youtube.com/watch?v=G2w169gPro4

    Here's a set of images showing the reflection vector buffer. It's clear that there's just too much gradation across surfaces, and it moves with the camera's rotation. It looks like there needs to be some kind of inverse projection applied so that it's "flatter" and not producing a fisheye sort of reflection: http://imgur.com/gallery/h9w3X

    I have a linearized depth buffer, so I figured I could calculate the direction of the screen-space reflection ray while rasterizing the geometry that will be reflective in the postprocess render, and then in the postprocess do the screen-space reflection on top of everything using the reflection normals and linear depth buffer, without having to do any more matrix transforms - just trace lines in XYZ, check against the depth buffer, and behave accordingly. I'm using the UV coordinate of the fullscreen quad fragment to sample the reflection vector texture, whose alpha channel contains the linearized depths, and then tracing a line along the reflection vector, checking the depths in the alpha channel along the way. As far as I can tell it would work just fine if my reflection vectors were correct - which they clearly aren't, judging by the three screenshots that show how drastically the normals change just by rotating the camera.

    Any ideas off the top of your heads? My goal is to keep this ultra simple and minimal (of course with all the edge fading and artifact mitigation stuff) without storing a bunch of textures to do it. It seems simple enough to just store the reflection vector as generated by the fragment shader of the geometry surfaces themselves - if I could just transform it properly. As I said, I have the exact right reflection vectors in world space, but I'm just having trouble transforming them into screen space (see the sketch after this list).

    Thanks!
  6. Dual contouring implementation on GPU

    They store the pointers in textures, as indices to nodes (see the sketch after this list for the general idea). Check out how they store octrees in here: http://on-demand.gputechconf.com/gtc/2012/presentations/SB134-Voxel-Cone-Tracing-Octree-Real-Time-Illumination.pdf
  7. Use one texture with the R and G channels as your XY velocity, and the B channel as the density?

    As for writing to a specific place on the texture, you're going to want to render-to-texture, and in the pixel shader generate what you want for each pixel of that new texture. In this case I would double-buffer: one texture is read from in the shader to generate the output, and that output is written to a second texture of the same size. At the end of the frame, I just swap their roles, so that the texture I was just writing (drawing) to is now the input texture and the texture I was reading from is now the destination (see the ping-pong sketch after this list).
  8. Obtaining low ping

    It took me a little while to figure this one out as well. What's happening is that the other games are showing the actual measured network RTT, which is a separate thing from the game network update RTT.

    The update RTT is naturally going to be as large as 1000 ms divided by the update rate in Hz, and with different update rates on each end it will keep cycling in a sawtooth fashion as the update rates on both sides line up and fall out of alignment, along with network latency on top of that. At a 20 Hz update rate, for example, each end can add up to 50 ms of waiting on its own (see the sketch after this list).

    The network RTT is just how long it takes for a packet to travel to and from the other side, outside of the game update packets being sent.
  9. It seems like you're going down the rabbit hole of over-engineering, instead of taking a step back and re-evaluating your entire approach. To my mind, the trick is doing the outer, lower-res stuff first and then progressively refining closer to the camera. If you're handling individual 'boxes', then something is wrong.
  10.   this...   Make sure you validate your function pointers!
  11. You can convert a quaternion to axis-angle representation by doing something like this:

        axis.angle = 2 * acos(quat.w) * (180 / 3.14159); // rotation angle, converted from radians to degrees
        axis.x = quat.x;
        axis.y = quat.y;
        axis.z = quat.z;
        normalize(axis); // the quaternion's xyz is the axis scaled by sin(angle/2), so normalizing recovers the unit axis

    Maybe that will help.
  12. The CPU is a logic controller that reads assembly instructions from a loaded program in RAM, which dictate how program memory in RAM is manipulated, and also how input signals are interpreted and output signals are generated. In the case of modern OSes, the output signals are typically generated over a USB connection, which requires that the program explain to the CPU how to interact with and accept the USB device, along with device drivers that handle the actual interpretation of hardware signals over USB and generalize them (and responses to them) via an abstraction called an API, or 'Application Programming Interface', which is just a collection of functions that generalize everything that could happen with whatever hardware is communicating via USB.

    However, it's been the same since the original serial and parallel ports that preceded the 'universal serial bus' of modern times.

    For graphics and audio, you *could* control such things over USB, serial, or parallel, but they have become their own core parts of the computer because they've been essential since the dawn of the personal computer, so they have their own means of communication. But you could just as well develop your own protocol for communicating with a robotic body via audio or video output: it's all about the output signals generated and what they are intended to do on the recipient hardware.

    It's actually not that complicated once you understand how a CPU itself works.
  13. 3D vector rotations around an axis.

    I'm going to take a stab in the dark and guess that the situation is that you have an arbitrary vector in space and you want to derive two perpendicular vectors from it?

    In that case, you're going to have to choose an axis that is 'most expendable', in that you're either going to completely avoid and ignore cases where your arbitrary normal is facing along that axis, or you're going to write special-case code to handle anything that's within a threshold (aka 'epsilon') of that axis.

    It looks like you're already doing this, in a roundabout way. The vector that you have, which is the 'arbitrary normal', will first be used to perform a cross product with a cardinal vector (like '1 0 0', '0 1 0', or '0 0 1', depending on which vector you want to 'avoid the most'). In the case of '0 1 0', which it appears your code is already dealing with, and which happens to be the 'vertical' vector in most gfx cases, the cross product of the vector and '0 1 0' will give you a vector completely perpendicular to whatever plane is formed by the vector and '0 1 0'.

    If you can imagine a vector pointing in any random direction (edit: the 'arbitrary normal vector' you already have), and a vector pointing straight up, this forms two sides of a triangle if they are emanating from one point. The direction that triangle is facing is the result of their cross product. From here you would generate the third vector by simply getting the cross product of your original arbitrary vector and the resulting vector of the first cross product. The 'cardinal' vector that you use in the first cross product is just a placeholder, and only has a bearing on your final outcome if your math precision is crappy and the arbitrary vector is very close to the cardinal vector itself (edit: because it would form a very skinny triangle whose normal vector is hard to calculate precisely).

    This means that, in the case of using '0 1 0' as your cardinal vector, any arbitrary vector that is very close to it, or its inverse ('0 -1 0'), will start to cause precision issues, but it would otherwise be fine for all other vectors where the X and Z values are dominant in the starting vector.

    This simple two-step algorithm will yield two vectors that are the 'horizontal' and 'vertical' vectors oriented with your initial arbitrary vector, depending on what cardinal vector you decide to use ('1 0 0', '0 1 0', or '0 0 1'). Typically, the vertical vector is used, because most applications involve 'things' that are facing in more horizontal directions than vertical. (There's a short sketch of this two-cross-product approach after this list.)

    If I'm way off base and clearly have no idea what you're talking about, please let me know. Otherwise, I hope this helps.

    P.S. Don't forget to normalize the result of each cross product!
  14. Have you taken mass/acceleration into consideration? It sounds like you've completely ignored them and that incorporating them would dampen any 'explosive' issues.
  15. C++ SFML Stuttering

    What's your hardware rendering config like? Do you have v-sync enabled?

    Have you tried profiling your code to find out what exactly is stalling for hundreds of ms at a time? This involves adding timing code to your project where you can instantiate a timer object and then call functions to 'start' and 'stop' timing for that object, much like a stopwatch. EDIT: you would then create these timers for different pieces of your main update loop, so as to differentiate which parts of your code are spending more or less time each frame. At the end of a session you would output the total execution time for all of the timing objects you instantiated. This should help you narrow down what part of your code is stalling. (There's a minimal stopwatch-timer sketch after this list.)

    Are you (assuming C/C++) compiling in debug or release mode? What other software do you have running in the background? Have you restarted your computer recently? (Some people just sleep/wake their computer for months at a time, and it slows things down as memory fragments worse and worse.)

    I'd suggest running your project on another machine besides the one you are having problems on, to see if it's your project specifically or your machine itself. However, I think profiling your code will help you figure it out better than anything else.
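
A rough sketch for the screen-space reflection question in item 5, written with GLM on the CPU side purely for illustration (the same math applies in a fragment shader); viewMatrix, camToFragWorld and normalWorld are made-up names, not anything from the original engine. One common way to set this up is to keep the reflection ray in view space, rotating the world-space direction with just the rotation part of the view matrix, and to apply the projection only when converting each marched sample to screen UVs for the depth test - baking the projection into a stored per-pixel direction doesn't work well, because perspective projection is non-linear.

    // Sketch: rotate a world-space reflection vector into view space (GLM).
    #include <glm/glm.hpp>

    glm::vec3 viewSpaceReflection(const glm::mat4& viewMatrix,      // world -> view
                                  const glm::vec3& camToFragWorld,  // camera to fragment, world space
                                  const glm::vec3& normalWorld)     // surface normal, world space
    {
        glm::vec3 incident  = glm::normalize(camToFragWorld);
        glm::vec3 n         = glm::normalize(normalWorld);
        glm::vec3 reflWorld = glm::reflect(incident, n);

        // Directions carry no translation, so only the rotation part (mat3) of
        // the rigid view matrix is applied; no projection here.
        return glm::mat3(viewMatrix) * reflWorld;
    }

The ray march then happens in view space (start position reconstructed from the depth buffer, direction from above), projecting each sample to texture coordinates for the comparison against the stored depths.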
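
For the 'pointers as indices' idea in item 6, here's a minimal sketch of a flat octree layout where each node stores the array index of its first child and the eight children sit contiguously. This only illustrates the concept; the layout in the linked GTC presentation differs in its details.

    // Sketch: an octree whose "pointers" are indices into a flat array,
    // which can then be uploaded as the texels of an integer texture.
    #include <cstdint>
    #include <vector>

    struct OctreeNode {
        uint32_t firstChild; // index of child 0; 0 means "leaf", since node 0 is the root and never a child
        uint32_t dataIndex;  // index into a separate attribute/brick texture
    };

    // Allocate eight contiguous children and return the index of the first one,
    // so a shader can reach child i as nodes[node.firstChild + i].
    uint32_t allocateChildren(std::vector<OctreeNode>& nodes)
    {
        uint32_t first = static_cast<uint32_t>(nodes.size());
        nodes.resize(nodes.size() + 8, OctreeNode{0, 0});
        return first;
    }

The node array is what gets copied into a texture (or buffer texture), and traversal on the GPU becomes repeated fetches of firstChild plus a child offset.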
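
The double-buffering described in item 7, sketched here assuming OpenGL (a loader header such as glad is assumed, and drawFullscreenQuad is a hypothetical helper that draws a viewport-covering quad so the fragment shader touches every texel):

    #include <utility>

    GLuint tex[2], fbo[2];         // tex[i]: RG = XY velocity, B = density
    int readIdx = 0, writeIdx = 1;

    void drawFullscreenQuad();     // hypothetical helper, defined elsewhere

    void createSimTargets(int width, int height)
    {
        glGenTextures(2, tex);
        glGenFramebuffers(2, fbo);
        for (int i = 0; i < 2; ++i) {
            glBindTexture(GL_TEXTURE_2D, tex[i]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex[i], 0);
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

    void simulateFrame(GLuint simShader)
    {
        glUseProgram(simShader);

        // Read last frame's state...
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, tex[readIdx]);

        // ...and write this frame's state into the other texture.
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[writeIdx]);
        drawFullscreenQuad();
        glBindFramebuffer(GL_FRAMEBUFFER, 0);

        // Swap roles so the next frame reads what was just written.
        std::swap(readIdx, writeIdx);
    }

Reading a texture while it is also the current render target is undefined in OpenGL, which is why the two textures trade places every frame rather than one texture being updated in place.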
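
A small sketch of the arithmetic behind the update-RTT ceiling described in item 8; the rates and RTT figure in the example call are made up for illustration.

    // Sketch: rough upper bound on the displayed "update RTT" when each side
    // only sends game-state packets at a fixed rate.
    double worstCaseUpdateRttMs(double networkRttMs, double mySendRateHz, double theirSendRateHz)
    {
        // Each end can sit on a reply for up to one full update interval.
        return networkRttMs + 1000.0 / mySendRateHz + 1000.0 / theirSendRateHz;
    }

    // e.g. worstCaseUpdateRttMs(30.0, 20.0, 20.0) == 130.0
    //      (30 ms on the wire plus up to 50 ms of waiting at each end)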
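
The two-cross-product construction from item 13 as a compact sketch, again using GLM; the 0.99 threshold and the fallback cardinal axis are arbitrary illustrative choices.

    // Sketch: build two perpendicular vectors from an arbitrary unit normal
    // via two cross products.
    #include <glm/glm.hpp>
    #include <cmath>

    void buildBasis(const glm::vec3& n, glm::vec3& tangent, glm::vec3& bitangent)
    {
        // Pick the "expendable" cardinal axis; fall back if n is nearly parallel to it.
        glm::vec3 up(0.0f, 1.0f, 0.0f);
        if (std::fabs(glm::dot(n, up)) > 0.99f) // within an epsilon of straight up/down
            up = glm::vec3(1.0f, 0.0f, 0.0f);

        // First cross product: perpendicular to the plane formed by n and up.
        tangent = glm::normalize(glm::cross(up, n));

        // Second cross product: perpendicular to both n and the first result.
        bitangent = glm::normalize(glm::cross(n, tangent));
    }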
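
A minimal version of the stopwatch-style timer suggested in item 15, using std::chrono; the struct and its usage are illustrative, not from any particular library.

    // Sketch: accumulate how much time a section of the update loop takes.
    #include <chrono>
    #include <cstdio>
    #include <string>

    struct ProfileTimer {
        std::string name;
        std::chrono::steady_clock::time_point startTime;
        double totalMs = 0.0;

        void start() { startTime = std::chrono::steady_clock::now(); }

        void stop()
        {
            auto elapsed = std::chrono::steady_clock::now() - startTime;
            totalMs += std::chrono::duration<double, std::milli>(elapsed).count();
        }

        void report() const { std::printf("%s: %.2f ms total\n", name.c_str(), totalMs); }
    };

    // Usage: one timer per chunk of the main loop, report() at shutdown.
    //   ProfileTimer physicsTimer{"physics"};
    //   physicsTimer.start(); updatePhysics(); physicsTimer.stop();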