

Solid_Spy

Member Since 26 Nov 2012
Offline Last Active Jul 26 2015 10:00 PM

Topics I've Started

How to create an MSAA shader? (For deferred rendering)

09 May 2015 - 12:36 AM

I'm sure a lot of people have been asking how to create a pleasing anti-aliasing effect for deferred rendering without falling back on FXAA.

 

A lot of people unfortunately don't seem to like FXAA, and I hear there are alternatives.

 

One alternative I've heard is that you can use MSAA with deferred rendering, after the geometry has already been drawn. But how the hell is that supposed to work?

 

For one thing, I figured MSAA requires the triangle edges in order to calculate its coverage samples. If all I'm working with is a texture, how am I supposed to get the triangle edges? Am I supposed to re-calculate them somehow?

 

I also hear that screen-space MSAA can avoid blurring textures, unlike FXAA. But I can't find any information on how to implement an MSAA shader anywhere.

 

Can anyone point me to a tutorial or something? Or if there isn't one, can anyone help me figure out how to do this? If I can't get MSAA to work, I might as well fall back on FXAA like a lot of people have, but I want to see if I can do this first.
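
From the little I've pieced together, it sounds like the G-buffer itself has to be multisampled, and the lighting pass then reads individual samples with texelFetch. Is it something like this? (Just a sketch of my understanding; names are made up and I haven't tested it.)

GLuint gBufferFBO, gNormalTex;
glGenFramebuffers(1, &gBufferFBO);
glBindFramebuffer(GL_FRAMEBUFFER, gBufferFBO);

glGenTextures(1, &gNormalTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, gNormalTex);
// 4 samples per pixel, fixed sample locations (OpenGL 3.2+)
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA16F,
                        screenWidth, screenHeight, GL_TRUE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D_MULTISAMPLE, gNormalTex, 0);

// The lighting shader would then read per-sample instead of per-pixel:
// uniform sampler2DMS gNormal;
// vec4 n = texelFetch(gNormal, ivec2(gl_FragCoord.xy), sampleIndex);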


Efficient way to check which lights belong to which meshes?

23 March 2015 - 09:53 AM

Yo, this is Solid. I'm in a bit of a bind here.

 

I was wondering: what method do people generally use to determine which dynamic lights can affect which objects?

 

For instance, I have a lighting shader that supports only up to 8 lights. I have over 20 dynamic lights in my scene and over a hundred objects, each with only around 6-7 lights within range. I want the objects to know which lights affect them, and the lights to know which objects they will use to generate shadow maps.

 

How do I go about determining which lights belong to which objects, in an efficient way?

 

I already wrote a collision test that loops through every light and then through every mesh, checking for overlap with sphere-vs-sphere tests, and it works just fine, except that it is rather slow: around 3-4 ms on my laptop and 0.8-1.0 ms on my desktop. I want to speed it up some more. I also use frustum culling to cull lights that are out of view.
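
For reference, the per-pair test itself is trivial; it's the lights-times-meshes pairing that costs. Each test looks roughly like this (a sketch with made-up types, using GLM):

#include <glm/glm.hpp>

struct BoundingSphere { glm::vec3 center; float radius; };

bool SpheresOverlap(const BoundingSphere& a, const BoundingSphere& b)
{
    // Compare squared distance to the squared sum of radii to avoid the sqrt.
    glm::vec3 d = a.center - b.center;
    float r = a.radius + b.radius;
    return glm::dot(d, d) <= r * r;
}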

 

I came up with the idea of using an octree: separate all the static objects on the first frame, and then each frame add and remove the dynamic objects/lights. Each node has an std::vector storing objects as well as their rendering/lighting/physics components, if any.

 

Then I am planning to do a sweep and prune on the x and z axes for every separate node in the octree, and store the light-vs-object collision pairs in an std::vector.

 

And last but not least, I can iterate through every collision pair, push each colliding mesh into an std::vector inside the lighting component it overlaps (for shadow-map rendering), and add data from the lighting component to the mesh component so it knows which lights it is affected by.
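
To be concrete, the per-node sweep and prune I have in mind would look something like this on a single axis (just a sketch; names are made up):

#include <algorithm>
#include <utility>
#include <vector>

struct Entry { float minX, maxX; int index; bool isLight; };

void SweepAndPruneX(std::vector<Entry>& entries,
                    std::vector<std::pair<int, int>>& pairs)
{
    // Sort interval starts along x.
    std::sort(entries.begin(), entries.end(),
              [](const Entry& a, const Entry& b) { return a.minX < b.minX; });

    for (std::size_t i = 0; i < entries.size(); ++i)
    {
        for (std::size_t j = i + 1; j < entries.size(); ++j)
        {
            if (entries[j].minX > entries[i].maxX)
                break; // Sorted by minX, so no later entry can overlap i either.
            if (entries[i].isLight != entries[j].isLight)
                pairs.emplace_back(entries[i].index, entries[j].index);
        }
    }
}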

 

Although this may sound like a good idea, I fear that it might actually be slower. I mean, I am using a lot of std::vectors, and creating them every frame in the octree (for dynamic objects), and constantly filling/flushing vectors in the lighting components.

 

I was wondering if there is a more efficient method, or is this as good as it can get? What other methods are other people using for this sort of thing? I'm guessing I may have to just tone down the number of dynamic lights ultimately, since they are very expensive in general. I can do the collision test only once for static lights.

 

What are your recommendations?


How to limit FPS (without vsync)?

15 March 2015 - 10:34 AM

Yo, I'm having a bit of trouble trying to figure out how to limit the fps in my game engine. The engine runs at about 1000 fps, and I can already hear my graphics card squealing. This obviously isn't good, and won't be good for anyone else playing a game made with my engine.

 

I want to know how to cap the framerate of the rendering part of the game engine, so that the game renders far less often.

 

I am using a variable time step like so:

void GameSystem::GameLoop()
{
	previousTime = glfwGetTime();

	while(!glfwWindowShouldClose(glfwWindow))
	{
		// Variable time step: measure how long the last frame took.
		currentTime = glfwGetTime();
		deltaTime = (currentTime - previousTime);
		previousTime = currentTime;

		profiler->TimeStampStartAccumulate(PROFILER_TIMESTAMP_FPS_UNLIMITED);

		// Update and render every iteration, as fast as possible.
		InputMouseMovementProcessing();
		UpdateGameEntities();

		renderingEngine->StartRender();

		glfwPollEvents();

		profiler->TimeStampEndAccumulate(PROFILER_TIMESTAMP_FPS_UNLIMITED);
		profiler->EndOfFrame();
	}
	delete profiler;
	//CleanUpGameSystem();

	glfwTerminate();
}

So it's a pretty basic gameloop.

 

Things I've tried that never worked:

 

- Sleep(1). Unfortunately, this always sleeps for anywhere from around 1 ms to more than 16.66666 ms (a whole 60 fps frame). Definitely NOT acceptable; sleeping alone isn't an option.

 

- A spin loop. Looping in a while loop, accumulating elapsed time until 16.66666 ms is reached, then rendering. This sucks because it isn't even accurate: the operations needed to accumulate the elapsed time pollute the timing itself.

 

- Fixing a timestep for rendering:

while(deltaTimeAccumulated > 16.66666)
{
    if(deltaTimeAccumulated > 200.0)
    {
        //Break if entering spiral of death
        deltaTimeAccumulated = 0;
        break;
    }
    renderingEngine->StartRender();
    deltaTimeAccumulated -= 16.66666;
}

This does work to some extent, except that I encounter stuttering every half second or so. It is caused by frame skipping whenever the number of frames per render deviates from the average (imagine rendering 5 times per loop for 300 loops in a row, then only 4 times for a single loop: you will notice a stutter). I cannot find any way to fix this, because there is no guarantee that the count stays the same, since the elapsed time always varies :/.

 

I am using glfwGetTime(), which returns a high-precision double, so timer resolution isn't the problem.

 

~~~

 

The only solution I could come up with (a rather shoddy one) is to cheat: check whether the framerate is dangerously high (500 fps or so), and run some expensive function just to shift more time to the CPU instead of the GPU.
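
One more idea I haven't tried yet: combine the first two attempts by sleeping away most of the wait in 1 ms chunks and spinning only the last couple of milliseconds for accuracy. A sketch (assuming glfwGetTime() from GLFW and C++11; untested):

#include <chrono>
#include <thread>

void WaitForNextFrame(double frameStartSeconds, double targetFrameSeconds)
{
    const double spinThreshold = 0.002; // busy-wait only the final ~2 ms

    for (;;)
    {
        double remaining = targetFrameSeconds - (glfwGetTime() - frameStartSeconds);
        if (remaining <= 0.0)
            break;
        if (remaining > spinThreshold)
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        // else: spin until the target time is reached
    }
}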

 

I'm sure I'm not the only one who has encountered this issue. Does anyone have any ideas?


How to store data for reflection in preprocessor stage

23 February 2015 - 07:00 PM

Hey, I have just been studying how to implement reflection in the simplest and least expensive way possible in C++.

 

I've been studying up on macros, since those are all pre-processor, but I can't seem to find a way to accomplish what I am trying to do.

 

Sorry if this sounds kinda stupid; I am just sort of learning this stuff. I want to write a macro that takes a string literal (a variable or function name) and puts it in a vector. The only problem is, I don't know how to create a vector in the preprocessing stage.

 

I was wondering if it is possible to create some sort of 'macro class' or struct to store this data, so that I can use it later.

 

For example, I want to do something like this:

Pseudo-code:
#include <iostream>
#include <string>
#include <vector>

#define struct ReflectionData \
{ \
    std::vector<std::string> names; \
}

#define ADD_REFLECTION_DATA(data) (ReflectionData.names.push_back(data))

ADD_REFLECTION_DATA("mario")
ADD_REFLECTION_DATA("luigi")

int main()
{
    std::cout << ReflectionData.names[0];
    std::cout << ReflectionData.names[1];
    std::cout << ReflectionData.names[2];

    return 0;
}

ADD_REFLECTION_DATA("Bowser")
Output:
mario
luigi
Bowser

Any idea how to accomplish something like this?
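
The closest I can get on my own is registration at static-initialization time rather than in the preprocessor: the macro drops a global registrar object whose constructor runs before main. A sketch (names made up):

#include <iostream>
#include <string>
#include <vector>

// Meyers singleton avoids the static initialization order problem.
std::vector<std::string>& ReflectionNames()
{
    static std::vector<std::string> names;
    return names;
}

struct ReflectionRegistrar
{
    explicit ReflectionRegistrar(const char* name)
    {
        ReflectionNames().push_back(name);
    }
};

// #name stringizes the identifier; ## builds a unique variable name.
#define ADD_REFLECTION_DATA(name) \
    static ReflectionRegistrar reflectionRegistrar_##name(#name);

ADD_REFLECTION_DATA(mario)
ADD_REFLECTION_DATA(luigi)

int main()
{
    for (const std::string& n : ReflectionNames())
        std::cout << n << "\n";
    return 0;
}

But is there a way to do this purely in the preprocessor?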


How to get Boost::asio working with MinGW

22 February 2015 - 11:47 PM

I seem to be having trouble getting Boost::asio to work with MinGW with Eclipse.

 

I am using Boost version 1.50.

 

I have read around and am surprised that very few people seem to have had this problem. I get this error when trying to use the Boost::asio library:

"swprintf was not declared in this scope"

 

I got this error when I tried to use:

#define _WIN32_WINNT 0x0501
#define WINVER 0x0501

 

in order to get around the "'UnregisterWaitEx' has not been declared" error. Which seems unfortunate, because it pins the Windows headers to the Windows XP API level. What if I want to use Windows 8 features?

 

Isn't there another way? I can get other Boost libraries to work, including Thread and XML.

 

I hear that oftentimes these problems are due to include ordering, but I've been switching things around and I still can't get rid of the error.

 

This is the order I am including:

#include <boost/bind.hpp>
#include <boost/asio.hpp>
#include <boost/thread/thread.hpp>

#define WIN32_LEAN_AND_MEAN

#include <windows.h>
#include <winsock2.h>
#include <glew.h>
#include <glfw3.h>

 

The libraries are in the same order.
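
If raising the target Windows version is the way out, I guess the defines would look like this, placed before any includes (0x0601 = Windows 7, 0x0602 = Windows 8; just what I would try, not verified against Boost 1.50):

#define _WIN32_WINNT 0x0602
#define WINVER 0x0602
#define WIN32_LEAN_AND_MEAN

#include <boost/bind.hpp>
#include <boost/asio.hpp>
#include <boost/thread/thread.hpp>

#include <winsock2.h> // winsock2.h before windows.h, to be safe
#include <windows.h>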

 

How do you get Boost::asio to work, preferably without having to downgrade to the Windows XP windows.h?

