

madRenEGadE

Member Since 01 Feb 2006
Offline Last Active May 14 2014 01:41 PM

Topics I've Started

TBB and some custom threads

13 May 2014 - 10:41 AM

Hello all, it has been a long time since I was last active here, but now I want to do some engine programming again.

My engine is to be based on jobs which are executed by Intel's TBB. My design is similar to the one Intel uses in its Smoke demo.

The job system itself is working, but there are two things I want to do in two separate threads. The first is creating textures etc. (with a shared OpenGL context) and the second is rendering.

Now to my questions:
1. Are such job systems still considered state of the art? I ask because I have been out of engine programming for some time.

2. Is it a good idea to create the two extra threads? My reasoning is that dedicated threads for those two tasks avoid permanent calls to MakeCurrent, because in the "normal" job system I would not know which thread a task will run on the next time.

3. Is the performance of TBB's task scheduling affected by my custom threads? The number of threads TBB creates is based on the number of CPU cores, so my two threads "steal" some processing power. But I think TBB's work stealing will handle this, am I right?

Is this a valid usage for unions?

09 May 2012 - 12:25 PM

Hello all,

recently I decided to discard my hand-written math library code and use the GLM library.

Because I want it to "feel" like my own code, I need some thin wrapper around it. Since performance matters, I don't want to
write the wrapper using inheritance like this:

class Vector3 : public glm::dvec3
{
};

Now I have the idea to use a union and write code like this:

union Vector3
{
public:
    Vector3(double x, double y, double z)
        : v(x, y, z)
    {
    }

    double x, y, z;

private:
    glm::dvec3 v;
};

Is this valid code (using C++11)?
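For context on the question: in C++ (including C++11), reading a union member other than the one last written is type punning and undefined behaviour, and as written `x`, `y` and `z` are three separate union members that all share the same storage rather than overlaying v's three components. A punning-free alternative is a thin wrapper by composition; this is a sketch, not the only answer, and it uses a minimal stand-in for glm::dvec3 so it compiles standalone:

```cpp
// Minimal stand-in for glm::dvec3 so this sketch compiles on its own;
// the real type comes from <glm/glm.hpp>.
struct dvec3 {
    double x, y, z;
    dvec3(double px, double py, double pz) : x(px), y(py), z(pz) {}
};

// Thin wrapper by composition: no inheritance and no union type punning.
// The accessors are trivial and inline, so an optimizer typically
// removes them entirely.
class Vector3 {
public:
    Vector3(double x, double y, double z) : v(x, y, z) {}

    double  x() const { return v.x; }
    double  y() const { return v.y; }
    double  z() const { return v.z; }
    double& x()       { return v.x; }
    double& y()       { return v.y; }
    double& z()       { return v.z; }

    // Hand the wrapped value to code that expects the library type.
    dvec3&       raw()       { return v; }
    const dvec3& raw() const { return v; }

private:
    dvec3 v;
};
```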

Problem with deferred rendering

03 May 2012 - 01:59 PM

Hello all,

I am currently trying to implement a deferred renderer.
I am mostly following this tutorial: http://bat710.univ-lyon1.fr/~jciehl/Public/educ/GAMA/2007/Deferred_Shading_Tutorial_SBGAMES2005.pdf

Here are some screenshots of my current state. The test scene consists of two cubes and a point light.

1. This first screenshot looks correct

2. The second screenshot is taken from the opposite side. Here you can see that the cube which is further away overlaps the near cube:

3. Because I think this is a problem related to the depth buffer, I rendered the depth buffer to the screen and everything was white. Then I manually created a texture and filled it with the z-values. The next two screenshots show this manually created depth texture:



Does anyone have an idea what the problem could be? If you need source code of specific parts just let me know.
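For reference, the symptom (far geometry drawn over near geometry) is commonly caused by the G-buffer FBO having no depth attachment, or by GL_DEPTH_TEST being disabled during the geometry pass; an all-white depth visualization is also expected regardless, since depth values are stored non-linearly and cluster near 1.0 for most of a scene. A sketch of the minimum setup, assuming a G-buffer FBO handle named gBufferFbo and variables width/height (the names are illustrative):

```cpp
// Give the G-buffer FBO a depth attachment so the geometry pass can
// actually depth-test.
GLuint depthRb;
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

glBindFramebuffer(GL_FRAMEBUFFER, gBufferFbo);  // your G-buffer FBO
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

// During the geometry pass:
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
```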

Thanks in advance.

Multithreaded OpenGL App

07 September 2010 - 10:26 PM

Hello, I am trying to implement a multithreaded game engine.

Multithreading is done using Intel's TBB, so the rendering is done in a task.
In this task, MakeCurrent is called first, and at the end the context is released again.

Because you cannot control which thread a task is executed in, this procedure is necessary, isn't it?

The problem is that the screen flickers. This only occurs when I tell TBB to use more than one thread.

Only one render task is executed at a time, so normally there should be no problem, right?

Did anybody experience a similar problem?
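To frame the pattern being described: since only one thread may have a context current at a time, a task running on an arbitrary worker thread needs both the make/release-current pair and mutual exclusion around it. A minimal RAII sketch, where makeCurrent/doneCurrent are hypothetical stand-ins for the platform's wglMakeCurrent / glXMakeCurrent calls:

```cpp
#include <functional>
#include <mutex>

// RAII guard serializing access to one shared context: the constructor
// takes the lock and makes the context current; the destructor releases
// the context and then the lock, even if the task throws.
class CurrentContext {
public:
    CurrentContext(std::mutex& m,
                   std::function<void()> makeCurrent,
                   std::function<void()> doneCurrent)
        : lock_(m), done_(std::move(doneCurrent)) {
        makeCurrent();  // e.g. wglMakeCurrent(dc, ctx)
    }

    ~CurrentContext() { done_(); }  // e.g. wglMakeCurrent(nullptr, nullptr)

private:
    std::lock_guard<std::mutex> lock_;
    std::function<void()> done_;
};
```

Inside the render task the body would then be wrapped in a `CurrentContext` scope, guaranteeing the release call runs on every exit path.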

Thanks in advance.

Using speech to control a game

29 May 2010 - 12:20 AM

Hello, I want to implement a system that lets the player control a game via speech, like Tom Clancy's EndWar does. Does anyone know a good starting point for this? I thought of using neural networks, but I am not sure they work well in this case. Some things I have considered so far:

- Neural networks are good at pattern matching, so from this point of view they could work.
- Once they are trained they are quite fast, but are they fast enough to do pattern matching on audio that is a second long?
- All input patterns need to be of the same size, which means the commands you want to support would have to be the same length.

Maybe someone has done something similar, uses another technique, or has found out that neural networks really don't work...
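To make the fixed-input-size constraint concrete: one common workaround is to slice the incoming audio into overlapping fixed-length frames and classify per frame, rather than requiring every command to have the same duration. A sketch in plain C++, with no particular audio library assumed:

```cpp
#include <cstddef>
#include <vector>

// Slice a variable-length sample stream into overlapping fixed-length
// frames. Each frame is then a constant-size input suitable for a
// classifier, regardless of how long the spoken command was.
std::vector<std::vector<float>> frameSamples(const std::vector<float>& samples,
                                             std::size_t frameLen,
                                             std::size_t hop) {
    std::vector<std::vector<float>> frames;
    if (samples.size() < frameLen)
        return frames;  // not enough audio for even one frame
    for (std::size_t start = 0; start + frameLen <= samples.size();
         start += hop) {
        frames.emplace_back(samples.begin() + start,
                            samples.begin() + start + frameLen);
    }
    return frames;
}
```

For example, at 16 kHz a 400-sample frame with a 160-sample hop gives the classic 25 ms windows with a 10 ms step.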
