

noodleBowl

Member Since 08 Sep 2013

#5218979 OpenGL Method Variable Type Explanation

Posted by noodleBowl on 24 March 2015 - 11:19 PM

I was wondering if someone could explain this variable type to me.

 

I am trying to use an OpenGL function called glTransformFeedbackVaryings(), whose third parameter, according to my compiler, is:

const GLchar *const *varyings

 

This may sound stupid, but I kind of don't know what it's expecting. When I try to make a variable of this type I run into issues like "requires initializer" when I declare GLchar *const myVaryings[1], or "GLchar const ** is incompatible with type GLchar *const *" when I declare a variable like GLchar *const *myVaryings[1];
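For reference, the pattern this parameter type usually accepts is an array of const GLchar* string literals, which decays to a pointer the call can take. A minimal sketch (the varying names, their count, and program are placeholders, not taken from any particular shader):

const GLchar* myVaryings[] = { "positionOUT", "velocityOUT" };   // placeholder varying names
// An array of const GLchar* decays to const GLchar**, which converts implicitly
// to const GLchar *const *, so this call compiles:
glTransformFeedbackVaryings(program, 2, myVaryings, GL_INTERLEAVED_ATTRIBS);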




#5218168 Nothing visible when doing indexed drawing

Posted by noodleBowl on 21 March 2015 - 05:53 PM

I'm curious, how do you even get anything to show without binding a buffer of some kind for your vertex data?


#5215346 Map Buffer Range Super Slow?

Posted by noodleBowl on 08 March 2015 - 09:19 PM

I found the problem!
 
The problem lies here:

for (int i = 0; i < 12 * MAX_SPRITE_BATCH_SIZE; i += 12)
{
    pointer[i] = pos[i];
    pointer[i + 1] = pos[i + 1];
    pointer[i + 2] = 0.0f;

    pointer[i + 3] = pointer[i];
    pointer[i + 4] = pointer[i + 1] + 32.0f;
    pointer[i + 5] = 0.0f;

    pointer[i + 6] = pointer[i] + 32.0f;
    pointer[i + 7] = pointer[i + 1] + 32.0f;
    pointer[i + 8] = 0.0f;

    pointer[i + 9] = pointer[i] + 32.0f;
    pointer[i + 10] = pointer[i + 1];
    pointer[i + 11] = 0.0f;
}

Turns out that lines like pointer[i + 9] = pointer[i] + 32.0f; count as reading the buffer!

 

So after changing all the lines to just use the pos array, like the first two lines do, my FPS shot up to ~1050 FPS!
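A sketch of what that change looks like, assuming pos is laid out the way the original loop indexes it: every read now comes from pos, so the mapped pointer is write-only and is never read back.

for (int i = 0; i < 12 * MAX_SPRITE_BATCH_SIZE; i += 12)
{
    pointer[i]      = pos[i];
    pointer[i + 1]  = pos[i + 1];
    pointer[i + 2]  = 0.0f;

    pointer[i + 3]  = pos[i];
    pointer[i + 4]  = pos[i + 1] + 32.0f;
    pointer[i + 5]  = 0.0f;

    pointer[i + 6]  = pos[i] + 32.0f;
    pointer[i + 7]  = pos[i + 1] + 32.0f;
    pointer[i + 8]  = 0.0f;

    pointer[i + 9]  = pos[i] + 32.0f;
    pointer[i + 10] = pos[i + 1];
    pointer[i + 11] = 0.0f;
}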




#5214952 Intel gives better FPS than NVidia?

Posted by noodleBowl on 06 March 2015 - 08:29 AM

I got to the part where I finally started to rewrite my sprite batcher and I noticed something strange.

My Intel HD 4600 was returning more FPS than the dedicated GTX 850M card. I tried just a plain window test, where the window was 800x600 and I was drawing nothing (the only things happening were the glClear call and the SDL buffer swap).

 

Intel gives me 4.5K FPS on average, whereas my NVIDIA card gives me only 1.8K?

I also tried running a 3DMark test (I think it uses DirectX) and the NVIDIA card significantly outperformed the Intel one, so I'm not sure what's going on.

 

I have tried the newest drivers for the NVIDIA card, and I also tried older NVIDIA drivers; same results.

Has this happened to anyone? Anyone have a solution?

 

This is a laptop with Optimus, just in case that matters.




#5214010 Dynamic Memory and throwing Exceptions

Posted by noodleBowl on 02 March 2015 - 02:00 PM

Don't catch exceptions you can't handle, and don't write (...) blocks


What is a (...) block?
 

Most professional software these days has a top-level crash handler that will catch exceptions and crashes, perform a memory dump, and allow the user to submit a bug report. (You don't need exceptions to do that though, in Windows, a Structured Exception Handler will be enough)


From what I understand, isn't this just like a Structured Exception Handler?
 

#include <exception>

int main()
{
    try
    {
        /* All of my other code */

        TypeObject obj = CreateType(4);

        /* Some other code in my main function */
    }
    catch (std::exception &e)
    {
        // Logging of e.what() to file
    }
}



#5213458 Sprite batching performance

Posted by noodleBowl on 28 February 2015 - 12:16 AM

Ummm... Correct me if I'm wrong, but isn't the point of a sprite batcher to render as many sprites as possible with as few draw calls as possible? Why are you only rendering 800 sprites at a time when you could just render them all at once? The only reason I can think of is memory constraints, but a batch size of 800 is kind of low even for that.


You're right, that is the idea, but I think toto is just doing a benchmark.


On a side note, according to some NVIDIA doc (I don't know where this claim comes from in the official docs), a nice size to make your VBO is 1 MB to 4 MB. Something about the GPU being able to handle the memory better.

So, assuming I can do math, and you have an XYZ position and a UV tex coord for each vertex of a sprite (quad), that is 80 bytes of memory per sprite (using float values for each attribute).

Meaning that, taking the minimum 1 MB recommendation, the VBO used to hold the batch should be enough to handle 12,500 sprites.
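A quick sanity check of that arithmetic (assuming 4 vertices per quad, each with a 3-float position and a 2-float tex coord):

// 4 vertices per sprite (quad), each with an XYZ position + UV tex coord stored as floats
constexpr int bytesPerVertex = (3 + 2) * sizeof(float);   // 20 bytes
constexpr int bytesPerSprite = 4 * bytesPerVertex;        // 80 bytes per quad
constexpr int spritesPerMB   = 1000000 / bytesPerSprite;  // 12500 sprites fit in a 1 MB VBO
static_assert(spritesPerMB == 12500, "matches the estimate above");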


#5212941 Sprite batching performance

Posted by noodleBowl on 25 February 2015 - 12:34 PM



My machine has Integrated Intel® HD Graphics 3000

 

I loaded up my sprite batcher project, fixed up some stuff to get it running (I'm in the middle of a major rewrite / project copy / hardware change), and did a quick test.

 

Results:

~50-51 FPS (delta time ~0.0191-0.020 s)

 

The test:

Window 800x600 plus an attached CMD console (for debugging purposes; no statements printed during the test other than the init ones)

Dynamic run (sprites are pushed to the GPU each frame)

10K 64x64 sprites (no alpha pixels) at random positions (positions are pre-randomized and loaded into an array)

Constant sprite rotation per frame (each sprite rotates 5 degrees clockwise per frame)

 

Hardware:

Intel HD Graphics 4600 [I know it's not exactly the same as yours, but it's still an integrated GPU]

CPU: Intel i7-4710HQ @ 2.5 GHz

 

Now, with the above in mind, I feel like I should say that even with one 64x64 sprite being drawn I still get that same FPS. I'm not too sure what that means (I assume it means VSync is on even though I don't enable it), but maybe it's some bug from the quick porting.

 

Turns out VSync was being forced by the Intel driver. I've updated the results above to reflect that.




#5212783 Sprite batching performance

Posted by noodleBowl on 24 February 2015 - 03:14 PM

Hello sprite batcher friend :)

 

I too have made a dynamic sprite batcher, and I'm always looking to test performance.

 

So when it comes to this, here is what I have been doing:

1. Go off your frame's delta time rather than FPS, because FPS can be inaccurate.

2. Compare your scene to a static scene.

Basically, make a static buffer (GL_STATIC_DRAW) and fill it with the same amount and type of geometry (quads in this case) used by your sprites, then record the average delta time.

Then do the same thing again, but this time use your sprite batcher! All that is left is to compare the delta times. The closer your sprite batcher's numbers are to the static draw, the better your sprite batcher is doing. (A sketch of this static baseline follows the list.)

3. Force your sprite batcher to draw 1x1 pixel sprites. If your delta time goes down significantly (FPS increases significantly), then you know your sprite batcher is GPU limited. If it stays the same, you're CPU limited, or you've even hit your max fill rate.
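A minimal sketch of the static baseline from step 2, assuming a GLEW/GL context is already set up; quadData and quadFloatCount are placeholder names, not from any real project:

GLuint staticVbo = 0;
glGenBuffers(1, &staticVbo);
glBindBuffer(GL_ARRAY_BUFFER, staticVbo);
glBufferData(GL_ARRAY_BUFFER,
             quadFloatCount * sizeof(GLfloat),   // same amount of quad data the batcher would push
             quadData,
             GL_STATIC_DRAW);                    // uploaded once, never touched again
glBindBuffer(GL_ARRAY_BUFFER, 0);

// Draw this buffer every frame and record the average delta time, then do the same
// run through the sprite batcher and compare the two averages.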




#5209144 Best Current Choices for Rapid 2D Game Prototyping?

Posted by noodleBowl on 06 February 2015 - 03:03 PM

To be honest, I think the question you really want to ask yourself is:

 

As a single-person dev, do I want to, and can I afford to, sacrifice my time, energy, resources, etc. doing 3D in the future, and how far away is that future?

 

3D modeling, let alone the animation portion, is a lot of work. Making a game, whether 2D or 3D, is a lot of work in itself, especially when you are doing everything by yourself. If you feel that you are good and ready to do 3D stuff, then go for it, but just know there is going to be work involved.

 

As for the 2D part, if you want to do some quick prototypes or even just make a 2D game, I would say look into Game Maker Studio.

You can turn things out pretty quickly in there, especially when it comes to prototypes.




#5207439 Orthographic projection test - doesn't render

Posted by noodleBowl on 29 January 2015 - 08:54 AM

I want to say your buffer data upload and vertex attribute pointer are at fault. I see you use a function to upload the data, which is fine and looks good, but you are uploading data (triangleArray) that is GLint. Then in your VAO calls you are setting up the vertex attribute pointer with float. There is a mismatch here for sure. I think you'll be all good (at least getting that normalized triangle to show) if you make your triangleArray a GLfloat array, keep the type for the pointer as GL_FLOAT, and change the stride to 2 * sizeof(GLfloat).
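A minimal sketch of what that suggestion looks like; the variable names and sample coordinates here are placeholders, and only the GLfloat type and the 2 * sizeof(GLfloat) stride are the point:

GLfloat triangleArray[] = {
     0.0f,  0.5f,
    -0.5f, -0.5f,
     0.5f, -0.5f
};

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangleArray), triangleArray, GL_STATIC_DRAW);

// Two floats (x, y) per vertex, so the type is GL_FLOAT and the stride is 2 * sizeof(GLfloat)
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), (void*)0);
glEnableVertexAttribArray(0);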


#5207227 What to do for debugging

Posted by noodleBowl on 28 January 2015 - 01:16 PM

A few days ago, I wanted to test some initial systems I made and see how they would perform on a computer that wasn't my dev pc.

 

So I handed out some EXEs to my friends. First friend: everything runs perfectly. Second friend: instant crash! I thought maybe it had something to do with graphics drivers, so they tested it on a different computer... boom, crash again! Having no idea why, I added more debug and checkpoint statements. After an hour or so I finally had enough statements to pinpoint the problem: I had never set a GLEW boolean to true.

 

Then this got me thinking about debugging. What happens if something like this happens again? I don't want to go back and add a ton of statements every time a problem happens. Normally I would add some global debug variable to switch debugging on/off, but when I think about it, that can't be that great of a solution, right? I mean, you'd have an if check for every block of debug statements. That's got to hurt performance, especially if you have a lot of them.
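Something along these lines is what I mean by the global debug variable approach (a rough sketch for illustration, not actual project code):

#include <iostream>

bool debugEnabled = false;   // flipped on when diagnosing a problem

void checkpoint(const char* message)
{
    if (debugEnabled)                       // this check runs for every checkpoint...
        std::cout << message << std::endl;  // ...even when debugging is switched off
}

// Usage: sprinkle checkpoint("glewInit done"); around the init code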

 

So I'm wondering, what do you guys do? Or am I overthinking it?

 

 




#5203942 Minimal test case not working - using intermediate framebuffer

Posted by noodleBowl on 13 January 2015 - 09:10 AM

But it still doesn't work. Here is the current source: http://sprunge.us/CDPR


I think it might have something to do with the ordering of your index values and the data your quad is using.
Assuming you did the fixes Xycaleth pointed out, and that the bottom-left corner of the screen is -1,-1, maybe the following will work:


GLfloat quad[] = {
	-1.0f, -1.0f, 1.0f, 0.0f,
	-1.0f,  1.0f, 1.0f, 0.0f,
	 1.0f,  1.0f, 1.0f, 0.0f,
	 1.0f, -1.0f, 1.0f, 0.0f
};
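And a guess at a matching index order for that vertex layout (hedged; the index buffer in the linked source may already differ):

GLuint quadIndices[] = {
    0, 1, 2,   // bottom-left, top-left, top-right
    0, 2, 3    // bottom-left, top-right, bottom-right
};
// Note this winding is clockwise for the vertex order above; with default back-face
// culling enabled these triangles would be culled, so either disable culling or flip the winding.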




#5203293 Triangles can't keep up?

Posted by noodleBowl on 10 January 2015 - 10:31 AM

Currently I'm finishing up my particle engine and I noticed something very weird; it appears I have a graphics glitch.
 
My particle engine, in a nutshell, uses transform feedback and GL_TRIANGLES to save the state of each particle. Then, when I'm ready to render, I use glDrawArrays with GL_TRIANGLES. My particles are quads, so 6 vertices per particle, or 2 triangles. When I update I just run everything through my vertex shader; I have no geometry shader.

Update method - CPU side
glUseProgram(updateProgram.programID);

glEnable(GL_RASTERIZER_DISCARD);

glBindVertexArray(vaos[currentBuffer]);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, vbos[1 - currentBuffer]);

glBeginTransformFeedback(GL_TRIANGLES);
glDrawArrays(GL_TRIANGLES, 0, particleLimit * NON_INDEXED_VERTEX_COUNT_PER_QUAD);
glEndTransformFeedback();

glBindVertexArray(0);
glDisable(GL_RASTERIZER_DISCARD);

currentBuffer = 1 - currentBuffer;
 
Now my problem comes in when I turn VSync on:
if (SDL_GL_SetSwapInterval(1) < 0)
	std::cout << "[WARNING] Failed to turn VSYNC on" << std::endl;
Update - GPU side (vertex shader code)
#version 330 core

// These two uniforms are referenced below but were missing from the paste
uniform float deltaTime;
uniform vec2 emitterPosition;

in vec2 positionIN;
in vec2 deviationIN;
in vec2 velocityIN;
in vec2 originalVelocityIN;
in vec2 accelerationIN;
in float particleLifespanIN;
in float originalParticleLifespanIN;

out vec2 positionOUT;
out vec2 deviationOUT;
out vec2 velocityOUT;
out vec2 originalVelocityOUT;
out vec2 accelerationOUT;
out float particleLifespanOUT;
out float originalParticleLifespanOUT;

void main()
{
	positionOUT = positionIN;
	deviationOUT = deviationIN;
	velocityOUT = velocityIN;
	originalVelocityOUT = originalVelocityIN;
	accelerationOUT = accelerationIN;
	particleLifespanOUT = particleLifespanIN;
	originalParticleLifespanOUT = originalParticleLifespanIN;
	
	if(particleLifespanOUT >= 0 || originalParticleLifespanIN == 0)
	{
		particleLifespanOUT -= deltaTime;
	
		velocityOUT += accelerationIN * deltaTime;
		positionOUT += velocityIN * deltaTime;
	}
	else
	{
		positionOUT = emitterPosition + deviationIN;
		velocityOUT = originalVelocityIN;
		particleLifespanOUT = originalParticleLifespanIN;
	}
}
My render
glUseProgram(renderProgram.programID);
glUniformMatrix4fv(renderProgram.uniforms["MVP"].location, 1, GL_FALSE, viewMatrix->matrix);

glBindVertexArray(vaos[currentBuffer]);
glDrawArrays(GL_TRIANGLES, 0, particleLimit * NON_INDEXED_VERTEX_COUNT_PER_QUAD);
glBindVertexArray(0);
Once my particles move, the two triangles that form each quad seem unable to keep up with each other, even though they are moving at the same speed. It just looks like this at times:
glitch.png

Very subtle, but still can be seen
glitch2.png

Real screen capture; the middle-ish quad is split in two:
ActualScreenCap.png

Any ideas on how to fix it? It makes my particles look very strange as they move along.


#5201038 How to create a particle system on the GPU?

Posted by noodleBowl on 31 December 2014 - 12:40 PM

An alternative to CDProps' suggestion, if you don't want to use the texture-based solution, is to use transform feedback.

There are a few tutorials around on the net like this one: http://ogldev.atspace.co.uk/www/tutorial28/tutorial28.html

 

The premise is basically the same, but instead of using textures for each of your particle attributes you just have two buffer objects, where one buffer is used as your input buffer and the other as your output buffer. A minimal sketch of that ping-pong setup is below.
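A minimal sketch of the two-buffer ping-pong (bufferSize, initialData, and particleCount are placeholders; attribute setup and the feedback varyings are omitted):

GLuint particleVbos[2];
glGenBuffers(2, particleVbos);
for (int i = 0; i < 2; ++i)
{
    glBindBuffer(GL_ARRAY_BUFFER, particleVbos[i]);
    glBufferData(GL_ARRAY_BUFFER, bufferSize, initialData, GL_DYNAMIC_COPY);
}

int current = 0;

// Each update pass: read from particleVbos[current], write into particleVbos[1 - current]
glEnable(GL_RASTERIZER_DISCARD);                       // update only, nothing is drawn
glBindBuffer(GL_ARRAY_BUFFER, particleVbos[current]);  // input (via the usual attribute setup)
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, particleVbos[1 - current]);  // output
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, particleCount);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);

current = 1 - current;   // the freshly written buffer becomes next frame's input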




#5195272 Super Cool Effects

Posted by noodleBowl on 28 November 2014 - 05:49 PM

So all those effects are done just by using multiple textures and blending them all together? That's it, nothing really special?

 

I hope this makes sense, but couldn't I just "pre"-blend my special effect textures in Photoshop and render them as normal? Why even use the GPU to do this for me?

 

glBlendFunc(GL_ONE, GL_ONE) usually does it

 

Could you explain the difference between this and what I have below? I don't truly understand the glBlendFunc command. I just know that what I have below allows me to blend textures that use an alpha channel. Would this work for what I want to do?

glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
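For reference, assuming the default blend equation (GL_FUNC_ADD), the two calls combine a source fragment with what is already in the framebuffer like this:

glBlendFunc(GL_ONE, GL_ONE);                        // additive: result = src + dst
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // alpha:    result = src * srcAlpha + dst * (1 - srcAlpha)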
 




