
Matt328

Member Since 11 May 2006
Offline Last Active Jul 26 2014 07:38 PM

Topics I've Started

Interleaved Arrays

16 July 2012 - 08:20 PM

I believe that's what I'm trying to do. Coming from DirectX, you would define a structure for your vertices; pack in a position, normal, texture coordinates, and whatever else your shader needs; and bind an array of them to the API to be drawn with an index buffer. I'm trying to get the same thing going in OpenGL 4.2, and I'm having a hard time getting anything to render on screen.

Here is my vertex structure:
[source lang="cpp"]struct VertexPositionNormalTexture { float x, y, z; //Vertex float nx, ny, nz; //Normal float s0, t0; //Texcoord0};[/source]
Here is creation of my buffer objects:
[source lang="cpp"]BWResult create_cube_geometry(Geometry* geometry, VertexType type) { VertexPositionNormalTexture pvertex[3]; //VERTEX 0 pvertex[0].x = 0.0; pvertex[0].y = 0.0; pvertex[0].z = 0.0; pvertex[0].nx = 0.0; pvertex[0].ny = 0.0; pvertex[0].nz = 1.0; pvertex[0].s0 = 0.0; pvertex[0].t0 = 0.0; //VERTEX 1 pvertex[1].x = 1.0; pvertex[1].y = 0.0; pvertex[1].z = 0.0; pvertex[1].nx = 0.0; pvertex[1].ny = 0.0; pvertex[1].nz = 1.0; pvertex[1].s0 = 1.0; pvertex[1].t0 = 0.0; //VERTEX 2 pvertex[2].x = 0.0; pvertex[2].y = 1.0; pvertex[2].z = 0.0; pvertex[2].nx = 0.0; pvertex[2].ny = 0.0; pvertex[2].nz = 1.0; pvertex[2].s0 = 0.0; pvertex[2].t0 = 1.0; GLushort indices[3]; indices[0] = 0; indices[1] = 1; indices[2] = 2; GLuint vertexBuffer; glGenBuffers(1, &vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(VertexPositionNormalTexture) * 4, pvertex, GL_STATIC_DRAW); GLuint indexBuffer; glGenBuffers(1, &indexBuffer); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vertexBuffer); glBufferData(GL_ELEMENT_ARRAY_BUFFER, 3 * sizeof(GLushort), indices, GL_STATIC_DRAW); geometry->indexBufferId = indexBuffer; geometry->vertexBufferId = vertexBuffer; geometry->vertexType = VERTEX_PNT; geometry->indexBufferSize = 6; geometry->vertexSize = sizeof(GLfloat) * 8; geometry->offset1 = sizeof(GLfloat) * 3; geometry->offset2 = geometry->offset1 + (sizeof(GLfloat) * 3); return BW_SUCCESS;}[/source]
And here is the function that renders the object:
[source lang="cpp"]void render_geometry(Geometry geometry, ShaderInfo shaderInfo) { glBindBuffer(GL_ARRAY_BUFFER, geometry.vertexBufferId); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, geometry.indexBufferId); // Set Position Pointer GLuint positionAttribute = glGetAttribLocation(shaderInfo.programId, "position"); glEnableVertexAttribArray(positionAttribute); glVertexAttribPointer(positionAttribute, 3, GL_FLOAT, GL_FALSE, geometry.vertexSize, BUFFER_OFFSET(0)); // Set Normal Pointer GLuint normalAttribute = glGetAttribLocation(shaderInfo.programId, "normal"); glEnableVertexAttribArray(normalAttribute); glVertexAttribPointer(normalAttribute, 3, GL_FLOAT, GL_FALSE, geometry.vertexSize, BUFFER_OFFSET(12)); // Set TexCoord Pointer GLuint texCoordAttribute = glGetAttribLocation(shaderInfo.programId, "texCoord"); glEnableVertexAttribArray(texCoordAttribute); glVertexAttribPointer(texCoordAttribute, 2, GL_FLOAT, GL_FALSE, geometry.vertexSize, BUFFER_OFFSET(24)); glDrawElements(GL_TRIANGLE_STRIP, 3, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0)); glDisableVertexAttribArray(positionAttribute); glDisableVertexAttribArray(normalAttribute); glDisableVertexAttribArray(texCoordAttribute);}[/source]
And in my main function, for now, I'm doing this in my loop:
[source lang="cpp"] glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); running = !glfwGetKey(GLFW_KEY_ESC) && glfwGetWindowParam(GLFW_OPENED); glUseProgram(shader_info.programId); GLuint matrixId = glGetUniformLocation(shader_info.programId, "MVP"); glUniformMatrix4fv(matrixId, 1, GL_FALSE, &mvp[0][0]); render_geometry(geo, shader_info); glUseProgram(0); glfwSwapBuffers();[/source]
One last piece that may be relevant, here is how I am creating my model-view-projection matrix:
[source lang="cpp"]glm::mat4 projection = glm::perspective(45.0f, 16.0f / 9.0f, 0.1f, 100.0f);glm::mat4 view = glm::lookAt(glm::vec3(0, 0, 10), glm::vec3(0, 0, 0), glm::vec3(0, 1, 0));glm::mat4 model = glm::mat4(1.0f);glm::mat4 mvp = projection * view * model;[/source]
I lied, you may need to check out my vertex shader as well:
[source lang="cpp"]#version 330layout(location = 0) in vec3 position;layout(location = 1) in vec2 texCoord;out vec2 UV;uniform mat4 MVP;void main() { vec4 v = vec4(position, 1); gl_Position = MVP * v; UV = texCoord;}[/source]
I have a fragment shader that just sets the color to white for now.
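For completeness, a minimal sketch of what I mean (my actual shader may differ slightly, but it just outputs white):
[source lang="cpp"]#version 330

out vec4 fragColor;

void main() {
    // Solid white for now, just to prove geometry is reaching the screen.
    fragColor = vec4(1.0, 1.0, 1.0, 1.0);
}[/source]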

Sorry for the code dumps, but I think those are all the main parts where I might have something messed up. With DX, I would fire up PIX and be able to a) look at the contents of the bound buffer, and b) step individual vertices through the shader stages to find out where something went wrong. With gDEBugger, in the 'Textures, Buffers, and Images viewer', when I click on the VBO that should contain my vertices, I get 'Unable to load Buffer'. I'm guessing that indicates something is messed up with the way I'm binding my buffer or specifying its data, but I can't for the life of me find a complete example using interleaved arrays with modern OpenGL.
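For reference, here is the shape I'd expect the buffer setup to take, pieced together from tutorials (a sketch only, untested). Note that it binds the index buffer itself before uploading index data, and that core-profile GL also seems to require a vertex array object:
[source lang="cpp"]// Sketch, untested: core profile (3.2+) needs a VAO bound before
// attribute pointers are set or draws are issued.
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

GLuint vertexBuffer;
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
// Three vertices, so the size matches the array exactly.
glBufferData(GL_ARRAY_BUFFER, sizeof(VertexPositionNormalTexture) * 3,
             pvertex, GL_STATIC_DRAW);

GLuint indexBuffer;
glGenBuffers(1, &indexBuffer);
// Bind the index buffer (not the vertex buffer) as GL_ELEMENT_ARRAY_BUFFER.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 3 * sizeof(GLushort),
             indices, GL_STATIC_DRAW);[/source]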

Boost.Python loading from script

12 February 2012 - 08:57 PM

I'm investigating using Boost.Python for scripting in a game. After reading all the documentation I can find on embedding Python with Boost.Python, I think a good strategy would be to have base classes defined in C++, create derived classes in Python scripts, and then call the base-class methods on objects from my game, which would defer to the virtual methods defined in the Python script to execute some logic.

Given the following code, the next step I would like to take is replacing the exec call with the exec_file call, but apparently there is a lot I don't understand yet.

#include "stdafx.h"



namespace py = boost::python;



class Object {

public:

	Object(int id) :

		id(id) {

	}

	virtual ~Object() {

	}



	int getId() const {

		return id;

	}



	virtual int f() {

		std::cout << "Object" << std::endl;

		return 0;

	}



private:

	int id;

};



struct Derived: Object {

	Derived(int i) :

		Object(i) {

	}



	virtual int f(void) {

		std::cout << "Derived\n";

		return 0;

	}

};



struct ObjectWrap: public Object, public py::wrapper<object> {

	ObjectWrap(int i) :

		Object(i) {

	}



	int f() {

		if (py::override f = this->get_override("f")) {

			return f();

		}

		return Object::f();

	}



	int default_f() {

		return this->Object::f();

	}

};



BOOST_PYTHON_MODULE(Core) {

	py::class_<object, boost::shared_ptr<object="">, boost::noncopyable>("__Object", "I'm an implementation detail, Pretend I don't exist", py::no_init)

	.add_property("id", &Object::getId)

	.def("f", &Object::f);



	py::class_<objectwrap, boost::shared_ptr<objectwrap="">, boost::noncopyable>("Object", py::init<int>())

	.add_property("id", &Object::getId)

	.def("f", &Object::f, &ObjectWrap::default_f);

}



int _tmain(int argc, _TCHAR* argv[]) {

	Py_Initialize();

	try {

		initCore();



		py::object main_module = py::import("__main__");

		py::object main_namespace = main_module.attr("__dict__");



		py::object ignored3 = py::exec_file("CustomObject.py", main_namespace,

				main_namespace);



		py::object ignored = py::exec("import Core\n"

			"class CustomObject(Core.Object):\n"

			" def __init__(self,id):\n"

			" Core.Object.__init__(self,id)\n"

			" def f(self):\n"

			" print \"CustomObject\"\n"

			" return 0\n"

			"object = CustomObject(1337)\n", main_namespace, main_namespace);



		py::object object = main_namespace["object"];

		object.attr("f")();



		boost::shared_ptr<object> o = py::extract<boost::shared_ptr<object>>(

				object);

		o->f();



		boost::shared_ptr<object> o2(new Derived(1337));

		main_namespace["object"] = o2;

		py::object ignored2 = py::exec("object.f()\n", main_namespace,

				main_namespace);



	} catch (const py::error_already_set&) {

		PyErr_Print();

	}



	Py_Finalize();

	system("pause");

	return 0;

}


When I uncomment the line with the exec_file call, and comment out the line with the exec call, I get the following error:

Traceback (most recent call last):
  File "CustomObject.py", line 10, in <module>
    object = CustomObject(1337)
  File "CustomObject.py", line 5, in __init__
    Core.Object.__init__self(id)
AttributeError: type object 'Object' has no attribute '_CustomObject__init__self'


I am guessing this is happening because my base class, Object, cannot be imported into the main namespace, since it is defined on the C++ side? Or not; I really have no clue and am just stabbing in the dark here.

The code, as posted, works just fine, but isn't all that useful to me, since it's executing a Python string defined in C++. I could read the .py files into a string and use the exec call to execute them that way, but exec_file seems much more elegant. Unfortunately, all the examples end at this point and leave reading actual Python script files as an exercise for the reader.
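For what it's worth, a minimal sketch of that read-into-a-string fallback (untested; assumes the same CustomObject.py and main_namespace as above):

	#include <fstream>
	#include <sstream>
	#include <string>

	// Sketch: slurp the script file into a string, then run it with py::exec.
	std::ifstream file("CustomObject.py");
	std::stringstream buffer;
	buffer << file.rdbuf();
	std::string script = buffer.str();

	py::object result = py::exec(script.c_str(), main_namespace, main_namespace);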

Can anyone point me in the right direction, and at least verify that my approach is valid?

Edit: There is definitely something wrong with the source tags; putting lang="cpp" in there caused the forum to choke on the double less-than symbol. At least all of the code is there now. If a mod wants to fix it or enlighten me on the subtleties of the new forum software, I'd appreciate it.

[DX11] Blending Lights

22 September 2011 - 03:23 PM

I'm implementing deferred rendering. Everything was going smoothly until I got to the part where I need to support multiple directional lights. Just for testing purposes, I've created a red light coming from the positive x-axis and a blue light coming from the negative x-axis. The problem I'm having is that I render one after the other in the same pass before I call Present(), but only the first color is drawn. In PIX, I've debugged a pixel that should be colored by the blue light; the debugger shows that the pixel would be blue as expected, but it failed the depth test.

Here is my blend state description:
D3D11_BLEND_DESC omDesc;
ZeroMemory(&omDesc, sizeof(D3D11_BLEND_DESC));
omDesc.RenderTarget[0].BlendEnable           = TRUE;
omDesc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
omDesc.RenderTarget[0].SrcBlend              = D3D11_BLEND_ONE;
omDesc.RenderTarget[0].DestBlend             = D3D11_BLEND_ONE;
omDesc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
omDesc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
omDesc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ONE;
omDesc.RenderTarget[0].RenderTargetWriteMask = 0x0F;

And here is my depth stencil description:
D3D11_DEPTH_STENCIL_DESC dsDesc;
dsDesc.DepthEnable    = true;
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
dsDesc.DepthFunc      = D3D11_COMPARISON_LESS;

// Stencil test parameters
dsDesc.StencilEnable    = true;
dsDesc.StencilReadMask  = D3D11_DEFAULT_STENCIL_READ_MASK;
dsDesc.StencilWriteMask = D3D11_DEFAULT_STENCIL_WRITE_MASK;

// Stencil operations if pixel is front-facing
dsDesc.FrontFace.StencilFunc        = D3D11_COMPARISON_ALWAYS;
dsDesc.FrontFace.StencilPassOp      = D3D11_STENCIL_OP_KEEP;
dsDesc.FrontFace.StencilFailOp      = D3D11_STENCIL_OP_DECR;
dsDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;

// Stencil operations if pixel is back-facing
dsDesc.BackFace.StencilFunc        = D3D11_COMPARISON_ALWAYS;
dsDesc.BackFace.StencilPassOp      = D3D11_STENCIL_OP_KEEP;
dsDesc.BackFace.StencilFailOp      = D3D11_STENCIL_OP_DECR;
dsDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;


I am drawing this as a fullscreen quad built from components of my g-buffer, so I'm not sure I fully understand how the depth buffer and stencil interact here. I'd appreciate any help anyone can provide; this has had me pretty stumped all afternoon.
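For reference, here is a sketch (untested) of a depth-stencil state for the fullscreen light passes with depth testing and writes turned off entirely, which is the kind of thing I'm wondering whether I need, since the quad itself carries no meaningful depth:

D3D11_DEPTH_STENCIL_DESC lightDsDesc;
ZeroMemory(&lightDsDesc, sizeof(lightDsDesc));
lightDsDesc.DepthEnable    = FALSE;                       // no depth test for the quad
lightDsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO; // and no depth writes
lightDsDesc.DepthFunc      = D3D11_COMPARISON_ALWAYS;
lightDsDesc.StencilEnable  = FALSE;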

Dynamic Vertex Buffers

02 June 2010 - 09:39 AM

I'm reading through the book 3D Game Engine Programming (Zerbst, 2004), which, while a little dated, might still have some ideas applicable today. One I'm trying to get my head around is introducing a layer of caching between the engine's draw calls and the actual D3D draw calls: a vertex-caching scheme based on filling several dynamic vertex buffers until they either need to be flushed or you're done for the frame. I'm skeptical of how he handles vertex translation/rotation/scale, though. Basically, any time the world matrix needs to change (you translate to the next model's position, for example), the buffers need to be flushed, resulting in a DrawPrimitive (or related) call. I'm having trouble seeing how you would ever end up with more than one model's local vertices in one of the dynamic buffers under this scheme. Wouldn't it require translating/rotating/scaling every single vertex on the CPU before putting it in a dynamic buffer in order to achieve any sort of batching?
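To make the question concrete, here is a sketch (untested, all names made up) of what I understand the scheme to require, pre-transforming each vertex on the CPU before appending it to the shared dynamic buffer:

struct CacheVertex {
    D3DXVECTOR3 pos;
    // ...normal, texcoords, etc.
};

// Hypothetical helper: bake the world transform into each vertex so that
// vertices from several models can share one dynamic buffer and one draw call.
void AppendToCache(IDirect3DVertexBuffer9* dynamicVB, UINT offsetBytes,
                   const CacheVertex* src, UINT count, const D3DXMATRIX& world) {
    void* dst = NULL;
    // NOOVERWRITE appends past data the GPU may still be reading, avoiding a stall.
    dynamicVB->Lock(offsetBytes, count * sizeof(CacheVertex), &dst, D3DLOCK_NOOVERWRITE);
    CacheVertex* out = static_cast<CacheVertex*>(dst);
    for (UINT i = 0; i < count; ++i) {
        out[i] = src[i];
        D3DXVec3TransformCoord(&out[i].pos, &src[i].pos, &world);
    }
    dynamicVB->Unlock();
}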

I guess I'm a little unclear on the benefits of dynamic buffers in general. Are they only meant to contain vertices that get translated, rotated and scaled as a group, or is there some other piece I'm unaware of?

From Java to C++

07 February 2010 - 03:53 AM

I'm having trouble applying some of the concepts I use at my day job coding in Java to my hobby project in C++. Let's just say Java has me somewhat spoiled with its instanceof operator and with using reflection to treat class definitions as just another variable (I'm referring to the Class<?> object here).

On the Java side, we are all about separation of responsibilities. When we have a class hierarchy of objects and sets of operations to be performed using those objects, we're very hesitant to just say, OK, object X knows how to perform operation Y on itself. We'll create a factory that produces objects that know how to perform the operation on the given object. The rationale is that when operations need to change, or be outright replaced, there is one central place to do that; we don't have operation logic spread throughout our object hierarchy, nor do we risk having to change the objects' interface and break a bunch of other code using it.

The piece I'm trying to fill in for C++ is: how does the factory decide which type of operation to return for a given object? In Java, you can just create a map from Class<? extends MyObjectInterface> to Class<? extends OperationInterface>, look up the type, and produce operation instances using reflection. One solution I've seen for C++ is dynamic_cast to guess what type of object was passed in, but that seems sloppy, too much like cramming a Java-shaped peg into a C++-shaped hole. Another alternative is a virtual getType() method in the object hierarchy that the factory can use. I'm leaning towards the getType() method, so the objects don't know how to perform an operation on themselves, but they do know how to instruct factories to create operation performers. Do either of these sound acceptable, or is there a whole other design paradigm or philosophy I should be applying in C++?
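To make the getType() idea concrete, here is the sort of factory I'm picturing (a sketch only; all names are made up):

#include <cstddef>
#include <map>
#include <string>

struct Operation {
    virtual ~Operation() {}
    virtual void perform() = 0;
};

struct MyObject {
    virtual ~MyObject() {}
    // Objects only report a type key; they never perform operations themselves.
    virtual std::string getType() const = 0;
};

class OperationFactory {
public:
    typedef Operation* (*Creator)();

    void registerCreator(const std::string& type, Creator create) {
        creators_[type] = create;
    }

    // Look up a creator using the object's self-reported type key.
    Operation* createFor(const MyObject& obj) const {
        std::map<std::string, Creator>::const_iterator it = creators_.find(obj.getType());
        return it != creators_.end() ? (it->second)() : NULL;
    }

private:
    std::map<std::string, Creator> creators_;
};

Each concrete operation would register itself once (say, at startup), so swapping an operation touches only the registration, not the object hierarchy.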
