gamedev199191

Member Since 14 Jan 2009

Topics I've Started

[solved] OpenGL GLSL shader tutorial example translated, but not working

04 March 2012 - 04:38 PM

Hi,

I've been converting the "OpenGL Tutorials for the modern graphics programmer" (see here) from the C it is written in to Python, using the gletools module. Up to this point, each direct Python conversion has worked exactly as the original C version does, and when one hasn't, it has been due to something not being converted across correctly.

However, having converted the Lesson 6 / Translation example (see here) over to Python, it just won't work. I've gone over the original code side by side with the Python conversion several times and fixed every difference I found, but with no discernible differences remaining (besides the implementation language), I'm at a loss as to why it won't work.

The code executes without error, but displays nothing, whereas the C version displays three rainbow-coloured shapes that move around. I've debugged both the Python and C versions to ensure that all values are the same.

Can anyone see anything incorrect?

The code is also attached to this post; to run it, both pyglet (1.2dev or later) and gletools (the release from PyPI is fine) need to be installed.

import ctypes
import time
import math
import pyglet
from pyglet.gl import *
from gletools import ShaderProgram, FragmentShader, VertexShader
window = pyglet.window.Window(width=500, height=500)
modelToCameraMatrixUnif = None
cameraToClipMatrixUnif = None
cameraToClipMatrix = (GLfloat * 16)(*([ 0.0 ] * 16))
def CalcFrustumScale(fFovDeg):
	degToRad = 3.14159 * 2.0 / 360.0
	fFovRad = fFovDeg * degToRad
	return 1.0 / math.tan(fFovRad / 2.0)
fFrustrumScale = CalcFrustumScale(45.0)
def InitializeProgram():
	global cameraToClipMatrix
	global modelToCameraMatrixUnif
	global cameraToClipMatrixUnif
	modelToCameraMatrixUnif = program.uniform_location("modelToCameraMatrix")
	cameraToClipMatrixUnif = program.uniform_location("cameraToClipMatrix")
	fzNear = 1.0
	fzFar = 45.0
	cameraToClipMatrix[0] = fFrustrumScale # Column 0.x
	cameraToClipMatrix[5] = fFrustrumScale # Column 1.y
	cameraToClipMatrix[10] = (fzFar + fzNear) / (fzNear - fzFar) # Column 2.z
	cameraToClipMatrix[11] = -1.0 # Column 2.w
	cameraToClipMatrix[14] = (2 * fzFar * fzNear) / (fzNear - fzFar) # Column 3.z
	with program:
		glUniformMatrix4fv(cameraToClipMatrixUnif, 1, GL_FALSE, cameraToClipMatrix)

GREEN_COLOR	 = 0.0, 1.0, 0.0, 1.0
BLUE_COLOR	  = 0.0, 0.0, 1.0, 1.0
RED_COLOR	   = 1.0, 0.0, 0.0, 1.0
GREY_COLOR	  = 0.8, 0.8, 0.8, 1.0
BROWN_COLOR	 = 0.5, 0.5, 0.0, 1.0
vertexData = [
	+1.0, +1.0, +1.0,
	-1.0, -1.0, +1.0,
	-1.0, +1.0, -1.0,
	+1.0, -1.0, -1.0,
	-1.0, -1.0, -1.0,
	+1.0, +1.0, -1.0,
	+1.0, -1.0, +1.0,
	-1.0, +1.0, +1.0,
]
numberOfVertices = len(vertexData) // 3 # integer division, so the byte offset computed later stays an int
colours = [
	GREEN_COLOR,
	BLUE_COLOR,
	RED_COLOR,
	BROWN_COLOR,
	GREEN_COLOR,
	BLUE_COLOR,
	RED_COLOR,
	BROWN_COLOR,
]
for colour in colours:
	vertexData.extend(colour)
vertexDataGl = (GLfloat * len(vertexData))(*vertexData)
sizeof_GLfloat = ctypes.sizeof(GLfloat)
indexData = [
	0, 1, 2,
	1, 0, 3,
	2, 3, 0,
	3, 2, 1,
	5, 4, 6,
	4, 5, 7,
	7, 6, 4,
	6, 7, 5,
]
indexDataGl = (GLushort * len(indexData))(*indexData)
sizeof_GLushort = ctypes.sizeof(GLushort)

program = ShaderProgram(
	FragmentShader('''
	#version 330
	smooth in vec4 theColor;
	out vec4 outputColor;
	void main()
	{
		outputColor = theColor;
	}
	'''),
	VertexShader('''
	#version 330
	layout (location = 0) in vec4 position;
	layout (location = 1) in vec4 color;
	smooth out vec4 theColor;
	uniform mat4 cameraToClipMatrix;
	uniform mat4 modelToCameraMatrix;
	void main()
	{
		vec4 cameraPos = modelToCameraMatrix * position;
		gl_Position = cameraToClipMatrix * cameraPos;
		theColor = color;
	}
	''')
)

vertexBufferObject = GLuint()
indexBufferObject = GLuint()
vao = GLuint()

def InitializeVertexBuffer():
	global vertexBufferObject, indexBufferObject
	glGenBuffers(1, vertexBufferObject)
	glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject)
	glBufferData(GL_ARRAY_BUFFER, len(vertexDataGl)*sizeof_GLfloat, vertexDataGl, GL_STATIC_DRAW)
	glBindBuffer(GL_ARRAY_BUFFER, 0)
	glGenBuffers(1, indexBufferObject)
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferObject)
	glBufferData(GL_ELEMENT_ARRAY_BUFFER, len(indexDataGl)*sizeof_GLushort, indexDataGl, GL_STATIC_DRAW)
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0)

def _StationaryOffset(fElapsedTime):
	return [ 0.0, 0.0, -20.0 ]
def _OvalOffset(fElapsedTime):
	fLoopDuration = 3.0
	fScale = 3.14159 * 2.0 / fLoopDuration
	fCurrTimeThroughLoop = math.fmod(fElapsedTime, fLoopDuration)
	return [ math.cos(fCurrTimeThroughLoop * fScale) * 4.0, math.sin(fCurrTimeThroughLoop * fScale) * 6.0, -20.0 ]
def _BottomCircleOffset(fElapsedTime):
	fLoopDuration = 12.0
	fScale = 3.14159 * 2.0 / fLoopDuration
	fCurrTimeThroughLoop = math.fmod(fElapsedTime, fLoopDuration)
	return [ math.cos(fCurrTimeThroughLoop * fScale) * 5.0, -3.5, math.sin(fCurrTimeThroughLoop * fScale) * 5.0 - 20.0 ]
def ConstructMatrix(f, fElapsedTime):
	theMat = (GLfloat * 16)(*([ 0.0 ] * 16))
	theMat[0] = 1.0 # Column 0.x
	theMat[5] = 1.0 # Column 1.y
	theMat[10] = 1.0 # Column 2.z
	theMat[15] = 1.0 # Column 3.w
	wVec3 = f(fElapsedTime)
	# The uniform is uploaded with transpose=GL_FALSE, so OpenGL reads these
	# 16 floats column-major: the translation belongs in column 3, at flat
	# indices 12-14 (not 3/7/11, which would put it in the bottom row).
	theMat[12] = wVec3[0] # Column 3.x
	theMat[13] = wVec3[1] # Column 3.y
	theMat[14] = wVec3[2] # Column 3.z
	return theMat
objects = [
	_StationaryOffset,
	_OvalOffset,
	_BottomCircleOffset,
]
def init():
	global vao
	InitializeProgram()
	InitializeVertexBuffer()
	glGenVertexArrays(1, vao)
	glBindVertexArray(vao)
	colorDataOffset = sizeof_GLfloat * 3 * numberOfVertices
	glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject)
	glEnableVertexAttribArray(0)
	glEnableVertexAttribArray(1)
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0)
	glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, colorDataOffset)
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferObject)
	glBindVertexArray(0)
	glEnable(GL_CULL_FACE)
	glCullFace(GL_BACK)
	glFrontFace(GL_CW)
	glEnable(GL_DEPTH_TEST)
	glDepthMask(GL_TRUE)
	glDepthFunc(GL_LEQUAL)
	glDepthRange(0.0, 1.0)
@window.event
def on_draw():
	glClearColor(0.0, 0.0, 0.0, 0.0)
	glClearDepth(1.0)
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
	with program:
		glBindVertexArray(vao)
		elapsedTime = time.clock()
		for f in objects:
			transformMatrix = ConstructMatrix(f, elapsedTime)
			glUniformMatrix4fv(modelToCameraMatrixUnif, 1, GL_FALSE, transformMatrix)
			glDrawElements(GL_TRIANGLES, len(indexData), GL_UNSIGNED_SHORT, 0)
		glBindVertexArray(0)
@window.event
def on_resize(width, height):
	cameraToClipMatrix[0] = fFrustrumScale / (height / float(width))
	cameraToClipMatrix[5] = fFrustrumScale
	with program:
		glUniformMatrix4fv(cameraToClipMatrixUnif, 1, GL_FALSE, cameraToClipMatrix)
	glViewport(0, 0, width, height)
	return pyglet.event.EVENT_HANDLED
init()

def update(dt):
	pass
pyglet.clock.schedule_interval(update, 1/60.0)
pyglet.app.run()
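A note on the matrix layout that trips up most ports of these tutorials: because the uniforms are uploaded with the transpose argument set to GL_FALSE, OpenGL interprets the 16 floats column-major, so a translation belongs at flat indices 12-14 (column 3), not at 3/7/11. A small pure-Python check of that indexing (the helper names are mine, not part of the tutorial):

```python
def mat4_identity():
    # Flat 16-element list, column-major (OpenGL convention).
    m = [0.0] * 16
    m[0] = m[5] = m[10] = m[15] = 1.0
    return m

def mat4_mul_vec4(m, v):
    # Column-major: element (row r, column c) lives at m[c*4 + r].
    return [sum(m[c * 4 + r] * v[c] for c in range(4)) for r in range(4)]

# A translation by (2, 3, -20) goes in column 3 -> flat indices 12, 13, 14.
t = mat4_identity()
t[12], t[13], t[14] = 2.0, 3.0, -20.0

print(mat4_mul_vec4(t, [1.0, 1.0, 1.0, 1.0]))  # [3.0, 4.0, -19.0, 1.0]
```

Putting the offsets at indices 3/7/11 instead lands them in the bottom row, which corrupts the w coordinate and leaves everything clipped away, matching the "runs without error, draws nothing" symptom.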

Game items and the database

24 January 2009 - 04:33 AM

Let's say I'm programming an RPG and I intend to store my data in a database. There's an Orc race. Orcs normally have:

Height: 1.5 meters
Weight: 120 kilograms
Maximum movement speed: 3

..and a number of other standard 'orcy' properties. I also want to have a stronger, larger, slower orc; let's call these Orc Thumpers. I want them to have a different height, weight and maximum movement speed, but I also want them to stay in sync with the standard 'orcy' properties, so that I don't have to work out all the different types of Orc and update them all whenever I change one of the base values. How would you recommend I store this data?
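One way to keep variants in sync is prototype-style inheritance: each template record stores a parent reference and only the fields it overrides, and the full stat block is resolved by walking the parent chain. A minimal sketch in Python (the record layout and field names are hypothetical, not from the thread):

```python
# Hypothetical template records, as they might come out of a database table
# with columns (id, parent_id, overridden fields only).
TEMPLATES = {
    "orc":         {"parent": None,  "stats": {"height": 1.5, "weight": 120, "max_speed": 3}},
    "orc_thumper": {"parent": "orc", "stats": {"height": 2.0, "weight": 200, "max_speed": 2}},
    "orc_scout":   {"parent": "orc", "stats": {"max_speed": 5}},  # inherits height/weight
}

def resolve(template_id):
    """Merge stats down the parent chain; a child's fields win over its parent's."""
    record = TEMPLATES[template_id]
    base = resolve(record["parent"]) if record["parent"] else {}
    merged = dict(base)
    merged.update(record["stats"])
    return merged

print(resolve("orc_scout"))  # {'height': 1.5, 'weight': 120, 'max_speed': 5}
```

Changing the base Orc's weight then automatically flows through to every variant that hasn't overridden it, which is exactly the "keep in sync" behaviour asked for.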

Thoughts on multi-user live content development

14 January 2009 - 02:26 PM

Something I have been thinking about recently is one or more users doing content development against a shared central server, where this content would be tested by clients running against the same server.

One example is the creation of item templates. This might, for example, be a kind of sword a user might come across, or a kind of trade good which players might be able to sell.

One way is defining them with code. The data from the template object is then propagated either to all clients, or just to the ones where instances of that template are present. There's an advantage to this approach in that the code content definitions are most likely checked into the same repository as the matching game logic for a build, which makes it easier to ensure builds can be remade as time passes. Multiverse and the Torque MMO Kit both take the code template definition approach.

Then there is defining them in a database. When modifying content, designers would work against the same development/authoring servers which run against the database backend. For builds, you might serialise the tables or views and bundle them with those builds of the live servers.

However, when the time comes to edit content for a released build, each approach faces it differently. In the code-driven approach, you might work in the relevant build branch, and because the matching data is there with the game logic, you can just build those together after the changes. In the data-driven approach, you might find that the content server has moved on to incorporate changes for the next release.

But the aspect which interests me the most is live use of content authoring changes. What if your server doesn't send template specifications to the clients, but rather just says that they should know the template already and, hey, here's a new instance of it? That's a potentially ideal situation, because then you reduce the bandwidth used.
I know WoW doesn't do that, and one of the arguments is the fun of discovering this content, versus being suckered into browsing a website which catalogues it, having perhaps extracted it from the client datasets. But then again, who really gets to discover this but the hardcore guilds? So whether it is worth implementing this way just to limit discovery is questionable.

Anyway, back to what I consider the interesting stuff: a potential problem which, for me, illustrates the complexity of live content development and testing. It's not an ideal situation, but it covers clean recovery from a worst-case scenario.

Your clients have serialised template data. The server just sends instance data. It doesn't keep an exhaustive level of detail about what a given client knows about, but rather bases it abstractly on proximity. In this case, the clients are connecting to the authoring server. A content developer looks at a dungeon instance and realises he doesn't want some encounter there, and furthermore he no longer wants the monster template variant he specifically created for it. So the template variant usages are deleted from the encounter, the encounter is deleted, and then the template variant itself is deleted.

Now, there are two places where the template variant might still be used: the server-side encounter which has been spawned and controlled within the dungeon, with instances of the entity template variant, and the client-side representation of those instances. Any reference to the template past the point of its deletion, by the client or server, may cause errors when joins are done against the template table.

In an ideal game authoring engine, the live state would be cleaned up. But who has time to write an ideal engine? You just write the best you can and then refactor it as you develop it further. Is it possible to really design this ability into the engine from scratch? Are there industry best practices for writing engines to handle this?
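One pragmatic way to handle the dangling-reference problem described above (my own sketch, not an established engine practice) is a soft delete: the authoring server tombstones a deleted template so designers can no longer spawn from it, but keeps it resolvable until the last live instance referencing it despawns, at which point it is purged for real:

```python
class TemplateStore:
    """Soft-delete store: tombstoned templates stay resolvable
    until the last live instance referencing them is gone."""

    def __init__(self):
        self._templates = {}    # template id -> template data
        self._refcounts = {}    # template id -> count of live instances
        self._tombstoned = set()

    def add(self, template_id, data):
        self._templates[template_id] = data
        self._refcounts[template_id] = 0

    def spawn(self, template_id):
        # New instances may only come from templates that still officially exist.
        if template_id in self._tombstoned:
            raise ValueError("cannot spawn from a deleted template")
        self._refcounts[template_id] += 1
        return self._templates[template_id]

    def despawn(self, template_id):
        self._refcounts[template_id] -= 1
        self._maybe_purge(template_id)

    def delete(self, template_id):
        # Hide from authors immediately; purge only once unreferenced.
        self._tombstoned.add(template_id)
        self._maybe_purge(template_id)

    def _maybe_purge(self, template_id):
        if template_id in self._tombstoned and self._refcounts[template_id] == 0:
            del self._templates[template_id]
            del self._refcounts[template_id]
            self._tombstoned.discard(template_id)

store = TemplateStore()
store.add("orc_variant", {"hp": 40})
store.spawn("orc_variant")
store.delete("orc_variant")   # tombstoned, but still resolvable for the live instance
store.despawn("orc_variant")  # last instance gone -> actually purged
```

Existing server and client state keeps resolving cleanly while it winds down, and the joins-against-a-missing-row failure mode never arises, at the cost of the template table temporarily carrying "deleted" rows.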
Does anyone have any experience with, or references to articles, talks and similar media relating to, systems like this? Anyway, I've enjoyed writing this post, but I don't have time to reread and edit it before I hit the sack. I hope I haven't written anything too incoherent :-)
