
gamedev199191

Members
  • Rank: Member
  • Content count: 11
  • Community Reputation: 122 Neutral
  1. Aha, got it. I assumed the float ordering in the matrices was row after row, rather than column after column. Changing that fixes it :-)
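For anyone hitting the same problem, the distinction can be sketched as follows. This is a minimal illustration of OpenGL's convention that glUniformMatrix4fv with a GL_FALSE transpose flag expects column-major data; the helper names are mine, not from the tutorial code.

```python
# A 4x4 translation matrix laid out in column-major order, as
# glUniformMatrix4fv expects when the transpose flag is GL_FALSE.
def translation_column_major(x, y, z):
    m = [0.0] * 16
    m[0] = m[5] = m[10] = m[15] = 1.0
    # The translation vector lives in column 3 (indices 12..14),
    # not at indices 3, 7, 11 as a row-major layout would place it.
    m[12], m[13], m[14] = x, y, z
    return m

def transpose16(m):
    # Convert between row-major and column-major flat layouts.
    return [m[col * 4 + row] for row in range(4) for col in range(4)]
```

Filling indices 3, 7, 11 (as the broken conversion below does) produces the transpose of the intended matrix, which is why nothing ended up on screen.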
  2. Hi, I've been converting the "OpenGL Tutorials for the modern graphics programmer" ([url="https://bitbucket.org/alfonse/gltut/overview"]see here[/url]) from the C it is written in to Python, using the gletools module. Up to this point, each direct Python conversion has worked exactly as the original C version did; when one hasn't, it has been because something wasn't converted across correctly. However, having converted the Lesson 6 / Translation example ([url="https://bitbucket.org/alfonse/gltut/src/65c213cb8c93/Tut%2006%20Objects%20in%20Motion/Translation.cpp"]see here[/url]) to Python, it just won't work. I've gone over the original code side by side with the Python conversion several times, and with no discernible differences remaining (besides the implementation language), I'm at a loss for why it won't work. The code executes without error but displays nothing, whereas the C version displays three rainbow-coloured shapes that move around. I've debugged the Python and C versions to ensure that all values are the same. Can anyone see anything incorrect? The code is also attached to this post, but pyglet (1.2dev or later) and gletools (from pypi is fine) both need to be installed if it is to be run.
[code]
import ctypes
import time
import math

import pyglet
from pyglet.gl import *
from gletools import ShaderProgram, FragmentShader, VertexShader

window = pyglet.window.Window(width=500, height=500)

modelToCameraMatrixUnif = None
cameraToClipMatrixUnif = None
cameraToClipMatrix = (GLfloat * 16)(*([ 0.0 ] * 16))

def CalcFrustumScale(fFovDeg):
    degToRad = 3.14159 * 2.0 / 360.0
    fFovRad = fFovDeg * degToRad
    return 1.0 / math.tan(fFovRad / 2.0)

fFrustrumScale = CalcFrustumScale(45.0)

def InitializeProgram():
    global cameraToClipMatrix
    global modelToCameraMatrixUnif
    global cameraToClipMatrixUnif

    modelToCameraMatrixUnif = program.uniform_location("modelToCameraMatrix")
    cameraToClipMatrixUnif = program.uniform_location("cameraToClipMatrix")

    fzNear = 1.0
    fzFar = 45.0

    cameraToClipMatrix[0] = fFrustrumScale                            # Column 0.x
    cameraToClipMatrix[5] = fFrustrumScale                            # Column 1.y
    cameraToClipMatrix[10] = (fzFar + fzNear) / (fzNear - fzFar)      # Column 2.z
    cameraToClipMatrix[11] = -1.0                                     # Column 3.z
    cameraToClipMatrix[14] = (2 * fzFar * fzNear) / (fzNear - fzFar)  # Column 3.w

    with program:
        glUniformMatrix4fv(cameraToClipMatrixUnif, 1, GL_FALSE, cameraToClipMatrix)

GREEN_COLOR = 0.0, 1.0, 0.0, 1.0
BLUE_COLOR  = 0.0, 0.0, 1.0, 1.0
RED_COLOR   = 1.0, 0.0, 0.0, 1.0
GREY_COLOR  = 0.8, 0.8, 0.8, 1.0
BROWN_COLOR = 0.5, 0.5, 0.0, 1.0

vertexData = [
    +1.0, +1.0, +1.0,
    -1.0, -1.0, +1.0,
    -1.0, +1.0, -1.0,
    +1.0, -1.0, -1.0,

    -1.0, -1.0, -1.0,
    +1.0, +1.0, -1.0,
    +1.0, -1.0, +1.0,
    -1.0, +1.0, +1.0,
]
numberOfVertices = len(vertexData) / 3

colours = [
    GREEN_COLOR, BLUE_COLOR, RED_COLOR, BROWN_COLOR,
    GREEN_COLOR, BLUE_COLOR, RED_COLOR, BROWN_COLOR,
]
for colour in colours:
    vertexData.extend(colour)

vertexDataGl = (GLfloat * len(vertexData))(*vertexData)
sizeof_GLfloat = ctypes.sizeof(GLfloat)

indexData = [
    0, 1, 2,
    1, 0, 3,
    2, 3, 0,
    3, 2, 1,

    5, 4, 6,
    4, 5, 7,
    7, 6, 4,
    6, 7, 5,
]
indexDataGl = (GLushort * len(indexData))(*indexData)
sizeof_GLushort = ctypes.sizeof(GLushort)

program = ShaderProgram(
    FragmentShader('''
#version 330
smooth in vec4 theColor;
out vec4 outputColor;
void main()
{
    outputColor = theColor;
}
'''),
    VertexShader('''
#version 330
layout (location = 0) in vec4 position;
layout (location = 1) in vec4 color;
smooth out vec4 theColor;
uniform mat4 cameraToClipMatrix;
uniform mat4 modelToCameraMatrix;
void main()
{
    vec4 cameraPos = modelToCameraMatrix * position;
    gl_Position = cameraToClipMatrix * cameraPos;
    theColor = color;
}
''')
)

vertexBufferObject = GLuint()
indexBufferObject = GLuint()
vao = GLuint()

def InitializeVertexBuffer():
    global vertexBufferObject, indexBufferObject

    glGenBuffers(1, vertexBufferObject)
    glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject)
    glBufferData(GL_ARRAY_BUFFER, len(vertexDataGl)*sizeof_GLfloat, vertexDataGl, GL_STATIC_DRAW)
    glBindBuffer(GL_ARRAY_BUFFER, 0)

    glGenBuffers(1, indexBufferObject)
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferObject)
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, len(indexDataGl)*sizeof_GLushort, indexDataGl, GL_STATIC_DRAW)
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0)

def _StationaryOffset(fElapsedTime):
    return [ 0.0, 0.0, -20.0 ]

def _OvalOffset(fElapsedTime):
    fLoopDuration = 3.0
    fScale = 3.14159 * 2.0 / fLoopDuration
    fCurrTimeThroughLoop = math.fmod(fElapsedTime, fLoopDuration)
    return [
        math.cos(fCurrTimeThroughLoop * fScale) * 4.0,
        math.sin(fCurrTimeThroughLoop * fScale) * 6.0,
        -20.0,
    ]

def _BottomCircleOffset(fElapsedTime):
    fLoopDuration = 12.0
    fScale = 3.14159 * 2.0 / fLoopDuration
    fCurrTimeThroughLoop = math.fmod(fElapsedTime, fLoopDuration)
    return [
        math.cos(fCurrTimeThroughLoop * fScale) * 5.0,
        -3.5,
        math.sin(fCurrTimeThroughLoop * fScale) * 5.0 - 20.0,
    ]

def ConstructMatrix(f, fElapsedTime):
    theMat = (GLfloat * 16)(*([ 0.0 ] * 16))
    theMat[0] = 1.0    # column 0, row 0 / x
    theMat[5] = 1.0    # column 1, row 1 / y
    theMat[10] = 1.0   # column 2, row 2 / z
    theMat[15] = 1.0   # column 3, row 3 / w

    wVec3 = f(fElapsedTime)
    theMat[3] = wVec3[0]    # column 3, row 0 / x
    theMat[7] = wVec3[1]    # column 3, row 1 / y
    theMat[11] = wVec3[2]   # column 3, row 2 / z
    return theMat

objects = [
    _StationaryOffset,
    _OvalOffset,
    _BottomCircleOffset,
]

def init():
    global vao

    InitializeProgram()
    InitializeVertexBuffer()

    glGenVertexArrays(1, vao)
    glBindVertexArray(vao)

    colorDataOffset = sizeof_GLfloat * 3 * numberOfVertices
    glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject)
    glEnableVertexAttribArray(0)
    glEnableVertexAttribArray(1)
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0)
    glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, colorDataOffset)
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferObject)
    glBindVertexArray(0)

    glEnable(GL_CULL_FACE)
    glCullFace(GL_BACK)
    glFrontFace(GL_CW)

    glEnable(GL_DEPTH_TEST)
    glDepthMask(GL_TRUE)
    glDepthFunc(GL_LEQUAL)
    glDepthRange(0.0, 1.0)

@window.event
def on_draw():
    glClearColor(0.0, 0.0, 0.0, 0.0)
    glClearDepth(1.0)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

    with program:
        glBindVertexArray(vao)
        elapsedTime = time.clock()
        for f in objects:
            transformMatrix = ConstructMatrix(f, elapsedTime)
            glUniformMatrix4fv(modelToCameraMatrixUnif, 1, GL_FALSE, transformMatrix)
            glDrawElements(GL_TRIANGLES, len(indexData), GL_UNSIGNED_SHORT, 0)
        glBindVertexArray(0)

@window.event
def on_resize(width, height):
    cameraToClipMatrix[0] = fFrustrumScale / (height / float(width))
    cameraToClipMatrix[5] = fFrustrumScale
    with program:
        glUniformMatrix4fv(cameraToClipMatrixUnif, 1, GL_FALSE, cameraToClipMatrix)
    glViewport(0, 0, width, height)
    return pyglet.event.EVENT_HANDLED

init()

def update(dt):
    pass
pyglet.clock.schedule_interval(update, 1/60.0)

pyglet.app.run()
[/code]
  3. Quote:Original post by ddn3
The difficult part is to find a good demarcation between what to put in script and what to keep in C++. You'll have to weigh the factors of performance, stability, flexibility and security. If you put too much in scripts you'll find that they become a performance bottleneck; too little, and you won't be harnessing the full power of the system you've built.

I've done a lot of game programming in scripting languages and, when the need arises, have moved the script-based logic into C++. I find most of the work is in determining the best way to implement whatever system I might be working on, and this work exists in whatever language I might be using, high-level or low-level. Once this is done in the scripting language, moving it into C++ is an almost mechanical endeavour. In my experience, the straightforward approach of writing in script when you can, profiling, and moving what needs to be moved into a lower-level language when the need arises has worked well.
  4. Quote:Original post by CadetUmfer
Well, I'm trying to avoid sending even slower projectiles over the net. Since each projectile can only ever interact with the environment once (to explode or die), I was hoping to get away with it. Something like grenades that bounce around would be synced. Managing updates for 100 players is one thing... when they can each have a dozen projectiles in the world at once...

If projectiles don't interact with the environment, does this mean that their fate is predetermined? That you can know at launch whether they will explode or die? Why not just send this with the projectile state when it launches, and apply it when the time comes? Or is it not that simple?
  5. Quote:Original post by WitchLord
I prefer using event handlers with FSMs. I feel it is easier to debug the code that way, and also to serialize the game state when saving/loading. It probably uses less memory overall too, since you don't need to keep a stack for each scripted entity. But as mentioned above, it all comes down to personal preference and what fits best with the rest of the game engine. Both ways have their advantages and disadvantages, but I can't think of anything that can't be implemented either way. The choice of script library may also be influenced by this, or it may influence your decision, depending on which is more important to you. Script libraries may be better at one of the design patterns than the other.

Absolutely. Regarding stack usage: I just wrote a function with a loop in it and then ran it until it blocked. The amount of memory the microthread's stack takes up is 48 bytes. In my language of choice, they are a lightweight tool where the programmer doesn't need to be concerned about their impact on resource usage.
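For what it's worth, the microthread style being discussed can be roughly sketched with plain Python generators (my own illustrative analogue, not the actual microthread implementation): each entity script yields wherever it would block, and a simple scheduler resumes the survivors round-robin.

```python
# Each entity script is a generator that yields when it would block.
def patrol(name, steps):
    for i in range(steps):
        # ... do one step of work, then yield control back ...
        yield "%s step %d" % (name, i)

# A minimal round-robin scheduler: resume each live task once per
# pass, dropping tasks that have finished.
def run(tasks):
    log = []
    while tasks:
        still_running = []
        for task in tasks:
            try:
                log.append(next(task))
                still_running.append(task)
            except StopIteration:
                pass
        tasks = still_running
    return log
```

Each suspended generator holds only its local frame, which is the sense in which such microthreads are cheap compared with a full per-entity stack.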
  6. I could rewrite my entity loops as periodic callbacks without too much effort. There are still callbacks coming in for events anyway, even if actions are primarily managed by the loop. Some clarity is gained from being able to write logic in a looping function, but a degree of it is lost to the relevance checks that have to be made each time the looping function resumes after blocking for any reason. Then again, even with the diluted clarity of the relevance checks, a loop still seems preferable to the loss of ability to gauge visual flow that comes with more callback-driven approaches. I think it comes down to personal preference in programming style.
  7. Quote:Original post by Kylotan
I would just do this:
[code]
CREATE TABLE properties (
    propertyID int PRIMARY KEY AUTOINCREMENT/SEQUENCE/etc,
    entityID int,
    propertyName string,
    propertyValue string
)
[/code]
Property names can be whatever you need, but one would be called 'Inherit', and the value would be the entity you inherit from. To generate all properties for an entity, you select all properties for that entity ID and store them in a map. Then, for any of the properties that are 'Inherit', you repeat the process for the associated value (ie. the 'parent' entity ID), only adding any properties you don't already have. Somewhere in your entity class you will probably have a definitive list of properties, and that list can handle conversion from the database string to an int, float, bool, whatever. If you need ad-hoc properties that don't apply to each entity, you can just convert them at the point of use or the point of access. All it takes is about 10 lines of code to write a few conversion functions and you're done.

Unfortunately, I have come to the realisation that this is not an acceptable approach, for me at least. It is important that I can manipulate data at the database level, and storing arbitrary datatypes as strings does not allow me to do that in a straightforward manner.
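The lookup Kylotan describes can be sketched in a few lines of Python. The table is faked here as a list of (entityID, propertyName, propertyValue) rows, and the names are illustrative only:

```python
# Resolve an entity's properties, following any 'Inherit' entries and
# adding only properties not already present (child values win).
def resolve_properties(rows, entity_id):
    resolved = {}
    while entity_id is not None:
        parent = None
        for eid, name, value in rows:
            if eid != entity_id:
                continue
            if name == "Inherit":
                parent = int(value)
            elif name not in resolved:
                resolved[name] = value
        entity_id = parent
    return resolved

rows = [
    (1, "height", "1.5"),
    (1, "weight", "120"),
    (2, "Inherit", "1"),   # entity 2 inherits from entity 1
    (2, "height", "1.8"),  # ...but overrides its height
]
# resolve_properties(rows, 2) -> {'height': '1.8', 'weight': '120'}
```

Note that every value comes back as a string, which is exactly the limitation objected to above.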
  8. Quote:Original post by someboddy
Maybe you can declare that if one of the fields in the creature's database is 0 (or -1, if you want to allow 0-valued fields), then that field should be looked up in the base creature's row. For example, if you set Orc Thumpers' height to 0, then its height should be the Orc's height.

That's certainly an option. But it implies that each property stored for a creature is stored as a column in a creatures table. Now, if I decide not to have a creatures table, and to instead have a things table, I'm then going to have a lot of fields, a large number of which are not relevant to any given kind of thing that might be thought of and created.
[code]
CREATE TABLE things (
    thingID int,
    thingName string,
    height float,
    weight float,
    movementSpeed float
)
[/code]
Another option might be to have a table for properties where you only add rows for a given kind of creature when they are used. Then, when you look up a property for an Orc Thumper and there isn't a row for that property linked to the relevant Orc Thumpers row, it would know that the parent row for Orc Thumpers was Orc and would do the look-up against that. The same sort of thing as what you suggest, but with a different database model behind it.
[code]
CREATE TABLE things (
    thingID int,
    baseThingID int,
    thingName string
)

INSERT INTO things (thingID, baseThingID, thingName) VALUES (1, NULL, 'Orc')
INSERT INTO things (thingID, baseThingID, thingName) VALUES (2, 1, 'Orc Thumper')

CREATE TABLE properties (
    thingID int,
    propertyName string,
    value float
)

INSERT INTO properties (thingID, propertyName, value) VALUES (1, 'height', 1.5)
[/code]
When all the properties are columns in a table, you can give each column the correct datatype for the values of its property. But with a generic properties table, where the properties for a given kind of thing or creature are separated from the relevant row, you have to settle for some other approach.

You can decide to have a field for each datatype, so that values can be stored in the field of the appropriate type, and register valid properties with their real datatype name in a property registration table. This means that, given N datatypes, for each property value stored you're wasting N-1 fields.
[code]
CREATE TABLE property_registrations (
    propertyID int,
    propertyName string,
    propertyDatatype string
)

CREATE TABLE properties (
    thingID int,
    propertyID int,
    value_string string,
    value_int int,
    value_float float
)
[/code]
Maybe the wasted database space is a concern, so you decide instead on a single generic field, like a string, that property values are encoded into. When you fetch the values, they can then be cast from a string to a number if needed.
[code]
CREATE TABLE property_registrations (
    propertyID int,
    propertyName string,
    propertyDatatype string
)

CREATE TABLE properties (
    thingID int,
    propertyID int,
    value string
)
[/code]
There's any number of factors which may be worth considering. Will the inflexibility and wasted space matter if real columns are used in the things/creatures table? Will the wasted space matter if the properties-table approach is taken? Will hiding every datatype in a string mean that these rows can't be dealt with at the database layer and instead have to be manipulated by code above it? How will that affect things in the future?

Hmm, as I write this it occurs to me: let's say you have a things table, and there's a parent thing for all creatures. And you have a creatures table, with creature-specific fields, so you aren't wasting space on creature data for all things. Chances are you can introspect database internals to work out the lowest level in the chain at which a field exists. You could probably generate these tables and do all sorts of fancy things like that, keeping the full power of the database, something none of the above solutions allow.

Anyone have any experience they can share with any of these solutions? Preference one way or the other for some reason?
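As a concrete sketch of the generic properties-table variant above, here it is in SQLite, with the base-thing fallback done in SQL itself so the data stays manipulable at the database level. The COALESCE query is my own suggestion, not something from the thread, and it handles only one level of inheritance, matching the Orc / Orc Thumper example:

```python
import sqlite3

# Table and column names follow the post; the data is the example data.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE things (thingID INTEGER, baseThingID INTEGER, thingName TEXT);
CREATE TABLE properties (thingID INTEGER, propertyName TEXT, value REAL);
INSERT INTO things VALUES (1, NULL, 'Orc');
INSERT INTO things VALUES (2, 1, 'Orc Thumper');
INSERT INTO properties VALUES (1, 'height', 1.5);
INSERT INTO properties VALUES (2, 'weight', 200.0);
""")

def lookup(thing_id, name):
    # Prefer the thing's own property row; otherwise fall back to the
    # row belonging to its base thing.
    row = con.execute("""
        SELECT COALESCE(
            (SELECT value FROM properties
              WHERE thingID = :id AND propertyName = :name),
            (SELECT p.value FROM properties p
              JOIN things t ON p.thingID = t.baseThingID
             WHERE t.thingID = :id AND p.propertyName = :name))
    """, {"id": thing_id, "name": name}).fetchone()
    return row[0]

# lookup(2, 'height') falls back to the Orc row; lookup(2, 'weight')
# finds the Thumper's own override.
```

A recursive CTE could extend this to arbitrary inheritance depth, at the cost of a more involved query.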
  9. Quote:Original post by suika
Define a base orc class, then extend on it to define sub-classes. That should be obvious though.

In the database, not as code.
  10. Let's say I'm programming an RPG and intend to store my data in a database. There's an Orc race. Now, orcs normally have: Height: 1.5 metres; Weight: 120 kilograms; Maximum movement speed: 3 ...and a number of other standard 'orcy' properties. I also want to have a stronger, larger, slower orc. Let's call these Orc Thumpers. I want them to have a different height, weight and maximum movement speed, but I also want them to stay in sync with the standard 'orcy' properties, so I don't have to work out all the different types of Orcs and update them all when I change one of these. How would you recommend I store this data?
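For comparison, the behaviour being asked for (override a few properties, inherit the rest live from the base) looks like this in memory. This is a Python ChainMap sketch with made-up Thumper values, not a database design:

```python
from collections import ChainMap

# Base Orc properties, with the values from the post.
orc = {"height": 1.5, "weight": 120, "max_movement_speed": 3,
       "diet": "anything"}

# An Orc Thumper overrides three properties (hypothetical values) and
# falls through to the Orc for everything else.
orc_thumper = ChainMap(
    {"height": 1.9, "weight": 200, "max_movement_speed": 2}, orc)

orc["diet"] = "meat"  # edit the base, and the Thumper sees the change
```

Whatever the storage model, this fall-through-to-parent lookup is the behaviour the database schema has to reproduce.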
  11. Something I have been thinking about recently is one or more users doing content development against a shared central server, where this content would be tested by clients running against the same server. One example is the creation of item templates. This might, for example, be a kind of sword a user might come across, or a kind of trade good which players might be able to sell.

One way is defining them with code. The data from the template object is then propagated either to all clients, or just to the ones where instances of that template are present. There's an advantage to this approach in that the code content definitions are most likely checked into the same repository as the matching game logic for a build, which makes it easier to ensure builds can be remade as time passes. Multiverse and the Torque MMO Kit both take the code template definition approach.

Then there is defining them in a database. When modifying content, designers would work against the same development/authoring servers which run against the database backend. For builds, you might serialise the tables or views and bundle them with those builds of the live servers. However, when the time comes to edit content for a released build, each approach faces it differently. In the code-driven approach, you might work in the relevant build branch, and because the matching data is there with the game logic, you can just build those together after the changes. In the data-driven approach, you might find that the content server has moved on to incorporate changes for the next release.

But the aspect which interests me the most is live use of content authoring changes. What if your server doesn't send template specifications to the clients, but rather just says that they should know the template already and, hey, here's a new instance of it? That's a potentially ideal situation, because then you reduce the bandwidth used.

I know WoW doesn't do that, and one of the arguments is the fun of discovering this content versus being suckered into browsing a website which catalogues it, having perhaps extracted it from the client datasets. But then again, who really gets to discover this but the hardcore guilds? So whether it is worth implementing this way just to limit discovery is questionable.

Anyway, back to what I consider the interesting stuff. Here is a potential problem which, for me, illustrates the complexity of live content development and testing. It's not an ideal situation, but it covers clean recovery from a worst-case scenario. Your clients have serialised template data. The server just sends instance data. It doesn't keep an exacting level of detail about what a given client knows about, but rather bases it abstractly on proximity. In this case, the clients are connecting to the authoring server. A content developer looks at a dungeon instance and realises he doesn't want some encounter there, and furthermore he no longer wants the monster template variant he specifically created for it. So the template variant usages are deleted from the encounter, the encounter is deleted, and then the template variant itself is deleted. Now, there are two places the template variant might still be used: the server-side encounter which has been spawned and controlled within the dungeon, with instances of the entity template variant, and the client-side representation of those instances. Any reference to the template past the point of its deletion, by the client or server, may cause errors when joins are done against the template table.

In an ideal game authoring engine, the live state would be cleaned up. But who has time to write an ideal engine? You just write the best you can and then refactor it as you develop it further. Is it possible to really design this ability into the engine from scratch? Are there industry best practices for writing engines to handle this? Anyone have any experience or references to articles, talks, and similar media relating to systems like this?

Anyway, I've enjoyed writing this post but don't have time to reread and edit it before I hit the sack. Hope I haven't written anything too incoherent :-)
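The cleanup an ideal engine would do can at least be sketched: before a template variant is deleted, every live instance referencing it is despawned, so no later join against the template table can dangle. All names here are hypothetical, a minimal sketch rather than a real engine design:

```python
# A toy authoring-server store: templates plus the live instances
# spawned from them.
class TemplateStore:
    def __init__(self):
        self.templates = {}   # template_id -> template data
        self.instances = {}   # instance_id -> template_id

    def spawn(self, instance_id, template_id):
        self.instances[instance_id] = template_id

    def delete_template(self, template_id):
        # Find and despawn every live instance of the template first,
        # then remove the template itself; return the despawned ids so
        # connected clients can be told to drop them too.
        doomed = [iid for iid, tid in self.instances.items()
                  if tid == template_id]
        for iid in doomed:
            del self.instances[iid]
        self.templates.pop(template_id, None)
        return doomed
```

In a real engine the despawn notifications would have to reach clients before the template row disappears, which is exactly the ordering problem the post is worrying about.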