Derakon

Members
  • Content count: 842
  • Joined
  • Last visited

Community Reputation: 456 Neutral

About Derakon
  • Rank: Advanced Member
  1. Bumping this for the first and only time -- I posted late on a Friday, which I recognize isn't exactly optimal for getting help. Does anyone perhaps know of a guide to using framebuffers for pre-rendering 2D images or something along those lines? I'm sure I'm just missing something minor and trivial, but despite my best efforts I'm stuck.
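For reference, the usual EXT_framebuffer_object render-to-texture sequence is sketched below. This is not code from the thread; render_to_texture and draw_scene are made-up names, and the texture and framebuffer ids are assumed to have been created earlier with glGenTextures and glGenFramebuffersEXT.

    import OpenGL.GL as GL
    import OpenGL.GL.EXT.framebuffer_object as FBO

    def render_to_texture(fbo, texture, width, height, draw_scene):
        # Attach the texture as the FBO's color buffer.
        FBO.glBindFramebufferEXT(FBO.GL_FRAMEBUFFER_EXT, fbo)
        FBO.glFramebufferTexture2DEXT(FBO.GL_FRAMEBUFFER_EXT,
                FBO.GL_COLOR_ATTACHMENT0_EXT, GL.GL_TEXTURE_2D, texture, 0)
        status = FBO.glCheckFramebufferStatusEXT(FBO.GL_FRAMEBUFFER_EXT)
        if status != FBO.GL_FRAMEBUFFER_COMPLETE_EXT:
            raise RuntimeError("framebuffer incomplete: %s" % status)
        # The FBO is its own render target: give it its own viewport/projection.
        GL.glViewport(0, 0, width, height)
        GL.glMatrixMode(GL.GL_PROJECTION)
        GL.glLoadIdentity()
        GL.glOrtho(0, width, 0, height, -1, 1)   # note: bottom-left origin
        GL.glMatrixMode(GL.GL_MODELVIEW)
        GL.glLoadIdentity()
        draw_scene()
        # Unbind so subsequent drawing goes back to the window.
        FBO.glBindFramebufferEXT(FBO.GL_FRAMEBUFFER_EXT, 0)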
  2. EDIT: The reason for the offset is that the pre-rendered texture was being drawn upside-down. I'm still not certain why this is the case, but it explains the offset. I was able to determine this by modifying the textures so they had a recognizable sequence. /EDIT

I have a program that is used to get a high-level view of the contents of microscope slides. We take a 512x512 image of one small area of the slide, move the slide over, take another image, and repeat until we can generate a mosaic that covers a comparatively large area. These mosaics are composed of thousands of individual tiles, and the user can pan and zoom all over. I'm working on improving its performance.

When I first saw this system, it wasn't doing any optimizations. Every tile was drawn every frame, even when those tiles were out of the view. So my first change was to cull out-of-view tiles. This is great so long as the user is fairly well zoomed in; at zoom levels of 0.1 or so you physically can't fit enough tiles onto the screen to cause noticeable slowdown. But if you zoom out further, then more and more tiles are in view, and simply rendering 4k tiles, even if they each only take up 50 pixels, causes significant slowdown.

Given this, I thought it'd make sense to pre-render the mosaic tiles at a fixed, zoomed-out scale. For example, if I have a 10x10 grid of 512x512 tiles and I shrank them all down by a factor of 10, the result would fit onto a single 512x512 texture. Thus, any time I'm zoomed out by at least a factor of 10, I can render a single full-size texture instead of 100 downscaled textures. This should significantly improve performance.

I wrote up a test program to get the details hammered out, and I'm running into some trouble. This is my first time working with framebuffers, so I guess that's to be expected. I'm starting out with only a single pre-rendered texture that covers the entire view; thus, the pre-rendering logic should consist of "grab all tiles that fit entirely within the view and render them to the pre-rendered texture". My test pattern is a 12x12 grid of textures, 11x11 of which fit onto the canvas and the rest of which overlap the edge. I'm ending up with an offset between the tiles that fit and the ones that don't. (Ignore the border on three edges of the screenshot, which is caused by sloppy screenshotting technique.)

As far as I can tell, the pre-rendered texture is using exactly the same rendering parameters as the normal rendering, so why the offset of 3/4ths of a tile's height?

Here's the source code for my program. The drawing code is in onPaint (for rendering tiles that aren't pre-rendered, and for rendering the pre-rendered tiles) and in refreshCachedTextures (for pre-rendering tiles).

    import numpy
    import OpenGL.GL as GL
    import OpenGL.GL.EXT.framebuffer_object as Framebuffer
    import random
    import traceback
    import wx
    import wx.glcanvas

    ## Simple module to provide an OpenGL canvas for the testapp.

    ## Size of one texture in the "mosaic", in pixels
    SIZE = 384
    ## Zoom factor
    ZOOM = .1
    ## Number of macro tiles to use, per edge (so total tiles = x^2). Macro tiles
    # are sized to completely cover the canvas with no overlap.
    NUMTILES = 1

    class TestCanvas(wx.glcanvas.GLCanvas):
        def __init__(self, parent, size, id = -1, *args, **kwargs):
            wx.glcanvas.GLCanvas.__init__(self, parent, id, size = size, *args, **kwargs)
            (self.width, self.height) = size
            self.canvasWidth = int(self.width / ZOOM)
            self.canvasHeight = int(self.height / ZOOM)
            ## Whether or not we have done some one-time-only logic.
            self.haveInitedGL = False
            ## Whether or not we should try to draw
            self.shouldDraw = True
            ## Framebuffer object for prerendered textures
            self.buffer = Framebuffer.glGenFramebuffersEXT(1)
            ## Array of textures to parcel up the view.
            self.cachedTextures = []
            for x in xrange(NUMTILES):
                self.cachedTextures.append([])
                for y in xrange(NUMTILES):
                    self.cachedTextures[x].append(
                            self.makeTexture(self.width / NUMTILES,
                                             self.height / NUMTILES))
            self.timer = wx.Timer(self, -1)
            ## Maps all textures to their locations.
            self.allTextures = {}
            ## Maps new textures to their locations.
            self.newTextures = {}
            ## Maps textures that can't fit into one of self.cachedTextures to
            # their locations.
            self.outlierTextures = {}
            ## Data used to generate textures - a simple horizontal gradient.
            self.texArray = numpy.array(range(SIZE) * SIZE) / float(SIZE)
            self.texArray.shape = SIZE, SIZE
            self.texArray = numpy.array(self.texArray, dtype = numpy.float32)

            wx.EVT_PAINT(self, self.onPaint)
            wx.EVT_SIZE(self, lambda event: event)
            wx.EVT_ERASE_BACKGROUND(self, lambda event: event) # Do nothing, to avoid flashing
            wx.EVT_TIMER(self, self.timer.GetId(), self.onTimer)
            self.timer.Start(100) # Add a tile 10 times per second.
            random.seed(0)

            # Create a grid of tiles to start off.
            for x in xrange(0, self.canvasWidth, SIZE * 3 / 2):
                for y in xrange(0, self.canvasHeight, SIZE * 3 / 2):
                    self.onTimer(shouldRefresh = False, pos = (x, y))

        ## Set up some set-once things for OpenGL.
        def initGL(self):
            (self.width, self.height) = self.GetClientSizeTuple()
            self.SetCurrent()
            GL.glClearColor(0.0, 0.0, 0.0, 0.0)

        ## Add a tile.
        def onTimer(self, event = None, shouldRefresh = True, pos = None):
            if pos is None:
                x = random.uniform(0, self.canvasWidth)
                y = random.uniform(0, self.canvasHeight)
                return
            else:
                (x, y) = pos
            texture = self.makeTexture(SIZE, SIZE, self.texArray.tostring())
            self.newTextures[texture] = (x, y)
            if len(self.newTextures) % 10 == 0:
                print "Made texture", (len(self.allTextures) + len(self.newTextures))
            if shouldRefresh:
                self.Refresh()

        ## Create a texture
        def makeTexture(self, width, height, data = None):
            if data is None:
                data = numpy.ones((width, height), dtype = numpy.float32)
            texture = GL.glGenTextures(1)
            GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
            GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
            GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)
            GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGB, width, height, 0,
                            GL.GL_LUMINANCE, GL.GL_FLOAT, data)
            return texture

        def onPaint(self, event = None):
            if not self.shouldDraw:
                return
            try:
                if not self.haveInitedGL:
                    self.initGL()
                    self.haveInitedGL = True
                if len(self.newTextures) > 10:
                    self.refreshCachedTextures()
                dc = wx.PaintDC(self)
                self.SetCurrent()
                GL.glViewport(0, 0, self.width, self.height)
                GL.glMatrixMode(GL.GL_PROJECTION)
                GL.glLoadIdentity()
                GL.glOrtho(0, self.width, self.height, 0, 0, 1)
                GL.glScalef(ZOOM, ZOOM, 1)
                GL.glMatrixMode(GL.GL_MODELVIEW)
                GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
                GL.glEnable(GL.GL_TEXTURE_2D)
                # Draw the cached textures
                self.drawTexture(self.cachedTextures[0][0], (0, 0),
                                 (self.canvasWidth, self.canvasHeight))
                # macroTileWidth = self.canvasWidth / NUMTILES
                # macroTileHeight = self.canvasHeight / NUMTILES
                # for x in xrange(NUMTILES):
                #     for y in xrange(NUMTILES):
                #         texture = self.cachedTextures[x][y]
                #         pos = (x * macroTileWidth, y * macroTileHeight)
                #         self.drawTexture(texture, pos,
                #                 (self.canvasWidth / float(NUMTILES),
                #                  self.canvasHeight / float(NUMTILES))
                #         )
                #         GL.glColor3f(1, 0, 0)
                #         GL.glBegin(GL.GL_LINE_STRIP)
                #         for offX, offY in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
                #             GL.glVertex2f(pos[0] + offX * macroTileWidth,
                #                           pos[1] + offY * macroTileHeight)
                #         GL.glEnd()
                #         GL.glColor3f(1, 1, 1)
                for texture, pos in dict(self.outlierTextures, **self.newTextures).iteritems():
                    self.drawTexture(texture, pos, (SIZE, SIZE))
                GL.glFlush()
                self.SwapBuffers()
            except Exception, e:
                print "Exception:", e
                traceback.print_exc()
                self.shouldDraw = False

        def drawTexture(self, texture, pos, size):
            GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
            GL.glBegin(GL.GL_QUADS)
            for offX, offY in [(0, 0), (1, 0), (1, 1), (0, 1)]:
                GL.glTexCoord2f(offX, offY)
                GL.glVertex2f(pos[0] + offX * size[0],
                              pos[1] + offY * size[1])
            GL.glEnd()
            GL.glColor3f(1, 0, 0)
            GL.glBegin(GL.GL_LINE_STRIP)
            for offX, offY in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
                a = -1 if offX else 1
                b = -1 if offY else 1
                GL.glVertex2f(pos[0] + offX * size[0] + a,
                              pos[1] + offY * size[1] + b)
            GL.glEnd()
            GL.glColor3f(1, 1, 1)

        ## Render new tiles to self.cachedTextures so we don't have to render
        # them individually.
        def refreshCachedTextures(self):
            # Determine which new textures fit into each of our cached textures.
            for i in xrange(NUMTILES):
                minX = i / float(NUMTILES) * self.canvasWidth
                maxX = minX + self.canvasWidth / float(NUMTILES)
                for j in xrange(NUMTILES):
                    minY = j / float(NUMTILES) * self.canvasHeight
                    maxY = minY + self.canvasHeight / float(NUMTILES)
                    target = self.cachedTextures[i][j]
                    targetTextures = {}
                    # Find new textures that belong in this one's area.
                    for texture, (x, y) in self.newTextures.iteritems():
                        if x > minX and x < maxX and y > minY and y < maxY:
                            # Upper-left corner is in us. Check if lower-right
                            # matches; if not, this texture will never fit and
                            # goes into self.outlierTextures
                            if x + SIZE < maxX and y + SIZE < maxY:
                                targetTextures[texture] = (x, y)
                            else:
                                self.outlierTextures[texture] = (x, y)
                    if targetTextures:
                        # We have new textures to render in this block.
                        Framebuffer.glBindFramebufferEXT(
                                Framebuffer.GL_FRAMEBUFFER_EXT, target)
                        Framebuffer.glFramebufferTexture2DEXT(
                                Framebuffer.GL_FRAMEBUFFER_EXT,
                                Framebuffer.GL_COLOR_ATTACHMENT0_EXT,
                                GL.GL_TEXTURE_2D, target, 0)
                        GL.glViewport(i * self.width / NUMTILES,
                                      j * self.height / NUMTILES,
                                      self.width / NUMTILES, self.height / NUMTILES)
                        GL.glMatrixMode(GL.GL_PROJECTION)
                        GL.glLoadIdentity()
                        GL.glOrtho(0, self.width, self.height, 0, 0, 1)
                        GL.glScalef(ZOOM, ZOOM, 1)
                        GL.glMatrixMode(GL.GL_MODELVIEW)
                        GL.glEnable(GL.GL_TEXTURE_2D)
                        for texture, pos in targetTextures.iteritems():
                            self.drawTexture(texture, pos, (SIZE, SIZE))
            Framebuffer.glBindFramebufferEXT(Framebuffer.GL_FRAMEBUFFER_EXT, 0)
            self.allTextures.update(self.newTextures)
            self.newTextures = {}

    class DemoFrame(wx.Frame):
        def __init__(self):
            wx.Frame.__init__(self, parent = None, size = (700, 700))
            sizer = wx.BoxSizer(wx.HORIZONTAL)
            sizer.Add(TestCanvas(self, (700, 700)))
            self.SetSizerAndFit(sizer)

    class App(wx.App):
        def OnInit(self):
            import sys
            self.frame = DemoFrame()
            self.frame.Show()
            self.SetTopWindow(self.frame)
            return True

    app = App(redirect = False)
    app.MainLoop()

Incidentally, I discovered an optical illusion while working on this.

[Edited by - Derakon on November 15, 2010 2:28:48 PM]
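For what it's worth, the upside-down result mentioned in the EDIT is consistent with OpenGL's texture origin being the bottom-left corner while the program's glOrtho call puts y = 0 at the top of the window: the content rendered into the FBO-attached texture ends up stored bottom-up relative to the texture coordinates drawTexture() uses. One possible fix, sketched here as a hypothetical variant of drawTexture() (not necessarily what the real program ended up doing), is to flip the t coordinate when drawing textures that came out of the FBO:

    # Hypothetical variant of drawTexture() for textures rendered via the FBO:
    # same geometry, but the t coordinate is flipped so the bottom-up texture
    # comes out right-side-up under the top-down projection.
    def drawPrerenderedTexture(self, texture, pos, size):
        GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
        GL.glBegin(GL.GL_QUADS)
        for offX, offY in [(0, 0), (1, 0), (1, 1), (0, 1)]:
            GL.glTexCoord2f(offX, 1 - offY)   # flipped vertically
            GL.glVertex2f(pos[0] + offX * size[0],
                          pos[1] + offY * size[1])
        GL.glEnd()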
  3. I'd go with usernameLabel instead of labelUsername, but I don't have a strongly-articulated reason for doing so. It just feels more natural to me.
  4. It's a good call, though, and I've made the suggested change. Thanks! I don't think it'll make that big a difference in my runtime, but why be sloppy? However, your suggested change would not have fixed the problem, since the problem was excessive lookups into edgeQueue, not sortedHull. Though, come to think of it, there's no need for edgeQueue to be a list at all; sets can serve as queues just fine so long as you aren't picky about the order you process the queue in, and I'm not. So I can remove the redundant queueSet variable and save on memory!
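As a side note, the "set as an unordered work queue" pattern boils down to something like this sketch (illustrative only; initial_edges and process() are stand-ins, not names from the project):

    # set.pop() removes and returns an arbitrary element, and membership
    # checks stay O(1), so no separate "already queued" set is needed.
    queue = set(initial_edges)            # initial_edges is a stand-in
    while queue:
        edge = queue.pop()                # arbitrary order -- fine here
        for new_edge in process(edge):    # process() is a stand-in for the real work
            queue.add(new_edge)           # re-adding a duplicate is a no-op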
  5. EDIT: Never mind! Right after writing this, I discovered an optimization that cut my runtime down by a factor of 8. It's at the end of this post if you want to try to figure it out yourself. :)

I've written my own implementation of the S-hull Delaunay triangulation algorithm in pure Python, as part of a procedural mapgen system. The algorithm works, but it's about 700x slower than the reference implementation. Granted, the reference implementation is 2800 lines of tightly-implemented and badly-documented C code, while I'm clocking in at 400 lines of Python with a decent level of documentation, so obviously there are going to be some tradeoffs there. But before I start porting my code to Cython (i.e. precompiled Python), I want to make certain I'm not missing any major algorithmic improvements.

According to the profiler, I'm spending the vast majority of my time in one function, makeDelaunay(). This is the function that finds triangle pairs in the graph that do not satisfy the Delaunay condition and flips their shared edge. The profiler says that of 8.706s total CPU time, 7.756s are spent in this function, but practically no time (<0.5s) is spent in the functions that it calls. I've looked over the function and I don't see any clear optimizations I could make to speed things up. But maybe you all have some better ideas?

    ## Given that we're done making a triangulation, make that triangulation
    # into a Delaunay triangulation by flipping the shared edge of any two
    # adjacent triangles that are not Delaunay.
    # ( http://en.wikipedia.org/wiki/Delaunay_triangulation#Visual_Delaunay_definition:_Flipping )
    def makeDelaunay(self):
        # These are the edges that we know will never need to be flipped, as
        # they are on the perimeter of the graph.
        hull = self.constructHullFrom(Vector2D(-1, -1), self.nodes)
        sortedHull = []
        for i, vertex in enumerate(hull):
            # Ensure vertices are in a consistent ordering so we can do
            # lookups on the hull later.
            tmp = [vertex, hull[(i + 1) % len(hull)]]
            tmp.sort(sortVectors)
            sortedHull.append(tuple(tmp))
        sortedHull = set(sortedHull)
        edgeQueue = []
        ## Add all non-exterior edges to the edge queue.
        for sourceNode, targetNodes in self.edges.iteritems():
            for targetNode in targetNodes:
                tmp = [sourceNode, targetNode]
                tmp.sort(sortVectors)
                tmp = tuple(tmp)
                if tmp not in sortedHull and tmp not in edgeQueue:
                    # Edge is interior edge.
                    edgeQueue.append(tmp)
        # Edges that are currently in the queue, so we can avoid adding
        # redundant edges.
        queueSet = set(edgeQueue)
        while edgeQueue:
            (v1, v2) = edgeQueue.pop(0)
            queueSet.remove((v1, v2))
            n1, n2 = self.getNearestNeighbors(v1, v2)
            if not self.isDelaunay(v1, v2, n1, n2):
                # Triangles are not Delaunay; flip them.
                if v2 in self.edges[v1]:
                    self.edges[v1].remove(v2)
                if v1 in self.edges[v2]:
                    self.edges[v2].remove(v1)
                self.edges[n1].add(n2)
                self.edges[n2].add(n1)
                for vertPair in [(v1, n1), (v1, n2), (v2, n1), (v2, n2)]:
                    tmp = list(vertPair)
                    tmp.sort(sortVectors)
                    tmp = tuple(tmp)
                    if tmp not in sortedHull and tmp not in queueSet:
                        edgeQueue.append(tmp)
                        queueSet.add(tmp)

Some notes:

  * isDelaunay() examines the inner angles of the triangle pair, and returns True if the sum of the inner angles is less than pi.
  * constructHullFrom() generates a convex hull of the graph; it's also used in other parts of the program.
  * self.edges is a dict (hash map) that maps nodes to sets of nodes. In other words, it's an adjacency list.
  * sortVectors is a function that orders vectors in an arbitrary but consistent way.

I appreciate any insight you care to share!

EDIT: So, the optimization that fixed the problem? I'd assumed that most of my time was spent in the "while queue is not empty, pop an edge and examine it" code. Turns out that's not the case! Most of my runtime was spent preparing the queue. In particular, the problematic line was "if tmp not in sortedHull and tmp not in edgeQueue". Of course, edgeQueue is a list, which means that for each new edge I added to the queue, I was examining all the existing edges to see if they were the same edge -- n^2 operations! I simply moved the creation of queueSet up a few lines so that I could do a hash lookup instead of a list lookup, and now things are nice and speedy!
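In code form, the fix described in that EDIT amounts to building queueSet before the queue is populated, so the duplicate check becomes a hash lookup rather than a list scan. A sketch using the same names as the snippet above; candidateEdges here is just a stand-in for the edge-generation loop:

    # Before: for each candidate edge, "tmp not in edgeQueue" scans the whole
    # list, which is O(n) per edge and O(n^2) overall.
    edgeQueue = []
    for tmp in candidateEdges:                       # candidateEdges: stand-in name
        if tmp not in sortedHull and tmp not in edgeQueue:
            edgeQueue.append(tmp)

    # After: keep queueSet in sync from the start, so the same test is O(1).
    edgeQueue = []
    queueSet = set()
    for tmp in candidateEdges:
        if tmp not in sortedHull and tmp not in queueSet:
            edgeQueue.append(tmp)
            queueSet.add(tmp)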
  6. I'll look into FBOs, thanks. To further explain what I was talking about: Say I have a map that is composed of a 100x100 grid of 50x50-pixel blocks, making for an overall map size of 5000x5000 pixels. With the SDL, I had the ability to generate a 5000x5000 image of that map, by creating a 5000x5000 Surface, setting the camera at (2500, 2500), and passing the surface down my render chain. With OpenGL, I have a fixed "surface" of whatever my output resolution is (currently 800x600). I can use that surface to render the entire map by putting the camera at (2500, 2500, some moderately large Z value), but that downscales the entire map to fit into the 800x600 output resolution. I don't want to draw a close-up version for this; I want to have a gigantic 1:1 scale image of the entire game map. Think something you could print out and put on your wall if you were so inclined. This isn't something I'd use for normal gameplay, especially given the sizes involved. But it's useful for debugging work and for general interest. (Incidentally, I have a couple of existing images online here. These are 1/25th the size of the originals (10x10 tiles instead of 50x50) just so they don't take ages to download)
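Rendering a 5000x5000 image in one pass can bump into GL_MAX_TEXTURE_SIZE and viewport limits, so one common workaround is to render the map a screen-sized chunk at a time, read each chunk back, and stitch the pieces together. A rough sketch of that idea, not code from the project, assuming a draw_region(x, y, w, h) callback that draws just that rectangle of the map at 1:1 scale:

    import numpy
    import OpenGL.GL as GL

    def render_map_in_chunks(draw_region, map_w, map_h, chunk_w, chunk_h):
        result = numpy.zeros((map_h, map_w, 3), dtype=numpy.uint8)
        GL.glPixelStorei(GL.GL_PACK_ALIGNMENT, 1)   # rows may not be 4-byte aligned
        for y in range(0, map_h, chunk_h):
            for x in range(0, map_w, chunk_w):
                w = min(chunk_w, map_w - x)
                h = min(chunk_h, map_h - y)
                GL.glViewport(0, 0, w, h)
                draw_region(x, y, w, h)
                # PyOpenGL may return bytes or a numpy array; both work here.
                data = GL.glReadPixels(0, 0, w, h, GL.GL_RGB, GL.GL_UNSIGNED_BYTE)
                chunk = numpy.frombuffer(data, dtype=numpy.uint8).reshape(h, w, 3)
                # glReadPixels returns rows bottom-up; flip before pasting.
                result[y:y + h, x:x + w] = chunk[::-1]
        return result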
  7. I recently transitioned my game from using SDL to OpenGL for rendering in-game. One of the things that was broken by this was a "save a large image of the map" function I'd written. In SDL, my draw functions all accepted a Surface to draw to; if I wanted to draw the entire map, I simply made a huge Surface, passed it to the relevant draw functions, and then saved it. This doesn't work with OpenGL, since all draw functions implicitly draw to the main display. I don't want to preserve the SDL render pipeline just for this one feature, but at the same time I don't really want to lose the feature either. I can render the entire map simply by changing where I put the camera, of course, but then it'll all be scaled down to the size of the display. Is there some way to tell OpenGL "render this scene to a notional display of X by Y pixels, and save it to this filename"? Ideally without resizing the actual display, of course.
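For reference, the usual answer to this question is to render into an offscreen FBO of the desired size and read the pixels back with glReadPixels. The sketch below uses made-up names (save_offscreen_render, draw_scene) rather than anything from the actual game, assumes Pillow is available for the file writing, and is limited to sizes the driver allows (GL_MAX_TEXTURE_SIZE); for larger images, the chunk-and-stitch approach above applies.

    import OpenGL.GL as GL
    import OpenGL.GL.EXT.framebuffer_object as FBO
    from PIL import Image, ImageOps   # Pillow, assumed to be available

    def save_offscreen_render(draw_scene, width, height, filename):
        # Create a texture to render into and attach it to a fresh FBO.
        texture = GL.glGenTextures(1)
        GL.glBindTexture(GL.GL_TEXTURE_2D, texture)
        GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
        GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGB, width, height, 0,
                        GL.GL_RGB, GL.GL_UNSIGNED_BYTE, None)
        fbo = FBO.glGenFramebuffersEXT(1)
        FBO.glBindFramebufferEXT(FBO.GL_FRAMEBUFFER_EXT, fbo)
        FBO.glFramebufferTexture2DEXT(FBO.GL_FRAMEBUFFER_EXT,
                FBO.GL_COLOR_ATTACHMENT0_EXT, GL.GL_TEXTURE_2D, texture, 0)
        GL.glViewport(0, 0, width, height)
        draw_scene()                      # placeholder for the real render call
        GL.glPixelStorei(GL.GL_PACK_ALIGNMENT, 1)
        data = GL.glReadPixels(0, 0, width, height, GL.GL_RGB, GL.GL_UNSIGNED_BYTE)
        FBO.glBindFramebufferEXT(FBO.GL_FRAMEBUFFER_EXT, 0)
        if not isinstance(data, bytes):   # PyOpenGL may hand back a numpy array
            data = data.tobytes()
        image = Image.frombytes("RGB", (width, height), data)
        # OpenGL's origin is bottom-left; image files are written top-down.
        ImageOps.flip(image).save(filename)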
  8. OpenGL won't handle input or sound or physics for you, no. But there's no reason why you can't use other libraries to handle those specific sub-components and use OpenGL for the graphics.
  9. I have two main suggestions for you. The first is to start learning to draw. That basically means practicing drawing, though if you can take an introductory drawing course at your college, that'd be an excellent idea. You don't need to draw well to draw usefully, but you do need to be able to get the gist of your concept down on paper. The second is to start learning to make 3D models. Download
  10. Gah, now I see it too. The drop shadow beneath the header bar is also missing between that point and the Features button.
  11. Quote: Original post by Denzin
          Quote: Original post by Derakon
              There might well already be a library you can use that will let you bind functions to the console. For example, my Python/PyGame game was able to use pyconsole very easily; just initialize the console, tell it what functions it needs to expose, and choose a key to activate/deactivate it. Take a look around and you may find this problem has already been solved for you.
          me and my team are actually using c++/lua for our project, although that python console will definitely help on later projects.

I should note that I've made some moderately sizable changes to that console to get it better-integrated with how I'm processing user input, and to draw using OpenGL instead of SDL. But the basic framework hasn't changed, so I was just able to ninja in, find the code I needed to tweak, change it, and get out again. Anyway, my more general point is "This sounds like a problem that someone else has already solved in a general fashion for your language of choice. Why reinvent the wheel?"
  12. There might well already be a library you can use that will let you bind functions to the console. For example, my Python/PyGame game was able to use pyconsole very easily; just initialize the console, tell it what functions it needs to expose, and choose a key to activate/deactivate it. Take a look around and you may find this problem has already been solved for you.
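The general pattern being described (handing the console a table of named functions) can be sketched in a few lines; this is an illustration of the idea only, not pyconsole's actual API, and every name in it is made up:

    # A tiny command registry: the console only needs a dict of name -> callable
    # plus a dispatcher for whatever the user types.
    commands = {}

    def expose(func):
        """Register a function under its own name."""
        commands[func.__name__] = func
        return func

    @expose
    def spawn(kind, count="1"):
        print("spawning %s x%s" % (kind, count))

    def run_console_line(line):
        name, *args = line.split()
        if name in commands:
            commands[name](*args)
        else:
            print("unknown command: %s" % name)

    run_console_line("spawn goblin 3")   # -> spawning goblin x3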
  13. Loading resources could be highly parallelizable, depending on how much CPU work is needed to process a given resource before it's usable. For example, if your texture data is compressed, then resource loading needs both hard drive and CPU time. In that situation, threads could improve loading speed by letting one thread use the CPU while another is blocked on the hard drive. However, the basic concept of a loading bar is as Alvaro described.
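A bare-bones version of that idea (worker threads pulling file paths off a queue, with a counter a loading bar could poll) might look like the sketch below. decompress() is a stand-in for whatever CPU-side processing a resource needs, and in CPython the GIL still limits pure-CPU parallelism even though overlapping CPU work with disk waits helps:

    import threading
    import queue

    def load_resources(paths, num_workers=2):
        work = queue.Queue()
        for path in paths:
            work.put(path)
        progress = {"done": 0, "total": len(paths)}
        lock = threading.Lock()
        results = {}

        def worker():
            while True:
                try:
                    path = work.get_nowait()
                except queue.Empty:
                    return
                with open(path, "rb") as handle:   # blocks on the disk
                    data = handle.read()
                results[path] = decompress(data)   # decompress() is a stand-in
                with lock:
                    progress["done"] += 1          # a loading bar can poll done/total

        threads = [threading.Thread(target=worker) for _ in range(num_workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return results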
  14. Well, what if you host game mods on your website, and someone sneaks in a mod that does nasty things to your legitimate players? It's not just the person playing the game you have to worry about. Anyway, this isn't necessarily relevant to your specific case, but it pays to be aware of vulnerabilities, regardless of whether or not they're worth addressing.
  15. Loading code as a config file can be a bad idea if you don't trust your users, since that code could do anything when you load it. Basically, it gives your users an obvious entry point to start mucking with your program's execution. My game is open-source, so I don't really have to worry about that kind of thing -- if my users want to muck with things, then they can do so freely. There's still a certain amount of worry that someone could make a malicious plugin that messes with the player's files when imported, but I currently judge that risk to be pretty low (and I'm confident it can be dealt with). I put all of my configuration into Python dicts that can be serialized using the pretty-printer, making human-readable config files that can be trivially loaded and printed by the program. In addition to mucking with your path, you can also import a module dynamically using Python's __import__ function, which accepts a dotted module name and, optionally, a list of symbol names to import from that module.
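The dict-plus-pretty-printer approach round-trips with just the standard library. A minimal sketch (the settings and the file name here are made up), using ast.literal_eval so that reading the file back cannot execute arbitrary code:

    import ast
    import pprint

    settings = {
        "resolution": (800, 600),
        "keybindings": {"jump": "space", "fire": "ctrl"},
    }

    # Write the config out as a human-readable Python literal...
    with open("config.txt", "w") as handle:
        handle.write(pprint.pformat(settings))

    # ...and load it back. literal_eval only accepts literals, so a config file
    # can't smuggle in executable code the way an imported module could.
    with open("config.txt") as handle:
        loaded = ast.literal_eval(handle.read())

    assert loaded == settings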