
bluntman

  1. Yeah, I came to the same conclusions as you after looking into CEF for a few days. The lag and the cost of pixel transfer make it fairly limited for all but the simplest UI. The system you have is exactly what I would like, but unfortunately way out of my price range for a hobby project. I also looked briefly at Awesomium, including downloading and running their demos, and as far as I can tell they barely offer anything beyond what CEF offers. They still use multiple processes, and can't handle rendering more than one fullscreen target at a time. My solution is probably going to be rolling my own layout and rendering system, with JavaScript V8 integrated: start with something simple and then develop it in parallel with the rest of my project. Obviously it won't cover all HTML5 and CSS3 features, but then they aren't all required for a game UI. I do think Adobe have it right with Flash and Edge Animate, with their "everything is animatable, and all actions are animations" approach. Once I started thinking of UI as a state graph with animations along the edges, it started to seem like a very powerful and coherent approach (although maybe it is an oversimplification for some scenarios).
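To make the "state graph with animations along the edges" idea concrete, here is a minimal C++ sketch of that model. All names here are hypothetical (not from any real library): states are nodes, each edge carries an animation duration, and input handling only ever requests a target state.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Hypothetical sketch: UI as a state graph where every transition is an
// animation. States are nodes; each directed edge carries a duration.
struct UiStateGraph {
    // (from, to) -> animation duration in seconds
    std::map<std::pair<std::string, std::string>, float> edges;
    std::string current = "main_menu";
    std::string target  = "main_menu";
    float elapsed = 0.0f;  // time spent in the active transition

    // Request a new state; ignored if no edge exists from the current state.
    void request(const std::string& to) {
        if (edges.count({current, to})) { target = to; elapsed = 0.0f; }
    }

    // Advance the active transition; returns true while still animating.
    bool tick(float dt) {
        if (current == target) return false;
        elapsed += dt;
        if (elapsed >= edges[{current, target}]) current = target;
        return current != target;
    }
};
```

A real version would also evaluate tween curves along the edge and allow interruption, but the core appeal is visible even here: every screen change is forced through an animated edge, so there is no "teleporting" UI state.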
  2. What I did today, which may be of some interest to anybody else attempting this: I investigated the Chromium Embedded Framework, a simple way to integrate Chromium into any application. CEF3 is provided as prebuilt binaries, but unfortunately not for VS2012, so I had to build my own. As usual it's an open source project using an obscure build system and a custom project generator. Luckily they have a simple option which will, with a bit of tweaking, do the whole get -> generate -> build -> deploy sequence. See https://code.google.com/p/chromiumembedded/wiki/BranchesAndBuilding and look for the Automated Method section. NB: there is an error in the automate.py script: it references svn.bat, which doesn't exist. You can modify it to point at svn instead and then it works fine. Make sure to get from one of the stable release branches (I used 1750). It takes about an hour to get and build. Also you have to specify various options to get what you want out of automate.py. My build command looked like this:

    set GYP_MSVS_VERSION=2012
    svn checkout http://chromiumembedded.googlecode.com/svn/branches/1750/cef3/tools/automate cef3\tools\automate
    python cef3\tools\automate\automate.py --download-dir=cef3 --url=http://chromiumembedded.googlecode.com/svn/branches/1750/cef3 --ninja-build --verbose --x64-build --force-build

Obviously you need Python and SVN already installed and in the PATH to bootstrap this. This gets you prebuilt libs, the required DLLs, and a couple of sample client applications. It also includes a project that can be used to build a wrapper library with custom build settings (for instance, if your own project uses /MT you need the libs built with the same flag). The framework exposes all you need to communicate from C++ to JavaScript and vice versa, by way of extensions (for the former) and executing JavaScript commands (for the latter).
Edit: actually, extensions can only be used in the render process; CEF provides an asynchronous callback system for calls from JavaScript to C++ (https://code.google.com/p/chromiumembedded/wiki/GeneralUsage#Asynchronous_Bindings). Interaction is handled simply by passing mouse and keyboard events to browser objects. Obviously CEF3 allows offscreen rendering, otherwise it would be useless for this task, and it also supports transparent backgrounds for offscreen rendering, so you can composite UI into your framebuffer. A very useful find was this thread recounting someone's experience of doing the same thing I am doing: http://www.ogre3d.org/forums/viewtopic.php?f=11&t=79079. Next is Edge Animate. I found there is a fair amount of support for this on the web, including a nice video on how to dynamically generate content (https://www.youtube.com/watch?v=6nAicCniA1g). Edge also has the concept of Symbols (reusable components), which is going to make menus and the like much easier. Unfortunately the concept seems to be an Adobe one; I suspect other editors probably don't have the same facility (maybe something similar, though). The UI is familiar to anyone who has used Adobe Flash CS, animation is powerful, adding script is easy, and the minimum data for a page isn't excessive (200 KB for my test page). So far so good, I think, but if I run into any showstoppers I will update this thread.
  3. So what do you think of the editors in the first link I posted? They seem to offer the same features as Flash but generate HTML5, CSS, and JavaScript. I don't think they are really comparable to in-browser editors designed for making simple web pages. I tried out Google Web Designer, and it isn't great, but I did manage to make some nice tweened animations in it in less than 5 minutes. And it actually uses HTML and CSS as its data representation, rather than just generating them as a build/publish step.
  4. WYSIWYG editing, lots of tooling, a choice of a few mature layout engines, lots of documentation, and powerful features (dependent on the layout engine). I'm not interested in coding HTML, but in using a WYSIWYG editor plus JavaScript to quickly iterate on UI; it doesn't matter if HTML is mysterious or complex, because I am never going to see it. Yeah, but the latest HTML5 tools allow the same thing. Even Adobe is moving away from Flash to HTML5: Adobe Edge Animate is the HTML5 equivalent of Adobe Flash Pro. In fact the link I posted previously shows this and some others. Looking into this a bit further today, it looks like Chromium forked WebKit to Blink. I am going to see how far I get with that; I'm not sure I would want to try to integrate the whole of Chromium!
  5. I'm looking for a "poor man's" Scaleform solution, and after discovering Awesomium I am interested in the idea of using HTML5. I'm interested in any experiences of people who have done this, and what animation tool they used. Google Web Designer seems rather basic and doesn't support custom components, and Adobe Edge apparently uses jQuery for all animation, resulting in slow and large animations. http://www.webdesignerdepot.com/2013/10/html5-app-smackdown-which-tool-is-best/ None of these seem ideal.
  6. The OpenGL wiki has me confused here: you CAN'T bind to GL_ELEMENT_ARRAY_BUFFER unless you have already bound a VAO?! This makes no sense to me, as one would naturally want to bind to GL_ELEMENT_ARRAY_BUFFER when filling it in, and not want to worry about binding an associated VAO. Is there another way to fill an index buffer? Does having VAO 0 bound count as having a VAO bound?
  7. Okay, I will rephrase: where can I find out the intended usage patterns for the various new features? The spec explains what the features are and their syntax, not (usually) their rationale, or the specific problems they were intended to solve...
  8. So I came back to my main project after a couple of years away, and GL has changed quite a lot! I have familiarized myself with the new 4.4 specs for the API and GLSL, and am now left with questions about best practices. My old engine used interleaved vertex arrays to provide vertex data, which appears to still be possible via glVertexAttribPointer using the stride parameter, but it occurred to me that it might be better from an engine design standpoint to use separate buffers. These buffers could be dynamically bound to vertex shader inputs based on interrogation of the shader program, i.e. use consistent naming of vertex shader inputs (e.g. 'vertex', 'normal', 'uv', etc.) and map these to separate attribute buffers in my geometry objects. This would essentially make my vertex formats data-driven by the shader, and allow easy detection of mismatches between vertex data and shader requirements. Good idea, or overkill (or just a total misapprehension!)? The second question I have is regarding buffer-backed uniform blocks. I want to use the unsized array feature (the last member of a block can be an unsized array, with its size defined by your API call) for light specifications, material specifications matched to material IDs (my renderer uses deferred lighting), and cascaded shadow frustum matrices. Is this an appropriate use, or is there a more canonical method? My head is buzzing with new ideas and I haven't even got to the tessellation stages yet (something about awesome water)!
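The name-matching idea above can be sketched without any GL calls. This is a hypothetical illustration, not a real engine API: the shader's active attribute names (which in GL you would query with glGetActiveAttrib) are matched against a geometry object's separately-buffered attributes, and any mismatch is reported up front.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>

// Hypothetical sketch of shader-driven vertex formats. In a real engine the
// successful matches would drive the actual attribute/buffer binding calls.
struct Geometry {
    std::map<std::string, unsigned> buffers;  // attribute name -> buffer id
};

// Match each shader input name to a geometry buffer. Returns false and fills
// 'missing' if the geometry can't satisfy the shader's requirements.
bool mapAttributes(const std::set<std::string>& shaderInputs,
                   const Geometry& geo,
                   std::vector<unsigned>& bindings,
                   std::vector<std::string>& missing) {
    for (const auto& name : shaderInputs) {
        auto it = geo.buffers.find(name);
        if (it == geo.buffers.end()) missing.push_back(name);
        else bindings.push_back(it->second);
    }
    return missing.empty();
}
```

The payoff is exactly the mismatch detection mentioned above: a shader that wants a 'uv' input paired with geometry that has no uv buffer fails loudly at bind time instead of rendering garbage.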
  9. What is the actual problem? Screenshots? 
  10. Does anybody know of a good allocator that can use a specified block of memory as its pool? All the ones I can find seem to work on global pools or internally managed pools. I just want to give the allocator a big contiguous block of memory along with its size, and have it return memory allocated from within that. Alternatively (although it is functionally the same), give it a pool size only and have it return blocks from the pool as offsets into it. I guess you could call it an abstract or virtual allocator... Even better would be an entire system that will manage pointers into a block, where the block can be specified by me, which would include an allocator and a pointer wrapper that dereferences the allocated object from the specified block. This (in case you haven't guessed) is to allow me to share memory between processes, and dynamically create and allocate objects into it.
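A toy sketch of the offset-returning variant described above (class and method names are made up for illustration; a real allocator would also need free/coalesce logic). Returning offsets rather than pointers is what makes this shared-memory friendly: an offset stays valid even when each process maps the block at a different base address.

```cpp
#include <cstddef>

// Minimal bump allocator over a caller-supplied capacity, handing out
// offsets into the block rather than raw pointers.
class BlockAllocator {
    std::size_t size_;
    std::size_t top_ = 0;
public:
    static constexpr std::size_t kFail = static_cast<std::size_t>(-1);

    explicit BlockAllocator(std::size_t size) : size_(size) {}

    // Returns an offset into the block, or kFail if the block is exhausted.
    std::size_t allocate(std::size_t n, std::size_t align = 8) {
        std::size_t p = (top_ + align - 1) & ~(align - 1);  // align up
        if (p + n > size_) return kFail;
        top_ = p + n;
        return p;
    }

    // Resolve an offset against wherever *this* process mapped the block.
    template <class T>
    static T* resolve(void* base, std::size_t off) {
        return reinterpret_cast<T*>(static_cast<char*>(base) + off);
    }
};
```

For something production-grade rather than a sketch: dlmalloc can run a full malloc/free heap inside a user-supplied block via its mspace API (create_mspace_with_base), and Boost.Interprocess provides managed memory segments plus offset_ptr, which is essentially the "pointer wrapper that dereferences into the block" described above.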
  11. OpenGL

    If I can just add two bits of info that could help you greatly while trying to understand what is wrong with the programs you are going to make: 1) OpenGL is a state machine. Always keep this in mind, remember to bind objects before you attempt to modify them, and don't bother trying to use OpenGL across multiple threads (use a single thread for all OpenGL operations). 2) Use gDEBugger to quickly identify errors in your OpenGL calls, and to easily view the current OpenGL state (textures, shaders, variables, bound objects, etc.).
  12. Possibly a pre-rotate or a post-rotate. I recognise the naming style; it might be from 3ds Max? Anyway, checking the documentation for the specific COLLADA exporter might be a place to start.
  13. [quote name='obhi' timestamp='1310389329' post='4833708'] That would be really nice. I am at work so couldnt check the video. But I can see how that will work as all we can assume the directional light as a point alight, calculate the coefficients and then only use the coefficient depending upon the direction of light. Thanks, obhi [/quote] You don't need to assume it is a point light at all! Spherical harmonics are great at encoding complex lighting environments (not as great as Haar wavelets, apparently, but I haven't looked into them). Think of it as compressing a full environment map into just a few numbers (massively lossy, of course). Another way to think of an environment map is "what colour/brightness is the incoming light from each possible direction?". So you can reverse this, and instead encode into the environment map for a single point (vertex or texel) what colour/brightness that point is when a directional light is cast on it from each possible direction. In some cases it will be lit, in some it will be shadowed by other geometry, and in some it will have secondary illumination from ambient lighting and light bounces. Then you can encode this environment into an SH with a limited number of coefficients, and hard-code it into vertex data or textures. Then, when you want to simulate a directional light, you encode the directional light into the same number of SH coefficients and simply multiply all the environment coefficients by these, like a mask, in your shaders. The directional light can be created by taking a cardinal-axis SH and rotating it (there is a fairly easy way to rotate SH) to the direction of the light. If you want, you can also create much more complex lighting environments and apply them instead. Google for precomputed radiance transfer (PRT) and spherical harmonics and it throws up a few papers.
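The per-coefficient multiply-and-sum described above can be shown in a few lines of C++. This is a sketch of the standard 3-band (9-coefficient) real SH basis and a scalar PRT-style shade; function names are my own, and a real shader would do this per colour channel with the transfer vector stored in vertex data or a texture.

```cpp
#include <array>

using SH9 = std::array<float, 9>;

// Evaluate the 3-band real spherical harmonic basis for a unit direction
// (x, y, z). Constants are the standard normalized SH coefficients.
SH9 shBasis(float x, float y, float z) {
    return {
        0.282095f,                           // l=0
        0.488603f * y,                       // l=1
        0.488603f * z,
        0.488603f * x,
        1.092548f * x * y,                   // l=2
        1.092548f * y * z,
        0.315392f * (3.0f * z * z - 1.0f),
        1.092548f * x * z,
        0.546274f * (x * x - y * y)
    };
}

// PRT-style shade: multiply the precomputed per-vertex transfer coefficients
// by the light's SH coefficients and sum -- nine muls per channel.
float shShade(const SH9& transfer, const SH9& light) {
    float sum = 0.0f;
    for (int i = 0; i < 9; ++i) sum += transfer[i] * light[i];
    return sum;
}
```

Here a directional light is just shBasis evaluated in the light's direction (possibly scaled by intensity), which is the "cardinal-axis SH rotated to the light direction" mentioned above: a point whose transfer favours +z shades brighter when lit from +z than from -z.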
  14. [quote name='obhi' timestamp='1310368365' post='4833614'] I havent read much on SH but I m sure they will work quite well with static point lights. However wouldn't changing the direction of the directional light would cause SH coefficients to get invalidated. In which case I think (since the directional light cannot be varied at least in motion), a baked light map will be better. However, having said all that, I know intensity can be varied so its a good tradeoff. [/quote] No, the SH in the case of dynamic directional lighting represents the lighting environment for ALL directions the light could come from. So you simply multiply the SH encoded into the vertices or texture by an SH that represents the dynamic light (i.e. an SH coefficient set that represents a lit point in the direction of the directional light). Really, just look at the video I posted. That uses SH to encode the light from all possible directions, then allows the light direction to be changed in real time, and all it costs is 9 muls in either the vertex or pixel shader, and some more storage in either the verts or the textures.
  15. [quote name='obhi' timestamp='1310060423' post='4832371'] Spherical Harmonics if I recall correctly can be used with a little dynamics, i.e. changing the light intensity. But this will not alter the shadow position so it will work with this restriction I guess. [/quote] They will encode low-frequency dynamic shadowing; check the link I posted in the previous reply. That uses 3 float3 coefficients, if I remember correctly, and encodes low-frequency shadowing and as many GI bounces as you want. It only works for directional lights and static geometry, and must be pre-calculated, which are the drawbacks.