About lawnjelly

  1. Well, to suggest anything we need to know more... how old are you? Who owns the PC? Why are they taking it away from you? Is it because of the cost of the electricity / internet to run it? The good news is that many computer types prefer their interactions online rather than in social situations, so you are not alone, and computers and the internet provide a massive opportunity for such people which simply was not available in the past. If you want to, you can earn your living from your PC, meet your girlfriend, play games, read, get your education, entertainment etc.
  2. It should absolutely be possible to learn development on such a machine; consider that many of the games you mention will have been developed on lower spec machines. As the others point out, particularly on Windows, you may have problems getting the most recent versions of development software to install: they often flat out refuse if they consider your OS 'too old' (Visual Studio *cough*) or your graphics card lacks the latest functionality, and getting a legit copy of old commercial development software may be tricky. As an alternative that no one else has mentioned, can I suggest testing a lightweight version of Linux on your PC, perhaps from a live USB stick? It may well run very well even on your old PC, give you an up to date operating system, and run much of the latest development software for the platform. Any experience you build up here should be directly transferable when you get a more powerful PC, as should your source code, although if you are going to write directly to e.g. OpenGL you would be learning an old version.
  3. Ah, some very nice prebuilt solutions there from Hodgman, many thanks. I might have to steal some of those ideas; using the offset from the "fake pointer" looks like it works well!
  4. Yes I admit, given the need to keep support for 32 bit, I'm inclined towards option 2. As you say, none of the offsets need to be more than 32 bit, as they are relative addresses within the file. Rather than keeping the pointers as offsets in the 32 bit version, maybe I can make the fixup routine a 'no-op' in the 64 bit version, and access the pointers through an accessor function that simply returns the pointer in the 32 bit version, and does the offset + start calculation in the 64 bit version.
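A minimal sketch of what such an accessor might look like (the names and layout here are my own illustration, not from the actual codebase): the pointers stay as 32 bit offsets from the start of the file, and the accessor does the offset + start calculation at the point of use, so no fixup pass is needed at all and the file format is identical for both builds.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical "option 2" accessor: the file stores 32 bit offsets
// relative to the start of the loaded blob, and they are resolved to
// real pointers only when accessed.
struct FileRef {
    uint32_t offset; // relative to the start of the loaded file blob

    // Resolve the offset against the blob base at the point of use.
    template <typename T>
    T* resolve(void* fileBase) const {
        return reinterpret_cast<T*>(static_cast<char*>(fileBase) + offset);
    }
};
```

In a build that still wants the in-place fixup for 32 bit, `resolve()` could be wrapped in an `#ifdef` that just reinterprets the stored value as a pointer instead of doing the addition.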
  5. I'm converting a load of C++ code from 32 bit to 64 bit, and have run up against the predictable snag of fixup (relocation) pointers in binary files. Essentially there are a bunch of pointers in a binary file, but when saved on disk they are relative to the start of the file. On loading, the pointers are 'fixed up' by adding the memory address of the start of the file to the offset, giving an absolute pointer which can be re-saved in the same memory location and used at runtime as a normal pointer. This is great, but has so far relied on the offset and the pointer both being 32 bit. The files are unlikely to be anywhere near 4 gigs, so the offsets don't *need* to be 64 bit. My question is what would be best (or rather, what do most of you guys do) in this situation? One particular quirk is that the code needs to compile and run fine as both 32 bit and 64 bit, as it needs to run on both classes of device, and the binary files must be the same. The most obvious solution is to store all the offsets / pointers in the binary file as 64 bit. This would mean re-exporting all the binary files, but that is doable (even if somewhat of a pain). It would simplify things for the 64 bit version and require only slight modification for the 32 bit version. The downside is that the file sizes / size in memory would be bigger, plus any cache implications. The alternative is to keep the pointers as 32 bit offsets and do the pointer addition on the fly as the parts of the data need to be accessed. The files are kept the same and the only cost is the extra pointer arithmetic at runtime. I have a vague memory of a presentation by a guy who did such relocation on the fly and found there was very little runtime cost. There is also the question: even with 64 bit pointers, are they ever going to be more than a 32 bit value if the program is using a lot less than 4 gigs? I'm assuming yes, as the address space may extend past the first 4 gigs, what with all the virtual memory address space / paging / randomization that goes on, but I just thought I'd check that assumption, as I'm not well versed in the low level details.
  6. Doh! After all that, I'm getting the inkling that Linux may just support OpenGL ES 2.0 out of the box, via the open source Mesa driver thingies. Here was me thinking it was something to do with the Black Mesa Research Facility. Anyway, I've successfully got a triangle on the screen, and am praying it is not software emulated...
  7. The Mali OpenGL ES emulator works, afaik, by translating the OpenGL ES calls into regular desktop OpenGL calls, only allowing a subset of the full OpenGL, and performing lots of validation. This is what it appeared to do on Windows, and everything I've read suggests it may be doing the same on Linux. What is slightly confusing is that there may be some kind of Mesa OpenGL ES support built into my Linux, presumably because it may be running on hardware that *does* natively support OpenGL ES(?). What that does if you call it without hardware support I have no idea; I wish I could find some decent Linux for dummies tutorials lol. I did manage to successfully install the Mali emulator by first uninstalling SDL2. However, it gets worse. Their test cube app failed, apparently because it is trying to use the fallback compatibility OpenGL 3.0 profile instead of the core 4.3 profile on my Kaby Lake PC. The docs suggest this may be because they've only tested it with nvidia hardware, but I haven't a clue; maybe I would have to force it to use the core profile somehow. I'm now trying to get the PowerVR OpenGL ES emulator working, in the hope it plays nicer. I have managed to get some SDL / OpenGL ES code to compile and link, but not show a triangle yet, so I don't know if it is working...
  8. I'm developing an Android game and have been primarily using a PC build (on Windows) with the Mali OpenGL ES 2.0 emulator, with a secondary Android Studio build for the devices. For various reasons I'm trying to change over to Linux; I have Linux Mint on a new PC, and so I'm trying to get a similar PC build working under Linux, however I am an absolute beginner at Linux. It seems that a sensible option might be to use SDL on Linux for stuff like creating a window, keyboard input etc. I gather that SDL2 is just the most up to date version of SDL, so I have been installing that with apt-get. However, when I try to install the Mali OpenGL ES emulator from ARM, I am getting a conflict: installed package 'libegl1-mesa-dev' conflicts with the installed package 'libgles2-mesa-dev' I am guessing this means that both SDL2 and Mali have some EGL functionality, and they are treading on each other's toes? There is some mention of this in the Mali help file: Is this --force-all option what I should do? Or is there no way to get SDL2 and Mali to play together? If not SDL2, then how should I be using Mali under Linux (i.e. what other API should I be talking to for creating a window, keyboard input etc)?
  9. Curse this mammalian brain, it's actually Mark Ridley; I think it was just titled 'Evolution', but I had it as a textbook on a 3rd year genetics course 20 years ago. There are probably many other great, more recent books. For Dawkins I'd recommend The Selfish Gene, then The Blind Watchmaker. They are his earlier books but did very well; the later ones often rehash the same points. After all, the principles involved haven't changed, although I'm pretty sure there have been a lot of breakthroughs in stuff like epigenetics since I studied it. Spot on, I was going to mention this in my first post. There are a lot of programmers experimenting with genetic type methods / selection to evolve artificial life / methods of locomotion in physics simulations, things like that. It may not be biological, but the principles involved are exactly the same. This kind of thing for locomotion: https://www.youtube.com/watch?v=pgaEE27nsQw Well, 'free will' I'd just say is a fancy name for the decision making process our brains do all the time (as do most other organisms more complex than, say, a fly). I don't really know anything about subjective experience...
  10. I know next to nothing about Pilobolus, but if you are interested in how life achieves complexity then I would recommend reading some Dawkins as a grounding in how it all works: 'The Selfish Gene' and 'The Blind Watchmaker'. I really don't know the extent of your biology knowledge, but imo there are two big aspects to get a grasp of. One is evolution and genetics (which Dawkins is a good introduction to, and there are more advanced books by e.g. Matt Ridley). The other is development and complexity arising from simple rules: understanding that something apparently complex (e.g. a tree) can be built by simpler branching rules (have a look at Conway's 'Game of Life' cellular automaton for an example). Even things like human organs tend to be built in the same way; see for example the similarity of the branching in the lungs to the structure of a tree. It is a means to increase the surface area to volume ratio for gas exchange. I'm not super familiar with the specifics of development of any particular organism, but a lot of work has been done on simple organisms like fruit flies to understand how they are built; you could read about this to see how things like limbs and specialisation can happen. As you read about evolution, you will read about how most of the organisms today are built from a few body plans / phyla, and share a lot of their blueprint. I just finished reading 'Wonderful Life' by Stephen Jay Gould, which, aside from being a little rambling and overlong, suggests that during the first explosion of multicellular life there were far more body plans being experimented with by mother nature, and whether by random accident or better design, just a few of them won out and form the basis for later life on earth. As for creating models, go for it; maybe even start with simpler models than Pilobolus. You can even add genetics to your model and let nature 'select' the best version of your species. Or even compete two or more species against each other if you want to make things interesting, or have predator prey interactions. This is all assuming you are not a religious fruitcake, of course, in which case forget all this, and just accept that everything was created by the flying spaghetti monster, waving his noodly appendages.
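For anyone wanting to try the 'let nature select' experiment, here is a toy cumulative selection sketch in the spirit of the 'weasel' program from The Blind Watchmaker. The code, names and parameters are my own illustration, not from the book.

```cpp
#include <cassert>
#include <random>
#include <string>

// Count how many characters of s match the target.
int fitness(const std::string& s, const std::string& target) {
    int score = 0;
    for (std::size_t i = 0; i < s.size(); ++i)
        if (s[i] == target[i]) ++score;
    return score;
}

// Cumulative selection: each generation one random character is mutated,
// and the mutant is kept only if it matches the target at least as well,
// so the score can never go backwards. This is what makes selection so
// much faster than pure random search.
std::string evolve(const std::string& target, int generations, unsigned seed) {
    const std::string alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ ";
    std::mt19937 rng(seed);
    std::uniform_int_distribution<std::size_t> pickChar(0, alphabet.size() - 1);
    std::uniform_int_distribution<std::size_t> pickPos(0, target.size() - 1);

    // Start from a random string of the right length.
    std::string current(target.size(), ' ');
    for (char& c : current) c = alphabet[pickChar(rng)];

    for (int g = 0; g < generations; ++g) {
        std::string mutant = current;
        mutant[pickPos(rng)] = alphabet[pickChar(rng)]; // one point mutation
        if (fitness(mutant, target) >= fitness(current, target))
            current = mutant; // selection keeps the better (or equal) variant
    }
    return current;
}
```

Having a fixed target makes this a teaching toy rather than real evolution, as Dawkins himself points out; the locomotion simulations linked above replace the fixed target with an open-ended fitness measure.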
  11. Yes definitely, I've been finding this. It has made me so glad I went with pre-rendering the scrolling background, as rendering all those sprites every frame would have killed performance. Most of the work on a frame is done by just drawing one big screen-size quad for the background. The 'big work' is done when rendering a new row or column of the background, which only happens every few frames, and is limited to a small viewport, so it minimizes the fillrate requirements. See here: https://www.youtube.com/watch?v=Xfaj4TtvjKk which shows it working on the ground texture. As well as hardware depth testing (so the particles interact with the animals), the particles and models can also do a depth check against the custom encoded RGBA depth texture for the background, so they go behind trees etc. This is an extra texture read and extra calculations in the fragment shader, so it did give a speedup when turned off. Yup, I definitely found this to be the case.
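For reference, a 'custom encoded RGBA depth texture' of the kind mentioned above can be built by storing the depth as fixed point spread across the colour channels. The snippet below shows a generic 24 bit packing on the CPU to illustrate the arithmetic; it is my own sketch of the idea, not the game's actual encoding.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// A texel of an RGBA8 render target.
struct RGBA8 { uint8_t r, g, b, a; };

// Pack a 0..1 depth value as 24 bit fixed point across R, G, B
// (high byte in r); alpha is unused here.
RGBA8 encodeDepth(double depth) {
    uint32_t fixed = static_cast<uint32_t>(depth * 16777215.0);
    return { static_cast<uint8_t>(fixed >> 16),
             static_cast<uint8_t>((fixed >> 8) & 0xff),
             static_cast<uint8_t>(fixed & 0xff),
             255 };
}

// Reassemble the fixed point value and normalise back to 0..1.
double decodeDepth(RGBA8 c) {
    uint32_t fixed = (static_cast<uint32_t>(c.r) << 16) |
                     (static_cast<uint32_t>(c.g) << 8) | c.b;
    return fixed / 16777215.0;
}
```

In a GLES 2.0 shader the same split is usually expressed with a multiply, `fract`, and a dot product against per-channel weights, but the arithmetic is identical.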
  12. Aside from being careful with headers, nested includes etc, I too found the biggest win was unity builds; changed my life.. :lol: For my current little game I'm getting 15 seconds for a full rebuild of ~150K LOC (according to the visual studio trick), about 2 seconds of which is my code; the rest is third party stuff which I couldn't get into the unity build, like lua, and linking. One thing on linking: being someone who favours static linking of runtime libraries, I think I found that linking the dynamic libraries was faster, so I often do that for debug builds, and static link for release. If you can, I believe building the stuff that is less likely to change as DLLs for development builds might help quite a bit in very large projects, in terms of speeding up iteration.
  13. Hmm, I'm aiming for a scripted jungle exploration game, where you will get lots of sub-missions depending on the areas you enter, mostly from natives and chiefs, kind of like RPGs such as Baldur's Gate etc, but simplified, without all the character stats and inventory. The animals will be there to bother you as you go about missions. There will be an overall story arc too; it is influenced a lot by things like the King Kong movies. Although the test levels here are just random, the levels can have biomes such as desert, forest, plains, rocky, volcano, lake, sea shore etc. I'm keen for things not to get out of hand, so the major technical challenges are the scrolling renderer, the animals, and hopefully some kind of geometry morphing for the natives so I can have a lot of variation. Then simple objects you can interact with / collect. I'm also planning on the player being able to pilot a canoe, swim and jump. There may also be a wrestling sub-game; it depends how difficult it would be to coordinate the animations, and how time consuming that would be. I experimented with auto level generation in the previous version, and this works well while having the facility to manually map edit (in the game engine) and move things after the auto-generate. Then I would place actors and write scripts for the missions. Possibly some of the scripts could be auto generated too.
  14. Ah yes! Good thinking :D The simplest solution is to pre-render some frames. I actually did this in the first version long ago, but had forgotten! PCF is just basic shadow mapping but taking multiple samples: http://fabiensanglard.net/shadowmappingPCF/ It kind of works like this already, a little more complex though, as it has a wrapping, tiling background bigger than the screen, and handles ground textures separately. This is how it does things already with the shadows, except I am not casting from the animals at this stage, as I figured that would be too expensive; I'll probably just add simple blob shadows for the animals, although shadows are received by the animals, from trees etc. The shadow map only needs to be regenerated as you move across the map; it is not rendered every frame. With the dynamic shadows received on the animals turned off, the shadows on the terrain are essentially free for most frames, but they do cost when scrolling to a new tile. This afternoon I implemented the static water as part of the background (although I have not yet done the bit that adds blue colour to underwater animals). It doesn't look too bad on my low end phone and is now rendering mostly at 60fps. There are occasional dropped frames during scrolling to new tiles, but I'll see if I can address that. I will also see if I can add random jitter to the terrain shadows to make them look better with fewer samples.
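To illustrate the PCF idea from the linked article, averaging several binary shadow tests instead of taking one, here is a small CPU sketch; the map size and names are illustrative, not from the game.

```cpp
#include <algorithm>
#include <cassert>

// Dimensions of the toy shadow map used in this sketch.
const int MAP_W = 4, MAP_H = 4;

// Percentage closer filtering: returns the fraction of the 3x3
// neighbourhood around (x, y) that is lit, i.e. where the stored
// occluder depth is not closer to the light than fragDepth. The
// averaging of pass/fail results is what softens the shadow edge.
float pcfShadow(const float shadowMap[MAP_H][MAP_W], int x, int y,
                float fragDepth) {
    float lit = 0.0f;
    int samples = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int sx = std::max(0, std::min(x + dx, MAP_W - 1)); // clamp to edge
            int sy = std::max(0, std::min(y + dy, MAP_H - 1));
            lit += (fragDepth <= shadowMap[sy][sx]) ? 1.0f : 0.0f;
            ++samples;
        }
    return lit / samples;
}
```

The random jitter mentioned above amounts to offsetting each of the nine sample positions by a small random amount, trading banding artefacts for noise so fewer samples still look soft.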
  15. I've just got the latest build of my OpenGL ES 2.0 jungle game working on my android devices (Nexus 7 2012 tablet, Cat B15 phone) and am currently deciding the best way to address performance (more details here: https://www.gamedev.net/blog/2199/entry-2262867-sound-particles-water-etc/). All the graphics seem to be working fine, but it seems that the biggest issue is fillrate / texture samples / shader complexity. So far I've identified the biggest culprits: :blink: the dynamic water shader, particles, and PCF shadows. [attachment=35790:dynamic.jpg] I'm aiming for 60fps even on low end phones, if possible. It seems to me that I should have graphics options so the user can get the best graphics / performance tradeoff for their device. Some of the issues are a consequence of using a scrolling pre-rendered background, with colour and a custom depth texture (as depth textures are not supported on some devices). When rendering the background as the viewer moves around, I currently use 2 passes, one for the colour and one to write the depth into an RGBA texture; then in realtime I render dynamic objects (e.g. the animals) on top, reading from the depth texture, decoding it and comparing against the fragment z value. One obvious speedup is to remove the depth comparison with the background in shaders that do not require it. The particles look much nicer when they are hidden by trees / vegetation, but still look acceptable without it. The PCF shadows I always suspected were going to be a problem. I was using PCF shadows for the pre-rendered scrolling background (which only needs refreshing every few frames) and PCF shadows on the animals, as they get shaded by trees etc. Taking this down to a single sample greatly sped the shader up, so it is obviously a bottleneck. Single sample shadows look very bad, however, so I think the options are: turning them off for the animals, simplifying them for the background, or using some kind of pre-calculation. There is also the option of a randomized jitter / rotating sample window to get a softer shadow with less performance hit. The biggest question I am still facing is how to do the water. :huh: Is it actually *feasible* to run a complex water shader covering the whole screen on these devices (the worst case for sea areas), or do they lack the horsepower? I am actually considering (!!) pre-rendering static water as part of the background, then bodging in some kind of depth-based blue colour for the parts of animals that are below the surface each frame. It won't look amazing, but should be super fast. I could even add some dynamic particles or something on the water surface to make it look at least a little dynamic. This is what static water might look like: :blink: [attachment=35791:simpleshader.jpg] I am currently just rendering a giant quad for the water, then using depth testing against the custom depth texture to handle visibility. But this is a bottleneck, as are the calculations of the water colour. I have already considered drawing polys only for the rough area where water will be (around the shores etc) rather than the whole screen; however this will only help in best case scenarios, not worst cases. Maybe there is a cheaper way of deciding where to draw the water? I would use the standard z buffer, but that option does not appear to be open, given that I am using a custom encoded depth texture, and the shaders cannot write to the standard z buffer without an OpenGL extension (which may or may not be present lol :rolleyes: ). I could maybe wangle another background luminance layer or something for deciding where to draw realtime water, but this seems a lot of effort for not much reward (it would only save decoding the depth texture and doing a comparison). Another question that occurs to me is whether all of these bottlenecks are simple bottlenecks, or whether I am stalling the pipeline somewhere with a dependency, and whether I could double / triple buffer the resources to alleviate the problem. Anyway, sorry for this long rambling post, but I would welcome any thoughts / ideas - probably along the lines of whether these should actually be causing such problems, and any ideas around them, particularly the water. In fact any suggestions for super fast, simple water shaders would be useful... I suspect just adding 2 scrolled tiled textures might produce something usable enough, if the texture reads are faster than calculations within the shader.
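The 'two scrolled tiled textures' idea at the end can be sketched on the CPU as below: the same small tile is read twice with independent scroll offsets and the reads are averaged, so the repeat is far less obvious and the per pixel cost is just two reads and a blend. Tile size, offsets and names are all made up for illustration.

```cpp
#include <cassert>

// Size of the toy repeating tile.
const int TILE = 4;

// Tiled / repeating addressing: wrap a coordinate into the tile,
// handling negative values as GL_REPEAT would.
int wrapCoord(int v) { return ((v % TILE) + TILE) % TILE; }

// Fake water value: the tile sampled twice with different scrolls
// (the two layers moving at different speeds and directions is what
// hides the repetition), then blended 50/50.
float waterSample(const float tile[TILE][TILE], int x, int y,
                  int scrollA, int scrollB) {
    float a = tile[wrapCoord(y + scrollA)][wrapCoord(x + scrollA)];
    float b = tile[wrapCoord(y - scrollB)][wrapCoord(x + scrollB * 2)];
    return 0.5f * (a + b);
}
```

In the real shader the two reads would be texture fetches with per-frame uv offsets; whether that beats computing the water procedurally depends on how texture-read bound the device already is, as the post suspects.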