
Sean_Seanston

Members
  • Content count

    593
  • Joined

  • Last visited

Community Reputation

880 Good

About Sean_Seanston

  • Rank
    Advanced Member
  1. Cool... I'll definitely give it a go very soon then. I think I've spent too long being stuck with one thing and I've lost motivation...
  2.   Out of curiosity, are there any engines more suited to 2D that also have something comparable to Unreal's UMG interface system? I've been trying to make a specific kind of 2D game in Unreal for a while (encountering some frankly bizarre problems lately that I haven't yet figured out...) and what attracted me to Unreal was that the UMG would allow me to more easily mess around with menus and at least quickly prototype the kind of gameplay interfaces that I wasn't completely sure about.   I'll probably give Unity a go soon too... maybe that'll prove more to my liking...
  3.   Could be make_shared<>() alright... I was using that a bit too in the same project.
  4. Trying the VS2010 build now...   ...   Works!   Excellent. I think I just assumed the glsdk folder had come that way rather than having to build it. Maybe there isn't any new version because the 2010 one works, I dunno.   Thanks all.
  5.   Yeah, I thought of that but nothing.       I had a look and sure enough... I did have to build them for VS2008 some time ago, I just forgot. So I guess that's probably the problem...   HOWEVER, it seems the Unofficial OpenGL SDK only supports VS2008 and VS2010... though when I googled I saw something about the VS2010 build working for VS2013... so perhaps it also works for 2015; I'll try. Strange... it looks like the SDK hasn't been updated in a long time, despite seeming to be quite popular. I'm assuming 0.5.2 is the latest version, since that's the one linked from the SourceForge page.     Interesting... could've sworn it didn't work otherwise. Unless I was confusing it with something similar, like std::pair probably. Oh well.
  6. I migrated a VC++ 2008 project to Visual Studio 2015 and now I'm having linking errors with unresolved externals...   I'm using the Unofficial OpenGL SDK, so I put that in the same place where I used to have it when it worked (before I upgraded from Vista), then made sure all the includes were the same (the include directories are a little different in the new version, but as far as I can see they're all set correctly now for both includes and libraries).   I had to change some code to work with C++11, specifically not specifying the types with make_pair, so I'm not sure if it may be that something else has changed and that's what's causing this, or if some settings have just been lost with the migration, or what...   When I try to build, it gives me tons of warnings and errors, mostly for glimgD.lib but also a few for glfwD.lib. Not sure if the warnings really matter or if they're indicative of some error-causing problem. The warnings take the form of:

1>glimgD.lib(ImageFormat.obj) : warning LNK4049: locally defined symbol ?c_str@?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@QBEPBDXZ (public: char const * __thiscall std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >::c_str(void)const ) imported

Whereas the corresponding errors look like:

1>glimgD.lib(StbLoader.obj) : error LNK2019: unresolved external symbol "__declspec(dllimport) class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __cdecl std::operator+<char,struct std::char_traits<char>,class std::allocator<char> >(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,char const *)" (__imp_??$?HDU?$char_traits@D@std@@V?$allocator@D@1@@std@@YA?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@0@ABV10@PBD@Z) referenced in function "public: __thiscall glimg::loaders::stb::UnableToLoadException::UnableToLoadException(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &)" (??0UnableToLoadException@stb@loaders@glimg@@QAE@ABV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z)

All the include and library directories seem to be there in Properties, and the libs in Additional Dependencies seem to be visible and correct... in fact, it seems to be reading the .lib and not being able to find something that's referenced inside it, if I'm reading it right?   I took screenshots of the project's properties when it was working (though they shouldn't have changed with the migration anyway, ideally) and it seems like everything is as it's meant to be. Does it look like some kind of version incompatibility, or have I probably just missed something?
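On the make_pair change mentioned above: in C++11, std::make_pair's parameters became forwarding references, so older code that explicitly spells out the template arguments can stop compiling when passed lvalues. A minimal sketch of the before/after (makeEntry is a made-up name, just for illustration):

```cpp
#include <string>
#include <utility>

// Pre-C++11 code sometimes spelled out make_pair's template arguments:
//
//   std::string name = "one";
//   std::make_pair<int, std::string>(1, name);  // fails in C++11: 'name' is
//                                               // an lvalue, but the explicit
//                                               // parameter is std::string&&
//
// Since C++11 make_pair takes forwarding references, so the fix is simply to
// let the compiler deduce the types:
std::pair<int, std::string> makeEntry(int id, const std::string& name) {
    return std::make_pair(id, name); // deduced as std::pair<int, std::string>
}
```

This compiles the same way under both VC++ 2008 and VS2015, which is why dropping the explicit arguments is the usual migration fix.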
  7.   I guess so...   More than anything I just wanted to get a better grounding in certain things before moving on to the more proper commercial solutions for more practical purposes.   Maybe the best thing would be just to continue having a look around and reading up on some of the theory behind things to satisfy my curiosity, while focusing on the likes of Unity for actual projects for the foreseeable future.   Which right now actually leaves just Unreal Engine 4, since Unity doesn't work on Vista and I BELIEVE (could be wrong) I saw that Unreal Engine 4 does.   EDIT: Actually it looks as though Vista isn't supported... at least it's not mentioned...
  8.   K... that's good to know. I'll keep that in mind for my real projects... thanks.
  9.   Well, that was the general idea of splitting the map up into sectors; to try to easily restrict the recalculations to the smallest area necessary.   Just hypothesizing that maybe splitting the map into squares or w/e would be a practical way to do that.     Possibly. In the sense that I'm speaking in very general terms, where all we know is that the terrain is changing somewhere somehow and the normals have been invalidated. Though I was thinking along the lines of such falloff areas counting as the total region of the spell's effect for terrain purposes.     I assume you'd just allow raising and lowering vertices, probably cap it to some kind of reasonable value, and have some sort of minimum height, beyond which would be bedrock or perhaps water.   TBH, I find Magic Carpet a very impressive game for 1994 in retrospect. This was a time in the 16-bit era when even the Amiga was still very much alive (though only just...) and Magic Carpet has all kinds of terrain-deforming, castle-building weirdness. Probably a bit ahead of its time, though, which is where it may suffer... haven't actually played it in many, many years though, and even then only on PS1... but I digress.     As a thought... I remember Tiberian Sun (which was 2D, granted, but if we wanted to achieve it in 3D...) had the ability to deform the terrain, e.g. by repeatedly bombarding the same spot with artillery.   Let's say we have a perfectly flat terrain mesh, and we just want to create some sort of simple crater or depression; it might not be necessary to create any new vertices, and the effect might be achieved just by lowering a few vertices by certain amounts. - In that case, would you just use a static VBO and simply give it new data, then recalculate the normals based on whatever method you have of narrowing down the normals to be recalculated? Or maybe the VBO should be dynamic, or does it depend on how many times you expect the terrain to change? 
(I see stream is another usage hint for data that changes often, but static doesn't necessarily seem to be the worst choice in every case where the data changes, if I'm reading it correctly). - If we wanted to increase the number of vertices to create a more detailed crater, I suppose that would involve complicated code in a geometry shader?
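The dirty-area idea discussed above could be sketched roughly like this: restrict the normal recalculation to the changed rectangle of the heightfield, grown by one vertex, since each vertex normal depends on its neighbours' heights. This is a minimal sketch using central differences on a regular grid (an alternative to summing per-triangle cross products that gives a comparable result on a heightfield); all names here (Heightfield, recalcNormals) are made up for illustration, not from any engine or SDK:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Heightfield {
    int w, h;
    std::vector<float> height;   // w*h height samples
    std::vector<float> normalY;  // y component of each vertex normal, say

    float at(int x, int z) const {
        // clamp to the edge so border vertices still have neighbours
        x = x < 0 ? 0 : (x >= w ? w - 1 : x);
        z = z < 0 ? 0 : (z >= h ? h - 1 : z);
        return height[z * w + x];
    }

    // Recompute normals only inside [x0,x1] x [z0,z1], grown by one vertex,
    // because a vertex normal depends on neighbouring heights.
    void recalcNormals(int x0, int z0, int x1, int z1) {
        x0 = std::max(0, x0 - 1);     z0 = std::max(0, z0 - 1);
        x1 = std::min(w - 1, x1 + 1); z1 = std::min(h - 1, z1 + 1);
        for (int z = z0; z <= z1; ++z)
            for (int x = x0; x <= x1; ++x) {
                float dx = at(x + 1, z) - at(x, z - 1 + 1) + at(x + 1, z) * 0.0f; // see below
                dx = at(x + 1, z) - at(x - 1, z);       // slope along x
                float dz = at(x, z + 1) - at(x, z - 1); // slope along z
                // unnormalized normal is (-dx, 2, -dz); store the normalized y
                float len = std::sqrt(dx * dx + 4.0f + dz * dz);
                normalY[z * w + x] = 2.0f / len;
            }
    }
};
```

On the GPU side, the matching partial update would be something like glBufferSubData over just the affected vertex range, so the static vs. dynamic usage hint question mostly affects the driver's placement of the buffer rather than whether this works at all.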
  10. I think I understand normals reasonably well, but I'm curious about some of the practical issues relating to using normals for terrain.   The simplest way to calculate normals appears to be precalculating them by taking the cross products of the sides of all the terrain's triangles and sending them to the vertex shader, as in this tutorial: http://www.mbsoftworks.sk/index.php?page=tutorials&series=1&tutorial=24   But what about deformable terrain? Games like Magic Carpet or Populous: The Beginning come to mind for me, where you have spells that can raise and lower the terrain.   The obvious naive solution is to recalculate the entire set of terrain normals after any change to the terrain, but what would a practical approach look like, one that might have been efficient enough for those games ~20 years ago?   Is it as simple as splitting the map up into sectors, detecting the maximum area that might be affected by a spell, and then recalculating all of the normals in the triangles in just those sectors?
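The precalculation approach described above (cross products of triangle edges, accumulated per vertex) can be sketched like this. It's a minimal standalone version with made-up names (Vec3, computeNormals), not the tutorial's actual code:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// indices holds 3 entries per triangle. Each triangle's cross-product normal
// is summed into its three vertices (implicitly area-weighted), then each
// vertex normal is normalized at the end.
std::vector<Vec3> computeNormals(const std::vector<Vec3>& verts,
                                 const std::vector<unsigned>& indices) {
    std::vector<Vec3> normals(verts.size(), Vec3{0, 0, 0});
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        Vec3 a = verts[indices[i]], b = verts[indices[i + 1]], c = verts[indices[i + 2]];
        Vec3 n = cross(sub(b, a), sub(c, a)); // face normal from two edges
        for (int k = 0; k < 3; ++k) {
            Vec3& out = normals[indices[i + k]];
            out.x += n.x; out.y += n.y; out.z += n.z;
        }
    }
    for (Vec3& n : normals) n = normalize(n);
    return normals;
}
```

Restricting the loop to the triangles of a dirty sector, as asked above, is then just a matter of passing in that sector's index range.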
  11. And BTW, is there any disadvantage or reason not to use the transpose of the inverse for normals just in case you end up doing non-uniform scaling? Does it complicate things much or result in noticeable framerate drops in some scenarios?
  12. Think I've found the problem now... if I have, then it was indeed as dumb as I thought the eventual mistake was going to be.   Just looking now, I believe I was putting the uniform-setting code in the wrong, very similar and similarly named function that I was using for an earlier test without lighting... which would mean that the reason it was all a solid colour is that modelView was never being initialized.   Yes... it is in fact working now. *Sigh*. Just as dumb as I thought, but the idea never occurred to me because it all seemed to make sense... though if I'd been more careful I might have noticed the lack of setting of uniforms that were to be used in the fragment shader, which should have tipped me off, but of course I wasn't paying attention to that... In my semi-defence, I'm sure the setting of the normal matrix was mistakenly in the wrong function that I was looking at, despite it not being used at all, so that made it look more legit... oh well.   Thanks for the help. These are at least the kind of frustrations that help drill into you the importance of being properly thorough and paying attention to every conceivable detail...
  13.   Interesting... if it wasn't such a compact piece of code I would assume there was a variable I hadn't set somewhere, or that I'd given 2 variables the same name or something, but I just can't see what's doing it...   Well, I'm just using the idea behind the code in a little side project, so it's set up a bit differently and there's only 1 relevant place for the uniform to draw a single cube object. So it's not that I left out setting a uniform.     Well, it comes from the tutorial, where it's used because of this.   BTW, does that essentially mean that it's only necessary to use the transpose of the inverse if you're doing a scale like (1.0, 2.0, 1.0)? I assume that's all that's meant by non-uniform scaling... but just want to be sure.   In which case, while it's probably very unlikely in a real application, I am in fact using such scaling in this contrived example.   Either way, it doesn't explain the differing effects that are confounding me...   Out of curiosity I got rid of the transpose and inverse functions; the effect still worked with the original method. Then I used the uniform modelMatrix and it came out wrong in the same way as it did with the functions... So whatever is wrong, it seems to be with modelMatrix being passed in wrong somehow... even though the other matrices are being passed fine... hmmm.   This is going to turn out to be something dumb, I bet, but I still can't figure it out. It couldn't be some sort of compiler weirdness, could it? I did try rebuilding several times, so I don't think so, but this is getting weird.
  14. I was experimenting with some lighting code I had based on the tutorial found here: http://www.mbsoftworks.sk/index.php?page=tutorials&series=1&tutorial=11   For whatever reason, I decided to move the normal matrix calculation into the vertex shader instead of passing it in via uniform.   Now it looks to me like doing the same thing in the shader is giving me a different lighting result than leaving it in C++ and passing it in. I'm reasonably sure now, after repeated checking, that I haven't slipped up somewhere with something stupid, so I'm assuming that perhaps I just don't understand how something works (a difference between how GLM and GLSL deal with matrices maybe...? I dunno).   The relevant code from the tutorial source:

glm::mat4 mModelView = cCamera.look();
glm::mat4 mModelToCamera;
spDirectionalLight.setUniform("modelViewMatrix", &mModelView);
mModelToCamera = glm::translate(glm::mat4(1.0), vSunPos);
spDirectionalLight.setUniform("modelViewMatrix", mModelView*mModelToCamera);
spDirectionalLight.setUniform("normalMatrix", glm::transpose(glm::inverse(mModelToCamera)));

So he calculates normalMatrix as the transpose of the inverse of the model matrix, and passes that as a uniform into the vertex shader.   The relevant vertex shader code is:

vec4 vRes = normalMatrix*vec4(inNormal, 0.0);
vNormal = vRes.xyz;

Where vNormal is a vec3 out variable going into the fragment shader.   I copied that, and my C++ code looks like this:

glm::mat4 viewMat = gameCam.look();
glm::mat4 modelToCam = glm::translate( glm::mat4( 1.0f ), buildPos );
modelToCam = glm::scale( modelToCam, glm::vec3( scale, scale, scale * 0.75 ) );
shProg.setUniform( "modelViewMatrix", viewMat*modelToCam );
shProg.setUniform( "normalMatrix", glm::transpose( glm::inverse( modelToCam ) ) );

With, similarly, a vertex shader that looks like this:

vec4 vRes = normalMatrix * vec4( inNormal, 0.0 );
vNormal = vRes.xyz;

By doing it that way, I get one result, which I assume to be the correct result given how it looks (light colour on 2 sides of a cube, darker colours on the opposite 2 sides, and the top is darker again).   HOWEVER... I get a completely different result (just a solid colour on all visible sides; perhaps suggesting a complete mess of some sort being made of some calculation or other) by doing this in the shader instead:

mat4 normMat = transpose( inverse( modelMatrix ) );
vec4 vRes = normMat * vec4( inNormal, 0.0 );

Where I've obviously inserted the following into the C++ to get the value for modelMatrix:

shProg.setUniform( "modelMatrix", modelToCam );

Why might this be the case? To me, I appear to have replaced one value calculated in C++ via GLM with another value that I would think to be equivalent, using the transpose() and inverse() functions of GLSL instead. So I can't figure out what's going wrong... I can't possibly seem to have slipped up anywhere else when the code involved is so tiny and it's just a direct substitution of one term for another, and I've checked it quite a few times for anything I might have missed.   Does GLSL work differently to GLM somehow when dealing with matrices (though I can't see why...)?   
I thought at one point it could have been something about each vertex having different values for something or other, but that can't be it when modelMatrix is a uniform too, and it's set right before the original calculation method's normalMatrix, so they're definitely not using different values for the model matrix as far as I can see. I could of course just leave it to be calculated in C++, which is probably the better idea, but I was just curious and decided to experiment, and now this is really bothering me...
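Since the scale in the post above is non-uniform (scale, scale, scale * 0.75), it's worth seeing concretely why the transpose of the inverse matters there. A toy standalone sketch, using plain structs rather than GLM; all names (V3, invTransposeScale) are made up, and it relies on the fact that for a diagonal scale matrix diag(sx, sy, sz) the inverse-transpose is simply diag(1/sx, 1/sy, 1/sz):

```cpp
#include <cmath>

struct V3 { float x, y, z; };

float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Transform a vector by the model matrix M = diag(sx, sy, sz).
V3 scale(V3 v, float sx, float sy, float sz) {
    return {v.x * sx, v.y * sy, v.z * sz};
}

// Transform a vector by (M^-1)^T, which for a diagonal scale is just the
// reciprocal scale.
V3 invTransposeScale(V3 v, float sx, float sy, float sz) {
    return {v.x / sx, v.y / sy, v.z / sz};
}
```

With a surface whose normal is (1, 1, 0) and tangent is (1, -1, 0), a non-uniform scale of (1, 2, 1) takes the tangent to (1, -2, 0). Transforming the normal by the model matrix itself gives (1, 2, 0), which is no longer perpendicular to the transformed tangent, while the inverse-transpose gives (1, 0.5, 0), which is. That's why the tutorial computes glm::transpose(glm::inverse(...)); the mystery in the post is only why the GLSL inverse()/transpose() route behaves differently, which this algebra alone can't explain.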
  15. Yeah, that's the kind of thing I had in mind: I mostly just want to make a game at this stage, at least to get back in the swing of making games for the time being and then maybe when I've gained some more confidence go back and look at things more closely, BUT... I also wouldn't mind going relatively low level now IF it's not going to take forever just to get something reasonable up and running.   Looks like that tutorial should hopefully be the answer to my problem...   Then I ought to be able to get a game up and running with a vehicle going around a landscape at least, and being a strategy game of some sort it shouldn't matter if it's quite simplistic.   The rest... sounds like the 3D shouldn't complicate it too far, beyond what would be expected in an ordinary 3D game anyway. I hope.