

Member Since 30 Jun 2004

Posts I've Made

In Topic: [2D Platformer] Horizontal Loop

21 April 2016 - 06:55 PM

In my opinion, the easiest general approach is what was said above.  Pick a good threshold distance (dependent on the move speed), and as soon as the character is about to leave the playfield, instantly move it from one side to the other while keeping all other state: inputs, movement speed, etc.  You don't have to wait until it is completely out.  For example, if your character moves at 2 pixels per frame, then when only 2-4 pixels of the character are left inside, that's the point where you switch to the other side, and the position on the other side can likewise leave 2-4 pixels of the character inside.
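A minimal sketch of that threshold-then-teleport idea.  All names (`Player`, `SCREEN_WIDTH`, `MARGIN`) are illustrative assumptions, not from any particular engine:

```python
SCREEN_WIDTH = 320   # playfield width in pixels (assumed)
MARGIN = 4           # wrap when only this many pixels remain inside

class Player:
    def __init__(self, x, width, speed):
        self.x = x          # left edge of the sprite
        self.width = width  # sprite width in pixels
        self.speed = speed  # pixels moved per frame (signed)

    def update(self):
        self.x += self.speed
        # Wrap before the sprite fully leaves the screen, keeping
        # all other state (speed, inputs) untouched.
        if self.x + self.width < MARGIN:       # almost off the left edge
            self.x += SCREEN_WIDTH
        elif self.x > SCREEN_WIDTH - MARGIN:   # almost off the right edge
            self.x -= SCREEN_WIDTH

p = Player(x=318, width=16, speed=2)
p.update()  # crosses the right threshold and reappears on the left
print(p.x)  # prints 0
```

The point is that only the position changes; speed and input state carry over untouched.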


One other idea, if you are using a programming language that lets you create classes/objects, is to create a special "dummy" object.  This object would only do a few things: move a few pixels in a certain direction, draw a certain sprite/animation, and delete itself after a few frames.  The catch to this method is that you probably need a commitment point.  Maybe once the character is halfway out, you commit it to follow through with the motion.  At that point, you create the dummy, give it the character's sprite/frames, speed, etc., and move the real character to the other side.

The point of the dummy object is that you never have more than one instance of your actual character, so if your AI, for example, depends on there being exactly one, you don't have to account for duplicates.  And the dummy deletes itself after however many frames you set, so you don't have to manage it later either.  The downside of this method isn't so much the coding (as long as you understand OOP, or are using a framework that makes this easy, like GMStudio or Unity), but rather the commitment itself.  If your players are used to being able to turn on a dime, you may have to forgo the visual; but if movement is grid-based, or can simply work without instant direction changes, then this version could work.  The technique also generalizes, teaching you to create and use short-lived helper objects for other effects like this.
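The dummy's lifecycle could look like this minimal sketch.  The `Dummy` class and the scene-loop shape are illustrative assumptions, not any engine's actual API:

```python
class Dummy:
    """Short-lived ghost that plays out the exit animation while the
    real character is already on the other side."""

    def __init__(self, x, sprite, speed, lifetime):
        self.x = x
        self.sprite = sprite      # same frames as the real character
        self.speed = speed        # keeps moving in the committed direction
        self.lifetime = lifetime  # frames until it removes itself
        self.alive = True

    def update(self):
        self.x += self.speed
        self.lifetime -= 1
        if self.lifetime <= 0:
            self.alive = False    # the scene loop drops dead objects

# Scene loop: only ever one real character; dummies clean themselves up.
objects = [Dummy(x=316, sprite="run_right", speed=2, lifetime=3)]
for _ in range(3):
    for obj in objects:
        obj.update()
    objects = [o for o in objects if o.alive]
print(len(objects))  # prints 0: the ghost has deleted itself
```

In an engine like GMStudio or Unity the "drop dead objects" step would be the engine's own instance-destroy call rather than a list comprehension.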

In Topic: In terms of engine technology, what ground is left to break?

03 April 2016 - 04:27 PM

I'm kind of with the above posters on this one.  I think the only real thing left to make a game engine better on the graphics side would be full-on ray tracing, similar to what higher-end offline renderers do, but done in real time alongside all of the game code, physics, sound, etc. we have today.  In today's engines, most things are a cheat: light isn't actually bounced around much (if at all, depending on the engine); instead it is approximated in a more general way.  That is a lot of what the above posts mean when they mention indirect lighting.


On the sound side, I expect sound engines to eventually become a full-blown simulation.  I see sound waves being traced from source to listener, using portals/zones to control things, with the waves bouncing around the geometry of the level.  There would be sound from direct waves and sound from reflected waves.  So a single pole or enemy between the source and the listener wouldn't make a difference, because sound bounces around the room, but a large wall in the way would change the character of the sound.  And if there is no opening at all, the source could be two feet away and you either wouldn't hear it (if the wall is soundproof) or, more likely, it would be muffled the way sounds are when they pass through walls.  Volume would still have falloff curves like it does now: some sounds die off quickly with distance, while others lose volume linearly but slowly.  That last bit would determine how far each sound's waves need to be traced.
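The two falloff shapes mentioned at the end can be sketched as simple curves.  This is purely illustrative; real engines expose these as per-sound rolloff settings rather than hand-written functions:

```python
def inverse_falloff(distance, ref_distance=1.0):
    # Loud up close, dies off quickly: volume halves with each
    # doubling of distance beyond the reference distance.
    return min(1.0, ref_distance / max(distance, ref_distance))

def linear_falloff(distance, max_distance=100.0):
    # Loses volume steadily, reaching silence at max_distance.
    return max(0.0, 1.0 - distance / max_distance)

for d in (1, 10, 50):
    print(d, round(inverse_falloff(d), 3), round(linear_falloff(d), 3))
```

At 10 units the inverse curve is already down to 10% volume while the slow linear curve is still at 90%, which is the difference the post describes.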


The biggest improvement I've been impressed with, and that I see becoming a bigger deal over time, is indeed the development pipeline.  We have, for example, UE4 and Blueprints, which isn't comfortable for everyone but seems to be for many.  Development is moving from low-level code toward higher-level "scripting" languages like C#, Lua, and even the GML that GMStudio uses.  I see this trend continuing into the future.


The above paragraph also applies to areas besides game logic.  Graphics tooling is improving as well.  Blender is free and great software.  There are many little "cheats" for modelling that didn't exist before.  I've heard of software that can apply boolean operations to many objects at once (unlike most, which only handle two objects at a time).  There are also things like metaballs that can be used for modelling, and modifiers like whatever flavor of lattice deformation your software has, which let you modify a model without having to move everything separately.  But this only scratches the surface.  I'm still waiting on software that lets you easily model from a picture.  There are currently programs that can do it with several pictures at once, but a few years back I found a video showcasing software that does it quite well with only one picture, plus a few user-drawn lines to tell the program what the object is shaped like.  Check that out here.


Then on the art side, we have software like the Substance suite.  Substance Designer uses nodes to create textures, and from what I have seen (and the bit that I have done), you can get almost anything out of it that you could ever want, and then make easy variants of it as well.  Then there is Substance Painter, which lets us paint directly on the model, not only with whole material brushes but with particle systems.  The video I saw of someone painting a character with a cloth wrap on his face, with blood dripping down and then spreading out around the cloth, was just amazing.  That these things are possible is great, and it helps someone like me, who is not really an artist, do better work than I otherwise could.  Sure, a real artist will still beat me, but I can come much closer this way.


So yeah, I think most of the improvements are going to be in the dev pipeline itself, more so than in the actual capabilities of the engines, though there are still a few things left in that department too.

In Topic: Allegorithmic Substance for Non-Artist, Tools for Amping up Graphics?

02 April 2016 - 06:01 PM

Unity has a Substance engine that I believe is comparable to UE4's.  If the Substance has exposed parameters, you can change them, and if the material is being used as a live Substance (rather than as textures baked from it; not all platforms support the live form yet), you can change those parameters at run time.


I'm no artist, but I can tell you that these programs, if you learn at least some basics, can help you make better graphics, especially combined with pre-made textures (like those from GameTextures.com).  I don't think they will help with "hand-painted" textures like some cartoon styles, because those generally require actual art skill.  But with pre-made textures and Substance Painter you can take a UV layout, mask off the UVs, and apply materials that can look good if done right.  That is more skill with software than with actual art; it's like making textures primarily with Photoshop filters instead of drawing things with the brushes.  It CAN work though, in my opinion.


As for Substance Designer, I think it is also a worthy investment.  In theory it lets you create good textures via the node system without being an artist.  I haven't done much with it, but given time I've been able to make the couple of things I wanted to.


I honestly believe the LIVE subscription is perfect for me.  The $20 per month isn't much for me, a working adult with a wife and three children.  But I did have a bit of a head start, having learned the basics of modelling with Blender3d and the basics of UV mapping, etc., despite not having actual artist skill.

In Topic: picture to texture workflow (especially to height map) ?

20 March 2016 - 11:05 AM

I should mention that, at least for that specific texture, it might be quicker with a program.  The insides of the cement-brick joints appear to be generally the same color, darker than all of the grays of the bricks.  That is generally the best case for creating a height map with an algorithm or with Photoshop.


Also, if you continue with this approach, you will end up needing more detail before you are done, either in the modelling stage or by adding things on somehow.  I'm sure the bricks aren't all exactly the same height either.  This approach is the most time-consuming and difficult one, I'm sure, but it generally gets the best possible results (though maybe not the most bang for the buck, depending on the texture).
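The "quicker with a program" shortcut mentioned above works because the recessed mortar is uniformly darker than the raised bricks, so pixel brightness can stand in for height directly.  A minimal sketch of that idea, with a made-up 2x2 "image" and no image library:

```python
def luminance(r, g, b):
    # Standard Rec. 601 brightness weighting for RGB pixels.
    return 0.299 * r + 0.587 * g + 0.114 * b

def height_map(pixels):
    # Map brightness 0..255 to height 0.0..1.0:
    # dark mortar -> low, light brick -> high.
    return [[luminance(*px) / 255.0 for px in row] for row in pixels]

image = [
    [(40, 40, 40), (200, 200, 200)],   # dark mortar, light brick
    [(200, 200, 200), (40, 40, 40)],
]
h = height_map(image)
print(round(h[0][0], 3), round(h[0][1], 3))  # prints 0.157 0.784
```

Tools like Photoshop or Bitmap2Material do essentially this (plus blurring and contrast controls); it only works well when brightness actually correlates with depth, as it happens to on that brick texture.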

In Topic: picture to texture workflow (especially to height map) ?

13 March 2016 - 12:47 PM

There is another post here talking about exactly that.  The artist takes the picture and actually models the details, then creates the height, normal, and AO maps by baking that model.  There is some post-processing as well, to make the height and normal maps noisier/more detailed.


There are also programs that can do the job automatically.  It looks like you are already using one for the normals.  You could convert that normal map into a height map, I'm sure, though it wouldn't be as good as modelling the detail.  I think Crazy Bump does it, and I'm sure Bitmap2Material from Allegorithmic could as well.