
Member Since 10 Sep 2011
Offline Last Active Mar 21 2014 04:56 PM

#4965940 Using different kinds of art files in game development (noob question)

Posted by on 03 August 2012 - 02:18 PM

.BMP is typically not used for much in game development, due to its lack of an alpha channel. .PNG is more useful in that regard.

The image you posted is an example of what is often called a sprite sheet. It is intended to be "split up" (not literally) into frames. A frame can be thought of as a sub-rectangle of the image that encloses one part of the explosion. If the frames are displayed in sequence, one after another, at a rapid enough rate, the illusion of motion is created. With a sprite such as your explosion above, you will need an alpha channel (or some other method of removing all the pixels that are not part of the explosion), hence the recommendation of .PNG, .TGA, or some other format that supports alpha. The alpha channel lets the background show through the drawn sprite wherever a pixel is not part of the explosion.
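To see why the alpha channel matters, here is the standard "over" blend that renderers apply per pixel; this sketch is in Python for illustration only, and the function name is mine, not from any particular API:

```python
def blend_pixel(src, dst, alpha):
    """Standard 'over' alpha blend of one source pixel onto one destination
    pixel. Where alpha is 0 the background (dst) shows through untouched;
    where alpha is 1 the sprite (src) fully covers it."""
    return tuple(s * alpha + d * (1 - alpha) for s, d in zip(src, dst))
```

A fully transparent explosion pixel (alpha 0) leaves the background color unchanged, which is exactly the "show through" behavior described above.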

Now, you don't literally split the image up. Instead, in your game you conceptually split it up by mapping each bit or piece (sub-rectangle) to a sprite frame. When a given sprite frame is drawn to the screen, it only draws the particular sub-rectangle associated with that frame. How you accomplish this exactly depends highly upon your graphical API. An animation will typically store some sort of list of frames to display in sequence.
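The frame-mapping idea above can be sketched in a few lines; this is a Python illustration (the names and the left-to-right, top-to-bottom frame layout are assumptions, since sheet layouts vary):

```python
def frame_rect(index, frame_w, frame_h, sheet_w):
    """Sub-rectangle (x, y, w, h) of frame `index` in a sprite sheet laid
    out left-to-right, top-to-bottom."""
    cols = sheet_w // frame_w
    col, row = index % cols, index // cols
    return (col * frame_w, row * frame_h, frame_w, frame_h)

class Animation:
    """Steps through a list of frame indices at a fixed rate."""
    def __init__(self, frames, frame_time):
        self.frames, self.frame_time = frames, frame_time
        self.elapsed = 0.0

    def update(self, dt):
        self.elapsed += dt

    def current_frame(self):
        # Which frame are we on, looping back to the start when done?
        i = int(self.elapsed / self.frame_time) % len(self.frames)
        return self.frames[i]
```

Each draw call would then pass the rectangle from `frame_rect(anim.current_frame(), ...)` to whatever sub-image blit your graphics API provides.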

#4961330 Which software to produce 2D art like Angry Birds?

Posted by on 20 July 2012 - 08:36 AM

I didn't recognize it offhand. It is entirely possible they are using something developed in-house. That said, you could use Adobe Illustrator to create those kinds of graphics. For an open-source alternative, try Inkscape.

#4960645 textures textures textures :)

Posted by on 18 July 2012 - 02:47 PM

.DDS is something of an industry standard; it supports compression and pre-computed mipmaps. Other than that, I see .TGA widely used, as well as .PNG.
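As a side note on the mipmaps a .DDS file can store: each level halves the previous level's dimensions (rounding down, clamped at 1) until a 1x1 level is reached. A quick sketch of the chain, in Python for illustration:

```python
def mip_chain(width, height):
    """Dimensions of every mip level from the base texture down to 1x1.
    Each level halves the previous one (integer division, minimum 1)."""
    levels = [(width, height)]
    w, h = width, height
    while w > 1 or h > 1:
        w, h = max(1, w // 2), max(1, h // 2)
        levels.append((w, h))
    return levels
```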

#4960224 Procedural drawing language

Posted by on 17 July 2012 - 05:05 PM

For inspiration, you could look at these blog posts at Procedural World:


Granted, they deal with procedural construction of geometry, but the idea is the same. The author of that blog constructed for himself a grammar, or language, to describe the generation of architecture.

The idea of it is pretty simple. You have a number of primitives, or commands, that describe certain aspects of the rune tree or structure. Some commands actually generate a primitive, some deal with positioning and orientation, etc...

For example, say you have the basic primitive line. The line extends from the current position to the position (current.x+1, current.y). Now, the line can also have a pair of modifiers: scale and spin. Scale, of course, scales the length of the line segment, and spin rotates it around its origin point. This, then, suggests a very simple grammar for drawing connected sequences of lines:

line { spin 30, scale 1 }
line { spin 10, scale 3 }

The evaluation of such a grammar would require storing some state (the current position) and stepping through the "program" described by the grammar. In the above, you start at (0,0) and call line() with spin=30, scale=1. line() will calculate the requisite end-point, draw a line between the current and endpoint, then set current=endpoint before returning. The next call to line() will use the previous endpoint to draw another line relative to the first. And so forth.

Grammars can get pretty complex, but their expression in a language such as Lua is relatively straightforward.
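A minimal interpreter for that kind of grammar might look like the following; this is a Python sketch (the command representation, and the choice to make spin relative to the previous heading, are illustrative decisions, not the only way to do it):

```python
import math

def run_lines(program, start=(0.0, 0.0)):
    """Evaluate a list of line commands, carrying the current position and
    heading as state, and return every point visited."""
    x, y = start
    heading = 0.0  # degrees; spin accumulates relative to the last line
    points = [(x, y)]
    for cmd in program:
        heading += cmd.get("spin", 0.0)   # rotate about the current point
        length = cmd.get("scale", 1.0)    # the base line is one unit long
        x += length * math.cos(math.radians(heading))
        y += length * math.sin(math.radians(heading))
        points.append((x, y))             # current position becomes endpoint
    return points
```

Drawing is then just connecting consecutive points; the same state-carrying loop works whether the program comes from a Lua table, a parsed text file, or hard-coded data.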

#4958571 Procedural map building... combining techniques to get nice features

Posted by on 12 July 2012 - 03:59 PM

Fractal noise methods in 2D to give a rough, basic elevation map.

Erosion simulation + Rainfall simulation to model the shaping of the base landform in a realistic fashion, plus a reasonable approximation of water flow.

Simulation of temperature zones, probably some combination of gradient banding North-to-South, with turbulence added by elevation.

Simulation of weather patterns; can be derived from the rainfall simulation earlier. In this stage, you generate a "moisture map" to approximate a given cell's moisture level.

Vegetation determination. For each cell in the map, assign a set of data describing the type of vegetation likely to be found there, based upon temperature, moisture, elevation, steepness, rockiness, etc...

If a location is determined to have a tree, put a tree there. Find the bounding box based on the available space at the location (this requires a scan through the surrounding area to check for blocking geometry), then generate a tree within that space. The Procedural World blog has a couple of entries on space colonization for generating trees that you might look into. That algorithm is fairly well suited to "fitting" a tree into whatever space is available, as long as you craft your generation routines carefully.

You can see from this thoroughly non-comprehensive list that it requires a mix of techniques. Your requirement of being able to zoom can throw a monkey wrench into the works. Implicit methods such as fractal noise are well suited to zooming; simulation methods such as erosion and vegetation placement are not, due to the need to perform processing "passes" over chunks of data. You can, however, perform the generation at the finest level of detail and down-sample for the coarser zoom levels.
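For the first step, the fractal noise used for the base elevation map is usually fractional Brownian motion (fBm): several octaves of a smooth noise function summed at increasing frequency and decreasing amplitude. A self-contained Python sketch using simple value noise (the hashing constants and parameter defaults are arbitrary choices):

```python
import math
import random

def value_noise(x, y, seed=0):
    """Smoothly interpolated lattice noise: one deterministic pseudo-random
    value per integer lattice point, blended with a smoothstep fade."""
    def lattice(ix, iy):
        # Seed a throwaway RNG from the lattice coords for a repeatable value.
        return random.Random((ix * 73856093) ^ (iy * 19349663) ^ seed).random()
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    sx = fx * fx * (3 - 2 * fx)  # smoothstep fade curves
    sy = fy * fy * (3 - 2 * fy)
    top = lattice(ix, iy) * (1 - sx) + lattice(ix + 1, iy) * sx
    bottom = lattice(ix, iy + 1) * (1 - sx) + lattice(ix + 1, iy + 1) * sx
    return top * (1 - sy) + bottom * sy

def fbm(x, y, octaves=4, lacunarity=2.0, gain=0.5):
    """Fractal noise: sum octaves at rising frequency and falling amplitude,
    normalized back into [0, 1] for use as an elevation value."""
    total, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amplitude * value_noise(x * frequency, y * frequency)
        norm += amplitude
        amplitude *= gain
        frequency *= lacunarity
    return total / norm
```

Because `fbm` is implicit (evaluated independently at every point), it is exactly the kind of method that zooms cleanly, unlike the later simulation passes.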

It's a non-trivial task you've set yourself. Best thing I can say is just keep experimenting and trying stuff.

#4955416 procedural map generator for sidescroller

Posted by on 03 July 2012 - 01:23 PM

The actual generation isn't hard; there are dozens of tricks. The chief area of concern, though, is the same as with a top-down procedurally generated game: ensuring the level is navigable by the player. This task is somewhat more difficult in a side-scroller than in a top-down game, since the movement mechanics of jumping are so different from simple pathfinding on a top-down map. The generator has to produce a level that is 1) interesting, 2) suitable to the gameplay, and 3) fully navigable given the abilities the player character has, whether that includes jumping, grappling hooks, jet-packs, or what have you. The physics-based nature of movement in a side-scroller makes analyzing a level for navigability a fairly complex problem.

Many games will circumvent the whole issue by allowing the player to alter the world, via digging and whatnot, which certainly does reduce the complexity of the problem but also forces the game into a certain "Minecrafty" style of play that may not be suitable for some gameplay tropes.

#4953021 Test point inside triangle, 3D space

Posted by on 26 June 2012 - 08:17 AM

You know how to calculate the barycentric coordinates (s,t) of a point relative to a triangle, right? Well, if s>0 and t>0 and s+t<1 then the point lies within the triangle.
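For reference, one common way to compute (s, t) uses dot products of the triangle's edge vectors; a Python sketch (it assumes the point already lies in the triangle's plane and the triangle is non-degenerate):

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (s, t) of point p relative to triangle
    (a, b, c), where s weights edge a->b and t weights edge a->c.
    Assumes p is in the triangle's plane and the triangle is non-degenerate."""
    def sub(u, v): return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
    def dot(u, v): return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]
    v0, v1, v2 = sub(b, a), sub(c, a), sub(p, a)
    d00, d01, d11 = dot(v0, v0), dot(v0, v1), dot(v1, v1)
    d20, d21 = dot(v2, v0), dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    s = (d11 * d20 - d01 * d21) / denom
    t = (d00 * d21 - d01 * d20) / denom
    return s, t

def point_in_triangle(p, a, b, c):
    # Strictly inside, per the test above; edges and corners are excluded.
    s, t = barycentric(p, a, b, c)
    return s > 0 and t > 0 and s + t < 1
```

In practice you may want a small epsilon on the comparisons if points on the edges should count as inside.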

#4952790 Best 3d modelling software for beginners?

Posted by on 25 June 2012 - 02:51 PM

Hi All!

So I recently asked a question, "UDK or Unity, the best engine for beginners?" As I explored Unity a little more, I realized I'll need 3D modelling software. Autodesk offers free software, so I spent 2 hours downloading Autodesk Maya 2013 and hit a corrupted-files message during installation, so I figured I'd start over and work out which one is best.

So for a beginner, which 3D modelling software do you think is the easiest, most helpful, and has some pretty good features?

I've heard of a few:

3ds Max

(Getting sidetracked: did anyone else try to download Maya 2013, get a corrupted-files message, and have to redownload?)

Thank you for all the info, as you help my journey as an indie game dev greatly!

Maya is good. I recommend you contact Autodesk support about your problem.

#4951423 How to use gluUnProject()?

Posted by on 21 June 2012 - 10:49 AM

If your quad is M by N pixels in size, and your mouse is at screen coords (mx, my), then the corners of the quad in screen space are (mx-M/2, my-N/2), (mx+M/2, my-N/2), (mx-M/2, my+N/2), (mx+M/2, my+N/2). Simple as that.
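In code, with hypothetical names, that corner calculation is just:

```python
def selection_quad_corners(mx, my, m, n):
    """Screen-space corners of an M-by-N pixel quad centered on the mouse,
    ordered top-left, top-right, bottom-left, bottom-right."""
    hx, hy = m / 2, n / 2
    return [(mx - hx, my - hy), (mx + hx, my - hy),
            (mx - hx, my + hy), (mx + hx, my + hy)]
```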

Really, if this is so hard for you to understand, I recommend that you take a break from this and work through some 3D math primers. You are going to need at least a halfway decent grasp on this math if you want to succeed in what you are attempting without having to rely on a huge amount of hand-holding.

#4951393 How to use gluUnProject()?

Posted by on 21 June 2012 - 09:52 AM

Ah, okay, I see.

If it were me, I would draw the selection quad during the UI phase instead of trying to draw it in world space. The UI phase happens after rendering the 3D scene and is typically done with an orthographic projection. This way, the quad is drawn in screen space, overlaid on the scene, and you don't have to worry about reverse projection into the world and all of the clipping and z-fighting that may occur.

Once the quad selection is done, you can un-project the screen coordinates of the four corners of the quad into world space, once with a depth of 0 and once with a depth of 1 (thus two unprojections per quad corner). The result is a set of 8 points describing the volume of the scene covered by the on-screen selection quad. You can then test geometry against this volume to see whether it is selected.

#4950994 How to use gluUnProject()?

Posted by on 20 June 2012 - 09:22 AM

Looks better. Now, on to the problem at hand.

The way the depth buffer works is that when geometry is rendered, it is processed as fragments (pixels) and written to the screen. Part of each fragment's data is its depth in the scene, i.e. its distance from the near clip plane. This depth is written such that fragments at the near plane have a depth of 0, fragments at the far clip plane have a depth of 1, and fragments in between, obviously, have a depth in the range (0,1).

When glClear is called with the depth buffer bit set, the depth buffer is cleared to 1. This way, fragments with depth less than 1 will be correctly written. If no fragments are written before calling glReadPixels, any call to glReadPixels will return 1, because that is what the depth buffer was cleared to.

Now, since 1 corresponds to the far clip plane, when you call gluUnProject, the Z value returned will be at the far extent of what is visible in the scene. The quad will be drawn essentially co-planar with the far clip plane. Floating-point precision being what it is, this might result in the quad sometimes being visible and sometimes not. This may or may not be your issue, but remember that un-projecting against a freshly cleared depth buffer, with no geometry rendered, rarely gives you any kind of meaningful result.

Un-projecting a window coordinate essentially nets you a line segment bounded by the near and far planes. The depth value is used to get a specific point on that line segment, and if the depth value is not meaningful, the result won't be meaningful either.

Edit: I missed the fact that you are calling glCallList. How are you building your list? Are you enabling depth writes? Is your depth test function set up correctly? Also, are you trying to render this as a decal on your cube? Because if that is the case, this isn't really the right way to do it. In that case, you would want to un-project with winZ=0 and winZ=1, calculate the intersection of the resulting line segment with the geometry to find the point of contact, then construct a quad co-planar with the intersected face(s) and draw the decal there, probably offset slightly out from the face to prevent Z-fighting.

If you aren't trying to do a decal, then what you are going to end up with is a quad that is centered at some point on the cube's surface, with a given orientation. Depending on how the cube is oriented, this quad could be fully visible, partly visible, intersecting the face, co-planar with the face, etc... If co-planar, then you would probably get Z-fighting artifacts at some point, leading to flickering.

It might help to say exactly what you are trying to do.

#4950959 How to use gluUnProject()?

Posted by on 20 June 2012 - 08:10 AM

lol. Look again. You're not calling gluUnProject anywhere in that function. I see you do call glReadPixels to get the depth buffer value, but at least in the posted code you never call gluUnProject.

#4950742 persistent procedural voxel trees

Posted by on 19 June 2012 - 03:35 PM

Trees are tricky to do in 3D with noise functions. In 2D you can build a vegetation map easily enough, because placement of a tree happens only on a 2D plane, but in 3D you have to take other factors into account: headroom, light, etc. These kinds of placement systems are difficult to model through pure Perlin noise alone, as you may be discovering.

Perlin noise is an example of an implicit function, i.e. a function you can evaluate at any given point without reference to external state. The tree itself could easily be modeled this way (perhaps a truncated cone for the trunk and a blob of flattened spheres for the leaf lobes, all distorted with just the slightest amount of noise to give it an organic shape), but the placement of the tree will probably require other means.

To do the placement pass, you will need to analyze the geometry of the world after it has been generated and find likely places to put a tree. Candidates include places near water, places with sufficient sunlight, places with sufficient head-room to grow, places with sufficient surface area, etc. You can construct an algorithm to find such locations, then randomize the placement of trees among them.

Doing the placement explicitly in this manner lets you test which chunks the tree's bounding box intersects and tag the tree to each of them, so it can be loaded as needed by any chunk it touches.
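A simplified version of that placement pass over a 2D heightmap might look like this; it is a Python sketch, the criteria are reduced to just "flat enough with some clearance", and all names and thresholds are illustrative:

```python
import random

def find_tree_sites(height, min_clearance=2, max_slope=0.5,
                    density=0.1, seed=42):
    """Scan a 2D heightmap (list of rows) for cells whose neighborhood is
    flat enough to hold a tree, then randomly thin the candidates to the
    requested density so trees don't appear at every valid cell."""
    rng = random.Random(seed)  # seeded, so placement is repeatable
    rows, cols = len(height), len(height[0])
    c = min_clearance
    sites = []
    for y in range(c, rows - c):
        for x in range(c, cols - c):
            neighborhood = [height[y + dy][x + dx]
                            for dy in range(-c, c + 1)
                            for dx in range(-c, c + 1)]
            # Flat enough if the local elevation spread is small.
            if max(neighborhood) - min(neighborhood) <= max_slope:
                if rng.random() < density:
                    sites.append((x, y))
    return sites
```

A real 3D version would add the other candidate checks (headroom, light, proximity to water) as further predicates inside the same scan.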

#4949252 Attack Scripting?

Posted by on 14 June 2012 - 01:26 PM

Well, first off you wouldn't want your chain lightning effect script to actually handle drawing the lightning bolt. That responsibility belongs elsewhere (in the rendering system, however that is structured). The script would merely have access to functionality that would allow the spawning of effects.

Second, the chain lightning effect script shouldn't have the responsibility of displaying combat floating text. That also should be handled elsewhere.

Third, the chain lightning effect shouldn't be setting Target state, such as hurt. What if the Target is lightning immune? Why should the effect even care?

Here are the things the chain lightning effect should be responsible for: spawning a visual effect, generating the damage value, and handing some sort of ApplyDamage(DmgValue) message off to the target object. How the Target reacts to that message (is it immune? Does it take double damage? does it cause the target to put on a party hat and dance a little jig?) is all entirely up to the Target.

Complex effects can simply be split up into various smaller, simpler effects.
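The division of responsibility described above can be sketched like this; it is a Python illustration, and every name (the message method, the damage formula, the `spawn_effect` callback) is a stand-in, not a real engine API:

```python
class Target:
    """A target owns its own reaction to damage messages."""
    def __init__(self, immunities=()):
        self.hp = 100
        self.immunities = set(immunities)

    def apply_damage(self, amount, dmg_type):
        # The effect never touches this logic; immunity, resistance,
        # or party-hat jigs are entirely the target's business.
        if dmg_type in self.immunities:
            return 0
        self.hp -= amount
        return amount

def chain_lightning(caster_power, targets, spawn_effect):
    """The effect's only jobs: request a visual, roll a damage value,
    and hand an apply-damage message to each target."""
    for target in targets:
        spawn_effect("lightning_bolt", target)  # rendering system draws it
        damage = caster_power * 2               # hypothetical damage formula
        target.apply_damage(damage, "lightning")
```

Notice the effect never draws anything, never shows floating text, and never sets target state directly; it only sends messages.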

#4949178 Attack Scripting?

Posted by on 14 June 2012 - 09:32 AM

Yeah, don't roll your own. If you're in this to make a game, why waste time writing a language instead? Languages are complex, and perfecting them is a lengthy process. Lua is lightweight, robust, well tested, and under very active development. Not to mention easy as balls to use.