
Community Reputation

173 Neutral

About Ganoosh_

  1. Ganoosh_

    2D geometry outline shader

    It's all at one depth (the depth buffer is disabled as well), but that's an interesting idea. I feel that may be as slow as convolution, though, since it works at the pixel level. I'm going to see if I can apply that at the vertex level somehow. Thanks for the input.
  2. Hello, I want to create a shader to outline 2D geometry. I'm using OpenGL ES 2.0. I don't want to use a convolution filter, since the outline doesn't depend on the texture and it's too slow (I tried rendering the textured geometry to another texture and then drawing that with the convolution shader). I've also tried doing 2 passes, the first drawing single-colored, overscaled geometry to represent an outline, and then normal drawing on top, but this results in different thicknesses or misaligned outlines. I've looked into how silhouettes in cel-shading are done, but they are all calculated using normals and lights, which I don't use at all.

     I'm using Box2D for physics, and have "destructible" objects with multiple fixtures. At any point an object can be broken down (fixtures deleted), and I want the outline to follow the new outer contour. I'm doing the drawing with a vertex buffer that matches the vertices of the fixtures, preset texture coordinates, and indices to draw triangles. When a fixture is removed, its associated indices in the index buffer are set to 0, so no triangles are drawn there anymore.

     The following image shows what this looks like for one object when it is fully intact. The red points are the vertex positions (texturing isn't shown), the black lines are the fixtures, and the blue lines show the separation of how the triangles are drawn. The gray outline is what I would like the outline to look like in any case. The second image shows the same object with a few fixtures removed.

     Is it possible to do this in a vertex shader (or in combination with other simple methods)? Any help would be appreciated. Thanks
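Since the mesh is already a plain index buffer, one CPU-side alternative (not from the thread, just a sketch) is boundary-edge extraction: any edge used by exactly one non-degenerate triangle lies on the outer contour, and this keeps working automatically after fixtures are removed, because their zeroed indices produce degenerate triangles that get skipped. A rough Python illustration, with made-up names:

```python
from collections import Counter

def outline_edges(indices):
    """Collect edges that belong to exactly one triangle.

    `indices` is a flat triangle index list; degenerate triangles
    (e.g. the all-zero ones left by removed fixtures) are skipped.
    """
    counts = Counter()
    for i in range(0, len(indices), 3):
        a, b, c = indices[i:i + 3]
        if a == b or b == c or a == c:
            continue  # degenerate triangle from a removed fixture
        for e in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(e))] += 1
    # An edge shared by two triangles is interior; seen once => boundary.
    return [e for e, n in counts.items() if n == 1]

# Two triangles forming a quad: the shared diagonal (0, 2) is interior.
print(sorted(outline_edges([0, 1, 2, 0, 2, 3])))
# → [(0, 1), (0, 3), (1, 2), (2, 3)]
```

The resulting edge list could then be drawn as thick lines (e.g. GL_LINES or extruded quads) underneath the normal pass; rebuilding it only when a fixture is destroyed keeps the per-frame cost near zero.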
  3. Well, I'd like to get acquainted with ES 1.1 before I jump into 2; I plan on moving to 2 next year and eventually into 3D as well. However, the main reason is that although most users have new enough devices, I personally know a few who don't, my girlfriend being one of them (at least for my first iOS game I don't wanna exclude her from having a copy), and I'm sure there's still a decent enough percentage of users to consider. Besides some effects like these, I wouldn't really be taking advantage of the power of ES 2 right now, so I don't wanna exclude a bunch of users just to make a few small implementations easier. What I may end up doing is running a run-time check and deciding which methods to use based on hardware. As far as I'm aware, I'm still able to use 8 texture units on my device even with ES 1.1, but the older devices are limited to 2.
  4. Still can't figure this out. I've only got 2 texture units to work with, so I'm using 2 render textures to render back and forth, each using the other as the source texture, offsetting the UVs of the 2nd texture unit more each pass. However, adding the color/alpha is not working at all; the only way I can get close to the desired result is by applying some alpha (0.5) as a constant to the first texture unit and using GL_MODULATE for both color and alpha on the 2nd. It basically just looks as if I had transparent copies of the image placed on top of each other, offset a little each time. However, there's no way to get a solid texture this way, since I'm always multiplying the alpha by 0.5. Adding colors usually ends up in a solid black or white, GL_ADD_SIGNED just looks like a mess, and GL_MODULATE seems to be the only option for alpha, as otherwise parts that should be clear end up as some sort of gray. I also can't find the difference anywhere between calling glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); and glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE); I know COMBINE_RGB is the function that affects color, and COMBINE_ALPHA the one for alpha, but what does the overall TEXTURE_ENV_MODE do?
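For what it's worth, GL_TEXTURE_ENV_MODE selects the overall fixed-function mode for a texture unit (GL_MODULATE, GL_REPLACE, GL_COMBINE, ...), and the GL_COMBINE_RGB / GL_COMBINE_ALPHA settings only take effect once that mode is set to GL_COMBINE. The combiner arithmetic itself is simple; a Python sketch of what the hardware computes per component (values clamped to [0, 1]):

```python
def modulate(prev, tex):
    # GL_MODULATE: componentwise multiply — this is why stacked
    # half-alpha passes keep fading (0.5 * 0.5 * ...)
    return tuple(p * t for p, t in zip(prev, tex))

def add_signed(prev, tex):
    # GL_ADD_SIGNED: prev + tex - 0.5, clamped to [0, 1]
    return tuple(min(1.0, max(0.0, p + t - 0.5)) for p, t in zip(prev, tex))

half = (0.5, 0.5, 0.5, 0.5)
print(modulate(half, half))  # → (0.25, 0.25, 0.25, 0.25)
```

This matches the behavior described above: repeated GL_MODULATE with a 0.5 constant can never reach a solid result, and GL_ADD_SIGNED recenters everything around 0.5, which easily looks like a mess on images that aren't mid-gray.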
  5. Thanks for the super quick reply! I had no idea about multitexturing; I'm reading about it now, though it's sort of confusing me. Is there a visual difference between multitexturing and using blend functions? I'm guessing it's much faster and more efficient for texture-level blending, but I'm trying to grasp the concept. For example, say I have 2 textures, each the same size, and a sprite object wrapping each texture, each sprite pointing to the same vertices and texture coords (since they're the same size). If I wanted to draw sprite2 at half transparency exactly on top of sprite1, I would draw sprite1 as normal, turn on alpha blending, set the alpha to 0.5, and then draw sprite2. I'd obviously be rendering the same geometry twice. So for a case like this, multitexturing would basically let me achieve the same result by blending just the textures, rendering the geometry once with one set of vertices and UV coords?

     Edit: So I started looking up texture units and texture combiners, but I'm getting unexpected results. I copied the first sample from here and proceeded to draw my sprite with the usual calls. However, everything else that was previously drawn is just gone, the second texture I point to in the 2nd texture unit has no effect, and despite glPushMatrix and glLoadIdentity, the previous calls from my camera class are still translating the geometry. The code for the multitexturing is as follows:

         glEnableClientState(GL_VERTEX_ARRAY);
         glEnableClientState(GL_TEXTURE_COORD_ARRAY);
         // ... other stuff that used to work ...

         glPushMatrix();
         glLoadIdentity();

         glActiveTexture(GL_TEXTURE0);
         glEnable(GL_TEXTURE_2D);
         glEnableClientState(GL_TEXTURE_COORD_ARRAY);
         glBindTexture(GL_TEXTURE_2D, t1->id());
         // Simply sample the texture
         glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); // I tried changing this to GL_COMBINE, same result
         glTexCoordPointer(2, GL_FLOAT, 0, &(s1->frameBounds));

         glActiveTexture(GL_TEXTURE1);
         glEnable(GL_TEXTURE_2D);
         glEnableClientState(GL_TEXTURE_COORD_ARRAY);
         glBindTexture(GL_TEXTURE_2D, t2->id());
         glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
         // Sample RGB, multiply by previous texunit result
         glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
         glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PREVIOUS);
         glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_TEXTURE);
         glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
         glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
         // Sample alpha, multiply by previous texunit result
         glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_MODULATE);
         glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PREVIOUS);
         glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_TEXTURE);
         glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
         glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA);
         glTexCoordPointer(2, GL_FLOAT, 0, &(s2->frameBounds));

         glClientActiveTexture(GL_TEXTURE0);
         glTranslatef(3.0f, 3.0f, 0.0f);
         glVertexPointer(2, GL_FLOAT, 0, &(s1->vert()));
         glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

         glDisableClientState(GL_VERTEX_ARRAY);
         glDisableClientState(GL_TEXTURE_COORD_ARRAY);
         glPopMatrix();

     Edit 2: I finally figured it out. [s]Apparently glActiveTexture is only for glBegin/glEnd calls, which aren't available on iOS; glClientActiveTexture must always be used.[/s] glActiveTexture is needed for glEnable/glDisable on iOS; glClientActiveTexture is used for vertex pointers etc., like the other glClient functions. I needed to call both glActiveTexture and glClientActiveTexture together; the reason changing it seemed to fix things was that before I wasn't setting GL_TEXTURE_COORD_ARRAY on the second texture unit, but both calls are needed for the desired result. Hope this helps anyone else trying to understand this. Now it's just time to learn how to use the combiners. Thanks for pointing me in the right direction, Hodgman.
  6. Hello, I've recently jumped over to iOS, using OpenGL ES 1.1, for 2D games, after mainly using Flash for the past few years. My engine in Flash included a ghosting effect very similar to the examples on this page: http://www.flashandmath.com/intermediate/ghost/ I used bitmap blitting overall, and could simply draw everything I wanted "ghosted" to an offscreen bitmap buffer, apply Flash's built-in blur and color filters, and then render this buffer to the screen. By not clearing the offscreen buffer and changing the filter parameters, many blur/glow-like effects and ghost trails could be achieved.

     I want to implement the same thing using OpenGL ES 1.1, but I have no pixel shader access, no accumulation buffer, and of course no simple filters to do the blurring for me. From what I've read online, it seems I'll have to keep an extra texture for each texture I want blurred/glowed at some interval. Applying a dynamic blur per frame would probably be too slow, and the blur/glow amount wouldn't change very often for a single sprite anyway. The upside is that if I just want a specific sprite to glow without any trails, I just point to a different texture in memory (assuming it was already created).

     I figure I can achieve the motion trails by simulating an accumulation buffer and fading it out over time: if I use a render texture the size of the screen, draw the blurred/glowed objects, apply some alpha blending, and avoid clearing it each frame, I'd get the same effect. Then I just draw that whole texture to the renderbuffer. I'm curious about the performance of this, but the only way to find out is to go for it.

     What I'm most confused about is how to achieve the initial glow/blur on a texture. I've read that drawing an object onto a scaled-down texture, then using that texture with the same vertices, would do it. However, when I do this it just looks like a blown-up image with cheap interpolation; not what I was looking for. I know how to blur an image pixel-by-pixel or using a shader, but I don't have that option. How can I blur/glow a texture using core OpenGL, as quickly as possible? And if you have a better idea for an accumulation-style buffer, post away! Thanks in advance
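The fade-without-clearing idea can be reasoned about numerically: if the accumulation texture is faded once per frame (for example by blending a translucent clear-colored quad over it), every ghost decays geometrically. A small sketch of that math, with illustrative numbers not taken from the post:

```python
def trail_weight(fade_alpha, frames_ago):
    """Weight of a ghost drawn `frames_ago` frames back, when a
    simulated accumulation texture is faded by blending a translucent
    clear quad at `fade_alpha` once per frame."""
    return (1.0 - fade_alpha) ** frames_ago

# With a 10% fade per frame, a ghost is still ~35% visible 10 frames later.
print(round(trail_weight(0.1, 10), 3))  # → 0.349
```

This is handy for tuning: picking the per-frame fade alpha directly controls how long the trail appears to be, e.g. a 25% fade kills a ghost to under 6% visibility within 10 frames.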
  7. I just released my first Facebook app, a small Flash arcade-style game called Space Climb. If you have Facebook, check it out! Don't forget to post your high scores and challenge your friends! Check it out here: http://apps.facebook.com/spaceclimb/ As well, I'm looking for feedback from the gamedev community. Problems or suggestions, please post up! The controls are very sensitive and take a bit of getting used to. I made it this way to increase difficulty and add more pep to the game; however, some people think it would be better and more attractive if they were less sensitive. Also, I've had some suggestions of adding checkpoints, so you don't have to start over every time you die, but I feel that may defeat the purpose of the game. What do you guys think? Thanks for looking!
  8. Ganoosh_

    Warping spread effect

    I see what you're saying. I'll try that. Thanks man
  9. Ganoosh_

    Warping spread effect

    It's transparent; it just shows up white because of the background. It's basically being split. The "pushing" effect didn't come out too well; it was just a quick Photoshop. Here's a better picture of what I mean: http://i80.photobucket.com/albums/j178/Ganoosh1/test3.png
  10. OK, I'm gonna post pictures because it will be a lot easier to explain. This will be done on a bitmap. Basically I have a bitmap for the background, and any time an object moves, it creates an effect that spreads the pixels of the background away from it, leaving a sort of trail. How would I go about doing this? And it needs to be fast, since it will be done on multiple objects per frame. Pictures are below; thanks in advance. Here is what it looks like normally: http://i80.photobucket.com/albums/j178/Ganoosh1/test.png And here is what it would look like after the object passes by: http://i80.photobucket.com/albums/j178/Ganoosh1/test2.png
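One way to get this kind of spread (an assumption on my part; the thread doesn't settle on a method) is an inverse-mapped radial displacement: each destination pixel near the object samples a source pixel closer to the object's centre, so the background looks pushed outward with no holes. A Python sketch on a plain 2-D list standing in for the bitmap, with made-up names and constants:

```python
import math

def spread_pixels(src, cx, cy, radius, strength):
    """Push background pixels away from (cx, cy): each destination
    pixel inside `radius` is filled from a source pixel nearer the
    centre (inverse mapping avoids holes). `src` is a 2-D list."""
    h, w = len(src), len(src[0])
    out = [row[:] for row in src]
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            d = math.hypot(dx, dy)
            if 0 < d < radius:
                f = strength * (1.0 - d / radius)  # falls off with distance
                sx = int(round(x - dx * f))
                sy = int(round(y - dy * f))
                if 0 <= sx < w and 0 <= sy < h:
                    out[y][x] = src[sy][sx]
    return out

grid = [[y * 5 + x for x in range(5)] for y in range(5)]
out = spread_pixels(grid, 2, 2, 4.0, 1.0)
print(out[2][4])  # → 13: the pixel one step nearer the centre
```

A per-pixel loop like this would be the slow path in Flash; precomputing the displacement offsets per object shape, or using BitmapData's built-in filters, would be the practical route, but the mapping itself is the core of the effect.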
  11. Pretty much what the title says. I'm using all bitmaps with a double buffer, and copyPixels to a BitmapData object every frame for drawing, for speed and simplicity of code. All's working well, but I can't figure out how to copy a bitmap to the buffer at a lowered alpha or with a similar effect. Basically I have another BitmapData for a dynamic background image. I want to draw trails and other effects onto it with a sort of "burn-in" effect (they're going to stay there and build up over time). So for example, for a bullet: every frame it would take its bitmap at its location, lower its alpha and color, and copy it to the background image. But I can't figure out how to change the color and alpha. The only examples I could find make a Bitmap from the BitmapData, add it to a movie clip, apply the filters and such to the clip, redraw the bitmap from the movie clip, and THEN copy it to the background/buffer/etc. This seems like way too much code and overhead for such a simple effect, is probably slow as hell from incorporating the MovieClip, and almost defeats the purpose of using bitmaps. There HAS to be another way. Any help? Thanks in advance.
  12. Ganoosh_

    Fake 3D?

    Are there any articles on the fake-3D systems used in games like Space Harrier or Trail Blazer? What exactly is this technique called? I think I have an idea of how to implement it, but I would like some outside sources. Any input is welcome. Thanks
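The usual name for this is "sprite scaling" (often filed under pseudo-3D; Space Harrier ran on Sega's "Super Scaler" arcade hardware): sprites are drawn at a size and screen position proportional to 1/distance, converging on a vanishing point. A minimal sketch of that projection, with made-up constants:

```python
def project(x, y, z, focal=256.0, horizon=100.0, cx=160.0):
    """Perspective-scale a sprite for sprite-scaling pseudo-3D:
    size and screen offset both shrink as 1/z.  z = 0 is the
    screen plane; (cx, horizon) is the vanishing point."""
    scale = focal / (focal + z)
    sx = cx + x * scale       # slide toward the vanishing point with depth
    sy = horizon + y * scale
    return sx, sy, scale

# An object one focal length away appears at half size, halfway
# toward the vanishing point.
print(project(100.0, 50.0, 256.0))  # → (210.0, 125.0, 0.5)
```

Drawing sprites back-to-front with these scales, plus a scrolling checkerboard ground using the same 1/z rule per scanline, reproduces the basic look.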
  13. Ganoosh_

    Taught rope

    Quote (original post by gpu_fem): "I am writing some code / writeup using hard constraints to solve for rope physics instead of springs - need a few more hours though. If there is significant interest (and novelty) I can prepare it to a full blown tutorial." That would be awesome, man. That would probably be the easiest way for me without redesigning everything I have from the ground up, haha.
  14. Ganoosh_

    Taught rope

    OK, so I have a 2D rope simulation implemented using springs, very similar to the NeHe tutorial. It works well, except it's too slack: it moves too slowly and can stretch way too far. I have the spring constant k as high as it can go before the simulation goes crazy. I tried increasing the mass of the segments, but that just slows everything down until I increase k and friction again as far as they'll go, and it ends up pretty much where it was. I'm just wondering how I can make a really TIGHT rope. Is there some special balance I need between the mass, k, and friction, or do I need to add something else? Basically I want it so that if the rope has length 10 at equilibrium, it won't stretch any further than, say, 14. Thanks in advance
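An alternative to cranking k (this is the "hard constraints" approach mentioned in the reply above, in the style of Jakobsen's "Advanced Character Physics", not the NeHe spring method) is to project positions directly: after integrating, repeatedly snap each segment back toward its rest length. Stretch is then bounded by the iteration count rather than a stiffness constant. A Python sketch with illustrative names:

```python
import math

def satisfy_constraints(points, rest_len, iterations=10, pinned=(0,)):
    """Position-based distance constraints: after integration,
    repeatedly pull each segment's endpoints back toward rest length.
    Indices in `pinned` (e.g. the anchor) never move."""
    for _ in range(iterations):
        for i in range(len(points) - 1):
            (x1, y1), (x2, y2) = points[i], points[i + 1]
            dx, dy = x2 - x1, y2 - y1
            d = math.hypot(dx, dy) or 1e-9
            diff = (d - rest_len) / d
            # Each free endpoint takes half the correction.
            if i not in pinned:
                points[i] = (x1 + 0.5 * diff * dx, y1 + 0.5 * diff * dy)
            if i + 1 not in pinned:
                points[i + 1] = (x2 - 0.5 * diff * dx, y2 - 0.5 * diff * dy)
    return points

points = [(0.0, 0.0), (3.0, 0.0)]       # one segment stretched to 3x rest length
satisfy_constraints(points, rest_len=1.0)
print(points[1])                         # pulled back very close to (1.0, 0.0)
```

Run inside a Verlet integrator this replaces the springs entirely; more iterations give a stiffer rope, and a hard cap like "never past 1.4x rest length" can be enforced by clamping `d` before computing `diff`.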
  15. OK, I've been searching for hours, trying a few things, but can't seem to get any of them to work. I'm just looking for a simple gravitational pull between 2 masses: mass 1 has the pull, and mass 2 is pulled towards it. Mass 1 is just an object that follows the mouse; it has no direction or speed, just a location. The other (mass 2) has a location, direction vector, and speed of its own. However, I want mass 2 to somewhat retain its original velocity: the further away it is from the gravity source, the less the source pulls on its current direction and speed (rather than mass 2 constantly being pulled directly towards mass 1). Mass 1 also has a limited gravitational field, so that the gravity won't affect mass 2 if mass 2 is too far away. To put it in simple terms: you move mass 1 close to mass 2 to alter its velocity to the desired one (using the gravitational pull), then dart away so that mass 2 is no longer affected by gravity and continues with its new velocity. If you don't move, it will keep accelerating towards mass 1 until a collision causes death or something. :) This is for an arcade game, so realism is not the biggest concern; the simpler the equation(s), the better. Thanks in advance
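A simple formulation that matches this description (function names and constants are mine, purely illustrative): an acceleration toward mass 1 that fades linearly to zero at the edge of the field, added on top of mass 2's existing velocity, so distant objects are untouched:

```python
import math

def gravity_pull(px, py, mx, my, vx, vy, pull=400.0, radius=120.0):
    """Return mass 2's updated velocity. (px, py) is mass 2,
    (mx, my) is mass 1 (the mouse object). Strength fades linearly
    to zero at `radius`, so far-away objects keep their velocity."""
    dx, dy = mx - px, my - py
    d = math.hypot(dx, dy)
    if d == 0 or d >= radius:
        return vx, vy                    # outside the field: untouched
    a = pull * (1.0 - d / radius) / d    # accel per unit offset, fades with d
    return vx + a * dx, vy + a * dy

# Outside the field nothing changes; well inside, velocity bends hard
# toward mass 1.
print(gravity_pull(0.0, 0.0, 200.0, 0.0, 1.0, 0.0))  # → (1.0, 0.0)
```

Calling this once per frame (scaled by the timestep in a real game) gives exactly the fly-by steering described: swoop mass 1 close to bend mass 2's path, then pull it outside `radius` to let mass 2 coast on its new velocity.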