TwistedPrune

Members
  • Content count

    10
Community Reputation

141 Neutral

About TwistedPrune

  • Rank
    Member
  1. best way to handle layered sprite colouring

    Yes, it might be simpler to just copy the image. The only thing is that I thought it might also be nice to be able to do more dynamic things with the mask, like with spell effects for example. You could have a barkskin spell that changes the colour of the skin only, or a burning blood spell that gives the skin a pulsing glow. (I'm not sure how this would look in practice.)
  2. Hi, I'm thinking about the best way to handle sprites with user-selected colours. This is most commonly seen in an RPG where the player can choose, for example, skin and hair colour and perhaps one or two clothing colours as well. So clearly the starting point is a layered image, but I see a couple of ways to go from there. We could just store the layers separately and draw them one at a time with the appropriate colour. Or we could try to do something clever with the fragment shader: say we bind a greyscale texture as a layer map and use it to index into an array of colours passed as a uniform. This sounds simple enough, but I have a very poor feel for whether it would actually perform better. I suppose I should just try it. One thing I don't like about it is that it would be nice to use the same shader to draw things like spell particles and projectiles, which are very unlikely to have multiple layers, making this somewhat wasteful.

Also, I was looking into how the old Infinity Engine games store their character sprites and see something interesting: [sharedmedia=gallery:images:8306] It looks like they avoid storing the pixel layer map separately by encoding it in the image hue values: the image has around 6 different hue values that are all multiples of 60, or very close. If the user is choosing a colour, I suppose that means the sprite itself is basically providing brightness and saturation values, is that correct? Although I see the image is mostly completely saturated, only varying around some highlights. I don't understand colour that well, but in any case, what do you think of this idea?
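If anyone wants to play with the hue-remapping idea, here's a rough CPU-side sketch in Python. The layer names, the 60-degree hue slots and the `recolour_pixel` helper are all made up for illustration: the stored hue identifies the layer, the sprite keeps its own saturation and brightness, and the user-chosen hue is swapped in.

```python
import colorsys

# Hypothetical layer palette: each stored hue slot (degrees) names a layer.
LAYER_HUES = {0: "skin", 120: "hair", 240: "cloth"}

def recolour_pixel(r, g, b, target_hues):
    """Replace the stored hue with a user-chosen one, keeping the
    sprite's own saturation and value (brightness).
    target_hues maps a stored hue slot (degrees) to a new hue (degrees)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    # Snap to the nearest 60-degree slot, as the Infinity Engine sprite
    # appears to do.
    stored = round(h * 360 / 60) * 60 % 360
    new_h = target_hues.get(stored, stored) / 360
    nr, ng, nb = colorsys.hsv_to_rgb(new_h, s, v)
    return round(nr * 255), round(ng * 255), round(nb * 255)
```

So a pure red skin pixel recoloured with a green target hue becomes green at the same brightness, which is exactly the "sprite provides brightness and saturation" behaviour described above.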
  3. Screens

  4. Hi, I'm working on a hobbyist project and I'm struggling to create a particular effect. The game has an isometric, or at least top-down, perspective with a fixed camera orientation. I want to be able to show some kind of translucent overlay for characters when they are occluded behind level geometry in the foreground. This idea is probably easiest to show with an image; this one is taken from the game Pillars of Eternity: [sharedmedia=gallery:images:8263] As you can see, where the character is hidden by the wall they draw this grey semi-transparent outline. So this needs to appear in regions where the characters are occluded, but only by specific things, not for example by each other. The first idea I had was:
- create a frame buffer to render to, so we can have a new depth buffer
- render the subset of level geometry we are interested in (meaning the foreground walls and such) to the depth buffer only, using some kind of no-op fragment shader
- render the characters in the transparent manner using an inverted depth test, i.e. glDepthFunc(GL_GEQUAL) with glDepthMask(GL_FALSE)
- draw the buffer to the screen
I thought this would mean it draws the characters only in the regions where walls are in front of them. Unfortunately this is causing strange behaviour: that particular combination of glDepthFunc and glDepthMask seems to cause problems regardless of whether I use the FBO or not. It's a bit hard to explain the issues, but I get black marks and streaks on the screen. Anyway, my question is: should I expect any problems using depth testing with writing to the depth buffer disabled, as I think it has to be? Alternatively, can anyone suggest a better strategy for this kind of problem?
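To check my own understanding of the intended depth logic, here's a tiny CPU-side simulation in Python (the buffers and the `occluded_overlay_mask` helper are invented for illustration, not real GL code): with GL_GEQUAL and depth writes off, a character fragment should pass only where a wall's stored depth is in front of it.

```python
FAR = 1.0  # value the depth buffer is cleared to

def occluded_overlay_mask(wall_depth, char_depth):
    """Per-pixel GL_GEQUAL test with depth writes disabled: the
    character fragment passes only where the stored wall depth is
    nearer than (or equal to) the character's depth. Where the
    buffer still holds the clear value FAR, the test fails, so the
    overlay never appears in the open."""
    return [[c >= w for w, c in zip(wrow, crow)]
            for wrow, crow in zip(wall_depth, char_depth)]

# 1x4 strip: a wall covers the two middle pixels at depth 0.3,
# the character sits at depth 0.6 everywhere.
wall = [[FAR, 0.3, 0.3, FAR]]
char = [[0.6, 0.6, 0.6, 0.6]]
```

Running this gives an overlay mask only over the two wall pixels, which is the behaviour I'm after, so the confusion is presumably in the GL state rather than the idea itself.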
  5. We have a mesh made up of triangular elements in 2d and a set of values defined for each element. We want to create a thermal plot of these values over the surface of the mesh. For this we need to obtain the value at an arbitrary position on the surface using some kind of interpolation. I've been experimenting with nearest neighbour interpolation but the results have a visual "spottiness" that management does not like.   I'm wondering if anyone has worked on a problem like this or knows any alternative methods I can try?    
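One alternative that avoids the spottiness of nearest-neighbour: average the per-element values to the mesh nodes first (each node takes the mean of the elements touching it), then interpolate within each triangle using barycentric coordinates, which gives a continuous field over the surface. A minimal sketch in Python; the function names are illustrative:

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of point p with respect to triangle
    (a, b, c), each a 2d (x, y) tuple."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return u, v, 1.0 - u - v

def interpolate(p, tri, node_values):
    """Blend the three node values at p; node_values would come from
    averaging the element values of the triangles around each node."""
    u, v, w = barycentric(p, *tri)
    va, vb, vc = node_values
    return u * va + v * vb + w * vc
```

At a node the interpolant equals the node value exactly, and it varies linearly across each element, so adjacent elements share values along their common edge and the "spots" disappear.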
  6. GLSL: Simple 2d lighting with many lights

    [quote name='zacaj' timestamp='1350240000' post='4990106'] The easiest way would be to do multiple passes. You render the scene once for each light and have it add the colors together each pass. This results in some overhead from rendering everything multiple times, but this can be reduced with a z-prepass or by using deferred rendering. [/quote]

    Yes, I suppose that could be quite slow for large numbers of lights, but if it only happens in extreme cases then it could work. This is probably a dumb question, but can you do the multiple passes using a simple additive blend function, or something more complicated? The only problem with this is that for gameplay reasons I need to be able to use translucency. (This is for showing partially transparent character sprites when they are hidden behind the environment. Think Infinity Engine style.) I'm not sure if that can work; I need to think about it.

    [quote name='SuperVGA' timestamp='1350240097' post='4990107'] Your description made me very curious, - would you mind showing your lighting off a bit with demo screenshots? [/quote]

    Yes I can, though it's really nothing special and I have only stock art to test it with. I've generalised the spotlight formula a bit to use 3 attenuation factors in the normal way, plus a range falloff on top of that so that the range can be known precisely. The two circles are debug drawings of the light position in my game editor. [sharedmedia=gallery:images:2886]

    Putting the data in a texture is interesting; I hadn't considered that. As I said, I don't have much intuition for the performance costs of various things in OpenGL, so I don't know how this would compare with using a large number of uniforms. For example, how important is it to minimise the setting of uniforms in shaders? And how does it vary with the amount of data? (i.e. is setting an array of 10 vec4s much worse than simply setting 3 ints or similar?)
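The attenuation-plus-range-falloff idea can be sketched like this in Python (the constant names and the exact linear falloff shape are illustrative, not my actual shader code):

```python
def light_factor(dist, kc, kl, kq, range_):
    """Standard three-term attenuation 1/(kc + kl*d + kq*d^2),
    multiplied by a linear falloff that reaches exactly zero at the
    light's range, so the light's extent is known precisely."""
    if dist >= range_:
        return 0.0
    atten = 1.0 / (kc + kl * dist + kq * dist * dist)
    falloff = 1.0 - dist / range_
    return atten * falloff
```

The point of the extra factor is that pure 1/(kc + kl*d + kq*d^2) never actually hits zero, so without it you can't cull lights by distance exactly.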
  7. I'd like some advice on how best to go about this. I'm making a 2d isometric RPG and I've implemented a simple 2d lighting solution using GLSL that lets you put simple spherical spot lights in your scene. The lights can move, and this is important for gameplay reasons, since ideally I would like the player to be able to equip torches or other light sources and carry them around. This is a party-based RPG, which is a problem because in principle if one party member is carrying a torch then I can't really rule out them all carrying one. This makes it hard to predict the maximum number of spot lights that will be relevant to each tile. Each spot light is specified in some uniform arrays for the position, colour and attenuation data.

At first I thought I could just make the number of lights itself a uniform variable within the shader. I think this doesn't work, however, since at least on older hardware a for loop within the fragment shader must be over a constant. So I have two different approaches in mind:

    1. Determine the total number of lights relevant to the current scene at the beginning of the frame, have different shaders defined for different possible numbers of lights (within some reasonable range) and activate the correct one. Set the lighting data once and use it for the whole scene.

    2. Pick a fixed, smaller number of lights for the shader and then try to determine for each tile which lights have the most influence, setting the lighting data uniforms for each tile (or whenever they change from tile to tile).

The second option will in some cases consider a smaller number of lights and will need fewer calculations per pixel, but you will need to set the uniforms more often. The first one needs different shader versions, which means additional management. (Perhaps there is some macro magic that can help avoid maintaining 5 versions of the same file?) I don't have much intuition for the likely trade-offs in performance.
The calculations for each light are the usual distance and dot-product and so on.
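On the "macro magic": one common trick is to keep a single shader source and prepend a #define for the light count before compiling, caching one compiled program per count actually used. A minimal Python sketch (the template and helper names are illustrative; note that if the source starts with a #version directive, the define has to be spliced in after that line instead):

```python
# One shader template serves every light count; NUM_LIGHTS is left
# undefined here and supplied by the preprocessor at compile time.
SHADER_TEMPLATE = """\
uniform vec3 light_pos[NUM_LIGHTS];
void main() {
    for (int i = 0; i < NUM_LIGHTS; ++i) {
        // ... per-light attenuation and shading ...
    }
}
"""

def specialise(source, num_lights):
    """Prepend a #define so the GLSL preprocessor substitutes the
    constant loop bound; pass the result to glShaderSource."""
    return "#define NUM_LIGHTS %d\n" % num_lights + source
```

This keeps one file to maintain while still giving the compiler the constant loop bound that older hardware requires.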
  8. We serialize a vector by writing out the size and then each element in the usual way. However, in a badly written class the number written to some files that should be the size is less than the actual number of elements written from the vector. This error happens rarely, but it would be good to be able to recover these corrupt files when they appear. The thing being serialized in the vector is itself a class which has a serialize method that begins:

CObject::Serialize(ar);
ar.SerializeClass(RUNTIME_CLASS...

in the usual way. The next element after the vector is an integer. Is there any way to attempt to serialize a class which may or may not be there, and then recover when it doesn't work and proceed in a different way? Thanks for any help.
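For what it's worth, the general recovery idea can be sketched outside MFC: since each element record begins with a recognisable class tag (which is what SerializeClass writes), you can keep reading elements past the stored count for as long as the tag check succeeds, and treat the first non-matching value as the trailing integer. Here is a Python sketch with an entirely invented binary layout; in MFC itself the equivalent would be catching the CArchiveException that a failed SerializeClass throws.

```python
import struct

ELEMENT_MAGIC = 0xABCD  # hypothetical stand-in for the class tag

def read_vector(buf, offset=0):
    """Read the stored count, then consume element records (tag +
    int payload here) for as long as the tag matches, ignoring the
    possibly-too-small count; the first non-matching word is the
    trailing integer that follows the vector."""
    (count,) = struct.unpack_from("<I", buf, offset)
    offset += 4
    elements = []
    while True:
        (tag,) = struct.unpack_from("<I", buf, offset)
        if tag != ELEMENT_MAGIC:
            break
        (value,) = struct.unpack_from("<i", buf, offset + 4)
        elements.append(value)
        offset += 8
    (trailing,) = struct.unpack_from("<i", buf, offset)
    return elements, trailing, count
```

The weakness, of course, is a trailing integer that happens to equal the tag; in the MFC case the class tag is less likely to collide, but it's still a heuristic repair, not a guarantee.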
  9. I've been making a pathfinding system for my project using a triangular navigation mesh. This is working quite well: I can now build the mesh, find simple paths and smooth them using the funnel algorithm outlined in this blog post: [url="http://digestingduck.blogspot.com/2010/03/simple-stupid-funnel-algorithm.html"]http://digestingduck...-algorithm.html[/url] I'm having trouble deciding how to modify my approach to account for non-zero creature radius. I'm happy for my creatures to be simple circles and for there to be a small fixed set of allowed creature sizes, perhaps four or five. Given this, one approach is to rebuild the mesh for each allowed size and "expand" the invalid polygon regions by the creature radius using an approximate Minkowski sum calculation. This seems to work OK, but it is quite inefficient and I'd prefer something more general. There are a few references in discussions to "shrinking" the navigation mesh, but I don't understand what this means in practice. One possibility that does make sense to me is that instead of reducing the mesh as a whole, we first obtain the sequence of edges or portals to traverse to get to the goal by searching the unmodified mesh. We then form a new channel by taking each edge and contracting it along its length by the radius, and use the funnel algorithm on that. This has a few issues to work out: [list][*]Some channels may be too small for large creatures to traverse at all; the A* search at the start needs to know this so it finds a different path. We need to independently calculate the maximum radius that can traverse each pair of edges.[*]The reduced channel doesn't exactly produce the smooth circular arc around the corners that you strictly need, just an approximation.[*]The boundary of the channel may not always be made of constrained/unwalkable edges, in which case you may not need to shrink it there. Although it could still be near a different constrained edge, in which case you need to do something different to work out how much to shrink the channel by. (I think this should be a rare case with a proper triangulation, but I don't think it's impossible for the way I build mine at the moment.)[/list] I'd be grateful for any advice on this approach and whether it's worth pursuing, or any other suggestions.
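The per-edge contraction in that idea can be sketched as follows (Python, with an invented shrink_portal helper); the None case is exactly the "channel too small" issue from the first bullet, which the initial A* search would need to know about:

```python
import math

def shrink_portal(a, b, radius):
    """Pull both endpoints of a portal edge towards each other by
    'radius', so the funnel algorithm keeps the circle centre at
    least 'radius' away from the original corners. Returns None
    when the portal is too narrow for the creature to pass."""
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length <= 2 * radius:
        return None  # the A* search should reject this portal pair
    ux, uy = dx / length, dy / length  # unit vector along the edge
    return ((ax + ux * radius, ay + uy * radius),
            (bx - ux * radius, by - uy * radius))
```

As noted in the second bullet, running the funnel over the contracted portals only approximates the true circular arcs around corners, but the approximation error is bounded by the portal spacing.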
  10. It will probably be 3.5 edition. (I haven't looked into 4, but the few things I've heard about it made me face palm a bit.) I'm mostly modelling my gameplay on Temple of Elemental Evil which is my favourite RPG engine, aside from all the bugs anyway. I also really like some of the ideas in the Pathfinder rulebook, which is a version of 3.5.
  11. I've been working on the back end of my RPG for a while and now I'm just starting to think about how to implement some of the gameplay details, in particular things like stats, items and spells. My project is going to be based on a version of D&D rules, and this has the downside that some of the rules can be very complex. I guess as long as a human DM is there to clarify things there's no problem; not so much for dumb computers. My system might need to be able to handle item and spell effects like: [list][*]immunity or bonus against mind-affecting spells[*]immunity or bonus against charm spells, a subset of mind-affecting spells[*]immunity to a particular spell[*]bonus to resist spells cast by creatures of a particular alignment[*]numerical bonuses with a particular type which determines the stacking rules (e.g. deflection bonuses to AC do not stack, but dodge bonuses do)[*]effects which set an attribute to a specific value, overriding other effects[*]attacks which ignore specific bonuses (e.g. touch attacks ignore armour bonuses)[*]bonuses which apply only for attacks against creatures of a particular race or alignment[/list] In the ideal case I'd like the user to be able to customise everything. I'm not sure how generic it's possible to be with systems like this; for some things I don't see an alternative to hard-coding effect types and their associated fiddly condition checks in the C++. So far I have the following classes:

[b]EffectManager:[/b] Not sure if I need this; people say to watch out for manager classes. I was thinking of having some central place to monitor temporary effect durations and handle callbacks from events like equipping items, or entering or leaving a persistent spell region.

[b]EffectGroup[/b]: A container for a set of effects. This stores information about the duration and the caster or item it came from. EffectGroups are added and removed as a unit for the purposes of dispels.

[b]Attribute[/b]: Anything with a numerical value, a type and a subtype. Types include: stat, skill, BAB, AC, movement, casting speed, health, saving throw, caster level, caster spell difficulty class. Subtype is to distinguish stats, skills and saving throw types from each other.

[b]AttributeEffect[/b]: A type of effect that represents a numerical attribute buff or penalty. Has a type to use for stacking rules and a value. The idea is that these are stored on the creature, and when we want to obtain an attribute we first loop over the active attribute effects and apply any modifiers that match.

One possibility to make this more flexible is for specific effects to have a script associated with them to specify additional conditions. Depending on attribute type we define a specific context that the script can query, such as: [list][*]BAB and AC allow you to obtain the attacker and the target creature[*]Saving throw lets you obtain the casting creature and the spell itself.[/list] These scripts would let you define arbitrary conditions like +2 to AC when attacked by a dwarf on a Tuesday, and could be edited by the user. I don't know if this is along the right lines; I get the feeling this could get very messy when stuff I haven't thought about crops up late in the project. I'd be grateful for any advice on building systems like this.
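The stacking part of the AttributeEffect lookup could be sketched along these lines in Python (the split into stacking and non-stacking types here is illustrative, not a complete 3.5e rules table):

```python
# Bonus types that stack with themselves; every other typed bonus
# takes only the single largest value of its type. Illustrative set.
STACKING_TYPES = {"dodge", "untyped"}

def total_bonus(bonuses):
    """bonuses: list of (type, value) pairs from active effects.
    Stacking types sum; for each non-stacking type only the largest
    bonus applies; penalties (negative values) always add."""
    best = {}
    total = 0
    for btype, value in bonuses:
        if value < 0 or btype in STACKING_TYPES:
            total += value
        else:
            best[btype] = max(best.get(btype, 0), value)
    return total + sum(best.values())
```

The per-effect condition scripts would then act as a filter on the list before it reaches this function, which keeps the stacking logic in one place regardless of how exotic the conditions get.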