Community Reputation

205 Neutral

About Halsafar

  • Rank
    Advanced Member

Personal Information

  • Location
    Saskatchewan, Canada
  1. I am rotating a unit vector around the z-axis. I need to calculate the tangent at each point and draw a quad whose extents lie along the tangents. This is essentially like rotating a point around a sphere of radius 1.0 centered at (0, 0, 0). I believe I have found one tangent, so I can draw one axis of the quad. The error I am seeing leads me to believe I have to find two tangents.

     A---B
     |   |
     C---D

     The center of the quad is the point at which I'm finding the tangent, and each of the points A, B, C, D is an extent of the quad along the tangents. I'm not even sure this is the best approach. Essentially I need to draw a quad at each point along a sphere.
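A minimal sketch of the two-tangent idea: given the unit normal (the point on the unit sphere), pick any helper axis not parallel to it, and two cross products yield an orthonormal pair spanning the quad's plane. The `Vec3` type and function names here are illustrative assumptions, not from the original post.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Given the unit normal n (the point on the sphere), build two
// orthogonal unit tangents t and b spanning the quad's plane.
void tangentBasis(Vec3 n, Vec3& t, Vec3& b) {
    // Pick a helper axis that is not parallel to n.
    Vec3 up = (std::fabs(n.z) < 0.999f) ? Vec3{0, 0, 1} : Vec3{1, 0, 0};
    t = normalize(cross(up, n)); // first tangent
    b = cross(n, t);             // second tangent, already unit length
}
```

The quad corners then fall out directly, e.g. A = p + s*t + s*b, B = p - s*t + s*b, and so on for half-size s.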
  2. Given a geodesic dome with proper UV spherical texture mapping and a sun position, I am trying to find some reading or examples of a shader for adjusting the sky horizon to give the appearance that the sun is in the sky.
  3. Adding some finishing touches to a graphics project: [media]http://www.youtube.com/watch?v=6BMgMdrR9-w[/media] Recently we added a skydome using spherical texture mapping, which looks great, plus horizon effects as the sun rises and sets, and a halo effect around the sun's position in the sky. The skydome is rendered around the view origin at all times (so it is effectively at infinity). The problem is actually drawing a billboarded quad for the sun in the skydome. Any tips would be great! Our billboarding never looks right; it is always rotating the sun opposite the camera or something silly. The sun's position and rotation around the terrain look broken, and it definitely doesn't look like the sun is behind the skydome as one would expect. Instead of going over everything we have tried, it might be easier to just ask for help. Specifically: what type of billboarding should we use for the sun quad, and how should we position the sun so it looks convincingly in the sky? Thanks!
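One common approach, sketched here under assumptions (the function and parameter names are illustrative): a view-plane-aligned billboard offsets the quad's corners along the camera's right and up axes, which are the first two rows of the view matrix's rotation part. Placing the quad center at viewPos + sunDir * skydomeRadius keeps the sun moving with the dome, as if at infinity.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Build a camera-facing quad centered at 'center'. camRight and camUp
// are the camera's world-space right and up axes (rows 0 and 1 of the
// view matrix rotation). Corners come out in the order A, B, C, D
// (top-left, top-right, bottom-left, bottom-right).
void sunQuad(Vec3 center, Vec3 camRight, Vec3 camUp,
             float halfSize, Vec3 out[4]) {
    auto offset = [](Vec3 a, Vec3 b, float s) {
        return Vec3{ a.x + b.x * s, a.y + b.y * s, a.z + b.z * s };
    };
    out[0] = offset(offset(center, camRight, -halfSize), camUp,  halfSize); // A
    out[1] = offset(offset(center, camRight,  halfSize), camUp,  halfSize); // B
    out[2] = offset(offset(center, camRight, -halfSize), camUp, -halfSize); // C
    out[3] = offset(offset(center, camRight,  halfSize), camUp, -halfSize); // D
}
```

Because the axes come straight from the current view matrix, the quad can never rotate "opposite the camera": it is reconstructed to face the viewer every frame, with no accumulated rotation of its own.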
  4. [quote name='othello' timestamp='1328317107' post='4909386'] I'm currently looking at Geometry Clipmaps (Hoppe). It has a really nice, fast and intuitive LOD and frustum culling system. It uses vertex texture lookups and only a few draw calls per frame. [url="http://research.microsoft.com/en-us/um/people/hoppe/proj/gpugcm/"]http://research.micr...pe/proj/gpugcm/[/url] [/quote] Just finished reading the paper. It does sound very impressive. I was originally reading the following article and planning to implement it, but the one you present seems much better. [url="http://www.gamasutra.com/view/feature/1754/binary_triangle_trees_for_terrain_.php"]http://www.gamasutra.com/view/feature/1754/binary_triangle_trees_for_terrain_.php[/url]
  5. A long question with a likely long, complicated answer. For context, I am using OpenGL 3.3+. I can already take a height field and generate a terrain, and I've subdivided it into a quad-tree structure for frustum culling. The method is simple and likely a performance killer: each leaf in the quad tree has its own vertex buffer and index buffer (so its own VAO in OpenGL), and when I traverse the tree I end up with many draw calls (bad). There is no LOD yet. What is the best approach here to gain some performance but also get LOD in there? One approach would be to traverse the quad tree, building a dynamic vertex buffer each frame. This gets me down to one draw call but seems awfully slow. It might solve the problem of a giant vertex list, since I could queue vertices up to some maximum, draw, then queue the rest and draw again. Another approach: I could use a global vertex buffer for the entire terrain (set once, resident on the GPU). A single global vertex buffer seems problematic for large terrain; the height fields are very large, 2048x2048 for example. Each quad-tree leaf could then hold a set of index buffers, one per LOD. I believe the index buffer for a given LOD is identical for every node, so I would only need to store a small list of index buffers to select from when rendering a node. The logic becomes circular as I try to choose between multiple draw calls and batching into fewer draw calls. Should this even be a concern? I googled around a bunch and read some articles; there seem to be a ton of approaches, and I became rather confused about when to combine which methods. I'm hoping someone here can focus my thoughts and push me toward one method to try out. Any articles you have found useful covering this stuff would help. Thanks!
  6. I've decided to go the simple route, keeping my vertex data as small as possible. I store vertex arrays for the sprites in a hash map (unordered_map) keyed by texture id. Each vertex stores simply x, y, z, u, v. When the vertices are pushed onto their array I calculate their position in world space right there (including rotations), so they all share the same world matrix. When rendering comes, I can render each vertex array as one batch.
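The "shared per-LOD index buffers" idea from the terrain question above can be sketched like this. Since every leaf patch has the same grid dimensions, one index buffer per LOD level can be built once and shared by all leaves; only a per-leaf base-vertex offset differs at draw time. The function names and the distance falloff constant are illustrative assumptions.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Build triangle-list indices for a (size+1) x (size+1) vertex patch,
// skipping vertices by step = 2^lod. The same index buffer works for
// every patch of the same dimensions (combine with a base-vertex
// offset, e.g. glDrawElementsBaseVertex, to reuse it across leaves).
std::vector<unsigned> buildPatchIndices(unsigned size, unsigned lod) {
    unsigned step = 1u << lod;
    std::vector<unsigned> indices;
    for (unsigned z = 0; z < size; z += step) {
        for (unsigned x = 0; x < size; x += step) {
            unsigned i0 = z * (size + 1) + x;          // top-left
            unsigned i1 = i0 + step;                   // top-right
            unsigned i2 = i0 + step * (size + 1);      // bottom-left
            unsigned i3 = i2 + step;                   // bottom-right
            // two triangles per grid cell
            indices.insert(indices.end(), { i0, i2, i1, i1, i2, i3 });
        }
    }
    return indices;
}

// Pick a LOD level from camera-to-patch distance; the 256-unit falloff
// is an arbitrary placeholder to tune per game.
unsigned chooseLod(float distance, unsigned maxLod) {
    unsigned lod = static_cast<unsigned>(distance / 256.0f);
    return lod > maxLod ? maxLod : lod;
}
```

At render time each visible leaf issues one indexed draw with the index buffer for its chosen LOD, so draw-call count equals visible-leaf count rather than growing with LOD levels.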
  7. [quote name='Kryzon' timestamp='1319412067' post='4876150'] Store each quad's matrix as vertex-attributes in each of their 4 vertices. Read these attributes in a vertex shader and multiply appropriately so you can restore each quad's exclusive transformations even when batching quad soups. [/quote] Maybe I misunderstand, but that sounds like the worst of both worlds. Now I am sending both vertex and matrix data for each quad, each frame?
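The texture-keyed batching scheme described in post 6 might look like this in outline. Vertices are transformed to world space on the CPU as they are pushed, so every batch shares the same (identity) world matrix and one draw per texture suffices. The class and member names are illustrative assumptions.

```cpp
#include <cassert>
#include <cmath>
#include <unordered_map>
#include <vector>

struct Vertex { float x, y, z, u, v; };

class SpriteBatcher {
public:
    // Push one quad as two triangles (6 vertices), rotated by 'angle'
    // radians around its centre and translated to (cx, cy) in world space.
    void pushSprite(unsigned textureId, float cx, float cy,
                    float halfW, float halfH, float angle) {
        float c = std::cos(angle), s = std::sin(angle);
        auto corner = [&](float lx, float ly, float u, float v) {
            // rotate in local space, then translate to world space
            return Vertex{ cx + lx * c - ly * s, cy + lx * s + ly * c,
                           0.0f, u, v };
        };
        std::vector<Vertex>& verts = batches_[textureId];
        verts.push_back(corner(-halfW,  halfH, 0, 0));
        verts.push_back(corner(-halfW, -halfH, 0, 1));
        verts.push_back(corner( halfW,  halfH, 1, 0));
        verts.push_back(corner( halfW,  halfH, 1, 0));
        verts.push_back(corner(-halfW, -halfH, 0, 1));
        verts.push_back(corner( halfW, -halfH, 1, 1));
    }

    // One vertex array per texture: upload and draw each as a batch.
    const std::vector<Vertex>& batch(unsigned textureId) const {
        return batches_.at(textureId);
    }

private:
    std::unordered_map<unsigned, std::vector<Vertex>> batches_;
};
```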
  8. The preferred method is clearly to adjust the matrix: draw your objects in local space and use a matrix to transform each object. This is definitely true for 3D. In my specific scenario I am rendering only quads (sprites) using batching (sorted by texture, for example). The question of how to adjust the position of the quads has come up. I can either: - adjust the vertices and send the new 4 vertices to the GPU, using the same matrix for all, or - send the vertices once and adjust the world matrix each frame. The latter again seems most logical. My concern is that, for mere quads, updating an entire matrix per quad per frame seems a bit of overkill. In reality the vertex positions come out to 12 floats and the matrix is 16 floats. Will any of this even make a major difference? I am curious what XNA does with its SpriteBatch. I am working with OpenGL ES 2.0 in C/C++.
  9. [quote name='Hodgman' timestamp='1318394324' post='4871726'] 1) There are two schools of "component systems" -- ones where components magically find each other (by inspecting the entity somehow) and automatically send messages to each other, and ones where you explicitly plug components into each other. For the former, the physics component would ask its parent entity to give it a pointer to a child component that implements the IPositionable interface, etc, etc, and the physics component would send "position updated" messages to that target. For the latter, the person creating the entity would plug them together, e.g.[code]
e = new Entity;
c = new PhysicsComponent;
p = new PositionComponent;
c->SetOutputComponent( p );
e->Add( c );
e->Add( p );
[/code] [/quote] I do believe you made a light bulb click, even if your solution isn't exactly what I will end up implementing. So far, all the articles I've been reading on the topic seemed to treat an entity system as a golden hammer for avoiding OOP paradigms entirely. Adding some OOP to manage dependencies is fair in my books. [quote name='Hodgman' timestamp='1318394324' post='4871726'] 2) Why does every component need an "Update" function? If you don't force everything to operate in terms of "IUpdatable", then you don't have this problem ([i]seriously, imagine writing any other piece of software, such as a calculator or spreadsheet or database, where every class was controlled through a generic IUpdatable interface....[/i]) [quote]Remember the whole idea of the entity system is to avoid this type of dependency problem[/quote]Not really. The idea that composition is preferable to inheritance is a good software engineering rule-of-thumb. Modern "entity/component systems" are better than traditional "inherited entity systems" due to this fact, but component systems are not supposed to suddenly remove all dependencies in your code. 
[/quote] Previously I was used to the game-object paradigm, where an "IUpdatable" is common. In the entity + component case, not every component requires an update, but the major light bulb here was that components need not update individually at all. There is nothing stopping me from having a PhysicsAndCollisionSubSystem that updates all entities containing one component or the other, keeping the logic for both in one place. In terms of ordering, I think it reasonable to just update the subsystems in the order required by the game context; this is likely to change based on your game. One thing I must say: if the component subsystems can remain "atomic", then one could potentially thread the processing of components.
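The "explicit subsystem ordering" conclusion above can be sketched as a simple scheduler: the game registers its subsystems once, in whatever order that particular game requires, and the loop runs them in that fixed order each frame, instead of every component exposing a generic IUpdatable. The class name is an illustrative assumption.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Subsystems are registered once, in the order the game context
// requires (e.g. input -> physics -> collision -> render); updateAll()
// then runs them in exactly that order every frame.
class SubsystemScheduler {
public:
    void add(std::string name, std::function<void()> update) {
        systems_.push_back({ std::move(name), std::move(update) });
    }
    void updateAll() {
        for (auto& s : systems_) s.update(); // fixed, explicit order
    }
private:
    struct Entry { std::string name; std::function<void()> update; };
    std::vector<Entry> systems_;
};
```

Because each subsystem is self-contained, a later refinement could run independent subsystems on worker threads while keeping the explicit ordering between dependent ones.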
  10. I have been gathering lots of reading material lately on using an entity system for representing game objects. The approach I enjoy most compares closely with relational databases. Some good notes before I ask my question: [url="http://entity-systems.wikidot.com/rdbms-with-code-in-systems"]http://entity-systems.wikidot.com/rdbms-with-code-in-systems[/url] [url="http://www.gamedev.net/topic/463508-outboard-component-based-entity-system-architecture/"]http://www.gamedev.net/topic/463508-outboard-component-based-entity-system-architecture/[/url] [url="https://github.com/adamgit/Game--Escape-from-the-Pit"]https://github.com/adamgit/Game--Escape-from-the-Pit[/url] The biggest problem in my understanding seems quite common among others. 1.) Simply put, if I have two components which should remain separate, let's say PositionComponent and PhysicsComponent, but they require knowledge of each other, what is the appropriate solution? I can't imagine a PhysicsComponent without a PositionComponent, so perhaps it's a bad example. 2.) Regardless of your solution to my first question, it leads directly to my next concern. The dependencies also have an implied ordering. Let's say we have a MoveableComponent which takes gamepad input and sets up some deltas for movement; then our component subsystems must be run in order. You would have to update the MoveableComponent first for the PositionComponent to be correctly computed. The PositionComponent needs to be updated for the PhysicsComponent, which has to update for the CollisionComponent, and so on. I see no way to avoid the extremely ugly chain of dependencies created, nor a way to avoid the ordering implied on these components. Remember, the whole idea of the entity system is to avoid this type of dependency problem. Perhaps my understanding is lacking, but I see this as a serious problem.
  11. It is an open source project, or at least it will be when we are finished with it. I've already put the code under the LGPL license; I just haven't released it anywhere yet and the svn repo is private. We call it granular synthesis. The library can be used to create real-life sound effects based on some endpoint sound effects. A good example is a chalkboard. We want our virtual chalk sounds to sound very much like real life, so we record some sound samples and create a sound space: (light pressure, slow movement) x (hard pressure, slow movement) x (light pressure, fast movement) x (hard pressure, fast movement). You then feed those sound samples into the library, creating a sound space (think of a square: [0, 0], [1, 0], [0, 1], [1, 1]). When you want to play a sound effect, you just ask the library to play a sound, giving it a point in the sound space and a length of time, and the library will generate the appropriate sound effect. Works really well. Shoot me a PM if you want more info and I'd be happy to link you to the source on release.
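The core of the sound-space blend described above amounts to bilinear interpolation between the four recorded corner samples. This is a deliberately reduced sketch: a real granular synthesis engine mixes whole grains of audio, not single sample values, and the function name is an illustrative assumption.

```cpp
#include <cassert>
#include <cmath>

// Blend four corner samples recorded at (pressure, speed) =
// (0,0), (1,0), (0,1), (1,1) using bilinear weights, so any interior
// point of the sound space is a weighted mix of the recorded extremes.
// The four weights always sum to 1, preserving overall level.
float blendSoundSpace(float s00, float s10, float s01, float s11,
                      float pressure, float speed) {
    float w00 = (1 - pressure) * (1 - speed); // light pressure, slow
    float w10 = pressure * (1 - speed);       // hard pressure, slow
    float w01 = (1 - pressure) * speed;       // light pressure, fast
    float w11 = pressure * speed;             // hard pressure, fast
    return s00 * w00 + s10 * w10 + s01 * w01 + s11 * w11;
}
```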
  12. This appears to work; my mono samples are played only on the right speaker with this setup. Change the matrix to { 1, 0 } to enable the left speaker only. I am glad Google is already picking up on this forum post; there is very little info on this. [code]
// 1 source channel, 2 destination channels: the matrix holds one
// gain per destination channel, in (left, right) order.
float mat[] = { 0, 1 }; // left = silent, right = full volume
pSourceVoice->SetOutputMatrix(NULL, 1, 2, mat);
[/code] Thanks a bunch!
  13. I have a complex audio synthesis system set up and tested using XAudio2. It has come time to implement simple panning (left <-> right, [-1, 1]). I cannot find any information on this for XAudio2 aside from setting XAudio2 up for 3D audio. I am hoping there is an easier way, but I can't find any way to control pan directly. From all my research there is nothing short of going all out with XAudio2 3D. Any help would be appreciated. Thanks, Halsafar
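One way to get a [-1, 1] pan without the 3D audio path, sketched under assumptions (the equal-power curve is a standard audio technique, not something from XAudio2 itself): map the pan value to the two gains of the mono-to-stereo output matrix that SetOutputMatrix takes, as in post 12 above.

```cpp
#include <cassert>
#include <cmath>

// Map pan in [-1, 1] (full left .. full right) to the 1-in/2-out gain
// matrix for a mono source feeding a stereo output, e.g. for
// IXAudio2SourceVoice::SetOutputMatrix(NULL, 1, 2, outMatrix).
// An equal-power (sin/cos) curve keeps perceived loudness roughly
// constant across the sweep, unlike a plain linear crossfade.
void panToMatrix(float pan, float outMatrix[2]) {
    const float pi = 3.14159265358979f;
    float angle = (pan + 1.0f) * 0.25f * pi; // 0 .. pi/2
    outMatrix[0] = std::cos(angle);          // left gain
    outMatrix[1] = std::sin(angle);          // right gain
}
```

At pan = 0 both channels get about 0.707 gain (-3 dB each), which sums to roughly the same power as a single full-volume channel.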
  14. And you would have me replace gl_LightSource[i].specular with?