Community Reputation

164 Neutral

About amirabiri

  1. OpenGL FX Composer issues with cgfx

    OK, I take it back regarding DirectX 10: I managed to get it to work on the HLSL/.fx side (not Cg/.cgfx). It seems that the .fx templates FX Composer generates by default only provide a DirectX 9 technique. With DirectX 10 you need to use the technique10 keyword instead of technique, and it has different syntax. Where previously it looked like this:

[CODE]
technique technique1
{
    pass p0
    {
        VertexShader = compile vs_3_0 mainVS();
        PixelShader = compile ps_3_0 mainPS();
    }
}
[/CODE]

    with DirectX 10 it should look like this:

[CODE]
technique10 technique_dx10
{
    pass p0
    {
        SetVertexShader( CompileShader( vs_4_0, mainVS() ) );
        SetPixelShader( CompileShader( ps_4_0, mainPS() ) );
    }
}
[/CODE]

    With this, your effect will work on both DirectX 9 and DirectX 10. It's a shame nVidia didn't make FX Composer generate both to begin with. You can find the various bits and pieces of the technique10 syntax online (mainly on Microsoft's website).
  2. OpenGL FX Composer issues with cgfx

    I [url=""]posted the question on Stack Overflow[/url] and it doesn't seem like anyone has an answer for it there either. I've done a lot of investigation and tried many different things, and the bottom line, I think, is that our hardware doesn't support certain profiles. For example, Direct3D 10 seems to require Shader Model 4. However, when I tried changing the effect's profiles from vs_3_0/ps_3_0 to vs_4_0/ps_4_0, I got an error that my hardware doesn't support them. I also found a forum thread on nVidia's site where a member of nVidia's staff mentioned something along the lines of FX Composer not supporting cgfx with Direct3D. I've also come across multiple forum posts all over the web (mostly very old) saying that Cg is dead/dying/will die, that nVidia isn't supporting it, and so on. Also note the FX Composer [url=""]FAQ[/url]; scroll down and you'll find: [indent=1][size=3][b]Q: "Does FX Composer work with Cg or GLSL?"[/b][/size][/indent] [indent=1][size=3]A: "We are always evaluating opportunites to provide powerful tools for developers, but have no plans to add support for other languages to FX Composer. FX Composer was built to support the large number of developers using HLSL in their applications."[/size][/indent] So basically, I think the bottom line is this:[list] [*]Some of the problems we are seeing are related to hardware capabilities (or the lack of them). [*]Some of the problems are related to restrictions of FX Composer. [/list] Since my objective is purely to learn shaders in general, the solution for me was to stick to HLSL and Direct3D 9. I will verify my suspicion about DirectX 10 when I have access to a newer card on another machine. There is also an interesting (old) forum thread here on gamedev: [url=""][/url]
  3. OpenGL FX Composer issues with cgfx

    I've just come across the exact same problem myself... did you solve it?
  4. The Galactic Asteroids Patrol

    Hi again, I was wondering if anyone here might have some advice about how best to spread the URL of the game's site? We've tried posting on gaming forums, but in most cases the forum admins saw it as advertising and disallowed it. At the moment the site is linked to only from gamedev, Unity's forums and Anandtech's forums. Of course FB helps as well. That feels like a miss, because these are developer communities, which are only a small subset of the gaming crowd (gamedev and Unity, that is; Anandtech is just small). What other forums / blogs would you recommend posting a link to the game's site on that can reach the gaming community?
  5. The Galactic Asteroids Patrol

    Thank you we're glad you like it :-)
  6. Hi All, We have been working on a small game titled "The Galactic Asteroids Patrol" which we hope to release in a few months. The Galactic Asteroids Patrol is a small casual game that is a sort of "modernization" of the old Asteroids type of games, with 3D graphics, physics, weapon selection, etc. It is based on the Unity game engine. Although the game isn't finished yet, today we finished building The Galactic Asteroids Patrol's website: [url=""][/url] We hope you like it, and we appreciate any feedback :-).
  7. AI movement in space

    I have a simple question that I think I know the answer to, but I thought I'd ask it anyway to verify, and maybe get some more useful information. Basically I have a spaceship in space (...) controlled by an artificial intelligence algorithm. This is classic space movement, i.e. the ship's facing and velocity are detached. When the engines are activated, on the other hand, the force applied to the ship is in the direction of the ship's facing. This sort of setup is relatively easy for a human to operate in (well, for gamers anyway...): a human being intuitively compensates between a desired velocity and the current velocity. Now I'd like to teach my AI to do something similar. When the AI decides it wants to go to point A, simply facing in that direction gives unsatisfactory results. Due to the current velocity, and depending on the distance and movement of the target point, the AI might miss, and it often enters an orbit around the target point. What I need instead is the right formula or algorithm that determines the ideal acceleration vector [b]a[/b] in order to change from the current velocity [b]v[/b] to the desired velocity [b]v[/b][sub][b]d[/b][/sub]. I have attached the following diagram to illustrate what I mean: [attachment=7346:AI velocity.png] Looking at the problem, and assuming I correctly understand the effect of acceleration on velocity, it seems to me that the ideal acceleration direction is the direction between the tips of the current and desired velocities, i.e: [attachment=7347:AI velocity 2.png] This will of course be affected by the ship's maximum velocity and the maximum acceleration the ship's AI has available. However, this feels somewhat counter-intuitive. So my question is: is this correct? And if not, what is the better approach?
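For what it's worth, the tip-to-tip direction described in the post is just the normalized difference between the desired and current velocity vectors, which matches the classic "steering = desired velocity minus current velocity" idea from steering-behavior literature. A minimal sketch (function and variable names are mine, not from the poster's code):

```python
import math

def thrust_direction(vx, vy, vdx, vdy):
    """Ideal thrust (acceleration) direction: from the tip of the
    current velocity (vx, vy) to the tip of the desired velocity
    (vdx, vdy), i.e. the normalized difference v_d - v."""
    dx, dy = vdx - vx, vdy - vy
    length = math.hypot(dx, dy)
    if length < 1e-9:
        return (0.0, 0.0)  # already at the desired velocity
    return (dx / length, dy / length)
```

The ship's AI would then rotate the facing toward this direction before applying thrust; maximum acceleration only changes how long the burn takes, not the direction.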
  8. Thank you for your reply, what you say makes a lot of sense. Another question that comes to mind is whether I should be worried about the performance penalties of too much polymorphism and virtual methods, or not worry about that. For example, I could have a collection of IMoveable objects with a Move() method, and various implementations of that, like LinearMovement and BallisticMovement. On the other hand, I could save the virtual method call by having two collections - one of LinearMovement objects and one of BallisticMovement objects - and just iterate over both from my Update() method. The same can be done for ICollidable, IRenderable, etc. What you say about update intervals makes sense. At 75Hz with vsync I will have one render cycle approximately every 13.33ms. Then I can iterate all moveables every 10ms or so, but iterate collidables only every 100ms, and so on. My question is whether there are any established patterns for keeping all of this in sync?
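One common way to keep several update rates in sync is to give each system its own time accumulator driven by the frame delta, so a 10ms movement update and a 100ms collision update coexist naturally. A sketch of that pattern (class and method names are my own, not an established API):

```python
class IntervalUpdater:
    """Calls fn at a fixed interval, driven by a variable frame
    delta; leftover time carries over to the next frame."""
    def __init__(self, interval_ms, fn):
        self.interval = interval_ms
        self.fn = fn
        self.accum = 0.0

    def advance(self, dt_ms):
        self.accum += dt_ms
        while self.accum >= self.interval:
            self.accum -= self.interval
            self.fn(self.interval)

calls = []
movement = IntervalUpdater(10, lambda dt: calls.append(('move', dt)))
collision = IntervalUpdater(100, lambda dt: calls.append(('collide', dt)))
for _ in range(10):            # ten ~13.33ms frames, i.e. ~75Hz
    movement.advance(13.33)
    collision.advance(13.33)
```

Over ten 13.33ms frames (~133ms) this produces thirteen movement updates but only one collision pass, each system staying on its own fixed clock.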
  9. Hi all, I've tried searching for this before posting but couldn't find anything concrete. Basically I'm looking for resources or advice on general game engine design, but from the game logic point of view, not rendering. I'm using Mogre and plan to use more addons and libraries, so most things are covered for me already. All I'm really left with is stringing it all together and managing the game logic. To be clearer, here are some questions on my mind:
    - Should I strive for as many game logic updates as possible, i.e. Update() between every two frames without vsync? Or should I aim to call Update() only at a specific interval?
    - If I go for intervals in the previous question, should I try to compensate for the lack of sync between the render frames and the game ticks by rendering "half movement"?
    - Should I have an Actor base class that contains lots of functionality and rely on subclassing, or should I have compositions of objects, each responsible for a different part of the actor, like movement, animation, physics, AI, etc.?
    - Should I have one big list of Actors and iterate through all of them each game logic cycle calling Update(), or should I try to minimize this big iteration in some way?
    - Should the game be single-threaded, or is it advisable to employ multi-threading?
    These are just some examples of the sort of questions that I'm looking for resources about, and I was wondering if anyone here could point me in the right direction? Any book or article on the matter will be appreciated, and of course forum members' own opinions and experiences.
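The first two questions are usually answered together by the widely used fixed-timestep pattern: logic ticks at a constant rate, and rendering interpolates by the leftover fraction of a tick - exactly the "half movement" compensation asked about. A sketch (names are mine):

```python
def run_frames(frame_times_ms, tick_ms=10):
    """Fixed-timestep loop: consume whole ticks from an accumulator,
    then expose the leftover fraction as a render blend factor."""
    accumulator = 0.0
    ticks = 0
    alphas = []
    for dt in frame_times_ms:
        accumulator += dt
        while accumulator >= tick_ms:
            accumulator -= tick_ms
            ticks += 1                       # update(tick_ms) goes here
        alphas.append(accumulator / tick_ms) # render lerps prev->curr state by alpha
    return ticks, alphas
```

The renderer draws each object at `prev_pos + alpha * (curr_pos - prev_pos)`, so movement looks smooth even though logic only advances in whole 10ms steps.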
  10. Hi, I am currently in the process of designing my second engine. My first engine was good for single player, but didn't do so well for multiplayer. One of the things I noticed when trying to switch from single player to multiplayer is that replicating state isn't enough. It seems to me that it is necessary to support not only replication of the state of a given object, but also replication of "events". Examples of events, and reasons why I'm thinking of events:
    - Spawn actor, actor destroyed, fire laser, collision, etc.
    - UI events. These would not be replicated; however, aside from replication they resemble other events.
    - The handling of the multiplayer game itself seems to map well to events: new connection, player connecting, player connected, etc.
    - It is far easier to replicate events than to infer their occurrence from changes in a replicated state stream.
    - With time-based lag-compensation techniques, events are essential. I.e. I'd like to replicate the event "player fired" with a timestamp of when it happened, so that the server can compensate for the lag when processing the event on its end. This is impossible to do with simple state replication (of a flag, in this case), and requires events.
    - Replicated events come from the network, while the origin could be something else, yet in both cases they should be handled similarly, and events lend themselves to this abstraction. For example, "player started turning right" is an event that is generated by the keyboard on the client but received through the network on the server, although in both cases the game logic (beyond lag compensation) is the same. It follows that ideally the same piece of code should handle both, regardless of the source (keyboard/network).
    Some problems I can think of with an events-based approach:
    - Events require information (mouse position, spawned actor details, the colliding objects, etc), and each event requires different information. This implies different classes for different types of events, which makes it harder to replicate the creation and initialization of these event objects.
    - There might be an impact on performance: instead of creating an object, I now generate an event about its creation, then some other piece of code handles that event and actually creates the object.
    - Events may introduce delays and desynchronization. For example, one piece of code generates an event saying that a certain object needs to be created. That piece of code, at that point in time, is the wisest about how to initialize the object. However, instead of doing so, some other piece of code, potentially in the following game tick, is the one that actually creates and initializes it.
    So my question is whether you think it's a good idea to design a game engine on events which are then processed and optionally replicated, or whether that is not a good idea and there is a better design for these goals?
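The source-agnostic handling described above is usually expressed as an event queue: both the input layer and the network layer post the same event types, and a single dispatch point per tick routes them to handlers. A toy sketch of that shape (all class and field names are illustrative, not from any particular engine):

```python
import time

class Event:
    """Base game event; carries the timestamp needed for
    time-based lag compensation."""
    def __init__(self, timestamp=None):
        self.timestamp = timestamp if timestamp is not None else time.monotonic()

class PlayerTurn(Event):
    def __init__(self, player_id, direction, timestamp=None):
        super().__init__(timestamp)
        self.player_id = player_id
        self.direction = direction

class EventQueue:
    def __init__(self):
        self._handlers = {}   # event type -> list of callables
        self._pending = []

    def subscribe(self, event_type, handler):
        self._handlers.setdefault(event_type, []).append(handler)

    def post(self, event):    # called by keyboard code OR network code
        self._pending.append(event)

    def dispatch(self):       # called once per game tick
        pending, self._pending = self._pending, []
        for ev in pending:
            for handler in self._handlers.get(type(ev), []):
                handler(ev)
```

The per-tick `dispatch()` is also where the delay concern shows up: an event posted mid-tick is handled at the next dispatch, which is the cost of the decoupling.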
  11. Hi, I've got a bit of an odd question: is there a way to tell a monitor's "physical" dimensions or aspect ratio? The reason I'm asking is that in my game engine I've got a conversion system so that all my game logic is based on game size and distance units. Those units are translated to pixels when rendering, but all game logic is agnostic of this conversion. The conversion itself is based on the current resolution, of course, so you'd always see the game with the same relative sizes regardless of the resolution. It works quite well as long as you use the same aspect ratio, i.e. 3:4, 9:16, 1:1, etc. But the problem is that the same monitor can be used with different aspect ratios, and I was wondering if there is a way to overcome that? I imagine that in order to do so I would have to treat the resolution's ratio as an unreliable source, and acquire the physical ratio to base my conversion on.
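On the query side, the physical size is typically reported by the display's EDID data, which most platforms expose (on Windows, for example, GetDeviceCaps with HORZSIZE/VERTSIZE returns the reported width and height in millimetres), though monitors do not always report it accurately. The platform query itself is omitted here; this sketch only shows the reduction from physical millimetres to a ratio, with names of my choosing:

```python
from math import gcd

def physical_aspect(width_mm, height_mm):
    """Reduce a monitor's physical dimensions in millimetres
    (e.g. obtained from EDID) to a lowest-terms aspect ratio."""
    g = gcd(width_mm, height_mm)
    return (width_mm // g, height_mm // g)
```

Comparing this ratio against the current resolution's ratio tells you whether pixels are non-square, which is the correction factor the conversion system would need.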
  12. Synchronizing timers

    Hi, Thanks for the tips. Just to clarify: I don't expect to have perfect time-sync, I just wanted to reduce the problem of asymmetric latency to a minimum. Am I correct in understanding that you are basically telling me not to bother with asymmetric latency, since it almost never occurs in gaming?
  13. Synchronizing timers

    I've found this as well: However, it basically details the same algorithm: averaging half the round-trip latency over several samples. I think the part I don't understand is this: I was under the impression that the client-to-server latency can be very different from the server-to-client latency, which means that using ([recv time] - [sent time]) / 2 can be very inaccurate. So I guess what I'm looking for is to know whether this is a fair assumption, or otherwise whether there is a more accurate technique than that?
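The halved-round-trip formula does assume symmetric latency, and when the two directions differ the error can be up to half the asymmetry. The standard mitigation, used by NTP-style schemes, is to collect several samples and trust only the one with the smallest round trip, since a small RTT bounds the worst-case error regardless of asymmetry. A sketch, assuming each sample is a (client send time, server timestamp, client receive time) triple in consistent units:

```python
def estimate_offset(samples):
    """Estimate (server clock - client clock) from timing samples.
    Picks the sample with the smallest round trip; within that
    sample, latency is assumed symmetric. A sketch, not a full
    SNTP implementation."""
    best = min(samples, key=lambda s: s[2] - s[0])   # smallest RTT
    send, server, recv = best
    one_way = (recv - send) / 2.0
    return (server + one_way) - recv   # add this to client time to get server time
```

With many samples taken over a few seconds, the minimum-RTT sample tends to be one where queuing delay (the main source of asymmetry) was absent in both directions.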
  14. Hi all, Following my last post about extrapolation and time-based anti-lag techniques I've made some good progress in AAsteroids and got a basic algorithm working. I'd like to thank everybody who shared tips and info with me in that thread, thank you. I have now reached a follow-up point where the basic algorithm is working, but is crude, and I'd like to refine it. The problem: consider the following snippets from the game's logs.

    Client:
[CODE]
00:06:10.0766 [debug] Turning right, Angle: -6.167618
00:06:10.0766 [debug] Angle: -6.129788
00:06:10.0772 [debug] Angle: -6.092988
00:06:10.0777 [debug] Angle: -6.0561
00:06:10.0783 [debug] Angle: -6.028537
00:06:10.0789 [debug] Angle: -6.001168
00:06:10.0795 [debug] Angle: -5.974089
00:06:10.0801 [debug] Angle: -5.946125
00:06:10.0807 [debug] Angle: -5.918099
00:06:10.0813 [debug] Angle: -5.890485
00:06:10.0818 [debug] Angle: -5.863604
00:06:10.0824 [debug] Angle: -5.835265
00:06:10.0830 [debug] Angle: -5.808373
00:06:10.0836 [debug] Stopped turning
[/CODE]

    Server:
[CODE]
00:06:34.0170 [debug] Angle after rewind: -6.167618
00:06:34.0170 [debug] Turning right, Angle: -6.167618
00:06:34.0170 [debug] Fastforwarding 27 ticks
00:06:34.0170 [debug] Angle after fastforward: -5.173763
00:06:34.0170 [debug] Angle: -5.136973
00:06:34.0176 [debug] Angle: -5.100628
00:06:34.0182 [debug] Angle: -5.06374
00:06:34.0188 [debug] Angle: -5.026403
00:06:34.0193 [debug] Angle: -4.990155
00:06:34.0199 [debug] Angle: -4.952763
00:06:34.0205 [debug] Angle: -4.915812
00:06:34.0211 [debug] Angle: -4.879279
00:06:34.0217 [debug] Angle: -4.842765
00:06:34.0223 [debug] Angle: -4.805939
00:06:34.0229 [debug] Angle: -4.768663
00:06:34.0235 [debug] Angle: -4.730613
00:06:34.0241 [debug] Angle: -4.694594
00:06:34.0246 [debug] Angle: -4.657986
00:06:34.0252 [debug] Angle: -4.621612
00:06:34.0258 [debug] Angle: -4.585176
00:06:34.0264 [debug] Rewinding 27 ticks
00:06:34.0264 [debug] Angle after rewind: -5.541834
00:06:34.0264 [debug] Stopped turning
00:06:34.0264 [debug] Fastforwarding 27 ticks
00:06:34.0264 [debug] Angle after fastforward: -5.541834
[/CODE]

    To explain a bit about my logs:
    * Time is given in hours:minutes:seconds.milliseconds.
    * Each line that details the angle is produced by the tick function.
    * The lines that talk about turning, rewinding and fastforwarding are produced by the function that processes user events.
    * On the server the sequence of events is: rewind, apply turn, re-run ticks. These correspond to lines 1, 2 and 3-4 respectively in the server log. The client only performs the "apply turn" logic, which corresponds to line 1 in its log.
    * The calculation for how many ticks to go back is [connection roundtrip time] / 2 * 5.
    * My tick time is 5ms, but of course it's not accurate.
    * I am using Lidgren.Network with a simulated roundtrip of 200ms-300ms.
    * This is not directly relevant, but I'm also applying relaxation techniques on top of the time-based lag compensation.

    With these details you can clearly see from the logs what happens:
    1) The client performs a turn that lasts 64ms and results in a 20.6 degree clockwise turn.
    2) The server receives the turn event, applies the difference in the spin variable, then fastforwards 27 ticks. It actually starts with an angle which is beyond what the client reached altogether (which makes sense, since the client turned for 64ms, which is less than half the latency).
    3) The server receives the turn-stop event and again rewinds, processes, and fastforwards.
    4) The resulting turn on the server lasted 88ms and resulted in a 35.5 degree turn.

    The reasons for the crudeness are:
    1) I am not calculating which tick in the history I should rewind to based on each tick's time, but based on the crude assumption that each entry in the history buffer equals 5ms. This is something I can easily fix, however:
    2) The amount of time to compensate for is calculated crudely based on the current latency. This is the part I'm looking for some help with.

    What I had in mind is to add timestamps to user input events sent over the network. That way the server can more accurately calculate exactly which tick to go back to. However, to accomplish this I need an accurate method of finding the time difference between the server and the client during the connection phase. From there on everything else gets much easier. Ideally I would like to reach an accuracy of 20ms - 50ms. I have come across this: which I think is a good starting point. However, it says that the accuracy level is 100ms. My questions are:
    1) Am I even headed in the right direction? Is it sane to expect to reach a time-sync accuracy of 50ms or less?
    2) Are there any resources that anyone knows of that could be of use, be it working code examples, libraries or articles?
    3) In folks' opinion / experience, what is the best time-sync-over-lag technique?
    ( This is one brain cruncher! I'm really enjoying it.. :-) ) Thanks, A
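Once input events carry timestamps, the "each history entry equals 5ms" assumption can be dropped by storing each tick's actual time alongside its snapshot and searching for the right entry. A sketch of that history buffer (all names are illustrative, not the poster's actual code):

```python
import bisect
from collections import namedtuple

Snapshot = namedtuple('Snapshot', 'time angle spin')

class TickHistory:
    """Bounded buffer of timestamped per-tick snapshots. The server
    rewinds to the snapshot taken at or before an input event's
    timestamp instead of assuming a fixed 5ms per entry."""
    def __init__(self, capacity=256):
        self.capacity = capacity
        self.snapshots = []

    def record(self, snap):
        self.snapshots.append(snap)
        if len(self.snapshots) > self.capacity:
            self.snapshots.pop(0)

    def rewind_index(self, event_time):
        times = [s.time for s in self.snapshots]
        i = bisect.bisect_right(times, event_time) - 1
        return max(i, 0)   # last snapshot at or before the event
```

Because snapshots are recorded in time order, the binary search stays cheap even with a few hundred entries, and jittery tick lengths no longer corrupt the rewind distance.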
  15. Thank you for the information. To answer your questions:
    1) All "important" objects are replicated from the server to the client and extrapolated, so the asteroids as well as the laser projectiles are extrapolated. I make a distinction between "important" and "unimportant" or "decorative" objects. For example, the explosions and small fragments that result from shooting an asteroid are deemed "decorative", and so they are not replicated. This means that the explosions and fragments are different on the client and the server, but that's OK because they are "leaf nodes" in the game logic, i.e. they have no further effect on the game logic around them. The asteroids indeed look fine with the extrapolation, i.e. you don't feel the lag affecting you too badly in their case. So what you're saying makes sense - I should have another distinction level for objects that the client controls, and only apply the time-based techniques to them. These objects would be the ship and the laser projectiles. However, I'm not sure how to handle the following scenario: the ship is moving towards an asteroid and hits it at tick 200 of the server. The server handles the collision, reduces the ship's shield and changes the direction of both the asteroid and the ship. In the meantime the player brakes hard and avoids the collision - on the client the player successfully dodged the asteroid. The client sends the turn message to the server. The client has a lag of, let's say, 100ms, which equals 20 ticks. The server receives the turn message from the client at tick 210, so it goes back, finds the state of the ship at tick 190 and re-runs 20 ticks? There are two problems here:
    1) The ship already hit the asteroid and caused it to change direction; if I don't include the asteroid in this process, the asteroid will seem to change direction for no apparent reason.
    2) The ship already sustained damage. To reverse this I will need to save not only position and velocity in the history buffer, but the full state - i.e. shields, ammo, everything. That's a lot of things to hold, and also a lot of things to run game logic for, which in turn again may have an effect on other things. For example, if I reverse 20 ticks and re-run them, then a laser projectile that was previously spawned will now be spawned again. Instead I need to acquire it and change its current position based on a fixed starting position. This sounds like a mess to me.. So what am I missing?
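For what it's worth, rollback-style netcode generally does bite this bullet: it snapshots the full mutable state each tick and re-simulates, and the duplicate-spawn problem is handled by giving entities deterministic IDs so a re-run updates the existing projectile rather than creating a second one. A toy sketch of just the snapshot-and-resimulate part (all names and state fields are illustrative):

```python
import copy

class RollbackWorld:
    """Per-tick deep-copied snapshots of the whole state (shields,
    ammo, positions together), so a late input can rewind and
    re-simulate everything consistently."""
    def __init__(self, state):
        self.state = state            # e.g. {'ship': {'shield': 100, 'x': 0}}
        self.history = {}             # tick -> snapshot taken before that tick ran
        self.tick = 0

    def step(self, inputs=()):
        self.history[self.tick] = copy.deepcopy(self.state)
        for apply_input in inputs:    # each input is a callable mutating state
            apply_input(self.state)
        self.tick += 1

    def rollback_and_resim(self, to_tick, late_input, replays):
        """Restore the snapshot at to_tick, apply the late input,
        then re-run the stored per-tick inputs up to the present."""
        self.state = copy.deepcopy(self.history[to_tick])
        self.tick = to_tick
        late_input(self.state)
        for tick_inputs in replays:
            self.step(tick_inputs)
```

Deep-copying everything each tick is only viable for small states; real engines mitigate the cost with compact state structs or copy-on-write, but the principle is the same.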