wilberolive

  1. I thought the same thing too. Just wanted to check if anyone happened to know something I didn't about these websites.
  2. I was looking for a specific 3D model for a project I'm working on and I stumbled across this website, which happens to have the exact model I need: http://archive3d.net/

     I've seen websites like this before, hosting a whole bunch of so-called "free" resources. However, nowhere on the website is there any information about who created the model, what its license terms are, etc. How do these sorts of websites normally work? I'm just trying to figure out whether I can use the model I found there or not.
  3. In the past I've always worked on 3D games using proper physics engines and such. However, I'm now starting on my own game, which will be a simple 2D game a bit like the old Missile Command. I want to do some simple line intersection tests in 2D, but I don't want to pull in a physics engine just for this simple ray casting, as the game has no need for elaborate physics and collision response. Here is a mockup image I drew of what I want to achieve. The red line is the ground that the game objects will sit on. The green shapes are the collision volumes for the sprites on the ground. The blue lines are what I want to calculate. I would like to be able to specify a 2D start point and a 2D direction vector for the line, then detect the first thing it hits, whether that be the red ground or one of the green shapes. [attachment=28249:image.png]

     To achieve this, I'm thinking I need the math for three scenarios: a line-box collision check, a line-circle collision check, and a line-ground collision check. I imagine the first two should be pretty straightforward and would just be a matter of looping through the collision shapes, testing the line against each one, and returning the closest hit (i.e. the first hit). Does that sound right? Are there any links to this kind of math anywhere, or even a basic math library somewhere that can do this?

     Scenario 3 has me a little stuck, which is why I'm here. I was thinking that I could define a one-dimensional array storing the height value of each pixel along the red line from left to right, i.e. the left side of the red line in the image would be at index 0 in the array and the right side would be index n. This would effectively give a one-dimensional height map. How could I test a line against this height map? Is there math for this sort of thing?
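A minimal sketch of the ray-circle check and the height-map march described above. All names here (`rayCircle`, `rayHeightMap`, `Vec2`) are made up for illustration, and the height-map version only marches rightward for brevity; it is a starting point, not a finished collision system.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal 2D vector for the sketch.
struct Vec2 { float x, y; };

// Ray-circle test: substitute the ray P = O + t*D into |P - C|^2 = r^2
// and solve the resulting quadratic for t. Returns the smallest t >= 0,
// or -1 if there is no forward hit. D is assumed to be normalized.
float rayCircle(Vec2 o, Vec2 d, Vec2 c, float r) {
    Vec2 m{o.x - c.x, o.y - c.y};
    float b = m.x * d.x + m.y * d.y;          // dot(m, d)
    float cc = m.x * m.x + m.y * m.y - r * r; // dot(m, m) - r^2
    float disc = b * b - cc;
    if (disc < 0.0f) return -1.0f;            // ray misses the circle
    float t = -b - std::sqrt(disc);           // nearer root
    if (t < 0.0f) t = -b + std::sqrt(disc);   // origin inside the circle
    return t >= 0.0f ? t : -1.0f;
}

// Ray vs. per-pixel height map: step one column at a time and stop when
// the ray drops to or below the stored ground height. Returns the x index
// of the first hit column, or -1 if the ray leaves the map without hitting.
int rayHeightMap(Vec2 o, Vec2 d, const std::vector<float>& heights) {
    if (d.x <= 0.0f) return -1;               // marching left omitted for brevity
    float slope = d.y / d.x;                  // change in y per unit x
    for (int x = (int)std::ceil(o.x); x < (int)heights.size(); ++x) {
        float y = o.y + slope * ((float)x - o.x);
        if (y <= heights[x]) return x;        // ray is at or below the ground
    }
    return -1;
}
```

The line-box case works the same way as the circle: intersect against each slab of the box and keep the nearest positive t, then return the closest hit over all shapes.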
  4. I'm toying with an idea in my head for a 2D game that involves a fluid-like substance on the screen that changes color and moves as sprites move over (or "through") it. In my research I came across a few other games that do something similar to what I'm thinking. One that I quite like is an old game called Plasma Pong. Here is an example: https://www.youtube.com/watch?v=NDjseVmruH8

     From what I understand, that game uses proper fluid dynamics, which is probably overkill for what I'm thinking, but the effect is similar. I'm thinking of a pixel shader applied to a full-screen quad, then using a render target to set pixels for the position and color of the sprites on the screen. The pixel shader can then use this render-target texture to change the "plasma" so it moves and changes color. Would an approach like this work? I'm not a shader expert, so I'm just not sure where to start with this.
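The usual structure for this kind of effect is two render targets that ping-pong: each frame reads the previous frame's texture, blurs and fades it, and sprites "splat" new intensity in. A toy CPU version of that loop, with invented names and constants (a real version would run the `step` body as a pixel shader sampling the previous frame's texture):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Toy CPU stand-in for the ping-pong render-target idea.
constexpr int W = 8, H = 8;
using Grid = std::array<float, W * H>;

// A sprite injects plasma intensity at its position.
void splat(Grid& g, int x, int y, float amount) {
    g[y * W + x] += amount;
}

// One frame: read the previous buffer, average the 4 neighbours
// (clamped at the edges) and fade, writing into the other buffer.
void step(const Grid& prev, Grid& next, float decay = 0.95f) {
    auto at = [&](int ix, int iy) {
        ix = ix < 0 ? 0 : (ix >= W ? W - 1 : ix);
        iy = iy < 0 ? 0 : (iy >= H ? H - 1 : iy);
        return prev[iy * W + ix];
    };
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            float avg = 0.25f * (at(x - 1, y) + at(x + 1, y) +
                                 at(x, y - 1) + at(x, y + 1));
            next[y * W + x] = avg * decay; // blur spreads, decay fades
        }
}
```

The averaging makes the splatted intensity spread outward over frames and the decay makes it die away, which gives a plasma-ish look without real fluid dynamics; swapping the two buffers each frame is the ping-pong.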
  5. Never mind, I discovered that the C++ standard library has an exp() function that does exactly what it says on the box. Your solution works brilliantly! I can't believe I wasted so much time trying to solve something that ended up being about 5 lines of code... wow, why don't I come here more often instead of wrestling with these things on my own?!
  6. OK, that makes sense, except I don't understand the last line. I'm not sure what exp(logZoom) means or how it is calculated. I've tried Googling it but have so far not found an explanation that makes sense to me.
  7. For some reason I'm struggling to work out something that I thought would be simple. I'm working on a 2D game and I'm trying to get zooming working. Here's what I've got to work with:

     minZoom = 0.2f <-- the smallest zoom amount (i.e. the whole map can be seen on the screen)
     maxZoom = 4.0f <-- the largest zoom amount (i.e. the map is zoomed in to 4x its size)

     My first thought was that I could just linearly interpolate between the two values. I wanted something like 10 or 20 discrete steps between minZoom and maxZoom (I'll decide how many once I get this working and see how it feels). However, linear interpolation doesn't work because with a fixed number of steps, the step size is the same between each step. This means that as you zoom out, the jump in apparent size gets bigger and bigger, so the zoom feels exponential: near maxZoom it is very slow (turn the mouse wheel a lot for not much zoom) and near minZoom it is very fast (turn the mouse wheel a little and it jumps a lot).

     So to solve this I think I need to do some sort of easing, so that the number of steps is fixed but the step sizes are variable. They should start at size x at maxZoom and then get smaller as you approach minZoom. What size x is, and how fast the steps shrink, will probably require some trial and error to get the feel right so that the zoom feels constant.

     I'm stuck on the math for this. I've tried a number of things but can't get anything to work right. I found the quadratic easing-in function here http://gizma.com/easing/#quad1, which seems to do what I want, but the problem with that function is that it works over a period of time, and I don't know what that "period of time" would be here.

     I have my two extreme values (4.0 and 0.2) and the number of steps in between (let's say 10, but it could be 15 or 20). Each time the mouse wheel is "clicked" I want to increment that step by 1 up or down depending on which way the wheel is rolled. I need to come up with a function that will interpolate between 4.0 and 0.2 in 10 steps (inclusive, i.e. step 0 = 4.0 and step 9 = 0.2) but with a variable step size, so that it starts with a step size of x which is reduced with each step in order to slow down the zoom.
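One way to get the constant-feeling zoom described above is to interpolate the *logarithm* of the zoom rather than the zoom itself: every wheel click then multiplies the zoom by the same constant factor, and exp() converts the interpolated log back into a zoom value (exp(log(z)) == z). A small sketch; the function name is made up:

```cpp
#include <cassert>
#include <cmath>

// Zoom that feels uniform per wheel click: interpolate log(zoom)
// linearly, so each step scales the zoom by a constant ratio.
// step 0 -> maxZoom, step (numSteps - 1) -> minZoom.
float zoomForStep(int step, int numSteps, float minZoom, float maxZoom) {
    float t = (float)step / (float)(numSteps - 1); // 0..1 across the steps
    float logZoom = std::log(maxZoom) +
                    t * (std::log(minZoom) - std::log(maxZoom));
    return std::exp(logZoom); // same as maxZoom * pow(minZoom / maxZoom, t)
}
```

With minZoom = 0.2, maxZoom = 4.0 and 10 steps, the step sizes are large near 4.0 and shrink toward 0.2, while the *ratio* between consecutive zoom levels stays constant, which is exactly the variable-step behaviour described.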
  8. I've noticed that ever since Oculus VR purchased RakNet, it seems to have died. The last official release was in June. The GitHub repository hasn't had any real updates since July last year, with a bunch of unattended issue reports. Rakker himself hasn't been active on the forums since March last year (presumably because he sold it), and the RakNet forums are very quiet now; in fact there is more spam on them than actual posts, and posts from people looking for help get no replies.

     I'm just about to start a new project and am investigating networking options at the moment, so I have been looking into RakNet, but it seems to have stalled since the acquisition. I don't want to start using something that has no support or active development. Does anyone know anything more about this?
  9. Hey thanks for the reply. I actually thought of the same thing you suggested too. I just wanted to check if there was another way of doing it.
  10. I'm just wondering if it is possible to create an asIScriptObject in C++ and add member variables to it. I know I can declare an AngelScript class and use it to create an asIScriptObject, but what I'd really like is to parse an XML file of name and type pairs and then build an asIScriptObject from it. For example, if the XML contained <testMember>int</testMember>, I would like to create an asIScriptObject instance and then do something like testObject->addMember("testMember", "int"). The idea is that later on I can pass these dynamic asIScriptObjects to AngelScript scripts to be operated on.
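As far as I know there is no addMember-style call on asIScriptObject; a common workaround is to generate AngelScript class source from the parsed name/type pairs and compile it as a module, then instantiate that class. Below is a sketch of just the string-building part (the helper name is invented); feeding the result to the engine would go through asIScriptModule::AddScriptSection() and Build(), then asIScriptEngine::CreateScriptObject() on the resulting type.

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Turn parsed (name, type) pairs into an AngelScript class declaration.
// e.g. {"testMember", "int"} -> "class DynObj {\n    int testMember;\n}\n"
std::string buildClassDecl(
        const std::string& className,
        const std::vector<std::pair<std::string, std::string>>& members) {
    std::string src = "class " + className + " {\n";
    for (const auto& m : members)           // m.first = name, m.second = type
        src += "    " + m.second + " " + m.first + ";\n";
    src += "}\n";
    return src;
}
```

The trade-off is that the "dynamic" layout is fixed at module-build time, but scripts can then operate on the object like any other script class instance.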
  11. I did not realise this. Please excuse my ignorance. That is good news though, and has persuaded me to take another look at it. I also didn't know that it would fall back to regular AngelScript execution if needed, which is also good to know.

      One point I forgot to mention in my previous post was that the JIT Compiler appears to only support Windows and Linux? I am planning on targeting Windows, Mac and Linux. Do you know if the JIT Compiler would build and run on Mac?
  12. Thanks for replying. Guess I'll just have to keep stumbling along with the @ symbol.

      I have seen Blind Mind's JIT Compiler, but I was under the impression that it isn't really supported any more? It doesn't seem to be updated these days, so I didn't think it was safe to rely on something that could break when a new version of AngelScript is released, since the JIT Compiler would need to be kept up to date with every new AngelScript version, right? The last update to the JIT Compiler was for AngelScript v2.27, which is over a year old now. Not to mention that if there were a bug and nobody around to fix it, you'd be left on your own. I consider myself a seasoned C++ programmer, but unfortunately not good enough to fix a bug in the JIT Compiler if one existed, or to keep it up to date with AngelScript myself if I were to use it.
  13. I've been working with AngelScript for about 12 months now and am getting progressively more tired of the @ symbol. For some reason it always confuses me and trips me up. I've read the documentation about it a number of times, but I always seem to struggle with this one aspect of AngelScript. Other than that I've found AngelScript to be really good (except for the lack of a supported JIT compiler, but that's another question).

      I was just browsing the AngelScript website and noticed the removal of the @ symbol under the "Work in progress" section, which made me pretty happy. Why wasn't this decision made earlier? I would love to see all reference types treated as handles. Aren't they almost the same thing anyway? Is there a reason why this change is under the "long term goals", and how "long term" are we talking here?
  14. Are you implying that I should save the entire physics state of all objects (including the player's character controller) after every simulation step?   Then find the matching historical state for the server correction message, reset the physics state of all objects to that historical state, then replay from there, applying the player's unacknowledged moves?
  15. I've been researching and experimenting with using simple physics in a multiplayer game. Specifically, I'm really only interested in having a character controller move around a 3D environment. To do this I've been using PhysX, which works great. My game doesn't need any more advanced physics than that.

      I have read and followed a lot of the material on Glenn Fiedler's site, which everyone seems to point people to when they start asking these sorts of questions. His material has been really helpful and I've got something working. However, there is one part that I simply cannot get working: applying the corrected state on the client when it is received from the server. Specifically, Glenn says to "rewind" the physics state of the player, apply the correction from the server, and then replay the rest of the player's moves.

      This works fine in a situation where you have explicit control over the player's movements. However, when using a physics engine (like PhysX or Bullet) you have virtually no control over the character controller's state or the movements it makes. All you can do is apply forces to it. It's not possible to rewind the physics state of the character controller to correct it and then reapply the player's unacknowledged moves. How have other games done this using physics engines like these?

      The best I've been able to come up with so far is to apply corrections using forces. For example, when a correction comes back from the server and I determine that the player's current position is out by a bit from where the server says it should be, I start applying small forces to the character controller to gradually bring it in line with the server over several frames. There's no rewinding going on with this. If the client and server disagree by too much, I just snap the player to where they should be (which rarely happens). Although this "seems" to work, there is a lot of rubber-banding when, for example, the server says you have run into an object but the client thinks you haven't.

      Do other games not use full physics engines for character movement around a 3D level? Do they do something else that gives more control and allows for rewinding and correction?
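The gradual-correction approach described in the last post can be sketched roughly as below. The names and constants are made up, and a real implementation would apply the correction through the physics engine (forces on the controller) rather than setting positions directly; this just shows the blend-or-snap decision.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Soft server reconciliation: each frame, move the client position a
// fixed fraction of the way toward the server's authoritative position,
// and snap outright when the error is too large to ease away smoothly.
Vec3 reconcile(Vec3 client, Vec3 server,
               float blend = 0.1f, float snapDist = 2.0f) {
    Vec3 err{server.x - client.x, server.y - client.y, server.z - client.z};
    float dist = std::sqrt(err.x * err.x + err.y * err.y + err.z * err.z);
    if (dist > snapDist) return server;   // too far off: hard snap
    return {client.x + err.x * blend,     // otherwise ease toward the server
            client.y + err.y * blend,
            client.z + err.z * blend};
}
```

The blend factor trades smoothness against correction speed: a small value hides corrections but lets errors linger (more rubber-banding when the states genuinely diverge), a large value converges fast but looks like a snap. Proper rewind-and-replay avoids that trade-off, which is why engines that support it serialize and restore the full simulation state each tick.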