Enosch

Members
  • Content count: 39
Community Reputation: 133 Neutral

About Enosch

  • Rank: Member
  1. Believable Futuristic Technology

    In English, technically lightning IS silent (we call the resulting sound thunder), lol. But yeah, ionization of air is a little noisy.

    For computing in the future, I think we will soon move past actual electronic devices. With the advent of cheap nanotech and nanofactories, we will be able to create nanocomputers with an incredible computing power/size ratio. We will also be able to store information at the atomic (and eventually subatomic) level, so data storage (at our scale of size) will be inconsequential. As we move into nanocomputing, we will see all technology become more integrated, so I don't see us using human interface devices for much longer. By integrated, I mean that we will be able to keep nanobots inside our bodies that enhance our biological functions. We will be able to communicate "wirelessly" via the nanobots, directly from the electro-chemical signals in our brain to someone else's brain. Many obstacles stand in the way of this, not the least of which is that each human's brain stores and uses information differently (like each person has a different and unknown computer architecture and OS), but we will overcome those. Eventually, I believe that we will shed our biological bodies over time via tech enhancements and run our processes ("consciousness") on more stable and efficient "bodies".

    On teleportation, we will definitely achieve this, though with nanofactories it will most likely be more plausible to make a copy by sending over the "blueprints" of an object than to decompose an object, ship the particles, then reassemble them. It is a weird idea that you could (theoretically) make a copy of yourself at a given time and rebuild it next to you. Kind of makes you wonder if that "person" is you, or if you are?

    On weapons, I think that in the future, nanoweapons will be the most alarming. Think about it. Nanotech has already produced a "piranha" type of fluid that breaks down only biological substances (like living things). Imagine a gas bomb of this substance dropped on a group of people. Or swarms of remote- or AI-controlled nano "bees" that could be targeted onto a group of enemies. I would imagine it would be difficult to fight back against nanoweapons with modern technology, though we will have such amenities by the time these are developed.

    Just my two cents. Oh yeah, and I think that with the exponential growth of technology, these advances will come within the next century, if not sooner. Check out Ray Kurzweil on Google. He has a few books out about future technologies. - Enosch
  2. questions about gravity

    Just orient your player so that his "up" vector is the negation of the gravity vector. For instance, if your gravity vector is [0, -9.8, 0], then your player's up vector is [0, 1, 0] (the negation of the normalized [0, -1, 0]). To get your effect very simply, draw your character (right side up), then set the camera's up vector to be the opposite of gravity (as above) and draw everything else. That should make everything right side up when gravity points down, and upside down when gravity points up. But your character will always be right side up. This will work for a 3rd-person game where your camera orientation is relative to your player's orientation. I am sure there are better solutions, but maybe this will help you get started. Good luck dood! - Enosch
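    A minimal sketch of that idea, assuming a hypothetical Vec3 type and helpers (none of these names come from the original poster's engine):

        #include <cmath>

        // Hypothetical minimal vector type for illustration.
        struct Vec3 { float x, y, z; };

        Vec3 Normalize(const Vec3& v) {
            float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
            return { v.x/len, v.y/len, v.z/len };
        }

        // The player's (and camera's) up vector is the normalized negation
        // of gravity: gravity (0, -9.8, 0) gives up (0, 1, 0).
        Vec3 UpFromGravity(const Vec3& gravity) {
            return Normalize({ -gravity.x, -gravity.y, -gravity.z });
        }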
  3. Beginning Game Making Please Help

    I think you misunderstood what I meant. Using a game engine to make games is GREAT. Nearly all professional games have engines under the hood that run things (Torque, Unreal, etc.), so using one that is already done will take that burden off of you. With that said, you will still need to design gameplay code that uses the engine of your choice.

    I've heard a lot of great things about Torque. It provides tools and scripting along with its engine that streamline gameplay and content generation. However, if you are planning to use scripting for your gameplay, I would imagine that you will run into a point where things run too slowly, because interpreted (scripting) languages are not fast. Using C++ code for the bulk of your gameplay module will speed things up, allowing you to add more gameplay. Scripts are most useful for things that require heavy tweakage and for things that may change a lot. Think of scripts like content: they are easily generated, but they aren't the meat and potatoes of your professional game. I would bet that most of those games you saw use another language like C++ alongside Torque.

    Also, even with the content generation tools available with Torque (and others), you will still need to devote a serious amount of time to content generation. I think it is correct to say that most of a game is content, not code, so even with the engine and gameplay out of the way, you are still looking at a very large undertaking to generate content for your game.

    Definitely check out Torque if you are looking for a game creation tool. If you want to get something together very quickly and learn a lot about scripting and generating content, you might check out the Unreal or Quake engines and start making a mod. Not only will this be fun and very informative, having a completed MOD under your belt will make you a more enticing candidate for game dev jobs. From what I understand, making a MOD is easier than, but still similar to, making games with a game system like Torque. I hope I've helped a bit. - Enosch
  4. Beginning Game Making Please Help

    Heya dood. First off, congrats on your huge endeavor. I hope you find it as rewarding as everyone else does. Second, I would (in my soon-to-be-professional opinion) recommend learning C++ and DirectX. C++ because it is (currently) the language used by most professional game dev companies. If you want a job working for one of those, you pretty much need C++. The reason is that most or all of their existing code bases are in C++, and it is fast. Whether C++ will remain the de facto language or be replaced by C# or others is still up for debate (rather hot debate at that). It is NOT the only language you can use, and not the only one you can get a job with. But if getting a job is your priority, C++ opens up more doors at the moment.

    DirectX is (again, currently) the most used API for games, because most games run on Windows, because most people have Windows. While that may or may not change in the future, it is the way it is right now. And while DX is not often a requirement for general programmer jobs, it is a definite bonus and a declaration that you are well-rounded and experienced enough to use it. Knowledge of other APIs will do this too, but DX is one of the most desirable.

    Having said that, you want to use a language that feels natural to you. If you will be writing your own games, this will increase your productivity and your enjoyment of the process. However, keep in mind that there are limits to what some languages can and can't do. Visual Basic isn't made for real-time 3D games. C++ may be overkill for a simple tic-tac-toe game. You need to find the kinds of games you want to make and learn accordingly.

    Also, what do you mean by professional quality? If you are thinking of writing the next HALO, you will never finish working on it by yourself. By the time you learn all you need to make HALO and then make it (maybe 7-8 years working full time, lol), games will have moved so far past that level of quality that you will be lucky to get it onto the shelf at Bud's Movie Rental. Realistically, the odds are against you completing a project that large, simply because of all the content required. If you have never worked on a "real" game before, you may not realize just how much work goes into it. Considering only the gameplay programming (assuming you have a fully featured engine), you will need to develop many different complex systems just to get a game that is playable, forget professional quality. After you get gameplay that works, you still need music, sound fx, models, animations, textures, physics data, AI data, levels, visual effects, etc. Most or all of these are usually generated with tools that you may find online or at a software store. For the rest, you will need to design and build your own, which is another task in itself. Finally, you actually need to use the tools (3D graphics editors, for instance) enough to get good at them. Most programmers really suck at art and music generation (hasty generalization) because we don't use the tools enough and/or lack the talent that artists have. THAT is why so many dev companies have large teams. Professional-level games made by large teams of experienced professionals usually still take 1-2 years to complete, because not only do you need to make the game, you have to make it fun and idiot-proof. That in itself is a daunting task, because by the time you think you've solved it for the most idiotic idiot out there, they go and make a better one. (Jeez this is getting long...)

    My overall point (wow, that thing I started to get to at the beginning...) is that while your goal is admirable, it is unlikely to work out and may just lead to frustration, even if you actually have what it takes to get good at this. My advice is to not quit your day job. Unless you are incredibly gifted and lucky (more so than most of the really smart folks here), you won't be able to start cranking out good games after only a month of C++. Software development is an infinite learning process, so if you are up for that (and you sound like you are), this industry is where you want to be. Just start small. As Kazgoroth said, you won't "master" C++ or DX in a year, probably not even in two if you have an ounce of a life. So keep working where you are and just learn as fast as you can. I would definitely advise going to school. A four-year degree will open up so many doors for you, and you will not only learn a lot there but also have a chance to learn on your own. Alternatively, there are a few two-year degree schools in the US where you can study ONLY game development. When you have made a few small games and feel confident in your language, try to join one of the groups here on gamedev that is working on a project that interests you. I think you'll find plenty of challenges and new stuff with just that. Finally, when you are experienced enough, try to get a job working for a game dev company. I know that from reading some of the posts around here it may sound nearly impossible to get hired, but if you know your stuff you can do it (I did, and I've only been programming for 2.5 years; I'm only 21). After you learn all you can there, then move out to your own shop. You will have the experience and professional contacts needed to make it happen. I know this doesn't look very exciting, but not much in life worth having happens easily. I would hate for you to jump out there and start making games only to flop a couple of months down the road when you run out of money. There is SO much out there to learn that you haven't seen yet. Keep asking questions. Best of luck to you. And have fun. - Enosch
  5. 3D Compass

    To get the compass in 3D coords, try this. You have the Camera position [x, y, z] and the Camera's direction vectors. If you don't keep the direction vectors (it helps a LOT), you can compute them from your Camera's rotation (I won't describe that here) without much difficulty. I always recompute them each time the Camera's rotation changes. Once you know those vectors, positioning the compass in 3D space is easy. For instance, if you want it in the upper-right corner of the screen, d distance in front of the camera, you can simply say:

        Compass.position = Camera.position + Camera.rightVec + Camera.upVec + Camera.frontVec * d;

    The frontVec is scaled by d to move the compass d distance directly in front of you, then you simply add the up and right vectors to get it to the upper-right corner, add the camera's 3D position, and voilà, it is positioned in world space and is always in the upper-right corner of your screen.

    Having said all that, it is probably more intuitive to just use screen-space coords, since it is a HUD object, not a game object. That way, you can position it on the screen instead of in the world. Making it always point north shouldn't be a problem, though. Just apply a rotation in the opposite direction to your camera's. For instance, if your camera is facing north, the compass points straight ahead. As the camera rotates left, the compass should rotate right by the same angle, so that when the camera is facing south, the compass is facing straight back. Sounds like a cool HUD object to have. Good luck making it work! - Enosch
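    A minimal sketch of both pieces, assuming a hypothetical yaw-based camera with cached direction vectors (all names here are illustrative, not from the thread):

        struct Vec3 { float x, y, z; };

        Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
        Vec3 operator*(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }

        // Hypothetical camera with cached direction vectors and yaw angle.
        struct Camera { Vec3 position, rightVec, upVec, frontVec; float yaw; };

        // Anchor the compass in the upper-right corner, d units ahead:
        // front*d pushes it out, right and up shift it into the corner.
        Vec3 CompassPosition(const Camera& cam, float d) {
            return cam.position + cam.rightVec + cam.upVec + cam.frontVec * d;
        }

        // Counter-rotate so the needle keeps pointing north: the compass
        // turns right by exactly the angle the camera turns left.
        float CompassYaw(const Camera& cam) { return -cam.yaw; }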
  6. Linear motion problems

    I finally figured out the problem. I was updating all particles in the physics sim every frame from within the physics engine, and also updating all scene graph nodes from within the scene graph. This updated the linear motion of the "particles" (objects) twice each frame. Easily fixed. Thanks for the help though. - Enosch
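    A minimal sketch of the fix, with hypothetical types (not the engine's real classes): integration lives in exactly one place, and the scene graph only mirrors the result.

        struct Particle { float pos[3]; float vel[3]; };

        // The ONLY place linear motion is integrated each frame.
        void PhysicsUpdate(Particle& p, float dt) {
            for (int i = 0; i < 3; ++i)
                p.pos[i] += p.vel[i] * dt;
        }

        // The scene graph node copies the simulated position; it must not
        // integrate again, or motion advances twice per frame.
        void SyncNode(float nodePos[3], const Particle& p) {
            for (int i = 0; i < 3; ++i)
                nodePos[i] = p.pos[i];
        }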
  7. Hey guys, I've recently implemented a "scene graph" in my 3D engine. Mainly, it supports hierarchical relationships using a matrix stack. Right now, it is demoed VERY simply with the terrain as the root, a dwarf model as its child, and the camera as the dwarf's child. This way, you move the camera relative to the dwarf model, which gives a 3rd-person camera effect that works nicely. The problem happens when you move the dwarf model. I use a linear motion model with a position and velocity. There are forces that act on each particle, like gravity and drag, and particles may collide with each other and with the terrain. Currently, gravity and drag are applied to the dwarf, and collision is only considered against the terrain. When it moves, its image (onscreen) appears to "skip". It moves correctly, but the image seems to shake from time to time around its accurate position and orientation (quaternion rotation). Any ideas? I'll try to get a demo posted asap. Thanks, - Enosch
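    For reference, a minimal sketch of this kind of matrix-stack traversal, with hypothetical Node/Matrix44 types (not the engine's actual classes):

        #include <stack>
        #include <vector>

        struct Matrix44 { float m[4][4]; };

        // Compose two transforms (the multiplication convention is arbitrary here).
        Matrix44 operator*(const Matrix44& a, const Matrix44& b) {
            Matrix44 r = {};
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j)
                    for (int k = 0; k < 4; ++k)
                        r.m[i][j] += a.m[i][k] * b.m[k][j];
            return r;
        }

        struct Node {
            Matrix44 local;               // transform relative to the parent
            std::vector<Node*> children;  // e.g. terrain -> dwarf -> camera
        };

        void Render(const Node&, const Matrix44&) { /* stub: draw here */ }

        // Push parent * local, draw, recurse, pop: each child inherits its
        // parent's accumulated world transform. Seed the stack with the
        // identity matrix before calling this on the root.
        void Draw(const Node& node, std::stack<Matrix44>& matStack) {
            matStack.push(matStack.top() * node.local);
            Render(node, matStack.top());
            for (const Node* child : node.children)
                Draw(*child, matStack);
            matStack.pop();
        }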
  8. OpenGL Need help on a very simple prob

    If I understand your problem correctly, you want both sides of each face to be lit. As Thought suggested, this is a setting that can be modified. Try:

        glDisable( GL_CULL_FACE );

    This should make sure both sides of every face are drawn. Then, don't specify normals when you are drawing; this "should" draw both sides equally. I'm not sure how this is defined if you specify lights. If you disable lighting, though, I would think this would work for you. If there are different / better solutions (there most likely are), please feel free to correct me. - Enosch
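    As a hedged follow-up sketch: if lighting stays enabled, fixed-function OpenGL can also be told to light back faces explicitly (GL_LIGHT_MODEL_TWO_SIDE is my addition here, not something from the thread):

        /* Draw both sides of every face instead of culling back faces. */
        glDisable(GL_CULL_FACE);

        /* Light back faces too, using flipped normals; otherwise the back
           side of each lit face stays dark. */
        glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);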
  9. I use a sky sphere instead of a skybox for my 3D engine. It looks the same and is more intuitive (no adjusting to avoid corners, for instance). You still leave off things like lighting, and you can create your sky sphere with minimal faces. To implement your effects, simply have the sky sphere at radius x and the sky-effects sphere at radius x - y. Then you can just rotate the actual mesh instead of the UV coords. Heck, you could even make multiple spheres and do multiple effects, or have some clouds moving fast and some slow. See the sketch below.

    Btw, if you are using particle effects for the clouds instead of textures, it would be simpler in my opinion to just make them face the camera and swing them around the camera at radius x - y (the same as above; just place them at that distance, don't have a separate sphere with that radius). You could set up cloud layers by simply having clouds at different radii. For instance, if you wanted 5 layers, set the minimum "altitude" (read: radius) and simply place the other four layers evenly between that and your sky sphere radius. Using this method would make your effects "stick" to your sky sphere. If your sky goes all the way down to the horizon, the clouds do too, lol. Good luck dood! - Enosch
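    A minimal sketch of the layered version, with hypothetical names (the radii and speeds are placeholders to tune):

        #include <vector>

        struct CloudLayer { float radius; float rotationY; float speed; };

        // Place n layers evenly between a minimum "altitude" and the sky
        // sphere's radius (the x above), each with its own drift speed.
        std::vector<CloudLayer> MakeLayers(int n, float minRadius, float skyRadius) {
            std::vector<CloudLayer> layers;
            float step = (skyRadius - minRadius) / n;
            for (int i = 0; i < n; ++i)
                layers.push_back({ minRadius + i * step, 0.0f,
                                   0.02f - 0.003f * i });  // some fast, some slow
            return layers;
        }

        // Rotate each layer's mesh (not its UV coords) a little every frame.
        void UpdateLayers(std::vector<CloudLayer>& layers, float dt) {
            for (CloudLayer& layer : layers)
                layer.rotationY += layer.speed * dt;
        }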
  10. BIG spells in mmorpg's

    I've always wondered about using keypresses as spells in MMOs. Sounds like a good idea to me. I think it would be an interesting game where spellcasting is a separate "language" that must be learned in order to be a decent wizard. You would still need some sort of "level" system in place behind it. Perhaps casting takes mana over time, so only more powerful wizards may cast longer spells. But this would place spellcasting skill in the players' hands, removing "push-button" playing. This would make wizards more vulnerable, but you could offset that by making spells more powerful or flexible.

    I also think it would be cool to redo the stealth system of most MMOs. Instead of a simple "invis mode" + "backstab" approach, do something more like Metal Gear Solid or Manhunt or Tenchu. You could still have the "invis mode", but make it temporary and a spell or something. The problem with this is that it makes stealthers less viable in open-field combat, where they currently roxor. But in an urban (time period open) type of environment, stealth MMOs would be cool.

    And archery. If I were making an MMO, I would have the ranged weaponry behave like FPS games, where you actually have to hit what you are aiming at, not just select it and let the computer decide whether or not you hit from 2 feet away. That would be sweet. The main idea, I think, is to have an MMO where more skill is required to play a class than figuring out the best attacks and pressing that button as fast as possible. My two copper. - Enosch
  11. Question about LOD's

    ROAM isn't THAT slow if you do it right. There is a paper on Gamasutra that I can't think of the name of (if anyone knows it, feel free to add). It implements ROAM with a variance tree instead of a distance metric to speed things up. I get around 40-50 FPS on my 2.8GHz, Radeon 9800 machine rendering 1024x1024 terrains with 4 textures blended by height. I haven't optimized it yet, but it works well enough for now. It runs between 4000-5000 tris at a time (fewer if you aren't looking at the terrain) and dynamically sets the LOD rather realistically. The only problem I have atm is a slight popping while moving around, but I plan to quell this with geomorphing when I have time. Just my two cents.

    I've heard that simply using lots of little patches at a single LOD works as well, but for a 1024x1024 terrain you would have roughly 1024 / 20 = 51 patches per side, which is 51*51 = 2,601 center points to cull against your camera. Also, if you only test the center of each patch against the camera, you get severely noticeable popping when a patch enters the view frustum. And if you test all four corner vertices, you get 2,601*4 = 10,404 points to test. And yes, as you said, quadtrees can make this a lot less.

    Finally, if you still want to use ROAM but don't want to deal with dynamically changing terrain, there is another paper on Gamasutra (again, no name) that deals with using ROAM to generate the level of detail based on flat vs. lumpy areas on a map. This means you generate few triangles in flat areas without much detail and many more triangles for hills where you need lots of detail. This is done only once when you load the terrain; then you can divide it into sections and use quadtrees like ade-the-heat mentioned. You get a static level of detail and can even (with the algorithm in this paper) set the max number of tris to use for a map (maybe scalable by system speed: fewer tris/less detail for slow systems, more/higher detail for faster systems). I plan to use this one for my MMO engine later. Good luck dood, - Daniel
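    For the curious, a hedged sketch of the variance-tree idea (the Height() lookup and array size are assumptions, and this is the general technique, not the paper's exact code): each node stores the worst midpoint error in its subtree, so the runtime split test becomes a table lookup instead of a distance computation.

        #include <algorithm>
        #include <cmath>

        const int MAX_NODES = 1 << 12;            // variance tree depth (tunable)
        float gVariance[MAX_NODES];

        float Height(int, int) { return 0.0f; }   // stub heightfield lookup

        // Call with node = 1 and the patch's corner coordinates once at
        // load time (and again whenever the terrain changes).
        float ComputeVariance(int node,
                              int lx, int lz,     // left corner
                              int rx, int rz,     // right corner
                              int ax, int az)     // apex
        {
            int mx = (lx + rx) / 2, mz = (lz + rz) / 2;  // hypotenuse midpoint
            // Error committed if this triangle is left unsplit.
            float v = std::fabs(Height(mx, mz)
                                - 0.5f * (Height(lx, lz) + Height(rx, rz)));
            if (2 * node + 1 < MAX_NODES) {              // recurse to leaf depth
                v = std::max(v, ComputeVariance(2 * node,     ax, az, lx, lz, mx, mz));
                v = std::max(v, ComputeVariance(2 * node + 1, rx, rz, ax, az, mx, mz));
            }
            return gVariance[node] = v;
        }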
  12. Help on my DX engine

    Any ideas? I am thinking that the problem is my near / far clipping planes. It looks like it takes a 2D "slice" and draws that. However, I have had that problem before and it still applied textures and materials. And I double checked my near / far distances and they are set to 0.18f and 100.0f respectively. The projection matrix is updated using D3DXMatrixPerspectiveLH() with these values each frame. ANY suggestions would be greatly appreciated...
  13. Hey guys. I'm writing my own DX graphics engine and getting some odd results. Wondering if anyone might have some suggestions. My design works like this: the gfx namespace contains all graphics classes.

    Resources (wrappers for the D3D versions): gfx::cShader, gfx::cTexture, gfx::cMesh, gfx::cMaterial. The gfx::cEngine class manages engine components (camera, window, etc.). The gfx::cCamera class manages the view and projection transforms: in its Update function (called each frame) it computes the projection and view matrices and sets them on all loaded shaders. The gfx::cWindow class manages Win32 and D3D initialization, alt+tabbing, etc.

    My meshes currently only support loading from .X files, which I've used before. I'm currently loading the dwarf.x mesh to test. It loads the mesh into the D3DXMESH and loads the materials into a material buffer, then loops through the material buffer and creates each material with the values loaded. The materials in my engine contain the textures that material uses, so the engine then takes the filename in the D3DXMATERIAL it is loading from (the material buffer) and loads that texture (LPDIRECT3DTEXTURE9) into the material. These are stored in an array that the attribute buffer indexes (like normal).

    My mesh rendering looks like this:

        void gfx::cMesh::Render(const cMatrix44& matWorld)
        {
            iUInt numPasses = 0;
            shader.SetWorldMatrix(matWorld);
            shader.Begin(numPasses);
            for (iUInt i = 0; i < numPasses; ++i)
            {
                shader.BeginPass(i);
                for (iUInt j = 0; j < nMaterials; ++j)
                {
                    shader.SetMaterial(pMaterials[j]);
                    shader.CommitChanges();
                    pD3DMesh->DrawSubset(j);
                }
                shader.EndPass();
            }
            shader.End();
        }

    shader.SetMaterial() looks like this. Each shader keeps a reference to the last loaded texture; if the next material uses the same texture as the last one, it doesn't resend the texture data to the shader.

        void SetMaterial(const cMaterial& mat)
        {
            pD3DEffect->SetFloatArray("mAmb", (iFloat*)&mat.ambient,  4);
            pD3DEffect->SetFloatArray("mDif", (iFloat*)&mat.diffuse,  4);
            pD3DEffect->SetFloatArray("mSpe", (iFloat*)&mat.specular, 4);
            pD3DEffect->SetFloatArray("mEmi", (iFloat*)&mat.emissive, 4);
            pD3DEffect->SetFloat("mShi", mat.shine);
            if (texturemap != mat.texturemap)
            {
                texturemap = mat.texturemap;
                pD3DEffect->SetTexture("texMap", mat.texturemap);
            }
        }

    My resource classes contain cast operators to the D3D types they wrap, so passing them directly into a function expecting their D3D type should call the cast operator, right? Anyway, here is my result from this design: Screenie. It obviously loads the mesh, and the other resources load too, because any load errors send an error message to the debug window in my compiler, and there are none. At this point I'm at a loss as to what is wrong. Any suggestions?
  14. I used something similar when combining graphics and physics material properties, but kept them separate (more modular). My engine is divided into several components, including graphics, physics, and simulation. Each of these has its own namespace to keep things sane: Graphics = gfx, Physics = psx, Simulation = sim. I then have a gfx::cMaterial class, a psx::cMaterial class, and a sim::cMaterial class. The simulation engine basically provides a way for the game using the engine to handle game objects at a higher level, without worrying about graphics and physics and AI, etc. So the gfx::cMaterial class contains data that the graphics engine will use (textures, dif/spec/amb/emi/shine, etc.) and the psx::cMaterial class contains data that the physics engine will use (drag, friction, mass, density, etc.). Finally, the sim::cMaterial class provides higher-level interaction, for example setting what happens if the material is shot or collided with (a shimmer effect for dragon scales would be an example). This keeps things modular. If I change out my graphics engine, it doesn't affect the other components. Likewise the physics engine. - Enosch
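    A minimal sketch of that split (the field names are illustrative, not the engine's real members):

        namespace gfx {
            struct cMaterial {                 // what the graphics engine needs
                float ambient[4], diffuse[4], specular[4], emissive[4];
                float shine;
                // texture handles, etc.
            };
        }

        namespace psx {
            struct cMaterial {                 // what the physics engine needs
                float drag, friction, mass, density;
            };
        }

        namespace sim {
            struct cMaterial {                 // higher-level game interaction
                gfx::cMaterial* graphics;      // swap the gfx engine without
                psx::cMaterial* physics;       // touching physics or sim code
                void (*onShot)();              // e.g. dragon-scale shimmer
            };
        }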
  15. a software development people grow stage.

    I would say that an "experienced programmer" is one who knows how to recognize, classify, and break down problems. He also knows how and where to gain knowledge about his problem in order to solve it on his own. And I would say that a "master programmer" is one who has already learned how others accomplish things, but has graduated to the point where he strives to develop systems that work better than those. He has also developed his own software development process to efficiently translate design into implementation, and he knows how to implement a design to be as fast and small (CPU and memory) as possible. Just my two cents. - Enosch