Community Reputation: 485 Neutral

About AmzBee


  1. Pyro is back! - This week I talk about the resurrection of Pyro, the scene builder for our engine Phobius...
  2. News flash: no, you can't just tell everyone to sling their hook from a public ArcheBlade server because you want to hang out with a bezzie mate.
  3. Directional Light

    Daft question, but why are you multiplying the normal by the world matrix in your vertex shader?

        Out.Normal = mul(IN.Normal, (float3x3)World);

    Aimee
  4. It's fairly straightforward to render a mesh to each face of a texture cube; the restriction is that you must perform 6 draw calls, because each face of the texture cube needs assigning as a render target in turn. In our engine we use RenderTargetCube in combination with CubeMapFace and set an individual view matrix for each face; we do this to perform shadow mapping for point lights. It incurs 6 draw calls like I said, but our engine is optimized to only draw relevant geometry, so it's very fast. In the following, MyRenderTargetCube can be cast to a TextureCube:

        GraphicsDevice.SetRenderTarget(MyRenderTargetCube, CubeMapFace.PositiveY);

    I still maintain that the technique you are using will likely drive you bonkers later with the restrictions it imposes and the extra overhead that may not be apparent yet. But it's not my place to tell you what to do, and to tell you the truth I am intrigued to find out if you manage to get it working.

    Aimee
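    To make the six-pass structure concrete, here is a language-agnostic sketch (Python, with the XNA calls left as comments). The forward/up pairs follow the usual D3D cube-map conventions, which is an assumption: depending on your engine's handedness you may need to flip some of them.

```python
# Forward/up direction pairs for the six cube-map faces
# (assumed D3D convention; verify against your engine).
FACES = {
    "PositiveX": ((1, 0, 0),  (0, 1, 0)),
    "NegativeX": ((-1, 0, 0), (0, 1, 0)),
    "PositiveY": ((0, 1, 0),  (0, 0, -1)),
    "NegativeY": ((0, -1, 0), (0, 0, 1)),
    "PositiveZ": ((0, 0, 1),  (0, 1, 0)),
    "NegativeZ": ((0, 0, -1), (0, 1, 0)),
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# The draw loop: one render-target switch + one view matrix per face.
for face, (forward, up) in FACES.items():
    # GraphicsDevice.SetRenderTarget(MyRenderTargetCube, face)
    # view = Matrix.CreateLookAt(camPos, camPos + forward, up)
    # ...draw the relevant geometry for this face here...
    assert dot(forward, up) == 0   # each pair must be perpendicular

assert len(FACES) == 6             # six faces, six passes
```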
  5. Oh hang on, I think I misinterpreted what you were asking; let's take another stab at it. So you have one huge cube map that surrounds the scene, and you want to render distant meshes onto that one cube map so that distant meshes don't have to be rendered individually.

    In this case, for outdoor scenes where the ground is flat this may work, but for uneven ground and indoor scenes you'll likely encounter problems with how you deal with depth and perspective. To add a little irony, as you move through the scene you would likely have to reconstruct the cube map often as things get closer or further away, which means any performance gain you get from the technique will be close to nullified.

    Although this may quash what I said previously in parts, a lot of the previous post still applies. Did you know, for example, that in Half-Life 2 they used a technique where all geometry that would permanently be far away from the player was very low poly? Just another very helpful and common technique that could offer what you are after.

    Aimee
  6. It took me a few minutes to figure out what you were asking, but I think I understand now, and no, it's not really practical. This is how I understand it: you want distant meshes to instead draw as cubes, each with an individual cube map representing the view of the mesh from each face (6 in total). Using whichever faces the view matrix can see, you wish the mesh to be "reconstructed" to look as if it were the real mesh (like a billboard).

    If I have understood it correctly, there are some unfortunate flaws. You will be storing a cube map per mesh subject to this technique, which will likely consume far more VRAM than a large number of vertices and indices. Here is a quick example: 512 * 512 * 6 faces * 4 bytes ≈ 6 megabytes per cube map, where each face is 512 x 512 pixels at 32 bits per pixel. Versus 300,000 vertices * 20 bytes stride = 5.72 megabytes per 100,000 unique triangles, where each vertex is a VertexPositionTexture. Looking at this, I think most people would rather distribute 100,000 triangles between meshes than store 6 MB per single mesh; it's more efficient.

    I have seen a technique talked about by Renaud Bédard (the guy who programmed Fez) where a Texture3D is linearly sampled in the shader. Technically this could be used to achieve what you asked, but it is complicated and very shader heavy. It should be noted he used the technique to store many tiles in one texture, so in his case, although a 3D texture is cubic in pixel count (far larger than a cube map), it was a worthwhile trade-off versus texture swapping.
    Even if you do elect to use this technique, it will mean that on start-up of the scene you will need to make the user wait while each face is rendered for each cube map. The waiting time may not be noticeable with a few cube maps in the scene, but I bet it becomes more and more undesirable the more cube maps you throw into the mix. I think your idea is intuitive, but at the same time there is a big part of me crying out that this technique is like trying to open a door with your shoulder blades. There are time-tested techniques for dealing with distant objects, such as frustum checking, back-face culling, fog (oldie but goldie), BSP trees, and many many more. I bet you are familiar with many of them, but it's worth noting that these techniques are common because people find them reliable.

    Anyway, down to brass tacks: when all else fails, remember that the most common technique is often the best.

    Aimee
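    The memory trade-off in the post above is easy to sanity-check; here is the same arithmetic as a quick Python sketch:

```python
# Cube map: six 512x512 faces at 4 bytes per pixel (32-bit colour).
cube_bytes = 512 * 512 * 6 * 4
assert cube_bytes / (1024 * 1024) == 6.0              # 6 MB per mesh

# Mesh: 300,000 VertexPositionTexture vertices at a 20-byte stride
# (float3 position + float2 UV), i.e. 100,000 unique triangles.
mesh_bytes = 300_000 * 20
assert round(mesh_bytes / (1024 * 1024), 2) == 5.72   # ~5.72 MB

# Roughly the same budget, but the triangles can be shared between
# many meshes, while the cube map cost is paid per mesh.
assert cube_bytes > mesh_bytes
```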
  7. Porting from XNA to MonoGame

    Hi BGH, welcome to gamedev!

    Many of the questions you asked have already been answered many, many times before, so I recommend you try the search features of the forum. However, here is a brief summary of the answers you'll find:

    Correct, XNA is no longer supported; however, it works and will continue to work for years to come. So if you are new to coding games, stick with XNA until you are happy to move on.

    MonoGame would be the next stage *after* building your game. It's very straightforward (not easy) to port to, as long as the game is optimized and does not rely on Windows-only features. Also, as they are still working on the content pipeline, it's very useful to still have XNA installed.

    XNA doesn't actually have any plugins; you can get 3rd party libraries that give you extra features, but no plugins that directly tamper with the API. Which 3rd party libraries are you planning on using? (Note: most of the popular ones like physics engines etc. provide a DLL that works on MonoGame too nowadays.) Try not to dwell on trying to port before your game is built; it's a mistake I and others have made in the past over and over, and it tends to lead to games never getting built. But that's just my two cents.

    Aimee
  8. Consider something: in order for your game to render fast, it needs to call Draw() and Update() as often as possible (Update isn't strictly required to run that often, but that's going off topic). This means if you have something that requires a certain amount of time to execute after it is triggered, the action will have to last for longer than 1 draw call. Here is an example of such a technique in XNA 4.0:

        using System;
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;
        using Microsoft.Xna.Framework.Input;

        namespace AnimationExample
        {
            public class Game1 : Game
            {
                class Bullet
                {
                    public Vector2 Position, Velocity;
                    public bool IsDead;

                    public Bullet(Vector2 position, Vector2 velocity)
                    {
                        this.Position = position;
                        this.Velocity = velocity;
                    }
                }

                List<Bullet> Bullets = new List<Bullet>();
                Random Random1 = new Random();
                GraphicsDeviceManager Graphics;
                SpriteBatch Batch1;
                KeyboardState OldState, NewState;
                Texture2D ProjectileTexture;
                float HalfTextureH, QuarterTextureH;

                public Game1()
                {
                    Graphics = new GraphicsDeviceManager(this);
                    Content.RootDirectory = "Content";
                }

                protected override void LoadContent()
                {
                    Batch1 = new SpriteBatch(GraphicsDevice);
                    ProjectileTexture = Content.Load<Texture2D>("tex1");
                    HalfTextureH = ProjectileTexture.Height / 2.0f;
                    QuarterTextureH = ProjectileTexture.Height / 4.0f;
                }

                protected override void Update(GameTime gameTime)
                {
                    OldState = NewState;
                    NewState = Keyboard.GetState();

                    // Detect key press.
                    if (NewState.IsKeyDown(Keys.Space) && OldState.IsKeyUp(Keys.Space))
                        ShootBullet();

                    // Update any existing projectiles.
                    Bullets.ForEach(UpdateBullet);

                    // Remove any projectiles that are dead.
                    Bullets.RemoveAll(t => t.IsDead);

                    Window.Title = Bullets.Count + " projectile(s) are alive.";

                    base.Update(gameTime);
                }

                void ShootBullet(float minSpeed = 2f, float maxSpeed = 10f)
                {
                    // Get a random y starting coordinate.
                    float yPos = ((GraphicsDevice.Viewport.Height - HalfTextureH) *
                        (float)Random1.NextDouble()) - QuarterTextureH;

                    // Calculate speed.
                    float xVel = MathHelper.Clamp(maxSpeed * (float)Random1.NextDouble(), minSpeed, maxSpeed);

                    // Add the projectile.
                    Bullets.Add(new Bullet(Vector2.UnitY * yPos, Vector2.UnitX * xVel));
                }

                void UpdateBullet(Bullet bullet)
                {
                    bullet.Position += bullet.Velocity;

                    // Check if the bullet has gone off the screen.
                    if (bullet.Position.X > 800)
                        bullet.IsDead = true;
                }

                protected override void Draw(GameTime gameTime)
                {
                    GraphicsDevice.Clear(Color.CornflowerBlue);

                    Batch1.Begin();
                    Bullets.ForEach(DrawBullet);
                    Batch1.End();

                    base.Draw(gameTime);
                }

                void DrawBullet(Bullet bullet)
                {
                    Batch1.Draw(ProjectileTexture, bullet.Position, Color.White);
                }
            }
        }

    A screenshot for good measure :)

    Here is the texture I used if you wish to try the example out:

    Note, it is important to understand that if you wish something to happen over a duration longer than a single update/draw call, then you will always have to keep track of the state of that action; this is called state management. Each important state of something usually requires a logic update each time Update() is called, and this is a core principle of game development.

    Side Notes

    The example will likely run slowly on the Xbox 360 or Windows Phone because it is unoptimized and creates a lot of garbage. This could easily be rectified by using a fixed array as a resource pool, where we only ever create a fixed number of Bullet instances once; each bullet would need an extra bool to say whether it is in use or not. However, this is out of scope, so if you come across that problem later on in development, feel free to PM me.

    Aimee
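    The fixed-array resource pool mentioned in the side note can be sketched language-agnostically. A minimal Python sketch of the idea (names and sizes are hypothetical, not from the XNA example):

```python
class Bullet:
    def __init__(self):
        self.x = 0.0
        self.vx = 0.0
        self.in_use = False  # the extra bool mentioned in the side note

class BulletPool:
    def __init__(self, capacity):
        # All instances are allocated once, up front; shooting a
        # bullet never allocates, so no per-shot garbage is created.
        self.bullets = [Bullet() for _ in range(capacity)]

    def shoot(self, x, vx):
        # Reuse the first free slot instead of constructing an object.
        for b in self.bullets:
            if not b.in_use:
                b.x, b.vx, b.in_use = x, vx, True
                return b
        return None  # pool exhausted; drop the shot

    def update(self, screen_width=800):
        for b in self.bullets:
            if b.in_use:
                b.x += b.vx
                if b.x > screen_width:
                    b.in_use = False  # returned to the pool, not the GC

pool = BulletPool(2)
pool.shoot(0, 900)                    # fast bullet
pool.shoot(0, 10)                     # slow bullet
assert pool.shoot(0, 1) is None       # pool exhausted
pool.update()                         # fast bullet flies off screen
assert pool.shoot(0, 1) is not None   # its slot was recycled
```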
  9. Photo: A screenshot of Radius in fabuloso purple theme, phoaaa!
  10. XNA to C++

    It's answers like the one frob gave that make the time spent here such a joy :) He's absolutely right, so to add my two cents, I would also say it is probably more important to learn to create a game first. From experience I have found it's a lot easier to pick up other languages after a while, but learning to make a game is difficult whichever language you choose.

    On the compatibility track, MonoGame for the most part is making a good effort to make sure people who learn XNA can port their games. There are still a few niggles, but nothing that would stop you making your game for Windows 8, for example.

    Aimee
  11. Tip: in Visual Studio, when you see a class that looks unfamiliar, type it out into the code editor, then right click and choose "Go To Definition" (or click on the text and press F12). This will take you to a window that shows how the class is seen by reflection.

    In this case you can see the DrawableGameComponent class inherits the GameComponent class and implements the IDrawable interface. So basically it's a derivative of the GameComponent class that implements the methods found in IDrawable, including the Draw method. This however doesn't mean it's the only one you can use; in fact you could write your own version of DrawableGameComponent that includes your own set of methods. The XNA team created this class as a prefabricated example of how to make a customized GameComponent.

    Aimee
  12. I've made a cube sky box for a free game we are building, so you can have a look at the example project I made for it if you wish. It's been created to work specifically with the free Spacescape software, so you shouldn't have any problem making your own sky textures.

    Aimee
  13. Setting texture in effect pass

    Short answer: not quite :) What you could do is group the cubes by texture, then set the texture only when you start to draw another group. Like I said in a previous thread, if this is not possible, then your next option is to pack all the textures you need into 1 big texture; this involves manipulating the UV texture coordinates so that a particular "sub-texture" is used. If you get stuck, feel free to PM me on Skype (XpodGames).

    Aimee
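    The UV manipulation for a packed "sub-texture" is just a linear remap of local UVs into the region the sub-texture occupies in the big texture. A hedged Python sketch of the arithmetic (the atlas layout and sizes are made-up examples):

```python
def remap_uv(u, v, rect, atlas_w, atlas_h):
    """Map a local (0..1) UV into the region of the big texture
    occupied by one packed sub-texture.

    rect = (x, y, w, h) of the sub-texture, in pixels."""
    x, y, w, h = rect
    return ((x + u * w) / atlas_w,
            (y + v * h) / atlas_h)

# A 64x64 tile stored at pixel (128, 0) inside a 256x256 atlas:
u, v = remap_uv(1.0, 1.0, (128, 0, 64, 64), 256, 256)
assert (u, v) == (0.75, 0.25)   # far corner of the tile's region
assert remap_uv(0.0, 0.0, (128, 0, 64, 64), 256, 256) == (0.5, 0.0)
```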
  14. Before my response I'd like to express my concern about the language you are using; swearing 3 times just because you are frustrated, on a forum where young creative people regularly visit, does not seem very becoming of someone who's been a member for almost 5 years. I am sure many other members of the forum would agree with me that if you truly wish to have help from us, refraining from that sort of behaviour in future would be appropriate. At this time I'll take it that you didn't realise your language would offend, and I'll try to help this time.

    OK, to the question you asked: if you are allowing non-integer positioning and scaling of the geometry, then the pixels you output may not necessarily ever map to the screen coordinates you are hoping for. To make matters worse, even if you get it to work on your computer, it may not look right on others. There are some workarounds, like adding a half-texel (not half-pixel) offset to your UVs, but it really depends on what interpolation method you are relying upon.

    In the end there are only a few options when it comes to this sort of thing: either accept it the way it is, try using higher resolution textures so the sampler can give better results, play with the half-texel hack and hope it works on other computers, and/or stick to integer positioning and scaling to help pixels land where you expect.

    Aimee
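    The half-texel offset mentioned above is tiny but easy to get wrong: it lives in UV space, so it depends on the texture's dimensions. A quick sketch of the arithmetic (the texture sizes are hypothetical):

```python
def half_texel_offset(tex_w, tex_h):
    # Half a texel, not half a pixel: expressed in UV space,
    # so a bigger texture means a smaller offset.
    return 0.5 / tex_w, 0.5 / tex_h

du, dv = half_texel_offset(256, 128)
assert (du, dv) == (0.5 / 256, 0.5 / 128)

# Applying it nudges every UV so samples land on texel centres:
u, v = 0.25, 0.5
u2, v2 = u + du, v + dv
assert abs(u2 - 0.251953125) < 1e-9
```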
  15. If you intend on drawing an explorable star field, especially in XNA, mesh instancing is definitely the way to go.

    Mesh Instancing

    Mesh instancing works by storing a mesh on the graphics card only once (if you think about it, why bother having many copies of a mesh you will be reusing? It's inefficient, wasteful of memory, and could actually act as a bottleneck when drawing many copies). Coupled with something called an instance buffer (a bit like an IndexBuffer; it stores an array of structs that describe position, scale, and rotation for each copy), you simply call GraphicsDevice.DrawInstancedPrimitives once each draw cycle, and the graphics card takes care of drawing all of the copies.

    The benefits are obvious: you can draw thousands of copies of the same mesh and yet only store 1 copy of the mesh on the graphics card. As the instance buffer can be dynamic (only if you need it to be), this enables the ability to animate instances (copies), as seen in particle system examples. Also, offloading this sort of task to the graphics card is particularly useful in XNA, as it helps get round the overhead .NET throws at you.

    Cameras

    In regards to cameras, it's important you get a good understanding of how matrices work in XNA. The view matrix, for example, represents how the user views the world, which means you only have to alter the view matrix to move, twist, turn etc. through your star field. As you are using XNA, Riemer has an excellent tutorial on this sort of camera that I think would be of benefit; even though the site is old, I thoroughly recommend you give it a go:

    >

    Further Reading

    Funnily enough, Microsoft have provided a number of excellent code examples that show you how to perform instancing in XNA, so here are a few links to the ones I think are most relevant to your goals:

    > (Mesh Instancing Example)

    > (Particle System Example)

    Also, to answer your question about 'straight up pixel particles': yes, I can imagine a way of doing something like that with shaders, but it's a very complicated way of accomplishing what you get easily with mesh instancing. I recommend you give what works a go first to build your confidence in this field.

    Good luck, and keep us in the loop with how you get on :)

    Aimee
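    Conceptually, instancing is "one mesh, many small transform records". A CPU-side Python sketch of what the GPU effectively computes per instance when DrawInstancedPrimitives runs (the quad mesh and star data here are made-up examples, not XNA API):

```python
# One shared mesh: a unit quad, stored once (like one VertexBuffer).
MESH = [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]

# The "instance buffer": one small record per copy instead of a
# full duplicate of the mesh. Here: (position, uniform scale).
instances = [((10.0, 0.0), 2.0), ((0.0, 5.0), 1.0)]

def expand(mesh, instances):
    # What the vertex shader does once per instance: transform
    # every vertex of the shared mesh by that instance's record.
    out = []
    for (px, py), s in instances:
        out.append([(px + x * s, py + y * s) for x, y in mesh])
    return out

quads = expand(MESH, instances)
assert len(quads) == len(instances)   # one drawn quad per record
assert quads[0][0] == (9.0, -1.0)     # first vertex of the first star
assert quads[1][2] == (0.5, 5.5)
```

    The memory saving is the point: the mesh lives on the card once, and each extra star costs only one small instance record.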