
codymanix

Members
  • Content count

    17
  • Joined

  • Last visited

Community Reputation

132 Neutral

About codymanix

  • Rank
    Member
  1. I found the problem: render targets lose their contents on a back buffer change and normally need to be recreated, so I needed to copy the data into a new texture.

     I managed to create a normal mip-mapped Texture2D from a file with the following code. It uses three steps:

     1. Load the file with Texture2D.FromStream
     2. Create a mip-mapped RenderTarget2D and render the texture onto it as a sprite
     3. Copy the render target contents into a new Texture2D using GetData(mipLevel, ...)/SetData(mipLevel, ...)

     It is unbelievably inefficient, but I fear there is no better way to do it in XNA:

     MemoryStream ms = new MemoryStream();
     s.CopyTo(ms, 1024);
     ms.Seek(0, SeekOrigin.Begin);

     // load texture from file
     using (Texture2D intermediateTexture = Texture2D.FromStream(Graphics, ms))
     {
         // create mip mapped render target
         using (RenderTarget2D renderTarget = new RenderTarget2D(
             Graphics,
             intermediateTexture.Width,
             intermediateTexture.Height,
             mipMap: true,
             preferredFormat: SurfaceFormat.Color,
             preferredDepthFormat: DepthFormat.None,
             preferredMultiSampleCount: 0,
             usage: RenderTargetUsage.PreserveContents))
         {
             SamplerState oldSS = Graphics.SamplerStates[0];
             RasterizerState oldrs = Graphics.RasterizerState;
             SamplerState newss = SamplerState.LinearClamp; // todo: which is best?

             // render the loaded texture into the mip-mapped render target
             Graphics.SetRenderTarget(renderTarget);
             spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, newss, DepthStencilState.None, RasterizerState.CullNone, effect: null);
             spriteBatch.Draw(intermediateTexture, new Vector2(0, 0), Color.White);
             spriteBatch.End();
             Graphics.SetRenderTarget(null);

             Graphics.DepthStencilState = DepthStencilState.Default;
             Graphics.BlendState = BlendState.Opaque;
             Graphics.SamplerStates[0] = oldSS;
             Graphics.RasterizerState = oldrs;

             // since render target textures are volatile (contents get lost on device reset),
             // we have to copy the data into a new texture, mip level by mip level
             Texture2D mergedTexture = new Texture2D(Graphics, intermediateTexture.Width, intermediateTexture.Height, true, SurfaceFormat.Color);
             Color[] content = new Color[intermediateTexture.Width * intermediateTexture.Height];
             for (int i = 0; i < renderTarget.LevelCount; i++)
             {
                 int n = renderTarget.Width * renderTarget.Height / ((1 << i) * (1 << i));
                 renderTarget.GetData<Color>(i, null, content, 0, n);
                 mergedTexture.SetData<Color>(i, null, content, 0, n);
             }
             t = mergedTexture;
         }
     }
  2. Thank you for the code. It seemed to work at first; it creates textures that also look mip-mapped. But if I now switch to fullscreen during the running game or activate multisampling (anything that needs GraphicsDeviceManager.ApplyChanges() to be called), things go mad. My textures seem to be swapped around and sometimes I see nothing at all. When I use the original texture loading mechanism through the content pipeline, it works again.

     My first thought was passing RenderTargetUsage.PreserveContents when creating the render target, but this didn't help.

     Do you have an idea what the problem could be?
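A minimal sketch of one possible workaround, assuming the broken textures are the ones built from render targets: ApplyChanges() recreates the back buffer, after which GraphicsDevice raises its DeviceReset event, so render-target-derived textures can be rebuilt there. ReloadTextures is a hypothetical callback standing in for the loading code above.

    // Sketch only: rebuild render-target-derived textures after a device reset.
    // ReloadTextures() is a hypothetical method that re-runs the texture creation above.
    public void HookDeviceEvents(GraphicsDevice device)
    {
        device.DeviceReset += (sender, e) =>
        {
            // Render target contents are lost at this point, so any texture that was
            // produced by copying from a RenderTarget2D may need to be rebuilt.
            ReloadTextures();
        };
    }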
  3. Hi, thanks for the answer!

     Yes, I draw the quads in arbitrary order and with the Z-buffer enabled. Due to the (potentially) huge number of transparent objects I have, sorting is not an option for me. But I searched a bit and found out that I can use the AlphaTestEffect or the HLSL instruction clip(). Is HLSL portable to Xbox or Windows Phone?
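For reference, a minimal sketch of how AlphaTestEffect can be set up for cut-out transparency (pixels whose alpha falls below the reference value are discarded, so depth writes stay consistent without sorting). The matrices, blockTexture and the draw call are placeholders for the caller's own values.

    // Sketch only: alpha-tested rendering with XNA's built-in AlphaTestEffect.
    AlphaTestEffect alphaTest = new AlphaTestEffect(Graphics)
    {
        AlphaFunction = CompareFunction.Greater,  // keep pixels above the reference alpha
        ReferenceAlpha = 128,
        Texture = blockTexture,                   // placeholder texture
        World = world,                            // placeholder matrices
        View = view,
        Projection = projection
    };

    foreach (EffectPass pass in alphaTest.CurrentTechnique.Passes)
    {
        pass.Apply();
        Graphics.DrawPrimitives(PrimitiveType.TriangleStrip, 0, primitiveCount);  // placeholder draw call
    }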
  4. Hi, I'm having trouble getting transparent textures to work correctly. I've attached an image showing the problem. [sharedmedia=core:attachments:13305]

     As you can see, the transparency works well for the bottom faces. But the faces in the background are somehow obstructed by the transparent texture, although they should shine through. This effect seems to happen with all faces in this direction, and doesn't seem to happen with faces pointing in the other direction. What could this strange effect be?

     I'm using the following (and the BasicShader) to draw things:

     CullMode = Off;
     FieldOfView = MathHelper.ToRadians(45);
     AspectRatio = (float)Engine.Instance.Graphics.Viewport.Width / (float)Engine.Instance.Graphics.Viewport.Height;
     NearPlane = 0.1f;
     FarPlane = 250.0f;

     Graphics.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.LightBlue, 1.0f, 0);
     Graphics.BlendState = BlendState.AlphaBlend;
     Graphics.DrawPrimitives(PrimitiveType.TriangleStrip, cubeSide * 4, 2);

     EDIT3: I found that it only happens if I look at a face that is behind, to the right of, or on top of the cube I am looking at. It doesn't happen for front, bottom or left faces. Could this have something to do with my normals, since I'm using different geometry for each cube side (6 different meshes)? Without AlphaBlend everything renders fine. I have now tried VertexPositionTexture (without normals), and still have the same problem.
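The symptom described here is consistent with transparent faces being written to the depth buffer before the geometry behind them is drawn. A minimal sketch of the usual two-pass approach, assuming the scene can be split into opaque and transparent primitives (the two helper methods are hypothetical): opaque geometry first with normal depth writes, then transparent geometry with depth testing but no depth writes.

    // Sketch only: draw opaque geometry first, then transparent geometry without depth writes.
    // DrawOpaqueGeometry()/DrawTransparentGeometry() are hypothetical helpers.
    Graphics.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.LightBlue, 1.0f, 0);

    // Pass 1: opaque faces, depth test and depth write enabled.
    Graphics.BlendState = BlendState.Opaque;
    Graphics.DepthStencilState = DepthStencilState.Default;
    DrawOpaqueGeometry();

    // Pass 2: transparent faces, depth test only, so they cannot occlude faces drawn later.
    Graphics.BlendState = BlendState.AlphaBlend;
    Graphics.DepthStencilState = DepthStencilState.DepthRead;
    DrawTransparentGeometry();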
  5. Thank you for the reply!

     I also found out that I could use new Texture2D(device, w, h, true, format) and then call SetData() on it. Would this be possible? Will it automatically create valid mip maps too?

     I believe there is a performance penalty on writable textures (RenderTarget2D). Is this true?
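For reference, a minimal sketch under the assumption that the mip chain has to be filled by hand: creating the texture with mipMap set to true only allocates the levels, and SetData writes just the level you pass it, so each level needs its own downsampled data. HalveImage and fullResPixels are hypothetical placeholders.

    // Sketch only: fill every mip level explicitly after creating the texture.
    // HalveImage() is a hypothetical helper that box-filters a Color[] down to half size.
    Texture2D tex = new Texture2D(device, w, h, true, SurfaceFormat.Color);
    Color[] level = fullResPixels;          // w * h pixels for mip level 0 (placeholder)
    int levelWidth = w, levelHeight = h;

    for (int i = 0; i < tex.LevelCount; i++)
    {
        tex.SetData<Color>(i, null, level, 0, levelWidth * levelHeight);

        if (i < tex.LevelCount - 1)
        {
            level = HalveImage(level, levelWidth, levelHeight);
            levelWidth = Math.Max(1, levelWidth / 2);
            levelHeight = Math.Max(1, levelHeight / 2);
        }
    }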
  6. Hi!

     Can I generate mip maps in XNA without using the content pipeline? When I use Texture2D.FromStream there is no parameter that can be used for that.

     Is it even recommended to use XNA without the content pipeline? I know that it provides faster content loading and its content importers save you quite a bit of coding.

     But the thing with the content pipeline is that everybody who wants or needs to change game content (graphics designer, sound artist or even a hobbyist modder) needs to have an installed copy of Visual Studio. That is quite bad, so I want to avoid using the content pipeline.
  7. Hi folks, I want to create a multiplayer game. The clients communicate with the server using different messages, for example "move object x", "send chat msg to all", "send chat msg to x", "request stats". What is best:

     1. Having clients create a single TCP connection to the server and do all communication over that connection, which requires some kind of message type code in each packet, OR:
     2. Having clients create a connection for each message type.

     How would you solve it?
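A minimal sketch of the first option, assuming messages are framed as a length prefix, a one-byte type code and a payload; MessageType and the payload layouts are hypothetical. Each side would wrap the connection's NetworkStream in one BinaryReader/BinaryWriter pair for the lifetime of the connection.

    // Sketch only: one TCP connection, each message framed as [length][type][payload].
    // MessageType and the payload contents are hypothetical.
    enum MessageType : byte { MoveObject = 1, ChatToAll = 2, ChatTo = 3, RequestStats = 4 }

    static void SendMessage(BinaryWriter writer, MessageType type, byte[] payload)
    {
        writer.Write(payload.Length + 1);   // frame length: type byte + payload
        writer.Write((byte)type);
        writer.Write(payload);
        writer.Flush();
    }

    static void ReadMessage(BinaryReader reader, out MessageType type, out byte[] payload)
    {
        int length = reader.ReadInt32();    // blocks until the header has arrived
        type = (MessageType)reader.ReadByte();
        payload = reader.ReadBytes(length - 1);
    }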
  8. Hi folks, I'm trying to use hi-res satellite images (like Google Maps) in a game. Where can I get such things? I want free, colored, orthogonal images where I can see buildings and so on. Do I have any chance of getting something like that?
  9. Hi, I'm using Managed DirectX and want to do some bump mapping. It looks fine so far, but the problem is that ambient light is completely ignored, so the sides of my models that aren't facing the light are completely black:

     device.RenderState.Ambient = System.Drawing.Color.FromArgb(0x606060);

     int factor = VectorToRgba(Vector3.Normalize(viewV - planetV), 1.0f);
     device.RenderState.TextureFactor = factor;

     device.SetTexture(0, planet.BumpTexture);
     device.TextureState[0].TextureCoordinateIndex = 0;
     device.TextureState[0].ColorOperation = TextureOperation.DotProduct3;
     device.TextureState[0].ColorArgument1 = TextureArgument.Diffuse; // TextureColor
     device.TextureState[0].ColorArgument2 = TextureArgument.TFactor;

     device.SetTexture(1, planet.Texture);
     device.TextureState[1].TextureCoordinateIndex = 0;
     device.TextureState[1].ColorOperation = TextureOperation.Modulate;
     device.TextureState[1].ColorArgument1 = TextureArgument.TextureColor;
     device.TextureState[1].ColorArgument2 = TextureArgument.Current;
  10. > Isn't centering the skybox just as simple as this:
      > skybox.position = camera.position;

      Isn't that the same as

      device.Transform.World = Matrix.Scaling(40f, 40f, 40f) * Matrix.Translation(camPos);

      ?

      To Roquqkie: I already turned depth writing off.
  11. Hi, I have a planets simulation. The look direction is always the center (the sun), but the user can rotate the solar system and change the view distance. The problem is that the skybox should never be scaled, no matter which distance the user looks from; that is, it should always appear the same size, which means it should be centered around the camera. This is my code, but it does not seem to center the skybox:

      Vector3 camPos = new Vector3(0, 0, camDist);

      Matrix mat = Matrix.RotationYawPitchRoll(camYAngle, camXAngle, 0) * Matrix.Translation(camPos);
      device.Transform.View = mat;

      // make skybox big enough and center it around camera
      device.Transform.World = Matrix.Scaling(40f, 40f, 40f) * Matrix.Translation(camPos);
      skyBox.Show(); // draw skybox with current world transform

      [Edited by - codymanix on March 20, 2005 12:43:55 PM]
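One way to keep the skybox glued to the camera, sketched under the assumption that device.Transform.View always holds the current view matrix: the camera's actual world-space position is the translation row of the inverted view matrix, which is not necessarily the same as camPos once the view matrix also contains a rotation.

    // Sketch only: center the skybox on the camera position recovered from the view matrix.
    Matrix invView = device.Transform.View;
    invView.Invert();
    Vector3 cameraWorldPos = new Vector3(invView.M41, invView.M42, invView.M43);

    device.Transform.World = Matrix.Scaling(40f, 40f, 40f) * Matrix.Translation(cameraWorldPos);
    skyBox.Show();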
  12. > It's better to make each effect self reliant, setting all values, even if
      > they're defaults, as drivers or another effect may have changed the setting from
      > the expected default.

      Yes, that is true. I once tried to capture the render states and re-apply the original ones after each effect, but noticed that applying a render state object to the device is a serious performance bottleneck.

      However, do you have an idea what could be wrong with my light vector? The sun is at coordinate (0,0,0). The planet circles around the sun, and the viewer looks straight at the planet. If I transform the coordinate (0,0,0) with the matrix the planet is using, shouldn't I get the correct light vector?
  13. I don't believe it!!! Putting

      device.TextureState[1].TextureCoordinateIndex = 0;

      in solved the problem! I don't know why, but after experimenting for hours that did it.

      But I now have another problem: the sun rotates around Mars (as seen from the viewer). If the sun comes in from the side, Mars gets brighter (as it should), but if the sun reaches the "highest point" or zenith (you know what I mean), it starts to get darker again, which is wrong. What could be the reason? I am calculating the vector like this:

      int VectorToRgba(Vector3 v, float height)
      {
          int r = (int)(127.0f * v.X + 128.0f);
          int g = (int)(127.0f * v.Y + 128.0f);
          int b = (int)(127.0f * v.Z + 128.0f);
          int a = (int)(255.0f * height);
          return (a << 24) + (r << 16) + (g << 8) + (b << 0);
      }

      int factor = VectorToRgba(Vector3.Normalize(Vector3.TransformCoordinate(lightVector, planet.Matrix)), 0.0f);
      device.RenderState.TextureFactor = factor;

      However, I have one additional question: what is the disabling of device.TextureState[2] in your example good for? Since it is never set, shouldn't it be disabled by default?
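One possible cause, offered as an assumption rather than a confirmed fix: TransformCoordinate also applies the translation part of planet.Matrix, which is wrong for a direction vector. Transforming the light direction with TransformNormal (rotation only) before normalizing keeps it a pure direction; whether planet.Matrix or its inverse is the right matrix depends on which space the normal map is expressed in.

    // Sketch only: transform the light vector as a direction, ignoring translation.
    Vector3 lightDir = Vector3.TransformNormal(lightVector, planet.Matrix);
    lightDir.Normalize();

    int factor = VectorToRgba(lightDir, 0.0f);
    device.RenderState.TextureFactor = factor;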
  14. I created my texture using format Q8W8V8U8. The result looks a bit bumpy now, but the texture has gone away, so I just see the bumps:

      /* Layer 0 */
      lpDev7->SetTexture(0, lpDDSBaseTexture);
      lpDev7->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_SelectArg1);
      lpDev7->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);

      /* Layer 1 */
      lpDev7->SetTexture(1, lpDDSBumpMapTexture);
      lpDev7->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_DOT3);
      lpDev7->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_CURRENT);
      lpDev7->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_TFACTOR);

      In case my translation to unmanaged DirectX went wrong, here is my original:

      device.SetTexture(0, planet.Texture);
      device.TextureState[0].ColorOperation = TextureOperation.SelectArg1;
      device.TextureState[0].ColorArgument1 = TextureArgument.TextureColor;

      device.SetTexture(1, planet.BumpTexture);
      device.TextureState[1].ColorArgument1 = TextureArgument.Current;
      device.TextureState[1].ColorOperation = TextureOperation.DotProduct3;
      device.TextureState[1].ColorArgument2 = TextureArgument.TFactor;

      This is how I create my textures now:

      Texture marsTexture = TextureLoader.FromFile(device, resPath + "mars.png");
      Texture marsBumpTexture = TextureLoader.FromFile(device, resPath + "mars_bump.jpg");
      Texture normalTexture = new Texture(device, 1024, 512, 0, (Usage)0, Format.Q8W8V8U8, Pool.Default);
      TextureLoader.ComputeNormalMap(normalTexture, marsBumpTexture, NormalMap.InvertSign, Channel.Luminance, 1f);
  15. So I need a *particular* (vertex, texture?) format. And which one exactly?

      To Trenton05: Uuh, vertex shaders. I wanted to start with something easy, not with HLSL :)