    07/04/18 10:10 AM

    Finding Game Objects and Components in Unity with a Simple Pattern

    General and Gameplay Programming

    Ahmed Sabry

    Here is something useful for everyone who searches for game objects and components in their Unity scenes with:

    GameObject.Find(""), FindObjectOfType<T>(), GetComponent<T>()

    or any other search method. Since searching for objects and components at runtime is a costly operation, and annoying for some people, here is a very simple and easy pattern that makes the process easier.
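
    For example, this is the kind of repeated lookup the pattern is meant to avoid (a hypothetical snippet, not from the article; the object and component names are only placeholders):

    // Every one of these calls searches the scene hierarchy or the object's
    // component list again, which is exactly what we want to stop repeating.
    GameObject ui = GameObject.Find("UI Canvas");
    Animation anim = ui.GetComponent<Animation>();
    anim.Play();
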
    We start by making a class StaticObject, which holds the game object, its name, and references to the components of it that we will need to use.

    Then we create a class AllStaticObjects, which holds a list of StaticObject entries to assign in the Inspector, and an indexer to find the desired game object by its name.

    The second step is creating a class StaticObjectComponent, which holds a component and its name, followed by a class AllComponents, which holds a list of StaticObjectComponent entries to assign in the Inspector, and an indexer to find the desired component by its name.

    Of course, we don't forget to mark all of these classes with

    [System.Serializable]

    so that they are displayed in the Inspector.

    In our manager script, we make a singleton and a public reference to the AllStaticObjects class, which we assign in the Inspector. Now, if we want to access a game object, we can simply write:

    GameObject ui = StaticObjectsManager.Instance.AllObjects["UI Canvas"].TheGameObject;
    

    If we want to access a component, we cast the stored Component reference to the concrete type we need:

    var animation = (Animation)StaticObjectsManager.Instance.AllObjects["UI Canvas"].AllComponents["Animation"].Component;
    

    Here is the full code:

    using System;
    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;
     
    [System.Serializable]
    public class AllStaticObjects
    {
        // Assigned in the Inspector: every game object we want to reach by name.
        public List<StaticObject> Objects;

        // Find a cached game object entry by its name.
        public StaticObject this[string name]
        {
            get
            {
                foreach (StaticObject staticObject in Objects)
                {
                    if (staticObject.Name == name)
                    {
                        return (staticObject);
                    }
                }
     
                throw (new MissingReferenceException("The game object with name -" + name + "- is not found"));
            }
        }
    }
     
    [System.Serializable]
    public class StaticObject
    {
        public string Name;
        public GameObject TheGameObject;
        public AllComponents AllComponents;
    }
     
    [System.Serializable]
    public class AllComponents
    {
        // Assigned in the Inspector: the components of this object we want to reach by name.
        public List<StaticObjectComponent> Components;

        // Find a cached component entry by its name.
        public StaticObjectComponent this[string name]
        {
            get
            {
                foreach (StaticObjectComponent component in Components)
                {
                    if (component.Name == name)
                    {
                        return (component);
                    }
                }
     
                throw (new MissingReferenceException("The component with name -" + name + "- is not found in the game object " + name));
            }
        }
    }
     
    [System.Serializable]
    public class StaticObjectComponent
    {
        public string Name;
        public Component Component;
    }
     
    public class StaticObjectsManager : MonoBehaviour
    {
        public static StaticObjectsManager Instance;
     
        #region Singleton
        private void Awake()
        {
            if (Instance == null)
            {
                Instance = this;
                DontDestroyOnLoad(gameObject);
            }
            else
            {
                Destroy(gameObject);
            }
        }
        #endregion
     
        #region Game Objects
        public AllStaticObjects AllObjects;
        #endregion
    }
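
    As a quick usage sketch (not part of the original article), here is how a script could pull both a game object and a component through the manager at runtime. The script name MenuOpener and the entry names "UI Canvas" and "Animation" are only examples of what you would have assigned in the Inspector:

    using UnityEngine;

    public class MenuOpener : MonoBehaviour
    {
        private void Start()
        {
            // Both lookups go through the lists assigned in the Inspector,
            // instead of searching the scene with GameObject.Find / GetComponent.
            GameObject ui = StaticObjectsManager.Instance.AllObjects["UI Canvas"].TheGameObject;
            var anim = (Animation)StaticObjectsManager.Instance.AllObjects["UI Canvas"].AllComponents["Animation"].Component;

            ui.SetActive(true);
            anim.Play();
        }
    }

    If the lists grow large, the linear search inside the indexers could be swapped for a Dictionary<string, ...> lookup, but for a handful of frequently used objects the simple list is fine.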

     


