About gordon13

  1. Hi, I'm new to game programming and C++ (I'm competent with Python and JavaScript). I'm using SDL2 to make a 2D game. I don't think I have the right words yet to search for what I want, so I'll try to explain it here.

  Currently I load the player image, background, etc. in the main function: the objects are defined there and pointers are used to store the images. I guess that's acceptable for things that will be on screen all the time, but what about objects like props or enemies that only appear in certain levels? And what about building actual levels? How would you do that without hard-coding everything?

  I already have a way of reading level data (a JSON parser) and setting the player's starting position and background image based on the player's progress (stored in another external file). What I'd like is to define objects, their x/y positions, and other metadata in the level file, and then load the level from that instead of from hard-coded variables. Basically, what normal games do.

  The following is what I think it would look like, but as I said, I'm new to this, so I'd appreciate some input.

  On initialise: parse the JSON file, get the type of each element, and match it against the corresponding object class. Create an instance of that class with all the initialisation data it needs (texture, starting position, etc.) and store it in a level container (a vector? or a list or something?).

  In the game loop: on each iteration, use another loop to iterate through the level container and call the input handler (if that object has one), then the update and render functions for each element.

  Does this seem like a good idea? If so, any pointers (pun intended) on achieving this? If not, what would you suggest? Are there any code examples of something like this out there?

  Thanks
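  The design described in that post (parse the level file, match each entry's type string to a class, store instances in a container, then update/render them in the loop) is a common pattern: a type-to-constructor factory plus a vector of polymorphic entities. Below is a minimal C++ sketch of that idea. All names here (Entity, EntityDesc, loadLevel, makeRegistry) are hypothetical, and the SDL2 texture/rendering side is stubbed out so the example stays self-contained; a real version would hold an SDL_Texture* and call SDL_RenderCopy in render().

  ```cpp
  #include <cassert>
  #include <functional>
  #include <map>
  #include <memory>
  #include <string>
  #include <vector>

  // Hypothetical entity base class; in a real game this would also
  // hold an SDL_Texture* and an SDL_Rect for rendering.
  struct Entity {
      float x = 0, y = 0;
      virtual ~Entity() = default;
      virtual void update(float dt) {}       // per-frame logic
      virtual void render() const {}         // would call SDL_RenderCopy
  };

  struct Prop : Entity {};                   // static decoration
  struct Enemy : Entity {
      void update(float dt) override { x += 10.0f * dt; }  // toy movement
  };

  // Factory registry: maps the "type" string found in the level file
  // to a function that constructs the matching class.
  using Factory = std::function<std::unique_ptr<Entity>()>;
  std::map<std::string, Factory> makeRegistry() {
      return {
          {"prop",  [] { return std::make_unique<Prop>(); }},
          {"enemy", [] { return std::make_unique<Enemy>(); }},
      };
  }

  // One parsed record from the level file (your JSON parser fills these in).
  struct EntityDesc { std::string type; float x, y; };

  // Build the level: look up each record's type and instantiate it.
  std::vector<std::unique_ptr<Entity>> loadLevel(
          const std::vector<EntityDesc>& descs,
          const std::map<std::string, Factory>& registry) {
      std::vector<std::unique_ptr<Entity>> level;
      for (const auto& d : descs) {
          auto it = registry.find(d.type);
          if (it == registry.end()) continue;  // unknown type: skip (or log)
          auto e = it->second();
          e->x = d.x;
          e->y = d.y;
          level.push_back(std::move(e));
      }
      return level;
  }
  ```

  The game loop then becomes exactly the iteration described in the post: `for (auto& e : level) { e->update(dt); e->render(); }`. A std::vector of std::unique_ptr is the usual choice of container here, since ownership is clear and iteration is cache-friendly.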
  2. Hello, I'm learning shader programming and have run into a problem when adding a custom shader to the Torque game engine. I looked at existing shaders that work in Torque and tried to give my shader the same format, but it doesn't seem to load. There's no error message or anything that might give a clue to what could be wrong.

  I made this in FX Composer and modified the format to match the existing Torque shaders as best I can. It works in FX Composer but not in Torque. I'm thinking it's something to do with the link between the engine and the shader: the shader isn't getting the data it needs and the engine isn't getting what it needs from the shader, but I can't seem to find any useful information, or at least anything I can understand so far.

  Could someone have a look at my files and shed some light? I'd really appreciate it, thanks.

  material.cs http://pastebin.com/PZw77U1T
  shader.cs http://pastebin.com/Bkk6eJ9C
  heroV.hlsl http://pastebin.com/H6VMvJWw
  heroP.hlsl http://pastebin.com/xjTxWzTL
  3. Hi, I need to create a refraction shader for a bullet trail effect in C# (NeoAxis engine). I'm totally new to this. I've found one or two tutorials that got me started and managed to make a simple Phong shader, but I'm a little confused by all the different elements of RenderMonkey and I'm finding the example files hard to understand, so I'm asking here. I have a few questions:

  If I were going to create a simple refraction shader (I just need some basic transparency and distortion; no need for cubemaps/reflections/whatever), what would be a good way to start?

  I'm just trying to get my head around the program. Why are there two programming areas in a pass, and what's the difference between the two (Vertex Program, Fragment Program)?

  The tutorial shader: why is this part of the code in the Vertex Program rather than the Fragment Program?
  ------------------------------------------------------------------------
  uniform vec3 LightPosition;
  uniform vec3 EyePosition;

  varying vec3 ViewDirection;
  varying vec3 LightDirection;
  varying vec3 Normal;

  void main( void )
  {
      gl_Position = ftransform();
      vec4 ObjectPosition = gl_ModelViewMatrix * gl_Vertex;
      ViewDirection  = EyePosition - ObjectPosition.xyz;
      LightDirection = LightPosition - ObjectPosition.xyz;
      Normal         = gl_NormalMatrix * gl_Normal;
  }
  ------------------------------------------------------------------------

  And why is this part in the Fragment Program rather than the Vertex Program?
  ------------------------------------------------------------------------
  uniform vec4 Ambient;
  uniform vec4 Specular;
  uniform vec4 Diffuse;
  uniform vec4 BaseColor;
  uniform float SpecularPower;
  uniform float RefractionPower;

  varying vec3 ViewDirection;
  varying vec3 LightDirection;
  varying vec3 Normal;

  void main( void )
  {
      vec3 LightDirection = normalize( LightDirection );
      vec3 Normal = normalize( Normal );
      float NDotL = dot( Normal, LightDirection );
      vec3 Reflection = normalize( ( ( 2.0 * Normal ) * NDotL ) - LightDirection );
      vec3 ViewDirection = normalize( ViewDirection );
      float RDotV = max( 0.0, dot( Reflection, ViewDirection ) );
      vec4 TotalAmbient  = Ambient * BaseColor;
      vec4 TotalDiffuse  = Diffuse * NDotL * BaseColor;
      vec4 TotalSpecular = Specular * ( pow( RDotV, SpecularPower ) );
      gl_FragColor = ( TotalAmbient + TotalDiffuse + TotalSpecular );
  }
  ------------------------------------------------------------------------

  I hope someone can shed some light :)