About RoadToRuin
  1. [C++] bask in my awesomeness.

    :D Sir, well done!! And a massive thank you!
  2. Newbie needs help in basic c++ calculator code

    Hey man, I'm sorry that I don't have the time to go through all the theory with you right now as I have problems of my own (just wait till you get to shader programming, now there's a world of fun :p). But try this code instead:

```cpp
#include <cstdlib>
#include <iostream>
using namespace std;

int Multiply(int no1, int no2);
int Divide(int no1, int no2);
int Subtract(int no1, int no2);
int Add(int no1, int no2);

int num1 = 0;
int num2 = 0;
int result = 0;
// Task is a character, not an int
char task = '0';

// Calculator which performs the basic Add, Multiply, Divide and Subtract duties using functions
int main()
{
    cout << "Enter a number: " << endl;
    cin >> num1;
    cout << "\nEnter a second number: ";
    cin >> num2;

    cout << "\nChoose a task: \n";
    cout << "\n1. Multiply\n";
    cout << "\n2. Divide\n";
    cout << "\n3. Subtract\n";
    cout << "\n4. Add\n\n";
    // You can only input characters (you can convert them to ints later on, but let's leave this for now)
    cin >> task;

    // When dealing with multiple options such as this, use a switch statement instead of "whiles".
    // It will simplify your code, make it more readable and reduce the chance of error.
    switch(task)
    {
    // (task == '1')
    case '1':
        result = Multiply(num1, num2);
        break;
    // (task == '2')
    case '2':
        result = Divide(num1, num2);
        break;
    // (task == '3')
    case '3':
        result = Subtract(num1, num2);
        break;
    // (task == '4')
    case '4':
        result = Add(num1, num2);
        break;
    default:
        cout << "\nThe task is not recognised\n";
        break;
    }

    cout << "\nThe answer is " << result << ".\n\n";
    cout << "\nThank you and have a nice day!\n\n\n";
    cin.get();
    system("PAUSE");
    return 0;
}

// This function takes in two numbers (int) and returns the multiplication of the two
int Multiply(int no1, int no2)
{
    return (no1 * no2);
}

// This function takes in two numbers (int) and returns the division of number one by number two
int Divide(int no1, int no2)
{
    // Guard against dividing by zero, which would crash the program
    if(no2 == 0) { return 0; }
    return (no1 / no2);
}

// This function takes in two numbers (int) and returns number one minus number two
int Subtract(int no1, int no2)
{
    return (no1 - no2);
}

// This function takes in two numbers (int) and returns number one plus number two
int Add(int no1, int no2)
{
    return (no1 + no2);
}
```

    Look through what I've done and think about how this makes things better. I threw in a little documentation too, to help. Again, sorry I don't have time right now to talk through all the theory with you. I would recommend reading a book like "Teach Yourself C++ in 21 Days" (one of the better fast-learning books); you can find this specific book here: Hope this helps
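    As an aside on the comment above about converting the character input to an int later on: because the digit characters '0' through '9' are contiguous, the conversion is a simple subtraction. A minimal sketch (the helper name is mine, not part of the calculator above):

```cpp
#include <cassert>

// Convert a digit character ('0'..'9') to its integer value.
// Returns -1 for anything that is not a digit.
int DigitToInt(char task)
{
    if (task >= '0' && task <= '9')
        return task - '0'; // '0'..'9' are contiguous character codes
    return -1;
}
```

    This is handy if you later want to range-check the menu choice numerically instead of writing one case per character.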
  3. Point sprites on Windows/ATI

    Hey all, I thought I would just say that we have given up on point sprites for ATI; after two and a half weeks of no progress it just isn't worth it, it is costing too much to keep working at it. We rewrote our billboarding classes to make them more efficient and simple; they now run fluently and smoothly on all hardware and look almost as good as point sprites. On Nvidia hardware we still use point sprites, but for ATI we now default to billboards. It is not the best solution, I know, but we are behind schedule because of this so we have to make concessions somewhere. Coincidentally, ATI announced that the 9.1 drivers would support geometry shaders, Shader Model 4 and full OpenGL 3.0; however they are now on 9.2 with no sign of the above, but it's on the way. Here's the story: Best of luck to the rest of you in fixing the errors.
  4. Point sprites on Windows/ATI

    Quote: Original post by V-man
    "Probably your glext.h is messed up or perhaps some other .h file of yours."

    Unfortunately not; all the values are correct in GLee.h and GLee.c, and there are no conflicting defines in the other headers and libraries that we link to. What it looks like is the ATI OpenGL driver DLL having trouble recognising and controlling the correct extensions. The crash involving glEnable(GL_POINT_SPRITE_ARB) would only occur in the initialisation, and only in fixed-function mode; GL_POINT_SPRITE_ARB could be called, and was called, in multiple other areas of the application without consequence. All of our fatal errors with ATI and OpenGL sprites have originated in the "atioglxx.dll" system driver, which is ATI's OpenGL driver file. Additionally, everything is perfect on Nvidia hardware.

    Edit: We have tried linking to glext and to ATI's own extension headers and libraries; all have had the same result: 0 errors on Nvidia hardware, and sprite and texture errors on ATI hardware.
  5. Point sprites on Windows/ATI

    Yeah, we have the same results here; all calls to "is enabled" return true. Right now we have shut off all our shaders and we are outputting through fixed function only; however the sprites still do not have texture coordinates, so it must be something in the GL setup.

    We have also observed some other strange ATI-only errors. For example, calling glEnable(GL_POINT_SPRITE_ARB) in fixed functionality would crash the game straight off the bat; this error was not present with shaders. However, calling glEnable(GL_POINT_SPRITE) is fine, and additionally both "__GLEE_GL_ARB_point_sprite" and "GL_ARB_point_sprite" return positive. We've also had pixel format errors appear from nowhere: code that we hadn't touched for two months, which was used to find a multisampled pixel format, began to error yesterday, again only on ATI hardware. We have resolved this now by reducing and refining our tests, but it still seems strange that code that has been error-free for two months would just start to error on ATI out of the blue. All our ATI-specific crashes so far have been linked back to the "atioglxx.dll" file, which is of course the ATI OpenGL driver file. But we still can't just write this off as terrible drivers (even though they are awful), as our old demos and third-party tutorial files can be observed running sprites just fine on ATI hardware.

    Edit: Currently we only support PC, but we will support Mac too after the release of our current game on PC. Right now we have positive sprite tests on Nvidia hardware ranging from (AGP) GeForce 6800s to (PCIe) GeForce 9500s, positive fixed-function tests on embedded Intel chips (OpenGL 1.4), positive fixed-function tests on 4-year-old ATI Radeon Mobility cards (yeah, believe it or not, we have no errors on these cards! I almost fell off my chair. Additionally, the drivers for the cards in question had updates terminated 3 years ago), and we have massive sprite errors, texture errors and pixel format errors on all the HD series of ATI cards. (Texture errors on the HD series are shader-only and were fixed by rendering a dummy level, which made use of every shader, for a single frame at the start of the game. This practice is advised by the ATI OpenGL programming optimisation guide.)
  6. Point sprites on Windows/ATI

    Just thinking, but have you tried calling glGetTexEnviv(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE, &params) to test if coordinate replacement is enabling and disabling correctly? Just another test to root out possible issues.
  7. Point sprites on Windows/ATI

    GL_VERTEX_PROGRAM_POINT_SIZE allows the vertex shader to output point size. Again, on our test rigs this enable behaves as expected: when on, the vertex shader output controls the point size; when off, points default to the set glPointSize() and GL point attenuation is bypassed. This (correct) behaviour is present on both our ATI and Nvidia rigs. One thing we've noticed on the ATIs is that if GL_POINT_SPRITE is enabled as a global parameter, then when points are rendered they default back to regular points; however, when GL_POINT_SPRITE is enabled and disabled per sprite object, points are rendered as sprites. On Nvidia hardware GL_POINT_SPRITE can be enabled globally and behaves correctly, i.e. renders points as sprites. The GL_ARB_point_sprite spec notes that GL_POINT_SPRITE is a global parameter, so as long as a program doesn't need to render regular points, it should be able to call GL_POINT_SPRITE once and achieve correct behaviour. Again, before our rewrite, enabling GL_POINT_SPRITE globally on ATI worked.
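    For reference on the attenuation mentioned above: when distance attenuation is active, the fixed pipeline derives the final point size roughly as below. This is a sketch of the formula from the GL_ARB_point_parameters spec; the helper name and the explicit min/max parameters are mine, not GL's:

```cpp
#include <algorithm>
#include <cmath>

// Sketch of fixed-function point size attenuation:
//   derived = size * sqrt(1 / (a + b*d + c*d^2)), clamped to [minSize, maxSize]
// where d is the eye-space distance to the point and a, b, c are the
// constant, linear and quadratic attenuation coefficients.
float AttenuatedPointSize(float size, float d, float a, float b, float c,
                          float minSize, float maxSize)
{
    float derived = size * std::sqrt(1.0f / (a + b * d + c * d * d));
    return std::min(std::max(derived, minSize), maxSize);
}
```

    With the default coefficients (a = 1, b = c = 0) the derived size is just the glPointSize() value, which matches the "attenuation is bypassed" behaviour described above.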
  8. Point sprites on Windows/ATI

    SwiftCoder, I have noticed an issue with your code:

```
glClientActiveTexture(GL_TEXTURE1)
glEnableClientState(GL_TEXTURE_COORD_ARRAY)
glTexCoordPointer(1, GL_FLOAT, sizeof(Particle), sizeof(c_float)*6)
glDrawArrays(GL_POINTS, 0, self.count)
glDisableClientState(GL_TEXTURE_COORD_ARRAY)
glClientActiveTexture(GL_TEXTURE0)
```

    You enable texture unit 1 and then send your texture coordinates into it, and you also disable through that unit. Your code should probably look more like this:

```
glClientActiveTexture(GL_TEXTURE0_ARB)
glEnableClientState(GL_TEXTURE_COORD_ARRAY)
glTexCoordPointer(1, GL_FLOAT, sizeof(Particle), sizeof(c_float)*6)
glDrawArrays(GL_POINTS, 0, self.count)
glClientActiveTexture(GL_TEXTURE0_ARB)
glDisableClientState(GL_TEXTURE_COORD_ARRAY)
```

    Texture coordinates should also be sent down unit 0. When GL_COORD_REPLACE_ARB is enabled it will always attempt to send texture coordinates down unit 0. Hope this helps in some way.
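    For anyone following along, the stride and offset arguments in the snippets above follow the usual interleaved-array pattern. A hypothetical Particle layout (my guess at the struct for illustration, not SwiftCoder's actual one) that would match those arguments:

```cpp
#include <cstddef>

// Hypothetical interleaved particle: position (3 floats), colour (3 floats),
// then a single float of per-particle texture coordinate data.
struct Particle
{
    float position[3];
    float color[3];
    float texcoord;
};

// glTexCoordPointer's stride is then the whole struct, and its offset is the
// position of texcoord within it: 6 floats in, i.e. sizeof(float) * 6.
const std::size_t kStride         = sizeof(Particle);
const std::size_t kTexcoordOffset = offsetof(Particle, texcoord);
```

    Keeping the offsets derived from offsetof, rather than hand-counting floats, avoids the silent breakage when a field is later added to the struct.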
  9. Point sprites on Windows/ATI

    Quote: Original post by swiftcoder
    "The most confusing item is that this call has absolutely no effect on either machine: glTexEnvi(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE). You can set it to GL_TRUE or GL_FALSE, and the PC will always behave as if it is false, while the Mac (OS X 10.5, Intel integrated X3100) will always behave as if it is true. I don't know if this function has been deprecated along the way, or if both drivers just happen to have different bugs regarding it?"

    It is definitely not deprecated, as it is vital to setting sprite texture coordinates and can be observed working correctly. On the Nvidia hardware here (mostly 8800 GTXs and 9500 GTs), setting glTexEnvi(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, ...) to GL_TRUE has the effect of generating texture coordinates for the sprites, whilst GL_FALSE has the sprites using the original texture coordinates; exactly the behaviour that should be expected. It is odd that your Mac forces GL_TRUE on coordinate replacement, especially as the sprite spec says the default value is GL_FALSE, but then again it is the behaviour that is usually desired. With coordinate replacement enabled, coordinates fed from GL to the pipeline via glTexCoordPointer and similar functions are ignored; the original coords are still present in the vertex shader, but are overwritten/hijacked at the fragment stage. Removing the glTexCoordPointer call while sprite coordinate replacement is enabled has zero effect on sprites on Nvidia, as the sprites are generating their own coords, so they render correctly; however, on ATI hardware sprites render black, as they no longer have texcoords to map the texture.

    Is it possible we're both missing something in the shader initialisation? For us, the shader init was completely rewritten during our render system rewrite, while the shader code itself remained largely untouched. However, I have read through the GL_ARB_point_sprite spec many times and I cannot spot any mistakes in our code, and the code you posted looks correct. The spec can be found here: The official .txt can be found on the OpenGL website, but that appears to be down right now; nevertheless, here's the address:
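    To make the coordinate replacement behaviour above concrete: with GL_COORD_REPLACE enabled, the rasteriser generates an s,t pair that sweeps from 0 to 1 across the point's footprint, ignoring whatever glTexCoordPointer supplied. A conceptual sketch of that sweep (pixel-centre sampling; deliberately simplified from the GL_ARB_point_sprite spec's exact window-coordinate formula):

```cpp
// Conceptual replaced texture coordinate for pixel column i (0-based) of a
// point sprite that is n pixels wide: sampled at the pixel centre, sweeping
// from 0 at the left edge to 1 at the right edge. The same applies per row
// for t (with the direction depending on the point sprite origin).
float SpriteTexCoord(int i, int n)
{
    return (static_cast<float>(i) + 0.5f) / static_cast<float>(n);
}
```

    This is why sprites render black on a driver that fails to do the replacement while glTexCoordPointer is absent: there is then no source of texcoords at all.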
  10. Point sprites on Windows/ATI

    Quote: Original post by swiftcoder
    "Does anyone know if ATI is aware of this issue? I don't see any mention of it on the ATI developer forums, or even on the web (via google)."

    People have been upset with ATI's implementation of OpenGL point sprites for years. Googling ATI and GL point sprites is as hilarious as it is annoying; looking back over the years, you can see just how many people have been upset for so long. The thing which I keep coming back to (and which has me eternally floored) is that point sprites can work on ATI hardware, and can be observed doing so, so why aren't they working in our cases?

    I'm just wondering, swiftcoder, are you using GLSL or Cg? And which variables do you have enabled in your render state? The problem we have in the studio here arose when we rewrote the render systems to incorporate the ability to fall back to fixed functionality, so that older hardware could play our games. But in our attempts to fix it we have even gone as far as culling everything from the engine until it was only capable of rendering a single point sprite, and the issue still persisted.

    Below is our render state initialisation (with some pseudocode for copyright reasons; sounds lame, I know, but I don't own it):

```cpp
// Initialise everything (if possible)
int RenderEngineGL_CG::Render_InitialiseAll()
{
    // Store the device context
    hDc = GetDC(hWnd);
    if(hDc == NULL) {return CONTEXT_ERROR;}

    // Set a pixel format
    SetPixelFormat(hDc, iPixelFormat, NULL);

    // Get the OpenGL rendering context
    hRc = wglCreateContext(hDc);
    if(hRc == NULL) {return RENDER_CONTEXT_ERROR;}

    // Check/set the current context
    if(wglMakeCurrent(hDc, hRc) == FALSE) {return CURRENT_CONTEXT_ERROR;}

    // Initialise GLee
    GLeeInit();

    // Get the graphics vendor
    iGLVendor = CurrentVendor;
    // Get the GL information
    fGLVersion = OpenGlVersion;

    // Check multisampling
    if(UseMultisampling == TRUE)
    {
        // Enable multisampling
        glEnable(GL_MULTISAMPLE_EXT);
        // Check version is above 1.3
        if(fGLVersion >= 1.3)
        {
            // Use the alpha value
            glEnable(GL_SAMPLE_COVERAGE_ARB);
            // Set the sample coverage
            glSampleCoverage(1.0, FALSE);
        }
    }

    // Test the OpenGL version; if it is less than 1.2, error and quit the game
    // as this machine is not good enough to run it
    if(fGLVersion < 1.2) {return GL_VERSION_ERROR;}

    // Test for GL 1.3 or better
    if(fGLVersion < 1.3)
    {
        // Disable cloud sprites, texture filtering and mipmaps
    }

    // Test for GL 1.4 or better
    if(fGLVersion >= 1.4)
    {
        SetUpThePointAttributes();
    }

    // Test the OpenGL version; if it is less than 1.5, hard-set the fixed functionality pipeline
    if(fGLVersion < 1.5)
    {
        // Set fixed functionality
        bFixedFunctionality = TRUE;
        // Turn off all framebuffers
        UseFrameBuffers = FALSE;
        UseBrightPass = FALSE;
        UseBlurPass = FALSE;
    }

    // Check for GL version 1.5
    if(fGLVersion >= 1.5)
    {
        // Enable point sprites in vertex programs
        glEnable(GL_VERTEX_PROGRAM_POINT_SIZE_ARB);
    }

    // Check if the OpenGL version is less than 2.0
    if(fGLVersion < 2.0)
    {
        // Hard-set points over sprites
        bPointsOverSprites = TRUE;
    }

    // Check the points-over-sprites value
    if((bPointsOverSprites == FALSE) && (bFixedFunctionality == FALSE))
    {
        // Enable point sprites
        glEnable(GL_POINT_SPRITE_ARB);
    }
    else
    {
        // Set up smooth points
        glEnable(GL_POINT_SMOOTH);
        // Set the hint
        glHint(GL_POINT_SMOOTH_HINT, GL_FASTEST);
    }

    // Set default point size
    glPointSize(63.0f);

    // Set the GL clear colours
    glClearColor(0.75f, 0.75f, 0.75f, 0.75f);
    glClearDepth(1.0f);
    // Clear the colour and depth buffers
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Set the polygon mode
    glPolygonMode(GL_FRONT, GL_FILL);
    // Disable polygon smoothing
    glDisable(GL_POLYGON_SMOOTH);

    // Set up back-face culling
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);

    // Set up the alpha testing
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.05);

    // Set up the blending function
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    // Set the shade model, and the winding to counter-clockwise
    glShadeModel(GL_FLAT);
    glFrontFace(GL_CCW);

    // Set up depth testing
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);

    // Enable materials
    glEnable(GL_COLOR_MATERIAL);
    glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);

    // The render engine has initialised
    bInitialised = TRUE;

    // Check the fixed function flag
    if(bFixedFunctionality == TRUE)
    {
        SetUpFixedFunctionalityPipeline();
        return NO_ERRORS;
    }
    else
    {
        // Turn on framebuffers
        UseFrameBuffers = TRUE;
        InitialiseTheShaders();
        InitialiseTheFramebuffers();
        return NO_ERRORS;
    }

    // If we are here then something went wrong!
    return UNDEFINED_ERROR;
}
```

    Similarly to you, swiftcoder, we enable and disable point sprites per sprite object; however, we do not turn off the depth mask. And just to note, our render state now is virtually identical to how it was when sprites worked for us on ATI hardware; the only real additions are the fixed functionality checks. We have also tried using ATI's own extension headers and libraries, but they failed to make any change.
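    The version tests above compare fGLVersion numerically; a hypothetical helper (not from our engine, just a sketch of the idea) for deriving that number from the GL_VERSION string would be:

```cpp
#include <cstdio>
#include <string>

// Hypothetical helper: parse a GL_VERSION string such as "2.1.2 NVIDIA 169.21"
// into a comparable major.minor float, as the checks above assume.
// (Dividing by 10 is fine for the single-digit minor versions of this era.)
float ParseGLVersion(const std::string& versionString)
{
    int major = 0;
    int minor = 0;
    if (std::sscanf(versionString.c_str(), "%d.%d", &major, &minor) != 2)
        return 0.0f; // unrecognised string: fail every minimum-version check
    return static_cast<float>(major) + static_cast<float>(minor) / 10.0f;
}
```

    Comparing floats like this is tolerable for coarse feature gating, though storing major and minor as separate ints avoids any rounding worries.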
  11. Point sprites on Windows/ATI

    It would be nice to knock it off as impossible, but like I said, we have an old build with sprites working on ATI. External tutorials, such as those supplied with the OpenGL SuperBible (Blue Book), have working sprite programs on ATI as well. It is possible, but there must be a conflicting variable in the render state, the memory management systems or the actual compile options. We attempted to use billboards to overcome the ATI issue here, but due to the massive number of sprites/billboards we need to use, the extra matrix calculations caused an exponential decrease in performance. So billboards are not an option here, and sprites can work on ATI. You can find the Blue Book source code at "" and the "" is the one to go for, for Win32. Compile the "Chapter 9: PointSprites" program and you will be able to see working sprites on ATI. (P.S. you may have to tell your linker to ignore the outdated "LIBCMT.lib & LIBC.lib" libraries.) Hope this can help in some way.
  12. Point sprites on Windows/ATI

    We're currently stuck with the same problem at the studio where I work. Our game runs perfectly on all Nvidia hardware, but sprites cause issues on ATI hardware. We were also having depth issues with point sprites on ATI hardware, but we fixed that by changing the polygon mode to GL_POINT for sprite rendering, as opposed to drawing with glDrawElements(GL_POINTS, ...), and that cured the depth issue. But like I say, we still can't get sprite texture coords on ATI. The strangest thing, which has us all completely stumped, is that we have an old build of the game in which the ATI point sprite errors are not present. I don't think any of the above will be really all that helpful for you, but we are just as interested in finding a fix as you are. Just to note, we are using Cg as our shading language, and our primary ATI testing cards are HD 3450s, but we have observed the errors on the HD 3800 series as well. Good luck mate, I'll post up if we find a solution here.
  13. !SOLVED! I am nervous writing this, because usually when it works it promptly stops again, but after ten straight working compiles, it works!! OK, for anyone in the future who gets issues with Cg (possibly other shaders too), shader classes and Win32 wrapper classes: the solution is to set up and initialise Cg in the wrapper class, just after the window is created. Have the wrapper class output the Cg context, the vertex and fragment programs and profiles, and any other needed shader parameters that you may have, then use those outputs to set the contexts, profiles and programs in your shader class. This will work. The problem would appear to have been setting the contexts, programs etc. in the shader class (in its constructor), as it was isolated from any specific OpenGL initialisation and therefore would output to any which one. I should have known better really, but it's been months since I last worked with shaders. I should have dumped the method and tried a different one as soon as it failed, but unfortunately the first time it actually happened to output to the correct screen was the first time I tried the method, so I was dead set that it should work. Oh well. Anyway, I hope this can help someone else out in the future! Thanks to everyone who looked and tried to think of a way to help.
  14. Just a quick thought, all: this code is currently running in our toolset, and this toolset has multiple other windows all running their own instances of OpenGL. Is there any chance that Cg is getting confused as to which instance of OpenGL to either get the state of (glstate.) or to output to? It would explain why it sometimes renders and sometimes does not. If that is the case, is there any way that this can be limited? Thanks in advance.
  15. Hey all. I started adding Cg to my OpenGL-based rendering engine yesterday, and so far I have had an incredible hassle at the first hurdle: just getting Cg to draw. The engine is built upon classes: there's a class for objects in the scene (holds transformation data and matrices), a class for imported meshes (these pass the mesh data to the objects so that they can be drawn and transformed) and a class for shaders. Each shader class holds the Cg parameters and initialises its own Cg context and program. The shader class (like the mesh class) can be attached to an object in the scene; when it is, the Cg shader's enable and bind functions are called at the start of the object's render, and unbind etc. is called at the end of the render. My problem is that this sometimes draws and sometimes does not, most of the time it doesn't, and this is the part that really has me stuck. If it was a straight error then nothing would work, but there are no error messages from Cg or OpenGL; if it never drew then there would obviously be an error in the rendering code or the shader, but it sometimes draws and runs perfectly, and that has me stumped. Right now I am only running the most basic of basic vertex shaders just to get it to draw; all it does is multiply the incoming position by the modelview-projection matrix to produce the output position. So I think the problem must be in the multiplication of the modelview-projection matrix; it's the only thing that's there to be wrong. But I can't solve it no matter how many ways I try.

    Below is the current Cg vertex shader:

```
void C2E1v_green(float4 position : POSITION,
                 float3 color    : COLOR,
                 out float4 oPosition : POSITION,
                 out float3 oColor    : COLOR,
                 uniform float4x4 modelViewProj)
{
    // Transform position from object space to clip space
    oPosition = mul(glstate.matrix.mvp, position);
    oColor = float3(0, 1, 0);
}
```

    Below is the shader class binding function:

```cpp
// Enable the profile
cgGLEnableProfile(myCgVertexProfile);
checkForCgError("Enabling vertex profile", myProgramName, myCgContext);

// Bind the program
cgGLBindProgram(myCgVertexProgram);
checkForCgError("Binding vertex program", myProgramName, myCgContext);
```

    (As the shader uses glstate.matrix.mvp, there is no need to pass in a matrix from OpenGL through Cg parameters.)

    Below is the object rendering function:

```cpp
// Set the modelview matrix
glMatrixMode(GL_MODELVIEW);
// Set the colour to white
glColor3f(1.0f, 1.0f, 1.0f);
// Enable texturing
glEnable(GL_TEXTURE_2D);
// Counter-clockwise faces are front facing
glFrontFace(GL_CCW);

// Push the matrix
glPushMatrix();

// Multiply the matrices
glMultMatrixf(Translate4x4);
glMultMatrixf(Rotate4x4);
glMultMatrixf(Scale4x4);

// Check the bone handler
if(pBoneHandler != NULL)
{
    // Bind the skin
    pBoneHandler->SkinMesh();
    // Draw the bones (depending on their own render state)
    pBoneHandler->Render();
}

// Bind the texture
if(pShader != NULL)
{
    // Set up the textures
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glBindTexture(GL_TEXTURE_2D, pShader->GetTextureName());
    // Bind the shader
    pShader->BindShader(this);
}

// If there is a mesh then render it; if not, draw the axis
if(pMesh != NULL)
{
    // Enable the array client states
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    // Draw the elements
    glVertexPointer(3, GL_FLOAT, 0, pMesh->GetVertices());
    glNormalPointer(GL_FLOAT, 0, pMesh->GetNormals());
    glTexCoordPointer(2, GL_FLOAT, 0, pMesh->GetTextureCoords());
    glDrawElements(GL_TRIANGLES, pMesh->GetIndexSize(), GL_UNSIGNED_INT, pMesh->GetIndices());

    // Disable the array client states
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
else
{
    DrawAxis();
}

// Check if a camera is pointed to
if(pCamera != NULL)
{
    DrawCamera();
}

// Check the bone handler
if(pBoneHandler != NULL)
{
    // Unbind the skin
    pBoneHandler->UnskinMesh();
}

// Unbind the shader
if(pShader != NULL)
{
    pShader->UnbindShader();
}

// Pop the matrix
glPopMatrix();
```

    Previously I tried to pass the matrix into the uniform float4x4 modelViewProj Cg parameter of the shader and multiply by that. I passed it in using the following methods:

```cpp
// Set the Cg parameters
cgGLSetStateMatrixParameter(myCgVertexParam_modelViewProj, CG_GL_MODELVIEW_PROJECTION_MATRIX, CG_GL_MATRIX_IDENTITY);
cgUpdateProgramParameters(myCgVertexProgram);
```

    and

```cpp
// Create the model matrix
multMatrix(ModelMatrix, pObject->GetTranslateMatrix(), pObject->GetRotateMatrix());
multMatrix(ModelMatrix, ModelMatrix, pObject->GetScaleMatrix());
// Create the view matrix
multMatrix(ViewMatrix, pCamera->GetInverseTranslateMatrix(), pCamera->GetInverseRotateMatrix());
// Create the modelview matrix
multMatrix(ModelViewMatrix, ViewMatrix, ModelMatrix);
// Get the projection matrix
glGetFloatv(GL_PROJECTION_MATRIX, ProjectionMatrix);
// Create the modelview-projection matrix
multMatrix(ModelViewProjectionMatrix, ModelViewMatrix, ProjectionMatrix);
cgSetMatrixParameterfr(myCgVertexParam_modelViewProj, ModelViewProjectionMatrix);
cgUpdateProgramParameters(myCgVertexProgram);
```

    and now I am calling oPosition = mul(glstate.matrix.mvp, position); in the shader. All have the same result: sometimes it renders and sometimes it does not. Please help, I'm really stumped.

    [Edited by - RoadToRuin on November 19, 2008 3:59:31 PM]
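    For anyone wanting to sanity-check the matrix path above: multMatrix itself isn't shown, so here is a sketch of what such a helper typically looks like for OpenGL's column-major 4x4 matrices (my stand-in for illustration, not the engine's actual code). One classic source of intermittent transforms is getting the operand order of these multiplies wrong, since matrix multiplication does not commute.

```cpp
// Stand-in for the multMatrix() helper used above: result = lhs * rhs for 4x4
// matrices stored in OpenGL's column-major order (element index = col*4 + row).
// Uses a temporary so it is safe to call with result aliasing an input,
// as in multMatrix(ModelMatrix, ModelMatrix, pObject->GetScaleMatrix()).
void multMatrix(float result[16], const float lhs[16], const float rhs[16])
{
    float tmp[16];
    for (int col = 0; col < 4; ++col)
    {
        for (int row = 0; row < 4; ++row)
        {
            tmp[col * 4 + row] = 0.0f;
            for (int k = 0; k < 4; ++k)
                tmp[col * 4 + row] += lhs[k * 4 + row] * rhs[col * 4 + k];
        }
    }
    for (int i = 0; i < 16; ++i)
        result[i] = tmp[i];
}
```

    With column vectors, a modelview-projection matrix is Projection * ModelView; it is worth double-checking each multMatrix call site against whichever operand order the helper actually implements, and against the row-major transposition that cgSetMatrixParameterfr applies.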