MiguelMartin

Member
  • Content Count

    13
  • Joined

  • Last visited

Community Reputation

100 Neutral

About MiguelMartin

  • Rank
    Member
  1. MiguelMartin

    FreeType troubles...

    Okay, I'm still having troubles, and I have a question; I am not sure what I'm doing wrong o.O...

    1. So FT_Set_Pixel_Sizes sets the number of pixels a glyph in the font should use for its width/height, correct? Which means I can't have decimal sizes, right?

    Here's my code. It doesn't complain about loading or rendering, but I must be stuffing up somewhere.

    Loading the Font:

    void OGLGraphicalAssetLoader::loadFontFromFile(const std::string& filepath, Font* output) const
    {
        FT_Library library; // a FreeType Library object
        FT_Face face;       // This holds the TrueType Font.
        FT_Error error;     // Holds any errors that could occur.

        error = FT_Init_FreeType(&library); // Initialize the FreeType Library
        if(error)
        {
            throw AnaxException("FreeType 2 Library could not be initialized", -2);
        }

        // Load the TrueType Font into memory
        error = FT_New_Face(library, filepath.c_str(), 0, &face);
        if(error)
        {
            throw AnaxException("Could not load TrueType Font: " + filepath, -2);
        }

        float maxSize = output->getMaxSize();
        float width = maxSize * 64;

        // Set the size of the Font
        FT_Set_Pixel_Sizes(face, 0, maxSize);

        // Create a blank Texture (Image)
        Image tempImage;
        tempImage.create(maxSize * 256, maxSize * 2);

        Rect2DFloat textureCoords;      // Holds temporary Texture Coordinates
        int drawX = 0, drawY = maxSize; // The x and y coordinates that the glyph will be drawn to in the Texture.

        FT_GlyphSlot slot = face->glyph; // The Glyph slot for the face

        // Loop through the Glyphs, putting them in the Texture (Image)
        for(int i = 0; i < 256; ++i)
        {
            Uint32 index = FT_Get_Char_Index(face, i);

            error = FT_Load_Char(face, index, FT_LOAD_RENDER);
            if(error) continue; // just ignore it..

            // Place Texture Coordinates
            textureCoords.position.x = drawX + slot->bitmap_left;
            textureCoords.position.y = drawY - slot->bitmap_top;
            textureCoords.size.width = slot->bitmap.width;
            textureCoords.size.height = slot->bitmap.rows;

            setFontTextureCoordinateValueForGlyth(output, i, textureCoords); // Set the Texture Coordinates

            // Render into Image
            BlitGlypth(face->glyph, &tempImage, textureCoords.position.x, textureCoords.position.y);

            // Increment drawing position
            drawX += face->glyph->advance.x >> 6;
        }

        // Upload the Texture to OpenGL
        Texture2D tempTexture;
        loadTextureFromImage(tempImage, &tempTexture);

        // Set the ID of the Font
        setFontTexture(output, tempTexture);
    }

    Blitting the Glyph to the Image:

    void BlitGlypth(const FT_GlyphSlot glypth, Image *output, Uint32 xForOutput, Uint32 yForOutput)
    {
        Uint32 x, y;
        for(y = 0; y < glypth->bitmap.rows; ++y)
        {
            for(x = 0; x < glypth->bitmap.width; ++x)
            {
                Uint32 pixel = glypth->bitmap.buffer[(x * glypth->bitmap.width) + y]; // access the pixel
                output->setPixel(xForOutput + x, yForOutput + y, Colour(255, 255, 255, pixel)); // place it in the image
            }
        }
    }

    Rendering the Font:

    void OGLRenderer::renderText(const Renderable2DText& text)
    {
        const std::string& theCharactersToRender = text.getText();
        Renderable2DRect& rect = getRectFromText(text);

        float drawX = 0;
        float drawY = 0;
        char currentChar = 0;

        // Loop through all the characters
        for(int i = 0; i < theCharactersToRender.length(); ++i)
        {
            // Update the current character
            currentChar = theCharactersToRender[i];

            if(currentChar == '\n')
            {
                //drawX = 0;
                //drawY += subRect.size.height;
            }
            else if(currentChar == ' ')
            {
                //drawX += text.getLineSpacing();
            }
            else
            {
                const Rect2DFloat& subRect = text.getFont()->getGlypth(currentChar);
                rect.setSubRect(subRect);

                // Render the Rect
                renderRect(rect);

                drawX += subRect.size.width + 1;
            }

            rect.move(drawX, drawY);
        }
    }

    I've noticed that my Font is rendered sideways or something, and it seems to not be in the correct order either. What I mean is: for example, the '+' sign (43 in ASCII) is not 43 characters from the NULL character (0). I tried rendering the '+' character (adding 28 to it too, to make it render the '+' character), but all I get when I render it (zoomed in, with a rectangle next to it) is this; how do I fix it? Here is the Image that was generated from the .ttf (the alpha channels might not show in the .png, I'm not sure); I don't know how to get the Image width exactly so that it's not giving me excess pixels... Many thanks in advance.
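    For comparison, here is a minimal sketch of a row-major glyph blit, assuming the same Image/Colour helpers used above. FreeType stores the glyph bitmap row by row with 'pitch' bytes per row, so the buffer is indexed as (y * pitch + x); also note that FT_Load_Char expects a character code, while FT_Load_Glyph expects the glyph index returned by FT_Get_Char_Index.

    #include <ft2build.h>
    #include FT_FREETYPE_H

    // Sketch only: copy an 8-bit grayscale FreeType glyph into an image,
    // reading the bitmap row by row rather than column-major.
    // 'Image', 'setPixel' and 'Colour' mirror the classes used in the post.
    void BlitGlyphRowMajor(const FT_GlyphSlot glyph, Image* output,
                           Uint32 xForOutput, Uint32 yForOutput)
    {
        const FT_Bitmap& bmp = glyph->bitmap;
        for(Uint32 y = 0; y < bmp.rows; ++y)
        {
            const unsigned char* row = bmp.buffer + y * bmp.pitch; // pitch = bytes per row
            for(Uint32 x = 0; x < bmp.width; ++x)
            {
                Uint32 coverage = row[x]; // alpha/coverage value for this pixel
                output->setPixel(xForOutput + x, yForOutput + y,
                                 Colour(255, 255, 255, coverage));
            }
        }
    }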
  2. MiguelMartin

    FreeType troubles...

    Couple of questions:

    1. Why are you passing the height instead of the width in this call:

    FT_Set_Pixel_Sizes( face, 0, fontSize );

    i.e. shouldn't you be calling it like this?

    FT_Set_Pixel_Sizes( face, fontSize, 0 );

    2. What are you doing with textureWidth and textureHeight?

    int textureWidth = 0;
    int textureHeight = 0;
  3. I am having a really harsh time trying to implement font rendering in my Engine... I'm mainly getting pissed off with FreeType; I just can't seem to understand it 100%. I'm loading the font with FreeType, looping through all the glyphs in the font, saving them into one single big Texture in ASCII order, and uploading that to OpenGL, but it's not working out so well. I would really appreciate it if someone looked at my code and explained what I am doing wrong. At the moment I can't load a TrueType font, and I doubt I'm rendering it correctly either, but I am not sure as I have not tested that...

    Here is how I am loading fonts; the maximum size of a font is defaulted to 20, to save into a Texture:

    void OGLGraphicalAssetLoader::loadFontFromFile(const std::string& filepath, Font* output) const
    {
        FT_Library library; // a FreeType Library object
        FT_Face face;       // This holds the TrueType Font.
        FT_Error error;     // Holds any errors that could occur.

        error = FT_Init_FreeType(&library); // Initialize the FreeType Library
        if(error)
        {
            throw AnaxException("FreeType 2 Library could not be initialized", -2);
        }

        // Load the TrueType Font into memory
        error = FT_New_Face(library, filepath.c_str(), 0, &face);
        if(error)
        {
            throw AnaxException("Could not load TrueType Font: " + filepath, -2);
        }

        FT_Set_Char_Size(face, output->getMaxSize() * 64, output->getMaxSize() * 64, 96, 96); // Set the size of the Font

        // Create a blank Texture (Image)
        Image tempImage;
        tempImage.create(face->glyph->bitmap.width, face->glyph->bitmap.rows);

        Rect2DFloat textureCoords;   // Holds temporary Texture Coordinates
        Uint32 drawX = 0, drawY = 0; // The x and y coordinates that the glyph will be drawn to in the Texture.

        // Loop through the Glyphs, putting them in the Texture (Image)
        for(int i = 0; i < 256; ++i)
        {
            Uint32 index = FT_Get_Char_Index(face, (char)i);

            error = FT_Load_Glyph(face, index, FT_LOAD_DEFAULT);
            if(error) continue; // just ignore it.. (should throw an exception or something along those lines)

            error = FT_Render_Glyph(face->glyph, FT_RENDER_MODE_NORMAL);
            if(error) continue; // just ignore it...

            // Place Texture Coordinates
            textureCoords.position.x = drawX + face->glyph->bitmap_left;
            textureCoords.position.y = drawY - face->glyph->bitmap_top;
            textureCoords.size.width = face->glyph->bitmap.width;
            textureCoords.size.height = face->glyph->bitmap.rows;

            setFontTextureCoordinateValueForGlyth(output, i, textureCoords); // Set the Texture Coordinates

            // Render into Image
            BlitGlypth(face->glyph, &tempImage, textureCoords.position.x, textureCoords.position.y);

            // Increment drawing position
            drawX += face->glyph->advance.x >> 6;
            drawY += face->glyph->advance.y >> 6;
        }

        // Upload the Texture to OpenGL
        Texture2D tempTexture;
        loadTextureFromImage(tempImage, &tempTexture);

        // Set the ID of the Font
        setFontIdNumber(output, tempTexture.getID());
    }

    I'm not quite sure if I'm formatting each character correctly into my texture, and I also do not know how to get or calculate the size that the Texture will be. When I call tempImage.create() it passes 0, 0 for the dimensions of the image... Is that because there is no glyph currently rendered, or..? How do I calculate what the Texture size should be?

    Here is how I am drawing the fonts, using a Rectangle to draw them:

    void OGLRenderer::renderText(const Renderable2DText& text)
    {
        const std::string& theCharactersToRender = text.getText();
        Renderable2DRect& rect = getRectFromText(text);

        // Loop through all the characters
        for(int i = 0; i < theCharactersToRender.length(); ++i)
        {
            const Rect2DFloat& subRect = text.getFont()->getGlypth(i);
            rect.setSubRect(subRect);

            // Render the Rect
            renderRect(rect);

            rect.move(subRect.position.x, subRect.position.y);
        }
    }

    If you need any more detail on how I am implementing this, please say so.
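    One way to size the atlas, sketched below under the assumption that the glyphs sit in a single row: do a measuring pass over the 256 character codes first, accumulating the horizontal advance and the tallest bitmap, and only then create the image. (At the point tempImage.create() is called above, no glyph has been rendered yet, which is why the slot's bitmap is still 0 x 0.)

    #include <ft2build.h>
    #include FT_FREETYPE_H

    // Sketch only: measure every glyph of an already loaded FT_Face and return
    // the width/height a single-row atlas would need.
    void MeasureAtlas(FT_Face face, unsigned* atlasWidth, unsigned* atlasHeight)
    {
        unsigned width = 0, height = 0;
        for(int charCode = 0; charCode < 256; ++charCode)
        {
            if(FT_Load_Char(face, charCode, FT_LOAD_RENDER) != 0)
                continue; // skip characters the font does not provide

            width += face->glyph->advance.x >> 6; // advance is in 1/64 pixel units
            if(face->glyph->bitmap.rows > height)
                height = face->glyph->bitmap.rows;
        }
        *atlasWidth  = width;
        *atlasHeight = height;
    }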
  4. this is my moment!! MY moment
  5. I am currently trying to get TrueType fonts implemented in my Rendering Engine, and I'm not sure how to approach it. I've done a lot of Googling, and from what I've read I can load a texture from the TrueType font (.ttf) into VRAM for OpenGL to use and then draw each character with a quad or something similar, cropped appropriately (probably storing the dimensions of where to crop each character in RAM instead of calculating them on the spot, e.g. where along the x axis to crop to the actual pixels of the letter). Now I was thinking of doing it this way, but how would I dynamically make the font bold or italic? I would have to re-load the texture into memory, wouldn't I, or just put the variants side by side, or something? I don't know. I've also read somewhere (can't remember where) that I could generate geometry from the font. Would this be easier than what I mentioned above, and if so, why? And how should I implement it? Which would be better: rendering each character as a quad from a texture, or drawing the text as geometry? Many thanks for reading. (:
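    For what it's worth, the "store crop dimensions in RAM" idea usually boils down to a small lookup table like the sketch below; the GlyphRect/FontAtlas names are just illustrative, not from any particular library.

    #include <GL/gl.h>

    // One entry per ASCII character, filled in once when the atlas texture is built.
    struct GlyphRect
    {
        float u0, v0, u1, v1; // texture coordinates of the glyph inside the atlas
        float width, height;  // size of the quad to draw, in pixels
        float advance;        // how far to move the pen after drawing this character
    };

    struct FontAtlas
    {
        GLuint texture;        // the atlas texture uploaded to OpenGL
        GlyphRect glyphs[256]; // crop data for each character code
    };

    // Drawing text then means: for each character c, look up glyphs[(unsigned char)c],
    // bind 'texture', emit one textured quad of (width x height) using (u0,v0)-(u1,v1),
    // and advance the pen by 'advance'.

    Bold and italic are normally separate font faces, so with this approach they would typically get their own atlas (or their own region of a shared one) rather than being derived from the regular face at run time.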
  6. Thanks a lot, it doesn't seem to complain any more. Oh, and V-man, my Mac OpenGL headers didn't seem to declare GL_TEXTURE_2D_ARRAY, but they did declare GL_TEXTURE_2D_EXT. Anyhow, I wasn't using textures, so that wasn't the problem. Thanks again.
  7. Well, it seems like I can't do anything right. I've tried for hours to get this working but I just can't seem to do it. Basically I'm getting 'EXC_BAD_ACCESS' (I believe a null pointer dereference when I'm running on Windows with VC++) at this call:

    glDrawArrays(GL_QUADS, 0, 4);

    Now I'm not sure why exactly this isn't working, as I have generated a buffer, updated it and then rendered it. I also tried to get VBOs to work with GLUT instead of what I am using (SFML 2.0 and the library I'm making). It seemed to work with GLUT and just regular C functions, but it won't for C++. I've been trying to see what I've done differently from the GLUT version, but everything looks just about the same (except for me checking if it requires texture coordinates, which it doesn't even hit). Here's my code; can someone PLEASE explain what I have done wrong? P.S. I am doing this after creating the OpenGL context and window (which SFML 2.0 does). And sorry if the indentation is wrong/messed up, Xcode is weird o:

    // Initializing the VBO
    /////////////////////////////////////////////////////////
    /// Adds a Renderable2DObject that the Renderer will render.
    /// \param renderableObject The Renderable2DObject you wish to add.
    /////////////////////////////////////////////////////////
    void OGLRenderer::addObjectToRenderList(const Renderable2D& renderableObject)
    {
        // Create a VBO for a specific object.
        GLuint sizeOfBuffer = 0; // The size of the buffer.
        GLuint nameOfBuffer = 0; // The name of the buffer

        // if it is a Renderable2DRect
        if(typeid(renderableObject) == typeid(Renderable2DRect*))
        {
            sizeOfBuffer = 4 * sizeof(OGLVertex);
        }

        glGenBuffers(sizeOfBuffer, &nameOfBuffer);   // Generate the buffer
        glBindBuffer(GL_ARRAY_BUFFER, nameOfBuffer); // Bind to the buffer

        // Tell OpenGL how we're going to manage data, but not upload anything at this time.
        glBufferData(GL_ARRAY_BUFFER,
                     sizeOfBuffer, // The size of the buffer
                     0,            // The actual data
                     GL_STREAM_DRAW);

        renderList.insert(RenderListPair(nameOfBuffer, &renderableObject)); // Add it to the Rendering List.
    }

    // Updating the VBO
    /////////////////////////////////////////////////////////
    /// Updates the VBO information for a Renderable2D object.
    /// \param rect The Renderable2DRect you wish to update information for.
    /////////////////////////////////////////////////////////
    void OGLRenderer::updateVboInformationForRect(const Renderable2DRect& rect)
    {
        std::vector<OGLVertex> vertices; // The vertices of the object

        // If the Renderable2D object is actually a Renderable2DRect.
        const Colour& colour = rect.getTintColour();

        // The size of the final Rectangle. If it's using cropping, get the cropping
        // dimensions; if it's not, get the size of the Rectangle.
        const DimensionFloat& size = rect.doesUseCropping() ? rect.getSubRectDimensions() : rect.getSize();

        vertices.resize(4); // Resize the vertices to 4 points

        // Now set up the data
        // Set the vertices up
        vertices[0].position.x = 0;          vertices[0].position.y = 0;
        vertices[1].position.x = size.width; vertices[1].position.y = 0;
        vertices[2].position.x = size.width; vertices[2].position.y = size.height;
        vertices[3].position.x = 0;          vertices[3].position.y = size.height;

        // If Texture2D mapping is enabled and there is a texture
        if(isRenderingOptionEnabled(RenderingOptions::Texture2DMapping) && rect.getTexture() != 0)
        {
            // Set the Texture Coords up.
            Rect2DFloat textureCoords = calcTextCoords(rect);

            vertices[0].textureCoords.x = textureCoords.position.x;
            vertices[0].textureCoords.y = textureCoords.position.y;
            vertices[1].textureCoords.x = textureCoords.size.width;
            vertices[1].textureCoords.y = textureCoords.position.y;
            vertices[2].textureCoords.x = textureCoords.size.width;
            vertices[2].textureCoords.y = textureCoords.size.height;
            vertices[3].textureCoords.x = textureCoords.position.x;
            vertices[3].textureCoords.y = textureCoords.size.height;
        }

        // Set the Colour of the Object
        for(int i = 0; i < 4; ++i)
        {
            vertices[i].colour = colour;
        }

        // Now send this information to OpenGL
        glBufferSubData(GL_ARRAY_BUFFER, 0, 4 * sizeof(OGLVertex), &vertices[0]);
    }

    // Rendering the VBO
    /////////////////////////////////////////////////////////
    /// Renders a Renderable2DRect.
    /// \param rect The Renderable2DRect you wish to render.
    /////////////////////////////////////////////////////////
    void OGLRenderer::renderRect(const Renderable2DRect& rect)
    {
        // Update the information
        updateVboInformationForRect(rect);

        bool isTexturingEnabled = isRenderingOptionEnabled(RenderingOptions::Texture2DMapping); // Tells whether texturing is enabled.
        bool shouldUseTextures = isTexturingEnabled && (rect.getTexture() != 0); // If the renderer should use texturing

        // Enable the Client States (for VBOs), getting ready to draw
        glEnableClientState(GL_VERTEX_ARRAY); // Enable Vertex Arrays for the VBOs
        glVertexPointer(3, GL_FLOAT, sizeof(OGLVertex), BUFFER_OFFSET(0));

        glEnableClientState(GL_COLOR_ARRAY); // Enable Colour Arrays for the VBOs
        glColorPointer(3, GL_UNSIGNED_BYTE, sizeof(OGLVertex), BUFFER_OFFSET(sizeof(Vector3<GLfloat>)));

        if(shouldUseTextures)
        {
            // Enable Texturing
            glEnableClientState(GL_TEXTURE_2D_ARRAY_EXT);
            glTexCoordPointer(2, GL_FLOAT, sizeof(OGLVertex), BUFFER_OFFSET(sizeof(Vector3<GLfloat>) + sizeof(Colour)));

            // Bind to the actual texture
            glBindTexture(GL_TEXTURE_2D, rect.getTexture()->getID());
        }

        glDrawArrays(GL_QUADS, 0, 4); // Draw the Renderable2DRect.

        // Now disable them
        glDisable(GL_VERTEX_ARRAY);
        glDisableClientState(GL_COLOR_ARRAY);

        // If texturing is enabled
        if(shouldUseTextures)
        {
            // That means it was enabled, so disable it
            glDisableClientState(GL_TEXTURE_2D_ARRAY_EXT);
        }
    }

    I'm sorry if it isn't indented properly o.O; oh, and here's that OGLVertex:

    ///////////////////////////////////////////////////////////////////////////
    /// \struct OGLVertex
    /// \brief A Vertex data structure.
    ///
    /// A Vertex Data structure that holds everything that is required for one Vertex,
    /// this is mainly used to upload data to the VRAM, using OpenGL.
    ///
    /// \author Miguel Martin.
    ///////////////////////////////////////////////////////////////////////////
    struct OGLVertex
    {
    public:
        /////////////////////////////////////////////////////////
        /// The Position of the OGLVertex.
        /////////////////////////////////////////////////////////
        Vector3<GLfloat> position;

        /////////////////////////////////////////////////////////
        /// The Colour of the OGLVertex.
        /////////////////////////////////////////////////////////
        Colour colour;

        /////////////////////////////////////////////////////////
        /// The Texture Coordinates of the OGLVertex.
        /////////////////////////////////////////////////////////
        Vector2<GLfloat> textureCoords;

        /////////////////////////////////////////////////////////
        /// Extra padding to round off the size of the Vertex to 64 bytes.
        /////////////////////////////////////////////////////////
        GLfloat padding[2];
    };
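    For reference, a minimal sketch of the usual create/update/draw flow for a single quad VBO, assuming the OGLVertex struct, the BUFFER_OFFSET macro and the Vector3/Colour types from the post. The details that commonly bite here: glGenBuffers takes a count of buffer names (not a size in bytes), the buffer must be bound with glBindBuffer before glBufferSubData and before the gl*Pointer calls that glDrawArrays reads from, the texture-coordinate array is enabled with GL_TEXTURE_COORD_ARRAY, and client states are turned off with glDisableClientState rather than glDisable.

    #include <GL/gl.h>
    // glGenBuffers and friends may need GLEW or the ARB extension headers, depending on platform.

    // Sketch only: create the buffer once.
    GLuint createQuadBuffer()
    {
        GLuint name = 0;
        glGenBuffers(1, &name); // one buffer name
        glBindBuffer(GL_ARRAY_BUFFER, name);
        glBufferData(GL_ARRAY_BUFFER, 4 * sizeof(OGLVertex), 0, GL_STREAM_DRAW);
        return name;
    }

    // Sketch only: update and draw the quad each frame.
    void drawQuad(GLuint name, const OGLVertex vertices[4])
    {
        glBindBuffer(GL_ARRAY_BUFFER, name); // bind before updating and drawing
        glBufferSubData(GL_ARRAY_BUFFER, 0, 4 * sizeof(OGLVertex), vertices);

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, sizeof(OGLVertex), BUFFER_OFFSET(0));

        glEnableClientState(GL_COLOR_ARRAY);
        glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(OGLVertex),
                       BUFFER_OFFSET(sizeof(Vector3<GLfloat>))); // assumes Colour is 4 bytes (RGBA)

        glEnableClientState(GL_TEXTURE_COORD_ARRAY); // not GL_TEXTURE_2D_ARRAY_EXT
        glTexCoordPointer(2, GL_FLOAT, sizeof(OGLVertex),
                          BUFFER_OFFSET(sizeof(Vector3<GLfloat>) + sizeof(Colour)));

        glDrawArrays(GL_QUADS, 0, 4);

        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_COLOR_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY); // glDisableClientState, not glDisable
    }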
  8. piece of shit blood nose.
  9. MiguelMartin

    Not sure how to implement this.

    Thanks very much guys, really appreciate the help. Oh and don't worry about going off topic, haha. Thanks again
  10. MiguelMartin

    Water Reflection and Opacity

    Well, I'm not sure if this will work or not, but assuming that you're using alpha values for the water (or maybe you need alpha values for the reflection?), you should be enabling blending. Usually you use it like so, although you can use it in many different ways:

    glEnable(GL_BLEND); // Enables blending for OpenGL
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // Set the blending function

    There are a lot more OpenGL blending functions, but this might do the job; ask Google if you don't like it. FAQ on transparency: http://www.opengl.or...ransparency.htm Sorry if you already know this, it just doesn't seem like you're enabling blending o:
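    As a rough sketch of where that fits in a frame (drawOpaqueScene and drawWaterSurface below are placeholders, not real functions from the thread): draw the opaque geometry first, then the water last with blending enabled and depth writes off, so the scene behind it still shows through.

    #include <GL/gl.h>

    void drawOpaqueScene();  // placeholder: terrain, sky, reflection pass, etc.
    void drawWaterSurface(); // placeholder: the water quad/mesh with alpha

    void drawFrame()
    {
        drawOpaqueScene();

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(GL_FALSE); // keep depth testing, but don't write depth for the water

        drawWaterSurface();

        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
    }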
  11. So currently I'm writing a Rendering Engine with OpenGL and possibly some other APIs in the future (very far away ;)). Anyhow, I have encountered a problem: for some reason I cannot figure out how I should render my scene multiple times into different viewports.

    The first thing I did was create one display list and encapsulate every function that was called prior to it into that display list. For example:

    // Render everything
    renderer->beginDrawing();    // Creates a display list.
    renderer->renderRectangle(); // Draws a rectangle (stores it in the display list), with transformations
    renderer->finishDrawing();   // Ends the display list and renders everything to the screen.

    I tried it and it seemed to work at first, but then I added more objects to my scene and the transformations of the objects were stuffing up. I.e. if I rotated one object, everything else would rotate, even though I tried loading the identity matrix for every object in the scene; it just wouldn't reset the matrix for every object. I wasn't sure if a display list saves calls to glLoadIdentity, glTranslatef, etc., so I scratched that plan.

    Then I decided to add display lists for every object on the fly, then loop through all the objects and render them (calling each display list). It works and all, but I'd imagine that with a huge scene it would cause a lot of overhead. I was thinking of using VBOs instead of display lists; from what I hear they're pretty lightweight and not deprecated? I would still imagine some overhead if I were making VBOs on the fly, every frame. Should I do this, or try to "add" objects to some sort of list and then render that list (full of objects) all at once with one function call? For example:

    // Outside of the game-loop (initialization code)
    renderer->addObjectToRender(rect); // Add a rectangle to render.

    // Inside the game-loop
    renderer->renderScene(); // Renders the ENTIRE scene all at ONCE.

    I would think that this wouldn't cause much overhead, as it only allocates memory for a VBO/display list once (for every object) during the entire program (or at least the scene). The only problem I see is that I don't think it would be as "dynamic", perhaps? I'm not sure, that's why I'm asking. Any help would be appreciated; I have no idea how expensive it is to create a VBO/display list every frame, which is why I was thinking of adding objects to render and then just rendering the entire scene. Many thanks for reading this.
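    A minimal sketch of that "build once, render every frame" pattern with VBOs, using hypothetical RenderItem/renderScene names: the buffers are created with glGenBuffers/glBufferData once at load time, and the per-frame loop only binds and draws, so nothing is allocated inside the game loop.

    #include <GL/gl.h>
    #include <cstddef>
    #include <vector>

    // Handle for one object whose VBO was created once during initialization
    // (tightly packed GL_FLOAT positions are assumed here).
    struct RenderItem
    {
        GLuint vbo;          // buffer object created at load time with glGenBuffers/glBufferData
        GLsizei vertexCount; // number of vertices to draw
    };

    // Per-frame loop: only bind and draw; no buffers are created or destroyed here.
    void renderScene(const std::vector<RenderItem>& items)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        for(std::size_t i = 0; i < items.size(); ++i)
        {
            glBindBuffer(GL_ARRAY_BUFFER, items[i].vbo);
            glVertexPointer(3, GL_FLOAT, 0, 0); // positions start at offset 0 in the bound buffer
            glDrawArrays(GL_TRIANGLES, 0, items[i].vertexCount);
        }
        glDisableClientState(GL_VERTEX_ARRAY);
    }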
  12. dat pure virtual function call.