Jump to content
  • Advertisement

MiguelMartin

Member
  • Content Count

    13
  • Joined

  • Last visited

Posts posted by MiguelMartin


  1. Okay, I'm still having troubles, I have a question and I am not sure what I'm doing wrong o.O...

    1. So with the FT_Set_Pixel_Sizes it sets the amount of pixels a glyph in the font should use for it's width/height, correct? Which means I can't have decimal sizes, right?


    Here's my Code, it doesn't complain about loading or rendering, but I must be stuffing up somewhere.

    Loading the Font:

    void OGLGraphicalAssetLoader::loadFontFromFile(const std::string& filepath, Font* output) const
    {
    FT_Library library; // a FreeType Library object
    FT_Face face; // This holds the TrueType Font.

    FT_Error error; // Holds any errors that could occur.

    error = FT_Init_FreeType(&library); // Initialize the FreeType Library

    if(error)
    {
    throw AnaxException("FreeType 2 Library could not be initialized", -2);
    }

    // Load the TrueType Font into memory
    error = FT_New_Face(library, filepath.c_str(), 0, &face);
    if(error)
    {
    throw AnaxException("Could not load TrueType Font: " + filepath, -2);
    }

    float maxSize = output->getMaxSize();
    float width = maxSize * 64;


    // Set the size of the Font
    FT_Set_Pixel_Sizes(face, 0, maxSize);

    // Create a blank Texture (Image)
    Image tempImage;
    tempImage.create(maxSize * 256, maxSize * 2);


    Rect2DFloat textureCoords; // Holds temporary Texture Coordinates
    int drawX = 0, drawY = maxSize; // The x and y coordinates that the glypth will be drawn to in the Texture.


    FT_GlyphSlot slot = face->glyph; // The Glyph slot for the face

    // Loop through the Glyph's putting them in the Texture (Image)
    for(int i = 0; i < 256; ++i)
    {
    Uint32 index = FT_Get_Char_Index(face, i);

    error = FT_Load_Char(face, index, FT_LOAD_RENDER);
    if(error)
    continue; // just ignore it..


    // Place Texture Coordinates
    textureCoords.position.x = drawX + slot->bitmap_left;
    textureCoords.position.y = drawY - slot->bitmap_top;
    textureCoords.size.width = slot->bitmap.width;
    textureCoords.size.height = slot->bitmap.rows;

    setFontTextureCoordinateValueForGlyth(output, i, textureCoords); // Set the Texture Coordinates

    // Render into Image
    BlitGlypth(face->glyph, &amp;tempImage, textureCoords.position.x, textureCoords.position.y);


    // Increment drawing position
    drawX += face->glyph->advance.x >> 6;
    }

    // Upload the Texture to OpenGL
    Texture2D tempTexture;
    loadTextureFromImage(tempImage, &amp;tempTexture);

    // Set the ID of the Font
    setFontTexture(output, tempTexture);
    }



    Bltting the Glyph to the Image:

    void BlitGlypth(const FT_GlyphSlot glypth, Image *output, Uint32 xForOutput, Uint32 yForOutput)
    {
    Uint32 x, y;
    for(y = 0; y < glypth->bitmap.rows; ++y)
    {
    for(x = 0; x < glypth->bitmap.width; ++x)
    {
    Uint32 pixel = glypth->bitmap.buffer[(x * glypth->bitmap.width) + y]; // access the pixel
    output->setPixel(xForOutput + x, yForOutput + y, Colour(255, 255, 255, pixel)); // place it in the image
    }
    }
    }


    Rendering the Font:


    void OGLRenderer::renderText(const Renderable2DText&amp; text)
    {
    const std::string&amp; theCharactersToRender = text.getText();
    Renderable2DRect&amp; rect = getRectFromText(text);

    float drawX = 0;
    float drawY = 0;

    char currentChar = 0;

    // Loop through all the characters
    for(int i = 0; i < theCharactersToRender.length(); ++i)
    {
    // Update the current character
    currentChar = theCharactersToRender;


    if(currentChar == '\n')
    {
    //drawX = 0;
    //drawY += subRect.size.height;
    }
    else if(currentChar == ' ')
    {
    //drawX += text.getLineSpacing();
    }
    else
    {
    const Rect2DFloat&amp; subRect = text.getFont()->getGlypth(currentChar);
    rect.setSubRect(subRect);

    // Render the Rect
    renderRect(rect);

    drawX += subRect.size.width + 1;
    }


    rect.move(drawX, drawY);
    }
    }


    I've noticed that my Font is rendered sideways or something, and it seems to not be in the correct order or something? S: What I mean is: For example the '+' sign (43 in ASCII) is not 43 characters from the NULL character (0). Now I tried rendering the '+' character (adding 28 to it too, to make it render the '+' character) but all I get when I render it (zoomed in, with a rectangle next to it) is this, how do I fix it?
    plussign.png


    Here is the Image that was generated from the .ttf (the alpha channels might not show in the .png, I'm not sure S:), I don't know how to get the Image width exactly so it's not giving me excess pixels...:
    temp0.png

    Many thanks in advance.

  2. Couple of Questions:
    1. Why are you parsing the height instead of the width in this function:

    [color=#000000]FT_Set_Pixel_Sizes[color=#666600]([color=#000000] face[color=#666600],[color=#000000] [color=#006666]0[color=#666600],[color=#000000] fontSize [color=#666600]);
    i.e. why are you calling it like this?

    [color=#000000]FT_Set_Pixel_Sizes[color=#666600]([color=#000000] face[color=#666600],[color=#000000] [color=#000000]fontSize[color=#666600],[color=#000000] [color=#006666]0[color=#666600]);

    2. What are you doing with textureWidth and textureHeight?

    [color=#000088]int[color=#000000] textureWidth [color=#666600]=[color=#000000] [color=#006666]0[color=#666600];
    [color=#000088]int[color=#000000] textureHeight [color=#666600]=[color=#000000] [color=#006666]0[color=#666600];

  3. I am having a really harsh time in trying to implement Font rendering into my Engine... Now I'm mainly getting pissed off with FreeType, I just can't seem to understand it 100%. I'm loading the Font with TrueType and then looping through all the glyphs in the font and saving them into one single big Texture, in ASCII order, and uploading them to OpenGL, but that's not working out so well.

    I would really appreciate it if someone looked at my code and explain to me what I am doing wrong. At the moment I can't load a TrueType font and I doubt I'm rendering them correctly either but I am not sure as I have not tested it...

    Here is how I am loading fonts; the maximum size of a font is defaulted to 20, to save into a Texture:
    void OGLGraphicalAssetLoader::loadFontFromFile(const std::string& filepath, Font* output) const
    {
    FT_Library library; // a FreeType Library object
    FT_Face face; // This holds the TrueType Font.
    FT_Error error; // Holds any errors that could occur.
    error = FT_Init_FreeType(&library); // Initialize the FreeType Library
    if(error)
    {
    throw AnaxException("FreeType 2 Library could not be initialized", -2);
    }
    // Load the TrueType Font into memory
    error = FT_New_Face(library, filepath.c_str(), 0, &face);
    if(error)
    {
    throw AnaxException("Could not load TrueType Font: " + filepath, -2);
    }
    FT_Set_Char_Size(face, output->getMaxSize() * 64, output->getMaxSize() * 64, 96, 96); // Set the size of the Font

    // Create a blank Texture (Image)
    Image tempImage;
    tempImage.create(face->glyph->bitmap.width, face->glyph->bitmap.rows);

    Rect2DFloat textureCoords; // Holds temporary Texture Coordinates
    Uint32 drawX = 0, drawY = 0; // The x and y coordinates that the glypth will be drawn to in the Texture.
    // Loop through the Glyph's putting them in the Texture (Image)
    for(int i = 0; i < 256; ++i)
    {
    Uint32 index = FT_Get_Char_Index(face, (char)i);
    error = FT_Load_Glyph(face, index, FT_LOAD_DEFAULT);
    if(error)
    continue; // just ignore it.. (should throw an except or something along those lines
    error = FT_Render_Glyph(face->glyph, FT_RENDER_MODE_NORMAL);
    if(error)
    continue; // just ignore it...

    // Place Texture Coordinates
    textureCoords.position.x = drawX + face->glyph->bitmap_left;
    textureCoords.position.y = drawY - face->glyph->bitmap_top;
    textureCoords.size.width = face->glyph->bitmap.width;
    textureCoords.size.height = face->glyph->bitmap.rows;
    setFontTextureCoordinateValueForGlyth(output, i, textureCoords); // Set the Texture Coordinates
    // Render into Image
    BlitGlypth(face->glyph, &tempImage, textureCoords.position.x, textureCoords.position.y);

    // Increment drawing position
    drawX += face->glyph->advance.x >> 6;
    drawY += face->glyph->advance.y >> 6;
    }
    // Upload the Texture to OpenGL
    Texture2D tempTexture;
    loadTextureFromImage(tempImage, &tempTexture);
    // Set the ID of the Font
    setFontIdNumber(output, tempTexture.getID());
    }



    I'm not quite sure if I'm formatting each character correctly into my texture, also I do not know how to get or calculate the size that the Texture will be. When I am calling tempImage.create() it parses 0, 0 for the dimensions of the image... S:? Is it because there is not current glyph selected or..? How do I calculate what the Texture size should be.

    Here is how I am drawing the Font's, using a Rectangle to draw them:

    void OGLRenderer::renderText(const Renderable2DText& text)
    {
    const std::string& theCharactersToRender = text.getText();
    Renderable2DRect& rect = getRectFromText(text);

    // Loop through all the characters
    for(int i = 0; i < theCharactersToRender.length(); ++i)
    {
    const Rect2DFloat& subRect = text.getFont()->getGlypth(i);
    rect.setSubRect(subRect);

    // Render the Rect
    renderRect(rect);
    rect.move(subRect.position.x, subRect.position.y);
    }
    }


    If you need anymore detail on how I am implementing this, please say so :)

  4. I am currently trying to get true-type fonts implemented in my Rendering Engine. I'm not sure on how to approach this, I've done a lot of Google-ing and from what I've read I can load a Texture from the True Type Font (.ttf) into VRAM for OpenGL to use and then draw each character with a quad or something similar, cropped appropriately (probably storing dimensions of where to crop each character in RAM, instead of calculating on the spot? S: e.g. Where along the x axis to crop to the actual pixels of the letter). Now I was thinking of doing it this way, but how would I dynamically make the font bolt or italic, I would have to re-load the texture into memory wouldn't I, or just put them side by side, or something? I don't know S:.

    Now I've read somewhere, can't remember where that I could generate geometry from the font or something? I don't know, but would this be easier than doing what I mentioned above, if so, why? Also how should I implement that, I'm not sure on how to go on that. Would it be better than rendering each character as a quad from a texture, or drawing the text as geometry.

    Many thanks for reading. (:

  5. Well it seems like I can't do anything right, I've tried for hours to get this working but I just can't seem to do it. Basically I'm getting 'EXC_BAD_ACCESS' (I believe a null pointer dereference when I'm running on windows with VC++) with this function:


    glDrawArrays(GL_QUADS, 0, 4);


    Now I'm not sure why exactly this isn't working, as I have generated a buffer, updated it and then rendered it. I also tried to get VBO's to work with glut instead of what I am using SFML 2.0 and my library that I'm making. It seemed to work with GLUT and just regular C functions, but won't for C++. I've been trying to see what I've done differently to the GLUT version, but everything look just about the same (except for me checking if it requires texture coordinates, which it doesn't even hit).

    Here's my code, can someone PLEASE explain what I have done wrong?
    P.S. I am doing this after creating the OpenGL context and window (which SFML 2.0 does). And sorry if the indentation is wrong/messed up, Xcode is weird o:



    // Initalizing the VBO

    /////////////////////////////////////////////////////////
    /// Adds a Renderbale2DObject that the Renderer will render.
    /// \param renderableObject The Renderable2DObject you wish to add.
    /////////////////////////////////////////////////////////
    void OGLRenderer::addObjectToRenderList(const Renderable2D& renderableObject)
    {
    // Create a VBO for a specific object.
    GLuint sizeOfBuffer = 0; // The size of the buffer.
    GLuint nameOfBuffer = 0; // The name of the buffer

    // if it is a Renderable2DRect
    if(typeid(renderableObject) == typeid(Renderable2DRect*))
    {
    sizeOfBuffer = 4 * sizeof(OGLVertex);
    }

    glGenBuffers(sizeOfBuffer, &nameOfBuffer); // Generate the buffer

    glBindBuffer(GL_ARRAY_BUFFER, nameOfBuffer); // Bind to the buffer
    // Tell OpenGL how we're going to manage data, but not upload anything at this time.
    glBufferData(GL_ARRAY_BUFFER,
    sizeOfBuffer, // The size of the buffer
    0, // The actual data
    GL_STREAM_DRAW
    );

    renderList.insert(RenderListPair(nameOfBuffer, &renderableObject)); // Add it to the Rendering List.
    }

    // Updating the VBO
    /////////////////////////////////////////////////////////
    /// Updates the VBO information for a Renderable2D object.
    /// \param rect The Renderable2DRect you wish to update information for.
    /////////////////////////////////////////////////////////
    void OGLRenderer::updateVboInformationForRect(const Renderable2DRect& rect)
    {
    std::vector<OGLVertex> vertices; // The vertices of the object

    // If the Renderable2D object is actually a Renderable2DRect.
    const Colour& colour = rect.getTintColour();
    // The size of the Final Rectangle. If it's using cropping, get the cropping dimensions. Else if it's not get the size of the Rectangle
    const DimensionFloat& size = rect.doesUseCropping() ? rect.getSubRectDimensions() : rect.getSize();

    vertices.resize(4); // Resize the Vertices's to 4 points

    // Now setup the data

    // Set the vertices's up
    vertices[0].position.x = 0;
    vertices[0].position.y = 0;

    vertices[1].position.x = size.width;
    vertices[1].position.y = 0;

    vertices[2].position.x = size.width;
    vertices[2].position.y = size.height;

    vertices[3].position.x = 0;
    vertices[3].position.y = size.height;


    // If Texture2D mapping is enabled and there is a texture
    if(isRenderingOptionEnabled(RenderingOptions::Texture2DMapping) &&
    rect.getTexture() != 0)
    {
    // Set the Texture Coords up.
    Rect2DFloat textureCoords = calcTextCoords(rect);

    vertices[0].textureCoords.x = textureCoords.position.x;
    vertices[0].textureCoords.y = textureCoords.position.y;

    vertices[1].textureCoords.x = textureCoords.size.width;
    vertices[1].textureCoords.y = textureCoords.position.y;

    vertices[2].textureCoords.x = textureCoords.size.width;
    vertices[2].textureCoords.y = textureCoords.size.height;

    vertices[3].textureCoords.x = textureCoords.position.x;
    vertices[3].textureCoords.y = textureCoords.size.height;
    }

    // Set the Colour of the Object
    for(int i = 0; i < 4; ++i)
    {
    vertices.colour = colour;
    }

    // Now send this information to OpenGL
    glBufferSubData(GL_ARRAY_BUFFER, 0, 4 * sizeof(OGLVertex), &vertices[0]);
    }


    // Rendering the VBO
    /////////////////////////////////////////////////////////
    /// Renders a Renderable2DRect.
    /// \param rect The Renderable2DRect you wish to render.
    /////////////////////////////////////////////////////////
    void OGLRenderer::renderRect(const Renderable2DRect& rect)
    {
    // Update the information
    updateVboInformationForRect(rect);

    bool isTexturingEnabled = isRenderingOptionEnabled(RenderingOptions::Texture2DMapping); // Tells whether Texutring is enabled.
    bool shouldUseTextures = isTexturingEnabled && (rect.getTexture() != 0); // If the renderer should use texturing

    // Enable the Client States (for VBO's), getting ready to draw
    glEnableClientState(GL_VERTEX_ARRAY); // Enable Vertex Arrays for the VBO's
    glVertexPointer(3, GL_FLOAT, sizeof(OGLVertex), BUFFER_OFFSET(0));

    glEnableClientState(GL_COLOR_ARRAY); // Enable Colours Arrays for the VBO's
    glColorPointer(3, GL_UNSIGNED_BYTE, sizeof(OGLVertex), BUFFER_OFFSET(sizeof(Vector3<GLfloat>)));

    if(shouldUseTextures)
    {
    // Enable Texturing
    glEnableClientState(GL_TEXTURE_2D_ARRAY_EXT);
    glTexCoordPointer(2, GL_FLOAT, sizeof(OGLVertex), BUFFER_OFFSET(sizeof(Vector3<GLfloat>) + sizeof(Colour)));

    // Bind to the actual texture
    glBindTexture(GL_TEXTURE_2D, rect.getTexture()->getID());
    }

    glDrawArrays(GL_QUADS, 0, 4); // Draw the Renderable2DRect.

    // Now disable them
    glDisable(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);

    // If texturing is enabled
    if(shouldUseTextures)
    {
    // That means it was enabled, so disable it
    glDisableClientState(GL_TEXTURE_2D_ARRAY_EXT);
    }
    }


    I'm sorry if it isn't indented properly o.O, oh and here's that OGLVertex.


    ///////////////////////////////////////////////////////////////////////////
    /// \struct OGLVertex
    /// \brief A Vertex data structure.
    ///
    /// A Vertex Data structure that holds everything that is required for one Vertex,
    /// this is mainly used to upload data to the VRAM, using OpenGL.
    ///
    /// \author Miguel Martin.
    ///////////////////////////////////////////////////////////////////////////
    struct OGLVertex
    {
    public:
    /////////////////////////////////////////////////////////
    /// The Position of the OGLVertex.
    /////////////////////////////////////////////////////////
    Vector3<GLfloat> position;
    /////////////////////////////////////////////////////////
    /// The Colour of the OGLVertex.
    /////////////////////////////////////////////////////////
    Colour colour;
    /////////////////////////////////////////////////////////
    /// The Texture Coordinates of the OGLVertex.
    /////////////////////////////////////////////////////////
    Vector2<GLfloat> textureCoords;

    /////////////////////////////////////////////////////////
    /// Extra padding to round off the size of the Vertex to 64 bytes.
    /////////////////////////////////////////////////////////
    GLfloat padding[2];
    };

  6. Well, I'm not sure if this will work or not, but assuming that you're using alpha values for the Water (or maybe you need alpha values for the reflection?), you should be enabling blending. Usually you use it like so, you can however use it many different ways though:


    glEnable(GL_BLEND); // Enables blending for OpenGL


    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // Set the Blending function


    There's a lot more OpenGL blending functions, but this might do the job, ask Google if you don't like it. smile.png

    FAQ on Transperancy:
    http://www.opengl.or...ransparency.htm

    Sorry if you already know this, it just doesn't seem like your enabling blending o:

  7. So currently I'm writing a Rendering Engine with OpenGL and possibly some other API's in the future (very far away ;)). Anyhow I have encountered a problem, I for some reason cannot seem to figure out how I should render my scene multiple times into different viewports. The first thing I did was create one display list and just encapsulate every function that was called prior to it into that display list.

    For example:

    // Render everything
    renderer->beginDrawing(); // Creates a display list.
    renderer->renderRectangle(); // Draws a rectangle (stores it in the display list), with transformations
    renderer->finishDrawing(); // Ends the display list and renders everything to the screen.


    Now I tried it and it seemed to work at first, but then I tried adding more objects in my scene and the transformations of the objects were stuffing up. I.E. If I were to rotate one object everything else would rotate, for some reason whenever I tried loading the identity matrix for every object in the scene, but it just wouldn't do reset the matrix for every object? I wasn't sure if a display list saved calls to glLoadIdentity or glTranslatef and etc., so I scratched that plan.

    Then I decided adding display lists for every object on the fly, then loop through all the objects and render them (calling the display list). Now it works and all, but I'd imagine if I had a huge scene that it would cause a lot of overhead or something. I was thinking of using VBO's instead of display lists, from what I hear they're pretty light weight and not deprecated? I would still imagine some overhead, if I was making VBO's on the fly, every frame. Should I do this, or try to "add" objects to some sort of list and then render that list (full of objects) all at once with one function call.

    For example:

    // Outside of the game-loop (initialization code)
    renderer->addObjectToRender(rect); // Add a rectangle to render.


    // Inside the game-loop
    renderer->renderScene(); // Renders the ENTIRE scene all at ONCE.


    Now I would think that this wouldn't cause that much over-head as it only allocates memory for a VBO/Display List once (for every object) during the entire program (or at least scene). The only problem I think I have is, I don't think it would be as "dynamic" of some sort, perhaps? I'm not sure, that's why I'm asking.

    Any help would be appreciated, I have no idea how expensive it is to create a VBO/Display List on every frame, that's why I was thinking to add objects to render and then just render the entire scene.

    Many thanks for reading this smile.png.

  8. Okay I'm am REALLY confused with the Projection matrix and how I set it up, I know how to use it but I just don't understand how you can tell what the coordinate system of your scene are. OK so lets say I setup the Viewing Frustum like so:


    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    gluPerspective(60, 640/480, 1, 100);


    What Point is the top left of the screen? How do I tell what the maximum value of the frustrum for the top, right, etc.? It's really giving me a head ache. I don't see how to effectively use gluPerspective.

    For example: How do I know whether to use really small numbers or really big ones when creating primtives in my scene?

    Oh and, seeing as the Viewing Frustrum can move around and such, is the OpenGL entire coordinate system, is there a maximum value you can place on object on the y-axis (or any other axis) e.g. MAX_DOUBLE or something? S:

    Is there a way to set up my coordinate system to a Cube with the size of 1000 units. (NOT the viewing frustrum just the coordinate system, or something). Also, is there a way to have a function somewhat similar to gluPerspective and glFrustrum so I can set the length of the sides (i.e. width/height of the frustum) or something (I doubt this is possible, but worth asking).

    Sorry that this may seem like a really nooby question but I am EXTREMELY confused (and sorry if some parts made abosolutely no sense at all).


  9. sfml trumps sdl , reasons : sfml is constantly being updated/improved , has full documentation + examples, object oriented, you can suggest a certain functionality you want and if the devs like it they will add it in. + alot more cant think of them right now.



    Umm.. I'm pretty sure SDL is also being updated and improved as we speak they are implementing SDL 1.3 which will come with hardware rendering. SFML does come with Object Oriented design, but it is also kindof annoying because I would much prefer to use my OWN design and work with that instead of use someone elses. But there is a CSFML I know, and SDL has full documentation and there's tonnes of tutorials and books on SDL too, so I dont know why you're going on about that.

    The only thing that SFML does do that SDL doesn't at the moment is that SFML uses OpenGL out of the box with it's class design and such, but SDL has the ability to work with OpenGL just as well as SFML. Oh and btw the only platforms that SFML support are Windows, Linux and Mac OS (Mac very so slightly). But SDL supports: Linux, Windows, Windows CE, BeOS, MacOS, Mac OS X, FreeBSD, NetBSD, OpenBSD, BSD/OS, Solaris, IRIX, and QNX. The code contains support for AmigaOS, Dreamcast, Atari, AIX, OSF/Tru64, RISC OS, SymbianOS, and OS/2, but these are not officially supported.

  10. Okay so I've just tried making some menus with my game loop but I got the buttons to work just fine before but now, it doesn't show the image when I click the button, it seems to be loading quite fine. Is there anything wrong with the game loop, or is there any easier way to make menus in SDL?
    My main.cpp
    #include "miguelsengine.h"
    int main(int argc, char *args[])
    {
    APP main_game;
    //load the settings
    main_game.load_file("settings/window_info.txt");
    //system set up...
    main_game.Setup();
    return main_game.main_loop();
    }

    and the actual APP class, which is derived from a system class...

    the .h for APP
    #pragma once

    #include "System.h"
    #include "Timer.h"
    class APP : public System{
    Char pone;
    Tile BG;
    Timer fps;
    Button bPlay;
    Button bExit;
    int menuState;
    public:
    APP();

    //inits everything (loads)
    bool Init();
    //single frame for menu..
    int menu();
    //single frame for game..
    int game();
    //clears screen
    bool clear_screen();
    //exits the game (clears everything up, sets Done to true)
    void exit();
    //handles the input...
    void handle_menu_input();
    void handle_game_input();
    //the main loop...
    virtual int main_loop();
    };


    the .cpp for APP
    #include "APP.h"
    APP::APP(){
    menuState = 0;
    }
    bool APP::Init(){
    //player set up...
    if(!pone.setup("images/sprites/AWESOME.png", 32, 48, false, true)){
    fprintf(stderr,"ERROR: cannot load image for player\n");
    return false;
    }
    //set poisition and collision for player...
    pone.load_info("settings/pone_settings.txt");
    //pone.set_collision(0,0,32,48);

    // background setup...
    if(!BG.setup("images/sprites/grassandsticks.png", 32, 32, false, true)){
    fprintf(stderr,"ERROR: cannot load background image\n");
    return false;
    }
    bPlay.load_info("settings/play_button_info.txt");
    bExit.load_info("settings/exit_button_info.txt");
    fps.start();
    return true;
    }

    int APP::menu(){
    if(!bPlay.Draw(screen)){
    return 3;
    }
    if(!bExit.Draw(screen)){
    return 4;
    }
    return 0;
    }
    int APP::game(){
    pone.move();
    if(!pone.Draw(screen, pone.get_frameno())){
    return 3;
    }
    return 0;
    }

    bool APP::clear_screen(){
    SDL_Rect temp;
    temp.x = 0;
    temp.y = 0;
    temp.w = get_screen_w();
    temp.h = get_screen_h();
    if(SDL_FillRect(screen,&temp,SDL_MapRGB(screen->format,0,0,0)) == -1){
    return false;
    }
    return true;
    }
    //exit function
    void APP::exit(){
    pone.destroy();
    BG.destroy();
    bPlay.destroy();
    bExit.destroy();
    }

    //handle input functions
    void APP::handle_game_input(){
    pone.handle_input(&in_event);
    }
    void APP::handle_menu_input(){
    if(in_event.type == SDL_MOUSEMOTION){
    bPlay.forceUpdateButtonIMG(&in_event);
    bExit.forceUpdateButtonIMG(&in_event);
    }
    }

    //main loop
    int APP::main_loop(){
    if(!Init()){
    //first error that could occur...
    return 1;
    }
    int error = 0;
    for(int frame = 0; GetDone() != true; frame++){
    //poll event...
    while(SDL_PollEvent(&in_event)){
    handle_input(&in_event);
    if(menuState == 0){
    handle_menu_input();
    }else if(menuState == 1){
    handle_game_input();
    }
    }
    if(menuState == 0){
    if(error = menu() != 0){
    break;
    }
    }else if(menuState == 1){
    if(error = game() != 0){
    break;
    }
    }else{
    break;
    }
    RenderScreen();
    //limit the frames per second its drawn...
    if(fps.get_ticks() < 1000 / FRAMES_PER_SECOND){
    SDL_Delay( (1000 / FRAMES_PER_SECOND) - fps.get_ticks() );
    }
    //reset the frame number...
    if(frame >= FRAMES_PER_SECOND) frame = 0;
    frame++;
    }

    exit();
    return 0;
    }


    and also the .h for the system class:
    #pragma once
    #ifndef SYSTEM_H
    #define SYSTEM_H

    #include "sdlstuff.h"
    #include "input.h"
    #include "IMG.h"
    #include "Crop.h"
    #include "Tile.h"
    #include "Physics.h"
    #include "anim.h"
    #include "Character.h"
    #include "Button.h"
    #include <fstream>
    #include <string>

    class System : public Input
    {
    protected:
    int screen_w, screen_h, screen_bpp;
    bool fullscreen, Done;
    SDL_Event in_event;
    SDL_Surface* screen;
    std::string window_text;
    public:
    //default constructor
    System(void);
    System(const System &Obj);
    //custom constructor from a file...
    System(std::string filename);
    //custom constructor
    System(int w, int h, int bpp, bool fs = false);
    //virtual default destructor
    virtual ~System(void);

    //quits the application by calling SDL_Quit() and other things
    void quit();
    //sets up the screen and etc...
    bool Setup();
    bool Setup(char* titlebar, bool fs = false);
    //sets up the screen depending if its fullscreen or not...
    bool changescreen();

    //draws the screen to the window
    bool RenderScreen();

    //virtual functions...
    //handles input
    virtual void handle_input(SDL_Event *event);
    virtual int main_loop(){
    //pure virtual
    return 0;
    }
    //getters
    int get_screen_w();
    int get_screen_h();
    int get_screen_bpp();
    bool GetDone();
    std::string get_window_text();
    //setters
    void set_screen_w(int w);
    void set_screen_h(int h);
    void set_screen_bpp(int bpp);
    void set_Done(bool B = true);
    void set_fullscreen(bool f = true);
    //window_text functions
    void set_window_text(std::string text);
    void update_window_title();
    //load function
    bool load_file(std::string filename);
    };
    #endif
  • Advertisement
×

Important Information

By using GameDev.net, you agree to our community Guidelines, Terms of Use, and Privacy Policy.

GameDev.net is your game development community. Create an account for your GameDev Portfolio and participate in the largest developer community in the games industry.

Sign me up!