OpenGL FreeType troubles...

MiguelMartin    100
I am having a really harsh time trying to implement font rendering in my engine... I'm mainly getting pissed off with FreeType; I just can't seem to understand it 100%. I'm loading the font with FreeType, looping through all the glyphs in the font, saving them into one single big texture in ASCII order, and uploading that to OpenGL, but it's not working out so well.

I would really appreciate it if someone looked at my code and explained what I'm doing wrong. At the moment I can't load a TrueType font, and I doubt I'm rendering it correctly either, but I'm not sure as I haven't tested that part...

Here is how I am loading fonts; the maximum size of a font defaults to 20, which is what gets saved into the texture:
[code]void OGLGraphicalAssetLoader::loadFontFromFile(const std::string& filepath, Font* output) const
{
    FT_Library library; // a FreeType library object
    FT_Face face;       // this holds the TrueType font
    FT_Error error;     // holds any errors that could occur

    error = FT_Init_FreeType(&library); // initialize the FreeType library
    if(error)
    {
        throw AnaxException("FreeType 2 Library could not be initialized", -2);
    }

    // Load the TrueType font into memory
    error = FT_New_Face(library, filepath.c_str(), 0, &face);
    if(error)
    {
        throw AnaxException("Could not load TrueType Font: " + filepath, -2);
    }

    FT_Set_Char_Size(face, output->getMaxSize() * 64, output->getMaxSize() * 64, 96, 96); // set the size of the font

    // Create a blank texture (image)
    Image tempImage;
    tempImage.create(face->glyph->bitmap.width, face->glyph->bitmap.rows);

    Rect2DFloat textureCoords;   // holds temporary texture coordinates
    Uint32 drawX = 0, drawY = 0; // the x and y coordinates the glyph will be drawn to in the texture

    // Loop through the glyphs, putting them in the texture (image)
    for(int i = 0; i < 256; ++i)
    {
        Uint32 index = FT_Get_Char_Index(face, (char)i);
        error = FT_Load_Glyph(face, index, FT_LOAD_DEFAULT);
        if(error)
            continue; // just ignore it... (should throw an exception or something along those lines)
        error = FT_Render_Glyph(face->glyph, FT_RENDER_MODE_NORMAL);
        if(error)
            continue; // just ignore it...

        // Place texture coordinates
        textureCoords.position.x = drawX + face->glyph->bitmap_left;
        textureCoords.position.y = drawY - face->glyph->bitmap_top;
        textureCoords.size.width = face->glyph->bitmap.width;
        textureCoords.size.height = face->glyph->bitmap.rows;
        setFontTextureCoordinateValueForGlyth(output, i, textureCoords); // set the texture coordinates

        // Render into the image
        BlitGlypth(face->glyph, &tempImage, textureCoords.position.x, textureCoords.position.y);

        // Increment drawing position
        drawX += face->glyph->advance.x >> 6;
        drawY += face->glyph->advance.y >> 6;
    }

    // Upload the texture to OpenGL
    Texture2D tempTexture;
    loadTextureFromImage(tempImage, &tempTexture);

    // Set the ID of the font
    setFontIdNumber(output, tempTexture.getID());
}[/code]


I'm not quite sure I'm packing each character into my texture correctly, and I don't know how to calculate the size the texture needs to be. When I call tempImage.create() it gets passed 0, 0 for the dimensions of the image... Is that because there is no current glyph selected, or..? How do I calculate what the texture size should be?

Here is how I am drawing the fonts, using a rectangle to draw them:
[CODE]
void OGLRenderer::renderText(const Renderable2DText& text)
{
    const std::string& theCharactersToRender = text.getText();
    Renderable2DRect& rect = getRectFromText(text);

    // Loop through all the characters
    for(int i = 0; i < theCharactersToRender.length(); ++i)
    {
        const Rect2DFloat& subRect = text.getFont()->getGlypth(i);
        rect.setSubRect(subRect);

        // Render the rect
        renderRect(rect);
        rect.move(subRect.position.x, subRect.position.y);
    }
}
[/CODE]

If you need any more detail on how I am implementing this, please say so :)

Murdocki    285
Ahh I remember having this problem myself. It had something to do with FT_Set_Char_Size and the glyph methods (FT 2.4.4). I couldn't figure out what it was, so I switched from those to FT_Set_Pixel_Sizes and FT_Load_Char instead. Here's what I've got, so you can take a look at how that works:

[code]FontLoader::FontLoader() :
    logger( Logger::Instance() )
{
    FT_Error error = FT_Init_FreeType( &library );
    if ( error )
        logger.Print( LOGGERDEPTH_CONSTRUCTOR, LOGGERSCOPE_NOCHANGE, "cn", "Font loader could not be initialized." );
    logger.Print( LOGGERDEPTH_CONSTRUCTOR, LOGGERSCOPE_NOCHANGE, "cn", "ResourceLoaderFont created." );
}

FontLoader::~FontLoader()
{
    logger.Print( LOGGERDEPTH_DESTRUCTOR, LOGGERSCOPE_NOCHANGE, "cn", "ResourceLoaderFont destroyed." );
}

EEResource::Font* FontLoader::LoadFont( RenderResourceCreator& renderResourceCreator, const std::string& fontFile, uint8 fontSize )
{
    FT_Error error;
    FT_Face face;
    if( error = (FT_New_Face( library, fontFile.c_str(), 0, &face ) ) )
    {
        if ( error == FT_Err_Unknown_File_Format )
            logger.Print( LOGGERDEPTH_WARNING, LOGGERSCOPE_NOCHANGE, "mcscn", __FUNCTION__, ": Format of font \"", fontFile, "\" is unsupported." );
        else
            logger.Print( LOGGERDEPTH_WARNING, LOGGERSCOPE_NOCHANGE, "mcscn", __FUNCTION__, ": Font \"", fontFile, "\" could not be opened or read." );
        return NULL;
    }

    //error = FT_Set_Char_Size(
    //    face,   /* handle to face object           */
    //    0,      /* char_width in 1/64th of points  */
    //    16*64,  /* char_height in 1/64th of points */
    //    300,    /* horizontal device resolution    */
    //    300 );  /* vertical device resolution      */
    FT_Set_Pixel_Sizes( face, 0, fontSize );

    int textureWidth = 0;
    int textureHeight = 0;
    EEResource::Font* newFont = new EEResource::Font( fontFile, fontSize );
    for( uint16 charIndex = 0; charIndex < 256; ++charIndex )
    {
        error = FT_Load_Char( face, charIndex, FT_LOAD_RENDER );
        Vector2f advance( float( face->glyph->advance.x >> 6 ), float( face->glyph->advance.y >> 6 ) );
        uint8 width = face->glyph->bitmap.width;
        uint8 height = face->glyph->bitmap.rows;
        uint8* data = new uint8[ width * height ];
        // Rendered data is upside down so we need to revert that.
        for( uint8 row = 0; row < height; ++row )
            memcpy( data + width * row, face->glyph->bitmap.buffer + width * (height - row - 1), sizeof( uint8 ) * width );
        newFont->SetCharacter( (uint8)charIndex, advance, width, height, face->glyph->bitmap_left, face->glyph->bitmap_top, data );
    }

    if( !renderResourceCreator.LoadFont( *newFont ) )
    {
        logger.Print( LOGGERDEPTH_WARNING, LOGGERSCOPE_NOCHANGE, "mcscn", __FUNCTION__, ": Font \"", fontFile, "\" could not be loaded by the renderer." );
        delete newFont;
        return NULL;
    }
    else
        return newFont;
}[/code]

On a side note: I don't think you have to init the library for every font you're loading.

MiguelMartin    100
Couple of questions:
1. Why are you passing the height instead of the width in this function:

[code]FT_Set_Pixel_Sizes( face, 0, fontSize );[/code]

i.e. why aren't you calling it like this?

[code]FT_Set_Pixel_Sizes( face, fontSize, 0 );[/code]

2. What are you doing with textureWidth and textureHeight?

[code]int textureWidth = 0;
int textureHeight = 0;[/code]

Murdocki    285
1: it doesn't really matter; this is some code from inside FT_Set_Pixel_Sizes in the version I'm using:
[code]
    if ( pixel_width == 0 )
        pixel_width = pixel_height;
    else if ( pixel_height == 0 )
        pixel_height = pixel_width;
[/code]
so FT_Set_Pixel_Sizes( face, 0, fontSize ); is the same as FT_Set_Pixel_Sizes( face, fontSize, 0 ); and FT_Set_Pixel_Sizes( face, fontSize, fontSize );

2: nothing at all, haha. I noticed them sitting around there as well, but I was too lazy to remove them :P. When I started creating the font loader I tried to put all the characters in as small an image as possible, but I couldn't really be bothered to finish that because GPU memory isn't an issue right now. So I guess it's just a remnant of that.

Murdocki    285
[quote name='Murdocki' timestamp='1330257708' post='4916706']
FT_Set_Pixel_Sizes( face, 0, fontSize ); is the same as FT_Set_Pixel_Sizes( face, fontSize, 0 ); and FT_Set_Pixel_Sizes( face, fontSize, fontSize );
[/quote]

Don't you just looooove auto format.....

MiguelMartin    100
Okay, I'm still having trouble; I have a question and I'm not sure what I'm doing wrong o.O...

1. So FT_Set_Pixel_Sizes sets the number of pixels a glyph in the font should use for its width/height, correct? Which means I can't have decimal sizes, right?

Here's my code; it doesn't complain about loading or rendering, but I must be stuffing up somewhere.

Loading the Font:
[CODE]
void OGLGraphicalAssetLoader::loadFontFromFile(const std::string& filepath, Font* output) const
{
    FT_Library library; // a FreeType library object
    FT_Face face;       // this holds the TrueType font
    FT_Error error;     // holds any errors that could occur

    error = FT_Init_FreeType(&library); // initialize the FreeType library
    if(error)
    {
        throw AnaxException("FreeType 2 Library could not be initialized", -2);
    }

    // Load the TrueType font into memory
    error = FT_New_Face(library, filepath.c_str(), 0, &face);
    if(error)
    {
        throw AnaxException("Could not load TrueType Font: " + filepath, -2);
    }

    float maxSize = output->getMaxSize();
    float width = maxSize * 64;

    // Set the size of the font
    FT_Set_Pixel_Sizes(face, 0, maxSize);

    // Create a blank texture (image)
    Image tempImage;
    tempImage.create(maxSize * 256, maxSize * 2);

    Rect2DFloat textureCoords;      // holds temporary texture coordinates
    int drawX = 0, drawY = maxSize; // the x and y coordinates the glyph will be drawn to in the texture

    FT_GlyphSlot slot = face->glyph; // the glyph slot for the face

    // Loop through the glyphs, putting them in the texture (image)
    for(int i = 0; i < 256; ++i)
    {
        Uint32 index = FT_Get_Char_Index(face, i);

        error = FT_Load_Char(face, index, FT_LOAD_RENDER);
        if(error)
            continue; // just ignore it...

        // Place texture coordinates
        textureCoords.position.x = drawX + slot->bitmap_left;
        textureCoords.position.y = drawY - slot->bitmap_top;
        textureCoords.size.width = slot->bitmap.width;
        textureCoords.size.height = slot->bitmap.rows;

        setFontTextureCoordinateValueForGlyth(output, i, textureCoords); // set the texture coordinates

        // Render into the image
        BlitGlypth(face->glyph, &tempImage, textureCoords.position.x, textureCoords.position.y);

        // Increment drawing position
        drawX += face->glyph->advance.x >> 6;
    }

    // Upload the texture to OpenGL
    Texture2D tempTexture;
    loadTextureFromImage(tempImage, &tempTexture);

    // Set the texture of the font
    setFontTexture(output, tempTexture);
}
[/CODE]

Blitting the Glyph to the Image:
[CODE]
void BlitGlypth(const FT_GlyphSlot glypth, Image *output, Uint32 xForOutput, Uint32 yForOutput)
{
    Uint32 x, y;
    for(y = 0; y < glypth->bitmap.rows; ++y)
    {
        for(x = 0; x < glypth->bitmap.width; ++x)
        {
            Uint32 pixel = glypth->bitmap.buffer[(x * glypth->bitmap.width) + y]; // access the pixel
            output->setPixel(xForOutput + x, yForOutput + y, Colour(255, 255, 255, pixel)); // place it in the image
        }
    }
}
[/CODE]

Rendering the Font:
[CODE]
void OGLRenderer::renderText(const Renderable2DText& text)
{
    const std::string& theCharactersToRender = text.getText();
    Renderable2DRect& rect = getRectFromText(text);

    float drawX = 0;
    float drawY = 0;

    char currentChar = 0;

    // Loop through all the characters
    for(int i = 0; i < theCharactersToRender.length(); ++i)
    {
        // Update the current character
        currentChar = theCharactersToRender[i];

        if(currentChar == '\n')
        {
            //drawX = 0;
            //drawY += subRect.size.height;
        }
        else if(currentChar == ' ')
        {
            //drawX += text.getLineSpacing();
        }
        else
        {
            const Rect2DFloat& subRect = text.getFont()->getGlypth(currentChar);
            rect.setSubRect(subRect);

            // Render the rect
            renderRect(rect);

            drawX += subRect.size.width + 1;
        }

        rect.move(drawX, drawY);
    }
}
[/CODE]

I've noticed that my font is rendered sideways or something, and it doesn't seem to be in the correct order either. What I mean is: for example, the '+' sign (43 in ASCII) is not 43 characters from the NULL character (0). I tried rendering the '+' character (adding 28 to it to make it render as '+'), but all I get when I render it (zoomed in, with a rectangle next to it) is this. How do I fix it?
[img]http://i1181.photobucket.com/albums/x436/miguelishawt/plussign.png[/img]


Here is the image that was generated from the .ttf (the alpha channel might not show in the .png, I'm not sure). I don't know how to work out the exact image width, so it has excess pixels:
[img]http://i1181.photobucket.com/albums/x436/miguelishawt/temp0.png[/img]

Many thanks in advance.

Murdocki    285
You're still using FT_Get_Char_Index; not sure what it does, but it gave weird results for me as well, and it works perfectly fine without it. Your texture coordinate calculation also looks weird, but I haven't really looked at it because the output texture seems bollocks to begin with. Could you verify it doesn't look this bad on your end?

