MiguelMartin

OpenGL FreeType troubles...


I am having a really hard time trying to implement font rendering in my engine... I'm mainly struggling with FreeType; I just can't seem to understand it 100%. I'm loading a TrueType font with FreeType, looping through all the glyphs in the font, packing them into one single big texture in ASCII order, and uploading that to OpenGL, but it's not working out so well.

I would really appreciate it if someone looked at my code and explained what I am doing wrong. At the moment I can't load a TrueType font, and I doubt I'm rendering it correctly either, but I am not sure, as I have not tested that part yet...

Here is how I am loading fonts; the maximum size of a font defaults to 20, and every glyph gets saved into one texture:
[code]void OGLGraphicalAssetLoader::loadFontFromFile(const std::string& filepath, Font* output) const
{
    FT_Library library; // a FreeType Library object
    FT_Face face;       // This holds the TrueType Font.
    FT_Error error;     // Holds any errors that could occur.

    error = FT_Init_FreeType(&library); // Initialize the FreeType Library
    if(error)
    {
        throw AnaxException("FreeType 2 Library could not be initialized", -2);
    }

    // Load the TrueType Font into memory
    error = FT_New_Face(library, filepath.c_str(), 0, &face);
    if(error)
    {
        throw AnaxException("Could not load TrueType Font: " + filepath, -2);
    }

    FT_Set_Char_Size(face, output->getMaxSize() * 64, output->getMaxSize() * 64, 96, 96); // Set the size of the Font

    // Create a blank Texture (Image)
    Image tempImage;
    tempImage.create(face->glyph->bitmap.width, face->glyph->bitmap.rows);

    Rect2DFloat textureCoords;   // Holds temporary Texture Coordinates
    Uint32 drawX = 0, drawY = 0; // The x and y coordinates that the glyph will be drawn to in the Texture.

    // Loop through the glyphs, putting them in the Texture (Image)
    for(int i = 0; i < 256; ++i)
    {
        Uint32 index = FT_Get_Char_Index(face, (char)i);

        error = FT_Load_Glyph(face, index, FT_LOAD_DEFAULT);
        if(error)
            continue; // just ignore it... (should throw an exception or something along those lines)

        error = FT_Render_Glyph(face->glyph, FT_RENDER_MODE_NORMAL);
        if(error)
            continue; // just ignore it...

        // Place Texture Coordinates
        textureCoords.position.x = drawX + face->glyph->bitmap_left;
        textureCoords.position.y = drawY - face->glyph->bitmap_top;
        textureCoords.size.width = face->glyph->bitmap.width;
        textureCoords.size.height = face->glyph->bitmap.rows;

        setFontTextureCoordinateValueForGlyth(output, i, textureCoords); // Set the Texture Coordinates

        // Render into the Image
        BlitGlypth(face->glyph, &tempImage, textureCoords.position.x, textureCoords.position.y);

        // Increment drawing position
        drawX += face->glyph->advance.x >> 6;
        drawY += face->glyph->advance.y >> 6;
    }

    // Upload the Texture to OpenGL
    Texture2D tempTexture;
    loadTextureFromImage(tempImage, &tempTexture);

    // Set the ID of the Font
    setFontIdNumber(output, tempTexture.getID());
}[/code]


I'm not quite sure if I'm placing each character correctly in my texture, and I also do not know how to calculate the size the texture will need to be. When I call tempImage.create() it gets passed 0, 0 for the dimensions of the image... Is that because there is no current glyph loaded yet, or..? How do I calculate what the texture size should be?
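One common way to size the atlas is to make a measuring pass over the glyphs before blitting anything: load each glyph once, sum the advances for the width, and keep the tallest bitmap for the height. A rough sketch of that idea is below; the AtlasSize struct and measureAtlas name are made up for illustration, and the face is assumed to already have its size set:

[CODE]
#include <ft2build.h>
#include FT_FREETYPE_H
#include <algorithm>

// Illustrative helper: measures how big a single-row glyph atlas would need
// to be for the first 256 character codes of a face whose size is already set.
struct AtlasSize { unsigned width; unsigned height; };

AtlasSize measureAtlas(FT_Face face)
{
    AtlasSize size = { 0, 0 };
    for (int i = 0; i < 256; ++i)
    {
        // FT_Load_Char takes a character code and can render in one step.
        if (FT_Load_Char(face, i, FT_LOAD_RENDER) != 0)
            continue; // skip glyphs that fail to load

        size.width += (unsigned)(face->glyph->advance.x >> 6); // advance is 26.6 fixed point
        size.height = std::max(size.height, (unsigned)face->glyph->bitmap.rows);
    }
    return size;
}
[/CODE]

With the result of a pass like that, tempImage.create() could be given real dimensions instead of whatever happens to be in the (still empty) glyph slot.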

Here is how I am drawing the fonts, using a rectangle for each glyph:
[CODE]
void OGLRenderer::renderText(const Renderable2DText& text)
{
    const std::string& theCharactersToRender = text.getText();
    Renderable2DRect& rect = getRectFromText(text);

    // Loop through all the characters
    for(int i = 0; i < theCharactersToRender.length(); ++i)
    {
        const Rect2DFloat& subRect = text.getFont()->getGlypth(i);
        rect.setSubRect(subRect);

        // Render the Rect
        renderRect(rect);
        rect.move(subRect.position.x, subRect.position.y);
    }
}
[/CODE]

If you need any more detail on how I am implementing this, please say so :)

Ahh, I remember having this problem myself. It had something to do with FT_Set_Char_Size and the glyph methods (FreeType 2.4.4). I couldn't figure out what it was, so I switched from those to FT_Set_Pixel_Sizes and FT_Load_Char instead. Here's what I've got, so you can take a look at how that works:

[code]FontLoader::FontLoader() :
    logger( Logger::Instance() )
{
    FT_Error error = FT_Init_FreeType( &library );
    if ( error )
        logger.Print( LOGGERDEPTH_CONSTRUCTOR, LOGGERSCOPE_NOCHANGE, "cn", "Font loader could not be initialized." );
    logger.Print( LOGGERDEPTH_CONSTRUCTOR, LOGGERSCOPE_NOCHANGE, "cn", "ResourceLoaderFont created." );
}

FontLoader::~FontLoader()
{
    logger.Print( LOGGERDEPTH_DESTRUCTOR, LOGGERSCOPE_NOCHANGE, "cn", "ResourceLoaderFont destroyed." );
}

EEResource::Font* FontLoader::LoadFont( RenderResourceCreator& renderResourceCreator, const std::string& fontFile, uint8 fontSize )
{
    FT_Error error;
    FT_Face face;
    if( error = ( FT_New_Face( library, fontFile.c_str(), 0, &face ) ) )
    {
        if ( error == FT_Err_Unknown_File_Format )
            logger.Print( LOGGERDEPTH_WARNING, LOGGERSCOPE_NOCHANGE, "mcscn", __FUNCTION__, ": Format of font \"", fontFile, "\" is unsupported." );
        else
            logger.Print( LOGGERDEPTH_WARNING, LOGGERSCOPE_NOCHANGE, "mcscn", __FUNCTION__, ": Font \"", fontFile, "\" could not be opened or read." );
        return NULL;
    }

    //error = FT_Set_Char_Size(
    //    face,    /* handle to face object           */
    //    0,       /* char_width in 1/64th of points  */
    //    16*64,   /* char_height in 1/64th of points */
    //    300,     /* horizontal device resolution    */
    //    300 );   /* vertical device resolution      */
    FT_Set_Pixel_Sizes( face, 0, fontSize );

    int textureWidth = 0;
    int textureHeight = 0;

    EEResource::Font* newFont = new EEResource::Font( fontFile, fontSize );
    for( uint16 charIndex = 0; charIndex < 256; ++charIndex )
    {
        error = FT_Load_Char( face, charIndex, FT_LOAD_RENDER );

        Vector2f advance( float( face->glyph->advance.x >> 6 ), float( face->glyph->advance.y >> 6 ) );
        uint8 width = face->glyph->bitmap.width;
        uint8 height = face->glyph->bitmap.rows;
        uint8* data = new uint8[ width * height ];

        // Rendered data is upside down so we need to revert that.
        for( uint8 row = 0; row < height; ++row )
            memcpy( data + width * row, face->glyph->bitmap.buffer + width * (height - row - 1), sizeof( uint8 ) * width );

        newFont->SetCharacter( (uint8)charIndex, advance, width, height, face->glyph->bitmap_left, face->glyph->bitmap_top, data );
    }

    if( !renderResourceCreator.LoadFont( *newFont ) )
    {
        logger.Print( LOGGERDEPTH_WARNING, LOGGERSCOPE_NOCHANGE, "mcscn", __FUNCTION__, ": Font \"", fontFile, "\" could not be loaded by the renderer." );
        delete newFont;
        return NULL;
    }
    else
        return newFont;
}[/code]

On a side note: I don't think you have to init the library for every font you're loading.
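For illustration, keeping one FT_Library around for the lifetime of the loader is enough; something roughly like this (class and member names here are made up, not from the engine above, and error checking is omitted):

[code]
#include <ft2build.h>
#include FT_FREETYPE_H
#include <string>

// Illustrative only: one FT_Library, initialized once, shared by every font.
class SharedFontLoader
{
public:
    SharedFontLoader()  { FT_Init_FreeType(&m_library); } // initialize once
    ~SharedFontLoader() { FT_Done_FreeType(m_library); }  // also releases the library's child faces

    FT_Face openFace(const std::string& path)
    {
        FT_Face face = NULL;
        FT_New_Face(m_library, path.c_str(), 0, &face); // one face per font file, same library
        return face;
    }

private:
    FT_Library m_library; // shared by every font this loader opens
};
[/code]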

Couple of questions:
1. Why are you passing the height instead of the width in this function:

[code]FT_Set_Pixel_Sizes( face, 0, fontSize );[/code]

i.e., why aren't you calling it like this instead?

[code]FT_Set_Pixel_Sizes( face, fontSize, 0 );[/code]

2. What are you doing with textureWidth and textureHeight?

[code]int textureWidth = 0;
int textureHeight = 0;[/code]

1: It doesn't really matter; this is some of the code inside FT_Set_Pixel_Sizes for the version I'm using:
[code]
if ( pixel_width == 0 )
    pixel_width = pixel_height;
else if ( pixel_height == 0 )
    pixel_height = pixel_width;
[/code]
So FT_Set_Pixel_Sizes( face, 0, fontSize ); is the same as FT_Set_Pixel_Sizes( face, fontSize, 0 ); and FT_Set_Pixel_Sizes( face, fontSize, fontSize );

2: Nothing at all, haha. I noticed them sitting around there as well, but I was too lazy to remove them :P. When I started creating the font loader I tried to pack all the characters into as small an image as possible, but I couldn't really be bothered to finish that because GPU memory isn't an issue right now. So I guess it's just a remnant of that.

[quote name='Murdocki' timestamp='1330257708' post='4916706']
FT_Set_Pixel_Sizes( face, 0, fontSize ); is the same as FT_Set_Pixel_Sizes( face, fontSize, 0 ); and FT_Set_Pixel_Sizes( face, fontSize, fontSize );
[/quote]

Don't you just looooove auto-format.....

Okay, I'm still having trouble; I have a question, and I am not sure what I'm doing wrong...

1. So FT_Set_Pixel_Sizes sets the number of pixels a glyph in the font should use for its width/height, correct? Which means I can't have decimal sizes, right?
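For reference, FT_Set_Pixel_Sizes does take whole pixel values; fractional sizes would have to go through FT_Set_Char_Size, which works in 26.6 fixed-point point sizes plus a DPI. A quick sketch (96 DPI assumed):

[CODE]
// Pixel sizes are whole numbers:
FT_Set_Pixel_Sizes(face, 0, 20);

// FT_Set_Char_Size takes 26.6 fixed-point point sizes, so e.g. 20.5pt at 96 DPI:
FT_Set_Char_Size(face, 0, (FT_F26Dot6)(20.5 * 64), 96, 96);
[/CODE]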


Here's my code; it doesn't complain when loading or rendering, but I must be stuffing up somewhere.

Loading the Font:
[CODE]
void OGLGraphicalAssetLoader::loadFontFromFile(const std::string& filepath, Font* output) const
{
    FT_Library library; // a FreeType Library object
    FT_Face face;       // This holds the TrueType Font.

    FT_Error error;     // Holds any errors that could occur.

    error = FT_Init_FreeType(&library); // Initialize the FreeType Library
    if(error)
    {
        throw AnaxException("FreeType 2 Library could not be initialized", -2);
    }

    // Load the TrueType Font into memory
    error = FT_New_Face(library, filepath.c_str(), 0, &face);
    if(error)
    {
        throw AnaxException("Could not load TrueType Font: " + filepath, -2);
    }

    float maxSize = output->getMaxSize();
    float width = maxSize * 64;

    // Set the size of the Font
    FT_Set_Pixel_Sizes(face, 0, maxSize);

    // Create a blank Texture (Image)
    Image tempImage;
    tempImage.create(maxSize * 256, maxSize * 2);

    Rect2DFloat textureCoords;      // Holds temporary Texture Coordinates
    int drawX = 0, drawY = maxSize; // The x and y coordinates that the glyph will be drawn to in the Texture.

    FT_GlyphSlot slot = face->glyph; // The Glyph slot for the face

    // Loop through the glyphs, putting them in the Texture (Image)
    for(int i = 0; i < 256; ++i)
    {
        Uint32 index = FT_Get_Char_Index(face, i);

        error = FT_Load_Char(face, index, FT_LOAD_RENDER);
        if(error)
            continue; // just ignore it..

        // Place Texture Coordinates
        textureCoords.position.x = drawX + slot->bitmap_left;
        textureCoords.position.y = drawY - slot->bitmap_top;
        textureCoords.size.width = slot->bitmap.width;
        textureCoords.size.height = slot->bitmap.rows;

        setFontTextureCoordinateValueForGlyth(output, i, textureCoords); // Set the Texture Coordinates

        // Render into the Image
        BlitGlypth(face->glyph, &tempImage, textureCoords.position.x, textureCoords.position.y);

        // Increment drawing position
        drawX += face->glyph->advance.x >> 6;
    }

    // Upload the Texture to OpenGL
    Texture2D tempTexture;
    loadTextureFromImage(tempImage, &tempTexture);

    // Set the texture of the Font
    setFontTexture(output, tempTexture);
}
[/CODE]

Blitting the glyph to the image:
[CODE]
void BlitGlypth(const FT_GlyphSlot glypth, Image *output, Uint32 xForOutput, Uint32 yForOutput)
{
    Uint32 x, y;
    for(y = 0; y < glypth->bitmap.rows; ++y)
    {
        for(x = 0; x < glypth->bitmap.width; ++x)
        {
            Uint32 pixel = glypth->bitmap.buffer[(x * glypth->bitmap.width) + y]; // access the pixel
            output->setPixel(xForOutput + x, yForOutput + y, Colour(255, 255, 255, pixel)); // place it in the image
        }
    }
}
[/CODE]

Rendering the Font:
[CODE]
void OGLRenderer::renderText(const Renderable2DText& text)
{
    const std::string& theCharactersToRender = text.getText();
    Renderable2DRect& rect = getRectFromText(text);

    float drawX = 0;
    float drawY = 0;

    char currentChar = 0;

    // Loop through all the characters
    for(int i = 0; i < theCharactersToRender.length(); ++i)
    {
        // Update the current character
        currentChar = theCharactersToRender[i];

        if(currentChar == '\n')
        {
            //drawX = 0;
            //drawY += subRect.size.height;
        }
        else if(currentChar == ' ')
        {
            //drawX += text.getLineSpacing();
        }
        else
        {
            const Rect2DFloat& subRect = text.getFont()->getGlypth(currentChar);
            rect.setSubRect(subRect);

            // Render the Rect
            renderRect(rect);

            drawX += subRect.size.width + 1;
        }

        rect.move(drawX, drawY);
    }
}
[/CODE]

I've noticed that my font is rendered sideways, and the glyphs don't seem to be in the correct order either. What I mean is: for example, the '+' sign (43 in ASCII) is not 43 characters from the NULL character (0). I tried rendering the '+' character (and also tried adding 28 to it so that it would render the '+'), but all I get when I render it (zoomed in, with a rectangle next to it) is this. How do I fix it?
[img]http://i1181.photobucket.com/albums/x436/miguelishawt/plussign.png[/img]
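For comparison, FreeType's glyph bitmap is stored row by row, with bitmap.pitch bytes per row and (for a positive pitch) the first row at the top, so a blit that walks the buffer row-major would look roughly like the sketch below. The Image/Colour types are the same ones used above; the function name is just hypothetical:

[CODE]
void BlitGlypthRowMajor(const FT_GlyphSlot glyph, Image* output, Uint32 xForOutput, Uint32 yForOutput)
{
    for (Uint32 y = 0; y < glyph->bitmap.rows; ++y)
    {
        for (Uint32 x = 0; x < glyph->bitmap.width; ++x)
        {
            // Index rows with the pitch (bytes per row), not with width * x.
            Uint32 coverage = glyph->bitmap.buffer[y * glyph->bitmap.pitch + x];
            output->setPixel(xForOutput + x, yForOutput + y, Colour(255, 255, 255, coverage));
        }
    }
}
[/CODE]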


Here is the image that was generated from the .ttf (the alpha channel might not show in the .png, I'm not sure). I don't know how to work out the exact image width, so it ends up with excess pixels...:
[img]http://i1181.photobucket.com/albums/x436/miguelishawt/temp0.png[/img]

Many thanks in advance.

You're still using FT_Get_Char_Index; I'm not sure exactly what it does, but it gave weird results for me as well, and everything works perfectly fine without it. Your texture coordinate calculation also looks odd, but I haven't really looked at it because the output texture seems bollocks to begin with. Could you verify it doesn't look this bad on your end?
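(For reference: FT_Get_Char_Index maps a character code to a glyph index, and FT_Load_Char already does that lookup internally, so feeding it a value that has already been converted ends up loading the wrong glyph. Roughly, the two valid pairings are:)

[code]
// Either load by character code directly (FT_Load_Char does the index lookup itself)...
FT_Load_Char(face, charCode, FT_LOAD_RENDER);

// ...or do the lookup explicitly and load by glyph index:
FT_UInt glyphIndex = FT_Get_Char_Index(face, charCode);
FT_Load_Glyph(face, glyphIndex, FT_LOAD_RENDER);
[/code]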
