Archived

This topic is now archived and is closed to further replies.

griffenjam

OpenGL and access violations... a sad story

Recommended Posts

griffenjam    193
I'm trying to use glDrawPixels() and am not having ANY luck. I'm passing it an unsigned char *, but when the function runs I get an ACCESS VIOLATION!!! When I debug it, the violation appears to be inside one of my video card's driver files (nVidia GeForce 2). This is all happening inside a class. I was also getting access violations when trying to set a value in the array; that is fixed now (I don't really know how).

Is this correct?

unsigned char *bitmap;
bitmap = new unsigned char[imagesize];
// load the bitmap
// ...
glDrawPixels(..., bitmap);

If so, why am I getting access violations? Also, why would merely setting a value in bitmap (bitmap[0] = 'a') cause an access violation?

Jason Mickela
ICQ: 873518
E-Mail: jmickela@pacbell.net
------------------------------
"Evil attacks from all sides but the greatest evil attacks from within." - Me
------------------------------
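Edit: here is a trimmed, self-contained sketch of roughly what I'm doing (the width/height values here are stand-ins for what my loader reads from the header, and GL_RGB is just an example format):

// Trimmed sketch of what I'm doing; the dimensions are stand-ins.
int imagewidth  = 64;
int imageheight = 64;
int imagesize   = imagewidth * imageheight;   // in my real code this comes from the file header

unsigned char *bitmap = new unsigned char[imagesize];

// ... load the bitmap data into 'bitmap' ...

bitmap[0] = 'a';   // even this was giving an access violation before

glDrawPixels(imagewidth, imageheight, GL_RGB, GL_UNSIGNED_BYTE, bitmap);   // access violation here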

Zerosignull    122
#define HEIGHT 64
#define WIDTH  64

GLubyte *bitmap;

bitmap = new GLubyte[HEIGHT * WIDTH];
// load the bitmap
// ...

glDrawPixels(WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, bitmap);


Make sure that the type, i.e. GL_UNSIGNED_BYTE, is the same type as the data you are using. The above code should work. Please post a reply whether it does or doesn't.

~prevail by daring to fail~

Zerosignull    122
Whoops, I forgot to add the third dimension (the colour components) to the bitmap array:

#define HEIGHT 64
#define WIDTH  64

GLubyte *bitmap;

bitmap = new GLubyte[HEIGHT * WIDTH * 3];   // 3 bytes per pixel for RGB
// load the bitmap
// ...

glDrawPixels(WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, bitmap);


The following on its own won't do the right thing. You have to know the actual size of each component: a 24bpp bitmap has 8 bits (a byte) per colour, so the components have to be of type GLubyte, i.e. unsigned char (don't use plain char; all the data must be unsigned!). Also, if you load the entire file into the array you will get the file header in with the pixel data, and you don't want that. If you don't know the file structure of a BMP, you'll have to go look it up so you can extract the bit depth and the data offset from the beginning of the file.

fread(bitmap, sizeof(unsigned char), datasize, fp);
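Something like this is what I mean; just a rough sketch, assuming a standard uncompressed 24bpp BMP (the header offsets come from the BITMAPFILEHEADER/BITMAPINFOHEADER layout, the variable names are made up, and 4-byte row padding is ignored):

#include <cstddef>
#include <cstdio>
#include <cstdint>

// Rough sketch: pull the data offset, dimensions and bit depth out of a
// standard uncompressed BMP header, then read only the pixel data.
// Header offsets: 10 = bfOffBits, 18 = biWidth, 22 = biHeight, 28 = biBitCount.
std::FILE *fp = std::fopen("image.bmp", "rb");

std::uint32_t dataOffset = 0, width = 0, height = 0;
std::uint16_t bitCount = 0;

std::fseek(fp, 10, SEEK_SET); std::fread(&dataOffset, 4, 1, fp);
std::fseek(fp, 18, SEEK_SET); std::fread(&width,      4, 1, fp);
std::fseek(fp, 22, SEEK_SET); std::fread(&height,     4, 1, fp);
std::fseek(fp, 28, SEEK_SET); std::fread(&bitCount,   2, 1, fp);

std::size_t datasize = width * height * (bitCount / 8);   // pixel data only
unsigned char *bitmap = new unsigned char[datasize];

std::fseek(fp, dataOffset, SEEK_SET);                     // skip past the headers
std::fread(bitmap, sizeof(unsigned char), datasize, fp);
std::fclose(fp);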


~prevail by daring to fail~

zedzeek    528
>> bitmap = new unsigned char[imagesize]; <<

Are you taking the image's bit depth into account? For example (see the sketch below):
greyscale = imagesize * 1
RGB = imagesize * 3
RGBA = imagesize * 4
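In other words (just a quick sketch; the variable names here are made up):

// Sketch: buffer size = pixel count * bytes per pixel.
int width = 64, height = 64;        // whatever the header says
int channels = 3;                   // 1 = greyscale, 3 = RGB, 4 = RGBA
int imagesize = width * height;     // pixel count only

unsigned char *bitmap = new unsigned char[imagesize * channels];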

griffenjam    193
In my code I have already grabbed the file header; that is where I got the imagesize value from.
No, I have not taken the type of image into account; none of the sample code I have seen has either.
Nor would it matter for the question I am asking. It doesn't matter what I'm loading or if I'm loading it wrong; wrong data wouldn't give an access violation, it just wouldn't display right.
AFAIK the statement
bitmap[1] = temp; // where temp is of type unsigned char
shouldn't cause ANY problems.
I am testing to make sure I was able to allocate the memory.
Why am I getting access violations?
I can worry about the bitmap loading being incorrect when I can actually run my code; as it is I can't test anything.

zedzeek    528
You should get an access violation (or some error):

GLubyte data[2*2];   // only 4 bytes

glDrawPixels(2, 2, GL_RGB, GL_UNSIGNED_BYTE, data);   // bang, error

glDrawPixels wants 12 bytes, i.e. 2x2x3 (3 because it's RGB), but you've only given it 4.

Try this in your code:
bitmap = new unsigned char[imagesize*4];
instead of
bitmap = new unsigned char[imagesize];
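Putting it together, a minimal sketch where the allocation matches what glDrawPixels will actually read (this assumes RGB data, i.e. 3 bytes per pixel; use *4 and GL_RGBA if the data really has an alpha channel):

// Minimal sketch: allocation sized to match what glDrawPixels reads.
int width = 64, height = 64;                       // from the file header
unsigned char *bitmap = new unsigned char[width * height * 3];

// ... fill 'bitmap' with width*height*3 bytes of pixel data ...

// Precaution: the default unpack alignment is 4, so GL expects padded rows
// whenever width*3 is not a multiple of 4 -- another way to read past the
// end of the buffer.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

glDrawPixels(width, height, GL_RGB, GL_UNSIGNED_BYTE, bitmap);

delete[] bitmap;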

