2 Problems: resizing the back buffer & IDirect3DDevice9 CreateTexture

1) I was playing around with the SetViewport function and noticed something odd. In my SetupView function, which is called whenever I resize the window (just like I do in OpenGL), I create a viewport with the parameters {0, 0, ClientWidth, ClientHeight}. But when I restore my window, which I created maximized, back to its 640x480 restored size, the viewport comes out smaller than the client area of the window.

After digging through the D3D docs for a while, I realized this is because the back buffer of my D3DDevice is still 1024x712, the client size of the maximized window. No big deal, I thought; if I wanted multiple viewports I'd just size them as fractions of the back buffer instead of the actual client area. Then I noticed that when I shrink the window, or worse, enlarge it past whatever the client area was at creation time (and thus the back buffer size), the whole scene gets scrunched or stretched to fit the new client area. Yikes. Not pretty. Especially if I create the window NOT maximized and then maximize it.

More digging in the docs turned up a fix: the Reset function of the IDirect3DDevice9 object. Whoops, another problem. If I do that, I have to re-create all my textures and other resources, which wouldn't be good if I had a lot of them. So what can I do about this? Is there any way to resize the back buffer OTHER than resetting?

2) I posted this in For Beginners a few days ago but never got an answer. I'm trying to make an image loader class that loads images manually and creates a texture for them, instead of using D3DXCreateTextureFromFile, because I want to keep information about the images like width, height, etc.
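For reference, the only documented way to change the back buffer size in D3D9 is IDirect3DDevice9::Reset, but resources created in D3DPOOL_MANAGED survive a Reset; only D3DPOOL_DEFAULT resources (and device state) have to be released beforehand and re-created afterwards. As a stopgap until a Reset happens, a viewport specified in client-area coordinates can be scaled into back-buffer coordinates, since D3DVIEWPORT9 is relative to the back buffer, which Present() then stretches onto the client area. A minimal sketch of that scaling (the struct and function names are mine, not from the post):

```cpp
#include <cassert>

// A viewport rectangle; mirrors the x/y/width/height part of D3DVIEWPORT9.
struct Rect { int x, y, w, h; };

// Scale a viewport given in client-area coordinates into back-buffer
// coordinates. Because Present() stretches the whole back buffer onto the
// client area, a viewport must be specified relative to the back buffer's
// size, not the window's current client size.
Rect ClientToBackBuffer(const Rect& client,       // viewport in client coords
                        int clientW, int clientH, // current client-area size
                        int bbW, int bbH)         // back buffer size at creation
{
    Rect r;
    r.x = client.x * bbW / clientW;
    r.y = client.y * bbH / clientH;
    r.w = client.w * bbW / clientW;
    r.h = client.h * bbH / clientH;
    return r;
}
```

This only compensates for position and size; the back buffer is still being resampled at Present time, so a Reset to the new client size is still the proper fix once the resize finishes.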
Well I just copied/pasted the loading code I wrote for my OpenGL image loader, then set about writing a GenerateTexture function. The problem is, it fails when I call IDirect3DDevice9's CreateTexture function. Here's the code from the function:

void D3DImage::GenerateTexture(LPDIRECT3DDEVICE9 D3DDevice, UINT MipmapLevels)
{
	DeleteTexture(); // If, for whatever reason, this function has already been
	                 // called and a texture exists, release it before generating
	                 // this one

	// Width (UINT), Height (UINT), bAlpha (bool), and Texture (LPDIRECT3DTEXTURE9)
	// are member variables
	if(FAILED(D3DDevice->CreateTexture(Width, Height, MipmapLevels, 0,
	                                   bAlpha ? D3DFMT_A8R8G8B8 : D3DFMT_R8G8B8,
	                                   D3DPOOL_MANAGED, &Texture, NULL)))
	{blah blah...;}
}

The image is 512x512, 24-bit, and I'm trying to generate only one level. The function returns D3DERR_INVALIDCALL. So what am I doing wrong here? Why is it failing? Any help would be appreciated. Thanks in advance, and sorry for the post being so long; I didn't intend it to be. =)

EDIT: Solved problem 2. Apparently it was failing because of D3DFMT_R8G8B8; it has to be D3DFMT_X8R8G8B8 instead. [Edited by - K A Z E on September 28, 2006 3:46:26 AM]
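For context on why the original call failed: very little hardware exposes the packed 24-bit D3DFMT_R8G8B8 as a texture format, so CreateTexture rejects it with D3DERR_INVALIDCALL, and the padded 32-bit D3DFMT_X8R8G8B8 is the usual substitute (IDirect3D9::CheckDeviceFormat can be used to verify a format before creating with it). One consequence of the switch: the loader's 24-bit pixel data no longer matches the texture's 4-byte pixels, so each row copied into the LockRect() bits must be expanded from 3 bytes per pixel to 4. A sketch of that expansion (BGR byte order assumed, as in BMP data; the function name is hypothetical):

```cpp
#include <cstddef>
#include <cstdint>

// Expand 24-bit B8G8R8 source pixels into 32-bit X8R8G8B8 destination
// pixels, as needed when copying a 24-bit image into a locked
// D3DFMT_X8R8G8B8 texture. X8R8G8B8 in memory is byte order B, G, R, X
// (little-endian), so the three color bytes copy straight across.
void Expand24To32(const uint8_t* src, uint8_t* dst, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; ++i) {
        dst[4*i + 0] = src[3*i + 0]; // B
        dst[4*i + 1] = src[3*i + 1]; // G
        dst[4*i + 2] = src[3*i + 2]; // R
        dst[4*i + 3] = 0xFF;         // X byte; its value is ignored
    }
}
```

When copying row by row, the destination row stride should come from the Pitch field that LockRect() returns, not from Width * 4, since the driver may pad rows.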

