TimothyFarrar

Future of OpenGL on the Mac


Recommended Posts

With the release of Leopard, Apple has also updated their OpenGL Capabilities Tables and Supported OpenGL Extensions Guide.

Apple is introducing some of the basic Shader Model 4.0 functionality (geometry shaders, transform feedback, gpu_shader4) on the NVidia 8600M GT shipping in the MacBook Pro; however, the ATI HD cards (shipping in the iMac line) are still missing even vertex texture fetch support, according to the list of supported extensions.

If you are interested in helping to bring the Mac's OpenGL support up to date, so that the Mac can be a competitive and viable platform for both games and GPGPU-based applications, please send Apple a feature request for full Shader Model 4.0 support on both the NVidia GeForce 8 series and ATI HD series cards, covering features such as:

GL_EXT_gpu_shader4
GL_EXT_texture_array
GL_EXT_texture_integer
GL_EXT_texture_buffer_object

It might not seem important now, because you might not currently be using SM4.0 features in your game or application, but your Direct3D competitors are already making use of these features and have a year's head start with working D3D 10 driver support from NVidia and now ATI. It can take quite some time to develop proper drivers, so now is the time to ensure Apple's OpenGL support does not fall another year or two behind!

Here is a direct link to the Apple Bug Reporter, where you can easily file a feature request: http://developer.apple.com/bugreporter/

This does require a valid Apple Developer Connection account, which you can get for FREE here: http://developer.apple.com/products/online.html
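
If you want to check what your driver currently exposes before filing, something along these lines works under GL 2.x (a minimal sketch, assuming a current GL context; note that strstr() can match substrings, so treat the result as a first approximation):

#include <stdio.h>
#include <string.h>
#include <OpenGL/gl.h>   /* <GL/gl.h> on non-Mac platforms */

/* Returns non-zero if 'name' appears in the driver's extension string. */
static int has_extension(const char *name)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts != NULL && strstr(exts, name) != NULL;
}

static void report_sm4_extensions(void)
{
    const char *wanted[] = {
        "GL_EXT_gpu_shader4",
        "GL_EXT_texture_array",
        "GL_EXT_texture_integer",
        "GL_EXT_texture_buffer_object",
    };
    size_t i;
    for (i = 0; i < sizeof(wanted) / sizeof(wanted[0]); ++i)
        printf("%-32s %s\n", wanted[i],
               has_extension(wanted[i]) ? "supported" : "missing");
}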

Quote:
Original post by TimothyFarrar
With the release of Leopard, Apple has also updated their OpenGL Capabilities Tables and Supported OpenGL Extensions Guide.

Apple is introducing some of the basic Shader Model 4.0 functionality (geometry shaders, transform feedback, gpu_shader4) on the NVidia 8600M GT shipping in the MacBook Pro; however, the ATI HD cards (shipping in the iMac line) are still missing even vertex texture fetch support, according to the list of supported extensions.

If you are interested in helping to bring the Mac's OpenGL support up to date, so that the Mac can be a competitive and viable platform for both games and GPGPU-based applications, please send Apple a feature request for full Shader Model 4.0 support on both the NVidia GeForce 8 series and ATI HD series cards, covering features such as...


Does that ATI card actually support Vertex texture units? I recall that ATI has been very opposed to the idea for some time now...

Edit:
As an amusing aside, I just received the first response to a 4-month-old bug report I filed with Apple, about a fairly glaring problem in the linker. The response reads as follows:
Quote:
This is a follow-up to Bug ID# *******. Engineering believes this issue has been addressed in Mac OS X Leopard.

Leopard is available commercially as of October 26, 2007 at:

<http://www.apple.com/macosx/>

Upon installing this software, please update this bug report with your results.

At least it shows they read it...

A few guys from Apple are heading up the Longs Peak and Mt. Evans work in the ARB. SM4-compliant drivers will arrive soon, I am sure.

Interestingly enough, having installed Leopard on my early-model MacBook Pro a few days ago, I went to look at my GPU info, and discovered that my Radeon X1600 Mobile suddenly sprouted 16 Vertex Texture units (it had no support for vertex textures under Tiger).

Many kudos to Apple for upgrading my GPU at no cost to myself ;)

Quote:
Original post by swiftcoder
Does that ATI card actually support Vertex texture units? I recall that ATI has been very opposed to the idea for some time now...


They didn't include VTF in the SM3.0 cards due to the nature of the hardware; the reasoning being, right or wrong, that render-to-vertex-array was a better approach and that a slow VTF with limited formats, as provided by NV, wasn't worth it.

The latest generation, however, has unified shader cores, so any shader stage can perform (more or less) the same operations.

As for seeing 'D3D10' support from AMD/ATI for GL 2.1, I wouldn't count on it. Aside from any critical bug fixes, I would suspect work on the 2.1-series GL drivers has stopped pending 3.0's release.
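
For anyone who hasn't seen the render-to-vertex-array approach in practice, here is a rough sketch using EXT_framebuffer_object plus ARB_pixel_buffer_object so the copy stays on the GPU. This is untested, the setup (FBO, float texture, update shader) is omitted, and fbo, pbo, W and H are placeholder names:

#include <OpenGL/gl.h>
#include <OpenGL/glext.h>

extern GLuint fbo, pbo;                /* created elsewhere */
extern void draw_update_pass(void);    /* shader writes vertex positions */

void update_and_draw(int W, int H)
{
    /* 1. Render one texel per vertex into a W x H float color buffer. */
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glViewport(0, 0, W, H);
    draw_update_pass();

    /* 2. Copy the result into a buffer object; with a PBO bound as the
       pack buffer, glReadPixels writes into it at offset 0. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, pbo);
    glReadPixels(0, 0, W, H, GL_RGBA, GL_FLOAT, (void *)0);
    glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, 0);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

    /* 3. Rebind the same buffer as vertex data and draw from it. */
    glBindBuffer(GL_ARRAY_BUFFER, pbo);
    glVertexPointer(4, GL_FLOAT, 0, (void *)0);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(GL_POINTS, 0, W * H);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}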

Until Apple actually pull their fingers out, the OS X platform will always be behind PCs when it comes to raw 3D power.

Take a new game like Crysis: there is not one Mac on the market capable of running it in all its glory.

I do appreciate what you are trying to achieve, but there is only one Mac you can buy that is even capable of GL_EXT_gpu_shader4, and it's a notebook (with a minimum price tag of US $2000).

Apple's GPU options are poor, to say the least. No, I am not an Apple basher; in fact, I love OS X ... I would drop PC land in a heartbeat if Apple could meet my demands as an avid gamer and developer.

Quote:
Original post by swiftcoder
Interestingly enough, having installed Leopard on my early-model MacBook Pro a few days ago, I went to look at my GPU info, and discovered that my Radeon X1600 Mobile suddenly sprouted 16 Vertex Texture units (it had no support for vertex textures under Tiger).

Many kudos to Apple for upgrading my GPU at no cost to myself ;)


Oddly enough, all the graphics cards in Apple's drivers now report 16 vertex texture units if you query MAX_VERTEX_TEXTURE_IMAGE_UNITS_ARB ... smells like software emulation to me (many of these cards, and all the ATI cards prior to the HD series, have no vertex texture fetch ability).
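
If you want to check on your own machine, a quick query (assuming a current GL context) looks like this; note it only reports what the driver claims, not whether the fetches actually run on the GPU:

#include <stdio.h>
#include <OpenGL/gl.h>
#include <OpenGL/glext.h>

void print_vtf_units(void)
{
    GLint vtf_units = 0;
    glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS_ARB, &vtf_units);
    printf("MAX_VERTEX_TEXTURE_IMAGE_UNITS_ARB = %d\n", (int)vtf_units);
}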

Quote:
Original post by vibe3d
Until Apple actually pull their fingers out, the OS X platform will always be behind PCs when it comes to raw 3D power.

Take a new game like Crysis: there is not one Mac on the market capable of running it in all its glory.

I do appreciate what you are trying to achieve, but there is only one Mac you can buy that is even capable of GL_EXT_gpu_shader4, and it's a notebook (with a minimum price tag of US $2000).

Apple's GPU options are poor, to say the least. No, I am not an Apple basher; in fact, I love OS X ... I would drop PC land in a heartbeat if Apple could meet my demands as an avid gamer and developer.


I think you summed up the situation quite well. This is a classic chicken-or-egg scenario. IMHO, Apple won't allocate enough resources to get its OpenGL drivers up to speed because it doesn't see a good return on that investment (it would need to result in more Macs being purchased and the user base expanding), and conversely, developers cannot port games/apps to the Mac, or hope to use up-to-date GL features, because of the lack of hardware support in the drivers.

The sad thing here is that Apple seems to have an excellent grasp of developing and marketing the iPod and iPhone, but seems consistently clueless about the marketing potential of the Mac as a gaming platform. Apple has something like 22M OS X users at most (worldwide?), while in 2006 something like 38M game units (boxed titles?) were sold to US PC users alone ... my guess is that one primary reason people don't purchase Macs for home use is the lack of games!

And the key element here is getting full driver support (without software emulation for features the hardware is supposed to support!); without that, nothing will change. Which is exactly why, if you are reading this, you should go and file a feature request for up-to-date SM4.0 hardware support in the drivers ... only the squeaky wheel gets the grease!

Quote:
Original post by TimothyFarrar
Quote:
Original post by swiftcoder
Interestingly enough, having installed Leopard on my early-model MacBook Pro a few days ago, I went to look at my GPU info, and discovered that my Radeon X1600 Mobile suddenly sprouted 16 Vertex Texture units (it had no support for vertex textures under Tiger).

Many kudos to Apple for upgrading my GPU at no cost to myself ;)


Oddly enough, all the graphics cards in Apple's drivers now report 16 vertex texture units if you query MAX_VERTEX_TEXTURE_IMAGE_UNITS_ARB ... smells like software emulation to me (many of these cards, and all the ATI cards prior to the HD series, have no vertex texture fetch ability).


You may well be right. However, is it possible that they are instead emulating vertex texture fetch with render-to-vertex-buffer, as ATI recommends it be done at the application level? It seems possible (since Apple writes the drivers) that they have provided transparent emulation of this kind in the driver itself, as has been proposed previously.

Does anyone have an idea how to test whether this is hardware-based or running in software?
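
One rough approach (a sketch, not a definitive test): draw the same heavy batch twice, once with a vertex shader that performs a texture fetch and once with the fetch replaced by a constant, and compare wall-clock times. A software fallback typically costs orders of magnitude, not a few percent. Here now_seconds(), draw_heavy_batch(), prog_vtf and prog_plain are placeholders you would supply:

#include <OpenGL/gl.h>

extern double now_seconds(void);       /* e.g. wraps gettimeofday() */
extern void draw_heavy_batch(void);    /* e.g. a ~1M vertex draw call */

double time_draw(GLuint program)
{
    double t0;
    glUseProgram(program);
    glFinish();                        /* drain any pending GPU work */
    t0 = now_seconds();
    draw_heavy_batch();
    glFinish();                        /* wait for this draw to complete */
    return now_seconds() - t0;
}

/* Usage: if time_draw(prog_vtf) is vastly slower than
   time_draw(prog_plain), suspect a CPU fallback rather than real
   hardware VTF. */

Apple's OpenGL Profiler should presumably also show when the software renderer kicks in.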
