
OpenGL glAddSwapHintRectWIN


Recommended Posts

Hi, I've been having some problems with this OpenGL extension, but only on some machines. I'm no great expert at using extensions, but I get the extension list with glGetString, check whether 'GL_WIN_swap_hint' is in it, and then use wglGetProcAddress to get the address of glAddSwapHintRectWIN.

 

So far so good. As I understand it, if the extension is not supported, it should either be missing from the list or wglGetProcAddress should return NULL for the address.
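For reference, a minimal sketch of that detection path (assuming a current WGL context; the function-pointer typedef is written out by hand here rather than taken from any particular header):

```cpp
// Minimal sketch of the detection described above (assumes a current WGL
// context). Note that strstr is a simplistic substring check; a robust
// version should match whole, space-delimited extension names.
#include <windows.h>
#include <GL/gl.h>
#include <cstring>

typedef void (APIENTRY *PFNGLADDSWAPHINTRECTWINPROC)(GLint x, GLint y,
                                                     GLsizei width, GLsizei height);

PFNGLADDSWAPHINTRECTWINPROC pglAddSwapHintRectWIN = nullptr;

bool InitSwapHint()
{
    // 1. Look for GL_WIN_swap_hint in the extension string.
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (!ext || !std::strstr(ext, "GL_WIN_swap_hint"))
        return false;

    // 2. Resolve the entry point; a NULL return also means "not supported".
    pglAddSwapHintRectWIN = reinterpret_cast<PFNGLADDSWAPHINTRECTWINPROC>(
        wglGetProcAddress("glAddSwapHintRectWIN"));
    return pglAddSwapHintRectWIN != nullptr;
}
```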

 

The problem I'm finding is that on some PCs it acts correctly and only swaps the requested rect, but on others that claim to support it, the command appears to be ignored. Perhaps I'm misunderstanding and drivers are allowed to ignore it entirely, as it is just a 'hint', but to me that negates half the use of the thing.

 

Basically I have a GUI and various other things being rendered to the screen. On most frames I don't need to render the whole screen, just a small part of it, so I want to render only that part, call SwapBuffers, and have only that region swapped. My question is: if a driver can't swap only part of the buffers, why doesn't it just report that it can't?
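Roughly, the partial-update path I'm after looks like this (a sketch only; pglAddSwapHintRectWIN is the pointer resolved in the snippet above, and DrawGuiRegion is a hypothetical helper):

```cpp
// Rough sketch of the intended partial present (not the full app code).
// Per the GL_WIN_swap_hint spec, accumulated hint rectangles apply to the
// next SwapBuffers call; coordinates are window-relative.
void PresentDirtyRect(HDC hdc, int x, int y, int w, int h)
{
    // Redraw only the dirty region (hypothetical helper).
    // DrawGuiRegion(x, y, w, h);

    // Ask the driver to copy only this rectangle to the front buffer.
    if (pglAddSwapHintRectWIN)
        pglAddSwapHintRectWIN(x, y, w, h);

    SwapBuffers(hdc);
}
```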

 

I'm now in a situation where I'll have to make the default build run a rubbishy slow path, just in case the GL driver has an implementation that doesn't work... unless it really is a driver bug, but I've seen it happen on 2 out of 5 test machines, so that seems unlikely.

 

I've also tried hard-coding a small rectangle, just to check I wasn't passing garbage to the hint function, but I get the same result. Has anyone had any luck with this extension?

 

 


In my opinion this extension is old and odd.

I would suggest you avoid using WGL extensions, as they only target... Windows...

 

Finally, as the name says, this is a hint. Just like any other hint in OpenGL, the driver is free to ignore the request, so it is entirely normal for it to work on some machines and not on others.


I would usually agree about WGL extensions, but this is for a Windows-only app, and it was quite easy to put in.

 

I'm getting the impression that the problem is that while OpenGL has a means of showing whether the extension is available (wglGetProcAddress and glGetString), it doesn't seem to have a way of letting you know whether the extension actually works on your particular hardware configuration. :blink: I had a glimmer of hope when I found in the docs that the PIXELFORMATDESCRIPTOR has two flags related to glAddSwapHintRectWIN, PFD_SWAP_COPY and PFD_SWAP_EXCHANGE. However, when I checked, PFD_SWAP_EXCHANGE was set both on my machine where it was working and on the test machine where it wasn't, so it didn't provide a way of telling whether the extension was 'active'.

 

https://msdn.microsoft.com/en-us/library/windows/desktop/dd368826(v=vs.85).aspx
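For reference, one way of reading those flags back (a sketch, assuming the window's pixel format has already been set on the HDC):

```cpp
// Sketch of checking the swap-method flags via DescribePixelFormat
// (assumes SetPixelFormat has already been called on this HDC).
#include <windows.h>

void ReportSwapMethod(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd = {};
    int format = GetPixelFormat(hdc);
    DescribePixelFormat(hdc, format, sizeof(pfd), &pfd);

    if (pfd.dwFlags & PFD_SWAP_COPY)
        OutputDebugStringA("SwapBuffers copies the back buffer to the front\n");
    else if (pfd.dwFlags & PFD_SWAP_EXCHANGE)
        OutputDebugStringA("SwapBuffers exchanges front and back buffers\n");
    else
        OutputDebugStringA("Swap method is unspecified\n");
}
```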

 

The problem I'm facing is that I'm combining a complex 3D model (say 300,000 polys) with a GUI. I don't want to have to render the model every time I change a tiny element in the user interface (say a cursor blinking on and off!). As it is, without the swap hint, double/triple buffering forces me to update the whole frame, unless I render the model to another surface and then copy that to the main one, which might be difficult as I'm trying to use OpenGL 1 lol.  :D

 

I've now had to change my app code to update the whole frame each time by default, and then offer the 'proper' implementation of only updating subrects via a command line option, which seems ridiculous. :wacko: If there is a better way of doing this I'd love to hear it!  :)


OK.

 

Then you have very few options.

 

I'm not sure whether it was already available back in GL 1 (I couldn't find any suitable info about it), but glScissor might help you.
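For example, a minimal sketch of limiting drawing to a dirty rectangle with the scissor test (window coordinates, origin at the bottom-left):

```cpp
#include <windows.h>
#include <GL/gl.h>

// Restrict clears and draws to a dirty rectangle. SwapBuffers will still
// present the whole frame; the scissor only limits which pixels are modified.
void DrawDirtyRect(int x, int y, int w, int h)
{
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, w, h);

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... redraw only the elements that intersect the rectangle ...

    glDisable(GL_SCISSOR_TEST);
}
```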

Also, you might be interested in viewports as long as your interface design allows it.

 

Also, again (maybe I was not clear in my last post): the fact that an extension is advertised by the hardware and the driver does not mean a hint function from that extension will always have an effect. A hint is just an indication we give to the driver; the driver then chooses whether or not to honor it, depending on various factors.

Take glHint, for example: you can pass whatever (valid) arguments you want and still end up with the same results.

 

You can also have a look at pbuffers.

Edited by _Silence_


It is for my little 3D paint app:

http://www.gamedev.net/blog/2199/entry-2262243-3d-paint-preview-video/

 

I am already using glScissor and viewports, but they do not solve the problem of having to draw the whole frame when the swap hint does not work (and as I have no way of querying this, I am forced to draw the whole screen every time :( ).

 

After a bit of assessment, the problem is not hugely severe apart from causing lag with high-poly models in the user interface, so I may just live with it. :rolleyes:

 

I am also already copying the 3D view to a texture in some modes to accelerate things, so I could potentially use the same functionality to save the 3D view to a texture when I stop rotating the view, and then render it as a quad when doing small GUI rect updates.
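Something like this, roughly (a sketch using GL 1.x calls; cacheTex is assumed to be an already-created texture):

```cpp
#include <windows.h>
#include <GL/gl.h>

// After the model has been drawn and rotation stops, copy the colour buffer
// into a texture so that GUI-only updates can redraw the 3D view as a quad.
// Core GL 1.x requires power-of-two texture dimensions unless
// GL_ARB_texture_non_power_of_two is available.
void Cache3DView(GLuint cacheTex, int width, int height)
{
    glBindTexture(GL_TEXTURE_2D, cacheTex);
    // Copy the width x height region of the current read buffer,
    // starting at the lower-left corner, into the texture.
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, width, height, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
```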

 

This is still a convoluted way of doing things, and limiting the swap area would seem much more sensible. But perhaps there are hidden hardware reasons why this is not possible in some cases (tiled rendering or some such). A way of querying OpenGL whether the extension is active in the current context would seem far more sensible than the current hint system, imo.

 


You're right, SwapBuffers does not care at all about scissors and viewports...

I just watched your video and now I understand your problem better. Several things came to mind:

 

You can use different windows: one for rendering and one or two for the interface. You'll then be able to easily choose which window to swap buffers for.

 

Why target OpenGL 1? I doubt that nowadays you can still find a graphics card that can do GL 1 but not GL 2 or 3... I mean, GL 2 came out in the early 2000s. That was 16 years ago... And you won't be forced to use shaders or to remove deprecated GL features.

 

I ask because if you want to keep your interface inside GL (which seems to be what you are doing), FBOs will be very helpful to you. You can render your GUI into FBOs and then only have to composite them onto a full-screen quad. You can of course do that with textures, as you're already doing, but it will be slower. It still won't remove your wish to swap only a portion of the screen, however.
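Roughly like this (a sketch only; FBOs need GL 3.0+ or the ARB/EXT framebuffer_object extensions, so an extension loader such as GLEW is assumed here):

```cpp
// Sketch: render the GUI into a texture via an FBO, then composite that
// texture as a quad whenever the 3D view is redrawn.
#include <GL/glew.h>

GLuint fbo = 0, guiTex = 0;

void CreateGuiTarget(int width, int height)
{
    glGenTextures(1, &guiTex);
    glBindTexture(GL_TEXTURE_2D, guiTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, guiTex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void RedrawGui()
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the GUI widgets here ...
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    // guiTex can now be drawn as a textured quad over the 3D view.
}
```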

 

In my opinion it might be more efficient for a graphics card to swap the full buffer than a portion of it, for several reasons: memory alignment in the buffer, and having to keep the other buffers aligned too. That implies extra work for something that should be as easy as setting the address of a buffer to another value...

 

As a final thought, and the one you might prefer, what you can do is: don't clear your color buffer; instead, draw a black rectangle over the GUI portion that has changed, redraw on top of it, then swap your buffers. Since you don't clear the buffer, you don't have to redraw your model each time... You'll have to play with the depth test to make it work well.
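Something along these lines (a very rough sketch, assuming a GL 1.x fixed-function context with an orthographic projection matching the window, and a hypothetical DrawGuiElementsIn helper):

```cpp
// Sketch of the "don't clear, patch only the changed region" idea.
// The depth test is disabled while patching so the quad always overwrites.
#include <windows.h>
#include <GL/gl.h>

void PatchGuiRegion(HDC hdc, int x, int y, int w, int h)
{
    glDisable(GL_DEPTH_TEST);

    // Cover the stale GUI area with a background-coloured quad...
    glColor3f(0.0f, 0.0f, 0.0f);
    glBegin(GL_QUADS);
    glVertex2i(x,     y);
    glVertex2i(x + w, y);
    glVertex2i(x + w, y + h);
    glVertex2i(x,     y + h);
    glEnd();

    // ...redraw the GUI elements that overlap it (hypothetical helper)...
    // DrawGuiElementsIn(x, y, w, h);

    glEnable(GL_DEPTH_TEST);

    // ...then present. With double/triple buffering the other back buffers
    // also need to contain the model, as discussed later in the thread.
    SwapBuffers(hdc);
}
```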

 

Another solution that you might not like would be to use a GUI toolkit such as Qt or GTK...


Quote: "You can use different windows: one for rendering and one or two for the interface. You'll then be able to easily choose which window to swap buffers for."

I initially had a look at this, but I had problems such as the GUI drawing over the 3D window (dropdown menus that obscure it, etc.).

 

Quote: "Why target OpenGL 1? I doubt that nowadays you can still find a graphics card that can do GL 1 but not GL 2 or 3... I mean, GL 2 came out in the early 2000s. That was 16 years ago... And you won't be forced to use shaders or to remove deprecated GL features."

Hehe, I will update it if I get around to it (I would probably need it for normal and specular maps). I have an OpenGL ES 2.0 version of the GUI for Android, but I just wanted something easy to get going that runs anywhere to start with for the paint app. And display lists and wireframe are nice and easy.

 

Quote: "In my opinion it might be more efficient for a graphics card to swap the full buffer than a portion of it, for several reasons: memory alignment in the buffer, and having to keep the other buffers aligned too. That implies extra work for something that should be as easy as setting the address of a buffer to another value..."

Yup, this is true. I may have been assuming it was doing an extra copy somewhere for windowed rather than full-screen rendering, but that might well not be true. I guess how this is implemented is a black box to me, and OpenGL has to work well with lots of different implementations under the hood.

 

Quote: "As a final thought, and the one you might prefer, what you can do is: don't clear your color buffer; instead, draw a black rectangle over the GUI portion that has changed, redraw on top of it, then swap your buffers. Since you don't clear the buffer, you don't have to redraw your model each time... You'll have to play with the depth test to make it work well."

I did end up in a situation like this by accident when things went wrong lol. It is a little more complex though, because the app might be using single, double, or triple buffering etc., and if you try to force it to a single buffer it would impede getting nice fluid graphics. So you have to make sure that not only your back buffer is filled with the right background, but the last 2 or 3 as well.
