  • Similar Content

    • By Achivai
      Hey, I am semi-new to 3D programming and I've hit a snag. I have one object, let's call it Object A. This object has a long array of 3D xyz-positions stored in its VBO as an instanced attribute. I am using these numbers to instance Object A a couple of thousand times. So far so good.
      Now I've hit a point where I want to remove one of these instances of Object A while the game is running, but I'm not quite sure how to go about it. My first thought was to update the instanced attribute of Object A and change the positions to some dummy value that I could catch in the vertex shader, deciding there whether to draw that instance or not, but I think that would be expensive to do while the game is running, considering it might have to happen several times every frame in some cases.
      I'm not sure how to proceed, anyone have any tips?
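      One common way to handle this (a sketch, not tested against the poster's setup, and the buffer/type names are made up for the example): keep a CPU-side copy of the instance positions, remove an instance by overwriting it with the last one, re-upload only that element with glBufferSubData, and draw with a smaller instance count. The shader never needs to know, and the buffer is only touched when something is actually removed.

      #include <vector>
      #include <GL/glew.h>

      struct InstanceData { float x, y, z; };          // one instanced attribute entry

      std::vector<InstanceData> instances;             // CPU-side mirror of the instance VBO
      GLuint instanceVbo = 0;                          // buffer bound as the instanced attribute

      void removeInstance(size_t index)
      {
          // Swap-and-pop: overwrite the removed slot with the last instance, then shrink.
          instances[index] = instances.back();
          instances.pop_back();

          // Upload only the single element that changed (skip it if we removed the last one).
          if (index < instances.size())
          {
              glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
              glBufferSubData(GL_ARRAY_BUFFER,
                              index * sizeof(InstanceData),
                              sizeof(InstanceData),
                              &instances[index]);
          }
      }

      // The draw call then simply uses the reduced count, e.g.:
      // glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, (GLsizei)instances.size());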
    • By fleissi
      Hey guys!

      I'm new here and I recently started developing my own rendering engine. It's open source, based on OpenGL/DirectX and C++.
      The full source code is hosted on github:
      https://github.com/fleissna/flyEngine

      I would appreciate it if people with experience in game development / engine design could take a look at my source code. I'm looking for honest, constructive criticism on how to improve the engine.
      I'm currently writing my master's thesis in computer science, and over the past year I've gone through all the basics of graphics programming, learned DirectX and OpenGL, read some articles from Nvidia's GPU Gems, read books, and integrated some of this stuff step by step into the engine.

      I know about the basics, but I feel like there is some missing link that I didn't get yet to merge all those little pieces together.

      Features I have so far:
      - Dynamic shader generation based on material properties
      - Dynamic sorting of meshes to be rendered based on shader and material (see the sketch right after this list)
      - Rendering large amounts of static meshes
      - Hierarchical culling (detail + view frustum)
      - Limited support for dynamic (i.e. moving) meshes
      - Normal, Parallax and Relief Mapping implementations
      - Wind animations based on vertex displacement
      - A very basic integration of the Bullet physics engine
      - Procedural Grass generation
      - Some post processing effects (Depth of Field, Light Volumes, Screen Space Reflections, God Rays)
      - Caching mechanisms for textures, shaders, materials and meshes
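      As an illustration of the "dynamic sorting by shader and material" item above (a generic sketch, not code from flyEngine; all names are invented), many engines build a packed sort key so that draws sharing a shader, and within that a material, end up adjacent and state changes stay rare:

      #include <algorithm>
      #include <cstdint>
      #include <vector>

      struct DrawItem
      {
          std::uint32_t shaderId;
          std::uint32_t materialId;
          std::uint32_t meshId;

          // Shader in the high bits, material below it, mesh in the low bits, so a
          // plain integer sort groups draws by shader first, then by material.
          std::uint64_t sortKey() const
          {
              return (std::uint64_t(shaderId   & 0xFFFFFF) << 40) |
                     (std::uint64_t(materialId & 0xFFFFFF) << 16) |
                      std::uint64_t(meshId     & 0xFFFF);
          }
      };

      void sortDrawList(std::vector<DrawItem>& items)
      {
          std::sort(items.begin(), items.end(),
                    [](const DrawItem& a, const DrawItem& b)
                    { return a.sortKey() < b.sortKey(); });
          // The render loop then binds a new shader/material only when the
          // corresponding bits of the key change between consecutive items.
      }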

      Features I would like to have:
      - Global illumination methods
      - Scalable physics
      - Occlusion culling
      - A nice procedural terrain generator
      - Scripting
      - Level Editing
      - Sound system
      - Optimization techniques

      Books I have so far:
      - Real-Time Rendering Third Edition
      - 3D Game Programming with DirectX 11
      - Vulkan Cookbook (not started yet)

      I hope you guys can take a look at my source code and if you're really motivated, feel free to contribute :-)
      There are some videos on youtube that demonstrate some of the features:
      Procedural grass on the GPU
      Procedural Terrain Engine
      Quadtree detail and view frustum culling

      The long term goal is to turn this into a commercial game engine. I'm aware that this is a very ambitious goal, but I'm sure it's possible if you work hard for it.

      Bye,

      Phil
    • By tj8146
      I have attached my project in a .zip file if you wish to run it for yourself.
      I am making a simple 2D top-down game, and I am trying to run my code to see whether my window creation is working and whether my timer is working with it. Every time I run it, though, I get errors; when I fix those, more appear, and then the same ones keep coming back, so I end up going round in circles. Is there anyone who could help with this?
       
      Errors when I build my code:
      1>Renderer.cpp
      1>c:\users\documents\opengl\game\game\renderer.h(15): error C2039: 'string': is not a member of 'std'
      1>c:\program files (x86)\windows kits\10\include\10.0.16299.0\ucrt\stddef.h(18): note: see declaration of 'std'
      1>c:\users\documents\opengl\game\game\renderer.h(15): error C2061: syntax error: identifier 'string'
      1>c:\users\documents\opengl\game\game\renderer.cpp(28): error C2511: 'bool Game::Rendering::initialize(int,int,bool,std::string)': overloaded member function not found in 'Game::Rendering'
      1>c:\users\documents\opengl\game\game\renderer.h(9): note: see declaration of 'Game::Rendering'
      1>c:\users\documents\opengl\game\game\renderer.cpp(35): error C2597: illegal reference to non-static member 'Game::Rendering::window'
      1>c:\users\documents\opengl\game\game\renderer.cpp(36): error C2597: illegal reference to non-static member 'Game::Rendering::window'
      1>c:\users\documents\opengl\game\game\renderer.cpp(43): error C2597: illegal reference to non-static member 'Game::Rendering::window'
      1>Done building project "Game.vcxproj" -- FAILED.
      ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
       
      Renderer.cpp
      #include <GL/glew.h>
      #include <GLFW/glfw3.h>
      #include "Renderer.h"
      #include "Timer.h"
      #include <iostream>

      namespace Game
      {
          GLFWwindow* window;

          /* Initialize the library */
          Rendering::Rendering()
          {
              mClock = new Clock;
          }

          Rendering::~Rendering()
          {
              shutdown();
          }

          bool Rendering::initialize(uint width, uint height, bool fullscreen, std::string window_title)
          {
              if (!glfwInit())
              {
                  return -1;
              }

              /* Create a windowed mode window and its OpenGL context */
              window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
              if (!window)
              {
                  glfwTerminate();
                  return -1;
              }

              /* Make the window's context current */
              glfwMakeContextCurrent(window);
              glViewport(0, 0, (GLsizei)width, (GLsizei)height);
              glOrtho(0, (GLsizei)width, (GLsizei)height, 0, 1, -1);
              glMatrixMode(GL_PROJECTION);
              glLoadIdentity();
              glfwSwapInterval(1);
              glEnable(GL_SMOOTH);
              glEnable(GL_DEPTH_TEST);
              glEnable(GL_BLEND);
              glDepthFunc(GL_LEQUAL);
              glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
              glEnable(GL_TEXTURE_2D);
              glLoadIdentity();
              return true;
          }

          bool Rendering::render()
          {
              /* Loop until the user closes the window */
              if (!glfwWindowShouldClose(window))
                  return false;

              /* Render here */
              mClock->reset();
              glfwPollEvents();
              if (mClock->step())
              {
                  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
                  glfwSwapBuffers(window);
                  mClock->update();
              }
              return true;
          }

          void Rendering::shutdown()
          {
              glfwDestroyWindow(window);
              glfwTerminate();
          }

          GLFWwindow* Rendering::getCurrentWindow()
          {
              return window;
          }
      }

      Renderer.h

      #pragma once

      namespace Game
      {
          class Clock;

          class Rendering
          {
          public:
              Rendering();
              ~Rendering();
              bool initialize(uint width, uint height, bool fullscreen, std::string window_title = "Rendering window");
              void shutdown();
              bool render();
              GLFWwindow* getCurrentWindow();

          private:
              GLFWwindow* window;
              Clock* mClock;
          };
      }

      Timer.cpp

      #include <GL/glew.h>
      #include <GLFW/glfw3.h>
      #include <time.h>
      #include "Timer.h"

      namespace Game
      {
          Clock::Clock() :
              mTicksPerSecond(50),
              mSkipTics(1000 / mTicksPerSecond),
              mMaxFrameSkip(10),
              mLoops(0)
          {
              mLastTick = tick();
          }

          Clock::~Clock()
          {
          }

          bool Clock::step()
          {
              if (tick() > mLastTick && mLoops < mMaxFrameSkip)
                  return true;
              return false;
          }

          void Clock::reset()
          {
              mLoops = 0;
          }

          void Clock::update()
          {
              mLastTick += mSkipTics;
              mLoops++;
          }

          clock_t Clock::tick()
          {
              return clock();
          }
      }

      Timer.h

      #pragma once

      #include "Common.h"

      namespace Game
      {
          class Clock
          {
          public:
              Clock();
              ~Clock();
              void update();
              bool step();
              void reset();
              clock_t tick();

          private:
              uint mTicksPerSecond;
              ufloat mSkipTics;
              uint mMaxFrameSkip;
              uint mLoops;
              uint mLastTick;
          };
      }

      Common.h

      #pragma once

      #include <cstdio>
      #include <cstdlib>
      #include <ctime>
      #include <cstring>
      #include <cmath>
      #include <iostream>

      namespace Game
      {
          typedef unsigned char  uchar;
          typedef unsigned short ushort;
          typedef unsigned int   uint;
          typedef unsigned long  ulong;
          typedef float          ufloat;
      }
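      A minimal sketch of what the first errors point at (a guess from the log, not a verified fix): Renderer.h uses std::string, GLFWwindow and uint in its declarations but includes nothing that defines them, so it only parses when the including .cpp happens to pull the right headers in first. Giving the header its own includes should clear the C2039/C2061 errors, and once initialize() is declared correctly, the follow-on C2511/C2597 errors about the member function and 'window' should go away too:

      // Renderer.h (sketch of the suggested change; the rest of the class is unchanged)
      #pragma once

      #include <string>         // std::string used in the initialize() signature
      #include <GLFW/glfw3.h>   // GLFWwindow used as a member and return type
      #include "Common.h"       // uint typedef used in the initialize() signature

      namespace Game
      {
          class Clock;

          class Rendering
          {
          public:
              Rendering();
              ~Rendering();
              bool initialize(uint width, uint height, bool fullscreen,
                              std::string window_title = "Rendering window");
              void shutdown();
              bool render();
              GLFWwindow* getCurrentWindow();

          private:
              GLFWwindow* window;
              Clock* mClock;
          };
      }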
      Game.zip
    • By lxjk
      Hi guys,
      There are many ways to do light culling in tile-based shading. I've been playing with this idea for a while, and just want to throw it out there.
      Because tile frustums are generally small compared to the light radius, I tried using a cone test to reduce the false positives introduced by the commonly used sphere-frustum test.
      On top of that, I use distance to the camera rather than depth for the near/far test (a.k.a. sliced by spheres).
      This method can be naturally extended to clustered light culling as well.
      The following image shows the general idea (the image itself is not preserved in this archived copy).

      Performance-wise, I get around a 15% improvement over the sphere-frustum test. You can also see how a single light performs below: from left to right, (1) a standard rendering of a point light; then the tiles that pass (2) the sphere-frustum test, (3) the cone test, and (4) the spherical-sliced cone test.
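      The exact math lives in the blog post linked below, so purely as an illustration (this is not the author's code, and the tile-cone construction here is an assumption, not taken from the post), a sphere-vs-cone test combined with the spherical near/far slice can look roughly like this:

      #include <algorithm>
      #include <cmath>

      struct Vec3 { float x, y, z; };

      static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
      static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

      // The tile is approximated by a cone with its apex at the camera, a normalized
      // axis through the tile center, and half-angle A (cosA/sinA precomputed).
      // The light is a sphere; near/far use distance to the camera ("sliced by spheres").
      bool lightIntersectsTileCone(Vec3 cameraPos, Vec3 coneAxis, float cosA, float sinA,
                                   float sliceNear, float sliceFar,
                                   Vec3 lightPos, float lightRadius)
      {
          Vec3  v      = sub(lightPos, cameraPos);
          float distSq = dot(v, v);
          float along  = dot(v, coneAxis);                      // distance along the cone axis

          // Signed distance from the sphere center to the cone surface (negative inside).
          float ortho      = std::sqrt(std::max(0.0f, distSq - along * along));
          float distToCone = cosA * ortho - sinA * along;
          if (distToCone > lightRadius)
              return false;                                     // light lies fully outside the cone

          // Spherical near/far slice: compare distance to the camera, not view-space depth.
          float dist = std::sqrt(distSq);
          return dist - lightRadius <= sliceFar && dist + lightRadius >= sliceNear;
      }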
       

       
      I put the details in my blog post (https://lxjk.github.io/2018/03/25/Improve-Tile-based-Light-Culling-with-Spherical-sliced-Cone.html), GLSL source code included!
       
      Eric
    • By Fadey Duh
      Good evening everyone!

      I was wondering if there is something equivalent to GL_NV_blend_equation_advanced for AMD?
      Basically, I'm trying to find a more compatible version of it.

      Thank you!
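      A hedged pointer rather than a definitive answer: the cross-vendor (Khronos) form of that extension is GL_KHR_blend_equation_advanced, so checking for it at run time is a reasonable first step; whether a given AMD driver actually exposes it has to be verified on the target hardware. A minimal check (assuming a 3.x+ context so glGetStringi is available):

      #include <GL/glew.h>
      #include <cstring>

      bool hasAdvancedBlend()
      {
          GLint count = 0;
          glGetIntegerv(GL_NUM_EXTENSIONS, &count);
          for (GLint i = 0; i < count; ++i)
          {
              const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
              if (ext && std::strcmp(ext, "GL_KHR_blend_equation_advanced") == 0)
                  return true;          // the standardized form of the NV extension is present
          }
          return false;
      }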

Archived

This topic is now archived and is closed to further replies.

OpenGL 2.0?


Recommended Posts

Has anyone heard any word of what will become of 3dLabs' proposal for OpenGL 2.0? I'm brand spankin' new to OpenGL programming, and I kind of worry about the ARB's lack of progress in putting in new features. What 3dLabs proposed seems pretty cool, but I've yet to hear any mention of whether it will be improved or not. Somehow, having proprietary extensions seems to be antithetical to the whole notion of the Open part of OpenGL. I hope the proposal goes through....

quote:
Original post by Dauntless
Somehow, having proprietary extensions seems to be antithetical to the whole notion of the Open part of OpenGL. I hope the proposal goes through....

I disagree. It makes it more open by allowing vendors to improve the API independently of any central group. The central group (the ARB) can later standardize the extensions. This whole process lets vendors add and test their features from the beginning.
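For readers as new to OpenGL as the original poster, the mechanism being debated works roughly like this: a program asks the driver which extensions it exposes, then fetches the extra entry points by name at run time. A rough Windows-flavoured sketch (the extension chosen here, ARB_multitexture, is just an example of one that was later folded into core OpenGL 1.3; error handling is omitted):

// Sketch: detecting and loading a vendor/ARB extension at run time on Windows.
#include <windows.h>
#include <GL/gl.h>
#include <cstring>

// Entry points beyond OpenGL 1.1 are not exported by opengl32.dll, so they are
// declared as function pointers and fetched from the driver by name.
typedef void (APIENTRY *PFNGLACTIVETEXTUREARBPROC)(GLenum texture);
PFNGLACTIVETEXTUREARBPROC glActiveTextureARB = nullptr;

bool loadMultitexture()
{
    // The extension string advertises everything this particular driver supports.
    const char* extensions = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (!extensions || !std::strstr(extensions, "GL_ARB_multitexture"))
        return false;                                   // not exposed by this card/driver

    // Fetch the actual entry point from the ICD; requires a current GL context.
    glActiveTextureARB = reinterpret_cast<PFNGLACTIVETEXTUREARBPROC>(
        wglGetProcAddress("glActiveTextureARB"));
    return glActiveTextureARB != nullptr;
}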

[Resist Windows XP's Invasive Product Activation Technology!]

In my (personal) opinion ... as much as I love OpenGL compared to D3D ... D3D will win unless the ARB becomes FAR more proactive with regard to the extension problem. It's very easy to promote such extensions, but unless every card manufacturer adheres to the basic structure, the whole thing is for nothing. It's about time that card manufacturers got together and decided upon a common policy which sidelined the ARB and produced a single interface to sell OpenGL. Otherwise, like I said, OpenGL will become defunct.

I never thought I'd say this, but I'm now looking at D3D as a viable alternative - and D3D has so many problems it isn't true - but it's still more viable at the moment.

OpenGL does have a "single interface"; that's why all the extensions have the same 'style'. However, their potential uniqueness is the whole reason behind them! You can get more out of an NVidia or ATI or whatever video card through extensions than you can through DirectX (by the word of both NVidia and ATI). If you're worried about programmable shaders, those should be standardized in the near future, since ATI and NVidia are finally agreeing with each other about them (read the ARB notes).

OpenGL 1.3 does have a lot of the extensions turned into standard functions, but Windows doesn't support OpenGL 1.3 out of the box (Microsoft's opengl32.dll still only exposes 1.1, so anything newer has to be loaded through the extension mechanism). Almost every other OS does. Microsoft is simply trying to suppress OpenGL.

[Resist Windows XP's Invasive Product Activation Technology!]

N and V ...

Actually you're kinda wrong and right at the same time - OpenGL (and extensions) have been around for a while now, and no one has agreed to agree! It's up to the ARB to force this kind of common policy, but I don't see it happening for at least 2 years, maybe more. I wish that wasn't the case ... but I fear it is. The ARB is a VERY slow-acting organisation, whereas at least D3D is owned by one company and developed for its own purposes - I wish this wasn't the case, but I fear it's true.

quote:
Original post by Shag
N and V ...

Actually you're kinda wrong and right at the same time - OpenGL (and extensions) have been around for a while now, and no one has agreed to agree! It's up to the ARB to force this kind of common policy, but I don't see it happening for at least 2 years, maybe more. I wish that wasn't the case ... but I fear it is. The ARB is a VERY slow-acting organisation, whereas at least D3D is owned by one company and developed for its own purposes - I wish this wasn't the case, but I fear it's true.


In that case, why don't you just use D3D instead of arguing about OGL's future?

I don't mean to be harsh or anything, but if you don't see a future for OGL, why in heck do you use it?





"And that''s the bottom line cause I said so!"

Cyberdrek
Headhunter Soft
A division of DLC Multimedia

Resist Windows XP's Invasive Product Activation Technology!

"gitty up" -- Kramer

Why the heck do I use it?

Because it's wonderful to use, and I care about its future. I just don't want OpenGL to end up on the scrap pile because it couldn't compete. That would be the biggest shame ever! Can you imagine Microsoft seeing off another BETTER technology?

Rest my case ...

