  • Similar Content

    • By fleissi
      Hey guys!

      I'm new here and I recently started developing my own rendering engine. It's open source, based on OpenGL/DirectX and C++.
      The full source code is hosted on GitHub:
      https://github.com/fleissna/flyEngine

      I would appreciate it if people with experience in game development / engine design could take a look at my source code. I'm looking for honest, constructive criticism on how to improve the engine.
      I'm currently writing my master's thesis in computer science. Over the past year I've gone through all the basics of graphics programming, learned DirectX and OpenGL, read some articles in Nvidia's GPU Gems, read books, and integrated some of this stuff step by step into the engine.

      I know about the basics, but I feel like there is some missing link that I didn't get yet to merge all those little pieces together.

      Features I have so far:
      - Dynamic shader generation based on material properties
      - Dynamic sorting of meshes to be rendered, based on shader and material (see the sketch after this list)
      - Rendering large amounts of static meshes
      - Hierarchical culling (detail + view frustum)
      - Limited support for dynamic (i.e. moving) meshes
      - Normal, Parallax and Relief Mapping implementations
      - Wind animations based on vertex displacement
      - A very basic integration of the Bullet physics engine
      - Procedural Grass generation
      - Some post processing effects (Depth of Field, Light Volumes, Screen Space Reflections, God Rays)
      - Caching mechanisms for textures, shaders, materials and meshes
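
      To give an idea of how the sorting mentioned above works, here's a rough standalone C++ sketch (simplified, with illustrative names only; it's not the actual flyEngine code): each draw call gets a packed 64-bit key with the shader ID in the high bits and the material ID in the low bits, so a plain sort groups draws by shader first, then material.

      #include <algorithm>
      #include <cstdint>
      #include <vector>

      // Simplified sketch, not actual engine code: pack state IDs into one key
      // so that sorting groups draw calls by shader first, then by material,
      // minimizing expensive state changes during submission.
      struct DrawCall {
          std::uint64_t sortKey;
          int meshIndex; // stand-in for the engine's real mesh handle
      };

      std::uint64_t makeSortKey(std::uint32_t shaderId, std::uint32_t materialId) {
          // Shader goes in the high bits: switching GPU programs is usually the
          // most expensive change, so it should vary least across the sorted list.
          return (std::uint64_t(shaderId) << 32) | std::uint64_t(materialId);
      }

      void sortDrawCalls(std::vector<DrawCall>& calls) {
          std::sort(calls.begin(), calls.end(),
                    [](const DrawCall& a, const DrawCall& b) {
                        return a.sortKey < b.sortKey;
                    });
      }

      Submitting in this order means the renderer only has to rebind a shader when the high bits of the key change, and a material when the low bits change.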

      Features I would like to have:
      - Global illumination methods
      - Scalable physics
      - Occlusion culling
      - A nice procedural terrain generator
      - Scripting
      - Level Editing
      - Sound system
      - Optimization techniques

      Books I have so far:
      - Real-Time Rendering Third Edition
      - 3D Game Programming with DirectX 11
      - Vulkan Cookbook (not started yet)

      I hope you guys can take a look at my source code, and if you're really motivated, feel free to contribute :-)
      There are some videos on YouTube that demonstrate some of the features:
      Procedural grass on the GPU
      Procedural Terrain Engine
      Quadtree detail and view frustum culling

      The long term goal is to turn this into a commercial game engine. I'm aware that this is a very ambitious goal, but I'm sure it's possible if you work hard for it.

      Bye,

      Phil
    • By tj8146
      I have attached my project in a .zip file if you wish to run it for yourself.
      I am making a simple 2D top-down game, and I am trying to run my code to check that my window creation works and that my timer works with it. Every time I run it, though, I get errors; when I fix those errors, more appear, and then the same errors keep coming back. I end up just going round in circles. Is there anyone who could help with this?
       
      Errors when I build my code:
      1>Renderer.cpp
      1>c:\users\documents\opengl\game\game\renderer.h(15): error C2039: 'string': is not a member of 'std'
      1>c:\program files (x86)\windows kits\10\include\10.0.16299.0\ucrt\stddef.h(18): note: see declaration of 'std'
      1>c:\users\documents\opengl\game\game\renderer.h(15): error C2061: syntax error: identifier 'string'
      1>c:\users\documents\opengl\game\game\renderer.cpp(28): error C2511: 'bool Game::Rendering::initialize(int,int,bool,std::string)': overloaded member function not found in 'Game::Rendering'
      1>c:\users\documents\opengl\game\game\renderer.h(9): note: see declaration of 'Game::Rendering'
      1>c:\users\documents\opengl\game\game\renderer.cpp(35): error C2597: illegal reference to non-static member 'Game::Rendering::window'
      1>c:\users\documents\opengl\game\game\renderer.cpp(36): error C2597: illegal reference to non-static member 'Game::Rendering::window'
      1>c:\users\documents\opengl\game\game\renderer.cpp(43): error C2597: illegal reference to non-static member 'Game::Rendering::window'
      1>Done building project "Game.vcxproj" -- FAILED.
      ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
       
      Renderer.cpp
      #include <GL/glew.h>
      #include <GLFW/glfw3.h>
      #include "Renderer.h"
      #include "Timer.h"
      #include <iostream>

      namespace Game
      {
          GLFWwindow* window;

          /* Initialize the library */
          Rendering::Rendering()
          {
              mClock = new Clock;
          }

          Rendering::~Rendering()
          {
              shutdown();
          }

          bool Rendering::initialize(uint width, uint height, bool fullscreen, std::string window_title)
          {
              if (!glfwInit())
              {
                  return -1;
              }

              /* Create a windowed mode window and its OpenGL context */
              window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
              if (!window)
              {
                  glfwTerminate();
                  return -1;
              }

              /* Make the window's context current */
              glfwMakeContextCurrent(window);
              glViewport(0, 0, (GLsizei)width, (GLsizei)height);
              glOrtho(0, (GLsizei)width, (GLsizei)height, 0, 1, -1);
              glMatrixMode(GL_PROJECTION);
              glLoadIdentity();
              glfwSwapInterval(1);
              glEnable(GL_SMOOTH);
              glEnable(GL_DEPTH_TEST);
              glEnable(GL_BLEND);
              glDepthFunc(GL_LEQUAL);
              glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
              glEnable(GL_TEXTURE_2D);
              glLoadIdentity();

              return true;
          }

          bool Rendering::render()
          {
              /* Loop until the user closes the window */
              if (!glfwWindowShouldClose(window))
                  return false;

              /* Render here */
              mClock->reset();
              glfwPollEvents();
              if (mClock->step())
              {
                  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
                  glfwSwapBuffers(window);
                  mClock->update();
              }
              return true;
          }

          void Rendering::shutdown()
          {
              glfwDestroyWindow(window);
              glfwTerminate();
          }

          GLFWwindow* Rendering::getCurrentWindow()
          {
              return window;
          }
      }
      Renderer.h
      #pragma once

      namespace Game
      {
          class Clock;

          class Rendering
          {
          public:
              Rendering();
              ~Rendering();

              bool initialize(uint width, uint height, bool fullscreen, std::string window_title = "Rendering window");
              void shutdown();
              bool render();

              GLFWwindow* getCurrentWindow();

          private:
              GLFWwindow* window;
              Clock* mClock;
          };
      }
      Timer.cpp
      #include <GL/glew.h>
      #include <GLFW/glfw3.h>
      #include <time.h>
      #include "Timer.h"

      namespace Game
      {
          Clock::Clock()
              : mTicksPerSecond(50),
                mSkipTics(1000 / mTicksPerSecond),
                mMaxFrameSkip(10),
                mLoops(0)
          {
              mLastTick = tick();
          }

          Clock::~Clock()
          {
          }

          // Fixed-timestep check: true while the clock is behind schedule
          // and the frame-skip limit hasn't been hit.
          bool Clock::step()
          {
              if (tick() > mLastTick && mLoops < mMaxFrameSkip)
                  return true;
              return false;
          }

          void Clock::reset()
          {
              mLoops = 0;
          }

          // Advance the schedule by one tick interval.
          void Clock::update()
          {
              mLastTick += mSkipTics;
              mLoops++;
          }

          clock_t Clock::tick()
          {
              return clock();
          }
      }
      Timer.h
      #pragma once
      #include "Common.h"

      namespace Game
      {
          class Clock
          {
          public:
              Clock();
              ~Clock();

              void update();
              bool step();
              void reset();
              clock_t tick();

          private:
              uint mTicksPerSecond;
              ufloat mSkipTics;
              uint mMaxFrameSkip;
              uint mLoops;
              uint mLastTick;
          };
      }
      Common.h
      #pragma once
      #include <cstdio>
      #include <cstdlib>
      #include <ctime>
      #include <cstring>
      #include <cmath>
      #include <iostream>

      namespace Game
      {
          typedef unsigned char uchar;
          typedef unsigned short ushort;
          typedef unsigned int uint;
          typedef unsigned long ulong;
          typedef float ufloat;
      }
      Game.zip
    • By lxjk
      Hi guys,
      There are many ways to do light culling in tile-based shading. I've been playing with this idea for a while, and just want to throw it out there.
      Because tile frustums are generally small compared to the light radius, I tried using a cone test to reduce the false positives introduced by the commonly used sphere-frustum test.
      On top of that, I use the distance to the camera rather than depth for the near/far test (i.e. slicing by spheres).
      This method can be naturally extended to clustered light culling as well.
      The following image shows the general idea:

      [image]

      Performance-wise, I get around a 15% improvement over the sphere-frustum test. You can also see how a single light performs in the image below: from left to right, (1) standard rendering of a point light, then the tiles that passed (2) the sphere-frustum test, (3) the cone test, and (4) the spherical-sliced cone test.

      [image]
      I put the details in my blog post (https://lxjk.github.io/2018/03/25/Improve-Tile-based-Light-Culling-with-Spherical-sliced-Cone.html), GLSL source code included!
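
      If you just want the gist, here's a simplified standalone C++ sketch of the two tests (illustrative names and cone construction only, not the code from the blog post; the real version is in GLSL):

      #include <cmath>

      // Minimal vector helpers for the sketch.
      struct Vec3 { float x, y, z; };
      static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

      // A per-tile cone with its apex at the camera (the view-space origin).
      // The axis points through the tile center; the half-angle is chosen just
      // wide enough to contain the tile's four corner rays.
      struct TileCone {
          Vec3  axis;          // unit vector from the eye through the tile center
          float cosHalfAngle;
          float sinHalfAngle;
      };

      // Cone test: true if the light's bounding sphere touches the tile cone.
      // 'center' is the light position in view space, 'radius' the light radius.
      // Points behind the apex are handled conservatively (may pass, never
      // wrongly rejected), which is fine for culling.
      bool sphereIntersectsCone(Vec3 center, float radius, const TileCone& cone)
      {
          float along  = dot(center, cone.axis);                   // distance along the axis
          float perpSq = dot(center, center) - along * along;
          float perp   = std::sqrt(perpSq > 0.0f ? perpSq : 0.0f); // distance from the axis
          // Signed distance from the sphere center to the cone's surface.
          float distToSurface = perp * cone.cosHalfAngle - along * cone.sinHalfAngle;
          return distToSurface < radius;
      }

      // Spherical slicing: compare the light's distance to the camera, not its
      // view-space depth, against the slice's near/far bounds.
      bool sphereInDistanceSlice(Vec3 center, float radius, float sliceNear, float sliceFar)
      {
          float dist = std::sqrt(dot(center, center));
          return dist + radius >= sliceNear && dist - radius <= sliceFar;
      }

      Compared with a plane-based sphere-frustum test, the cone rejects lights near the tile's corners, which is exactly where most of the sphere-frustum false positives come from.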
       
      Eric
    • By Fadey Duh
      Good evening everyone!

      I was wondering: is there an equivalent of GL_NV_blend_equation_advanced for AMD?
      Basically, I'm trying to find a more compatible version of it.

      Thank you!
    • By Jens Eckervogt
      Hello guys,

      Please tell me: why doesn't my Wavefront model show up for me, and how can I find out what's wrong?
      I've already checked, and I have no errors so far.
      using OpenTK;
      using System.Collections.Generic;
      using System.IO;
      using System.Text;

      namespace Tutorial_08.net.sourceskyboxer
      {
          public class WaveFrontLoader
          {
              private static List<Vector3> inPositions;
              private static List<Vector2> inTexcoords;
              private static List<Vector3> inNormals;

              private static List<float> positions;
              private static List<float> texcoords;
              private static List<int> indices;

              public static RawModel LoadObjModel(string filename, Loader loader)
              {
                  inPositions = new List<Vector3>();
                  inTexcoords = new List<Vector2>();
                  inNormals = new List<Vector3>();
                  positions = new List<float>();
                  texcoords = new List<float>();
                  indices = new List<int>();
                  int nextIdx = 0;

                  using (var reader = new StreamReader(File.Open("Contents/" + filename + ".obj", FileMode.Open), Encoding.UTF8))
                  {
                      string line = reader.ReadLine();
                      int i = reader.Read();
                      while (true)
                      {
                          string[] currentLine = line.Split();
                          if (currentLine[0] == "v")
                          {
                              Vector3 pos = new Vector3(float.Parse(currentLine[1]), float.Parse(currentLine[2]), float.Parse(currentLine[3]));
                              inPositions.Add(pos);

                              if (currentLine[1] == "t")
                              {
                                  Vector2 tex = new Vector2(float.Parse(currentLine[1]), float.Parse(currentLine[2]));
                                  inTexcoords.Add(tex);
                              }
                              if (currentLine[1] == "n")
                              {
                                  Vector3 nom = new Vector3(float.Parse(currentLine[1]), float.Parse(currentLine[2]), float.Parse(currentLine[3]));
                                  inNormals.Add(nom);
                              }
                          }

                          if (currentLine[0] == "f")
                          {
                              Vector3 pos = inPositions[0];
                              positions.Add(pos.X);
                              positions.Add(pos.Y);
                              positions.Add(pos.Z);

                              Vector2 tc = inTexcoords[0];
                              texcoords.Add(tc.X);
                              texcoords.Add(tc.Y);

                              indices.Add(nextIdx);
                              ++nextIdx;
                          }

                          reader.Close();
                          return loader.loadToVAO(positions.ToArray(), texcoords.ToArray(), indices.ToArray());
                      }
                  }
              }
          }
      }
      I have also tried another method, but it still doesn't show. I'm getting frustrated, because no OpenTK developers will help me.
      Please help me figure out how to fix this.

      My download (mega.nz) should be the original, but I tried it with no success...
      - I have tried adding the blend source and the PNG file here, again and again...

      PS: Why is our community not active? I have been waiting a very long time. Stop lying to me!
      Thanks!
OpenGL Video cards...

Hi, I'm not sure this is the right place... is there a hardware/product section on GameDev? If it's not against the forum rules, I'd like to ask your advice.

It's time for a new video card, but I simply don't know what to choose. I'm not really up to date on the new hardware and prices. I don't need TV-out and that kind of stuff, just a powerful video card that is capable of running new shaders and effects like HDR. I don't know if this is true, but people say that nVidia has better support for OpenGL, while ATI does a better job with Direct3D. I'm programming in OpenGL, and I was thinking about a price around 250 euro (I don't know exactly, but that's 280/300 dollars or so). I was thinking about a GF6800 GT, but there are different types and manufacturers (and thus different prices), and maybe ATI has a better deal for the same price? What would be a good choice with this budget? Or maybe someone knows a good site with all kinds of comparisons?

BTW, I guess all these new cards need a nuclear power plant to run. I have a stunning 300W PSU right now. A new PSU isn't cheap either, so it would be good to know if there are still cards that can run on my current PSU, as long as the performance difference isn't too big. The same goes for cooling: are the onboard coolers on these cards enough, or do I need to move to the North Pole to prevent overheating?

Greetings,
Rick

Not exactly. It depends a lot on the rest of the system, and those "requirements" are extra high because NVidia and ATI want to cover their asses and be absolutely sure people have good enough PSUs.

My brother recently bought an Athlon 64 3500+ and Geforce 7800 GT, and hooked it up to his old no-name 350W PSU. Runs without a hitch.

Edit: But of course you should have a decent-quality PSU no matter what. I'm just saying the wattage they usually claim to require is overstated.

[Edited by - Spoonbender on November 3, 2005 2:55:38 PM]

Thanks for the info! I'm still a little worried about the power issue, though: my current card pops up messages saying that there isn't enough power. I ripped everything out of my computer so that only the basic hardware is mounted: two hard drives, a disk station, a DVD reader, the motherboard, RAM, and a sound card. But there were still problems.

I think it has to do with my video card's fan. The stupid thing couldn't rotate anymore, so I think the card was wasting power trying to get it going. I disabled the fan and put a large one next to the open case. It works better now, but when running Quake 4, everything still gets f@#$ed up. I dunno; it's worth a try, and if all modern cards require relatively much power anyway, it's time to buy a new PSU, I think.

Rick

Guest Anonymous Poster
More important than the wattage of your power supply is its quality. A high-quality power supply will deliver plenty of power at each of the different voltages required, and the power will be 'clean' (i.e. constant DC with very little noise). A low-quality supply will deliver too much power at the voltages that aren't used much and not enough at the voltages that are used a lot, and its voltages can stray quite far from the proper values. The standards leave a lot of room for differences; I'm not sure of the actual limits, but for example +5V might really be allowed to range from +4V to +6V. A high-quality unit will stay around 4.9 to 5.1, varying only slightly with temperature, line power, and so on, while a low-quality one will wander randomly between 4.5 and 6.5. On top of that, the power a low-quality unit provides will have a lot of noise and small spikes and dips.

Both will work 'fine' for low-demand machines (though low-quality supplies put some strain on the electronics). For high-demand machines, you need either a very high-wattage low-quality part (if it says it can supply 500W, you can probably count on it for maybe 400W as measured with a decent setup) or a decently rated high-quality part (if it says 400W, you can count on it providing 400W).

Most systems don't need a lot of power but use high-wattage, low-quality supplies because they're cheaper. If you can afford it, a quality power supply is a good investment, because you can keep it in your newest machine whenever you upgrade and get less expensive parts to take its place in your old machines (since you probably don't care as much about them).

You can find decent reviews of power supplies on many hardware websites, such as Anandtech.com:
http://search.anandtech.com/search?q=power+supply&site=atweb_collection&client=atweb_collection&proxystylesheet=atweb_collection&output=xml_no_dtd

-Extrarius

Everyone should be careful about saying things like:

"X is the best mainstream card there is."

This is up for serious debate. Better to say, "I've used X, and it's worked well for me, but lots of people have nice things to say about Y as well."

Personally, I think if you want to get serious, you need more than one development machine and more than one brand of video card. Maybe NVIDIA does better with OpenGL, but wouldn't that mean that if you made your game sing on ATI first, it would be easy enough to adjust for NVIDIA, and then you would maximize your market?

I don't know... just saying that there is more than a single point of view here, and brand loyalty is a sure way to miss out on important views.

Both the X700 and the 6600GT run under $200. Both seem reasonable to me.

Thanks everybody! I think I've made up my mind. I'll just try it with the current psu, if it fails, I can always buy a new one (better quality).

@owiley:
My current video card is a Sparkle GeForce FX 5700 Ultra. Apart from the fan, it still works fine; with AA disabled and at 800x600, most games run pretty well. I could buy a new fan, but I think I'll go for a new card instead. My dad needs a video card, and I'd like to check out the HDR stuff :)

Greetings,
Rick

Cards for around $250 that seem attractive (assuming AGP here):
GIGABYTE GV-R80L256V Radeon X800XL 256MB 256-bit GDDR3 VIVO AGP 4X/8X Video Card ($252)
ELSA GLADIAC 940GT Geforce 6800GT 256MB 256-bit GDDR3 AGP 4X/8X Video Card ($279)
I'd suggest you poke around the ATI offerings, since the NV offerings are very slim in this price bracket -- the 6800GT is the only option you have, and it's on the upper end of the range.
