
DerTroll

Member
  • Content Count

    35
  • Joined

  • Last visited

  • Days Won

    1

DerTroll last won the day on April 25

DerTroll had the most liked content!

Community Reputation

125 Neutral

About DerTroll

  • Rank
    Member

Personal Information

  • Interests
    DevOps
    Education
    Production
    Programming
    QA


  1. Hint: The maximum value of an unsigned 8-bit integer is 255. So which value do you get if you calculate 255/256? Probably not 1, more like 0.996. So divide by 255. Greetings
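    For illustration, a minimal sketch of the fix (the function and variable names are made up):

        #include <cstdint>

        // Map an 8-bit color channel to the range [0, 1].
        float Normalize(std::uint8_t channel)
        {
            // 255 / 255.0f == 1.0f, so a fully saturated channel maps to exactly 1.
            // Dividing by 256.0f would top out at 255 / 256.0f ~= 0.996 instead.
            return channel / 255.0f;
        }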
  2. DerTroll

    GLFW Callbacks From Inside Class/Struct

    Don't know, since it is not my code; I just quoted it. But that is not really relevant here. The point is that you create a lambda function which fulfills the interface requirements and call your member function inside it. In your case it could look like this (might contain coding errors ;p):

        void GLFW::SetFrameBufferSize(GLFWwindow* window, int h, int w)
        {
            // Resize window
        }

        void GLFW::SetFrameBufferSizeCallBack()
        {
            auto myCallbackFunction = [this](GLFWwindow* window, int h, int w)
            {
                this->SetFrameBufferSize(window, h, w);
            };

            // Beware: a lambda that captures something (here: this) does not
            // convert to the plain function pointer type GLFWframebuffersizefun,
            // so this line only compiles if the capture list is empty.
            glfwSetFramebufferSizeCallback(m_Window, myCallbackFunction);

            /* Passing the member function directly fails: argument of type
               "void (GLFW::*)(GLFWwindow*, int, int)" is incompatible with
               parameter of type "GLFWframebuffersizefun" */
            glfwSetFramebufferSizeCallback(m_Window, GLFW::SetFrameBufferSize);
        }

    Not sure if the this pointer is captured correctly here. I remember that whenever I captured it, I did something wrong and needed to fix it ;). As I said in my previous post, the approach might not work at all, since your lambda function has a state (the pointer to your GLFW class). GLUT especially didn't like that; I can't remember if GLFW is more tolerant in this case. If not, you need an external function which gets you the pointer to the corresponding instance of your class. Since my window class is a singleton, I can just call the instance function to get the pointer. Greetings
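    A common workaround with GLFW is to stash the instance pointer in the window itself and recover it inside a capture-free lambda. A minimal sketch, reusing the class and member names from above (still placeholders):

        #include <GLFW/glfw3.h>

        class GLFW
        {
        public:
            void SetFrameBufferSizeCallBack()
            {
                // Store a pointer to this instance inside the GLFW window object.
                glfwSetWindowUserPointer(m_Window, this);

                // A capture-free lambda converts to a plain function pointer,
                // so GLFW accepts it; inside, recover the instance and forward.
                glfwSetFramebufferSizeCallback(m_Window,
                    [](GLFWwindow* window, int w, int h)
                    {
                        auto* self = static_cast<GLFW*>(glfwGetWindowUserPointer(window));
                        self->SetFrameBufferSize(window, w, h);
                    });
            }

            void SetFrameBufferSize(GLFWwindow* window, int w, int h)
            {
                // Resize the viewport here.
            }

        private:
            GLFWwindow* m_Window = nullptr;
        };

    This avoids both the stateful lambda and the singleton requirement.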
  3. DerTroll

    GLFW Callbacks From Inside Class/Struct

    @calioranged: You should really try this first. C++11 lambda functions are really nice if you have to squeeze arbitrary functions through strict interfaces. However, it doesn't always work. I remember having issues with some GLUT callbacks where my function was not accepted because it depended on a runtime state.
  4. DerTroll

    I am looking for partners

    Not sure if this is some kind of troll post, but if not, do yourself a favor and remove your phone number. Interested people will contact you via PM.
  5. DerTroll

    I need ideas

    Maybe it is better for you to join one of the projects that are looking for help. Don't get me wrong, but I don't see the point in programming your own game if you have no idea what kind of game you are interested in. What will keep you motivated if you have no vision of your own for the goal you want to achieve? Greetings
  6. DerTroll

    Optimizing OpenGL rendering

    I am not a hardware expert, but a GeForce 900-series card should easily handle this scene. Are you using a laptop? If it has an integrated GPU, maybe your program is just using the integrated chip instead of your GeForce. I had a similar problem a long time ago. Open the Nvidia control panel, search for your executable, and enforce hardware acceleration. If that's not the problem, I have no clue at the moment. Greetings.
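    If it does turn out to be the integrated GPU, there is also a programmatic route on Windows: NVIDIA's Optimus driver looks for an exported NvOptimusEnablement symbol in the executable and prefers the discrete GPU when it is set to 1. A sketch (Windows/NVIDIA only; AMD has the analogous AmdPowerXpressRequestHighPerformance export):

        #include <windows.h>

        // Exported global that tells the Optimus driver to use the discrete GPU.
        extern "C"
        {
            __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
        }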
  7. Thanks, this is a useful link. OGL versions and the maximum number of threads are not a problem in my case.
  8. Good point. I am not targeting architectures with SSE support below 4.2, but I have a #define that switches to AVX2 if it is available. So in this case, compiling two executables is probably the easiest way. I am not sure it will stay that simple in the future, though; the purpose of this discussion was to find possible solution strategies in case it ever becomes a problem. I use Travis CI to test my code, and I think as long as the code runs on those machines with -march=native, I am fine. I won't aim to support systems with older hardware. Greetings
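    For reference, a sketch of what such a compile-time switch can look like with GCC/Clang, which define __AVX2__ and __SSE4_2__ when the corresponding -m flags (or -march=native) enable those instruction sets (the header and type names are made up):

        // Picked once at compile time, based on the flags this binary was built with.
        #if defined(__AVX2__)
            #include "register_avx2.h"     // hypothetical AVX2 implementation
            using ActiveRegister = RegisterAVX2;
        #elif defined(__SSE4_2__)
            #include "register_sse42.h"    // hypothetical SSE 4.2 implementation
            using ActiveRegister = RegisterSSE42;
        #else
            #include "register_serial.h"   // hypothetical scalar fallback
            using ActiveRegister = RegisterSerial;
        #endif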
  9. Absolutely right. Even though branches on the CPU are not as disastrous as on the GPU, I try to avoid them as much as possible. Every time I benchmark code, avoiding branches or moving them to a level that minimizes their execution yields the highest speedup.

    I wrote my own linear algebra library based on a flexible SSE/AVX implementation. It taught me a lot about vectorization, so I use it whenever I can. The auto-vectorization of GCC and Clang yields quite good results, but only where a handwritten implementation would be easy anyway. For example, I recently wrote an SSE-accelerated Gaussian elimination algorithm for dense matrices, which I think is close to maximum efficiency for the matrix sizes I am aiming for. I compared it to a non-SSE version of the code and to a popular linear algebra library. At first I was shocked that the gain of my handwritten SSE version was less than 10% compared to the simple implementation. Then I realized that this was only the case when the matrix size was a multiple of the register size; if not, the auto-vectorization compared rather poorly.

    However, back to the topic: since I also rely on auto-vectorization (you never know what the compiler can further optimize), I guess choosing between differently compiled versions of dynamic libraries at runtime would be the way to go for me, because I can't tell the compiler for each code section which auto-vectorization level it has to apply. I am still not sure how to select a dynamic library at run-time; I have never done this before. On the other hand, there is still the option of compiling different executables. Greetings
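    One possible way to do that run-time selection on Linux, sketched with POSIX dlopen (the library names and the RunEngine entry point are made up; link with -ldl): build the same code twice with different flags, then load the matching variant once the CPU features are known.

        #include <dlfcn.h>
        #include <cstdio>

        int main()
        {
            // __builtin_cpu_supports is a GCC/Clang builtin that queries CPUID.
            const char* libName = __builtin_cpu_supports("avx2")
                                      ? "./libengine_avx2.so"
                                      : "./libengine_sse42.so";

            void* handle = dlopen(libName, RTLD_NOW);
            if (!handle)
            {
                std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
                return 1;
            }

            // Look up a C entry point that both library variants export.
            using EntryPoint = void (*)();
            auto runEngine = reinterpret_cast<EntryPoint>(dlsym(handle, "RunEngine"));
            if (runEngine)
                runEngine();

            dlclose(handle);
            return 0;
        }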
  10. DerTroll

    Optimizing OpenGL rendering

    Are you using branch statements (if) in your shader code? Try to measure the execution times of several parts of your program to narrow down the OpenGL API call that is responsible for the slowdown.
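    One way to take those measurements on the GPU side is an OpenGL timer query (core since GL 3.3, otherwise ARB_timer_query). A sketch of bracketing a suspect section inside an active GL context:

        GLuint query;
        glGenQueries(1, &query);

        glBeginQuery(GL_TIME_ELAPSED, query);
        // ... the draw calls you want to measure ...
        glEndQuery(GL_TIME_ELAPSED);

        // Blocks until the result is available; fine for a profiling session,
        // but don't leave synchronous readbacks like this in shipping code.
        GLuint64 elapsedNs = 0;
        glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsedNs);
        printf("GPU time: %.3f ms\n", elapsedNs / 1.0e6);

        glDeleteQueries(1, &query);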
  11. I also thought about something like that, but I wasn't sure if it is a good solution. So basically, if one reduces it to a single runtime switch, I would do something like this (pseudocode)?

        if (hasAVX)
            MainFunctionAVX();
        else if (hasSSE)
            MainFunctionSSE();
        else
            MainFunctionSerial();

    As long as my processor without AVX support never takes the branch of the AVX implementation, the program will not crash even though the binary contains a branch with compiled AVX code?
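    A concrete version of that switch, assuming GCC or Clang (the MainFunction* names are the placeholders from above). The key point is that only the files implementing MainFunctionAVX may be compiled with -mavx, so no AVX instruction can leak onto the common path:

        // dispatch.cpp -- compile WITHOUT -mavx so this file runs on any x86-64 CPU.
        void MainFunctionAVX();      // defined in a file compiled with -mavx
        void MainFunctionSSE();      // defined in a file compiled with -msse4.2
        void MainFunctionSerial();   // scalar fallback

        int main()
        {
            // __builtin_cpu_supports checks CPUID at run time.
            if (__builtin_cpu_supports("avx"))
                MainFunctionAVX();         // never reached on CPUs without AVX
            else if (__builtin_cpu_supports("sse4.2"))
                MainFunctionSSE();
            else
                MainFunctionSerial();
        }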
  12. Hi there, I am toying around with my own small C++/OpenGL game engine, which I might use to publish some small games in the future. However, one question has bothered me for a while now: how do I make sure that my executable takes full advantage of the client system's hardware if some features need to be known at compile time? For example: my engine supports SSE registers of arbitrary size. If I compile with AVX2 enabled, the code won't run on processors that do not support AVX. If I use just SSE 4.2, I don't utilize the full potential of the processor. How is such a problem usually solved? Do I build an executable/library for every kind of architecture option and select the right one dynamically on the client system? Alternatively, I could build the binaries on the client system during installation, but I am not sure if this is a good way. So how is this usually handled? Greetings
  13. That depends entirely on how well you separated the rendering/GPU-related parts from the rest of your engine. If you put all the OpenGL functionality behind your own set of functions that do nothing more than transfer data to the GPU and issue render calls, then it should be rather easy. On the other hand, if raw OpenGL API calls are spread all over your code, it could be a lot of work.

    EDIT: Regarding your original question: if you are not an experienced programmer, especially concerning graphics programming, you should not base your decision on some benchmark that tells you Vulkan can be 20% faster than OpenGL. The question is: can you utilize Vulkan in a way that is 20% faster than your fastest OpenGL implementation, and do you really need that extra speed? Maybe you should also google "OpenGL AZDO" (approaching zero driver overhead). This is a collection of techniques that try to find the fastest possible way to use OpenGL. If you can say that you fully understand what they are talking about, then you could give Vulkan a try.
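    For illustration, a sketch of the kind of thin abstraction meant in the first paragraph (all names made up): the engine talks only to this interface, and only the implementation contains raw API calls, so swapping OpenGL for Vulkan stays a local change.

        #include <cstdint>

        struct MeshData { /* vertices, indices, ... */ };
        struct Material { /* shader, uniforms, ... */ };
        using MeshHandle = std::uint32_t;

        class Renderer
        {
        public:
            virtual ~Renderer() = default;
            virtual MeshHandle UploadMesh(const MeshData& data) = 0;           // transfer data to the GPU
            virtual void Draw(MeshHandle mesh, const Material& material) = 0;  // issue the render call
        };

        class OpenGLRenderer : public Renderer
        {
        public:
            MeshHandle UploadMesh(const MeshData&) override { /* glGenBuffers, glBufferData, ... */ return 0; }
            void Draw(MeshHandle, const Material&) override { /* glBindVertexArray, glDrawElements, ... */ }
        };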
  14. DerTroll

    Best way to do frustum culling?

    Hi there, I have not implemented frustum culling myself so far, so I can't give you professional advice on that. But I am currently working on my physics engine, which also involves collision detection, which is basically the same problem. However, I am just starting, so don't take everything I say as the truth.

    If you are not using vector tricks like the dot and cross product, I would avoid the angle approach, since it usually involves sin and cos functions, which are quite expensive. On the other hand, calculating the new frustum planes is a negligible operation speed-wise. In camera space they are fixed: calculate them once, determine the camera-to-world transformation matrix every frame, and use it to transform the planes into world space. We are talking about less than 10 nanoseconds on modern, vectorized hardware. Compared with the huge number of subsequent culling checks, you should not care about that.

    Now, having the planes, which describe a volume, there are effective collision check methods. Just google a little bit or buy a good book. I think the most important thing is that you have an efficient data structure that reduces the cost of each collision check to a minimum without losing precision. Two things that people usually use are space partitioning (what you mentioned too) and bounding volume hierarchies. Space partitioning bundles multiple objects into smaller groups (octrees, BSP, etc.) and can save you a lot of collision checks, while overdoing it has the opposite effect; finding the right balance is key. Using bounding volume hierarchies means that each object has more than one bounding volume. Every collision check starts with the least complex bounding volume and only continues with a more complex one if the previous check has passed. This way you get high precision without losing too much speed to multiple tests on the same object, since the number of objects that need a full, detailed test is usually much lower than the number that fail the first simple test.

    For your frustum check, the whole thing could look like this (see the sketch after this list):

    - Check if a group bounding sphere is completely inside or completely outside the frustum (no intersection with the planes). If inside, everything needs to be drawn and no further checks are needed; if outside, nothing needs to be drawn and no further checks are needed.
    - If it is partially inside, check every member and subgroup of the group the same way as before.
    - For objects that are on the boundary, perform a full collision check to determine whether they are partially inside or not.

    If you are planning to have a detailed collision system in your engine, you basically get everything you need for effective frustum culling for free. So think about that. Greetings
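    A sketch of the sphere-versus-frustum test from the first bullet (types and names made up). Each plane stores a unit normal pointing into the frustum, so a positive signed distance means "on the inside of this plane":

        #include <array>

        struct Vec3 { float x, y, z; };
        struct Plane { Vec3 normal; float d; };  // normal must be normalized

        enum class CullResult { Outside, Inside, Intersecting };

        float SignedDistance(const Plane& p, const Vec3& c)
        {
            return p.normal.x * c.x + p.normal.y * c.y + p.normal.z * c.z + p.d;
        }

        CullResult TestSphere(const std::array<Plane, 6>& frustum, const Vec3& center, float radius)
        {
            bool intersecting = false;
            for (const Plane& plane : frustum)
            {
                const float dist = SignedDistance(plane, center);
                if (dist < -radius)
                    return CullResult::Outside;  // completely behind one plane: cull
                if (dist < radius)
                    intersecting = true;         // straddles this plane
            }
            return intersecting ? CullResult::Intersecting : CullResult::Inside;
        }

    Inside draws everything, Outside draws nothing, and Intersecting descends into the group's members, as described above.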
  15. No, it doesn't. It's actually doing the right thing, since your ball can only move along one axis, the z-axis, and this is not affected by any rotations, as you said yourself. So the rotations have no effect on your "physical world", but on your camera. This means that if you start moving the ball and you rotate the camera, the ball should not move in the view direction anymore. It's like circling around a moving car: the car follows a straight road while you change the angle you view it from. So why does the ball always keep the same position in front of the camera? Probably because you messed up your render transformations. Additionally, if you look closely during the rotation phase, you can observe that the ball does not stay at the same point when you rotate.

    The problem is the ModelMatrix in combination with your ViewMatrix:

        ModelMatrix = glm::rotate(ModelMatrix, glm::radians(gchange), glm::vec3(0.0f, 1.0f, 0.0f));
        ModelMatrix = glm::translate(ModelMatrix, glm::vec3(-19.5f, 0.0f, -24.5f));
        ProjectionMatrix = glm::perspective(glm::radians(30.f), (float)1366 / (float)768, .001f, 1000.0f);
        glm::mat4 MVP = ProjectionMatrix * ViewMatrix * ModelMatrix;

    First of all, it seems to me that it is not quite clear which matrix is supposed to do what. Generally, a matrix transforms an object from one space to another. That's why I don't like names like ModelMatrix. What does it do? Transform to model space? If yes, from which space? Transform from model space? To which space? I know these terms are common practice, but they are not precise, and beginners easily get confused by them. A model matrix is usually a model-to-world matrix and places everything inside your world.

    What you want here is a single world-to-camera transformation, usually done by a single ViewMatrix. So why is there still a "ModelMatrix" involved? Everything needs to be in world space at this point. A model(-to-world) matrix is an individual matrix for each object in your world and can't be part of a world-to-camera transformation! Even if it is just wrong naming, this should be a single transformation. I don't use GLM, but I think glm::lookAt should already provide your whole world-to-camera matrix; read the docs for that.

    I recommend you read a little more about coordinate transformations to get a better understanding of why and when you do which transformation. Try this tutorial and maybe some more: https://learnopengl.com/Getting-started/Transformations Greetings
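    For illustration, a sketch of the separation described above, assuming GLM (variable names made up): the model matrix places the ball in the world, glm::lookAt builds the complete world-to-camera matrix, and camera rotation never touches the model matrix.

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        glm::mat4 ComputeMVP(const glm::vec3& ballPosition,
                             const glm::vec3& cameraPosition,
                             const glm::vec3& cameraTarget)
        {
            // Per object: model space -> world space.
            glm::mat4 modelToWorld = glm::translate(glm::mat4(1.0f), ballPosition);

            // Per camera: world space -> camera space, in one matrix.
            // Rotating the camera only changes cameraPosition/cameraTarget here.
            glm::mat4 worldToCamera = glm::lookAt(cameraPosition,
                                                  cameraTarget,
                                                  glm::vec3(0.0f, 1.0f, 0.0f));  // up direction

            glm::mat4 projection = glm::perspective(glm::radians(30.0f),
                                                    1366.0f / 768.0f,
                                                    0.1f, 1000.0f);

            return projection * worldToCamera * modelToWorld;
        }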