Styves

Members

  • Content count: 208

Community Reputation

  1808 Excellent

1 Follower

About Styves

  • Rank: Member

Personal Information

  • Interests: Art, Design, Production, Programming
  1. Your "TransformNode" is not a node then; it's an element or component of a node. Just call it a Transform. Also, I imagine each of these derived nodes lives inside your Scene Graph directly, as part of some abstract list, and the parent/child relationship is part of each Node. Do I have that right? What does a call to load a model look like?
  2. You're massively over-complicating and over-engineering this, to the point that even a huge set of paragraphs detailing it is still confusing. And there's your real problem: you haven't figured out what problem you're trying to solve yet, but you're already trying to solve it. That's a big red flag telling you to step back and break down what it is you're really trying to do. Hint: you mentioned it in the first line of your first paragraph.

    Basically, what you seem to want is to store transformations separately from the objects in your scene, so that you can propagate transformations from parents to children without structuring the main objects in a matching hierarchy (they currently live in flat lists for iteration purposes). As a result, you need to link the transformations to their parents/children so that changes propagate to the child transforms, and your objects need access to their corresponding transforms so they can manipulate them or use them for rendering. Hopefully I understood that correctly.

    My first word of advice is to stop trying to literally represent your hierarchy through class/struct objects in code, because that's one of the core causes of your current dilemma. Your nodes should not own their children independently. That's a violation of the single responsibility principle: you're making your nodes responsible not only for their own logic but for managing their children as well. All of your nodes should live in the Scene Graph, which has the responsibility of managing them (see the sketch at the end of this post for one way to lay that out). And stop right now if you're thinking about "visitor patterns" or other even more complicated OOP patterns as a potential solution: this problem is itself the result of a bad design decision motivated by complicated OOP code, and moving forward with any of those patterns will only add more complexity and cause you even more trouble in the long run. They aren't the solution to your problem.

    Objects should be added to the scene at creation time, when their type is known. Anything that needs the type should be done at a time when the type is known. Don't try to move from an abstract object to its concrete form; that's usually a sign you're doing something wrong. About the only time I've ever seen it be OK is in UI code (working with Qt's qobject_cast), and even then it's rarely necessary. Why do you only have the abstract node in this case? If you need the concrete object, get it directly in your script via something like GetModelNode(name). I don't understand why you need to go through the abstract node to access it.

    PS: I think you need to revise your naming scheme. What exactly is a node? You have "TransformNode" and "Node"/"ModelNode"/etc., but only one of them seems to be in this "scene graph" (TransformNode). As far as I understood, your "TransformNode" is the object that's part of the hierarchy, so the owning "Node" object isn't really a node at all; it's just an object containing a TransformNode, right? Having two separate concepts of "Node" in the same paragraph really adds to the confusion.
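    A minimal C++ sketch of the layout described above; the SceneGraph/Transform names, the translation-only transform, and the parent-index scheme are my own illustration, not code from this thread:

        #include <cstdio>
        #include <vector>

        // Translation-only "transform" to keep the sketch short; a real one
        // would store local/world matrices instead.
        struct Transform {
            float local[3] = {0, 0, 0};  // position relative to parent
            float world[3] = {0, 0, 0};  // computed absolute position
        };

        // The scene graph owns every transform in one flat array. The
        // hierarchy is expressed with parent indices, and parents are
        // stored before their children.
        class SceneGraph {
        public:
            int add(int parentIndex, float x, float y, float z) {
                Transform t;
                t.local[0] = x; t.local[1] = y; t.local[2] = z;
                transforms.push_back(t);
                parents.push_back(parentIndex);
                return (int)transforms.size() - 1;  // handle kept by the owning object
            }

            // One linear pass propagates parents to children: no recursion,
            // no virtual calls, no per-node child lists.
            void update() {
                for (size_t i = 0; i < transforms.size(); ++i) {
                    const int p = parents[i];
                    for (int k = 0; k < 3; ++k)
                        transforms[i].world[k] =
                            (p < 0 ? 0.0f : transforms[p].world[k]) + transforms[i].local[k];
                }
            }

            std::vector<Transform> transforms;
            std::vector<int>       parents;  // -1 marks a root
        };

        int main() {
            SceneGraph graph;
            int ship      = graph.add(-1, 10, 0, 0);   // root
            int propeller = graph.add(ship, 0, 2, 0);  // child of the ship
            graph.update();
            std::printf("propeller world x = %f\n",
                        graph.transforms[propeller].world[0]);  // 10.0
        }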
  3. I still don't understand. Can you tell us what the purpose of all of this is? Why do you need to "use non-root nodes" or "get the propeller"? What do you want to do with the nodes? I don't have enough context to wrap my head around the problem; maybe I'm just missing something or misunderstanding what was written prior. In any case, more information will really help.
  4. I hope L.Spiro reads this one. Why do you need to get "some component of some child" if no processing is occurring? What exactly is your goal? What problem are you trying to solve? It's really difficult to give you a straight answer without knowing what the problem domain is.

    Also, if you want to avoid virtual calls but still process the hierarchy the way ApochPiQ mentioned, just break things up into lists of specialized components (light, camera, etc.) and iterate over each list separately, keeping a simple link to the node/transform where necessary (see the sketch below). Don't over-complicate it with OOP. For example, you can sort the nodes in hierarchy order, with parents coming first, and then process them linearly, without ever leaving the current function and without any virtual inheritance. Any special behavior can just be linked to the nodes in some way.
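    A rough illustration of the component-list idea, reusing the flat-transform layout from the sketch in post 2; the Light/Camera types and their fields are hypothetical:

        #include <vector>

        // Specialized components live in their own flat lists. Each keeps a
        // plain index linking it to its transform in the scene graph: no
        // common base class, no virtual dispatch.
        struct Light  { int transformIndex; float color[3]; float radius; };
        struct Camera { int transformIndex; float fovY; };

        struct Components {
            std::vector<Light>  lights;
            std::vector<Camera> cameras;
        };

        // Each system iterates only the list it cares about, in linear order.
        void updateLights(Components& c /*, const SceneGraph& graph */) {
            for (Light& light : c.lights) {
                // Fetch the world transform via light.transformIndex,
                // then do the light-specific work here.
                (void)light;
            }
        }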
  5. If you're considering dynamic_casts or need to convert from base to specialization, then you most likely have a fundamental design problem on your hands and are trying to do something you shouldn't. Do you even need to build a linked hierarchy this way, or can you hand that job to a higher-level construct (the scene graph itself)? What is the purpose of these specializations? Do they have anything in common; can they share an interface? Is this transform node actually necessary, or are you putting in extra effort for a non-existent problem? If the goal is to keep the transform separate from the node, then just use composition and have each node contain a transform that it can plug into the scene graph (sketched below). There's no reason to use inheritance here, IMO.
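    What composition might look like here, reusing the SceneGraph sketch from post 2; the Model type and its members are placeholders of mine:

        // Composition instead of inheritance: the model *has* a transform in
        // the scene graph, identified by a plain handle. It does not inherit
        // from any transform-node base class.
        struct Model {
            int transformIndex = -1;  // handle into SceneGraph::transforms
            // mesh, material, etc. would live here
        };

        // Wire it up at creation time, while the concrete type is known:
        //   Model ship;
        //   ship.transformIndex = graph.add(/*parent*/ -1, 0, 0, 0);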
  6. What you're describing is called a "race condition": two threads racing to write to the same memory. The normal approach is to avoid needing to do it at all, or to use atomic functions (in D3D those would be the Interlocked functions). So if you just want to add values, call InterlockedAdd(buffer, value, oldValue).
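    As a CPU-side analogue in C++ (not the original D3D context), std::atomic's fetch_add plays the same role InterlockedAdd plays in HLSL, making the read-modify-write indivisible:

        #include <atomic>
        #include <cstdio>
        #include <thread>
        #include <vector>

        int main() {
            // With a plain int, four threads doing "counter = counter + 1"
            // would interleave their read-modify-writes and lose increments.
            std::atomic<int> counter{0};

            std::vector<std::thread> threads;
            for (int t = 0; t < 4; ++t)
                threads.emplace_back([&counter] {
                    for (int i = 0; i < 100000; ++i)
                        counter.fetch_add(1);  // the "InterlockedAdd" of the CPU world
                });
            for (std::thread& th : threads)
                th.join();

            std::printf("%d\n", counter.load());  // always 400000 with the atomic
        }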
  7. Bloom

    I agree with you entirely; I'd never advocate a threshold pass for bloom. It can actually make aliasing worse, since raising the contrast via the threshold makes the jagged edges more pronounced before you blur them.
  8. Bloom

    maxest has a point: you can still apply a threshold if that's the look you want; it will affect the result, and that might be desired. I'm just pointing out the current "standard" approach for compositing bloom; you don't need to follow it.
  9. Bloom

    If you want to be technical, yes. But you'll never see it because of the contrast between bright/dark pixels. How do you think bloom happens on real lenses?
  10. Bloom

    It's old practice to do that, but it's a legacy pass from before proper HDR rendering, and it isn't necessary when you work with proper HDR ranges. If you have natural HDR brightness ranges, then only bright areas will bloom when you blend the result in at really low weights. It's energy conserving, since light is never being "added", so it's more correct/realistic (for an image effect, anyway).

    For example, if your HDR pixel has a value of 2.0, your bloom has a value of 64.0, and you blend at a weight of 0.05, the pixel will be 5.1. Another example: if your HDR pixel has a value of 2.0 and your bloom has a value of 3.0, with the same weight (0.05), then your pixel will be 2.05, which in practice is hardly noticeable. Final example: if your HDR pixel has a value of 2.0 and your bloom has a value of 2.0, with the same weight (0.05), then your pixel stays at 2.0, exactly as it was before.

    Keep in mind that a bloom pixel darker than its HDR pixel generally doesn't happen, so cases like HDR at 2.0 and bloom at 1.0 don't really come up: the gaussian blurs you apply will favor bright pixels when weighing the samples, so bright pixels usually prevail (since we're using HDR ranges). There's no need to isolate bright pixels when you can leverage the contrast/ratio between light and dark. Be sure to combine the two images before applying exposure compensation.

    PS: CryEngine and a few other game engines perform bloom this way.
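    A quick check of those three examples, assuming the blend is lerp(HDR, Bloom, Weight) as described:

        #include <cstdio>

        // lerp(a, b, t) = a + t * (b - a). With Bloom == HDR the pixel is
        // unchanged, which is why the blend is energy conserving.
        float lerp(float a, float b, float t) { return a + t * (b - a); }

        int main() {
            std::printf("%.2f\n", lerp(2.0f, 64.0f, 0.05f));  // 5.10: clearly blooms
            std::printf("%.2f\n", lerp(2.0f,  3.0f, 0.05f));  // 2.05: hardly noticeable
            std::printf("%.2f\n", lerp(2.0f,  2.0f, 0.05f));  // 2.00: unchanged
        }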
  11. Bloom

    I'm not sure I understand the questions.
  12. Depth-of-field

    One doesn't "normally use" the blur + blend approach. It looks very unappealing and unrealistic, and doesn't really read as depth of field. A better, and still pretty simple, approach to implement if it's your first time is to perform a simple disk average at full resolution instead of blending with a blurred image, scaling the sample offsets by the blur amount you would otherwise have used for the blending:

        float4 color = 0.0;
        for (int i = 0; i < numSamples; ++i)
        {
            // offsets[] traces out the aperture shape (circle, hexagon,
            // whatever you want); blurScale is the calculated blur strength
            // you would have used to blend.
            float2 sampleUV = screenUV + offsets[i] * blurScale;
            color += Texture.SampleLevel(Sampler, sampleUV, 0);
        }
        color /= numSamples;

    That said, if you need to stick with the blur approach, I'd say do whatever looks good to you. There's no "right answer" when it comes to these things.
  13. OpenGL My first triangle

    You need to call glfwSwapBuffers(window) at the end of your function to present the rendered result to the screen.
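    For reference, a minimal GLFW loop showing where that call belongs; the window setup here is generic, not the original poster's code:

        #include <GLFW/glfw3.h>

        int main() {
            if (!glfwInit())
                return -1;

            GLFWwindow* window = glfwCreateWindow(640, 480, "First triangle", nullptr, nullptr);
            if (!window) {
                glfwTerminate();
                return -1;
            }
            glfwMakeContextCurrent(window);

            while (!glfwWindowShouldClose(window)) {
                glClear(GL_COLOR_BUFFER_BIT);

                // ... draw the triangle here ...

                glfwSwapBuffers(window);  // present the back buffer to the screen
                glfwPollEvents();
            }

            glfwTerminate();
        }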
  14. Bloom

    That's more or less how the simplest implementation works: downsample, blur a few times with different weights, then add all of the results to the final image. But it's not at all accurate, and the threshold stage leads to some weird problems. I don't find the results particularly appealing, especially since doing it this way makes the effect resolution dependent, so bloom at higher resolutions like 4K becomes pretty much unnoticeable.

    Most of the more recent engines don't perform the threshold stage and instead do a more energy-conserving blend between the regular HDR image and the bloom image, something like lerp(HDR, Bloom, Weight). Blending this way works because very bright pixels in the bloom image are still quite visible at low blend weights when they bleed over dark pixels. I usually go for something like 0.05-0.15: too low and bloom only shows up for very, very bright sources; too high and the image becomes muddy and/or blurry. You could probably do some magic effects by controlling that value over regions of the image (lens dirt?).

    Anyway, there are various ways to do the actual blooming effect. Some implementations simply perform a 2-pass blur with special weights and composite that. Others do downsamples, then blur those and add them together at the end. My favorite way to do the blooming/blurring is similar to the approach Bungie used for Halo 3. It produces really nice, natural bloom sizes compared to the others, since it essentially makes use of multi-pass blurs. The general idea is to perform a gaussian blur between the upsampling/downsampling steps, instead of doing all of the blurs first and adding later. It goes something like this (sketched in code below):

      1. Downsample the HDR image to a fixed size. The fixed size avoids resolution-dependent bloom radii; I usually use 512 for the width and 512 * aspectRatio for the height.
      2. Downsample the result a few more times using a wide gaussian filter. In my usual implementation I downsample 2 more times: once to 128 and once to 32 (again with the heights multiplied by the aspect ratio).
      3. After all of them are downsampled, upsample each result and add it to the level above it using a 5x5 gaussian blur filter (e.g. blur the 32 image, upsample to the 128 image, blur the 128 image, upsample to the 512 image, blur the 512 image).
      4. Lerp the 512 texture with the HDR image using some small weight of your choice to decide how much bloom ends up in the image.

    The trick is to make sure your downsample/upsample shaders are optimized. My initial blur pass uses 9 texture samples with the help of some linear filtering tricks; the other blur passes use 5 texture samples with similar tricks. Hardly anything crazy, since most downsample passes + bloom blurs require many samples anyway.
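    Roughly what that chain looks like from the host side; RenderTarget and the four helpers below are hypothetical stand-ins for your renderer's actual shader passes:

        struct RenderTarget { int width = 0, height = 0; };

        // Stand-ins: each would dispatch the corresponding shader pass.
        RenderTarget downsampleGaussian(const RenderTarget&, int w, float aspect) {
            return { w, int(w * aspect) };  // wide gaussian downsample
        }
        void blur5x5(RenderTarget&) {}                           // 5x5 gaussian blur
        void upsampleAdd(const RenderTarget&, RenderTarget&) {}  // upsample src, add into dst
        void compositeLerp(const RenderTarget&, const RenderTarget&, float) {}

        void bloomPass(const RenderTarget& hdr, float aspect) {
            // Steps 1-2: fixed-size downsample chain, 512 -> 128 -> 32.
            RenderTarget b512 = downsampleGaussian(hdr,  512, aspect);
            RenderTarget b128 = downsampleGaussian(b512, 128, aspect);
            RenderTarget b32  = downsampleGaussian(b128,  32, aspect);

            // Step 3: walk back up, blurring between each upsample/add.
            blur5x5(b32);
            upsampleAdd(b32, b128);
            blur5x5(b128);
            upsampleAdd(b128, b512);
            blur5x5(b512);

            // Step 4: energy-conserving blend, lerp(HDR, Bloom, Weight).
            compositeLerp(hdr, b512, 0.1f);
        }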
  15. This is a very dangerous mindset, take it from me. When everyone on your team thinks there are plenty of cycles to spare, everyone forgets to profile, until they realize they don't have the cycles they assumed they did. It leads to a lot of negligence and slack when it comes to profiling your code, and that kind of thinking can lead to a butterfly effect of inefficient code that eventually stacks up into a huge performance hog of a system, one so rigidly locked in by its coding paradigm (OOP) that it becomes extremely expensive and time-consuming to untangle and fix.

    Don't get me wrong, C# has wonderful quality-of-life features, and some of them are very powerful for certain use cases. Use the best tool for the job. But at the end of the day, with C# you lose control over where your performance budget is spent and are locked into its strict OOP paradigm, which can make things harder to optimize or refactor. You don't want memory to move at random during gameplay; it can cause random stalls and performance instability. I wouldn't advise it for performance-critical code like a game engine, where you'll want to keep an eye on your (most likely tight) system resource budgets. There's a reason C++ is still used as the core of game engines even today. C# is a good complement to it, adding flexibility and ease where those are more valuable than raw cycles, but it's not a good fit for tasks that require tight control over system resources.

    Edit: just realized this thread popped back up because of an edit. Hope my post wasn't too late / didn't revive a dead thread.