Styves

Member

  • Content count: 218
  • Joined
  • Last visited

Community Reputation: 1811 Excellent

1 Follower

About Styves

  • Rank
    Member

Personal Information

  • Industry Role
    Programmer
    UI/UX Designer
  • Interests
    Art
    Design
    Production
    Programming
  1. Voxelization cracks

    Minor correction, but Crysis 2 did no such voxelization. You could also try LogLUV; I've had great results using it for HDR storage.
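For reference, a rough sketch of LogLUV-style HDR packing (the widely circulated "NAO32" shader encoding) in Python, with a tuple of four 8-bit channels standing in for an RGBA8 render target. The matrices are the commonly cited ones; the function names are mine:

```python
import math

# LogLuv-style packing: log2 luminance in 16 bits (two channels),
# chromaticity in the other two 8-bit channels. A sketch, not a
# drop-in shader.

M = [[0.2209, 0.3390, 0.4184],
     [0.1138, 0.6780, 0.7319],
     [0.0102, 0.1130, 0.2969]]

INV_M = [[ 6.0013, -2.7000, -1.7995],
         [-1.3320,  3.1029, -5.7720],
         [ 0.3007, -1.0880,  5.6268]]

def _mul(v, m):
    # row vector times 3x3 matrix, matching HLSL's mul(v, M)
    return [sum(v[i] * m[i][j] for i in range(3)) for j in range(3)]

def logluv_encode(rgb):
    """Pack linear HDR RGB into four 8-bit channels (u, v, Le_hi, Le_lo)."""
    x, y, z = (max(c, 1e-6) for c in _mul(rgb, M))
    u, v = x / z, y / z                      # chromaticity
    le = 2.0 * math.log2(y) + 127.0          # log luminance, ~[0, 255]
    le16 = max(0, min(65535, int(le * 257.0 + 0.5)))
    to8 = lambda c: max(0, min(255, int(c * 255.0 + 0.5)))
    return (to8(u), to8(v), le16 >> 8, le16 & 0xFF)

def logluv_decode(p):
    """Recover linear HDR RGB from the packed channels."""
    u, v = p[0] / 255.0, p[1] / 255.0
    le = ((p[2] << 8) | p[3]) / 257.0
    y = 2.0 ** ((le - 127.0) / 2.0)
    z = y / max(v, 1e-6)
    x = u * z
    return [max(c, 0.0) for c in _mul([x, y, z], INV_M)]
```

In practice you'd tune the Le bias/scale to your scene's luminance range; this sketch just shows that an HDR color survives the 8-bit round trip to within a percent or two.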
  2. AFAIK the view matrix and projection matrix are constructed in tr_main.c (R_RotateForViewer and R_SetupProjection). If I'm not mistaken, the actual matrix used during world rendering was just backEnd.viewParms.world.modelMatrix. I haven't looked at the code in a long while, though; if I remember more I'll come back and post an update.
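From memory, what an R_RotateForViewer-style function effectively computes is a world-to-view matrix whose rotation rows are the camera's orthonormal basis and whose translation is -origin expressed in that basis. A hypothetical sketch (names mine, not the actual id Tech 3 code):

```python
# Build a world-to-view matrix from a camera origin and its basis vectors.

def view_matrix(origin, axis):
    """axis: three orthonormal camera basis vectors in world space."""
    m = []
    for row in axis:
        # rotate world into camera space, then translate by -origin
        tx = -sum(row[i] * origin[i] for i in range(3))
        m.append([row[0], row[1], row[2], tx])
    m.append([0.0, 0.0, 0.0, 1.0])
    return m

def transform(m, p):
    """Apply the 4x4 matrix to a 3D point (w = 1)."""
    p4 = [p[0], p[1], p[2], 1.0]
    return [sum(m[r][c] * p4[c] for c in range(4)) for r in range(3)]
```

A quick sanity check is that the camera origin maps to the view-space origin, and a point one unit along the first axis maps to (1, 0, 0).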
  3. Now you have me super curious about what part you had to work on.
  4. DX11 FFT on GPU

    Yes, absolutely. Not sure where you heard this. Unless you explicitly ask the GPU to give you the data on the CPU, it'll stay on the GPU. It's an extremely common technique. Every frame you just swap the render target you are rendering to and use the previous one as the new input. It looks something like:

    Frame #1: set Texture0 as render target, bind Texture1 as input, draw shader.
    Frame #2: set Texture1 as render target, bind Texture0 as input, draw shader.
    Frame #3: same as Frame #1.
    Frame #4: same as Frame #2.
    ...

    Hence the name "ping-ponging": you're bouncing between two textures every frame.
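A minimal sketch of that ping-pong loop, with Python lists standing in for the two textures and a trivial "shader" that increments every texel (all names illustrative):

```python
# Two buffers alternate roles each frame: one is the render target,
# the other is the previous frame's result bound as input.

def run_ping_pong(frames, width=4):
    textures = [[0.0] * width, [0.0] * width]   # Texture0, Texture1
    target = 0                                  # which texture we render into
    for _ in range(frames):
        src = textures[1 - target]              # previous frame's result
        dst = textures[target]
        for i in range(width):                  # "draw shader"
            dst[i] = src[i] + 1.0
        target = 1 - target                     # swap roles for next frame
    return textures[1 - target]                 # last texture written
```

After N frames every texel holds N, showing the state accumulating across frames without ever leaving "GPU memory".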
  5. 3D 3rd and 1st person arms

    Look at the way Mirror's Edge handled this. They hand-tailored animations so that they looked good in first person (and severely twisted in third; you can see some examples on YouTube). You could do something similar: same rig and model, but a separate set of animations tailored for a good-looking first-person experience.
  6. You could probably get away with this with some depth output tweaks, but I see what you mean, and I don't think it'd be worth the hassle. One option is to do the same thing Hodgman suggested, but manually, using shaders and MRT rather than the stencil buffer. Done this way you have full control over where this "occlusion" should be. You don't need a complex framework of several render targets to do this unless you want to cache those results somewhere, and I wouldn't particularly suggest that since you'll have to fight resolution mismatch with the main image. A better approach might be a single full-screen temporary render target that you render each object into: use scissor + viewport + clear between object draws to clean up the region the next object will be rendered to, and composite that region back onto the main render target after the sprites for that object are drawn.
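To illustrate, a toy version of that single-temporary-target scheme on a 1D "framebuffer", where clearing and compositing a [start, end) range stands in for the GPU scissor + viewport + clear and the blit back to the main target (all names mine):

```python
# Each object gets the same temporary target: its region is cleared,
# the object (and its sprites) are drawn, then only that region is
# composited back onto the main target.

def render_objects(main, objects, clear_value=0):
    temp = [clear_value] * len(main)
    for start, end, value in objects:        # each object covers [start, end)
        for i in range(start, end):          # scissored clear of the region
            temp[i] = clear_value
        for i in range(start, end):          # draw the object + its sprites
            temp[i] = value
        for i in range(start, end):          # composite the region back
            main[i] = temp[i]
    return main
```

Because the clear is scissored to the next object's region, overlapping objects never see each other's leftovers in the temporary target.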
  7. If you want "occlusion" from top-most sprites, then couldn't you just have the transparent sprites write to the depth mask and let depth-testing take care of it? The second sphere would then automatically depth-fail against the previous sprites, giving the result in #2.
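A toy illustration of that idea, assuming a standard less-than depth test: the transparent sprite still writes depth, so a later sprite automatically depth-fails wherever it lies behind one (buffers are flat lists; all names mine):

```python
# Depth-tested sprite drawing. Writing depth for transparent sprites is
# what makes later draws "occluded" with no extra bookkeeping.

def draw_sprite(color_buf, depth_buf, pixels, color, depth):
    for i in pixels:
        if depth < depth_buf[i]:     # less-than depth test
            color_buf[i] = color
            depth_buf[i] = depth     # write depth even for transparent sprites
```

Drawing a near sprite first and a farther sphere second leaves the sphere visible only where the near sprite didn't cover it.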
  8. SSAO using Opengles2

    Yeah, 1/width of the depth buffer: you want to take the delta between neighboring pixels of the source image. And no, SSAO should definitely not be causing this sort of aliased outline. As a side note, generating a normal buffer from depth this way results in faceted/flat surfaces rather than smooth ones (so spheres will look like bundles of triangles instead of smooth spheres). This can be a problem, so if it's not something desirable for you, consider outputting a thin G-buffer or not using normals in your SSAO approach.
  9. SSAO using Opengles2

    At first glance I'd say the offsets in your "normal_from_depth" function are wrong: they should be at the scale of a texel, i.e. 1/resolution, not hardcoded to 0.001. With your current value each sample is too far from the original pixel, which can cause this sort of edge-filter artifact. I'd start there and see if it fixes the issue.
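As a hedged sketch of what a normal_from_depth with texel-sized offsets could look like (depth here is any callable that samples the depth image in UV space; this is illustrative, not the poster's actual shader):

```python
# Reconstruct a normal from central depth differences taken exactly
# one texel apart, i.e. offsets of 1/resolution in UV space.

def normal_from_depth(depth, u, v, width, height):
    texel_u, texel_v = 1.0 / width, 1.0 / height
    dzdx = (depth(u + texel_u, v) - depth(u - texel_u, v)) / (2.0 * texel_u)
    dzdy = (depth(u, v + texel_v) - depth(u, v - texel_v)) / (2.0 * texel_v)
    n = (-dzdx, -dzdy, 1.0)                 # gradient-based normal
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(c / length for c in n)
```

On a flat depth region this yields (0, 0, 1); any depth slope tilts the normal, which is also why the result is faceted rather than smooth, as noted in the previous post.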
  10. How many draw calls do you make per frame?
  11. Your "TransformNode" is not a node then, it's an element or component of a node. Just call it a Transform. Also I imagine each of these derived nodes live inside your Scene Graph directly as part of some abstract list, and the parent/child relationship is part of each Node. Do I have that right? What does a call to load a model look like?
  12. You are massively over-complicating and over-engineering this, to the point that even a huge set of paragraphs detailing it is still confusing. And there's your real problem: you haven't figured out what problem you're trying to solve yet, but you're already trying to solve it. That's a big red flag telling you to step back and break down what it is you're really trying to do. Hint: you mentioned it in the first line of your first paragraph.

    Basically, what you seem to want is to store transformations separately from the objects in your scene so that you can propagate transformations from parents to children without having to structure the main objects in a matching hierarchy, since they currently exist in flat lists for iteration purposes. As a result, you need to link the transformations to their parents/children so that changes propagate to the child transforms, and your objects need access to their corresponding transforms so they can manipulate them or use them for rendering. Hopefully I understood that correctly.

    My first word of advice is to stop trying to literally represent your hierarchy through class/struct objects in code, because that's one of the core causes of your current dilemma. Your nodes should not own their children independently. This is a violation of the single responsibility principle: you're making your nodes responsible not only for their own logic but for managing their children as well. All of your nodes should exist in the scene graph, which has the responsibility of managing those nodes.

    Stop right now if you're thinking about "visitor patterns" or other even more complicated OOP patterns as a potential solution to this problem, which is itself the result of bad design decisions motivated by complicated OOP code. Moving forward with any of these patterns will only add more complexity and cause you even more trouble in the long run. They aren't the solution to your problem.

    Objects should be added to the scene at creation time, when their type is known. Anything that needs the type should be done at a point where the type is known. Don't try to move from an abstract object to its concrete form; this is usually a sign you're doing something wrong. About the only time I've ever seen it be OK is in UI code (working with Qt's qobject_cast), and even then it's rarely necessary. Why do you only have the abstract node in this case? If you need the concrete object, get it directly in your script via something like GetModelNode(name). I don't understand why you need to go through the abstract node to access it.

    PS: I think you need to revise your naming scheme. What exactly is a node? You have "TransformNode" and "Node"/"ModelNode"/etc., but only one of them seems to be in this "scene graph" (TransformNode). As far as I understood, your "TransformNode" is the object that is part of the hierarchy, so the owning "Node" object isn't really a node; it's just an object containing a TransformNode, right? Having two separate concepts of "node" in the same paragraph is really adding to the confusion.
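To make the ownership point concrete, a minimal sketch, assuming a 1D "position" to keep it short: the scene graph owns every node in a flat list, each node holds a Transform by composition rather than inheritance, and parent links are plain indices (all names illustrative):

```python
# The SceneGraph, not the nodes, is responsible for node lifetime and
# structure; nodes only hold their own data.

class Transform:
    def __init__(self, x=0.0):
        self.local_x = x          # 1D "position" keeps the sketch short
        self.world_x = x

class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent      # index into SceneGraph.nodes, or None
        self.transform = Transform()

class SceneGraph:
    def __init__(self):
        self.nodes = []           # flat, graph-owned storage

    def add(self, name, parent=None):
        self.nodes.append(Node(name, parent))
        return len(self.nodes) - 1
```

Callers keep the returned index (or a handle built on it) instead of pointers into a pointer-chased hierarchy, so iteration stays a flat loop over `nodes`.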
  13. I still don't understand; can you tell us what the purpose of all of this is? Why do you need to "use non-root nodes" or "get the propeller"? What is it you want to do with the nodes? I just don't have enough context to wrap my head around the problem. Maybe I'm missing something or misunderstanding what was written prior; in any case, more information will really help.
  14. I hope L.Spiro reads this one. Why do you need to get "some component of some child" if no processing is occurring? What exactly is your goal? What problem are you trying to solve? It's really difficult to give you a straight answer without knowing the problem domain. Also, if you want to avoid virtual calls but still process the hierarchy the way ApochPiQ mentioned, then just break things up into lists of specialized components (light, camera, etc.) and iterate over each list separately, keeping a simple link to the node/transform if necessary. Don't over-complicate it with OOP. For example, you can sort the nodes in hierarchy order, with parents coming first, and then process them in linear order, without even leaving the current function and without the need for any virtual inheritance. Any special behavior can just be linked to the nodes in some way.
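That parents-first idea fits in a few lines: store each node as (parent index or None, local offset) with parents before their children, and one linear loop propagates world transforms with no virtual calls (a toy 1D version; names mine):

```python
# Because parents precede children in the list, each node's parent world
# transform is already final when the node is visited.

def propagate(nodes):
    world = [0.0] * len(nodes)
    for i, (parent, local) in enumerate(nodes):
        world[i] = local if parent is None else world[parent] + local
    return world
```

Replacing the float offset with a matrix multiply gives the real thing; the traversal itself never changes.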
  15. If you're considering dynamic_casts or need to convert from base to specialization, then you most likely have a fundamental design problem on your hands and are trying to do something you shouldn't be. Do you even need to build a linked hierarchy this way, or can you give it to a higher-level construct (scene graph itself)? What is the purpose of these specializations? Do they have anything in common, can they share an interface? Is this transform node actually necessary, or are you putting in extra effort for a non-existent problem? If the goal is to keep transform separate from the node, then just use composition and have each node contain a transform that it can then plug into the scene graph. There's no reason to use inheritance here, IMO.