    1. Today
    2. For performance reasons Sparse Voxel Octrees should not be used. The update just takes too long if you move a light or have moving objects. Volume textures are easier to handle, but updating them takes even longer because you have to invalidate everything. Thus Nvidia uses clipmaps, where moving things doesn't imply the need to revoxelize, I guess. Thanks for the link to The Tomorrow Children though, I'm having a look at it.
    3. Hello Zakwayda, The x and y of the colliders represent the top-left corners of the rectangles, and the opposite corners are given by the width and height. Then, about the movement of the player: I am reading the keyboard like this, and moving 1 pixel (int) in the desired direction. (The method move(xMove, yMove) has the collision code that I posted previously.) @Override public void update() { super.update(); int xMove = 0; int yMove = 0; // Read keyboard events if (keyboard.up) { yMove--; } else if (keyboard.down) { yMove++; } else if (keyboard.left) { xMove--; } else if (keyboard.right) { xMove++; } if (xMove != 0 || yMove != 0) { move(xMove, yMove); } } Thank you so much for your response.
    4. Hello, and thank you for your support! Yes, I linked it to show what I have available, so basically nothing. It also shows what I need to feed to the class to make a mesh appear. What I need is to fill the Mesh.vertices[] and the Mesh.triangles[]. To make my questions more precise: (They mainly revolve around the structure/architecture of data storage) 1. In what fashion do I index the vertices? Is there some "best practice" amongst graphics programmers? 2. Is there a fitting, elegant algorithm already existing that matches the indices of the vertices[]-array to the triangles[]-array? 3. (Partly answered) How do I store the information for later usage in the pathfinder? This is what I got so far (creating a flat hexagon in C#): using UnityEngine; public class D_HexTile : MonoBehaviour { public float mSize = 1f; public Material mMaterial; void CreateHex() { gameObject.AddComponent<MeshFilter>(); gameObject.AddComponent<MeshRenderer>(); Mesh mesh = GetComponent<MeshFilter>().mesh; mesh.Clear(); // === Vertices === Vector3[] vertices = new Vector3[7]; vertices[0] = transform.position; for(int i = 1; i < vertices.Length; i++) { vertices[i] = GetCorner(transform.position, mSize, i - 1); } // === Triangles === int[] triangles = new int[6 * 3]; int triNum = 1; for(int l = 0; l < triangles.Length; l++) { if (l % 3 == 0) { triangles[l] = 0; } else if(l % 3 == 1) { triangles[l] = 1 + (triNum % 6); } else if(l % 3 == 2) { triangles[l] = triNum; triNum++; } } mesh.vertices = vertices; mesh.triangles = triangles; } private Vector3 GetCorner(Vector3 center, float size, int i) { float angleDeg = 60f * i; float angleRad = Mathf.Deg2Rad * angleDeg; return new Vector3(center.x + size * Mathf.Cos(angleRad), center.y, center.z + size * Mathf.Sin(angleRad)); } } I omitted the parts for UVs and normals. I show this to give an example usage of the Mesh class ... Now I have to feed it. So, this is obviously not enough.
GetCorner() will suffice for the hexagon corners, but using the midpoints by doing vector math for subdivided triangles (as you suggested) seems more appropriate. So, now I have to deal with the questions above. I can't seem to wrap my mind around which question to prioritize first. The answer to question 2 depends on how I deal with 1 and 3. If I want the indices stored in a certain way, I will have to traverse the QuadTree / BinaryTree in a certain way (whatever that will look like). The other way around: when I store the subdivided triangles in a certain way, they may predefine the indices of the vertices. I found this idea very inspiring and would like to know more. I will probably need the edges more often than the faces (pathfinding). Do I understand this correctly, when I say the following? I have a List of edges. When it is just the hexagon, I have 12 of them. 6 of those contain data for two triangles, the other 6 have only one triangle (the outer edges). I thereby have 3 references to any one triangle. When I call SubdivideTri() on any triangle, it will tell all three edges to SubdivideEdge() and then ... ... well, this is where my brain locks up ... To the point: @Gnollrunner Could you elaborate on how you did this? How is your data stored? Do the edges contain the corners and the triangles? Why do I need the triangles stored in a QuadTree, when I already have their references in the edges, stored in 6 BinaryTrees? What about the new edges "in between" midpoints? Or is the QuadTree of triangles the "dominant" storage? Do the triangles reference the edges? Do I really need the BinaryTree in that case? Or do you do both, to simplify access? I will try things out a bit and come back to you. Valuable knowledge, thank you for this! Thanks again for your support! Cheers!
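For what it's worth, here is a minimal sketch of the triangle-fan indexing the C# snippet above is building: vertex 0 is the center, vertices 1..6 are the corners, and each triangle is (center, corner k, next corner, wrapping around). Written in Python just to keep the index arithmetic readable; the function name is invented for illustration.

```python
import math

def hex_fan(center, size):
    """Build vertices and triangle indices for a flat hexagon as a triangle fan.

    Vertex 0 is the center; vertices 1..6 are the corners. Each of the six
    triangles references the center plus two adjacent corners.
    """
    cx, cy, cz = center
    vertices = [center]
    for i in range(6):
        a = math.radians(60 * i)
        vertices.append((cx + size * math.cos(a), cy, cz + size * math.sin(a)))
    triangles = []
    for k in range(6):
        # center, current corner, next corner (index 6 wraps back to 1)
        triangles += [0, 1 + k, 1 + (k + 1) % 6]
    return vertices, triangles
```

The same wrap-around pattern generalizes when you subdivide: however the triangles end up being stored, each one is ultimately just three indices into the shared vertex list (and in Unity, the winding order of those three indices decides which side of the face is visible).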
    5. Madman's Journal

      2D and 3D artist needed for horror adventure.

      Please forgive the delay in responding; it definitely was not due to disinterest. If you're looking for experience, this is the right project. I love your work and would be delighted to have you involved. Please drop me a line at bradgoodman@hotmail.com if you're still interested.
    6. Welcome to the dilemma of game programmers everywhere. *Spoiler* You will never have the assets you want. The alternatives:
      - Make better use of the assets you do have available (compromise)
      - Learn to make your own assets
      - Work with artists
      - Or some combination of the above.
      In the long haul, learning to make your own assets is really useful, both for prototypes and maybe final assets, but of course it comes at the expense of programming time. A lot of the trick in game programming, like a good chef, is to make good use of what you've got, rather than spending time wishing you had more ingredients.
    7. Eimantas Gabrielius

      Daily Bonus Logic

      For example - Castle Crush: 7 days; every day from 1 to 6 you get some reward after doing some battles. On day 7 you get a LEGENDARY card, which is a game-changing card. The point is, if you miss at least 1 day of the 7-day week, you continue getting daily rewards, but you cannot get the LEGENDARY card at the week's end.
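That rule can be sketched in a few lines. A hedged Python sketch (function and reward names invented here, not from Castle Crush): daily rewards are granted on any day the player shows up, but the day-7 LEGENDARY requires a perfect day 1-6 streak.

```python
def weekly_rewards(days_played):
    """days_played: set of day numbers (1-7) on which the player logged in.

    Returns a list of (day, reward) actually granted. Daily rewards keep
    coming on any day the player shows up, but the day-7 LEGENDARY card
    requires a perfect 1-6 attendance streak.
    """
    rewards = []
    for day in sorted(days_played):
        if day < 7:
            rewards.append((day, "daily"))
        else:
            # The legendary is only granted if no day of the streak was missed.
            perfect = all(d in days_played for d in range(1, 7))
            rewards.append((day, "LEGENDARY" if perfect else "daily"))
    return rewards
```

The design point is that the streak check is evaluated only once, at the end of the week, so a missed day degrades the final reward without cutting off the ordinary daily drip.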
    8. Well, there are a number of humanoid player characters out there on the free pile, not a lot of animals and whatnot though, a few robots... What exactly are you Not finding? If you give us something more specific, I bet somebody knows where a good prototype asset is to be found. Otherwise, for anything that is highly specialized you might want to pick up some basic Blender/etc. modeling and animation skills. Takes a few weeks to get enough skills to do prototype quality work, but it's well worth the time.
    9. Silence@SiD

      Symbol lookup error on Linux

      And there are no options to enable this when configuring the build? (./configure --help might give you some hints if the project uses autoconf, and config.h is the file where all enabled options get their defines set)
    10. a light breeze

      Which artstyle works best? (3 versions of world-map)

      I don't like the first one. The borders don't look natural at all. If you're going to go with a low-res approximation of the borders, own up to it and go with option 2 instead. Option 3 looks the best. Since you haven't posted any sprites, I can't tell how well it will work with your sprites.
    11. Hi All, I'm a solo programmer. I want to mess around and make a game/prototype. However, I have no art or modeling skills. I poked around the Unity3D asset store (non-exhaustive search), but didn't really find anything that would match what I was going for. Furthermore, gameplay (for me) is tightly tied to animations and effects to get the right feel. What do you do about this? How do you go about prototyping and playing/creating a game and getting assets that "will do" without wrecking your ability to get a feel right? I'm fine with programming, but as far as finding the right animations and assets I'm stuck. Any suggestions for getting around this? One idea is to stitch something together using primitives. I'm really not sure how to go about this one. I'm looking for something practical so I can just start coding and putting something together. Regards, Plot
    12. NV's implementation is very slow, I've heard (but that was many years ago). AFAIK it has not been used in a game yet. I think they used anisotropic voxels (6 colors for each) and an octree. Everybody else has ruled both of them out: instead of anisotropic voxels it's better to just use one more subdivision, and instead of an octree it's better to use plain volume textures for speed. That's what people say... personally I would still give octrees a try, though. CryEngine's approach is very interesting: IIRC they use reflective shadow maps, and the voxels are used just for occlusion. Likely they use one byte per voxel to utilize hardware filtering, but using just one bit would work too. This means big savings in memory, so the resolution can be higher and the light leaking becomes less of a problem. They have a detailed description on their manual pages. The developers of the PS4 game 'The Tomorrow Children' have a very good paper on their site. They really tried a lot of optimization ideas and have great results, so it's a must-read if you missed it. An unexplored idea would be to use oriented bricks of volumes. That way you could pre-voxelize dynamic models like vehicles too, and just update their transform instead of voxelizing them each frame. Similar to how UE4 uses SDF volumes for shadows.
    13. Hodgman

      "Nice" tweakable S-function

      This thread is a great resource! At one point in time, I wanted a tweakable S-curve that could have a linear section in the middle and ended up with this mess: float SCurve( float x, float white = 10, float shoulder = 3, float slope = 1.5 ) { float invWhite = 1/((slope/shoulder)*(white-shoulder*2)); float one_on_shoulder = 1/shoulder; float slope_on_shoulder = slope/shoulder; float white_minus_two_shoulder = white-shoulder*2; float white_minus_shoulder = white-shoulder; x = min(x,white); float a = pow( abs(x*one_on_shoulder), slope ); float b = slope_on_shoulder*(x-shoulder)+1; float c = slope_on_shoulder*white_minus_two_shoulder + 2 - pow(abs((white-x)*one_on_shoulder), slope); return x < shoulder ? a : (x > white_minus_shoulder ? c : b); } The maximum x value is called "white" because I was using it for tonemapping HDR images http://www.wolframalpha.com/input/?i=plot+piecewise[{+{abs(x%2F3)^1.5,+x+<+3},+{(1.5%2F3)*(10-2*3)+%2B+2+-+abs((10-x)%2F3)^1.5,+x+>+10-3},+{(1.5%2F3)*(x-3)%2B1,+3<%3Dx<%3D10-3}+}],+x+%3D+0+to+10 http://www.wolframalpha.com/input/?i=plot+piecewise[{+{abs(x%2F5)^4,+x+<+5},+{(4%2F5)*(10-2*5)+%2B+2+-+abs((10-x)%2F5)^4,+x+>+10-5}+}],+x+%3D+0+to+10 Left: white = 10, slope = 1.5, shoulder = 3 (x<3 is the bottom shoulder, 3<x<7 is linear, 7<x is the top shoulder) Right: white = 10, slope = 4, shoulder = 5 (x<5 is the bottom shoulder, 5<x is the top shoulder)
    14. Tereize

      Watchbot - an idea i had

      Yes, please do let us know what type of game it is. Btw, do help me out as well. I'm trying to create a text-based decision making game and I have 4 game ideas, which one sounds best? Here's the link:
    15. I think it will be hard to keep consistency with other assets, such as UI, units, buildings, all icons etc. As delta said, the style should be consistent throughout.
    16. Alexander Orefkov

      Mobile Brick Break

      Hi All! I present my new game - “Brick Break”. It's the old classic puzzle game, moved to 3D in both graphics and gameplay. Link to Play Market Gameplay video The game was created with the Urho3D engine. Screenshots
    17. Alexander Orefkov

      Brick Break screenshots

      Screenshots from the "Brick Break" game.
    18. swiftcoder

      OOP is dead, long live OOP

      I've always assumed that Linus's objection is to the rest of the baggage that goes along with "not C", rather than the OO part. There's quite a bit of OO in the Linux kernel. Besides, Linus clinging to C is a bit of an outlier at this point: operating systems written in high-level languages abound... Symbian was written in C++, and prior to the emergence of iOS/Android, it was the dominant mobile OS. Microsoft Research's Singularity uses a C# kernel and device drivers written over a C++ HAL. Redox is shaping up to be a fairly complete OS written entirely in Rust. And of course, if you go back a bit in computing history, LISP and Smalltalk were each operating systems in addition to programming languages for the machines they were originally developed on.
    19. Zemlaynin

      The Great Tribes

      Album for The Great Tribes
    20. sQuid

      "Nice" tweakable S-function

      You could also check out the generalized logistic function.
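For reference, a quick sketch of that suggestion. This is the Richards / generalized logistic curve in its common parameterization (parameter names follow the usual formulation; with the defaults below it reduces to the standard logistic, and the extra knobs let you skew where the growth happens):

```python
import math

def generalized_logistic(x, a=0.0, k=1.0, b=1.0, nu=1.0, q=1.0, c=1.0):
    """Richards' generalized logistic function.

    a: lower asymptote; k: upper asymptote (when c == q == 1); b: growth rate;
    nu > 0 skews which asymptote the growth hugs; q shifts the curve along x.
    With all defaults this is the plain logistic 1 / (1 + e^-x).
    """
    return a + (k - a) / (c + q * math.exp(-b * x)) ** (1.0 / nu)
```

Compared with the piecewise S-curve earlier in the thread, this one is smooth everywhere (no exactly-linear middle section), but it is tweakable with far fewer special cases.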
    21. Regarding this code: // Right collision if (playerCollider.x - mapCollider.x < 0) { isCollisionX = true; } // Left collision if (playerCollider.x - mapCollider.x > 0) { isCollisionX = true; } // Top collision if (playerCollider.y - mapCollider.y > 0) { isCollisionY = true; } // Bottom collision if (playerCollider.y - mapCollider.y < 0) { isCollisionY = true; } What are x and y? Do they represent the centers of the rectangles? Or perhaps a corner? How are the rectangles represented? (E.g. min/max, min/size, center/extent, etc.) Also, you mentioned that the move deltas are +/-1. Is the movement time-based? Or are the deltas fixed, e.g. one pixel per move or something like that?
    22. Bit depths that are not a power of two (96-bit) are not particularly fun to support.
    23. Yesterday
    24. fleabay

      What's a frog TODO?

      I'm not too impressed with myself, sorry to say. I've just got to buckle down. Thanks for checking up on me, I needed it.
    25. Oberon_Command

      OOP is dead, long live OOP

      Note that the poster you're quoting said applications. Kernels and embedded software for network switches aren't really what I would call "applications." Applications are things like Word, Maya, Call of Duty, Steam... Consumer software that runs on top of an operating system. The software industry is a young field. It's continually evolving. We're constantly learning with everything we do; in fact, I would say that the death of a programmer's career can often be pinpointed by looking for the point at which he stopped learning new things. A programmer who stops learning is the walking dead, doomed to stagnation and to eventually be superseded by the folks who kept growing. I suspect this "how-to-use-v2" will eventually be superseded, too. You make this out like it's a bad thing. Why is that?
    26. Servant of the Lord

      Accessing a subrect of a texture using a UV map [GLSL]

      I got my effect working. Thanks gents! 😃 https://twitter.com/JaminGrey/status/1053107844474646528
    27. Dim0thy

      OOP is dead, long live OOP

      @AnarchoYeasty It's cool, my friend - just don't insult me. I must admit that that's true to some extent. Note the 'potential' qualifier - it implies that people felt that there *might* be problems with procedural programming in the future; not that there were problems at the present. I respectfully and strongly disagree - C is still being used to write programs, and the language has consistently ranked in the top 5 for the last few years (if not decades). The most prominent and often-cited case of a non-trivial program written in C is the Linux kernel - and you might say - yeah, but the decision to use C was made in the nineties; if it were decided now, surely they'd go OOP (i.e. C++). Not so, according to Linus Torvalds - it would still be C! (Incidentally, 'git', Linus' other project, is C too.) At my current job, we write software for network switches, and guess what - it's all C without a shred of C++ (thank god). There is a perception out there that C is obsolete and nobody uses it anymore, which seems to me like a form of delusion induced by the OOP marketing machine. Oh well - whatever this man says, writes or advocates does not affect my position in any way; if anything, it makes me more suspicious. It is this tendency of OOP being revised (refined?) that I take issue with. OK, so there is this 'inheritance' concept that was developed/discovered. Let's use it! Here's how you use it: <how-to-use-v1>. 10 years later - um, no, actually, that wasn't right - we got unforeseen problems when applied in practice, so let's use it this way: <how-to-use-v2>. 10 years later... See the pattern?? You did notice that the body of an OOP 'method' is procedural code. So OOP can't be 'better', because it's not a separate thing that would sit at the same level and could be compared. Underlying it is procedural code, so it's more like a layer on top. So maybe you meant: better than procedural programming *without* OOP added to it? No.
Allow me to elaborate: Suppose we have problem P, and two programs: program O, written in OOP style, and program S, written in procedural style. Both programs solve the problem P completely and correctly. Compare the programmatic complexity of the two: program O is composed of procedural code + OOP overhead; program S is procedural only. So program S is *simpler* in structure and complexity. The conclusion follows by the 'simple is better' principle. I like your analogy, but my argument is more like: if the nails do the job, then there's no need for screws. I would suggest you read 'Notes on Structured Programming' by E. Dijkstra, but don't read it like prose - instead read it like a math book: do the exercises, try to get an understanding of the concepts and the proofs. If you are like me, it will give you an appreciation for the beauty, simplicity and power of the ideas presented; it's an eye-opener. Then, compare the insights and the clarity of thought you gained with the fuzziness and flaky concepts that you'd get from the OOP camp. In my opinion - there's no contest. It's baffling to me that the structured programming paradigm has not gained a foothold in the industry as OOP did. I blame it on the marketing machine of OOP (which SP does not have).
    28. Awoken

      What's a frog TODO?

      How is Frogger Universe coming along?
    29. Thank you, JoeJ. I think I will just evaluate a constant then. I agree, but Nvidia's VXGI implementation uses cone tracing too (however, utilizing another approach for storing the voxels) and they achieve some fairly pleasant results, even for diffuse GI.
    30. https://msdn.microsoft.com/en-us/library/windows/desktop/ff471325(v=vs.85).aspx R32G32B32 is only optionally supported for filtering - you need to query the driver for support before using it with a non-point sampler.
    31. Awoken


      As I was thinking about all this I was kind of envisioning 'wouldn't it be cool if...' scenarios. I don't know how realistic these ideas are, but seeing as the floor is open: There are a multitude of independent developers out there who've marketed their games, which are available for purchase on many platforms. If I want to buy a game for my computer I go to Steam; if I want to buy a game for my phone I go to the Google Play store; if I want to play Flash games I go to Newgrounds, Kongregate, OneMoreLevel and so on. The only places I can think of going to play games that are in the making are either Steam (I've got to search them out myself and know ahead of time what I'm looking for) or a Kickstarter campaign. In my opinion they both have their drawbacks: Steam doesn't provide enough community support (as far as I'm concerned, maybe I'm wrong), and Kickstarters only give out IOUs in a one-time, all-or-nothing campaign. I wonder what kind of an audience there is for a location that provides it all: community, trial, fundraising and purchase abilities. Those are my loftier thoughts. This is something I was planning on doing, having two separate blogs and tying them to the same project page, hope nobody minds.
    32. Hello, I am detecting each side of collision between two rectangles, and when I collide them, let's say, with the left side, it also detects a collision on the top side when I move to the top, and the bottom side when I move to the bottom. I think that is because it is also detecting the top/bottom corners. How can I prevent this corner collision, to allow the player to slide? Currently, my code is the following (using xMove (-1 or 1) and yMove (-1 or 1) to know the player's movement direction): boolean isCollisionX = false; boolean isCollisionY = false; // Check collisions for (Rectangle mapCollider : belongsToLevel.tiledMap.mergedColliders) { if (playerCollider.intersects(mapCollider)) { // Right collision if (playerCollider.x - mapCollider.x < 0) { isCollisionX = true; } // Left collision if (playerCollider.x - mapCollider.x > 0) { isCollisionX = true; } // Top collision if (playerCollider.y - mapCollider.y > 0) { isCollisionY = true; } // Bottom collision if (playerCollider.y - mapCollider.y < 0) { isCollisionY = true; } break; } } if (!isCollisionX) { pos.x += xMove; } if (!isCollisionY) { pos.y += yMove; } Thank you in advance.
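A common fix for exactly this corner snag is to resolve the two axes independently: attempt the X step on its own, then attempt the Y step from wherever X ended up. A hedged sketch of the idea in Python (rectangles here are hypothetical (x, y, width, height) tuples with (x, y) the top-left corner, loosely mirroring java.awt.Rectangle):

```python
def intersects(a, b):
    """Axis-aligned overlap test; rectangles are (x, y, w, h) tuples with
    (x, y) the top-left corner. Strict inequalities: touching edges do not
    count as overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def move(player, colliders, x_move, y_move):
    """Resolve each axis separately, so a wall blocking X never blocks
    movement along Y (and vice versa) -- this is what allows sliding."""
    x, y, w, h = player
    # Try the horizontal step on its own.
    if not any(intersects((x + x_move, y, w, h), c) for c in colliders):
        x += x_move
    # Then try the vertical step from the (possibly moved) X position.
    if not any(intersects((x, y + y_move, w, h), c) for c in colliders):
        y += y_move
    return (x, y, w, h)
```

Translated back to the Java above, that means running the collider loop twice per move - once with only xMove applied to a trial rectangle, once with only yMove - rather than deriving isCollisionX/isCollisionY from a single overlapping position.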
    33. sevenfold1

      PBR rendering engines?

      I've only come across a few so far. I'd like to get some opinions on them too if anybody has any experience using them.
    34. Thank you for the kind words :)
    35. No, it will have a barely noticeable effect, because indirect lighting is dominated by diffuse reflection in practice. Exceptions would be extremely artificial scenes with walls of shiny metal, for example. (For metals you'll likely have better results using the greyish specular color instead of the almost-black diffuse.) But even if you did store view-dependent information, doing that only for the locations of the lights would not work well, I think. Simple interpolation would look more wrong than using Lambert, and spending effort to improve that would not be worth it and would still cause artifacts. To do better you would need directional information at the voxels themselves, so they store their environment (e.g. using spherical harmonics / ambient cubes / spherical Gaussians etc.), but this quickly becomes impractical due to memory limits. With voxel cone tracing your real problem is that you cannot even model diffuse interreflection with good accuracy, at least not for larger scenes. So you're worrying about a very minor detail here. If you really want to show complex materials even in reflections, probably the best way would be to trace cone paths. (Or to use DXR, of course)
    36. Why not keep all styles and just use a toggle so players can decide? Games like EU4 have different map overlays.
    37. deltaKshatriya

      Which artstyle works best? (3 versions of world-map)

      Well, do you want to go with the feel of using one of those old computers with the vector graphics on it? It could work, but you'd have to fully embrace that feeling. So everything would have to have an old school vibe in the interface. I'd refer to DEFCON if that's the case. Otherwise, I think that the more traditional option would be the third one.
    38. "This is the End" combines the squad-based strategy features of XCOM with survival elements; it is based in a steampunk setting. See more at my TIGSource devblog: https://forums.tigsource.com/index.php?topic=65119
    39. The fact that it works on your Nvidia GPU while DX11 gives an error means that a GPU vendor is not required to implement sampling from this format. I wouldn't rely on it. If you really need to use it, you can access a pixel directly with the brackets operator, like this: Texture2D<float3> myTexture; // ... float3 color = myTexture[uint2(128, 256)]; // load the pixel at coordinate (128, 256) Then you can do the linear filtering in shader code.
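To make the "filter it yourself" suggestion concrete, here is the bilinear weighting you would reproduce after four point loads - sketched in Python for clarity; the HLSL version would use four myTexture[...] loads with the same weights. The half-texel offset and clamp addressing below are assumptions chosen to match D3D's sampling conventions:

```python
import math

def bilinear_sample(texels, width, height, u, v):
    """Manual bilinear filter built from point loads.

    texels is a row-major list of floats (one channel for simplicity);
    (u, v) are in [0, 1] with the half-texel convention, i.e. u = 0.5/width
    lands exactly on the center of the first texel.
    """
    # Map UV to continuous texel space, offset so integer coords sit on centers.
    x = u * width - 0.5
    y = v * height - 0.5
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - x0, y - y0

    def load(tx, ty):
        # Clamp addressing, like D3D11_TEXTURE_ADDRESS_CLAMP.
        tx = min(max(tx, 0), width - 1)
        ty = min(max(ty, 0), height - 1)
        return texels[ty * width + tx]

    # Blend horizontally on two rows, then blend the rows vertically.
    top = load(x0, y0) * (1 - fx) + load(x0 + 1, y0) * fx
    bottom = load(x0, y0 + 1) * (1 - fx) + load(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bottom * fy
```

For an RGB float3 texture the same two lerps are just applied per channel; the extra cost over hardware filtering is the three additional loads per sample.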
    40. It's a "total war" kind of game. You manage the regions, set taxes, build stuff, train armies. You move armies around, and when encountering enemy armies (you move one region at a time, like in Risk) there is a real-time (simplified) battle view where the units fight. It's a mix between a simulator feel and a boardgame feel. Yes, the first 2 styles are similar to DEFCON for sure, but that is a nice style :)
    41. deltaKshatriya

      Which artstyle works best? (3 versions of world-map)

      Depends: are you going for a more vector graphic type interface, like DEFCON? Then the first two are pretty good. If you're building something like Civs, then the last one is definitely better. Based on the info you posted though, I would think the 3rd works better, since the first two really give me a DEFCON style vector graphics vibe. It doesn't feel like "empire builder" to me. I'd like to hear more about the game itself though.
    42. Honestly C# and C++ are fairly similar. The concepts should transfer fairly easily from one to the other. C++ just doesn't have reflection built in.
    43. Making a turn-based strategy game/empire builder. Which looks best to you? Style 1 (natural borders) Style 2 (straight borders (abstract map)) Style 3 (satellite photo) (this one looks nice but is harder to fit with sprites, UI elements and icons that are abstracted)
    44. Hey, I've come across an odd problem. I am using DirectX 11 and I've tested my results on 2 GPUs: a GeForce GTX660M and a GTX1060. The strange behaviour occurs, surprisingly, on the newer GPU - the GTX1060. I am loading an HDR texture into DirectX and creating its shader resource view with the DXGI_FORMAT_R32G32B32_FLOAT format: D3D11_SUBRESOURCE_DATA texData; texData.pSysMem = data; // HDR data as a float array with RGB channels texData.SysMemPitch = width * (4 * 3); // size of a texture row in bytes (4 bytes per channel, RGB) DXGI_FORMAT format = DXGI_FORMAT_R32G32B32_FLOAT; // the remaining (not set below) attributes have default DirectX values Texture2dConfigDX11 conf; conf.SetFormat(format); conf.SetWidth(width); conf.SetHeight(height); conf.SetBindFlags(D3D11_BIND_SHADER_RESOURCE); conf.SetCPUAccessFlags(0); conf.SetUsage(D3D11_USAGE_DEFAULT); D3D11_TEX2D_SRV srv; srv.MipLevels = 1; srv.MostDetailedMip = 0; ShaderResourceViewConfigDX11 srvConf; srvConf.SetFormat(format); srvConf.SetTexture2D(srv); I'm sampling this texture using a linear sampler with D3D11_FILTER_MIN_MAG_MIP_LINEAR and addressing mode D3D11_TEXTURE_ADDRESS_CLAMP. This is how I sample the texture in the pixel shader: SamplerState linearSampler : register(s0); Texture2D tex; ... float4 psMain(in PS_INPUT input) : SV_TARGET { float3 color = tex.Sample(linearSampler, input.uv).rgb; return float4(color, 1); } First of all, I'm not getting any errors during runtime in release, and my shader using this texture gives correct results on both GPUs. In debug mode I'm also getting correct results on both GPUs, but I'm also getting the following DX error (in the output log in Visual Studio) when debugging the app, and only on the GTX1060: D3D11 ERROR: ID3D11DeviceContext::DrawIndexed: The Shader Resource View in slot 0 of the Pixel Shader unit is using the Format (R32G32B32_FLOAT). 
This format does not support 'Sample', 'SampleLevel', 'SampleBias' or 'SampleGrad', at least one of which may being used on the Resource by the shader. The exception is if the corresponding Sampler object is configured for point filtering (in which case this error can be ignored). This also only applies if the shader actually uses the view (e.g. it is not skipped due to shader code branching). [ EXECUTION ERROR #371: DEVICE_DRAW_RESOURCE_FORMAT_SAMPLE_UNSUPPORTED] Despite this error, the result of the shader is correct... This doesn't seem to make any sense... Is it possible that my graphics driver (I updated to the newest version) on the GTX1060 doesn't support sampling R32G32B32 textures in the pixel shader? This sounds like pretty basic functionality to support... The R32G32B32A32 format works flawlessly in debug/release on both GPUs.
    45. Hey guys, I am implementing the following paper (Interactive Global Illumination using Voxel Cone Tracing): https://hal.sorbonne-universite.fr/LJK_GI_ARTIS/hal-00650173v1 (This paper can be downloaded for free if you look around.) Basically the authors suggest storing radiance, color etc. in the leaves of an octree and mipmapping that into the higher levels of the tree, then using cone tracing to calculate 2-bounce global illumination. So the octree is ready and I now want to inject radiance into the leaf nodes. For this task I use the suggested method and render a "light-view" map from the perspective of the light. I use physically based materials, thus the actual computation cannot be precalculated, because solving the rendering equation for a specific voxel also depends on the viewing and light directions. I have seen some implementations that just use the Lambertian BRDF as a simplification. But that will likely worsen the quality of the resulting frame, wouldn't it? My idea is to calculate the result (using the BRDF from UE4) for more than one viewing direction and just interpolate between them at runtime. This process has to be repeated when a light changes. So my question is: How should I handle this problem? Or should I just use the Lambertian BRDF and not care? Thanks
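For context on the Lambertian simplification being discussed: Lambert's BRDF has no view dependence, which is exactly why the reflected radiance can be evaluated once per voxel at injection time and then mipmapped. A minimal sketch (Python, single directional light, names invented here):

```python
import math

def lambert_radiance(albedo, normal, light_dir, light_intensity):
    """View-independent Lambertian reflection for one directional light.

    Outgoing radiance is albedo / pi * max(0, N.L) * incident intensity.
    Because no viewing direction appears anywhere in this formula, a single
    RGB value can be stored per voxel, regardless of camera position.
    """
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(a / math.pi * n_dot_l * light_intensity for a in albedo)
```

The trade-off the post is weighing is precisely this: the constant albedo/π term throws away all specular, view-dependent behavior, but in exchange the injected value is valid from every camera position, with no per-direction storage or runtime interpolation needed.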