


Community Reputation

2734 Excellent


About JoeJ

  • Rank
    Advanced Member

Personal Information

  • Interests
  1. There's some extension mentioned, but they probably talk more about GL/VK compatibility: http://anki3d.org/vulkan-coordinate-system/ Personally I had to change the back/front face culling mode when moving to VK, but maybe the projection needs changes too.
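To make the GL/VK difference concrete: Vulkan's clip space flips Y and uses a [0, 1] depth range instead of GL's [-1, 1]. A minimal sketch of a correction matrix (names like `gl_to_vk_projection` are illustrative, not from any particular engine):

```python
import numpy as np

# Pre-multiplying a GL-style projection with this matrix adapts it to
# Vulkan's clip-space conventions: Y points down, depth range is [0, 1].
GL_TO_VK = np.array([
    [1.0,  0.0, 0.0, 0.0],
    [0.0, -1.0, 0.0, 0.0],   # flip Y
    [0.0,  0.0, 0.5, 0.5],   # remap z from [-1, 1] to [0, 1]
    [0.0,  0.0, 0.0, 1.0],
])

def gl_to_vk_projection(proj_gl):
    """Convert an OpenGL projection matrix to Vulkan conventions."""
    return GL_TO_VK @ proj_gl
```

Note that flipping Y also reverses the winding of triangles in clip space, which is exactly why the front/back face culling mode appears swapped after a port.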
  2. Why some games look orange?

    Ha, I did not know this; indeed my left eye seems brighter than the right (but no difference in hue). I always think colors are similar to musical notes: it's the difference between them that matters, but there is no absolute reference. It might be a nice idea to add a tint slider for slight color temperature adjustments to games, similar to gamma. (And please an option to tone down SSAO - just for me!)
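The tint slider idea could be sketched like this - a deliberately simple per-channel gain, not a physically based white-point adaptation (all names and the `strength` parameter are assumptions for illustration):

```python
def clamp01(x):
    return max(0.0, min(1.0, x))

def apply_temperature_tint(rgb, t, strength=0.1):
    """Shift a linear-RGB color warm (t > 0) or cool (t < 0).

    t is the user slider in [-1, 1]; strength caps the maximum shift.
    Positive t boosts red and cuts blue, negative t does the opposite.
    """
    s = t * strength
    r, g, b = rgb
    return (clamp01(r * (1.0 + s)), clamp01(g), clamp01(b * (1.0 - s)))
```

In a real renderer this would run in the tone-mapping shader, right next to where a gamma slider is applied.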
  3. Why some games look orange?

    I meant a constant color tone in screen space, not just color from dominating game instances like vegetation. Look at the sky - it doesn't contain any green; the same goes for the ground and environment objects. This is really interesting: one guy perceives the images as natural, maybe because he unconsciously prefers a green tint? (I agree with both people here, but there is a green tint - on my screen at least.) Maybe this is much more an issue of personal taste than I thought. Having worked as an artist for so many years I should know - but honestly I don't.
  4. Why some games look orange?

    Often it's hard to tell if this is intended or not. Some good examples: Destiny 2 has a green / red tint, which is really daring - ugly but very unique. Deus Ex has a gold tint, unique too. Inside has very low saturation, but looks stunning. The Witness has high saturation, very nice. The Hollywood-style orange light and blue shadow always looks good, but is too common to look unique. Older games however (many using the Unreal 3 engine), with their brown / grey tint, looked really terrible to me - unplayable. The dark age of video game graphics. I'd like to know if this came from some limitation (monochrome GI baking, so no color bleeding? It really looks like it.) I still see this in recent games, for instance Wolfenstein II: The New Colossus. This game looks as if it used only one bounce, or too much ambient occlusion, for baking. The engine tech is very impressive, but the baked lighting is bad. Is it just my taste, or did they do something wrong and nobody cared?
  5. Not really, because if you render a 4x4 pixel frame buffer for picking, most of the GPU will be idle all the time. The CPU is probably faster even without considering the data transfer. But if you don't have a CPU skinning implementation, I agree it's not an option (I just assumed this for a tool). Performance does not matter a lot for a tool. If you have two options, one 10 times faster but more work, I'd pick the one causing less work and see if performance is acceptable (even if some guy on a forum mentions it's not ideal). You would only paint on one model at a time, yes? So a test with 8 characters is already worse than the expected worst case.
  6. I saw it in the other thread and was wondering how it works; now I get the idea. In your final video there's a noticeable detachment of shadows on camera rotation. You could fight this with temporal reprojection, but it's one more problem that would not arise if you worked in texture space instead of screen space. Another advantage is the possibility to update only a fraction of the texels per frame - all those things are much harder in screen space. However, moving from screen to texture space is a really big challenge, no matter if we talk about rasterization or ray tracing. Probably I won't dare to do it myself for anything other than GI samples, where I already have it...
  7. The holy grail of voxels. The way voxels can pass data around and mix it precisely will probably be the secret to unlocking real lighting and reflections. Not to mention providing new concepts for developers to play with. But this has already happened: voxel cone tracing... and it turned out not to be the solution for realtime GI. The problem is that for realtime GI you currently need aggressive LOD, and world-grid-aligned voxels quickly become too bad an approximation. The way voxels allow passing data between neighbours is great, but the same applies to quadrangulations with textures, while being much more memory and runtime efficient. So the only generic use case for realtime voxels I would agree upon is very diffuse geometry, whatever that could be. A thin shell of voxels over polygons is something I have been thinking about for a long time... maybe I get a chance to try it in the far future...
  8. Instead of rendering the full screen for picking and using only a small region of it later, you could use a modified projection matrix and a tiny frame buffer to render only the stuff around the cursor. (That's how OpenGL picking works.) But I assume it's faster to do this on the CPU, even brute force: just loop over all vertices and select the closest of those that are close enough to the picking ray. This saves the GPU <-> CPU communication, the rasterization, and your work to set up projection and render targets. The same is true for triangles. (To avoid undersampling issues you could also pick the closest triangle instead of a vertex, and then select its vertex closest to the intersection.)
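The brute-force CPU variant described above could look like this - a minimal sketch, assuming a normalized ray direction and plain vertex tuples (all names are illustrative):

```python
import math

def pick_closest_vertex(vertices, ray_origin, ray_dir, max_dist):
    """Return the index of the vertex nearest the camera among those
    within max_dist of the picking ray, or None if there is no hit.
    ray_dir must be normalized."""
    best_i, best_depth = None, math.inf
    for i, v in enumerate(vertices):
        to_v = [v[k] - ray_origin[k] for k in range(3)]
        depth = sum(to_v[k] * ray_dir[k] for k in range(3))  # distance along the ray
        if depth < 0.0:
            continue  # behind the camera
        # perpendicular distance from the vertex to the ray
        closest = [ray_origin[k] + ray_dir[k] * depth for k in range(3)]
        d2 = sum((v[k] - closest[k]) ** 2 for k in range(3))
        if d2 <= max_dist * max_dist and depth < best_depth:
            best_i, best_depth = i, depth
    return best_i
```

For typical tool mesh sizes this loop is trivially fast, and a spatial structure could be added later if profiling ever shows a need.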
  9. Yeah, but unfortunately for my needs there still remain too many singularities. I'm interested more or less in object space lighting. This project may be the first that really shows the benefits of the idea: see how this guy (probably) stores ray traced results in textures and, instead of denoising them, just blurs with neighbouring texels to turn a sharp reflection into a glossy one, or a hard shadow into a soft shadow (if I get him right). Having a mesh that primarily consists of regular quads helps here, as we can build a seamless UV map to keep neighbour sampling efficient across UV seams. We can build this UV map on the original mesh as well, so it's not necessary to modify detailed geometry like characters or guns; here the quadrangulation is just an intermediate step to get seamless UVs. But for background geometry, using the quadrangulation directly enables really easy LOD: the low poly quadrangulation is the base level and we subdivide to get back the detail from the original mesh, possibly in combination with geometry images, displacement, even screen space displacement, volumetric voxels on a surface shell, whatever... We would end up with a new, more efficient form of geometry with many applications. (I think the real reason why displacement mapping did not really take off is the problem of seams becoming unacceptable, so you can use it only on things like height maps, or stitch holes with inefficient hacks.) That's quite off topic but may be worth mentioning. Edit: I got him wrong - the mentioned project does not work in texture space, so no need for global parameterization there. (But the argument holds for upcoming techniques that do, like mine.)
  10. I thought the same. One more problem with Unlimited Detail was that they claimed things that never were true. Recently I've read their patent, and surprise: their algorithm is a regular octree front-to-back traversal - the same thing I've been using for occlusion culling for more than a decade, and I never assumed it to be a new invention. In fact the only thing that's 'new' is their idea to replace the perspective divide by approximations - this made sense in the 90s, when divisions were expensive. So, no new revolutionary algorithm, of course no unlimited detail, and no replacement for game engines. Now, looking at atomontage.com I see similar claims: 'Meshes only model surfaces - a hollow and thus very incomplete approximation of reality.' What? Why process volume if all we can see is the surface? 'Mesh content creation is complex and technically demanding; costly with high barriers of entry.' Ah... so poking out holes with a spherical brush is better than shaping by dragging vertices? 'Many layers of "hacks" (e.g. UVs) make editing and distributing mesh assets cumbersome.' Yep - decoupling material from surface is surely a very bad thing; it allows sharing data and saves work, but it is complex, so it must be bad. All their arguments are wrong and the exact opposite is true. Personally I think polygons are a very efficient way to approximate surfaces; voxels can never get there. We can improve the efficiency of polys too, with better topology and by adding displacement mapping to get the same detail with less memory. We can make polygons volumetric by using polycubes or hexahedral remeshing etc. - this stuff is hard and has not made it into games yet, but it will, and it will be more flexible and efficient than voxels in regular grids. But that's just my personal opinion. What makes me sad is how they degrade their own good work with such ridiculous claims to attract foolish investors.
Back on topic: the problem with marching cubes / tetrahedra and dual contouring is the bad topology they produce - too many vertices for too bad a shape. Hardware is powerful enough to deal with this, but we could improve here, and that's what I'm currently working on (though for a completely different purpose and use case). So, we could take the 'bad' output of those algorithms and remesh it to something good, e.g. using something like this, which is quite fast: https://github.com/wjakob/instant-meshes Personally I have harder requirements: I need a pure-quads low poly approximation with as few irregular vertices as possible (the quadrangulation problem). I did not think this could be realtime, but after implementing something close to this paper: https://www.graphics.rwth-aachen.de/media/papers/campen_sa2015_qgp_medium.pdf, I see it would probably be fast enough for user generated ingame content. Further, this allows seamless texturing, so proper displacement mapping as well, plus smooth LOD transitions as seen in voxel engines. I see a big future for this stuff in games, even beyond current applications where we consider marching cubes.
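The octree front-to-back traversal mentioned above relies on a small, well-known trick for ordering the eight children; a sketch under the assumption of a direction-based ordering (orthographic view, or the vector from the eye to the node center):

```python
def front_to_back_order(view_dir):
    """Child visit order for front-to-back traversal of a regular octree.

    Children are indexed 0..7 by bitmask: bit 0 set = +x half,
    bit 1 = +y, bit 2 = +z. If the view direction points toward +x,
    rays enter the -x children first, so the nearest child has bit 0
    clear; a negative direction component flips that bit.
    """
    mask = ((view_dir[0] < 0.0) << 0) \
         | ((view_dir[1] < 0.0) << 1) \
         | ((view_dir[2] < 0.0) << 2)
    # Ascending index XOR mask is a valid front-to-back order: whenever
    # child A can occlude child B, A is nearer on every axis where they
    # differ, so A's bitmask is a subset of B's and thus numerically smaller.
    return [i ^ mask for i in range(8)]
```

For perspective views the mask is usually recomputed per node from the eye-to-node vector, but the ordering logic stays the same.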
  11. Yeah, I'm not convinced of Atomontage or such things, but Voxel Farm for example seems like really nice tech for user generated content and destruction. I always wondered why no larger game came up using it since EverQuest Next was canceled. IIRC it has plugins for UE and Unity and is not that expensive.
  12. Never heard of this game, but it looks awesome. There is also a new Atomontage video with lots of media coverage (voxels, but volumetric)... and related upcoming games like Dual Universe, Dreams, Claybook.
  13. Just random coincidence? Although I consider it as well at the moment, as an intermediate format to convert inefficient poly soup to something more efficient. But there is growing interest in SDFs and voxels as well - anything different from triangles. This has happened again and again since games like Comanche. We want easier editing (user generated content as in Minecraft), more details, non-static worlds... stuff like that. Personally I'll probably prefer Elastic Surface Nets - see here for a comparison: https://0fps.net/2012/07/12/smooth-voxel-terrain-part-2/
  14. Probably you'd need to write the triangle ID to a render target, get the 3 vertices from that, and their weights from the barycentric coords. I get the impression you are experienced and used to working with the GPU graphics pipeline, and because of this you tend to utilize this tool to solve a problem that is probably easier to solve without it. Personally I think that's really a CPU thing, and being independent of the graphics API is a good thing in the long run. You may continue using those algorithms for decades and apply them to different problems. I use regions a lot for all kinds of things: mesh smoothing / sharpening, calculating curvature directions and cross fields etc., which is the basis for advanced stuff like segmentation, simplification and quadrangulation. So if you think you might add anything like this in the future, it's worth building the data structures and algorithms I've mentioned. But if you are sure you just want vertex painting and nothing else, then your idea sounds good to me. There is however the visibility problem, which will be frustrating sometimes: to reach occluded regions you turn your viewport and try to get a good view, but then you would accidentally paint on other parts of the surface, so you need to make a manual selection first to avoid this, and so forth. But many professional tools have those same limitations and it's usually not a big problem.
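The first step mentioned above - turning the read-back triangle ID and barycentric coordinates into per-vertex paint weights - is straightforward; a minimal sketch with illustrative names:

```python
def paint_weights(tri_indices, bary, amount, weights):
    """Distribute a paint 'amount' onto the three vertices of the picked
    triangle, weighted by the barycentric coordinates of the hit point.

    tri_indices: the triangle's 3 vertex indices (from the ID buffer).
    bary: barycentric coords of the cursor hit, summing to 1.
    weights: the per-vertex paint channel, clamped to [0, 1], modified in place.
    """
    for idx, w in zip(tri_indices, bary):
        weights[idx] = min(1.0, weights[idx] + amount * w)
```

The same accumulation works regardless of whether the triangle ID came from a GPU render target or from a CPU ray cast.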
  15. a. Yes, you need additional data to represent connectivity. The 'half edge' data structure is common; personally I use a more naive approach: for each vertex store all its edges and polys (clockwise ordering can become useful), for each poly store all its vertices and edges, and for each edge store its 2 vertices and polys. You can implement neighbour searching and region growing on top of this. Because I have never implemented the half edge data structure, I can not recommend which way to go, but my approach sometimes feels wasteful or even laborious to use. It always depends on what you need, but I'd give half edge a try if I needed to start over from scratch.
    b. It is quite common to implement Dijkstra shortest paths on meshes for geometry processing tasks, which is very similar to the idea of region growing. Region growing means extending your current selection - e.g. a single vertex initially - by one ring of neighbouring verts after another. I've implemented a region grower that can grow verts, edges and polys, so keeping this logic independent of the data can be useful to save some work. You typically stop growing after all vertices are outside a given max distance. There are different kinds of 'distance' you might want to consider: Euclidean distance, which means clipping your growing inside a simple sphere centered at the start vertex (should be fine for vertex painting). Geodesic distance: the exact length of the shortest path between two points on the surface. Approximate geodesic distance: this is what you get if you use Dijkstra to measure distance by summing visited mesh edge lengths. This may be a zig-zag path, so longer than necessary.
    c. You use any of the distances listed above and model any falloff function you want.
    d. You could trace a ray and use the closest vertex from the hit triangle, or use the hit triangle itself for growing.
Because growing advances by rings of primitives on the surface, vertices of the 'wrong' leg would not be reached, even if you use simple Euclidean distance and the wrong leg's vertices are very close: the growing process stops before it would start considering 'wrong' vertices (assuming your radius is small enough, of course). Optionally / additionally you could use normals to fade out unwanted results. I may be able to answer more detailed questions if you have any. That stuff is not hard to do, but it is quite some work.
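The region growing described in (b) and (c) above can be sketched as Dijkstra over a vertex adjacency list, capped at a max distance and combined with a brush falloff - a minimal sketch with illustrative names, not taken from any particular engine:

```python
import heapq
import math

def grow_region(adjacency, edge_length, seed, max_dist):
    """Dijkstra-style region growing over mesh connectivity.

    adjacency[v] lists the neighbour vertices of v; edge_length(a, b)
    returns the length of edge (a, b). Returns a dict mapping every
    reachable vertex to its approximate geodesic distance from the seed,
    limited to max_dist. Vertices that are close in Euclidean space but
    far along the surface (the 'wrong leg' case) are never reached.
    """
    dist = {seed: 0.0}
    heap = [(0.0, seed)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, math.inf):
            continue  # stale heap entry
        for n in adjacency[v]:
            nd = d + edge_length(v, n)
            if nd <= max_dist and nd < dist.get(n, math.inf):
                dist[n] = nd
                heapq.heappush(heap, (nd, n))
    return dist

def falloff(d, radius):
    """Smoothstep-style brush falloff: 1 at the seed, 0 at the radius."""
    t = max(0.0, min(1.0, 1.0 - d / radius))
    return t * t * (3.0 - 2.0 * t)
```

Painting then becomes: grow from the picked vertex, and for each vertex in the returned dict blend the paint value in with `falloff(dist, radius)` as the weight.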