Syranide
Member Since 16 Nov 2004
Offline, Last Active Apr 17 2013 08:59 AM
Posted by Syranide on 06 August 2012 - 04:14 AM
Otherwise, depending on what kind of quality you want, nearest-neighbour upscaling followed by blurring and then thresholding to a black/white image yields quite similar results, although the output is obviously a lot more "round". The following is a quick and dirty test in Photoshop with a 4px Gaussian blur (if you upsample with bilinear instead of nearest as I did, you get slightly better and less wobbly results).
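If anyone wants to try this outside Photoshop, here's a rough Python/numpy sketch of the same pipeline (nearest-neighbour upscale, Gaussian blur, re-threshold). The factor/sigma/threshold values are just knobs I picked, not anything canonical, and it needs scipy for the blur:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def upscale_binary(mask, factor=4, sigma=2.0, threshold=0.5):
    """Nearest-neighbour upscale, Gaussian blur, then re-threshold.

    A rough equivalent of the Photoshop experiment described above;
    factor/sigma/threshold are tuning knobs, not canonical values.
    """
    big = np.repeat(np.repeat(mask.astype(float), factor, axis=0),
                    factor, axis=1)          # nearest-neighbour upscale
    blurred = gaussian_filter(big, sigma=sigma)
    return blurred > threshold               # back to a hard black/white image

# Tiny demo: a 4x4 mask with a filled 2x2 block in the middle.
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
result = upscale_binary(mask, factor=4)
```

Swapping the `np.repeat` upscale for a bilinear one would correspond to the less wobbly variant mentioned above.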
Posted by Syranide on 16 February 2012 - 07:10 AM
Posted by Syranide on 24 January 2012 - 07:10 AM
Posted by Syranide on 24 January 2012 - 05:56 AM
Perhaps not a proper answer to your question, but Wikipedia may help you:
http://en.wikipedia.org/wiki/Xiaolin_Wu's_line_algorithm (shows an implementation of the algorithm)
Oh, believe you me, I've devoured that article tens of times, along with every other article I could find on the subject. Now I'm reading Michael Abrash's explanation; so far his makes the most sense to me.
Ah, I guess you could look at Bresenham's code on Wikipedia too; it may not be directly applicable, but they do go through some short explanations and optimizations...
Anyway, as you understand I have little knowledge of this myself, but!
"The operation D <- D + d is a modulo 2^n addition with the overflow recorded."
I'm pretty sure he just means D = (D + d) mod 2^n ... and he stores the overflow ... somewhere. Or maybe it's just strangely worded?
"e = k - d*2^(-n)"
Again, just my first instinct, but the negative exponent on n seems reasonable, as the error should go down as the number of bits goes up. Especially since d seems to be related to 2^n, by the above.
"I( x, ceil(k*x) ) = (2^m - 1) * (D*2^(-n) + ex) = D*2^(m-n) + ...."
I'm guessing this is just a continuation of the above to some degree, with d replaced by D (as D is in fact d, accumulated).
Again, I may be completely stumbling in the dark here, so if this makes no sense either, it's probably because it doesn't ;)
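For what it's worth, here is my reading of the quoted notation sketched in Python: D is a fixed-point error accumulator with n fractional bits, d = round(k * 2^n) approximates the slope, and the recorded overflow is what drives the integer y step. This mirrors how I read the text, not any actual published code:

```python
# Hedged sketch of the fixed-point accumulator discussed above.
# n fractional bits for the error term D, m intensity bits for shading;
# the names mirror the quoted text, not any particular source.

def wu_line_intensities(k, length, n=16, m=8):
    """Yield (x, y, upper_intensity) for a shallow line y = k*x, 0 <= k < 1.
    D = (D + d) mod 2**n, with the overflow driving the integer y step."""
    d = round(k * 2**n)
    mask = 2**n - 1
    D = 0
    y = 0
    for x in range(length):
        # Intensity of the pixel above (x, y): top m bits of D.
        upper = D >> (n - m)
        yield x, y, upper
        D += d
        if D > mask:          # overflow recorded...
            D &= mask         # ...modulo 2**n addition
            y += 1            # ...and it triggers the integer y step

pts = list(wu_line_intensities(0.5, 8))
```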
Posted by Syranide on 23 December 2011 - 05:53 AM
Posted by Syranide on 23 December 2011 - 04:52 AM
Well, the fill itself takes 1.5 seconds per chunk; all added together it comes to 10 seconds (which would be more than one chunk). So you think you could fill a 256x256x256 chunk in less than 1.5 seconds? I mean, the closer it gets to 10 milliseconds the happier I'd be...
Well, for filling a 256x256x256 chunk it's hard to say; we are talking 16.8 million voxels. All I have to go on is the performance of my Minecraft-like lighting engine: I can create spherical holes 50x50x50 and propagate light into them in a single frame, if even that, and I can easily do it 10 times per second while the FPS drops from ~500 to somewhere around ~350... and like I mentioned, that also includes the mesh rebuild and upload. Some basic math on those rather sketchy numbers would indicate that I could repeat it 1000+ times a second if I avoid rendering. And since mine is more than 5x5x5 times (125+) smaller than yours, logic would indicate that it is indeed possible to bring it down to somewhere around 0.2-0.3s, even if you flood fill something really complicated spanning the entire volume. But those numbers may also be way off; I can't really benchmark it in any meaningful way.
And light propagation is a tad more complicated, as I actually propagate light values and not just a single bit... and more importantly, I also do this in all 26 directions (!), which I know for a fact slows it down significantly, even though I've written some really tricky code to speed it up. But it depends on what you are interfacing with; if it's some OO class linked from a DLL with virtual methods and such, there might not be all that much you can do unless you are able to get direct access into memory.
But really, if you want to speed it up, you need direct memory access and some serious fine-tuning; even switching an if-else statement around can significantly increase performance. I should also add that my code is significantly slowed down because the world is divided into blocks and not a single continuous chunk of data... meaning there is "significant" overhead in determining block boundaries and computing block-local coordinates for indexing.
You could do it completely on the GPU.
A 256^3 volume could be placed on a single 4096x4096 texture (16x16 patches of 256x256 texels). You need two of these textures and swap them as render source/target between render frames. The shader checks, per texel, all 26 neighbour texels (when diagonal filling is allowed) or 6 neighbour texels, and decides whether the target texel should be filled.
The worst case should be 512 render frames, which should be no problem on current hardware.
Edit: the worst case could be higher....
Indeed, the worst case would be a single-tile-wide "snake" going back and forth through the entire cube; that would not be pretty... but then again, not really something that would occur either.
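The ping-pong scheme described above can be prototyped on the CPU to sanity-check the frame counts. A hedged numpy sketch of the 6-neighbour variant (the function name and the wrap-around caveat are mine, not part of any GPU implementation):

```python
import numpy as np

def gpu_style_fill(open_mask, seed):
    """Simulate the texture ping-pong fill: each "frame", an open cell
    becomes filled if any of its 6 axis neighbours is already filled.
    Returns the final mask and the number of frames until nothing changes.
    Note: np.roll wraps around, so open_mask must be solid (False) at the
    volume boundary for this sketch to behave correctly."""
    filled = np.zeros_like(open_mask, dtype=bool)
    filled[seed] = True
    frames = 0
    while True:
        grown = filled.copy()
        for axis in range(3):
            grown |= np.roll(filled, 1, axis=axis)
            grown |= np.roll(filled, -1, axis=axis)
        grown &= open_mask                 # solid cells never fill
        if np.array_equal(grown, filled):
            return filled, frames
        filled = grown
        frames += 1

# 8^3 volume, open everywhere except a solid one-cell border.
open_mask = np.zeros((8, 8, 8), dtype=bool)
open_mask[1:-1, 1:-1, 1:-1] = True
final, frames = gpu_style_fill(open_mask, (1, 1, 1))
```

The frame count grows with the longest geodesic path from the seed, which is exactly why the snake-shaped worst case above gets ugly.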
Posted by Syranide on 20 December 2011 - 03:47 PM
Something you may have fallen for is putting all adjacent voxels onto the queue when the current voxel is hollow... rather than first checking whether each adjacent voxel is hollow and only then putting it in the queue. Depending on how you traverse, this can considerably reduce the number of iterations. Also, make sure that your voxel checks are inlined and fast.
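A minimal sketch of the difference, assuming a simple BFS over a voxel grid (the `is_hollow` callback and the set-based visited tracking are illustrative choices, not anyone's actual engine code):

```python
from collections import deque

def flood_fill(is_hollow, start, dims):
    """BFS fill that checks each neighbour *before* enqueueing it and marks
    it immediately, so every voxel enters the queue at most once (instead
    of up to six times when you enqueue first and test on dequeue)."""
    W, H, D = dims
    filled = {start}
    queue = deque([start])
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (x + dx, y + dy, z + dz)
            if (0 <= n[0] < W and 0 <= n[1] < H and 0 <= n[2] < D
                    and n not in filled and is_hollow(n)):
                filled.add(n)        # mark on enqueue, not on dequeue
                queue.append(n)
    return filled

# Fill a fully hollow 3x3x3 volume from a corner.
everything = flood_fill(lambda p: True, (0, 0, 0), (3, 3, 3))
# Same volume with the x == 1 plane solid: only the x == 0 slice is reached.
half = flood_fill(lambda p: p[0] != 1, (0, 0, 0), (3, 3, 3))
```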
Posted by Syranide on 01 December 2011 - 04:16 AM
Posted by Syranide on 13 August 2011 - 05:25 PM
I'm not saying you're incorrect, but it's possible to do quite a lot better than that once you take into account recursive instancing.
Say you're right about each block of land being 1 meter on a side. If you were to fully populate the tree at that granularity, you'd get those results (or similar, since it's an estimate). But now, imagine that instead of fully populating the tree, you create a group of 100 of those blocks, 10 meters on a side, then instance that over the entire world. Your tree just references that block of 100 ground plots rather than duplicating them. So now you've reduced the size requirement by a factor of approximately 100.
There's no limit to how far you can take this. The Sierpinski pyramid is an excellent example: you can describe that whole world to an arbitrary size with a simple recursive function. The only unique data storage required for that demo is the model of the pink monster thingy.
As someone mentioned earlier, the storage requirement is more appropriately measured by the entropy of the world (how much unique stuff there is, including relative placement). The repetitive nature of the demo suggests very little of that, and thus very little actual storage requirement.
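The instancing argument above can be made concrete with a toy tree whose children are shared references rather than copies; the `Node`/`count_unique` names here are made up for illustration, not from any engine:

```python
# Hedged sketch of recursive instancing: tree nodes reference shared child
# objects instead of owning copies, so repeated structure is stored once.

class Node:
    def __init__(self, children=None, payload=None):
        self.children = children or []   # references, possibly shared
        self.payload = payload

def count_unique(root, seen=None):
    """Storage cost ~ number of unique node objects, not expanded tree size."""
    seen = set() if seen is None else seen
    if id(root) in seen:
        return 0
    seen.add(id(root))
    return 1 + sum(count_unique(c, seen) for c in root.children)

def logical_size(root):
    """Size of the fully expanded tree the instancing describes."""
    return 1 + sum(logical_size(c) for c in root.children)

# A 1 m ground plot, instanced into a 100-plot block, instanced 100 times:
plot = Node(payload="1m ground")
block = Node(children=[plot] * 100)      # 100 references, one stored object
world = Node(children=[block] * 100)

print(logical_size(world))   # 10101 expanded nodes described...
print(count_unique(world))   # ...by only 3 stored objects
```

Taken to its limit (a node whose children reference the node itself, with a scale change) this is exactly the recursive-function trick behind the Sierpinski demo.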
I'm not doubting you one bit; what I meant to show was that with some very basic assumptions, some reasonable approximations and no real optimizations, I computed the number of blocks they could be using in their demo and arrived at the same number of blocks they actually are using. My point being, unless I've made a serious mistake, they aren't using anything fancy at all... like I mentioned, for all we know they might even be using an 8-bit palette for the blocks. If I had arrived at 2, then yeah, they would have to be using some fancy algorithms, but as it stands, memory consumption most likely is the actual reason they aren't showing more unique blocks.
Posted by Syranide on 12 August 2011 - 01:49 PM
Yes, I thought so, but it's designed for textures... could someone take the time to give me a really mega-short 4-line example of how they think I should do it? Remember, it's just a couple of vertex buffers.
From what I can tell it also supports 1D resources, which a vertex buffer could be classified as, I assume? (Or perhaps that's only 1D textures.)
Posted by Syranide on 12 August 2011 - 03:19 AM
BTW, perhaps I'm limited in my understanding but I acquire info with my senses and process it using my brain and what I see...
...is that there's nothing besides this island, which is admittedly bigger than Atomontage but limited nonetheless. So, when he's talking about being unlimited - aka infinite - surely he's not talking about "infinitely large" but rather "infinitely small". As a start.
They've shown a bunch of other, older demos which were slightly more varied in the blocks used, but those instead lacked much of the quality... so they just traded one thing for another. And so far, everything we've seen that would indicate memory usage has been terribly bad (a few overly reused blocks, non-shaded materials, etc.). Worse, it even seems as if they are constrained to a grid, because every single demo they've ever shown has been built from prefab tiles, as far as I've been able to tell.
However, it's important to note that the size of the island they show is in all likelihood meaningless; they could probably make it... A MILLION TIMES... larger with ease and without any issues. That is meant to be the strength of the algorithm... however, they could not add more unique models to make any use of it.
And what really strikes me as strange is why they are still running it on only one core after all these years; it should be pretty much trivial to utilize all the cores (and remove any chance of gameplay!). I'm curious how memory performance and bandwidth work out for this. Now, I'm far from an expert on this, but it really seems as if that could be a potentially huge issue to overcome, if it is indeed an issue (much as it is with raytracing).
But really, it all falls flat in theory for me. Textures and geometry today consume enough storage and memory as it is; we couldn't simply double that and expect everything to run well. Now consider that reusing textures over and over like we do today is very efficient... even storing color data as textures is efficient: it allows for compression, and compositing multiple textures seemingly makes up quality from thin air. Triangle geometry is efficient too; you can store enormous landscapes as dirt-cheap (even compressed) heightmaps.
Now, consider what UD is doing:
They apply the texture individually to each voxel... so there is no texture reuse at all, and it becomes harder to compress the color data
They break up the geometry into individual voxels... so a single triangle becomes a lot of voxels
So, let's say for the sake of argument that they have somehow managed to come up with a compression algorithm that takes all these voxels and compresses them down to the size of the original polygonal model. Great... right? Well, I would argue no, it doesn't really matter all that much... because it all comes back to the texture issue. With polygons, we can make a statue that uses 2 textures, then make 100 more statues using the same textures. In UD, every single object has its own unique "texture"... and note that the same is true for terrain. You can no longer reuse that grass texture over and over, or use a dirt-cheap heightmap to represent hundreds of kilometers of terrain... instead you now have to represent each triangle and texture by hundreds and hundreds of small voxels.
There is simply no way they could achieve the storage efficiency we enjoy today. Even if they use every imaginable cheat, 3D texture materials and all kinds of tricks... it will never be nearly as storage-efficient as polygonal geometry and textures; it simply can't be. Or am I missing something?
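To put some rough numbers on the texture-reuse point (all values invented for illustration; a one-voxel-thick surface shell, uncompressed byte-per-channel RGB on both sides):

```python
# Back-of-envelope illustration of the texture-reuse argument above.
# All numbers are made up for illustration; no compression on either side.

VOXEL_SIZE_M = 0.005                 # 5 mm voxels
WALL_M = 10                          # a 10 m x 10 m wall
TEXTURE_RES = 256                    # one tiled 256x256 brick texture

voxels_per_side = round(WALL_M / VOXEL_SIZE_M)        # 2000
voxel_bytes = voxels_per_side ** 2 * 3                # unique colour per voxel
texture_bytes = TEXTURE_RES ** 2 * 3                  # stored once, tiled freely

print(voxel_bytes)    # 12000000 bytes for ONE wall
print(texture_bytes)  # 196608 bytes for ANY number of walls
```

Real systems would compress the voxel colors, but the per-surface uniqueness is the part that doesn't amortize the way a tiled texture does.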
And like all good things, they are not good unless they also work under practical circumstances. It's "easy" enough for NVIDIA and 3DMark to whip up impressive and carefully tweaked demos; applying the technique in games is a very, very different thing.
Posted by Syranide on 11 August 2011 - 06:28 AM
Which is where I would bet my money, sometime in the future...
Posted by Syranide on 10 August 2011 - 05:59 AM
I agree with everything you've said.
But consider an MMO game that is meant to last 10 years.
Will you really buy an engine that will be obsolete in a year?
Also, if you expose scripting and the ability to create new objects to the players (even if only 1% of players contribute), your game will be unmatchable.
No development team will be able to take on your game, because you have as many developers as players.
If you are like Crytek and the rest and have an engine that has only 10 different object types, then that game will become boring very quickly - it won't last 10 years.
For example - say that in my game I build a spaceship building station.
Everyone can come there, compose a ship to their liking down to the very detail (interior/exterior/terminals), pack it up with their friends and fly from one planet to another - fight - explore - discover/script new technology...
They can even make it look like the Enterprise from Star Trek - and nobody can sue us, because it is a P2P MMO - we don't have control over the game - in fact, every client can host their own instance of the game with just a few players - or a million players if you like.
This game will clearly be more fun than, let's say, EVE ONLINE (which has maybe 2% of the features)...
I'll leave you to think about that...
OK - I'm going to bed now.
It was nice discussing things with you all.
First off, you assume that more freedom makes a better game; I really don't agree with that. Goals are what make a game fun. Just like Minecraft is fun for a while, as you explore and build your base... but once that wears off and you don't have any clear motivation to play on, you stop playing. Although this really is something for another discussion.
And really, what you describe is very nice... but EVE Online runs its own engine, WoW runs its own engine... and both have had their fair share of scaling issues and concerns. From the sounds of it, you are suggesting that your engine would simply scale perfectly out of the box, in all areas - performance, networking, etc. - and at the same time be as efficient as can be on a server.
OK, I admit, I deserve that - that was too harsh.
EDIT: I've deleted the post because it was too long and repetitive (and kind of annoying).
This summed it up:
2001: GTA 3 engine - can dynamically load geometry, textures, sounds... everything
2011: Tech 5 engine - can dynamically load only textures (and not all textures, but only world textures)
2011: CryEngine 3 - can't dynamically load anything
Unreal Engine 3 - a great engine because of its superb dynamic loading system and other stuff. If only the lighting were done better - it would be perfect.
EDIT: I think that Tech6 will be something really impressive, but Tech5 is just a stepping stone.
The more I hear you explain your engine, the more I realize that it isn't an engine or API - it's middleware; that is exactly what it is. Comparing it to highly specialized game engines doesn't make sense. There is lots of good middleware out there, UE3 being the cream of the crop. Why doesn't everyone use UE3? Because it's not all red roses; middleware puts you in a jet fighter from day one, but that's all it will ever be. If you try to branch off too much, it's simply more efficient to implement what you need from scratch.
Middleware allows most people to do really amazing things, but if you have the experience and knowledge, building a capable engine is far from an impossible task... especially not if you have an older codebase that can be scavenged. Reusing a middleware solution is not always the best answer.
Also, again, it is very easy to say that you support dynamic loading of textures, the world, and so on. However, as always, if you are generic about it, performance will suffer. You don't just smack a dynamic loading system onto a game and call it done; making it perform requires a lot of optimization and care: putting data in the right order, prioritizing the right things to load first, etc. It's very easy to replicate what is being done today in a generic way, but keeping performance at its peak under high stress can be tremendously hard. It's like those database benchmarks with 1 thread... they are pointless. Same here: building a game that runs well on 2-year-old computers, also looks stunning on new computers, performs well at 100% CPU usage and accommodates real-world loads - that is the hard part.
Posted by Syranide on 10 August 2011 - 03:49 AM
Firstly, I was talking at the sub-meter level as we were talking about not being able to use a parent node's colors. There's no reason the majority of dirt nodes have to have a unique color. The majority of them are just brownish orange. You can still have children that are their own unique color, but most of them can just be the same orangish brown with the majority of the interest coming from shadow and light differences over the surface.
How many of the voxels in a model of this bank would just use the same salmon color? Sure there are places like what I am guessing is bird poo over the sign, but those are easily stored in voxels containing color data while all their salmon neighbors just have to sit there and exist.
It seems like you don't really appreciate the difference between shades of a single color, and small variations of a single color. To demonstrate, I took your mountain and approximated it with a single color.
The first image is the reference, the second is the same but with a single color applied... however, I would be seriously impressed if you managed to get shadows that look anywhere near as good as that, in realtime, in UD.
Does it look like a mountain? Sure it does. Does it look like a good mountain? No, it does not; the lack of nuance and variation makes it look dull. And you are forgetting that while things may look rather even in color at a distance, up close there is a lot more color variation going on, and it matters. Additionally, if you do not bake lighting into the voxels, then you need to store MORE data, unless you want everything shaded using the same method and the same parameters, which rarely looks very good when trying to render realistic scenes.
Quite simply, I don't buy your argument unless you show me that it actually works.
Posted by Syranide on 09 August 2011 - 01:55 PM
That depends entirely on how you are drawing your voxels. There are plenty of solutions, but they are all implementation-specific. You don't need a very precise normal with voxels, because they shouldn't be large enough to need that much accuracy. If you're using them for static geometry you can easily store 6 bits and calculate the normal cheaply at runtime. Even with a non-static SVO you can get around it fairly quickly, depending on how your tree is laid out in memory. In fact, the more detailed and small your voxels get, the less complicated your normals have to be. Ideally your voxels should only be the size of a pixel on the screen, where the difference between a normal pointing at 0,0,1 and one pointing at 0,1,1 would be practically invisible, especially after the anti-aliasing/blurring which every voxel engine I've seen does already.
You seem to want to call voxels a "surface", which is highly inaccurate, but let's accept it for the sake of argument. You still need the surface normal (and binormal); how else will you apply the lighting equation? Could you compute the surface normal? Sure, with voxels that's typically done by computing the gradient at a given point. But that is not practical at runtime. They may derive the surface normals, but then they store them.
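For reference, the gradient approach mentioned above in a minimal numpy sketch (central differences over a scalar density field; the function name and demo field are mine, purely illustrative):

```python
import numpy as np

def gradient_normal(density, p):
    """Central-difference gradient of a scalar density field at voxel p,
    normalised; a common way to derive a surface normal from voxel data."""
    x, y, z = p
    g = np.array([
        density[x + 1, y, z] - density[x - 1, y, z],
        density[x, y + 1, z] - density[x, y - 1, z],
        density[x, y, z + 1] - density[x, y, z - 1],
    ], dtype=float)
    length = np.linalg.norm(g)
    return g / length if length > 0 else g

# A field increasing along +x: the gradient (and normal) points along +x.
field = np.fromfunction(lambda x, y, z: x.astype(float), (5, 5, 5))
n = gradient_normal(field, (2, 2, 2))
```

Doing this per sample at runtime costs six extra fetches plus a normalise, which is why precomputing and storing the normals is the usual trade.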
You also talk like these aren't also problems with textures and polys. We store tons of color data in games already, we store tons of geometry data. All of that is redundant when you use voxels. Because of the lack of geometry detail we need to store a lot more color data than we'd need to with highly detailed voxels.
That is a ridiculous statement - that textures would need to store more color data than voxels? Voxels need to store way more color data; textures can be overlapped, tiled and procedurally composited at runtime to create visually stunning results with relatively little storage, and can also be stretched over huge expanses of terrain. There is no such thing for voxels; every single voxel of every single square meter of terrain needs a unique color.
It's totally true dude. The biggest reason we have such fine detail in our textures in games is to simulate geometry that does not exist. If the geometry exists, we don't need complicated textures. You don't need a brick wall texture, you just need a brick color a grout color and to model the bricks. All of the voxels in a brick can all use the same brick color. All of the voxels in the grout can all use the same grout color. There's no reason to store the grout and brick color in every single voxel.
LOL. Okay, I tell you what: you go make a game in which all surfaces are monochromatic, and we'll see how good it looks... I am so tired of arguing with you; you just spew nonsense.
I would have to agree with the other dude.
I really don't see how you could possibly use monochromatic colors, or somehow benefit from not baking lighting, with voxels. You cannot represent textures as monochromatic colors and expect lighting to fill in the blanks; that is absurd in my opinion. A texture consists of different COLORS; your suggestion would at best give different SHADES of a single color. Meaning it will always look like a single color with different shades. Also, you assume that we don't want to bake lighting into the voxels, which is probably a necessity right now and will be for a very long time - and forget baking ambient occlusion too (which, if not for the memory issues, could be really nice).
I'm pretty sure that the only reason UD even looks half-decent right now is because he's baking shadows and lighting into the voxels.
Go out into the wild - hell, even the city - bring the brightest light source you can find and photograph a bunch of things. I'm pretty sure you could not find a single thing that ends up looking like a single flat color and isn't plastic or painted... and even those will probably have a slight variation to them. Even more so, you'll find that all materials reflect differently and give off different colors depending on their surroundings (also, subsurface scattering)... you try to compress that efficiently into a voxel for rendering with dynamic lights.
It's ridiculous to suggest that we could recreate objects in nature with a single color and then let light do the work... especially when the lights most certainly would never be able to consider radiance transfer, etc., in realtime.