Flood-fill algorithm for filling the region enclosed by the intermediate points generated by the 3D Bresenham line algorithm


Hi,

 

I have a triangle oriented in a 3D plane, i.e. its vertices are (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3).

I am trying to convert this triangular facet into a voxelised model.

Along each edge, I am applying the 3D Bresenham line algorithm to generate intermediate points. After generating these points, I want to fill the inside region.

I have been searching for an algorithm such as flood fill, but have not found anything relevant so far.

I would be really glad if someone could suggest an algorithm for achieving this.

I basically have a list of tuples storing all the (x, y, z) points created along the edges (generated using the 3D Bresenham line algorithm).

Now I want an algorithm that creates cubes (voxels) in the interior region.


You could use any triangle rasterizer. With an orthographic projection, the pixel XY and the depth Z give you the voxel index in a 3D grid.

But: pick the X, Y or Z plane for rasterization based on which best fits each triangle's normal, otherwise you get holes. (So you need to set up three projections; the 'screen' width and height and the near/far clip planes would likely correspond to the bounding box of your object or scene, for example.)

You need to be careful with subpixel accuracy to get robust, watertight results. Existing code probably uses different rounding rules for screen positions and for depth; because you alternate between projections, those rules need to match up.
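
To illustrate choosing the projection plane, here is a minimal C# sketch (the class and method names are just placeholders, not taken from any existing library) that picks the dominant axis of the triangle's normal:

using System;
using System.Numerics;

static class DominantAxis
{
    // Returns 0, 1 or 2 for the axis (X, Y or Z) along which the triangle's
    // normal points most strongly; rasterize in the plane perpendicular to it.
    public static int ForTriangle(Vector3 a, Vector3 b, Vector3 c)
    {
        Vector3 n = Vector3.Cross(b - a, c - a);
        float ax = Math.Abs(n.X), ay = Math.Abs(n.Y), az = Math.Abs(n.Z);
        if (ax >= ay && ax >= az) return 0; // normal mostly along X: rasterize in the YZ plane
        if (ay >= az) return 1;             // normal mostly along Y: rasterize in the XZ plane
        return 2;                           // normal mostly along Z: rasterize in the XY plane
    }
}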

 

There are basically two ways of rasterization:

Easy: take the bounding rect of the triangle, then test each pixel against all edge half-spaces (a C# sketch of this follows below), e.g. https://fgiesen.wordpress.com/2013/02/08/triangle-rasterization-in-practice/

Hard: classical scanline conversion, e.g. http://chrishecker.com/Miscellaneous_Technical_Articles
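
For reference, here is a minimal C# sketch of the 'easy' bounding-rect method adapted to voxels. It assumes the dominant axis is Z (so we rasterize in the XY plane), samples at cell centres, and recovers the depth layer by barycentric interpolation. All names are illustrative, not taken from the linked articles:

using System;
using System.Numerics;

static class TriangleVoxelRasterizer
{
    // Edge function: positive if p lies to the left of the directed edge a -> b.
    static float Edge(Vector2 a, Vector2 b, Vector2 p) =>
        (b.X - a.X) * (p.Y - a.Y) - (b.Y - a.Y) * (p.X - a.X);

    // "Easy" bounding-rect method adapted to voxels. setVoxel is a placeholder
    // for whatever storage you use; voxel (x, y, z) stands for the unit cube
    // with that minimum corner.
    public static void RasterizeXY(Vector3 v0, Vector3 v1, Vector3 v2,
                                   Action<int, int, int> setVoxel)
    {
        Vector2 a = new Vector2(v0.X, v0.Y);
        Vector2 b = new Vector2(v1.X, v1.Y);
        Vector2 c = new Vector2(v2.X, v2.Y);

        float area = Edge(a, b, c);              // twice the signed triangle area
        if (area == 0) return;                   // degenerate when viewed along Z
        if (area < 0)                            // enforce counter-clockwise winding
        {
            (b, c) = (c, b);
            (v1, v2) = (v2, v1);
            area = -area;
        }

        int minX = (int)MathF.Floor(MathF.Min(a.X, MathF.Min(b.X, c.X)));
        int maxX = (int)MathF.Ceiling(MathF.Max(a.X, MathF.Max(b.X, c.X)));
        int minY = (int)MathF.Floor(MathF.Min(a.Y, MathF.Min(b.Y, c.Y)));
        int maxY = (int)MathF.Ceiling(MathF.Max(a.Y, MathF.Max(b.Y, c.Y)));

        for (int y = minY; y <= maxY; y++)
        for (int x = minX; x <= maxX; x++)
        {
            Vector2 p = new Vector2(x + 0.5f, y + 0.5f); // sample at the cell centre
            float w0 = Edge(b, c, p);                    // barycentric weight of v0
            float w1 = Edge(c, a, p);                    // barycentric weight of v1
            float w2 = Edge(a, b, p);                    // barycentric weight of v2
            if (w0 < 0 || w1 < 0 || w2 < 0) continue;    // centre is outside the triangle

            // Interpolating Z with the barycentric weights gives the depth layer.
            float z = (w0 * v0.Z + w1 * v1.Z + w2 * v2.Z) / area;
            setVoxel(x, y, (int)MathF.Floor(z));
        }
    }
}

For X- or Y-dominant triangles you would swap the coordinates accordingly, and as noted above the rounding rules must stay consistent across all three projections to keep the result watertight.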

 

Note that Bresenham is probably a bad starting point, as it is not subpixel accurate (jittering edges like on the first PlayStation versus the nice edge crawling in Quake; for voxelization this is even worse because it can generate holes).

 

Edit: An even simpler approach, intersecting triangles with cubes: https://github.com/karimnaaji/voxelizer



Hi JoeJ,

Thanks a lot for your quick reply. I went through the above links, but I need some more information to get started.

I am actually implementing everything in C#. After implementing the 3D Bresenham line algorithm, I am storing all the coordinates (the points on the edges/boundaries) in a list of tuples, something like List<Tuple<double, double, double>>.

I would be really glad if you could show me some basic code that uses the data from this list of tuples and fills all the voxels inside the triangle as well as on its edges.



I tried to do something similar, but figured that filling triangles with lines doesn't really work, unless it's the scanline-based approach (which doesn't apply to most 3D triangles).

I suggest you go with the voxel-by-voxel intersection-tester approach. Pass an arbitrary function to your rasterizer (e.g. a lambda); it then checks every voxel coordinate in some bounding volume for an intersection (this could be a simple boolean test, or you can allow partial/negative intersection for more possibilities) and adds the voxel to your list if an intersection exists.
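
A minimal C# sketch of that idea, with illustrative names only; the intersection test itself is passed in as a delegate, so it could be a triangle/box test, a distance function, or anything else:

using System;
using System.Collections.Generic;
using System.Numerics;

static class PredicateVoxelizer
{
    // Tests every voxel in an integer bounding volume [min, max] against an
    // arbitrary predicate and collects the ones that intersect. The predicate
    // receives the voxel's minimum corner and its edge length (1 here).
    public static List<(int x, int y, int z)> Rasterize(
        (int x, int y, int z) min,
        (int x, int y, int z) max,
        Func<Vector3, float, bool> intersects)
    {
        var filled = new List<(int x, int y, int z)>();
        for (int z = min.z; z <= max.z; z++)
        for (int y = min.y; y <= max.y; y++)
        for (int x = min.x; x <= max.x; x++)
        {
            if (intersects(new Vector3(x, y, z), 1.0f))
                filled.Add((x, y, z));
        }
        return filled;
    }
}

Since each voxel is tested independently, the loops are easy to parallelize (for example Parallel.For over z with one result list per slice, merged afterwards, because List<T>.Add is not thread-safe).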

Because it operates on a voxel-by-voxel basis, it is inherently parallel, which is always good. 

It also allows rasterizing any arbitrary 3D function you want, not just a specific type of triangle.

I believe the result will have more filled voxels than absolutely necessary for watertightness (if you don't care about thickness). Those could be removed in a post-processing step (not sure about the quality), or maybe someone has an intersection function that can skip them entirely. It depends on what you're doing.

If performance is a problem, you could rasterize hierarchically to avoid evaluating so many voxels (rasterize into a low-resolution grid first, then use the resulting set of big voxels as the bounding volume for a higher-resolution grid).
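
A rough sketch of that hierarchical idea, reusing the same kind of intersection delegate as above (again, all names are illustrative):

using System;
using System.Collections.Generic;
using System.Numerics;

static class HierarchicalVoxelizer
{
    // Coarse-to-fine sketch: test big cells first, then only test the fine
    // voxels inside the coarse cells that were hit. 'intersects' takes a
    // cell's minimum corner and its edge length, as in the previous sketch.
    public static List<(int x, int y, int z)> Rasterize(
        (int x, int y, int z) min,
        (int x, int y, int z) max,
        int coarseSize,                        // e.g. 4 fine voxels per coarse cell
        Func<Vector3, float, bool> intersects)
    {
        var fine = new List<(int x, int y, int z)>();
        for (int cz = min.z; cz <= max.z; cz += coarseSize)
        for (int cy = min.y; cy <= max.y; cy += coarseSize)
        for (int cx = min.x; cx <= max.x; cx += coarseSize)
        {
            // Skip the whole coarse cell if the primitive misses it entirely.
            if (!intersects(new Vector3(cx, cy, cz), coarseSize)) continue;

            // Otherwise test each fine voxel inside that cell.
            for (int z = cz; z < cz + coarseSize && z <= max.z; z++)
            for (int y = cy; y < cy + coarseSize && y <= max.y; y++)
            for (int x = cx; x < cx + coarseSize && x <= max.x; x++)
                if (intersects(new Vector3(x, y, z), 1.0f))
                    fine.Add((x, y, z));
        }
        return fine;
    }
}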


I had this problem in the past, and I ended up with a rather naive but effective algorithm: first, find the bounding box of your triangle; then compute the size of your voxels in each dimension and, for each voxel in the box, perform a triangle vs. axis-aligned bounding box collision test. If the triangle and the box collide, then you have your voxel.
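
A sketch of that loop in C#; TriangleIntersectsAabb here is a hypothetical placeholder that would be implemented with a standard separating-axis triangle/box overlap test:

using System;
using System.Collections.Generic;
using System.Numerics;

static class NaiveVoxelizer
{
    // Hypothetical placeholder: a real implementation would use a standard
    // separating-axis triangle/box overlap test here.
    static bool TriangleIntersectsAabb(Vector3 a, Vector3 b, Vector3 c,
                                       Vector3 boxMin, Vector3 boxMax)
    {
        throw new NotImplementedException();
    }

    // Walks the triangle's bounding box in steps of 'voxelSize' and keeps
    // every voxel whose cube overlaps the triangle.
    public static List<Vector3> Voxelize(Vector3 a, Vector3 b, Vector3 c, float voxelSize)
    {
        Vector3 min = Vector3.Min(a, Vector3.Min(b, c));
        Vector3 max = Vector3.Max(a, Vector3.Max(b, c));

        var voxels = new List<Vector3>();
        for (float z = MathF.Floor(min.Z / voxelSize) * voxelSize; z <= max.Z; z += voxelSize)
        for (float y = MathF.Floor(min.Y / voxelSize) * voxelSize; y <= max.Y; y += voxelSize)
        for (float x = MathF.Floor(min.X / voxelSize) * voxelSize; x <= max.X; x += voxelSize)
        {
            Vector3 boxMin = new Vector3(x, y, z);
            Vector3 boxMax = boxMin + new Vector3(voxelSize);
            if (TriangleIntersectsAabb(a, b, c, boxMin, boxMax))
                voxels.Add(boxMin); // store each voxel by its minimum corner
        }
        return voxels;
    }
}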


Hi Programmer71,

Thanks for your reply. So, for example, my bounding box is a cube of size 32. Suppose I fix the resolution to 1; I then find all the sampling points on my triangle at a spacing of 1.

Now I will loop through each pixel and colour the pixel on which a sampling point lies.

So, in this process, if a sampling point falls exactly on the corner shared by 3 or 4 pixels, will I be colouring all of those pixels?

Thanks in advance.

 

