# 3D: I'd like to learn about BSP rendering

## Recommended Posts

I'd like to understand the basics of BSP tree rendering.

I am also looking at the Doom source code to understand the simplest form from the beginning.

Maybe I can write a Win32 program that draws pixels and lines to see how it works.

I can't tell how the Doom code works just by looking at it; I can't even find the part where they draw a pixel.

I'm also interested in how to implement your own BSP tree in DirectX:

how do I use the vertex buffer for that?

Do you need to replace all vertices in the vertex buffer every frame? Isn't that slow?

I need it for unlimited lights; in the fixed pipeline I have only 8 light sources available.

I also don't understand how you make your own pipeline. How does it work in combination with a video card?

I don't want to code ASM or a shader language or anything like that: C++ only. Is that possible?

Does anyone have understandable info about the simplest BSP tree or the non-fixed pipeline?

Thanks

##### Share on other sites
7 minutes ago, the incredible smoker said:

BSP tree in DirectX

What does a BSP tree have to do with DirectX? Or, stated differently, how would it differ from other applications like ray tracing or database searches?

##### Share on other sites

To learn about BSP in Doom, read the chapter in Michael Abrash's Black Book, which is available online everywhere.

But I did not find the source code, so I hope it's OK to attach it here.

(It's the OpenGL or software rasterizer version, but I guess it still compiles.)

BSP has nothing to do with a graphics API or shaders anyway. Doom uses BSP for front-to-back sorted rendering of level geometry and for frustum culling. The idea is to generate small chunks of geometry by cutting space in two halves recursively. Those chunks are static, and there is no need to touch vertex buffers at runtime.
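A rough C++ sketch of that recursive cutting over 2D wall segments (all names here are invented for illustration, this is not Doom's actual code; a real builder also splits segments that straddle the chosen line, while this sketch just classifies them by their midpoint):

```cpp
#include <vector>
#include <cstddef>

struct Seg { float x1, y1, x2, y2; };   // a 2D wall segment

struct BspNode {
    Seg splitter;                 // the segment whose line divides space
    std::vector<Seg> segs;        // segments lying on the splitter's line
    BspNode* front = nullptr;     // subspace on the front side
    BspNode* back  = nullptr;     // subspace on the back side
};

// Signed area test: > 0 means (px,py) is on the front side of s's line,
// < 0 means the back side.
float SideOf(const Seg& s, float px, float py) {
    return (s.x2 - s.x1) * (py - s.y1) - (s.y2 - s.y1) * (px - s.x1);
}

BspNode* Build(std::vector<Seg> segs) {
    if (segs.empty()) return nullptr;
    BspNode* node = new BspNode;
    node->splitter = segs[0];     // naive choice; real builders pick carefully
    node->segs.push_back(segs[0]);
    std::vector<Seg> front, back;
    for (size_t i = 1; i < segs.size(); ++i) {
        // Classify by midpoint; a real builder splits straddling segments.
        float mx = (segs[i].x1 + segs[i].x2) * 0.5f;
        float my = (segs[i].y1 + segs[i].y2) * 0.5f;
        (SideOf(node->splitter, mx, my) >= 0 ? front : back).push_back(segs[i]);
    }
    node->front = Build(front);
    node->back  = Build(back);
    return node;
}
```

The cutting stops when a subspace is small enough (here: when it runs out of segments), and the resulting leaves are exactly the static chunks that never need their vertex buffers touched.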

Edit: Why do you think you need BSP nowadays?

Edited by JoeJ

##### Share on other sites

Hi, I'd like to know how I can get unlimited lights in my DX engine; right now I only have 8 lights.

Is that a good reason to have a BSP tree without the fixed pipeline, or does the pipeline have nothing to do with BSP?

I'm also making hardware: I bought a 320x240 full-color LCD display, and I don't know if the framerate will be fast enough on these 50 MHz chips.

Before writing code for the hardware I test in a Win32 application; that saves a lot of re-flashing the chip.

Basic stuff is also nice to know for this reason.

So I have 2 reasons: a PC game (DX9) and drawing lines on a full-color hardware LCD display.

Edited by the incredible smoker

##### Share on other sites

BSP trees are mostly used for the initial partitioning (typically done beforehand as a preprocessing step), or at runtime for visibility or collision detection. Your vertices don't change during rendering; you just have a set of data per leaf of the BSP that, when determined to be visible, is pushed to your rendering system. So in answer to your question, BSP has nothing to do with the rendering pipeline. It's simply a way of partitioning the world into more manageable spaces, and of determining spatial relationships between those spaces. If you look at engines that used BSP, like Quake, it was also used during preprocessing to determine whether the world was closed or had gaps, at runtime to get correct front-to-back and back-to-front sorting of faces, and for collision detection by checking whether you're going from an empty area to a solid area or vice versa.
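The front-to-back sorting part can be sketched in a few lines of C++ (the node layout and names are invented for illustration, not from any actual engine): at every node you test which side of the splitting line the camera is on, and descend into that child first.

```cpp
#include <vector>

struct Node {
    float a, b, c;        // splitting line: a*x + b*y + c = 0
    int   leafId = -1;    // >= 0 when this node is a leaf
    Node* front = nullptr;
    Node* back  = nullptr;
};

// Front-to-back traversal: first visit the child on the camera's side of
// the splitting line, then the other one. Reversing the two recursive
// calls gives back-to-front order instead.
void VisitFrontToBack(const Node* n, float camX, float camY,
                      std::vector<int>& order) {
    if (!n) return;
    if (n->leafId >= 0) { order.push_back(n->leafId); return; }
    bool camInFront = n->a * camX + n->b * camY + n->c >= 0;
    VisitFrontToBack(camInFront ? n->front : n->back, camX, camY, order);
    VisitFrontToBack(camInFront ? n->back : n->front, camX, camY, order);
}
```

The whole sort is one tree walk per frame, with one line-side test per interior node, no comparisons between leaves needed.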

##### Share on other sites
7 minutes ago, the incredible smoker said:

Hi, I'd like to know how I can get unlimited lights in my DX engine; right now I only have 8 lights.

You can lift that limitation in two ways:

1. Use the programmable pipeline and do the lighting in shaders.

2. Implement your own vertex lighting and submit lit vertex colors to DX.

If you still want hardware lighting from the fixed-function GPU with more than 8 lights, you can make sure any triangle is lit by at most 8 lights and change the active lights accordingly while rendering. Partitioning the geometry will be useful for this, but you can use any method you want. BSP is static, so it may not be the best choice here.
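Option 2 (CPU vertex lighting, no light-count limit) could be sketched like this; the structs and the linear falloff are invented for illustration, and a DX9 vertex format would store the result as a DWORD diffuse color with D3D lighting disabled:

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };
struct PointLight { Vec3 pos; Vec3 color; float radius; };

// Sum any number of point lights into one vertex color: N dot L diffuse
// term with a simple linear distance falloff, clamped to [0,1].
Vec3 LightVertex(const Vec3& p, const Vec3& n,
                 const std::vector<PointLight>& lights) {
    Vec3 sum{0, 0, 0};
    for (const PointLight& l : lights) {
        Vec3 d{l.pos.x - p.x, l.pos.y - p.y, l.pos.z - p.z};
        float dist = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
        if (dist >= l.radius || dist <= 0.0f) continue;   // out of range
        float ndotl = (n.x*d.x + n.y*d.y + n.z*d.z) / dist;
        if (ndotl <= 0.0f) continue;                      // facing away
        float atten = 1.0f - dist / l.radius;             // linear falloff
        sum.x += l.color.x * ndotl * atten;
        sum.y += l.color.y * ndotl * atten;
        sum.z += l.color.z * ndotl * atten;
    }
    sum.x = std::min(sum.x, 1.0f);
    sum.y = std::min(sum.y, 1.0f);
    sum.z = std::min(sum.z, 1.0f);
    return sum;
}
```

You'd run this only for vertices whose lights actually changed, not for the whole level every frame.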

Doom / Quake had hidden surface removal (HSR) on top of the BSP-sorted drawing order to reduce overdraw (it can achieve zero overdraw for static scenes). The combination of those things made BSP so attractive and efficient back in the day. But GPUs can't do that kind of HSR. They still profit from front-to-back order due to early Z, but a less restricted and more flexible way of sorting makes more sense for a GPU.

I recommend you read the chapters; they're interesting anyway. Also read the chapters about baked radiosity (with that, 8 dynamic lights should suffice!).

##### Share on other sites

Hi, thanks for the replies all.

@JoeJ: with the shaders? Do I need to program in a shader language or ASM?

If I go change vertex colors and don't add lighting in the vertices, is it the same as with the default pipeline? Very interesting, by the way.

Is HSR an array with a bit for each pixel? That sounds logical; if there is enough memory, I don't understand how else you could do it.

@xycsiscys: it sounds interesting that you can also use BSP for collision detection. Is it very different from the intersect function in DirectX?

What I also don't understand about the Doom rendering: doesn't it take a lot of math functions? I guess not, since it was playable on 486 computers.

Don't you need at least sine and cosine for the rotation?

So I made this Win32 app showing a square with a line; if I make it bigger or smaller it looks like it is going farther away or closer.

How do I implement the Z position without heavy math, so that it is correct?

And what about rotating the camera? (I'm not rotating the square yet; I need the positioning first.)

Of course I also need to rotate the square. How would I do all that without calling math functions?

Thanks


##### Share on other sites

Math is necessary; it's just a matter of how much. What is your goal exactly?

A software renderer like the original Wolfenstein?

A software renderer like in Doom?

A software renderer for wireframe rendering?

A hardware-accelerated wireframe renderer?

A hardware-accelerated game like Half-Life?

Depending on what exactly you want to do, the advice will be different.

##### Share on other sites
4 hours ago, the incredible smoker said:

@JoeJ: with the shaders? Do I need to program in a shader language or ASM?

If I go change vertex colors and don't add lighting in the vertices, is it the same as with the default pipeline? Very interesting, by the way.

Is HSR an array with a bit for each pixel? That sounds logical; if there is enough memory, I don't understand how else you could do it.

In DirectX you code shaders in HLSL, which is like C. DirectX translates this to bytecode, and the driver translates that to ASM (each vendor and GPU has a different instruction set). There are different shaders to process vertices and pixels: a minimal vertex shader transforms vertices by the model and camera matrices, and a typical pixel shader has access to interpolated values (UVs, vertex colors, custom data) and outputs the final color for each pixel. You can still do lighting in the vertex shader like the fixed-function pipeline did, but any GPU is powerful enough to do it per pixel.

If you are happy with vertex lighting, then yes, you can do it on the CPU yourself, using vertex colors to get around the 8-light limit.

HSR per pixel is what GPUs do using the Z-buffer. A software rasterizer (Quake / Doom) can do it e.g. per span (a scanline segment of a triangle), which is much less work than per pixel. But GPUs have lots of parallel processing power, and work-efficient HSR algorithms are hard to parallelize. Also, triangles became much smaller, so the Z-buffer wins on the GPU.
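To answer the "array with a bit for each pixel" question concretely, per-pixel HSR with a Z-buffer is a depth value (not just a bit) per pixel, and a fragment only writes if it is nearer than what is stored. A minimal sketch, not how Doom/Quake actually worked:

```cpp
#include <vector>
#include <cstdint>

struct DepthBuffer {
    int w, h;
    std::vector<float> z;          // one depth value per pixel
    std::vector<uint32_t> color;   // the framebuffer
    DepthBuffer(int w_, int h_)
        : w(w_), h(h_), z(w_ * h_, 1e30f), color(w_ * h_, 0) {}

    // Depth test: write only if this fragment is nearer than what's stored.
    // Returns true if the pixel was written.
    bool Plot(int x, int y, float depth, uint32_t rgba) {
        float& stored = z[y * w + x];
        if (depth >= stored) return false;  // something nearer already drawn
        stored = depth;
        color[y * w + x] = rgba;
        return true;
    }
};
```

With a float per pixel a 320x240 buffer costs 300 KB, which is why the old software engines preferred span-based HSR over a Z-buffer.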

Using GPUs you don't care about fine-grained HSR, but you may want to avoid rendering one half of the level if it is hidden behind a wall. Here BSP can be used in combination with a precomputed Potentially Visible Set (PVS): each cell of the BSP knows what other cells may be visible when the camera is inside it. This idea also works with other data structures, but it is limited to static geometry (and it's hard, so see how far you can get without needing such things).

4 hours ago, the incredible smoker said:

Don't you need at least sine and cosine for the rotation?

No, only if you need to work with angles. You need trig for a first-person camera once per frame, but you can animate a character and transform its entire skeleton hierarchy without trig. You mostly use dot and cross products instead of sin and cos.
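For example: sin/cos are called once to build a rotation matrix, and after that every vertex is rotated with multiplies and adds only (each row is just a dot product). A 2D sketch with invented names:

```cpp
#include <cmath>
#include <cstddef>

struct Mat2 { float m00, m01, m10, m11; };

// The only trig in the whole frame: build the matrix once.
Mat2 RotationMatrix(float angle) {
    float c = std::cos(angle), s = std::sin(angle);
    return {c, -s,
            s,  c};
}

// Rotate any number of points with no trig at all:
// each coordinate is a dot product of (x,y) with a matrix row.
void RotatePoints(const Mat2& r, float* xs, float* ys, size_t count) {
    for (size_t i = 0; i < count; ++i) {
        float x = xs[i], y = ys[i];
        xs[i] = r.m00 * x + r.m01 * y;
        ys[i] = r.m10 * x + r.m11 * y;
    }
}
```

So for a square of 4 vertices you pay for sin/cos once, not 4 times, and on fixed-point hardware you can replace that one sin/cos with a lookup table.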

4 hours ago, the incredible smoker said:

So I made this Win32 app showing a square with a line; if I make it bigger or smaller it looks like it is going farther away or closer.

How do I implement the Z position without heavy math, so that it is correct?

And what about rotating the camera? (I'm not rotating the square yet; I need the positioning first.)

Of course I also need to rotate the square. How would I do all that without calling math functions?

Probably your hardware can do much more math than you think.

You use one matrix for the camera and one matrix for each dynamic object. By multiplying both matrices together, you only need to transform each vertex with one matrix and do the perspective projection to get screen coordinates (DirectX does this for you). Both CPU and GPU are built to do math on data - there is no need to avoid it. Today it's more important to optimize for efficient memory access than to limit math work.
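A sketch of that, assuming a row-major 4x4 layout and a simple focal-length projection (the names are invented; in DX9 the D3DX matrix functions do the equivalent work for you):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };   // row-major; rows dot the column vector

// Combine camera * object into one matrix, so each vertex is transformed once.
Mat4 Mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

// Transform a point (w assumed 1) and project with focal length f.
// Returns screen x, y (origin at screen center) and the view-space z.
Vec3 Project(const Mat4& m, Vec3 p, float f) {
    Vec3 t {
        m.m[0][0]*p.x + m.m[0][1]*p.y + m.m[0][2]*p.z + m.m[0][3],
        m.m[1][0]*p.x + m.m[1][1]*p.y + m.m[1][2]*p.z + m.m[1][3],
        m.m[2][0]*p.x + m.m[2][1]*p.y + m.m[2][2]*p.z + m.m[2][3],
    };
    return { f * t.x / t.z, f * t.y / t.z, t.z };  // the perspective divide
}
```

The "Z position" question from the quote is exactly that last line: a point twice as far (larger t.z) lands half as far from the screen center, which is the farther/closer effect you got by scaling the square, but now correct.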

Edited by JoeJ

##### Share on other sites

If you haven't already, you might want to look into Chocolate Doom (https://github.com/chocolate-doom/chocolate-doom).

It's a pretty clean and straightforward port of the original id source. I think it uses SDL, the last time I looked into it.

Likewise, for another good deep dive into how the Doom engine worked, and how it used BSPs, Fabien Sanglard's website is fantastic (http://fabiensanglard.net/doomIphone/doomClassicRenderer.php)

##### Share on other sites

Thank you Eric for the link: http://fabiensanglard.net/doomIphone/doomClassicRenderer.php

Very useful information.

As they say, everything is done in integers; this is exactly the stuff I am looking for.

The chips you can program nowadays are comparable to old-school computers / game consoles;

that is why you see more and more clones these days.

OK, 32-bit chips also have floating point, but I prefer integer math, and I'd like to learn more about using integer math for optimizing.

What I'd like to make: a software renderer that is all integer.

I'm happy with Wolfenstein to begin with; once it's done I can always move on to a Doom renderer.

That would be very nice.

The good thing is I want to program 32-bit chips; I thought those 486 machines were all 16-bit, right?

So I have some advantage over older hardware, with 32-bit instructions being as fast as 8- or 16-bit ones;

only the clock speed might still be a problem for my current chips @ 50 MHz.

##### Share on other sites

29 minutes ago, the incredible smoker said:

my current chips @ 50MHz.

ah, ok

I wrote a Doom-like engine for the Nokia N-Gage; that was a 32-bit ARM CPU at 100 MHz.

I used integer math only (fixed point, 16:16 mostly, sometimes 24:8), and also did a bit of ASM (but not enough).

My engine was 2D BSP like Doom, but it supported stacked rooms, voxel terrain similar to Outcast, and perspective-correct textured polygons for characters like Quake. No dynamic lighting, just lightmaps; characters were lit simply, similar to Quake as well.

Screen resolution was 170 x 200. I got only 20 FPS. Commercial games did a bit better, probably because they did the whole scanline in ASM.

The 486 was 32-bit; it was too slow for Quake but fast enough for Doom, IIRC.

##### Share on other sites
23 minutes ago, the incredible smoker said:

everything is done in integers; this is exactly the stuff I am looking for,

That is not exactly accurate; IIRC it is done in fixed point... which uses the integer units in the CPU.
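A minimal 16:16 fixed-point sketch (the upper 16 bits hold the integer part, the lower 16 the fraction; widening to 64 bits in the multiply is one common way to keep the intermediate product from overflowing - Doom's own m_fixed.c does something similar, IIRC):

```cpp
#include <cstdint>

typedef int32_t fix16;   // 16:16 fixed point

inline fix16 ToFix(int i)      { return (fix16)(i << 16); }
inline int   FixToInt(fix16 f) { return f >> 16; }

// (a * b) would shift the binary point left by 16; shift back down.
inline fix16 FixMul(fix16 a, fix16 b) {
    return (fix16)(((int64_t)a * b) >> 16);
}

// Pre-shift the dividend up so the quotient keeps its 16 fraction bits.
inline fix16 FixDiv(fix16 a, fix16 b) {
    return (fix16)(((int64_t)a << 16) / b);
}
```

Everything here compiles to plain integer shifts, multiplies, and divides, which is why it runs fine on an FPU-less 486SX or a small microcontroller.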

25 minutes ago, the incredible smoker said:

I'm happy with Wolfenstein to begin with; once it's done I can always move on to a Doom renderer.

Wolfenstein used what was called back then "raycasting". IIRC, basically imagine a piece of graph paper. Now imagine that each side of a square is potentially a wall, and the map just tells you whether there is a wall there or not, and if so what type (image) of wall it is. The key to this technique is that the walls are all regularly spaced. This allows you to collide against the walls very easily using simple math. Anyway, there are books and tutorials online that will explain the details.

32 minutes ago, the incredible smoker said:

I thought those 486 machines were all 16-bit, right?

486s were 32-bit. I used to play Wolfenstein and Doom on a 486SX at 25 MHz. (SX means no floating-point coprocessor.)

33 minutes ago, the incredible smoker said:

So I have some advantage over older hardware, with 32-bit instructions being as fast as 8- or 16-bit ones;

only the clock speed might still be a problem for my current chips @ 50 MHz.

What specific hardware are you targeting? Raspberry Pi?

##### Share on other sites

PIC32

Nice one Joe!

BTW, what is IIRC?

I'd like to avoid ASM; I know it's crucial for some things.

Edited by the incredible smoker

##### Share on other sites
4 minutes ago, the incredible smoker said:

BTW, what is IIRC?

"If I remember correctly" - it took me years to figure that out as well, but now it's quite useful.

##### Share on other sites

By the way, I have this display:

I'm searching for raycasting now; it looks nice already, if only I could make that.

I want to build it from the ground up, not copying any Wolfenstein code, though maybe I'll take a look.

Edited by the incredible smoker

##### Share on other sites

Here is a nice video using the same display (ILI9341).

##### Share on other sites

Looks fun.

The downside of raycasting: you need to draw vertical spans for everything, so no division-free textured walls and ceilings. (I doubt your hardware has a division instruction.)

In my engine I used both vertical and horizontal spans within a polygon rasterizer. I can't remember exactly how I achieved zero overdraw, but I was quite proud of it, because it seemed better to me than Quake's solution. It also served as HSR, so no PVS was necessary. (Probably I tested internal BSP node bounds against the covered screen area to clip whole branches of the level BSP behind a wall - for a 2.5D game like Doom this is a 1D problem, so very fast. I don't know if Doom uses a PVS; Quake does.)

But the complexity is probably 50 times that of a raycaster, so raycasting will give you results much faster and may already max out the hardware.

##### Share on other sites

I'm developing a voxel renderer in software with integer-only operations; I have all the matrix math in fixed point. I haven't made a software polygon renderer, but I have the pieces to do it.

But it's in :r4, a language close to ColorForth.

##### Share on other sites

Yes, there is a division instruction; I thought it takes 18 instructions, which is very slow.

Is division needed? If so, I normally precalculate lookup tables.

Only I'm not that far yet; I'm reading this site: http://lodev.org/cgtutor/raycasting.html

It says I need DDA ("Digital Differential Analysis").

Now to find out what that is in C code.

Edited by the incredible smoker

##### Share on other sites

Division by Z is necessary for perspective-correct texture mapping, or any other interpolation along a scanline of a polygon.

But Z is constant along walls (vertical scanline) and floors (horizontal scanline), if you can't look up and down.

Quake did the Z divide with a floating-point instruction every 16 pixels and interpolated the results in between. On the Pentium this divide was effectively free, because the CPU executed integer instructions while the FPU was busy with the divide.

DDA is just a way to get all intersections of a ray traveling through a grid, quickly and in order. Notice that using this limits you to right-angled walls. Doom had walls at any angle, which was the major improvement over Wolfenstein, along with textured floors / ceilings. It might be easier / more worthwhile to reinvent DDA yourself than to read about it; it's a really simple thing but with many use cases.
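A sketch of that grid traversal in the spirit of the lodev tutorial (the function name is invented, and a real raycaster would stop at the first wall cell instead of taking a fixed number of steps): at each step you advance to whichever grid line, vertical or horizontal, the ray crosses next.

```cpp
#include <cmath>
#include <vector>
#include <utility>

// Walk a ray from (ox,oy) in direction (dx,dy) through a unit grid,
// returning the visited cells in order.
std::vector<std::pair<int,int>> TraverseGrid(float ox, float oy,
                                             float dx, float dy, int steps) {
    int cx = (int)std::floor(ox), cy = (int)std::floor(oy);
    int sx = dx > 0 ? 1 : -1,     sy = dy > 0 ? 1 : -1;
    // ray distance between two successive vertical / horizontal grid lines
    // (if dx or dy is 0 this becomes +inf, so that axis is never stepped)
    float ddx = std::fabs(1.0f / dx), ddy = std::fabs(1.0f / dy);
    // ray distance to the first vertical / horizontal grid line
    float tx = (dx > 0 ? (cx + 1 - ox) : (ox - cx)) * ddx;
    float ty = (dy > 0 ? (cy + 1 - oy) : (oy - cy)) * ddy;
    std::vector<std::pair<int,int>> cells{{cx, cy}};
    for (int i = 0; i < steps; ++i) {
        if (tx < ty) { tx += ddx; cx += sx; }   // vertical line is nearer
        else         { ty += ddy; cy += sy; }   // horizontal line is nearer
        cells.push_back({cx, cy});
    }
    return cells;
}
```

Note there is no per-step division here; the two divides happen once per ray, which is why DDA suits integer/fixed-point hardware so well.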

##### Share on other sites

I looked at the Wolfenstein code and did not find something to start with;

there is a lot of ASM code in there too.

I saw this weird thing: #define TILESHIFT 161 in the file wl_def.h.

If you shift something 161 positions you might as well set the value to zero.

I'm going to search for more code examples, if there are any more raycast examples.

##### Share on other sites

BTW: do I need to cast a ray for every horizontal pixel? (X axis.)

Does Wolfenstein cast a ray for every pixel?

##### Share on other sites
2 minutes ago, the incredible smoker said:

Do I need to cast a ray for every horizontal pixel? (X axis.)

For Wolfenstein: not for each pixel, but for each vertical span (so for a 320x200 display, you need to raycast 320 times - probably you meant that already, and that's right).

If this sounds inefficient, I agree. Comparing this with a BSP of (optionally angled) wall segments, we get something faster:

Sorting segments front to back is just a traversal of a binary tree; you decide which child to visit first by a simple test determining on which side of the splitting line the camera is. So while traversing the tree, we draw walls as they appear and track occlusion:

The first wall covers a horizontal range, say from 0 to 100.

The next wall to draw covers 50-150. Clipping that against already-drawn space leaves just 100-150.

The next wall covers 20-70. We clip it away completely and draw nothing.

The next wall covers 70-320; we draw 150-320. Now we know the screen is completely full, and any wall coming after this must be occluded - we're done.

So this is less work than 320 traced rays, similar to how rasterizing triangles is usually faster than raytracing.

Tracking occlusion can be done with a linked list of full/empty spaces; the list can have 320 entries at maximum, so we can just use an array of that size. ("Span list")
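That occlusion tracking can be sketched with a simple per-column flag array (a real span list would merge intervals instead of testing every column, but the observable behavior is the same; names invented):

```cpp
#include <vector>
#include <algorithm>

struct SpanCoverage {
    std::vector<bool> filled;    // one flag per screen column
    explicit SpanCoverage(int width) : filled(width, false) {}

    // Clip wall range [x0,x1) against already-filled columns, mark the
    // rest filled, and return how many columns were actually drawn.
    int DrawClipped(int x0, int x1) {
        x0 = std::max(x0, 0);
        x1 = std::min(x1, (int)filled.size());
        int drawn = 0;
        for (int x = x0; x < x1; ++x)
            if (!filled[x]) { filled[x] = true; ++drawn; }
        return drawn;
    }

    // Once this is true, every remaining wall is occluded - stop traversing.
    bool Full() const {
        return std::find(filled.begin(), filled.end(), false) == filled.end();
    }
};
```

Running the walkthrough's numbers through it: 0-100 draws 100 columns, 50-150 draws 50, 20-70 draws nothing, 70-320 draws 170, and then the screen is full.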

Additionally, you can check interior nodes against the span list to reject full branches of the BSP tree early.

This is quite simple. In my case it got more complex because I supported brushes of different heights and multiple brushes on top of each other with empty space in between. But for a Wolfenstein-restricted world it should be almost as easy to implement as raycasting. A little extra work is needed to write an editor and a BSP compiler.

##### Share on other sites

Thank you Joe; the editor won't be a problem, I already made a 3D editor that looks like 3ds Max.

12 minutes ago, JoeJ said:

For Wolfenstein: not for each pixel, but for each vertical span (so for a 320x200 display, you need to raycast 320 times - probably you meant that already, and that's right).

If this sounds inefficient, I agree. Comparing this with a BSP of (optionally angled) wall segments, we get something faster:

Yes, that is what I mean: for all X pixels.

So what you are saying is that it would be best to make Wolfenstein with a BSP? Sounds interesting, to save CPU.
