
L. Spiro

Member Since 29 Oct 2003
Online Last Active Today, 06:13 PM

#5214937 Texture atlas generator

Posted by L. Spiro on Today, 06:43 AM

I would manually unwrap the texture coordinates in the pixel shader and use tex2Dgrad (or equivalent function) for fetching the data.

Do you have any idea what that entails? 64 unwrappings requires 64 specific start-end coordinates and the 64 manual wrapping calculations associated with each.

This approach has been used for more than a decade by tons of engines;

Then why this:

I am simply looking for an existing bleeding-aware tool to pack textures.

So many engines for so long, why doesn’t such a tool already exist?

Would it be wise for you to make your own solution? Not sure, because:

I want to support 64 textures and I want to dynamically fetch different textures per-pixel based on a material mask.

You never identified texture filtering as your main problem, and instead blamed texture wrapping.
If you’re already comparing your results to an existing solution’s, the bleeding is not caused by wrapping (wrapping could cause bleeding too, but not here); it is caused by bilinear filtering.

I’m not sure you can make a better tool than one that exists because I am not sure you understand what is causing the artifacts you are seeing.



L. Spiro

#5214929 Issues with viewport.Project

Posted by L. Spiro on Today, 05:42 AM

private Matrix world = Matrix.CreateTranslation(new Vector3(0, 0, 0));
graphics.GraphicsDevice.Viewport.Project(new Vector3(0, 0, 0), project, view, world);

How are you expecting to get different results if you are always hard-coding the input the same way?

It makes no difference what the world space values are.

Prove it. What other values have you tried? For testing purposes, [0,0,0] is the worst value you can possibly use.

L. Spiro

#5214876 Very strange FPS fluctuation

Posted by L. Spiro on Yesterday, 08:43 PM

Also, the math won't work as you expected. Some people fixed it in their code samples but nobody explicitly called it out:
long frameCount;
long frameTime;
1000 * frameCount / frameTime;
This will not give the result you seem to expect from your description.
Since both values are of an integral type (int, long, short, byte, char, whatever), the result will also be the same integer type. You won’t get anything on the decimal side.  As an example, 99/100 does not equal 0.99; it equals 0 because of integer math. 3/2 = 1, 4/5 = 0, 49/5 = 9, and so on.
Since it looks like you are expecting a number like "17.231", you need to have a floating-point value in at least one spot, probably in all the spots:
(1000.0f * (float)frameCount) / (float)frameTime;

He fixed this in his “real” code: Convert.ToInt32(frameCount * 1000.0 / frameMillis);.

The result of (frameCount * 1000.0) is a double, and that causes the division to be a double, so the math will work correctly.



L. Spiro

#5214869 Methods for Drawing a Low-Resolution 2D Line Gradually?

Posted by L. Spiro on Yesterday, 07:49 PM

It took me a while to find this topic again.  It should be moved to Graphics Programming and Theory.

I'm having a hard time imagining what that would look like to make the alpha flick alternately between 0 and 1

I said you only need to increase the alpha after first calculating your current alpha value.
Every pixel from start to end has a fixed alpha value from 1 to 0 (the original equation should have been (1.0 - (Cur - Start) / (End - Start))), which you expose over time by decreasing the cut-off value (from 1 to 0).

So if your cut-off value is 0.75, only the first 25% of the line will show.

Let’s say the next pixels have alphas of 0.6, 0.5, and 0.4. They wouldn’t get drawn on this pass unless you do some math to increase their alphas over 0.75.

alpha = max( alpha, alpha + sin( alpha * 100.0 ) );
Plug this in for each number:
0.6 = max( 0.6, 0.6 + sin( 0.6 * 100.0 ) ) = max( 0.6, 0.29518937889778329437435053452157 )
0.5 = max( 0.5, 0.5 + sin( 0.5 * 100.0 ) ) = max( 0.5, 0.23762514629607121408560635308738 )
0.4 = max( 0.4, 0.4 + sin( 0.4 * 100.0 ) ) = max( 0.4, 1.1451131604793487869877094026363 )
0.3 = max( 0.3, 0.3 + sin( 0.3 * 100.0 ) ) = max( 0.3, -0.68803162409286178998774890729446 )
0.2 = max( 0.2, 0.2 + sin( 0.2 * 100.0 ) ) = max( 0.2, 1.1129452507276276543760999838457 )
0.1 = max( 0.1, 0.1 + sin( 0.1 * 100.0 ) ) = max( 0.1, -0.44402111088936981340474766185138 )
0.0 = max( 0.0, 0.0 + sin( 0.0 * 100.0 ) ) = max( 0.0, 0.00000000000000000000000000000000 )

As you can see, 2 of the pixels ahead of the current cut-off will be drawn.
Fancier formulas can make better effects.

or is it just that it's neater this way?

It is more robust and it is niftier.

L. Spiro

#5214858 Very strange FPS fluctuation

Posted by L. Spiro on Yesterday, 06:27 PM

	// calc FPS
	frameCount++;
	frameMillis += deltaMillis;
	if (frameMillis >= 1000)
	{
		// 1 second elapsed - time to display FPS, and restart counting
		framesPerSecond = Convert.ToInt32(frameCount * 1000.0 / frameMillis);
		frameMillis -= 1000;
		frameCount = 0;
	}

Your counter and your time-accumulator are not synchronized after the first entry into this area. If you set frameCount to 0, you should set frameMillis to 0; otherwise it’s like giving frameMillis a head-start over the frame counter.

If, after frameMillis -= 1000;, the result is 13 (for example), then you’ve carried milliseconds over from the previous 1,000 milliseconds, but you didn’t carry over any fraction of the frame counter, which, to be fair and synchronized, should be something like 0.4 (just picking randomly).  Another way of thinking about it: you already included those 13 milliseconds in your divide here in “Convert.ToInt32(frameCount * 1000.0 / frameMillis);,” so you shouldn’t be including them again the next time around.


The code should set both frameCount and frameMillis to 0 for “accurate” readings.



Separately, always use microseconds, not milliseconds.  You should especially know why, given that you reach up to 750 frames per second.  Once you go over 1,000 FPS, your millisecond delta is 0 and you do a whole update with no change from the previous update.  As you approach 1,000 FPS, you get updates with deltas of 1 and 2 milliseconds, which means some of your updates are literally twice as long as others.  This too can cause visual artifacts.



As for the large differences between your runs, this is somewhat normal.  Your debugging environment can sometimes cause it.  When your “game” is running at extremely high FPS, any small thing, including certain processes also being open, can cause a significant drop in frame-rate even though they are really only taking away a small amount of time from your frames.  I got this exact problem many times on older computers (4 or 5 years old).  It won’t happen on a more mature project because slowing down a game by the same amount of time has no noticeable impact if the game is running at 60 or especially 30 FPS.



L. Spiro

#5214855 Why using std::string when you can create it yourself...

Posted by L. Spiro on Yesterday, 06:05 PM

I hope you don’t mind, Finalspace, but we at Square Enix have really fallen in love with your code and believe it is the path to the future.
We’ve replaced our own string classes with yours and will be shipping the trial version of Final Fantasy XV this month with your code.

Thanks for your contributions!

L. Spiro

#5214839 Texture atlas generator

Posted by L. Spiro on Yesterday, 04:17 PM

Listen to haegarr. Texture atlases don’t apply to terrain (except perhaps in 0.0001% of cases). You need to be able to wrap/repeat the textures, meaning each texture needs to cover the full 0-1 UV range. Texture arrays are what you are looking for.

L. Spiro

#5214643 Vulkan is Next-Gen OpenGL

Posted by L. Spiro on 04 March 2015 - 10:31 PM

Hopefully Vulkan could also be used to write an opengl implementation on top of it.

That might be fun as a pet project but otherwise I don’t see the point in subjecting yourself to the tortures that OpenGL driver writers had to endure for so long (and still will unless they got promoted).
The OpenGL API is significantly flawed, which is specifically why these kinds of major upgrades have been requested for so long (remember Longs Peak?).


It may be a fun and novel project to make a Nintendo Entertainment System® emulator, but since OpenGL still exists and will continue to exist and be maintained there’s no novelty in making an OpenGL API rewrite using Vulkan.



L. Spiro

#5214638 Assimp and adventures in calculating tangent space - strange "seams"...

Posted by L. Spiro on 04 March 2015 - 10:12 PM

You didn’t unpack the normals from the normal map.
vec3 vUnpacked = texture2D(bumpMap,UVs).xyz;
vUnpacked.xy = vUnpacked.xy * 2.0 - 1.0;

L. Spiro

#5214634 What values get interpolated passing from VS to PS?

Posted by L. Spiro on 04 March 2015 - 09:57 PM

particularly the apparent difference

Half of that is because you didn’t renormalize the view vector in the pixel shader.

L. Spiro

#5214625 What values get interpolated passing from VS to PS?

Posted by L. Spiro on 04 March 2015 - 09:07 PM

This isn’t about specific rules on how things get interpolated—they all get interpolated linearly (unless you use an interpolation modifier).
Your problem is a logical one.  Interpolating the view vector is not the same as interpolating the position across the triangle.
As the position goes linearly away from the viewer the changes in the view direction will be smaller and smaller.
If you interpolate the view direction directly instead, the changes in the view direction will all be linear.
You have to interpolate the position and derive the view vector for each pixel.
Also, for future reference, since interpolations are linear, the end points of the view vector might all be normalized, but the points between them (in the pixel shader) will not be.  If you interpolate any normals, don’t normalize them in the vertex shader since you have to normalize them in the pixel shader anyway.
L. Spiro

#5214619 Concept for an input library - thoughts?

Posted by L. Spiro on 04 March 2015 - 08:25 PM

I think it is a noble thought, but not really feasible. See one of my many posts on input for reasons why.



The main issue is that inputs are not events and should not be triggered as events. They should always be buffered and consumed in specific steps at specific times during the game loop, and a proper integration with this type of system requires more intimate knowledge of the programmer’s game loop.
You suggest using your events to create the buffer, which is okay….

But it doesn’t address another issue: Key mapping.
I’ve recently had a long discussion with someone on where and when it is appropriate to translate keys to in-game actions.
I go into great detail here explaining why, when translating keys into actions, it is the context at the time of in-game action processing that matters, not the context at the time the button press is received.



The point where you are proposing to take the burden off the programmer by recognizing key combinations etc. is correct for applications, but incorrect for games.

At the level where you are working the only thing you can do is log inputs.  Parsing them to produce in-game actions can only be done in the middle of the actual game loop.


And even there you are split between high-level and low-level.

No matter where you do it, it’s not as simple as “send a jump command to the player”.  All commands sent to the in-game characters should be considered low-level and should not be questioned by the in-game characters.  Why?  Because the in-game characters are a slave to the physics engine and scene manager, not the other way around.  When you send a command to the character to jump, you’re telling it just to jump.  Not to check to see if it is on the ground or if the ceiling is high enough, etc.  That has to have already been done by the higher-level systems.



Ultimately I don’t see a good generic way to make a library out of input systems.  The only thing you can do is gather input from all possible platforms and devices and then keep a buffer.  After that, it varies as much as the number of programmers there are.



L. Spiro

#5214603 Methods for Drawing a Low-Resolution 2D Line Gradually?

Posted by L. Spiro on 04 March 2015 - 07:46 PM

As for a stippling/dashed effect... would I be right in thinking that could be achieved with something like:
- Check if pixel is past interpolation value.
- If pixel is past interpolation value, do something like only draw every 2nd pixel.

No. Virtually all effects you want to apply to it would be done by manipulating the alpha value (the result of (Cur - Start) / (End - Start)).
Rather than branching based on being above or behind the cut-off value, simply manipulate the alpha value creatively (design an algorithm to increase it by some amount in certain cases).

For example, you can find a dithering equation online easily.
The normal result of dithering is to pick either the next color up or the next color down based on screen position.
Instead, where the result would have you pick the next color up, increase the alpha value by 0.1 (testing required) and where it would have you pick the next color down, either do nothing or decrease the alpha (testing required).

This is just an example—it won’t work on perfectly diagonal lines (a typical problem with dithering).

If you can’t think of a creative algorithm to get the effect you want, then yes you could start manually manipulating the alpha based off selection statements etc.

L. Spiro

#5214596 Directional light position in shadow mapping

Posted by L. Spiro on 04 March 2015 - 07:15 PM

Technically, the light must have a position when rendering the shadow maps.

No it doesn’t. You only need a projection matrix (orthogonal in this case).

So how do you determine what position to use when rendering depth for each of the cascade frustrums?

how do you do culling for directional lights? do you view frustrum culling for each cascade, each frame?

You don’t calculate a position, you calculate an orthogonal matrix. I will answer both questions at the same time.
  • Get the camera view frustum (6 planes).
  • For each cascade.
    • Determine where you split the camera frustum.
    • Create a frustum (usually not 6 planes) based on this information.
      • The first 3 planes are the camera frustum planes that face away from the light.  This creates the set of far planes (there is no single far plane).
      • The next few planes shoot in the direction of the sun and create an outline around the camera frustum where, on the camera frustum, one plane faces away from the light and the border plane faces towards the light.  
      • There is no near plane unless you have a tool chain that allows artists to set one.
  • Frustum-cull.  Gather objects inside the new frustum.  These are objects that might cast shadows into the player’s view for the given cascade.
  • Build an orthogonal matrix based on the list of objects you just created.
    • Typically all entities in your scene will have a forward, right, and up vector, and this applies to lights as well.
    • Determine the left, right, top, bottom, near, and far values by going over each object and, based on the light’s up, right, and forward vectors, determining the maximum extents in each direction (and the inverse direction).  For example, to determine just the top and bottom values, given your light’s up vector and the list of objects inside its frustum:
      float fTop = -INFINITY;
      float fBottom = INFINITY;
      for ( size_t I = 0; I < vObjects.size(); ++I ) {
      	// Distance of the object’s center along the light’s up axis.
      	float fDist = DOT( lLight.Up(), vObjects[I].Pos() );
      	// The sphere’s topmost point is its center plus its radius;
      	// its bottommost point is its center minus its radius.
      	fTop = Max( fTop, fDist + vObjects[I].BoundingSphere().Radius() );
      	fBottom = Min( fBottom, fDist - vObjects[I].BoundingSphere().Radius() );
      }
    • The same loop can do the left, right, near, and far as well.
You now have the top, bottom, left, right, near, and far parameters for creating an orthogonal projection matrix.
Culling objects and determining the bounding box for the cascade are part of the same process and there is no need for a light position.

L. Spiro

#5214585 GGX image based lighting - mipmap artefacts

Posted by L. Spiro on 04 March 2015 - 06:42 PM

i tried to provide as much information as i could and asked some questions, which i hoped someone could answer given the code and screenshots i provided.

People didn’t reply mainly because of this. You overwhelmed everyone.
The point of providing information with your question isn’t to blindly maximize the amount of information you dump on us; it is to give us the minimum amount of information that is directly relevant.


Not only did I personally not feel the energy (or find the time) to go through that haystack looking for a needle, I didn’t even reply to point out some basic/obvious things because I didn’t want to feel pressured to reply to the rest after having publicly shown that I have read the topic, etc.


Here are the few things for which I have energy:

Skybox miplevels

Look at the transitions between each level.  From 0 to 1 and 1 to 2 there is almost no change.  From 2 to 3 looks mostly correct, and then suddenly from 3 to 4 it gets crazy-blurred.

It is very safe to say your mipmaps are wrong.  How did you generate them?

That isn’t your main bug, but something you will have to fix eventually.


"Have you tried manually sending mip map level" see my edit2.

Feel free to take advantage of the built-in quoting system; it makes it easier for those of us who are used to the site to help you.

And as for your reply, textureCubeLod().



L. Spiro