
DX11 Bitmap font engine problem


ballmar    1586

Hello.

 

I'm trying to create my own text-rendering engine based on Rastertek's tutorial. These are the steps I took to make it work:

 

1. Created my own .png file containing common ASCII symbols separated by spaces. The PNG file was not compressed.

2. Successfully parsed the PNG file so that each symbol has its own texture coordinates and width/height in pixels. The height is the same for each letter and equals the texture height.

3. Created a DDS file from the PNG font file.

4. Loaded the texture from the DDS file.

5. Created quads for each letter according to their width/height in pixels and texture coordinates.

6. Created an orthographic projection matrix to project them onto the screen without any scaling.
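
For reference, the pixel-to-screen mapping that such an orthographic projection performs can be sketched as plain math. This is a minimal sketch of the idea, not the engine's actual code; the struct and function names are mine:

```cpp
#include <cassert>
#include <cmath>

// Where an off-center orthographic projection sends a pixel-space point
// (origin at the top-left of the screen) in normalized device coordinates.
struct Ndc { float x, y; };

Ndc pixelToNdc(float px, float py, float screenW, float screenH)
{
    // x: [0, screenW] -> [-1, 1]; y: [0, screenH] -> [1, -1] (NDC y points up)
    return { 2.0f * px / screenW - 1.0f,
             1.0f - 2.0f * py / screenH };
}
```

With a 1:1 mapping like this, a quad whose corners land on whole pixels should sample the font texture one texel per pixel.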

 

Everything seems well, but the result looks bad:

[attachment=13749:game.png]

 

As you can see, the letters look dirty and imprecise. That's how the DDS file looks in the DX texture tool, for comparison:

[attachment=13750:font_dds.png]

ballmar    1586
Thank you for the response. I use this sampler state:

 

    D3D11_SAMPLER_DESC samplerDesc = {}; // zero-initialize every field first
    samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
    samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
    samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
    samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
    samplerDesc.MipLODBias = 0.0f;
    samplerDesc.MaxAnisotropy = 1;
    samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
    samplerDesc.BorderColor[0] = 0.0f;
    samplerDesc.BorderColor[1] = 0.0f;
    samplerDesc.BorderColor[2] = 0.0f;
    samplerDesc.BorderColor[3] = 0.0f;
    samplerDesc.MinLOD = 0.0f;
    samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;

    mpDevice->CreateSamplerState(&samplerDesc, &mpSamplerState);
 

 

hpdvs2    1017

Out of curiosity, why don't you use the sprite fonts that are part of XNA?

 

Anyway, I took a closer look. I thought it might have been a slightly reduced image, but the final image and the original font both show the capital 'M' at 10 pixels tall.

 

However, this looks like compression artifacts. That is, when it was saved, it may have applied some compression like PNG/JPG would use. Or it could be from the loading process. I haven't used DX font loading of any type, so I'm not sure exactly how it should work, but that really looks like compression artifacts.

Sporniket    360

Are you sure of your texture coordinates and your vertex coordinates? I had a similar issue with OpenGL, and I had to correct the texture coordinates. It was years ago though, so I cannot be more specific.

ballmar    1586

Thank you for the responses, guys.

 

Out of curiosity, why don't you use the sprite fonts that are part of XNA?

 

Anyway, I took a closer look. I thought it might have been a slightly reduced image, but the final image and the original font both show the capital 'M' at 10 pixels tall.

 

However, this looks like compression artifacts. That is, when it was saved, it may have applied some compression like PNG/JPG would use. Or it could be from the loading process. I haven't used DX font loading of any type, so I'm not sure exactly how it should work, but that really looks like compression artifacts.

 

I'm creating my own C++ rendering engine for the sake of learning DX11. Not sure if XNA sprite fonts can be used in such an application.

 

I don't think it is caused by image compression in my case, since the DDS file generated from the (uncompressed) image looks precise and has no artifacts. However, it is possible that this behavior is caused by the loading mechanism. In my engine I use the DDS loader provided with the new Microsoft D3D11 tutorials. I'll create a plane the size of the font texture and check whether this behavior persists even for a single plane mesh; thanks for pointing that out.

 

 

 

Are you sure of your texture coordinates and your vertex coordinates? I had a similar issue with OpenGL, and I had to correct the texture coordinates. It was years ago though, so I cannot be more specific.

 

Well, I'm absolutely sure about the generated vertex coordinates, and that I load and set the correct texture coordinates from the font description file generated by one of my tools. At first glance, the texture coordinates and width of every character obtained from this tool are legit. The whole mechanism for generating data from the PNG file in this tool is just getting the start pixel and end pixel of each symbol and dividing those values by the width of the file, so it's pretty clear and simple.

Erik Rufelt    5901

If you paste your two images on top of each other and align the letters it's clearly visible that the rendered characters are wider than those in the texture. Perhaps you scale horizontally somewhere. Also, if you manually specify the back-buffer size, double-check that it matches the window client area.

ballmar    1586

So you have startPixel/width and endPixel/width, or startPixel/width and (endPixel + 1)/width?

 

If you paste your two images on top of each other and align the letters it's clearly visible that the rendered characters are wider than those in the texture. Perhaps you scale horizontally somewhere. Also, if you manually specify the back-buffer size, double-check that it matches the window client area.

 

Yeah, I just checked the font description generation program and it seems it was (endPixel + 1)/width that caused the letters to look wider. Anyway, I fixed it, and the resulting text still looks bad:

 

[attachment=13758:update.png]

 

The back-buffer size is okay. By the way, the letters on this new screen are 32 pixels tall, so pay no attention to that.

hpdvs2    1017

Out of curiosity, why don't you use the sprite fonts that are part of XNA?

 

Oops, when I posted, a thread title near this one mentioned XNA and I slightly confused it with this one, presuming XNA, which as I recall allows the use of DX components.

 

I'm creating my own C++ rendering engine for the sake of learning DX11. Not sure if XNA sprite fonts can be used in such an application.

Yeah, same issue as before, and I don't think XNA fonts would be usable in DX.

rukiruki    161

For your UVs, can you try x / (width-1), (x+w) / (width-1), y / (height-1), (y+h) / (height-1)?

If you're using endPixel, try (endPixel - 1) / (width - 1)?

 

And try with no filtering in the sampler too.

*Edit: erm, the other types of filtering, I mean.

Jason Z    6434

I know you are trying to do this as a learning exercise, but have you tried just using the PNG version of the texture instead of the DDS?  It would be worth a shot to see if you can get rid of the compression artifacts as a possible source of error.

 

If you want to see another reference, MJP added his text renderer to Hieroglyph 3.  Take a look in the SpriteRendererDX11 class, and you will see how he is using GDI to build the glyph texture.  I recall having issues if the source texture was not anti-aliased, and if the origin texture size is slightly off from the end size in the render target.

Erik Rufelt    5901

In your new picture the letter widths are still different from the texture. If you are using point sampling, try switching to linear filtering and you will see more clearly that the texture is blurred. Disable blending, or add character boxes to your texture in different colors, so you can check that each quad matches the corresponding quad in the texture pixel by pixel.

ballmar    1586

Thank you for the responses, guys.

 

For your UVs, can you try x / (width-1), (x+w) / (width-1), y / (height-1), (y+h) / (height-1)?

If you're using endPixel, try (endPixel - 1) / (width - 1)?

 

And try with no filtering in the sampler too.

*Edit: erm, the other types of filtering, I mean.

 

Well yes, that's what I have done in my font description creator tool. Now the start texture coordinate is calculated as startPixel/(width-1) and the end texture coordinate as endPixel/(width-1). The height is the same for all letters, since they are placed in a single line inside the texture. I tried other types of filtering too, but unfortunately with no improvement.

 

I know you are trying to do this as a learning exercise, but have you tried just using the PNG version of the texture instead of the DDS?  It would be worth a shot to see if you can get rid of the compression artifacts as a possible source of error.

 

If you want to see another reference, MJP added his text renderer to Hieroglyph 3.  Take a look in the SpriteRendererDX11 class, and you will see how he is using GDI to build the glyph texture.  I recall having issues if the source texture was not anti-aliased, and if the origin texture size is slightly off from the end size in the render target.

 

Unfortunately, I'm not familiar with loading textures from file types other than DDS yet. I use the Microsoft DDS loader from their tutorials, since I'm writing code against the Windows 8 SDK, where a lot of the D3DX loading functions have been removed. However, I created a single plane the size of the DDS texture, covered it with that texture, and here is the result:

 

[attachment=13778:discrete_letters_comparing_to_single_solid_textured_plane.png]

 

The upper text is generated from a number of planes and fractional texture coordinates. The text in the center of the image is a solid rectangle with the whole font texture over it. As you can see, the latter is displayed fine, so I assume there is no distortion caused by the texture conversion mechanism.

 

Concerning GDI: thank you for pointing that out, but I hope to use the same text class I'm creating right now in the 3D world as well. As far as I understand, GDI is used for 2D text rendering on the window surface (the standard Win32 graphics API). The code of Hieroglyph 3 is extremely well written, but it's a little overwhelming for me as a beginner =) That's why I'm trying to create a very simple font rendering engine the same way as described in the Rastertek tutorial.

 

In your new picture the letter widths are still different from the texture. If you are using point sampling, try switching to linear filtering and you will see more clearly that the texture is blurred. Disable blending, or add character boxes to your texture in different colors, so you can check that each quad matches the corresponding quad in the texture pixel by pixel.

 

I'm using linear filtering, though I played with other filters and even tried disabling the Z-buffer, but those actions produced no result. Here is a comparison with and without alpha, to emphasize the pixels occupied by the letter planes. Pay no attention to the spaces between letters; they are auto-generated and do not belong to either letter's geometry:

 

[attachment=13779:comparing_1.png][attachment=13780:comparing_2.png]

 

If that helps, my font texture is 1900x18 pixels:

[attachment=13781:font_black.png]

Not sure why, but it seems the forum engine compressed it, so this is just an example.

 

And the generated coordinates are the following (each line consists of the letter, width in pixels, height in pixels, start texture U coordinate, and end texture U coordinate):

 

: 3 18 0.003159558 0.004212744
, 3 18 0.01421801 0.0152712
. 3 18 0.02632965 0.02738283
! 3 18 0.03791469 0.03896788
@ 7 18 0.04844655 0.05160611
# 8 18 0.059505 0.06319115
~ 8 18 0.07109005 0.07477619
% 7 18 0.08320168 0.08636124
$ 7 18 0.09478673 0.09794629
( 3 18 0.1079516 0.1090047
) 3 18 0.1184834 0.1195366
- 8 18 0.1290153 0.1327014
+ 9 18 0.1406003 0.1448131
= 9 18 0.1521854 0.1563981
/ 7 18 0.1637704 0.16693
* 7 18 0.175882 0.1790416
? 7 18 0.1874671 0.1906267
< 9 18 0.1985255 0.2027383
> 9 18 0.2101106 0.2143233
\ 7 18 0.2216956 0.2248552
A 11 18 0.2327541 0.23802
B 8 18 0.2448657 0.2485519
C 9 18 0.2564508 0.2606635
D 9 18 0.2680358 0.2722486
E 8 18 0.2796209 0.283307
F 8 18 0.2912059 0.294892
G 9 18 0.3027909 0.3070037
H 9 18 0.314376 0.3185887
I 7 18 0.3264876 0.3296472
J 9 18 0.3375461 0.3417588
K 9 18 0.3491311 0.3533439
L 9 18 0.3607162 0.3649289
M 11 18 0.3717746 0.3770405
N 9 18 0.3838862 0.388099
O 9 18 0.3954713 0.399684
P 8 18 0.4075829 0.4112691
Q 9 18 0.4186414 0.4228541
R 9 18 0.4302264 0.4344392
S 7 18 0.4423381 0.4454976
T 9 18 0.4533965 0.4576093
U 9 18 0.4649816 0.4691943
V 10 18 0.47604 0.4807793
W 11 18 0.4876251 0.492891
X 10 18 0.4992101 0.5039495
Y 9 18 0.5113217 0.5155345
Z 7 18 0.5234334 0.526593
a 8 18 0.5350184 0.5387046
b 9 18 0.5460769 0.5502896
c 8 18 0.557662 0.5613481
d 9 18 0.5692469 0.5734597
e 8 18 0.580832 0.5845182
f 8 18 0.5929437 0.5966298
g 8 18 0.6040021 0.6076882
h 9 18 0.6155872 0.6197999
i 7 18 0.6276988 0.6308584
j 5 18 0.6392838 0.6413902
k 8 18 0.6508689 0.654555
l 7 18 0.6624539 0.6656135
m 11 18 0.6729858 0.6782517
n 9 18 0.6850974 0.6893101
o 8 18 0.6966825 0.7003686
p 9 18 0.7082675 0.7124802
q 9 18 0.7198526 0.7240653
r 8 18 0.7319642 0.7356504
s 7 18 0.7435492 0.7467088
t 8 18 0.7546077 0.7582939
u 9 18 0.7661927 0.7704055
v 10 18 0.7772512 0.7819905
w 11 18 0.7888362 0.7941021
x 9 18 0.8009478 0.8051606
y 11 18 0.8120063 0.8172722
z 7 18 0.8246446 0.8278041
0 7 18 0.8362296 0.8393891
1 7 18 0.8478146 0.8509742
2 8 18 0.8588731 0.8625593
3 7 18 0.8709847 0.8741443
4 7 18 0.8825698 0.8857293
5 7 18 0.8941548 0.8973144
6 8 18 0.9057398 0.909426
7 7 18 0.9173249 0.9204845
8 7 18 0.92891 0.9320695
9 7 18 0.940495 0.9436545 

 

I think it can be caused by floating-point precision issues, but I'm not sure how to prove that, or how to fix it if that is the case. For example, the letter Q (width 9 pixels) will have 0.4186414 × 1899 = 795.00002 as its start pixel and 0.4228541 × 1899 = 802.9999359 as its end pixel when DX tries to fetch the texel for a concrete pixel.

 

I don't multiply anything in my application; I'm just wondering if DX could interpolate pixels wrongly. Though the texture coordinates in Rastertek's tutorial have the same issue, and the resulting text looks just fine there.
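
The precision worry above can be checked numerically. A quick sketch, reproducing the multiplication from this post in double precision (the constant names are mine):

```cpp
#include <cassert>
#include <cmath>

// Recover pixel positions for 'Q' from the stored U coordinates by scaling
// them back up by the same factor of 1899 used above.
const double kQStartPx = 0.4186414 * 1899.0;  // lands within ~2e-5 of 795
const double kQEndPx   = 0.4228541 * 1899.0;  // lands within ~7e-5 of 803
```

Both products land within about 1e-4 of a whole pixel, which suggests raw floating-point rounding by itself is a very small effect here.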

Erik Rufelt    5901

The problem is either the texture coordinates or the vertex coordinates for your quads. Are you sure you are using only integer coordinates?

If your quad vertices are like 10.1 instead of 10.0 this could happen. If your texture-coordinates are on exact pixel boundaries in the texture then you need to use vertex coordinates without fractional parts too.

Also, you should not use 1899, but 1900. Coordinate 1.0 would otherwise be on the left side of the last pixel, where it should be on the right side.

Floating point is not an issue, it is precise enough for this case.

 

So if Q starts at 177, 0 in the texture and is 9 pixels wide and 18 pixels high, use coords (177.0/1900.0, 0.0, (177.0+9.0)/1900.0, 1.0).

Use vertex-coordinates (x, y, (x + 9.0), (y + 18.0)) to draw it on the screen, where x and y are floor(...)'d to make sure they are integers.

 

If you later scale your text it's possible you want to include a half pixel border for linear filtering, and as such create texture coordinates at 10.5 / 1900 instead of 10.0 / 1900, at which point you also need to do that for vertex-coords on the screen. Start with integer coords.
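
The floor(...) snapping described above might look like this in code (a minimal sketch; the struct and function names are my own choosing):

```cpp
#include <cassert>
#include <cmath>

// Snap a glyph quad's top-left corner to a whole pixel and derive the other
// corner from the integer glyph size, so texels map 1:1 onto screen pixels.
struct Quad { float x0, y0, x1, y1; };

Quad makeGlyphQuad(float x, float y, float w, float h)
{
    const float ix = std::floor(x);
    const float iy = std::floor(y);
    return { ix, iy, ix + w, iy + h };
}
```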

ballmar    1586

The problem is either the texture coordinates or the vertex coordinates for your quads. Are you sure you are using only integer coordinates?

If your quad vertices are like 10.1 instead of 10.0 this could happen. If your texture-coordinates are on exact pixel boundaries in the texture then you need to use vertex coordinates without fractional parts too.

Also, you should not use 1899, but 1900. Coordinate 1.0 would otherwise be on the left side of the last pixel, where it should be on the right side.

Floating point is not an issue, it is precise enough for this case.

 

So if Q starts at 177, 0 in the texture and is 9 pixels wide and 18 pixels high, use coords (177.0/1900.0, 0.0, (177.0+9.0)/1900.0, 1.0).

Use vertex-coordinates (x, y, (x + 9.0), (y + 18.0)) to draw it on the screen, where x and y are floor(...)'d to make sure they are integers.

 

If you later scale your text it's possible you want to include a half pixel border for linear filtering, and as such create texture coordinates at 10.5 / 1900 instead of 10.0 / 1900, at which point you also need to do that for vertex-coords on the screen. Start with integer coords.

Thanks for response.

 

The coordinates of my quads are float values, but they don't have any fractional parts; I just checked that. For example, when I create a sentence containing only the letter Q, the quad I get is:

 

Coordinate 0: 0    -18    0     Texture: 0.418641    1 // bottom-left
Coordinate 1: 9    -18    0     Texture: 0.422854    1 // bottom-right
Coordinate 2: 0    -0    0     Texture: 0.418641    0 // top-left
Coordinate 3: 9    -0    0     Texture: 0.422854    0 // top-right
 

 

As you can see from my previous post, they are accurately formed from the font description file.

 

Those coordinates are then modified in the vertex shader by a world matrix, which just offsets them into an x,y coordinate system where (0,0) lies at the top left of the screen. Without multiplication by the world matrix, the quad is displayed near the center of the screen, but the letter looks the same as with the world transformation, so this transformation has no impact on the letter's bad appearance.

 

Also, won't the last column of pixels have index 1899 if the first column has index 0? In that case the last column of pixels will have a texture U coordinate of 1899/1900. Shouldn't it be equal to 1?

rukiruki    161

Can you post the formula you're using to calculate UV?
 

For example, the letter Q (width 9 pixels) will have 0.4186414 × 1899 = 795.00002 as its start pixel and 0.4228541 × 1899 = 802.9999359 as its end pixel when DX tries to fetch the texel for a concrete pixel.

 

803 - 795 is not 9 pixels wide

Here's mine i use to achieve pixel perfect precision

float cRenderBuffer::convertUVDX( int pixelValIn )
{
	// Return
	return (float)( pixelValIn ) / (float)( currentTexture->baseDimension - 1 ); 
}

so 795 should equate to 0.4186413902053712 and 804 should equate to 0.4233807266982622

What you posted seems to be placing an 8-pixel-wide UV onto a 9-pixel-wide quad, unless I misread!

ballmar    1586

Can you post the formula you're using to calculate UV?
 

For example, the letter Q (width 9 pixels) will have 0.4186414 × 1899 = 795.00002 as its start pixel and 0.4228541 × 1899 = 802.9999359 as its end pixel when DX tries to fetch the texel for a concrete pixel.

 

803 - 795 is not 9 pixels wide

Here's mine i use to achieve pixel perfect precision

float cRenderBuffer::convertUVDX( int pixelValIn )
{
	// Return
	return (float)( pixelValIn ) / (float)( currentTexture->baseDimension - 1 ); 
}

so 795 should equate to 0.4186413902053712 and 804 should equate to 0.4233807266982622

What you posted seems to be placing an 8-pixel-wide UV onto a 9-pixel-wide quad, unless I misread!

 

Well, 795 is the start pixel position and 803 is the end pixel position of the `Q` symbol. That means both should be included when calculating the symbol's width, hence the resulting width is 803 - 795 + 1 = 9. My old algorithm was the same as yours.
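
The inclusive-index convention used here can be written out as a one-liner (a sketch of my own, just to pin down the convention):

```cpp
#include <cassert>

// Width of a glyph whose first and last non-empty columns are BOTH counted
// (the inclusive-index convention used in the post above).
int inclusiveWidth(int firstCol, int lastCol)
{
    return lastCol - firstCol + 1;
}
```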

 

Anyway, it looks like I have fixed the issue now. I imagined the texture as a grid of texels, where each pixel resides between the grid lines, not on them; that's why it cannot have a pixelPos/(width-1) texture coordinate, since `pixelPos` itself is 1.0/(width-1) wide! It means that to obtain the texture coordinate (the nearest grid line), I have to decrease the symbol's start pixel position by 0.5 and increase the symbol's last pixel position by 0.5. Here is the algorithm I use to analyze the PNG font texture, written in C#:

 

Bitmap data = new Bitmap(filename);
int width = data.Width;
int height = data.Height;


int letter_start = -1;
int letter_end = -1;
int current_letter = 0;
for (int i = 0; i < width; i++)
{
    bool is_empty_column = true;
    for (int j = 0; j < height; j++)
    {
        if (data.GetPixel(i, j).A != 0)
        {
            is_empty_column = false;
            if (letter_start == -1)
            {
                letter_start = i;
            }
        }
    }
    if (is_empty_column == true)
    {
        if (letter_start != -1)
        {
            letter_end = i - 1;
            mapped_alphabet[alphabet[current_letter]] = new LetterInfo(letter_start, 
                letter_end, letter_end - letter_start + 1, height, ((float)letter_start - 0.5f)/((float)width-1.0f), 
                ((float)letter_end + 0.5f)/((float)width-1.0f));


            letter_start = -1;
            current_letter += 1;
        }
    }
}

 

where alphabet = ":,.!@#~%$()-+=/*?<>\\ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"

 

And the result is:

 

[attachment=13783:result.png]

 

There are rare occurrences of half-transparent symbols, such as `9`, but everything else looks good.

 

Thanks to everyone who found the time to help! =)

rukiruki    161

I'm glad you got it working!

I'm confused as to why you're using "letter_end - letter_start + 1" and "803 - 795 + 1 = 9"; the +1 seems wrong to me.

If start pixel is 795 and width is 9, you should be passing 795 and 795 + 9 (which is 804) into the UV equation.

u1 = 795 / 1899

u2 = 804 / 1899

 

And hopefully you then don't have to bother with the 0.5 texel offsets, and it should be pixel perfect.

 

But your way may be fine anyway; it's just the way I've gotten used to. Good luck :)

ballmar    1586

Wow, I accidentally found a perfect article describing the process I tried to explain in my previous post :D

Hope it will help someone who is stuck on this problem just like me.

 

It describes how you must subtract 0.5 to make each pixel and its corresponding texel match.

 

I'm glad you got it working!

I'm confused as to why you're using "letter_end - letter_start + 1" and "803 - 795 + 1 = 9"; the +1 seems wrong to me.

If start pixel is 795 and width is 9, you should be passing 795 and 795 + 9 (which is 804) into the UV equation.

u1 = 795 / 1899

u2 = 804 / 1899

 

And hopefully you then don't have to bother with the 0.5 texel offsets, and it should be pixel perfect.

 

But your way may be fine anyway; it's just the way I've gotten used to. Good luck :)

 

Take a look at this:

 

[attachment=13784:example.png]

 

It illustrates my approach more closely.

 

Thank you for the advice, regardless! =)

Erik Rufelt    5901

Wow, I accidentally found a perfect article describing the process I tried to explain in my previous post :D

Hope it will help someone who is stuck on this problem just like me.

 

It describes how you must subtract 0.5 to make each pixel and its corresponding texel match.

 

That is not correct in D3D11, only in D3D9; it has changed. If that fixes your problem then it is coincidental and not your actual issue, as you can see from the letters that still look wrong.

 

You should divide by 1900 to get correct coordinates if that is the size of the texture, not 1899. The last pixel begins at 1899, but ends at 1900.

Imagine a texture that is just 1x1 or 2x2 in size. If you divide by width-1 you get completely wrong results.

 

If your Q begins at 795 and is 9 pixels wide then the correct coords are:

795.0 / 1900.0 = 0.4184210526

(795.0 + 9.0) / 1900.0 = 0.4231578947

 

If you have a 1x1 sized letter at the last pixel on the right side of the texture, then it starts at 1899 and has a width of 1.

1899.0 / 1900.0 = 0.9994736842

(1899.0 + 1) / 1900.0 = 1.0

 

Each single pixel is a quad with 4 edges, it is not a zero-width point. Each single pixel has a width of 1.0 / 1900.0
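
The rule above, put into code form: pixel i spans [i, i+1], so both glyph edges divide by the full texture width. This is a minimal sketch with names of my own choosing:

```cpp
#include <cassert>
#include <cmath>

// U range for a glyph starting at pixel `startPx`, `widthPx` pixels wide,
// in a texture `texWidth` pixels across. Pixel i occupies [i, i+1], so the
// right edge is startPx + widthPx; divide by texWidth, NOT texWidth - 1.
void glyphU(int startPx, int widthPx, int texWidth, float& u0, float& u1)
{
    u0 = static_cast<float>(startPx) / static_cast<float>(texWidth);
    u1 = static_cast<float>(startPx + widthPx) / static_cast<float>(texWidth);
}
```

For the 1x1 letter at the right edge of the texture, this yields u1 = 1.0 exactly, matching the example above.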

ballmar    1586

Wow, I accidentally found a perfect article describing the process I tried to explain in my previous post :D

Hope it will help someone who is stuck on this problem just like me.

 

It describes how you must subtract 0.5 to make each pixel and its corresponding texel match.

 

That is not correct in D3D11, only in D3D9; it has changed. If that fixes your problem then it is coincidental and not your actual issue, as you can see from the letters that still look wrong.

 

You should divide by 1900 to get correct coordinates if that is the size of the texture, not 1899. The last pixel begins at 1899, but ends at 1900.

Imagine a texture that is just 1x1 or 2x2 in size. If you divide by width-1 you get completely wrong results.

 

If your Q begins at 795 and is 9 pixels wide then the correct coords are:

795.0 / 1900.0 = 0.4184210526

(795.0 + 9.0) / 1900.0 = 0.4231578947

 

If you have a 1x1 sized letter at the last pixel on the right side of the texture, then it starts at 1899 and has a width of 1.

1899.0 / 1900.0 = 0.9994736842

(1899.0 + 1) / 1900.0 = 1.0

 

Each single pixel is a quad with 4 edges, it is not a zero-width point. Each single pixel has a width of 1.0 / 1900.0

 

Oh, thanks for clearing that up for me. Now I have everything working as intended! =)

Jason Z    6434

Concerning GDI: thank you for pointing that out, but I hope to use the same text class I'm creating right now in the 3D world as well. As far as I understand, GDI is used for 2D text rendering on the window surface (the standard Win32 graphics API). The code of Hieroglyph 3 is extremely well written, but it's a little overwhelming for me as a beginner =) That's why I'm trying to create a very simple font rendering engine the same way as described in the Rastertek tutorial.

Thanks for the compliment :)  I know you already solved the problem, but just for clarification about this point: GDI is indeed a 2D text rendering technology.  However, Hieroglyph uses GDI to generate the 2D texture, similar to what you are doing manually.  This generated texture (I guess it would be called a glyph texture) can then be used in 2D rendering like what you are doing now, or in 3D as well.  Both methods are currently supported in Hieroglyph via the SpriteRendererDX11 (for 2D) and TextActor (for 3D).

 

If you ever have any questions about Hieroglyph, please feel free to shoot me an IM and I would be happy to help.


Create an account or sign in to comment

You need to be a member in order to leave a comment

Create an account

Sign up for a new account in our community. It's easy!

Register a new account

Sign in

Already have an account? Sign in here.

Sign In Now

Sign in to follow this  

  • Similar Content

    • By isu diss
       I'm trying to code Rayleigh part of Nishita's model (Display Method of the Sky Color Taking into Account Multiple Scattering). I get black screen no colors. Can anyone find the issue for me?
       
      #define InnerRadius 6320000 #define OutterRadius 6420000 #define PI 3.141592653 #define Isteps 20 #define Ksteps 10 static float3 RayleighCoeffs = float3(6.55e-6, 1.73e-5, 2.30e-5); RWTexture2D<float4> SkyColors : register (u0); cbuffer CSCONSTANTBUF : register( b0 ) { float fHeight; float3 vSunDir; } float Density(float Height) { return exp(-Height/8340); } float RaySphereIntersection(float3 RayOrigin, float3 RayDirection, float3 SphereOrigin, float Radius) { float t1, t0; float3 L = SphereOrigin - RayOrigin; float tCA = dot(L, RayDirection); if (tCA < 0) return -1; float lenL = length(L); float D2 = (lenL*lenL) - (tCA*tCA); float Radius2 = (Radius*Radius); if (D2<=Radius2) { float tHC = sqrt(Radius2 - D2); t0 = tCA-tHC; t1 = tCA+tHC; } else return -1; return t1; } float RayleighPhaseFunction(float cosTheta) { return ((3/(16*PI))*(1+cosTheta*cosTheta)); } float OpticalDepth(float3 StartPosition, float3 EndPosition) { float3 Direction = normalize(EndPosition - StartPosition); float RayLength = RaySphereIntersection(StartPosition, Direction, float3(0, 0, 0), OutterRadius); float SampleLength = RayLength / Isteps; float3 tmpPos = StartPosition + 0.5 * SampleLength * Direction; float tmp; for (int i=0; i<Isteps; i++) { tmp += Density(length(tmpPos)-InnerRadius); tmpPos += SampleLength * Direction; } return tmp*SampleLength; } static float fExposure = -2; float3 HDR( float3 LDR) { return 1.0f - exp( fExposure * LDR ); } [numthreads(32, 32, 1)] //disptach 8, 8, 1 it's 256 by 256 image void ComputeSky(uint3 DTID : SV_DispatchThreadID) { float X = ((2 * DTID.x) / 255) - 1; float Y = 1 - ((2 * DTID.y) / 255); float r = sqrt(((X*X)+(Y*Y))); float Theta = r * (PI); float Phi = atan2(Y, X); static float3 Eye = float3(0, 10, 0); float ViewOD = 0, SunOD = 0, tmpDensity = 0; float3 Attenuation = 0, tmp = 0, Irgb = 0; //if (r<=1) { float3 ViewDir = normalize(float3(sin(Theta)*cos(Phi), cos(Theta),sin(Theta)*sin(Phi) )); float ViewRayLength = RaySphereIntersection(Eye, 
ViewDir, float3(0, 0, 0), OutterRadius); float SampleLength = ViewRayLength / Ksteps; //vSunDir = normalize(vSunDir); float cosTheta = dot(normalize(vSunDir), ViewDir); float3 tmpPos = Eye + 0.5 * SampleLength * ViewDir; for(int k=0; k<Ksteps; k++) { float SunRayLength = RaySphereIntersection(tmpPos, vSunDir, float3(0, 0, 0), OutterRadius); float3 TopAtmosphere = tmpPos + SunRayLength*vSunDir; ViewOD = OpticalDepth(Eye, tmpPos); SunOD = OpticalDepth(tmpPos, TopAtmosphere); tmpDensity = Density(length(tmpPos)-InnerRadius); Attenuation = exp(-RayleighCoeffs*(ViewOD+SunOD)); tmp += tmpDensity*Attenuation; tmpPos += SampleLength * ViewDir; } Irgb = RayleighCoeffs*RayleighPhaseFunction(cosTheta)*tmp*SampleLength; SkyColors[DTID.xy] = float4(Irgb, 1); } }  
    • By Endurion
      I have a gaming framework with an renderer interface. Those support DX8, DX9 and latest, DX11. Both DX8 and DX9 use fixed function pipeline, while DX11 obviously uses shaders. I've got most of the parts working fine, as in I can switch renderers and notice almost no difference. The most advanced features are 2 directional lights with a single texture  
      My last problem is lighting; albeit there's documentation on the D3D lighting model I still can't get the behaviour right. My mistake shows most prominently in the dark side opposite the lights. I'm pretty sure the ambient calculation is off, but that one's supposed to be the most simple one and should be hard to get wrong.
      Interestingly I've been searching high and low, and have yet to find a resource that shows how to build a HLSL shader where diffuse, ambient and specular are used together with material properties. I've got various shaders for all the variations I'm supporting. I stepped through the shader with the graphics debugger, but the calculation seems to do what I want. I'm just not sure the formula is correct.
      This one should suffice though, it's doing two directional lights, texture modulated with vertex color and a normal. Maybe someone can spot one (or more mistakes). And yes, this is in the vertex shader and I'm aware lighting will be as "bad" as in fixed function; that's my goal currently.
      // A constant buffer that stores the three basic column-major matrices for composing geometry.
      cbuffer ModelViewProjectionConstantBuffer : register(b0)
      {
          matrix model;
          matrix view;
          matrix projection;
          matrix ortho2d;
      };

      struct DirectionLight
      {
          float3 Direction;
          float PaddingL1;
          float4 Ambient;
          float4 Diffuse;
          float4 Specular;
      };

      cbuffer LightsConstantBuffer : register(b1)
      {
          float4 Ambient;
          float3 EyePos;
          float PaddingLC1;
          DirectionLight Light[8];
      };

      struct Material
      {
          float4 MaterialEmissive;
          float4 MaterialAmbient;
          float4 MaterialDiffuse;
          float4 MaterialSpecular;
          float MaterialSpecularPower;
          float3 MaterialPadding;
      };

      cbuffer MaterialConstantBuffer : register(b2)
      {
          Material _Material;
      };

      // Per-vertex data used as input to the vertex shader.
      struct VertexShaderInput
      {
          float3 pos : POSITION;
          float3 normal : NORMAL;
          float4 color : COLOR0;
          float2 tex : TEXCOORD0;
      };

      // Per-pixel color data passed through the pixel shader.
      struct PixelShaderInput
      {
          float4 pos : SV_POSITION;
          float2 tex : TEXCOORD0;
          float4 color : COLOR0;
      };

      // Simple shader to do vertex processing on the GPU.
      PixelShaderInput main(VertexShaderInput input)
      {
          PixelShaderInput output;
          float4 pos = float4(input.pos, 1.0f);

          // Transform the vertex position into projected space.
          pos = mul(pos, model);
          pos = mul(pos, view);
          pos = mul(pos, projection);
          output.pos = pos;

          // Pass texture coords.
          output.tex = input.tex;

          // Calculate the normal vector against the world matrix only.
          // Set required lighting vectors for interpolation.
          float3 normal = mul(input.normal, (float3x3)model);
          normal = normalize(normal);

          float4 ambientEffect = Ambient;
          float4 diffuseEffect = float4(0, 0, 0, 0);
          float4 specularEffect = float4(0, 0, 0, 0);

          for (int i = 0; i < 2; ++i)
          {
              // Invert the light direction for calculations.
              float3 lightDir = -Light[i].Direction;
              float lightFactor = max(dot(lightDir, input.normal), 0);
              ambientEffect += Light[i].Ambient * _Material.MaterialAmbient;
              diffuseEffect += saturate(Light[i].Diffuse * dot(normal, lightDir)); // * _Material.MaterialDiffuse;
              //specularEffect += Light[i].Specular * dot(normal, halfangletolight) * _Material.MaterialSpecularPower;
          }
          specularEffect *= _Material.MaterialSpecular;

          //ambientEffect.w = 1.0;
          ambientEffect = normalize(ambientEffect);

          /* Ambient effect:  (L1.ambient + L2.ambient) * object ambient color
             Diffuse effect:  (L1.diffuse * Dot(VertexNormal, Light1.Direction)
                             + L2.diffuse * Dot(VertexNormal, Light2.Direction)) * object diffuse color
             Specular effect: (L1.specular * Dot(VertexNormal, HalfAngleToLight1) * Object specular reflection power
                             + L2.specular * Dot(VertexNormal, HalfAngleToLight2) * Object specular reflection power) * object specular color
             Resulting color = Ambient effect + diffuse effect + specular effect */
          float4 totalFactor = ambientEffect + diffuseEffect + specularEffect;
          totalFactor.w = 1.0;
          output.color = input.color * totalFactor;
          return output;
      }

      Edit: This message editor is driving me nuts (Arrrr!) - I don't write code in Word.
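      For comparison, a conventional way to write that accumulation loop might look like the sketch below. This is offered as a baseline to diff against, not as the definitive fix: it uses the transformed `normal` rather than `input.normal`, actually consumes `lightFactor`, and clamps the ambient sum with `saturate()` instead of `normalize()` (normalizing rescales the color rather than limiting it). `worldPos` is a placeholder the original shader does not compute:

      ```hlsl
      for (int i = 0; i < 2; ++i)
      {
          // Invert the light direction for calculations.
          float3 lightDir = -Light[i].Direction;

          // Use the world-space normal consistently (the original dots input.normal here).
          float lightFactor = max(dot(normal, lightDir), 0.0f);

          ambientEffect += Light[i].Ambient * _Material.MaterialAmbient;
          diffuseEffect += Light[i].Diffuse * lightFactor * _Material.MaterialDiffuse;

          // Blinn-Phong specular would need a half vector, which in turn needs the
          // world-space vertex position (worldPos is hypothetical here):
          // float3 halfVec = normalize(lightDir + normalize(EyePos - worldPos));
          // specularEffect += Light[i].Specular *
          //     pow(max(dot(normal, halfVec), 0.0f), _Material.MaterialSpecularPower);
      }
      specularEffect *= _Material.MaterialSpecular;

      // Clamp instead of normalize: normalize() changes the brightness ratio of the channels.
      ambientEffect = saturate(ambientEffect);
      ```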
    • By Mercesa
      Hey folks. I'm having a problem where, if my camera is close to a surface, the SSAO pass suddenly spikes to around 16 milliseconds.
      When I keep looking at the same surface but from farther away, the frame time settles back to normal.
      This happens with ANY surface of my model; I am a bit clueless about what could cause this. Any ideas?
      In the attached image the y axis is time in ms and the x axis is the current frame. The dips in SSAO milliseconds are when I moved away from the surface; the peaks happen when I am very close to it.

       
      Edit: I've done some more in-depth profiling with Nvidia Nsight. These are the facts from my results:
      The count of command buffers goes from 4 (far away from the surface) to ~20 (close to the surface).
      The command buffer duration goes from around ~30% to ~99%.
      Sometimes the CPU duration takes between 0.016 and 0.03 milliseconds per frame, while it usually takes around 0.002 milliseconds.
      I am using a vertex shader which generates my full-screen quad, and afterwards I do my SSAO calculations in the pixel shader. Could this be a GPU driver bug? I'm a bit lost myself. It seems there could be a CPU/GPU resource stall, but why would the number of command buffers depend on the distance from a surface?
       
       
      Edit 2: Any resolution above 720p starts to show this issue, and I am fairly certain my SSAO is not so performance-heavy that it would fall apart at slightly higher resolutions.
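      For what it's worth, one classic cause of exactly this pattern (SSAO cost exploding only near geometry) is that a fixed view-space sampling radius projects to an enormous screen-space radius up close: the samples then scatter across the whole render target, the texture fetches lose all cache coherence, and the pass cost spikes. A hedged sketch of one common mitigation, clamping the projected radius; all names here (`radiusVS`, `maxRadiusSS`, `proj`, `positionVS`, `sampleDirection`) are placeholders rather than anything from the actual shader:

      ```hlsl
      // Approximate the screen-space footprint of the view-space sample radius.
      // proj._11 is the x scale of the projection matrix; positionVS.z is the
      // view-space depth of the shaded pixel.
      float radiusSS = radiusVS * proj._11 / max(positionVS.z, 0.001f);

      // Clamp it so near-camera pixels cannot gather samples from across the whole screen.
      radiusSS = min(radiusSS, maxRadiusSS); // e.g. maxRadiusSS = 0.05 in NDC units

      float2 sampleUV = uv + sampleDirection.xy * radiusSS;
      ```

      If the spike disappears with a clamp like this, the issue is sampling incoherence rather than a driver bug.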
       
    • By turanszkij
      In DirectX 11 we have a 24 bit integer depth + 8 bit stencil format for depth-stencil resources (DXGI_FORMAT_D24_UNORM_S8_UINT). However, in an AMD GPU documentation for consoles I have seen it mentioned that internally this format is implemented as a 64 bit resource, with 32 bits for depth (of which only 24 are used) and 32 bits for stencil (of which only 8 are used). AMD recommends instead using a 32 bit floating point depth buffer with 8 bit stencil, which is this format: DXGI_FORMAT_D32_FLOAT_S8X24_UINT.
      Does anyone know why this is? What is the usual way of doing this: just follow the recommendation and use a 64 bit depth-stencil? Are there performance considerations, or is it recommended simply to avoid wasting memory? And what about Nvidia and Intel: is a 24 bit depth buffer still relevant on their hardware?
      Cheers!
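      The memory side of the question can be checked with a bit of arithmetic. A minimal sketch, assuming the internal layout the AMD documentation describes (a 32-bit depth plane plus a 32-bit-padded stencil plane); the 1920x1080 resolution is just an example:

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      int main(void)
      {
          /* Example 1920x1080 depth-stencil target. */
          const uint64_t pixels = 1920ull * 1080ull;

          /* DXGI_FORMAT_D24_UNORM_S8_UINT nominally packs depth + stencil
             into 32 bits, i.e. 4 bytes per pixel... */
          const uint64_t d24s8_nominal = pixels * 4ull;

          /* ...but with a 32-bit depth plane plus a 32-bit-padded stencil
             plane it actually occupies 8 bytes per pixel. */
          const uint64_t d24s8_actual = pixels * (4ull + 4ull);

          /* DXGI_FORMAT_D32_FLOAT_S8X24_UINT is a 64-bit format by definition
             (32 depth + 8 stencil + 24 unused bits). */
          const uint64_t d32s8x24 = pixels * 8ull;

          /* On such hardware the float format costs no extra memory. */
          assert(d24s8_actual == d32s8x24);
          printf("nominal D24S8: %llu bytes, actual: %llu bytes, D32_S8X24: %llu bytes\n",
                 (unsigned long long)d24s8_nominal,
                 (unsigned long long)d24s8_actual,
                 (unsigned long long)d32s8x24);
          return 0;
      }
      ```

      In other words, on hardware with that internal layout DXGI_FORMAT_D32_FLOAT_S8X24_UINT costs the same 8 bytes per sample that D24_UNORM_S8_UINT already occupies, so switching buys full float depth precision for free; whether Nvidia and Intel pack D24S8 more tightly is hardware-specific.
      
      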
       
    • By gsc
      Hi! I am trying to implement simple SSAO postprocess. The main source of my knowledge on this topic is that awesome tutorial.
      But unfortunately something doesn't work, and after a few long hours I need some help. Here is my HLSL shader:
      float3 randVec = _noise * 2.0f - 1.0f; // noise: vec: {[0;1], [0;1], 0}
      float3 tangent = normalize(randVec - normalVS * dot(randVec, normalVS));
      float3 bitangent = cross(tangent, normalVS);
      float3x3 TBN = float3x3(tangent, bitangent, normalVS);

      float occlusion = 0.0;
      for (int i = 0; i < kernelSize; ++i)
      {
          float3 samplePos = samples[i].xyz; // samples: {[-1;1], [-1;1], [0;1]}
          samplePos = mul(samplePos, TBN);
          samplePos = positionVS.xyz + samplePos * ssaoRadius;

          float4 offset = float4(samplePos, 1.0f);
          offset = mul(offset, projectionMatrix);
          offset.xy /= offset.w;
          offset.y = -offset.y;
          offset.xy = offset.xy * 0.5f + 0.5f;

          float sampleDepth = tex_4.Sample(textureSampler, offset.xy).a;
          sampleDepth = vsPosFromDepth(sampleDepth, offset.xy).z;

          const float threshold = 0.025f;
          float rangeCheck = abs(positionVS.z - sampleDepth) < ssaoRadius ? 1.0 : 0.0;
          occlusion += (sampleDepth <= samplePos.z + threshold ? 1.0 : 0.0) * rangeCheck;
      }
      occlusion = saturate(1 - (occlusion / kernelSize));

      And current result: http://imgur.com/UX2X1fc
      I would really appreciate any advice!
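      One thing worth double-checking in a loop like the one above is the depth-comparison direction and the hard range check. For reference, the LearnOpenGL-style test, adapted under the assumption that `positionVS`/`sampleDepth` are in a D3D view space where +z points away from the camera (flip the comparison if yours uses -z forward), would be along these lines:

      ```hlsl
      // Soft falloff instead of a binary range check.
      float rangeCheck = smoothstep(0.0f, 1.0f,
                                    ssaoRadius / abs(positionVS.z - sampleDepth));

      // With +z growing away from the camera, an occluder sits at a *smaller* z
      // than the sample; the bias is subtracted to avoid self-occlusion.
      occlusion += (sampleDepth <= samplePos.z - threshold ? 1.0f : 0.0f) * rangeCheck;
      ```

      Note the posted code adds the threshold instead, which biases toward more occlusion rather than less; whether that is the actual bug depends on the view-space convention of `vsPosFromDepth`.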