
Member Since 06 Oct 2006
Offline Last Active Nov 22 2014 09:52 PM

#5165713 GPL licence question

Posted by DarkRonin on 08 July 2014 - 07:56 PM

Hi Guys,


With the GPL v3 licence, does the coder have to make their own source code available for any application created using a GPL-licensed API, or do they only have to supply (or link to) the original source code of the API that they used?


This seems to be an extremely confusing area for me.


Thanks in advance. :)

#5165461 Loading textures to the GPU

Posted by DarkRonin on 07 July 2014 - 11:52 PM

Hi guys!


Not really a problem, just a quick question. :)


When you load a texture (for example a JPEG) onto the d3dDevice via CreateTextureFromFile() or similar, is the texture decompressed by the API and then sent to the video card, or is the image decompressed on the video card itself?


Thanks in advance :D

#5164505 Full screen / windowed question

Posted by DarkRonin on 03 July 2014 - 01:00 AM

Found it!

If anyone else is interested it is...



Interesting to note that your application should be in windowed mode when releasing the swap chain. This explains why I was getting odd leaks when I knew my code wasn't (or shouldn't have been) leaking. So, I fixed two issues in one go today :)

#5161397 Orthographic camera

Posted by DarkRonin on 18 June 2014 - 08:07 PM

Ok, I made some decent progress. The code now works perfectly, except that the y axis starts at the bottom of the screen and increases upwards.
// Set up the view
XMMATRIX viewMatrix = XMMatrixIdentity();
XMMATRIX projMatrix = XMMatrixOrthographicOffCenterLH(0.0f, (float)width, 0.0f, (float)height, 0.0f, 100.0f); // 800 x 450

// Position the object
XMMATRIX scaleMatrix = XMMatrixScaling(1.0f * 128.0f, 1.0f * 128.0f, 0.0f); // Correct for a 256px sprite as verts are -1 to 1 (fix later)
XMMATRIX rotationMatrix = XMMatrixRotationZ(0.0f);
XMMATRIX translationMatrix = XMMatrixTranslation(0.0f, 0.0f, 0.0f);
XMMATRIX worldMat = scaleMatrix * rotationMatrix * translationMatrix;


So, overall I am pretty happy, as I had no clue about shaders and DX11 just 24 hours ago. If I can nail this last issue (the y axis being upside down) I'll be extremely happy.

#5161192 Orthographic camera

Posted by DarkRonin on 17 June 2014 - 07:14 PM

I was fearing that might be the case.


I'll have to hit it head on and take up the challenge then. :)

#5161177 Orthographic camera

Posted by DarkRonin on 17 June 2014 - 05:54 PM

Hi Guys,


I am currently moving from DX9 (fixed function) to DX11. All is going well so far, but now I am creating the camera system. My shader knowledge is next to zero, so it is a bit different.


Primarily, I'll be making 2D applications (at this point in time) so I'll need an orthographic setup.


I figure I can go about this in one of two ways.


1 - Cheat and use dynamic vertex buffers and move everything manually.

2 - Setup a camera system.


Is #1 a valid approach, or is it purely a hack?


With #2 do I have to do this all with shaders or is there a way to do this with function calls? Could anyone point me in the right direction on how to go about this?


Thanks in advance :)

#5160976 DirectX 11 Alpha Blending

Posted by DarkRonin on 16 June 2014 - 08:42 PM

Got it! B)


Just had to do this...


ID3D11BlendState* d3dBlendState;
D3D11_BLEND_DESC omDesc;
ZeroMemory(&omDesc, sizeof(D3D11_BLEND_DESC));
omDesc.RenderTarget[0].BlendEnable = TRUE;
omDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
omDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
omDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
omDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
omDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
omDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
omDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

if (FAILED(d3dDevice->CreateBlendState(&omDesc, &d3dBlendState)))
    return false;

d3dContext->OMSetBlendState(d3dBlendState, 0, 0xffffffff);

#5157486 Problem show up when I use debug in DX9 control panel

Posted by DarkRonin on 02 June 2014 - 12:30 AM

Damn, what a stupid mistake. I saw what was wrong straight after I posted this (must be the different font - LOL).

I am locking the vertex buffer twice in a row.

#5155005 Equivalent of glHint(GL_PERSPECTIVE_CORRECTION_HINT , GL_NICEST);

Posted by DarkRonin on 21 May 2014 - 02:19 AM

If the shape is 2D, and you've manually moved the top two verts closer together, you'll get the image in the middle. D3D/GL cannot magically fix this for you.

Exactly, this was the purpose of the original question. It seems that this GL hint does 'magically' fix it.

#5153488 What DirectX version?

Posted by DarkRonin on 13 May 2014 - 11:59 PM

I tried looking into OpenGL about a month ago.


It was so confusing as there is no SDK as such available.


So, it makes it extremely difficult sifting around the net trying to figure out what is going on and what the proper way to use OpenGL is.


Like is it meant to be SDL, GLUT, [insert lib here]...?


Very, very hard for anyone starting out with OpenGL. I tried for about a day and gave up, hugely confused as to how to even start with it. All of the Google searches seem to point you in entirely different directions.

#5153483 Vertex buffers in DirectX 11

Posted by DarkRonin on 13 May 2014 - 11:52 PM

Hi Guys,


I thought it might be well worth learning DX11 (rather than studying DX9c). So, I have been playing with the tutorials over at directxtutorial.com and so far so good.


Currently I am at the point where I have written my own render class (loosely based on the tutorials) and have a coloured triangle rendered to the screen :)


The thing I have noticed so far is that the co-ordinates seem to be different in DX11 (or is it just the way the tutorial is doing it?).


In the past when I have made vertex buffers in DX9c, I would specify the co-ordinates in screen space. So, if I wanted a 'quad' that was 256 x 256 I would base the vertices around 0 to 256.


It seems that in DX11 the screen centre is 0, the left side is -1, the right side is 1, etc.


Is there a way to make it work in pixel space again (without having to normalise all of my co-ordinate calculations)?


I am only focused on creating 2D projects at this stage.


Thanks in advance :)

#5153455 How do you know if you are not reinventing the wheel in your language

Posted by DarkRonin on 13 May 2014 - 09:46 PM

It depends...


I re-invent the wheel all of the time (sometimes through ignorance). But I do like having complete say over what my code does (and a complete understanding of how it does it). It makes bugs a lot easier to find, I think.


You have to account for your time too. If something will take you a year to code and you can have it right away by using another API, you need to weigh that up as well.

#5147582 Optimising my renderer

Posted by DarkRonin on 17 April 2014 - 02:18 AM

So, I am currently trying to go through this list Microsoft recommends, striking items through as I go :)

General Performance Tips

• Clear only when you must. (Only clearing the backbuffer.)
• Minimize state changes and group the remaining state changes. (How do you group state changes?)
• Use smaller textures, if you can do so; 256 x 256 recommended. (Done.)
• Draw objects in your scene from front to back. (All objects use the same z depth at the moment; 2D only at this stage.)
• Use triangle strips instead of lists and fans. For optimal vertex cache performance, arrange strips to reuse triangle vertices sooner rather than later. (Only making quads, but am using strips.)
• Gracefully degrade special effects that require a disproportionate share of system resources. (Not applicable yet.)
• Constantly test your application's performance. (Well, that's what we are here for :))
• Minimize vertex buffer switches. (Only have one vertex buffer in my app.)
• Use static vertex buffers where possible. (How do you know if it is static?)
• Use one large static vertex buffer per FVF for static objects, rather than one per object. (What if each object has the same vertex properties, e.g. all objects are 256 x 256 quads? Reuse the same buffer?)
• If your application needs random access into the vertex buffer in AGP memory, choose a vertex format size that is a multiple of 32 bytes. Otherwise, select the smallest appropriate format. (Random access as in needing to change vertices at runtime?)
• Draw using indexed primitives. This can allow for more efficient vertex caching within hardware. (Trying this next. Again, what if each object has the same vertex properties? Reuse the same buffer?)
• If the depth buffer format contains a stencil channel, always clear the depth and stencil channels at the same time. (Only using 2D with no stencils, so this shouldn't apply, I am guessing.)
• Combine the shader instruction and the data output where possible. (Not using shaders yet.)

Does it sound like I am on the right path? Please correct me if anything I have written is wrong. :)

#5145583 Trying to understand vertex shaders better

Posted by DarkRonin on 09 April 2014 - 01:02 AM

No, it is where you told it to be, you just likely didn't realize how you modified the location.


The computer is correctly executing your code (it is doing what you asked), you likely didn't understand what the changes meant.



Very, very likely. In fact, I don't doubt it one bit. :)


Reading and re-reading your post to try and absorb what is happening here.


Thanks for the explanation though. :)

#5139636 MinGW compilation problem

Posted by DarkRonin on 16 March 2014 - 11:47 PM

Thanks Bacterius.


Everything is working great now! Glad you guys know the idiosyncrasies of MinGW. I would never have thought of that. :)