
thebjoern

Member Since 08 Sep 2010
Offline Last Active Mar 20 2012 07:11 PM

Topics I've Started

D3DXSPRITE problem - wrongly scaled?

15 March 2012 - 01:54 AM

Hi,

I am using D3D9 for rendering some simple things, a movie, as the backmost layer, then on top of that some text messages, and now wanted to add some buttons to that.

Before adding the buttons everything seemed to work fine, and I was already using a D3DXSPRITE for the text (D3DXFONT). Now I am loading some graphics for the buttons, but they seem to be drawn at roughly 1.2x their original size. In my test window I centered the graphic, but since it is too big it just doesn't fit: the client area is 640x360 and the graphic is 440 wide, so I expect 100 pixels of margin on the left and right. The left side is fine (I took a screenshot and counted the pixels in Photoshop), but on the right there are only about 20 pixels.

My rendering code is very simple (I am omitting error checks etc for brevity)

// initially viewport was set to width/height of client area

// clear device
m_d3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET|D3DCLEAR_STENCIL|D3DCLEAR_ZBUFFER, D3DCOLOR_ARGB(0,0,0,0), 1.0f, 0 );

// begin scene
m_d3dDevice->BeginScene();

// render movie surface (just two triangles to which the movie is rendered)

m_d3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE,false);
m_d3dDevice->SetSamplerState( 0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR ); // bilinear filtering
m_d3dDevice->SetSamplerState( 0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR ); // bilinear filtering
m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE ); //Ignored
m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLOROP, D3DTOP_SELECTARG1 );
m_d3dDevice->SetTexture( 0, m_movieTexture );
m_d3dDevice->SetStreamSource(0, m_displayPlaneVertexBuffer, 0, sizeof(Vertex));
m_d3dDevice->SetFVF(Vertex::FVF_Flags);
m_d3dDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 2);

// render sprites
m_sprite->Begin(D3DXSPRITE_ALPHABLEND | D3DXSPRITE_SORT_TEXTURE | D3DXSPRITE_DO_NOT_ADDREF_TEXTURE);

// text drop shadow

m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
&m_playerFontRectDropShadow, DT_RIGHT|DT_TOP|DT_NOCLIP, m_playerFontColorDropShadow );
// text
m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
&m_playerFontRect, DT_RIGHT|DT_TOP|DT_NOCLIP, m_playerFontColorMessage );

// control object
m_sprite->Draw( m_texture, 0, 0, &m_vecPos, 0xFFFFFFFF ); // draws a few objects like this

m_sprite->End();


// end scene
m_d3dDevice->EndScene();


What did I forget to do here? Except for the control objects (play button, pause button etc., which are placed on a "panel" about 440 pixels wide) everything seems fine: the objects are positioned where I expect them, just too big. Btw, I loaded the images using D3DXCreateTextureFromFileEx (resizing the window, reacting to a lost device etc. works fine too).

For experimenting, I added some code to take an identity matrix and scale it down on the x/y axes to 0.75f, which then gave me the expected result for the controls (but also made the text smaller and out of position). But I don't know why I would need to scale anything; my rendering code is so simple, I just wanted to draw my 2D objects 1:1 at the size they came from the file...

I am really very inexperienced in D3D, so the answer might be very simple...

thanks
Bjoern

[DX Version] What is the "correct" way to detect the available version?

15 November 2010 - 05:48 PM

Hi all,

I have created a simple application which loads from a cfg file whether I want to use DX9, DX10, or DX11, and then proceeds to load the DLLs with the corresponding functionality; this all works and renders fine.

Now I want to move on to determining which DX version is available, but the only method I know of is DSetup's DirectXSetupGetVersion(). It seems that is outdated? And I don't even have DSetup.dll on my machine (even though the .lib is in my June 2010 SDK lib folder).

So what is the correct way of finding out, other than loading a DLL and trying to create a device of that version (i.e. CreateDevice), and if that fails, falling back to a lower version until one works (or falling back to OpenGL, which I support too)? That doesn't seem like a great way of doing it.

Thanks for your help,
Bjoern

Modifying textures in another thread [beginner's questions]

16 September 2010 - 12:16 AM

Hi,

I have recently created an application which can decode video files and render the video to the screen. I did that in DX11, but now I want to do it for OpenGL as well. I am new to OpenGL (and to rendering in general), but I have programmed for many years.

Basically my main thread does the rendering of all the objects, and the decoder runs on another thread. I have a class RenderObject which represents anything that can be rendered, and derived from that a DoubleBufferSurface class, which has an "active surface" (the one being rendered by the rendering thread) and a "buffer surface" (the one the decoder writes its current frame to). Once the buffer surface has been updated, the "active surface" pointer is switched to point to it and vice versa, so that the new video frame is rendered on the next frame.

This all seems to work well in DX11, but when I now started implementing it in OpenGL, I found that I cannot call any OpenGL functions on my thread (they all return errors).

So my question is: Is it easily possible to implement this for OpenGL (the decoder thread does no rendering, all it wants to do is write the data from the video to the buffer (or inactive) surface, by using glTexSubImage2D())? What are the steps necessary to make this work?

thanks,
Bjoern

[D3D11] Multithreading, command lists, video rendering (beginner's questions)

08 September 2010 - 10:22 PM

Hi all,

I am quite new to graphics programming (but not new to programming in general), and I am currently trying to create some kind of media player, but ran into some problems, namely with the multithreading. It's probably easy to solve for someone with experience and more understanding of graphics programming, but I have been brooding over this for the last two days with no solution. Anyway, here is the description:

Originally I have created a decoder class that reads video files, and then writes each frame onto a surface (just two simple triangles filling the screen, with a 2D texture mapped onto them, created with D3D11_USAGE_DYNAMIC, D3D11_CPU_ACCESS_WRITE and D3D11_BIND_SHADER_RESOURCE). That was all working fine so far.

However, I was also rendering UI and other things, so I decided to add another thread to help with the decoding. Now the main thread does the other work and renders the main UI, while the other thread decodes the video and tries to write to the surface. I have created a deferred context for that: I send all the commands to it, create a command list, and pass the list to the main thread.
So what happens is this (not the real code, just the important parts; I left out some return-value checks etc. for simplicity):

// update decoder and read next frame
// ... left out for brevity
// clear state
m_deferredContext->ClearState();
// if a whole frame is read, we can write to the video surface:
m_deferredContext->Map( m_renderTargetTex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedTex );
memcpy( mappedTex.pData, buffer, mappedTex.RowPitch * m_screenHeight );
m_deferredContext->Unmap( m_renderTargetTex, 0);
// set up the drawing
m_deferredContext->IASetInputLayout( m_vertexLayout );
m_deferredContext->IASetVertexBuffers( 0, 1, &m_vertexBuffer, &stride, &offset );
m_deferredContext->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST );
// Render
m_deferredContext->VSSetShader( m_vertexShader, NULL, 0 );
m_deferredContext->PSSetShader( m_pixelShader, NULL, 0 );
m_deferredContext->PSSetShaderResources( 0, 1, &m_textureRV );
m_deferredContext->PSSetSamplers( 0, 1, &m_samplerLinear );
m_deferredContext->Draw( 6, 0 );
// command list
ID3D11CommandList* commandList = 0;
m_deferredContext->FinishCommandList(FALSE, &commandList);
// send to main renderer
mainRenderer.SetCommandList(commandList);

The command list handling in the main renderer looks like this (again simplified):
void Renderer_DX11::SetCommandList( ID3D11CommandList* commandList )
{
ScopedLock lock(m_mutex);
if( m_currentCommandList )
m_currentCommandList->Release();
m_currentCommandList = commandList;
}

void Renderer_DX11::ExecuteCommandList()
{
ScopedLock lock(m_mutex);
if( !m_currentCommandList )
return;
m_immediateContext->ExecuteCommandList(m_currentCommandList, FALSE);
}

And the main rendering looks like this (again simplified):
void Renderer_DX11::Render()
{
// preparation
m_immediateContext->ClearRenderTargetView(m_renderTarget, s_clearColor);
m_immediateContext->ClearDepthStencilView( m_depthStencil, D3D11_CLEAR_DEPTH, 1.0f, 0 );
m_immediateContext->OMSetRenderTargets(1, &m_renderTarget, m_depthStencil);
m_immediateContext->OMSetDepthStencilState(NULL, 0);

// execute commands if available
ExecuteCommandList();

// render UI etc
// ....

// present
m_swapChain->Present(0, 0);
}

Now the way I am doing this might seem totally stupid and ridiculous to you, and it may be totally obvious why it doesn't work the way it should... but as I said, I am really new to this. I was happy I got it to work in a single thread, but I am really struggling now in the multithreaded environment, probably because of a lack of understanding of how the whole pipeline works; so far I haven't seen any good books on D3D11 that cover this multithreading topic.

The result I get on screen is that the UI renders fine, but the movie in the back only shows up in occasional brief flickers. I assume the reason is that the main thread and the decoder thread block each other trying to access the "movie surface"? Also, is what I am doing with the command list absurd? I did it under the assumption that I can create a command list whenever I update the surface and then pass it to the main thread, which keeps using it until it gets a new command list, at which point it releases the old one.

Would a better approach be to create two of those "movie surfaces", with one active surface used by the renderer and one inactive surface the decoder writes to? When the decoder finishes a frame, I change the index of the active surface (guarded by a mutex) - in a way, double buffering the "movie surface".

Or is all of what I said/did here total nonsense, and there is a much better way of doing it? Sorry if these questions seem stupid.. it's my first rendering project, and I am happy I got that far for now :-)

Thanks for your help,
best regards,
Bjoern
