xDarkice

Member
  • Content count

    20
  • Joined

  • Last visited

Community Reputation

112 Neutral

About xDarkice

  • Rank
    Member
  1. Playing video files in DirectX

    You can also use GStreamer, which provides support for video and audio files as well. GStreamer itself relies on other libraries such as FFmpeg to do the decoding.
  2. You could take a look at my D3D9 implementation in Vocaluxe on [url="https://github.com/xDarkice/Vocaluxe/blob/master/Vocaluxe/Lib/Draw/CDirect3D.cs"]GitHub[/url]. Although it is written in C# and uses D3D9, it should not be that far off from D3D11, and it has all the functions you were asking for.
  3. Yes, you can decompile any unobfuscated .NET code, e.g. with Red Gate Reflector. Unfortunately the code was most likely optimized during compilation, so it will not be exactly the same, and you will also lose all comments.
  4. Converting to PNG would do the job, but you will end up with performance issues. Instead of using Texture.FromMemory(), which requires the data to already be in a supported image format, you should lock the texture and fill it with the raw data. You could also take a look at my Direct3D rendering implementation on [url="https://github.com/xDarkice/Vocaluxe/blob/master/Vocaluxe/Lib/Draw/CDirect3D.cs"]GitHub[/url].
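    For illustration, roughly what I mean by locking and filling (an untested sketch; CreateTextureFromRaw is just a name made up for this example, and Data/W/H are assumed to be a raw BGRA byte[] plus its dimensions):
    [code]
using SlimDX;
using SlimDX.Direct3D9;

// Sketch: create an empty texture and copy raw BGRA data into it row by row,
// respecting the surface pitch, instead of going through an encoded image.
Texture CreateTextureFromRaw(Device device, byte[] Data, int W, int H)
{
    Texture t = new Texture(device, W, H, 1, Usage.None, Format.A8R8G8B8, Pool.Managed);
    DataRectangle rect = t.LockRectangle(0, LockFlags.None);
    for (int row = 0; row < H; row++)
    {
        // One row of pixels is 4 * W bytes; the surface may be padded, so seek by Pitch.
        rect.Data.Position = row * rect.Pitch;
        rect.Data.Write(Data, row * 4 * W, 4 * W);
    }
    t.UnlockRectangle(0);
    return t;
}
    [/code]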
  5. If you want to use XAudio you need to decode your OGG files first. This can be done using FFmpeg or another decoder. If your application is not commercial, you could also use BASS.Net.
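    For example, with BASS.Net the playback side only takes a few calls; a rough sketch from memory (untested, and "song.ogg" is just a placeholder path):
    [code]
using System;
using Un4seen.Bass;

// Sketch: initialise the default output device, create a stream from the OGG
// file (BASS decodes it itself) and start playback.
Bass.BASS_Init(-1, 44100, BASSInit.BASS_DEVICE_DEFAULT, IntPtr.Zero);
int stream = Bass.BASS_StreamCreateFile("song.ogg", 0L, 0L, BASSFlag.BASS_DEFAULT);
if (stream != 0)
    Bass.BASS_ChannelPlay(stream, false);

// ... later, when playback is done:
Bass.BASS_StreamFree(stream);
Bass.BASS_Free();
    [/code]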
  6. Thanks for your answer, please take a look at the snow at the bottom. The snow quads lose one triangle, but only when I rotate the snowflakes; if they aren't rotated they display properly. By the way, this is how I am rotating a quad:
    [code]
if (rect.Rotation != 0)
{
    rect.Rotation = rect.Rotation * (float)Math.PI / 180;
    float centerX = (rx1 + rx2) / 2f;
    float centerY = -(ry1 + ry2) / 2f;
    Matrix originTranslation = _Device.GetTransform(TransformState.World);
    Matrix translationA = Matrix.Translation(-centerX, -centerY, 0);
    Matrix rotation = Matrix.RotationZ(-rect.Rotation);
    Matrix translationB = Matrix.Translation(centerX, centerY, 0);
    Matrix result = translationA * rotation * translationB * originTranslation;
    _Device.SetTransform(TransformState.World, result);
}
    [/code]
    This is the full source code: [url="http://pastebin.com/yDDuiZV1"]http://pastebin.com/yDDuiZV1[/url] I am happy about any improvement ideas =) I think it's the orthographic projection I'm using: if a vertex goes outside my projection volume it won't show up, I believe, but in OpenGL it does. It's really confusing and I'm stuck here.
  7. I have a problem: textures are clipped when they reach the edges of the viewport. I am creating my VertexBuffer with the DoNotClip flag and I am setting RenderState.Clipping to false. Am I missing another flag?
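    For reference, the two settings I mean look roughly like this (a sketch with placeholder sizes, using SlimDX's D3D9 API):
    [code]
using SlimDX.Direct3D9;

// Placeholder size for the sketch: four vertices of some vertex struct.
int sizeInBytes = 4 * 32;

// Vertex buffer created with the DoNotClip usage flag ...
VertexBuffer vb = new VertexBuffer(_Device, sizeInBytes,
    Usage.DoNotClip | Usage.Dynamic | Usage.WriteOnly,
    VertexFormat.None, Pool.Default);

// ... and primitive clipping disabled on the device.
_Device.SetRenderState(RenderState.Clipping, false);
    [/code]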
  8. For performance reasons I need a new way of creating textures from a bitmap. I am currently using the code below, but because it goes through Bitmap.Save it isn't very fast.
    [code]
using (MemoryStream stream = new MemoryStream())
{
    bmp2.Save(stream, ImageFormat.Png);
    stream.Position = 0;
    Texture t = Texture.FromStream(_Device, stream, 0, w, h, 0, Usage.None, Format.A8R8G8B8, Pool.Managed, Filter.Default, Filter.Default, 0);
    _D3DTextures.Add(t);
}
    [/code]
    Unfortunately there is no Texture.FromBitmap method in SlimDX. Can someone help?
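    For illustration, one way around Bitmap.Save that I would try (an untested sketch; bmp2, _Device and _D3DTextures are the same objects as in the snippet above): copy the pixels straight from Bitmap.LockBits into a locked texture.
    [code]
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
using SlimDX;
using SlimDX.Direct3D9;

// Sketch: copy the bitmap's pixels directly into a locked texture instead of
// encoding a PNG into a MemoryStream first. Format32bppArgb matches A8R8G8B8.
Rectangle bounds = new Rectangle(0, 0, bmp2.Width, bmp2.Height);
BitmapData bmpData = bmp2.LockBits(bounds, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

Texture t = new Texture(_Device, bmp2.Width, bmp2.Height, 1, Usage.None, Format.A8R8G8B8, Pool.Managed);
DataRectangle rect = t.LockRectangle(0, LockFlags.None);

byte[] row = new byte[bmp2.Width * 4];
for (int y = 0; y < bmp2.Height; y++)
{
    // Read one row from the bitmap and write it at the texture's pitch.
    IntPtr src = new IntPtr(bmpData.Scan0.ToInt64() + (long)y * bmpData.Stride);
    Marshal.Copy(src, row, 0, row.Length);
    rect.Data.Position = y * rect.Pitch;
    rect.Data.Write(row, 0, row.Length);
}

t.UnlockRectangle(0);
bmp2.UnlockBits(bmpData);
_D3DTextures.Add(t);
    [/code]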
  9. I got the performance fixed now, but I am still having issues resizing my window. Right now I am resizing my viewport, but it seems as if the width and height parameters are ignored: the frame shifts by my x and y coordinates, but it does not use the width and height I specified, so it runs out of my window. Do I have to set the backbuffer width and height? Do I have to change something in my orthogonal projection or my translation matrix?
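    As far as I understand the usual D3D9 pattern, the viewport alone is not enough: the backbuffer itself has to be resized through Device.Reset. A rough, untested sketch (OnResize and _PresentParameters are placeholder names for this example):
    [code]
using SlimDX.Direct3D9;

// Sketch of a resize handler: reset the device with a new backbuffer size,
// then set the viewport against the new backbuffer.
void OnResize(int newWidth, int newHeight)
{
    // 1. Release everything allocated in Pool.Default (render targets, dynamic buffers, ...).

    // 2. Reset the device with the new backbuffer size.
    _PresentParameters.BackBufferWidth = newWidth;
    _PresentParameters.BackBufferHeight = newHeight;
    _Device.Reset(_PresentParameters);

    // 3. Re-create the Pool.Default resources and set the viewport again.
    _Device.Viewport = new Viewport(0, 0, newWidth, newHeight);
}
    [/code]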
  10. Can someone give me some tips to increase performance, especially when many textures are drawn to the screen? I am including my whole source code for my D3D interface. Additionally, this does not work on Intel graphics cards; I think I am using a mode that Intel graphics can't handle, but since I have no Intel graphics card here, I can't test it with the debug runtimes. Does Intel provide a list of supported modes? [url="http://pastebin.com/QNHABBm5"]http://pastebin.com/QNHABBm5[/url]
    Edit: I came up with an idea: could I write all vertices into a list which is rendered only once per frame (in my main loop) and discarded afterwards? Is this the way to avoid locking the VertexBuffer several times per frame? What size is common for a vertex buffer? Right now I am using [code]4 * Marshal.SizeOf(typeof(TexturedColoredVertex))[/code], which is rather small, I think. I heard I can draw several vertices at a time, so I would use a multiple of my current size?
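    To sketch the batching idea from the edit (untested; FlushBatch, _Batch and _VertexBuffer are placeholder names, the buffer is assumed to be created with Usage.Dynamic | Usage.WriteOnly and sized for a whole frame, and TexturedColoredVertex is the vertex struct from above):
    [code]
using System.Collections.Generic;
using System.Runtime.InteropServices;
using SlimDX;
using SlimDX.Direct3D9;

// Vertices collected during the frame instead of locking the buffer per quad.
List<TexturedColoredVertex> _Batch = new List<TexturedColoredVertex>();

// Called once per frame from the main loop.
void FlushBatch()
{
    if (_Batch.Count == 0)
        return;

    int stride = Marshal.SizeOf(typeof(TexturedColoredVertex));

    // Lock once per frame with Discard so the driver can hand back fresh memory.
    DataStream stream = _VertexBuffer.Lock(0, _Batch.Count * stride, LockFlags.Discard);
    stream.WriteRange(_Batch.ToArray());
    _VertexBuffer.Unlock();

    _Device.SetStreamSource(0, _VertexBuffer, 0, stride);
    // Triangle list: three vertices per triangle.
    _Device.DrawPrimitives(PrimitiveType.TriangleList, 0, _Batch.Count / 3);

    _Batch.Clear();
}
    [/code]
    In practice the batch probably has to be flushed whenever the bound texture or a render state changes, so it ends up as a handful of draw calls per frame rather than exactly one.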
  11. I got almost everything working now; some bugs are left. The biggest one is about resizing the window. In OpenGL I am doing this:
    [code]
private void RResize()
{
    h = control.Height;
    w = control.Width;
    y = 0;
    x = 0;

    if ((float)w / (float)h > CSettings.GetRenderAspect())
    {
        w = (int)Math.Round((float)h * CSettings.GetRenderAspect());
        x = (control.Width - w) / 2;
    }
    else
    {
        h = (int)Math.Round((float)w / CSettings.GetRenderAspect());
        y = (control.Height - h) / 2;
    }

    GL.MatrixMode(MatrixMode.Projection);
    GL.LoadIdentity();
    GL.Ortho(0, CSettings.iRenderW, CSettings.iRenderH, 0, (double)CSettings.zNear, (double)CSettings.zFar);
    GL.Viewport(x, y, w, h);
}
    [/code]
    For DirectX I was trying this:
    [code]
private void RResize()
{
    h = this.Height;
    w = this.Width;
    y = 0;
    x = 0;

    if ((float)w / (float)h > CSettings.GetRenderAspect())
    {
        w = (int)Math.Round((float)h * CSettings.GetRenderAspect());
        x = (this.Width - w) / 2;
    }
    else
    {
        h = (int)Math.Round((float)w / CSettings.GetRenderAspect());
        y = (this.Height - h) / 2;
    }

    if (_Run)
    {
        m_Device.Viewport = new Viewport(x, y, w, h);
    }
}
    [/code]
    But I am getting [code]Direct3D9: (ERROR) :Viewport outside the render target surface[/code]. Another issue: when I open the task manager or a UAC window appears, my device is lost. I think I have to call Device.Reset(); after disposing all objects, I need a way to recreate the textures which are lost in this process. I need help doing this.
    Edit 2: I am still stuck at resizing my window with the correct aspect ratio. If I set the viewport like this it goes outside my window; it is shifted the right way, but the width and height are too big.
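    For the lost-device part, the pattern I have seen described (a rough, untested sketch; EnsureDevice, ReleaseDefaultPoolResources, RecreateDefaultPoolResources and _PresentParameters are placeholder names) is to poll TestCooperativeLevel each frame and only call Reset once the device reports DeviceNotReset. The "Viewport outside the render target surface" error points in a similar direction: the viewport is still being set against the old backbuffer size, so the backbuffer needs to be resized (via Reset) before a larger viewport can be applied.
    [code]
using SlimDX;
using SlimDX.Direct3D9;

// Sketch: called at the start of each frame before any drawing.
bool EnsureDevice()
{
    Result state = m_Device.TestCooperativeLevel();

    if (state == ResultCode.DeviceLost)
        return false;                     // still lost, skip rendering this frame

    if (state == ResultCode.DeviceNotReset)
    {
        // Release all Pool.Default resources, then reset and re-create them.
        ReleaseDefaultPoolResources();
        m_Device.Reset(_PresentParameters);
        RecreateDefaultPoolResources();   // re-upload textures, vertex buffers, ...
    }

    return true;
}
    [/code]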
  12. OK, I got this now. Can I force DirectX to use screen positions instead of values from 0 to 1, so I can set the Z positions and the vertex positions much more precisely? I think there could be some 'fighting' between the values when they are too close together. In OpenGL I can do this. Ideas?
    Edit: I am trying to use [code]Matrix.OrthoLH(Screenwidth, Screenheight, zNear, zFar);[/code] This works, but if I use screen positions for drawing now, the textures are shifted towards the middle of the screen. Basically, position (0,0) is the center of the screen, but I want it to be at the top left.
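    For the top-left origin, the off-center variant of the ortho matrix looks like what I am after (untested sketch, using the same Screenwidth/Screenheight/zNear/zFar as above):
    [code]
using SlimDX;
using SlimDX.Direct3D9;

// Off-center orthographic projection: left/top at 0 and right/bottom at the
// screen size, so position (0,0) ends up in the top-left corner as in OpenGL.
Matrix proj = Matrix.OrthoOffCenterLH(0f, Screenwidth, Screenheight, 0f, zNear, zFar);
m_Device.SetTransform(TransformState.Projection, proj);
    [/code]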
  13. Could this issue also be caused by the fact that I am not using power-of-two texture sizes? The sizes are not all the same; they need to be scaled. For instance, covers of any size can be used and they need to be scaled to fit. Could my calculation of the positions be too inaccurate? I need to scale the positions down to values from 0 to 1. In OpenGL I was able to specify screen positions; is there a setting to do this in D3D as well?
    Edit: I got it smooth now, but I am now getting small bars. After setting
    [code]
m_Device.SetSamplerState(0, SamplerState.MinFilter, TextureFilter.Linear);
m_Device.SetSamplerState(0, SamplerState.MagFilter, TextureFilter.Linear);
m_Device.SetSamplerState(0, SamplerState.MipFilter, TextureFilter.Linear);
    [/code]
    I do not get the preview video anymore and these small bars appear. Any ideas?
    Edit 2: [code]m_Device.SetSamplerState(0, SamplerState.MipFilter, TextureFilter.Linear);[/code] is definitely the line that breaks the video. Could this be because I am not filling the video's mipmaps? This is how I am adding a new texture for a video:
    [code]
public STexture AddTexture(int W, int H, ref byte[] Data)
{
    STexture texture = new STexture(-1);
    texture.width = W;
    texture.height = H;
    texture.w2 = texture.width;
    texture.h2 = texture.height;
    texture.width_ratio = texture.width / texture.w2;
    texture.height_ratio = texture.height / texture.h2;

    Texture t = new Texture(m_Device, W, H, 0, Usage.Dynamic, Format.A8R8G8B8, Pool.Default);

    DataRectangle rect = t.LockRectangle(0, LockFlags.None);
    for (int i = 0; i < Data.Length;)
    {
        rect.Data.Write(Data, i, 4 * W);
        i += 4 * W;
        rect.Data.Position = rect.Data.Position - 4 * W;
        rect.Data.Position += rect.Pitch;
    }
    t.UnlockRectangle(0);

    _D3DTextures.Add(t);
    texture.index = _D3DTextures.Count - 1;
    texture.color = new SColorF(1f, 1f, 1f, 1f);
    texture.rect = new SRectF(0f, 0f, texture.width, texture.height, 0f);
    texture.TexturePath = String.Empty;
    _Textures.Add(texture);
    return texture;
}
    [/code]
    And this is how I am updating it:
    [code]
public bool UpdateTexture(ref STexture Texture, ref byte[] Data)
{
    if ((Texture.index >= 0) && (_Textures.Count > 0) && (_D3DTextures.Count > Texture.index))
    {
        DataRectangle rect = _D3DTextures[Texture.index].LockRectangle(0, LockFlags.None);
        for (int i = 0; i < rect.Data.Length;)
        {
            if (rect.Data.Length - rect.Data.Position > 4 * (int)Texture.width)
            {
                rect.Data.Write(Data, i, 4 * (int)Texture.width);
                i += 4 * (int)Texture.width;
                rect.Data.Position = rect.Data.Position - 4 * (int)Texture.width;
                rect.Data.Position += rect.Pitch;
            }
            else
                break;
        }
        rect.Data.Position = 0;
        _D3DTextures[Texture.index].UnlockRectangle(0);
    }
    return true;
}
    [/code]
    I do not need mipmaps for videos for performance reasons; can I disable mipmaps for single textures only?
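    One way I can think of to keep mipmapping for normal textures but not for the video texture (untested sketch; videoTex is just a name for this example): create the video texture with a single mip level and switch the mip filter off only while drawing it.
    [code]
using SlimDX.Direct3D9;

// Sketch: a video texture with exactly one mip level, so there is no chain to fill.
Texture videoTex = new Texture(m_Device, W, H, 1, Usage.Dynamic, Format.A8R8G8B8, Pool.Default);

// While drawing the video quad, disable mip filtering for this sampler stage only,
// then restore the linear filter for the other (mipmapped) textures.
m_Device.SetSamplerState(0, SamplerState.MipFilter, TextureFilter.None);
// ... draw the video quad ...
m_Device.SetSamplerState(0, SamplerState.MipFilter, TextureFilter.Linear);
    [/code]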
  14. I am not recreating the texture each frame. I keep all textures stored in a list, and they can be updated using my UpdateTexture methods; the textures are drawn in my DrawTexture methods. Your post really helped me out. Instead of writing the data directly into the stream, I am now doing this, which works like a charm:
    [code]
DataRectangle rect = t.LockRectangle(0, LockFlags.None);
for (int i = 0; i < Data.Length;)
{
    rect.Data.Write(Data, i, 4 * W);
    i += 4 * W;
    rect.Data.Position = rect.Data.Position - 4 * W;
    rect.Data.Position += rect.Pitch;
}
t.UnlockRectangle(0);
    [/code]
    The hopefully last minor bugs are that [s]my z-position isn't applied correctly[/s] and the textures are still a bit blurry. The small album art should be rendered over the big one. It works fine in OpenGL, but z-positions can go beyond 1.0f in OpenGL; if I draw with z-positions over 1f they don't show up. I am currently clamping any positions over 1f to 1f, which I think creates this bug. The other thing is that the textures are not as good as in OpenGL. I experimented with some texture filters and ended up using a linear filter, which is still not quite right. In OpenGL I am using mipmaps to get smooth textures. Can I do the same in DirectX?
    EDIT: Fixed the z-positions; still need to examine why the textures are a bit blurry, even though I am shifting the pixels by 0.5.
  15. OK, I got alpha working now. Now I am stuck at creating a texture out of a byte[] (height and width are given). I am doing this:
    [code]
Texture t = new Texture(m_Device, W, H, 0, Usage.Dynamic, Format.A8R8G8B8, Pool.Default);
DataRectangle rect = t.LockRectangle(0, LockFlags.None);
rect.Data.WriteRange(Data);
t.UnlockRectangle(0);
    [/code]
    But I am not getting the result I want on screen. I need this to play a video decoded by Acinerella, so creating a Bitmap out of the byte[] would not be a good idea because of the bad performance.