Auskennfuchs

Member
  • Content count: 43
  • Joined
  • Last visited

Community Reputation

1032 Excellent

About Auskennfuchs

  • Rank
    Member

Personal Information

  • Interests
    Programming

Social

  • Github
    Auskennfuchs
  1. You should read carefully what the error message is telling you: D3DX11CompileFromFile can't find the file you are trying to load. When you start the executable from Visual Studio, make sure the working directory is correct, or load your data with an absolute path (not recommended). By default it is somewhere in a Debug subfolder.
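    Not part of the original post, but here is a minimal C++ sketch of how to diagnose this (the file name "shader.fx" and entry point "VSMain" are placeholders): print the working directory the relative path is resolved against and dump the compiler error blob.

        #include <windows.h>
        #include <d3dx11async.h>
        #include <cstdio>

        // the relative path passed to D3DX11CompileFromFile is resolved against this directory
        char cwd[MAX_PATH];
        GetCurrentDirectoryA(MAX_PATH, cwd);
        printf("Working directory: %s\n", cwd);

        ID3D10Blob* shaderBlob = nullptr;
        ID3D10Blob* errorBlob = nullptr;
        HRESULT hr = D3DX11CompileFromFile(TEXT("shader.fx"), nullptr, nullptr,
                                           "VSMain", "vs_5_0", 0, 0, nullptr,
                                           &shaderBlob, &errorBlob, nullptr);
        if (FAILED(hr)) {
            // a missing file usually only sets hr; compile errors also fill the error blob
            if (errorBlob)
                printf("%s\n", (const char*)errorBlob->GetBufferPointer());
        }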
  2. Are you using AdjustWindowRect when resizing your window? https://msdn.microsoft.com/en-us/library/windows/desktop/ms632665(v=vs.85).aspx This function makes sure you get the right client-area size no matter how big title bars, menus, borders, etc. are. On the other hand, you can also adapt your backbuffer to the actual client size so you can handle every window size.
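    A minimal C++ sketch of that call (the window class, style and the 1280x720 client size are assumptions): grow the outer window rectangle so the client area ends up exactly as big as the backbuffer.

        #include <windows.h>

        RECT rc = { 0, 0, 1280, 720 };                  // desired client (backbuffer) size
        DWORD style = WS_OVERLAPPEDWINDOW;
        AdjustWindowRect(&rc, style, FALSE);            // FALSE = window has no menu

        int windowWidth  = rc.right - rc.left;          // outer size including borders and title bar
        int windowHeight = rc.bottom - rc.top;

        HWND hwnd = CreateWindowA("MyWindowClass", "Demo", style,
                                  CW_USEDEFAULT, CW_USEDEFAULT,
                                  windowWidth, windowHeight,
                                  nullptr, nullptr, GetModuleHandle(nullptr), nullptr);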
  3. The graphics in the video are real 3D with a top-down camera, rendered to a low-resolution texture and then scaled up to get the pixel-art style. I guess the shadows are created with some kind of BSP tree to collect all walls in the level, which are then extruded away from the camera (like looking through a long pipe), so the shadows automatically overdraw the invisible parts of the scene. Maybe they also use a modified projection matrix to draw the shadows with a little more of a fisheye effect.
  4. Scale Texture2D DirectX11

    I'm not sure you can achieve it by just copying the data to a texture with smaller dimensions. I guess your current result looks something like this:

        original image    smaller image
        aaaaaa            aaaa
        bbbbbb            aabb
        cccccc            bbbb

    The last line is completely missing. With my solution the scaling is done in the pixel shader, as if you were looking at a quad from a greater distance in 3D. After drawing to the smaller destination texture you can use the Map function to read the smaller image back. As an alternative you can use GDI+ functions to scale a bitmap and don't have to deal with DirectX.
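    Not from the original answer, but a rough C++ sketch of the Map-based readback mentioned above (smallTex, device and context are placeholder names): copy the small render-target texture into a staging texture and map that on the CPU.

        // DEFAULT-usage GPU textures can't be mapped directly, so copy into a staging texture first
        D3D11_TEXTURE2D_DESC desc;
        smallTex->GetDesc(&desc);
        desc.Usage          = D3D11_USAGE_STAGING;
        desc.BindFlags      = 0;
        desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
        desc.MiscFlags      = 0;

        ID3D11Texture2D* staging = nullptr;
        device->CreateTexture2D(&desc, nullptr, &staging);
        context->CopyResource(staging, smallTex);

        D3D11_MAPPED_SUBRESOURCE mapped;
        if (SUCCEEDED(context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped))) {
            // read row by row; RowPitch may be larger than Width * 4
            for (UINT y = 0; y < desc.Height; ++y) {
                const BYTE* row = (const BYTE*)mapped.pData + y * mapped.RowPitch;
                // ... copy desc.Width * 4 bytes from row into your own buffer ...
            }
            context->Unmap(staging, 0);
        }
        staging->Release();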
  5. Scale Texture2D DirectX11

    You can create an SRV for the source image from your Desktop API to bind the texture to a shader. Then use your smaller target texture as render target and draw a fullscreen quad onto it. You don't even need an index or vertex buffer when using this vertex shader:

        struct Output {
            float4 position_cs : SV_POSITION;
            float2 texcoord    : TEXCOORD;
        };

        Output main(uint id : SV_VertexID) {
            Output output;
            output.texcoord = float2((id << 1) & 2, id & 2);
            output.position_cs = float4(output.texcoord * float2(2, -2) + float2(-1, 1), 0, 1);
            return output;
        }

    You only have to call Draw(3, 0). The pixel shader is also a no-brainer: just sample the source texture and you are done.
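    For completeness, a rough C++ sketch of the draw-call side (vsFullscreen, psCopy, smallRTV, srcSRV, the sampler and the sizes are placeholder names, not from the original post):

        D3D11_VIEWPORT vp = { 0.0f, 0.0f, (float)smallWidth, (float)smallHeight, 0.0f, 1.0f };
        context->OMSetRenderTargets(1, &smallRTV, nullptr);
        context->RSSetViewports(1, &vp);

        context->IASetInputLayout(nullptr);             // no vertex or index buffer needed
        context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        context->VSSetShader(vsFullscreen, nullptr, 0);
        context->PSSetShader(psCopy, nullptr, 0);
        context->PSSetShaderResources(0, 1, &srcSRV);
        context->PSSetSamplers(0, 1, &sampler);

        context->Draw(3, 0);                            // three vertices, one fullscreen triangle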
  6. My vertex shader won't display! :(

    You should build your worldMatrix as mWorld = mScale * mRotation * mTranslation. Otherwise your scaling axes are not aligned with the model's origin. You could also try building a single rotation matrix from a quaternion, but I don't think that will solve your problem.
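    Not from the original post, but a small DirectXMath sketch of that multiplication order (the scale, rotation and translation values are made up):

        #include <DirectXMath.h>
        using namespace DirectX;

        // scale first (around the model's origin), then rotate, then translate
        XMMATRIX mScale       = XMMatrixScaling(2.0f, 2.0f, 2.0f);
        XMMATRIX mRotation    = XMMatrixRotationQuaternion(
                                    XMQuaternionRotationRollPitchYaw(0.0f, XM_PIDIV2, 0.0f));
        XMMATRIX mTranslation = XMMatrixTranslation(10.0f, 0.0f, 5.0f);

        XMMATRIX mWorld = mScale * mRotation * mTranslation;   // row-vector convention of DirectXMath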
  7. Pixel Visual Programming

    You can achieve this effect in many different ways:
    - draw bigger images -> a waste of memory, but more detail is possible, and you can mix pixelated and high-res images
    - draw to a smaller texture and then draw this texture onto a bigger quad -> needs some math for ratio calculations, picking, ...
    - zoom in via the matrix parameters of an orthographic projection -> probably the most common technique

    Make sure not to use any filtering for your texture sampling, or you get a blurry instead of a pixelated output.
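    A rough C++ sketch of the last two points (the zoom factor, virtual resolution and device are assumptions): shrink the orthographic projection to zoom in, and use a point-filter sampler so the pixels stay crisp.

        #include <DirectXMath.h>
        using namespace DirectX;

        // zoom by shrinking the visible area of the orthographic projection
        float zoom  = 4.0f;                                   // 4x magnification
        float viewW = 320.0f / zoom, viewH = 180.0f / zoom;   // virtual resolution / zoom
        XMMATRIX proj = XMMatrixOrthographicLH(viewW, viewH, 0.0f, 1.0f);

        // point filtering keeps the pixel-art look when scaling up
        D3D11_SAMPLER_DESC sd = {};
        sd.Filter   = D3D11_FILTER_MIN_MAG_MIP_POINT;
        sd.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
        sd.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
        sd.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
        sd.MaxLOD   = D3D11_FLOAT32_MAX;
        ID3D11SamplerState* pointSampler = nullptr;
        device->CreateSamplerState(&sd, &pointSampler);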
  8. @Hodgman: Thanks for clarification. I never thought of bandwidth issues.
  9. Is there a technical reason for not using 32-bit index buffers? Are GPUs optimized for 16-bit indices?
  10. My vertex shader won't display! :(

    Probably you are rendering with an alpha value of 0.1f, so it is barely visible ;-)

        float4 AmbientColor = float4(1, 1, 1, 1);
        float AmbientIntensity = 0.1;
        return AmbientColor * AmbientIntensity;

    results in float4(0.1f, 0.1f, 0.1f, 0.1f). Try something like this instead:

        float3 AmbientColor = float3(1, 1, 1);
        float AmbientIntensity = 0.1;
        return float4(AmbientColor * AmbientIntensity, 1.0f);
  11. SharpDX loading TGA textures

    I'm using this code to load TGA images: https://www.codeproject.com/Articles/31702/NET-Targa-Image-Reader It's released under the CPOL license, so it should be comparable to the MIT license and won't cause problems when using it.

        using Paloma;
        ...
        var image = new TargaImage(filename);
        var format = Format.Unknown;
        var imageData = image.Image;
        if (image.Header.PixelDepth == 24)
        {
            // expand 24-bit TGAs to 32 bpp so they match a DXGI format
            format = Format.B8G8R8A8_UNorm;
            imageData = image.Image.Clone(
                new System.Drawing.Rectangle(0, 0, image.Image.Width, image.Image.Height),
                System.Drawing.Imaging.PixelFormat.Format32bppRgb);
        }
        var bitmapData = imageData.LockBits(
            new System.Drawing.Rectangle(0, 0, image.Image.Width, image.Image.Height),
            System.Drawing.Imaging.ImageLockMode.ReadOnly, imageData.PixelFormat);
        var bufferSize = bitmapData.Height * bitmapData.Stride;

        // copy our buffer to the texture
        int stride = image.Image.Width * 4;
        var tex = new Texture2D(device, new Texture2DDescription()
        {
            Width = image.Header.Width,
            Height = image.Header.Height,
            Format = format,
            ArraySize = 1,
            BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget,
            Usage = ResourceUsage.Default,
            CpuAccessFlags = CpuAccessFlags.None,
            MipLevels = GetNumMipLevels(image.Header.Width, image.Header.Height),
            OptionFlags = ResourceOptionFlags.GenerateMipMaps,
            SampleDescription = new SampleDescription(1, 0),
        });
        device.ImmediateContext.UpdateSubresource(tex, 0, null, bitmapData.Scan0, stride, 0);

        // unlock the bitmap data
        imageData.UnlockBits(bitmapData);
  12. Struggling with WorldToScreen in C#

    I think you forgot to set the ViewMatrix:

        SlimDX.Matrix.LookAtRH(eye, lookAt, up);

    probably should be

        View = SlimDX.Matrix.LookAtRH(eye, lookAt, up);
  13. Maybe have a look at this tutorial: http://www.catalinzima.com/xna/tutorials/deferred-rendering-in-xna/point-lights/ I think it's one of the best tutorial series to get into deferred rendering, and of course there is a lot of room for improvement. It may be a little old, but the idea behind it still works. tl;dr: In the light pass, draw a sphere mesh for every light. You can use instancing here, or try it first with single draw calls. Then use the depth information and the albedo color from your model draw pass to light the scene. Only pixels covered by the sphere mesh get checked, and they are colored if they lie inside the light source.
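    Not from the tutorial itself, just a rough C++ sketch of the light-pass loop described above (the light struct, sphere mesh and constant buffer are placeholder names):

        struct PointLightCB {                 // layout of a hypothetical cbuffer in the light pixel shader
            XMFLOAT3 position; float radius;
            XMFLOAT3 color;    float intensity;
        };

        // G-buffer SRVs (depth, albedo) and additive blending are assumed to be bound already
        UINT stride = sizeof(SphereVertex), offset = 0;
        context->IASetVertexBuffers(0, 1, &sphereVB, &stride, &offset);
        context->IASetIndexBuffer(sphereIB, DXGI_FORMAT_R16_UINT, 0);

        for (const PointLightCB& light : lights) {
            // the vertex shader scales/translates the unit sphere to the light's radius and position
            context->UpdateSubresource(lightCBuffer, 0, nullptr, &light, 0, 0);

            // only pixels covered by the sphere run the lighting pixel shader
            context->DrawIndexed(sphereIndexCount, 0, 0);
        }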
  14. Trouble passing multiple lights

    Did you try to get the actual constant buffer layout from the ID3D11ShaderReflection interface for both the vertex and the pixel shader? Maybe there are some differences.
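    A rough C++ sketch of that check (vsBlob and psBlob stand for the compiled shader blobs): dump the constant buffer layouts via ID3D11ShaderReflection and compare them.

        #include <d3dcompiler.h>
        #include <cstdio>
        #pragma comment(lib, "d3dcompiler.lib")

        void DumpConstantBuffers(ID3DBlob* blob)
        {
            ID3D11ShaderReflection* reflector = nullptr;
            D3DReflect(blob->GetBufferPointer(), blob->GetBufferSize(),
                       IID_ID3D11ShaderReflection, (void**)&reflector);

            D3D11_SHADER_DESC shaderDesc;
            reflector->GetDesc(&shaderDesc);

            for (UINT i = 0; i < shaderDesc.ConstantBuffers; ++i) {
                ID3D11ShaderReflectionConstantBuffer* cb = reflector->GetConstantBufferByIndex(i);
                D3D11_SHADER_BUFFER_DESC cbDesc;
                cb->GetDesc(&cbDesc);
                printf("cbuffer %s: %u bytes, %u variables\n", cbDesc.Name, cbDesc.Size, cbDesc.Variables);

                for (UINT v = 0; v < cbDesc.Variables; ++v) {
                    D3D11_SHADER_VARIABLE_DESC varDesc;
                    cb->GetVariableByIndex(v)->GetDesc(&varDesc);
                    printf("  %s: offset %u, size %u\n", varDesc.Name, varDesc.StartOffset, varDesc.Size);
                }
            }
            reflector->Release();
        }

        // DumpConstantBuffers(vsBlob);
        // DumpConstantBuffers(psBlob);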
  15. Animated movies.

    I think the easiest and cheapest way to get into animated movies is to use Blender. You can create models and animations, but also render them into a full movie. They actually made a movie entirely in Blender featuring a big bunny (I don't remember the name right now). If you want to do it on your own, you have to write an engine (a rasterizer like UE4, Unity, etc. or a raytracer like 3D Studio Max, Blender, etc.) and store keyframes on a timeline with matrices (position, rotation, scale) for every object or object part in your scene. Then you can interpolate between those keyframes, render them, and store the frames as a video.
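    Not from the post, just a small DirectXMath sketch of interpolating between two keyframes (the Keyframe struct and its fields are made up):

        #include <DirectXMath.h>
        using namespace DirectX;

        struct Keyframe {
            float    time;
            XMVECTOR position;   // translation
            XMVECTOR rotation;   // quaternion
            XMVECTOR scale;
        };

        // blend between keyframes a and b at time t (a.time <= t <= b.time)
        XMMATRIX Interpolate(const Keyframe& a, const Keyframe& b, float t)
        {
            float s = (t - a.time) / (b.time - a.time);
            XMVECTOR pos = XMVectorLerp(a.position, b.position, s);
            XMVECTOR rot = XMQuaternionSlerp(a.rotation, b.rotation, s);
            XMVECTOR scl = XMVectorLerp(a.scale, b.scale, s);
            return XMMatrixAffineTransformation(scl, XMVectorZero(), rot, pos);
        }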