
Steven Ford

Member
  • Content count

    20

Community Reputation

130 Neutral

About Steven Ford

  • Rank
    Member

Recent Posts

  1. Recent Files Feature for WinForms editor

    Thanks @desiado. You've given me the key hint with 'jump list' - I didn't know what the feature was called, but at least now I know what I'm meant to be searching for. I'll do some more research over the weekend and let you know how I get on. Thanks again Steve
  2. Recent Files Feature for WinForms editor

    Hi @desiado, thanks. That confirms it's definitely something the framework will provide for me automatically, if I can find the magic keyword to let it know that I've done the load internally to the app. Cheers Steve
  3. Hi, I've written a level designer in C# / WinForms. As part of this, I'd like to report the set of recently used files, so that when a user right-clicks on the taskbar they get a list of the recently used files and possibly a set of tasks (such as 'create new level'). This'd match the behaviour of apps like Visual Studio, the browsers etc.

    Looking on Google, I can see there's a UWP API (Windows.Storage.AccessCache) which would appear to be what I want, but I can't see what the WinForms equivalent would be. Does anyone know what the equivalent is / where to look?

    Thanks Steve

    PS This is independent of adding the entries to the ribbon's home area.
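    For anyone landing on this thread with the same question: the desktop-side equivalent appears to be the shell's recent-documents API rather than Windows.Storage.AccessCache. A minimal C# sketch (untested; the helper class is mine, and note the taskbar's "Recent" list only shows files whose extension is registered to the application):

        using System.Runtime.InteropServices;

        internal static class RecentFiles
        {
            private const uint SHARD_PATHW = 0x00000003;

            // Shell32 entry point that feeds the "Recent" category of the jump list.
            [DllImport("shell32.dll", CharSet = CharSet.Unicode)]
            private static extern void SHAddToRecentDocs(uint uFlags, string path);

            public static void Add(string path) => SHAddToRecentDocs(SHARD_PATHW, path);
        }

        // e.g. after a successful load / save in the editor (levelPath is hypothetical):
        // RecentFiles.Add(levelPath);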
  4. DX11 - Problem with SOSetTargets

    Thanks guys, I've gotten it working now (well, the DrawAuto call isn't working as I'd expect, but that's a small issue).

    @galop1n - I have a large array representing a 2D water space (up to 200,000 points, each represented by 4 bits), and was looking to use a GS simply to transfer that array to the GPU, run the GS over it to output the instance information (location / fluid colour) into a stream-out buffer, and then render that using instancing to achieve the fluid representation. As I need to do the filtering, I was under the impression that the only way to do this on the GPU was with geometry shaders, since they have conditionality? The background to this project is to learn how the whole environment fits together so I can do the next project much quicker.

    One other option which has cropped up in my mind would be to copy the array to a buffer, bind it as a form of texture, render a single quad covering the entire level, and have a pixel shader look up into my array based off its screen position and either discard for empty values or pick the appropriate colour (a sketch of that idea is below).

    Although having said all of this, even on my old netbook the CPU implementation - looping through this data and building the instance buffer in CPU space, single-threaded - still allows me to hit 60fps, so I think it may well be an [admittedly quite interesting] optimisation too far.
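    A rough HLSL sketch of that quad-plus-lookup idea (every name here is made up, and the bit layout - low 2 bits colour, 0 meaning empty - is an assumption about the encoding):

        Buffer<uint> g_droplets : register(t0);   // the raw array: 8 x 4-bit droplets per uint
        cbuffer LevelInfo : register(b0) { uint2 g_gridSize; };

        static const float4 g_palette[4] =
            { float4(0, 0, 0, 0), float4(0.2, 0.4, 1, 1), float4(0, 0.8, 1, 1), float4(1, 0.2, 0, 1) };

        float4 PSMain(float4 pos : SV_Position) : SV_Target
        {
            uint2 cell   = (uint2)pos.xy;                 // assumes one pixel per droplet cell
            uint  index  = cell.y * g_gridSize.x + cell.x;
            uint  word   = g_droplets[index / 8];
            uint  nibble = (word >> ((index % 8) * 4)) & 0xF;
            uint  colour = nibble & 0x3;                  // 2 colour bits; 0 treated as empty
            if (colour == 0)
                discard;                                  // leave empty cells untouched
            return g_palette[colour];
        }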
  5. Hi all, I'm currently stuck on something which should be rather simple. I'm trying to use the Stream Out functionality to offload a set of calculations to the GPU, but I'm having enormous difficulties getting it to work. The code that I have to create my buffer (once, not per frame) is as follows:

        // Member variable..
        Microsoft::WRL::ComPtr<ID3D11Buffer> _streamOutBuffer;

        // Create the buffer which will be used to handle the stream output stage
        {
            D3D11_BUFFER_DESC bufferDesc;
            ZeroMemory(&bufferDesc, sizeof(bufferDesc));
            bufferDesc.ByteWidth = WATERMANAGER_GRIDX * _gameEngine->getWidth() *
                                   WATERMANAGER_GRIDY * _gameEngine->getHeight() *
                                   sizeof(WATERRENDERERERBASE_DROPLETINSTANCE_DETAILS); // This evaluates to 307200
            bufferDesc.BindFlags = D3D11_BIND_FLAG::D3D11_BIND_STREAM_OUTPUT | D3D11_BIND_FLAG::D3D11_BIND_VERTEX_BUFFER;
            hr = dxDevice->CreateBuffer(&bufferDesc, nullptr, &_streamOutBuffer);
            if (hr != S_OK)
                return;
        }

    The code that I have to use the buffer subsequently is:

        // Set the target output
        UINT offset = 0;
        context->SOSetTargets(1, &_streamOutBuffer, &offset);
        context->Draw(vertexCount, 0);

        // Reset the buffers
        ID3D11Buffer* buffers[1] = { 0 };
        context->SOSetTargets(0, buffers, &offset);

    The problem I have is that on the first call to SOSetTargets, the value of _streamOutBuffer gets set to a null pointer, and I simply don't understand why. If I use an intermediate value to store a copy of the pointer, then on subsequent frames I get access violation exceptions, so it appears that something destructive is happening to the object. Note that manually calling AddRef didn't stop this. Looking on MSDN (Getting started and Method doc) it doesn't appear that I'm doing anything different to the examples that I can find (including Frank Luna's DX11 book). Can anyone shed any light as to what could be going on here / point me in the right direction please? If it's helpful, the following is the per-frame code:

        // Set all the inputs
        context->IASetInputLayout(_inputLayoutExpanding.Get());
        context->VSSetShader(_vertexShaderExpanding->shader.Get(), nullptr, 0);
        context->GSSetShader(_geometryShaderWithSO.Get(), nullptr, 0);
        context->PSSetShader(nullptr, nullptr, 0);

        auto pointer = _vertexBuffer.Get();
        UINT vertexStride = sizeof(unsigned int);
        UINT vertexOffset = 0;
        context->IASetVertexBuffers(0, 1, &pointer, &vertexStride, &vertexOffset);

        // Set the constant buffers
        ID3D11Buffer* vertexShaderCBs[1] = { _vertexShaderConstantBuffer.Get() };
        ID3D11Buffer* geometryShaderCBs[1] = { _geometryShaderConstantBuffer.Get() };
        context->VSSetConstantBuffers(0, 1, vertexShaderCBs);
        context->GSSetConstantBuffers(0, 1, geometryShaderCBs);

        context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY::D3D_PRIMITIVE_TOPOLOGY_POINTLIST);
        // context->RSSetScissorRects(0, nullptr);

        ID3D11DepthStencilState* depthStencilStates[1];
        UINT stencilRef;
        context->OMGetDepthStencilState(depthStencilStates, &stencilRef);

        // Initialize the description of the stencil state.
        D3D11_DEPTH_STENCIL_DESC depthStencilDesc;
        ZeroMemory(&depthStencilDesc, sizeof(depthStencilDesc));

        // Set up the description of the stencil state.
        depthStencilDesc.DepthEnable = false;
        depthStencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
        Microsoft::WRL::ComPtr<ID3D11DepthStencilState> noWriteDepthStencil;
        this->_device->CreateDepthStencilState(&depthStencilDesc, &noWriteDepthStencil);
        context->OMSetDepthStencilState(noWriteDepthStencil.Get(), 0);

        // Set the target output
        UINT offset = 0;
        context->SOSetTargets(1, &_streamOutBuffer, &offset);
        context->Draw(vertexCount, 0);

        // Reset the buffers
        ID3D11Buffer* buffers[1] = { 0 };
        context->SOSetTargets(0, buffers, &offset);
        context->VSSetShader(nullptr, nullptr, 0);
        context->GSSetShader(nullptr, nullptr, 0);
        context->OMSetDepthStencilState(depthStencilStates[0], stencilRef);

    Thanks Steve
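    For the record, the likely culprit here (a best guess from the symptoms, not confirmed in the thread): Microsoft::WRL::ComPtr's operator& is intended for out-parameters, so it releases whatever interface the ComPtr currently holds before handing out its address - which is exactly a held buffer becoming null on the first SOSetTargets call. Passing the raw pointer sidesteps it:

        // &_streamOutBuffer releases the buffer the ComPtr holds (operator& is an
        // out-parameter helper); pass the raw pointer via Get() instead.
        ID3D11Buffer* soTargets[1] = { _streamOutBuffer.Get() };
        UINT offsets[1] = { 0 };
        context->SOSetTargets(1, soTargets, offsets);

    (The &_streamOutBuffer in the CreateBuffer call is fine - there it really is an out-parameter.)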
  6. Unfortunately, just dropping it within a panel and setting that to AutoScroll=true requires my control to be anchored top-left (if anchored to bottom / right, the scrollbars disappear), which doesn't handle centring nicely. So for now I've done the above and told myself that it's a tool for me, so I can deal with the ugliness. Given that I shortly want to add a zoom feature to the custom control, I'll park this complexity and come back to it (a sketch of one way to centre it is below) - get the rest of the functionality working before worrying about the prettification.
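    A minimal C# sketch of the centring idea to come back to (untested; 'host' is the AutoScroll panel and 'canvas' the custom control - both names are made up):

        using System;
        using System.Drawing;
        using System.Windows.Forms;

        internal static class CanvasLayout
        {
            // Keep the canvas centred while it's smaller than the host panel; once it
            // outgrows the panel it sits at 0,0 and the AutoScroll scrollbars take over.
            public static void Centre(Panel host, Control canvas)
            {
                canvas.Location = new Point(
                    Math.Max(0, (host.ClientSize.Width - canvas.Width) / 2),
                    Math.Max(0, (host.ClientSize.Height - canvas.Height) / 2));
            }
        }

        // e.g.  host.Resize += (s, e) => CanvasLayout.Centre(host, canvas);
        //       canvas.SizeChanged += (s, e) => CanvasLayout.Centre(host, canvas);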
  7. Thanks @ApochPiQ. It's now vaguely working - I now need to get the control embedded within a centring control with scrollbars, but I think that's a slightly different question! :-)
  8. Hi, I'm currently writing a level designer for my 2D game*. To do so, I'm writing it using C# / WinForms. The editor is designed to look a bit like VS, i.e. a set of tool boxes surrounding a central panel showing the various tiles and game objects. I've got this central panel as a custom control (inheriting directly from Control) using custom rendering. The development machine that I'm using has a 27" 4k display, so I've set Windows up to use a text zoom of 150% to avoid killing my eyes. Unfortunately, when I use my standard drawing code (within the OnPaint override):

        var location = new System.Drawing.Point(
            offsetX + (tilePair.Key.X * tileLayer.TileSize.Width),
            offsetY + (tilePair.Key.Y * tileLayer.TileSize.Height));
        var destRectangle = new Rectangle(location, tileLayer.TileSize);
        if (pe.ClipRectangle.IntersectsWith(destRectangle))
        {
            pe.Graphics.DrawImage(tilePair.Value.Image, destRectangle, tilePair.Value.TileRectangle, GraphicsUnit.Pixel);
        }

    the actual rendering is being done at 150% of the size of destRectangle (~96 pixels vs. the expected 64). This isn't necessarily a problem so long as I can source that scale factor in advance, so I can do the appropriate resizing / layout. Googling doesn't lead to much info on how to source this information, and there's nothing obvious in IntelliSense to give me pointers as to where to investigate further. So... does anyone have any ideas on how to solve this, or links to places where I can investigate this sort of issue?

    Thanks in advance Steve

    *Originally I was using Tiled - which is rather good, but for various reasons I wanted to write my own: 1. so I can define custom objects with a fixed set of properties, for level designers who aren't that technical, and 2. to use it as a learning exercise around the command pattern.
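    One way to get at that scale factor (a sketch, not tested against this exact setup; Graphics.DpiX is long-standing, and Control.DeviceDpi gives the same answer on .NET Framework 4.7+):

        using System.Windows.Forms;

        public class LevelCanvas : Control   // hypothetical name for the central panel control
        {
            protected override void OnPaint(PaintEventArgs pe)
            {
                base.OnPaint(pe);

                // 96 DPI is the 100% baseline, so a 150% text zoom yields 1.5f here.
                float scale = pe.Graphics.DpiX / 96f;

                // ...multiply tile sizes / offsets by 'scale' before building destRectangle...
            }
        }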
  9. DX11 - Issue with the binding of 2 textures

    Hi Snake, I'm more thinking about doing the following in your code, rather than changing any of the shader code:

        // Bind both SRVs to slots 0 and 1 with a single call
        ID3D11ShaderResourceView* srvs[2];
        srvs[0] = backFaceTexture.Get();
        srvs[1] = volumeTexture.Get();
        deviceContext->PSSetShaderResources(0, ARRAYSIZE(srvs), srvs);

    Although doing a quick Google search, this link implies that your code and mine are equivalent (I was hoping it would be something like that). The only other thing I can suggest is having a look at the Rastertek multitexturing series and seeing if there's anything obvious between that and your code. Good luck! Steve
  10. DX11 - Issue with the binding of 2 textures

    Hi Snake996, looking at the MSDN documentation for the call, it implies that you can pass the views in as a single array - have you tried binding both in a single call? Cheers Steve
  11. Thanks Hodgman / unbird. These were the first shaders that I'd ever written (as part of the process of learning DX), so it's quite likely that I was attribute-happy! I'll make the suggested changes and go through the debugging info. I wasn't aware that different formats were optional (I had assumed that maybe it was an int thing). Time to start debugging. Cheers Steve
  12. Thanks Khatharr, I'll check it out
  13. Hi, I've got a problem with one of my geometry shaders in my game. It works on most hardware, but on a subset of machines, typically laptops with integrated graphics (although I do have a laptop with integrated graphics where it works), my shader doesn't work: nothing is displayed on screen, but the calls themselves don't appear to be failing. My code is set up to require feature level 10.0, and the machines where it's not working report supporting this level. Everything else works on these machines (I also have a pure feature level 9.3 fallback renderer which works perfectly on them). Usually I'd run the code through the debugger, however these machines are either not mine or struggle to run Visual Studio (one is an old Acer Aspire netbook), hence that's not an easy option. So, 2 questions:

    1. Can anyone think of why one might see such issues between feature level 10.0 compatible hardware, and if there are known issues, how would one programmatically identify them?
    2. Any suggestions on how to diagnose these problems without the use of VS? (One idea is sketched below.)

    Background: the shaders are designed to render a fluid in the game. The fluid is stored in a single large byte array where each droplet of fluid is represented by 4 bits (2 for colour, 2 for movement, i.e. each byte represents 2 droplets). The location of the fluid is determined by its position in the array. The geometry shader takes in an int and then, using bit masks, potentially outputs a set of vertices for every valid droplet. The rendering code then copies the original array to a buffer:

        D3D11_INPUT_ELEMENT_DESC waterLayout[1];
        waterLayout[0].AlignedByteOffset = 0;
        waterLayout[0].Format = DXGI_FORMAT::DXGI_FORMAT_R32_UINT;
        waterLayout[0].InputSlot = 0;
        waterLayout[0].InputSlotClass = D3D11_INPUT_CLASSIFICATION::D3D11_INPUT_PER_VERTEX_DATA;
        waterLayout[0].InstanceDataStepRate = 0;
        waterLayout[0].SemanticIndex = 0;
        waterLayout[0].SemanticName = "BITMASK";
        auto hr = dxDevice->CreateInputLayout(waterLayout, 1, _vertexShader->shaderByteCode.get(),
                                              _vertexShader->shaderByteCodeLength, &_inputLayout);

    I've attached the files in case there's anything obvious. Thanks

    DataStructures.hlsl GeometryShader.hlsl PixelShader.hlsl VertexShader.hlsl
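    One low-tech diagnostic that doesn't need Visual Studio on the target machine (a sketch of the general idea, not a guaranteed fix): create the device with the debug layer where it's available, and log the HRESULT and the feature level actually granted to a file the remote machine can send back:

        #include <d3d11.h>
        #include <wrl/client.h>

        Microsoft::WRL::ComPtr<ID3D11Device> device;
        Microsoft::WRL::ComPtr<ID3D11DeviceContext> context;
        D3D_FEATURE_LEVEL wanted[] = { D3D_FEATURE_LEVEL_10_0 };
        D3D_FEATURE_LEVEL granted;

        UINT flags = 0;
        #if defined(_DEBUG)
        flags |= D3D11_CREATE_DEVICE_DEBUG;   // validation messages go to the debug output
        #endif

        HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                                       wanted, ARRAYSIZE(wanted), D3D11_SDK_VERSION,
                                       &device, &granted, &context);
        // Log hr and granted, and check every subsequent HRESULT rather than assuming success.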
  14. Hi Adam, thanks for that. I should have clarified at the beginning what I'm trying to do. I'm under the impression that WP8 is still limited to DX9.3, hence the desire to make my little app phone-compatible (yes, I know WP8 doesn't have good market share and is dead vs. Win 10 Mobile etc., but this little project is mainly about the learning experience, I've got an old WP lying around which it'd be nice to actually use for something rather than gathering dust :) and I also have a very old netbook, woefully underpowered, which'd be perfect for my daughter to play the little game on :)

    So, one question on the approach above: does the DX runtime (or compatibility levels) then map between the data type that I specify in the layout and what's specified in the shader?

    Also, taking the next question: given that WP8 has been replaced by Windows 10, am I right in assuming that under Windows 10 there's no longer the limitation of all hardware having to be feature level DX9.3, and that instead I'd be able to use the more advanced features on Win10 Mobile? I don't suppose you have a good place to find out the various feature levels supported by the Win10 Mobile phones, do you? I've looked on the MS site:

    https://www.microsoft.com/en-gb/mobile/phone/lumia950/specifications/

    and it's not clear from that what the feature level would be for this type of phone. My only real concern is whether or not I have access to geometry shaders.

    Thanks

    Steve
  15. HLSL switch attributes

    Hi, from looking at the MSDN documentation (https://msdn.microsoft.com/en-us/library/windows/desktop/bb509669(v=vs.85).aspx ), the relevant attribute descriptions are:

        forcecase - force a switch statement in the hardware
        call - the individual case bodies become hardware subroutines, and the switch becomes a series of subroutine calls

    My understanding would be that with the latter, the code would explicitly use a subroutine, whereas the former would be inclined to have conditional execution. A subroutine call would (apparently - http://gamedev.stackexchange.com/questions/82988/how-does-an-sm5-shader-handle-loops-and-if-statements-hlsl-cg ) come with a ~5 cycle overhead, whereas with conditional execution each instruction skipped would come at a cost of 1 cycle.

    From that, I'm guessing that for larger case bodies, and where all of the pixels to be processed are likely to go down the same path in the switch statement, I'd use 'call' (syntax sketch below). Hope this is vaguely useful. Cheers Steve
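    For reference, the attribute sits just before the switch statement; a small illustrative HLSL sketch (the function and values are made up):

        float4 ShadePixel(uint materialId, float4 baseColour)
        {
            float4 colour = baseColour;

            [call]   // the alternatives go in the same spot: [branch], [flatten], [forcecase]
            switch (materialId)
            {
                case 0:  colour *= float4(0.2, 0.4, 1.0, 1.0); break;  // water tint
                case 1:  colour *= float4(1.0, 0.3, 0.1, 1.0); break;  // lava tint
                default: break;
            }
            return colour;
        }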