carkey

Members

  • Content count: 15
  • Joined
  • Last visited

Community Reputation

106 Neutral

About carkey

  • Rank: Member
  1. It is a Windows Store app (Metro), so Windows is supposed to handle all of that: Windows decides that this device is one resolution, another device is a different resolution, and so on.
  2. Ah! I think I'm part way to finding the problem. I'm printing the pointer position out and:

     My desktop is 1920x1080, and if I move the cursor all the way to the bottom right of the screen it says the position is 1920x1080.

     The tablet that works is 1366x768, and if I go to the bottom right it says the position is 1366x768.

     Now, on the tablet that isn't working: it is also 1920x1080, but if I move the pointer to the bottom right... it says the position is 1366x768.

     So something strange is going on here: the DirectX swap chain is 1920x1080 (I get a 1920x1080 screenshot if I use the DirectXTK SaveWIC...() function), but the Windows app itself thinks it is 1366x768.

     Does anyone with Windows Store app experience know how this could have happened? Why does the pointer think its range is only 1366x768?

     Thanks for your time and the help so far!

     Edit: I've just taken a print screen on the 1920x1080 device and it saves as a 1920x1080 PNG... so the app thinks it's 1920x1080, DirectX thinks it's 1920x1080, but the pointer framework/library thinks it is 1366x768 (which is the other tablet's resolution, not my desktop's).
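     The mismatch described above (pointer coordinates topping out at 1366x768 on a 1920x1080 panel) is the kind of thing a logical-versus-physical pixel difference would produce, so here is a minimal, hedged sketch of converting device-independent pixels (DIPs) into render-target pixels. The helper name is hypothetical, and logicalDpi is assumed to come from the platform's display information for the current view; whether DPI scaling really is the cause on this particular tablet is an assumption, not something confirmed in the thread.

         // Hypothetical helper: convert a pointer position reported in
         // device-independent pixels (DIPs) into physical (swap-chain) pixels.
         // By definition 96 DIPs correspond to one inch, so the scale factor
         // is logicalDpi / 96; logicalDpi is assumed to be supplied by the
         // platform for the current view.
         struct PixelPoint { float x; float y; };

         PixelPoint DipsToPixels(float dipX, float dipY, float logicalDpi)
         {
             const float scale = logicalDpi / 96.0f;
             PixelPoint p = { dipX * scale, dipY * scale };
             return p;
         }

     If the pointer really is being reported in logical units, a scale factor of roughly 1.4 (1920 / 1366) would account for the numbers above.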
  3. I don't really understand how to do what you're saying here; is it a simple thing to set up?
  4. Okay, I'll have a go at doing some more debugging. The thing is, I remember it working on this device a few months ago when I first added GPU picking. I've added a lot of other stuff since then but haven't touched the GPU picking class, so I really don't know what has happened to make it not work on this device... but I'll keep digging, thanks!
  5. "Yes it should. I fixed my first reply."

     Ah okay, cool, thanks. I'll give this another go because it makes sense to me now :)

     Thanks for all the help and the really quick responses!
  6. Hi,

     I have also tried it on another Windows tablet now, and it works on that tablet, so now I'm even more confused!

     I have been printing out the cursor/pointer x/y positions as well as the box coordinates, and I can't see any significant difference between the two at all. The thing is, if I touch the screen about 250px (in x and y) from where I want it to hit, it will hit there. So, other than it being a cursor position problem (and I can't see how it can be that at this point), I'm thinking it's maybe a problem when I create the texture from the buffer: maybe I'm creating the 'screenshot' of the render too big, so it is scaled in x and y (putting it about 250px away). Is it possible that this would happen on one device and not another?

     Thanks.
  7. I'm working on a project where I want to use GPU picking. It's basically all set up fine, assigning unique colors to objects etc., but my current problem is that when creating the 1x1 texture, it doesn't pick up the pixel where the pointer is for some reason, but a spot about -250 pixels away in the x and y.

     I start by rendering the scene to my own ID3D11RenderTargetView and then use GetResource() to copy this buffer data to an ID3D11Resource. This all works fine, and I know it is capturing correctly because I'm using the DirectXTK function SaveWICTextureToFile() to save this resource off to a PNG, which I can open on the desktop and it looks fine.

     I then create a D3D11_TEXTURE2D_DESC of height 1px and width 1px and call CreateTexture2D() to create this texture. I then create a D3D11_BOX whose left is at mouse position X and top is at mouse position Y, and whose width and height are both 1. I then call CopySubresourceRegion() like so:

         CopySubresourceRegion(texture1x1, 0, 0, 0, 0, textureFromRenderTargetView, 0, box1x1);

     I then look at the pixel RGBA value, but it's always wrong.

     I thought I might be able to debug it by creating a larger texture and seeing where it thinks the "mouse pointer" is, so I changed the box to be 200x200 pixels rather than 1x1 and used the DirectXTK function to save it to a PNG.

     If I compare the original buffer texture PNG and the new 200x200 PNG there is something weird going on. The 200x200's top left corner is nowhere near where the mouse pointer is; it is about -250 in the X and Y axes. And even weirder, the 200x200 image seems like it has been scaled up: when I overlay it onto the original buffer texture PNG the objects are definitely larger. Does anyone know what is going on here and what I can do to solve it?

     Below is the D3D11_TEXTURE2D_DESC I use for the 1x1 texture and also its D3D11_SUBRESOURCE_DATA:

         D3D11_TEXTURE2D_DESC desc;
         desc.Width = 1;
         desc.Height = 1;
         desc.MipLevels = 1;
         desc.ArraySize = 1;
         desc.SampleDesc.Count = 1;
         desc.SampleDesc.Quality = 0;
         desc.Usage = D3D11_USAGE_STAGING;
         desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
         desc.BindFlags = 0;
         desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
         desc.MiscFlags = 0;

         D3D11_SUBRESOURCE_DATA subData;
         float buff[4] = {0, 0, 0, 0};
         subData.pSysMem = (void *)buff;
         subData.SysMemPitch = 4;
         subData.SysMemSlicePitch = 4;

         device->CreateTexture2D(&desc, &subData, &texture1x1);

     I then create the box to have a 1 pixel height and width, starting at the pointer position.

     The strange thing is, this works fine on some devices but not others. On my desktop and laptop it works fine, but on my (Windows) tablet it does this strange 200-pixels-off thing. Could it be something to do with resolution or something?

     I really can't work out what's going on on this one particular device (which is the target device). Any ideas?

     Thanks for your time.

     P.S. Disclaimer: this is a cross-post from Stack Overflow, but I put that question up a week ago and have had no response, so I thought I'd try here.
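     For reference, here is a minimal, hedged sketch of the kind of 1x1 staging-texture readback described in the post above, assuming a B8G8R8A8 pick render target. The function name and parameters (device, context, pickRenderTarget, mouseX, mouseY) are placeholders rather than the project's actual code, and the cursor position is assumed to already be in the same pixel space as the render target.

         #include <d3d11.h>

         // Sketch: copy the single pixel under the cursor from a BGRA8 pick
         // render target into a 1x1 staging texture and read it on the CPU.
         bool ReadPickPixel(ID3D11Device* device, ID3D11DeviceContext* context,
                            ID3D11Texture2D* pickRenderTarget,  // assumed BGRA8, not multisampled
                            UINT mouseX, UINT mouseY,
                            unsigned char bgraOut[4])
         {
             // 1x1 staging texture: a CPU-readable copy destination.
             D3D11_TEXTURE2D_DESC desc = {};
             desc.Width = 1;
             desc.Height = 1;
             desc.MipLevels = 1;
             desc.ArraySize = 1;
             desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
             desc.SampleDesc.Count = 1;
             desc.Usage = D3D11_USAGE_STAGING;
             desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;

             ID3D11Texture2D* staging = nullptr;
             if (FAILED(device->CreateTexture2D(&desc, nullptr, &staging)))
                 return false;

             // Copy exactly one source pixel at the cursor position.
             D3D11_BOX box;
             box.left = mouseX;      box.right = mouseX + 1;
             box.top = mouseY;       box.bottom = mouseY + 1;
             box.front = 0;          box.back = 1;
             context->CopySubresourceRegion(staging, 0, 0, 0, 0, pickRenderTarget, 0, &box);

             // Map the staging texture and read the BGRA bytes.
             D3D11_MAPPED_SUBRESOURCE mapped;
             bool ok = SUCCEEDED(context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped));
             if (ok)
             {
                 const unsigned char* bgra = static_cast<const unsigned char*>(mapped.pData);
                 for (int i = 0; i < 4; ++i)
                     bgraOut[i] = bgra[i];
                 context->Unmap(staging, 0);
             }
             staging->Release();
             return ok;
         }

     If the cursor coordinates arrive in logical units rather than render-target pixels (as in the resolution mismatch discussed in the earlier posts), they would need to be rescaled before the box is built, which is one way a constant offset and an apparent rescaling could show up on only some devices.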
  8. Ah okay, I see what you mean now.

     The only thing I'm not quite understanding now is that the k for-loop uses the condition k < 2; should this be k < 3? Because, say, for vertex 0: j = 0, startTriangleIndex = 0, and then the loop only runs k = 0 and k = 1. k = 0 will not be used because that is our current vertex and it has already been added to the used vector, so we'll only be averaging with k = 1 (vertices[0 + 1]); but what about vertices[2]?

     Thanks for all the help so far, and hopefully I'll have some results soon to show :)
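     For the loop being discussed above, here is a hedged sketch of gathering a vertex's neighbors from an indexed triangle list, written so that all three corners of each triangle are visited (k < 3) and the current vertex is simply skipped. The names (NeighborsOf, indices, used) are illustrative and are not the code from the thread.

         #include <algorithm>
         #include <cstdint>
         #include <vector>

         // Sketch: collect the neighbors of vertex v from an indexed triangle
         // list (three indices per triangle). Every triangle containing v
         // contributes its other two corners, so the inner loop runs k < 3
         // and skips v itself rather than stopping at k < 2.
         std::vector<uint32_t> NeighborsOf(uint32_t v, const std::vector<uint32_t>& indices)
         {
             std::vector<uint32_t> used;
             for (std::size_t tri = 0; tri + 2 < indices.size(); tri += 3)
             {
                 // Skip triangles that do not contain v at all.
                 if (indices[tri] != v && indices[tri + 1] != v && indices[tri + 2] != v)
                     continue;

                 for (std::size_t k = 0; k < 3; ++k)
                 {
                     uint32_t n = indices[tri + k];
                     if (n == v)
                         continue;                          // the current vertex itself
                     if (std::find(used.begin(), used.end(), n) == used.end())
                         used.push_back(n);                 // record each neighbor once
                 }
             }
             return used;
         }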
  9. I believe it uses column-major matrices, but I'm not really sure; it is currently a very basic instanced vertex shader that just takes the position, adds on the instance position, and then draws it.

     Should the process of translating to the center of the mesh, scaling, and then translating back out work in theory, as I describe above?

     It seems strange to me that it has no effect: doing just the scale, and doing the translate, scale, translate, both produce the same result.

     Thanks.
  10. Hi,

      Thanks for the reply. I tried this and, as you say, each instance does now scale around its own origin. The problem is that its origin is in its bottom left corner, not in its center.

      I tried translating to the center, scaling and translating back, but this didn't seem to make any difference, and I don't quite understand what I'm doing wrong:

          float4x4 scale = float4x4(
              0.5f, 0,    0,    0,
              0,    0.5f, 0,    0,
              0,    0,    0.5f, 0,
              0,    0,    0,    1);

          float4x4 translate = float4x4(
              1, 0, 0, 3,
              0, 1, 0, 3,
              0, 0, 1, 3,
              0, 0, 0, 1);

          float4x4 translateInverse = float4x4(
              1, 0, 0, -3,
              0, 1, 0, -3,
              0, 0, 1, -3,
              0, 0, 0, 0);

          float4 pos = (input.pos, 1.0f);
          pos = mul(pos, translate);
          pos = mul(pos, scale);
          pos = mul(pos, translateInverse);

      (I know my mesh is 6x6x6, therefore I'm trying to translate to the center at 3,3,3 from 0,0,0.)

      Thanks for the help so far!
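      One way to sanity-check the translate/scale/translate idea in the post above is to build the composite matrix on the CPU with DirectXMath and inspect (or pass in) the result. This is only an illustrative sketch: the 0.5 scale and the (3,3,3) center come from the post, but the helper, its name and the row-vector composition order are assumptions, not the thread's code.

          #include <DirectXMath.h>
          using namespace DirectX;

          // Sketch: scale a mesh about its own center by moving the center to
          // the origin, scaling, and moving it back. With DirectXMath's
          // row-vector convention the leftmost matrix is applied first.
          XMMATRIX ScaleAboutCenter(float s, float cx, float cy, float cz)
          {
              XMMATRIX toOrigin = XMMatrixTranslation(-cx, -cy, -cz); // center -> origin
              XMMATRIX scale    = XMMatrixScaling(s, s, s);
              XMMATRIX back     = XMMatrixTranslation(cx, cy, cz);    // origin -> center
              return toOrigin * scale * back;
          }

          // Example for the 6x6x6 mesh from the post, halved about (3,3,3):
          // XMMATRIX m = ScaleAboutCenter(0.5f, 3.0f, 3.0f, 3.0f);

      Two details of the posted HLSL snippet may be worth double-checking, stated here only as likely suspects: with the row-vector mul(pos, matrix) form used there, a translation normally lives in the matrix's bottom row rather than its last column, and the final element of translateInverse is 0 where a homogeneous transform would have 1; the pos line also looks like it is missing the float4(input.pos, 1.0f) constructor.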
  11. Ah okay, I see what you mean.

      So if I had a 2-quad (4-triangle) grid like this:

          a - b - c
          |\  |\  |
          | \ | \ |
          |  \|  \|
          d - e - f

      In a triangle list the vertex list would be {a, b, c, d, e, f} and the indices list would be {(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11)}, even though indices 1 and 5 (if they are both talking about vertex 'e') are exactly the same?

      I think my mesh is constructed differently, so that there would be no redundant indices; instead of {(0, 1, 2), (3, 4, 5), ...} it is {(0, 1, 2), (0, 3, 1), ...}.

      Does that make sense? Because triangles 1 and 2 share the vertices 'a' and 'e', they both use the same index? Or am I thinking about this all wrong? (Is that what you call a 'triangle strip'?)

      If I do currently have my mesh set up the wrong way, is there a way to do the algorithm on this sort of mesh, or should I think about converting it into a triangle list?

      Thanks again for all the help so far.
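      For comparison, here is a hedged sketch of how that same 2-quad grid is commonly stored as an indexed triangle list: six unique vertices, four triangles, with shared vertices such as 'e' appearing once in the vertex buffer and reused by index. The positions and winding order are illustrative only.

          // Sketch: the grid from the diagram above as an indexed triangle list.
          // Vertices 0..5 correspond to a, b, c, d, e, f on a unit-spaced grid.
          float vertices[6][3] = {
              {0.0f, 1.0f, 0.0f},   // 0: a
              {1.0f, 1.0f, 0.0f},   // 1: b
              {2.0f, 1.0f, 0.0f},   // 2: c
              {0.0f, 0.0f, 0.0f},   // 3: d
              {1.0f, 0.0f, 0.0f},   // 4: e
              {2.0f, 0.0f, 0.0f},   // 5: f
          };

          unsigned int indices[12] = {
              0, 3, 4,    // triangle a-d-e
              0, 4, 1,    // triangle a-e-b
              1, 4, 5,    // triangle b-e-f
              1, 5, 2,    // triangle b-f-c
          };

      A triangle strip is a different primitive topology, where each new index forms a triangle with the previous two; the layout above is still an ordinary indexed triangle list.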
  12. Hi TiagoCosta,

      I was just implementing this and I was wondering about the above assertion. Is this the case? Isn't it possible for vertex i to be surrounded by j, k and l but for their indices to be 10, 15, 44, 67, depending on how the mesh has been created?

      Everything else makes sense to me, just not this bit about using the indices to know the neighbors.

      Thanks for your time.
  13. Hi,

      I'm trying to scale instance data individually in the vertex shader with DirectX and HLSL. I have an input layout that takes a bool, and if this is true I want to scale the instance.

      The bool goes across to the shader fine, but the scaling seems to be wrong.

      Basically, I have a model matrix that exists in the shader, but before I do:

          position = mul(input.position, model);
          position = mul(input.position, view);
          position = mul(input.position, projection);

      I first ask if this bool is true, and if so I do:

          model = mul(model, scaleMatrix);

      scaleMatrix is just a simple scale matrix where the scale values are 0.5.

      My problem is that it doesn't seem to be scaling the instance at its own origin (which is what I want); it is scaling it at some other origin, but I don't know why this is the case.

      Any ideas?

      I don't have much experience with instancing, so any help would be greatly appreciated.

      Thanks.
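      A hedged illustration of the origin issue described above, using DirectXMath on the CPU side rather than the thread's HLSL (all names here are placeholders): with row-vector math, multiplying a scale in after a matrix that already contains the instance translation also scales that translation, so the instance shrinks towards the world origin; applying the scale before the translation keeps the scaling local to the instance.

          #include <DirectXMath.h>
          using namespace DirectX;

          // Sketch: build a per-instance world matrix that optionally halves
          // the instance about its own local origin. instancePos stands in
          // for wherever the per-instance translation comes from.
          XMMATRIX InstanceWorld(XMFLOAT3 instancePos, bool shrink)
          {
              XMMATRIX translate = XMMatrixTranslation(instancePos.x, instancePos.y, instancePos.z);
              if (!shrink)
                  return translate;

              XMMATRIX scale = XMMatrixScaling(0.5f, 0.5f, 0.5f);

              // Row-vector convention: the left-hand matrix acts first, so the
              // mesh is scaled in local space and then moved into place.
              // (translate * scale would scale the translation too, pulling the
              // instance towards the world origin instead.)
              return scale * translate;
          }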
  14. @TiagoCosta - Thanks for the explanation, that makes sense to me now; I'll have a go at implementing it soon!

      @LorenzoGatti - It is a very basic project I am working on, so the answer is no to most of those questions:

      1) I basically want to experiment with different 'amounts' of smoothing, from smoothing a cube into a near-sphere to smoothing it only slightly.
      2) Not really; as I say in (1), I just want to experiment.
      3) No, I do not use textures.
      4) No.
      5) No, this is to save the mesh to file, so I don't need to preserve the original mesh.
      6) I assume I do if I am smoothing it? There is possibly some noisy geometry, so it would be nice to be able to smooth this.
      7) No.

      Thanks for your help so far; hopefully those answers will give you more of an idea of what I want to do :)

      Thanks.
  15. Hi all,

      I'm working on a small project and I've got to a point where I want to be able to smooth the meshes I've got.

      I've had a quick Google around and found something called Laplacian smoothing (http://en.wikipedia.org/wiki/Laplacian_smoothing), but I'm not really sure how it works.

      Does anyone have any good resources on what smoothing algorithms are out there, their pros/cons, etc.?

      Thanks for your time.
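      As a hedged illustration of the basic idea behind the Laplacian smoothing mentioned above: each vertex is moved towards the average of its neighbors, with a blend factor (lambda here) controlling how aggressive the pass is, and repeated passes smoothing further. The sketch below reuses the NeighborsOf() helper sketched earlier in this list; it is illustrative only, not a reference implementation, and ignores refinements such as preserving boundaries or volume.

          #include <cstdint>
          #include <vector>
          #include <DirectXMath.h>
          using namespace DirectX;

          // Defined in the neighbor-gathering sketch earlier in this list.
          std::vector<uint32_t> NeighborsOf(uint32_t v, const std::vector<uint32_t>& indices);

          // Sketch: one Laplacian smoothing pass over an indexed triangle mesh.
          // lambda = 1 moves each vertex all the way to its neighbor average;
          // smaller values smooth more gently.
          void LaplacianSmoothPass(std::vector<XMFLOAT3>& positions,
                                   const std::vector<uint32_t>& indices,
                                   float lambda)
          {
              std::vector<XMFLOAT3> smoothed = positions;  // read originals, write a copy

              for (uint32_t v = 0; v < static_cast<uint32_t>(positions.size()); ++v)
              {
                  std::vector<uint32_t> nbrs = NeighborsOf(v, indices);
                  if (nbrs.empty())
                      continue;

                  // Average of the neighboring positions.
                  XMVECTOR avg = XMVectorZero();
                  for (uint32_t n : nbrs)
                      avg = XMVectorAdd(avg, XMLoadFloat3(&positions[n]));
                  avg = XMVectorScale(avg, 1.0f / static_cast<float>(nbrs.size()));

                  // Blend the vertex towards the average by lambda.
                  XMVECTOR p = XMLoadFloat3(&positions[v]);
                  XMStoreFloat3(&smoothed[v], XMVectorLerp(p, avg, lambda));
              }

              positions.swap(smoothed);
          }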