LyGuy

Member
  • Content Count: 13
  • Joined
  • Last visited

Community Reputation: 130 Neutral

About LyGuy
  • Rank: Member
  1. Thank you for your posts, guys!  I really appreciate the discussion.  Olof, this is exactly what I was looking for: an opinion with reasoning I can get behind.  I do have a question about the collections in the standard library, though.  Is there any reason I should worry about performance when using a std::vector instead of a basic array?
  2. @cardinal:  Thanks for the reply; I have wondered that.   @Olof:  I have purposely written it this way as an exercise in implementing simple heap sharing across copied instances of the same object.  Also, I am not completely sold on the standard library collections.  I am working as an enthusiast with the time to run myself through exercises, and I am mainly looking for opinions from professionals in the industry on whether or not they use the standard library heavily.   @SiCrane:  So a design in which shared ownership is a side effect is basically a bad design choice?
  3. Well, I actually do use unique_ptrs and don't have a problem with those.  shared_ptrs, however, I tend to have issues with.  Specifically, the partial specialization used for new[] allocations.  I don't fully understand the class and therefore avoid it.  I have also read articles by many programmers who express dislike of shared_ptr (including Bjarne Stroustrup).  Do you know if there are specific platform requirements for using smart pointers?  For example, if you develop with the intent to publish on the Xbox, will Microsoft require you to use smart pointers everywhere in your code base?
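     For reference, a minimal sketch of the two standard ways to let shared_ptr manage a new[] allocation (assuming C++17 for the array form; the Vertex type and sizes are illustrative):

         #include <memory>

         struct Vertex { float x, y, z; };

         // Pre-C++17: shared_ptr<T> with an explicit array deleter, since the
         // default deleter would call delete instead of delete[].
         std::shared_ptr<Vertex> verts(new Vertex[64],
                                       std::default_delete<Vertex[]>());

         // C++17: shared_ptr<T[]> calls delete[] itself and provides operator[].
         std::shared_ptr<Vertex[]> verts17(new Vertex[64]);

     In both forms, all copies share one reference count, so the last owner performs the delete[].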
  4. Hi Guys,

     I'm a game programming enthusiast working on my own engine, and I've been musing over using the standard library smart pointers in certain places in the engine.  Generally, I prefer stack allocation for most things, but I've found that certain aspects, such as the Vertex array in my Mesh class, are easier to allocate dynamically.  For example, my Mesh class signature:

         class Mesh
         {
             Vertex* m_pVerts;
             string* m_pMeshObjectNames;
             D3D11_PRIMITIVE_TOPOLOGY* m_pTopologies;
             uint32_t* m_pReferenceCount;
             uint32_t m_NumVerts;
             bool m_HasNormalsDefined;
             bool m_HasTexCoordsDefined;

             void _LoadOBJMesh(LPSTR);

         public:
             Mesh();
             Mesh(const Mesh&);
             Mesh& operator=(const Mesh&);
             ~Mesh();

             void LoadMesh(LPSTR, MeshType);
             void LoadMeshAsync(LPSTR, MeshType);
         };

     The m_pMeshObjectNames and m_pTopologies members are intended to point to dynamic arrays of their respective types, though at the moment I haven't fully implemented that.  Anyway, my focus is the m_pVerts member.  Originally, I implemented the pointer members of this class as std::shared_ptr<TYPE[]> and ran into trouble using the [] partial specialization.  I've also been wary of using the smart pointers in the standard library, mostly due to a lack of full understanding on my part of how they work.  My implementation now handles the destruction of these dynamic members via reference counting.  When I assign or use the copy constructor, the pointers are copied directly and then the reference counter is incremented.  Those members are not actually delete[]d until the reference count reaches 0.  Here is my destructor:

         Mesh::~Mesh()
         {
             // Only the owner that drops the count to zero frees the shared arrays.
             if (m_pReferenceCount && !--(*m_pReferenceCount))
             {
                 if (m_pVerts)
                 {
                     delete[] m_pVerts;
                     m_pVerts = nullptr;
                 }
                 if (m_pMeshObjectNames)
                 {
                     delete[] m_pMeshObjectNames;
                     m_pMeshObjectNames = nullptr;
                 }
                 if (m_pTopologies)
                 {
                     delete[] m_pTopologies;
                     m_pTopologies = nullptr;
                 }
                 delete m_pReferenceCount;
                 m_pReferenceCount = nullptr;
             }
         }

     This code works as intended and I am pleased with it; however, I don't know if it would be a widely accepted implementation.  My questions: is this kind of code typically frowned upon in the industry?  Do industry professionals normally like to use the standard library (or other respected library) smart pointers?  I know these are probably very subjective questions; I just want some opinions!

     Thanks!

     Matt
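     For comparison, a minimal sketch (assuming C++17; member names borrowed from the post, Vertex simplified) of how the same sharing could be delegated to shared_ptr, letting the compiler-generated copy operations and destructor do the counting:

         #include <cstdint>
         #include <memory>

         struct Vertex { float x, y, z; };

         class Mesh
         {
             // shared_ptr<T[]> increments its count on copy and calls delete[]
             // when the last owner is destroyed, replacing m_pReferenceCount.
             std::shared_ptr<Vertex[]> m_pVerts;
             uint32_t m_NumVerts = 0;

         public:
             void LoadMesh(uint32_t numVerts)
             {
                 m_pVerts = std::shared_ptr<Vertex[]>(new Vertex[numVerts]);
                 m_NumVerts = numVerts;
             }
             // No user-defined copy constructor, assignment, or destructor needed.
         };

     Copies of a Mesh then share the vertex array exactly as in the hand-rolled version, with no manual reference count to maintain.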
  5. LyGuy

    Direct3D 11 Questions

    Well, the implementation is complete!  It works like a charm.  Thank you for your input, gentlemen.  If you are interested, I would very much appreciate some critique of my code!
  6. LyGuy

    Direct3D 11 Questions

    In response to your question, I have created a UAV with a Byte Address Buffer resource bound at the output merger stage; a sketch of the setup follows below.  As far as I understand, the Pixel Shader has access to this view and will be able to write to the resource behind it.  I've got the resources created, the view created, and it binds.  I haven't yet mapped memory back into CPU space, however.
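    A minimal sketch of that setup (raw byte address buffer plus UAV, bound next to the render target; the size and the device, context, pRTV, and pDSV variables are assumed to exist):

        ID3D11Buffer* pBuffer = nullptr;
        ID3D11UnorderedAccessView* pUAV = nullptr;

        // Raw (byte address) buffer: 16 bytes, GPU-writable through a UAV.
        D3D11_BUFFER_DESC bd = {};
        bd.ByteWidth = 16;
        bd.Usage = D3D11_USAGE_DEFAULT;
        bd.BindFlags = D3D11_BIND_UNORDERED_ACCESS;
        bd.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_ALLOW_RAW_VIEWS;
        device->CreateBuffer(&bd, nullptr, &pBuffer);

        // Raw views must use DXGI_FORMAT_R32_TYPELESS with the RAW flag.
        D3D11_UNORDERED_ACCESS_VIEW_DESC uavd = {};
        uavd.Format = DXGI_FORMAT_R32_TYPELESS;
        uavd.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
        uavd.Buffer.FirstElement = 0;
        uavd.Buffer.NumElements = 4;   // ByteWidth / 4
        uavd.Buffer.Flags = D3D11_BUFFER_UAV_FLAG_RAW;
        device->CreateUnorderedAccessView(pBuffer, &uavd, &pUAV);

        // Bind alongside the render target; UAV slots follow the RTV slots.
        context->OMSetRenderTargetsAndUnorderedAccessViews(
            1, &pRTV, pDSV, 1, 1, &pUAV, nullptr);

    In the pixel shader the buffer would then be declared as a RWByteAddressBuffer at the matching register.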
  7. LyGuy

    Direct3D 11 Questions

    Thank you for the reply, Jason!  I realized shortly after I made that post that I was responding to one of the authors!  I am very excited to be communicating with the two of you, and I have really enjoyed learning from your book.  It has been an amazing asset!   After reading what Matt said and re-reading the passage in question (pages 30-31), I see that I did not read carefully and did not entirely understand.  The Staging Usage section (pg. 31) clearly describes the concept that a resource must be copied into a staging resource after being manipulated by the GPU.  I think my confusion arose from the combination of the table on page 30 (Table 2.1) and the passage about Staging Usage on page 31: I assumed that the same resource would be used for GPU manipulation and CPU access.  I see now that this is incorrect.   I really appreciate you guys responding to my post, and again, I am learning tons from your book!
  8. Where do you (or the framework) bind the buffer to the IA?  Does your debugger support graphics debugging?   EDIT:  If you are sure that the buffer is being bound properly to the InputAssembler then the data is likely available in the vertex shader.  Perhaps something unexpected is happening with your transforms?
  9.   In the example, they are using a SimpleVertex array whose elements contain two 3-component vectors, for the Position and Normal respectively.  Their input layout DXGI_FORMAT matches this construct in that they are using DXGI_FORMAT_R32G32B32_FLOAT.  According to Microsoft, this format handles 3 components.   If you are using a SimpleVertex structure in the same way that they are, change your input layout to match as well by using the DXGI_FORMAT_R32G32B32_FLOAT format, as in the sketch below.
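     A minimal sketch of a matching layout (assuming the two-XMFLOAT3 SimpleVertex from the sample; semantic names are illustrative):

         struct SimpleVertex
         {
             DirectX::XMFLOAT3 Pos;     // 3 components -> R32G32B32_FLOAT
             DirectX::XMFLOAT3 Normal;  // starts 12 bytes into the vertex
         };

         D3D11_INPUT_ELEMENT_DESC layout[] =
         {
             { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
               D3D11_INPUT_PER_VERTEX_DATA, 0 },
             { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12,
               D3D11_INPUT_PER_VERTEX_DATA, 0 },
         };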
  10.   From this code it looks like your vertex buffer is being filled with vertex elements of type Vector3.  The buffer should instead be filled with elements that match your vertex shader's input signature:

          struct SimpleVertex
          {
              Vector4 Position;
              Vector3 Normal;
          };

      This of course changes your buffer description as well.  Specifically, the SizeInBytes field/property becomes the size of a SimpleVertex * NumberOfVerts, as sketched below.
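      For illustration, a C++ equivalent of that sizing (the post appears to use a managed wrapper, so names differ; numVerts, verts, device, and pVertexBuffer are hypothetical):

          // XMFLOAT4/XMFLOAT3 stand in for Vector4/Vector3.
          struct SimpleVertex
          {
              DirectX::XMFLOAT4 Position;
              DirectX::XMFLOAT3 Normal;
          };

          D3D11_BUFFER_DESC bd = {};
          bd.ByteWidth = sizeof(SimpleVertex) * numVerts;  // whole vertices, not Vector3s
          bd.Usage = D3D11_USAGE_DEFAULT;
          bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;

          D3D11_SUBRESOURCE_DATA initData = {};
          initData.pSysMem = verts;  // array of SimpleVertex

          device->CreateBuffer(&bd, &initData, &pVertexBuffer);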
  11. In the shader you use a float4 for the position for transform reasons - to get the vert into projection space.  Try the following in your VS():

          // current:
          output.pos = mul(input.pos, view);
          // proposed:
          output.pos = mul(float4(input.pos, 1), view);

      EDIT:  I realized this isn't going to affect anything, since you are defining your structure in the shader file as a float4 already.  Disregard!   Do you have a struct in your code to define what a vertex looks like to the CPU?  It looks like you are sending just plain Vector3s to the input assembler.  You should be sending a struct that has two elements, one for position and one for normal.  I apologize, because I'm not familiar with that framework.
  12. LyGuy

    Direct3D 11 Questions

    Thanks for the reply!

    I understand what you are saying, and I just got the resource to create based on your suggestions.  My reference material is a book titled "Practical Rendering & Computation with Direct3D 11" by Zink, Pettineo, and Hoxley, and in their chapter on D3D11 resources they have a table describing GPU/CPU access for each usage type.  For the staging type they list GPU/CPU access as full read/write.  This led me to assume I could bind a buffer with CPU read/write access to the pipeline.  However, based on what you are saying (and what I just read through again on MSDN), a staging resource only has CPU read/write access and must copy from another buffer (in my case probably a default-usage one) which can bind to the pipeline.  I set my bind flag to 0 and the resource created properly.  (The read-back side of this is sketched below.)

    I'd also like to describe my attempted implementation, to see if what I am doing can be accomplished more easily another way.  I want to implement mouse-over detection through my render component for game entities.  My assumption is that I need to make this test in the pixel shader, after the data has been fully transformed to projection space and passed through the rasterizer.  Since Windows has a mouse hot point described by a single pixel, I figured I wanted to compare its position against the pixels rendered for the entity I am testing.  The data given to the pipeline is the position of the mouse in x,y coordinates, with an extra element to use as a bool flag.  When the pixel shader detects that it is operating on a pixel which shares its location with the mouse hot point, the shader sets the bool flag to true.  I can then detect that the object is moused over and do whatever else I want in the next passes.  Does that sound reasonable?  I do have a concern about overhead, but not so much that it will stop me from attempting a solution (I can optimize later, after the engine implements the functionality I desire).

    Thanks again!
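    A minimal sketch of that copy-then-map step (variable names hypothetical): the GPU writes into a DEFAULT-usage buffer, which is then copied into the staging buffer for the CPU to read:

        // GPU -> staging copy; the staging resource is never bound to the pipeline.
        context->CopyResource(pStagingBuf, pDefaultBuf);

        D3D11_MAPPED_SUBRESOURCE mapped = {};
        if (SUCCEEDED(context->Map(pStagingBuf, 0, D3D11_MAP_READ, 0, &mapped)))
        {
            // e.g. read back the mouse-over flag written by the pixel shader.
            uint32_t flag = *static_cast<const uint32_t*>(mapped.pData);
            context->Unmap(pStagingBuf, 0);
        }

    Note that Map on a staging resource may stall until the GPU has finished the copy, which is part of the overhead concern mentioned above.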
  13. Hi guys,

      I'm looking for people who are familiar with D3D11 and the new shader resource types which allow GPU/CPU access to the resource (compute-shader-style resources).  I am trying to create a 1D texture resource which both the CPU and GPU can access for read/write operations.  I am aware that the usage for this kind of texture must be D3D11_USAGE_STAGING; however, I'm having issues getting the device to create the resource for me.  My D3D11_TEXTURE1D_DESC looks like this:

          D3D11_TEXTURE1D_DESC td = {};
          td.Width = 3;
          td.ArraySize = 1;
          td.MipLevels = 1;
          td.Format = DXGI_FORMAT_R32_UINT;
          td.Usage = D3D11_USAGE_STAGING;
          td.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE | D3D11_CPU_ACCESS_READ;
          td.BindFlags = D3D11_BIND_SHADER_RESOURCE;

      My intention is that the texture would look something like this in memory:

          unsigned int tex[] = {0, 0, 0};

      The texture desc structure works when the usage is changed to D3D11_USAGE_DEFAULT and the CPUAccessFlags is changed to 0.  This, however, breaks what I am trying to achieve, as it removes any hope of the CPU getting access to the data modified in the shader.

      Has anyone been able to successfully use a 1D texture as a resource type for CPU read/write access?  I would be binding this to the pixel shader, by the way.  Thanks for any help you can provide!
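      As post 12 above concludes, the creation fails because staging resources cannot have bind flags.  A minimal sketch of the usual fix (two textures; variable names illustrative): a DEFAULT-usage texture the shaders bind to, plus a STAGING twin with BindFlags = 0 that the CPU maps:

          // CPU-accessible staging copy: no bind flags allowed.
          D3D11_TEXTURE1D_DESC staging = {};
          staging.Width = 3;
          staging.ArraySize = 1;
          staging.MipLevels = 1;
          staging.Format = DXGI_FORMAT_R32_UINT;
          staging.Usage = D3D11_USAGE_STAGING;
          staging.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
          staging.BindFlags = 0;

          // GPU-side twin: bindable, but not CPU-accessible.
          D3D11_TEXTURE1D_DESC gpu = staging;
          gpu.Usage = D3D11_USAGE_DEFAULT;
          gpu.CPUAccessFlags = 0;
          gpu.BindFlags = D3D11_BIND_SHADER_RESOURCE;

          device->CreateTexture1D(&staging, nullptr, &pStagingTex);
          device->CreateTexture1D(&gpu, nullptr, &pGpuTex);
          // Move data between the two with context->CopyResource(...).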