jdyyx1984

Member · Content count: 18 · Community Reputation: 148 Neutral
  1. Thanks for your replies! I think I should show a warning message to the user and then abort the process.
  2. Our game uses CryEngine 3. We found that some of our users received unexpected DEVICE_REMOVED errors in DX11 (DEVICE_LOST in DX9). In DX9 we force-recreated the D3D device and it worked fine for those users, but the engine can't recover itself from a DEVICE_REMOVED error in DX11, so I tried to add that process myself. I ran into a problem: the return value of IDXGISwapChain::Present is always DXGI_ERROR_DEVICE_REMOVED even after the new D3D device is created, and I can't find the reason. Here are my steps:
     1. Flush all rendering commands.
     2. Release all resources created with the old D3D device, such as textures, buffers, queries, and render states.
     3. Release the old swap chain and D3D device, and create new ones.
     4. Recreate all resources released in step 2.
     5. Resume rendering.
     The return value of Present in step 5 was DXGI_ERROR_DEVICE_REMOVED, but every other D3D API call before Present returned S_OK.
     I also tried to simplify the rendering pipeline: I removed all D3D API calls except a single Present after recreating the new device. The render thread then stalled inside Present and never returned. Here is the call stack from Visual Studio:
     ntdll.dll!_ZwWaitForSingleObject@12()
     ntdll.dll!_ZwWaitForSingleObject@12()
     d3d11.dll!CUseCountedObjectRootEx<class NOutermost::CDevice>::InternalAddRef(void)
     d3d11.dll!CUseCountedObject<class NOutermost::CDeviceChild>::AddRef(void)
     d3d11.dll!CLayeredObjectWithCLS<class CDepthStencilState>::CContainedObject::AddRef(void)
     d3d11.dll!ATL::AtlInternalQueryInterface(void *,struct ATL::_ATL_INTMAP_ENTRY const *,struct _GUID const &,void * *)
     d3d11.dll!CLayeredObjectWithCLS<class CTexture2D>::_InternalQueryInterface(struct _GUID const &,void * *,struct ATL::_ATL_INTMAP_ENTRY const *)
     d3d11.dll!CLayeredObjectWithCLS<class CTexture2D>::QueryInterface(struct _GUID const &,void * *)
     d3d11.dll!ATL::CComObjectRootBase::_Delegate(void *,struct _GUID const &,void * *,unsigned long)
     0cac7f00()
     d3d11.dll!CUseCountedObjectRootEx<class NOutermost::CDevice>::InternalAddRef(void)
     d3d11.dll!CUseCountedObject<class NOutermost::CDeviceChild>::AddRef(void)
     d3d11.dll!CLayeredObjectWithCLS<class CDepthStencilState>::CContainedObject::AddRef(void)
     d3d11.dll!ATL::AtlInternalQueryInterface(void *,struct ATL::_ATL_INTMAP_ENTRY const *,struct _GUID const &,void * *)
     d3d11.dll!CLayeredObjectWithCLS<class CTexture2D>::_InternalQueryInterface(struct _GUID const &,void * *,struct ATL::_ATL_INTMAP_ENTRY const *)
     00000001()
     d3d11.dll!CLayeredObjectWithCLS<class CDepthStencilState>::CContainedObject::AddRef(void)
     d3d11.dll!ATL::AtlInternalQueryInterface(void *,struct ATL::_ATL_INTMAP_ENTRY const *,struct _GUID const &,void * *)
     d3d11.dll!CLayeredObjectWithCLS<class CTexture2D>::_InternalQueryInterface(struct _GUID const &,void * *,struct ATL::_ATL_INTMAP_ENTRY const *)
     d3d11.dll!CLayeredObjectWithCLS<class CTexture2D>::QueryInterface(struct _GUID const &,void * *)
     d3d11.dll!ATL::CComObjectRootBase::_Delegate(void *,struct _GUID const &,void * *,unsigned long)
     d3d11.dll!ATL::AtlInternalQueryInterface(void *,struct ATL::_ATL_INTMAP_ENTRY const *,struct _GUID const &,void * *)
     d3d11.dll!CLayeredObjectWithCLS<class CTexture2D>::_InternalQueryInterface(struct _GUID const &,void * *,struct ATL::_ATL_INTMAP_ENTRY const *)
     d3d11.dll!CLayeredObject<class NDXGI::CResource>::QueryInterface(struct _GUID const &,void * *)
     d3d11.dll!ATL::CComObjectRootBase::_Delegate(void *,struct _GUID const &,void * *,unsigned long)
     d3d11.dll!TComObject<class NOutermost::CDevice>::AddRef(void)
     d3d11.dll!CBridgeImpl<struct IUseCounted,struct ID3D11LayeredUseCounted,class CLayeredObjectWithCLS<class CTexture1D> >::UCEstablishCyclicReferences(void)
     d3d11.dll!CBridgeImpl<struct IUseCounted,struct ID3D11LayeredUseCounted,class CLayeredObject<class NDXGI::CResource> >::UCEstablishCyclicReferences(void)
     d3d11.dll!NOutermost::CDeviceChild::UCEstablishCyclicReferences(void)
     d3d11.dll!CUseCountedObject<class NOutermost::CDeviceChild>::CProtectFinalConstruct::~CProtectFinalConstruct(void)
     ntdll.dll!_RtlAllocateHeap@12()
     d3d11.dll!ATL::AtlInternalQueryInterface(void *,struct ATL::_ATL_INTMAP_ENTRY const *,struct _GUID const &,void * *)
     KernelBase.dll!_ReleaseSemaphore@12()
     Does anyone have any clue about my problem? Thanks!
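On the recovery sequence above: a minimal sketch (plain D3D11, outside CryEngine's wrappers, with helper names of my own) of driving the recovery decision from Present's HRESULT. The DXGI_ERROR_* numeric values are the documented ones; after DEVICE_REMOVED, ID3D11Device::GetDeviceRemovedReason usually tells you why the device was lost. One known cause of the "new device still reports DEVICE_REMOVED" symptom is reusing anything tied to the old device (for example, a cached IDXGIFactory obtained from it, or any surviving reference to an old device child), so the release in steps 2 and 3 has to be truly complete before the new swap chain is created.

```cpp
#include <cstdint>

// Documented DXGI return codes (values from winerror.h); the trailing
// underscore avoids clashing with the real headers on Windows.
constexpr uint32_t S_OK_HR                   = 0x00000000u;
constexpr uint32_t DXGI_STATUS_OCCLUDED_     = 0x087A0001u; // window not visible
constexpr uint32_t DXGI_ERROR_DEVICE_REMOVED_ = 0x887A0005u;
constexpr uint32_t DXGI_ERROR_DEVICE_HUNG_    = 0x887A0006u;
constexpr uint32_t DXGI_ERROR_DEVICE_RESET_   = 0x887A0007u;

enum class PresentAction { Continue, SkipFrame, RecreateDevice };

// Hypothetical helper: map IDXGISwapChain::Present's HRESULT to what the
// render loop should do next. On RecreateDevice, the caller would query
// ID3D11Device::GetDeviceRemovedReason() for logging, then run the full
// release/recreate sequence (steps 1-5 above).
PresentAction classify_present(uint32_t hr) {
    switch (hr) {
        case S_OK_HR:
            return PresentAction::Continue;
        case DXGI_STATUS_OCCLUDED_:
            // Not an error: keep simulating, skip rendering until visible.
            return PresentAction::SkipFrame;
        case DXGI_ERROR_DEVICE_REMOVED_:
        case DXGI_ERROR_DEVICE_HUNG_:
        case DXGI_ERROR_DEVICE_RESET_:
            return PresentAction::RecreateDevice;
        default:
            // Conservative fallback for unexpected failures.
            return PresentAction::RecreateDevice;
    }
}
```

This is only a decision helper; the actual teardown and DXGI object recreation is engine-specific.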
  3. Thanks. So both MagFilter and MinFilter use bilinear filtering, and the weight-calculation algorithm is the same for both.
  4. The MSDN says: "D3DTEXF_LINEAR: Bilinear interpolation filtering used as a texture magnification or minification filter. A weighted average of a 2 x 2 area of texels surrounding the desired pixel is used." http://msdn.microsoft.com/en-us/library/windows/desktop/bb322811(v=vs.85).aspx Is the weight of each texel always 0.25 when MinFilter = D3DTEXF_LINEAR is set and the pixel is larger than the projected texel? If not, how does DX calculate the weight of each texel? Thanks!
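To make the weighting concrete: in standard bilinear filtering the four weights come from the fractional distance of the sample point to the surrounding 2 x 2 texel centers, so they are all 0.25 only when the sample lands exactly halfway between texel centers on both axes. A small sketch of that rule (my function name, not a D3D API):

```cpp
#include <array>

// Standard bilinear weights for a sample whose fractional offsets within
// the surrounding 2x2 texel quad are fx, fy in [0, 1).
// Order: top-left, top-right, bottom-left, bottom-right. Weights sum to 1.
std::array<float, 4> bilinear_weights(float fx, float fy) {
    return {
        (1.0f - fx) * (1.0f - fy), // top-left
        fx          * (1.0f - fy), // top-right
        (1.0f - fx) * fy,          // bottom-left
        fx          * fy,          // bottom-right
    };
}
```

So with MinFilter = D3DTEXF_LINEAR the weights are generally not 0.25 each; at fx = fy = 0.5 they all become 0.25, and at fx = fy = 0 all the weight goes to one texel.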
  5. I did try this flag, but it didn't work. The assembly did change: additional comments now indicate which HLSL line each piece of assembly belongs to, but the HLSL code column is still blank.
  6. I precompile and save all DX9 shaders into a binary file which is loaded at runtime to avoid shader compilation. When I drag the exe into GPA or Nsight, I can only see the assembly, which makes debugging a little difficult. Does anyone know how to show HLSL in such debugging tools? Should I compile the shaders with some specific flag or function?
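I can't promise this is what GPA/Nsight specifically need, but the usual requirement is that the saved bytecode be compiled with debug information embedded and optimization disabled; a stripped or optimized blob carries no source for the tools to display. A sketch of the flag values involved (the DX9 D3DXCompileShader flags D3DXSHADER_DEBUG and D3DXSHADER_SKIPOPTIMIZATION; the D3DCompile equivalents D3DCOMPILE_DEBUG and D3DCOMPILE_SKIP_OPTIMIZATION have the same numeric values; the helper name is mine):

```cpp
#include <cstdint>

// Flag values from d3dx9shader.h / d3dcompiler.h; trailing underscore
// avoids clashing with the real headers on Windows.
constexpr uint32_t D3DXSHADER_DEBUG_            = 1u << 0; // embed debug info
constexpr uint32_t D3DXSHADER_SKIPOPTIMIZATION_ = 1u << 2; // keep asm close to HLSL

// Hypothetical helper: flags for the offline compile that produces the
// .bin loaded at runtime, when you want tools to map asm back to HLSL.
uint32_t debug_compile_flags() {
    return D3DXSHADER_DEBUG_ | D3DXSHADER_SKIPOPTIMIZATION_;
}
```

Note the debug information lives inside the compiled shader blob itself, so you must save the full blob the compiler returns to your .bin file, not a stripped copy.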
  7. I think Unreal 3 supports the first one; I just want to know whether the latter is viable.
  8. I'm sorry for my poor description. Here is my vertex format:

     struct MeshVertex
     {
         float3 pos          : POSITION;
         float3 norm         : NORMAL;
         float2 tex_diffuse  : TEXCOORD0;
         float2 tex_lightmap : TEXCOORD1;
     };

     Every vertex of a mesh has two UV coordinate sets: one for the texture and one for the lightmap. I now have two candidate solutions:
     1. Generate the lightmap UVs for each mesh type in 3ds Max. During the lighting build, just pack the individual lightmaps into a big atlas texture, so each instance only needs a UV offset and UV scale into the atlas to find its corresponding lightmap data area.
     2. Pack all instances into a single mesh, generate one lightmap UV set for that big mesh after level design is finished in the level editor, and then distribute the corresponding subset to each instance. The lightmap UV space can be packed more tightly than in the first solution, but an obvious drawback is the extra vertex buffer needed per instance and the extra vertex-stream state changes during rendering.
     Which one is better?
  9. For example, if I put 3 instances of a static mesh in the level, should I use the same lightmap UV coordinates for all of them and pack the 3 square blocks into one lightmap atlas texture, or use completely different lightmap UV coordinates for each one so that the atlas texture can be packed more tightly? The latter also needs more vertex memory, because the lightmap UV coordinates cannot be shared between instances of the same mesh type.
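For what it's worth, the per-instance mapping in the first approach is just an affine transform of the shared lightmap UVs into the instance's rectangle of the atlas. A sketch (all names are mine; in practice the scale/offset would be a per-instance shader constant):

```cpp
struct Float2 { float x, y; };

// Per-instance data: which rectangle of the atlas this instance's
// lightmap occupies (typically uploaded as one float4 constant).
struct AtlasRegion {
    Float2 scale;  // rectangle size as a fraction of the atlas
    Float2 offset; // rectangle origin in atlas UV space
};

// Map a mesh-local lightmap UV (shared by all instances of the mesh
// type) into this instance's rectangle of the atlas texture.
Float2 to_atlas_uv(Float2 uv, const AtlasRegion& r) {
    return { uv.x * r.scale.x + r.offset.x,
             uv.y * r.scale.y + r.offset.y };
}
```

This is what makes solution 1 cheap on memory: the vertex buffer stays shared, and only the tiny AtlasRegion differs per instance.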
  10. I think I found a good article about this: http://blog.csdn.net/xhfut/article/details/7629047. The author says: "On systems with PCI-Express, some of the AGP vs. system memory differences are reduced, but the usage hints you're giving the driver are still useful for optimizing performance."
  11. Thanks! But what about AGP memory? Its description is quite similar to shared memory, so I'm puzzled.
  12. I'm a little confused about AGP memory and shared graphics memory (http://en.wikipedia.org/wiki/Shared_graphics_memory). What is the difference between them?
  13. I tried to analyze the rendering of Skyrim with Intel GPA's frame capture (Ctrl+Shift+C), but I found it only worked in the startup menu. When I entered the scene, it showed 'Unsupported D3DFORMAT: 0x5343564e'. My system: OS: Win7 32-bit; CPU: E8400; GPU: NVIDIA GTX 460. Error log: [TAL][WIN pid=4084][ 90c] Unsupported D3DFORMAT: 0x5343564e. It was my first time using Intel GPA. Does this tool work correctly with ATI or NVIDIA graphics cards?
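One observation about that error code (my decoding, not a GPA diagnosis): D3DFORMAT values outside the standard enum are four-character codes (FOURCCs), stored least-significant byte first, and 0x5343564e decodes to "NVCS", which looks like an NVIDIA vendor-specific format that GPA simply does not recognize. A sketch of the decoding:

```cpp
#include <cstdint>
#include <string>

// Decode a D3D FOURCC format code: four ASCII characters packed into a
// 32-bit value, least-significant byte first (as MAKEFOURCC produces).
std::string fourcc_to_string(uint32_t code) {
    std::string s(4, '\0');
    for (int i = 0; i < 4; ++i)
        s[i] = static_cast<char>((code >> (8 * i)) & 0xFFu);
    return s;
}
```

Decoding vendor FOURCCs this way is a quick check for whether a capture tool is choking on a driver-specific surface format rather than on the GPU itself.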