
kovacsp

Member
  • Content Count

    364
  • Joined

  • Last visited

Community Reputation

306 Neutral

About kovacsp

  • Rank
    Member
  1. kovacsp

    Hidden Line Removal techniques

    Hi jpetrie, thanks for your answer! Wireframe is not exactly what I want, because the edges I draw are explicitly specified. I mean, I don't want to render the edges of all triangles; for example, I don't want to see the diagonal lines on the sides of a cube. (I use a line list for rendering the lines now.) Moreover, if I rendered a wireframe on top of the shaded object, I think I'd get the same Z-fighting issues between the surfaces and the lines as before. I can post some screenshots of the problems if that helps. Best regards, kp
  2. kovacsp

    anti aliasing effect on sprites?

    Hi, one thing that can trick you is MIP-mapping. Use 1 as the fifth parameter instead of D3DX_DEFAULT. Then make sure bilinear filtering is switched on while you use the texture: check SetSamplerState(X, D3DSAMP_MAGFILTER, ...) and SetSamplerState(X, D3DSAMP_MINFILTER, ...) where you use the texture! A minimal sketch follows below. Hope this helps, kp
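    A minimal sketch of what the post above describes, assuming a D3D9 device and D3DX; the file name "sprite.png" and the function/variable names are placeholders, not from the original post:

    #include <d3d9.h>
    #include <d3dx9.h>

    // Loads the sprite texture with exactly one mip level (fifth parameter = 1
    // instead of D3DX_DEFAULT) so MIP-mapping cannot blur the sprite, then
    // enables bilinear filtering on sampler stage 0.
    IDirect3DTexture9* LoadSpriteTexture(IDirect3DDevice9* pDevice)
    {
        IDirect3DTexture9* pSpriteTex = NULL;
        HRESULT hr = D3DXCreateTextureFromFileEx(
            pDevice, "sprite.png",
            D3DX_DEFAULT, D3DX_DEFAULT,   // width, height taken from the file
            1,                            // MipLevels: 1 = no mip chain
            0, D3DFMT_UNKNOWN, D3DPOOL_MANAGED,
            D3DX_DEFAULT, D3DX_DEFAULT,   // filter, mip filter
            0, NULL, NULL,                // no color key, image info, palette
            &pSpriteTex);
        if (FAILED(hr))
            return NULL;

        // Bilinear filtering on the sampler stage where the texture is used.
        pDevice->SetTexture(0, pSpriteTex);
        pDevice->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
        pDevice->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
        return pSpriteTex;
    }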
  3. Hi all, I've bumped into the problem of Hidden Line Removal (HLR). Of course, it should be perfect (no Z-fighting) and fast (realtime for large scenes too).

    First, I tried the naive approach: render the object, then render the lines with the z-test still on, so the lines that are occluded are killed by the z-test. Of course, the lines flicker heavily.

    Then comes playing with the depth bias (either manually or through the D3DRS_DEPTHBIAS and D3DRS_SLOPESCALEDEPTHBIAS render states). Here, if the bias is too small, it flickers; if it is too large, small parts of the lines that should be occluded become visible. The main problem is that it needs continuous adjustment; moreover, since the mapping of z-values to the z-buffer is not linear, I would need different biases for objects that are near and those that are far away (not even speaking about long objects that have parts right in my face and also in the distance).

    So I tried creating a vertex shader which adjusts the Z-value after the division by W, so that the offset I apply there is linear. This is roughly how it looks:

    float4x4 mWorldViewProj; // World * View * Projection transformation

    struct VS_INPUT
    {
        float4 Position : POSITION;
        float4 Diffuse  : COLOR0;
    };

    struct VS_OUTPUT
    {
        float4 Position : POSITION; // vertex position
        float4 Diffuse  : COLOR0;   // vertex diffuse color
    };

    VS_OUTPUT main(VS_INPUT In)
    {
        VS_OUTPUT Output;
        Output.Position = mul(In.Position, mWorldViewProj);
        Output.Position.x /= Output.Position.w;
        Output.Position.y /= Output.Position.w;
        Output.Position.z /= Output.Position.w;
        Output.Position.w = 1.0;
        Output.Position.z -= (1.0 / 65536.0);
        Output.Diffuse = In.Diffuse;
        return Output;
    }

    I tried it with a D24X8 depth buffer, so the adjustment I need (at least I think so) is (1.0 / 2^24). I imagine that this shifts one "Z unit". If I use this offset on Z, nothing happens (it flickers just like before). It starts to make some difference only with quite large values like (1.0 / 65536.0), but that value is huge if I think about the Z-values with this shift. Any idea why this does not work as I expect? What did I miss here?

    Then I implemented "the 3-pass technique", which does the following (see the sketch after this post):
    - Render the filled polygon
      - Disable depth buffer writes (leave depth test on)
      - Enable color buffer writes
    - Render the polygon edges
      - Normal depth & color buffering
    - Render the filled polygon again
      - Enable depth buffer writes
      - Disable color buffer writes

    This works perfectly, but it requires drawing each polygon one by one! This is a real batch killer. I didn't try it on large scenes, but I can imagine the speed... :(

    Now I'm thinking about implementing a real (I mean non-image-space) HLR algorithm and rendering the resulting lines in a single batch :) So, does anybody know about a nice method which does not force me to render polygons one by one? Thanks for your help in advance, kp

    [Edited by - kovacsp on November 7, 2006 2:53:34 PM]
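    A minimal sketch of the 3-pass technique described above, assuming a D3D9 device; DrawPolygonFill and DrawPolygonEdges are hypothetical per-polygon draw helpers, not from the original post:

    #include <d3d9.h>

    // Hypothetical per-polygon draw helpers, assumed to exist elsewhere.
    void DrawPolygonFill(IDirect3DDevice9* pDevice, size_t i);
    void DrawPolygonEdges(IDirect3DDevice9* pDevice, size_t i);

    // 3-pass hidden line removal: each polygon must be drawn on its own,
    // which is exactly the batch killer the post complains about.
    void RenderHiddenLine(IDirect3DDevice9* pDevice, size_t polygonCount)
    {
        for (size_t i = 0; i < polygonCount; ++i)
        {
            // Pass 1: filled polygon, color only (depth test on, depth writes off).
            pDevice->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);
            pDevice->SetRenderState(D3DRS_COLORWRITEENABLE, 0x0000000F);
            DrawPolygonFill(pDevice, i);

            // Pass 2: the polygon's edges with normal depth & color buffering.
            pDevice->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
            DrawPolygonEdges(pDevice, i);

            // Pass 3: filled polygon again, depth only, so it occludes later polygons.
            pDevice->SetRenderState(D3DRS_COLORWRITEENABLE, 0);
            DrawPolygonFill(pDevice, i);
        }
        // Restore color writes for whatever is rendered next.
        pDevice->SetRenderState(D3DRS_COLORWRITEENABLE, 0x0000000F);
    }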
  4. kovacsp

    HLSL Multiple passes issue

    Hi, if you want to do several passes with pixel shaders, you normally render to a texture in the first pass, then render a textured quad (using the result of the first pass as the texture) with the second pixel shader. You can chain this through as many passes as you want (see the sketch below). Don't ask how to do this using the effects framework, I've never used it because I do it by hand :) If you want to output to the color channel and the alpha channel, feel free to emit both in the same pixel shader if nothing speaks against it. If you need some clarification, feel free to ask more! To visually see the results of passes, and some real multipass effects, download RenderMonkey from the ATI website and play with it a little bit! It helps a lot! kp
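    A minimal sketch of the two-pass flow described above, assuming a D3D9 device, an off-screen render-target texture pRTTex whose top-level surface is pRTSurf, two pixel shaders, and DrawScene/DrawFullScreenQuad helpers; all of these names are placeholders, not from the original post:

    #include <d3d9.h>

    // Hypothetical helpers, assumed to exist elsewhere.
    void DrawScene(IDirect3DDevice9* pDevice);
    void DrawFullScreenQuad(IDirect3DDevice9* pDevice);

    // Pass 1 renders the scene into an off-screen texture; pass 2 draws a
    // textured quad that samples that texture while the second shader runs.
    void RenderTwoPasses(IDirect3DDevice9* pDevice,
                         IDirect3DTexture9* pRTTex, IDirect3DSurface9* pRTSurf,
                         IDirect3DPixelShader9* pPS1, IDirect3DPixelShader9* pPS2)
    {
        // Pass 1: into the off-screen render target.
        IDirect3DSurface9* pBackBuffer = NULL;
        pDevice->GetRenderTarget(0, &pBackBuffer);   // remember the back buffer
        pDevice->SetRenderTarget(0, pRTSurf);        // surface of pRTTex
        pDevice->Clear(0, NULL, D3DCLEAR_TARGET, 0, 1.0f, 0);
        pDevice->SetPixelShader(pPS1);
        DrawScene(pDevice);

        // Pass 2: back buffer again; the pass-1 result is now an input texture.
        pDevice->SetRenderTarget(0, pBackBuffer);
        pBackBuffer->Release();
        pDevice->SetTexture(0, pRTTex);
        pDevice->SetPixelShader(pPS2);
        DrawFullScreenQuad(pDevice);
    }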
  5. kovacsp

    pretty #define in DX10

    Quote: The maximum number of viewports that can be bound to the rasterizer stage at any one time is #defined in D3D10.h as D3D10_VIEWPORT_AND_SCISSORRECT_OBJECT_COUNT_PER_PIPELINE. [smile]
  6. kovacsp

    [MDX]TextureShader

    Quote: Original post by Ivo Leitao
    What is the purpose of this class? I cannot find any documentation :-(
    Hey, this thread is a bit old, but it's about this topic. kp
  7. kovacsp

    How make shadows?

    Hi, read these; when you're finished, you'll know the answers to your questions [smile]
    The Theory of Stencil Shadow Volumes
    An Example of Shadow Volume Rendering in DX9
    kp
  8. Quote: Original post by supersonicstar
    Hi kovacsp, thank you. Because all my program has to do is render to a texture and finally save it as a file, I did not use the function Present. After I drag my program onto NVPerfHUD, my program starts to run, but the screen is black. After I press the hotkey Ctrl+Z, nothing happens, and the screen is still black. If I use the function Present, there are random values presented on the screen, but still no performance information.
    Well, can't you just render to the screen instead of a texture? (I know it's an FP texture you're rendering to, but just give it a try.) Or maybe if you tone-mapped the resulting texture to the screen, it would work. Just guessing :) kp
  9. Quote: Original post by supersonicstar
    My program renders many passes to an A16G16B16R16F texture with alpha blending. Now I'd like to analyse its performance and observe the video memory usage. It seems that NVPerfHUD does not work when rendering to a texture.
    I've used NVPerfHUD several times on my program, which renders to textures heavily. However, I've never tried floating-point textures with it. Grab the new beta4 version, maybe it works better for you - if nothing helps, drop NVIDIA a mail, they're likely to respond very fast! BTW: what do you mean by "does not work"? Does it crash? kp
  10. kovacsp

    direct x for newbies

    Hi, I think the best you can do is download the SDK and start by compiling and understanding the Tutorial samples. Look up the DirectX calls in the SDK docs. Also read the "essay" parts of the SDK docs (DirectX Graphics/Direct3D9/Programming Guide/Getting Started). Or, if you prefer printed books, you'll find tons of "getting started with game programming using DX9" books on Amazon. kp
  11. kovacsp

    Drawing a Fullscreen Quad

    Hi, use my code (you have to adjust it in some places to fit your environment, e.g. modify the includes, and use your own state guard or the device itself to set render states - but don't forget to reset them afterwards). A short usage sketch follows after the code.

    ScreenAlignedQuad.hpp

    #pragma once
    #include "DirectXCommon.hpp"

    namespace DX
    {
        class ScreenAlignedQuad
        {
        private:
            IDirect3DVertexBuffer9* m_vertexBuffer;
            bool m_isValid;
            IDirect3DDevice9* m_pd3dDevice;

        public:
            ScreenAlignedQuad();
            ~ScreenAlignedQuad();

            bool IsValid() { return (m_isValid); }

            void Create(IDirect3DDevice9* p_pd3dDevice, int p_width, int p_height);
            void Render();
            void Release();
        };
    } // namespace DX

    ScreenAlignedQuad.cpp

    #include "stdafx_DXM.h"
    #include "ScreenAlignedQuad.hpp"
    #include "dxutil.h"

    namespace DX
    {
    //-----------------------------------------------------------------------------
    ScreenAlignedQuad::ScreenAlignedQuad()
    {
        m_isValid = false;
        m_vertexBuffer = NULL;
    }

    //-----------------------------------------------------------------------------
    ScreenAlignedQuad::~ScreenAlignedQuad()
    {
        if (IsValid())
        {
            Release();
        }
    }

    //-----------------------------------------------------------------------------
    void ScreenAlignedQuad::Create(IDirect3DDevice9* p_pd3dDevice, int p_width, int p_height)
    {
        m_pd3dDevice = p_pd3dDevice;

        HRESULT hr;
        hr = m_pd3dDevice->CreateVertexBuffer(6 * sizeof(VertexXYZCT), 0,
            D3DFVF_XYZ | D3DFVF_DIFFUSE | D3DFVF_TEX1, D3DPOOL_MANAGED,
            &m_vertexBuffer, NULL);
        if (hr != D3D_OK)
            return;
        if (m_vertexBuffer == NULL)
            return;

        void* vbVertices = NULL;
        if (FAILED(m_vertexBuffer->Lock(0, 6 * sizeof(VertexXYZCT), &vbVertices, 0)))
            throw CCException("DirectX error in drawing screen aligned quad");

        VertexXYZCT* vertices = (VertexXYZCT*) vbVertices;

        float startX = -1.0;
        float endX = 1.0;
        float startY = 1.0;
        float endY = -1.0;

        float startU = 0.5 / p_width;
        float endU = 1.0 + 0.5 / p_width;
        float startV = 0.5 / p_height;
        float endV = 1.0 + 0.5 / p_height;

        vertices[0].pos = D3DXVECTOR3(startX, startY, 1.0); vertices[0].tu = startU; vertices[0].tv = startV; vertices[0].color = D3DCOLOR_RGBA(255, 255, 255, 255);
        vertices[1].pos = D3DXVECTOR3(endX, startY, 1.0);   vertices[1].tu = endU;   vertices[1].tv = startV; vertices[1].color = D3DCOLOR_RGBA(255, 255, 255, 255);
        vertices[2].pos = D3DXVECTOR3(endX, endY, 1.0);     vertices[2].tu = endU;   vertices[2].tv = endV;   vertices[2].color = D3DCOLOR_RGBA(255, 255, 255, 255);
        vertices[3].pos = D3DXVECTOR3(startX, startY, 1.0); vertices[3].tu = startU; vertices[3].tv = startV; vertices[3].color = D3DCOLOR_RGBA(255, 255, 255, 255);
        vertices[4].pos = D3DXVECTOR3(endX, endY, 1.0);     vertices[4].tu = endU;   vertices[4].tv = endV;   vertices[4].color = D3DCOLOR_RGBA(255, 255, 255, 255);
        vertices[5].pos = D3DXVECTOR3(startX, endY, 1.0);   vertices[5].tu = startU; vertices[5].tv = endV;   vertices[5].color = D3DCOLOR_RGBA(255, 255, 255, 255);

        m_vertexBuffer->Unlock();

        m_isValid = true;
    }

    //-----------------------------------------------------------------------------
    void ScreenAlignedQuad::Render()
    {
        CCASSERT(IsValid());

        CD3DStateGuard guard(m_pd3dDevice);

        D3DXMATRIX identity;
        m_pd3dDevice->SetTransform(D3DTS_WORLD, D3DXMatrixIdentity(&identity));
        guard.SetTransform(D3DTS_VIEW, &identity);
        guard.SetTransform(D3DTS_PROJECTION, &identity);

        guard.SetRenderState(D3DRS_LIGHTING, FALSE);
        guard.SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
        guard.SetRenderState(D3DRS_ZENABLE, FALSE);
        guard.SetRenderState(D3DRS_ZWRITEENABLE, FALSE);
        guard.SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);

        m_pd3dDevice->SetFVF(FVF_VertexXYZCT);
        m_pd3dDevice->SetStreamSource(0, m_vertexBuffer, 0, sizeof(VertexXYZCT));
        m_pd3dDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 2);
    }

    //-----------------------------------------------------------------------------
    void ScreenAlignedQuad::Release()
    {
        SAFE_RELEASE(m_vertexBuffer);
    }
    } // namespace DX

    kp
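    A minimal usage sketch for the class above, assuming an initialized IDirect3DDevice9* and a source texture to display; the variable names here are placeholders, not part of the original code:

    // Create once, sized to the texture / render target that will be drawn,
    // so the half-texel UV offset computed in Create() matches it.
    DX::ScreenAlignedQuad quad;
    quad.Create(pDevice, textureWidth, textureHeight);

    // Each frame: bind the texture to show (e.g. a render-to-texture result) and draw.
    pDevice->SetTexture(0, pSourceTexture);
    quad.Render();

    // On shutdown.
    quad.Release();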
  12. Hi people, I've finally found the NVidia slides here: ftp://download.nvidia.com/developer/presentations/2006/gdc Ati didn't hide their slides, they are linked on the developer page here: http://www.ati.com/developer/techpapers.html kp
  13. Hi, try smaller ZBias values, maybe that helps (I use much smaller ones than you do now, though I don't have a depth-first pass).
    Quote: Original post by CadeF
    Offtopic: Also, would anyone know which drivers I could use to update the Radeon Mobility? Only the drivers on the OEM CD seem to work, I can't find any drivers which will install.
    For notebooks, the official route is to get the latest drivers from the notebook manufacturer's website. Also, there are patched versions of the non-Mobility drivers that will install on laptops; you can find some here: DriverHeaven, LaptopVideo2Go. (I don't know if using these patched drivers has any drawbacks or side effects compared to the official ones, so take care.) kp
    [Edited by - kovacsp on March 30, 2006 5:24:00 AM]
  14. kovacsp

    Better Rendering Performance

    Quote: Original post by katkiller
    1 - Textures: what's the best way to deal with textures? Loading, managing, rendering.
    Here is my experience with textures (see the loading sketch below):
    - Use DDS textures with mip maps present in the file (10x loading speed compared to JPGs).
    - Use as small textures as possible. Convince your artists that using 1024x1024 textures is usually BAAAAAD. (Measure how large a texture is going to be on screen, and use a size close to that.)
    - Use 2^x sized textures (they'll be resized anyway, unless you force them not to be - but DDS conversion will do this anyway).
    - Use texture atlases (this is connected to reducing the number of DrawSubset calls).
    I'll let others answer the other points, as I'm still looking for the best solution myself :) kp
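    A minimal sketch of the DDS loading tip above, assuming a D3D9 device and D3DX; the file name "diffuse.dds" and the function name are placeholders, not from the original post:

    #include <d3d9.h>
    #include <d3dx9.h>

    // Loads a DDS that already contains its mip chain: D3DX_FROM_FILE for the
    // size and mip count means nothing is rescaled or regenerated at load time,
    // and D3DFMT_FROM_FILE keeps the (possibly compressed) format stored in the file.
    IDirect3DTexture9* LoadDdsTexture(IDirect3DDevice9* pDevice)
    {
        IDirect3DTexture9* pTex = NULL;
        HRESULT hr = D3DXCreateTextureFromFileEx(
            pDevice, "diffuse.dds",
            D3DX_FROM_FILE, D3DX_FROM_FILE,     // width, height exactly as stored
            D3DX_FROM_FILE,                     // mip levels stored in the file
            0, D3DFMT_FROM_FILE, D3DPOOL_MANAGED,
            D3DX_FILTER_NONE, D3DX_FILTER_NONE, // nothing is resized, so no filtering
            0, NULL, NULL,
            &pTex);
        return SUCCEEDED(hr) ? pTex : NULL;
    }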