
Everything posted by orenk

  1. Hi all, I'm glad to announce my new product: OrenVideo - Cross Platform Video Solution.

     A few words about the product: OrenVideo lets you put videos in your game/application in less than an hour, without worrying about memory or speed. The cool thing about it is that you get a built-in alpha channel, so you can play transparent videos. Want to play video inside your game textures? No problem! Want to play some cut scenes? Easy! Want to create your entire UI with videos? Done!

     To see it in action in a real published game, check out Super Splatters, which uses OrenVideo a lot: the entire UI is done with videos, in-game effects are done with videos, in-game markers/objects are done with videos, and more. To see how many OrenVideo instances are used in real time, check out the green rects in the video.

     Here is what Niv Fisher, Founder & Lead Programmer at Spiky Snail, had to say about OrenVideo:

     "After using Bink extensively in the XBLA version, also licensing it for PC turned out to be too expensive. We decided to try out OVD and were thoroughly impressed with the ease of integration, the simplicity of the tools and the support we got. The quality and performance of this library was on par or better for a fraction of the price."

     A few of OrenVideo's highlights:
     - Cross platform (Windows, Mac, Linux, Android).
     - Internal alpha channel for transparent videos.
     - Simple and well-documented SDK, easy to debug; you can integrate it in less than an hour.
     - One small library and you are set. No special system software or audio codec needed; it is completely self-contained and does not rely on any 3rd-party libraries.
     - Same API and data files on multiple platforms.
     - Highly optimized to take advantage of every platform it supports. You will be shocked how well it performs; you can use it for in-game videos, cut-scene videos, and even build your entire UI with it.
     - Optimized to use as little memory as possible; no need to worry about codecs that eat your memory.
     - Designed to work hand in hand with multi-threaded environments.

     So how much will it cost you? Check out: http://www.orenvid.com/pricing

     For more info, check out my site at: http://www.orenvid.com

     Thanks
  2. You are right that commercial studios can afford expensive video-player fees, but still: say a game is running on 5 platforms and they would pay, say, 40K for all of them. I think they would be happy to know that they can pay only 15K, though it wouldn't hurt them to pay the 40K. You mentioned Bink, so open-source solutions are out of the question, as they are not close to what Bink gives you in terms of speed and features. Think of it like this: if you had only NVIDIA, or only AMD/ATI, they could charge any price and you would pay it, because you don't have any choice. But today you have a choice; you can compare on price and features and buy AMD/ATI or NVIDIA. What I'm trying to do is give people a choice, so they can choose based on price and features.
  3. Hi all, I'm working full power on publishing a cross-platform video solution. The first stage will be: 1. PC 2. Linux 3. Mac. The second stage will depend on sales, so eventually I will add the top consoles: 1. Xbox 360 (and the future 720) 2. PS3. More platforms will be added based on customer requirements. Now for the important stuff: pricing. As you may or may not know, it's a pain in the butt to create such a solution and at the same time make it fast enough to run multiple HD videos in a 30 fps game. I've added a poll at the top left corner of my blog, http://orenk2k.blogspot.com/, so I can find out what you think. YOUR VOTE COUNTS! Believe it or not, your vote will have a HUGE influence on the price. Poll keywords: indie = independent, comm = commercial. So if you have friends, colleagues or people who might consider buying such a solution, tell them to vote. Thanks and see you.
  4. Hi Hodgman. It's per platform, per game. If you need a few platforms for the same game, then you'll talk with me and you'll get a discount depending on the platforms and future deals. Keep in mind that I want an approximate price for the standard per-platform, per-game license, not prices for all license types (multi-platform, bundle, bulk/publisher deals, etc.); that's why I only put up 6 choices (3 indie, 3 commercial), to keep it simple.
  5. Thanks for your reply. Most of the videos could be created via the engine's cut-scene editor or video grabbers like Fraps; some videos are third-party related, such as NVIDIA, Intel, Havok, etc.
  6. Hi, I created an SSAO effect but it looks very strange; I hope someone can tell me what's wrong, because I can't see any problem with my code. First, I see the edges of the triangles popping out; second, there is a white line at the corners where it should be black/shaded.

     Triangle edges: [screenshot]
     White lines: [screenshot]

     Here is the depth shader:

     // Vertex shader
     REAL4 vPositionView = mul(In.vPosition, g_matWorldView);
     Output.fDepth = length(vPositionView.xyz);
     Output.vNormal = mul((In.dwNormal)*2-1, (REAL3x3)g_matWorldView);
     Output.vNormal = normalize(Output.vNormal);

     // Pixel shader
     return REAL4(In.vNormal.x, In.vNormal.y, In.vNormal.z, In.fDepth);

     Frustum corner calculation (client side):

     FLOAT farY = tanf(m_pCamera->GetFov() / 2) * m_pCamera->GetFarPlane();
     FLOAT farX = farY * m_pCamera->GetAspect();
     D3DXVECTOR4 vCorner(farX, farY, m_pCamera->GetFarPlane(), 0.0f);
     // note: camera fov in radians

     SSAO shader:

     // Vertex shader
     In.vPosition.xy = sign(In.vPosition.xy);
     Output.vPosition = REAL4(In.vPosition.xy, 0.0f, 1.0f);
     Output.vTexCoords = In.vTexCoords;
     Output.vViewDirToFrustumCorner = REAL3(In.vPosition.xy, 1) * g_vDirection.xyz;
     // note: g_vDirection = vCorner computed on the client side

     // Pixel shader
     // reconstruct eye-space position from the depth buffer
     REAL depth = tex2D(DepthSampler, In.vTexCoords).a;
     REAL3 se = depth * normalize(In.vViewDirToFrustumCorner);

     REAL3 randNormal = tex2D(RandNormalSampler, In.vTexCoords*20).rgb;
     REAL3 normal = tex2D(DepthSampler, In.vTexCoords).xyz;

     const int num_samples = 16;
     REAL finalColor = 0.0f;
     for (int i = 0; i < num_samples; i++)
     {
         REAL3 ray = reflect(samples[i].xyz, randNormal);

         // add view space normal and bias
         ray += normal * g_fNormalBias;
         ray *= g_fSampleRadius;

         // new (view-space) sample in a sphere of sampleRadius
         REAL4 sample = REAL4(se + ray, 1.0f);

         // determine the clip-space location of our current sample point
         REAL4 ss = mul(sample, g_matProjection);

         // determine the texture coordinate of our current sample point
         REAL2 sampleTexCoord = 0.5f * (ss.xy/ss.w) + REAL2(0.5f, 0.5f);
         sampleTexCoord.y = 1.0 - sampleTexCoord.y;

         // read the depth of our sample point from the depth buffer
         REAL sampleDepth = tex2D(DepthSampler, sampleTexCoord).a;

         // compute our occlusion factor
         REAL depth_diff = depth - sampleDepth;
         REAL occlusion = g_fDistanceScale * max(depth_diff, 0.0f);
         finalColor += 1.0 / (1.0f + (occlusion * occlusion) * 0.1);
     }

     REAL col = finalColor / num_samples;
     return REAL4(col, col, col, 1.0f);
  7. OK, I fixed the corner and also found the depth error: it seems that when I output length(In.vPositionView.xyz) it creates the sphere shape and the depth info isn't correct. So now my depth shader looks like this:

     // Vertex shader output structure
     struct VS_OUTPUT_DEPTH
     {
         REAL4 vPosition     : POSITION;
         REAL3 vNormal       : TEXCOORD0;   // normal
         REAL4 vPositionView : TEXCOORD1;   // linear depth
         REAL  fDepth        : TEXCOORD2;   // linear depth
     };

     // Vertex shader
     VS_OUTPUT_DEPTH RenderSceneDepthVS(VS_INPUT In)
     {
         VS_OUTPUT_DEPTH Output;

         // Transform the position from object space to homogeneous projection space
         REAL4 vertexWorld;
         Output.vPosition = TransformVertex(In.vPosition, vertexWorld);

         // output linear depth (view/camera space z)
         REAL4x4 matWorldView = mul(g_matWorld, g_matView);
         Output.vPositionView = mul(In.vPosition, matWorldView);
         Output.fDepth = Output.vPositionView.z;

         Output.vNormal = mul((In.dwNormal)*2-1, (REAL3x3)matWorldView);
         return Output;
     }

     // Pixel shader
     REAL4 RenderSceneDepthPS(VS_OUTPUT_DEPTH In) : COLOR
     {
         return REAL4(normalize(In.vNormal.xyz), In.fDepth);
         // return REAL4(normalize(In.vNormal.xyz), length(In.vPositionView.xyz));
     }

     SSAO shader:

     // Vertex shader
     VS_OUTPUT_SSAO RenderSceneVS(VS_INPUT_SSAO In)
     {
         VS_OUTPUT_SSAO Output;

         // Transform the position from object space to homogeneous projection space
         In.vPosition.xy = sign(In.vPosition.xy);
         Output.vPosition = REAL4(In.vPosition.xy, 0.0f, 1.0f);
         Output.vTexCoords = In.vTexCoords;

         //Output.vViewDirToFrustumCorner = REAL3(In.vPosition.xy,1)*g_vDirection.xyz;
         //    = REAL3(farX*Output.vPosition.x, -farY*Output.vPosition.y, g_fFarClipPlane);
         Output.vViewDirToFrustumCorner = REAL3(-In.vCorner.x, -In.vCorner.y, In.vCorner.z);
         return Output;
     }

     // Pixel shader
     REAL4 RenderScenePS(VS_OUTPUT_SSAO In) : COLOR
     {
         // reconstruct eye-space position from the depth buffer
         REAL4 data = tex2D(DepthSampler, In.vTexCoords);
         REAL3 se = data.w * normalize(In.vViewDirToFrustumCorner);

         REAL3 randNormal = tex2D(RandNormalSampler, In.vTexCoords*20).rgb;
         REAL3 normal = normalize(data.xyz);
         // return se.xyzz;
         // return In.vViewDirToFrustumCorner.xyzz;
         // return depth*0.001;

         REAL3 tangent  = ddy(se);
         REAL3 binormal = ddx(se);
         normal = cross(normalize(tangent), normalize(binormal));
         // return REAL4(normal.xyz, 1);

         const int num_samples = 16;
         REAL finalColor = 0.0f;
         for (int i = 0; i < num_samples; i++)
         {
             REAL3 ray = reflect(samples[i].xyz, randNormal); // * g_fSampleRadius;

             // add view space normal and bias
             ray += normal * g_fNormalBias;

             // new (view-space) sample in a sphere of sampleRadius
             REAL4 sample = REAL4(se + ray * g_fSampleRadius, 1.0f);

             // determine the clip-space location of our current sample point
             REAL4 ss = mul(sample, g_matProjection);

             // determine the texture coordinate of our current sample point
             REAL2 sampleTexCoord = 0.5f * (ss.xy/ss.w) + REAL2(0.5f, 0.5f);
             sampleTexCoord.y = 1.0 - sampleTexCoord.y;

             // read the depth of our sample point from the depth buffer
             REAL sampleDepth = tex2D(DepthSampler, sampleTexCoord).w;

             // compute our occlusion factor
             REAL depth_diff = g_fDistanceScale * max(data.w - sampleDepth, 0.0f);
             if (depth_diff > g_fOccludeDist)
                 finalColor++;
             else
                 finalColor += 1.0 / (1.0f + (depth_diff * depth_diff));
         }

         REAL col = finalColor / num_samples;
         return REAL4(col, col, col, 1.0f);
     }

     Now the screenshots:
     se: [screenshot]
     corner: [screenshot]
     normals (computed on the fly in the pixel shader): [screenshot]
     SSAO without depth check: [screenshot]
     SSAO with depth check: [screenshot]

     As you can see, I've fixed a few things: no triangle edges and no strange white edges popping while moving the camera. But there is still one problem left: a halo effect around the objects. I tried to remove it by comparing the depth difference against a threshold; if it's bigger I continue without adding an occlusion term, otherwise I do the usual thing. But there are some strange white pixels around, and when I composite the final picture you can see some bright areas around the objects (inside the halo), which is not so great :( I checked your code, agi, and played with your shader; I can't understand how your shader works and mine doesn't. I even tried to do exactly what you are doing and it still didn't work like yours. I also saw that you output the length of the eye-space position inside your fragment shader and it gives you good depth info (no sphere shape around your eye position). Any ideas what the problem could be? BTW, which graphics API are you using, GL or DX?
  8. Hi, it seems the corner was not the problem in the first place; the depth generation is the problem. What I did was set up the corners in the normal attribute like agi said and use them, but then I saw it's the same as I had before: no depth gradient in 'se'. So I changed my depth shader to output the interpolated eye-space position and take its length in the pixel shader.

     New depth shader:

     struct VS_OUTPUT_DEPTH
     {
         REAL4 vPosition     : POSITION;
         REAL3 vNormal       : TEXCOORD0;   // normal
         REAL4 vPositionView : TEXCOORD1;   // linear depth
     };

     // Vertex shader
     VS_OUTPUT_DEPTH RenderSceneDepthVS(VS_INPUT In)
     {
         VS_OUTPUT_DEPTH Output;

         // Transform the position from object space to homogeneous projection space
         REAL4 vertexWorld;
         Output.vPosition = TransformVertex(In.vPosition, vertexWorld);

         // output linear depth (view/camera space position)
         Output.vPositionView = mul(In.vPosition, mul(g_matWorld, g_matView));
         Output.vNormal = mul((In.dwNormal)*2-1, (REAL3x3)g_matWorldView);
         return Output;
     }

     // Pixel shader
     REAL4 RenderSceneDepthPS(VS_OUTPUT_DEPTH In) : COLOR
     {
         return REAL4(normalize(In.vNormal.xyz), length(In.vPositionView.xyz));
     }

     This gives the gradient effect and the SSAO looks fine now, but now I see a black sphere shape around my view position (some depth issue), which is very strange. See the screenshots, maybe you have some comments:

     [screenshot]
     Top view (note the sphere shape around the view position): [screenshot]
     Another one: [screenshot]
  9. OK, thanks (agi, MJP), I think I have some material to work with ;) I'll try it out and hope it fixes the problem.
  10. Hi, I can see the problem now: there is no depth gradient that I can see; it looks just the same as the 'se' output. I'm using my own engine, so I need to write my own functions. 1. What exactly does the function 'getFrustumCorners' do? Does it calculate the positions of the 8 vertices that lie on the near and far planes (4 near and 4 far)? 2. Assuming I have that function, does the order of the corner vertices matter? Do I need to map them to the full-screen quad vertices in a certain order?
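      To make sure we are talking about the same thing, here is a minimal sketch of what I assume the four far-plane corners should be, reusing the camera accessors from my earlier post (the near-plane corners would just be the same directions scaled by near/far):

      // minimal sketch, assuming GetFov() returns the vertical FOV in radians
      FLOAT fFar = m_pCamera->GetFarPlane();
      FLOAT farY = tanf(m_pCamera->GetFov() * 0.5f) * fFar;
      FLOAT farX = farY * m_pCamera->GetAspect();

      // view-space far-plane corners
      D3DXVECTOR3 vFarCorners[4] =
      {
          D3DXVECTOR3(-farX,  farY, fFar),   // top-left
          D3DXVECTOR3( farX,  farY, fFar),   // top-right
          D3DXVECTOR3( farX, -farY, fFar),   // bottom-right
          D3DXVECTOR3(-farX, -farY, fFar),   // bottom-left
      };
      // each corner has to go with the matching full-screen-quad vertex (same
      // screen-space ordering), so the value interpolated into the pixel shader
      // is the view ray through that pixel to the far plane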
  11. Can you show me what your textures look like (se, corner, normal)?
  12. Hi, yes I know what you mean ;) 0. My units are inches (I'm using the Q4 editor; they use inches). 1. se.xyzz: [screenshot] 2. In.vViewDirToFrustumCorner.xyzz: looks the same as 1. 3. normal.xyzz: [screenshot]
  13. Quote: Original post by agi_shi
      "I can't see any obvious errors, but:
      1) why does finalColor += 1.0 / (1.0f + (occlusion * occlusion) * 0.1); have that multiply by 0.1? What if you remove it?
      2) for the depth shader, what happens if you interpolate the view-space position to the pixel shader and then take the length there? I believe that's why you can actually "see" the triangle edges
      3) what does g_fDistanceScale look like?"

      Hi agi_shi, thanks for replying. I read your thread with geoff wilson on GameDev; I hope you can help me as much as you helped him :) I also saw your 'disorder engine' pics; the SSAO looks very good, how did you do it?
      1. The 0.1 is a scale factor on the depth difference, which lets me control how much it influences the result. If I remove it, the scene gets brighter.
      2. I tried that but it did not give me better results.
      3. fDistanceScale is a scale factor like in (1) that I can control from the app. Basically I could remove the 0.1 from the finalColor += ... line and fold it into fDistanceScale. You are right, that would save me a mul :)
  14. Simple occlusion test?

      You can use an occlusion query to check how many pixels passed: render the lens billboard, read back how many pixels were visible, and use that count to drive the fade in/out effect.
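      A minimal sketch of that idea with a D3D9 occlusion query; the device pointer, the DrawLensFlareBillboard() helper and the total pixel count are placeholders for the example:

      // minimal sketch, assuming a D3D9 device; DrawLensFlareBillboard() is a
      // hypothetical helper that draws the small test quad at the flare position
      IDirect3DQuery9* pQuery = NULL;
      g_pDevice->CreateQuery(D3DQUERYTYPE_OCCLUSION, &pQuery);

      pQuery->Issue(D3DISSUE_BEGIN);
      DrawLensFlareBillboard();
      pQuery->Issue(D3DISSUE_END);

      // in a real frame loop you would read this a frame later instead of spinning
      DWORD dwVisiblePixels = 0;
      while (pQuery->GetData(&dwVisiblePixels, sizeof(DWORD), D3DGETDATA_FLUSH) == S_FALSE)
          ;

      // dwBillboardPixels = total pixel area of the billboard (assumed known)
      float fFade = (float)dwVisiblePixels / (float)dwBillboardPixels;
      pQuery->Release();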
  15. I'm using my ambient pass to generate the refraction map. In my ambient pass I render all non-refractive surfaces, then I render all refractive surfaces to the alpha channel only, to mask out pixels that lie outside them. For example, you don't want to apply the perturbation effect to pixels that aren't water-surface pixels.
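      A minimal sketch of that alpha-mask step with D3D9 render states; DrawRefractiveSurfaces() is a placeholder for whatever draws the water and other refractive geometry:

      // write only to the alpha channel while drawing the refractive surfaces,
      // so alpha ends up as a mask of "pixels that belong to a refractive surface"
      g_pDevice->SetRenderState(D3DRS_COLORWRITEENABLE, D3DCOLORWRITEENABLE_ALPHA);
      DrawRefractiveSurfaces();   // hypothetical helper, outputs alpha = 1

      // restore full RGBA writes for the rest of the frame
      g_pDevice->SetRenderState(D3DRS_COLORWRITEENABLE,
          D3DCOLORWRITEENABLE_RED | D3DCOLORWRITEENABLE_GREEN |
          D3DCOLORWRITEENABLE_BLUE | D3DCOLORWRITEENABLE_ALPHA);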
  16. It seems you didn't understand what I meant by saying D3DX is slow. I wrote "it's very slow compared to good C/SSE code"; that means that if you are writing some loop (to do the skinning part, for example) and call D3DX functions to do the transformations, it's slow compared to writing the same loop in optimized asm/SSE code. Function-call overhead and cache behavior are very important when you're optimizing. This also applies to quaternion math and conversions, and to shadow volume construction. Anyway, check http://developer.intel.com/ to learn more about optimization.
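      To illustrate the kind of loop I mean, here is a minimal sketch of batch-transforming an array of float4 positions with SSE intrinsics (row vector times row-major matrix, the same convention D3DX uses); the function and buffer names are made up for the example:

      #include <cstddef>
      #include <xmmintrin.h>

      // out[i] = in[i] * M for 'count' float4 vectors; m points at 16 row-major floats
      void TransformArraySSE(float* out, const float* in, const float* m, size_t count)
      {
          const __m128 row0 = _mm_loadu_ps(m + 0);
          const __m128 row1 = _mm_loadu_ps(m + 4);
          const __m128 row2 = _mm_loadu_ps(m + 8);
          const __m128 row3 = _mm_loadu_ps(m + 12);

          for (size_t i = 0; i < count; ++i)
          {
              const float* v = in + i * 4;
              // splat each component and accumulate component * matrix row
              __m128 r = _mm_add_ps(
                  _mm_add_ps(_mm_mul_ps(_mm_set1_ps(v[0]), row0),
                             _mm_mul_ps(_mm_set1_ps(v[1]), row1)),
                  _mm_add_ps(_mm_mul_ps(_mm_set1_ps(v[2]), row2),
                             _mm_mul_ps(_mm_set1_ps(v[3]), row3)));
              _mm_storeu_ps(out + i * 4, r);
          }
      }

      Compared to calling a D3DX transform function per vertex, the matrix rows stay in registers and there is no function-call overhead inside the loop.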
  17. Why are you creating a mesh each time you render? Second, the index buffer should remain static (set it once at init or something). Also, compute the normals yourself; that way you can optimize it (you don't actually know what DX is doing inside that function). From my tests, using D3DX for matrix/vector math is not a good idea; it's very slow compared to good C/SSE code. The process you wrote isn't so clear: you are locking in steps 3 and 4, but where are you unlocking? Also, you didn't write anything about what skinning you're doing.
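      A minimal sketch of the "fill the index buffer once at init" point with plain D3D9 calls; the device pointer, buffer size and index array are placeholders:

      // created once at init and reused every frame; the triangle list of an
      // animated model never changes, only the vertices do
      IDirect3DIndexBuffer9* g_pIB = NULL;
      g_pDevice->CreateIndexBuffer(numIndices * sizeof(WORD), D3DUSAGE_WRITEONLY,
                                   D3DFMT_INDEX16, D3DPOOL_MANAGED, &g_pIB, NULL);

      void* pDst = NULL;
      if (SUCCEEDED(g_pIB->Lock(0, 0, &pDst, 0)))      // size 0 locks the whole buffer
      {
          memcpy(pDst, pIndices, numIndices * sizeof(WORD));
          g_pIB->Unlock();
      }
      // per frame: g_pDevice->SetIndices(g_pIB); then DrawIndexedPrimitive(...)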
  18. But you can render the refractive surfaces to a render target (in a separate pass) and pass it to the water pixel shader when rendering the water surface.
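      A minimal sketch of that separate pass in D3D9; the render-target texture g_pRefractionTex and the DrawRefractionSources() helper are placeholders:

      // render the refraction source into an off-screen texture
      IDirect3DSurface9* pBackBuffer = NULL;
      IDirect3DSurface9* pRTSurface  = NULL;
      g_pDevice->GetRenderTarget(0, &pBackBuffer);
      g_pRefractionTex->GetSurfaceLevel(0, &pRTSurface);

      g_pDevice->SetRenderTarget(0, pRTSurface);
      DrawRefractionSources();                       // hypothetical helper
      g_pDevice->SetRenderTarget(0, pBackBuffer);

      pRTSurface->Release();
      pBackBuffer->Release();

      // later, bind the texture for the water pixel shader
      g_pDevice->SetTexture(0, g_pRefractionTex);    // sampler index is up to the shader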
  19. Hi, I don't think it's because of D3DXMesh; as I understand it, that is just used for filling the VB data and rendering. I'm also rendering MD5, but I'm not getting 13 fps: about 100 Q4 models of approximately 2000 faces each give me 7-10 fps on a 3.2 GHz machine with 1 GB RAM and a GeForce 6600 GT. Try doing these things:
      1. When doing skinning (position, TNB), don't lock the VB before and unlock it after all the computation is done; use a temp buffer for this, and afterwards just lock, do a memcpy, and unlock immediately (sketch below).
      2. For the shadow volume, move it to the GPU and you get an extra boost.
      3. Also, for the TNB calculation, you only need to compute T and N; B can be computed in the vertex shader.
      4. I don't know how you compute the TNB data, but if you're using the skinned mesh for that, e.g. computing averaged normals, it's not fast. You can compute (using the base frame data) a transformation matrix for the TNB that you compute for the base frame only, and when skinning use that matrix with the right weights.
      5. After all that, using correct SSE can double your performance.
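      A minimal sketch of point 1, assuming a dynamic D3D9 vertex buffer and a hypothetical SkinIntoTempBuffer() that does the CPU skinning:

      // do all the CPU skinning work into a plain system-memory scratch buffer first
      SkinIntoTempBuffer(g_pTempVerts, numVerts, frame);   // hypothetical CPU skinning

      // then hold the lock only for the copy itself
      void* pDst = NULL;
      if (SUCCEEDED(g_pVB->Lock(0, numVerts * sizeof(SkinnedVertex), &pDst, D3DLOCK_DISCARD)))
      {
          memcpy(pDst, g_pTempVerts, numVerts * sizeof(SkinnedVertex));
          g_pVB->Unlock();
      }

      Keeping the lock window down to the memcpy means the driver is not stalled while the skinning math runs.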