eduwushu

Member
  • Content Count: 26
  • Joined
  • Last visited
  • Community Reputation: 132 Neutral
  • Rank: Member
  1. Wow, thanks to all!! I've been busy with my cascaded shadow maps and completely forgot about this post!! I thought about spherical harmonics at one point, but at this stage I don't want to make the project more complicated than it already is (maybe using spherical harmonics is an easy task after all, but I know nothing about the topic, so I would have to investigate it, and that requires time I must spend on moving the project forward). Maybe in the future I could do that. I will try a simple ambient component, taking care not to saturate the diffuse map's color with it, and see if I get good results combined with HDR.
  2. Hi there! I have a little problem with terrain rendering when using an HDR technique. The issue is that on portions of the terrain whose normals point away from the sunlight direction, the Phong model gives me a resulting color of (0,0,0). When my camera focuses on a part of the scene centered on one of these darkened areas, the average luminance of the scene drops a lot, and the few fragments of illuminated terrain still visible in the frame get a fabulous, esoteric glow. Could this be solved with some kind of ambient component? Something like: resultColor += AmbientComponent * TerrainDiffuseColor (I use the diffuse color so the ambient term picks up the terrain textures' color instead of a plain color). This is certainly not physically accurate, but I think it could do the trick; a sketch follows below. What do you think? Thanks!
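     To make the idea concrete, here is a minimal sketch of the diffuse term plus the ambient floor, written as CPU-side XNA Vector3 math for illustration (the same expression would go in the pixel shader; all the names here are just placeholders):

    // Hypothetical sketch: Lambertian diffuse term plus an ambient floor so
    // terrain facing away from the sun never collapses to pure (0,0,0),
    // which is what drags the average luminance down.
    Vector3 ShadeTerrain(Vector3 normal, Vector3 lightDir,
                         Vector3 terrainDiffuseColor, Vector3 lightColor,
                         Vector3 ambientComponent)
    {
        float nDotL = MathHelper.Max(Vector3.Dot(normal, -lightDir), 0.0f);
        Vector3 result = terrainDiffuseColor * lightColor * nDotL; // standard diffuse term
        result += ambientComponent * terrainDiffuseColor;          // ambient floor, tinted by the terrain textures
        return result;
    }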
  3. I finally discovered where the error was: as I'm from Spain, I was using a number configuration that uses ',' as the decimal separator (instead of '.'). Changing the locale configuration for the decimal separator solved the problem (see the sketch below). Now my model is processed correctly and everything is fine. Thanks to all for your support!!!! Really!!
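     For anyone hitting the same thing, the locale-proof pattern is to parse and format numbers with the invariant culture instead of relying on the machine's regional settings. A minimal sketch (the literal value is just illustrative):

    using System.Globalization;

    // "0.5" parses correctly regardless of the machine's regional settings;
    // under a Spanish locale (decimal comma) the default Parse would fail
    // or misread the value.
    float value = float.Parse("0.5", CultureInfo.InvariantCulture);

    // Likewise, write numbers out with the invariant culture so downstream
    // tools always see '.' as the decimal separator.
    string text = value.ToString(CultureInfo.InvariantCulture);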
  4. Well, I didn't post any code because I thought it could be a project configuration problem or maybe an XNA version problem. But just in case, here is the relevant part of my model processor code. I took from the web the code fragments needed to instruct XNA to compute the tangent, normal and binormal data for me. As I said, this works perfectly on my work computer, but on my home computer the tangent and binormal data are not computed. I'm trying to uninstall and reinstall XNA, just to be sure.

[ContentProcessor]
public class CBModelProcessor : ModelProcessor
{
    // Maximum number of bone matrices we can render using shader 2.0
    // in a single pass. If you change this, update SkinnedModel.fx to match.
    const int MaxBones = 59;

    List<Vector3> m_adVertices = new List<Vector3>();
    List<BoneWeightCollection> m_adVertexWeights = new List<BoneWeightCollection>();
    Dictionary<string, int> m_dBoneNameMap = new Dictionary<string, int>();
    Vector3 m_v3Min, m_v3Max;
    string sMaterialName;

    // acceptableVertexChannelNames are the inputs that the normal map effect
    // expects. The NormalMappingModelProcessor overrides ProcessVertexChannel
    // to remove all vertex channels which don't have one of these four names.
    static IList<string> acceptableVertexChannelNames = new string[]
    {
        VertexChannelNames.TextureCoordinate(0),
        VertexChannelNames.Normal(0),
        VertexChannelNames.Binormal(0),
        VertexChannelNames.Tangent(0)
    };

    [Browsable(false)]
    public override bool GenerateTangentFrames
    {
        get { return true; }
        set { }
    }

    protected override void ProcessVertexChannel(GeometryContent geometry,
        int vertexChannelIndex, ContentProcessorContext context)
    {
        String vertexChannelName = geometry.Vertices.Channels[vertexChannelIndex].Name;

        if (vertexChannelName == VertexChannelNames.Weights())
        {
            // Store the weights of animation for OBBox computing
            foreach (BoneWeightCollection bwc in geometry.Vertices.Channels[vertexChannelIndex])
            {
                m_adVertexWeights.Add(bwc);
            }
            //IEnumerable<BoneWeightCollection> iterator = geometry.Vertices.Channels[vertexChannelIndex].ReadConvertedContent<BoneWeightCollection>();
        }

        // If this vertex channel has an acceptable name, process it as normal.
        if (acceptableVertexChannelNames.Contains(vertexChannelName))
        {
            base.ProcessVertexChannel(geometry, vertexChannelIndex, context);
        }
        // Otherwise, remove it from the vertex channels; it's just extra data
        // we don't need.
        else
        {
            geometry.Vertices.Channels.Remove(vertexChannelName);
        }
    }

    /// <summary>
    /// The main Process method converts an intermediate format content pipeline
    /// NodeContent tree to a ModelContent object with embedded animation data.
    /// </summary>
    public override ModelContent Process(NodeContent input, ContentProcessorContext context)
    {
        System.Diagnostics.Debugger.Launch();
        ExtractModelVertices(input);
        ExtractMaterialName(input);
        if (!IsAnimatedModel(input))
            return ProcessStaticModel(input, context);
        else
            return ProcessAnimatedModel(input, context);
    }

    [...]
}
  5. szecs, I was not expecting you to guess what was happening at first glance. I was only asking whether anybody has run into something like this before and, if so, how they solved it.
  6. I will try it. Thanks!!
  7. Has nobody ever run into this problem??
  8. Hi all!! I'm developing a project in XNA, and some time ago I made a custom model processor that generates the tangents, normals and binormals for the model's meshes. It worked for some time, but now, depending on the computer I'm on, it may or may not generate this data. I have been developing this project on several computers, but using the same configuration for the project (x86). Now tangent frames are only generated correctly on my work computer. When I run the project on my home PC or on my laptop, this data is initialized to zero. Does anybody know why this is happening? Thanks a lot
  9. Quote: Hi Edu, 0) Who is Matt? 1) Formatting posts See you :) Oops, a little mistake, hehe. Quote: so max luminance is the luminance you get when you take 1.f from the current framebuffer and transform it back to your HD range with the inverse formula you used the last frame. I don't know if I understand you correctly... Are you suggesting using the maximum pixel luminance of the current frame as the maximum absolute luminance? Take into account that in the ShaderX article, absolute_max_luminance seems to be different from max_luminance. The latter is corrected with an autoexposure formula to produce the 'eye adaptation' effect; Abs_Max_Luminance is used as the maximum luminance value defining the range over which the luminance slots are distributed. Thinking about it, it only makes sense to choose a value for abs_max_luminance greater than or equal to the max luminance of the current frame. I don't know if it makes sense to choose an abs_max_luminance greater than that value.
  10. Hi All!!! My name is Edu. I'm currently doing a small personal project in XNA and I'm a little stuck in the HDR pipeline. I read the ShaderX6 article by Fran Carucci about computing the histogram on the CPU to obtain the max, min and average luminance values, and it is driving me a bit crazy now. As stated in the article, I render the scene to an FP16 render target (I'm not using LogLuv encoding for now, to simplify), create a downsampled version at 1/4 of the original size, and read that version back for CPU analysis.

The first step is the histogram creation: Carucci suggests using 1024 luminance slots for the histogram, where each slot covers absolute_maximum_luminance / 1024 units of luminance. I'm still trying to figure out what he is referring to with 'absolute_max_luminance'. Could it be the maximum luminance that can be expressed with 16 bits per channel (65535)? Is it a parameter to be set manually by the user? The fact is that the value chosen for this parameter affects the final image: if I choose a big value for it and my scene's pixels have a maximum luminance that is relatively small compared to the chosen absolute_max_luminance, then the vast majority of pixels fall into the same histogram slot (the greater absolute_max_lum is, the greater the luminance range covered by each slot) and the final image is incorrect.

I also can't understand very well how histogram equalization is applied in this process. As far as I know, equalization tries to maximize image contrast by spreading the original histogram of the image across the entire luminance range while preserving the luminance ratios. What I've tried to do is:

1.- For each pixel in the image, compute its luminance and find its slot, adding 1/Total_Num_Pixels to that slot. This gives us the normalized histogram.

2.- Compute the cumulative distribution function of the image, which will be used to equalize it. As seen in the Digital Image Processing book, this function can be used to perform the luminance mapping. I store this function in a num_slots x 1 texture.

3.- Since the image will be equalized to cover the whole luminance range, I can compute the max, min and average luminance this way (I'm not very sure of this step):

MinLuminance = Absolute_Max_Luminance * Min_Percentage
MaxLuminance = Absolute_Max_Luminance * Max_Percentage
AvgLuminance = Absolute_Max_Luminance * Avg_Percentage

4.- To perform tonemapping I use a Reinhard operator with the values computed above. Before applying the operator I map each pixel's luminance to its equalized value:

// Calculate the luminance of the current pixel
float Lw = dot(LUM_CONVERT, vColor);
Lw = tex1D(RemapSampler, Lw);

float Ls = Lw - g_fMinLuminance;
float Ld = 0.0f;
if (Ls > 0.0f)
{
    float L = (g_fMiddleGrey / g_fAvgLuminance) * Ls;
    Ld = L * (1.0f + L / (g_fMaxLuminance * g_fMaxLuminance)) / (1.0f + L);
}
vColor *= (Ld / Lw);
return vColor;

But this is not giving me good results. I attach a capture showing the final rendered image, the render of the downsampled version of the original FP16 render target, and the lighting configuration panel with the user-configurable values for the algorithm. Surely I'm not understanding something well, but I've been thinking about it for two weeks and my mind is blocked now. I would very much appreciate a little help. Thanks a lot in advance, really. [Edited by - eduwushu on September 6, 2010 4:55:54 AM]
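     In case it helps to see steps 1 and 2 spelled out, this is roughly how the normalized histogram and its cumulative distribution can be built on the CPU (a sketch with made-up names: pixels holds the luminance of each downsampled pixel and absoluteMaxLuminance is the parameter discussed above):

    const int NumSlots = 1024;

    // Step 1: normalized histogram. Each slot covers
    // absoluteMaxLuminance / NumSlots units of luminance.
    float[] histogram = new float[NumSlots];
    float slotWidth = absoluteMaxLuminance / NumSlots;
    float weight = 1.0f / pixels.Length;
    foreach (float luminance in pixels)
    {
        int slot = (int)(luminance / slotWidth);
        if (slot >= NumSlots) slot = NumSlots - 1;   // clamp anything above the range
        histogram[slot] += weight;
    }

    // Step 2: cumulative distribution function. cdf[i] is the fraction of
    // pixels at or below slot i; this is the remap curve uploaded as the
    // num_slots x 1 texture sampled by RemapSampler.
    float[] cdf = new float[NumSlots];
    float sum = 0.0f;
    for (int i = 0; i < NumSlots; i++)
    {
        sum += histogram[i];
        cdf[i] = sum;
    }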
  11. For some time I've been researching how to improve the efficiency of scene rendering in my application. I want to support instancing in my engine and, for that purpose, I want to implement batching: geometry batches will combine several instances into one draw call. But I have some doubts. I picture the overall process this way:

1.- Traverse the scene graph and cull all the entities that are not visible (suppose I'm using an octree).

2.- Sort the entities by the state block needed for their rendering and by texture context. All the entities that share the same set of render states and the same set of textures will be combined, already transformed and with instance attributes applied, into a single batch, since they will be drawn with a single draw call (see the sketch below).

3.- Send the batches to the renderer. The renderer will then build a vertex buffer object to hold all the batch information, set the state needed to draw the batch, and draw it using, for example, a DrawIndexedPrimitive call.

The next frame we must repeat the entire operation, recomputing the visible entities, which can differ from the previous frame's. The resulting batches will therefore probably differ from the previous frame's as well, so we must recompute the set of batches to draw (and then build the new VB objects to draw them). Is this approach really efficient? Having to recompute all the batches and rebuild the vertex buffers is probably expensive. I wonder if there is some kind of batch caching technique that could improve this, or maybe I haven't understood the batching technique well. Thanks!!
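     To make step 2 concrete, the grouping I have in mind is keyed on the render-state and texture identifiers, something like this sketch (all the type names here are invented for illustration):

    using System.Collections.Generic;

    // Hypothetical key: entities sharing render states and textures batch together.
    struct BatchKey
    {
        public readonly int StateBlockId;
        public readonly int TextureSetId;
        public BatchKey(int state, int textures)
        {
            StateBlockId = state;
            TextureSetId = textures;
        }
    }

    // Hypothetical visible entity as it comes out of the cull pass,
    // already transformed and with instance attributes applied.
    class VisibleEntity
    {
        public int StateBlockId;
        public int TextureSetId;
        // ... transformed geometry, instance attributes, etc.
    }

    static Dictionary<BatchKey, List<VisibleEntity>> BuildBatches(List<VisibleEntity> visibleEntities)
    {
        var batches = new Dictionary<BatchKey, List<VisibleEntity>>();
        foreach (VisibleEntity entity in visibleEntities)
        {
            var key = new BatchKey(entity.StateBlockId, entity.TextureSetId);
            List<VisibleEntity> batch;
            if (!batches.TryGetValue(key, out batch))
            {
                batch = new List<VisibleEntity>();
                batches.Add(key, batch);
            }
            batch.Add(entity);
        }
        // Each entry then becomes one vertex buffer fill plus one
        // DrawIndexedPrimitive call in the renderer.
        return batches;
    }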
  12. Thanks for all the replies. I will take them into consideration when rewriting my code. Following them, would you recommend having a global variable of IRenderer type, or maybe an IRenderer-typed attribute in some CEngine class? While debugging the application I noticed this issue:

--TestApp.cpp--
[Breakpoint] CD3DRenderer::Init(&data);

When I hit the breakpoint placed before the call to Init, I add a watch in the debugger on the value of &IRenderer::m_pRenderer. Then I step into the Init method and add another watch on &m_pRenderer. This last address was different from the address I got outside the Init method, which seemed very strange to me.
  13. I know, but this seems to be the perfect place to use a singleton pattern, doesn't it? In my application I must have a single instance of an object which must be globally visible to other components, and I must ensure that no one can create another instance of it. Anyway, I now want to understand what is happening with this code and why it is not working, just as a programming curiosity. Can you see the error?
  14. Hi all, I've encountered some problems trying to implement a singleton pattern for an engine I'm developing. In that engine I need a Renderer object. This renderer object must have subclasses in order to make it API independent: I will have a D3DRenderer and an OGLRenderer subclass of the Renderer class. In the engine I need only one object of the type of one of the two subclasses, but I need to access this object through the generic interface provided by the Renderer class. I thought of doing something like this:

--FILE IRenderer.h--

class IRenderer
{
protected:
    static IRenderer* m_pRenderer;
    static void SetRenderer(IRenderer* renderer) { m_pRenderer = renderer; }
    IRenderer() {}
public:
    static IRenderer* GetRenderer() { return m_pRenderer; }
    virtual ~IRenderer() {}
};

--FILE IRenderer.cpp--

// This definition originally sat in IRenderer.h. Defining the static member
// in a header can leave each module that includes it with its own copy of
// m_pRenderer (which would explain seeing two different addresses for it in
// the debugger); the definition belongs in exactly one .cpp file.
IRenderer* IRenderer::m_pRenderer = NULL;

--FILE CD3DRenderer.h--

class CD3DRenderer : public IRenderer
{
private:
    LPDIRECT3DDEVICE9 m_pD3DDevice;
    IDirect3D9* m_pD3D9Interface;

    CD3DRenderer()
    {
        m_pD3D9Interface = NULL;
        m_pD3DDevice = NULL;
    }

    long InitD3D(D3DINITDATA* data);

public:
    virtual ~CD3DRenderer();
    static void Init(D3DINITDATA* data);
};

--FILE CD3DRenderer.cpp--

void CD3DRenderer::Init(D3DINITDATA* data)
{
    if (m_pRenderer == NULL)
    {
        // Construct the new renderer using the data specified
        m_pRenderer = new CD3DRenderer();
        if (E_FAIL == ((CD3DRenderer*)m_pRenderer)->InitD3D(data))
        {
            delete m_pRenderer;
            m_pRenderer = NULL;
        }
    }
}

// Note: this definition originally took a MIRAGED3DINITDATA* while the
// declaration above takes a D3DINITDATA*; the two must match.
long CD3DRenderer::InitD3D(D3DINITDATA* data)
{
    if (NULL == (m_pD3D9Interface = Direct3DCreate9(D3D_SDK_VERSION)))
    {
        return E_FAIL;
    }
    [...]
    D3DPRESENT_PARAMETERS d3dpp;
    [...]
    if (FAILED(m_pD3D9Interface->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                              data->hWnd, vp, &d3dpp, &m_pD3DDevice)))
        return E_FAIL;
    return NOERROR;
}

CD3DRenderer::~CD3DRenderer()
{
    if (m_pD3DDevice)
        m_pD3DDevice->Release();
}

With this code the logical way to use the renderer object is:

CD3DRenderer::Init(&somedata);                  // First initialize the singleton object
IRenderer* renderer = IRenderer::GetRenderer();
// (Do something with the renderer)

The problem is that when I call IRenderer::GetRenderer() from an application I always get the NULL pointer. I set a breakpoint inside the Init() method and the renderer object is initialized fine (it is not leaving the m_pRenderer variable as NULL). So I think there must be something that I have not taken into account. Any ideas?? Thanks beforehand.
  15. Well, let's see: I'm thinking about how an engine can load the static geometry for a scene, and how modern engines actually do it. One possibility is loading all the static geometry, the main geometry of the level, as a bunch of polygons distributed in an octree, or maybe a bunch of polygons that, in the file from which we load the map, are already distributed in the nodes of a BSP tree. But I see that in the engines people are building, all the geometry is treated as entities. If I treat this geometry as entities, how can I order them spatially? By putting references to the entities in the nodes of the spatial structure where they lie? This is tricky because this geometry will never move during the game: it is loaded and nothing more. You also have entities that are part of the scene's geometry but can move, like doors, elevators, switches, etc.; these are dynamic entities with restricted movement. If we make them dynamic we have to treat them as separate entities in order to move them, rotate them, and so on. So how can we localize them spatially along with the other static geometry and the purely dynamic objects, like the enemies of the scene or our main character? How can we put it all together (something like the sketch below is what I imagine)? The fact that the entities could also be organized hierarchically, and the combination of hierarchy and spatial partitioning, make this issue more difficult. Can someone give me some examples of the kind of management I'm looking for? Thank you very much
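     To make the question more concrete, this is the kind of arrangement I imagine (purely a sketch with invented names, using XNA's BoundingBox): entity references stored in the octree nodes, with static entities inserted once at load time and dynamic ones re-inserted whenever they move.

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    class Entity
    {
        public BoundingBox Bounds;
        public OctreeNode Node;   // back-reference so a moving entity can remove itself
        public bool IsStatic;     // doors/elevators would be dynamic entities with restricted movement
    }

    class OctreeNode
    {
        public BoundingBox Bounds;
        public OctreeNode[] Children;                              // null for leaf nodes
        public List<Entity> StaticEntities = new List<Entity>();   // filled once at load time
        public List<Entity> DynamicEntities = new List<Entity>();  // updated as entities move
    }

    static class Octree
    {
        // Standard insertion by bounding box: descend while a child fully
        // contains the entity, otherwise keep it at the current node.
        public static void Insert(OctreeNode node, Entity e)
        {
            if (node.Children != null)
            {
                foreach (OctreeNode child in node.Children)
                {
                    if (child.Bounds.Contains(e.Bounds) == ContainmentType.Contains)
                    {
                        Insert(child, e);
                        return;
                    }
                }
            }
            (e.IsStatic ? node.StaticEntities : node.DynamicEntities).Add(e);
            e.Node = node;
        }

        // Dynamic entities pay a remove + reinsert when they move;
        // static geometry never does.
        public static void OnEntityMoved(OctreeNode root, Entity e)
        {
            e.Node.DynamicEntities.Remove(e);
            Insert(root, e);
        }
    }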