LargeJ

  1. Thanks for all the input. I am actually looking for an easy and efficient way to update mesh data, because I like to experiment with applying shaders, changing geometry, instancing and such on the fly, without having to worry too much about calling the render system to update its representation. I was thinking of representing a mesh as a pure data structure that only contains triangles, vertices, etc., plus some useful methods to manipulate the mesh data. This mesh must then be transformed into my engine-specific vertex buffer description that the render system consumes (and stores in an OpenGL vertex buffer object). Digesting that vertex buffer description produces a collection of render calls in the render system; these calls are executed every frame and can be sorted for the fewest state changes.
    The use case I was considering: as a user of my engine, I would like to easily change the appearance of a mesh by assigning new materials to any (random) combination of triangles after the mesh has already been created. For this to work, the set of render calls in the render system must be updated: I must find all render calls that belong to this geometry (which couples the mesh to the render system) and then possibly add more render calls (because a single render call for 500 triangles with material A can now be split into 200 triangles with material A and 300 with material B). Or should I just limit the interface to the more common, easy ways of changing a mesh's appearance, and make the user responsible for calling the render system to update the GPU representation when they want to change something more specific?
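    Roughly what I mean by the mesh being pure data with per-triangle materials, as a simplified sketch (all names here are made up for illustration, not actual engine code):
    [code]
    #include <cstdint>
    #include <map>
    #include <vector>

    struct Vertex   { float position[3]; float normal[3]; float uv[2]; };
    struct Triangle { std::uint32_t indices[3]; std::uint32_t materialId; };

    // Pure data: no GPU handles, no render-system types.
    struct Mesh {
        std::vector<Vertex>   vertices;
        std::vector<Triangle> triangles;

        void assignMaterial(const std::vector<std::uint32_t>& triangleIds,
                            std::uint32_t materialId) {
            for (std::uint32_t t : triangleIds)
                triangles[t].materialId = materialId;
        }
    };

    // What the render system consumes: one batch description per material.
    struct RenderBatchDesc {
        std::uint32_t materialId;
        std::vector<std::uint32_t> indices;   // flattened triangle indices
    };

    // Rebuilds the batch list from the mesh; 500 triangles of material A that
    // become 200 A + 300 B automatically turn into two batches here.
    std::vector<RenderBatchDesc> buildBatches(const Mesh& mesh) {
        std::map<std::uint32_t, RenderBatchDesc> byMaterial;
        for (const Triangle& tri : mesh.triangles) {
            RenderBatchDesc& batch = byMaterial[tri.materialId];
            batch.materialId = tri.materialId;
            batch.indices.insert(batch.indices.end(), tri.indices, tri.indices + 3);
        }
        std::vector<RenderBatchDesc> result;
        for (auto& entry : byMaterial)
            result.push_back(std::move(entry.second));
        return result;
    }
    [/code]
    The render system would then rebuild or diff its render calls from these batch descriptions whenever it is told the mesh changed, instead of the mesh manipulating render calls directly.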
  2. The mesh has a collection of vertices, triangles and assigned materials, and an interface for changing the geometry in a convenient way. It is not stored in a form that can be rendered efficiently as-is. For rendering I am using OpenGL, so I must create a vertex buffer object and store the data on the GPU. So I have decoupled the mesh representation from how the rendering system represents the object. When the mesh geometry changes I could instruct the GPU directly to update its representation, but an event system seemed more convenient for keeping the coupling between the systems low. So at the moment, when I update the vertices or triangles, the mesh geometry has changed and I send a "mesh geometry changed" event. These events are sent from inside the Mesh class itself, and I am not sure whether that is a proper way to solve this problem.
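    Roughly how the event is sent right now, as a trimmed-down sketch (the class and function names are placeholders, not my real code):
    [code]
    #include <cstddef>
    #include <functional>
    #include <utility>
    #include <vector>

    class Mesh;   // forward declaration

    // Anyone interested in geometry changes (e.g. the render system) subscribes here.
    class MeshEventBus {
    public:
        using Listener = std::function<void(const Mesh&)>;

        void subscribe(Listener listener) { listeners.push_back(std::move(listener)); }

        void notifyGeometryChanged(const Mesh& mesh) {
            for (auto& listener : listeners)
                listener(mesh);
        }

    private:
        std::vector<Listener> listeners;
    };

    class Mesh {
    public:
        explicit Mesh(MeshEventBus& bus) : bus(bus) {}

        void setVertexPosition(std::size_t index, float x, float y, float z) {
            // ... write x/y/z into the vertex data at 'index' ...
            // The part I am unsure about: the mesh itself raises the event.
            bus.notifyGeometryChanged(*this);
        }

    private:
        MeshEventBus& bus;
    };
    [/code]
    The render system's subscription would then mark the corresponding vertex buffer object dirty and re-upload it before the next frame.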
  3. I am wrapping my head around dynamic meshes. I want my mesh to be able to change its geometry during execution, and to let all dependent systems know that they have to update their state based on the mesh data (primarily the rendering system). I have an event system in place through which I can send mesh-changed events, but I'm wondering whether sending this event from inside my Mesh class makes sense. It feels a bit like mixing responsibilities, because I would like my mesh to be as elementary as possible. How do I solve updating the mesh representations inside the rendering system in a clean way?
  4. Shadow artifacts with peter-panning

    Never mind guys, it was a silly mistake on my side. It turned out I was using only the depth matrix; I still had to multiply it with the model matrix of each object in the scene.
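    Concretely, the corrected setup is roughly this (a simplified glm sketch, not my actual code; the variable and function names are made up):
    [code]
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Per object: transforms object-space positions into the light's clip space.
    // My mistake was leaving 'modelMatrix' out of this product.
    glm::mat4 computeDepthBiasMVP(const glm::vec3& lightInvDir, const glm::mat4& modelMatrix)
    {
        // Projection and view from the light's point of view (directional light,
        // so an orthographic projection; the extents are just what fits my scene).
        glm::mat4 depthProjection = glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, -10.0f, 20.0f);
        glm::mat4 depthView       = glm::lookAt(lightInvDir, glm::vec3(0.0f), glm::vec3(0, 1, 0));

        // Used both when rendering the shadow map and when sampling it.
        glm::mat4 depthMVP = depthProjection * depthView * modelMatrix;

        // Remap clip space [-1, 1] to texture space [0, 1] for the shadow lookup.
        glm::mat4 biasMatrix(
            0.5f, 0.0f, 0.0f, 0.0f,
            0.0f, 0.5f, 0.0f, 0.0f,
            0.0f, 0.0f, 0.5f, 0.0f,
            0.5f, 0.5f, 0.5f, 1.0f);

        return biasMatrix * depthMVP;
    }
    [/code]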
  5. I am trying to implement simple shadow mapping (following the OpenGL shadow mapping tutorial: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/), but I'm getting some strange artifacts with the shadows. I attached three files to my post that clearly show my problem: one picture seems correct (the cube is lifted somewhat off the floor), another shows peter-panning (although a little too much peter-panning for my taste), and another shows the issue that I have. My scene consists of two objects: a cube and a stretched cube acting as the floor, so the floor is not a plane. The issue appears in specific compositions of the scene, for example when I lift the floor up or change its thickness. My bias is very low (0.005), the shadow maps are 2048x2048 and the scene is around 10x10x1 in size, so I wouldn't expect precision problems of this magnitude. Do you guys have any clues about what can cause this "subtraction"-like effect?
  6. Hello all, I want to implement a ray tracer that models hair fibers as described by Marschner et al. in "Light Scattering from Human Hair Fibers" (2003). From reading several other papers I noticed that hair can be rendered explicitly or implicitly. Explicit rendering requires every hair strand to be rendered separately, but because hair fibers are very thin compared to the size of a pixel, there will likely be aliasing problems. I have read a lot about using volume densities instead, but I do not entirely understand that idea, so I was wondering what techniques are generally used to ray trace a hair fiber. My idea is that hair segments (curves) can be projected onto the image plane; this way you know exactly which pixels are affected, and you can then apply pixel blending to render the fibers that affect each pixel. However, I have not been able to find a (scientific) paper explaining the best way to render individual hair strands using ray tracing. It looks like many people choose to treat the curves as thin cylinders and use oversampling to accommodate the aliasing problems. So, does anyone know how a single hair fiber is ray traced nowadays? The rendering should be physically accurate, so speed is not an issue at this point.
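    To make the "thin cylinder" interpretation concrete for myself, this is roughly how I picture intersecting a ray with one straight hair segment (an uncapped finite cylinder). This is just my own sketch, not taken from any of the papers:
    [code]
    #include <cmath>
    #include <optional>

    struct Vec3 {
        float x, y, z;
        Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
        Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
        Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
    };
    static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // One straight hair segment: a finite cylinder of radius 'radius' around the axis p0 -> p1.
    struct HairSegment { Vec3 p0, p1; float radius; };

    // Returns the ray parameter t of the nearest hit, or nothing if the ray misses.
    std::optional<float> intersect(const Vec3& origin, const Vec3& dir, const HairSegment& seg)
    {
        Vec3  axis    = seg.p1 - seg.p0;
        float axisLen = std::sqrt(dot(axis, axis));
        Vec3  a       = axis * (1.0f / axisLen);          // unit axis direction

        // Remove the components parallel to the axis; what remains is a 2D circle test.
        Vec3 x  = origin - seg.p0;
        Vec3 dp = dir - a * dot(dir, a);
        Vec3 xp = x   - a * dot(x, a);

        float A = dot(dp, dp);
        float B = 2.0f * dot(dp, xp);
        float C = dot(xp, xp) - seg.radius * seg.radius;
        float disc = B * B - 4.0f * A * C;
        if (A == 0.0f || disc < 0.0f)
            return std::nullopt;                          // parallel to the axis, or a miss

        float t = (-B - std::sqrt(disc)) / (2.0f * A);    // nearest of the two roots
        if (t < 0.0f)
            return std::nullopt;                          // hit lies behind the ray origin

        // Keep the hit only if it lies between the segment's endpoints.
        float s = dot(x + dir * t, a);
        if (s < 0.0f || s > axisLen)
            return std::nullopt;
        return t;
    }
    [/code]
    The hit point and the local fiber direction would then go into the Marschner scattering model, and many such rays per pixel would be averaged to deal with the aliasing.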
  7. @Krohm I'm trying to understand both suggestions, and at the moment I'm about as far as what NiteLordz proposed. I don't really understand what the big difference/drawback is compared with your approach, especially what you mean by the parameter blob. And another question: I now use mesh files in which submeshes refer to named materials. Each named material has a corresponding callback object, so that it can do some initialization and contains the code to update the uniform parameters of the shader, etc. These material names/classes must be known by the system in order to instantiate the correct callback object, so I can either create a plugin system or manually register all callback objects. From a book I have, plugins can be implemented via dynamic linking (e.g. DLL files), but that does not sound familiar/convenient from what I see in other rendering engines. Manually registering all callback objects would work, but sounds cumbersome too. So what is the most common way of handling this kind of problem?
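    For reference, the manual registration I have in mind looks roughly like this (a simplified sketch; the class names are placeholders, not my real code):
    [code]
    #include <functional>
    #include <map>
    #include <memory>
    #include <stdexcept>
    #include <string>
    #include <utility>

    // Per-material code: one-time initialization plus per-draw uniform updates.
    class MaterialCallback {
    public:
        virtual ~MaterialCallback() = default;
        virtual void initialize() = 0;
        virtual void updateUniforms() = 0;
    };

    // Maps the material name found in the mesh file to a factory for its callback.
    class MaterialRegistry {
    public:
        using Factory = std::function<std::unique_ptr<MaterialCallback>()>;

        void registerMaterial(const std::string& name, Factory factory) {
            factories[name] = std::move(factory);
        }

        std::unique_ptr<MaterialCallback> create(const std::string& name) const {
            auto it = factories.find(name);
            if (it == factories.end())
                throw std::runtime_error("unknown material: " + name);
            return it->second();
        }

    private:
        std::map<std::string, Factory> factories;
    };

    // The cumbersome part: every material class has to be listed somewhere by hand.
    void registerBuiltInMaterials(MaterialRegistry& registry) {
        (void)registry;   // nothing registered in this sketch; real code would do e.g.
        // registry.registerMaterial("grass", [] { return std::make_unique<GrassMaterial>(); });
        // registry.registerMaterial("metal", [] { return std::make_unique<MetalMaterial>(); });
    }
    [/code]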
  8. Thanks for the replies. I will try to adjust my design by taking into account these insights. Apparently, I'm not even halfway there. Having a decent design is a pain in the ass.
  9. Hello all, I am writing my own rendering engine and I am unsure how to handle materials. From what I know, each material contains parameters describing how to render a surface (including which shader to use). Up to now I use *.obj files to store and load the meshes. These files have a linked *.mtl file, but that only covers ambient, diffuse, specular, etc. shading parameters. I can write some basic shaders producing those effects, but what if I want to write my own specific shader? Is there an appropriate way to link such a custom shader, with its own specific parameters, to the material description? Just manually editing the *.mtl files sounds awkward, because you would be deviating from the general *.mtl format. Are there specific file formats in which the shader to use and the parameter values can all be set? Or do you create your own file format for the meshes, in which you can specify all the information you need, including the parameters for the shader (a sketch of the kind of information I mean follows below)? Jeffrey
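    Just to make concrete the kind of information I would want such a description to hold, independent of the file format it is stored in (a sketch with made-up names):
    [code]
    #include <array>
    #include <map>
    #include <string>
    #include <variant>

    // A shader parameter value: a scalar, a color/vector, or a texture file path.
    using MaterialParam = std::variant<float, std::array<float, 4>, std::string>;

    // Everything a submesh needs to know about its appearance, whether it was
    // loaded from an *.mtl file or from a custom material file.
    struct MaterialDesc {
        std::string shaderName;                        // e.g. "phong" or a custom shader
        std::map<std::string, MaterialParam> params;   // uniform name -> value
    };
    [/code]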
  10. Hello all, I am working on an application that reads in a collection of 3D models. Because the application is written in Java, I decided to use JOGL to render the models to a window when the user presses "view model". I know how to create an OpenGL context by creating a GLCanvas and attaching a listener, and in the init method I can load the models into VBOs. However, what I would like to do is initialize OpenGL at the start of the application, so that I can load the models onto the GPU right away. When the user presses the view-mesh button, the window containing the GLCanvas should be created and the selected model rendered. That way I don't have to keep all the vertex and normal data around in my own code, just one or two ids. So, is there any way to obtain a handle to OpenGL without creating a GLCanvas? [Or is hiding the window the most efficient way to go? I don't want to waste processor time by rendering/looping something that isn't visible.]
  11. Thanks for the quick reply. So what I understand is that you create a material structure that matches the available variable slots in the shader, so that the renderer can set these slots when needed (or make it virtual in the IRenderable interface, so that different shaders can use different material structures)? Right now I have a teapot which does not use a texture, just diffuse/ambient/specular colors. So does this mean I have to set a boolean to indicate whether to use the specified diffuse color or the texture?
  12. I am working on my own simple rendering engine and I have a problem concerning shaders.
    1. Right now I have two objects: a teapot (which consists of multiple submeshes) and a plane. The plane uses a shader that is almost identical to the teapot's, except that it also needs a "grass" texture. Is it common in shader programming to duplicate a lot of shader code like this (I guess not)? Do I have to split these shaders up into multiple shaders and apply multi-pass rendering, thereby causing overhead?
    2. In my engine a mesh only knows about its geometrical data, not about the naming of uniform variables. I put a wrapper class around the mesh and set the uniform variables from there. So for the teapot, I loop over the submeshes and set the uniform variables for each of them. The problem is that every wrapper class (say a wall, stone, car, etc.) needs to specify the uniform variable values, so if one uniform variable name changes I have to edit a lot of code in different places. What is the best way to solve this? Right now I have made a utility class with a member "setMaterialVars( Material* )" that sets the material properties based on a material struct. Is this the way to go (every shader gets its own utility class), or are there better solutions? Thanks in advance.
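    The current setup, roughly, as a trimmed-down sketch (the uniform and class names are made up; the point is that only this one class knows the uniform names):
    [code]
    #include <GL/glew.h>   // or whichever OpenGL loader/header is in use

    // Plain data, filled in by the wrapper classes (teapot, wall, car, ...).
    struct Material {
        float ambient[3];
        float diffuse[3];
        float specular[3];
        float shininess;
    };

    // The only place that knows the uniform names of this shader.
    class PhongShaderBinder {
    public:
        explicit PhongShaderBinder(GLuint program)
            : ambientLoc  (glGetUniformLocation(program, "MaterialAmbient"))
            , diffuseLoc  (glGetUniformLocation(program, "MaterialDiffuse"))
            , specularLoc (glGetUniformLocation(program, "MaterialSpecular"))
            , shininessLoc(glGetUniformLocation(program, "MaterialShininess")) {}

        void setMaterialVars(const Material* material) const {
            glUniform3fv(ambientLoc,   1, material->ambient);
            glUniform3fv(diffuseLoc,   1, material->diffuse);
            glUniform3fv(specularLoc,  1, material->specular);
            glUniform1f (shininessLoc, material->shininess);
        }

    private:
        GLint ambientLoc, diffuseLoc, specularLoc, shininessLoc;
    };
    [/code]
    If a uniform name changes, only this binder class has to be touched; the wrapper classes keep passing the same Material struct.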
  13. Thanks for the advice. Sorting the lights sounds like an obvious way to make it faster; I should have thought of that myself. [quote name='YogurtEmperor' timestamp='1313466891' post='4849685'] Accessing the array is actually slower than recalculating it. [/quote] Right now I perform some calculations in the vertex shader and store the results in arrays (half vectors, eye vectors, etcetera). So what you are saying is that, because array access is slower, I am better off doing all the "simple vector" computations in the fragment shader?
  14. Thanks, I guess you are right. I changed the #define MAX_LIGHTS to 3 and it runs smoothly again. So if I want to add more lights (say 10), is there no other way to do it than using multiple render passes, assuming that I don't precompute the lighting values?
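    For my own understanding, the multi-pass route would look roughly like this on the CPU side (a simplified sketch; bindLightBatch and drawScene are made-up engine hooks, and the extra passes are accumulated with additive blending):
    [code]
    #include <GL/glew.h>   // or whichever OpenGL header is in use
    #include <algorithm>

    const int MAX_LIGHTS_PER_PASS = 3;

    // Hypothetical engine hooks (stubs here): upload one batch of lights as
    // uniforms, and issue the scene's draw calls.
    void bindLightBatch(int /*firstLight*/, int /*lightCount*/) { /* set uniforms */ }
    void drawScene()                                            { /* draw calls   */ }

    void renderWithManyLights(int totalLights)
    {
        // First pass: normal rendering with the first batch of lights,
        // filling the depth buffer as usual.
        glDisable(GL_BLEND);
        glDepthFunc(GL_LESS);
        bindLightBatch(0, std::min(MAX_LIGHTS_PER_PASS, totalLights));
        drawScene();

        // Remaining passes: redraw the scene and add the contribution of the
        // next batch of lights on top of what is already in the framebuffer.
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);   // additive
        glDepthFunc(GL_LEQUAL);        // accept the already-written depth values
        glDepthMask(GL_FALSE);         // the depth buffer is already correct

        for (int first = MAX_LIGHTS_PER_PASS; first < totalLights; first += MAX_LIGHTS_PER_PASS) {
            bindLightBatch(first, std::min(MAX_LIGHTS_PER_PASS, totalLights - first));
            drawScene();
        }

        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
    }
    [/code]
    The shader used for the extra passes would have to leave out the ambient and emission terms, so they are not added once per pass.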
  15. Hello all, I am working on a lighting shader in GLSL in which I want to compute spot, directional and point lights, all in a single pass. The shader is not that efficient, but my FPS is still around 500, so I am able to see the results. However, when I added spotlights the frame rate dropped to 0-1 FPS: I get a white screen, then a correctly rendered scene, then a white screen again, and so on. It feels as if it is stuck in an infinite loop or something.
    In my scene I have a plane and a teapot. If I render 3 lights (a spot, a directional and a point light) on the plane, everything runs smoothly. When computing the lighting on the teapot I get this problem, and it also occurs when I render only the teapot (without the plane). However, when I render 3 lights on the plane and only 2 lights on the teapot (so the spotlight is not computed for the teapot), the FPS is also reasonable, and when I compute only the spotlight there are no problems either. My way of deciding which kind of lighting to compute is an 'if' statement in the fragment shader, like this:
    [code]
    struct Light {
        // Position of the light (if w == 0.0, it is a direction for a directional light).
        vec4 Position;
        vec3 Color;
        vec3 Attenuation;

        // Spotlight part.
        // If SpotCutoff equals -1 (= cos(180)), it is a point light; otherwise it is a spotlight.
        float SpotCutoff;
        float SpotExponent;
        vec3 SpotDirection;
    };

    void main()
    {
        vec3 surfaceNormal = normalize(Normal);
        vec3 ColorOut = vec3(0.0f, 0.0f, 0.0f);

        for (int i = 0; i < LightCount; ++i) {
            // Determine the kind of light.
            if (LightSources[i].Position.w == 0.0f) {
                ColorOut += DirectionalLight(i, surfaceNormal, LightVectors[i],
                                             normalize(HalfVectors[i]));
            } else {
                float lightDistance = length(LightVectors[i]);
                vec3 lightDirection = LightVectors[i] / lightDistance;

                if (LightSources[i].SpotCutoff == -1.0f)
                    ColorOut += PointLight(i, surfaceNormal, lightDirection,
                                           normalize(HalfVectors[i]), lightDistance);
                else
                    ColorOut += SpotLight(i, surfaceNormal, lightDirection,
                                          normalize(HalfVectors[i]),
                                          normalize(SpotDirections[i]), lightDistance);
            }
        }

        gl_FragColor = vec4(ColorOut + 0.05 * AmbientColor + EmissionColor, 0.0f);
    }
    [/code]
    By searching this forum I found a possible explanation: that the video card does not support branching at the fragment level. However, branching does work for the point and directional lights, so I guess my video card supports it. I'm using GLSL version 1.3 and have an NVIDIA GeForce 9800M GS. Does anyone know what the cause of this problem might be?