_Slin_

Members
  • Content count: 16
Community Reputation

209 Neutral

About _Slin_

  • Rank: Member
  1. Reducing Unity game file size

    Ahh, I thought the 80 MB were the download size. For runtime memory consumption that seems okay. The advantage of a format like PVRTC is that the GPU can handle it directly, so textures take the same amount of VRAM as they do on disk and don't need additional time for decompression. But that would only become an issue if you needed more than 200 MB of VRAM. A PNG just gets decompressed and is then used as uncompressed image data, which is often just fine.
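    To put rough numbers on the difference (an illustrative calculation, not from the thread):

        1024x1024 RGBA8 (PNG after decompression): 1024 * 1024 * 4 bytes = 4 MB in VRAM
        1024x1024 PVRTC at 4 bpp:                  1024 * 1024 / 2 bytes = 0.5 MB in VRAM

    Mipmaps add roughly a third on top in both cases.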
  2. Reducing Unity game file size

    That is nearly four times the size reported by Unity, so where does all that additional data come from? I am just wondering, because 60 MB of just additional stuff seems like a lot. That's like forgetting to strip debug symbols and wondering why a library is 100 MB instead of 1 MB...
  3. Reducing Unity game file size

    What's up with "Complete size 81.9 mb 100.0%" and the app being 156 MB? Also, did you look into compressed texture formats supported by the graphics hardware, like PVRTC for iOS devices (isn't there a standard in OpenGL ES by now!?)? They might be bigger than some PNG files and provide lower quality, but if you choose a higher resolution, file size and quality should be quite good, and Unity's asset pipeline should support them. But sure, if your textures are mostly low frequency, PNG is a better choice, at least for download size...
  4. On my GTX 460 it is all black, or broken in other ways (looks like some z-prepass mess-up?) if I disable directional lighting.
  5. Defining some terms

    Feel free to define those however you want, or think of better ones that allow for a better structure; the following is the interpretation I use in my own OpenGL-based engine.

    Model: Contains several meshes with their materials (so when rendering, I set a material and render the vertices that are supposed to use it, as that is basically how OpenGL wants it). It may also contain different LOD stages. It can be loaded from a file.
    Mesh: One vertex/index buffer pair per mesh.
    Node: A position in the game world, which may be part of a more complex scene tree and be associated with one or more models/lights/cameras/...
    Material: Holds a shader and render states, and maybe some special shader variables.
    Entity: A node with a model and a skeleton.
    Skeleton: A bone structure and pose, also holding a pointer to its animation data.

    The animation is a bit tricky. I put my skeleton with its animation data (I only support skeletal animations) into the entity, as that allows me to have different animation states for different entities sharing the same model. For vertex-based animations I would probably make them part of the mesh, storing some frame variable in the entity and doing the blending between frames in a shader.

    When exporting an object from Blender to my custom file format, the resulting files are the skeleton with animation data, and the model. The model is exported so that it contains a different mesh for each texture.

    This is how I do it, and there are probably better ways to handle it.
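    To make the relationships concrete, here is a minimal sketch of how these terms could map onto C++ types. All names are illustrative assumptions, not taken from an actual engine:

        #include <cstddef>
        #include <vector>

        // Placeholder math types; a real engine has proper implementations.
        struct Vector3 { float x, y, z; };
        struct Quaternion { float x, y, z, w; };
        struct Matrix4 { float m[16]; };
        struct AnimationData;                // bone keyframes, loaded from file

        struct Mesh {                        // one vertex/index buffer pair
            unsigned int vertexBuffer;       // GL buffer object names
            unsigned int indexBuffer;
            std::size_t indexCount;
        };

        struct Material {                    // shader plus render states
            unsigned int shader;
            // render states, special shader variables, ...
        };

        struct Model {                       // several meshes, each with its material
            struct Surface { Mesh mesh; Material material; };
            std::vector<Surface> surfaces;   // possibly one set per LOD stage
        };

        struct Node {                        // a position in the scene tree
            Vector3 position;
            Quaternion rotation;
            std::vector<Node *> children;
        };

        struct Skeleton {                    // bone structure and current pose
            std::vector<Matrix4> pose;
            const AnimationData *animation;
        };

        struct Entity : Node {               // node + model + skeleton
            Model *model;                    // shared between entities
            Skeleton skeleton;               // per entity, so animation states can differ
        };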
  6. Frustum Culling

    For now it is just a list; later they will probably be sorted into an octree or something similar, which should of course be an improvement, but for now it is good enough to optimize the culling itself a little (for fewer than 1000 lights it performs very well, using multithreading and some basic code optimizations).

    My code seems to work well and looks like this now:

        // Computes the signed distance of the light's position to a plane and
        // skips the light (continue) if it is outside by more than its range.
        #define Distance(plane, op, r) { \
            float dot = (position.x * plane.normal.x + position.y * plane.normal.y + position.z * plane.normal.z); \
            distance = dot - plane.d; \
            if(distance op r) \
                continue; \
        }

        const Vector3 &position = light->_worldPosition;
        const float range = light->_range;
        float distance, dr, dl, dt, db;

        Distance(plright, >, range);
        dr = distance;
        Distance(plleft, <, -range);
        dl = distance;
        Distance(plbottom, <, -range);
        db = distance;
        Distance(pltop, >, range);
        dt = distance;

        // Corner cases: if the center is outside two adjacent planes, compare
        // the squared distance to their corner against the squared range.
        float sqrange = range * range;
        if(dr > 0.0f && db < 0.0f && dr * dr + db * db > sqrange)
            continue;
        if(dr > 0.0f && dt > 0.0f && dr * dr + dt * dt > sqrange)
            continue;
        if(dl < 0.0f && db < 0.0f && dl * dl + db * db > sqrange)
            continue;
        if(dl < 0.0f && dt > 0.0f && dl * dl + dt * dt > sqrange)
            continue;

    It's not pretty, but I have to fix some other things before cleaning up, and while my planes have a function to get their distance to a point, at 32*24*1024*4 calls the function call overhead was the biggest performance hit ;)
  7. Frustum Culling

    IgnatusZuk, thanks, but your code has the exact same problem I am trying to solve: there are some cases where the function returns visible while the object is actually outside the view frustum. In most cases I expect this to affect just very few objects, so anything more complicated for culling is not worth it.

    TMarques, I maybe should have mentioned that I am culling spheres against a 3D frustum, so those corners are actually lines and not points, but my approach of getting that V as the distance to the planes should be fine, except for the thing you mentioned, where I have to check which side of the planes the center is on. That explains why my implementation currently culls too many lights. Thank you very much for that hint :)

    If there are any more ideas, please post them, as I'd love to find a better way if there is one :)
  8. I want to do some basic view frustum culling, using bounding spheres for my objects. The spheres are positioned in world space and I am constructing planes for my frustum. I then calculate the distance between each sphere's center and each plane, and if it is bigger than the radius and on the wrong side, the object is culled. This can be optimized by first culling with a sphere around the frustum, and maybe by caching the plane that culled an object in the previous frame, and it generally works great. Something like this can be found easily online and is most probably good enough for most cases.

    A problematic case looks like this, where "View" is the visible area, "PL" the left plane, "PB" the bottom plane and "P" the center of the sphere to cull:

        PL |
           |  View
        ---+--------- PB
         P |

    The sphere is obviously inside the top and right planes (wherever those are), and while the point is clearly on the wrong side of PL and PB, its distance to each plane minus the radius still counts as inside both planes, yet the sphere would not be visible. This usually shouldn't be a problem, but I am using it to cull lights for tiled shading, so I've got many small frustums and, in comparison, many big spheres, which makes this a big issue.

    The solution I came up with is simple. In addition to the culling above, I store the distances to the planes and use Pythagoras to get the squared distance to the frustum's corner, which for the case above means: distToPL*distToPL + distToPB*distToPB. For visibility, the result can be compared to the squared radius.

    Now I am wondering if this makes sense (I am not completely sure whether the perspective frustum breaks it? From my understanding it shouldn't, but that doesn't have to mean much...) and if there is any better alternative?
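    For reference, the corner test has an exact closed form (standard point-to-line geometry, not from the thread): if d_1 and d_2 are the signed distances from the sphere's center to the two planes and \theta is the angle between the plane normals, the squared distance to the corner line they share is

        \mathrm{dist}^2 = \frac{d_1^2 + d_2^2 - 2\,d_1 d_2 \cos\theta}{\sin^2\theta}

    which reduces to d_1^2 + d_2^2 exactly when the planes are perpendicular (\theta = 90^\circ). Adjacent planes of a perspective frustum are generally not perpendicular, so the plain sum of squares is an approximation whose error grows as the angle deviates from 90 degrees.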
  9. Maybe not state of the art, but a very stable and commonly used technique for sun shadows is cascaded shadow mapping plus Exponential Shadow Maps or Percentage Closer Filtering to make them soft. Both softening techniques can be realized with just a depth texture, which is good, as not rendering colors is a noticeable performance improvement. For surface acne issues, glPolygonOffset works quite well for me; alternatives I found were to scale the depth in the light's depth projection matrix or to calculate the perfect bias using ddx/ddy.

    Using OpenGL 3.2, there are array textures available, which allow for very nice selection of the right depth map within the final shadow shader. You render into such an array texture in several passes, or in just one by setting the gl_Layer variable in a geometry shader. I experimented with several passes vs. creating the data in a geometry shader, and it turned out that the speed difference is very small (the geometry shader was slightly faster) and that geometry shaders creating geometry and instancing don't seem to like each other very much (slower than rendering in several passes). Several papers propose drawing with instancing to remove the passes, using the geometry shader only to pass the geometry through and select the layer, instead of creating and culling new geometry. I also found an AMD extension to set gl_Layer in the vertex shader.

    Kind of state of the art seem to be Sample Distribution Shadow Maps, but they seem to come with small, yet still noticeable, quality changes when moving the camera.

    For point lights, you can render a depth cubemap in just one pass, using instancing or a geometry shader to duplicate the geometry and select the layer, just as for the sun shadow splits.

    Some links:
    Cascaded Shadow Maps - http://msdn.microsoft.com/en-us/library/windows/desktop/ee416307(v=vs.85).aspx
    Some info on removing flickering and surface acne - http://msdn.microsoft.com/en-us/library/ee416324(VS.85).aspx
    How to split the frustum, several passes vs. geometry shader vs. instancing - http://http.developer.nvidia.com/GPUGems3/gpugems3_ch10.html
    More info on single pass rendering plus some other things - http://dice.se/publications/title-shadows-decals-d3d10-techniques-from-frostbite/
    A very basic article on Percentage Closer Filtering - http://http.developer.nvidia.com/GPUGems/gpugems_ch11.html
    Exponential Shadow Maps - http://nolimitsdesigns.com/tag/exponential-shadow-map/
    Sample Distribution Shadow Maps (you might want to check out the demo source) - http://visual-computing.intel-research.net/art/publications/sdsm/
    Rendering into a cubemap in one pass - http://diaryofagraphicsprogrammer.blogspot.de/2011/02/shadows-thoughts-on-ellipsoid-light.html

    Most of those sources are based on DirectX, but can easily be adapted to OpenGL.

    This is my result using 4 splits, each 1024², 24-bit depth, with 2x2 PCF and GL_TEXTURE_COMPARE_MODE enabled. I create the splits such that the part of the view frustum each one has to cover always fits without having to change the split, as this allows them to be completely flicker free as long as the light source does not move.

    I would give you numbers for my tests, but unfortunately I didn't write them down.
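    As a supplement to the frustum-splitting link above, a minimal sketch of the "practical" split scheme (blending logarithmic and uniform split distances) described in the GPU Gems 3 chapter; the function name and the default lambda are my own choices:

        #include <cmath>
        #include <vector>

        // Returns cascades+1 split depths between the near and far plane.
        // lambda = 1 gives purely logarithmic splits, lambda = 0 purely uniform ones.
        std::vector<float> ComputeCascadeSplits(float nearPlane, float farPlane,
                                                int cascades, float lambda = 0.75f)
        {
            std::vector<float> splits(cascades + 1);
            splits[0] = nearPlane;
            for(int i = 1; i <= cascades; i++)
            {
                float f = static_cast<float>(i) / cascades;
                float logSplit = nearPlane * std::pow(farPlane / nearPlane, f);
                float uniSplit = nearPlane + (farPlane - nearPlane) * f;
                splits[i] = lambda * logSplit + (1.0f - lambda) * uniSplit;
            }
            return splits;
        }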
  10. What are possible values for min and max in your case?
  11.
        glTranslatef(position.x, position.y, position.z);
        glRotatef(angle / Mathematics::pi * 180.0f, axis.x, axis.y, axis.z);

    Just do it the other way around:

        glRotatef(angle / Mathematics::pi * 180.0f, axis.x, axis.y, axis.z);
        glTranslatef(position.x, position.y, position.z);

    Edit: The idea is that your object's vertices have positions relative to your object's center. Without any transformation, this center will be the same as the world's center. glTranslate and glRotate always transform relative to the world's center. This means that if you first translate, your object will be placed at a different position in the world; if you then rotate, that new position plus your vertex positions will be rotated around the world's center. If you rotate first, your object is rotated around the world's center, which is also its own center, and then it is moved to its final position. This explanation is probably not mathematically perfect, but it should give a basic feeling for correct transformation orders. If you want to understand this in more detail, check out rotation and translation matrices and matrix multiplication, especially homogeneous matrices.
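    To see the matrix view in action, here is a small self-contained example using plain 2D homogeneous matrices (pure matrix math, independent of any particular API; all names are made up for illustration). It shows that T*R applied to a vertex rotates it about the object's own center before moving it, while R*T moves it first and then rotates the result around the world's center:

        #include <cmath>
        #include <cstdio>

        // Minimal homogeneous 2D transforms to illustrate composition order.
        struct Mat3 { float m[3][3]; };

        static Mat3 Mul(const Mat3 &a, const Mat3 &b)
        {
            Mat3 r = {};
            for(int i = 0; i < 3; i++)
                for(int j = 0; j < 3; j++)
                    for(int k = 0; k < 3; k++)
                        r.m[i][j] += a.m[i][k] * b.m[k][j];
            return r;
        }

        static Mat3 Rotation(float angle)  // counterclockwise, radians
        {
            float c = std::cos(angle), s = std::sin(angle);
            Mat3 r = {{{c, -s, 0.0f}, {s, c, 0.0f}, {0.0f, 0.0f, 1.0f}}};
            return r;
        }

        static Mat3 Translation(float x, float y)
        {
            Mat3 t = {{{1.0f, 0.0f, x}, {0.0f, 1.0f, y}, {0.0f, 0.0f, 1.0f}}};
            return t;
        }

        static void Apply(const Mat3 &m, float x, float y)  // transform and print a point
        {
            std::printf("(%.1f, %.1f)\n",
                        m.m[0][0] * x + m.m[0][1] * y + m.m[0][2],
                        m.m[1][0] * x + m.m[1][1] * y + m.m[1][2]);
        }

        int main()
        {
            Mat3 T = Translation(5.0f, 0.0f);
            Mat3 R = Rotation(3.14159265f * 0.5f);  // 90 degrees

            // Vertex at (1, 0) relative to the object's center.
            Apply(Mul(T, R), 1.0f, 0.0f);  // T*R: rotate in place, then move    -> approx. (5, 1)
            Apply(Mul(R, T), 1.0f, 0.0f);  // R*T: move, then orbit world center -> approx. (0, 6)
            return 0;
        }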
  12. On iPad 2 and later there is some kind of depth texture format; I am not sure, but I think you can attach it as your framebuffer's depth texture target. I would, however, expect the outline fullscreen shader to be the bottleneck in your case. So whatever calculations you do on the texture coordinates, do them in your vertex shader, and in most cases you should unroll your loops.
  13. About the easiest file format to load, for which nearly every tool has an exporter, is .obj. Now the question is what .obj files look like and how to get the data into a format you can directly feed to OpenGL functions. OpenGL prefers object data to be interleaved (meaning you have one array containing the vertex information like this: position0, normal0, uv0, position1, normal1, uv1, ... or whatever other order and information you want), put into a vertex buffer object and addressed using indices in another VBO. I haven't read it, but a good start on loading .obj seems to be this tutorial: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-7-model-loading/ As far as I could see, the resulting data is neither interleaved nor indexed, but it should give you a start to improve on.
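    As an illustration of the interleaved, indexed layout described above, a minimal sketch (assuming the vertex and index vectors are already filled from the .obj data; the function name and attribute locations are made up for the example):

        #include <cstddef>   // offsetof
        #include <vector>
        // plus your platform's OpenGL header for the gl* functions and GLuint

        struct Vertex {          // one interleaved vertex: position, normal, uv
            float position[3];
            float normal[3];
            float uv[2];
        };

        void UploadMesh(const std::vector<Vertex> &vertices,
                        const std::vector<unsigned int> &indices)
        {
            GLuint vbo, ibo;

            // interleaved vertex data in one buffer
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex),
                         vertices.data(), GL_STATIC_DRAW);

            // indices in a second buffer
            glGenBuffers(1, &ibo);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int),
                         indices.data(), GL_STATIC_DRAW);

            // attributes address the interleaved data via stride and offset
            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                                  (const void *)offsetof(Vertex, position));
            glEnableVertexAttribArray(1);
            glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                                  (const void *)offsetof(Vertex, normal));
            glEnableVertexAttribArray(2);
            glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                                  (const void *)offsetof(Vertex, uv));
        }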
  14. I don't see why you disable GL_TEXTURE_RECTANGLE just to enable it a few lines later, but that of course should not be the problem. You should probably specify a viewport, but since you can see something in one case, that doesn't seem to be the problem either. Try disabling blending.

    I would recommend using a frame debugger; they exist for Windows, Linux, Mac and most smartphones: https://www.opengl.org/wiki/Debugging_Tools There you should be able to look at your target textures and find out where something goes wrong.
  15. Thank you, Brother Bob, I didn't know that :).