pseudomarvin

Members · Content count: 74 · Community Reputation: 379 Neutral · Rank: Member · Interests: Programming
  1. To clarify: I'm working on IBL for physically based rendering, and I wanted to implement a slow, dumb, brute-force way of solving the integral in the reflectance equation before doing it the optimized way, but I found that I couldn't. It was really just meant as a reference, to check that I get things right (see the CPU sketch after this post).

    @Hodgman: It fails on SDL_GL_SwapWindow. Curiously, not after the first frame but always after the second.

    @WiredCat: I've tried using double instead of float and also increased the size of the increment; it still crashes. It is of course just sample code.

    @Scouting Ninja: Well, it builds fine. As far as I can tell, the reason is not related to the size of the shader program on the GPU.

    // Crashes
    for (double x = 0; x < 100; x += 1.0) {
        for (double y = 0; y < 100; y += 1.0) {
            sum += 1.0;
        }
    }

    // Works
    for (int x = 0; x < 10000; x += 1) {
        for (int y = 0; y < 10000; y += 1) {
            sum += 1.0;
        }
    }

    Thanks guys, it's not necessarily a mystery that I need to solve; I was just curious what the hard limit is and whether there's a way around it.
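    For the record, the brute-force reference can just as well live on the CPU, which sidesteps the driver limit entirely. A minimal sketch using glm, assuming a constant environment (the sampleEnvironment stub is hypothetical, not part of my renderer):

    #include <cmath>
    #include <glm/glm.hpp>
    #include <glm/gtc/constants.hpp>

    // Hypothetical stand-in for an environment-map lookup (here: constant white sky).
    glm::vec3 sampleEnvironment(const glm::vec3& /*dir*/) { return glm::vec3(1.0f); }

    // Brute-force diffuse part of the reflectance equation for normal N:
    // a Riemann sum over the hemisphere, including the Lambertian 1/pi.
    glm::vec3 bruteForceDiffuse(const glm::vec3& N)
    {
        // Build an orthonormal basis around N.
        glm::vec3 up = std::abs(N.z) < 0.999f ? glm::vec3(0, 0, 1) : glm::vec3(1, 0, 0);
        glm::vec3 right = glm::normalize(glm::cross(up, N));
        up = glm::cross(N, right);

        glm::vec3 sum(0.0f);
        int n = 0;
        const float step = 0.025f;
        for (float phi = 0.0f; phi < 2.0f * glm::pi<float>(); phi += step)
        {
            for (float theta = 0.0f; theta < 0.5f * glm::pi<float>(); theta += step)
            {
                // Spherical -> Cartesian in tangent space, then rotate into world space.
                glm::vec3 t(std::sin(theta) * std::cos(phi),
                            std::sin(theta) * std::sin(phi),
                            std::cos(theta));
                glm::vec3 dir = t.x * right + t.y * up + t.z * N;
                // cos(theta) from the reflectance equation, sin(theta) from the
                // solid-angle measure of the spherical parameterization.
                sum += sampleEnvironment(dir) * (std::cos(theta) * std::sin(theta));
                ++n;
            }
        }
        // pi/n combines the Riemann-sum cell size (pi^2/n) with the Lambertian 1/pi.
        return glm::pi<float>() * sum / static_cast<float>(n);
    }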
  2. I assumed that if a shader is computationally expensive, then execution is just slower. But running the following GLSL fragment shader instead crashes

    void main()
    {
        int sum = 0;
        for (float x = 0; x < 10; x += 0.00005)
        {
            for (float y = 0; y < 10; y += 0.00005)
            {
                sum++;
            }
        }
        fragColor = vec4(1, 1, 1, 1.0);
    }

    with an unhandled exception in nvoglv32.dll. Are there any hard limits on the number of steps/time that a shader can take before it is shut down? I was thinking about implementing some time-intensive computation in shaders where it would take on the order of seconds to compute a frame; is that possible? Thanks.
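    For anyone who hits the same crash: on Windows the hard limit is the Timeout Detection and Recovery (TDR) watchdog, which resets the display driver when a GPU job runs longer than about two seconds by default; that matches the nvoglv32.dll exception above. The delay can be raised via the registry (a sketch using the Microsoft-documented key; reboot required):

    Windows Registry Editor Version 5.00

    ; Raise the GPU timeout-detection delay from the 2 s default to 10 seconds.
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
    "TdrDelay"=dword:0000000a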
  3. All right, thanks again for the input :).
  4. Thank you all for the thoughtful comments; they've been very helpful.

    @Alberth: For practical reasons it would probably be best to let the user decide, as you say; I just wanted to know what some reasonable values are.

    @frob: It is indeed a CS homework problem :D. If I use a single byte as the symbol size, there is an upper limit of 2^8 = 256 codes that I would require at most. Theoretically, if I use 64 bits as the size of a symbol, I would have to (in the worst case) assign 2^64 codes. I guess this is not really a problem if I only assign codes to the 64-bit strings that actually occur in the data (the cardinality of that subset of all possible 64-bit strings will be much smaller). Am I correct in that? (See the counting sketch after this post.)
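    To sanity-check that, a rough sketch that counts how many distinct 64-bit symbols actually occur in a file; the code table only ever needs that many entries, not 2^64 (the file path from argv is just for illustration):

    #include <cstdint>
    #include <cstdio>
    #include <fstream>
    #include <unordered_set>

    int main(int argc, char** argv)
    {
        if (argc < 2) return 1;
        std::ifstream in(argv[1], std::ios::binary);
        std::unordered_set<std::uint64_t> symbols;
        std::uint64_t s = 0;
        // Read the file as a stream of 8-byte symbols (tail bytes ignored for brevity).
        while (in.read(reinterpret_cast<char*>(&s), sizeof(s)))
            symbols.insert(s);
        // One code per distinct symbol that actually occurred.
        std::printf("distinct 64-bit symbols: %zu\n", symbols.size());
    }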
  5. I am tasked with implementing three different compression methods: arithmetic coding, PPM, and LZW. I then have to compare their performance on different kinds of data (image, audio, text, binary executables). If the compression program receives a file on input and does not know what its contents are, how do I choose the appropriate symbol size? E.g., if the file is text, a single character could be 8 or 16 bits long, and setting the symbol size to 8 or 16 bits could have a large impact on the compression ratio. Do I simply try various reasonable sizes (various multiples of one byte) and see which one fits the data best? (One way to automate that is sketched after this post.)
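    One way to make "try various sizes and see" concrete is to estimate, for each candidate width, the zeroth-order entropy of the file read at that width, and keep the width that predicts the smallest output. A rough sketch (it ignores the model/table overhead a real coder would also pay):

    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <fstream>
    #include <iterator>
    #include <unordered_map>
    #include <vector>

    // Predicted output size in bits of `data` read as `width`-byte symbols,
    // from the zeroth-order entropy of the observed symbol distribution.
    double predictedBits(const std::vector<std::uint8_t>& data, std::size_t width)
    {
        std::unordered_map<std::uint64_t, std::uint64_t> counts;
        const std::size_t n = data.size() / width; // tail bytes ignored for brevity
        if (n == 0) return 0.0;
        for (std::size_t i = 0; i < n; ++i)
        {
            std::uint64_t sym = 0;
            for (std::size_t b = 0; b < width; ++b)
                sym = (sym << 8) | data[i * width + b];
            ++counts[sym];
        }
        double bits = 0.0;
        for (const auto& kv : counts)
        {
            const double p = static_cast<double>(kv.second) / static_cast<double>(n);
            bits -= static_cast<double>(kv.second) * std::log2(p); // -log2(p) bits per occurrence
        }
        return bits;
    }

    int main(int argc, char** argv)
    {
        if (argc < 2) return 1;
        std::ifstream in(argv[1], std::ios::binary);
        std::vector<std::uint8_t> data{std::istreambuf_iterator<char>(in),
                                       std::istreambuf_iterator<char>()};
        for (std::size_t width : {1, 2, 4, 8})
            std::printf("width %zu: ~%.0f bits\n", width, predictedBits(data, width));
    }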
  6. Phong model correctness (video)

    Great, thanks for the explanation.
  7. Phong model correctness (video)

    Yep, that solved it. Is there a physics/optics rationale for doing that? Thanks.
  8. Do you guys think this implementation of Phong shading and the Phong reflection model is correct? I use a directional light pointing exactly where the black line is heading (0, 0, -1). I modify the shininess value later in the video. I find it especially strange that when I look in the direction exactly opposite to the light, the contours of the bunny start glowing (if the shininess factor is low enough). But I have compared my implementation to a reference from our graphics class and it did the same thing.

    https://www.youtube.com/watch?v=e3kW-wQ7V_Y&feature=youtu.be

    Code:

    // Calculation is performed in world space
    vec3 N = normalize(worldNormal);
    vec3 specColor = vec3(1.0f, 1.0f, 1.0f);
    float ambient = 0.2f;
    float NLdot = max(dot(-U::worldLightDirection, N), 0.0f);
    float diffuse = NLdot;
    vec3 V = normalize(U::worldCameraPosition - worldPos);
    vec3 R = normalize(glm::reflect(U::worldLightDirection, N));
    float specular = pow(max(dot(R, V), 0.0f), U::shininess);
    vec3 shadedColor = (ambient + diffuse) * albedo + specular * specColor;
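    For reference, a common guard here, and possibly the fix that resolved this thread, is to zero the specular term whenever the light is behind the surface; that is exactly the configuration in which the silhouette glows. Applied to the snippet above:

    // R is still a valid direction even when the light is behind the surface,
    // so without this guard grazing silhouettes pick up a bogus highlight
    // at low shininess values.
    if (NLdot <= 0.0f)
        specular = 0.0f;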
  9. Mipmapping for a SW renderer

    Thanks everyone for the suggestions.
  10. Mipmapping for a SW renderer

    So the mipmap index calculation would look like this:

    1) Calculate the interpolated texture coordinates for the 4 pixels.
    2) Calculate the largest du, dv for the top-left pixel:

    // (x, y) are the current pixel's coordinates
    float duDx = pixel(x+1, y).u - pixel(x, y).u;
    float dvDx = pixel(x+1, y).v - pixel(x, y).v;
    float duDy = pixel(x, y+1).u - pixel(x, y).u;
    float dvDy = pixel(x, y+1).v - pixel(x, y).v;
    // choose the max from these to calculate the mipmap index

    3) Now, do I also use the index calculated for the top-left pixel as the index for the other 3 pixels?
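    For reference, a sketch of turning those deltas into a mip index, assuming the deltas are in normalized UVs and texWidth/texHeight are the level-0 texture dimensions. Reusing the top-left pixel's index for the whole 2x2 quad matches the usual quad-based way hardware handles derivatives:

    #include <algorithm>
    #include <cmath>

    // Map the largest texel-space footprint of one screen pixel to a mip index.
    float mipLevel(float duDx, float dvDx, float duDy, float dvDy,
                   float texWidth, float texHeight)
    {
        // Scale UV deltas to texels, then take the longer of the two footprints.
        const float dudx = duDx * texWidth, dvdx = dvDx * texHeight;
        const float dudy = duDy * texWidth, dvdy = dvDy * texHeight;
        const float d = std::max(std::sqrt(dudx * dudx + dvdx * dvdx),
                                 std::sqrt(dudy * dudy + dvdy * dvdy));
        // log2 of the footprint; clamped so magnification stays at level 0.
        return std::max(0.0f, std::log2(d));
    }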
  11. I am writing a software renderer. I have no SIMD or multithreading functionality yet. When rasterizing a triangle I loop over all of the pixels in its bounding box (in screen coordinates) and interpolate and color the pixels that pass the edge function. I tried implementing mipmapping but found that to compute the texture coordinate differentials I needed the interpolated values for the right and bottom neighboring pixels (whose attributes are not interpolated at this point).

    I thought of a couple of solutions:

    1) Do another loop before the main one which just calculates all of the interpolated texture coordinates so they are available in the main loop. (This is obviously slow.)
    2) Choose the right mip level of the texture by calculating the maximum differential from the 3 vertices of the rasterized triangle. Would this work? Intuitively it seems to me that yes: consider two vertices with u1 = 0, u2 = 1 and screen coordinates x1 = 100, x2 = 600; then it makes sense to pick a larger texture. On the other hand, if u1 = 0, u2 = 0 and x1 = 100, x2 = 101, then picking the smallest texture sounds reasonable.

    Would these solutions work, and/or is there a better one? (A sketch of option 2 follows after this post.)
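    Regarding option 2, a rough sketch of the per-triangle estimate: one mip level from the ratio of the triangle's area in texel space to its area in screen space (u and v here are already scaled to texels; since the level is constant per triangle it ignores perspective variation, so triangles spanning a large depth range will pick a compromise level):

    #include <algorithm>
    #include <cmath>

    struct Vtx { float x, y; float u, v; }; // screen-space position (pixels), UV (texels)

    // The area ratio texels^2 / pixels^2 is the squared texels-per-pixel factor,
    // so half of its log2 gives the mip index.
    float triangleMipLevel(const Vtx& a, const Vtx& b, const Vtx& c)
    {
        const float screenArea = std::abs((b.x - a.x) * (c.y - a.y) -
                                          (c.x - a.x) * (b.y - a.y));
        const float texelArea  = std::abs((b.u - a.u) * (c.v - a.v) -
                                          (c.u - a.u) * (b.v - a.v));
        if (screenArea <= 0.0f || texelArea <= 0.0f)
            return 0.0f; // degenerate triangle or constant UVs: use the base level
        return std::max(0.0f, 0.5f * std::log2(texelArea / screenArea));
    }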
  12. Yeah, thanks. I guess I'll use a guard band (clamp the raster-space coordinates to 0 and width - 1 or height - 1) and, since I use floating-point arithmetic, it's effectively an infinite guard band.
  13. I am implementing SW rasterization. I want to clip triangles against the z = -near plane (so that strange things don't happen after the perspective divide). I would like to do the clipping right after the vertex shader, that is, right after the vertices are multiplied by the MVP matrix and before they are divided by the w coordinate (they are in homogeneous clip space), so I would really clip against the z = -w plane. However, after clipping the triangles, new vertices have to be created to replace the clipped ones. Attribute values for these new vertices have to be interpolated based on the distance of the clipped vertex to the z = -w plane relative to the total length of the edge that was clipped.

    I know that if I clipped in view space I could get away with simple linear interpolation of the vertex attributes. Is this still true in homogeneous clip space (since we have not done the perspective division yet)? (A clipping sketch follows after this post.)
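    For what it's worth, the answer is yes: before the perspective divide, the clip-space position and the vertex attributes are still linear along the edge, so a plain lerp is correct. A sketch of one edge clipped against z = -w (the struct and names are mine, for illustration):

    #include <glm/glm.hpp>

    struct ClipVertex
    {
        glm::vec4 pos; // homogeneous clip-space position
        glm::vec2 uv;  // any other attribute interpolates the same way
    };

    // Intersect edge (a, b) with the plane z = -w, i.e. z + w = 0.
    // The signed distance d = z + w is linear in t, so solve d(t) = 0.
    ClipVertex clipAgainstNearPlane(const ClipVertex& a, const ClipVertex& b)
    {
        const float da = a.pos.z + a.pos.w;
        const float db = b.pos.z + b.pos.w;
        const float t = da / (da - db); // da and db have opposite signs here

        ClipVertex out;
        out.pos = glm::mix(a.pos, b.pos, t); // linear lerp is correct pre-divide
        out.uv  = glm::mix(a.uv,  b.uv,  t);
        return out;
    }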
  14. Project vertices to raster space on the CPU

    Well, it seems that you can't really do rasterization correctly without clipping against the z = 0 plane in camera or clip space, unless you do the rasterization itself in homogeneous coordinates. See this conversation: http://stackoverflow.com/questions/40889826/clipping-triangles-in-screen-space-per-pixel?noredirect=1#comment69119871_40889826