Everything posted by pseudomarvin

  1. I have made a simple 2D game, intended to run on Windows only, using SFML. The executable is built with Visual Studio 2017. I would like to make sure that it can run on as many Windows machines as possible (even on Windows 7/8 if possible). What steps can I take to ensure that? I have done the following:
     - Build for the Win32 (x86) platform, not x64.
     - In Project Properties -> C/C++ -> Code Generation, set Runtime Library to Multi-threaded (the runtime library should thus be statically linked into the exe), not Multi-threaded DLL. (A small compile-time check for this is sketched below.)
     Would the following help with compatibility (without causing problems with forward compatibility)?
     - Using Windows SDK version 8.1?
     - Using an older platform toolset (not Visual Studio 2017 v141 but perhaps VS 2015 v140, or even VS 2015 Windows XP)?
     What else can I do, and what should I be aware of? Thanks.
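     A small compile-time guard for the runtime-library setting (my own sketch, not from the thread; it relies on MSVC defining _DLL when building against the DLL CRT with /MD or /MDd):

       // Fails the build if the Multi-threaded (static) runtime setting is ever lost.
       #ifdef _DLL
       #error "Building against the DLL runtime; switch Runtime Library to /MT for a self-contained exe."
       #endif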
  2. I want to distribute a game I've made as a simple self-extracting zip file. After extraction, the structure should look like this:

       MyGame (shortcut to MyGame/Release/MyGame.exe)
       MyGame (folder)
       --- /assets (folder)
       --- /Release (folder)
       ------ MyGame.exe

     I have already managed to make the shortcut point to the relative path of MyGame.exe by setting its target to: %windir%\explorer.exe "MyGame\Release\MyGame.exe". But I would also like to set its icon from a file in the assets folder in a relative way, so that upon extraction on a different PC the icon is already set. Is this possible? Or how is it usually done?
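     A hedged sketch of one way to create such a shortcut programmatically at install time, using the Windows Shell COM interface IShellLinkW (the file names and icon index here are assumptions; error checking is omitted; link against ole32.lib and uuid.lib). Note that how a relative icon path stored in a .lnk is resolved later is not guaranteed, so writing an absolute path at extraction time is the safer option:

       #include <windows.h>
       #include <shobjidl.h>

       int main()
       {
           CoInitialize(NULL);

           IShellLinkW* link = NULL;
           CoCreateInstance(CLSID_ShellLink, NULL, CLSCTX_INPROC_SERVER,
                            IID_IShellLinkW, (void**)&link);
           link->SetPath(L"MyGame\\Release\\MyGame.exe");
           link->SetIconLocation(L"MyGame\\assets\\icon.ico", 0); // index 0 in the .ico

           IPersistFile* file = NULL;
           link->QueryInterface(IID_IPersistFile, (void**)&file);
           file->Save(L"MyGame.lnk", TRUE);

           file->Release();
           link->Release();
           CoUninitialize();
           return 0;
       }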
  3. I assumed that if a shader is computationally expensive then execution is just slower. But running the following GLSL fragment shader instead crashes:

       void main()
       {
           float x = 0;
           float y = 0;
           int sum = 0;
           for (float x = 0; x < 10; x += 0.00005)
           {
               for (float y = 0; y < 10; y += 0.00005)
               {
                   sum++;
               }
           }
           fragColor = vec4(1, 1, 1, 1.0);
       }

     with an unhandled exception in nvoglv32.dll. Are there any hard limits on the number of steps/time that a shader can take before it is shut down? I was thinking about implementing some time-intensive computation in shaders, where it would take on the order of seconds to compute a frame; is that possible? Thanks.
  4. To clarify, I'm working on IBL for physically based rendering, and I wanted to implement a slow and dumb brute-force way of solving the integral in the reflectance equation before doing it the optimized way, but I found I couldn't. It was really just for reference, to see if I am getting things right.
     @Hodgman: It fails on SDL_GL_SwapWindow. Curiously, not after the first frame but always after the second.
     @WiredCat: I've tried using double instead of float and also increased the size of the incremented value. Still crashes. It is of course just sample code.
     @Scouting Ninja: Well, it builds OK. As far as I can tell, the reason is not related to the size of the shader program on the GPU.

       // Crashes
       for (double x = 0; x < 100; x += 1.0)
       {
           for (double y = 0; y < 100; y += 1.0)
           {
               sum += 1.0;
           }
       }

       // Works
       for (int x = 0; x < 10000; x += 1)
       {
           for (int y = 0; y < 10000; y += 1)
           {
               sum += 1.0;
           }
       }

     Thanks guys, it's not necessarily a mystery that I need to solve; I was just curious what the hard limit is and whether there's a way around it.
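     For context: on Windows the hard limit is typically the driver watchdog (Timeout Detection and Recovery), which resets the GPU when a single submission runs longer than about two seconds by default. A hedged sketch of one common workaround, splitting the work across several smaller draws so that no single one trips the watchdog; drawFullScreenQuad() and chunkLocation are hypothetical, and additive blending into a float framebuffer is assumed for accumulating the partial results:

       const int kChunks = 64;              // tune so each draw stays well under the watchdog
       glEnable(GL_BLEND);
       glBlendFunc(GL_ONE, GL_ONE);         // sum the partial results in the framebuffer
       for (int chunk = 0; chunk < kChunks; ++chunk)
       {
           glUniform1i(chunkLocation, chunk); // the shader integrates only its slice
           drawFullScreenQuad();              // hypothetical helper: draws the 2 triangles
           glFinish();                        // keep each submission short and separate
       }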
  5. I am tasked with implementing three different compression methods: arithmetic coding, PPM, and LZW. I then have to compare their performance on different data (image, audio, text, binary executables). If the compression program receives a file on input and does not know what the contents are, how do I choose the appropriate symbol size? E.g., if the file is text, a single character could be 8 or 16 bits long, and setting the symbol size to 8 bits or 16 bits could have a large impact on the compression ratio. Do I simply try various reasonable sizes (various multiples of one byte) and see which one fits the data best?
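     One cheap heuristic for the "try various sizes" idea (a sketch of my own, not a standard recipe): estimate the zeroth-order entropy of the file under each candidate symbol width and keep the width with the smallest estimate. The function name and the widths tried are assumptions:

       #include <cstdint>
       #include <cmath>
       #include <map>
       #include <vector>

       // Estimated output size in bits if each distinct symbol of `symbolBytes`
       // bytes were coded at its zeroth-order entropy.
       double estimatedBits(const std::vector<uint8_t>& data, size_t symbolBytes)
       {
           std::map<uint64_t, uint64_t> counts;
           size_t n = data.size() / symbolBytes;
           if (n == 0) return 0.0;
           for (size_t i = 0; i < n; ++i)
           {
               uint64_t sym = 0;
               for (size_t b = 0; b < symbolBytes; ++b)
                   sym = (sym << 8) | data[i * symbolBytes + b];
               ++counts[sym];
           }
           double bits = 0.0;
           for (const auto& kv : counts)
               bits -= kv.second * std::log2(double(kv.second) / n);
           return bits;
       }

       // Usage: pick the best of, say, 1-, 2-, and 4-byte symbols:
       //   size_t best = 1;
       //   for (size_t w : {1, 2, 4})
       //       if (estimatedBits(data, w) < estimatedBits(data, best)) best = w;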
  6. All right, thanks again for the input :).
  7. Thank you all for the thoughtful comments, they've been very helpful.
     @Alberth: For practical reasons it would probably be best to let the user decide, as you say; I just wanted to know what some reasonable values are.
     @frob: It is indeed a CS homework problem :D. If I use a single byte, there is an upper limit of 2^8 = 256 codes that I would require at most. Theoretically, if I use 64 bits as the symbol size, I would have to (in the worst case) assign 2^64 codes. I guess that this is not really a problem if I only assign codes to 64-bit strings that actually occurred in the data (the cardinality of that subset of all possible 64-bit strings will be much smaller). Am I correct in that?
  8. I am doing CPU rasterization and I would like to manually transform triangle vertices all the way from model space to raster space (where they directly correspond to pixels in the image). Currently I just multiply them by an MVP matrix. Which coordinate space am I in at that point? The vertices should then be equal to the vertices usually leaving the vertex shader. But the rendering pipeline does some steps after that which I think I now have to do manually. What are they exactly? Division by the w coordinate, the viewport transform, or something else? I would appreciate an answer with specific steps. Thanks.

       for (Uint32 i = 0; i < mesh.vertexCount; ++i)
       {
           mesh.vertices[i].position = modelViewProjection * mesh.vertices[i].position; // I am using glm
           // What else?
       }
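     For reference, a minimal sketch of the remaining fixed-function steps (assuming glm, a viewport of width x height pixels with the origin at the top left, and vertices already clipped so that clip.w > 0):

       glm::vec4 clip = modelViewProjection * mesh.vertices[i].position;  // clip space
       glm::vec3 ndc  = glm::vec3(clip) / clip.w;        // perspective divide -> NDC in [-1, 1]
       float xRaster  = (ndc.x * 0.5f + 0.5f) * width;   // viewport transform
       float yRaster  = (1.0f - (ndc.y * 0.5f + 0.5f)) * height; // flip y for a top-left origin
       float depth    = ndc.z * 0.5f + 0.5f;             // depth remapped to [0, 1] for the z-buffer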
  9. Phong model correctness (video)

     Great, thanks for the explanation.
  10. Do you guys think this implementation of Phong shading and the Phong reflection model is correct? I use a directional light pointing exactly where the black line is heading (0, 0, -1). I modify the shininess value later in the video. I find it especially strange that when I look in the direction exactly opposite to the light, the contours of the bunny start glowing (if the shininess factor is low enough). But I have compared my implementation to a reference from our graphics class and it did the same thing. Code:

       // Calculation is performed in world space
       vec3 N = normalize(worldNormal);
       vec3 specColor = vec3(1.0f, 1.0f, 1.0f);
       float ambient = 0.2f;
       float NLdot = max(dot(-U::worldLightDirection, N), 0.0f);
       float diffuse = NLdot;
       vec3 V = normalize(U::worldCameraPosition - worldPos);
       vec3 R = normalize(glm::reflect(U::worldLightDirection, N));
       float specular = pow(max(dot(R, V), 0.0f), U::shininess);
       vec3 shadedColor = (ambient + diffuse) * albedo + specular * specColor;
  11. Phong model correctness (video)

     Yep, that solved it. Is there a physics/optics rationale for doing that? Thanks.
  12. I am writing a software renderer. I have no SIMD or multithreading functionality yet. When rasterizing a triangle, I loop over all of the pixels in its bounding box (in screen coordinates) and interpolate and color the pixels that pass the edge function. I tried implementing mipmapping, but found that to compute the texture-coordinate differentials I needed the interpolated values for the right and bottom neighboring pixels (whose attributes are not interpolated at this point). I thought of a couple of solutions:
     1) Do another loop before the main one which would just calculate all of the interpolated texture coordinates, so they are available in the main loop. (This is obviously slow.)
     2) Choose the mip level of the texture by calculating the maximum differential from the 3 vertices of the rasterized triangle (a sketch follows below). Would this work? Intuitively it seems to me that yes: consider two vertices with u1 = 0, u2 = 1 and screen coordinates x1 = 100, x2 = 600; then it makes sense to pick a larger texture. On the other hand, if u1 = 0 and u2 = 1 but x1 = 100, x2 = 101, then picking the smallest texture sounds reasonable.
     Would these solutions work, and/or is there a better one?
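     A minimal sketch of option 2, picking one mip level per triangle from the vertex data alone (the Vertex layout, the texWidth/texHeight parameters, and the function name are assumptions; per-triangle selection is coarser than true per-pixel derivatives):

       #include <algorithm>
       #include <cmath>

       struct Vertex { float x, y; float u, v; }; // raster-space position, texture coords

       float TriangleMipLevel(const Vertex& a, const Vertex& b, const Vertex& c,
                              float texWidth, float texHeight)
       {
           // Texel-space and pixel-space lengths of the two edges from vertex a.
           float du1 = (b.u - a.u) * texWidth,  dv1 = (b.v - a.v) * texHeight;
           float du2 = (c.u - a.u) * texWidth,  dv2 = (c.v - a.v) * texHeight;
           float dx1 = b.x - a.x, dy1 = b.y - a.y;
           float dx2 = c.x - a.x, dy2 = c.y - a.y;

           float texelLen = std::max(std::sqrt(du1 * du1 + dv1 * dv1),
                                     std::sqrt(du2 * du2 + dv2 * dv2));
           float pixelLen = std::max(std::sqrt(dx1 * dx1 + dy1 * dy1),
                                     std::sqrt(dx2 * dx2 + dy2 * dy2));

           // Texels per pixel; > 1 means minification, so step down to a smaller mip.
           float ratio = texelLen / std::max(pixelLen, 1e-6f);
           return std::max(0.0f, std::log2(ratio));
       }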
  13. Mipmapping for a SW renderer

     Thanks, everyone, for the suggestions.
  14. Mipmapping for a SW renderer

     So the mipmap index calculation would look like this:
     1) Calculate the interpolated texture coordinates for the 4 pixels.
     2) Calculate the largest du, dv for the top-left pixel:

       // (x, y) are current pixel coords
       float duDx = pixel(x+1, y).u - pixel(x, y).u;
       float dvDx = pixel(x+1, y).v - pixel(x, y).v;
       float duDy = pixel(x, y+1).u - pixel(x, y).u;
       float dvDy = pixel(x, y+1).v - pixel(x, y).v;
       // choose the max from these to calculate the mipmap index

     3) Now, do I also use the index calculated for the top-left pixel as the index for the other 3 pixels?
  15. I am implementing SW rasterization. I want to clip triangles against the z = -near plane (so that strange things don't happen after the perspective divide). I would like to do the clipping right after the vertex shader, that is, right after the vertices are multiplied by the MVP matrix and before they are divided by the w coordinate (they are in homogeneous clip space), so I would really clip against the z = -w plane. However, after clipping the triangles, new vertices have to be created to replace the clipped ones. Attribute values for these new vertices have to be interpolated based on the distance of the clipped vertex to the z = -w plane relative to the total length of the edge that was clipped. I know that if I clipped in view space I could get away with simple linear interpolation of the vertex attributes. Is this still true in homogeneous clip space (since we have not done the perspective division yet)?
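     A minimal sketch of clipping one edge against the z = -w plane in homogeneous clip space (the Vertex layout is an assumption). Plain linear interpolation of positions and attributes is valid here precisely because the perspective divide has not happened yet, so everything is still linear along the edge:

       #include <glm/glm.hpp>

       struct Vertex { glm::vec4 pos; glm::vec2 uv; };

       // Signed distance to the plane z = -w; positive means inside (z > -w).
       static float Dist(const glm::vec4& p) { return p.z + p.w; }

       // Given an edge with `a` inside and `b` outside, returns the new vertex
       // at the intersection with the plane.
       Vertex ClipEdge(const Vertex& a, const Vertex& b)
       {
           float t = Dist(a.pos) / (Dist(a.pos) - Dist(b.pos));
           Vertex out;
           out.pos = glm::mix(a.pos, b.pos, t); // linear lerp is fine pre-divide
           out.uv  = glm::mix(a.uv,  b.uv,  t);
           return out;
       }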
  16. Yeah, thanks. I guess I'll use a guard band (clamp the raster-space coordinates to 0, width - 1 or height - 1) and use floating-point arithmetic, so effectively an infinite guard band.
  17. Project vertices to raster space on the CPU

     Well, it seems that you can't really do rasterization correctly without clipping against the z = 0 plane in camera or clip space, unless you do the rasterization itself in homogeneous coordinates. See this conversation: http://stackoverflow.com/questions/40889826/clipping-triangles-in-screen-space-per-pixel?noredirect=1#comment69119871_40889826
  18. Project vertices to raster space on the CPU

     Thanks again for a very detailed answer. I have been trying to implement the first solution you proposed (without implementing a special clipping algorithm), with mixed results. Looping just over the valid screen-space coordinates works fine, but it fails in cases where the triangles are behind the camera (partially or completely). I have tried to solve this by interpolating the vertex NDC for every pixel in the triangle and doing a clipping test in screen space: basically, I only let pixels with NDC x, y, z coordinates in the range [-1, 1] be rasterized. However, I wasn't sure how to do the interpolation. According to the spec (https://www.opengl.org/registry/doc/glspec44.core.pdf, page 427), formula 14.9 is used for vertex attribute interpolation, but depth is then interpolated using formula 14.10. So I tried both versions:

       // w0, w1, w2 are the barycentric weights of the vertices.

       // Version 1 (formula 14.9, perspective-correct, for x, y)
       float x_ndc = (w0*v0_NDC.x/v0_NDC.w + w1*v1_NDC.x/v1_NDC.w + w2*v2_NDC.x/v2_NDC.w) /
                     (w0/v0_NDC.w + w1/v1_NDC.w + w2/v2_NDC.w);
       float y_ndc = (w0*v0_NDC.y/v0_NDC.w + w1*v1_NDC.y/v1_NDC.w + w2*v2_NDC.y/v2_NDC.w) /
                     (w0/v0_NDC.w + w1/v1_NDC.w + w2/v2_NDC.w);
       float z_ndc = w0 * v0_NDC.z + w1 * v1_NDC.z + w2 * v2_NDC.z;

       // Version 2 (formula 14.10, linear, for x, y)
       float x_ndc = w0 * v0_NDC.x + w1 * v1_NDC.x + w2 * v2_NDC.x;
       float y_ndc = w0 * v0_NDC.y + w1 * v1_NDC.y + w2 * v2_NDC.y;
       float z_ndc = w0 * v0_NDC.z + w1 * v1_NDC.z + w2 * v2_NDC.z;

       // The clipping + depth test always looks like this:
       if (-1.0f < z_ndc && z_ndc < 1.0f && z_ndc < currentDepth &&
           -1.0f < y_ndc && y_ndc < 1.0f &&
           -1.0f < x_ndc && x_ndc < 1.0f)

     Results in GIFs: https://imgur.com/a/4N01p
     1) Strange things happen when the second cube is behind the camera or when I go into a cube.
     2) Strange artifacts are not visible, but as the camera approaches vertices, they start disappearing. And since this is the perspective-correct interpolation of attributes, vertices nearer to the camera have greater weight, so as soon as a vertex gets clipped, this information is interpolated with a strong weight into the triangle pixels.
     Is all of this expected or have I done something wrong? I am starting to think it might have been easier to implement Sutherland-Hodgman in the first place :D.
  19. Project vertices to raster space on the CPU

     Thanks a lot for the explanation. Your SW renderer is especially helpful. I have a question about triangle clipping: why do we have to do it even if we use the half-space triangle algorithm? Is there a problem with vertices that are behind the near plane, or is it always just a matter of performance?
  20. I am implementing a software rasterizer for school and I would like to simply pass SDL a buffer of pixels and tell it the format they are in (RGB, 24 bits per pixel). SDL would then render that to the screen. I have tried using SDL_Surface, but it seems to only work with images or data with headers, since the following fails:

       unsigned char pixels[BYTES_PER_PIXEL * SCREEN_WIDTH * SCREEN_HEIGHT];
       // ... all pixels are filled with white color
       imageSurface = SDL_CreateRGBSurfaceFrom(pixels, SCREEN_WIDTH, SCREEN_HEIGHT,
           sizeof(unsigned char) * BYTES_PER_PIXEL, // depth
           SCREEN_WIDTH * BYTES_PER_PIXEL,          // pitch (row length * BPP)
           0x000000ff,                              // red mask
           0x0000ff00,                              // green mask
           0x00ff0000,                              // blue mask
           0);                                      // alpha mask

     The error is: SDL_CreateRGBSurfaceFrom failed: Unknown pixel format. What is the fastest way to do this in SDL? Also, would there be a speed difference between that approach and drawing to an OpenGL texture which I would then render onto a full-screen quad?
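     For reference, SDL_CreateRGBSurfaceFrom expects the depth argument in bits per pixel, not bytes, so a 24-bit RGB buffer would be described like this (a sketch under that assumption, with BYTES_PER_PIXEL = 3; whether this alone fixes the error above is untested here):

       SDL_Surface* imageSurface = SDL_CreateRGBSurfaceFrom(
           pixels, SCREEN_WIDTH, SCREEN_HEIGHT,
           BYTES_PER_PIXEL * 8,             // depth in *bits* (24), not bytes (3)
           SCREEN_WIDTH * BYTES_PER_PIXEL,  // pitch in bytes
           0x000000ff,                      // red mask
           0x0000ff00,                      // green mask
           0x00ff0000,                      // blue mask
           0);                              // alpha mask (none)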
  21. OK, so I've found a solution. I'm not sure how efficient it is, but I've realized that it probably doesn't matter, since the time it takes will be very small relative to the whole rasterization process.

       void setPixel(SDL_Surface* surface, int X, int Y, Uint32 Color)
       {
           Uint8* pixel = (Uint8*)surface->pixels;
           pixel += (Y * surface->pitch) + (X * sizeof(Uint32));
           *((Uint32*)pixel) = Color;
       }

       // ...
       for (int y = 0; y < screenSurface->h; ++y)
       {
           for (int x = 0; x < screenSurface->w; ++x)
           {
               setPixel(screenSurface, x, y, 0x0000ffff);
           }
       }
  22. I am encountering a screen-tearing issue using OpenGL and the WinAPI. I am doing a ray-marching demo (rendering just 2 triangles and ray marching and shading terrain in the fragment shader). Rendering on a GTX 860M, the app has no issue staying at 60 FPS (at least at lower resolutions). Screen tearing is present, however (and this was rendered at an especially low resolution to make sure it has nothing to do with the complexity of the fragment shader code). Where I am at:
     - I am creating the OpenGL context using the recommended settings (https://www.opengl.org/wiki/Creating_an_OpenGL_Context_%28WGL%29).
     - I have tried manually turning VSync on using wglSwapIntervalEXT(1), although that should be the default behavior (see the sketch below).
     - I have tried placing glFinish before the SwapBuffers call, i.e.:

       while (isRunning)
       {
           ProcessMessages(window, &context);
           App::Update(&context);
           glFinish();
           SwapBuffers(windowHandle);
       }

     - I have tried setting VSync to "Always on" in the Nvidia control panel.
     - I have also tried setting Maximum pre-rendered frames to 1.
     None of these things have helped. Does anyone have any ideas? I have also tried running the app on a friend's PC (GTX 960) and the tearing was present there as well. Thanks.
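     For completeness, a minimal sketch of how wglSwapIntervalEXT is typically resolved and called; it only works once a GL context is current, and the function may legitimately be absent (or overridden by driver settings), so the checks are worth keeping:

       #include <windows.h>

       typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

       // Call after wglMakeCurrent has succeeded; returns false if the
       // WGL_EXT_swap_control extension is unavailable.
       bool EnableVSync()
       {
           PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
               (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
           if (!wglSwapIntervalEXT)
               return false;
           return wglSwapIntervalEXT(1) == TRUE; // 1 = sync to every vblank
       }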
  23. I know manually loading GL functions is usually a bad idea, but I am making a 64 kB demo (the exe cannot go beyond this size limit), so using GLEW does not seem to be an option. It is enough to get this working on Windows. In VS 2015 I link with opengl32.lib and then I use a helper function like this:

       void *GetAnyGLFuncAddress(const char *name)
       {
           void *p = (void *)wglGetProcAddress(name);
           if (p == 0 || (p == (void*)0x1) || (p == (void*)0x2) || (p == (void*)0x3) || (p == (void*)-1))
           {
               HMODULE module = LoadLibraryA("opengl32.dll");
               p = (void *)GetProcAddress(module, name);
           }
           return p;
       }

     An example usage:

       // GL functions header, definitions copied from https://www.opengl.org/registry/api/GL/glext.h
       #define GLAPIENTRY __stdcall
       typedef void (GLAPIENTRY *PFGLCLEAR)(GLuint mask);
       typedef GLuint (GLAPIENTRY *PFNGLCREATEPROGRAMPROC)(void);

       // program
       PFGLCLEAR glClear = (PFGLCLEAR)GetAnyGLFuncAddress("glClear");
       PFNGLCREATEPROGRAMPROC glCreateProgram = (PFNGLCREATEPROGRAMPROC)GetAnyGLFuncAddress("glCreateProgram");

     This approach worked for glClear and glClearColor, but getting any other function pointers after that failed (glCreateProgram, glCreateShader, ...). Why doesn't it work, and why does it work for those first two? Thanks.
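     One detail worth checking (a sketch of the intended call order, not a confirmed diagnosis): wglGetProcAddress only returns valid extension pointers while an OpenGL rendering context is current, whereas GL 1.1 entry points such as glClear and glClearColor are exported directly from opengl32.dll and resolve even without one:

       // Assumed ordering: create and bind the context first, then resolve pointers.
       HDC dc = GetDC(windowHandle);  // windowHandle: your existing window
       // ... SetPixelFormat(dc, ...) as in the WGL context-creation guide ...
       HGLRC rc = wglCreateContext(dc);
       wglMakeCurrent(dc, rc);

       // Only now will wglGetProcAddress return non-NULL for post-1.1 functions:
       PFNGLCREATEPROGRAMPROC glCreateProgram =
           (PFNGLCREATEPROGRAMPROC)GetAnyGLFuncAddress("glCreateProgram");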
  24. Screen tearing when using OpenGL on Windows

     Thank you for the suggestion. I have tried rendering to a full-screen window using the styles you specified with the WinAPI. I also tried rendering to a full-screen window created by GLFW. The tearing persisted in both cases. I tried setting the pre-rendered frames to maximum, but that did not help either.