l0calh05t

Members · Content count: 536 · Community Reputation: 1796 (Excellent) · Rank: Advanced Member
  1. I prefer to follow what the standard library and Boost do, i.e.:
       • no type prefixes (this includes I for interfaces)
       • snake_case for both functions and classes
       • CamelCase for template parameters/concepts
       • UPPER_SNAKE_CASE for macros/defines
    For members I usually just use snake_case as well, without an m_ prefix.
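    A minimal sketch of those conventions in C++ (all names below are illustrative):

```cpp
#include <cstddef>

#define MAX_RETRIES 3                        // UPPER_SNAKE_CASE for macros/defines

template <typename ValueType>                // CamelCase for template parameters
class ring_buffer                            // snake_case for classes, no prefix
{
public:
    void push_back(const ValueType& value);  // snake_case for functions

private:
    std::size_t head = 0;                    // members: snake_case, no m_ prefix
    std::size_t tail = 0;
};
```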
  2. "Whoa, it ain't supported in any GL 3 level hardware."

    You sure about that? DirectX uses the equivalent of GL_ZERO_TO_ONE, so hardware support is likely present. Driver support is a different matter, though.
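    For reference, a minimal sketch of the GL side (glClipControl is core in OpenGL 4.5 and also exposed by the ARB_clip_control extension):

```cpp
// Switch clip-space depth from OpenGL's default [-1, 1] to the D3D-style
// [0, 1] range; requires a GL 4.5 context or ARB_clip_control.
glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);

// The projection matrix must now map z to [0, 1]. For a reverse-Z setup you
// would additionally clear depth to 0.0 and test with GL_GREATER.
```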
  3. The simplest way of making 2D textures seamless is shifting them by half along x and y and blending that with the unshifted version depending on the distance from the edge (or manually stamping the edges away to maintain more detail). The cubemap version of that would, I believe, be to rotate by 45 degrees about all three axes, placing the corners in the centers of the faces.
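    A quick sketch of the 2D half-shift trick on a square grayscale image (a hypothetical helper, not from any library):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Blend the image with a copy shifted by half its size in x and y. Near the
// borders the weight of the shifted copy approaches 1, so opposite edges of
// the result match and the texture tiles seamlessly.
std::vector<float> make_seamless(const std::vector<float>& img, std::size_t n)
{
    std::vector<float> out(n * n);
    for (std::size_t y = 0; y < n; ++y)
        for (std::size_t x = 0; x < n; ++x)
        {
            // 0 at the nearest edge, 1 at the center of the image
            const float dx = std::min(x, n - 1 - x) / (n / 2.0f);
            const float dy = std::min(y, n - 1 - y) / (n / 2.0f);
            const float w = std::min(dx, dy);
            const float shifted = img[((y + n / 2) % n) * n + (x + n / 2) % n];
            out[y * n + x] = w * img[y * n + x] + (1.0f - w) * shifted;
        }
    return out;
}
```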
  4. Spaghetti code

    @Kylotan and @Daixiwen, what do you think the problems are / why does visual programming only work for simple tasks, in your opinion? I kinda wonder if it's just the same mistakes being made in such environments, like a lack of hierarchies (or maybe just a lack of their use? The LabVIEW image in the first post looks like there are hierarchical blocks, but they are always displayed inline. I've never worked with it though, only Simulink), or missing abstractions such as repeating the same operation on multiple inputs.
  5. "You could do it via a geometry shader, couldn't you?"

    That shouldn't be necessary in most cases. For normals, using the screen-space derivatives of the position usually works well enough. Furthermore, both DirectX and OpenGL support "flat"/"nointerpolation" for vertex attributes. In such cases, the "provoking vertex" determines the value for the entire primitive (see https://www.khronos.org/opengl/wiki/Primitive#Provoking_vertex). You may need to duplicate a few vertices, but not all of them.
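    A minimal sketch of the flat-shading route on the GL side (the GLSL output name is illustrative):

```cpp
// With a flat-qualified vertex output (GLSL: `flat out vec3 face_normal;`),
// the provoking vertex supplies the value for the whole triangle. OpenGL
// defaults to the last vertex of each primitive; this selects the first,
// matching Direct3D's convention.
glProvokingVertex(GL_FIRST_VERTEX_CONVENTION);
```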
  6. Move away from Git to a centralized version control system like Perforce or Subversion that doesn't have issues with large files? Also, DLL/EXE PDBs are "special" in that it is probably better to use a symbol server with source indexing for them instead.
  7. "This. But mind you... first, you must be very careful not to confuse two very different things: screen size (1080p, 4K, ...) and pixel density (72 dpi, 96 dpi, 150 dpi, ...)."

    It's even worse: there's a third factor that is equally important, the (expected) distance to the screen. Mobile phones are held a lot closer to you (~40 cm) than your 60" 4K TV at home (~3 m), or your much closer 27" 4K PC screen (~60 cm?). What really matters isn't so much dpi as view angle per pixel. This also underlines the need for configurable scaling: while you may be able to query a screen's pixel size and physical size, you can only guess at the viewing distance. But do base your initial scaling on system settings (the Windows DPI slider).
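    A rough sketch of the "view angle per pixel" comparison (the pixel pitches and distances below are illustrative assumptions, not measurements):

```cpp
#include <cmath>
#include <cstdio>

// Visual angle subtended by a single pixel, in arc minutes.
double pixel_angle_arcmin(double pitch_mm, double distance_mm)
{
    const double pi = 3.14159265358979323846;
    return 2.0 * std::atan(pitch_mm / (2.0 * distance_mm)) * (180.0 / pi) * 60.0;
}

int main()
{
    // 6" 1080p phone (~0.069 mm pitch) at ~40 cm
    std::printf("phone:   %.2f arcmin\n", pixel_angle_arcmin(0.069, 400.0));
    // 27" 4K monitor (~0.155 mm pitch) at ~60 cm
    std::printf("monitor: %.2f arcmin\n", pixel_angle_arcmin(0.155, 600.0));
    // 60" 4K TV (~0.345 mm pitch) at ~3 m
    std::printf("tv:      %.2f arcmin\n", pixel_angle_arcmin(0.345, 3000.0));
}
```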
  8. Scalable vector graphics tend to work best. If you know the potential scale factors, you can also get crisp lines. For example, in Windows 10 the smallest scaling step is 0.25, so horizontal/vertical lines that are multiples of 4 px wide (at the reference/low-res scale) and aligned to a 4 px grid will be crisp, without blurred antialiased edges. If you do go the raster graphics route, I'd recommend authoring for the highest current resolution and always scaling down (you can potentially do this at load time; there's no need to keep the high-res graphics in VRAM). And make sure to use a high-quality scaling filter, not linear interpolation (another reason to do this once, before copying to the GPU). You should, however, at least allow for scaling up as well, because higher resolutions will likely be available in the not-too-distant future.
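    A tiny sketch of the 4 px grid arithmetic (assuming scale factors quantized to 0.25 steps, as in Windows 10):

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // Any length that is a multiple of 4 reference pixels maps to a whole
    // number of device pixels for every scale factor that is a multiple of 0.25.
    for (double scale = 1.0; scale <= 2.0; scale += 0.25)
    {
        const double line = 4.0 * scale; // a 4 px (reference scale) line
        std::printf("scale %.2f -> %.2f device px (%s)\n", scale, line,
                    line == std::floor(line) ? "crisp" : "fractional");
    }
}
```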
  9. How to lose friends and alienate coworkers.

    Nonsense. Private methods are not at all like other methods: they are implementation details! So those "pre- and postconditions" are liable to change at any time, making your tests utterly worthless. A private method can even leave an object in an inconsistent state. DO NOT do this. And doing it to achieve better "code coverage" is even worse, because if any paths in private methods cannot be exercised via public methods, they are dead code and should be removed. Or, if you prefer a quote from a book, Dave Thomas and Andy Hunt write the following in Pragmatic Unit Testing:

      "In general, you don't want to break any encapsulation for the sake of testing (or as Mom used to say, 'don't expose your privates!'). Most of the time, you should be able to test a class by exercising its public methods. If there is significant functionality that is hidden behind private or protected access, that might be a warning sign that there's another class in there struggling to get out."
  10. How to lose friends and alienate coworkers.

      "And this is why we need a reflection API in C++ ^^"

      Nope. There are many good reasons for (static!) reflection in C++, but this sure as hell isn't one of them. The private parts of a class are NOT part of the API, and the protected parts are only part of the API for things that derive from it. So you should never, ever do this.
  11. Agreed, the format is overcomplicated, extremely inefficient bullshit.
  12. Have you had a look at FreeCAD? https://github.com/FreeCAD/FreeCAD It's not really a library, but it's open source and has STEP/IGES support.
  13. "Most games I know use vertical FOV to get 'Horizontal+' behavior on wider screens, i.e., a wider screen (relative to height) means a wider horizontal FOV."

    In any case, here's the variant using angles directly, instead of going via the frustum planes, for calculating vfov from hfov. Basic trigonometry gives us

      w / d = 2 * tan(hfov / 2)
      h / d = 2 * tan(vfov / 2)

    which we can divide by one another to remove the distance:

      w / h = tan(hfov / 2) / tan(vfov / 2)

    This can then be transformed into

      vfov = 2 * atan(h / w * tan(hfov / 2))

    or

      hfov = 2 * atan(w / h * tan(vfov / 2))

    So if hfov is 90° and w / h = 2:

      vfov = 2 * atan(0.5 * tan(45°)) = 53.13°

    To the OP: which FOV you use is mostly a matter of convention. For every aspect ratio you can compute an equivalent vertical FOV from a horizontal FOV or vice versa (you could even split the difference and use a diagonal FOV!). As mentioned above, many games specify a vertical FOV so that wider screens automatically result in a wider (horizontal) FOV. The physically correct FOV is determined by the user's distance to the screen (d above) and its actual physical width and height (w and h above). With such an FOV you get a distortion-free "window" into virtual 3D space (assuming the user is centered in front of the screen; if not, you need a skewed projection matrix). However, most games use much higher FOVs than what would be physically correct, especially first-person shooters. IMO, the description of the OpenGL projection matrix here is pretty good: http://www.songho.ca/opengl/gl_projectionmatrix.html
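    A small sketch of the two conversions above (plain C++, angles in radians; aspect is w / h):

```cpp
#include <cmath>
#include <cstdio>

// vfov = 2 * atan(tan(hfov / 2) / aspect)
double vfov_from_hfov(double hfov, double aspect)
{
    return 2.0 * std::atan(std::tan(hfov / 2.0) / aspect);
}

// hfov = 2 * atan(tan(vfov / 2) * aspect)
double hfov_from_vfov(double vfov, double aspect)
{
    return 2.0 * std::atan(std::tan(vfov / 2.0) * aspect);
}

int main()
{
    const double pi = 3.14159265358979323846;
    // 90° horizontal FOV at a 2:1 aspect ratio -> 53.13° vertical FOV
    std::printf("%.2f\n", vfov_from_hfov(pi / 2.0, 2.0) * 180.0 / pi);
}
```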
  14. That is not how vertical FOV is calculated: the aspect ratio should not be applied to the angular FOV value, but to the linear "distance" value, i.e., the tangent.
  15. OpenGL pixel offset after rendering to fbo?

    While GL_NEAREST might theoretically be faster, there's nothing wrong with trying to get it right with GL_LINEAR first. GL_NEAREST might hide a scaling mistake at some target sizes that would still show up at others; with GL_LINEAR, the mistake will show up as slight blurriness at all target sizes.
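    For completeness, a minimal sketch of setting the filter on the FBO's color texture (fbo_color_tex is a hypothetical handle):

```cpp
// Use linear filtering while debugging the blit: a half-texel offset then
// shows up as uniform blurriness at every target size instead of being
// hidden at some sizes.
glBindTexture(GL_TEXTURE_2D, fbo_color_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```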