
h3xl3r

Member

  • Content Count: 18
  • Joined
  • Last visited

Community Reputation: 670 Good

About h3xl3r

  • Rank: Member

Personal Information

  • Role: Programmer
  • Interests: Programming
  1. h3xl3r

    Type of function

    https://en.m.wikipedia.org/wiki/Parametric_equation
  2. I went to GDC Europe a couple of years ago and it was "alright" in my opinion. I like technical talks (rendering dude here) and it was all a bit light on that side, but it was great to chat with so many talented and interesting people.

     So my question is: who on this forum is going this year, and what are you expecting/looking forward to? Have you been before, and was it really, really great? Did I maybe just catch a "boring year"?

     I'm debating right now whether the cost/trip is worth it in 2014.
  3. h3xl3r

    OpenCL Driver/Runtime

     Oh wow, thanks a lot for the detailed reply!

     Indeed, in my case I am already using one of the GPUs for the (OpenGL) rendering context and would like to use the second GPU for some things that are currently done on the CPU (optionally, of course, reinforced by all the gotchas you mention).

     I hadn't actually thought about using some sort of CrossFire/SLI technology here, but it makes a lot of sense. For example, one of the uses in my case would be tessellation, and the trip from CPU -> 2nd GPU -> CPU (result) -> 1st GPU (render) could then be trimmed down to CPU -> 2nd GPU -> 1st GPU. That would be great, but I guess the tech is not really there yet in reality. I did also hear some horror stories about OpenCL/OpenGL interop breaking randomly with driver releases and therefore not really being used outside of scientific (= controlled environment) settings.

     Is there any hope of these things being fixed in the not-so-distant future? Is CUDA maybe worth looking at more in terms of reliability (even if hardware specific)?

     Edit: Also wondering if the situation is any better or worse on OS X? Specifically looking at hardware like the new dual-AMD Mac Pro machines (where CUDA is obviously a no-go).
  4. So I have isolated some portions of an application that lend themselves to being parallelized. I already have some multi-threading in place to spin off threads and distribute these "tasks" over multiple available CPU cores.

     Now I am interested in (optionally) delegating some of these tasks to a second GPU and would like to use OpenCL to do so. After an (albeit tiny) bit of research, it seems the situation around detecting and initializing a GPU as a compute device is anything but straightforward. The OpenCL driver situation in particular is rather confusing to me: it seems all major vendors supply OpenCL drivers/runtimes that need to be shipped and installed with my application? Because of this I am also wondering whether these drivers are actually specific to the vendors' hardware, and whether I'd have to ship all possible drivers with the application and then choose the "right" one to install depending on the client hardware?

     Anyone have any experience with this, or a reference to an article/blog that gives an overview of the situation?
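     For reference, the kind of detection/initialization I have in mind is roughly this (a sketch against the standard OpenCL C API, assuming an ICD loader and headers are available; names and error handling are simplified):

     // Minimal OpenCL platform/device enumeration sketch.
     // Assumes the OpenCL headers and an ICD loader are installed.
     #include <CL/cl.h>        // (on OS X: #include <OpenCL/opencl.h>)
     #include <cstdio>
     #include <vector>

     int main()
     {
         cl_uint numPlatforms = 0;
         clGetPlatformIDs(0, nullptr, &numPlatforms);

         std::vector<cl_platform_id> platforms(numPlatforms);
         clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

         for (cl_platform_id platform : platforms)
         {
             char name[256] = {0};
             clGetPlatformInfo(platform, CL_PLATFORM_NAME, sizeof(name), name, nullptr);
             std::printf("Platform: %s\n", name);

             cl_uint numDevices = 0;
             clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 0, nullptr, &numDevices);
             if (numDevices == 0)
                 continue;   // this platform has no GPU devices

             std::vector<cl_device_id> devices(numDevices);
             clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, numDevices, devices.data(), nullptr);

             for (cl_device_id device : devices)
             {
                 char deviceName[256] = {0};
                 clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(deviceName), deviceName, nullptr);
                 std::printf("  GPU device: %s\n", deviceName);
             }
         }
         return 0;
     }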
  5. As stated already in this thread, buffers are the way to go!

     For some example text-rendering code (plus some goodies like scalable SDF rendering + effects) take a look at: https://github.com/rougier/freetype-gl

     Personally I use a hybrid approach: a plain pre-rendered texture atlas for small font sizes (8-12) and a single pre-rendered signed-distance-field texture atlas scaled to all other sizes. All using VBOs of course; the only difference between rendering the two variants is the fragment shader code.
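     For the SDF variant, that fragment shader boils down to roughly this (a sketch of the usual smoothstep-on-the-distance trick, kept as a GLSL string in the C++ source; the uniform names, edge threshold and smoothing width are placeholders, not the freetype-gl code):

     // Rough sketch of an SDF text fragment shader (GLSL ES 2.0 style),
     // stored as a string in the C++ source. Names are placeholders.
     static const char* kSdfFragmentShader = R"(
         precision mediump float;

         uniform sampler2D u_fontAtlas;   // single-channel signed-distance-field atlas
         uniform vec4      u_color;

         varying vec2 v_texCoord;

         void main()
         {
             // 0.5 is the glyph edge in a normalized distance field.
             float dist      = texture2D(u_fontAtlas, v_texCoord).a;
             float smoothing = 0.06;      // tune per font size / atlas resolution
             float alpha     = smoothstep(0.5 - smoothing, 0.5 + smoothing, dist);
             gl_FragColor    = vec4(u_color.rgb, u_color.a * alpha);
         }
     )";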
  6. Only time for a quick look; if there's time later I'll try to post some code as well, but here are some remarks:

     - You're using the deprecated (in OpenGL 3) GL_QUADS primitive type. You should really be using two triangles to draw a "quad". Your driver might still support this primitive type of course, as it's only deprecated and might not be removed completely (yet), but you'll have to switch sooner or later and it's the way to go in general.

     - Your vertex data does not have enough vertices to draw a cube without using an index buffer. Eight points are enough for an indexed cube because the indices can reference each vertex multiple times. If you're using glDrawArrays (= non-indexed) you'll have to specify each vertex (position) multiple times to get the same result. Take a look at glDrawElements for indexed drawing.

     My suggestion would be to not go for the cube immediately but to try drawing one quad composed of two triangles first, preferably using an index buffer, as that's what you'll need when extending the code to the full cube later (see the sketch below).

     Edit: Take a look at: http://www.open.gl/drawing

     The example there also starts with a triangle, extends it to a quad with glDrawArrays first, and then adds indexing with glDrawElements.
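     Roughly what the indexed quad looks like (a sketch against OpenGL 3.x with a VAO; it assumes a context/loader is already set up and that attribute location 0 is the position in your shader):

     // Sketch: one quad as two indexed triangles (OpenGL 3.x style).
     GLfloat vertices[] = {
         -0.5f, -0.5f, 0.0f,   // 0: bottom-left
          0.5f, -0.5f, 0.0f,   // 1: bottom-right
          0.5f,  0.5f, 0.0f,   // 2: top-right
         -0.5f,  0.5f, 0.0f    // 3: top-left
     };
     GLuint indices[] = { 0, 1, 2,  2, 3, 0 };   // two triangles sharing vertices 0 and 2

     GLuint vao, vbo, ibo;
     glGenVertexArrays(1, &vao);
     glBindVertexArray(vao);

     glGenBuffers(1, &vbo);
     glBindBuffer(GL_ARRAY_BUFFER, vbo);
     glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

     glGenBuffers(1, &ibo);
     glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
     glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

     glEnableVertexAttribArray(0);
     glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (void*)0);

     // Per frame:
     glBindVertexArray(vao);
     glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, (void*)0);

     Extending this to the cube is then just more vertices and more indices, with the draw call staying the same.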
  7. Thank you! That's exactly the sort of experience I wanted to hear about!

     I'm guessing you also mean combining glBufferData with "discarding" the "old" buffer before filling it each frame, by first buffering NULL data with the same parameters? That seems to be a sort of "accepted" hint for the vendor driver to hand out a new memory region instead of reusing an "in-flight" buffer and causing a sync. Something like the sketch below.
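     In code, I'm picturing the orphaning pattern roughly like this (GL 2.1 / ES 2.0 level calls; the buffer name, size and usage hint are placeholders):

     // Sketch: "orphaning" a streaming VBO each frame before refilling it.
     // bufferSize and vertexData stand in for the real streaming data.
     glBindBuffer(GL_ARRAY_BUFFER, streamVbo);

     // Re-specify the full store with NULL data and the same size/usage.
     // This hints the driver to hand out a fresh block of memory instead of
     // stalling on a buffer the GPU may still be reading ("in flight").
     glBufferData(GL_ARRAY_BUFFER, bufferSize, NULL, GL_STREAM_DRAW);

     // Now upload this frame's vertices into the freshly orphaned store.
     glBufferSubData(GL_ARRAY_BUFFER, 0, bufferSize, vertexData);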
  8. I've found this excellent article from "OpenGL Insights" discussing exactly this problem and evaluating all the options: http://www.seas.upenn.edu/~pcozzi/OpenGLInsights/OpenGLInsights-AsynchronousBufferTransfers.pdf

     Seems like in the end there are separate "fast paths" for different vendors (*sigh*).
  9. There seem to be two sets of defines for GL versions in GLEW, a bit confusing really: "GLEW_VERSION_x_x" vs. "GL_VERSION_x_x".

     In my code I am using the "GL_VERSION_x_x" ones with "glewIsSupported" and so far haven't had any problems on multiple test machines.

     At least according to this post on Stack Overflow, the "GL_VERSION_x_x" ones do seem to work better (for whatever reason): http://stackoverflow.com/questions/21644230/what-is-the-difference-between-glew-version-x-x-and-gl-version-x-x

     I guess your idea to just try to create contexts of a certain version and see whether that succeeds or fails is a good way to check as well. Just note that on Intel (and maybe other vendors) I've seen situations where the context would be created just fine but then not actually support the full GL spec for that particular version. Good times :)
  10. Try using glewIsSupported("GL_VERSION_3_0"); etc., instead of checking the version macros directly.

     Querying GL_MAJOR_VERSION, GL_MINOR_VERSION or GL_VERSION will not give you the OpenGL versions available in the driver, but the version of the context you already created. Also, GL_MAJOR_VERSION and GL_MINOR_VERSION were only introduced in OpenGL 3.0 as far as I know, so they might not be a good way of checking the version on contexts created with earlier GL versions.

     You could also try running another OpenGL capabilities viewer like http://www.realtech-vr.com/glview/ to compare results.
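     For the version check itself, something along these lines (a sketch, assuming glewInit() has already been called on a current context):

     // Sketch: probing supported GL versions with GLEW after glewInit().
     #include <GL/glew.h>
     #include <cstdio>

     void ReportGLVersions()
     {
         // glewInit() must have been called with a valid current context first.
         if (glewIsSupported("GL_VERSION_3_0"))
             std::printf("OpenGL 3.0 is supported\n");
         if (glewIsSupported("GL_VERSION_3_3"))
             std::printf("OpenGL 3.3 is supported\n");

         // The driver's own version/renderer strings, for comparison.
         std::printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));
         std::printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
     }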
  11. Another free online introduction discussing modern OpenGL (3.x+): http://open.gl

     Real paper books go out of date really quickly these days, but of course the OpenGL triplet of books (Red/Blue/Orange) are classics and "complete" references for anything in GL. Just be sure to pick the edition covering the version you're interested in, and don't expect the most beginner-friendly tutorial.
  12. You could also give FXAA a try, pure post-processing-based AA in a shader: http://www.geeks3d.com/20110405/fxaa-fast-approximate-anti-aliasing-demo-glsl-opengl-test-radeon-geforce/
  13. Good day! This is something I've been struggling with for a while now: I am streaming a lot of dynamic vertex data in an OpenGL 2.1 / OpenGL ES 2 compatible renderer that stays away from fixed function completely (even in desktop GL 2.1, where that would still be an option).

     Since I have CPU-side state management, I can swap on the fly between implementations using either client-side vertex arrays or GPU buffer objects and profile the separate paths. All is well, except that there seems to be no way I can get the vertex buffer updates using proper GL buffer objects as fast as the vertex array way, even though in theory it should come down to roughly the same work in the driver:

     - vertex array: copy all the data from CPU memory to the driver in glVertexAttribPointer
     - vertex buffer: copy all the data from CPU memory in glBufferSubData

     I know this is of course rather dependent on the driver as well, but between a rather low-spec desktop machine and multiple versions of iPads, there is no way I can get the same times; the GPU buffer always loses. I've been looking on the web, of course, and tried everything I found that is actually available in the GL2-class spec (double/triple buffering, invalidating with NULL data first, etc.) to no avail. I am also soon going to start the GL3/ES3 version, which removes the client-side vertex array option altogether, so this is going to have to be solved. Any ideas? I appreciate all hints, really! Maybe something extension-based that isn't too badly supported?

     Edit: Just want to mention I also tried various ways to Map/MapRange, but none of the new fancy flags for these are available in (vanilla) GL2/ES2, and (especially on the iPads) these also always came in late in the profiling. I am obviously causing a sync somewhere, but going double or triple buffered and the invalidation tricks just don't seem to make a difference, so I am a bit lost for ideas. (Both update paths are sketched below for reference.)
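     The two per-frame update paths being compared, roughly (GL 2.1 / ES 2.0 level; attribute index 0, vertexCount, bytes and the data pointer are placeholders for the real streaming setup):

     // Sketch of the two per-frame update paths being compared.
     // 'vertices' is this frame's CPU-side vertex data, 'bytes' its size.

     // Path A: client-side vertex array - the pointer itself carries the data.
     glBindBuffer(GL_ARRAY_BUFFER, 0);                       // no buffer bound
     glEnableVertexAttribArray(0);
     glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, vertices);
     glDrawArrays(GL_TRIANGLES, 0, vertexCount);

     // Path B: GPU buffer object - upload first, then point into the buffer.
     glBindBuffer(GL_ARRAY_BUFFER, streamVbo);
     glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, vertices);   // this is the slow part here
     glEnableVertexAttribArray(0);
     glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
     glDrawArrays(GL_TRIANGLES, 0, vertexCount);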
  14. h3xl3r

    Static Classes in c++

     About structure for this specific problem: in this case I would not make this class a singleton either. There's usually some sort of "Game/Application" class already that's responsible for creating instances of all subsystems on start-up, and since that particular class is naturally a singleton (there can only ever be one instance of it), it is a good place to create your "global tools" and put getters for them. So you'd have the master "Game" class create an instance of your GUIDManager once and expose a GetGUIDManager getter so that all other subsystems can access it via "Game" (see the sketch below). Sure, it could all be structured differently, but this way you avoid making a lot of singletons (bad!) and use the one you already "have to" make anyway.
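     Roughly what I mean (a sketch; the class and method names just mirror the ones from this thread, and the GUIDManager body is a placeholder):

     // Sketch: the "Game" class owns the subsystems and hands out access to them.
     #include <memory>

     class GUIDManager
     {
     public:
         unsigned int NextGUID() { return m_next++; }   // placeholder implementation
     private:
         unsigned int m_next = 1;
     };

     class Game
     {
     public:
         void Init()
         {
             // Create all subsystems once, on start-up.
             m_guidManager = std::make_unique<GUIDManager>();
         }

         GUIDManager& GetGUIDManager() { return *m_guidManager; }

     private:
         std::unique_ptr<GUIDManager> m_guidManager;
     };

     // Other subsystems then reach it through the Game instance they were given:
     //   unsigned int id = game.GetGUIDManager().NextGUID();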
  15. Sounds like your problem is more with the timestep of your physics simulation than with anything to do with replaying the action? If so, the article rip-off suggested is exactly what you need to fix it. Once your physics are no longer going wild because of the variable timestep, recording/replaying should simply be a matter of keeping a record of all user input (key/mouse/net) with timestamps, then running the simulation again from the same initial state and firing the input events from your recording according to their timestamps (see the sketch below). Pretty much like you would record a piece of music on a MIDI keyboard (recording key events + time) and then play it back later.
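     The replay side could look something like this (a sketch with a fixed timestep; InputEvent and the Apply/Step/Reset functions are made-up names standing in for your game's own code):

     // Sketch: deterministic replay of timestamped input over a fixed timestep.
     #include <cstddef>
     #include <vector>

     struct InputEvent
     {
         double time;     // seconds since the recording started
         int    key;      // or mouse/net payload
         bool   pressed;
     };

     // Provided by the game; placeholders for illustration only.
     void ResetSimulationToInitialState();
     void ApplyInput(const InputEvent& e);
     void StepSimulation(double dt);

     void Replay(const std::vector<InputEvent>& recording)
     {
         const double dt = 1.0 / 60.0;   // fixed physics timestep
         double simTime = 0.0;
         std::size_t next = 0;

         ResetSimulationToInitialState();   // same initial state as when recording

         while (next < recording.size())
         {
             // Fire every recorded event whose timestamp falls inside this step.
             while (next < recording.size() && recording[next].time <= simTime + dt)
             {
                 ApplyInput(recording[next]);
                 ++next;
             }

             StepSimulation(dt);            // identical to the live update
             simTime += dt;
         }
     }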