Dustin Hopper

Community Reputation

186 Neutral

About Dustin Hopper

  1. Hey everyone!   (TL/DR: Kickstarter, Website)   I work for a startup company called Lynx Laboratories. We're creating the first 3D capture device, which can digitize the shape and motion of what it sees.   The details: the camera does three things: scene modeling, object modeling, and motion capture. The goal is to capture high-quality content that would otherwise be made by hand today, just as the 2D camera did for oil painting:
     - Five-hour 3D modeling tasks can be done in just minutes.
     - Animators can have a motion capture studio in their backpack, reducing the total amount of time required to animate.
     - Asset creation is now easier than ever, with any (not small) object or scene you could imagine.
     We have videos available on our website, Kickstarter, and YouTube channel. Let me know what you guys think! I'd love to answer any questions and hear comments/suggestions.
  2. [quote name='web383' timestamp='1346256771' post='4974465'] This is very interesting. Maybe because the graphics driver isn't hitting the frame buffer with anti-aliasing with your new pipeline? [/quote] Anti-aliasing is enabled, and that's possible, yes. [quote name='Hodgman' timestamp='1346259733' post='4974488'] What does [font=courier new,courier,monospace]grabDepth[/font] do, really? The answer to your question probably depends on how you're doing this in both cases. [/quote] I doubt this. In fact, I've had to do a little more work in the second version to make it possible. Before:
[source lang="cpp"]
void grabDepth(float *depth_array)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, depthPBO_);
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    // use gpgpu api to memcpy the array pointed to by depthPBO_ into depth_array
}
[/source]
After:
[source lang="cpp"]
void grabDepth(float *depth_array)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, depthFBO_);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, depthPBO_);
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    // use gpgpu api to memcpy the array pointed to by depthPBO_ into depth_array
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
}
[/source]
  3. Let's say, for this example, I have a few standard meshes of around 200,000 polygons each. Pseudo-pseudo-code for the old rendering pipeline:
[source lang="cpp"]
void display()
{
    // setup movement
    // define lighting properties
    // draw multiple meshes
    // draw extra objects (2D UI, etc.)
    // both are rendered using glDrawElements with gl***Pointer
    // store depth buffer of FB at viewport inside array
}

void grabDepth(float *depth_array)
{
    // you can assume the proper mutexes exist for this situation to work concurrently
    // copy depth buffer from local array into depth_array
}
[/source]
Using GPGPU resources, this worked pretty well. Recently, I've switched to rendering everything into separate FBO/RBO objects. Pseudo-pseudo-code for the new rendering pipeline:
[source lang="cpp"]
void display()
{
    // setup movement
    // bind objects FBO
    // define lighting properties
    // draw multiple meshes
    // bind extra-objects FBO
    // draw extra objects (2D UI, etc.)
    // render combined FBO/RBO combos as textures on a quad to the screen
}

void grabDepth(float *depth_array)
{
    // you can assume the proper mutexes exist for this situation to work concurrently
    // just grab the RB depth attachment and copy it into depth_array
}
[/source]
All data arrays are malloc'd and stored on the GPU. Nothing is moved or transferred to/through the host. I'm seeing a performance increase (faster, and the appearance is crisper) just from rendering to an offscreen FBO instead of directly to the screen. I can't figure out why this is. Anyone have any pointers or suggestions?
  4. Look into CUDA or OpenCL development; these two APIs allow computation in device memory on the GPU, and support pushing data back and forth between RAM and the GPU.
  5. instant fps drop

    Enabling the OpenGL flags GL_LINE_SMOOTH and GL_POINT_SMOOTH can severely affect your FPS if you're rendering an insane number of points and/or lines. Play around with what you have and see what performance increase/detail decrease you get.
  6. OpenGL Mouse input api ?

    I mean, you could make your own; in Windows, it's pretty simple. [url="http://msdn.microsoft.com/en-us/library/system.windows.input.mouse.aspx"]http://msdn.microsoft.com/en-us/library/system.windows.input.mouse.aspx[/url] Just make a class that wraps these functions, create a static singleton/arbiter (or whatever you prefer), and listen for/pass events in your main event loop. Most input handling, from what I know, is done using much larger APIs such as Qt, GLUT, etc.
  7. OpenGL Main Character Cape

    [url="http://nehe.gamedev.net/tutorial/flag_effect_(waving_texture)/16002/"]http://nehe.gamedev.net/tutorial/flag_effect_(waving_texture)/16002/[/url] Just play around with the math and u/v/a points.
  8. TGA Import Error

    I had this problem when I first started using TGA images for textures. http://www.organicbit.com/closecombat/formats/tga.html http://tgalib.sourceforge.net/api/r28.html I believe that tutorial may be old; check the listed resources and try updating your header format to best resemble their examples. You can verify by simply printing out the variables and checking that what is saved in each one matches what you have in GIMP. By default, I recommend using 32-bit RGBA export settings.
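Printing the decoded fields is easy once you read the fixed 18-byte, little-endian TGA header. A sketch of just the fields worth checking (the struct and function names are mine, not from either resource):

```cpp
#include <cstdint>
#include <cstddef>

// The fields we care about from the fixed 18-byte TGA header.
struct TgaHeader {
    uint8_t  idLength, colorMapType, imageType;
    uint16_t width, height;
    uint8_t  bitsPerPixel, descriptor;
};

// Parse the header out of a raw file buffer; returns false if too short.
bool parseTgaHeader(const uint8_t* buf, size_t len, TgaHeader& h) {
    if (len < 18) return false;
    h.idLength     = buf[0];
    h.colorMapType = buf[1];
    h.imageType    = buf[2];                         // 2 = uncompressed true-color
    h.width        = uint16_t(buf[12] | (buf[13] << 8));  // little-endian
    h.height       = uint16_t(buf[14] | (buf[15] << 8));
    h.bitsPerPixel = buf[16];                        // expect 32 for RGBA exports
    h.descriptor   = buf[17];
    return true;
}
```

If the printed width/height/bpp don't match what GIMP reports, the loader's header layout (or its endianness handling) is the likely culprit.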
  9. Large space game, moving objects

    [quote name='Dark_Oppressor' timestamp='1338113849' post='4943660'] I don't see how 2D reduces playability, though. How is that? [/quote] To me, when I think of space, I picture wide, unexplored, mysterious regions of elegant nature in pristine design. Massive bodies of gas, liquid, rock, or pure density, surrounded by vast emptiness, their orbiting children scattered sporadically and diversely, like speckled drywall on the largest ceiling known to mankind. 3D is the only way, I feel, to capture even a fraction of that experience. Not that I'm hating on 2D; 3D would just be exponentially better for a space environment. [quote name='Dark_Oppressor' timestamp='1338113849' post='4943660'] And just to cement that this is a game no one will want, you don't even get to shoot things! [/quote] I understand it's a self-development project; that's awesome. You asked for an opinion on estimated levels of enjoyment, and I jumped the gun and assumed a straight-to-market kind of game. All that aside, for a slightly more advanced 2D game than normal, this definitely tops the list of what I would rather play compared to other 2D adventure games. The engine alone is a good start. Not to mention the potential you could add in the future: dynamic mapping, custom ships, crafting and resource skills, financial attributes/currency, warfare, etc. I'd check out Universe Sandbox. It may be the feel you're trying to accomplish, but in 3D and without a controlled ship or first-person interaction.
  10. [quote name='ATEFred' timestamp='1338086854' post='4943612'] I've used gpa with a 480 without problems, I doubt that is an issue. The problem here is probably skyrim using vendor specific formats (not exposed by d3d9) which intel gpa does not know about. (I know it understands intz and rawz, but there might be others I guess). [/quote] Verified, somewhat. [url="http://msdn.microsof...8(v=vs.85"]http://msdn.microsof...8(v=vs.85[/url]).aspx Also, check out the section 'Issues and Limitations': [url="http://software.intel.com/en-us/articles/gpa2012R3-releasenotes/"]http://software.intel.com/en-us/articles/gpa2012R3-releasenotes/[/url] Quite a few problems pop up when reading through it.
[quote name='Intel'][list]
[*]Intel® GPA does not support frame capture or analysis for those applications which utilize any of the following features:
[list]
[*]Execute on the Debug D3D runtime system
[*]Use the Reference D3D Device
[/list]
[*]The Intel® GPA System Analyzer HUD may not be displayed in applications that use copy-protection, anti-debugging mechanisms, or non-standard encrypted launching schemes.
[*]Intel® GPA provides analysis functionality by inserting itself between your application and DirectX*. Therefore, Intel GPA may not work correctly with certain applications which themselves hook or intercept DirectX* APIs or interfaces.
[/list][/quote]
I'm not sure exactly how the graphics sub-level of Skyrim works, but if it does any of those, that could be why it's failing. EDIT: I skipped over where you said you had a 32-bit system.
[quote name='Intel'][list]
[*]Intel® GPA Frame Analyzer runs best on 64-bit operating systems with a minimum of 4GB of physical memory. Additionally, consider running Intel GPA Frame Analyzer in a networked configuration (the server is your target graphics device, and the client running the Intel GPA Frame Analyzer tool is a 64-bit OS with at least 4GB of memory). On 32-bit operating systems (or 64-bit operating systems with <4GB of memory), warning messages, parse errors, very long load times, or other issues may occur when loading a large or complex frame capture file.
[*]Frame capture using Intel® GPA Monitor runs best on 64-bit operating systems with a minimum of 4GB of physical memory. On 32-bit operating systems (or 64-bit operating systems with <4GB of memory), out of memory or capture failed messages can occur.
[/list][/quote]
  11. Large space game, moving objects

    So, EVE Online without anything but space travel, plus a realistic global (or universal, I guess) physics engine? I don't know; I'd try it, but I doubt the repetitiveness of the game would have mass appeal. 2D brings down the playability as well. You could base the game map on estimated positions of real stellar objects to give it a scientific-exploration feel. Would you have any plans for a plot or storyline?
  12. I'm now getting 100% accuracy, at least for the test data, by doing a second pass on the data and comparing all three images. It isn't too much of a speed decrease, since all of these images are very, very small. EDIT: Source attached. I used OpenCV.
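The attached source uses OpenCV; as a library-free illustration of what a second comparison pass can look like, something like this works when the images are tiny (the function name and flat-grayscale representation are my own simplification):

```cpp
#include <cstdlib>
#include <vector>

// Score how alike two same-sized grayscale images are: 0 = identical,
// larger = more different (mean absolute per-pixel difference).
double meanAbsDiff(const std::vector<int>& a, const std::vector<int>& b) {
    double sum = 0.0;
    for (size_t i = 0; i < a.size(); ++i) sum += std::abs(a[i] - b[i]);
    return sum / a.size();
}
```

The second pass then just picks, for each query image, whichever of the three reference images yields the smallest score. Since the images are very small, the extra pass costs almost nothing.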
  13. I've been working on it for a bit, and these are the results I've been able to accomplish: http://imgur.com/a/IxnED Step by step: I first inverted the image, computed its average color, then used a simple Gaussian filter to smooth the image and perform simple edge improvement. Pixels above the image's average color (used as a constant threshold) are colored blue. Then I compare what percentage of each resulting image was colored, and use linear regression on the two best image results to guesstimate a 'marked/unmarked' boundary. I'm getting pretty good results from this, but one or two of the images just have too much variation in the marking. u1 -> 8%, m1 -> 89%, u2 -> 15%, m2 -> 36%, u3 -> 7%, m3 -> 38%, u4 -> 21% (not good). Not sure if any of this helped, but it was fun either way.
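Leaving out the Gaussian smoothing and the regression step, the invert-and-threshold core of that pipeline can be sketched as follows (a simplification on flat grayscale data; the function name is mine):

```cpp
#include <vector>

// Given grayscale pixels 0..255: invert, compute the average of the
// inverted image, then report what percent of pixels exceed that
// average, i.e. the "marked" fraction under a constant threshold.
double percentMarked(const std::vector<int>& pixels) {
    std::vector<int> inv;
    long long total = 0;
    for (int p : pixels) { inv.push_back(255 - p); total += 255 - p; }
    double avg = double(total) / inv.size();
    int above = 0;
    for (int v : inv) if (v > avg) ++above;
    return 100.0 * above / inv.size();
}
```

The per-image percentages (8%, 89%, and so on) come from exactly this kind of count; the regression step then fits a cutoff between the best-separated marked/unmarked pair.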
  14. Computer vision, excellent. What does 'marked' entail?
  15. Glad I helped. Good luck.