GDNet+ Pro
Community Reputation

2539 Excellent

About iedoc

  • Rank
    Advanced Member


  1. You usually want to tailor your resume for the specific job you're after. If you're looking for a programming job, it might be best to word your 3D art experience in a way that makes it look like it gave you some kind of experience that will help you with a programming job. On the other hand, if you want a 3D art job, you should word your programming experience in a way that makes your 3D art skills shine. In your case, one resume does not fit all: you should have multiple resumes, each designed for the type of job you are applying for.
  2. Basically, when you draw, you'll update the depth buffer. Render your 2D stuff first, for example, and have it write depth values very close to the camera. Then when you render the 3D stuff, any pixel that is further away than the value already written to the depth buffer will not get rendered (unless you're blending, in which case the blending equation is applied instead).
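The depth-test rule described above can be sketched in a few lines of pure Python. This is purely illustrative: the buffer names and the 0-to-1, smaller-is-closer depth convention are assumptions here, and real GPUs do this per fragment in fixed-function hardware.

```python
# Toy depth buffer: smaller value = closer to the camera (0..1 range).
W, H = 4, 4
depth_buffer = [[1.0] * W for _ in range(H)]   # cleared to the far plane
color_buffer = [[None] * W for _ in range(H)]

def write_pixel(x, y, depth, color):
    """Write a pixel only if it is closer than what is already stored
    (the standard LESS depth test; blending would change this rule)."""
    if depth < depth_buffer[y][x]:
        depth_buffer[y][x] = depth
        color_buffer[y][x] = color

# Draw the 2D overlay first, very close to the camera...
write_pixel(1, 1, 0.01, "ui")
# ...then the 3D scene; this fragment is further away, so it's rejected.
write_pixel(1, 1, 0.5, "world")

print(color_buffer[1][1])  # -> ui
```

The second write fails the depth test, which is exactly why 2D drawn close to the camera masks out the 3D scene behind it.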
  3. Where is "FX/color.fx" relative to the project you are debugging?
  4. OpenCV

    The disparity is pretty much the "depth map". You can get the 3D point cloud from a disparity map using the reprojectImageTo3D function OpenCV provides. You can do it yourself too, of course; this is just a convenience function.
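The math behind that reprojection is simple stereo triangulation. Here is a pure-Python sketch of the core formula (the focal length and baseline numbers are made up for illustration; for real cameras they come out of calibration, which is what OpenCV's Q matrix encodes):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth from a rectified stereo pair: Z = f * B / d.
    disparity_px: pixel disparity between left and right views
    focal_px:     focal length in pixels
    baseline_m:   distance between the two camera centers, in meters"""
    if disparity_px <= 0:
        return float("inf")   # no match / point at infinity
    return focal_px * baseline_m / disparity_px

# Example: f = 700 px, baseline = 0.1 m, disparity = 35 px
print(disparity_to_depth(35, 700.0, 0.1))  # -> 2.0 (meters)
```

reprojectImageTo3D does this per pixel (recovering X and Y as well as Z) using the Q matrix from stereo rectification.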
  5. OpenCV

    I can totally help you with depth from a disparity image. I did a thing trying to get a job over at Verizon; there's a link to the GitHub repo for the code somewhere in the description. I only had a couple of days to do it, so it's not perfect. I wish I had added something like marching cubes to turn the point cloud into a mesh, and maybe removed some of the noise from the point cloud as well.
  6. In D3D11, you actually have no direct control over, or guarantee of, whether a resource is in VRAM. This is managed automatically by the D3D11 driver. If VRAM fills up, for example, the driver decides which resources it can page out to system memory, and if system memory fills up, it decides which resources can be paged out to disk. If a resource is needed by a shader, it'll be paged back into VRAM.
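To illustrate the kind of decision the driver is making, here is a toy least-recently-used eviction model in Python. This is strictly a sketch of the concept: real D3D11 drivers use their own undocumented heuristics, and every name and number below is invented for the example.

```python
from collections import OrderedDict

class ToyResidencyManager:
    """Toy model of paging resources out of a fixed VRAM budget.
    Evicts the least-recently-used resource when space runs out."""
    def __init__(self, vram_budget):
        self.vram_budget = vram_budget
        self.resident = OrderedDict()   # name -> size, in LRU order
        self.paged_out = set()

    def use(self, name, size):
        """A shader needs this resource: page it in, evicting LRU ones."""
        if name in self.resident:
            self.resident.move_to_end(name)   # mark most recently used
            return
        self.paged_out.discard(name)
        while sum(self.resident.values()) + size > self.vram_budget:
            victim, _ = self.resident.popitem(last=False)  # evict LRU
            self.paged_out.add(victim)
        self.resident[name] = size

mgr = ToyResidencyManager(vram_budget=100)
mgr.use("texture_a", 60)
mgr.use("texture_b", 30)
mgr.use("texture_c", 50)        # doesn't fit: texture_a gets paged out
print(sorted(mgr.resident))     # -> ['texture_b', 'texture_c']
```

The point is that the application never sees any of this: it just binds resources, and residency is handled behind its back.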
  7. OpenCV

    The depth sensor is actually using the infrared sensor, which is why depth in these types of cameras doesn't work at all outdoors: the sun emits a lot of IR. If you want good depth, I'd suggest buying a stereo camera, or making your own. The Kinect is at least fun to play with, though; all the hard work of detecting people is done for you.
  8. Can you post the shader (or effects file) that's failing to compile?
  9. OpenCV

    I've used the Kinect sensor for a few different things. If you're trying to use it as a webcam, though, I can tell you not to do that. You'll have to set variables in the system registry, install a few different things that seem basically hacked together, and then cross your fingers that it works (as a webcam, to get the input stream). Then again, it's been a couple of years since I've touched the Kinect, so maybe it's easier to use without the Kinect SDK now. Microsoft discontinued the Kinect anyway because it was more of a novelty, and honestly, in my experience a regular webcam can get you the same results, often even better when you expect a lot of sunlight in the environment. The Kinect is easy to use, though, which is probably why it's appealing. I'd recommend not going with the Kinect. The Intel RealSense is pretty much just as good, smaller, and I believe still not discontinued. The IR (depth) sensors work a couple of feet further away on the Kinect; otherwise the RealSense is better in my opinion.
  10. OpenCV

    You're including a header in your project that was only intended to be used within the OpenCV library itself. Remove the line that includes that header from your project.
  11. The Pi works great as a server for personal use, depending on what exactly you need it for, but if you're expecting large volumes of people to play your game, the Pi will most definitely not cut it. You could certainly start out with it, though, and upgrade later. I have a couple of Pis set up as servers at my house; one is even hosting a site (not a public site, though).
  12. OpenCV

    You may need to use a couple of different techniques to get what you're after, but I think what you're looking for is connected components. The first thing you'll do is load your image and convert it to grayscale using imread and cvtColor. Then you use adaptive thresholding to turn the image into black and white (or black, white, and grey) with the function adaptiveThreshold. Then you can use the function findContours to find each "connected component" [another link]. findContours works best when your image has been thresholded, which is why I mentioned adaptive thresholding above. You can also do Canny edge detection, but I've personally had better results with adaptive thresholding before findContours, although the actual results will depend a lot on what is in your images. You'll then be able to either draw the outlines of the contours or fill them in, with drawContours or fillPoly. You'll probably find at some point that you get better results by preprocessing your image even more, for example by eroding and dilating. Anyway, that should get you started while you wait for your book.
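To see what "connected components" actually means without pulling in OpenCV, here is a pure-Python flood-fill labeling of a tiny binary image. Note this is an illustration of the grouping idea only: findContours traces contour boundaries rather than labeling pixels, and the 4-connectivity choice here is an assumption.

```python
from collections import deque

def label_components(image):
    """Label 4-connected groups of 1-pixels in a binary image.
    Returns a same-shaped grid of labels (0 = background) and the count."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if image[sy][sx] == 1 and labels[sy][sx] == 0:
                next_label += 1                    # found a new component
                labels[sy][sx] = next_label
                queue = deque([(sy, sx)])
                while queue:                       # BFS flood fill
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] == 1
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels, next_label

# Two separate blobs of 1s -> two components
img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
labels, count = label_components(img)
print(count)  # -> 2
```

This is also why thresholding first matters: the grouping only makes sense once every pixel is cleanly foreground or background.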
  13. OpenCV

    As far as books go, O'Reilly's "Learning OpenCV 3.0" was a really good overview of the library. I read through the entire thing; they do a great job of explaining the library, getting you set up and into it, and giving an overview of how the different parts of the library work. They also do a really good job of getting you familiar with some of the techniques used in computer vision and image processing, such as feature detectors and descriptors. If you're new to OpenCV, you won't want to read straight through the entire book; you'll just waste time. Read through the first couple of chapters to get how the library works, then pick and choose the chapters that are relevant to what you're trying to do. Definitely worth purchasing if you want to learn OpenCV.
  14. Weird bug

    Are they initialized? If not, they may contain (seemingly) random garbage data.
  15. This is saying that all the descriptors a descriptor table points to (the table the root signature points to) MUST exist before your shader runs. Nope, you don't need to fill out anything that you're not using.