Quat

Member
  • Content Count: 1039
  • Joined
  • Last visited
  • Community Reputation: 568 Good

About Quat
  • Rank: Contributor

Personal Information
  • Interests: Programming
  1. Thanks for the reply. Does the 64KB page granularity also apply in d3d11? Is the driver just doing the suballocation for us?
  2. In d3d12 the docs seem to push you toward allocating large heaps that can back multiple resources (suballocation), whereas in d3d11 you might do one allocation per texture/mesh. Is there a performance benefit to this? It seems that with the fast resource binding model of d3d12 you could still do one allocation per texture/mesh and have good performance relative to d3d11. Why does the GPU care whether the memory comes from one large heap or is scattered across many small allocations?
  3. I'm reading about memory management in d3d12. The docs basically say that a "committed resource" is like how past versions of Direct3D allocated a resource: you get the virtual address and the physical memory backing in one call. I understand that reserved resources cover what tiled resources were used for in d3d11, but when would you use placed resources over committed resources? (See the committed/placed sketch after this list.)
  4. I have the job of porting an old legacy OpenGL 2D renderer to Direct3D, and I'm trying to decide between d3d11 and d3d12. The d3d12 feature I think would be most beneficial is bindless texturing. On the other hand, I could essentially emulate that in d3d11 with a texture array of atlases (storing all sprite textures in one resource). Sticking with d3d11 would also avoid all the extra memory-management complexity of d3d12. So I'm leaning towards d3d11, but are there other d3d12 features that would make a big difference for a 2D renderer?
  5. Given a mesh with N vertices, suppose we know the colors at a subset of those vertices. How can we interpolate the colors from the known vertices to the remaining ones? I've only seen one paper, "Interpolation on a Triangulated 3D Surface", which MATLAB appears to implement in mesh_laplacian_interp. However, that paper is fairly old and I'm wondering whether better algorithms have been developed since then. (See the harmonic-interpolation note after this list.)
  6. Quat

    DX12 + WPF : Still no way ?

    I can't comment on DX12 since I haven't tried it, but you might be able to use https://docs.microsoft.com/en-us/windows/desktop/direct3d12/direct3d-11-on-12 (see the D3D11On12 sketch after this list). Regarding WindowsFormsHost: in my experience this is the way to go if you can live without overlaying WPF elements on top of your 3D viewport. I've had bad performance with D3DImage interop, because you are tied to the WPF rendering/update system.
  7. I'm trying to model a polished material with mirror-like properties (the shiny body of a new car). My specular reflections use the Fresnel-Schlick approximation, but I noticed the reflections were kind of dim. After investigating I see why: suppose I have a flat quad and the view ray strikes the surface with normal N at a 45 degree angle, and the reflection vector R hits a light source with light vector L. With a classical specular function we would evaluate something like dot(L,R)^m, and the above configuration gives the maximum specular contribution, which is what I would expect. But with Fresnel-Schlick we look at (1-dot(R, N))^5 = (1-.707)^5, which is a small number, hence the dim specular. Of course this matches what we intuitively expect Fresnel to do (more reflection at glancing angles, more refraction/diffuse when looking head on), and it works well for things like water and glass, but it does not seem to work well for the shiny car. However, my understanding of PBR is that everything should use Fresnel. Is there a flaw in my understanding? If it helps, I'm using F0 = (0.04, 0.04, 0.04), which Real-Time Rendering suggests for plastic/glass. (See the Schlick note after this list.)
  8. I'm trying to write a shader to do distortion correction: http://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html Suppose there is just radial distortion. I have an input image with radial distortion, and the output image will be the corrected image. So for each pixel (x,y) in the output image, I want to sample the pixel in the input image that should move to (x,y). Intuitively, radial distortion correction is going to "pull" pixels inward toward the center of the image. My question is about the boundaries, where the texture coordinates will go out of bounds of the input image. I can use a black border color, but is this expected? Looking at the results from the post "Image lens distortion correction", it looks like the corrected image will have some radial black areas, but I just wanted to confirm. (See the radial-distortion model after this list.)
  9. Quat

    Image Processing Server

    Thanks for the replies. I did a little more research and was curious where the socket APIs fit in. They seem to sit directly on top of the transport protocols, or are sockets still considered too low-level? I like that sockets are a mostly portable API, as we may use Linux. HTTP sounds simple enough that it could work. I found this Microsoft REST SDK, https://msdn.microsoft.com/en-us/library/jj950081.aspx, and it even has an example of pushing a chunk of data to the server, which is pretty much what I need. I have a question about HTTP though: can the server push data to a specific client, or must the client issue a request to get the image processing output? So basically, I'm leaning towards sockets (TCP/IP) or HTTP, as they seem like the simplest options for what I need to do. Would one be significantly faster than the other, or is HTTP just using sockets under the hood anyway?
  10. Hello, I hope this is the right forum. A product I am working on will have a computer with a camera attached that takes real-time video in grayscale. The frames need to be sent over a fast network to an "image processing server" that has multiple GPUs. Each frame will be assigned to the next available GPU for processing, and the output will then be sent to another computer on the network. The idea is for the frames to be processed in real time for a computer vision type application. I don't have much experience with network programming, but I think this should be pretty simple? Would I just create a basic client/server program using TCP/IP or UDP to send the image byte data plus a small amount of metadata (image dimensions, frame #, time stamp, etc.)? Speed is the most important thing so that we can process in real time. Any suggestions on protocols and design? (A minimal TCP framing sketch appears after this list.)
  11. I have a library of image processing algorithms, and I want to create a simple data-driven scripting type system, basically to chain a sequence of image processing algorithms. Does anyone know of open source libraries that already do this that I could look at for ideas? It seems mostly straightforward, but dynamic kernel parameters are bothering me. Fixed parameters can be specified in the script, and the output of one process serves as the input to the next. However, some algorithms have parameters that must be adjusted on the fly by the application; for example, an image process might depend on an array of values that changes with application state. Obviously these can't be hardcoded in the data file. I can't really think of a clean way to handle this other than using some type of reflection to figure out what kind of image process we are running and what parameters it expects as input. (A pipeline sketch with run-time parameter overrides appears after this list.)
  12. I might be able to get 5. Do you have a link for solving for this projective frame?
  13. I have the following problem. Suppose I have a 3D triangle in world coordinates, and I know the corresponding projected image coordinates of its vertices. Is it possible to find the view and projection transforms? The image points are actually found by a feature point detection algorithm, which is why I do not know the view/projection matrices. But I want to project more 3D points, which is why I need to solve for (or approximate) the view/projection matrices. (See the pose-estimation note after this list.)
  14. I have a set A of 3D points, and I have the camera and projection matrix, so I can project them onto the image plane to get projected points {p1, ..., pn}. Suppose the set A is transformed by a rigid body transform T into a new set B of 3D points. Again I can project these points with the same camera to get projected points {q1, ..., qn}. I am trying to do image alignment. It looks like the usual idea is to use least squares to find a 2D alignment transform (http://www.cs.toronto.edu/~urtasun/courses/CV/lecture06.pdf). My question is: can I use knowledge of the 3D rigid body transform T to find this 2D transform faster, or even immediately? In other words, given a 3D rigid body transform, can I figure out how it transforms the corresponding projected points for a given camera?
  15. Not sure if this should be moved to the math section... Suppose I have a 3D object in some reference position/orientation R. It undergoes a relatively small rigid-body transform M to reach a new position/orientation R'. Then R and R' will have different projected images I and I' relative to some camera C. I want to find the alignment transform that aligns I and I'. I know there are 2D algorithms that estimate this transform by identifying several feature pixels in the images and then fitting a 2D affine transform. The question is: does knowing M help? That is, does knowing the 3D transform let me get a more accurate image alignment, or get it faster? So far I think it helps somewhat: given the known feature pixels in image I from the reference position and their 3D points on the model, I can apply M and then project back into I', so I know the feature pixels for R' without searching for matches. But is there room for more improvement? (See the plane-induced homography note after this list.)
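
For the committed-versus-placed question in items 2 and 3, here is a minimal sketch of the two creation paths, assuming an already-created ID3D12Device and a filled-in D3D12_RESOURCE_DESC. The wrapper function name, heap size, and offset are illustrative only, and error handling is omitted.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Assumes an existing device and a resource description; HRESULTs unchecked.
void CreateBothKinds(ID3D12Device* device, const D3D12_RESOURCE_DESC& desc)
{
    D3D12_HEAP_PROPERTIES defaultHeap = {};
    defaultHeap.Type = D3D12_HEAP_TYPE_DEFAULT;

    // Committed resource: the runtime creates an implicit heap sized for this
    // one resource -- roughly how earlier Direct3D versions allocated.
    ComPtr<ID3D12Resource> committed;
    device->CreateCommittedResource(&defaultHeap, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_COMMON, nullptr,
                                    IID_PPV_ARGS(&committed));

    // Placed resource: allocate one big heap up front, then sub-allocate
    // resources inside it at 64KB-aligned offsets.
    D3D12_HEAP_DESC heapDesc = {};
    heapDesc.SizeInBytes = 64ull * 1024 * 1024;   // illustrative 64 MB heap
    heapDesc.Properties  = defaultHeap;
    heapDesc.Alignment   = D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT;

    ComPtr<ID3D12Heap> heap;
    device->CreateHeap(&heapDesc, IID_PPV_ARGS(&heap));

    ComPtr<ID3D12Resource> placed;
    device->CreatePlacedResource(heap.Get(), /*HeapOffset=*/0, &desc,
                                 D3D12_RESOURCE_STATE_COMMON, nullptr,
                                 IID_PPV_ARGS(&placed));
}
```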
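For item 5, the approach behind mesh_laplacian_interp is essentially harmonic (Laplacian) interpolation. As a reference formulation (a standard sketch, not necessarily exactly what that paper does): with \(L\) the cotangent or graph Laplacian of the mesh, \(C\) the constrained vertices with known colors \(\bar{c}\), and \(U\) the free vertices, solve

\[
(L\,c)_i = 0 \quad \text{for } i \in U, \qquad c_i = \bar{c}_i \quad \text{for } i \in C,
\]

which, after partitioning \(L\) into blocks over \(U\) and \(C\), reduces to the sparse linear solve

\[
L_{UU}\, c_U \;=\; -\,L_{UC}\, \bar{c}_C ,
\]

done once per color channel.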
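
For the DX12 + WPF thread in item 6, the D3D11-on-12 route in the linked docs boils down to wrapping the D3D12 device with D3D11On12CreateDevice. A minimal sketch, assuming an existing D3D12 device and direct command queue (the wrapper function name is illustrative, error handling omitted):

```cpp
#include <d3d11on12.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void WrapWith11On12(ID3D12Device* device12, ID3D12CommandQueue* queue)
{
    ComPtr<ID3D11Device> device11;
    ComPtr<ID3D11DeviceContext> context11;
    IUnknown* queues[] = { queue };

    D3D11On12CreateDevice(
        device12,
        D3D11_CREATE_DEVICE_BGRA_SUPPORT,   // BGRA is handy for D2D/WPF-style interop
        nullptr, 0,                         // default feature levels
        queues, 1,                          // the D3D12 queue(s) the 11 device submits to
        0,                                  // node mask
        &device11, &context11, nullptr);

    // device11 can then create wrapped resources around D3D12 resources
    // (ID3D11On12Device::CreateWrappedResource) for interop paths like D3DImage.
}
```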
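On the Fresnel question in item 7: for reference, the Schlick approximation as used in a microfacet specular BRDF keeps \(F_0\) as a floor and is evaluated at the angle between the light (or view) direction and the half vector, not between \(R\) and \(N\):

\[
F(\theta) \;=\; F_0 + (1 - F_0)\,\bigl(1 - \cos\theta\bigr)^5,
\qquad \cos\theta = \mathbf{h}\cdot\mathbf{l},
\qquad \mathbf{h} = \frac{\mathbf{l}+\mathbf{v}}{\lVert \mathbf{l}+\mathbf{v} \rVert}.
\]

With this form the reflectance never drops below \(F_0\), rather than going to zero away from grazing angles.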
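For the distortion-correction shader in item 8, the radial model from the linked OpenCV tutorial maps normalized undistorted coordinates \((x, y)\) to distorted ones:

\[
x_d = x\,\bigl(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\bigr), \qquad
y_d = y\,\bigl(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\bigr), \qquad r^2 = x^2 + y^2,
\]

so the correction pass can evaluate this at each output pixel's normalized coordinates and sample the input image at \((x_d, y_d)\); samples that land outside the input image are exactly the black border regions mentioned in the post.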
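
For the frame-streaming design in items 9 and 10, one common pattern over plain TCP is a small fixed-size header followed by the raw pixel payload. A minimal POSIX-sockets sketch; the FrameHeader layout and function names are made up here for illustration:

```cpp
#include <sys/socket.h>
#include <sys/types.h>
#include <cstddef>
#include <cstdint>
#include <vector>

#pragma pack(push, 1)
struct FrameHeader {
    uint32_t magic;        // sanity marker, e.g. 0x46524D31 ("FRM1")
    uint32_t width;
    uint32_t height;
    uint32_t frameNumber;
    uint64_t timestampUs;
    uint32_t payloadBytes; // width * height for 8-bit grayscale
};
#pragma pack(pop)

// Loop until the whole buffer has been written to the socket.
static bool SendAll(int fd, const void* data, size_t len) {
    const char* p = static_cast<const char*>(data);
    while (len > 0) {
        ssize_t n = send(fd, p, len, 0);
        if (n <= 0) return false;
        p += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

// Send one grayscale frame: header first, then pixel bytes.
bool SendFrame(int fd, const std::vector<uint8_t>& pixels,
               uint32_t w, uint32_t h, uint32_t frameNo, uint64_t tsUs) {
    FrameHeader hdr{0x46524D31u, w, h, frameNo, tsUs,
                    static_cast<uint32_t>(pixels.size())};
    return SendAll(fd, &hdr, sizeof(hdr)) &&
           SendAll(fd, pixels.data(), pixels.size());
}
```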
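
For the data-driven chaining question in item 11, one way to handle parameters that cannot live in the script file is to let the application supply per-step overrides at run time, merged over the fixed parameters parsed from the script. A rough sketch; all type and field names are illustrative:

```cpp
#include <functional>
#include <map>
#include <string>
#include <variant>
#include <vector>

struct Image { int width = 0, height = 0; std::vector<float> pixels; };

using ParamValue = std::variant<int, float, std::vector<float>>;
using ParamMap   = std::map<std::string, ParamValue>;
using ProcessFn  = std::function<Image(const Image&, const ParamMap&)>;

struct PipelineStep {
    std::string name;        // which registered algorithm to run
    ParamMap    fixedParams; // parameters parsed from the script file
};

struct Pipeline {
    std::map<std::string, ProcessFn> registry;  // algorithm name -> function
    std::vector<PipelineStep> steps;            // parsed from the data file

    // dynamicParams lets the application override/add values per step at run
    // time (e.g. an array that depends on application state).
    Image Run(Image img,
              const std::map<std::string, ParamMap>& dynamicParams) const {
        for (const auto& step : steps) {
            ParamMap params = step.fixedParams;
            auto it = dynamicParams.find(step.name);
            if (it != dynamicParams.end())
                for (const auto& [k, v] : it->second) params[k] = v; // override
            img = registry.at(step.name)(img, params);
        }
        return img;
    }
};
```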
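On items 12 and 13 (recovering the camera from known 3D points and their detected image points), the standard formulation estimates a 3x4 projection matrix \(P\) from 3D-2D correspondences (the DLT method, see Hartley and Zisserman). Each correspondence gives two linear equations in the entries of \(P\), so a general \(P\) (11 degrees of freedom) needs at least six points; with a calibrated camera, three points suffice up to a small number of solutions (the P3P problem). With only a triangle's three vertices the combined view and projection cannot be fully separated, but a calibrated pose estimate is possible.

\[
\lambda_i \begin{pmatrix} u_i \\ v_i \\ 1 \end{pmatrix}
\;=\; P \begin{pmatrix} X_i \\ Y_i \\ Z_i \\ 1 \end{pmatrix},
\qquad P = K\,[\,R \mid t\,],
\]

and eliminating the unknown scale \(\lambda_i\) yields the two homogeneous linear equations per point used in the least-squares solve.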
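For items 14 and 15, knowing the rigid transform does help: each reference feature's new image location can be computed exactly by applying M to its 3D point and re-projecting, as the post already notes. In the special case where the tracked points are (nearly) coplanar, the induced 2D map is an exact homography with a closed form. With \(K\) the camera intrinsics, the motion \(X' = R X + t\), and the plane \(n^{\top} X = d\) in the reference camera frame:

\[
q \;\sim\; H\,p, \qquad H \;=\; K\!\left(R + \frac{t\,n^{\top}}{d}\right)\!K^{-1},
\]

so for a roughly planar object the 2D alignment follows directly from M and the camera, with no feature matching at all; for general 3D shapes the exact map depends on per-pixel depth, and an affine or homography fit to the re-projected features is only an approximation.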