Quat

Member
  • Content Count

    1035
  • Joined

  • Last visited

Community Reputation

568 Good

About Quat

  • Rank
    Contributor

Personal Information

  • Interests
    Programming
  1. Given a mesh with N vertices, suppose we know the colors for a subset of those vertices. How do I interpolate the colors at the known vertices to the remaining vertices? I've only seen one paper, "Interpolation on a Triangulated 3D Surface", which it looks like MATLAB implements in mesh_laplacian_interp. However, this paper is fairly old, and I'm wondering whether better algorithms have been developed since then.
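
     A minimal sketch of the Laplacian/harmonic approach that paper describes, assuming Eigen is available and using uniform (graph) edge weights rather than cotangent weights for brevity; the function and variable names are illustrative only:

        // Harmonic (Laplacian) interpolation: known vertex colors act as Dirichlet
        // constraints; each unknown vertex i solves deg(i)*c_i - sum_{j in N(i)} c_j = 0.
        #include <Eigen/Sparse>
        #include <utility>
        #include <vector>

        std::vector<Eigen::Vector3d> InterpolateColors(
            int n,                                            // vertex count
            const std::vector<std::pair<int, int>>& edges,    // mesh edges (i, j)
            const std::vector<bool>& known,                   // true where a color is given
            std::vector<Eigen::Vector3d> colors)              // valid where known[i] is true
        {
            // Index the unknown vertices.
            std::vector<int> toUnknown(n, -1), unknownVerts;
            for (int i = 0; i < n; ++i)
                if (!known[i]) { toUnknown[i] = (int)unknownVerts.size(); unknownVerts.push_back(i); }

            const int m = (int)unknownVerts.size();
            std::vector<double> degree(n, 0.0);
            for (const auto& e : edges) { degree[e.first] += 1.0; degree[e.second] += 1.0; }

            std::vector<Eigen::Triplet<double>> trips;
            Eigen::MatrixXd rhs = Eigen::MatrixXd::Zero(m, 3);
            for (int u = 0; u < m; ++u)
                trips.emplace_back(u, u, degree[unknownVerts[u]]);
            for (const auto& e : edges) {
                auto addHalfEdge = [&](int a, int b) {        // neighbor b's contribution to row of a
                    if (known[a]) return;
                    if (known[b]) rhs.row(toUnknown[a]) += colors[b].transpose();
                    else          trips.emplace_back(toUnknown[a], toUnknown[b], -1.0);
                };
                addHalfEdge(e.first, e.second);
                addHalfEdge(e.second, e.first);
            }

            Eigen::SparseMatrix<double> L(m, m);
            L.setFromTriplets(trips.begin(), trips.end());
            Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> solver(L);
            Eigen::MatrixXd x = solver.solve(rhs);            // m x 3 interpolated colors

            for (int u = 0; u < m; ++u) colors[unknownVerts[u]] = x.row(u).transpose();
            return colors;
        }
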
  2. Quat

    DX12 + WPF : Still no way ?

    I can't comment on DX12 since I haven't tried it, but you might be able to use Direct3D 11 on 12 (https://docs.microsoft.com/en-us/windows/desktop/direct3d12/direct3d-11-on-12). Regarding WindowsFormsHost: in my experience, this is the way to go if you can live without overlaying WPF elements on top of your 3D viewport. I've had poor performance with D3DImage interop, since you are tied to the WPF rendering/update system.
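
     For reference, a rough sketch of the D3D11On12 setup path that link describes (not tested inside a WPF host); error handling, adapter selection, and swap-chain creation are omitted:

        // Link against d3d12.lib and d3d11.lib.
        #include <d3d11on12.h>
        #include <d3d12.h>
        #include <wrl/client.h>
        using Microsoft::WRL::ComPtr;

        void CreateDevices()
        {
            ComPtr<ID3D12Device> device12;
            D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device12));

            D3D12_COMMAND_QUEUE_DESC queueDesc = {};
            queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
            ComPtr<ID3D12CommandQueue> queue;
            device12->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

            // Wrap the 12 device so existing D3D11 code keeps working on top of it.
            ComPtr<ID3D11Device> device11;
            ComPtr<ID3D11DeviceContext> context11;
            IUnknown* queues[] = { queue.Get() };
            D3D11On12CreateDevice(
                device12.Get(),
                D3D11_CREATE_DEVICE_BGRA_SUPPORT,   // flags
                nullptr, 0,                         // default feature levels
                queues, 1,                          // command queue(s) D3D11 will submit on
                0,                                  // node mask
                &device11, &context11, nullptr);
        }
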
  3. I'm trying to model a polished material with mirror-like properties (the shiny body of a new car). My specular reflections use the Fresnel-Schlick approximation, but I noticed the reflections were kind of dim. After investigating, I see why. Suppose I have a flat quad, the view ray strikes the surface normal N at a 45-degree angle, and the reflection vector R intersects a light source with light vector L. With a classical specular function, we would use a formula like dot(L,R)^m, and the above configuration would give the maximum specular contribution. This result gives me what I would expect. But with Fresnel-Schlick, we look at (1-dot(R, N))^5 = (1-.707)^5, which is a small number, thus resulting in the dim specular. Of course, this makes sense for what we intuitively expect Fresnel to do (increase reflection at glancing angles, more refraction/diffuse when looking head on), and it works well for things like water/glass, but it does not seem to work well for modeling the shiny car. However, my understanding of PBR is that everything should use Fresnel. Is there a flaw in my understanding? If it helps, I'm using Fresnel F0 = (0.04, 0.04, 0.04), which Real-Time Rendering suggests for plastic/glass.
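
     For comparison, here is Schlick's approximation as it is usually written in a microfacet specular BRDF: the angle is taken between the half vector H = normalize(L + V) and the view (or light) direction, not between R and N, and the result is blended from F0 toward white. In the 45-degree setup above, H coincides with N and dot(H, V) is about 0.707, so F does come out close to F0; a bright highlight at head-on angles comes from the microfacet distribution/normalization terms and the light intensity rather than from Fresnel itself. Vec3 below is a stand-in for whatever math types you use:

        // F = F0 + (1 - F0) * (1 - cosTheta)^5, with cosTheta = dot(H, V).
        #include <algorithm>
        #include <cmath>

        struct Vec3 { float x, y, z; };
        static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
        static Vec3  Lerp(Vec3 a, Vec3 b, float t) {
            return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
        }

        Vec3 FresnelSchlick(Vec3 f0, Vec3 h, Vec3 v)
        {
            float cosTheta = std::max(Dot(h, v), 0.0f);
            float f = std::pow(1.0f - cosTheta, 5.0f);
            return Lerp(f0, Vec3{ 1.0f, 1.0f, 1.0f }, f);   // F0 + (1 - F0) * f
        }
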
  4. I'm trying to write a shader to do distortion correction (http://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html). Suppose there is just radial distortion. I have an input image with radial distortion, and the output image will be the corrected image. So for each pixel (x,y) in the output image, I want to sample the pixel in the input image that needs to move to (x,y). Intuitively, for radial distortion correction this is going to "pull" pixels inward toward the center of the image. My question is about the boundaries: the texture coordinates are going to go out of bounds of the input image. I can use a black border color, but is this expected? Looking at the results from this post (Image lens distortion correction), it looks like the corrected image will have some radial black areas, but I just wanted to confirm.
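
     A CPU-side sketch of the per-pixel lookup the correction shader performs, using the OpenCV radial model with k1/k2 only (names and the nearest-neighbor sampling are just for illustration). Out-of-range samples return black here, which matches the black border regions in the linked post:

        #include <cmath>
        #include <cstdint>
        #include <vector>

        struct Gray8 { std::vector<uint8_t> pixels; int width, height; };

        static uint8_t SampleBorderBlack(const Gray8& img, int x, int y)
        {
            if (x < 0 || y < 0 || x >= img.width || y >= img.height) return 0; // border color
            return img.pixels[y * img.width + x];
        }

        // fx, fy, cx, cy are the camera intrinsics; k1, k2 the radial coefficients.
        Gray8 Undistort(const Gray8& input, float fx, float fy, float cx, float cy,
                        float k1, float k2)
        {
            Gray8 out{ std::vector<uint8_t>(input.pixels.size()), input.width, input.height };
            for (int y = 0; y < out.height; ++y)
            for (int x = 0; x < out.width;  ++x)
            {
                // Undistorted (output) pixel -> normalized camera coordinates.
                float xn = (x - cx) / fx;
                float yn = (y - cy) / fy;
                // Forward-distort to find where this point lives in the input image.
                float r2 = xn * xn + yn * yn;
                float scale = 1.0f + k1 * r2 + k2 * r2 * r2;
                float xs = xn * scale * fx + cx;
                float ys = yn * scale * fy + cy;
                // A shader would instead sample with a border-color sampler state.
                out.pixels[y * out.width + x] =
                    SampleBorderBlack(input, (int)std::lround(xs), (int)std::lround(ys));
            }
            return out;
        }
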
  5. Quat

    Image Processing Server

    Thanks for the replies. I did a little more research and was curious where the socket APIs fit in. Those seem to be built on top of transport protocols, or are sockets still considered too low-level? I like that sockets are a mostly portable API, since we may use Linux. HTTP sounds simple and like it could work. I found this MS REST API (https://msdn.microsoft.com/en-us/library/jj950081.aspx), and they even have an example of pushing a chunk of data to the server, which is pretty much what I need. I have a question about HTTP, though: can the server push data to a specific client, or must the client put in a request to get the image processing output? So basically, I'm leaning toward sockets (TCP/IP) or HTTP, as they seem like the simplest options for what I need to do. Would one be significantly faster than the other? Or is HTTP likely using sockets under the hood?
  6. Hello, I hope this is the right forum. A product I am working on will have a computer with a camera attached that will be taking real-time video in grayscale. The frames will need to be sent over a fast network to an "image processing server" that will have multiple GPUs. Each frame will be assigned to the next available GPU for processing. Then the output will be sent to another computer on the network. The idea is for the frames to be processed in real time for a computer vision type application. I don't have much experience with network programming, but I think this should be pretty simple? Would I just create a basic client/server program using TCP/IP or UDP to send the image byte data and a small amount of metadata (image dimensions, frame #, timestamp, etc.)? Speed is the most important thing so that we can process in real time. Any suggestions on protocols and design?
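
     For illustration, a rough sketch of "image bytes plus a small header" over a blocking TCP socket (POSIX sockets assumed; the header layout is only an example). The receiver would read the fixed-size header first and then payloadBytes of pixel data:

        #include <cstdint>
        #include <vector>
        #include <sys/socket.h>

        #pragma pack(push, 1)
        struct FrameHeader {
            uint32_t width;
            uint32_t height;
            uint32_t frameNumber;
            uint64_t timestampUs;   // microseconds since some agreed epoch
            uint32_t payloadBytes;  // width * height for 8-bit grayscale
        };
        #pragma pack(pop)

        static bool SendAll(int sock, const void* data, size_t size)
        {
            const char* p = static_cast<const char*>(data);
            while (size > 0) {
                ssize_t n = send(sock, p, size, 0);
                if (n <= 0) return false;   // connection closed or error
                p += n;
                size -= static_cast<size_t>(n);
            }
            return true;
        }

        bool SendFrame(int sock, uint32_t w, uint32_t h, uint32_t frameNo,
                       uint64_t tsUs, const std::vector<uint8_t>& gray)
        {
            FrameHeader hdr{ w, h, frameNo, tsUs, static_cast<uint32_t>(gray.size()) };
            return SendAll(sock, &hdr, sizeof(hdr)) &&
                   SendAll(sock, gray.data(), gray.size());
        }
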
  7. I have a library of image processing algorithms, and I want to create a simple data-driven scripting type system, basically to chain a sequence of image processing algorithms. Does anyone know of some open source libraries that already do this that I could look at to get some ideas? It seems mostly straightforward, but dynamic kernel parameters are bothering me. Fixed parameters could be specified in the script, and the output of one process will serve as the input to the next. However, there will be some algorithms where the parameters need to be adjusted on the fly by the application. For example, maybe an array of values that an image process depends on needs to change based on the application state. Obviously these can't be hardcoded in the data file. I can't really think of a clean way to handle this other than using some type of reflection to figure out what type of image process we are doing and what parameters it expects as input.
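
     One common pattern, sketched below with assumed names and types: each stage carries the fixed parameters from the script in a named map, and the application can register a callback that patches in the dynamic values right before the stage runs, so nothing dynamic needs to live in the data file:

        #include <cstdint>
        #include <functional>
        #include <map>
        #include <string>
        #include <variant>
        #include <vector>

        using ParamValue = std::variant<int, float, std::vector<float>>;
        using ParamMap   = std::map<std::string, ParamValue>;

        struct Image { int width = 0, height = 0; std::vector<uint8_t> pixels; };

        struct Stage {
            std::string name;                                  // e.g. "gaussian_blur"
            ParamMap params;                                   // fixed values parsed from the script
            std::function<void(ParamMap&)> dynamicParams;      // optional app-supplied override
            std::function<Image(const Image&, const ParamMap&)> run;
        };

        Image RunPipeline(Image img, std::vector<Stage>& stages)
        {
            for (Stage& s : stages) {
                ParamMap p = s.params;                   // start from the scripted defaults
                if (s.dynamicParams) s.dynamicParams(p); // let the app patch in live values
                img = s.run(img, p);                     // output feeds the next stage
            }
            return img;
        }
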
  8. I might be able to get 5.  Do you have a link for solving for this projective frame?
  9. I have the following problem. Suppose I have a 3D triangle in world coordinates, and I know the corresponding projected image coordinates of its vertices. Is it possible to find the view and projection transforms? The image points are actually found by a feature point detection algorithm, which is why I do not know the view/projection matrices. But I want to project more 3D points, which is why I need to solve for/approximate the view/projection matrices.
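
     This is the camera pose (PnP) problem. If the intrinsics/projection are known (e.g. from calibration), the view transform can be estimated from 3D-to-2D correspondences; three points is the theoretical minimum (P3P, with ambiguities), but in practice OpenCV's solvers want at least four, so more correspondences help. A hedged sketch using cv::solvePnP:

        #include <opencv2/calib3d.hpp>
        #include <opencv2/core.hpp>
        #include <vector>

        bool EstimateViewMatrix(const std::vector<cv::Point3f>& worldPts,
                                const std::vector<cv::Point2f>& imagePts,
                                const cv::Mat& cameraMatrix,   // 3x3 intrinsics
                                const cv::Mat& distCoeffs,     // may be empty
                                cv::Mat& viewOut)              // 4x4 [R|t], world -> camera
        {
            cv::Mat rvec, tvec;
            if (!cv::solvePnP(worldPts, imagePts, cameraMatrix, distCoeffs, rvec, tvec))
                return false;

            cv::Mat R;
            cv::Rodrigues(rvec, R);            // axis-angle -> 3x3 rotation

            viewOut = cv::Mat::eye(4, 4, CV_64F);
            R.copyTo(viewOut(cv::Rect(0, 0, 3, 3)));
            tvec.reshape(1, 3).copyTo(viewOut(cv::Rect(3, 0, 1, 3)));
            return true;
        }
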
  10. I have a set A of 3D points. I have the camera and projection matrix, so I can find their projected points on an image plane to get projected points {p1, ..., pn}. Suppose the set A is transformed by a rigid body transform T to a new set B of 3D points. Again, I can project these points with the same camera to get projected points {q1, ..., qn}. I am trying to do image alignment. It looks like the idea is to use least squares to find the 2D alignment transform (http://www.cs.toronto.edu/~urtasun/courses/CV/lecture06.pdf). My question is: can I use knowledge of the 3D rigid body transform T to find this 2D transform faster or immediately? In other words, given a 3D rigid body transform, can I figure out how it transforms the corresponding projected points, given a camera?
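
     Largely yes: with T and the camera known, each q_i can be computed exactly by projecting T applied to the corresponding 3D point, so no 2D search is needed. The induced 2D map is only exactly affine/projective in special cases (e.g. coplanar points), so in general you still fit the 2D transform, but that fit is a closed-form least-squares solve. A sketch of the fit with Eigen (illustrative names):

        #include <Eigen/Dense>
        #include <vector>

        // Returns a 2x3 affine matrix A such that q ~= A * [p; 1], fit in the
        // least-squares sense; needs at least 3 non-collinear correspondences.
        Eigen::Matrix<double, 2, 3> FitAffine2D(const std::vector<Eigen::Vector2d>& p,
                                                const std::vector<Eigen::Vector2d>& q)
        {
            const int n = static_cast<int>(p.size());
            Eigen::MatrixXd P(n, 3), Q(n, 2);
            for (int i = 0; i < n; ++i) {
                P.row(i) << p[i].x(), p[i].y(), 1.0;
                Q.row(i) << q[i].x(), q[i].y();
            }
            // Solve P * A^T = Q in the least-squares sense.
            Eigen::MatrixXd At = P.colPivHouseholderQr().solve(Q);   // 3x2
            return At.transpose();
        }
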
  11. Not sure if this should be moved to the math section... Suppose I have a 3D object in some reference position/orientation R. It undergoes a relatively small rigid-body transform M to reach a new position/orientation R'. Then R and R' will have different projected images I and I' relative to some camera C. I want to find the alignment transform to align I and I'. I know there are 2D algorithms for this that estimate the transform by identifying several feature pixels on the images and then fitting a 2D affine transform. The question is: does knowing M help? That is, does knowing the 3D transform help me get a more accurate image alignment, or help me get it faster? So far, I think it will help somewhat. Given the known feature pixels in image I from the reference position and their 3D points on the model, I can apply M, then project back to I' so that I know the feature pixels for R'. This saves me having to search for matching feature pixels. But is there room for more improvement?
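
     A small sketch of that prediction step, with assumed Eigen types and a column-vector convention: push the known 3D feature points through M and the camera, then map NDC to pixels to get the predicted feature locations in I':

        #include <Eigen/Dense>
        #include <vector>

        std::vector<Eigen::Vector2d> PredictFeaturePixels(
            const std::vector<Eigen::Vector3d>& featurePoints3D,  // on the model, pose R
            const Eigen::Matrix4d& M,                             // rigid transform R -> R'
            const Eigen::Matrix4d& view,
            const Eigen::Matrix4d& proj,
            double imageWidth, double imageHeight)
        {
            std::vector<Eigen::Vector2d> pixels;
            for (const auto& x : featurePoints3D) {
                Eigen::Vector4d clip = proj * view * M * x.homogeneous();
                Eigen::Vector3d ndc = clip.head<3>() / clip.w();                  // [-1, 1]
                pixels.emplace_back((ndc.x() * 0.5 + 0.5) * imageWidth,
                                    (1.0 - (ndc.y() * 0.5 + 0.5)) * imageHeight); // y down
            }
            return pixels;
        }
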
  12. //CamPos = float3(ViewMatrix._41, ViewMatrix._42, ViewMatrix._43);
      The camera position is not stored in the 4th row of the view matrix.

      Eye = CamPos - Pos.xyz;
      I think you mean Pos.xyz - CamPos, which would be the vector from the camera origin to point Pos. However, this doesn't take into consideration the orientation of the camera.

      Pos = mul(Pos, ViewMatrix);
      This does take the orientation of the camera into consideration.

      //Eye = Pos.z;
      Assigning scalar to vector?

      Out.Pos = mul(Pos, ProjMatrix);
      Out.DepthV = length(Eye);
      Length is not the same as just the z-coordinate Pos.z.
  13. I wish they would add a bindless texturing API to D3D11, as well as the "fast geometry shader" feature for cube map rendering. I think they could have done more for D3D11 to reduce draw call overhead without having to drop down to the low-level D3D12 API.
  14. Hello, I hope this is the right forum. I am new to unit testing and have the following question. I have a WPF app that uses MVVM, and I am working on unit testing the view model. My UI has a button which the view model abstracts as a command; when it is pressed, it sets the state of a few properties and posts an event. My question is whether I should make one unit test that asserts everything I expect to happen from the button press (pseudocode):

      // Arrange
      btnCommand.Execute(null); // basically the button press handler

      Assert(StateA == X)
      Assert(StateB == Y)
      Assert(StateC == Z)
      Verify event was posted

      or should I separate these out into 4 separate tests with only one assert per test?
  15. First, I don't have a lot of networking background, so sorry if this question shows my noobness. In my particular scenario, there is going to be one service per client (one-to-one). I'm sending game data from the server to the client so that the client can do some processing with the data. In theory the data could change every frame, but in practice it does not. So I'm trying to only send deltas, but I ran into a problem with my approach.

      My approach was to keep a dictionary<ID, data> on the server, so when it came time to send data over the wire, I could look up the last data I sent, check which pieces of data changed, and then only send that data. A set of bits would be flagged and also sent over so the client knew which data values were being updated. The client also keeps the data cached so it only needs the updated data.

      The problem I ran into is that the server starts up before any clients connect and starts sending the data (to nowhere). This builds the cache, so by the time a client connects, it only receives deltas (but the client never received the full object the first time around because it wasn't connected yet).

      Since the client/service is one-to-one, I could probably modify the server to not start building a cache until a client connects. However, I wondered if missed packets would be a problem (maybe our socket API automatically resends, so I don't need to worry about this situation). I'm wondering what kind of systems games use to efficiently sync up client/server data so that only deltas need to be sent.
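
     A sketch of the usual fix, with illustrative field names and dirty-bit scheme: keep the baseline per client connection rather than globally, send a full snapshot the first time an object is replicated to that client, and only advance the baseline when something is actually sent. Over TCP, delivery and ordering are handled for you, so "last sent" is a safe baseline; over UDP you would only advance it when the client acknowledges receipt:

        #include <cstdint>
        #include <map>
        #include <optional>

        struct ObjectState { float x = 0, y = 0, z = 0; uint32_t health = 0; };

        enum DirtyBits : uint8_t { DIRTY_POS = 1 << 0, DIRTY_HEALTH = 1 << 1 };

        struct DeltaPacket {
            uint32_t objectId;
            uint8_t  dirty;          // which fields follow
            ObjectState values;      // only the dirty fields are meaningful
        };

        class ClientReplicationState {
        public:
            // Returns nothing if the object is unchanged for this client.
            std::optional<DeltaPacket> BuildDelta(uint32_t id, const ObjectState& current)
            {
                auto it = lastSent_.find(id);
                uint8_t dirty = 0;
                if (it == lastSent_.end()) {
                    dirty = DIRTY_POS | DIRTY_HEALTH;          // first time: full snapshot
                } else {
                    const ObjectState& prev = it->second;
                    if (prev.x != current.x || prev.y != current.y || prev.z != current.z)
                        dirty |= DIRTY_POS;
                    if (prev.health != current.health)
                        dirty |= DIRTY_HEALTH;
                }
                if (dirty == 0) return std::nullopt;
                lastSent_[id] = current;                       // baseline advances only when we send
                return DeltaPacket{ id, dirty, current };
            }
        private:
            std::map<uint32_t, ObjectState> lastSent_;         // one of these per connected client
        };
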