Quat

Members
  • Content count
    1031

Community Reputation

568 Good

About Quat

  • Rank
    Contributor

Personal Information

  • Interests
    Programming
  1. Image Processing Server

    Thanks for the replies. I did a little more research and was curious where the socket APIs fit in. They seem to be built on top of the transport protocols, or are sockets still considered too low-level? I like that sockets are a mostly portable API, as we may use Linux. HTTP sounds simple and like it could work. I found this MS REST API https://msdn.microsoft.com/en-us/library/jj950081.aspx and they even have an example of pushing a chunk of data to the server, which is pretty much what I need. I do have a question about HTTP, though: can the server push data to a specific client, or must the client put in a request to get the image processing output? So basically, I'm leaning towards sockets (TCP/IP) or HTTP, as they seem like the simplest options for what I need to do. Would one be significantly faster than the other? Or is HTTP likely using sockets under the hood?
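    For comparison, here is a minimal sketch of the raw-socket option using the POSIX API; the address, port, and frame size are made-up placeholders, and an HTTP library would ultimately sit on top of a connection much like this one:

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <unistd.h>
        #include <cstdint>
        #include <vector>

        // Sends one length-prefixed frame over an established TCP connection;
        // returns false on any socket error.
        bool SendFrame(int sock, const std::vector<uint8_t>& frame)
        {
            uint32_t len = htonl(static_cast<uint32_t>(frame.size()));
            if (send(sock, &len, sizeof(len), 0) != static_cast<ssize_t>(sizeof(len)))
                return false;
            size_t sent = 0;
            while (sent < frame.size())
            {
                ssize_t n = send(sock, frame.data() + sent, frame.size() - sent, 0);
                if (n <= 0)
                    return false;                      // connection closed or error
                sent += static_cast<size_t>(n);
            }
            return true;
        }

        int main()
        {
            int sock = socket(AF_INET, SOCK_STREAM, 0);
            sockaddr_in addr{};
            addr.sin_family = AF_INET;
            addr.sin_port   = htons(5000);                       // hypothetical port
            inet_pton(AF_INET, "192.168.1.10", &addr.sin_addr);  // hypothetical server
            if (connect(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0)
                return 1;
            std::vector<uint8_t> frame(640 * 480, 0);            // dummy 8-bit frame
            SendFrame(sock, frame);
            close(sock);
            return 0;
        }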
  2. Hello, I hope this is the right forum. A product I am working on will have a computer with a camera attached that will be taking real-time video in grayscale. The frames will need to be sent over a fast network to an "image processing server" that will have multiple GPUs. Each frame will be assigned to the next available GPU for processing. Then the output will be sent to another computer on the network. The idea is for the frames to be processed in real time for a computer-vision-type application. I don't have that much experience with network programming, but I think this should be pretty simple? Would I just create a basic client/server program using TCP/IP or UDP to send the image byte data plus a small amount of metadata (image dimensions, frame #, timestamp, etc.)? Speed is the most important thing so that we can process in real time. Any suggestions on protocol and design?
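     To make the metadata part concrete, here is a hedged sketch of a fixed-size header sent before each frame's pixel bytes; the exact fields are an assumption based on what's listed above:

        #include <cstdint>

        // Hypothetical fixed-size header preceding each frame's pixel data.
        // Both ends must agree on byte order and packing.
        #pragma pack(push, 1)
        struct FrameHeader
        {
            uint32_t frameNumber;   // monotonically increasing frame #
            uint64_t timestampUs;   // capture time in microseconds
            uint16_t width;         // image width in pixels
            uint16_t height;        // image height in pixels
            uint32_t payloadBytes;  // width * height for 8-bit grayscale
        };
        #pragma pack(pop)

        static_assert(sizeof(FrameHeader) == 20, "header must stay packed");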
  3. I have a library of image processing algorithms, and I want to create a simple data-driven, scripting-type system, basically to chain a sequence of image processing algorithms. Does anyone know of some open source libraries that already do this that I can look at to get some ideas? It seems mostly straightforward, but dynamic kernel parameters are bothering me. Fixed parameters can be specified in the script, and the output of one process will serve as the input to the next. However, there will be some algorithms whose parameters need to be adjusted on the fly by the application. For example, maybe an array of values that an image process depends on needs to be changed based on the application state. Obviously these can't be hardcoded in the data file. I can't really think of a clean way to handle this other than using some type of reflection to figure out what type of image process we are doing and what parameters it expects as input.
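     For the dynamic-parameter part, one shape that avoids reflection is to let the script name a parameter binding and have the application resolve it at execution time. A rough sketch, with hypothetical names throughout:

        #include <map>
        #include <string>
        #include <variant>
        #include <vector>

        // A parameter is either a literal baked into the script or a named
        // binding the application resolves each time the chain runs.
        using ParamValue = std::variant<float, int, std::vector<float>>;

        struct Param
        {
            bool dynamic = false;   // true: look the value up by name at run time
            std::string name;       // binding name, e.g. "blurRadius"
            ParamValue literal;     // used when dynamic == false
        };

        // Application-side blackboard the script pulls dynamic values from.
        using Blackboard = std::map<std::string, ParamValue>;

        ParamValue Resolve(const Param& p, const Blackboard& bb)
        {
            return p.dynamic ? bb.at(p.name) : p.literal;
        }

     The data file then stores only the binding name for dynamic parameters, and the application writes fresh values into the blackboard before running the chain, so nothing ends up hardcoded in the script.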
  4. I might be able to get 5.  Do you have a link for solving for this projective frame?
  5. I have the following problem. Suppose I have a 3D triangle in world coordinates, and I know the corresponding projected image coordinates of its vertices. Is it possible to find the view and projection transforms? The image points are actually found by a feature-point detection algorithm, which is why I do not know the view/projection matrices. But I want to project more 3D points, which is why I need to solve for or approximate the view/projection matrices.
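     In case it's useful, this looks like the classic camera resectioning problem; a sketch of the direct linear transform (DLT) setup, assuming a pinhole model:

        x_i \sim P X_i, \qquad P = K\,[R \mid t] \in \mathbb{R}^{3 \times 4}

     Each 3D-2D correspondence (X_i, x_i) contributes two independent linear equations in the entries of P, which has 11 degrees of freedom since it is only defined up to scale, so at least 6 correspondences are needed to solve for P directly; the view and projection parts can then be recovered by decomposing P. A single triangle's 3 vertices under-constrain the problem.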
  6. I have a set A of 3D points. I have the camera and projection matrix, so I can project them onto the image plane to get points {p1, ..., pn}. Suppose the set A is transformed by a rigid-body transform T into a new set B of 3D points. Again, I can project these points with the same camera to get points {q1, ..., qn}. I am trying to do image alignment. It looks like the standard idea is to use least squares to find the 2D alignment transform (http://www.cs.toronto.edu/~urtasun/courses/CV/lecture06.pdf). My question is: can I use knowledge of the 3D rigid-body transform T to find this 2D transform faster, or even immediately? In other words, given a 3D rigid-body transform and a camera, can I figure out how that transform moves the corresponding projected points?
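     To make the question concrete, writing the camera as a single projection \pi, the two point sets are related by:

        p_i = \pi(A_i), \qquad q_i = \pi(T A_i)

     If I'm reading the standard multiple-view-geometry results right, the induced map p_i \mapsto q_i depends on each point's depth in general, but it collapses to a single 2D homography when the points are coplanar or when T amounts to a pure rotation about the camera center.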
  7. Not sure if this should be moved to the math section... Suppose I have a 3D object in some reference position/orientation R. It undergoes a relatively small rigid-body transform M to get to a new position/orientation R'. Then R and R' will have different projected images I and I' relative to some camera C. I want to find the alignment transform to align I and I'. I know there are 2D algorithms for this that estimate the transform by identifying several feature pixels in the images and then fitting a 2D affine transform. The question is: does knowing M help? That is, does knowing the 3D transform help me get a more accurate image alignment, or get it faster? So far, I think it helps somewhat. Given the known feature pixels in image I from the reference position and their 3D points on the model, I can apply M and then project back to I', so that I know the feature pixels for R'. This saves me having to search for matching feature pixels (see the sketch below). But is there room for more improvement?
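     A minimal sketch of that feature-transfer step, with bare-bones stand-in math types (Vec3, Mat4, and the fixed-focal-length projection are hypothetical simplifications, not a specific library):

        #include <array>
        #include <vector>

        struct Vec3 { float x, y, z; };
        struct Vec2 { float x, y; };

        // 4x4 row-major matrix applied to column vectors; for a rigid-body
        // transform the translation sits in the last column.
        struct Mat4 { std::array<float, 16> m; };

        Vec3 Transform(const Mat4& M, const Vec3& p)
        {
            return {
                M.m[0]*p.x + M.m[1]*p.y + M.m[2]*p.z  + M.m[3],
                M.m[4]*p.x + M.m[5]*p.y + M.m[6]*p.z  + M.m[7],
                M.m[8]*p.x + M.m[9]*p.y + M.m[10]*p.z + M.m[11],
            };
        }

        // Pinhole projection with focal length f; a stand-in for camera C,
        // assuming it sits at the origin looking down +z.
        Vec2 Project(const Vec3& p, float f)
        {
            return { f * p.x / p.z, f * p.y / p.z };
        }

        // Transfer known feature pixels from I to I': transform the 3D feature
        // points by M, then reproject. No 2D feature search is needed.
        std::vector<Vec2> TransferFeatures(const std::vector<Vec3>& features3D,
                                           const Mat4& M, float f)
        {
            std::vector<Vec2> out;
            out.reserve(features3D.size());
            for (const Vec3& p : features3D)
                out.push_back(Project(Transform(M, p), f));
            return out;
        }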
  8. //CamPos = float3(ViewMatrix._41, ViewMatrix._42, ViewMatrix._43);

     The camera position is not stored in the 4th row of the view matrix.

     Eye = CamPos - Pos.xyz;

     I think you mean Pos.xyz - CamPos, which would be the vector from the camera origin to the point Pos. However, this doesn't take the orientation of the camera into consideration.

     Pos = mul(Pos, ViewMatrix);

     This does take the orientation of the camera into consideration.

     //Eye = Pos.z;

     Assigning a scalar to a vector?

     Out.Pos = mul(Pos, ProjMatrix);
     Out.DepthV = length(Eye);

     length(Eye) is not the same as just the z-coordinate Pos.z.
  9. I wish they would add a bindless texturing API to D3D11, as well as the "fast geometry shader" feature for cube map rendering. I think they could have done more in D3D11 to reduce draw call overhead without making people drop down to the low-level D3D12 API.
  10. Hello, I am new to unit testing and have the following question. I have a WPF app that uses MVVM, and I am working on unit testing the view model. My UI has a button which the view model abstracts as a command; when it is pressed, it sets the state of a few properties and posts an event. My question is whether I should make one unit test that asserts everything I expect to happen from the button press (pseudocode):

      // Arrange
      // Act
      btnCommand.Execute(null); // basically the button press handler
      // Assert
      Assert(StateA == X)
      Assert(StateB == Y)
      Assert(StateC == Z)
      Verify event was posted

      Or should I separate these out into 4 separate tests, with only one assert per test?
  11. First, I don't have a lot of network background, so sorry if this question shows my noobness. In my particular scenario there is going to be one service per client (one-to-one). I'm sending game data from the server to the client so that the client can do some processing with the data. In theory the data could change every frame, but in practice it does not, so I'm trying to only send deltas. However, I ran into a problem with my approach.

      My approach was to keep a dictionary<ID, data> on the server, so when it came time to send data over the wire I could look up the last data I sent, check which pieces changed, and send only those. A set of flag bits would also be sent so the client knew which data values were being updated. The client keeps the data cached as well, so it only needs the updates.

      The problem I ran into is that the server starts up before any clients connect and starts sending data (to nowhere). This builds the cache, so by the time a client connects it only receives deltas, but the client never received the full object in the first place because it wasn't connected yet.

      Since the client/service is one-to-one, I could probably modify the server to not start building a cache until a client connects. However, I wondered if missed packets would be a problem (maybe our socket API automatically resends, so I don't need to worry about this situation). I'm wondering what kind of systems games use to efficiently sync up client/server data so that only deltas need to be sent.
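      One common pattern, sketched below under made-up names and types, is to track a baseline per connection rather than one global cache: a brand-new client starts with an empty baseline, so its first update naturally contains every object in full, and later updates degrade to deltas automatically.

        #include <cstdint>
        #include <map>
        #include <vector>

        struct ObjectState
        {
            float posX = 0, posY = 0;
            int   health = 0;
        };

        enum DirtyBits : uint8_t { DIRTY_POS = 1, DIRTY_HEALTH = 2 };

        struct Delta
        {
            uint32_t    id;
            uint8_t     dirty;   // which fields follow on the wire
            ObjectState state;   // only the dirty fields would be serialized
        };

        // Per-client view of what that client last received.
        using ClientBaseline = std::map<uint32_t, ObjectState>;

        std::vector<Delta> BuildUpdate(const std::map<uint32_t, ObjectState>& world,
                                       ClientBaseline& baseline)
        {
            std::vector<Delta> out;
            for (const auto& [id, cur] : world)
            {
                uint8_t dirty = 0;
                auto it = baseline.find(id);
                if (it == baseline.end())
                    dirty = DIRTY_POS | DIRTY_HEALTH;   // unseen object: send it whole
                else
                {
                    if (cur.posX != it->second.posX || cur.posY != it->second.posY)
                        dirty |= DIRTY_POS;
                    if (cur.health != it->second.health)
                        dirty |= DIRTY_HEALTH;
                }
                if (dirty)
                    out.push_back({id, dirty, cur});
                baseline[id] = cur;                     // advance this client's baseline
            }
            return out;
        }

      On the missed-packet worry: over TCP, retransmission is handled for you, so the baseline can advance as soon as data is sent; as far as I know, games that run deltas over UDP instead delta against the last state the client *acknowledged*, so a lost packet just means the next delta is computed from an older baseline.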
  12. I'm implementing a sprite system to draw sprites. My first strategy was as follows:

      1. Use one big dynamic vertex buffer.
      2. Sort sprites by render state/texture.
      3. For each sprite batch:
         a. Fill the dynamic vertex buffer with the next batch using glBufferSubData.
         b. Issue a draw call to draw the batch.

      My thinking was that the driver would discard the old buffer and allocate a new one for the next batch, so there would be no stalls. This is how Direct3D works with the Map/Discard/WriteOnly flags. Can I expect similar behavior on mobile OpenGL ES 2.0 (see the sketch after this list)?

      Later, I was thinking that even if the driver does discard the buffers, it still allocates a lot of buffers per frame (not sure if this is a big deal or not). So then I was thinking of the following fill strategy:

      3. For each sprite batch:
         a. Fill a system-memory vertex array with the next batch.
         b. push_back a struct BatchData = {VertexStart, VertexCount, Texture*, etc.} so I know what region of the vertex buffer corresponds to a draw call.
      4. Copy all the vertices for all the draw calls to the dynamic vertex buffer using glBufferSubData.
      5. For each batch:
         a. Draw the BatchData using its offset into the vertex buffer.

      To me the advantage of the 2nd approach is that I won't discard a whole buffer when a sprite batch is small (like 1-2 sprites), but it requires a 2nd pass. Any thoughts?
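      On the discard behavior: GLES 2.0 has no explicit map-with-discard flag, but the commonly cited equivalent is to "orphan" the buffer by re-specifying its storage before filling it, as in this sketch (function and variable names are illustrative):

        #include <GLES2/gl2.h>

        // Upload one batch into a dynamic VBO, orphaning the old storage first
        // so the driver can hand back fresh memory instead of stalling on a
        // buffer the GPU may still be reading (the rough GLES 2.0 analogue of
        // D3D11 Map with WRITE_DISCARD). glBufferSubData alone gives the
        // driver no discard hint and may block.
        void UploadBatch(GLuint vbo, GLsizeiptr capacityBytes,
                         const void* vertices, GLsizeiptr batchBytes)
        {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, capacityBytes, nullptr,
                         GL_DYNAMIC_DRAW);                            // orphan
            glBufferSubData(GL_ARRAY_BUFFER, 0, batchBytes, vertices); // fill
        }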
  13. I'm new to network programming and have to write a tool that runs on Windows and communicates with a Linux box over the network. The Linux box has a TCP/IP server set up in C++ with Boost. My tool on Windows needs to connect. For the Windows side, I am writing the tool in C# and looked at this tutorial: http://www.codeproject.com/Articles/10649/An-Introduction-to-Socket-Programming-in-NET-using

      Basically, the Linux box is going to send packets with "event data" to the Windows client at certain times. What's the best way for the client to wait for incoming data? The tutorial above uses a while loop to send/receive data over the network stream. But is just looping and continuously polling for a packet the right design?
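      For reference, the usual alternative to busy-polling is to block on the read (or use the async socket APIs). A minimal sketch of a blocking, length-prefixed receive loop, shown in POSIX-style C++ since that's what the Boost server side wraps; the same blocking-read pattern applies to .NET's NetworkStream. The 4-byte length prefix is an assumed framing convention, not something from the tutorial:

        #include <sys/socket.h>
        #include <sys/types.h>
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // Blocks until exactly len bytes arrive (or the connection drops).
        // No spinning: the thread sleeps inside recv() until data shows up.
        bool RecvAll(int sock, uint8_t* buf, size_t len)
        {
            size_t got = 0;
            while (got < len)
            {
                ssize_t n = recv(sock, buf + got, len - got, 0);
                if (n <= 0)
                    return false;   // 0 = peer closed, <0 = error
                got += static_cast<size_t>(n);
            }
            return true;
        }

        // Event loop: read a 4-byte length prefix, then that many payload
        // bytes. Assumes both ends agreed on the framing and byte order.
        void EventLoop(int sock)
        {
            for (;;)
            {
                uint32_t len = 0;
                if (!RecvAll(sock, reinterpret_cast<uint8_t*>(&len), sizeof(len)))
                    break;                          // connection closed
                std::vector<uint8_t> payload(len);
                if (!RecvAll(sock, payload.data(), len))
                    break;
                // HandleEvent(payload);            // hypothetical app handler
            }
        }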
  14. In the 3rd edition of Real-Time Rendering, the authors base the specular reflectance on roughness and the Fresnel effect:

          \frac{m+8}{8\pi} \, R_F(\alpha_h) \cos^m(\theta_h)

      It seems that for a really large m, when the normal and half vector are close to each other, the result of the above expression will be much greater than 1; for example, at m = 1000 the normalization factor alone is (1000 + 8)/(8π) ≈ 40. Won't this amplify the amount of reflected specular light? It's not making sense to me, because then won't the reflected light be greater than the incoming light? How is that possible?
  15. I think you can, for the most part. If you have fixed level sizes (arena map, race track), you can pretty much know at load time what resources you have (number of objects, materials, textures, etc.), so you can size your descriptor heap appropriately. You can then add a fixed maximum count to leave room for dynamic objects that will be inserted/removed on the fly. Descriptors don't cost much memory, so it wouldn't be a big deal to over-allocate some extra heap space. For more advanced scenarios, you can reuse heap space that you aren't using anymore. For a level editor type application, I'd imagine you could grow heaps sort of the way vectors grow.

      I didn't really follow your question. The way I would assume you would do it is to allocate the cbuffer memory, then allocate CBVs that live in a heap and reference subsets of that cbuffer memory. If an object is deleted, it would be easiest to just flag that cbuffer memory region and CBV as free so they can be reused the next time an object is created (see the free-list sketch below).
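      A minimal sketch of that free-list idea for descriptor heap slots, assuming a fixed-capacity CBV/SRV/UAV heap; the class and member names are made up:

        #include <d3d12.h>
        #include <cstdint>
        #include <vector>

        // Hands out fixed-size descriptor slots from one heap; freed slots
        // are recycled the next time an object is created.
        class DescriptorFreeList
        {
        public:
            DescriptorFreeList(ID3D12Device* device, ID3D12DescriptorHeap* heap,
                               uint32_t capacity)
                : mHeap(heap), mCapacity(capacity)
            {
                mIncrement = device->GetDescriptorHandleIncrementSize(
                    D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
            }

            // Returns a slot index, reusing a freed one when available.
            uint32_t Allocate()
            {
                if (!mFree.empty())
                {
                    uint32_t i = mFree.back();
                    mFree.pop_back();
                    return i;
                }
                // A real version would assert(mNext < mCapacity) or grow here.
                return mNext++;
            }

            void Free(uint32_t index) { mFree.push_back(index); }

            D3D12_CPU_DESCRIPTOR_HANDLE CpuHandle(uint32_t index) const
            {
                D3D12_CPU_DESCRIPTOR_HANDLE h =
                    mHeap->GetCPUDescriptorHandleForHeapStart();
                h.ptr += static_cast<SIZE_T>(index) * mIncrement;
                return h;
            }

        private:
            ID3D12DescriptorHeap* mHeap;
            uint32_t mCapacity;
            uint32_t mIncrement = 0;
            uint32_t mNext = 0;
            std::vector<uint32_t> mFree;   // recycled slot indices
        };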