
Big Muscle

Member
  • Content count

    14
  • Joined

  • Last visited

Community Reputation

135 Neutral

About Big Muscle

  • Rank
    Member
  1. Optimize overlapping rectangles

    To describe the original problem a bit more: I need to write a C++ function which receives an array of rectangles that do not overlap and look similar to picture #2:   It is simply a rectangular description of the geometry that results from the subtraction of two overlapping rectangles. The overlap can be arbitrary and the size of any rectangle can be arbitrary. I cannot influence the subtraction or the format which I receive. My task is to inflate this geometry by some amount and return it (in the same format, an array of 0-4 rectangles). The problem is that the returned (inflated) rectangles must not overlap; they may touch, but not overlap.
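    For illustration, the direction I am considering - just a sketch, where the Rect type and helper names are only illustrative and coordinates are assumed to be integer [left, top, right, bottom]:

        #include <vector>
        #include <algorithm>

        struct Rect { long left, top, right, bottom; };

        static bool Overlaps(const Rect& a, const Rect& b)
        {
            return a.left < b.right && b.left < a.right &&
                   a.top  < b.bottom && b.top  < a.bottom;
        }

        // Remove the overlap between a and b by moving ONE edge of b to the
        // border of a, picking the move that cuts away the least area of b.
        // If b is completely covered by a it degenerates and can be dropped.
        static void TrimAgainst(const Rect& a, Rect& b)
        {
            if (!Overlaps(a, b))
                return;
            const long w = b.right - b.left, h = b.bottom - b.top;
            const long cost[4] = {
                (a.right  - b.left) * h,   // move b.left   to a.right
                (b.right  - a.left) * h,   // move b.right  to a.left
                (a.bottom - b.top ) * w,   // move b.top    to a.bottom
                (b.bottom - a.top ) * w    // move b.bottom to a.top
            };
            switch (std::min_element(cost, cost + 4) - cost) {
                case 0: b.left   = a.right;  break;
                case 1: b.right  = a.left;   break;
                case 2: b.top    = a.bottom; break;
                case 3: b.bottom = a.top;    break;
            }
        }

        // Inflate every rectangle, then trim later rectangles against earlier
        // ones so the result only touches and never overlaps.
        void InflateGeometry(std::vector<Rect>& rects, long amount)
        {
            for (Rect& r : rects) {
                r.left -= amount; r.top -= amount;
                r.right += amount; r.bottom += amount;
            }
            for (size_t i = 1; i < rects.size(); ++i)
                for (size_t j = 0; j < i; ++j)
                    TrimAgainst(rects[j], rects[i]);
        }

    Trimming only ever shrinks a rectangle, so it cannot create new overlaps; whether cutting from the "later" rectangle of each pair gives exactly the shape I need is the part I am unsure about.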
  2. Optimize overlapping rectangles

    I used the A, B, C, D marks only to specify their order. If the array contains e.g. only 3 items, I cannot say whether it is ABC, BCD, ACD or ABD.
  3. Hello,   I have an array of 0-4 rectangles (each one is specified by [left, top, right, bottom] coordinates) and these rectangles can overlap in some areas. Is there an algorithm that could help reduce these overlapped areas (not by removing the intersection completely, but by removing it from only one of the rectangles)?   I can be sure that the number of rectangles is always from zero to four. The cases 0 and 1 can be ignored because there are no overlapping rectangles, but the rest are always in the following form:   [attachment=18406:rectangles.png]   The order of the rectangles is always maintained but any of them can be missing (which is the main problem here). So the array can look like e.g. ABCD or BC or AD etc. The goal is to get:   [attachment=18407:rectangles.png]   I tried several things but without success. Could someone help? Thanks!
  4. Drawing surface plot in C++ using GDI

    Weird, sometimes I can't log in to this forum properly...   Does the LGPL allow distributing without the source code? If it does not, then it is still not suitable...   However, I noticed that there are two important functions in MathGL - rotate and calcScr - which seem to be enough to correctly render the 3D plot via my own algorithm, i.e. calling calcScr on each of p0, p1, p2, p3 (mentioned above), and it seems to be much faster than using MathGL completely. Both functions just do some matrix operations and point scaling.
  5. Drawing surface plot in C++ using GDI

    Our goal was to make it as simple as possible. It does not have to be pure GDI, but we should be able to achieve it with GDI-like functions (i.e. DrawPolygon, FillPolygon etc.). I tried MathGL and it generates a nice plot, but its license (GPL) does not serve our purpose, so it is probably unusable. The current 2D (bird's-eye) view is generated very simply as:

        for (int i = 0; i < x.size() - 1; ++i)
            for (int j = 0; j < y.size() - 1; ++j)
            {
                POINT points[4];
                points[0].x = x[i];   points[0].y = y[j];
                points[1].x = x[i+1]; points[1].y = y[j];
                points[2].x = x[i+1]; points[2].y = y[j+1];
                points[3].x = x[i];   points[3].y = y[j+1];
                SetFillColorFromPalette(z[i][j]);
                Polygon(points, 4);
            }

    I was thinking about a function Project(x, y, z) which is called for each of p0, p1, p2, p3 and just transforms the coordinates to screen_x and screen_y, and the Polygon function then draws the transformed points. But my expectation was probably too simple for it to work this way. Or maybe only the implementation of the Project function was incorrect and the transformation was wrong? I don't have much experience in this, so I just tried what I remember from school - multiplying the vector (x, y, z, 1) by a rotation matrix and scaling "x" and "y" by "z".
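    For reference, something like this is what I mean by Project - a small sketch only (orthographic azimuth/elevation rotation, no perspective; all names and parameters are just illustrative):

        #include <cmath>

        struct Projected { double screenX, screenY, depth; };

        // Rotate a data point around the vertical (Z) axis by an azimuth,
        // tilt it by an elevation, and drop the depth (orthographic).
        Projected Project(double x, double y, double z,
                          double azimuthDeg, double elevationDeg)
        {
            const double PI = 3.14159265358979323846;
            const double az = azimuthDeg   * PI / 180.0;
            const double el = elevationDeg * PI / 180.0;

            const double rx = x * std::cos(az) - y * std::sin(az);
            const double ry = x * std::sin(az) + y * std::cos(az);

            Projected p;
            p.screenX = rx;                                    // horizontal on screen
            p.screenY = ry * std::sin(el) + z * std::cos(el);  // vertical on screen
            p.depth   = ry * std::cos(el) - z * std::sin(el);  // for back-to-front sorting
            return p;
        }

    screenX/screenY would still have to be scaled to pixels and the Y flipped for GDI (window coordinates grow downward), and the quads drawn back-to-front by the returned depth so nearer cells cover farther ones - that ordering is probably the part my simple attempt was missing. With elevation = 90 this degenerates to the bird's-eye view above.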
  6. Hello, I need a little help. I have to render a 3D surface plot in C++ using GDI. I have approx. 1000 points [x, y, z] and I want to render a plot from them (it could be called a height map or something like that... exactly what the Matlab "surf" function does). Speed is not a big problem here, because the plot will be rendered once and then stored to an image file, so using GDI is enough.   I am able to render a 2D plot (from a bird's-eye view) without any problem by just omitting the Z coordinate and simply drawing polygons from 4 neighbouring points. I even fill each rectangle with a different color according to its height. I get this result (which is the same as rotating the Matlab plot to the bird's-eye view):  [attachment=16122:plot2d.png]   But now I want to rotate it so that the Z coordinate is visible - just to get something like this:   I tried to create a projection matrix and multiply each point by it, but I didn't get anything usable. Could anyone help me with this? Thank you!
  7. Resizing primitives in vertex buffer

    Yeah, the vertex layout is known. Draw is the classic device->Draw. I have no problem modifying the vertex buffer. The only problem is how to compute the new coordinates correctly and quickly (the request is to add 8 pixels to each edge of the rendered primitive).
  8. Hello,   maybe I have a slightly non-standard request, as I am developing a slightly non-standard application.   My library receives an ID3D10Device1 before the content is rendered. I can get the vertex buffer from this device via device->IAGetVertexBuffers, which contains the individual vertices. I know that the topology is a triangle strip.   Now I need to enlarge the edges of the rendered primitive. The primitives are mostly rectangles (4), rectangles with a rectangular hole (10 or 22) or rectangles with rounded corners (34); the numbers in brackets are the known vertex counts for each primitive. I know I could simply do if (vertexCount == 4) ... else ... and modify the original vertices, but I would like to find a more general solution.   The attached images show 2 possibilities of what I would like to do - the black rectangles are the original ones, the red rectangles are what I want to achieve. Any primitive should just be resized so that it covers some more pixels at each edge.   I hope it is understandable :-)
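    For illustration, one vertex-count-independent option I am considering - just a sketch, where the Vtx struct stands in for the real vertex layout and the positions are assumed to be in pixel units already (otherwise the amount would have to be converted to the same space first):

        #include <vector>
        #include <algorithm>
        #include <cfloat>

        struct Vtx { float x, y; /* ...rest of the real vertex layout... */ };

        // Scale the primitive about the centre of its bounding box so that the
        // box grows by `pixels` on every side. A plain rectangle then grows by
        // exactly `pixels` per edge; holes and rounded corners are scaled
        // proportionally, which may or may not be close enough.
        void EnlargePrimitive(std::vector<Vtx>& verts, float pixels)
        {
            if (verts.empty()) return;

            float minX = FLT_MAX, minY = FLT_MAX, maxX = -FLT_MAX, maxY = -FLT_MAX;
            for (const Vtx& v : verts) {
                minX = std::min(minX, v.x); maxX = std::max(maxX, v.x);
                minY = std::min(minY, v.y); maxY = std::max(maxY, v.y);
            }

            const float cx = (minX + maxX) * 0.5f, cy = (minY + maxY) * 0.5f;
            const float w  = maxX - minX,          h  = maxY - minY;
            const float sx = (w > 0) ? (w + 2 * pixels) / w : 1.0f;
            const float sy = (h > 0) ? (h + 2 * pixels) / h : 1.0f;

            for (Vtx& v : verts) {
                v.x = cx + (v.x - cx) * sx;
                v.y = cy + (v.y - cy) * sy;
            }
        }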
  9. Yes, the old functions come back (if I remember correctly it was something like: Draw<0> is executed e.g. 500x, then the function was swapped). The weird thing is that it does not always happen. Currently I wanted to track down how often the functions are swapped, but it does not occur now :-/   However, some time ago, I was using the following snippet running as a background thread:

        while (true)
        {
            if (device->Draw != myDraw)   // pseudo-code: compare the current vtable entry with the hook
                rehook();
            Sleep(xxx);
        }

    It just keeps re-hooking the Draw function. If Sleep is not used then it works "correctly" (there is still a possible race condition, but a very rare one) - but it consumes 100% CPU. When there is a Sleep (with > 0 ms) it does not work, so the functions are swapped very often. Also, it is not something like "if (some_condition_is_true) Draw<0> else Draw<1>" - the function pointer is changed directly in the device object's virtual function table.
  10. Hello,   I have a problem which I have been trying to solve for a long time and have not found a correct solution for yet. I need to hook certain functions in Direct3D and replace them with my own implementation. Hooking itself is not the problem. The problem is that the Direct3D library uses several implementations of some functions and "randomly" switches between them. So if I hook e.g. the Draw function, it works only for a while; then the function is replaced with another implementation, so my hook is not called until I rehook that other implementation too.   After debugging the Direct3D library, I noticed it really happens. I have found functions such as D3D10Device1::Draw_<0>, D3D10Device1::Draw_<1>, D3D10Device1::Draw_<2> etc. It is much worse for D3D11Device, as there are 8 different implementations of the Draw function.   Does anybody know the technical details of why (or when) each of the <0>, <1> etc. is called? What is the difference between them? My current solution for hooks is to periodically check the Draw pointer and, if it changes, rehook it again. It works, but I don't see it as a good solution (simply because it requires additional code which can slow things down).
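    For reference, the periodic check/rehook I mean is roughly the following sketch (the vtable slot index is a placeholder that has to be taken from the d3d10_1.h vtable order, and the hook/typedef names are only illustrative):

        #include <windows.h>
        #include <d3d10_1.h>

        typedef void (STDMETHODCALLTYPE* DrawFn)(ID3D10Device1*, UINT, UINT);

        // Placeholder - NOT the real slot; look it up from the interface
        // declaration in d3d10_1.h or in the debugger.
        static const size_t kDrawVTableSlot = 0;

        static DrawFn g_originalDraw = nullptr;

        static void STDMETHODCALLTYPE MyDraw(ID3D10Device1* self,
                                             UINT vertexCount, UINT startVertex)
        {
            // ...custom work...
            if (g_originalDraw)
                g_originalDraw(self, vertexCount, startVertex);
        }

        // Re-patch the vtable entry if the runtime has swapped in another
        // Draw_<n> implementation. Called periodically from a background thread.
        void RehookIfNeeded(ID3D10Device1* device)
        {
            void** vtable = *reinterpret_cast<void***>(device);
            if (vtable[kDrawVTableSlot] == reinterpret_cast<void*>(&MyDraw))
                return;                                    // still hooked

            DWORD oldProtect;
            VirtualProtect(&vtable[kDrawVTableSlot], sizeof(void*),
                           PAGE_EXECUTE_READWRITE, &oldProtect);
            g_originalDraw = reinterpret_cast<DrawFn>(vtable[kDrawVTableSlot]);
            vtable[kDrawVTableSlot] = reinterpret_cast<void*>(&MyDraw);
            VirtualProtect(&vtable[kDrawVTableSlot], sizeof(void*),
                           oldProtect, &oldProtect);
        }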
  11. Help with shader disassembling

    So something is still wrong; sometimes the value is incorrect.
  12. Help with shader disassembling

    Yes! +2 works with the correct RGBA offset! Thanks!
  13. Help with shader disassembling

    I tried to dump that buffer and it seems the value is stored there, but not as pure alpha, rather as (1.0 - alpha). The blend state is the following, in case it matters:     Now I need to find the correct index into the constant buffer, as it is at a different position than I computed. I would expect R-G-B-A to have offsets 0-4-8-12, so the alpha value would be at offset 12. But here it seems to be at 8, so I think I have overlooked something in the shader assembly.
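    (To be precise about the offsets: assuming the standard float4 packing, slot i of the constant buffer starts at byte i * 16, so the R, G, B, A components of a colour stored there would sit at bytes i * 16 + 0, + 4, + 8 and + 12 respectively.)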
  14. Hello, I have a few short shaders in assembly code and I would like to understand what they really do. Or better: I know what they do, but I am not able to get it to work. So please, could someone help?

    I need to get the alpha value of the pixels written by the pixel shader. This is the pixel shader:

        Microsoft (R) Direct3D Shader Compiler 9.30.9200.16384
        Copyright (C) Microsoft Corporation 2002-2011. All rights reserved.
        //
        // Generated by Microsoft (R) D3D Shader Disassembler
        //
        ///
        // Note: shader requires additional functionality:
        //       Minimum-precision data types
        //
        //
        // Input signature:
        //
        // Name                 Index   Mask Register SysValue  Format   Used
        // -------------------- ----- ------ -------- -------- -------- ------
        // SV_POSITION              0   xyzw        0      POS    float
        // TEXCOORD                 0   xy          1     NONE    float   xy
        // TEXCOORD                 1   xyzw        2     NONE  min2_8f   xyzw
        //
        //
        // Output signature:
        //
        // Name                 Index   Mask Register SysValue  Format   Used
        // -------------------- ----- ------ -------- -------- -------- ------
        // SV_TARGET                0   xyzw        0   TARGET  min2_8f   xyzw
        //
        //
        // Sampler/Resource to DX9 shader sampler mappings:
        //
        // Target Sampler Source Sampler  Source Resource
        // -------------- --------------- ----------------
        // s0             s0              t0
        //
        //
        // Level9 shader bytecode:
        //
        ps_2_0
        dcl t0.xy
        dcl t1 {min2_8f}
        dcl_2d s0
        texld r0 {min2_8f}, t0, s0
        mul r0.xyz, r0 {min2_8f}, t1 {min2_8f}
        mov r0.w, t1.w {min2_8f}
        mov oC0 {min2_8f}, r0

        // approximately 4 instruction slots used (1 texture, 3 arithmetic)
        ps_4_0
        dcl_globalFlags refactoringAllowed | enableMinimumPrecision
        dcl_sampler s0, mode_default
        dcl_resource_texture2d (float,float,float,float) t0
        dcl_input_ps linear v1.xy
        dcl_input_ps linear v2.xyzw {min2_8f}
        dcl_output o0.xyzw {min2_8f}
        dcl_temps 1
        sample r0.xyzw {min2_8f}, v1.xyxx, t0.xyzw, s0
        mul r0.xyz, r0.xyzx {min2_8f as def32}, v2.xyzx {min2_8f as def32}
        mov r0.w, v2.w {min2_8f as def32}
        mov o0.xyzw {min2_8f}, r0.xyzw {def32 as min2_8f}
        ret
        // Approximately 0 instruction slots used

    If I understand correctly, the output alpha value is stored in o0.w, which is just copied from the input (v2.w, i.e. the w component of the second TEXCOORD input). And there is a vertex shader which should compute the alpha and put it on its output:

        Microsoft (R) Direct3D Shader Compiler 9.30.9200.16384
        Copyright (C) Microsoft Corporation 2002-2011. All rights reserved.
        //
        // Generated by Microsoft (R) D3D Shader Disassembler
        //
        ///
        // Note: shader requires additional functionality:
        //       Minimum-precision data types
        //
        //
        // Input signature:
        //
        // Name                 Index   Mask Register SysValue  Format   Used
        // -------------------- ----- ------ -------- -------- -------- ------
        // POSITION                 0   xy          0     NONE    float   xy
        // TEXCOORD                 0   xy          1     NONE      int   x
        //
        //
        // Output signature:
        //
        // Name                 Index   Mask Register SysValue  Format   Used
        // -------------------- ----- ------ -------- -------- -------- ------
        // SV_POSITION              0   xyzw        0      POS    float   xyzw
        // TEXCOORD                 0   xy          1     NONE    float   xy
        // TEXCOORD                 1   xyzw        2     NONE  min2_8f   xyzw
        //
        //
        // Constant buffer to DX9 shader constant mappings:
        //
        // Target Reg Buffer   Start Reg # of Regs        Data Conversion
        // ---------- ------- --------- --------- ----------------------
        // c0         cb1             0       250  ( FLT, FLT, FLT, FLT)
        // c251       cb0             0         1  ( FLT, FLT, FLT, FLT)
        //
        //
        // Runtime generated constant mappings:
        //
        // Target Reg                               Constant Description
        // ---------- --------------------------------------------------
        // c250                            Vertex Shader position offset
        //
        //
        // Level9 shader bytecode:
        //
        vs_2_0
        def c252, 0.5, 1, 0, 0
        dcl_texcoord v0
        dcl_texcoord1 v1
        mova a0.x, v1.x
        mul r0.xy, v0, c0[a0.x]
        add r0.x, r0.y, r0.x
        add oT0.x, r0.x, c0[a0.x].z
        mul r0.xy, v0, c1[a0.x]
        add r0.x, r0.y, r0.x
        add oT0.y, r0.x, c1[a0.x].z
        mov oT1 {min2_8f}, c2[a0.x]
        mad r0.xy, v0, c251.xzzw, c251.ywzw
        add oPos.xy, r0, c250
        mov oPos.zw, c252.xyxy

        // approximately 11 instruction slots used
        vs_4_0
        dcl_globalFlags refactoringAllowed | enableMinimumPrecision
        dcl_constantbuffer cb0[1], immediateIndexed
        dcl_constantbuffer cb1[250], dynamicIndexed
        dcl_input v0.xy
        dcl_input v1.x
        dcl_output_siv o0.xyzw, position
        dcl_output o1.xy
        dcl_output o2.xyzw {min2_8f}
        dcl_temps 1
        mad o0.xy, v0.xyxx, cb0[0].xzxx, cb0[0].ywyy
        mov o0.zw, l(0,0,0.500000,1.000000)
        mov r0.x, v1.x
        dp2 r0.y, v0.xyxx, cb1[r0.x + 0].xyxx
        add o1.x, r0.y, cb1[r0.x + 0].z
        iadd r0.xy, v1.xxxx, l(1, 2, 0, 0)
        dp2 r0.z, v0.xyxx, cb1[r0.x + 0].xyxx
        add o1.y, r0.z, cb1[r0.x + 0].z
        mov o2.xyzw {min2_8f}, cb1[r0.y + 0].xyzw {def32 as min2_8f}
        ret
        // Approximately 0 instruction slots used

    From this code, I can understand that the requested value is stored in cb1[r0.y + 0].w, which should just be a value in constant buffer #2, and the index into constant buffer #2 is the value held in r0.y. But I am not able to compute this offset (which should just be a value in constant buffer #1). I am probably missing something, because if I read the constant buffers with

        ID3D11Buffer* constantBuffers[2];
        This->VSGetConstantBuffers(0, 2, constantBuffers);

    the requested value is not there. constantBuffers[0] is 16 bytes long and constantBuffers[1] is 4000 bytes long, which corresponds to the data written in the shader disassembly.

    I know it is a little bit of a non-standard task, but I would really appreciate any help with finding where the requested value (the alpha component of the pixel shader output) is stored. Thanks in advance!
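    For completeness, a constant buffer is normally created without CPU read access, so reading its contents back means copying it into a staging buffer first. A minimal sketch (names are illustrative, error handling omitted):

        #include <d3d11.h>
        #include <cstring>
        #include <vector>

        // Copy a GPU buffer (e.g. one returned by VSGetConstantBuffers) into
        // CPU memory via a staging buffer.
        std::vector<unsigned char> ReadBuffer(ID3D11Device* device,
                                              ID3D11DeviceContext* context,
                                              ID3D11Buffer* source)
        {
            D3D11_BUFFER_DESC desc = {};
            source->GetDesc(&desc);

            D3D11_BUFFER_DESC stagingDesc = desc;
            stagingDesc.Usage = D3D11_USAGE_STAGING;
            stagingDesc.BindFlags = 0;
            stagingDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
            stagingDesc.MiscFlags = 0;

            ID3D11Buffer* staging = nullptr;
            device->CreateBuffer(&stagingDesc, nullptr, &staging);
            context->CopyResource(staging, source);

            std::vector<unsigned char> data(desc.ByteWidth);
            D3D11_MAPPED_SUBRESOURCE mapped = {};
            context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
            std::memcpy(data.data(), mapped.pData, desc.ByteWidth);
            context->Unmap(staging, 0);
            staging->Release();
            return data;
        }

    Given the iadd in the vs_4_0 listing (r0.y = v1.x + 2) and the 250 x 16-byte layout of cb1, the float4 selected by cb1[r0.y] should then start at byte (v1.x + 2) * 16 of the data read from constantBuffers[1], with its alpha component 12 bytes further in.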