jeremie009

Member
  • Content count: 19
  • Joined
  • Last visited

Community Reputation: 696 Good

About jeremie009

  • Rank: Member
  1. arbitrary texture coordinates

    OK, it actually compiles, but I realized I had totally forgotten how UV coordinates are mapped. I just need to divide by the width and height. Thanks.
  2. I'm trying, inside a loop, to access a texture's color using arbitrary coordinates. The texture maps properly onto my mesh, but I can't seem to sample it without going through the mesh UV coordinates.

    while (count <= size)
    {
        int x = (int)(count % w);
        int y = (int)(count / w);
        float2 uv = float2(x, y);
        float3 c = col.Sample(texturesampler, uv).rgb;
        count++;
    }

    The color output is always wrong, as if it were only sampling a single color. This is my sampler description:

    SamplerStateDescription
    {
        AddressU = TextureAddressMode.Clamp,
        AddressV = TextureAddressMode.Clamp,
        AddressW = TextureAddressMode.Clamp,
        BorderColor = new Color4(0, 0, 0, 0),
        ComparisonFunction = Comparison.Never,
        Filter = Filter.MinLinearMagMipPoint,
        MaximumLod = float.MaxValue,
        MinimumLod = 0,
        MipLodBias = 0.0f
    }

    So I'm wondering whether this can be done at all, or whether I have to use something like SampleGrad.
  3. DX11 Painting Texture

    I managed to fix my problem. The code is mostly correct.
  4. Hi,

    I'm working on a tool for doing basic texture editing with DX11; I need to edit textures while rendering them.

    What I'm doing so far is updating an array on the CPU, and just before the draw call I update the resource using Map/Unmap.

    The problem is the difference between the final texture and the color array I keep on the CPU. I manage to get pixels painted, but there is some offset issue.

    This is my code:

    var data = _device.Context.MapSubresource(Rgb.Resource, 0, MapMode.WriteDiscard, MapFlags.None);
    var buffer = (Color*)data.DataPointer;
    for (var i = 0; i < Texture.Length; i++)
    {
        var x = (int)(i % Rectangle.Width);
        var y = (int)(i / Rectangle.Width);
        buffer[y * data.RowPitch / 4 + x] = Texture[i];
    }
    _device.Context.UnmapSubresource(Rgb.Resource, 0);

    Texture is the color array stored on the CPU, Rgb is the shader resource view, and Color is the SharpDX Color struct. I'm using SharpDX and C#, by the way.

    So basically I'm having a problem with the offset: the mapped data's RowPitch doesn't match the CPU texture's pitch, and even with this code I can't get the painted pixels to line up with the mouse position. The further I move from the upper-left corner, the more apparent the offset becomes.

    Does anybody have input on how to deal with this? Should I write another struct and use a buffer to compensate for the offset? Or is updating the texture simply not the way to do it?
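    For reference, the usual way to deal with a RowPitch that differs from Width * sizeof(pixel) is to copy one row at a time instead of indexing the mapped buffer as one linear array. This is only a rough sketch, assuming the CPU array is laid out as Rectangle.Width x Rectangle.Height SharpDX Color values (Rectangle.Height and the fixed block are illustrative, not from the code above):

    var data = _device.Context.MapSubresource(Rgb.Resource, 0, MapMode.WriteDiscard, MapFlags.None);

    // One row of CPU data in bytes (SharpDX Color is 4 bytes per pixel).
    int rowBytes = Rectangle.Width * Utilities.SizeOf<Color>();

    fixed (Color* src = Texture)
    {
        for (var y = 0; y < Rectangle.Height; y++)
        {
            // Each destination row starts RowPitch bytes after the previous one,
            // which can include padding beyond rowBytes.
            Utilities.CopyMemory(
                data.DataPointer + y * data.RowPitch,
                (IntPtr)(src + y * Rectangle.Width),
                rowBytes);
        }
    }

    _device.Context.UnmapSubresource(Rgb.Resource, 0);

    If Rectangle.Width doesn't match the width the texture was actually created with, the rows will still land in the wrong place, so it's also worth double-checking the texture description against the CPU array dimensions.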
  5. Radiosity

    When you ported your radiosity to the GPU, did you use some sort of hierarchy or just brute force?
  6. Radiosity

    The albedo was the issue. I had an outer-space scene, but I never tried it. Anyway, thanks for the input.
  7. Radiosity

    I did try your code against mine just in case I was missing something, but the result is the same: the light keeps adding up instead of converging. It's fine to just do a couple of passes, but the problem arises when you need more precision and more passes. The values are supposed to average out after a few passes, but that isn't happening in my case. Did you manage to do more than 5 passes without blowing up the light? I can't with my implementation, so I have to assume it's incorrect.

    The reflection value is supposed to be the albedo color, but if your albedo is pure white, each patch reflects as much energy as it receives, which is incorrect. The form factor seems to give away too much energy, so the bounces are really strong.

    I could implement some sort of energy conservation, but I thought radiosity was more correct than other approximations.
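    To spell out why the albedo matters (just back-of-the-envelope reasoning, not tied to any particular implementation): if every patch re-emits a fraction a of the energy it receives, then after many passes the total energy behaves like the geometric series

    E * (1 + a + a^2 + a^3 + ...) = E / (1 - a)

    which only converges when a < 1. With a pure-white albedo (a = 1, or effectively above 1 if the form factor hands out too much energy), every extra pass just adds more light, which is exactly the blow-up described above.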
  8. Radiosity

    Thanks for noticing, but distance is distanceSquare.
  9. Radiosity

    Hi,

    I'm building a lightmapper for my small engine and I'm running into a bit of a problem with radiosity. I'm splitting the scene into small patches and propagating the light using a simplified version of the form factor:

    private float FormFactor(Vector3 v, float d2, Vector3 receiverNormal, Vector3 emitterNormal, float emitterArea)
    {
        return emitterArea * (-Vector3.Dot(emitterNormal, v) * Vector3.Dot(receiverNormal, v)) / (Pi * d2 + emitterArea);
    }

    The problem I'm having is with the bounced light: it never converges and keeps accumulating energy. I could stop after some number of iterations, but the code is probably incorrect since the energy never goes down.

    if (Vector3.Dot(ne, lightdir) < 0)
    {
        var form = FormFactor(lightdir, distance, nr, ne, emitter.Area);
        emittedLight += emitter.Color * form * receiver.SurfaceColor;
    }

    This is where I add the bounced light:

    lightdir is the vector from the emitter patch to the receiver.
    ne is the normalized normal of the emitter patch.
    nr is the normalized normal of the receiver patch.

    I tried scaling my scene to see if it was an energy or scaling problem, but it didn't help. The only thing that actually worked was dividing the bounced light by 4, but that seems incorrect, because in some scenes the light then converged while in others it still kept adding energy.

    So I'm wondering if there is some kind of rule I'm missing. Should I add attenuation to the bounced light, or is the form factor enough? I spent the last week trying to piece it together, but most sources on the internet didn't give me any clues on how to balance the bounce energy.

    BTW, I chose the form factor because it's easy to run on the CPU.
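    For reference, a minimal sketch of the kind of brute-force gather pass this implies, with the cosine terms clamped and the reflected energy scaled by an albedo below 1 so repeated passes can settle down. The patch fields (Position, Normal, Area, Albedo, Light, NewLight) are placeholder names, not taken from the code above:

    foreach (var receiver in patches)
    {
        var gathered = Vector3.Zero;

        foreach (var emitter in patches)
        {
            if (ReferenceEquals(emitter, receiver))
                continue;

            var delta = receiver.Position - emitter.Position;   // emitter -> receiver
            var d2 = Math.Max(delta.LengthSquared(), 1e-4f);
            var dir = delta / (float)Math.Sqrt(d2);

            // Cosine terms, clamped so back-facing pairs contribute nothing.
            var cosEmitter = Math.Max(Vector3.Dot(emitter.Normal, dir), 0f);
            var cosReceiver = Math.Max(-Vector3.Dot(receiver.Normal, dir), 0f);
            if (cosEmitter <= 0f || cosReceiver <= 0f)
                continue;

            // Same point-to-disc style form factor as above.
            var form = emitter.Area * cosEmitter * cosReceiver / ((float)Math.PI * d2 + emitter.Area);

            gathered += emitter.Light * form;
        }

        // Re-emit only the fraction the albedo allows; keeping Albedo strictly
        // below 1 is what lets the total energy converge over many passes.
        receiver.NewLight = gathered * receiver.Albedo;
    }

    // (Swap NewLight into Light before running the next pass.)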
  10. Thanks. I ditched D3DImage and decided to use HwndHost, which makes using a swap chain possible.
  11. Hi,

    I'm working on a 3D program and I'd like to have a quad view like in Maya or 3ds Max, as well as completely separate, smaller windows (like a model preview window). So far I've managed to get most of the editor working, but I'm stuck on a DX11 problem.

    Basically, each window creates its own device, and whenever I create a shader resource with one device and then use that resource with another device, the program stalls on deviceContext.Flush.

    So, what do I need to do to share a shader resource between different devices? Is it even possible? I tried changing the OptionFlags to Shared, but it doesn't change anything. I'm using WPF for the interface, so I'm using D3DImage.

    I'm pretty sure I'm missing something quite simple.

    Thanks
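    For anyone reading along, a rough sketch of the usual shared-texture route in SharpDX. The deviceA/deviceB names and the texture description are illustrative, and error handling is left out:

    // Create the texture on device A with the Shared option flag.
    var desc = new Texture2DDescription
    {
        Width = 256,
        Height = 256,
        MipLevels = 1,
        ArraySize = 1,
        Format = Format.B8G8R8A8_UNorm,
        SampleDescription = new SampleDescription(1, 0),
        Usage = ResourceUsage.Default,
        BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.Shared
    };

    using (var textureA = new Texture2D(deviceA, desc))
    {
        // Grab the DXGI shared handle from the resource created on device A.
        IntPtr sharedHandle;
        using (var dxgiResource = textureA.QueryInterface<SharpDX.DXGI.Resource>())
            sharedHandle = dxgiResource.SharedHandle;

        // Open the same surface on device B; each device gets its own Texture2D
        // object, but both refer to the same video memory.
        using (var textureB = deviceB.OpenSharedResource<Texture2D>(sharedHandle))
        using (var srvB = new ShaderResourceView(deviceB, textureB))
        {
            // ... render with srvB on device B ...
        }
    }

    D3DImage itself only accepts a Direct3D 9 surface as a back buffer, which is part of why the HwndHost + swap chain route mentioned in the previous post ended up being the simpler option.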
  12. I think the DirectX Tool Kit was developed by Shawn Hargreaves, who was on the XNA dev team. The project is open source, so you can just have a look at it and find what you want to understand. You can also extend, add to, or modify whatever features you want; you're not locked into the design.

    If you want to stay with C#, SharpDX is a good alternative. It's a thin wrapper on top of DirectX. I managed to follow Frank Luna's DirectX book, which was written for C++ users, and the code was pretty close to SharpDX.
  13. Moving from MonoGame to SharpDX/SlimDX

    This book might help you: http://www.amazon.com/Direct3D-Rendering-Cookbook-Justin-Stenning/dp/1849697108
  14. I'm trying to figure out something about this paper: http://www.cs.purdue.edu/cgvlab/papers/popescu/popescuNPI_CGA11.pdf. I'm not really sure how I would implement it in my own game engine. From what I understood, to create a single non-pinhole occlusion camera I need to project the image along different rays based on the depth value? Or do I need to distort the vertex projection so I can see the occluded parts? Also, I'm not sure, but could I use something similar to a fisheye camera?
  15. SharpDX 2.4 on WP8 and high level XNA like API

    Nice !! Downloading now.