

About jeremie009

  1. OK, it actually compiles, but I realized I had totally forgotten how UV coordinates were mapped. So I just need to divide by the height and width. Thanks.
  2. I'm trying to access a texture color inside a loop using arbitrary coordinates. My texture maps properly to a mesh, but I can't seem to sample it without using the mesh UV coordinates.

     while (count <= size)
     {
         int x = (int)(count % w);
         int y = (int)(count / w);
         float2 uv = float2(x, y);
         float3 c = col.Sample(texturesampler, uv).rgb;
         count++;
     }

     The color output is always wrong, as if it were only sampling one color. This is my sampler description:

     SamplerStateDescription
     {
         AddressU = TextureAddressMode.Clamp,
         AddressV = TextureAddressMode.Clamp,
         AddressW = TextureAddressMode.Clamp,
         BorderColor = new Color4(0, 0, 0, 0),
         ComparisonFunction = Comparison.Never,
         Filter = Filter.MinLinearMagMipPoint,
         MaximumLod = float.MaxValue,
         MinimumLod = 0,
         MipLodBias = 0.0f
     }

     So I'm wondering whether this can be done, or whether I have to use something like SampleGrad.
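     For context, `Sample` expects normalized UV coordinates in [0, 1], so integer pixel coordinates have to be divided by the texture dimensions (which matches the fix described in post 1). A minimal sketch of that index-to-UV mapping in Python, with hypothetical values for the texture size:

     ```python
     # Sketch: HLSL's Sample expects normalized UVs in [0, 1].
     # Passing raw integer pixel coordinates with Clamp addressing
     # means every coordinate >= 1 clamps to the edge texel, which
     # produces the "always sampling one color" symptom.

     w, h = 4, 2          # hypothetical texture size
     size = w * h

     def index_to_uv(count, w, h):
         x = count % w
         y = count // w
         # normalize to [0, 1); the +0.5 targets the texel center
         return ((x + 0.5) / w, (y + 0.5) / h)

     uvs = [index_to_uv(i, w, h) for i in range(size)]
     ```

     With these hypothetical dimensions, every generated UV stays strictly inside [0, 1), so Clamp addressing never collapses distinct texels onto the edge.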
  3. DX11
     I managed to fix my problem. The code is mostly correct.
  4. Hi,
     I'm working on a tool to do basic texture editing using DX11. I need to edit textures while rendering them.
     What I'm doing so far is updating an array on the CPU, and just before the draw call I update my resource using Map/Unmap. The problem is the difference between the final texture and the color array I keep on the CPU: I manage to get the pixels painted, but there is some offset issue.

     This is my code:

     var data = _device.Context.MapSubresource(Rgb.Resource, 0, MapMode.WriteDiscard, MapFlags.None);
     var buffer = (Color*)data.DataPointer;
     for (var i = 0; i < Texture.Length; i++)
     {
         var x = (int)(i % Rectangle.Width);
         var y = (int)(i / Rectangle.Width);
         buffer[y * data.RowPitch / 4 + x] = Texture[i];
     }
     _device.Context.UnmapSubresource(Rgb.Resource, 0);

     Texture is the color array stored on the CPU, and Rgb is the shader resource view. Color is the SharpDX Color struct. I'm using SharpDX and C#, by the way.

     So basically I'm having a problem with the offset: the data RowPitch doesn't match the CPU texture pitch, and even with this code I can't get them to match the mouse position. The further I move from the upper-left corner, the more apparent the offset is.

     Does anybody have input on how to deal with this? Should I write another struct and use a buffer to compensate for the offset? Or is updating the texture not the way to do it?
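     The symptom described (the offset growing the further you get from the upper-left corner) is the classic row-pitch mismatch: the driver may pad each GPU row, so the stride of a mapped row is RowPitch / bytes-per-pixel elements, not the texture width. A minimal Python sketch of that index remapping, using hypothetical width and pitch values:

     ```python
     # Sketch of the CPU-to-mapped-GPU index remapping when the GPU
     # row pitch (in bytes) is larger than width * bytes_per_pixel.
     # width and row_pitch are hypothetical example values.

     width = 100           # texture width in pixels
     bytes_per_pixel = 4   # e.g. 32-bit RGBA
     row_pitch = 512       # driver-padded row size in bytes (>= 400)

     def gpu_index(i, width, row_pitch, bytes_per_pixel):
         x = i % width
         y = i // width
         # stride of one mapped GPU row, in pixels, from the pitch
         stride = row_pitch // bytes_per_pixel
         return y * stride + x

     # Using width (100) instead of stride (128) as the row step
     # would shift each successive row by 28 pixels, an error that
     # accumulates the further the pixel is from the upper-left.
     ```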
  5. When you ported your radiosity to the GPU, did you use some sort of hierarchy, or just brute force?
  6. The albedo was the issue. I had an outer space scene but I never tried it. Anyway, thanks for the input.
  7. I did try your code against mine in case I was missing something, but the result is the same: the light keeps adding up instead of converging. It's fine to just do a couple of passes, but the problem arises when you need more precision and more passes. The values are supposed to average out after a few passes, but that is not the case for me. Did you manage to do more than 5 passes without blowing up the light? I can't in my implementation, so I have to assume it is incorrect.
     The reflection value is supposed to be the albedo color, but if your albedo is pure white, it'll just reflect as much energy as it receives, which is incorrect. The form factor seems to give away too much energy, so the bounces are really strong.
     I could implement some sort of energy conservation, but I thought radiosity was more correct than other approximations.
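     The point about energy can be made concrete with a geometric series: each pass multiplies the transported energy by albedo times form factor, so the summed bounces converge to E / (1 - a*F) only when a*F < 1. A minimal sketch with hypothetical patch values:

     ```python
     # Sketch: bounce energy as a geometric series between two patches.
     # Each pass scales the transported energy by (albedo * F). The sum
     # E * (1 + aF + (aF)^2 + ...) converges to E / (1 - a*F) only when
     # a*F < 1; if the product reaches 1 (e.g. white albedo combined
     # with an overestimated form factor), every pass adds energy and
     # the solution never settles.

     emission = 1.0   # hypothetical emitter energy

     def total_after_passes(albedo, F, n_passes):
         total, bounce = 0.0, emission
         for _ in range(n_passes):
             total += bounce
             bounce *= albedo * F
         return total

     ok = total_after_passes(0.5, 0.4, 50)        # a*F = 0.2, converges
     limit = emission / (1 - 0.5 * 0.4)           # closed form: 1.25
     blown_up = total_after_passes(1.0, 1.0, 50)  # a*F = 1: +1 per pass
     ```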
  8. Thanks for noticing, but distance is the distance squared.
  9. Hi,
     I'm building a lightmapper for my small engine and I'm running into a bit of a problem using radiosity. I'm splitting my scene into small patches and propagating the light using a simplified version of the form factor:

     private float FormFactor(Vector3 v, float d2, Vector3 receiverNormal, Vector3 emitterNormal, float emitterArea)
     {
         return emitterArea * (-Vector3.Dot(emitterNormal, v) * Vector3.Dot(receiverNormal, v)) / (Pi * d2 + emitterArea);
     }

     The problem I'm having is with the bounce light: it never converges and keeps adding up energy. I could stop after some iterations, but the code is probably incorrect, since the energy never goes down.

     if (Vector3.Dot(ne, lightdir) < 0)
     {
         var form = FormFactor(lightdir, distance, nr, ne, emitter.Area);
         emittedLight += emitter.Color * form * receiver.SurfaceColor;
     }

     This is the function where I add the bounce light. lightdir is the vector from the emitter patch to the receiver, ne is the normalized normal of the emitter patch, and nr is the normalized normal of the receiver patch.
     I tried scaling my scene to see if it was an energy or scaling problem, but it didn't work. The only thing that actually worked was dividing the bounce light by 4, but that seems incorrect, because in some scenes the light ended up converging and in others it just kept adding more energy.
     So I'm wondering, is there some kind of rule I'm missing? Should I add attenuation to the bounce light, or is the form factor enough? I spent the last week trying to piece it together, but most sources on the internet didn't give me clues on how to balance the bounce energy.
     BTW, I chose this form factor because it's easy to run on the CPU.
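     For reference, the form-factor approximation in the snippet can be checked numerically. A minimal Python translation, where the patch configuration (two directly facing patches one unit apart, with a small emitter area) is a hypothetical test case:

     ```python
     import math

     def dot(a, b):
         return sum(x * y for x, y in zip(a, b))

     def form_factor(v, d2, receiver_normal, emitter_normal, emitter_area):
         # Point-to-patch approximation from the post:
         # A_e * (-(n_e . v) * (n_r . v)) / (pi * d^2 + A_e)
         return (emitter_area
                 * (-dot(emitter_normal, v) * dot(receiver_normal, v))
                 / (math.pi * d2 + emitter_area))

     # Hypothetical test case: emitter at the origin facing +z,
     # receiver one unit above it facing -z (patches face each other).
     v = (0.0, 0.0, 1.0)    # normalized direction, emitter -> receiver
     d2 = 1.0               # squared distance
     ne = (0.0, 0.0, 1.0)
     nr = (0.0, 0.0, -1.0)
     f = form_factor(v, d2, nr, ne, emitter_area=0.01)
     # f is positive and well below 1 for a small patch, as expected
     ```

     Summing this factor over all emitter patches visible from a receiver gives a quick sanity check: if the sum can exceed 1 for your patch sizes and distances, the receiver gathers more energy than arrives, which matches the blow-up described.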
  10. Thanks. I ditched D3DImage and decided to use HwndHost, which makes using a swap chain possible.
  11. Hi,
      I'm working on a 3D program and I wish to have quad windows like in Maya or 3ds Max, or completely separate smaller windows (like a model preview window). So far I've managed to get most of this editor working, but I'm stuck on a problem with DX11.
      Basically, each window creates its own device, and whenever I create a shader resource with one device, if I use that resource with another device the program stalls on deviceContext.Flush.
      So, what do I need to do to share a shader resource between different devices? Is it possible? I tried changing the OptionFlags to Shared, but it doesn't change anything. I used WPF for the interface, so I'm using D3DImage.
      I'm pretty sure I'm missing something quite simple.
      Thanks
  12. I think DirectX Tool Kit was developed by Shawn Hargreaves, who was on the XNA dev team. The project is open source, so you can just have a look at it and you'll find what you want to understand. You can also extend, add to, or modify the features you want; you are not locked into the design.
      If you want to stay in C#, SharpDX is a good alternative. It's a thin wrapper on top of DirectX. I managed to follow Frank Luna's DirectX book, which was written for C++ users, and the code was pretty close to SharpDX.
  13. This book might help you: http://www.amazon.com/Direct3D-Rendering-Cookbook-Justin-Stenning/dp/1849697108
  14. I'm trying to figure out something about this paper: http://www.cs.purdue.edu/cgvlab/papers/popescu/popescuNPI_CGA11.pdf. I'm not really sure how I would implement it in my own game engine. From what I understood, to create a single non-pinhole occlusion camera, do I need to project the image along different rays based on the depth value? Or do I need to distort the vertex projection so I can see the occluded parts? Also, I'm not sure, but can I use something similar to a fisheye camera?
  15. Nice !! Downloading now.