
user88

Members
  • Content count

    185
  • Joined

  • Last visited

Community Reputation

304 Neutral

About user88

  • Rank
    Member
  1. Considering that since Windows Vista both Direct3D and GDI+ draw through the DirectX Runtime, it seems that the 3D rendering thread overloads the DirectX Runtime and GDI+ is simply unable to do its work in time. See the Direct3D / GDI+ / DirectX Runtime structure here: https://msdn.microsoft.com/ru-ru/library/windows/desktop/ee417756(v=vs.85).aspx#background
  2. Problem: the user interface (based on WinForms) isn't responsive in a .NET desktop application with its own real-time 3D renderer (not full-screen) while it renders heavy frames in a separate thread. "Heavy frames" means that the overall FPS is less than 5. The reason is unknown, but it is not a case of the main application thread waiting too long on the 3D rendering thread for synchronization. This issue exists even on multi-core CPUs.   Question: does anybody know some tricks for configuring multi-threading in .NET? Simply assigning BelowNormal priority to the rendering thread doesn't make the situation better. Any other ideas on how to solve this problem?
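For reference, the threading setup described above can be sketched as follows. This is a minimal stand-in, not the actual application code: `RenderLoop`, the frame counter, and the `Thread.Sleep(0)` yield (standing in for a real present/draw call) are all hypothetical.

```csharp
using System;
using System.Threading;

static class RenderLoop
{
    // Stand-in for per-frame work; a real renderer would issue D3D draw calls here.
    public static int FramesRendered;

    public static Thread Start(Func<bool> keepRunning)
    {
        var thread = new Thread(() =>
        {
            while (keepRunning())
            {
                FramesRendered++;   // render one heavy frame here
                Thread.Sleep(0);    // explicitly yield the rest of the time slice
            }
        });
        thread.IsBackground = true;                   // don't block process exit
        thread.Priority = ThreadPriority.BelowNormal; // hint the scheduler to favor the UI thread
        thread.Start();
        return thread;
    }
}
```

As the post notes, lowering the priority alone may not be enough; the explicit yield each frame is the other lever this sketch shows.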
  3. Hi,   I would like to improve the photorealism of a real-time 3D engine that is used for room interior visualization.   Question: which shading algorithms are most suitable (in your opinion) for such purposes?   I tried to google a little, but all the results were 10 years old..
  4. Hi,   this MSDN page tells me that a RWTexture2DArray object can be modified from a pixel shader.   How should I bind this object to the resource in application code?   There is no ID3D11DeviceContext::PSSetUnorderedAccessViews method as an alternative to ID3D11DeviceContext::CSSetUnorderedAccessViews for compute shaders..
  5. Okay, with the implementation of gamma correction as well as the sRGB backbuffer format, all is clear for me now.   Great post, Chris.     Hodgman, your post was also very helpful, thanks. One point is still not clear for me, about the "mathematically linear" and "perceptually linear" distinction:     - Why does the linear gradient that I see on screen look non-linear with gamma correction (the second case in my first post)? I compared it visually with a linear gradient that I made in Photoshop, with the same width in pixels, and on screen they look different. Is there some Photoshop trick?
  6. No, the display/monitor itself does the pow(value, 2.2), in the display hardware. If you do the pow(value, 2.2) yourself, then you end up seeing pow(pow(value, 2.2), 2.2) after the display emits the picture.    I meant 1/2.2, not 2.2 — I have already corrected my previous post. Sorry for that..
  7.   Hi Jason,   I have read this article (thank you for the link anyway) and understand the mathematical reasoning behind the gamma correction process. All is clear for me with sampling sRGB images and correcting them for further linear calculations. All intermediate calculations should be output to buffers without any correction; that is also clear for me.   The misunderstanding is actually with the sRGB backbuffer. I thought that an sRGB backbuffer is like a JPEG in the sRGB color space, meaning that all values in the sRGB backbuffer are already gamma corrected (pow(value, 1/2.2)). If so, then final color values should be output with pow(value, 1/2.2) correction. But no, it seems the sRGB backbuffer is the opposite of what I thought. Furthermore, the final color value should be output with pow(value, 1/2.2) correction for non-sRGB backbuffers, right?
  8. Hello Ashaman73, as I understood it, you are talking about the sRGB color space and HDR, but my question is about the advantage of an sRGB backbuffer + pow(u, 2.2) over a non-sRGB format + direct output.   What I can guess from the comparison image (the last one in my first post) is that the advantage is in the precision of the gamma curve applied to the final image: with an sRGB backbuffer + pow(u, 2.2) it is more precise. Right? Are there any other advantages?
  9. Hi,   after reading a couple of resources on the web about gamma correction I still feel confused.   In my experiment the pixel shader simply outputs a linear gradient to the backbuffer.
     - First case: the backbuffer format is not sRGB; the gradient value is output without any modifications: [attachment=22107:ng.jpg]
     - Second case: the backbuffer format is sRGB; the gradient value is output without any modifications: [attachment=22104:g1.jpg]
     - Third case: the backbuffer format is sRGB; the gradient value is output with a correction of pow(u, 1/2.2): [attachment=22105:g1div2.2.jpg]
     - Fourth case: the backbuffer format is sRGB; the gradient value is output with a correction of pow(u, 2.2): [attachment=22106:g2.2.jpg]
     As you can see, the first and last results are almost the same. So my question is: why do we need sRGB backbuffers plus a modified final output in the pixel shader if we can simply use a non-sRGB texture? The result is almost the same: [attachment=22108:pixcmp.jpg]
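A quick numeric check shows why the first and fourth cases above match. Using the pow-2.2 approximation of the sRGB curve (not the exact piecewise sRGB formula), an sRGB backbuffer applies roughly pow(x, 1/2.2) on write, which exactly cancels an explicit pow(u, 2.2) in the shader:

```csharp
using System;

static class Gamma
{
    // Approximate sRGB encode that the hardware applies when writing to an sRGB backbuffer.
    public static double Encode(double linear) => Math.Pow(linear, 1.0 / 2.2);

    // First case: non-sRGB backbuffer, gradient value written directly.
    public static double Case1(double u) => u;

    // Fourth case: sRGB backbuffer, shader outputs pow(u, 2.2), hardware then encodes it.
    public static double Case4(double u) => Encode(Math.Pow(u, 2.2));
}
```

Mathematically (u^2.2)^(1/2.2) = u, so both cases store the same value; the visible difference in the comparison image comes down to precision, as discussed in the replies above.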
  10. After a short search I found a discussion of the same problem: http://social.msdn.microsoft.com/Forums/en-US/51823ee4-018c-44d9-a5ef-7c99e64979e5/vc-2005-express-want-to-build-dxsdk-sample-usp10h-missing?forum=gametechnologiesgeneral   I hope that will help you.
  11. Thank you for the reply, Tispe.     I will take your post into account. Before I start doing anything, I want to hear people's points of view on the topic question, so please tell me if somebody has a different point of view.     Nice advice! I didn't know about such a feature.     Only direct lighting calculations are there..
  12. Hi,   I want to expand the maximum-lights capability in my 3D engine. It uses DX9 forward rendering. The lights are passed to the shader as an array of corresponding shader structs located in register memory, so register memory is the restriction on the maximum number of lights.   Could anybody tell me whether the performance will be good enough if I hold the lights array in a volume texture instead? I mean in comparison to register memory..   Thanks for your help!
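The CPU side of the texture idea could be sketched like this. The `Light` struct and the two-texels-per-light layout are hypothetical, and no D3D9 calls are shown — this only illustrates packing the light array into float texel data that the shader would fetch back with texture lookups instead of reading constant registers:

```csharp
using System;

struct Light
{
    public float X, Y, Z;   // position
    public float R, G, B;   // color
    public float Range;
}

static class LightPacker
{
    // Packs each light into two RGBA float texels:
    // texel 0 = (X, Y, Z, Range), texel 1 = (R, G, B, unused).
    public static float[] Pack(Light[] lights)
    {
        var data = new float[lights.Length * 8];
        for (int i = 0; i < lights.Length; i++)
        {
            int o = i * 8;
            data[o + 0] = lights[i].X;
            data[o + 1] = lights[i].Y;
            data[o + 2] = lights[i].Z;
            data[o + 3] = lights[i].Range;
            data[o + 4] = lights[i].R;
            data[o + 5] = lights[i].G;
            data[o + 6] = lights[i].B;
            data[o + 7] = 0f;   // padding
        }
        return data;
    }
}
```

Whether per-pixel texture fetches beat constant registers on DX9-class hardware is exactly the open question of the post; this sketch only fixes the data layout.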
  13. I guess you use C#. Here is a C# class representing a triangle mesh; you can use the picking functionality from this code:

/// <summary>
/// 3D geometry that consists of indices, vertices, normals, and texture coordinates.
/// </summary>
public sealed class TriangleMesh
{
    private static uint s_internalIdCounter;

    private BoundingSphere m_boundingSphere;
    private BoundingBox m_boundingBox;

    /// <summary>
    /// Gets an internal ID whose value is unique for each instance.
    /// </summary>
    public uint InternalID { get; private set; }

    /// <summary>
    /// Gets an array of the indices of the geometry.
    /// </summary>
    public ushort[] Indices { get; private set; }

    /// <summary>
    /// Gets an array of the vertices of the geometry.
    /// </summary>
    public Float3[] Vertices { get; private set; }

    /// <summary>
    /// Gets an array of the normals of the geometry. Value can be <c>null</c>.
    /// </summary>
    public Float3[] Normals { get; private set; }

    /// <summary>
    /// Gets an array of the texture coordinates of the geometry. Value can be <c>null</c>.
    /// </summary>
    public Float2[] TexCoords { get; private set; }

    public TriangleMesh(ushort[] indices, Float3[] vertices, Float3[] normals, Float2[] texCoords)
    {
        if (indices.Length < 3)
            throw new ArgumentException("The length of the indices array shouldn't be less than 3 elements.", "indices");
        if (vertices.Length < 3)
            throw new ArgumentException("The length of the vertices array shouldn't be less than 3 elements.", "vertices");

        Indices = indices;
        Vertices = vertices;
        Normals = normals;
        TexCoords = texCoords;

        unchecked { s_internalIdCounter++; }
        InternalID = s_internalIdCounter;

        CalculateBounds();
    }

    /// <summary>
    /// Calculates a bounding box and a bounding sphere of the geometry.
    /// </summary>
    private void CalculateBounds()
    {
        m_boundingBox = BoundingBox.FromPoints(Vertices);
        m_boundingSphere = BoundingSphere.FromBox(m_boundingBox);
    }

    /// <summary>
    /// Gets the transformed bounding sphere of the geometry.
    /// </summary>
    /// <param name="transform">Transformation matrix.</param>
    /// <returns>Transformed bounding sphere.</returns>
    public BoundingSphere CalculateBoundingSphere(Float4x4 transform)
    {
        Float3 center = Float3.TransformCoordinate(m_boundingSphere.Center, transform);
        return new BoundingSphere(center, m_boundingSphere.Radius);
    }

    /// <summary>
    /// Gets the transformed bounding box of the geometry.
    /// </summary>
    /// <param name="transform">Transformation matrix.</param>
    /// <returns>Transformed bounding box.</returns>
    public BoundingBox CalculateBoundingBox(Float4x4 transform)
    {
        Float3 min = Float3.TransformCoordinate(m_boundingBox.Minimum, transform);
        Float3 max = Float3.TransformCoordinate(m_boundingBox.Maximum, transform);
        return new BoundingBox(min, max);
    }

    /// <summary>
    /// Determines whether a ray intersects the geometry.
    /// </summary>
    /// <param name="transform">Transformation matrix of the geometry.</param>
    /// <param name="ray">The ray which will be tested for intersection.</param>
    /// <param name="distance">When the method completes, contains the distance at which the ray intersected the geometry.</param>
    /// <param name="faceIndex">When the method completes, contains the index of the face which the ray intersects.</param>
    /// <returns><c>true</c> if the ray intersects the geometry; otherwise, <c>false</c>.</returns>
    public bool Intersects(Float4x4 transform, Ray ray, out float distance, out int faceIndex)
    {
        float u, v;
        return Intersects(transform, ray, out distance, out faceIndex, out u, out v);
    }

    /// <summary>
    /// Determines whether a ray intersects the geometry.
    /// </summary>
    /// <param name="transform">Transformation matrix of the geometry.</param>
    /// <param name="ray">The ray which will be tested for intersection.</param>
    /// <param name="distance">When the method completes, contains the distance at which the ray intersected the geometry.</param>
    /// <param name="faceIndex">When the method completes, contains the index of the face which the ray intersects.</param>
    /// <param name="u">Barycentric U of the face which the ray intersects.</param>
    /// <param name="v">Barycentric V of the face which the ray intersects.</param>
    /// <returns><c>true</c> if the ray intersects the geometry; otherwise, <c>false</c>.</returns>
    public bool Intersects(Float4x4 transform, Ray ray, out float distance, out int faceIndex, out float u, out float v)
    {
        // Create the bounding sphere before inverting the transform matrix.
        BoundingSphere bs = CalculateBoundingSphere(transform);

        // Convert the ray to model space.
        Float3 near = ray.Position;
        Float3 dir = ray.Direction;
        transform.Invert();
        Float3 tmp = near;
        Float3.TransformCoordinate(ref tmp, ref transform, out near);
        tmp = dir;
        Float3.TransformNormal(ref tmp, ref transform, out dir);
        Ray modelSpaceRay = new Ray(near, dir);

        // Test the bounding sphere first.
        if (Ray.Intersects(ray, bs, out distance))
        {
            if (Indices != null && Indices.Length > 0)
            {
                // Intersect indexed geometry.
                for (int i = 0; i < Indices.Length; i += 3)
                {
                    Float3 vertex1 = Vertices[Indices[i]];
                    Float3 vertex2 = Vertices[Indices[i + 1]];
                    Float3 vertex3 = Vertices[Indices[i + 2]];
                    if (Ray.Intersects(modelSpaceRay, vertex1, vertex2, vertex3, out distance, out u, out v))
                    {
                        faceIndex = i / 3;
                        return true;
                    }
                }
            }
            else
            {
                // Intersect non-indexed geometry.
                for (int i = 0; i < Vertices.Length; i += 3)
                {
                    Float3 vertex1 = Vertices[i];
                    Float3 vertex2 = Vertices[i + 1];
                    Float3 vertex3 = Vertices[i + 2];
                    if (Ray.Intersects(modelSpaceRay, vertex1, vertex2, vertex3, out distance, out u, out v))
                    {
                        faceIndex = i / 3;
                        return true;
                    }
                }
            }
        }

        faceIndex = -1;
        distance = u = v = -1f;
        return false;
    }

    /// <summary>
    /// Determines whether a ray intersects the geometry.
    /// </summary>
    /// <param name="transform">Transformation matrix of the geometry.</param>
    /// <param name="ray">The ray which will be tested for intersection.</param>
    /// <param name="distance">When the method completes, contains the distance of the closest intersection.</param>
    /// <param name="faceIndex">When the method completes, contains the index of the closest intersected face.</param>
    /// <param name="hits">All intersection hits.</param>
    /// <returns><c>true</c> if the ray intersects the geometry; otherwise, <c>false</c>.</returns>
    public bool Intersects(Float4x4 transform, Ray ray, out float distance, out int faceIndex, out IntersectInformation[] hits)
    {
        var hitsList = new List<IntersectInformation>();
        float curDistance;
        distance = float.MaxValue;
        faceIndex = -1;

        // Create the bounding sphere before inverting the transform matrix.
        BoundingSphere bs = CalculateBoundingSphere(transform);

        // Convert the ray to model space.
        Float3 near = ray.Position;
        Float3 dir = ray.Direction;
        transform.Invert();
        Float3 tmp = near;
        Float3.TransformCoordinate(ref tmp, ref transform, out near);
        tmp = dir;
        Float3.TransformNormal(ref tmp, ref transform, out dir);
        Ray modelSpaceRay = new Ray(near, dir);

        // Test the bounding sphere first.
        if (Ray.Intersects(ray, bs, out curDistance))
        {
            if (Indices != null && Indices.Length > 0)
            {
                // Intersect indexed geometry.
                for (int curIndex = 0; curIndex < Indices.Length; curIndex += 3)
                {
                    Float3 vertex1 = Vertices[Indices[curIndex]];
                    Float3 vertex2 = Vertices[Indices[curIndex + 1]];
                    Float3 vertex3 = Vertices[Indices[curIndex + 2]];
                    float u, v;
                    if (Ray.Intersects(modelSpaceRay, vertex1, vertex2, vertex3, out curDistance, out u, out v))
                    {
                        if (curDistance < distance)
                        {
                            distance = curDistance;
                            faceIndex = curIndex / 3;
                        }
                        var hit = new IntersectInformation { Distance = curDistance, FaceIndex = curIndex / 3, U = u, V = v };
                        hitsList.Add(hit);
                    }
                }
            }
            else
            {
                // Intersect non-indexed geometry.
                for (int curIndex = 0; curIndex < Vertices.Length; curIndex += 3)
                {
                    Float3 vertex1 = Vertices[curIndex];
                    Float3 vertex2 = Vertices[curIndex + 1];
                    Float3 vertex3 = Vertices[curIndex + 2];
                    float u, v;
                    if (Ray.Intersects(modelSpaceRay, vertex1, vertex2, vertex3, out curDistance, out u, out v))
                    {
                        if (curDistance < distance)
                        {
                            distance = curDistance;
                            faceIndex = curIndex / 3;
                        }
                        var hit = new IntersectInformation { Distance = curDistance, FaceIndex = curIndex / 3, U = u, V = v };
                        hitsList.Add(hit);
                    }
                }
            }
        }

        hits = hitsList.ToArray();
        return hits.Length > 0;
    }
}

Also, you have to build the pick ray from the picked pixel on screen rather than from the near/far points. Have a look at this code:

void CalculatePickRay(int x, int y, int width, int height, float near,
                      Float4x4 view, Float4x4 projection,
                      out Float3 pickRayDir, out Float3 pickRayOrig)
{
    pickRayDir.X = (((2.0f * x) / width) - 1);
    pickRayDir.Y = -(((2.0f * y) / height) - 1);
    pickRayDir.Z = 1.0f;

    projection.M41 = 0;
    projection.M42 = 0;
    projection.M43 = 0;
    projection.M44 = 1;
    projection.Invert();

    Float3 tmp = pickRayDir;
    Float3.TransformNormal(ref tmp, ref projection, out pickRayDir);

    // Get the inverse view matrix.
    view.Invert();
    tmp = pickRayDir;
    Float3.TransformNormal(ref tmp, ref view, out pickRayDir);
    pickRayDir.Normalize();

    pickRayOrig.X = view.M41;
    pickRayOrig.Y = view.M42;
    pickRayOrig.Z = view.M43;

    // Calculate the origin as the intersection with the near frustum plane.
    pickRayOrig += pickRayDir * near;
}
  14.   Have you already implemented the basic functionality of picking? I mean pick-ray calculation from the picked screen coordinates, etc.   In case you have a lot of 3D objects in the scene, you can build a map table of the relationships between 3D objects in the scene and objects in your application data model. You have to do it once per scene/level creation.
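The map-table idea above could be sketched like this. `PickMap` is a hypothetical name: it keys the application's data-model object by a mesh's unique ID (such as the `InternalID` from the `TriangleMesh` class in the earlier post), built once at scene/level load:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical scene-to-model lookup, built once per scene/level creation.
sealed class PickMap<TModel>
{
    private readonly Dictionary<uint, TModel> m_map = new Dictionary<uint, TModel>();

    // Record which data-model object a scene mesh represents.
    public void Register(uint meshInternalId, TModel modelObject)
    {
        m_map[meshInternalId] = modelObject;
    }

    // After a pick hits a mesh, resolve it back to the data-model object.
    public bool TryResolve(uint meshInternalId, out TModel modelObject)
    {
        return m_map.TryGetValue(meshInternalId, out modelObject);
    }
}
```

With this in place, the picking code only needs the hit mesh's ID; everything application-specific stays on the data-model side of the table.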