
responsive interface with render 3D in separate thread
user88 replied to user88's topic in Graphics and GPU Programming
Considering that since Windows Vista both Direct3D and GDI+ draw through the DirectX runtime, it seems that the 3D rendering thread overloads the DirectX runtime and GDI+ is simply unable to complete its tasks in time. See the Direct3D / GDI+ / DirectX runtime structure here: https://msdn.microsoft.com/ru-ru/library/windows/desktop/ee417756(v=vs.85).aspx#background
responsive interface with render 3D in separate thread
user88 posted a topic in Graphics and GPU Programming
Problem: The user interface (based on WinForms) isn't responsive in a .NET desktop application with its own real-time 3D render (not fullscreen) while it renders heavy frames in a separate thread. "Heavy frames" means the overall FPS is less than 5. The reason is unknown, but it is not a case of the main application thread waiting too long on the 3D rendering thread for synchronization. The issue exists even on multicore machines. Question: Does anybody know some tricks with the configuration of multithreading in .NET? Simply assigning BelowNormal priority to the rendering thread doesn't make the situation better. Any other ideas on how to solve this problem?
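One mitigation that is sometimes worth trying in this situation is an explicit yield in the render loop, so the scheduler is forced to give the UI thread a turn every frame instead of relying on thread priority alone. A minimal self-contained sketch; `RenderFrame` is a hypothetical stand-in for the heavy D3D draw + Present call:

```csharp
using System;
using System.Threading;

class RenderLoopSketch
{
    static volatile bool s_running = true;
    static int s_frames;

    // Hypothetical stand-in for the heavy D3D draw + Present call.
    // Here it just counts frames and stops after three of them.
    static void RenderFrame() { s_frames++; if (s_frames >= 3) s_running = false; }

    static void Main()
    {
        var render = new Thread(() =>
        {
            // BelowNormal priority alone does not guarantee the UI thread runs;
            // an explicit Thread.Sleep(1) each frame forces a scheduler yield.
            Thread.CurrentThread.Priority = ThreadPriority.BelowNormal;
            while (s_running)
            {
                RenderFrame();
                Thread.Sleep(1); // give the message pump / GDI+ a chance to run
            }
        });
        render.IsBackground = true;
        render.Start();
        render.Join();
        Console.WriteLine(s_frames); // prints 3
    }
}
```

At under 5 FPS the extra millisecond per frame is negligible, but it guarantees the message pump is scheduled regularly.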
Hi, I would like to improve the photorealism of a real-time 3D engine that is used for room interior visualization. Question: which shading algorithms are most suitable (in your opinion) for such purposes? I tried to google a little, but all the results were 10 years old..

DX11 [DX11] writing RWTexture2DArray in pixel shader
user88 replied to user88's topic in Graphics and GPU Programming
Thanks, Zaoshi Kaba.


DX11 [DX11] writing RWTexture2DArray in pixel shader
user88 posted a topic in Graphics and GPU Programming
Hi, this MSDN page tells me that an RWTexture2DArray object can be modified from a pixel shader. How should I bind this object to the resource in application code? There is no ID3D11DeviceContext::PSSetUnorderedAccessViews method as an alternative to ID3D11DeviceContext::CSSetUnorderedAccessViews for compute shaders..
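For reference, in D3D11 pixel-shader UAVs are not bound through a PSSet* call at all — they share binding slots with the render targets and are set through the output-merger stage via ID3D11DeviceContext::OMSetRenderTargetsAndUnorderedAccessViews. A minimal native-API fragment (pContext, pRTV, pDSV and pUAV are assumed to already exist; pUAV would be an unordered-access view created over the RWTexture2DArray resource):

```cpp
// Pixel-shader UAVs are bound together with the render targets.
// UAV slots come after the render-target slots: with one RTV bound,
// the first available UAV slot is 1 (matching u1 in the shader).
ID3D11RenderTargetView*    rtvs[1] = { pRTV };
ID3D11UnorderedAccessView* uavs[1] = { pUAV };
UINT initialCounts[1] = { static_cast<UINT>(-1) }; // -1 keeps the current counter

pContext->OMSetRenderTargetsAndUnorderedAccessViews(
    1, rtvs, pDSV,   // render targets + depth-stencil
    1, 1, uavs,      // UAVStartSlot = 1, one UAV
    initialCounts);
```

This is a pipeline-binding fragment, not a standalone program; it needs an existing device context and views to run.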
DX11 [DX11] Why we need sRGB back buffer
user88 replied to user88's topic in Graphics and GPU Programming
Okay, with the implementation of gamma correction as well as the sRGB backbuffer format, all is clear for me now. Great post, Chris. Hodgman, your post also was very helpful, thanks. One point is still not clear for me, about the "mathematically linear" and "perceptually linear" things: why does the linear gradient that I see on screen look nonlinear with gamma correction (the second case in my first post)? I compared it visually with a linear gradient that I made in Photoshop, with the same width in pixels; on screen they look different. Is there some Photoshop trick?
DX11 [DX11] Why we need sRGB back buffer
user88 replied to user88's topic in Graphics and GPU Programming
No, the display/monitor itself does the pow(value, 2.2), in the display hardware. If you do the pow(value, 2.2) yourself, then you end up seeing pow(pow(value, 2.2), 2.2) after the display emits the picture

I mean 1/2.2, not 2.2. I've already corrected my previous post. Sorry for that..
DX11 [DX11] Why we need sRGB back buffer
user88 replied to user88's topic in Graphics and GPU Programming
Hi Jason, I have read this article (thank you for the link anyway) and understand the mathematical reasoning behind the gamma correction process. All is clear for me with sampling sRGB images and correcting them for further linear calculations. All intermediate calculations should be outputted to buffers without any correction; that is also clear for me. The misunderstanding is actually with the sRGB backbuffer. I thought that an sRGB backbuffer is like a JPEG in the sRGB color space, meaning that all values in the sRGB backbuffer are already gamma corrected (pow(value, 1/2.2)). If so, then final color values should be outputted with pow(value, 1/2.2) correction. But no, it seems the sRGB backbuffer is the opposite of what I thought. Furthermore, final color values should be outputted with pow(value, 1/2.2) correction for non-sRGB backbuffers, right?
DX11 [DX11] Why we need sRGB back buffer
user88 replied to user88's topic in Graphics and GPU Programming
Hello Ashaman73, as I understood it, you are talking about the sRGB color space and HDR, but my question is about the advantage of an sRGB backbuffer + pow(u, 2.2) over a non-sRGB format + direct output. From the comparison image (the last one in my first post), I guess the advantage is in the precision of the gamma curve applied to the final image: with an sRGB backbuffer + pow(u, 2.2) it is more precise. Right? Are there any other advantages?
Hi, after reading a couple of web resources about gamma correction I still feel confused. In my experiment the pixel shader simply outputs a linear gradient to the backbuffer.

First case: the backbuffer format is not sRGB; the linear gradient value is outputted without any modifications: [attachment=22107:ng.jpg]

Second case: the backbuffer format is sRGB; the linear gradient value is outputted without any modifications: [attachment=22104:g1.jpg]

Third case: the backbuffer format is sRGB; the linear gradient value is outputted with a correction of pow(u, 1/2.2): [attachment=22105:g1div2.2.jpg]

Fourth case: the backbuffer format is sRGB; the linear gradient value is outputted with a correction of pow(u, 2.2): [attachment=22106:g2.2.jpg]

As you see, the first and last results are almost the same. So, my question is: why do we need sRGB backbuffers plus a modified final pixel shader output if we can simply use a non-sRGB texture? The result is almost the same: [attachment=22108:pixcmp.jpg]
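A quick numeric check makes the four cases above less mysterious. The sketch below models the display response as pow(v, 2.2) and the sRGB backbuffer write as pow(v, 1/2.2) — the simple power approximation of the real piecewise sRGB curve — and traces what a mid-gray value of 0.5 becomes on screen in each case:

```csharp
using System;
using System.Globalization;

class GammaSketch
{
    // Approximate display response: what the monitor does to the value it receives.
    static double Display(double v) => Math.Pow(v, 2.2);

    // Approximate sRGB encode: what writing to an sRGB backbuffer does.
    static double SrgbEncode(double v) => Math.Pow(v, 1.0 / 2.2);

    static string F(double v) => v.ToString("0.000", CultureInfo.InvariantCulture);

    static void Main()
    {
        double v = 0.5;
        // Case 1: non-sRGB backbuffer, no correction -> display darkens it
        Console.WriteLine(F(Display(v)));                                 // 0.218
        // Case 2: sRGB backbuffer, no correction -> encode cancels the display curve
        Console.WriteLine(F(Display(SrgbEncode(v))));                     // 0.500
        // Case 3: sRGB backbuffer + pow(v, 1/2.2) -> brightened twice, once too many
        Console.WriteLine(F(Display(SrgbEncode(SrgbEncode(v)))));         // 0.730
        // Case 4: sRGB backbuffer + pow(v, 2.2) -> same on screen as case 1
        Console.WriteLine(F(Display(SrgbEncode(Math.Pow(v, 2.2)))));      // 0.218
    }
}
```

With this approximation cases 1 and 4 are mathematically identical, which matches the screenshots; the small visible difference comes from the real sRGB curve not being exactly pow(v, 1/2.2), plus 8-bit quantization.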

'usp10.h' no such file or directory
user88 replied to Deek880's topic in Graphics and GPU Programming
After a short googling I found a discussion about the same problem: http://social.msdn.microsoft.com/Forums/en-US/51823ee4-018c-44d9-a5ef-7c99e64979e5/vc2005expresswanttobuilddxsdksampleusp10hmissing?forum=gametechnologiesgeneral I hope that helps you.
[DX9] many lights: registers VS Volume Texture
user88 replied to user88's topic in Graphics and GPU Programming
Thank you for the reply, Tispe. I will take your post into account. Before I start to do anything, I want to hear people's points of view on the topic question, so please tell me if somebody has a different point of view. Nice advice! I didn't know about such a feature. Only direct lighting calculations are there..
[DX9] many lights: registers VS Volume Texture
user88 posted a topic in Graphics and GPU Programming
Hi, I want to expand the maximum number of lights my 3D engine can handle. It uses DX9 with forward rendering. The lights are passed to the shader as an array of corresponding shader structs located in register memory, so register memory is the restriction on the maximum number of lights. Could anybody tell me whether the performance will be good enough if I hold the lights array in a volume texture instead of register memory? Thanks for your help!
I guess you use C#. Here is a C# class representing a triangle mesh; you can use the picking functionality from this code:

/// <summary>
/// 3D geometry that consists of indices, vertices, normals and texture coordinates.
/// </summary>
public sealed class TriangleMesh
{
    private static uint s_internalIdCounter;

    private BoundingSphere m_boundingSphere;
    private BoundingBox m_boundingBox;

    /// <summary>
    /// Gets an internal ID whose value is unique for each instance.
    /// </summary>
    public uint InternalID { get; private set; }

    /// <summary>
    /// Gets an array of the indices of the geometry.
    /// </summary>
    public ushort[] Indices { get; private set; }

    /// <summary>
    /// Gets an array of the vertices of the geometry.
    /// </summary>
    public Float3[] Vertices { get; private set; }

    /// <summary>
    /// Gets an array of the normals of the geometry. Value can be <c>null</c>.
    /// </summary>
    public Float3[] Normals { get; private set; }

    /// <summary>
    /// Gets an array of the texture coordinates of the geometry. Value can be <c>null</c>.
    /// </summary>
    public Float2[] TexCoords { get; private set; }

    public TriangleMesh(ushort[] indices, Float3[] vertices, Float3[] normals, Float2[] texCoords)
    {
        if (indices.Length < 3)
            throw new ArgumentException("The length of the indices array shouldn't be less than 3 elements.", "indices");
        if (vertices.Length < 3)
            throw new ArgumentException("The length of the vertices array shouldn't be less than 3 elements.", "vertices");

        Indices = indices;
        Vertices = vertices;
        Normals = normals;
        TexCoords = texCoords;

        unchecked { s_internalIdCounter++; }
        InternalID = s_internalIdCounter;

        CalculateBounds();
    }

    /// <summary>
    /// Calculates a bounding box and a bounding sphere of the geometry.
    /// </summary>
    private void CalculateBounds()
    {
        m_boundingBox = BoundingBox.FromPoints(Vertices);
        m_boundingSphere = BoundingSphere.FromBox(m_boundingBox);
    }

    /// <summary>
    /// Gets the transformed bounding sphere of the geometry.
    /// </summary>
    /// <param name="transform">Transformation matrix.</param>
    /// <returns>Transformed bounding sphere.</returns>
    public BoundingSphere CalculateBoundingSphere(Float4x4 transform)
    {
        Float3 center = Float3.TransformCoordinate(m_boundingSphere.Center, transform);
        return new BoundingSphere(center, m_boundingSphere.Radius);
    }

    /// <summary>
    /// Gets the transformed bounding box of the geometry.
    /// </summary>
    /// <param name="transform">Transformation matrix.</param>
    /// <returns>Transformed bounding box.</returns>
    public BoundingBox CalculateBoundingBox(Float4x4 transform)
    {
        Float3 min = Float3.TransformCoordinate(m_boundingBox.Minimum, transform);
        Float3 max = Float3.TransformCoordinate(m_boundingBox.Maximum, transform);
        return new BoundingBox(min, max);
    }

    /// <summary>
    /// Determines whether a ray intersects the geometry.
    /// </summary>
    /// <param name="transform">Transformation matrix of the geometry.</param>
    /// <param name="ray">The ray which will be tested for intersection.</param>
    /// <param name="distance">When the method completes, contains the distance at which the ray intersected the geometry.</param>
    /// <param name="faceIndex">When the method completes, contains the index of the face which the ray intersects.</param>
    /// <returns><c>true</c> if the ray intersects the geometry; otherwise, <c>false</c>.</returns>
    public bool Intersects(Float4x4 transform, Ray ray, out float distance, out int faceIndex)
    {
        float u, v;
        return Intersects(transform, ray, out distance, out faceIndex, out u, out v);
    }

    /// <summary>
    /// Determines whether a ray intersects the geometry.
    /// </summary>
    /// <param name="u">Barycentric U of the face which the ray intersects.</param>
    /// <param name="v">Barycentric V of the face which the ray intersects.</param>
    public bool Intersects(Float4x4 transform, Ray ray, out float distance, out int faceIndex, out float u, out float v)
    {
        // Create the bounding sphere before inverting the transform matrix
        BoundingSphere bs = CalculateBoundingSphere(transform);

        // Convert the ray to model space
        Float3 near = ray.Position;
        Float3 dir = ray.Direction;
        transform.Invert();
        Float3 tmp = near;
        Float3.TransformCoordinate(ref tmp, ref transform, out near);
        tmp = dir;
        Float3.TransformNormal(ref tmp, ref transform, out dir);
        Ray modelSpaceRay = new Ray(near, dir);

        // Test the bounding sphere first
        if (Ray.Intersects(ray, bs, out distance))
        {
            if (Indices != null && Indices.Length > 0)
            {
                // Intersect indexed geometry
                for (faceIndex = 0; faceIndex < Indices.Length; faceIndex += 3)
                {
                    Float3 vertex1 = Vertices[Indices[faceIndex]];
                    Float3 vertex2 = Vertices[Indices[faceIndex + 1]];
                    Float3 vertex3 = Vertices[Indices[faceIndex + 2]];
                    if (Ray.Intersects(modelSpaceRay, vertex1, vertex2, vertex3, out distance, out u, out v))
                        return true;
                }
            }
            else
            {
                // Intersect non-indexed geometry
                for (faceIndex = 0; faceIndex < Vertices.Length; faceIndex += 3)
                {
                    Float3 vertex1 = Vertices[faceIndex];
                    Float3 vertex2 = Vertices[faceIndex + 1];
                    Float3 vertex3 = Vertices[faceIndex + 2];
                    if (Ray.Intersects(modelSpaceRay, vertex1, vertex2, vertex3, out distance, out u, out v))
                        return true;
                }
            }
        }

        faceIndex = -1;
        distance = u = v = -1f;
        return false;
    }

    /// <summary>
    /// Determines whether a ray intersects the geometry, collecting all hits.
    /// </summary>
    /// <param name="hits">All intersection hits.</param>
    public bool Intersects(Float4x4 transform, Ray ray, out float distance, out int faceIndex, out IntersectInformation[] hits)
    {
        var hitsList = new List<IntersectInformation>();
        float curDistance;
        int curIndex;
        distance = float.MaxValue;
        faceIndex = -1;

        // Create the bounding sphere before inverting the transform matrix
        BoundingSphere bs = CalculateBoundingSphere(transform);

        // Convert the ray to model space
        Float3 near = ray.Position;
        Float3 dir = ray.Direction;
        transform.Invert();
        Float3 tmp = near;
        Float3.TransformCoordinate(ref tmp, ref transform, out near);
        tmp = dir;
        Float3.TransformNormal(ref tmp, ref transform, out dir);
        Ray modelSpaceRay = new Ray(near, dir);

        // Test the bounding sphere first
        if (Ray.Intersects(ray, bs, out curDistance))
        {
            if (Indices != null && Indices.Length > 0)
            {
                // Intersect indexed geometry
                for (curIndex = 0; curIndex < Indices.Length; curIndex += 3)
                {
                    Float3 vertex1 = Vertices[Indices[curIndex]];
                    Float3 vertex2 = Vertices[Indices[curIndex + 1]];
                    Float3 vertex3 = Vertices[Indices[curIndex + 2]];
                    float u, v;
                    if (Ray.Intersects(modelSpaceRay, vertex1, vertex2, vertex3, out curDistance, out u, out v))
                    {
                        if (curDistance < distance)
                        {
                            distance = curDistance;
                            faceIndex = curIndex / 3;
                        }
                        var hit = new IntersectInformation { Distance = curDistance, FaceIndex = curIndex / 3, U = u, V = v };
                        hitsList.Add(hit);
                    }
                }
            }
            else
            {
                // Intersect non-indexed geometry
                for (curIndex = 0; curIndex < Vertices.Length; curIndex += 3)
                {
                    Float3 vertex1 = Vertices[curIndex];
                    Float3 vertex2 = Vertices[curIndex + 1];
                    Float3 vertex3 = Vertices[curIndex + 2];
                    float u, v;
                    if (Ray.Intersects(modelSpaceRay, vertex1, vertex2, vertex3, out curDistance, out u, out v))
                    {
                        if (curDistance < distance)
                        {
                            distance = curDistance;
                            faceIndex = curIndex / 3;
                        }
                        var hit = new IntersectInformation { Distance = curDistance, FaceIndex = curIndex / 3, U = u, V = v };
                        hitsList.Add(hit);
                    }
                }
            }
        }

        hits = hitsList.ToArray();
        return hits.Length > 0;
    }
}

Also, you have to construct the pick ray from the picked pixel on screen rather than from near/far directly. Have a look at this code:

void CalculatePickRay(int x, int y, int width, int height, float near, Float4x4 view, Float4x4 projection, out Float3 pickRayDir, out Float3 pickRayOrig)
{
    // Map the screen coordinates to the [-1, 1] range
    pickRayDir.X = (((2.0f * x) / width) - 1);
    pickRayDir.Y = -(((2.0f * y) / height) - 1);
    pickRayDir.Z = 1.0f;

    // Clear the translation row and invert the projection matrix
    projection.M41 = 0;
    projection.M42 = 0;
    projection.M43 = 0;
    projection.M44 = 1;
    projection.Invert();
    Float3 tmp = pickRayDir;
    Float3.TransformNormal(ref tmp, ref projection, out pickRayDir);

    // Get the inverse view matrix
    view.Invert();
    tmp = pickRayDir;
    Float3.TransformNormal(ref tmp, ref view, out pickRayDir);
    pickRayDir.Normalize();

    // The ray origin is the camera position (translation of the inverse view matrix)
    pickRayOrig.X = view.M41;
    pickRayOrig.Y = view.M42;
    pickRayOrig.Z = view.M43;

    // Calculate the origin as the intersection with the near frustum plane
    pickRayOrig += pickRayDir * near;
}

Have you already implemented the basic functionality of picking, I mean pick-ray calculation from the picked screen coordinates, etc.? In case you have a lot of 3D objects in the scene, you can make a map table of relationships between the 3D objects in the scene and the objects in your application data model. You have to do it once per scene/level creation.
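The map table mentioned above can be as simple as a dictionary keyed by the mesh's unique id (the TriangleMesh class posted earlier in this thread exposes one as InternalID; plain uints and the ModelObject type stand in for it here as hypothetical placeholders):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical data-model type standing in for whatever the application uses.
class ModelObject { public string Name; }

class PickMapSketch
{
    static void Main()
    {
        // Built once per scene/level creation: mesh internal ID -> data-model object.
        var meshToModel = new Dictionary<uint, ModelObject>
        {
            { 1u, new ModelObject { Name = "chair" } },
            { 2u, new ModelObject { Name = "table" } },
        };

        // At pick time the hit mesh's id resolves straight to the model object.
        uint pickedMeshId = 2u;
        if (meshToModel.TryGetValue(pickedMeshId, out var picked))
            Console.WriteLine(picked.Name); // prints "table"
    }
}
```

The lookup is O(1), so it costs nothing compared to the ray-triangle tests themselves.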
