Simple Json Lib.

DevAgit


I went looking for a JSON library, but I couldn't find a simple one.
All I wanted was to turn a class into JSON and JSON back into a class, so I wrote one myself.

zJson.cs

/* code by eekdro@gmail.com */
/* zJson is a simple JSON library */
/* Just class to JSON and JSON to class; nested depth is not supported */
 
 
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Text;
 
namespace AliceUtil
{
        /// <summary>
        /// Supports JSON-to-class and class-to-JSON conversion.
        /// </summary>
    public class zJson
    {
 
        public static string MakeJson<T>(List<T> jsonClassList, string FileName) where T : new()
        {
            string Result = MakeJson(jsonClassList);
            zUt.SaveTextFile(FileName, Result);
            return Result;
            
        }
 
        /// <summary>
        /// List를 Json으로 만들어준다.
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <param name="jsonClassList"></param>
        /// <returns></returns>
        public static string MakeJson<T>(List<T> jsonClassList) where T : new()
        {
            string ResultStr = "{\n";
            Type ClassType = typeof(T);
            ResultStr += "\t\"" + ClassType.Name + "\":[";
 
            foreach (T jsonClass in jsonClassList)
            {
                string JsonStr = MakeJson(jsonClass).Replace("\t", "\t\t");
                ResultStr += "\t" + JsonStr.Insert(JsonStr.Length - 1, "\t") + ",\n";
            }
 
            if (ResultStr[ResultStr.Length - 2] == ',')
                ResultStr = ResultStr.Substring(0, ResultStr.Length - 2);
 
            ResultStr += "\t]\n}";
 
            return ResultStr;
        }
 
        public static string MakeJson<T>(T jsonClass, string FileName) where T : new()
        {
            string Result = MakeJson(jsonClass);
            zUt.SaveTextFile(FileName, Result);
            return Result;
        }
 
        public static string MakeJson<T>(T jsonClass) where T : new()
        {
            string ResultStr = "{\n";
            Type ClassType = typeof(T);
            FieldInfo[] fields = ClassType.GetFields();
            PropertyInfo[] Pinfo = ClassType.GetProperties();
 
            foreach (FieldInfo f in fields)
            {
                if (f.FieldType == typeof(string))
                {
                    string value = f.GetValue(jsonClass) == null ? "" : f.GetValue(jsonClass).ToString();
                    ResultStr += "\t\"" + f.Name + "\":\"" + value + "\",\n";
                }
                else if (f.FieldType == typeof(DateTime))
                    ResultStr += "\t\"" + f.Name + "\":" + ((DateTime)f.GetValue(jsonClass)).Ticks.ToString() + ",\n";
                else if (f.FieldType == typeof(int) || f.FieldType == typeof(long) || f.FieldType == typeof(float) || f.FieldType == typeof(double) || f.FieldType == typeof(bool))
                    ResultStr += "\t\"" + f.Name + "\":" + f.GetValue(jsonClass).ToString() + ",\n";
                else if (f.FieldType == typeof(int[]))
                    ResultStr += "\t\"" + f.Name + "\":" + "[" + string.Join(",", ((int[])f.GetValue(jsonClass)).Select(x => x.ToString()).ToArray()) + "],\n";
                else if (f.FieldType == typeof(long[]))
                    ResultStr += "\t\"" + f.Name + "\":" + "[" + string.Join(",", ((long[])f.GetValue(jsonClass)).Select(x => x.ToString()).ToArray()) + "],\n";
                else if (f.FieldType == typeof(double[]))
                    ResultStr += "\t\"" + f.Name + "\":" + "[" + string.Join(",", ((double[])f.GetValue(jsonClass)).Select(x => x.ToString()).ToArray()) + "],\n";
                else if (f.FieldType == typeof(float[]))
                    ResultStr += "\t\"" + f.Name + "\":" + "[" + string.Join(",", ((float[])f.GetValue(jsonClass)).Select(x => x.ToString()).ToArray()) + "],\n";
                else if (f.FieldType == typeof(bool[]))
                    ResultStr += "\t\"" + f.Name + "\":" + "[" + string.Join(",", ((bool[])f.GetValue(jsonClass)).Select(x => x.ToString()).ToArray()) + "],\n";
                else if (f.FieldType == typeof(string[]))
                    ResultStr += "\t\"" + f.Name + "\":" + "[" + string.Join(",", ((string[])f.GetValue(jsonClass)).Select(x => "\"" + x + "\"").ToArray()) + "],\n";
            }
 
            if (ResultStr[ResultStr.Length - 2] == ',')
                ResultStr = ResultStr.Substring(0, ResultStr.Length - 2);
 
            return ResultStr + "\n}";
        }
 
        static string GetTokenString(string Source, string StartStr, string EndStr)
        {
            int Openidx = Source.IndexOf(StartStr);
            int Closeidx = 0;
 
            while (true)
            {
                Openidx = Source.IndexOf(StartStr, Openidx + 1);
                Closeidx = Source.IndexOf(EndStr, Closeidx + 1);
 
                if (Closeidx == -1) return "";  // Reaching this point means the input is not valid JSON.
                if (Openidx == -1 || Openidx > Closeidx) break;
            }
            return Source.Substring(1, Closeidx - 1);
        }
 
        public static List<T> PaserListFromFile<T>(string jsonFilename) where T : new()
        {
            return PaserList<T>(zUt.LoadTextFile(jsonFilename));
        }
 
        public static List<T> PaserList<T>(string jsonStr) where T : new()
        {
            List<T> ResultList = new List<T>();
 
            int idx = jsonStr.IndexOf("\"" + typeof(T).Name + "\"");
            if (idx == -1) return ResultList;
            jsonStr = jsonStr.Substring(idx);
            idx = jsonStr.IndexOf("[");
            if (idx == -1) return ResultList;
            jsonStr = jsonStr.Substring(idx).Trim().Replace("\t", "").Replace("\n", "");
            jsonStr = GetTokenString(jsonStr, "[", "]");
 
            while (true)
            {
                if (jsonStr.Trim() == "") break;
                string ItemStr = GetTokenString(jsonStr, "{", "}");
                if (ItemStr == "") break;
                ResultList.Add(Paser<T>(ItemStr));
                jsonStr = jsonStr.Substring(ItemStr.Length + 2);
            }
 
 
            return ResultList;
        }
 
        public static T PaserFromFile<T>(string jsonFilename) where T : new()
        {
            return Paser<T>(zUt.LoadTextFile(jsonFilename));
        }
 
        public static T Paser<T>(string jsonStr) where T : new()
        {
            T nJsonClass = new T();
            Type ClassType = typeof(T);
            FieldInfo[] fields = ClassType.GetFields();
            PropertyInfo[] Pinfo = ClassType.GetProperties();
 
            foreach (FieldInfo f in fields)
            {
                int idx = jsonStr.IndexOf("\"" + f.Name + "\"");
                if (idx == -1) continue;
                string rs = jsonStr.Substring(idx + ("\"" + f.Name + "\"").Length).Trim().Replace("\t", "").Replace("\n", "");
                if (rs[0] != ':') continue;
 
                if (f.FieldType == typeof(string))
                {
                    int ix = rs.IndexOf("\"");
                    int chix = rs.IndexOf(",");
                    if (chix != -1 && chix > ix && ix != -1)
                    {
                        rs = rs.Substring(rs.IndexOf("\"") + 1);
                        string value = rs.Substring(0, rs.IndexOf("\""));
                        f.SetValue(nJsonClass, value);
                    }
                    else f.SetValue(nJsonClass, null);
                }
                else if (f.FieldType == typeof(int[]) || f.FieldType == typeof(string[]))
                {
                    int ps = rs.IndexOf("[") + 1;
                    rs = rs.Substring(ps, rs.IndexOf("]") - ps).Trim();
                    if (rs == "") continue;
                    string[] datas = rs.Split(',');
 
                    if (f.FieldType == typeof(int[]))
                    {
                        var temp = new int[datas.Length];
                        for (int n = 0; n < datas.Length; n++) temp[n] = int.Parse(datas[n].Trim());
                        f.SetValue(nJsonClass, temp);
                    }
                    else if (f.FieldType == typeof(string[]))
                    {
                        var temp = new string[datas.Length];
                        for (int n = 0; n < datas.Length; n++) temp[n] = datas[n].Trim().Substring(1, datas[n].Trim().Length - 2);
                        f.SetValue(nJsonClass, temp);
                    }
                }
                else
                {
                    // The value ends at whichever of ',' '}' ']' appears first.
                    int[] EndPos = { rs.IndexOf(","), rs.IndexOf("}"), rs.IndexOf("]") };
                    int ps = int.MaxValue;  // sentinel larger than any real index
                    foreach (int ix in EndPos)
                        if (ix != -1 && ix < ps) ps = ix;
                    if (ps == int.MaxValue) continue;
 
                    string value = rs.Substring(1, ps - 1).Trim();
                    if (f.FieldType == typeof(int)) f.SetValue(nJsonClass, int.Parse(value));
 
                    else if (f.FieldType == typeof(DateTime)) f.SetValue(nJsonClass, new DateTime(long.Parse(value)));
                    else if (f.FieldType == typeof(double)) f.SetValue(nJsonClass, double.Parse(value));
                    else if (f.FieldType == typeof(float)) f.SetValue(nJsonClass, float.Parse(value));
                    else if (f.FieldType == typeof(long)) f.SetValue(nJsonClass, long.Parse(value));
                    else if (f.FieldType == typeof(bool)) f.SetValue(nJsonClass, bool.Parse(value));
 
                }
            }
            return nJsonClass;
        }
    }
}
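The listing above calls zUt.SaveTextFile and zUt.LoadTextFile, but that helper class is never shown in the post. A minimal stand-in, assuming it simply wraps System.IO (the real zUt may do more):

```csharp
using System.IO;

namespace AliceUtil
{
    // Hypothetical stand-in for the zUt helper used by zJson.
    // The real class is not shown in the post; zJson only needs
    // these two methods, so plain System.IO wrappers suffice here.
    public static class zUt
    {
        // Create or overwrite a file with the given text.
        public static void SaveTextFile(string fileName, string text)
            => File.WriteAllText(fileName, text);

        // Read the whole file back as one string.
        public static string LoadTextFile(string fileName)
            => File.ReadAllText(fileName);
    }
}
```

With this in place, MakeJson(list, fileName) and PaserListFromFile round-trip through disk.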
 
 

for coffee 0x4C668AeBB9Facd8ecE8764AaAC48B7186130C411


One point that may look a little odd: I store DateTime values as Ticks.
The reason is simple. Converting a DateTime to a string can cause problems when handling data across locales.
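The Ticks round trip can be seen in a short standalone sketch (the class and variable names here are mine, for illustration only):

```csharp
using System;

class TicksDemo
{
    static void Main()
    {
        DateTime original = DateTime.Now;

        // A tick count is a plain 64-bit integer: it reads and writes
        // identically under every culture setting.
        long ticks = original.Ticks;

        // Reconstructing from Ticks restores the exact same instant,
        // down to 100-nanosecond resolution.
        DateTime restored = new DateTime(ticks);

        Console.WriteLine(original == restored);   // True

        // A formatted string, by contrast, changes with the current
        // culture and typically drops sub-second precision.
        Console.WriteLine(original.ToString());
    }
}
```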

Test Code

using System;
using System.Collections.Generic;
using System.Text;
using AliceUtil;
 
namespace JsonTest
{
    class Person
    {
        public bool ok;
        public int Id;
        public string Name;
        public DateTime Ddate;
        public int[] intArray = { 1, 2, 3, 4 };
        public string[] strArray = { "1", "2", "3", "4" };
    }
 
 
    class Program
    {
        static void Main(string[] args)
        {
            List<Person> tbPerson = new List<Person>();
 
            for (int n = 0; n < 3; n++)
            {
                tbPerson.Add(
                    zJson.Paser<Person>(("{'Id' : " + n + ", 'Name' : 'Alex', 'ok':true}").Replace("'", "\""))
                    );
            }
 
            string result = zJson.MakeJson<Person>(tbPerson);
            Console.WriteLine(result);
            Console.WriteLine("---------------");
 
            List<Person> PList = zJson.PaserList<Person>(result);
            foreach( Person ps in PList)
            {
                Console.WriteLine(zJson.MakeJson(ps));                
            }
            
        }
    }
}
 
 


 

 



2 Comments


Recommended Comments

Thank you for your comment, StillDesign.
I wrote it because I only needed simple functionality in a short amount of code.
