Search the Community

Showing results for tags 'R&D' in content posted in Graphics and GPU Programming.
Found 4 results

  1. My SDF font looks great at large sizes, but not when I draw it at smaller sizes. I have my orthographic projection matrix set up so that each unit is a 1x1 pixel. The text is rendered from FreeType2 to a texture atlas at 56px with a spread of 8 pixels (the multiplier is 8x and scaled down). I'm drawing at 18px in the screenshot attached to this post. The way I calculate the size of the text quads is by dividing the desired size (18px in the screenshot) by the size of the glyphs in the atlas (56px in this case), and scaling the glyph sprite by that factor. So: 18/56 ≈ 0.32, and I multiply the rect's size vector by that when it comes to vertex placement (this obviously doesn't apply to the vertices' texture coords). Now, I made sure that all metrics stored in my SDF font files are whole numbers (rect position/size, bearing amounts, advance, etc.), but when I scale the font, the vertex positions are almost never whole numbers. I also increase the "edge" smoothstep shader parameter for smaller text, but it doesn't seem to help all that much. (See the sketch after these results.)
  2. Hi, tile-based renderers are quite popular nowadays: tiled deferred, Forward+ and clustered renderers. There is a presentation about GPU-based particle systems from AMD, and the part that particularly interests me is the tile-based rendering. The basic idea is to leave the rasterization pipeline when rendering billboards and do the work in a compute shader instead, much like Forward+: you determine tile frustums, cull particles against them, sort the survivors front to back, then blend them until the accumulated alpha reaches 1. The performance results at the end of the slides seem promising. Has anyone ever implemented this? Was it a success, and is it worth doing? The front-to-back rendering is the most interesting part in my opinion, because overdraw can be eliminated for alpha blending (see the sketch after these results). The demo is sadly no longer available.
  3. I was reworking my LightProbe filter and wrote some code to generate the reference cubemap, but then I noticed a discontinuity at the border of each face (top: CPU implementation, bottom: GPU implementation; the contrast has been adjusted on the right side). At first I thought it might be caused by the interpolation, but then I tried the same algorithm in 2D (like a slice through the normal light probe prefiltering) for better visualization, and the result really confused me. See the attachments: the top half is the prefiltered color value, displayed per channel (it's upside down because I used the color value directly as the y coordinate). The bottom half is the differential of the color, and there is very clearly a discontinuity exactly where the border should be. As the roughness goes higher, the plot gets stranger. So I'm kind of stuck here: what is happening, and what can I do to remove this artifact? Does anybody have any idea? Here is my code:

inline FVector2D Map(int32 FaceIndex, int32 i, int32 FaceSize, float& SolidAngle)
{
    float u = 2 * (i + 0.5) / (float)FaceSize - 1;
    FVector2D Return;
    switch (FaceIndex)
    {
    case 0: Return = FVector2D(-u, -1); break;
    case 1: Return = FVector2D(-1, u); break;
    case 2: Return = FVector2D(u, 1); break;
    case 3: Return = FVector2D(1, -u); break;
    }
    SolidAngle = 1.0f / FMath::Pow(Return.SizeSquared(), 3.0f / 2.0f);
    return Return.SafeNormal();
}

void Test2D()
{
    const int32 Res = 256;
    const int32 MipLevel = 8;
    TArray<FLinearColor> Source;
    TArray<FLinearColor> Prefiltered;
    Source.AddZeroed(Res * 4);
    Prefiltered.AddZeroed(Res * 4);
    for (int32 i = 0; i < Res; ++i)
    {
        Source[i] = FLinearColor(1, 0, 0);
        Source[Res + i] = FLinearColor(0, 1, 0);
        Source[Res * 2 + i] = FLinearColor(0, 0, 1);
        Source[Res * 3 + i] = FLinearColor(0, 0, 0);
    }

    const float Roughness = MipLevel / 8.0f;
    const float a = Roughness * Roughness;
    const float a2 = a * a;

    // Brute force sampling with GGX kernel
    for (int32 FaceIndex = 0; FaceIndex < 4; ++FaceIndex)
    {
        for (int32 i = 0; i < Res; ++i)
        {
            float SolidAngle = 0;
            FVector2D N = Map(FaceIndex, i, Res, SolidAngle);
            double TotalColor[3] = {};
            double TotalWeight = 0;
            for (int32 SampleFace = 0; SampleFace < 4; ++SampleFace)
            {
                for (int32 j = 0; j < Res; ++j)
                {
                    float SampleJacobian = 0;
                    FVector2D L = Map(SampleFace, j, Res, SampleJacobian);
                    const float NoL = (L | N);
                    if (NoL <= 0)
                        continue;
                    const FVector2D H = (N + L).SafeNormal();
                    const float NoH = (N | H);
                    float D = a2 * NoL * SampleJacobian / FMath::Pow(NoH * NoH * (a2 - 1) + 1, 2.0f);
                    TotalWeight += D;
                    FLinearColor Sample = Source[SampleFace * Res + j] * D;
                    TotalColor[0] += Sample.R;
                    TotalColor[1] += Sample.G;
                    TotalColor[2] += Sample.B;
                }
            }
            if (TotalWeight > 0)
            {
                Prefiltered[FaceIndex * Res + i] = FLinearColor(
                    TotalColor[0] / TotalWeight,
                    TotalColor[1] / TotalWeight,
                    TotalColor[2] / TotalWeight);
            }
        }
    }

    // Save to bmp
    const int32 Width = 4 * Res;
    const int32 Height = 768;
    TArray<FColor> Bitmap;
    Bitmap.SetNum(Width * Height);

    // Prefiltered Color curve per channel
    float MaxDelta = 0;
    for (int32 x = 0; x < Width; ++x)
    {
        FColor SourceColor = Source[x].ToFColor(false);
        Bitmap[x] = SourceColor;
        FColor Sample = Prefiltered[x].ToFColor(false);
        check(Sample.R < 256);
        check(Sample.G < 256);
        check(Sample.B < 256);
        Bitmap[Sample.R * Width + x] = FColor(255, 0, 0);
        Bitmap[Sample.G * Width + x] = FColor(0, 255, 0);
        Bitmap[Sample.B * Width + x] = FColor(0, 0, 255);
        if (x > 0)
        {
            const FLinearColor Delta = Prefiltered[x] - Prefiltered[x - 1];
            MaxDelta = FMath::Max(MaxDelta,
                FMath::Max3(FMath::Abs(Delta.R), FMath::Abs(Delta.G), FMath::Abs(Delta.B)));
        }
    }

    // Differential per channel
    const float Scale = 128 / MaxDelta;
    for (int32 x = 1; x < Width; ++x)
    {
        const FLinearColor Delta = Prefiltered[x] - Prefiltered[x - 1];
        Bitmap[int32(512 + Delta.R * Scale) * Width + x] = FColor(255, 0, 0);
        Bitmap[int32(512 + Delta.G * Scale) * Width + x] = FColor(0, 255, 0);
        Bitmap[int32(512 + Delta.B * Scale) * Width + x] = FColor(0, 0, 255);
    }

    FFileHelper::CreateBitmap(TEXT("Test"), Width, Height, Bitmap.GetData());
}

Roughness 0.5.bmp  Roughness 1.bmp
  4. Hello, I'd like to ask your take on Lagarde's renormalization of the Disney BRDF's diffuse term, but applied to Lambert. Let me explain. In this document: https://seblagarde.files.wordpress.com/2015/07/course_notes_moving_frostbite_to_pbr_v32.pdf (page 10, listing 1) we see that he uses 1/1.51 * perceptualRoughness as a factor to renormalize the diffuse part of the lighting function. OK. Now let's take Karis's assertion at the beginning of his famous document: http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf (page 2, diffuse BRDF). I think his premise applies and is reason enough to use Lambert (at least in my case). But from Lagarde's document, page 11 figure 10, we see that Lambert looks frankly equivalent to Disney. From that observation, the question that naturally comes up is: if Disney needs renormalization, doesn't Lambert too? And I'm not talking about 1/π (that one is obvious), but about that roughness-related factor. A wild guess would tell me that because there is no Schlick Fresnel in Lambert and no dependence on roughness, and as long as the 1/π is there, the Lambert albedo is always below 1, so it shouldn't need further renormalization. So then, where does that extra energy appear in Disney? According to the graph, it's in the high-view-angle, high-roughness zone, so that would mean here: (cf. image) This is a very small difference. To my eyes it certainly doesn't justify the heavy darkening introduced by the 1/1.51 factor, which takes effect over a much wider range of the function. But this could be perceptual, or just my stupidity. (A quick numerical sanity check is sketched after these results.) Looking forward to being educated. Bests
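
On the SDF question in result 1: rather than tuning the "edge" smoothstep parameter by eye, it can be derived from how many screen pixels the distance-field spread covers at the current scale. Below is a small C++ sketch of one way to compute both the quad scale and the edge half-width from the atlas settings mentioned above (56px glyphs, 8px spread) and the target size. The struct and function names are made up for illustration, and it assumes the SDF is encoded so that the glyph outline sits at 0.5 and the ±spread texel range maps to [0, 1].

#include <algorithm>
#include <cstdio>

// Hypothetical helper (names are mine): derive the glyph quad scale and the smoothstep
// half-width for the shader's "edge" parameter, so the edge transition stays roughly
// one screen pixel wide at any drawn size.
struct SdfEdgeParams
{
    float scale;     // factor applied to the glyph rect (e.g. 18 / 56)
    float edgeWidth; // half-width w used as smoothstep(0.5 - w, 0.5 + w, dist)
};

// Assumes the SDF encodes +-spreadPx atlas texels across the [0, 1] range,
// with the glyph outline at 0.5.
SdfEdgeParams ComputeSdfEdge(float atlasGlyphPx, float spreadPx, float targetPx)
{
    SdfEdgeParams p;
    p.scale = targetPx / atlasGlyphPx; // 18 / 56 ~= 0.32 in the example above
    // One screen pixel covers 1/scale atlas texels, and one texel is 1/(2*spreadPx)
    // in SDF units, so this is how much the SDF value changes per screen pixel.
    float sdfUnitsPerScreenPixel = 1.0f / (2.0f * spreadPx * p.scale);
    p.edgeWidth = std::min(0.49f, 0.5f * sdfUnitsPerScreenPixel);
    return p;
}

int main()
{
    SdfEdgeParams p = ComputeSdfEdge(56.0f, 8.0f, 18.0f);
    std::printf("scale %.3f, edge half-width %.3f\n", p.scale, p.edgeWidth);
    return 0;
}

Snapping the quad origin of each glyph to whole pixels may also help at small sizes, since the non-integer vertex positions mentioned in the post otherwise smear the already-narrow edge across two screen pixels.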
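On the tiled particle question in result 2: I haven't implemented the AMD technique, but the front-to-back accumulation it relies on is the "under" compositing operator with an early out once coverage saturates. Here is a minimal CPU-side C++ sketch of what each tile pixel's loop would do in the compute shader; the data and the saturation threshold are placeholders, and it assumes the particles hitting the pixel are already sorted nearest-first.

#include <cstdio>
#include <vector>

// Minimal CPU-side model of the per-pixel loop a tile-based particle renderer would run:
// particles are assumed sorted nearest-first, blending uses the front-to-back "under"
// operator, and the loop exits early once coverage saturates.
struct Rgba { float r, g, b, a; };

Rgba AccumulateFrontToBack(const std::vector<Rgba>& sortedParticles)
{
    Rgba dst = { 0.0f, 0.0f, 0.0f, 0.0f }; // dst.a accumulates coverage
    for (const Rgba& src : sortedParticles)
    {
        float weight = (1.0f - dst.a) * src.a; // how much this particle still shows through
        dst.r += weight * src.r;
        dst.g += weight * src.g;
        dst.b += weight * src.b;
        dst.a += weight;
        if (dst.a >= 0.999f) // pixel is effectively opaque: skip all farther particles
            break;
    }
    return dst;
}

int main()
{
    // Placeholder particles, nearest first: red in front, then green, blue, white.
    std::vector<Rgba> particles = {
        { 1, 0, 0, 0.6f }, { 0, 1, 0, 0.6f }, { 0, 0, 1, 0.6f }, { 1, 1, 1, 0.6f },
    };
    Rgba out = AccumulateFrontToBack(particles);
    std::printf("rgb %.3f %.3f %.3f  coverage %.3f\n", out.r, out.g, out.b, out.a);
    return 0;
}

The early out is where the overdraw savings come from: once a pixel is effectively opaque, every remaining (farther) particle in the tile's list can be skipped without being fetched or shaded, which is not possible with conventional back-to-front blending.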
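On the diffuse renormalization question in result 4: one way to take the guesswork out of "where does the extra energy appear" is a white-furnace-style check, i.e. numerically integrating the diffuse BRDF times cos(theta) over the hemisphere for a given view angle and roughness and seeing whether the result exceeds 1. The C++ sketch below does this with cosine-weighted Monte Carlo sampling for Lambert and for the non-renormalized Burley/Disney diffuse as it is commonly written (fd90 = 0.5 + 2 * roughness * LdotH^2); it is my own test harness, not code from either document, and all names are made up.

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

struct Vec3 { double x, y, z; };

static Vec3 Normalize(Vec3 a)
{
    double len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return { a.x / len, a.y / len, a.z / len };
}

static double Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Directional albedo E(v) = integral over the hemisphere of fd(l, v) * (n.l) dl,
// estimated with cosine-weighted sampling (pdf = cos/pi, so each sample adds fd * pi).
static double DirectionalAlbedo(double NoV, double roughness, bool disney)
{
    const double Pi = 3.14159265358979323846;
    const Vec3 v = { std::sqrt(std::max(0.0, 1.0 - NoV * NoV)), 0.0, NoV }; // n = (0,0,1)
    std::mt19937 rng(1234);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    const int SampleCount = 200000;
    double sum = 0.0;
    for (int i = 0; i < SampleCount; ++i)
    {
        // Cosine-weighted hemisphere sample around n = (0,0,1)
        double u1 = uni(rng), u2 = uni(rng);
        double r = std::sqrt(u1), phi = 2.0 * Pi * u2;
        Vec3 l = { r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0 - u1) };
        double fd;
        if (!disney)
        {
            fd = 1.0 / Pi; // Lambert with albedo 1
        }
        else
        {
            // Burley 2012 diffuse: two Schlick-like factors, fd90 from roughness and LdotH
            Vec3 h = Normalize({ l.x + v.x, l.y + v.y, l.z + v.z });
            double LoH = Dot(l, h);
            double NoL = l.z;
            double fd90 = 0.5 + 2.0 * roughness * LoH * LoH;
            double lightScatter = 1.0 + (fd90 - 1.0) * std::pow(1.0 - NoL, 5.0);
            double viewScatter = 1.0 + (fd90 - 1.0) * std::pow(1.0 - NoV, 5.0);
            fd = (1.0 / Pi) * lightScatter * viewScatter;
        }
        sum += fd * Pi; // (fd * cos) / pdf, with pdf = cos / pi
    }
    return sum / SampleCount;
}

int main()
{
    const double views[] = { 0.1, 0.5, 1.0 };
    for (double NoV : views)
    {
        std::printf("NdotV %.1f  Lambert %.3f  Disney (roughness 1) %.3f\n",
                    NoV, DirectionalAlbedo(NoV, 1.0, false), DirectionalAlbedo(NoV, 1.0, true));
    }
    return 0;
}

For Lambert with albedo 1 the estimator returns exactly 1 for every view angle, which supports the intuition in the post that it needs no roughness-dependent factor; for the Disney form, this lets you measure directly how far above 1 it climbs at grazing angles and high roughness, and judge whether that justifies the 1/1.51 darkening.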