Search the Community

Showing results for tags 'R&D' in content posted in Graphics and GPU Programming.
Found 11 results

  1. Hello, just wanted to share the link to the latest upgrade of Conservative Morphological Anti-Aliasing, in case someone is interested. It is a post-process AA technique in the same class of approaches as FXAA & SMAA, but focused on minimizing the change to the input image - that is, applying as much anti-aliasing as possible while avoiding blurring textures or other sharp features. Details are available at https://software.intel.com/en-us/articles/conservative-morphological-anti-aliasing-20 and full DX11 source code under the MIT license is available at https://github.com/GameTechDev/CMAA2/ (compute shader implementation; DX12 & Vulkan ports are in the works too!)
  2. Imagine you are Valve, id, or DICE, and your team is going to create a new engine to run your company's main titles for the next decade. You want an engine that is innovative and flexible, can knock socks off next year, and still impress gamers 5 years down the road. Would someone in this position use helper libraries like GLUT, GLFW, or GLM, or would they create their own libraries for the project and do the Win32 API work manually?
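     For context, this is roughly the boilerplate such a library hides - a minimal GLFW window and GL context loop (an illustrative sketch, not from any shipping engine):

         #include <GLFW/glfw3.h>

         int main()
         {
             if (!glfwInit())
                 return -1;

             // One call stands in for RegisterClass/CreateWindow/wglCreateContext etc.
             GLFWwindow* window = glfwCreateWindow(1280, 720, "Engine", nullptr, nullptr);
             if (!window)
             {
                 glfwTerminate();
                 return -1;
             }
             glfwMakeContextCurrent(window);

             while (!glfwWindowShouldClose(window))
             {
                 // ... render here ...
                 glfwSwapBuffers(window);
                 glfwPollEvents();
             }
             glfwTerminate();
             return 0;
         }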
  3. Hello! Last year at my job we implemented HDR output (as in HDR10 / BT.2020 / ST.2084 PQ back buffers) in one of our games on the consoles, which do support HDR10 over HDMI. HDR-compatible hardware (monitors, televisions) has already been around for a year, with varying quality. I wonder: is HDR output already exposed in the PC drivers? Windows 10? Vulkan? DX11? DX12? Which vendors? For those unfamiliar, I'm talking about outputting an HDR signal to HDR hardware (using r10g10b10a2_unorm + PQ back buffers, or better). Thanks, .P
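     For reference, on Windows 10 this is roughly how DXGI 1.4/1.5 exposes it - an untested sketch assuming a flip-model swap chain with an r10g10b10a2_unorm back buffer:

         #include <dxgi1_5.h>

         void TryEnableHdr10(IDXGISwapChain4* SwapChain)
         {
             // BT.2020 primaries with the ST.2084 (PQ) transfer function.
             const DXGI_COLOR_SPACE_TYPE Hdr10 = DXGI_COLOR_SPACE_RGB_FULL_G2084_NONE_P2020;

             UINT Support = 0;
             SwapChain->CheckColorSpaceSupport(Hdr10, &Support);
             if (Support & DXGI_SWAP_CHAIN_COLOR_SPACE_SUPPORT_FLAG_PRESENT)
                 SwapChain->SetColorSpace1(Hdr10);

             // IDXGISwapChain4::SetHDRMetaData can additionally pass HDR10 mastering
             // metadata (primaries, white point, max/min luminance) via a
             // DXGI_HDR_METADATA_HDR10 struct.
         }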
  4. Hello gamedev, I am currently evaluating the worthiness of jumping into R&D work for an automatic impostor system in our engine. In the past I witnessed a tremendous performance increase from such a system in the LumenRT engine (which has to cope with unoptimized user-created content). We're a little bit in the same situation right now: possibly large fields with way too much data (high poly, etc.), so if the engine supported auto-impostoring of such content, that would be cool. To make it a bit more modern, though, I was thinking we could extend the parallax validity of billboards by storing depth too, and render them using parallax occlusion mapping; invalidation would then only be needed once the camera has moved to a more radical angle than with traditional impostors. There are techniques with full volumetric billboards that I am aware of, but they need the geometry shader to generate slices and incur heavy voxel storage. I need something very light on bandwidth to cope with Switch/PS4 limitations. Can you point me to modern research on well-balanced impostor techniques along these lines, or share any ideas you have on the matter? Thanks
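     To illustrate what I mean, here is a hypothetical linear-search parallax step against the stored impostor depth, written as plain C++ for clarity (in practice this would be a pixel shader; all names are made up):

         #include <functional>

         struct Vec2 { float X, Y; };
         struct Vec3 { float X, Y, Z; };

         // Marches the tangent-space view ray through the stored depth in even
         // slices and returns the UV where the ray first dips below the surface.
         Vec2 ParallaxUV(Vec2 UV, Vec3 ViewTS,
                         const std::function<float(Vec2)>& DepthAt, // stored depth in [0,1]
                         float DepthScale = 0.05f, int NumSteps = 16)
         {
             const float StepDepth = 1.0f / NumSteps;
             const Vec2 StepUV = { ViewTS.X / ViewTS.Z * DepthScale / NumSteps,
                                   ViewTS.Y / ViewTS.Z * DepthScale / NumSteps };
             float RayDepth = 0.0f;
             for (int i = 0; i < NumSteps && RayDepth < DepthAt(UV); ++i)
             {
                 UV.X -= StepUV.X;
                 UV.Y -= StepUV.Y;
                 RayDepth += StepDepth;
             }
             return UV; // a real shader would refine with one interpolation step
         }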
  5. Hi, recently I have been looking into a few renderer designs that I could take inspiration from for my game engine. I stumbled upon the BitSquid and OurMachinery blogs about how they architect their renderer to support multiple platforms (which is what I am looking to do!). I have gotten quite far, but I am unsure about a few things they say in the blogs. This is a simplified version of how I understand their setup:
       • Render Backend - one per API, used to execute the commands from the RendererCommandBuffer and RendererResourceCommandBuffer
       • Renderer Command Buffer - platform-agnostic command buffer for creating Draw, Compute, and Resource Update commands
       • Renderer Resource Command Buffer - platform-agnostic command buffer for creation and deletion of GPU resources (textures, buffers, etc.)
     The render backend has arrays of API-specific resources (e.g. VulkanTexture, D3D11Texture, ...) and each engine-side resource holds a uint32 as the handle to the render-side resource. Their system is set up for multi-threaded usage: building command buffers in parallel, and executing RenderCommandBuffers (not resource command buffers) in parallel. One thing I would like clarification on: in one of the blog posts they say, "When the user calls a create-function we allocate a unique handle identifying the resource." Where are the handles allocated from? The RenderBackend? How do they do it in a thread-safe way that doesn't kill performance? If anyone has any ideas or any additional resources on the subject, that would be great. Thanks
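     To make the question concrete, the naive scheme I can imagine (a hypothetical sketch, not from the blogs) is: fresh handles from an atomic counter, recycled ones from a mutex-guarded free list. Destruction is usually rare enough that the lock shouldn't matter; a lock-free stack or per-thread handle blocks would avoid it entirely.

         #include <atomic>
         #include <cstdint>
         #include <mutex>
         #include <vector>

         class HandleAllocator
         {
         public:
             uint32_t Allocate()
             {
                 {
                     // Reuse a freed handle if one is available.
                     std::lock_guard<std::mutex> Lock(FreeMutex);
                     if (!FreeList.empty())
                     {
                         uint32_t Handle = FreeList.back();
                         FreeList.pop_back();
                         return Handle;
                     }
                 }
                 // Otherwise mint a fresh one; atomic, so no lock on the hot path.
                 return Next.fetch_add(1, std::memory_order_relaxed);
             }

             void Free(uint32_t Handle)
             {
                 std::lock_guard<std::mutex> Lock(FreeMutex);
                 FreeList.push_back(Handle);
             }

         private:
             std::atomic<uint32_t> Next{ 1 }; // 0 reserved as the invalid handle
             std::mutex FreeMutex;
             std::vector<uint32_t> FreeList;
         };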
  6. we are looking for someone who can develop a fully automatic [recruiting information removed by moderator - please use jobs section]
  7. Hi, tile-based renderers are quite popular nowadays - tiled deferred, Forward+, and clustered renderers. There is a presentation about GPU-based particle systems from AMD. What particularly interests me is the tile-based rendering part. The basic idea is to leave the rasterization pipeline when rendering billboards and do it in a compute shader instead, much like Forward+: you determine tile frustums, cull particles, sort them front to back, then render them until the accumulated alpha value reaches 1. The performance results at the end of the slides seem promising. Has anyone ever implemented this? Was it a success - is it worth doing? The front-to-back rendering is the most interesting part in my opinion, because overdraw can be eliminated for alpha blending. The demo is sadly no longer available.
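     For clarity, the front-to-back accumulation I mean is the "under" operator with an early out, sketched here on the CPU (in the AMD technique this would run per tile in a compute shader):

         struct Color { float R, G, B, A; };

         // SortedParticles must be sorted nearest-first.
         Color CompositeFrontToBack(const Color* SortedParticles, int Count)
         {
             Color Dst = { 0, 0, 0, 0 }; // Dst.A = accumulated opacity
             for (int i = 0; i < Count; ++i)
             {
                 const Color& Src = SortedParticles[i];
                 const float T = (1.0f - Dst.A) * Src.A;
                 Dst.R += T * Src.R;
                 Dst.G += T * Src.G;
                 Dst.B += T * Src.B;
                 Dst.A += T;
                 if (Dst.A >= 0.99f)
                     break; // tile is effectively opaque: remaining overdraw is skipped
             }
             return Dst;
         }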
  8. My SDF font looks great at large sizes, but not when I draw it at smaller sizes. I have my orthographic projection matrix set up so that each unit is one pixel. The text is rendered from FreeType2 to a texture atlas at 56 px with a spread of 8 pixels (the multiplier is 8x and scaled down). I'm drawing at 18 px in the screenshot attached to this post. The way I calculate the size of the text quads is by dividing the desired size (18 px in the screenshot) by the size of the glyphs in the atlas (56 px in this case), and scaling the glyph sprite by that factor. So 18/56 = ~0.32, and I multiply the rect's size vector by that when it comes to vertex placement (this obviously doesn't apply to the vertices' texture coords). Now, I made sure that all metrics stored in my SDF font files are whole numbers (rect position/size, bearing amounts, advance, etc.), but when I scale the font, vertex positions are almost always not whole numbers. I increase the "edge" smoothstep shader parameter for smaller text as well, but it doesn't seem to help all that much.
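     For reference, the approach I've seen suggested is to derive the smoothstep width from the distance field's screen-space derivative instead of tuning the "edge" parameter per size - in GLSL that would be fwidth(dist). A C++ mirror of the idea (illustrative only):

         #include <algorithm>

         float SmoothStep(float E0, float E1, float X)
         {
             const float T = std::clamp((X - E0) / (E1 - E0), 0.0f, 1.0f);
             return T * T * (3.0f - 2.0f * T);
         }

         // Dist is the sampled SDF value in [0,1]; DistDerivative is how much it
         // changes per screen pixel (fwidth(Dist) in a fragment shader), so the
         // antialiased edge stays about one pixel wide at any text size.
         float SdfAlpha(float Dist, float DistDerivative)
         {
             const float W = 0.5f * DistDerivative;
             return SmoothStep(0.5f - W, 0.5f + W, Dist);
         }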
  9. I was reworking my LightProbe filter, and I wrote some code to generate the reference cubemap, but then I noticed some discontinuities on the border of each face (top: CPU implementation, bottom: GPU implementation; the contrast has been adjusted on the right side). At first I thought it might be caused by the interpolation, but then I tried the same algorithm in 2D (like a slice of the normal light probe prefiltering) for better visualization, and the result really confused me. See the attachments: the top half is the prefiltered color value, displayed per channel (it's upside down because I used the color value directly as the y coordinate); the bottom half is the differential of the color. It's very clear there is a discontinuity, and its position is where the border should be. And as the roughness goes higher, the plot gets stranger. So I am kind of stuck here: what's happening, and what can I do to remove this artifact? Does anybody have any idea? Here is my code:

     inline FVector2D Map(int32 FaceIndex, int32 i, int32 FaceSize, float& SolidAngle)
     {
         float u = 2 * (i + 0.5) / (float)FaceSize - 1;
         FVector2D Return;
         switch (FaceIndex)
         {
         case 0: Return = FVector2D(-u, -1); break;
         case 1: Return = FVector2D(-1, u); break;
         case 2: Return = FVector2D(u, 1); break;
         case 3: Return = FVector2D(1, -u); break;
         }
         SolidAngle = 1.0f / FMath::Pow(Return.SizeSquared(), 3.0f / 2.0f);
         return Return.SafeNormal();
     }

     void Test2D()
     {
         const int32 Res = 256;
         const int32 MipLevel = 8;
         TArray<FLinearColor> Source;
         TArray<FLinearColor> Prefiltered;
         Source.AddZeroed(Res * 4);
         Prefiltered.AddZeroed(Res * 4);

         // Four 1D "faces": red, green, blue, black.
         for (int32 i = 0; i < Res; ++i)
         {
             Source[i] = FLinearColor(1, 0, 0);
             Source[Res + i] = FLinearColor(0, 1, 0);
             Source[Res * 2 + i] = FLinearColor(0, 0, 1);
             Source[Res * 3 + i] = FLinearColor(0, 0, 0);
         }

         const float Roughness = MipLevel / 8.0f;
         const float a = Roughness * Roughness;
         const float a2 = a * a;

         // Brute force sampling with GGX kernel
         for (int32 FaceIndex = 0; FaceIndex < 4; ++FaceIndex)
         {
             for (int32 i = 0; i < Res; ++i)
             {
                 float SolidAngle = 0;
                 FVector2D N = Map(FaceIndex, i, Res, SolidAngle);
                 double TotalColor[3] = {};
                 double TotalWeight = 0;
                 for (int32 SampleFace = 0; SampleFace < 4; ++SampleFace)
                 {
                     for (int32 j = 0; j < Res; ++j)
                     {
                         float SampleJacobian = 0;
                         FVector2D L = Map(SampleFace, j, Res, SampleJacobian);
                         const float NoL = (L | N);
                         if (NoL <= 0)
                             continue;
                         const FVector2D H = (N + L).SafeNormal();
                         const float NoH = (N | H);
                         float D = a2 * NoL * SampleJacobian
                             / FMath::Pow(NoH * NoH * (a2 - 1) + 1, 2.0f);
                         TotalWeight += D;
                         FLinearColor Sample = Source[SampleFace * Res + j] * D;
                         TotalColor[0] += Sample.R;
                         TotalColor[1] += Sample.G;
                         TotalColor[2] += Sample.B;
                     }
                 }
                 if (TotalWeight > 0)
                 {
                     Prefiltered[FaceIndex * Res + i] = FLinearColor(
                         TotalColor[0] / TotalWeight,
                         TotalColor[1] / TotalWeight,
                         TotalColor[2] / TotalWeight);
                 }
             }
         }

         // Save to bmp
         const int32 Width = 4 * Res;
         const int32 Height = 768;
         TArray<FColor> Bitmap;
         Bitmap.SetNum(Width * Height);

         // Prefiltered color curve, per channel
         float MaxDelta = 0;
         for (int32 x = 0; x < Width; ++x)
         {
             FColor SourceColor = Source[x].ToFColor(false);
             Bitmap[x] = SourceColor;
             FColor Sample = Prefiltered[x].ToFColor(false);
             check(Sample.R < 256);
             check(Sample.G < 256);
             check(Sample.B < 256);
             Bitmap[Sample.R * Width + x] = FColor(255, 0, 0);
             Bitmap[Sample.G * Width + x] = FColor(0, 255, 0);
             Bitmap[Sample.B * Width + x] = FColor(0, 0, 255);
             if (x > 0)
             {
                 const FLinearColor Delta = Prefiltered[x] - Prefiltered[x - 1];
                 MaxDelta = FMath::Max(MaxDelta,
                     FMath::Max3(FMath::Abs(Delta.R), FMath::Abs(Delta.G), FMath::Abs(Delta.B)));
             }
         }

         // Differential, per channel
         const float Scale = 128 / MaxDelta;
         for (int32 x = 1; x < Width; ++x)
         {
             const FLinearColor Delta = Prefiltered[x] - Prefiltered[x - 1];
             Bitmap[int32(512 + Delta.R * Scale) * Width + x] = FColor(255, 0, 0);
             Bitmap[int32(512 + Delta.G * Scale) * Width + x] = FColor(0, 255, 0);
             Bitmap[int32(512 + Delta.B * Scale) * Width + x] = FColor(0, 0, 255);
         }
         FFileHelper::CreateBitmap(TEXT("Test"), Width, Height, Bitmap.GetData());
     }

     Attachments: Roughness 0.5.bmp, Roughness 1.bmp
  10. Hello, I'd like to ask your take on Lagarde's renormalization of the Disney BRDF for the diffuse term, but applied to Lambert. Let me explain. In this document: https://seblagarde.files.wordpress.com/2015/07/course_notes_moving_frostbite_to_pbr_v32.pdf (page 10, listing 1) we see that he uses 1/1.51 * perceptualRoughness as a factor to renormalize the diffuse part of the lighting function. OK. Now let's take Karis's assertion at the beginning of his famous document: http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf (page 2, diffuse BRDF). I think his premise applies and is reason enough to use Lambert (at least in my case). But from Lagarde's document, page 11 figure 10, we see that Lambert looks frankly equivalent to Disney. From that observation, the question that naturally comes up is: if Disney needs renormalization, doesn't Lambert need it too? And I'm not talking about 1/π (that one is obvious), but about the roughness-related factor. A wild guess tells me that because there is no Schlick term in Lambert and no dependence on roughness, and as long as the 1/π is there, Lambert's albedo stays below 1 in all cases, so it shouldn't need further renormalization. So then, where does that extra energy appear in Disney? According to the graph, it's in the high-view-angle, high-roughness zone, so that would mean here: (cf. image) This is a very small difference. To my eyes it certainly doesn't justify the heavy darkening introduced by the 1/1.51 factor, which takes effect over a much wider range of the function. But this could be perceptual, or just my stupidity. Looking forward to being educated. Best
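     For reference, Burley's diffuse term (the one Lagarde renormalizes) is, in the Disney course notes' notation:

         f_d = \frac{\mathrm{baseColor}}{\pi}
               \left(1 + (F_{D90}-1)(1-\cos\theta_l)^5\right)
               \left(1 + (F_{D90}-1)(1-\cos\theta_v)^5\right),
         \qquad F_{D90} = 0.5 + 2\,\mathrm{roughness}\,\cos^2\theta_d

     Since F_{D90} exceeds 1 whenever roughness * cos^2(theta_d) > 0.25, the two grazing factors can each exceed 1 and the BRDF can return more than albedo/π, which is the energy the 1/1.51 factor compensates for; plain Lambert, f_d = albedo/π, never exceeds that bound.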
  11. Hello, I would like your help understanding the difference between the technical artist and graphics programmer roles. I'm very interested in art, maths, and programming, which is why I started to study computer graphics (before I knew the technical artist role existed), but after learning about it I got confused about the key similarities and differences between the two. Can both positions overlap and perhaps work on the same set of tasks/problems? What are the responsibilities of each? What skill set should one have to work in either of them? Thanks for your time, Regards.