Search the Community

Showing results for tags 'R&D' in content posted in Graphics and GPU Programming.

Found 15 results

  1. So, in real life, incoming dot normal at the silhouette is always 0. With smooth-shaded meshes, it never is, not naturally, not outside of contrived situations. (Not with flat-shaded meshes either, I guess.) And incoming dot normal is one of the bedrocks of CG, probably the equal of 4x4 matrix multiplication. Problems with silhouette normals show up in Fresnel, in diffuse lighting, in environment mapping... everywhere. But I can't really find anybody talking about it. (Maybe I'm not Googling the right terms.) Obviously, the problem decreases as poly count goes up, eventually reaching a point where it's dwarfed by other silhouette problems (like translucency or micro-occlusion) that CG doesn't handle well either. But, if I'm reasoning correctly, normal maps don't improve the problem-- they're as likely to exacerbate it as improve it, and the exacerbations are, aesthetically speaking, probably worse than the improvements are better. I've tried playing with crude fixes-- basically, rotating normals toward incoming by a percentage, or of course clamping incoming dot normal (like we all have to do) to prevent it from bending behind the mesh (a rough sketch of these crude fixes is after the results list below). Nothing I've tried looks good. I suppose the best option might be to rotate normals to be perpendicular to incoming at the silhouette and then interpolate toward the nearest inflection point of something like screen-space depth to preserve curvature, but the math for how to do that is beyond me, and I'm not sure it would look any better. Or maybe, instead, somehow adjust the drawn silhouette to match the silhouette defined by incoming dot normal? I'm not even sure how that would work if the normal was pointing away from incoming. I don't know-- is this a solvable problem? Has anyone tried other stuff and given up, pursued anything that was promising but too expensive, anything like that? Are there any papers I'm missing? It's really surprising to me that I can't find anyone else talking about this. (Apologies if I chose the wrong subforum for this. I considered the art forums, but I felt that people frequenting the programming forums would have more to say on the subject.)
  2. How is the BSDF used in Kajiya's rendering equation? We know that path tracing provides a numerical (Monte Carlo) estimate of its solution, and we first see the BSDF appear in the path tracing algorithm. Beyond that, is there a way to use multiple BSDFs in a full rendering process? If you have links to any books or websites, please share them! (The equation itself is written out after the results list below.)
  3. I have large vectors of triangles that make up a mesh for a given model. What I would like to do is iterate through the mesh looking for shared edges, then move on to the next triangle until I have searched every triangle in the mesh. The triangles are stored in a vector in the order I read them from a file, but there may be a better way to set up for this type of search. Can anyone point me to some resources that would help me figure out how to find shared edges? (One common approach is sketched after the results list below.)
  4. Hi everyone here, hope you just had a great day writing something shining at 60 FPS :) I found a great GDC talk about the GI solution in Tom Clancy's The Division, given by Ubisoft's Nikolay Stefanov. Everything looks nice, but I have some questions I can't resolve: what is the "surfel" he talks about, and how is a "surfel" represented? (A sketch of a typical surfel record is after the results list below.) As far as I've searched, there are only some academic papers that don't look close to my problem domain; the "surfel" those papers talk about uses points as the topology primitive rather than triangle meshes. Are these "surfel"s the same terminology and concept? From 10:55 he explains that they "store an explicit surfel list each probe 'sees'", which is literally the same as storing the surfel list of the first ray-cast hits from the probe in certain directions (which he mentions just a few minutes later). So far I have a similar probe-capturing stage during the GI baking process in my engine: I get a G-buffer cubemap at each probe's position, facing the 6 coordinate axes. But what I store in the cubemap is the rasterized texel data of the world position, normal, albedo and so on, which is bounded by the resolution of the cubemap. Even if I could tag some kind of surface ID during asset creation to mimic "surfel"s, they still wouldn't be accurately transferred to the "explicit surfel list each probe 'sees'" if I keep doing the traditional cubemap work. Do I need to ray cast on the CPU to get an accurate result? Thanks for any kind of help.
  5. Now that my real-time software renderer is almost complete after many years, I'm finding it difficult to research the current state of the art in the area to compare performance. Most other software renderers I've found are either unreleased hobby projects, slow emulations of GPUs with all their limitations, or targeting very old CPUs from the 1990s. The best so far was a near-real-time CPU ray-tracing experiment by Intel from around 2004. Feel free to share any progress you've made on the subject, or any interesting real-time software renderers you've found. By real-time, I mean at least 60 FPS at a reasonable resolution for the intended purpose. By software, I mean no dependencies on Direct3D, OpenGL, OpenCL, Vulkan or Metal.
  6. zfvesoljc

    R&D Trails

    I'm currently working on implementing a trail system and I have the basic stuff working. A moving object can specify a material, width, lifetime, and time-based curves for evaluating colour and width. For each trail point I generate two vertices: the trail point is the center, vertex A is offset "left" and vertex B is offset "right", by half the width each. A setting for the minimum distance between two trail points determines how spread out they are. (A sketch of this per-point setup is after the results list below.) This works nicely until the width and turning angle get so "tight" that one side of the trail triangles starts overlapping, which in the case of additive shading causes ugly artefacts. So, I'm now playing with ideas on how to solve this:
    - do some vertex detection magic and check for overlapping, maybe discard overlapping vertices or move them closer together
    - push both vertices to one side of the trail, i.e. A = point, B = point + width (instead of A = point + half_width, B = point - half_width), but I have yet to figure out how to detect when I need to do this
    Any other solutions or tips? I forgot to mention, I'm doing the mesh generation on the CPU side.
  7. I'm learning about light probes used for dynamic global illumination. I have a question regarding the placement of light probes: based on most of the pictures I have seen, they seem to be placed uniformly in a grid. Is that a reasonable way to place them? I feel that more should be placed in corners than in the middle of an area. Is there any research on light probe placement that minimizes the overall data needed for rendering? This GDC talk http://twvideo01.ubm-us.net/o1/vault/gdc2012/slides/Programming%20Track/Cupisz_Robert_Light_Probe_Interpolation.pdf mentions irregular placement on tetrahedrons and how to do the interpolation, but it doesn't seem to say much about placement itself. This paper http://melancholytree.com/thesis.pdf mentions that only one probe is needed inside a convex shape, but I don't see people doing this; is it because that only works for static global illumination without moving objects? What's the latest development in light probe placement?
  8. I have a DJI Matrice 600, and it has the ability to take an HDMI signal from the drone and send it wirelessly to display on a remote controller. I plugged a PC's HDMI output into the HDMI port and it works. The PC is at 800x600, 60 Hz, 24-bit. I have another PC with VGA output and cheap VGA-to-HDMI converters. I set that PC's resolution to 800x600, 60 Hz, 24-bit and get no signal on the remote. Why would a PC's HDMI video output work, but not a signal converted from another PC's VGA output? https://www.amazon.com/GANA-Converter-Monitors-displayers-Computer/dp/B01H5BOLYC The obvious explanation is that the PC is producing a different HDMI signal than the converters are. But according to the converter specs and the DJI specs, it should work. DJI claims to support 720p, 1080i and 1080p, so I assume the 800x600 signal is being converted to 720p to work. Thanks for any input as to how to debug this issue.
  9. Hello. The GCN paper says that one of its SIMD engines can have up to 10 wavefronts in flight. Does that mean it can run 10 wavefronts simultaneously? If so, then how? By pipelining them? AFAIK wavefronts are scheduled by a scheduler. How does the scheduler interact with the SIMD engine to make this possible? Do these 10 wavefronts belong to only one instruction?
  10. Hi everyone! I need to convert a 32-bit PFM (HDR) file, reading it pixel by pixel, into a usual LDR format and later write it into PPM and BMP files. Can someone please give me an equation or a snippet to do this? Is it enough to tonemap? (A minimal sketch is after the results list below.)
  11. Hello all, sorry for my English! I found a description of the fastest reflection algorithm for flat surfaces: http://www.remi-genin.fr/blog/screen-space-plane-indexed-reflection-in-ghost-recon-wildlands I want to recreate it in Unity. I attached an example for testing: https://www.drive.google.com/open?id=1WfgCpxwx8k6lgALHY6s_588p1D4lYEb6 I have some problems. 1) Why does the example use reflection along the Z axis? After all, the example reflects off the horizontal Y surface. 2) In my implementation the hash buffer is incorrect. Perhaps the buffer is written in reverse? I added a z-buffer distance limit for testing; you can see what happens at 40/100/1000 meters. Here are the contents of the hash buffer. I tried switching InterlockedMax/InterlockedMin, reversing the z-buffer value, and writing without hashing (just writing the UV), but still did not succeed. 3) Another interesting question is filling the holes. The author writes about this, but temporal reprojection for static images will not work (or am I mistaken?), and in the second part I did not understand what he was doing. Is he blending the previous frame with the current frame of the reprojection? P.S. There are a couple more links for this method, but the code is more complicated and confusing. http://www.guitarjawa.net/wordpress/wp-content/uploads/2018/04/IMPLEMENTATION-OF-OPTIMIZED-PIXEL-PROJECTEDREFLECTIONS-FOR-PLANAR-REFLECTORS.pdf http://www.advances.realtimerendering.com/s2017/PixelProjectedReflectionsAC_v_1.92.pdf
  12. Hi all, since the release of OptiX 6.0 a few days ago I have been playing with the new RTX support. I have uploaded an initial demo on ompf2, which showcases ray tracing performance with and without RTX hardware support: https://ompf2.com/viewtopic.php?f=8&t=2158 The demo should run on Maxwell, Pascal and Turing, provided that you have installed the latest drivers. - Jacco.
  13. I found a talk from GDC, now on YouTube, about procedural animation. The video. When I saw what could be done with it, I knew I had to learn it. Any tips or pointers for a beginner to build a strong grasp of procedural animation? I understand the 2D math so far and I'm ready for the 3D, but do you have any tips or go-to resources? As I work through the 3D math, maybe I should go and make my own custom animator in DirectX, or is that a bit much? I even saw an online course on the robotics version of this. But I don't want to go off on a tangent or pick up some useless books on it when there are better ones.
  14. Hello everyone here in the GameDev forums, it's my first post and I'm very happy to participate in the discussion here, thanks, everyone! My question is: is there any workaround to implement something in DX11 similar to DX12's CBV with an offset, and/or something in OpenGL similar to the dynamic UBO descriptor in Vulkan? My purpose is to achieve a unified per-object resource updating design across the different APIs in my engine. I've gathered all the per-object resource updates into a few coherent memory writes in the DX12 and Vulkan rendering backends, and later record all the descriptor binding commands with the per-object offset. (A sketch of the DX11/OpenGL side is after the results list below.) DX12 example code:

      WriteMemoryFromCPUSide();
      for (auto object : objects)
      {
          offset = object->offset;
          commandListPtr->SetGraphicsRootConstantBufferView(startSlot, constantBufferPtr->GetGPUVirtualAddress() + offset * elementSize);
          // record draw call
      }

      Vulkan example code:

      WriteMemoryFromCPUSide();
      for (auto object : objects)
      {
          uint32_t dynamicOffset = object->offset * elementSize; // byte offset into the dynamic UBO
          vkCmdBindDescriptorSets(commandBufferPtr, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayoutPtr, firstSet, setCount, descriptorSetPtr, 1, &dynamicOffset); // pDynamicOffsets expects a pointer to the offset(s)
          // record draw call
      }

      I have an idea to record the per-object offset as an index/UUID, then use an explicit UBO/SSBO array in OpenGL and a StructuredBuffer in DX11, but I would still need to submit the index to GPU memory at some point. Hope everyone has a good day!
  15. Heyo! For the last few months I've been working on a realtime raytracer (like everyone currently), but have been trying to make it work on my graphics card, an NVidia GTX 750 Ti - a good card but not an RTX or anything... So I figured I'd post my results since they're kinda cool, and I'm also interested to see if anyone might have some ideas on how to speed it up further. Here's a dreadful video showcasing some of what I have currently: I've sped it up a tad and fixed reflections since then, but eh, it gets the gist across. If you're interested in trying out a demo or checking out the shader source code, I've attached a Windows build (FlipperRaytracer_2019_02_25.zip). I develop on Linux so it's not as well tested as I'd like, but it works on an iffy laptop I have, so hopefully it'll be alright XD. You can change the resolution and whether it starts up in fullscreen in a config file next to it, and in the demo you can fly around, change the lighting setup and adjust various parameters like the frame blending (increase samples) and disabling GI, reflections, etc. If anyone tests it out I'd love to know what sort of timings you get on your GPU. Currently I can achieve about 330 million rays a second, enough to shoot 3 incoherent rays per pixel at 1080p at 50fps - so not too bad overall. I'm really hoping to bump this up a bit further to 5 incoherent rays at 60fps... but we'll see.

      I'll briefly describe how it works now :). Each render loop it goes through these steps:
      - Render the scene into a 3D texture (voxelize it)
      - Generate an acceleration structure akin to an octree from that
      - Render the G-buffer (I use a deferred renderer approach)
      - Calculate lighting by raytracing a few rays per pixel
      - Blend with previous frames to increase the sample count
      - Finally, output with motion blur and some tonemapping
      Pretty much the most obvious way to do it all.

      The main reason it's quick enough is the acceleration structure, which is kinda cool in how simple yet effective it is. At first I tried distance fields, which, while really efficient to step through, just can't be generated fast enough in real time (I could only get it down to 300ms for a 512x512x512 texture). Besides, I wanted voxel-accurate casting for some reason anyway (blocky artifacts look so good...), so I figured I'd start there. Doing an unaccelerated raycast against a voxel texture is simple enough: just cast a ray and test against every voxel the ray intersects, stepping through it voxel by voxel using a line-stepping algorithm like DDA. The cool thing is, by voxelizing the scene at different mipmaps it's possible to take differently sized steps by checking which is the lowest-resolution mipmap with empty space. This can be precomputed into a single texture, so that information is available in one sample. I've found this gives pretty similar raytracing speed to the distance fields, but it can be generated in 1-2ms, ending up with a texture like this (a 2D slice): It also has some nice properties: if the ray is cast directly next to and parallel to a wall, instead of moving tiny amounts each step (due to the distance field saying it's super close to something), it'll move... an arbitrary amount depending on where the wall falls on the grid :P. Still, the worst case is the same as the distance field and its best case is much better, so it's pretty neat.

      For the raytracing I use some importance sampling, directing the rays towards the lights. I find just picking a random importance sampler per pixel and shooting towards that looks good enough, and it allows as many as I need without changing the framerate (only the noise). Then I throw a random ray to calculate GI/other lights, and a ray for reflections. The global illumination works pretty simply too: when voxelizing the scene I throw some rays out from each voxel, and since they raycast against themselves, each frame I get an additional bounce of light :D. That said, I found that a bit slow, so I have an intermediate step where I actually render the objects into a low-resolution lightmap, which is where the raycasts take place; then when voxelizing I just sample the lightmap. This also theoretically gives me a fallback in case a computer can't handle raytracing every pixel or the voxel field isn't large enough to cover an entire scene (although currently the lightmap is... iffy... wouldn't use it for that yet XD).

      Then I use the usual temporal anti-aliasing technique to increase the sample count and anti-alias the image. I previously had a texture that would keep track of how many samples had been taken per pixel, resetting when viewing a previously unviewed region, and used this to properly average the samples (so it converged much faster/actually did converge...) rather than using the usual exponential blending (a small sketch of that averaging is after the results list below). That said, I had some issues integrating any sort of sample discarding with anti-aliasing, so currently I just let everything smear like crazy XD. I think the idea there is to have separate temporal supersampling and temporal anti-aliasing, so I might try that out. That should improve the smearing and noise significantly... I think XD.

      Hopefully some of that made sense and was interesting :). Please ask any questions you have, I can probably explain it better haha. I'm curious to know what anyone thinks, and of course any ideas to speed it up/develop it further are very much encouraged. Ooh, I'm also working on some physics simulations, so you can create a realtime cloth in it by pressing C - just the usual position-based dynamics stuff. Anyway, questions on that are open too :P. FlipperRaytracer_2019_02_25.zip
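Sketches referenced in the results above

For result 1, a minimal sketch of the "crude fixes" the post describes: clamping incoming dot normal, and bending (lerping) the shading normal toward the incoming (view) direction by a fraction. The Vec3 type and the blend-factor parameter are illustrative assumptions, not anything from the post.

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float Dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  Scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
    static Vec3  Add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3  Normalize(Vec3 a)      { return Scale(a, 1.0f / std::sqrt(Dot(a, a))); }

    // v points from the surface toward the camera ("incoming" in the post).
    float ClampedNdotV(Vec3 n, Vec3 v)
    {
        return std::max(Dot(n, v), 0.0f); // the usual clamp, so the normal never faces behind the viewer
    }

    // Pull the shading normal toward the view direction by fraction t (0 = unchanged, 1 = fully view-facing).
    Vec3 BendNormalTowardView(Vec3 n, Vec3 v, float t)
    {
        return Normalize(Add(Scale(n, 1.0f - t), Scale(v, t)));
    }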
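For result 2, the equation in question written out in LaTeX: Kajiya's rendering equation, with the BSDF f_r sitting inside the integral. A path tracer estimates this integral by Monte Carlo sampling (tracing random paths) rather than solving it in closed form.

    L_o(x, \omega_o) = L_e(x, \omega_o)
        + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, \mathrm{d}\omega_i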
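For result 3, a sketch of one common approach (an illustration, not the only way): key each edge by its two vertex indices in sorted order and map the key to the triangles that use it; any edge used by two or more triangles is shared. This assumes an indexed mesh; with raw per-triangle positions you would first weld identical positions into shared indices.

    #include <algorithm>
    #include <cstdint>
    #include <map>
    #include <utility>
    #include <vector>

    struct Triangle { uint32_t v[3]; }; // three vertex indices

    using EdgeKey = std::pair<uint32_t, uint32_t>; // (smaller index, larger index)

    // Maps every edge to the list of triangles (by position in 'tris') that use it.
    std::map<EdgeKey, std::vector<size_t>> BuildEdgeMap(const std::vector<Triangle>& tris)
    {
        std::map<EdgeKey, std::vector<size_t>> edgeToTris;
        for (size_t t = 0; t < tris.size(); ++t)
        {
            for (int e = 0; e < 3; ++e)
            {
                uint32_t a = tris[t].v[e];
                uint32_t b = tris[t].v[(e + 1) % 3];
                EdgeKey key{ std::min(a, b), std::max(a, b) }; // order-independent key
                edgeToTris[key].push_back(t);
            }
        }
        return edgeToTris; // entries with two or more triangles are shared (interior) edges
    }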
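For result 4, a sketch of the kind of record "surfel" (surface element) generally refers to in the papers mentioned: a small oriented disc sampled from a surface, independent of the triangle topology it came from. The field names here are illustrative guesses, not taken from the GDC talk.

    struct Surfel
    {
        float position[3]; // world-space sample position
        float normal[3];   // surface orientation at the sample
        float albedo[3];   // diffuse reflectance used when gathering bounced light
        float radius;      // disc radius (area = pi * radius * radius)
    };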
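For result 6, a minimal sketch of the per-point setup described in the post (it reproduces the overlap problem on tight turns; it is only meant to make the geometry concrete). The choice of an "up" axis for the side vector is an assumption; a camera-facing axis works the same way.

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static Vec3 Sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3 Add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3 Scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
    static Vec3 Cross(Vec3 a, Vec3 b)  { return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x }; }
    static Vec3 Normalize(Vec3 a)      { float l = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); return Scale(a, 1.0f / l); }

    // Emits two vertices (A = "left", B = "right") per trail point, half the width to each side.
    void EmitTrailVertices(const std::vector<Vec3>& points, float width, Vec3 up, std::vector<Vec3>& outVertices)
    {
        if (points.size() < 2)
            return;
        for (size_t i = 0; i < points.size(); ++i)
        {
            // Trail direction: toward the next point, or from the previous point at the last vertex.
            Vec3 dir = (i + 1 < points.size()) ? Sub(points[i + 1], points[i]) : Sub(points[i], points[i - 1]);
            Vec3 side = Normalize(Cross(Normalize(dir), up)); // degenerate if dir is parallel to 'up'
            outVertices.push_back(Add(points[i], Scale(side,  width * 0.5f))); // vertex A
            outVertices.push_back(Add(points[i], Scale(side, -width * 0.5f))); // vertex B
        }
    }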
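For result 10, a minimal sketch of one reasonable answer (an assumption, not the only one): Reinhard tonemapping followed by gamma encoding maps each 32-bit float channel of the PFM into an 8-bit value you can write to PPM or BMP.

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // Maps one HDR colour channel to an 8-bit LDR value.
    uint8_t HdrToLdr(float hdr)
    {
        hdr = std::max(hdr, 0.0f);                     // PFM data can contain stray negatives
        float mapped  = hdr / (1.0f + hdr);            // Reinhard tonemap: [0, inf) -> [0, 1)
        float encoded = std::pow(mapped, 1.0f / 2.2f); // gamma-encode for display
        return static_cast<uint8_t>(std::min(encoded * 255.0f + 0.5f, 255.0f));
    }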
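For result 14, a sketch of what I believe are the closest equivalents (the overall approach is an assumption, though the API calls themselves are real): in D3D11.1, ID3D11DeviceContext1::VSSetConstantBuffers1 binds a sub-range of one large constant buffer, with the range expressed in 16-byte constants and aligned to 256 bytes; in OpenGL, glBindBufferRange does the same for a UBO binding point, subject to GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT.

    #include <d3d11_1.h>

    // Binds a 256-byte-aligned slice of one big constant buffer for the current object.
    void BindPerObjectRangeD3D11(ID3D11DeviceContext1* ctx, ID3D11Buffer* bigConstantBuffer,
                                 UINT slot, UINT objectIndex, UINT elementSizeBytes)
    {
        // D3D11.1 expresses the range in 16-byte shader constants; both values must be
        // multiples of 16 constants (i.e. elementSizeBytes must be a multiple of 256).
        UINT firstConstant = (objectIndex * elementSizeBytes) / 16;
        UINT numConstants  = elementSizeBytes / 16;
        ctx->VSSetConstantBuffers1(slot, 1, &bigConstantBuffer, &firstConstant, &numConstants);
    }

    // OpenGL equivalent, for comparison (offset must respect GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT):
    //   glBindBufferRange(GL_UNIFORM_BUFFER, bindingIndex, uboHandle,
    //                     objectIndex * elementSizeBytes, elementSizeBytes);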
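For result 15, a minimal sketch of the sample-count averaging mentioned near the end (names are illustrative, and the reset flag stands in for whatever disocclusion test is used): each new sample is blended in with weight 1/(n+1), which converges to a true average, unlike a fixed exponential blend.

    struct AccumPixel
    {
        float    rgb[3];   // running average of the samples so far
        unsigned samples;  // number of samples accumulated for this pixel
    };

    // Folds one new sample into the running average; resets when the history is invalid.
    void AccumulateSample(AccumPixel& pixel, const float newSample[3], bool historyValid)
    {
        if (!historyValid)          // e.g. a previously unseen / disoccluded region
            pixel.samples = 0;

        float w = 1.0f / float(pixel.samples + 1); // weights 1, 1/2, 1/3, ... instead of a fixed blend factor
        for (int c = 0; c < 3; ++c)
            pixel.rgb[c] = pixel.rgb[c] * (1.0f - w) + newSample[c] * w;

        ++pixel.samples;
    }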