All Activity

  1. Past hour
  2. Vulkan DirectX - Vulkan clip space

    Thank you both; for the time being I will go with flipping in the vertex shaders. Though I have no system in place for that yet, I will have to add it manually with a SPIRV macro. This way I don't have to mess with the culling direction.
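
    For reference, a minimal sketch of this kind of vertex-shader flip, guarded by the SPIRV macro mentioned above; the constant buffer name and layout are illustrative assumptions, not from any particular engine:

    ```hlsl
    cbuffer CameraCB : register(b0)
    {
        float4x4 g_WorldViewProj; // assumed camera transform
    };

    float4 main(float3 pos : POSITION) : SV_Position
    {
        float4 clipPos = mul(float4(pos, 1.0f), g_WorldViewProj);
    #ifdef SPIRV
        // Vulkan's clip-space Y axis points down relative to DirectX, so flip it here
        clipPos.y = -clipPos.y;
    #endif
        return clipPos;
    }
    ```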
  3. Unfortunately, I don't have a machine with Win 7 any more and can't keep up with syncing with the old DirectX SDK; developing just for the new one is so much easier. Though I'm nearly finished with the Vulkan implementation, which should eliminate the Win 10 requirement.

    You are right, mip maps here are like different 3D textures, but in DirectX they are stored in the same resource as the main 3D texture, which enables efficient access from the shaders: with one sample operation you can potentially load from multiple mips (sub-3D textures) at once, if your sampler has "linear" filtering mode. This is also called quadrilinear filtering, as you mentioned.

    We don't need view dependency for the diffuse part of the illumination. This calculation only takes into consideration the surface normal, and shoots rays inside a hemisphere directed along that normal. This step involves first having your surface position, which you start the rays from. You also have the surface normal in world space, which gives the ray direction. Now you can start stepping along the ray. On each step, you convert the position on the ray, which is in world space, to your voxel texture space, then sample. Repeat until you have accumulated more than one alpha, or reached the coarsest mip level, or reached some predefined maximum distance (to avoid an infinite loop).

    You are actually right, sampling only gives the surrounding voxel colors, but those should already contain the scene with direct illumination, so it's exactly what you want. You will need view direction information for specular reflections, which you can also retrieve from the voxels. The algorithm is the same, but you are shooting a single ray in the reflection direction. Note that my code uses the function "ConeTrace" for both diffuse and specular GI. I hope that was of some help.
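
    A hedged sketch of one such diffuse gather loop, following the steps described above; all names (TraceDiffuse, gridCenter, gridHalfExtentInv...) are illustrative, not the actual engine API:

    ```hlsl
    // One gather along 'rayDir' from surface point P; 'voxels' holds the pre-lit voxel scene.
    float4 TraceDiffuse(Texture3D<float4> voxels, SamplerState samLinear,
                        float3 P, float3 rayDir,
                        float3 gridCenter, float gridHalfExtentInv,
                        float voxelSize, float maxMip, float maxDist)
    {
        float4 result = 0;
        float dist = voxelSize; // start one voxel out to avoid self-sampling
        while (result.a < 1.0f && dist < maxDist) // stop at full opacity or max distance
        {
            // world space -> [0,1] texture space
            float3 tc = (P + rayDir * dist - gridCenter) * gridHalfExtentInv * 0.5f + 0.5f;
            if (any(tc - saturate(tc)))
                break;                             // ray left the voxel grid

            float mip = log2(dist / voxelSize);    // sample coarser mips farther out
            if (mip >= maxMip)
                break;                             // reached the coarsest mip

            float4 sam = voxels.SampleLevel(samLinear, tc, mip);
            result += (1.0f - result.a) * sam;     // front-to-back accumulation
            dist += voxelSize * exp2(mip);         // step size grows with the mip
        }
        return result;
    }
    ```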
  4. circle drawing method comparison

    Until you draw a million circles per second, I wouldn't worry much about performance; whatever you pick, it's always fast enough for a few circles. As to why it's faster: CPUs have changed and compilers have improved, and it's extremely hard to understand why code is faster or slower. For example, I would not be surprised if the loop of the second solution is unrolled a few times, with several points computed in parallel in SSE. Secondly, sending a bunch of vectors to the GPU versus sending loads of pixels to the GPU does make a huge difference.
  5. I'm not sure what it is you can't understand, but maybe it is how to approximate the above with cone tracing. First, notice that instead of weighting by cosine, you should change the distribution of the rays: more rays in the normal direction, fewer in tangent directions (importance sampling). Then you can simply average, and get better results with fewer rays... just to mention it.

    So, e.g. instead of tracing 1000 hemisphere rays we want to trace 4 cones. We can only expect to get an approximate but similar result, and we try to make it as good as possible. We start by dividing the 1000 rays into 4 bundles of 250 rays, and we calculate a bounding* cone for each of them. Then we trace the cones, which is similar to sphere tracing, but for cone tracing we constantly grow the sphere size so it fits our cone as it marches along the ray. We also increase the step size by the same factor. The sphere size also sets the mip level for trilinear filtering, so the volume data resolution fits the sphere size. Each time we sample from the volume data, we accumulate color and alpha, and if alpha > 1 we stop tracing, because we assume all 250 rays have hit something at that point. The accumulated color divided by alpha then represents the averaged color from all 250 rays. It's just an approximation that becomes better the more cones you use (3 is the minimum to approximate a hemisphere; 4, 5, 7... give better results).

    Direction (hit point normal) does not matter. You seem to be confused by that, but it would not matter for ray tracing either. The normal at the hit point would only matter if you did not just have a point at the hit, but something like a patch or surfel - something that has an area you would want to integrate; there, the visible area from the receiver depends on the surface normal of the emitting patch (radiosity method). Ray tracing does not care - it approximates this by the distribution of the ray directions (or their weighting, as you said).

    I like this example for understanding how diffuse GI is calculated: place a perfect mirror ball at the receiving point you want to calculate. Take a photo of the mirror ball from the direction along the receiver's surface normal (avoid perspective in the photo, so orthogonal if possible). Sum up all the pixel colors from the photo of the mirror ball and divide by the number of pixels (no weighting necessary). This is the final incoming light; we're done. Notice that this simple example explains the most important things about GI, and how easy it is to derive the related math (like ray or cone distribution or weighting, cone directions and angles, etc.) from it. Also, the normal direction of emitters does not matter - only the image that appears on the mirror ball.

    *) a strictly bounding cone would not be very good, as it would duplicate too much space between the cones, but just to visualize...
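
    The "grow the sphere to fit the cone" step above comes down to a few lines of math; this is a hedged sketch with illustrative names (coneHalfAngle, voxelSize), not code from any specific engine:

    ```hlsl
    // One step of growing-sphere cone marching.
    void ConeStep(inout float dist, float coneHalfAngle, float voxelSize,
                  out float diameter, out float mip)
    {
        // The cone's cross-section diameter at 'dist' is 2 * dist * tan(halfAngle);
        // never let it shrink below one voxel.
        diameter = max(voxelSize, 2.0f * dist * tan(coneHalfAngle));
        // Pick the mip whose voxel size matches the sphere diameter (mip 0 = one voxel).
        mip = log2(diameter / voxelSize);
        // Advance by the sphere radius so consecutive spheres stay contiguous.
        dist += diameter * 0.5f;
    }
    ```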
  6. Ah... sorry, you are right. Because I am deaf, my English is not okay; I need to learn more. I would like to say sorry for my bad English - please do not be angry with me! You know I am a coder for C#, but I am really shy because I am a beginner coder, and deaf. I have experience with HTML5 and JavaScript (a bit less), and C# is like a baby-light language. Please excuse my bad English; I need to learn more details of the languages.
  7. Today
  8. Hi turanszkij, many, many thanks for your answer. Some months ago I made an intensive study of your source code and your home page. Although I work with C#, with my basic C++ knowledge I managed to compile and run your engine. But sadly I did not save the code, and after trying to download and compile it again I saw that I cannot use the engine, because I only have Win7 64-bit and your engine requires Win10 / DirectX 12. Do you still have a Win7 version?

    In the ray marching context I understand: opacity there is easy to detect, and the progress is only e.g. one pixel, so the problem I have is not touched. My problem of "understanding occlusion query" is solved by tracing each pixel step by step.

    Now here is my problem. When doing mip mapping on a 3D texture, several coarser 3D texture mip maps are created, right? But when sampling the 3D texture at a given point and a given mip level, it will always return the same value, right? When sampling with "quadrilinear interpolation" we get smooth values, but always the same value, independent of the view direction of the cone. See the following part of your code: the coneDirection is only used for calculating the next position in texture space to sample from. But if sampling the mip map is view-direction independent, how can it reproduce the correct color? Sampling a point within a 3D mipmap is something totally different from getting a projection (of the view) of the colors of the voxels it consists of (when looking at it from a certain viewpoint)? To my understanding, the sampling just gives me the interpolation of the accumulated surrounding voxel colors at the sample point.

    I still need help in understanding; please be patient. Best regards, evelyn

    Part of the code from the turanszkij game engine:

    ```hlsl
    float diameter = max(g_xWorld_VoxelRadianceDataSize, 2 * coneAperture * dist);
    float mip = log2(diameter * g_xWorld_VoxelRadianceDataSize_Inverse);

    // Because we do the ray-marching in world space, we need to remap into 3d texture space before sampling:
    // todo: optimization could be doing ray-marching in texture space
    float3 tc = startPos + coneDirection * dist;
    tc = (tc - g_xWorld_VoxelRadianceDataCenter) * g_xWorld_VoxelRadianceDataSize_Inverse;
    tc *= g_xWorld_VoxelRadianceDataRes_Inverse;
    tc = tc * float3(0.5f, -0.5f, 0.5f) + 0.5f;

    // break if the ray exits the voxel grid, or we sample from the last mip:
    if (any(tc - saturate(tc)) || mip >= (float)g_xWorld_VoxelRadianceDataMIPs)
        break;

    float4 sam = voxels.SampleLevel(sampler_linear_clamp, tc, mip);
    ```
  9. I use Visual Studio 2015; I believe it only supports up to C++11. But I doubt it is the newer C++ version that causes the problem for you - after all, AngelScript works even without C++11 on older compilers too. I've tried several options: 64-bit, 32-bit, with or without AS_NO_THREADS, debug and release mode, and so far I haven't been able to reproduce this error. I did identify another problem while doing this, related to opImplCast with the variable type: the compiler would try to use it even for non-reference types, and cause an assert failure (in debug mode) or a crash (in release mode). I've fixed this in revision 2491. You will need this fix too, but unfortunately it doesn't fix the problem you've reported.
  10. You could always just google for a good water normal map like this. If you want random noise to make the texture more dynamic, like real water, you could do something easy and hacky, like interpolating the normals over time between the original from the texture and the original's inverse.
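
    A hedged sketch of that hack; the texture, sampler, and time names are placeholders, and the blend factor is deliberately kept small:

    ```hlsl
    Texture2D    waterNormalMap : register(t0);
    SamplerState samLinear      : register(s0);

    float3 AnimatedWaterNormal(float2 uv, float time)
    {
        // Unpack the stored normal from [0,1] to [-1,1]
        float3 n = waterNormalMap.Sample(samLinear, uv).xyz * 2.0f - 1.0f;
        // Oscillate a small blend factor toward the inverted normal; keep it
        // well below 0.5 so the lerped vector never collapses to zero length.
        float blend = 0.25f * (0.5f + 0.5f * sin(time));
        return normalize(lerp(n, -n, blend));
    }
    ```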
  11. I'm not sure if I'm understanding the question clearly, but they are different languages and libraries. Porting from one to the other would mean re-writing the code. To do this, you would need a good understanding of all the languages and libraries involved. At a basic level you would just look at each line in the original version, and figure out the equivalent functions in your target language. In reality for non-trivial projects it's a bit more complex.
  12. Pixelpunk XL - Out Now!

    Greetings! I'm happy to announce that Pixelpunk XL has been released today on Steam! I thank all the people who support me and give feedback on my posts. Steam page: http://store.steampowered.com/app/803850/Pixelpunk_XL/
  13. As far as I understand, there is no real random or noise function in HLSL. I have a big water polygon, and I'd like to fake water-wave normals in my pixel shader. I know it's not efficient, and the standard way is really to use a pre-calculated noise texture, but anyway... does anyone have any quick and dirty HLSL shader code that fakes water normals and doesn't look too repetitious?
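
    For what it's worth, a common workaround is a hash built from trig functions rather than a true noise function; below is a quick-and-dirty sketch in that spirit (wave frequencies and amplitudes are arbitrary illustrative values):

    ```hlsl
    // Classic shader hash trick (not a built-in): pseudo-random value in [0,1) from a 2D seed.
    float hash12(float2 p)
    {
        return frac(sin(dot(p, float2(12.9898f, 78.233f))) * 43758.5453f);
    }

    // Sum a few scrolling sine waves and perturb the up vector (tangent space, Y up).
    float3 FakeWaterNormal(float2 uv, float time)
    {
        float h1 = sin(uv.x * 40.0f + time * 1.3f) * 0.05f;
        float h2 = sin(uv.y * 53.0f - time * 1.7f) * 0.04f;
        float h3 = hash12(floor(uv * 64.0f)) * 0.01f; // tiny jitter to break up repetition
        return normalize(float3(h1 + h3, 1.0f, h2 + h3));
    }
    ```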
  14. I guess Method 2 it is. I have some ideas on how to minimize the tile looping each frame. Thanks, guys!
  15. Advice

    Apologies for not being very specific; I am new to this. I would like to focus more on the design side of things. Thank you as well for linking the article on job interviews; I will be sure to have a look at it.
  16. Need Coders Existing System

    I don't expect any one person to know all of this, but perhaps two people can work it together. My existing customer base is over 175k unique users.
  17. PHP, MySQL, C, C++, Javascript, API development, web design, Joomla. Existing system; I no longer have a full-time coder. This is for the experience: first assignments will be volunteer work, and easy as you adapt; then per-project, with residual income if you want to continue. You must be willing to sign an NDA; if overseas, you must be willing to sign an NDA in your country through one of our acquaintances. Any experience in combat and role play is a plus.
  18. Method 2, but you want to avoid looping through all of the tiles every frame. Here is an example of how to do that: https://gamedev.stackexchange.com/questions/32140/effecient-tilemap-rendering Also, you may want to look up quadtrees to help with spatial queries as well.
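
    The core of that optimization is just converting the camera rectangle into a range of tile indices, so only visible tiles are drawn; here is a sketch with illustrative names (camX, viewWidth, mapCols...):

    ```hlsl
    // 16x16 tiles: clamp the camera rectangle to a tile index range.
    void VisibleTileRange(float camX, float camY, float viewWidth, float viewHeight,
                          int mapCols, int mapRows,
                          out int firstCol, out int firstRow,
                          out int lastCol, out int lastRow)
    {
        firstCol = max(0, (int)(camX / 16.0f));
        firstRow = max(0, (int)(camY / 16.0f));
        lastCol  = min(mapCols - 1, (int)((camX + viewWidth)  / 16.0f));
        lastRow  = min(mapRows - 1, (int)((camY + viewHeight) / 16.0f));
        // Then loop only over [firstRow..lastRow] x [firstCol..lastCol]
        // instead of iterating the whole map every frame.
    }
    ```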
  19. Back and Forth

    Aerazmet, you can get feedback here if you post portions of your writing. If you want offline private conversations, this is the wrong place for that (this forum is for online public conversations).
  20. Strategy Gniarf

    Hello GameDev! I did a minor update of my app (0.052a): some crashes that were happening should now be resolved. I also added a small feature: you can double-tap in the menu to select the next unmoved unit. That way, when you have one big territory, it's much easier to move all your gniarfs instead of clicking on them one by one. I have made some more levels but haven't added them yet; I'll wait until I have more, so I can adjust the difficulty and make some bonus levels. Be patient! Thanks for reading! (Small video showing the double tap: https://twitter.com/twitter/statuses/987319146605088769)
  21. Advice

    Court, your first question is too unspecific to be answered. You say "this field" but there is no field matching the major you cited ("game animation and design"). There is animation, and there is design. Which one are you planning to focus on? I wrote an article on job interviews too. As for your second question, it depends. If you're in North America, it's not hard for a female applicant to "find" jobs (as you asked) but it can be challenging to get hired (for anyone). Depending on what culture you live in, there can definitely be a bias towards hiring males. And female workers in games tend to be paid less than males (an unfairness that the industry knows it needs to fix).
  22. Hi, first of all, a concrete simple example of how to sample the voxel grid, which is contained in a 3D texture: https://github.com/turanszkij/WickedEngine/blob/master/WickedEngine/voxelConeTracingHF.hlsli#L32

    For understanding cone tracing, first you should understand ray tracing, and how to approximate it in a numeric way when you don't have an explicit parametric definition of your scene surface, just a bunch of data. The approximation is called ray marching, and the data is your texture, which is built from pixels. In ray marching, you start at the beginning of the ray, look up the corresponding pixel value, then advance along the ray direction by one pixel and sample the texture again. If you just sampled a pixel with opacity = 1, that means you just hit a surface.

    Cone tracing is the same, but you don't want to trace a single ray - you want many rays at once, making up a cone. An approximation for that is pre-integrating the texture into different levels of detail, called mip mapping. Each level contains less data than the previous one, with each value being the average of several values from the level below. So do the exact same thing as with ray marching - start at the ray's beginning and go along the ray direction - but with each step, increase the mip level you sample from, so each sample gives you a precomputed average of many samples. Linear filtering works for 3D textures as well, and it ensures that when you sample from an increased mip level, the result is weighted by the sample position, so it produces a "nice" gradient of colors when visualized. Also, you must keep track of the opacity, because with the data being pre-integrated, the opacity value you read will not be just one or zero, but an average of nearby pixels. You can be sure that you hit a surface when the accumulated alpha exceeds the value of one.

    I hope that made some sense, good luck!
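
    A hedged sketch of the plain one-pixel-step ray march described above, before any mip mapping is involved; the grid parameters and function names are placeholders, not the linked engine code:

    ```hlsl
    // Placeholder remap from world space into [0,1]^3 texture space;
    // gridCenter/gridHalfExtent describe the voxel volume.
    float3 WorldToTextureSpace(float3 p, float3 gridCenter, float gridHalfExtent)
    {
        return (p - gridCenter) / (2.0f * gridHalfExtent) + 0.5f;
    }

    // March one voxel at a time until we hit an opaque pixel or give up.
    float4 RayMarch(Texture3D<float4> voxels, SamplerState samPoint,
                    float3 rayStart, float3 rayDir, float voxelSize,
                    float3 gridCenter, float gridHalfExtent, uint maxSteps)
    {
        float3 p = rayStart;
        for (uint i = 0; i < maxSteps; ++i)
        {
            float3 tc = WorldToTextureSpace(p, gridCenter, gridHalfExtent);
            if (any(tc - saturate(tc)))
                return 0;                    // ray left the volume
            float4 sam = voxels.SampleLevel(samPoint, tc, 0);
            if (sam.a >= 1.0f)
                return sam;                  // opacity = 1 means we hit a surface
            p += rayDir * voxelSize;         // advance by one pixel (voxel)
        }
        return 0;                            // nothing hit within maxSteps
    }
    ```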
  23. Method 2 is not processing-heavy in any real sense of the word. You don't need to perform complex geometry on each tile - their position alone is enough to tell you whether they are offscreen or not.
  24. Good morning! I have been having great fun with UE4 and am looking for projects to spend some time on. I have a bachelor of software engineering and am employed full time. Let me know if you have any interest. Thank you.
  25. Hi Forum, in terms of rendering a tiled game level, let's say the level is 3840x2208 pixels using 16x16 tiles. Which method is recommended?

    Method 1: draw the whole level once, store it in a texture object, and each frame only render what's in view.
    Method 2: on each frame, loop through all tiles, and only draw a tile to the window if it's in view.

    Are both of these methods valid? Are there other ways? I know Method 1 is memory-intensive while Method 2 is processing-heavy. Thanks in advance.
  26. Why do some games look orange?

    Have you seen this blog entry? It might be of interest.
  27. Advice

    Hi there, I am currently studying for a diploma of screen and media and want to move on to the bachelor of game animation and design after this course finishes. I was just wondering whether anyone had any advice on landing job interviews and finding work in general in this field. I was also wondering how hard it is to find a job in this field for females? Many thanks.