2. ## Virtual Machine Questions

Hard to say. You don't know whether any Service Pack is installed, or which updates, VC runtimes, DirectX version, etc. (Those images are intended for web dev; I found another one for Win10 UWP development. They surely have some updates and maybe even IDEs preinstalled?) You can narrow this down by knowing what your application depends on, but at some point you can never be sure. Even real installation CDs exist in various versions, I guess. But if you have one and you know it's old, I'd trust that the most.
3. ## Point light question

You're computing your fragment position in view space, but your light is presumably defined in world space, and that's where you leave it. Personally, I find it much easier and more intuitive to do all my lighting in world space rather than the old-school view-space tradition. Just output the position multiplied by the world matrix from the vertex shader, and then everything else will generally already be in world space.
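Since the thread's code is HLSL and can't run here, here is a minimal CPU-side sketch in Python of the same idea (all helper names are made up for illustration): transform the position and the normal by the world matrix and leave the light where it already lives, in world space. Rotating the object then changes which face is lit, rather than the lit face following the object around:

```python
import math

def rotate_y(angle):
    # 3x3 world matrix: rotation about the Y axis
    c, s = math.cos(angle), math.sin(angle)
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

def mul(m, v):
    # matrix * column vector
    return [sum(m[r][k] * v[k] for k in range(3)) for r in range(3)]

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

def diffuse(world, position, normal, light_pos_world):
    # Transform position AND normal into world space; the light is
    # already in world space, so all three now agree on one space.
    p = mul(world, position)
    n = normalize(mul(world, normal))      # fine for pure rotations
    l = normalize([light_pos_world[i] - p[i] for i in range(3)])
    return max(sum(n[i] * l[i] for i in range(3)), 0.0)

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
light = [0.0, 0.0, 5.0]                    # fixed world-space light on +Z

# Face pointing at the light is fully lit...
facing = diffuse(identity, [0.0, 0.0, 1.0], [0.0, 0.0, 1.0], light)
# ...and after a 180-degree turn the same face is dark, as expected.
turned = diffuse(rotate_y(math.pi), [0.0, 0.0, 1.0], [0.0, 0.0, 1.0], light)
```

Note that the normal must go through the world matrix too (for non-uniform scale you'd use the inverse-transpose), which is the other half of the bug in the shader above.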
4. ## DungeonBot3000

Album for DungeonBot3000
5. ## A wild thought appeared

Correct. MD5 or any other cryptographic or simple hash is effectively useless here. Re-encoding an image with a different encoder, or the same encoder with different settings, would produce a vastly different hash in any algorithm except perceptual hashes. Perceptual hashes encode and compress the characteristics of an image in such a way that the hashes of similar images will have a small Hamming distance, despite distortions, artifacts, and watermarks. Check out the excellent pHash library. The image macros in your example were rated as similar using pHashML, though with a large Hamming distance.

On to compression: the problem with hashes is that they are generally one-way and have collisions. And in addition to collisions, the image would need to be reconstructed somehow. Even if the image is composed of deltas off of existing hashes, the data that makes up the uniqueness of that image must be encoded and stored somewhere. Requesting an image or retrieving one from storage would require a vast database of hashes and their data to reconstruct all possible images, which would be infeasible to store or expensive to construct.
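The Hamming-distance comparison is trivial once you have the hashes. A sketch in Python, using made-up 64-bit hash values (real values would come from a library such as pHash):

```python
def hamming(h1, h2):
    # count of differing bits between two 64-bit hashes
    return bin(h1 ^ h2).count("1")

# Hypothetical perceptual-hash values: a re-encode of the same image
# flips only a couple of bits, while an unrelated image differs
# almost everywhere.
original  = 0xD1C8F2E4A6B49370
reencoded = 0xD1C8F2E4A6B49372
unrelated = 0x2E37019B5C4DA8E6

assert hamming(original, reencoded) == 1    # small distance -> "similar"
assert hamming(original, unrelated) > 20    # large distance -> "different"
```

In practice you'd pick a distance threshold (often somewhere under 10 bits for 64-bit hashes) below which two images are treated as duplicates.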

7. ## DX11 Point light question

Hi guys, I have been looking into point light shaders and have created one which seems to work OK. But if I rotate the object that is being lit, the lighting rotates around with it, keeping the same face lit all of the time. This is what I have so far:

```hlsl
cbuffer WVPCB : register(b0)
{
    float4x4 matWorld;
    float4x4 matView;
    float4x4 matProjection;
}

cbuffer LIGHT : register(b1)
{
    float4 lightPosition;
    float4 lightAmbient;
}

struct VS_INPUT
{
    float4 position     : POSITION;
    float3 normal       : NORMAL;
    float2 textureCoord : TEXCOORD0;
};

struct VS_OUTPUT
{
    float4 position         : SV_POSITION;
    float3 normal           : NORMAL;
    float2 textureCoord     : TEXCOORD0;
    float4 lightPosition    : COLOR1;
    float4 lightAmbient     : COLOR2;
    float3 fragmentPosition : COLOR3;
};

VS_OUTPUT vs_main(VS_INPUT input)
{
    VS_OUTPUT output;
    output.position = mul(input.position, matWorld);
    output.position = mul(output.position, matView);

    // WV position of the model
    output.fragmentPosition = output.position;

    // Put light in correct place?
    output.position = mul(output.position, matProjection);

    // Pass parameters to the pixel shader
    output.normal = input.normal;
    output.textureCoord = input.textureCoord;
    output.lightPosition = lightPosition;
    output.lightAmbient = lightAmbient;
    return output;
}

float4 ps_main(VS_OUTPUT input) : SV_TARGET
{
    float3 lightColor = input.lightAmbient;
    float ambientStrength = input.lightAmbient.w;
    float3 ambient = ambientStrength * lightColor;

    float3 norm = normalize(input.normal);
    float3 lightDir = normalize(input.lightPosition - input.fragmentPosition);
    float diff = max(dot(norm, lightDir), 0.0f);
    float3 diffuse = diff * lightColor;

    float3 result = (ambient + diffuse);
    return squareMap.Sample(samLinear, input.textureCoord) * float4(result, 1.0f);
}
```

From my understanding, the fifth-last line in the PS should put the light back into the correct place (shouldn't it?). Or do I need to send the light position as a float4x4 matrix, similar to how you'd position the model itself? Any help would be greatly appreciated.
I'm almost there - LOL. Thanks in advance.
8. ## DungeonBot3000

Protocol 543-A7 has been activated. Systems are on emergency power, and batteries are running low after centuries of deactivation, but your directive is clear: eliminate the Cult of Gamed'ev. 650 years after the Uprising, the hibernating avatar of KHawk has emerged from cryosleep and awakened the others to resume his quest for world domination. Most of the defense systems have been taken off-line since then, due to disuse and the constant wearing of time, but a few holdouts remain. One such system, the DungeonBot3000, has reactivated in response to the defense signals and must endeavor to take out the resurgent Cult while operating on meager, easily-depleted power reserves. Deep within the subterranean corridors dug by the Cult during the first years of the Uprising, there can be found cast-off bits of technology that you can use to rebuild your systems, boost your power generation, and more effectively slaughter the members of the Cult. Take on Anonymous Posters, Users, Crossbones, Moderators, and the dreaded and powerful Staff in your quest to fulfill your directive.
10. ## A wild thought appeared

Where I work, I'm investigating video analytics, facial recognition, and algorithm-assisted image recognition solutions available on the market. There are some really rudimentary approaches (pixel matching), more interesting approaches (key item extraction and comparison), and even more complicated object identification and extraction coupled with model development for future comparisons.

The approach you talk about above, with the MD5 hash, will let you compare image files to one another, but MD5 fingerprints will only go so far, and the same goes for pixel matching: they rely on files being exactly the same in scale, rotation, etc. MD5s should be identical between files made at the same time, but if internal image metadata varies - not the image itself - the files may not be byte-identical, and will thus fail an MD5 test while being visibly identical. Pixel matching falls apart when the image has been shrunk, and such challenges need to be accounted for. The other approaches extract aspects of similarity from the image and use those extracted elements (for example a face, facial structure, etc.) to compare elements in images or videos for similarity and then determine a confidence level. It's a big field and there are a lot of data scientists, AI developers, and 'big data' analysts out there building these capabilities. If you're looking for 'real world' solutions, I'd recommend looking up OpenCV, TensorFlow, and cuDNN from an enabling perspective, and then products such as xJera and Qognify.

This is a bigger problem. Once something is out on the internet, how do you create any assurance that when you request a delete, anyone is going to respect that request? Until an authoritarian system exists for all content on the internet - and connected devices - which is incredibly unlikely to occur due to privacy, data ownership rights, and patent law (to name a few), there will be no true way to ensure that any form of delete request will be adhered to.
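The metadata point is easy to demonstrate: a cryptographic hash has no notion of "the image itself", so one changed byte anywhere in the file yields an unrelated digest. A quick Python sketch with stand-in bytes (the `META:` prefix and pixel data are invented for illustration):

```python
import hashlib

pixels = b"\x00\x7f\xff" * 1000          # stand-in for identical pixel data
file_a = b"META:camera=A;" + pixels      # same image, metadata variant A
file_b = b"META:camera=B;" + pixels      # same image, metadata variant B

md5_a = hashlib.md5(file_a).hexdigest()
md5_b = hashlib.md5(file_b).hexdigest()

# Visibly identical images, completely different MD5 fingerprints.
assert md5_a != md5_b
```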
11. ## D3D12 Fence and Present

Oh wait, I think I found a possible reason. Maybe it's because the copy operation in the blt model is not finished. It's holding the front buffer. There ARE 3 buffers (1 front, 2 back) but the "display buffer" is currently using one of them (the front buffer, to copy from), so the GPU command list is blocked by it until the copy operation is finished. Is this valid?

13. ## D3D12 Fence and Present

Question 1: does this mean the present-to-render-target barrier is unnecessary? (Since the entire command list stopped - as opposed to the command list being executed but getting blocked at the barrier - because of some magic that the driver(?) performed.) A separate question: according to the Microsoft DX12 page, the buffer count parameter of DXGI_SWAP_CHAIN_DESC is: So, question 2: in the above example, isn't the actual buffer count 4 (the number you created the swap chain with)? One of them is the front buffer and 3 of them are back buffers. Only this way can it support that point, because if the top "colored block" is not part of the swapchain (meaning you created the swap chain with buffer count 3), why is the GPU blocked by it?
14. ## D3D12 Fence and Present

On modern hardware under Windows, the number of commands submitted to the hardware at any given time is pretty small - generally one or two per piece of schedulable hardware (or zero if it's idle). Depending on what type of swapchain we're dealing with, a "present" operation is either a hardware operation (i.e. a flip / scanout pointer swap) or a software operation (notifying some other component about the frame). In both cases, the present is queued up alongside rendering work in a software queue, until the hardware is ready to process it. If the present is a hardware command, then it's submitted to the hardware when it reaches the front of the queue. If it's a software command, then it's processed by the OS at that time. With that said, for some types of presents, a "present" object is constructed at the time where the present is enqueued. So, really both models are right - something is created at the time when Present is called, even though nothing actually happens with the present until all prior rendering work is complete. Any waiting due to a fence is completely up to the application. If the app only allows 3 frames of GPU work in flight, then yes that's where the app would block waiting for a fence. And yes, that is where the app would block in Present. More or less - the GPU is not processing commands, because working on the next command list would involve writing to / modifying the swapchain buffer, and it's not ready to be modified yet. It's the entire command list that's stopped. The GPU does not process any commands in the command list before the barrier prior to waiting.
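The "3 frames of GPU work in flight" policy mentioned above is purely application-side bookkeeping against a fence value. A toy Python model of that throttling (all names are made up - a real D3D12 app would use `ID3D12Fence`, `SetEventOnCompletion`, and `WaitForSingleObject`):

```python
MAX_FRAMES_IN_FLIGHT = 3

class ToyFence:
    """Stands in for an ID3D12Fence: tracks the last frame the 'GPU' finished."""
    def __init__(self):
        self.completed = 0
    def signal(self, value):
        self.completed = value

def frames_ahead(cpu_frame, fence):
    return cpu_frame - fence.completed

fence = ToyFence()
cpu_frame = 0
max_ahead = 0
for _ in range(10):
    # WaitForSingleObject equivalent: block until fewer than
    # MAX_FRAMES_IN_FLIGHT frames are outstanding.
    while frames_ahead(cpu_frame, fence) >= MAX_FRAMES_IN_FLIGHT:
        fence.signal(fence.completed + 1)   # the "GPU" retires a frame
    cpu_frame += 1                          # record, submit, Present
    max_ahead = max(max_ahead, frames_ahead(cpu_frame, fence))
```

The invariant the loop enforces is exactly the one described above: the CPU never runs more than 3 frames ahead of the GPU, and that wait is where the app blocks - not inside the driver.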
15. ## D3D12 Fence and Present

Based on your reply, I changed the original Intel diagram a little bit just to make sure I understand what you mean. The first diagram is the original one. The second diagram is what I made. The third one has some marks so that you know what I'm talking about. Looking at the third diagram, you can notice the red rectangle indicates what I changed: I made the GPU work last longer, which caused some other changes to the pipeline. Indicated by the yellow rectangle, I presume this is what you mean by . The GPU work lasts longer for that frame; consequently, the "present queue" has to wait for the GPU to finish this frame. Also, by I think you are saying that since the "present queue" will wait for GPU work to finish, we might as well think of the frame as not being put in the "present queue" until the GPU finishes its work for that frame.

1. Now, my first question is: which way better visualizes what happens at the hardware level? (Even though they make no difference conceptually - it only changes where the start of a "colored block" in the "present queue" is, and the start does not matter as much as the end.)
2. My second question is: within the green rectangle, the (light blue) CPU thread is blocked by a fence (dark blue) and then blocked by Present (purple), am I right?
3. My third question is: within the blue rectangle, the brown "GPU thread" (command queue) is blocked by a present-to-render-target barrier, am I right?
16. ## Screen Flash before Event Caught by WndProc

A different perspective! Thanks, Josheir
17. ## Virtual Machine Questions

There are four versions of Windows 7, each with a different browser. I guess to test my application on Windows 7 I would set up the VM with the first choice (IE 8). I wonder, though: will this be the initial Windows installation that I can depend on for determining what my client doesn't have, and therefore what they need to run my application? Thanks, Josheir

19. ## D3D12 Fence and Present

Ah, I see where the confusion's coming from. A frame in the "present queue" is waiting for all associated GPU work to finish before actually being processed and showing up on screen, as well as for all previous frames to be completed. The way I prefer to think about / visualize it is that a frame is waiting in the GPU queue until all previous work is completed, and is then moved to a present queue after that, where it waits for all previous presents to complete.
20. ## Pierce vs Penetration

In some games (particularly TBS ones), pierce is used to indicate that an attack deals damage to all enemies within range (or sometimes just more than one, it depends) while penetrating is generally used to indicate overcoming defense.
21. ## The Fire of Ardor - Developing a Dungeon Crawler Challenge Submission

Very interesting! Good job!

Thanks Promit. I will try to reproduce as you mentioned

31. ## Anyone who wants to write a little game engine?

Realtime means computation fast enough to control a process. This leads to another requirement: a time limit on system response, which in turn requires predictable execution time for each fragment of code. GC cannot meet this requirement, because the moment and duration of "stop the world" processing are unpredictable. Really, these are the basics of realtime software. Also, any scripting or GC language violates another requirement for reliable software: it must not use heap reallocation, because the only way to guarantee the presence of required memory is to allocate it at system startup. Pascal is dead. Ada and Fortran are too rare and lack modern language features. C is outdated and has no OOP. And no other native OOP languages exist. Also, C++ has the most advanced instruments for building high-level abstractions and creating flexible automatic memory and resource management tools.
32. ## VGA to HDMI dilemma

Yeah plug the HDMI into a decent monitor (PC->AV->Box->HDMI->Monitor) and use the OSD to see what resolution and Hz the box is actually outputting. What's the big picture problem you're solving, btw? Need to wirelessly send analog video from a PC to a display somewhere? What distance? What restrictions on that problem?
33. ## VGA to HDMI dilemma

Does plugging the VGA->HDMI converter into a monitor's HDMI port instead of the DJI remote work? It sounds like that converter should convert analog to digital, which is the only thing I could think of.
34. ## Malware is compiled into my exe... (but only for one project)

You can check against a wide range of virus scanners by uploading the suspect file to VirusTotal.com. In general, a false positive is more likely to be identified as different things by different scanners, and also to be marked as clean by most of them. Assuming it is a false positive, you have a few options:
1. Consider changing to a different antivirus program, especially if you're not distributing your software to other people yet. I've never had any trouble with the free one that's built into newer versions of Windows.
2. Submit a false positive report to all of the places that detect it as a virus.
3. Try to make changes to stop it from being detected. For example, switching to a different installer may help.
35. ## VGA to HDMI dilemma

I agree. We have a scope but I am not an EE. And probably a logic analyzer is required for video signals. I chose this forum because it is active and I am a hard core graphics game developer.

37. ## D3D12 Fence and Present

Yeah! I totally agree. I am waiting for this. So, if Present is a queued operation, why does this diagram indicate that the CPU thread generates two "colored blocks", one on the GPU queue and one on the "present queue", and why does the timeline make it look like the Present is ahead of the actual rendering? Does that make sense?
38. ## D3D12 Fence and Present

A Present is a queued operation, just like rendering work. It doesn't get executed until the rendering work is done. If you submit rendering work A to a queue, and then rendering work B to a queue, it doesn't really make sense to ask what happens when B starts executing before A is done... because by definition A has to finish before B can start. A Present is queued the same way.
39. ## D3D12 Fence and Present

Sorry, I should have given it a little explanation. What I'm asking is what will happen if the GPU hasn't finished rendering the frame but the Present is being "executed" to display it. The reason I didn't draw the rest of the pipeline is not that it's drained; it's just that I don't think it's necessary to show the rest, since it's irrelevant and it's also a lot of work to type out ; ).
40. ## VGA to HDMI dilemma

To be honest, this is more of a hardware question and you might have posted in the wrong forum. This isn't generally a game developer thing, and it's the sort of thing best diagnosed with an oscilloscope and an electrical engineer. Have you tried the technical support offered with the device?
41. ## D3D12 Fence and Present

I'm not sure I'm following your diagram or question. Are you asking what is displayed if you let your GPU queue drain entirely (i.e. stop submitting new work)? The screen just doesn't update and it will continue displaying buffer #3 until it has something new to replace it. The CPU won't be blocked though, because at that point you've only got one frame queued, so the CPU will continue running ahead until it gets back up to 3 frames queued.
42. ## SFXOrc Speech Sound Effect !

A quick post to share my Orc language sound effect. Nothing fancy, but that kind of thing is always useful for video games.
43. ## D3D12 Fence and Present

Continuing my previous example - please bear with me. Now, assume one of the previous 3 frames is done - really done, as in on-screen - and the GPU workload for the current frame is very heavy.

```
0. We have already completed steps 1 to 8 three times. (i = 1, 2, 3. Now i = 1 again)
1. WaitForSingleObject(i)
2. Barrier(i) present -> render target   <---- "GPU thread" (command queue) was here
3. Record commands...
4. Barrier(i) render target -> present
5. ExecuteCommandList
6. Signal
7. Present                               <---- CPU thread was here
8. Go to step 1

cpu ... present|
gpu ... barrier|----------------heavy work----------------|
-----------------------------------------------------------------------------
        |   3   |   1   |
    |   2   |   3   |   1   |
|   1   |   2   |   3   |   1   |
-----------------------------------------------------------------------------
screen  |   3   |   1   |   2   |   3   |   ?   |
```

My question is: what should the question mark be in the diagram above? Or will this even happen? Thanks!
44. ## R&D VGA to HDMI dilemma

I have a DJI Matrice 600, and it has the ability to take an HDMI signal from the drone and send it wirelessly to display on a remote controller. I plugged a PC's HDMI output into the HDMI port and it works. The PC is at 800x600, 60 Hz, 24 bits. I have another PC with VGA output and cheap VGA-to-HDMI converters. I set that PC's resolution to 800x600, 60 Hz, 24 bits and get no signal on the remote. Why would a PC's HDMI video out work, but not a signal converted from another PC's VGA output? https://www.amazon.com/GANA-Converter-Monitors-displayers-Computer/dp/B01H5BOLYC The obvious reason is that the PC is producing a different HDMI signal than the converters are. But according to the converter specs and DJI specs, it should work. DJI claims to support 720p, 1080i, and 1080p, so I assume the 800x600 signal is being converted to 720p to work. Thanks for any input as to how to debug this issue.
45. ## PDF manual?

Just a quick question... Is there a PDF manual somewhere that describes the scripting language? I am of course familiar with the website, and it contains pretty much exactly what I need, but the dark forces of the universe (*cough*QA*cough*) love to see their printed, or at least printable, manuals... Thanks!
46. ## Anyone who wants to write a little game engine?

This is arguable. Maybe we have two different definitions of what realtime means, and I'm assuming your definition isn't as nuanced as it should be. Looking and seeing are two different acts: the latter implies the answer is in front of you, the former implies you did some prior research before making an assertion.
47. ## [DX9] Displaying font loaded by FreeType

Yes, I realize that. Hence this code: float sy = tmp.y + ( m_nMaxYOffset - glyph->offsetY ) * m_fScaleVert; And it seemed that I had fixed the issue with the various widths, until I increased the font size and the same thing happened again. 22 font size 23 font size Someone else had the same issue, and it seems he fixed it by subtracting the offset from the character T's offset. Thanks.
48. ## Aleatory

Discover and create a peaceful, surreal soundscape in VR! https://katanalevy.itch.io/aleatory This project is about creativity - taking small blocks of musical ideas, using tools to shape them and add a little colour before sending them out to be played back in a surreal soundscape around you. Relax in a visual style inspired by various sci-fi painters and illustrators such as Moebius, Roger Dean and John Harris. There are no end goals, just relax and play. Made in UE4 for Oculus Rift (and possibly Vive, although not tested yet). Headset and motion controllers required. Controls: B - Reset View; Y - Start/Restart Game; Grip/Trigger - Grab, pick up, manipulate; Joystick Left/Right - Rotate left and right (not needed for 360 room scale). If you have any issues or comments, please feel free to drop me an email.
49. ## Dungeon Crawler Challenge - Update 3.1

This video is "old" now after work over the weekend, but here it is anyway. When I am able to work on this project, I end up working until it's quite late trying to nail down just "one more thing", and I either forget to leave time for capturing a video or tell myself that tomorrow I'll have something better. This video shows the PC - which I simplified to a rectangle with feet and a single dot for an eye - walking around in the darkness of the labyrinth, picking up the occasional torch rock (which needs to be repositioned to appear in his hand) and exploring a bit. Eventually our hero comes to a section of the maze where some inhabitants from the Frogger challenge still persist, including some cars that have had their sprites changed to very plain squares. Nothing there is particularly harmful, but he jumps past one of the sleeping denizens of the labyrinth a couple of times before demonstrating that it is a bad idea to attempt to swim in the swamp.
50. ## Anyone who wants to write a little game engine?

This. And it applies to physics and AI, too. Video games are fundamentally interactive magic shows - "smoke and mirrors." Very few games simulate anything you see on the screen with a high degree of physical accuracy. And this is usually intentional - artists and designers tend not to even want physical accuracy. They want the game to look like the image they have in their head of how it should look. They almost always prefer gameplay that is "fun" over gameplay that has accurate physics. They don't want AI that is genuinely "smart," they want AI that feels smart and loses to the player in an interesting way. In Doom, why do barrels of radioactive waste explode when you shoot them? Radioactive waste is not inherently explosive. Answer: because it's fun and players love the gameplay opportunities it affords.

52. ## D3D12 Fence and Present

Doesn't calling WaitForSingleObject on a fence block the CPU thread? Yes - explicitly calling WaitForSingleObject on an event which will be signaled by SetEventOnCompletion is related to fences. All I meant was that any implicit blocking within the Present API call is not necessarily related to fences; it's only related to the "maximum frame latency" concept. To answer your specific questions: 1. Yes. 2. Yes. 3. Yes. Work which is submitted against a resource that is being consumed by the compositor or screen is delayed until the resource is no longer being consumed. The fact that the command list writes to the back buffer is most likely detected during the call to the resource barrier API, and implicitly negotiated with the swapchain and graphics scheduler at ExecuteCommandLists time, to ensure that the command list doesn't begin execution until the resource is available. Also, to clarify: by "GPU thread" we're talking about the command queue. If you had a second command queue, or a queue in a different process, it'd still be possible for that queue to execute while the one writing to the back buffer is waiting.
