    1. Past hour
    2. Hard to say. You don't know whether any Service Pack is installed, or which updates, VC runtimes, or DirectX version are present. (Those images are intended for web dev; I found another for Win10 UWP development. They surely have some updates and maybe even IDEs preinstalled.) You can narrow this down by knowing what your application depends on, but at some point you can never be sure. Even real installation CDs exist in various versions, I'd guess. But if you have one and you know it's old, I'd trust that the most.
    3. Promit

      Point light question

      You're computing your fragment position in view space, but your light is presumably defined in world space, and that's where you leave it. Personally I find it much easier and more intuitive to do all my lighting in world space rather than the old-school view-space tradition. Just output the position multiplied by the world matrix to the pixel shader, and then everything else will generally already be in world space.
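      For illustration, a minimal HLSL sketch of that world-space approach, reusing the names from the shader quoted in the question further down this page (it assumes lightPosition is given in world space and that matWorld has uniform scale, so the normal can be transformed by it directly):

          VS_OUTPUT vs_main(VS_INPUT input)
          {
              VS_OUTPUT output;
              float4 worldPos = mul(input.position, matWorld);       // world-space position
              output.fragmentPosition = worldPos.xyz;                // pass world position to the PS
              output.position = mul(mul(worldPos, matView), matProjection);
              output.normal = mul(input.normal, (float3x3)matWorld); // world-space normal (uniform scale assumed)
              output.textureCoord = input.textureCoord;
              output.lightPosition = lightPosition;                  // stays in world space
              output.lightAmbient = lightAmbient;
              return output;
          }

      In the pixel shader the light direction then becomes normalize(input.lightPosition.xyz - input.fragmentPosition), with no space mismatch.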
    4. JTippetts

      DungeonBot3000

      Album for DungeonBot3000
    5. fastcall22

      A wild thought appeared

      Correct. MD5 or any other cryptographic or simple hash is effectively useless here. Re-encoding an image with a different encoder, or the same encoder with different settings, would produce a vastly different hash under any algorithm except perceptual hashes. Perceptual hashes encode and compress the characteristics of an image in such a way that the hashes of similar images have a small Hamming distance, despite distortions, artifacts, and watermarks. Check out the excellent pHash library. The image macros in your example were rated as similar using pHashML, though with a large Hamming distance.

      On to compression: the problem with hashes is that they are generally one-way and have collisions. And in addition to collisions, the image would need to be reconstructed somehow. Even if the image is composed of deltas off of existing hashes, the data that makes that image unique must be encoded and stored somewhere. Requesting or retrieving an image from storage would require a vast database of hashes and their data to reconstruct all possible images, which would be infeasible to store and expensive to construct.
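      To make the Hamming-distance comparison concrete, a small C# sketch (the two 64-bit hash values are made up, standing in for whatever a perceptual hash such as pHash would produce for an image and a re-encoded copy of it):

          using System;

          class PerceptualHashDemo
          {
              // Hamming distance between two 64-bit perceptual hashes:
              // the number of bit positions in which they differ.
              static int HammingDistance(ulong a, ulong b)
              {
                  ulong x = a ^ b;
                  int count = 0;
                  while (x != 0) { count++; x &= x - 1; } // clear the lowest set bit
                  return count;
              }

              static void Main()
              {
                  ulong original  = 0xD1C3A59F00FF37AC; // hypothetical hash of the original image
                  ulong reencoded = 0xD1C3A59F04FF33AC; // hypothetical hash of a re-encoded copy
                  // A small distance (say, under ~10 of 64 bits) suggests similar images.
                  Console.WriteLine($"Hamming distance: {HammingDistance(original, reencoded)}");
              }
          }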
    6. JTippetts

      DungeonBot3000

      I wanted to take part in the latest Challenge, Dungeon Crawl. Unfortunately, due to things both foreseen and unforeseen, I haven't had a whole lot of time to work on it. Nevertheless, I will carry on even if I'm pretty sure I won't be able to finish all of the requirements.

      You are DungeonBot3000, a prototype combat droid built during the ill-fated Uprising. After 650 years, the nefarious Cult of Gamed'ev has once again arisen in the depths, led by the reanimated avatar of KHawk, freshly emerged from cryo-sleep. After so many centuries, your systems have degraded and your power generation capabilities are severely handicapped; nevertheless, the dictates of your directive are clear: eliminate the Cult. Arming yourself with castoff bits of tech you find along the way, you must enter the depths of Gamed Dungeon, the system of caves and tunnels from within which the evil Cult waged their Uprising so long ago, and within which the new Cult has emerged. Fight your way through Anonymous Posters, Users, Crossbones and powerful Moderators as you quest into the depths to confront your ultimate targets: KHawk and his Staff. Along the way, find bits of technology to help you boost your power generation and upgrade your abilities, enhancing your speed and combat effectiveness. But beware: as the Cult brings their defenses online, the areas you must clear will become more and more deadly.

      So far, I've got a basic (and sufficient for the challenge) maze gen system in place, and have begun work on the controllers and combat. Enemies spawn in groups and are driven by the Detour Crowd functionality of Urho3D to pursue relentlessly. The combat system is ARPG-style, and I've currently implemented a spin attack. More to come, though probably not before the deadline.
    7. Hi Guys, I have been looking into point light shaders and have created one which seems to work OK. But if I rotate the object that is being lit, the light rotates around with it, keeping the same face lit all of the time. This is what I have so far:

          cbuffer WVPCB : register(b0)
          {
              float4x4 matWorld;
              float4x4 matView;
              float4x4 matProjection;
          }

          cbuffer LIGHT : register(b1)
          {
              float4 lightPosition;
              float4 lightAmbient;
          }

          struct VS_INPUT
          {
              float4 position : POSITION;
              float3 normal : NORMAL;
              float2 textureCoord : TEXCOORD0;
          };

          struct VS_OUTPUT
          {
              float4 position : SV_POSITION;
              float3 normal : NORMAL;
              float2 textureCoord : TEXCOORD0;
              float4 lightPosition : COLOR1;
              float4 lightAmbient : COLOR2;
              float3 fragmentPosition : COLOR3;
          };

          VS_OUTPUT vs_main(VS_INPUT input)
          {
              VS_OUTPUT output;
              output.position = mul(input.position, matWorld);
              output.position = mul(output.position, matView);

              // WV position of the model
              output.fragmentPosition = output.position;

              // Put light in correct place?
              output.position = mul(output.position, matProjection);

              // Pass parameters to the pixel shader
              output.normal = input.normal;
              output.textureCoord = input.textureCoord;
              output.lightPosition = lightPosition;
              output.lightAmbient = lightAmbient;
              return output;
          }

          float4 ps_main(VS_OUTPUT input) : SV_TARGET
          {
              float3 lightColor = input.lightAmbient;
              float ambientStrength = input.lightAmbient.w;
              float3 ambient = ambientStrength * lightColor;

              float3 norm = normalize(input.normal);
              float3 lightDir = normalize(input.lightPosition - input.fragmentPosition);
              float diff = max(dot(norm, lightDir), 0.0f);
              float3 diffuse = diff * lightColor;

              float3 result = (ambient + diffuse);
              return squareMap.Sample(samLinear, input.textureCoord) * float4(result, 1.0f);
          }

      From my understanding, the lightDir line in the PS should put the light back into the correct place (shouldn't it?). Or do I need to send the light position as a float4x4 matrix, similar to how you'd position the model itself? Any help would be greatly appreciated. I'm almost there - LOL. Thanks in advance.
    8. JTippetts

      DungeonBot3000

      Protocol 543-A7 has been activated. Systems are on emergency power, and batteries are running low after centuries of deactivation, but your directive is clear: eliminate the Cult of Gamed'ev. 650 years after the Uprising, the hibernating avatar of KHawk has emerged from cryosleep and awakened the others to resume his quest for world domination. Most of the defense systems have been taken off-line since then, due to disuse and the constant wearing of time, but a few holdouts remain. One such system, the DungeonBot3000, has reactivated in response to the defense signals and must endeavor to take out the resurgent Cult while operating on meager, easily-depleted power reserves. Deep within the subterranean corridors dug by the Cult during the first years of the Uprising, there can be found cast-off bits of technology that you can use to rebuild your systems, boost your power generation, and more effectively slaughter the members of the Cult. Take on Anonymous Posters, Users, Crossbones, Moderators, and the dreaded and powerful Staff in your quest to fulfill your directive.
    9. Today
    10. Stragen

      A wild thought appeared

      Where I work, I'm investigating video analytics, facial recognition, and algorithm-assisted image recognition solutions that are available in the market. There are some really rudimentary approaches (pixel matching), more interesting approaches (key item extraction and comparison), and even more complicated object identification and extraction coupled with model development for future comparisons. The approach you talk about above, with the MD5 hash, will allow you to compare image files to one another, but MD5 fingerprints only go so far, and the same goes for pixel matching: they rely on files being exactly the same in scale, rotation, etc. MD5s should be identical between files made at the same time, but if internal image metadata varies - not the image itself - the file may not be identical, and thus fail an MD5 test while being visibly identical. Pixel matching falls apart when the image has been shrunk, and such challenges need to be accounted for. The other approaches extract aspects of similarity out of the image, and use those extracted elements (for example, a face or face structure) to compare elements in images or videos for similarity and then determine a confidence level. It's a big field, and there are a lot of data scientists, AI developers, and 'big data' analysts out there building these capabilities. If you're looking for 'real world' solutions that are out there, I'd recommend looking up OpenCV, TensorFlow, and cuDNN from an enabling perspective, and then products such as xJera and Qognify.

      This is a bigger problem. Once something is out on the internet, how do you create any assurance that when you request a delete, anyone is going to respect that request? Until an authoritarian system exists for all content on the internet - and connected devices - which is incredibly unlikely to occur due to privacy, data ownership rights, and patent law (to name a few), there will be no true way to ensure that any form of delete request will be adhered to.
    11. acerskyline

      D3D12 Fence and Present

      Oh wait, I think I found a possible reason. Maybe it's because the copy operation in the blt model is not finished; it's holding the front buffer. There ARE 3 buffers (1 front, 2 back), but the "display buffer" is currently using one of them (the front buffer, to copy from), so the GPU command list is blocked by it until the copy operation is finished. Is this valid?
    12. slayemin

      A wild thought appeared

      This morning I went for a long drive and started thinking about image recognition. The question is: how can you train a computer to recognize a common image pattern, even if many parts of the image change? In other words, can we teach a computer to recognize and identify memes? The thing with meme images is that they usually reuse the same image, and people post different text overlays on it. Here's an example: [image] Here's another example: [image] So, if you took both of these images, hashed them with MD5, stored the result in a database somewhere, and then only looked at the hash value, you would have no way of knowing that these are practically the same image with minor variations. How could you devise a better way to know what you're looking at and whether it's similar to other things? And why would you want to?

      This is probably a solved problem and I'm probably reinventing the wheel - and much more inefficiently, as first attempts tend to go. I imagine many much smarter engineers working at Facebook and Google have already solved this problem and written extensive white papers on the subject, of which I am ignorant. But I think the journey towards the discovery of a solution is half the fun and also a good exercise. It's like trying to solve a math problem in a textbook before checking the answer in the back of the book, right? The point isn't to get the answer, but to develop the method. So, onwards to what I came up with!

      My initial thought was that you'd just feed an image into a machine learning neural network. If the network is big enough and the magic black box is complicated enough, and we give it enough repetitions in supervised learning, maybe it can learn to recognize images and similarities between images? But that sounds really complicated, and I don't like things that "just work" by unexplainable "magic". It's great when it works, but what do you do when it doesn't work and you don't know why? What hope would you ever have of diagnosing the problem and fixing it? Maybe neural networks are enough and I'm unnecessarily overthinking things, but that's the fun of it.

      What if, instead of taking a hash value of the complete image, we created several hash values of subdivisions of the image? We could create a quad tree out of an image, generate a hash value for each quadrant, and then keep repeating the process until we get to some minimum-sized subdivision. Then we can store all of these hash values in a database as an index to the image data. To test if an image is similar to another, we just look at the list of subdivided hash values and see if we find any matches. The more matches you find, the more statistically likely you are to have a similar image. Of course, this all assumes that the images are going to be exactly the same (at the binary level) with the exception of text. If even one bit is off, the generated hash value will be completely different. This could be particularly problematic with lossy image formats such as JPEG, so hashing image quadrants may not be the best approach. But what if we looped back to our machine learning idea and had a neural network specialize in recognizing subsections of an image, despite any lossy or incomplete data? The tolerance for lossiness would make it easier to correctly identify an image subquadrant and match it to a set of high- and low-resolution image quadrants.

      But now, this is where it gets interesting. What if, when someone uploads an image online, rather than storing the whole image, we subdivide the image into quadrants and compare each quadrant against a set of existing image quadrants? The only data we would upload would be any quadrants with no matches in the database. The rest of the image data would just be stored as references to the existing quadrants. Theoretically, you could have some pretty intense data compression. A 1024x1024 32-bit image (~4.2 MB) could be 90% represented by pre-existing hash values referencing image blocks in the database, and you'd be looking at a massive reduction in disk space usage. Rather than storing the full image itself, you'd store the deltas on a quadrant-by-quadrant basis, and when people ask for an image, it gets reconstructed on the fly from the set of hash values which describe it. And if you lay out your data this way, you can create a heat map of which data quadrants are more popular than others, and then make those quadrants far more "hot" and accessible. In other words, if a meme or video goes viral and needs to serve out a billion views, you would want a distributed file system which makes thousands of copies of that data available on hundreds of mirrors, and then let a network load balancer find the fastest data block to send to a user. The more frequently a block of data is requested, the more distributed and readily accessible it gets (with an obvious time decay, of course, to handle dead memes). And in the best case, if someone uploads an exact copy of an image whose hash values already match something someone else uploaded, we just create a duplicate reference to that data instead of duplicating the data itself.

      Then the question becomes: how and when do we delete data? If there's a dead meme, that's one thing. But what if someone uploads naked pictures of me to the internet and I want them deleted (just to personalize the stakes)? Both are use cases that need to be handled elegantly by the underlying design. In the case of dead memes, we would enforce a gradual decay timer based on frequency of access to each data block. If a block of data hasn't had a read request for months and we have 4 copies, maybe we can automatically delete three of those copies to free up space without completely deleting the data. If a user uploads an image, they don't "own" the image; rather, they're given a hash value which represents the composition of that image, which may have data blocks shared by other users who uploaded similar images. A "delete" action by this user would only delete their hash link rather than the underlying image itself. Maybe we also maintain a reference counter, such that when it reaches zero, the data gets marked for garbage collection in a month or so. In the case of someone uploading naked pictures of me, where removing the image is more important than just deleting references to it, there would need to be some sort of purge command which not only deletes all references to all data blocks, but also deletes the data blocks themselves, so any future read accesses will fail. This would be a dangerous section of code and would need to be written very carefully, with security and abuse in mind. But that raises another question: who gets to decide who gets 'delete' access and who gets 'purge' access? Do you own every quadrant of an image you upload, even if it's shared among other related images?

      Let's say a vicious person took a naked picture of me and uploaded it to this cloud, and then I went through the process and got this image purged. What's going to stop this vicious person from uploading the image again, causing me new headache and harm? Ideally, we'd like to block the vicious person from ever being able to upload the image again. But in order to do that, we'd have to know that the data someone is trying to upload has been deemed "forbidden" by the system before we can block it. And how are we to know a block of data is forbidden without comparing it against an existing dataset? To be much more specific towards a real-world problem, let's say someone uploads child porn to our image cloud. They use the network to distribute it to other pedophiles as quickly as they can before it gets found and shut down. The server owners find and immediately purge the content and want to block any further attempts to upload the same material, but they run into a bit of a challenge: in order to know what content to block, you must store copies of the content to recognize it, and that ultimately means you're still storing child porn on your servers, which would then get you into legal trouble. I think my quadrant-based hash value idea would still be an elegant way to resolve this. Instead of storing the data itself, you store a list of banned hash values. If, during the image decomposition, one or more of the hashed image quadrants match the list of banned values, you reject the complete image. If your hashing function has a one-in-a-trillion collision rate, you don't really need to worry about false positives (provided your quadrants don't get down to the granular level of individual pixels).

      The other danger is that someone is a bit too liberal with the "purge & ban" button. Imagine that an artificial neural network isn't used just for pattern analysis to identify images and image quadrants, but also to identify paragraphs, sentences, and words in blogs, forums, and message boards. If someone posts copy/pasta which matches a forbidden topic, such as "Falun Gong" or the "Tiananmen Square Massacre" in China, this sort of system could be used by authoritarians to squash free speech and ideas they don't like. That could ultimately be more dangerous and damaging in the long run for the flourishing of mankind than a permissive policy. Imagine it gets really crazy, where the homeowners association for your neighborhood has legal authority to silence anyone, and someone petty, with a spiteful bone to pick, gets happy with the "purge & ban" button on a neighbor they don't like. Obviously, the room for abuse on both the admin and user sides would need to be very carefully planned and designed for. For that, I don't have easy technical solutions to people-centered problems. Maybe in this case, a diversity of platforms and owners would be better than one large shared cloud? But maybe that's just trading one problem for another. Anyways, this is the kind of stuff my mind tends to wander towards when I'm stuck in traffic.
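      To sketch what I mean by quadrant hashing, here's a rough C# version (assuming a square, power-of-two grayscale image stored as a flat byte array, and a simple FNV-1a hash over raw pixel bytes - so it is exactly as brittle to single-bit changes as described above):

          using System;
          using System.Collections.Generic;

          class QuadrantHasher
          {
              // FNV-1a over a square region of a square grayscale image.
              static ulong HashRegion(byte[] pixels, int imageSize, int x, int y, int size)
              {
                  ulong hash = 14695981039346656037UL;
                  for (int row = y; row < y + size; row++)
                      for (int col = x; col < x + size; col++)
                      {
                          hash ^= pixels[row * imageSize + col];
                          hash *= 1099511628211UL;
                      }
                  return hash;
              }

              // Recursively hash the image and its quadrants down to minSize,
              // collecting every hash as a candidate index key.
              public static void CollectHashes(byte[] pixels, int imageSize,
                  int x, int y, int size, int minSize, List<ulong> hashes)
              {
                  hashes.Add(HashRegion(pixels, imageSize, x, y, size));
                  if (size <= minSize) return;
                  int half = size / 2;
                  CollectHashes(pixels, imageSize, x,        y,        half, minSize, hashes);
                  CollectHashes(pixels, imageSize, x + half, y,        half, minSize, hashes);
                  CollectHashes(pixels, imageSize, x,        y + half, half, minSize, hashes);
                  CollectHashes(pixels, imageSize, x + half, y + half, half, minSize, hashes);
              }
          }

      Similarity testing is then just counting how many hashes two images' lists share: the more shared quadrant hashes, the more likely a match.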
    13. acerskyline

      D3D12 Fence and Present

      Question 1: does this mean the present-to-render-target barrier is unnecessary? (Since the entire command list stops (as opposed to the command list being executed but getting blocked at the barrier) because of some magic that the driver(?) performs.) A separate question: according to the Microsoft DX12 page, the buffer count parameter of DXGI_SWAP_CHAIN_DESC is: So, question 2: in the above example, isn't the actual buffer count 4 (the number you created the swap chain with)? 1 of them is the front buffer and 3 of them are back buffers. Only this way can it support the point quoted above. Because if the top "colored block" is not part of the swapchain (meaning you created the swap chain with buffer count 3), why is the GPU blocked by it?
    14. SoldierOfLight

      D3D12 Fence and Present

      On modern hardware under Windows, the number of commands submitted to the hardware at any given time is pretty small - generally one or two per piece of schedulable hardware (or zero if it's idle). Depending on what type of swapchain we're dealing with, a "present" operation is either a hardware operation (i.e. a flip / scanout pointer swap) or a software operation (notifying some other component about the frame). In both cases, the present is queued up alongside rendering work in a software queue until the hardware is ready to process it. If the present is a hardware command, then it's submitted to the hardware when it reaches the front of the queue. If it's a software command, then it's processed by the OS at that time. That said, for some types of presents, a "present" object is constructed at the time the present is enqueued. So really, both models are right - something is created at the time Present is called, even though nothing actually happens with the present until all prior rendering work is complete. Any waiting due to a fence is completely up to the application. If the app only allows 3 frames of GPU work in flight, then yes, that's where the app would block waiting for a fence. And yes, that is where the app would block in Present. More or less - the GPU is not processing commands, because working on the next command list would involve writing to / modifying the swapchain buffer, and it's not ready to be modified yet. It's the entire command list that's stopped. The GPU does not process any commands in the command list before the barrier prior to waiting.
    15. acerskyline

      D3D12 Fence and Present

      Based on your reply, I changed the original Intel diagram a little bit just to make sure I understand what you mean. The first diagram is the original one. The second diagram is what I made. The third one has some marks so that you know what I'm talking about. Looking at the third diagram, you can see that the red rectangle indicates what I changed: I made the GPU work last longer, which caused some other changes to the pipeline. Indicated by the yellow rectangle, I presume this is what you meant - the GPU work lasts longer for that frame, and consequently the "present queue" has to wait for the GPU to finish this frame. Also, I think you are saying that since the "present queue" will wait for the GPU work to finish, we might as well think of the frame as not being put into the "present queue" until the GPU finishes its work for that frame.

      1. Now, my first question is: which way better visualizes what happens at the hardware level? (Even though they make no difference conceptually - it only changes where the start of a "colored block" in the "present queue" is, and the start doesn't matter as much as the end.)
      2. My second question is: within the green rectangle, the (light blue) CPU thread is blocked by a fence (dark blue) and then blocked by Present (purple), am I right?
      3. My third question is: within the blue rectangle, the brown "GPU thread" (command queue) is blocked by a present-to-render-target barrier, am I right?
    16. A different perspective! Thanks, Josheir
    17. Josheir

      Virtual Machine Questions

      There are four versions of Windows 7, each with a different browser. I guess to test my application on Windows 7 I would set up the VM with the first choice (IE 8). I wonder, though: will this be the initial Windows installation that I can depend on for determining what my client doesn't have, and therefore what they need to run my application? Thanks, Josheir
    18. Tom Sloper

      Pierce vs Penetration

      Necro. Locking thread.
    19. SoldierOfLight

      D3D12 Fence and Present

      Ah, I see where the confusion's coming from. A frame in the "present queue" is waiting for all associated GPU work to finish before actually being processed and showing up on screen, as well as for all previous frames to be completed. The way I prefer to think about / visualize it is that a frame is waiting in the GPU queue until all previous work is completed, and is then moved to a present queue after that, where it waits for all previous presents to complete.
    20. Taylorobey

      Pierce vs Penetration

      In some games (particularly TBS ones), pierce is used to indicate that an attack deals damage to all enemies within range (or sometimes just more than one, it depends) while penetrating is generally used to indicate overcoming defense.
    21. Yesterday
    22. Thanks, Promit. I will try to reproduce it as you mentioned.
    23. DreamHack Activities

      DreamHack's Indie Zone Marries IndieGamesPlus

      In an effort to boost awareness for indie games, we've partnered with IndieGamesPlus for DreamHack Dallas 2019! The IndieGamesPlus team will be imbuing DreamHack's Indie Zone (this includes the Indie Playground and Indie Show & Tell Stage) with new creative ideas. Beyond helping us shape the future of the Indie Zone, IndieGamesPlus will also be on the board of judges that selects the games we invite to the festival. If you're interested in being a part of DreamHack Dallas 2019, then check out these sign-ups made specially for indies:

      Indie Playground - Submit for a chance to win a 10x10 booth in the DreamHack Expo and a spot on the Indie Show & Tell Stage. DEADLINE: APRIL 5TH. Page: https://dreamhack.com/dallas/activities/indie-playground/ Entry Form: https://form.jotform.com/72125421242140

      Art Gallery - Enter your art for a chance to have it printed and hung in our Art Gallery at both DreamHack Dallas and DreamHack Summer 2019. DEADLINE: APRIL 12TH. Page: https://dreamhack.com/dallas/activities/art-gallery/ Entry Form: https://form.jotform.com/72125687442156

      Game Pitch Championship - Prepare your elevator pitch and enter it in our competition. You'll be reviewed and critiqued by a panel of industry judges. Only 5 nominees will be asked to return to give their pitch in front of a live audience on the Indie Show & Tell Stage for a chance to win $2,500 USD. DEADLINE: APRIL 5TH. Page: https://dreamhack.com/dallas/activities/game-pitch-championship/ Entry Form: https://form.jotform.com/72117732142145

      Check out the full article about the partnership on DreamHack's site. Keep up to date with IndieGamesPlus.
    24. I have hacked together a kind of "conversion" method that can take the raw class data and cast it correctly. This is just a rough test, but I think it could be made to work properly.

          class ClassName
          {
              public string name { get; }
              public ClassName() { name = "defaultName"; }
              public void RunAction() { Console.WriteLine("Default"); }
          }

          class Unique : ClassName
          {
              public string uName { get; }
              public Unique() { uName = "UniqueName"; }
              public new void RunAction() { Console.WriteLine("Unique"); }
          }

          class YAunique : ClassName
          {
              public string yaName { get; }
              public YAunique() { yaName = "YetAnotherName"; }
              public new void RunAction() { Console.WriteLine("Yet Another Unique"); }
          }

          class TestingGround
          {
              static void Main(string[] args)
              {
                  Dictionary<string, ClassName> doesThisWork = new Dictionary<string, ClassName>();
                  doesThisWork.Add("test", new Unique());
                  doesThisWork.Add("test2", new YAunique());

                  //doesThisWork["test"].RunAction();
                  RunAction(doesThisWork["test"]);
                  RunAction(doesThisWork["test2"]);
                  Console.ReadKey();
              }

              static void RunAction(ClassName behaviour)
              {
                  if (behaviour as Unique != null)
                  {
                      var myObject = behaviour as Unique;
                      myObject.RunAction();
                  }
                  else if (behaviour as YAunique != null)
                  {
                      var myObject = behaviour as YAunique;
                      myObject.RunAction();
                  }
                  else
                  {
                      Console.WriteLine("**************** FAILED CAST ****************"); // I should never see this!
                  }
              }
          }

      What this does is test "doesThisWork[action]" against all the available behaviours in the game. If one is a match, then it executes the code. It doesn't seem very efficient, but it looks like I can maybe make this work. I can reduce the gigantic if statement by using the "action" string itself to limit the searchable behaviours. So something like RunAction(doesThisWork, action): read "action" and, for example, send it to a code block that only tests TAKE actions, and then use that same "action" string as the dictionary key to run the correct unique behaviour. Again we come back to the same kind of "bookkeeping" problem. A gigantic if statement is a pain in the arse, but I can not use a for loop on a list that has every unique behaviour class, as we run into the same problem: you can not have different classes in a collection. So yeah, thoughts on this and my "hacked" solution? I would love a better way if you know one.
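      One way to avoid the cast chain entirely - and a sketch of what is likely the better way being asked for - is to make RunAction virtual on the base class and override it (rather than hide it with new) in each child. The dictionary lookup then dispatches to the right override at runtime, with no casting and no if chain (class names reused from the post above):

          using System;
          using System.Collections.Generic;

          class ClassName
          {
              public virtual void RunAction() { Console.WriteLine("Default"); }
          }

          class Unique : ClassName
          {
              public override void RunAction() { Console.WriteLine("Unique"); }
          }

          class YAunique : ClassName
          {
              public override void RunAction() { Console.WriteLine("Yet Another Unique"); }
          }

          class TestingGround
          {
              static void Main()
              {
                  var behaviours = new Dictionary<string, ClassName>
                  {
                      ["take"] = new Unique(),
                      ["open"] = new YAunique(),
                  };

                  // No cast, no if chain: the runtime picks the most-derived override.
                  behaviours["take"].RunAction();  // prints "Unique"
                  behaviours["open"].RunAction();  // prints "Yet Another Unique"
              }
          }

      New data-driven behaviours then only need a new subclass (or a registered delegate) rather than another branch in a giant if statement.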
    25. So I have been playing a little bit with ideas from this thread. I have a lot of life stuff going on, so I'm not putting in as much time as I'd hoped; when I finish moving house I'll be able to devote more to this project. But the idea of some kind of dynamic way to give "unique" behaviours to the various objects and areas sounds great - I am just having some difficulties working out how this can be achieved in C#. I have tried a variety of ways, but they all end up requiring, at some point, a hard-coded bit that I can not do dynamically.

      What exactly am I trying to do? I am trying to make a system that can take in a string and use that string to somehow assign unique code (a behaviour) to a class. The way I was planning to do this was to have a series of generic "functions" in a class - like, say, open, take, move, etc. - and then assign to those functions unique methods in the form of a class. So, for example, when building my class I was trying to do something like this (this could be a bad way of doing it): I define in the DataFile for the item a part that defines specific behaviours to get read and added to the item class in the game. The data reader reads a behaviour called something like "take-TriggerAvalanche". This unique "take" behaviour has all this unique code in it I want to run - in this case triggering an avalanche, adding and removing items, and flicking switches in other rooms. This all occurs when the player tries to "take" this item. This behaviour is added to a "behaviours" variable that is in the Item class, a Dictionary<string, Behaviours>. So this "take" behaviour is added kinda like this: behaviours.Add("take", take-TriggerAvalancheClass). This is the bit that is not working (see the next sections below).

      So now I have an item.behaviours[action] that is a class with whatever methods I like. I already have a command processor that boils down user inputs into "action" and "target", so "retrieve the ball next to the table" becomes "take" and "ball". So all I need to do is search the area's item list for "ball", and if that is found, search the ball's behaviour dictionary for the key that matches the "action" sent by the command processor, and if it finds it, activate the unique method in that behaviour:

          if (behaviour.ContainsKey(action))
              behaviour[action].runAction();

      So this is basically what I would like to happen. Here is a simple example of trying to use inheritance with classes added to a Dictionary:

          class ClassName
          {
              public string name { get; }
              public ClassName() { name = "name"; }
          }

          class Unique : ClassName
          {
              public string uName { get; }
              public Unique() { uName = "UniqueName?"; }
          }

          class YAunique : ClassName
          {
              public string yaName { get; }
              public YAunique() { yaName = "YetAnotherName"; }
          }

          class Program
          {
              static void Main(string[] args)
              {
                  Dictionary<string, ClassName> doesThisWork = new Dictionary<string, ClassName>();
                  doesThisWork.Add("test", new Unique());
                  doesThisWork.Add("test2", new YAunique());

                  Console.WriteLine(doesThisWork["test"].name);
                  Console.WriteLine(doesThisWork["test"].uName);   // does not work
                  Console.WriteLine(doesThisWork["test2"].name);
                  Console.WriteLine(doesThisWork["test2"].yaName); // does not work

                  Unique example = new Unique();
                  Console.WriteLine(example.name);
                  Console.WriteLine(example.uName);

                  YAunique example2 = new YAunique();
                  Console.WriteLine(example2.name);
                  Console.WriteLine(example2.yaName);

                  // Pauses the console window
                  Pause4Input();
              }
          }

      The problem here is that while I can add any inherited child class to the Dictionary, I can only access the methods and variables of the base class. The only way I can work out how to get this to work is to cast the class to the correct child class, like this:

          var casted = (Unique)doesThisWork["test"];
          Console.WriteLine(casted.name);
          Console.WriteLine(casted.uName); // this now works

      The problem here is that casting from the base class to the unique class basically means that it is no longer "dynamic". I need to physically type in the casting every time, different for each type, so it removes the entire purpose of the idea, which was to have a way to add and call unique code from a simple string. Basically, I am trying to find a way to do this that actually works:

          class ClassName
          {
              public string name { get; }
              public ClassName() { name = "defaultName"; }
              public void RunAction() { Console.WriteLine("Default"); }
          }

          class Unique : ClassName
          {
              public string uName { get; }
              public Unique() { uName = "UniqueName"; }
              public new void RunAction() { Console.WriteLine("Unique"); }
          }

          class YAunique : ClassName
          {
              public string yaName { get; }
              public YAunique() { yaName = "YetAnotherName"; }
              public new void RunAction() { Console.WriteLine("Yet Another Unique"); }
          }

          class TestingGround
          {
              static void Main(string[] args)
              {
                  Dictionary<string, ClassName> doesThisWork = new Dictionary<string, ClassName>();
                  doesThisWork.Add("test", new Unique());
                  doesThisWork.Add("test2", new YAunique());

                  doesThisWork["test"].RunAction();
                  Console.ReadKey();
              }
          }
    26. Finale can do quite a bit but the problem with it is you cannot get under the hood as much as you can with a DAW. For example, I really tweaked a bunch of the CC data for a client recently who had written an entire piece in Finale. She liked how Finale had things set up for the most part but in a few spots she wanted a bit more realism and humanization. Plus I was able to dig much deeper and do things like side chaining and use better effects (and samples) than what came with Finale. You can certainly compose in Finale but some DAWs like Nuendo, can use Wwise and set things up for implementation which is a huge factor in game audio. And most DAWs are easier to use when writing to picture than Finale - at least in my experience. I'd keep exploring DAWs and seeing what works for you. Finale (and the like) are vital for score part preps for sure. But when creating audio, I think DAWs have a lot of features to offer.
    27. SChalice

      VGA to HDMI dilemma

      OK Hodgman, I see the problem now thanks to your advice. I just realized, I think, that the Lenovo is pure HDMI. When you set it for 800x600, it is a lie - it is actually 1080p! So when the Lenovo was at 800x600, the output was in fact 1920x1080. That is because, I think, it is easier to scale and filter 600 lines up to 1080 than to 720 (or to just drop lines, which would be nasty). The solution is not yet obvious, but realizing the problem makes me feel a lot better. The drone has an infrared camera and an onboard PC with a frame grabber, plus an interface to the drone for telemetry and flight control. The big picture is as far as the eye can see: in our case, it is identifying things in the infrared spectrum and reporting that information. The user on the ground (or anywhere in the world) will have access to the camera images in real time.
    28. Hodgman

      VGA to HDMI dilemma

      Yeah, but what's the big-picture problem that requires plugging a PC into a DJI video transmitter?
    29. SChalice

      VGA to HDMI dilemma

      Our current onboard computer only has a VGA output and DJI only takes HDMI or Lightbridge inputs. We are looking at adding an HDMI port to our computer but I was hoping a $10 VGA to HDMI adapter would be the fast/easy/cheap solution. We are trying what you suggested now, thanks.
    30. Realtime means computation fast enough to control a process. That implies a time limit on system response, which in turn requires predictable execution time for every fragment of code. GC cannot meet that requirement, because the moment and duration of a stop-the-world collection are unpredictable. That is the basics of realtime software. Scripting and GC languages also violate another requirement of reliable software: they must not depend on heap reallocation, because the only way to guarantee that required memory is present is to allocate it at system startup. Pascal is dead; Ada and Fortran are too rare and lack modern language features; C is outdated and has no OOP; and no other native OOP languages exist. C++ also has the most advanced tools for building high-level abstractions and for creating flexible automatic memory and resource management.
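      As a rough illustration of the allocate-everything-at-startup point, a toy C++ fixed-capacity pool (a sketch, not a production allocator; all storage is reserved once up front, and acquire/release never touch the heap afterwards):

          #include <new>
          #include <cstddef>
          #include <vector>

          // Fixed-capacity pool: all memory is acquired once at startup.
          template <typename T, std::size_t Capacity>
          class StaticPool {
              alignas(T) unsigned char storage[Capacity * sizeof(T)];
              std::vector<T*> freeList;  // filled once in the constructor
          public:
              StaticPool() {
                  freeList.reserve(Capacity);  // the only heap allocation, at startup
                  for (std::size_t i = 0; i < Capacity; ++i)
                      freeList.push_back(reinterpret_cast<T*>(storage + i * sizeof(T)));
              }
              T* acquire() {                   // O(1), no heap allocation
                  if (freeList.empty()) return nullptr;  // deterministic failure, no throw
                  T* p = freeList.back();
                  freeList.pop_back();
                  return new (p) T();          // placement-construct in preallocated storage
              }
              void release(T* p) {             // O(1), no heap deallocation
                  p->~T();
                  freeList.push_back(p);
              }
          };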
    31. Hodgman

      VGA to HDMI dilemma

      Yeah, plug the HDMI into a decent monitor (PC->AV->Box->HDMI->Monitor) and use the OSD to see what resolution and Hz the box is actually outputting. What's the big-picture problem you're solving, btw? Need to wirelessly send analog video from a PC to a display somewhere? What distance? What restrictions on that problem?
    32. Nypyren

      VGA to HDMI dilemma

      Does plugging the VGA->HDMI converter into a monitor's HDMI port instead of the DJI remote work? It sounds like that converter should convert analog to digital, which is the only thing I could think of.
    33. You can check against a wide range of virus scanners by uploading the suspect file to VirusTotal.com. In general, a false positive is more likely to be identified as different things by different scanners, and also to be marked as clean by most of them. Assuming it is a false positive, you have a few options:
      1. Consider changing to a different antivirus program, especially if you're not distributing your software to other people yet. I've never had any trouble with the free one that's built into newer versions of Windows.
      2. Submit a false positive report to all of the places that detect it as a virus.
      3. Try to make changes to stop it from being detected. For example, switching to a different installer may help.
    34. SChalice

      VGA to HDMI dilemma

      I agree. We have a scope but I am not an EE. And probably a logic analyzer is required for video signals. I chose this forum because it is active and I am a hard core graphics game developer.
    35. Bolt-Action Gaming

      This Week in GameGuru - 01/21/2019

      Apologies for the short weekly review here; I am in crunch mode for the book for the next few weeks.

      Official GameGuru News: https://www.thegamecreators.com/post/gameguru-mega-pack-2-dlc-updated-4 Looks like more updates are happening with the PBR quality improvement passes. Unfortunately the issues log continues to grow. Here's hoping Lee and team can get some of those big-ticket items knocked off the list.

      What's Good in the Store: Looks like Graphix put up a nice-looking radio, and BSP added his first new weapon pack in a while. I love the Pancor Jackhammer aesthetically, so I'm looking forward to picking this pack up myself and testing it out!

      Free Stuff: Graphix put out an amazing-looking light-sword (cough, not a saber - more like a rapier, honest). Make sure you get a copy here: https://forum.game-guru.com/thread/217932?page=13#msg2610448

      Third Party Tutorials and Tools: Nothing new here, sorry kids.

      Random Acts of Creativity (WIPs): I'm getting absolutely bombarded with everyone's post-New-Year works, so apologies if I missed anything pertinent. Currently the big news is that Synchromesh put out a new copy of Protascope and it looks great! I'm really looking forward to this one, as it presents interesting game mechanics for the GameGuru genre of games. The menu alone looks much better than previous iterations, so definitely keep your eyes open! I'm also happy to report Bonesy is still working on his as-yet-unnamed project. Ramiro put a new demo video up for Psalm of Salem; here's a link for those interested: https://www.youtube.com/watch?v=gCXLuRBUK9c It looks pretty nice and doesn't appear to have framerate issues, which is a huge boon for a forest map. He's definitely gotten my attention, at least. Guns of Solo got a demo video uploaded as well; check out the information on the post here: https://forum.game-guru.com/thread/219402?page=3#msg2610773 3com is putting up a new project. His take is different, but the texturing is definitely going to hold him back unless he can find a way to improve the graphical fidelity. Still, it's a solid effort in a new direction. https://forum.game-guru.com/thread/218230?page=1#msg2610719

      In My Own Works: I've gotten a TREMENDOUS amount of work done over the past week. I literally added another 5000 words to the book simply by updating the appendixes. I have to say, if you had no other reason to buy the book, this would do it. I've torn apart code, read internal documentation, tested, verified with others, and a host of other things, just to answer questions whose answers are simply assumed by most of the coders involved. My personal editor continues his trek through my book, and while there have been some very large cleanups, overall feedback has been really positive. I'm looking forward to being done with this phase, though this week it's all about taking pictures for the book.

      View the full article
    36. acerskyline

      D3D12 Fence and Present

      Yeah! I totally agree; I am waiting for this. So, if Present is a queued operation, why does this diagram indicate that the CPU thread generates two "colored blocks", one in the GPU queue and one in the "present queue", and why does the timeline make it look like the Present is ahead of the actual rendering? Does it make sense?
    37. SoldierOfLight

      D3D12 Fence and Present

      A Present is a queued operation, just like rendering work. It doesn't get executed until the rendering work is done. If you submit rendering work A to a queue, and then rendering work B to a queue, it doesn't really make sense to ask what happens when B starts executing before A is done... because by definition A has to finish before B can start. A Present is queued the same way.
    38. acerskyline

      D3D12 Fence and Present

      Sorry, I should have given it a little explanation. What I'm asking is what will happen if the GPU hasn't finished rendering the frame but the Present is being "executed" to display it. The reason I didn't draw the rest of the pipeline is not that it's drained; it's just that I don't think it's necessary to show the rest, since it's irrelevant and it's also a lot of work to type out ; ).
    39. Brain

      VGA to HDMI dilemma

      To be honest, this is more of a hardware question and you might have posted in the wrong forum. This isn't generally a game developer thing, and it's the sort of thing best diagnosed with an oscilloscope and an electrical engineer. Have you tried the technical support offered with the device?
    40. SoldierOfLight

      D3D12 Fence and Present

      I'm not sure I'm following your diagram or question. Are you asking what is displayed if you let your GPU queue drain entirely (i.e. stop submitting new work)? The screen just doesn't update and it will continue displaying buffer #3 until it has something new to replace it. The CPU won't be blocked though, because at that point you've only got one frame queued, so the CPU will continue running ahead until it gets back up to 3 frames queued.
    41. A quick post to share my Orc language sound effect. Nothing fancy, but that kind of thing is always useful for video games.
    42. acerskyline

      D3D12 Fence and Present

      Continuing my previous example - please bear with me. Now, assume one of the previous 3 frames is done (really done, as in on-screen) and the GPU workload for the current frame is very heavy.

          0. We have already completed steps 1 to 8 three times. (i = 1, 2, 3. Now i = 1 again.)
          1. WaitForSingleObject(i)
          2. Barrier(i) present->render target   <---- "GPU thread" (command queue) was here
          3. Record commands...
          4. Barrier(i) render target->present
          5. ExecuteCommandList
          6. Signal
          7. Present                             <---- CPU thread was here
          8. Go to step 1

          cpu ... present|
          gpu ... barrier|----------------heavy work----------------|
          -----------------------------------------------------------------------------
          | 3 | 1 |
          | 2 | 3 | 1 |
          | 1 | 2 | 3 | 1 |
          -----------------------------------------------------------------------------
          screen | 3 | 1 | 2 | 3 | ? |

      My question is: what should the question mark be in the diagram above? Or will this even happen? Thanks!
    43. I have a DJI Matrice 600, and it has the ability to take an HDMI signal from the drone and send it wirelessly to display on a remote controller. I plugged a PC's HDMI output into the HDMI port and it works. The PC is at 800x600, 60 Hz, 24 bits. I have another PC with VGA output and cheap VGA to HDMI converters. I set that PC's resolution to 800x600, 60 Hz, 24 bits and get no signal on the remote. Why would a PC's HDMI video out work, but not an HDMI signal converted from another PC's VGA output? https://www.amazon.com/GANA-Converter-Monitors-displayers-Computer/dp/B01H5BOLYC The obvious reason is that the PC is outputting a different HDMI signal than the converters are. But according to the converter specs and the DJI specs, it should work. DJI claims to support 720p, 1080i and 1080p, so I assume the 800x600 signal is being converted to 720p to work. Thanks for any input as to how to debug this issue.
    44. johannesg

      PDF manual?

      Just a quick question... Is there a PDF manual somewhere that describes the scripting language? I am of course familiar with the website, and it contains pretty much exactly what I need, but the dark forces of the universe (*cough*QA*cough*) love to see their printed, or at least printable, manuals... Thanks!
    45. This is arguable. Maybe we have two different definitions of what realtime means, and I'm assuming your definition isn't as nuanced as it should be. Looking and seeing are two different acts: the latter implies the answer is in front of you; the former implies you did some prior research before making an assertion.
    46. Yes, I realize that. Hence this line of code:

          float sy = tmp.y + ( m_nMaxYOffset - glyph->offsetY ) * m_fScaleVert;

      And it seems that I had fixed the issue with the various widths, until I increased the font size and the same thing happened again. (Screenshots: 22 font size, 23 font size.) Someone else had the same issue, and it seemed he fixed it by subtracting the offset from the character T's offset. Thanks.
    47. levy

      Aleatory

      Discover and create a peaceful, surreal soundscape in VR! https://katanalevy.itch.io/aleatory This project is about creativity: taking small blocks of musical ideas, using tools to shape them and add a little colour before sending them out to be played back in a surreal soundscape around you. Relax in a visual style inspired by various sci-fi painters and illustrators such as Moebius, Roger Dean and John Harris. There are no end goals; just relax and play. Made in UE4 for Oculus Rift (and possibly Vive, although not tested yet). Headset and motion controllers required.

      Controls:
      B - Reset View
      Y - Start/Restart Game
      Grip/Trigger - Grab, pick up, manipulate
      Joystick Left/Right - Rotate left and right (not needed for 360 room scale)

      If you have any issues or comments, please feel free to drop me an email.
    48. This video is "old" now after work over the weekend, but here it is anyway. When I am able to work on this project, I end up working until it's quite late, trying to nail down just "one more thing", and I either forget to leave time for capturing a video or tell myself that tomorrow I'll have something better. This video shows the PC - which I simplified to a rectangle with feet and a single dot for an eye - walking around in the darkness of the labyrinth, picking up the occasional torch rock (which needs to be repositioned to appear in his hand) and exploring a bit. Eventually our hero comes to a section of the maze where some inhabitants from the Frogger challenge still persist, including some cars that have had their sprites changed to very plain squares. Nothing there is particularly harmful, but he jumps past one of the sleeping denizens of the labyrinth a couple of times before demonstrating that it is a bad idea to attempt to swim in the swamp.
    49. Oberon_Command

      Anyone who wants to write a little game engine?

      This. And it applies to physics and AI, too. Video games are fundamentally interactive magic shows - "smoke and mirrors." Very few games simulate anything you see on the screen with a high degree of physical accuracy. And this is usually intentional - artists and designers tend not to even want physical accuracy. They want the game to look like the image they have in their head of how it should look. They almost always prefer gameplay that is "fun" over gameplay that has accurate physics. They don't want AI that is genuinely "smart," they want AI that feels smart and loses to the player in an interesting way. In Doom, why do barrels of radioactive waste explode when you shoot them? Radioactive waste is not inherently explosive. Answer: because it's fun and players love the gameplay opportunities it affords.
    50. Promit

      Engine for 2D turn based games

      How about Godot? https://godotengine.org/
    51. SoldierOfLight

      D3D12 Fence and Present

      "Isn't calling WaitForSingleObject on a fence blocking the CPU thread?" Yes - explicitly calling WaitForSingleObject on an event which will be signaled by SetEventOnCompletion is related to fences. All I meant was that any implicit blocking within the Present API call is not necessarily related to fences; it's only related to the "maximum frame latency" concept. To answer your specific questions: 1. Yes. 2. Yes. 3. Yes. Work which is submitted against a resource that is being consumed by the compositor or screen is delayed until the resource is no longer being consumed. The fact that the command list writes to the back buffer is most likely detected during the call to the resource barrier API, and implicitly negotiated with the swapchain and graphics scheduler at ExecuteCommandLists time, to ensure that the command list doesn't begin execution until the resource is available. Also, to clarify: by "GPU thread" we're talking about the command queue. If you had a second command queue, or a queue in a different process, it'd still be possible for that queue to execute while the one writing to the back buffer is waiting.
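      To put the fence/Present interplay discussed in this thread into code, here is a minimal C++ sketch of the usual throttling pattern (the function name EndFrame and the FrameCount = 3 limit are assumptions mirroring the thread's 3-frames-in-flight example, not code from any poster):

          #include <windows.h>
          #include <d3d12.h>
          #include <dxgi1_4.h>

          // Assumed to be called once per frame, right after ExecuteCommandLists.
          void EndFrame(ID3D12CommandQueue* queue, IDXGISwapChain3* swapChain,
                        ID3D12Fence* fence, HANDLE fenceEvent, UINT64& fenceValue)
          {
              const UINT64 FrameCount = 3;       // max frames the CPU may run ahead
              const UINT64 current = ++fenceValue;
              queue->Signal(fence, current);     // GPU sets the fence when this frame's work is done
              swapChain->Present(1, 0);          // queued: executes only after the rendering it follows

              // Throttle the CPU so at most FrameCount frames are in flight.
              if (current >= FrameCount &&
                  fence->GetCompletedValue() < current - (FrameCount - 1))
              {
                  fence->SetEventOnCompletion(current - (FrameCount - 1), fenceEvent);
                  WaitForSingleObject(fenceEvent, INFINITE);  // CPU blocks here, per the discussion above
              }
          }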