Search the Community

Showing results for tags 'R&D'.

Found 58 results

  1. So, I was recently introduced to the world of RTS games. I think they are the best type of game in existence. Knowing that, I decided to try to make my own, with JavaScript, using ZERO graphics libraries. Just plain ol' canvas and fillRect. 😉 My game is called WebRisk (I had no other names). I've gotten to the point where mostly everything works except the units. The multiplayer works, but the units just sit there. Oh, and I forgot to mention what units I have: Riflemen, Snipers, APCs, Tanks, Transport Helis, Attack Helis, and Fighter Jets. Can I get some pointers from you guys? Stuff like how the AI should work. Nothing too complicated, though. I would share my code, but I'm 14 and I was told my code is hard for others to read and build on. 🤷‍♂️ You can email me here: noahgerardv@gmail.com And one last thing: I've had a lot of people tell me that I'm shooting too far. I agree a bit, but I have way too much time on my hands and a stubborn mind, so please don't try to convince me not to do it. Sorry if I sound demanding. Here's a link to the gfx I've made: http://webriskgfx--helixable.repl.co/ That doesn't include the snipers and the riflemen, btw.
  2. Hi everybody from GameDev.net, We at Xilvan Design are building 2D & 3D games in Blitz3D, and we are now showing you our official gaming pages. - The Xilvan Design Website - (please click on each link, download the games & bookmark the pages): Lights of Dreams IV: Far Above the Clouds v14.07. Candy World II: Another Golden Bones v20.57. Candy Racing Cup: The Lillians Rallies v7.17. Candy World Adventures IV: The Mirages of Starfield v11.57. Candy to the Rescue IV: The Scepter of Thunders v14.07. Candy's Space Adventures: The Messages from the Lillians v25.17. Candy's Space Mysteries II: New Mission on the Earth-like Planets v14.75. -"Firstly, I removed many bugs caused by Noopy's ball in Candy World II v20.57, Candy to the Rescue IV v14.07 & Candy's Space Adventures v25.17. Now the ball appears in its proper place in front of Noopy or Candy. In Lights of Dreams v14.07 I fixed the turning of the character Xylvan & much more. The grass is now lower, to give better monster visibility." -"Secondly, I fixed the turning-around bug in every game except Lights of Dreams IV. In that one, every monster has a new color. I fixed the teleporting & pet changing during a game." -"Thirdly, I changed the installers of each game: Candy World II, Candy World Adventures IV, Candy to the Rescue IV & Candy Racing Cup. Now the logo of each game is shown at each step of the installation process. I finally have the Candy World Adventures IV v11.57 installer ready for all my fans. My games are free for the moment!" -"Fourthly, I fixed the colors of the ambient lights and adjusted the starlight coming from galaxies. The dogs may now walk, run, jump, fly together, grab bones, pastries, golden bones, hearts & crystals, and attack with frisbees. In Candy's Space Adventures, I fixed the far-away galaxies, the stars & the planets; all asteroids are present, and comets may be destroyed just like the evil UFOs. Around the planets there may be satellites & healing stations." -"Fifthly, I found new grass textures for my 3D games, including Candy World II, Candy to the Rescue IV, Candy Racing Cup, Candy's Space Adventures, Candy's Space Mysteries II & Lights of Dreams IV. These games are now more playable & more challenging." -"Sixthly, all the levels of the game can be edited since Candy World Adventures IV v10.07: all 28 levels, with 8 more in the future. Noopy now has more animations than in v9.47. In v10.37 I also added two more colors for the buttons: an orange & a yellow one. Since v11.07 the butterflies come in more colors. The game is now ready to be tested by you in v11.57, because, for the moment, the installer works great." -"Seventhly, I want to continue Candy Racing Cup: The Lillians Rallies, so I have changed the camera type. I want to add more circuits, characters & cars. There are no more crashes in the demo caused by collisions. The clouds may now change their lighting. I changed the cars' light reflections & added butterflies of all colors. In future versions I want environment mapping applied to the cars, and maybe I'll add more worlds to explore, more cars & more characters." -"In conclusion, in the near future I plan to create a whole new Spatial Mode in Lights of Dreams V. New space-travel possibilities will be available in our games before 2020. In v13.47 I changed the leaves around the column buildings; they are now almost all textured & visible from all sides."
Once more, here is my YouTube channel, where we show the Candy's & Lights of Dreams series. Each game is free to play for the moment! - My Youtube Channel - We hope you'll like our games: bookmark the pages, download the games & watch our videos! Friendly, Xylvan, Xilvan Design.
  3. A very good day to you all; I hope you are doing very well. Please see the attached code for a C# program. This program loads an external EXE into memory and then makes a function call to run that EXE from memory. The external EXE is an executable that does nothing except send keypresses to the desktop. The call that runs the EXE from memory is:

        // Execute the assembly
        exeAssembly.EntryPoint.Invoke(null, null); // no parameters

Is there any way for me to call the above function as many times as I want, so that I can keep running the EXE from memory repeatedly? The code below works for the first call; on the second call it returns an error message and cannot proceed any further. What I would like to be able to do is: 1) load the EXE into memory, and 2) set it up in such a way that I can call the Invoke function as many times as I wish, in order to run the external keypress program as many times as I wish. Is there any way to do this? Secondly, I would like the external EXE to be able to interact with all the applications running on the Windows desktop. This is because the external EXE actually sends out keypresses, and I am calling and running it multiple times in order to deliver keypresses many times over; those keypresses should reach any running app I have open on the Windows desktop. How can I modify the code so that the external EXE can interact with all running desktop apps? Here is the code:

        using System;
        using System.Collections.Generic;
        using System.Windows.Forms; // needed
        using System.Reflection;
        using System.IO;

        namespace MemoryLauncher
        {
            static class Program
            {
                /// <summary>
                /// The main entry point for the application.
                /// </summary>
                [STAThread]
                static void Main()
                {
                    RunInternalExe("x.exe");
                }

                private static void RunInternalExe(string exeName)
                {
                    // Get the current assembly
                    Assembly assembly = Assembly.GetExecutingAssembly();

                    // Get the assembly's root name
                    string rootName = assembly.GetName().Name;

                    // Get the resource stream for the embedded exe
                    Stream resourceStream = assembly.GetManifestResourceStream(rootName + "." + exeName);

                    // Verify the internal exe exists
                    if (resourceStream == null)
                        return;

                    // Read the raw bytes of the resource
                    byte[] resourcesBuffer = new byte[resourceStream.Length];
                    resourceStream.Read(resourcesBuffer, 0, resourcesBuffer.Length);
                    resourceStream.Close();

                    // Load the bytes as an assembly
                    Assembly exeAssembly = Assembly.Load(resourcesBuffer);

                    // Execute the assembly
                    exeAssembly.EntryPoint.Invoke(null, null); // no parameters
                }
            }
        }
  4. I have large vectors of triangles that make up a mesh for a given model. What I would like to do is iterate through the mesh looking for shared edges, moving from triangle to triangle until I have searched every triangle in the mesh. The triangles sit in a vector in the order I read them from the file, but there may be a better way to set up for this type of search. Can anyone point me to some resources that would help me figure out how to find shared edges?
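A common way to set up that search (a minimal C++ sketch of my own, assuming indexed triangles; if the file stores raw positions, deduplicate identical vertices into indices first) is one pass that keys every edge on its sorted pair of vertex indices, so the triangles sharing an edge land in the same map entry:

        #include <cstdint>
        #include <map>
        #include <utility>
        #include <vector>

        struct Triangle { uint32_t v[3]; };

        // Maps an undirected edge (smaller vertex index first) to the triangles using it.
        std::map<std::pair<uint32_t, uint32_t>, std::vector<size_t>>
        BuildEdgeMap(const std::vector<Triangle>& mesh)
        {
            std::map<std::pair<uint32_t, uint32_t>, std::vector<size_t>> edges;
            for (size_t t = 0; t < mesh.size(); ++t) {
                for (int i = 0; i < 3; ++i) {
                    uint32_t a = mesh[t].v[i];
                    uint32_t b = mesh[t].v[(i + 1) % 3];
                    if (a > b) std::swap(a, b);   // canonical edge order
                    edges[{a, b}].push_back(t);   // single pass over every triangle
                }
            }
            return edges;   // entries holding two triangles are shared edges
        }

Entries whose vector holds two triangles are interior (shared) edges; single-triangle entries are boundary edges. With std::map this is O(n log n); an unordered_map with a pair hash makes it expected O(n). Search terms worth looking up: "half-edge" and "winged-edge" data structures, which store this adjacency permanently.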
  5. Hi everyone here, I hope you just had a great day writing something that shines at 60 FPS :) I found a great talk about the GI solution in Tom Clancy's The Division ("Global Illumination in Tom Clancy's The Division", a GDC talk given by Ubisoft's Nikolay Stefanov). Everything looks nice, but I have some unresolved questions: what is the "surfel" he talks about, and how is a "surfel" represented? As far as I have searched, there are only some academic papers that don't look close to my problem domain; the "surfel" those papers discuss uses points as the topology primitive rather than triangle meshes. Are these "surfels" the same terminology and concept? From 10:55 he explains that they "store an explicit surfel list each probe 'sees'", which sounds like storing the surfel list of the first ray-cast hits from the probe in certain directions (which he mentions a few minutes later). I have a similar probe-capturing stage during the GI baking process in my engine: I get a G-buffer cubemap at each probe's position, facing the 6 coordinate axes. But what I store in the cubemap is the rasterized texel data of world position, normal, albedo and so on, which is bounded by the resolution of the cubemap. Even if I tagged some kind of surface ID during asset creation to mimic "surfels", they still wouldn't be accurately transferred to the "explicit surfel list each probe 'sees'" if I keep doing the traditional cubemap work. Do I need to ray cast on the CPU to get an accurate result? Thanks for any kind of help.
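For what it's worth, in the point-based rendering literature a "surfel" (surface element) is just a small oriented disc sampled from a surface, independent of whether the source geometry is a point cloud or a triangle mesh, so the papers and the talk are compatible usages. A minimal C++ sketch of the usual payload (the field choice is my assumption, not the talk's exact layout):

        // A surfel: a small oriented surface patch. Probes can then store indices
        // into one global surfel array for the surfaces they "see" per direction.
        struct Surfel
        {
            float position[3];
            float normal[3];
            float albedo[3];
            float area;   // surface area this sample represents
        };

Under that reading, rasterized G-buffer cubemaps are just one (resolution-limited) way of sampling surfels; CPU ray casts, or a GPU pass writing hit IDs, would produce the explicit, resolution-independent per-probe list the talk describes.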
  6. Hello everyone! I'm an IB student writing my extended essay in CS, comparing Monte Carlo tree search and minimax. To collect my primary data, I wish to run a benchmark of both search techniques. For that I need a game that implements Monte Carlo tree search and also has a version that implements minimax. Can someone please help me out and send links to where I can find such an engine or game? Thank you!!
  7. Hello guys, please forgive me if I seem flustered; I am trying to express what is in my mind in words and it may come out a little scrambled. I am writing a special version of tic-tac-toe to enhance my programming skills, especially on the AI side of things. The grid is 6 by 6, and a player must form a line of 5 in a row to win; the same applies to the opposing side. Right now the game is about 98% complete except for the AI. I started writing my very first AI four days ago; after lots of bug fixes and debugging, it is still in its infancy. Right now it moves randomly and places its mark randomly on the board, but it is not a smart AI: it is not smart enough to block my attempts to win, not smart enough to form its own lines, and not tactful enough to form a line that wins even if I block it on one side. I want to make my AI smart but beatable, yet smart enough to beat me; I want it to think much like a chess or checkers AI does when it makes its move. How do I make an AI that is unpredictable but smart enough to make the game fun and challenging, without being impossible or too dumb? I want the computer to take its time and think of the best move; if it has a choice between winning and blocking me, it should go for the win, and it should make strategic moves to attempt to win even if I block one end, while staying unpredictable. I do not want it to repeat the same moves again and again. Remember, this is not an average tic-tac-toe: it is 6 by 6, so there are 36 cells to move about, and you need five in a row to win. Can someone help me with this? Thanks in advance!
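A 6x6, five-in-a-row game is a small m,n,k-game, and the standard starting point is exactly the chess/checkers technique mentioned above: depth-limited minimax with alpha-beta pruning. Below is a minimal C++ sketch under my own assumptions (board layout, stub heuristic); strengthening Evaluate() to count open runs and threats is where the real play quality comes from:

        #include <algorithm>

        // 36 cells: 0 = empty, 1 = AI, 2 = human.
        struct Board { int cells[36]; };

        // True if player p has five in a row (any direction) on the 6x6 board.
        bool IsWin(const Board& b, int p)
        {
            static const int dirs[4][2] = { {1,0}, {0,1}, {1,1}, {1,-1} };
            for (int y = 0; y < 6; ++y)
                for (int x = 0; x < 6; ++x)
                    for (const auto& d : dirs) {
                        int run = 0;
                        for (int k = 0; k < 5; ++k) {
                            int nx = x + d[0] * k, ny = y + d[1] * k;
                            if (nx < 0 || nx >= 6 || ny < 0 || ny >= 6) break;
                            if (b.cells[ny * 6 + nx] != p) break;
                            ++run;
                        }
                        if (run == 5) return true;
                    }
            return false;
        }

        // Stub heuristic: replace with counting open lines and threats per side.
        int Evaluate(const Board&) { return 0; }

        int Minimax(Board& b, int depth, int alpha, int beta, bool maximizing)
        {
            if (IsWin(b, 1)) return  100000;   // AI already won
            if (IsWin(b, 2)) return -100000;   // human already won
            if (depth == 0)  return Evaluate(b);

            bool moved = false;
            for (int m = 0; m < 36; ++m) {
                if (b.cells[m] != 0) continue;
                moved = true;
                b.cells[m] = maximizing ? 1 : 2;                        // try move
                int score = Minimax(b, depth - 1, alpha, beta, !maximizing);
                b.cells[m] = 0;                                         // undo move
                if (maximizing) alpha = std::max(alpha, score);
                else            beta  = std::min(beta,  score);
                if (beta <= alpha) break;                               // prune
            }
            if (!moved) return Evaluate(b);   // board full, no moves left
            return maximizing ? alpha : beta;
        }

At the root, try every empty cell with alpha at the lowest int and beta at the highest, keep the best-scoring move, and break ties randomly among equal scores: the random tie-breaking gives the unpredictability asked for, and the search depth becomes the difficulty knob.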
  8. Now that my real-time software renderer is almost complete after many years, I'm finding it difficult to research the current state of the art in the area to compare performance against. Most other software renderers I've found are either unreleased hobby projects, slow emulations of GPUs with all their limitations, or target very old CPUs from the 1990s. The best so far was a near-real-time CPU ray-tracing experiment by Intel from around 2004. Feel free to share any progress you've made on the research subject or interesting real-time software renderers you've found. By real-time, I mean at least 60 FPS at a reasonable resolution for the intended purpose. By software, I mean no dependencies on Direct3D, OpenGL, OpenCL, Vulkan or Metal.
  9. zfvesoljc

    R&D Trails

    I'm currently working on implementing a trail system and I have the basic stuff working. A moving object can specify a material, width, lifetime, and time-based curves for evaluating colour and width. For each trail point I generate two vertices: the trail point is the center, vertex A is offset "left" and vertex B is offset "right", by half the width each. A setting for the minimum distance between two trail points determines how spread out they are. This works nicely until the width and turning angle get so tight that one side of the trail triangles starts overlapping, which in the case of additive shading causes ugly artefacts. So I'm now playing with ideas on how to solve this: - do some vertex detection magic to check for overlap, and maybe discard overlapping vertices or pull them closer; - push both vertices to one side of the trail, i.e. A = point, B = point + width (instead of A = point + half_width, B = point - half_width), but I have yet to figure out how to detect when I need to do this. Any other solutions or tips? Forgot to mention: I'm doing the mesh generation on the CPU side.
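    One cheap way to detect the fold-over (a sketch with minimal assumed math types, not engine-ready code): a side has folded exactly when its offset vertices start travelling backwards relative to the centre line, which one dot product per segment catches:

        struct Vec3 { float x, y, z; };
        static Vec3  Sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
        static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        // centerPrev/centerCur: consecutive points on the trail centre line.
        // sidePrev/sideCur: the corresponding offset vertices on one side (A or B).
        bool SideFoldsOver(Vec3 centerPrev, Vec3 centerCur, Vec3 sidePrev, Vec3 sideCur)
        {
            Vec3 centerDir = Sub(centerCur, centerPrev);   // no normalize needed,
            Vec3 sideDir   = Sub(sideCur, sidePrev);       // only the sign matters
            return Dot(centerDir, sideDir) < 0.0f;         // side moved "backwards"
        }

    When the test fires, either clamp the folded vertex onto the previous one (a degenerate, invisible triangle) or switch that segment to the one-sided A = point, B = point + width layout described above.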
  10. I'm learning about light probes used for dynamic global illumination. I have a question regarding the placement of light probes: in most of the pictures I have seen, they seem to be placed uniformly in a grid. Is that a reasonable placement? I feel that more should be placed in corners than in the middle of an area. Is there any research on light probe placement that minimizes the overall data needed for rendering? This GDC talk http://twvideo01.ubm-us.net/o1/vault/gdc2012/slides/Programming%20Track/Cupisz_Robert_Light_Probe_Interpolation.pdf mentions irregular placement on tetrahedra and how to do the interpolation, but it doesn't seem to say much about placement itself. This paper http://melancholytree.com/thesis.pdf mentions that only one probe is needed in a convex shape, but I don't see people doing this; is that because it only holds for static global illumination without moving objects? What is the latest development in light probe placement?
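For the interpolation half of the cited talk: once probes are tetrahedralized, the shaded point blends the four probes of its containing tetrahedron with barycentric weights. A minimal C++ sketch (my own helper types, Cramer's rule):

        struct Vec3 { float x, y, z; };
        static Vec3  Sub(Vec3 u, Vec3 v)   { return { u.x - v.x, u.y - v.y, u.z - v.z }; }
        static float Dot(Vec3 u, Vec3 v)   { return u.x * v.x + u.y * v.y + u.z * v.z; }
        static Vec3  Cross(Vec3 u, Vec3 v) { return { u.y * v.z - u.z * v.y,
                                                      u.z * v.x - u.x * v.z,
                                                      u.x * v.y - u.y * v.x }; }

        // weights[i] is the blend factor for the probe at tet vertex i; they sum to 1.
        void TetrahedralWeights(Vec3 p, Vec3 a, Vec3 b, Vec3 c, Vec3 d, float weights[4])
        {
            Vec3 ab = Sub(b, a), ac = Sub(c, a), ad = Sub(d, a), ap = Sub(p, a);
            float det = Dot(ab, Cross(ac, ad));   // 6x the signed tet volume
            weights[1] = Dot(ap, Cross(ac, ad)) / det;
            weights[2] = Dot(ab, Cross(ap, ad)) / det;
            weights[3] = Dot(ab, Cross(ac, ap)) / det;
            weights[0] = 1.0f - weights[1] - weights[2] - weights[3];
        }

Placement then becomes a question of where extra tetrahedra pay for their memory, which matches the intuition above: corners and lighting discontinuities deserve denser probes than large open volumes.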
  11. Hello dear AI folk! I picked up a passion for game AI a while ago and have wanted to join a community for some time now. This seems to be the right place to get cozy. Now, for my bachelor's degree, I am supposed to write a research essay (about 20 pages). I chose the title "Comparing Approaches to Game AI under Consideration of Gameplay Mechanics". In the essay I make bold statements concerning the state of related work in the field, and I want to know whether they hold up to reality. And what better way to find out than asking the community of said reality? Disclaimer: this is the first research paper I have ever written and it is a work in progress; I feel that I suck at this. The main question is: does the above statement hold up to reality? But please don't shy away from corrections or general advice. I would also like to share the completed work as soon as it is done, if anyone is interested. I am in dire need of some feedback that I can personally grow by. Cheers
  12. Hello there, I am looking for code examples or a tutorial on context-based steering behaviors. The "Game AI Pro" series gives an overview but no code implementation, and it would be nice to see an example to help my understanding along. Any little bit of help is appreciated; the language does not matter to me, just the code. Thanks in advance, everyone!
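Since the thread asks for code, here is a minimal C++ sketch of the technique as I understand it from the Game AI Pro chapter (my own illustration, not the book's implementation): two context maps are filled over a ring of candidate directions, the danger map masks slots, and the interest map ranks the survivors:

        #include <algorithm>
        #include <array>
        #include <cmath>

        constexpr int kSlots = 16;                      // candidate directions
        using ContextMap = std::array<float, kSlots>;

        static float SlotAngle(int i) { return 6.2831853f * i / kSlots; }

        // danger: filled elsewhere from obstacle/threat proximity per direction.
        // Returns the chosen heading angle in radians.
        float SteerWithContext(float toTargetAngle, const ContextMap& danger)
        {
            // 1) Interest map: highest in slots pointing toward the target,
            //    falling off with angular distance.
            ContextMap interest{};
            for (int i = 0; i < kSlots; ++i)
                interest[i] = std::max(0.0f, std::cos(SlotAngle(i) - toTargetAngle));

            // 2) Mask: keep only the slots tied for the lowest danger.
            float minDanger = *std::min_element(danger.begin(), danger.end());

            // 3) Pick the unmasked slot with the highest interest.
            int best = 0;
            float bestScore = -1.0f;
            for (int i = 0; i < kSlots; ++i) {
                if (danger[i] > minDanger) continue;    // masked out
                if (interest[i] > bestScore) { bestScore = interest[i]; best = i; }
            }
            return SlotAngle(best);
        }

Production versions also write smooth falloff around each obstacle's direction into the danger map and interpolate between adjacent slots for a continuous output direction, but the interest/danger/mask loop above is the core of the method.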
  13. Hello all, sorry for my English! I found a description of the fastest reflection algorithm for flat surfaces: http://www.remi-genin.fr/blog/screen-space-plane-indexed-reflection-in-ghost-recon-wildlands I want to recreate it in Unity, and I attached an example for testing: https://www.drive.google.com/open?id=1WfgCpxwx8k6lgALHY6s_588p1D4lYEb6 I have some problems. 1) Why does the example reflect along the Z axis, when the reflection happens on a horizontal (Y) surface? 2) My implementation produces an incorrect hash buffer. Perhaps the buffer is written in reverse? I added a z-buffer distance limit for testing, so you can see what happens at 40/100/1000 meters. Here are the contents of the hash buffer. I tried swapping InterlockedMax/InterlockedMin, reversing the z-buffer value, and writing without hashing (just writing the UV), but still did not succeed. 3) Another interesting question is hole filling. The author describes it, but temporal reprojection will not work for static images (or am I mistaken?), and in the second part I did not understand what he is doing: is he blending the previous frame with the current frame of the reprojection? P.S. There are a couple more links on this method, but the code is more complicated and confusing. http://www.guitarjawa.net/wordpress/wp-content/uploads/2018/04/IMPLEMENTATION-OF-OPTIMIZED-PIXEL-PROJECTEDREFLECTIONS-FOR-PLANAR-REFLECTORS.pdf http://www.advances.realtimerendering.com/s2017/PixelProjectedReflectionsAC_v_1.92.pdf
  14. Heyo! For the last few months I've been working on a realtime raytracer (like everyone currently), but I've been trying to make it work on my graphics card, an NVidia GTX 750 Ti: a good card, but not an RTX or anything. So I figured I'd post my results, since they're kinda cool, and I'm also interested to see if anyone has ideas on how to speed it up further. Here's a dreadful video showcasing some of what I have currently. I've sped it up a tad and fixed reflections since then, but eh, it gets the gist across. If you're interested in trying out a demo or checking out the shader source code, I've attached a Windows build (FlipperRaytracer_2019_02_25.zip). I develop on Linux, so it's not as well tested as I'd like, but it works on an iffy laptop I have, so hopefully it'll be alright XD. You can change the resolution and whether it starts in fullscreen in a config file next to it, and in the demo you can fly around, change the lighting setup and adjust various parameters like the frame blending (increase samples) and disabling GI, reflections, etc. If anyone tests it out, I'd love to know what sort of timings you get on your GPU. But yeah, currently I can achieve about 330 million rays a second, enough to shoot 3 incoherent rays per pixel at 1080p at 50fps, so not too bad overall. I'm really hoping to bump this up to 5 incoherent rays at 60fps... but we'll see. I'll briefly describe how it works now :). Each render loop goes through these steps: render the scene into a 3D texture (voxelize it); generate an acceleration structure akin to an octree from that; render the GBuffer (I use a deferred-renderer approach); calculate lighting by raytracing a few rays per pixel; blend with previous frames to increase the sample count; and finally output with motion blur and some tonemapping. Pretty much the most obvious way to do it all. The main reason it's quick enough is the acceleration structure, which is kinda cool in how simple yet effective it is. At first I tried distance fields, which, while really efficient to step through, just can't be generated fast enough in real time (I could only get generation down to 300ms for a 512x512x512 texture). Besides, I wanted voxel-accurate casting anyway (blocky artifacts look so good...), so I figured I'd start there. Doing an unaccelerated raycast against a voxel texture is simple enough: cast a ray and test against every voxel the ray intersects, stepping through voxel by voxel with a line-stepping algorithm like DDA (see the sketch below). The cool thing is, by voxelizing the scene at different mipmaps, it's possible to take differently sized steps by checking which is the lowest-resolution mipmap with empty space. This can be precomputed into a single texture, giving that information in one sample. I've found this gives pretty similar raytracing speed to the distance fields, but can be generated in 1-2ms, ending up with a texture like this (a 2D slice). It also has some nice properties: if the ray is cast directly next to and parallel to a wall, instead of moving tiny amounts each step (because the distance field says it's super close to something), it'll move... an arbitrary amount depending on where the wall falls on the grid :P. The worst case is still the same as the distance field, but the best case is much better, so it's pretty neat. So then for the raytracing I use some importance sampling, directing the rays towards the lights.
I find that just picking a random importance sampler per pixel and shooting towards that looks good enough, and it allows as many lights as I need without changing the framerate (only the noise). Then I throw a random ray to calculate GI/other lights, and a ray for reflections. The global illumination works pretty simply too: when voxelizing the scene I throw some rays out from each voxel, and since they raycast against the same voxel field, each frame I get an additional bounce of light :D. That said, I found that a bit slow, so I have an intermediate step where I render the objects into a low-resolution lightmap, which is where the raycasts take place; when voxelizing, I just sample the lightmap. This also theoretically gives me a fallback in case a computer can't handle raytracing every pixel, or the voxel field isn't large enough to cover an entire scene (although currently the lightmap is... iffy... I wouldn't use it for that yet XD). And then I use the usual temporal anti-aliasing technique to increase the sample count and anti-alias the image. I previously had a texture that kept track of how many samples had been taken per pixel, resetting when viewing a previously unviewed region, and used this to properly average the samples (so it converged much faster/actually did converge...) rather than using the usual exponential blending. That said, I had some issues integrating any sort of sample discarding with anti-aliasing, so currently I just let everything smear like crazy XD. I think the fix is to keep temporal supersampling and temporal anti-aliasing separate, so I might try that out; it should improve the smearing and noise significantly... I think XD. Hopefully some of that made sense and was interesting :). Please ask any questions you have; I can probably explain it better haha. I'm curious to know what anyone thinks, and of course any ideas to speed it up or develop it further are very much encouraged. Ooh, I'm also working on some physics simulations, so you can create a realtime cloth by pressing C; just the usual position-based dynamics stuff. Questions on that are open too :P. FlipperRaytracer_2019_02_25.zip
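For readers who haven't met it, this is the plain single-level voxel DDA the post builds on, as a minimal C++ sketch with an assumed grid layout; the post's optimization replaces the one-voxel steps with larger jumps sized by the precomputed coarsest-empty-mip texture:

        #include <cmath>
        #include <cstdint>

        struct Hit { int x, y, z; bool found; };

        // grid: size*size*size occupancy values (non-zero = solid), origin inside.
        // (ox, oy, oz): ray origin in voxel units; (dx, dy, dz): normalized direction.
        Hit RaycastVoxels(const uint8_t* grid, int size,
                          float ox, float oy, float oz,
                          float dx, float dy, float dz)
        {
            int x = (int)ox, y = (int)oy, z = (int)oz;
            int stepX = dx >= 0 ? 1 : -1;
            int stepY = dy >= 0 ? 1 : -1;
            int stepZ = dz >= 0 ? 1 : -1;
            // Ray-parameter distance between voxel boundaries on each axis.
            float tDeltaX = dx != 0 ? std::fabs(1.0f / dx) : 1e30f;
            float tDeltaY = dy != 0 ? std::fabs(1.0f / dy) : 1e30f;
            float tDeltaZ = dz != 0 ? std::fabs(1.0f / dz) : 1e30f;
            // Ray-parameter distance to the first boundary on each axis.
            float tMaxX = dx != 0 ? (stepX > 0 ? (x + 1) - ox : ox - x) * tDeltaX : 1e30f;
            float tMaxY = dy != 0 ? (stepY > 0 ? (y + 1) - oy : oy - y) * tDeltaY : 1e30f;
            float tMaxZ = dz != 0 ? (stepZ > 0 ? (z + 1) - oz : oz - z) * tDeltaZ : 1e30f;

            while (x >= 0 && x < size && y >= 0 && y < size && z >= 0 && z < size) {
                if (grid[(z * size + y) * size + x]) return { x, y, z, true };
                // Step along whichever axis crosses its next boundary first.
                if (tMaxX < tMaxY && tMaxX < tMaxZ) { x += stepX; tMaxX += tDeltaX; }
                else if (tMaxY < tMaxZ)             { y += stepY; tMaxY += tDeltaY; }
                else                                { z += stepZ; tMaxZ += tDeltaZ; }
            }
            return { 0, 0, 0, false };   // left the grid without hitting anything
        }

The mip trick then amounts to sampling the "coarsest empty level" texture at the current position and advancing by that level's cell size instead of one voxel, falling back to fine steps near occupied cells.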
  15. I wasn't sure where to post my game (https://incredicat.com), as I'm still working on it, but I wanted to put it out there to get any useful feedback or thoughts from the experts. It's basically a game similar to 20 Questions (or Animal, Vegetable, Mineral) that asks you questions to work out an object you are thinking about. You can think of everyday items (animals, household objects, food, and quite a bit of other stuff) and it has 30 questions to try to guess the item. I've been working on it for a while but I'm not sure what to do next, so I'm interested to hear anyone's thoughts... I've finished working on the new algorithm, which is based on ID3: entropy and information gain. The link for anyone who wants to try it out is incredicat.com Thanks in advance!
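For context, with a uniform prior over the remaining candidates the ID3 rule mentioned above reduces to asking whichever question splits the surviving items closest to 50/50 (the highest-entropy split has the highest information gain). A minimal C++ sketch, with an assumed data layout:

        #include <cmath>
        #include <vector>

        // Binary entropy of a yes-probability p, in bits.
        static double Entropy(double p)
        {
            if (p <= 0.0 || p >= 1.0) return 0.0;
            return -p * std::log2(p) - (1.0 - p) * std::log2(1.0 - p);
        }

        // answers[q][item] is true if the item's answer to question q is "yes".
        // aliveItems: indices of items still consistent with answers so far (non-empty).
        int BestQuestion(const std::vector<std::vector<bool>>& answers,
                         const std::vector<int>& aliveItems)
        {
            int best = -1;
            double bestGain = -1.0;
            for (size_t q = 0; q < answers.size(); ++q) {
                int yes = 0;
                for (int item : aliveItems) yes += answers[q][item] ? 1 : 0;
                double p = double(yes) / double(aliveItems.size());
                double gain = Entropy(p);   // gain over a uniform prior
                if (gain > bestGain) { bestGain = gain; best = int(q); }
            }
            return best;
        }

Real data adds noisy answers and non-uniform priors (some items are thought of far more often than others), which is usually where these games get interesting to tune.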
  16. Hello everyone here on the GameDev forums, it's my first post and I'm very happy to participate in the discussion here; thanks, everyone! My question is: is there any workaround to implement something in DX11 similar to DX12's CBV with an offset, and/or something in OpenGL similar to Vulkan's dynamic UBO descriptor? My underlying goal is a unified per-object resource-updating design across the different APIs in my engine. I've gathered all the per-object resource updates into a few coherent memory writes in the DX12 and Vulkan rendering backends, and I later record all the descriptor binding commands with a per-object offset. DX12 example code:

        WriteMemoryFromCPUSide();
        for (auto object : objects)
        {
            offset = object->offset;
            commandListPtr->SetGraphicsRootConstantBufferView(
                startSlot, constantBufferPtr->GetGPUVirtualAddress() + offset * elementSize);
            // record drawcall
        }

Vulkan example code:

        WriteMemoryFromCPUSide();
        for (auto object : objects)
        {
            offset = object->offset;
            vkCmdBindDescriptorSets(commandBufferPtr, VK_PIPELINE_BIND_POINT_GRAPHICS,
                                    pipelineLayoutPtr, firstSet, setCount, descriptorSetPtr,
                                    dynamicOffsetCount, &offset);
            // record drawcall
        }

I have an idea to record the per-object offset as an index/UUID, then use an explicit UBO/SSBO array in OpenGL and a StructuredBuffer in DX11, but I would still need to submit the index to GPU memory at some point. Hope everyone has a good day!
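For what it's worth, both APIs appear to have direct equivalents. On DX11.1 the *SetConstantBuffers1 functions (on ID3D11DeviceContext1) take per-binding offsets into one large constant buffer, and in OpenGL glBindBufferRange does the same for a UBO binding point. A hedged sketch mirroring the pseudocode above (deviceContext1, alignedElementSize and bindingIndex are my assumptions):

        // DX11.1: offsets and sizes are measured in 16-byte shader constants and
        // must be multiples of 16 constants (256 bytes). alignedElementSize is the
        // per-object slice rounded up to 256 bytes; deviceContext1 comes from
        // QueryInterface on the ID3D11DeviceContext.
        for (auto object : objects)
        {
            UINT firstConstant = object->offset * (alignedElementSize / 16);
            UINT numConstants  = alignedElementSize / 16;
            deviceContext1->VSSetConstantBuffers1(startSlot, 1, &constantBufferPtr,
                                                  &firstConstant, &numConstants);
            // record drawcall
        }

        // OpenGL: bind a sub-range of one big UBO per object. The byte offset must
        // be a multiple of GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT (query it once).
        for (auto object : objects)
        {
            glBindBufferRange(GL_UNIFORM_BUFFER, bindingIndex, uboHandle,
                              object->offset * alignedElementSize, alignedElementSize);
            // record drawcall
        }

One caveat: the *1 functions require the 11.1 runtime, so on a strict DX11.0 target the StructuredBuffer-plus-index route described above remains the fallback.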
  17. thecheeselover

    Zone generation

    Subscribe to our subreddit to get all the updates from the team! I have integrated the zone separation with my implementation of the Marching Cubes algorithm, and now I have been working on zone generation. A level is separated in the following way:

      1. Shrink the zone map to exactly fit an integer number of Chunk2Ds, which are 32² m² each.
      2. For each Chunk2D, analyse all zones inside its boundaries and determine all possible heights for Chunk3Ds, which are 32³ m³ each. Imagine this as a three-dimensional array backed by a hash map: we are trying to figure out all the Chunk3D keys for a given Chunk2D (see the sketch below).
      3. Create and generate a Chunk3D for each height found.
      4. Execute the Marching Cubes algorithm to assemble the geometry of each Chunk3D.

    In our game, we want levels to look and feel like a certain world. The first world we are creating is the savanna. Even though each Chunk3D is generated using 3D noise, I made a noise module that maps 3D noise into 2D, to be able to apply 2D perturbation to the terrain. I also tried some funkier procedural noises: an arch! The important thing with procedural generation is to have a certain level of control over it. With the new zone division system, I have achieved a minimum on that path for my game.
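    Step 2 might look something like this minimal C++ sketch (types and sampling are illustrative assumptions, not the project's actual code): collect the distinct 32 m height slabs touched inside one column, and each surviving key gets a Chunk3D:

        #include <cmath>
        #include <set>
        #include <vector>

        // zoneHeights: surface heights sampled inside one Chunk2D's 32x32 m footprint.
        // Each returned key is the vertical index of a 32 m slab that needs a Chunk3D.
        std::set<int> CollectChunk3DKeys(const std::vector<float>& zoneHeights)
        {
            std::set<int> keys;
            for (float h : zoneHeights)
                keys.insert(static_cast<int>(std::floor(h / 32.0f)));
            return keys;   // (chunkX, chunkZ, key) then addresses one Chunk3D
        }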
  18. I have a DJI Matrice 600; it can take an HDMI signal on the drone and send it wirelessly for display on the remote controller. I plugged a PC's HDMI output into the HDMI port and it works; the PC is at 800x600, 60 Hz, 24-bit. I have another PC with VGA output and a cheap VGA-to-HDMI converter. I set that PC's resolution to 800x600, 60 Hz, 24-bit and get no signal on the remote. Why would a PC's native HDMI output work, but not a signal converted from another PC's VGA output? https://www.amazon.com/GANA-Converter-Monitors-displayers-Computer/dp/B01H5BOLYC The obvious explanation is that the PC outputs a different HDMI signal than the converter does, but according to the converter specs and the DJI specs it should work. DJI claims to support 720p, 1080i and 1080p, so I assume the 800x600 signal is being converted to 720p when it works. Thanks for any input on how to debug this issue.
  19. Hi there, Please help us evaluate three fighting-game streaming channels (Twitch) through the following SurveyMonkey page: https://jp.surveymonkey.com/r/8PR7QRD It takes around 15 to 20 minutes to finish this evaluation. Thank you so much in advance for your contributions to fighting-game research. If possible, please finish the survey by this Sunday. Team FightingICE http://www.ice.ci.ritsumei.ac.jp/~ftgaic/
  20. Hello. The GCN whitepaper says that one SIMD engine can have up to 10 wavefronts in flight. Does that mean it can run 10 wavefronts simultaneously? If so, how? By pipelining them? As far as I know, wavefronts are scheduled by a scheduler. How does the scheduler interact with the SIMD engine to make this possible? And do these 10 wavefronts have to belong to only one instruction stream?
  21. Sean O'Connor

    R&D Evolving neural networks

    A long time ago I used to hang around this forum with the user name redtea. Red tea is an actual beverage that is delicious with milk and sugar; it is not a political persuasion. A few rednecks ran me off, though, because red is a red flag to them. A kinda funny and stupid story at the same time. Anyway, you can evolve neural networks, especially if you choose rather unusual activation functions that particularly suit evolution, such as the signed square: y = -x² for x < 0, y = x² for x >= 0. I don't think you would ever use those with back propagation: https://groups.google.com/forum/#!topic/artificial-general-intelligence/4aKEE0gGGoA I also have a kind of associative memory that might be interesting for character behavior: https://github.com/S6Regen/Associative-Memory-and-Self-Organizing-Maps-Experiments Maybe AMScalar is the one to use.
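    The activation itself is tiny in code; a C++ sketch of the formula quoted above:

        // Signed square: y = -x*x for x < 0, y = x*x for x >= 0.
        // Monotonic and sign-preserving, unlike a plain square.
        float SignedSquare(float x)
        {
            return (x < 0.0f) ? -(x * x) : (x * x);   // equivalently std::copysign(x*x, x)
        }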
  22. Armaan Gupta

    Let's build cool things!

    Hi there, my name is Armaan, and the game studio my company started, The Creative Games, is looking for talented people to join. Art, development, code, audio, design... whatever you do, we would love to have you. Right now we're working to bring in more people so we get a diverse set of inputs. We're not focused on one "type" of game; really, it's just whatever we as a team want to make. If you want to be part of a team building cool things, email me at armaangupta01@gmail.com or message me on Discord (Guppy#7625). Can't wait to have you join!
  23. If you have CROWDFUNDED the development of your game, which of the following statements do you agree with? 1. I went out of my way to try to launch my game by the estimated delivery date 2. I made an effort to launch my game by the estimated delivery date 3. I was not at all concerned about launching my game by the estimated delivery date ------------------------------------------------------------------------------- Hi there! I am an academician doing research on both funding success and video game development success. For those who have CROWDFUNDED your game development, it would be extremely helpful if you could fill out a very short survey (click the Qualtrics link below) about your experiences. http://koc.ca1.qualtrics.com/jfe/form/SV_5cjBhJv5pHzDpEV The survey would just take 5 minutes and I’ll be happy to share my findings of what leads to crowdfunding success and how it affects game development based on an examination of 350 Kickstarter projects on game development in return. This is an anonymous survey and your personal information will not be recorded. Thank you very much in advance!
  24. I am a new game dev and I need the help of you experts. While making my game I ran into one main problem: the player moves the mouse to control the direction of a sword that his character swings at other players, and I don't know how to program the hand to move according to the mouse. I would be grateful if someone could give me a helping hand with the code, or a general idea of how this can be programmed in Unity ^^.
  25. Hi, recently I have been looking into a few renderer designs that I could take inspiration from for my game engine. I stumbled upon the BitSquid and Our Machinery blogs about how they architect their renderer to support multiple platforms (which is what I am looking to do!). I have gotten so far, but I am unsure about a few things they say in the blogs. This is a simplified version of how I understand their setup: a Render Backend (one per API) executes the commands from the RendererCommandBuffer and the RendererResourceCommandBuffer; the Renderer Command Buffer is a platform-agnostic command buffer for creating draw, compute and resource-update commands; and the Renderer Resource Command Buffer is a platform-agnostic command buffer for creating and deleting GPU resources (textures, buffers, etc.). The render backend has arrays of API-specific resources (e.g. VulkanTexture, D3D11Texture, ...), and each engine-side resource holds a uint32 handle to the render-side resource. Their system is set up for multi-threaded usage: command buffers are built in parallel, and RenderCommandBuffers (not resource command buffers) are executed in parallel. One thing I would like clarification on: in one of the blog posts they say, "When the user calls a create-function we allocate a unique handle identifying the resource." Where are the handles allocated from? The RenderBackend? How do they do it in a thread-safe way that doesn't kill performance? If anyone has any ideas or additional resources on the subject, that would be great. Thanks
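I can't speak for BitSquid's actual implementation, but a thread-safe handle allocator does not have to be expensive. A minimal C++ sketch: an atomic counter hands out fresh handles with one uncontended fetch_add, and a small locked freelist recycles them; both paths run only on create/destroy, never per recorded command:

        #include <atomic>
        #include <cstdint>
        #include <mutex>
        #include <vector>

        class HandleAllocator
        {
        public:
            uint32_t Allocate()
            {
                {   // Recycle a previously freed handle if one is available.
                    std::lock_guard<std::mutex> lock(freeMutex_);
                    if (!freeList_.empty()) {
                        uint32_t h = freeList_.back();
                        freeList_.pop_back();
                        return h;
                    }
                }
                // Otherwise mint a new one: a single atomic add, safe from any thread.
                return next_.fetch_add(1, std::memory_order_relaxed);
            }

            void Free(uint32_t handle)
            {
                std::lock_guard<std::mutex> lock(freeMutex_);
                freeList_.push_back(handle);
            }

        private:
            std::atomic<uint32_t> next_{ 1 };   // 0 is reserved as "invalid handle"
            std::mutex freeMutex_;
            std::vector<uint32_t> freeList_;
        };

Because creates and destroys are orders of magnitude rarer than command recording, even the freelist mutex is unlikely to show up in a profile; engines often also pack a generation counter into the upper bits of the handle to catch use-after-free.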