Zurtan

Member
  • Content Count

    27
  • Joined

  • Last visited

Community Reputation

122 Neutral

About Zurtan

  • Rank
    Member


  1. Zurtan

    Golden Fall, RPG with no regrets.

    There is also a quick guide over here: https://computerrpg.com/golden-fallindie-rpg-quick-guide/
  2. TL;DR: An RPG cave-crawling game with hand-tailored levels. Monsters, items, skills, magic, riddles, and a game that is not too long, so you don't have to worry about having no time for another RPG.

     I made a game called Golden Fall. It is an RPG about a boy who tries to find some coin in a dangerous cave, but falls into a hole instead and has to delve deeper to get out. It is a "time unit" based RPG: like a roguelike, whenever you move, everything else moves as well. The character that has wasted the fewest time units is the one to make the next move.

     It is not terribly long, and you can finish it in a few hours, so you don't need to worry about spending a lot of time on yet another very long RPG. While the time-unit system is like in a roguelike, the levels are fixed, not procedural. That means that if you fail once, you can recall where you failed before and think about how to overcome that specific level. Sure, it is less "cool" that you can memorize levels, but I think it's the kind of classic gameplay that people can enjoy.

     There are several versions of this game, some of them free. You can get it for free on Mac, iOS, or Google Play. You can get it for 99 cents on Steam; I don't think I can run a sale on Steam where it's free, which is why it's not free there as well. As you can tell, I got desperate from the lack of interest in the game, so I beg you to try it out. lol.

     Links:
     Steam:
     iOS: https://apps.apple.com/us/app/golden-fall/id1473700267
     Mac: https://apps.apple.com/pe/app/golden-fall/id1470597898?l=en&mt=12
     Google Play: https://play.google.com/store/apps/details?id=com.PompiPompi.RPGEngine&hl=en
     Amazon: https://www.amazon.com/Pompi-Entertainment-Golden-Fall/dp/B07TMQCH7F/ref=sr_1_8?keywords=golden+fall+android+game&qid=1573912685&sr=8-8

     (I suddenly see that some of the stores do not have the price I meant them to have.)

     PLEASE TRY IT OUT! OR GIVE FEEDBACK! I got so disappointed by this.
  3. Well, I haven't been doing non-engine, raw DirectX Win32 apps for a while. Apparently there is something called DWM, and you need to take care that you are not doing the wrong things with DirectX while it's in different states, I guess. I am unfamiliar with it, so you will need to read up on it yourself: https://docs.microsoft.com/en-us/windows/win32/dwm/dwm-overview
  4. Does your app also have a console window? There are some scenarios where Win32 apps can make your main thread get stuck. For instance, if you have a console window and you click the mouse on it, the thread itself will be stuck until you press Enter. If you have a background thread working in the background, it might allocate stuff during that time. This is only Win32 and C++, right? No C#?
  5. The only thing that doesn't make sense to me is that you set the size of the index buffer to sizeof(unit) multiplied by kVertexBufferSize.
  6. I think you are not showing enough code for us to tell. It would also help if you showed the line that "fixes" the issue, and if you could provide your wrapper code.

     One easy way to catch a memory leak is to have a small allocation logger class. In this class you would have something like std::unordered_map<long long, std::tuple<std::string, int>> allocationMap;. Then, at every place you allocate something, you insert a value into allocationMap, where the key is the address translated into a long long (for instance, the pointer of the interface if it's not actual RAM), the string is some string you give it to identify the allocation site, and the int could be a byte count (if applicable). When you free, you just give the address and it removes the entry based on the interface address. In addition, you can add asserts if there was an insert of a null address, or a removal of something that doesn't exist in the map. This is also useful for finding VRAM allocation issues, as you often don't have profilers for VRAM.

     Notice you are talking about RAM, not VRAM, so I somehow doubt it's the actual DX object that leaks. Buffers are stored mostly in VRAM, so you would see the leak in VRAM as well. My guess is that maybe your wrapper is leaking, or maybe while the app is minimized something gets allocated and never released, because you might release it only in the renderer, and it doesn't render in that state.
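A minimal sketch of the allocation logger described above. The class and method names (AllocationLogger, OnAlloc, OnFree) are my own; the map type follows the post, with the address cast to a long long key:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <string>
#include <tuple>
#include <unordered_map>

// Hypothetical allocation logger as sketched in the post. The key is the
// address cast to long long; the value is a tag string plus a byte count.
class AllocationLogger {
public:
    void OnAlloc(const void* p, const std::string& tag, int bytes) {
        assert(p != nullptr);  // catch null-address inserts
        const long long key =
            static_cast<long long>(reinterpret_cast<std::uintptr_t>(p));
        assert(map_.count(key) == 0);  // catch double inserts
        map_[key] = std::make_tuple(tag, bytes);
    }
    void OnFree(const void* p) {
        const long long key =
            static_cast<long long>(reinterpret_cast<std::uintptr_t>(p));
        auto it = map_.find(key);
        assert(it != map_.end());  // catch frees of unknown addresses
        map_.erase(it);
    }
    std::size_t LiveCount() const { return map_.size(); }  // outstanding allocations
private:
    std::unordered_map<long long, std::tuple<std::string, int>> map_;
};
```

When the app shuts down (or after a minimize/restore cycle), anything still in the map is a leak, and the tag string tells you where it came from.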
  7. The direction will help you figure out when they are hitting and when they are moving away from each other. If they are moving away from each other, you don't need to fix the position; otherwise they will stick. I am not sure that is a good way to HANDLE the collision; I just suggested how you can detect objects colliding while moving into each other, as opposed to moving away from each other. I think the easiest way to do this is to divide your frame into smaller steps, and move everyone in those smaller steps.
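A toy 1-D sketch of the sub-stepping idea, under my own made-up setup (the thin wall at 50..51 and the function name are illustrative): a fast object moved in one big step can jump over a thin obstacle, while smaller sub-steps catch the overlap.

```cpp
#include <cassert>

// Advance `pos` by vel * dt in `steps` smaller increments, testing for
// overlap with a thin wall after each sub-step. With steps == 1 a fast
// object can tunnel straight through; with more sub-steps it is caught.
bool HitsThinWall(float pos, float vel, float dt, int steps) {
    const float wallLeft = 50.0f, wallRight = 51.0f;  // a thin obstacle
    const float sub = dt / static_cast<float>(steps);
    for (int i = 0; i < steps; ++i) {
        pos += vel * sub;  // one small step
        if (pos >= wallLeft && pos <= wallRight) return true;
    }
    return false;
}
```

The trade-off is cost: every object moves (and is tested) `steps` times per frame, so you pick the smallest step count that your fastest object needs.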
  8. You can use the dot product. If two vectors are orthogonal, their dot product is 0. If they are moving away from each other, or the angle between them is wider than 90 degrees, it is negative. You can use that to test whether two directions are moving away from each other. You can also calculate the normal to your vector to test whether the other direction is to its left or right. Hope this helps.
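A tiny sketch of the dot-product test, applied to the approaching/separating question (Vec2, Dot, and Approaching are names I made up for illustration):

```cpp
#include <cassert>

struct Vec2 { float x, y; };

float Dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// Two objects are closing in on each other when A's velocity relative to B
// points toward B, i.e. dot(velA - velB, posB - posA) > 0. A negative dot
// product means they are separating and no position fix-up is needed.
bool Approaching(Vec2 posA, Vec2 velA, Vec2 posB, Vec2 velB) {
    Vec2 toB{posB.x - posA.x, posB.y - posA.y};
    Vec2 relVel{velA.x - velB.x, velA.y - velB.y};
    return Dot(relVel, toB) > 0.0f;
}
```
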
  9. Thanks. This helped solve the problem. In the network interface there was a discarded-packet counter. This counter showed that the receiving NIC was discarding frames. I have brought back jumbo frames, and the actual culprit was that a long time ago we decided to disable flow control. Ever since, we kept it disabled, but enabling it, in combination with ironing out bugs along the way, was what solved the problem. I now get nearly zero packet discards with 4 agents running together; I had only a few packet discards after 30 minutes. The next challenge is streaming four 4K raw videos, one from each agent. As it seems, a single socket with a single thread struggles with this. Thanks for your patience and help.
  10. I have been looking at Performance Monitor (perfmon) on Windows. I just looked at the IPv4 UDP stats, and I wasn't able to get anything that helped me from that. What do you suggest looking at in Performance Monitor to figure out the source of the problem?
  11. Zurtan

    Opinions on Unity for 2D Mobile?

    Unity will save you a lot of time and effort, especially if you are making a game. The biggest issue with Unity, I think, is that it doesn't support native GUI building, as far as I can tell. Both iOS and Android have their own tools for building GUIs; when using Unity you will probably need to build your own GUI in Unity and miss the many GUI features iOS and Android offer. But if you are making a game, you want a game GUI, not an app GUI, for the most part.
  12. How do I get port statistics from a NIC or a switch? I couldn't find anything for the Intel X710.
  13. OK, so I have been working on it lately. A few things.

      On UDP I would get too many packet losses. However, after disabling jumbo packets I was getting far fewer packet losses. The issue is that with 1500-byte packets, it's nearly impossible to get more than 2 Gb/s with a single socket and thread, while with jumbo packets I could easily get 9 Gb/s with a single thread and socket. Even after disabling jumbo packets, one of the edge computers would sometimes get too much packet loss. The CPU also works a lot harder with 1500-byte packets.

      Why does this single PC get worse packet loss than the other PCs? It's supposed to be an identical machine and configuration. The only difference I could think of is different RAM sticks. I also replaced the fiber cable to make sure. So why do jumbo packets have so many packet losses?

      I am using Winsock2 directly with recvfrom, select, sendto, etc. My current threading is as follows. For receiving: a thread for each socket. The thread waits for readability of the socket with select with a 2-second timeout (although I also tried doing select only the first time). Then it gets a pointer to a vector from a pool, with the size of the packet, calls recvfrom on the socket, and pushes the vector to a queue. A different thread polls the queue and composes the frame. For sending, I have a thread that decomposes the frame into packets and pushes them to a queue, and then a thread that polls the queue and calls sendto (tried with and without select for writability). Am I doing this right?

      Basically I think I have a solution where I can split the frames to send across multiple sockets. Only one edge computer underperforms sometimes, for no apparent reason.
  14. Well, I think we figured out that it is highly unlikely that the hardware loses packets. The packet loss we have seen in UDP is just "software packet loss": it means the receiving end is unable to read the packets fast enough. This is more apparent because if your server sends packets to the other computer, it doesn't even care whether there is a receiving client reading those packets. It sends them anyway, and Windows will show bandwidth on the NIC even though you already closed your client, because the server just keeps on sending.

      On the other hand, since TCP blocks on send, the block itself might affect the bandwidth you see in Task Manager. TCP is a lot less stable bandwidth-wise in Task Manager than the UDP server that always sends and doesn't care whether anyone reads it on the other end. So why would TCP block the send if it queues the data into a buffer and sends it later? You would think a buffer would smooth out the bandwidth. We are now focusing on UDP, as we don't know why TCP is so unstable and non-uniform. Maybe it's by design.

      However, even with UDP we have issues. My latest theory is that a single core cannot handle 3 Gb/s of bandwidth too well. A CPU core operates at about 3 giga-cycles per second, so you have 3 GHz / (3 Gb/s / (9000 bytes × 8 bits)) ≈ 72K cycles per packet on average. That might not be enough for a single core. So I think I need more than one thread reading from a single socket, or to limit the bandwidth per thread. What is the scale of dealing with 14K packets per second (for 1 Gb/s)?
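The back-of-the-envelope budget above, written out so the units are explicit (the 3 GHz / 3 Gb/s / 9000-byte numbers come from the post; the helper names are mine):

```cpp
#include <cassert>
#include <cmath>

// Cycle budget per packet: cycles available per second divided by the
// packet rate, where the packet rate is link bits/sec over frame bits.
double CyclesPerPacket(double core_hz, double link_bps, double frame_bytes) {
    const double frame_bits = frame_bytes * 8.0;         // e.g. 9000 B -> 72,000 bits
    const double pkts_per_sec = link_bps / frame_bits;   // packet arrival rate
    return core_hz / pkts_per_sec;
}

double PacketsPerSecond(double link_bps, double frame_bytes) {
    return link_bps / (frame_bytes * 8.0);
}
```

At 3 Gb/s with 9000-byte jumbo frames this gives ~41.7K packets/sec and ~72K cycles per packet on a 3 GHz core, and at 1 Gb/s about 13.9K packets/sec, matching the ~14K figure in the post. With 1500-byte frames the packet rate is six times higher and the per-packet budget six times smaller, which fits the higher CPU load seen without jumbo frames.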
  15. We don't have a switch; it's a direct cable from NIC to NIC, a star setup. I have a theory though... Let's say you have a 100 Mb/s NIC. Now let's say in the first second you send 400 Mb, and then for the following 3 seconds you don't send anything. You might think you are sending 100 Mb/s on average, but your NIC might drop a lot of packets in that first second. Edit: I might need to smooth out the sending. I might be spamming the NIC too fast in a short period of time.
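One common way to smooth the sending is a token bucket. This is my own suggestion, not code from the thread, and all names here are illustrative: tokens (bits) accumulate at the target rate, and a packet goes out only when enough tokens exist, which spreads a burst over time instead of slamming the NIC in one go.

```cpp
#include <cassert>

// Token-bucket pacer sketch. rate_bps is the sustained send rate in
// bits/sec; burst_bits caps how large a momentary burst may be.
class TokenBucket {
public:
    TokenBucket(double rate_bps, double burst_bits)
        : rate_(rate_bps), burst_(burst_bits), tokens_(burst_bits) {}

    // Advance the clock by dt seconds, refilling tokens up to the burst cap.
    void Tick(double dt) {
        tokens_ += rate_ * dt;
        if (tokens_ > burst_) tokens_ = burst_;
    }

    // Try to spend `bits` tokens for one packet; false means "wait and retry".
    bool TrySend(double bits) {
        if (tokens_ < bits) return false;
        tokens_ -= bits;
        return true;
    }

private:
    double rate_, burst_, tokens_;
};
```

The send thread would call Tick with the elapsed time each loop and hold packets in its queue whenever TrySend returns false, so the 400 Mb burst from the example gets stretched out to the NIC's actual line rate.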