Request problems with Call of Duty API

Started by
5 comments, last by SuperVGA 3 years, 8 months ago

Hey there.

Currently, I'm developing an FPS game-stats app. For the Call of Duty API, I'm using the "official" way via the Call of Duty website and a device ID (a randomly generated string), then generating cookies from that. Unfortunately, this approach isn't very reliable. Every time I make a certain number of requests, it either freezes for some hours (I think it puts me on a cooldown) or it crashes and I have to generate a new cookie.

I'm wondering how the big players in this business, like Tracker Network, can handle this.

I know it's not really game development, but I would be really happy if someone could help me with some tips or show me another option.

Thanks for your help



I guess you are locked out of the API for using it too excessively, like any other company does with its REST APIs. They simply want to keep traffic low except for some partners that pay for it. Traffic and server power are still expensive, and companies don't want to offer them for free.

The alternative is to write a web crawler. Some games, like Fortnite or any Blizzard title, offer this information on their website. If your game does as well, you can query the HTML content like a regular browser does and parse it for the information you want. Sending an HTTP header that mimics a mobile device might also get you simpler HTML than the desktop version, but that's a matter of trial and error and of looking through the raw HTML you get from the website.
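To illustrate the mobile-header trick, here is a minimal Python sketch. The URL and the exact User-Agent string are assumptions for illustration; substitute the real stats page for your game and whatever mobile UA you want to mimic.

```python
import urllib.request

# Hypothetical stats page URL -- substitute the real page for your game.
URL = "https://example.com/player/stats"

# A mobile User-Agent often yields simpler HTML than the desktop version.
MOBILE_UA = ("Mozilla/5.0 (Linux; Android 13; Pixel 7) "
             "AppleWebKit/537.36 (KHTML, like Gecko) "
             "Chrome/120.0 Mobile Safari/537.36")

def build_request(url: str) -> urllib.request.Request:
    """Build a GET request that presents itself as a mobile browser."""
    return urllib.request.Request(url, headers={"User-Agent": MOBILE_UA})

req = build_request(URL)
# html = urllib.request.urlopen(req).read().decode("utf-8")  # the actual fetch
```

From there you compare the HTML you get back against what a desktop UA returns and pick whichever is easier to parse.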

I did this before for some well-known shopping websites when I wrote a discount notifier to let me know when a certain item dropped to a low price. I did it in C# because it already offers the necessary HTTP classes, but you can use any language you want.

I already considered doing it this way. I think I would do it in C# too, but I couldn't find any information on their website. Am I blind?


But it could also happen that you get blocked this way if you make too many requests (it starts to look like a DDoS).


You won't if you don't request too much at a time. I set my tool to fetch anywhere from every 30 seconds up to every 2 minutes from the website I was observing, and I ran it for about 5 weeks in a row until I got my cheap items.
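That polling pattern can be sketched in a few lines of Python. The 30–120 second bounds come from the post above; the `fetch` and `handle` callables are placeholders for your own download and parsing code. Randomizing the delay avoids hitting the site on a fixed, machine-like beat.

```python
import random
import time

def next_delay(lo: float = 30.0, hi: float = 120.0) -> float:
    """Pick a randomized delay between polls so requests don't arrive on a fixed beat."""
    return random.uniform(lo, hi)

def poll(fetch, handle, iterations: int) -> None:
    """Fetch and process `iterations` times, sleeping a random interval between polls."""
    for _ in range(iterations):
        handle(fetch())
        time.sleep(next_delay())
```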

If you want to start creating a web-thingy, just use System.Net.Http.HttpClient to perform the fetch, and after that a simple Regex will do the trick to find the information you need. You might need to toy around with URL parameters, but that depends on the website.
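The fetch-then-regex idea looks the same in any language; here is a Python sketch of the extraction half. The markup and the `stat-kd` class name are invented for illustration — you'd inspect the real page and write a pattern against whatever it actually contains.

```python
import re

# Hypothetical markup -- inspect the real page to find the actual structure.
SAMPLE_HTML = '<span class="stat-kd">1.37</span>'

# Capture the number that follows the (assumed) stat-kd class attribute.
KD_PATTERN = re.compile(r'class="stat-kd">([\d.]+)<')

def extract_kd(html: str):
    """Return the K/D ratio found in the HTML, or None if the pattern is absent."""
    m = KD_PATTERN.search(html)
    return float(m.group(1)) if m else None
```

Regexes are fine for grabbing one or two values like this; if you end up extracting many fields, a proper HTML parser is usually less brittle.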

T-rex_on_point said:

But it could also happen that you're getting blocked this way if you do too many requests(DDOS).

@T-rex_on_point You should proxy the requests through a cache, with rules on request frequency.
Better yet, aggregate and handle everything on the server, exposing your own API entirely. You might want a slightly different payload than the official API, TREX-user-ids and whatnot.
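The caching-proxy idea boils down to a TTL cache in front of the upstream fetch: your app always asks your server, and your server only touches the official API when its copy is stale. A minimal Python sketch, with the fetch callable, key shape, and 120-second TTL all being assumptions:

```python
import time

class CachedFetcher:
    """Serve cached stats; hit the upstream API only when the cached entry is stale."""

    def __init__(self, fetch, ttl_seconds: float = 120.0, clock=time.monotonic):
        self._fetch = fetch          # callable(key) -> payload; your upstream request
        self._ttl = ttl_seconds      # how long a cached payload stays fresh
        self._clock = clock          # injectable for testing
        self._cache = {}             # key -> (fetched_at, payload)

    def get(self, key):
        now = self._clock()
        hit = self._cache.get(key)
        if hit is not None and now - hit[0] < self._ttl:
            return hit[1]            # fresh: no upstream request at all
        payload = self._fetch(key)   # stale or missing: one upstream request
        self._cache[key] = (now, payload)
        return payload
```

However many clients ask for the same player, the upstream API sees at most one request per TTL window, which is exactly the frequency rule that keeps you off the cooldown.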

