King_DuckZ

Members

  • Content count: 106
  • Community Reputation: 128 Neutral
  • Rank: Member
  • Location: Rome
  1. Ok, thanks for clarifying! I can't find it right now, but one of the tutorials I stumbled upon stated that clipping was necessary for the rasterizer to stay inside its buffer. Somehow I understood it was my responsibility to do that, not the hardware's. All the better then: I'll just cull fully invisible polygons and leave partially visible ones as they are!
  2. Hello everybody, I'm trying to figure out how to implement clipping in my code. My understanding is that when a triangle crosses a clipping plane (i.e. one of the frustum planes), new vertices are added to avoid disappearing-triangle artifacts (there's a small sketch of this step after the list). This is done on the CPU (correct?), and any object is eligible for clipping. But my question is: doesn't this make the GL_STATIC_DRAW flag pointless? I mean, an object sitting on the screen border gets clipped every frame, so its vertex data is far from static (new vertices are added and removed continuously). Moreover, how does it fit with skinned meshes or any other GPU-generated vertices? Has anyone got a good reference on clipping? All I've managed to find is either abstract theory or very basic examples.
  3. Building the authentication address (i.e. the one that gets spat to stdout) is not hard at all; in fact it's just a composition of client_id, redirect_url and a couple more things (see the sketch after this list). The problem is that you need the output of visiting that address, and I can't think of any browser/platform-independent, reliable way of getting some random text out of another program's memory. Obviously, I won't ask the user to copy and paste an opaque string some 30 characters long. People could take it as if you're trying to hack their account. As swiftcoder pointed out, the suggested way is to embed a browser directly inside my app, which is just not doable.
  4. I wish I had seen that earlier... I never scrolled that page to the bottom :/ Too bad for them then, no Facebook support. It's fairly stupid, as the oauth2 system provides a request_token based authentication that would work pretty nicely -- [url="http://cms.getsatisfaction.com/developers/authentication"]http://cms.getsatisf.../authentication[/url] and [url="http://www.reijo.org/scribbles/list-of-oauth-service-providers"]http://www.reijo.org...rvice-providers[/url] I thought they were using some alternate method or that I was wrong; instead they just disabled it... Thanks for helping, I think we will only use the iOS library they provide and leave the PC versions Facebook-free.
  5. @SiCrane: I've checked the libraries you linked:
     - Lib 1 prints to stdout a URL to copy-paste into your browser, then asks you to copy back whatever URL the user is redirected to; that's exactly the step I'm trying to automate.
     - Lib 2 I had already seen; it relies on QNetworkAccessManager, which /somehow/ returns the redirected URL. It's really not clear how that's possible at all: when the user logs in to Facebook he might be subscribing for the first time, so you can't just ask for, say, the 3rd redirected URL. And the user could get distracted, close the tab, open another one, go to a porn site and then come back to the game, so you can't ask for the last URL he visited either. I'll check the Qt source code anyway.
     @Swiftcoder: I'm sorry but I still don't get it. In fact I'm the least fit person for internet-related programming, and yet they chose me. Where do I get these cookies from? Can you give me an example please? My understanding is that I open a URL in the default browser using ShellExecute. It might start Explorer, Opera or even some home-made browser. The browser then shows the login page, and upon successful login it is the browser that receives the cookies (not my program). The best I can do is wait for the process to terminate, but that doesn't give me a clue on where to find the browser's cookies or the redirected URL.
  6. Maybe I'm missing something obvious but... doesn't parseSignedRequest() expect a signed_request already? Its caller (line 484) is getting it from the session or from some cookie, but I don't have either of them in our game. I could get a signed_request out of apps.facebook.com/<appID>, but that too seems to rely on cookies. As soon as I try to fetch that same page from a ruby script, for example, the base64-encoded signed_request decodes to a mostly empty json structure. My understanding of the oauth2 flow is:
     - the client asks for an access_token
     - the client fires up a browser asking the user to log in and allow whoever is using that access token to mess around with his account
     - focus gets back to the game, and if the access_token has been validated it can be used for a limited time
     I suppose I'm wrong, but then I don't understand how I can do two-way communication with a browser launched via ShellExecute.
  7. Hello everybody, I need to write some code to make our game perform some actions on Facebook. The game is multiplatform (iOS, MacOSX, Win32, Linux) and is mainly written in C++. We can do http and https communication, and it's OK to invoke platform-specific functions such as ShellExecute on Win32. I've searched the net for days now, and it seems to me I need a so-called "access token" in order to do anything. The question is: how do I get such a token? I've seen thousands of examples in PHP and .NET, but all of them assume the game is a browser game. Our game is a stand-alone binary. Any help on how to get a signed request, an access token, a code or whatever is needed would be very helpful. As far as I can tell, we need access to the user's friends list and the ability to post pictures on his behalf.
  8. [quote] Timing is the most common source of bugs when swapping cores. There could be others, more obscure ones. [/quote] Indeed, I didn't think of QueryPerformanceCounter(). The timer implementation uses it, and being old single-core code it doesn't even try to compensate for the possible discrepancies. [quote] You could always fix the code.... Which isn't really viable since such bugs are hard to reproduce. [/quote] Also, the timing code isn't wrapped into just one or two classes... the original programmer had a vocation for copy & paste, so there are hundreds of direct calls, each followed by the conversion to milliseconds. Sometimes you even see the same comments over and over in the code. [quote] Or just make a batch file "start /affinity 1 foo.exe". [/quote] Tbh I was rather thinking of calling SetThreadAffinityMask(GetCurrentThread(), 1) in the WinMain (see the sketch after this list). Are the two solutions equivalent, or are there arguments in favour of one over the other?
  9. Hello guys, I'm working on an old game that needs to be ported to modern Windows systems. The game is single-threaded, and one of the bugs assigned to me is to bind the main thread to a fixed core on multi-core CPUs. While this should be easy enough, I was wondering whether there is really a good reason for that, and how it may affect performance and the OS. Should I really do it? [i]Edit:[/i] As I said, it's an old game, so extreme performance is not an issue at all. Cache misses due to core swapping are hardly a concern.
  10. @Antheus: you're right, I had just tried to write my allocator in a more compact manner. The code I just posted better reflects my real code. Anyway, type punning through unions is tolerated by compilers but still non-standard. In practice it would work, except that you can't put anything with a non-trivial constructor in a union, and the type must be known when you declare it. Counter-example: you can't possibly use a union for the fast pimpl idiom. As for the non-trivial constructor: my allocator is going to return a constructed object of type T, so it must work for PODs as well as everything else.
  11. Well, just to scare you a little bit then, I *am* getting "unexpected" results. This is my sample program:
      [code]
#include <iostream>

class MyClass {
public:
    // Hands out int slots carved from a raw char buffer -- exactly the
    // kind of type punning the strict aliasing rule forbids.
    int* GetNew() {
        int* retVal = reinterpret_cast<int*>(mem + sizeof(int) * used);
        *retVal = 0;
        used++;
        return retVal;
    }

    char mem[sizeof(int) * 4];
    int used;

    MyClass() { used = 0; }
};

int main() {
    MyClass getter;
    int pool[64];

    // Case a: heap int, written through int* and then short*.
    int* a = new int; //getter.GetNew();
    *a = 2;
    *(short*)a = 1;

    // Case b: int carved out of a char buffer, same two writes.
    int* b = getter.GetNew();
    *b = 2;
    *(short*)b = 1;

    // Case c: int array aliased through char* and short*.
    char* c = (char*)pool;
    *c = 2;
    pool[0] = 3;
    *(short*)c = 1;

    std::cout << *a << " " << *b << " " << (int)*c << "\n";
    return 0;
}
      [/code]
      And here's what I'm getting as the output:
      [code]
[dev00@dev00 ~]$ g++ -fstrict-aliasing -O3 -o StrictAliasing -Wall StrictAliasing.cpp && ./StrictAliasing
2 2 1
[dev00@dev00 ~]$ g++ -fno-strict-aliasing -O3 -o StrictAliasing -Wall StrictAliasing.cpp && ./StrictAliasing
1 1 1
[dev00@dev00 ~]$ pathCC -fno-strict-aliasing -O3 -o StrictAliasing -Wall StrictAliasing.cpp && ./StrictAliasing
Warning: variable .init.0 in _GLOBAL__I_main might be used uninitialized
1 1 1
[dev00@dev00 ~]$ pathCC -fstrict-aliasing -O3 -o StrictAliasing -Wall StrictAliasing.cpp && ./StrictAliasing
Warning: variable .init.0 in _GLOBAL__I_main might be used uninitialized
1 1 1
[dev00@dev00 ~]$ icpc -fstrict-aliasing -O3 -o StrictAliasing -Wall StrictAliasing.cpp && ./StrictAliasing
1 1 1
[dev00@dev00 ~]$ icpc -fno-strict-aliasing -O3 -o StrictAliasing -Wall StrictAliasing.cpp && ./StrictAliasing
1 1 1
      [/code]
      So yeah, unless you shove the obvious *(int*)&myFloat example in gcc's face, you don't even get a warning. Anyway, in many cases I doubt the compiler can figure out the real original type, and only gcc seems to strive for this insanity. I'll try the VS compiler tomorrow at work (but as you said, I expect it to tolerate aliasing), and if anyone can try clang and Comeau, that would be interesting... If I'm correct, placement new should tell the compiler that the official type at a given address is changing, so whatever I copy and pass around from that point on is only allowed to be of the newly declared type. If I use a void* instead, as returned by malloc(), then the first cast of that pointer tells the compiler what the type is going to be. You can't dereference a void* anyway. Hopefully this is going to work... x.x [i]Edit:[/i] Visual Studio has an /Oa switch (as well as an /Ow), but I get "1 1 1" both with /Oa and without any switch.
  12. I had already found that, but it didn't provide any solution to my problem. I think I just stumbled upon something interesting though: [quote name='C++11 standard draft §3.8/1']
      The lifetime of an object is a runtime property of the object. An object is said to have non-trivial initialization if it is of a class or aggregate type and it or one of its members is initialized by a constructor other than a trivial default constructor. [ Note: initialization by a trivial copy/move constructor is non-trivial initialization. — end note ]
      The lifetime of an object of type T begins when:
      — storage with the proper alignment and size for type T is obtained, and
      — if the object has non-trivial initialization, its initialization is complete.
      The lifetime of an object of type T ends when:
      — if T is a class type with a non-trivial destructor (12.4), the destructor call starts, or
      — the storage which the object occupies is reused or released.
      [/quote] Or in code (well, kind of an extreme example):
      [code]
int a = 10;
float* b = new(&a) float; // reusing a's storage ends a's lifetime
*b = 10.0f; // OK
a = 10;     // WRONG! a's lifetime has ended
      [/code]
      I'm starting to see the point of Linus in his rant... Well, hopefully this is going to work; it looks like old gcc versions had a bug here (http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286).
  13. Hello smasherprog, thanks for your reply, but this is not what I asked. My concern is about the strict aliasing rule. Specifically, it says that accessing a value through a pointer of a type incompatible with the value's actual type gives undefined behaviour. That is:
      [code]
int a;
*(float*)&a = 10.0f; // undefined behaviour under strict aliasing
      [/code]
      is not going to work, or at least is against the standard. And that's assuming int and float have the same alignment and size. I've continued my research on the internet, and I have experimented a bit myself. gcc seems to skip quite a few warnings about type punning, but luckily ekopath seems to work a little better. What's puzzling me now is that casting a float* to int* gives a warning, while casting a char* to int* and vice versa doesn't. This is the opposite of my understanding that anything can be cast to a char* but not the other way round:
      [quote]
      If a program attempts to access the stored value of an object through a glvalue of other than one of the following types the behavior is undefined:
      * the dynamic type of the object,
      * a cv-qualified version of the dynamic type of the object,
      * a type similar (as defined in 4.4) to the dynamic type of the object,
      * a type that is the signed or unsigned type corresponding to the dynamic type of the object,
      * a type that is the signed or unsigned type corresponding to a cv-qualified version of the dynamic type of the object,
      * an aggregate or union type that includes one of the aforementioned types among its elements or non-static data members (including, recursively, an element or non-static data member of a subaggregate or contained union),
      * a type that is a (possibly cv-qualified) base class type of the dynamic type of the object,
      * a char or unsigned char type.
      [/quote]
      Any clarification is appreciated! [i]Edit:[/i] Thinking about it a little, I'm wondering if this is really an issue in my case. The problem with casting to an incompatible type and then reading/writing through both pointers is that the compiler may reorder the accesses, believing the two objects are unrelated. BUT in my case I only use char to hold memory, and I never actually dereference it as char. So casting to another type should be 100% safe, no? I'll keep on searching, but this issue is very confusing, I think.
  14. Hello, today I was writing a container class that pre-allocates uninitialized memory and uses placement new when needed. After a long search, I realized I can't possibly do that due to the strict aliasing rule. To my understanding, a void* can be cast to something else only if the original variable holding the pointer has not been cast to a different type already. So:
      [code]
void* myMem = malloc(sizeof(int) * 5);
int* a = (int*) myMem;             // OK
short int* b = (short int*) myMem; // WRONG
      [/code]
      Is this correct? If so, writing a stack-based allocator is impossible, or is there any technique I don't know of to achieve it? I'd like to get the equivalent of the following (incorrect) code (a placement-new based sketch is at the end of this list):
      [code]
void* StackAlloc(int size) {
    static char mem[64];
    static int used = 0;
    void* ret = mem + used;
    used += size;
    return ret;
}
      [/code]
      From what I gathered, this is only valid if client code casts the returned pointer back to char. Note that I'm aware of the alignment issues, but this is just a simplified example. From [url="http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg01655.html"]this discussion[/url] with Linus I take it I should disable the aliasing optimization for the memory lib, but most of the code is templated, so I would have to disable that optimization for every client of the memory lib. [url="http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg01647.html"]From that same discussion[/url] I understand that the gain in performance and code size is, to be generous, very small. Do you think I should go ahead and disable said optimization? If so, will I find a way to do the same on compilers other than gcc (i.e. Visual Studio), or will I end up with non-standard, unportable code?
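
A minimal sketch of the vertex-adding step asked about in post 2: Sutherland-Hodgman clipping of one triangle against one plane. The Vec3 and Plane types are made up for this example; a full clipper repeats the step for all six frustum planes, and, as post 1 concludes, the hardware does this after vertex processing, so GL_STATIC_DRAW buffers never change.
[code]
#include <cstdio>
#include <vector>

// Made-up minimal types for this sketch only.
struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; }; // keep points with dot(n, p) + d >= 0

static float Dist(const Plane& pl, const Vec3& p) {
    return pl.n.x * p.x + pl.n.y * p.y + pl.n.z * p.z + pl.d;
}

static Vec3 Lerp(const Vec3& a, const Vec3& b, float t) {
    return Vec3{ a.x + (b.x - a.x) * t,
                 a.y + (b.y - a.y) * t,
                 a.z + (b.z - a.z) * t };
}

// Sutherland-Hodgman step: clip a polygon against one plane. New vertices
// appear where an edge crosses the plane, so a triangle can come out as a
// quad -- this is where the "added vertices" from post 2 come from.
std::vector<Vec3> ClipAgainstPlane(const std::vector<Vec3>& poly, const Plane& pl) {
    std::vector<Vec3> out;
    for (size_t i = 0; i < poly.size(); ++i) {
        const Vec3& cur = poly[i];
        const Vec3& nxt = poly[(i + 1) % poly.size()];
        const float dc = Dist(pl, cur);
        const float dn = Dist(pl, nxt);
        if (dc >= 0.0f)
            out.push_back(cur);                         // current vertex is inside
        if ((dc >= 0.0f) != (dn >= 0.0f))               // edge crosses the plane
            out.push_back(Lerp(cur, nxt, dc / (dc - dn)));
    }
    return out;
}

int main() {
    // Clip a triangle that pokes through the plane x = 0 (keep x >= 0).
    std::vector<Vec3> tri = { {-1, 0, 0}, {1, 1, 0}, {1, -1, 0} };
    std::vector<Vec3> clipped = ClipAgainstPlane(tri, Plane{ {1, 0, 0}, 0.0f });
    std::printf("%zu vertices after clipping\n", clipped.size()); // prints 4
    return 0;
}
[/code]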
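A minimal sketch of the address composition mentioned in post 3, assuming the usual OAuth2 authorization-dialog parameters. Every value below is a placeholder, and a real implementation must also percent-encode each parameter:
[code]
#include <iostream>
#include <string>

// Sketch only: endpoint, client_id, redirect_uri and scope are placeholders.
int main() {
    const std::string endpoint     = "https://www.facebook.com/dialog/oauth";
    const std::string client_id    = "YOUR_APP_ID";
    const std::string redirect_uri = "https%3A%2F%2Fexample.com%2Fcallback"; // pre-encoded
    const std::string scope        = "user_friends,publish_actions";

    const std::string url = endpoint
        + "?client_id="    + client_id
        + "&redirect_uri=" + redirect_uri
        + "&scope="        + scope
        + "&response_type=token";

    std::cout << url << "\n"; // hand this to ShellExecute or an embedded browser
    return 0;
}
[/code]
As the thread discusses, composing this URL is the easy half; capturing the token from the redirect is the part that needs an embedded browser or a platform SDK.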
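On the affinity question in post 8: "start /affinity" constrains the whole process from outside via a launcher script, while SetThreadAffinityMask pins one thread from inside the binary with no extra files to ship; for a single-threaded game the net effect on the main thread should be the same. A minimal sketch of the in-process version:
[code]
#include <windows.h>
#include <cstdio>

int main() {
    // Pin the calling thread to the first logical core (bit 0 of the mask).
    // Returns the previous affinity mask on success, 0 on failure.
    DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), 1);
    if (previous == 0) {
        std::printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    // ... run the game loop. QueryPerformanceCounter now always reads the
    // same core's counter, which is what the old timing code assumes.
    return 0;
}
[/code]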
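Finally, a minimal sketch of the allocator from post 14 rewritten around placement new, following the lifetime rules quoted in post 12. This is an illustration under those assumptions, not a production allocator: the buffer is only ever treated as raw storage, each object's type and lifetime begin at the placement-new expression, alignment is handled with a blunt alignas, and per-type padding is left as a comment:
[code]
#include <cstddef>   // std::max_align_t, std::size_t
#include <iostream>
#include <new>       // placement new

// Minimal bump-allocator sketch (never reclaims memory). The buffer is raw
// storage only; nothing is ever accessed through a wrongly-typed pointer.
class StackAlloc {
public:
    template <typename T>
    T* Create() {
        // NOTE: real code must first round 'used' up to alignof(T).
        void* storage = mem + used;
        used += sizeof(T);
        return new (storage) T(); // begins the lifetime of a T here
    }

    template <typename T>
    void Destroy(T* p) { p->~T(); } // ends the lifetime explicitly

private:
    alignas(std::max_align_t) char mem[64];
    std::size_t used = 0;
};

int main() {
    StackAlloc alloc;
    int*   a = alloc.Create<int>();
    float* b = alloc.Create<float>();
    *a = 42;
    *b = 3.14f;
    std::cout << *a << " " << *b << "\n"; // 42 3.14
    alloc.Destroy(b);
    alloc.Destroy(a);
    return 0;
}
[/code]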