Juliean

GDNet+ Basic
  • Content count

    1586
  • Joined

  • Last visited

Community Reputation

7063 Excellent

About Juliean

  • Rank
    Contributor

Personal Information

  1. Thanks, that makes a lot more sense now. I somehow mistook the right-to-left argument-passing convention for a guaranteed evaluation order. It's still strange that this problem only shows up now and in such a specific case, but that's what you get when you rely on unspecified behaviour.
  2. Hi, so yesterday I had this really weird bug, which fortunately I managed to work around, but I'd still like to know what's going on in the first place. Here's the deal: for my script system's C++ function bindings, I use (variadic) templates like this:

     template<typename Return, typename Class, typename... Args>
     using Function = Return (Class::*)(Args...);

     template<typename Return, CheckHasReturn<Return> = 0>
     static void Call(size_t index, Class& object, Function func, CallState& state)
     {
         auto value = (object.*func)(getAttribute<Args>(index, state)...);
         returnValue<Return>(0, state, value);
     }

     This lets me call a function pointer with a bunch of arguments stored contiguously in memory. In case you aren't familiar with variadic templates (or didn't know this was possible): getAttribute<Args>(...)... expands to one getAttribute call per template argument, like this:

     void test(int, float) { }
     Call(&test) => index = 1; func(getAttribute<int>(index), getAttribute<float>(index));

     Now here's the important part: since I'm using the (default) cdecl/thiscall calling convention, arguments are evaluated right-to-left, so index starts at the maximum value and is decremented in getAttribute(). And now to the bug: inside my plugin projects, getAttribute would sometimes get called out of order. An example of such a function would be:

     void ShowOptions(event::CallState& state, const std::vector<core::LocalizedString*>& vOptions, int selection, bool allowCancel, bool playSounds);

     Here's what happened: instead of calling getAttribute<bool>(3), getAttribute<bool>(2), ..., it was calling getAttribute<const std::vector<core::LocalizedString*>&>(3) first, which resulted in the last "bool" argument being misinterpreted as a vector type (it was still being inserted into the correct argument slot, which is kind of obvious since there would likely have been a compile error otherwise). After that, it would resume calling the remaining getAttribute functions in the "correct" order (in practice, though, this resulted in an early crash).

     Now my question is: how can this happen? Is there any clause in the C++ standard that allows argument evaluation to happen out of order under certain circumstances, maybe just in conjunction with the variadic pack-expansion "trick" I've used here? Or could it be a compiler bug (in which case I'd be tempted to finally smack MSVC; that thing is just getting messier and messier)? You'd probably need to know what else I tried in order to pin down the exact problem, so here:
     - Since it only happens inside the plugin projects (loaded via DLL), I've made sure that all compiler settings match.
     - I've also found that under no circumstances, neither in the plugin projects nor in the main engine codebase, could any such bug be reproduced directly. That is, I've tried to reduce the code involved in the function binding (which is quite a lot) to a minimal working example, but all attempts failed to yield a similar result - all the arguments/functions were evaluated/called in the required order.
     - What I have been able to do is pinpoint the problem to (apparently) the getAttribute<> function.
     Long story short, I'm supporting const vector<>& and similar constructs via a specialized template class inside getAttribute:

     namespace detail
     {
         template<typename Arg>
         struct AttributeHelper
         {
             static Arg Call(size_t& index, CallState& state)
             {
                 return state.GetAttribute<core::CleanType<Arg>>(index--);
             }
         };

         template<typename Arg>
         struct AttributeHelper<const std::vector<Arg>&>
         {
             static std::vector<Arg> Call(size_t& index, CallState& state)
             {
                 return state.GetAttribute<std::vector<Arg>>(index--);
             }
         };
     }

     template<typename Arg>
     decltype(auto) getAttribute(size_t& index, CallState& state)
     {
         return detail::AttributeHelper<Arg>::Call(index, state);
     }

     The actual code is a lot more complicated, but as far as I can tell, the "bug" is triggered when one of the AttributeHelper specializations is chosen. Again, I couldn't reduce this problem to a more simplified version of the code, but it seems that's the root of the problem here. I've actually been able to work around the issue by using an std::index_sequence instead of manually counting the index (which imposes a few additional limitations, but oh well) - see the sketch below. The issue itself still remains, though, and I still don't have a clue what exactly is going on or why. I know this is probably a deeply complicated technical issue, and I don't require immediate help, but I'd still like to know whether the behaviour I'm seeing (out-of-order evaluation of function arguments under specific circumstances) is actually valid or just another Microsoft-related bug. Any ideas? Thanks!
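
     For reference, here is a minimal, self-contained sketch of the std::index_sequence workaround mentioned above. CallState and getAttribute are simplified stand-ins for the real engine types (assumptions, not the actual binding code); the point is that each argument's slot is derived from its position in the parameter pack, so nothing depends on the order in which the compiler evaluates the call arguments:

     #include <cstddef>
     #include <iostream>
     #include <utility>

     struct CallState { };                            // stand-in for the real call state

     template<typename Arg>
     Arg getAttribute(std::size_t index, CallState&)  // takes the slot index by value, no shared counter
     {
         std::cout << "reading slot " << index << '\n';
         return Arg{};
     }

     template<typename Return, typename Class, typename... Args, std::size_t... Is>
     Return CallImpl(Return (Class::*func)(Args...), Class& object, CallState& state,
                     std::index_sequence<Is...>)
     {
         // The left-most argument always reads slot 0 and the right-most reads slot
         // sizeof...(Args) - 1, no matter in which order the compiler evaluates them.
         return (object.*func)(getAttribute<Args>(Is, state)...);
     }

     template<typename Return, typename Class, typename... Args>
     Return Call(Return (Class::*func)(Args...), Class& object, CallState& state)
     {
         return CallImpl(func, object, state, std::index_sequence_for<Args...>{});
     }

     struct Test { void Show(int, float, bool) { } };

     int main()
     {
         Test t;
         CallState state;
         Call(&Test::Show, t, state);  // the prints may appear in any order, but each argument
                                       // is always built from its own, correct slot (0, 1, 2)
     }
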
  3. One thing that stands out to me:

     struct Vertex
     {
         XMFLOAT3 position;
         XMFLOAT4 color;
     };

     Here you have a float3 for position, which is matched by your input-signature declaration.

     PSInput VSMain(float4 position : POSITION, float4 color : COLOR)
     {
         PSInput result;
         result.position = position;
         result.color = color;
         return result;
     }

     But the vertex shader receives a float4. Even if this is technically legal (not sure in DX12), it probably means that the w-component is 0.0f, while it should actually be 1.0f when output from the vertex shader without any perspective transform, as is the case for you. I'd suggest trying:

     PSInput VSMain(float3 position : POSITION, float4 color : COLOR)
     {
         PSInput result;
         result.position.xyz = position;
         result.position.w = 1.0f;
         result.color = color;
         return result;
     }

     and see if this works.
  4. Thanks all for the suggestions! I already knew about Mitsuba and was planning on using it to evaluate my results. I didn't think it was useful for what I need at the moment, though, since I'm probably looking for some mistake in my shader code/data and wanted something simpler; the two GitHub links look like they might work.
  5. Hello, I'm currently implementing a PBR-based BRDF in my engine, and I have slight concerns about the correctness of my results. So I'd like to ask if someone knows of an application that can be used to check this kind of thing - specifically, I'd like to see e.g. how the GGX/microfacet distribution is supposed to look so I can check whether I'm feeding it the right input values. Some kind of open-source code project where I can play around with the shaders would be exactly what I'm looking for. Is there any such thing? Thanks!
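
     (For anyone comparing values later: below is a small reference sketch of the GGX/Trowbridge-Reitz normal distribution term that shader output can be sanity-checked against. The alpha = roughness * roughness remapping is a common convention, not necessarily what every engine uses, so treat it as an assumption.)

     // GGX / Trowbridge-Reitz normal distribution function.
     // NdotH is the cosine between the surface normal and the half vector.
     float D_GGX(float NdotH, float roughness)
     {
         const float alpha  = roughness * roughness;  // common "roughness squared" remapping (assumption)
         const float alpha2 = alpha * alpha;
         const float denom  = NdotH * NdotH * (alpha2 - 1.0f) + 1.0f;
         return alpha2 / (3.14159265f * denom * denom);
     }
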
  6. You mean something like this?

     class Signal
     {
         ~Signal(void) { m_isDead = true; }

         void Emit(void) const
         {
             for(auto& delegate : m_list)
             {
                 delegate();
                 if(m_isDead)
                     break;
             }
         }

     private:
         bool m_isDead;
     };

     Seems a bit hacky & unsafe though - I imagine you had something different in mind, but since signals are just standalone classes that are composed into other classes, there's currently no other (easy) way to let the loop know. I'm also not sure whether that's even a good idea. It's usually important that all slots are correctly notified, even if one of them ends up terminating the signal - otherwise it can lead to invalid state/further crashes. Especially since the order of the slots is not guaranteed, this can easily lead to undefined behaviour, depending on which slot happens to be called first. I guess in that sense, creating a copy of the list is a necessary "evil".
  7. Come to think of it, I'm already doing what you describe for my entities/game objects, at least. Having a game object destroyed while its script is running/issuing the destroy command was something I encountered quite early, and alleviated by introducing a destroyed-queue like you described. For widgets (which are my primary source of pain regarding this topic), I never thought of doing that. There's a certain additional level of complication that I haven't made clear yet, though:
     - Widgets are generally user-managed. Meaning, they are mostly created & stored by a std::unique_ptr in the user code. I used some managed storage before, and it caused different problems, so I ended up this way. On the other hand, this means that it's nigh impossible to introduce a dead-widget queue to solve the problem (though see the sketch below for the general idea). For example, pseudo-code for an asset viewer that allows tabbing between different assets:

     class AssetView
     {
         using TabMap = std::unordered_map<Asset*, std::unique_ptr<AssetTabView>>;

     private:
         void OnCloseTab(Asset* pAsset) // issued by TabBar::TabItem::SigClose
         {
             m_pTabs->RemoveTab(pAsset); // oops
             m_mTabs.erase(pAsset);
         }

         std::unique_ptr<TabBar> m_pTabs;
         TabMap m_mTabs;
     };

     This at the very least means that TabBar has to implement its own dead-widget queue, or I have to implement the delayed destroy in the user code, like in this "AssetView".
     - This, unlike the example with the game objects, can also happen when a slot gets removed from/added to the signal while iterating, as this will also invalidate iterators. This happens more often than you'd probably imagine - e.g. when reloading the generated shader code of my "material" system, this sometimes forces new material instances to be generated, which in turn have to register with the "SigReload" of their material class. Sure, this is another thing where I could just delay the initialization, but quite frankly, it happens so frequently that it becomes a huge pain in the ass, having to delay everything without much added benefit (IMHO; as opposed to game objects, where it makes quite a lot of sense).
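
     (For illustration, a hypothetical sketch of such a dead-widget queue: instead of deleting a widget while a signal is being emitted, ownership is handed to a queue that is flushed at a safe point, e.g. at the end of the UI tick. Widget and DeferredDeleter are made-up names, not types from the actual codebase.)

     #include <memory>
     #include <vector>

     struct Widget { virtual ~Widget() = default; };

     class DeferredDeleter
     {
     public:
         // Called from slot code instead of deleting the widget directly;
         // the widget stays alive until Flush() runs.
         void Destroy(std::unique_ptr<Widget> widget)
         {
             m_dead.push_back(std::move(widget));
         }

         // Called once per frame/tick, after all signal emission has finished.
         void Flush()
         {
             m_dead.clear();
         }

     private:
         std::vector<std::unique_ptr<Widget>> m_dead;
     };
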
  8. So I've been using this signal/slot library for quite some time now: https://github.com/pbhogan/Signals One rather "common" pattern I encountered is a signal that destroys its owning class:

     class Widget
     {
         core::Signal<> SigClicked;
     };

     void onCloseWindow(void)
     {
         delete &widget;
     }

     widget.SigClicked.Connect(&onCloseWindow);
     widget.SigClicked(); // ...

     Now this goes a bit into the details of how Signals is implemented - internally it stores an std::set, which in operator() is iterated via a for-loop:

     void Signal::operator()() const
     {
         for(auto& delegate : delegate_list)
         {
             delegate.Call();
         }
     }

     Can you imagine what happens when any of the delegates destroys the object that owns the Signal? Obviously this results in the destructor of the Signal being called, after which it resumes iterating over the list - a crash, in the best case. I've encountered multiple strategies to combat this - the most obvious one that I know is to simply delay the delete/destroy operation, e.g. by performing it in the next tick step. This can overcomplicate things though, so I searched for a simpler solution - until I finally found one:

     void Signal::EmitSafe() const
     {
         const auto temp_list = delegate_list;
         for(auto& delegate : temp_list)
         {
             delegate.Call();
         }
     }

     Now I'm not sure if that's a coding horror or an actually elegant solution to this problem. It's not universal though: due to the overhead of copying the list every time, I only use it where I can expect something like this to happen. Did you ever have fun with objects trying to delete themselves in a similar fashion? I'd also like to hear some suggestions of what you'd do to solve this.
  9. Thanks, I submitted a ticket to the support portal. I'm not sure it's related to the site upgrade, though - unless I'm mistaken, that just happened yesterday, didn't it? As I mentioned, the failed payment happened a few weeks ago; I just looked at my bills today and noticed that the payment was successful with the new card.
  10. Hi, so there was a problem with my subscription to GDNet+. Basically my credit card expired at the time it tried to resubscribe, so I updated it, after which I was told it would try again on some later date. Now I just saw that the subscription fee has been taken from my account, yet I'm still not showing as a GDNet+ member. I hope this is the right place to report this issue, and hope it can be resolved. It's actually been some time since this happened, but I only just saw that the money has been booked. Thanks!
  11. Ok, to be fair they are alike in those points. I was just baffled at how visually ugly LabVIEW is, so a comparison to Blueprints felt somewhat odd. Like, I didn't even notice that LabVIEW had comment blocks from the picture posted above.

      Ok, another fair point. I didn't even know things like Scratch existed - looks like it's similar to what the RPG Maker series does, only a lot more visual.

      Yeah, I can totally agree with that. I'm so used to Blueprints now that I tend to find textual coding messy for a lot of cases - I wouldn't implement a render queue in Blueprints by any means, but high-level gameplay logic just feels way cleaner done visually. Guess that in addition to familiarity, using the right tool for the right job is something to consider.

      EDIT: Ah, another thing I forgot to mention that can make Blueprints feel messy: in textual languages, having (pseudo)code like this in multiple places is not a big deal:

      for(auto& element : world.objects[index].GetCollection())
      {
          // do anything
      }

      In Blueprints, this requires 3-5 nodes (including the for-loop), which take up a lot of space and can be quite annoying to create over and over again. So it is quite imperative to create a method/macro every time you have the same set of nodes in more than two places. While you should probably do that regardless of what type of coding you use, in Blueprints it's a much bigger deal (also since for-loops, ifs, etc. are more "costly" in terms of typing efficiency) than otherwise. So the difference between "mindlessly recreating the same code in different places" and "properly removing instances of duplicate code" has an enormous impact on both the perceived and actual productivity and messiness of the created visual code, in my experience at least.
  12. (I'm only going to base this on Unreal's Blueprints, since they are the most advanced and best visual programming language I know.)

      Actually, it's quite easy: select all nodes, right-click, add comment block, type text. This also has the advantage of clearly indicating which nodes belong to the comment, whereas in textual languages you have to come up with custom commenting schemes to mark blocks of code. You can also add a comment to single nodes, group functions in categories, etc. in Blueprints, so I wouldn't say it's more difficult, merely uncommon for most programmers :)

      Well, that is a fair point actually. Unreal has already attempted to combat this problem by adding auto-formatting options, which just aren't optimal yet. You have to keep in mind, though, that with textual programming, if you didn't have IDEs/text editors with auto-formatting, you'd spend a lot of time formatting your code as well, like indenting. Also, Blueprints don't require syntactic noise like ";" at the end of lines, or () {} "" for marking different blocks of code. Again, IDEs take some of that overhead away by e.g. automatically closing brackets for you, and being used to typing for many years means you do it without thinking - but the same thing happens once you get used to Blueprints; you get a lot more effective after a few months of using them.

      On the contrary, Blueprints show you the names of parameters: https://docs.unrealengine.com/latest/INT/BlueprintAPI/Rendering/Debug/DrawDebugLine/index.html which totally makes up for the lack of local variable names. Compare e.g. with the C++ variant:

      DrawDebugLine(FVector(0, 0, 0), FVector(100, 100, 100), FColor(255, 0, 255), 10.0, 2.0)

      Here you don't know which parameter is which without a little guesswork or knowledge of the function, so you'd have to create constants just to make up for that. In general, Blueprints handle functions with multiple parameters much better. Every parameter has a default value. If you want to draw a line for 10 seconds and don't care how it looks, just change the "time" parameter; there's no need to supply default values for either the caller or the callee most of the time, and every parameter behaves the same regardless of position (e.g. if you want explicit default values, you don't have to move the parameter to the end of the parameter list). There are some more advantages, like working with an ever-present, insanely good autocompletion, where you cannot mix type connections that don't match. Which brings me to the main point about visual scripting languages and their pros/cons - they're only as good as the tools they supply. The same goes for textual languages, but probably to a lesser degree. For example, think about having to code without autocompletion/IntelliSense. Or not having auto-formatting. Or not having helpful functions like go-to-definition, create-definition, global search & replace, etc. At some point or other, those things didn't exist, or weren't as well-rounded as they are now. The same applies to visual scripts: today we are probably lacking a f*ckton of features that would make working with them so much easier. Unreal has already taken a huge step forward with their language IMHO, and I wouldn't want to use another type of visual scripting system for the life of me (theirs has pretty much replaced most of my gameplay coding workflow). So just think about what will probably happen in the future - e.g. there's certainly no reason why it shouldn't be possible to create a global auto-formatting for the nodes, which would drastically reduce having to drag nodes around manually. Just little stuff that adds up - and hopefully makes (good) visual scripting languages more approachable for more people :) Because:

      I find that comparison unjustified; they look pretty much nothing alike (and LabVIEW looks pretty unbearable compared to Blueprints), except that both are visual languages. That to me sounds like saying C# looks a lot like Malbolge, because, you know, they both use characters to represent their code ;) (not sure if you're just too unfamiliar with Blueprints, or I'm too biased towards them, so don't take this too seriously)
  13. If we are talking about std::unique_ptr (as I believe we are), then no, not exclusively. Exception safety and such is a good side effect, but really std::unique_ptr is a 1:1 replacement for 99% of the code where you used new/delete before. It has almost no downsides; the only one I can think of is that defaulted member functions like constructors don't play well with forward declarations. The upsides are a drastically reduced chance of any accidental memory leak/double delete, reduced coding complexity (no explicit deletes in destructors etc. needed, which is especially valuable if you store stuff like std::vector<std::unique_ptr<>>!), and increased code readability by explicitly documenting the intended ownership in the signature (func(Type* pRaw) vs. func(std::unique_ptr<Type> pRaw) - see the small example below). So really, you shouldn't be asking "why should I use std::unique_ptr"; you should rather be asking "why should I still use raw new/delete"? :)
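
      (A small illustration of the ownership-in-the-signature point - the Texture type and Register functions are hypothetical, just to show the contrast:)

      #include <memory>

      struct Texture { };

      // Raw pointer: the signature doesn't say whether the callee takes
      // ownership or just borrows the texture.
      void Register(Texture* pTexture) { /* ... */ }

      // unique_ptr by value: the signature itself says "ownership is transferred",
      // and the texture can never be leaked or double-deleted by accident.
      void Register(std::unique_ptr<Texture> pTexture) { /* ... */ }

      int main()
      {
          auto pTexture = std::make_unique<Texture>();  // no matching delete needed anywhere
          Register(std::move(pTexture));                // clearly hands ownership over
      }
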
  14. Well, first off, your code here is different from what's in your link:

      B * testObA;
      B testObB();
      A test_1; // destructor called
      A * test_2;

      So you are now asking why only line 3 calls the destructor, if I understand correctly? That's quite simple:
      - "B * testObA" declares a pointer to an object of type B, not an object itself. As a pointer just stores the address of another object, it obviously does not call the destructor. If you do this instead:

      B * testObA = new B; // creates & assigns an object of type B
      delete testObA;      // deletes the object pointed to by testObA => calls the destructor

      (you're actually right that you should rather use unique_ptr in that case :) )
      - The second line, "B testObB();", at least as it appears in the code you linked, does not declare a variable of type B either; it declares a function taking no arguments and returning B. Yes, having empty parentheses like that when declaring a variable does not call the constructor but instead changes the meaning of the declaration (see the small demonstration below). So when you remove the (), it should construct & destruct the object properly.

      Point 2 can actually be misleading, but pointers are pretty basic C++ stuff, so I think this thread rather belongs in For Beginners; and on that note, you should read up on the basic concepts of C++ memory "management" like pointers, references, etc., from how it appears :)
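
      (A minimal demonstration of the "B testObB();" point - a sketch, not the original poster's code:)

      #include <iostream>

      struct B
      {
          B()  { std::cout << "ctor\n"; }
          ~B() { std::cout << "dtor\n"; }
      };

      int main()
      {
          B* testObA;   // just a pointer: nothing is constructed, nothing is destructed
          B  testObB(); // "most vexing parse": declares a function returning B, not an object
          B  testObC;   // this actually constructs an object; its destructor runs at end of scope
          (void)testObA;
      }
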
  15. You have been told multiple times that this is wrong. Repeating it won't make it right. You clearly have no idea what you are talking about, so why do you continue arguing? Accept that you are wrong and move on. Jeez.