Critique my lockless queue, Part II


This is kind of a continuation of this thread. I found this GDC 2008 presentation from Microsoft where the presenter defines "channels" as a way to do IPC. He specifies some rules the channels must follow:

* Writes are atomic and non-blocking
* Reads block if the queue is empty
* Multiple readers are supported: each message is read once and only once

Judging from his code examples elsewhere in the presentation, it looks like channels are also expected to support multiple writers in different threads. He mentions using XLockFreeQueue as a way to implement channels (I don't have an XDK, so I don't know what guarantees XLockFreeQueue makes or how it works), or that a deque and a semaphore could also work on Windows.

Well, I couldn't figure out how to make a deque thread-safe without using locks and whatnot, so I made this... It's a LIFO stack implemented with a singly-linked list! Okay, it's not really a queue, but like I said, I couldn't figure out how to make a lock-free queue that followed the rules of the "channel," so I went back to something simpler and made this.
template<typename T>
class MessageStack
{
    struct Node
    {
        volatile Node* Next;
        T Data;
    };

    volatile Node* m_head;
    SDL_sem* m_semaphore;


public:
    MessageStack()
    {
        m_semaphore = SDL_CreateSemaphore(0);
        m_head = NULL;
    }

    ~MessageStack()
    {
        SDL_DestroySemaphore(m_semaphore);

        while(m_head != NULL)
        {
            volatile Node* n = m_head;
            m_head = m_head->Next;

            _aligned_free((void*)n);
        }
    }

    void AddMessage(T message)
    {
        Node* n = (Node*)_aligned_malloc(sizeof(Node), sizeof(void*));
        n->Data = message;

        while(true)
        {
            n->Next = m_head;
            
            if (InterlockedCompareExchangePointer((volatile PVOID*)&m_head, n, (void*)n->Next) == n->Next)
                break;
        }
        
        // let any waiting readers know there's an available node
        SDL_SemPost(m_semaphore);
    }

    // blocks until a message is available, and then removes it from the list
    T RemoveMessage()
    {
        // block the reader until we have something available
        SDL_SemWait(m_semaphore);

        volatile Node* n = NULL;
        while(true)
        {
            n = m_head;

            if (InterlockedCompareExchangePointer((volatile PVOID*)&m_head, (void*)m_head->Next, (void*)n) == n)
                break;
        }

        T t = n->Data;
        _aligned_free((void*)n);

        return t;
    }

    // attempts to get the next message, but won't block.
    // returns true if a message was received, false otherwise
    bool TryRemoveMessage(T* message)
    {
        if (SDL_SemTryWait(m_semaphore) == 0)
        {
            volatile Node* n = NULL;
            while(true)
            {
                n = m_head;

                if (InterlockedCompareExchangePointer((volatile PVOID*)&m_head, (void*)m_head->Next, (void*)n) == n)
                    break;
            }

            *message = n->Data;
            _aligned_free((void*)n);

            return true;
        }

        return false;
    }
};
It uses a semaphore (I happened to use SDL instead of Win32, but the basic semaphore operations should be obvious) to make the reader block if there is nothing in the list to read. I think it meets all the requirements of a "channel." So how does it look?

Which compiler(s) are you targeting? volatile means different things to different compilers.

For this reason, I'm even tempted to make the somewhat more blanket statement that using volatile qualification should be viewed as a bug.

Right now I'm just focused on Visual C++ 2005 and 2008.

InterlockedCompareExchangePointer() requires a volatile-qualified pointer for its first parameter. Why wouldn't I want to use volatile?

Quote:
Original post by swiftcoder
Quote:
Original post by BradDaBug
Why wouldn't I want to use volatile?
Required reading on the topic.

In your case, volatile is likely doing exactly what you intend it to, but this usage isn't portable.


Agreed.

OP: If you're putting together lock-free code, it's super important that you know what volatile really means, or rather, know what it really doesn't mean.

For this reason, I would recommend that you add pre-processor checks to ensure that the code is only compiled with Visual C++ 2005 or later, if you intend to distribute code in source form.

It will fall apart with a number of other compilers. Even better, re-write the code such that volatile isn't needed (by inserting the assumed fences and so on explicitly).
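
For illustration, here's a rough MSVC-only sketch of what I mean by making the compiler ordering explicit instead of leaning on volatile (LoadPointerExact is a made-up helper name; the _ReadWriteBarrier() intrinsic is a compiler-only barrier, so you'd still be relying on the Interlocked* call itself for the hardware fence):

#include <intrin.h>

// A load that the compiler may not cache, omit, or move across the barriers,
// without marking the variable itself volatile.
void* LoadPointerExact(void** location)
{
    _ReadWriteBarrier();      // compiler may not reorder memory accesses across this point
    void* value = *location;  // forced to be a real load from memory
    _ReadWriteBarrier();
    return value;
}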

volatile is required for variables to be used with the Interlocked* functions only to ensure the compiler doesn't re-order (or otherwise optimise) reads and writes to the variable.
Any call to an Interlocked* function will create the required memory fence to ensure that none of these optimisations occur at run-time.

AFAIK, this usage of volatile should be portable across any compiler.

BradDaBug's code doesn't seem (I'd have to look closer to make sure!) to rely on the non-standard behaviour of volatile adding memory-fences -- the interlocked API does that for him (and this API requires your variable to be marked volatile, otherwise the fence would be useless if the compiler is going to reorder things anyway!).

Quote:
Original post by Hodgman
volatile is required for variables to be used with the Interlocked* functions only to ensure the compiler doesn't re-order (or otherwise optimise) reads and writes to the variable.
Any call to an Interlocked* function will create the required memory fence to ensure that none of these optimisations occur at run-time.

AFAIK, this usage of volatile should be portable across any compiler.

BradDaBug's code doesn't seem (I'd have to look closer to make sure!) to rely on the non-standard behaviour of volatile adding memory-fences -- the interlocked API does that for him (and this API requires your variable to be marked volatile, otherwise the fence would be useless if the compiler is going to reorder things anyway!).



You're contradicting yourself. First you say that the usage of volatile to ensure that the compiler doesn't re-order access should be portable, then you say it's nonstandard behavior of volatile adding memory fences. That's exactly what memory fences are for, making sure operations aren't re-ordered.

FWIW, it's Microsoft-specific behavior, and as far as the C++ standard goes, it says nothing about re-ordering or anything of the sort. volatile is almost completely useless if you're only interested in its standard-defined behavior.

Bob Pendleton has added some preliminary support for low-level multithreading primitives to SDL 1.3, analogous to InterlockedCompareExchange etc. You might find it interesting.

Quote:
Original post by cache_hit
Quote:
Original post by Hodgman
volatile is required for variables to be used with the Interlocked* functions only to ensure the compiler doesn't re-order (or otherwise optimise) reads and writes to the variable.
Any call to an Interlocked* function will create the required memory fence to ensure that none of these optimisations occur at run-time.

AFAIK, this usage of volatile should be portable across any compiler.

BradDaBug's code doesn't seem (I'd have to look closer to make sure!) to rely on the non-standard behaviour of volatile adding memory-fences -- the interlocked API does that for him (and this API requires your variable to be marked volatile, otherwise the fence would be useless if the compiler is going to reorder things anyway!).
You're contradicting yourself. First you say that the usage of volatile to ensure that the compiler doesn't re-order access should be portable, then you say it's nonstandard behavior of volatile adding memory fences. That's exactly what memory fences are for, making sure operations aren't re-ordered.

FWIW, it's Microsoft specific behavior, and as far as the C++ standard goes, it says nothing about re-ordering or anything of the sort. volatile is almost completely useless if you're only interested in it's standard defined behavior.
Nope, Hodgman is correct. The key here is that the 'volatile' keyword isn't doing much of anything - it is merely a requirement of the Interlocked* functions. The Interlocked* functions themselves handle the memory fences, so the code as it stands is perfectly valid.

Equally, the reason it isn't portable has nothing to do with the use of volatile - instead it is the fact that the Interlocked* functions are Windows-specific.

Quote:
Original post by swiftcoder
Nope, Hodgman is correct. The key here is that the 'volatile' keyword isn't doing much of anything - it is merely a requirement of the Interlocked* functions. The Interlocked* functions themselves handle the memory fences, so the code as it stands is perfectly valid.

Equally, the reason it isn't portable has nothing to do with the use of volatile - instead it is the fact that the Interlocked* functions are Windows-specific.


Maybe I'm misreading his sentence, but this is what I see.

Quote:

volatile is required for variables to be used with the Interlocked* functions only to ensure the compiler doesn't re-order (or otherwise optimise) reads and writes to the variable.

...

AFAIK, this usage of volatile should be portable across any compiler.


According to the standard, volatile has nothing to do with whether or not a compiler re-orders reads or writes to a variable. It's required because a) the function signature demands it, and b) it just so happens that MSVC does use memory fences on volatile accesses. But I still don't see anything "portable" about this usage of volatile at all.

Am I missing something?

Quote:
Original post by cache_hit
First you say that the usage of volatile to ensure that the compiler doesn't re-order access should be portable, then you say it's nonstandard behavior of volatile adding memory fences. That's exactly what memory fences are for, making sure operations aren't re-ordered.
There's compile-time reordering and run-time reordering. I was saying that volatile only inhibits compile-time reordering, whereas a memory-fence only inhibits run-time reordering. There's no contradiction there.
Quote:
FWIW, it's Microsoft specific behavior, and as far as the C++ standard goes, it says nothing about re-ordering or anything of the sort. volatile is almost completely useless if you're only interested in it's standard defined behavior.
AFAIK, the behaviour where volatile adds fences (i.e. inhibits runtime reordering) is Microsoft-specific, but I thought all compilers inhibited compile-time reordering of volatile variables. Accessing a volatile variable is one of those points in a program where everything before it must be complete and nothing after it must have taken place (of course modern CPUs break this by optimising at run-time, which is why you need fences as well).
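
To make the compile-time half of that concrete, here's a tiny made-up example (g_ready and the two functions are purely illustrative, not from BradDaBug's code):

// Without volatile, the compiler is free to hoist the load out of the loop,
// effectively turning it into: if (!g_ready) for(;;);
bool g_ready = false;            // written by another thread

void WaitNonVolatile()
{
    while (!g_ready) { }
}

// With volatile, every iteration must perform a real load of g_readyVolatile,
// and the compiler can't omit or merge those loads.
volatile bool g_readyVolatile = false;

void WaitVolatile()
{
    while (!g_readyVolatile) { }
}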

Can someone reference the part of the spec dealing with volatile? IIRC the rules of volatile are directly linked to the sequencing of the abstract machine...

[Edited by - Hodgman on December 6, 2009 7:44:19 PM]

The standard says a number of things about volatile:

Quote:

7.15.1.8 volatile is a hint to the implementation to avoid aggressive optimization involving the object
because the value of the object might be changed by means undetectable by an implementation. See 1.9 for
detailed semantics. In general, the semantics of volatile are intended to be the same in C++ as they are
in C.


Of course, it's only a hint. So we refer back to 1.9 and it says the following:

Quote:

1.9 When the processing of the abstract machine is interrupted by receipt of a signal, the values of objects with
type other than volatile sig_atomic_t are unspecified, and the value of any object not of
volatile sig_atomic_t that is modified by the handler becomes undefined.


There are a number of surrounding comments regarding volatile as well
Quote:

1.6 The observable behavior of the abstract machine is its sequence of reads and writes to volatile data and
calls to library I/O functions.

1.7 Accessing an object designated by a volatile lvalue (3.10), modifying an object, calling a library I/O
function, or calling a function that does any of those operations are all side effects, which are changes in the
state of the execution environment. Evaluation of an expression might produce side effects. At certain
specified points in the execution sequence called sequence points, all side effects of previous evaluations
shall be complete and no side effects of subsequent evaluations shall have taken place.

1.11 The least requirements on a conforming implementation are:
— At sequence points, volatile objects are stable in the sense that previous evaluations are complete and
subsequent evaluations have not yet occurred.
— At program termination, all data written into files shall be identical to one of the possible results that
execution of the program according to the abstract semantics would have produced.
— The input and output dynamics of interactive devices shall take place in such a fashion that prompting
messages actually appear prior to a program waiting for input. What constitutes an interactive device is
implementation-defined.
[Note: more stringent correspondences between abstract and actual semantics may be defined by each
implementation. ]



I couldn't find anything else related enough. Furthermore, this is from the Microsoft documentation:

Quote:

Microsoft Specific

Objects declared as volatile are not used in certain optimizations because their values can change at any time. The system always reads the current value of a volatile object at the point it is requested, even if a previous instruction asked for a value from the same object. Also, the value of the object is written immediately on assignment.

Also, when optimizing, the compiler must maintain ordering among references to volatile objects as well as references to other global objects. In particular,

A write to a volatile object (volatile write) has Release semantics; a reference to a global or static object that occurs before a write to a volatile object in the instruction sequence will occur before that volatile write in the compiled binary.

A read of a volatile object (volatile read) has Acquire semantics; a reference to a global or static object that occurs after a read of volatile memory in the instruction sequence will occur after that volatile read in the compiled binary.

This allows volatile objects to be used for memory locks and releases in multithreaded applications.

Note
Although the processor will not reorder un-cacheable memory accesses, un-cacheable variables must be volatile to guarantee that the compiler will not change memory order.

End Microsoft Specific



It is for this reason that articles such as the one linked by swiftcoder above exist. In particular, that article states:

Quote:

Hans Boehm points out that there are only three portable uses for volatile. I'll summarize them here:

marking a local variable in the scope of a setjmp so that the variable does not rollback after a longjmp.
memory that is modified by an external agent or appears to be because of a screwy memory mapping
signal handler mischief



In practice, I do think a lot of compilers take the "hint" mentioned in 7.15.1.8 of the standard quoted above. I just don't think it's safe to say it's a "portable" use of the keyword.

Quote:
Original post by cache_hit
— At sequence points, volatile objects are stable in the sense that previous evaluations are complete and
subsequent evaluations have not yet occurred.

Hans Boehm points out that there are only three portable uses for volatile.
* volatile may be used when variables may be "externally modified", but the modification in fact is triggered synchronously by the thread itself, e.g. because the underlying memory is mapped at multiple locations.
The way that I interpret that is: two reads of a volatile variable won't be optimised down to a single fetch. If you request the value of a volatile variable twice, it will be fetched from memory twice, and those fetches will occur in the sequence specified by the programmer.

If you simply used a memory fence instruction, but didn't use a volatile variable, the compiler is allowed to remove your "redundant" fetches. However, they're not really "redundant" because you know that the value might have been modified behind the scenes (by another thread -- i.e. "externally modified" in Hans Boehm's words), and you really do need to perform the subsequent "redundant" fetches.

Hence, both volatile access and memory fences are important for thread-safe code, which is why the Interlocked* API demands you use volatile.

In Herb Sutter's volatile vs. volatile, he also interprets the spec as saying that reads from a volatile variable can't be reordered (with respect to each other) or omitted by the compiler (unlike normal reads).

[edit]So I'm wrong about it inhibiting *all* compile-time reordering -- some compilers still allow regular variable accesses to be moved across a volatile access -- but all compilers should preserve the order of volatile accesses with respect to other volatile accesses, and most importantly, no compiler should omit any volatile accesses.

[edit #2]We really should get back to critiquing BradDaBug's actual code now --
I'll try to have an in-depth look at it tonight ;)

[Edited by - Hodgman on December 6, 2009 10:00:15 PM]

Ok BradDaBug, unfortunately there are a few bugs in your code that I can see.

1) Everywhere you've written "volatile Node*", you actually meant "Node* volatile".

The former is a (regular) pointer to a volatile value, the latter is a volatile pointer to a (regular) value. Seeing as the pointer is the thing being modified by multiple threads, it's the thing that needs to be volatile.
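
To illustrate the difference with some throwaway declarations:

Node* p0;                   // regular pointer to a regular Node
volatile Node* p1;          // pointer to a volatile Node (the pointee is volatile)
Node* volatile p2;          // volatile pointer to a regular Node (the pointer itself is volatile)
volatile Node* volatile p3; // both the pointer and the pointee are volatile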

2) In your "pop" functions, you've got this code:
n = m_head;

if (InterlockedCompareExchangePointer((volatile PVOID*)&m_head, (void*)m_head->Next, (void*)n) == n)
    break;
This is dangerous as m_head is read from twice - first its value is read so it can be copied into n, then it is read a second time at m_head->Next. It's possible for the value of m_head to be modified between these two accesses, leading to a race condition.

You can fix this by getting the value of Next via n, instead of accessing m_head a second time.
n = m_head;
Node* next = n->Next;


Here's my fixed (but untested) version. You'll notice that by fixing bug #1, none of your local variables need to use the volatile keyword any more:
template<typename T>
class MessageStack
{
    struct Node
    {
        Node* volatile Next;
        T Data;
    };

    Node* volatile m_head;
    SDL_sem* m_semaphore;


public:
    MessageStack()
    {
        m_semaphore = SDL_CreateSemaphore(0);
        m_head = NULL;
    }

    ~MessageStack()
    {
        SDL_DestroySemaphore(m_semaphore);

        while(m_head != NULL)
        {
            Node* n = m_head;
            m_head = m_head->Next;

            _aligned_free(n);
        }
    }

    void AddMessage(T message)
    {
        Node* n = (Node*)_aligned_malloc(sizeof(Node), sizeof(void*));
        n->Data = message;

        while(true)
        {
            n->Next = m_head;

            if (InterlockedCompareExchangePointer((volatile PVOID*)&m_head, n, n->Next) == n->Next)
                break;
        }

        // let any waiting readers know there's an available node
        SDL_SemPost(m_semaphore);
    }

    // blocks until a message is available, and then removes it from the list
    T RemoveMessage()
    {
        // block the reader until we have something available
        SDL_SemWait(m_semaphore);

        Node* n = NULL;
        while(true)
        {
            n = m_head;
            Node* next = n->Next;

            if (InterlockedCompareExchangePointer((volatile PVOID*)&m_head, next, n) == n)
                break;
        }

        T t = n->Data;
        _aligned_free(n);

        return t;
    }

    // attempts to get the next message, but won't block.
    // returns true if a message was received, false otherwise
    bool TryRemoveMessage(T* message)
    {
        if (SDL_SemTryWait(m_semaphore) == 0)
        {
            Node* n = NULL;
            while(true)
            {
                n = m_head;
                Node* next = n->Next;

                if (InterlockedCompareExchangePointer((volatile PVOID*)&m_head, next, n) == n)
                    break;
            }

            *message = n->Data;
            _aligned_free(n);

            return true;
        }

        return false;
    }
};


Before I'd declare any of these fancy data-structures to be truly thread safe though (even after peer review), I'd construct a stress test where lots and lots of threads hammer the thing with billions of push/pop operations, making sure no values ever get lost/corrupted in the process ;)
I've had a queue before that worked fine in a stress test for hours before one of the subtle bugs caused it to corrupt a value :/
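
For example, a rough, untested sketch of that kind of stress test might look like the following (using SDL 1.2 threads to match the semaphore code; the thread counts, message counts and names are all made up, and it assumes the MessageStack above is in scope):

#include <stdio.h>
#include "SDL.h"
#include "SDL_thread.h"

static MessageStack<int>* g_stack = NULL;
static const int kWriters = 4;
static const int kReaders = 4;
static const int kMessagesPerWriter = 1000000;

// Each writer pushes the values 1..kMessagesPerWriter.
int WriterThread(void*)
{
    for (int i = 1; i <= kMessagesPerWriter; ++i)
        g_stack->AddMessage(i);
    return 0;
}

// Each reader pops a writer's worth of messages and reports its running sum.
int ReaderThread(void*)
{
    long long sum = 0;
    for (int i = 0; i < kMessagesPerWriter; ++i)
        sum += g_stack->RemoveMessage();
    printf("reader finished, sum = %I64d\n", sum); // %I64d: MSVC's 64-bit int format
    return 0;
}

int main(int, char**)
{
    SDL_Init(0);
    g_stack = new MessageStack<int>();

    SDL_Thread* threads[kWriters + kReaders];
    for (int i = 0; i < kWriters; ++i)
        threads[i] = SDL_CreateThread(WriterThread, NULL);
    for (int i = 0; i < kReaders; ++i)
        threads[kWriters + i] = SDL_CreateThread(ReaderThread, NULL);
    for (int i = 0; i < kWriters + kReaders; ++i)
        SDL_WaitThread(threads[i], NULL);

    // If nothing was lost or duplicated, the readers' sums should add up to
    // kWriters * kMessagesPerWriter * (kMessagesPerWriter + 1) / 2.
    delete g_stack;
    SDL_Quit();
    return 0;
}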

[Edited by - Hodgman on December 7, 2009 6:50:35 AM]

Lock-free access for such a queue is best achieved with ATOMIC reads and writes, and this is possible on compilers where 'volatile' means exactly that, e.g. VC++ 2010 or later (any version earlier than this has not offered atomic reads and writes via this keyword; it has only offered system-wide visibility of variables using it, which is not the same as atomic).

Quote:
Original post by ddlox
Lock-free access for such a queue is achieved at best with ATOMIC read-writes...
The Interlocked* functions (used in this code) are part of the Windows API for performing atomic operations on any Win32 compiler. Code written to use this API should assume that volatile has the standard meaning, not the MS-specific atomic meaning.

Quote:
Original post by Hodgman
1) Everywhere you've written "volatile Node*", you actually meant "Node* volatile".

Oops, thanks.

Quote:
2) In your "pop" functions, you've got this code: *** Source Snippet Removed *** This is dangerous as m_head is read from twice - first its value is read so it can be copied into n, then it is read a second time at m_head->Next. It's possible for the value of m_head to be modified between these two accesses, leading to a race condition.

You can fix this by getting the value of Next via n, instead of accessing m_head a second time. *** Source Snippet Removed ***

Why would it matter if m_head changes between those two lines? Wouldn't InterlockedCompareExchangePointer() detect that n is now different from m_head, return the current value of m_head (which would be different from n), and the loop would then continue and try again?

On a hopefully unrelated note, I've been doing some testing with several threads, and I've started getting an error message that says something like "Damage before 0xSomeMemoryAddress which was allocated by aligned routine". It happens when I call either _aligned_malloc() or _aligned_free() (I've seen it with both). I can't see any obvious places where memory could get corrupted. I'm allocating everything, including the stack itself, with _aligned_malloc(). Is there anything else I need to do to make sure the memory is aligned properly? Or is the issue something else?

Quote:
Original post by BradDaBug
Why would it matter if m_head changes between those two lines? Wouldn't the InterlockedCompareExchangePointer() detect that n is now different than m_head, then when it returned the current version of m_head it would be different than n, and the loop would continue and try again?
Yeah, it won't be a big deal, unless m_head has been changed to NULL ;) It's slightly worse performance-wise as well, due to m_head being volatile (whereas n is not).

To handle that case (where there is one item on the stack, and two threads simultaneously pop it), you'd have to do something like:
n = m_head;
if( !n )
{
    // one of:
    return RemoveMessage(); // for RemoveMessage, try again
    return false;           // for TryRemoveMessage, return failure
}
Node* next = n->Next;
[EDIT]Actually, does the semaphore take care of this for you? Just to make sure, you could put in: if(!n)printf("oh noes!!!"); ;)[/EDIT]


Quote:
On a hopefully unrelated note, I've been doing some testing with several threads, and I've started getting an error message that says something like "Damage before 0xSomeMemoryAddress which was allocated by aligned routine".
Not exactly sure, but it sounds like a memory-overrun problem. Double-check you're not looping past the end of any arrays, using any pointers after they've been deleted, or doing any dangerous casts (e.g. allocating a Foo and casting the pointer to a Bar*).
Actually, the above bug, if it occurs, would lead to calling free twice on the same pointer, which is going to do god-knows-what to memory.


[edit #2]
Found another bug :( As before, it probably won't do anything bad because the Interlocked Exchange will save you... but you are potentially accessing invalid memory, which AFAIK is undefined behaviour.

This problem is actually written about in most of the papers on lock-free linked list structures... It's really hard to safely deallocate the nodes, because other threads may still hold pointers to them.
One (rough) solution is to garbage collect the nodes - instead of deallocating them instantly, you put them into a queue to be deleted in "some suitable amount of time in the future" (there's a rough sketch of that idea below, after the scenario).
Here's the scenario:
// Two threads call this function at the same time:
T RemoveMessage()
{
    SDL_SemWait(m_semaphore);

    Node* n = NULL;
    while(true)
    {
        n = m_head;
        //Both threads get here at the same time, they both read the same value of m_head into 'n'
        //Thread #1 then goes to sleep here (see below where it wakes up in this scenario)
        Node* next = n->Next;
        //Thread #1 just accessed unallocated memory, hopefully the system didn't notice...

        if (InterlockedCompareExchangePointer((volatile PVOID*)&m_head, next, n) == n)
            break;
    }

    T t = n->Data;
    _aligned_free(n);
    //When Thread#2 reaches this point, Thread#1 wakes back up
    //'n' has now been deallocated, but Thread#1 is about to access 'n.Next', which is now undefined!

    return t;
}
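
Here's a very rough sketch of the deferred-deletion idea I mentioned above (all illustrative: m_graveyard would be a new Node* volatile member initialised to NULL, RetireNode/FlushGraveyard are made-up names, and this simple version only frees retired nodes at a point where no other thread can be inside the pop functions, e.g. a sync point between frames):

// Called instead of _aligned_free(n) in the pop functions: park the node
// on a "graveyard" list rather than freeing it straight away.
void RetireNode(Node* n)
{
    while(true)
    {
        Node* oldHead = m_graveyard;
        n->Next = oldHead;
        if (InterlockedCompareExchangePointer((volatile PVOID*)&m_graveyard, n, oldHead) == oldHead)
            break;
    }
}

// Called from one thread at a quiescent point, when no other thread can be
// holding a pointer to any of the retired nodes.
void FlushGraveyard()
{
    Node* n = (Node*)InterlockedExchangePointer((volatile PVOID*)&m_graveyard, NULL);
    while (n != NULL)
    {
        Node* next = n->Next;
        _aligned_free(n);
        n = next;
    }
}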


[Edited by - Hodgman on December 7, 2009 5:15:55 PM]

Yep, I think you're right.

I probably spotted a similar bug in AddMessage(). As soon as InterlockedCompareExchangePointer() executes, but before the n->Next part on the other side of the == is evaluated, another thread can execute RemoveMessage() to remove and delete the n that was just added, so n->Next is invalid. Since the result returned by InterlockedCompareExchangePointer() is then different from whatever is in n->Next (the debug CRT has written 0xfeeefeee or whatever over it), the loop tries again with a node that's already been freed... and bad things happen. I think I fixed that one just by storing n->Next in another variable before the call to InterlockedCompareExchangePointer() and using that instead of n->Next.
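
So the push loop ends up looking roughly like this (untested sketch of what I mean; oldHead is just a local copy, so the node is never touched again after it might have been published):

void AddMessage(T message)
{
    Node* n = (Node*)_aligned_malloc(sizeof(Node), sizeof(void*));
    n->Data = message;

    while(true)
    {
        // read m_head once into a local, and compare against that local
        // instead of re-reading n->Next after the exchange
        Node* oldHead = m_head;
        n->Next = oldHead;

        if (InterlockedCompareExchangePointer((volatile PVOID*)&m_head, n, oldHead) == oldHead)
            break;
    }

    // let any waiting readers know there's an available node
    SDL_SemPost(m_semaphore);
}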

Just in case anyone is interested, I changed the code to create a secondary "free list" of nodes that aren't being used. Whenever a node is freed it's put in this list. When a new node is needed it is pulled off the free list, or allocated if the free list is empty. That way the number of memory allocations and frees is TINY compared to what it was before. According to some quick and dirty testing it runs about 10x faster this way.

Of course, that probably makes it even more susceptible to the ABA thing.
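
Roughly, the node allocation/free now looks something like this (sketch only; m_freeList is a new Node* volatile member, and the CAS pop in AllocNode() is exactly where the ABA risk comes in):

// Grab a node from the free list, or fall back to _aligned_malloc.
Node* AllocNode()
{
    while(true)
    {
        Node* n = m_freeList;
        if (n == NULL)
            return (Node*)_aligned_malloc(sizeof(Node), sizeof(void*));

        Node* next = n->Next;
        if (InterlockedCompareExchangePointer((volatile PVOID*)&m_freeList, next, n) == n)
            return n; // if 'n' was recycled in between, this is the ABA problem
    }
}

// Push a node back onto the free list instead of freeing it.
void FreeNode(Node* n)
{
    while(true)
    {
        Node* oldHead = m_freeList;
        n->Next = oldHead;
        if (InterlockedCompareExchangePointer((volatile PVOID*)&m_freeList, n, oldHead) == oldHead)
            break;
    }
}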

Speaking of which, has anyone tried using "hazard pointers" (clicky clicky) before?
