
# Animation Blending

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

171 replies to this topic

### #21 haegarr  Crossbones+   -  Reputation: 6995

Posted 16 August 2008 - 08:38 PM

An addendum to the previous post: one has to distinguish between a "relative" weighting during blending and an "absolute" weighting during layering. I.e., in general the weights used during blending are allowed to not sum up to 1, since a normalization step is done when processing the "end of animation group". But the weights used for layering must not be normalized, of course.

To make the artist's life a bit easier, one can use distinct weights for blending and layering. E.g. the blending weights are associated with the animations, while the layering weights are associated with the animation groups.
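A minimal sketch of this two-weight scheme (all names are hypothetical, not from the poster's engine): blend weights inside a group are normalized so they act relatively, while a group's layer weight is applied as-is.

```cpp
#include <cassert>
#include <vector>

// Relative weighting: blend weights within an animation group are
// normalized so they effectively sum to 1, however the artist set them.
std::vector<float> NormalizeBlendWeights(const std::vector<float>& weights) {
    float sum = 0.0f;
    for (float w : weights) sum += w;
    std::vector<float> out(weights);
    if (sum > 0.0f)
        for (float& w : out) w /= sum;
    return out;
}

// Absolute weighting: the group's layer weight is applied unchanged.
float ApplyLayerWeight(float blendedValue, float layerWeight) {
    return blendedValue * layerWeight;
}
```

With this split, an artist can author blend weights in any convenient scale, while the layering behaviour stays predictable.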

### #22 Gasim  Members   -  Reputation: 207

Posted 16 August 2008 - 10:44 PM


```cpp
void CAnimation::Blend(CSkeleton *pFinalSkel, CSkeleton *pLastSkel, CSkeleton *pNextSkel, float blendFactor)
{
    for (unsigned int i = 0; i < pLastSkel->NumBones(); i++) {
        // Linear interpolation for positions, nlerp for rotations.
        pFinalSkel->bones[i]->vPos = pLastSkel->bones[i]->vPos * (1.0f - blendFactor)
                                   + pNextSkel->bones[i]->vPos * blendFactor;
        pFinalSkel->bones[i]->qRotate.Nlerp(pLastSkel->bones[i]->qRotate,
                                            pNextSkel->bones[i]->qRotate, blendFactor);
    }
}
```

```cpp
bool loop = true;

void CAnimation::Animate(CSkeleton *pFinalSkel)
{
    unsigned int uiFrame = 0;

    // Assumption: m_Timer.GetSeconds() returns the time elapsed since the
    // previous call, which the static accumulates into an absolute time.
    static float lastTime = startTime;
    float time = m_Timer.GetSeconds();
    time += lastTime;
    lastTime = time;

    if (time > (startTime + length)) {
        if (loop) {
            m_Timer.Init();
            time = startTime;
            lastTime = startTime;
            uiFrame = 0;
        } else {
            time = startTime + length;
        }
    }

    // Find the first key-frame lying beyond the current time.
    while (uiFrame < m_Frames.size() && m_Frames[uiFrame]->time < time)
        uiFrame++;

    if (uiFrame == 0) {
        // At or before the first key-frame: apply it directly.
        // (The original loop ran to NumBones()+1, one past the end.)
        for (unsigned int i = 0; i < m_Frames[0]->NumBones(); i++) {
            pFinalSkel->bones[i]->qRotate = pFinalSkel->bones[i]->qRotate * m_Frames[0]->bones[i]->qRotate;
            pFinalSkel->bones[i]->vPos += m_Frames[0]->bones[i]->vPos;
        }
    } else if (uiFrame == m_Frames.size()) {
        // Past the last key-frame: apply it directly.
        for (unsigned int i = 0; i < m_Frames[uiFrame - 1]->NumBones(); i++) {
            pFinalSkel->bones[i]->qRotate = pFinalSkel->bones[i]->qRotate * m_Frames[uiFrame - 1]->bones[i]->qRotate;
            pFinalSkel->bones[i]->vPos += m_Frames[uiFrame - 1]->bones[i]->vPos;
        }
    } else {
        CSkeleton *curFrame  = m_Frames[uiFrame];
        CSkeleton *prevFrame = m_Frames[uiFrame - 1];

        float delta = curFrame->time - prevFrame->time;
        float blend_factor = (time - prevFrame->time) / delta;

        // prevFrame must be the "last" pose so that blend_factor == 0 yields
        // it; the original call passed the two frames in swapped order.
        CAnimation::Blend(pFinalSkel, prevFrame, curFrame, blend_factor);
    }
}
```



That's my new animation code. It's working as needed, but sometimes it slows down. I think it's because of the CPU. Am I right?

Thanks
Kasya

### #23 haegarr  Crossbones+   -  Reputation: 6995

Posted 16 August 2008 - 11:42 PM

@Kasya: Your latest code seems mainly okay to me in principle, but I think there are some design quirks. Perhaps I'm wrong, because I don't have insight into some implementation and design details. However, here is what I mean:

(1) I wouldn't use a timer but an always-running time. When you start using animation blending, several animated objects, and perhaps physics, you would otherwise deal with dozens of timers, and you would introduce little slides at each timer restart. A central, always-running time would unburden you from taking care of that, and also gives all animated elements the same time base.

Let's assume there is a central time outside the animation. Whenever you begin to compute and render a new video frame, you freeze the current time (what I have named "the tick" in my large post above).

Reasoning: if you didn't do so, then animation A would be computed for time t, animation B a bit later, say at time t+0.05s, animation C another bit later, and so on. So your current snapshot of the scene would no longer be a real snapshot, since all animations are computed at slightly different moments in time.

Hold a relative time in the animation instance. I.e. an animation is instantiated and should start playing at t0. The animation is not actually playing as long as "the tick" is less than t0. Then, when the tick passes t0, the relative time will be
t' := tick - t0
Of course, every animation has its own t0 and hence its own t'. If t' exceeds the length T of the animation, the result depends on whether the animation loops or not (you have already seen that, of course).
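A tiny sketch of this relative-time idea (names hypothetical): each instance stores only its absolute start moment on the shared timeline, and everything else is derived from the frozen tick.

```cpp
#include <cassert>

// Each animation instance stores only its absolute start moment t0 on the
// shared timeline.
struct AnimInstance {
    double t0;  // absolute start time on the global timeline
};

// t' := tick - t0 (negative while the animation has not started yet).
double RelativeTime(const AnimInstance& a, double tick) {
    return tick - a.t0;
}

// An instance is active once the tick has passed t0 and until t' exceeds
// the animation length (ignoring looping for brevity).
bool IsActive(const AnimInstance& a, double tick, double length) {
    double t = RelativeTime(a, tick);
    return t >= 0.0 && t <= length;
}
```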

(2) Animations with loops have to wrap around. That means, there is no "reset" when t' exceeds the length. That has 2 consequences:

First, when t' exceeds the duration, you have to correct t0 (which would be the easiest way of dealing with this) by setting
t0 := t0 + T while t' > T
Doing so does _not_ drop the time overshoot, as a simple reset would!

Second, when you look at the key-frames, the indices of the key-frames look like
0, 1, 2, 3, ..., n-2, n-1, 0, 1, 2, ...
where n is the number of key-frames. Then, if t'[n-1] != t'[0], you have to interpolate between key-frames n-1 and 0 as well! You may have another scheme where t'[n-1] == t'[0], so you need not handle this case specially.

I point you to this possibility although, or even because, I don't know how you define your key-frames.
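A sketch of this wrap-around handling for a single float track (an assumption-laden illustration, not the poster's code): t' is wrapped with fmod so the overshoot is kept, and between the last key and the end of the loop we interpolate back to key 0.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Key { float time; float value; };

// Sample a looping track. Assumes key times lie in [0, loopLength) and
// are sorted ascending.
float SampleLooping(const std::vector<Key>& keys, float loopLength, float t) {
    t = std::fmod(t, loopLength);              // wrap, keeping the overshoot
    size_t i = 0;
    while (i < keys.size() && keys[i].time <= t) ++i;
    const Key& prev = keys[i == 0 ? keys.size() - 1 : i - 1];
    const Key& next = (i == keys.size()) ? keys[0] : keys[i];
    float span = next.time - prev.time;
    if (span <= 0.0f) span += loopLength;      // wrapped from last key to key 0
    float local = t - prev.time;
    if (local < 0.0f) local += loopLength;
    float b = local / span;
    return prev.value * (1.0f - b) + next.value * b;
}
```

Note how a time past the last key still interpolates smoothly toward key 0 instead of snapping.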

### #24 godmodder  Members   -  Reputation: 788

Posted 17 August 2008 - 01:28 AM

Quote:
 Hold a relative time in the animation instance. I.e. an animation is instantiated and should start playing at t0.

I currently have a global timer in my engine that I query each frame to see how much time has passed since the previous frame. Every model in the world has a CurrentAnimTime variable that is increased with this frame interval. With the help of this CurrentAnimTime, the skeleton is animated. Is this the method you are using?

Quote:
 (A little excursus: To make things even more interesting, this factor is a float parameter of the animation, and a float modifier can modify it. You see what I mean? Yes, another animation can be used to control it if one wants to do so. So it is even possible to fade animations in and out.)

What do you mean by "fading" an animation in and out? Is it transitioning from a walking to a running animation for example by letting the weight of the walking animation gradually decrease and the weight for the running animation increase?

I've read a thesis here about this subject. It states that animation blending could be implemented as a special case of animation layering, where all the weights of the skeletons from different animations are taken into account, as opposed to just a few joints. This seems like a good approach to me, but I'd like to know what you think of it.

Also, this paper speaks only of blending of the rotations with the SLERP function, but do translations also need to be blended?

Aside from the actual implementation, is this really the technique games like Tomb Raider used in the past to combine animations? I find it a little hard to believe that these games, that didn't even use skinning, used such an advanced animation system. I ask this because I'm currently replicating a Tomb Raider engine and I just need to combine a running and a shooting animation. The rest of the animations can simply be transitioned in between. Maybe there's a simpler technique that I've overlooked?

As you can see, I have quite a few questions before I go and implement this technique. ;) Sorry you had to type that much, BTW!

Jeroen

[Edited by - godmodder on August 17, 2008 8:28:03 AM]

### #25 haegarr  Crossbones+   -  Reputation: 6995

Posted 17 August 2008 - 02:24 AM

Quote:
 Original post by godmodder
I currently have a global timer in my engine that I query each frame to see how much time has passed since the previous frame. Every model in the world has a CurrentAnimTime variable that is increased with this frame interval. With the help of this CurrentAnimTime, the skeleton is animated. Is this the method you are using?

I use another way of computing it, but in effect it seems the same to me, with perhaps some minor exceptions.

We both use a global time from which the video frame's tick is fetched. I hope you freeze the tick as I have described in one of the posts above. From your description I see that you compute the time elapsed since the last recent video frame
dt := tick_n - tick_(n-1)
and increase the animation time by it
t'_n := t'_(n-1) + dt , presumably with t'_0 := 0

My way is
t'_n := tick_n - t0

Now (if I'm correct with my assumptions), comparing these two ways shows that the animation playback is started at a video frame boundary in your case, while it is started at an arbitrary (but defined, of course) moment t0 in my case. So I can better synchronize with already running animations. But you may also use t'_0 == t0 != 0, of course, in which case my hint is irrelevant.

The other thing is that your way accumulates small inaccuracies in t'_n over time, while my way avoids this.

But as said, both issues are minor and may never result in a problem at runtime.
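The accumulation drift mentioned here is easy to demonstrate (a synthetic example, not from either engine): summing a fixed per-frame float delta diverges from the exact total because every addition rounds.

```cpp
#include <cassert>
#include <cmath>

// Accumulate a fixed frame delta in single precision many times; each
// addition rounds, and the rounding errors accumulate in the sum.
float AccumulatedTime(int frames, float dt) {
    float t = 0.0f;
    for (int i = 0; i < frames; ++i)
        t += dt;
    return t;
}
```

Computing t' directly from the absolute tick (t' = tick - t0) performs a single subtraction per frame and therefore avoids this accumulation entirely.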

Quote:
 Original post by godmodder
I've read a thesis here about this subject. It states that animation blending could be implemented as a special case of animation layering, where all the weights of the skeletons from different animations are taken into account, as opposed to just a few joints. This seems like a good approach to me, but I'd like to know what you think of it.

Please give me some days to read the thesis.

Quote:
 Original post by godmodder
Aside from the actual implementation, is this really the technique games like Tomb Raider used in the past to combine animations? I find it a little hard to believe that these games, that didn't even use skinning, used such an advanced animation system. I ask this because I'm currently replicating a Tomb Raider engine and I just need to combine a running and a shooting animation. The rest of the animations can simply be transitioned in between. Maybe there's a simpler technique that I've overlooked?

Well, I don't know how animation is implemented in Tomb Raider. The described system is just the animation system of my engine. If you ignore named binding, symmetry, and instancing, nothing more than a 2-level averaging is left over. Essentially, in a kind of pseudo code, what is done is
```
for( group=0; group < noofAnimGroups; ++group ) {
   for( anim=0; anim < group.noofAnims; ++anim ) {
      for( track=0; track < anim.noofTracks; ++track ) {
         value = track.interpolate( tickRelativeToAnim );
         track.targetParameter += anim.weight * value;
      }
   }
   skeleton.normalize;
   result.addIfTotalWeightIsLessThanOne( skeleton, group.weight );
}
```

As can be seen here, the "for( anim" loop does the blending, while the "for( group" loop does the layering. In fact, blending and layering is mathematically the same with the exception of manipulating the layer's weight factor accordingly to the already applied weight factors. No big deal.

However, I don't claim that the described method is the only one, or even the best one. It is just the method that seems appropriate to me to reach the goals I've discussed with my artist, and that fits in my overall engine design, of course. E.g. the timeline and prototype/instancing stuff result from the basic design principles of the engine, and not from a necessity due to the animation system itself. And the usage of tracks is IMHO advantageous because it allows my artist to concentrate on the parameters he wants to tweak, and it lowers the amount of parameters the engine will process.

### #26 haegarr  Crossbones+   -  Reputation: 6995

Posted 17 August 2008 - 02:40 AM

Quote:
 Original post by godmodder
What do you mean by "fading" an animation in and out? Is it transitioning from a walking to a running animation for example by letting the weight of the walking animation gradually decrease and the weight for the running animation increase?

Yes, that is a possible scenario.

Quote:
 Original post by godmodder
Also, this paper speaks only of blending of the rotations with the SLERP function, but do translations also need to be blended?

In principle translations also have to be blended, yes, but only if they are animated. The result of blending 2 identical values
vPos * b + vPos * ( 1 - b ) == vPos
is independent of the blending factor and hence the blend need not be done if you _know_ that they are identical. In skeletons the positions are typically constant. That is one of the reasons I use tracks, so the artist _can_ animate (selected) positions.
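A quick sketch of this point: blending a position with itself returns the same position for any factor, which is why constant translations can safely be skipped.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Plain linear blend of two positions.
Vec3 Lerp(const Vec3& a, const Vec3& b, float blend) {
    return { a.x * (1.0f - blend) + b.x * blend,
             a.y * (1.0f - blend) + b.y * blend,
             a.z * (1.0f - blend) + b.z * blend };
}
```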

Using slerp or nlerp is an issue of its own; I've posted something about it somewhere earlier.

Quote:
 Original post by godmodder
As you can see, I have quite a few questions before I go and implement this technique. ;) Sorry you had to type that much, BTW!

Oh, life is hard ;) Well, my engine is definitely somewhat complex, and offering some details here gives the chance of discussion. I don't claim to have found the best way, and so the discussion also helps me in perhaps revising design decisions. So don't hesitate to point out weaknesses. I'm glad about every hint that makes things better.

### #27 RobTheBloke  Crossbones+   -  Reputation: 2531

Posted 17 August 2008 - 11:17 PM

Quote:
 Original post by godmodder
Also, this paper speaks only of blending of the rotations with the SLERP function, but do translations also need to be blended?

Use nlerp when blending anims - it's faster and a hell of a lot easier when blending multiple anims. Normalise at the end though; then you have no reason to put constraints on the weights for the layering (additive blending, as we call it). The other advantage of using nlerp is that you can treat all anim tracks as floats, with a special-case flag for quat rotations (i.e. normalise float tracks 0,1,2,3 as a quat). It's then trivial to extend the system to handle colours / blend shape weights etc.
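For reference, a minimal nlerp sketch (assuming quaternions stored as x, y, z, w; not RobTheBloke's actual code): component-wise lerp, sign-flipped to take the shorter arc, followed by a single normalization.

```cpp
#include <cassert>
#include <cmath>

struct Quat { float x, y, z, w; };

// Normalized linear interpolation between two unit quaternions.
Quat Nlerp(Quat a, Quat b, float t) {
    float dot = a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
    if (dot < 0.0f) {                      // flip sign to take the shorter arc
        b.x = -b.x; b.y = -b.y; b.z = -b.z; b.w = -b.w;
    }
    Quat q { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t, a.w + (b.w - a.w) * t };
    float len = std::sqrt(q.x * q.x + q.y * q.y + q.z * q.z + q.w * q.w);
    return { q.x / len, q.y / len, q.z / len, q.w / len };
}
```

Unlike slerp, the lerp step is a plain weighted sum, which is what makes multi-way blends with one final normalization so cheap.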

### #28 haegarr  Crossbones+   -  Reputation: 6995

Posted 17 August 2008 - 11:46 PM

Quote:
 Original post by RobTheBloke
Use nlerp when blending anims - it's faster and a hell of a lot easier when blending multiple anims. Normalise at the end though; then you have no reason to put constraints on the weights for the layering (additive blending, as we call it). The other advantage of using nlerp is that you can treat all anim tracks as floats, with a special-case flag for quat rotations (i.e. normalise float tracks 0,1,2,3 as a quat). It's then trivial to extend the system to handle colours / blend shape weights etc.

Seconded, with the exception of dropping the constraints on the weights for the layering. How do you prevent lower-priority animations from being added if there is no criterion like "the sum is full"?

[Edited by - haegarr on August 18, 2008 6:46:50 AM]

### #29 Gasim  Members   -  Reputation: 207

Posted 18 August 2008 - 12:25 AM

Hello,
What is an Animation Track? Is it like

```cpp
class AnimationTrack {
public:
    std::vector<Keyframe> keyframes;
    unsigned int BoneID;
};
```

?

Thanks,
Kasya

### #30 RobTheBloke  Crossbones+   -  Reputation: 2531

Posted 18 August 2008 - 01:15 AM

More or less. That track will be fine if you only want skeletal animation, but you could simplify it to:

```cpp
struct KeyFrame
{
    float time;
    float value;
};

enum AnimTrackType
{
    kRotateX, kRotateY, kRotateZ, kRotateW,
    kTranslateX, kTranslateY, kTranslateZ,
    kScaleX, kScaleY, kScaleZ,
    kColorR, kColorG, kColorB,
    // and any others you might need....
};

struct AnimTrack
{
    int someNodeId;
    AnimTrackType type;
    std::vector<KeyFrame> keys;
};
```

Though there is a *lot* of optimisation possible in the above code...

### #31 Gasim  Members   -  Reputation: 207

Posted 18 August 2008 - 01:21 AM

question #1: Can I avoid using AnimTrackType? My Keyframe class is

```cpp
class CKeyframe {
public:
    CVector3 vPos;
    CQuat qRotate;
};
```

question #2: int someNodeId; is the Bone ID if I use a bone system, right?

question #3: Do I need Scale or Color? If yes, where will I use them?

Thanks,
Kasya

### #32 RobTheBloke  Crossbones+   -  Reputation: 2531

Posted 18 August 2008 - 01:25 AM

Quote:
 Original post by haegarr
How do you prevent lower-priority animations from being added if there is no criterion like "the sum is full"?

Depends on how the rest of your system works, I guess. In our case we don't need to do that, and in the majority of situations our clients normally want the weights to sum past 1. A very simple example is a spine-rotated-back and a spine-rotated-forward pose. Given a basic walk, if the character is going uphill, add a bit of the spine-forward pose to your walk animation - then normalise. Obviously the translation tracks should not be present in the two poses (which is normally the case for most bones other than the character root). Mind you, our system has about 20 different ways to blend animations - mainly because there really is no 'one size fits all' blending scheme you can use.

### #33 RobTheBloke  Crossbones+   -  Reputation: 2531

Posted 18 August 2008 - 01:36 AM

Quote:
 Original post by Kasya
question #1: Can I avoid using AnimTrackType? My Keyframe class is

Which is probably not ideal. Imho, storing the keyframe as a quat+translation will probably give you a very large amount of redundant data. It's very, very rare that an animator will modify the translation on anything other than the character root. Also, by storing the keyframe as an explicit keyframe type, you lose generalisation that's trivial to add into the system at the start - e.g. what happens when you want to animate the parameters of a particle system (say, particle colour, emission rate etc.)?

Quote:
 Original post by Kasya
question #2: int someNodeId; is the Bone ID if I use a bone system, right?

When I say 'some node' I mean it quite literally - it could be the id of a bone, the id of a material, the id of a particle system, etc. By explicitly using a 'bone' id you are restricting your system to be able to animate bones only. By taking a small step towards generalisation, you can make the system store animated colours, blend shape weights, even point animation. The easiest way of doing this is to simply store all keyframes as floats, and use some enum identifier to let you know what data each relates to.

Ultimately, there is only one thing that distinguishes the handling of rotations from anything else you can animate - i.e. the quats need to be normalised before converting to matrices. So you might as well store all tracks as independent floats - instead of float/vec2/vec3/vec4/quat track types.
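A sketch of that float-track generalisation (all names hypothetical): every track writes one float into a flat parameter array, and any 4-float range flagged as a quaternion is normalised afterwards as the single special case.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One sampled value per track, targeting an index in a flat float array.
struct FloatTrack { int paramIndex; float value; };

// Apply all tracks, then normalise every flagged quaternion range
// (quatStarts holds the index of the x component of each quat).
void ApplyTracks(const std::vector<FloatTrack>& tracks,
                 std::vector<float>& params,
                 const std::vector<int>& quatStarts) {
    for (const FloatTrack& t : tracks)
        params[t.paramIndex] = t.value;
    for (int q : quatStarts) {
        float len = std::sqrt(params[q] * params[q] + params[q + 1] * params[q + 1] +
                              params[q + 2] * params[q + 2] + params[q + 3] * params[q + 3]);
        if (len > 0.0f)
            for (int i = 0; i < 4; ++i) params[q + i] /= len;
    }
}
```

Because the storage is just floats, a colour channel or a blend-shape weight is handled by exactly the same code path as a translation component.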

Quote:
 Original post by Kasya
question #3: Do I need Scale or Color? If yes, where will I use them?

Nope, but there's a good chance that at some point in the future you'll want to expand the system to handle something other than just rotation and translation. I used scale and color as examples, but you might need to animate ambient/diffuse/specular/emission/shininess/uv scrolling etc. All of those can be described as a number of floats.

### #34 Gasim  Members   -  Reputation: 207

Posted 18 August 2008 - 01:43 AM

Thanks!

### #35 haegarr  Crossbones+   -  Reputation: 6995

Posted 18 August 2008 - 01:52 AM

Quote:
 Original post by Kasya
...What is an Animation Track?...

I use animation tracks similar to your assumption, but a bit more complicated. I explain it coarsely in the following, but please notice that every system may have other edge constraints. So you may find another, or just less complex, solution better suited to your needs. In the following, words with a capital first letter normally denote a C++ class.

The global Timeline is controlled by the Runtime::tick, which reflects the current moment in time but is frozen during the processing of a video frame. I've mentioned this several times before. The Runtime::tick (indirectly) denotes the position of the virtual playhead of the Timeline. In fact a Timebase is in-between, but that should not play a role here.

An Animation::Resource is a kind of prototype for animations and as such counts as an asset. If well defined, it consists of one or more channels, where each channel modifies a targeted value. Targeting is done by a naming scheme, allowing me to bind the animation instance later to any target in principle. You have specified a BoneID in your suggestion. That is somewhat equivalent. But notice that the said channels are _not_ bound to actual targets, since we are still speaking of a prototype resource. One subtype of channel is implemented as a key-frame interpolation. Other classes exist as well, but are not of interest here.

Now, when the system decides to animate a skeleton, it gets a Track from the global Timeline. The Track is especially an AnimationTrack, since it is used also to handle the animation grouping for blending and layering. However, a new Animation instance is created and backed by the selected Animation::Resource. The Animation instance is a Timeline::Task, and as such can and will be added to the previously obtained AnimationTrack. The Animation instance is bound to the skeleton instance, which means that the names available from the Animation::Resource channels are resolved w.r.t. the skeleton instance. Names that cannot be resolved are simply ignored (over-determined animation), and names for which no channel exists are simply not hit (under-determined animation; this is where my "an animation should not need to control _all_ bones" happens).

BTW: please notice that an Animation instance is added to the Timeline similarly to how a skeleton instance is added to the Scene graph. Skeletons, too, are defined two-fold in my engine, namely as a Linkage::Resource and a Linkage instance.

What have we now? There is an Animation instance on the Timeline, reflecting the current state of the animation and its bindings. The instance is backed by an Animation::Resource which holds the static definition of the animation. There is further a Linkage instance in the Scene graph, reflecting the current state (i.e. all current bone positions, rotations, and so on) of the skeleton. The instance is backed by a Linkage::Resource which defines the static parts of the skeleton, e.g. its principal structure and the rest pose.

When time elapses and the Runtime::tick forces the Timeline's playhead over the Animation instance (remember it is a kind of Timeline::Task), the Animation instance processes all channels as described in a previous post. That means that the targeted positions and orientations are set to what the channels compute as the current value, given the playhead position, Animation::start, and the inner definition of the channel.

EDIT: Don't forget that I am describing a system. I don't force you to do the same. Please feel free to pick out whatever idea seems suitable to you.

### #36 haegarr  Crossbones+   -  Reputation: 6995

Posted 18 August 2008 - 01:59 AM

Uhh, writing my previous post has de-synchronized me from the thread's progression ... :)

Quote:
Original post by RobTheBloke
Quote:
 Original post by haegarr
How do you prevent lower-priority animations from being added if there is no criterion like "the sum is full"?

Depends on how the rest of your system works, I guess. In our case we don't need to do that, and in the majority of situations our clients normally want the weights to sum past 1. A very simple example is a spine-rotated-back and a spine-rotated-forward pose. Given a basic walk, if the character is going uphill, add a bit of the spine-forward pose to your walk animation - then normalise. Obviously the translation tracks should not be present in the two poses (which is normally the case for most bones other than the character root). Mind you, our system has about 20 different ways to blend animations - mainly because there really is no 'one size fits all' blending scheme you can use.

It seems to me that we have different understandings of layering. I mean a kind of blending that is able to suppress another animation from being added, simply because the higher-prioritized animation is processed earlier and "fills up the sum", so that "no more space" is left for the subsequent animation.

### #37 RobTheBloke  Crossbones+   -  Reputation: 2531

Posted 18 August 2008 - 03:14 AM

Quote:
 Original post by haegarr
I mean a kind of blending that is able to suppress another animation from being added, simply because the higher-prioritized animation is processed earlier and "fills up the sum", so that "no more space" is left for the subsequent animation.

Do you have a use case for that? I'm assuming you mean targeting an upper-body anim on top of a lower-body anim (or something like that)? But then, why not use a mask for that? I.e. eval tracks 1,2,8,9 only.... Since the system (when using nlerp) is basically a simple sum, it shouldn't really cause a problem in those cases. It seems that:

```
if(mask_used)
{
  foreach track to eval
}
else
{
  foreach track
}
```

is better than
```
foreach track
{
  if(weight < 1)
  {
  }
}
```

The former is easier to multithread, not to mention that it avoids the additional memory cost of storing the extra weighting for each track.
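A compact sketch of that branch-outside-the-loop structure (a hypothetical helper, not the posters' code): with a mask, only the listed track indices are evaluated; without one, all tracks are. The mask test happens once rather than per track, which keeps the per-track work uniform.

```cpp
#include <cassert>
#include <vector>

// Sum track values, optionally restricted to the indices in *mask.
// Pass nullptr to evaluate every track.
float SumTracks(const std::vector<float>& trackValues,
                const std::vector<int>* mask) {
    float sum = 0.0f;
    if (mask) {
        for (int i : *mask) sum += trackValues[i];   // masked: listed tracks only
    } else {
        for (float v : trackValues) sum += v;        // unmasked: all tracks
    }
    return sum;
}
```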

### #38 godmodder  Members   -  Reputation: 788

Posted 18 August 2008 - 03:56 AM

This makes me wonder, haegarr: how exactly do you supply these extra weights per animation? For example, I use Milkshape 3D to import my animation data, but there's no direct support for any kind of parameters like that. The best I can do is supply these extra weights in the supported joint "comments", but that feels kind of like a hack. Using a simple mask could alleviate my problem, I think. I could then predefine a couple of masks like "upper-body animation" and "lower-body animation" and then use these to blend the animations. Of course this wouldn't give me a lot of fine-grained control...

### #39 haegarr  Crossbones+   -  Reputation: 6995

Posted 18 August 2008 - 04:07 AM

@ RobTheBloke

An example would be aiming with a gun while walking. Walking alone means animation of the entire body, including the swinging of the arms. When layering the aiming animation on top of it, the swinging of the arm that holds the gun should be suppressed.

Masking is a binary decision, while weighting is (more or less) continuous. The additional memory costs are negligible. I don't weight individual channels, i.e. the modifiers of an individual position or orientation, but the entirety of channels of a particular _active_ animation (I use "animation" to mean the modification of a couple of bones).

And masking would need more information on the active animation tracks than weighting needs. This is because weighting works without knowledge of other animations, while masks must be built over all (or at least all active) animations.

I don't say that masking is bad. It is just a way that seems not to fit well into the animation system I described in the posts above. I think we have totally different approaches to animations at the top abstraction level, haven't we?

[Edited by - haegarr on August 18, 2008 10:07:58 AM]

### #40 haegarr  Crossbones+   -  Reputation: 6995

Posted 18 August 2008 - 04:26 AM

Quote:
 Original post by godmodder
This makes me wonder, haegarr: how exactly do you supply these extra weights per animation? For example, I use Milkshape 3D to import my animation data, but there's no direct support for any kind of parameters like that. The best I can do is supply these extra weights in the supported joint "comments", but that feels kind of like a hack.

If we are still speaking of the weights of layers: there is simply no sense in pre-defining them, and hence no need for DCC package support. It is a decision of the runtime system. The runtime system decides from the gamer's input and the state of the game object that pulling out a gun and aiming at a target is to be animated.

If we speak of the weights for blending: the same holds here. If the avatar is walking, and the user input causes the avatar to accelerate up to running, then the blending factor is influenced by the input system logic, not by the DCC package. The latter has been used to make a 100% walking animation as well as a 100% running animation: no need for weights there.

Please notice that I have absolutely no weights associated with joints directly. All weights are associated with animation tracks. Please refer back to my post where I answered Kasya on what I understand by an animation track.

Quote:
 Original post by godmodder
Using a simple mask could alleviate my problem, I think. I could then predefine a couple of masks like "upper-body animation" and "lower-body animation" and then use these to blend the animations. Of course this wouldn't give me a lot of fine-grained control...

I don't contradict that. As said several times, there is no single "the animation system", and I have never claimed that my system is superior.

[Edited by - haegarr on August 18, 2008 10:26:15 AM]
