
Entries in this blog

 

C++ Quiz #4

This is a test of your knowledge of C++, not of your compiler's knowledge of C++. Using a compiler during this test will likely give you the wrong answers, or at least incomplete ones.


What is the value of i after the first numbered line is evaluated?
What do you expect the second numbered line to print out?
What is the value of p->c after the third numbered line is evaluated?
What does the fourth numbered line print?

#include <iostream>

struct C;
void f(C* p);

struct C {
    int c;
    C() : c(1) {
        f(this);
    }
};

const C obj;
void f(C* p) {
    int i = obj.c << 2;                 //1
    std::cout << p->c << std::endl;     //2
    p->c = i;                           //3
    std::cout << obj.c << std::endl;    //4
}

What should you expect the compiler to do on the first numbered line? Why?
What should you expect the value of j to be after the second numbered line is evaluated? Why?

struct X {
    operator int() {
        return 314159;
    }
};
struct Y {
    operator X() {
        return X();
    }
};
Y y;
int i = y;      //1
int j = X(y);   //2

What should you expect the compiler to do on the first and second numbered lines? Why?

struct Z {
    Z() {}
    explicit Z(int) {}
};
Z z1 = 1;                    //1
Z z2 = static_cast<Z>(1);    //2

What should you expect the behavior of each of the numbered lines, irrespective of the other lines, to be?

struct Base {
    virtual ~Base() {}
};
struct Derived : Base {
    ~Derived() {}
};
typedef Base Base2;
Derived d;
Base* p = &d;
void f() {
    d.Base::~Base();      //1
    p->~Base();           //2
    p->~Base2();          //3
    p->Base2::~Base();    //4
    p->Base2::~Base2();   //5
}




C++ Quiz #3

This is a test of your knowledge of C++, not of your compiler's knowledge of C++. Using a compiler during this test will likely give you the wrong answers, or at least incomplete ones.

Given the following code:

#include <new>

class Base {
public:
    virtual ~Base() {}
    virtual void DoSomething() {}
    void Mutate();
};

class Derived : public Base {
public:
    virtual void DoSomething() {}
};

void Base::Mutate() {
    new (this) Derived;    // 1
}

void f() {
    void* v = ::operator new(sizeof(Base) + sizeof(Derived));
    Base* p = new (v) Base();
    p->DoSomething();      // 2
    p->Mutate();           // 3
    void* vp = p;          // 4
    p->DoSomething();      // 5
}



Does the first numbered line result in defined behavior? (Yes/No)
What should the first numbered line do?
Do the second and third numbered lines produce defined behavior? (Yes/No)
Does the fourth numbered line produce defined behavior? If so, why? If not, why?
Does the fifth numbered line produce defined behavior? If so, why? If not, why?
What is the behavior of calling void exit(int);?


Given the following code:

#include <new>

struct T {};
struct B {
    ~B();
};

void h() {
    B b;
    new (&b) T;    // 1
    return;        // 2
}



Does the first numbered line result in defined behavior?
Is the behavior of the second line defined? If so, why? If not, why is the behavior not defined?
What is the behavior of int& p = (int)0;? Why does it have that behavior? Is this a null reference?
What is the behavior of p->I::~I(); if I is defined as typedef int I; and p is defined as I* p;?



C++ Quiz #2

This is a test of your knowledge of C++, not of your compiler's knowledge of C++. Using a compiler during this test will likely give you the wrong answers, or at least incomplete ones.


Using the code below as a reference, explain what behavior should be expected of each of the commented lines; please keep your answers very short.

#include <typeinfo>

struct Base {
    virtual void Arr();
};
struct SubBase1 : virtual Base { };
struct SubBase2 : virtual Base {
    SubBase2(Base*, SubBase1*);
    virtual void Arr();
};

struct Derived : SubBase1, SubBase2 {
    Derived() : SubBase2((SubBase1*)this, this) { }
};

SubBase2::SubBase2(Base* a, SubBase1* b) {
    typeid(*a);                    //1
    dynamic_cast<SubBase2*>(a);    //2
    typeid(*b);                    //3
    dynamic_cast<SubBase1*>(b);    //4
    a->Arr();                      //5
    b->Arr();                      //6
}

Using the code below as a reference, explain what behavior should be expected of each of the commented lines.

template<class T> class X {
    X<T>* p;    //1
    X<T> a;     //2
};




Building a SlimDX MiniTriangle sample with Direct3D11 and IronPython

I generally don't post huge code dumps, mainly because I find them more annoying and less helpful than some books/authors seem to. But you know, I've been playing with IronPython and SlimDX recently and decided to do up another SlimDX sample (demonstrating DX11), except in IronPython this time. This will be in the SlimDX samples sometime soon!

import clr
clr.AddReference('System.Windows.Forms')
clr.AddReference('System.Drawing')
clr.AddReference('SlimDX')

from System import *
from System.Drawing import Size
from System.Windows.Forms import Form, Application, MessageBox, FormBorderStyle
from SlimDX import *
from SlimDX.Direct3D11 import *
from SlimDX.DXGI import SwapChainDescription, SwapChainFlags, ModeDescription, SampleDescription, Usage, SwapEffect, Format, PresentFlags, Factory, WindowAssociationFlags
from SlimDX.D3DCompiler import *
from SlimDX.Windows import MessagePump

class GameObject:
    def Render(self):
        pass
    def Tick(self):
        pass

class GraphicsDevice(IDisposable):
    Context = property(lambda self: self.context)
    Device = property(lambda self: self.device)
    SwapChain = property(lambda self: self.swapChain)

    def __init__(self, control, fullscreen):
        self.fullscreen = fullscreen
        self.control = control
        control.Resize += lambda sender, args: self.Resize()
        swapChainDesc = self.CreateSwapChainDescription();
        success, self.device, self.swapChain = Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.None, Array[FeatureLevel]([FeatureLevel.Level_11_0, FeatureLevel.Level_10_1, FeatureLevel.Level_10_0]), swapChainDesc)
        self.context = self.Device.ImmediateContext
        with self.swapChain.GetParent[Factory]() as factory:
            factory.SetWindowAssociation(self.control.Handle, WindowAssociationFlags.IgnoreAll)
        with Resource.FromSwapChain[Texture2D](self.swapChain, 0) as backBuffer:
            self.backBufferRTV = RenderTargetView(self.Device, backBuffer)
        self.Resize()

    def CreateSwapChainDescription(self):
        swapChainDesc = SwapChainDescription()
        swapChainDesc.IsWindowed = not self.fullscreen
        swapChainDesc.BufferCount = 1
        swapChainDesc.ModeDescription = ModeDescription(self.control.ClientSize.Width, self.control.ClientSize.Height, Rational(60, 1), Format.R8G8B8A8_UNorm)
        swapChainDesc.Flags = SwapChainFlags.None
        swapChainDesc.SwapEffect = SwapEffect.Discard
        swapChainDesc.Usage = Usage.RenderTargetOutput
        swapChainDesc.SampleDescription = SampleDescription(1, 0)
        swapChainDesc.OutputHandle = self.control.Handle
        return swapChainDesc

    def Resize(self):
        self.Context.ClearState()
        self.backBufferRTV.Dispose()
        self.swapChain.ResizeBuffers(1, self.control.ClientSize.Width, self.control.ClientSize.Height, Format.R8G8B8A8_UNorm, SwapChainFlags.None)
        with Resource.FromSwapChain[Texture2D](self.swapChain, 0) as backBuffer:
            self.backBufferRTV = RenderTargetView(self.Device, backBuffer)
        self.Context.Rasterizer.SetViewports(Viewport(0, 0, self.control.ClientSize.Width, self.control.ClientSize.Height, 0.0, 1.0))

    def BeginRender(self):
        self.Context.ClearRenderTargetView(self.backBufferRTV, Color4(0, 0, 0, 0))
        self.Context.OutputMerger.SetTargets(self.backBufferRTV)

    def EndRender(self):
        self.swapChain.Present(0, PresentFlags.None)

    def Dispose(self):
        self.backBufferRTV.Dispose()
        self.swapChain.Dispose()
        self.device.Dispose()

class TriangleObject(GameObject):
    def __init__(self, game):
        self.game = game
        device = game.GraphicsDevice.Device
        context = game.GraphicsDevice.Context
        err = clr.Reference[str]()
        with ShaderBytecode.CompileFromFile("SimpleTriangle10.fx", "fx_5_0", ShaderFlags.None, EffectFlags.None, None, None, err) as shaderByteCode:
            self.effect = Effect(device, shaderByteCode)
        shaderTechnique = self.effect.GetTechniqueByIndex(0)
        self.shaderPass = shaderTechnique.GetPassByIndex(0)
        sig = self.shaderPass.Description.Signature
        self.inputLayout = InputLayout(device, sig, Array[InputElement]([InputElement("POSITION", 0, Format.R32G32B32A32_Float, 0, 0), InputElement("COLOR", 0, Format.R32G32B32A32_Float, 16, 0)]))
        bufferDesc = BufferDescription(3 * 32, ResourceUsage.Dynamic, BindFlags.VertexBuffer, CpuAccessFlags.Write, ResourceOptionFlags.None, 0)
        self.vertexBuffer = Buffer(device, bufferDesc)
        stream = context.MapSubresource(self.vertexBuffer, 0, 3 * 32, MapMode.WriteDiscard, MapFlags.None).Data
        data = Array[Vector4]([
            Vector4(0.0, 0.5, 0.5, 1.0), Vector4(1.0, 0.0, 0.0, 1.0),
            Vector4(0.5, -0.5, 0.5, 1.0), Vector4(0.0, 1.0, 0.0, 1.0),
            Vector4(-0.5, -0.5, 0.5, 1.0), Vector4(0.0, 0.0, 1.0, 1.0)
        ])
        stream.WriteRange(data)
        context.UnmapSubresource(self.vertexBuffer, 0)

    def Render(self):
        context = self.game.GraphicsDevice.Context
        context.InputAssembler.InputLayout = self.inputLayout
        context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList
        context.InputAssembler.SetVertexBuffers(0, VertexBufferBinding(self.vertexBuffer, 32, 0))
        self.shaderPass.Apply(context)
        context.Draw(3, 0)

    def Dispose(self):
        self.effect.Dispose()
        self.inputLayout.Dispose()
        self.vertexBuffer.Dispose()

class Game(IDisposable):
    GraphicsDevice = property(lambda self: self.graphicsDevice)

    def __init__(self, width, height, fullscreen = False):
        self.fullscreen = fullscreen
        self.form = GameForm(width, height, fullscreen)
        self.form.Visible = True
        self.graphicsDevice = GraphicsDevice(self.form, self.fullscreen)
        self.gameObjects = [TriangleObject(self)]

    def Run(self):
        Application.Idle += self.OnIdle
        Application.Run(self.form)

    def OnIdle(self, ea, sender):
        while MessagePump.IsApplicationIdle:
            self.Update()
            self.Render()

    def Update(self):
        for i in self.gameObjects:
            i.Tick()

    def Render(self):
        self.GraphicsDevice.BeginRender()
        for i in self.gameObjects:
            i.Render()
        self.GraphicsDevice.EndRender()

    def Dispose(self):
        self.GraphicsDevice.Dispose()
        for i in self.gameObjects:
            if 'Dispose' in dir(i):
                i.Dispose()
        self.form.Dispose(True)

class GameForm(Form):
    def __init__(self, width, height, fullscreen):
        self.ClientSize = Size(width, height)
        if fullscreen:
            self.FormBorderStyle = FormBorderStyle.None

if __name__ == "__main__":
    try:
        with Game(640, 480) as game:
            game.Run()
    except Exception as e:
        MessageBox.Show(e.ToString())


SlimGen and You, Part ADD EAX, [EAX] of N

So far I've covered how SlimGen works and the difficulties in doing what it does, including the calling convention issues that one must be aware of when writing replacement methods for use with SlimGen.
So the next question arises: just how much of a difference can using SlimGen make? Well, a lot of that will depend on the developer and their skill level. But we were also pretty curious about this, so we slapped together a test sample that runs through a series of matrix multiplications and times them. It uses three arrays to perform the multiplications; two of the arrays contain 100,000 randomly generated matrices, with the third used as the destination for the results. Both matrix multiplications (the SlimGen one and the .Net one) assume that a source can also be used as a destination, and so they are overlap safe.
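For reference, here is a minimal sketch of what such a timing harness might look like. This is not the actual test sample: the flat float[16] matrix representation, the delegate signature, and the use of Stopwatch ticks are assumptions made for illustration; the real test times SlimDX's Matrix.Multiply before and after SlimGen patches the native image.

using System;
using System.Diagnostics;

static class MultiplyBenchmark
{
    const int MatrixCount = 100000;   // matrices per run, as in the test described above

    // Times one run: left[i] * right[i] -> result[i] for every matrix in the arrays.
    static long TimeRun(float[][] left, float[][] right, float[][] result,
                        Action<float[], float[], float[]> multiply)
    {
        Stopwatch watch = Stopwatch.StartNew();
        for (int i = 0; i < MatrixCount; ++i)
            multiply(left[i], right[i], result[i]);
        watch.Stop();
        return watch.ElapsedTicks;
    }

    // Prints totals in the same form as the results below.
    static void Report(long managedTicks, long slimGenTicks)
    {
        Console.WriteLine("Multiply Total Ticks: {0:N0}", managedTicks);
        Console.WriteLine("SlimGenMultiply Total Ticks: {0:N0}", slimGenTicks);
        Console.WriteLine("Improvement: {0:F2} %",
            100.0 * (managedTicks - slimGenTicks) / managedTicks);
    }
}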
The timing results will vary, of course, from machine to machine depending on the processor in the machine, how much RAM you have, and what you're doing at the time. Running the test on my Phenom 9850 I get:

Total Matrix Count Per Run: 100,000
Multiply Total Ticks: 2,001,059
SlimGenMultiply Total Ticks: 1,269,200
Improvement: 36.57 %
While when I run it on my T8300 Core2 Duo laptop I get:

Total Matrix Count Per Run: 100,000
Multiply Total Ticks: 2,175,380
SlimGenMultiply Total Ticks: 1,621,830
Improvement: 25.45 %
Still, 25-35% improvement over the FPU based multiply is quite significant. Since X64 support hasn't been fully hammered out (in that it "works" but hasn't been sufficiently verified as working), those numbers are unavailable at the moment. However, they should be available in the near future as we finalize error handling and ensure that there are no bugs in the x64 assembly handling.
So why the great difference in performance? Well, part of it is the method size: the .Net method is 566 bytes of pure code, over half a kilobyte that has to be walked through by the processor, brought into the instruction cache on the CPU and executed, while the SSE2 method is around half that size, at 266 bytes. The smaller your footprint in the I-cache, the fewer misses you take and the more likely your code is to actually be IN the I-cache. Then there are the instructions: SSE2 has been around for a while, so CPU manufacturers have had plenty of time to wrangle with it and ensure optimal performance. Finally there's the memory hit issue: after the first reads and writes, the SSE2-based code hits memory a minimal number of times, reducing the chances of cache misses.
Finally there's how each version deals with storage of the temporary results. The .Net FPU-based version allocates a Matrix on the stack, calls the constructor (which zero-initializes it), and then proceeds to overwrite those entries one by one with the results of each set of dot products. At the end of the method it does what amounts to a memcpy and copies the temporary matrix over the result matrix. The SSE2 version, however, doesn't bother with initializing the stack and only stores three of the rows there, opting to write the final row directly to the destination. The three other rows are then moved back into XMM registers and then back out to the destination.
The SSE2 source code, followed by the .Net source code; note that both are functionally equivalent:

start:
mov eax, [esp + 4]
movups xmm4, [edx]
movups xmm5, [edx + 0x10]
movups xmm6, [edx + 0x20]
movups xmm7, [edx + 0x30]
movups xmm0, [ecx]
movaps xmm1, xmm0
movaps xmm2, xmm0
movaps xmm3, xmm0
shufps xmm0, xmm1, 0x00
shufps xmm1, xmm1, 0x55
shufps xmm2, xmm2, 0xAA
shufps xmm3, xmm3, 0xFF
mulps xmm0, xmm4
mulps xmm1, xmm5
mulps xmm2, xmm6
mulps xmm3, xmm7
addps xmm0, xmm2
addps xmm1, xmm3
addps xmm0, xmm1
movups [esp - 0x20], xmm0 ; store row 0 of new matrix
movups xmm0, [ecx + 0x10]
movaps xmm1, xmm0
movaps xmm2, xmm0
movaps xmm3, xmm0
shufps xmm0, xmm0, 0x00
shufps xmm1, xmm1, 0x55
shufps xmm2, xmm2, 0xAA
shufps xmm3, xmm3, 0xFF
mulps xmm0, xmm4
mulps xmm1, xmm5
mulps xmm2, xmm6
mulps xmm3, xmm7
addps xmm0, xmm2
addps xmm1, xmm3
addps xmm0, xmm1
movups [esp - 0x30], xmm0 ; store row 1 of new matrix
movups xmm0, [ecx + 0x20]
movaps xmm1, xmm0
movaps xmm2, xmm0
movaps xmm3, xmm0
shufps xmm0, xmm0, 0x00
shufps xmm1, xmm1, 0x55
shufps xmm2, xmm2, 0xAA
shufps xmm3, xmm3, 0xFF
mulps xmm0, xmm4
mulps xmm1, xmm5
mulps xmm2, xmm6
mulps xmm3, xmm7
addps xmm0, xmm2
addps xmm1, xmm3
addps xmm0, xmm1
movups [esp - 0x40], xmm0 ; store row 2 of new matrix
movups xmm0, [ecx + 0x30]
movaps xmm1, xmm0
movaps xmm2, xmm0
movaps xmm3, xmm0
shufps xmm0, xmm0, 0x00
shufps xmm1, xmm1, 0x55
shufps xmm2, xmm2, 0xAA
shufps xmm3, xmm3, 0xFF
mulps xmm0, xmm4
mulps xmm1, xmm5
mulps xmm2, xmm6
mulps xmm3, xmm7
addps xmm0, xmm2
addps xmm1, xmm3
addps xmm0, xmm1
movups [eax + 0x30], xmm0 ; store row 3 of new matrix
movups xmm0, [esp - 0x40]
movups [eax + 0x20], xmm0
movups xmm0, [esp - 0x30]
movups [eax + 0x10], xmm0
movups xmm0, [esp - 0x20]
movups [eax], xmm0
ret 4
The .Net matrix multiplication source code:

public static void Multiply(ref Matrix left, ref Matrix right, out Matrix result) {
    Matrix r;
    r.M11 = (left.M11 * right.M11) + (left.M12 * right.M21) + (left.M13 * right.M31) + (left.M14 * right.M41);
    r.M12 = (left.M11 * right.M12) + (left.M12 * right.M22) + (left.M13 * right.M32) + (left.M14 * right.M42);
    r.M13 = (left.M11 * right.M13) + (left.M12 * right.M23) + (left.M13 * right.M33) + (left.M14 * right.M43);
    r.M14 = (left.M11 * right.M14) + (left.M12 * right.M24) + (left.M13 * right.M34) + (left.M14 * right.M44);
    r.M21 = (left.M21 * right.M11) + (left.M22 * right.M21) + (left.M23 * right.M31) + (left.M24 * right.M41);
    r.M22 = (left.M21 * right.M12) + (left.M22 * right.M22) + (left.M23 * right.M32) + (left.M24 * right.M42);
    r.M23 = (left.M21 * right.M13) + (left.M22 * right.M23) + (left.M23 * right.M33) + (left.M24 * right.M43);
    r.M24 = (left.M21 * right.M14) + (left.M22 * right.M24) + (left.M23 * right.M34) + (left.M24 * right.M44);
    r.M31 = (left.M31 * right.M11) + (left.M32 * right.M21) + (left.M33 * right.M31) + (left.M34 * right.M41);
    r.M32 = (left.M31 * right.M12) + (left.M32 * right.M22) + (left.M33 * right.M32) + (left.M34 * right.M42);
    r.M33 = (left.M31 * right.M13) + (left.M32 * right.M23) + (left.M33 * right.M33) + (left.M34 * right.M43);
    r.M34 = (left.M31 * right.M14) + (left.M32 * right.M24) + (left.M33 * right.M34) + (left.M34 * right.M44);
    r.M41 = (left.M41 * right.M11) + (left.M42 * right.M21) + (left.M43 * right.M31) + (left.M44 * right.M41);
    r.M42 = (left.M41 * right.M12) + (left.M42 * right.M22) + (left.M43 * right.M32) + (left.M44 * right.M42);
    r.M43 = (left.M41 * right.M13) + (left.M42 * right.M23) + (left.M43 * right.M33) + (left.M44 * right.M43);
    r.M44 = (left.M41 * right.M14) + (left.M42 * right.M24) + (left.M43 * right.M34) + (left.M44 * right.M44);
    result = r;
}


SlimGen and You, Part ADD AL, [RAX] of N

The question does arise, though: when using SlimGen and writing your SSE replacement methods, what kind of calling convention does the CLR use?
The CLR uses a version of fastcall. On x86 processors this means that the first two parameters (that are DWORD or smaller) are passed in ECX and EDX. However, and this is where the CLR differs from standard fastcall, the parameters after the first two are pushed onto the stack from left to right, not right to left. This is important to remember, especially for functions that take a variable number of arguments. So a call like X('c', 2, 3.0f, "Hello"); becomes:

X('c', 2, 3.0f, "Hello");
00000025 push 40400000h ; 3.0f
0000002a push dword ptr ds:[03402088h] ; Address of "Hello"
00000030 mov edx,2
00000035 mov ecx,63h ; 'c'
0000003a call FFB8B040
The situation is the same for member functions as well, except that this is passed in ECX, which leaves only EDX to hold an additional parameter. The rest are passed on the stack as before:

p.Y(2, 3.0f);
0000006d push 40400000h ; 3.0f
00000072 mov ecx,dword ptr [ebp-40h] ; this
00000075 mov edx,2
0000007c call FFA1B048
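To make the register assignments concrete, here are plausible C# declarations for the two methods being called above. These are assumptions for illustration only; the post shows just the call sites, not the signatures.

// Hypothetical declarations matching the call sites in the disassembly above.
class CallingConventionSamples
{
    // Static method: 'c' goes in ECX, 2 in EDX, and the float and string
    // are pushed on the stack left to right.
    public static void X(char a, int b, float c, string d) { }

    // Instance method: 'this' takes ECX, so only the int fits in EDX and
    // the float goes on the stack.
    public void Y(int a, float b) { }
}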
So this all seems clear enough, but it's important to note these differences, especially when you're poking around in the low-level bowels of the CLR or when you're doing what SlimGen does: replacing actual method bodies.
So this begs the question: what about on the x64 platform? Well, again, the calling convention is fastcall with a few differences. The first four parameters are passed in RCX, RDX, R8 and R9 (or smaller registers), unless those parameters are floating point types, in which case they are passed using XMM registers.

Z('c', 2, 3.0f, "Hello", 1.0, pa);
000000c0 mov r9,124D3100h
000000ca mov r9,qword ptr [r9] ; "Hello"
000000cd mov rax,qword ptr [rsp+38h] ; pa (IntPtr[])
000000d2 mov qword ptr [rsp+28h],rax ; pa - stack spill
000000d7 movsd xmm0,mmword ptr [00000118h] ; 1.0
000000df movsd mmword ptr [rsp+20h],xmm0 ; 1.0 - stack spill
000000e5 movss xmm2,dword ptr [00000110h] ; 3.0f
000000ed mov edx,2 ; int (2)
000000f2 mov cx,63h ; 'c'
000000f6 call FFFFFFFFFFEC9300
Whew, that looks pretty nasty, doesn't it? But if you notice, pretty much every single parameter to that function is passed in a register. The stack spillage is part of the calling convention, to allow for variables to be spilled into memory (or read back from memory) when the register needs to be used. Calling an instance method follows pretty much the same rules, except the this pointer is passed in RCX first.

p.Q(~0L, ~1L, ~2L, ~3);
0000010a mov rcx,qword ptr [rsp+30h] ; this pointer
0000010f mov qword ptr [rsp+20h],0FFFFFFFFFFFFFFFCh ; ~3L, spilled to stack
00000118 mov r9,0FFFFFFFFFFFFFFFDh ; ~2L
0000011f mov r8,0FFFFFFFFFFFFFFFEh ; ~1L
00000126 mov rdx,0FFFFFFFFFFFFFFFFh ; ~0L
0000012d call FFFFFFFFFFEC9310
Calling a function and passing something larger than a register can store does pose an interesting problem; the CLR deals with it by moving the entire data onto the stack and passing that (hence call by value):

var v = new Vector();
p.R(v);
00000169 lea rcx,[rsp+40h]
0000016e mov rax,qword ptr [rcx]
00000171 mov qword ptr [rsp+50h],rax
00000176 mov rax,qword ptr [rcx+8]
0000017a mov qword ptr [rsp+58h],rax
0000017f lea rdx,[rsp+50h]
00000184 mov rcx,r8
00000187 call FFFFFFFFFFEC9318
As you can see, it copies the data from the vector onto the stack, stores the this pointer in RCX, and then calls the function. This is why pass by reference is the preferred way (for fast code) to move around structures that are non-trivial.
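As a minimal sketch, this is the difference at the C# level. The Vector layout here is a stand-in, assumed to be a 16-byte struct as in the disassembly above: passing it by value forces the caller to copy the whole thing onto the stack, while passing by reference only hands over a pointer.

struct Vector
{
    public float X, Y, Z, W;   // 16 bytes, matching the two-qword copy above
}

class PassingSample
{
    // By value: the caller copies the entire struct onto the stack first.
    public void R(Vector v) { }

    // By reference: the caller just passes the address of the existing struct.
    public void R(ref Vector v) { }
}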
All of this goes into calculating our matrix multiplication method (which assumes the output is not one of the inputs):

BITS 32
ORG 0x59f0

; void Multiply(ref Matrix, ref Matrix, out Matrix)
start:
mov eax, [esp + 4]
movups xmm4, [edx]
movups xmm5, [edx + 0x10]
movups xmm6, [edx + 0x20]
movups xmm7, [edx + 0x30]
movups xmm0, [ecx]
movaps xmm1, xmm0
movaps xmm2, xmm0
movaps xmm3, xmm0
shufps xmm0, xmm1, 0x00
shufps xmm1, xmm1, 0x55
shufps xmm2, xmm2, 0xAA
shufps xmm3, xmm3, 0xFF
mulps xmm0, xmm4
mulps xmm1, xmm5
mulps xmm2, xmm6
mulps xmm3, xmm7
addps xmm0, xmm2
addps xmm1, xmm3
addps xmm0, xmm1
movups [eax], xmm0 ; Calculate row 0 of new matrix
movups xmm0, [ecx + 0x10]
movaps xmm1, xmm0
movaps xmm2, xmm0
movaps xmm3, xmm0
shufps xmm0, xmm0, 0x00
shufps xmm1, xmm1, 0x55
shufps xmm2, xmm2, 0xAA
shufps xmm3, xmm3, 0xFF
mulps xmm0, xmm4
mulps xmm1, xmm5
mulps xmm2, xmm6
mulps xmm3, xmm7
addps xmm0, xmm2
addps xmm1, xmm3
addps xmm0, xmm1
movups [eax + 0x10], xmm0 ; Calculate row 1 of new matrix
movups xmm0, [ecx + 0x20]
movaps xmm1, xmm0
movaps xmm2, xmm0
movaps xmm3, xmm0
shufps xmm0, xmm0, 0x00
shufps xmm1, xmm1, 0x55
shufps xmm2, xmm2, 0xAA
shufps xmm3, xmm3, 0xFF
mulps xmm0, xmm4
mulps xmm1, xmm5
mulps xmm2, xmm6
mulps xmm3, xmm7
addps xmm0, xmm2
addps xmm1, xmm3
addps xmm0, xmm1
movups [eax + 0x20], xmm0 ; Calculate row 2 of new matrix
movups xmm0, [ecx + 0x30]
movaps xmm1, xmm0
movaps xmm2, xmm0
movaps xmm3, xmm0
shufps xmm0, xmm0, 0x00
shufps xmm1, xmm1, 0x55
shufps xmm2, xmm2, 0xAA
shufps xmm3, xmm3, 0xFF
mulps xmm0, xmm4
mulps xmm1, xmm5
mulps xmm2, xmm6
mulps xmm3, xmm7
addps xmm0, xmm2
addps xmm1, xmm3
addps xmm0, xmm1
movups [eax + 0x30], xmm0 ; Calculate row 3 of new matrix
ret 4


SlimGen and You, Part ADD [EAX], EAX of N

So previously we delved into one of the nastier performance corners of the .Net framework. Today I'm going to introduce you to a tool, currently in development, which allows you to take those slow math functions of yours and replace them with high-performance SSE-optimized methods.
We've called it SlimGen, a name which, although not exactly accurate, fits nicely with the other Slim projects currently underway, including SlimTune and the flagship that started it all, SlimDX.
So what does SlimGen do? Well, you pass it a .Net assembly and it replaces the native method bodies, which are generated using NGEN, with new ones written in assembly (for now). The modified image then replaces the original that was stored in the native image store. SlimGen can operate on signed and unsigned assemblies alike, as the native image is not signed; more on this later though.
Managed PE files contain a great deal of metadata stored in tables. You can enumerate these tables and parse them yourself, for instance if you were writing your own CLR. Thankfully though, the .Net framework comes with several COM interfaces that are very helpful in accessing these tables without having to manually parse them out of the PE file. This is especially useful since the table rows are not a fixed format: indexes in the tables can be either 2 or 4 bytes in size depending on the size of the dataset indexed. In the case of SlimGen we use the IMetaDataImport2 interface for accessing the metadata.
Of course, the managed metadata does not contain all of the information we need. NGEN manipulates the managed assembly and introduces pre-jitted versions of the functions contained within the assembly. However, their managed counterparts remain in the assembly and are what the metadata tables reference. So how does one go from a managed method and its IL to the associated unmanaged code? Well, the CLR header of a PE file does contain a pointer to a table for a native header. However, the exact format of that table is undocumented, which makes it hard to parse and find the information we need. Therefore we have to use an alternative method...
When you load up an assembly, the CLR generates, using the metadata and other information found in the PE file, a set of runtime tables that it uses to track where things are in memory and their current state. For instance, it can tell whether it has jitted a method or not. When you load up an assembly that's been NGENed, it checks the native image store for an associated copy, assuming your assembly validates, and will load up the NGENed assembly and parse out the appropriate information from that. Therefore we need some way of gaining access to these runtime-generated tables. Enter the debugger.
The .Net framework exposes debugging interfaces that are quite trivial to implement, but more importantly, they give you access to all of the runtime information available to the CLR. In the case of SlimGen what we do is load up your assembly (not run it) into a host process and then simply have the host process execute a debugger breakpoint. The SlimGen analyzer first initializes itself as a debugger and then executes the host process as the attached debugger. When the breakpoint is hit, it breaks into the analyzer, which can then begin the work of processing the loaded assemblies. Since SlimGen knows which assembly it fed to the host, it is able to filter out all of the other assemblies that have been loaded and focus in on the one we care about. First we check and see if a native version of the assembly has been loaded, for if one hasn't been loaded there is no point in continuing; if not, we simply report an error and clean up. Assuming there is a native version of the assembly loaded, we use the aforementioned metadata interfaces to walk the assembly and find all of the methods that have been marked for replacement. Each method is examined to ensure that it has a native counterpart, and if it doesn't, another warning is issued and the method is skipped.
Now comes the annoying part. In .Net 1.x the framework had each method exist within a single code chunk, which made extracting that code quite easy. However, in .Net 2.x and onward the framework allows a method to have multiple code chunks, each with a different base address and length. This is theoretically to allow an optimizer to work its magic, but it does make extracting methods harder. SlimGen will generate an assembly file per chunk, and all of the associated binaries for each chunk, generated from the assembly files, must be present for the method to be replaced. No dangling chunks please. The SlimGen analyzer extracts each base address from each chunk, along with the module base address. Using that information we can then calculate the relative virtual address of each method's native counterpart within the NGENed file.
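The address arithmetic itself is trivial; a minimal sketch (assuming the debugger hands back absolute addresses for both the chunk and the module) looks like this:

static class AddressMath
{
    // The chunk's relative virtual address (RVA) inside the NGEN image is just
    // its absolute start address minus the module's load address.
    public static uint ChunkRva(ulong chunkStartAddress, ulong moduleBaseAddress)
    {
        return (uint)(chunkStartAddress - moduleBaseAddress);
    }
}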
Using that information the SlimGen client simply walks a copy of the native image performing the replacement of each method, and then when done (and assuming no errors), copies it back over the original NGEN image. Tada, you now have your highly optimized SSE code running in a managed application with no managed -> unmanaged transitions in sight.


SlimGen and You, Part ADD [EAX], AL of N

Imagine you could have the safety of managed code and the speed of SIMD all in one. Sounds like one of those weird dreams Trent has, or perhaps you are already thinking of using C++/CLI to wrap SIMD methods to help reduce the unmanaged transition overhead. You might also be thinking about pinvoking DLL methods such as those used in the D3DX framework to take advantage of its SIMD capabilities.
While all of those are quite possible, and for sufficiently large problems quite efficient too, they also have a relatively high cost of invocation. Managed-to-unmanaged transitions, even in the best of cases, cost a pretty penny. Registers have to be saved, marshalling of non-fundamental types has to be performed, and in many cases an interop thunk has to be created/jitted. This is a case where the best option is to do as much work as you can in one area before transitioning to the next.
But you can't always do tons of work at once; a prime example is managing your game state. You'll have discrete transformations of objects, but batching up those transformations to perform them all at once becomes a management nightmare. You have to craft special data structures to avoid marshalling, use pinned arrays, and in general you end up doing a lot of work maintaining the two sides, spend plenty of time debugging your interface, and may still not gain anything speed-wise.
If you're wondering just how bad the interop transition is, you can take a look at my previous entries, where I explored the topic in some detail.
In the .Net framework, most code runs almost as fast, as fast, or faster than the comparable native counterparts. There are cases where the framework is significantly faster, and cases where it loses out by about 10% in the worst case. 10% isn't a horrible loss, and it's not a consistent loss either. The cost will vary depending on factors such as: is JITing required, is memory allocation performed, are you doing FPU math that would be vectorized in native code?
In fact, that 10% figure isn't accurate either: If a method requires JITting the first time it is called, which could cost you 10% on the first invocation, future invocations will not need JITing and so the cost may end up being the same as its native counterpart henceforth. If the method is called a thousand times, then that's only an additional .01% cost over the entire set of invocations.
The only real area where the .Net framework seriously loses out to unmanaged code is the math department. The inability to use vectorization can significantly increase the cost of managed math over that of unmanaged math code; that 10% figure rears its ugly head here. On the integer math side of things managed code is almost on equal footing with unmanaged code (although there are some vectorized operations that can enhance integer work quite significantly, in general the two add up to be about the same). However, when it comes to floating point performance, managed code loses out due to its dependency on the FPU or single-float SSE instructions. The ability to vectorize large chunks of floating point math can work wonders for unmanaged code.
Well, all is not lost for those of us who love the managed world... SlimGen is here. Exactly what SlimGen is will be delved into later, but here's a sample preview of what it can do:

SlimDX.Matrix.Multiply(SlimDX.Matrix ByRef, SlimDX.Matrix ByRef, SlimDX.Matrix ByRef)
Begin 5a856e64, size 293
5A856E64 8B442404 mov eax,dword ptr [esp+4]
5A856E68 0F1022 movups xmm4,xmmword ptr [edx]
5A856E6B 0F106A10 movups xmm5,xmmword ptr [edx+10h]
5A856E6F 0F107220 movups xmm6,xmmword ptr [edx+20h]
5A856E73 0F107A30 movups xmm7,xmmword ptr [edx+30h]
5A856E77 0F1001 movups xmm0,xmmword ptr [ecx]
5A856E7A 0F28C8 movaps xmm1,xmm0


Playing With The .NET JIT Part 4

As noted previously, there are some cases where the performance of unmanaged code can beat that of the managed JIT. In the previous case it was the matrix multiplication function. We do have some other possible performance benefits we can give to our .NET code; specifically, we can NGEN it. NGEN is an interesting utility: it can perform heavy optimizations that would not be possible in the standard runtime JIT (as we shall see). The question before us is: will it give us enough of a boost to be able to surpass the performance of our unmanaged matrix multiplication?

An Analysis of Existing Code
We haven't looked at the code that was produced for our previous tests yet, so I feel that it is time we gave it a look and see what we have. To keep this shorter we'll only look at the inner product function; the code produced for the matrix multiplication suffers from the same problems and benefits from the same extensions. For the purposes of this writing we'll only consider the x64 platform. First up we'll look at our unmanaged inner product, which as we may recall is an SSE2 version. There are some things we should note: this method cannot be inlined into the managed code, and there are no frame pointers (they got optimized out).

00000001`800019c3 0f100a movups xmm1,xmmword ptr [rdx]
00000001`800019c6 0f59c8 mulps xmm1,xmm0
00000001`800019c9 0f28c1 movaps xmm0,xmm1
00000001`800019cc 0fc6c14e shufps xmm0,xmm1,4Eh
00000001`800019d0 0f58c8 addps xmm1,xmm0
00000001`800019d3 0f28c1 movaps xmm0,xmm1
00000001`800019d6 0fc6c11b shufps xmm0,xmm1,1Bh
00000001`800019da 0f58c1 addps xmm0,xmm1
00000001`800019dd f3410f1100 movss dword ptr [r8],xmm0
00000001`800019e2 c3 ret
The code used to produce the managed version shown below has undergone a slight modification: no longer does the method return a float; instead it has an out parameter to a float, which ends up holding the result of the operation. This change was made to eliminate some compilation issues in both the managed and unmanaged versions. In the case of the managed version below, without the out parameter the store operation (at 00000642`801673b3) would have required a conversion to a double and back to a single again; the new versions are shown at the end of this post. Examining the managed inner product we get a somewhat worse picture:

00000642`8016732f 4c8b4908 mov r9,qword ptr [rcx+8]
00000642`80167333 4d85c9 test r9,r9
00000642`80167336 0f8684000000 jbe 00000642`801673c0
00000642`8016733c f30f104110 movss xmm0,dword ptr [rcx+10h]
00000642`80167341 488b4208 mov rax,qword ptr [rdx+8]
00000642`80167345 4885c0 test rax,rax
00000642`80167348 7676 jbe 00000642`801673c0
00000642`8016734a f30f104a10 movss xmm1,dword ptr [rdx+10h]
00000642`8016734f f30f59c8 mulss xmm1,xmm0
00000642`80167353 4983f901 cmp r9,1
00000642`80167357 7667 jbe 00000642`801673c0
00000642`80167359 f30f105114 movss xmm2,dword ptr [rcx+14h]
00000642`8016735e 483d01000000 cmp rax,1
00000642`80167364 765a jbe 00000642`801673c0
00000642`80167366 f30f104214 movss xmm0,dword ptr [rdx+14h]
00000642`8016736b f30f59c2 mulss xmm0,xmm2
00000642`8016736f f30f58c1 addss xmm0,xmm1
00000642`80167373 4983f902 cmp r9,2
00000642`80167377 7647 jbe 00000642`801673c0
00000642`80167379 f30f105118 movss xmm2,dword ptr [rcx+18h]
00000642`8016737e 483d02000000 cmp rax,2
00000642`80167384 763a jbe 00000642`801673c0
00000642`80167386 f30f104a18 movss xmm1,dword ptr [rdx+18h]
00000642`8016738b f30f59ca mulss xmm1,xmm2
00000642`8016738f f30f58c8 addss xmm1,xmm0
00000642`80167393 4983f903 cmp r9,3
00000642`80167397 7627 jbe 00000642`801673c0
00000642`80167399 f30f10511c movss xmm2,dword ptr [rcx+1Ch]
00000642`8016739e 483d03000000 cmp rax,3
00000642`801673a4 761a jbe 00000642`801673c0
00000642`801673a6 f30f10421c movss xmm0,dword ptr [rdx+1Ch]
00000642`801673ab f30f59c2 mulss xmm0,xmm2
00000642`801673af f30f58c1 addss xmm0,xmm1
00000642`801673b3 f3410f114040 movss dword ptr [r8+40h],xmm0
...
00000642`801673bd f3c3 rep ret
00000642`801673bf 90 nop
00000642`801673c0 e88b9f8aff call mscorwks!JIT_RngChkFail (00000642`7fa11350)
Wow! Lots of conditionals there. It's not vectorized either, but we don't expect it to be; automatic vectorization is a hit-and-miss type of deal with most optimizing compilers (like the Intel one), and vectorizing in the runtime JIT would take up far too much time. This method is inlined for us (thankfully), but we see that it is littered with conditionals and jumps. So where are they jumping to? Well, they are actually ending up just after the end of the method. Note the nop instruction that causes the jump destination to be paragraph aligned; that is intentional. As you can probably guess based on the name of the jump destination, those conditionals are checking the array bounds, stored in r9 and rax, against the indices being used. Those jumps aren't actually that friendly for branch prediction, but for the most part they won't hamper the speed of this method much, though they are an additional cost. Unfortunately, they are rather problematic for the matrix version, and tend to cost quite a bit in performance.
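For reference, each of those checks is the moral equivalent of the following C#. This is illustrative only; the JIT emits the comparison inline and branches to JIT_RngChkFail rather than calling a helper like this one.

using System;

static class RangeCheckIllustration
{
    // Every array access v[i] is guarded by a comparison of the index against
    // the array's stored length before the load is allowed to happen.
    public static float CheckedLoad(float[] array, int index)
    {
        if ((uint)index >= (uint)array.Length)
            throw new IndexOutOfRangeException();
        return array[index];
    }
}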
We also can see that in x64 mode the JIT will use SSE2 for floating point operations. This is quite nice, but it does have some interesting consequences: for instance, comparing floating point numbers generated using the FPU against those generated using SSE2 will more than likely fail, EVEN IF you truncate them to their appropriate sizes. The reason for this is that the XMM registers (when using the single versions of the instructions and not the double ones) store the floating point values as exactly 32-bit floats. The FPU, however, will expand them to 80-bit floats, which means that operations on those 80-bit floats before truncation can affect the lower bits of the 32-bit result in a manner that leaves them differing in the lower portions. If you are wondering when this might become an issue, imagine the problems of running a managed networked game where you have 64-bit and 32-bit clients all sending packets to the server. This is just another reason why you should be using deltas for comparison of floats. Another thing to note is that with the addition of SSE2 support came the ability to use instructions that save us loads and stores, such as the cvtss2sd and cvtsd2ss instructions, which perform single-to-double and double-to-single conversions respectively.
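A delta comparison of the sort recommended above is a one-liner; here is a minimal sketch (the tolerance value is an assumption and should be chosen to match the scale of the values being compared):

using System;

static class FloatCompare
{
    // Treats two floats as equal when they differ by no more than 'tolerance',
    // sidestepping FPU-vs-SSE differences in the low bits.
    public static bool NearlyEqual(float a, float b, float tolerance)
    {
        return Math.Abs(a - b) <= tolerance;
    }
}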
Examining the Call Stack

Of course, there is also the question of exactly what our program goes through to call our unmanaged methods. First off, the JIT will have to generate several marshalling stubs (to deal with any non-blittable types, although in this case all of the passed types are blittable), along with the security demands. The total number of machine instructions for these stubs is around 10-30; nevertheless, they aren't inlinable and end up having to be created at runtime. The extra overhead of these calls can add up to quite a bit. First up we'll look at the pinvoke and the delegate stacks:

000006427f66bd14 ManagedMathLib!matrix_mul
0000064280168b85 mscorwks!DoNDirectCall__PatchGetThreadCall+0x78
0000064280168ccc ManagedMathLib!DomainBoundILStubClass.IL_STUB(Single[], Single[], Single[])+0xb5
0000064280168a0f PInvokeTest!SecurityILStubClass.IL_STUB(Single[], Single[], Single[])+0x5c
000006428016893e PInvokeTest!PInvokeTest.Program+c__DisplayClass8.b__0()+0x1f
0000064280167ca1 PInvokeTest!PInvokeTest.Program.TimeTest(TestMethod, Int32)+0x6e
000006427f66c5e2 PInvokeTest!PInvokeTest.Program.Main(System.String[])+0x591

000006427f66bd14 ManagedMathLib!matrix_mul
0000064280168465 mscorwks!DoNDirectCall__PatchGetThreadCall+0x78
00000642801685c1 ManagedMathLib!DomainBoundILStubClass.IL_STUB(Single[], Single[], Single[])+0xb5
0000064280168945 PInvokeTest!SecurityILStubClass.IL_STUB(Single[], Single[], Single[])+0x51
0000064280167d59 PInvokeTest!PInvokeTest.Program.TimeTest(TestMethod, Int32)+0x75
000006427f66c5e2 PInvokeTest!PInvokeTest.Program.Main(System.String[])+0x649
We can see the two stubs that were created, along with the last method called, DoNDirectCall__PatchGetThreadCall, which actually does the work of calling to our unmanaged function. Exactly what it does is probably what the name says, although I haven't actually dug in and tried to find out what's going on in its internals. One important thing to notice is the PInvokeTest!PInvokeTest.Program+c__DisplayClass8.b__0() call, which is actually a delegate used to call to our unmanaged method (passed in to TimeTest). By using the delegate to call the matrix multiplication function, the JIT was able to eliminate the calls entirely. Other than that, the contents of the two sets of stubs are practically identical. The security stub actually asserts that we have the right to call to unmanaged code; as this is a security demand and can change at runtime, it cannot be eliminated. Calling to our unmanaged function from the managed DLL is up next, and it turns out that this is also the most direct call:

000006427f66bf32 ManagedMathLib!matrix_mul
0000064280169601 mscorwks!DoNDirectCallWorker+0x62
00000642801694ef ManagedMathLib!ManagedMathLib.ManagedMath.MatrixMul(Single[], Single[], Single[])+0xd1
0000064280168945 PInvokeTest!PInvokeTest.Program+c__DisplayClass8.b__3()+0x1f
0000064280167ecf PInvokeTest!PInvokeTest.Program.TimeTest(TestMethod, Int32)+0x75
000006427f66c5e2 PInvokeTest!PInvokeTest.Program.Main(System.String[])+0x7bf
As we can see, the only real work that is done to call our unmanaged method is the call to DoNDirectCallWorker. Digging around in that method we find that it is basically a wrapper that saves registers, sets up some registers and then dispatches to the unmanaged function. Upon returning it restores the registers and returns to the caller. There is no dynamic method construction, nor does this require any extra overhead on our end. In fact, one could say that the code is about as fast as we can expect it to be for a managed-to-unmanaged transition. Looking at the difference between the original unmanaged inner product call and the new version (which takes a pointer to the destination float), being made from the managed DLL, we can see a huge difference:

000006427f66bf32 ManagedMathLib!inner_product
0000064280169bd0 mscorwks!DoNDirectCallWorker+0x62
0000064280169acf ManagedMathLib!ManagedMathLib.ManagedMath.InnerProduct(Single[], Single[], Single ByRef)+0xc0
0000064280168955 PInvokeTest!PInvokeTest.Program+c__DisplayClass8.b__7()+0x1f
00000642801681c5 PInvokeTest!PInvokeTest.Program.TimeTest(TestMethod, Int32)+0x75
000006427f66c5e2 PInvokeTest!PInvokeTest.Program.Main(System.String[])+0xab5

000006427f66bd14 ManagedMathLib!inner_product
0000064280169ca3 mscorwks!DoNDirectCall__PatchGetThreadCall+0x78
0000064280169ba0 ManagedMathLib!DomainBoundILStubClass.IL_STUB(Single*, Single*)+0x43
0000064280169b00 ManagedMathLib!ManagedMathLib.ManagedMath.InnerProduct(Single[], Single[])+0x50
000006428016893e PInvokeTest!PInvokeTest.Program+c__DisplayClass8.b__7()+0x20
00000642801681c5 PInvokeTest!PInvokeTest.Program.TimeTest(TestMethod, Int32)+0x6e
000006427f66c5e2 PInvokeTest!PInvokeTest.Program.Main(System.String[])+0xab5
Notice the second call stack has the marshalling stub (also note the parameters to the stub). Returning value types has all sorts of interesting consequences. By changing the signature to write out to a float (in the case of the managed DLL it uses an out parameter), we eliminate the marshalling stub entirely. This improves performance by a decent bit, but nowhere near enough to make up for the call in the first place. The managed inner product is still significantly faster.

And then came NGEN
So, we've gone through and optimized our managed application, and yet it still runs too slow. We contemplate the necessity of moving some code over to the unmanaged world and shudder at the implications. Security would be shot, bugs abound... what to do! But then we remember that there's yet one more option: NGEN!
Running NGEN on our test executable prejitted the whole thing, even methods that eventually ended up being inlined. So, what did it do to our managed inner product? Well first we'll look at the actual method that got prejitted:

PInvokeTest.Program.InnerProduct2(Single[], Single[], Single ByRef)
Begin 0000064288003290, size b0
00000642`88003290 4883ec28 sub rsp,28h
00000642`88003294 4c8bc9 mov r9,rcx
00000642`88003297 498b4108 mov rax,qword ptr [r9+8]
00000642`8800329b 4885c0 test rax,rax
00000642`8800329e 0f8696000000 jbe PInvokeTest_ni!COM+_Entry_Point (PInvokeTest_ni+0x333a) (00000642`8800333a)
00000642`880032a4 33c9 xor ecx,ecx
00000642`880032a6 488b4a08 mov rcx,qword ptr [rdx+8]
00000642`880032aa 4885c9 test rcx,rcx
00000642`880032ad 0f8687000000 jbe PInvokeTest_ni!COM+_Entry_Point (PInvokeTest_ni+0x333a) (00000642`8800333a)
00000642`880032b3 4533d2 xor r10d,r10d
00000642`880032b6 483d01000000 cmp rax,1
00000642`880032bc 767c jbe PInvokeTest_ni!COM+_Entry_Point (PInvokeTest_ni+0x333a) (00000642`8800333a)
00000642`880032be 41ba01000000 mov r10d,1
00000642`880032c4 4883f901 cmp rcx,1
00000642`880032c8 7670 jbe PInvokeTest_ni!COM+_Entry_Point (PInvokeTest_ni+0x333a) (00000642`8800333a)
00000642`880032ca 41ba01000000 mov r10d,1
00000642`880032d0 483d02000000 cmp rax,2
00000642`880032d6 7662 jbe PInvokeTest_ni!COM+_Entry_Point (PInvokeTest_ni+0x333a) (00000642`8800333a)
00000642`880032d8 41ba02000000 mov r10d,2
00000642`880032de 4883f902 cmp rcx,2
00000642`880032e2 7656 jbe PInvokeTest_ni!COM+_Entry_Point (PInvokeTest_ni+0x333a) (00000642`8800333a)
00000642`880032e4 483d03000000 cmp rax,3
00000642`880032ea 764e jbe PInvokeTest_ni!COM+_Entry_Point (PInvokeTest_ni+0x333a) (00000642`8800333a)
00000642`880032ec b803000000 mov eax,3
00000642`880032f1 4883f903 cmp rcx,3
00000642`880032f5 7643 jbe PInvokeTest_ni!COM+_Entry_Point (PInvokeTest_ni+0x333a) (00000642`8800333a)
00000642`880032f7 f30f104a14 movss xmm1,dword ptr [rdx+14h]
00000642`880032fc f3410f594914 mulss xmm1,dword ptr [r9+14h]
00000642`88003302 f30f104210 movss xmm0,dword ptr [rdx+10h]
00000642`88003307 f3410f594110 mulss xmm0,dword ptr [r9+10h]
00000642`8800330d f30f58c8 addss xmm1,xmm0
00000642`88003311 f30f104218 movss xmm0,dword ptr [rdx+18h]
00000642`88003316 f3410f594118 mulss xmm0,dword ptr [r9+18h]
00000642`8800331c f30f58c8 addss xmm1,xmm0
00000642`88003320 f30f10421c movss xmm0,dword ptr [rdx+1Ch]
00000642`88003325 f3410f59411c mulss xmm0,dword ptr [r9+1Ch]
00000642`8800332b f30f58c8 addss xmm1,xmm0
00000642`8800332f f3410f1108 movss dword ptr [r8],xmm1
00000642`88003334 4883c428 add rsp,28h
00000642`88003338 f3c3 rep ret
00000642`8800333a e811e0a0f7 call mscorwks!JIT_RngChkFail (00000642`7fa11350)
00000642`8800333f 90 nop
Interesting results, eh? First off, all of the checks are right up front, and ignoring the stack frames we can see exactly what will be inlined. Some other things to note: this method appears a lot better than before, with all of the branches right up at the top where one would assume branch prediction can best deal with them (the registers never change and are being compared to constants). Nevertheless, there are some oddities in this code; for instance there appear to be some extraneous instructions like mov eax,3. Yeah, don't ask me. Nevertheless, the code is clearly superior to its previous form, and in fact the matrix version is equally superior, with the range checks being spaced out significantly more (and a bunch are done right up front as well). Of course, the question now is: how much does this help our performance? First up we'll examine some results from the new code base, and then some from the NGEN results on the same code base.

Count: 50
PInvoke MatrixMul : 00:00:07.6456226 Average: 00:00:00.1529124
Delegate MatrixMul: 00:00:06.6500307 Average: 00:00:00.1330006
Managed MatrixMul: 00:00:05.5783511 Average: 00:00:00.1115670
Internal MatrixMul: 00:00:04.5377141 Average: 00:00:00.0907542
PInvoke Inner Product: 00:00:05.4466987 Average: 00:00:00.1089339
Delegate Inner Product: 00:00:04.5001885 Average: 00:00:00.0900037
Managed Inner Product: 00:00:00.5535891 Average: 00:00:00.0110717
Internal Inner Product: 00:00:02.2694728 Average: 00:00:00.0453894

Count: 10
PInvoke MatrixMul : 00:00:01.5706254 Average: 00:00:00.1570625
Delegate MatrixMul: 00:00:01.2689247 Average: 00:00:00.1268924
Managed MatrixMul: 00:00:01.1501118 Average: 00:00:00.1150111
Internal MatrixMul: 00:00:00.9302144 Average: 00:00:00.0930214
PInvoke Inner Product: 00:00:01.0198933 Average: 00:00:00.1019893
Delegate Inner Product: 00:00:00.8538827 Average: 00:00:00.0853882
Managed Inner Product: 00:00:00.0987369 Average: 00:00:00.0098736
Internal Inner Product: 00:00:00.4287660 Average: 00:00:00.0428766
All in all, our performance changes have helped out the managed inner product a decent amount, although even the unmanaged calls managed to get a bit of a boost. Now for the NGEN results:

Count: 50
PInvoke MatrixMul : 00:00:07.5788052 Average: 00:00:00.1515761
Delegate MatrixMul: 00:00:06.2202549 Average: 00:00:00.1244050
Managed MatrixMul: 00:00:04.0376665 Average: 00:00:00.0807533
Internal MatrixMul: 00:00:04.5778189 Average: 00:00:00.0915563
PInvoke Inner Product: 00:00:05.2785764 Average: 00:00:00.1055715
Delegate Inner Product: 00:00:04.1814388 Average: 00:00:00.0836287
Managed Inner Product: 00:00:00.5579279 Average: 00:00:00.0111585
Internal Inner Product: 00:00:02.2419279 Average: 00:00:00.0448385

Count: 10
PInvoke MatrixMul : 00:00:01.3822036 Average: 00:00:00.1382203
Delegate MatrixMul: 00:00:01.1436108 Average: 00:00:00.1143610
Managed MatrixMul: 00:00:00.7386742 Average: 00:00:00.0738674
Internal MatrixMul: 00:00:00.8427460 Average: 00:00:00.0842746
PInvoke Inner Product: 00:00:00.9507331 Average: 00:00:00.0950733
Delegate Inner Product: 00:00:00.7428082 Average: 00:00:00.0742808
Managed Inner Product: 00:00:00.1005084 Average: 00:00:00.0100508
Internal Inner Product: 00:00:00.4025611 Average: 00:00:00.0402561
So, now we can see that our unmanaged matrix multiplication doesn't offer any advantages over the managed version; in fact it's actually SLOWER than the managed version! We also can see that the unmanaged invocations benefitted from the NGEN process, as their managed calls were also optimized somewhat, although the stub wrappers are still there and hence still add their overhead. Another thing we note is that the inner product function appears to have slowed down just a bit; this might be nothing, or it might be due to machine load, or it might genuinely be slower. I'm tempted to say that it's actually slower now, though.

Conclusion
You may recall that this was all sparked by a discussion I had way back when about comparing managed and unmanaged benchmarks and the disadvantages of just setting the /clr flag. I've gone a bit past that, though, in looking at our managed resources and optimized unmanaged resources and when it is actually beneficial to call into unmanaged code. It is still beneficial to do so, but only for operations that are sufficiently taxing to bother with. In this case our matrix code, which in a pure JIT situation clearly beat out the JIT-produced code, gets beaten by the NGENed managed version. So what is sufficiently taxing then? Well, set processing might be taxing enough, that is: applying a set of vectorized operations to a collection of objects. But the reality is, you MUST profile first before you can be sure that optimizations of that sort are anywhere near what you need; if you just assume they are, you're probably mistaken.
On a final note, the x86 version also performs better when NGENed than the native version, although in a surprise jump, the delegates actually cost significantly more:

Count: 50
PInvoke MatrixMul : 00:00:07.9897235 Average: 00:00:00.1597944
Delegate MatrixMul: 00:00:27.2561396 Average: 00:00:00.5451227
Managed MatrixMul: 00:00:03.5224029 Average: 00:00:00.0704480
Internal MatrixMul: 00:00:04.5232549 Average: 00:00:00.0904650
PInvoke Inner Product: 00:00:05.5799834 Average: 00:00:00.1115996
Delegate Inner Product: 00:00:29.5660003 Average: 00:00:00.5913200
Managed Inner Product: 00:00:00.5755690 Average: 00:00:00.0115113
Internal Inner Product: 00:00:01.8218949 Average: 00:00:00.0364378
Exactly why this is I haven't investigated, and perhaps I will next time.
Sources for the new inner product functions:

void __declspec(dllexport) inner_product(float const* v1, float const* v2, float* out) {
    __m128 a = _mm_mul_ps(_mm_loadu_ps(v1), _mm_loadu_ps(v2));
    a = _mm_add_ps(a, _mm_shuffle_ps(a, a, _MM_SHUFFLE(1, 0, 3, 2)));
    _mm_store_ss(out, _mm_add_ps(a, _mm_shuffle_ps(a, a, _MM_SHUFFLE(0, 1, 2, 3))));
}

static void InnerProduct(array<float>^ v1, array<float>^ v2, [Runtime::InteropServices::Out] float% result) {
    pin_ptr<float> pv1 = &v1[0];
    pin_ptr<float> pv2 = &v2[0];
    pin_ptr<float> out = &result;
    inner_product(pv1, pv2, out);
}

public static void InnerProduct2(float[] v1, float[] v2, out float f) {
    f = v1[0] * v2[0] + v1[1] * v2[1] + v1[2] * v2[2] + v1[3] * v2[3];
}


Playing With The .NET JIT Part 3

Integrating unmanaged code into the managed platform is one of the problem areas of the managed world. Oftentimes the exact costs of calling into unmanaged code are unknown. This obviously leads to some confusion as to when it is appropriate to mix in unmanaged code to help improve the performance of our application.

PInvoke
There are three ways to access an unmanaged function from managed code. The first is to use the PInvoke capabilities of the language; in C# this is done by declaring a method with external linkage and indicating (using the DllImport attribute) in which DLL the method may be found. The second way is to obtain a pointer to the function (using LoadLibrary/GetProcAddress/FreeLibrary) and marshal that pointer to a managed delegate using Marshal.GetDelegateForFunctionPointer. Finally, you can write a managed wrapper around the function using C++/CLI and invoke that wrapper method, which will in turn call the unmanaged method.
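As a rough sketch, the first two options look like this in C#. The DLL name, export name, and calling convention here are assumptions made for illustration; the real declarations live in the test project whose source is linked at the end of this entry.

using System;
using System.Runtime.InteropServices;

static class InteropSamples
{
    // Option 1: PInvoke. Declares the native export directly.
    [DllImport("ManagedMathLib.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern float inner_product(float[] v1, float[] v2);

    // Option 2: a delegate marshalled from a raw function pointer.
    [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
    public delegate float InnerProductDelegate(float[] v1, float[] v2);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr LoadLibrary(string fileName);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr GetProcAddress(IntPtr module, string procName);

    public static InnerProductDelegate BindInnerProduct()
    {
        IntPtr module = LoadLibrary("ManagedMathLib.dll");
        IntPtr proc = GetProcAddress(module, "inner_product");
        return (InnerProductDelegate)Marshal.GetDelegateForFunctionPointer(
            proc, typeof(InnerProductDelegate));
    }

    // Option 3, the C++/CLI wrapper, is shown later in this post
    // (ManagedMathLib.ManagedMath.InnerProduct).
}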
For the purposes of this post we'll be using two mathematical sample functions. The first is the standard inner product on R3 (aka the dot product), and the second is a 4x4 matrix multiplication. We'll be comparing two implementations of each: the first a trivial managed implementation, and the second an SSE2-optimized version. Thanks must be given to Arseny Kapoulkine for the SSE2 version of the matrix multiplication.
First up are the implementations of the inner product functions. It should be noted that I'll be doing the profiling in x64 mode; however, the results are similar (albeit a bit slower) for x86.

public static float InnerProduct2(float[] v1, float[] v2) {
    return v1[0] * v2[0] + v1[1] * v2[1] + v1[2] * v2[2] + v1[3] * v2[3];
}

float __declspec(dllexport) inner_product(float const* v1, float const* v2) {
    float result;
    __m128 a = _mm_mul_ps(_mm_loadu_ps(v1), _mm_loadu_ps(v2));
    a = _mm_add_ps(a, _mm_shuffle_ps(a, a, _MM_SHUFFLE(1, 0, 3, 2)));
    _mm_store_ss(&result, _mm_add_ps(a, _mm_shuffle_ps(a, a, _MM_SHUFFLE(0, 1, 2, 3))));
    return result;
}
One thing that should be noted about these implementations is that they both operate solely on arrays of floats. InnerProduct2 is inlineable since it's only 23 bytes long and takes reference types as parameters. The unmanaged inner product could also be implemented using the SSE3 haddps instruction; however, I decided to keep it as processor neutral as possible by using only SSE2 instructions.
The implementations of the matrix multiplication vary quite significantly as well. The managed version is the trivial implementation, but its expansion into machine code is quite long. The unmanaged version is an SSE2-optimized one; the raw performance boost of using it is quite significant.

public static void MatrixMul2(float[] m1, float[] m2, float[] o) {
    o[0] = m1[0] * m2[0] + m1[1] * m2[4] + m1[2] * m2[8] + m1[3] * m2[12];
    o[1] = m1[0] * m2[1] + m1[1] * m2[5] + m1[2] * m2[9] + m1[3] * m2[13];
    o[2] = m1[0] * m2[2] + m1[1] * m2[6] + m1[2] * m2[10] + m1[3] * m2[14];
    o[3] = m1[0] * m2[3] + m1[1] * m2[7] + m1[2] * m2[11] + m1[3] * m2[15];
    o[4] = m1[4] * m2[0] + m1[5] * m2[4] + m1[6] * m2[8] + m1[7] * m2[12];
    o[5] = m1[4] * m2[1] + m1[5] * m2[5] + m1[6] * m2[9] + m1[7] * m2[13];
    o[6] = m1[4] * m2[2] + m1[5] * m2[6] + m1[6] * m2[10] + m1[7] * m2[14];
    o[7] = m1[4] * m2[3] + m1[5] * m2[7] + m1[6] * m2[11] + m1[7] * m2[15];
    o[8] = m1[8] * m2[0] + m1[9] * m2[4] + m1[10] * m2[8] + m1[11] * m2[12];
    o[9] = m1[8] * m2[1] + m1[9] * m2[5] + m1[10] * m2[9] + m1[11] * m2[13];
    o[10] = m1[8] * m2[2] + m1[9] * m2[6] + m1[10] * m2[10] + m1[11] * m2[14];
    o[11] = m1[8] * m2[3] + m1[9] * m2[7] + m1[10] * m2[11] + m1[11] * m2[15];
    o[12] = m1[12] * m2[0] + m1[13] * m2[4] + m1[14] * m2[8] + m1[15] * m2[12];
    o[13] = m1[12] * m2[1] + m1[13] * m2[5] + m1[14] * m2[9] + m1[15] * m2[13];
    o[14] = m1[12] * m2[2] + m1[13] * m2[6] + m1[14] * m2[10] + m1[15] * m2[14];
    o[15] = m1[12] * m2[3] + m1[13] * m2[7] + m1[14] * m2[11] + m1[15] * m2[15];
}

void __declspec(dllexport) matrix_mul(float const* m1, float const* m2, float* out) {
    __m128 r;
    __m128 col1 = _mm_loadu_ps(m2);
    __m128 col2 = _mm_loadu_ps(m2 + 4);
    __m128 col3 = _mm_loadu_ps(m2 + 8);
    __m128 col4 = _mm_loadu_ps(m2 + 12);

    __m128 row1 = _mm_loadu_ps(m1);
    r = _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row1, row1, _MM_SHUFFLE(0, 0, 0, 0)), col1),
        _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row1, row1, _MM_SHUFFLE(1, 1, 1, 1)), col2),
        _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row1, row1, _MM_SHUFFLE(2, 2, 2, 2)), col3),
        _mm_mul_ps(_mm_shuffle_ps(row1, row1, _MM_SHUFFLE(3, 3, 3, 3)), col4))));
    _mm_storeu_ps(out, r);

    __m128 row2 = _mm_loadu_ps(m1 + 4);
    r = _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row2, row2, _MM_SHUFFLE(0, 0, 0, 0)), col1),
        _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row2, row2, _MM_SHUFFLE(1, 1, 1, 1)), col2),
        _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row2, row2, _MM_SHUFFLE(2, 2, 2, 2)), col3),
        _mm_mul_ps(_mm_shuffle_ps(row2, row2, _MM_SHUFFLE(3, 3, 3, 3)), col4))));
    _mm_storeu_ps(out + 4, r);

    __m128 row3 = _mm_loadu_ps(m1 + 8);
    r = _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row3, row3, _MM_SHUFFLE(0, 0, 0, 0)), col1),
        _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row3, row3, _MM_SHUFFLE(1, 1, 1, 1)), col2),
        _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row3, row3, _MM_SHUFFLE(2, 2, 2, 2)), col3),
        _mm_mul_ps(_mm_shuffle_ps(row3, row3, _MM_SHUFFLE(3, 3, 3, 3)), col4))));
    _mm_storeu_ps(out + 8, r);

    __m128 row4 = _mm_loadu_ps(m1 + 12);
    r = _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row4, row4, _MM_SHUFFLE(0, 0, 0, 0)), col1),
        _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row4, row4, _MM_SHUFFLE(1, 1, 1, 1)), col2),
        _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row4, row4, _MM_SHUFFLE(2, 2, 2, 2)), col3),
        _mm_mul_ps(_mm_shuffle_ps(row4, row4, _MM_SHUFFLE(3, 3, 3, 3)), col4))));
    _mm_storeu_ps(out + 12, r);
}
It is trivially obvious that the managed version of the matrix multiplication cannot be inlined. The overhead of the function call, however, is the least of your worries (it is the smallest cost of the entire method). The unmanaged version is a nicely optimized SSE2 method that requires only a minimal number of loads and stores from main memory, and those loads and stores are reasonably cache friendly (the P4 will prefetch 128 bytes of memory).

PInvoke
Of course, the question is, how do these perform against each other when called from a managed application? The profiling setup is quite simple: it runs the methods against a set of randomly generated matrices and vectors a million times, repeats that test several more times (100 in this case), and averages the results. Full optimizations were turned on for both the unmanaged and managed tests. The Internal calls are made from a managed class that directly calls the unmanaged methods. Both the managed wrapper and the unmanaged methods are hosted in the same DLL (source for the full DLL is at the end of this entry).

PInvoke MatrixMul : 00:00:15.0203285 Average: 00:00:00.1502032
Delegate MatrixMul: 00:00:13.1004306 Average: 00:00:00.1310043
Managed MatrixMul: 00:00:10.2809715 Average: 00:00:00.1028097
Internal MatrixMul: 00:00:08.8992407 Average: 00:00:00.0889924
PInvoke Inner Product: 00:00:10.6779944 Average: 00:00:00.1067799
Delegate Inner Product: 00:00:09.3359882 Average: 00:00:00.0933598
Managed Inner Product: 00:00:01.3460812 Average: 00:00:00.0134608
Internal Inner Product: 00:00:05.6842336 Average: 00:00:00.0568423
The first thing to note is that the PInvoke calls for both the matrix multiplication and the inner product were the slowest. The delegate calls were only slightly faster than the PInvoke calls. As we move into managed territory we find that the results begin to diverge: the managed matrix multiplication is slower than the internal matrix multiplication, but the managed inner product is several times faster than the internal one.
Part of the reason behind this divergence lies in the invocation framework. There is a cost to calling unmanaged methods from managed code, as each call must be wrapped to perform operations such as fixing (pinning) any managed resources, performing marshalling for non-blittable types, and finally calling the actual native method. After the method returns, further marshalling of the return type may be required, along with checks on the condition of the stack and exception checks (SEH exceptions are caught and wrapped in the SEHException class). Even the internal calls to the unmanaged methods require some amount of this, although the actual marshalling requirements are avoided, as are some of the other costs. The result is that the costs add up, and in the case of the inner product the additional cost outweighed the work done by the method itself (which is fairly trivial). The picture is different, on average, for the matrix multiplication: the additional cost of the call does not add a significant amount of overhead compared to the body of the method, which executes faster than the managed matrix multiplication due to vectorization.
Further testing with counts of 50 and 25 reveals similar results, although the managed matrix multiplication begins to approach the performance of the internal one. Even at a count of 1 (that's one million matrix multiplications), however, the internal matrix multiplication is faster than the managed version.

Count = 50
PInvoke MatrixMul : 00:00:07.4730356 Average: 00:00:00.1494607
Delegate MatrixMul: 00:00:06.4519274 Average: 00:00:00.1290385
Managed MatrixMul: 00:00:05.1662482 Average: 00:00:00.1033249
Internal MatrixMul: 00:00:04.3371530 Average: 00:00:00.0867430
PInvoke Inner Product: 00:00:05.3891030 Average: 00:00:00.1077820
Delegate Inner Product: 00:00:04.7625597 Average: 00:00:00.0952511
Managed Inner Product: 00:00:00.6791549 Average: 00:00:00.0135830
Internal Inner Product: 00:00:02.6719175 Average: 00:00:00.0534383

Count = 25
PInvoke MatrixMul : 00:00:03.7432932 Average: 00:00:00.1497317
Delegate MatrixMul: 00:00:03.2074834 Average: 00:00:00.1282993
Managed MatrixMul: 00:00:02.6200096 Average: 00:00:00.1048003
Internal MatrixMul: 00:00:02.2144342 Average: 00:00:00.0885773
PInvoke Inner Product: 00:00:02.8778559 Average: 00:00:00.1151142
Delegate Inner Product: 00:00:02.0178957 Average: 00:00:00.0807158
Managed Inner Product: 00:00:00.3385675 Average: 00:00:00.0135427
Internal Inner Product: 00:00:01.4391529 Average: 00:00:00.0575661

Count = 5
PInvoke MatrixMul : 00:00:00.7642981 Average: 00:00:00.1528596
Delegate MatrixMul: 00:00:00.6407667 Average: 00:00:00.1281533
Managed MatrixMul: 00:00:00.5231416 Average: 00:00:00.1046283
Internal MatrixMul: 00:00:00.4458765 Average: 00:00:00.0891753
PInvoke Inner Product: 00:00:00.5702666 Average: 00:00:00.1140533
Delegate Inner Product: 00:00:00.4122217 Average: 00:00:00.0824443
Managed Inner Product: 00:00:00.0683842 Average: 00:00:00.0136768
Internal Inner Product: 00:00:00.2899304 Average: 00:00:00.0579860

Count = 1
PInvoke MatrixMul : 00:00:00.1476958 Average: 00:00:00.1476958
Delegate MatrixMul: 00:00:00.1337818 Average: 00:00:00.1337818
Managed MatrixMul: 00:00:00.1155993 Average: 00:00:00.1155993
Internal MatrixMul: 00:00:00.0919538 Average: 00:00:00.0919538
PInvoke Inner Product: 00:00:00.1155769 Average: 00:00:00.1155769
Delegate Inner Product: 00:00:00.0906768 Average: 00:00:00.0906768
Managed Inner Product: 00:00:00.0155480 Average: 00:00:00.0155480
Internal Inner Product: 00:00:00.0653527 Average: 00:00:00.0653527

Conclusion
Clearly we should reserve unmanaged operations for longer-running methods, where the cost of the managed wrapper is negligible compared to the cost of the method itself. Even heavily optimized methods pay a significant price in the wrapping code, so trivial optimizations are easily overshadowed by that cost. It is best to use unmanaged operations wrapped in a C++/CLI wrapper (and preferably the wrapper will be part of the same library that the operations are in). Next time we'll look at the assembly produced by the JIT for these methods under varying circumstances.
Source for Managed DLL:

#pragma managed(push, off)
extern "C" {
    float __declspec(dllexport) inner_product(float const* v1, float const* v2) {
        float result;
        __m128 a = _mm_mul_ps(_mm_loadu_ps(v1), _mm_loadu_ps(v2));
        a = _mm_add_ps(a, _mm_shuffle_ps(a, a, _MM_SHUFFLE(1, 0, 3, 2)));
        _mm_store_ss(&result, _mm_add_ps(a, _mm_shuffle_ps(a, a, _MM_SHUFFLE(0, 1, 2, 3))));
        return result;
    }

    void __declspec(dllexport) matrix_mul(float const* m1, float const* m2, float* out) {
        __m128 r;
        __m128 col1 = _mm_loadu_ps(m2);
        __m128 col2 = _mm_loadu_ps(m2 + 4);
        __m128 col3 = _mm_loadu_ps(m2 + 8);
        __m128 col4 = _mm_loadu_ps(m2 + 12);
        __m128 row1 = _mm_loadu_ps(m1);
        r = _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row1, row1, _MM_SHUFFLE(0, 0, 0, 0)), col1),
            _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row1, row1, _MM_SHUFFLE(1, 1, 1, 1)), col2),
            _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row1, row1, _MM_SHUFFLE(2, 2, 2, 2)), col3),
            _mm_mul_ps(_mm_shuffle_ps(row1, row1, _MM_SHUFFLE(3, 3, 3, 3)), col4))));
        _mm_storeu_ps(out, r);
        __m128 row2 = _mm_loadu_ps(m1 + 4);
        r = _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row2, row2, _MM_SHUFFLE(0, 0, 0, 0)), col1),
            _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row2, row2, _MM_SHUFFLE(1, 1, 1, 1)), col2),
            _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row2, row2, _MM_SHUFFLE(2, 2, 2, 2)), col3),
            _mm_mul_ps(_mm_shuffle_ps(row2, row2, _MM_SHUFFLE(3, 3, 3, 3)), col4))));
        _mm_storeu_ps(out + 4, r);
        __m128 row3 = _mm_loadu_ps(m1 + 8);
        r = _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row3, row3, _MM_SHUFFLE(0, 0, 0, 0)), col1),
            _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row3, row3, _MM_SHUFFLE(1, 1, 1, 1)), col2),
            _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row3, row3, _MM_SHUFFLE(2, 2, 2, 2)), col3),
            _mm_mul_ps(_mm_shuffle_ps(row3, row3, _MM_SHUFFLE(3, 3, 3, 3)), col4))));
        _mm_storeu_ps(out + 8, r);
        __m128 row4 = _mm_loadu_ps(m1 + 12);
        r = _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row4, row4, _MM_SHUFFLE(0, 0, 0, 0)), col1),
            _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row4, row4, _MM_SHUFFLE(1, 1, 1, 1)), col2),
            _mm_add_ps(_mm_mul_ps(_mm_shuffle_ps(row4, row4, _MM_SHUFFLE(2, 2, 2, 2)), col3),
            _mm_mul_ps(_mm_shuffle_ps(row4, row4, _MM_SHUFFLE(3, 3, 3, 3)), col4))));
        _mm_storeu_ps(out + 12, r);
    }
}
#pragma managed(pop)

using namespace System;

namespace ManagedMathLib {
    public ref class ManagedMath {
    public:
        static IntPtr InnerProductPtr = IntPtr(inner_product);
        static IntPtr MatrixMulPtr = IntPtr(matrix_mul);

        static float InnerProduct(array<float>^ v1, array<float>^ v2) {
            pin_ptr<float> pv1 = &v1[0];
            pin_ptr<float> pv2 = &v2[0];
            return inner_product(pv1, pv2);
        }

        static void MatrixMul(array<float>^ m1, array<float>^ m2, array<float>^ out) {
            pin_ptr<float> pm1 = &m1[0];
            pin_ptr<float> pm2 = &m2[0];
            pin_ptr<float> outp = &out[0];
            matrix_mul(pm1, pm2, outp);
        }
    };
}

Source

Washu


 

Playing With The .NET JIT Part 2

Previously I discussed various potential issues the x86 JIT has with inlining non-trivial methods and functions taking or returning value types. In this entry I hope to cover some potential pitfalls facing would-be optimizers, along with some unexpected optimizations that do take place.

Optimizations That Aren't
It is not that uncommon to see people advocating the use of unsafe code as a means of producing "optimized" code in the managed environment. The idea is a simple one: by getting down to the metal with pointers and all that fun stuff, you can somehow produce code that will be "optimized" in ways that typical managed code cannot be.
Unsafe code does not allow you to manipulate pointers to managed objects in whatever manner you please. Certain steps have to be taken to ensure that your operations are safe with regard to the managed heap. Just because your code is marked as "unsafe" doesn't mean that it is free to do what it wants. For example, you cannot assign a pointer the address of a managed object without first pinning the object. Pointers to objects are not tracked by the GC, so should you obtain a pointer to an object and then attempt to use it, you could end up accessing a now-collected region of memory. Alternatively, you could obtain a pointer to an object, but when the GC runs your object could be shuffled around on the heap. This shuffling would invalidate your pointer, and since pointers are not tracked by the GC it would not be updated (while references to objects are updated). Pinning objects solves this problem, which is why you are only allowed to take the address of an object that has been pinned. In essence, a pinned object cannot be moved or collected by the GC until it is unpinned. Pinning is typically done through the use of the fixed keyword in C# or the GCHandle structure.
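C++/CLI expresses the same pinning requirement with pin_ptr, as the ManagedMathLib wrapper shown earlier in this journal does. Here is a minimal sketch of the idea, where consume_floats stands in for some hypothetical native function:

// Hypothetical native function the managed code wants to call.
void consume_floats(float const* data, int count);

void UseRaw(array<float>^ data) {
    pin_ptr<float> p = &data[0];     // data is pinned: the GC can neither move nor collect it
    consume_floats(p, data->Length); // safe to hand the raw pointer to native code
}                                    // p goes out of scope here and data is unpinned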
Much as a fixed object cannot be moved by the GC, a pointer to a fixed object cannot be reassigned. This makes it difficult to traverse primitive arrays, as you end up needing to create other temporary pointers or limiting the size of the fixed region to a small segment. Fixed objects and unsafe code also increase the overall size of the produced IL by a fairly significant margin. While an increase in IL size is not indicative of the size of the produced machine code, it does prevent the runtime from inlining such methods. As an example, the two following snippets show the difference between a safe Magnitude implementation and an unsafe one; note that in the unsafe case the vector components are stored in a fixed-size buffer.

public float Magnitude() {
    return (float)Math.Sqrt(X * X + Y * Y + Z * Z);
}

.method public hidebysig instance float32 Magnitude() cil managed
{
    .maxstack 8
    L_0000: ldarg.0
    L_0001: ldfld float32 PerformanceTests.Vector3::X
    L_0006: ldarg.0
    L_0007: ldfld float32 PerformanceTests.Vector3::X
    L_000c: mul
    L_000d: ldarg.0
    L_000e: ldfld float32 PerformanceTests.Vector3::Y
    L_0013: ldarg.0
    L_0014: ldfld float32 PerformanceTests.Vector3::Y
    L_0019: mul
    L_001a: add
    L_001b: ldarg.0
    L_001c: ldfld float32 PerformanceTests.Vector3::Z
    L_0021: ldarg.0
    L_0022: ldfld float32 PerformanceTests.Vector3::Z
    L_0027: mul
    L_0028: add
    L_0029: conv.r8
    L_002a: call float64 [mscorlib]System.Math::Sqrt(float64)
    L_002f: conv.r4
    L_0030: ret
}

public float Magnitude() {
    fixed (float* p = V) {
        return (float)Math.Sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
    }
}

.method public hidebysig instance float32 Magnitude() cil managed
{
    .maxstack 4
    .locals init (
        [0] float32& pinned singleRef1,
        [1] float32 single1)
    L_0000: ldarg.0
    L_0001: ldflda PerformanceTests.Unsafe.Vector3/e__FixedBuffer0 PerformanceTests.Unsafe.Vector3::V
    L_0006: ldflda float32 PerformanceTests.Unsafe.Vector3/e__FixedBuffer0::FixedElementField
    L_000b: stloc.0
    L_000c: ldloc.0
    L_000d: conv.i
    L_000e: ldind.r4
    L_000f: ldloc.0
    L_0010: conv.i
    L_0011: ldind.r4
    L_0012: mul
    L_0013: ldloc.0
    L_0014: conv.i
    L_0015: ldc.i4.4
    L_0016: conv.i
    L_0017: add
    L_0018: ldind.r4
    L_0019: ldloc.0
    L_001a: conv.i
    L_001b: ldc.i4.4
    L_001c: conv.i
    L_001d: add
    L_001e: ldind.r4
    L_001f: mul
    L_0020: add
    L_0021: ldloc.0
    L_0022: conv.i
    L_0023: ldc.i4.8
    L_0024: conv.i
    L_0025: add
    L_0026: ldind.r4
    L_0027: ldloc.0
    L_0028: conv.i
    L_0029: ldc.i4.8
    L_002a: conv.i
    L_002b: add
    L_002c: ldind.r4
    L_002d: mul
    L_002e: add
    L_002f: conv.r8
    L_0030: call float64 [mscorlib]System.Math::Sqrt(float64)
    L_0035: conv.r4
    L_0036: stloc.1
    L_0037: leave.s L_0039
    L_0039: ldloc.1
    L_003a: ret
}
Note that neither of these two appears to be a candidate for inlining, both being well over the 32-byte IL limit. The produced IL, while not directly indicative of the assembly produced by the JIT compiler, does tend to give an overall idea of how much larger we should expect the method to be when reproduced in machine code. Fixed-length buffers have other issues that need addressing: you cannot access a fixed-length buffer outside of a fixed statement; they are an unsafe construct, so you must mark the type as unsafe; and they produce temporary types at compilation time that can throw off serialization and other reflection-based mechanisms.
In the end, unsafe code does not increase performance, and the reliance upon platform constructs to ensure safety, such as fixed, introduces more problems than it solves. Furthermore, even the smallest method that might otherwise be inlined tends to bloat up to the point where inlining by the JIT is no longer possible.

Surprising Developments and JIT Optimizations
Previously I noted that the JIT compiler can only inline a method that is a maximum of 32 bytes of IL in length. However, I wasn't completely honest with you. In some cases the JIT compiler will inline chunks of code that are longer than 32 bytes of IL. I have not dug in-depth into the reasons for this, nor into when these conditions arise, so this information is presented as an informal experimental result. In the case of a function returning the result of an intrinsic operation, a condition may arise whereby the call is inlined. Two examples of this behavior are shown below; note that in both cases the function used calls an intrinsic math function and that neither takes value-type parameters (which would prevent inlining). The first is the Magnitude function, which we saw above. Calling it results in it being inlined, producing the following assembly.

00220164 D945D4 fld dword ptr [ebp-2Ch]
00220167 D8C8 fmul st,st(0)
00220169 D945D8 fld dword ptr [ebp-28h]
0022016C D8C8 fmul st,st(0)
0022016E DEC1 faddp st(1),st
00220170 D945DC fld dword ptr [ebp-24h]
00220173 D8C8 fmul st,st(0)
00220175 DEC1 faddp st(1),st
00220177 DD5D9C fstp qword ptr [ebp-64h]
0022017A DD459C fld qword ptr [ebp-64h]
0022017D D9FA fsqrt
We note that this is the optimal form for the magnitude function, with a minimal number of memory reads and the majority of the work taking place on the FPU stack. Compared to the unsafe version, shown next, you can clearly see how much worse the unsafe code fares.

007A0438 55 push ebp
007A0439 8BEC mov ebp,esp
007A043B 57 push edi
007A043C 56 push esi
007A043D 53 push ebx
007A043E 83EC10 sub esp,10h
007A0441 33C0 xor eax,eax
007A0443 8945F0 mov dword ptr [ebp-10h],eax
007A0446 894DF0 mov dword ptr [ebp-10h],ecx
007A0449 D901 fld dword ptr [ecx]
007A044B 8BF1 mov esi,ecx
007A044D D80E fmul dword ptr [esi]
007A044F 8BF9 mov edi,ecx
007A0451 D94704 fld dword ptr [edi+4]
007A0454 8BD1 mov edx,ecx
007A0456 D84A04 fmul dword ptr [edx+4]
007A0459 DEC1 faddp st(1),st
007A045B 8BC1 mov eax,ecx
007A045D D94008 fld dword ptr [eax+8]
007A0460 8BD8 mov ebx,eax
007A0462 D84B08 fmul dword ptr [ebx+8]
007A0465 DEC1 faddp st(1),st
007A0467 DD5DE4 fstp qword ptr [ebp-1Ch]
007A046A DD45E4 fld qword ptr [ebp-1Ch]
007A046D D9FA fsqrt
007A046F D95DEC fstp dword ptr [ebp-14h]
007A0472 D945EC fld dword ptr [ebp-14h]
007A0475 8D65F4 lea esp,[ebp-0Ch]
007A0478 5B pop ebx
007A0479 5E pop esi
007A047A 5F pop edi
007A047B 5D pop ebp
007A047C C3 ret
Next up is a fairly ubiquitous utility function that obtains the angle between two unit-length vectors. Note that acos is not directly producible as a machine instruction; nonetheless it is considered an intrinsic function. As we see below, this produces a nicely optimized set of instructions, with only a single call to a function (the one that computes acos).

public static float AngleBetween(ref Vector3 lhs, ref Vector3 rhs) {
    return (float)Math.Acos(lhs.X * rhs.X + lhs.Y * rhs.Y + lhs.Z * rhs.Z);
}

.method public hidebysig static float32 AngleBetween(PerformanceTests.Vector3& lhs, PerformanceTests.Vector3& rhs) cil managed
{
    .maxstack 8
    L_0000: ldarg.0
    L_0001: ldfld float32 PerformanceTests.Vector3::X
    L_0006: ldarg.1
    L_0007: ldfld float32 PerformanceTests.Vector3::X
    L_000c: mul
    L_000d: ldarg.0
    L_000e: ldfld float32 PerformanceTests.Vector3::Y
    L_0013: ldarg.1
    L_0014: ldfld float32 PerformanceTests.Vector3::Y
    L_0019: mul
    L_001a: add
    L_001b: ldarg.0
    L_001c: ldfld float32 PerformanceTests.Vector3::Z
    L_0021: ldarg.1
    L_0022: ldfld float32 PerformanceTests.Vector3::Z
    L_0027: mul
    L_0028: add
    L_0029: conv.r8
    L_002a: call float64 [mscorlib]System.Math::Acos(float64)
    L_002f: conv.r4
    L_0030: ret
}

007A01D9 8D55D4 lea edx,[ebp-2Ch]
007A01DC 8D4DC8 lea ecx,[ebp-38h]
007A01DF D902 fld dword ptr [edx]
007A01E1 D809 fmul dword ptr [ecx]
007A01E3 D94204 fld dword ptr [edx+4]
007A01E6 D84904 fmul dword ptr [ecx+4]
007A01E9 DEC1 faddp st(1),st
007A01EB D94208 fld dword ptr [edx+8]
007A01EE D84908 fmul dword ptr [ecx+8]
007A01F1 DEC1 faddp st(1),st
007A01F3 83EC08 sub esp,8
007A01F6 DD1C24 fstp qword ptr [esp]
007A01F9 E868A5AF79 call 7A29A766 (System.Math.Acos(Double), mdToken: 06000b28)
Finally, there is the issue of SIMD instruction sets. While the JIT will not vectorize code on the x86 platform, it will utilize SIMD instructions for certain scalar operations. One common operation is the conversion of floating point numbers to integers. In .NET 2.0 the JIT will optimize this to use SSE2 instructions. For instance, the following snippet of code results in the assembly dump that follows it.

int n = (int)r.NextDouble();

002A02FB 8BCB mov ecx,ebx
002A02FD 8B01 mov eax,dword ptr [ecx]
002A02FF FF5048 call dword ptr [eax+48h]
002A0302 DD5DA0 fstp qword ptr [ebp-60h]
002A0305 F20F1045A0 movsd xmm0,mmword ptr [ebp-60h]
002A030A F20F2CF0 cvttsd2si esi,xmm0
While not quite as optimal as it could be if the JIT were using the full SSE2 instruction set, this minor optimization can go a long way.
So what is left to visit? Well, there's obviously the x64 platform, which is growing in popularity. The x64 platform presents new opportunities to explore, including certain guarantees and performance benefits that aren't available on the x86 platform, amongst them a whole new set of optimizations and instruction sets the JIT can take advantage of. Finally, there is the case of calling into unmanaged code for highly performance-intensive operations: hand-optimized SIMD code, and the potential performance benefits or hazards that calling an unmanaged function can incur.

Source

Washu


 

Playing With The .NET JIT Part 1

Introduction
.NET has been getting some interesting press recently, even to the point where an article advocating the use of managed code for rapid development of components was published in Game Developer Magazine. However, I did raise some issues with the author regarding the performance metric he used. Thus I have decided to cover some issues with .NET performance, future benefits, and hopefully even a few solutions to some of the problems I'll be posing.
Ultimately the performance of your application will be determined by the algorithms and data structures you use. No amount of micro-optimization can hope to account for the huge performance differences that can crop up between different choices of algorithms. Thus the most important tool you can have in your arsenal is a decent profiler. Thankfully there are many good profilers available for the .NET platform. Some of the profiling tools are specific to certain areas of managed coding, such as the CLR Profiler, which is useful for profiling the allocation patterns of your managed application. Others, like DevPartner, allow you to profile the entire application, identifying performance bottlenecks in both managed and unmanaged code. Finally, there are the low-level profiling tools, such as the SOS Debugging Tools; these give you extremely detailed information about the performance of your systems, but are hard to use.
Applications designed and built for a managed platform tend to have different design decisions behind them than unmanaged applications. Even such fundamental things as memory allocation patterns are usually quite a bit different. With object lifetimes being non-deterministic, one has to apply different patterns to ensure the timely release of resources. Allocation patterns are also different, partly due to the inability to allocate objects on the stack, but also due to the ease of allocation on the managed heap. Allocating on an unmanaged heap typically requires a heap walk to find a block of free space that is at least the size of the block requested. The managed allocator typically allocates at the end of the heap, resulting in significantly faster allocation times (constant time, for the most part). These changes to the underlying assumptions that drive the system typically have large, sweeping effects on the overall design of the system.
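To illustrate why allocating at the end of the heap is so cheap, here is a toy bump-allocator sketch. It is not the CLR's actual allocator, but it captures why the fast path can be (amortized) constant time: there is no free-list walk, just a pointer increment.

#include <cstddef>

// Toy bump allocator: hand out memory by advancing a pointer through a segment.
struct BumpHeap {
    char* next; // first free byte in the segment
    char* end;  // one past the last byte of the segment

    void* allocate(std::size_t size) {
        if (static_cast<std::size_t>(end - next) < size)
            return nullptr;  // a real GC would collect and compact here instead
        void* p = next;      // no search for a fitting block, just the next bytes in line
        next += size;
        return p;
    }
};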
Future Developments

Theoretically a JIT compiler can outperform a standard compiler simply because it can target the platform in ways that traditional compilation cannot. Traditionally, to target different instruction sets, you would have to compile a binary for each instruction set. For instance, targeting SSE2 would require you to build a separate binary from your non-SSE2 branch. You could, of course, do this through the use of DLLs, or by hand-writing your SSE2 code and using function pointers to dictate which branch to choose.
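A rough sketch of that hand-rolled dispatch is shown below; cpu_supports_sse2, matrix_mul_sse2, and matrix_mul_scalar are hypothetical functions the application would have to supply.

// Hypothetical implementations and CPU check supplied elsewhere by the application.
bool cpu_supports_sse2();
void matrix_mul_scalar(float const* m1, float const* m2, float* out);
void matrix_mul_sse2(float const* m1, float const* m2, float* out);

typedef void (*matrix_mul_fn)(float const*, float const*, float*);

// Choose an implementation once, at startup, based on what the CPU supports.
static matrix_mul_fn const matrix_mul_dispatch =
    cpu_supports_sse2() ? &matrix_mul_sse2 : &matrix_mul_scalar;

// Callers always go through the pointer, paying a single indirect call.
void multiply(float const* m1, float const* m2, float* out) {
    matrix_mul_dispatch(m1, m2, out);
}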
Hand-written SIMD code is often faster than compiler-generated SIMD, due to the ability to manually vectorize the data, thus enabling true SIMD to take place. Some compilers, like the Intel C++ Compiler, can perform automatic vectorization; however, they are unable to guarantee the accuracy of the resulting binary, and extensive testing typically has to be done to ensure that the functionality was correctly generated. While most compilers have the option to target SIMD instruction sets, they usually use it to replace standard floating point operations where they can, as the scalar SIMD instructions are generally faster than their FPU counterparts.
The JIT compiler could target any SIMD instruction set supported by its platform, along with any other hardware-specific optimizations it knew about. While automatic vectorization is not likely to be in a JIT release anytime soon, even using the non-vectorized SIMD instructions can help to parallelize your processing. As an example, multiple independent SIMD operations can typically run in parallel (that is, an add and a multiplication could both run simultaneously). Furthermore, the JIT can allow any .NET application to target any system it supports, provided the libraries it uses are also available on that system. This means that, provided you aren't doing anything highly non-portable such as assuming that a pointer is 32 bits..., your application could be JIT compiled to target a 64-bit platform and run natively that way.
Another area of potential advancement is Profile Guided Optimization (POGO). Currently POGO is restricted to the arena of unmanaged applications, as it requires the ability to generate raw machine code and to perform instruction reordering. In essence, you instrument an application with a POGO profiler; then you use the application normally to allow the profiler to collect usage data and find the hotspots; finally, you run the optimizer on the solution, which rebuilds it using the profiling data gathered to optimize the heavily utilized sections of your application. A JIT compiler could instrument a managed program on first launch and watch its usage, while in another thread it optimizes the machine code using the profiling data it gathers. The resulting cached binary image would be optimized on the next launch (excepting those areas that had not been accessed, and thus that the JIT hadn't compiled yet). This would be especially effective on systems with multiple cores.

JIT Compilation for the x86
The JIT compiler for the x86 platform, as of .NET 2.0, does not support SIMD instruction sets. It will generate occasional MMX or SSE instructions for some integral and floating point promotions, but otherwise it will not utilize SIMD instruction sets. Inlining poses its own problems for the JIT compiler. Currently the JIT compiler will only inline functions that are 32 bytes of IL or smaller. Because the JIT compiler runs under an extremely tight time constraint, it is forced to make sacrifices in the optimizations it can perform. Inlining is typically an expensive operation because it requires shuffling around the addresses of everything that comes after the inlined code (which requires interpreting the IL, then determining whether each address is before or after the inlined code, then making the appropriate adjustments...). Because of this, all but the smallest of methods will not be inlined. Here's a sample of a method that will not be inlined, and the IL that accompanies it:

public float SquareMagnitude() {
    return X * X + Y * Y + Z * Z;
}

.method public hidebysig instance float32 SquareMagnitude() cil managed
{
    .maxstack 8
    L_0001: ldfld float32 Performance_Tests.Vector3::X
    L_0006: ldarg.0
    L_0007: ldfld float32 Performance_Tests.Vector3::X
    L_000c: mul
    L_000d: ldarg.0
    L_000e: ldfld float32 Performance_Tests.Vector3::Y
    L_0013: ldarg.0
    L_0014: ldfld float32 Performance_Tests.Vector3::Y
    L_0019: mul
    L_001a: add
    L_001b: ldarg.0
    L_001c: ldfld float32 Performance_Tests.Vector3::Z
    L_0021: ldarg.0
    L_0022: ldfld float32 Performance_Tests.Vector3::Z
    L_0027: mul
    L_0028: add
    L_0029: ret
}
This method, as you can tell, is 42 bytes long, counting the return instruction. Clearly this is over the 32-byte IL limit. However, the resulting assembly compiles down to less than 25 bytes:

002802C0 D901 fld dword ptr [ecx]
002802C2 D9C0 fld st(0)
002802C4 DEC9 fmulp st(1),st
002802C6 D94104 fld dword ptr [ecx+4]
002802C9 D9C0 fld st(0)
002802CB DEC9 fmulp st(1),st
002802CD DEC1 faddp st(1),st
002802CF D94108 fld dword ptr [ecx+8]
002802D2 D9C0 fld st(0)
002802D4 DEC9 fmulp st(1),st
002802D6 DEC1 faddp st(1),st
002802D8 C3 ret
Methods that use this one, like the Magnitude method, may themselves be candidates for inlining, which typically reduces to a call to SquareMagnitude followed by an fsqrt instruction.
Another area where the JIT has issues deals with value types and inlining. Methods that take value-type parameters are not currently considered for inlining. There is a fix in the pipe for this, as it is considered a bug. An example of this behavior can be seen in the following function, which, although far below the 32-byte IL limit, will not be inlined.

static float WillNotInline32(float f) {
    return f * f;
}

.method private hidebysig static float32 WillNotInline32(float32 f) cil managed
{
    .maxstack 8
    L_0000: ldarg.0
    L_0001: ldarg.0
    L_0002: mul
    L_0003: ret
}
The resulting call to this function, and the assembly for the function itself, look as follows:

0087008F FF75F4 push dword ptr [ebp-0Ch]
00870092 FF154C302A00 call dword ptr ds:[002A304Ch]
----
003F01F8 D9442404 fld dword ptr [esp+4]
003F01FC DCC8 fmul st(0),st
003F01FE C20400 ret 4
Clearly the x86 JIT requires a lot more work before it will be able to produce machine code approaching that of a good optimizing compiler. However, the news isn't all grim: interop between .NET and unmanaged code allows you to write those methods that need to be highly optimized in a lower-level language.

Source

Washu


 

Is It Really A Bug For A Beginner To Be Using C-Strings In C++?

Depends, but probably yes.
A beginning programmer should be focusing on learning to program: that is, the process of taking a concept and turning it into an application. Problem solving, in other words. Learning to program is not the same thing as learning a programming language. Learning a programming language is about learning the syntax and standard library that come with said language; it may involve problem solving, but that is not its primary concern.
Given that, one can quickly see that the best way to introduce a beginning programmer to programming is to get them using a language that is quick and easy to get up and running in. There are many languages that fit that description; Python and Ruby are two prime examples, both of which have a very simple syntax that allows a lot of leeway for the programmer, without all the extra clutter that many other languages have (C++ cough). Another good choice, in my opinion, is C#, which, when combined with Microsoft Visual C#, provides a very robust but easy to learn language. These languages share key features that make them easy to learn and use: all of them are generally garbage collected, all have fairly simple syntax with few (if any) corner cases, and all have huge standard libraries that provide a great deal of quick and easy to use functionality with minimal programmer effort.
C++ has almost none of those things. While there are useful tools in many IDEs, such as Visual Studio, IntelliSense and similar auto-completion tools are not perfect, even with the help of tools like WholeTomato's VAX. The C++ standard library is very small, dealing mainly with stream-based IO, some minimal containers, threading, and algorithms operating on iterators. The rest of the work is left up to the developer. This means that for any sufficiently complex project you will either end up implementing a majority of the behaviors you need yourself, or having to dig up third-party libraries and APIs for said behavior. Even the recent C++11 work hasn't really alleviated the problem. Then you have the language complexity, which I've commented on previously.
However, the C++ standard library does provide some features that should be in every developer's pocketbook... such as std::string. std::string behaves much more like the way a beginning programmer expects a primitive type to work. They've learned that you can add integers and floats together, so why can't they add strings together? Well, with std::string they can, but with c-strings they can't. They've learned to compare integers and floats using the standard == operator, so why can't they do that with strings? With std::string they can, but with c-strings they can't (well, they "can", but the behavior is not what they want). They've learned how to read integers and floats from std::cin, so why can't they do the same with strings? They can with std::string, but with c-strings they have to be careful of the length and make sure they've pre-allocated the buffer, which has hazards of its own... such as stack space issues when they try to create a char array of 5000 characters.
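A short illustration of that familiarity (the names and strings here are just made up for the example):

#include <iostream>
#include <string>

int main() {
    std::string first, last;
    std::cin >> first >> last;              // no buffer size to guess in advance
    std::string full = first + " " + last;  // concatenation with +, just like adding numbers
    if (full == "Ada Lovelace")             // == compares the contents of the strings
        std::cout << "Hello, " << full << '\n';
}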
C-strings do not behave intuitively. They have no inherent length, instead relying on null terminators to indicate the end of the string. They cannot be trivially concatenated; instead the user must ensure that appropriate space is available, use various function calls to copy the string, and then make sure that those string functions had the space required to copy the null terminator (which strncpy and other functions MAY omit if there isn't enough space in the destination). Comparison requires the use of functions like strcmp, which doesn't return true/false, but instead returns an integer indicating the string difference, with 0 meaning no difference. In a language where the user has been taught that 0/null generally means failure, remembering to test for 0 in that one-off corner case is rather strange.
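For contrast, a sketch of the same task with c-strings, showing the bookkeeping the beginner suddenly becomes responsible for:

#include <cstdio>
#include <cstring>

int main() {
    char full[16];                                   // the caller must budget the space up front
    std::strncpy(full, "Ada", sizeof(full) - 1);
    full[sizeof(full) - 1] = '\0';                   // strncpy will not terminate if the source is too long
    std::strncat(full, " Lovelace", sizeof(full) - std::strlen(full) - 1);
    if (std::strcmp(full, "Ada Lovelace") == 0)      // 0 means "equal", not "failure"
        std::puts(full);
}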
For a beginner, all that strangeness doesn't equate to extra power or better performance. Instead it equates to extra confusion and strange crashes. Had they been taught std::string first, they would have been free and clear, able to use the familiar operators they are used to, while being safe and secure in the bosom that is std::string. In fact, it generally gets worse than that, as c-strings are usually taught before pointers! This makes it even more confusing for the poor beginner, because then they're introduced to arrays and pointers (instead of, say, std::vector), and now have a whole slew of new functionality to basically kill themselves with.
Thus, in conclusion, if you see a c-string in a beginner's code, it probably means they have a bug somewhere in that code.

Source

Washu


 

A Simple C++ Quiz

This is a very basic C++ quiz; it mainly tests a wee bit of knowledge that I've found some people who profess to have a mastery of C++ to be lacking. The answers should all be based on the C++ standard, and not on your compiler's implementation.
Use the following code snippet to answer questions 1 through 3:

int* p = new int[10];
int* j = p + 11;
int* k = p + 10;
Does the second line have well-defined behavior?
If the second line is well defined, where does the pointer j point after the second line is executed?

What are some of the legal operations that can be performed on the pointer k?

What output should the following lines of code produce?

int a = 10;
std::cout

Assuming the function called in the following block of code has no default parameters, and that no operators are overloaded, how many parameters does it take? Which objects are passed to it?

f((a, b, c), d, e, ((g, h), i));


Source

Washu

