Giving upvotes because the answer is correct.
The mathematics of 3D worlds is linear algebra. You need to understand the fundamentals of linear algebra to program 3D software effectively. Otherwise you will be entirely at the mercy of people online providing you with basic formulas every time you need them.
The mathematics of 2D worlds is trigonometry and geometry. You need to understand the fundamentals to program 2D software effectively.
As for how to apply the law of cosines: you can compute the angle between any two vectors using the function provided above. It applies just as well in 2D if you set the third component to zero; the two vectors are the legs of a triangle, and the inverse cosine is used to calculate the angle between those legs.
If you need the last line without 'math notation', the angle is equal to the inverse cosine of the dot product of two normalized vectors. If you don't know what those words mean, work through this or similar.
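That sentence can be written directly as code. This is a minimal sketch of the idea (the function name `angle_between` is my own, not from the thread): normalize both vectors, take the dot product, and feed it to the inverse cosine.

```python
import math

def angle_between(v1, v2):
    # Angle = inverse cosine of the dot product of the two normalized vectors.
    # Works in 3D, and in 2D if you leave the third component at zero.
    len1 = math.sqrt(sum(a * a for a in v1))
    len2 = math.sqrt(sum(a * a for a in v2))
    dot = sum(a * b for a, b in zip(v1, v2)) / (len1 * len2)
    # Clamp to guard against floating-point drift just outside [-1, 1].
    dot = max(-1.0, min(1.0, dot))
    return math.degrees(math.acos(dot))

# angle_between((1, 0, 0), (0, 1, 0)) gives 90 degrees.
```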
Ok, let me clarify, because it seems that the other answers are focused on the first half, but not the last half of what I said.
I have already calculated the angle between points B and C. That's not the problem whatsoever, so that part is already solved.
The problem (tested in my program) is that the yaw of the camera is dependent upon world space, with z = north (0), -z = south (180), x = east (90) and -x = west (270).
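For reference, that compass convention can be computed from a direction vector with `atan2`. This is only a sketch assuming the convention stated above (+z = north = 0, +x = east = 90); the function name `world_yaw` is mine:

```python
import math

def world_yaw(dx, dz):
    # Assumed convention from the post: +z = north (0), +x = east (90),
    # -z = south (180), -x = west (270). atan2(x, z) produces exactly this
    # ordering once shifted into the 0..360 range.
    return (math.degrees(math.atan2(dx, dz)) + 360.0) % 360.0
```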
Now, I would have already been done after the first half if the camera were statically placed at (0, y, 0) (since I'm only checking the horizontal plane), but this is not the case. The camera can be anywhere in the world (in any of the four quadrants). This makes the yaw of the camera problematic, as I will explain next.
Given the same image, if you move the diagram into Quadrant I, II, III or IV, the world coordinates remain valid in relation to the camera's orientation, and the angles of points B and C match those of the world.
But here is where everything fails:
Rotate the direction of the camera and point B by, say, 180 degrees, but keep point C in the same spot. Point C will still be at 270 degrees in relation to the world, but in relation to the camera's direction (point B is always 0 degrees), point C is at 90 degrees.
So the problem is not the angle between points B and C. The problem is defining degrees relative to the direction the camera is facing, which is always 0 degrees no matter which way it rotates.
If you don't understand that, the problem is no different from getting your bearings. If you are facing the world's north and your dog is standing to your left, you understand that not only is the dog to your left, but that in reference to the world, the dog is also west of you.
But if you were to turn 90 degrees counter-clockwise, the dog would still be west of you, but would it still be to your left? NO. The dog would now be in front of you.
So roughly, as the world has its own Cartesian coordinate system, the camera needs an independent version based on the direction the camera is pointing.
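If I've understood my own problem correctly, the conversion between the two frames is just a subtraction: take the target's world angle, subtract the camera's yaw, and wrap into 0..360 so the camera's facing direction is always 0. A minimal sketch (the function name `camera_relative_angle` is my own, and it assumes both angles share the same compass convention, 0 = north, 90 = east):

```python
def camera_relative_angle(target_world_angle, camera_yaw):
    # Re-base the target's world-space compass angle onto the camera's frame:
    # whatever direction the camera faces becomes 0 degrees.
    return (target_world_angle - camera_yaw) % 360.0

# Dog example from above:
# Facing north (yaw 0), dog to the west (world angle 270):
#   camera_relative_angle(270, 0) -> 270, i.e. to your left.
# Turn 90 degrees counter-clockwise so you face west (yaw 270):
#   camera_relative_angle(270, 270) -> 0, i.e. the dog is now in front of you.
```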
P.S. Not all programmers read/write math notation, and I don't appreciate the condescending response. If I gave you a sentence written in Farsi script, would you just know how to say it, or should it be written phonetically so you can say it and use it? The same goes for programming. A lot of people can't read mathematical notation, but they can certainly program the same equations when written in a form they understand, such as when you said "the angle is equal to the inverse cosine of the dot product of two normalized vectors."