If your code is flexible enough to support multiple notions of ownership for a game object (visual, inventory, tribe, etc.), then this should just be a matter of the player having a "temp crafting inventory". Items are placed in the temp crafting inventory as needed; when the player is done crafting, it is emptied. All along, the items remain visible in the world, but any code that checks whether someone can pick an item up needs to verify that it isn't owned by a temp crafting inventory.
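A minimal sketch of what that could look like. All the names here (OwnerKind, TempCraftingInventory, canPickUp) are illustrative, not from any particular engine:

```cpp
#include <string>
#include <vector>

// Hypothetical ownership tags -- names are illustrative only.
enum class OwnerKind { None, PlayerInventory, TempCraftingInventory, Tribe };

struct Item {
    std::string name;
    OwnerKind owner = OwnerKind::None;
};

// The item stays visible in the world, but pickup is refused
// while anything (including a temp crafting inventory) owns it.
bool canPickUp(const Item& item) {
    return item.owner == OwnerKind::None;
}

struct TempCraftingInventory {
    std::vector<Item*> items;

    void add(Item& item) {
        item.owner = OwnerKind::TempCraftingInventory;
        items.push_back(&item);
    }

    // When crafting finishes, release everything back to the world.
    void clear() {
        for (Item* i : items) i->owner = OwnerKind::None;
        items.clear();
    }
};
```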
If you already have support for serializing/deserializing (assuming you can read/write to a binary blob that can exist in memory), then #1 is the easiest (almost trivial to implement) - but it might come at an unacceptable performance cost once your scene gets large.
The thing now is I want to implement my own tree and a heads up on how I can do it will be appreciated.
Have you seen the talks/slides on this site? They might help.
A behavior tree core implementation is actually fairly simple. I mean, I wrote one a while back based on info I learned from the above reference site - I could dig that code up and give it to you - but looking at Behavior Machine in the Unity asset store, it does everything mine did and much more (and appears to plug nicely into Unity). What's wrong with it exactly? How is it limiting you?
The hard part is modelling the world and somehow distilling that into nodes in a BT. And you won't find any implementation that does that, because it will be specific to your game. So maybe you're looking for tutorials that help explain how to do that, or concrete examples of complex AI implemented using behavior trees? Let me know if you find any good info on that!
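To give a sense of how small the core can be, here's a from-scratch sketch (not Behavior Machine's API, and not the code I mentioned above -- just the standard Sequence/Selector/Action shape):

```cpp
#include <functional>
#include <memory>
#include <vector>

enum class Status { Success, Failure, Running };

struct Node {
    virtual ~Node() = default;
    virtual Status tick() = 0;
};

// Leaf node wrapping an arbitrary callable -- this is where your
// game-specific world-modelling code plugs in.
struct Action : Node {
    std::function<Status()> fn;
    explicit Action(std::function<Status()> f) : fn(std::move(f)) {}
    Status tick() override { return fn(); }
};

// Sequence: succeeds only if every child succeeds, in order.
struct Sequence : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status tick() override {
        for (auto& c : children) {
            Status s = c->tick();
            if (s != Status::Success) return s;  // Failure/Running short-circuits
        }
        return Status::Success;
    }
};

// Selector: succeeds as soon as any child succeeds.
struct Selector : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status tick() override {
        for (auto& c : children) {
            Status s = c->tick();
            if (s != Status::Failure) return s;
        }
        return Status::Failure;
    }
};
```

The hard part, as I said, is what goes inside the Action callables -- the core above is the easy ten percent.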
OK, it looks like (from my own tests) that calling ElementAt on the Dictionary's Values collection is really slow. I'm guessing it's an O(n) operation that has to iterate through the items one by one, since the larger the index, the longer it takes.
You probably want to use foreach instead (and hopefully the ValueCollection's enumerator is a value type, to avoid an unnecessary heap allocation).
In the little test I did, for a Dictionary of 10,000 items, this:
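As an illustration of why indexed access into a hash-based container behaves this way, here's a C++ analogue using std::unordered_map, which (like Dictionary) offers no random access into its elements:

```cpp
#include <cstddef>
#include <iterator>
#include <unordered_map>

// "Give me the i-th value" has to walk i iterators -- O(i),
// just like ElementAt on Dictionary's Values collection.
int valueAtIndex(const std::unordered_map<int, int>& m, std::size_t i) {
    return std::next(m.begin(), i)->second;
}

// Iterating once touches each element a single time: O(n) total,
// versus O(n^2) for calling valueAtIndex for every index.
long long sumValues(const std::unordered_map<int, int>& m) {
    long long sum = 0;
    for (const auto& kv : m) sum += kv.second;
    return sum;
}
```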
What the!? Why am I not getting "Destruct A"?! If A held any pointers, I'd have memory leaks, since its destructor is never called! I want to see something like this:
Construct A 
Destruct A 
Construct A 
Destruct A 
Destruct A 
That wouldn't make sense, because then your constructors and destructors aren't matched.
a = A()
doesn't call the destructor on a (I'm guessing that's what you're expecting, based on your desired output). It simply calls a's assignment operator (plus the temporary's constructor and destructor, unless elided, as ApochPiQ said).
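You can see the matched pairs by logging each special member function (the string-based logging here is just for demonstration):

```cpp
#include <string>

std::string events;  // records construction/destruction order

struct A {
    A()  { events += "C"; }   // "Construct A"
    ~A() { events += "D"; }   // "Destruct A"
    A(const A&) { events += "c"; }
    A& operator=(const A&) {  // no constructor or destructor runs here
        events += "=";
        return *this;
    }
};

void demo() {
    A a;      // "C"
    a = A();  // temporary constructed: "C"; assignment: "="; temporary destroyed: "D"
}             // a destroyed: "D"
```

Running demo() produces "CC=DD": two constructions, two destructions, one assignment. Every constructor has a matching destructor; the assignment itself destroys nothing.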
You might want to search for "projective texture mapping". You can assign UVs in the vertex shader based upon the local or world position values.
If you project along 3 axes (just by applying a scale/offset to xz, xy and yz positions), you can generate 3 sets of UVs, and then blend between three texture samples in the pixel shader, depending on the vertex normal. This is an easy way to handle arbitrary meshes - the downside being that it requires 3 texture samples in the pixel shader. I saw a talk at PAXDev where a team was using this a lot in their game though (I forget which company), and it was a big time-saver for them in terms of artist effort.
I use it in one of my own projects to texture things like rocks. The following rock uses projected textures:
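The blend-weight math is simple; here's a CPU-side sketch in C++ for clarity (the real version lives in the vertex/pixel shaders, and the scale/offset applied to the positions is omitted):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Blend weights from the surface normal: the more a face points along
// an axis, the more that axis's projection contributes.
Vec3 triplanarWeights(Vec3 n) {
    Vec3 w{ std::fabs(n.x), std::fabs(n.y), std::fabs(n.z) };
    float sum = w.x + w.y + w.z;
    return { w.x / sum, w.y / sum, w.z / sum };
}

// The three planar UV sets, generated from position:
// the x-axis projection uses (y, z), the y-axis uses (x, z),
// and the z-axis uses (x, y).
struct TriplanarUVs { float u[3], v[3]; };

TriplanarUVs triplanarUVs(Vec3 p) {
    return { { p.y, p.x, p.x },
             { p.z, p.z, p.y } };
}
```

In the pixel shader you'd take the three texture samples at those UVs and combine them with the three weights.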
Generally, for each game object you can store translation (vector3), scale (vector3, or maybe just a single float), and rotation (quaternion) separately.
Then, each frame (or whenever the object moves/rotates/changes size) combine them (scale, rotation, then translation) to get the final world matrix. Whatever framework you're using presumably has functions to do this.
Something like that will work (though I'd assume you'd want a region close to the camera where the objects are all fully opaque).
But if you're using alpha blending, you'll need to make sure that you're sorting all your objects by distance to the camera, and drawing back to front. This could become a performance issue on the GPU (overdraw, blending) or CPU (sorting) depending on how many objects you have and how big they are.
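The sorting itself is just a comparator keyed on distance from the camera. A sketch (the Renderable struct is illustrative; squared distance avoids a sqrt per object):

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

float distSq(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx*dx + dy*dy + dz*dz;
}

struct Renderable { Vec3 position; int id; };

// Sort farthest-first so alpha-blended objects draw back to front.
void sortBackToFront(std::vector<Renderable>& objs, Vec3 cameraPos) {
    std::sort(objs.begin(), objs.end(),
              [&](const Renderable& a, const Renderable& b) {
                  return distSq(a.position, cameraPos) > distSq(b.position, cameraPos);
              });
}
```

Note this sorts by object position, not per-triangle, so large or interpenetrating objects can still blend in the wrong order.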
I seem to be confused about something. In some tests I've run using the code I provided, even when an object is only about 10 units from the camera, the Z value is something like 0.995, while near the far end of the frustum, at nearly 1000 units away, it's about 0.99998...
Yeah, it will be non-linear. It has to do with perspective-correct rasterization.