Whilst good libraries can help your overall quality, I feel the real target for your efforts lies in the orchestration and song production.
The first song on SoundCloud feels very loose in the timing between the individual parts.
In both the 1st and 2nd songs there are held chords that drown out the other parts and clash with them.
A lack of dynamic depth, in the individual parts and in the whole, creates that flat sound which takes away from the emotional impact.
Your 3rd piece is stronger and more cohesive than the two earlier tracks, but it still clashes sonically and has no movement in the dynamics.
You don't need super-high-quality samples to get work, but they can help with presentation.
Don't worry, I started out with similar issues but one of the things I learned was how to polish the hell out of whatever I wrote with whatever samples and libraries I had at my disposal. I think that is where you need to concentrate your efforts.
Concentrate on how the frequencies of various instruments sit together and how to make the different parts complement each other.
Learn to analyze your own music and other people's. You can learn a lot from studying their techniques: instrument frequencies, mix frequencies, supporting instruments, and overall production. That will improve your mixing to the point where you can listen to your music and honestly say it has a quality high enough for someone to pay for it.
This is an example of the same type of low-quality, unrealistic samples working together, each produced differently (EQ, reverb, compression), yet everything feels cohesive. If you can produce good tracks with the tools you have, it becomes easier to produce good tracks with more advanced tools.
Both Fabric and Wwise can be obtained for free to experiment with before choosing one to license.
Dynamic mixing for games is definitely an art in itself, and a lot of fun to make work.
It really depends on how your music is technically adaptive. Does the mix change simply because a stem gets added (i.e., there's a basic mix and another stem is layered in), or does the music need to reach a transition point and change? That will drive which middleware or solution you choose as well. I have not explored Fabric's dynamic music system at all, but I've used Wwise's and FMOD Designer's a lot. From talks and demos with the FMOD guys, and from the previous designer system, I can say both FMOD and Wwise have outstanding dynamic music systems and authoring tools where you can prototype the behaviors.
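The stem-layering (vertical) approach can be sketched outside any middleware. This is a hypothetical illustration, not Wwise or FMOD API code: a single "intensity" game parameter (the kind of thing you'd wire up as a Wwise RTPC or an FMOD parameter) drives per-stem gains, and the stem names and thresholds here are made up.

```python
def stem_gains(intensity, thresholds, fade_width=0.25):
    """Return a 0..1 gain per stem from one intensity parameter.

    Each stem fades in linearly over fade_width once intensity
    passes that stem's threshold, so the mix thickens smoothly
    instead of stems popping in.
    """
    gains = {}
    for stem, threshold in thresholds.items():
        g = (intensity - threshold) / fade_width
        gains[stem] = max(0.0, min(1.0, g))
    return gains

# Made-up layer thresholds: pads are the base layer, brass enters last.
layers = {"pads": 0.0, "percussion": 0.25, "brass": 0.5}
print(stem_gains(0.25, layers))  # pads full, percussion and brass silent
print(stem_gains(0.5, layers))   # pads and percussion full, brass silent
print(stem_gains(1.0, layers))   # everything at full volume
```

The transition-point style (horizontal re-sequencing) is a different beast: instead of gains you queue segment changes on musical boundaries, which is exactly what the middleware authoring tools are good at.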
Both also work with Unity via an external plugin. It's great to have so many choices these days!
As far as standards go, a lot of console/TV developers are adopting the ITU-R BS.1770 loudness standard. Wwise actually supports metering with it to help achieve the right mix. For handheld/tablet etc. there is no real standard yet, since we're dealing with different speaker types and sizes.
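For a rough idea of what that meter reports: BS.1770 loudness is the mean square of a K-weighted signal expressed in LKFS. The sketch below deliberately skips the K-weighting filter and the gating stages the standard requires, so it is only the final formula applied to an assumed already-weighted mono signal, not a compliant meter.

```python
import math

def loudness_lkfs(samples):
    """Loudness in LKFS of already K-weighted mono samples.

    BS.1770's final step: -0.691 + 10 * log10(mean square).
    A real meter must K-weight each channel and apply level
    gating before this; those stages are omitted here.
    """
    mean_square = sum(s * s for s in samples) / len(samples)
    return -0.691 + 10.0 * math.log10(mean_square)

# A full-scale square wave has mean square 1.0, so it reads -0.691 LKFS,
# far hotter than typical broadcast targets around -23 to -24 LKFS.
print(loudness_lkfs([1.0, -1.0] * 2400))  # -0.691
```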
This sounds fascinating; you can certainly try to prototype it in Wwise.
Experiment with 3D attenuation graphs.
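An attenuation graph boils down to gain as a function of listener distance. Here's a minimal sketch of evaluating a piecewise-linear curve like the ones you draw in Wwise's attenuation editor; the curve points are made-up values.

```python
from bisect import bisect_right

def attenuation_gain(distance, curve):
    """curve: sorted list of (distance, gain) points; linear between them,
    clamped to the end values outside the curve's range."""
    xs = [d for d, _ in curve]
    i = bisect_right(xs, distance)
    if i == 0:
        return curve[0][1]
    if i == len(curve):
        return curve[-1][1]
    (x0, y0), (x1, y1) = curve[i - 1], curve[i]
    t = (distance - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

# Made-up curve: full volume inside 5 m, fading to silence at 50 m.
curve = [(0.0, 1.0), (5.0, 1.0), (50.0, 0.0)]
print(attenuation_gain(2.0, curve))    # 1.0
print(attenuation_gain(27.5, curve))   # 0.5
print(attenuation_gain(80.0, curve))   # 0.0
```

In practice you'd hand-tune the curve shape per sound in the authoring tool rather than compute it, but prototyping the math this way helps when talking to engineers about what the runtime needs to evaluate.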
Try using a music bus and running some DSP over it (Wwise supports things like compressors, side-chain compressors/ducking, EQ, etc.). There are some third-party vendors of fairly good DSP out there as well.
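To make the side-chain/ducking idea concrete, here's a toy sketch of what the bus DSP does: an envelope follower on a key signal (say, dialogue) pulls the music bus gain down. The attack/release coefficients and duck depth are made-up tuning values, and real middleware gives you these as knobs rather than code.

```python
def duck_music(music, sidechain, depth=0.25, attack=0.5, release=0.01):
    """Return music samples ducked by the sidechain signal's envelope.

    depth is the gain the music is pushed toward when the sidechain
    is loud (0.25 = about -12 dB).
    """
    out, env = [], 0.0
    for m, s in zip(music, sidechain):
        level = abs(s)
        # One-pole envelope follower: fast attack, slow release.
        coeff = attack if level > env else release
        env += coeff * (level - env)
        # Interpolate from full volume (env 0) down to depth (env 1).
        gain = 1.0 - (1.0 - depth) * min(env, 1.0)
        out.append(m * gain)
    return out
```

With a silent sidechain the music passes through untouched; with a sustained loud sidechain the music settles near the depth value, then recovers slowly once the sidechain goes quiet.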
It really depends on what you're trying to do with the objects: how many other tracks play, whether they fade out to silence or just duck a bit to focus on the other object, whether within a certain radius you can fade over to the fully produced mix. Lots of different things to try. Some will need engineering support and a way for you to tune the results; most you can prototype in the audio authoring middleware.