+1 on what Paradigm Shifter said. No writers and only readers is always safe.
However, it is not necessarily faster, or at least not as much faster as you might think — not just by throwing threads at the problem, anyway. A bit of consideration is advisable.
First, many random accesses into a huge data set from several threads will cause more cache misses on a shared-cache architecture. It may be worthwhile to sort the skeletons in this case, so that accesses to nearby memory locations happen close together in time. Adding extra work by sorting may seem nonsensical, but depending on how many cache misses you have, it can very much pay for itself. Sorting may also work in favour of branch prediction, if you have a lot of branches.
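As a minimal sketch of the sorting idea (the function name and the index-based work queue are my invention, not from the thread): if each work item refers into the big shared array by index, sorting the work queue by index makes each thread's reads march forward through memory instead of jumping around, so consecutive items tend to hit the same cache lines.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical setup: "work" holds indices into a large shared array
// (the skeletons). Sorting the indices means a thread that walks its
// chunk of the queue touches memory in roughly ascending order, which
// is far friendlier to the cache than random order.
std::vector<std::size_t> sort_work_by_index(std::vector<std::size_t> work) {
    std::sort(work.begin(), work.end());
    return work;
}
```

You would then hand each thread a contiguous chunk of the sorted queue, so the chunks also cover disjoint, mostly-contiguous regions of the array.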
Second, on NUMA architectures (multi-socket Opteron servers, for example), reading from memory that belongs to another node is much slower than reading local memory. In such a scenario, you will want to make a copy of your data, one per NUMA node, so each node's threads read locally.
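A portable sketch of the per-node-copy idea (all names are mine): with the usual first-touch page placement policy, memory lands on the node of the thread that first writes it, so having each worker copy its own slice of the read-only data inside the worker gets you node-local copies without any NUMA library. Real code would additionally pin each thread to a node (e.g. via libnuma or thread affinity); that is omitted here.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Each worker copies its slice of the shared data locally before
// processing it. The copy is first touched (written) by the worker
// thread, so under a first-touch policy its pages are allocated on
// that thread's NUMA node, and the subsequent reads stay local.
long sum_with_local_copies(const std::vector<int>& shared, unsigned workers) {
    std::vector<long> partial(workers, 0);
    std::vector<std::thread> threads;
    const std::size_t chunk = (shared.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        threads.emplace_back([&, w] {
            const std::size_t begin = std::min(shared.size(), w * chunk);
            const std::size_t end = std::min(shared.size(), begin + chunk);
            if (begin >= end) return;
            // Node-local copy of just this worker's slice.
            std::vector<int> local(shared.begin() + begin, shared.begin() + end);
            long s = 0;
            for (int v : local) s += v;
            partial[w] = s;
        });
    }
    for (auto& t : threads) t.join();
    long total = 0;
    for (long p : partial) total += p;
    return total;
}
```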
Third, you need to make sure that the data you write out does not compete on cache lines, in respect of both true sharing (two threads writing the same variable) and false sharing (two threads writing independent variables that happen to sit on the same cache line, forcing the line to bounce between cores anyway).