If a client wants to send 10 gold to someone and sends a request to do that, the client has no way of knowing whether the request was processed without an ack. But the ack itself can be lost, so the situation where you might have to resend the same request is present in every networked application.
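To make the resend problem concrete: the usual fix is to tag each request with a client-generated ID so the server can deduplicate retries. A minimal sketch (all names invented, not anyone's actual protocol):

```go
package main

import "fmt"

// TransferRequest carries a client-generated ID so the server can
// recognize a resend of the same logical request. (Illustrative names.)
type TransferRequest struct {
	ID     string // reused verbatim when the client retries
	From   string
	To     string
	Amount int
}

type Server struct {
	balances map[string]int
	seen     map[string]bool // request IDs already applied
}

// Apply processes a transfer at most once, even if the client resends
// it because the ack was lost.
func (s *Server) Apply(req TransferRequest) (ack bool) {
	if s.seen[req.ID] {
		return true // duplicate: re-ack without re-applying
	}
	if s.balances[req.From] < req.Amount {
		return false
	}
	s.balances[req.From] -= req.Amount
	s.balances[req.To] += req.Amount
	s.seen[req.ID] = true
	return true
}

func main() {
	s := &Server{
		balances: map[string]int{"alice": 50, "bob": 0},
		seen:     map[string]bool{},
	}
	req := TransferRequest{ID: "req-1", From: "alice", To: "bob", Amount: 10}
	s.Apply(req) // first attempt; suppose the ack is lost
	s.Apply(req) // client resends; dedup prevents a double spend
	fmt.Println(s.balances) // map[alice:40 bob:10]
}
```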
I think we're crossing wires a bit here. Reliable messaging is a trivial problem to solve (TCP, or a reliability layer over UDP), so it's easy to know that either (a) the request was processed correctly, or will be at some point in the very near future, or (b) the other process has terminated and all bets are off. It's not clear why you need application-level retransmission. And even that assumes a multiple-server approach - with a single game server this issue never arises at all: there's a variable in memory holding the current quantity of gold, and you increment it with a 100% success rate. Multiple objects? No problem - just modify each of them before your routine returns.
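For instance - just a minimal sketch of the single-process version, not anyone's actual server code - a gold transfer touching two in-memory objects:

```go
package main

import "fmt"

// Player is a hypothetical in-memory game object. In a single-server
// design, both parties to a trade live in the same process.
type Player struct {
	Name string
	Gold int
}

// transferGold mutates both objects before returning. With all game
// logic on one thread, nothing can observe the state between the two
// writes, so atomicity and isolation fall out of the design.
func transferGold(from, to *Player, amount int) error {
	if from.Gold < amount {
		return fmt.Errorf("%s has insufficient gold", from.Name)
	}
	from.Gold -= amount
	to.Gold += amount
	return nil
}

func main() {
	a := &Player{Name: "alice", Gold: 50}
	b := &Player{Name: "bob", Gold: 0}
	if err := transferGold(a, b, 10); err != nil {
		fmt.Println(err)
	}
	fmt.Println(a.Gold, b.Gold) // 40 10
}
```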
What you're saying is that you willingly forgo those simple guarantees in order to pursue a different approach, one that scales better to high throughput. That's fine, but those are new problems, unique to that way of working, not intrinsic to the 'business logic' at all. With two objects co-located in one process you get atomicity, consistency, and isolation for free, and you delegate durability to your DB as a high-latency background task.
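Concretely, a write-behind sketch of that last point (all names hypothetical - the in-memory state is authoritative, and a background goroutine takes care of durability):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
)

// GoldChange describes a state change that has already been applied in
// memory. (Hypothetical names; a write-behind sketch, not a real API.)
type GoldChange struct {
	Player string
	Delta  int
}

var persistQueue = make(chan GoldChange, 1024)

// applyGold updates the authoritative in-memory state synchronously and
// queues the change for durability; the caller never waits on the DB.
func applyGold(balances map[string]int, player string, delta int) {
	balances[player] += delta
	persistQueue <- GoldChange{player, delta}
}

// persistLoop runs in a background goroutine, draining changes to the
// database at whatever latency the DB can sustain. (Placeholder syntax
// is driver-dependent; '?' is the MySQL/SQLite style.)
func persistLoop(db *sql.DB) {
	for ch := range persistQueue {
		if _, err := db.Exec(
			"UPDATE players SET gold = gold + ? WHERE name = ?",
			ch.Delta, ch.Player,
		); err != nil {
			log.Printf("persist failed for %s: %v", ch.Player, err)
		}
	}
}

func main() {
	balances := map[string]int{"alice": 50}
	applyGold(balances, "alice", -10) // applied instantly, in memory
	// go persistLoop(db) would drain persistQueue in the background.
	fmt.Println(balances["alice"]) // 40
}
```

The game loop never blocks on the DB; the trade-off is a window where an applied change isn't yet durable, which is exactly the part delegated as a background task.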