A common way to handle #2 is to simply re-add the node to the priority queue. Yes, the node may then appear in the queue twice, but when you eventually pop a node you mark it as closed/processed, so if you later pop a stale duplicate reference you just check whether it has already been processed, and if so skip it and pop the next node in the priority queue.
I'm implementing an A* algorithm (mostly for fun and conceptual learning), and the open_list needs to do two things: 1) act as a priority queue so the closest nodes are searched first, and 2) let me check whether a node is already in the open_list before adding it.
This does create some duplicates, but in practice not many nodes get re-added. A node only gets duplicated when you find a shorter path to it before it has been processed, which won't happen for most of the nodes you search, so it's cheaper to push the occasional duplicate than to maintain a second data structure or enforce uniqueness inside the priority queue. A sketch of the approach follows.
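For what it's worth, here's a minimal sketch of that lazy-deletion approach using Python's heapq. The `graph`, `heuristic`, `start`, and `goal` names are placeholders for illustration, not anything from your code; it assumes nodes are hashable and comparable (e.g. `(x, y)` tuples), and that the graph is a dict mapping each node to `(neighbor, edge_cost)` pairs.

```python
import heapq

def a_star(graph, start, goal, heuristic):
    # (f_score, node) min-heap; duplicates are allowed and filtered on pop
    open_list = [(heuristic(start, goal), start)]
    g_score = {start: 0}       # best known cost from start to each node
    came_from = {}             # parent pointers for path reconstruction
    closed = set()             # nodes already processed

    while open_list:
        _, current = heapq.heappop(open_list)
        if current in closed:
            continue           # stale duplicate entry: skip it
        if current == goal:
            # walk parent pointers back to the start
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return list(reversed(path))
        closed.add(current)

        for neighbor, cost in graph.get(current, []):
            tentative_g = g_score[current] + cost
            if neighbor not in closed and tentative_g < g_score.get(neighbor, float("inf")):
                # Found a better path: record it and push a fresh entry.
                # Any older entry for `neighbor` stays in the heap and is
                # skipped later by the `closed` check above.
                g_score[neighbor] = tentative_g
                came_from[neighbor] = current
                heapq.heappush(open_list, (tentative_g + heuristic(neighbor, goal), neighbor))

    return None  # no path found
```

Note that the `g_score` dict doubles as your "#2" check: a neighbor is only pushed when it has never been seen or when the new path beats its recorded cost, so you never need to search the heap itself for membership.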