So you're suggesting I have a couple of concrete UI elements like Button and Image that use sprites (or whatever I end up calling them) by composition? The problem is that Image is a pretty basic thing, actually used by Button right now. I've got more complicated UI elements that contain multiple buttons, images, and labels.
Here's how I think you should do it, unless you find a better solution:
Regarding the scene graph, I think my approach basically works like that already, so I guess that's good.
A sprite is a sprite; it's singular. It also has no means of collecting input--that's far beyond its purview. What you may want instead is a base class, GUIElement. This class could provide default implementations for listening to and forwarding input, as well as for adding and removing children. The classes you derive from GUIElement can then hold as many sprites as is appropriate for whatever element they represent.
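To make that concrete, here's a minimal C++ sketch of what I mean. All the names (GUIElement, InputEvent, onInput) are made up for illustration; your engine will have its own event types:

```cpp
#include <algorithm>
#include <memory>
#include <vector>

// Hypothetical input event type; substitute your engine's own.
struct InputEvent { int x = 0, y = 0; };

class GUIElement {
public:
    virtual ~GUIElement() = default;

    void addChild(std::unique_ptr<GUIElement> child) {
        children_.push_back(std::move(child));
    }

    void removeChild(GUIElement* child) {
        children_.erase(
            std::remove_if(children_.begin(), children_.end(),
                [child](const auto& c) { return c.get() == child; }),
            children_.end());
    }

    // Default behavior: forward the event to children until one consumes
    // it, then give this element itself a chance via onInput().
    bool handleInput(const InputEvent& e) {
        for (auto& child : children_)
            if (child->handleInput(e))
                return true;
        return onInput(e);
    }

protected:
    // Derived classes override this with their own reaction to input.
    virtual bool onInput(const InputEvent&) { return false; }

private:
    std::vector<std::unique_ptr<GUIElement>> children_;
};

// A Button can hold several sprites by composition (background, icon,
// label sprite, ...) -- sprites stay purely visual, the element does input.
class Button : public GUIElement {
protected:
    bool onInput(const InputEvent&) override { return true; } // consumes clicks
};
```

The point of the split: sprites remain dumb drawables, while the GUIElement tree owns the hierarchy and the input routing.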
But what _is_ a sprite then? A visual representation of an entity? A "drawable"? So far I've actually been using the term synonymously with "entity".
You're right, I could just rename Sprite to Widget and my object hierarchy would suddenly make sense. In Qt, for example, widgets can be nested. But it felt wrong to call it Widget, since I'm, after all, working on a game here... There is a lot of UI, but there's more than UI, and I don't want the UI to dominate the terminology in my code base. When I add characters moving around, they would probably end up deriving from Widget...
Then again, I guess I don't need my individual entities to handle input the way widgets would; my past games usually had per-scene input handling. I think I'm mainly struggling with how to fit my UI (screens and widgets) into the same object system as my entities and scenes. My initial approach was to avoid UI-typical code and use entities and scenes for the UI instead. But one thing that was particularly annoying was central input handling...
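For what it's worth, the per-scene handling described above can stay fairly painless if only the entities that care about input opt in, and the scene just walks them until one consumes the event. A rough sketch, with all names hypothetical:

```cpp
#include <vector>

// Hypothetical input event type; substitute your engine's own.
struct InputEvent { int x = 0, y = 0; };

// Entities that want input opt in via this interface; everything else
// (characters, particles, ...) never sees input at all.
struct InputListener {
    virtual ~InputListener() = default;
    virtual bool onInput(const InputEvent&) = 0;
};

class Scene {
public:
    // The scene does not own the listeners; they are entities it already owns.
    void addListener(InputListener* l) { listeners_.push_back(l); }

    // Central per-scene dispatch: first listener to consume the event wins.
    bool dispatch(const InputEvent& e) {
        for (auto* l : listeners_)
            if (l->onInput(e))
                return true;
        return false;
    }

private:
    std::vector<InputListener*> listeners_;
};

// Example: a button-like element that consumes every event it receives.
struct ClickCatcher : InputListener {
    bool onInput(const InputEvent&) override { return true; }
};
```

This way the UI widgets and the game entities live in the same scene/entity system, and only the widgets (plus any input-driven entities) register with the dispatcher.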