As ever: We're discussing possibilities here. You need to decide what is helpful for your game and what can be ignored for now...
As I understand it: the machine is the parent and the zone is the child. So the machine does the rendering of the zone (for example's sake, the zone is a visible component).
But as you said earlier, an object doesn't do its own rendering, so does the machine add its sprite together with the zone's sprite and give it to the renderer as a 'spriteset' or so?
The mechanism of parenting does nothing more than express a spatial placement relative to another one. It sits outside of rendering, collision detection, and anything similar. Here is why and how:
In 2D a placement consists of a 2D position and an orientation angle. Those parameters can be used to compute a 3x3 homogeneous matrix expressing the transform of the sprite. The transform defines how to get from local space (also called model space) into the parent space. I usually define every model / sprite in the scene to be placed globally at first, so that the transform matrix inherent to the model / sprite is a world transform matrix. In other words, each model / sprite has a world placement.
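To make that concrete, here is a minimal C++ sketch of building the 3x3 homogeneous matrix from a position and an angle. All the names (Placement, Mat3, toMatrix) are invented for illustration, not taken from any particular engine:

```cpp
#include <cmath>

// Hypothetical minimal types, just for this example.
struct Placement {
    float x = 0.0f, y = 0.0f;  // 2D position
    float angle = 0.0f;        // orientation, in radians
};

// 3x3 homogeneous matrix, row-major; points transform as M * [x y 1]^T.
struct Mat3 {
    float m[3][3];
};

// Build the transform that maps local (model) space into the enclosing
// space: a rotation by 'angle' followed by a translation to (x, y).
Mat3 toMatrix(const Placement& p) {
    const float c = std::cos(p.angle);
    const float s = std::sin(p.angle);
    return Mat3{{{ c,  -s,   p.x },
                 { s,   c,   p.y },
                 { 0.f, 0.f, 1.f }}};
}
```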
In the case that a model / sprite needs to be parented, I usually add an explicit tool for this: the Parenting component. Attaching a Parenting makes the model / sprite the child. The Parenting includes a reference to another model / sprite, so that this other model / sprite becomes the parent of the child. The Parenting further includes a local Placement, i.e. a Placement that expresses the spatial relation to the referenced parent model / sprite.
In fact the Parenting component introduces a constraint on the world Placement of the child model / sprite: its world Placement is computed as the concatenation of the local Placement with the world Placement of the parent. That is a fancy description of what is essentially a matrix product.
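Continuing the Placement / Mat3 sketch above, the Parenting component and its constraint could look roughly like this (again, all names are illustrative assumptions):

```cpp
// A sprite in the scene: it always carries a world placement, here
// cached in matrix form.
struct Sprite {
    Placement world;      // world Placement, as described above
    Mat3 worldTransform;  // cached matrix form of that placement
};

// The Parenting component: a reference to the parent plus a local Placement.
struct Parenting {
    Sprite* parent;   // the referenced parent model / sprite
    Placement local;  // spatial relation to the parent
};

// Standard 3x3 matrix product.
Mat3 multiply(const Mat3& a, const Mat3& b) {
    Mat3 r{};  // zero-initialized
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

// The constraint itself: the child's world transform is the parent's
// world transform concatenated with the child's local transform.
void applyParenting(Sprite& child, const Parenting& p) {
    child.worldTransform = multiply(p.parent->worldTransform, toMatrix(p.local));
}
```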
So ... what you can see here is that the above solution couples models / sprites on a spatial level externally (external to the models / sprites themselves, because it is based on the introduced Parenting class). The coupling updates the world transform of whatever is made the child model / sprite. This means that if everything is up-to-date, every model / sprite has a valid world transform in its Placement, regardless of whether it is static or computed by a Parenting (or computed by another mechanism; I have something like a dozen of those, including animation, of course).
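The "if everything is up-to-date" part could be a per-frame update pass like the following sketch. It assumes (my assumption, not stated above) that the scene is ordered so parents come before their children; a real engine would maintain such an order explicitly:

```cpp
#include <vector>

// Hypothetical scene entry: a sprite plus its optional placement mechanism.
struct SceneObject {
    Sprite sprite;
    const Parenting* parenting = nullptr;  // nullptr => placed statically
};

// Resolve all placement mechanisms once per frame. Afterwards every
// sprite holds a valid world transform, whatever mechanism produced it.
void updatePlacements(std::vector<SceneObject>& scene) {
    for (SceneObject& obj : scene) {
        if (obj.parenting)
            applyParenting(obj.sprite, *obj.parenting);
        // else: the world transform is already authoritative (static,
        // animation-driven, or set by some other mechanism)
    }
}
```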
Now, when collision detection or rendering comes to work, there is a set of models / sprites, each with a valid Placement in the world. That is all the respective sub-system needs to know. "Parenting? What's that?" says the renderer.
As we are talking about doing the rendering outside the object, how does this work? Does the renderer iterate over all the objects, take their sprites, and draw them at the coordinates + rotation of the object?
Yes. The renderer iterates the scene. Maybe there is a subset of scene objects that denotes all "drawables" or so. Either way, the renderer finds the objects and uses the sprite as the visual representation together with the (as we know, valid) world placement's transform matrix. It usually does frustum / viewport culling with the help of some bounding box first.
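A skeleton of such a renderer loop, continuing the sketch above. Note that worldBounds and submit are left as declarations only; how sprite extents are transformed and how the actual draw call works is engine-specific and elided here:

```cpp
// Axis-aligned bounding box in world space (hypothetical helper type).
struct Aabb { float minX, minY, maxX, maxY; };

// Standard AABB overlap test.
bool overlaps(const Aabb& a, const Aabb& b) {
    return a.minX <= b.maxX && b.minX <= a.maxX &&
           a.minY <= b.maxY && b.minY <= a.maxY;
}

Aabb worldBounds(const Sprite& s);               // elided: sprite extents in world space
void submit(const Sprite& s, const Mat3& world); // elided: the actual draw call

void render(const std::vector<SceneObject>& drawables, const Aabb& viewport) {
    for (const SceneObject& obj : drawables) {
        // Viewport culling with the bounding box first.
        if (!overlaps(worldBounds(obj.sprite), viewport))
            continue;
        // All the renderer needs: the sprite and its valid world transform.
        // It never sees the Parenting component.
        submit(obj.sprite, obj.sprite.worldTransform);
    }
}
```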