I think you should first separate the ideas of geometry, buffers, and the game objects themselves.
Geometry is just that: a collection of vertices and faces that make up some shape. It is not the game object itself, nor is it a buffer. It is just points and faces.
Buffers are render objects; they live behind the interface of your rendering system. You can have a buffer created from geometry, but again, it is not geometry in and of itself.
Your game object is the higher-level object in game or engine code.
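To make that separation concrete, here's a rough sketch in C++. All the names (`Geometry`, `BufferHandle`, `GameObject`) are made up for illustration, not from any particular engine or API:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Geometry: just points and faces. No GPU knowledge, no game knowledge.
struct Geometry {
    std::vector<float>    vertices; // x,y,z triples
    std::vector<uint32_t> indices;  // triangles indexing into the vertices
};

// Buffer handle: an opaque ticket handed out by the rendering system.
// Game code never touches the GPU resource behind it.
struct BufferHandle {
    uint32_t id = 0;
};

// Game object: lives in game/engine code. It holds a handle to a buffer,
// not the geometry or the GPU resource itself.
struct GameObject {
    std::string  name;
    BufferHandle mesh;          // which buffer to draw
    float        transform[16] = {}; // world matrix
};
```

The point is that each type knows as little as possible about the others: geometry is pure data, the handle is opaque, and the game object only references the handle.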
Your game wants some entities, so it creates game objects and adds them to the world. The objects have visual representations, so their geometry is loaded from files. Since this process starts in game code, the game knows what formats are needed, and buffer objects can be created via the rendering system. (So the rendering system has no idea about the game objects, yet can be told how to structure the buffer data.) When you render the scene, each game object's Render method can pass the rendering system the game-side info it needs: which buffers, which shaders, and which constants (transforms, lighting info, etc.). After the game objects have pushed their data, the renderer can make the draw calls, knowing how many of each object there are, where they are, and which of them need shadows, using a batching system to reduce stalls.
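That submit-then-flush flow might look something like this minimal sketch, assuming a simple queued submission model (again, all names are hypothetical):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

struct BufferHandle { uint32_t id = 0; };

// One submitted draw: everything the renderer needs, nothing game-specific.
struct DrawItem {
    BufferHandle buffer;
    uint32_t     shader = 0;
    float        transform[16] = {};
};

class Renderer {
public:
    // Game code hands over raw vertex data; the renderer owns the result.
    BufferHandle createBuffer(const std::vector<float>& vertexData) {
        buffers_.push_back(vertexData);
        return BufferHandle{static_cast<uint32_t>(buffers_.size())};
    }

    // Game objects push their draw data during the frame...
    void submit(const DrawItem& item) { queue_.push_back(item); }

    // ...then the renderer flushes once, free to reorder and batch.
    // A real implementation would sort by (shader, buffer) and merge runs;
    // this stub just returns how many draws were issued.
    std::size_t flush() {
        std::size_t drawCalls = queue_.size();
        queue_.clear();
        return drawCalls;
    }

private:
    std::vector<std::vector<float>> buffers_;
    std::vector<DrawItem>           queue_;
};
```

Because the renderer sees the whole frame's submissions before drawing, it gets to decide the draw order, which is exactly where batching and state sorting live.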
Bear in mind that I have little to no idea what I'm talking about on any professional level; I've only been hobby developing for a few years.
TL;DR: Your render API should expose methods to create buffers and draw them, and little more. Internally it can use information deduced from the draw calls to optimize for performance. In a sense, your meshes are tightly coupled to your render API, but they are also encapsulated behind it, and that little detail reduces coupling from outside the renderer, where game code has little business poking around.
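As an interface, that "little more" could be as small as this (a sketch, not a real API; the trivial counting implementation is only there to show the shape):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

struct BufferHandle { uint32_t id = 0; };
struct DrawItem {
    BufferHandle buffer;
    uint32_t     shader = 0;
    float        transform[16] = {};
};

// The whole public surface: create buffers and draw them. Batching,
// state sorting, and GPU details all stay behind this wall.
class IRenderer {
public:
    virtual ~IRenderer() = default;
    virtual BufferHandle createBuffer(const std::vector<float>& data) = 0;
    virtual void draw(const DrawItem& item) = 0;
};

// Stand-in implementation, purely for illustration.
class CountingRenderer : public IRenderer {
public:
    BufferHandle createBuffer(const std::vector<float>& data) override {
        sizes_.push_back(data.size());
        return BufferHandle{static_cast<uint32_t>(sizes_.size())};
    }
    void draw(const DrawItem&) override { ++drawCalls_; }
    std::size_t drawCalls() const { return drawCalls_; }

private:
    std::vector<std::size_t> sizes_;
    std::size_t              drawCalls_ = 0;
};
```

Anything the game can't do through this interface is, by construction, something it can't couple itself to.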