Quote:Original post by DadleFish
Well, the examples I gave you as well as the cutdown of Ogre shown in the thread are full featured ones. All I'm saying is that you shouldn't fear it, as we're not talking about something nearly in the magnitude of 100K of lines.
Yes, thanks to you and swiftcoder I'm now convinced that it's possible to have a full-featured wrapper with a reasonable amount of code. However, I'm still concerned about flexibility. I also see my engine as a place for experiments, so what happens if the newest nVidia or ATI GPU offers a new OpenGL extension and I want to experiment with it? If I build my renderer on top of a DX/GL wrapper, I don't have access to it. Maybe my wrapper should expose an extensions or capabilities system.
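To make that concrete, here is a minimal sketch of what such a capability/extension query could look like on the wrapper's renderer interface. All names (Capability, IRenderer, supports, supportsExtension) are hypothetical, purely to illustrate the idea:

```cpp
#include <string>

// Illustrative set of features a backend may or may not provide.
enum class Capability {
    FloatTextures,
    GeometryShaders,
    VendorSpecificExtension   // anything exposed by the latest driver
};

struct IRenderer {
    virtual ~IRenderer() = default;

    // Query before using a feature; backends that don't know it return false.
    virtual bool supports(Capability cap) const = 0;

    // Escape hatch for raw extension strings (e.g. "GL_NV_..."),
    // so new extensions can be probed without changing the enum.
    virtual bool supportsExtension(const std::string& name) const = 0;
};
```

The enum covers the features the engine knows how to use, while the string-based query leaves room to experiment with extensions the wrapper has never heard of.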
Quote:Original post by DadleFish
I can't understand why would you use a visitor pattern.
Because if the scene graph doesn't know about the renderer, the only solution is to have the renderer visit the scene graph. The renderer is the visitor; the scene graph is the structure being visited.
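A minimal sketch of that arrangement, with hypothetical node and visitor types (MeshNode, GroupNode, SceneVisitor, Renderer are illustrative names): the scene graph only sees the abstract visitor, never the concrete renderer.

```cpp
#include <iostream>
#include <memory>
#include <vector>

struct MeshNode;
struct GroupNode;

// The visitor interface; this is all the scene graph ever knows about.
struct SceneVisitor {
    virtual ~SceneVisitor() = default;
    virtual void visit(MeshNode& node) = 0;
    virtual void visit(GroupNode& node) = 0;
};

struct SceneNode {
    virtual ~SceneNode() = default;
    virtual void accept(SceneVisitor& v) = 0;
};

struct MeshNode : SceneNode {
    void accept(SceneVisitor& v) override { v.visit(*this); }
};

struct GroupNode : SceneNode {
    std::vector<std::unique_ptr<SceneNode>> children;
    void accept(SceneVisitor& v) override {
        v.visit(*this);
        for (auto& c : children) c->accept(v);   // recurse into the graph
    }
};

// The renderer is just one possible visitor; the graph never names it.
struct Renderer : SceneVisitor {
    void visit(MeshNode&)  override { std::cout << "draw mesh\n"; }
    void visit(GroupNode&) override { std::cout << "enter group\n"; }
};
```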
Quote:Original post by DadleFish
Perhaps I wasn't too clear, sorry. The whole idea of abstraction is that the upper layer doesn't know which API is used by the renderer object(s). It only knows the renderer object(s) interfaces, and works with them. My intention was to hide the API from the scenegraph, for example; you would call "m_pRenderer->ClearScreen()" and you wouldn't know or care how it's done.
However, I do not think that the scenegraph should be ignorant to the actual existence of the renderer (like in the DOCVIEW pattern you've mentioned). I can't really understand why would you do it. After all, it is probable that you would have a single renderer object in your system, just like you have std::cout, and it's unreasonable IMHO that stdout would search for relevant information. Besides, its the scenegraph job to actually feed the renderer with meshes, textures and so on in order for the renderer to send them as polygons to the video adapter.
That's the way it is generally done, but I want to be even more radical and completely hide the renderer interface from the scene code, because the scene code doesn't really need to know about it.
Why should the scene graph feed the renderer with meshes and textures? The job of the scene graph is to store and organize data, nothing more than that. The renderer can perfectly well read the references to meshes and textures stored in the scene graph nodes and load them without further external help. Moreover, the scene graph can't even know which resources the renderer needs at any given moment unless it implements some visibility algorithm; but again, visibility is not something it should be concerned with, that's a task for the renderer.
Obviously some parts of the high-level code will know the renderer interface: to be precise, the parts that instantiate it (getting the object from a factory), call its render method every frame, and destroy it on cleanup.
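Roughly, that high-level code could look like the sketch below; IRenderer, NullRenderer and RendererFactory are illustrative names, not an existing API:

```cpp
#include <memory>

struct SceneGraph { /* nodes, resource handles, ... */ };

struct IRenderer {
    virtual ~IRenderer() = default;
    virtual void render(const SceneGraph& scene) = 0;
};

// Trivial stand-in implementation so the sketch is self-contained.
struct NullRenderer : IRenderer {
    void render(const SceneGraph&) override {}
};

struct RendererFactory {
    // In a real engine this would pick a GL or DX implementation;
    // the caller never learns which one it got.
    static std::unique_ptr<IRenderer> create() {
        return std::make_unique<NullRenderer>();
    }
};

int main() {
    SceneGraph scene;
    auto renderer = RendererFactory::create();   // instantiation
    for (int frame = 0; frame < 3; ++frame) {
        // ... physics, AI, etc. update the scene graph here ...
        renderer->render(scene);                 // per-frame call
    }
}                                                // cleanup via unique_ptr
```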
I know that this might sound odd from an object-oriented perspective, but it's a common solution in generic programming. Think about the STL: you have data structures and algorithms, and the data structures know nothing about the algorithms; they just do their job of representing data. In my case the data structure is a graph (a DAG, to be precise), while the renderer uses the appropriate algorithms to visit it and gather information. I'm seriously considering using BGL (the Boost Graph Library) for this.
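As a rough sketch of how that could look with BGL (the node payload, graph typedef and visitor names are mine, purely illustrative): the graph just stores the data, and a DFS visitor plays the renderer's role during traversal.

```cpp
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/depth_first_search.hpp>
#include <iostream>
#include <string>

// Hypothetical per-node payload; a real engine would store transforms,
// mesh/texture handles and so on.
struct SceneNode {
    std::string name;
};

// The scene graph as a BGL directed graph with bundled vertex properties.
using SceneGraph =
    boost::adjacency_list<boost::vecS, boost::vecS, boost::directedS, SceneNode>;

// The "renderer" side: a DFS visitor. The graph type knows nothing about it.
struct RenderVisitor : boost::default_dfs_visitor {
    template <typename Vertex, typename Graph>
    void discover_vertex(Vertex v, const Graph& g) const {
        std::cout << "rendering " << g[v].name << "\n";
    }
};

int main() {
    SceneGraph g;
    auto root  = boost::add_vertex(SceneNode{"root"},  g);
    auto group = boost::add_vertex(SceneNode{"group"}, g);
    auto mesh  = boost::add_vertex(SceneNode{"mesh"},  g);
    boost::add_edge(root, group, g);
    boost::add_edge(group, mesh, g);

    boost::depth_first_search(g, boost::visitor(RenderVisitor()));
}
```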
Quote:Original post by DadleFish
As for physics and AI - these are indeed higher than the scenegraph and they manipulate it. The renderer isn't such a case. Even if you think in docview terms, you wouldn't say the physics/AI are views manipulating the doc. They are more of the logic.
That's correct. I mentioned the DocView pattern just as an example, but I'm not strictly following it. What I have in mind is a central data structure (the scene graph) with the various engine subsystems accessing and modifying it. Ideally, the data structure will never know about the subsystems, and the subsystems will depend only on the data structure (ignoring each other).
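In code, the dependency direction would look something like this sketch (all names illustrative):

```cpp
// Every subsystem depends on the scene graph; the scene graph depends on
// nothing, and the subsystems never reference each other.
struct SceneGraph { /* nodes, transforms, resource handles ... */ };

struct Physics  { void step(SceneGraph&, float /*dt*/) { /* move bodies   */ } };
struct AI       { void think(SceneGraph&)              { /* update agents */ } };
struct Renderer { void render(const SceneGraph&)       { /* draw the DAG  */ } };

void frame(SceneGraph& scene, Physics& ph, AI& ai, Renderer& r, float dt) {
    ph.step(scene, dt);   // writes transforms into the graph
    ai.think(scene);      // reads/writes game state stored in the graph
    r.render(scene);      // only reads; the graph never calls back
}

int main() {
    SceneGraph scene; Physics ph; AI ai; Renderer r;
    frame(scene, ph, ai, r, 0.016f);
}
```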
Quote:Original post by DadleFish
The idea behind abstraction isn't minimizing the code. The idea is to get the core systems of your application (game) ignorant to their actual environment. Something like "hey, read that house from a file and display it, I don't care how you do it"; so you'd abstract your OS and your rendering API. The idea is to separate the logic from the actual tidbits of DOING stuff, so you can later replace the API with anything else.
I completely agree, but I would add that if abstraction is good, orthogonality (as implemented in the STL, BGL and other generic libraries) is even better, since it adds an even higher degree of abstraction, gives true separation of concerns, and generally also minimizes the code.
Maybe my solution will turn out to be overkill, but after all this is not a real project. I'm not going to create a new Unreal Engine or some sort of killer application; it's just a way to experiment and learn.
Thanks again for your comments and suggestions.