
Creating an Engine API


3 replies to this topic

#1 Shael   Members   -  Reputation: 277


Posted 02 February 2012 - 09:47 PM

I'd like to know what methods others have chosen to create an API for their engine. I have a couple questions to help get the ball rolling.
  • How do you organise your engine in the file system so that you can easily generate a copy of the source files required by the end user?
  • Do you prefer to use abstract interfaces for everything, or do you export full classes and use the PImpl idiom to hide implementation details and reduce virtual calls?

Currently my file system is arranged so that folders represent modules of the engine, i.e.:

EngineName
	 -> Source
		  -> Platform
					-> .h/.cpp
		  -> IO
		  -> Gfx

However, at the moment it would be a manual process to go through each module and copy the appropriate headers out into a replicated folder structure for use by the end user in their project. I'd like to find an automated way of doing this.

I've been going down the track of exporting classes and using PImpls, but this does make the engine code a little uglier and less straightforward. So I've been questioning whether to switch to abstract classes, at the cost of potentially maintaining a near-duplicate abstract interface for everything the end user needs, and suffering the overhead of virtual calls.
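For concreteness, here is a minimal sketch of the two approaches being weighed. The names (IRenderer, Renderer, createRenderer) are hypothetical, not from any particular engine; the point is only the shape of what the end user sees in each case.

```cpp
#include <memory>

// Option 1: abstract interface. The user only ever sees IRenderer;
// every call goes through the vtable, and creation needs a factory.
class IRenderer {
public:
    virtual ~IRenderer() = default;
    virtual void draw() = 0;
    virtual int drawCalls() const = 0;
};

// Engine-side implementation (would live in a .cpp the user never sees).
class GLRenderer : public IRenderer {
    int calls_ = 0;
public:
    void draw() override { ++calls_; }
    int drawCalls() const override { return calls_; }
};

std::unique_ptr<IRenderer> createRenderer() {
    return std::make_unique<GLRenderer>();
}

// Option 2: exported concrete class with PImpl. The user includes the
// real class and can hold it by value; platform details hide behind
// the opaque Impl, and no call is virtual -- but every member function
// forwards through an extra pointer indirection.
class Renderer {
    struct Impl;                    // defined only in the engine's .cpp
    std::unique_ptr<Impl> impl_;
public:
    Renderer();
    ~Renderer();
    void draw();
    int drawCalls() const;
};

// Engine-side definitions (normally in Renderer.cpp):
struct Renderer::Impl { int calls = 0; };
Renderer::Renderer() : impl_(std::make_unique<Impl>()) {}
Renderer::~Renderer() = default;
void Renderer::draw() { ++impl_->calls; }
int Renderer::drawCalls() const { return impl_->calls; }
```

With option 1 only the IRenderer header ships in the SDK; with option 2 the Renderer header ships, but its .cpp (and the Impl definition) stays inside the engine.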

Your thoughts and experience?


#2 phr34k9   Members   -  Reputation: 152


Posted 03 February 2012 - 05:28 AM

In my earlier API designs I arranged the file system hierarchy to match modules, much like what you described. In my case this resulted in a lot of sparse directories, with on average maybe 3 files per folder, and that caused productivity to drop due to the increasing navigational burden.

The API design I'm using right now isolates classes into two layers: system and engine. The system layer provides all the common abstractions you would expect from a modern 'base class library' — think file I/O, network I/O, memory streams. The engine layer implements the remainder, i.e. scene graphs, world objects, render queues.

I go to great lengths to adhere to a file structure and naming conventions so that the code presented to the user is clean and matches the kind of organizational structure you would find in the Microsoft .NET Framework. A small example: I consciously engineer my headers so they are imported as '#include <System.IO/BinaryReader.h>'. The folder structure for each isolation layer includes 'includes' (.h for public usage), 'source' (.cpp), and 'internals' (.h for private usage), so building an SDK becomes as easy as linking everything together and merging the folder structures.

Whether to use interfaces or PImpl is still a case-by-case decision, based on how I want people to use the classes. For instance, the binary reader describes 'sealed' functionality: does it make sense for people to implement their own IBinaryReader, or does it make more sense that people use BinaryReader directly, with the platform-specific code simply absent from the class they see? An extra advantage of the PImpl idiom is that allocation is significantly simplified — i.e. it can occur on the stack and doesn't rely on factory classes for allocations.
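To illustrate the stack-allocation point, here is a hypothetical sketch of such a sealed BinaryReader (this API is invented for the example, not the poster's actual code): the user constructs it by value, no factory needed, while its state stays behind the opaque Impl.

```cpp
#include <cstdint>
#include <cstring>
#include <memory>
#include <vector>

// Public header the SDK user sees: a sealed class, no virtuals,
// no factory. Internal state hides behind the opaque Impl pointer.
class BinaryReader {
    struct Impl;
    std::unique_ptr<Impl> impl_;
public:
    explicit BinaryReader(std::vector<std::uint8_t> bytes);
    ~BinaryReader();
    std::uint32_t readUInt32();   // reads the next 4 bytes
};

// Engine-side definitions (normally in BinaryReader.cpp):
struct BinaryReader::Impl {
    std::vector<std::uint8_t> data;
    std::size_t pos = 0;
};

BinaryReader::BinaryReader(std::vector<std::uint8_t> bytes)
    : impl_(std::make_unique<Impl>()) {
    impl_->data = std::move(bytes);
}
BinaryReader::~BinaryReader() = default;

std::uint32_t BinaryReader::readUInt32() {
    std::uint32_t v = 0;
    std::memcpy(&v, impl_->data.data() + impl_->pos, sizeof v);
    impl_->pos += sizeof v;
    return v;   // note: host byte order; this sketch assumes little-endian
}
```

Usage is just `BinaryReader r(bytes); auto x = r.readUInt32();` — the object lives on the stack, unlike the factory-allocated objects an interface-based design would force on the user.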

Also, to come back to folder replication: there are always scripts you can use to automate the task, e.g. in PowerShell: Copy-Item dir1 dir2 -Recurse; Set-Location dir2; Get-ChildItem -Recurse -File | Where-Object { $_.Extension -notin '.h', '.incl' } | Remove-Item

#3 turch   Members   -  Reputation: 590


Posted 03 February 2012 - 07:48 AM

(I used these techniques for multiple programmers working on one project, but a lot of it applies to distributing an API as well, especially if you give source code access)

Symbolic links and svn externals are your friends here. So is CMake. I was working on a 3-programmer project a while ago, with each programmer working on a discrete layer (graphics, gameplay, and utility), and we didn't have a build server. It took me about 4 days of playing around with CMake to get everything working right, but the investment was well worth it.

The file structure looked like this:
src
  util
	include
	src
  ..
bin
assets
  textures
  meshes
  ...

The src directory in svn had externals to bin and assets, so someone checking out src automatically got those checked out.

CMake then did several things:
1. Automatically detected install paths for DX, OGL, and other programs such as Maya / 3ds Max (if the user was compiling things that needed them) and added them to the generated projects / makefiles. If a directory wasn't found through standard system / environment variables, the user had the option of adding it manually.
2. Made a symlink to assets in the build directory.
3. Added pre-build events to copy binaries out of src/bin to bld/Debug and bld/Release (for VS).
4. Added post-build events to copy binaries out of bld/Debug and bld/Release to src/bin.

The last two were done because we did not have a build server, and each programmer didn't want to compile the code in the layers below the one they were working on. So if util and graphics were changed and compiled by the people working on them, then after they committed, the game programmer would update and get new binaries without needing to build them or add those projects to their own build. I used old-fashioned copying rather than a symlink (as with assets) for two reasons: we wanted to keep the VS standard of compiling to /Debug and /Release while still having one unified bin directory in svn; and we didn't want to pollute the bin directory with all the VS-specific generated files, but at the same time we didn't want to delete them, because they were useful.

#4 Shael   Members   -  Reputation: 277


Posted 05 February 2012 - 04:07 PM

In my case this resulted lot of sparse directories with on average maybe 3 files per folder. And that caused productivity to drop due increasing navigational burden.


I'm not too worried about this, as most of my modules contain a good portion of functionality, not just a couple of files. My main concern was determining from the tree structure what the end user requires in order to build an SDK. One idea was to have public/private directories within each module; that way a Python script or something could scan only the public directories and copy their files out to form the SDK.

Whether to use interfaces or pimpl is still a case by case decision.


You make a good point about Interfaces vs. PImpl and I think I should start looking at it as a case-by-case problem rather than an overall design decision of the engine.

I'd be interested to see what more people think about this.






