Project Darkstar


#61 jeffpk   Members

Posted 12 September 2009 - 04:03 PM

Quote:
Quote:

But the implementation is where the bugs, limitations and issues arise. No?


Of course, but the fact is that the model does not drive the implementation, meaning that you cannot claim it is the fault of the non-transactional model. There are many games that do not suffer from the same drawbacks.


I disagree. Java has no wild pointer errors. Why? Because the system makes it impossible to create them.

The PDS makes it impossible to create race conditions. That is a result of the transactional model.

This is why enterprise systems use transactional models, after all: to ensure data integrity.
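To make that concrete, here is a minimal sketch of the kind of application code this enables (Darkstar API names from the com.sun.sgs.app package as I recall them; treat the exact signatures as approximate). Two of these tasks can run at once on different nodes; if both hit the same Counter, the transaction system detects the conflict and transparently re-runs one of them, with no locks in application code.

Code:
import com.sun.sgs.app.AppContext;
import com.sun.sgs.app.DataManager;
import com.sun.sgs.app.ManagedObject;
import com.sun.sgs.app.Task;
import java.io.Serializable;

class Counter implements ManagedObject, Serializable {
    private static final long serialVersionUID = 1L;
    int value;
}

class IncrementTask implements Task, Serializable {
    private static final long serialVersionUID = 1L;
    public void run() {
        DataManager dm = AppContext.getDataManager();
        // Assumes setup code earlier did: dm.setBinding("score", new Counter());
        Counter c = (Counter) dm.getBinding("score");
        dm.markForUpdate(c);  // declare the write inside this transaction
        c.value++;            // plain single-threaded-looking code
    }
}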

Quote:
Quote:

Fair enough. A PDS app can do the same thing if it wants.


I agree and acknowledge this; however, I would find PDS less attractive if I needed to bootstrap my own data store and persistency implementation on top of it. This is just a personal opinion, however.


Fair enough. Any personal opinion is valid as a personal opinion, if nothing else :)

I'm not sure I understood the sentence, though, since you *don't* need to create your own persistency implementation. It's transparent, part of the system, and a default implementation ships with it.

Am I simply being dense and thus not following the statement?

Quote:
Quote:

Well, its goal is not to be a game engine.


Agreed; however, many of the claims made about PDS are game-related and I believe are a bit misleading. For instance, the claims about zoning and sharding are a bit disingenuous because of the general use of zoning and sharding as game-play mechanics rather than physical limitations.


Disingenuous is a fighting word. Maybe you want to choose another? Because it's a perfectly genuine statement.

And I am the *first* to admit, and even sometimes encourage, zoning as a game design premise. But what PDS does is make it a game design decision and NOT something that is necessitated by the architecture of the system.

Furthermore, even if you create Zones and Shards, the PDS allows any system to process any part of those zones and shards at any time. This gives you both fail-over and backend-wide load balancing properties you cannot get from a hard-zoned architecture.

My project right now is *highly* "zoned", in fact, for game-play reasons. But PDS makes it easy (no work, in fact) to just throw up zones as needed and use any part of my machine room to process them. And should any one machine fail, the zone stays alive and simply moves its processing to another node.

So actually, it's a BETTER zoning system, if you want to look at it that way :)

"Zoneless/Shardless" is market-speak. And market-speak is always over-simplified ;)

Quote:


And I certainly wish you luck in this endeavor and will watch the progress of PDS. As I said before, I am only critical of the manner in which its public face has been presented in this thread.


Oh and I SURE hear you here. I myself am automatically VERY skeptical of anything I think is being "over-hyped". In fact if that's what you are responding to then I don't blame you for being a bit belligerent.

But there is always a dance you have to do between making enough noise in simplistic enough terms to get the common press to pick you up, and being detailed enough about the real technology for real technologists. I like to think our forums are pretty serious and technical. If some of the public buzz has seemed a bit "over-enthusiastic", well, I apologize for that too.

I DO love this thing, though. I've been a software engineer for 25 years and I really believe that if I will be remembered for any contribution to coding, this is probably it. And I personally love how it simplifies my coding life.

Now there ARE some hiccups still. I'm glad to admit that, and we talk about these things on the forums. There is an issue right now with upgrading code in "serialization incompatible" ways. There are workarounds, but none of them are what I would consider ideal. I'm actually working on a solution for this right now that I'm hoping my employers will let me return to the community.

Quote:


Actually, you are quite correct, it was a very, very poor choice of words on my part and I apologize profusely. A more proper thing to say is probably that it is targeted at those with little to no experience in scalable server-side persistency and state management solutions. I understand it definitely has enterprise-level application (why else would Sun pick it up?) and the word hobbyist has the wrong connotations.


Fair enough. It is certainly intended to make it so an expert doesn't have to do this stuff. I believe the ultimate goal of all software engineers should be to put ourselves out of the business of solving the same problems over and over :)

But I do have to say that every expert on multi-processing I've talked with (and there have been quite a few over the years) has had the same reaction I have to it. It's hard, it's error-prone, and in the end we all end up falling back on less than optimal solutions that we can at least reasonably prove will work.

So I think it's a great thing to let the computer do, and it lets us get on to more creative pursuits. In that sense, it's freeing for experts in the same way that high-performance C optimizers freed experts from byte-tweaking.

Quote:
Quote:

What is "out of scope"? I'm happy to admit that all game engine functionality is out of scope. PDS is a platform for game engines, not a game engine itself. There are many game functions it does not provide. But it does provide the environment to build them and the hooks to tie them in.


The data collection/mining part of the back-end that I mentioned in my previous post. And it is fine that it is not part of the PDS persistency, but you cannot pretend that the PDS model makes it unnecessary.

Oh, not at all, if it's important to you. I never intended to suggest otherwise. But different kinds of apps have different kinds of needs in this. In my current app we will be writing out data on user actions to an SQL database for customer profiling, and economic info for tracking our economy.

And a queryable database is indeed the RIGHT tool for creative data mining and analysis.

But such selective storage, as we established, is no different from what MMOs do today. So I agree we don't address that space and make it better, but it's no worse :)

Quote:

The issue mainly comes down to the fact that there is a lot of personal opinion that goes into how a particular group or individual would want to implement certain portions of the model that PDS is offering.


Sure. And for some people it may be perfect as is. For some people it may be great with some of the plug points changed or extended. And for some people it may be totally wrong. I regularly tell people who show up at the forums and ask "Can I build my FPS with this?" that they can build their game matching and score keeping, but they will never get the kind of split-second timing needed for that sort of gameplay, and they will have to do what every other system does and throw off a process to handle it.

There are no silver bullets. Just good tools. But a hammer never makes a good screwdriver.


Quote:

Some of us are real game developers with real game development experience, including MMOs. Some of us see real drawbacks in the PDS implementation and are expressing our opinions on the subject.


And no one is ever going to agree with all the choices someone else makes. A lot of my time in the game industry BEFORE Sun was spent writing library code for games, and one thing I learned VERY early was that you need to build these things in layers so people can strip off what they don't want or need. I've done my best to push that modularity during the time I was with the PDS team, and I hope it shows in the system.

Frankly, there are a few things in the current API set that make me cringe. I would have done them differently, but part of working with a team is compromise and being open to others' ideas. And some of what the PDS team brought to the project I wouldn't have done... and they were right. So if in one or two places I still think I'm right, that's a small price to pay for so many great brains on the project.

Quote:

That was the crux of my reply this afternoon, as it seems that you feel our comments and opinions are misguided or inexperienced. They aren't, they simply speak to the idea that PDS may not be a silver bullet for everyone.


Yup, there are no silver bullets.

But now I'm going to ask you to please try to see a bit now from my POV. My introduction to this thread was a rather strident statement of "I've done the analysis and PROVED it can never work", which was totally misguided and based on almost total ignorance about the system, how it worked, and what its goals were.

It was followed with the dismissive statement that "all it is, is an object database" which was equally ignorant and incorrect. The real paradigm shift is not in the database at all but in the execution model of the code.

So if I got my hackles up, well, if you could handle that without doing so, all I can say is you're a better man than I am.

For the record I never, ever intended to imply any ignorance of game programming. Just ignorance of the PDS. And that was what I was presented with on entry.

#62 jeffpk   Members

Posted 12 September 2009 - 04:17 PM

Quote:

How pluggable is the DataStore module as far as swapping storage formats without scrapping the module interface? For instance, if I wanted to change database providers, have some crazy idea about the blazing speed of XML, or want to do remoting?


Access to the DataStore is done through a system manager, the DataManager. It's pluggable in a few different ways.

The first object invoked for any event is fetched from the system DataManager. So if you need to completely replace the persistence, you would replace that. At its highest level it's just an interface that gets objects from IDs and symbolic bindings and manages the locks taken on them, and conversely allows you to stick objects into the DataStore and get back the bindings and IDs.

There are a few semantic things you have to preserve if you want to preserve the PDS execution model. The PDS execution model assumes the DataStore will do deadlock detection and throw a specific exception if you try to take a write lock that would result in deadlock. Originally we used BDB's deadlock detection, but for a number of reasons Tim (the DB guy) is replacing that right now with a separate deadlock detection implementation that can provide more info back to both the user and the system.
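A minimal, self-contained sketch of that retry contract (every name below is invented for illustration; the real Darkstar classes differ): the store throws when it detects a deadlock, and the scheduler aborts and re-runs the task without application code ever seeing the retry.

Code:
// Invented names throughout; this only illustrates the contract.
class DeadlockDetectedException extends RuntimeException {}

interface TransactionalTask {
    void run() throws Exception;  // body uses the data store freely
}

class RetryingScheduler {
    private static final int MAX_RETRIES = 5;

    void execute(TransactionalTask task) throws Exception {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            try {
                task.run();  // runs inside one transaction
                return;      // committed: done
            } catch (DeadlockDetectedException e) {
                // The store broke a lock cycle by aborting this task's
                // transaction; its effects are rolled back, so just rerun.
            }
        }
        throw new Exception("still deadlocking after " + MAX_RETRIES + " tries");
    }
}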

So I *believe* there are three levels to it now: the deadlock detector, a middle layer I'm a bit fuzzy on, and the lowest layer that actually puts and gets objects from storage. Beyond that, you'd either have to look at the Java docs or ask Tim for details (tjb on the PDS forums).

Edit: I may have implied this but not said it... AIUI all are pluggable, all the way up to the DataManager interface presented to the rest of the system.

Now, you can also plug in parallel managers to access data anywhere else. You just need to put the results in transient fields on the objects the PDS IS managing, so they don't get saved and restored as part of the state of those objects.
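For example (a minimal sketch; PlayerProfile and its fields are invented, and only ManagedObject is real API): the transient field is skipped by Java serialization, so the externally fetched data is never persisted as part of the managed object's state.

Code:
import com.sun.sgs.app.ManagedObject;
import java.io.Serializable;

class PlayerProfile implements ManagedObject, Serializable {
    private static final long serialVersionUID = 1L;
    String name;                      // persisted by the DataStore
    transient Object cachedSqlStats;  // fetched through a parallel manager;
                                      // NOT saved or restored with the object
}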

Is that helpful? Beyond that level I think you need Tim...

#63 arbitus   Members

Posted 12 September 2009 - 04:41 PM

Quote:
Original post by jeffpk

Yup, there are no silver bullets.

But now I'm going to ask you to please try to see a bit now from my POV. My introduction to this thread was a rather strident statement of "I've done the analysis and PROVED it can never work", which was totally misguided and based on almost total ignorance about the system, how it worked, and what its goals were.

It was followed with the dismissive statement that "all it is, is an object database" which was equally ignorant and incorrect. The real paradigm shift is not in the database at all but in the execution model of the code.

So if I got my hackles up, well, if you could handle that without doing so, all I can say is you're a better man than I am.

For the record I never, ever intended to imply any ignorance of game programming. Just ignorance of the PDS. And that was what I was presented with on entry.


It is understandable to get upset when people criticize your "baby." But you have to be a bit careful not to get too carried away, and have a bit of a thick skin when it comes to criticism. I think perhaps you took many of the comments in this thread a bit too much to heart.

Quote:

Is that helpful? Beyond that level I think you need Tim...


Yes, it is very helpful. Unfortunately, I do not think PDS is right for my current needs (well, not my needs, but the needs dictated to me by my boss), but I will definitely keep tabs on it for future projects.


#64 jeffpk   Members

Posted 12 September 2009 - 04:47 PM

Well, thank you for putting up with my hot temper and giving me the chance to explain "my baby."

Honestly I was asked to get involved here and I ALMOST didn't, because right now I'm under pretty big deadline pressure... and I *know* my asshole quotient goes up in these times. :)

I'm glad we were able to work through it.



#65 arbitus   Members

Posted 12 September 2009 - 05:08 PM

Quote:
Original post by jeffpk
Well, thank you for putting up with my hot temper and giving me the chance to explain "my baby."

Honestly I was asked to get involved here and I ALMOST didn't, because right now I'm under pretty big deadline pressure... and I *know* my asshole quotient goes up in these times. :)

I'm glad we were able to work through it.


It is quite OK; I should have done a better job in replying as well, as I definitely made poor word choices in my haste.

#66 jeffpk   Members

Posted 13 September 2009 - 05:35 PM

I just looked at the Wiki links.

I had no idea they were that broken.

THANK YOU for pointing it out. We just went through a site software upgrade and I have a feeling someone broke something.

I'm agitating now to get them fixed. There is a lot of good info that is currently inaccessible :(

#67 CMelissinos   Members

Posted 14 September 2009 - 01:32 AM

Quote:
Original post by arbitus
Quote:
Original post by jeffpk
Well, thank you for putting up with my hot temper and giving me the chance to explain "my baby."

Honestly I was asked to get involved here and I ALMOST didn't, because right now I'm under pretty big deadline pressure... and I *know* my asshole quotient goes up in these times. :)

I'm glad we were able to work through it.


It is quite OK; I should have done a better job in replying as well, as I definitely made poor word choices in my haste.


And I would also like to chime in here and say that I, too, am sorry for perhaps too-hasty comments and misunderstanding. There was no intended disrespect and, unfortunately, nuance is not always apparent in text-based discussion threads :)

I'm glad we arrived at a good resolution of this, and look forward to constructive discussions moving forward.

Cheers!

#68 Tim Blackman   Members

Posted 14 September 2009 - 04:19 AM

Quote:
Original post by jeffpk
Quote:
Original post by arbitus
How pluggable is the DataStore module as far as swapping storage formats without scrapping the module interface? For instance, if I wanted to change database providers, have some crazy idea about the blazing speed of XML, or want to do remoting?


Access to the DataStore is done through a system manager, the DataManager. It's pluggable in a few different ways.

The first object invoked for any event is fetched from the system DataManager. So if you need to completely replace the persistence, you would replace that. At its highest level it's just an interface that gets objects from IDs and symbolic bindings and manages the locks taken on them, and conversely allows you to stick objects into the DataStore and get back the bindings and IDs.

There are a few semantic things you have to preserve if you want to preserve the PDS execution model. The PDS execution model assumes the DataStore will do deadlock detection and throw a specific exception if you try to take a write lock that would result in deadlock. Originally we used BDB's deadlock detection, but for a number of reasons Tim (the DB guy) is replacing that right now with a separate deadlock detection implementation that can provide more info back to both the user and the system.

So I *believe* there are three levels to it now: the deadlock detector, a middle layer I'm a bit fuzzy on, and the lowest layer that actually puts and gets objects from storage. Beyond that, you'd either have to look at the Java docs or ask Tim for details (tjb on the PDS forums).

Edit: I may have implied this but not said it... AIUI all are pluggable, all the way up to the DataManager interface presented to the rest of the system.

Now, you can also plug in parallel managers to access data anywhere else. You just need to put the results in transient fields on the objects the PDS IS managing, so they don't get saved and restored as part of the state of those objects.

Is that helpful? Beyond that level I think you need Tim...


There are a few layers in Darkstar where you could plug in persistence implementations.

As Jeff said, applications access persistence via the DataManager, which can be replaced via configuration.

The data manager's implementation is supplied by the DataService, which is the facility that provides access to persistent objects. This is the layer whose implementation you would need to replace or modify in order to use a different sort of object serialization.

The default implementation of the data service depends on a DataStore layer, which is responsible for storing blobs of bytes by object ID or name. It is at that layer that we're currently experimenting with multi-node caching.

The current implementations of that layer in turn make use of a somewhat lower level database layer, which is where we plug in Berkeley DB, either the native or Java edition. It's at this layer that you could fairly easily plug in a different database.
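To make the stack concrete, here is a rough sketch of the DataStore layer Tim describes (the interface below is invented for illustration; the real Darkstar interfaces differ in detail): blobs of bytes keyed by object ID or by name, with the pluggable database layer below it and the serializing DataService above it.

Code:
// Hypothetical shape of the byte-blob layer; not the real interface.
interface DataStoreSketch {
    long createObject();                        // allocate a fresh object ID
    byte[] getObject(long oid, boolean forUpdate);
    void setObject(long oid, byte[] data);      // store a blob by ID
    long getBinding(String name);               // look up an object ID by name
    void setBinding(String name, long oid);
}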

- Tim

#69 Tim Blackman   Members

Posted 14 September 2009 - 04:39 AM

Quote:
Original post by hplus0603
Btw, someone pointed out this quote from the Sun roadmap:
Quote:
While it is possible that such an update could, on failure, leave the two durable stores in an inconsistent state, the window in which such a failure could occur was decided to be sufficiently small that repair by human intervention was an acceptable alternative to implementing a generalized transaction manager.


I'd like some more insight into this, if possible.


Some background...

The transaction coordinator in the current implementation only supports a single durable transaction participant. This limitation was initially put in place to avoid the complication of building a full two-phase commit implementation, something like XA.

Originally, it had been our intention to update the implementation of the transaction coordinator to support multiple durable participants so that applications could plug in an additional persistence mechanism -- a relational database, most likely -- and use it in the same transaction with the data service. In fact, I had originally thought I would need this capability to support a multi-node data store.

Thinking about it more, though, we decided that two-phase commit would probably be too slow to use under normal circumstances for our transactions, which need to stay quite short. In addition, the current multi-node scheme does not require something like this -- which is a good thing from the point of view of performance!

Instead, our thought is that people can build non-transactional facilities to store updates to external databases.

So...

What the comment is talking about is that, when using such a non-transactional facility to interact with an external database, the application needs to do something to account for the fact that the two persistence mechanisms might crash separately, and so run the risk of not being fully coordinated. Applications could choose to deal with this by arranging to store the data to the external database again if needed (idempotency), through the intervention of an administrator (tacky, but sensible in some cases), by not worrying about inconsistencies (perhaps OK for usage statistics), or some other mechanism.
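For the idempotency option, a minimal sketch (assuming a plain JDBC connection to the external reporting database; the table, columns, and class names are invented): keying each row on (player_id, event_seq) and ignoring duplicate-key errors makes replaying an already-stored event after a crash harmless, so the two stores re-converge.

Code:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.SQLIntegrityConstraintViolationException;

class ReportingMirror {
    // Assumes gold_events has a primary key on (player_id, event_seq).
    void recordGold(Connection db, String playerId, long eventSeq, long gold)
            throws SQLException {
        try (PreparedStatement ps = db.prepareStatement(
                "INSERT INTO gold_events (player_id, event_seq, gold) "
                + "VALUES (?, ?, ?)")) {
            ps.setString(1, playerId);
            ps.setLong(2, eventSeq);
            ps.setLong(3, gold);
            ps.executeUpdate();
        } catch (SQLIntegrityConstraintViolationException dup) {
            // Already recorded before the crash: ignore; the write is idempotent.
        }
    }
}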

But this comment does not refer to an inconsistency that is possible in the current system on its own. The base Darkstar system maintains transactional consistency in all cases. The only thing that it may not supply of the standard ACID properties is the 'D' one: it provides limited durability. By allowing the system to forget some recent updates when there is a crash -- but still maintaining consistency -- the system is able to provide much better latency.

- Tim

#70 hplus0603   Moderators

Posted 14 September 2009 - 05:34 AM

Quote:
By allowing the system to forget some recent updates when there is a crash -- but still maintaining consistency -- the system is able to provide much better latency.


To make this clear (to me): Is what you're saying that, if I mirror the state of the world out to a separate database system for reporting purposes, and there was a crash, then my reporting system would be out of sync with the actual world state?

And if I have to mirror all commits out to a reporting system anyway, why wouldn't I just use that system for my datastore/back-end? Scaling reads is easy, compared to scaling writes...


#71 jeffpk   Members

Posted 14 September 2009 - 06:18 AM

Quote:
Original post by hplus0603
Quote:
By allowing the system to forget some recent updates when there is a crash -- but still maintaining consistency -- the system is able to provide much better latency.


To make this clear (to me): Is what you're saying that, if I mirror the state of the world out to a separate database system for reporting purposes, and there was a crash, then my reporting system would be out of sync with the actual world state?


No, that's very much an overstatement.

There is a *small* chance that this could happen, limited to the window of time between when the transaction coordinator calls your external DB service to commit its transaction and when it completes its own commit to the Data Store.

Quote:

And if I have to mirror all commits out to a reporting system anyway, why wouldn't I just use that system for my datastore/back-end? Scaling reads is easy, compared to scaling writes...


Because such an external data store would be neither fast enough nor execution-aware enough to provide the data needed on the fly to handle events.

This is the same confusion Arbitus and I resolved above. The PDS Datastore is neither designed nor intended to replace a SQL database for statistical tracking.

What it replaces is the typical in memory model of game state. By doing so it makes that game state fault-tolerant, reliable, persistent and scalable across many cooperating computers. As you yourself pointed out, scaling writes is hard and the typical duty cycle on the in-memory model is 50% read/50% write.
That is why we needed our own technology, and why we had to cut anything that burned time we didn't need in making that replacement.

The datastore's transactional nature, combined with the event execution model, means that game code that is written as if it was running on a single CPU can be farmed out across all the processors in all the boxes in the back end in a race-proof and deadlock-proof manner without any explicit synchronization on the part of the game programmer.

This is the most important thing about the system and was my original design goal when I started the project. It allows you to write highly parallelized and fault tolerant code without any knowledge of how it is being processed.

The rest of the advantages such as fail-over and durable persistence just "fell out" of the design.

Edit: If it helps any, try this... the PDS as a technology does not compete with MySQL. Its closest competitor is probably Terracotta, though each of those technologies does things the other doesn't.

Edit2: This might help too. Looking *just* at the DataStore and the functions it provides to the rest of the system, it's most like a "tuple-space." The PDS execution model can be categorized as a "flow of objects" model, but with some very unique twists. Unfortunately, the existing tuple-space products we looked at did things we didn't need and didn't do things we did, or we would have used one of them.

[Edited by - jeffpk on September 14, 2009 1:18:05 PM]

#72 Tim Blackman   Members

Posted 14 September 2009 - 08:36 AM

Quote:
Original post by hplus0603
Quote:
By allowing the system to forget some recent updates when there is a crash -- but still maintaining consistency -- the system is able to provide much better latency.


To make this clear (to me): Is what you're saying that, if I mirror the state of the world out to a separate database system for reporting purposes, and there was a crash, then my reporting system would be out of sync with the actual world state?


Yes. That would only happen if Darkstar crashed after the external update had committed but before the change was flushed asynchronously to disk for the database backing Darkstar's data service. That certainly could happen, though.

Quote:
Original post by hplus0603
And if I have to mirror all commits out to a reporting system anyway, why wouldn't I just use that system for my datastore/back-end? Scaling reads is easy, compared to scaling writes...


Committing every change made in a Darkstar transaction to an external database is likely to increase the latency of the system a lot, so that probably isn't a good idea.

If you really did need/want all your Darkstar data to be stored in a relational database, I could picture your using a data service implementation that did that. I have not done the experiment, so I don't know what the performance would be like. My concern is that the relational database would have longer transaction latencies -- not worse throughput! -- but I haven't actually tried it. I don't think that building a relational implementation of the DB layer underneath the data store would be too hard to do, though.

- Tim

#73 jeffpk   Members

Posted 14 September 2009 - 10:37 AM

Outside of performance concerns (which are always at the forefront of PDS decision making), the biggest issue I could see with a relational DB as a PDS DataStore is the ORM. It seems to me you would either need an ORM that worked "on the fly" or you would need to preprocess your app to produce the ORM for the DB to use and keep them in sync.

#74 hplus0603   Moderators

Posted 14 September 2009 - 03:07 PM

Quote:
The datastore's transactional nature, combined with the event execution model, means that game code that is written as if it was running on a single CPU can be farmed out across all the processors in all the boxes in the back end in a race-proof and deadlock-proof manner without any explicit synchronization on the part of the game programmer.


When you say this, are you claiming that you can, TODAY, farm out the object update execution and object store code across multiple execution nodes? Because that's not what the roadmap seems to indicate.

If not, then what pieces, if any, can you farm out across all the boxes in the back end, TODAY?

On a related note, I find that separating very clearly between "now" and "in the future" is crucial when communicating, because if you say something like "we can architecturally support ..." and that means that you've thought about it, but not written a single line of code yet, nor know exactly when you will, then that is part of what I think some in this thread call "misleading." That may very well be how I was misled about PDS claims for capabilities many years ago.


Second issue: When all object links are in the object database, and you introduce some new game element that needs a new kind of link established (say, an index of all players that have a bind point in Darkfell, or whatever), then you have to write code to manually rip through your entire datastore and update all the player objects. That's a simple example of schema migration.
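A minimal sketch of that manual pass, using the DataManager name-binding iteration roughly as I recall it (treat the signatures as approximate; the Player class and the "player." naming convention are invented):

Code:
import com.sun.sgs.app.AppContext;
import com.sun.sgs.app.DataManager;
import com.sun.sgs.app.ManagedObject;
import java.io.Serializable;

class Player implements ManagedObject, Serializable {
    private static final long serialVersionUID = 1L;
    String bindPoint;
}

class RebuildBindPointIndex {
    void run() {
        DataManager dm = AppContext.getDataManager();
        // Walk every binding under the "player." prefix.
        for (String name = dm.nextBoundName("player.");
             name != null && name.startsWith("player.");
             name = dm.nextBoundName(name)) {
            Player p = (Player) dm.getBinding(name);
            if ("Darkfell".equals(p.bindPoint)) {
                // add p to the new index object here (not shown)
            }
        }
    }
}

In practice you would also have to split that walk across many short tasks, since the transactions have to stay brief.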

Schema migration, however, can get really hairy, and when you don't have relational algebra to do it for you, it's been my experience that it's doubly so. But perhaps part of what you developed in the last few years includes something smart to make that better? If so, could you post a specific link?


#75 jeffpk   Members

Posted 15 September 2009 - 04:00 AM

Quote:
Original post by hplus0603
Quote:
The datastore's transactional nature, combined with the event execution model, means that game code that is written as if it was running on a single CPU can be farmed out across all the processors in all the boxes in the back end in a race-proof and deadlock-proof manner without any explicit synchronization on the part of the game programmer.


When you say this, are you claiming that you can, TODAY, farm out the object update execution and object store code across multiple execution nodes? Because that's not what the roadmap seems to indicate.


You asked the wrong question. So let me answer your question and then the *right* question.

Yes, you can build a multi-node backend TODAY and distribute load.

No, you would not WANT to do that today. That's because although multi-node technically works right now, its performance is crap. This is neither surprising nor atypical when building distributed processing systems. The first cut always has bottlenecks that impede performance. This is what the team is working hard on fixing now.

So if my language confuses you, I'm sorry. To be clear, when I talk about what the PDS "does" I'm talking about what the model was designed to do. The implementation isn't complete yet, and I'm happy to admit that. However, what is complete is in use at both large and small game companies, and WHEN multi-node is complete it will require no app-level code changes to run on.

Is that clearer?

Quote:

On a related note, I find that separating very clearly between "now" and "in the future" is crucial


Well, I hope that's clearer now.

Quote:

Second issue: When all object links are in the object database, and you introduce some new game element that needs a new kind of link established (say, an index of all players that have a bind point in Darkfell, or whatever), then you have to write code to manually rip through your entire datastore and update all the player objects. That's a simple example of schema migration.


To simply add a link, all you need to do is add a field to the ManagedObject. Such additions are "serialization compatible" changes and the system handles them automatically.
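For illustration, a sketch of such a serialization-compatible change (class and field names invented): with the serialVersionUID left unchanged, Java serialization reads old instances and leaves the new field at its default, so no migration pass is needed just to add the link.

Code:
import com.sun.sgs.app.ManagedObject;
import java.io.Serializable;

class Player implements ManagedObject, Serializable {
    private static final long serialVersionUID = 1L;  // same as the old version
    String name;
    // New field in this version of the code: old serialized Players come
    // back with bindPoint == null until game code fills it in.
    String bindPoint;
}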

Now there *are* such things as serialization-incompatible changes. I mentioned this before in this thread. It's a known area where the PDS world is less than ideal, and something the community is well aware of. There are some workarounds. For instance, in the MW we wrote code to dump important data in an object-neutral format so we could read it back into a new codebase.

This is less than ideal, though, and I'm currently building what amounts to a "merge tool" at BFG that analyzes the old code base and the new one and finds serialization-incompatible changes. It then prompts the user to resolve how each change will be mapped. When all changes are mapped, it goes into the binary serialization data (which is a well-established standard) and rewrites it to comply.

I am HOPING that BFG will let me release this tool to the public, as this is actually a general problem in serialization that's needed a proper fix for a long time. If not, I will at least try to document exactly what I did so others can reproduce it.

I think we've actually rat-holed many times in this discussion. I just posted a 10,000-foot-view answer to your whole question of game state over on the PDS boards. I hope it clears a lot up.

#76 Antheus   Members

Posted 15 September 2009 - 05:10 AM

Quote:
Original post by jeffpk
I am HOPING that BFG will let me release this tool to the public, as this is actually a general problem in serialization that's needed a proper fix for a long time. If not, I will at least try to document exactly what I did so others can reproduce it.


This is the general problem with this discussion that bothers me, and why I feel it's difficult to discuss anything that does not involve general hand-waving. I'm sorry I need to resort to this word again.

Schema migration is not a trivial problem, it's not a new problem, and it is a well-understood and well-researched one. There is no need for cutting-edge research on it, just tweaks to existing frameworks and their issues.

With PDS you make it sound like it's trivial and already supported, yet in the next paragraph there might be a tool available some time in the future.

I will only point to the ZeroC documentation (PDF), namely chapter 41 on FreezeScript. It clearly describes the problem and the solution, as well as the corner cases which can and cannot be handled.

There is no "it's automatic" unless it really is. There is no "sometime in the future" or "someone implemented it". It is solved, implemented, and available right now. Not only that, but it was of production quality in 2006/2007, when we used it to great success.

The only "problem" with the above solution is that it is a solved problem, available either as open source or in commercial form, and it works. There is no need to argue what it is or isn't, or why something is or isn't. It has been available for years, there are users well experienced in it, and at least as far as our project was concerned, it was fully usable out of the box. It remains in this form today.


I really wish this discussion were of a technical and not a marketing nature.

What does PDS solve that is different from ZeroC's solution, or how does it improve on the CORBA persistence service? They are both widely used, stable, well-understood platforms, and make for an excellent apples-to-apples comparison.

Where can I read documentation of quality that matches that of ZeroC, including full disclosure on what is and isn't available, how everything is implemented (in documentation, not as source), and what trade-offs and gotchas there are?

#77 hplus0603   Moderators

Posted 15 September 2009 - 06:22 AM

Quote:
when I talk about what the PDS "does" I'm talking about what the model was designed to do


That is what's quite confusing, and in fact what seems to turn off most engineers I've talked to. That is marketeer-speak, not engineer-speak. If you want to make statements on what you hope the architecture will be able to accomplish, then they should generally be qualified as "the PDS architecture allows ..." or similar. Saying "PDS does ..." implicitly means that I can, today, get a deployment, push a button (or write a few hundred lines of code), and get the result that you claim.

A less discerning customer/engineer will read those statements and believe that he'll get that capability today. I've seen that kind of marketing within enterprise software a lot, but mostly it's clearly framed within a roadmap context. Part of my critique of the PDS documentation and communication is that such context is largely missing.

Regarding schema updates: It sounds like there's no good solution within PDS other than writing code right now, which is fair, and it sounds like there might be some tools to help support these cases in the future, which is also fair -- but again, is a forward-looking statement.




