
samoth

Member Since 18 Jan 2008
Offline Last Active Today, 11:33 AM

Posts I've Made

In Topic: using technology as magic

Today, 10:21 AM

That idea of draining life is not much different from what already exists, e.g. in Ryzom, and if you leave aside the fact that Ryzom was a total economic failure (several times), it worked quite well there. Ryzom's version was even harsher.

 

Spells (or any action, for that matter) need to be balanced with a "counterweight". That's usually mana for magic and stamina for melee. However, unless you use spells that are way below your level, your mana alone is not a sufficient counterweight for casting a spell (and you cannot possibly cast the highest spells even at maximum level). You then have the choice of adding "time" or "hp" as an extra counterweight.

 

"Time" means your spells take a lot longer, and during cast time you will be hit automatically by all but the lowest creatures and your spell fails. In other words, you're dead. So "hp" is the only working solution. Which means, of course, you can fire the biggest badass spells, but you are also much easier to kill.

It's certainly a risk tradeoff, but it doesn't necessarily create frustration; it may very well add to the challenge (and it strongly encourages team play).
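The counterweight mechanic described above can be sketched in a few lines of C++. This is a made-up illustration, not Ryzom's actual implementation: the spell cost is drained from mana first, and any remainder comes out of HP (paying with "time" instead is omitted, since, as noted, it usually just gets you killed).

```cpp
#include <cassert>
#include <algorithm>

// Hypothetical sketch of the "counterweight" idea. Names and numbers
// are invented; only the mechanic follows the description above.
struct Caster {
    int mana;
    int hp;
};

// Tries to cast a spell of the given cost: drains mana first, then hp.
// Returns false if paying the remainder with hp would kill the caster.
bool castSpell(Caster& c, int cost) {
    int fromMana = std::min(cost, c.mana);
    int fromHp   = cost - fromMana;
    if (fromHp >= c.hp)   // paying with hp must not kill you outright
        return false;
    c.mana -= fromMana;
    c.hp   -= fromHp;
    return true;          // big spell cast, but the caster is now fragile
}
```

A caster with 50 mana and 100 hp can still fire an 80-point spell, but walks away with 70 hp: exactly the "big spells, easier to kill" tradeoff.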


In Topic: NAS - recommend one? (diskless)

20 November 2014 - 12:09 PM

Yes, I'm definitely asking about single-user systems, no real servers -- and within that frame, there ought to be something in the 100-bucks price range that's not total crap? I mean, it's not rocket science. It seems ridiculous to buy a damn glorified HDD controller for double or triple what my actual main computer costs.

What you pay for is a system that works reliably and that holds your data with redundancy. This matters for single-user systems as much as for systems that serve a thousand users. Losing data sucks big time. Restoring from backup sucks as well, even if you don't lose much.

 

The Synology 214SE (their cheapest model) costs 129€ without disk, so if you add the price for the disk, that's about twice as expensive as a cheapish WD thingie. But you can trivially configure it to use RAID-1, and it's like 3 clicks to have it automatically back data up onto another diskstation or onto one of several supported cloud services (among them Strato HiDrive, Amazon Glacier, and Elephant). With versioning, if you want.

 

Yes, you can hack together a backup script (say, using Bacula) on a MyBook Live, too. But it's nowhere near the same level of comfort, and the overall level of reliability is totally different.

Making a WD MyBook Live unresponsive is simply a matter of running PeaZip's "Extract here..." on a large archive and selecting "Scan for Viruses..." from the context menu on the containing folder. Congrats... you can now pull the cable so your NAS reboots, because Samba froze up the box and not even SSH works any more. Do the same on a Synology, and it just works as expected. Of course it does -- what else? That's what you pay for. It's not just a disk controller with a network plug; it's a system that offers a certain level of reliability and robustness against everyday abuse.

 

Hard disks start making noise? Pull them out one by one, plug in new ones, and worry no more. The data is still there, and you need not interrupt your work for even a minute. That's what you pay for.


In Topic: NAS - recommend one? (diskless)

20 November 2014 - 03:00 AM

Could you mention those good and bad things of each?

 

Synology:

  • Put in one disk, it formats the disk and everything works.
  • Put in two disks, it makes a RAID (you get to choose which level, or whether to use the second disk as a hot spare, of course).
  • Put another disk into an existing system and it enlarges the RAID. The system is very slightly slower while it does that, but fully operational.
  • Pull out disks one by one and plug in bigger ones; everything stays operational and the RAID size is enlarged.
  • Have a surveillance camera? The NAS comes with software for it.
  • Want to host photos/blog/videos? Two clicks.
  • Want a DVB recorder? Sure.
  • Need a minimal Git or Subversion server? Two clicks.
  • Need anything advanced? Linux system.
  • VPN server? Sure.
  • Download slave, P2P, proxy server, mail server, what you want.
  • Fully automatic monitoring
  • Linux 3.2 system with root access via SSH
  • Looks cool, consumes little power (44W with 4 disks, so the station itself can't draw much more than 5-6W).
  • Does cool stuff (like link aggregation, which is fucking awesome).

WD:

  • Disk configured to park the head every few moments, so the load cycle count comes close to "fail" within months. It can be fixed, but that requires building a low-level tool (and knowing about the problem in the first place).
  • Slow.
  • Sluggish, half-assed management interface. Things that should take one millisecond (like adding a user) take 5-6 seconds.
  • Linux 2.6 system with root access via SSH (actually, that is the one good thing).
  • Management interface stopped working after half a year (for no apparent reason).
  • "Load factory settings" suggests that you may lose some settings but will be able to use the management interface again afterwards. It really means "delete all shares and disable console access, so you really can't do anything at all on the box any more".
  • Disk is formatted with a weird block size, so you can't rescue your data if your NAS stops working. Not without a custom-built Linux kernel, anyway.

Especially the last point is a dealbreaker for WD. It's the same kind of shit that Panasonic is doing with their DVD/harddisk recorders -- deliberately getting in the owner's way and doing their utmost to make your life miserable. As if a non-working NAS weren't enough distress, they must make sure that you can't get to your data even when you open the case and take out the disk.


In Topic: NAS - recommend one? (diskless)

19 November 2014 - 11:46 AM

I only have good things to say about Synology (and only bad things about WD). The price tag on a Synology is somewhat higher than 80 euros, but worth every cent.

 

I own a 4-bay, a 2-bay, and an ultra-low-cost 2-bay "SE" version. The latter is a bit weak on the CPU side, but does the job nevertheless. The 4-bay version is just awesome.


In Topic: is there a better way top refer to assets in a game?

18 November 2014 - 06:22 AM

I strongly disagree with anyone recommending integer values or enumerations. They're ugly as hell and they can seriously damage your "flow." What happens if I want my artists and designers to constantly iterate on their work? They'll have to get knee-deep in my source code just to add a few lines in strange places.

That's why you shouldn't have them in the source at all in my opinion. Neither strings nor integers/enums.

 

The artist edits the "resource definition file", or whatever you call it -- preferably with a special editor for an easier workflow, but in the simplest case that can happen in a text editor, writing out XML or JSON or any other format, even a custom one if you want.
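For illustration, such a resource definition file might look like this in JSON (the names and the exact schema are invented; any format the toolchain understands works just as well):

```json
{
    "explosion_sound": { "file": "kaboom.wav" },
    "grenade": {
        "sprite": "grenade.png",
        "sound":  "explosion_sound"
    }
}
```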

 

The artist refers to "kaboom.wav" as "explosion_sound" when referencing it from within "grenade". The toolchain packs the whole lot together into a binary file. That file can contain the strings, so you look up assets by string (but this requires the equivalent of a map structure at runtime), or the build system translates "explosion_sound" to, say, 51 and "grenade" to, say, 213. If the artist edits the file, the numbers may come out different, but that doesn't matter, since only the build system has to worry about keeping the mapping consistent (that is, if asset #213 references #51 and due to a change #51 becomes #63, then #213 now references #63). The application only uses what the data file provides; it need not care about consistency.
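The build-side name-to-ID assignment can be sketched as follows. This is a minimal illustration, not any particular toolchain's code: names like "explosion_sound" exist only inside the build tool, and all cross-references are resolved through the same table, so they stay consistent even when IDs shift between builds.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Minimal sketch of build-time asset numbering. The game binary only
// ever sees the integer IDs this tool writes into the data file.
struct AssetTable {
    std::unordered_map<std::string, int> ids;

    // Assigns a fresh index the first time a name is seen and returns
    // the same index for every later reference to that name.
    int idOf(const std::string& name) {
        auto it = ids.find(name);
        if (it != ids.end())
            return it->second;
        int id = static_cast<int>(ids.size());
        ids.emplace(name, id);
        return id;
    }
};
```

Because every reference goes through `idOf`, "grenade" referencing "explosion_sound" always yields the ID that "explosion_sound" actually got in this build -- the consistency problem never reaches the application.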

 

While it is true that the overhead of hashing a string, or even looking up a string in a map, is negligible compared to disk I/O, it is also true that this overhead is completely unnecessary. Hashes or IDs can be calculated at compile time if you insist on having the names hardcoded (but I recommend against that unless you really only have 5 assets), and are otherwise calculated by the build tool.

 

Most of us are not on systems any more where encoding "filename.mus" in the source code causes too much data in the executable.

But it's not really about the size of that string (nor the overhead).

 

Artists do not want to, and should not, tamper with source files. And you do not want to, nor should you need to, recompile the whole program only because an artist decided to add another sound or another sprite. Making the application run is your responsibility -- keep it there. Putting the "art stuff" together is the artist's responsibility -- keep it there, as well. Don't mix the two, and don't mess with something that's not your responsibility. Changing one component should not require rebuilding the other, nor should it be able to make the other fail. Saying that hardcoding assets and having artists edit source files is a guarantee for failure would probably be going too far, but you get my point. It's something that can break, and things that can break will eventually break.

if i'm drawing 16,000 non-instanced meshes, i don't want to be looking up the array index for the mesh filename of each one

Good grief, who is modelling all these? Surely you mean 160 -- not 16,000?

