sliders_alpha

Java Serializable class, really useful?

13 posts in this topic

Hi,
for my game I have to store game sectors; each sector is a 16x16x256 byte array.

What I did is write a method that stores each sector as a single string in a file named after its coordinates,
and another that reads the file back and rebuilds the object in RAM.
But my write time is about 65ms and my read time about 135ms, which is quite long since a scene is composed of 900 sectors.

Then someone told me to use serializable objects instead,
but after reading some documentation about them I don't see how they would help.

If I serialize objects A, B, C and D, I can only get them back in the same order,
but I need to access files according to their coordinates, not their storage order.

Is it me, or is the Serializable class not what I'm looking for?

Thanks

First find out what exactly is slow, then decide what to do about it.
Also make sure your benchmarking is correct and actually measures something useful.

You could try storing the sectors in a binary format instead of a text format: use one file for all sectors and have the first 16x16x256 bytes be the first sector, and so on.

Edit, oops: this is Java, so no istream. You can use FileInputStream's skip method instead ( filestream.skip(sectorNumber*16*16*256); ).
If your sectors are laid out in a 2D grid, your sector number can be y*width+x.

This way you get one file that you can keep open all the time (opening files is fairly slow); each sector only requires a skip and then a read of 64KiB of data, which really is nothing. A minimal sketch of this layout follows this post.

If that is still too slow, you can keep nearby sectors in RAM and load new ones in a background thread before they are needed. Edited by SimonForsman
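A minimal sketch of that single-file layout, assuming a fixed grid width and using RandomAccessFile.seek rather than FileInputStream.skip so one handle can both read and write (the class and field names are illustrative, not from the thread):
[code]
import java.io.IOException;
import java.io.RandomAccessFile;

public class SectorFile {
    static final int SECTOR_BYTES = 16 * 16 * 256; // 65536 bytes per sector

    private final RandomAccessFile file; // kept open for the whole session
    private final int width;             // sectors per row of the 2D grid

    SectorFile(String path, int width) throws IOException {
        this.file = new RandomAccessFile(path, "rw");
        this.width = width;
    }

    // Sector number as suggested above: y*width+x.
    byte[] read(int x, int y) throws IOException {
        byte[] sector = new byte[SECTOR_BYTES];
        file.seek((long) (y * width + x) * SECTOR_BYTES);
        file.readFully(sector);
        return sector;
    }

    void write(int x, int y, byte[] sector) throws IOException {
        file.seek((long) (y * width + x) * SECTOR_BYTES);
        file.write(sector);
    }
}
[/code]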

[quote]But my write time is about 65ms and my read time about 135ms, which is quite long since a scene is composed of 900 sectors.[/quote]

One sector is 64KiB, so 900 will be ~56MiB. That takes time either way.

To write a sector:[code]byte[] sector = new byte[65536];
FileOutputStream fs = new FileOutputStream("sector413.dat");
fs.write(sector); // to load, use new FileInputStream("sector413.dat").read(sector);
fs.close();[/code]No magic here.

How long this takes depends on the disk and the file system, especially if there are many files. 64/130 ms sounds like a lot, but it is not impossible.

64k per file is very convenient since it causes no overhead; storing the coordinates in their own tiny files would cause just about the maximum overhead possible (99.7% overhead for the coordinates, since only ~12 bytes of a typical 4k disk page would be used).

Serializable could be used for the above, but it doesn't bring much to the table beyond that.


One way is to keep the index separate, as a file which contains only:[code]class Index {
    float x, y;
    int filename;
}

List<Index> indices;[/code]Each file is represented by a number: '485.dat', '0.dat', and so on. Edited by Antheus
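A sketch of how such an index might be used at load time to go from coordinates to a file name (the map, the packed key, and treating the coordinates as integer grid positions are assumptions for illustration):
[code]
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class SectorIndex {
    // Maps a packed sector coordinate to the numeric file name holding its data.
    private final Map<Long, Integer> byCoordinate = new HashMap<Long, Integer>();

    SectorIndex(List<Index> indices) {
        for (Index e : indices) {
            byCoordinate.put(key((int) e.x, (int) e.y), e.filename);
        }
    }

    // Pack two 32-bit coordinates into one 64-bit map key.
    private static long key(int x, int y) {
        return ((long) x << 32) | (y & 0xFFFFFFFFL);
    }

    // Returns e.g. "485.dat", or null if the sector was never saved.
    String fileFor(int x, int y) {
        Integer n = byCoordinate.get(key(x, y));
        return n == null ? null : n + ".dat";
    }
}
[/code]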

The thing is, the world is generated as the player explores (it's a Minecraft clone).
If the player only heads east, this one file containing everything will be 75% zeroes.

Someone else told me that using a database such as MySQL could actually be useful:
one column for the coordinates (primary key), one column for the data.
Since I only open a session at the beginning of the game, I won't lose time the way I do when I need to access 900 separate text files.

On top of that, I can even use indexes for equality search on the coordinates to speed up those database accesses. (A sketch of this approach follows below.)
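A minimal sketch of that table layout over plain JDBC (the connection URL, table name, BLOB data column, and the MySQL-specific upsert are assumptions for illustration):
[code]
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class SectorStore {
    private final Connection conn; // opened once at game start, reused for every sector

    SectorStore(String url) throws SQLException {
        conn = DriverManager.getConnection(url); // e.g. "jdbc:mysql://localhost/game"
    }

    void save(int x, int y, byte[] data) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO sector (coord, data) VALUES (?, ?) "
              + "ON DUPLICATE KEY UPDATE data = VALUES(data)"); // MySQL upsert
        ps.setString(1, x + "," + y); // the coordinate string is the primary key
        ps.setBytes(2, data);
        ps.executeUpdate();
        ps.close();
    }

    byte[] load(int x, int y) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT data FROM sector WHERE coord = ?");
        ps.setString(1, x + "," + y);
        ResultSet rs = ps.executeQuery();
        byte[] data = rs.next() ? rs.getBytes(1) : null; // null = never generated
        rs.close();
        ps.close();
        return data;
    }
}
[/code]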

....
Calculating the coordinates is not time consuming, since the player's position on the sector grid is always being tracked.
A sector contains more information, but I'm only storing the data; since the coordinates are stored inside the object, naming the file after its coordinates is also quick.
[code]
public class Sector {
    byte[][][] data;
    int x, y;
    int DL;

    // some methods
}
[/code]

I also noticed something weird today, using [b]System.currentTimeMillis()[/b]:
[b]opening a file = 0ms[/b]
writing = 65ms
[b]closing = 0ms[/b]
It's the same for reading. Shouldn't opening the file be time consuming? Edited by sliders_alpha

[quote name='Antheus' timestamp='1336224321' post='4937600']
64k per file is very convenient since it causes no overhead; storing the coordinates in their own tiny files would cause just about the maximum overhead possible (99.7% overhead for the coordinates, due to the typical 4k disk page size).
[/quote]

You don't need to store the coordinates if you use one file for all sectors, as the coordinates can be inferred from each sector's position in the file. Using multiple small files will give significantly worse performance than a single larger file would (more system calls to open files and more data fragmentation).

[quote name='sliders_alpha' timestamp='1336241418' post='4937644']
I also noticed something weird today, using [b]System.currentTimeMillis()[/b]:
[b]opening a file = 0ms[/b]
writing = 65ms
[b]closing = 0ms[/b]
It's the same for reading. Shouldn't opening the file be time consuming?
[/quote]

It is possible that the OS does the file open asynchronously, so the actual open call returns immediately and its cost gets tacked onto your writing/reading if that is done immediately afterwards.

Try doubling the amount of data you write and see how much the time increases.

In general I would strongly recommend against using multiple files if you have a fixed maximum of 900 cells. 900 cells is only around 60MiB, and storing a single ~60MiB file of all zeroes is a fairly low price to pay (you can generate this file on installation, or store it compressed in the installer, so it won't have any impact on the download size). A sketch of pre-generating such a file follows this post. Edited by SimonForsman
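A hedged sketch of pre-generating that zero-filled world file at install time (the file name and sector count are assumptions); RandomAccessFile.setLength extends a file with zero bytes, so nothing needs to be written by hand:
[code]
import java.io.IOException;
import java.io.RandomAccessFile;

public class Preallocate {
    public static void main(String[] args) throws IOException {
        final long SECTOR_BYTES = 16L * 16 * 256; // 65536 bytes per sector
        final long SECTORS = 900;                 // fixed maximum from the post above

        RandomAccessFile f = new RandomAccessFile("world.dat", "rw");
        f.setLength(SECTOR_BYTES * SECTORS); // ~56MiB of zeroes, typically allocated lazily
        f.close();
    }
}
[/code]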

[quote name='sliders_alpha' timestamp='1336241418' post='4937644']
I also noticed something weird today, using [b]System.currentTimeMillis()[/b]:
[b]opening a file = 0ms[/b]
writing = 65ms
[b]closing = 0ms[/b]
It's the same for reading. Shouldn't opening the file be time consuming?
[/quote]

What does this give?[code]import java.io.FileOutputStream;
import java.util.Random;


public class Test {
    public static void main(String[] args) {

        final Random r = new Random();
        final String path = "f:\\tmp\\"; // a scratch directory
        final int N = 100;
        final byte[] chunk = new byte[65536]; // one 64KiB sector

        long total = 0;

        try {
            for (int i = 0; i < N; i++) {
                final long start = System.nanoTime();

                final FileOutputStream fs = new FileOutputStream(path + String.valueOf(r.nextInt()));
                fs.write(chunk);
                // fs.getFD().sync();
                fs.close();

                final long end = System.nanoTime();
                total += end - start;
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        System.out.println(1.0 * total / N / 1e6 + "ms"); // average ms per file
    }
}[/code]

What if you uncomment the sync() line?

Aaah, I'm not using byte but short, which doubles the size.
Furthermore, I'm inserting a "," between each value, making the file even bigger.

[quote]
It is possible that the OS does the file open asynchronously, so the actual open call returns immediately and its cost gets tacked onto your writing/reading if that is done immediately afterwards.

Try doubling the amount of data you write and see how much the time increases.
[/quote]
Amazing, I would never have thought of that. Your theory is indeed correct:
when doubling the amount of data to write, the write time only increases by 1-2ms.
[b]Therefore the file access time must be about 62ms.[/b]

The reading time is far longer because, in addition to opening the file, I'm also decoding it, pulling each value out from between the ","s.


...
trying your code, Antheus =D

[quote name='sliders_alpha' timestamp='1336247048' post='4937653']
Aaah, I'm not using byte but short, which doubles the size.
Furthermore, I'm inserting a "," between each value, making the file even bigger.
[/quote]

If you are storing the data as text rather than in a binary format, a short will take far more than 2 bytes (the short value 14675 is 5 bytes stored as text but 2 bytes stored as a binary short). Storing it as text also forces you to parse each value, which further slows things down. With a binary format every value is exactly the same length (a short is always 2 bytes and a byte is always 1 byte), so there is no need to insert ',' to separate individual values. A sketch of the binary alternative follows this post.
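A minimal sketch of that binary alternative using the standard DataOutputStream/DataInputStream pair (the class and file names are illustrative):
[code]
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class BinaryShorts {
    // Writes each short as exactly 2 bytes; no separators, no parsing.
    static void write(String path, short[] values) throws IOException {
        DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(path)));
        for (short v : values) {
            out.writeShort(v);
        }
        out.close();
    }

    // Reads the values back; the count must be known (it is fixed per sector).
    static short[] read(String path, int count) throws IOException {
        short[] values = new short[count];
        DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(path)));
        for (int i = 0; i < count; i++) {
            values[i] = in.readShort();
        }
        in.close();
        return values;
    }
}
[/code]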

You could have a file containing a grid that tells you where to index into the chunk file to get the chunk for any given grid position.

For efficient writes, here's an alternative way:[code]
final short[] chunk = new short[65536];
final RandomAccessFile fs = new RandomAccessFile(path + String.valueOf(Math.abs(r.nextInt())), "rw");

// Memory-map the file and write the shorts straight into the mapping.
final ByteBuffer bb = fs.getChannel().map(MapMode.READ_WRITE, 0, chunk.length * 2);
for (int i = 0; i < chunk.length; i++) {
    bb.putShort(chunk[i]);
}
fs.getChannel().force(true); // flush the mapping to disk
fs.getFD().sync();
fs.close();[/code]
For me, this completes in 15ms (1.4ms without sync), vs 93ms if writing as text.

The above is also about as fast as it gets.
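For completeness, a hedged sketch of the matching read path under the same file layout (the class name is an assumption; MapMode here is java.nio.channels.FileChannel.MapMode):
[code]
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel.MapMode;

public class ChunkReader {
    // Reads 65536 shorts back from a file written with the mapped-buffer snippet above.
    static short[] load(String file) throws IOException {
        final short[] chunk = new short[65536];
        final RandomAccessFile fs = new RandomAccessFile(file, "r");
        final MappedByteBuffer bb =
                fs.getChannel().map(MapMode.READ_ONLY, 0, chunk.length * 2);
        for (int i = 0; i < chunk.length; i++) {
            chunk[i] = bb.getShort();
        }
        fs.close();
        return chunk;
    }
}
[/code]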

Mmh, I'm not getting results as good as yours, Antheus, with your code (slightly modified for 3D arrays):

[code]
public void storeToDisk(String path) throws IOException {
    long time = System.currentTimeMillis();
    final RandomAccessFile fs = new RandomAccessFile(path + x + "," + z + ".txt", "rw");
    final ByteBuffer bb = fs.getChannel().map(MapMode.READ_WRITE, 0,
            (Global.CHUNK_HEIGHT * Global.CHUNK_X * Global.CHUNK_Z) * 2);

    for (short yy = 0; yy < Global.CHUNK_HEIGHT; yy++) {
        for (short zz = 0; zz < Global.CHUNK_Z; zz++) {
            for (short xx = 0; xx < Global.CHUNK_X; xx++) {
                bb.putShort(sect[xx][yy][zz]);
            }
        }
    }
    fs.getChannel().force(true);
    fs.getFD().sync();
    fs.close();
    System.out.println("write time : " + (System.currentTimeMillis() - time) + "ms");
    Global.totalTime += (System.currentTimeMillis() - time);
}
[/code]

[quote]
// with sync
19ms, 16ms, 14ms, 57ms, 77ms, 21ms, 56ms, 15ms, 9ms, 74ms, 10ms, 9ms, 74ms, 18ms, 22ms, 66ms, 31ms, 50ms, 18ms, 49ms, 72ms, 18ms, 16ms, 68ms, 17ms, 23ms, 66ms, 17ms, 23ms, 72ms, 17ms, 17ms, 75ms, 17ms, 17ms, 66ms, total write time : 1308ms

// without sync
24ms, 32ms, 15ms, 57ms, 16ms, 22ms, 56ms, 28ms, 14ms, 75ms, 17ms, 17ms, 48ms, 26ms, 15ms, 66ms, 14ms, 60ms, 15ms, 17ms, 62ms, 28ms, 17ms, 58ms, 16ms, 14ms, 52ms, 29ms, 17ms, 78ms, 14ms, 17ms, 61ms, 15ms, 25ms, 68ms, total write time : 1208ms
[/quote]



Also, yesterday I changed my chunk2text methods to use a BufferedWriter; it basically takes the same time:
[code]
public void storeToDisk(String path) throws IOException {
    long time = System.currentTimeMillis();

    File file = new File(path + x + "," + z + ".txt");
    FileWriter fw = new FileWriter(file);
    BufferedWriter out = new BufferedWriter(fw);
    for (short yy = 0; yy < Global.CHUNK_HEIGHT; yy++) {
        for (short zz = 0; zz < Global.CHUNK_Z; zz++) {
            String s = "";
            for (short xx = 0; xx < Global.CHUNK_X; xx++) {
                s += String.valueOf(sect[xx][yy][zz]);
                s += ",";
            }
            out.write(s);
        }
    }
    out.close();
    System.out.println("write time : " + (System.currentTimeMillis() - time));
    Global.totalTime += (System.currentTimeMillis() - time);
}
[/code]

[quote]
38ms, 33ms, 29ms, 30ms, 27ms, 28ms, 33ms, 31ms, 28ms, 28ms, 29ms, 33ms, 28ms, 28ms, 31ms, 28ms, 29ms, 32ms, 28ms, 30ms, 31ms, 30ms, 33ms, 30ms, 30ms, 33ms, 29ms, 29ms, 30ms, 28ms, 34ms, 30ms, 32ms, 58ms, 33ms, 31ms, total write time : 1124ms
[/quote] Edited by sliders_alpha

[quote name='sliders_alpha' timestamp='1336317926' post='4937787']
Mmh, I'm not getting results as good as yours, Antheus:[/quote]

You are.

Explicit sync makes sure that each chunk is really, absolutely, positively written to disk, so it's the slowest possible case.


You'll notice two cases: one set of times is around 17ms, which is very close to what I get; the other is in the 50ms range.

9-17ms is fairly easy to explain: the seek time of the disk (around 8-10ms) plus the write.
50ms happens when the OS needs to force a flush and wait; maybe it has something else going on, maybe another process is doing disk IO, so it takes longer.


Improvements from deferred writes vary. On a laptop they might be disabled altogether for increased reliability (in case the battery goes out), or the OS/disk cache might be full, too small, or too slow. Deferred writes may improve things, but they aren't magic; they merely let your thread run ahead while the OS does the work in the background. If that isn't possible, it won't be any faster.

Deferred writes also do not magically increase throughput. If enough data is written, times will settle at the limits of disk IO, since the OS cannot afford to buffer too much data it has claimed was written to disk. Otherwise I could save 500MB and then turn off the machine, thinking it was safe, while the OS still needed 2 minutes to flush everything from memory.

For the second case, you're using text serialization, so you write roughly 2.5 times as much data: the first example uses exactly 2 bytes per value, the second 4-5. The timing is consistent with that.

The numbers above give hard limits on how long the disk IO takes. Edited by Antheus

Damn, looks like I'll need to do what Mojang did and implement a "region" system.

Anyway, Serializable doesn't bring anything useful to the table; good to know.
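For reference, a hedged sketch of what such a region file could look like; this layout (a header of per-slot offsets followed by appended chunk data) is an assumption loosely inspired by Minecraft's region format, not a description of Mojang's actual code:
[code]
import java.io.IOException;
import java.io.RandomAccessFile;

// One region = 32x32 chunks in a single file. The header stores, for each grid
// slot, the file offset of its chunk data (0 = not generated yet), so a world
// that is explored as the player goes costs almost nothing for empty slots.
public class RegionFile {
    static final int GRID = 32;
    static final int HEADER_BYTES = GRID * GRID * 8;   // one long offset per slot
    static final int CHUNK_BYTES = 16 * 16 * 256 * 2;  // fixed-size chunks of shorts

    private final RandomAccessFile file;

    RegionFile(String path) throws IOException {
        file = new RandomAccessFile(path, "rw");
        if (file.length() < HEADER_BYTES) {
            file.setLength(HEADER_BYTES); // fresh region: a zeroed header, nothing else
        }
    }

    void writeChunk(int cx, int cz, byte[] data) throws IOException {
        long slot = (long) (cz * GRID + cx) * 8;
        file.seek(slot);
        long offset = file.readLong();
        if (offset == 0) {            // first save of this chunk: append at end of file
            offset = file.length();
            file.seek(slot);
            file.writeLong(offset);
        }
        file.seek(offset);
        file.write(data);
    }

    byte[] readChunk(int cx, int cz) throws IOException {
        file.seek((long) (cz * GRID + cx) * 8);
        long offset = file.readLong();
        if (offset == 0) {
            return null;              // chunk never generated
        }
        byte[] data = new byte[CHUNK_BYTES];
        file.seek(offset);
        file.readFully(data);
        return data;
    }
}
[/code]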

