
#5292725 can a generic engine do something like skyrim?

Posted by on 20 May 2016 - 10:46 PM

neither is designed to take advantage of the most modern video cards like Skyrim does, but the engine can be modified easily enough.  

It does? Skyrim was released in very late 2011 and it uses D3D 9 (and nothing else). It doesn't have any occlusion effect (five years after Crytek introduced it), nor atmospheric scattering, nor volumetric lighting. It supports up to 4 shadow-casting lights, they're really bad quality, and they're all selected by hand (so the level designer has to mark which lights cast shadows and which meshes cast shadows, and pray there aren't more than 4 in the same scene; no dynamic LOD of lights or anything, just distance-based selection).


All water in the game is represented with a flat plane and watery normal maps on top of it (again, Crytek did it much better in 2007). Most things in the world don't cast a shadow, like clutter and medium-sized rocks, which CryEngine handled in 2007. GTA IV in 2008 did actual ocean waves AND the whole open-world shebang: procedural clutter, real dynamic shadows, you name it.


Whatever animation or blending system it has is straight from 2006 (or new but very badly used). The most "advanced" part is that feet conform to the floor, a bit, so most of the time everything still looks as if it's floating on the terrain.


It apparently can use multiple threads, but it keeps one or two at 80% utilization and leaves the rest unused.


Unity/UE4 can do any of those with ease. Skyrim was anything but technically advanced for its time. The default UE4 shooter scene thingy already looks better and is better animated than Skyrim.


The places where you're going to have a problem aren't the big "systems" (rendering, physics, sound, animation); those are well covered. You're going to have issues with other things: managing the world data, LOD systems, daily AI routines (did any game have those before Oblivion in 2006?), and one very important thing: the modding API.


UE4 and Unity are really hard to mod. Skyrim's engine is practically made to be modded. It has its own versioning of world references, so plugins that modify the same things can override each other depending on load order and keep working. The game's updates are literally stuffed into an additional plugin, like any mod, that applies over the base game, and the engine just fucking picks it up and makes everything work. It's awesome. Last time I checked they had also implemented hot reloading of plugins, in the shipped game!
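
As a toy illustration of that load-order idea (just a C# sketch; this is not Bethesda's actual plugin format or data layout): each plugin maps record IDs to record data, later plugins in the load order win, and an official update is just another plugin applied over the base game.

using System.Collections.Generic;

// Toy load-order resolution: every plugin is a map of record ID -> record
// data, and whichever plugin touches a record last (later in the load
// order) wins. Illustration only, not the real file format.
static class LoadOrder
{
    public static Dictionary<uint, string> Resolve(List<Dictionary<uint, string>> pluginsInLoadOrder)
    {
        var world = new Dictionary<uint, string>();
        foreach (var plugin in pluginsInLoadOrder)
            foreach (var record in plugin)
                world[record.Key] = record.Value; // later plugin overrides earlier ones
        return world;
    }
}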


Also, load times are really short. No clue how they do it, but quick save and quick load work really well.


EDIT: Oh, and they have their own seemingly asynchronous scripting system that works really well, plus a huge event system to fetch data from and to run those scripts. Very nice stuff.

#5292531 Moving to OpenGL 3.1

Posted by on 19 May 2016 - 02:50 PM

Just std140 all the things. It's basically what D3D does, and look how popular it is.


Well, I'm done with porting to 3.1. I've lost about 10 FPS (from 462 -> 452)

You're looking at this from the wrong perspective. Don't measure FPS, measure milliseconds per frame.


462 fps is a grand total of 2.164 milliseconds per frame.

452 fps is a grand total of 2.212 milliseconds per frame.


Less than a twentieth of a millisecond of difference (about 0.05 ms). That's nothing. You can probably chalk those margins up to measurement error.


Now, say that you're at 30 fps and you lose 10 fps.


30 fps is 33.333 ms.

20 fps is 50.000 ms.


The difference is a whopping 16.666 milliseconds per frame. That's a lot, and it's the difference between a playable 30 fps and a horribly unplayable 20 fps. Whereas from 460 fps to 450 fps, you probably can't physically notice a difference.
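
If it helps, here's the conversion spelled out as a tiny C# sketch using the numbers from above (the class and method names are just for illustration):

using System;

// Convert frame rates to frame times so differences are measured in
// milliseconds per frame instead of FPS.
static class FrameTime
{
    static double MsPerFrame(double fps) => 1000.0 / fps;

    static void Main()
    {
        Console.WriteLine(MsPerFrame(462.0)); // ~2.164 ms
        Console.WriteLine(MsPerFrame(452.0)); // ~2.212 ms
        Console.WriteLine(MsPerFrame(452.0) - MsPerFrame(462.0)); // ~0.048 ms

        Console.WriteLine(MsPerFrame(30.0)); // 33.333 ms
        Console.WriteLine(MsPerFrame(20.0)); // 50.000 ms
        Console.WriteLine(MsPerFrame(20.0) - MsPerFrame(30.0)); // ~16.667 ms
    }
}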

#5291701 Vulkan render-call performance drain

Posted by on 15 May 2016 - 10:01 AM

... What if you don't wait until the device has executed all the commands? 90% of the point of GPUs these days is that you submit work and then (hopefully; otherwise you'll need fences) forget about it, so they can work asynchronously while the CPU builds the next set of commands.


If you wait for the device every frame, you're basically synchronizing the CPU with the GPU and not allowing the CPU to work ahead on more commands. It's like calling glFinish every frame.


Then again, I haven't delved into Vulkan, so I don't know if there's any specific use for vkDeviceWaitIdle.


EDIT: Good job by the OP on measuring the actual issue, btw! Most often the code provided has nothing to do with the actual problem, since no one bothers to profile.

#5291054 How can I optimize Linux server for lowest latency game server?

Posted by on 10 May 2016 - 06:27 PM

Also just remembered that there are multiple different versions of Java on Linux, try to google which is the fastest and make sure the one you are using does not interrupt everything when collecting garbage. (I stopped doing Java when Oracle tried to make me use the Ask toolbar, so my Java knowledge might be a bit outdated)

Yup, it is. Also download the JDK instead of the JRE, no toolbar.


There's one open-source, GPL-licensed codebase for both the standard libraries and the VM: OpenJDK, with the HotSpot VM inside. That's the one Oracle, Red Hat, Google (backend, not Android), etc. put their code into.


Oracle grabs it, compiles it, bundles a few of their tools n stuff, and releases it as OracleJDK. They distribute binaries for Windows, Linux, your mom's toaster, etc.


Now, in Linux land, repo maintainers grab OpenJDK's sources too, compile them, and ship them as an OpenJDK package in whatever package management system they use (.deb, .rpm, etc.).


In short, they're the same VM, same libs, same performance. The only case where it gets tricky is if you're running on ARM (there was no JIT for ARM last time I checked, so you get the "Zero" VM on Linux, which is a plain interpreter). Also, Google will be grabbing the "standard libraries" part of OpenJDK and pairing it with their own VM in future Android versions, for example.



I'd be more worried about using Java, because the Garbage Collector may cause unpredictable jitter (which, again, turns into unpredictable latency.)


You'd need to do something horribly wrong to get many 100 ms pauses from GC alone. Minecraftian levels of wrong. Then again, if the pauses are client-side, then you're on either Dalvik or ART, and that's a different issue.


To me it sounds like the OP needs to measure exactly what's going on: whether the pauses are client-side or network-related, and whether the server really takes that much time sending packets.

#5290560 Fast Thumbnail Generator

Posted by on 07 May 2016 - 11:55 AM

Unless I'm missing something here... Do it on the GPU?

#5289673 Is making game with c possible?

Posted by on 01 May 2016 - 10:12 PM

why we always use c++?

We do?

#5289663 Is using the Factory or Builder pattern nessesary?

Posted by on 01 May 2016 - 09:02 PM

Domestic shouldn't be there, and Client shouldn't be abstract. There's literally no difference between the classes in your example: either you've got a client with just an ID, or an international client with a list of IDs.


Don't make classes just because. For a subclass to exist, it has to have either different behavior (methods) or different state (fields) from the superclass. Otherwise it has no purpose.


Hell, you could remove International/Domestic altogether, remove the ID field from Client, and just give Client a documents list; some clients will only have an ID, some will have passports/driver's licences.


Also you should respect standard naming schemes: ID == bad, id == good. clientID == bad, clientId == good.


Have the constructor enforce invariants: can an id be null? Can a last name be null? Can a first name be null? A strict API leads to fewer bugs.
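
A minimal sketch of what that might look like (the class shape and field names here are assumptions about your model, not your actual code):

using System;
using System.Collections.Generic;

// Hypothetical Client with constructor-enforced invariants: an invalid
// instance can never be constructed in the first place.
public class Client
{
    public string Id { get; }
    public string FirstName { get; }
    public string LastName { get; }
    public IReadOnlyList<string> Documents { get; }

    public Client(string id, string firstName, string lastName,
                  IEnumerable<string> documents = null)
    {
        // Enforce the invariants up front.
        if (string.IsNullOrWhiteSpace(id)) throw new ArgumentException("id is required", nameof(id));
        if (string.IsNullOrWhiteSpace(firstName)) throw new ArgumentException("firstName is required", nameof(firstName));
        if (string.IsNullOrWhiteSpace(lastName)) throw new ArgumentException("lastName is required", nameof(lastName));

        Id = id;
        FirstName = firstName;
        LastName = lastName;
        Documents = new List<string>(documents ?? Array.Empty<string>());
    }
}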

#5289591 Why does GLSL use integers for texture fetches?

Posted by on 01 May 2016 - 11:15 AM

tl;dr: Because bad calls were made.

#5288238 Optimizing Generation

Posted by on 22 April 2016 - 08:25 PM

All of those virtual functions are silly. Quite a few of them don't even use state from the object (i.e., they could be static).


And the rest shouldn't work like that at all:

public virtual int Temperature()
{
    // -100 to 100
    return 0;
}

public virtual bool Light()
{
    return false;
}


Those are fields dude, not methods.


public readonly int Temperature;
public readonly bool Light;
// etc.


Defaulting to virtual is a bad idea in C#; the VM won't inline those functions. Also, that Block could just be a struct if, instead of relying on 10 virtual functions, you used fields as you should.


All of those FaceDataX methods could be static. You can just get all those "isXYZSolid" booleans first as local variables, then do the logic over them, instead of calling "IsSolid(north)" and then asking the same thing again a few lines below. I really hope the Vector classes are structs; otherwise you're allocating a shitload of tiny objects.


Also, you could make the enums "extend" byte directly and save a couple of bytes here and there.
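
Roughly what I mean, as a sketch (the enum values and field set are guesses based on the snippet above, not your actual data):

// Field-based Block as a struct, with a byte-backed enum.
public enum BlockType : byte   // backing the enum with byte saves a few bytes per value
{
    Air,
    Dirt,
    Stone,
}

public struct Block
{
    public readonly BlockType Type;
    public readonly int Temperature;  // -100 to 100
    public readonly bool Light;
    public readonly bool IsSolid;

    public Block(BlockType type, int temperature, bool light, bool isSolid)
    {
        Type = type;
        Temperature = temperature;
        Light = light;
        IsSolid = isSolid;
    }
}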

#5286067 Yo dawg, I herd u liek gd.net

Posted by on 09 April 2016 - 03:41 PM

So we put gd.net in ur gd.net PM editor so you can gd.net while you're gd.net'ing




I have no idea how this happened. I accessed the PM screen, then some other section, then hit "back" on the browser a few times, and soon enough I had gd.net loaded inside the text editor \o/


PS: I posted this topic from inside the text editor  :D


EDIT: It seems the text editor is an iframe, and somehow the site got set as the source of the iframe (my guess: jQuery's relative-fetch element magic).

#5285903 If statements are all you need

Posted by on 08 April 2016 - 02:58 PM

Ifs? Nah, MOV is all you need 


#5285142 how we do a collision?

Posted by on 04 April 2016 - 05:42 PM

We do a downloadings of Bullet or Box2D and let them handle collisions for you.

#5284346 Should I start with fixed-function-pipeline OpenGL or the newest possible?

Posted by on 30 March 2016 - 03:48 PM

    Back to the topic, I did some 5-6 tutorials with the old OpenGL, now the next best step seems to be to read "Learning Modern 3D Graphics Programming" (why do people use the word "modern" in a book, I've always wondered). Nevermind, hope I don't have problems.

That's a really nice online book thingy. It teaches pure "core" OpenGL, so no deprecated stuff at all. It also nails down the math nicely.

#5283604 Vulkan render pass questions

Posted by on 26 March 2016 - 02:01 PM

 - It might be possible to work around this by packing all my bloom mipmaps into a single texture.

Yeah, I googled around for "nv_texture_barrier vulkan" but got no results.


In any case, yeah, just use more render passes.


You've probably seen this


They talk a bit about subpasses, but the example they give is contrived (as the speaker acknowledged).

#5281293 Game Engine design for many-typed many-copied entities

Posted by on 14 March 2016 - 09:26 PM

 A bitset with 1,000 bits fits into two cache lines (64 bytes per cacheline = 512 bits per cacheline), and the very first iteration will shrink this to one cache line (it halved the search area), meaning AT WORST this causes one cache miss. 

Wait a second, you're mixing up a bunch of things. First, you mentioned binary search over a vector, not a bitset. Second, you're suggesting binary search for iteration? That doesn't make much sense; it wouldn't help you iterate over a bitset.


For binary searching through a vector (what you initially suggested) you'd need 1,000 ints for the IDs; that's 4 kB of data, and it'd really involve a bunch of cache misses.


There are plenty of ways to iterate over the bits of an int, but whichever technique you use for the bits inside a single int, you can simply check whether each value of the backing array has any bit set at all before fetching the next one.
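
A minimal sketch of that kind of iteration, assuming the bitset is backed by an array of 64-bit words (plain C#; the names are just for illustration):

using System;

// Visit every set bit in a bitset backed by ulong words. Whole words that
// are zero are skipped before any individual bit is inspected.
static class BitsetIteration
{
    public static void ForEachSetBit(ulong[] words, Action<int> visit)
    {
        for (int w = 0; w < words.Length; w++)
        {
            ulong bits = words[w];
            if (bits == 0)
                continue; // nothing set in this word, move on

            for (int bit = 0; bit < 64; bit++)
            {
                if ((bits & (1UL << bit)) != 0)
                    visit(w * 64 + bit);
            }
        }
    }
}

Whether you then walk the bits with a plain loop like this, a lookup table, or intrinsics is a detail; the word-level zero check is what skips most of the work.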