My C++ / D / C# benchmark!

Quote:
Original post by Promit
Lastly, realize that if someone were to break out the MMX intrinsics, everything except C would be seriously screwed. You'd be looking at times for the C code that were about 1/4 to 1/3 what they are now. DMD, C#, and Java wouldn't be able to touch that -- and we can't even do autovectorization like that statically, let alone in a JITter.


D could do the hand-tuned MMX (in a compiler- and operating-system-portable way) as well:

D inline assembler

Quote:
Original post by Kambiz
I thought that JIT compilation can produce more efficient code because it can optimize for the target machine. Maybe I should just wait for the next version of those compilers. C/C++ is a mature language and it is not surprising that there are excellent compilers available for it.


Ahhh, the marketing folks have been at it again... That's been promulgated for years by Sun and the Java lobby, and for a few micro-benchmarks, it may even be true <g>

Real-life code (especially game code) usually doesn't seem to follow suit.

Here's even a set of micro-benchmarks that Java still sucks in:

Shootout Benchmarks


Probably tens or hundreds of millions of dollars and 10 years have been spent trying to make Java run like C++, and the different language semantics shouldn't prevent that for the most part.

Yet it just isn't happening. Given the emphasis on Java performance, I figure there have to be some computer-science reasons behind it (besides available time to optimize -- Sun's HotSpot -server takes all the time it needs), and I'd be willing to bet that statically compiled code and languages will be around for a few more decades because of it.

From what I've seen, I think D kind of builds a bridge between C++ and Java.

Quote:
Original post by DaveJF

Probably tens or hundreds of millions of dollars and 10 years have been spent trying to make Java run like C++, and the different language semantics shouldn't prevent that for the most part.



We can agree that the code representation (i.e. the language) isn't an issue in most cases, including for Java, C#, and VB.NET.

Quote:
Original post by DaveJF

Yet it just isn't happening. Given the emphasis on Java performance, I figure there have to be some computer-science reasons behind it



While this may be true, this is the first time I have heard the idea mentioned. I don't see why the act of JITing is any different from regular compilation.

What I do see as different is that all these JIT languages use an intermediate language designed with portability in mind. Most compilers compile to an intermediate language as a first stage; what could easily be different is that in those cases, portability isn't a consideration.

I am uncertain whether the drive for portability has constrained the design of the intermediate languages used in JITs. I am completely unfamiliar with Java's, and I am only minorly self-educated in MSIL. Both Java bytecode and MSIL are stack-based, while perhaps the 'good' standalone compilers do not use a stack-based IL.

The issue in question, however, does not seem to be related at all. It simply seems that the .NET JIT neglects to make an "obvious" optimization specific to the x86-style div instruction (namely that it returns the modulus as well) -- clearly the case at hand isn't a deficiency of the language or of JITing in general.
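
To make that div/mod point concrete, here is a minimal Java sketch (not code from the thread; the value is arbitrary). On x86, a single div/idiv instruction produces both the quotient and the remainder, so an optimizer that uses it gets the second result for free; the hand-fused form below is what people reach for when they don't trust the compiler or JIT to reuse the division.

public class DivMod {
    public static void main(String[] args) {
        int x = 1234;  // arbitrary value

        // Straightforward form: on x86 a single div/idiv instruction
        // yields both the quotient and the remainder, so an optimizer
        // that uses it can get the second result for free.
        int quotient = x / 10;
        int remainder = x % 10;

        // Hand-fused form sometimes used when the compiler or JIT is
        // not trusted to reuse the division: divide once, then recover
        // the remainder with a multiply and a subtract.
        int q = x / 10;
        int r = x - q * 10;

        System.out.println(quotient + " " + remainder + "  " + q + " " + r);
    }
}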

Quote:
Original post by DaveJF
Ahhh, the marketing folks have been at it again... That's been promulgated for years by Sun and the Java lobby, and for a few micro-benchmarks, it may even be true <g>

Real-life code (especially game code) usually doesn't seem to follow suit.


Not true at all. You can write Java benchmarks and run them differently to see the effects of JIT compilation. Even running with different versions of the JVM will show improvements. The problem is that most people who write Java benchmarks are benchmarking the wrong thing.

JIT compilation doesn't happen instantly. The JVM first runs code in interpreted mode. It will only start compiling after a certain number of executions. Benchmarks that don't take this into account are flawed. If you run a timed loop for a few thousand iterations, you might be getting only interpreted mode -- which is nowhere near being a realistic benchmark. Maybe you will get a mix of interpreted mode and compiled mode, but in that case your results will be skewed by the compilation time.

Read more about how to properly benchmark Java in this article and this article.
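
As a rough illustration of that warm-up effect, here is a minimal, hypothetical harness (not from the cited articles; the iteration counts are arbitrary guesses) that runs the measured method many times before timing it, so the JIT has had a chance to compile it:

public class WarmupBench {
    // Trivial workload to be measured.
    static long work(int n) {
        long sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += i % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        long sink = 0;

        // Warm-up phase: run the method enough times that the JIT has a
        // chance to compile it (the counts here are arbitrary guesses).
        for (int i = 0; i < 20000; i++) {
            sink += work(1000);
        }

        // Measured phase: time the same work only after the warm-up.
        long start = System.nanoTime();
        for (int i = 0; i < 20000; i++) {
            sink += work(1000);
        }
        long elapsedNs = System.nanoTime() - start;

        System.out.println("elapsed ms: " + (elapsedNs / 1000000.0));
        System.out.println("sink: " + sink); // keep the result live so it isn't optimized away
    }
}

In practice one would also repeat the measured phase several times and look at the spread of the results.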

Quote:
Original post by Aldacron
Quote:
Original post by DaveJF
Ahhh, the marketing folks have been at it again... That's been promulgated for years by Sun and the Java lobby, and for a few micro-benchmarks, it may even be true <g>

Real-life code (especially game code) usually doesn't seem to follow suit.
Not true at all. You can write Java benchmarks and run them differently to see the effects of JIT compilation. Even running with different versions of the JVM will show improvements. The problem is that most people who write Java benchmarks are benchmarking the wrong thing.
I'm not so sure of that. It's not about writing a benchmark that favors the way Java works. It's about writing a benchmark that functionally resembles a real application.

Quote:
JIT compilation doesn't happen instantly. The JVM first runs code in interpreted mode. It will only start compiling after a certain number of executions. Benchmarks that don't take this into account are flawed. If you run a timed loop for a few thousand iterations, you might be getting only interpreted mode -- which is nowhere near being a realistic benchmark. Maybe you will get a mix of interpreted mode and compiled mode, but in that case your results will be skewed by the compilation time.

Read more about how to properly benchmark Java in this article and this article.
I read the article. What I inferred from it is that Java benchmark results are erratic and often unpredictable, and that the erratic and unpredictable parts always seem to result in slower programs. Only under the very best of ideal conditions can Java hope to approach static compilation results, and for that the Java app has to run for hours first.

Even worse than what that does to app speeds is the corollary that the programmer is going to have a hard time optimizing the algorithms, because he can't get repeatable timings from one run to the next. He can't tell if his algorithm changes are making things better or worse.

Guest Anonymous Poster
Quote:
Original post by Stachel
Only under the very best of ideal conditions can Java hope to approach static compilation results, and for that the Java app has to run for hours first.
Um. Kambiz ran his test for 15 seconds and it was as fast as the statically compiled C version (the C version didn't show decimals, so its result could be anywhere in the range 15-15.99). So much for "approaching" the results, or "very best ideal conditions" (just a random test written originally for D, and slightly altered by Promit for .NET), or even "running for hours first"...

Quote:
Original post by Anonymous Poster
Quote:
Original post by Stachel
Only under the very best of ideal conditions can Java hope to approach static compilation results, and for that the Java app has to run for hours first.
Um. Kambiz ran his test for 15 seconds and it was as fast as the statically compiled C version (the C version didn't show decimals, so its result could be anywhere in the range 15-15.99). So much for "approaching" the results, or "very best ideal conditions" (just a random test written originally for D, and slightly altered by Promit for .NET), or even "running for hours first"...

The running-for-hours bit comes from the article cited: "Timing measurements in the face of continuous recompilation can be quite noisy and misleading, and it is often necessary to run Java code for quite a long time (I've seen anecdotes of speedups hours or even days after a program starts running) before obtaining useful performance data." The author, Brian Goetz, is an expert in the field.

Secondly, my cited post was not about that particular benchmark, but about Java benchmarking in general, and it was based on the Goetz article. I stated that quite clearly.

Quote:
Original post by Stachel
Quote:
Original post by Anonymous Poster
Quote:
Original post by Stachel
Only under the very best of ideal conditions can Java hope to approach static compilation results, and for that the Java app has to run for hours first.
Um. Kambiz ran his test for 15 seconds and it was as fast as the statically compiled C version (the C version didn't show decimals, so its result could be anywhere in the range 15-15.99). So much for "approaching" the results, or "very best ideal conditions" (just a random test written originally for D, and slightly altered by Promit for .NET), or even "running for hours first"...

The running-for-hours bit comes from the article cited: "Timing measurements in the face of continuous recompilation can be quite noisy and misleading, and it is often necessary to run Java code for quite a long time (I've seen anecdotes of speedups hours or even days after a program starts running) before obtaining useful performance data." The author, Brian Goetz, is an expert in the field.

Secondly, my cited post was not about that particular benchmark, but about Java benchmarking in general, and it was based on the Goetz article. I stated that quite clearly.


The key here is that your original statement was overgeneralized. It does not matter that you were not referring to that specific benchmark, because it is contained within the boundaries of the set to which the statement purportedly applies ("the Java app", referring to [any] arbitrary application of "Java").

And while Brian Goetz may be an expert, he certainly didn't make this overgeneralization himself. Unless I've inadvertently missed it, he in fact makes absolutely no reference to either compilation method resulting in faster programs than the other. The "often hours" figure comes from reaching the optimum within that language, which is never compared to the static version. For all you know, this is significantly faster than the equivalent C/C++ due to optimizations based on input data which absolutely could not be made in the equivalent statically compiled program (because, as that first article mentions, Java is able to make profile-guided assumptions, even when those assumptions may later prove invalid for the general case!). For all you know, it's been running faster than the static equivalent for all those hours!

Not that I think this is the general case, but I'm not even going to state that, as I would be woefully under-evidenced, to the point where such an undereducated assertion on the subject would be completely worthless.

Quote:
Original post by MaulingMonkey
For all you know, this is significantly faster than the equivalent C/C++ due to optimizations based on input data which absolutely could not be made in the equivalent statically compiled program (because, as that first article mentions, Java is able to make profile-guided assumptions, even when those assumptions may later prove invalid for the general case!). For all you know, it's been running faster than the static equivalent for all those hours!
I read about the profile-guided assumptions, and the theory that JITs can therefore produce faster code. But the results always seem to be missing in action.

Here's one set of benchmarks comparing Java with C++: http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=java&lang2=gpp

Score 1 out of 17 for Java.

Very interesting read...

It seems that people really care about this a lot...

Some seem to be attached to certain languages more than others.

I really enjoyed it.

IMO, nothing in life is fair. It's true that there are some flaws in the benchmark, but so many comparisons made today aren't fair...

Anyways, there's a more interesting point to it all.

What can we conclude from all this? What am I really seeing? What should you be seeing?

C/C++ is the best tool for intense calculations.

D is not, along with C#.

What are they good for?

Building larger projects that don't require so much computation. And also decreasing development time...

The key is that you should use the best tool for the project, something that meets your needs.

(I'm not implying any sarcasm through this whole thing, just an FYI.)

Quote:
Original post by dbzprogrammer
C/C++ is the best tool for intense calculations.

D is not, along with C#.

Errr... right. And I haven't shown that D gets as fast as C++ when using the GCC-based compiler...

Quote:
Original post by Stachel
Quote:
Original post by MaulingMonkey
For all you know, this is significantly faster than the equivalent C/C++ due to optimizations based on input data which absolutely could not be made in the equivalent statically compiled program (because, as that first article mentions, Java is able to make profile-guided assumptions, even when those assumptions may later prove invalid for the general case!). For all you know, it's been running faster than the static equivalent for all those hours!
I read about the profile-guided assumptions, and the theory that JITs can therefore produce faster code. But the results always seem to be missing in action.

Here's one set of benchmarks comparing Java with C++: http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=java&lang2=gpp

Score 1 out of 17 for Java.


Note: HTML translation $@(^%*(s up your link.

Fixed: http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=java&lang2=gpp

Final comment: 1 out of 17 is, similarly, enough to contradict your earlier generalization, especially considering that most of the tests listed there only run for a few seconds (only 4 out of 17 take longer than 5 seconds in the C++ version), where I already knew Java was at a disadvantage (just look at the Java vs C++ startup benchmark to see what I mean -- almost a 43x increase just to run hello world, never mind JIT compilation/optimization). We were talking about the scope of a few hours, a 3600x increase in timescale. Presuming the program needs to start performing faster than the C++ version after a mere few minutes in order to come out ahead on the hours scale, that's still a 60x increase.

"Only 1 out of 17 in a few seconds [where Java >= C++]" I can believe at face value.
"Only 0 out of N in a few hours [where Java merely 'approaches' C++]" does not follow from that at all -- which, again, is how your original overgeneralization reads.

Quote:
Original post by dbzprogrammer
C/C++ is the best tool for intense calculations.

D is not, along with C#.

Urmmm..

D is built with speed in mind (D's first-class array types, built-in strings, etc.).

Overall Shootout Benchmarks

D and C++

10 of 15 (C++ is missing 2) in favor of D, and D is not quite at version 1.0.

Quote:
Original post by Stachel
It's not about writing a benchmark that favors the way Java works. It's about writing a benchmark that functionally resembles a real application.

I'm not sure what your point is. For a real-world application, a Java programmer would code in a manner that favors the way Java works. If a benchmark doesn't reflect that, then it isn't accurate.

Quote:
Original post by Aldacron
Quote:
Original post by Stachel
It's not about writing a benchmark that favors the way Java works. It's about writing a benchmark that functionally resembles a real application.
I'm not sure what your point is. For a real-world application, a Java programmer would code in a manner that favors the way Java works. If a benchmark doesn't reflect that, then it isn't accurate.
Think of it like a test in school. There's teaching to the test so the students can pass it, and then there's teaching knowledge so that passing the test falls out naturally.

For a benchmark, consider that D supports inline assembler. That means one could write the Pi benchmark entirely in hand-optimized assembler, and that would technically be writing it in D. Nothing could beat that for speed. But is that a reasonable benchmark for D? I'd sure cry foul. I didn't see any special tweaking in the D or C versions of Pi.

For the Java benchmark, it's been tweaked to replace (x / 10) with (x * .1f). That's faster on some CPUs, but slower on others. Doesn't that mean the Java JIT should do this transformation on its own? After all, isn't the great strength of the JIT being able to adapt to the particular CPU? If I have to bend my otherwise straightforward Java code around shortcomings in its JIT, that doesn't reflect well on the Java implementation, and it's fair for a benchmark to point that out. If the Java implementation is written to do a good job with straightforward Java source code, then doing well on a benchmark will fall out naturally.

I'm not interested in benchmarks carefully designed to avoid weaknesses in an implementation and only show its strengths. It's like posting a picture of the good side of your car for sale on eBay, and neglecting to mention that the other side is bashed in and the car won't drive straight. You'll also find that if you do tweaks like (x * .1f), supposedly portable Java source is going to do very badly on some CPUs, and those tweaks may actually sabotage performance if the Java implementation improves.

Write benchmarks in a manner that's straightforward for the language being used. Straightforward code is what the language implementors work hardest at improving performance for, so you're not likely to be left in a backwater with oddities like (x * .1f). "Optimizations" like that were once popular with C, and they did work, but with modern compilers such things perform worse than the original unoptimized code.
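
For readers following along, here is a small sketch of the kind of inner loop being argued about. This is not the thread's actual Pi benchmark code, just an illustration with a made-up value, showing the straightforward integer form and the (x * .1f) tweak side by side:

public class DigitTweak {
    public static void main(String[] args) {
        // Straightforward form: integer divide by 10, remainder recovered
        // with a multiply and a subtract.
        int x = 987654;
        while (x > 0) {
            int q = x / 10;
            int digit = x - q * 10;
            System.out.print(digit);
            x = q;
        }
        System.out.println();

        // Tweaked form: the divide is replaced with a float multiply by .1f.
        // For values this small the cast happens to give the same quotient,
        // but that is not guaranteed for all ints, which is one reason a
        // compiler cannot make this substitution on its own.
        int y = 987654;
        while (y > 0) {
            int q = (int) (y * .1f);
            int digit = y - q * 10;
            System.out.print(digit);
            y = q;
        }
        System.out.println();
    }
}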

Guest Anonymous Poster
Quote:
Original post by Stachel
For the Java benchmark, it's been tweaked to replace (x / 10) with (x * .1f). That's faster on some CPUs, but slower on others. Doesn't that mean the Java JIT should do this transformation on its own? After all, isn't the great strength of the JIT being able to adapt to the particular CPU?
Actually, that particular optimization was not one any compiler could do. It could be called a (minor) algorithmic change, because it changes the input/output mapping of the div and mul operations. It just "happens" to work in this application because it's not important exactly how the numbers are distributed in the array that represents large numbers, only that the array's "real" value is correct (e.g. 4*10^2 + 1*10 is the same as 3*10^2 + 11*10).

The problem was really that neither the Java nor the C# JIT did div and mod with just one instruction, so another way had to be used to get the same speed as the combined form. But it's not such a complex or time-consuming optimization that it couldn't be done in a JIT compiler. It just shows the immaturity of the .NET and Java JIT compilers compared to C++ compilers. That information is relevant today, but it doesn't prove anything against the "managed" languages' compilation model.

Quote:
Original post by Stachel
For a benchmark, consider that D supports inline assembler. That means one could write the Pi benchmark entirely in hand-optimized assembler, and that would technically be writing it in D. Nothing could beat that for speed. But is that a reasonable benchmark for D? I'd sure cry foul. I didn't see any special tweaking in the D or C versions of Pi.

Writing in inline assembler is not writing in D. I'd cry foul, too.

Quote:
For the Java benchmark, it's been tweaked to replace (x / 10) with (x * .1f).

...

I'm not interested in benchmarks carefully designed to avoid weaknesses in an implementation and only show its strengths.

...

Write benchmarks in a manner that's straightforward for the language being used.

Indeed. So what you were on about is optimizing the benchmark, not tailoring the benchmark to the language. I was referring to the latter.

The problem is that most of these multi-language benchmarks that we see online are written by people who have most of their experience in only one of the languages. So when it comes to porting the benchmark from that language to the others, they carry with them the same idioms -- which may not apply to the other languages being benchmarked.

For example, in a benchmark that makes use of a large number of objects, the C++ version might preallocate the objects up front in an array. In Java these days, object pools are rarely used except for resource-intensive objects (such as Threads) because of advances made in garbage collection, so doing the same thing for the Java benchmark would not be a reflection of real-world code. If what you were benchmarking is array access, then that's one thing. But if the storage and allocation of objects are peripheral to the benchmark, they should be done in a manner that reflects real-world usage.

There's a difference between optimizing a benchmark to get better results and "coding to the language". For many benchmarks it isn't going to matter, but it's always important to be aware of the common idioms of each language you are benchmarking, so that you create a benchmark that is accurate (well, whatever that means in the world of microbenchmarks).
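
As a purely hypothetical sketch of that idiom difference (none of this is code from the benchmarks discussed), the first loop below mimics a C++-style preallocated object pool, while the second uses the allocate-in-the-loop style described as idiomatic Java:

import java.util.Random;

public class AllocationIdioms {
    static class Point {
        double x, y;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        final int N = 1000000;

        // C++-style idiom carried over: preallocate a pool of objects
        // up front and reuse them.
        Point[] pool = new Point[N];
        for (int i = 0; i < N; i++) {
            pool[i] = new Point();
        }
        double sum1 = 0;
        for (int i = 0; i < N; i++) {
            Point p = pool[i];
            p.x = rng.nextDouble();
            p.y = rng.nextDouble();
            sum1 += p.x * p.y;
        }

        // Idiomatic Java: allocate short-lived objects in the loop and
        // let the garbage collector deal with them.
        double sum2 = 0;
        for (int i = 0; i < N; i++) {
            Point p = new Point();
            p.x = rng.nextDouble();
            p.y = rng.nextDouble();
            sum2 += p.x * p.y;
        }

        System.out.println(sum1 + " " + sum2);
    }
}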

Quote:
Original post by Stachel
For a benchmark, consider that D supports inline assembler. That means one could write the Pi benchmark entirely in hand-optimized assembler, and that would technically be writing it in D. Nothing could beat that for speed. But is that a reasonable benchmark for D? I'd sure cry foul. I didn't see any special tweaking in the D or C versions of Pi.

I have written things in hand-optimized assembler. What about you? Is the compiler able to convert that hand-optimized assembly into 64-bit code? Could you use that compiled code on any CPU in natively optimized form? You could also write hand-optimized ASM in Java and use JNI. Then you could run that ASM under an error condition and say: the Java application didn't crash because of that ASM code, while the other one did and took down the OS. Should this be in a benchmark? Obviously it should.

However, this benchmark is a simple one. It's just a coincidence that the C++ code was so similar that there was no need for modifications. In fact, you can also find code that is so nicely written that you could just copy and paste it into Java or C#, change the method naming, and it works.

Quote:
For the Java benchmark, it's been tweaked to replace (x / 10) with (x * .1f). That's faster on some CPUs, but slower on others. Doesn't that mean the Java JIT should do this transformation on its own? After all, isn't the great strength of the JIT being able to adapt to the particular CPU?

Are you saying that there are CPUs where division isn't an 80-cycle monstrosity and multiplication isn't a sweet 1-1.5 cycle operation?

If "a" is a double-precision floating-point number in IEEE format and
"a" modulo 10 != 0, then in general
"a" / 10 != "a" * 0.1D

If a compiler replaced one with the other without your permission, it could decrease the precision of the computation considerably.
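
A quick sketch of that point (not from the thread; the loop bound is arbitrary), counting how often the two expressions disagree over a range of small integer-valued doubles:

public class DivVsMul {
    public static void main(String[] args) {
        int mismatches = 0;
        for (int i = 1; i <= 1000; i++) {
            double a = i;
            // Dividing by 10 and multiplying by 0.1 round differently,
            // because 0.1 has no exact binary representation.
            if (a / 10 != a * 0.1) {
                mismatches++;
            }
        }
        System.out.println("values in 1..1000 where a/10 != a*0.1: " + mismatches);
    }
}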

BTW, I was bitten by a * b / c != a * (b / c); it created a hiccup in the middle of the screen.

Quote:
Original post by Raghar
Are you saying that there are CPUs where division isn't an 80-cycle monstrosity and multiplication isn't a sweet 1-1.5 cycle operation?
Yes. The 386/387 CPU combination and the 486 have integer division faster than float multiply, according to clock cycle counts. Not to mention any processor that doesn't have hardware floating point (as seen in embedded processors). And it isn't just the multiply; there are the conversions to and from float to add in, plus resetting the rounding mode.

I tried this change on my machine. Despite the CPU instruction timings saying that the floating multiply should be faster, the actual benchmark timing shows it to be slower. I can't explain this other than to note that for modern CPUs the cycle counts aren't the whole story. It probably has to do with some internal pipelining/scheduling issue.

Quote:
Original post by Kevlar-X
Quote:
Original post by deathkrush
The fastest way to calculate PI in C is of course:

*** Source Snippet Removed ***


Hey, I loved the snippet!
Though, it wouldn't compile as given (F_00() is used before it is defined), so I rearranged the code blocks.

Then it crunches out the number '0.250'.

So I might add: it may be a great way to calculate pi, compact and cool-looking, but it lacks one extra nicety we would like in a program that calculates pi: determining the correct result.
On all other accounts it's cool though! :)

- Jacob


It's from the International Obfuscated C Code Contest. It probably worked fine with old compilers and produced a correct result. Does anybody know what's wrong with the code? I get a wrong result too. The author says that you can get more precision by using a bigger ASCII picture, though. :-)

Quote:
Original post by Promit
Sure (with the exception of support for vectorized instruction sets, which Java and C# lack).

You could add them yourself. The source code of the Java JIT is accessible, and if you don't screw it up too much, you could add that little bit of support for SIMD instructions.

BTW, SIMD means Single Instruction, Multiple Data. And I somehow doubt they would add support for multiple instructions.

Quote:
Original post by Promit
Sure (with the exception of support for vectorized instruction sets, which Java and C# lack).

You could add them yourself. The source code of the Java JIT is accessible, and if you don't screw it up too much, you could add that little bit of support for SIMD instructions.
The problem, as you'd know if you used ASM, is that SIMD registers are not that great a win. First of all, they are small: 128 bits is VERY small if you need to work with 64-bit numbers. Support for executing operations on full 128-bit registers in one clock cycle was only added with the Core 2 Duo; previous CPUs split some (all?) of those operations into two 64-bit halves.


BTW, SIMD means Single Instruction, Multiple Data. And I somehow doubt they would add support for Multiple Instructions...
