Browser Game Engine Speeds.


Hey all,

I'm working on a 3D browser engine using WebGL and JavaScript, something similar to what Irrlicht put out (CopperLicht). Recently I've implemented a terrain system and started loading some big terrains to push its limits. When loading a 1025×1025 heightmap I start to take an FPS hit, down to about 30 FPS. The thing is, when I look at my computer's diagnostics, I'm still barely scratching what it can handle.

With the game down to 30 FPS I'm using 20-25% of my CPU, I still have about 1.75 GB of RAM free, I'm only using 500 MB of the 1 GB of VRAM available, and my GPU is only running at about 40%. The only thing I can think of that would cause this is that I'm getting horrible cache behavior and my CPU is sitting idle most of the time.

I'm sure only a select group of people out there know enough about how browsers handle JavaScript, but if you're reading this I could use some suggestions on how to find my bottleneck.

Thanks in advance.

Assuming you have a quad-core CPU (or a hyper-threaded dual core), 25% usage usually means that one core is at 100% and the others are at 0%. That is, the engine isn't making use of threads and is CPU-bound.

I do indeed have a quad core. I thought that browsers handled all thread creation and multiprocessing? Is there anything I can do to make use of my other cores?

Edit:

I guess there is something called Web Workers, which are a basic way of implementing threads. I always thought that browsers were smart enough to make use of multicore processors... I guess there aren't a lot of web applications that need more than one core.

Kind of ironic that my 6-year-old desktop with a 3 GHz single core has more usable CPU power here than my brand new quad-core i7...


[quote]
I do indeed have a quad core. I thought that browsers handled all thread creation and multiprocessing? Is there anything I can do to make use of my other cores?

I guess there is something called Web Workers, which are a basic way of implementing threads. I always thought that browsers were smart enough to make use of multicore processors... I guess there aren't a lot of web applications that need more than one core.

Kind of ironic that my 6-year-old desktop with a 3 GHz single core has more usable CPU power here than my brand new quad-core i7...
[/quote]


There is now a concept called Web Workers that lets browsers run scripts on additional threads. Check this out to get you started: Web Workers

I don't know specifically which browsers implement this yet (I suspect Chrome and Firefox do, whereas IE probably doesn't, except maybe IE9).

Unfortunately, if you're using a prebuilt engine that isn't aware of these sorts of things, you would have to implement all of the concurrency yourself, which is likely to be complicated to say the least.
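For reference, the basic worker pattern itself is small. Here is a minimal sketch (the file name, rawHeightmapArray, and the heightmap processing are all made up for illustration): the page posts data to a background script and gets a message back, with no shared state between the two.

[code]
// main.js -- runs on the page's single JS thread
var worker = new Worker('heightmap-worker.js');   // hypothetical worker script

worker.onmessage = function (e) {
    // e.data is whatever the worker posted back, e.g. processed vertex data
    console.log('worker finished, got ' + e.data.length + ' values');
};

// hand the raw heightmap off so the main thread stays free for rendering
worker.postMessage(rawHeightmapArray);

// heightmap-worker.js -- runs on a separate thread, no DOM access here
onmessage = function (e) {
    var heights = e.data;
    var out = [];
    for (var i = 0; i < heights.length; i++) {
        out.push(heights[i] * 0.5);               // stand-in for real processing
    }
    postMessage(out);
};
[/code]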


[quote]
There is now a concept called Web Workers that lets browsers run scripts on additional threads. Check this out to get you started: Web Workers

I don't know specifically which browsers implement this yet (I suspect Chrome and Firefox do, whereas IE probably doesn't, except maybe IE9).

Unfortunately, if you're using a prebuilt engine that isn't aware of these sorts of things, you would have to implement all of the concurrency yourself, which is likely to be complicated to say the least.
[/quote]


Thanks for the post. I've browsed through Web Workers a bit. Are these true OS threads? Are they making real system calls such as fork() to spawn processes? If so then that's awesome. It will be a lot of work to implement but I'm glad the technology is there.

Before resorting to creating multiple threads you might want to profile your existing code, find out which bits are slow, and see whether they can be optimized first. A quick search says Firebug has a JavaScript profiler.
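If you just want a quick number before reaching for a full profiler, console.time/console.timeEnd (supported by Firebug and the WebKit developer tools) is enough. A minimal sketch; updateTerrain is just a stand-in for whatever you suspect is the hot path:

[code]
console.time('terrain update');     // start a named timer
updateTerrain();                    // the code you suspect is slow
console.timeEnd('terrain update');  // logs something like "terrain update: 12ms"
[/code]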


[quote]
Before resorting to creating multiple threads you might want to profile your existing code, find out which bits are slow, and see whether they can be optimized first. A quick search says Firebug has a JavaScript profiler.
[/quote]


Awesome! Thank you very much, a great resource that I was unaware of.


[quote]
I thought that browsers handled all thread creation and multiprocessing?
[/quote]
Ugh, no.

[quote]
Is there anything I can do to make use of my other cores?
[/quote]
Not really.

[quote]
I guess there is something called Web Workers, which are a basic way of implementing threads.
[/quote]
Not usable for low-latency work. Web Workers are something you spin up to parse a file or fetch a web page over five seconds so it doesn't block the UI.

[quote]
I always thought that browsers were smart enough to make use of multicore processors... I guess there aren't a lot of web applications that need more than one core.
[/quote]
Browsers themselves do make use of multiple cores.
JavaScript, however, is painfully restricted to single-threaded execution due to the design of the DOM.

[quote]
Kind of ironic that my 6-year-old desktop with a 3 GHz single core has more usable CPU power here than my brand new quad-core i7...
[/quote]
JavaScript has the computing power of an 8086 running at 2 MHz.


[quote]
JavaScript has the computing power of an 8086 running at 2 MHz.
[/quote]

I find this kind of hard to believe, but I'll bite. So are you saying that there is nothing I can do to keep my CPU from running the engine on one core? Also, the statement above makes it seem like no matter what CPU I have, I'm running at 2 MHz?

In one statement you say that browsers don't handle thread creation or multiprocessing, then you say that I don't have any way to make threads or spawn processes myself, and then you say that browsers make use of multiple cores. I'm a bit confused here.

I can't seem to find any articles on how much slower JavaScript is compared to a compiled C++ program.

Of course he's "stretching it a bit" to make the concept clear by using a bold statement. It's not literally like that. It's more like "take your CPU speed and divide by 100"... and throw away most architectural improvements of the last few years, and slice your cache to a quarter.
You don't see many benchmarks around because they make little sense, especially for JS, where the implementation is king of the hill in determining performance. Furthermore, little details, such as using proper data types instead of just letting the runtime guess, can make a lot of difference on some systems... but not on others.

So it's comparing apples to pineapples... nonetheless, the best effort I know about is here. You will see there's some dependency on the exact workload.
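As a contrived example of what "letting the runtime guess" means: keep a hot array to a single numeric type, and force integer indices with | 0 where you can. The sampleHeight helper below is made up purely for illustration.

[code]
// slow-ish: mixed numbers and strings, so the engine can't specialize the array
var mixed = [1, '2', 3.5, 'four'];

// better: one numeric type throughout, integer index forced with | 0
var heights = [10.2, 11.7, 9.3, 12.0];
function sampleHeight(u) {              // u in [0, 1)
    var i = (u * heights.length) | 0;   // truncate to an integer index
    if (i >= heights.length) { i = heights.length - 1; }
    return heights[i];
}
[/code]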


"take your CPU speed and divide by 100"... and throw away most architectural improvements over the last few years, and slice your cache to 1/4.


Jeez, is it really that drastic? Even if JS was 10x slower than compiled code, it would be able to handle a PS2 quality engine, which use to run on a 300Mhz processor. If its 100x slower... than I'm probably screwed. It's interesting that with WebGL I essentially have the full power of a modern GPU yet my CPU is crippled to that of a processor 2 decades ago. I thought that if the creators of Irrilicht made a JS/WebGL engine that it would have some potential.

Not sure what I should do here, I may be wasting my time (although I have learned a lot, so it's not truly a waste). Would it be better to run Java or something inside the browser and require plug-ins than to try and do everything in Javascript?

I currently have skin-and-bone animation, terrain, lighting, texturing/multitexturing, many forms of intersection tests, frustum culling, and bounding volumes. The GPU stuff has worked wonderfully, I can load some nice looking scenes, but if the CPU is crippled then I don't see how I could have a physics/collision system + game logic + CPU rendering overhead (draw calls and such) + animation setup overhead all running at once.

I've never written a java applet before. Are they significantly faster than javascript? Can you mix java and javascript together? WebGL can only be used in javascript as far as I know. I'm not sure what hardware accelerated graphics java applets offer.

What are my options here? =/

Your program can send off as many frames as it wants (potentially far faster than the GPU can render them), and the GPU then processes them as fast as it can. Where do you get your 40% GPU utilization from?
And what is your graphics hardware like? My guess is that the graphics card is the framerate bottleneck, not the CPU.
The fact that it's a client-side browser game using WebGL shouldn't matter much, although there might be a cap somewhere for compatibility/performance purposes.


[quote]
Your program can send off as many frames as it wants, and the GPU then processes them as fast as it can. Where do you get your 40% GPU utilization from?
And what is your graphics hardware like? My guess is that the graphics card is the framerate bottleneck, not the CPU.
The fact that it's a client-side browser game using WebGL shouldn't matter much.
[/quote]

I run GPUShark while I'm running the engine. I have a GeForce GT 425M. It has 1 GB of VRAM and the engine is using 500 MB. GPUShark tells me that my GPU is running at about 40%.

If I lower the block size of my terrain (which means more draw calls and more frustum/block intersection tests, but still essentially the same number of triangles rendered), my framerate starts to dip down some.

After browsing around some, it seems Java applets and JS can communicate, although I'm not sure to what extent. I wonder if I would get some big gains by having my matrix/vector math as well as intersection tests and other expensive computations done in a Java applet (if this is even possible).


[quote]
I run GPUShark while I'm running the engine. I have a GeForce GT 425M. It has 1 GB of VRAM and the engine is using 500 MB. GPUShark tells me that my GPU is running at about 40%.

If I lower the block size of my terrain (which means more draw calls and more frustum/block intersection tests, but still essentially the same number of triangles rendered), my framerate starts to dip down some.
[/quote]

Alright, that does sound like a CPU cap, then...
What are your block sizes now? You said that your terrain is 1024^2; I'd probably use 128^2-256^2 blocks, with larger, mip-mapped ones further away...
Do you consider all blocks for frustum culling or only ones that are within a certain distance from the camera?
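If it's the former, a cheap squared-distance rejection in front of the frustum test is one option. A rough sketch; blocks, camera, frustumContainsBlock and drawBlock are stand-ins for whatever your engine actually uses:

[code]
var maxViewDistance = 512;                        // hypothetical draw distance in world units
var maxDistSq = maxViewDistance * maxViewDistance;

for (var i = 0; i < blocks.length; i++) {
    var b = blocks[i];
    var dx = b.centerX - camera.x;
    var dz = b.centerZ - camera.z;
    if (dx * dx + dz * dz > maxDistSq) {
        continue;                                 // too far away, skip the frustum test entirely
    }
    if (frustumContainsBlock(b)) {                // the existing frustum check
        drawBlock(b);
    }
}
[/code]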

I consider all blocks for frustum culling and am using 65×65 blocks. I'm sure there are a good number of optimizations I could do, but it seems like that would just be a band-aid on the real problem. Once I start running a collision detection system + animation + view frustum culling + game logic on a full scene, I just don't see how it would be possible if I'm already taking an FPS hit with just a terrain engine and two animations running. Not to mention I have a pretty fast CPU.

I'm not sure what the overhead is in calling a Java function from JavaScript (or vice versa). In order to use WebGL I would have to do all WebGL calls in JavaScript, but I think most of the other heavy engine work could be done in Java and get a big speed boost. Unfortunately I've never made a Java applet before, so I don't know if I'm completely wrong. I know that Minecraft can be run in a browser via a Java applet and it gets pretty decent performance.

65? That's an odd size for a terrain patch, I think. ;) Hmm...
That's still about 256 frustum-culling checks per frame (16×16 blocks of 65×65 vertices sharing edges)...
What happens when you disable frustum culling and view the entire terrain, compared to having it enabled while viewing the entire terrain?


[quote]
65? That's an odd size for a terrain patch, I think. ;) Hmm...
That's still about 256 frustum-culling checks per frame...
What happens when you disable frustum culling and view the entire terrain, compared to having it enabled while viewing the entire terrain?
[/quote]

My FPS can be weird sometimes. I'll load the browser engine and it'll be at 60 FPS, and then other times I'll load it and it'll sit at 40, even when it's the same code.

I turned off the view frustum checks and just rendered the whole terrain, and I didn't notice an FPS change.


[quote]
Jeez, is it really that drastic? Even if JS was 10x slower than compiled code, it would be able to handle a PS2-quality engine, which used to run on a 300 MHz processor. If it's 100x slower... then I'm probably screwed.
[/quote]
Yes, it is that drastic.

[quote]
It's interesting that with WebGL I essentially have the full power of a modern GPU, yet my CPU is crippled to that of a processor from two decades ago. I thought that if the creators of Irrlicht made a JS/WebGL engine, it would have some potential.
[/quote]
Imagine an F1 race track. The car is the GPU, as fast as the best of them. But your pit crew is slow and lazy and takes three minutes to change a tire.

So the goal is to stay out of the pit as much as possible.

The same applies to other parts. Regex processing may be done natively, so it will be quite fast on average, but setting up the strings will be slow. The DOM is efficiently represented, but the JS accessors are, again, slow. Numbers are a disaster: a mixed int/float representation without many hard rules on conversion, making it essentially impossible to do reliably fast calculations.

[quote]
I don't see how I could have a physics/collision system
[/quote]
This was pointed out when the first WebGL demo was presented. It had no collision system, and when that was pointed out, there was the usual silence.

One way is to move collisions to the GPU as well. It's not trivial and it limits what can be done, so it's not ideal.

[quote]
game logic + CPU rendering overhead (draw calls and such)
[/quote]
Logic is not a problem; it can utilize workers. Draw call overhead can be effectively minimized, so that's not a problem either.

[quote]
animation setup overhead
[/quote]
This was pointed out from the start. Animation and texture setup are slow.

Some of these can probably benefit from TypedArrays. But minimize the need for such work.
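(For anyone unfamiliar with them: a typed array is a preallocated buffer of a single numeric type, which is also the form WebGL ultimately wants for buffer and uniform uploads. A minimal sketch; the bone count and the commented-out WebGL call are only assumptions about how it might be used.)

[code]
// 16 floats per 4x4 matrix, preallocated once and reused every frame
var boneCount = 60;
var boneMatrices = new Float32Array(boneCount * 16);

// write into the buffer in place instead of building new JS arrays each frame
for (var i = 0; i < boneMatrices.length; i++) {
    boneMatrices[i] = 0;                           // stand-in for the real skinning math
}

// gl.uniformMatrix4fv(boneLocation, false, boneMatrices);  // what it would eventually feed
[/code]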

[quote]
I've never written a Java applet before. Are they significantly faster than JavaScript? Can you mix Java and JavaScript? WebGL can only be used from JavaScript as far as I know. I'm not sure what hardware-accelerated graphics Java applets offer.
[/quote]
Applets run inside the JVM, which is not an ideal choice, but it worked for Minecraft. The JVM, being probably the fastest managed VM around, has the least overhead; more importantly, it gets OpenGL support via JNI, so for the most part it has few drawbacks compared to native code.

[quote]
What are my options here? =/
[/quote]

1) Simplify as much as possible. Why are iPhone shooters "on rails"? Controls have a lot to do with it, but it helps with the complexity of the renderer as well.
2) Adapt to the restrictions of the environment. If some mechanism can be implemented purely on the GPU, use that; otherwise simplify it or remove it completely so it can run in JS. Instead of full per-pixel collision detection, make the level effectively 2D, define simple walkable zones, and test against those (a point-vs-polygon test, sketched after this list) rather than doing full-scene collision detection.
3) Simplify animation and physics. Unless it works fully on the GPU (and can be implemented with the available shaders), use simpler physics. Mario did it. Simplify geometry as well; a lot can be faked with box approximations.
4) Just because you can barely make something happen on an i7, don't forget about the majority of users running Atom netbooks with integrated graphics that barely support shaders. Unless it works reliably at 200 FPS on your machine, it will likely not work anywhere else.
5) As a rule of thumb, even though the GPU can render 4096×4096 textures at 120 Hz, the overall scene complexity that is manageable is somewhere around Doom. Or: 2D levels, only the most trivial pseudo-physics, a fully pre-processed world, very trivial environment interactions, and hard-coded animations (preferably sprites or simple models).
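To make point 2 concrete, here is a minimal sketch of the classic even-odd (ray-crossing) point-in-polygon test; the flat poly layout and the walkableZone usage are made up for the example.

[code]
// poly: [x0, y0, x1, y1, ...] describing the walkable area in 2D
function pointInPolygon(px, py, poly) {
    var inside = false;
    var n = poly.length / 2;
    for (var i = 0, j = n - 1; i < n; j = i++) {
        var xi = poly[i * 2], yi = poly[i * 2 + 1];
        var xj = poly[j * 2], yj = poly[j * 2 + 1];
        // does a horizontal ray from (px, py) cross the edge (i, j)?
        var crosses = ((yi > py) !== (yj > py)) &&
                      (px < (xj - xi) * (py - yi) / (yj - yi) + xi);
        if (crosses) { inside = !inside; }
    }
    return inside;
}

// usage: only accept the move if the new position stays inside the walkable zone
// if (pointInPolygon(newX, newY, walkableZone)) { player.moveTo(newX, newY); }
[/code]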

It's not really a new problem, and the solutions are the same. If something is too demanding, change the approach. It wasn't until very recently that most methods could be implemented fully and "correctly"; everything from physics to animation has always been just various hacks and approximations.

Thanks for the helpful post.

So you don't think I would get much gain by moving the computationally expensive parts of the engine into Java applets?

If you think it's still possible to have a 3D engine in JS by simplifying things as much as possible, I may keep on trucking and see where I can get, although I have to question whether a game that simplified will even be fun. I can obviously just remove the terrain LoD and keep things limited to a sandbox environment, and I'm sure I can simplify my animations and optimize the JS code in a lot of places. I was hoping to learn about algorithms that are used in modern 3D game engines, but it seems like I'll be implementing algorithms from the stone age =/

Edit:
So I spent some time checking out games from my childhood (PS1 stuff). Did you know that the PS1 had a 33 MHz processor, 2 MB of DRAM and 1 MB of VRAM? The fact that they could even make a 3D game on specs like that blows my mind. I guess we youngsters are spoiled.

If JS can't compete with a 33 MHz processor I think I will slit my wrists >_>
Obviously I have loads more DRAM, VRAM and a modern GPU to help pick up some slack, plus whatever I can compute server-side and send across the wire. Making some interesting 3D browser games may be entirely possible still.


[quote]
I was hoping to learn about algorithms that are used in modern 3D game engines, but it seems like I'll be implementing algorithms from the stone age =/
[/quote]
WebGL and JS are not modern 3D hardware. They are, in many ways, worse than the PS1.

Occam's razor: why isn't anyone making anything with them? Because, beyond hype and single-effect demos, they are useless for real-world problems. And even then, if one were to put enough effort into them, the range they need to cover is incredible, from integrated graphics running on Atoms to water-cooled, overclocked i7s running multiple 6970s. Unreal Engine, the de facto standard, covers a performance range of about a factor of 3. Or: i5-i7 and 6x00-series cards.

[quote]
So I spent some time checking out games from my childhood (PS1 stuff). Did you know that the PS1 had a 33 MHz processor, 2 MB of DRAM and 1 MB of VRAM? The fact that they could even make a 3D game on specs like that blows my mind. I guess we youngsters are spoiled.
[/quote]
Yes. Those people studied the exact specs of the hardware, then tailored algorithms around them.

You cannot throw per-poly/per-pixel collision detection at it. It needs to be faked.
LoD terrain? Forget it. First, reduce the terrain from 1 million points to 1,000. Then make a collision cover out of 100 polys. Or preferably, make the terrain flat and just test against z (a sketch of that kind of ground test follows below).
Animations? Yes, if your model is composed of 5 bones, hard-coded, and 100 polys. No skinning, unless you get it running fully on the GPU.
Physics? Again, yes, if the scene is simple enough and there are only a dozen or so point or rigid objects without interaction. See Angry Birds for comparison; that is about as much physics as you get.
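To make the terrain point concrete, a minimal sketch of about the cheapest ground test possible: sample a heavily reduced height grid kept on the CPU and clamp the player to it. coarseHeights, cellSize, gridSize and player are all made-up names.

[code]
// coarseHeights: a heavily reduced grid (say 33x33 samples) kept CPU-side only
function groundHeightAt(x, z) {
    var gx = Math.floor(x / cellSize);
    var gz = Math.floor(z / cellSize);
    gx = Math.max(0, Math.min(gridSize - 1, gx));   // clamp to the grid
    gz = Math.max(0, Math.min(gridSize - 1, gz));
    return coarseHeights[gz * gridSize + gx];        // nearest sample, no interpolation
}

// per frame: keep the player on or above the ground, nothing fancier
var floorHeight = groundHeightAt(player.x, player.z);
if (player.y < floorHeight) { player.y = floorHeight; }
[/code]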

Look at it this way: all the solutions you will need have already been tried and tested 10-20 years ago, so just pick and choose.

[quote]
Obviously I have loads more DRAM, VRAM and a modern GPU to help pick up some slack
[/quote]
In WebGL and JS, there is none of that. It's a virtual machine. If HTML5 local storage is ever implemented, you will have 5 MB of disk, some 200 MB of memory total, and serial IO, which drives everything, from the GPU to loading to any other external stuff that you have no control over.

When people say that WebGL is a joke and useless in the real world, there might be a reason. Or put differently: unlike today's high-end hardware, which helps solve the problem, with HTML5/WebGL the platform will be telling you what you can do, how, and why not. Unlike the rest of modern engines, which act as enablers (just throw your ideas at them), the web platform fights you at every step. So you have no choice but to learn about the limitations and work around them, reducing your ideas to what is possible.

Or go native and just write for iOS (OpenGL ES) or Windows (DX is best-in-class), both of which will work seamlessly on all mobile devices as well.

I appreciate the brutal honesty. If I am wasting my time I would like to know, but I have also put a good bit of time into this already and don't want to throw it out on a whim, so I have a few more questions for you if you don't mind.


[quote]
WebGL and JS are not modern 3D hardware. They are, in many ways, worse than the PS1.
[/quote]

Why is WebGL worse than the PS1? I have thrown a good bit at my GPU using WebGL and have had no problems, from multiple animations with ~60 bones to large triangle counts. I could send a 1025×1025 terrain with no LoD to my GPU and it would chug it down without being my bottleneck. Why do you think that WebGL is worse than the PS1's GPU?


[quote]
Occam's razor: why isn't anyone making anything with them? Because, beyond hype and single-effect demos, they are useless for real-world problems. And even then, if one were to put enough effort into them, the range they need to cover is incredible, from integrated graphics running on Atoms to water-cooled, overclocked i7s running multiple 6970s.
[/quote]

I agree here, but the engine would essentially be platform-independent (though not browser-independent), so couldn't one just pick some processor cap and everyone under it is out of luck?


[quote]
In WebGL and JS, there is none of that. It's a virtual machine. If HTML5 local storage is ever implemented, you will have 5 MB of disk, some 200 MB of memory total, and serial IO, which drives everything, from the GPU to loading to any other external stuff that you have no control over.
[/quote]

Not being able to access the HD definitely does suck, but the PS1/N64 didn't have built-in HDs either. Of course they could stream things from disc/cartridge, but content can to some extent be streamed from a server these days as well. Also, my engine is already using up 400 MB of DRAM, so why is there a 200 MB cap?

I know that anything modern is out of the picture, but I think there is still some market for browser games. Zynga is proof of that: they turned silly 2D games into a successful business (they were going to have a $1 billion IPO, last I heard). I'm sure a lot of us wouldn't mind paying a dollar to play Ocarina of Time, or any other great nostalgic game.

Do you believe that even PS1-quality games are not possible with JS/WebGL?


[quote]
Why is WebGL worse than the PS1? I have thrown a good bit at my GPU using WebGL and have had no problems, from multiple animations with ~60 bones to large triangle counts. I could send a 1025×1025 terrain with no LoD to my GPU and it would chug it down without being my bottleneck.
[/quote]

That is not WebGL; it's your graphics card driver. Run the same thing on less than top-notch hardware and you'll get one of the following:
- it isn't supported and doesn't work (blank screen)
- it doesn't produce the expected results
- it is slow, inaccurate, or produces artefacts (due to driver emulation, non-optimal code paths, shader fallbacks)
- it crashes

What we're talking about here is shader code; it has nothing to do with JavaScript, browsers, or even WebGL/OpenGL. It's about what your particular graphics card driver does to turn shader code into something the GPU runs.

Same for large triangle counts. Once they are in GPU memory, things are great, since they have nothing to do with GL or JS anymore. But you rely on the driver being able to handle everything before that. Google didn't even realize what a mess it is; they cried loudly when they found out and simply decided that they aren't capable of providing consistent support.

But WebGL is not "graphics", it's IO. Like a file: you write something to it and maybe read from it. It's completely external, so as long as this goes one way it's easy and fast. It's also asynchronous, so you cannot rely on it for the interactive parts.

[quote]
Why do you think that WebGL is worse than the PS1's GPU?
[/quote]
On the PS1, programmers had access to everything: registers, flags, timing interrupts. They pushed that to the limit. Under WebGL you don't know anything; it needs to work even without a GPU. You can send an opaque text string and hope something works. How, why, where, when: questions that cannot be answered. And again, once you send something to OpenGL it's a done deal; you lose all control, and it will or will not happen at some point in the future.

[quote]
I agree here, but the engine would essentially be platform-independent (though not browser-independent), so couldn't one just pick some processor cap and everyone under it is out of luck?
[/quote]
Google gave up on trying to provide that. Do you think maybe their programmers aren't good enough for it? Sure, they only have several billion dollars in budget, 30,000 of the world's best programmers, and a very heavy hand to pressure just about everyone into supporting them.

It's a pipe dream. The real world isn't cooperating.

[quote]
Not being able to access the HD definitely does suck, but the PS1/N64 didn't have built-in HDs either. Of course they could stream things from disc/cartridge, but content can to some extent be streamed from a server these days as well. Also, my engine is already using up 400 MB of DRAM, so why is there a 200 MB cap?
[/quote]
200 MB might be generous; 20-60 MB would be more realistic, because the same browser needs to run on an Android phone as well. On a GPU which can render about 10k triangles per scene. While in battery-saving mode.

Now consider that a single high-quality texture takes several megabytes and you need 40 of them. They cannot be stored, so each time you launch the app they must be downloaded. Via 3G. On a 1 GB/month plan.

This is why WebGL as a universal abstraction is flawed.

[quote]
Zynga is proof of that: they turned silly 2D games into a successful business...
[/quote]
They use Flash for pragmatic reasons:
- everyone has it (almost nobody can run WebGL; even browsers that do support it often need it enabled manually)
- it has strong performance guarantees (it either works or Flash isn't supported on that device; remember that *everyone* had Flash, so it always worked), and there is rarely a case where some machine would be too slow, unless the application is careless
- it's a sprite-pushing machine, one of the most trivial methods of rendering graphics, available since the '80s. WebGL needs to do things that are "next-gen" today.

[quote]
I'm sure a lot of us wouldn't mind paying a dollar to play Ocarina of Time, or any other great nostalgic game.
[/quote]
Yes, you would. This is why Zynga is worth that much.

They never, ever, even for a second, tried to be next-gen. Software isn't even part of their business. To make such advances, risk must be taken out of the equation. They deliver fun: not graphics, not next-gen, not technology, not algorithms, not software, but fun-in-a-can. They make that type of game because it earns them the most money: min(cost of development) and max(customer preference).

[quote]
Do you believe that even PS1-quality games are not possible with JS/WebGL?
[/quote]
Of course it's possible. Anything is possible; if nothing else, you write your own WebGL implementation and require a custom browser. But that isn't the real question. The real question is: "I want as many internet users as possible to have access to what I make, in the simplest possible way."

Is it cost-effective?

Consider the cost of writing three separate applications (iOS, Windows, Android). Would WebGL+JS cost more or less?

Right now, given the real state of affairs, WebGL isn't capable (as in: it cannot be done, no matter the effort) of delivering to 95% of those users; they simply cannot run it.

Thanks for all of the info; you have been really helpful.

I would still like to venture into 3D browser games, but JS is painfully slow, even for basic game engine computations. So what should I do now? Java applets? If JavaScript is 100x slower than C but Java is only 10x slower, then it's still a pretty big gain. I'm not sure how JOGL compares to WebGL. Are there better methods for making a 3D browser engine? Or is it hopeless and I should focus on something else?


[quote]
I would still like to venture into 3D browser games
[/quote]
Unless it's HTML/JS, it won't run "on the web"; it will run in a client-side plugin that happens to present itself in the browser. Or, same as a desktop app, it just crams itself into a browser window.

[quote]
...but JS is painfully slow, even for basic game engine computations. So what should I do now? Java applets? If JavaScript is 100x slower than C but Java is only 10x slower, then it's still a pretty big gain.
[/quote]
First, why are you writing an engine? And secondly, why do you insist on writing it in an environment where everyone gave up, including Google?

Want 3D? Use UDK.

AAA high-fidelity graphics are demanding: of developers, of hardware, of budgets... You'll notice that for the last 5-10 years there have been no new indie engines. There is some basic 3D here and there, but the rest is 2D. The sheer effort needed to develop something like this under ideal conditions is prohibitive. AAA publishers are having trouble funding such work, and even then just for consoles; the PC is simply too expensive.

Computational performance is just one of the parameters. How fast or slow a runtime is is merely one parameter, and rarely a relevant one. I gave examples above; each of the performance problems can be and has been addressed and solved in the past. If you cannot crunch 1 million polygons, you reduce the resolution to 1,000. Simple, and it needs to be done anyway if you're building an actual game, not just a tech demo. It will be one of thousands of compromises made during development.

[quote]
Are there better methods for making a 3D browser engine?
[/quote]

How many 3D browser engines have you heard of so far? How many of those use WebGL? How many of those were used in commercial products? (0/0/0.) RuneScape and Minecraft run on the JVM, which is standalone; Unity is a standalone plugin. None of them has anything to do with the web or browsers; they can just be conveniently launched from them.

Applets can, for example, be put into a web page, but there is nothing browser- or web-like about them. The modern alternative is Unity. But so far, none of the proposed solutions has managed to get around the basic premise: nobody is really asking for 3D inside a web page.

I was making a 3D engine for the learning experience and as a résumé builder. It also seemed like a nice bonus that a 3D browser engine wouldn't be expected to compete with all of the crazy 3D engines out there, and thus I could make relevant games on a very low budget.

If I made a 3D engine in C++ it would be completely worthless, as there are many open-source engines that do more than I would ever be able to do alone. At least with browser stuff I was venturing into something new, where there aren't a ton of engines out there, and it had a chance of being relevant. I'm just trying to make something relevant without having to be a multimillion-dollar corporation.

On a side note: is CopperLicht just tricking people into wasting their money on a 3D JavaScript engine? Is it completely worthless?
