

Member Since 04 Apr 2007

Posts I've Made

In Topic: Refactoring suggestions

14 January 2016 - 06:48 PM


Is there a reason why RegisterObjectProperty()/RegisterObjectMethod() live in asIScriptEngine... <snip>

"Drastically improve performance" is probably drastically exaggerated, but I see your point. :)
I don't want to move the registration into the asITypeInfo interface, as I want to keep it centralized in the engine interface. However, I should be able to do something about the performance anyway. I could, for example, cache the last used object type between calls, thus avoiding the lookup most of the time. Or I could simply change it so that instead of passing the name of the object type, the application would give the asITypeInfo representing the type. I'll think about it.
Just how big is your application interface, given that you're seeing a performance impact from this? Could you send me the configuration file (create it with the WriteConfigToFile helper function) so I can take a look at it and perhaps do some performance tuning?


This is entirely fair. Startup time is actually very reasonable right now, but I'm a meganerd for application responsiveness. Your idea to take the asITypeInfo as a parameter seems a very acceptable compromise, and I think I'd be happy with that. On the other hand, while I think I understand your rationale for the caching idea (the common case for API registration is to create a type, then register all of its properties/methods-- worth noting I completely fall into this case), I would argue it's the inferior API decision considering it adds additional hidden state and inconsistent behavior. The parameter idea keeps the constant-time lookup and is a less invasive source change.
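To make the tradeoff concrete, here's a minimal sketch of the two options being discussed. Registry, TypeInfo, DeclareType, and RegisterMethod are hypothetical stand-ins I made up for illustration-- this is not AngelScript's actual API or internals. The name-based overload amortizes the map lookup by remembering the last type (the caching idea), while the pointer-based overload is the constant-time parameter idea.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Hypothetical stand-in for an engine's type table, for illustration only.
struct TypeInfo { std::string name; int methodCount = 0; };

class Registry {
public:
    TypeInfo* DeclareType(const std::string& name) {
        TypeInfo& t = types_[name];   // created on first use
        t.name = name;
        return &t;                    // node-based map: pointer stays valid
    }
    // Name-based registration: a map lookup per call, amortized by
    // remembering the last type, since registration is usually grouped
    // (declare a type, then register all its members).
    void RegisterMethod(const std::string& typeName) {
        if (!lastType_ || lastType_->name != typeName) {
            auto it = types_.find(typeName);
            if (it == types_.end()) return;  // unknown type: ignore
            lastType_ = &it->second;
        }
        ++lastType_->methodCount;
    }
    // Pointer-based registration: no lookup, no hidden state.
    void RegisterMethod(TypeInfo* type) { ++type->methodCount; }
private:
    std::unordered_map<std::string, TypeInfo> types_;
    TypeInfo* lastType_ = nullptr;    // the hidden state the caching idea adds
};
```

The `lastType_` member is exactly the "additional hidden state" objected to above: the same call is cheap or expensive depending on what was registered previously.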


What's the consensus about pulling asCGarbageCollector or something similar into the public interface? ... <snip>

I'm not sure what you're trying to get at here. Each engine instance keeps its own garbage collector, and you can already interact with it through the asIScriptEngine interface. What is it you feel is missing?
I'm always on the lookout for ways to improve the garbage collection. Ideally nothing would be stored in the garbage collector until it really is garbage and has to be collected; that way the garbage collector wouldn't have to do time-consuming sweeps to determine whether an object is garbage or live.
I have a few ideas for improving the garbage collector on my to-do list already. Maybe I'll be able to try out some of those in 2016.


This was more along the conceptual lines of separating out script data (asITypeInfo/asIScriptModule), the stuff being scripted (asIScriptObjects at a micro level and, at a larger scale, a notion of a hypothetical asIGarbageContext), and the stuff executing scripts (asIScriptContext/asIScriptEngine) into more distinctly scoped concepts. This one is definitely more out there in terms of direct feature usefulness, and revisiting it, I'm not sure it's as attractive as I initially thought.


Looking up functions by name/signature is *super* yuck from a performance perspective ... <snip>

Yes, looking up functions by name/signature is slow. That's why it is recommended to do it only once, and then keep the asIScriptFunction pointer. :)
How often do you look up functions by name/signature? Can you reduce the number of lookups by caching the result? How much time is your application spending on the look-ups?
There is always room for making improvements in the library, but there has to be a really good argument for adding more complexity to the code and data structures in order to speed up a few look-ups. Name-mangling might be possible, but there are other ways to speed up look-ups too.


Very frequently. My current use case is something very similar to Theron, where all game events/application-to-script notifications are modeled as typed data structures sent to a (possibly overloaded) handler function. The caching solution you described was something I arrived at independently, but it just bothers me that maintaining a completely separate yet highly similar lookup system is necessary when some extensions to the existing one would largely solve the problem.
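The "look it up once, keep the pointer" pattern recommended above can be sketched generically. HandlerCache and the `slow_` table are hypothetical stand-ins: the slow map plays the role of an expensive by-declaration search (a GetFunctionByDecl-style lookup), and the cache keeps the resolved handler so each distinct declaration string only pays the slow path once.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>

// Handler stands in for a resolved script-function pointer.
using Handler = std::function<int(int)>;

class HandlerCache {
public:
    explicit HandlerCache(std::unordered_map<std::string, Handler> slowTable)
        : slow_(std::move(slowTable)) {}

    // Returns the handler for a declaration string, doing the expensive
    // lookup only once per distinct declaration.
    const Handler* Find(const std::string& decl) {
        auto it = cache_.find(decl);
        if (it != cache_.end()) return &it->second;   // warm path
        ++slowLookups_;                               // count cold lookups
        auto sit = slow_.find(decl);                  // "slow" search
        if (sit == slow_.end()) return nullptr;       // no such function
        return &cache_.emplace(decl, sit->second).first->second;
    }
    int slowLookups() const { return slowLookups_; }

private:
    std::unordered_map<std::string, Handler> slow_;   // stand-in for the engine
    std::unordered_map<std::string, Handler> cache_;  // the app-side cache
    int slowLookups_ = 0;
};
```

This is exactly the "completely separate yet highly similar lookup system" lamented above: it works, but it duplicates bookkeeping the engine already does internally.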

In Topic: Where is the cosine factor in extended LTE?

25 December 2015 - 11:40 PM

The answer is really subtle-- it's actually implicit/handled in the projection into screen space!


It's important to first remember that the actual equations in use here are approximating overall energy distributions over the sphere/hemisphere, though a lot of learning materials just kind of present things as pretty arbitrary quantity modifications. Engineering calculus says you can chop that space up into little tiny bits, and very loosely that's what's going on when you shoot rays in a path tracer/*mumble mumble mumble* in a rasterizer. Ever seen all the cool environment map integration techniques for spherical harmonic projection, etc.? Think about applying that literal process, just from the perspective of the camera. 


As a simple illustrative thought experiment, consider the case of a white triangle on a black background. If you were to hypothetically draw this triangle at increasing distances/oblique angles, then sum up all the white pixels, you should notice that the white pixel count decreases according to the inverse square law and the cosine of the angle between the triangle normal and the camera's forward vector.
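The pixel-counting experiment measures the projected solid angle of the triangle, which for a small flat patch is approximately area times cosine over distance squared-- that's where the cosine factor "hides" in the projection:

```cpp
#include <cassert>
#include <cmath>

// Projected solid angle of a small flat patch as seen from the camera:
// omega ≈ A * cos(theta) / d^2, where theta is the angle between the
// patch normal and the direction to the camera. The count of "white
// pixels" in the thought experiment is proportional to this quantity.
double projectedSolidAngle(double area, double cosTheta, double dist) {
    return area * cosTheta / (dist * dist);
}
```

Tilting the patch to 60 degrees (cosine 0.5) halves its pixel coverage; doubling the distance quarters it-- the two effects the thought experiment predicts.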

In Topic: Multiple Lights on game map with forward rendering

14 January 2015 - 09:38 AM

Another interesting approach, used by UE3, was to composite all lights CPU-side into a set of spherical harmonics coefficients, then send these to the GPU for shading. This is an awesome tradeoff for mobile, where the extra detail afforded by per-pixel calculations such as falloff is harder to see and the performance benefits are huge-- lighting time is completely independent of the number of lights influencing the object!
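Here's a minimal scalar sketch of the compositing idea-- not UE3's actual implementation, and SH4, addDirectionalLight, and evaluate are made-up names. Each light is projected into order-1 spherical harmonics (4 coefficients; a real renderer would do this per color channel and usually at order 2), the coefficients are summed, and evaluation with the surface normal costs the same no matter how many lights went in.

```cpp
#include <cassert>

// Four order-1 SH coefficients (single channel for brevity).
struct SH4 { double c[4] = {0, 0, 0, 0}; };

// Real SH basis for bands l = 0..1 (standard normalization constants).
void shBasis(double x, double y, double z, double out[4]) {
    out[0] = 0.282095;        // Y_0^0
    out[1] = 0.488603 * y;    // Y_1^-1
    out[2] = 0.488603 * z;    // Y_1^0
    out[3] = 0.488603 * x;    // Y_1^1
}

// CPU side: accumulate one directional light (unit direction) into the set.
void addDirectionalLight(SH4& sh, double dx, double dy, double dz,
                         double intensity) {
    double b[4];
    shBasis(dx, dy, dz, b);
    for (int i = 0; i < 4; ++i) sh.c[i] += intensity * b[i];
}

// GPU side (conceptually): evaluate the composited lighting at a normal.
// Cost is constant regardless of how many lights were accumulated.
double evaluate(const SH4& sh, double nx, double ny, double nz) {
    double b[4];
    shBasis(nx, ny, nz, b);
    double v = 0;
    for (int i = 0; i < 4; ++i) v += sh.c[i] * b[i];
    return v;
}
```

The shader only ever sees the four (or nine) coefficients, which is why lighting cost decouples from light count.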

In Topic: Standard emissive+ambient+diffuse+specular lighting model... how do texture a...

14 February 2014 - 12:14 AM

It just turns the specific value in question into something that can vary over the surface of a model instead of being uniform. Granted, ambient texture maps don't generally make sense, but it's just a question of scale and artistic intent.
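The idea can be sketched as a material term that is either a single uniform value or a per-texel lookup-- ScalarMap here is a hypothetical illustration, not any engine's actual API:

```cpp
#include <algorithm>
#include <vector>

// A scalar material term (e.g. specular intensity): either one uniform
// value, or a value that varies over the surface via a texture map.
struct ScalarMap {
    int w = 0, h = 0;
    std::vector<double> texels;   // row-major; empty means "no map"
    double uniform = 1.0;

    double sample(double u, double v) const {
        if (texels.empty()) return uniform;       // uniform everywhere
        int x = std::min(w - 1, (int)(u * w));    // nearest-neighbor fetch
        int y = std::min(h - 1, (int)(v * h));
        return texels[y * w + x];
    }
};
```

A "specular map" or "ambient map" is nothing more exotic than swapping the uniform branch for the texel branch.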

In Topic: HDR Light Values

25 January 2014 - 07:32 AM

Generally, intensity values like that are unitless unless you specifically work out some scale for them. A fair number of archviz renderers do exactly that so they can work nicely with measured IES light profiles. There isn't a *formal* standard among games or the DCC packages used to create their assets, though, as physically-based rendering is only just starting to catch on these days.


Incidentally, you very much want to establish some sort of PBR framework so the values you feed into the shader(s) are used in a meaningful context, which should hopefully make sense when you think about it.


Re: quadratic attenuation-- that should again make some intuitive sense considering real-world light follows the inverse square law. You likely will see better results moving over to a simple area light model, though, as point lights are physically impossible. This would also give you a more sensible attenuation model 'for free.'
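A tiny sketch of the contrast being drawn: exact inverse-square falloff blows up at zero distance precisely because a point source is unphysical. Clamping the distance to a source radius is one simple (and hedged-- it's not the exact area-light integral, just a common cheap stand-in) way a spherical emitter gives you sensible attenuation for free:

```cpp
#include <algorithm>

// Point light: exact inverse square law, which diverges as d -> 0
// because a zero-size emitter is physically impossible.
double pointAttenuation(double d) {
    return 1.0 / (d * d);
}

// Simple spherical-emitter stand-in: clamp the distance to the source
// radius so intensity stays finite at the surface, while matching the
// inverse square law in the far field. Not the exact solid-angle
// solution, just an illustrative approximation.
double sphereAttenuation(double d, double radius) {
    double clamped = std::max(d, radius);
    return 1.0 / (clamped * clamped);
}
```

Far from the source the two agree; only near the emitter, where the point-light model misbehaves, do they differ.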


Lastly, tone mapping is pretty much entirely an artistic process: you fiddle with it until it subjectively 'looks good,' and that's that.
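As one concrete knob to fiddle with, the classic Reinhard operator maps unbounded HDR luminance into [0, 1); the `exposure` parameter here is exactly the sort of artistic control meant above (this is one well-known operator among many, not *the* way to tone map):

```cpp
// Reinhard tone mapping: c / (1 + c), with an artistic exposure scale.
// Highlights compress smoothly toward 1 instead of clipping.
double reinhard(double hdr, double exposure = 1.0) {
    double c = hdr * exposure;
    return c / (1.0 + c);
}
```

Cranking `exposure` brightens the midtones while the curve keeps even extreme values below 1-- hence the fiddling: there's no objectively correct setting, only what looks good.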