
JSandusky

Member
  • Content Count

    10
  • Joined

  • Last visited

Community Reputation

237 Neutral

About JSandusky

  • Rank
    Member


  1. JSandusky

    AEcology: Upcoming AI generation library

      Great, I was worried my wording would come off as too adversarial. You're on the opposite side. I've never been given access to engineers on the front side of systems and have always had to rely on liaisons/experts for what the analysis should expect. It's refreshing to find a similar, slightly grumpy tone on the other side of the fence.

      As far as I'm concerned, game AI is lagging behind industrial methods. As I mentioned before, LOPA is utility-based AI (it is very literally Dave's IAUS) and it's quite old. I doubt there's an AI patent that can't be invalidated by a chemical-industry precedent. From Markov models to scalar math, everything that game AI is based on first has an industrial use that long predates it. Industry moves forward constantly; industry will always be ahead of game AI.

      There isn't even an "unrealistic accessibility" factor to it, since I have to make the evaluations function on garbage hardware. To my knowledge influence mapping first appears in 1987 (I would have been 2) in early models of the transfer of heat through nodes in a pressurized pipe system for "fast" evaluation (I haven't searched the subject further back; I was only concerned with modernizing the program, as that was my job). Back then they made some very disgusting estimations of this transfer, but those are no different from modeling influence in a navigation mesh. I know you've written an article on influence mapping, so I assume you're familiar with the many publications of Dr. Baybutt?

      If not then you should read them; you'll shit bricks and learn a lot. Actually you'll probably just become an alcoholic in despair.

      ---

      I think the ultimate summary is that "tweakability" must be extremely high. In a utility-based AI system I've been writing for Unity I made it a point to create a document that outlined the "less"/"more" guideword factors of the different curves, to demonstrate how the curves change based on their inputs (MKCB curves). Control and flexibility are what we all want. Machine learning naturally works against this a bit, though we can work around it with weights and biases.
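      For illustration, a minimal C++ sketch of the kind of response curve being tweaked here; the polynomial form and the parameter names (slope, exponent, shifts) are generic placeholders, not taken from any particular IAUS implementation:

      #include <algorithm>
      #include <cmath>

      // Generic response curve: maps a normalized input (0..1) to a normalized score (0..1).
      // "Guideword"-style tuning amounts to nudging these parameters and noting whether the
      // output becomes "less" or "more" aggressive.
      struct ResponseCurve {
          float slope = 1.0f;     // steepness / sign of the curve
          float exponent = 2.0f;  // > 1 flattens the low end, < 1 flattens the high end
          float xShift = 0.0f;    // horizontal offset of the input
          float yShift = 0.0f;    // vertical offset of the output

          float Evaluate(float x) const {
              x = std::clamp(x, 0.0f, 1.0f);
              // clamp the base so fractional exponents never see a negative value
              float y = slope * std::pow(std::max(0.0f, x - xShift), exponent) + yShift;
              return std::clamp(y, 0.0f, 1.0f);  // keep scores in [0, 1] for multiplication
          }
      };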
  2. JSandusky

    AEcology: Upcoming AI generation library

      You should elaborate just a smidge more on what you're calling "controls", as by reflex I'm substituting SIS for every reference to the word, in which case you lost my attention quite quickly. If it doesn't have a proven industrial use then it doesn't have a game development use either, in my opinion. All surviving "fancy" AI schemes mimic industry. Utility is basically LOPA, which is proven. I'm terribly sorry, but for all the machine theory being suggested, I really doubt you can enumerate the many superior alternatives that have been in use by the chemical industry for years. Safety is serious, and AI is a part of that.
  3. Another vote for looking into utility systems for your problem. Utility systems are awesome.

     Pay attention to the bits in the "Building a Better Centaur" presentation where they talk about escaping early and sorting considerations (I call them criteria; not sure what's a better fit yet, semantic wars). Not only do you get a really organic feel for cheap, but it gets even cheaper by being able to just say "meh ... I can't win ... screw this."

     I also reuse the system to determine the weights for different "ActionSets," probably what Dave calls packages.

     Handling targets can be a bit tricky to do cleanly though.

     Fast track to all of Dave's (and co-conspirators') presentations: https://www.google.com/search?q=GDC+Vault+Dave+Mark

     If for whatever reason you find the curves terrifying, here's a jump-start on them (C# from a Unity port of my C++ implementation of Dave's IAUS): http://hastebin.com/useqoxumox.cs As long as you write a GUI for manipulating the inputs, curves are a no-brainer, and tools like https://www.desmos.com/calculator are helpful; lately I use some pretty wacky curves.

     In practice I have curves on the variables of a curve. Curve-ception.
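     To show the "escape early" trick concretely, here's a minimal C++ sketch (this is not the hastebin code above; the types and names are illustrative). Scores for each consideration multiply together, so once the running product falls to or below the best action found so far it can never recover and the loop bails out:

     #include <functional>
     #include <vector>

     struct Context { /* game-state inputs for the decision would live here */ };

     // One consideration = a normalized 0..1 score for some aspect of the decision.
     using Consideration = std::function<float(const Context&)>;

     // Multiplicative scoring with early exit. Every factor is in [0, 1], so the
     // running product can only shrink; once it drops to or below the best score
     // found so far, this action cannot win and we stop evaluating.
     float ScoreAction(const std::vector<Consideration>& considerations,
                       const Context& ctx, float bestSoFar)
     {
         float score = 1.0f;
         for (const Consideration& c : considerations)
         {
             score *= c(ctx);
             if (score <= bestSoFar)
                 return 0.0f; // "meh ... I can't win ... screw this"
         }
         return score;
     }

     Sorting the considerations so the cheapest or most decisive ones run first makes that early exit fire sooner.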
  4. JSandusky

    Cook-Torrance G term

    I don't recognize what you're calling the Smith vis/geo?

    Here's the code I use for the vis/geo terms, basically verbatim implementations of the formulas with the usual "not shipping yet, roughness * roughness everywhere":

    // Visibility terms

    /// Smith GGX Visibility
    /// nDotL: dot product of surface normal and light direction
    /// nDotV: dot product of surface normal and view direction
    /// roughness: surface roughness
    float SmithGGXVisibility(in float nDotL, in float nDotV, in float roughness)
    {
        float rough2 = roughness * roughness;
        float gSmithV = nDotV + sqrt(nDotV * (nDotV - nDotV * rough2) + rough2);
        float gSmithL = nDotL + sqrt(nDotL * (nDotL - nDotL * rough2) + rough2);
        return 1.0 / (gSmithV * gSmithL);
    }

    float SchlickG1(in float factor, in float rough2)
    {
        return 1.0 / (factor * (1.0 - rough2) + rough2);
    }

    /// Schlick approximation of Smith GGX
    /// nDotL: dot product of surface normal and light direction
    /// nDotV: dot product of surface normal and view direction
    /// roughness: surface roughness
    float SchlickVisibility(float nDotL, float nDotV, float roughness)
    {
        const float rough2 = roughness * roughness;
        return (SchlickG1(nDotL, rough2) * SchlickG1(nDotV, rough2)) * 0.25;
    }

    What really strikes me as odd is your use of "roughness + 1"? If you were working with smoothness instead of roughness that'd be "1 - roughness," but I can't really make any assumptions about whether you're renormalizing inputs or such.

    What's the deal with the division by 8?
  5. JSandusky

    Radiosity

    In my case (rendering a view from the lumel), I started with brute force. Later I moved to an iterative lattice distribution pretty much exactly matching Hugo Elias' description of such an approach (render every fourth lumel, then find those in between and lerp if close enough, otherwise render; then find those in between that subset and lerp if close enough), just adding a "dominant" plane check where I dot-prod'ed the lumel's normal against the 6 cardinal planes and classified that lumel by the best-fitting plane; so far that's been sufficient. For CPU/GPU-side real-time, I cluster "emitters" by dominant plane and distance and then calculate a limited number of form factors (usually 3) for each lumel against those clusters. I then create a vertex buffer for the clusters; on the CPU I just do a simple directional light, but on the GPU I create a point cloud of multiple samples for each cluster (random distribution) and use transform feedback to calculate the results at the end of the frame (shadow maps already exist at this point). I then propagate that to the lumels and apply the data back to the lightmap, averaging the sample values. Quite fast; sending the updated lightmap to the GPU is slower than everything else combined.

    Pretty sure you have that backwards. Banding and noise don't present themselves until you start bumping up the resolution, in my experience so far. I normally use 32x32 for each hemicube face.

    As you increase the resolution you also decrease the coarseness of the multiplier "map" (whether that map is a real map you precompute or a value you just calculate on the fly). Even though that map is normalized, the number of values that end up becoming significant contributing factors is substantially higher as the resolution increases. That coarseness at low resolution also really saves you when the hemicube penetrates corner geometry; most methods of dealing with that result in poor occlusion at edges, because they offset the sampling point to avoid interpenetration and therefore fail to capture the foreign incidence seen by the actual point (requiring an ambient occlusion pass to reacquire it).

    Penetration will generally be quite severe in one or maybe two faces; at low res that coarseness makes those penetrated values practically meaningless to the overall contribution. I've never seen banding at 128x128 hemicube faces or lower. It does appear to be a problem for some, but I'd imagine it has more to do with a faulty lumel-to-world-unit ratio determination and misguided attempts at irradiance caching (do it in surface space, not world space) than with the actual process of rendering from a lumel. The other villain of the approach is specular. GPU hardware texture filtering creates far more undesirable artifacts, as far as I've seen.

    The big problem with that approach is specular response. It's really easy to end up with an implementation that cannot converge (as in, it'll bleach out to white). By rendering from the lumel via your normal means of rendering you're subjected to the specular response of your materials, so you have to account for that; and in doing multiple passes against a lightmapping technique that also includes specular response (ambient + directional, for example) you have to account for that as well. Almost all radiosity research and study focuses on diffuse transfer. The original theory of rendering from the lumel also assumed a purely diffuse environment.
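    A minimal C++ sketch of the dominant-plane check mentioned above (the structure and names are illustrative, not lifted from my lightmapper): the lumel's normal is dotted against the six cardinal directions and the best fit wins.

    struct Vec3 { float x, y, z; };

    static float Dot(const Vec3& a, const Vec3& b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Classify a lumel by the cardinal axis its normal most closely faces.
    // Lumels sharing a dominant plane (and close enough in space) can then be
    // grouped into the "emitter" clusters used for the form-factor pass.
    int DominantPlane(const Vec3& normal)
    {
        static const Vec3 kCardinal[6] = {
            { 1, 0, 0 }, { -1, 0, 0 },
            { 0, 1, 0 }, { 0, -1, 0 },
            { 0, 0, 1 }, { 0, 0, -1 },
        };

        int best = 0;
        float bestDot = -2.0f;
        for (int i = 0; i < 6; ++i)
        {
            float d = Dot(normal, kCardinal[i]);
            if (d > bestDot) { bestDot = d; best = i; }
        }
        return best; // index of the best-fitting cardinal plane
    }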
  6. JSandusky

    Radiosity

    All I see as input is the emitter color. What happens to the accumulated color once a pass has finished accumulating, before the next pass starts? Are you averaging? Adding it to the emitter color? Etc.?

    The emitter should have a "reflectance" value if you're doing more than a single bounce (assuming you aren't just averaging passes). Without absorption it'll never converge.

    If you're using PBR you could use something like smoothness * (sqrt(smoothness) + roughness) as a really rough approximation of absorption (that's Lagarde's specular dominant direction, IIRC).

    Personally, I really prefer rendering the scene from each lumel's position along the normal and multiplying by a weight map (fisheye for draft, hemicube for quality) for radiosity (on top of a brute-force direct lighting map) and calling it a day.

    Link to some source in my lightmapper for an example of how incredibly simple that approach can be: https://github.com/JSandusky/Urho3D/blob/Lightmapping/Source/Tools/LightmapGenerator/FisheyeSceneSampler.cpp
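    To make the convergence point concrete, here's a minimal C++ sketch of a gather-style bounce loop; the struct layout and names are illustrative, not from the lightmapper linked above. The key detail is the multiplication by a reflectance below 1.0, which is what keeps repeated passes from bleaching out to white.

    #include <vector>

    struct Color { float r, g, b; };

    struct Patch {
        Color radiosity;    // accumulated result across all passes
        Color emission;     // energy this patch emits during the next pass
        Color gathered;     // energy gathered during the current pass
        float reflectance;  // 0..1; absorption is (1 - reflectance)
    };

    // One bounce pass, run after `gathered` has been filled by whatever gathering
    // method is in use (hemicube render, form factors, ...). Because reflectance < 1,
    // each pass re-emits less energy than it received, so the series converges.
    void ApplyBounce(std::vector<Patch>& patches)
    {
        for (Patch& p : patches)
        {
            Color bounced = { p.gathered.r * p.reflectance,
                              p.gathered.g * p.reflectance,
                              p.gathered.b * p.reflectance };
            p.radiosity.r += bounced.r;
            p.radiosity.g += bounced.g;
            p.radiosity.b += bounced.b;
            p.emission = bounced;       // what got reflected is what bounces next pass
            p.gathered = { 0, 0, 0 };   // reset for the next gather
        }
    }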
  7. JSandusky

    Abusing Bindings for Tools

    While writing C++/CLI bindings I was getting pretty tired of it, so I started looking around for an alternative means to speed up the process. With a hefty amount of existing Angelscript bindings it seemed sensible to parse the script registration methods...

    Then it clicked that I could just write an implementation of the asIScriptEngine abstract class that does nothing but build data about the script bindings.

    The "SpoofEngine" does absolutely nothing except in the methods that perform registrations; instead of actually binding anything, it uses the given information to build an XML document. The asMETHOD/asFUNCTION macros (and their PR versions) are redefined to return a junk asFuncPtr while recording the method/function name into the Spoof singleton, so I'd have the C++ method/function name/signature to use when constructing a binding from the data. All other functions do nothing.

    It was incredibly simple, as the script engine registration methods are crystal clear about what's going on (took at most 30 minutes); all that needs to be done to build the data is run the existing script registration process with the "Spoof" script engine. No fuss and minimal muck.

    <type name="Sphere">
        <behavior type="0" declaration="void f()" cfunction="ConstructSphere" />
        <behavior type="0" declaration="void f(const Sphere&amp;in)" cfunction="ConstructSphereCopy" />
        <behavior type="0" declaration="void f(const Vector3&amp;in, float)" cfunction="ConstructSphereInit" />
        <behavior type="0" declaration="void f(const BoundingBox&amp;in)" cfunction="ConstructSphereBoundingBox" />
        <behavior type="0" declaration="void f(const Frustum&amp;in)" cfunction="ConstructSphereFrustum" />
        <behavior type="0" declaration="void f(const Polyhedron&amp;in)" cfunction="ConstructSpherePolyhedron" />
        <method declaration="Sphere&amp; opAssign(const Sphere&amp;in)" cmethod="operator =" signature="(const Sphere&amp;)" return="Sphere&amp;" />
        <method declaration="bool &amp;opEquals(const Sphere&amp;in) const" cmethod="operator ==" />
        <method declaration="void Define(const Vector3&amp;in, float)" cmethod="Define" signature="(const Vector3&amp;, float)" return="void" />
        <method declaration="void Define(const BoundingBox&amp;in)" cmethod="Define" signature="(const BoundingBox&amp;)" return="void" />
        <method declaration="void Define(const Frustum&amp;in)" cmethod="Define" signature="(const Frustum&amp;)" return="void" />
        <method declaration="void Define(const Polyhedron&amp;in)" cmethod="Define" signature="(const Polyhedron&amp;)" return="void" />
        <method declaration="void Define(const Sphere&amp;in)" cmethod="Define" signature="(const Sphere&amp;)" return="void" />
        <method declaration="void Merge(const Vector3&amp;in)" cmethod="Merge" signature="(const Vector3&amp;)" return="void" />
        <method declaration="void Merge(const BoundingBox&amp;in)" cmethod="Merge" signature="(const BoundingBox&amp;)" return="void" />
        <method declaration="void Merge(const Frustum&amp;in)" cmethod="Merge" signature="(const Frustum&amp;)" return="void" />
        <method declaration="void Merge(const Sphere&amp;in)" cmethod="Merge" signature="(const Sphere&amp;)" return="void" />
        <method declaration="void Clear()" cmethod="Clear" />
        <method declaration="Intersection IsInside(const Vector3&amp;in) const" cmethod="IsInside" signature="(const Vector3&amp;) const" return="Intersection" />
        <method declaration="Intersection IsInside(const Sphere&amp;in) const" cmethod="IsInside" signature="(const Sphere&amp;) const" return="Intersection" />
        <method declaration="Intersection IsInside(const BoundingBox&amp;in) const" cmethod="IsInside" signature="(const BoundingBox&amp;) const" return="Intersection" />
        <method declaration="float Distance(const Vector3&amp;in) const" cmethod="Distance" />
        <field declaration="Vector3 center" />
        <field declaration="float radius" />
        <field declaration="bool defined" />
    </type>

    The results are incredibly easy to parse and make sense of for use in generating code (with the exception of the "return" attribute, which is just the explicit return value for an asMETHODPR binding).

    Reusing all that effort on scripting bindings means weeks I won't spend writing C++/CLI or band-aiding my way around missing functionality that I'm used to having in Angelscript.

    Never mind that bindings that match so closely are incredibly convenient.
  8. JSandusky

    Angelscript IDE (Urho3D focused)

      I noticed that, and it made me smile with an assurance that I'm not a nut.

      I need to get around to installing Qt to try the other one, because I'm super curious about the performance differences of C#/WPF vs Qt in this scenario. There's so much virtualization occurring in WPF that it can get irritating.

      --------------

      With the latest updates it should be much easier to use for a non-Urho3D project. For practical reasons it still requires a Script API dump for your types that are registered to Angelscript, but compilers / info-tabs / editor-tabs have all been moved to a plugin system (which is trying to stay simple enough to avoid the "time to go to MEF" scenario). The dump is vital for getting information about the C/C++ types that are registered to Angelscript. There are example plugins for CSV editing (datagrid), media viewing (image viewing / playback of audio available to Windows Media Foundation), and text-free XML editing with schema validation.

      For compiling there's the UrhoCompilerPlugin, which includes two compilers: one for single-file compilation, and another that compiles all files in the directory of the targeted file, with log output adjusted to stay readable when many files are compiled. The main purpose of the mass compiler is hunting down #include sequence issues and sample compilation.

      "Intellisense" (well, it's basically 'string-ripping') has been abstracted out enough that it is viable to add support for most practical cases. HLSL/GLSL prototypes are included; HLSL is likely wrong, GLSL should be correct.

      Experimentally there's also parsing of the active tab to generate a "Local Types" tree that contains a fully #include-processed tree of Angelscript variables, methods, namespaces, and types. This will likely fail with deep namespaces, imports, and funcdefs at the moment; those issues won't be resolved for a few days. The "Local Types" are for the currently active file and are based on the version that exists on disk; they will not refresh unless you save the file being edited. "Go To Definition" is included with the "Local Type" processing. Super handy.
  9. JSandusky

    Angelscript IDE (Urho3D focused)

    Just recently tossed up a WPF-based Angelscript IDE on GitHub that's extremely focused on Urho3D. There's a fair bit of a mess in it, very obviously a weekend project that grew, but it's constantly being cleaned up since I'm using it all the time.

    IDE/Debugger for Angelscript: https://github.com/JSandusky/UrhoAngelscriptIDE

    • Path-centered instead of 'project-file'-centered (no files get in the way, configurations live in AppData)
    • Class/enum browser
    • Autocompletion (deep dot paths, []'s, scoping, immediate access of function return values, etc.), member/function lists
    • Function overloads helper/list
    • Documentation on hover (assuming you wrote any)
    • Multiple code-block snippets with "options" and "inputs"
    • Events/Attributes lists with clipboard commands (copy subscriber, copy param getter, copy attribute getter/setter, etc.)
    • Find-in-files
    • Verbatim console log of compiler output
    • Error list (parsed from console output) with jump to file/line
    • asPEEK debugging client, with watches/toggle-able breakpoints/stack traces/locals/this/etc. (everything the web client has)

    It could be tweaked to work for a non-Urho3D project if you:

    • Implement a ScriptCompiler tool similar to Urho3D's that can compile or dump C++-style headers
    • Rip out the Events/Attributes/Script API tabs in IDEView.xaml, plus all the LaTeX-parsing evil
    • Accept that it assumes a single master file for compilation

    The ScriptCompiler is not optional. The intellisense/autocomplete works by parsing C++-style header dumps that are used to generate type information. That provides the class browser and is used for resolving auto-completion; there's some local file scanning used, but that's largely for finding the type something was declared as and for variable names.

    Dependencies: ModernUI, AvalonEdit, Json.NET, WIXTools, Websocket4Net
  10. There's the item as it exists on the "ground" or ejected from the player's inventory.

      There's the item in the player's inventory.

      There's the item the player has equipped.

      That item is the same short sword across all three, but there's no good reason for it to be the EXACT same thing across all three.

      What's boggling me is handling this in a remotely sane manner. They are three separate things and yet the same thing. Other than an in-memory database I can't find a sensible way to handle this ... nor can I find anything open source that exhibits this three-tier approach. It's unnecessary if there's no networking involved; in my case, however, there is ... and that's how I arrived at the realization that this is a huge problem.

      A massive object that covers all of the above is a nasty monster to maintain. How is this problem handled in practice? The only option I see is an in-memory database that contains the information of what an "Object ID" means under different cases and how to handle it. That's a pretty disgusting strategy-pattern hack, and while I as a programmer can make easy sense out of it ... it will make little to no sense to designers.

      To add details: the environment is not persistent. It is not an MMO. An arena shooter with saved character stats/inventory would fit the bill.
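      Purely for illustration, a minimal C++ sketch of the shape being described: one shared definition keyed by an object ID (the "in-memory database"), plus three thin context-specific representations that all carry that ID. Every name here is hypothetical, not from any existing engine or library.

      #include <cstdint>
      #include <string>
      #include <unordered_map>
      #include <utility>

      using ObjectID = std::uint32_t;

      // Shared data that defines what "short sword" means, independent of context.
      struct ItemDefinition {
          std::string name;
          float damage;
          int iconIndex;        // used by the inventory representation
          int worldModelIndex;  // used by the ground / equipped representations
      };

      // Three context-specific representations; they are separate things, but the
      // shared ObjectID is what makes them "the same thing" over the network.
      struct GroundItem    { ObjectID id; float x, y, z; };
      struct InventoryItem { ObjectID id; int slot; int stackCount; };
      struct EquippedItem  { ObjectID id; int attachmentBone; };

      // The "in-memory database": resolves an ObjectID to its shared definition.
      class ItemDatabase {
      public:
          void Register(ObjectID id, ItemDefinition def) { defs_[id] = std::move(def); }

          const ItemDefinition* Find(ObjectID id) const {
              auto it = defs_.find(id);
              return it != defs_.end() ? &it->second : nullptr;
          }

      private:
          std::unordered_map<ObjectID, ItemDefinition> defs_;
      };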