L. Spiro

Member
  • Content Count

    4363
  • Joined

  • Last visited

  • Days Won

    2

L. Spiro last won the day on April 16

L. Spiro had the most liked content!

Community Reputation

25714 Excellent

5 Followers

About L. Spiro

  • Rank
    Crossbones+

Personal Information

Social

  • Twitter
    @TheRealLSpiro
  • Github
    https://github.com/L-Spiro


  1. It is generally a good thing to have multiple APIs. There is a reason we used to not allow monopolies in America: competition between the APIs is what drives efficiency and features, and there aren't that many relevant APIs right now. L. Spiro
  2. L. Spiro

    FBX SDK skinned animation

    For now, your format makes it easier to post the results you are getting, so it's actually useful for the moment. But normally it will be your own binary format that you load yourself. Text is both slower and larger. L. Spiro
  3. L. Spiro

    FBX SDK skinned animation

    Custom scene editors tend to aim at giving you basically a real-time "preview" of your result. "WYSIWYG" editors such as those in Unreal Engine and Unity are examples of this. You might be more used to editors in which you export your work and then run it elsewhere, but the main purpose of a scene editor is to tie everything together after that step. By that point, anything your run-time does not support is something your artists can't edit in the scene editor. If you stick to tracks and expose as many properties as you can, then artists can easily continue working the way they are used to, and each property they can edit is one more level of expression your in-game scenes/characters can have. Of course there is always a trade-off between features, memory, and run-time overhead, but one factor to remember before simply eliminating a property to save run-time costs is that dirty flags can allow you to skip a lot of work. If your pivots and offsets are identity (meaning they generate an identity matrix) then you can entirely skip the parts where they are combined for the final result, and your cost becomes a single if(). Checking for identity can be done just when those values change, and in my 2nd post I explained a dirty system to allow this. That means a slight bit more work when they change, but we are definitely talking about things that almost never change here. So simply handling these properties is almost free and saves you the trouble of revisiting the code 2 years later when an artist actually does want to change a pivot in real-time.
No, they are functionally the same, because I still have to decompose my matrix into exactly the same thing you get. This is why I said they are basically both baked systems with only a slight difference.
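The "identity pivots cost only an if()" idea above can be sketched as follows. PivotState, ComposeLocal, and the multiply counter are illustrative names, not from any real engine; the point is that the identity check is cached when the value changes, not re-evaluated every frame.

```cpp
#include <cmath>

struct Vec3 { float x = 0.0f, y = 0.0f, z = 0.0f; };

struct PivotState {
    Vec3 pivot;             // rotation/scale pivot offset
    bool isIdentity = true; // cached whenever the pivot changes

    // Called only when an artist/track actually edits the pivot.
    void Set(const Vec3 &v) {
        pivot = v;
        const float eps = 1e-6f;
        isIdentity = std::fabs(v.x) < eps && std::fabs(v.y) < eps &&
                     std::fabs(v.z) < eps;
    }
};

int g_matrixMultiplies = 0;  // instrumentation for the example only

// During matrix composition the common case costs only an if().
void ComposeLocal(const PivotState &p) {
    if (!p.isIdentity) {
        ++g_matrixMultiplies;  // would multiply the pivot matrices here
    }
    // ... remaining (non-identity) terms combined here ...
}
```

Because pivots almost never change, the Set() call (and its epsilon checks) is essentially free, and animation updates pay only the branch.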
This was my first engine and naturally poorly designed, so I had to decompose the matrices at run-time because I had idiotically chosen to make the matrix responsible for more than it should have been and tried to use it as the only representation of an object's orientation rather than as the product of combining those properties. So, here, I am not complaining so much about the overhead of decomposing, because that was my own fault, as about the fact that a matrix is a lossy representation of the orientation. Some matrices (which are perfectly valid in real games) do not decompose at all (https://computergraphics.stackexchange.com/questions/4491/detect-a-lossy-matrix-decomposition), and you also can't tell if only 1 scale has been inverted or 3. That's a real problem when you want to assign custom tracks to a single scale property etc. Forward kinematics, inverse kinematics, and basic blending do account for the majority of all animations, but more and more forms of dynamic animation are gaining demand, and the last thing you want is to lose a contract because your engine can't handle the demands. Note that IK automatically forces you to expose more properties than the baked values because you need constraints etc. It's not related to accuracy at all. They both express the animation the same, until you want to start taking control of the finer details in a more dynamic way, at which point baked animations start to show cracks. If you wanted to change a pivot or offset at run-time, you simply can't with a baked system. As for "Why would anyone do this?": it's not our job to decide when and why a game might want to do this; it is our job to empower the game to be able to do it. But an actual example could be where a robot gets an arm dislocated in a fight, leaving bent metal, and the arm now rotates around a slightly different point, or where you allow the player to take a part off a machine and reattach it. These are very reasonable demands these days.
My point is more about functionality, because the run-time overhead is nearly the same. It looks as though you are spending a lot of time combining a bunch of matrices to get the final result, but most of them are identity and you can if() them out of the process in virtually all cases, plus you are likely to implement a run-time track system anyway, and you will find that you are able to connect tracks to anything but rotation and scale (position is of course unchanged when stored in a matrix), which will hopefully leave you saying, "What the hell did I just do?" All of this is coming from real-world examples. You're going to make your own scene editor no matter what so you can combine all the different elements from other tools into one, and you necessarily are going to end up with new properties that don't exist in Maya or whatever authoring tool you use. At this timecode you can see a good example of a real-world application: https://youtu.be/_gEm_WjMy88?t=52 His left monitor is the in-house scene editor where you can see trillions of custom properties you can apply to provide game-specific adjustments that Maya (right monitor) can't (and as per today's standards it is WYSIWYG). Here is an example of that same editor being used to adjust a more global scene: https://youtu.be/_gEm_WjMy88?t=118 And what are you going to use to animate the clouds (which are fully procedural and can't be animated in Maya)? Tracks and curves! https://www.youtube.com/watch?v=Y_0OCZC8TVY The color changes in the clouds, the cloud swirls, the wind, the particles: everything is driven by tracks and curves. This game is an extreme example to bring in here, but I ran into these exact issues very early in my first engine as soon as I wanted to animate anything beyond models. I shot myself in the foot and there is no reason anyone else needs to lose a foot over the same issue.
Everything in that XML file (which is still not an appropriate file format for models) should be loaded into memory at load-time. Nothing here needs to be streamed in at run-time. L. Spiro
  4. L. Spiro

    FBX SDK skinned animation

    This is quite close to what I warned him not to do in my first post, with the only difference being that you stored the decomposed matrix instead of the matrix. It’s a baked animation either way, and I highlighted the problems with such a system in my first post. It actually is the preferred method for animating cut-scenes and demos or creating a reference playback system, but manually handling pivots, scale propagation, constraints, etc., specifically is the method-of-choice for many games, because handling these is not really difficult and it gives you the control you need to publish a wide variety of games. It is of course worth considering the experience of the person who has to actually implement it, and you are correct that your way is simpler, but I found it harder (impossible) to go from there to a correct system and had to start entirely over, and when I finally did I found that it was a lot simpler than I expected (all of my propagation code, creating all matrices via each property, and combining the matrices into the final result worked on the first try), plus using tracks/curves to animate made everything easier. The run-time was cleaner and flexible enough to allow artists to work on model animations in a custom scene editor in a way that was natural to them. In fact it was a custom scene editor that exposed all of the weaknesses of a baked-based system, and it all had to be scrapped. I fully stand by my suggestion to walk the tracks and recompose manually, as it is deceptively easy and won’t leave a person with an empty feeling, knowing he or she has made nothing more than a static animation player…backer. L. Spiro
  5. L. Spiro

    FBX SDK skinned animation

    After considering it for a while I realized my latest version of my engine does not use FbxAnimEvaluator::GetNodeGlobalTransform(). That was for getting the matrices for "baked" animations which sucked in my first engine. I had considered putting a disclaimer on that post suggesting that it's been a while and I may have scrambled slightly some details. You only need to store the bind pose (to calculate the "from" of from -> to) and the tracks. The bind pose can be a tree of matrices because it is only used for rendering, so in that one case you can just get the "baked" set of final matrices for each bone in the bind pose. Aside from the bind pose, the only other things you need are: Your own run-time bone class with all of the individual necessary scalars (which you may elect to group into vectors, but not matrices). This means your bones all have a local translation vector, a rotation offset (another vector I believe), a local scaling vector, etc. Normally these properties are not considered specific to bones. In FBX they are on every single node, because they all form the most basic properties of every single object in your scene. If you ignore this fact, you will find that you are unable to correctly animate objects without bones. I believe in my current engine I use CEntity to store all of this and CActor to implement the object hierarchy ("entity" and "actor" are the common names for the most basic class from which all other objects in the scene inherit, though most engines use just one or the other—I wanted to separate the parent/child relationship into its own class and the "these are the properties and functions to use them that all objects in the scene have" into another, so I used both, as "CActor : public CEntity"). So if you follow my way, CEntity will have all the base properties needed to position anything in the scene. 
This is where your local position vector goes, your local scaling vector, etc., and CActor will add the parent/child relationship you need to correctly depict the bone structure (but note again this is not specific to bones; everything but CEntity inherits from CActor so all objects in the scene can be parented). CActor, because it introduces the concept of parenting, also introduces the idea of using matrices to represent the final orientation of the object, thus it uses the local rotation, local scale, offsets, pivots, etc., to generate the final local matrix I mentioned before which should match FBX's, and from there uses its parent/child structure to generate "final" world matrices (where myWorldMatrix = myLocalMatrix * parentWorldMatrix). CBone (or whatever) would, like every other object, inherit from CActor and add anything specific to bones. Your own tracks. This is why you do not have to evaluate the global positions. By using proper animations rather than baked ones you only need to store the track data, processed only by the algorithm above to remove redundant key frames etc. Putting This Data Together Go over each track. Each track should be connected to exactly one scalar. That means a float, uint32_t, int32_t, or bool. I believe I used templates, but note that interpolation always happens as a float and then is cast back to the scalar type, which means your track itself can store all of its keyframe values in floats. A bool track would have floats with values of only 0 or 1, and you could interpolate a value to 0.2 which should be converted to true (note that truncating 0.2f to an integer yields 0 (false), so your track system should use a logical conversion such as "return interpValue /* 0.2f */ != 0.0f; // returns true"). Tracks in Maya/FBX/whatever can be attached to bool properties. So if you have to update the full position, you need 3 tracks. One connects to POS.X, one to POS.Y, and one to POS.Z.
This matches the FBX SDK exactly and also how all 3D authoring software works. Advance each track by however many microseconds you need for the update you are doing. Never accumulate time in float or double; they drift on accumulation, and even when using uint64_t you can accumulate errors if your game loop is wrong. float can be used to represent the current delta, but any objects that need to accumulate all the tick time they have had so far should tally the microseconds in unsigned integers. When a track is advanced by some microseconds, it determines its current track time (it loops if necessary, stops if necessary, advances backward, advances forward, does nothing if paused, etc., depending on its settings/state) and from there determines the value of the scalar it wants to produce via linear interpolation between the previous and next keyframes. My tracks use a pointer directly to the scalar they are intended to update. As an optimization you often want to use dirty flags to avoid updating large data sets if there have been no changes to the data inside them, so my tracks also take an optional pointer to a uint32_t and a value to |= into that value if it does change the value of the scalar (in other words, if it updates the scalar then it also sets some bits in a dirty flag). For example if your rotation has not been modified then you don't want to generate the new rotation matrix later, so you could make a dirty flag such that bit 0 = UpdateRot, bit 1 = UpdateScale, bit 2 = UpdatePos, bit 3 = UpdateRotPivot, etc., and if, and only if, the track changes the (let's say) ROT.Y scalar value then it will also set bit 0 in the dirty flags. This optimization can come later, but it is generally easy enough to implement on your first pass. Attaching a track would look like: someFloatTrack.Attach( &myObj.Pos.x, &myObj.DirtyFlags, LS_DIRTYFLAG_POS /* = 1 << 2 */ );. Do take care to ensure the track is detached when the object is destroyed. That's mostly it.
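A minimal sketch of such a track, reduced to two keyframes so the Attach()/dirty-flag mechanics stay visible. Track, Tick, and the member names are assumptions; a real track would hold a keyframe array and handle looping, stopping, and reverse playback as described above.

```cpp
#include <cstdint>

struct Track {
    float    *target     = nullptr;  // scalar this track drives (e.g. &myObj.Pos.x)
    uint32_t *dirtyFlags = nullptr;  // optional dirty-flag word
    uint32_t  dirtyBit   = 0;        // bits to |= when the scalar changes

    // Two keyframes are enough to show interpolation.
    uint64_t  t0Us = 0, t1Us = 1000000;  // keyframe times in microseconds
    float     v0 = 0.0f, v1 = 1.0f;      // keyframe values
    uint64_t  curUs = 0;                 // accumulated time (integer, no drift)

    void Attach(float *scalar, uint32_t *flags, uint32_t bit) {
        target = scalar; dirtyFlags = flags; dirtyBit = bit;
    }

    void Tick(uint64_t deltaUs) {
        curUs += deltaUs;                          // tally microseconds, never floats
        uint64_t t = curUs > t1Us ? t1Us : curUs;  // clamp (no looping in this sketch)
        float frac  = float(t - t0Us) / float(t1Us - t0Us);
        float value = v0 + (v1 - v0) * frac;       // linear interpolation
        if (target && *target != value) {
            *target = value;
            if (dirtyFlags) { *dirtyFlags |= dirtyBit; }  // mark work to redo
        }
    }
};
```

Note that the dirty bits are set only when the interpolated value actually differs from the stored scalar, so a paused or flat track causes no downstream matrix rebuilds.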
You have advanced all the tracks, so your components (rotation, scale, pivots, offsets, etc.) are all updated. I already explained in the first post that your engine will combine these into the final matrices that should match those in FBX, but as you can see, if you want to fully customize your animations it is now possible to do so while remaining fully compatible with the animations your artists provided. You can decide to add your own track to a scalar if you want it to be dynamic, or pause a single track or whatever. Your final result will always match what would appear in Maya if the same thing were done. Note that dynamic animations that cause a character to always look at a certain point are applied on the bone level after the final matrices have been created. You would implement this by attaching an override to a bone, and once all the matrices have been created after this track update, execute all custom overrides on each bone (once again this would actually be more appropriate on CActor, since you would want this to be possible on anything, not just bones). This particular override would take a target point and a weight, create a look-at matrix towards the target object, and interpolate between the default keyed animation matrix on that bone and the one it generated, so a weight of 0 = just the regular keyframe matrix, 0.5 would have the character look 50% towards the target, and 1 would have the character look directly at the target. Creating the final world matrices is also an implicit task of the CActor class. The tracks have only generated a local matrix for that bone. All CActor objects in the entire scene will be updated so that they build their final world matrices (with dirty flags again to avoid unnecessary updates) so there are no extra steps here. Now you have a bind pose full of matrices and all the matrices that represent the current pose of your objects.
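The CActor-style world-matrix propagation can be sketched with the matrices reduced to single floats so the dirty-flag and parent/child logic stays visible; a real engine multiplies 4x4 matrices instead. Actor and its members are illustrative names.

```cpp
#include <vector>

struct Actor {
    float local = 1.0f;        // stand-in for the local matrix
    float world = 1.0f;        // stand-in for the world matrix
    bool  dirty = true;        // start dirty so the first update builds everything
    Actor *parent = nullptr;
    std::vector<Actor *> children;

    void SetLocal(float v) { local = v; MarkDirty(); }

    void MarkDirty() {
        if (dirty) return;     // already dirty, so the children are too
        dirty = true;
        for (Actor *c : children) c->MarkDirty();
    }

    // Parents are updated before children, so parent->world is always current.
    void UpdateWorld() {
        if (dirty) {
            world = parent ? local * parent->world : local;
            dirty = false;
        }
        for (Actor *c : children) c->UpdateWorld();
    }
};
```

Bones would inherit from Actor; nothing here is bone-specific, which is exactly the point made above.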
Applying skinning from here is entirely general and unrelated to FBX, so Google is your next resource. Your game loop determines how much time has passed since its last update and updates all objects accordingly. This is a matter of your game loop, not timers. http://lspiroengine.com/?p=378 Once you have determined how long an update needs to be in your game loop, that is the delta you pass to tracks, which handle the data as described above. L. Spiro
  6. L. Spiro

    FBX SDK skinned animation

If you are using bones/joints then you have a bind pose; there is no other way to derive the "from here to there" movement of vertices if you don't have a "from." Keyframes are a typical part of boned animations. An alternative form of animation is vertex morphing/blend shapes, but that is not common due to the amount of data and processing required. It is typically reserved for facial animations. For boned animations you will have a bind pose which you can always retrieve from the FBX SDK (or it errors otherwise). But before that we need to back up. General tips? Your engine should never be directly loading FBX files. FBX is a transport format. Your actual job is to convert that format to the format that your engine will load. Would you be planning to send out every game with a legal note on your use of the FBX SDK? Why would you bloat your users' downloads with this? Would you be planning to ship games with actual FBX files that every kid can open, view, and steal? Plus they have data that you won't ever use, so the file sizes are insanely huge, and load times are ridiculously slow. This is obvious insanity. If you are actually directly linking to the FBX SDK from your engine, holy hell stop what you are doing. Make a tool that links to your engine's SDK (or can be a special build directly using some of your engine's modules) and the FBX SDK and export to a file format that is made for your engine. As for loading animations, the general approach is: Get the bind pose. Go to the animation you want to export. Multiple animations can be layered into a single file. For each track (you would expect 1 track to handle 1 component of rotation from T=0-5, then another track to handle a different rotation component from T=5-7): First, store the times of all the keyframes.
Let's say the animation is nothing but a base bone which does not move and an arm bone 2 feet away that first swivels up (from fully horizontal to 45 degrees) and then over to the left (or whatever; the point is it does 2 basic motions in 2 directions—imagine raising your right arm to 45 degrees and then bringing your hand to your chest, all without moving the elbow). For this kind of rudimentary animation you could expect a keyframe at T=0, then a keyframe at T=5 seconds where the pose has the arm raised to 45 degrees, and then a 3rd keyframe at T=7 seconds with the hand swiveled over to the chest (the hand raises over 5 seconds, then moves to the chest over 2 seconds). These are the minimum keyframes necessary for the motion, but there could actually be more keyframes for a number of reasons, and it is often the case that many keyframes are unnecessary. If the actual motion is exactly the same, you could eliminate all keyframes but these 3, and I will get to that later. Go to each keyframe in your sorted list and have the FBX SDK evaluate the entire scene position at that time. FbxAnimEvaluator::GetNodeGlobalTransform(). FBX can do a full scene evaluation and give you the positions of every bone at any time in the animation. So you have a list of times which are sorted. Go to each in order, evaluate the scene positions via the FBX SDK, and get the locations of all bones. You have a choice to simply store the matrix that describes the bone orientation, but I heavily regretted doing this and there is no point in anyone ever again making the same mistake. The rotation can be decomposed at run-time to interpolate between orientations (interpolation is what moves your bones between keyframes), but this is expensive and inflexible. You will find later that the matrices do not have all the information you need to make your animations more dynamic.
For example it will be impossible to correctly override all of the individual elements of the matrix (position.x, position.y, scale.z, rotation.y, etc.) to create a system in which you can have POS.XYZ, SCALE.XYZ, and ROT.XY all interpolated via the decomposed matrices while you manually override only ROT.Z. You will find that you can get some of them to work (especially POS.XY or Z), but you will find rotations impossible to handle correctly in some edge cases. So store the matrices only if you plan to do the most basic crappy animations ever. If you are just going to run a static animation in front of some businessmen, have fun, but for real work you need to store the actual components of the node transforms and combine them yourself the same way the FBX SDK does: https://help.autodesk.com/view/FBX/2017/ENU/?guid=__files_GUID_10CDD63C_79C1_4F2D_BB28_AD2BE65A02ED_htm It's extremely easy and reliable despite looking complex. You can do all of the testing while you are working on loading the data, since you can combine the components yourself and check to see if your final matrix matches theirs. Now you have all of the components as vectors and scalars (NOT as matrices) and you have tested code you know combines them into the final matrices. This is run-time code. It can be optimized later to skip scales that are all 1 etc., but work on a reference implementation before you make a fast implementation. At run-time you can now correctly override any of the properties that go into the transform, so now your system much more closely matches what an artist wants to control, which is obviously vital, but more importantly it retains the concept of tracks, which animate individual scalars. If you had stored just the matrices, your animations would be basically "baked." You would no longer work with the concept of tracks, and would instead animate purely via matrix interpolation, which of course gives rise to the limitations mentioned above.
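As a sanity check of the transform chain on that Autodesk page, here is a minimal sketch with all rotation terms treated as identity, showing that scaling really does happen about the scaling pivot rather than the origin. Only translation and uniform scale are implemented so it stays short; Mat4, LocalWithScalePivot, and ApplyX are illustrative names, and the full chain (roughly T * Roff * Rp * Rpre * R * Rpost * Rp^-1 * Soff * Sp * S * Sp^-1) is on the linked page.

```cpp
#include <array>

// Row-major storage, column-vector convention (translation in the last column).
struct Mat4 {
    std::array<float, 16> m { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };
    static Mat4 T(float x, float y, float z) {       // translation
        Mat4 r; r.m[3] = x; r.m[7] = y; r.m[11] = z; return r;
    }
    static Mat4 S(float s) {                         // uniform scale
        Mat4 r; r.m[0] = r.m[5] = r.m[10] = s; return r;
    }
    Mat4 operator*(const Mat4 &o) const {
        Mat4 r;
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j) {
                float sum = 0.0f;
                for (int k = 0; k < 4; ++k) sum += m[i*4+k] * o.m[k*4+j];
                r.m[i*4+j] = sum;
            }
        return r;
    }
};

// Chain with rotations omitted: L = T * Sp * S * Sp^-1.
Mat4 LocalWithScalePivot(float tx, float pivot, float scale) {
    return Mat4::T(tx, 0, 0)          // T    (local translation)
         * Mat4::T(pivot, 0, 0)       // Sp   (scaling pivot)
         * Mat4::S(scale)             // S    (local scaling)
         * Mat4::T(-pivot, 0, 0);     // Sp^-1
}

// x' of the point (px, 0, 0) under M.
float ApplyX(const Mat4 &M, float px) { return M.m[0] * px + M.m[3]; }
```

Building this small reference and comparing its output against the FBX SDK's evaluated matrices is exactly the testing loop described above.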
This is the basic idea behind simply accessing the data. All of this goes into your own custom model file format. Nothing here should be done inside your engine (except the code that recombines the properties to get a final matrix, which will be shared between your engine and your model converter). Note again we do not export matrices; we export the values along tracks, and retain a track-based animation system. When you want to play this animation back, update the tracks (they will interpolate between keyframes and give you a single scalar back) for each individual bone/joint element (POS.X, ROT.Z, etc.) and then use those values to generate your final matrix. Before moving on, you will quickly find a lot of useless data there. You need to trim your file sizes down and also make your run-time faster, so: Trim your model file size down by eliminating redundant keyframes. Make your run-time faster by only using linear interpolation between keyframes. FBX supports numerous types of interpolation, so to avoid run-time checks you need to reduce them all to only linear interpolations (no run-time branching to check the interpolation type, and linear is the fastest). Eliminating Redundant Keyframes I had you first store the times of keyframes for a reason. It makes this part extremely simple. After you have stored all the times of the keyframes in step 3, now just add a bunch of times spaced apart by a fixed amount that you decide. For example you may already have T=0, T=5, and T=7; now add T=0.1, T=0.2, T=0.3, etc. (you can decide how densely to pack in these fake times, but typically the more the better, because virtually all of them are about to be removed, and the denser they are the fewer artifacts your results will have). If you use std::set<double> for this then your times are automatically sorted and duplicates are eliminated. The end result is that you have a T value
for every 0.1 (or whatever) seconds in the animation, and (critically) you have the actual keyframe times all in this set. Now you are going to go over this list of times and once again use FbxAnimEvaluator::GetNodeGlobalTransform() to evaluate the positions at all of these times, but this is an extra step, not #4 yet. This time, your goal is to eliminate times from this set by determining if they are redundant. A keyframe (B) is redundant if you can interpolate between keyframes A and C and get the same result as B. You can see where this is going. This time going over your times, you need to examine 3 T values at once. Let's say A=0.2, B=0.3, and C=0.5: Get the final scalars on your track for A and C by having the FBX SDK evaluate the scene/track at these times. Interpolate by yourself to derive your own value for B using linear interpolation. If A=0.2, B=0.3, and C=0.5, then from A to C is 0.3 seconds, and B is 0.1 seconds into that, or 33.3333333%. So you are interpolating to 33.3333333% between A and C to derive your own value for B. Now have the FBX SDK evaluate the track at T=0.3 (B's time) and compare your values. If your values are close enough (use your own epsilon compare, and you get to decide what epsilon is: smaller values leave more keyframes in the file but replicate the animation via linear interpolation more accurately; higher values lose run-time accuracy but create smaller files) then B is redundant and should be eliminated. If you eliminate a value, you try again from A. B will have been removed, so C will have slid over to become your new B, and what was previously D will be your new C. All of this happens automatically simply by removing B from the set, so there are no special cases except to check that you are not too close to the end of the list. If you do not eliminate a value, then you advance to the next time value and repeat. This means your previous B becomes your new A and you repeat. Simple as that.
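The elimination pass above might look like the sketch below. ReduceLinear is an assumed name, and in a real converter the key values would come from the FBX SDK evaluator rather than an input array; the removal logic (remove B, retry from A, otherwise advance) is the part described in the post.

```cpp
#include <cmath>
#include <vector>

struct Key { double t; float v; };  // time in seconds, sampled value

// Keep only the keys that linear interpolation cannot reproduce within eps.
std::vector<Key> ReduceLinear(const std::vector<Key> &keys, float eps) {
    std::vector<Key> out = keys;
    size_t a = 0;
    while (a + 2 < out.size()) {
        const Key &A = out[a], &B = out[a + 1], &C = out[a + 2];
        double frac  = (B.t - A.t) / (C.t - A.t);   // B's position between A and C
        float lerped = A.v + float(frac) * (C.v - A.v);
        if (std::fabs(lerped - B.v) <= eps) {
            out.erase(out.begin() + (a + 1));  // B is redundant; retry from A
        } else {
            ++a;                               // keep B; it becomes the new A
        }
    }
    return out;
}
```

Run over a set of densely sampled fake times plus the real keyframe times, this collapses straight-line segments down to their endpoints while keeping any key a purely linear playback would miss.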
When you are done you should find that in our simple example you have just reduced the whole set of times back down to the 3 keyframes, but for larger animations it is extremely common that you will end up with more or fewer keyframes, and both are important. If you have ended up inserting T values that were not there before, that will be explained below, and if you eliminated keyframes that were originally part of the data then you have just removed data that was originally redundant. This could happen even if you didn't add a bunch of fake keyframe times to your std::set<>. This same algorithm, without adding your fake T values, would allow you to remove redundant keyframes that exist in the FBX file (and these are common, ESPECIALLY with motion-capture data). Only Use Linear Interpolation Actually you will have already prepared to do this by adding the fake T values in the above algorithm. The fake T values allow you to examine the animation tracks with a fine-tooth comb. At each T value you are checking to see if linear interpolation between the previous and following T values results in the same value as your current T value. This wording is not accidental. If your linearly interpolated value does not match the value given by the FBX SDK, then you have determined that your run-time playback of the track, which as we said only uses linear interpolation, will not give you a result with the accuracy you desire. Then you keep that T value and move on. By the end of the algorithm, you have kept only the keyframes that, when played back purely through linear interpolation, give you a result that matches "close enough" (based on your epsilon). So there is nothing left to do. Play the track back via only linear interpolation. Once again, an engine does not load an FBX file. That is bonkers.
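Run-time playback of a reduced track then needs only a binary search and one lerp per sample. SampleLinear and TrackKey are assumed names; a real track would cache the current segment instead of searching every tick.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct TrackKey { uint64_t tUs; float v; };  // time in microseconds, value

// Sample a reduced track at time tUs using only linear interpolation.
// Assumes keys is non-empty and sorted by time.
float SampleLinear(const std::vector<TrackKey> &keys, uint64_t tUs) {
    if (tUs <= keys.front().tUs) return keys.front().v;  // clamp before start
    if (tUs >= keys.back().tUs)  return keys.back().v;   // clamp past end
    // First key with time > tUs; its predecessor starts the segment.
    auto it = std::upper_bound(keys.begin(), keys.end(), tUs,
        [](uint64_t t, const TrackKey &k) { return t < k.tUs; });
    const TrackKey &b = *it, &a = *(it - 1);
    float frac = float(tUs - a.tUs) / float(b.tUs - a.tUs);
    return a.v + frac * (b.v - a.v);
}
```

No per-key interpolation-type branch exists here, which is the whole payoff of reducing everything to linear keys at export time.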
You are responsible for going through an FBX file, taking out the data you want, loading your custom file via your engine, and playing back the animation yourself, which you do by attaching tracks (which you code from scratch) to scalars on your model (such as POS.X) and coding your tracks to give you back interpolated values between keyframes as you yourself tell them to "tick" (advance by a given amount of time). Once all of your custom tracks, which you definitely wrote yourself and 14,000,605% did not come from the FBX SDK, because we have very very firmly established that it is not directly connected to your engine, have ticked and written an updated scalar value into whatever properties have tracks assigned to them, create your final matrices and render. L. Spiro
  7. L. Spiro

    CopyResource with BC7 texture

https://docs.microsoft.com/en-us/windows/desktop/api/D3D10/nf-d3d10-id3d10device-copyresource Is the source mapped? Is it immutable? Are the destination/source formats the same, and are they different resources? L. Spiro
  8. Confusion around the term is a likely cause for negativity. What is a "producer"? We all produce something; that's a meaningless term. A programmer is a code producer, an artist is an art producer, a musician is an audio producer, so what the hell is "just a producer"? They produce...exactly what? Literally nothing. The only one who doesn't actually produce anything is the producer. Which makes it seem to me that the only way to justify the term is if the "producer" gets credit for the entire product (they supposedly produced the whole game). They tend to be listed at the top of credits, which makes the shoe fit, so it's natural that people who actually produce something should be a bit offended that the one person who produces nothing gets top credit for making the game. These are 2 issues packed into one paragraph so let me clearly separate them. The term "producer" is already confusing to people since it describes absolutely nothing about the job. I was a producer on a few projects when working with @mr_tawan, and I sucked at it because I literally wasn't even sure what my duties were. Many years later I finally figured out what they do, and I still have a bitter taste. Executive producers throw money at the project and perhaps own it, so I don't have as much of a beef with these guys taking credit for the whole game. It's at least a product of their money and they usually decide the general direction of the game. But regular producers get listed right under them and haven't actually worked on the game. They work on the people who work on the game. Their name is misleading. Call them "Managers" or "Schedule Enforcers" or "People Pushers". As long as they are being called producers, many people will still think of them as some fat slob in a smoke-filled room upstairs doing nothing and taking all the credit. This was mentioned before.
I agree, because the term doesn't describe anything about what they do, so we are indeed free to let our imaginations run wild when people use such nebulous, ambiguous terms. At one of my previous companies they were informally called "gerbils" or "hamsters" or some kind of pet that the executive producer sends to do his or her bidding. Simply changing the term would go a long way towards getting rid of resentment and confusion. Generally people and companies value the role (though not always, as mentioned by others), but it's very difficult for many to get past the branding. L. Spiro
  9. L. Spiro

    Do you still play video games?

    I have not yet played a game on which I have worked after release, but I started the Nintendo Club inside Square Enix where I aggressively forced unwitting coworkers into daily lunch games and tournaments, and I play Super Smash Bros. for Wii U daily (except this week because I have a more fulfilling hobby at the moment). Now at work we sometimes help the rest of the office concentrate by screaming and yelling and laughing at Mario Kart 8 Deluxe. These happy cheers must surely bring a happy atmosphere into their office lives and surely they get their work done faster. Also, I play the game of life, baby. L. Spiro
  10. L. Spiro

    How much longer can Trump/Trumpism last?

    Well, I mean, you can set up any rhetorical question by first making a baseless, non-objective statement that you project as fact. They factually, literally, verifiably have not all been anti-American over the vast majority of the past century, so where am I supposed to go with this? I can't address your question because it is invalid (in your context, anyway), and I can't offer you validation for trying to put forth your own personal views as objective fact just so that you can get a tight grip on the conversation within your own personal comfort zone and swat away confused replies from people who didn't notice the type of hijacking you just attempted and are trying to make valid points they are more likely to muddy when framing them in your own personal context, which doesn't seem to make sense to anyone else.

The farthest America has come from its roots on a purely social level is in regards to religion. Technology and other forms of advancement have pushed nearly all countries away from their roots, strictly speaking, but adapting to those changes is part of the natural growth of every country, so what's the point in saying, "Because we have changed, all together, as a nation, as a necessary means of survival, we are no longer America"? The founding fathers had no concept of machine guns and rocket launchers, and change is absolutely 100% necessary in order to survive, so what is the point in masking your pejorative spin on it behind a misplaced and unnaturally strict focus on what essentially amounts to "reasonable guesses" as to what it was and should be?

I'm pretty sure that the absolute biggest point on their minds was simply that we survive, first, and if that means rewriting the 2nd Amendment then that makes us more American, not less. I don't personally care if the situations in my life force me to live differently compared to what they may have desired. 
That has literally no bearing on how American I am, and fools who blindly follow old ways will rightfully be removed from the gene pool. It is absolutely the least compelling argument in the history of arguments that we are not American because we live today and not back then (and similarly for our presidents).

@Bregma How do Americans view individuality and collectivism?

Both are completely present in virtually all Americans, and virtually all of us switch in and out of these modes at different times. This is why you may be confused: it is easy for people to answer both ways, leaving you unable to get a clear answer.

I am extremely individual on my personal projects and identity. I care not to conform to any social standards unless they specifically serve me (no one consulted me when deciding on beauty standards, nuanced social constructs, etc.), or unless I have some other reason of my own. I stand out on this site for being very direct, but in fact, to achieve this I am only ignoring a single point of social etiquette, and it is extremely easy for me to put that point front-and-center in my head and completely change my tone (more closely matching my very jolly and respectful real-life persona). Most likely all of you are able to do this, so you know what it means to switch modes (I think a lot of us switch modes when going online vs. out in real life).

When at work or on any non-personal project, I prefer being part of the collective. Working together achieves greater things. I make music and art, but I would rather hire others who would do better jobs. I have no difficulty trusting others with their skills, and I view the project as a product of all of us. Without context, I have no particular preference towards individualism or collectivism, and I suspect that is true for the vast majority of humans.

All humans are like this. Each serves itself at times and can switch towards working for the greater good when necessary. Countries and cultures are unable to completely remove these traits from humans, but Japan does its best to suppress individualism, while America used to try to bring it out more, until corporations took over the country and started trying to turn people into profit drones. On average, Americans have never stood out as being overly individual at any point in time except within appropriate contexts. There has always (and I am speaking very strictly literally here) been a roughly equal number of people who focus just as much on collectivism, and America has never had a problem rallying people to a cause (especially when 90% of its population can focus on collectivism if the cause speaks to them).

Strictly speaking, "Americans" don't exist. America is the 3rd largest country and population in the solar system, and at these scales all conversations about what it means to be American break down (except when taken to legal definitions, etc.). There are always the same number of people ready to fight and not to fight. For every individual there is a collective member. For every gun nut there is a tree hugger. For every Trump supporter there is a normal person.

So, if there has been confusion regarding American individuality vs. collectivism, that's because it's an invalid question. Either-or doesn't exist. It has always been both, and there has never been a particular lean either way until you start looking at smaller, more manageable data points that isolate smaller regions of the country. L. Spiro
  11. L. Spiro

    How much longer can Trump/Trumpism last?

    We certainly hope so. Thaksin was a piece of s&*% and his whole family is corrupt, but they are not out to destroy the entire world and to align America with the axis of evil. At this point I would be happy if we were only dealing with another Thaksin. L. Spiro
  12. L. Spiro

    Floating point edit box

    Read via GetWindowTextW() (or GetWindowTextA() if you are using multi-byte, or GetWindowText() if you switch between Unicode and multi-byte sets via project settings). Convert a number to text via swprintf_s() (or sprintf_s() for multi-byte sets, or _stprintf_s() for use with TCHAR.h) using as many digits of precision as you want (%.17f prints a float out to 17 decimal places; see the Format Specifications). Convert text to a number via _wtof() or atof() (or _tstof() if using TCHAR.h). L. Spiro
  13. Since I lived and worked in Thailand, France, Japan, the UK, and the USA…

If the native alphabet is small enough, then they put its characters directly on the keyboards. This was my keyboard in Thailand:

In “weird” places (France and friends) they may swap letters around. This was my keyboard in France (notice the Q etc.)

UK’ians are also weird. I used to use this in the UK, but I had to get it replaced because of that damned short-as-hell Shift key:

Notice that other symbols are moved around as well, not just letters. All of these work like normal keyboards, except you press different buttons to get the same result.

Thai represents 2 special cases though: a larger alphabet (44 consonants and 15 vowel symbols) and 2 alphabets (English and Thai). All keyboards for scripts not rooted in the Roman alphabet keep the standard English alphabet and American layout alongside the native alphabet. You switch modes to type in one or the other, usually via Shift-Tilde.

In Japan, this was my keyboard:

Hiragana is written next to the English characters, which implicitly align with Katakana characters since they are 1-for-1 (Katakana characters are just a different way to draw Hiragana characters). Surrounding the space bar are keys to select input methods and alphabets. For the most part, you completely ignore the Hiragana characters. You can enter a special mode to type them directly, just as with Thai, but that means relearning to type, so no one does this. Instead it basically boils down to typing in English directly, or typing in Japanese phonetically and letting that get turned into Japanese based on which alphabet you have active. If I type “ku” while in Hiragana mode it becomes く; in Katakana mode it becomes ク. If I then hit the space bar, I get options for KU. Now I can select which Kanji I want, or select the Hiragana form or (a little lower) the Katakana form, etc. The IME pop-up that you see there learns which Kanji you use most often and puts them at the top. L. Spiro
  14. Timers in games fall into 2 categories: utility timers and in-game/gameplay timers.

Utility timers run on separate threads and trigger system events, etc. An example that used to be common (but should never be done) is a timer to run the game loop. Timers to update the sound system, to load data, to run physics, etc., are examples of utility timers. They keep the game running, but are not specific to the game. They run on system threads.

A game timer is meant to trigger an in-game action. They are gameplay-critical. They run in the main game thread and are updated at a specific point within the game loop. If the game lags, the timers lag. They can be based on game time, pausable game time, frames, ticks, logical updates, or other game-related timing mechanisms.

So which do you need? Neither. You’re updating an animation. This is definitely not the purpose of timers. You draw the correct frame of an animation by determining how much time has passed since the animation began, which you do by simply accumulating it each tick. If you Tick() for 33333 microseconds, each tick your objects add 33333 to their current animation time (which is stored in microseconds). Which frame to draw simply depends on how fast they animate. If I am drawing at 2 seconds in and the animations are running at 24 FPS, then I should be drawing frame 48. Why would you implement a whole timer system instead of a multiply and a divide? L. Spiro