L. Spiro

Member
  • Content count: 4352
  • Joined
  • Last visited
  • Days Won: 2

L. Spiro last won the day on April 16

L. Spiro had the most liked content!

Community Reputation

25705 Excellent

4 Followers

About L. Spiro

  • Rank
    Crossbones+

Personal Information

Social

  • Twitter
    @TheRealLSpiro
  • Github
    https://github.com/L-Spiro

  1. L. Spiro

    Floating point edit box

    Read the text via GetWindowTextW() (or GetWindowTextA() if you are using multi-byte character sets, or GetWindowText() if you switch between Unicode and multi-byte sets via project settings). Convert a number to text via swprintf_s() (or sprintf_s() for multi-byte sets, or _stprintf_s() for use with TCHAR.h) using as many digits of precision as you want (%.17f prints a float out to 17 decimal places; read the Format Specifications). Convert the text back to a number via _wtof() or atof() (or _tstof() if using TCHAR.h). L. Spiro
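    A minimal C++ sketch of the above for a Unicode build; the control ID IDC_EDIT_VALUE, the buffer size, and the helper names are hypothetical, not from the post:

    #include <Windows.h>
    #include <cstdio>       // swprintf_s().
    #include <cstdlib>      // _wtof().

    #define IDC_EDIT_VALUE 1001                 // Hypothetical dialog control ID.

    // Read the edit box's text and convert it to a number.
    double ReadEditValue( HWND hDlg ) {
        wchar_t szText[64] = { 0 };
        ::GetWindowTextW( ::GetDlgItem( hDlg, IDC_EDIT_VALUE ), szText, 64 );
        return _wtof( szText );                 // Returns 0.0 if the text is not numeric.
    }

    // Format a number back into the edit box with 17 decimal places.
    void WriteEditValue( HWND hDlg, double dValue ) {
        wchar_t szText[64];
        swprintf_s( szText, L"%.17f", dValue );
        ::SetWindowTextW( ::GetDlgItem( hDlg, IDC_EDIT_VALUE ), szText );
    }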
  2. Since I lived and worked in Thailand, France, Japan, the UK, and the USA… If the native alphabet is small enough then they try to use pictograms directly on the keyboards. This was my keyboard in Thailand:

    In “weird” places (France and friends) they may swap letters around. This was my keyboard in France (notice the Q, etc.):

    UK’ians are also weird. I used to use this in the UK but I had to get it replaced because of that damned short-as-hell Shift key:

    Notice that other symbols are moved around as well, not just letters. All of these work like normal keyboards, except you press different buttons to get the same result.

    Thai represents 2 special cases though: a larger alphabet (44 consonants and 15 vowel symbols) and 2 alphabets (English and Thai). All keyboards for scripts not rooted in the Roman alphabet keep the standard English alphabet and American layout alongside the native alphabet. You switch modes to type in one or the other, usually via Shift-Tilde.

    In Japan, this was my keyboard:

    Hiragana is written next to the English characters, which implicitly covers the Katakana characters as well, since they map 1-for-1 (Katakana characters are just a different way to draw Hiragana characters). Surrounding the space bar are keys to select input methods and alphabets. For the most part, you completely ignore the Hiragana characters. You can enter a special mode to type them directly just as with Thai, but that means relearning to type, so no one does this. Instead it basically boils down to typing in English directly, or typing in Japanese phonetically and letting that get turned into Japanese based on which alphabet you have active. If I type “ku”, in Hiragana mode it becomes く, and in Katakana mode it becomes ク. If I then hit the space bar I get options for KU. Now I can select which Kanji I want, or select the Hiragana form or (a little lower) the Katakana form, etc. The IME pop-up that you see there learns which Kanji you use most often and puts them at the top. L. Spiro
  3. Timers in games fall into 2 categories: utility timers and in-game/gameplay timers.

    Utility timers run on separate threads and trigger system events, etc. An example that used to be common (but should never be done) is a timer to run the game loop. Timers to update the sound system, to load data, to run physics, etc., are examples of utility timers. They keep the game running, but are not specific to the game. They run on system threads.

    A game timer is meant to trigger an in-game action. Game timers are gameplay-critical. They run in the main game thread and are updated at a specific point within the game loop. If the game lags, the timers lag. They can be based on game time, pausable game time, frames, ticks, logical updates, or other game-related timing mechanisms.

    So which do you need? Neither. You’re updating an animation. This is definitely not the purpose of timers. You draw the correct frame of an animation by determining how much time has passed since the animation began, which you do by simply accumulating it each tick. If you Tick() for 33333 microseconds, each tick your objects add 33333 to their current animation time (which is stored in microseconds). Which frame to draw simply depends on how fast they animate. If I am drawing at 2 seconds in and the animations are running at 24 FPS then I should be drawing frame 48. Why would you implement a whole timer system instead of a multiply and a divide? L. Spiro
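    A minimal C++ sketch of the accumulation described above; the struct and member names are illustrative, not from the post:

    #include <cstdint>

    struct Animation {
        uint64_t ui64CurTimeUs = 0;             // Accumulated animation time, in microseconds.
        uint32_t ui32FramesPerSecond = 24;      // How fast this animation plays.

        // Called once per logical update with the fixed tick length (e.g. 33333 microseconds).
        void Tick( uint64_t ui64DeltaUs ) {
            ui64CurTimeUs += ui64DeltaUs;
        }

        // Which frame to draw: just a multiply and a divide, no timer object needed.
        uint64_t CurFrame() const {
            return ( ui64CurTimeUs * ui32FramesPerSecond ) / 1000000ULL;
        }
    };
    // At 2 seconds in at 24 FPS: (2,000,000 * 24) / 1,000,000 = frame 48, matching the post.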
  4. Yes. Like I said. An effect that gets gradually more pronounced as you step away from the center of your vision, or of the screen. And as I said, you don’t notice it as much nearer to the center. But, of course, it is there, waiting for you to manually override your vision and see it for what it is.

    If you do your absolute best to swivel your head around one of your eyeballs, does a doorway's straight edge not still change perceived direction as you look up and down? If you look down, the bottom of the door edge goes away from you straight into the distance. It does not go away from you into your forward distance, because by rotating your head around your eyeball you maintained the exact same projection of your surroundings as when you were looking up. The angle at which the door goes away from you remains the same as long as you keep your eyeball in the same position. Rotating around your eyeball is like putting each individual pixel in the center of your screen; keep in mind that even if something was lower on your screen when you were looking ahead, it is still the same image whether it is projected at the bottom of your screen or in front of you when you look at it directly.

    You will start to see the curves if you override the “correction” your brain makes. You can override it because your visual system does not transform it into a straight line; you see a curved line, and your brain simply says, “What curve? It’s straight.” It’s literally the same mechanism as the blue/black vs. gold dress.

    So your perspective is not based on gathering light from a roughly singular point in space? So, glasses? What you describe is literally physically impossible. Light does not gather in a literally singular location within your eye, and these imperfections can cause a slight distortion, but once again your brain corrects for these, and it isn’t as bad as presented by digital media. L. Spiro
  5. If you render it into a render target you aren't really solving the problem; you are spending a lot of time simply reducing the problem. The distortion gets worse as you move towards the sides of the screen. That means the only place on the entire screen that would not have this error would be the single center pixel, if it existed. Since it doesn't, literally every single one of your pixels has this problem to some degree. This means your center circle too. And rendering them to a separate target can't correctly handle large objects that take up the whole screen. It won't play well with your current Z buffer. I heard in '89 it got arrested for possession.

    Only a pixel-perfect solution will work, and it must be post-processing, since triangles are always drawn with straight edges (a correct rendering of a long straight edge should curve around your view), so simply adjusting points in the vertex shader will not work. A pixel-perfect solution will usually have tons of artifacts, and in order to pull in the edges and corners of the screen correctly you must over-draw the scene onto a larger texture. A lot of work and a performance hit just to get a result that you will likely discard before returning to the standard way of rendering. L. Spiro
  6. L. Spiro

    Insomnia keeps me company

    Get the CBT-i Coach app, keep up your diary for a bit, and by looking at your own data (and possibly by suggestion from the app) you can decide whether to see a doctor. The most obvious thing in the world is to not nap. That sleep is meant to be burned off later, and you aren't going to get to sleep if there is nothing to burn. Also, is there a reason for the regimen? It sounds as if you are getting the amount of sleep you require but you simply calculated that you need more (when you don't). If you were awake longer before sunset, it is perfectly normal for the sunset to trigger your sleepiness (at 3 or 4 PM) much more heavily. The only suspicious point in your story is why you suddenly think you need more sleep than you actually need. L. Spiro
  7. Ray tracers can handle it correctly if each ray is cast spherically out from an eye (or any infinitely small point), modeling reality (except of course that photons go into the eye, not out of it (well, some do, but they do not aid in your vision)). These are usually reserved just for movies, since ray tracing is slow in the first place, but extra slow the more accurately you model reality. Pyramid projections are the standard because they are fast, and they will not be going away from real-time media any time soon. L. Spiro
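    A small C++ sketch contrasting the two kinds of primary-ray generation mentioned above; the float3 type, the latitude/longitude angular mapping, and the function names are assumptions, not from the post:

    #include <cmath>

    struct float3 { float x, y, z; };

    // Standard pyramid (pinhole) projection: equal steps across a flat image plane at z = 1.
    float3 PinholeRayDir( float u, float v, float fovY, float aspect ) {
        // u, v in [-1, 1].
        float tanHalf = std::tan( fovY * 0.5f );
        return float3 { u * tanHalf * aspect, v * tanHalf, 1.0f };  // Normalize before use.
    }

    // Spherical casting: equal angular steps, so off-center rays are not stretched.
    float3 SphericalRayDir( float u, float v, float fovX, float fovY ) {
        float yaw   = u * ( fovX * 0.5f );      // Angle left/right of the view axis.
        float pitch = v * ( fovY * 0.5f );      // Angle above/below the view axis.
        return float3 {
            std::sin( yaw ) * std::cos( pitch ),
            std::sin( pitch ),
            std::cos( yaw ) * std::cos( pitch )
        };
    }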
  8. Your eyes do not distort images that are off to the side this way. This happens because the spherical space around your head and in your field of view in real life is being projected onto a flat plane in the renderer. This adds distortion that was not previously present, very obviously (since it is literally distorting a spheroid space into a flat pyramid).

    In reality, if the sphere had moved to the side, it would only seem as if it had changed distance from you (it moved farther away by moving to one side) and it would retain its spherical shape. You can verify this by simply turning in place to look at the new sphere. Nothing about the sphere will change from your perspective except that it moves to the center of your vision.

    In non-ray-traced computer graphics (and in most ray-traced scenes), the sphere, as projected for rendering, will retain its distance from you as it moves left and right. Note that in the physical world of the game (inside the CPU simulation), a vector from the moved sphere to your head will match reality (it will give you a larger distance as the sphere moves to the side, exactly matching reality); this issue is about the projection. Because of the projection, the sphere will cover the exact same range of depth values as it moves across the screen (check the depth buffer). That means the near and far points of the sphere have the same range of Z values as it moves left and right. This is not consistent with reality and causes the stretching you see.

    In reality, you see the world this way: you just don’t notice, because your field of view is limited to such a small range that the way straight lines curve away from you is subtle; but whether you notice it or not, it is physically and mathematically impossible for this not to be the case, no matter how small your field of view is.

    But as mentioned, this is not an issue. There is nothing here to be fixed. This is completely expected behavior for games, which do not gather photons from a perfectly simulated game world. L. Spiro
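    A tiny C++ illustration of the depth-buffer claim above, using made-up numbers and assuming a view space with the camera at the origin and +Z pointing into the screen:

    #include <cmath>
    #include <cstdio>

    int main() {
        const float fZ = 10.0f;                             // The sphere's center stays 10 units "into" the screen.
        for ( float fX = 0.0f; fX <= 8.0f; fX += 4.0f ) {   // Slide the sphere to the side.
            float fDist = std::sqrt( fX * fX + fZ * fZ );
            // The depth buffer is driven by view-space Z, which never changes here,
            // while the true distance to the eye grows as the sphere moves sideways.
            std::printf( "x = %4.1f   view-space z = %4.1f   actual distance = %6.3f\n",
                         fX, fZ, fDist );
        }
        return 0;
    }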
  9. L. Spiro

    FPP Game

    That’s niftypuff. Thanks for sharing. I was wondering where msmile032 sat, and now I know. If you cared to contribute to the world of video games, that would be niftypuff. L. Spiro
  10. L. Spiro

    Have you ever had your game idea stolen?

    The rule makes sense; you’re just underestimating how naive I was: I didn’t even call a developer, I just called the LEGO customer-support number listed on the box. L. Spiro
  11. L. Spiro

    Have you ever had your game idea stolen?

    No, and in fact I have had a game not stolen once. When I was around 14 I called LEGO to tell them about my fantastic wonderful game that I had designed, only to be told, "Sorry we can't accept ideas from outside the company." It sucks not having your ideas stolen. L. Spiro
  12. L. Spiro

    Advanced AI in Games?

  12. There are not many places where it can fit. Training the AI is an offline, non-shipping process, so immediately we can toss aside any ideas based on letting the AI grow as part of the game.

    Now that we are talking only about games shipping with a developed, ready-to-go AI, it first may seem logical to consider that a neural network can handle most AI tasks better than you could write by hand, and that is true (for a small set of tasks a direct hard-coded solution is best, but for all other tasks there can always be imagined a neural-network solution that handles the same task at least as well as manual code), but it ignores the important step of actually arriving at said perfect solution. The problem is that just because a perfect neural network can almost always be imagined to handle a task better than manual code can (and this is what tempts people to keep thinking about how to apply them to games), that doesn't mean you can create said perfect AI/neural-network training weights. Simply making a large working neural network is a chore in itself. Once you have invested that time into it, you have to train it into the perfect end result, and there is no guarantee that that will happen. How and what a neural network learns at each iteration is unknown to us, and we can only make guesses as to how to steer it towards our desired behaviors. By the time we discover that it is learning the wrong behavior it is too late. If you try to guide it to the desired behavior from there, there is no guarantee how much success you will have. You can start the learning process over, but you still have no guarantees, and you will have lost too much time.

    As you can see, the main issue is the lack of control. The purpose of machine learning is for computers to teach themselves things that would be much too complex for us to teach them manually (for example, how to identify images of objects). We can't just look at their tables of weights and make adjustments or track progress or judge correctness, and machine learning will never be part of games (or become mainstream) until this can happen, so if you want to pioneer anything then start working on tools to help develop neural networks or otherwise facilitate machine learning.

    I personally see machine learning, AI, and neural networks as fitting into our future pipelines the same way languages do now. We became more and more productive with certain languages, so we created parser generators to create parsers for our languages so we could make better languages, and so on. Machine learning is blooming and it will soon be a large part of developing any game. We should be settling on standards early and making tools and libraries to generate, train, and introspect large networks and deep learning. Starting a new neural-network project should be at least as easy as creating a new language: just explain the details to a "Neural Network Generator" as we do with parser generators, and the exported code will sit on top of a foundation that lets us more easily create introspection routines and possibly creative ways to guide learning towards results in a more consistently controllable manner. L. Spiro
  13. Count the number of squares in a fog patch. Use that information to decide the direction of the closest largest fog patch. L. Spiro
  14. Then use a routine to find and mark fog sections, find the closest largest fog section, and use A* to head in that direction. L. Spiro
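    A minimal C++ sketch of the marking-and-counting step described in the two posts above; the grid representation, the Patch bookkeeping, and the "closest cell" tie-breaking are assumptions, and the A* pathing itself is left out:

    #include <cstdlib>
    #include <queue>
    #include <utility>
    #include <vector>

    struct Patch { int id; int size; int nearestX, nearestY; };     // One connected fog region.

    // Flood-fills 4-connected fog cells, labeling each patch and counting its squares.
    std::vector<Patch> MarkFogPatches( const std::vector<std::vector<bool>> &fog,
                                       std::vector<std::vector<int>> &labels,
                                       int playerX, int playerY ) {
        int h = (int)fog.size(), w = h ? (int)fog[0].size() : 0;
        labels.assign( h, std::vector<int>( w, -1 ) );
        std::vector<Patch> patches;
        for ( int y = 0; y < h; ++y ) {
            for ( int x = 0; x < w; ++x ) {
                if ( !fog[y][x] || labels[y][x] != -1 ) { continue; }
                Patch p { (int)patches.size(), 0, x, y };
                std::queue<std::pair<int, int>> q;
                q.push( { x, y } );
                labels[y][x] = p.id;
                while ( !q.empty() ) {
                    auto [cx, cy] = q.front(); q.pop();
                    ++p.size;
                    // Remember the patch cell closest (Manhattan) to the player; A* can path to it later.
                    if ( std::abs( cx - playerX ) + std::abs( cy - playerY ) <
                         std::abs( p.nearestX - playerX ) + std::abs( p.nearestY - playerY ) ) {
                        p.nearestX = cx; p.nearestY = cy;
                    }
                    const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
                    for ( int i = 0; i < 4; ++i ) {
                        int nx = cx + dx[i], ny = cy + dy[i];
                        if ( nx >= 0 && ny >= 0 && nx < w && ny < h && fog[ny][nx] && labels[ny][nx] == -1 ) {
                            labels[ny][nx] = p.id;
                            q.push( { nx, ny } );
                        }
                    }
                }
                patches.push_back( p );
            }
        }
        return patches;     // Pick the largest patch (ties broken by proximity) and hand its nearest cell to A*.
    }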