capn_midnight

Member
  • Content count

    15309
  • Joined

  • Last visited

Everything posted by capn_midnight

  1. Recognize these faces as my tweeps, but the BGs have been changed. Also, I think they screwed up their targeting. https://t.co/qDRDQsptHO
  2. My wife watches this show "Once Upon a Time". I find the only relatable characters are the recurring villains.
  3. This database app vendor I have to deal with doesn't provide a public API. Or so they think. They forgot about SendInput.
  4. RT @JulianHiggins: The eternal civil war - procrastination vs. motivation. https://t.co/LWncuQnSQq
  5. I had forgotten how pedantic Java was. I had gotten to live--for a short while--in blissful ignorance, that which is so rarely recapturable.
  6. In this video, I demonstrate using the Primrose text editor to live-edit the world around me.
  7. If you'd like to see the (kind of crappy) video of me #livecoding #WebVR #VR #WebGL #JavaScript you can get it here: https://t.co/Ybze6FAPrb
  8. capn_midnight

    The Good, The Bad and The WebGL-y

    >> ThreeJS caught my attention because it allowed games to be built directly into a browser with no need for plugins. While great in theory, there was a huge learning curve and 3JS, in its current state, is the toy of elite coders and is pretty much inaccessible for someone wanting to implement simple WebGL into their current online presence.

    I forget sometimes how far I've come.

    In terms of libraries, Three.JS helps you *avoid* having to write a lot of particularly difficult code. It has a very useful scene graph implementation, and really does some great work turning WebGL's procedural madness into a much more manageable object-oriented style. For the most part, the design is very straightforward and consistent, though admittedly the documentation is lacking, or worse, in some cases out of date.

    I personally find using something like Unity more difficult than using Three.JS. When it comes to using a GUI system to design a game, I'm a noob. But I've been programming for over 15 years.
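The scene-graph win mentioned above is easy to illustrate. This is a toy sketch of the idea, not Three.js's actual API: each node stores a position relative to its parent, and world positions compose down the tree, which is exactly the bookkeeping raw WebGL makes you do by hand. For brevity the "position" here is a single 1D coordinate rather than a full 4x4 matrix.

```javascript
// Toy scene-graph node: a local position plus children.
// Illustrative only -- Three.js's Object3D does this with full
// matrices, rotation, and scale; here it's 1D translation.
function SceneNode(x) {
  this.x = x;          // position relative to parent
  this.children = [];
}

SceneNode.prototype.add = function (child) {
  this.children.push(child);
  return child;
};

// Walk the tree, accumulating parent offsets into world positions.
SceneNode.prototype.worldPositions = function (parentX, out) {
  parentX = parentX || 0;
  out = out || [];
  var worldX = parentX + this.x;
  out.push(worldX);
  this.children.forEach(function (c) {
    c.worldPositions(worldX, out);
  });
  return out;
};

// Move the root and every descendant follows -- the property that
// makes scene graphs so much nicer than procedural GL state calls.
var root = new SceneNode(10);
var arm = root.add(new SceneNode(2));
arm.add(new SceneNode(1));
console.log(root.worldPositions()); // [10, 12, 13]
```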
  9. capn_midnight

    VR Lessons Learned So Far

    @jefferytitan: I don't think the 75fps framerate target that Carmack talks about is strictly necessary, but it definitely needs to be at least 60fps. What is surprising is that scenes don't need to be extremely detailed to achieve presence. I would say it's better to achieve a high framerate than a high polygon count, high-res textures, etc. We used to play 3D games that were nothing more than billboarded sprites and were fine with it. Something like that would still work in VR, as long as the framerate and interactions were smooth.

    On avatars, I've been using a model of a fat, cartoon bear that I threw together with a simple running animation. The arms are not rigged to my Leap Motion yet, but you mostly can't see the arms. Having a body and feet to see, though, has been nice. I like it, at least. YMMV.

    However, with the Leap, I've noticed that it absolutely *is* necessary to have a fully rigged hand model. Simple spheres for each of the knuckles work fine, but a single sphere for the entire hand is not good enough. However, I'm having some network performance issues with the Smartphone HMD sync in Psychologist.js, so I might have to degrade the visual in that case (i.e. transmit only the hand's location and not all of the individual finger joints).

    But without direct, kinematic sync with body motion, it's best to show nothing. I've found any sort of user-oriented movement that was not triggered by the user's actual movement to be the most likely to cause motion sickness. This is where I refer to presence being a double-edged sword. If you haven't achieved presence, the user (or rather, me) still feels like they are looking at a screen, and non-didactic movement isn't disorienting. Once you get the user feeling like they are in the scene, they need to be in complete control of everything that is attributed to them.

    So in short, if you do use some sort of fake hand model, I'd try to keep it abstract. Show the model of the tool held in the hand but not the hand itself, and keep movement to a minimum.

    @cozzie: I have the Samsung Galaxy Note 3 for doing the Smartphone-style HMDs, and a cardboard box that I built myself for holding it, with a strap I cribbed from a pair of head-wearable magnifying lenses (but not the lenses; I bought lenses specifically on Amazon.com). It's not Google Cardboard, because Google Cardboard was announced the day I ordered the lenses.

    I also just acquired an Oculus Rift DK2. I spent a little time last night and most of this morning getting it to work with Psychologist.js (okay, I officially hate that name now, it's too long). It's up and running now, but it requires special development builds of Chromium or Firefox.
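The bandwidth trade-off behind "transmit only the hand's location" is easy to put numbers on. The message format below is hypothetical, not Psychologist.js's actual protocol: assume 3 floats (x, y, z) per tracked point at 4 bytes each, and a Leap-style hand of one palm point plus 5 fingers of 4 joints each.

```javascript
// Back-of-the-envelope payload sizes for syncing one hand over the
// network. Hypothetical format: 3 floats (x, y, z) per point,
// 4 bytes per float.
var BYTES_PER_POINT = 3 * 4;

function handPayloadBytes(opts) {
  // 1 palm point, plus 5 fingers x 4 joints each when fully rigged.
  var points = 1 + (opts.includeFingers ? 5 * 4 : 0);
  return points * BYTES_PER_POINT;
}

var full = handPayloadBytes({ includeFingers: true });      // 252 bytes
var degraded = handPayloadBytes({ includeFingers: false }); // 12 bytes
console.log(full, degraded, full / degraded); // 252 12 21
```

At a 60Hz sync rate the degraded form sends about 21x less hand data per frame, which is the trade against visual fidelity described in the post.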
  10. capn_midnight

    VR Lessons Learned So Far

    This is a loosely organized list of things I've noticed while using and developing virtual reality applications for the smartphone-in-headset form factor. It is specific to my experience and may not reflect anyone else's personal preference, as VR apparently depends quite a lot on preference. But I think that steadfast rules of design are necessary for the impending VR bubble, to convey an aesthetic and unified design such that users may expect certain, common idioms and adapt to VR software quickly. Thus, this list is a roadmap of my current aesthetic for VR. It is a living document, in the sense that future experiences may invalidate assumptions I have made and force me to recognize more universal truths. Proceed with caution.

    Presence is the ability to feel like you are in the scene, not just viewing a special screen. You'll hear a lot of people talk about it, and it is important, but ultimately I believe it to be a descriptor of an end result, a combination of elements done well. There is no one thing that makes "presence", just as there is no one thing that makes an application "intuitive", "user friendly", "elegant", or "beautiful". They either are or they are not, and it's up to the individual experiences of the users to determine it.

    Presence is a double-edged sword. I've found that, once I feel "present" in the application, I also feel alone, almost a "ghost town" feeling. Even if the app has a single-user purpose, it seems like it would be better in an "arcade" sort of setting. Being able to see other people may help with presence.

    The hardware is not yet ready for the mass market. That's good, actually, because the software and design side of things are a lot worse off. Now is the time to get into VR development. I'll say nothing more about the hardware issues from a performance side. They are well known, and being worked on fervently by people with far more resources than I.

    Mixing 2D and 3D elements is a no-go. Others have talked about not placing fixed-screen-space 2D heads-up-display elements in the view for video game applications, but it extends much further than that. The problem is two-fold: we currently have to take off the display to do simple things involving any sort of user input, and there is no way to manage separate application windows. We're a long way off from getting this one right. For now, we'll have to settle for being consistent within a single app on its own. A good start would be to build a form API that uses three-space objects to represent its controls.

    Give the user an avatar. This may be a personal preference, but when I look down, I want to see a body. It doesn't have to be my body, it just needs something there. Floating in the air gives me no sense of how tall I stand, which in turn gives me no sense of how far away everything is.

    Match the avatar to the UI, and vice versa. If your application involves a character running around, then encourage the user to stand and design around gamepads. If you must have a user sit at a keyboard, then create a didactic explanation for the restriction of their movement: put them in a vehicle.

    Gesture control may finally be useful. I'm still researching this issue, but the experiments I've done so far have indicated that the ability to move the view freely and see depth makes gestures significantly easier to execute than they have been with 2D displays. I am anxious to finish soldering together a device for performing arm gestures and test this more thoroughly. This demo makes it clear that this is at least an extremely lucrative path of study.

    Use all of the depth cues. Binocular vision is not the only one. Place familiar objects with well-known sizes in the scene. Use fog/haze and a hue shift towards blue at further distances. But most importantly, do not give the user long view distances. Restrict them with blind corners instead. Binocular vision is only good for a few feet before the other depth cues become more important, and we are not yet capable of making a convincing experience without the binocular cue.

    Object believability has more to do with textures and shading than polygon count. Save on polygon count in favor of more detailed textures and smooth shading.

    Frame rate is important. I remember being perfectly happy with 30FPS in games 10 years ago. That's not going to cut it anymore. You have to hit 60FPS, at least. Oculus Rift is targeting 75FPS. I'm sure that is a good goal. Make sure you're designing your content and algorithms to maintain this benchmark.

    Use lots of non-repetitive textures. Flat colors give your eyes nothing to "catch" on to make the stereo image. The design of these viewer devices is such that the eyes must actually fight their natural focus angle to see things in the display correctly. It will be easier on the user if you make it as hard as possible to focus anywhere but on object surfaces. Repetitive textures are only slightly better than flat colors, as they provide a chance to focus at the wrong angle yet still achieve what is known as the "wallpaper effect". And do not place smaller objects in any sort of pattern with regular spacing.

    Support as many different application interactions as possible. If the user has a keyboard hooked up, let them use the keyboard. If they have a gamepad, let them use the gamepad. If the user wants to use the app on their desktop with a regular 2D display, let them. Do not presume to know how the user will interact with the application. This early in development, not everyone will have all of the same hardware. Even into the future, it will be unlikely that an app can be successfully monetized with a user base solely centered on those who have all of the requisite hardware for a full VR experience. Be maximally accessible.

    Make the application useful. This seems like it shouldn't need to be said, but ask yourself what would happen if you were to rip out the "VR" aspect of the application and have people use it with traditional IO elements. Treat the VR aspect of it as tertiary. Presence by its very definition means forgetting about the artifice of the experience. If the experience is defined by its VR nature, then it is actively destroying presence by reveling in artifice.

    Much research needs to be done on user input, especially for large amounts of text. Typing on a keyboard is still the gold standard of text entry, but tying the user to the keyboard does not make for the best experience, and reacquiring a spatial reference to the keyboard after putting the headset on and moving away from the keyboard is nearly impossible. Too often, I find myself reaching in completely the wrong direction.

    3D Audio is essential. We could mostly get away without audio in 2D application development, but in VR it is a significant component of sensing orientation and achieving presence. I believe it works by giving us a reference to fixed points in space that can always be sensed, even if they are not in view. Because you always hear the audio, you never lose the frame of reference.

    I may add to this later.
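The "fog/haze and a hue shift towards blue" depth cue above can be sketched as a simple distance blend. This is an illustrative function, not any engine's built-in (in Three.js you would get a similar effect from its fog settings); the haze color and distances are made-up values.

```javascript
// Sketch of the atmospheric depth cue: linearly blend an object's
// color toward a bluish haze as it approaches the far distance.
var HAZE = { r: 0.6, g: 0.7, b: 1.0 }; // illustrative haze color

function hazeBlend(color, distance, maxDistance) {
  // t = 0 up close (pure object color), t = 1 at max distance (pure haze)
  var t = Math.min(Math.max(distance / maxDistance, 0), 1);
  return {
    r: color.r + (HAZE.r - color.r) * t,
    g: color.g + (HAZE.g - color.g) * t,
    b: color.b + (HAZE.b - color.b) * t
  };
}

var red = { r: 1, g: 0, b: 0 };
console.log(hazeBlend(red, 0, 100));   // near: pure red
console.log(hazeBlend(red, 100, 100)); // far: fully hazed to blue
```

Farther objects drift toward the sky color, which reads as distance even on a flat screen, reinforcing the binocular cue instead of relying on it.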
  11. I've written a little bit about this project for a while, and I've finally decided on a name. Psychologist.js is a framework for rapidly prototyping virtual reality applications using standard HTML5 technologies. It keeps you sane while bending your mind. You can view a demo of the framework in action here. You can access the repository on Github here. Features:
    • Google Cardboard compatible: use your smartphone as a head-mounted display,
    • Multiplayer: share the experience and collaborate in cyberspace,
    • Leap Motion support: control objects with natural movement,
    • Gamepad support: create fast-action games,
    • Speech recognition: hands-free interactions,
    • Peer-2-peer input sharing: use devices connected to your PC with your Google Cardboard,
    • 3D Audio: create fully immersive environments,
    • App Cache support: save bandwidth,
    • Blender-based workflow: no proprietary tools to learn,
    • Cross-platform: works in Mozilla Firefox and Google Chrome on Windows, Linux, and Android,
    • Oculus Rift support (coming soon): the cutting edge of head-mounted display technology.
  12. capn_midnight

    HTML5 audio for games made easy

    You're welcome!
  13. Audio in the browser is deceptively tetchy. It's easy to get a basic sound to play with the <audio> tag:[code=html:1]<audio src="click.mp3" autoplay>If you can read this, your browser is not fully HTML5 compatible.</audio>[/code] But there are several problems with this:
    • First of all, good luck navigating this compatibility chart. Until Mozilla finally caved and decided to support MP3, there was no single file format supporting Chrome, Firefox, Safari, and IE. Opera is still a painful holdout on file formats, and the situation on mobile is disgusting.
    • You can't programmatically trigger audio playing on mobile devices without direct user action in your game loop. You basically have to put up a "start game" button that tricks the user into playing a silent audio file; then you have free rein to trigger that particular audio element.
    • You get almost no control over how the file plays. There is a volume setting, but it hasn't always been reliable on all platforms. There's no mixing. There are no effects.
    • It's super difficult to rewind and replay an audio file. Honestly, I still don't really know how to do it correctly, and I'm a goddamn salty pirate.
    In short, the <audio> tag is for one thing and one thing only: for NPR to post their podcasts directly on their site. Okay, let's get out of this malarky. What else do we have? Well, there's the Web Audio API:
    • Granted, formats still aren't great. Strangely, the compatibilities don't exactly match the <audio> tag. But MP3 is universally there, as is AAC. And technically, you could write a decoder if you wanted. I wouldn't suggest it, but it is possible.
    • You can play audio whenever you want, as many times as you want, on desktop and mobile, without buggering around with stupid hacks.
    • It's a fairly well-featured signal processing system. That's great if you know what you're doing, murder if you don't.
    • It's a little difficult to program. And the MDN tutorial gets far too into crazy effects for me to bother if all I want to do is make a few blips, bloops, and gunshot sounds.
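The "start game button tricks the user into playing a silent audio file" hack can be sketched like this. The AudioContext calls are standard Web Audio; the once-only wrapper and event wiring are illustrative, and the browser-only parts are guarded so the logic loads anywhere.

```javascript
// Sketch of the mobile audio "unlock" trick: browsers only allow
// audio to start from inside a genuine user-gesture handler, so the
// first tap plays a one-sample silent buffer, after which the page
// may trigger sounds freely.
function makeUnlocker(startSilentBuffer) {
  var unlocked = false;
  return function unlock() {
    if (!unlocked) {
      unlocked = true;
      startSilentBuffer(); // must run inside the gesture handler
    }
    return unlocked;
  };
}

if (typeof window !== "undefined") {
  var ctx = new (window.AudioContext || window.webkitAudioContext)();
  var unlock = makeUnlocker(function () {
    // A silent one-sample buffer satisfies the gesture requirement.
    var buffer = ctx.createBuffer(1, 1, ctx.sampleRate);
    var src = ctx.createBufferSource();
    src.buffer = buffer;
    src.connect(ctx.destination);
    src.start(0);
  });
  // Wire it to the first tap on the "start game" button / screen.
  document.addEventListener("touchend", unlock, false);
}
```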
    That's why I wrote this: Audio3DOutput.js. Here's what you do:[code=js:1]// to start, create the audio context
var audio = new Audio3DOutput();

// then, check if your system supports it
if(audio.isAvailable){
  // if you want to play a specific sound file every time a user clicks a mouse button:
  audio.loadBuffer("click.mp3", null, function(buffer){
    window.addEventListener("mousedown", function(evt){
      audio.playBufferImmediate(buffer, 0.25); // 25% volume gain
    });
  });

  // if you want progress notifications while the audio is loading and processing:
  audio.loadFixedSound("song.mp3", /* looping */ true,
    function(op, file, numBytes){
      console.log(op, file, numBytes);
    },
    function(snd){
      snd.source.start();
    });

  // if you want to position the sound in 3D space:
  var sourceX = 10, sourceY = -4, sourceZ = 3;
  audio.load3DSound("ambient-sound.mp3", true, sourceX, sourceY, sourceZ, null,
    function(snd){
      snd.source.start();
      setTimeout(moveListener, 5000); // 5 seconds
    });

  function moveListener(){
    // x, y, z, etc. here stand in for the listener's new pose
    audio.setPosition(x, y, z);
    audio.setVelocity(vx, vy, vz);
    audio.setOrientation(
      ox, oy, oz,
      upx, upy, upz);
  }

  // if you want to take the first file of a list that successfully loads:
  audio.loadFixedSoundCascadeSrcList(["song.aac", "song.ogg", "song.mp3"], null,
    function(snd){
      snd.source.start();
    });

  // or if you want to synthesize the raw PCM data yourself:
  // in monaural
  var data = [], seconds = 0.25;
  for(var i = 0; i
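The snippet above cuts off right at the raw-PCM case, so here is a stand-alone sketch of the same idea against the plain Web Audio API rather than Audio3DOutput.js. The 440Hz tone and quarter-second length are arbitrary illustration values.

```javascript
// Synthesizing raw PCM samples by hand: fill an array with one
// sine-wave sample per tick of the sample rate, values in [-1, 1].
function sineSamples(freq, seconds, sampleRate) {
  var n = Math.floor(seconds * sampleRate),
      data = new Array(n);
  for (var i = 0; i < n; ++i) {
    data[i] = Math.sin(2 * Math.PI * freq * i / sampleRate);
  }
  return data;
}

// Browser-only playback wiring, guarded so the synthesis function
// above also runs outside a browser.
if (typeof window !== "undefined") {
  var ctx = new (window.AudioContext || window.webkitAudioContext)();
  var samples = sineSamples(440, 0.25, ctx.sampleRate);
  var buffer = ctx.createBuffer(1, samples.length, ctx.sampleRate);
  buffer.getChannelData(0).set(samples);
  var src = ctx.createBufferSource();
  src.buffer = buffer;
  src.connect(ctx.destination);
  src.start(); // a quarter-second 440Hz beep
}
```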
  14. capn_midnight

    HTML5 audio for games made easy

    Very nice, but you shouldn't use <progress> elements like that. You should be using <meter>.
  15. capn_midnight

    HTML5 audio for games made easy

    That's a good idea. And yes, with Web Audio, it shouldn't be difficult.

    My goal with this was just to get a bare minimum of useful audio together. The Web Audio API is extensive, but for throwing together quick demos, 80% of it is unnecessary. This JS file is for that 20% use case, wherein the <audio> tag is completely useless.
  16. I recently received a very generous job offer from a rather prestigious company. I didn't even apply; they contacted me through LinkedIn. To say that I was honored to even receive a cold-contact from such a company is an understatement. "Flabbergasted" is a much more appropriate term. The salary was great. There was an additional cash bonus of approximately "holy crap" dollars. There was also a stock grant of even more "are you freaking serious" dollars. The projects sounded right up my alley. And the managers sounded like good people. All around, it sounded great. But I had to say no, for two specific reasons completely unrelated to compensation packages.

    I'm an east-coast guy and they wanted me to move to the left coast. My family is here and my wife's family is here. We specifically live in a place that is convenient for seeing our families on a regular basis. We had considered the possibility of moving, if the job presented a clear opportunity for significant career advancement. But we'd also like to have kids soon, and that's going to peg us even harder to only a few hours' drive from where the grandmas and grandpas live.

    Also, I've spent the last two--almost three--years working as a freelancer. The term "free-lance" comes from the great Scottish author Sir Walter Scott, referring to a sort of medieval mercenary, one whose "lance" was free of any sworn allegiance to any feudal lords. I've been incredibly productive during that time. The corporate desire to have people "on site" grows more and more alien to me every day. I know what work I'm capable of, and I think being self-directed and independent has markedly improved my output. To be asked to go to a specific place to do work in our now rather aged era of telecommuting feels like being asked to intentionally hobble myself for nothing more than someone else's convenience. I think the work is more important than that.

    I'm not done with my current path. I started freelancing for a reason. I was dissatisfied with my work-life relationship, and I hoped I could one day create the sort of company that I have always wanted to work for. Freelancing is not an end in itself, but it is hopefully a means. The flexibility it affords is much closer to the ideal work life I envisioned for myself than anything I've encountered before. I'm now able to work on my own R&D projects in addition to the freelancing, with a focus and effort for which I never had adequate time while I was working as a 9-to-5 stiff. To take the job would be to give up on those plans, just as they are starting to show promise.

    I take the mercenary notion of freelancing very seriously. I operate by my own ethic, one that places doing the right thing and doing the most important things above doing what I'm told. When a client hires me, they don't just buy my time, tapping on a keyboard at whatever they want. They buy my opinions and my taste regarding how work should be organized. Sometimes that can come across as defiance, but I do it out of respect for their needs as I see them, not as they are expressed in the heat of the moment. Freelancing is a system that explicitly maintains that--at the end of the day--I own my own labor. It is the nature of corporate non-compete and non-disclosure agreements to capture and monopolize my labor as much as possible, for as little compensation as possible--indeed, why would a contractual agreement be necessary if the compensation were enough? And to make "my" company, I need to own my labor. While the NDA and non-compete weren't a major deciding factor in themselves in turning down the job, the prospect of what they meant for my personal projects certainly helped the decision along. It would essentially mean cancelling most of my projects. Their offer, while generous, was not quite that compelling. I just couldn't do it.

    Throughout the interviewing process, I had this voice in the back of my head, chiding me: "it's a great job, you don't turn down such a good job." I'm sure working for this company would have been very rewarding. But I don't want a "job". I think I can do more. And I think I owe it to everyone involved to do so.
  17. capn_midnight

    Why I turned down a great job offer.

    Yeah, and while the pay is good, it wasn't significantly more than I could be making with consulting. I'm specifically consulting part-time right now so I can work on my side projects more. If I were full-time again (which would please my client quite a bit), I'd be netting just about the same amount. I've already decided less pay is fine for doing the work I want to do; I'd like to have a chance to actually do it.
  18. capn_midnight

    VR Lessons Learned So Far

    Testing all of these assumptions out is why I'm building a framework for VR applications in modern web browsers: https://www.github.com/capnmidnight/VR

    It's been fascinating to try things that I thought would be awesome and see them not work out at all, while also trying things that I thought were not going to be a big deal but turned out to be hugely important.

    For example, I thought speech commands were going to be a really big deal, but they turned out to be almost completely useless. It's really difficult to get a speech engine to recognize very short phrases, and longer phrases are cumbersome. It just doesn't make for anywhere near a good experience.

    And then there are the weird things, like combining actions from one hand over the Leap Motion with mouse motions in the other hand. I think it's because the mouse allows me to make very broad movements and the Leap allows me to make small ones, and the two combined cover areas that neither covers well on its own.

    Anyway, really fascinating stuff.
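The mouse-plus-Leap combination described above can be sketched as coarse-plus-fine input blending. This is purely illustrative; the scale factors and pointer shape are made-up values, not anything from the actual project.

```javascript
// Treat the mouse as the coarse positioner and the hand as a fine
// offset layered on top of it: broad mouse motion dominates, small
// hand motion refines.
function combinePointer(mouse, hand) {
  var COARSE = 1.0, // mouse moves cover the whole scene
      FINE = 0.1;   // hand motion only nudges the cursor
  return {
    x: mouse.x * COARSE + hand.x * FINE,
    y: mouse.y * COARSE + hand.y * FINE
  };
}

// Mouse puts the cursor in the neighborhood; the hand fine-tunes it.
console.log(combinePointer({ x: 200, y: 150 }, { x: 10, y: -20 }));
// { x: 201, y: 148 }
```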
  19. capn_midnight

    WebRTC device syncing

    Big update today. When you use your PC in conjunction with your Smartphone, the two devices communicate over your local network, rather than taking a round trip through the game server. This reduces latency: when you move your mouse, the display updates almost immediately.
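The local-network path described above can be sketched with a WebRTC data channel. The signaling step (exchanging the offer/answer through the server) is omitted, and the message format is hypothetical, not Psychologist.js's own protocol; the browser-only wiring is guarded so the encode/decode logic loads anywhere.

```javascript
// Once a data channel is open between PC and phone, mouse state goes
// peer-to-peer instead of round-tripping through the game server.
function encodeMouse(x, y) {
  return JSON.stringify({ t: "mouse", x: x, y: y });
}

function decodeMouse(msg) {
  var m = JSON.parse(msg);
  return m.t === "mouse" ? { x: m.x, y: m.y } : null;
}

if (typeof window !== "undefined") {
  var pc = new RTCPeerConnection();
  var channel = pc.createDataChannel("input", {
    ordered: false,   // stale mouse positions may be dropped...
    maxRetransmits: 0 // ...so don't waste time resending them
  });
  channel.onopen = function () {
    window.addEventListener("mousemove", function (evt) {
      channel.send(encodeMouse(evt.clientX, evt.clientY));
    });
  };
  // On the phone side: channel.onmessage applies decodeMouse(evt.data)
}
```

Marking the channel unordered with zero retransmits trades reliability for latency, which suits continuous input like mouse position where only the newest sample matters.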
  20. capn_midnight

    Week of Awesome 2 - The Toys are Alive - #1

    So yeah, turns out I'm working on a VR game and that took priority over this.
  21. So Week of Awesome 2 starts on the day I have a half-day of job interviews and a demo of a product I'm working on. From 10:30am to about midnight tonight, I'm either driving somewhere or trying to present myself as awesome and totally not a slob at all. But tomorrow, I should be free for the rest of the week.

    The theme is "The Toys are Alive". A title comes to mind, "Adult Toy Story". Nah, that's too obvious.

    The key to a good game competition entry is to have a simple concept, executed well, completed early, to which "polish" is added with the remaining time. Yes, polish is a positive addition of things, like sprinkles on a cake. Also, do not neglect sound. Even the most basic sound instantly increases the quality of a submission 10-fold. One should stick to things one knows well. Venturing into new territory is a good way to get "stuck in the weeds" and fail to complete a submission. With that in mind, I should probably be making a business intelligence suite, perhaps a series of reports demonstrating the effectiveness of a hospital full of toys. "Toy Hospital Administrator Simulator". I'm sure to win[1]!

    Since I still don't have a real concept in mind after half an hour of typing, I'm going to continue to fill space with useless junk about strategy.

    Ideas:
    • HTML5/JS game. I'm going to reuse my growing project template for doing HTML5 web apps. It's all just boilerplate stuff for loading bars, rounded corners on buttons, that sort of junk. Other people might use something like Angular or Bootstrap, but I am too old for that shit.
    • 2D graphics. I'm still learning Three.js in my WebGL Virtual Reality[2] project, so I'm going to stick to what I know and do Canvas 2D instead of WebGL 3D.
    • 3D audio. It's just so easy to do in modern browsers, why the hell not?
    • Mix of procedural and static content: maybe a static level-select map and procedural levels.

    Issues:
    • Figure out a proper concept to meet the theme.
    • I don't currently have a good system for easily specifying and running keyframed 2D animations. Will have to figure something out here. Might be where I spend most of my time on this project.
    • I have simple boilerplate code for doing multiplayer stuff, but I feel like only a really good concept should have it. To add it as a gimmick would be detrimental.

    Okay, that is all for now. I'll update this post here if I think of any concepts.

    [1] Only if there is a "biggest Poindexter" award.
    [2] Check it out. Star it. Follow it. Fork it and make contributions. Ask nicely and get direct commit access!
  22. capn_midnight

    Week of Awesome 2 - The Toys are Alive - #1

    Thanks everyone. The interview went well. I'm just not sure I want to move to the left coast, no matter how big and awesome the company might be, or the fact that they contacted me directly.

    So I'm thinking a really neat game would be a variation on this Hospital idea, and the facetious "sex toy" idea. If the toys are alive, then the toys fuck! And what happens when people fuck? THEY MAKE BABIES! The game is (probably, maybe, maybe not) going to be a breeding game, where features of toys get shared and tested against children in focus groups. How well they test relates to how well they will do in the market, leading to better income and maybe even access to breed with "stud" toys.
  23. idea: open source VR UI toolkit