Working late past midnight...

Primrose - WebGL VR framework

Posted 06 July 2015 · 254 views

In this video, I demonstrate using the Primrose text editor to live-edit the world around me.

Announcing Psychologist.js, a RAD HTML5 VR framework

Posted 24 October 2014 · 584 views
Tags: vr, virtual reality, oculus rift
I've been writing a little about this project for a while now, and I've finally decided on a name.

Psychologist.js is a framework for rapidly prototyping virtual reality applications using standard HTML5 technologies. It keeps you sane while bending your mind.

You can view a demo of the framework in action here.

You can access the repository on GitHub here. Features include:

  • Google Cardboard compatible: use your smartphone as a head-mounted display,
  • Multiplayer: share the experience and collaborate in cyberspace,
  • Leap Motion support: control objects with natural movement,
  • Game pad support: create fast-action games,
  • Speech recognition: hands free interactions,
  • Peer-2-peer input sharing: use devices connected to your PC with your Google Cardboard,
  • 3D Audio: create fully immersive environments,
  • App Cache support: save bandwidth,
  • Blender-based workflow: no proprietary tools to learn,
  • Cross-platform: works in Mozilla Firefox and Google Chrome on Windows, Linux, and Android,
  • Oculus Rift support (coming soon): the cutting edge of head-mounted display technology.

HTML5 audio for games made easy

Posted 20 October 2014 · 2,327 views
Tags: html5, javascript, audio
Audio in the browser is deceptively tetchy. It's easy to get a basic sound to play with the <audio> tag.
<audio id="myAudioThinger" controls="controls" preload="auto" autoplay="true">
    <source src="your-sound.mp3"></source>
    <source src="sound-in-alternative-format.ogg"></source>
    If you can read this, your browser is not fully HTML5 compatible.
But there are several problems with this:
  • First of all, good luck navigating this compatibility chart. Until Mozilla finally caved and decided to support MP3, there was no single file format supporting Chrome, Firefox, Safari, and IE. Opera is still a painful holdout on file formats, and the situation on mobile is disgusting.
  • You can't programmatically trigger audio on mobile devices without a direct user action. You basically have to put up a "start game" button that tricks the user into playing a silent audio file; after that, you have free rein to trigger that particular audio element (see the sketch below).
  • You get almost no control over how the file plays. There is a volume setting, but it hasn't always been reliable on all platforms. There's no mixing. There are no effects.
  • It's super difficult to rewind and replay an audio file. Honestly, I still don't really know how to do it correctly, and I'm a goddamn salty pirate.
In short, the <audio> tag is for one thing and one thing only: for NPR to post their podcasts directly on their site.
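
For reference, here's a minimal sketch of that unlock hack, using the Web Audio API (more on that in a second): play a silent buffer from inside a user-gesture handler, and the browser will let you trigger sounds programmatically afterward.

var ctx = new (window.AudioContext || window.webkitAudioContext)();
window.addEventListener("touchstart", function unlock(){
    // play a single, silent sample inside the gesture handler; this
    // convinces mobile browsers to allow later, programmatic playback
    var buffer = ctx.createBuffer(1, 1, 22050),
        source = ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(ctx.destination);
    source.start(0);
    window.removeEventListener("touchstart", unlock);
});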

Okay, let's get out of this malarkey. What else do we have? Well, there's the Web Audio API:
  • Granted, formats still aren't great. Strangely, the compatibilities don't exactly match the <audio> tag. But MP3 is universally there, as is AAC. And technically, you could write a decoder if you wanted. I wouldn't suggest it, but it is possible.
  • You can play audio whenever you want, as many times as you want, on desktop and mobile, without buggering around with stupid hacks.
  • It's a fairly well-featured signal processing system. That's great if you know what you're doing, murder if you don't.
It's a little difficult to program, though. And the MDN tutorial dives far too deep into crazy effects for my needs, when all I want to do is make a few blips, bloops, and gunshot sounds.
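
To give you an idea of the boilerplate, here's roughly what a bare blip looks like in raw Web Audio, no library (a sketch; the frequency and envelope numbers are arbitrary):

var ctx = new (window.AudioContext || window.webkitAudioContext)();

function blip(){
    // a 440Hz tone with a quick exponential fade-out
    var osc = ctx.createOscillator(),
        gain = ctx.createGain();
    osc.frequency.value = 440;
    gain.gain.setValueAtTime(0.25, ctx.currentTime);
    gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + 0.2);
    osc.connect(gain);
    gain.connect(ctx.destination);
    osc.start();
    osc.stop(ctx.currentTime + 0.2);
}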

That's why I wrote this: Audio3DOutput.js. Here's what you do:
// to start, create the audio context
var audio = new Audio3DOutput();

// then, check if your system supports it
// (property name assumed here; check the library source for the exact flag)
if(audio.isAvailable){
    // if you want to play a specific sound file every time a user clicks a mouse button:
    audio.loadBuffer("click.mp3", null, function(buffer){
        window.addEventListener("mousedown", function(evt){
            audio.playBufferImmediate(buffer, 0.25); // 25% volume gain
        });
    });

    // if you want progress notifications while the audio is loading and processing:
    audio.loadFixedSound("song.mp3", /* looping */ true, function(op, file, numBytes){
        console.log(op, file, numBytes);
    }, function(snd){
        // the fully-loaded sound arrives here
    });

    // if you want to position the sound in 3D space:
    var sourceX = 10, sourceY = -4, sourceZ = 3;
    audio.load3DSound("ambient-sound.mp3", true, sourceX, sourceY, sourceZ, null, function(snd){
        setTimeout(moveListener, 5000); // 5 seconds
    });

    function moveListener(){
        // x, y, z, etc. would come from your scene's camera each frame
        audio.setPosition(x, y, z);
        audio.setVelocity(vx, vy, vz);
        audio.setOrientation( // method name assumed, mirroring the Web Audio listener API
            ox, oy, oz,
            upx, upy, upz);
    }

    // if you want to take the first file of a list that successfully loads:
    audio.loadFixedSoundCascadeSrcList(["song.aac", "song.ogg", "song.mp3"], null, function(snd){
        // snd is built from the first format the browser could decode
    });

    // or if you want to synthesize the raw PCM data yourself:

    // in monaural
    var data = [], seconds = 0.25;
    for(var i = 0; i <= audio.sampleRate * seconds; ++i){
        data.push((Math.sin(i / 10) + 1) * 0.5);
    }
    audio.createRawSound([data], function(buffer){
        window.addEventListener("keydown", function(evt){
            if(evt.keyCode === 81){ // the Q key
                audio.playBufferImmediate(buffer, 0.05);
            }
        });
    });

    // in stereo
    var left = [], right = [];
    for(var i = 0; i <= audio.sampleRate * seconds; ++i){
        left.push((Math.sin(i / 10) + 1) * 0.5);
        right.push((Math.sin(i / 20) + 1) * 0.5);
    }
    audio.createRawSound([left, right], function(buffer){
        window.addEventListener("keydown", function(evt){
            if(evt.keyCode === 81){ // the Q key
                audio.playBufferImmediate(buffer, 0.05);
            }
        });
    });
}
There you go. If you find a need for more than these basic functions, please drop me a line and let's discuss adding them!

Why I turned down a great job offer.

Posted 20 October 2014 · 504 views

I recently received a very generous job offer from a rather prestigious company. I didn't even apply; they contacted me through LinkedIn. To say that I was honored to receive a cold-contact from such a company is an understatement. "Flabbergasted" is a much more appropriate term.

The salary was great. There was an additional cash bonus of approximately "holy crap" dollars. There was also a stock grant of even more "are you freaking serious" dollars. The projects sounded right up my alley. And the managers sounded like good people. All around, it sounded great.

But I had to say no, for two specific reasons completely unrelated to compensation packages.

I'm an east-coast guy and they wanted me to move to the left coast.

My family is here and my wife's family is here. We specifically live in a place that is convenient for seeing our families on a regular basis. We had considered the possibility of moving, if the job presented a clear opportunity for significant career advancement. But we'd also like to have kids soon, and that's going to peg us even harder to within a few hours' drive of where the grandmas and grandpas live.

Also, I've spent the last two, almost three, years working as a freelancer. The term "free-lance" comes from the great Scottish author Sir Walter Scott, referring to a sort of medieval mercenary whose "lance" was free of sworn allegiance to any feudal lord. I've been incredibly productive during that time. The corporate desire to have people "on site" grows more alien to me every day. I know what work I'm capable of, and I think being self-directed and independent has markedly improved my output. To be asked to go to a specific place to do work, in an era when telecommuting is well established, feels like being asked to intentionally hobble myself for nothing more than someone else's convenience. I think the work is more important than that.

I'm not done with my current path.

I started freelancing for a reason. I was dissatisfied with my work-life relationship, and I hoped I could one day create the sort of company that I have always wanted to work for. Freelancing is not an end in itself, but it is hopefully a means. The flexibility it affords is much closer to the ideal work life I envisioned for myself than anything I've encountered before. I'm now able to work on my own R&D projects in addition to the freelancing, with a focus and effort for which I never had adequate time while working as a 9-to-5 stiff. To take the job would be to give up on those plans, just as they are starting to show promise.

I take the mercenary notion of freelancing very seriously. I operate by my own ethic, one that places doing the right thing and doing the most important things above doing what I'm told. When a client hires me, they don't just buy my time, my tapping on a keyboard at whatever they want. They buy my opinions and my taste regarding how work should be organized. Sometimes that can come across as defiance, but I do it out of respect for their needs as I see them, not as they are expressed in the heat of the moment.

Freelancing is a system that explicitly maintains that, at the end of the day, I own my own labor. It is the nature of corporate non-compete and non-disclosure agreements to capture and monopolize my labor as much as possible, for as little compensation as possible (indeed, why would a contractual agreement be necessary if the compensation were enough?). And to make "my" company, I need to own my labor. While the NDA and non-compete weren't in themselves a major factor in turning down the job, the prospect of what they meant for my personal projects certainly helped the decision along. They would essentially have meant cancelling most of my projects. Their offer, while generous, was not quite that compelling.

I just couldn't do it.

Throughout the interviewing process, I had this voice in the back of my head chiding me: "It's a great job; you don't turn down such a good job." I'm sure working for this company would have been very rewarding. But I don't want a "job". I think I can do more. And I think I owe it to everyone involved to do so.

VR Lessons Learned So Far

Posted 16 October 2014 · 2,490 views

This is a loosely organized list of things I've noticed while using and developing virtual reality applications for the smartphone-in-headset form factor. It is specific to my experience and may not reflect anyone else's preferences; VR is apparently quite dependent on personal preference. But I think steadfast design rules are necessary for the impending VR bubble, to establish a unified aesthetic so that users can come to expect certain common idioms and adapt to VR software quickly. Thus, this list is a roadmap of my current aesthetic for VR. It is a living document, in the sense that future experiences may invalidate assumptions I have made and force me to recognize more universal truths. Proceed with caution.
  • Presence is the ability to feel like you are in the scene, not just viewing a special screen. You'll hear a lot of people talk about it, and it is important, but ultimately I believe it to be a descriptor of an end result, a combination of elements done well. There is no one thing that makes "presence", just as there is no one thing that makes an application "intuitive", "user friendly", "elegant", or "beautiful". They either are or they are not, and it's up to the individual experiences of the users to determine it.
  • Presence is a double-edged sword. I've found that, once I feel "present" in the application, I also feel alone, almost a "ghost town" feeling. Even if the app has a single-user purpose, it seems like it would be better in an "arcade" sort of setting. To be able to see other people may help with presence.
  • The hardware is not yet ready for the mass market. That's good, actually, because the software and design side of things are a lot worse off. Now is the time to get into VR development. I'll say nothing more about the hardware issues from a performance side. They are well known, and being worked on fervently by people with far more resources than I.
  • Mixing 2D and 3D elements is a no-go. Others have talked about not placing fixed-screen-space 2D heads-up-display elements in the view for video game applications, but it extends much further than that. The problem is two-fold: we currently have to take off the display to do simple things involving any sort of user input, and there is no way to manage separate application windows. We're a long way off from getting this one right. For now, we'll have to settle for being consistent in a single app on its own. A good start would be to build a form API that uses three-space objects to represent its controls.
  • Give the user an avatar. This may be a personal preference, but when I look down, I want to see a body. It doesn't have to be my body, it just needs something there. Floating in the air gives me no sense of how tall I stand, which in turn gives me no sense of how far away everything is.
  • Match the avatar to the UI, and vice versa. If your application involves a character running around, then encourage the user to stand and design around gamepads. If you must have a user sit at a keyboard, then create a didactic explanation for the restriction of their movement: put them in a vehicle.
  • Gesture control may finally be useful. I'm still researching this issue, but the experiments I've done so far have indicated that the ability to move the view freely and see depth make gestures significantly easier to execute than they have been with 2D displays. I am anxious to finish soldering together a device for performing arm gestures and test this more thoroughly. This demo makes it clear that this is at least an extremely lucrative path of study.
  • Use all of the depth cues. Binocular vision is not the only one. Place familiar objects with well-known sizes in the scene. Use fog/haze and a hue shift towards blue at further distances (see the sketch after this list). But most importantly, do not give the user long view distances. Restrict them with blind corners instead. Binocular vision is only good for a few feet before the other depth cues become more important, and we are not yet capable of making a convincing experience without the binocular cue.
  • Object believability has more to do with textures and shading than polygon count. Save on polygon count in favor of more detailed textures and smooth shading.
  • Frame rate is important. I remember being perfectly happy with 30FPS on games 10 years ago. That's not going to cut it anymore. You have to hit 60FPS, at least. Oculus Rift is targeting 75FPS. I'm sure that is a good goal. Make sure you're designing your content and algorithms to maintain this benchmark.
  • Use lots of non-repetitive textures. Flat colors give your eyes nothing to "catch" on to form the stereo image. The design of these viewer devices is such that the eyes must fight their natural focus angle to see things in the display correctly. It will be easier on the user if you make it as hard as possible to focus anywhere but on object surfaces. Repetitive textures are only slightly better than flat colors, as they offer a chance to focus at the wrong angle yet still fuse the image, producing what is known as the "wallpaper effect". And do not place smaller objects in any sort of pattern with regular spacing.
  • Support as many different application interactions as possible. If the user has a keyboard hooked up, let them use the keyboard. If they have a gamepad, let them use the gamepad. If the user wants to use the app on their desktop with a regular 2D display, let them. Do not presume to know how the user will interact with the application. This early in development, not everyone will have all of the same hardware. Even into the future, it will be unlikely that an app will be successfully monetizable with a user base solely centered on those who have all of the requisite hardware to have a full VR experience. Be maximally accessible.
  • Make the application useful. This seems like it shouldn't be said, but ask yourself what would happen if you were to rip out the "VR" aspect of the application and have people use it with traditional IO elements. Treat the VR aspect of it as tertiary. Presence by its very definition means forgetting about the artifice of the experience. If the experience is defined by its VR nature, then it is actively destroying presence by reveling in artifice.
  • Much research needs to be done on user input, especially for large amounts of text. Typing on a keyboard is still the gold standard of text entry, but tying the user to the keyboard does not make for the best experience, and reacquiring a spatial reference to the keyboard after putting the headset on and moving away is nearly impossible. Too often, I find myself reaching behind me in completely the wrong direction.
  • 3D Audio is essential. We could mostly get away without audio in 2D application development, but in VR it is a significant component to sensing orientation and achieving presence. I believe it works by giving us a reference to fixed points in space that can always be sensed, even if they are not in view. Because you always hear the audio, you never lose the frame of reference.
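
On the fog/haze point above, here's a minimal sketch of the idea, assuming a three.js scene (three.js is my assumption here, not a requirement):

var scene = new THREE.Scene(),
    renderer = new THREE.WebGLRenderer();

// linear fog: geometry fades toward the fog color between the near and far distances,
// giving a built-in depth cue and a blue-ish hue shift at range
scene.fog = new THREE.Fog(0xa0c0e0, 10, 75); // color, near, far

// match the clear color so fogged objects blend seamlessly into the background
renderer.setClearColor(scene.fog.color);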

I may add to this later.
