Cromfel
  1. Here is a video where we demonstrated an Augmented Reality system utilizing 2 of the modalities. It is a little rough and "oily" because it was not made for promotion but to show the technical side of things. But it should give some idea of how AR can help.   http://www.youtube.com/watch?v=74iRV1QjWCA
  2. There is a quite interesting AR application available (given its quite early stage of development) that uses GPS to guide students around a city to perform educational tasks.   http://www.youtube.com/watch?v=Ds-t3TUidOo   This is definitely the right direction for combining the real world with the tasks.
  3. It is good to try to wrap your head around these things. You always figure out something new when you iterate around some concepts.

What you say about games utilizing these modalities: that's very true. A big problem has been the display paradigm, which has derailed the thinking of many individuals. Actually it is rather funny in hindsight to look back and think what a dramatic effect our display devices have had on shaping our perception and expectations of visual simulation, and for example of games from an entertainment point of view. What Oculus VR is doing is quite remarkable: "Why do we have to see this world as a box?" I feel almost sorry for all those companies who have been in the HMD business and who have been stuck on the boxed-view-to-virtual-worlds paradigm. Ok, it's easy to say now. But let's say half of the problem was compatibility issues and legacy support, and that no one wanted to take the risk; yet this very issue could have been what prevented VR from taking off earlier, considering the money burned (even how much Sony burned on their personal "video viewer"). Palmer Luckey literally thought outside of the box ;)

What we earlier discussed about Battlefield 3 and the in-game AR in the form of minimaps and visual indicators of your teammates' positions: a lot of the stuff we see in games could be directly adopted to real-life AR (given that we had a sufficient technological level). I think there is a huge number of different well-honed application "templates", if you will, that we can adopt just by evaluating 3D games and how problems have been solved there, specifically when it comes to navigation support. I can't emphasize enough the importance of being critical about what you want to achieve with the technology in respect to AR.

Take for example this picture: you see a representation of all 3 modalities on the same screen.
As soon as we throw all this onto such a display device as the Oculus Rift, the whole illusion of the user interface being somehow tied to the boxed world-view experience is shattered. Suddenly Google Glass makes perfect sense too: one style of AR is localized according to the eyes, while the other is localized according to the head or the rest of the body (kinesthetic mental model).

Modality 1 - Concrete reasoning in the form of Caudell-style AR, spatializing indicators of friends passing by. You were not perfectly sure if it was your schoolmate from 10 years ago, but as your smartphone pulled his name and visualized it through an optical see-through HMD as a floating name around his head, you were sure it was John Snow.

Modality 2 - Abstract reasoning in the form of HUD-type AR (Google Glass) showing your vital state: for example your heartbeat when you are jogging, your blood sugar level for someone with diabetes, or just contextualized information about your location and whatever may interest you, such as the history of the specific street you walk.

Modality 3 - Transformational reasoning, utilizing GPS to show a dynamic minimap with a view of the city from above you, giving you situational awareness of nearby vehicles. Also showing red boxes on nearby parking places that have no room while blue ones have slots left, combining abstract and concrete reasoning to make the sensory data compatible with you.

So as you said, the big challenge is Modality 3, since Modalities 1 and 2 are mostly for distinguishing between information for the purpose of communication and that of tacit knowledge. The challenge is how to transform the vast amount of data we have available through our gadgets, networks and servers and utilize it for something meaningful, and how that information is transformed to Modality 1 or Modality 2 for maximum impact and usability.
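The Modality 3 minimap boils down to a coordinate transform: nearby vehicles and parking places arrive as latitude/longitude, and the display needs them as pixel offsets around the user. A minimal sketch of that transform (the flat-Earth approximation is standard for short ranges; the function and parameter names are my own illustration, not from any particular AR toolkit):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def geo_to_minimap(user_lat, user_lon, target_lat, target_lon,
                   map_size_px=200, map_range_m=500):
    """Project a nearby geo-coordinate onto a top-down minimap
    centered on the user, using a flat-Earth (equirectangular)
    approximation that is fine over a few hundred meters."""
    d_lat = math.radians(target_lat - user_lat)
    d_lon = math.radians(target_lon - user_lon)
    # meters north/east of the user
    north_m = d_lat * EARTH_RADIUS_M
    east_m = d_lon * EARTH_RADIUS_M * math.cos(math.radians(user_lat))
    # meters -> pixels, origin at minimap center, north pointing up
    scale = (map_size_px / 2) / map_range_m
    x = map_size_px / 2 + east_m * scale
    y = map_size_px / 2 - north_m * scale
    return x, y

# A vehicle 500 m due east of the user lands on the right edge of the map:
east_offset = math.degrees(500 / (EARTH_RADIUS_M * math.cos(math.radians(60.0))))
x, y = geo_to_minimap(60.0, 24.0, 60.0, 24.0 + east_offset)
```

The same pixel coordinates can then drive the red/blue parking-slot boxes described above; only the icon choice differs per data source.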
Again, the ongoing fight of "this is AR, that's not AR" that many people are conducting is, in my humble opinion, totally counterproductive. The arguments on either side just do not hold when you take a slightly wider view of the feedback you are trying to provide to a human being, no matter what the context, the application or the purpose. (Additionally, one should remember that outside of how we are used to perceiving and understanding this world, Modality 3 can greatly help us understand phenomena and concepts that could previously be understood only by reading and having a deep theoretical background.)

One good thinking exercise is handy to perform: imagine a situation where you are fully immersed into a virtual world through all your senses, the state the people in the Matrix movie are in. What happens to "VR" or "AR" in that situation from the classical boundary point of view? All the concepts fall apart. Just as I gave the example of using VR to prototype Caudell-type AR applications: one can substitute reality with virtual sensory stimuli, or augment something new onto reality. But at the point when the user is not able to distinguish a road sign that is real from a road sign that is merely overlaid with a translated writing, from the user's point of view both are real. One is merely a more dynamic technology than the other. Both serve the same purpose: A, to indicate where you should go, given the sign points to the desired location; and B, to communicate the abstract term attributed to the given place. Mind-bending? Confusing? From the traditional AR point of view, yes.
  4. This is indeed one of the plagues haunting AR, on top of the exploitation of camera point-of-view content shown in YouTube videos of marker-based tracking with some 3D object floating. This is how people would first of all like to experience things. Per the framework, for concrete reasoning, when the POV is correct the content looks very appealing for obvious reasons. But it is not how you experience it in reality unless you have a proper HMD with video see-through. And even then the added value is nonexistent unless the use case is something like evaluating visual appearance at a scale suitable for such visualization. And exactly as you said, this very thing could be implemented using other sensors too. Tracking is tracking, and whether you use image-based tracking or something else is totally detached from the AR experience. This is also one of the things people don't distinguish: AR tracking using images is a separate thing from using a video overlay, even if they happen to come in the same package. One can use magnetic tracking for position and orientation and still use video see-through, to give an obvious example.

This is what should be done more. And it is not a simple task to find a good way to categorize what exactly makes a specific case valuable whereas exactly the same application is completely useless in some other case. For me the experience of utilizing VR, as an example, boiled down to the fact that the cases where it was most useful were when we had a multidisciplinary group of people with various backgrounds who needed to discuss a new design of machines such as mining loaders. When I tried to narrow down what made this specific setting useful, it was the fact that the concrete experience the drivers of such machines had only became relevant when they could communicate about the setting where they would be working, but now in a controlled VR experience.
The same phenomenon was visible where much of the VR utilization was in sales: the most important thing in that situation lay outside the abstract reasoning and product specification, and was actually to be sitting inside the machine you are going to buy, evaluating how it would feel to be inside and do a specific job task.

Now when it comes to the usual AR with floating 3D objects, the only differentiating factor was the utilization of the physical world in conjunction, for a specific purpose. It can be, for example, that the machine design is at such a phase that the physical cabin construct is already being built, and only the dashboard and the final button / UI screen layout are being finalized. There it is more useful to utilize the physical structure to provide a somatosensory experience (some would call it haptic) and to visualize through AR the alternative button layouts and screen configurations, where the physical presence in the cabin helps you get a feeling of the space... Now, from which sense's point of view is this AR? Again this becomes an interesting question when we get rid of the traditional AR view and do not focus only on the visual feedback :)

This all will become much clearer to you when you watch the lecture video. Many of these things, such as using a 500 000 $ haptic supermegahyperglove with your VR visualization, become solved when you just grab a replica of the object being manipulated. My point being: model and simulate only when it is necessary, never just because of coolness or a "because I can" setting. This seems very obvious in hindsight... but not so obvious when you go and see some very expensive haptic demo being used for a completely irrelevant purpose.

I am looking forward to getting some feedback on the framework. It is something I have been coining for some years, but only now got some breathing room from work to put something down in the form of a presentation. Not even to think about any kind of publication yet.
I already see good insight in what you have been saying, and much of it reflects the experience I have had over the years. And it always amazes me how little attention people pay to objectively analyzing the situation and squeezing out what is valuable versus what is just cool.

This also becomes somehow bizarre if you adopt the framework; I mean in a good way. It takes away the mystery many people are trying to build around AR. It does not make AR irrelevant, null or void. Quite the opposite, for me at least: it removes a lot of unnecessary noise and brings clarity where you can focus on the essential. But time will show whether other people perceive the situation in the same manner.
  5. That's how you can interpret the situation if you perceive AR from its legacy point of view. My opinion is that a question such as "Is Google Glass AR or not" is simply the wrong question. I would appreciate it if you took the time to watch the YouTube lecture. It looks long, but I encourage you to watch it.

In the classical sense, where you can pose such questions, you see Augmented Reality in every game already. The user interface is contextualized information; a perfect example is TF2 with the Oculus Rift and how "AR-ish" the UI actually is (the UI elements in such a case are of the Google Glass type and fall under abstract reasoning). Or take Battlefield 3 and start to analyze what is going on when you play: your squad members are overlaid with graphical indicators that are spatially correct and give you enhanced awareness (this is concrete-reasoning AR, where the spatial and temporal parts of your brain are tapped into; even if the name tags, for example, are abstract, the main added value is the spatial reasoning). That is Augmented Reality in a virtual environment right there, in two different games representing 2 of the modalities.

This very thing allows me, for example, to use Virtual Reality to prototype Augmented Reality applications without developing any real technology. That's right: for our industrial VR/AR applications we use virtual reality to prototype real systems, for example products or even virtual reality systems themselves (for example, using an HMD to evaluate powerwall / cave setups), and we use VR to prototype any kind of AR application you can imagine without yet bothering with technical development and risking that your application is not actually useful. Mind-boggling? No, not really. People just have a technology-oriented fascination and hype around AR without bothering to understand too much of what is actually going on with VR and AR.
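The spatially correct squad indicators mentioned above are, at bottom, a pinhole projection: take the teammate's 3D position in camera space and map it to the 2D screen coordinates where the marker is drawn. A minimal sketch, assuming a symmetric vertical field of view (the function and parameter names are my own, not from any engine):

```python
import math

def project_to_screen(point_cam, screen_w=1920, screen_h=1080,
                      fov_y_deg=60.0):
    """Project a point in camera space (x right, y up, z forward,
    meters) to pixel coordinates. Returns None when the point is
    behind the camera, so the HUD can instead draw an edge-of-screen
    arrow toward the teammate."""
    x, y, z = point_cam
    if z <= 0.0:
        return None
    # focal length in pixels from the vertical field of view
    f = (screen_h / 2) / math.tan(math.radians(fov_y_deg) / 2)
    sx = screen_w / 2 + f * x / z
    sy = screen_h / 2 - f * y / z   # screen y grows downward
    return sx, sy

# A teammate straight ahead projects to the screen center:
center = project_to_screen((0.0, 0.0, 10.0))  # -> (960.0, 540.0)
```

Whether the projected point feeds a name tag over a real person (Modality 1) or over a virtual squad member is exactly the interchangeability argued for above; only the source of the 3D position changes.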
If you want to create an augmented reality game, it means that the player's perception of reality is augmented with synthetic stimuli for the purpose of entertainment, where this stimulus is of a gaming nature: solving some kind of puzzle, competing for a score, etc. I suppose you are augmenting visual perception. Now, what kind of game do you want the player to experience?

Let's say you wanted to create an augmented reality MUD. Google Glass would be sufficient for that. You capture the basic components of what makes a good MUD and incorporate physical presence at a specific location into your gameplay; incorporate Google Maps into it, or whatever, and make the action a plain text-based adventure. The future of LARP? Anyway, that is tapping into your abstract reasoning, whereas your physical presence at a specific spot gives you context: the game content is purely based on your abstract reasoning while being contextualized with concrete reasoning. No, I'm not any kind of MUD expert, just using it as an example to widen the mindset.

Now, suppose we stick with so-called classic AR, overlaying graphical content such as 3D objects, say monsters, onto the real world. One needs a device that has optical see-through (very challenging) or video see-through. Google Glass or Meta 1 and such devices will not do; they are still too primitive for the task. Say you had an Oculus Rift and 2 HD cameras that could provide you with video see-through. Then you combine it with some tracking solution like ALVAR from VTT, or Metaio, pick whatever. Now you can start to overlay graphical content onto your field of view as if there were monsters crawling, etc. That is pure concrete reasoning right there, while you are running away from the zombies that haunt you on the streets. Watch out for reality while you are at it. Don't be fooled by the nice technology demos you see, like Meta 1 etc. They are appealing only to the audience, not to the user themselves.
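The AR MUD idea needs only one technical primitive: detecting that the player has physically arrived at a game location. A minimal sketch using the standard haversine great-circle distance (the "room" data and names are a hypothetical illustration, not from any real game):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical "rooms" of the text adventure, anchored to real spots.
ROOMS = [
    {"name": "Old Library", "lat": 60.1699, "lon": 24.9384, "radius_m": 30},
    {"name": "Harbor Gate", "lat": 60.1675, "lon": 24.9560, "radius_m": 30},
]

def current_room(player_lat, player_lon):
    """Return the room whose trigger radius contains the player, if any."""
    for room in ROOMS:
        if haversine_m(player_lat, player_lon,
                       room["lat"], room["lon"]) <= room["radius_m"]:
            return room["name"]
    return None
```

Everything on top of that, the room descriptions, puzzles and scoring, is ordinary MUD content; the GPS check is the only part that makes it "augmented".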
They use optical see-through without any benefit from the see-through capability. It is actually even counterintuitive due to technology limitations. It is exactly the same effect that allowed Johnny Lee to demonstrate head tracking: tapping into your concrete reasoning even when the medium is a normal display. For the actual user, the AR in those demo videos is not actually delivered; that is still at the dreaming stage.

What I want you to understand is that Augmented Reality as a technology will not give you many cues about what kind of game is fun, meaningful or engaging. First you must envision the game and what kind of gameplay you want to achieve. Then you go and pick the relevant technology to implement it, and be totally objective about what you actually want to achieve with your technology. Some games could be just fine with a smartphone providing the graphics overlay while the GPS and other sensors provide contextualization, in case you plan to make some geo-tag game. Some games could actually overlay your visual sense in an Oculus Rift kind of way so that you feel immersed among the monsters.

That is the downside of VR & AR at large: they allow you to do everything. Yes, I mean it, everything you can imagine. But the technology will not make up for a lack of meaningful content. Understand your medium, pick your target audience, and then develop a game that respects the medium and the audience. This is, for example, the reason why the "true VR" claim of the Oculus Rift falls short: the wide field of view is not any more or less VR than some old z800 HMD, and for a given application the z800 is perfectly fine. It is not about technology when it comes to VR/AR. Sure, good tech widens your scope and gives you a higher probability of succeeding with your application. (Don't get me wrong, I love the Rift, it's awesome!) I feel almost dirty trying to make people see the lecture and try to understand it; maybe over time it will happen.
It may all be confusing at large, but I promise you will appreciate understanding these things when you actually want to use AR outside of the dreaming and hype scenarios and are trying to figure out what exactly the added value of AR is in your game.

Historical reasons are what they are, and they derail a lot of attention away from us, humans, and our perception of reality. Technology is the enabler for us to do all those fancy and nice tricks, but it is nothing but technology that we try to harness for specific purposes. When it boils down to designing systems that deliver, you ought to understand for what purpose you use which set of technologies. It is not easy for sure; been there, done that, failed many times among a few successful systems. We are only starting to get a slight understanding of what AR actually is, no matter how many experts you see telling you, with perfect confidence, about head-mounted displays and floating 3D objects etc. I make the bold claim that these people simply don't know the full width of what they are dealing with ;)
  6. Let me try to give it a shot on the AR...

Lecture - What is Augmented Reality and why did it emerge?

Augmented Reality is a quite misunderstood concept, IMHO. I have been involved with the design and implementation of quite many different kinds of VR and AR systems over the years, and I have always felt uncomfortable with how much emphasis people put on the tech instead of the purpose the tech is used for.

One big problem for me personally was that no one seemed to be able to explain why VR/AR are useful. Intuitively everyone has such fascination and awe, but when it came to explaining things, everyone went mute. Hence I started to try to answer that question myself. This is only my personal point of view on things, more of a hypothesis built over some 5+ years of empirical observations. On that basis I gave a lecture on Augmented Reality, introducing a framework for how I see VR/AR when developing different kinds of systems, mostly for industry. In a more general sense the principles of this framework are also applicable to whatever kind of application you develop, even VR/AR games and where you put the challenges or push your player to the limits: for example Cymatic Bruce's experience in Portal 2 when jumping off the cliff to grab a box, or the use of Google Glass in comparison to Meta 1 for some GPS-based adventure game or educational app. I hope this lecture provides some structure when you try to understand what makes VR/AR games or applications fun and engaging, or, from a serious-games point of view, what makes them useful (since you can do basically everything, but that does not warrant value for industry). At the recent Augmented World Expo 2013, many of the key players in the AR field debated whether Google Glass is an AR solution at all, or if Meta 1 is the true AR system. This debate also revolved around the work conducted by Steve Mann on perceptual enhancements.
And I hope that if you see my lecture, it becomes obvious that this starts to look like an apples vs. oranges kind of debate, hence not very constructive or useful arguments. Please feel free to throw feedback my way. As I say in the video description, this should also stir some discussion within the AR community about what we are trying to achieve with our technology. - Sauli