Digesting the aspects of this subject and this conversation is going to take me a little while. I watched the video of your lecture twice. Most of it I have read or heard from various sources, but I have never experienced a presentation like that on the related issues. It occurred to me that many of the most popular video games utilized all of the modalities, while some games with amazing visual appeal failed to become popular because they were lacking in one or more of them. I agree that the evolution of AR depends on developers being conscious of the modalities and designing future AR games accordingly. The challenge of getting beyond the 1st and 2nd modalities is obvious with AR.
I need to study this subject more.
Thank you for your work in this area.
It is good to try to wrap your head around these things. You always figure out something new when you iterate around some concepts.
What you say about games utilizing these modalities is very true. A big problem has been the display paradigm, which has derailed the thinking of many individuals. Actually, it is rather funny in hindsight to look back and think what a dramatic effect our display devices have had on shaping our perception and expectations of visual simulation, and of games from an entertainment point of view, for example. What Oculus VR is doing is quite remarkable: "Why do we have to see this world as a box?" I feel almost sorry for all those companies who have been in the HMD business and who have been stuck on the boxed-view-to-virtual-worlds paradigm. OK, it's easy to say now. But let's say half of the problem was compatibility issues and legacy support, and that no one wanted to take the risk; yet this very issue could have been what prevented VR from taking off earlier, considering the money burned (think of how much money Sony burned on their personal "video viewer"). Palmer Luckey literally thought outside of the box ;)
Recall what we discussed earlier about Battlefield 3 and its in-game AR in the form of minimaps and visual indicators of your teammates' positions. A lot of the stuff we see in games could be directly adopted into real-life AR (given a sufficient technological level). I think there is a huge number of well-honed application "templates", if you will, that we can adopt simply by evaluating 3D games and how problems have been solved there, specifically when it comes to navigation support. I can't emphasize enough the importance of being critical about what you want to achieve with the technology with respect to AR.
Take for example this picture:
You see a representation of all 3 modalities on the same screen. As soon as we throw all this onto a display device such as the Oculus Rift, the whole illusion of the user interface being somehow tied to the boxed-world-view experience is shattered. Suddenly Google Glass makes perfect sense as well: one style of AR is merely localized relative to the eyes, while the other is localized relative to the head or the rest of the body (a kinesthetic mental model).
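The eye-locked versus body-locked distinction can be sketched in code. This is a minimal, purely illustrative sketch (the function names and the simplified 1-D screen model are my own assumptions, not from any real AR SDK): a world-locked overlay must be re-projected from the user's pose every frame, while a head-locked HUD element ignores the pose entirely.

```python
import math

def world_to_screen(anchor_xy, user_xy, heading_rad, fov_rad=math.radians(90)):
    """Project a world-locked anchor onto a 1-D screen strip [-1, 1].

    Returns None when the anchor falls outside the field of view.
    """
    dx = anchor_xy[0] - user_xy[0]
    dy = anchor_xy[1] - user_xy[1]
    bearing = math.atan2(dy, dx) - heading_rad
    # Normalize the bearing to [-pi, pi] so wrap-around works.
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
    if abs(bearing) > fov_rad / 2:
        return None
    return bearing / (fov_rad / 2)

def hud_position():
    """A head-locked HUD element (Glass-style) ignores the pose entirely."""
    return 0.8  # always the same corner of the view, wherever you look
```

Turning the head changes `heading_rad`, so the world-locked label slides across the view (and disappears when you look away), while `hud_position()` never moves.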
Modality 1 - Concrete reasoning in the form of Caudell-style AR: spatialized indicators of friends passing by. You were not perfectly sure whether that was your schoolmate from 10 years ago, but once your smartphone pulled his name and visualized it through an optical see-through HMD as a floating label beside his head, you were sure it was John Snow.
Modality 2 - Abstract reasoning in the form of HUD-type AR (Google Glass) showing your vital signs: for example, your heart rate while you are jogging, the blood sugar level for someone with diabetes, or just contextualized information about your location and whatever may interest you, such as the history of the specific street you are walking.
Modality 3 - Transformational reasoning: utilizing GPS to show a dynamic minimap with a view of the city from above you, giving you situational awareness of nearby vehicles. It also shows red boxes on nearby parking places that have no room, while blue ones still have slots left, combining abstract and concrete reasoning to make the sensory data compatible with you.
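The minimap in Modality 3 is essentially a coordinate transform plus a mapping from abstract state to a concrete visual cue. A minimal sketch under my own assumptions (flat local coordinates in metres, a square north-up minimap centred on the user; none of this comes from any real product):

```python
def to_minimap(obj_xy, user_xy, map_size_px=200, range_m=100.0):
    """Map a world position (metres, east/north) to minimap pixels.

    The user sits at the centre of the map; north is up.
    """
    scale = (map_size_px / 2) / range_m
    px = map_size_px / 2 + (obj_xy[0] - user_xy[0]) * scale
    py = map_size_px / 2 - (obj_xy[1] - user_xy[1]) * scale  # screen y grows downward
    return (px, py)

def parking_color(free_slots):
    """Abstract state -> concrete color, like the red/blue boxes above."""
    return "blue" if free_slots > 0 else "red"
```

The same transform is what every 3D game's minimap already does; the only AR-specific step is feeding it GPS fixes instead of game-world coordinates.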
So as you said, the big challenge is Modality 3, since Modalities 1 and 2 are mostly for the purpose of distinguishing between information meant for communication and that of tacit knowledge. The challenge is how to transform the vast amount of data we have available through our gadgets, networks and servers and utilize it for something meaningful, and how that information is turned into Modality 1 or Modality 2 for maximum impact and usability. Again, the ongoing fight over "this is AR, that's not AR" that many people are conducting is, in my humble opinion, totally counterproductive. The arguments on either side just do not hold when you take a slightly wider view of the feedback you are trying to provide to a human being, no matter what the context, the application or the purpose is. (Additionally, one should remember that, beyond how we are used to perceiving and understanding this world, Modality 3 can greatly help us understand phenomena and concepts that could previously be grasped only by reading and having a deep theoretical background.)
One good thinking exercise: imagine a situation where you are fully immersed in a virtual world through all your senses, the state the people in the Matrix movie are in. What happens to "VR" or "AR" in that situation from the classical-boundary point of view? All the concepts fall apart. Just as in my earlier example of using VR to prototype Caudell-type AR applications, one can substitute reality with virtual sensory stimuli or augment reality with something new. But at the point when the user is not able to distinguish a road sign that is real from a road sign that is merely overlaid with translated writing, from the user's point of view both are real. One is merely a more dynamic technology than the other. Both serve the same purpose: A) to indicate where you should go, given that the sign points to the desired location, and B) to communicate the abstract term attributed to the given place. Mind-bending? Confusing? From the traditional AR point of view, yes.