
#ActualCromfel

Posted 27 June 2013 - 01:49 PM

For example, many AR applications or games involve overlaying 3D models onto the world, usually on top of a specific marker. The model then animates, or you can move around the marker to view it from different directions. None of this ever needed AR to function; the same results could just as easily be achieved by displaying a model on screen and using a gyroscope, an accelerometer, or even traditional input methods. So I find myself asking: why bother? What is the point, other than shouting "hey, look how cool we are using this new technology"? Which ultimately makes me feel rather sad.

 

This is indeed one of the plagues haunting AR, alongside the way camera point-of-view footage of marker-based tracking, with some 3D object floating above it, gets shown off in YouTube videos. That is how people would like to experience these things in the first place. Per the framework for concrete reasoning, when the POV is correct the content looks very appealing, for obvious reasons. But it is not how you experience it in reality unless you have a proper HMD with video see-through. And even then the added value is nonexistent unless the product suits that kind of visualization, e.g. evaluating the visual appearance of something at a suitable scale. And exactly as you said, the same thing could be implemented using other sensors. Tracking is tracking; whether you use image-based tracking or something else is completely detached from the AR experience. This is another distinction people fail to make: AR tracking via images is a separate thing from the video overlay, even if the two happen to come in the same package. One can use magnetic tracking for position and orientation and still use video see-through, to give an obvious example.
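To make that separation concrete, here is a minimal sketch (all class and function names are hypothetical, not from any real AR library) of the point above: the overlay renderer only consumes a pose, so any pose source, marker-based, magnetic, or IMU, can be swapped in without the rendering side knowing or caring.

```python
from dataclasses import dataclass
from typing import Protocol, Tuple


@dataclass
class Pose:
    """Camera position (x, y, z) and orientation (yaw, pitch, roll)."""
    position: Tuple[float, float, float]
    orientation: Tuple[float, float, float]


class Tracker(Protocol):
    """Any pose source: image/marker tracking, magnetic, IMU, ..."""
    def read_pose(self) -> Pose: ...


class MarkerTracker:
    """Stand-in for an image-based marker tracker; returns a canned pose."""
    def read_pose(self) -> Pose:
        return Pose((0.0, 0.0, 1.0), (0.0, 0.0, 0.0))


class MagneticTracker:
    """Stand-in for a magnetic 6-DOF tracker; same interface, different sensor."""
    def read_pose(self) -> Pose:
        return Pose((0.1, 0.0, 0.9), (5.0, 0.0, 0.0))


def render_overlay(tracker: Tracker) -> str:
    """The overlay step only sees a Pose; it never knows which sensor made it."""
    pose = tracker.read_pose()
    return f"draw model at {pose.position} facing {pose.orientation}"
```

The same `render_overlay` works with either tracker, which is exactly the "tracking is tracking" argument: the AR experience lives in the overlay, not in the sensor feeding it.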

 

As such, I've started to jot down what I feel are the benefits of AR: which use cases make the most sense for the technology, where it fails, which other interactions it should or shouldn't be mixed with, and how people interact, or expect to interact, with AR, AR markers, or the display (e.g. handheld devices). All in order to hopefully help guide potential future clients toward developing projects with better value than before.

 

This is what should be done more. And it is not a simple task to find a good way to categorize exactly what makes a specific case valuable when exactly the same application is completely useless in some other case. For me, the experience of utilizing VR, as an example, boiled down to the fact that it was most useful when we had a multidisciplinary group of people with various backgrounds who needed to discuss a new machine design, such as a mining loader. When I tried to narrow down what made this specific setting useful, it was that the concrete experience the drivers of such machines had only became relevant once they could communicate within the setting where they would be working, but now in a controlled VR experience. The same phenomenon was visible in sales, where much of the VR utilization happened: the most important thing in that situation lay outside abstract reasoning and product specifications; it was actually sitting inside the machine you are going to buy and evaluating how it would feel to be in there doing a specific job task.

 

Now, when it comes to the usual AR with floating 3D objects, the only differentiating factor was using the physical world in conjunction with it for a specific purpose. For example, the machine design may be at a phase where the physical cabin construct is already being built and only the dashboard and the final button / UI screen layout remain to be finalized. There it is more useful to use the physical structure to provide a somatosensory experience (some would call it haptic) and visualize the alternative button layouts and screen configurations through AR, where the physical presence in the cabin helps you get a feeling for the space... Now, in what sense is this AR? This becomes an interesting question once we drop the traditional AR view and stop focusing only on visual feedback :)

 

All this will become much clearer to you when you watch the lecture video. Many of these problems, such as needing a $500,000 haptic super-mega-hyper-glove for your VR visualization, are solved by simply grabbing a replica of the object being manipulated. My point being: model and simulate only when it is necessary, NEVER just for the coolness or out of a "because I can" attitude. This seems very obvious in hindsight... but not so obvious when you go and see some very expensive haptic demo being used for a completely irrelevant purpose.

 

I'd love to discuss this further, and to have had time to properly collate my thoughts in order to write and explain them better than I have above, but as ever I'm in a bit of crunch mode, coincidentally for an AR project ;) So I just wanted to put down some initial thoughts and viewpoints on the matter. Hopefully I'll be able to return to it in a week or so, after I've had time to watch the video you posted.

 

I am looking forward to getting some feedback on the framework. It is something I have been coining for some years, but only now have I gotten some breathing room from work to put something down in the form of a presentation; I'm not even thinking about any kind of publication yet. I already see good insight in what you have been saying, and much of it reflects the experience I have had over the years. It always amazes me how little attention people pay to objectively analyzing the situation and squeezing out what is valuable versus what is merely cool.

 

Oh, I'm also interested in the whole definition of AR, as it could be as broad or as narrow as you want. As such, I wonder if the term AR is a little meaningless now, since at its most basic it could include anything and everything whereby information is overlaid on top of a real, or even virtual, video stream. That's not really AR to me; it's a subset, basically a HUD. However, defining and naming these things can cause huge arguments, so I'm unsure how beneficial it is to bother ;)

 

This also becomes somewhat bizarre if you adopt the framework. I mean that in a good way. It takes away the mystery many people are trying to build around AR. It does not make AR irrelevant, null, or void; quite the opposite, for me at least. It removes a lot of unnecessary noise and brings clarity, letting you focus on the essential. But time will show whether other people perceive the situation the same way.

