
How to render a slice to make it look really inside an object?


Hi,

I have a slice which shows the inside of a real sphere.

I want to render this slice so that it looks like it is really inside the sphere.

I know the relative pose of the slice to the real sphere, so the view matrix is not an issue.

The problem is how to add some visual effect to make it look real.

If I just simply render both the sphere and the slice, it gives the feeling that the slice is dangling in the air.

The only way I can figure out is to add hole geometry on the sphere, so that it feels like we are looking at the slice through a hole and the slice appears to be inside the sphere.

Any suggestion about this method and other suggestions?

Thanks a lot!

YL


I'm not sure your explanation is detailed enough.

Do you want to draw the intersection between the slice and the sphere? You can do that in the pixel/fragment shader by discarding fragments whose world position is in front of (or outside) the slice.
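A minimal HLSL sketch of that discard idea, assuming the slice plane is described by a world-space point and normal passed in a constant buffer (`slicePoint`, `sliceNormal`, and the register assignments are illustrative names, not anything from this thread):

```hlsl
// Constant buffer with the slice plane (names are illustrative).
cbuffer SliceParams : register(b0)
{
    float3 slicePoint;   // any point on the slice plane, world space
    float3 sliceNormal;  // plane normal, pointing toward the camera side
};

Texture2D sphereTex : register(t0);
SamplerState samp   : register(s0);

struct PSInput
{
    float4 pos      : SV_POSITION;
    float3 worldPos : WORLDPOS;   // interpolated world-space position
    float2 uv       : TEXCOORD0;
};

float4 PSMain(PSInput input) : SV_TARGET
{
    // Signed distance from the fragment to the slice plane.
    float d = dot(input.worldPos - slicePoint, sliceNormal);

    // clip() discards the fragment when its argument is negative,
    // so everything on the camera side of the plane is cut away,
    // exposing the slice behind it.
    clip(-d);

    return sphereTex.Sample(samp, input.uv);
}
```

Flipping the sign of `sliceNormal` (or of `d`) flips which half of the sphere gets cut away.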


Sorry for the unclear description of my problem.

Actually, I do not have the geometry of the sphere and cannot render it. The only thing I have is one slice (a PNG image) of the inside of the sphere. This slice is an image cutting through the sphere, like an ultrasound image.

I use a real camera to take a picture of the sphere and then want to render this slice combined with the picture to give people the feeling that the slice is inside the sphere.

If I need more information, I can use a stereo camera to get the depth of the sphere. With the depth information, maybe I can create a hole as KosadinPetkow said.

14 hours ago, YixunLiu said:

I use a real camera to take a picture of the sphere and then want to render this slice combined with the picture to give people a feeling that this slice is inside the sphere.

If the real picture is a texture on top of a 3D model then you would manipulate the 3D model and textures.

If the real picture is an augmented reality image then it will be much harder. You'd need to turn the object into a model through whatever means you have available, such as back projection or object registration or an object recognition library someone has already developed.  If that is the situation then I strongly recommend using an existing tool.  This will create a 3D model and texture that you would manipulate the same way.

 

On 7/4/2017 at 1:32 PM, YixunLiu said:

The only way I can figure out is to add hole geometry on the sphere, so that it feels like we are looking at the slice through a hole and the slice appears to be inside the sphere.

Once you have the model and texture to manipulate, manipulating the graphics will require some knowledge of graphics and model processing, or some tools that can do it for you.

For something simple you could make a slice of the model's geometry, or perhaps use a clipping plane to cut across it, or perhaps using a shader to modify the geometry or change to a transparent texture for the cutout.  The details will depend on the effect you are trying to achieve and your level of skill with the tools.

For more advanced shapes you might use tools for constructive geometry to subtract sections, assuming tools are available to you.

14 hours ago, YixunLiu said:

This slice is an image cutting through the sphere, like ultrasound image. 

If you have volumetric data for the sphere and you want to show a proper cutout, there is a bunch of research on how to do that efficiently. The IEEE VIS conference has twenty years of research papers that can help as a starting point. 

 


Thank you so much for your valuable comments.

What I want to achieve is like the attached picture Hole.png, which is a frame I captured from YouTube.

As you can see from the picture, a virtual hole is added on the real wall to show the virtual sky. 

To achieve this effect, I think I need to:

1. Obtain the wall surface mesh by using some depth camera and then cut a hole in the mesh

2. Render the cut mesh to get depth value

3. Render the sky image with depth test enabled

Am I right?
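The three steps above map fairly directly onto two passes. Here is a rough HLSL sketch, assuming the wall mesh with the hole already cut out (step 1) is available; the buffer, texture, and register names are hypothetical:

```hlsl
// Pass 1: depth-only pre-pass over the cut wall mesh.
// Bind no render target (or disable color writes); only the depth
// buffer is filled. Behind the hole, the depth stays at the clear
// value (the far plane).
cbuffer PerObject : register(b0)
{
    float4x4 worldViewProj;
};

float4 VSDepthOnly(float3 pos : POSITION) : SV_POSITION
{
    return mul(float4(pos, 1.0), worldViewProj);
}

// Pass 2: draw the sky image at the far plane (e.g. a full-screen
// quad with z = 1 in NDC), with depth testing enabled
// (D3D11_COMPARISON_LESS_EQUAL) and depth writes off. Sky fragments
// pass the test only where the wall left no depth, i.e. inside the hole.
Texture2D skyTex  : register(t0);
SamplerState samp : register(s0);

float4 PSSky(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET
{
    return skyTex.Sample(samp, uv);
}
```

The real camera image of the wall is then composited behind or around this, exactly as in the video: the depth buffer is what makes the virtual sky show only through the hole.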

Any suggestions are really appreciated!

 

Best,

YL

 

 

Hole.png


Oh, that was all you wanted. Here I was imagining some kind of system where you filled an object with voxel-like slices.

 

Your shader should have some kind of "discard" command that will simply not render that fragment. You can use that with some kind of map to define which pixels shouldn't be drawn.

It's often used for those little bullet holes. If you want, I could show you an example with Unreal or Unity.

This will create a visual hole in any object. It won't have depth filling and won't adjust collisions; if you are using rays for firing a gun, it will be possible to fire through a hole made with a shader like this. This is how a game like Counter-Strike would do it.
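A hedged HLSL sketch of that masked-discard technique, assuming a grayscale mask texture in which black marks the hole (the texture names and registers are illustrative, not from any particular engine):

```hlsl
Texture2D wallTex  : register(t0);
Texture2D holeMask : register(t1);  // black (0) where the hole is
SamplerState samp  : register(s0);

float4 PSWall(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET
{
    // clip() discards the fragment when its argument is negative.
    // Subtracting 0.5 means any mask value below 0.5 cuts a hole.
    float mask = holeMask.Sample(samp, uv).r;
    clip(mask - 0.5);

    return wallTex.Sample(samp, uv);
}
```

Because only the pixel shader changes, this works on any mesh without touching its geometry or collision, which is exactly why it is cheap and why rays still pass through the "hole".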

 

 

If your game is using realistic ballistics, like in Rainbow Six Siege, you need to break the mesh and update the collisions.

maxresdefault.jpg

That is to say, it isn't some kind of cheap effect. From the jagged edges and the z-depth you can see it's a mesh.

The important part is to use some kind of LOD or octree so that only the part that is hit updates. Updating the collision for the full wall would slow the game to a crawl.

  


Thanks Scouting!

My understanding, based on what you said, is that there are two methods.

The first method is to use a pixel shader to render the sky, but only render part of the sky based on a map.

This method looks easy to implement because it does not require the mesh of the surface.

For this method, does it look real or provide enough depth information?

It would be better if you could send me an example, but I am not familiar with Unity or Unreal. I want to use DirectX11 to implement it.

The second method, based on what you said, is to build a mesh for the wall surface and then update the mesh.

It looks like this method can produce a more realistic result, right?

Do you know where I can get some examples of a similar implementation using DirectX?

The following is the link for this video:

https://hololens.reality.news/news/video-holelenz-adds-magic-windows-hololens-gives-portals-new-worlds-0176281/

 

Thanks.

 


What is shown in the video you linked is just sprites of holes rendered over the render result. There are no real holes or objects with holes in them.

 

Just use DirectX to render your sphere, then render the sprite over it, in screen space.

Here the concept is shown: Concept.png



The problem is that I do not have the sphere model. What I have is the slice and a photo of the sphere taken by a real camera.

The margin of the hole looks very real. Is it produced using a shadow?

