Painting on 3D models

11 comments, last by DonnieDarko 17 years, 7 months ago
I'm working on a project where I will have a 3D model (maybe several pieces, maybe a single mesh). I need to be able to paint this model with different colors. An example of this would be: I have a textured 3D model of a car and I want to draw a red line across the hood. The way I'm heading right now, I'm wondering if I can create a mapping from the model to the texture. So if I want to draw at a certain point on the car, I would go through a mapping that tells me what point in the texture image corresponds to that point on the car. Then I could just modify the texture and apply the new (painted-on) version to the model. Does this seem like a reasonable way to get what I'm after? I'd like to be able to paint with a fair amount of precision. As far as performance goes, I'd like it to be responsive, since the painting will be done in real time.
~Mark~
That's perfectly reasonable. There are even specialized programs to do that: Take a look at ZBrush and Deep Paint.
I think even Blender (free!) can do it in the newer versions, but don't quote me on that.
I did it in 3D Studio Max recently; there it was called UVW Unwrap. The method you described is the best way to do it, I guess.

Greetings.
I suppose I should clarify that this will be done within a larger software project. I am using OpenSceneGraph for the graphics. I will have data that says what needs to be painted onto a model and where.

My question was more about implementing this in software. Thanks for the awesome links though :D
~Mark~
Quote:Original post by El3mental
I suppose I should clarify that this will be done within a larger software project. I am using OpenSceneGraph for the graphics. I will have data that says what needs to be painted onto a model and where.

My question was more about implementing this in software. Thanks for the awesome links though :D


If you take a car, for instance, and set the texture coordinates in a 3D modeling program, you will need to import these coordinates into your application. If you have multilayer textures (base car texture, striping, stickers) you can use alpha blending or multitexturing to get the final texture. The texture coordinates do the rest; the texture will be interpolated across each triangle. I suppose there are exporters for OSG, and I'm pretty sure they will also be able to convert the texture coordinates you specify in the 3D modeling software you use.
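To illustrate the layering idea, here is a minimal CPU-side sketch (assuming 8-bit RGBA texels; the struct and function names are just illustrative) of compositing a decal texel over the base texture the same way hardware alpha blending or a second texture stage would:

#include <cstdint>

struct RGBA8 { uint8_t r, g, b, a; };

// Composite one decal texel over the base texel using standard
// "source over" alpha blending.
RGBA8 blendOver(const RGBA8& base, const RGBA8& decal)
{
    const float a = decal.a / 255.0f;
    RGBA8 out;
    out.r = static_cast<uint8_t>(decal.r * a + base.r * (1.0f - a));
    out.g = static_cast<uint8_t>(decal.g * a + base.g * (1.0f - a));
    out.b = static_cast<uint8_t>(decal.b * a + base.b * (1.0f - a));
    out.a = 255;
    return out;
}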
I'm not sure I understand, but it sounds like you're making a paint program for painting directly onto the geometry of a level or object(s).
You could render the scene to a hidden back buffer, assigning a unique 32-bit color value to each polygon, and then sample the color under the cursor to identify which polygon was hit.
Then, to get the precise texture coordinate within that polygon, I'd do a temporary scanline fill of the triangle on the texture itself with incrementing color values, then redraw that triangle doing a pixel-for-pixel translation from the flat 2D texture-space triangle to the 3D polygon. I'd also use a small depth buffer to store all the pixels affected. Basically, any pixel that draws under the final corresponding screen coordinate is stored as an affected texture coordinate, and the color tells you which texture coordinates are affected/painted. The final on-screen color will have the greatest influence, and you'll need to interpolate the painting between all the affected 2D coordinates based on their 3D distance from the true 3D position of the final/displayed pixel. 3D distance because some of the affected pixels could be located on other polygons, for example if the painted pixel was right on a vertex adjoining several polygons, and it's being painted at a steep angle and a fair distance from the camera.
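A rough sketch of the color-ID readback step, assuming a plain OpenGL context (the helper names here are illustrative; how you render the ID pass is up to you):

#include <GL/gl.h>

// Encode a polygon index as an RGB color (supports up to 2^24 polygons).
// Render every polygon with its ID color: no lighting, texturing, or blending.
void setIdColor(unsigned int id)
{
    glColor3ub(static_cast<GLubyte>(id & 0xFF),
               static_cast<GLubyte>((id >> 8) & 0xFF),
               static_cast<GLubyte>((id >> 16) & 0xFF));
}

// After the hidden ID pass, read back the pixel under the cursor
// and decode the polygon index.
unsigned int pickPolygon(int mouseX, int mouseY, int viewportHeight)
{
    unsigned char pixel[3] = {0, 0, 0};
    // OpenGL's origin is the lower-left corner, so flip the mouse Y.
    glReadPixels(mouseX, viewportHeight - mouseY - 1, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, pixel);
    return pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);
}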
As confusing as that was hopefully it'll give you some ideas. :)
Google up "barycentric coordinates". It's used in per-polygon collision detection and normal mapping, and I bet that it's used in apps that support paint-on-the-surface like ZBrush and Blender.
So you need texture coordinates that map the surface to a texture, then you do a collision detection with a ray from the cursor to the surface, and with barycentrics you can get the pixel on the texture that matches the collision point. You can also interpolate other per-vertex info this way, like normal vectors. Then you can project the collision point onto the surface with the tangent matrix to do things like brushes that map in 3D onto the texture.
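Something like this, assuming you already have the ray-triangle hit point and the triangle's three vertices with their UVs (osg::Vec2/osg::Vec3 are used here since OpenSceneGraph was mentioned, but any vector type will do):

#include <osg/Vec2>
#include <osg/Vec3>

// Compute the barycentric weights of point p inside triangle (a, b, c),
// then use them to interpolate the per-vertex texture coordinates.
osg::Vec2 interpolateUV(const osg::Vec3& p,
                        const osg::Vec3& a, const osg::Vec3& b, const osg::Vec3& c,
                        const osg::Vec2& uvA, const osg::Vec2& uvB, const osg::Vec2& uvC)
{
    const osg::Vec3 v0 = b - a, v1 = c - a, v2 = p - a;
    const float d00 = v0 * v0;   // operator* on osg::Vec3 is the dot product
    const float d01 = v0 * v1;
    const float d11 = v1 * v1;
    const float d20 = v2 * v0;
    const float d21 = v2 * v1;
    const float denom = d00 * d11 - d01 * d01;
    const float v = (d11 * d20 - d01 * d21) / denom;
    const float w = (d00 * d21 - d01 * d20) / denom;
    const float u = 1.0f - v - w;
    return uvA * u + uvB * v + uvC * w;
}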

Or are you after something like applying a decal to a surface, as in some car-pimping games?

ch.
Barycentric coordinates sound like exactly what I need. I will still need the texture coordinates of the vertices, correct? I was hoping that I could draw a simple model in Blender, export it to 3ds, and load it into my program (it looks like OpenSceneGraph comes with loader functionality for .3ds files). I don't have a lot of experience with model loading, so I was kinda hoping that loading the .3ds model into the program would get me the texture coordinates for the vertices that I need. Anyone have any advice?

What I need to do is really simpler than it even sounds. I probably won't have to interact with the mouse per se. Rather, I will most likely have a list of data about the object at certain 3D coordinates. Based on that data, I need to color the object certain colors at those points.

So to give an example, pretend I have a cube:
0,2______2,2
  |      |
  |      |      <--- one face of the cube (at z = 0)
  |______|
0,0      2,0


As I'm rendering my scene, over time I may get some data that says the point (1,1) in the center of that cube face needs to be painted red. So I could take the mapping from 3D coordinates to texture coordinates, find where to add a red dot on my texture image, and then reapply it to my cube.
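In OpenSceneGraph terms, that "add a red dot and reapply" step could look something like this (a sketch, assuming the texture is backed by an RGBA8 osg::Image that is already attached to an osg::Texture2D):

#include <osg/Image>

// Write a single red texel at texture coordinate (u, v) in [0, 1]
// and tell OSG to re-upload the image on the next frame.
// Assumes the image uses GL_RGBA / GL_UNSIGNED_BYTE storage.
void paintTexel(osg::Image* image, float u, float v)
{
    const int s = static_cast<int>(u * (image->s() - 1));
    const int t = static_cast<int>(v * (image->t() - 1));
    unsigned char* texel = image->data(s, t);
    texel[0] = 255;  // R
    texel[1] = 0;    // G
    texel[2] = 0;    // B
    texel[3] = 255;  // A
    image->dirty();  // schedules the texture for re-upload
}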

Thanks a bunch for all the help so far guys!
~Mark~
To El3mental: Your approach sounds similar to the one I'm using with my 3D paint app. I render the model, but instead of writing RGB values to each pixel, I write the UV coordinates. This allows every screen pixel to map back to a point in the texture buffer.

The advantages are very fast updates: changes to the texture are reflected quickly in your preview window (paint on the texture, see it on the screen), and every screen pixel maps directly back to a point in the texture (paint on the screen, see it in the texture).
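On the application side, the lookup could look something like this (a sketch; the buffer layout and names are just illustrative, assuming 8-bit U and V stored per screen pixel):

#include <cstdint>
#include <utility>

// The UV buffer: for every screen pixel, the U and V the renderer wrote
// instead of a color (0..255 each), plus a flag byte that is zero where
// no geometry was drawn.
struct UVPixel { uint8_t u, v, covered; };

// Map a screen pixel back to a texel in a texW x texH paint texture.
// Returns (-1, -1) if the pixel is background.
std::pair<int, int> screenToTexel(const UVPixel* uvBuffer,
                                  int bufferWidth, int x, int y,
                                  int texW, int texH)
{
    const UVPixel& p = uvBuffer[y * bufferWidth + x];
    if (!p.covered)
        return {-1, -1};
    // 8-bit UVs only give 256 distinct positions per axis, which is the
    // precision limitation discussed below.
    return {p.u * texW / 256, p.v * texH / 256};
}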

There are limitations/downsides you'll have to address.

You'll want high-res UV coords. This is doable with hardware, provided you can live with 8-bit UV values. If you want textures larger than 256x256, or want sub-pixel precision, you need more bits. This is probably doable with bit operator tricks in a pixel shader, or by rendering to non-RGB formatted buffers (floats or maybe one of the 16-bit formats), but those are not always available as render targets. For my app, I used a custom software renderer to take care of this, which also gives me complete control over bit representations of everything.

It will make things easier if you lock the orientation of the model while painting, so you don't have to update the pixel-to-UV mapping buffer. Arguably not much of an issue if you're using hardware to do this (and can read the UV texture back from hardware at a fast enough speed), but again, I'm using a software renderer, so high-res windows are a bit sluggish.

You'll need to deal with not having a 1:1 mapping of screen pixels to texels.

If you're painting on a zoomed-out model, two adjacent pixels on the screen may map to points several texels apart in your texture, so you have to deal with stretching the paint brush across a larger area of the texture. This can be awkward when painting across triangles that are adjacent on the screen but map to distant locations in the texture, i.e. seams between the different mesh patches (hint: look into a geometric representation like winged-edge or half-edge data structures so you can easily figure out where the problem areas are going to be, but this assumes you have access to the original model and aren't working on a polygon soup).

I usually work zoomed in, so several screen pixels map to a single texel, and I haven't had to finish solving that problem yet. Being zoomed in means combining several screen samples into one texel, which is much more manageable (though still subject to edge conditions).

This is one of those problems that can involve quite a few different areas of graphics, especially if you're wanting something well-done and not a short hack. If you're in it for the geek experience, expect to spend a few months on it. If you really just want a 3D paint program, save yourself the time and buy a 3D paint program (or live with the headaches of jumping back and forth between a simple 2D paint program and a 3D renderer).

I have recently implemented a 3D paint application, and at first I planned on doing it in software, where it basically boils down to doing intersection tests between a line and a polygon soup. For this to be fast enough for real-time painting, a spatial acceleration structure, such as a kd-tree, is definitely needed.

Instead I tried to implement it using the GPU, and it turned out to be pretty easy and run really fast too. I first created a texture atlas for the mesh (using UVAtlas from the DirectX SDK) so there's a 1:1 mapping between a point on the surface of the 3D model and a texel in the paint texture (atlas texture). If you think of the paint coming out from the paint source in a cone shape, the calculation for finding out where to apply the paint is almost the same as doing lighting with a spot light, only now it has to be done in texture space.

Set the paint texture as a render target and render the 3D model with a vertex shader that generates output positions based on the atlas texture coordinates:

VS_OUTPUT vsToPaint(float4 pos : POSITION, float3 normal : NORMAL, float2 tex : TEXCOORD0)
{
    VS_OUTPUT output = (VS_OUTPUT)0;

    // Map the atlas UV from [0,1] to clip space [-1,1] so the triangle is
    // rasterized at its location in the paint texture (V is flipped because
    // texture space and clip space have opposite Y directions).
    float2 uvpos;
    uvpos.x = 2 * tex.x - 1;
    uvpos.y = 2 * (1 - tex.y) - 1;

    output.position = float4(uvpos, 0, 1);

    // Pass the model's world-space normal and position along so the pixel
    // shader can do the "spot light" paint-cone test per texel.
    output.worldNormal = mul(normal, (float3x3)matWorld);
    output.worldPos = mul(pos, matWorld);
    return output;
}


The actual position and normal of the current point on the 3D model are transformed to world space and interpolated across the triangle in texture space, where they are used for the "spot light" calculation. So for texels that face the paint source and lie within the paint cone, the alpha value is set to something greater than zero. Now enable alpha blending and paint should be added to the correct part of the model.
The only missing piece is handling self-occlusion on the model, that is, stopping paint from reaching points on the surface that are not visible from the paint source. This is easily handled with shadow mapping, just as you would for a shadow-casting spot light.
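To make the "spot light in texture space" calculation concrete, here is the cone test written out as plain C++ (names are illustrative; in the real app this logic runs per texel in the pixel shader):

#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  normalize(const Vec3& v) { float l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

// Paint alpha for one texel, given its interpolated world-space position
// and normal. Non-zero only for texels that face the paint source and lie
// inside the paint cone (cosHalfAngle is the cosine of the cone's half
// angle), just like a spot-light term.
float paintAlpha(const Vec3& worldPos, const Vec3& worldNormal,
                 const Vec3& sourcePos, const Vec3& sourceDir,
                 float cosHalfAngle, float brushStrength)
{
    const Vec3 toTexel = normalize(sub(worldPos, sourcePos));
    const float inCone = dot(toTexel, normalize(sourceDir));    // 1 on the cone axis
    const float facing = -dot(normalize(worldNormal), toTexel); // > 0 if facing the source
    if (inCone < cosHalfAngle || facing <= 0.0f)
        return 0.0f;
    return brushStrength * facing;
}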

This method runs 100+ fps on my X800 using several simultaneous paint sources and a 1k x 1k paint texture. It's also nice that with this method it's easy to paint with different brushes which are applied as projected textures (spot light cookie textures).

