binsansballs

Members
  • Content count: 12
Community Reputation

164 Neutral

About binsansballs

  • Rank: Member
  1. Hi, I'd like to render lighting effects for an indoor environment with about 20 different light sources. You can roughly imagine it as a cinematographic set -- the subjects to be illuminated are mostly humans. Currently, I simply model the light as a single 9-coefficient vector (a second-order spherical harmonics representation). Do you think this could be an appropriate model, or is it too poor?
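For reference, a minimal Python sketch of what the 9-coefficient (second-order) real SH representation looks like in practice. The constants are the standard real SH normalization factors; the function names are made up for illustration:

```python
# Second-order real spherical-harmonics basis at a unit direction n = (x, y, z).
def sh_basis(n):
    x, y, z = n
    return [
        0.282095,                       # Y_0^0
        0.488603 * y,                   # Y_1^-1
        0.488603 * z,                   # Y_1^0
        0.488603 * x,                   # Y_1^1
        1.092548 * x * y,               # Y_2^-2
        1.092548 * y * z,               # Y_2^-1
        0.315392 * (3.0 * z * z - 1.0), # Y_2^0
        1.092548 * x * z,               # Y_2^1
        0.546274 * (x * x - y * y),     # Y_2^2
    ]

def eval_sh(coeffs, n):
    # Reconstruct the lighting signal at direction n from the 9 coefficients.
    return sum(c * b for c, b in zip(coeffs, sh_basis(n)))
```

Each of the ~20 lights would be projected into the same 9 coefficients (for a directional light of intensity L and direction d, add L * sh_basis(d)[i] to each coefficient i), so the whole rig collapses into one 9-vector per color channel.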
  2. Hi, I have two triangle meshes -- one (say, A) at a higher resolution, the other (say, B) at a lower resolution. I know the vertex positions and normals for both (and texture coordinates as well). I'd like to bake a normal map for B on the basis of A. I already have a mapping from the points of B's surface into uv space. I guess I have to calculate a tangent space for every * pixel * of B's map (on the basis of the corresponding point on B's surface -- am I correct here?)... And ok, I know how to do that. Then, when assigning each pixel of the map a color... how should I use the information I've got about A's geometry?
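For what it's worth, once you have a tangent frame per texel of B and a detail normal sampled from A (typically found by casting a ray from B's surface point along its normal and interpolating A's normal at the hit), the per-pixel step is a change of basis plus encoding. A minimal Python sketch, assuming the frames are orthonormal (function names are mine):

```python
def world_to_tangent(n_world, t, b, n):
    # Express a world-space normal (sampled from mesh A) in the texel's
    # tangent frame (T, B, N) built on mesh B. For an orthonormal frame,
    # the change of basis is three dot products.
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    return (dot(n_world, t), dot(n_world, b), dot(n_world, n))

def encode_rgb(n_tangent):
    # Map each component from [-1, 1] to [0, 255] for storage in the map.
    return tuple(int(round((c * 0.5 + 0.5) * 255)) for c in n_tangent)
```

A texel where A's normal agrees with B's interpolated normal encodes to the familiar flat-normal-map blue (128, 128, 255).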
  3. Actually, MeshLab also seems to support Gaussian curvature... and the Gaussian curvature calculated on my 3D model, rendered in MeshLab, looks fine. I'm totally new to MeshLab, so I was wondering if it would be possible to export the model just rendered in MeshLab as an obj file. In other words, I'd like to export an obj file whose texture corresponds to the Gaussian curvature values just calculated -- not to the original texture map of the scan.
  4. Hi all, I need to obtain the Gaussian curvature of a 3D mesh (stored in a .obj file). I looked for some code online, but without finding anything useful. Any hint -- an already existing implementation, or some suggestions for starting to write the code myself?
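In case it helps someone later: the simplest discrete approximation is the angle deficit. At each interior vertex, the Gaussian curvature is 2*pi minus the sum of the incident triangle angles (divide by the mixed area around the vertex to get a curvature density). A self-contained Python sketch, with obj parsing omitted; boundary vertices would use pi instead of 2*pi:

```python
import math

def angle_at(p, q, r):
    # Interior angle at vertex p of triangle (p, q, r).
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    u, v = sub(q, p), sub(r, p)
    cos_t = dot(u, v) / math.sqrt(dot(u, u) * dot(v, v))
    return math.acos(max(-1.0, min(1.0, cos_t)))  # clamp for safety

def angle_deficits(verts, faces):
    # Discrete Gaussian curvature per vertex as the angle deficit
    # 2*pi - sum of incident triangle angles.
    deficit = [2.0 * math.pi] * len(verts)
    for i, j, k in faces:
        deficit[i] -= angle_at(verts[i], verts[j], verts[k])
        deficit[j] -= angle_at(verts[j], verts[i], verts[k])
        deficit[k] -= angle_at(verts[k], verts[i], verts[j])
    return deficit
```

Sanity check: the apex of three mutually perpendicular right triangles (a cube corner) has deficit 2*pi - 3*(pi/2) = pi/2, matching the flattened cone angle.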
  5. I have to merge (in 2D) images in order to obtain a high resolution texture map for human bodies. My images correspond to the camera images provided by a 3D scanner: such images, taken from different views, represent (a part of) the body plus the background. I know how to map these images onto a coherent map, but I need a fully "seamless" result when merging them -- namely, avoiding patch discontinuities due to different lighting conditions. Blending techniques often follow a "feathering weight" approach: for each original 2D camera image, a feathering function f(x) is calculated on the image pixels x such that f(x) = 0 outside the object and f(x) grows linearly to a maximum value of 1 within the object. Any idea about how to model f(x) with an appropriate function?
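One common way to model f(x) is via a distance transform of the object mask: f is the distance to the nearest background pixel, scaled linearly and clamped at 1 past a chosen margin d_max. A pure-Python sketch using BFS (so 4-neighbour distance for brevity; scipy.ndimage.distance_transform_edt gives the smoother Euclidean version); names are illustrative:

```python
from collections import deque

def feather_weights(mask, d_max):
    # mask[y][x] is True inside the object. Returns f with f = 0 outside,
    # rising linearly with distance from the boundary, clamped at 1 once
    # the distance reaches d_max.
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    # Seed the BFS: every background pixel has distance 0.
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                dist[y][x] = 0
                q.append((y, x))
    # Multi-source BFS propagates distance-to-background inward.
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return [[min(1.0, (dist[y][x] or 0) / d_max) for x in range(w)]
            for y in range(h)]
```

The blended texel is then the weighted average sum_i f_i(x) * I_i(x) / sum_i f_i(x) over the images that cover x, so seams fade out instead of cutting hard.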
  6. Hi all, I'm really a newbie -- sorry for my question. I'm trying to run a simple Facebook app in Python using GAE: [url="https://developers.facebook.com/docs/samples/canvas/"]http://developers.facebook.com/docs/samples/canvas/[/url] I'd like to run it locally (on localhost:8080), in sandbox mode. Everything seems to work fine, but upon clicking the Login button I cannot see the "request for permission" popup -- I'm simply redirected to the same Login page. Any idea about what's going wrong?
  7. Thank you, this was exactly what I was misunderstanding! Unfortunately, in my actual code, I cannot use the first solution you suggest (the introduction of a second function would be problematic). And as for the second approach... I tried something similar, but it gave me some problems since I had to iterate over cell arrays, not over simple arrays: if x is a cell array, I cannot use the syntax x{1:3}... Anyway, thank you again. On Monday I'll test the function again and give you feedback!
  8. One moment. I think I'm really misunderstanding something about lsqnonlin... I've got problems even with this simple code:

     opt = {3};
     for i = 1 : 3
         opt{i} = @(x) x^i;
     end
     [r resnorm] = lsqnonlin(opt, 10);

     The error is:

     FUN must be a function or an inline object; or, FUN may be a cell array that contains these type of objects.

     but class(opt) returns cell...
  9. [quote name='apatriarca' timestamp='1349986367' post='4989236'] I haven't completely understood what you are trying to achieve. What about [source lang="plain"]opt_r = @(x) orig_r{1:num} * x(1) - other_r{1:num} .* x(1:num);[/source] To define vector valued functions you should usually either directly define a new vector with the square braces notation or use vector operations on vectors/matrices. [/quote] The problem still remains (it is in the .* x(1:num) part). Say num = 3: x would be a vector of size 3, and I'd like to have:

     opt_r = @(x) orig_r{2}.*x(1) - other_r{2}.*x(2) + orig_r{3}.*x(1) - other_r{3}.*x(3)
  10. Hi all, I need to solve a nonlinear least-squares problem for my prototype app. I'm referring to the lsqnonlin function: [url="http://www.mathworks.com/help/optim/ug/lsqnonlin.html"]http://www.mathworks.../lsqnonlin.html[/url] Currently, I use the following code:

      k = 1 : num;
      opt_r = @(x) orig_r{k} * x(1) - other_r{k} * x(k);
      x = ones(1, num);
      [r_exp, resnorm] = lsqnonlin(opt_r, x);

      In other words, I need to consider all the arrays belonging to my orig_r and other_r cell arrays. I receive the following error:

      Error using *
      Too many input arguments.

      What would the correct statement be?
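The usual pattern (in any language) is one function that returns the whole residual vector, with one entry per k, rather than indexing a cell array with a vector subscript inside the anonymous function. A language-neutral Python sketch of the residual from the post, 0-based (a solver such as scipy.optimize.least_squares would then minimize the sum of squares); names mirror the MATLAB code:

```python
def make_residual(orig_r, other_r):
    # r[k] = orig_r[k] * x[0] - other_r[k] * x[k], the 0-based version of
    # the MATLAB expression orig_r{k}*x(1) - other_r{k}*x(k).
    def residual(x):
        return [o * x[0] - t * x[k]
                for k, (o, t) in enumerate(zip(orig_r, other_r))]
    return residual
```

The point is that the solver receives a single callable returning a vector, so each data pair contributes its own residual component.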
  11. @Kaptein: yeah, it is exactly a "too few vertices" problem. Unfortunately, the number of vertices I should consider may vary noticeably from one zone to another on the surface mesh (e.g., on the face I need fewer vertices than on the torso); furthermore, considering a huge number of vertices is very time consuming. @Lauris: sorry for my "newbie" question... Very briefly, how do you use the three values you mentioned (location of the surface point, location of the point in the mesh geometry, and the normal at that point) in order to set a pixel value in the bump map?
  12. Hi all, I need to unwrap the texture of my 3D mesh by constructing a kind of bump (or normal) map. I know the function mapping every surface point on the mesh to its u,v coordinates, and I can estimate the normal at each surface point as well. Currently, I'm drawing a grayscale map -- by assigning each pixel an intensity proportional to the dot product between the normal at the corresponding surface point and the average of the normals in a well-defined neighborhood of that point. However, the resolution of the map is quite poor this way (the dot product is very near 1 in most cases). Is there any smarter way to achieve my goal?
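A cheap fix for the poor dynamic range described above is to stretch the observed dot products over the full intensity range, instead of mapping [0, 1] directly. A Python sketch (names made up):

```python
def stretch_intensities(dots):
    # Dot products between a normal and its neighbourhood average cluster
    # near 1; remap the actual [min, max] of the data onto [0, 255] so
    # small variations become visible.
    lo, hi = min(dots), max(dots)
    span = (hi - lo) or 1.0  # avoid division by zero on a perfectly flat region
    return [int(round((d - lo) / span * 255)) for d in dots]
```

That said, a tangent-space normal map (storing the full 3-component normal per texel, as in post 2) keeps strictly more information than any scalar remap of the dot product.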