# OpenGL 3DS MAX ASE file info problems

## Recommended Posts

Hi all, I'm trying to import a model from 3ds Max into my OpenGL application. The most popular way to do it seems to be exporting the mesh to an ASE file. I used the vertices from the *MESH_VERTEX_LIST section and the U,V coordinates from the *MESH_TVERTLIST section. I know that when importing into OpenGL the Y and Z coordinates are swapped and the Z is negated, but will that affect the U,V coordinates? I can get a texture onto the model, but it's all in the wrong places. Any help would be greatly appreciated. Thanks, Adam Perry

##### Share on other sites
What kind of problems are you experiencing with the UV coordinates? Have you tried applying a UVW modifier to your objects?

##### Share on other sites
For the UVW coords, you need to invert the V coordinate:
u = u
v = -v
If the vertex coordinates are read as f1, f2, f3, they need to be changed like so:
v1 = -f1, v2 = f3, v3 = -f2
The face data needs indices b and c swapped:
f_a = a
f_b = c
f_c = b

I think that's right. Try it out - it should work. ;)
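The remapping above could be sketched like this (a minimal sketch; the helper names and array types are made up for illustration, not from any real loader):

```cpp
#include <array>

using Vec3 = std::array<float, 3>;
using Vec2 = std::array<float, 2>;
using Face = std::array<unsigned, 3>;

// v1 = -f1, v2 = f3, v3 = -f2
Vec3 convertVertex(const Vec3& f) {
    return { -f[0], f[2], -f[1] };
}

// U is unchanged; V is negated.
Vec2 convertUV(const Vec2& uv) {
    return { uv[0], -uv[1] };
}

// Swap indices b and c to flip the face winding.
Face convertFace(const Face& f) {
    return { f[0], f[2], f[1] };
}
```

Applied per vertex, per texture coordinate, and per face while reading the ASE lists.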

##### Share on other sites
I import ASE fine; I just negate the Z vertex coordinate and the V texcoord.
I don't do any face reorganisation, though... :?

##### Share on other sites
If you invert the V coordinate, won't that give you a negative value? I thought OpenGL read bitmaps in from the bottom-left corner, with the bottom-left corner having a U,V value of (0,0) and the top-right corner a value of (1,1). Wouldn't a negative value for U or V put the coordinate outside the bitmap? And what's the deal with the faces? Why are there more texture faces than vertex faces? Also, the index values in the texture face list are different from the index values in the vertex list. Why the difference?

##### Share on other sites
OpenGL and Direct3D both map the V coordinate to the first row of pixel data you send to the API when you specify the texture. The difference is in what texture format you're using (or rather, what loader you're using). When loading a BMP or TGA, you typically get the bottom row first; when loading a PNG or JPG, you typically get the top row first.

Meanwhile, I think 3ds Max defines V as always bottom-first, no matter what the loader is. Thus, if you're using PNG or JPG, or an image loader that gives you top-first data even for TGA or BMP, you will have to negate the V component. If you are using texture coordinate clamping, you additionally have to add 1.
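In other words: under a repeating wrap mode, -v and 1-v land on the same texel, so plain negation is enough; with clamping, the "+1" keeps the value inside [0,1]. A small sketch of that equivalence (helper names are made up):

```cpp
#include <cmath>

float flipVWrap(float v)  { return -v; }        // enough under GL_REPEAT
float flipVClamp(float v) { return 1.0f - v; }  // needed under clamping

// Fractional part the way a repeating wrap mode would compute it.
float repeatWrap(float v) { return v - std::floor(v); }
```

So repeatWrap(flipVWrap(v)) and repeatWrap(flipVClamp(v)) agree for any non-integer v, which is why both camps in this thread get correct results.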

##### Share on other sites
Quote:
 ..Don't do any face reorganisation..

Oh yeah, I was using D3D, and I came up with that combo by trial and error, so that the result would be exactly the same in model space as it was in 3D Studio.

##### Share on other sites
Hey buddy,

I used ASE for a while, and here are some tips I came across:

If you'd like, try setting tv = 1 - tv.
That will fix your little problem of having negative tex coords (which actually shouldn't cause a problem anyway).

You should also consider switching to a different format. Although ASE is simple, it's a text format written for 3DSMAX to read, not optimized for the quick loading you'd want in a game environment. Another annoyance is that the ASE file uses separate index lists to declare where each UV coord is used. In other words, imagine you create a simple cube in a 3D editor: the ASE file may export one vertex for a given corner, which seems optimal at first glance, but it actually makes loading somewhat tedious, because you may need up to three vertices in that EXACT location, since each side of the cube can have different UV coords... :(
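Concretely, each ASE face corner pairs a position index with a separate UV index, while GL wants a single index stream. The simplest loader just expands every corner into its own (position, UV) vertex; duplicates where a corner is shared with identical UVs are the price. A rough sketch (types and names are illustrative, not from a real ASE loader):

```cpp
#include <array>
#include <vector>

struct Vertex {
    std::array<float, 3> pos;
    std::array<float, 2> uv;
};

// Expand per-face (position index, UV index) pairs into a flat vertex
// array that a GL vertex buffer can consume directly.
std::vector<Vertex> expandFaces(
    const std::vector<std::array<float, 3>>& positions,
    const std::vector<std::array<float, 2>>& uvs,
    const std::vector<std::array<unsigned, 3>>& posFaces,
    const std::vector<std::array<unsigned, 3>>& uvFaces)
{
    std::vector<Vertex> out;
    for (std::size_t f = 0; f < posFaces.size(); ++f)
        for (int c = 0; c < 3; ++c)
            out.push_back({ positions[posFaces[f][c]], uvs[uvFaces[f][c]] });
    return out;
}
```

This is why the *MESH_FACE_LIST and *MESH_TFACELIST indices in the file don't have to match: they index two different lists.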

just some things to think about....good luck anyways

PAUL

##### Share on other sites
Oh yeah, I was going to mention that there may be more texture indices than vertex indices, because the vertices are being shared but some of them use unique UV coordinates per face (even though they might actually be using the same UV coord). One thing you can do is have your program optimize the file and resave it any time a new file is created, or better yet, resave it as an optimized chunk that can be loaded without looping, with shared UV data deduplicated. I actually do this with the RTX format, which is an extended version of ASE; you can use it to include skinned meshes, fog, and so on as well.
As long as your game data isn't too complex/extensive, though, ASE seems fine.
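The deduplication step mentioned here could be sketched like this: re-index a flat vertex list so corners that share both position and UV collapse back into one vertex plus an index buffer (exact-match welding only; names and types are made up for illustration):

```cpp
#include <array>
#include <map>
#include <tuple>
#include <vector>

struct V {
    std::array<float, 3> pos;
    std::array<float, 2> uv;
};

// Exact-match key; good enough when duplicates come from index expansion
// rather than floating-point noise.
static std::tuple<float, float, float, float, float> key(const V& v) {
    return { v.pos[0], v.pos[1], v.pos[2], v.uv[0], v.uv[1] };
}

// Build a welded vertex array plus an index buffer from expanded vertices.
void weld(const std::vector<V>& in,
          std::vector<V>& outVerts, std::vector<unsigned>& outIndices)
{
    std::map<std::tuple<float, float, float, float, float>, unsigned> seen;
    for (const V& v : in) {
        auto [it, inserted] =
            seen.emplace(key(v), static_cast<unsigned>(outVerts.size()));
        if (inserted) outVerts.push_back(v);
        outIndices.push_back(it->second);
    }
}
```

Doing this once at export/resave time, as suggested above, keeps the runtime loader a straight read with no per-vertex searching.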

##### Share on other sites
Thanks guys for all the help! I finally got it to work last night at 2:30 am (or this morning, depending on who you are). Anyway, thanks again!
