About panic

Community Reputation: 211 Neutral

Recent posts by panic:
  1. Hey. I'm working on porting a project from C++ to Windows Phone and have run into a little problem. I have a semi-arbitrary data buffer where I store data in an interleaved manner, kind of like a vertex buffer. In C++ I access the various parts of the data through pointers, which I both read from and write back to. I'm using this as part of a data-driven particle system which may or may not contain all available attributes; it's interleaved because lots of particles will be processed and I'm concerned about cache performance. Here's some "pseudo code" of the C++ implementation.

     Buffer class implementation:

```cpp
class Buffer
{
public:
    // Attribute bit-flags
    enum Attribute
    {
        Attrib_Position = 1 << 0,
        Attrib_Size     = 1 << 1,
        Attrib_TTL      = 1 << 2,
        Attrib_Velocity = 1 << 3,
        Attrib_Rotation = 1 << 4,
        // and so forth
    };

    // Constructor
    Buffer( int maxCount )
        : m_attributes(0)
        , m_stride(0)
        , m_maxCount(maxCount)
    {
        memset( m_attributePtrs, 0, sizeof(unsigned char*) * 32 );
        memset( m_attributeSizes, 0, sizeof(int) * 32 );
    }

    void addAttribute( Attribute attribute, int size )
    {
        // 4-byte alignment
        int aligned_size = ( size + 3 ) & ~3;

        // increment total stride
        m_stride += aligned_size;

        // count trailing zeros of the attribute bit to get an index
        int attribute_index = bit2index(attribute);
        m_attributeSizes[attribute_index] = aligned_size;
        m_attributes |= attribute;
    }

    void initialize()
    {
        // allocate memory to hold all attributes
        m_attributeArray = new unsigned char[ m_stride * m_maxCount ];
        memset( m_attributeArray, 0, m_stride * m_maxCount );

        // set up "access pointers" to the first instance
        // of each attribute in the attribute array
        int offset = 0;
        for( int i = 0; i < 32; ++i )
        {
            if( m_attributes & (1 << i) )
            {
                m_attributePtrs[i] = m_attributeArray + offset;
                offset += m_attributeSizes[i];
            }
        }
    }

    template<typename T>
    inline T* getAttributePtr( Attribute attribute, int index )
    {
        if( (attribute & m_attributes) == 0 )
            return NULL; // attribute doesn't exist

        // count trailing zeros of the attribute bit to get an index
        int attrib_index = bit2index(attribute);

        // return a pointer to the requested attribute at the requested index
        return reinterpret_cast<T*>( m_attributePtrs[attrib_index] + m_stride * index );
    }

    int getMaxCount() { return m_maxCount; }

private:
    int m_attributes;
    int m_stride;
    int m_maxCount;
    unsigned char* m_attributeArray;
    unsigned char* m_attributePtrs[32];
    int m_attributeSizes[32];
};
```

     Initialization and usage:

```cpp
void example_init()
{
    Buffer* buffer = new Buffer(100);

    // Add some attributes
    buffer->addAttribute( Buffer::Attrib_Position, sizeof(Vector3) );
    buffer->addAttribute( Buffer::Attrib_Size,     sizeof(Vector2) );
    buffer->addAttribute( Buffer::Attrib_Velocity, sizeof(Vector3) );
    buffer->addAttribute( Buffer::Attrib_TTL,      sizeof(float) );

    buffer->initialize();
}

void example_usage( Buffer& buffer, float delta )
{
    for( int i = 0; i < buffer.getMaxCount(); ++i )
    {
        Vector3* position = buffer.getAttributePtr<Vector3>(Buffer::Attrib_Position, i);
        Vector2* size     = buffer.getAttributePtr<Vector2>(Buffer::Attrib_Size, i);
        Vector3* velocity = buffer.getAttributePtr<Vector3>(Buffer::Attrib_Velocity, i);
        float*   ttl      = buffer.getAttributePtr<float>(Buffer::Attrib_TTL, i);

        // read/write data
        *ttl -= delta;
        *velocity -= kGravity;
        *position += *velocity * delta;
        *size = lerp( minsize, maxsize, (maxttl - *ttl) / maxttl );
    }
}
```

     I don't know how to migrate this to XNA for Windows Phone in a good and efficient way. I obviously can't use pointers, so what would be a suitable approach if I want to keep the cache friendliness of this implementation? Thanks.
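     The `bit2index` helper used above isn't shown in the post; a minimal portable sketch, matching the contract the post implies (count the trailing zeros of a flag with exactly one bit set), could look like this:

```cpp
#include <cassert>

// Counts trailing zeros of a value with a single bit set, e.g.
// bit2index(1 << 3) == 3. A compiler intrinsic such as
// __builtin_ctz would do the same thing faster.
inline int bit2index(unsigned int bit)
{
    int index = 0;
    while (bit > 1u)
    {
        bit >>= 1;
        ++index;
    }
    return index;
}
```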
  2. panic

    2 c++ problems

    I would try something like this for your second problem:

```cpp
int main()
{
    Listener* listener = new MyListener();
    physicsBody.userData = static_cast<void*>(listener);
    // do things
}

void onEvent( PhysicsBody body )
{
    Listener* listener = static_cast<Listener*>(body.userData);
    listener->onEvent();
}
```

    The point is to cast to void* from the base-class pointer; otherwise the void* will contain the address of a MyListener object, and not the Listener base class. (I might be wrong.)
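    A self-contained sketch of that round-trip through a void* user-data field (the Listener/MyListener types here are illustrative stand-ins, not from any particular physics API):

```cpp
#include <cassert>

struct Listener
{
    virtual ~Listener() {}
    virtual int onEvent() { return 0; }
};

struct MyListener : Listener
{
    int onEvent() { return 42; }
};

// Simulates storing a listener in a void* userData field and recovering
// it later: cast to void* FROM THE BASE POINTER, and cast back to that
// same base type, so virtual dispatch still works.
inline int dispatchThroughUserData()
{
    MyListener concrete;
    Listener* base = &concrete;                 // up-cast first
    void* userData = static_cast<void*>(base);  // store as void*

    Listener* recovered = static_cast<Listener*>(userData);
    return recovered->onEvent();                // virtual dispatch
}
```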
  3. panic

    PNG Alpha and iPhone OpenGL

    That page clearly states that said blend func only applies to images with premultiplied alpha, and also states that PNG does not have premultiplied alpha.
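    For context, converting a straight-alpha pixel (what PNG stores) to premultiplied alpha is a per-channel multiply; a sketch with a hypothetical 8-bit helper, done after decoding and before using a premultiplied blend func:

```cpp
#include <cassert>

// Converts one 8-bit color channel from straight to premultiplied alpha:
// result = channel * alpha / 255, rounded to nearest (+127 avoids truncation).
inline unsigned char premultiply(unsigned char channel, unsigned char alpha)
{
    return static_cast<unsigned char>((channel * alpha + 127) / 255);
}
```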
  4. panic

    .cpp to .app on mac

    Google Xcode
  5. Couldn't find any concrete examples on how to do it. But from reading the Khronos specifications of the extension, I would guess it goes something like this (assuming the GLES version has OES_framebuffer_object support, and EGL also has KHR_gl_renderbuffer_image):

```cpp
GLuint width = 256;
GLuint height = 256;

// Create an offscreen framebuffer with a renderbuffer as color buffer
GLuint framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

GLuint colorbuffer;
glGenRenderbuffers(1, &colorbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, width, height);

glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorbuffer);

// Check completeness while the framebuffer is still bound
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);

glBindRenderbuffer(GL_RENDERBUFFER, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Create the EGLImage
EGLImageKHR eglImage = eglCreateImageKHR(display, context, EGL_GL_RENDERBUFFER_KHR, (EGLClientBuffer)colorbuffer, NULL);
```

     This would presumably create an off-screen render target for GLES which you could render to, and then, using the EGLImage handle, share it with for example OpenVG; as I assume this is the main purpose of EGLImages. I guess you could just as well set up the GLES framebuffer to draw to a GL_TEXTURE_2D instead, and simply pass the handle of that one to eglCreateImageKHR, changing <target> to EGL_GL_TEXTURE_2D of course. I have never messed with this, but it seems reasonable. Don't take my word on it being correct.
  6. panic

    The new forums are worse than the old ones...

    I don't mind the changes in general, but the recent-threads page is awful! I want to see all recent threads, dating back several days, not just those that are new or have been replied to since my last visit. I'm in the habit of quickly popping in for a glance at the recent threads to see if anything interesting has come up, but you don't always have the chance to read or reply to something right then. With the old system you could just make a mental note that there was something interesting you wanted a closer look at, and later skim back through the recent-threads pages until you found it again. Now you need to know which forum that thing was actually posted in to find it again, unless you're lucky enough that the thread is still active when you get back to it. I can't imagine it being a big thing to change, either.
  7. panic

    Normals Calculation

    Quote: Original post by SaLiVa
    EDIT: Okay, so it does seem that OpenGL 4.0 no longer supports normals. Interesting move, but it complicates things just a bit. The following thread sheds more light on what needs to be done to find the normals. http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=286206 You calculate the normals from the Vertex Shader (by taking the cross product) and then the normals output is passed on to the fragment shader.

    OpenGL 4 doesn't "support" any predefined elements like the old fixed-function pipeline did. It only supports arbitrary data; it's up to you to supply that data and tell the GPU how to interpret it, using shaders. To supply data you typically use the command glVertexAttribPointer(index, size, type, normalized, stride, *data), and to interpret this data you write a vertex shader. A quick example to show the relations:

```cpp
// C++ code
struct MyVector3
{
    float x, y, z;
};

const int num = 10; // The number of vertices and normals I want to send to the GPU

MyVector3 MyVertices[num] = { /* here goes vertex data in the form of x,y,z floats;
                                 this data could just as well be loaded from a file */ };
MyVector3 MyNormals[num]  = { /* normal data, x,y,z floats; can also be loaded from a file */ };

/* These are the indices that tell the GPU which shader attribute a set of
   data should be associated with. This is NOT the proper way to do it: to
   be sure you have the correct attribute indices you need to query your
   shader for this information, for example using glGetAttribLocation. */
int vertex_index = 0;
int normal_index = 1;

// Supply this data to the GPU
// Vertices
glVertexAttribPointer(vertex_index, 3 /* 3 floats */, GL_FLOAT,
                      GL_FALSE /* don't normalize */,
                      sizeof(MyVector3) /* size of each element */,
                      MyVertices /* the vertex data */);
// Normals
glVertexAttribPointer(normal_index, 3, GL_FLOAT,
                      GL_FALSE /* I assume the normals are normalized already */,
                      sizeof(MyVector3), MyNormals);
```

```glsl
// Vertex shader code
// (newer GLSL versions spell "attribute" as "in")

/* This is the first attribute, which most likely (no guarantee!)
   will correspond to vertex_index = 0. */
attribute vec3 attribVertices;

/* This is the second attribute which, again, will probably
   correspond to normal_index = 1. */
attribute vec3 attribNormals;

void main(void)
{
    /* Do stuff in the shader; you typically want to supply transformation
       matrices as uniforms to the shader in order to do anything worthwhile. */
}
```

    This was just a quick rundown focused on supplying data and how it relates to what you define in the vertex shader. There is a lot more to it, like the uniforms I mentioned, and the actual transformations. Attributes can be named whatever you like.
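    Since the quoted thread's actual goal is computing normals, a minimal cross-product face normal in the same MyVector3 style might look like this (illustrative; the helper name is mine, not from the post):

```cpp
#include <cassert>
#include <cmath>

struct MyVector3 { float x, y, z; };

// Face normal of triangle (a, b, c): normalize(cross(b - a, c - a)).
inline MyVector3 faceNormal(const MyVector3& a, const MyVector3& b, const MyVector3& c)
{
    const MyVector3 u = { b.x - a.x, b.y - a.y, b.z - a.z };
    const MyVector3 v = { c.x - a.x, c.y - a.y, c.z - a.z };
    MyVector3 n = {
        u.y * v.z - u.z * v.y,
        u.z * v.x - u.x * v.z,
        u.x * v.y - u.y * v.x
    };
    const float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```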
  8. It makes sense that it isn't faster than using GL_RGBA, judging from the GLES specification. But it seems odd that it would be slower, unless this is done for each texture fetch, which to me seems like a silly thing to do:

     Quote:
     GL_ALPHA: Each element is a single alpha component. The GL converts it to floating point and assembles it into an RGBA element by attaching 0 for red, green, and blue. Each component is then clamped to the range [0,1].
     GL_LUMINANCE: Each element is a single luminance value. The GL converts it to floating point, then assembles it into an RGBA element by replicating the luminance value three times for red, green, and blue and attaching 1 for alpha. Each component is then clamped to the range [0,1].
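     The expansion the quoted spec text describes can be modeled literally; a small illustrative sketch (the struct and function names are mine, not GL's):

```cpp
#include <cassert>

struct RGBA { float r, g, b, a; };

// GL_ALPHA: the element becomes (0, 0, 0, alpha).
inline RGBA expandAlpha(float alpha)
{
    RGBA c = { 0.0f, 0.0f, 0.0f, alpha };
    return c;
}

// GL_LUMINANCE: the element becomes (lum, lum, lum, 1).
inline RGBA expandLuminance(float lum)
{
    RGBA c = { lum, lum, lum, 1.0f };
    return c;
}
```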
  9. panic

    HIII :) i have favor to ask :)

    I wasn't mocking you either. I was giving you some realistic advice. Perhaps in a harsh way, sorry for that.
  10. panic

    HIII :) i have favor to ask :)

    To derail from all the sarcastic replies so far: judging from that list, you want to make the next ZOMFG-World-of-Warcraft-killer MMO, like every other new game developer dreams of. Pro tip of the day: dream, but leave it at that for now. Start out with something simple, a text-based adventure game or a clone of Pong; just something that gets you started without too many questions. After a few years of experience you can start thinking of your dream again; by then you will probably have enough experience to understand it's an impossible task without an army backing you up.
  11. panic

    GLSL Bytecode

    Looks like you are trying to compile a vertex shader with a fragment shader profile.

    Quote:
    3. cgc -oglsl -profile arbfp1 normal_map.fp > normal_map.frag.asm
    In this example, normal_map.fp is the GLSL fragment program and normal_map.frag.asm will be the compiled assembly file.
  12. Quote: Original post by adder_noir
     2) Apparently no storage is allocated by 'operator new', yet we have created an object (SerialPort). Ok well if there is no storage allocated then where the hell does the created object go? I really don't like this chapter, it seems to be messing with things that look dead certain to create nasty compile-time and run-time errors to me. That's just my impression though, I am new aterall ;o) Any ideas? Thanks for any help anyone can give ;o)

     The object com1 will simply occupy the memory at 0x400000, whatever that may be. If I were to guess, this book was written for systems without protected memory, and this portion wants to create an instance of a class that directly communicates over the serial port, which presumably resides at memory address 0x400000 on said system. This is however not the case on modern operating systems, and doing anything with com1 created this way will most likely cause run-time crashes.
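     The construct the book uses is placement new: constructing an object at a given address instead of allocating storage. On a modern OS a fixed address like 0x400000 isn't ours to touch, but the same mechanism is legal on storage we own; a sketch (the SerialPort stand-in here is mine, not the book's):

```cpp
#include <cassert>
#include <new>

// Illustrative stand-in for the book's SerialPort class.
struct SerialPort
{
    int status;
    SerialPort() : status(1) {}
};

inline int placementDemo()
{
    // Storage we own, suitably aligned for a SerialPort.
    alignas(SerialPort) unsigned char storage[sizeof(SerialPort)];

    // Placement new: constructs in `storage`, no heap allocation.
    SerialPort* com1 = new (storage) SerialPort;
    int s = com1->status;

    // Placement new requires an explicit destructor call.
    com1->~SerialPort();
    return s;
}
```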
  13. Seems a bit overkill with loops etc. Plus, you are not actually packing to RGBA, which your variable name hints towards; you are packing to ABGR. A faster "pack" would be:

```cpp
#define pack_rgba(r, g, b, a) ((Uint)(((r) << 24) | ((g) << 16) | ((b) << 8) | (a)))
```
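     As a side note, a plain function avoids the usual macro pitfalls (unparenthesized arguments, double evaluation); a sketch, assuming Uint is a 32-bit unsigned type (the _fn suffix is just to keep it distinct from the macro):

```cpp
#include <cassert>
#include <cstdint>

typedef std::uint32_t Uint; // assumption: Uint is a 32-bit unsigned integer

// Packs four 8-bit channels into a single value laid out as 0xRRGGBBAA.
inline Uint pack_rgba_fn(Uint r, Uint g, Uint b, Uint a)
{
    return (r << 24) | (g << 16) | (b << 8) | a;
}
```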
  14. panic

    OpenGL Memory Overwrite

    That is probably not a good solution :) OpenGL is not supposed to write ANYTHING outside of the 4-byte handle pointer. Step through your code and keep track of exactly where your struct is getting messed up. If it happens after you called glGenTextures then I'm stumped and would blame the driver :) You say it's messed up when you get to the drawing part, but what else has happened in between? In this case I would say that you yourself are causing the overwrite somewhere along the way. Without the entire code flow it's impossible to say, but I personally would blame OpenGL only when all other options had been exhausted. And even at that point it would probably turn out I did something really silly that I just missed. Debug your code :)
  15. panic

    Templates and GCC

    Quote: Original post by phresnel
    $ cat lazy.cc
    template <typename T> void does_it () {
        (&T())->pardon_but_gcc_does_not();
    }
    int main () { }
    smach@devxp2 /c/Dokumente und Einstellungen/smach/Desktop
    $ g++ -Wall lazy.cc
    smach@devxp2 /c/Dokumente und Einstellungen/smach/Desktop
    $
    It is mostly conformant; the only gripe I have is that the point of instantiation is not always fully conformant (see also the one book by Josuttis/Vandevoorde), but two-phase lookup is implemented (from my experience) correctly.

    I meant more like this:

```cpp
template <typename T>
void i_said_completely_bonkers()
{
    undefined_object->fail();
    hell, even this line is fine in VC2008!
}

int main (int argv, const char** argc)
{
    return 0;
}
```

    This code will compile in VC2008; it wouldn't compile in VC2008 if the template argument wasn't there. It won't compile with GCC, since it's obviously completely bonkers. Just like the OP's original code, except he was declaring undefined_object from an undefined class (forward declared only).