# OpenGL: How should I store vertex data for a mesh?


## Recommended Posts

Hi all, I am a bit confused about how I should store the vertices for a mesh. Should I have an array of vertices that are shared by polygons, plus an array of polygons that say which vertices to connect, like what you get from a model saved in the 3ds file format? Or should I have an array of floats for the vertices in the order of the polygons? Both methods seem to have pros and cons. With the former, data is not repeated, which has always seemed like a good thing to me, but how do I hand that data to OpenGL? With the latter I can easily send it to be rendered with a vertex array or whatever, but it seems wasteful, because there is no way to know which vertices are shared when calculating vertex normals etc. So how do you store your data and pass it to OpenGL? Thanks in advance.

##### Share on other sites
The modern way to go is to use vertex buffer objects (VBOs) for rendering. There you have vertices and faces, the latter given either implicitly as vertex sequences or (a bit more explicitly) as sequences of indices into the vertex array. Indexed vertices are the most efficient when vertices can be shared between faces. Using any other representation means you have to convert.
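For instance, here is a minimal sketch of an indexed VBO for a quad (assuming an active context where the GL 1.5 buffer-object entry points are available; the data and variable names are just for illustration):

```cpp
// Four shared corner positions of a quad (x, y, z) ...
GLfloat positions[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     1.0f,  1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f
};
// ... and two triangles referencing them by index, so no vertex is duplicated.
GLushort indices[] = { 0, 1, 2,  0, 2, 3 };

GLuint vbo = 0, ibo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

// Drawing: the index buffer tells GL which shared vertices form each triangle.
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, 0);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
glDisableClientState(GL_VERTEX_ARRAY);
```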

The above works well for rendering. Animation, or even more so interactive editing of meshes, may make it useful to keep more topological information around.

In other words, the solution depends a bit on the situation.

##### Share on other sites
I'm a little torn between vertex arrays and display lists at the moment (though I'll end up back with VBOs in the long run).

So I'll just pick up on your vertex normal comment; you can easily calculate one by averaging the normals of all the faces that share the vertex (the result will vary depending on tessellation and any boundaries).
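As a rough sketch of that averaging (assuming an indexed triangle list as discussed above; the vector type and function names here are made up for illustration):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }

// Accumulate each triangle's face normal onto its three vertices, then normalize.
std::vector<Vec3> computeVertexNormals(const std::vector<Vec3>& positions,
                                       const std::vector<unsigned short>& indices)
{
    std::vector<Vec3> normals(positions.size(), Vec3{0.0f, 0.0f, 0.0f});
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        const Vec3& a = positions[indices[i]];
        const Vec3& b = positions[indices[i + 1]];
        const Vec3& c = positions[indices[i + 2]];
        Vec3 n = cross(sub(b, a), sub(c, a));   // unnormalized face normal (area-weighted)
        for (int k = 0; k < 3; ++k) {
            Vec3& vn = normals[indices[i + k]];
            vn.x += n.x; vn.y += n.y; vn.z += n.z;
        }
    }
    for (Vec3& n : normals) {
        float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}
```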

##### Share on other sites
Thanks, I didn't know that VBOs could use a sequence of vertex indices. Sounds like that is the neat solution I am searching for, so I guess I should start googling.

Do you know of any good tutorials?

Thanks again.

##### Share on other sites
A vertex always describes all (needed) attributes of the surface at a given point. So it is a composite of the position itself, the normal, tangent, and bi-normal vectors at that position, the color(s) at that position, and the texture coordinates at that position, just to name the most common ones. If you are using VBOs and even a single one of these values differs, you need to store a second vertex. Immediate mode allows you to assemble the vertex at runtime at CPU cost, but that way is slow and will no longer be part of core OpenGL from version 3 on (however, it will presumably still be available through a utility library).
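To illustrate (the attribute set and names below are just an example, not a fixed rule), such a composite vertex is commonly laid out as one interleaved struct:

```cpp
// One self-contained vertex: if even a single attribute differs between two
// uses of "the same" corner, a second entry of this struct has to be stored.
struct Vertex
{
    float         position[3];
    float         normal[3];
    float         texcoord[2];
    unsigned char color[4];   // RGBA
};

// With an interleaved layout the stride is sizeof(Vertex), and each
// gl*Pointer call is given the byte offset of its member within the struct.
```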

The mesh representation as vertices and sequences of polygon corners (whether as index lists or not) still carries topological information about vertices, edges, faces, and partly about shells. So you can of course find shared vertices in a VBO-like representation, although doing so is obviously inefficient, since some dependencies are only given indirectly and hence require the algorithm to do many searches. On the other hand, as already stated in my previous post, how often do you encounter situations in which you still need to access the topology that way?

The project I'm currently working on deals with both interactive editing and, of course, display of large numbers of meshes. For the interactive editing purposes, the project has a mesh structure named EditMesh. This kind of mesh has explicit knowledge of vertices, edges, loops, faces (n-gons), holes, shells, regions, and voids as first-class topological (and partly also geometrical) elements, and of edge shares as well as face shares as helper elements.

While a mesh is being edited, the relations between the elements are explicitly given in both top-down and bottom-up order. When it is not being edited, roughly half of the relations are not available; when editing starts, that half is reconstructed, and when editing ends, it is dropped again. Reconstruction is fast enough as long as you are not dealing with meshes of a million vertices. The advantage is that, although many meshes are loaded simultaneously, the total memory consumption stays relatively small, since only one (or at most a few) meshes are edited at a time. Actually, the EditMesh is not VBO friendly, since it is a kind of multi-indexed array set.

Hence the above mesh representation is not suitable for fast rendering. For this purpose an entirely different kind of mesh exists, namely the RenderMesh class. Whenever needed, the faces of the EditMesh are processed (i.e. triangulated), and a RenderMesh is computed from them. The RenderMesh, as you have probably guessed, is very VBO friendly.

The computation of the RenderMesh is backed by the EditMesh, and hence has all geometrical and topological information at hand, including what is needed to decide how normals are to be computed (i.e. which faces contribute to which vertex normal). Although often only a simple "the mesh is smoothed or not" flag may be used, the EditMesh allows a finer granularity. However, once the RenderMesh has been computed, in many cases there is no further need to keep the EditMesh at hand until the next edit. So, at the "compiled" level, EditMeshes are very rare, but RenderMeshes are found at every corner.

Well, many words to show just "use whatever is suitable in a given situation".

##### Share on other sites
Thanks,

I think I understand what you're saying, and what I want to know is what sort of data is stored in your RenderMesh class. Is it just the arrays of vertex sequences?

Regards,

Ben

##### Share on other sites
_What_ is stored is principally: the vertex data (whatever is needed of position, normal, ...), zero or more index arrays, and the kind of primitive for each (index) array, e.g. TRIANGLES or LINES. This information is used to fill up a GfxRenderingJob, which is really just a vehicle for sorting rendering by shader and material settings; I'm sure you have already read about that approach.

If you're interested in _how_ it is stored: The nitty-gritty details are numerous...

Since the shaders may need various data, it cannot always be predicted which compositions of vertex data will need to be passed to the API. Even if it were possible, the number of combinations might be too great. Hence the vertex data is actually stored in "unstructured" octet arrays, overlaid by (more or less) primitive types as and when needed. A description of how the overlays are to be interpreted is stored as metadata, e.g. the byte offset, primitive data type, and semantics. (For performance reasons, I don't keep one such array per mesh but several larger arrays, but you don't necessarily need to care about that at the moment.) All of this is later used, when the renderer actually performs a GfxRenderingJob, to parametrize the various glXxxPointer and related routines.
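A minimal sketch of that idea (the descriptor struct and its field names are purely illustrative, not the actual classes from the project; it assumes the usual GL headers):

```cpp
#include <cstddef>

// Metadata describing how one attribute overlays the raw byte array.
struct AttributeDesc
{
    enum Semantic { Position, Normal, TexCoord, Color } semantic;
    GLenum      type;        // e.g. GL_FLOAT
    GLint       components;  // e.g. 3 for a position
    GLsizei     stride;      // bytes between consecutive vertices
    std::size_t byteOffset;  // offset of this attribute within the byte array
};

// At draw time the renderer walks the descriptors and sets up the client
// state accordingly (fixed-function style, GL 1.5/2.x). When a VBO is bound,
// "base" is the offset into the buffer rather than a client pointer.
void applyAttribute(const AttributeDesc& d, const GLvoid* base)
{
    const GLubyte* ptr = static_cast<const GLubyte*>(base) + d.byteOffset;
    switch (d.semantic) {
    case AttributeDesc::Position:
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(d.components, d.type, d.stride, ptr);
        break;
    case AttributeDesc::Normal:
        glEnableClientState(GL_NORMAL_ARRAY);
        glNormalPointer(d.type, d.stride, ptr);
        break;
    case AttributeDesc::TexCoord:
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glTexCoordPointer(d.components, d.type, d.stride, ptr);
        break;
    case AttributeDesc::Color:
        glEnableClientState(GL_COLOR_ARRAY);
        glColorPointer(d.components, d.type, d.stride, ptr);
        break;
    }
}
```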

Since there are situations where a mesh is so-called "static", a copy will presumably be made in VRAM or similar (see the various GL_STATIC_DRAW, GL_DYNAMIC_DRAW, GL_STREAM_DRAW, ... usage hints). The renderer needs to know whether such a copy exists, or whether it has to refresh the copy from main memory. It further needs to know which VBOs are associated with a mesh. This information is also available from the RenderMesh class (although not necessarily directly).
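Sketched out (using a hypothetical dirty flag, not the project's actual bookkeeping), that might look like:

```cpp
// Upload with a usage hint matching how often the data changes, and only
// refresh the GPU copy when the CPU-side data has actually been touched.
struct MeshBuffer
{
    GLuint vbo   = 0;
    bool   dirty = true;            // CPU copy changed since last upload?
    GLenum usage = GL_STATIC_DRAW;  // or GL_DYNAMIC_DRAW / GL_STREAM_DRAW
};

void syncBuffer(MeshBuffer& b, const void* data, std::size_t bytes)
{
    if (b.vbo == 0)
        glGenBuffers(1, &b.vbo);
    glBindBuffer(GL_ARRAY_BUFFER, b.vbo);
    if (b.dirty) {
        glBufferData(GL_ARRAY_BUFFER, bytes, data, b.usage);  // (re)create and upload
        b.dirty = false;
    }
}
```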

##### Share on other sites
Wow, you are a champ.

Thanks again,

Ben

##### Share on other sites
I store all my meshes as a vertex array and a triangle array.
My triangle structure also holds references to neighbouring triangles; that makes it easier to calculate vertex normals and to do efficient ray casting (e.g. using Plücker coordinates I only test one edge per triangle).

That also allows you to apply operations like edge collapse to remove redundant triangles.

```cpp
class Triangle
{
    uint    m_Vertex[3];
    int     m_Neighbour[3];
public:
    Triangle(uint a = 0, uint b = 0, uint c = 0)
    {
        m_Vertex[0] = a;
        m_Vertex[1] = b;
        m_Vertex[2] = c;
        m_Neighbour[0] = -1;   // -1 means "no neighbour across this edge"
        m_Neighbour[1] = -1;
        m_Neighbour[2] = -1;
    }
    const int&  Neighbour(uint i) const { return m_Neighbour[i]; }
    int&        Neighbour(uint i)       { return m_Neighbour[i]; }
    const uint& Vertex(uint i) const    { return m_Vertex[i]; }
    uint&       Vertex(uint i)          { return m_Vertex[i]; }
};
```

##### Share on other sites
Basiror, thanks for the neat Triangle class, but I'm not quite ready for that yet.

OK, I'm having a few problems getting VBOs with indexing to work.

```cpp
class Model
{
public:
    Model();
    ~Model();

    bool load(const char *pFileName);
    void render();
    void createVBO();

protected:
    unsigned short mVertexQty;
    unsigned short mPolygonQty;

    float          *mVertex;
    unsigned short *mPolygon;

    unsigned mVBOCoordinatesID;
    unsigned mVBOIndiciesID;
};

void Model::createVBO()
{
    glGenBuffers(1, &mVBOCoordinatesID);
    glGenBuffers(1, &mVBOIndiciesID);

    glBindBuffer(GL_ARRAY_BUFFER, mVBOCoordinatesID);
    glBufferData(GL_ARRAY_BUFFER, mVertexQty * 3 * sizeof(float), mVertex, GL_STREAM_DRAW);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mVBOIndiciesID);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, mPolygonQty * 3 * sizeof(unsigned short), mPolygon, GL_STREAM_DRAW);
}

void Model::render()
{
    glBindBuffer(GL_ARRAY_BUFFER, mVBOCoordinatesID);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mVBOIndiciesID);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, 0);
    glDrawElements(GL_TRIANGLES, mPolygonQty, GL_UNSIGNED_BYTE, 0);
    glDisableClientState(GL_VERTEX_ARRAY);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}
```

Am I on the right track?
It doesn't crash, but there is a blob of polygons that doesn't look like the model at all.
