Search the Community

Showing results for tags 'C++'.
Found 803 results

  1. Hello, everyone! I've implemented a simple library in C++ and uploaded it to GitHub. Now I want to be sure that it works on Windows/Linux/OS X. I have some background in Java/Python and only a bit of C++. In Java I'd just write unit tests, and if they passed I'd trust the code to work everywhere; the same goes for Python. But now the library is in C++, and I want to ensure it is at least buildable under Windows, Linux and OS X. My wife has a MacBook Pro, and I own a laptop with Ubuntu 16.04 and a PC with Windows 8. I see a few options:

     A. Just do it manually:
     1. Check on the Windows machine.
     2. Copy the code to the Ubuntu machine, open it in Qt Creator, and run it there.
     3. The same for OS X. This is the most uncomfortable part, since I'm not the owner of the MacBook.

     B. Automate it:
     1. Install SSH daemons on all the machines.
     2. Install git and cmake on all the machines as well.
     3. Run git pull and ctest remotely. Overall it would look like:

        ssh user@ubuntu_laptop 'bash -s' < git_pull_ctest.sh

     What I wonder about: how is this usually done in C++ projects? What's a 'golden standard' for this kind of thing? What would you do? Thank you for your attention!
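     For what it's worth, option B is essentially a hand-rolled CI loop, and CTest fits it well: any executable that exits with code zero on success can be registered with enable_testing() / add_test(NAME smoke COMMAND smoke_test) in CMakeLists.txt, after which plain 'ctest' runs it on each machine. A minimal, dependency-free test executable might look like the sketch below (mylib.h and add() are placeholder names, not from the post):

     // smoke_test.cpp - minimal sketch; CTest treats exit code 0 as "pass".
     #include <cstdlib>
     #include <iostream>
     // #include "mylib.h"   // the library under test (hypothetical name)

     static int add(int a, int b) { return a + b; } // stand-in for a real library call

     int main() {
         if (add(2, 2) != 4) {
             std::cerr << "add(2, 2) failed\n";
             return EXIT_FAILURE;   // non-zero exit = CTest reports a failure
         }
         std::cout << "all tests passed\n";
         return EXIT_SUCCESS;
     }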
  2. Atum engine is a newcomer in the world of game engines. Most game engines put rendering techniques at the top of their feature lists; the main goal of Atum is to deliver the best toolset. That's why, I hope, Atum will be a good lightweight alternative to Unity for indie games. Atum already has a fully workable editor with the ability to playtest the edited scene. All system code is built on simple ideas and focuses on easy-to-use functionality, which is why the code is minimized as much as possible. All source can be found here: https://github.com/ENgineE777/Atum In case you have questions about the Atum engine, do not hesitate to contact me via email: enginee777@gmail.com Currently the engine has the following features:
     - PC, Android and iOS platforms are supported
     - Scene editor with the ability to playtest the edited scene
     - Powerful system for binding properties into the editor
     - Powerful assets system
     - Track-based editor for creating animated objects
     - Sprite editor
     - Render system that supports DX11 and OpenGL
     - UI system based on assets
     - Script system based on AngelScript
     - Controls system based on aliases
     - Font system based on stb_truetype.h
     - Support for PhysX 3.0; there are samples in the repo that use physics
     - Network code for creating a server/client; there is sample code for this in the repo
  3. I have been playing around with an old game source (~2002, DX8), upgrading it to DX9 (for now) and getting it to work - just for the sake of learning (instead of writing new code and learning little). But since the source doesn't seem to have working shader logic, it's up to me to fix it. I'm currently limited to vs_1_1 (asm - the game also has precompiled shaders, and I don't want to start rewriting before I get it working in the first place). The shader I have a problem with is one they apply to some items added to the map. The ones without a shader work fine - the matrix is calculated correctly (rotation, scale, position etc.). The non-shader version passes it to:

     SetTransform(D3DTS_WORLD, (D3DXMATRIX*)w_matrix)

     And this works fine. The shader path instead passes it to the vertex shader setup code that sets all the shader constants and calls SetVertexShader(). And this is where the entire map gets messed up - stuff collapses to a single point or isn't even visible anymore. So what does SetTransform do in this case that I could replicate with a shader / what should I pass in? Taking a basic shader something like this (or even less)?

     vs_1_1
     dcl_position v0
     ; Transform to view space (world matrix is identity)
     ; m4x4 r9, v0, c0
     ; Transform to projection space
     ; m4x4 r10, r9, c4
     ; Store output position
     mov oPos, v0

     D3DXMATRIXA16 viewMat;
     D3DXMatrixMultiply(&viewMat, (D3DXMATRIX*)mWorldMat, &state->mMatView);
     D3DXMatrixTranspose(&viewMat, &viewMat);
     pd3dDevice->SetVertexShaderConstantF(0, (float*)&viewMat, 4);

     // Set the projection space constant
     D3DXMATRIXA16 projMat;
     D3DXMatrixTranspose(&projMat, &state->mMatProj);
     pd3dDevice->SetVertexShaderConstantF(4, (float*)&projMat, 4);

     I'm not 100% sure the data in state is correctly stored - so what should those two matrices actually be, and are they even needed to replicate how D3DTS_WORLD works? Can you get the world/projection matrix from DirectX? I've spent two days on this and can't get it working. I can update with any additional info if required. I need a pointer in the right direction at least. PS: it turned out a SetVertexShader(0) was missing later in the code, to stop the shader from applying to everything else.
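     For reference, one way to replicate what SetTransform(D3DTS_WORLD, ...) plus the fixed-function pipeline was doing is to concatenate the full world*view*projection on the CPU and let the shader apply it in one m4x4. A sketch along the lines of the setup code above, untested against this particular engine:

     // Concatenate world*view*proj on the CPU, transpose it (vs_1_1 m4x4
     // expects the matrix rows in consecutive constant registers), and
     // upload it as c0-c3.
     D3DXMATRIXA16 wvp;
     D3DXMatrixMultiply(&wvp, (D3DXMATRIX*)mWorldMat, &state->mMatView);
     D3DXMatrixMultiply(&wvp, &wvp, &state->mMatProj);
     D3DXMatrixTranspose(&wvp, &wvp);
     pd3dDevice->SetVertexShaderConstantF(0, (float*)&wvp, 4);
     // ...and in the shader: "m4x4 oPos, v0, c0" instead of "mov oPos, v0".

     Note that the posted shader's mov oPos, v0 outputs untransformed positions, because both m4x4 lines are commented out; that alone would scramble the geometry exactly as described.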
  4. I am trying to write a function for the CPU to keep calling a play in a do-while loop (which I am assuming is correct), with a pause in between, and for the menu to be brought back up to the user after each selection from the playbook function that contains the playbook menu. I am not an expert with cin, but my playbook function works perfectly when the user is entering the plays; the cursor hangs when it calls function_cpu for the CPU to make its selection. I am sure I am missing something in the code here. I put some notes in the code below.
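     The posted code did not come through, so as a sketch only, the loop pattern being described (playbook(), function_cpu() and the end condition are hypothetical stand-ins) might look like this; a common cause of a "hanging cursor" is an accidental std::cin read inside the CPU path, which should not read input at all:

     #include <chrono>
     #include <cstdlib>
     #include <iostream>
     #include <thread>

     // Hypothetical stand-ins for the poster's functions.
     int playbook() {                       // shows the menu, returns the user's play
         int play = 0;
         std::cout << "Pick a play (0 to quit, 1-4): ";
         std::cin >> play;
         return play;
     }

     int function_cpu() { return 1 + std::rand() % 4; } // CPU pick; no cin involved

     int main() {
         bool game_over = false;
         do {
             int user_play = playbook();    // menu re-appears every iteration
             int cpu_play  = function_cpu();
             std::cout << "You: " << user_play << "  CPU: " << cpu_play << "\n";
             std::this_thread::sleep_for(std::chrono::seconds(1)); // the pause in between
             game_over = (user_play == 0); // some end condition
         } while (!game_over);
     }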
  5. 3dBookman

    The 3D book

    After a break of several years the 3D book project is back on. A few short words now on what this blog is about. I have to deliver my wife to the bus station in a few minutes, then a week alone, so I may have the time then to explain things. But the 3D book is something I started in 2014 and put several years into, then the break, now on again. A Win32 app with a text window and an OGL window. I just remembered I had something written on this, so here it is:

    I write to see if anyone in this community of game developers, programmers and enthusiasts may be interested in a project I have been developing [off and on] for several years now. So follows a short description of this project, which I call the 3D-Book project.

    The 3D-Format Reader: a new format of media. Imagine opening a book: the left page is conventionally formatted text - on the right page, a 3D animation of the subject of the text on the left-hand page. The text page takes user input from mouse and keyboard, the 3D page takes user input from a game pad. An anatomy text for a future surgeon, with a beating heart in 3D animation. A children's story adventure book with a 3D fantasy world to enter on the right page. ...

    Currently the 3D-Format Reader consists of a C++ Windows program: two "child" windows in a main window frame. Two windows: a text-2D rendering window and a 3D-rendering window. The text-2D window, as its name implies, displays text and 2D graphics; it is programmed using Microsoft's DirectWrite text formatting API and Microsoft's Direct2D API for 2D graphics. The 3D-rendering window uses the OpenGL API. A 3DE-Book page is formatted in one of two possible modes: DW_MODE or GL_MODE. In GL_MODE both windows are shown; the text-2D rendering window is on the left and the 3D OpenGL window is on the right. In DW_MODE only the text-2D rendering window is shown; the OpenGL window is hidden (logically it is still there, it has just been given zero width). The 3D-Format Reader reads text files consisting of the text of the book, control characters for the formatting of text (bold, underline, ...), display of tables, loading of images (.jpg, .png, ...), and control of 2D and 3D routines.

    3D-Reader programming is based on a Model-View-Controller (MVC) architecture. The MVC design is modular: the Controller component handles user input from the operating system, the Model component processes the input, and the View component sends output back to the user on the display. Typical parent-child windows programs have multiple "call-back" window procedures (winProcs): one for the parent window and one for each child window. The MVC model simplifies message routing by using a single call-back window procedure which receives Windows messages for the main window, the text-2D window and the OGL window. A sample MVC program by Song Ho Ahn was used as a template for the 3DE-Reader.

    Rushed for time now, so a hasty sign-off and thanks for reading.

    --------------------------------------------------------------------------------

    8-21-18

    I spent the last few days working on procedural mesh generation, first looking for a bit of code to do what I had in mind. Which begs the question: what did I have in mind? I just wanted a cube mesh generator such that...

    Requirements
    Input: An integer n = units from origin to cube face.
    Output: The vertices for a unit cube centered on the origin. 8n² triangles per cube face.
    3 × 8n² verts in clockwise winding order (from the outside of the cube), ready for the rendering pipeline.

    Screenshot of some cubes generated with the procedural cube mesh generator.

    That was about it for the output requirements. I did not want to hand-code even a single vertex and did not want to load a mesh file. I was sure the code was out there somewhere, but was not finding it. So, a bit reluctantly at first, I started coding the mesh generator. I started enjoying creating this thing and stopped searching for the "out-there-somewhere" code, although I'm still curious how others did this.

    Analysis

    First question: how do we number the verts? It would be great to conceive of some concise algorithm to put out the cube face verts all in clockwise order for the outside faces of the cube directly. That seemed beyond me, so I plodded along step by step. I decided to just use a simple nested loop to generate the cube face verts and number them in the order they were produced. The hope (and the presumption) was: the loop runs thru the x, y and z coordinates in order, from -n to +n, therefore the output would be a recognizable pattern. The simple nested loop vert generator did not let us down: it gave us a recognizable pattern, at least for this face. It turned out (as expected now) that all six faces have similar recognizable patterns. Plotting the first row or two of verts you can easily see how to run the rest of the pattern.

    Plot of the first (of six) cube face's verts output by the vert generator, for input n. This is looking at the x = -n face from the outside of the cube.

    There are (2n+1)² verts per cube face, or 25 verts for n = 2. To simplify the math it helps to define s = 2n. Then there are (s+1)² verts, or 25 for s = 4, and s² cells on the face, or 16 for s = 4. We are going to divide each cell into 2 triangles, so there are 2s² triangles per face, or 32 for s = 4.

    Second question: what pattern for the triangles? How do we number the 2s² = 32 triangles? What we want in the end is a bit of code such that for triangles T[0] thru T[2s²-1], or T[0] thru T[31] for s = 4, we have T[N] = f0(N), f1(N), f2(N), where f0(N) gives the first vertex of T[N] as a function of N, and f1 and f2 give the second and third verts, all in CW winding order looking into the cube of course. Here the choice is a bit arbitrary, but it would seem to make things easier if we can manage to have the order of triangles follow the order of verts to a degree.

    Numbering the triangles. And now the problem becomes: look at the triangle vert list, T0 - T8 ... T31 in the image, and try to discern some pattern leading us to the sought-after functions f0(N), f1(N), f2(N), where N is the number of the triangle, 0 thru 2s²-1. This really is the holy grail of this whole effort; then we have T[N] = f0(N), f1(N), f2(N), and that list of verts can be sent directly to the rendering pipeline. Of course we want these functions to work for all six faces and all 12s² triangles covering the cube, but first let's see if we can just do this one face, 0 thru 2s²-1.

    Thru a bit of trial and error the 32 triangles (T0 - T31) were ordered as shown. Now we have an ordered list of the triangles and the verts from our loop:

    T0 = 0 5 6    T1 = 6 1 0
    T2 = 1 6 7    T3 = 7 2 1
    T4 = 2 7 8    T5 = 8 3 2
    T6 = 3 8 9    T7 = 9 4 3
    T8 = 5 10 11
    ... T30 T31.

    If we can find a pattern in the verts on the right side of this list, we can implement it in an algorithm and the rest is just coding.
    Pattern recognition: it appears T2 = T0 with 1 added to each component, and T3 = T1 with 1 added to each component. In general T[N+2] = T[N] with 1 added to each component, until we come to T8 at least. Also it is hard to recognize a relation between the even and odd triangles. To see what is happening here it helps to look at an image of the generalized case, where n can take on any integer value n > 0. Looking for patterns in this generalized (for any n) vert plot we see:

    We have defined s = 2n. The 4 corner coordinates (±n, ±n) of the x = -n cube face, one at each corner. There are (s+1)² verts per face, numbered 0 thru (s+1)²-1. There are 2s² triangles per face, numbered 0 thru 2s²-1, indicated in red.

    It's not as bad as it looks if you break it down. Let's look at the even triangles only, and just the 0th vert of these triangles. For any row, the number of that first vert of the even triangles just increases by one going along the row. We can even try a relation such as T[N].0 = N/2, where T[N].0 denotes the 0th vert of the Nth triangle. This works until we have to jump to the next row: every time we jump a row we have T[N+1].0 = T[N].0 + 2 for the first triangle in the higher row. So we need a corrective term for the T[N].0 = N/2 relation that adds 1 every time we jump a row. We can use computer integer division to generate such a term, and N/2s is such a term: it only changes value when we jump rows. And we get our first function:

    f0(N) = N/2 + N/2s    (even triangles)

    Remember the integer division will discard any remainder from the terms. Check that this works for the entire cube face - but only for the even triangles. What about the odd triangles? Going back to the triangle-vs-vert list for the specific case n = 2, s = 4, for the first row we see that for the odd triangles T[N].0 = T[N-1].0 + s + 2. Adding this term, s + 2, to the formula for the even triangle's 0th vert, we get f0(N) for the odd triangles:

    f0(N) = N/2 + N/2s + s + 2    (odd triangles)

    Continuing this somewhat tedious analysis for the remaining functions f1(N), f2(N), we eventually have these relations for the x = -n cube face triangles, for N = 0 thru N = 2s²-1, defining m = N/2 + N/2s:

    T[N] = m, m + s + 1, m + s + 2 = f0(N), f1(N), f2(N)       (even N)
    T[N] = m + s + 2, m + 1, m     = f0'(N), f1'(N), f2'(N)    (odd N)

    So it turns out we have two sets of functions for the verts: fn(N) for the even triangles and fn'(N) for the odd. To recap: we now have formulae for all the T[N] verts as functions of N and the input parameter n (an integer = units from origin to cube face). But this is only for the first face, x = -n; we have five more faces to determine. So the question is: do these formulae work for the other faces? And the answer is no, they do not; but going through a similar analysis for the remaining faces gives similar T[N] = f0(N), f1(N), f2(N) for them. There is still the choice of how to number the remaining triangles and verts on the remaining five faces, and the f0(N), f1(N), f2(N) will depend on the somewhat arbitrary choice of how we do the numbering. For the particular numbering scheme I ended up choosing, it became clear how to determine the f0(N), f1(N), f2(N) for the remaining faces. It required making generalized vert plots for the remaining five faces, similar to the previous image.
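    As a quick sanity check of the single-face formulae, a few lines of C++ (not part of the original generator) reproduce the T0-T8 list above for s = 4:

    #include <iostream>

    int main() {
        const int s = 4; // s = 2n, so n = 2
        for (int N = 0; N < 2 * s * s; ++N) {
            int m = N / 2 + N / (2 * s);        // integer division, as in the text
            if (N % 2 == 0)                      // even triangle
                std::cout << "T" << N << " = " << m << " " << m + s + 1 << " " << m + s + 2 << "\n";
            else                                 // odd triangle
                std::cout << "T" << N << " = " << m + s + 2 << " " << m + 1 << " " << m << "\n";
        }
    }

    The first lines of output are T0 = 0 5 6, T1 = 6 1 0, T2 = 1 6 7, ..., T8 = 5 10 11, matching the table.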
    Then these relations emerged:

    For face x = -n, T[N], N(0 thru 2s²-1): we have the f0(N), f1(N), f2(N), even and odd.
    For face x = n, T[N], N(2s² thru 4s²-1): add (s+1)² to the x = -n face components and reverse the winding order.
    For face y = -n, T[N], N(4s² thru 6s²-1): add 2(s+1)² to the x = -n face components and reverse the winding order.
    For face y = n, T[N], N(6s² thru 8s²-1): add 3(s+1)² to the x = -n face components.
    For face z = -n, T[N], N(8s² thru 10s²-1): add 4(s+1)² to the x = -n face components.
    For face z = n, T[N], N(10s² thru 12s²-1): add 5(s+1)² to the x = -n face components and reverse the winding order.

    And these are enough to allow us to write explicit expressions for all 12s² triangles covering all 6 faces, T[N]; what remains is to implement these expressions in code. That turned out to be a much simpler task than finding the f0(N), f1(N), f2(N), and resulted in a surprisingly short bit of code.

    Implementation

    I have attempted to make this C++ snippet of code as generic as possible and have removed any dev-platform-specific #includes and the like. GLM, a header-only C++ mathematics library for graphics developed by Christophe Riccio, is used: https://github.com/g-truc/glm/releases/download/0.9.9.0/glm-0.9.9.0.zip That is the only outside dependency.

    // Procedural cube face vertices generator
    #include <vector>
    #include <glm/gtc/matrix_transform.hpp>

    struct Triangle {
        glm::vec3 vert[3]; // the three verts of the triangle
    };

    /* std::vector<Triangle> cube_Faces(int n)
       Input:  integer 'n'; the units from origin to cube face.
       Output: vector<Triangle> glTriangle; container for the
               12*(2n)² triangles covering the 6 cube faces. */
    std::vector<Triangle> cube_Faces(int n)
    {
        size_t number_of_triangles(12*(2*n)*(2*n));
        size_t number_of_face_verts(6*(2*n+1)*(2*n+1));
        std::vector<glm::vec3> face_verts(number_of_face_verts);
        std::vector<Triangle> glTriangle(number_of_triangles);

        // Generate the 6*(2n+1)² face verts --------------------------------
        int l(0);
        for(int i = 0; i < 6; i++){
            for(int j = -n; j <= n; j++){
                for(int k = -n; k <= n; k++){
                    // The "ifs" below strip out all interior cube verts.
                    if( i == 0){ // do yz faces
                        face_verts[l].x = (float)(-n);
                        face_verts[l].y = (float)j;
                        face_verts[l].z = (float)k; }
                    if( i == 1){ // do yz faces
                        face_verts[l].x = (float)(n);
                        face_verts[l].y = (float)j;
                        face_verts[l].z = (float)k; }
                    if( i == 2){ // do zx faces
                        face_verts[l].x = (float)j;
                        face_verts[l].y = (float)(-n);
                        face_verts[l].z = (float)k; }
                    if( i == 3){ // do zx faces
                        face_verts[l].x = (float)j;
                        face_verts[l].y = (float)(n);
                        face_verts[l].z = (float)k; }
                    if( i == 4){ // do xy faces
                        face_verts[l].x = (float)j;
                        face_verts[l].y = (float)k;
                        face_verts[l].z = (float)(-n); }
                    if( i == 5){ // do xy faces
                        face_verts[l].x = (float)j;
                        face_verts[l].y = (float)k;
                        face_verts[l].z = (float)(n); }
                    l++;
                }
            }
        }

        // Generate the 12*(2n)² triangles from the face verts --------------
        int s = 2*n;
        int q = 2*s*s;
        int a = (s+1)*(s+1);
        int f(0); int r(0); int h(0);

        for( int N = 0; N < number_of_triangles; ){
            // triangles already in CW winding
            if( N < q || (N < 5*q && N > 3*q - 1) ){
                // do the even indices
                f = q*(N/q); r = a*(N/q);
                h = (N-f)/2 + (N-f)/(2*s) + r;
                glTriangle[N].vert[0] = face_verts[h];
                glTriangle[N].vert[1] = face_verts[s + 1 + h];
                glTriangle[N].vert[2] = face_verts[s + 2 + h];
                N++;
                f = q*(N/q); r = a*(N/q);
                h = (N-f)/2 + (N-f)/(2*s) + r;
                // do the odd indices
                glTriangle[N].vert[0] = face_verts[s + 2 + h];
                glTriangle[N].vert[1] = face_verts[1 + h];
                glTriangle[N].vert[2] = face_verts[h];
                N++;
                f = q*(N/q); r = a*(N/q);
                h = (N-f)/2 + (N-f)/(2*s) + r;
            }
            // triangles needing reversed order for CW winding
            if( N > 5*q - 1 || (N < 3*q && N > q - 1) ){
                // do the even indices
                glTriangle[N].vert[0] = face_verts[s + 2 + h];
                glTriangle[N].vert[1] = face_verts[s + 1 + h];
                glTriangle[N].vert[2] = face_verts[h];
                N++;
                f = q*(N/q); r = a*(N/q);
                h = (N-f)/2 + (N-f)/(2*s) + r;
                // do the odd indices
                glTriangle[N].vert[0] = face_verts[h];
                glTriangle[N].vert[1] = face_verts[1 + h];
                glTriangle[N].vert[2] = face_verts[s + 2 + h];
                N++;
                f = q*(N/q); r = a*(N/q);
                h = (N-f)/2 + (N-f)/(2*s) + r;
            }
        }

        // Normalize the cube to side = 1 ------------------------------------
        for(size_t i = 0; i < number_of_triangles; i++){
            for(int j = 0; j < 3; j++){
                glTriangle[i].vert[j] = glTriangle[i].vert[j] / (2.0f*(float)n);
            }
        }
        return glTriangle;
    }

    The rendering was done using OpenGL.

    // OGL render call to the cube mesh generator - PSEUDOCODE
    int n(2);
    int cube_triangle_Count = 12*(2*n)*(2*n);
    std::vector<Triangle> cube_Triangles = cube_Faces(n);

    glBindBuffer(GL_ARRAY_BUFFER, uiVBO[0]);
    glBufferData(GL_ARRAY_BUFFER, cube_Triangles.size()*sizeof(Triangle),
                 &cube_Triangles[0], GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3*sizeof(float), 0);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_TRIANGLES, 0, 3*cube_triangle_Count);

    This just gets the position attribute of the cube face triangle verts; for the color and other attributes there are a couple of options: use separate GL_ARRAY_BUFFERs for the color and other attributes,
    or add attributes to the Triangle struct:

    struct Triangle {
        glm::vec3 vert[3]; // the three verts of the triangle
        attribute1;
        attribute2;
        ...
    };

    Screenshot of the spherified cube.

    What's next? Now that we have the cube mesh, what we can do with it is practically unlimited. The first thing I did was turn it into a sphere. Playing with tessellating the cube or sphere, or stellating it with different patterns, might do. I ended up trying a few matrix transformations on the cube mesh. These are shown in the image below. These shapes are the result of short bits of code like the code for the column shape below.

    // Column
    for(int i = 0; i < number_of_triangles; i++){
        for(int j = 0; j < 3; j++){
            if( glTriangle[i].vert[j].y < 0.5f && glTriangle[i].vert[j].y > -0.5f ){
                float length_of_v = sqrt((glTriangle[i].vert[j].x * glTriangle[i].vert[j].x)
                                       + (glTriangle[i].vert[j].z * glTriangle[i].vert[j].z));
                glTriangle[i].vert[j].x = 0.5f*glTriangle[i].vert[j].x/length_of_v;
                glTriangle[i].vert[j].z = 0.5f*glTriangle[i].vert[j].z/length_of_v;
            }
        }
    }

    Doing this, the blacksmith-at-his-forge analogy soon presents itself. The mesh is the ingot; hammer matrices stretch, round and bend it against the fixed geometry of the anvil - the coordinate system. I am the smith.

    Tetrahedron

    The tetrahedron is the platonic solid with the least number of faces (4), edges (6), and verts (4). In antiquity it was associated with the element of fire due to its sharp vertices. The algorithm for the tetrahedron mesh was developed in a similar way to the cube, but here it seemed simpler to get a routine for just one face - an equilateral triangle - and use matrix translations and rotations to form the complete tetrahedron. So more like origami or tinsmithing than blacksmithing.

    Procedural tetrahedron screenshot.

    The n = 4 and the general case

    To get a routine for the general case, n an integer > 0, a bit of what I think is known as mathematical induction was used.

    PSEUDOCODE algorithm to generate an equilateral triangle face with unit side, composed of n² "sub-triangles", in the xy plane:

    std::vector<Triangle> equilateral(int n){
        std::vector<Triangle> tri_Angle(n²);

        // Create the seed triangle in the xy plane.
        // This is triangle "0" in the image above. It lies in the
        // xy (z = 0) plane, so all the z components are 0 and we
        // just work with the x and y verts.
        tri_Angle[all].vert[all].z = 0;

        // The seed triangle
        tri_Angle[0].vert[0].x = 0;      tri_Angle[0].vert[0].y = 0;
        tri_Angle[0].vert[1].x = 1/2n;   tri_Angle[0].vert[1].y = sin(π/3)/n;
        tri_Angle[0].vert[2].x = 1/n;    tri_Angle[0].vert[2].y = 0;

        // Build the equilateral triangle face.
        int count(0);
        for(int row = 0; row < n; row++){
            count = 0;
            Spin = glmRotateMatrix( π/3, zaxis ); // The magic happens here!
            for(int i = 2*n*row - row*row; i < 2*n*row - row*row + 2*n - 2*row - 1; i++)
            {
                if (count % 2 == 0 ) // Triangle is even in the row - just translate
                {   // more magic.
                    x_Lat = glm_Matrix((count + row)/2n, row*sin(π/3)/n, 0.0f);
                    for(int j = 0; j < 3; j++)
                        tri_Angle[i].vert[j] = x_Lat*tri_Angle[0].vert[j];
                }
                else // Triangle is odd in the row - rotate, then translate
                {   // and more magic.
                    x_Lat = glm_Matrix((count + row + 1)/2n, row*sin(π/3)/n, 0.0f);
                    for(int j = 0; j < 3; j++)
                        tri_Angle[i].vert[j] = x_Lat*Spin*tri_Angle[0].vert[j];
                }
                count++; // position in the row
            }
        }
        return tri_Angle;
    }

    This is the pseudocode version of the routine which generates the verts for the n² triangles in a face.
    Getting this algorithm was a bit of a brain drain, but looking for patterns in the image of the face allowed it to happen. We use a "seed" triangle, which is triangle 0 on the lower left of the figure. The verts of this one triangle are input; the rest of the n² triangles' verts are generated by translating and rotating this seed triangle. Notice: there are n rows, and every row has 2 fewer triangles than the row below. If we number the triangles in a row from 0 to 2n - 2*row - 2:

    The even triangles just need to be translated:
    in the x direction by (count + row)/2n, where count = their position in the row, 0 to 2n - 2*row - 2;
    in the y direction by row*height, where height = the height of the seed triangle.

    The odd triangles need to be rotated by π/3 = 60 degrees around the z axis, then translated:
    in the x direction by (count + row + 1)/2n, where count = their position in the row, 0 to 2n - 2*row - 2;
    in the y direction by row*height, where height = the height of the seed triangle.

    Now we have a single face for the tetrahedron; to join the four faces together we need the angle between the faces, called the dihedral angle.

    Dihedral Angle

    Each of the five platonic solids has a characteristic called the dihedral angle. This is the angle between the faces. For the cube it is 90 degrees, or π/2 radians. For the tetrahedron it is 70.528779° = arccos(1/3) = atan(2*sqrt(2)).

    The tetrahedron, with just four faces, is the simplest of the platonic solids. The simplest way I can think of to build it: start with the four faces stacked one on another, edges aligned. Imagine the top three faces each hinged to the bottom face along one edge. Then rotate each face around the hinged edge by arccos(1/3), the dihedral angle. That is the method of the bit of code shown below.

    vector<Triangle> tetrahedron(int n){
        std::vector<Triangle> tetra(4n²);
        tetra[all].vert[all].z = 0;

        // The seed triangle
        tetra[0].vert[0].x = 0;      tetra[0].vert[0].y = 0;
        tetra[0].vert[1].x = 1/2n;   tetra[0].vert[1].y = sin(π/3)/n;
        tetra[0].vert[2].x = 1/n;    tetra[0].vert[2].y = 0;

        // ----- The first face -----
        // Generate the first equilateral triangle face with unit side,
        // composed of n² "sub-triangles", in the xy (z = 0) plane.
        int count(0);
        for(int row = 0; row < n; row++)
        {
            count = 0;
            Spin = glmRotateMatrix( π/3, zaxis );
            for(int i = 2*n*row - row*row; i < 2*n*row - row*row + 2*n - 2*row - 1; i++)
            {
                if (count % 2 == 0 ) // Triangle is even in the row - just translate
                {
                    x_Lat = glm_Matrix((count + row)/2n, row*sin(π/3)/n, 0.0f);
                    for(int j = 0; j < 3; j++)
                        tetra[i].vert[j] = x_Lat*tetra[0].vert[j];
                }
                else // Triangle is odd in the row - rotate, then translate
                {
                    x_Lat = glm_Matrix((count + row + 1)/2n, row*sin(π/3)/n, 0.0f);
                    for(int j = 0; j < 3; j++)
                        tetra[i].vert[j] = x_Lat*Spin*tetra[0].vert[j];
                }
                count++;
            }
        }

        // ----- The second face -----
        // Generate the second equilateral face from the first
        // by rotating around the X axis by the dihedral angle.
        float tetra_Dihedral = atan(2*sqrt(2));
        Spin = glmRotateMatrix( -tetra_Dihedral, xaxis ); // just rotate
        for(int i = 0; i < n²; i++)
        {
            for(int j = 0; j < 3; j++)
            {
                tetra[n² + i].vert[j] = Spin*tetra[i].vert[j];
            }
        }
        // The rotation gives CCW verts, so we need to make them CW again.
        for(int i = n²; i < 2n²; i++)
        {
            swap(tetra[i].vert[0] with tetra[i].vert[2]);
        }

        // ----- The third face -----
        // For the second face we rotated the first triangle around its
        // base on the X axis.
        // For the third face we rotate the first triangle around its edge
        // along the vector ( 0.5, 0.866025, 0.0 ).
        Spin = glmRotateMatrix( tetra_Dihedral, glm::vec3(0.5f, 0.866025f, 0.0f) );
        for(int i = 0; i < n²; i++)
        {
            for(int j = 0; j < 3; j++)
            {
                tetra[2n² + i].vert[j] = Spin*tetra[i].vert[j];
            }
        }
        // Need to make it CW again.
        for(int i = 2n²; i < 3n²; i++)
        {
            swap(tetra[i].vert[0] with tetra[i].vert[2]);
        }

        // ----- The fourth face -----
        // For the fourth face we first translate the original face along the
        // X axis so its right edge vector (-0.5f, 0.866025f, 0.0f) passes
        // thru the origin. Then we rotate the first triangle around that
        // vector by the dihedral angle.
        x_Lat = glm::translate( glm::vec3(-1.0f, 0.0f, 0.0f) );
        Spin = glmRotateMatrix( -tetra_Dihedral, glm::vec3(-0.5f, 0.866025f, 0.0f) );
        for(int i = 0; i < n²; i++)
        {
            for(int j = 0; j < 3; j++)
            {
                tetra[3n² + i].vert[j] = Spin*x_Lat*tetra[i].vert[j];
            }
        }
        // Need to make it CW again.
        for(int i = 3n²; i < 4n²; i++)
        {
            swap(tetra[i].vert[0] with tetra[i].vert[2]);
        }

        // We now have the complete tetrahedron, tetra(4n²), but its base
        // is not horizontal, so let's make it so.
        // Put the base in the xz plane:
        // rotate by (dihedral angle - 90°) around the X axis.
        Spin = glm::rotate( tetra_Dihedral - half_PI, xaxis );
        for(int i = 0; i < 4n²; i++)
        {
            for(int j = 0; j < 3; j++)
            {
                tetra[i].vert[j] = Spin*tetra[i].vert[j];
            }
        }

        // We now have the complete tetrahedron, tetra(4n²), sitting with its
        // base on the xz (y = 0) plane; let's put its center at the origin.
        // For this we need another platonic solid attribute: the radius of
        // the tetrahedron's circumscribed sphere, which is sqrt(3/8) for unit
        // side, so the center of the tet is this vertical distance down from
        // its apex. To put the center at the origin we translate down this
        // distance along the Y axis. We also need to translate along the Z
        // axis by 1/(2*sqrt(3)) ≈ 0.28867, the distance from the center of a
        // face to the center of a side. Finally we center along the X axis
        // (translate by -0.5).
        x_Lat = glm::translate( glm::vec3(-0.5f, -sqrt(3.0f/8.0f), 1/(2*sqrt(3.0f))) );
        for(int i = 0; i < 4n²; i++)
        {
            for(int j = 0; j < 3; j++)
            {
                tetra[i].vert[j] = x_Lat*tetra[i].vert[j];
            }
        }
        return tetra;
    }

    Notes: Those last two for loops could, and probably should, be combined to do a translate*rotate*triangle in one statement, but I have not tried it. All distances are for a tetrahedron with unit side. The sign of the dihedral angle in the rotations was usually determined by trial and error: I tried one sign, compiled the code and rendered the tet; if it was wrong I just reversed the sign. The end result is a tetrahedron with its center at the origin, its base in the xz plane, and one edge parallel to the X axis.

    Of the five platonic solids, three (tetrahedron, octahedron, icosahedron) are composed of equilateral triangle faces, one of square faces (cube), and one of pentagon faces (dodecahedron). Two tetrahedrons fit nicely in a cube.

    11-16-18: Corrections to code blocks for equilateral triangle and tetrahedron.
    11-18-18: More corrections.

    Icosahedron

    Two faces = icosahedron petal. Five petals in this flower = 10 faces = half of the icosahedron.

    12-5-18

    Understanding the icosahedron's 3D form via 2D images is difficult; we need to make a small, palm-sized 3D model.
    It takes nineteen paper triangles and some tape. The vertices of five equilateral triangles must come together at each vertex of the icosahedron. The icosahedron has 20 faces and 12 verts, but leaving one face off the model allows us to look inside. Besides, when we're done we'll have a neat little icosahedron basket. You don't really need to make a model to code the icosahedron, but it helped me to see some properties which simplified its construction.

    Symmetries are important to the mathematician, and perhaps even more so to the physicist. They say something has a certain symmetry if you perform a certain operation on it and the world remains unchanged - unchanged in the sense that you cannot even detect that something has been done to it. Example: rotate a square 90 degrees around its center in the plane; the square has that type of rotational symmetry. The square also has an inversion symmetry: if you take every point on the square and invert it through the origin, you end up with the same square you started with. Inversion is simply negating all the coordinates (the center must be at the origin, of course). This is true for the cube, but not for the tetrahedron. Symmetries simplify the construction (coding) of an object. Going back to the cube, it might have been easier to do three faces and then just invert them thru the origin to get the other three. For the tetrahedron simple inversion is not a symmetry, but I am pretty sure inversion together with a rotation is. If so, we could do two faces and then perform an inversion-rotation on them in one step. And inversion in Cartesian coordinates just means negating all the verts - easy!

    Toying with our icosahedron model, holding it gently with our thumb on one vertex and our middle finger on the opposite vertex, lazily twirling it around an imaginary axis through those two vertices, we are struck with a thought: we are twirling it around an axis through - Eureka! - two opposite vertices: the icosahedron has inversion symmetry. This is great - our work has just been cut in half. We can code half of the icosahedron's verts and just invert (negate) to get the rest. Thank you, inversion symmetry. But let's not stop now, we are on a roll (no pun intended); let's see if we can find more symmetries to make our work easier. Looking intently, holding our model as before, but still, not rotating; then, slowly rotating about the axis, we see another symmetry: after one fifth of a revolution (2π/5 radians) the universe looks the same as when we started rotating. The icosahedron has a 2π/5 rotational symmetry. Can we use this to cut our workload? You bet we can.

    First we need to clear up a few points about something central to our construction efforts: the axis of symmetry. (Sorry, the puns just keep coming.) An axis of symmetry is a line passing thru two opposite vertices and the center of the icosahedron. The icosahedron has six of them; we only need one. We will use the Z axis.

    Dihedral angle: to be precise, it is the angle between the normals of two adjacent faces.

    The images: looking at the images we see a flower shape with five "petals". A petal is just two faces joined along a side. The angle between the two petal faces is the icosa's dihedral angle, arccos(-√5/3) radians. Five petals make a "flower", which is ten faces, so it is half of the icosahedron. Once we have five petals joined to make this flower, we just copy/invert all its verts to get the other half: we have our icosahedron.
    The Plan (refer to the figures):
    1.) Make a petal.
    2.) Attach one tip of the petal to the axis of symmetry (oriented properly, of course).
    3.) Copy/rotate the petal around the axis of symmetry by 2π/5 radians, four times, to get five petals = a flower. We are using the 2π/5 rotational symmetry here.
    4.) Copy/invert ( r -> -r ) our five-petal, ten-face flower to get our 20-face icosahedron. We are using inversion symmetry here.

    So just four steps; sounds simple enough. Each step has its own steps of course, but they are mostly intuitive, common-sense things we must do to get the result.

    Constants: before we get to the code we need four constants.
    1.) The dihedral angle between two faces: dihedral_angle = arccos(-√5/3) = 2.41186 radians = 138.18969°.
    2.) The angle between the Z axis and the normal of a face of a petal at the vertex: 0.652351 radians.
    3.) The radius of the circumscribed sphere (a sphere that touches all 12 verts) - in other words, the distance from the icosahedron center to a vertex, also called the circumradius: R = sin(2π/5).
    4.) The rotational symmetry: 2π/5 radians.

    Let's not do a blow-by-blow, or should I say bend-by-bend, description of the code. If a picture is worth a thousand words, it seems safe to assume an animated 3D image is worth even more. I suggest we compile the code and render to the display step by step. In fact this is how the code was developed, with a projection matrix and a rotation around the Z axis:

    Compile - render the first face: F0.
    "       " the first petal: P0, from F0 and F1.
    "       " the second petal: P1.
    "       " the third petal: P2.
    "       " the fourth petal: P3.
    "       " the fifth petal: P4. We now have the flower.
    Compile - render the inversion of the flower. Done.

    The icosahedron pseudocode:

    struct Triangle {
        glm::vec3 vert[3]; // the three verts of the triangle
    };

    // PSEUDOCODE ICOSAHEDRON
    /* input:  integer n - number of triangles along an icosahedron edge.
       output: std::vector<Triangle> icosahedron - mesh with 20n² triangles. */
    std::vector<Triangle> icosahedron( int n ){
        const float dihedral_Angle = acos(-(sqrt(5.0f)/3.0f));
        const float dihedral_Comp = π - dihedral_Angle;
        std::vector<Triangle> T_icosahedron(20n²);

        // Create the seed triangle T.
        Triangle T;
        T.vert[0].x = 0;     T.vert[0].y = 0;           T.vert[0].z = 0;
        T.vert[1].x = 1/2n;  T.vert[1].y = sin(π/3)/n;  T.vert[1].z = 0;
        T.vert[2].x = 1/n;   T.vert[2].y = 0;           T.vert[2].z = 0;

        // ----- F0 -----
        // Create the first face, "F0", in the xy (z = 0) plane
        // from the seed triangle T.
        int count(0);
        for(int row = 0; row < n; row++){
            count = 0;
            for(int i = 2*n*row - row*row; i < 2*n*row - row*row + 2*n - 2*row - 1; i++){
                if (count % 2 == 0 ){ // Triangle is even in the row - just translate.
                    x_Lat = glm::translate( glm::vec3((count+row)/2n, row*sin(π/3)/n, 0) );
                    for(int j = 0; j < 3; j++){
                        T_icosahedron[i].vert[j] = x_Lat*T.vert[j];
                    }
                }
                else{ // Triangle is odd in the row - rotate, then translate.
                    x_Lat = glm::translate( glm::vec3((count+1+row)/2n, row*sin(π/3)/n, 0) );
                    Spin = glm::rotate( π/3, zaxis );
                    for(int j = 0; j < 3; j++){
                        T_icosahedron[i].vert[j] = x_Lat*Spin*T.vert[j];
                    }
                }
                count++;
            }
        }
        // At this point comment out the rest of the code,
        // return T_icosahedron;
        // compile, and render F0 to the display.

        // ----- P0 -----
        // Create the first petal, "P0", in the xy (z = 0) plane.
        glm::vec3 axis(0.5f, sin(π/3), 0.0f);
        Spin = glm::rotate( π/3, zaxis );
        Spin2 = glm::rotate( -dihedral_Comp, axis );
        for(int i = 0; i < n²; i++){
            for(int j = 0; j < 3; j++){
                T_icosahedron[n² + i].vert[j] = Spin2*Spin*T_icosahedron[i].vert[j];
            }
        }

        // Translate P0 by -1.0 along x and bend it down by epsilon from the
        // xy plane. Epsilon is the angle we want between the Z axis and the
        // normal of F0: epsilon = 0.6523581f;
        x_Lat = glm::translate( glm::vec3(-1.0f, 0.0f, 0.0f) );
        Spin2 = glm::rotate( glm::mat4(1.0), -π/3, zaxis );
        Spin = glm::rotate( glm::mat4(1.0), -epsilon, xaxis );
        for(int i = 0; i < 2n²; i++){
            for(int j = 0; j < 3; j++){
                T_icosahedron[i].vert[j] = Spin*Spin2*x_Lat*T_icosahedron[i].vert[j];
            }
        }
        // At this point comment out the rest of the code,
        // return T_icosahedron;
        // compile, and render P0 to the display.

        // Create P1 from the P0 verts: rotate 2π/5 around z.
        Spin = glm::rotate( 2π/5, zaxis );
        for(int i = 0; i < 2n²; i++){
            for(int j = 0; j < 3; j++){
                T_icosahedron[i+2n²].vert[j] = Spin*T_icosahedron[i].vert[j];
            }
        }
        // At this point comment out the rest of the code,
        // return T_icosahedron;
        // compile, and render P0 - P1 to the display.

        // Create P2 thru P4 from the P0 verts: rotate around z by
        // 2*2π/5 for P2, 3*2π/5 for P3 and finally 4*2π/5 for P4.
        // P2
        Spin = glm::rotate( 2*2π/5, zaxis );
        for(int i = 0; i < 2n²; i++){
            for(int j = 0; j < 3; j++){
                T_icosahedron[i+4n²].vert[j] = Spin*T_icosahedron[i].vert[j];
            }
        }
        // P3
        Spin = glm::rotate( 3*2π/5, zaxis );
        for(int i = 0; i < 2n²; i++){
            for(int j = 0; j < 3; j++){
                T_icosahedron[i+6n²].vert[j] = Spin*T_icosahedron[i].vert[j];
            }
        }
        // P4
        Spin = glm::rotate( 4*2π/5, zaxis );
        for(int i = 0; i < 2n²; i++){
            for(int j = 0; j < 3; j++){
                T_icosahedron[i+8n²].vert[j] = Spin*T_icosahedron[i].vert[j];
            }
        }
        // At this point we should have the full flower.
        // Comment out the rest of the code,
        // return T_icosahedron;
        // compile, and render P0 thru P4 to the display.

        // Move everything up along z to put the icosahedron center at the origin.
        // Radius of the circumscribed sphere = sin(2π/5), for face side = 1.
        x_Lat = glm::translate( glm::vec3(0, 0, sin(2π/5)) );
        for(int i = 0; i < 10n²; i++){
            for(int j = 0; j < 3; j++){
                T_icosahedron[i].vert[j] = x_Lat*T_icosahedron[i].vert[j];
            }
        }

        // Invert all the verts and reverse for CW winding.
        // This creates the other half of the icosahedron from the first ten faces.
        for(int i = 0; i < 10n²; i++){
            for(int j = 0; j < 3; j++){
                // invert
                T_icosahedron[i+10n²].vert[j].x = -T_icosahedron[i].vert[j].x;
                T_icosahedron[i+10n²].vert[j].y = -T_icosahedron[i].vert[j].y;
                T_icosahedron[i+10n²].vert[j].z = -T_icosahedron[i].vert[j].z;
            }
            // Swap verts 0 and 2 to get back to CW winding.
            hold = T_icosahedron[i+10n²].vert[0]; // reverse
            T_icosahedron[i+10n²].vert[0] = T_icosahedron[i+10n²].vert[2];
            T_icosahedron[i+10n²].vert[2] = hold;
        }

        // Spherify - uncomment the code below to spherify the icosahedron.
        /*
        for(int i = 0; i < 20n²; i++){
            for(int j = 0; j < 3; j++){
                float length_of_v = sqrt( (T_icosahedron[i].vert[j].x * T_icosahedron[i].vert[j].x)
                                        + (T_icosahedron[i].vert[j].y * T_icosahedron[i].vert[j].y)
                                        + (T_icosahedron[i].vert[j].z * T_icosahedron[i].vert[j].z) );
                T_icosahedron[i].vert[j].x = T_icosahedron[i].vert[j].x/length_of_v;
                T_icosahedron[i].vert[j].y = T_icosahedron[i].vert[j].y/length_of_v;
                T_icosahedron[i].vert[j].z = T_icosahedron[i].vert[j].z/length_of_v;
            }
        }
        */
        return T_icosahedron;
    }

    Screenshots: first petal P0, and the five-petal icosahedron flower.
  6. I decided to write a small program that writes a vector to a binary file. However, I'm having some issues with the code. Here is what I have so far:

     #include <iostream>
     #include <map>
     #include <string>
     #include <vector>
     #include <cmath>
     #include <math.h>
     #include <fstream>

     using namespace std;

     class Book
     {
     protected:
         string m_pBookTitle;
         string m_pAuthor;
         string m_pGenre;
         int m_pNumberOfPages;

     public:
         Book() : m_pBookTitle(""), m_pAuthor(""), m_pGenre(""), m_pNumberOfPages(0) {}
         Book(string title, string author, string genre, int numPages)
             : m_pBookTitle(title), m_pAuthor(author), m_pGenre(genre), m_pNumberOfPages(numPages) {}
         ~Book() {}

         string getBookTitle() const { return m_pBookTitle; }
         string getBookAuthor() const { return m_pAuthor; }
         string getBookGenre() const { return m_pGenre; }
         int getNumPages() const { return m_pNumberOfPages; }
     };

     int main(int argc, char** argv)
     {
         vector<Book*> Books;
         Book* book1 = new Book("The man with a dog", "Robert White", "scifi", 300);
         Books.push_back(book1);
         Book* book2 = new Book("Just got here", "James Hancock", "Fantasy", 100);
         Books.push_back(book2);
         Book* book3 = new Book("The Girl with the Dragon Tattoo", "Eddinton Carlos", "Fiction", 500);
         Books.push_back(book3);
         cout << "Number of books: " << Books.size() << endl;

         ofstream bookFile;
         bookFile.open("Books.book", ios::binary);
         if (bookFile.is_open())
         {
             for (vector<Book*>::iterator i = Books.begin()+1; i != Books.end(); i++)
             {
                 bookFile.write((char*)&(*i), sizeof((*i)));
             }
         }
         bookFile.close();

         ifstream bookFileIn;
         bookFileIn.open("Books.book", ios::binary);
         vector<Book*> LoadedBooks;
         if (bookFileIn.is_open())
         {
             Book* book = nullptr;
             while (!bookFileIn.eof())
             {
                 bookFileIn.read((char*)&book, sizeof(book));
                 LoadedBooks.push_back(book);
             }
         }
         bookFileIn.close();

         for (vector<Book*>::iterator i = LoadedBooks.begin(); i != LoadedBooks.end(); i++)
         {
             Book* b = (*i);
             cout << "Book title: " << b->getBookTitle() << endl;
             cout << "Book Author: " << b->getBookAuthor() << endl;
             cout << "Genre: " << b->getBookGenre() << endl;
             cout << "Number of pages: " << b->getNumPages() << endl;
         }
         cout << "Number of books: " << LoadedBooks.size() << endl;

         system("PAUSE");
         return EXIT_SUCCESS;
     }

     This is what happens with the +1: http://puu.sh/ChluR/3c42a42137.png And with the +1 removed: http://puu.sh/Chlwz/ed52d85cdb.png The problem is that one of the books gets added twice. Someone in the Discord server I'm in said that I can't do bookFile.write((char*)&(*i), sizeof((*i))) because Book has non-trivial members such as strings. What should I do?
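     One way out, sketched under the assumption that a simple custom binary format is acceptable: serialize each field explicitly and length-prefix the strings, instead of writing the raw bytes of a pointer. saveBook/loadBook and the helpers below are made-up names, and this builds on the Book class above:

     #include <cstdint>
     #include <fstream>
     #include <string>

     // Write a length-prefixed string, then read it back the same way.
     static void writeString(std::ofstream& out, const std::string& s) {
         std::uint32_t len = (std::uint32_t)s.size();
         out.write(reinterpret_cast<const char*>(&len), sizeof(len));
         out.write(s.data(), len);
     }

     static std::string readString(std::ifstream& in) {
         std::uint32_t len = 0;
         in.read(reinterpret_cast<char*>(&len), sizeof(len));
         std::string s(len, '\0');
         in.read(&s[0], len);
         return s;
     }

     // Hypothetical per-Book save/load; field order must match on both sides.
     void saveBook(std::ofstream& out, const Book& b) {
         writeString(out, b.getBookTitle());
         writeString(out, b.getBookAuthor());
         writeString(out, b.getBookGenre());
         std::int32_t pages = b.getNumPages();
         out.write(reinterpret_cast<const char*>(&pages), sizeof(pages));
     }

     Book loadBook(std::ifstream& in) {
         std::string title  = readString(in);
         std::string author = readString(in);
         std::string genre  = readString(in);
         std::int32_t pages = 0;
         in.read(reinterpret_cast<char*>(&pages), sizeof(pages));
         return Book(title, author, genre, (int)pages);
     }

     It also helps to write Books.size() first and loop exactly that many times on load: the while (!eof()) pattern reads once past the end and is a second reason for a duplicated last record.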
  7. Hi, I started implementing a 2D board game. I have a concept of how to write the rules, controls etc., but I don't want to write another all-in-one app. So I decided to do it "right". I divided my code into reusable modules - ResourceManager, Renderer, Core, Math (for now). All modules use SDL2.

     ResourceManager (RM) - loads textures, audio etc. without duplicating them in memory. Resources are gathered in TextureResource, AudioResource (...) objects, handled by std::shared_ptr. For textures I have prepared a Texture class that wraps SDL_Texture, and RM serves these Texture objects to the Core module.

     Core - the main game module; contains the game loop, the gameobject/component implementation, and event handling. Core requests data from RM and sends it to the right components.

     Renderer - creates the window and knows about the render range (represented in Core by a camera). Takes info about texture, position, rotation and scale to render images (just this for now).

     Now it's time for my questions:

     Is this architecture good? After I finish this board game I want to extend the renderer module, for example for rendering 3D objects.

     Is loading resources while in-game a good idea? I mean single textures, models, sounds etc. As I said, for handling resources I am using shared_ptr; is it good to clean the cache every (for example) 3 minutes? By cleaning I mean removing unused resources (use count = 1).

     And the hardest thing for me right now - take a look at this flow:
     Core creates a T1 token.
     A Renderer2D component is connected to T1.
     Core requests the texture /textures/T1.png from RM.
     RM checks if /textures/T1.png is in the map; if not, it loads it.
     RM returns a std::shared_ptr<Texture> to Core.
     Core assigns the texture to T1's Renderer2D component.

     Now I want to pass this object to the renderer. But I won't pass all gameObjects and check which have a Renderer2D component (I also can't, because only Core knows what a gameObject and a component are). So I had an idea: I can create a Renderable interface (in the Renderer module) and inherit from it in the Renderer2D component. Renderable will contain only pointers to position data. Now I am able to pass a Renderer2D component pointer to the Renderer and register it. Is this a good way to handle this? Or am I overcomplicating things?

     If the point above is right, I had one last question - registering objects in the Renderer module. I don't want to iterate over all objects and check if I can render them (if they are in render range). I wanted to place them in "buckets" of - for example - screen size. Now calculating collisions should be faster - I would do this only for objects in adjacent buckets. But for a 2D game I have to render objects in the correct order using a Z index. Objects have to be placed in the correct bucket first, then sorted by Z within the bucket. But now I have a problem with unregistering objects from the Renderer module. I think I got lost somewhere around here... Maybe you can help me? That is, if this is the correct way to handle this problem.

     I would love to read your comments and tips about what I can do better and how I can solve my problems. If I didn't mention something but you see something in my approach, write boldly; I will gladly read all your tips :).
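     A minimal sketch of the Renderable idea from the question (everything except the Renderable/Renderer2D names is invented for illustration):

     #include <algorithm>
     #include <vector>

     // Lives in the Renderer module: all the renderer knows about.
     struct Renderable {
         virtual ~Renderable() = default;
         virtual const float* position() const = 0; // points at position data owned elsewhere
         virtual int zIndex() const = 0;
     };

     // Lives in the Renderer module: registration/unregistration by pointer.
     class Renderer {
     public:
         void registerRenderable(Renderable* r)   { m_items.push_back(r); }
         void unregisterRenderable(Renderable* r) {
             m_items.erase(std::remove(m_items.begin(), m_items.end(), r), m_items.end());
         }
     private:
         std::vector<Renderable*> m_items;
     };

     // Lives in the Core module: the component implements the interface.
     class Renderer2D : public Renderable {
     public:
         const float* position() const override { return m_pos; }
         int zIndex() const override { return m_z; }
     private:
         float m_pos[2] = {0.f, 0.f};
         int m_z = 0;
     };

     With this split the Renderer never needs to know what a gameObject is, and unregistering from the component's destructor keeps the renderer's list free of dangling pointers when objects die.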
  8. Hello again. Recently I was trying to apply 6 different textures to a cube, and I noticed that some textures would not apply correctly, but if I swap the texture image for another one, it works just fine. I can't really understand what's going on. I will also attach the image files. So, does this have anything to do with the code, or is it just the image's fault?

     This is a high-quality 2048x2048 texture, brick1.jpg, which does the following: [screenshot] And this is another texture, 512x512 container.jpg, which is applied correctly with the exact same texture coordinates as the previous one: [screenshot]

     Vertex Shader:

     #version 330 core
     layout(location = 0) in vec3 aPos;
     layout(location = 1) in vec3 aNormal;
     layout(location = 2) in vec2 aTexCoord;

     uniform mat4 model;
     uniform mat4 view;
     uniform mat4 proj;

     out vec2 TexCoord;

     void main()
     {
         gl_Position = proj * view * model * vec4(aPos, 1.0);
         TexCoord = aTexCoord;
     }

     Fragment Shader:

     #version 330 core
     out vec4 Color;
     in vec2 TexCoord;

     uniform sampler2D diffuse;

     void main()
     {
         Color = texture(diffuse, TexCoord);
     }

     Texture Loader:

     Texture::Texture(std::string path, bool trans, int unit)
     {
         //Reverse the pixels.
         stbi_set_flip_vertically_on_load(1);

         //Try to load the image.
         unsigned char *data = stbi_load(path.c_str(), &m_width, &m_height, &m_channels, 0);

         //Image loaded successfully.
         if (data)
         {
             //Generate the texture and bind it.
             GLCall(glGenTextures(1, &m_id));
             GLCall(glActiveTexture(GL_TEXTURE0 + unit));
             GLCall(glBindTexture(GL_TEXTURE_2D, m_id));

             //Not transparent texture.
             if (!trans)
             {
                 GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_width, m_height, 0, GL_RGB, GL_UNSIGNED_BYTE, data));
             }
             //Transparent texture.
             else
             {
                 GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data));
             }

             //Generate mipmaps.
             GLCall(glGenerateMipmap(GL_TEXTURE_2D));

             //Texture filters.
             GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT));
             GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT));
             GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
             GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
         }
         //Loading failed.
         else throw EngineError("There was an error loading image: " + path);

         //Free the image data.
         stbi_image_free(data);
     }

     Texture::~Texture()
     {
     }

     void Texture::Bind(int unit)
     {
         GLCall(glActiveTexture(GL_TEXTURE0 + unit));
         GLCall(glBindTexture(GL_TEXTURE_2D, m_id));
     }

     Rendering Code:

     Renderer::Renderer()
     {
         float vertices[] = {
             // positions          // normals           // texture coords
             -0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 0.0f,
              0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 0.0f,
              0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 1.0f,
              0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 1.0f,
             -0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 1.0f,
             -0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 0.0f,

             -0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  0.0f, 0.0f,
              0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  1.0f, 0.0f,
              0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  1.0f, 1.0f,
              0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  1.0f, 1.0f,
             -0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  0.0f, 1.0f,
             -0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  0.0f, 0.0f,

             -0.5f,  0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 0.0f,
             -0.5f,  0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 1.0f,
             -0.5f, -0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
             -0.5f, -0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
             -0.5f, -0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 0.0f,
             -0.5f,  0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 0.0f,

              0.5f,  0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 0.0f,
              0.5f,  0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 1.0f,
              0.5f, -0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
              0.5f, -0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
              0.5f, -0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 0.0f,
              0.5f,  0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 0.0f,

             -0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 1.0f,
              0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 1.0f,
              0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 0.0f,
              0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 0.0f,
             -0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 0.0f,
             -0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 1.0f,

             -0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 1.0f,
              0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 1.0f,
              0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 0.0f,
              0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 0.0f,
             -0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 0.0f,
             -0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 1.0f
         };

         //Create the Vertex Array.
         m_vao = new Vao();

         //Create the Vertex Buffer.
         m_vbo = new Vbo(vertices, sizeof(vertices));

         //Create the attributes.
         m_attributes = new VertexAttributes();
         m_attributes->Push(3);
         m_attributes->Push(3);
         m_attributes->Push(2);
         m_attributes->Commit(m_vbo);
     }

     Renderer::~Renderer()
     {
         delete m_vao;
         delete m_vbo;
         delete m_attributes;
     }

     void Renderer::DrawArrays(Cube *cube)
     {
         //Render the cube.
         cube->Render();

         unsigned int tex = 0;
         for (unsigned int i = 0; i < 36; i += 6)
         {
             if (tex < cube->m_textures.size())
                 cube->m_textures[tex]->Bind();
             GLCall(glDrawArrays(GL_TRIANGLES, i, 6));
             tex++;
         }
     }
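     Two common causes worth ruling out here, neither confirmed from the post alone: the manual trans flag can disagree with the channel count stb_image actually returned (uploading 4-channel data as GL_RGB, or vice versa, garbles the image), and OpenGL's default 4-byte row alignment shears 3-channel images whose row size is not a multiple of 4. A sketch of both guards inside the loader:

     // Derive the format from what stb_image actually returned, instead of
     // a caller-supplied flag, and relax the default unpack alignment.
     GLenum format = (m_channels == 4) ? GL_RGBA : GL_RGB;
     GLCall(glPixelStorei(GL_UNPACK_ALIGNMENT, 1)); // stb_image rows are tightly packed
     GLCall(glTexImage2D(GL_TEXTURE_2D, 0, format, m_width, m_height, 0,
                         format, GL_UNSIGNED_BYTE, data));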
  9. I'm what you might consider a "casual prosumer" when it comes to commenting. I comment important stuff, but only if the code isn't written in a self-explanatory way. Which is why I've also adopted a really simple and descriptive naming scheme and done away with any and all notation systems. That being said, there are occasions where I want to either document a whole system of code or provide a summarized walk-through on a per-function basis. This is where I find myself in a frustrating spot, since none of the solutions on the market seem to be quite what I want. So I figured I'd list the things I want to achieve and the things I want to avoid, in hopes that I'm either not familiar with something or perhaps simply not configuring stuff properly. I'm willing to pay for a good solution.

     The dream wish list of things I want and need:
     - full documentation generation, a la Doxygen
     - a non-verbose (lightweight) and non-monolithic style
     - non-XML-style markup (eg the way Natural Docs does it, not Doxygen)
     - no block comments in documentation (I use block comments extensively to manage code flow during development)
     - partial documentation (I really don't want to provide an explanation for each and every argument and return type)
     - a concise format with a clear layout, so no \param and \return shenanigans automatically filled in for me
     - no duplication of obvious information (eg the function name) in the comments
     - inline documentation
     - no explicit flow direction (in/out/inout) in documentation, but rather taken directly from code - I already provide this information!
     - proper macro expansion

     I've tried Atomineer and it doesn't work for me at all. So far the Doxygen style in general is pure bloat in my eyes, since it becomes bothersome to maintain as soon as you make something as simple as a name change. Allow me to demonstrate by example. Here's what a typical function in my code might look like:

     _BASEMETHOD ECBool OnInitialize(
         IN MODIFY ResourceType& object,
         IN const char* type,
         OPTIONAL IN ISignalable* signalable = nullptr,
         OPTIONAL IN uint32 flags = 0) const
     { ... }

     _BASEMETHOD expands to 'virtual'. Atomineer doesn't handle this too well, since it is adamant about placing the documentation below that line unless I take care to actually generate it on the word _BASEMETHOD itself. Here's the default "trite" Atomineer generates:

     /// Executes the initialize action
     ///
     /// \author yomama
     /// \date 12-Dec-18
     ///
     /// \tparam ResourceType Type of the resource type.
     /// \param [in,out] {IN MODIFY ResourceType&} object The object.
     /// \param {IN const char*} type The type.
     /// \param [in,out] {OPTIONAL IN ISignalable*} signalable (Optional) If non-null, the signalable.
     /// \param {OPTIONAL IN uint32} custHandlerFlags The customer handler flags.
     ///
     /// \return {ECBool} An ECBool.

     This is close to being the least useful way to say what the function actually does. None of the auto-generated stuff makes sense, because it's already obvious from the names. In addition, data flow direction is assumed, not extrapolated from markup that already exists in the code (notice the in/out on signalable, while certain conditions might force me to accept a non-const pointer which is nevertheless never written to). The return type is obvious. Even the general description is obvious, to the point of being insulting to the reader. Of course this is all meant to be manually edited. However, the problem is that: 1) on the one hand, writing this stuff from scratch using this style of markup is time-consuming and annoyingly verbose;
2) auto-generating the template and editing it is also time consuming, because again, it's way too verbose.

Here's what an ideal way of commenting the above function looks like to me:

/// Fill \p object with data and notify \p signalable once the procedure is complete. Runs asynchronously.
_BASEMETHOD ECBool OnInitialize(
    IN MODIFY ResourceType& object,
    IN const char* type,
    OPTIONAL IN ISignalable* signalable = nullptr,
    /// Type-specific flags. See documentation of related resource type for possible values.
    OPTIONAL IN uint32 flags = 0) const
{
    ...
}

That's it. This should be enough to generate feature-complete documentation when the docs are finally built. AND it's easy to read inline while writing code.

A major hurdle is that while I actually kinda like the Natural Docs style, to the best of my knowledge it's only able to generate documentation for things that have actually been manually documented. Facepalm. So no automatic full documentation of classes, inheritance diagrams, etc. This seemingly forces me into using Doxygen, which is much more feature-complete, but suffers from the above-mentioned stylistic bloat and for some reason cannot handle relatively simple macro expansions in imo-not-so-complicated cases. I simplified the following from a real-world example, but this includes auto-generated class implementations, eg:

BEGIN_DEFAULT_HANDLER(foo)
    _BASEMETHOD const char* bar() const _OVERRIDE
    {
        return "yomama";
    }
END_DEFAULT_HANDLER(foo)

which might expand into something like:

class foo : public crtp_base<foo>
{
    base_interface* GetInterfaceClass() const _OVERRIDE
    {
        _STATIC foo_interface iface;
        return &iface;
    }

    _BASEMETHOD const char* bar() const _OVERRIDE
    {
        return "yomama";
    }
};

extern "C" _DLLEXPORT base_class* _fooFactory()
{
    return static_cast<base_class*>(new foo);
}

Doxygen doesn't even recognize foo as a class. The bottom line is it seems to me I shouldn't be asking for too much here. I'd really like the clear coding style I've adopted to pay off in more than just the code. What's your approach? Any suggestions? Ideas or alternative options to explore?
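For the macro-expansion half of the problem, Doxygen can at least be told how to expand project-specific keywords before it parses the code. A minimal Doxyfile sketch, assuming the macros above expand the way the post describes:

MACRO_EXPANSION        = YES
EXPAND_ONLY_PREDEF     = YES
# Annotation macros expand to their C++ equivalents (or to nothing),
# so the parser sees plain declarations.
PREDEFINED             = _BASEMETHOD=virtual \
                         _OVERRIDE=override \
                         _STATIC=static \
                         _DLLEXPORT= \
                         IN= OUT= MODIFY= OPTIONAL=
# Code-generating macros need to be expanded in full, otherwise
# classes like foo are never seen by the parser:
EXPAND_AS_DEFINED      = BEGIN_DEFAULT_HANDLER END_DEFAULT_HANDLER

This doesn't fix the verbose comment style, but it may at least get Doxygen to recognize macro-generated classes such as foo.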
  10. Hello, I have a 'join' function that joins an array's elements into a string. This is what I want:

vector<int> a = {1, 2, 3};
array<int, 3> b = {1, 2, 3};
cout << join(a) << endl; // "1, 2, 3"
cout << join(b) << endl; // "1, 2, 3"

So I'm trying to declare the function this way:

template <typename T, typename C>
string join(const C<const T> &arr, const string &delimiter = ", ") { ... }

But without any success. I understand that I could declare it this way:

template <typename T>
string join(T &arr) { ... }

or this way:

template <typename Iter>
string join(Iter &begin, Iter &end) { ... }

But I just wonder, is it possible to implement it like:

template <typename T, typename Collection>
string join(const Collection<const T> &data) { ... }

Thank you!
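For what it's worth, a sketch of why the C<const T> form fights the language: std::vector is a template over (element, allocator) while std::array is a template over (element, size), so no single template-template parameter matches both — and vector<int> is not a C<const int> either. Deducing the whole container as one type side-steps all of that:

#include <iterator>
#include <sstream>
#include <string>

// A minimal sketch: deduce the whole container type instead of trying
// to take it apart; works for vector, array, list, and plain C arrays.
template <typename Collection>
std::string join(const Collection &data, const std::string &delimiter = ", ")
{
    std::ostringstream out;
    auto it = std::begin(data);
    if (it != std::end(data))
    {
        out << *it;                        // first element, no delimiter
        for (++it; it != std::end(data); ++it)
            out << delimiter << *it;       // delimiter before each later element
    }
    return out.str();
}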
  11. I am curious if anyone would be interested in an RPG adventure in a visual novel art style? I loved Doki Doki, and if I could create something with an RPG element, that would be THE BEST. I am a complete noob however, that is the thing. I just started adventuring into coding two weeks ago. I love it so far. I think I may be addicted. oof. Which is why I want to create something; I have that itch. lol. Basically, if anyone wants to pitch in for free, or not, I'd be glad to include them in the credits section. Also, I'd love to get the community involved in this, to create more fun RPG-esque things, if that makes sense. Where would I go for that?
  12. Hello! My texture problems just don't want to stop coming... After a lot of discussions here with you guys, I've learned a lot about textures and digital images and I fixed my bugs. But right now I'm making an animation system and this happened. Now if you see, the first animation (bomb) is ok. But the second and the third (which are arrows changing direction) are being rendered weirdly (they get the GL_REPEAT effect). To be sure, I rendered the problem textures on their own (without using my animation system or anything else I created in my project, just simple OpenGL rendering code) and this is the result (all these textures have exactly 115x93 resolution). I will attach all the images which I'm using. giphy-27 and giphy-28 are rendering just fine. All the others are not. They give me an effect like GL_REPEAT, which I use in my code. Is this why I'm getting this result? But my texture coordinates are inside the range of 0 and 1, so why?

My Texture Code:

#include "Texture.h"
#include "STB_IMAGE/stb_image.h"
#include "GLCall.h"
#include "EngineError.h"
#include "Logger.h"

Texture::Texture(std::string path, int unit)
{
    //Try to load the image.
    unsigned char *data = stbi_load(path.c_str(), &m_width, &m_height, &m_channels, 0);

    //Image loaded successfully.
    if (data)
    {
        //Generate the texture and bind it.
        GLCall(glGenTextures(1, &m_id));
        GLCall(glActiveTexture(GL_TEXTURE0 + unit));
        GLCall(glBindTexture(GL_TEXTURE_2D, m_id));

        //Not transparent texture.
        if (m_channels == 3)
        {
            GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_width, m_height, 0, GL_RGB, GL_UNSIGNED_BYTE, data));
        }
        //Transparent texture.
        else if (m_channels == 4)
        {
            GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data));
        }
        //This image is not supported.
        else
        {
            std::string err = "The Image: " + path;
            //Note: the original "+ m_channels" did pointer arithmetic on the
            //string literal instead of appending the number.
            err += ", is using " + std::to_string(m_channels);
            err += " channels which are not supported.";
            throw VampEngine::EngineError(err);
        }

        //Texture Filters.
        GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT));
        GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT));
        GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST));
        GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));

        //Generate mipmaps.
        GLCall(glGenerateMipmap(GL_TEXTURE_2D));
    }
    //Loading failed.
    else
        throw VampEngine::EngineError("There was an error loading image \
        (Maybe the image format is not supported): " + path);

    //Unbind the texture.
    GLCall(glBindTexture(GL_TEXTURE_2D, 0));

    //Free the image data.
    stbi_image_free(data);
}

Texture::~Texture()
{
    GLCall(glDeleteTextures(1, &m_id));
}

void Texture::Bind(int unit)
{
    GLCall(glActiveTexture(GL_TEXTURE0 + unit));
    GLCall(glBindTexture(GL_TEXTURE_2D, m_id));
}

My Render Code:

#include "Renderer.h"
#include "glcall.h"
#include "shader.h"

Renderer::Renderer()
{
    //Vertices.
    float vertices[] = {
        //Positions   Texture Coordinates.
        0.0f, 0.0f,   0.0f, 0.0f, //Left Bottom.
        0.0f, 1.0f,   0.0f, 1.0f, //Left Top.
        1.0f, 1.0f,   1.0f, 1.0f, //Right Top.
        1.0f, 0.0f,   1.0f, 0.0f  //Right Bottom.
    };

    //Indices.
    unsigned int indices[] = {
        0, 1, 2, //Left Up Triangle.
        0, 3, 2  //Right Down Triangle.
    };

    //Create and bind a Vertex Array.
    GLCall(glGenVertexArrays(1, &VAO));
    GLCall(glBindVertexArray(VAO));

    //Create and bind a Vertex Buffer.
    GLCall(glGenBuffers(1, &VBO));
    GLCall(glBindBuffer(GL_ARRAY_BUFFER, VBO));

    //Create and bind an Index Buffer.
    GLCall(glGenBuffers(1, &EBO));
    GLCall(glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO));

    //Transfer the data to the VBO and EBO.
    GLCall(glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW));
    GLCall(glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW));

    //Enable and create the attribute for both Positions and Texture Coordinates.
    GLCall(glEnableVertexAttribArray(0));
    GLCall(glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 4, (void *)0));

    //Create the shader program.
    m_shader = new Shader("Shaders/sprite_vertex.glsl", "Shaders/sprite_fragment.glsl");
}

Renderer::~Renderer()
{
    //Clean Up.
    GLCall(glDeleteVertexArrays(1, &VAO));
    GLCall(glDeleteBuffers(1, &VBO));
    GLCall(glDeleteBuffers(1, &EBO));
    delete m_shader;
}

void Renderer::RenderElements(glm::mat4 model)
{
    //Create the projection matrix.
    glm::mat4 proj = glm::ortho(0.0f, 600.0f, 600.0f, 0.0f, -1.0f, 1.0f);

    //Set the texture unit to be used.
    m_shader->SetUniform1i("diffuse", 0);

    //Set the transformation matrices.
    m_shader->SetUniformMat4f("model", model);
    m_shader->SetUniformMat4f("proj", proj);

    //Draw Call.
    GLCall(glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, NULL));
}

Vertex Shader:

#version 330 core
layout(location = 0) in vec4 aData;

uniform mat4 model;
uniform mat4 proj;

out vec2 TexCoord;

void main()
{
    gl_Position = proj * model * vec4(aData.xy, 0.0f, 1.0);
    TexCoord = aData.zw;
}

Fragment Shader:

#version 330 core
out vec4 Color;

in vec2 TexCoord;

uniform sampler2D diffuse;

void main()
{
    Color = texture(diffuse, TexCoord);
}
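One thing worth checking, given the 115-pixel-wide images: 115 * 3 bytes = 345 bytes per row, which is not a multiple of 4, and OpenGL's default unpack alignment is 4 bytes. A 3-channel upload of such an image gets skewed row by row, which looks a lot like a wrapping/GL_REPEAT artifact even though the texture coordinates are in range. A minimal sketch of the fix, placed before the upload in the 3-channel branch of the Texture constructor above:

//stb_image returns tightly packed rows, so tell OpenGL not to expect
//4-byte-aligned rows (the default GL_UNPACK_ALIGNMENT is 4).
GLCall(glPixelStorei(GL_UNPACK_ALIGNMENT, 1));
GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_width, m_height, 0, GL_RGB, GL_UNSIGNED_BYTE, data));

This would also explain why only some images break: a 4-channel (RGBA) 115-wide image has 460-byte rows, which are already 4-byte aligned, so those render fine.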
  13. We are pleased to announce the release of Matali Physics 4.4. The latest version introduces comprehensive support for Android 9.0 Pie, iOS 12.x and macOS Mojave (version 10.14.x). It also introduces the Matali Render 3.4 add-on, with normal mapping and parallax mapping based on the distance from the observer, as well as other improvements and fixes.

What is Matali Physics?

Matali Physics is an advanced, multi-platform, high-performance 3D physics engine intended for games, virtual reality and physics-based simulations. Matali Physics and its add-ons form a physics environment which provides complex physical simulation and physics-based modeling of objects both real and imagined.

Main benefits of using Matali Physics:
- Stable, high-performance solution supplied together with a rich set of add-ons for all major mobile and desktop platforms (both 32- and 64-bit)
- Advanced samples ready to use in your own games
- New features on request
- Dedicated technical support
- Regular updates and fixes

You can find out more information on www.mataliphysics.com

View full story
  15. Hello! For those who don't know me, I have started quite a few threads about textures in OpenGL. I was encountering bugs like the texture not appearing correctly (even though my code and shaders were fine), or getting an access violation in memory when I was uploading a texture to the GPU. Mostly I thought that these might be AMD's bugs, because when someone else ran my code he was getting a nice result. Then someone told me: "Some driver implementations are more forgiving than others, so it might happen that your driver does not forgive that easily. This might be the reason that others can see the output you were expecting." I did not believe him and moved on. Then Mr. @Hodgman gave me the light. He explained to me some things about images and what channels are (I had no clue), and with some research on my side I learned how digital images work in theory and what channels are. Then by also reading this article about image formats I learned some more stuff.

The question now is: if, for example, I want to upload a PNG to the GPU, am I 100% sure that I can use 4 channels? Or, even though the image is a PNG, might it not contain all 4 channels (RGBA)? So I need to somehow retrieve that information, so that my code below will be able to tell the driver how to read the data based on the channels. I'm asking this just to know how to properly write the code below (the variables in capitals are the ones I want you to tell me how to specify):

stbi_set_flip_vertically_on_load(1);

//Try to load the image.
unsigned char *data = stbi_load(path.c_str(), &m_width, &m_height, &m_channels, HOW_MANY_CHANNELS_TO_USE);

//Image loaded successfully.
if (data)
{
    //Generate the texture and bind it.
    GLCall(glGenTextures(1, &m_id));
    GLCall(glActiveTexture(GL_TEXTURE0 + unit));
    GLCall(glBindTexture(GL_TEXTURE_2D, m_id));

    GLCall(glTexImage2D(GL_TEXTURE_2D, 0, WHAT_FORMAT_FOR_THE_TEXTURE, m_width, m_height, 0, WHAT_FORMAT_FOR_THE_DATA, GL_UNSIGNED_BYTE, data));
}

So back to my question. If I'm loading a PNG, and I tell stbi_load to use 4 channels, and then in glTexImage2D I set WHAT_FORMAT_FOR_THE_DATA = RGBA, will I be sure that the driver will properly read the data without getting an access violation? I want to write code that, no matter the image file, will always be able to read the data correctly and upload it to the GPU. Like, 100% of the tutorials and guides about OpenGL out there (even one which I purchased from Udemy) were not explaining all this stuff, and this is why I was experiencing all these bugs and got stuck for months!

Also some documentation you might need to know about stbi_load to help me more:

// Limitations:
//    - no 12-bit-per-channel JPEG
//    - no JPEGs with arithmetic coding
//    - GIF always returns *comp=4
//
// Basic usage (see HDR discussion below for HDR usage):
//    int x,y,n;
//    unsigned char *data = stbi_load(filename, &x, &y, &n, 0);
//    // ... process data if not NULL ...
//    // ... x = width, y = height, n = # 8-bit components per pixel ...
//    // ... replace '0' with '1'..'4' to force that many components per pixel
//    // ... but 'n' will always be the number that it would have been if you said 0
//    stbi_image_free(data)
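For reference, the stb_image comment quoted above already points at one robust answer: pass 4 as the last argument and stb_image will expand whatever the file actually contains (grey, grey+alpha, RGB) to RGBA for you. A sketch of that approach:

//Ask stb_image for 4 components per pixel; smaller formats are expanded
//to RGBA. m_channels still reports the file's native component count.
unsigned char *data = stbi_load(path.c_str(), &m_width, &m_height, &m_channels, 4);

if (data)
{
    //The data is now guaranteed to be RGBA, regardless of source format.
    GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data));
}

As a side benefit, 4-channel rows are always 4-byte aligned, which avoids the GL_UNPACK_ALIGNMENT pitfall that can corrupt 3-channel uploads of odd-width images.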
  16. Hi there! I'm developing a 3D drone racing game. I'm implementing the AI movement and I've applied Craig Reynolds' path-following algorithm. Now I want to implement obstacle avoidance. My main question is how to determine the best avoidance target to seek, in order not to collide with obstacles. Does anybody know a good solution for this situation?
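One common answer, in the spirit of Reynolds' own obstacle-avoidance steering behavior: probe a corridor ahead of the drone along its velocity, pick the most threatening (nearest) obstacle whose lateral distance from the path is smaller than the combined radii, and seek a point just outside that obstacle on the side the drone is already offset towards. A rough sketch, assuming spherical obstacles and glm (all names here are made up):

#include <vector>
#include <glm/glm.hpp>

struct Obstacle { glm::vec3 center; float radius; };

glm::vec3 AvoidanceTarget(const glm::vec3& pos, const glm::vec3& velocity,
                          const std::vector<Obstacle>& obstacles,
                          float lookAhead, float droneRadius, float clearance)
{
    glm::vec3 forward = glm::normalize(velocity);
    const Obstacle* threat = nullptr;
    float nearest = lookAhead;

    for (const Obstacle& ob : obstacles)
    {
        glm::vec3 toOb = ob.center - pos;
        float along = glm::dot(toOb, forward);        //distance ahead of the drone
        if (along < 0.0f || along > lookAhead)
            continue;                                 //behind us, or too far ahead to matter
        glm::vec3 lateral = toOb - forward * along;   //sideways offset from our path
        if (glm::length(lateral) > ob.radius + droneRadius)
            continue;                                 //we would fly past it anyway
        if (along < nearest) { nearest = along; threat = &ob; }
    }

    if (!threat)
        return pos + forward * lookAhead;             //nothing threatening: keep following the path

    //Seek a point just outside the threatening obstacle, pushed away from
    //its center on the side of the path we are already leaning towards.
    glm::vec3 toOb = threat->center - pos;
    glm::vec3 lateral = toOb - forward * glm::dot(toOb, forward);
    glm::vec3 away = (glm::length(lateral) > 1e-4f)
                         ? -glm::normalize(lateral)
                         : glm::cross(forward, glm::vec3(0.0f, 1.0f, 0.0f)); //dead ahead: pick a side
    return threat->center + away * (threat->radius + droneRadius + clearance);
}

Feeding this point into your existing seek behavior, and letting it override path following only while a threat exists, blends naturally with the Reynolds-style steering you already have.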
  17. Hello! I was trying to load some textures and I was getting this access violation: atioglxx.dll access violation. stb_image, which I'm using to load the PNG file into memory, was not reporting any errors. I found this on the internet explaining that it is a bug from AMD. I fixed the problem by changing the image file which I was using. The image that was causing the issue was generated by this online converter from GIF to PNGs. Does anyone know more about it? Thank you.
  18. I was trying to figure out what to do with my procedural planets, and as an initial step I wanted to fly a spacecraft in and establish a low orbit around my world, starting from some distant point and initial velocity. I know how to set up the gravity, and I think I can pretty much do manual controls that will simulate Newtonian physics. However, what I'm looking for is some software or algorithms that let me establish the orbit by controlling thrust in the right direction at the appropriate points in a trip towards the planet. So I guess the software would accept something like starting position, starting velocity, desired orbit height (I'm assuming circular for now), and desired orbit plane. From there it would give me firing points, durations and thrust vectors needed for the orbit. To make things simpler I'm assuming infinite fuel. I figure NASA must do stuff like this all the time, but I haven't been able to find something solid on how it's done. Perhaps it's too complex, I'm not really sure, but I thought I'd throw the question out there anyway.
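For the simplified case (point-mass gravity, impulsive burns, everything coplanar), the classical building block is the Hohmann-style transfer: one burn to reshape the trajectory into an ellipse whose far end touches the desired orbit radius, then a second burn there to circularize. The numbers fall out of the vis-viva equation; a small sketch (names are my own):

#include <cmath>

//Circular orbit speed at radius r around a body with gravitational
//parameter mu = G * M.
double CircularOrbitSpeed(double mu, double r)
{
    return std::sqrt(mu / r);
}

//Delta-v of the two impulsive burns of a Hohmann transfer between
//circular, coplanar orbits of radius r1 and r2.
void HohmannBurns(double mu, double r1, double r2, double& dv1, double& dv2)
{
    double a   = 0.5 * (r1 + r2);                      //semi-major axis of the transfer ellipse
    double v1  = std::sqrt(mu / r1);                   //circular speed at r1
    double v2  = std::sqrt(mu / r2);                   //circular speed at r2
    double vt1 = std::sqrt(mu * (2.0 / r1 - 1.0 / a)); //transfer-ellipse speed at r1 (vis-viva)
    double vt2 = std::sqrt(mu * (2.0 / r2 - 1.0 / a)); //transfer-ellipse speed at r2
    dv1 = vt1 - v1;   //burn to enter the transfer ellipse
    dv2 = v2 - vt2;   //burn to circularize at r2
}

A hyperbolic approach from deep space works the same way in principle: vis-viva gives the speed you'd have at periapsis, and the capture burn is the difference between that and the circular speed at the target altitude. A plane change is a separate burn, cheapest where the orbital speed is lowest.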
  19. Hi, We are working on a sequel to my all-time favorite application: "Garry Kitchen's GameMaker" for the Commodore 64 (it's a very simple game creation IDE). The application builds and runs on Windows(R) and Linux (the only dependency is SDL2, and there is a makefile included to build on any Linux). NOTE: This is a just-started work-in-progress... don't expect too much. We will be updating this thread as production progresses. Please post complaints and suggestions to this forum thread. This is our most ambitious project to date, so don't expect a beta for at least 6-12 months... Thanks! JeZxLee www.FallenAngelSoftware.com

If you are unfamiliar with this superb game creation IDE then please check out the wiki below:

en.wikipedia.org: Garry Kitchen's GameMaker
Garry Kitchen's GameMaker is an IDE for the Commodore 64, Apple II, and IBM PCs, created by Garry Kitchen and released by Activision in 1985. The software is notable as one of the earliest all-in-one game design products aimed at the general consumer, preceded by Broderbund's The Arcade Machine in 1982. Two add-on disks are available for the Commodore 64 version: Sports, and Science Fiction. These include sprites, music, and background elements for loading into GameMaker. To demonstrate the vers...

You can download the current entire project below on GitHub:

GitHub: FallenAngelSoftware/SDL2-C64GKGM2
100% FREE Cross-Platform Open-Source SDL2 Video Game Engine! - FallenAngelSoftware/SDL2-C64GKGM2

Here is a screenshot:
  20. Hope this is the right forum. I'm using Ogre and have rendered a wireframe of triangles by creating manual objects and feeding them vertex and index buffers. My question is: if I want to turn some of the edges of my wire mesh a different color, how would that normally be done? I.e., a triangle has 3 edges, 2 edges are white, and I change one edge to green.
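One common approach (a sketch, assuming sceneMgr is your SceneManager and a material with per-vertex colour enabled, such as BaseWhiteNoLighting): draw the wireframe as a line list with a colour per vertex, duplicating shared endpoints so one edge's colour doesn't bleed into its neighbours:

Ogre::ManualObject* manual = sceneMgr->createManualObject("wire");
manual->begin("BaseWhiteNoLighting", Ogre::RenderOperation::OT_LINE_LIST);

//Two white edges of the triangle (a, b, c are Ogre::Vector3 corners)...
manual->position(a); manual->colour(1, 1, 1);
manual->position(b); manual->colour(1, 1, 1);
manual->position(b); manual->colour(1, 1, 1);
manual->position(c); manual->colour(1, 1, 1);

//...and one green edge. Because endpoints are duplicated per edge,
//recolouring this edge leaves the white ones untouched.
manual->position(c); manual->colour(0, 1, 0);
manual->position(a); manual->colour(0, 1, 0);

manual->end();

If the edge colours need to change at runtime, ManualObject::beginUpdate lets you rewrite the section in place rather than recreating the object. The trade-off is that a line list duplicates vertices compared to an indexed wireframe, but for debug/visualization geometry that's usually acceptable.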
  21. Hodgman

    OOP is dead, long live OOP

    edit: Seeing this has been linked outside of game-development circles: "ECS" (this wikipedia page is garbage, btw -- it conflates EC-frameworks and ECS-frameworks, which aren't the same...) is a faux-pattern circulated within game-dev communities, which is basically a version of the relational model, where "entities" are just IDs that represent a formless object, "components" are rows in specific tables that reference an ID, and "systems" are procedural code that can modify the components. This "pattern" is always posed as a solution to an over-use of inheritance, without mentioning that an over-use of inheritance is actually bad under OOP guidelines. Hence the rant. This isn't the "one true way" to write software. It's getting people to actually look at existing design guidelines.

Inspiration

This blog post is inspired by Aras Pranckevičius' recent publication of a talk aimed at junior programmers, designed to get them to come to terms with new "ECS" architectures. Aras follows the typical pattern (explained below), where he shows some terrible OOP code and then shows that the relational model is a great alternative solution (but calls it "ECS" instead of relational). This is not a swipe at Aras at all - I'm a fan of his work and commend him on the great presentation! The reason I'm picking on his presentation in particular, instead of the hundred other ECS posts that have been made on the interwebs, is because he's gone through the effort of actually publishing a git repository to go along with his presentation, which contains a simple little "game" as a playground for demonstrating different architecture choices. This tiny project makes it easy for me to actually, concretely demonstrate my points, so, thanks Aras!

You can find Aras' slides at http://aras-p.info/texts/files/2018Academy - ECS-DoD.pdf and the code at https://github.com/aras-p/dod-playground.

I'm not going to analyse the final ECS architecture from that talk (yet?), but I'm going to focus on the straw-man "bad OOP" code from the start. I'll show what it would look like if we actually fix all of the OOD rule violations. Spoiler: fixing the OOD violations actually results in a similar performance improvement to Aras' ECS conversion, plus it actually uses less RAM and requires fewer lines of code than the ECS version!

TL;DR: Before you decide that OOP is shit and ECS is great, stop and learn OOD (to know how to use OOP properly) and learn relational (to know how to use ECS properly too).

I've been a long-time ranter in many "ECS" threads on the forum, partly because I don't think it deserves to exist as a term (spoiler: it's just an ad-hoc version of the relational model), but because almost every single blog, presentation, or article that promotes the "ECS" pattern follows the same structure:

- Show some terrible OOP code, which has a terribly flawed design based on an over-use of inheritance (and incidentally, a design that breaks many OOD rules).
- Show that composition is a better solution than inheritance (and don't mention that OOD actually teaches this same lesson).
- Show that the relational model is a great fit for games (but call it "ECS").

This structure grinds my gears because: (A) it's a straw-man argument... it's apples to oranges (bad code vs good code)...
which just feels dishonest, even if it's unintentional and not actually required to show that your new architecture is good, but more importantly: (B) it has the side effect of suppressing knowledge and unintentionally discouraging readers from interacting with half a century of existing research.

The relational model was first written about in the 1960's. Through the 70's and 80's this model was refined extensively. There are common beginner questions like "which class should I put this data in?", which are often answered in vague terms like "you just need to gain experience and you'll know by feel"... but in the 70's this question was extensively pondered and solved in the general case in formal terms; it's called database normalization. By ignoring existing research and presenting ECS as a completely new and novel solution, you're hiding this knowledge from new programmers.

Object oriented programming dates back just as far, if not further (work in the 1950's began to explore the style)! However, it was in the 1990's that OO became a fad - hyped, viral and, very quickly, the dominant programming paradigm. A slew of new OO languages exploded in popularity, including Java and (the standardized version of) C++. However, because it was a hype-train, everyone needed to know this new buzzword to put on their resume, yet no one really grokked it. These new languages had added a lot of OO features as keywords -- class, virtual, extends, implements -- and I would argue that it's at this point that OO split into two distinct entities with a life of their own.

I will refer to the use of these OO-inspired language features as "OOP", and the use of OO-inspired design/architecture techniques as "OOD". Everyone picked up OOP very quickly. Schools taught OO classes that were efficient at churning out new OOP programmers... yet knowledge of OOD lagged behind. I argue that code that uses OOP language features, but does not follow OOD design rules, is not OO code. Most anti-OOP rants are eviscerating code that is not actually OO code. OOP code has a very bad reputation, I assert, in part due to the fact that most OOP code does not follow OOD rules, thus isn't actually "true" OO code.

Background

As mentioned above, the 1990's was the peak of the "OO fad", and it's during this time that "bad OOP" was probably at its worst. If you studied OOP during this time, you probably learned "The 4 pillars of OOP":

- Abstraction
- Encapsulation
- Polymorphism
- Inheritance

I'd prefer to call these "4 tools of OOP" rather than 4 pillars. These are tools that you can use to solve problems. Simply learning how a tool works is not enough, though; you need to know when you should be using them... It's irresponsible for educators to teach people a new tool without also teaching them when it's appropriate to use each of them. In the early 2000's, there was a push-back against the rampant misuse of these tools, a kind of second wave of OOD thought. Out of this came the SOLID mnemonic to use as a quick way to evaluate a design's strength. Note that most of these bits of advice were actually widely circulated in the 90's, but didn't yet have the cool acronym to cement them as the five core rules...

Single responsibility principle. Every class should have one reason to change. If class "A" has two responsibilities, create new classes "B" and "C" to handle each of them in isolation, and then compose "A" out of "B" and "C".

Open/closed principle. Software changes over time (i.e. maintenance is important).
Try to put the parts that are likely to change into implementations (i.e. concrete classes) and build interfaces around the parts that are unlikely to change (e.g. abstract base classes).

Liskov substitution principle. Every implementation of an interface needs to 100% comply with the requirements of that interface. i.e. any algorithm that works on the interface should continue to work for every implementation.

Interface segregation principle. Keep interfaces as small as possible, in order to ensure that each part of the code "knows about" the least amount of the code-base as possible. i.e. avoid unnecessary dependencies. This is also just good advice in C++, where compile times suck if you ignore it.

Dependency inversion principle. Instead of having two concrete implementations communicate directly (and depend on each other), they can usually be decoupled by formalizing their communication interface as a third class that acts as an interface between them. This could be an abstract base class that defines the method calls used between them, or even just a POD struct that defines the data passed between them.

Not included in the SOLID acronym, but I would argue just as important, is the:

Composite reuse principle. Composition is the right default™. Inheritance should be reserved for use when it's absolutely required.

This gives us SOLID-C(++). From now on, I'll refer to these by their three-letter acronyms -- SRP, OCP, LSP, ISP, DIP, CRP...

A few other notes: In OOD, interfaces and implementations are ideas that don't map to any specific OOP keywords. In C++, we often create interfaces with abstract base classes and virtual functions, and then implementations inherit from those base classes... but that is just one specific way to achieve the idea of an interface. In C++, we can also use PIMPL, opaque pointers, duck typing, typedefs, etc... You can create an OOD design and then implement it in C, where there aren't any OOP language keywords! So when I'm talking about interfaces here, I'm not necessarily talking about virtual functions -- I'm talking about the idea of implementation hiding. Interfaces can be polymorphic, but most often they are not! A good use for polymorphism is rare, but interfaces are fundamental to all software. As hinted above, if you create a POD structure that simply stores some data to be passed from one class to another, then that struct is acting as an interface - it is a formal data definition. Even if you just make a single class in isolation with a public and a private section, everything in the public section is the interface and everything in the private section is the implementation.

Inheritance actually has (at least) two types -- interface inheritance, and implementation inheritance. In C++, interface inheritance includes abstract-base-classes with pure-virtual functions, PIMPL, and conditional typedefs. In Java, interface inheritance is expressed with the implements keyword. In C++, implementation inheritance occurs any time a base class contains anything besides pure-virtual functions. In Java, implementation inheritance is expressed with the extends keyword. OOD has a lot to say about interface-inheritance, but implementation-inheritance should usually be treated as a bit of a code smell!

And lastly, I should probably give a few examples of terrible OOP education and how it results in bad code in the wild (and OOP's bad reputation).
When you were learning about hierarchies / inheritance, you probably had a task something like: Let's say you have a university app that contains a directory of Students and Staff. We can make a Person base class, and then a Student class and a Staff class that inherit from Person!

Nope, nope nope. Let me stop you there. The unspoken sub-text beneath the LSP is that class-hierarchies and the algorithms that operate on them are symbiotic. They're two halves of a whole program. OOP is an extension of procedural programming, and it's still mainly about those procedures. If we don't know what kinds of algorithms are going to be operating on Students and Staff (and which algorithms would be simplified by polymorphism) then it's downright irresponsible to dive in and start designing class hierarchies. You have to know the algorithms and the data first.

When you were learning about hierarchies / inheritance, you probably had a task something like: Let's say you have a shape class. We could also have squares and rectangles as sub-classes. Should we have square is-a rectangle, or rectangle is-a square?

This is actually a good one to demonstrate the difference between implementation-inheritance and interface-inheritance. If you're using the implementation-inheritance mindset, then the LSP isn't on your mind at all and you're only thinking practically about trying to reuse code using inheritance as a tool. From this perspective, the following makes perfect sense:

struct Square
{
    int width;
};
struct Rectangle : Square
{
    int height;
};

A square just has a width, while a rectangle has a width + height, so extending the square with a height member gives us a rectangle!

As you might have guessed, OOD says that doing this is (probably) wrong. I say probably because you can argue over the implied specifications of the interface here... but whatever. A square always has the same height as its width, so from the square's interface, it's completely valid to assume that its area is "width * width". By inheriting from square, the rectangle class (according to the LSP) must obey the rules of square's interface. Any algorithm that works correctly with a square must also work correctly with a rectangle. Take the following algorithm:

std::vector<Square*> shapes;
int area = 0;
for (auto s : shapes)
    area += s->width * s->width;

This will work correctly for squares (producing the sum of their areas), but will not work for rectangles. Therefore, Rectangle violates the LSP rule.

If you're using the interface-inheritance mindset, then neither Square nor Rectangle will inherit from the other. The interface for a square and a rectangle are actually different, and one is not a super-set of the other.

So OOD actually discourages the use of implementation-inheritance. As mentioned before, if you want to re-use code, OOD says that composition is the right way to go! For what it's worth though, the correct version of the above (bad) implementation-inheritance hierarchy code in C++ is:

struct Shape
{
    virtual int area() const = 0;
};
struct Square : public virtual Shape
{
    virtual int area() const { return width * width; }
    int width;
};
struct Rectangle : private Square, public virtual Shape
{
    virtual int area() const { return width * height; }
    int height;
};

"public virtual" means "implements" in Java. For use when implementing an interface. "private" allows you to extend a base class without also inheriting its interface -- in this case, Rectangle is-not-a Square, even though it's inherited from it.
I don't recommend writing this kind of code, but if you do like to use implementation-inheritance, this is the way that you're supposed to be doing it!

TL;DR - your OOP class told you what inheritance was. Your missing OOD class should have told you not to use it 99% of the time!

Entity / Component frameworks

With all that background out of the way, let's jump into Aras' starting point -- the so-called "typical OOP" starting point. Actually, one last gripe -- Aras calls this code "traditional OOP", which I object to. This code may be typical of OOP in the wild, but as above, it breaks all sorts of core OO rules, so it should not at all be considered traditional. I'm going to start from the earliest commit before he starts fixing the design towards "ECS": "Make it work on Windows again" 3529f232510c95f53112bbfff87df6bbc6aa1fae

// -------------------------------------------------------------------------------------------------
// super simple "component system"

class GameObject;
class Component;

typedef std::vector<Component*> ComponentVector;
typedef std::vector<GameObject*> GameObjectVector;

// Component base class. Knows about the parent game object, and has some virtual methods.
class Component
{
public:
    Component() : m_GameObject(nullptr) {}
    virtual ~Component() {}
    virtual void Start() {}
    virtual void Update(double time, float deltaTime) {}

    const GameObject& GetGameObject() const { return *m_GameObject; }
    GameObject& GetGameObject() { return *m_GameObject; }
    void SetGameObject(GameObject& go) { m_GameObject = &go; }
    bool HasGameObject() const { return m_GameObject != nullptr; }

private:
    GameObject* m_GameObject;
};

// Game object class. Has an array of components.
class GameObject
{
public:
    GameObject(const std::string&& name) : m_Name(name) { }
    ~GameObject()
    {
        // game object owns the components; destroy them when deleting the game object
        for (auto c : m_Components) delete c;
    }

    // get a component of type T, or null if it does not exist on this game object
    template<typename T>
    T* GetComponent()
    {
        for (auto i : m_Components)
        {
            T* c = dynamic_cast<T*>(i);
            if (c != nullptr)
                return c;
        }
        return nullptr;
    }

    // add a new component to this game object
    void AddComponent(Component* c)
    {
        assert(!c->HasGameObject());
        c->SetGameObject(*this);
        m_Components.emplace_back(c);
    }

    void Start()
    {
        for (auto c : m_Components) c->Start();
    }

    void Update(double time, float deltaTime)
    {
        for (auto c : m_Components) c->Update(time, deltaTime);
    }

private:
    std::string m_Name;
    ComponentVector m_Components;
};

// The "scene": array of game objects.
static GameObjectVector s_Objects;

// Finds all components of given type in the whole scene
template<typename T>
static ComponentVector FindAllComponentsOfType()
{
    ComponentVector res;
    for (auto go : s_Objects)
    {
        T* c = go->GetComponent<T>();
        if (c != nullptr)
            res.emplace_back(c);
    }
    return res;
}

// Find one component of given type in the scene (returns first found one)
template<typename T>
static T* FindOfType()
{
    for (auto go : s_Objects)
    {
        T* c = go->GetComponent<T>();
        if (c != nullptr)
            return c;
    }
    return nullptr;
}

Ok, 100 lines of code is a lot to dump at once, so let's work through what this is... Another bit of background is required -- it was popular for games in the 90's to use inheritance to solve all their code re-use problems. You'd have an Entity, extended by Character, extended by Player and Monster, etc...
This is implementation-inheritance, as described earlier (a code smell), and it seems like a good idea to begin with, but eventually results in a very inflexible code-base. Hence OOD has the "composition over inheritance" rule, above. So, in the 2000's the "composition over inheritance" rule became popular, and gamedevs started writing this kind of code instead.

What does this code do? Well, nothing good. To put it in simple terms, this code is re-implementing the existing language feature of composition as a runtime library instead of a language feature. You can think of it as if this code is actually constructing a new meta-language on top of C++, and a VM to run that meta-language on. In Aras' demo game, this code is not required (we'll soon delete all of it!) and only serves to reduce the game's performance by about 10x.

What does it actually do, though? This is an "Entity/Component" framework (sometimes confusingly called an "Entity/Component system") -- but completely different to an "Entity Component System" framework (which are never called "Entity Component System systems" for obvious reasons). It formalizes several "EC" rules:

- The game will be built out of featureless "Entities" (called GameObjects in this example), which themselves are composed out of "Components".
- GameObjects fulfill the service locator pattern - they can be queried for a child component by type.
- Components know which GameObject they belong to - they can locate sibling components by querying their parent GameObject.
- Composition may only be one level deep (Components may not own child components, GameObjects may not own child GameObjects).
- A GameObject may only have one component of each type (some frameworks enforced this, others did not).
- Every component (probably) changes over time in some unspecified way - so the interface includes "virtual void Update".
- GameObjects belong to a scene, which can perform queries over all GameObjects (and thus also over all Components).

This kind of framework was very popular in the 2000's, and though restrictive, proved flexible enough to power countless numbers of games from that time and still today. However, it's not required. Your programming language already contains support for composition as a language feature - you don't need a bloated framework to access it... Why do these frameworks exist then? Well, to be fair, they enable dynamic, runtime composition. Instead of GameObject types being hard-coded, they can be loaded from data files. This is great to allow game/level designers to create their own kinds of objects... However, in most game projects, you have a very small number of designers and a literal army of programmers, so I would argue it's not a key feature. Worse than that though, it's not even the only way that you could implement runtime composition! For example, Unity is based on C# as a "scripting language", and many other games use alternatives such as Lua -- your designer-friendly tool can generate C#/Lua code to define new game-objects, without the need for this kind of bloated framework! We'll re-add this "feature" in a later follow-up post, in a way that doesn't cost us a 10x performance overhead...

Let's evaluate this code according to OOD: GameObject::GetComponent uses dynamic_cast. Most people will tell you that dynamic_cast is a code smell - a strong hint that something is wrong.
I would say that it indicates that you have an LSP violation on your hands -- you have some algorithm that's operating on the base interface, but it demands to know about different implementation details. That's the specific reason that it smells.

GameObject is kind of ok if you imagine that it's fulfilling the service locator pattern... but going beyond OOD critique for a moment, this pattern creates implicit links between parts of the project, and I feel (without a wikipedia link to back me up with comp-sci knowledge) that implicit communication channels are an anti-pattern and explicit communication channels should be preferred. This same argument applies to the bloated "event frameworks" that sometimes appear in games...

I would argue that Component is an SRP violation because its interface (virtual void Update(time)) is too broad. The use of "virtual void Update" is pervasive within game development, but I'd also say that it is an anti-pattern. Good software should allow you to easily reason about the flow of control, and the flow of data. Putting every single bit of gameplay code behind a "virtual void Update" call completely and utterly obfuscates both the flow of control and the flow of data. IMHO, invisible side effects, a.k.a. action at a distance, are the most common source of bugs, and "virtual void Update" ensures that almost everything is an invisible side-effect.

Even though the goal of the Component class is to enable composition, it's doing so via inheritance, which is a CRP violation.

The one good part is that the example game code is bending over backwards to fulfill the SRP and ISP rules -- it's split into a large number of simple components with very small responsibilities, which is great for code re-use. However, it's not great at DIP -- many of the components do have direct knowledge of each other.

So, all of the code that I've posted above can actually just be deleted. That whole framework. Delete GameObject (aka Entity in other frameworks), delete Component, delete FindOfType. It's all part of a useless VM that's breaking OOD rules and making our game terribly slow.

Frameworkless composition (AKA using the features of the #*@!ing programming language)

If we delete our composition framework, and don't have a Component base class, how will our GameObjects manage to use composition and be built out of Components? As hinted in the heading, instead of writing that bloated VM and then writing our GameObjects on top of it in our weird meta-language, let's just write them in C++, because we're #*@!ing game programmers and that's literally our job.

Here's the commit where the Entity/Component framework is deleted: https://github.com/hodgman/dod-playground/commit/f42290d0217d700dea2ed002f2f3b1dc45e8c27c
Here's the original version of the source code: https://github.com/hodgman/dod-playground/blob/3529f232510c95f53112bbfff87df6bbc6aa1fae/source/game.cpp
Here's the modified version of the source code: https://github.com/hodgman/dod-playground/blob/f42290d0217d700dea2ed002f2f3b1dc45e8c27c/source/game.cpp

The gist of the changes is:

- Removing ": public Component" from each component type.
- I add a constructor to each component type. OOD is about encapsulating the state of a class, but since these classes are so small/simple, there's not much to hide -- the interface is a data description. However, one of the main reasons that encapsulation is a core pillar is that it allows us to ensure that class invariants are always true...
or, in the event that an invariant is violated, you hopefully only need to inspect the encapsulated implementation code in order to find your bug. In this example code, it's worth us adding the constructors to enforce a simple invariant -- all values must be initialized.
- I rename the overly generic "Update" methods to reflect what they actually do -- UpdatePosition for MoveComponent and ResolveCollisions for AvoidComponent.
- I remove the three hard-coded blocks of code that resemble a template/prefab -- code that creates a GameObject containing specific Component types -- and replace them with three C++ classes.
- Fix the "virtual void Update" anti-pattern.
- Instead of components finding each other via the service locator pattern, the game objects explicitly link them together during construction.

The objects

So, instead of this "VM" code:

// create regular objects that move
for (auto i = 0; i < kObjectCount; ++i)
{
    GameObject* go = new GameObject("object");

    // position it within world bounds
    PositionComponent* pos = new PositionComponent();
    pos->x = RandomFloat(bounds->xMin, bounds->xMax);
    pos->y = RandomFloat(bounds->yMin, bounds->yMax);
    go->AddComponent(pos);

    // setup a sprite for it (random sprite index from first 5), and initial white color
    SpriteComponent* sprite = new SpriteComponent();
    sprite->colorR = 1.0f;
    sprite->colorG = 1.0f;
    sprite->colorB = 1.0f;
    sprite->spriteIndex = rand() % 5;
    sprite->scale = 1.0f;
    go->AddComponent(sprite);

    // make it move
    MoveComponent* move = new MoveComponent(0.5f, 0.7f);
    go->AddComponent(move);

    // make it avoid the bubble things
    AvoidComponent* avoid = new AvoidComponent();
    go->AddComponent(avoid);

    s_Objects.emplace_back(go);
}

We now have this normal C++ code:

struct RegularObject
{
    PositionComponent pos;
    SpriteComponent sprite;
    MoveComponent move;
    AvoidComponent avoid;

    RegularObject(const WorldBoundsComponent& bounds)
        : move(0.5f, 0.7f)
        // position it within world bounds
        , pos(RandomFloat(bounds.xMin, bounds.xMax),
              RandomFloat(bounds.yMin, bounds.yMax))
        // setup a sprite for it (random sprite index from first 5), and initial white color
        , sprite(1.0f, 1.0f, 1.0f, rand() % 5, 1.0f)
    {
    }
};

...

// create regular objects that move
regularObject.reserve(kObjectCount);
for (auto i = 0; i < kObjectCount; ++i)
    regularObject.emplace_back(bounds);

The algorithms

Now the other big change is in the algorithms. Remember at the start when I said that interfaces and algorithms were symbiotic, and both should impact the design of the other? Well, the "virtual void Update" anti-pattern is also an enemy here. The original code has a main loop algorithm that consists of just:

// go through all objects
for (auto go : s_Objects)
{
    // Update all their components
    go->Update(time, deltaTime);
}
You might argue that this is nice and simple, but IMHO it's so, so bad. It's completely obfuscating both the flow of control and the flow of data within the game. If we want to be able to understand our software, if we want to be able to maintain it, if we want to be able to bring on new staff, if we want to be able to optimise it, or if we want to be able to make it run efficiently on multiple CPU cores, we need to be able to understand both the flow of control and the flow of data. So "virtual void Update" can die in a fire.

Instead, we end up with a more explicit main loop that makes the flow of control much easier to reason about (the flow of data is still obfuscated here; we'll get around to fixing that in later commits):

// Update all positions
for (auto& go : s_game->regularObject)
{
    UpdatePosition(deltaTime, go, s_game->bounds.wb);
}
for (auto& go : s_game->avoidThis)
{
    UpdatePosition(deltaTime, go, s_game->bounds.wb);
}

// Resolve all collisions
for (auto& go : s_game->regularObject)
{
    ResolveCollisions(deltaTime, go, s_game->avoidThis);
}

The downside of this style is that for every single new object type that we add to the game, we have to add a few lines to our main loop. I'll address / solve this in a future blog in this series.

Performance

There's still a lot of outstanding OOD violations, some bad design choices, and lots of optimization opportunities remaining, but I'll get to them in the next blog in this series. As it stands at this point though, the "fixed OOD" version either almost matches or beats the final "ECS" code from the end of the presentation... And all we did was take the bad faux-OOP code and make it actually obey the rules of OOP (and delete 100 lines of code)!

Next steps

There's much more ground that I'd like to cover here, including solving the remaining OOD issues, immutable objects (functional style programming) and the benefits they can bring to reasoning about data flows, message passing, applying some DOD reasoning to our OOD code, applying some relational wisdom to our OOD code, deleting those "entity" classes that we ended up with and having purely components-only, different styles of linking components together (pointers vs handles), real-world component containers, catching up to the ECS version with more optimization, and then further optimization that wasn't present in Aras' talk (such as threading / SIMD). No promises on the order that I'll get to these, or if, or when...
  22. I am looking for some advice on how best to structure a text-based football game in C++. I am actually trying to mimic an old DOS game called Armchair Quarterback, but without any graphics for the time being. The original DOS game had very minimal graphics, just showing a cursor moving across a static graphic (but I will leave the graphics to the end). I attached what the original game looked like below.

So far, I wrote functions for the following:
- for the user to select a team
- for the user to select the opponent, which is always CPU-controlled
- for the user to select difficulty (enum)

Things left to do: Set up one overarching team class for all the teams in the game; each specific team would then inherit the attributes from the overarching team class. In the original game, each team only had four attributes that they were rated on. I would also have to construct 17 offensive plays that the user has to choose from. Would each play be a separate function? Also, the main logic and AI would need to be constructed (I would need some variables for logic and AI):
- Bigger pass plays have a lower % of completion than shorter pass plays
- The CPU is guessing what the user selects, to "play" defense
- The user is guessing what the CPU selects, to "play" defense
- This will incorporate the attribute ratings for the offense and defense per team, plus randomness and the situation in the game (down and distance?)

Any help would be appreciated in how the code base should be structured. Thanks! DK
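On the "would each play be a separate function" question: probably not. A common alternative is to represent the 17 plays as data and resolve them all through one function, so adding or tuning a play never touches the game logic. A sketch with made-up field names and numbers:

#include <array>
#include <cstdlib>
#include <string>

struct Play
{
    std::string name;
    bool isPass;
    int typicalYards;     //typical gain when the play works
    float successChance;  //lower for deeper pass plays
};

//All 17 offensive plays would live in one table like this.
const std::array<Play, 3> kPlays = {{
    { "Off Tackle", false,  4, 0.80f },
    { "Short Pass", true,   6, 0.65f },
    { "Long Bomb",  true,  30, 0.30f },
}};

//One resolver handles every play, folding in the team ratings, the
//defensive guess, and randomness.
int ResolvePlay(const Play& play, int offenseRating, int defenseRating,
                bool defenseGuessedRight)
{
    float chance = play.successChance;
    if (defenseGuessedRight)
        chance *= 0.5f;  //a correctly guessed play gets stuffed more often
    chance += (offenseRating - defenseRating) * 0.01f;
    bool success = (std::rand() / (float)RAND_MAX) < chance;
    return success ? play.typicalYards : 0;
}

The down-and-distance situation would then feed into how the CPU picks its defensive guess, rather than into the play resolution itself.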
  23. I'm doing a WinAPI/DX10 framework rewrite, and this time around I'd like to start handling errors a little more seriously in my code. Up to this point I've allowed the code to fail naturally and caught the exception within VS. I always found that to be the quickest method. Usually a break occurs and I just step through the call stack and find the problem in seconds (hopefully). However, I'm not really sure that's the proper way to do this, but it does feel like I can avoid writing code that is littered with FAILED() or if( == NULL) statements. So what is the proper way? I've read a few other threads on here that talk about using asserts, but I've never really seen any code written with that, so I'm a bit doubtful. Most of the conditions I'd like to monitor for errors are usually DirectX function return values, sometimes null pointers, sometimes WinAPI function fail handles, sometimes index overrun errors, but mostly DX function fails. Again, I feel like the debug DX libraries are very verbose and do a wonderful job interacting with VS as it is, so up to this point I've just relied on output window messages for that, but again I'd like to know if that is a proper way to do things.

P.S. The framework is really just a learning tool for me, so it's not going to see serious use. So I lean on the side of cleaner, more readable code vs obfuscated code, and I'd like to not go too crazy with error checking; however, I'd like to get at least an idea of the way things are done properly.

P.P.S. I'd also like to mention that I am aware of exceptions; however, I'm particularly asking about catching programming errors. I was under the impression that exception handling is more of a run-time error detection method, to catch things that are not really the programmer's fault (like memory, hardware or network issues).
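For what it's worth, one pattern seen in a lot of D3D codebases (a sketch; the HR name is made up) is a macro that checks the HRESULT, reports where it failed, and breaks into the debugger in debug builds, while compiling down to the bare call in release. It keeps call sites to one readable line without sprinkling FAILED() everywhere:

#include <cstdio>
#include <windows.h>

#if defined(_DEBUG)
#define HR(x)                                                         \
    do {                                                              \
        HRESULT hr_ = (x);                                            \
        if (FAILED(hr_)) {                                            \
            std::fprintf(stderr, "%s failed (0x%08lX) at %s:%d\n",    \
                         #x, (unsigned long)hr_, __FILE__, __LINE__); \
            __debugbreak(); /* MSVC intrinsic: break into the debugger */ \
        }                                                             \
    } while (0)
#else
#define HR(x) (x)
#endif

//Usage: HR(device->CreateBuffer(&desc, &initData, &buffer));

Plain assert() then covers the "this is a programming error" cases (null pointers, index overruns), which matches your instinct: asserts for contract violations that should never happen, HRESULT checks for API calls that can legitimately fail, and exceptions (if at all) for genuine run-time conditions.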
  24. Improved Bloom, tonemapping, contrast reduction, gamma correction:
  25. I am looking for resources on best practices (really any info at this point) on how to program a dependency graph with functions that create nodes, create attributes and connect attributes. For some reason this topic is kryptonite for my Google Fu.
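In case a concrete skeleton helps the searching: the Maya-style dependency graph boils down to nodes that own attributes, connections that link one node's output attribute to another's input, and pull-based evaluation that walks connections upstream. A bare-bones sketch (all names invented, no cycle detection or dirty-propagation optimizations):

#include <string>
#include <unordered_map>

struct Attribute
{
    std::string name;
    double value = 0.0;
    bool dirty = true;
    Attribute* source = nullptr;  //upstream connection, if any
};

struct Node
{
    std::string name;
    std::unordered_map<std::string, Attribute> attributes;

    Attribute& addAttribute(const std::string& attrName)
    {
        return attributes.emplace(attrName, Attribute{attrName}).first->second;
    }
};

//Connect an output attribute to an input attribute.
void connect(Attribute& from, Attribute& to)
{
    to.source = &from;
    to.dirty = true;
}

//Pull-based evaluation: follow the connection chain upstream.
double evaluate(Attribute& attr)
{
    if (attr.source)
        attr.value = evaluate(*attr.source);
    attr.dirty = false;
    return attr.value;
}

Search-term-wise, "dataflow programming", "push vs pull evaluation", and "topological sort" tend to find more than "dependency graph" does, since the game/VFX usage of the term collides with build-system and package-manager results.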