taylorsnead

  1. I'm going to bump this, because I still haven't figured it out and I'm still trying to implement it. I'm able to make a system for multiple languages, but it forces the programmer to rebind their classes for each language. So I was attempting something like what Hodgman suggested, but I don't know how I'd access the data again later: the grabbing and use of the data would be entirely automatic, and I can't store a type in a variable, so it wouldn't be known what type to bind it as. Once the data can be retrieved later, in a separate function, with the correct type, I'll have an easily extensible system where new languages can be added with no changes to existing binding code. Maybe it would be better to bind one language (e.g. Squirrel) and use its dynamic typing to create a binding system there that routes back to C++, allowing someone to bind their classes in either Squirrel or C++?
  2. Bump. Still not solved, and not sure how to attempt to.
  3. I had already done that with relative and absolute directories, the correct dir, and setting the dir to empty. Either way doesn't work.
  4. getcwd(), in fact, does output the correct directory when launching from the terminal rather than the IDE (which is what I do anyway).
  5. Now, using full strace (just "strace ./Launcher"), it seems that it detects the file correctly with an absolute path, but then, a few calls later, it tries opening a file with a relative directory that wouldn't be correct anyway. This is the relevant part of the output:
[source]
brk(0) = 0x1310000
brk(0x1331000) = 0x1331000
open("/home/taylorsnead/Documents/ShadowFox/NewEngine/BinLinux/SFGame.so", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\300!\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=27024, ...}) = 0
mmap(NULL, 2122168, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f8d2ab03000
mprotect(0x7f8d2ab09000, 2093056, PROT_NONE) = 0
mmap(0x7f8d2ad08000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5000) = 0x7f8d2ad08000
close(3) = 0
open("../../BinLinux//SFEngine.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
munmap(0x7f8d2ab03000, 2122168) = 0
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 0), ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f8d2bcfe000
write(1, "Failed to load /home/taylorsnead"..., 82) = 82
write(1, "../../BinLinux//SFEngine.so: can"..., 87) = 87
[/source]
The program's own output, which strace interleaves with those write() calls, was:
Failed to load /home/taylorsnead/Documents/ShadowFox/NewEngine/BinLinux/SFGame.so
../../BinLinux//SFEngine.so: cannot open shared object file: No such file or directory
But I don't know where it gets this new directory. Even using a relative directory, as with Boost.Filesystem, it sees the file, but dlopen (I guess) just converts it to a directory relative to the code/project files, which won't work with the executable. I'm using the same compiler for the libraries and the executable, all (I guess; I don't specify it to the compiler (g++)) 32-bit. Trying "LD_PRELOAD=SFGame.so /bin/ls" just says that SFGame.so can't be preloaded: ERROR: ld.so: object 'SFGame.so' from LD_PRELOAD cannot be preloaded: ignored.
Using LD_PRELOAD with the path to Launcher instead of ls makes it run Launcher, which of course fails. Setting it without a path doesn't change the result of the strace of Launcher.
  6. Oh, I mean that Code::Blocks has an option for "Execution working dir", and I tried leaving it empty, but the same problem occurred. Running the resulting binary through the IDE or from the terminal makes no difference, unless I compile it via the terminal; that is why the option seemed suspicious to me. So I do think I understand "working directory" as it applies to file paths within the executable. Despite all this, I don't know why the problem persists even with an absolute path to the file. I feel it's something to do with libdl.so and less to do with the working directory, but I have no idea what to do about it.
  7. Okay, it seems that was correct. Although I do have to use an absolute path to the file, it works when compiled via the terminal using g++-4.7, without using LD_LIBRARY_PATH (though I guess that would allow me to use relative paths). Code::Blocks still has the problem with the execution directory empty. Any thoughts on how to fix this? I could just compile my Launcher from the terminal when I need to, but I'd prefer to be able to compile with my IDE (especially since it works fine on Windows, so I feel there should be an easy way to resolve this). The same problem also seems to occur with two dynamic libraries linked at compile/link time (through Code::Blocks, of course). I use my Launcher executable to run my Game.so, which uses Engine.so to run and control the other, lower-level libraries (Boost, GLFW, ALUT, etc.), so dynamic linking is important (though I use static linking when possible).
  8. I should have mentioned that, yes, I already tried setting LD_LIBRARY_PATH, and that didn't help either. It seems that at compile time, the string passed to dlopen gets resolved to a path local to main.cpp. strace shows that it really is trying to open that directory, not just displaying it, whereas Boost.Filesystem is opening the correct directory (which I already knew, since it showed it). I don't know how to get it to use the correct path, short of moving my source to the folder where it should compile, which seems like a horrible solution. EDIT: Or it has something to do with the "Execution Working Dir" option in Code::Blocks. I've tried setting that both relatively and absolutely to BinLinux, but no change.
  9. I've been working mainly in Windows with VC11, but a few days ago I switched to MinGW (I had already been planning to switch; I just liked Visual Studio's plugins too much). Shortly after, I installed GCC/G++ 4.7 on my Linux (Mint) machine. For my launcher executable (this makes separation easier, and lets me change which library is launched without recompiling a large amount of code), I use Boost.Extension's shared_library (which simply wraps LoadLibrary or dlopen, depending on the platform) to load my dynamic/shared library (SFGame.so). This worked great on Windows with both compilers, but on Linux, dlopen fails no matter how I write the path. Using the same path, Boost.Filesystem confirms (through the application) that the file can be accessed. My simple code:
[source lang="cpp"]
#include <iostream>
#include <functional>
#include <boost/extension/shared_library.hpp>
#include <boost/filesystem.hpp>

#if _WIN32 || _WIN64
#  define EXT ".dll"
#elif __APPLE__
#  define EXT ".dylib"
#elif __linux
#  define EXT ".so"
#endif

int main( void )
{
    using namespace boost::extensions;
    using namespace boost::filesystem;

    std::string libpath = "./SFGame.so";
    path pfile( libpath );
    if( exists( pfile ) )
    {
        std::cout << "Game library: " << complete( pfile ).generic_string()
                  << " exists\nSize: " << file_size( pfile ) << "\n";
    }

    shared_library lib( libpath );
    if( !lib.open() )
    {
        std::cout << "Failed to load " << libpath << "\n" << dlerror() << "\n";
        return 0;
    }

    lib.get<void>( "Launch" )();
    return 1;
}
[/source]
This same code works perfectly on Windows, as I said. My output is either:
../../BinLinux//SFGame.so: cannot open shared object library
or:
SFGame.so: cannot open shared object library
depending on whether I use "./SFGame.so" (I assume because my Code::Blocks project file and main.cpp are in "Engine/Code/Game/" while SFGame.so is in "Engine/BinLinux/") or "SFGame.so".
I can use the full path from "complete(pfile).generic_string()", which gives the same directory in the error as "./SFGame.so". Any ideas on how to stop this failing EVERY time? I even tried linking Launcher against Engine.so in the compiler (GNU g++ within Code::Blocks), but no dice.
  10. I understand both of your concepts, and I tried to implement both. For the first (Ashaman73's) concept, I did it like so:
[source lang="cpp"]
// SClass has Var, Func, Prop, and Bind functions without implementations.
// SqClass is a child and implements the functions for Squirrel.
// ScriptSystem::NewClass() returns a pointer to a child SqClass or e.g.
// LuaClass, based on mLang (set by SetLang()).
ScriptSystem::SetLang("Squirrel");
SClass<Player>& scp = ScriptSystem::NewClass<Player>();
scp.Var("Health", &Player::mHealth);
scp.Bind("Player");

Script& sqs = ScriptSystem::NewScript();
sqs.Compile("Test.nut");
sqs.Run();
[/source]
Now, this basically works, except that if I want to, say, shut off the Squirrel VM and launch the Lua VM, I have to rebind those classes. Using something closer to the other given concept (Hodgman's), I'd be able to bind once: ScriptSystem would keep a list of the classes to be bound (since then SClass holds actual data), and on VM binding (called just before script launching begins), it would take the data from all SClass instances in the list, use it to create VM-specific classes, bind their members and vars, then bind the actual class (it's also easy then to remove them from the bind list). The only problem is, I can't figure out how to structure the data for containment and later use when binding to the specific VM. I was thinking something like this:
[source lang="cpp"]
struct VarData
{
    string label;
    void* ref;
};

struct ClassData
{
    vector<VarData> vars;
};

struct SClass
{
    template<class T, typename V>
    void Var(string label, V T::*val)
    {
        VarData newvar;
        newvar.label = label;
        newvar.ref = val; // problem: a member pointer isn't convertible to void*
        data.vars.push_back(newvar);
    }

    void Bind(string label);

    ClassData data;
};
[/source]
But the problem is using that given data later.
[source lang="cpp"]
Class<Player> cls;
for( int i = 0; i < play.data.vars.size(); ++i )
{
    cls.Var(play.data.vars[i].label, play.data.vars[i].ref);
    RootTable().Bind("NameHere", cls);
}
[/source]
Something like that for Squirrel, except I can't use the void pointer directly. Would I use a different type for storage, or how can I cast back to the correct type in this situation? I'm just not really sure how to approach it.
  11. I am attempting to make a system that lets a user (in C++) bind classes to their scripting language of choice, then compile the script and run it. The primary language I'm using for any scripting needs is Squirrel, but I want to be able to switch languages at runtime (possibly with an enum) and, using the [b]same[/b] code, bind to, say, Python. I was thinking of having a class for each language's Class (with Var, Func, Prop, Bind, etc. functions for binding) and a class for its Script (with Compile and Run functions), then routing instantiation through the Subsystem (which, based on the enum, switches to a child class with creation methods) and getting back a Class* or Script* pointer, though I'd prefer the creation to be implicit. Like this:
[source lang="cpp"]
void BindPlayer()
{
    scriptLang = "Squirrel";

    // SClass is for ScriptClass, the parent top class, not for Squirrel specifics.
    SClass<Player> pclass;
    pclass.Var("TestVar", &Player::TestVar);
    // Bind other vars/funcs/props.
    pclass.Bind("Player");

    Script scr;
    scr.Compile("TestPlayer.nut");
    scr.Run();

    scriptLang = "Python";
    // Do the bind again, same code.

    Script scrp;
    scrp.Compile("TestPlayer.py");
    scrp.Run();
}
[/source]
How feasible is this method, and if it is feasible, how exactly would I go about the implicit creation via another class? If that method wouldn't work well, what would?
  12. Thanks. So, to use the struct as my vertices and specify offsets, I can simply do that with glVertexAttribPointer? Like this?
[source lang="cpp"]
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, mesh->mVertexId);
glVertexAttribPointer( // vertex position
    0,        // the attribute we want to configure
    3,        // size
    GL_FLOAT, // type
    GL_FALSE, // normalized?
    64,       // stride: the size of one whole padded Vertex, identical for every attribute
    (void*)0  // array buffer offset
);

// Then the next item in my struct (the struct is padded to 64 bytes; values are 4-byte floats):
glEnableVertexAttribArray(1);
glVertexAttribPointer( // UVs
    1,        // the attribute we want to configure
    2,        // size
    GL_FLOAT, // type
    GL_FALSE, // normalized?
    64,       // stride
    (void*)12 // array buffer offset
);
[/source]
I think most of what I was asking is answered. Do I still have to do the client-state enabling/disabling? Lastly, I'm just a bit confused about matrix manipulation. As I understand it, I would have my matrix, use GLM's translate, rotate, and scale functions on it (well, for rotation it's easier for me to simply use matrix *= mat4_cast(quatrot);), then pass the "MVP" to the shader. This MVP stands for Projection * View * Model, correct? So model would be the transform of my object, projection the perspective matrix from the camera, and view the transformation matrix of the camera (created using lookAt())? Then set the position in the shader with: gl_Position = MVP * vec4(vertexPosition, 1.0); If that is all correct, then my inquiries for now seem to be answered. Almost forgot: know of any reasons shaders/GLSL can be better for certain things than C++? I know they run on the GPU, but for which things is that an advantage?
  13. I have looked around at OpenGL tutorials and found that I like these a lot: [url="http://www.opengl-tutorial.org/beginners-tutorials/"]http://www.opengl-tu...ners-tutorials/[/url]. I'm not sure they use the best methods, though. My real questions are a range. First of all, using VBOs, what is the best method of updating the data each frame? Initializing a mesh to render (not every frame): I don't believe there's much of a problem with, or difference between, using 3-4 buffers of arrays versus 1 buffer of a struct array. Then indexing that vertex data. Lastly, creating pointers for each data area of a vertex (position, UV, normals, color), and sending the buffered indices:
[source lang="cpp"]
glGenBuffers(1, &mesh->mVertexId);
glBindBuffer(GL_ARRAY_BUFFER, mesh->mVertexId); // bind the buffer (vertex array data)
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * mesh->mVertices.size(), NULL, GL_STATIC_DRAW);
// Note: a vector's elements live at &vec[0] (or vec.data()), not at &vec.
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(Vertex) * mesh->mVertices.size(), &mesh->mVertices[0]);
IndexArray(mesh->mVertices, mesh->mIndex);

glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), OFFSET(12));
glNormalPointer(GL_FLOAT, sizeof(Vertex), OFFSET(20));
glColorPointer(4, GL_FLOAT, sizeof(Vertex), OFFSET(32));
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), OFFSET(0));

glGenBuffers(1, &mesh->mVertexArrayId); // generate the index buffer
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mesh->mVertexArrayId);
// The size must be the total byte size of the index data.
glBufferData(GL_ELEMENT_ARRAY_BUFFER, mesh->mIndex.size() * sizeof(mesh->mIndex[0]), &mesh->mIndex[0], GL_STATIC_DRAW);

mMeshBuffer.push_back(meshr); // don't mind "meshr", it's something else containing the Mesh*
[/source]
Those tutorials do it similarly, but do away with the "pointer" function calls and split each area of vertex data into separate buffers. I think what I'm doing is better, but I could be wrong? Now, for per-frame rendering, I will either have a separate VBO for each mesh or find a way to batch certain ones together while keeping them independent for transformation.
This is what I have gathered as what seemed right (in my function called each frame):
[source lang="cpp"]
Mesh* mesh = mMeshBuffer[i]->GetMesh();
glBindBuffer(GL_ARRAY_BUFFER, mesh->mVertexId);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mesh->mVertexArrayId);

glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);

glUseProgram(mesh->mShader.id);

glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), OFFSET(12));
glNormalPointer(GL_FLOAT, sizeof(Vertex), OFFSET(20));
glColorPointer(4, GL_FLOAT, sizeof(Vertex), OFFSET(32));
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), OFFSET(0));

// The count is the number of indices (not vertices), and the type must be
// an unsigned integer type matching the uploaded indices; GL_INT is not valid.
glDrawElements(GL_TRIANGLES, mesh->mIndex.size(), GL_UNSIGNED_INT, OFFSET(0));

glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
[/source]
Those tutorials do it differently, though. Many sources I've seen have different methods for rendering buffered vertex data, like sending the data to the shader with glVertexAttribPointer and, again, not using the glXXXXPointer calls. Mind you, I haven't finished the series yet, and its code is fairly static (understandably), so it's long and harder for me to understand. I'm here to ask which method seems best (yet fairly manageable) for rendering a mesh (whose vertices won't change) with a dynamic transform (I assume glRotatef, glTranslatef, and glScalef work okay; or are those as obsolete as immediate-mode drawing with a function call per vertex? If so, shedding light on matrix manipulation with GL would be nice), a texture (simple for now), normals, and possibly a shader (I'm not completely sure what a shader entails, or its advantages over C++ for certain things, when applied to a mesh) in a VBO. Also, any useful information/tips about related things would help. Thanks.
EDIT: Also, glBufferData with the data directly, versus glBufferData with NULL/nullptr followed by glBufferSubData?
  14. Game Engine Basis Choices

    I realize it's hard. I think reworking something existing into what I want may be more annoying than building my own. That's what I was trying to get opinions on: which would be harder to shape into what I want while still having good graphics features and performance.
  15. Game Engine Basis Choices

    Everyone in that thread said that Unity Pro is required, which costs $1500. I'm looking for something free.