
Neilo

Member

  • Content count: 58
  • Joined
  • Last visited

Community Reputation: 290 Neutral

About Neilo

  • Rank: Member
  1. What does 2.5D actually mean?

    It's a horrible term that I first saw used in relation to games with 2D gameplay but 3D graphics. The term seemed to become really common when Nights into Dreams was released on the Sega Saturn.
  2. Namespaces good for anything else?

    I don't really like seeing namespaces used to break down subsystems in C++ projects. Similarly, I don't think there's a need for nested namespaces. That said, I use a single top-level namespace for each project I work on, and heavy use of anonymous namespaces for implementation hiding. A sketch of what I mean:
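    [code]
// A minimal sketch of the layout I mean; all names here are made up.
namespace mygame { // single top-level namespace for the whole project

namespace { // anonymous namespace: helpers invisible outside this .cpp

int ClampVolume(int v)
{
    return v < 0 ? 0 : (v > 100 ? 100 : v);
}

} // anonymous namespace

void SetMusicVolume(int volume)
{
    int v = ClampVolume(volume);
    // ... hand v to the audio backend ...
}

} // namespace mygame
    [/code]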
  3. [quote name='Kurt_duncan' timestamp='1343582284' post='4964278']
    Thanks to you too beans222. I downloaded GLEW 1.8 and I am trying to run this code:
    [source lang="cpp"]
#include <gl/glew.h>
#include <iostream>
#include <stdlib.h>

using namespace std;

int main(int argc, char** argv)
{
    if (glGetString(GL_VERSION) == NULL)
    {
        GLenum glError = glGetError();
        cout << glError;
    }
    else
    {
        string version((const char*)glGetString(GL_VERSION));
        cout << version.data();
    }
    return 0;
}
    [/source]
    But I am getting error 1282, invalid operation. It seems this is because the OpenGL context is not set up. How can I do that? Brother Bob, the NeHe tutorials that I saw are based on OpenGL 1.1 and they are, in my opinion, very good but very old. When I asked you before about a tutorial, I was thinking of one that explains how to configure Visual Studio to use the latest versions of OpenGL. Does GLEW have an implementation of the latest versions of OpenGL, or how does it make those functions work? I ask because although I am getting error 1282, Visual Studio finds functions that are part of the latest versions, like glDrawElements, for example. Thanks for everything.
    [/quote]

    You need to init GLEW before you do anything...

    [code]
// kOpenGLAttribs (not shown here) is the WGL_CONTEXT_* attribute list passed
// to wglCreateContextAttribsARB; it includes kContextFlags below.
int kContextFlags = WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB
#ifdef _DEBUG
    | WGL_CONTEXT_DEBUG_BIT_ARB;
#else
    ;
#endif

bool Device::Initialize(HWND window)
{
    // TODO: assert window is valid, make sure we get a valid device context also
    this->window = window;
    deviceContext = ::GetDC(this->window);

    PIXELFORMATDESCRIPTOR pfd = { 0 };
    pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
    pfd.dwFlags = PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 32;
    pfd.iLayerType = PFD_MAIN_PLANE;

    int pixelFormat = ::ChoosePixelFormat(deviceContext, &pfd);
    if (pixelFormat == 0)
    {
        return false;
    }

    if (SetPixelFormat(deviceContext, pixelFormat, &pfd) == FALSE)
    {
        return false;
    }

    // Create a temporary legacy context first; GLEW needs a current context
    // before glewInit() can resolve the WGL extension entry points.
    HGLRC tempContext = ::wglCreateContext(deviceContext);
    ::wglMakeCurrent(deviceContext, tempContext);

    GLenum result = glewInit();
    if (result != GLEW_OK)
    {
        return false;
    }

    if (wglewIsSupported("WGL_ARB_create_context") == 1)
    {
        // Now create the real 3.x+ context and throw the temporary one away.
        renderingContext = ::wglCreateContextAttribsARB(deviceContext, NULL, kOpenGLAttribs);
        ::wglMakeCurrent(NULL, NULL);
        ::wglDeleteContext(tempContext);
        ::wglMakeCurrent(deviceContext, renderingContext);
        wglSwapIntervalEXT(1); // vsync on
        SetupDebugOutput();
    }
    else
    {
        //renderingContext = tempContext;
        return false;
    }

    GLenum error = glGetError();
    glGetIntegerv(GL_MAJOR_VERSION, &OpenGLVersion[0]);
    glGetIntegerv(GL_MINOR_VERSION, &OpenGLVersion[1]);
    return error == GL_NO_ERROR;
}
    [/code]
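    With that in place, usage looks like this (a hypothetical call site; hwnd comes from your window creation code):

    [code]
// Hypothetical usage sketch: only after Initialize() succeeds is the context
// current and glGetString(GL_VERSION) valid.
Device device;
if (device.Initialize(hwnd))
{
    std::cout << (const char*)glGetString(GL_VERSION) << std::endl;
}
    [/code]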
  4. OpenGL Interleaved Arrays

    I use 3.2 Core for compatibility with OS X 10.7, and on that version at least, Vertex Array Objects are required. Perhaps this is true of 4.x too? They are similar to Input Layouts in D3D10+ or FVFs in D3D9 in that they describe the data you're passing to the pipeline. In the case of VAOs, you just create and bind a VAO, query the locations of your attributes, and bind them with the correct offsets.
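    A minimal sketch of that flow, assuming a current 3.2 core context and hypothetical program and vbo handles over an interleaved position (vec3) + texcoord (vec2) buffer:

    [code]
// Create and bind the VAO; it records the attribute bindings made below.
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, vbo);

// Query the attribute locations from the linked program.
GLint posLoc = glGetAttribLocation(program, "position");
GLint uvLoc  = glGetAttribLocation(program, "texcoord");

// Describe the interleaved layout: 3 floats position, then 2 floats texcoord.
GLsizei stride = 5 * sizeof(float);
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
glEnableVertexAttribArray(uvLoc);
glVertexAttribPointer(uvLoc, 2, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float)));
    [/code]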
  5. [quote name='kunos' timestamp='1339952582' post='4950046']
    3) unique_ptr .. which is like 2 but if you are willing to write unique_ptr<blabla>, .get() and all that nonsense over and over again to AVOID 1 SINGLE CALL TO "delete" in your destructor.
    [/quote]

    I somewhat agree and somewhat disagree with this point. I don't use exceptions in my C++, but using a unique or scoped pointer makes your code more exception safe. Even if you don't use exceptions, for allocations within a function it takes a lot of the programmer overhead out of memory management. Then, for consistency's sake, I use a scoped pointer wrapper for member variables too. I do agree that littering code with myPtr.get() is a bit of a pain in the neck though!
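    To illustrate the upside (a minimal sketch; Texture and LoadTexture are made-up names):

    [code]
#include <memory>

// Hypothetical types and loader, for illustration only.
struct Texture { int width, height; };

std::unique_ptr<Texture> LoadTexture(const char* path)
{
    // Stand-in for a real image loader.
    return std::unique_ptr<Texture>(new Texture());
}

void RenderThumbnail()
{
    std::unique_ptr<Texture> texture = LoadTexture("thumb.png");
    if (!texture)
    {
        return; // early returns need no cleanup...
    }
    // ... call texture.get() wherever a raw pointer is required ...
}   // ...and the Texture is freed here, exception or not
    [/code]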
  6. You're trying to start a lib. Right-click on Tut 01 Main and choose "Set as Startup Project".
  7. Pretty good introductory articles, although you should probably give your MyGamePadController class a dealloc method that removes your HID Manager from the run loop and then calls CFRelease on it. Core Foundation objects aren't covered by ARC.
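    The cleanup itself is plain C IOKit calls, so here's a sketch of what that dealloc should do (hidManager is a hypothetical IOHIDManagerRef member; ARC won't release it for you):

    [code]
#include <IOKit/hid/IOHIDManager.h>

// Sketch of the teardown an Objective-C dealloc would perform; the same
// calls work verbatim inside dealloc since they are C functions.
void ShutdownGamePadController(IOHIDManagerRef hidManager)
{
    if (hidManager)
    {
        // Stop callbacks before releasing the manager.
        IOHIDManagerUnscheduleFromRunLoop(hidManager, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);
        IOHIDManagerClose(hidManager, kIOHIDOptionsTypeNone);
        CFRelease(hidManager); // balances the IOHIDManagerCreate
    }
}
    [/code]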
  8. What Books Are Recommended

    I have both of those books. I've had the maths one for years and years and it's a great reference. The AI book I bought recently and it's very good. As you'd expect from a book with "by example" in the title, there are lots of examples to help you get up and running. The coding style isn't quite to my taste, but it's clear and works well. The tone of the book is quite informal too, which is nice.
  9. code for frame rate

    [code]
class Timer
{
public:
    Timer(bool autoStart = false);
    ~Timer();

    void Reset();
    void Update();

    inline float DeltaTime() const { return deltaTime; }
    inline float TotalTime() const { return totalTime; }

private:
    Timer(Timer const&);            // non-copyable
    Timer& operator=(Timer const&);

#ifdef WIN32
    unsigned __int64 startTicks;
    unsigned __int64 lastTick;
    unsigned __int64 ticksPerSecond;
#else // iOS / OS X
    int64_t startTicks;
    int64_t lastTick;
    double ticksPerSecond;
#endif

    float totalTime;
    float deltaTime;
};
    [/code]

    [code]
// OS X / iOS implementation of Timer.
// http://macresearch.org/tutorial_performance_and_time
#include <mach/mach_time.h>
#include "Timer.h"

namespace
{
#ifdef __APPLE__
    // Returns seconds per Mach tick (the name is historical).
    float GetTimeBase()
    {
        float ticksPerSecond = 0.0f;
        mach_timebase_info_data_t info;
        kern_return_t err = mach_timebase_info(&info);
        // Convert the timebase into seconds.
        if (err == 0)
        {
            ticksPerSecond = 1e-9 * static_cast<float>(info.numer) / static_cast<float>(info.denom);
        }
        return ticksPerSecond;
    }
#endif
}

Timer::Timer(bool autoStart)
    : startTicks(0), lastTick(0), totalTime(0.0f), deltaTime(0.0f)
{
    ticksPerSecond = GetTimeBase();
    if (autoStart)
    {
        Reset();
    }
}

Timer::~Timer()
{
}

void Timer::Reset()
{
    startTicks = mach_absolute_time();
    lastTick = startTicks;
}

void Timer::Update()
{
    int64_t const currentTicks = mach_absolute_time();
    int64_t const deltaTicks = currentTicks - lastTick;
    deltaTime = static_cast<float>(deltaTicks) * ticksPerSecond;
    totalTime = static_cast<float>(currentTicks - startTicks) * ticksPerSecond;
    lastTick = currentTicks;
}
    [/code]
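    Usage is then just this (a hypothetical loop; running and UpdateGame are placeholders for your own main loop):

    [code]
// Hypothetical usage sketch.
Timer timer(true); // autoStart == true calls Reset() immediately
while (running)
{
    timer.Update();
    UpdateGame(timer.DeltaTime()); // seconds elapsed since the previous Update()
}
    [/code]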
  10. Heap Management

    It's not really aimed at you; I just mentioned it because DDS loaders were brought up previously in the thread. I don't really know how to respond to the rest of your over-defensive post. My point is not that one should never reinvent the wheel, but that they should reinvent it only if they need to. You're right, I have no idea how you came to the conclusion that you needed to roll your own, and to be honest, I don't care. My point is that in the interest of getting things done, it's better not to reinvent the wheel, but to use what's at your disposal in such a way that swapping things out later isn't a massive pain in the neck! Your own progress with a next-generation engine sort of proves my point too. Your blog doesn't show much progress despite your quest for the mother of all DDS loaders!
  11. Heap Management

    I'm strongly against "Not Invented Here" syndrome, which is what rolling your own allocation system is, as many posters have suggested. I'm a pretty competent programmer and have tinkered with memory allocation in toy projects, but when it comes to getting work done, I'd much rather abstract allocations, or resource loading, or whatever, and plug in a proven third-party library than roll my own. If it becomes a problem when profiling then it's worth revisiting, but other than that, why bother? Perhaps premature optimization is the wrong term; maybe obsessive-compulsive optimization is more fitting? So many developers seem to lose sight of the point, which is making things go using all the resources at their disposal; instead, they worry about edge cases and optimizations that may or may not be required. Surely the true skill is coding in such a way that you don't paint yourself into a corner if you do decide you need to write your own DDS loader or memory allocator? A sketch of what I mean follows.
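    A minimal sketch of abstracting allocations (the interface and names are made up, not from any particular library): callers go through Allocator, so swapping in a custom pool later touches one place rather than the whole codebase.

    [code]
#include <cstddef>
#include <cstdlib>

// Made-up interface for illustration only.
class Allocator
{
public:
    virtual ~Allocator() {}
    virtual void* Allocate(std::size_t bytes) = 0;
    virtual void Free(void* ptr) = 0;
};

// Default implementation just forwards to the CRT heap. Profile first;
// replace with a pool or arena only if this ever shows up as a hotspot.
class DefaultAllocator : public Allocator
{
public:
    void* Allocate(std::size_t bytes) { return std::malloc(bytes); }
    void Free(void* ptr) { std::free(ptr); }
};
    [/code]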
  12. Lambdas and asynchronous code go so well together, and in my opinion they make that sort of code more readable and maintainable.
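    A small sketch of the kind of thing I mean (LoadFile and the path are made up); the asynchronous work reads top to bottom instead of being split across callbacks:

    [code]
#include <future>
#include <iostream>
#include <string>

// Stand-in for a real blocking loader, just to keep the sketch self-contained.
std::string LoadFile(const std::string& path)
{
    return std::string(1024, 'x');
}

int main()
{
    // The lambda runs on another thread; the call site stays readable.
    std::future<std::string> pending = std::async(std::launch::async, [] {
        return LoadFile("levels/level1.dat");
    });

    // ... do other work while the load is in flight ...

    std::cout << "loaded " << pending.get().size() << " bytes\n";
    return 0;
}
    [/code]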
  13. OpenGL 3.x Rendering

    I use native Win32 and GLEW to get a 3.2 context on Windows and an NSOpenGLContext created with the NSOpenGLProfileVersion3_2Core attribute on OS X Lion. I didn't need to change too much of my 2.1 code, but I did need to update all my shaders. Most worked out of the box once I changed the keywords around though.
  14. [quote name='YogurtEmperor' timestamp='1317193196' post='4866712']
    [code]
lse::CEngine::LSE_ENGINE_INIT eiInit = {
    64 * 1024 * 1024,
    true
};
    [/code]
    [code]
lse::CEngine::LSE_ENGINE_SECONDARY_INIT esiSecondInit = {
    &gGame,
    800UL, 600UL,
    0UL, 0UL,
    "L. Spiro Engine",
    false
};
    [/code]
    [/quote]

    Regarding comments: to me, these pieces of code need more comments than anything else, yet they have none. All the other functions you call in your example main are relatively self-explanatory. I don't get it...
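    To show what I mean, here is the second initializer annotated with the comments I'd want; the field meanings are my guesses, which is exactly the problem:

    [code]
// Hypothetical commented version; every comment below is a guess.
lse::CEngine::LSE_ENGINE_SECONDARY_INIT esiSecondInit = {
    &gGame,            // the game instance to run?
    800UL, 600UL,      // window width and height?
    0UL, 0UL,          // no idea without documentation
    "L. Spiro Engine", // window title?
    false              // fullscreen flag?
};
    [/code]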
  15. As far as I know, GLSL 1.2 uses [i]attribute[/i] and [i]varying[/i] instead of [i]in[/i] and [i]out[/i], and the output of the fragment shader is written to gl_FragColor.
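    To illustrate, here is a made-up fragment shader in both dialects (wrapped in C string literals for illustration):

    [code]
// GLSL 1.20: 'varying' input, writes to the built-in gl_FragColor.
const char* frag120 =
    "#version 120\n"
    "varying vec4 color;\n"
    "void main() { gl_FragColor = color; }\n";

// GLSL 1.50 (GL 3.2 core): 'in' input plus a user-declared 'out' variable.
const char* frag150 =
    "#version 150\n"
    "in vec4 color;\n"
    "out vec4 fragColor;\n"
    "void main() { fragColor = color; }\n";
    [/code]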