bubu LV

Members
  • Content count: 894
  • Joined
  • Last visited

Community Reputation
  1436 Excellent

About bubu LV
  • Rank: Advanced Member

  1. That GitHub repo for Android comes from my modifications in an OpenAL-Soft fork here: http://repo.or.cz/w/openal-soft/android.git (it includes an NDK-based makefile, so there is no need for CMake). But you can also use official OpenAL-Soft on Android through its OpenSL ES backend. OpenAL-Soft has supported this backend for some time already, and it works on Android just fine. Just build the OpenAL-Soft library with the Android NDK + CMake and make sure the OpenSL ES backend is included. Of course this limits you to Android 2.3 and up, but that is a reasonable limitation.
  2. Modern OpenGL implementations compile the following: if (cond) { x = A; } else { x = B; } into something like this: x = cond * A + (1 - cond) * B; (actually they have much better opcodes to express this kind of selection to the hardware). So I would recommend using actual if conditionals - in many cases the GLSL compiler can optimize the code much better than you can by manually introducing additional multiply operations (see the sketch after this list).
  3. Alternative to SDL to use with TCC

    You can always compile SDL as a shared library - a dll or so file - and use that instead of the static lib. As for initialization, only Android, iOS and PSP need special code; other platforms will work fine without SDL_main.
  4. Alternative to SDL to use with TCC

    If you are not worried about the Android/iOS/PSP platforms, then you can simply define the SDL_MAIN_HANDLED preprocessor symbol and SDL won't override main (see the sketch after this list).
  5. First of all - get rid of the glGetUniformLocation calls during drawing. Get those uniform locations once, after the shader is linked, and store them somewhere (see the sketch after this list); calling glGet* functions is usually an expensive operation in OpenGL. Also, drawing only 36 vertices per draw call is a very small number - you should be drawing much more per call to get high performance with a large number of triangles, so look into instancing. Another thing to look into is uniform buffer objects: uploading all uniforms at once can be much better than calling an individual glUniform* function for each uniform.
  6. They are completely different beasts. A cli::array is not a standard C++ array; it is a managed array. It's like the new[] operator in C++, but for the managed heap: cli::array allocates memory on the managed heap, and the garbage collector frees it. std::vector allocates its memory from the regular C++ heap and frees it when it goes out of scope. ^ is not for smart pointers either - it is for handles to managed objects, which also get garbage collected (see the sketch after this list).
  7. What? The only thing you need to cast is the pData member of D3D11_MAPPED_SUBRESOURCE. It has type void*, but you need Color*.
  8. You don't need to copy all the memory just to access one element of the mapped memory:

       hr = d3d11DevCon->Map(textureBuf, 0, D3D11_MAP_READ, NULL, &mapResource);
       if (FAILED(hr)) abort();

       struct Color { float r, g, b, a; };
       Color* obj = (Color*)mapResource.pData;   // read the picked pixel directly from the mapped pointer

       if (mousePos.x > 0 && mousePos.x < Width && mousePos.y > 0 && mousePos.y < Height)
           if (obj[(mousePos.y * (mapResource.RowPitch) / sizeof(Color)) + (4 * mousePos.y) + mousePos.x].r == 1.0)
               // if the picked object has 1.0 in the red channel, draw the little cloud
               model.rysujOtoczkę(model.dolny, model.gorny);

       d3d11DevCon->Unmap(textureBuf, 0);
       textureBuf->Release();
  9. sizeof(Color) is 16, so you are copying only 16 bytes. You need to copy Height * RowPitch bytes (see the sketch after this list).
  10. Verify that the CreateTexture2D and Map calls are not returning errors: assign their results to an HRESULT hr variable and check that SUCCEEDED(hr) is true (see the sketch after this list). Or enable the debug runtime - either pass the D3D11_CREATE_DEVICE_DEBUG flag to the D3D11CreateDevice function, or use the DirectX Control Panel. That will show you the errors in the debugger output.
  11. DirectInput is actually not the most efficient way of getting input on Windows. It is deprecated and nobody should be using it anymore. On Windows you should use the regular input/polling functions with a standard Windows message loop (GetMessage/PeekMessage and DispatchMessage) to process Raw Input messages (see the sketch after this list). That will give you lower latency than DirectInput. On Linux you'll be fine with X11. Of course you can access the hardware directly, but as you said, you'll need root for that - you need access to the /dev/input/* devices. Here's some info on how: https://www.kernel.org/doc/Documentation/input/input.txt http://stackoverflow.com/a/3877020/675078
  12. Python 3.3 on Windows 8?

    The Python executable is called "python", not "python3", even for the 3.x versions. Open the C:\Python33 folder in Explorer and you can see that for yourself.
  13. GLFW/GLUT glutBitmapString - high CPU usage?

    glutBitmapString internally uses the glBitmap function, which is not hardware accelerated on modern hardware, so using it makes the CPU do all the rendering. I suggest you look into rendering fonts using a texture (see the sketch after this list). Look here for generating the texture: http://www.angelcode.com/products/bmfont/ Or you can render fonts directly from a TrueType font file: https://code.google.com/p/freetype-gl/
  14. Any good c++ non c++11 sha2 sources?

    W is a local variable of the sha_compress function - you need to pass it to the RND function as an argument (see the sketch after this list).
  15. Currently, by default it uses the headers and CRT from whatever compiler you used to build it from source. If you built clang using Visual Studio or the Windows SDK, then it will use link.exe and the headers/libs from Visual Studio/the Windows SDK. If you built clang using MinGW, then it will use ld.exe from binutils and the headers/libs provided by MinGW. Of course you can override this and use headers and libraries from whatever folder you wish. Overriding link.exe/ld.exe requires calling it manually: you use clang to generate object files and then call ld.exe/link.exe yourself to produce the executable.
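
For reply 2 above, a minimal C-style sketch of the transformation described there; cond, A, B and the two helper functions are hypothetical names, and a real GLSL compiler emits a single hardware select opcode rather than the multiplies:

    // Branchy form, as written in a shader: if (cond) { x = A; } else { x = B; }
    // What the compiler effectively generates, spelled out with multiplies:
    float select_by_multiply(float cond, float A, float B)   // cond is 0.0 or 1.0
    {
        return cond * A + (1.0f - cond) * B;
    }

    // The same selection written as a conditional; on GPUs this maps to one
    // select-style instruction, which is why writing the plain 'if' is fine:
    float select_by_conditional(bool cond, float A, float B)
    {
        return cond ? A : B;
    }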
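
For reply 4 above, a minimal sketch of keeping your own main, assuming SDL2: SDL_MAIN_HANDLED has to be defined before SDL.h is included (or passed on the compiler command line), and SDL_SetMainReady() tells SDL that no SDL_main is coming.

    #define SDL_MAIN_HANDLED          // must appear before SDL.h so SDL does not redefine main
    #include <SDL.h>

    int main(int argc, char** argv)
    {
        SDL_SetMainReady();           // we handle main ourselves
        if (SDL_Init(SDL_INIT_VIDEO) != 0)
            return 1;

        SDL_Window* window = SDL_CreateWindow("tcc + SDL", SDL_WINDOWPOS_CENTERED,
                                              SDL_WINDOWPOS_CENTERED, 640, 480, 0);
        SDL_Delay(2000);              // keep the window up briefly

        SDL_DestroyWindow(window);
        SDL_Quit();
        return 0;
    }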
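
For reply 5 above, a minimal sketch of caching uniform locations once after linking instead of querying them every frame; the program handle and the uniform names "u_mvp" and "u_color" are hypothetical, and the GL functions are assumed to come from whatever loader you already use:

    #include <GL/glew.h>   // or whichever GL loader header your project uses

    struct ShaderUniforms
    {
        GLint mvp;
        GLint color;
    };

    // Call once, right after glLinkProgram succeeded.
    ShaderUniforms cacheUniforms(GLuint program)
    {
        ShaderUniforms u;
        u.mvp   = glGetUniformLocation(program, "u_mvp");
        u.color = glGetUniformLocation(program, "u_color");
        return u;
    }

    // Per draw call: no glGet* lookups, only the uploads.
    void setPerDrawUniforms(const ShaderUniforms& u, const float mvp[16], const float rgba[4])
    {
        glUniformMatrix4fv(u.mvp, 1, GL_FALSE, mvp);
        glUniform4fv(u.color, 1, rgba);
    }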
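
For reply 6 above, a small C++/CLI sketch (compiled with /clr) showing the two side by side; both hold 16 ints, but they live on different heaps:

    #include <vector>

    void example()
    {
        // Managed array: allocated on the garbage-collected heap with gcnew,
        // referenced through a handle (^), freed later by the garbage collector.
        cli::array<int>^ managed = gcnew cli::array<int>(16);
        managed[0] = 42;

        // Native vector: allocated from the regular C++ heap,
        // freed deterministically when it goes out of scope.
        std::vector<int> native(16);
        native[0] = 42;
    }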
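
For reply 9 above, if you really do want a full CPU-side copy of the mapped texture rather than reading it in place as in reply 8, the size to copy is Height * RowPitch; mapResource and Height are the names used in that thread:

    #include <cstring>
    #include <vector>

    std::vector<unsigned char> pixels(Height * mapResource.RowPitch);
    memcpy(pixels.data(), mapResource.pData, pixels.size());   // whole image, not sizeof(Color)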
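
For reply 10 above, a minimal sketch of checking the HRESULTs and of creating the device with the debug layer; the staging texture description and the variable names are hypothetical:

    #include <d3d11.h>

    ID3D11Device*        device  = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL    level;

    // Debug layer: invalid API usage gets reported in the debugger output window.
    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                   D3D11_CREATE_DEVICE_DEBUG,
                                   nullptr, 0, D3D11_SDK_VERSION,
                                   &device, &level, &context);
    if (FAILED(hr)) { /* report and bail out */ }

    D3D11_TEXTURE2D_DESC stagingDesc = {};   // fill Width/Height/Format, Usage = D3D11_USAGE_STAGING,
                                             // CPUAccessFlags = D3D11_CPU_ACCESS_READ
    ID3D11Texture2D* staging = nullptr;
    hr = device->CreateTexture2D(&stagingDesc, nullptr, &staging);
    if (FAILED(hr)) { /* report and bail out */ }

    D3D11_MAPPED_SUBRESOURCE mapResource;
    hr = context->Map(staging, 0, D3D11_MAP_READ, 0, &mapResource);
    if (SUCCEEDED(hr))
    {
        // ... read mapResource.pData ...
        context->Unmap(staging, 0);
    }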
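
For reply 11 above, a minimal Win32 sketch of registering for Raw Input and reading relative mouse motion from WM_INPUT inside the window procedure; hwnd is assumed to be your existing window:

    #include <windows.h>

    void registerRawMouse(HWND hwnd)
    {
        RAWINPUTDEVICE rid;
        rid.usUsagePage = 0x01;   // HID generic desktop page
        rid.usUsage     = 0x02;   // mouse
        rid.dwFlags     = 0;
        rid.hwndTarget  = hwnd;
        RegisterRawInputDevices(&rid, 1, sizeof(rid));
    }

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_INPUT)
        {
            RAWINPUT raw;
            UINT size = sizeof(raw);
            if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &raw, &size,
                                sizeof(RAWINPUTHEADER)) != (UINT)-1 &&
                raw.header.dwType == RIM_TYPEMOUSE)
            {
                LONG dx = raw.data.mouse.lLastX;   // relative motion since the last event
                LONG dy = raw.data.mouse.lLastY;
                (void)dx; (void)dy;                // feed these into your input code
            }
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }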
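
For reply 13 above, a minimal sketch of drawing text from a pre-generated font texture instead of glutBitmapString; it assumes a hypothetical 16x16-glyph ASCII atlas already uploaded as fontTexture, and sticks to old immediate-mode calls to stay close to GLUT-era code:

    #include <GL/gl.h>

    // Draws ASCII text at (x, y), one textured quad per glyph; the GPU samples the
    // atlas, so the CPU only submits a handful of vertices per character.
    void drawText(GLuint fontTexture, float x, float y, float glyphSize, const char* text)
    {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, fontTexture);
        glBegin(GL_QUADS);
        for (; *text; ++text, x += glyphSize)
        {
            unsigned char c = (unsigned char)*text;
            float u = (c % 16) / 16.0f;   // column of the glyph cell in the atlas
            float v = (c / 16) / 16.0f;   // row of the glyph cell in the atlas
            float s = 1.0f / 16.0f;       // size of one cell in texture coordinates

            glTexCoord2f(u,     v + s); glVertex2f(x,             y);
            glTexCoord2f(u + s, v + s); glVertex2f(x + glyphSize, y);
            glTexCoord2f(u + s, v);     glVertex2f(x + glyphSize, y + glyphSize);
            glTexCoord2f(u,     v);     glVertex2f(x,             y + glyphSize);
        }
        glEnd();
        glDisable(GL_TEXTURE_2D);
    }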
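
For reply 14 above, a sketch of what "pass W as an argument" can look like for a SHA-256 style compression round; the macro names and the exact RND signature in the code being ported may differ, this only shows the shape:

    #include <stdint.h>

    #define ROR32(x, n)  (((x) >> (n)) | ((x) << (32 - (n))))
    #define CH(x, y, z)  (((x) & (y)) ^ (~(x) & (z)))
    #define MAJ(x, y, z) (((x) & (y)) ^ ((x) & (z)) ^ ((y) & (z)))
    #define SIG0(x)      (ROR32(x, 2) ^ ROR32(x, 13) ^ ROR32(x, 22))
    #define SIG1(x)      (ROR32(x, 6) ^ ROR32(x, 11) ^ ROR32(x, 25))

    // One compression round. W is the message schedule that sha_compress builds
    // as a local array, so it has to be handed to RND as a parameter.
    static void RND(uint32_t a, uint32_t b, uint32_t c, uint32_t* d,
                    uint32_t e, uint32_t f, uint32_t g, uint32_t* h,
                    uint32_t Ki, const uint32_t W[64], int i)
    {
        uint32_t t0 = *h + SIG1(e) + CH(e, f, g) + Ki + W[i];
        uint32_t t1 = SIG0(a) + MAJ(a, b, c);
        *d += t0;
        *h  = t0 + t1;
    }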