Ingrater

Members
  • Content count: 162
  • Community Reputation: 187 Neutral
  • Rank: Member
  1. I just wanted to record all data which is sent from a certain application to a certain USB device. I thought that I'm allowed to do that, as I'm also allowed to record the data output of my radio. Am I mistaken? Would it be legal if I found out the protocol of the device by brute-force trial and error? Regards Ingrater
  2. And all other methods invoke CreateFile / WriteFile? Because I hooked an app which is writing to a USB device. With an older version of the app it worked, but after an update it seems that the app is no longer using CreateFile / WriteFile for writing to that USB device. Could it be possible that the app is communicating directly with the device driver?
  3. What ways are there to write to a USB device using the WinAPI? I know of the CreateFile / WriteFile method if you have the correct device name (a sketch of that approach follows after this list). Are there ways which don't use the CreateFile API call?
  4. Hi, I'm currently trying to implement a beat detection algorithm for my Winamp visualization plugin. I used this paper from GameDev http://www.gamedev.net/reference/programming/features/beatdetection/ and implemented the "Frequency selected sound energy algorithm #1" with the improvements for band size and variance testing. The problem I have now is that the values stated there totally do not apply to my app. For a C of 200 I don't get any beats. This might be caused by the fact that Winamp only passes 576 bytes of spectrum data to the plugin. There are two arrays of this size, one for the left and one for the right channel. That would be roughly half the amount of data, so C should also be about half (right?), ~= 100. But for C = 100 I also get no beats; I need C values of 8-10 to get beats, and those beats don't seem to be correct. If I test only for beats in the low subbands, it totally fails to detect low beats... The question now is: in which range should the spectrum data be? Winamp passes it in the range 0-255. I also tested 0-1, but that doesn't help. If it's not the range, what am I doing wrong? Here is the source (I commented out the variance testing; I set the first subband size to 2):

     bool local_beat = false;
     float c = 5.0f;
     int start = 0;
     for(int i = 0; i < 32; i++){
         // subband sizes grow linearly so the 576 spectrum values are covered
         int size = (double)i * (512.0/496.0) + (2.0 - (512.0/496.0));
         if(start + size > 576)
             size = 576 - start;
         // instant energy of this subband, averaged over both channels
         float local_energy = 0.0f;
         for(int j = start; j < start + size; j++){
             local_energy += (float)module.spectrumData[0][j];
             local_energy += (float)module.spectrumData[1][j];
         }
         local_energy = local_energy / (size * 2);
         if(avgs[i].size() >= 100){
             // average energy over the last 100 frames of history
             float average_energy = 0.0f;
             for(std::deque<float>::iterator it = avgs[i].begin(); it != avgs[i].end(); ++it){
                 average_energy += *it;
             }
             average_energy /= (float)avgs[i].size();
             /*float variance = 0.0f;
             for(std::deque<float>::iterator it = avgs[i].begin(); it != avgs[i].end(); ++it){
                 float temp = *it - average_energy;
                 variance += temp * temp;
             }
             variance /= avgs[i].size();*/
             // a beat is an energy spike above c times the local average
             if(local_energy > average_energy * c /*&& variance > 150*/ && local_beat == false){
                 // color the beat from red (low subbands) to green (high subbands)
                 float blend = (float)i / 31.0f;
                 int red = (int)((1.0f - blend) * 16.0f) << 20 & 0xF00000;
                 int green = (int)(blend * 16.0f) << 16 & 0x0F0000;
                 beat_color = red | green;
                 local_beat = true;
             }
         }
         avgs[i].push_back(local_energy);
         while(avgs[i].size() > 100)
             avgs[i].pop_front();
         start += size;
     }
     if(local_beat)
         beat = true;

     Thanks in advance Ingrater
  5. Well then, what's the protocol for communicating with the AlienFX controller built into each Alienware laptop of the mx15 and mx17 series?
  6. The problem is not reading or writing to a USB port. I want to see what another app is reading from / writing to a certain USB device, so I can monitor those packets and reverse engineer the protocol which is needed for the USB device.
  7. Is there a way to somehow listen to the information that is written to / read from a USB device? I need it to reverse engineer an unknown, but I think rather simple, protocol.
  8. That makes sense. Thanks for enlightening me. Regards Ingrater
  9. I'm currently implementing GPU-based geometry clipmaps after the paper in GPU Gems 2. I have a working implementation so far, but currently I'm not able to implement the transitions between two different LOD levels; I don't understand the principle behind it. In the example shader code there is only one texture lookup, without a changing mipmap level, so how do they do the transitions? Do they use an extra texture for each LOD level and encode the transitions into each texture? Or are they using one big texture with the transitions encoded in it? Both would require recomputing all textures on each movement because of the transitions in them. Currently I'm using one big texture for all LOD levels and simply shifting the texture coordinates in the vertex shader to achieve movement; when the camera reaches the border of the texture I update it. It would be great if there is a transition technique that preserves this. What possible transition techniques are there, and what are their pros and cons? (A sketch of how the paper's blend might work follows after this list.) Current state: [screenshot] (the artifacts are because of only 8-bit height data; I didn't find better data yet) Thanks in advance Ingrater
  10. GLSL Forced into Software

    The only software fallbacks I ever noticed were on ATI cards, and then the framerate didn't just drop by half, it dropped to about 0.5 fps. Is the window of your application larger than in the applications where you tested the shader? If so, maybe it's simply because more fillrate is needed and more pixels have to be processed.
  11. Linear depth buffer question

    You can simply compute the linear depth with this function (a quick CPU-side check of it follows after this list):

     float LinearDepth(in float depth){
         // depth is the value from the depth buffer, in the range 0 - 1
         return (2.0 * Near) / (Far + Near - depth * (Far - Near));
     }

    Near is the value of the near clipping plane, Far the value of the far clipping plane. The linear depth is in the range 0 - 1. I used it for my outline shader; you can find it at http://3d.benjamin-thaut.de
  12. I've found out that I had linked a lib into the app (lua) that was not compiled with the -mthreads option, so the thread-safe exception handling could not be enabled. After I removed the lib from the linking process, everything worked fine.
  13. It crashes inside of __Unwind_SjLj_RaiseException(). I found out that -mthreads should be used for compiling and linking to get thread-safe exceptions. Unfortunately those options have absolutely no effect for me... [Edited by - Ingrater on February 27, 2009 12:42:30 AM]
  14. As far as I read in the boost documentation, it is safe to use exceptions in threads as long as they are caught in the same thread. So I wrote this small test program, but it crashes for me (mingw gcc 3.4.5, boost 1.36):

     #include <boost/thread/thread.hpp>
     #include <boost/ref.hpp>
     #include <cstdio>

     // thrown and caught within the same thread, which should be safe
     void throw_something(){
         int ex = 1;
         throw ex;
     }

     struct dummy {
         void operator ()(){
             while(true){
                 try{
                     throw_something();
                 }
                 catch(...){}
             }
         }
     };

     int main()
     {
         dummy t1, t2;
         printf("Starting threads\n");
         // one functor runs in a second thread, the other in the main thread
         boost::thread thread2(boost::ref(t2));
         t1();
         return 0;
     }

     I'm totally desperate. I've searched for hours and googled, but couldn't find anything. Can someone please enlighten me? Thanks in advance Ingrater
  15. Currently I've got a multi-threaded application that uses lua in multiple threads. For each thread a separate lua state is created and used (see the sketch after this list). I've compiled lua as a C++ library so it uses exceptions instead of longjmp/setjmp. The problem now is that if there is a syntax error or any other error in lua, the application crashes on stack unwinding inside the lua.dll, but only when two or more threads are used. If the application uses only one thread, it runs perfectly. Now I've got absolutely no clue what could cause the stack unwinding to fail. What possible causes should I search for?
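
The two USB questions above (items 2 and 3) both revolve around the CreateFile / WriteFile path, so here is a minimal sketch of that approach. The device path is a made-up placeholder; real device paths are normally discovered via the SetupDi* APIs, and everything not named in the posts is an assumption:

    // Minimal sketch of the CreateFile/WriteFile approach from item 3.
    // "\\.\HidDeviceExample" is a hypothetical device path, not a real one.
    #include <windows.h>
    #include <cstdio>

    int main() {
        HANDLE device = CreateFileA(
            "\\\\.\\HidDeviceExample",        // placeholder device path
            GENERIC_WRITE,
            FILE_SHARE_READ | FILE_SHARE_WRITE,
            NULL,                              // default security attributes
            OPEN_EXISTING,                     // the device must already exist
            0,                                 // synchronous I/O
            NULL);
        if (device == INVALID_HANDLE_VALUE) {
            printf("CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        unsigned char packet[] = { 0x02, 0x01, 0x00 };  // made-up payload
        DWORD written = 0;
        if (!WriteFile(device, packet, sizeof(packet), &written, NULL))
            printf("WriteFile failed: %lu\n", GetLastError());

        CloseHandle(device);
        return 0;
    }

If a hook on CreateFile / WriteFile stops seeing traffic after an app update, one plausible explanation (matching item 2) is that the app switched to another route into the driver, for example DeviceIoControl or a user-mode library such as WinUSB or libusb.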
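For the clipmap transitions in item 9: from my reading of the GPU Gems 2 chapter, the transitions are not baked into the textures at all. Each vertex blends between the height sampled in its own level and the height of the next coarser level, with a blend factor derived purely from the vertex's position inside the ring, so the textures stay untouched and incremental updates are preserved. Here is a CPU-side C++ sketch of that blend factor; all names and the exact constants are my reconstruction, not the paper's code:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // Transition blend for GPU-based geometry clipmaps, as I read the
    // GPU Gems 2 chapter: alpha is 0 in the interior of a ring and ramps
    // to 1 over the outermost 'w' cells, where the level meets the next
    // coarser one.
    float TransitionAlpha(float gridX, float gridY,     // vertex position in level grid units
                          float centerX, float centerY, // viewer position in the same units
                          int n,                        // grid size of one level, e.g. 255
                          float w)                      // transition width, paper suggests n/10
    {
        float inner = (n - 1) / 2.0f - w - 1.0f;
        float ax = std::clamp((std::fabs(gridX - centerX) - inner) / w, 0.0f, 1.0f);
        float ay = std::clamp((std::fabs(gridY - centerY) - inner) / w, 0.0f, 1.0f);
        return std::max(ax, ay);
    }

    int main() {
        // alpha stays 0 in the interior and ramps to 1 near the ring border
        for (int x = 0; x <= 127; x += 16)
            printf("dist %3d -> alpha %.2f\n", x,
                   TransitionAlpha((float)x, 0.0f, 0.0f, 0.0f, 255, 25.5f));
        return 0;
    }

The vertex shader would then output height = (1 - alpha) * fineHeight + alpha * coarseHeight, which is why the example code needs no extra per-level transition textures.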
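And a quick CPU-side check of the LinearDepth function from item 11, assuming a standard [0, 1] depth buffer; the sample Near/Far values are arbitrary:

    #include <cstdio>

    // Same formula as the GLSL function in item 11, ported to C++ for testing.
    float LinearDepth(float depth, float Near, float Far) {
        return (2.0f * Near) / (Far + Near - depth * (Far - Near));
    }

    int main() {
        const float Near = 0.1f, Far = 100.0f;
        // Hardware depth values cluster near 1.0 for most of the scene;
        // the linearized result spreads them back out, reaching 1.0 at the far plane.
        for (float d : {0.0f, 0.5f, 0.9f, 0.99f, 1.0f})
            printf("depth %.2f -> linear %.4f\n", d, LinearDepth(d, Near, Far));
        return 0;
    }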
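Finally, a minimal sketch of the one-lua_State-per-thread setup described in item 15, using std::thread for brevity instead of the boost threads from item 14. It assumes lua was built as C++ (as in the post), so the headers are included without an extern "C" wrapper; with a C-built lua you would include lua.hpp instead:

    #include <lua.h>       // no extern "C": lua itself is compiled as C++
    #include <lauxlib.h>
    #include <lualib.h>
    #include <thread>
    #include <vector>

    void worker() {
        lua_State* L = luaL_newstate();  // private state, never shared across threads
        luaL_openlibs(L);
        // A syntax error makes lua raise an error; with lua built as C++ that
        // is a thrown exception, raised and caught entirely inside this thread.
        // On mingw gcc 3.x this is exactly the situation that needs -mthreads
        // on everything that is compiled and linked.
        if (luaL_dostring(L, "this is not valid lua") != 0)
            lua_pop(L, 1);               // pop the error message from the stack
        lua_close(L);
    }

    int main() {
        std::vector<std::thread> threads;
        for (int i = 0; i < 4; ++i)
            threads.emplace_back(worker);
        for (auto& t : threads)
            t.join();
        return 0;
    }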