abdulla

Members
  • Content count: 199
  • Joined
  • Last visited

Community Reputation
  164 Neutral

About abdulla
  • Rank: Member
  1. Hi guys, is there a way of getting the mipmap level of a texture2D lookup? So far the best advice I can find is to encode the data in the texture itself. I thought there might be a better way that someone is aware of.
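     For reference, a minimal C++ sketch of the derivative-based LOD formula the hardware roughly uses (as described in the OpenGL spec). The dudx/dvdx/dudy/dvdy parameters are hypothetical stand-ins for the screen-space texture-coordinate derivatives a shader gets from ddx/ddy, so this is just an illustration of the math, not a drop-in solution:

        #include <algorithm>
        #include <cmath>

        // Approximate mip level (lambda) of a 2D lookup: lambda = log2(rho), where rho
        // is the larger texel-space footprint of the pixel along screen x and y.
        float MipLevel(float dudx, float dvdx, float dudy, float dvdy,
                       float texWidth, float texHeight)
        {
            float rhoX = std::sqrt(dudx * texWidth * dudx * texWidth +
                                   dvdx * texHeight * dvdx * texHeight);
            float rhoY = std::sqrt(dudy * texWidth * dudy * texWidth +
                                   dvdy * texHeight * dvdy * texHeight);
            return std::log2(std::max(rhoX, rhoY));
        }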
  2. using std::vector in my container

    You need to explicitly say that "List::iterator" is a typename by saying "typedef typename List::iterator iterator".
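     A minimal sketch of where this bites, assuming List is defined in terms of a template parameter:

        #include <vector>

        template <typename T>
        class Container {
            typedef std::vector<T> List;
            // List::iterator depends on T, so the compiler can't know it names a
            // type unless we say so explicitly with 'typename'.
            typedef typename List::iterator iterator;

            List items;

        public:
            iterator begin() { return items.begin(); }
            iterator end()   { return items.end(); }
        };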
  3. Since you're in Melbourne, I'd recommend ClickNGo.com.au. If you have any issues, just pick up the phone and they'll have it fixed in an instant.
  4. List of free libraries

    Eigen - a "C++ template library for linear algebra: vectors, matrices, and related algorithms".
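     A tiny usage sketch (fixed-size types from Eigen's own API; the rotation/position names are just placeholders):

        #include <Eigen/Dense>
        #include <iostream>

        int main()
        {
            Eigen::Matrix3f rotation = Eigen::Matrix3f::Identity();
            Eigen::Vector3f position(1.0f, 2.0f, 3.0f);

            // The product is an expression template; it's evaluated at assignment.
            Eigen::Vector3f transformed = rotation * position;
            std::cout << transformed.transpose() << std::endl;
        }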
  5. what language has least bugs in code?

    Haskell actually fares pretty well in the shootout. I also recommend Haskell; it's a joy to code in once you wrap your head around it.
  6. premaking PAL on Mac OS

    An aside: After having a quick look at Premake, it looks like it does something _very_ similar to CMake. I'd probably suggest moving to CMake, as it's also supported under Fink. Personally I'm a fan of SCons, I just wish it had a coloured output like CMake, or a minimal output like the Linux kernel build system.
  7. Eigen 2 seems to be a very promising upcoming library. It uses expression templates and is explicitly vectorised (SSE and AltiVec). The geometry module isn't complete yet, but they are looking for contributors.
  8. Quote: Original post by swiftcoder
     "I didn't really state my point very well there, so here goes: a) syscall count is not a good metric of performance. b) context switches are expensive when working with processes - when you start blocking you are going to lose. c) you are most likely going to need to do those extra copies to make your shared-memory solution non-blocking."

     a) No, it's not, but it has a big effect on performance. Reducing syscalls really does help, but this is at the nanosecond level (at least on my test systems).

     b+c) I guess it really comes down to whether you want to block or not, so it depends on the structure of the system.
  9. Quote: Original post by Antheus
     "In the same way that the articles mentions generalizations, generalizing over a single blog that applies to Java only is somewhat bold."

     Sure, it was the first link that turned up in my history. I've been reading a lot of papers lately so forgive me for generalising, but you can dig deeper and find more articles on the subject. I do admit my background is Linux/Mac OS X, so I can't comment on the behaviour of Windows.

     Quote:
     Quote: "I use a thread per connection in my RPC servers and it scales amazingly well."
     "Are the RPC calls blocking, or use asynchronous method invocation? If former, then you're limited by network latency either way, and the application is under-utilized, which is why many workload sensitive applications prefer message passing vs. pure RPC."

     Actually I wrote 2 different RPC libraries, one that does block but uses tricks to reduce latency, the other that attempts to be completely asynchronous. Message passing is great, but it doesn't give the type-safety or convenience of RPC.
  10. Quote: Original post by swiftcoder
      Quote: Original post by abdulla: "And you're correct, shared memory is faster than sockets due in part to reduction in the amount of syscalls needed."
      "Only if you are able to avoid locking - as soon as you have to lock your shared memory, the sockets do about as well, since a local socket (at least in the unix world) tends to be implemented as a chunk of shared memory and some locking primitive, which evens out your syscalls again."

      Actually, I've got some benchmarks for a paper I'm working on that show that that's a false assumption. You end up copying data around a lot more, even if you use tricks like vmsplice (which requires you to allocate pages of memory). The overhead for semaphores, especially under Linux, is quite low, and you can use spin locks to reduce the latency further.
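      As a rough illustration of the spin-lock point, a minimal sketch of a lock that can live inside the shared-memory segment (assuming std::atomic<bool> is lock-free on the target platform, which makes it address-free and therefore usable across processes; this is illustrative, not taken from the paper):

         #include <atomic>

         // Minimal spin lock intended to be placed inside a shared-memory region.
         struct SpinLock {
             std::atomic<bool> locked{false};

             void lock() {
                 // Busy-wait; cheap for very short critical sections, no syscall involved.
                 while (locked.exchange(true, std::memory_order_acquire)) { }
             }
             void unlock() { locked.store(false, std::memory_order_release); }
         };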
  11. Quote: Original post by Antheus
      "Which is somewhat interesting, given that JVM doesn't have network layer, and all calls are passed down to OS. How well does it work with 5000 concurrent connections? Any chance of a link to the articles?"

      http://paultyma.blogspot.com/2008/03/writing-java-multithreaded-servers.html
      http://www.classhat.com/tymaPaulMultithread.pdf

      I use a thread per connection in my RPC servers and it scales amazingly well.
  12. The biggest overhead you'll have by going multi-process is context switching: it's a lot faster to switch between threads in the same address space. And you're correct, shared memory is faster than sockets, due in part to a reduction in the number of syscalls needed. I'd also advise against file-backed shared memory. I'm not sure how it's done under Windows, but POSIX has shm_open and related functions that let you create and manage shared memory without a file backing (see the sketch below). All in all, unless you need the added protection of address-space separation, you're better off sticking with a multi-threaded design.
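      For the POSIX route, a minimal sketch of creating a non-file-backed segment (error handling kept short; the segment name "/myshm" is just an example, and on older glibc you need to link with -lrt):

         #include <cstddef>
         #include <fcntl.h>      // O_CREAT, O_RDWR
         #include <sys/mman.h>   // shm_open, mmap
         #include <unistd.h>     // ftruncate, close

         void* CreateSharedRegion(std::size_t size)
         {
             // Named shared-memory object, no file backing required.
             int fd = shm_open("/myshm", O_CREAT | O_RDWR, 0600);
             if (fd == -1) return nullptr;

             if (ftruncate(fd, size) == -1) { close(fd); return nullptr; }

             void* mem = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
             close(fd);  // the mapping stays valid after the descriptor is closed
             return mem == MAP_FAILED ? nullptr : mem;
         }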
  13. Quote: Original post by Kylotan
      Quote: Original post by Zouflain: "and finally a thread for each active connection"
      Never do that. "The "one thread per client" model is well-known not to scale beyond a dozen clients or so."

      Actually, that's a bit of an exaggeration, as I've had hundreds of simultaneous connections with 2 threads per connection, but it does indeed thrash your server.

      Actually, I've been reading a lot of papers on RPC, and quite a few talk about server scalability. It turns out that on modern operating systems (like Linux since NPTL was introduced), one thread per connection performs a lot better than any form of poll/select, which is reflected in my own performance tests. There are papers that talk about how Java's original IO is better than NIO for that very reason.
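      To make the model concrete, a minimal sketch of a thread-per-connection accept loop (POSIX sockets plus std::thread; the echo handler is just a placeholder for real per-connection work):

         #include <arpa/inet.h>
         #include <netinet/in.h>
         #include <sys/socket.h>
         #include <unistd.h>
         #include <thread>

         // Placeholder handler: plain blocking reads/writes, one connection per thread.
         void handle_client(int client_fd)
         {
             char buf[512];
             ssize_t n;
             while ((n = read(client_fd, buf, sizeof(buf))) > 0)
                 write(client_fd, buf, n);
             close(client_fd);
         }

         void serve(unsigned short port)
         {
             int listener = socket(AF_INET, SOCK_STREAM, 0);

             sockaddr_in addr{};
             addr.sin_family = AF_INET;
             addr.sin_addr.s_addr = INADDR_ANY;
             addr.sin_port = htons(port);
             bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
             listen(listener, SOMAXCONN);

             for (;;) {
                 int client = accept(listener, nullptr, nullptr);
                 if (client == -1) continue;
                 // One OS thread per connection: cheap with NPTL, and each handler
                 // stays a simple blocking loop instead of a poll/select state machine.
                 std::thread(handle_client, client).detach();
             }
         }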
  14. SVN repository path

    I don't believe so; I think it just looks at the .svn/entries file in the current path and finds the repository URL. But as usual, look at the SVN book to find out more.
  15. After reading all the books acadestuff mentioned, I'd recommend reading C++ Template Metaprogramming: Concepts, Tools, and Techniques from Boost and Beyond by David Abrahams and Aleksey Gurtovoy. It's a really great read.
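     For a taste of the kind of technique the book covers, a classic compile-time sketch (recursive template; not an example from the book itself):

        // Compile-time factorial: the compiler does all the arithmetic.
        template <unsigned N>
        struct Factorial {
            static const unsigned long value = N * Factorial<N - 1>::value;
        };

        template <>
        struct Factorial<0> {
            static const unsigned long value = 1;
        };

        int table[Factorial<5>::value];  // array of 120 ints, sized at compile time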