About this blog
Programming, computer graphics, game development and randomness
I came back to this site on a whim recently. In fact, I had forgotten this blog even existed. I don't come here often anymore, because I've been quite busy as of late.
Still, GDNet has a special place. It's where I first learned about matrices, for example. Without it, I might never have had the interest to pursue my undergraduate degree, and study math/physics/CS as seriously as I have.
Years later, I may not be going into game development as a career, but I have a pretty nice Silicon Valley job nonetheless. But maybe one day I'll accept a job offer in the field, and do some interesting things there. Exciting times, in any case.
If you asked me 10 years ago if I would ever achieve what I have, I would probably have said, "Of course!" Such is the naivety of youth. I appreciate the magnitude of the task better now. Still, if not for the inspirations then, many of which can be traced back to here, I may not have persevered as I did.
I'm happy to see the site is still quite active, and among its members, I hope there are many more successes taking shape.
So I've managed to smooth the lighting out by using far fewer photons (3,000, down from 100,000), while giving each photon a much larger area of influence (~10x larger). To do this, I added a geometry shader that generates scaled screen-aligned sprites, because the built-in point sprites have a maximum size.
I fixed the lack of illumination of corners by having each photon act as two lights, incoming and outgoing, within its area of influence.
I've also changed the falloff of each photon to approximate a Gaussian distribution using [(1-x^2)/(1+x^2)]^2 (see a graph from WolframAlpha). Notice that it reaches zero at a finite distance along the x-axis (x = 1), which prevents a discontinuity at the edge of a photon's range of influence.
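In case the formula is hard to picture, here's a minimal sketch of that falloff. The function name and the explicit clamp outside |x| >= 1 are my own additions; the post only gives the expression itself:

```cpp
// Falloff kernel f(x) = ((1 - x^2) / (1 + x^2))^2, clamped to zero once
// |x| >= 1 so the kernel has finite support. Note f(1) = 0 and f'(1) = 0,
// so clamping there introduces no discontinuity in value or slope.
double photonFalloff(double x) {
    double x2 = x * x;
    if (x2 >= 1.0) return 0.0;           // outside the photon's range of influence
    double g = (1.0 - x2) / (1.0 + x2);  // 1 at x = 0, falls to 0 at |x| = 1
    return g * g;                        // squaring keeps it non-negative and bell-shaped
}
```

The squaring is what makes the tail flatten out smoothly, which is why the edge of each photon's influence doesn't show as a visible ring.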
This is what it looks like now:
So I've been playing around with OpenCL the last couple of weeks. I wrote a 500,000 element particle system with HDR bloom and a photon mapper, pictured here:
As you can see, I'm in dire need of a better way to do final gather, haha. This was just a hack job over a weekend, of course. Right now I'm just rendering each photon contact as a point light, which is just under 100,000 in total. As a result, it's only ~10fps. Lots of room for improvement.
The OpenCL code for doing the bounces
Of course, this being a new technology, there are a lot of caveats. First, OpenGL interop is tricky to get working: don't forget to pass the OpenGL context and OS device handle to OpenCL! When sharing multiple buffer objects, I found OpenCL refused to function correctly, saying it was out of resources. Instead, I lumped all the data to be shared into one giant VBO/cl_mem object, and that was mostly OK. Still, with large buffers (e.g. 500,000 particles) the OpenCL kernel seems to eventually give up and stop running after a while; I have no idea why. Finally, uploading images directly seems to be a problem, but if you create the image and then copy the data in separately, it seems to be fine.
This is on Nvidia, mind you. I hear ATI is much better.
Recently I was at a presentation where one of the senior technical artists of Crytek spoke. I had a chance to talk with him one on one afterwards about the kind of programmer a place like Crytek was looking for. For R&D, first on the list was a Master's degree in a technical field, which is not surprising, though this can be overlooked in the exceptional case of a mind-blowing portfolio. For less core engine development, a Bachelor's is the standard, but a community college graduate with an excellent portfolio would still be seriously considered. I had similar responses when I spoke with owners of local independent Toronto studios at my local IGDA chapter.
On this basis, I'm very seriously considering continuing my education after I finish my 3 year diploma at Humber College. In the interest of well-roundedness, and to pursue my non-game interests, I'm thinking of doing something business related for my undergraduate degree, probably commerce, and a Master's in computer science after.
I do realize it's not always easy to get into graduate school, and taking an undergraduate degree not directly related doesn't help. However, I can get letters of recommendation from a variety of professors I know, who have PhDs in fields such as physics, computer science and theoretical chemistry. I'm hoping that will offset the less technical nature of, say, a commerce degree. Although, one of my current professors who specializes in quantum physics suggested that it is possible to go straight into a Master's degree without a Bachelor's in certain exceptional circumstances. I'm flattered my prof thinks I'm skilled enough to even consider it, but I'm not sure it would be right for me even if I could.
There are two problems here. First, I want to get some non-computer education, and I really love business, securities and trade. Commerce seems ideal for this very strong secondary interest of mine. I'm not so passionate about it that I want to do an all-out MBA (though who knows what the future holds), so I think a Bachelor's would be the best way for me to experience the field. Second, I'm 23 now, and while I should qualify for 2 years of credits between my diploma and previous university experience, I would be 25 by the time I graduated. Assuming 2 years for a Master's, that's 27 before I'm in the workforce full-time.
Granted, at that point I would make a significant amount of money, but I'm also as impatient as I am stubbornly persistent. I want to get into the meat of the business and start contributing something valuable. I know I already can, though more education will help of course. I'm not totally decided on this issue.
I'm going to an open house at one of the universities I'm interested in this weekend. Maybe they will persuade me.
In the meantime, I've become the student federation rep for the game programmers at Humber. I'm working to improve the course and its rigour, co-ordinate with clubs and art students, and plan some sort of demoscene-style event. I have my final year courses and projects on top of that (though I'm happy to say my mid-term average is a solid 93%). I'm also keeping up on my self-interest studies into math, reading my books, my ACM subscriptions and newsletters, going to the IGDA meetings, and working on a programming contest. I also have the SIGGRAPH 2009 DVD set in the mail. I'm glad I got my smartphone, otherwise I'd never be organized enough to do it all. Quite busy.
Some say they are without merit in this digital age, but I find that for highly technical topics they are still useful. True, you can read many papers and presentations online and acquire the same information, but that takes more time. A good book offers the same information in a condensed form. Yes, there are survey papers out there that serve a similar purpose, but they are not as expansive as a good textbook, and you can't always find one that suits your interest. Not to mention the fact that books work without power or computers, and even come with their own screen! :P Of course, products like Kindle are bridging this gap, but not everyone has access to that (e.g. Canadians like myself) and many speciality books are not available on it. Therefore, I believe books are still a worthy investment.
With that in mind, I'm trying to build a list of books I would like to have in my personal collection. I'm looking for texts that are comprehensive in their topic, but not so broad that they can only half-explain things. Certainly, they can be superficial if there are prerequisite concepts to a given topic and there aren't enough pages to explain them, but they should be intentionally so. They should defer these ideas to other materials that treat them directly and with the depth required, and give recommended reading. If a book starts going past 1000 pages, the author likely isn't doing this and is probably getting carried away. Such a treatise may never really be finished in the author's lifetime (call it Donald's Dilemma). It's a noble goal, but ultimately yields something less like a book and more like a blunt weapon. I prefer a book that treats a manageable set of topics, and does it well.
I also want the book to be to the point, only including examples when they are the best way to demonstrate an important concept or note a non-obvious application. This doesn't mean the book is dense and unapproachable, only that it recognizes your ability to reason with and creatively apply the ideas it discusses. The point of a book is to learn from it. A barrage of examples only encourages rote memorization and wastes paper. Nor do I want a book that assumes you know everything already by using opaque and ultra-dense terminology. I can read scholarly journals for that sort of masochistic indulgence. :P What I really want is a book that keeps me thinking by building up new ideas at a brisk pace.
I believe a good technical book should also be reference worthy. I don't care for a collection of best practices, common sense and opinion, because that can be found freely online in massive quantities. More easily, it can simply be deduced. Even if your thoughts are backed by hard data, just say what it means instead of waxing lyrical about it... Well, unless you're writing "Coding Practices in Iambic Pentameter", haha! Of course, I make exceptions if the book is a particularly compelling, definitive or authoritative collection of subjective discussions.
Most importantly, I don't want a book that shies away from abstraction. I need to know the fundamental principles and abstract concepts that are behind any given technique or idea. That's how I remember and learn things. My mind is like a vast network of concepts that I sample and compose into concrete ideas and methods. I don't expect the book to explain them all, just say what they are please. It annoys me to no end how many engineering textbooks simply state "do it this way", show a few examples to prove it works, and move on. Another one I often see is, "The reasons it is done this way are obvious." Is it so hard to simply note the ideas they are building upon?
Here's a concrete example of what I mean:
I was reading about linear recurrence relations with constant coefficients in an engineering math book I have. One of the steps in solving them involves finding the roots of the characteristic polynomial. It shows you how to do it for 2nd order recurrences, but it doesn't state why you are using those roots or any other significance they have.
After some thought, I realized that it was because using these roots you can split the constants in the recurrence up, such that you can rearrange and remove a variable from the relation. Doing this repeatedly to remove all but one variable will solve it. I reasoned that this process could probably be condensed using linear algebra constructs. A look at Wikipedia confirmed this, and revealed to me the relationships between recurrence relations, eigenvalues/vectors and linear differential equations. This is a powerful association that improves both my understanding and recall of these ideas.
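To make the connection concrete, here's a small illustrative sketch using the Fibonacci recurrence F(n) = F(n-1) + F(n-2) as the example (my own choice, not the textbook's). Its characteristic polynomial x^2 - x - 1 has roots phi and psi, which are exactly the eigenvalues of the companion matrix [[1,1],[1,0]], and the general solution is a linear combination of phi^n and psi^n:

```cpp
#include <cmath>

// Closed form built directly from the roots of the characteristic polynomial
// x^2 - x - 1 (Binet's formula): F(n) = (phi^n - psi^n) / sqrt(5).
long long fibClosedForm(int n) {
    double phi = (1.0 + std::sqrt(5.0)) / 2.0;  // root 1, the golden ratio
    double psi = (1.0 - std::sqrt(5.0)) / 2.0;  // root 2
    return std::llround((std::pow(phi, n) - std::pow(psi, n)) / std::sqrt(5.0));
}

// Plain iteration of the recurrence, for comparison.
long long fibIterative(int n) {
    long long a = 0, b = 1;                     // F(0), F(1)
    for (int i = 0; i < n; ++i) { long long t = a + b; a = b; b = t; }
    return a;
}
```

The constants multiplying phi^n and psi^n are fixed by the initial conditions F(0) and F(1), which is the "splitting the constants up" step the textbook glossed over.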
However, I must note that relying on Wikipedia is unacceptable for advanced topics, because it's often impenetrable, disorganized, incomplete and confusing. Eventually I can parse it, but it takes an excessive amount of time. This is why I want to get some good books, to save time. My engineering math textbook could have done this if it just said something like, "Such-and-such property of polynomial roots enables rearranging the recurrence to remove a variable. See this-and-that for details." That is what I want the book I read to do. I want it to help me see the "big picture".
Far too often, technical education stresses the rote memorization of basic rules and mechanics. I believe that one needs to bind these mechanics together with the abstract concepts and transforms that connect them. This shouldn't be done "in the future when you know everything", it has to be reinforced each day. If one starts with a few basic mechanics and attaches them to a larger conceptual framework, retention and interest will be far greater. I find the current state of technical education is like showing someone all the parts of a car one at a time, but neither the car itself nor each part's place in it. It's absurd.
In any case, my present interests lie primarily in computer graphics and physics, math, software engineering and management, and general computer science. I'm looking for the most definitive and/or informative texts in these areas. I have a high standard because I don't have much money to spend.
The books I've bought so far are:
- Real-Time Rendering by Akenine-Möller
- Real-Time Collision Detection by Christer Ericson
- All the Mathematics You Missed: But Need to Know for Graduate School by Thomas Garrity
I chose these because they seem to be held in high regard, and they cover a very interesting but focused range of topics. I must also say that the Morgan Kaufmann Series in Computer Graphics seems to be very interesting as a whole.
As for the list of books I still want, I'm developing it on my Amazon Wishlist. Be aware that I added a bunch of books recently, so it needs some trimming right now. I may move the list to some other medium at some point, so I can include freely available books and important papers also.
One problem I have making this list is that there is no technical library or book store near where I live (Brampton, Ontario). I have to go about an hour to Toronto by transit to the University of Toronto's engineering library, and I can't take out any books because I'm not a student. I'm also so busy with work and such that I can only practically go out there on weekends. It's not easy for me to view and evaluate books before buying. For the most part, I end up relying on the opinions of others, reviews, and the pages I can see on Amazon or Google Books. Also, I mainly use Amazon's recommendations to explore what's out there.
Compounding this is the fact that I don't have enough associates or friends with the same technical interests. I am not in university or graduate school, nor am I ever likely to be. Neither do I have any degrees, though I've done 2 years of a 3 year college diploma. However, I don't think that's an excuse not to educate myself at a high level, and I seem to manage just fine. MIT's open courseware is also helpful with this. Plus, regardless of your background, continuous learning is a must!
My question for the readers of this blog is simple: Are there any? :P Seriously though, what books for computer graphics/physics, math and software engineering do you recommend and why? What do you think of my thoughts here? Are there any books, articles or papers you know of that fit what I'm looking for?
It's one of those weird transformations. When my family moved into my current city (Brampton, Ontario, Canada), it was by far dominated by white people. In fact, the Mormon church had big plans to settle families here, so they built a huge temple. It's one of the biggest this side of North America (58,000 sq ft), and you can see Moroni from a mile away. However, that didn't pan out. Instead Brampton became the city of choice for immigrants from India, Pakistan and the Middle East. (Probably a better outcome, as Indian food is so delicious! :p ) In any case, we now have this massive white Mormon edifice in a sea of Muslims, Hindus and Sikhs, which I find amusing. Similarly, the Jehovah's Witnesses here have not bothered to go door to door for a long time.
So for a number of years now, as a white person, I have been a visible minority. More recently, South Asians (India, Pakistan, Bangladesh) have become the majority. With this I have noticed certain social shifts. There are some communities where, as a white person, I am not welcome. There are certain jobs I cannot easily get. I am occasionally subject to racist remarks as I walk around my neighbourhood. Just today, I was sarcastically asked if I was a crack dealer. It's hardly endemic, but it is very different from how it was when I was growing up here.
Of course, the vast majority of people treat me decently. It's usually only idiotic teenagers egged on by their friends that make these sorts of remarks. In fact, I once held a job in a mall where the only white people were myself and the real estate agent, and I was always treated with respect. It's a very small number of people that treat me indecently. However, it really makes one pause to think about the much more dramatic racism experienced by other groups throughout history.
This is one of the advantages of living in a diverse and multi-cultural area like this. Not only are you exposed to a variety of cultures, but you get a first-hand (small) taste of what racial injustice is. It helps you better understand why Black people rose up like they did in the 60s, and the dangers and damage of discrimination. It makes you know how it feels to live in a place you don't always recognize, where you are sometimes the outsider, ignorant of customs and frowned upon by some.
I call this an advantage, because with it one realizes the tremendous value of the freedom enjoyed in this country. This is a place where you are free not to be black or white, but to be human. It teaches you that when faced with discrimination and injustice, you need to fight and sacrifice for your dignity. You should demand success in your life, work for it and settle for nothing less. Moreover, you must put the same effort into maintaining and supporting your society, so that it does not devolve into a hegemony where self-actualization is available only to a chosen few.
Sadly, many young people I see do not realize this. They see these problems (minor problems really), and become apathetic. Somehow, they don't realize that it doesn't take tremendous effort in a country as free as Canada to work on these issues. They somehow fail to see that solutions and opportunities stand before them glaringly. I don't fully understand how this veil has been drawn across their vision for the future. Perhaps it's simply that no credible person has told them what's possible?
For example, in my political experience it's exceptionally easy to get the attention of major politicians in this country. I have personally met both formally and informally our Prime Minister, Finance Minister, many other federal ministers, provincial party leaders (on many occasions), and too many MPs and MPPs to count. I have even met one of the Prime Ministers of the Czech Republic and Presidents of the European Union. I have ample opportunity to involve myself in forming the policies that define the direction this country will take. The limits to my connections are only my desire and initiative.
How is this possible? I ask to see them. I get involved. I have a party membership, even though I don't agree with everything the party does. But the main point is, I don't really put much effort into doing this. I just make myself known and available, nothing more. For that, I have access to the levers of power in this country. And to be clear, half of the youth in these circles are gay, Asian, Indian and so on, particularly the most influential ones. It most definitely isn't a race issue, but mainly the willingness to be present.
That being said, this same idea of simple availability extends to other areas like business and charity. The truth is, so many young people want nothing to do with the "establishment", but this "establishment" knows its mortality. They know they need to hand off power to the young eventually, so they are desperate for their involvement. They need to make sure the young understand the power they will wield one day, and all the hidden complexities therein. They need to reach out to the young, because the young are their future too. The young should take the offer, because they will be the establishment one day whether they like it or not.
Still, too many youth here don't seem to realize this. They just don't get involved. Maybe it's because of some stigma or irrational association they've been taught. Most just keep to themselves, passing the time with studies, Facebook or binge gaming. Others atrophy and entertain themselves by insulting passing strangers. Maybe they think they have nothing to offer, but in truth just being present is tremendously valuable. Somehow they don't seem to notice or value the immense wealth and opportunity that simply living here provides. Too many take it for granted, and it's a damn shame.
It's also dangerous, as it makes for sheep more easily herded.
On May 1st, I participated in TOJam in Toronto. There I produced SimArson, which can be seen here among many other excellent games. You can also get it on my website.
As my website states, this program includes the following features and technologies:
- GPU simulated flame propagation
- (ridiculous) HDR bloom/streaks
- 2D shader metaballs
- Per-pixel Phong lighting and refraction
- Various colour adjustments (e.g. rock wetness)
- Fire and water shaders
- Velocity field based motion blur (i.e. the one where you stretch the geometry in the direction of motion)
- OpenGL, FBOs, GLSL
- GLEW, Win32, Boost, Box2D, Corona Image Library
Everything else was hand-coded by me. The windowing framework and a number of utilities were developed prior to the event. The essential gameplay, rendering, shaders and physics setup were coded at the event, with much embellishment in the following 2 weeks. Needless to say, I was exhausted afterwards, as you can see in a photo of me sleeping in a writeup on the Torontoist blog.
To note, I had a lot of serious problems making it work on the ATI platform, and the comments in the ATI readme were made in a moment of intense frustration. But seriously, those bugs were crazy! In the end, I gave up supporting ATI. Mainly, because I don't own an ATI card, and fixing the kind of bugs you get with ATI on a friend's hardware through MSN is nigh impossible.
And a pretty image:
Also, all the shaders are stored in txt files, and there are some interesting commented out lines in there. In "fsFinal.txt" there are two ways to render water drops and one to render blood. In "fsStreak.txt" there is a very silly alternate way to calculate "float angle" that results in streaks reminiscent of waving tentacles. I call it "tentacle bloom" and expect to receive significant interest from Japanese developers. :p
Feel free to tweak away, and keep me apprised of the results!
I'd like to point out my intrusive list on my personal website. It's something I created as an exploration into the world of concurrent data structures and lock-free algorithms. As the website states:
"It's your usual doubly linked list, and consumes no more memory than one (2 pointers per node, 1 pointer for the list head), but it allows for concurrent local insertion, self-removal and overtaking forward traversal. There are no formal locks, though the functions can block for a time if there are conflicts."
Now I'm the first to admit, I'm not sure if it's really lock-free or just looks that way. The basic mechanism of operation is message passing. When a node is being removed, and the node previous to it is also being removed, it will wait for a message. If the node next to it is being removed, it will send a message. These messages allow the threads to correctly restructure the list.
This mechanism is also used to insert nodes, though in this case it is assumed the previous node will not be removed, so only the next node needs a message. It is a safe assumption, I think, because if a thread is inserting, it must have control of the node it is inserting after. If it doesn't, then something else could remove that node, and you get a reference to undefined memory. I assume the programmer is smart enough to avoid this kind of fundamental error. An easy way to do so is to make sure you have an iterator attached to the reference node, which I will now discuss.
Iterator traversal is achieved by having the iterators insert themselves into the list, and sort-of "stack" on top of their current data node. This means you can start iterating from any node a thread has control of. The aforementioned message passing mechanism can also tell if a node is data or iterator, and act appropriately. This includes blocking the removal of data nodes with iterators attached, which means you can safely insert from them.
Note that inserting nodes and starting iterators at the start of the list is enabled by the use of a "virtual" node representing a node just before the start of the list.
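For readers who haven't worked with intrusive lists, here's a stripped-down, single-threaded skeleton of the layout described above, including the "virtual" sentinel node. All the names are mine, and the message-passing and concurrency machinery is deliberately omitted; this only shows the intrusive structure itself:

```cpp
// An intrusive doubly linked list: nodes carry their own links (2 pointers per
// node), and the list head is a sentinel "virtual" node before the first element,
// so inserting at the front is just an ordinary insert-after.
struct Node {
    Node* prev = nullptr;
    Node* next = nullptr;

    void insertAfter(Node* pos) {   // link *this in after pos
        prev = pos;
        next = pos->next;
        if (next) next->prev = this;
        pos->next = this;
    }
    void remove() {                 // self-removal: unlink *this
        prev->next = next;
        if (next) next->prev = prev;
        prev = next = nullptr;
    }
};

struct List {
    Node head;                      // the sentinel before the start of the list
    Node* first() { return head.next; }
};
```

The concurrent version layers the message passing on top of exactly these two operations, which is what makes the memory footprint claim (2 pointers per node) possible.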
To be clear, I have not rigorously proved the correctness of this code, mainly because I don't have the formal knowledge required to perform such a proof. However, my testing has not revealed any problems (mind you, this is only on a dual-core system), and after countless hours of informal analysis (i.e. thinking), I don't believe there are any errors.
Also, there are performance issues with the various wait loops. Specifically, they should spin a certain number of times before yielding, if they ever yield at all. However, I again don't know what strategy to use to balance the spin counts for optimal performance. It probably depends on the current usage too, so perhaps some dynamic system should be used. In my testing, using a load balancing algorithm between the threads worked very well, but I don't think that's an ideal solution. I'm not sure how to go about this exactly.
Finally, I could probably add reverse traversal too. I don't see any obvious problems with it, but I just haven't bothered to yet. Same for insertion previous to a node. However, for my purposes (the as yet unfinished concurrent garbage collector), these operations are unnecessary.
Let me know what you think. The code is well commented. I may also post it on github or some similar service eventually. What would you recommend?
Of course, everyone knows about iD Software's use of voxel rendering, which is very cool, but it's interesting to also explore what came before it.
I was fortunate enough to work with Ben Houston briefly once a few years ago. Those in the know may remember him for his work on hierarchical RLE level sets (full text, free preprint). The technology that iD is working with is similar.
Also, the OTOY project from the company JulesWorld that provided the (infamous?) ATI Ruby 2.0 demos uses voxel rendering extensively. I'm still waiting to see concrete results from that project, because it involves extensive distributed rendering. The idea being to render high-fidelity graphics and stream them out to things like mobile platforms that could not otherwise deliver those visuals.
And of course, voxel rendering has a long and storied history in scientific and medical visualization.
To note, Digital Molecular Matter uses finite element methods to perform its physics calculations, which is in some ways comparable to voxel rendering, although instead of cubes, it uses tetrahedra. In both cases, you are dealing with a complex grid of data points, with all the ensuing challenges.
So this is far from a new technology, but what is new is its application to games... or not. For example, voxels are already used for smoke simulations, though those rarely show up in games, and they involve volume (not surface) rendering at generally much smaller grid resolutions. Really, what is new is the application of high-resolution, sparse, hierarchical voxel sets in games.
A noble thing indeed, but the next "big thing"? I'm not so sure. It seems to me to be more of a natural evolution of real time rendering, and a particular solution to a particular problem. The end user is only really going to notice the advantages in close ups, which mainly only happen in cut-scenes. They are also useful for dealing with volumetric effects, such as translucency, as they can avoid depth peeling. Generally, they can be more efficient with effects that are best done using ray tracing. At the same time they are more difficult to animate in the traditional manner (skeletons/skinning), while they are a better format for fluid-like movements. Therefore, I see them as complementary to polygons, not a replacement for them.
To note on animating voxel sets, I view distorted coordinate systems as the best solution here. Basically, each voxel component to be animated is split into a separate set and enclosed in a polygon hull. These are animated and rendered to a sort of G-buffer, which stores the transformations of rays from world space to the distorted voxel space for each screen pixel. The process can additionally cull non-visible voxel sets. Then the actual voxels are ray cast and the image rendered. The trouble here is that skinning polygons allows for the distortion of meshes, not just translation/rotation, which is not as obvious here. Also, there are issues with overlapping voxel hulls, because the area between the foreground hull's silhouette and the actual voxel one must not occlude the background. However, I believe these issues can be solved.
This is something that anyone remotely interested in the Middle East (everyone?) should be paying close attention to. As we all know, Iran is a major backer of many political and military groups in the Middle East, so their foreign policy has major implications on stability in the region. The basic problem is that the conservative establishment has failed to deliver on many promises (e.g. economic reform, anti-corruption), all the while inflaming many nations that could otherwise be working with Iran to the benefit of everyone. Many Iranians are tired of this, and with what appears to be a stolen election, they have decided to vent their frustrations en masse.
For those that are not aware, the basic structure of the Iranian system is as follows: The Majlis (parliament) is led by the President and First Vice President (a function previously served by a Prime Minister). They are overseen by the Guardian Council, led by the Supreme Leader, who is elected and monitored by the Assembly of Experts. However, as the groups do not always agree, the Expediency Council is a sort of mediator between the two. The Majlis, President and Assembly of Experts are all elected, however all candidates must be approved by the Guardian Council. The Council itself contains 6 members appointed by the Supreme Leader, and 6 more selected by the Majlis from a group chosen by the judiciary. The Council also appoints a variety of other important positions, such as the leader of the military.
The reason for the extensive oversight by the Guardian Council rests in the philosophy of Islamic Jurists; a group of learned Islamic scholars charged with ensuring the application of Islamic principles. In Iran's case, this includes the interpretation of their constitution, though it is not a court in the sense that it arbitrates between two opposing parties.
Some interesting historical background to the current situation is as follows: Mahmoud Ahmadinejad was the previous President, the questioned victor of the recent election, and favoured by the Supreme Leader, Ayatollah Khamenei (not Khomeini, that was the first Supreme Leader and leader of the Islamic Revolution).
On the other hand, the questioned 2nd place politician is Mir-Hossein Mousavi, who is probably not favoured by Khamenei. This is because when Khamenei was President of Iran, Mousavi was Prime Minister, and it is believed that he and the then Supreme Leader Ayatollah Khomeini worked together to restrain Khamenei's influence. Indeed, one of the very first things Khamenei did upon becoming the Supreme Leader was abolish the position of Prime Minister, removing Mousavi from politics. As a result, Mousavi avoided politics for many years until the recent election.
Another interesting element is Ayatollah Rafsanjani, the current chairman of both the Assembly of Experts and the Expediency Council. Ahmadinejad made some very defamatory remarks about him during the last election debate, leading Rafsanjani to write an open letter to Khamenei comparing Ahmadinejad's statements to those of various discredited groups. The two have been long-time political enemies.
Also, as chairman of the Assembly of Experts, Rafsanjani could initiate the dismissal of the Supreme Leader if it can be shown that Khamenei is violating Islamic tenets. Given the apparent election rigging and Khamenei's endorsement of Ahmadinejad before the usual three-day grace period, some Grand Ayatollahs (Sanei in particular) have issued edicts declaring the election a falsehood and supporting it to be against Islam. Rumour has it that Rafsanjani has called a meeting of the Assembly, but his intentions are unknown.
Remember that Iran is in a tremendous state of flux right now. Many rumours are flying about, so don't treat them as absolute. The recent protests in Tehran alone are estimated to have between one and two million participants, and are likely only to grow; mass protest on this scale has not been seen in Iran since the Islamic Revolution. In any case, we will have a better idea of what's happening by the end of the week. Let's hope for nothing reminiscent of Tiananmen Square, though I don't think it will get too violent. After all, government reprisals of that sort are part of what fuelled the Islamic Revolution to begin with. Khamenei knows this, and so far there have been only a handful of fatalities, related to the acts of Basij militias, but that may change. In any case, Iran could clearly use a change of political atmosphere.
Personally, I like the Iranian people. They are one of the most industrious and educated populaces in the Middle East. Just look at the wonderful work by Ali Rahimi on GameDev.net. Their progress and development have been remarkable, though there is still much to do. For example, if not for their government's dangerous and inflammatory behaviour, there is no reason the Iranian people shouldn't have nuclear power. The people are honourable. The government's active role in destabilizing the region (along with that of many other parties) is also regrettable. However, these protests show that the Iranian government no longer reflects the people's will. Hopefully, that will change.
My prayers are with our friend Rahimi and his country.
The Globe and Mail
The New York Times
Various Twitter feeds (e.g. IranElection09)
One of the problems when you have many CPUs, each with its own cache and possibly its own RAM pool, is that your data is no longer uniform to access. If you start working on a set of data on one CPU and the thread then migrates to another, you pay a serious performance penalty as the data is moved over. On the CELL, the private address spaces of the SPEs ameliorate this, but bring their own management complexities. One proposed solution is to maintain a global address space, with everything accessible to all processors, while associating segments of that space with specific processors and RAM pools. This can still be complemented with private spaces, but the point is that you retain direct access to a global space, which simplifies code and can improve performance.
To do this, these new languages (and language extensions) map data/memory to places or domains (representing pools of cache, RAM and even remote computers) using distribution objects and annotations (e.g. shared/private data). The point of having that extra information is that the runtime can intelligently manage what is executed where: you can ensure that a thread does not run where its data would be slow to access. These particular languages are aimed at supercomputers, but once we hit 8 cores per CPU this sort of system will be necessary, imho. I expect mainstream languages to pick it up in a few years.
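To make the idea concrete, here is a toy C++ sketch of the kind of distribution object these languages build in. PGAS languages like Chapel, X10 and UPC provide this natively; the names `BlockDist`, `owner` and `local_range` are my own inventions for illustration, not any real API. A scheduler could use `owner(i)` to run the thread touching index `i` on the place that holds it, keeping computation next to its data.

```cpp
#include <cstddef>
#include <utility>

// A "place" is just an index identifying a CPU/RAM pool.
// BlockDist carves a global index range [0, n) into contiguous
// blocks, one per place, the way PGAS distribution objects do.
struct BlockDist {
    std::size_t n;       // global number of elements
    std::size_t places;  // number of places (CPU/RAM pools)

    std::size_t block_size() const {
        return (n + places - 1) / places;  // ceil(n / places)
    }

    // Which place owns global index i?
    std::size_t owner(std::size_t i) const {
        return i / block_size();
    }

    // The half-open index range [begin, end) held locally by place p.
    std::pair<std::size_t, std::size_t> local_range(std::size_t p) const {
        std::size_t begin = p * block_size();
        std::size_t end = begin + block_size();
        if (end > n) end = n;
        return std::make_pair(begin, end);
    }
};
```

With 100 elements over 4 places, indices 0-24 live on place 0, 25-49 on place 1, and so on; every processor can still address the whole array, but each index has a known home.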
Really, lvalue references (T&) are a specialized version of rvalue references (T&&). This much is clear because T&& can bind to both lvalues and rvalues, whereas T& binds strictly to lvalues. Moreover, when T is a template parameter, T&& will become Type& or Type&& depending on whether the argument is an lvalue or an rvalue, whereas T& forces everything to an lvalue reference. Of course, both behave the same when T is an ordinary (non-deduced) type. In other words, T&& discriminates between lvalues and rvalues, whereas T& does not. Clearly, T&& is the more general form.
The same holds in template parameter deduction. With a T& parameter, an lvalue of type Type deduces T = Type, but an rvalue will not bind at all; only a const Type& parameter can accept it, deducing T = const Type. With a T&& parameter, an rvalue of type Type deduces T = Type, while an lvalue deduces T = [const] Type&, so that T&& collapses as [const] Type& && = [const] Type&. In this way T&& is again able to discriminate between lvalues and rvalues, whereas T& cannot.
To be clear, T&& does not really represent a new kind of reference. It covers all the functionality of T&, and the two are often interchangeable. Rather, it is a more general form of the C++ reference that can bind to modifiable rvalues and can discriminate between lvalues and rvalues.
One more perspective: T&& behaves exactly like the underlying type, be it an lvalue, an rvalue, Type, Type& or Type&&. T&, however, is not exact: it behaves like the underlying type but makes everything look like an lvalue, discarding any rvalue-ness. So again, T& shows itself to be a special case of T&&, and T&& the more general form.
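The collapsing and deduction rules above can be checked mechanically. Here is a minimal C++11 sketch (the helper names `AddRvalueRef` and `passed_lvalue` are mine): forming T&& when T is itself a reference folds to a single reference, and any lvalue reference in the mix wins.

```cpp
#include <type_traits>

// Reference collapsing: T&& where T is already a reference folds.
// Only && applied to && yields an rvalue reference.
template <typename T>
struct AddRvalueRef { typedef T&& type; };

static_assert(std::is_same<AddRvalueRef<int&>::type,  int&>::value,
              "int& && collapses to int&");
static_assert(std::is_same<AddRvalueRef<int&&>::type, int&&>::value,
              "int&& && stays int&&");
static_assert(std::is_same<AddRvalueRef<int>::type,   int&&>::value,
              "a plain type gains &&");

// A deduced T&& remembers the value category of its argument:
// an lvalue of type int deduces T = int&  (so T&& collapses to int&),
// an rvalue of type int deduces T = int   (so T&& stays int&&).
template <typename T>
bool passed_lvalue(T&&) {
    return std::is_lvalue_reference<T>::value;
}
```

Calling `passed_lvalue(x)` on a variable reports true, while `passed_lvalue(42)` reports false: the same T&& parameter accepted both and kept the distinction, which is exactly what a T& parameter cannot do.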
Read a bunch about Agile and its development, from the thoughts of W. Edwards Deming, to The Toyota Way, to the Agile Manifesto, to Lean Development.
The basic idea is one I have long been fond of: feedback loops. It's how I view the world and the mind, as a huge complex system of feedback loops. Nature works this way because it works, so it only makes sense to adopt it in our business systems and development processes. It's also why these things are so grey and abstract: they are always changing, always in flux. A steady state is nice but inevitably temporary, so we must be prepared for change; agile, in other words.
Agile is basically a rejection of micromanagement and micro-planning, but overzealous users may reject all planning at their peril.
I can see why the Toyota Way is applied so broadly; it's a very general principle. I particularly like the idea of a "pull" system that uses only the resources it needs and doesn't hoard. However, that kind of optimization requires a stable environment to work (stable suppliers, in Toyota's case). Some blame part of the current financial crisis on over-optimization in this area. Buffers are still useful, but should be analyzed (see Queueing Theory). I also like the idea of levelling the workload: I can imagine swings in work rate putting the system into an unnecessarily diverse range of states, making errors more likely. However, one must be careful not to make the system brittle to changes in work rate either. That's the common-sense aspect, I suppose: unwritten, but probably should be. You can't write everything down, though; life is often too complex to finish the job, and too temporary to create long-term value.
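As one small illustration of why buffers and utilization deserve actual analysis rather than gut feel, the simplest queueing model (M/M/1, a single server with random arrivals and service times) gives the average number of jobs waiting in the system as a function of utilization:

```cpp
// Average number of jobs in an M/M/1 queue, where
// rho = arrival rate / service rate, with 0 <= rho < 1:
//     L = rho / (1 - rho)
// L grows without bound as rho approaches 1, which is why running
// a system flat-out with no slack makes it fragile to any swing
// in the work rate.
double mm1_queue_length(double rho) {
    return rho / (1.0 - rho);
}
```

At 50% utilization the average backlog is one job; at 90% it is nine; at 99% it is ninety-nine. The model is a deliberately crude sketch, but it captures the non-linearity that makes both hoarding and zero-slack "optimization" risky.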
Balance in all things. Feedback loops can help maintain this through control circuits.
I was thinking about the 5S methodology:
* Seiri - Take an inventory of the objects in the environment
* Seiton - Arrange the objects to optimize work flow
* Seiso - Keep things clean and neat
* Seiketsu - Standardize so everyone knows where things are and what to expect. Less time wasted on preventable uncertainties.
* Shitsuke - Don't give in to laziness, apathy or carelessness. Keep the system alive. Probably requires a Kaizen mind, a sense of continuous urgency that takes advantage of the survival instinct for improved productivity.
It occurred to me that it was like a Zen koan. You're part of reality by being separate from it. That is, the work place has a defined set of standards, arrangements and procedures that separate it from reality. It's its own cosmos, and ideally one that promotes productivity, hence the emphasis on cleanliness and whatnot. However, that productivity enabling aspect also makes it much more present in reality and the outside world than the inside of someone's house would be.
Zen koans to me are pretty simple. They are the quantum mechanics of philosophy. They take two seemingly opposite ideas and fuse them together, because in reality nothing is so black and white. This is even the case when those opposite ideas are mutually exclusive, because reality is probabilistic. A system can be in a sort of superposition of those exclusive states, so as you sample it, it may appear one way or another. However, underneath that seeming contradiction is a unifying probability field. So this is another angle on the fusion of being separate in order to connect, and one I think Deming might have liked, given its statistical analogies.
Also read about the CELL. Currently installing YDL on my PS3 so I can write a concurrent raytracer or something similar. Not sure if I should write a PC version first; it might be a waste of time. I'm pretty sure I can handle this. Still need to finish my Common2 library and Compiler, though. Lots of work. I chose YDL because it appears to have the most comprehensive PS3 support and includes all the necessary development tools out of the box. That said, its Eclipse package is somewhat broken, so I had to reinstall Eclipse from an older version and reinstall the Cell SDK to get the Eclipse plugin working. Next time, I'll probably just use Fedora.
There are many blogging websites around, and in fact, I have my own website on which I could host a blog, but I decided to pay the GDNet+ fee and open one here.
My reason is that I believe having my blog here will give me greater exposure and, more importantly, more informed comments. That said, I could have opened a blog at Gamasutra for the same effect, but I'm more familiar with the GameDev crowd. I've been a member here since July 2002 (seven years!), which is a long time in the technology world. On top of that, it was through the resources at GameDev.net that I learned much about game development (e.g. NeHe, the Matrix/Quaternion FAQ). I feel a much stronger connection here, so this is where I'm penning my thoughts.
Though perhaps if there were some way to post to many blogs at once, I could use that. I think I've seen tools that do this, but I'm not a "web guy", so I don't really know. Something to manage one's online life would be wonderful. I have 12 email accounts, countless website registrations, numerous subscriptions, and actually many blogs (which I don't use), and so on. A system that could track all that, and in particular archive my posts and the threads/discussions they belong to, would be very useful. Any ideas?
One thing to clarify: I'm going to be writing here about things on which I am absolutely not any sort of qualified expert. If I get something wrong, do correct me, but don't expect perfection. I will also greatly appreciate it if people point out resources or papers (I have access to the ACM library) that I should read.
Now I'm going to post some things in rapid succession here. Just because I never really "blogged" much before doesn't mean I don't write things down. This pace will not continue (unless you really like it).
One last thing, I'm available for projects and temp work, so please contact me if you have any.