
Musings on the future, part 1



I had a scary realization a short while ago, the kind that fills you up with feelings of age, the shortness of life, and - perhaps paradoxically - a deep sense of fulfillment at what has been accomplished in a fleeting span of time.

You see, it's been five years. The landscape of games technology has changed dramatically in that time, and the next five promise to be even more radical. Five years ago, I certainly wouldn't have predicted that things would turn out as they have.

As a matter of fact, five years ago, my plan was to be releasing the second or third generation of real-time raytracing technology. At the time I was still planning on software-based rendering, but that changed. I had a vision for a world with a radically different approach to computer graphics - and my products at the center of it all, naturally.

I "officially" discarded my other hobby projects to begin focused work on the "Freon 2/7 Project" in April of 2002. The announcement was made on my web site, to the vast audience of about three people. I'd already done quite a bit of experimentation in raytracing, and had written the beginnings of my own raytracer. Over the next two years, I poured thousands of hours into research and development on the technology.

It quickly became clear that software wouldn't hack it. I shifted my focus to developing a prototype of a system that would effectively be a fixed-function hardware accelerator for raytracing. At the peak of my research, I developed a fast global-illumination approximation algorithm, which still remains unpublished (out of hope that I might get to use it properly someday).

Soon, though, things started to slip; the original "release date" of December 17, 2003 came and went, with only a token update to the website. The second anniversary of the project came and went, and the whole thing just quietly shut down. I hacked off and on until the summer of 2004, and then stuffed the code into a .ZIP file, burned it to CD, and haven't opened it since.

Even in the intervening two years, things have changed. Two years ago, it was clear that a fixed-function raytracing card had no place in the market - but a programmable one had a good shot. The problem was, developing a proper emulator for a programmable system would require a full rewrite of my prototype code; more importantly, bringing the product to market would require substantial involvement (and money) from a hardware development team. I put the project away not because it was no longer worth pursuing, but because I no longer had the time - I'd buckled to the temptations of a day job, and had committed my remaining time to working with Egosoft.

After the day job went down the crapper, I found myself again with a surplus of time, but also a decent bit of burnout, and not much desire to work on many side projects. Over the past few months, that's clearly changed, with stuff like TKC2 and the Epoch language grabbing my attention.

Through it all, though, there's been a little nagging doubt in the back of my mind: what about Freon?

I've thought it over at length, and I think the time for the product has come and gone. There was a window of opportunity for a couple of years, but it's now closed. Shader Model 4 and the increasing convergence of CPU and GPU technology have basically eliminated the need for dedicated raytracing hardware. More importantly, programmable shaders in general have eliminated the quality disparity between rasterized and raytraced graphics. There's simply no need for raytracing in the market; there's no place for it.


I think another window will open in a few years. Raytracing is inherently a more efficient and elegant rendering method than scanline conversion, and due to its power, it will eventually supplant traditional polygon-rasterization methods. This will become particularly true when hardware is fast enough to genuinely simulate complex lighting effects (global illumination, subsurface scattering, participating media, fully dynamic lights and geometry, and so on) rather than just making hackish approximations.
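To give a sense of the elegance claim: the heart of any raytracer is nothing more than a per-pixel intersection query followed by shading. Here's a minimal Python sketch of that core loop, tracing primary rays against a single sphere (the scene, camera, and flat hit/miss shading are placeholder assumptions for illustration, not anything from the Freon codebase):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or None.
    Solves |origin + t*direction - center|^2 = radius^2 for t,
    assuming direction is normalized (so the quadratic's 'a' term is 1)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def render(width, height):
    """Trace one primary ray per pixel into a one-sphere scene,
    returning a 2D grid of intensities (1.0 = hit, 0.0 = miss)."""
    center, radius = (0.0, 0.0, -3.0), 1.0
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel onto a simple pinhole-camera image plane.
            u = (x + 0.5) / width * 2.0 - 1.0
            v = 1.0 - (y + 0.5) / height * 2.0
            d = (u, v, -1.0)
            norm = math.sqrt(sum(k * k for k in d))
            d = tuple(k / norm for k in d)
            t = ray_sphere((0.0, 0.0, 0.0), d, center, radius)
            row.append(1.0 if t is not None else 0.0)
        image.append(row)
    return image

img = render(16, 16)
```

Everything else a raytracer does, such as shadows, reflections, and global illumination, is just more rays fired from the hit points found by that same intersection query, which is exactly why the approach scales so naturally compared to the growing pile of rasterization tricks.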

That time is a long ways off, though, and even that future has no place for dedicated raytracing hardware. Instead, I think the future is shaping up to hold something different.

The first step was AMD's integration of the memory controller and CPU. The AMD/ATi acquisition promises to provide further changes. Dual-core technology has hinted at the challenges to come, and a close look at multi-core processor plans of the future shows that there's no shortage of difficulties to overcome.

Perhaps most important, though, is the role of programming languages in all of this. Freon was killed, by and large, by programmability; with technology developed as far as it has, the age of fixed-function hardware is over. Programmability will be king for the foreseeable future, and likely until the von Neumann architecture is replaced entirely. But programmability requires programming languages, and the languages we have now are not enough.

For a few hours, I pondered the tradeoff of reviving the Freon project (which, after all, still has some promising proprietary technology) versus work on the Epoch language. Both have some potential, and both would require tremendous amounts of effort to really accomplish anything.

The more I think about it, though, the more I'm convinced that the future of both projects is one and the same. It lies down a twisting but all-important road of processing advancements, at the logical conclusion of core-proliferation and the challenges of writing multiprocessing-capable software.

And just because I'm a smug bastard, I'm going to make you wait to see what I think that future looks like [smile]


Recommended Comments

I'm not convinced that real-time ray-tracing is the way forward.

Yes, it'll happen and yes it'll look so damned cool as to make the polygons weep.

I just think it'll take so much investment in all areas that the commercial factors won't bite. We're already getting talk about diminishing returns with XB360/PS3/D3D10. A change from current polygonal techniques to all-out ray-tracing is no small thing, and I just find it hard to see many people liking the cost:reward balance. At least initially...

Anyway, guess time will tell [smile]


I would agree, but for one small and all-important truth about how processor technology is shaping up, and the direction things will be heading in the next few years.

But that's for Part Deux.

Good article. I agree 100%. I used to follow your Freon project and always wondered what officially happened to it. Even if the von Neumann architecture is replaced, or evolves so significantly that its original basis is unrecognizable, I feel programming will continue to be most important. This is coming from a neutral non-programmer, so I am not biased or anything. :)

Also, considering the increased parallel processing power (what is it, 20 cores by 2010?), and the strong foothold that the lambda calculus, type theory, and Cartesian closed categories (all representationally equivalent but different theories) are gaining in CS, along with the implementations of new languages (ML, Epigram, Coq), something like raytracing, which is more mathematically tractable/elegant and less "hackish" than the current method of polygonal patchwork quilts, will hold sway in the future.

In addition, the functional paradigm works transparently with concurrency (especially compared with the imperative one), which results in a very good match all in all for processors, languages, and renderer.

I also (mostly) agree with your writings, Apoch. The current approach is wonderful for "faking it", but when technology can finally go no further with the current rasterization approach, it will likely fall back to the powerful relatively untouched (in real-time, at least :P) realm of raytracing.

A bunch of students wrote Quake 3 Ray-Traced, although it requires about 36 GHz worth of processing power to get 20 FPS. But it's the "we'll get there" element that's important. [smile]

Maybe SaarCOR will see its day after all!

Original post by Daerax
something like raytracing, which is more mathematically tractable/elegant and less "hackish" than the current method of polygonal patchwork quilts, will hold sway in the future.

In addition, the functional paradigm works transparently with concurrency (especially compared with the imperative one), which results in a very good match all in all for processors, languages, and renderer.
Good points. I often end up forgetting that we will change as well as the technology - engineers learning how to use new tools and so on...

But still, a games industry where employees can hold a conversation down the pub in fluent C++ is gonna be hard to convince to re-train to some functional paradigm/language. Which loops back round to my opening comment [smile]


I read the article over at Borders earlier tonight -- last night? Things are fuzzy. Anyways.

I didn't find it worth the cover price of the issue. It was barely 2.5 pages long, had only a cursory introduction to the issues at hand, and then more or less turned into an advertisement for InTrace. It was excellent as far as a crash-course in graphics technology for people who don't "do" graphics tech for a living, but there really was nothing there that I (or the SaarCOR guys) haven't been saying for years.

It's interesting to see a little media exposure to the technology, but I was a bit disappointed that they made it sound like such a mystery how the technology will play out (at the end of the article). I imagine the InTrace stuff added some bias, but frankly there's no market for a dedicated RPU. It's highly unlikely we'll see dedicated ray instructions in programmable graphics hardware, either, because GPUs are designed around a different model of data flow than an optimal raytracer.

That, of course, basically leaves us with advances in CPU design to help take advantage of different data pipelining models... which, of course, I've already said my piece about, in part two [smile]

