VanillaSnake21

software renderer?


I just realized that I don't know anything about game programming. A quote that caught my attention is from Andre LaMothe's book on 3D rasterization:
Quote:
"A real game programmer can write a 3D engine from scratch with blood pouring out of his eyes, on fire, and with needles in his eardrums, as well as use 3D hardware.."
So I gave myself a mental task: if I had a standard x86 machine with no APIs (except a C++ dev environment, and maybe the Windows headers/DLLs), how could I plot a pixel on screen? My first thought was OK, I'm going to write a POINT struct and give it two members, x and y positions. And that's where I came to a halt. I have a point in memory; now what? As I thought about it, I couldn't even picture where I would start drawing that point to the screen. I've been reliant on outside APIs and libraries for so long that all I know in my two years of programming is how to call a function.

So I decided to embark on this huge, massive, discouraging quest of plotting a single point to the screen all by myself. I mean completely, completely independent. As the title suggests, I decided to write a software renderer, a term that almost everyone scorns these days. I'm here in the Beginners forum not precisely for exact code references on how to do it, but simply for how to start. This is the approach I managed to think out by myself:

1. Somehow "acquire" the memory in the video RAM of my graphics card (up to now all I know about this is calling Lock(...) and getting back a pointer to that memory).
2. Write my point into that buffer, in some sort of format.
3. Return the buffer to the card so the card can draw it.

This post is getting extremely long, so my first question is: how can I bypass all the Windows layers and talk to my graphics card directly?

Thanks, Tim.

[Edited by - VanillaSnake21 on May 6, 2007 11:38:07 PM]

This sort of thing isn't really feasible on a modern operating system. Direct access to the hardware is generally a bad thing: it's way too easy to make a mistake and cause serious damage or completely crash the computer (blue-screen style or worse). If you want to write to video memory directly without using an API you need to be running in kernel mode. This means either writing a skeleton operating system that will run your software renderer or writing a custom driver that will write to video memory for you. Neither of these things is especially easy, and if you make a mistake, kernel-mode debugging is not exactly fun.

You might be able to ask Windows to emulate some sort of old DOS video mode for you, but this would mean writing 16-bit DOS-style code, and that's not much fun either.

Honestly, the best thing to do would be to make a buffer in system memory. Pretend that buffer is video memory and build the software renderer on top of it. Then display that buffer on screen in whatever manner you like. You could dump it to a window or display it using Direct3D. Heck, you could just lock a primary surface in Direct3D and memcpy your buffer onto the surface.
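The "memcpy your buffer onto the surface" step has one wrinkle worth sketching: a locked surface usually reports a pitch (bytes per row) that can be larger than width * bytesPerPixel, so you copy row by row rather than with one big memcpy. The surface pointer here is simulated with a padded array; in real code it would come from a Lock() call:

```cpp
#include <cstdint>
#include <cstring>

// Copy a tightly-packed 32-bit framebuffer into surface memory whose rows
// are `pitchBytes` apart. pitchBytes may exceed width * 4 (driver padding),
// which is why a single memcpy of the whole buffer would be wrong.
void blitToSurface(const uint32_t* src, int width, int height,
                   uint8_t* dst, int pitchBytes)
{
    for (int y = 0; y < height; ++y)
        std::memcpy(dst + static_cast<size_t>(y) * pitchBytes,
                    src + static_cast<size_t>(y) * width,
                    static_cast<size_t>(width) * sizeof(uint32_t));
}
```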

This reminds me of the old days when games ran in DOS and the programmer accessed video RAM directly at memory address 0xA0000 (or something like that). I don't know if that can happen under Windows, or if you're able to bypass all the layers and get a pointer to the video RAM.
Idea: use a DirectDraw surface and transfer your pure (software-rendered) data to that surface. Does this help?
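For the curious, the DOS-era arithmetic really was that simple: in mode 13h the screen was 320x200 one-byte palette indices starting at 0xA0000, so a pixel write was a single store at base + y*320 + x. The sketch below does the same math against an ordinary array, since dereferencing 0xA0000 is meaningless under a modern OS:

```cpp
#include <cstddef>
#include <cstdint>

constexpr int MODE13_WIDTH  = 320;
constexpr int MODE13_HEIGHT = 200;

// Offset of pixel (x, y) from the start of mode 13h video memory
// (which lived at physical address 0xA0000 under DOS).
constexpr std::size_t mode13Offset(int x, int y)
{
    return static_cast<std::size_t>(y) * MODE13_WIDTH + x;
}

// Under real-mode DOS this would have been:
//   uint8_t* vram = reinterpret_cast<uint8_t*>(0xA0000);
//   vram[mode13Offset(x, y)] = colorIndex;   // 8-bit palette index
// Here we write into a plain 64000-byte array instead.
void putPixel13h(uint8_t* screen, int x, int y, uint8_t colorIndex)
{
    if (x >= 0 && x < MODE13_WIDTH && y >= 0 && y < MODE13_HEIGHT)
        screen[mode13Offset(x, y)] = colorIndex;
}
```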

Quote:
Original post by VanillaSnake21
...So I decided to embark on this huge, massive, discouraging quest of plotting a single point to the screen all by myself. I mean completely, completely independent.


It can't be done; at some point you've got to interface with the graphics card driver, which would count as using an API :P
I guess you could download one of the open-source gfx-card drivers for Linux and see how they interface with the hardware, but then you'd be limited to that one card, and your code wouldn't work anywhere else...

If I were to write a SW renderer, I'd just use GL to open a window, and limit myself to the GL functions that let you write directly to pixels.

Just use an API for actually drawing the pixels. I've written two 3D software rasterizers for school: one that used OpenGL and made a vertex at each pixel, then drew point lists, and another that used GDI to draw points (both approaches were pretty slow, so I'm sure you could do better). The point of doing this yourself is to learn all the math and such involved in a renderer, not the machine code to interface with your graphics card.

The thing is, it's not the pixel plotting that I need; I currently have a semi-working game in development that does much more than pixel plotting. What I want is to actually work with the hardware without the APIs. Some suggestions were to use a surface to get the memory, but that defeats the whole purpose of what I'm trying to do. I think that by calling one of the API functions to fetch the memory for me, behind the scenes a couple of million lines of code are executed that I have no conception of. The graphics card calls are not like MatrixMultiply or other functions where users can guess what kind of implementation the makers used; the memory calls are some mysterious portals that no one goes beyond, and that no one challenges =D.

cshowe mentioned that I need to get into kernel mode or write a driver; both of these I am willing to attempt. At least now I have some lead as to where to begin. OK, kernel mode: how do I get in? As for the driver, that sounds like a very interesting idea. I've been dying to try out driver dev for a while; can someone elaborate on what the custom video driver has to do? Thanks for the replies :)

P.S. If there is any part of this that is physically impossible for one person to do, please tell me. If something I'm asking about requires an enormous amount of code that would take one person months or years to even type, please give me a heads up, thanks =)

Yes, you can (easily) talk to your graphics card directly, without an API: like Jimmy Valavanis said, you just need to write to the appropriate address.

It is definitely feasible for one person to do it (how do you think people made games back in the DOS days?).

I started learning graphics coding from Denthor's tutorials; they are still around (now in C!):
http://www.cprogramming.com/tutorial/tut1.html

That will give you an introduction to what to do.

If you want to go further and use "modern" resolutions and color spaces (32-bit color, 1024x768) you will have to learn about VESA modes (a lot more work), and probably there is something now that has superseded that, though I don't know what. If you're really that dedicated, you could probably take a look at the Linux driver source code.

Good luck!

Quote:
Original post by e64
I started learning graphics coding from Denthor's tutorials; they are still around (now in C!):
http://www.cprogramming.com/tutorial/tut1.html


Thanks!! That's some amazing stuff. I'm currently experimenting with assembly and I've been looking for ways to implement it in games; that site addresses my current issue as well as uses assembly :)

Quote:
If you're really that dedicated, you could probably take a look at the Linux driver source code.


Is there anything for Windows? I just googled the topic and some interesting results came up for display drivers on Windows; I'm trying to get the WDK (Windows Driver Kit) from MS Connect right now.

Quote:
Original post by VanillaSnake21
What I want is to actually work with the hardware without the APIs. Some suggestions were to use a surface to get the mem, but that defeats the whole purpose of what im trying to do. I think that by calling one of the APIs functions to fetch the mem for me, behind the scenes a couple of million lines of code is executed that I have no conception of.

What Andre LaMothe meant by that quote was a software rasterizer with software shading, collision detection, world/model/entity management, etc.
Using DirectX just to get access to the framebuffer memory hardly counts as using an API to do all the 3D work.
Unless you really wanna try to kick your own arse, then roll your own kernel drivers ;)
And besides, game programming isn't about graphics; graphics programming is.
Game programming has so much more to it, like the actual game :O , user interfaces, AI, physics, sound. So as far as game programming goes, you are heading the wrong way here.

ch.

Quote:
Original post by christian h
...And besides, game programming isn't about graphics; graphics programming is. Game programming has so much more to it, like the actual game :O , user interfaces, AI, physics, sound. So as far as game programming goes, you are heading the wrong way here.

You're right, game programming is more about the actual game, so I guess this is a little beyond game dev. But for the past year or so that I've been making my game with the DX9 API, I've felt like I'm doing almost the same stuff over and over again. Maybe the algorithms differ or the AI differs, but it all relies on the same look of the same functions: init this, make it move, apply shader, flip backbuffer, repeat. And that's all done with premade functions from the API. I want to control my computer, I mean really control it, not ask a layer to do something for me. I want to feel the glory of drawing a line by shifting memory within the processor. I think at that point, all the artistry of gaming can flow in. It's like getting a paintbrush and painting on a real canvas: when you get this low, where every extra 1 counts in the opcode, the drawing becomes almost tangible, since you're manipulating every move of your "code brush".

About VGA mode: I have a laptop with a built-in LCD screen, so do I have to use VESA, or can I still use VGA?

You don't want to bother with all that VESA and VGA mode crap. You need a DOS compiler to do it nowadays anyway; modern operating systems prevent you from accessing the hardware directly. It is not just a matter of "writing to the appropriate address" if you're talking about a Win32 application on a modern PC, because chances are the address your app is writing to isn't the actual physical address being written to (address space is virtualized, remember), among other problems.

The techniques for writing directly to video memory that were used back in the DOS days are no longer relevant, and they're only an implementation detail on top of the theory of computer graphics. It's the theory that is important, not the implementation detail. You can happily write a software rasterizer using GDI or DirectDraw to get access to a blob of memory you treat like a pixel buffer, manipulate, and blit. That's the route to take if you want to explore software rasterization with the intent of learning a relevant skill set. If you just want to screw around in video memory, get an old computer and a DOS compiler (such as DJGPP, I think) and have fun. Just be aware it's not going to be that important on your resume.
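To give a flavor of what "a software rasterizer against a blob of memory you treat like a pixel buffer" means in practice, here is a minimal Bresenham line into such a buffer. The 32-bit row-major layout is an assumption; the blob itself could come from GDI, DirectDraw, or a plain new[]:

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Integer-only Bresenham line from (x0, y0) to (x1, y1) into a row-major
// 32-bit pixel buffer. No floating point and no API: just stores into
// memory, which is the whole job of a software rasterizer.
void drawLine(std::vector<uint32_t>& buf, int width,
              int x0, int y0, int x1, int y1, uint32_t color)
{
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;  // accumulated error term
    for (;;) {
        buf[static_cast<size_t>(y0) * width + x0] = color;
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  // step in x
        if (e2 <= dx) { err += dx; y0 += sy; }  // step in y
    }
}
```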

Quote:

What I want is to actually work with the hardware without the APIs... I wanna control my computer, i mean conrol, not ask a layer to do something for me, but i want to feel the glory of drawing a line by shifting memory within the processor.

This requires intimate knowledge of the hardware, which is usually not public information, and a driver development kit. You cannot write a usermode Windows application and do this. Period.

Quote:

A quote that caught my attention is from Andre Lamothe's book on 3D Rasterization,

LaMothe, you must know, is highly polarizing. He's very popular and has written and edited a lot of books, but an overwhelming majority of professionals consider him to be an idiot and his books to be terrible wastes of money that focus on the superficial.

What jpetrie said.



Knowing the hardware-level details is virtually irrelevant these days; understanding how the higher level stuff is done (the shader pipeline, how memory is transferred across the bus from the system to GPU, how various visual effects are achieved, etc.) is vastly more important.

Nobody but hardware driver developers can (practically speaking) poke the hardware directly, and unless you really want to get into that tiny little niche, I'd recommend you find a better use of your time.

Understanding how the lowest level end works is good, but doing it yourself is not really all that useful. Writing your own software renderer and using a common trick like accessing the framebuffer via DirectX would still be highly educational (in terms of understanding 3D mechanics and theory) and actually achievable.



Andre LaMothe was a decent read about ten years ago, when his ideas were actually still relevant to the industry, and when his (very crude) approach to optimization was important. Nowadays, most of his stuff is totally outdated, and useful only for historical entertainment. He doesn't even cover the subject material particularly well. I wouldn't put a tremendous amount of stock in what he says about game programmers.

@VanillaSnake21

Don't do direct-to-video-memory access in Windows. Mostly I say this because you almost certainly can't, but also because you absolutely shouldn't.

Besides which it completely defeats one of the best things about software renderers: You don't have to draw everything on a screen!

If you write your software renderer (and I speak as someone who has written a couple of very simple ones) to render into your own framebuffer/z-buffer/etc., then you can save it out as an image, or turn it into an anim, or perform weird transforms and post-processing on it. Whatever you like.
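Saving such a framebuffer out as an image needs no API at all; a binary PPM writer is about ten lines. This sketch assumes the 32-bit packed-ARGB buffer discussed above and simply strips the alpha byte:

```cpp
#include <cstdint>
#include <sstream>
#include <string>
#include <vector>

// Serialize a 32-bit ARGB framebuffer as a binary PPM (P6) image.
// PPM is the simplest widely-readable format: a short text header
// followed by raw RGB bytes, so no image library is needed.
std::string framebufferToPPM(const std::vector<uint32_t>& buf,
                             int width, int height)
{
    std::ostringstream out;
    out << "P6\n" << width << " " << height << "\n255\n";
    for (uint32_t p : buf) {
        out.put(static_cast<char>((p >> 16) & 0xFF)); // R
        out.put(static_cast<char>((p >> 8) & 0xFF));  // G
        out.put(static_cast<char>(p & 0xFF));         // B
    }
    return out.str();
}
```

Dump the returned string to a .ppm file and nearly any image viewer or converter can open it.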

If you then choose to take that lump of data and display it, you can do so using whatever API you like; or, should you be stuck on something like the PSP (homebrew etc.), you might just get your hardware-bit-banging love [grin]

Andy

@jpetrie
Quote:
LaMothe, you must know, is highly polarizing. He's very popular and has written and edited a lot of books, but an overwhelming majority of professionals consider him to be an idiot and his books to be terrible wastes of money that focus on the superficial.

I see that you're not a fan of LaMothe, but I consider his books genius, and easy to read. He tackles huge problems with a positive outlook: if a driver has to be written, he writes it and explains it; a file loader, you got it. His book on 2D programming was my foundation for Windows/2D dev. I personally think he's a genius based on his work and resume.

Quote:

If you just want to screw around in video memory, get an old computer and a DOS compiler (such as DJGPP, I think) and have fun. Just be aware it's not going to be that important on your resume.

Messing around with an old 16-bit comp is not my intention. I do want to create a working game (even if it's as simple as a wireframe object moving on screen) without using any APIs at all.

@ApochPiQ
Quote:
Knowing the hardware-level details is virtually irrelevant these days; understanding how the higher level stuff is done (the shader pipeline, how memory is transferred across the bus from the system to GPU, how various visual effects are achieved, etc.) is vastly more important.

I've already explored some of these areas, and I will mention again that I do have a working game in progress that is fully dependent on the DX9 API. My purpose here is not to improve my resume or anything like that; I personally think it will help me A LOT if I manage to go that low, in terms of understanding the system.

Quote:

Understanding how the lowest level end works is good, but doing it yourself is not really all that useful. Writing your own software renderer and using a common trick like accessing the framebuffer via DirectX would still be highly educational (in terms of understanding 3D mechanics and theory) and actually achievable.

Useful for what? Why always do what's useful (even though I think it's extremely useful)? Don't you get that feeling sometimes that you want to dive into something that you have no idea about? That's why LaMothe is so good: he loves the most complex topics, like a mathematician loves rederiving calculus formulas. If you think that doing something unordinary, or something that no one attempts, is useless and to be avoided, then programming is simply your job, not your passion.


Quote:

I see that you're not a fan of Lamothe, but I consider his books genoius, and easy to read. He tackles huge problems with a positive look, if a driver has to be written, he writes it and explains it, file loader... you got it. His book on 2D programming was my foundation to Windows/2D dev. I personally think he's a genious based on his work and resume.

ApochPiQ put it nicely. LaMothe's work was relevant years ago. Nowadays, his methodologies are inefficient, his techniques outmoded, and his books tend towards code dumps with a focus on superficial details (API details, which are highly variable). The last item in particular is what drives the success of his books right now, because they enable people to get up and running with visible results quickly; this has its advantages, but it doesn't impart the deep understanding that is necessary to eventually excel.

My personal opinion of the man is not on the table here: the facts speak for themselves, and that you do not yet have the experience to recognize their full impact is just something that will come with time.

Quote:

That's why LaMothe is so good: he loves the most complex topics,

No, he doesn't. His books on rasterization (for example) cover the topics at a superficial level; he does not achieve nearly the depth or complexity of the classical texts on the subject, such as Foley, van Dam, et al.'s works, or even Abrash's. You're speaking from a position of relative inexperience in the field -- by your own admission -- and so you don't really have a foundation.

Nor does he get into any of the gritty, nasty details about memory virtualization, device I/O and DMA, interrupts, and other hardware nastiness that you'd need to pull this off.

Quote:

Messing around with an old 16 bit comp is not my intentions, I do want to create a working game (even if its as simple as a wireframe object moving on screen) by not using any APIs at all.

You can't. Sorry. It's impossible to make anything nontrivial without going through some kind of API. In your case, you'd need to use the DDK APIs and you'd need to have detailed knowledge of the specific class(es) of graphics cards you were going to support. Except, possibly, for old and outdated cards, this information is not available and you'll have to reverse engineer it -- which is not a small task.

On top of that, you'll need to have some intimate knowledge of how the chips (CPU and GPU both) work and how drivers interact with the OS, because the only way you're going to get the kind of access you want is by writing a driver.

So, to sum up:
a) LaMothe's book has given you, if anything, the tip of the iceberg as far as the foundational knowledge required to achieve your goal of implementing an API-less rendering demo.
b) Your goal is not reachable within the parameters you've defined anyway (you can't do it on modern hardware with a modern OS without going through an API), so
c) You're going to need to write a driver for a graphics card, which will require the DDK APIs (which still technically violates your goal, but it's reasonable).

Consequently, your next step is to learn how to write drivers. I would start at MSDN, and you'll need to obtain the DDK (when I got my copy, it was free but you had to pay for shipping; it was not downloadable, that may have changed). Once you get a bit more comfortable in that realm, you can turn to the daunting task of reverse-engineering your chosen graphics card.

I would note that Intel has open-source Linux drivers for their GMA range of graphics chipsets, so if you have access to a computer that uses GMA graphics you'd be able to write a driver without needing to do a whole load of reverse engineering.

Though I do agree with the previous posters that trying to write a graphics driver isn't going to help you much if your goal is to become a better game programmer. Sure, you'll probably learn a lot, but most of it won't really help when trying to write a game (unless of course you're writing your game to be specifically optimized for the driver you've just written, which is a rather odd thing to do, as you want it to run well on all kinds of systems with different drivers).

Quote:
Original post by jpetrie
Quote:

Messing around with an old 16-bit comp is not my intention. I do want to create a working game (even if it's as simple as a wireframe object moving on screen) without using any APIs at all.

You can't. Sorry. It's impossible to make anything nontrivial without going through some kind of API. In your case, you'd need to use the DDK APIs and you'd need to have detailed knowledge of the specific class(es) of graphics cards you were going to support. Except, possibly, for old and outdated cards, this information is not available and you'll have to reverse engineer it -- which is not a small task.


Actually, even newer cards such as the ones based on the G80 chipset have VESA and VGA support, so it is always possible to get something up and running; the tricky part is getting hardware acceleration to work on them. (But you don't need that for a software rasterizer.)

Ahh, to be young and naive again [smile]


Quote:
Original post by VanillaSnake21
I see that you're not a fan of LaMothe, but I consider his books genius, and easy to read. He tackles huge problems with a positive outlook: if a driver has to be written, he writes it and explains it; a file loader, you got it. His book on 2D programming was my foundation for Windows/2D dev. I personally think he's a genius based on his work and resume.


His resume is that he's written and edited a lot of books, most of which are considered rather low-quality by people actually in the trenches.

To the best of my knowledge, LaMothe has never actually published a game, successful or otherwise.

Compare that to people who have published games, and actually do real work in the industry. Abrash is an excellent example (he did some seminal work with id software on Quake). There are dozens of other writers who have truly done ground-breaking, brilliant research in computer graphics. I'd far sooner attribute "genius" to them than to LaMothe.

Don't get me wrong - LaMothe was, as I said, a good read... ten years ago. But even then, he was basically just regurgitating well-known knowledge from the industry and making it available for more-or-less public consumption. While that was an admirable task, he's far from a landmark name in the annals of graphics technology.

Recognizing that will come in time and with experience; as jpetrie said, you're still fairly fresh, so it's thoroughly understandable that you feel the way you do. (In the interests of full disclosure, I did too, once.) Just remember that everyone looks tall when you're three feet high [wink]


Quote:
Original post by VanillaSnake21
Messing around with an old 16-bit comp is not my intention. I do want to create a working game (even if it's as simple as a wireframe object moving on screen) without using any APIs at all.


That's a bit like saying you want to sail around the world without using a boat.

Or maybe like saying you want to start a colony on the moon, without using a space suit or a space ship.


The tools are there for a reason. Trying to live without them is a little bit silly.



Quote:
Original post by VanillaSnake21
I've already explored some of these areas, and I will mention again that I do have a working game in progress that is fully dependent on the DX9 API. My purpose here is not to improve my resume or anything like that; I personally think it will help me A LOT if I manage to go that low, in terms of understanding the system.


If you're interested in how the APIs work, great; that knowledge will help make you a better programmer. But the way to gain that knowledge is not by reinventing the wheel.

I'm not making a blanket statement here; sometimes, reinventing wheels is a good thing for educational purposes. When it comes to hardware, however, it is a bad idea.

Modern graphics hardware is very difficult to access directly, even if you do know how to write a driver. Learning that will educate you immensely, to be sure, but it's not productive. Simply studying it from the outside is sufficient, unless you really desperately want to write drivers for the rest of your life.

Maybe it'll make more sense to put it a slightly different way: what you want to learn is good. The way you want to learn it is foolish. There's a difference between understanding the theory of how something works, and being able to create that thing yourself from scratch. In this particular case, the theory is more than sufficient, and going much beyond that will have severely diminishing returns.

In other words, there are two sets of knowledge here: what you will learn by doing this all the way down to the bare metal, and what you will learn simply by studying how your APIs of choice work and how operating systems in general work. The first set of knowledge is a tiny bit bigger, but requires hundreds (if not thousands) of times as much work to acquire.


I hope that clears up our objections to your plans.




Quote:
Original post by VanillaSnake21
Useful for what? Why always do what's useful (even though I think it's extremely useful)? Don't you get that feeling sometimes that you want to dive into something that you have no idea about? That's why LaMothe is so good: he loves the most complex topics, like a mathematician loves rederiving calculus formulas. If you think that doing something unordinary, or something that no one attempts, is useless and to be avoided, then programming is simply your job, not your passion.


As I've just discussed, the goal is to help you learn things in an efficient way. I have no objection to the idea that learning the low-level workings will help you do better development at the high level. In fact, I fully agree, and I wish more programmers had the same perspective.

What I'm against here is doing something that is, by and large, an utter waste of your time. Take it from someone who has been through the same exact things, and speaks with the benefit of hindsight and bitter experience.


Attempting extraordinary things is admirable, sure. Most of the great developments in human history have been thanks to people who are willing to ask the hard questions. Sometimes, you're right, we do need to do things that everyone seems to think are a bad idea.

However, that doesn't mean that every time someone says "X is a bad idea" you should immediately go out and do X as hard as you can.

The difference between great, successful innovators, and burnt-out, disillusioned fools is not doing things that seem to be a bad idea. It's actually knowing when things are a bad idea, and that requires having some wisdom and experience.

You don't have those, yet; but keep it up and pay attention to those who can advise you, and you will eventually [smile]

Wow, you know way more about this than I do. I'd probably just get OpenGL if I were you, but I don't know. Something I do know is that it might be feasible to do it from QBasic, but I don't want to fool around any closer to the system level than that.

But I am writing a 3D engine for the TI-89 Titanium graphing calculator. (It actually has a 3D graph mode already, but I am writing my own because I can't figure out how to use it.)

I know how you feel. I remember when I was younger wanting to do all the pixel operations myself rather than using some outside library.

Of course this was in the dos days and there was access to video memory. My recommendation to you would be to stay away from the windows platform for this project.

What you could try to do, and what would be feasible, is look at the homebrew communities for various consoles. There are large communities out there for various platforms. You could do something for NES, SNES, GBA, PS2, or Dreamcast. Lots of different options.

This will be your best bet if you want to work directly with hardware. With these platforms you'd be able to flip bits and see results.

I think VanillaSnake21 is missing LaMothe's point. He doesn't mean that you should know every nook and cranny of the hardware interface to plot pixels. He's saying that a good 3D programmer knows how to transform vertices, texture map, sort polygons, cull, etc. It's about being familiar enough with those algorithms to be able to do them yourself, whether you actually need to from a practical standpoint or not. Learning to plot pixels by manipulating hardware is only somewhat useful, because how you've done it today could be completely irrelevant on tomorrow's new hardware.
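As a taste of the algorithm-level knowledge being described, here is the classic perspective divide that maps a camera-space vertex to screen coordinates. The view distance and screen size are arbitrary example values, and this is only one simple pinhole formulation:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Project a camera-space point onto a screen of the given size using a
// simple pinhole model: scale x and y by d/z (the perspective divide),
// then shift into pixel coordinates with the origin at screen center.
Vec2 projectToScreen(Vec3 v, float viewDistance,
                     float screenWidth, float screenHeight)
{
    float sx = v.x * viewDistance / v.z;
    float sy = v.y * viewDistance / v.z;
    return { sx + screenWidth * 0.5f,
             screenHeight * 0.5f - sy };  // flip y: screen y grows downward
}
```

This kind of routine carries over unchanged whether the pixels ultimately land in a DirectDraw surface, an OpenGL window, or bare video memory.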

First off, I want to address a confusion about Andre LaMothe: the quote in my first post has nothing to do with my current plans; all that quote did was push a thought into my mind. He obviously used it in another context. Second, some posts said that I'm trying to use his books as my foundation for an API-less renderer. That is not correct either; LaMothe doesn't even come close to explaining how to write directly to memory. In his book he uses DirectDraw to get the buffer and writes his custom T3DLIB library on top of it. I don't even know how he got into the discussion; I simply used him as a reference for where I came upon my idea.

Now for the replies...
Quote:
by jpetrie
Consequently, your next step is to learn how to write drivers. I would start at MSDN, and you'll need to obtain the DDK (when I got my copy, it was free but you had to pay for shipping; it was not downloadable, that may have changed). Once you get a bit more comfortable in that realm, you can turn to the daunting task of reverse-engineering your chosen graphics card.


I'm trying to get the WDK (Windows Driver Kit); is that the same as the DDK?

Quote:
by Monder
Though I do agree with previous posters that trying to write a graphics driver isn't going to help you much if your goal is to become a better game programmer...

I don't have any goals set for myself. If I simply wanted to learn and understand the theory, then I would do as other posters suggested and just get the books I need. I honestly don't mind it not improving my game programming skills. lol, maybe I will write drivers for the rest of my life (@ApochPiQ). This is not an educational process; I'm just bored of doing the easy work. I don't mind writing drivers, programming in kernel mode, using VGA, whatever. I just want to start. I think that answers most of the posts. I appreciate everyone's concern for my time, and I do admit that I am pretty much a beginner programmer compared to some of you guys, and I do value your suggestions, but it seems that now everyone is just trying to disprove me. Can you guys just point me in the right direction? I want to know: first, what exactly does a display driver have to do, and second, how can I access kernel mode? Thanks for all the replies :)

[EDIT]
Quote:
by jpetrie
You're going to need to write a driver for a graphics card, which will require the DDK APIs (which still technically violates your goal, but it's reasonable).

I obviously meant no graphics API. Of course I also have to use a compiler and the WinAPI to get a window open; I thought it was clear from my motives what I meant by that.

