
#### Archived

This topic is now archived and is closed to further replies.

# 03.01 - Q&A

## 120 posts in this topic

Ible... whenever you create a Bullet that is a Sprite, the Bullet also creates a Sprite. You can prove this by putting a message in the Sprite's constructor and then creating a Bullet. If you want different behaviour, I'm not sure what to do, though. Would it be possible to make it a virtual static variable? I don't know if that would work.

Edited by - randomGamer on June 13, 2001 4:48:39 PM
0
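The base-constructor behaviour described above can be demonstrated with a static counter. This is a minimal sketch, not the thread's actual basecode: the `Sprite`/`Bullet` members and the `constructed` counter are assumed names. (C++ has no "virtual static" variables; a static member is per-class and shared by every instance, including derived ones.)

```cpp
#include <cassert>

// Hypothetical hierarchy: Bullet derives from Sprite, so constructing
// a Bullet runs Sprite's constructor first.
struct Sprite {
    static int constructed;   // one counter shared by ALL Sprites,
                              // including those inside derived objects
    Sprite() { ++constructed; }
    virtual ~Sprite() {}
};
int Sprite::constructed = 0;

struct Bullet : Sprite {
    Bullet() {}               // Sprite() has already run by this point
};
```

Creating one `Sprite` and one `Bullet` bumps the counter twice, which is exactly the "message in the sprite's constructor" experiment the post suggests.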

##### Share on other sites
I have a problem

If I take the basecode and do nothing but change the following

Globals.h
..

//----------------------------------------------------------------------------
// Global Defines
//----------------------------------------------------------------------------

(from this) #define SCREEN_BITDEPTH 16
(to this ) #define SCREEN_BITDEPTH 8

If I compile and run, the program runs fine, but on exit I get a DDHELP popup stating that a general page fault has occurred in a module at some address.

And if I change it back to #define SCREEN_BITDEPTH 16, recompile, and run, the program locks up and I have to use CTRL-ALT-DEL to kill it. If I then change back to #define SCREEN_BITDEPTH 8, I get errors when I try to run. I have to completely reboot to clear whatever flag is set.

A similar situation occurs when I call some functions incorrectly the first time; even after I correct the call I still get a runtime error, and I have to reboot.

I consulted the DX7 help and it states :
When Ddraw.dll is first loaded by a DirectDraw application, it in turn loads Ddhelp.exe, which enables DirectDraw to restore the proper screen resolution and perform other cleanup tasks in the event that an application fails to do so. KillHelp is the only way to remove Ddraw.dll and Ddhelp.exe from memory without actually restarting the system.
DX8 help does not even list it.

Any ideas why this is happening and how to fix it?

//UPDATE

Well, after thinking it through and taking a guess, I decided to update my not-so-old ASUS drivers to NVidia drivers. When I did, all the freaky DX8 errors went away... go figure.

Edited by - Black_Flag on June 24, 2001 10:24:18 PM
0

##### Share on other sites
Quick question: how come VC++ IntelliSense doesn't work half the time when I use DirectX functions? It'd be nice not to have to make so many trips to the SDK. IntelliSense doesn't work for the struct G either, even though I think it should.

Thx Teej!

# Andrew

/*
Friends can see your Privates, but a Friend of a Friend cant.
*/

Edited by - tripnotize on June 18, 2001 9:09:37 PM
0

##### Share on other sites
This isn't necessarily a problem, but I have a question about improving efficiency. I'm writing a program that runs with 16-bit graphics, but my Adobe Photoshop only saves .bmp files in 24-bit. This takes up a lot of memory and, I assume, also makes the program run slower than it could if the .bmps were 16-bit. Does anyone know of any programs that can convert 24-bit to 16-bit?

Vic
0

##### Share on other sites
There should almost definitely be an option in Photoshop to choose the bit depth of your .bmp. If you can't find it, or if you're sure it doesn't exist, try Jasc Paint Shop Pro, which has a 60-day free trial. It can convert almost any format.

http://www.jasc.com/

I will try a version of Photoshop, and if I can figure it out I'll edit this post.

# Andrew

/*
Friends can see your Privates, but a Friend of a Friend cant.
*/
0

##### Share on other sites
I searched through Adobe and its manual. Still nothing. I also downloaded the Jasc paint program. That also only has options for saving in 8-bit or 24-bit, plus the lower 4-bit and 2-bit depths. Unless I've missed something, I don't think either of those programs will do the job. Anyone else have any ideas/suggestions?

--Vic--
0

##### Share on other sites
It is actually more efficient/faster to store bitmaps with a colour depth of 24 bits rather than 16 bits. This results from the architecture of the Intel microprocessor.
0

##### Share on other sites
I'm not sure if this is true or not. To my knowledge, Pentium chips work with 4-byte (32-bit) values, the size of an int, so I thought 32-bit would be the fastest. In fact, I read in a game programming book that some graphics cards don't like splitting the regular 32 bits up for 24-bit color, so they tack on an extra 8 bits of alpha and only support 32-bit mode. I could be wrong, but I thought 16 bits would be an easier split to make (being half) than 24 bits (three-fourths).

--Vic--

Edited by - Roof Top Pew Wee on June 21, 2001 8:23:05 AM
0

##### Share on other sites
Roof Top:

You are correct; the computer uses 32 bits to move data around. But when you are using 24-bit colour, the additional 8 bits are not used (they would be used for alpha information if you were using 32-bit colour).

I've forgotten what the original question was, but many (most?) paint programs don't support 16-bit colour. You either have to use 8-bit (with a palette) or 24/32-bit colour.
0

##### Share on other sites
Ehh... I guess that problem isn't that important. But I do have another question which is totally puzzling me. I'm writing a program which is a top-down map maker, and when I run it on different machines I notice one of two things. On some machines the program runs fine, except that bitmaps with color keys (so the whole surface isn't drawn with a square around the image) have lines running through them that aren't copied, leaving transparent stripes. This happens on my machine and on one of my friends'. On other machines the bitmap draws without problems, but the program runs extremely slowly. I don't think this has much to do with hardware, considering it ran slow on my friend's machine: his is 1.3 GHz, mine is only 700 MHz; he has a GeForce, I have a TNT2. His machine beats mine in every way, but I still don't see how it runs slow. The only thing I can think of is that I have Win 98 and he has 2000. Any ideas?

--Vic--
0

##### Share on other sites
My \$0.02 on color:

The ideal color depth for a bitmap is 24-bit, or 8 bits per color component. The reason that you're having such a hard time converting your bitmap to 16-bit is because people don't usually convert the bitmap itself, they sample-down the colors of the pixel as needed, depending on the proper number of bits allocated for each color component by a particular video card's specific 16-bit color implementation (555,565). If you think about it, it makes sense that a bitmap should either be in 24-bit color so that it retains the most color information, or in 8-bit color because of the nature of palettes.

To convert a 24-bit color to a 16-bit color, you need to scale-down each color component. Assuming you want 565 color, here's a possible conversion formula:

565red   = (red   / 255.0) * 31;
565green = (green / 255.0) * 63;
565blue  = (blue  / 255.0) * 31;

What these are doing is saying, "Whatever the intensity of a color component was on a scale of 0 to 255, I want the same relative intensity for a scale of 0 to 31 (or 63 for 6 bits)".

Can you see a massive optimization here? Let's try some mental exploration...

Take a number N, divide it by two, and then multiply it by 2:

N/2 * 2 = N, or (N/2) * (2/1) = 2N/2 = N

If you take N, divide it by five and then multiply it by 10, you get:

N/5 * 10 = 2N, or (N/5) * (10/1) = 10N/5 = 2N

This should make sense intuitively; take a pie, cut it in four pieces and give someone two pieces -- that's the same as giving them half of the pie. Since we hate more mathematical operations than are necessary, we can devise a way to reduce the number of operations needed for our problem by starting with simple examples and then generalizing.

In this case, it's helpful to use the numbers 256, 32 and 64 instead of 255, 31 and 63, as they divide evenly. So, here's the formula from above, slightly adjusted:

(N / 256) * 32, or (N/256) * (32/1) = 32N/256 = N/8

If you mentally plug example numbers for N into this formula, you'll see that numbers from 0-255 do in fact map properly to numbers from 0-31. Note that integer division drops the remainder, which is what we want anyhow.

Finally, it wouldn't hurt to notice that N/8 is the same as N >> 3 (shift right by 3 bits).

So, here's the final set of calculations:

565red   = red   >> 3;
565green = green >> 2;
565blue  = blue  >> 3;

What the heck; may as well make it a macro:

#define RGB888TO565(r,g,b) ((((r) >> 3) << 11) | (((g) >> 2) << 5) | ((b) >> 3))

I'd better leave it at that -- this is supposed to be a reply, not an article

Teej

Edited by - Teej on June 28, 2001 3:40:12 PM
0
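The down-sampling steps above can be sketched as a small function. This is a minimal sketch of the 888-to-565 packing Teej describes; the function name is an assumption, not part of the thread's basecode.

```cpp
#include <cassert>
#include <cstdint>

// Drop the low bits of each 8-bit component with a right shift
// (8->5 bits for red/blue, 8->6 bits for green), then pack the
// three fields into one 16-bit word: RRRRRGGGGGGBBBBB.
static inline std::uint16_t Rgb888To565(std::uint8_t r,
                                        std::uint8_t g,
                                        std::uint8_t b)
{
    return static_cast<std::uint16_t>(
        ((r >> 3) << 11) |   // 5 bits of red in the top of the word
        ((g >> 2) << 5)  |   // 6 bits of green in the middle
        (b >> 3));           // 5 bits of blue at the bottom
}
```

For example, pure white (255, 255, 255) packs to 0xFFFF and pure red (255, 0, 0) to 0xF800, matching the 565 field layout.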

##### Share on other sites
Aha, I see. However, this still leaves the Win 2000 question unanswered, and since I'm putting up another post anyway, I might as well ask a question that Teej's post brought up: how do I go about accessing individual pixels on a bitmap? I can see this as the start of doing transparencies (a red... ghost, say, would just add to the red value of each pixel), so I'm interested in finding this out.

--Vic--
0

##### Share on other sites
I'm not quite sure what is going on here... but it's driving me crazy.

I'm basically trying to initialize DD7, but I'm having some problems. I make a call to this function...

BOOL InitDD(HWND hWnd, LPDIRECTDRAW7 lpDD, LPDIRECTDRAWSURFACE7 lpPrimary, LPDIRECTDRAWSURFACE7 lpBackBuffer)

...and it executes fine, but the lpDD variable (and, I'm assuming, the two surface variables) I feed it as an argument doesn't retain its initialization. So when I go to call my clipper initialization function, it fails on the call to lpDD->CreateClipper().

I thought LPDIRECTDRAW7 was just IDirectDraw7*... so if I'm passing an address, and not passing by value, why doesn't it keep the effects of the function?

Hah Coe
0

##### Share on other sites
Hah Coe: I think I know what the problem is, so let's see if I can explain the situation...

In the beginning, there was nothing but the variable... these are passed to functions by value, which means that the receiving function can't modify them. This made a lot of people unhappy, since they (1) could only return a single value from a function, and (2) had a lot of overhead when passing large parameters by value.

Enter the pointer, which allows addresses to be passed around, with the memory they point to accessible by anyone who has the pointer. Now four bytes (the size of a pointer) is all you need to pass to and from functions in order to access data. This only works because an address is a value -- it gets passed around just like variables do. As such, the address itself cannot be modified (the function only gets a copy), but the memory it points to can.

Whenever you catch yourself passing a pointer to a function, you have to remember that its value -- a memory address -- is set in stone. You therefore can't allocate memory inside the function and assign it to this pointer, because if you think about it, the OS returns the address of any memory it allocates for you, and that new address only lands in the function's copy.

The solution? Pass the pointer's address -- a pointer to a pointer. Now the value of the pointer can be changed, because you have the address where the pointer's value lives. Here's what it might look like in your case:

IDirectDraw7 **lplpDD;

or

LPDIRECTDRAW7 *lplpDD;

(both are the same)

In the function that receives this variable, the pointer itself is used like this:

(*lplpDD)->DoSomething()

The asterisk dereferences the pointer-to-pointer, bringing it back down to a plain pointer; the parentheses are required because -> binds more tightly than *.

So, the rule of thumb is: Anything that you want to modify in another function must be passed by reference, i.e. you need to pass the address of the value you want to change. And remember, a pointer is just an address variable.

Hope that helps...

Teej

0
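Teej's out-parameter idiom can be sketched without DirectDraw at all. This is a minimal stand-in: `FakeDevice` and `Create` are hypothetical names, and only the pointer-to-pointer passing pattern is the point.

```cpp
#include <cassert>

// Stand-in for an interface that a creation function hands back.
struct FakeDevice {
    int Ping() { return 42; }
};

// The creation function receives the ADDRESS of the caller's pointer,
// so writing through lplpDev actually changes the caller's variable.
bool Create(FakeDevice **lplpDev)
{
    *lplpDev = new FakeDevice;
    return true;
}
```

Usage mirrors the DirectDraw pattern: declare `FakeDevice *dev = nullptr;`, call `Create(&dev);`, then call methods through `dev->Ping()`. Inside a function that keeps the `**` form, the call must be written `(*lplpDev)->Ping()` -- the parentheses matter because `->` binds before `*`.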

##### Share on other sites
Perfect... I had a feeling that was my problem, but I didn't quite understand how pointers worked in that context. Thanks for clearing it up.

Hah Coe
0

##### Share on other sites

My function now looks like this....

BOOL InitDD(HWND hWnd, LPDIRECTDRAW7* lpDD, LPDIRECTDRAWSURFACE7* lpPrimary, LPDIRECTDRAWSURFACE7* lpBackBuffer)

And my calls now look like this...

*lpDD->Whatever();

And the errors I receive look like this...

error C2227: left of '->SetCooperativeLevel' must point to class/struct/union

Dunno.

Hah Coe
0

##### Share on other sites
I still have no clue why the above didn't work, but I did manage to fix my problem: I passed the arguments by reference instead of using the pointer method.

Hah Coe
0

##### Share on other sites
Hah Coe, wrap the dereference in parentheses: (*lpDD)->SetCooperativeLevel(...). Without them, -> is applied before the *.

-Hyatus
"kopperia no hitsugi"
0

##### Share on other sites
Hey, I was wondering if anyone knows how to seed random() with GetTickCount(). Get back to me. asylum101
!=o)
0

##### Share on other sites
Asylum101:

srand( (unsigned)timeGetTime() );

Just call that once.
Then your random numbers should, I believe, work with
that function.

-Hyatus
"say it aint so-o-o-o-ooo"
0
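The seed-once advice above can be sketched in portable C++. This is a minimal sketch that uses `std::time` instead of the Windows-only `timeGetTime()`/`GetTickCount()`; the `RandomInRange` helper name is an assumption. The key point is that `srand` should run exactly once at startup -- reseeding every frame restarts the sequence, which is one way the stars could keep coming out identical.

```cpp
#include <cassert>
#include <cstdlib>
#include <ctime>

// Inclusive-range helper built on rand(); assumes hi >= lo.
int RandomInRange(int lo, int hi)
{
    return lo + std::rand() % (hi - lo + 1);
}
```

At startup, call `std::srand(static_cast<unsigned>(std::time(nullptr)));` once, then draw values with `RandomInRange` as needed. Reseeding with the same value replays the same sequence, which is also handy for reproducing bugs.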

##### Share on other sites
Hey Hyatus, thanks for the tip, but it didn't work.
I tried srand((unsigned)time(NULL)) too, and that didn't work either.
I'm using it to seed the random star init in the 3D star function from the pixel manipulation topic, because I noticed the stars are always the same. When I tried what you said, it compiled and linked fine, but when I ran the program it went OK for about 20 frames and then exited. And when I used time(NULL), it turned the stars into vertical lines on my screen. If anyone could explain why seeding rand in the 3D star function did that, I'd love to understand it.
0

##### Share on other sites
O.K., so I have a few questions...

1. What is "#ifdef"/"#endif" and when must I use them?
1.5 What about the other "hieroglyphs" in the headers of Windows-based programs?
I seem to understand something, but I don't have a solid grasp.

2. What is a "macro"?
3. How do I make the equivalent of Pascal's "Units" in C++? I mean, how do I put a lot of declarations and definitions for functions in a .cpp file with no "main" (like "Units" in Pascal) and still make
the compiler happy (no "missing .... _main" error)?
4. How do I buy a good Windows programming book for beginners on the Internet if I don't have a credit card? (Don't tell me to get a credit card... I really don't have that much money; in fact I have very little.)

0

##### Share on other sites
Hi,

I had to install the DX8 DLL to play some game, and now I get a "ddinit failed: -7" error when I run the Basecode...

Anything I can do?
I know DX can't be uninstalled...
0

##### Share on other sites

I'm posting here again because I still have some little problems with C.
I hope someone will reply to my last post (the one with the questions); I'm especially interested in the first topic.
And this tutorial is great...
Thanks.

0

##### Share on other sites
quote:

1. What is "#ifdef"/"#endif" and when must I use them?
1.5 What about the other "hieroglyphs" in the headers of Windows-based programs?

These tokens (beginning with #) are "preprocessor directives". They tell the preprocessor what to do with the source text before the program is compiled.

For example:
#define TEST

#ifdef TEST
  :
  :
#endif

All the statements between #ifdef TEST and #endif will be compiled. However, if TEST had not been defined, then those statements would not be compiled.

The most common use of these kinds of statements is to prevent including header files more than once.

For example, enclose a header file with this code:
#ifndef _WINMAIN_H
#define _WINMAIN_H
  :
  : (header code in here)
  :
#endif

In this example, the header will only be read by the compiler once (the first time, while _WINMAIN_H is not yet defined). The next time the compiler encounters the header file, it will see that _WINMAIN_H has already been defined, and the header's contents will be skipped.

Hope this makes sense.

Edited by - Weatherman on July 22, 2001 12:19:17 PM
0
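The guard idiom above can be sketched in a single file by pasting the "header" text twice. This is a minimal sketch; the guard symbol `FAKE_HEADER_H` and the constant are assumptions chosen just to make the skip visible.

```cpp
#include <cassert>

// First "inclusion": FAKE_HEADER_H is not yet defined, so the
// #ifndef test passes and the body is compiled.
#ifndef FAKE_HEADER_H
#define FAKE_HEADER_H
const int kFromHeader = 7;   // "header code in here"
#endif

// Second "inclusion": FAKE_HEADER_H is now defined, so the whole
// block is skipped and the duplicate definition of kFromHeader
// never reaches the compiler -- no redefinition error.
#ifndef FAKE_HEADER_H
#define FAKE_HEADER_H
const int kFromHeader = 7;
#endif
```

This is exactly what happens when two .cpp files (or nested headers) both `#include` a guarded header: only the first expansion per translation unit survives preprocessing.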