
bmanruler

Member
  1. bmanruler

    Warnings - should I fix them?

    I know I am not a great programmer, so in general I assume the compiler writers know much more about C++ than I do. Turning on level 4 warnings is annoying at first, but it will save you a lot of headaches later on.
  2. bmanruler

    3.7.5 Calculator Program

    #include <iostream>
    #include <cmath>
    using namespace std;

    int main()
    {
        bool quit = true;
        float x = 0;
        float y = 0;
        // Menu
        while (!quit)
        {
            cout << "1) cos(x), 2) sin(x), 3) tan(x), 4) atan2(y, x), 5) sqrt(x), 6) x^y, 7) ln(x), 8) e^x, 9) |x|, 10) floor(x), 11) ceil(x), 12) Exit." << endl;
            int selection;
            cin >> selection;
            switch (selection)
            {
            case 1: cin >> x; cosf(x); break;
            case 2: cin >> x; sin(x); break;
            case 3: cin >> x; tan(x); break;
            case 4: cin >> y; cin >> x; atan2(y, x); break;
            case 5: cin >> x; sqrt(x); break;
            case 6: cin >> x; x^y; break;
            case 7: cin >> x; ln(x); break;
            case 8: cin >> x; e^x; break;
            case 9: cin >> x; |x|; break;
            case 10: cin >> x; floor(x); break;
            case 11: cin >> x; ceil(x); break;
            case 12: quit = false;
            }
        }
    }

    The errors are there because what you tried does not work that way in C++. Let's fix them one at a time.

    x^y; — this operator doesn't do what you think it does ('^' is bitwise XOR, and it isn't defined for floats at all). Try pow(x, y) instead.
    ln(x); — again, the function is log(x).
    e^x; — exp(x) does what you want.
    |x|; — fabs(x) (plain abs on a float may pick the int overload).

    Two more things to watch: quit starts out true, so while (!quit) never runs; initialize it to false and set it to true in case 12. And each case throws its result away; store it in a variable or print it.
  3. bmanruler

    Emails in C++

    The link worked fine for me and has exactly what you are looking for.
  4. bmanruler

    little trouble with simple c++ program

    using namespace std; // Don't do this

    It is preferable not to pull every name in the std namespace into scope. In this program it isn't a big deal, but in general one would write out only what is actually used:

    using std::vector;
    using std::string;
    using std::cout;
    using std::cin;
    // etc.
  5. bmanruler

    Thoughts on computer parts

    I have a 74 GB 10k RPM hard drive and one of the new 1 TB drives, and all my benchmarks tell me the large drive is faster. I've heard the same from other people as well. Plus it has 937 gigs of actual storage when formatted, which has to count for something.
  6. bmanruler

    The Industry Method

    Quote: Original post by Pirate_Lord
    No time to reply tonight, but someone had asked for these... Examples of Gameplay
    • 5-Minutes-Of-Gameplay (Tactical/Bounty Hunter, Seek & Destroy, Early Game)
    Summary: Someone who has been playing the game a while finds a pirate and blasts him with a superior ship.

    Quote: • 120-Minutes-Of-Gameplay (Strategic/President, Operations, Late-Mid Game)
    Summary: The leader of a large pirate organization logs on and checks on the game. One of his pet projects is in trouble, so he fights the horrible communication system in the game and gets help.

    Quote: • 30-Minutes-Of-Gameplay (Mixed/Mercenary, Trolling, Late-Early Game)
    Summary: A guy with lots of money fits a ship and runs an escort mission.

    Quote: • 60-Minutes-of-Gameplay (Strategic/President, Research, Late-Mid Game)
    Summary: An industrialist finds a new partner and starts a research project, tomorrow of course.

    First, let me say thank you for finally listening to the advice and posting how the game would play out. My summaries are very simple, but they capture the main feel I got from each of those posts. Honestly, nothing in there struck me as revolutionary, or even improved over Eve Online. In particular, your communication system seems horrific.
  7. Quote: Original post by Koobs
     For a thought experiment, consider an NPC in a world that can simulate farming fairly realistically. There are plants, seeds, soil nutrients, sun, rain, seasons, and weeds. What needs to happen to make the NPC (and maybe his buddies) start farming?

     If you create a finite list of things to do, e.g. plant seeds, water seeds, harvest crops, then it seems like a problem an AI could solve. Maybe a genetic algorithm where the fitness is how many dollars' worth of crops the farmer can bring to market at the end of the season. But to have them come up with creative, unimagined products? Now that would be darn hard.
  8. bmanruler

    What do you like most in a MMORPG?

    1. Socializing
    2. Exploring
    3. Killing
    4. Achieving

    Or at least that is how my Bartle Test turned out last time. If those don't ring a bell, you should look up Designing Virtual Worlds; it really opened my eyes to designing MMOs.
  9. bmanruler

    Problem with Neural Network

    This is an excerpt from the guide I used as the basis.

    Quote:
    Before we explain the training, let's define the following:

    λ (lambda), the learning rate: a real number constant, usually 0.2 for output layer neurons and 0.15 for hidden layer neurons.
    Δ (delta), the change: for example, Δx is the change in x. Note that Δx is a single value, not Δ multiplied by x.

    6.2 Output Layer Training

    Let z be the output of an output layer neuron as shown in section 4. Let y be the desired output for the same neuron, scaled to a value between 0 and 1. This is the ideal output which we would like to get when applying a given set of inputs. Then e (the error) will be:

    e = z * (1 - z) * (y - z)
    Δq = λ * e ... the change in q
    Δwi = Δq * xi ... the change in weight at input i of the neuron

    So you can see how I might have gotten a lambda and a one mixed up.
  10. bmanruler

    Problem with Neural Network

    Quote:Original post by Vorpy Your sigma function, which determines the output from each of your layers, can only be zero when its input is negative infinity. Thanks, I will rework this. Quote:Original post by Vorpy The statement "The output of the network never is zero, so the problem is in my backpropagation function." is clearly incorrect. No matter what training algorithm you use, you can't force a function that doesn't return zero to return zero. Sorry about the misdirection, it was pretty late when I posted. I guess I glanced over all the nice graphs when I was coding. When I change the Sigma function to something that can return 0 the network still evolves it to the maximum value the Sigma can return.
  11. I wrote up a feed-forward neural network with one input, one hidden and one output layer. The algorithms were based off of this website. The problem comes when I train a 2-input, 2-hidden, 1-output network on XOR. The output of the network never is zero, so the problem is in my backpropagation function.

    neural_net.hpp

    #ifndef BMP_LIBRARY_NEURAL_NET_HPP
    #define BMP_LIBRARY_NEURAL_NET_HPP

    #include <vector>
    #include <boost/random.hpp>

    using boost::uniform_01;
    using std::vector;

    namespace bmplib
    {

    // Typedefs for easy changes later on
    typedef double number;
    typedef std::vector< number > layer;
    typedef std::vector< layer > bundle;

    //-----------------------------------------------------------------------------
    // This class is a simple feed forward neural network that consists of 3 layers:
    // 1 input layer, 1 hidden layer and 1 output layer.
    // Each layer is fully interconnected.
    // The number of neurons is set at class creation time and cannot be changed.
    //-----------------------------------------------------------------------------
    class NeuralNet
    {
    private:
        // Number of neurons in each layer
        const int inputCount_, hiddenCount_, outputCount_;
        const number sigAleph_;

        // These are the weights between the previous layer and the mentioned layer.
        // It is a two dimensional array so you can look up via:
        //   i = index in previous layer
        //   j = index in mentioned layer
        // So weight[i][j] is the connection between base[i] and hidden[j]
        bundle hiddenWeights_, outputWeights_;

        layer inputData_, inputTheta_, inputOutput_,
              hiddenInput_, hiddenTheta_, hiddenOutput_,
              outputInput_, outputTheta_, outputData_,
              correctData_;

        // Use boost random functionality to create good random numbers
        boost::mt19937 randomEngine_;                              // engine that makes the random numbers
        boost::uniform_01< boost::mt19937, number > randomNumber_; // generates number type [0,1)

    public:
        //-----------------------------------------------------------------------------
        // Name: Constructor
        // Purpose: Constructs the NeuralNet
        // Pre: All int values are > 0
        // Post: Valid NeuralNet is created
        // Guarantee: Strong
        //-----------------------------------------------------------------------------
        NeuralNet( int InputNeurons, int HiddenNeurons, int OutputNeurons );

        //-----------------------------------------------------------------------------
        // Name: Destructor
        // Purpose: Destroys the object
        // Pre: None that you need concern yourself with
        // Post: Object is destroyed
        // Guarantee: No-throw
        //-----------------------------------------------------------------------------
        ~NeuralNet();

        //-----------------------------------------------------------------------------
        // Name: setInputData
        // Purpose: Takes a layer and assigns the values to the input.
        // Pre: The number of input neurons and the size of the passed in array must be the same.
        // Post: The passed data is assigned to the object.
        // Guarantee: Strong
        //-----------------------------------------------------------------------------
        void setInputData( const layer & );

        //-----------------------------------------------------------------------------
        // Name: getOutputData
        // Purpose: Returns the value of the output of the network.
        // Pre: Should not be called before network is updated.
        // Post: Returns values in output neuron layer
        // Guarantee: Strong
        //-----------------------------------------------------------------------------
        const layer & getOutputData() const;

        //-----------------------------------------------------------------------------
        // Name: setCorrectData
        // Purpose: Sets the 'correct' results for the given input.
        // Pre: Passed layer is same length as output neuron count for network.
        // Post: Passed data is assigned to the object.
        // Guarantee: Strong
        //-----------------------------------------------------------------------------
        void setCorrectData( const layer & );

        //-----------------------------------------------------------------------------
        // Name: PrintAndRun
        // Purpose: Diagnostic running
        // Pre: Input data has been set
        // Post: Runs all features of the net and prints results
        // Guarantee: Strong
        //-----------------------------------------------------------------------------
        void PrintAndRun();

        //-----------------------------------------------------------------------------
        // Name: Train
        // Purpose: Trains the network with training data.
        // Pre: None
        // Post: Network should be 'smarter'
        // Guarantee: Strong
        //-----------------------------------------------------------------------------
        void Train( const layer & InputData, const layer & CorrectData );

        //-----------------------------------------------------------------------------
        // Name: Run
        // Purpose: Uses input data and runs the network with no training.
        // Pre: Input data has been set.
        // Post: None
        // Guarantee: Strong
        //-----------------------------------------------------------------------------
        void Run();

        //-----------------------------------------------------------------------------
        // Name: Run
        // Purpose: Runs the network with no training.
        // Pre: Pass in valid input data.
        // Post: Returns output data.
        // Guarantee: Strong
        //-----------------------------------------------------------------------------
        const layer & Run( const layer & InputData );

    private:
        //-----------------------------------------------------------------------------
        // Name: Init
        // Purpose: Private utility function to initialize class.
        // Pre: None
        // Post: Class initialization is finished.
        // Guarantee: Strong
        //-----------------------------------------------------------------------------
        void Init();

        //-----------------------------------------------------------------------------
        // Name: Cleanup
        // Purpose: Private utility function that performs cleanup work for destructor.
        // Pre: Object is valid.
        // Post: Object is fully ready to be destroyed.
        // Guarantee: Strong
        //-----------------------------------------------------------------------------
        void Cleanup();

        //-----------------------------------------------------------------------------
        // Name: Sigma
        // Purpose: Takes a number and returns the sigma function results
        // Pre: None
        // Post: Returns ( 1.0 / ( 1.0 + exp( -X ) ) )
        // Guarantee: Strong
        //-----------------------------------------------------------------------------
        number Sigma( const number &X ) const;

        //-----------------------------------------------------------------------------
        // Name: FeedForward
        // Purpose: Private utility function that propagates all the values through the network.
        // Pre: Object has been initialized.
        // Post: All values match what the weights and input say they should be.
        // Guarantee: Strong
        //-----------------------------------------------------------------------------
        void FeedForward();

        //-----------------------------------------------------------------------------
        // Name: Backprop
        // Purpose: Does the actual 'learning' of the network; adjusts the weights to match training data.
        // Pre: Object has been initialized.
        // Post: Net now more closely fits training data.
        // Guarantee: Strong
        //-----------------------------------------------------------------------------
        void Backprop();
    };

    } // end namespace bmplib

    #endif // BMP_LIBRARY_NEURAL_NET_HPP

    neural_net.cpp

    #include "neural_net.hpp"

    #include <cmath>    // exp() in Sigma
    #include <iostream>
    using std::cout;
    using std::endl;

    #include <boost/random.hpp> // uniform_01
    using boost::uniform_01;

    namespace bmplib
    {

    //-----------------------------------------------------------------------------
    // Constructor
    // Guarantee: Strong
    //-----------------------------------------------------------------------------
    NeuralNet::NeuralNet( int InputNeurons, int HiddenNeurons, int OutputNeurons )
        : inputCount_( InputNeurons ),
          hiddenCount_( HiddenNeurons ),
          outputCount_( OutputNeurons ),
          sigAleph_( 1.0 ), // set the aleph value for Sigma function
          inputData_( inputCount_, 0.0 ),
          inputTheta_( inputCount_, 0.0 ),
          inputOutput_( inputCount_, 0.0 ),
          hiddenInput_( hiddenCount_, 0.0 ),
          hiddenTheta_( hiddenCount_, 0.0 ),
          hiddenOutput_( hiddenCount_, 0.0 ),
          outputInput_( outputCount_, 0.0 ),
          outputTheta_( outputCount_, 0.0 ),
          outputData_( outputCount_, 0.0 ),
          correctData_( outputCount_, 0.0 ),
          hiddenWeights_( inputCount_, layer( hiddenCount_, 0.0 ) ),
          outputWeights_( hiddenCount_, layer( outputCount_, 0.0 ) ),
          randomNumber_( randomEngine_ )
    {
        Init();
    }

    //-----------------------------------------------------------------------------
    // Destructor
    // Guarantee: No-throw
    //-----------------------------------------------------------------------------
    NeuralNet::~NeuralNet()
    {
        try
        {
            Cleanup();
        }
        catch( ... )
        {
            // swallow all exceptions
        }
    }

    //-----------------------------------------------------------------------------
    // Init()
    // Purpose: Initialization routine
    // Guarantee: Strong
    //-----------------------------------------------------------------------------
    void NeuralNet::Init()
    {
        // Set theta values
        for( int in = 0; in < inputCount_; in++ )
        {
            inputTheta_[in] = randomNumber_();
            for( int hid = 0; hid < hiddenCount_; hid++ )
            {
                hiddenWeights_[in][hid] = randomNumber_();
            }
        }
        for( int hid = 0; hid < hiddenCount_; hid++ )
        {
            hiddenTheta_[hid] = randomNumber_();
            for( int out = 0; out < outputCount_; out++ )
            {
                outputWeights_[hid][out] = randomNumber_();
            }
        }
        for( int out = 0; out < outputCount_; out++ )
        {
            outputTheta_[out] = randomNumber_();
        }
    }

    //-----------------------------------------------------------------------------
    // Cleanup()
    // Purpose: Cleans up for destructor
    // Guarantee: Strong
    //-----------------------------------------------------------------------------
    void NeuralNet::Cleanup()
    {
    }

    //-----------------------------------------------------------------------------
    // Returns the value of the Sigma function
    // Guarantee: Strong
    //-----------------------------------------------------------------------------
    number NeuralNet::Sigma( const number &X ) const
    {
        return static_cast< number >( 1.0 / ( 1.0 + exp( sigAleph_ * -X ) ) );
    }

    //-----------------------------------------------------------------------------
    // setInputData
    // Sets the data in the input layer of the network
    //-----------------------------------------------------------------------------
    void NeuralNet::setInputData( const layer &Data )
    {
        for( int i = 0; i < inputCount_; i++ )
        {
            inputData_[i] = Data[i];
        }
    }

    const layer & NeuralNet::getOutputData() const
    {
        return outputData_;
    }

    void NeuralNet::setCorrectData( const layer &Data )
    {
        for( int i = 0; i < outputCount_; i++ )
        {
            correctData_[i] = Data[i];
        }
    }

    void NeuralNet::PrintAndRun()
    {
        // Update network
        FeedForward();
        cout << "Output: ";
        for( int i = 0; i < outputCount_; i++ )
        {
            cout << outputData_[i] << " ";
        }
        cout << endl << "Correct: ";
        for( int i = 0; i < outputCount_; i++ )
        {
            cout << correctData_[i] << " ";
        }
        cout << endl;
        Backprop();
    }

    void NeuralNet::Train( const layer &InputData, const layer &CorrectData )
    {
        setInputData( InputData );
        setCorrectData( CorrectData );
        FeedForward();
        Backprop();
    }

    void NeuralNet::Run()
    {
        FeedForward();
    }

    const layer & NeuralNet::Run( const layer &InputData )
    {
        setInputData( InputData );
        FeedForward();
        return outputData_;
    }

    //-----------------------------------------------------------------------------
    //-----------------------------------------------------------------------------
    void NeuralNet::FeedForward()
    {
        // calculate output from input layer
        for( int in = 0; in < inputCount_; in++ )
        {
            inputOutput_[in] = Sigma( inputData_[in] );
        }
        // calculate input to hidden layer
        for( int hid = 0; hid < hiddenCount_; hid++ )
        {
            hiddenInput_[hid] = 0.0; // zero it out first
            for( int in = 0; in < inputCount_; in++ )
            {
                hiddenInput_[hid] += inputOutput_[in] * hiddenWeights_[in][hid];
            }
        }
        // calculate output from hidden layer
        for( int hid = 0; hid < hiddenCount_; hid++ )
        {
            hiddenOutput_[hid] = Sigma( hiddenTheta_[hid] + hiddenInput_[hid] );
        }
        // calculate input to output layer
        for( int out = 0; out < outputCount_; out++ )
        {
            outputInput_[out] = 0.0;
            for( int hid = 0; hid < hiddenCount_; hid++ )
            {
                outputInput_[out] += ( hiddenOutput_[hid] * outputWeights_[hid][out] );
            }
        }
        // calculate final output
        for( int out = 0; out < outputCount_; out++ )
        {
            outputData_[out] = Sigma( outputInput_[out] + outputTheta_[out] );
        }
    }

    //-----------------------------------------------------------------------------
    //-----------------------------------------------------------------------------
    void NeuralNet::Backprop()
    {
        const number Lambda1 = 0.20;
        const number Lambda2 = 0.15;

        // How much to change output theta
        layer DeltaThetaOut( outputCount_, 0.0 );
        // find how much to change output layer
        for( int out = 0; out < outputCount_; out++ )
        {
            DeltaThetaOut[out] = outputData_[out] * ( Lambda1 - outputData_[out] ) * ( correctData_[out] - outputData_[out] );
            DeltaThetaOut[out] *= Lambda1;
        }

        bundle DeltaWeightOut( hiddenCount_, layer( outputCount_, 0.0 ) );
        // find delta value for weights between hidden and output layers
        for( int out = 0; out < outputCount_; out++ )
        {
            for( int hid = 0; hid < hiddenCount_; hid++ )
            {
                DeltaWeightOut[hid][out] = hiddenOutput_[hid] * DeltaThetaOut[out];
            }
        }

        layer DeltaThetaHidden( hiddenCount_, 0.0 );
        // find delta values for hidden layer
        for( int hid = 0; hid < hiddenCount_; hid++ )
        {
            for( int out = 0; out < outputCount_; out++ )
            {
                DeltaThetaHidden[hid] += outputWeights_[hid][out] * ( DeltaThetaOut[out] / Lambda1 );
            }
        }
        // Delta value for hidden layer finally computed
        for( int hid = 0; hid < hiddenCount_; hid++ )
        {
            DeltaThetaHidden[hid] = hiddenOutput_[hid] * ( Lambda2 - hiddenOutput_[hid] ) * DeltaThetaHidden[hid];
            DeltaThetaHidden[hid] *= Lambda2;
        }

        // Find delta weights for input to hidden layer
        bundle DeltaWeightHidden( inputCount_, layer( hiddenCount_, 0.0 ) );
        for( int in = 0; in < inputCount_; in++ )
        {
            for( int hid = 0; hid < hiddenCount_; hid++ )
            {
                DeltaWeightHidden[in][hid] = DeltaThetaHidden[hid] * inputOutput_[in];
            }
        }

        // Find delta thetas for input layer
        layer DeltaThetaInput( inputCount_, 0.0 );
        for( int in = 0; in < inputCount_; in++ )
        {
            for( int hid = 0; hid < hiddenCount_; hid++ )
            {
                DeltaThetaInput[in] += hiddenWeights_[in][hid] * ( DeltaThetaHidden[hid] / Lambda2 );
            }
        }
        for( int in = 0; in < inputCount_; in++ )
        {
            DeltaThetaInput[in] = inputOutput_[in] * ( Lambda2 - inputOutput_[in] ) * DeltaThetaInput[in];
            DeltaThetaInput[in] *= Lambda2;
        }

        // Now all we have to do is apply all the changes :/
        // Start with output thetas
        for( int out = 0; out < outputCount_; out++ )
        {
            outputTheta_[out] += DeltaThetaOut[out];
            // Then do output weights
            for( int hid = 0; hid < hiddenCount_; hid++ )
            {
                outputWeights_[hid][out] += DeltaWeightOut[hid][out];
            }
        }
        // Now hidden layer thetas
        for( int hid = 0; hid < hiddenCount_; hid++ )
        {
            hiddenTheta_[hid] += DeltaThetaHidden[hid];
            // Then hidden weights
            for( int in = 0; in < inputCount_; in++ )
            {
                hiddenWeights_[in][hid] += DeltaWeightHidden[in][hid];
            }
        }
        // And finally adjust input thetas
        for( int in = 0; in < inputCount_; in++ )
        {
            inputTheta_[in] += DeltaThetaInput[in];
        }
    }

    } // end namespace bmplib

    main.cpp

    // main.cpp
    #include "neural_net.hpp"
    #include "bmp_timer.h"
    using namespace bmplib;

    #include <iostream>
    using std::cout;
    using std::endl;

    int main()
    {
        // Truth table for XOR:
        // 0,0 -> 0
        // 0,1 -> 1
        // 1,0 -> 1
        // 1,1 -> 0
        bundle XorIn( 4, layer( 2, 0.0 ) );
        XorIn[1][1] = 1.0;
        XorIn[2][0] = 1.0;
        XorIn[3][0] = 1.0;
        XorIn[3][1] = 1.0;

        bundle XorOut( 4, layer( 1, 0.0 ) );
        XorOut[1][0] = 1.0;
        XorOut[2][0] = 1.0;

        NeuralNet Net( 2, 2, 1 );
        for( int i = 0; i < 1000; i++ )
        {
            for( int j = 0; j < 4; j++ )
            {
                Net.setInputData( XorIn[j] );
                Net.setCorrectData( XorOut[j] );
                Net.PrintAndRun();
            }
            cout << endl;
        }
        return 0;
    }
  12. bmanruler

    MMO topic: Is MMO still appropriate?

    The term MMO seems a bit loaded sometimes, so I took after Richard Bartle and started calling them virtual worlds. It implies a much more open design; MMO brings certain stereotypes to mind.
  13. I tried modifying the data without recreating the texture and got the exception "The operation was aborted. You may not modify a resource that has been set on a device, or after it has been used within a tiling bracket." It does swap the buffers by reference now, though.
  14. This is my first actual project using C# and XNA. I would really appreciate it if people could point out any flaws or hacky code so I can learn. It is a simple cellular automaton that is initially randomly seeded, then updated every frame. The part I feel is most hackish is where I recreate the texture each frame; it would be great to know a better way. If something isn't clear from the code, ask away.

    *edit* I guess what I really want to know is: is there a better way to draw individual pixels to the screen than creating a texture, destroying it and creating a new one? I suppose you could use multiple textures and create a buffering system, but that would still leave you remaking a lot of textures. Also, if anyone knows a performance analyzer for C#, that would help too; I didn't find anything that looked good when I looked.

    #region Using Statements
    using System;
    using System.Collections.Generic;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Audio;
    using Microsoft.Xna.Framework.Content;
    using Microsoft.Xna.Framework.GamerServices;
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework.Input;
    using Microsoft.Xna.Framework.Net;
    using Microsoft.Xna.Framework.Storage;
    #endregion

    namespace CellAuto
    {
        /// <summary>
        /// This is the main type for your game
        /// </summary>
        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;

            int screenWidth = 800;
            int screenHeight = 600;
            int screenSize = 800 * 600;

            Texture2D CA;
            Color[] caTexture;
            Color[] caBuffer;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
                caTexture = new Color[screenSize];
                caBuffer = new Color[screenSize];
            }

            public void InitializeCA()
            {
                Random random = new Random();
                Color black = Color.Black;
                Color white = Color.White;

                // Seed the CA with random values
                for (int i = 0; i < screenSize; i++)
                {
                    int RandomBinary = random.Next(2); // generate a random number between 0 and 1
                    if (0 == RandomBinary) // if 0, then add black
                    {
                        caTexture[i] = black;
                    }
                    else // if 1, then add white
                    {
                        caTexture[i] = white;
                    }
                }
                // Create the texture
                CA = new Texture2D(graphics.GraphicsDevice, screenWidth, screenHeight, 1, TextureUsage.None, SurfaceFormat.Color);
                // Set the new data to the texture
                CA.SetData<Color>(caTexture);
            }

            public void GameOfLife()
            {
                // Update buffer using ruleset
                for (int y = 0; y < screenHeight; y++)
                {
                    for (int x = 0; x < screenWidth; x++)
                    {
                        caBuffer[y * screenWidth + x] = GameOfLifeRules(x, y);
                    }
                }
                // replace original with buffer
                for (int i = 0; i < screenSize; i++)
                {
                    caTexture[i] = caBuffer[i];
                }
            }

            private Color GameOfLifeRules(int X, int Y)
            {
                // Original game of life rules made the image converge too quickly, so this is used.
                int temp = LiveNeighborCount(X, Y);
                if (0 <= temp && temp <= 2) { return Color.Black; }
                if (3 == temp) { return Color.White; }
                if (5 == temp) { return Color.Black; }
                if (6 <= temp) { return Color.White; }
                return caTexture[Y * screenWidth + X];
            }

            private int LiveNeighborCount(int X, int Y)
            {
                int retVal = 0;
                // Test for bounds
                int left = 0, leftAbove = 0, above = 0, rightAbove = 0,
                    right = 0, rightBelow = 0, below = 0, leftBelow = 0;

                // Left
                if (X > 0)
                {
                    if (Color.White == caTexture[Y * screenWidth + X - 1]) { left = 1; }
                }
                // Left Above
                if (X > 0 && Y > 0)
                {
                    if (Color.White == caTexture[(Y - 1) * screenWidth + X - 1]) { leftAbove = 1; }
                }
                // Above
                if (Y > 0)
                {
                    if (Color.White == caTexture[(Y - 1) * screenWidth + X]) { above = 1; }
                }
                // Right Above
                if (X + 1 < screenWidth && Y > 0)
                {
                    if (Color.White == caTexture[(Y - 1) * screenWidth + X + 1]) { rightAbove = 1; }
                }
                // Right
                if (X + 1 < screenWidth)
                {
                    if (Color.White == caTexture[Y * screenWidth + X + 1]) { right = 1; }
                }
                // Right Below
                if (X + 1 < screenWidth && Y + 1 < screenHeight)
                {
                    if (Color.White == caTexture[(Y + 1) * screenWidth + X + 1]) { rightBelow = 1; }
                }
                // Below
                if (Y + 1 < screenHeight)
                {
                    if (Color.White == caTexture[(Y + 1) * screenWidth + X]) { below = 1; }
                }
                // Left Below
                if (X > 0 && Y + 1 < screenHeight)
                {
                    if (Color.White == caTexture[(Y + 1) * screenWidth + X - 1]) { leftBelow = 1; }
                }

                retVal = left + leftAbove + above + rightAbove + right + rightBelow + below + leftBelow;
                return retVal;
            }

            /// <summary>
            /// Allows the game to perform any initialization it needs to before starting to run.
            /// This is where it can query for any required services and load any non-graphic
            /// related content. Calling base.Initialize will enumerate through any components
            /// and initialize them as well.
            /// </summary>
            protected override void Initialize()
            {
                // TODO: Add your initialization logic here
                InitializeCA();
                base.Initialize();
            }

            /// <summary>
            /// LoadContent will be called once per game and is the place to load
            /// all of your content.
            /// </summary>
            protected override void LoadContent()
            {
                // Create a new SpriteBatch, which can be used to draw textures.
                spriteBatch = new SpriteBatch(GraphicsDevice);

                // TODO: use this.Content to load your game content here
            }

            /// <summary>
            /// UnloadContent will be called once per game and is the place to unload
            /// all content.
            /// </summary>
            protected override void UnloadContent()
            {
                CA.Dispose();
                // TODO: Unload any non ContentManager content here
            }

            /// <summary>
            /// Allows the game to run logic such as updating the world,
            /// checking for collisions, gathering input, and playing audio.
            /// </summary>
            /// <param name="gameTime">Provides a snapshot of timing values.</param>
            protected override void Update(GameTime gameTime)
            {
                // Allows the game to exit
                if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                    this.Exit();

                base.Update(gameTime);
            }

            /// <summary>
            /// This is called when the game should draw itself.
            /// </summary>
            /// <param name="gameTime">Provides a snapshot of timing values.</param>
            protected override void Draw(GameTime gameTime)
            {
                graphics.GraphicsDevice.Clear(Color.CornflowerBlue);

                // This updates the data in the CA so it actually evolves.
                // Uses the game of life rule.
                GameOfLife();

                // Hack to refresh the texture: discard the previous one and create
                // a new texture with the updated buffer
                CA.Dispose();
                CA = new Texture2D(graphics.GraphicsDevice, screenWidth, screenHeight, 1, TextureUsage.None, SurfaceFormat.Color);
                CA.SetData<Color>(caTexture);

                // Draw the sprites to screen
                spriteBatch.Begin();
                spriteBatch.Draw(CA, Vector2.Zero, Color.White);
                spriteBatch.End();

                base.Draw(gameTime);
            }
        }
    }

    [Edited by - bmanruler on January 28, 2008 12:03:11 PM]