
sooner123

Member Since 22 Mar 2010
Offline Last Active Jul 18 2016 05:13 AM

#5201015 What game engine/game library could be the best one for Android development?

Posted by on 31 December 2014 - 11:45 AM

You could at least have given a link to the specific video, or its name.

edit: Well it was easy to find.

I'm sorry, but that video was not useful at all. Just you repeating yourself over and over, giving really obvious advice, when what most people need is concrete comparisons.

#5100097 Data Structures & Algorithms for Blackjack Simulator

Posted by on 10 October 2013 - 01:57 AM

I'm trying to create a Blackjack simulator so I can test out (and maybe genetically evolve) some various play-charts, betting-strategies, and card-counting-charts.

Before I dive into it, I wanted to outline the structure in detail, since the game has caveats, like splitting, that make it less trivial to code than a game like poker.

My basic design so far is this:

Data Structures:

-deck: composed of a vector or list of cards

-card: composed of a value (like 10 for 10's, Jacks, Queens, etc.), a name and a suit

-hand: composed of a bet, a list of cards, and two hands (potential left split and right split)

-player: composed of a balance and a hand
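For what it's worth, a minimal sketch of those structures in C++ (all names are illustrative, and Aces and splits are simplified):

```cpp
#include <cassert>
#include <string>
#include <vector>

// card: a blackjack value (10 for tens and face cards), plus a name and suit
struct Card {
    int value;
    std::string name;
    std::string suit;
};

// hand: a bet, a list of cards, and two potential split hands
struct Hand {
    int bet = 0;
    std::vector<Card> cards;
    Hand* leftSplit = nullptr;   // only set once the hand is split
    Hand* rightSplit = nullptr;
};

// deck: a vector of cards
struct Deck {
    std::vector<Card> cards;
};

// player: a balance and a hand
struct Player {
    int balance = 0;
    Hand hand;
};
```

The split hands live inside Hand rather than Player so a split hand can itself be re-split.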

Algorithms:

-shuffling: I'm going to shuffle by drawing cards at random out of the deck and appending them, in order, to a new deck
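As a sketch, that draw-into-a-new-deck shuffle looks like this (cards reduced to ints for brevity). Note it's O(n^2) because of the erase; std::shuffle does a Fisher-Yates in O(n) with the same uniform result:

```cpp
#include <algorithm>
#include <cassert>
#include <random>
#include <vector>

// Shuffle by repeatedly drawing a random card out of the old deck
// and appending it to a new deck.
std::vector<int> shuffleByDrawing(std::vector<int> deck, std::mt19937& rng) {
    std::vector<int> shuffled;
    shuffled.reserve(deck.size());
    while (!deck.empty()) {
        std::uniform_int_distribution<std::size_t> pick(0, deck.size() - 1);
        std::size_t i = pick(rng);
        shuffled.push_back(deck[i]);   // draw the card...
        deck.erase(deck.begin() + i);  // ...and remove it from the old deck
    }
    return shuffled;
}
```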

Questions:

1. Should I use a vector or a list for the cards? I'm thinking:

vector: O(1) for lookup, O(n) for removal (on average half the elements shift down)

list: O(n) for lookup, O(1) for removal (once the node is found)

2. How do I deal with Aces? Is the best way just to have conditional code that checks whether a card is an Ace whenever cards are dealt, for soft 17s and the like?
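One common approach, sketched here with cards reduced to their blackjack values and Aces stored as 1: count every Ace low first, then promote one Ace to 11 if that doesn't bust. The hand is "soft" exactly when a promotion is in effect, which is the check needed for dealer soft-17 rules:

```cpp
#include <cassert>
#include <vector>

// Returns the best total for a hand; *soft is set if an Ace counts as 11.
int handValue(const std::vector<int>& cards, bool* soft = nullptr) {
    int total = 0;
    int aces = 0;
    for (int c : cards) {
        total += c;             // Aces enter as 1
        if (c == 1) aces++;
    }
    bool isSoft = (aces > 0 && total + 10 <= 21);
    if (isSoft) total += 10;    // promote one Ace from 1 to 11
    if (soft) *soft = isSoft;
    return total;
}
```

At most one Ace can ever count as 11 (two would already be 22), so a single promotion suffices.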

#5012327 Calculating prime numbers with memoization

Posted by on 18 December 2012 - 10:35 PM

My code is working, but is there a way I could get rid of the need for the 'isPrime' variable. It seems redundant.

I'd like to do this without a major restructuring of the code if possible.

[source lang="cpp"]
#include <time.h>
#include <math.h>
#include <iostream>
#include <cstdlib>

int main(int argc, char* argv[])
{
    clock_t startTime = clock();
    int maxToTest = atoi(argv[1]);
    int* primes = new int[maxToTest];
    primes[0] = 2;
    int maxPrimeIndex = 0;
    for (int i = 3; i <= maxToTest; i++)
    {
        int isPrime = 1;
        for (int j = 0; (isPrime) && (primes[j] <= sqrt(i)); j++)
            if (!(i % primes[j])) isPrime = 0;
        if (isPrime) primes[++maxPrimeIndex] = i;
    }
    std::cout << maxPrimeIndex + 1 << " primes <= " << maxToTest << " found in "
              << (clock() - startTime) / CLOCKS_PER_SEC << " seconds";
}
[/source]
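One way to lose the flag without restructuring much is to move the trial division into its own small function and rely on early return (a sketch using std::vector instead of the raw array; the timing code is omitted):

```cpp
#include <cassert>
#include <vector>

// Early return replaces the isPrime variable: n is composite as soon as
// any stored prime up to sqrt(n) divides it.
bool isPrime(int n, const std::vector<int>& primes) {
    for (int p : primes) {
        if (p * p > n) break;        // no need for sqrt()
        if (n % p == 0) return false;
    }
    return true;
}

std::vector<int> primesUpTo(int maxToTest) {
    std::vector<int> primes;
    if (maxToTest >= 2) primes.push_back(2);
    for (int i = 3; i <= maxToTest; i++)
        if (isPrime(i, primes)) primes.push_back(i);
    return primes;
}
```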

#5011512 Predator-Prey simulation

Posted by on 16 December 2012 - 10:10 PM

Wow. A lot of that syntax is totally different from anything I ever knew would work. I had to do some googling to understand it all. I wasn't even aware that assignment returned a reference to the lefthand operand. That's awesome. Love the way that single line of code reassigns the cel reference for that Animal and simultaneously adds the Animal to that cel in the grid. Very clever. I have tons to learn.

I really appreciate the help from both of you. Thanks a lot.

#5007906 Any faster way to calculate prime number?

Posted by on 06 December 2012 - 04:32 PM

Whether 1 is prime or not is a matter of convention, and the prevailing convention is that it is not a prime. There are multiple possible definitions of what "prime" means, and they often tell you explicitly that 1 is not a prime. See the beginning of the Wikipedia page for an example.

This isn't really correct. 1 isn't a prime because it doesn't match the true definition of prime numbers. There is no convention about it.

The definition you've probably heard (something along the lines of "a positive integer divisible only by 1 and itself, excluding 1") is a layman's definition that simply happens to coincide with the real definition.

The real definition is that the primes are the set of numbers into which every positive integer can be factored uniquely as a product of powers; that uniqueness is the fundamental theorem of arithmetic. 1 clearly violates this: to factorize, say, 12, you could have 1 * 2^2 * 3^1, or 1^2 * 2^2 * 3^1, or 1^3 * 2^2 * 3^1, etc. There are infinitely many ways to factorize a number if you accept 1 as a prime, and primes are defined precisely to make factorization unique.

#5007855 Fastest Growing Hierarchy

Posted by on 06 December 2012 - 02:05 PM

UPDATE: I figured it out. If you're interested, my problem and the solution are below.

I'm trying to write a recursive function that computes the fast-growing hierarchy (a family of really fast-growing functions).

Here is the definition:

Base function: f_0(n) = n + 1

Next function: f_{a+1}(n) = f_a^n(n)

Functional powers: f^{k+1}(n) = f(f^k(n)), where f^0(n) = n

```
#include <iostream>

using namespace std;

int hf(int, int);

// f(x, p, b) computes f_b^p(x): the function f_b applied p times to x
int f(int x, int power, int functionalBase)
{
    if (power == 1) return hf(x, functionalBase);
    else return f(f(x, power - 1, functionalBase), 1, functionalBase);
}

// hf(x, b) computes f_b(x) in the hierarchy
int hf(int x, int functionalBase)
{
    if (functionalBase == 0) return x + 1;
    else return f(x, x, functionalBase - 1);
}

int main()
{
    cout << f(2, 2, 2);  // f_2^2(2) = 2048
}
```

#5006368 Problem with destructors in C++

Posted by on 02 December 2012 - 02:16 PM

Not harsh at all. I appreciate the response. And though I agree that my code is pretty much leaky garbage, I'd like to fine tune this and turn it into something that isn't garbage before I start using libraries to do this stuff for me.

And thank you for pointing out the error in the logic that was causing the crash.

One last mistake on my part: I was deleting the Node instead of the object the Node pointed to.

#5006240 Contain instance of Derived Class in instance of Base Class

Posted by on 02 December 2012 - 02:57 AM

This is the current state. So far it's passing all my tests. Thanks for the help.

```
#include <iostream>
using namespace std;

class BaseClass;

struct Node
{
    Node* prev;
    Node* next;
    BaseClass* data;
};

class BaseClass
{
public:
    int x, y;
    //Node<BaseClass*>* list;
    Node* list;
    void AddChild(BaseClass* obj)
    {
        Node* elem = new Node;
        elem->prev = NULL;
        elem->data = obj;
        if (list == NULL)
        {
            elem->next = NULL;
            list = elem;
        }
        else
        {
            elem->next = list;
            list->prev = elem;
            list = elem;
        }
    }
    void RemoveChild(BaseClass* obj)
    {
        for (Node* elem = list; elem != NULL; elem = elem->next)
        {
            if (elem->data == obj)
            {
                //if node to delete is head of list
                if (elem->prev == NULL)
                {
                    list = elem->next;
                    if (list != NULL) list->prev = NULL;
                }
                //if node to delete is tail of list
                else if (elem->next == NULL)
                {
                    elem->prev->next = NULL;
                }
                //if node to delete is not start or end
                else
                {
                    elem->prev->next = elem->next;
                    elem->next->prev = elem->prev;
                }
                delete elem;
                return;
            }
        }
    }
    void ListChildren(int tabDepth = 0)
    {
        Node* elem = list;
        while (elem != NULL)
        {
            for (int i = 0; i < tabDepth; i++) cout << "\t";
            cout << "child: " << elem->data->x << endl;
            if (elem->data->list != NULL)
            {
                elem->data->ListChildren(tabDepth + 1);
            }
            elem = elem->next;
        }
    }
    BaseClass()
    {
        list = NULL;
    }
};

class DerivedClass : public BaseClass
{
public:
    int memberValue;
    DerivedClass(int a, int b)
    {
        x = a;
        y = b;
    }
};

int main()
{
    BaseClass* root = new BaseClass;
    BaseClass* child1 = new DerivedClass(1, 2);
    BaseClass* child11 = new DerivedClass(5, 6);
    BaseClass* child111 = new DerivedClass(13, 14);
    BaseClass* child112 = new DerivedClass(15, 16);
    BaseClass* child113 = new DerivedClass(17, 18);
    BaseClass* child12 = new DerivedClass(7, 8);
    BaseClass* child2 = new DerivedClass(3, 4);
    BaseClass* child21 = new DerivedClass(9, 10);
    BaseClass* child22 = new DerivedClass(11, 12);
    //build the tree the names imply
    root->AddChild(child1);
    root->AddChild(child2);
    child1->AddChild(child11);
    child1->AddChild(child12);
    child11->AddChild(child111);
    child11->AddChild(child112);
    child11->AddChild(child113);
    child2->AddChild(child21);
    child2->AddChild(child22);
    root->ListChildren();
}
```

#4966872 md5+salt

Posted by on 06 August 2012 - 07:05 PM

I think the issue may be that he doesn't really understand what hashing or salting is. If someone here could get past the language barrier (or knows a good website that describes the concepts in really simple English), that might help him out.

#4889052 School? I would not call it that way.

Posted by on 30 November 2011 - 07:31 AM

This looks like a lot of disagreeing for the sake of disagreeing. If you want to point out that the kid's whining is useless, then point that out. If you want to point out that most CS instructors are pretty bad, then point that out.

But don't take the side of some person you've never met on a range of issues they were wrong about just to prove those points. It makes you look like the type of person who will take an incorrect stance for the sake of argument, and it utterly destroys your credibility.

Java is far and away the most common first programming language in both high schools and universities. And there is good reason to teach it in the 10th grade: the AP Computer Science test is largely Java-oriented, so your students will have a head start in that respect.

Valid.

Big numbers are a great justification for using doubles. It takes a pretty solid understanding of binary representations to fully comprehend when doubles are required.

Invalid. Understanding the binary behind primitive data types and why/when to use them is simple and can be grasped by a 10th grader. And he was correct. Her not pointing out the use of floats/doubles for high precision rather than just high value was poor on her part.

Modern languages don't even have floats: Python, Ruby, JavaScript - no floats to be seen.

Invalid. There are floats in JavaScript. And other "modern" languages besides the few you listed use floats. CPUs are hardwired to deal with floats.

You are both right. The difference is that when using ASCII, you have to change the locale to match the character set you wish to use.

Invalid. The obvious implication of what she said was that ASCII has a value for every character. Not true. ASCII defines only 128 characters (256 in its 8-bit extensions). You can arbitrarily change what those codes are mapped to, but then guess what? You lose the characters they used to be mapped to. There are more than 256 characters across all languages, quite a bit more in fact. This kid is right; his teacher was wrong. He is also right about Unicode.

Programmers are lazy. Her way is no less correct than yours, and in some cases may be less error-prone.

Invalid. He didn't say either way was more or less correct. He said that she claimed programmers use shorthand for no good reason. That isn't true: a++ is more concise and no less descriptive than a = a + 1. It has its uses in good practice, and the kid realizes this.

Nobody's future is going to be ruined by a crappy semester of programming. Those that care will learn on their own,

Valid.

and those that don't care will go off and have an actual social life

Invalid. Embarrassingly so; beyond the need for explanation.

Quit obsessing over her lack of l33t h4ck3r skills, and go do all the usual highschool things - sports, girlfriends, etc. You'll only regret it later if you don't.

Invalid. And indicative of some strange kind of envy you have for one particular subset of people.

#4871917 train neural network with genetic algorithms

Posted by on 12 October 2011 - 11:18 AM

I believe that was my issue with all of this... it seems that by combining the two, you are training your fitness function with a fitness function. Why not just make the original one more accurately descriptive in the first place?

I disagree. Backpropagation works about as well as permuting the weights genetically, and that's all this guy is doing: treating the weights of the connections between the nodes as components of the chromosome. The only time a NN works against a GA is when not only the weights but also the structure of the NN is dynamic.
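The idea in one sketch: the chromosome is just the flat vector of connection weights, and selection replaces backpropagation. To keep it short this uses a single linear neuron and a (1+1) mutate-and-select loop on a toy target; all names and the target function are made up for illustration:

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

// Fitness: negative squared error of the neuron w0*x1 + w1*x2
// against the target 2*x1 - 3*x2 over a small grid of inputs.
double fitness(const std::vector<double>& w) {
    double err = 0.0;
    for (double x1 = -1.0; x1 <= 1.0; x1 += 0.5)
        for (double x2 = -1.0; x2 <= 1.0; x2 += 0.5) {
            double out = w[0] * x1 + w[1] * x2;
            double target = 2.0 * x1 - 3.0 * x2;
            err += (out - target) * (out - target);
        }
    return -err;
}

// (1+1) evolution: mutate the weight chromosome, keep the child if fitter.
std::vector<double> evolveWeights(int generations, std::mt19937& rng) {
    std::normal_distribution<double> mutate(0.0, 0.3);
    std::vector<double> best = {0.0, 0.0};
    for (int g = 0; g < generations; g++) {
        std::vector<double> child = best;
        for (double& w : child) w += mutate(rng);
        if (fitness(child) > fitness(best)) best = child;
    }
    return best;
}
```

A real population-based GA would add crossover and tournament selection, but the chromosome-of-weights encoding is the same.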

#4866398 Theory - ultimate AI, at atomic level

Posted by on 27 September 2011 - 05:59 AM

a. It was already established one of the root causes was our lack of understanding the physics. And he pointed out we can't yet solve the 3-body problem.

No it wasn't established. The only issue is a mathematical one. We understand the physics of the 3-body problem perfectly. At any given point we know the gravitational forces between any of the bodies. You simply don't understand this.

b. The butterfly effect is a subset of chaos theory. And when we go into chaos theory, we find out we don't yet know whether real life™ is deterministic or not. Should we scientifically prove that God plays dice for fun once in a while, making the universe non-deterministic, then our simulation becomes flawed, since PCs are inherently deterministic. We can try to play with entropy data from the outside, or go multi-core and hope quantum mechanics breaks the determinism. But even then we wouldn't be able to introduce the same randomness "God" put into our life, leaving us unable to reproduce reality accurately. Furthermore, should our world happen to be non-deterministic, many simulations would actually fail to produce life even if we knew all the physics equations and had a 100% understanding. I'm cheering for a deterministic world, therefore, just to think that there isn't something impossible; but we have to recognize there's a chance it may not be possible.

Ahh I see. The fact that you want the universe to be deterministic explains a lot about your weak understanding of chaos theory. Given that our universe is probably simulated with a finite resolution, I think determinism is unlikely.

c. It's already established that we need infinite processing power to simulate perfectly, since our simulated world may want to start its own simulation, just like we're trying to. Or maybe we shouldn't try it!!! Otherwise real life will stall until the simulation we just started stops. (Tip: if the scientists from simland decide to simulate their "real life", their world will stall too.)

Wrong. You can't know that, and it hasn't been established. I don't know why you keep calling things "established" that no one actually knows. It's more than likely that our universe is being simulated, and therefore that it's being simulated at some finite resolution, which doesn't require infinite processing. Again, it just seems like you don't understand the concept of simulating a universe. The universe "above" ours could easily have subtly or radically different laws that allow far more efficient computing. Or we simply have further to go in computing than we realize.

You meant an infinitely powerful enough computer? Unless all results are round numbers, "accurate" becomes truncated/rounded numbers with translated errors. You're waaaaay underestimating the butterfly effect, which leads us to...

Wrong again. I didn't say perfectly. I said accurately. I was talking about how the more powerful your simulating medium gets, the more accurate the simulation. In a predator-prey simulation, the denizens move around in incremental finite steps. But they're not aware of this. Maybe if they got smart enough they could be. Like how we can analyze quantum mechanical phenomena now.

You mean a computer with infinite RAM? Well then, since I have a very simple task for you: Compute the number PI with 100% accuracy. All decimals included. Then use it in your simulation.

Not relevant for reasons stated above. If we simulate the universe down to an accuracy of x, then we only need to use pi to enough digits that given an exact diameter, we get a circle to the nearest x.

Pro tip: If you miss one decimal, the butterfly effect will sooner or later kick you in the balls. Seriously. Try to debug THAT.

Why do you assume the butterfly effect is this end-all, be-all force of nature that causes ripples ever outward from some origin? Why not wave dampening? Why not consider the case where some cause is eventually nullified? Stepping on that butterfly in the Cretaceous era doesn't matter if a T-Rex steps there right after. Small deviations and changes? Maybe overruled by bigger ones.

You don't understand Chaos theory, Butterfly effect, or the concept of simulation.

But even if you were right. What's so crazy about the concept of a computer that doesn't miss a decimal? Or how do you know there aren't errors all the time? And they do ripple outward? So what?

I miss when Gamedev automatically locked threads after 2 weeks. Someone please lock this madness. One trivial comment bumps the thread, after which 3 troll/flamewar/pointless/endless threads soon follow.

There's always more to be said when talking about philosophical stuff like this.

#4865124 Theory - ultimate AI, at atomic level

Posted by on 23 September 2011 - 06:02 AM

Incidentally, those who cite "perfect knowledge of the laws of physics" need to google the "3-body problem" and the "butterfly effect".

I do not believe you understand either of these problems then.

It's easy to simulate 3 bodies. With a powerful enough computer you could even simulate them accurately. The problem asks for a closed-form expression for their positions as functions of time. This is a mathematical failing, not a physics failing.

The butterfly effect is exactly the same deal.

Not that Dave needs it, but I have to come to his defence here. His point was bang on: mathematics is still a work in progress, as is physics (and by extension everything else).

Sorry, but no. His point was that we can't accurately analyze three gravitationally bound bodies in terms of physics. That isn't true. At any point we can see the resultant forces, center of gravity, and momenta of every component and subset of an n-body situation. His point was correct in that we do not have perfect knowledge of physics, but his example was wrong: we do have complete physical knowledge of the 3-, 4-, or even n-body problem; we just don't have the mathematical tools to model it with a closed expression.
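To make "easy to simulate" concrete, here is a minimal numerical 3-body step (2D, units with G = 1, semi-implicit Euler; all values illustrative). No closed-form position(t) is needed; you just integrate the known forces:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Body { double x, y, vx, vy, mass; };

// One integration step: accumulate gravitational accelerations from the
// current positions, update velocities, then update positions.
void step(std::vector<Body>& bodies, double dt) {
    for (std::size_t i = 0; i < bodies.size(); i++) {
        double ax = 0.0, ay = 0.0;
        for (std::size_t j = 0; j < bodies.size(); j++) {
            if (i == j) continue;
            double dx = bodies[j].x - bodies[i].x;
            double dy = bodies[j].y - bodies[i].y;
            double r2 = dx * dx + dy * dy;
            double r = std::sqrt(r2);
            ax += bodies[j].mass * dx / (r2 * r);  // G = 1
            ay += bodies[j].mass * dy / (r2 * r);
        }
        bodies[i].vx += ax * dt;
        bodies[i].vy += ay * dt;
    }
    for (Body& b : bodies) { b.x += b.vx * dt; b.y += b.vy * dt; }
}
```

Smaller dt (or a higher-order integrator) buys more accuracy; the chaos only limits how long a given precision stays meaningful, not whether the physics is understood.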

You think your thoughts are under your control but they are just the deterministic result of the processing of your neural network. Simulating an equivalent neural network in software would have the result of a conscious entity of equal intelligence and sentience as yourself.

Prove it. ;)

Actually the onus is on you to prove otherwise. Suggesting otherwise is akin to a religious claim.

If you can't understand it, try to understand what a neural network is. Understand the difference between mind and brain.

And if you do understand it, why play devil's advocate with me, who is correct, and stretch definitions and overlook inaccuracies to agree with the mod? It comes off as you being an enemy of free discussion of intellectual topics, since you appeal far too much to authority.

#4860793 Theory - ultimate AI, at atomic level

Posted by on 12 September 2011 - 12:22 PM


Incidentally, those who cite "perfect knowledge of the laws of physics" need to google the "3-body problem" and the "butterfly effect".

On-topic, I should have locked this thread when I had the chance.

I do not believe you understand either of these problems then.

It's easy to simulate 3 bodies. With a powerful enough computer you could even simulate them accurately. The problem asks for a closed-form expression for their positions as functions of time. This is a mathematical failing, not a physics failing.

The butterfly effect is exactly the same deal.

The original post was an obvious troll. Made obvious not only by the implication that we could simulate a planet, but by the claim that the best way to simulate intelligence would be at the atomic level rather than the cellular level.

I'd also like to point out to another poster, who thinks that machines won't be able to "think", that it is not the machine's failure to think but YOUR failure to understand what thought is.

You think your thoughts are under your control, but they are just the deterministic result of the processing of your neural network. Simulating an equivalent neural network in software would result in a conscious entity of equal intelligence and sentience to yourself.

(Simulating a better one would result in a conscious entity capable of understanding that thought is a deterministic result, not an underlying, driving force.)

If you close your eyes and think of a person or picture... what is it you're seeing? That "picture" you're imagining is not a picture; it's a thought, but one that we interpret as a picture by using different parts of our brain to form it.

When you create an AI, how would they see that "thought"? If you programmed them with knowledge of a particular item, could they use their programmed knowledge to picture the item without seeing it?

Makes me think of language... language is actually a barrier that slows our thought process down. If everyone were equally intelligent, perfect beings, we would have no need for language. It wouldn't be telepathy; it would be knowing the answer because it's the right thing to do. If we had to communicate with people, we would instantly understand what they needed without exchanging words, because we could interpret the need without having to talk.

It's like a team game, either digital or athletic. You become a cohesive unit, multiple brains melding into one to the point where you can predict what the other is doing without talking.

Pretty neat to think about.

If the AI's neural network was structured the same as yours, they would "see" the same things you see when you "picture" something.

If it was given a neural network similar to a human infant's, with visual input into the optic nerve, auditory/gravitational input into the vestibular system, etc., then it would evolve into an adult brain that thinks, conceives, and pictures things the same way you do.

There is no distinction. You are seeing a difference that doesn't exist because you don't understand intelligence.

#4860677 Theory - ultimate AI, at atomic level

Posted by on 12 September 2011 - 08:07 AM

Incidentally, those who cite "perfect knowledge of the laws of physics" need to google the "3-body problem" and the "butterfly effect".

On-topic, I should have locked this thread when I had the chance.

I do not believe you understand either of these problems then.

It's easy to simulate 3 bodies. With a powerful enough computer you could even simulate them accurately. The problem asks for a closed-form expression for their positions as functions of time. This is a mathematical failing, not a physics failing.

The butterfly effect is exactly the same deal.

The original post was an obvious troll. Made obvious not only by the implication that we could simulate a planet, but by the claim that the best way to simulate intelligence would be at the atomic level rather than the cellular level.

I'd also like to point out to another poster, who thinks that machines won't be able to "think", that it is not the machine's failure to think but YOUR failure to understand what thought is.

You think your thoughts are under your control, but they are just the deterministic result of the processing of your neural network. Simulating an equivalent neural network in software would result in a conscious entity of equal intelligence and sentience to yourself.

(Simulating a better one would result in a conscious entity capable of understanding that thought is a deterministic result, not an underlying, driving force.)
