
Brian Sandberg

Member Since 10 Oct 2003
Offline Last Active Today, 05:46 PM

Posts I've Made

In Topic: FINALLY a tool that suits my needs!

15 May 2015 - 04:51 AM

Well, if there's any one thing that beats patenting obvious inventions by tagging on "in a computer", it's tagging on "on the internet."


Or "using XML" or "on a handheld device".

In Topic: Am I wasting my time with this

19 February 2015 - 12:18 PM

You could also just use OpenGL bindings directly, and do your 2D stuff with that.   Have a look at something like OpenTK.  Using OpenGL directly will take a bit of extra effort at first, but it's knowledge that'll serve you well in the future, no matter what language or engine or platform you end up working with.  It'll also let you write 2D games at first, and then add effects and 3D elements to them without having to start completely over.

In Topic: Now this is cool

17 February 2015 - 03:45 PM

Wow, that kind of sensor should be in every phone.


But needing an internet connection and access to some service in the cloud?  Ridiculous.  I can't see any good reason for that, other than lock-in.

In Topic: Is Artificial Intelligence (AI) Really a Threat?

31 January 2015 - 12:23 PM


As a VERY poor example and highly exaggerated, say someone gives an intelligence the goal of curing world hunger. Rather than planting a large amount of crops to feed everyone, it could decide the most efficient route is to exterminate select cities' populations, whether by poisoning city water supplies, nuking that shiz, or something similar, so that those who are not killed are no longer hungry. This would not come from any form of malice towards humans, but simply because it was the most efficient solution to the problem it was given.



Exactly.  "Cure world hunger", when held as a goal by a human, lives in the context of general human values, and weird solutions like "lobotomize everyone to not feel hunger while they starve to death", that offend those values, are automatically rejected.


The problem of friendly AI, in a nutshell, is to give it that context of commonsense human values.

In Topic: Is Artificial Intelligence (AI) Really a Threat?

31 January 2015 - 12:18 PM


If either of those things happen, then we have an AI (or a whole bunch of them) with the goal of destroying humans.

Then it's all simply a matter of whether they have enough control of their environment to manipulate it such that humans die off.



Or maybe it doesn't hate you, and doesn't love you.  You are just made of atoms that it can use for something else.


The flipside of assuming AIs will come equipped with the entire bag of human emotions, and being afraid of those, is overlooking simple indifference.  The space of all goal structures is vast, and love and hate occupy very small regions of it.  Unless we equip them with a very precise set of values that overlaps with the human values we all share, things can get strange and deadly.