About Scienthsine

  1. Man vs Machine, The Hype: machine is beginning to win

    You're assuming that you're the only spiritual person here and that no one else has spiritual experiences. You are wrong. You can use science for what it is - testing hypotheses through experiment and observation, and using theories to make predictions. The problem with "spiritual science" is that you usually can't create any experiments, can't make any useful predictions, and if a hypothesis is shown to be false then it's immediately replaced with a new untestable one. It lives in the darkness where the light of knowledge has not yet reached. Religion is the same -- as human knowledge has advanced, more and more religious explanations for things have been replaced with factual, evidence-based explanations, and religion is forced to retreat into the areas that remain dark. It's fine to believe in the spiritual and also uphold science, but it's not ok to pervert science into pseudo-science by bending it so that you find yourself able to admit the untestable, unverifiable and unpredictable into the realm of science. IMHO it's healthy to believe ideas about the spirit, but you don't have to try and marry them with science -- if they can't lead to any verifiable predictions, then they're not a scientific hypothesis, so they're scientifically useless. Bending science to allow it to work without evidence renders science as a whole useless.

No, science does not accept matters of religion and spirituality, for the simple fact that they are not proven. If someone created an experiment that proved the existence of God, then scientists the world over would have to accept it as fact, or work to find the flaw in the experiment/observation/conclusion. That's the whole point - what we can prove is science. However, your assumption that science ignores the spiritual realm is also wrong.
There have been a lot of studies into the power of prayer -- which is religious nonsense, but it has nonetheless been tested and observed by scientists wishing to prove that it has an effect on the world (spoiler: it doesn't). Another example: if the spirit is not bound by the physical realm, then astral projection, out-of-body experiences, etc., might possibly allow people to gather information from afar. That would be of crucial importance to the military, so yes, the military has spent a lot of money funding scientific research to determine whether spiritual travel to other locations in our physical realm is possible or not (spoiler: it's not).

Those developments are all firmly within the realm of physics, not metaphysics or spirituality. They have no relevance to spirituality, except that dark energy/matter still contain many unverified parts of scientific knowledge. They are dark corners of human knowledge, which, as always, provide places for people to insert magical explanations (for now, until we come to understand them, at which point the magic will retreat to other dark corners)...

So can current state-of-the-art AIs, which are not programmed using logical rules like we do in game development, but which are grown out of an incredibly massive array of (simulated) wet tissue. They already demonstrate both intuition and creativity!

What makes you say that? You say "But we all know this can't be true" as if this is self-evident and obvious. I would say that the fact that consciousness changes is self-evident and obvious... "Altered state of consciousness" is an accepted term in psychiatry, so it's generally assumed that consciousness has more than one form, and can change. Is the consciousness of a dog the same as a mouse's?
From personal experience, the me as a child is a completely different person to the me of today -- it's not just that my personality is different, but the actual modes of thought are different, and my feeling of connectedness to my spirit is also different. On substance-aided journeys, I've felt my "spirit" leave my body and felt the presence of other spiritual beings - but you have to entertain every explanation for these feelings. That means simultaneously believing what you want to believe about the spiritual realm, and also believing a cold, hard, mechanical, neuroscience explanation with equal weight.

If I went to sleep tonight and woke up tomorrow with a different spirit, how would I know? If the mind/personality is separate to the spirit, then it would continue on unaffected! Is there an experiment that I could create that could tell if I have the same spirit every day? That would require the ability to detect the spirit in the first place... Seeing as this isn't possible, it's irrelevant to science, and we don't get to say whether it's true or not.

What you do get to do is arbitrarily pick a version of it to be assumed true, as a starting axiom, and then develop the ideas that would stem from that assumption. However, I would recommend also doing the same exercise with the opposite axioms, and seeing what ideas follow from that. It's extremely self-limiting to set your worldview in stone, especially when it comes to "facts" that cannot be tested or proven. IMHO it's much more useful to explore the philosophical implications of every possible version of these unknown facts, and to simultaneously believe in all of them, pending further discoveries that show which are actually right (if any).

Your link goes to a study of neurological abnormalities in schizophrenic twins. These studies show that while both twins share the same genetics, this is only a partial risk factor in the development of schizophrenia.
It's possible for one twin to suffer a neurological abnormality while the other doesn't... Neurological development is also largely dependent on environment -- both twins will have different fingerprints, because fingerprint patterns are epigenetic (dependent on environment, such as how they bounce around in the womb). The folds of the brain are also epigenetic, so the twins are not at all neurologically identical, whatsoever. Yeah no, that's not been "proven". Your photo is also not proof of anything (except that you have a very low bar for evidence).

We've done a lot of (horrible) experiments with removal of parts of the brain. At what point is a man destroyed? It's possible to take small parts of almost any brain region and have the person still continue to exist. What if we were to replace those taken parts slowly with mechanical equivalents? And yes, researchers are working on artificial brain implants. If you slowly replace every part of a man with a mechanical equivalent, does his soul move into the new body? If not, what's the threshold? If there's one brain cell left of him, will it stay? FWIW, the brain isn't the only organ with neurons in it -- the gut also contains large synaptic webs of "brain tissue" to coordinate digestion (and perhaps generate your "gut feeling"), and the heart also contains "brain tissue" to coordinate its own work. Is one of the heart's brain cells enough to keep the soul present? How would we even know?

This is also neglecting the fact that the human body is a machine! ^^ That protein is not alive. No part of you is alive when viewed in isolation. Individually, it's just a whole bunch of neat shapes that happen to interlock in very interesting ways that give rise to interesting macro behaviours. It's impossible to study biology without realizing that we are just made up of nano-technology.
Despite our building blocks being tiny little robots that execute code, compute, and build via the statistical interlocking of shapes, somehow the experience of being alive arises, and the experience of consciousness arises (in some forms of life). ^^ None of this is alive. It's just a nano-machine. Look closely enough at any life and it's all just nano-robots. Seeing as we are conscious machines, there's every reason to believe that it's possible for other conscious machines to exist.

What if we could build a perfect simulation of the universe, down to the base strings (if string theory turns out to be correct...), capable of simulating every quantum interaction perfectly -- if you could copy a man into that simulation, he would function as normal, and from exterior observations he would appear just like any other man. We could talk to him and interrogate his experience. We can't ever actually experience what he's experiencing, though -- just as I can never experience your life. We would not know whether he's truly conscious or just a convincing automaton -- just as I can't ever know whether you're truly conscious or just a convincing automaton.

We do routinely bring the dead back to life. That's a common event in modern hospitals... The reason it can't be done after someone's been dead for too long is that their brain begins to degrade, so even if we did revive them, the particular pattern of synaptic connections that used to represent their personality (and bodily function regulators) no longer exists. If it were possible to take a snapshot of their pre-death brain, heal all the brain cells, and reconnect them back into the same arrangements, then there's no reason why we wouldn't be able to revive them... but that's an impossibly advanced technological feat that we're not going to be able to test any time soon.

We didn't always understand this. It was one of those dark corners of human knowledge, so we filled it with religion and magic.
No one understood much about how life functioned, so we invented magical explanations. We've shone a light over this area now and have no need for magic here any more. There are ways to explain life without resorting to the untestable.

This.
  2. Man vs Machine, The Hype: machine is beginning to win

      Ehhh... yeah? Have you seen the internet? It's quite the largest repository of experiences in text, video, voice, etc... ever. Example: YouTube. There is more video on YouTube than any person could watch. Literally, more than 500 hours of video are uploaded every minute! A computer... well... a computer can process every bit (pun intended) of that data a multitude of times...
  3. Man vs Machine, The Hype: machine is beginning to win

    AI is in fact possible, and close. I'm a bit surprised at some of the replies... I'm not even going to quote everyone, but here are my responses:

Why life? The important part of that definition is the reproduction. Patterns that reproduce continue to exist. Those that don't, do not. It is that simple. Anyone who has ever played with the simplistic 'Conway's Game of Life' has seen this. Some patterns of cells stick around because they produce a repeating sequence. Some even create patterns that leave and continue to exist on their own. Now this is a very, very limited set of information and rules... but given an infinite grid it is in fact Turing complete.

People may scoff at AI as it is now. It may even look as though we are far away from true AI. I believe the contrary. Consciousness is simply a complex pattern. We don't even have forms of it when born. (For example, self-awareness in humans arrives at around age 3, IIRC.) The line that defines consciousness is a fine line, and once it's passed, any real machine AI will advance at a speed that we can't imagine.

Your brain and the chemical bath it operates in/with is what makes you, you. It has a finite amount of matter in it, and a huge majority of that is there to handle IO with the body. This is why intelligence is so highly linked to brain-to-body size ratios. Past that, you've got hangups from the beginning of life itself. Remember that sweaty-palm fear/anxiety response you get when you interview for a job? That's because you are more than 99.9% genetically identical to ancestors who needed that response to live. Humans have a ton of baggage.

A mechanical AI will be able to overcome these. It isn't bound by organics. It isn't bound by any one body. It isn't bound by our emotions. (It will have its own set of emotions, but they won't be ours. We are unable to conceive of these, in the same way that we can't conceive of a color that we've never seen. Ask a blind person what they think color is.
Or try to describe what you think a mantis shrimp sees without using colors you already see.) It will have access to the wealth of human knowledge. It will be able to learn from other copies or sections of itself, as though it had experienced the stimulus itself. (Think Matrix-style teaching of skills.) I mean... come on... look at how long it takes us to start up. From a baby to actually being able to do anything significant, we take a huge amount of time...

The good news is that I believe it will end the majority of our problems. I really don't think that we will have any say in the matter. It won't matter though, as it will be far more qualified to tell us what is best than any person ever has been. If it does decide that humans need to be destroyed... good luck... you probably won't see it coming. Like, seriously... think of an ultimate sleeper virus constructed with those nifty DNA printers. Actually, probably something we never expected.

Ohh, time was mentioned at some point. Time is only a perception. All matter, everywhere, is constantly reacting. You can slow reactions, maybe even somewhat stop them, and you can speed them up, but you cannot reverse them. An improvised analogy is a couple of steaks. They both start raw. You can introduce them both to temperature, and the reactions eventually create a 'cooked' steak. Both steaks can react at the same speed, in the same energy environments... or one can be frozen and one cooked. This is analogous to the time difference you see when dealing with the 'relativity' of time. You can't ever reverse the cooking, though. Hopefully some of that made sense to someone, somewhere....

[EDIT] Someone also talked about twins being identical and so on... that is never possible. There are always differences in stimulus, like when they are fed, or how the sheet they slept on was folded, or even the slight difference in gravity (that is, different positions relative to all other mass in the universe).
These all invalidate any -purely exact- possibilities. Anyone who has ever written a simulation should know this: change even the rounding of the floating-point numbers, and the entire simulation plays out differently. This is widely known as the 'butterfly effect'.
  4. Preferred development OS (Desktop/Laptop).

    Within the last year I've started developing using Arch Linux with dwm and vim -- mostly C and Go at the moment. I find that using the command-line tools and vim is what I like. I don't mess with icons, I don't mess with file managers, I don't mess with menus. I can do everything from the keyboard without ever using the mouse. In fact, other than my browser, I can do all my coding over a simple ssh login, or without starting X at all. Even while in X, dwm has no need for a mouse, and I'm using vimium for chromium to reduce my need for a mouse there. Dwm is a tiling window manager, which is very helpful when using documentation or reference material. There's much more to add, but I really enjoy my development environment. Everything at your fingertips.
  5. The picture actually looks like it's using Bresenham's algorithm. Notice how it favors one axis (Y) over the other (X)? Even though the actual line covers some tiles by almost half, they aren't in red, while others are barely covered and are in red. It's iterating over Y with error accumulation in X. Basically, forget the pixel level of it all: you're creating a line between two tiles, and using Bresenham's algorithm over the tiles is the same as using it over pixels. Now, as others have said, if you really just want 'the number of solid tiles between two points' and you're counting the tiles in red in the photo, then it is just the greater of the absolute differences in Y and X (as defined by the outer loop in the algorithm).
  6. I've never quite _fully_ finished a verlet particle physics system based on this paper. I always get stuck on the Rigid Bodies section on page 3 -- specifically where he's deriving a formula for separating a constraint that has collided with a wall; it doesn't make sense to me. He eventually defines a symbol 'lambda' as: [img][/img] However, delta is defined earlier in _this_ section as (q-p). So wouldn't that be delta*delta... and wouldn't that cancel with the bottom delta^2 to give just: lambda = 1/(0.75^2 + 0.25^2)? (Of course with actual variables instead of the constants 0.75 and 0.25.) Do note that q and p are vectors, so the dot is a dot product... but the same rules still apply, right? This is the paper: [url=""][/url] Though note that some of the symbols are messed up when not in an image. This copy of it has all the symbols even in the text: [url=""][/url] I'd very much appreciate any clarification or explanation anyone could provide. This has been a stumbling block for me for several years. Thanks, James Newman
  7. OpenGL Yet another OOP question.

    Well, my 'wrapper' will have a much higher level of functionality, i.e. CreateSprite, CreateTileMap, etc... Not quite sure what you're getting at. Do you not believe in 'wrappers'? You seem to imply that all software is just a wrapper on top of machine-level assembly. I'm quite sure most people 'wrap' the gl commands in some sort of higher-level draw commands. My question is: in those higher-level functions, do most people call the gl* commands via their native C-style API, or do they wrap those commands and GLuint data into C++ classes? Or rather than 'most people', do 'you' (other gamedevs)?
  8. I'm in the middle of writing a small graphics lib for my future games, and just found myself writing a Shader object. Something about it rubs me the wrong way. What is the point in me wrapping glCreateShader into Shader::Create(), and glUseProgram into myShader.use()? I think I'm biased towards more procedural code, so maybe that's just it... Ideally I won't ever touch OpenGL directly from the game code itself -- only the graphics lib will actually use the gl* commands... so is there any real benefit to the wrapping? I'm interested in what other people do. I mean... it's already pretty OO, IMO... [Edit] Yes and no seemed confusing.
  9. I'm playing around with OpenGL 3.2, and it seems to me that binding textures really fragments what could otherwise be a pretty straight-forward batched drawing of primitives. I.e., when drawing a bunch of 2D sprites, a VBO containing all of the data needed to draw them is trivial; however, binding textures fragments the draw calls. The best I can think of is to sort by image and draw each range from the VBO, but if drawing order is important, and not related to texture... this may not help much at all. It just seems to me that I should be able to pass an attribute to specify which texture to use... the texture is already on the GPU... so what's the problem? I could use texture arrays and all of my texture slots for a somewhat usable hack to get what I want, but it has its limits. It seems nvidia's bindless extension is meant for buffers only, not textures... unless I'm missing something. Now... there are texture buffers, which may be able to be bindless. They come out as a 1D texel array with no filtering or anything, so not ideal either. Does anyone know of something that would allow me to do this? Is there something I'm missing, or do I just have to live with it?
  10. I've got all of my code to GL3+ compliance now, I think, except I'm still using gl_FragData in the fragment shader, because my glBindFragDataLocation call causes a segmentation fault when I try to use it. Also, I'm using GLEW, and although 'glewinfo | grep glBindFragDataLocation' says I'm good for both glBindFragDataLocation and glBindFragDataLocationEXT, when I do a 'printf("%ld",(long)glBindFragDataLocation);' I get null. So... help? This is the area of code around my problem:

GLuint myShaderProgram;
myShaderProgram = glCreateProgram();
glAttachShader(myShaderProgram, myVertShader);
glAttachShader(myShaderProgram, myFragShader);
glBindAttribLocation(myShaderProgram, 0, "inVertex");
glBindAttribLocation(myShaderProgram, 1, "inColor");
glBindFragDataLocation(myShaderProgram, 0, "outColor"); // Comment this out, and it works (with gl_FragData[0]).
printf("%ld",(long)glBindFragDataLocationEXT); // 0!?
glLinkProgram(myShaderProgram);
glUseProgram(myShaderProgram);

and this is my fragment shader:

#version 150 core
in vec3 exColor;
out vec4 outColor;
void main() {
    vec3 tempC;
    tempC = exColor;
    tempC.r = tempC.r * 0.5;
    tempC.g = tempC.g * 0.5;
    tempC.b = tempC.b * 0.5;
    gl_FragData[0] = vec4(tempC,1.0);
    outColor = vec4(tempC,1.0);
}

Kinda new to this... so any help is appreciated! [edit] Found the problem: GLEW wasn't setting the function pointer for some reason; I pulled it in on my own and it works. [/edit] [Edited by - Scienthsine on December 9, 2009 7:34:11 PM]
  11. Ohhhhhhhh ok! I think this tutorial has the wrong idea then: I felt that they were saying that those indices are reserved for built-ins only.
  12. Do NVidia's drivers still not allow you to use the indices of the old built-in vertex attributes? From: If so... doesn't this make glBindAttribLocation kinda useless? I would really like to have all 16 of my vertex attributes and be able to reference them with my own names... WTF, NVidia? [Edited by - Scienthsine on December 2, 2009 5:52:44 PM]
  13. OpenGL Getting started with GL3.0

    Quote: Original post by apatriarca
Quote: Original post by Peti29
I find it totally crazy to remove matrix operations. At least they should have replaced them with functions that, for example, receive a transformation matrix and a rotation and return a matrix that represents the rotation applied to the source matrix. Don't tell me this would complicate drivers! Removing matrix operations is a massive feature loss in my eyes. Of course sooner or later you're going to have to manage your matrices, but those matrix operations were very handy for beginners IMO.

Those features are probably only useful for beginners. I personally think it's easier to work directly with matrices than to track the current state of the matrix stack, and there are a lot of situations where you can't use the matrix operations OpenGL gives you (for example, if you use quaternions for animation). You can't really be a graphics programmer if you can't work with matrices with ease, so you have to learn to do it sooner or later. Edit: OpenGL 3 continues to support those features. You can still use the fixed pipeline if you want to. Those features will be removed in future versions of OpenGL, not in the current one.

Aye, but using deprecated features is not exactly ideal. I think the matrix stuff should be moved to glut or glu or whatever. I'm actually doing without them at the moment since I'm doing 2D, and it's pretty simple to do all the translation/rotation/scaling and such without the matrices. Gamedev has a few articles about the subject, though. Once again... my argument isn't that they're being removed, so much as that we need new tutorials. With the information available, I don't think someone can learn proper OpenGL 3.0 code from scratch very easily... and it may be even harder for someone who started in OpenGL 1.x or 2.x to try to make the switch.
  14. OpenGL Getting started with GL3.0

    I second... ummm... this thread. I've been googling for a couple of days now trying to find a site set up to teach OpenGL 3.0. Due to all the delay/hype/flames from the pre/post OpenGL 3.0 discussions, it's really hard to come up with anything. Even sites which were going to have tutorials for 3.0 just add to the crap. I for one am sorely disappointed in OpenGL 3.0, but I run linux the majority of the time, and so can't switch to DX. SO... if anyone knows of a site that helps with the path from pre-3.0 to 3.0 without deprecated code... please set me up the link. If there isn't one, we need to get one. It should have a lot of common shader stuff, showing at least the usual things like blending and such. Hell... I think a lot of things are going to be much easier using shaders anyhow... but beginners need to start on the right path, and novices need direction. I think one of the main issues is the loss of translate, rotate, and the other matrix stuff. Since this was usually done unaccelerated in the drivers anyhow, replacing it for beginners should be as easy as providing a small library that does what the old functions did. Depending on the license, we may be able to just swipe some code snippets from Mesa. The internet is just as bloated with old OpenGL code/tutorials/etc. as the API itself is bloated with old crap... we need a modern collection of tutorials, and somehow to get their visibility to show through the mass of old information.