Brain Scanner Records Dreams on Video

Started by
9 comments, last by Antheus 12 years, 6 months ago
I don't know how I missed this. Someone just linked it in IRC. Amazing technology. The ability to pull foggy video from a person's head by scanning it. This is real-time video.

Been listening to dream police now. :lol:

I don't know how I missed this. Someone just linked it in IRC. Amazing technology. The ability to pull foggy video from a person's head by scanning it. This is real-time video.

Been listening to dream police now. :lol:


That's really not the link you meant to post...
-~-The Cow of Darkness-~-
I think he meant to post this or this. Although his original link jumping from mind reading to Steve Jobs (iMindControl, anyone?) is also intriguing :wink:

Interesting approach, but this is not pulling video from the mind. It's akin to reading a hash value produced by the mind from visual stimulation (and probably a ton of other data), and then searching a database of pre-recorded video material for something that generates a similar hash, on a best-guess basis.

It's also not at all reading "what you're picturing inside your head", as the first article (heavily influenced by a technically inept journalist) says. It's "the patterns that appear in people's brains as they watch a movie". The two are very different. This system could probably not correlate dreams at all, as they do not produce the same signal patterns as direct visual stimulation. It's probably just measuring the signals from the optic nerves and/or the visual processing center. Which is still impressive, but not what these articles claim it to be.
I think it's fascinating that the kind of statistical correlation methods used for pretty much everything these days (e.g. voice recognition) generally work really well but obviously don't quite match what brains actually do. For instance, there are cases where even the best voice recognition fails spectacularly, but I've also experienced cases where it can interpret what someone is saying when I'd have absolutely no idea. That's not at all surprising, but it is interesting, because it makes me think it really won't be long before machines can understand all kinds of cues from people that other people can't pick up on at all.

EDIT: Obviously it helps if, say, you're Google, and you have all the data.
-~-The Cow of Darkness-~-
There, fixed the link. I was on the wrong tab when I copied, I guess.

Yeah, I'm not 100% sure what they're doing. What I do know is that they're taking brain signals and turning them back into an image, essentially mapping the signals to digital values that can be displayed. I would really like to know whether it works if people close their eyes and think of an object; one would imagine that activates the same visual regions of the brain. For instance, have them watch the movie, then try to remember it and play it out in their head.

What would be cool, if we can understand the signals, is to try to reproduce them in the brain, like a flashback. That might be a more efficient way to watch video (close your eyes, stop sending signals from the eyes, and transmit artificial signals instead). It's too bad everyone's brain is unique, so such a system would need to be really flexible.

Comparing the brain-scan video to the original video is just a way to prove that the system works, but there's nothing stopping this technique from being used to suck video out of people's heads directly.

Er, sorry? Putting cheese on bread and grilling it is just a way to make cheese on toast, but there is nothing to stop this technique from being used to paint a bathroom. Except for the fact that they are completely unrelated.
This experiment doesn't interpret dreams or record anything of value.

It records a blob of electric signals. It then tries to correlate these signals with an existing library of video recordings.

It's like trying to correlate "a movie, with special effects, might have De Niro in it, some shooting and a midget" against IMDB.


The resulting image is sort of a distraction: it's not only unnecessary, it obfuscates what the algorithm actually returns. The algorithm correlates electric signals with videos, and the result is something like this:

2413, 85.4132
88413, 61.5824
9143, 2.45132
5654, -1.5413
..., ...

Left value is the ID of a video; right value is the correlation.

The video presented in the demo is then a blend of the top-matching videos. But there is no correlation between them: since video 241 and video 517 aren't meaningfully correlated, you cannot blend them together to obtain video 91364. It's like saying that by blending Commando and Terminator you get Rambo.
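A minimal sketch of the pipeline as described in this post. Everything here is made up for illustration: the clip IDs, the 64-element feature vectors, and the softmax weighting are assumptions; the real study used fMRI voxel responses and a far larger clip library.

```python
import numpy as np

# Hypothetical feature vectors for a tiny library of clips (IDs from the
# example table above) and a stand-in for the measured brain response.
rng = np.random.default_rng(0)
library = {vid: rng.standard_normal(64) for vid in (2413, 88413, 9143, 5654)}
brain_signal = rng.standard_normal(64)

def correlate(a, b):
    """Pearson correlation between two feature vectors."""
    return float(np.corrcoef(a, b)[0, 1])

# Rank every library clip by how well it correlates with the signal --
# this produces exactly the (video ID, correlation) table from the post.
scores = sorted(((vid, correlate(brain_signal, feats))
                 for vid, feats in library.items()),
                key=lambda t: t[1], reverse=True)

# The demo video is then a weighted blend of the top matches (softmax
# weights here, purely illustrative). Note this is a database lookup, not
# a readout: nothing outside the library can ever appear in the output.
top = scores[:3]
weights = np.exp([s for _, s in top])
weights /= weights.sum()
```

The point the sketch makes concrete: the output is always a combination of pre-existing library clips, so the system can only ever "reconstruct" things it already has footage of.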


The correlation aspect is interesting, but it only works for checking whether someone might have watched a specific video, not for reconstructing what they saw.

snip


I was also thinking that we probably read too much into it because the two videos are being played next to each other. Really if you just watch the brain scan half it's fairly useless data as it stands now. Not to say that that's not how many awesome things come about, but just as many totally irrelevant things come about the same way.

Really if you just watch the brain scan half it's fairly useless data as it stands now.


It's misleading.

The resulting video implies that if you take the pixel-wise blend 0.46 * video1 + 0.22 * video2 + 0.13 * video3, the result matches the contents of the brain.


Imagine taking character-wise 0.80 * Shakespeare + 0.20 * Heinlein, and pretending the result is theatrical drama set in outer space.
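To make the objection concrete, here is a toy illustration. The 2x2 "frames" are invented; the 0.46/0.22/0.13 weights are just the numbers from the post (they don't sum to 1 because lower-ranked matches would carry the rest of the mass):

```python
import numpy as np

# Toy 2x2 grayscale frames standing in for three unrelated library clips.
video1 = np.array([[255.0, 0.0], [0.0, 255.0]])
video2 = np.array([[0.0, 255.0], [255.0, 0.0]])
video3 = np.array([[128.0, 128.0], [128.0, 128.0]])

# The pixel-wise blend the demo video implies.
blend = 0.46 * video1 + 0.22 * video2 + 0.13 * video3

# The blend equals none of its inputs: averaging unrelated footage yields
# fog, not a reconstruction of anything.
assert not any(np.array_equal(blend, v) for v in (video1, video2, video3))
```

The blended image is a new picture that belongs to none of the source clips, which is why the demo footage looks like smeared fog rather than a recording.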


But they definitely get 10 points for making an appealing presentation, deceiving the public, and almost certainly guaranteeing big funding and many lucrative contracts. Go capitalism.

Imagine taking character-wise 0.80 * Shakespeare + 0.20 * Heinlein, and pretending the result is theatrical drama set in outer space.

Mother of god...

/moves to hollywood.

This topic is closed to new replies.
