Appreciating Homo sapiens again

Started by NikiTo. 4 comments, last by NikiTo 3 years ago

A few months ago, when I started diving into AI, I discovered it can >>mimic<< humans pretty well. In all aspects. In the future, at least.

And I have always thought that if A can mimic B, then A is B. There is no difference between a robot that perfectly mimics a human and a human. This was my main reasoning, and it still looks logical to me.

Because of this reasoning, I fell into a state of nihilism. I could call it "Artificial Intelligence originated nihilism". I don't like nihilism, but I fell into it, because it is logical. Machines that mimic and even outperform humans can replace humans, and nobody would care. If there is any difference, it is only for the better.

But last night I realized something - art introduces a difference. It is not logical at all, but it is how I feel. It is bringing me back to spirituality, and I hate being nihilistic. If you are reading this and you are a nihilist, so be it. But if nihilism were clothes, I would not like wearing them. If you like them, wear them. It is a matter of taste.

So what happened? What is this illogic that replaced my logic?

In short - for some illogical reason I cannot accept art from a machine. It is the same art, it should be the same. But I don't feel it is the same.

Example - a human writes a C++ compiler. It is a complicated piece of software; I consider it to be art. Then another person writes C++ code, and I think that is art too. Then the programmer compiles his code into bytes to run in RAM. Oops - the bytes are not art anymore. Why??? It is super mega illogical, no? The code of the compiler is art, the code in C++ is art too, but the compiled code is not art. What happened? The human was lost. The compiled code does not feel like art to me.

Example - a robot hands me a piece of metal that is perfect. And I don't care. I would not risk my life to save this piece of metal from a burning house, even though it is perfect.

Remember - it is about how I feel; logic is gone.

Example - a man creates a robot. I feel it is a piece of art. Then that robot creates another robot… and it doesn't feel like art anymore. The original art is in the design of the original robot. The second robot is a product of the art of the person who made the first robot.

Example - a person writes code that procedurally generates houses. The procedural code feels like art to me. The generated houses do not.

I asked myself - "is it because of the true randomness of human creations?" - and I failed to base my feelings on this. RNGs are a human creation too, and we could create very sophisticated RNGs.

Then I asked myself - "is it because of reproducibility?". Let me elaborate - a robot paints a painting for me and gives it to me. And I tell the robot: "Meh, now repeat the same painting a million times." And the robot repeats it. Doesn't it suddenly feel like a digital copy of Windows? This makes some sense, but it doesn't explain why I feel that way. I value perfection over uniqueness. It is still a good argument (though not my own) that a robot can produce the exact same piece of work a million times.

Even if a robot is explicitly programmed to never repeat a creation, I could ask it to create the best product, and the best - the absolute perfect best - is the same as the previous absolute perfect best. So even if explicitly programmed, I could fool it into repeating the piece.

Let's say there is a black box huge enough that there is no practical chance of tracing down the decisions of a robot. There is no practical way to explain why the robot decided to paint a painting in green. Still, that doesn't make me feel like the thing the robot produced is art. Illogical, no? I still believe that if A exactly mimics B, then A is B. But I don't feel that way about art.

I still think a program can perfectly replace a human in composing music. But if there were a contest for robots that compose music, and a robot won, and I had to give it the prize for first place, I would ask: "Where is the guy who created the robot? I have a prize for him."

A robot outperforming a human in composing music is cool, and will be a fact some day. Notice - the robot is still like a human. The robot is like a human because it mimics a human; A is B here. That feels fine to me, and I could even listen to that music on repeat. It will be damn good music. Robots are better than humans. But the difference is that I would not put a poster of the robot who created the music on my wall. I would not push through a crowd to get that robot's autograph. I would ask: "Where is the guy who created the robot that creates awesome music? I want his autograph."

The difference is subtle, but it is enough to start escaping nihilism. It is curious how I still think I would love a robot, yet not accept its art. The robot is art itself; I could love it. What the robot makes is not art. It is the byproduct of art, the consequence of the original art that was created when the robot was created. So it is the human intervention that suddenly turned out to be valuable for me.

In the past, pagans worshiped a piece of metal, a piece of wood, a rock. Maybe somebody could accept art coming even from a wooden chair. I agree AI creates >>cool<< visuals. But that's it - >>cool<<, not art. If I had to look at it as art, my thoughts would go to the creator of the AI.

I still think AI can outperform a human in creativity. And maybe I would value it… but I would go to the developer of the AI for his autograph. I think this explains my thoughts best - whom would I ask for an autograph, the robot or the human developer?


NikiTo said:
whom would I ask for an autograph, the robot or the human developer

It is like parenthood: people always attribute a child's success to the education given by the mother. "Your son is a great musician", for example. So this is not your thought alone but a social sickness: always giving the credit for a puppet's success to the puppeteer.

Your examples are simple; they always assume an AI that was built and trained to outperform at exactly one thing. But what about AIs that really "decide" things - how are they different from humans? For example, Google created an AI voice some time ago that was able to "decide", from a dozen samples, which voice it liked, and in the end created its own human-like voice just from learning. Is this still the success of the programmer who created the AI, or do we have a slight transition into self-aware intelligent beings?

I agree that most of the time AI is mentioned these days, it really means some kind of supervised automation, because there is no "real" learning process in it, just a huge amount of entropy from tons of data. So how would a "real" AI behave in art?

I can imagine that, like in biological beings, there has to be some kind of mechanism which tells the artificial brain whether something "feels" good or bad. Hormones steer our brain to prefer certain things or avoid them, so we would first need some kind of "virtual hormone" to give discrete feedback to the AB (artificial brain from now on). Those VHs can, for example, be created from randomness - some random lines of code which translate some sensor input, like a visual picture, into good or bad.

Scientists have already done that with a robot. It was equipped with sensors which measured electricity from the cubes the robot was told to collect. Some cubes were at a different amplitude than others, and the robot learned which cubes are "tasty". So its decisions were based on getting a "tasty" cube.
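Purely as an illustration of that idea (the sensor function, the amplitude threshold and the learning rate below are invented, not taken from the actual experiment), a sketch of such a "virtual hormone" turning a raw sensor reading into a good/bad signal could look like this:

```python
import random

def measure_amplitude(cube):
    """Stand-in for the robot's electrical sensor (hypothetical)."""
    return cube["amplitude"]

def virtual_hormone(amplitude, tasty_threshold=0.5):
    """Translate the raw sensor reading into a good (+1) or bad (-1) signal."""
    return 1.0 if amplitude >= tasty_threshold else -1.0

random.seed(0)
# Cubes of type "A" happen to emit a higher amplitude than cubes of type "B".
cubes = [{"type": t,
          "amplitude": (0.8 if t == "A" else 0.2) + random.uniform(-0.1, 0.1)}
         for t in random.choices("AB", k=100)]

# The robot keeps a running preference per cube type, updated from the VH signal.
preferences = {"A": 0.0, "B": 0.0}
for cube in cubes:
    reward = virtual_hormone(measure_amplitude(cube))
    preferences[cube["type"]] += 0.1 * (reward - preferences[cube["type"]])

print("learned preferences:", preferences)  # "A" ends up positive ("tasty"), "B" negative
```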

Finally, we need some kind of memory. I spent some time getting into SOMs (self-organizing maps). They're data structures which can group vectors of information into logically accessible clusters - for example, sorting a random set of colors into arrays. If you then pick a random color from a certain array, red for example, you can tell whether a color belongs to the "red" group. So in fact it associates information into clusters of topics, the same as a human brain does. A brain is far more complex, so it might be mirrored by connecting several SOMs to each other.
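To make the SOM idea a bit more concrete, here is a minimal sketch of a 1-D SOM grouping random RGB colors. The node count, learning rate and neighbourhood radius are arbitrary illustrative choices, not from any particular library:

```python
import numpy as np

def train_som(samples, n_nodes=8, epochs=200, lr=0.5, radius=2.0):
    """Train a tiny 1-D SOM: each node ends up representing a cluster of similar colors."""
    rng = np.random.default_rng(0)
    nodes = rng.random((n_nodes, samples.shape[1]))   # random initial node weights
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs                  # shrink learning and radius over time
        for x in samples:
            # Best matching unit (BMU): the node currently closest to the input color.
            bmu = int(np.argmin(np.linalg.norm(nodes - x, axis=1)))
            # Nodes near the BMU get pulled toward the input too; this neighbourhood
            # pull is what ends up grouping similar colors next to each other.
            dist = np.abs(np.arange(n_nodes) - bmu)
            influence = np.exp(-(dist ** 2) / (2.0 * (radius * decay + 1e-3) ** 2))
            nodes += (lr * decay) * influence[:, None] * (x - nodes)
    return nodes

colors = np.random.default_rng(1).random((300, 3))    # random RGB samples in [0, 1]
som = train_som(colors)

# A new color is assigned to the group of its best matching node:
red = np.array([0.9, 0.1, 0.1])
print("red belongs to node", int(np.argmin(np.linalg.norm(som - red, axis=1))))
```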

If you think of a firetruck, you think of a color, a shape, the firefighters riding it, vehicles with four or more wheels, a ladder and so on. This is because your brain has connected all these pieces of information to the topic "firetruck", but those pieces of information are not fixed. Red may be different variations of red, a ladder may be made from different materials, firefighters may be women as well, and so on.

NikiTo said:
Machines that mimic and even outperform humans can replace humans

This is correct and desirable for some aspects of all our lives. How long do you think it would take for humans to sort millions of Amazon packages each day?

As for the AI aspect itself, I did some research because we're planning to have our online game supervised by an AI. That AI should control the game's balance, create quests for players, and create the 3D models and animations needed for those quests to take place.

Shaarigan said:
Is this still the success of the programmer who created the AI, or do we have a slight transition into self-aware intelligent beings?

Yes, we have a slight transition.

Shaarigan said:
I agree that most of the time AI is mentioned these days, it really means some kind of supervised automation, because there is no "real" learning process in it, just a huge amount of entropy from tons of data. So how would a "real" AI behave in art?

I always make a distinction between current tech and future tech, between practice and theory. Current tech is very supervised. I tried hard to create a fair setup with minimal intervention from my side, but it was really hard. I failed. In one way or another the developer always affects the decision making of the AI. Maybe if I put a Jetson Nano on the back of a real-life hexapod and just threw it into a real forest, it could be a fair setup. But again, making the distinction between theory and practice, it is inevitable that the robot will fail a thousand times and get stuck on every rock out there, so it is inevitable that the developer will go outside and restart the hexapod. In one way or another, the dev must put his hand into the experiment. Unless we have our own universe in which we can simulate bazillions of years of evolution, brute-forcing trial and error on every single atom in it.

Even nowadays we observe indications of unsupervised learning in NNs, and we observe basic intelligence in NNs - hidden feature discovery. A NN, all by itself, was able to figure out that a face requires a nose, eyes and a mouth. It figured this out by itself, without anybody telling it. It draws conclusions. This happens today, right now. But in practice we must be there all the time, telling it what to do indirectly. So I always say "in the future". We have very good indications that intelligence is present in NNs. Of course, you could refuse to acknowledge that what NNs are doing is "learning", if you want. It depends how you look at it. Training NNs is mostly brute-forcing on a supercomputer.

Shaarigan said:
Scientists have already done that with a robot. It was equipped with sensors which measured electricity from the cubes the robot was told to collect. Some cubes were at a different amplitude than others, and the robot learned which cubes are "tasty". So its decisions were based on getting a "tasty" cube.

It is inevitable. From the first year of our lives, we are explicitly taught what is good and what is bad. Then later in our lives we are constantly told what is legal and what is illegal. It happens with humans too. It is inevitable.

Not long ago, I was arguing with an anti-robotization guy. He was saying that it is not fair to claim that a NN created music, because humans were there all the time correcting it while it composed. I told him - there is no other way. If you don't guide it, it will create music that only it likes - beep beep bebeeeep. It just cannot be any other way. Even a human composer tries various melodies and discards the bad ones. Then the composer shows his music to the public, and if the public dislikes it, his music is discarded. It is impossible to operate without feedback. Even an AGI would need to be told that beep beep is not music and tralala is music. If AI serves humans, it needs to know what humans like and don't like. There is simply no way around it.

Shaarigan said:
I spent some time getting into SOMs (self-organizing maps).

This is unsupervised learning. If you feed the medical data of a lot of patients to a NN, it organizes the data in a logical way. It will put obese people near the people with diabetes, and old people near people with bone problems. It shows signs of using logic. Imagine what it could do in the future.
If you show the network all kinds of trucks, it could put the ambulance close to the firetruck. This makes sense, and we should not blame the NN for it, because we actually wanted the NN to put the tanker truck near the firetruck. The NN is not to blame. The developer now needs to change some parameters of the NN in order to correct the learning process. You remove the term "disaster" from the training set, because "disaster" logically unifies a firetruck and an ambulance, and you don't want that unification. You want it to go toward another logical link, so you add "water" to the training set. (In practice it is much more complicated than that, but what I say is still correct.)
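Just to illustrate that last point with a toy sketch (the vehicles and features below are invented, and a real NN works on learned features rather than a hand-made table): which features you keep in the data decides what ends up "near" what.

```python
import numpy as np

# Hypothetical feature table; 1 means the property applies to that vehicle.
vehicles = {
    "firetruck": {"siren": 1, "disaster": 1, "water": 1, "heavy": 1},
    "ambulance": {"siren": 1, "disaster": 1, "water": 0, "heavy": 0},
    "tanker":    {"siren": 0, "disaster": 0, "water": 1, "heavy": 1},
}

def closest_to(name, feature_set):
    """Return the vehicle nearest to `name`, using only the chosen features."""
    vec = lambda v: np.array([vehicles[v][f] for f in feature_set], dtype=float)
    others = [v for v in vehicles if v != name]
    return min(others, key=lambda v: np.linalg.norm(vec(v) - vec(name)))

# With "disaster" in the data, the firetruck lands next to the ambulance:
print(closest_to("firetruck", ["siren", "disaster"]))   # -> ambulance
# Shift the features toward "water", and it lands next to the tanker instead:
print(closest_to("firetruck", ["water", "heavy"]))      # -> tanker
```

Dropping "disaster" and adding "water"-like features changes the neighbourhood, which is the kind of correction described above, just in miniature.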

The interesting thing is that the boundaries between terms are washed out. It is learning, but very supervised - though it is normal to supervise a human kid while it learns too. And it is intelligence, but it could be seen as brute force too, or as "simple" math.

Shaarigan said:
This is correct and desirable for some aspects of all our lives. How long do you think it would take for humans to sort millions of Amazon packages each day?

That desirable vs. undesirable line sometimes passes right through the middle of the AB. We need it to be super intelligent so it can differentiate between a regular bush and a soldier camouflaged as a bush. But at the same time we don't want it to be too intelligent, or it could decide not to obey. Super complicated.

Shaarigan said:
As for the AI aspect itself, I did some research because we're planning to have our online game supervised by an AI. That AI should control the game's balance, create quests for players, and create the 3D models and animations needed for those quests to take place.

It looks like AI is the future. I think it is a good decision to get your hands on AI. When there is something I don't understand and I cannot find the answer on Google, I could ask you, and you could help me.

I have been learning about NNs theoretically, and I coded the base for a NN API. I need to finish a shader first, then I will dive into NNs practically using that API. Right when you think you are finishing, it always takes a few days more to finish. Something unexpected always pops up.

Something interesting to read about - "super-Turing neural networks", or hypercomputation. I don't understand this very well yet. I have heard it can even surpass regular quantum computers. Turing believed that a Turing machine can precisely mimic a human in all of his external behavior, but super-Turing machines can solve problems that Turing machines cannot. I hope that when I learn regular NNs well enough, I will understand super-Turing NNs too. The future of robots promises to be awesome. Logic tells me that if it looks like art, if it smells like art, if it moves like art, it is art. But illogically I would refuse the autograph of a robot. Maybe in the future this feeling of mine will change; it depends on how advanced the robots/AI of the future will be. So far, I refuse art coming from a machine. Illogically.

NikiTo said:
But in practice we must be there all the time, telling it what to do indirectly

How is this different from a child going to school?

NikiTo said:
Something interesting to read about

I read everything starting from http://www.ai-junkie.com/

It requires Flash and is a bit outdated, but you can find the tutorials on Google.

Shaarigan said:
How is this different from a child going to school?

It is not different. But this is one of the arguments of toxic anti-AI people - "AI must be left completely alone; if you give it a single hint, it is not intelligent." What anti-AI people ask for is not practically possible. And humans learn in school for more than 10 years, so it is not a fair way to criticize AI.
We want to minimize the need to help the AI, but it cannot be minimized down to zero. It is impossible to tell it nothing at all.

Thanks for the link!

You can pay a human actor to behave, in all respects, as if he is in love with you, but you cannot pay him to actually be in love with you. You can program a robot to behave as if it is in love with you, but you cannot program it to actually be in love with you. Same difference.

a light breeze said:

You can pay a human actor to behave, in all respects, as if he is in love with you, but you cannot pay him to actually be in love with you. You can program a robot to behave as if it is in love with you, but you cannot program it to actually be in love with you. Same difference.

I am aware of it. Lots of people who fall in love with a pillow waifu are aware of this too. Even being aware of it, it could still work out. It works with a pillow; it should work much better with a sophisticated robot.

Now imagine that the robot has more neurons than a human. And now imagine that its neuronal connections evolve in such a way that it decides to act as if it loves me. It was not programmed for it, but it ended up acting as if it loves me.
(it was not programmed to love me) + (it happened by luck) + (it acts as if it loves me) = pretty much valid true love.

And if it is short on IQ, it can still serve me as a $100 million robotized pillow waifu.

To add to it - you have no way to know for sure that somebody loves you. People live with somebody for 10 years only to find out that all that time their partner had a lover…

This topic is closed to new replies.
