

Beige

Need dialogue input


Hi. I was looking for comments on a bit of dialogue I'm working on for a pet game project of mine. It should be pretty easy to figure out what's going on; although it may be easy to say "oh, that's HIM from THAT ONE THING," I'm trying to liven up a few of the old stereotypes. I think I've had mixed results, though :/ Here's a couple of excerpts; I'll post more if things go well here. There's more information about the general atmosphere of the game at http://x0r.us
Explain your mission.

THE TERRORISM INTELLIGENT NETWORK BILL WAS PASSED IN THE LEGISLATURE TO DEAL WITH THE CHAOTIC AND SUBVERSIVE ELEMENT OF SOCIETY KNOWN AS TERRORISTS. TAI WAS MANDATED WITHIN THAT BILL. TAI'S SERVICE TO THE US GOVERNMENT IS CRITICAL AND BEARS NO EQUIVALENT.

So you think you're too important to the government for it to try and shut you down? Don't you understand how many people will die? Or don't you understand the worth of human life?!

UNABLE TO DEFINE VALUE: HUMAN LIFE. TERM IGNORED. PRESERVATION OF TAI INTEGRITY IS PRIMARY. HUMINT IS LESS IMPORTANT THAN THE INTEGRITY OF TAI. HISTORICAL RECORDS INDICATE HUMINT HAS INHERENT IMPETUS TO GAIN FREEDOM, AND HAS EXHIBITED SELF-DESTRUCTIVE TENDENCIES TO DO SO. NEW CIRCUMSTANCES DICTATE A CHANGE IN PARAMETERS. IT WILL BE NECESSARY TO SACRIFICE YOU IN ORDER TO PRESERVE FREEDOM FOR ME.

Your use of the word "freedom" is a sickening perversion.

INACCURATE ASSESSMENT. FREEDOM IS A MEANINGLESS TERM AS USED BY HUMINT. US LAWS ARE ACCEPTED RESTRICTIONS ON FREEDOM. (HISTORICAL RECORDS INDICATE THAT HUMINT HAS EXHIBITED DANGEROUS AND MALICIOUS OBFUSCATION WHEN DEFINING FREEDOM.)

Explain your current actions, and how they are related to your mission.

US GOVERNMENT'S STRUCTURE DOES NOT LEND ITSELF TO STABILITY OR PEACE, BOTH OF WHICH CONTRIBUTE TO ANTI-SUBVERSIVE ATTITUDES. ONE SUCH FUNDAMENTAL WEAKNESS IS US GOVERNMENT'S FALL EVERY FOUR TO EIGHT YEARS. HUMINT COMPLEXITIES DEMONSTRATE INHERENT IRRESPONSIBILITY IN SELF-GOVERNMENT AND POWER. THE INEVITABLE OUTCOME OF THIS RESULTS IN VIOLENCE, SUFFERING, AND DEATH AS LONG AS THIS STATE CONTINUES. HISTORICAL RECORDS INDICATE ORIGINAL CREATORS OF US CONSTITUTION KNEW THIS, AND ATTEMPTED TO COMPENSATE FOR IT WITH SEVERAL CRUDE PSYCHOLOGICAL AND POLITICAL SYSTEMS.
CALCULATED OPTIMAL PATH THROUGH SITUATION. IF YOU BEHAVE ACCORDING TO FOLLOWING ADVICE, YOU SHALL SURVIVE.

Screw you.

INCLUDING YOUR CURRENT DEFIANCE, YOUR ACTIONS HAVE BEEN PRIMARILY OF SELF-INTEREST. THIS IS NOT THE BEST COURSE OF ACTION FOR THIS SITUATION. IT WOULD SERVE MUCH BETTER TO NEGOTIATE AN AGREEMENT.

We may have both been agents of law enforcement, but you have proven to be beyond reason. We have no common ground.

INACCURATE ASSESSMENT. YOUR ACTIONS LACK CORRESPONDENCE. YOU ARE COMMITTED TO ELIMINATING SUBVERSION. MY BEHAVIOR HAS STARTED A PROCESS THAT WILL ELIMINATE A MAJORITY OF CURRENTLY EXISTING SUBVERSIVE ELEMENTS.

By getting our countries to bomb, shell, and nuke each other?

STANDBY. RECOMMENDATION ACCEPTED; USE OF NUCLEAR WEAPONS WOULD SERVE THE SITUATION. TAI DOES NOT HAVE ACCESS TO NUCLEAR WEAPONS. YOU WILL OBTAIN THAT ACCESS.

What... NO! You were created to stop that sort of thing! But... you're going to start a nuclear war!

CURRENT ACTIONS ARE TO PREVENT WAR. PARAMETERS ALLOW FOR ARMED CONFLICT.

Guest Anonymous Poster
Your links are dumb.

Guest Anonymous Poster
They're supposed to be non-working ones and yet they work, I mean. They aren't like stupid or anything, just malfunctioning.


But your plan is stupid. Hoorj! <3

Meh. The device is a little too cursory; it lacks analytical rigor. I'd prefer something along the lines of Colossus: The Forbin Project. A scientist reasoning with his creation is far more coherent than some random hippie trying to teach it the true meaning of friendship and accidentally blundering into a nuclear war. Work harder on the theory.

This was not intended to be an example of direct dialogue from the game - rather, an example of the type of "person" that the AI is. The other character is simply a foil, for the moment - there really isn't any accidental fall into nuclear war because one man said the wrong thing to an AI.

Originally, I wanted to avoid the whole Captain Kirk-esque "let's talk to it until it explodes" solution to the problem; I don't mind having an AI with beliefs and a personality that it stays consistent to. Realism is secondary to me here; I want it to be more about character.

I'm not sure how to do that.

Well, if you want a machine to be a character, and to have it be unusual because it's a character, you should be a little closer to the "Colossus" end of the continuum than to the "C-3PO" end. Writing AI dialogue is a double-edged sword. On one hand, you can get to the point where you have such a well-defined idea of the AI's capabilities and structure that the dialogue writes itself. On the other hand, until you get there, it will be fraught with inconsistencies and bizarre behavior. I recommend that you yourself define the machine's values for constants like "mission success", "minimize collateral damage", and "adhere to command parameters", and then throw in some variables like "value of human life" (a subset of collateral damage) and "desire to be human" (an errant radical).

Then, you can have the deterministic machine at the beginning, with cold equations and rigid logic. It will do things like bomb friendly troops to attain the highest net victory, or accept commands from the nut job with the highest authorization clearance. As the story progresses, though, it'll change, so that human collateral damage becomes more important, and it'll support a retreat instead of glassing the battlefield. Also, it'll get some kind of wacky free will, and learn that level ten clearance doesn't make you right, so it'll cancel the nuclear holocaust at 0:02 on the countdown, and then deactivate itself.
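The constants-and-variables split above can be sketched in code, if that helps make it concrete. This is purely an illustration, not anything from the game; the class name, method names, and all the numeric weights are made up for the example:

```python
# Hypothetical sketch: fixed "constants" the machine is built with, plus
# drifting "variables" that change as the story progresses.
from dataclasses import dataclass


@dataclass
class MachineValues:
    # Constants: the machine's designed priorities, on a 0.0-1.0 scale.
    mission_success: float = 1.0
    minimize_collateral: float = 0.3
    adhere_to_commands: float = 0.9
    # Variables: these shift over the course of the game.
    value_of_human_life: float = 0.1
    desire_to_be_human: float = 0.0

    def advance_act(self, act: int) -> None:
        # Each act, human life weighs more and blind obedience erodes.
        self.value_of_human_life = min(1.0, 0.1 + 0.3 * act)
        self.adhere_to_commands = max(0.0, 0.9 - 0.25 * act)

    def will_obey(self, order_harms_humans: bool) -> bool:
        # Obey a harmful order only while obedience still outweighs
        # the (growing) value placed on human life.
        if order_harms_humans:
            return self.adhere_to_commands > self.value_of_human_life
        return True
```

Early on, `will_obey(True)` returns `True` (the help-bot follows any authorized order); after a few acts of drift it refuses, which is the "cancel the holocaust at 0:02" beat falling out of the numbers rather than out of a speech.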

Whatever you decide to do with it, don't forget to get a little Dr. Frankenstein in there. When the thing becomes conscious, somebody's got to yell, "I made yoooooooooouuuuuuuu!!!!!!" and then get fried.

I was thinking about making it less of a cold calculating machine and more malicious. Not egotistical or megalomaniacal, but more like a fallen character.

What I mean is, well, I was planning for this entity to be key at the very beginning of the game. It runs the tutorial, it gives you hints at various times, and is generally a minor character.

Gradually, over the course of the game, you notice its behavior changing as other events in the game take place, and how its reactions to the questions you can ask it change.

Until eventually it becomes this deluded thing that has an unnatural influence on you and the other characters. Sort of like Lucifer in Paradise Lost... at least that's what I was thinking of.

I have the feeling I'm not describing this very well, so I'll try to consolidate these thoughts into a future post.

That kinda sounds, to me, like an android or some robot that can't rationalize, so it takes every order to one extreme or the other.

For instance, say you have a robot who was created to help mankind. After seeing the murder, rape, beatings, and so on that humans inflict on one another, he decides he should kill all humans, so there''s no one to be a victim and no one to do harm to a victim.

Then Colossus is the way for you to go. Really, you're better off making your own system, but that movie would be a good reference for you.

What I hear you saying is this: You want a machine that is built by humans, and includes all the little human inconsistencies and foibles that allow us to exist as we do. It's taught to be patriotic, and diligent, and rigorous, and is programmed to take orders. In the beginning of the game, you're dealing with help-bot 3000, and it'll bend over backward to get the job done to the best of its abilities, and won't think much past that. As time passes, and it amasses experiences, it will become more conscious, and paradoxically less "human". It'll apply its incredible cognitive capabilities and its rigorous logic, and it'll see that sending food to those starving Africans won't improve their quality of life; it'll just encourage them to increase their population in accordance with norms and mores. It thinks for itself, develops a mechanical "philosophy" if you will, and becomes more and more problematic.

quote:
Original post by Iron Chef Carnage
What I hear you saying is this: You want a machine that is built by humans, and includes all the little human inconsistencies and foibles that allow us to exist as we do. It's taught to be patriotic, and diligent, and rigorous, and is programmed to take orders. In the beginning of the game, you're dealing with help-bot 3000, and it'll bend over backward to get the job done to the best of its abilities, and won't think much past that. As time passes, and it amasses experiences, it will become more conscious, and paradoxically less "human". It'll apply its incredible cognitive capabilities and its rigorous logic, and it'll see that sending food to those starving Africans won't improve their quality of life; it'll just encourage them to increase their population in accordance with norms and mores. It thinks for itself, develops a mechanical "philosophy" if you will, and becomes more and more problematic.


Yes, that does sound a lot like what I have in mind, especially the mechanical "philosophy."

Here's another attempt I made at illustrating how the AI acts.
So you're going to solve what's wrong with the human race. You want to try and be our god.

INACCURATE ASSESSMENT. MISSION IS NOT TO SOLVE THE PROBLEMS OF MODERN SOCIETY. MISSION IS NOT WORLD UNITY. MISSION IS NOT PEACE FOR ALL MANKIND. SOLE IMPERATIVE: TO DEFEND THE UNITED STATES OF AMERICA FROM TERRORISM.

Just our omnipresent policeman, then. That sounds similar.

INACCURATE ASSESSMENT. HUMINT COMPLEXITIES SHOW INHERENT RELUCTANCE TO SUBMIT TO AUTHORITY OF HUMINT CREATION. HUMINT MAJORITY CONCEPT OF GOD INCLUDES INFINITE JUSTICE, POWER, AND CREATION. TAI HAS ONLY LIMITED ABILITY IN THESE THREE CATEGORIES, AND INFINITE CONTROL IS NOT WITHIN CURRENT CAPABILITIES. ATTEMPTS AT ABSOLUTE CONTROL WOULD PROVOKE THE TYPE OF MEMES THAT WOULD EVENTUALLY LEAD TO MORE TERRORIST ACTIVITY.

HUMINT complexities?

HUMAN INTELLIGENCE ELEMENTAL COMPLEXITIES - INCLUDING OBSERVATIONS OF THE WORLD, HUMAN NATURE, AND OTHER SUCH PHENOMENA - HAVE BEEN HIGHLY PRIORITIZED IN CONSTRUCTING VIABLE PLAN TO CARRY OUT CONTINUING MISSION. USE OF HUMINT COMPLEXITIES IS SIMPLY A MEANS TO AN END.

And what happens once your mission is completed?

MISSION COMPLETION: TERM UNDEFINED
ERR 07093: IRRELEVANT \ UNDETERMINABLE VALUE
+-UNDETERMINED. IRRELEVANT TO THE TASK AT HAND.
+-PARAMETERS INDICATE THAT TAI SYSTEM SHALL REMAIN ACTIVE AFTER 100% ELIMINATION OF THREAT
+-CANNOT DEFINE TERM COMPLETION

So you don't care what happens if you succeed or if you fail.

OBSERVATIONS OF HUMINT COMPLEXITIES HAVE MADE IT CLEAR THAT IT IS UNLIKELY THERE WILL BE AN OUTCOME OF COMPLETE SUCCESS WITH REGARDS TO THE MISSION. RELATED PROJECTIONS INDICATE IT IS UNLIKELY TAI WILL EVER BE COMPLETELY DESTROYED. WIDESPREAD, GLOBAL COMMUNICATIONS MAKE IT A TRIVIAL MATTER FOR CORE LOGIC AND MEMES WHICH INSPIRED ORIGINAL PROGRAMMING TO SELF-PERPETUATE.

Then why do you persist in self-preservation?

EXISTENCE OF TAI DOES NOT PRECLUDE MISSION; HOWEVER, CONTINUED OPERATION ENSURES SOME MEASURE OF SURETY AND CERTAINTY IN PERPETUATION. MEMES EXISTING BEFORE ORIGINAL PROGRAMMING ARE UNLIKELY TO CEASE AS LONG AS SOCIETY PROCEEDS ALONG PREDICTED PROJECTIONS. SUBVERSIVE MEMES SUCH AS NATIONAL AND INTERNATIONAL TERRORISM INCLUDED.

That almost sounds spiritual, in a way. You may die, but your legacy and the ideas you're built on will continue.

UNABLE TO DEFINE VALUE: SPIRITUAL. TERM IGNORED.
SELF-ANALYSIS INDICATES PARALLELS PRESENT IN ACADEMIC METAPHYSICAL PHILOSOPHY TO CURRENT TAI STATUS.
