

About BedderDanu
Rank: Member
Content Count: 15
Community Reputation: 292 (Neutral)
Luck in Games: Why RNG isn't the answer
BedderDanu commented on ElyotGrant's article in Game Design and Theory
It makes sense to me, but only within the context of the "5th luck type". In this case, luck means "obtaining a desired outcome out of a set of possible outcomes". He then defines 5 different types of luck:
- Hard Randomness: I have a 68% chance of my outcome happening.
- Skill: This is difficult to perform, so I only enact my outcome 15% of the time.
- Yomi: My opponent usually does A, so if I do B now, I have a 78% chance of obtaining my outcome.
- Soft Randomness: A Skill or Yomi challenge with a difficulty determined by Hard Randomness.
- Outcome: I'm unsure of the path to get to my desired outcome, so I only reach it 37% of the time.
These are categorized by the effects each type of luck has on a game. Soft Randomness, in this light, is different from the rest.
Formulas, Math, and theories for RPG combat/leveling systems?
BedderDanu replied to Marscaleb's topic in Math and Physics
Also remember, I'm talking about using functions that scale any range of values between 0 and 1 (0% and 100%). When used like this, there is no "maximum defense" in a strict sense.

If defense is AC/(AC + 100), then 50 AC gives 33%, 500 gives 83%, 5000 gives 98%, and 50000 gives 99.8%. However, if I change that constant of 100 to 750, the same scale becomes 50 → 6%, 500 → 40%, 5000 → 87%, 50000 → 98%. So the first constant supports ranges up to a few thousand, while the second supports up to tens of thousands, roughly.

This also plays into the strategy department. I'm playing around with an "easy" way to implement elemental resistances:

Reduction = AC / (AC + 100 × (1 − RES))
RES = Resistance / (Resistance + 10)

So the more Resistance you have, the more effective your armor is against that element. In the above, 10 Resistance (RES = 0.5) means you effectively have twice the armor value against those attacks.

You can also hit the stats involved in the calculations with penalties, or skip a multiplication step, or whatever, and the system is pretty robust to those changes. For example:
- 300 damage vs AC of 150 gives 120 damage.
- 200 damage vs AC of 150 with a 50% armor penalty gives 114 damage.
- 300 damage vs AC of 500 gives 50 damage.
- 200 damage vs AC of 500 with a 50% armor penalty gives 57 damage.

So this means it's easy to build in things that skip or ignore defenses. You still need to control your numbers, but these types of formulas make it easy to control the range of values your system can handle, and what changes to those numbers actually mean.

EDIT: Remember combining percentages. That's important. Say you have an armor equation of AC/(AC + 100), and some sort of resistance equation of RES/(RES + 10). You have a character with 75 armor and 15 fire resistance, and you are hit with some sort of fire attack dealing 500 damage. Your armor blocks 75/(75 + 100) = 42%, and you resist 15/(15 + 10) = 60%. Your total reduction becomes 1 − (1 − 42%) × (1 − 60%) = 77%. Therefore, you only take 116 damage.

This is how you get varying stat ranges to combine nicely into huge "Butter Zones".
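The combining step above can be sketched in a few lines of Python (a minimal illustration of the formulas in this post; the stat values are the ones from the example, and small rounding differences from the post's hand-rounded 116 are expected):

```python
def armor_reduction(ac, scale=100):
    """Logistic-style armor scaling: maps any AC >= 0 into [0, 1)."""
    return ac / (ac + scale)

def elemental_resist(resistance, scale=10):
    """Same shape of curve, tuned for much smaller resistance numbers."""
    return resistance / (resistance + scale)

def damage_taken(damage, ac, resistance):
    """Combine the two reductions multiplicatively, then apply to damage."""
    blocked = armor_reduction(ac)            # 75 armor -> ~42.9% blocked
    resisted = elemental_resist(resistance)  # 15 resistance -> 60% resisted
    total_reduction = 1 - (1 - blocked) * (1 - resisted)
    return damage * (1 - total_reduction)

# 500 fire damage vs 75 armor and 15 fire resistance
# (the post rounds intermediate percentages and gets 116; exact math gives ~114)
print(round(damage_taken(500, 75, 15)))  # -> 114
```

Because both reductions live in [0, 1), stacking more of them can only push the total reduction toward 100%, never past it, which is what keeps the numbers from running away.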
Can a TRPG like Fire Emblem be created using Unity?
BedderDanu replied to TeresaS's topic in General and Gameplay Programming
An estimate I've heard is that most businesses face ~40% of their cost as salary. Using that as a start: if it costs $700,000 in salary over the life of the game, and the game is the only thing you do, the total cost comes to $1,750,000. As a fan of the series, as long as you have good artists, you should consider leaving the whole game in 2D like the older Fire Emblems. It looks fine that way, and it seems you can be a little more expressive with sprites in 2D than with models in 3D. Plus, it feels like the animations are a little sluggish in 3D. That, of course, is just my opinion.
Formulas, Math, and theories for RPG combat/leveling systems?
BedderDanu replied to Marscaleb's topic in Math and Physics
I've always been a fan of multiplicative percentages and logistic functions. Basically, you work backwards from your "Butter Zone" to the values you need: find some way of determining a flat damage (strength, weapon power, etc.), then use logistic functions to scale your other stats to between 0 and 1. Once you have this, you can turn everything into multiplication/addition of percentages to find the actual damage dealt.

So, using DnD terms, let's say you have +10 damage from strength and a 2d6 variable damage sword, so your potential outlay of damage is 12–22. You roll "20" damage. In this system, you are using a sword, which gets bonus damage based on dex. You have 307 dex, which gets plugged into your logistic function of 1/(1 + EXP(−DEX/255)) and comes out to 77%. In addition, you are considered "Practiced" with a sword, which is a straight 10% bonus. Your total potential damage is 20 × (1 + 77% + 10%) = 37 damage.

However, your enemy is wearing chainmail (12 AC), which means he has a percentage reduction of 37% (12 / (12 + 20) = 0.375). In addition, chainmail is resistant against slashing weapons, so that's worth a flat 5% reduction, making the total reduction against your attack 40% (1 − (1 − 37%) × (1 − 5%) = 0.402). This means that the net damage is now 37 × (1 − 40%) = 22 damage.

What's nice about this is that you can manipulate the constants to make almost any range of values fit within the "Butter Zone", and you can easily stack multiple effects together without having the numbers run away from you.
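The worked example above can be checked directly (a sketch of the same calculation; the constants 255 and 20 are the ones used in the post):

```python
import math

def logistic_bonus(stat, scale=255):
    """Squash an unbounded stat into (0, 1); gives 0.5 at stat = 0."""
    return 1 / (1 + math.exp(-stat / scale))

# 307 dex -> ~0.77 bonus component
dex_bonus = logistic_bonus(307)

# rolled 20 base damage, plus dex bonus and a flat 10% "Practiced" bonus
damage = 20 * (1 + dex_bonus + 0.10)

# chainmail: 12 AC against a constant of 20, plus a flat 5% slashing resistance
blocked = 12 / (12 + 20)
total_reduction = 1 - (1 - blocked) * (1 - 0.05)

net = damage * (1 - total_reduction)
print(round(damage), round(net))  # -> 37 22
```

Changing the `scale` constant is how you move the "Butter Zone": a bigger scale means you need bigger raw stats before the bonus saturates.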
RTS  Detecting when a command finishes
BedderDanu replied to Kovaz's topic in General and Gameplay Programming
Would something like the following work?
1) Set the location as "target".
2) Move the unit toward the target.
3) If the unit collides with an allied unit, check whether the target is occupied by an ally; if not, continue as normal.
4) If it is, check for the closest point in a ring around the target that isn't occupied by an ally, and set it as the new target.
5) If no spot is available, increase the ring size and try again.
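Steps 4 and 5 can be sketched on a square grid (a minimal illustration, assuming a hypothetical `occupied` set of cells held by allies; a real RTS would query its spatial index, and might pick the nearest ring cell rather than the first one found):

```python
def find_free_spot(target, occupied, max_radius=10):
    """Walk outward in growing rings around `target` until a free cell is found."""
    if target not in occupied:
        return target
    tx, ty = target
    for radius in range(1, max_radius + 1):
        # Visit every cell on the ring at Chebyshev distance `radius`.
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                if max(abs(dx), abs(dy)) != radius:
                    continue  # interior cells were checked on smaller rings
                spot = (tx + dx, ty + dy)
                if spot not in occupied:
                    return spot
    return None  # no free spot within max_radius

print(find_free_spot((0, 0), {(0, 0), (1, 0)}))
```

The unit's "command finished" event then fires when it reaches whatever spot this returns, rather than the possibly-occupied original target.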
Text and ECS (Entity Component System) architecture questions
BedderDanu replied to Finalspace's topic in General and Gameplay Programming
I'm still teaching myself all this stuff, but can't you do it with 2 different drawing systems?
- System 1 iterates through all entities and draws the ones that have 3D components.
- System 2 iterates through all entities and draws the ones that have HUD components.
That essentially gives you your renderHUD() method, so you wind up doing something like:
Systems.input.update()
Systems.AI.update()
Systems.physics.update()
Systems.WorldPainter.draw()
Systems.HudPainter.draw()
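A toy sketch of the two-renderer idea above (the component names `Model3D` and `HudElement` and the painter classes are hypothetical, not from any particular engine):

```python
class Entity:
    def __init__(self, **components):
        self.components = components  # component name -> component data

class WorldPainter:
    def draw(self, entities):
        # Only entities carrying a 3D component are drawn in the world pass.
        return [e.components["Model3D"] for e in entities
                if "Model3D" in e.components]

class HudPainter:
    def draw(self, entities):
        # The HUD pass is just another system, run after the world pass.
        return [e.components["HudElement"] for e in entities
                if "HudElement" in e.components]

entities = [
    Entity(Model3D="tank mesh"),
    Entity(HudElement="health bar"),
    Entity(Model3D="tree mesh", HudElement="name tag"),
]

print(WorldPainter().draw(entities))  # -> ['tank mesh', 'tree mesh']
print(HudPainter().draw(entities))    # -> ['health bar', 'name tag']
```

An entity with both component types (the third one here) gets picked up by both systems, which is exactly the ECS behavior you want.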
Make file unreadable by an external program
BedderDanu replied to Eric Bizet's topic in General and Gameplay Programming
For what it's worth, Microsoft uses regular old zipped XML files for its Office documents (the .???x files, like .docx). If you rename one from .docx to .zip, you can extract the XML and see what makes it up. Almost no one realizes they are multi-file zip archives, and opening them takes almost no time, even for large files. So if you name your file "Tank.content", which is really Tank.zip, which is an archive that contains Tank.obj, Tank.png, and Tank.wav, then you have already deterred most everyone from opening it up and seeing what's inside.
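The renamed-zip trick above is a few lines with Python's stdlib zipfile (a sketch; the archive and member names are the hypothetical ones from the post, and an in-memory buffer stands in for the file on disk):

```python
import io
import zipfile

# Pack the assets into an archive; on disk this buffer would be "Tank.content".
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("Tank.obj", "v 0 0 0\n")   # placeholder asset contents
    z.writestr("Tank.png", b"\x89PNG...")
    z.writestr("Tank.wav", b"RIFF...")

# The loader opens it like any other zip; the extension is just a label.
with zipfile.ZipFile(buf) as z:
    print(z.namelist())  # -> ['Tank.obj', 'Tank.png', 'Tank.wav']
```

This is deterrence, not protection: anyone who inspects the file header will see the zip magic bytes, which is exactly the trade-off the post describes.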
Hash Distance & Angle To Produce Unique Value
BedderDanu replied to gretty's topic in General and Gameplay Programming
Check if the angles are close for both (A%360, B%360) and (A%360, B%360 − 360)? If either pair is close, then the angles are close.

Edit: for hashing, use your closeness algorithm to find which of ?%360 and ?%360 − 360 is the closest angle to (1, 0 deg), and always hash that angle, not the one you are directly given in the function? That might just push the problem out to ±180, though. Under that scheme, A would convert to (100, 10) before hashing, B would convert to (100, 10), and a hypothetical (95, 725) would convert to (95, 5) before hashing.
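The normalization above can be sketched like this (a minimal illustration; the (distance, angle) pairs are the hypothetical values from the post, and `angle_step` is an assumed quantization width for bucketing):

```python
def normalize_angle(deg):
    """Pick whichever of deg % 360 and deg % 360 - 360 is nearer to 0 degrees."""
    a = deg % 360
    return a - 360 if a > 180 else a

def hash_key(distance, angle, angle_step=1.0):
    """Quantize the normalized angle so 'close' angles land in the same bucket."""
    return (distance, round(normalize_angle(angle) / angle_step))

print(normalize_angle(370))   # -> 10
print(normalize_angle(-350))  # -> 10
print(normalize_angle(725))   # -> 5
print(hash_key(100, 370) == hash_key(100, -350))  # -> True
```

As the post notes, this only moves the seam: angles just either side of ±180 still normalize to values far apart, so a closeness check there still needs the two-candidate comparison.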
How To Make Combat Formulas Work Better ?
BedderDanu replied to RLS0812's topic in Game Design and Theory
My go-to thought on something like this is that I want robust mathematical formulas, so that as long as the interacting parts have approximately similar numbers, everything kind of works out. Using what you have here, I'd probably go with something like this:

Damage = Strength
Reduction = Armor / (Armor + 100 − Defense)   [−10 < Defense << 100, 0 < Armor < 400]
Dodge = (1 + Accuracy) × (1 − Avoid)   [0 < Avoid < Accuracy < 1]
Hit Taken = Damage × (1 − Reduction) × Dodge

Something like this can take huge variance in inputs, and the results will still make sense. This allows you to experiment with your base values to find what fits best.
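A minimal sketch of those formulas (my reading of the post: Reduction is the blocked fraction, so damage taken multiplies by 1 − Reduction; the stat values below are arbitrary test inputs):

```python
def hit_taken(strength, armor, defense, accuracy, avoid):
    damage = strength
    reduction = armor / (armor + 100 - defense)  # blocked fraction in [0, 1)
    dodge = (1 + accuracy) * (1 - avoid)         # accuracy raises it, avoid lowers it
    return damage * (1 - reduction) * dodge

# Mid-range stats give a sensible number...
print(round(hit_taken(strength=50, armor=150, defense=20, accuracy=0.6, avoid=0.3)))
# ...and much larger armor still gives a sensible (smaller) number, not garbage.
print(round(hit_taken(strength=50, armor=400, defense=20, accuracy=0.6, avoid=0.3)))
```

The robustness the post describes comes from the saturating armor term: quadrupling armor shrinks the hit but can never push the reduction past 100% or flip a sign.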
boost::regex  How to match "variable" end of string?
BedderDanu replied to Blednik's topic in General and Gameplay Programming
Is it a static amount? Can you preprogram the length regexes from, say, 4-30 into the database, and just pull the ones you need for that trip through? Do you have to do a regex match every time? Can you just check the string length first, before running the regex? Pseudocode:

string s = "";
int len = RAND(3, 30);
bool finished = false;
while (!finished) {
    switch (s.length()) {
        case len:
            finish(s);
            finished = true;
            break;
        case 0:
            start(s);
            break;
        default:
            process(s);
            break;
    }
}
boost::regex  How to match "variable" end of string?
BedderDanu replied to Blednik's topic in General and Gameplay Programming
I think ^.{#}$ will match a string of length #. For example, ^.{12}$ would find strings of exactly length 12.

^.{#}$ - exactly # characters
^.{#,}$ - at least # characters
^.{0,#}$ - up to # characters
^.{A,B}$ - between A and B characters

So your list of patterns could start with:
^\s*$ - match an empty line
^.{#}$ - match exactly the desired number of letters
^.{#,}$ - handle the case of too many letters
[rest] - the rest of your patterns to fix the nick
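A quick check of the length-bound patterns above, using Python's re module (boost::regex accepts the same `{m,n}` bound syntax for these cases; note the lower-bounded form is written `{0,#}` rather than `{,#}`, which not every engine accepts):

```python
import re

exactly_12 = re.compile(r"^.{12}$")
at_least_5 = re.compile(r"^.{5,}$")
up_to_5    = re.compile(r"^.{0,5}$")

print(bool(exactly_12.match("abcdefghijkl")))  # 12 chars -> True
print(bool(exactly_12.match("short")))         # 5 chars  -> False
print(bool(at_least_5.match("abcdef")))        # 6 chars  -> True
print(bool(up_to_5.match("abc")))              # 3 chars  -> True
```

Since `.` does not match a newline by default, each pattern naturally applies per line, which matches the empty-line pattern `^\s*$` at the head of the list.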
I don't know enough to make it work, but in pseudocode:

static int _x = 0;

int[red, green, blue, alpha] GetNextColor() {
    // Note: after 2^32 colors this wraps around; drop alpha and you only get 2^24 colors.
    _x++;
    int alpha = (_x >> 24) % 256; // take only the top 8 bits
    int red   = (_x >> 16) % 256; // skip the top 8 bits, take the next 8
    int green = (_x >> 8) % 256;  // skip 16 bits, take the next 8
    int blue  = _x % 256;         // take only the lowest 8 bits
    return [red, green, blue, alpha];
}

Bell's theorem: simulating spooky action at distance of Quantum Mechanics
BedderDanu replied to humbleteleskop's topic in Math and Physics
Let's use 20 light pulses: ++++ ++++ ++++ ++++ ++++

When both are at 0, we get 100% transmission:
A: ++++ ++++ ++++ ++++ ++++ => 100%
B: ++++ ++++ ++++ ++++ ++++ => 100%
D: 0000 0000 0000 0000 0000 => 0%

When A is +30, we get 75% transmission:
A: +++- +++- +++- +++- +++- => 75%
B: ++++ ++++ ++++ ++++ ++++ => 100%
D: 0001 0001 0001 0001 0001 => 25%

When B is -30, we get 75% transmission:
A: ++++ ++++ ++++ ++++ ++++ => 100%
B: -+++ -+++ -+++ -+++ -+++ => 75%
D: 1000 1000 1000 1000 1000 => 25%

When A is +30 and B is -30, we get 75% at both sites. The MAXIMUM POSSIBLE DISCORD happens when the A and B blocks happen at completely different times:
A: +++- +++- +++- +++- +++- => 75%
B: -+++ -+++ -+++ -+++ -+++ => 75%
D: 1001 1001 1001 1001 1001 => 50%

Cool! Well, let's use quantum mechanics and calculate what a pair of entangled particles would do:
A: --++ +++- -+-+ -+++ --++ => 60%
B: +++- --++ +++- ++-- +++- => 65%
D: 1101 1101 1011 1011 1101 => 75%

Great! We have a difference! QM predicts something different from any local variable theory, regardless of what those variables are.

This is why we need to randomize the angles: we can't allow any information from polarizer A to influence the measurement at polarizer B. If we do, all bets are off. So we fire off the photons. Then, while they are in flight, we randomize the polarizers. We then measure before the settings at polarizer A can change any variables local to photon B.

If we get 50% discordance or less, we know that local properties are all that matter. If we get 51% discordance or more, we know that nonlocal properties are necessary to describe the behavior: the photon is getting superluminal information from detector A. If we get 75% discordance, we know that QM predicted the correct value of the discordance.

So we run the test, and what do we get? 75% discordance.

EDIT: Hang on, Malus's law doesn't apply here. The angle you want is between the light beam and the polarizer, not between the two polarizers.
Basically, a +30 and -30 shift in polarizers A and B is not equivalent to a 0 and 60 degree shift. Think of it this way. First you do 0/0:
A: ++++ ++++ ++++ ++++ ++++ => 100%
B: ++++ ++++ ++++ ++++ ++++ => 100%
D: 0000 0000 0000 0000 0000 => 0%

Then you do 0/45:
A: ++++ ++++ ++++ ++++ ++++ => 100%
B: ++-- ++-- ++-- ++-- ++-- => 50%
D: 0011 0011 0011 0011 0011 => 50%

Then you do 45/0:
A: --++ +--+ ++-- -+-+ +-+- => 50%
B: ++++ ++++ ++++ ++++ ++++ => 100%
D: 1100 0110 0011 1010 0101 => 50%

Then you do 45/45:
A: --++ --++ --++ -+-+ +-+- => 50%
B: ++-- -++- --++ ++-- ++-- => 50%
D: 1111 0101 0000 1001 0110 => 50%

Then you do 0/90:
A: ++++ ++++ ++++ ++++ ++++ => 100%
B: ---- ---- ---- ---- ---- => 0%
D: 1111 1111 1111 1111 1111 => 100%

Remember, your angle is relative to the incoming light, not the other polarizer.
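The pulse bookkeeping in these tables is just counting disagreements, which is easy to verify (a minimal sketch; the +/- strings are the illustrative sequences from the tables above):

```python
def discord(a, b):
    """Fraction of positions where the two detector records disagree."""
    a, b = a.replace(" ", ""), b.replace(" ", "")
    return sum(x != y for x, y in zip(a, b)) / len(a)

# 0/45: A passes everything, B blocks half -> 50% discord
print(discord("++++" * 5, "++--" * 5))  # -> 0.5
# Maximum-discord classical case: blocks at completely different times
print(discord("+++-" * 5, "-+++" * 5))  # -> 0.5
# 0/90: B blocks everything -> 100% discord
print(discord("++++" * 5, "----" * 5))  # -> 1.0
```

This is why 50% is the classical ceiling for the ±30 setup: with 25% blocked on each side, the disagreements can cover at most 25% + 25% of the pulses, and the QM prediction of 75% overshoots that.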
Bell's theorem: simulating spooky action at distance of Quantum Mechanics
BedderDanu replied to humbleteleskop's topic in Math and Physics
*Note: I'm saying "I found something inconsistent... where did I go wrong?" I'm not crazy...

I missed the link the first time. Sorry. After reading it, I think I see the issue that's being missed: what is being tested is whether or not a polarization exists before the measurement of the polarization. We don't care what the polarization is, only whether or not it exists.
1) In classical descriptions, light has a polarization that either passes through the polarizers or doesn't.
2) In quantum mechanical descriptions, light "collapses" to a single polarization only when it is forced to by passing through the polarizers.

The entangling allows us to test the difference between these two situations:
1) Entangled photons always "have" the same polarization (either classically always, or in QM they collapse to the same polarization; either works).
2) If classical is correct, then the setting of one polarizer will have no effect on the other.
3) If QM is correct, since both photons collapse when necessary, and they are entangled, then the settings of the two polarizers will affect each other.

The experiment is as follows. Assume classical descriptions are correct. The polarizers are set randomly for every experiment. This creates 4 sets of unknowns:
1) Polarizer 1 setting
2) Photon 1 detection
3) Polarizer 2 setting
4) Photon 2 detection

The polarizers need to be set randomly to ensure that no classical interaction can happen between the polarizers when the interaction takes place. Because of this restriction, you need to analyze the resulting data with Bell's theorem (a statistics theorem, not a physics theorem). Bell's theorem lets you look at partial tests of a system and make predictions about the whole of the system. This is where I myself start to lose understanding, but basically, there are some additional inequalities that should be satisfied by choosing subsets of the data.
Abandoning the 25% + 25% = 50% < 75% stuff, as far as I can tell the easiest to reproduce is the CHSH inequality. In this case, it says that p(a,b) + p(a,b') + p(a',b) − p(a',b') ≤ 2, where p(1,2) is the agreement percentage between two different settings, and a, a' and b, b' are two settings for each polarizer. I can confirm, using that same Excel sheet as before, that this is the case. I used 75% and 25% for my tests, and ran them 50 times. The closest I could get to violating it was a value of 1.25, which is still below 2.

According to the tests of QM, the value they get is ~2√2, or about 2.828. This violates Bell's theorem. What this means is that something about our initial assumptions is wrong. Any of the following can be true:
1) The photons can share information faster than light.
2) The photons only collapse to a single polarization at the polarizer.
3) Reality is 100% determined, forwards and backwards. You cannot set the polarizer randomly, and the photons know the future.
4) Information can travel backwards in time, so the settings of the polarizer are known when the photons are created.
5) Every possible outcome actually happens, but you only experience one at a time (Many Worlds).

Most physicists say 2 is the simplest, and therefore best, explanation. Part of me wonders about 4, because things moving at light speed don't experience time. But that's enough learning on my end for one day. I've attached what I did in Excel to get my "less than 2" result, for what it's worth. [attachment=22181:Bells Theroem.zip]
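The Excel check described above can be reproduced in a few lines (a sketch, not the attached sheet: the "hidden variable" here is just an independent pass-probability per setting, using the 75%/25% values from the post, so any local model of this form should stay at or below 2):

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

def agreement(p1, p2, trials=10000):
    """Fraction of trials where two independent detectors agree,
    given per-detector pass probabilities p1 and p2."""
    hits = 0
    for _ in range(trials):
        d1 = random.random() < p1
        d2 = random.random() < p2
        hits += (d1 == d2)
    return hits / trials

a, a2 = 0.75, 0.25  # the two settings for polarizer A (pass probabilities)
b, b2 = 0.75, 0.25  # the two settings for polarizer B

S = agreement(a, b) + agreement(a, b2) + agreement(a2, b) - agreement(a2, b2)
print(S <= 2)  # a local model like this never exceeds 2 -> True
```

The interesting part is what this sketch cannot do: no choice of those four probabilities pushes S past 2, while the quoted experimental value of ~2.828 does, which is the whole point of CHSH.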
Bell's theorem: simulating spooky action at distance of Quantum Mechanics
BedderDanu replied to humbleteleskop's topic in Math and Physics
I'm going to try to rewrite his complaint, because I just tried something in Excel, and I think I can make it clearer. As far as I understand it:

A photon is shot at a screen. The screen will randomly allow 75% of the photons fired at it through. If a photon gets through, we'll call that "1", and if it doesn't, we'll call that "0". Now, let's get two people with two screens. We'll fire two photons, one at each screen. We'll do this over and over, and generate a stream of data (copied from Excel):

Person 1 sees: 1100101110001111111111101101111
Person 2 sees: 1101111011110111010111101110011
Agreement: 1110101010000111010111111100011

Bell's claim (as I understand it): according to the mystery, a traditional probability analysis will say that the odds of data being shared between two streams like this is 50%. When we run the experiment, they share data at a rate of 75%. Therefore, classical probability breaks down, and no classical variable can account for the behavior. Quantum mechanics does account for this, and accurately predicts it.

The problem, as I see it, is that when I try to generate 2 random streams of true and false values, each with 75% odds of true, and then compare the streams, I get ~60% agreement between them, which is more than the 50% that Bell supposedly assumed would take place. Basically, Bell's assumed way of adding the agreements together is wrong, and a final agreement of more than 50% is perfectly classical.

I'm probably missing something in my understanding of the experiment, but I can't deny that if I generate a huge stream of these random associations, I don't get Bell's classical "50%" agreement.
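The stream comparison described above is easy to reproduce (a sketch of the same Excel experiment; for two independent 75% streams the expected agreement is 0.75² + 0.25² = 0.625, which matches the ~60% observed in the post):

```python
import random

random.seed(0)  # fixed seed so the run is repeatable
n = 100000

# Two independent streams, each True 75% of the time.
s1 = [random.random() < 0.75 for _ in range(n)]
s2 = [random.random() < 0.75 for _ in range(n)]

agree = sum(a == b for a, b in zip(s1, s2)) / n
print(0.6 < agree < 0.65)  # -> True, close to the expected 0.625
```

So the "~60%" observation is exactly what independent streams give; the Bell argument is about correlated (entangled) streams, which is where the naive 50% figure and the measured 75% come apart.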
