encryption scheme

Started by
17 comments, last by KalvinB 22 years, 2 months ago
Why not just use a published algorithm developed by real cryptographers?

What fun is that? I've had this idea for quite a while, and I just decided to spend a few hours putting it together to see how well it would work in the real world.

AP:

I'll add in a counter to see how often each bit shows up in that picture and post the results.

for the key:
1 40715 0 40702 //128
1 40938 0 40479 //64
1 40750 0 40667 //32
1 40895 0 40522 //16
1 40820 0 40597 //8
1 40508 0 40909 //4
1 40616 0 40801 //2
1 40705 0 40712 //1

for the image:
1 40650 0 40767
1 40818 0 40599
1 40571 0 40846
1 40361 0 41056
1 41194 0 40223
1 40785 0 40632
1 40656 0 40761
1 40477 0 40940
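The tallies above could come from a simple per-position bit counter. A minimal Python sketch of the idea (my own illustration, not Ben's actual code):

```python
def bit_counts(data: bytes):
    """For each of the 8 bit positions (values 128 down to 1, matching the
    key/image tables above), count how many bytes have that bit set vs clear."""
    ones = [0] * 8
    for byte in data:
        for pos in range(8):              # pos 0 -> bit value 128, pos 7 -> bit value 1
            if byte & (0x80 >> pos):
                ones[pos] += 1
    zeros = [len(data) - n for n in ones]
    return ones, zeros

# For good noise, each position should come out near 50/50.
ones, zeros = bit_counts(bytes([0b10101010] * 10))
```

On the alternating test byte above, the odd positions are always set and the even ones never are, which is exactly the kind of skew this counter makes visible.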

Ben

Edited by - KalvinB on January 31, 2002 2:43:51 PM
Actually, I take back what I said earlier... for the most part, your code is *not* impossible to break. However, in that particular application, encoding an image, it is.
It's not possible to decode fixed data + random data. The fixed data is your equation, and the random data is your image. Actually, this is a complex topic, because your data isn't truly random: there are maximum and minimum values, and your picture is a self-defining pattern. But in general, your picture of that criminal is secure.

BUT to talk about general encryption,

I am seriously sick of people using math to encrypt. Math = logic = patterns. Yes, math needs to be used with data, but it cannot be used with information. Data != information.

Don't use an equation to encrypt your information. Math == pattern == same information, different pattern.

You need to destruct the information. Take your cat, chop it up, and have God glue it together so it acts and looks like a camel: that is destroying information.
Because the entire point of encrypting revolves around information, this lesson is extremely important.

Take your file, your image, and destroy the image, and then encode the data.

For a text file, destroy the language frequencies, and then encode the data.

Etc.
Next to last, disassociate the information.
Lastly, hide the information so that it is not special or outstanding.

Keywords that you can have enlightening fun learning and thinking about:

data
meaning
information
comprehension
interface, grammar, syntax

association
organization

Edited by - evilcrap on January 31, 2002 2:46:13 PM
> Take your file, your image, and destroy the image, and then encode the data.
> For a text file, destroy the language frequencies, and then encode the data.

A good encryption system will make exactly that unnecessary. Your data will look and behave like noise. Chopping up the data doesn't add anything to a good cipher, but has no negative effects either (any noise pattern combined with another *unrelated* noise pattern can only be more random than before). This is exactly the purpose of a good cryptographic algorithm: to divert the potential attack on patterns towards the key instead of the plaintext. It is made mathematically impossible to run pattern analysis on the encrypted data (even if you have a copy of the unencoded plaintext!) *without* running it on the key itself first. So it is not important if your plaintext data has patterns; it may be text, graphics, or even a whole file filled with zeros. You'll have to cryptanalyze the *key* first, and that's where patterns are dangerous.

KalvinB: if you want to run some cryptanalysis on your algorithm, there are some standard tests to get a first impression of the algorithm's security. If you like, I can try to find the URL somewhere on my HD.

- AH
Your bit pattern distribution doesn't look too bad. But the per-byte bit pattern distribution also looks very 50/50 if you do a simple rand(). The key is to analyze whether certain bit combinations repeat after a certain sequence, or whether they are related to each other in *any* way.

Examples:

* Is it more probable that a 1 bit will arise at that position every 10th byte in a row?

* Does bit 4 tend to be more often 0 when bit 6 is 1 and bit 7 is 1 too?

* Does the bit sequence 1101 repeat itself in a non-random way? Perhaps shifted by 2 bits every time?

* Does bit 1 tend to invert itself if bit 7 does that too?

Remember, the output of a 'perfect' cipher is 100% *pure* noise. No correlation, not the slightest. Every correlation is a potential security breach.
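Checks like these can be automated. As a hypothetical illustration (the function name and thresholds are my own), here is a conditional-bit test in Python answering the second question above:

```python
import os

def conditional_bit_rate(data: bytes, target_bit: int, given_bits: dict) -> float:
    """Estimate P(target_bit == 0 | the given bits have the stated values).
    Bits are numbered 7 (value 128) down to 0 (value 1)."""
    def bit(b, i):
        return (b >> i) & 1
    matches = [b for b in data
               if all(bit(b, i) == v for i, v in given_bits.items())]
    if not matches:
        return float("nan")
    return sum(1 for b in matches if bit(b, target_bit) == 0) / len(matches)

# "Does bit 4 tend to be more often 0, when bit 6 is 1, and bit 7 is 1 too?"
# For pure noise this should hover near 0.5; a consistent skew is a red flag.
rate = conditional_bit_rate(os.urandom(200_000), target_bit=4, given_bits={6: 1, 7: 1})
```

Run against ciphertext instead of os.urandom, any rate that stays far from 0.5 across runs is exactly the kind of correlation described above.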

- AH
All the numbers in the list below should be zero. They are percentages showing how far each bit and byte value is off from its ideal frequency.

Each bit should appear 6.25 percent of the time and each byte should appear 0.39 percent of the time.

list of percentages off

In that respect, there doesn't appear to be much of a skew towards any bit or byte value. The deviation for any bit or byte is around 1/100th of a percent.
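A sketch of how such deviation-from-ideal percentages can be computed for byte values (this is my guess at the method, for illustration; Ben's program may differ):

```python
from collections import Counter

def byte_percent_off(data: bytes) -> dict:
    """For each byte value 0..255, the absolute deviation (in percentage
    points) of its observed frequency from the ideal 1/256, about 0.39%."""
    counts = Counter(data)
    ideal = 100.0 / 256
    return {v: abs(100.0 * counts.get(v, 0) / len(data) - ideal)
            for v in range(256)}

# Perfectly uniform input: every deviation is zero.
off = byte_percent_off(bytes(range(256)) * 10)
```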

Whether or not there are bit patterns has yet to be proven. If you could post any links to existing programs that could do the work, that would be great. I'd like to see just how well this works. It looks like it's got the basic requirements of even amounts of bits and bytes and no visual pattern.

Ben

[The Rabbit Hole | The Labyrinth | Programming | Gang Wars | The Wall]
> Whether or not there are bit patterns has yet to be proven.

Hehe, I think it has to be proven that there are *no* bit patterns. Pro cryptanalysts work for years on this phase before a new algorithm is considered 'safe'...

> If you could post any links to existing programs that could do the work, that would be great.

OK, I'll look into that.

> I'd like to see just how well this works. It looks like it's got the basic requirements of even amounts of bits and bytes and no visual pattern.

Honestly, I think it will show tons of bit patterns. Gaussian bit noise is just the beginning. In the form it is right now, it's just too simple. You don't do any bit shifting or hashing/permutations, which are two fundamental standard requirements for PRNGs. You rely 100% on wrap-around 'randomness'. The standard C rand() function is very similar to that. But well, it all depends on what level of security you are heading for. It surely is a great algorithm to encrypt e.g. TGA textures, so that people cannot tamper with them easily. And it's very fast, so you could do realtime en/decryption. But those kinds of overflow algorithms tend to be breakable in no time by professionals. So if you want to sell it to the NSA, it will need some further improvements.

Have a look at the link I gave somewhere above; they have tons of (proven) symmetric cipher algorithms there. Looking at their algorithms could give you some ideas to improve your own.

- AH
Here's a bit pattern check for 1, 2, 3, 4, 5, 6, 7 and 8 bits.

Inverse bit patterns match pretty well in terms of occurrences. For example, with the 2-bit pattern check:

00 69482
10 103996
01 103995
11 69535

The program can do up to 4096-bit patterns, but it took about half an hour to do a 416,000-bit file going just up to 8 bits.
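An overlapping-window pattern counter along these lines might look like this in Python (a sketch of the idea, not Ben's program; his may use non-overlapping windows or a different bit order):

```python
from collections import Counter

def pattern_counts(data: bytes, n: int) -> Counter:
    """Count every overlapping n-bit window in the bit stream,
    as in the 2-bit 00/01/10/11 check above."""
    bits = "".join(f"{b:08b}" for b in data)
    return Counter(bits[i:i + n] for i in range(len(bits) - n + 1))

# 0xAA = 10101010 -> seven 2-bit windows: 10, 01, 10, 01, 10, 01, 10
counts = pattern_counts(b"\xAA", 2)
```

Note the cost: the number of distinct patterns doubles with every extra bit of window size, which is consistent with the long runtimes reported for large pattern widths.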

Maybe I'll leave it running for a couple of days to check the key for patterns. The above file is an examination of an encrypted file.

Ben

Destroying information before you encrypt data can greatly reduce the required complexity of encryption algorithms, therefore reducing the overhead associated with decrypting.
^ But the information is destroyed, so the cipher is useless.

quote:
any noise pattern combined with another *unrelated* noise pattern

Aren't 'orthogonal' noise patterns impossible?

...
It's misleading to show the picture in false color. Change it to shades of grey and look at it. The second one is much better than the first, but there's still a visible pattern. It's pretty good for how simple and fast the cipher is.

A simple test is to see how well a lossless compressor does on the encrypted data. A good cipher will produce data that becomes _larger_ when compression is attempted.
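That test is easy to run. A sketch with Python's zlib (my illustration of the poster's test; incompressibility is a necessary condition for good ciphertext, not a sufficient one):

```python
import os
import zlib

def looks_like_noise(data: bytes) -> bool:
    """True if a lossless compressor cannot shrink the data, i.e. zlib
    finds no usable redundancy -- what good ciphertext should look like."""
    return len(zlib.compress(data, 9)) >= len(data)

print(looks_like_noise(os.urandom(65536)))  # random bytes: expected True
print(looks_like_noise(b"A" * 65536))       # redundant bytes: expected False
```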

I think I have to agree that buying a heavyweight college text would get you a lot further, a lot faster, than playing around.

At least you'll get advice from someone who's worked with it a lot.
- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted.-- Tajima & Matsubara

This topic is closed to new replies.
