

Member Since 25 Jan 2011
Offline Last Active May 13 2016 09:28 AM

Posts I've Made

In Topic: PinWheel Encryption

28 November 2013 - 12:10 PM

If a is a point in the above picture with the triangles, then there is a high chance it's rotated and XORed twice (or any multiple of two times) with the same color from one of the big same-colored areas; then you get a back as "encrypted" data although it's actually not. (A few other things happen as well, and the result is those stripes.)


Impossible, each block is processed independently.

I think I wasn't clear in my description of what a block is.

A block is 256 bytes; the picture, divided into 9 areas, shows not 9 blocks but 768.


Maybe this will help with my explanation of what a "block" is




*Pretend I was able to draw 768 lines (each of which represents a block)


Sorry if I've been a bit hard to understand; my mind is focused on Thanksgiving


But, in the case where data is divided as such, and each block is independently encrypted, does this make sense?

In Topic: PinWheel Encryption

28 November 2013 - 08:22 AM


encrypting identical data returns identical results.


Well, I know you asked not to point it out, buuut... I think that's the biggest flaw in your algorithm. Encrypting identical data should NOT return identical values...


Anyway, I guess that code is just for your personal use, so who cares. No one is likely to try to break your code, unless you ask them to.

IMO, that's good enough for personal use.


My first encryptor showed a similar flaw (you could see parts of a bitmap image in the noisy background), so I created another algorithm. It isn't perfect, but at least it gives no clue like in your image; everything always looks 100% noisy. Even then, people started pointing out flaws in it. (You definitely should read this post.)


For me, I guess it's good enough for my needs, although I've never actually used it for anything; it was more for learning than anything else.

For the fun of it, I encrypted your image using my algorithm discussed above, and it gave me this. That's what you should aim for.


PS: Don't take me too seriously, I'm no encryption expert...




What I'm trying to say is that for a set of inputs - the passcode being X and the 256-byte block being Y - the algorithm should satisfy the requirements of a "function": for a given set of inputs, the function should return a specific output

F(x, y) == z always


If two blocks are identical, Y1 == Y2, and they are being encrypted with the same passcode, then F(X, Y1) == F(X, Y2) == some value z

Our ability to decrypt the data, given the correct passcode, relies on this fact.
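This determinism can be demonstrated with a toy cipher (a plain XOR keystream in Python, purely illustrative; this is not the PinWheel algorithm):

```python
import hashlib

def toy_encrypt(passcode: bytes, block: bytes) -> bytes:
    # Derive a repeatable keystream from the passcode alone, so the
    # cipher is a true function: F(x, y) == z, always.
    keystream = hashlib.sha256(passcode).digest() * (len(block) // 32 + 1)
    return bytes(b ^ k for b, k in zip(block, keystream))

x = b"boat"
y1 = bytes(256)          # two identical 256-byte blocks
y2 = bytes(256)
assert toy_encrypt(x, y1) == toy_encrypt(x, y2)   # identical inputs -> identical output
assert toy_encrypt(x, toy_encrypt(x, y1)) == y1   # and that's exactly what makes it decryptable
```

Because XOR is its own inverse, the same determinism that leaks "these two blocks were equal" is also what lets the correct passcode recover the plaintext.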


Now if I wanted two identical blocks encrypted using the same passcode to not return equivalent values, I would need to add another "variable" to the function.

So if I added the block's number as a variable W, so

F(w, x, y) == z

then two identical blocks, encrypted with the same passcode, would not give identical results.
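Folding the block number into the function, roughly what a counter or tweak does in standard designs, can be sketched like this (illustrative Python, not the actual PNWL code):

```python
import hashlib

def tweaked_encrypt(w: int, passcode: bytes, block: bytes) -> bytes:
    # Mix the block number w into the keystream: F(w, x, y).
    # Identical blocks at different positions now encrypt differently.
    seed = passcode + w.to_bytes(8, "big")
    keystream = hashlib.sha256(seed).digest() * (len(block) // 32 + 1)
    return bytes(b ^ k for b, k in zip(block, keystream))

x = b"boat"
y = bytes([0xFF]) * 256                   # two identical blocks...
c0 = tweaked_encrypt(0, x, y)
c1 = tweaked_encrypt(1, x, y)
assert c0 != c1                           # ...no longer share a ciphertext
assert tweaked_encrypt(0, x, c0) == y     # still decryptable, given w and x
```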

I tried this and the results were promising :




However, I decided that the algorithm shouldn't care or know which block it was dealing with, as an "attacker" could easily determine this "variable" later and use it to weaken the integrity of the system.


I'd rather have only two inputs, X and Y, that can't be determined by looking at the encrypted data, than three variables, W, X, and Y, where one of them could be determined.



By performing these spins, a byte, or a variant thereof, can end up literally anywhere within the block! On a smaller scale, a bit can end up anywhere within the block as well, because remember we're rolling too!

There is the problem: it is not enough that the set of all passwords allows any bit to appear anywhere. You would need to make sure that for a single password the bits are shuffled around enough that they can end up anywhere, but as your pictures show, that's not happening. I would think that is due to an inherent flaw of your pinwheel scheme: it moves the bits in a very predictable way that mostly depends on the source data and not much on the password.

And you know, XOR obfuscation is the first thing everyone tries, but because XORing twice with the same bit reveals the source bit, it will never make good encryption the way you use it, even if your method is a bit more elaborate than usual.



A bit can take multiple paths to end up in the same spot. By taking these alternative paths, though, the other 2047 bits also take alternative paths.


The movement, or spinning if you will, is determined only by the passcode, and there is only one combination of spins, only one passcode, that will decrypt all of the blocks.

Even if you know that the data was "spun", you don't know which path it took.

I imagine this path on a massive tree where each branch has 256 child branches, and each of those child branches has 256 child branches and so on.

The number of unique combinations for a given password length n is 256^n, where hopefully n is some large number like 24


Also, there is more than XOR'ing going on, there is twisting, turning, and in the newer version, turbulence ( noise ) being added to the mix.

To decrypt, each byte must not only be XORed with every byte it had been (which would have to be determined somehow), but this must happen in the correct order, as other operations such as rolling and turbulence also take place. In other words, you must follow your exact path back up the tree to retrieve the original data.
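The "exact path in reverse" point can be illustrated with a toy round function: decryption applies the inverse of each operation, in reverse order (a sketch, not the PinWheel operations themselves):

```python
def rol8(b: int, n: int) -> int:
    # Rotate an 8-bit value left by n bits.
    n &= 7
    return ((b << n) | (b >> (8 - n))) & 0xFF

def ror8(b: int, n: int) -> int:
    # Rotate an 8-bit value right by n bits.
    n &= 7
    return ((b >> n) | (b << (8 - n))) & 0xFF

def encrypt_byte(b: int, key: list[int]) -> int:
    # Each round rolls, then XORs; the order of rounds matters.
    for k in key:
        b = rol8(b, k) ^ k
    return b

def decrypt_byte(b: int, key: list[int]) -> int:
    # Walk the exact path backwards: inverse operations, reversed order.
    for k in reversed(key):
        b = ror8(b ^ k, k)
    return b

key = [3, 0x5A, 7]
assert decrypt_byte(encrypt_byte(0xC3, key), key) == 0xC3
```

Swapping the order of the inverse steps, or applying them forwards, fails to recover the original byte; that is the "one path up the tree" property in miniature.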


Yes, this is only a pet-project, so it's not likely anyone other than myself will ever use it.

Still, I'm happy to see other people's ideas.

In Topic: PinWheel Encryption

27 November 2013 - 03:40 PM

So, I've made a few changes that seem to fix the two biggest issues:

I've introduced a "Turbulence" phase to combat patterns,

and I've also added an "Avalanche" phase to increase sensitivity:


The results:

A difference of a single bit will result in dramatic variation.


Small test: A 256-byte file was created with all bytes = 0xFF, and a second, similar file was created where all bytes = 0xFF except for the last byte, which = 0xFE


Here are the visualizations of the two files: (Note they are 16 x 16 pixels and are quite difficult to see on a white background)

File A : wbmp_zps08624d9f.png

File B : wpbmp_zpsc6f1c551.png


Both files were encrypted using the same passcode:

Result A : wbmp1_zpsf36db3db.png

Result B : wpbmp1_zps69c1ebcf.png


Both results appear as noise; however each is different.
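One way to quantify "both appear as noise, yet differ" is to measure the fraction of differing bits between the two results; strong avalanche behaviour should put it near 50%. A sketch (using SHA-256 only as a stand-in transform with known-good avalanche, not the PNWL code):

```python
import hashlib

def bit_diff_fraction(a: bytes, b: bytes) -> float:
    # Fraction of bits that differ between two equal-length buffers;
    # ~0.5 is what ideal avalanche behaviour looks like.
    assert len(a) == len(b)
    diff = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return diff / (len(a) * 8)

# Two 256-byte "files" differing only in the last byte (0xFF vs 0xFE).
file_a = bytes([0xFF]) * 256
file_b = bytes([0xFF]) * 255 + bytes([0xFE])
ca = hashlib.sha256(file_a).digest()
cb = hashlib.sha256(file_b).digest()
print(f"{bit_diff_fraction(ca, cb):.0%} of output bits differ")
```

Running the same measurement on two PNWL ciphertexts would give a concrete number to back up the visual "it looks like noise" check.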


And finally we have the "worst case scenario" triangle image/data test:



Please note that the vertical lines are the result of blocks of data being EXACTLY the same in the original data.

As in one row of pixels being exactly the same as another:


Note: Each row of pixels in the solid green block is exactly the same, as is the case wherever else there are vertical lines; encrypting identical data returns identical results.

In other words, please don't point out the "clear pattern of vertical lines"


Now, adding both the turbulence and avalanche phases to the algorithm results in a ~40% increase in computation time.


The new code is attached ( PNWL.h )


Also, I'm still looking for a good passcode hashing function.

If you feel like testing the new algorithm, use a longer password, or better yet, to emulate a hash code, randomly generate a 24-byte null-terminated string and use that.
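As a stop-gap until a proper passcode hash is chosen, a fixed-length key can be derived with an off-the-shelf hash. This sketch truncates SHA-256 to 24 bytes (an assumption, not the author's choice; a purpose-built KDF such as PBKDF2 would be better, and note that any zero bytes in the output would truncate a NUL-terminated C string):

```python
import hashlib

def derive_key(passcode: str, key_len: int = 24) -> bytes:
    # Stand-in key derivation: stretch any passcode into key_len
    # pseudo-random bytes.  Sketch only; not a hardened KDF.
    return hashlib.sha256(passcode.encode()).digest()[:key_len]

key = derive_key("boat")
assert len(key) == 24
assert derive_key("boat") == key          # deterministic
assert derive_key("Boat") != key          # sensitive to the passcode
```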


Again thank you.


Edit: Forgot to attach the new code XD

In Topic: PinWheel Encryption

26 November 2013 - 05:19 PM

Excellent, I wasn't expecting this much of a response. This is great.


OK, so the way I see it, here are the main concerns:

  1. Hash The password!
  2. Why bother spinning the data?
  3. What are these patterns I see?
  4. Black and white!
  5. Correlation between password strength and quality.

So, in relative order:


1 : Yes! Hashing the password would be a good idea. I could generate a large number (let's call it X) of bytes from a password and then use this as the "key".

The question is how large the hash should be. It clearly should be larger than 12 bytes; 24 bytes, maybe?

The number of possible combinations from spinning the data appears to be 256^n, where n is the number of characters in the key. Thus 256^24 yields 6.27 × 10^57 possible combinations. If we assume that a computer can evaluate 1 billion combinations per second, an attempt to try every single combination of 24 bytes, i.e. brute-force the password, could still take up to 10^41 years.
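The arithmetic behind those figures can be reproduced directly:

```python
# Key space for a 24-byte (192-bit) key, and brute-force time
# at an assumed 1e9 guesses per second.
combinations = 256 ** 24
seconds = combinations / 1e9
years = seconds / (60 * 60 * 24 * 365)
print(f"{combinations:.2e} keys, roughly {years:.0e} years to exhaust")
```

256^24 is the same as 2^192, which is why 24 hashed bytes is comfortably beyond brute force regardless of the figure assumed for guesses per second.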


2 : You guessed correctly when you said "to obfuscate the data." By performing these spins, a byte, or a variant thereof, can end up literally anywhere within the block! On a smaller scale, a bit can end up anywhere within the block as well, because remember we're rolling too! With 256 bytes, there are 2048 bits. There are, by definition, 2048! ways to organize 2048 bits, and by spinning and rolling we can achieve any of these combinations. In other words, by spinning and rolling we set the maximum number of possibilities to 2048!, or around 10^5895 possibilities.
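A minimal sketch of "spinning" (rotating whole bytes) and "rolling" (rotating the block's bit string); illustrative only, the real PinWheel transforms are more elaborate:

```python
def spin(block: bytes, n: int) -> bytes:
    # Rotate the whole block left by n byte positions.
    n %= len(block)
    return block[n:] + block[:n]

def roll_bits(block: bytes, n: int) -> bytes:
    # Rotate the block's entire bit string left by n bits.
    width = len(block) * 8
    v = int.from_bytes(block, "big")
    n %= width
    v = ((v << n) | (v >> (width - n))) & ((1 << width) - 1)
    return v.to_bytes(len(block), "big")

block = bytes([1]) + bytes(255)               # a single set bit in byte 0
moved = roll_bits(spin(block, 100), 5)
assert moved != block                         # the bit has travelled
assert spin(spin(block, 100), 156) == block   # spins invert cleanly (100 + 156 = 256)
```

Composing the two operations with passcode-derived amounts is what lets any bit reach any position; the open question from the thread is whether a *single* passcode mixes well enough.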


The other reason to spin the blocks is to create multiple solutions, which sounds counter intuitive!

Just because you find a solution to a block, doesn't mean that it'll work for any other block.

So for example, let's say that we have 256 bytes where the first byte is 0 and the rest are 1. Next we have another 256 bytes where the last byte is 1 and the rest are 0. Finally we encrypt both blocks with the same password, say "boat". Now, because we spun the data, there is a massive number of ways to spin the data in reverse to get the byte '1' back into the first slot. One of these ways is, obviously, to follow the pattern generated by the password "boat"; however, other passcodes may work too, say "hamburger". The trick is that even though the passcode "hamburger" successfully decrypts the first set of 256 bytes, it will NOT decrypt the second set! Only the passcode "boat" will do that.


Furthermore, even if you had the original as well as the encrypted version of a file, you would still need to try MANY (up to 256^24) combinations of passcodes to find the one that works for every block and crack the password.


3 : Ah yes the patterns in the triangles... maybe I should have been more specific in my captions.

Of course there will be patterns; this is a worst-case scenario test. The data in that image is LARGELY repetitive; in fact each row of 256 pixels, the width of the image, is usually close to identical to the row above it. Encrypting blocks that have only a 1-byte difference between them will unfortunately yield somewhat similar results. This effect is compounded by the fact that the change between rows is fairly regular. If I reduce the size of the image to 246 x 246 pixels and then encrypt with an 8-character password, we get this:


Which, while not perfect (you can still somewhat see the triangles), looks more like noise.


To spice things up, and prevent these types of patterns, the amount of roll could be varied. Part of the issue is that most ASCII passcodes limit the amount of roll, as the high bit is usually 0.


4 : This one is kind of silly. It's black and white because it's a black and white bitmap! - Remember, I only encrypted the image/data portion of each bitmap. No amount of encryption is going to change the fact that in a grayscale bitmap, each pixel is determined by one and only one byte and must be somewhere between black and white.

Here's what it would have looked like, had it not been grayscale, but 24 bpp (color), with an 8-character password:



5 : The correlation between password length and quality was actually intentional. I figured that longer passwords yielding better encryption was ideal. After reading some of the feedback I've gotten, I now know this to be false. By hashing the passcodes (see #1) we can remove this correlation.


Sorry if my explanations haven't been the best.


Again, thank you everybody who has spoken thus far. Any other ideas and comments would be appreciated.

In Topic: Model Format: Concept and Loading

04 January 2013 - 07:18 PM

100 bytes per mesh is nothing, really (considering you'll probably have between 1 and 10 meshes per model), but if all your meshes use the same layout, or some do and some don't, you could add a "layout array" and have each mesh reference its layout by layout index. In the case that there is only one layout, you're only wasting 1, 2, 4 or 8 bytes (depending on your index type) per mesh, plus sizeof(your offset type) for the layout array start pointer.


Also, I don't think I read it, or you didn't mention it: I think everyone assumes this is for characters/props and not for architectural/level models, or is it for both? Not that important, but I guess if you are modeling a cathedral you might end up with more than 10 meshes in a single model.


So you're saying store the vertex layouts in the model, and have the meshes index to the matching layout. That might work.

After a quick thought: 100 bytes per mesh × a mean of 8 meshes per model × about 1000 models per level = ~781 KB. (Averages not from any real statistics.)

Not nearly as bad as I had imagined.


The other thing I could do is store the vertex layouts in separate files. That way meshes using the same layout would just use the same file, which I could cache.

If I did it this way I would only have to have one copy of each unique vertex layout in memory and on disk.
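The layout-array idea, each unique layout stored once and meshes referencing it by index, can be sketched like this (hypothetical Python, just to show the dedup step; the real format would be binary):

```python
def build_layout_table(mesh_layouts):
    # Deduplicate per-mesh vertex layouts into one shared table; each
    # mesh then stores only a small index instead of ~100 bytes of layout.
    table, indices = [], []
    for layout in mesh_layouts:
        if layout not in table:
            table.append(layout)
        indices.append(table.index(layout))
    return table, indices

layouts = [("pos", "normal", "uv"), ("pos", "uv"), ("pos", "normal", "uv")]
table, idx = build_layout_table(layouts)
assert table == [("pos", "normal", "uv"), ("pos", "uv")]
assert idx == [0, 1, 0]
```

The same logic applies whether the table lives inside the model file or as separate cached layout files; either way each unique layout exists once.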


Regarding what a model would contain: the idea was that a Model would just be a generic class to store related Meshes. A mesh would have an index buffer, N vertex buffers, and N resources / textures. This way a model could contain, for example, all of the meshes that composed an automobile.

These models would then be used by objects that would also have a material. Now, of course, some checking would have to be done to make sure that the model is compatible with the material. Luckily each mesh has a MeshMask: 16 bits specifying which attributes the mesh contains; the mesh header also keeps track of the number of TexCoord and vertex-color streams, as well as the number of texture resources. To check if a model and a material were compatible, I would just loop over each mesh and check if its mask and header were compatible with the material's requirements.
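The mask check itself is a simple bitwise test; the attribute names below are hypothetical, only the `(mask & required) == required` pattern matters:

```python
# Hypothetical attribute bits for a 16-bit MeshMask (names are illustrative).
POSITION, NORMAL, TEXCOORD0, COLOR0, TANGENT = 1 << 0, 1 << 1, 1 << 2, 1 << 3, 1 << 4

def mesh_satisfies(mesh_mask: int, material_required: int) -> bool:
    # The mesh is compatible when it supplies every attribute the
    # material requires (it may also supply extras).
    return (mesh_mask & material_required) == material_required

mesh_mask = POSITION | NORMAL | TEXCOORD0
assert mesh_satisfies(mesh_mask, POSITION | TEXCOORD0)      # ok
assert not mesh_satisfies(mesh_mask, POSITION | TANGENT)    # missing tangents
```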

A level would then contain many objects that could be culled, sorted, and drawn.


My intent was for the model to be able to store pretty much any type of graphics mesh; whether it contains a weighted character, a tree, a sword, or a spaceship shouldn't matter, as the model is just a container for meshes. A model should also be able to store thousands of meshes (up to 65,536), assuming the file doesn't grow over 4 GB in size. I don't think I'm missing anything that would prevent this, but I could very well have skipped right over something.


Again, thank you for your ideas.