MAG's EXPAND AND CONTRACT CORTEX SIMPLIFICATION
Ok, imagine this: you start off with a small eye. It's 32x32, and each pixel is either on or off, no greyscale.
Call this a "sheet", with nodes reading the sheet in segments. The first nodes are 2x2 (so there are 16x16 of them, each 2x2, reading the 32x32 sheet), then the next level is 2x2 again, then 3x3 twice, then 4x4 twice, then 5x5 twice, then 6x6 twice, and so on (you wind up with about 15 layers). The node size keeps growing, which means each node slowly reads more of the input per node, but every node always outputs a 4x4 patch encoding one of 16 identities: each node detects only 16 different things. A 2x2 binary patch actually has exactly 16 possible permutations, but you're allowed to grow the input size because the representation (theoretically) becomes naturally invariant, so 16 identities stay enough even for the larger input nodes. This way the output sheet eventually collapses smaller, and you collect larger parts of the screen that were segmented before, now together. This is for identifying larger parts of the screen.
Just imagine: after every node (black box) has processed, each one makes a 4x4 output, so after the first level's 16x16 nodes update, they make a 64x64 output sheet (16*4) to be processed by the next level. Notice the sheet actually grows a bit until the node size increase takes over, then it shrinks toward 1x1 after enough levels. Every node updates the same way; they just read a variable-sized input segment of the previous output sheet, depending on how far up the contraction we are.
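The expand-then-contract geometry can be traced with a few lines of code. This is a sketch under two assumptions of mine that the text leaves open: the node-size schedule keeps going 7x7 twice, 8x8 twice, and so on, and when a node size doesn't divide the sheet evenly we floor-divide and ignore the leftover border pixels.

```python
# Sketch of the expand-and-contract geometry: node sizes 2,2,3,3,4,4,...
# each node always emits a 4x4 output patch.

def layer_geometry(input_side=32, output_side=4):
    """Return a list of (node_size, nodes_per_side, output_sheet_side) per level."""
    levels = []
    sheet = input_side
    node, repeat = 2, 0
    while sheet >= node:
        per_side = sheet // node          # nodes tiling the current sheet (floor)
        sheet = per_side * output_side    # every node emits a 4x4 patch
        levels.append((node, per_side, sheet))
        repeat += 1
        if repeat == 2:                   # each node size is used twice
            node, repeat = node + 1, 0
    return levels

for node, per_side, sheet in layer_geometry():
    print(f"{node}x{node} nodes, {per_side}x{per_side} of them -> {sheet}x{sheet} sheet")
```

Under these assumptions the sheet grows from 32 up to 224 before the node-size increase takes over and squeezes it down to a single node's 4x4 output, in 14 levels, which is close to the "about 15 layers" above.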
Updating the nodes is very simple.
STEP BY STEP
1. First, pick an id (out of 16) for the input. Only pick exact matches to the node's internal store; if the node is 2x2, the stored ids are all 2x2 patches. If there is no match, overwrite the least-used id.
2. Then, looking at the last frame, link the cell that was active in the old frame to the one in the new frame; if the same cell is on twice, add a "predict myself one more time" link to it. You have a number of cells for each id, to store context; when they run out, always overwrite the least-used cell.
3. Then make the prediction: take the output pixel of the 4x4 output segment (the id), find the next 4 cells that follow it, and light them up. This formulates the output sheet with space and time pressed together.
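The three steps above can be sketched as a single node class. This is one concrete reading of mine, not the definitive implementation: I assume a 2x2 binary input patch, 16 id slots, 4 context cells per id, and transition counts between cells standing in for the links; the class and method names are made up for illustration.

```python
# Minimal sketch of the three-step node update.
import numpy as np

N_IDS, CELLS_PER_ID = 16, 4

class Node:
    def __init__(self, patch_side=2, rng=None):
        rng = rng or np.random.default_rng(0)
        # Step-1 store: 16 candidate patches, plus a use counter each.
        self.store = rng.integers(0, 2, (N_IDS, patch_side, patch_side))
        self.uses = np.zeros(N_IDS, dtype=int)
        # Step-2 store: transition counts cell -> cell, read back in step 3.
        n_cells = N_IDS * CELLS_PER_ID
        self.links = np.zeros((n_cells, n_cells), dtype=int)
        self.cell_uses = np.zeros(n_cells, dtype=int)
        self.prev_cell = None

    def pick_id(self, patch):
        """Step 1: exact match only; otherwise overwrite the least-used id."""
        for i in range(N_IDS):
            if np.array_equal(self.store[i], patch):
                self.uses[i] += 1
                return i
        i = int(np.argmin(self.uses))
        self.store[i] = patch
        self.uses[i] = 1
        return i

    def link(self, id_now):
        """Step 2: link last frame's cell to a cell of the new id,
        reusing/overwriting the least-used cell of that id."""
        cells = range(id_now * CELLS_PER_ID, (id_now + 1) * CELLS_PER_ID)
        cell = min(cells, key=lambda c: self.cell_uses[c])
        self.cell_uses[cell] += 1
        if self.prev_cell is not None:
            self.links[self.prev_cell, cell] += 1  # self-link = "predict myself again"
        self.prev_cell = cell
        return cell

    def predict(self, cell):
        """Step 3: light up the 4 cells most likely to follow this one."""
        return np.argsort(self.links[cell])[-4:]

    def update(self, patch):
        id_now = self.pick_id(patch)
        cell = self.link(id_now)
        return id_now, self.predict(cell)
```

How the predicted cells get pressed into the 4x4 output segment is left open above, so the sketch just returns the chosen id and the 4 predicted cells.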
And this way, you can feedforward learn, feedforward "poke", and feedback reconstruct all the motion video that went into it, and you can constrain some parts of the sheet, say for forcing motor control onto it.
If you'd like to feed back (reconstruct its innards), it would be a matter of poking with some kind of fuzzy error allowance (to grab similarities), then feeding back from there, and rolling predictions on the final layer that moved (that didn't just roll to itself): poke, feed, poke, feed, and that'll cycle out what's inside it. The more error you add, the more it'll think everything is the same thing, which could be interesting to watch.
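One way to read the "fuzzy error allowance" is as a relaxed version of step 1's exact match: accept any stored id within some number of flipped pixels (Hamming distance). The function name and interface here are mine, just to make the idea concrete.

```python
# Fuzzy id matching for the feedback/poke direction: instead of
# requiring an exact patch match, accept stored patches within
# `max_error` differing pixels.
import numpy as np

def fuzzy_match(store, patch, max_error=1):
    """Return indices of stored patches within max_error flipped pixels of patch."""
    dists = [(store[i] != patch).sum() for i in range(len(store))]
    return [i for i, d in enumerate(dists) if d <= max_error]
```

With max_error=0 this reduces to step 1's exact match; the bigger you make max_error, the more stored ids collapse into "the same thing", which is the blurring effect described above.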
It's simple; when I get my computer I'll do it. Hopefully it'll run on a couple of GTX 690s.
Counting all the ids throughout the nodes, at 16 ids per node, it can store about a million ids representing parts and wholes throughout the layers. I wonder if that's enough for everything?
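The count can be checked with back-of-the-envelope arithmetic under the same assumptions as the geometry sketch earlier (node sizes 2,2,3,3,4,4,... continuing upward, floor division, 4x4 outputs, 16 ids per node). These totals are mine, not from the text.

```python
# Rough capacity count: total nodes across all levels, times 16 ids each.

def count_slots(input_side=32, output_side=4, ids_per_node=16):
    sheet, node, repeat, total_nodes = input_side, 2, 0, 0
    while sheet >= node:
        per_side = sheet // node
        total_nodes += per_side * per_side
        sheet = per_side * output_side
        repeat += 1
        if repeat == 2:
            node, repeat = node + 1, 0
    return total_nodes, total_nodes * ids_per_node

nodes, id_slots = count_slots()
print(nodes, id_slots)  # roughly 16k nodes, ~260k id slots
```

Under these assumptions that's around 16k nodes and ~260k raw id slots; whether it lands near a million depends on how many context cells each id carries, so the exact figure hinges on details the text leaves open.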