rouncer

Posted 26 July 2013 - 12:58 AM

So we'll use mipless textures; that way they don't have to be power-of-two sizes.
-------------------------------------------------------------------------------------------------------------------------------------------------
DATA REQUIRED (10 textures per layer, 3 reusable vertex buffers)
                                                                          -> approx size <-
input/output sheet texture           R8_UNORM        (one for each layer)     small
prev. frame in/out sheet             R8_UNORM        "                        small
integer conversion texture           R32_UINT        "                        small
outputid texture                     R8_UINT         "                        small
idusage texture                      R32_UINT        "                        small
internal state texture               R32G32_UINT     "                        small
internal state prediction texture    R8_UINT         "                        medium
volume prediction texture            R8G8B8A8_UINT   "                        large
volume connection usage texture      R32_UINT        "                        large

nodeid buffer                        vertex                                   small
vertexidoutput buffer                vertex          (just one reusable and reallocatable)   small
vertexprediction buffer              vertex          "                        small
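
To make the resource list concrete, here is one way these per-layer resources might look on the shader side. This is only a sketch: the HLSL names and register slots are mine, and treating the two volume textures as Texture3D is an assumption; only the formats come from the table above.

// Illustrative HLSL declarations for the per-layer resources listed above.
// Names, register slots and the Texture3D layout are assumptions; the formats are from the table.
Texture2D<float>  InputSheet       : register(t0); // R8_UNORM      in/out sheet
Texture2D<float>  PrevSheet        : register(t1); // R8_UNORM      previous frame's in/out sheet
Texture2D<uint>   IntegerSheet     : register(t2); // R32_UINT      integer conversion
Texture2D<uint>   OutputId         : register(t3); // R8_UINT       chosen id per node
Texture2D<uint>   IdUsage          : register(t4); // R32_UINT      how often each id is used
Texture2D<uint2>  InternalState    : register(t5); // R32G32_UINT   stored id integers
Texture2D<uint>   InternalPred     : register(t6); // R8_UINT       prediction flag per id pixel
Texture3D<uint4>  VolumePrediction : register(t7); // R8G8B8A8_UINT 4x4 x NODES x DEPTH prediction volume
Texture3D<uint>   VolumeUsage      : register(t8); // R32_UINT      connection usage counts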


LEARNING CYCLE->

OK, pick an id.

1. First, pick an id for the input; if no stored id matches, overwrite the least-used id.

1. (quick load)

Take the input sheet and, for each node's input segment, develop an unsigned-integer texture: this just turns the segment's on/off values into a single integer.
Predictive-state pixels are 1.0f, non-predictive (active) pixels are 0.5f, and off pixels are 0.
Just sum the pixels weighted by powers of two according to their position, and you wind up with a number that stands for the pattern of pixels in each input segment. (A shader sketch of this conversion pass follows at the end of this step.)
Make a vertex list with one vertex for every node in the level.
Compare each node's input integer with the internally stored id integers and find which id matches the input; if none do, mark it as -1.
Then take the vertex list output by that pass, and wherever you find a -1, write the input integer into the internal-state texture at that node's least-used id.
You need yet another texture to record how often each id is used, and we increment and decrement those counts as ids are picked.
The internal state must be able to distinguish predictions from non-predictions, and non-predictions override predictions; that is what lets us reconstruct data, and without it we can't (in my theory so far, which is pretty loose to tell the truth).
So every pixel in the id needs an "I am a prediction" / "I am not a prediction" flag, and that takes another texture to keep.
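
A minimal pixel-shader sketch of the integer-conversion pass, assuming each node reads a 4x4 input segment and that each pixel's three states (off / active / predictive) are quantised to two bits and packed into one uint. The segment size, the 2-bit packing and all the names here are my assumptions, and the packing is just one possible way around the non-integer problem mentioned at the bottom of the post.

Texture2D<float> InputSheet : register(t0); // R8_UNORM input sheet: 0 = off, 0.5 = active, 1.0 = predictive

uint QuantisePixel(float v)
{
    // Map the three float states to small integers: 0 = off, 1 = active, 2 = predictive.
    if (v > 0.75f) return 2;
    if (v > 0.25f) return 1;
    return 0;
}

// Writes one pixel of the R32_UINT integer-conversion texture per node segment.
uint PSConvertSegment(float4 pos : SV_Position) : SV_Target
{
    int2 segment = int2(pos.xy);   // which node's segment this output pixel represents
    int2 base    = segment * 4;    // top-left pixel of its 4x4 block in the input sheet
    uint id = 0;
    for (int y = 0; y < 4; ++y)
    {
        for (int x = 0; x < 4; ++x)
        {
            float v = InputSheet.Load(int3(base + int2(x, y), 0));
            id |= QuantisePixel(v) << (2 * (y * 4 + x));   // two bits per pixel, 16 pixels = 32 bits
        }
    }
    return id;
}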


2. temporal and prediction.
2. (medium load)


For every node, make a single vertex at the corresponding id's location (one column in the node's 4x4 output segment is actually active, driven by the input) and look up that column.
You must access the volume texture to do this, so it is a lot of samples.
Pick the least-used cell in the column, scanning from 0 up to DEPTH; if a cell was in predictive state on the previous frame you needn't scan at all, just activate that cell and mark its use as a more valid prediction. We do this using the previous output sheet. (A sketch of this column scan follows after the note below.)
Then, from this single vertex, develop future predictions using the prediction volume (which is always 4x4 x NODES x DEPTH).

 

PROB - the main problem is that I have to sample all the way up the column; I'll keep thinking about a better solution.
By using the vertex list cleverly here, though, not all columns will be relevant.
In fact I've just figured out a possible solution: bust the column open into a vertex list, read off the usage values, then pick the least-used one on the CPU? One small thing on the CPU? Dunno.
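
A sketch of the column scan described above, assuming the usage volume is a Texture3D with the 4x4 cell and the node packed into xy and the depth in z, and that "least used" means the smallest value in the connection-usage texture. DEPTH, the layout and the names are my assumptions.

Texture3D<uint> VolumeUsage : register(t0); // R32_UINT usage count per cell

static const int DEPTH = 32; // illustrative depth, not specified in the post

// Returns the depth index of the cell to activate for one node's active column.
uint PickCell(int2 column, bool predictedLastFrame, uint predictedDepth)
{
    // If the previous frame already put a cell of this column into predictive state,
    // skip the scan entirely and just reinforce that prediction.
    if (predictedLastFrame)
        return predictedDepth;

    // Otherwise walk the whole column and take the least-used cell.
    uint best      = 0;
    uint bestUsage = 0xffffffff;
    for (int d = 0; d < DEPTH; ++d)
    {
        uint usage = VolumeUsage.Load(int4(column, d, 0));
        if (usage < bestUsage)
        {
            bestUsage = usage;
            best      = (uint)d;
        }
    }
    return best;
}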

3. formulate output sheet.
3. (quick load)
Mark the new predictions on the volume texture, using the previous output sheet, and adjust the usage counts.

Then formulate the output sheet: you'll have a sparse representation plus predictions, and predicted values MUST BE MARKED AS PREDICTIONS. (A small sketch of this pass follows below.)
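
A minimal sketch of that output-sheet pass, reusing the 0 / 0.5f / 1.0f encoding from step 1. The two helper textures holding the actual activations and the new predictions are hypothetical; they are just a way to show that real activations override predictions.

Texture2D<uint> ActiveCells    : register(t0); // hypothetical: 1 where a cell actually fired this frame
Texture2D<uint> PredictedCells : register(t1); // hypothetical: 1 where the prediction volume marks a cell

float PSFormulateOutput(float4 pos : SV_Position) : SV_Target
{
    int3 p = int3(int2(pos.xy), 0);
    if (ActiveCells.Load(p) != 0)    return 0.5f; // real activation overrides any prediction
    if (PredictedCells.Load(p) != 0) return 1.0f; // predicted values are marked as predictions
    return 0.0f;                                  // off
}
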
-------------------------------------------------------------------------------------------------------------------------------------------------

I've just got to get over this non-integer problem and get my computer, then off I go.

Then you've just got to do it for all 15 layers, and if it's running at 60 fps, eat my dust.

When poking, you pick the stored id closest to the input instead of only picking exact matches the way you do while learning; this gives you inference abilities you don't have whilst learning, which works with exact matches only. I think that will work, but I'm not sure; I'm trying it that way first. (A sketch of this closest-match lookup follows below.)
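
A sketch of the closest-match lookup for poking, assuming the ids are packed as two bits per pixel (as in the step-1 sketch) and that "closest" means the fewest differing pixel states. The distance metric, the loop bounds and the names are my assumptions.

Texture2D<uint2> InternalState : register(t0); // R32G32_UINT stored id integers per node

// Distance between two packed ids: how many of the 16 two-bit pixel states differ.
uint CountDifferingPixels(uint a, uint b)
{
    uint diff  = a ^ b;
    uint count = 0;
    for (int i = 0; i < 16; ++i)
        count += ((diff >> (2 * i)) & 0x3) != 0 ? 1u : 0u;
    return count;
}

// While learning only an exact match (distance 0) counts and anything else becomes -1;
// while poking we simply accept the nearest stored id.
int FindClosestId(uint input, int node, int idCount)
{
    int  best     = -1;
    uint bestDist = 0xffffffff;
    for (int i = 0; i < idCount; ++i)
    {
        uint stored = InternalState.Load(int3(i, node, 0)).x;
        uint dist   = CountDifferingPixels(input, stored);
        if (dist < bestDist)
        {
            bestDist = dist;
            best     = i;
        }
    }
    return best;
}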

