I thought of limiting the communication between players and the host by changing how often they receive updates. If the screen is 2000 pixels wide and a character fills the whole screen at 1 meter, then at distance d the character covers about 2000/d pixels, so one pixel of change corresponds to d/2000 meters of movement. At d = 2000 m that's one pixel per meter of movement, so if the maximum speed is 1 m per 2 s, you don't need updates more often than every 2 s. Also, for bullets, each pair of players would keep a last agreed-upon position sent between them, so a shooter knows whose view their bullet cone affects and who needs trigger updates. The apparent width only shrinks linearly with distance, but the screen area that needs updating shrinks with the square of the distance (~1/d^2).
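A minimal sketch of that update-rate rule (the function name and parameters are my own illustration, not from any engine):

```cpp
// Minimum seconds between position updates for an object at distance
// 'distM' metres, assuming (as above) a 2000-pixel-wide screen where a
// 1 m object spans the full screen at 1 m distance, so its apparent
// size is about screenPx / distM pixels per metre.
double minUpdateInterval(double distM, double maxSpeedMps,
                         double screenPx = 2000.0)
{
    double pxPerMetre = screenPx / distM;   // apparent resolution at distM
    double metresPerPx = 1.0 / pxPerMetre;  // movement needed for 1 px change
    return metresPerPx / maxSpeedMps;       // seconds until 1 px of change
}
```

At d = 2000 m and a top speed of 0.5 m/s (1 m per 2 s) this gives the 2 s interval from the text; at d = 1000 m it gives 1 s.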
Some more time for ideas.
You could use an Amazon AWS EC2 instance or another cloud platform to make a cloud gaming service and recruit resources on demand. On Steam and other distribution platforms it's possible to steal and crack the game code and logic and thus pirate the game for free. If the game is never on the client, and the client acts as a dumb terminal getting screenfuls each frame over the web, sending inputs, and receiving an audio stream, then nothing can be stolen. This idea isn't new. You'd have to have people pay to top up their accounts with play hours, because the cost is hosting these games and servers. It would have to be a new niche with strategy or turn-based games, mostly because you're probably using very cheap machines (in terms of hardware capabilities) that still cost a few cents or more every hour, and you're sending huge amounts of data every frame; it's basically video streaming. The games also mustn't depend too much on responsiveness, so classic strategy, turn-based, or some kind of lock-step first person shooter (it's possible, and I posted some ideas before). It would probably be best for simple games like arcades or shoot-em-ups that newbies like to create. It costs practically nothing to add a new game to the system if it's not a full-blown system like Steam or Origin, and it can even be implemented with PayPal (because we're cheap amateur indie startup geniuses that don't need to ... well, okay, it's crap, but it's a big world and maybe there's a little bit of room for this). Basically you give developers an open-source basis for making apps for your dumb client, which grabs screenfuls and audio from the server and sends inputs, and then you let people join your program without ever revealing their code or assets.
Another idea is a unique niche for developers that make free games, like on smartphones, but for PC or any other platform, including smartphones themselves. Basically it would be ideally suited for first person shooters where you have a billboard ad on some building that players see. Advertisers go onto the system, be it the ad platform's site or that of the game's developers, and fill out a form. They upload the ad image or text, select any special settings like which map or location they want their ads to appear in, and pay through PayPal (again, for amateurs and cheap startups or people who have nothing better to do). The problem is you can't be sure the developers haven't tampered with the ad system or are serving the ads to as many people as they say, so you have to develop trust with particular developers and only advertise with people you trust. Eventually maybe somebody can develop a rating or third-party evaluation system or website that shows everybody's rating and who's trusted or reviewed by other advertisers.
The other idea is to use higher dimensions (4) to make a cube with a grid of tiles for a strategy or civ game, warped (as a 3-brane in a 4-bulk higher dimensional space) to appear continuous and spherical. The buildings would be warped and it would be interesting. You would give every building a tile to sit on, and each would appear to be on a square grid that somehow repeats in 3D; you would be amazed at how that happens, it would be hard to wrap one's brain around, and you would learn something about higher dimensional space.
For the cloud platform, it makes sense that the developers would get paid a share of the play hours that users spend there. It makes more sense than having people pay for play hours (bandwidth, CPU, hosting costs) and buy a game on top of that. It would allow people to try any game they want for free as long as they have enough play hours. It might make more sense to hide this complexity from the player, but some games or apps might be heavier and more complex and incur a higher cost, so maybe it still makes sense to charge based on the product, just with a different number of credits or something, based on machine, bandwidth, and space requirements and also the developer's expected share. This may be good for any sort of app, like a graphic design suite or tool that would otherwise be pirated. Making the platform website-neutral may make it a general purpose solution for any kind of technology.
Actually I don't care about discussing this.
So I guess I'm going to mix the fun of a break I was going to take with actually trying to make money. I don't actually need to make money. I just need to be trying to make money. And I have a load of better things to be doing than games and anime. Which is kind of disappointing because I actually do need money to move out and do my own things. Such horses***. So I guess I have to come up with some crazy invention or tech and sell it to NASA/Roskosmos/whatever. @)$)#@( Which... seems doable... maybe in a year. Haha. Looking at the phase-shift propulsion developments going on it seems doable and I have to read to find out if or why it hasn't been applied to electromagnetic radiation yet.
So excuse the spam not related to games... but I do not get a break now to make anime and games. Back to work for me. And no hope of getting out of this hole. I don't get money for just correcting mistakes in relativity theory and being right, about new things in string theory etc.  Maybe the money fairy will come and sprinkle money on me in mysterious ways as I continue to make progress.
I think I have to talk about everything on my mind and go into detail on the ideas everywhere, because maybe the mental disconnect I feel is because I'm mulling this over by myself and there's a disconnect in communication or contact with reality. Also I thought maybe actually making the games I was afraid to make might make money. I.e., I was thinking that now I actually have the reason, or feeling of deservedness, to make games that shamelessly grub money while making all the kids spend all their parents' money, etc. I was afraid it would make TOO much money. Haha. JK. But I don't think that will go anywhere, as I'll still have to invest too much time or money and have nothing in the end.
But about the ideas. I think writing in gsjournal is the only way I have of writing papers, unless some other journal besides the one I tried (JPhysA) publishes me. Actually seems like a waste of time if I already have everything organized in a simple way in a single place.
I got banned by the idiots at physicsforums.net because they don't allow original research and thinking proposals. I think it was just because I was spreading dissent about relativity, or rather just flat-out saying it was wrong without any other solution or work done to show why. But I think if I make a second account... I don't know, they don't even reply to the contact I sent asking about reinstating my account. Ah... [Swearing]
Well, there are other English physics and science forums, if I'm that desperate. And there's sci.physics.relativity on Google Groups and the gsjournal forum. And dxdy.ru, which is Russian, and the lurkmore.ru discussion page behind the wiki entry on the theory of relativity. Probably others. I feel like it's a cop-out, and that only rejects who don't know anything resort to this kind of spamming, though I actually intend to have discussions and unique content. OK. Well, that probably can't go wrong. Looking forward to this.
[Edit] Not for the job thing. And I think that this might be at least interesting for the initial discussion I get when I start.
I actually have to get a job now because I'm not officially making money, even though it's ridiculous and I should be getting paid for the useful physics work that I do. All these people in universities getting taught this ridiculous stuff that they take so seriously, who, if they even tried a little seriously, in detail, would realize that it has a flaw (i.e. the Pythagorean triangle of relativity). All these people doing time-dilation experiments that don't get values corresponding to the gravitational time dilation they should experience. And they have all the values they need for it. Just the gravitational acceleration. The velocity is in the rest frame. Anyways. The whole system is flawed, and some people (like me) don't get money when they should, while somebody else doing the same work would get paid. I have to actually try to make money instead of doing useful work. Grants are only for university students or professors. Imagine me doing web development work or app design or something. Ridiculous. And getting a job is just basically getting a rich person to support you. Fucking ridiculous! Agh. So frustrated. Guess I have to go apply for jobs on craigslist. Might just make sense to rob a government, Fight Club style.
I'm not sure if working on this takes too much of a mental commit charge compared to the useful physics work that I do. I didn't sleep, though, so it's not like I'm capable of doing anything besides mental work anyway. I think today I might be able to do some work on the dopamine engine / subfield game, but I already put in my shift on the physics, so I'm going to take a break overall. I think I found that by working on the fun stuff, the stuff I'm itching to do, I did away with the unnecessary work of generating the acceleration space curvature (maybe this was just because I would've come to it anyway, or because I hadn't slept at that time either), while doing something more productive in the meanwhile, and so I forked a new project to possibly do that while avoiding the other possible work on the acceleration space. Though I didn't do anything with that. I'm now going to continue with the mainline project once I finish, though I'll work on the mini side tasks, or rather the other main tasks, while I #ifdef 0 the unfinished acceleration space walking.
I was thinking there would be eg a gamedev.bc.ca or something but that's a government level domain address.
I was thinking of taking the remaining 5 months of the year as a break to work on anime and games. I don't have anything physical to show for it (no new tech, or results, or anything to show that it is correct and a new discovery), but it was a good start and I think I should do it anyway, because the only way to make these things is when you're not thinking about them too much. With the attitude I have right now, that this is not the main thing I do, that I'm not making a living off of it or trying to make money from it, or rather that I don't charge money for it, and that I have substance to put into it that isn't solely from the computer, anime, or art-drawing field, I can somehow, deep inside, ward off the bad attitude that is naturally directed at game developers, and the unproductive and self-destructive mindset that game developers who try to make a living off of this fall into. Because this is not the main thing I do, because I do this just for fun, I will have a different result than before. And really, with physics, you've got to take it easy. When you work on something, you have to keep your mind flexible and attack new problems, and your mind continues working on it anyway somehow. Physics is the most relaxed field... if you don't go into quantum physics, I guess. Or take the wrong turn at Hicksville. Just kidding. But I mean that physics, or any kind of science, is actually productive and useful and is actually respectable work. So you can take breaks. This is a total mind shift. I dunno. If you actually make a contribution and do something useful. And the more deeply somebody is in a field (and indebted with its mistakes), the more sheepish and miserable they become, e.g. the people I saw at the electronics store who were having second thoughts about going inside, haha.
So I think keeping as far detached from the dirty grunt work is the best part, that is, keeping an overview rather than an in-the-mud view. That is the luxury some people have when they start out. And must eventually succumb to. If you're lucky you develop some kind of innate knowledge that you can transplant into another field.
 I think art, like music, and maybe anime and/or games, can have an inspirational or "coolness" impression effect, that can make people take up subjects or to give insights, like the techno song that made me take up physics... that I won't talk about. Yes, that is possibly what I intend to do, but probably won't have enough time for. Really I was doing it before I even heard it though. And don't think I have enough material for that kind of impressive effect.
So this has to have some kind of kick or purpose beyond being just a game. So here's an idea for what was originally going to be "isometric forum" but is now "subfield", the physics lecture hall. Basically you're a student and somebody's giving a lecture. You have all the controls of an FPS. You can sit down in a chair, crouch on your desk, lie on the floor, or, if you spent some of your student money to buy paper, crunch it up and throw it at the instructor, or make a meme face. The instructor can walk up to the chalkboard, equip it, and type on his keyboard, which will render the text on the chalkboard. He or she can also upload a PDF or two, and JPEG's or PNG's or GIF's etc, and use them as slides, or PPT slides themselves, and give out PDF "handouts", all integrated into the rendering engine. Outside there's a war going on and the roof's getting blown off, but here you are, discussing physics. You can assign different times and names to lecture halls, for example, to say who's going to be lecturing when, and if this is worked into an MMO design with potentially limitless numbers of players (too lazy to say how right now), then this can be a continuous 24/7 thing like Habbo Hotel.
Also, there's cafeterias, dorms, and a whole campus etc, and you can design your own, and host your own 24/7 MMO servers.
You can also, e.g., be in an office tower, or at your desk processing PDF's from arxiv.org, or at a business meeting presentation with overhead projector PowerPoint slides of arxiv.org PDF's, or discussing them, or maybe there's a rocket launch that you're all furiously working to make happen, or you're at the Pentagon discussing top secret PDF's, or in a surgery room, or in King Tut's tomb, uncovering PDF's hidden for millennia, or PNG's, or JPEG's, or GIF's, etc, or MP3's, or WAV's, or... the secret chalice of the damned of porn .AVI .MPG .MOV .MP4... ritual file opening and sharing. Basically a game of file exchange and information digging and role play and sharing. The idea was to make a 3D forum for discussion of physics or something, with lecture halls and buildings with levels etc.
I think it would be interesting as a way to give presentations or lectures for people with their own fringe theories or something. Personally I think if people dig up random academic papers on physics for example and discuss them or have somebody lecture them and give everybody a brief run-down and digest, it would be interesting, and have some kind of purpose. It would be like a forum in a way, or maybe a Khan Academy of sorts, but for more sporadic and individual information bites. And the title "subfield" actually fits quite nicely with it. Or you could be mechanics in an air base garage working with UFO mock ups, trying to come up with propulsion, and the next day you come over again and there's a revised model or something eg. Because it's totally open source and moddable and supposed to be easy for anybody to pick up at a glance. Oh and you can use voice chat of course. That's probably what the lecturer would do at least.
I'm just happy racking up screenshots and pieces of art on my DeviantArt, which I get enjoyment out of. That is, I'm saying this because you may be wondering why the whole giant game is being made at all. I looked through so many, well, a few, new pieces of mecha art. It feels like whoever read what I wrote and down-voted just doesn't understand what the heck I'm talking about, e.g. the part about being low-poly and almost Minecraft-like. I hate Minecraft and its style, for the purpose it's being used for. What I meant is crisp, polygon-perfect models, where every polygon is PHYSICAL and has reactive properties, if you shoot it, if you hit it, etc., and where you prefer to use e.g. texture data for details rather than extra polygons. There's a certain Pavlovian action-response association you get when you severely trim the fat on the polygons and graphics, until only the reactive and functional components remain. It becomes dangerous, like CS 1.6. Why would I waste extra man-hours making all these details and garbage if I can do much more productive things? It should be like a game art style. Trash style. Which just gave me ideas: to put meme faces and emoticons superimposed on, and part of, the 3-dimensional world as billboards etc.
This is another idea I had. Basically, imagine you need to keep a list of billions of visitors' IP addresses (32 bits each) to your site, or worse, IPv6 (128 bits each). Storing billions of IPv4 addresses as a plain list, at 4 bytes each, already takes 4+ GB. What if I told you you can reduce that to 400 MB?
The number of entries encodable by using combinations (of 2b-digit numbers) rather than permutations (of b-digit numbers) is:
sum_{k=0}^{2^(2b)} (2^(2b))! / (k! (2^(2b) - k)!) = 2^(2^(2b))
And the amount of space used by them (combinations vs permutations, worst case scenario of all the IP addresses):
I calculated this to be around 400 MB using, I think (this might be completely wrong as I don't remember all of it exactly, and it does seem dumb to use combinations rather than permutations):
K * 2b bits
Where K is the smallest limit such that:
sum_{k=0}^{K} (2^(2b))! / (k! (2^(2b) - k)!) >= 2^32. The K is thus limited (i.e. the number of numbers used in the combination, out of the absolute possible maximum of 2^32 or even 2^(32*2)).
Versus the permutation approach: 32 * 2^32 bits, or only 2^32 bits if using a bit field.
I graphed them and noticed that at early levels (i.e. fewer than 32 bits) there was more encodable data by e.g. using 2b-digit-wide number combinations (given the correct way to encode and quickly retrieve the visited IP addresses). The method with permutations, if allocating data up front, would be to keep 2^32 bits in total, with one bit for each possible IP address.
This is better than the up-front cost of a big permutation integer's bits.
A fixed amount x of 2b-wide or 3b-wide numbers, instead of b-wide numbers, might be much cheaper (less memory) for the same number of values or more, compared to the up-front cost of permutations of a single (2^b)-wide number.
How this would exactly be stored is the main issue.
Keeping a linked list is basically a hidden cost. But storing it as one integer, or an array of variable size, helps; or an up-front fixed-size integer where repeats mean unused and all -1 means nothing added, which rather defeats the point, because the big-permutation-int is more efficient: it can store more for the same number of bits, since "unused" repeats etc. have meaning as unique values. But there might be a way to make the up-front permutation int variable-sized to save space, with an equal distribution of possibilities for any same number of visitors, growing in bit size only as there are more visitors.
The max size for a big permutation int is 2^32 bits, or 0.5 GB, for all 32-bit IP visitors in the worst case. A few million visitors is about 1/1000th of the 4+ billion possible IP addresses (bits in the big permutation int), or roughly 0.5 MB.
The best idea though is to keep this variable sized array for all the numbers for the combo, use a separate integer to count how much bits or digits there are, and find some way to use permutation to encode the combination numbers to get rid of ordering redundancy.
The basic idea is that, just looking at the number of permutations vs combinations of b-bit numbers, there are more combinations than permutations.
As an example, using a toy system of 2-bit IP addresses with combinations of up to k = 2 of them, we store e.g.:
0 = 0
1 = 1
2 = 2
3 = 3
0,1 = 4
0,2 = 5
0,3 = 6
1,2 = 7
1,3 = 8
2,3 = 9
I.e. we stored 10 entries (11 counting the empty combination), more than the 2^2 = 4 values a single 2-bit number can represent, because not having a number is also a value in the combination.
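One way to assign those indices without storing the lists themselves is lexicographic ranking; a minimal sketch under my own naming (the table above is singles at 0..3, then pairs offset by n = 4):

```cpp
// Lexicographic rank of a size-2 combination {a, b} (a < b) drawn from
// n possible values: count all pairs whose first element is below 'a',
// then add the position of 'b' within the 'a' block.
int pairRank(int a, int b, int n)
{
    int r = 0;
    for (int i = 0; i < a; ++i)
        r += n - 1 - i;       // pairs starting with each value below 'a'
    return r + (b - a - 1);   // position of {a, b} within the 'a' block
}

// Encode as in the table: singles map to themselves, pairs start at n.
int encodeSingle(int a)             { return a; }
int encodePair(int a, int b, int n) { return n + pairRank(a, b, n); }
```

encodePair(1, 2, 4) gives 7 and encodePair(2, 3, 4) gives 9, matching the table above.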
Also, has anybody already noticed that the Fermat numbers without the +1 are equal to the sum of the number of combinations of k numbers out of 2^b, for k from 0 to 2^b? I.e. it is equal to 2^(2^b), which matches the fuller combinatorics equation if you graph them together.
And compare it to (2^b)^b, the number of permutations (with repetition) of b b-bit numbers (i.e. a single b*b-bit-wide int).
With IPv6 it's going to be even harder to keep track of lots of IP addresses, or even checking through them for the presence of one.
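For comparison, the up-front "big permutation int" (one presence bit per possible address) is simple to sketch; here with 16-bit toy addresses instead of the full 2^32, which would need the 0.5 GB mentioned above (the struct and names are mine):

```cpp
#include <cstdint>
#include <vector>

// One presence bit per possible address. For real IPv4 this is
// 2^32 bits = 512 MB; with 16-bit toy addresses it is 2^16 bits = 8 KB.
struct AddrBitmap {
    std::vector<uint64_t> words;
    explicit AddrBitmap(unsigned addrBits)
        : words((std::size_t(1) << addrBits) / 64, 0) {}
    void add(uint32_t a)       { words[a >> 6] |= uint64_t(1) << (a & 63); }
    bool has(uint32_t a) const { return (words[a >> 6] >> (a & 63)) & 1; }
};
```

Membership checks are O(1), which is the advantage over scanning a list of addresses; the price is the fixed up-front allocation.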
So I think it's about time I talk about that secret super-duper technique I was going to talk about, but said I would post elsewhere first, because it was so good. I'm already satisfied with how things are going (in other areas), so I don't mind posting it. I made all these screenshots and methods in 2016.
I was wrong in the screenshots with only 1 prism and 3 triangles, as that means only 3 3-plane point intersections from a total of 4 planes. There should actually be 4 3-plane point intersections from them: 123, 124, 134, 234
The basic idea is that we can think of any point in any dimensional space, using axes, as one of two things. 1.) The simple case: a point is the vector components (coordinates) along each axis added together to make one vector and thus a point. 2.) Equivalently in 3D and lower spaces, but not equivalently for higher dimensional spaces expressed in our 3D world, we can also think of the point(s) as the intersections of hyperplanes (3D normal vector + distance), where each plane sits at a distance along its axis (the axes being skewed so they all fit in a 3D representation) and its normal is the direction of that (skewed) axis vector. The result is 4 3-plane point intersections for 4D space, and C(7,3) = 35 for 7D space, etc. The skewed axes can also be used with method #1, by adding vectors along the skewed axes.
You can use the axes for whatever you want: x, y, z, t, or mass or acceleration components. They can be connected together in 7D or other dimensions to form a shape. In some cases (#1, but not really #2) it is best to choose skewed axes that don't counteract each other in opposite directions, so that less space is shared; this is unavoidable with method #1, but method #1 is the one that gives a single-point intersection.
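Method #2 needs the intersection point of three planes, each given as a (unit) normal plus a distance along it; a sketch via the standard Cramer's-rule formula (the types and names are mine):

```cpp
struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 scale(Vec3 a, double s) { return { a.x*s, a.y*s, a.z*s }; }
static Vec3 add3(Vec3 a, Vec3 b, Vec3 c) {
    return { a.x+b.x+c.x, a.y+b.y+c.y, a.z+b.z+c.z };
}

// Intersection of three planes dot(n_i, p) = d_i via Cramer's rule:
// p = (d1 (n2 x n3) + d2 (n3 x n1) + d3 (n1 x n2)) / (n1 . (n2 x n3)).
// Assumes the normals are linearly independent (denominator nonzero).
Vec3 intersect3Planes(Vec3 n1, double d1, Vec3 n2, double d2,
                      Vec3 n3, double d3)
{
    double det = dot(n1, cross(n2, n3));
    return scale(add3(scale(cross(n2, n3), d1),
                      scale(cross(n3, n1), d2),
                      scale(cross(n1, n2), d3)),
                 1.0 / det);
}
```

For the axis-aligned planes x = 1, y = 2, z = 3 this returns the point (1, 2, 3); for four skewed axes you'd take all four triples (123, 124, 134, 234) to get the four intersection points.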
Also, here is the code for the caloric metabolic graph:
And "testfolder" with exe:
You can ignore the copyright / license I have there, if you make an improved version and sell it or make it free or for smartphones.
Also found this later version of the zombie shooter.
This one has shooting, reloading, jumping, crouching, moving boxes, etc, with sound effects. Press F1 to toggle between third and first person view.
This one here is something different that I can't run because it crashes but maybe it will run on Windows 7, which I think I used when making it.
Here is what I made to track pixels between frames for the 5-point algorithm that I intended to use for scene reconstruction.
The first result I had with the tracking was strangely similar to a map of surface normals. I don't remember how it happened, but I think it was because I was tracking 2x2 pixels (so very inaccurate and prone to jumping to other likely matches) or because the checks I used were different.
Just another small topic. With today's sub-200 ms lag, it may be possible to make a lockstep-protocol FPS where there wouldn't be any advantage to having a faster computer or internet connection, so that people with faster computers or connections don't get head-shots or escape from fire while the other people are a step behind. Also, you can quantize the pitch and yaw rotation to pixel resolution and pack them into a tiny packet if you narrow down the range and put all the bits into one variable, and the same for movement: only 4 bits for forward, backward, left, and right, or even less if we only allow the 9 possible movements (the 8 directions and standing still), packed together, perhaps adding jumping and crouching for 11 states, and maybe shooting for 12. This can decrease the data expenditure and make it possible to fit more players into an MMO FPS. Also, the master server can act only as a matchmaker: it occasionally tells the clients when somebody joins, and their address (ideally IPv6, but possible with IPv4 behind NAT), and the clients talk to each other, so that everybody in the PVS knows each other, updates their positions, and keeps in contact. If anybody from a neighboring region should come into contact with another neighbor's, the intermediary informs the second that the first is there and passes over their address, so they can establish a connection themselves and update each other. At the least, with a client-server lockstep, nothing unfair or divergent would happen between different clients' computers, though they might lag: the game might continue on the server faster than a client updates, while the client still sends commands and may process the game, but doesn't receive updates or render frames fast enough.
Though this is for a client-server model, and would require more work for a distributed MMO model where each client is a server in its own right (really peer-to-peer, with a master server that generally oversees things). So, all together, 9 values for movement direction x 2 jumping x 2 crouching x 2 shooting gives 72 values, which fit in 7 bits.
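The 72-value packing can be sketched directly (the layout is my own choice; any fixed ordering works as long as both ends agree):

```cpp
#include <cstdint>

// Pack one frame of FPS input into 7 bits: 9 movement states (8
// directions + standing still) x jump x crouch x shoot = 72 values.
uint8_t packInput(int move, bool jump, bool crouch, bool shoot)
{
    return uint8_t((((move * 2 + jump) * 2 + crouch) * 2) + shoot);
}

void unpackInput(uint8_t v, int &move, bool &jump, bool &crouch, bool &shoot)
{
    shoot  = v & 1;  v /= 2;
    crouch = v & 1;  v /= 2;
    jump   = v & 1;  v /= 2;
    move   = v;      // 0..8
}
```

The largest packed value is 71, so the field fits in 7 bits and the eighth bit of the byte is spare.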
I will also briefly describe how the orientability maps are generated and how they are used.
To generate an orientability map, a model, which is not supposed to have any holes or inner chasms with a detached inner part (although those are supposed to be clipped and removed during generation), is unwrapped into a sphere. Then, as if a laser scanner scanned it from every angle, it is written into textures using polar coordinates: one for the colors, and the x, y, z coordinates of the original model at those points encoded as RGB (3 textures for the coordinates).
I tried different methods of unwrapping, unsuccessfully until the third method.
The correct method was: for a point of a triangle marked as hidden, with a ray going from the center to that point and intersecting some point farther out along that line, take the normal of that triangle or line (pointing in the same direction as the triangle's normal) and move the point a tiny distance that way, to give it room, so that when the hidden point or points are moved outward spherically, to a similar radius as the non-hidden point, they will have room and not end up behind it: tet->neib[v]->wrappos = tet->neib[v]->wrappos + updir * 0.01f + outdir;
( Line 12334 in https://github.com/dmdware/sped/blob/master/source/tool/rendertopo.cpp#L12334 )
And the rest are the work leading up to the correct rendering methods:
In the shaders there is a lot of left-over code, e.g. from shadow mapping and normal mapping, that can be removed and will probably make it faster. Each step of the search for the right pixel data, in the fragment shader, goes like this. 1.) Update the next texture jump size in pixels, at a decreasing rate. 2.) Calculate the post-jump "upnavc" and "sidenavc" coordinates from the previous "jumpc", for the coordinate and color textures. 3.) Obtain the 3-d coordinates of those pixels and transform them by the orientation and translation of the model and of the camera; this gives the "upnavoff" and "sidenavoff" 3-d coordinates for the "offset" or "not correct" position (as opposed to the position that would be obtained if the exact match for the pixel were found in the model). 4.) Turn the "offsets" into literal offsets from the previous 3-d position "offpos", and normalize them. 5.) Calculate "offvec" by subtracting "offpos" from "outpos2" (obtained by projecting a ray from the camera's near plane, or in the perspective case from the eye point, to the plane of the quad being rendered); only the camera-aligned x and y coordinates are used, by decomposing "offvec" along the camera's up and right vectors. 6.) Using "sidenavoff" and "upnavoff", the actual 3-d offsets produced by a jump of 1 pixel right and 1 pixel up along the textures, compute the next texture jump "texjump" that moves closer to "outpos2" (on only the camera-aligned right and up vectors, ignoring depth). 7.) Obtain the new x-, y-, z-coordinates and the offset length from the required "outpos2"; if it is still too large at the end of the search, the fragment is discarded, as it is probably a place on the screen where there shouldn't be any part of the model.
The perspective projection of orientability maps works the same way, except that the "viewdir" must be calculated individually for each screen fragment for use with the search, and instead of using "outpos2" on the screen to get the camera-aligned up- and right-vector coordinates, a ray is drawn through that fragment from the camera origin and the point in 3-d space closest to the ray is searched for.
The "islands" are pre-generated by the editor based on all combinations of angles and offsets along the up- and right- vectors (ie the pixels there). You can also simplify the rotation functions in the shaders and tool when rendering to not use so much padding and modification of the angles, e.g. adding or subtracting half or full pi and multiplying or dividing by two.
An improvement can be made to avoid the edge cases where part of the back side is rendered, to check if the normal at that point is pointing backwards, and if it is, to jump back to the previous place, but still decrease the next jump size and to increment the step counter.
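Stripped of the texture plumbing, the search above is: sample the stored 3-d position at the current texture coordinate and at one jump right and one jump up, decompose the remaining camera-plane offset onto those two directions, and step. A much-simplified CPU sketch (a linear stand-in for the coordinate texture, and a direct 2x2 solve in place of the shader's decreasing pixel jumps; all names are mine):

```cpp
#include <cmath>

// Stand-in for the coordinate texture: (u,v) -> camera-plane (x,y).
// In the real shaders this lookup goes through "jumpc"/"upnavc"/"sidenavc".
static void sample(double u, double v, double &x, double &y)
{
    x = 2.0 * u + 0.5 * v;   // arbitrary invertible linear map
    y = 0.3 * u + 1.0 * v;
}

// Find (u,v) whose sampled position matches target (tx,ty): take one
// "jump" right and one up, decompose the remaining offset on those
// directions (the "sidenavoff"/"upnavoff" idea), and step.
void searchTexture(double tx, double ty, double &u, double &v)
{
    const double jump = 0.01;
    for (int i = 0; i < 8; ++i) {
        double x, y, xu, yu, xv, yv;
        sample(u, v, x, y);            // current position
        sample(u + jump, v, xu, yu);   // one jump "right"
        sample(u, v + jump, xv, yv);   // one jump "up"
        // Solve a*du + b*dv = offset, where du/dv are the per-jump deltas.
        double dux = xu - x, duy = yu - y, dvx = xv - x, dvy = yv - y;
        double ox = tx - x, oy = ty - y;
        double det = dux * dvy - dvx * duy;
        if (std::fabs(det) < 1e-12) break;
        double a = (ox * dvy - dvx * oy) / det;
        double b = (dux * oy - ox * duy) / det;
        u += a * jump;
        v += b * jump;
    }
}
```

For the linear stand-in one solve already lands on the target; the real shader has to repeat the step with shrinking jumps because the coordinate texture is not linear.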
This is probably the last big idea I had, if I can't dig up others.
It started here: https://www.gamedev.net/topic/676377-local-hash/
Since then, I developed it more:
 The "pencil" source code is now available here (with a working CentOS 7 makefile so far only): https://github.com/dmdware/pencil
 The "testfolder" with the exe and resource files is here: https://drive.google.com/file/d/0B2Wuir9-DSiURWxpd09YblNIT28/view?usp=sharing
And I got some results by accident when attempting to write a "psychodeloscope": basically a neural network that would process an image or camera capture, plus the microphone etc., as if it's sensing, into one array of bits, through several layers and convolution, with half of the array being hidden (so essentially double the number of bits needed for all input and output), and produce an image on the screen, its "face", and sound, etc. I made a mistake at one point in how to write expanding circles from the center of the screen, which were necessary for something I forget. Basically it would actively "dream" by trying to produce an output that would achieve a desired result, with a hidden layer or something, with part of the next frame's input containing the prediction of the result of the world, which it would then map to the previous input (hidden and otherwise). The hash or brain would be like a learning algorithm that advances to the next possible combination of bits, skipping all the previously attempted configurations and any in between that don't work for that last mapping and any previous ones (this can be done iteratively, to make it faster, so that only the last mapping is considered, not all the history of them, so that, say, over 15 years, it might learn slowly like a child). Being functionally complete, and having enough layers (which turned out to be too many, making the idea unusable, except maybe for image manipulation effects as a plug-in in image editing programs, and many other uses like sound effects), it would be able to turn any output into any output, with consideration of the hidden layer it uses to "talk" to itself. I got an interesting effect. The effects from the bit-twiddling / bit-shifting were so nice, I thought they'd make a great album cover, or are currency-quality. They were just attempts to apply the mapping to frames from a movie and to images.
Some of them look like "the fourth plane of hell" or something. I used one of them as the wallpaper and art for my phone. I think there are more images on that page above; not sure, too lazy to check. I also had an idea to make a "dreamhub.com", kind of like a "DreamTube", where people would upload games or videos encoded in the neural format, so others can play them forward or play around with them. There might be a use for these hashing algorithms as an image storage and compression format, or a video one.
Here are the different hashing algorithms I tried (rather the operations in the hash):
[attachment=36119: screenshot 2016-09-06 03-05-05 nand.png]
NAND version 2, with only 3 operations, not involving a second mask (I don't know what that means, reading from my own notes, but I remember trying to use two masks and two bits of input, instead of just two inputs and one mask, for one output):
[attachment=36120: screenshot 2016-09-06 03-07-41 nand2 only3ops, noinvolvingsecmask.png]
NAND-NOR combination, seems broken:
[attachment=36121: screenshot 2016-09-06 09-20-29 nandnorseemsbroken.png]
NOR, which seemed good:
[attachment=36122: screenshot 2016-09-06 09-21-48 norgood.png]
XNOR (the original one, later found to be functionally incomplete just by itself; it also had a NAND, which was then superfluous, because extra levels were required for a full set of mappings, though it did produce the interesting image effects above):
[attachment=36123: screenshot 2016-09-06 09-25-25 xnor funcincomplete.png]
I also made a "pencil" app on CentOS that used a NAND hash; you assigned the masks and the behavior: whether it fed back on itself, whether it was supplied an increasing counter integer, or whether it got a timestamp each time to use as input:
[attachment=36124: screenshot 2016-08-31 06-19-34.png]
[attachment=36125: screenshot 2016-08-31 06-25-08.png]
[attachment=36128: 09j09 (copy).png]
[Edit] For example, an image encoding using the hash could work by mapping the 24-bit pixel index to the 24-bit RGB value, with the number of pixels stored in the file header; or the hash can be mapped to output the color values (in a hidden layer or not) as output to the previous input.
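To make that [Edit] concrete, here is a minimal toy sketch of the encoding idea, with a plain lookup table standing in for the trained hash (all names are illustrative; the real scheme would learn the index-to-color mapping rather than store it verbatim):

```python
# Toy sketch of the "hash as image codec" idea: the file stores only the
# pixel count; the mapping from 24-bit pixel index to 24-bit RGB value is
# what the trained hash would have to reproduce. A plain dict stands in
# for the hash here (hypothetical names, not the original implementation).

def encode(pixels):
    # "Training" degenerates to recording the index -> color mapping.
    return {"count": len(pixels), "mapping": {i: c for i, c in enumerate(pixels)}}

def decode(blob):
    # Query the mapping for every pixel index in order.
    return [blob["mapping"][i] for i in range(blob["count"])]

image = [0xFF0000, 0x00FF00, 0x0000FF, 0xFFFFFF]  # four RGB pixels
assert decode(encode(image)) == image
```

The compression, if any, would come from the hash representing the mapping more compactly than the raw pixels, which this stand-in obviously does not do.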
This is a minor topic discussion. Just for the pics.
Idea for billboard RTS, using stolen art:
This is just going to be an image dump.
Also, I found more images to add to https://www.gamedev.net/blog/2402/entry-2262975-asdfasfd-edit3/
I have some more time for ideas. This one doesn't require writing, just mostly copy-pasting.
This is an idea for a zombie shooter, possibly a novel or a (computer-generated or trash-snipped-together-from-random-Google-images?) movie now.
The original game exe is here, for the earlier version:
And this is an even older version:
 The newer version in the exe:
 More images, of a newer version:
The Greenlight page for it is here (to follow the train of ideas, read this Greenlight page before the "idea dump" below):
I was working on this and made the YouTube videos in the Greenlight page as a way to express the ideas for the cutscenes, without a lot of work. Now I realize this can be made into a whole short length film, using the material I have. I at one point recruited some people to make some music, and got these two. The first one is for the "rave scene", with a beat reminiscent of zombie nation. The second is perhaps for the ending.
 The people that wrote these awesome tracks are... somewhere on my Skype. I believe it's "KODA production" for the first one, and for the others "MSF Studio": https://soundcloud.com/msfstudio/msf-studio-vector-ost
The clips are here (there are a few things more at the end here than in the Greenlight page texts):
Here is a dump of the things that I came up with later than the texts on the Greenlight page:
Winding path in forest grass trees
Going down dip then up between trees path one in middle shadows shade
Fancy motorcycle foreign
Start off on bike just escaping
Marsh yellow wheat or grass trodden or flattened beach front forest clearing interspersed
Nobody would've gotten infected if they stayed indoors when the streets became dangerous
But they had no choice once food started running out
It was a survival spectacle for anybody who stayed indoors and witnessed countless survivors trying to make it to
safety as they ventured into the unknown, only to be mauled like the rest after giving a good fight
there's still room on the zombie bandwagon
kappa beta phi bros
time outlast fortify
I have some original ideas. Using Anaglyph 3D glasses, or a distributed peer-to-peer MMO, or
a battle-with-the-unconscious where you try to keep your body and mind satisfied or else
your character does stuff on its own... e.g., killing a roommate out of insanity or eating
habitually, and you have a level of control over your character based on how satisfied your
subconscious is. E.g., it will wander off on its own and you have a throttle that determines
how much 'effort' you can spend. And you can only steer it one way or another, and your
character basically goes more and more mindless until effort is regenerated. Maybe for more
fun, the 'dark side' and 'light side' of the character will be controlled by two players,
and the conscious side gets points to spend on actions that are used up. The unconscious
gets points when there is an unsatisfied need. And in the meanwhile it can also act as
impulses or urges.
Maybe the conscious side just picks choices that the unconscious gives it.
2015 11 03
new idea for software
barter efficiency hypothesis
(NOTE: I think this idea is crap and I just wrote it for fun. This is the system they use after the apocalypse.)
based on the premise that the value expressed in
money terms might not fully capture the value of
all goods and services to everyone in a global
economy, and that sometimes exchange between two
or more parties can be cheaper and more agreeable
using barter, bilaterally or in more complicated arrangements.
the money economy is barter too, in the sense that
when a person sells something, like their labour,
they barter it for what that money buys. because
money values must abide by the common denominator,
they cannot express what value an individual might
place in something. a person might like ice cream
from a local shop more than it is worth in money.
of course this person would never pay more than it
is worth, though he may be willing to do something
that somebody else might value more, who in turn
could provide the ice cream parlour owners with
something they value. what if there was a place
where people could browse through billions of goods
and services to match their needs, while looking
through requests for goods and services from other
people? although it is possible
to do barter already, the barter network aims to make
it easier and enable people to expand their possibilities
by giving people their own currencies or certificates
for what they are willing to do or exchange. these
certificates or promises can be traded for higher
value with other users, making everyone a trader.
on modern barter websites the trade is too short.
money is so useful because it holds its value
while allowing for complex exchange.
who says a lump of coal is worth $200 to you?
who says that a plate of steel is worth $100 to you,
if the car manufacturer is willing to buy it at that price?
But it might be argued that the extra price you pay
with money is the cost of convenience and liquidity.
And not everyone
is worse off under a money economy, as some will
earn more sales than they would under barter.
money is debt but money isn't an indicator of how
much society owes you. the origin of money is
in certificates issued
by different tradesman who promised to fulfill
certain deeds. and i believe this better describes
the complex choices people have today in selling and
purchasing goods and services. i believe that
inaccuracy and mismatch in what something is worth
to different people leads to unemployment and
social woes, and that the world would be a better
place if people bartered. what if everyone could print
their own money, and the government couldn't drive
inflation up? everyone becomes the sole proprietor
of their own business.
perhaps initially useful for the exchange of
virtual goods or services, this might later become
a useful medium for the exchange of used goods, talent,
local one-time jobs, and perhaps even entire living
arrangements. perhaps one day you will do all your
shopping through barter net. perhaps you have a
broken phone, or a game console you can't sell, and
you can't find a buyer and don't know how to. you can
outsource finding a buyer to another trader. instead
of asking for money, think of something else you might
like. and combine it into bundles. maybe it won't
be the end all solution to replace money, but
at least the medium of exchange of a community of
digital content creators.
perhaps it might be an alternative to piracy.
the age of digital barter.
so how it would work basically,
you would have a collection of certificates,
and other people would see and it could say
"so and so wants this" when they click on your
certificate, and you could propose to trade for
something they have, or you could see a list
of offers of what somebody is offering.
the success or failure of this would probably
depend on getting large sellers onboard, like
living places. that is the whole point of a
medium of exchange, to sustain oneself. if it
can't do that, it's useless.
Make it a distributed network hosted on user devices
that issues certificates tied to the user, like bitcoin.
Certificates issued can be disputed if what they promise
isn't delivered and seller's reputation will be tarnished
so others will not make the same mistake.
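As a sketch of what such a certificate record might hold, assuming a simple in-memory model (all names and fields here are hypothetical illustrations, not a real protocol):

```python
from dataclasses import dataclass

# Hypothetical sketch of a barter certificate as described above:
# issuer-tied, disputable, with the issuer's reputation tarnished
# when what the certificate promises isn't delivered.

@dataclass
class Certificate:
    issuer: str          # the user the certificate is tied to
    promise: str         # the good or service promised
    holder: str          # current bearer; changes as it is traded
    disputed: bool = False

@dataclass
class User:
    name: str
    reputation: int = 0  # visible to others, so they avoid the same mistake

def dispute(cert: Certificate, users: dict):
    # The bearer disputes an undelivered promise; the issuer's score drops.
    cert.disputed = True
    users[cert.issuer].reputation -= 1

users = {"alice": User("alice"), "bob": User("bob")}
cert = Certificate(issuer="alice", promise="repair one bicycle", holder="bob")
dispute(cert, users)
assert cert.disputed and users["alice"].reputation == -1
```

Attachments like addresses, images of the promised item, text, and links would hang off the same record.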
For example, I would barter my used videogames.
In search of something interesting.
Or some old device that's broken that I don't use anymore,
Or anything I don't want.. food? and shipping?
Different things can be attached to certificate...
address, email, phone, images of promised physical item,
text, links, etc.
What would be valuable is the services of 3d modellers,
artists, or voice actors, or musicians, or novel writers,
comic book artists, etc. Maybe allow people to choose
I have a TON of stuff I don't need anymore... CD's, books,
clothes, broken or old devices, etc.
A place to find old used or broken electronics
Even if you don't need it, you might trade it with someone who
Testimony ratings feedback
Only revealed when a user redeems the certificate
See everybody online
People really don't want to spend money
But if given the chance to trade something they don't need,
they may give it a try
Sell computing power
Ratings feedback reviews
Of redeemed certificate
worst case scenario is, there's a lot of useless junk that
nobody wants, or people end up with certificates they can't do anything with
think what marriage will be like
proposal for marriage, what do you have, money? "i give you
television, remote control... " (borat)
They won't even consider it (Amazon), pushing the prices low
enough for them to have an interest in it, when a more
agreeable trade might be had using barter certificates
What would be most useful is transportation, food/catering,
and living arrangements, and electricity and utilities
As an experiment,
Track what you buy and where that money goes and what they buy
Can a barter be made?
Maybe there's a business in buying broken smartphones and
computers and using them for computing power
Computing what? Just go to freegeeks and get their
Maybe resell them?
or ebay "core 2 duo motherboard"
2016 06 11
Barternet firechat Bluetooth kazaa post z calypso
Rave light effects fog lazers
Recovery plan sold all children into slavery for ten years futures
Inside industrial facility all inside
Ladders walkways passages hanging chains vents pipes smoke above windows metal walls grates below walls wires rooms levels big b sky clouds above nature around secluded river
2016 11 29
looking back on the global financial situation
countries suffered from the profit bottleneck
in that nobody would produce unless they earned a profit, which took money out of the economy if they did not spend it
unless a country or company chose to maximize something else,
like a mineral, or its own currency
barter also offered a way out
wave mail distributed messaging system
and spice trade, the distributed file sharing program and geo-web inter-mesh server
tunnels http over bluetooth or wifi direct
web pages are now binary to save energy and bandwidth
barter credits circular debt system
money started out as an "I owe you" system where a tradesman promised to deliver
some good or service to the bearer of a token for which he took something
but the token was then not destroyed and became a general government-backed instrument of trade
in the digital circular debt system
these "I owe you's" could be passed around
to make sure there was no fraud, each transaction would be numbered, by both ordinal and timestamp,
and recorded in nearby neighbour devices who would also send it out to others to make fraud harder by witnesses
whenever an "I owe you" completed a full circle, for example from the original tradesman A, to device B, to C,
and back to A, it would be eliminated by the system by again propagating the erase signal to all parties and witnesses
which is also necessary to eliminate bloating in the system
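The circular-elimination rule above could be sketched like this, with a single in-memory ledger standing in for the distributed witnesses (names are illustrative, not a real protocol):

```python
# Sketch of the circular-debt elimination rule: an IOU issued by A is
# passed around; when it returns to A, the system erases it (propagating
# the erase signal to witnesses in the real scheme) to prevent bloat.

class Ledger:
    def __init__(self):
        self.ious = {}   # iou_id -> (issuer, current_holder)
        self.seq = 0     # ordinal transaction number (plus a timestamp
                         # in the real scheme, recorded by witnesses)

    def issue(self, iou_id, issuer, to):
        self.ious[iou_id] = (issuer, to)

    def transfer(self, iou_id, to):
        self.seq += 1
        issuer, _ = self.ious[iou_id]
        if to == issuer:
            # Completed a full circle (e.g. A -> B -> C -> A):
            # eliminate the IOU instead of recording the transfer.
            del self.ious[iou_id]
        else:
            self.ious[iou_id] = (issuer, to)

ledger = Ledger()
ledger.issue("iou-1", issuer="A", to="B")
ledger.transfer("iou-1", to="C")
ledger.transfer("iou-1", to="A")   # circle closed, IOU eliminated
assert "iou-1" not in ledger.ious
```

In the distributed version, each transfer would be broadcast to nearby devices so that conflicting histories can be caught by comparing witnessed ordinals.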
now instead of credit cards, people could trade using an app on their devices tied to their unique MAC address
the internet was now distributed and worked by bluetooth and wifi-direct, where each person could carry around
their web blog server
neighbours could tunnel through other neighbours to increase their range and amplify their signal,
and cache unvisited pages or files for later viewing
the wave mail and distributed inter-mesh now functioned by a self-assigned address system organized by interest groups,
which would not be tied geographically
now the internet server pages are compiled dynamic libraries instead of served-up scripts, which helps
save energy and bandwidth
inside home garage door bunker
throw canister explosive using wire chained beer sized can
inside police precinct jail half underground near river
wear earplugs at night to fall asleep ignore zombies clawing and gnawing at the door and windows fence
informal toy currency
paid to do experiments on z
we hang our laundry to dry to conserve energy or because we dont have it
soap is running out
we use washing more than towels now because water is easier to obtain
rain moisture extractors temporary solution
informal toy currencies arose in places as a temporary solution to do favours between locals
mostly to exchange physical household goods or books saved
interactopedias are a new novel form of media
going camping carries the same risk of wildlife from z's that before would be bears
signs hang some places of "zombie crossings" on roads between places
it's a human figure with one arm out in front and a back tilted head
we take a photo of each deceased to pass on to their relatives if they are found
and as a monument to the tragedy
pathologists are working on an investigation and
a period-by-period reconstruction of the exact events
that occurred during the epidemic
textbooks are being made that will teach the biological, cellular mechanism
by which the infection spreads and acts
the virus is thought to bind to the Molybdate transporter AB2C2 complex in the open state
to travel across the cell membrane
the virus might have been dormant all this time and have occurred previously in the past,
leading to periodic mass extinctions known as z ages
this might have been the cause of the extinction of much life on earth during the dinosaur period
in which only isolated pockets of life survived that hid in caves or the oceans
there's new technology now besides wave mail and geo-internet
an interesting phenomenon now is apps that use neuro-hashes to evolve behaviour of virtual creatures
the neuro-hash captures everything needed for a functionally complete general purpose solution
to intelligence, able to capture any behaviour or circuit
it is composed of a 128-bit wide and 512-bit high segment of masks for the neural "weights"
where the intermediate values generated by NAND operations between two previous values and a
mask bit, are mixed by offsets of 2's starting from 1, going on to 2,4,8 etc. to give a complete
mix of every input bit with every other every 8 levels.
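One possible reading of that mixing scheme, shrunk to 8 bits for illustration (the exact wiring of the NAND gates and masks is my assumption, not the original implementation):

```python
# Sketch of one NAND layer per the description above: each output bit is
# a NAND of two previous values gated by a mask bit, with partners chosen
# at power-of-two offsets (1, 2, 4, ...) so every bit mixes with every
# other after about log2(width) levels. Width shrunk from 128 to 8 here.

WIDTH = 8  # the text uses 128-bit rows and 512 rows of masks

def nand(a, b):
    return 1 - (a & b)

def layer(bits, mask, offset):
    # Pair each bit with the bit `offset` positions away, gated by the mask.
    return [nand(nand(bits[i], bits[(i + offset) % WIDTH]), mask[i])
            for i in range(WIDTH)]

def run(bits, masks):
    # Offsets double each level: 1, 2, 4, ... then wrap back to 1.
    offset = 1
    for mask in masks:
        bits = layer(bits, mask, offset)
        offset = (offset * 2) % WIDTH or 1
    return bits

out = run([1, 0, 1, 1, 0, 0, 1, 0], [[0, 1] * 4, [1, 0] * 4, [1] * 8])
assert len(out) == WIDTH and all(b in (0, 1) for b in out)
```

Since NAND is functionally complete, enough such masked layers can in principle represent any mapping from input bits to output bits, which is the claim the text relies on.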
the creatures get senses of their surroundings by using an "ask-answer" system, depending on
the values of bits in the left half of the output bits. the right half is fed back in
to imitate internal thought continuity.
the creatures evolve in a grid world and do different things to achieve success.
newer versions will learn by the "dream mechanism". their output is compared to the
real world output that would be fed back into the segments but the bits are
advanced to the next closest value that preserves most of the mappings or "memes"
and contains the correct output given the inputs.
the evolution is very computation heavy so the system is distributed and seeded differently,
with interminglings and cross-planting of creatures across host worlds.
z rodeo is a new spectacle where a zomboy tries to ride or outmaneuver a zombie in a ring
while a crowd watches
i was confused at first because i did not know if the zomboy was the one being chased
or the chaser
z-wranglers, i think, is a better name for them
society is evolving because of the outbreak and they may become a part of our existence
thought suppressant drugs have been somewhat successful in making zombies docile and manageable
tales are told of an other that will come one day that is immune to the infection and can
walk among a crowd of infected untouched and ignored
there are also tales of a survivor that held out in a mall all by himself for several years
each state now has their own founding survivors
the zombie-human relation is regarded as a new form of dual-predation symbiosis called zombivorism
technically, it is not considered zombivorism when humans eat the undead flesh for survival,
but is thought to be a temporary phase in the ecology
There are at least 30 strains of zombigen by now
The fact that it has a high mutation rate is probably why it is so hard to develop a vaccine for it
A technique known as zombigen strain dating is used to break up and classify the spread and growth curves of the infection
month-last games mmo join paytoplay
escape to safety
quick match rooms:
microtransactions mmo buy ammo etc?
bag cloth system survivalism
2017 Feb 15 wed
room window dark bedroom side
flashing on wall from window
then trembling grows
person bed death pose hands
2017 Mar 04 sunday
scene epidemic spread reconstruction research effort
 And here are some story-line missions:
Prologue (introduction in the beginning).
My name is Michael, or Mike as my friends call me. This story took place a month ago, and sitting at home, remembering those terrible moments, I want to tell you about the horror
that happened to me and billions of other people. I'm recording all this because maybe it will interest somebody; I want to leave this for generations to come and tell them in more detail
about the horror known as the Pathogen.
The virus swallowed cities, and the epicenter became New York. At that moment I was in Boston, where I worked as a broker at a brokerage firm.
My whole family, my parents and my wife with our baby, lived in New York, while I had moved to a temporary job in Boston with good pay and promotion opportunities.
How I regret that I wasn't near my family when the epidemic began. I still search and hope to find them, but so far my search hasn't had any luck; of New York only
ruins remain, and the city is being rebuilt anew, sweeping up the remains of the horrifying tragedy.
But I've digressed from the main subject: the EPIDEMIC! An unknown virus, which scientists have named the Pathogen.
It turned people into animals, into walking zombies, ready to kill you at any moment. Back then my only goal was to survive,
get to New York, find my family and live peacefully, but life is an unpredictable thing. The moment it all began, I was scared, and my only thought was to SURVIVE!!!
Chapter 1 - Unwelcome guest
(NOTE!!! this chapter happens at the beginning, near the house; later the hero Michael goes out into the streets and goes through the blocks etc., wading through zombies,
finding a knife on the road, a baseball bat, a frying pan and so on; this is a choice for the player. The chapter begins at night at 4 AM).
Michael: "Home from work, I quickly ate and, without undressing, fell into bed tiredly. The day had been tough, but what awaited me was an unwelcome guest:
an acquaintance with The Zombie, fear and bloodshed. And the day had begun so well." (at this moment the screen should go dark, with the sound of knocking
at the door). "Oh my god, who could have decided to barge into my home at 4 in the MORNING! I need to see who it is." (the hero gets up and goes to
the door, and opening it sees a zombie!). "I'm coming, I'm coming, I said, and opening the door saw THIS! A roaring monster covered in blood that pounced on me.
I needed to stop him. I ran to the kitchen and got a knife, and hesitated (?) to kill the unwelcome guest." (here you run to the kitchen, grab a knife and slash the zombie).
"I had thought killing somebody would be harder than it is. Although, what kind of person is this? This is a real
animal, a zombie; what they scared us with in the movies became a reality. Walking out into the streets I saw real chaos: flaming cars, sounds
of shooting in the distance. It was all very frightening in reality. I walked through the street and found a baseball bat near the neighbour's house, but what I needed to do now
was to go along the familiar streets, ahead into the unknown, saving my life with the hope of finding at least somebody alive. But one thing kept bugging me: how were my parents,
my wife, my daughter? I had left for only a week and was supposed to be back in two days,
and here is such chaos! Life is an unpredictable thing. There weren't any cars nearby, and those that were weren't in any condition to drive. My path through the streets
would be long; the main thing was to find at least some signs of life and survive." (NOTE!!! Then the hero starts to walk through the street killing zombies;
in the first level only hand weapons are available, no pistols; the level is 5-10 minutes; the quantity of zombies is conservative,
but not too great.) Having gone through familiar streets and killed a decent number of zombies, I heard a sound from a broken-down car: somebody was talking
over a walkie-talkie. I needed to find out immediately who it was. (you go to the car and activate the walkie-talkie,
and here the first level ends).
Chapter 2 - Stranger
(time is morning, 6 AM) (hero) Walking up to the car I heard a voice and decided to answer. "Who is this?" I asked. (general) "Good morning, stranger. I am general Alex Shields.
I will help you get to safety. Right now, go to the Alfred gunshop, and there you will see a manhole. Get in there
and just go straight; all sewer paths lead to Oldridge City Hall. Just don't ask any questions. As soon as you get out of the sewers and reach the end,
contact me about what to do next. And now I need to go! Until we meet, stranger!" (hero) "Wait!" I said, but at that moment
the walkie-talkie cut off. I had no choice; all that was left for me to do was get to the Alfred gunshop, and the path wouldn't be easy.
(the game goes for 5-8 minutes, going along the streets to the gunshop, killing zombies). Upon arriving at the gunshop, I saw broken glass
and decided to take a few pistols, grenades and my movie favourite, a shotgun; this was all that was left at the gunshop after the chaos. (the player gets
weapons: pistols, grenades and a shotgun; ammo will lie on the streets, in the sewers, etc., but not too much, as this is supposed to be survival).
The sewers were right under my feet, and all that was left to do was listen to the stranger, get into the sewers and just go,
in the hopes of meeting this general. I hoped we would soon get acquainted in person, but ahead of me awaited a tough path through the stinking sewers.
(getting into the sewers, the chapter ends)
Chapter 3 - Oldridge City Hall
(chapter begins in the sewers at 9 in the morning; killing zombies, there should be a medium amount of them; you open the manhole and get out near City Hall
and the Oldridge supermarket, going inside). "Having gone through hell in the sewers, I removed the manhole cover and saw a blinding sun. How long had I been going through that sewer?
Looking around, I saw an entrance to City Hall and not a single living soul beside me. I tried to contact the general, but he
didn't answer; I would have to go inside Oldridge and find the general, if he was there of course. Having entered the City Hall, I saw a sea of zombies and heard cries for help
on the second floor; I would have to break through fighting." (later, through City Hall, you're killing zombies, make many of them; you get through to the second floor
and see the general dead, with a note). "Having broken through a crowd of zombies, going toward the cries for help, I saw the dead general Alex Shields
and a note: (general) "Lad, if you're reading this note, that means I probably died. Here I've left a map and the coordinates of the evacuation point.
Please, if you get to the end, look after my daughter. Even though we are not familiar with each other, I can hear by your voice that you're a good and
kind fellow. I don't have a choice anymore; all my close friends have perished, and I cannot leave my daughter with just anyone. Those soldiers on the base are too dumb
to raise her; they only know how to shoot. She's left without a mother or father. She's already there with the soldiers at the evacuation point; they're
holding the point. I hope you get to the end. I believe in you. With best wishes, general Alex Shields."
(hero) "A shame about the general; he was a good person and, judging by the note, the best of fathers. Damn the epidemic." I took the note with the coordinates and the
GPS navigator that the general had left me. Not much was left to do: enter the coordinates of the evacuation point into the GPS navigator;
the main thing was to get there alive. Having entered the coordinates, I went into the streets; the arrow pointed
to the nearest forest.
There was nothing else to do; I would go to the forest for another "adventure". (here the chapter ends; there should be many zombies in the level, 70% of the maximum,
and lots of health packs and ammo)
He says that he thinks after 4 chapters, when we've done the design + game mechanics etc., we need to make a demo with 4 levels so that people can rate the game.
He says he will send 2 more chapters in a week.
I can make a free demo of the game for iPhone to put up on the app store when we're done the 4 levels. And work on the android version after that.
Chapter 4 - Angara Heights
"After wading through the forest I arrived at Angara Heights, the outskirts of the city.
I would later discover that Oldridge city had been nuked to stop the spread of the infection."
The player goes through the forest until he arrives at a ridge over a river with a highway bridge.
The player crosses the bridge and gets to the other side.
"When I got to the end I had to fight off the swarms."
The player fights the zombies until a timer runs out.
"Finally a plane landed to evacuate any survivors. After that, I arrived at a survivor's camp.
And the search for survivors continues as we try to understand the epidemic." THE END
This was a story-line made by somebody else for a previous project.
The idea is to use the Structure Sensor from Occipital (or an ordinary smartphone with SceneLib) to record the depths and colors of some scene, along with readings of the accelerometers and gravitometers, etc., and video with sound and color. Then, using the orientation of the camera on those frames (given what should be the up direction) and the depth values, a flat surface ahead is found and kept relatively planted with the accelerometer and depth data, and a "device" (perhaps made in a modelling program with bump mapping and specular highlights, etc., to make it realistic, very detailed, and animated) is drawn on top with a timer displayed. It's supposed to look very technological. Perhaps it's plugged in by cables, like an external hard drive, but with changes. This is really a prank or just-for-fun project to see how far it can go, but it can be applied to film-making and reused for lots of different things. The data would be recorded and exported to a computer where the heavy work would be done.
I have an image with some ventilation fan animations I made. The animation isn't there, but you can sort of see what I meant. Imagine putting this on top of a large external hard drive, to make it seem technical, with an LED numerical readout on top of it (I will explain what for).
So there would need to be an LED numerical display. The digits will count up slowly to indicate how many "G's" the device is producing, up to Earth's 1 G and higher. As it does so, the device is overlaid on top of the background (let's say there's a TV right up close behind it, and you can hear and see it playing, with the camera moving around, shaking, panning, etc., and maybe a finger moving closer) using its own depth values, writing into the depth of the scene, so that if something is in front, the correct pixels are discarded based on depth.
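The depth-writing described here is just a per-pixel depth test; a minimal numpy sketch, with small arrays standing in for the camera's RGB-D frames:

```python
import numpy as np

# Per-pixel depth test: the rendered device only overwrites scene pixels
# it is actually in front of, so a finger closer than the device
# correctly occludes it. Arrays are illustrative stand-ins for frames.

def composite(scene_rgb, scene_depth, dev_rgb, dev_depth):
    # The device wins wherever its depth is smaller (closer to camera).
    in_front = dev_depth < scene_depth
    out = scene_rgb.copy()
    out[in_front] = dev_rgb[in_front]
    return out

scene_rgb = np.zeros((2, 2, 3), dtype=np.uint8)       # black scene
scene_depth = np.array([[1.0, 1.0], [0.2, 1.0]])      # one close pixel (finger)
dev_rgb = np.full((2, 2, 3), 255, dtype=np.uint8)     # white device
dev_depth = np.full((2, 2), 0.5)                      # device at 0.5 m

out = composite(scene_rgb, scene_depth, dev_rgb, dev_depth)
assert (out[0, 0] == 255).all()   # device visible over the far background
assert (out[1, 0] == 0).all()     # finger occludes the device
```

This is exactly what a GPU depth buffer does per fragment; doing it over recorded RGB-D frames just moves the test into post-processing.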
What does the device do? It is like a gravitational amplifier, or space warper. As it increases the G's, the pixels will be altered using an effect similar to orientability maps. Basically, given the depths, which yield the 3-dimensional positions of the scene's surface vertices, and the colors there, a gravitational-lensing effect would be created, like what is experienced near a black hole. I have a theory that the brain is innately built with such an understanding of the nature of gravity, or perhaps there would at least be a gut feeling of what this is. Sounds can also be added of a device powering up, whizzing up, etc.
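A crude way to fake the lensing look is to sample each output pixel from a source position pulled toward a center, with the pull falling off with distance; this is an artistic approximation under my own assumptions, not physically correct lensing:

```python
import numpy as np

# Crude "lensing" warp: each output pixel samples from a source position
# displaced toward the (cx, cy) "mass" center, with deflection growing
# near the center. An artistic approximation, not real light bending.

def lens_warp(img, cx, cy, strength):
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dx, dy = xs - cx, ys - cy
    r = np.hypot(dx, dy) + 1e-6       # distance from the center
    pull = strength / r                # deflection grows near the center
    sx = np.clip(xs - dx / r * pull, 0, w - 1).astype(int)
    sy = np.clip(ys - dy / r * pull, 0, h - 1).astype(int)
    return img[sy, sx]

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
warped = lens_warp(img, cx=1.5, cy=1.5, strength=2.0)
assert warped.shape == img.shape
```

With the recorded depth values, the same idea could be applied in 3D by displacing the scene's surface vertices rather than screen pixels.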
A lot of other effects can also be made on this principle, although that is the coolest one I wanted to make. You can also use the "spherical blending" method from "isometric shooters" to render fire, fog, smoke, explosions, clouds, etc. You can film a doorway with the door open and you going through it, and use the depth and color values of you walking through on top of a scene of the same doorway with the door closed, to make it seem as if you've walked right through the door. Or film something in the kitchen or bathroom. Let's say the toilet: overlay another scene with your hand reaching in, to give the appearance of your hand going through the side, through the solid, and appearing inside.
You can also make a video of yourself with an "alien" in your living room, giving it a handshake, etc. (using motion-tracking algorithms to track your hand pixels, for example). You can make an app with models of some celebrities, for smartphone, perhaps using SceneLib, so that people can take photos of themselves with the camera view overlaid with a celebrity standing beside them, animated, giving you a handshake, with your arm over their shoulder, or you in front of them and then behind, etc. You can film yourself with Arnold Schwarzenegger, Anna Kournikova, Pamela Anderson, Robert DeNiro, etc.
A more interesting thing can be done with plenoptic cameras from Lytro, using the 5-point algorithm or SceneLib perhaps. Because the Structure Sensor only has a range of 3.5 m, you can use plenoptic cameras (in theory) to film eg a skyscraper from another window, or from the ground, and eg make an alien UFO go in between the buildings, or fog, or explosions, etc.
There is an app called Unity-chan on iPhone, which is basically an anime girl 3D model with animations, for augmented reality (AR), that is rendered with the right orientation given the smartphone's accelerometers or gravitometers. Although it doesn't use depth-writing and depth-checking to make the girl appear correctly with anything overlaid on top, it does look interesting with things standing behind it.
Here is me playing around with it:
Also another effect I thought of... being in a space station, perhaps with feet braces to explain why you're not floating, and testing some device, with the walls etc made in a 3D modelling program.
So when I got the Structure Sensor and realized it was only 3.5 m ranged, I tried to come up with my own algorithms using only mono-RGB video. It's possible if you look for algorithms like "mono-SLAM" on Google. I wanted to use it to scan neighborhoods for use as levels in my shooter game. This is probably something somebody should look into, if it isn't already, using the techniques we have, to do urban scene reconstruction. Engineers and other companies use "remote sensing" with giant back-pack sized laser sensors to scan surroundings. But these are expensive and bulky.
So there's the 5-point algorithm. Originally I didn't realize what this was. I thought it was possible only with 3 points, to determine the relative orientation and translation of a camera frame, with respect to the previous camera frame, using the tracked positions of pixels shifted on the screen, using 3 (or rather 5) of them to derive all the needed information to get the camera angle, and the positions of all the points. I only realized 5 are needed when I laid it out in equations and found that 5 points are needed to solve and eliminate for all the variables.
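For context on why five is the magic number: the relative pose the 5-point algorithm recovers is captured by the essential matrix E = [t]x R, which has five degrees of freedom (three for rotation, two for translation up to scale), and every correspondence contributes one equation x2' E x1 = 0. Here is a small numpy sanity check of that constraint with a made-up pose (this verifies the relationship, it is not the solver itself):

```python
import numpy as np

def skew(v):
    # cross-product matrix: skew(v) @ u == np.cross(v, u)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# made-up relative pose between the two camera frames
a = 0.1
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
t = np.array([0.3, -0.1, 0.05])

E = skew(t) @ R  # essential matrix (5 degrees of freedom)

# five 3D points in front of both cameras
points = np.array([[0.5, 0.2, 3.0], [-0.4, 0.1, 2.5], [0.1, -0.3, 4.0],
                   [0.8, 0.5, 3.5], [-0.2, -0.2, 2.8]])

residuals = []
for P in points:
    x1 = P / P[2]              # normalized image point, frame 1
    P2 = R @ P + t             # same point in frame 2 coordinates
    x2 = P2 / P2[2]            # normalized image point, frame 2
    residuals.append(abs(x2 @ E @ x1))  # epipolar constraint: ~0
```

Each residual comes out at floating-point noise, which is exactly what a 5-point solver exploits in reverse: given five such correspondences, find the E (and hence R, t) that drives them all to zero.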
It's been solved using Gauss's method with matrices (e.g. here in OpenCV: https://github.com/opencv/opencv/blob/master/modules/calib3d/src/five-point.cpp ). But the way I laid it out before I found that, which still seems better to me even now, is to use basic algebra instead of matrices. It avoids the double-sized floating-point coefficients used in part of that solution, and it uses only simple trigonometry and linear algebra, so it should be more efficient, accurate, and precise, and it gives you an understanding of how it works (because I don't understand the Gauss method with matrices).
I haven't finished solving it, but it's possible, and I might come back to it later. The reason I didn't finish is that the working gets HUGE (over 100 pages and growing fast) for even a single equation: at each step you solve for and isolate a variable, substitute that solution from one equation into the next, and simplify. I used wxMaxima to simplify at first, but found I needed the Reduce computer algebra system, driven by batch scripts reading data from text files and writing results to text files for the next stage, to automate the process. I just didn't finish that, though I believe I did try it out. WxMaxima and pretty much any other CAS besides Reduce just dies under such huge equations: partly the rendering (Reduce can run in command-line mode and print an equation on a continuous line, instead of typesetting it across several lines), partly the memory constraints of the underlying Lisp interpreter (Reduce uses Portable Standard Lisp, unlike Maxima's Common Lisp, so it doesn't have a fixed memory limit, or maybe it does but it can be changed, or maybe that's what I tried to do before I found Reduce, if I remember correctly), and partly it's just too slow. You can probably get much better results automating this with a professional, expensive CAS like Maple or Mathcad or Mathematica, the ones used in universities and engineering and science.
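The solve-substitute-simplify pipeline described above can be sketched in any CAS. Here is one elimination step in Python's sympy, on a toy pair of equations of my own invention (nothing like the real 100-page ones) that share an unknown L:

```python
import sympy as sp

w, x, L = sp.symbols('w x L')

# toy stand-ins for two equations sharing the unknown L
eq1 = sp.Eq(2 * L + w, 3)
eq2 = sp.Eq(L - w + x, 1)

# one elimination step: solve eq1 for L, substitute into eq2, simplify
L_sol = sp.solve(eq1, L)[0]            # L = (3 - w) / 2
eq3 = sp.simplify(eq2.subs(L, L_sol))  # eq2 with L eliminated
```

Chaining many such steps, and writing each intermediate result out to a file for the next stage, is essentially the batch pipeline described above, just in a CAS with a Python front end instead of Reduce.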
You can see here my attempts to first solve it on paper:
The exact equations are (recalling from memory):
a = point 1 screen x in frame 1
b = point 1 screen y in frame 1
c = point 2 screen x in frame 1
d = point 2 screen y in frame 1
e = point 3 screen x in frame 1
f = point 3 screen y in frame 1
g = point 4 screen x in frame 1
h = point 4 screen y in frame 1
i = point 5 screen x in frame 1
j = point 5 screen y in frame 1
k = point 1 screen x in frame 2
l = point 1 screen y in frame 2
m = point 2 screen x in frame 2
n = point 2 screen y in frame 2
o = point 3 screen x in frame 2
p = point 3 screen y in frame 2
q = point 4 screen x in frame 2
r = point 4 screen y in frame 2
s = point 5 screen x in frame 2
t = point 5 screen y in frame 2
The points must be in the range (-1,+1) of the screen's space, with eg up being +1 and right +1, for my implementation of these equations.
u = screen field of view angle horizontally
v = screen field of view angle vertically
w = camera frame 2 offset from 1 relative x position coordinate
x = camera frame 2 offset from 1 relative y position coordinate
y = camera frame 2 offset from 1 relative z position coordinate
z = camera frame 2 view vector x coordinate of total length 1, with frame 1's being vector (0,0,1)
A = camera frame 2 view vector y coordinate of total length 1, with frame 1's being vector (0,0,1)
B = camera frame 2 view vector z coordinate of total length 1, with frame 1's being vector (0,0,1)
C = camera frame 2 right vector x coordinate of total length 1, with frame 1's being vector (1,0,0)
D = camera frame 2 right vector y coordinate of total length 1, with frame 1's being vector (1,0,0)
E = camera frame 2 right vector z coordinate of total length 1, with frame 1's being vector (1,0,0)
F = camera frame 2 up vector x coordinate of total length 1, with frame 1's being vector (0,1,0)
G = camera frame 2 up vector y coordinate of total length 1, with frame 1's being vector (0,1,0)
H = camera frame 2 up vector z coordinate of total length 1, with frame 1's being vector (0,1,0)
By the way, whenever you solve for and simplify a solution for a variable in the main equation, you have to substitute it into ALL the other equations. The way I did it: I kept track of all the variables I had already solved for and substituted, and the order I substituted them in, and then repeated the same substitutions on each second main equation before combining it with the first one (by solving for another variable in one equation, and substituting its solution, in terms of the remaining variables, into the other).
I = ray right ratio of total length 1, for ray from camera frame 2 to point 1, for the right component
J = ray up ratio of total length 1, for ray from camera frame 2 to point 1, for the up component
K = ray forward ratio of total length 1, for ray from camera frame 2 to point 1, for the forward component
L = ray scaling ratio, to get the length distance of point 1 from camera frame 2 position, for the unit-length ray
M = ray right ratio of total length 1, for ray from camera frame 2 to point 2, for the right component
N = ray up ratio of total length 1, for ray from camera frame 2 to point 2, for the up component
O = ray forward ratio of total length 1, for ray from camera frame 2 to point 2, for the forward component
P = ray scaling ratio, to get the length distance of point 2 from camera frame 2 position, for the unit-length ray
Q = ray right ratio of total length 1, for ray from camera frame 2 to point 3, for the right component
R = ray up ratio of total length 1, for ray from camera frame 2 to point 3, for the up component
S = ray forward ratio of total length 1, for ray from camera frame 2 to point 3, for the forward component
T = ray scaling ratio, to get the length distance of point 3 from camera frame 2 position, for the unit-length ray
U = ray right ratio of total length 1, for ray from camera frame 2 to point 4, for the right component
V = ray up ratio of total length 1, for ray from camera frame 2 to point 4, for the up component
W = ray forward ratio of total length 1, for ray from camera frame 2 to point 4, for the forward component
X = ray scaling ratio, to get the length distance of point 4 from camera frame 2 position, for the unit-length ray
Y = ray right ratio of total length 1, for ray from camera frame 2 to point 5, for the right component
Z = ray up ratio of total length 1, for ray from camera frame 2 to point 5, for the up component
Φ = ray forward ratio of total length 1, for ray from camera frame 2 to point 5, for the forward component
Ψ = ray scaling ratio, to get the length distance of point 5 from camera frame 2 position, for the unit-length ray
Now, assuming that camera frame 1 is at (0,0,0) in our relative coordinate system:
0 + α * δ = w + (C * I + F * J + z * K) * L
0 + β * δ = x + (D * I + G * J + A * K) * L
0 + γ * δ = y + (E * I + H * J + B * K) * L
0 + ε * θ = w + (C * M + F * N + z * O) * P
0 + ζ * θ = x + (D * M + G * N + A * O) * P
0 + η * θ = y + (E * M + H * N + B * O) * P
0 + ι * μ = w + (C * Q + F * R + z * S) * T
0 + κ * μ = x + (D * Q + G * R + A * S) * T
0 + λ * μ = y + (E * Q + H * R + B * S) * T
0 + ν * π = w + (C * U + F * V + z * W) * X
0 + ξ * π = x + (D * U + G * V + A * W) * X
0 + ο * π = y + (E * U + H * V + B * W) * X
0 + ρ * υ = w + (C * Y + F * Z + z * Φ) * Ψ
0 + σ * υ = x + (D * Y + G * Z + A * Φ) * Ψ
0 + τ * υ = y + (E * Y + H * Z + B * Φ) * Ψ
α = in frame 1, with total length 1, the ray right ratio, for ray from camera 1 to point 1, assuming that the frame 1 right direction is aligned with the +x axis
β = in frame 1, with total length 1, the ray up ratio, for ray from camera 1 to point 1, assuming that the frame 1 up direction is aligned with the +y axis
γ = in frame 1, with total length 1, the ray forward ratio, for ray from camera 1 to point 1, assuming that the frame 1 forward direction is aligned with the +z axis
δ = in frame 1, the length scale of the ray, from camera 1 to point 1, to give the correct distance to the point
ε = in frame 1, with total length 1, the ray right ratio, for ray from camera 1 to point 2, assuming that the frame 1 right direction is aligned with the +x axis
ζ = in frame 1, with total length 1, the ray up ratio, for ray from camera 1 to point 2, assuming that the frame 1 up direction is aligned with the +y axis
η = in frame 1, with total length 1, the ray forward ratio, for ray from camera 1 to point 2, assuming that the frame 1 forward direction is aligned with the +z axis
θ = in frame 1, the length scale of the ray, from camera 1 to point 2, to give the correct distance to the point
ι = in frame 1, with total length 1, the ray right ratio, for ray from camera 1 to point 3, assuming that the frame 1 right direction is aligned with the +x axis
κ = in frame 1, with total length 1, the ray up ratio, for ray from camera 1 to point 3, assuming that the frame 1 up direction is aligned with the +y axis
λ = in frame 1, with total length 1, the ray forward ratio, for ray from camera 1 to point 3, assuming that the frame 1 forward direction is aligned with the +z axis
μ = in frame 1, the length scale of the ray, from camera 1 to point 3, to give the correct distance to the point
ν = in frame 1, with total length 1, the ray right ratio, for ray from camera 1 to point 4, assuming that the frame 1 right direction is aligned with the +x axis
ξ = in frame 1, with total length 1, the ray up ratio, for ray from camera 1 to point 4, assuming that the frame 1 up direction is aligned with the +y axis
ο = in frame 1, with total length 1, the ray forward ratio, for ray from camera 1 to point 4, assuming that the frame 1 forward direction is aligned with the +z axis
π = in frame 1, the length scale of the ray, from camera 1 to point 4, to give the correct distance to the point
ρ = in frame 1, with total length 1, the ray right ratio, for ray from camera 1 to point 5, assuming that the frame 1 right direction is aligned with the +x axis
σ = in frame 1, with total length 1, the ray up ratio, for ray from camera 1 to point 5, assuming that the frame 1 up direction is aligned with the +y axis
τ = in frame 1, with total length 1, the ray forward ratio, for ray from camera 1 to point 5, assuming that the frame 1 forward direction is aligned with the +z axis
υ = in frame 1, the length scale of the ray, from camera 1 to point 5, to give the correct distance to the point
The line segments are expressed in this way with vectors:
line 1 in frame 1 = (0,0,0) to (0 + α * δ, 0 + β * δ, 0 + γ * δ) etc...
line 1 in frame 2 = (w,x,y) to (w + (C * I + F * J + z * K) * L, x + (D * I + G * J + A * K) * L, y + (E * I + H * J + B * K) * L) etc...
(0 + α * δ, 0 + β * δ, 0 + γ * δ) = (w + (C * I + F * J + z * K) * L, x + (D * I + G * J + A * K) * L, y + (E * I + H * J + B * K) * L)
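As a numeric sanity check of this equality, here is a made-up pose (frame 1 at the origin with the canonical basis, as assumed above; the yaw angle, offset, and the five points are all invented for the test). For each point, the left side reconstructs it from camera 1's ray and scale, and the right side from camera 2's basis, ray ratios, and scale:

```python
import numpy as np

# made-up ground truth: camera 1 at the origin with the canonical basis,
# camera 2 at offset t with basis right2=(C,D,E), up2=(F,G,H), view2=(z,A,B)
yaw = 0.2
c, s = np.cos(yaw), np.sin(yaw)
view2 = np.array([s, 0.0, c])
right2 = np.array([c, 0.0, -s])
up2 = np.array([0.0, 1.0, 0.0])
t = np.array([0.3, -0.1, 0.2])  # the (w, x, y) offset

points = np.array([[0.5, 0.2, 3.0], [-0.4, 0.1, 2.5], [0.1, -0.3, 4.0],
                   [0.8, 0.5, 3.5], [-0.2, -0.2, 2.8]])

errors = []
for P in points:
    d1 = np.linalg.norm(P)    # frame-1 ray scale (the point's distance)
    a1 = P / d1               # frame-1 unit ray (right/up/forward ratios)
    rel = P - t
    d2 = np.linalg.norm(rel)  # frame-2 ray scale
    # frame-2 ray ratios, expressed in camera 2's own basis
    i, j, k = rel @ right2 / d2, rel @ up2 / d2, rel @ view2 / d2
    # right-hand side of the equations: rebuild the point from camera 2
    recon = t + (right2 * i + up2 * j + view2 * k) * d2
    errors.append(np.max(np.abs(a1 * d1 - recon)))
```

Both sides agree to floating-point precision, which is what the solver relies on: the unknowns are the pose and scales that make all fifteen of these component equations hold at once.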
Note: I'm making all these single-character variables, because you need to have only a single, unique character in these CAS and can't use more than one, and it also makes the equations a lot shorter, than they would otherwise have been.
Given these identities for the camera's directional vectors, because they are perpendicular, so should have a dot product of 0, and are all of unit length, and can be obtained from each other using cross products of the other two:
F = A * E - B * D
G = B * C - z * E
H = z * D - A * C
C = G * B - H * A
D = H * z - F * B
E = F * A - G * z
I don't remember if you need all of these, but it will be easy to see, which variables you need, and which ones are left.
These are hard to follow, but they make sense if you take apart a dot or a cross product and decompose it into its x, y, or z component. So see for yourself, and probably derive these yourself instead of just following my exact equations.
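The signs in these identities depend on which order you take the cross products in, so here is a quick numeric check under the z-forward convention used above (frame 1: view=(0,0,1), right=(1,0,0), up=(0,1,0)), where the consistent ordering is up = view × right, right = up × view, view = right × up. The rotation about y is an arbitrary test pose:

```python
import numpy as np

def camera_frame(yaw):
    """An orthonormal camera basis in the z-forward convention:
    rotate the canonical frame view=(0,0,1), right=(1,0,0),
    up=(0,1,0) about the y axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    view = np.array([s, 0.0, c])    # (z, A, B)
    right = np.array([c, 0.0, -s])  # (C, D, E)
    up = np.array([0.0, 1.0, 0.0])  # (F, G, H)
    return view, right, up

view, right, up = camera_frame(0.3)
z, A, B = view
C, D, E = right
F, G, H = up
```

If an identity fails a check like this, the cross-product order (and hence every sign in it) is flipped for your convention, which is exactly the kind of thing worth catching before feeding 100 pages of algebra to a CAS.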
Whenever a square root appears in the results, you have to split the solution path into a positive and a negative root, and check at the end, using the forward equations, which one makes sense. There are only about 4 such square roots, if I remember correctly, and they were not a big problem.
F^2 + G^2 + H^2 = 1
C^2 + D^2 + E^2 = 1
z^2 + A^2 + B^2 = 1
I^2 + J^2 + K^2 = 1
M^2 + N^2 + O^2 = 1
Q^2 + R^2 + S^2 = 1
You won't use all of the above, because you can solve the others using the other identities.
z = D * H - E * G
A = E * F - C * H
B = C * G - D * F
And, substituting the up-vector identities into the right-vector ones, you get self-referential forms such as:
C = B * (B * C - z * E) - A * (z * D - A * C)
E = A * (A * E - B * D) - z * (B * C - z * E)
(I.e., the distance scales of the points are positive, so the points are in front of camera 1.)
δ > 0
θ > 0
μ > 0
π > 0
υ > 0
And the same for camera 2:
L > 0
T > 0
X > 0
I~ > 0
IGBP > 0
These last ones are more complex:
Any three of the points form an angle, and that angle must come out the same whether it is measured in frame 1 or in frame 2, for any set of three of these points.
And the horizontal and vertical field-of-view angles can also be used, together with the angles formed between camera 1 or 2 and any of the points, with respect to the near-plane, or near-arc.
The given variables are:
We have I, J, M, N, Q, R, U, V, Y, Z (the screen ratios of the points in frame 2, giving the ratios along the up and right vectors on the near-arc, so called because the result lies at unit distance from the camera's position). The forward ratios K, O, S, W, Φ can be obtained from these, because together with the right and up ratios they form a unit length. Likewise we have the frame 1 right and up ratios α, β, ε, ζ, ι, κ, ν, ξ, ρ, σ, and the forward ratios that follow from them: γ, η, λ, ο, τ.
The variables can then be solved or obtained, and substituted, in this order:
F, G, H (starting with the up vector for camera 2, of unit length), C and E (the camera 2 right vector components for x and z, which can be obtained solely from those of the up vector), A (the camera 2 view y, which can be obtained simply from the previous ones),
δ, θ, μ, π, υ (the lengths for the distances to the points from frame 1),
then the lengths of the points from frame 2, then the camera 2 translation, etc...
You can solve the much simpler 2-dimensional equations to see a possible result. I also did not solve those, but I did start, and I think they only require 4 points, if I'm not mistaken.
I have at least two more really interesting topics, then maybe some others, maybe tomorrow. And some other minor ones.
Hope you enjoyed that. Now if you'll excuse me, I'm going to relax, because I have a throbbing headache from inadequate sleep.
You can probably do all sorts of effects you want, eg refraction, lasers, or make games that used augmented reality, like a playing board that might be shared between two smartphones, with virtual figures that you look over by moving yourself and your smartphone around, and control on the screen.
[Edit] also I'm not sure if it's a good idea to use the variable name "i" because CAS may mistake it for the imaginary number constant.
I was going to write up about the next topic tomorrow but now that I've thought about it, I realize I better not, until I publish it somewhere else, because it's just so awesome. So you'll have to wait. A few months. In the meanwhile I will talk about other ideas. And the game idea that it is for is not really amazing, as much as the technique.
[Edit] tomorrow's topic will be really interesting though.
For the next game idea, I'm going to have to explain two new things in two entries.
I came up with an idea of "cryptographic"-like resources that are generated by computing power. Basically, they are like primes: they are unique, they can be traded (for example, if one player or factory doesn't have one, they can obtain it), and they can easily be checked to satisfy the requirements for being such a resource. And there are several, likely unlimited (this way), different kinds of these "new primes". Some players could choose, based on past experience or the age and size of the game world, to aim their searches and explorations for these new primes at higher ranges (like digging deeper for minerals?). Different types of these primes have different values depending on e.g. rarity, and they don't all necessarily follow the same distribution law; some might become faster to find after a certain point, while others behave differently. But generally they get harder to find, and therefore more valuable, as more of them are found in the lower range or domain.
Here are some examples of them.
The only problem with them is that they are not all equally valuable, like other resources are, and they have no value if they are not unique. In a way they are non-competitive if you just hold on to them (which might be against the game rules, though that could probably be hacked), but they are competitive, or destructive, in the sense that obtaining one and trading it to somebody makes it impossible for anybody else to trade that person the same one.
The "Prime Series":
The main idea behind primes is that for z = x / y,
it must be satisfied, for all y,
that z gives a remainder > 0 in integer-only division. And "x" is the prime: for all "x / [2 to x-1]", the remainder will be > 0.
For this series we get: 2, 3, 5, 7, 11, 13...
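The membership check described above is just trial division; a minimal sketch (fine for game-sized numbers, though a real game would want something faster):

```python
def is_prime(x):
    """x is in the "Prime Series" if x / y leaves a remainder for
    every integer y in 2..x-1 (plain trial division)."""
    if x < 2:
        return False
    return all(x % y != 0 for y in range(2, x))

series = [x for x in range(2, 20) if is_prime(x)]
# series == [2, 3, 5, 7, 11, 13, 17, 19]
```

The "easy to check, harder to find" asymmetry is what makes these work as a computed resource: verifying a claimed prime is cheap, but searching a high range for new ones costs real compute.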
The "Power Series" and "Inverse Exponent Series":
We can extend this to: z = log_y(x)
Which gives the series: 3, 5, 6, 7, 10, 11, 12, 13, 14, 17... (Correct me if I'm wrong, I'm just writing it as I put it down in the notes.) This means that no integer "y" raised to an integer power "z" may give exactly "x"; in most cases "y^z" will miss "x", which means a remainder. For all "[2 to x-1]root(x)", the root will have a remainder.
Another one is: z = yroot(x) (the y-th root of x)
Which is in the same class. We get the same series: 3, 5, 6, 7, 10, 11, 12, 13...
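A membership test for this series just checks that x is not a perfect power. A small sketch (note it also yields 2 and 15, so the series recalled from the notes above may have small gaps):

```python
def is_power_prime(x):
    """x is in the "Power Series" if no integers y >= 2, z >= 2
    satisfy y ** z == x, i.e. every z-th root of x has a remainder."""
    if x < 2:
        return False
    z = 2
    while 2 ** z <= x:
        y = round(x ** (1.0 / z))
        # check neighbors of the rounded root to guard against float error
        if any(c >= 2 and c ** z == x for c in (y - 1, y, y + 1)):
            return False
        z += 1
    return True

series = [x for x in range(2, 18) if is_power_prime(x)]
# excludes the perfect powers 4, 8, 9, 16 in this range
```

Checking is again much cheaper than searching, since only exponents up to log2(x) need testing.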
The "Tetration Series":
Named so after the tetration hyper-operator.
z = x ↓↓ y
Or x = z ↑↑ y
That is, the "super-logarithm" / "tetra-logarithm", and the "super-root".
These may be problematic, as occurrences of these "primes" may be too frequent, even at higher levels. They may be a lower-value currency.
These may feed into, or by themselves give, the following series, which can be extended to as many series as needed: x = a*2 + b*3 + c*5
The "Prime Series" is fed into this (2, 3, 5) to give the coefficients, and all the occurrences of x, where a, b, c >= 0,
must be satisfied. This is like "find the 105th x such that it is only a sum of the primes 2, 3, 7," etc.
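Assuming the formula above means non-negative integer combinations a*2 + b*3 + c*5 (my reading of it), membership is the classic coin-sum dynamic program:

```python
def representable(x, coins=(2, 3, 5)):
    """True if x = a*coins[0] + b*coins[1] + ... for some
    non-negative integers a, b, c (coin-sum dynamic program)."""
    ok = [False] * (x + 1)
    ok[0] = True
    for v in range(1, x + 1):
        ok[v] = any(v >= c and ok[v - c] for c in coins)
    return ok[x]
```

With the set (2, 3, 5), only x = 1 is unreachable; rarer coefficient sets from deeper in the prime series would leave bigger and more interesting gaps for players to hunt.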
The "Power Series" or "Prime Inverse Exponent Series" is fed into this:x = log3(a) log5(b) log6(c)x = 3^a 5^b 6^c
Where the same conditions for a, b, c, and x must be met as above. (Actually this is for the "logarithm series").
The "Root Series" is fed into:x = a^3 b^5 c^6
For the "Super-Logarithm Series" I have not finished thinking through, but it can probably be done the same way. That is all for now. Maybe I'll get some strength to write more tomorrow.
Sorry if this is boring and geeky, but I found it fascinating when I came up with it, after reading up on tetration operators and imagining that there must be an endless series of such operators. The player will not notice any of these primes or technicalities; they would probably just see resources, or the behavior of factories or buildings and how fast they produce new resources, or maybe only numbers, if they want to micro-manage them. If you do find this boring, well, I don't know what you're doing on a game-dev forum, because you have to be technically inclined and dig in and understand these things to do any good work. This could also have an application as a crypto-currency, who knows, and this may all be a lot more fun in action than it is reading about it in snail-pace slow-motion.
With the addition of accelerometers, gravitometers, magnetometers, cameras, LED beams, GPS tracking and navigation, flashlights, internet, and calling, making smartphones pretty much computers, aren't smartphones the Swiss army knife of devices? With the addition of perhaps laser pointing, directional Bluetooth, and directional control sticks, they could be made into even more of a Swiss army knife.
If you're interested in my art that I first posted, I have a DeviantArt account here: http://polyfrag.deviantart.com
So the next idea I have is: what if the smartphone had 3 or so "mini-RADARs"? I.e., they would rotate inside the phone, and you could use them to orient yourself with respect to other devices or cell towers. That would give a centimeter-accurate GPS and orientation meter. Instead of calculating the time delay or signal strength between cell towers and triangulating position from that, or from GPS satellites, which is usually only accurate to a meter (or was it 10 meters? it was something not useful for determining a person's position with respect to other people), the mini-RADARs would measure the relative angles to the cell towers, or to any other devices, and use those angles to calculate relative position and orientation. This might also make it possible to point the phone at something and get information that way, like another person holding a smartphone, or a store window, or an apartment window on the 5th floor (maybe they have a home blog?). This is similar to the infrared LED in remote controls, also found in the LG G3 (which has the codes for all the remote controls you can use), but it would be longer range and more powerful.
[Edit] I should also explain how directional Bluetooth or mini-RADAR works. In normal RADAR a radio wave is bounced off of a metal object. But for mini-RADAR there would be a little dish that can only receive and send signals within a certain angle and direction. If it rotates very fast and checks when the signal appears and disappears, maybe by trying to keep the target on the rim of its reception cone, it can detect an angle.
The cell tower could work on the same principle, with a mini-RADAR of its own, to tell the client its angle with respect to the tower. The tower or the client could send a packet or datagram with one number one way and a different number the other way, and depending on which one is received back, determine the direction. A whole series of numbers could be sent, and the combination of them used to determine the angle. One of the three mini-RADARs could face the side the other one doesn't, in the anti-parallel direction, and the third could assist. I assumed three were needed in case they had to be located at opposite ends of the device, in case the electronics inside interfered with the signal. The cell towers could implement these mini-RADARs first, perhaps, while the older smartphones were "phased out".
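Once two towers at known positions can each report the bearing at which they see the phone, the position fix is just intersecting two rays. A sketch of that geometry (2D, ideal bearings; the function and its conventions are my own for illustration):

```python
import math

def locate(t1, a1, t2, a2):
    """Intersect two bearing rays: tower i at position t_i sees the
    phone along angle a_i (radians, measured from the +x axis).
    Returns the intersection point of the two rays."""
    d1 = (math.cos(a1), math.sin(a1))
    d2 = (math.cos(a2), math.sin(a2))
    # solve t1 + s*d1 = t2 + u*d2 for s via 2D cross products
    den = d1[0] * d2[1] - d1[1] * d2[0]
    s = ((t2[0] - t1[0]) * d2[1] - (t2[1] - t1[1]) * d2[0]) / den
    return (t1[0] + s * d1[0], t1[1] + s * d1[1])
```

In practice the bearings would be noisy, so with three or more towers you would take a least-squares intersection instead of an exact one, and the third mini-RADAR mentioned above would also help resolve ambiguous geometry when two towers and the phone are nearly collinear (where the denominator above approaches zero).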