Liza Shulyayeva

  1. Yesterday I added tests to server deployment in my deployServer.sh script:

     echo "Running tests"
     cd ../../server/lib
     set -e
     echo "mode: set" > allcoverage.out
     for d in $(go list ./... | grep -v vendor); do
         parentdir=`dirname "$d"`
         subdir=`basename "$d"`
         echo "subdir: " $subdir
         if [[ $subdir == "tests" ]]; then
             go test -cover -coverpkg=$parentdir -coverprofile=profile.out $d
         else
             go test -cover -coverprofile=profile.out $d
         fi
         if [ -f profile.out ]; then
             tail -n+2 profile.out >> allcoverage.out
             rm profile.out
         fi
     done

     Basically this goes into the root where all of my server packages live. Then for each found package we get the subdirectory name and the full path of the parent directory. If the subdirectory is named "tests" (most of my tests are in packages under the package I'm actually testing), we run go test -cover with -coverpkg specified as the parent dir of the test dir. Otherwise we do not specify -coverpkg, because the code under test is in the same directory as the test. At the end we get an allcoverage.out file which can be opened in the browser to view coverage for each tested source file:
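     For reference, the merged allcoverage.out is a standard Go cover profile (hence the "mode: set" header and the tail -n+2 that strips it from each per-package profile before appending), so the browser view comes from Go's built-in cover tool:

     go tool cover -html=allcoverage.out

     This renders a per-file, per-statement HTML coverage report.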
  2. Server and client for SnailLife Go

Over the last couple of days I restructured SnailLife Go into a server and client. I’m still in the “rough draft” stage, but the top-level project structure now looks like this:

gosnaillife
├── client
├── cmd
├── common
├── LICENSE.md
├── README.md
├── server
└── setup

Intent

- Split the application into server and client CLI apps
- Create a REST API for client-server communication
- Have a “common” package for structures which will be reused by both server and client
- Create some rudimentary deployment scripts for both server and client

main.go

I started by creating snaillifesrv/main.go alongside snaillifecli/main.go. The cmd directory now looks like this:

cmd
├── snaillifecli
│   └── main.go
└── snaillifesrv
    └── main.go

snaillifecli/main.go

The client main.go runs some simple configuration with viper (right now there is just a server.json config file with the server URL to connect to, depending on which environment you are running). After running the configuration it waits for user input. Once input is received, it tries to find and run a cobra command by that name.

package main

import (
	"bufio"
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
	"runtime"
	"strings"

	"github.com/spf13/viper"
	"gitlab.com/drakonka/gosnaillife/client/lib/interfaces/cli"
	"gitlab.com/drakonka/gosnaillife/client/lib/interfaces/cli/commands"
	"gitlab.com/drakonka/gosnaillife/common/util"
)

func main() {
	fmt.Println("Welcome to SnailLife! The world is your oyster.")
	configureClient()
	if err := commands.RootCmd.Execute(); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	waitForInput()
}

func configureClient() {
	projectRoot := getProjectRootPath()
	confPath := projectRoot + "/config"
	envname, err := ioutil.ReadFile(confPath + "/env.conf")
	if err != nil {
		util.HandleErr(err, "")
	}
	envFile := string(envname)
	configPath := confPath + "/" + envFile
	viper.AddConfigPath(configPath)
	// Config client
	viper.SetConfigName("server")
	err = viper.ReadInConfig()
	if err != nil {
		util.HandleErr(err, "")
	}
}

func waitForInput() {
	buf := bufio.NewReader(os.Stdin)
	fmt.Print("> ")
	input, err := buf.ReadBytes('\n')
	if err != nil {
		fmt.Println(err)
	} else {
		cmd, err := cli.TryGetCmd(string(input))
		if err != nil {
			fmt.Println(err)
		} else {
			err := cmd.Execute()
			if err != nil {
				fmt.Println("ERROR: " + err.Error())
			}
		}
	}
	waitForInput()
}

func getProjectRootPath() string {
	_, b, _, _ := runtime.Caller(0)
	folders := strings.Split(b, "/")
	folders = folders[:len(folders)-2]
	path := strings.Join(folders, "/")
	basepath := filepath.Dir(path) + "/client"
	return basepath
}

snaillifesrv/main.go

When launching snaillifesrv, a subcommand is expected immediately. Right now the only supported subcommand is serve, which starts the server.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"runtime"
	"strings"

	"gitlab.com/drakonka/gosnaillife/common"
	"gitlab.com/drakonka/gosnaillife/server/lib/infrastructure"
	"gitlab.com/drakonka/gosnaillife/server/lib/infrastructure/env"
	"gitlab.com/drakonka/gosnaillife/server/lib/interfaces/cli/commands"
)

var App env.Application

func main() {
	setProjectRootPath()
	confPath := env.ProjectRoot + "/config"
	App = infrastructure.Init(confPath, common.CLI)
	if err := commands.RootCmd.Execute(); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}

func setProjectRootPath() {
	_, b, _, _ := runtime.Caller(0)
	folders := strings.Split(b, "/")
	folders = folders[:len(folders)-2]
	path := strings.Join(folders, "/")
	basepath := filepath.Dir(path) + "/server"
	env.ProjectRoot = basepath
}

Client

So far it is an extremely barebones implementation; it looks like this:

client
├── config
│   ├── config.go
│   ├── dev
│   │   └── server.json
│   └── env.conf
└── lib
    └── interfaces
        └── cli
            ├── cli.go
            ├── cmd.go
            └── commands
                ├── register.go
                ├── root.go
                └── test.go

Right now only the register command is implemented.

Server

The server is where the bulk of the existing packages ended up going:

server
├── config
│   ├── config.go
│   ├── dev
│   │   ├── auth.json
│   │   └── database.json
│   └── env.conf
└── lib
    ├── domain
    │   ├── item
    │   └── snail
    │       ├── snail.go
    │       └── snailrepo.go
    ├── infrastructure
    │   ├── auth
    │   │   ├── authenticator.go
    │   │   ├── auth.go
    │   │   ├── cli
    │   │   │   ├── auth0
    │   │   │   │   ├── auth0.go
    │   │   │   │   └── tests
    │   │   │   │       ├── auth0_test.go
    │   │   │   │       └── config_test.go
    │   │   │   ├── cli.go
    │   │   │   └── cli.so
    │   │   ├── provider.go
    │   │   └── web
    │   ├── databases
    │   │   ├── database.go
    │   │   ├── mysql
    │   │   │   ├── delete.go
    │   │   │   ├── insert.go
    │   │   │   ├── mysql.go
    │   │   │   ├── retrieve.go
    │   │   │   ├── tests
    │   │   │   │   └── mysql_test.go
    │   │   │   └── update.go
    │   │   ├── repo
    │   │   │   ├── repo.go
    │   │   │   ├── tests
    │   │   │   │   ├── repo_test.go
    │   │   │   │   ├── testmodel_test.go
    │   │   │   │   └── testrepo_test.go
    │   │   │   └── util.go
    │   │   └── tests
    │   │       └── testutil.go
    │   ├── env
    │   │   └── env.go
    │   ├── init.go
    │   └── init_test.go
    └── interfaces
        ├── cli
        │   └── commands
        │       ├── root.go
        │       └── serve.go
        └── restapi
            ├── err.go
            ├── handlers
            │   └── user.go
            ├── handlers.go
            ├── logger.go
            ├── restapi.go
            ├── router.go
            └── routes.go

I followed a lot of the advice from this useful post about creating REST APIs in Go. When the user runs the register command on the client, here is what happens on the server. I have added comments to the copy below to help explain:

package handlers

import (
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"strings"

	"gitlab.com/drakonka/gosnaillife/common/restapi"
	"gitlab.com/drakonka/gosnaillife/common/util"
	http2 "gitlab.com/drakonka/gosnaillife/common/util/http"
	"gitlab.com/drakonka/gosnaillife/server/lib/infrastructure/auth"
	"gitlab.com/drakonka/gosnaillife/server/lib/infrastructure/env"
)

func CreateUser(w http.ResponseWriter, r *http.Request) {
	fmt.Println("Creating user")
	var user restapi.UserReq
	body, err := ioutil.ReadAll(io.LimitReader(r.Body, 1048576))
	// The infamous Go error handling - I need a better way.
	if err != nil {
		util.HandleErr(err, "CreateUserErr")
		return
	}
	if err := r.Body.Close(); err != nil {
		util.HandleErr(err, "CreateUserErr")
		return
	}
	// Unmarshal the data we get from the client into UserReq
	if err := json.Unmarshal(body, &user); err != nil {
		// If we were unable to unmarshal, send an error response back to the client
		util.HandleErr(err, "CreateUserErr")
		w.Header().Set("Content-Type", "application/json; charset=UTF-8")
		w.WriteHeader(422) // unprocessable entity
		if err := json.NewEncoder(w).Encode(err); err != nil {
			util.HandleErr(err, "CreateUser")
			return
		}
		return
	}
	fmt.Println("Running registration")
	resBody, err := registerUser(user)
	if err != nil {
		util.HandleErr(err, "CreateUserErr")
	}
	// Start creating a userRes to send back to the client.
	userRes := buildUserResponse(resBody)
	status := http.StatusOK
	if err != nil {
		status = http.StatusInternalServerError
	}
	w.Header().Set("Content-Type", "application/json; charset=UTF-8")
	w.WriteHeader(status)
	if err := json.NewEncoder(w).Encode(userRes); err != nil {
		util.HandleErr(err, "CreateUserErr")
		return
	}
}

func registerUser(user restapi.UserReq) (resBody []byte, err error) {
	// Find an Auth0 provider (that is all we'll support for now)
	var auth0 auth.Provider
	auth0 = env.App.Authenticator.FindProvider("Auth0")
	if auth0 != nil {
		resBody, err = auth0.Register(user.Username, user.Password)
	} else {
		err = errors.New("Auth0 provider not found")
	}
	return resBody, err
}

func buildUserResponse(resBody []byte) *restapi.UserRes {
	res := restapi.UserRes{}
	// Find any keys we may find relevant from the Auth0 response body
	m, _ := util.FindInJson(resBody, []string{"_id", "statusCode", "name", "description", "error"})
	httpErr := buildHttpErr(m)
	if id, ok := m["_id"]; ok {
		res.Id = fmt.Sprintf("%v", id)
	}
	res.HttpErr = httpErr
	return &res
}

func buildHttpErr(m map[string]interface{}) (httpErr http2.HttpErr) {
	// The Auth0 response body *sometimes* contains errors in statusCode/name/description
	// format and *sometimes* just contains a single "error" json key
	if sc, ok := m["statusCode"]; ok {
		codeStr := fmt.Sprintf("%v", sc)
		if strings.HasPrefix(codeStr, "4") || strings.HasPrefix(codeStr, "5") {
			scf := sc.(float64)
			httpErr.StatusCode = int(scf)
			httpErr.Name = fmt.Sprintf("%v", m["name"])
			httpErr.Desc = fmt.Sprintf("%v", m["description"])
		}
	} else if errVal, ok := m["error"]; ok {
		httpErr.StatusCode = 500
		httpErr.Name = "Error"
		httpErr.Desc = fmt.Sprintf("%v", errVal)
	}
	return httpErr
}

In the end the server sends a UserRes back to the client:

package restapi

import (
	"gitlab.com/drakonka/gosnaillife/common/util/http"
)

type UserRes struct {
	HttpErr  http.HttpErr `json:"httpErr"`
	Id       string       `json:"id"`
	Username string       `json:"username"`
}

type UserReq struct {
	Username   string `json:"username"`
	Password   string `json:"password"`
	Connection string `json:"connection"`
}

Deployment

I made a couple of quick scripts to deploy the client and server. Note that go-bindata lets you compile your config files into the binary, making for easier distribution (and maybe slightly improved security for the secret keys stored in the server config, since you don't have loose configs with credentials sitting around).

Client

#!/bin/sh
echo "Building and installing SnailLife"
go-bindata -o ../../client/config/config.go ../../client/config/...
cd ../../cmd/snaillifecli; go build
GOBIN=$GOPATH/bin go install

Server

#!/bin/sh
echo "Building and installing SnailLife server"
go-bindata -o ../../server/config/config.go ../../server/config/...
cd ../../server/lib/infrastructure/auth/cli
echo "Building cli.so auth plugin"
go build -buildmode=plugin -o cli.so
echo "Building SnailLifeSrv"
cd ../../../../../cmd/snaillifesrv; go build
GOBIN=$GOPATH/bin go install

Anyway, as you can see there is a long way to go. Up next I am going to write some tests for the REST API and the cobra commands (which I should really have been doing already).
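As a side note on the client half of this exchange: the register command itself isn't shown above, but at its core it just POSTs a UserReq as JSON and decodes the UserRes that comes back. A minimal sketch, where the /users route, the serverURL parameter, and the registerUser helper are assumptions for illustration (the real route and command wiring live in routes.go and register.go):

package main

import (
	"bytes"
	"encoding/json"
	"net/http"

	"gitlab.com/drakonka/gosnaillife/common/restapi"
)

// registerUser is a hypothetical helper: it posts a UserReq to the server
// and decodes the UserRes reply. The "/users" path is an assumption.
func registerUser(serverURL, username, password string) (*restapi.UserRes, error) {
	reqBody, err := json.Marshal(restapi.UserReq{Username: username, Password: password})
	if err != nil {
		return nil, err
	}
	resp, err := http.Post(serverURL+"/users", "application/json; charset=UTF-8", bytes.NewReader(reqBody))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	// Decode the server's UserRes response.
	var userRes restapi.UserRes
	if err := json.NewDecoder(resp.Body).Decode(&userRes); err != nil {
		return nil, err
	}
	return &userRes, nil
}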
  3. Go auth (and other) progress

July 28

Made a bit more progress on the authentication basics today. Relevant commits are:

- Add http package; have auth0 test delete the user it has just registered once the test is done
- Create json util; add logout test

July 29

Today I focused a bit on the building and installation of snaillifecli. I switched my custom app configuration code for Viper, because it apparently integrates really well with Cobra, which is a library to help make CLI applications. It is really tempting to avoid plugging in existing libraries and write everything from scratch, because I’m positive that it would teach me a lot about Go, but the existing solutions seem more than suitable and I want to get to working on actual snails at some point.

I also checked in a couple of quick scripts to build and install the app. deployDebug deploys to a subdirectory under GOBIN and copies the config file the app will use alongside the executable. This is really dangerous because it means database credentials are exposed to whoever wants to look in the config file, so it is to be used for local debug purposes only. The deployProd script first runs go-bindata to generate a Go file from the JSON config, so that the configuration is compiled into the binary during the build step. This way any sensitive database credentials and such are not directly exposed. Of course, I don't plan on distributing any binary with secret key information in it to external users anyway.
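To make the deployProd idea concrete: go-bindata generates a Go source file exposing an Asset function that returns an embedded file's bytes, and viper can read config from those bytes instead of from disk. A rough sketch; the asset name below is an assumption, since the exact name depends on the paths passed to go-bindata:

package main

import (
	"bytes"

	"github.com/spf13/viper"
)

// loadEmbeddedConfig reads the config out of the binary itself.
// Asset is generated by go-bindata; the asset name here is an assumption.
func loadEmbeddedConfig() error {
	data, err := Asset("config/dev/server.json")
	if err != nil {
		return err
	}
	viper.SetConfigType("json")
	// Read config from the embedded bytes instead of a file on disk.
	return viper.ReadConfig(bytes.NewReader(data))
}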
  4. Playing with authentication in Go

It’s almost 2am and I’m sleepy, but I wanted to write this down while it’s relatively fresh in my mind. I’ve been playing around with user authentication in Go. While actual user interaction is not the primary part of the simulation and will not be the focus of the Go rewrite, as I said in my previous post, there will need to be a few basic actions a user can take before leaving the rest of the simulation to do its thing. This is why I mentioned implementing a very basic CLI to interact with the simulation earlier. The user will basically just need to sign up, log in, set some basic options for their snail stable (like the stable name and location), capture a snail or two, and leave them to do their thing from there. It is kind of like norn Wolfling runs in Creatures 3 - you have to hatch some norns before you let nature take its course.

Design differences from the PHP version

The main difference from the way the concept of users is currently implemented in the PHP version of SnailLife is this: users will no longer be considered synonymous with owners. In the PHP version of SnailLife, users and owners are just one table - account details, moderator/BrainHub management access fields, and stable information are all stored in one location. In my opinion this isn’t the best approach. A user of the snail simulation will not necessarily need to be an owner of a stable or of any snails. This is especially true considering I’m building this thing with multiple possible applications in mind.

Approach

I haven’t worked much with authentication systems before - the PHP version of the app made use of the authentication features that came with Laravel. So this is largely going to be a matter of trial and error. The commit with the rough first stage of this can be found here. Here are the highlights:

- For the Go rewrite I am thinking of using Auth0.
- I suspect I may need different auth approaches for web and CLI authentication, and my first thought is to separate these into plugins. I have added a client type enum to the Application struct that is populated on app init. If the application is of type CLI, the CLI authentication plugin will be loaded. If it is of type Web, the web authentication plugin will be loaded (but I am only implementing the CLI version for now).
- The CLI plugin is built like this: go build -buildmode=plugin -o cli.so. The authorizer then imports cli.so (or web.so, which is currently not built); see the loading sketch at the end of this post.
- The plugin could potentially have multiple possible providers, but right now I am only implementing Auth0. Each provider is to implement the Provider interface, defined outside of the plugin in the auth package.
- I have added an auth.json to the conf directory (in gitignore, of course), and also added a separate credentials generator for auth0 testing (also gitignored).
- So far I have a registration and login test implemented - the test generates a random username (email) and password each run to test registration and login.

Authentication package structure is currently as follows:

auth
├── authenticator.go
├── auth.go
├── cli
│   ├── auth0
│   │   ├── auth0.go
│   │   └── tests
│   │       ├── auth0_test.go
│   │       └── config_test.go
│   ├── cli.go
│   └── cli.so
├── provider.go
└── web
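For a rough idea of how the .so loading fits together, here is a minimal sketch using Go's standard plugin package. The Provider method set and the exported NewProvider symbol are stand-ins for illustration; the real interface lives in the auth package and its method set isn't shown in this post:

package auth

import (
	"errors"
	"plugin"
)

// Provider is a stand-in; the real interface is defined in the auth package.
type Provider interface {
	Register(username, password string) ([]byte, error)
	Login(username, password string) ([]byte, error)
}

// loadProviderPlugin opens a built plugin (e.g. cli.so) and looks up an
// exported constructor. The "NewProvider" symbol name is an assumption.
func loadProviderPlugin(path string) (Provider, error) {
	p, err := plugin.Open(path)
	if err != nil {
		return nil, err
	}
	sym, err := p.Lookup("NewProvider")
	if err != nil {
		return nil, err
	}
	// In this sketch the exported symbol is expected to be a func() Provider.
	newProvider, ok := sym.(func() Provider)
	if !ok {
		return nil, errors.New("unexpected NewProvider signature")
	}
	return newProvider(), nil
}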
  5. Trying out Go

A couple of weeks ago I had the genius idea to rewrite SnailLife in Go. I’d already looked into doing this once before, a couple of years ago, but wasn’t really feeling it and stuck with PHP (mostly for nostalgia reasons). Now, though, SnailLife is this bloated PHP app. Most of the core functionality is in. After weeks of battling infrastructure issues, when everything was back up and running again, I took a step back and saw how big the app (or rather the 3 apps now) had become. At the same time I’d been reading in passing about Go and became curious, so I figured - why not look into learning Go by rewriting SnailLife? Not because I think Go itself will necessarily make anything better, but because a rewrite might. The features are mostly already designed; reimplementing them in another language would hopefully let me focus more on improving the overall project structure while learning the new language of choice. Of course, the “learning the new language of choice” part also increases the likelihood of my turning my messy PHP app into a messy Go app as I go, but…it’ll be fun, OK?

Anyway, I’m not yet sure if I’ll stick with the Go port or if I’m just amusing myself for a while before going back to the already largely implemented PHP version. So far I haven’t coded anything snail-specific and have instead been focusing on setting up database-related packages. I’ve made the code public on GitLab for now, though I'm not sure if that’ll change when I go into writing the more snail-specific functionality: https://gitlab.com/drakonka/gosnaillife

When I started the PHP version of SnailLife, I started by building the website and the main functionality that lets users interact with their snails. As time went on this focus switched almost exclusively to the back-end, and to working on functionality that required no user interaction. I realized that this is what the core of the idea was - simulating the actual snails - the brain, organ function, etc. - things that the user could eventually influence indirectly, but that would tick away on their own even if no user was involved. So for the Go version I am not starting with a web front end but with a simple CLI, and focusing on implementing the core of the snail itself first. Eventually I can build whatever front-end I want, or even multiple front-ends if I feel like it. Heck, I could even expose some sort of API for others to make their own apps on top of the simulation (if anyone wanted to, in theory).

Go notes to self

- Open and close DB connections as little as possible - the driver handles connection pooling for you, so you should only really need to open once (see the sketch at the end of this post).
- The best way of reusing constructors between tests might be to create some test utilities outside of _test files which are imported only by the tests. Example usage in my case is creating a test db and table to run my mysql and repo tests against, which are in different packages.
- Every directory is a package. There is no way to structure code in subdirectories without each subdirectory being a separate package.
- Make use of table-driven tests. They allow you to run multiple test cases per test (a quick example follows this list).
- interface{} is an empty interface and can hold values of any type…avoid passing this around too much; better to learn to structure the code so you don’t have to.
- Go code looks to be very easy to move around and restructure if needed, so it should be fine to experiment with different project structures as I go.
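As a quick illustration of the table-driven tests note above, the pattern looks roughly like this (Add is just a stand-in function under test for this sketch):

package mymath

import "testing"

func Add(a, b int) int { return a + b }

// TestAdd runs several cases through one test using a table of inputs and
// expected outputs.
func TestAdd(t *testing.T) {
	cases := []struct {
		name string
		a, b int
		want int
	}{
		{"both positive", 1, 2, 3},
		{"with zero", 0, 5, 5},
		{"negatives cancel", -2, 2, 0},
	}
	for _, c := range cases {
		if got := Add(c.a, c.b); got != c.want {
			t.Errorf("%s: Add(%d, %d) = %d, want %d", c.name, c.a, c.b, got, c.want)
		}
	}
}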
Current tentative project structure:

drakonka/gosnaillife
├── cmd
│   └── snaillifecli
│       └── main.go
├── config
│   ├── dev
│   │   └── database.json
│   └── env.conf
├── lib
│   ├── domain
│   │   ├── item
│   │   └── snail
│   │       ├── snail.go
│   │       └── snailrepo.go
│   ├── infrastructure
│   │   ├── databases
│   │   │   ├── database.go
│   │   │   ├── mysql
│   │   │   │   ├── delete.go
│   │   │   │   ├── insert.go
│   │   │   │   ├── mysql.go
│   │   │   │   ├── retrieve.go
│   │   │   │   ├── tests
│   │   │   │   │   └── mysql_test.go
│   │   │   │   └── update.go
│   │   │   ├── repo
│   │   │   │   ├── repo.go
│   │   │   │   ├── repoutil.go
│   │   │   │   └── tests
│   │   │   │       ├── repo_test.go
│   │   │   │       ├── testmodel_test.go
│   │   │   │       └── testrepo_test.go
│   │   │   └── tests
│   │   │       └── testutil.go
│   │   ├── env
│   │   │   └── env.go
│   │   ├── init.go
│   │   ├── init_test.go
│   │   └── util
│   │       ├── collection.go
│   │       └── err.go
│   ├── interfaces
│   └── usecases
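Circling back to the first note-to-self above about DB connections: in Go, sql.Open returns a *sql.DB that is itself a managed connection pool, so the idiomatic pattern is to open it once and share it across the app. A minimal sketch; the DSN and the specific driver import are illustrative assumptions:

package main

import (
	"database/sql"

	_ "github.com/go-sql-driver/mysql" // MySQL driver, registered as a side effect
)

// db is opened once and shared; *sql.DB is a pool, not a single connection.
var db *sql.DB

func initDB() error {
	var err error
	// Placeholder DSN for illustration only; real credentials come from config.
	db, err = sql.Open("mysql", "user:password@tcp(localhost:3306)/snaillife")
	if err != nil {
		return err
	}
	// sql.Open doesn't actually connect; Ping verifies the DB is reachable.
	return db.Ping()
}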
  6. State of the Snail - Debugging Hell

      Thanks! Hoping to not disappoint :)
  7. State of the Snail - Debugging Hell

      Technically yes, but I have not yet made it available for people to start using (not that anyone would be clamouring to try it anyway, I think it would actually be pretty boring for anybody except me - the focus is very far from "fun"). But because I have a goal to hopefully attend the European Conference on Artificial Life this year, I'm using that as a bit of a vague deadline to have something a bit more interesting and functional to show by that time (whether that be the simulation itself or the brain visualization debugging tool I mentioned above). We'll see how it goes though; I've been working a lot lately so not sure how much time or energy I'll have to really get there in time.
  8. State of the Snail - Debugging Hell

      I wouldn't call it a game; it is intended to be more of an amateur browser based life simulation. You find, raise, race, and breed snails and manage a snail stable :)
  9. State of the Snail - Debugging Hell

It has been a while! SnailLife work has been moving at a snail's pace. Why? Because debugging the snail brain has turned into a highly demotivating, hellish endeavour. The snails make decisions and perform actions based on so much *crap* that the simple log files I was using all along are just not cutting it anymore. The Laravel log for the original app has turned into 3 Laravel logs, because I now have 3 independent apps (the front-end, BrainRunner, and BrainHub to coordinate all the brain runners). That then turned into individual log files per snail, plus individual log files per brain task. And still, making sense of why snails are choosing to behave in certain ways is nigh impossible - the brain *and* infrastructure issues I have been facing have been seemingly neverending.

Logging issues aside, I have added some debug settings to help me narrow down issues. The BrainRunner app now has a snail debug config file which allows me to disable the following actions that would otherwise take place on each brain check:

- growth
- life check
- idle action check
- organ impact check
- bodyweight check
- movement
- feelings of claustrophobia
- feelings of loneliness

More settings will be added, such as disabling sensory/short term/long term memories, for example. The "idle action check" is where the bulk of the "magic" happens; that is the biggest black hole. Adding options to disable certain functionality has helped, but the whole thing has been demotivating to say the least. I got into this to simulate snails, not work on debugging tools. And yes, of course I realize that when dealing with a convoluted system like this I should have known what to expect. I did sort of expect this...I just chose to ignore it until the last minute to work on more fun things ;) After putting in a great deal of effort to remain organized and do things right in my projects at work, I've allowed good behaviour to fly out the window at home.

Anyway, I have now moved logging out of local log files and into Loggly. It's not done yet, but the main logs like the Laravel logs from the BrainRunner and BrainHub, snail logs, and brain task logs are all in Loggly now (in addition to the local machine). To send the default Laravel logs to Loggly for each app I added the `configureMonologUsing` method in bootstrap/app.php:

$app->configureMonologUsing(function ($monolog) use ($app) {
    $today = date("Y-m-d");
    $logFileName = "logs/laravel-$today.log";
    $monolog->pushHandler(new MonoStreamHandler(storage_path($logFileName), MonoLogger::INFO));
    $monolog->pushHandler(new MonoStreamHandler(storage_path($logFileName), MonoLogger::WARNING));
    $monolog->pushHandler(new MonoStreamHandler(storage_path($logFileName), MonoLogger::ERROR));
    $monolog->pushHandler(new MonoStreamHandler(storage_path($logFileName), MonoLogger::CRITICAL));
    if (CoreUtility::InternetOn()) {
        $envlabel = strtolower(env('APP_ENV'));
        $maintag = "brainrunner_$envlabel";
        $envtag = "env_$envlabel";
        $logglyString = CoreUtility::buildLogglyPushHandler(array($maintag, $envtag));
        $monolog->pushHandler(new LogglyHandler($logglyString, MonoLogger::INFO));
        $monolog->pushHandler(new LogglyHandler($logglyString, MonoLogger::WARNING));
        $monolog->pushHandler(new LogglyHandler($logglyString, MonoLogger::ERROR));
        $monolog->pushHandler(new LogglyHandler($logglyString, MonoLogger::CRITICAL));
    }
    return $app;
});

Aside from having all of my logs in a central place, Loggly lets me query the logs easily to get more relevant information.
I use tags to distinguish between individual brain tasks, snails, apps, environments, etc. But that is not enough. To debug the brain in its current state, I feel a visual representation of it would help a great deal. What originally made me think about this was my earlier experimentation with TensorFlow and TensorBoard. TensorBoard provides a visual of your TensorFlow graph and lets you interact with that visual to see how tensors flow between operations. The brain as I have it can also be represented as a set of nodes (each node representing a neuron), and I should be able to visualize the flow of inputs between neurons in a similar way. What if I could have a graph for every brain task that runs on every BrainRunner and see exactly what path each input took and why? I think I will look into this next.

As you can see, I have not been doing much work on the actual *snails* here. But I think the return on investment in improved debugging tools will be worth it if I have any hope of getting any further with simulating a snail brain.

On another note, it looks like ECAL 2017 ticket prices are out and I'd better start saving: https://project.inria.fr/ecal2017/registration/
  10. Amazon Lumberyard... what's the point?

      From what I understand (and I didn't look into this in depth, so please correct me if I'm wrong), while technically this might be true, the only other option you have is to host your own servers. You are apparently not allowed to use other third-party hosting options, so the choice is either AWS or self-hosting. To me that still doesn't sound that bad for what you get, but there is some restriction there.

      As someone who used EC2 and Amazon's deployment solutions for a while before getting sick of the maintenance aspect and having instances go down only to spin up in incorrect regions, I'd be wary of hosting all of my infrastructure with AWS again. But of course this was just one personal project, where I didn't feel like doing all of the ongoing maintenance myself - if you really go all out to set up everything professionally and manage it, AWS could be a great option.
  11. What's the point of GitHub?

    I've never used GitHub to gain visibility or collaborators for my projects - I've used it purely to have a remote location I trust to store my code and its history. Although I was happy paying $5 per month for a limited number of private repos, when my private repo requirement went up I switched to GitLab.
  12. Snaily Updates: BrainHub + Runners, SnailLife Logo

I finally have two BrainRunners working on a DigitalOcean droplet, and one BrainHub on another droplet queueing and assigning tasks to the runners. It's still rough, but let's start with the BrainHub's scheduled artisan commands (Artisan is the CLI that comes with Laravel). QueueIdleBrainChecks runs every minute:

public function handle()
{
    // Find snails due for a brain check.
    $snailController = new SnailController();
    $allIdleSnails = $snailController->getAllIdleSnails();
    foreach ($allIdleSnails as $snail) {
        // Seconds since last brain check
        $diff = Carbon::now()->diffInSeconds(Carbon::parse($snail->brainCheckedAt));
        if ($diff >= 60) {
            $existingQueuedBrainCheck = QueuedBrain::where('snailID', '=', $snail->snailID)->first();
            // If brain check is not already queued, queue a new check
            if ($existingQueuedBrainCheck === null) {
                $queuedBrain = new QueuedBrain();
                $queuedBrain->snailID = $snail->snailID;
                $queuedBrain->save();
            }
        }
    }
}

This basically just gets all living idle snails from the `snail_life` db and creates a new brain check entry in the `brain_hub` db. Also every minute, we run the AssignBrainsToRunners artisan command:

public function handle()
{
    $brainRunnerRetriever = new BrainRunnerRetriever();
    $allIdleBrainRunners = $brainRunnerRetriever->getAllIdleBrainRunners();
    $taskRetriever = new TaskRetriever();
    foreach ($allIdleBrainRunners as $brainRunner) {
        $task = $taskRetriever->GetNextQueuedTask();
        if ($task !== null) {
            $brainRunner->assignQueuedTask($task);
        }
    }
}

This finds any available (idle) brain runners and assigns the next queued tasks to them. In the `BrainRunner` model:

public function assignQueuedTask($task)
{
    // Change status of BrainRunner to 1 - Busy
    $this->updateStatusCode(1);
    $url = $this->url() . DIRECTORY_SEPARATOR . 'api/runTask';
    Log::info('assignQueuedTask url: ' . $url);
    $postfields = 'taskID=' . $task->id . '&snailID=' . $task->snailID . '&runnerID=' . $this->id . '&hubURL=' . env('APP_URL');
    $curl = curl_init();
    curl_setopt_array($curl, array(
        CURLOPT_RETURNTRANSFER => 1,
        CURLOPT_URL => $url,
        CURLOPT_POST => 1,
        CURLOPT_POSTFIELDS => $postfields,
        CURLOPT_TIMEOUT_MS => 2500
    ));
    $resp = curl_exec($curl);
    curl_close($curl);
    // Delete task from queue
    $task->delete();
}

The brain runner then runs the task:

public function RunTask($task)
{
    try {
        $this->taskID = $task['taskID'];
        $this->hubURL = $task['hubURL'];
        $this->runnerID = $task['runnerID'];
        // Get runner specific logger
        $utility = new Utility($this->taskID);
        $logger = $utility->logger;
        $snailController = new SnailController();
        $logger->addInfo('INFO: RunTask: Initialized SnailActionController');
        // The recurring event is actually not required anymore,
        // but we kind of hack it together because before the BrainHub
        // the snail brains relied on it and still do temporarily.
        $event = new RecurringEvent();
        $event->name = 'IdleSnailAction';
        $logger->addInfo('INFO: RunTask: Initialized IdleSnailAction Event');
        // Find the snail
        $snail = $snailController->findSnail($task['snailID']);
        if ($snail === null) {
            $logger->addError('ERROR: Snail ID ' . $task['snailID'] . ' NOT FOUND.');
        } else {
            $logger->addInfo('INFO: RunTask: Found Snail ID: ' . $snail->snailID);
            $snailActionCoordinator = new SnailActionCoordinator();
            $snailActionCoordinator->checkAction([$snail], 60, $event);
            $logger->addInfo('INFO: RunTask: Action Checked');
        }
        $logger->addInfo('INFO: RunTask: Reported task as finished');
        // Save log to S3 (might rip this out later as we don't need to keep that many logs anyway)
        $utility->saveLog($this->runnerID);
    } catch (\Exception $e) {
        $logger->addError('ERROR: RunTask: Exception Caught, cancelling task');
        $logger->addError($e);
    }
    $this->reportTaskFinish();
}

The BrainHub connects to both the main SnailLife database and the BrainHub database. BrainRunners can only connect to the SnailLife database. The only thing that ever reads from or modifies the BrainHub DB is the BrainHub itself.

SnailLife logo and website

I have been getting really sick of looking at the ugly black and white SnailLife website, so I decided to try and make it a little more exciting. It's still ugly, but at least it's colorful-ugly now! I stumbled across a bunch of open source logos over at Logodust. I felt like taking a break from the BrainHub for a while and messed around with combining two of the logos and adding an eye stalk to make something vaguely resembling a snail. The logo is also quite versatile in case I ever decide to ditch the snail idea, since I've been told it looks like a chinchilla or one of those deep water fish with the light in front of its face as well... So now the site is, though by no means great, just a little less bland:
  13. BrainHub and BrainRunner - Finally Over the Hump

      This sounds like it would make a really really interesting story. You should write it. 
  14. BrainHub and BrainRunner - Finally Over the Hump

Note: This post may make more sense if you also check out the one I posted before Christmas on my blog (which I unfortunately forgot to also post here at the time): http://liza.io/the-brain-scope-is-growing-brainhub/

I've started this post maybe twenty times now, since before Christmas, and each time I keep putting it off, thinking "I'll just blog when I have this next bit done." But each "next bit" is followed by something else, and then something else, into infinity. So I'll just write an update.

Since before the holidays I've been working on BrainHub and BrainRunner, which I've already written about. Basically, checking each brain every minute as part of the main SnailLife app was becoming unmanageable. All background processes will be moved to complementary apps away from the user-facing SnailLife application. The idea is to have a BrainHub controlling tasks sent to individual BrainRunners (which are hosted on other DigitalOcean droplets or EC2 instances). So, here is the first iteration of the BrainHub admin site.

The BrainHub has its own database of queued tasks and BrainRunners, but also connects to the main SnailLife database and imports a package called SnailLifeCore, to be able to get information about brains due for a check and to allow admin SnailLife users to log in and control the hub. It is sort of functional, in that BrainHub runs two scheduled tasks, both of which run every minute:

- QueueIdleBrainChecks
- AssignBrainsToBrainRunners

The first gets a list of brains that need to be processed and puts them into the queued_brains table in the brainhub db. The second, AssignBrainsToBrainRunners, looks for any idle brain runners (brain runners with status code 0) and assigns the next brain in the queue to them. Then the brain runner checks the brain and reports back to BrainHub with the result, which releases the runner to process the next brain in the queue.

Right now there are some issues - runners don't get consistently released, for example. That should be easy enough to fix, but for now I've added an emergency release button to the admin site (you can see it above). But right now I am working on logging. The brain runner creates a new log for each brain task it runs. These then need to be backed up to AWS S3 (as opposed to being stored on the droplet itself), and then the admin site will display the logs by task for each runner. There is a mountain of work to do on this, but it feels like I'm sort of over the main hump of setting up the core package and the BrainHub and BrainRunner apps to sit alongside the main SnailLife app.
  15. Breaking in as a programmer

      That's funny; I have the opposite opinion of European game companies. To work in games again I got my Australian citizenship and moved from Australia to Europe (Sweden) on a working holiday visa, then looked for work. Browsing through the job ads in Sweden was like walking through some sort of magical game development company fantasy land - there were so many games companies, and so many hiring (in comparison to where I lived in AU, anyway). Of course there was also more competition, but that's expected. Depending on exactly where you are, the opportunities in Europe definitely are there.

      From what I gather, you are already in Europe? Is your country part of the EU? That makes it much easier to relocate, and also makes it easier for a company to hire you remotely, either with relocation included or with your ability to relocate yourself, as there is far less hassle with visas and such. If you are not in a country that's part of the EU, you may want to consider saving up and physically moving yourself, then looking for work - I know it's a bit scary, but in my case that seemed to be the only option to actually have a company seriously consider an application for a non-senior role.