Posts

  • Setting up Webpack to Modify URL per Environment in TypeScript

    A website I am currently creating has a very simple front end, but I wanted to be able to swap out instances of my API URLs in TypeScript depending on which environment I am working in or building for. So far they are all ‘http://127.0.0.1:8000/’, but when I deploy, I don’t want to have to remember to set a URL for prod, staging, etc.

    Unfortunately, I was unable to find a good way to do this with just the TypeScript compiler for static files. I’m not a big fan of the complexity that webpack adds, but I’ve noticed it also minifies and obfuscates my JavaScript, which is a nice bonus. If you know of a good way to do this with the TypeScript compiler alone, please let me know!

    I tried a few different methods, but what worked was a combination of webpack, splitting my config into per-environment files, and DefinePlugin. Otherwise, I mostly just followed the installation instructions for webpack, including installing it locally. I tried installing it globally based on other tutorials, but I ran into a bunch of issues doing that.

    Also, I’m a little frustrated with how confusing the documentation was on this. I tried using environment variables in webpack, but I couldn’t figure out how to access them in my code (not just the config file). It also took me some searching to understand how to use DefinePlugin, because the documentation does not make it clear where it should go or how to include it. I found two links that helped me figure it out (see my webpack.dev.js, webpack.staging.js, and webpack.prod.js files below).

    With this setup, I run ‘npm run build:test’, ‘npm run build:staging’, or ‘npm run build:prod’ depending on whether I am working locally or building for staging or production. Those commands are mapped in package.json:

    {
      "name": "av_frontend",
      "version": "1.0.0",
      "description": "",
      "private": true,
      "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1",
        "build:test": "webpack --config webpack.dev.js --watch ",
        "build:staging": "webpack --config webpack.staging.js",
        "build:prod": "webpack --config webpack.prod.js"
      },
      "author": "",
      "license": "ISC",
      "dependencies": {
        "@types/bootstrap": "^4.1.2",
        "@types/node": "^10.12.12",
        "bootstrap": "^4.1.3"
      },
      "devDependencies": {
        "ts-loader": "^5.3.1",
        "typescript": "^3.2.2",
        "webpack": "^4.27.1",
        "webpack-cli": "^3.1.2",
        "webpack-merge": "^4.1.4"
      }
    }
    
    

    I added --watch on test so that when I’m developing in VSCode, I can just leave it running and it’ll update whenever I save a file. If I want to run that manually I have to run ‘node_modules/.bin/webpack --config webpack.dev.js --watch’ because I installed webpack locally, not globally.

    When I first got this set up, I realized it was only outputting a single JavaScript file because I didn’t understand how ‘entry’ and ‘output’ worked in the webpack config file (see code below). Now I define a named entry for each page’s JS file (both import my config with the API URLs). Under ‘output’, ‘[name].js’ corresponds to each key in ‘entry’, so my output ends up in the dist/ folder as two files named ‘login.js’ and ‘purchase.js’, matching the two fields in the ‘entry’ object. Any file added to ‘entry’ will produce a corresponding output JavaScript file.

    I also missed something in the instructions for TypeScript when I was initially setting this up, which led to a long hunt for why it wasn’t including my config.ts file (the error messages were not great). Don’t forget the ‘resolve’ field below, or webpack will get confused when trying to import any file with a .ts extension referenced in another file.

    webpack.common.js

    const path = require('path');
    
    module.exports = {
      entry: {    
        login: './src/login.ts',
        purchase: './src/purchase.ts'   
      },
      devtool: 'inline-source-map',
      module: {
        rules: [
          {
            test: /\.tsx?$/,
            use: 'ts-loader',
            exclude: /node_modules/
          }
        ]
      },
      resolve: {
        extensions: [ '.tsx', '.ts', '.js' ]
      },
      output: {
        filename: '[name].js',
        path: path.resolve(__dirname, 'dist')
      }
    };
    

    webpack.dev.js

    const merge = require('webpack-merge');
    const webpack = require('webpack');
    const common = require('./webpack.common.js');
    
    module.exports = merge(common, {
      mode: 'development',
      plugins: [
        new webpack.DefinePlugin({
          'process.env': {
            'API_URL': JSON.stringify("http://127.0.0.1:8000/")
          }
        })
      ]
    });
    

    webpack.staging.js

    const merge = require('webpack-merge');
    const webpack = require('webpack');
    const common = require('./webpack.common.js');
    
    module.exports = merge(common, {
      mode: 'production',
      plugins: [
        new webpack.DefinePlugin({
          'process.env': {
            'API_URL': JSON.stringify("https://staging.com/")
          }
        })
      ]
    });
    

    webpack.prod.js

    const merge = require('webpack-merge');
    const webpack = require('webpack');
    const common = require('./webpack.common.js');
    
    module.exports = merge(common, {
      mode: 'production',
      plugins: [
        new webpack.DefinePlugin({
          'process.env': {
            'API_URL': JSON.stringify("https://production.com")
          }
        })
      ]
    });
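
    With the configs above, the TypeScript side only needs to read ‘process.env.API_URL’. My actual config.ts isn’t shown in this post, so the sketch below is an assumption about what such a file could look like; the localhost fallback only matters when the file runs outside a webpack build, where DefinePlugin hasn’t substituted anything.

```typescript
// config.ts -- a sketch, assuming DefinePlugin injects process.env.API_URL.
// At build time, webpack textually replaces 'process.env.API_URL' with the
// string literal from whichever config was used (dev, staging, or prod).
declare const process: { env: { API_URL?: string } };

// The localhost fallback is an assumption, for running outside a webpack build.
export const API_URL: string = process.env.API_URL || "http://127.0.0.1:8000/";
```

    Each entry file (login.ts, purchase.ts) can then ‘import { API_URL } from './config';’ and stay environment-agnostic.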
    

    I originally wrote this blog post thinking I had solved this problem, but the solution I was using can only handle development and production environments, and it is a little more complicated than the one above. It is described here: https://basarat.gitbooks.io/typescript/content/docs/tips/build-toggles.html

  • Debugging PostgreSQL Port 5433 and Column Does Not Exist Error

    I am creating a Django application using PostgreSQL (PSQL) for my database and was nearly finished with the API when I discovered some strange behavior. After successfully testing the API in the Django app, I decided to run some basic queries on the database. I received the following error for nearly every field in the app:

        select MaxSceneKey from game_progress_gameplaykeys;
        ERROR:  column "maxscenekey" does not exist
        LINE 1: select MaxSceneKey from game_progress_gameplaykeys;
                       ^
        HINT:  Perhaps you meant to reference the column "game_progress_gameplaykeys.MaxSceneKey".
    

    I was getting the same result for every field in the table that I tried (and when I tried to include the table name as the hint suggests), except for ‘user_id’ and ‘objective’.

    I confirmed that the fields existed using \d+ game_progress_gameplaykeys, tried changing some of their field types, and even upgraded from Postgres 9.5 to 10.5 (I was planning to do this anyway).

    After a bunch of searching, I found the issue:

    “All identifiers (including column names) that are not double-quoted are folded to lower case in PostgreSQL.” from https://stackoverflow.com/questions/20878932/are-postgresql-column-names-case-sensitive

    In other words, select "MaxSceneKey" from game_progress_gameplaykeys; (with the double quotes) would have worked, because quoting preserves an identifier’s case.

    I had created camelCase field names in my Django app to match the field names in the original application (written in C#).

    I decided to fix this (for now) by renaming my model fields to snake_case and using https://github.com/vbabiy/djangorestframework-camel-case to convert the keys from camelCase to snake_case as they come into the API. One issue solved!

    While debugging that issue, I decided to update the code and Postgres version on my laptop since I hadn’t worked on it in a while, and I wanted to see if the issue was specific to my desktop. When I reinstalled PSQL, I couldn’t seem to log into it using the user I had created. Using the postgres user was fine, though.

    I finally figured out the issue was that PSQL was running on port 5433, not 5432 (the default). After that, I was puzzling over what could be running on 5432 since ‘netstat’ and ‘lsof’ revealed nothing else running on my WSL Ubuntu VM. As I was searching around, I saw someone mention that really only PSQL should be running on that port, and I realized I had installed PSQL on Windows on that machine before I moved over to WSL. I uninstalled that, switched back to 5432 in Linux, restarted PSQL, and boom, good to go.

    While I was debugging that issue, I learned some good information about PSQL along the way:

    /etc/postgresql/10/main/postgresql.conf allows you to set and check the port that PSQL is running on.

    /etc/postgresql/10/main/pg_hba.conf allows you to set different security protocols for connections to PSQL. Notable for local development: set the local connection lines to ‘trust’ so you don’t have to enter a password when logging in.

    Note: you need to restart the PSQL server for either of these changes to take effect. More importantly: don’t use ‘trust’ anywhere other than a local development instance of PSQL. Ever.

    These are the lines I had to change to get that to work (may be different in versions of PSQL other than 10.5):

    # "local" is for Unix domain socket connections only
    local   all             all                                     trust
    # IPv4 local connections:
    host    all             all             127.0.0.1/32            trust
    # IPv6 local connections:
    host    all             all             ::1/128                 trust
    
  • Configuring NGINX for localhost

    I had a little trouble finding a simple way to set NGINX up to work locally, so I wanted to write up some quick instructions here. I’m using NGINX with Windows Subsystem for Linux (WSL).

    First, I installed NGINX in WSL with ‘sudo apt-get install nginx’

    Then, I created a symlink to my frontend directory in my home directory in WSL.

    In /etc/nginx/conf.d, I created basic config file localhost.conf:

    server {
        listen       8080;
    
        location / {
            root   /home/username/frontend_directory;
            index  index.html index.htm;
        }
    
        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
    

    The only thing you should need to change is the frontend_directory path in the ‘root’ directive. Keep in mind the path may differ depending on where you keep your files. I restructured my front end to include a dist/ folder after setting up Webpack, which broke this config until I updated that line.

    To start NGINX (I have to do this every time I restart the computer): ‘sudo nginx’

    Then go to 127.0.0.1:8080 and your site should be live!

    If you need to make changes to your configuration, you should first try: ‘sudo service nginx reload’ which will give you a ‘hot’ reload instead of restarting the server.

    If you do need to restart the server, you can do so with ‘sudo service nginx restart’.

  • Sending JSON to a server using fetch() in TypeScript

    For a new project, I wanted to use TypeScript on the front end but not any of the frameworks that usually include it (React, Angular, etc.). Unfortunately, this means that when I have been trying to figure out how to do something in TypeScript, searches often lead me to solutions involving those frameworks.

    I still haven’t found a good resource for creating a JSON object and sending it to a backend using TypeScript. The easiest solution would be to relax the TypeScript compiler and write it the same way we would in JavaScript, but that defeats the point of using TypeScript. In looking at example code, I found that creating an interface to describe the JSON object is one accepted way to do it.

    interface IJSON
    {
        email:string;
        fullName: string; 
        shortName: string; 
        password: string; 
        institution: string; 
        isStudent: boolean;
    }
    
    const url = 'http://127.0.0.1:8000/register/';
    
    function gatherData(e:Event)
    {
        e.preventDefault();  //don't reload page so that we can test.
        
        let json:IJSON = 
        {
            email: (<HTMLInputElement>document.getElementById("email")).value,
            fullName: (<HTMLInputElement>document.getElementById("fullName")).value,
            shortName: (<HTMLInputElement>document.getElementById("shortName")).value,
            password: (<HTMLInputElement>document.getElementById("password")).value,
            institution: (<HTMLInputElement>document.getElementById("institution")).value,
            isStudent: true,
        }
        sendDataViaFetch(json);
    }
    
    function sendDataViaFetch(json:IJSON)
    {
        const request = new Request(url, {
            method: 'POST',
            body: JSON.stringify(json),
            headers: new Headers({
                'Content-Type': 'application/json'
                // add an 'Authorization' header here if your API requires one
            })
        });

        fetch(request)
        .then(function(response) {
            // Handle the response we get from the API
        });
    }
    
    window.addEventListener('submit', gatherData);
    

    If you have a better way, please let me know!
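
    For the empty .then() above, here is a sketch of one way to type the response handling. The RegisterResponse shape is an assumption (the API’s actual reply isn’t shown here), and the structural HttpResponseLike interface just mirrors the parts of fetch’s Response that we use, which keeps the example self-contained.

```typescript
// What we assume the register endpoint returns (hypothetical shape).
interface RegisterResponse {
    email: string;
    fullName: string;
}

// Structural subset of the fetch Response object -- the real Response
// satisfies this interface, so handleResponse works with fetch() directly.
interface HttpResponseLike {
    ok: boolean;
    status: number;
    json(): Promise<unknown>;
}

function handleResponse(response: HttpResponseLike): Promise<RegisterResponse> {
    if (!response.ok) {
        // Surface HTTP-level failures (4xx/5xx) instead of silently ignoring them.
        throw new Error("Request failed with status " + response.status);
    }
    return response.json() as Promise<RegisterResponse>;
}
```

    Wired into the code above, the call becomes fetch(request).then(handleResponse).then(data => { /* use data.email, etc. */ });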

  • Python Reference Talks

    While trying to get more familiar with Django, I started watching DjangoCon talks from the last few years. I can’t seem to find it again, but one of them included a list of great Python/Django talks, which inspired me to create my own list (with some definite overlap).

    I have found that revisiting talks like these makes me reconsider some design problems that I have recently worked through, so I want to keep a list and rewatch them periodically. I will likely add to this list in the future.

    These two go together (from DjangoCon2015):

  • Setting up Conda, Django, and PostgreSQL in Windows Subsystem for Linux

    Because I feel much more comfortable in a terminal than on the Windows command line (or in PowerShell), I’ve really been enjoying Windows Subsystem for Linux (WSL). In fact, I use it almost exclusively for accessing the server I run this blog from. WSL is essentially a terminal-only Linux environment inside Windows (no Linux GUI) without the lag you get in most VMs.

    When I created my Grocery List Flask App, I began by using WSL. However, I ran into an issue that prevented me from seeing a locally hosted version of the API in Windows, so I switched to the Windows command line for that app.

    Recently, I’ve been developing a Django application (more on that in a future post), and I ran into a similar issue. Between posting the localhost issue on WSL and starting this new app, someone had posted a response I had been meaning to check out. I found that for Django and PostgreSQL, making sure everything runs from localhost (or 0.0.0.0) instead of 127.0.0.x seemed to fix the issues I had. PSQL gave me some trouble just running within WSL, but I found that I just needed to add ‘-h localhost’ to get it to run.

    Below are the commands I used to get Conda, Django, and PSQL all set up on my PC and then again my laptop. This works for Django 2.0, PSQL 9.5, and Conda 4.5.9.

    Installation Instructions

    Edit: I originally had installation instructions in here for PSQL 9.5. If you want 9.5 in Ubuntu, good news! You already have it. To install the newest version of PSQL, you should uninstall that version first, then install the new version from here

    Install Conda (need to restart after installing for it to recognize ‘conda’ commands)

    #create environment
    conda create --name NameOfEnvironment
    #activate environment (older versions of Conda use 'source activate')
    conda activate NameOfEnvironment
    #install Django
    conda install -c anaconda django
    #install psycopg2, to interface with PSQL
    conda install -c anaconda psycopg2
    
    #If you get 'permission denied', or it hangs, just rerun the install command
    #that failed. Not sure why, but that fixed things for me.
    
    #Remove PSQL from Ubuntu:
    sudo apt-get --purge remove postgresql\*
    #Then run this to make sure you didn’t miss any:
    dpkg -l | grep postgres
    
    #Install PSQL 10 using instructions here: https://www.postgresql.org/download/linux/ubuntu/ I have copied them here for convenience, but please double check that they have not changed
    #Create the file /etc/apt/sources.list.d/pgdg.list and add the following line:
    #deb http://apt.postgresql.org/pub/repos/apt/ xenial-pgdg main
    
    #Then execute the following three commands
    wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
    sudo apt-get update
    sudo apt-get install postgresql-10
    
    sudo service postgresql start    
    sudo -i -u postgres -h localhost
    createuser --interactive
    #  psql user: local_user
    #  y to superuser
    
    #create the database
    createdb local_db
    #log into local_db
    psql -d local_db
    
    #privileges for Django to modify tables.
    GRANT ALL PRIVILEGES ON DATABASE local_db TO local_user;
    
    ALTER USER local_user WITH PASSWORD 'password';
    
    #'\q' to quit the interactive console.
    #'exit' to leave the postgres user session.
    
    #one line command to log in as the user to check tables during development.
    psql -h localhost -d local_db -U local_user
    
    python manage.py makemigrations
    python manage.py migrate
    
    #Now log back in to PSQL using the one-line command above, then enter '\dt' and
    #you should see tables like django_admin_log, django_content_type,
    #django_migrations, and django_sessions. Your PSQL DB is now connected to your
    #Django app!
    
    #optional for now, but allows you to ensure db connection works by storing credentials for the superuser you create.
    python manage.py createsuperuser
    
    #command to run the server. go to localhost:8000 in your web browser to view!
    python manage.py runserver 0.0.0.0:8000
    

    I used this post for reference

  • Unity3D Scriptable Objects

    This week at our local Unity user meetup group, I presented (along with a co-organizer of the group) about Scriptable Objects in Unity. You can find that talk here.

    This is that same content in text form.

    Scriptable objects are a powerful tool in designing and developing games in Unity3D. It took me longer than I’d like to admit to get around to using them, but I’d like to introduce them in such a way that makes it easier for you to just get started using them.

    What is a Scriptable Object (SO)?

    It is a script that derives from ScriptableObject instead of MonoBehaviour. This script allows the user to create objects, either in memory or as .asset files in Unity, which are also referred to as Scriptable Objects. A simple example:

    using UnityEngine;
    
    [CreateAssetMenu(menuName = "SOs/FloatVar")]
    public class FloatVariable : ScriptableObject
    {
        public float Value;
    
        void MethodName()
        {
            //Do stuff
        }
    }
    

    The line with ‘CreateAssetMenu’ adds a new line to the ‘Create’ menu in the Project window in Unity. When you click that menu item, it will create a new .asset file that has access to the variables and methods defined in your file.

    It does not have access to the standard Update(), Start(), Awake()* methods because those are part of MonoBehaviour. It does derive from Unity’s Object class, so it has access to classes like GameObject, Transform, etc.

    *use OnEnable for initialization instead of Start or Awake

    SOs can contain data and functions just like a MB, but they can’t be attached to a GameObject in the hierarchy as components. A SO can still be referenced by a MB, though.

    Two things to differentiate:

    1. A script that creates the SO (same role as a MB script in the Project window).
    2. The SO itself, which is a .asset file (lives in the Project folder, analogous to an instance of a MB in the hierarchy).

    SOs aren’t meant to replace MBs everywhere in your project. But there are places where they are a better fit for storing data/functions.

    Why use SOs?

    1. No need for JSON, XML, or text files, which means no need to parse data.
    2. Can save large amounts of data and optimize data loading when you need it.
    3. They don’t get reset when exiting playmode!
    4. Since SOs aren’t tied to the scene (not in the hierarchy), you can commit changes to source control without impacting a scene another team member may be working on.
    5. You can more easily reference data/functions across scenes when using multiple scenes in development, which I highly recommend you do (this could be a whole other blog post).
    6. No need to depend on manager classes to hold all of your references.

    3 and 4 combine to allow us to store the data and functions for a type of enemy and tweak that inside of play mode, have the changes saved, then share that with a teammate without having to worry about impacting the scene file. We also don’t have to re-instantiate prefabs or change every instance of the enemy in a scene (or multiple scenes).

    You may already be doing something similar with prefabs (holding/referencing data and never instantiating that particular prefab). If so, look at SOs! Using prefabs for this purpose is confusing and accident prone (accidentally throw a prefab into a scene, get confused between what is a prefab and what is a data holder).

    If you are a less experienced Unity developer and this seems like a lot to consider, don’t worry about digesting all of it. Just pick some piece of your game design and try building it as a SO instead of a MonoBehaviour, such as an enemy’s stats or some inventory items.

    If you are more experienced, but some of this doesn’t entirely make sense, please try out SOs for some small use cases to see how they differ from MBs. It took me some time to get used to thinking in terms of using SOs, but they are a great tool for a lot of use cases.

    How to use SOs

    A few Unity Learn examples that demonstrate different use cases for SOs:
    Text Adventure
    Ability System
    Character Select
    Customizing UI

    Two talks about SOs that really helped me understand how and where to use them:
    Richard Fine - Overthrowing the MonoBehaviour Tyranny in a Glorious Scriptable Object Revolution
    Link to the project from the talk

    Game Architecture with Scriptable Objects
    Blog post for the previous talk

    Serialization

    This is how Unity reads out the data attached in your scripts. It gets discussed alongside Scriptable Objects because Unity’s serialization can sometimes mess them up: Unity serializes data when it enters/exits play mode, and some data types don’t play nice (polymorphic classes, for example). If you are having issues with data resetting or corrupting under those circumstances, check these out:
    Forum post on Serialization and Scriptable Objects
    Blog post from Lucas Meijer
    Blog Post from Tim Cooper
    Talk by Richard Fine

  • Grocery List App and Flask Deployment Issues

    In addition to starting this blog, I wanted to build some small projects to get some experience with technologies I am not currently using at work. Since I’m currently using Django at work, I decided to create a small Grocery List application using Flask and DynamoDB. You can find the repo and installation info here.

    I ran into two major issues when trying to create the application:

    First, I tried to set things up without a virtual environment for Python, which caused errors with libraries not pointing to the correct locations. I had thought that since this was the only application on the server, a virtual environment wouldn’t be as important. I realize now there is a big upside to separating your application’s Python install from your system Python install. I highly recommend setting up a virtual environment for your Python app, whether it be virtualenv, conda, or something else, even if you intend it to be the only app on the system.

    Second, I didn’t understand how to set up virtual hosts with Apache when I started this project. Getting this blog, the front page of the grocery list app, and the Flask API all routed correctly and running simultaneously took me a few hours to figure out. Two things that seemed to be required to get this all running (otherwise, I was getting the top root folder serving up on all subdomains):

    ‘NameVirtualHost *:80’ at the top of the file and the port number (80) in this line for each of the host definitions: ‘<VirtualHost *:80>’. See link below for that line in context.

    As I was working through the issues, I created this thread for help on reddit: https://www.reddit.com/r/flask/comments/7fbfs1/af_apache_deployment_questions/

    I almost posted to serverfault.com, but I ended up figuring things out as I was creating a post.

    References: http://peatiscoding.me/geek-stuff/mod_wsgi-apache-virtualenv/ https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-xvii-deployment-on-linux-even-on-the-raspberry-pi

  • Backups Using a Network Attached Raspberry Pi

    After setting up my Raspberry Pi as a NAS, I wanted to set up backups that are easy to run and check on. Initially, I wanted them to run automatically, but another goal of the Pi setup was to let me turn my PC off more often. For now, I’m thinking that if the PC is going to be off, I will just run the backups manually, since turning my PC on every Saturday night (or whenever I’d schedule the job) isn’t really automated anyway. If I go back on this and set up a cron job, I’ll be sure to post about that as well.

    One problem I haven’t been able to solve yet is how to back up Windows itself with this setup. The Windows 7 backup tool fails, and the two free Windows backup applications I tried can’t see my network drives. I can include specific folders from that drive in my backup script and/or occasionally attach my external drive directly to my PC to run a full backup, which is probably what I’ll end up doing.

    Apart from Windows, I have a few different backups I want to run:

    1. The SD card for my raspberry pi
    2. A large hard drive with a lot of media files (movies, music, pictures, etc.)
    3. An SSD that has all of my games on it.
    4. Folders on that same SSD with my personal software and game development projects. I want to also back these up to AWS.

    Here is the script as it currently stands. I run it from Windows Subsystem for Linux on Windows 10. I have some notes below to explain the setup and why I chose the tools and configurations that I did.

    #!/bin/bash
    today=`date '+%Y_%m_%d'`;
    
    #backup raspberry pi
    ssh username@ipaddress "sudo dd if=/dev/mmcblk0 bs=1M | gzip - | dd of=/media/pi/HDDName/pibackup/pibackup$today.gz" > /mnt/e/rsynclogs/pibackuplog$today.txt
    
    #backing up all of my development work including Unity to S3 for offsite backups. Have to add dates to the log files otherwise it overwrites the file
    aws s3 sync /mnt/e/Development s3://developmentFolder/ --delete > /mnt/e/rsynclogs/S3DevOutput$today.txt
    
    aws s3 sync /mnt/e/Unity s3://unityFolder/ --delete > /mnt/e/rsynclogs/S3UnityOutput$today.txt
    
    #backup D drive excluding a few folders, and write logs out.
    rsync -avP --delete --size-only --exclude-from '/mnt/d/rsynclogs/exclude.txt' --log-file=/mnt/d/rsynclogs/rsynclog$today.txt /mnt/d/ username@ipaddress:/media/pi/MediaBackup/
    
    #backup E drive excluding a few folders, and write logs out.
    rsync -avW --delete --size-only --exclude-from '/mnt/e/rsynclogs/exclude.txt' --log-file=/mnt/e/rsynclogs/rsynclog$today.txt /mnt/e/ username@ipaddress:/media/pi/GamesBackup/
    

    The Raspberry Pi backup is modified from: https://johnatilano.com/2016/11/25/use-ssh-and-dd-to-remotely-backup-a-raspberry-pi/

    I don’t have access to the network drives from the terminal (or at least I don’t know how to access them from WSL without ssh-ing in), so I needed the output path to be relative to the Pi. The quotation marks enclose the commands that get sent to the Pi, so I had to extend them to include the output location. I also changed ‘bs=1m’ to ‘bs=1M’; I believe the lowercase m is expected on macOS, but the uppercase is required on most flavors of Linux.

    In order to run it from the script I had to set up my user to not require a password to execute the command, which I did by doing the following:

    At a terminal on the Pi, enter ‘sudo visudo’ and change the last line to ‘username ALL = NOPASSWD: ALL’, where username is the user you use to ssh. If you are doing this as the pi user, I don’t think this will be necessary. I’d like to limit this to just the ‘dd’ command (sudoers accepts a full command path such as /bin/dd), but I haven’t tested that yet. I may update this in the future.

    For setting up rsync with the correct flags, I used these two links: https://www.howtogeek.com/175008/the-non-beginners-guide-to-syncing-data-with-rsync/ https://www.thegeekstuff.com/2011/01/rsync-exclude-files-and-folders/?utm_source=feedburner

    Two notes for the rsync commands:

    By default, drives mount with the ‘pi’ user. Since I was setting up my backups to work with a different user, rsync was giving me errors about not being able to set the time on the files when I’d run the command. I’m pretty sure this was because the user didn’t have permissions on the drive. By adding the drives to fstab, it mounts them as root instead, which allows the user to access them since it has root permissions. I should have done this when setting up the drive as a NAS, but I only did it for the initial drive I was testing. See here for instructions on adding drives to fstab: https://www.howtogeek.com/139433/how-to-turn-a-raspberry-pi-into-a-low-power-network-storage-device/

    For my E drive rsync, I initially tried the same settings as the D drive, but the backup kept hanging on different files. I saw several recommendations of different flags that people claimed to be the culprit. I tried turning several off and on, but the one that seemed to fix things was swapping -P for -W (suggested here: https://github.com/Microsoft/WSL/issues/2138), which forces entire files to be transferred instead of partial files. -v (verbose) gives me enough output to see where issues arise, but I’d advise adding --progress back in if you encounter issues and need to check where things are going wrong.

    You can find instructions for setting up the AWS CLI tools and using syncing with S3 in the AWS docs. I couldn’t find anything in there for logging, but StackOverflow had a good solution: https://stackoverflow.com/questions/35075668/output-aws-cli-sync-results-to-a-txt-file

    The last thing I added to the script was a variable to grab the current date so I don’t overwrite the pi backup or the log files.

    One thing I’d like to add is a way to clean up the pi backups. At ~3GB each, it isn’t a big issue currently, but eventually I’ll want to clean them up.

  • Setting up a Raspberry Pi as a NAS and Plex server

    When my external HDD failed, I debated getting a network attached storage (NAS) device before realizing the price wasn’t worth it for me. I don’t have that much data, and really all I wanted was a way to automate backups and a Plex server that requires less power than my PC (so I could turn the PC off more often).

    While I was looking around at options, I found I could do both of those things with a Raspberry Pi. I’d been wanting to get one for a while, but never had a good project to justify picking one up. I ordered a Raspberry Pi 3, a case with a fan and a power supply (that has a power switch), and a 32GB SD card. That is more storage than I need, but a 16GB card wasn’t much cheaper. I also picked up an 8 TB Seagate external hard drive.

    While I think anyone can set this up, I will say that I have a decent working knowledge of Linux, which helped getting started and troubleshooting issues I ran into. If you aren’t very familiar with Linux and the terminal, you can still get all of this set up, but debugging issues and working through it all might be a little more difficult.

    First, I would suggest setting up SSH on your pi so you don’t have to go back and forth between working on the pi and another machine: https://www.raspberrypi.org/documentation/remote-access/ssh/. I’d also recommend setting it up with an ssh key so you don’t have to enter your PW every time you log in: https://www.thegeekstuff.com/2008/11/3-steps-to-perform-ssh-login-without-password-using-ssh-keygen-ssh-copy-id/

    I didn’t do either of those at first and it became annoying switching back and forth between machines since I only have one keyboard/mouse/display set up. If you have a separate keyboard, mouse, and display for your Pi, it might not be as helpful right away, but I’d still recommend it.

    My first step was to try to set up the NAS. I ran through this tutorial: https://www.howtogeek.com/139433/how-to-turn-a-raspberry-pi-into-a-low-power-network-storage-device/, but I hit a wall near the end. I think this was written for Windows 7 and I’m on Windows 10 where mapping a network drive looks different. I also think I may have skipped the last step of setting the samba PW for my user account, which may have been the bigger problem. Here are some notes about where I did things a little differently:

    My external HDD had 4 partitions when I started the process and they automatically mounted, so I skipped the parts about mkdir /media/USBHDD1 and mount…USBHDD1

    ‘security = user’ was not already in the samba config file (not even commented out), so I just added it to the authentication section. For the section they tell you to add to the config file, I made four copies at the bottom of the file, one for each partition that I have.
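    Roughly, the additions to /etc/samba/smb.conf look like this (the share name, path, and user here are hypothetical; the tutorial linked above has the exact block to copy, repeated once per partition):

    ```
    # In the authentication section:
    security = user

    # At the bottom of the file, one block per partition,
    # each with a unique [name] and path:
    [USBHDD1]
    comment = External HDD partition 1
    path = /media/USBHDD1
    valid users = pi
    read only = no
    ```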

    For the last part of the tutorial about adding the network drives on my Windows PC, I had to follow different directions: go to Windows Explorer -> This PC -> Map network drive (in the file menu), then in the folder field enter ‘\\raspberrypi’, then click ‘Browse’, which let me select the folder (I divided my external HDD into a few partitions). I was also able to manually enter ‘\\raspberrypi\nameOfFolder’ to get it to see a drive. I repeated that for each of my drive partitions.

    One mistake I made was only setting up one of the partitions in fstab. This caused me some serious issues with permissions when trying to set up backups with rsync.
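    For reference, the fix was making sure every partition has its own /etc/fstab line, something like the following (the UUIDs, mount points, and filesystem type here are placeholders; find your own with `sudo blkid`):

    ```
    # /etc/fstab — one line per partition so they all mount with
    # consistent ownership at boot:
    UUID=1111-AAAA  /media/USBHDD1  ntfs  defaults,auto,uid=pi,gid=pi  0  0
    UUID=2222-BBBB  /media/USBHDD2  ntfs  defaults,auto,uid=pi,gid=pi  0  0
    ```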

    I’m in the process of automating my backups. When I finish testing my backup scripts, I’ll be sure to post about it.

    To set my pi up as a plex server, I followed this tutorial: https://www.codedonut.com/raspberry-pi/raspberry-pi-plex-media-server/

    The only problem I ran into was that I had to change permissions on the folder ‘/media/pi’ where it automounted my drives because Plex couldn’t access them. The permissions on the drive folders themselves were fine.
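    The permission change was along these lines (a sketch, assuming the drives automount under /media/pi; the exact mode you need may differ):

    ```shell
    # Let other users (including the account Plex runs as)
    # read and traverse the automount directory.
    sudo chmod o+rx /media/pi
    ```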

    After that small adjustment, I was able to stream a 1080p movie with a bitrate ~10Mb/s over my local network without any trouble, but I tried streaming one closer to 25 Mb/s and the Pi definitely couldn’t handle it. I’m not sure where exactly the limit is, but that is something to note.

    Finally, I wanted to test how resilient this setup is, so I made sure I could restart while ssh’d in and even shut down the pi.

    To restart from the command line: ‘sudo reboot’. This allowed me to log back in after a couple of minutes.

    To shutdown then start up the pi: ‘sudo halt’ on the command line. This shuts it down, but the red light stays on (and the fan, so the board is still getting power). I can then use the power button on the AC cable to shut off power, then press it again to turn it back on. When it comes back up, the NAS drives automount and I can see them on my PC and Plex is running.

    One last thing I did, for security, was to create a user other than pi, give it sudo permissions (‘sudo usermod -a -G sudo USERNAME’), and give the ‘pi’ user a much more complicated password so it would be more difficult to hack. I saw one tutorial recommend deleting the pi user account, but I decided that was overkill. At the very least, you should change the default password of your default user if you are going to make the Pi visible on a network.

  • Unity 2D Tools for Level Building

    This week for our local Unity meetup group, I presented an intro to some of the new 2D Tools in Unity (there is an intro about more general Unity topics, so for the 2D stuff skip to 17 minutes in): https://www.youtube.com/watch?v=xopzxmzFJUs

    Here are the links to things I mentioned were outside of the scope of that talk but might be interesting to learn:

    Sprite masks: https://docs.unity3d.com/Manual/class-SpriteMask.html

    2D side scrolling brawler style camera (focus on 9-slicing sprites and new features for sorting): https://unity3d.com/learn/tutorials/topics/2d-game-creation/introduction-and-goals

    Platformer character controller: https://unity3d.com/learn/tutorials/topics/2d-game-creation/intro-and-session-goals

    https://github.com/MelvynMay/UnityPhysics2D - a lot of interesting scenes demoing 2D physics.

    Pretty cool top-down game from Unity to show off tilemap and other 2D features (from a talk at Unite Austin: https://www.youtube.com/watch?v=RkaEh--qUAY): https://github.com/Unity-Technologies/2d-gamedemo-robodash

    2D Game Kit - This is a 2D Game that Unity built to show off 2D features, and what a complete project looks like including tools for designers so that they don’t need to dive into the code to create new puzzles, levels, etc. https://blogs.unity3d.com/2018/02/13/introducing-2d-game-kit-learn-unity-with-drag-and-drop/, https://unity3d.com/learn/tutorials/s/2d-game-kit. Unity also recorded a live training for this recently that I’m assuming they will publish soon, but I can’t find a link to it yet.

    Edit: Unity posted the live training for 2D Game Kit here: https://unity3d.com/learn/tutorials/projects/2d-game-kit/overview-and-goals?playlist=49633

    What I covered in the video is using the new Tilemaps and associated features for designing levels in 2D. This was also covered by this Unity Learn tutorial: https://unity3d.com/learn/tutorials/topics/2d-game-creation/intro-2d-world-building-w-tilemap, and this blog post: https://blogs.unity3d.com/2018/01/25/2d-tilemap-asset-workflow-from-image-to-level/. These are very thorough and a great reference for these features. I found that there were a couple of things not covered in those videos that I could talk about, specifically how to create your own rule and random tiles, and how to create tiles and tilemaps from art that you generate or find yourself.

    Finally, here is the collection of brushes and tiles that Unity has coded that cover a huge range of use cases: https://github.com/Unity-Technologies/2d-extras

    The ground sprites I used came from here: https://bitcan.itch.io/tileset-simples

    And the flower sprites I used came from here: https://onimaru.itch.io/green-platform

subscribe via RSS