Programming My Life - Andrew Mshar
  1. Python Reference Talks

    While trying to get more familiar with Django, I started watching talks from DjangoCon from the last few years. I can’t seem to find the talk, but one of them had a list of great Python/Django talks, which inspired me to create my own list (with some definite overlap).

    I have found that revisiting talks like these makes me reconsider some design problems that I have recently worked through, so I want to keep a list and rewatch them periodically. I will likely add to this list in the future.

    These two go together (from DjangoCon 2015):
    - Jane Austen on PEP 8 - Lacey Williams Henschel
    - The Other Hard Problem: Lessons and Advice on Naming Things by Jacob Burch

  2. Setting up Conda, Django, and PostgreSQL in Windows Subsystem for Linux

    Because I feel much more comfortable in a terminal than on the Windows command line (or in PowerShell), I’ve really been enjoying Windows Subsystem for Linux (WSL). In fact, I use it almost exclusively for accessing the server I run this blog from. WSL is essentially a terminal-only Linux environment inside Windows (there is no GUI access to Linux), without the lag you get in most VMs.

    When I created my Grocery List Flask App, I began by using WSL. However, I ran into an issue that prevented me from seeing a locally hosted version of the API in Windows, so I switched to the Windows command line for that app.

    Recently, I’ve been developing a Django application (more on that in a future post), and I ran into a similar issue. Between the time I posted the issue with localhost on WSL and the time I started this new app, a response had appeared that I had been meaning to check out. I found that for Django and PostgreSQL, making sure everything was running from localhost (or 0.0.0.0) instead of 127.0.0.x seemed to fix the issues I had. PSQL gave me some trouble just running within WSL, too, but adding '-h localhost' to its commands got it to run.

    Below are the commands I used to get Conda, Django, and PSQL all set up on my PC and then again on my laptop. This works for Django 2.0, PSQL 9.5, and Conda 4.5.9.

    Installation Instructions

    Edit: I originally had installation instructions in here for PSQL 9.5. If you want 9.5 in Ubuntu, good news! You already have it. To install the newest version of PSQL, you should uninstall that version first, then install the new version using the instructions linked in the commands below.

    Install Conda (you may need to restart your shell after installing for it to recognize ‘conda’ commands)

    #create environment
    conda create --name NameOfEnvironment
    #activate environment ('conda activate' on 4.4+; older versions use 'source activate')
    conda activate NameOfEnvironment
    #install Django
    conda install -c anaconda django
    #install psycopg2, to interface with PSQL
    conda install -c anaconda psycopg2
    
    If you get a permission denied error, or the install hangs, just rerun the install command that failed. I'm not sure why, but that fixed things for me.
    
    #Remove PSQL from Ubuntu:
    sudo apt-get --purge remove postgresql\*
    #Then run this to make sure you didn’t miss any:
    dpkg -l | grep postgres
    
    #Install PSQL 10 using the instructions here: https://www.postgresql.org/download/linux/ubuntu/
    #I have copied them here for convenience, but please double-check that they have not changed.
    #Create the file /etc/apt/sources.list.d/pgdg.list and add the following line:
    #deb http://apt.postgresql.org/pub/repos/apt/ xenial-pgdg main
    
    #Then execute the following three commands
    wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
    sudo apt-get update
    sudo apt-get install postgresql-10
    
    sudo service postgresql start    
    sudo -i -u postgres -h localhost
    createuser --interactive
    #when prompted: name the role local_user, and answer y to superuser
    
    #create the database
    createdb local_db
    #log into local_db
    psql -d local_db
    
    #privileges for Django to modify tables.
    GRANT ALL PRIVILEGES ON DATABASE local_db TO local_user;
    
    ALTER USER local_user WITH PASSWORD 'password';
    
    #'\q' to quit the interactive console
    #'exit' to log out of the postgres user's shell
    
    #one line command to log in as the user to check tables during development.
    psql -h localhost -d local_db -U local_user
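
    Before running the migration commands below, your Django project needs to point at this database. Here is roughly what the DATABASES block in settings.py looks like for this setup (a sketch: the name, user, and password match the ones created above; note HOST is 'localhost' rather than 127.0.0.1, per the WSL issue mentioned earlier):

    #settings.py (sketch): point Django at the local PSQL instance
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': 'local_db',
            'USER': 'local_user',
            'PASSWORD': 'password',
            'HOST': 'localhost',
            'PORT': '5432',
        }
    }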
    
    python manage.py makemigrations
    python manage.py migrate
    
    Now log back in to PSQL using the line above, then enter '\dt' and you should see tables like django_admin_log, django_content_type, django_migrations, and django_sessions. Your PSQL DB is now connected to your Django app!
    
    #optional for now, but allows you to ensure db connection works by storing credentials for the superuser you create.
    python manage.py createsuperuser
    
    #command to run the server. go to localhost:8000 in your web browser to view!
    python manage.py runserver 0.0.0.0:8000
    

    I used this post for reference.

  3. Unity3D Scriptable Objects

    This week at our local Unity user meetup group, I presented (along with a co-organizer of the group) about Scriptable Objects in Unity. You can find that talk here.

    This is that same content in text form.

    Scriptable objects are a powerful tool in designing and developing games in Unity3D. It took me longer than I’d like to admit to get around to using them, so I’d like to introduce them in a way that makes it easier for you to just get started.

    What is a Scriptable Object (SO)?

    It is a script that derives from ScriptableObject instead of MonoBehaviour. This script allows you to create objects either in memory or as .asset files in Unity, which are also referred to as Scriptable Objects. A simple example:

    using UnityEngine;
    
    [CreateAssetMenu(menuName = "SOs/FloatVar")]
    public class FloatVariable : ScriptableObject
    {
        public float Value;
    
        void MethodName()
        {
            //Do stuff
        }
    }
    

    The line with ‘CreateAssetMenu’ adds a new item to the ‘Create’ menu in the Project window in Unity. When you click that menu item, it will create a new .asset file that has access to the variables and methods defined in your script.

    It does not have access to the standard Update(), Start(), and Awake()* methods because those are part of MonoBehaviour. It does derive from Unity’s Object class, though, so it can still reference and work with types like GameObject, Transform, etc.

    *use OnEnable for initialization instead of Start or Awake

    SOs can contain data and functions just like a MB, but they can’t be attached to a GameObject in the hierarchy as components. A SO can be referenced by a MB, though.
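
    Here is a minimal sketch of that relationship, reusing the FloatVariable type from the example above (the class and field names are just for illustration):

    using UnityEngine;

    //A regular MonoBehaviour that reads from a SO asset.
    //Drag a FloatVariable .asset onto the MaxHealth slot in the Inspector.
    public class Enemy : MonoBehaviour
    {
        public FloatVariable MaxHealth;

        void Start()
        {
            Debug.Log("Max health: " + MaxHealth.Value);
        }
    }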

    Two things to differentiate:

    - A script that creates the SO (same as a MB script in the Project window).
    - The SO itself, which is a .asset file (lives in the Project folder, analogous to an instance of a MB in the hierarchy).

    SOs aren’t meant to replace MBs everywhere in your project. But there are places where they are a better fit for storing data/functions.

    Why use SOs?

    1. No need for JSON, XML, or text files, which means no need to parse data.
    2. You can save large amounts of data and optimize data loading when you need it.
    3. They don’t get reset when exiting play mode!
    4. Since SOs aren’t tied to the scene (not in the hierarchy), you can commit changes to source control without impacting a scene another team member may be working on. This allows you to more easily reference data/functions across scenes when using multiple scenes in development, which I highly recommend you do (this could be a whole other blog post).
    5. No need to depend on manager classes to hold all of your references.

    3 and 4 combine to allow us to store the data and functions for a type of enemy and tweak that inside of play mode, have the changes saved, then share that with a teammate without having to worry about impacting the scene file. We also don’t have to re-instantiate prefabs or change every instance of the enemy in a scene (or multiple scenes).

    You may already be doing something similar with prefabs (holding/referencing data in a prefab you never instantiate). If so, look at SOs! Using prefabs for this purpose is confusing and accident-prone (it’s easy to accidentally drop a prefab into a scene, or to lose track of which prefabs are real prefabs and which are data holders).

    If you are a less experienced Unity developer and this seems like a lot to consider, don’t worry about digesting all of it. Just think about some piece of your game design and try to make it as a SO instead of a MonoBehaviour, such as an enemy’s stats or some inventory items.
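
    For example, a data holder for an enemy type might look something like this (a hypothetical sketch; the class and fields are just examples):

    using UnityEngine;

    //Create one .asset per enemy type via the Create menu,
    //then reference it from the enemy's MonoBehaviour.
    [CreateAssetMenu(menuName = "SOs/EnemyStats")]
    public class EnemyStats : ScriptableObject
    {
        public float MaxHealth;
        public float MoveSpeed;
        public int Damage;
    }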

    If you are more experienced, but some of this doesn’t entirely make sense, please try out SOs for some small use cases to see how they differ from MBs. It took me some time to get used to thinking in terms of using SOs, but they are a great tool for a lot of use cases.

    How to use SOs

    A few Unity Learn examples that demonstrate different use cases for SOs:
    Text Adventure
    Ability System
    Character Select
    Customizing UI

    Two talks about SOs that really helped me understand how and where to use them:
    Richard Fine - Overthrowing the MonoBehaviour Tyranny in a Glorious Scriptable Object Revolution
    Link to the project from the talk

    Game Architecture with Scriptable Objects
    Blog post for the previous talk

    Serialization

    Serialization is how Unity reads and writes the data attached to your scripts. It gets talked about alongside Scriptable Objects because Unity’s serialization can sometimes mess them up: Unity serializes data when it enters/exits play mode, and some data types don’t play nice with that (polymorphic classes, for example). If you are having issues with data resetting/corrupting under those circumstances, check these out:
    Forum post on Serialization and Scriptable Objects
    Blog post from Lucas Meijer
    Blog Post from Tim Cooper
    Talk by Richard Fine

  4. Grocery List App and Flask Deployment Issues

    In addition to starting this blog, I wanted to build some small projects to get some experience with technologies I am not currently using at work. Since I’m currently using Django at work, I decided to create a small Grocery List application using Flask and DynamoDB. You can find the repo and installation info here.

    I ran into two major issues when trying to create the application:

    First, I tried to set things up without a virtual environment for Python, which caused errors with libraries not pointing to the correct locations. I thought that since this was the only application I’d have on the server, a virtual environment wouldn’t be as important. I realize now there is a big upside to separating your application’s Python install from your system’s Python install. I highly recommend setting up a virtual environment for your Python app, whether with virtualenv, conda, or something else, even if you intend it to be the only app on the system.

    Second, I didn’t understand how to set up virtual hosts with Apache when I started this project. Getting this blog, the front page of the grocery list app, and the Flask API all routed correctly and running simultaneously took me a few hours to figure out. Two things seemed to be required to get this all running (otherwise, the top root folder was served up on all subdomains):

    'NameVirtualHost *:80' at the top of the file, and the port number (80) in the opening line of each of the host definitions: '<VirtualHost *:80>'. See the link below for that line in context.
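
    To make that concrete, here is a rough sketch of the shape such a config takes (the domain names and paths are placeholders I made up, and the WSGIScriptAlias line assumes mod_wsgi is installed; see the references below for real examples):

    NameVirtualHost *:80

    <VirtualHost *:80>
        ServerName example.com
        DocumentRoot /var/www/blog
    </VirtualHost>

    #one of these per subdomain; the Flask app is served through mod_wsgi
    <VirtualHost *:80>
        ServerName grocery.example.com
        WSGIScriptAlias / /var/www/grocerylist/app.wsgi
    </VirtualHost>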

    As I was working through the issues, I created this thread for help on reddit: https://www.reddit.com/r/flask/comments/7fbfs1/af_apache_deployment_questions/

    I almost posted to serverfault.com, but I ended up figuring things out as I was creating a post.

    References:
    http://peatiscoding.me/geek-stuff/mod_wsgi-apache-virtualenv/
    https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-xvii-deployment-on-linux-even-on-the-raspberry-pi

  5. Backups Using a Network Attached Raspberry Pi

    After setting up my Raspberry Pi as a NAS, I wanted to set up backups that are easy to run and check on. Initially, I wanted them to run automatically, but another goal of the Pi setup was for me to turn my PC off more often. I’m currently thinking that if I’m going to turn the PC off, I will just run the backups manually, since turning my PC on every Saturday night (or whenever I’d set it to run) isn’t really automated. If I go back on this and set up a cron job, I’ll be sure to post about that as well.

    One problem I haven’t been able to solve yet is how to back up Windows itself with this setup. The Windows 7 backup tool fails, and I can’t see my network drives with the two free backup applications I tried for Windows. I can include specific folders from that drive in my backup script and/or occasionally switch my external drive to my PC to run that full backup, which is probably what I’ll end up doing.

    Apart from Windows, I have a few different backups I want to run:
    1. The SD card for my Raspberry Pi.
    2. A large hard drive with a lot of media files (movies, music, pictures, etc.).
    3. An SSD that has all of my games on it.
    4. Folders, also on that SSD, for my personal software and game development projects. I want to also back these up to AWS.

    Here is the script as it currently stands. I run it from Windows Subsystem for Linux on Windows 10. I have some notes below to explain the setup and why I chose the tools and configurations that I did.

    #!/bin/bash
    today=`date '+%Y_%m_%d'`;
    
    #backup raspberry pi
    ssh username@ipaddress "sudo dd if=/dev/mmcblk0 bs=1M | gzip - | dd of=/media/pi/HDDName/pibackup/pibackup$today.gz" > /mnt/e/rsynclogs/pibackuplog$today.txt
    
    #back up all of my development work, including Unity, to S3 for offsite backups
    #(the date in the log file name keeps each run from overwriting the last)
    aws s3 sync /mnt/e/Development s3://developmentFolder/ --delete > /mnt/e/rsynclogs/S3DevOutput$today.txt
    
    aws s3 sync /mnt/e/Unity s3://unityFolder/ --delete > /mnt/e/rsynclogs/S3UnityOutput$today.txt
    
    #backup D drive excluding a few folders, and write logs out.
    rsync -avP --delete --size-only --exclude-from '/mnt/d/rsynclogs/exclude.txt' --log-file=/mnt/d/rsynclogs/rsynclog$today.txt /mnt/d/ username@ipaddress:/media/pi/MediaBackup/
    
    #backup E drive excluding a few folders, and write logs out.
    rsync -avW --delete --size-only --exclude-from '/mnt/e/rsynclogs/exclude.txt' --log-file=/mnt/e/rsynclogs/rsynclog$today.txt /mnt/e/ username@ipaddress:/media/pi/GamesBackup/
    

    The Raspberry Pi backup is modified from: https://johnatilano.com/2016/11/25/use-ssh-and-dd-to-remotely-backup-a-raspberry-pi/

    I don't have access to the network drives from the terminal (or at least I don't know how to access them from WSL without ssh'ing), so I needed the output to be relative to the Pi. The quotation marks enclose the commands that get sent to the Pi, so I had to extend them to include the output location. I also changed 'bs=1m' to 'bs=1M'; I believe the lowercase m is expected on Mac, but the uppercase is required on most flavors of Linux.

    In order to run it from the script I had to set up my user to not require a password to execute the command, which I did by doing the following:

    At a terminal on the Pi, enter 'sudo visudo' and change the last line to 'username ALL = NOPASSWD: ALL', where username is the username you are using to ssh. If you are doing this as the pi user, I don’t think this will be necessary. I'd kind of like to limit this to just the ‘dd’ command, but I'm not sure how to tell it where dd lives. I may update this in the future.
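
    In case it helps anyone who wants to lock this down: standard sudoers syntax accepts a full command path in place of the second ALL, so a line like the one below should work (I haven't tested it on this setup; find the path with 'which dd'):

    #hypothetical sudoers line allowing passwordless sudo for dd only
    username ALL = NOPASSWD: /bin/dd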

    For setting up rsync with the correct flags, I used these two links:
    https://www.howtogeek.com/175008/the-non-beginners-guide-to-syncing-data-with-rsync/
    https://www.thegeekstuff.com/2011/01/rsync-exclude-files-and-folders/?utm_source=feedburner

    Two notes for the rsync commands:

    By default, drives mount with the 'pi' user. Since I was setting up my backups to work with a different user, rsync was giving me errors about not being able to set the time on the files when I'd run the command; I’m pretty sure this was because the user didn’t have permissions on the drive. By adding the drives to fstab, they mount as root instead, which allows my backup user to access them since root's permissions apply. I should have done this when setting up the drive as a NAS, but I only did it for the initial drive I was testing. See here for instructions on adding drives to fstab: https://www.howtogeek.com/139433/how-to-turn-a-raspberry-pi-into-a-low-power-network-storage-device/
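
    For reference, a single fstab entry looks something like the line below (a sketch: the device name and mount point are placeholders, and you can find your actual device with 'sudo blkid' or 'sudo fdisk -l'):

    #hypothetical /etc/fstab line to mount one partition at boot (as root)
    /dev/sda1    /media/pi/MediaBackup    auto    noatime    0    0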

    For my E drive rsync, I initially tried the same settings as the D drive, but the backup kept hanging on different files. I saw several recommendations about flags people claimed were the culprit. I tried turning several off and on, but the one that seemed to fix things was swapping -P for -W (suggested here: https://github.com/Microsoft/WSL/issues/2138), which forces entire files to be transferred instead of partial files. I could probably add --progress back in, but -v for verbose gives me enough output to see where issues arise. I’d advise adding --progress back in if you encounter issues and need to check where things are going wrong.

    You can find instructions for setting up the AWS CLI tools and using syncing with S3 in the AWS docs. I couldn’t find anything in there for logging, but StackOverflow had a good solution: https://stackoverflow.com/questions/35075668/output-aws-cli-sync-results-to-a-txt-file

    The last thing I added to the script was a variable to grab the current date so I don’t overwrite the pi backup or the log files.

    One thing I’d like to add is a way to clean up the pi backups. At ~3GB each, they aren’t a big issue currently, but eventually I’ll want to clean them up.
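
    When I do, something like the following, run on the Pi, would probably be the starting point (untested; the path matches the script above and the 60-day retention is arbitrary):

    #delete pi backup images older than 60 days
    find /media/pi/HDDName/pibackup -name 'pibackup*.gz' -mtime +60 -delete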

  6. Setting up a Raspberry Pi as a NAS and Plex server

    When my external HDD failed, I debated getting a network attached storage device (NAS) before realizing the price wasn’t worth it for me. I don’t have that much data, and really all I wanted was a way to automate backups and have a Plex server that requires less power than my PC (so I could turn it off more often).

    While I was looking around at options, I found I could do both of those things with a Raspberry Pi. I’d been wanting to get one for a while, but never had a good project to justify picking one up. I ordered a Raspberry Pi 3, a case with a fan and a power supply (that has a power switch), and a 32GB SD card. That is more storage than I need, but a 16GB card wasn’t much cheaper. I also picked up an 8 TB Seagate external hard drive.

    While I think anyone can set this up, I will say that I have a decent working knowledge of Linux, which helped getting started and troubleshooting issues I ran into. If you aren’t very familiar with Linux and the terminal, you can still get all of this set up, but debugging issues and working through it all might be a little more difficult.

    First, I would suggest setting up SSH on your Pi so you don’t have to go back and forth between working on the Pi and another machine: https://www.raspberrypi.org/documentation/remote-access/ssh/ I’d also recommend setting it up with an SSH key so you don’t have to enter your password every time you log in: https://www.thegeekstuff.com/2008/11/3-steps-to-perform-ssh-login-without-password-using-ssh-keygen-ssh-copy-id/
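
    The key setup boils down to two standard OpenSSH commands, run from the machine you’ll be connecting from (substitute your own username and your Pi’s address):

    #generate a key pair (accepting the defaults is fine),
    #then copy the public key to the Pi
    ssh-keygen
    ssh-copy-id pi@raspberrypi.local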

    I didn’t do either of those at first and it became annoying switching back and forth between machines since I only have one keyboard/mouse/display set up. If you have a separate keyboard, mouse, and display for your Pi, it might not be as helpful right away, but I’d still recommend it.

    My first step was to try to set up the NAS. I ran through this tutorial: https://www.howtogeek.com/139433/how-to-turn-a-raspberry-pi-into-a-low-power-network-storage-device/, but I hit a wall near the end. I think it was written for Windows 7, and I’m on Windows 10, where mapping a network drive looks different. I also think I may have skipped the last step of setting the samba password for my user account, which may have been the bigger problem. Here are some notes about where I did things a little differently:

    My external HDD had 4 partitions when I started the process and they automatically mounted, so I skipped the parts about mkdir /media/USBHDD1 and mount...USBHDD1

    'security = user' was not already in the samba config file (even commented out), so I just added it in the authentication section. For the section they tell you to add to the config file, I made four copies at the bottom of the file, one for each partition that I have.
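
    Each of those copies follows the general shape of the share definition from the tutorial, roughly like this (a sketch: the share name and path are placeholders, and your options may differ):

    [Partition1]
    comment = One partition on the external HDD
    path = /media/pi/Partition1
    valid users = @users
    force group = users
    create mask = 0660
    directory mask = 0771
    read only = no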

    For the last part of the tutorial, about adding the network drives on my Windows PC, I had to follow different directions: go to Windows Explorer -> This PC -> Map network drive (in the file menu), then in the folder field enter '\\raspberrypi\' and click 'Browse', which let me select the folder (I divided my external HDD into a few partitions). I was also able to manually enter '\\raspberrypi\nameOfFolder' to get it to see a drive. I repeated that for each of my drive partitions.

    One mistake I made was only setting up one of the partitions in fstab. This caused me some serious issues with permissions when trying to set up backups with rsync.

    I’m in the process of automating my backups. When I finish testing my backup scripts, I’ll be sure to post about it.

    To set my pi up as a plex server, I followed this tutorial: https://www.codedonut.com/raspberry-pi/raspberry-pi-plex-media-server/

    The only problem I ran into was that I had to change permissions on the folder ‘/media/pi’ where it automounted my drives because Plex couldn't access them. The permissions on the drive folders themselves were fine.

    After that small adjustment, I was able to stream a 1080p movie with a bitrate ~10Mb/s over my local network without any trouble, but I tried streaming one closer to 25 Mb/s and the Pi definitely couldn’t handle it. I’m not sure where exactly the limit is, but that is something to note.

    Finally, I wanted to test how resilient this setup is, so I made sure I could restart while ssh’d in and even shut down the Pi.

    To restart from the command line: ‘sudo reboot’. This allowed me to log back in after a couple of minutes.

    To shut down and then start up the Pi: ‘sudo halt’ on the command line. This shuts it down, but the red light stays on (and the fan, so the board is still getting power). I can then use the power button on the AC cable to shut off power, then press it again to turn it back on. When it comes back up, the NAS drives automount so I can see them on my PC, and Plex is running.

    One last thing I did, for security, was to create a user other than pi, give it sudo permissions (‘sudo usermod -a -G sudo USERNAME’), and give the ‘pi’ user a much more complicated password so it would be more difficult to hack. I saw one tutorial recommend deleting the pi user account, but I decided that was overkill. At the very least, you should change the default password of your default user if you are going to make the Pi visible on a network.
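
    The commands for that boil down to something like this, run on the Pi (USERNAME is a placeholder):

    #create the new user, then add it to the sudo group
    sudo adduser USERNAME
    sudo usermod -a -G sudo USERNAME
    #give the pi user a stronger password
    sudo passwd pi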

  7. Unity 2D Tools for Level Building

    This week for our local Unity meetup group, I presented an intro to some of the new 2D Tools in Unity (There is an intro about more general Unity topics, so for the 2D stuff skip to 17 minutes in): https://www.youtube.com/watch?v=xopzxmzFJUs

    Here are the links to things I mentioned were outside of the scope of that talk but might be interesting to learn:

    Sprite masks: https://docs.unity3d.com/Manual/class-SpriteMask.html

    2D side scrolling brawler style camera (focus on 9-slicing sprites and new features for sorting): https://unity3d.com/learn/tutorials/topics/2d-game-creation/introduction-and-goals

    Platformer character controller: https://unity3d.com/learn/tutorials/topics/2d-game-creation/intro-and-session-goals

    https://github.com/MelvynMay/UnityPhysics2D - a lot of interesting scenes demoing 2D physics.

    Pretty cool topdown game from Unity to show off tilemap and other 2d features (from a talk at Unite Austin: https://www.youtube.com/watch?v=RkaEh--qUAY): https://github.com/Unity-Technologies/2d-gamedemo-robodash

    2D Game Kit - This is a 2D Game that Unity built to show off 2D features, and what a complete project looks like including tools for designers so that they don’t need to dive into the code to create new puzzles, levels, etc. https://blogs.unity3d.com/2018/02/13/introducing-2d-game-kit-learn-unity-with-drag-and-drop/, https://unity3d.com/learn/tutorials/s/2d-game-kit. Unity also recorded a live training for this recently that I’m assuming they will publish soon, but I can’t find a link to it yet.

    Edit: Unity posted the live training for 2D Game Kit here: https://unity3d.com/learn/tutorials/projects/2d-game-kit/overview-and-goals?playlist=49633

    What I covered in the video is using the new Tilemaps and associated features for designing levels in 2D. This was also covered by this Unity Learn tutorial: https://unity3d.com/learn/tutorials/topics/2d-game-creation/intro-2d-world-building-w-tilemap, and this blog post: https://blogs.unity3d.com/2018/01/25/2d-tilemap-asset-workflow-from-image-to-level/. These are very thorough and a great reference for these features. I found there were a couple of things I could talk about that weren’t covered in those videos, specifically how to create your own rule and random tiles, and how to create tiles and tilemaps from art that you generate or find yourself.

    Finally, here is the collection of brushes and tiles that Unity has coded that cover a huge range of use cases: https://github.com/Unity-Technologies/2d-extras

    The ground sprites I used came from here: https://bitcan.itch.io/tileset-simples

    And the flower sprites I used came from here: https://onimaru.itch.io/green-platform
