While trying to get more familiar with Django, I started watching talks from DjangoCon from the last few years. I can’t seem to find the talk, but one of them had a list of great Python/Django talks, which inspired me to create my own list (with some definite overlap).
I have found that revisiting talks like these makes me reconsider some design problems that I have recently worked through, so I want to keep a list and rewatch these periodically. I will likely add to this list in the future.
Because I feel much more comfortable in a terminal than at the Windows command line (or in PowerShell), I’ve really been enjoying Windows Subsystem for Linux (WSL). In fact, I use it almost exclusively for accessing the server I run this blog from. WSL is essentially a Linux VM that you use only through a terminal shell in Windows (no GUI access to Linux), and without the lag you get in most VMs.
When I created my Grocery List Flask App, I began by using WSL. However, I ran into an issue that prevented me from seeing a locally hosted version of the API in Windows, so I switched to the Windows command line for that app.
Recently, I’ve been developing a Django application (more on that in a future post), and I ran into a similar issue. Between posting about the localhost issue on WSL and starting this new app, a response had come in that I had been meaning to check out. I found that for Django and PostgreSQL, making sure everything was running from localhost (or 0.0.0.0) instead of 127.0.0.x seemed to fix any issues I had. PSQL gave me some issues just running within WSL, but I found that I just needed to add '-h localhost' to get it to run.
Below are the commands I used to get Conda, Django, and PSQL all set up on my PC and then again on my laptop. This works for Django 2.0, PSQL 9.5, and Conda 4.5.9.
Installation Instructions
Edit: I originally had installation instructions in here for PSQL 9.5. If you want 9.5 in Ubuntu, good news! You already have it. To install the newest version of PSQL, you should uninstall that version first, then install the new version from here.
Install Conda (need to restart after installing for it to recognize ‘conda’ commands)
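If you don't already have Conda, a minimal sketch of grabbing Miniconda inside WSL/Ubuntu looks something like this (the installer filename and URL may have changed since I wrote this, so check the Miniconda download page first):
# download and run the 64-bit Linux Miniconda installer
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
# then close and reopen the terminal so the 'conda' command is recognized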
# create environment
conda create --name NameOfEnvironment

# activate environment
source activate NameOfEnvironment    # or: conda activate NameOfEnvironment

# install Django
conda install -c anaconda django

# install psycopg2, to interface with PSQL
conda install -c anaconda psycopg2

If you get 'permission denied', or it hangs, just rerun the install command that failed. Not sure why, but that fixed things for me.

# Remove PSQL from Ubuntu:
sudo apt-get --purge remove postgresql\*

# Then run this to make sure you didn't miss any:
dpkg -l | grep postgres

# Install PSQL 10 using instructions here: https://www.postgresql.org/download/linux/ubuntu/
# I have copied them here for convenience, but please double check that they have not changed.

# Create the file /etc/apt/sources.list.d/pgdg.list and add the following line:
# deb http://apt.postgresql.org/pub/repos/apt/ xenial-pgdg main

# Then execute the following three commands:
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install postgresql-10

sudo service postgresql start
sudo -i -u postgres
createuser -h localhost --interactive
# psql user: local_user, y to superuser

# create the database
createdb local_db

# log into local_db
psql -d local_db

# privileges for Django to modify tables.
GRANT ALL PRIVILEGES ON DATABASE local_db TO local_user;
ALTER USER local_user WITH PASSWORD 'password';

'\q' to quit the interactive console. 'exit' to leave postgres as the postgres user.

# one line command to log in as the user to check tables during development.
psql -h localhost -d local_db -U local_user

python manage.py makemigrations
python manage.py migrate

Now log back into PSQL using the line above, then enter '\dt' and you should see tables like django_admin_log, django_content_type, django_migrations, and django_sessions. Your PSQL DB is now connected to your Django app!

# optional for now, but allows you to ensure the db connection works by storing credentials for the superuser you create.
python manage.py createsuperuser

# command to run the server. go to localhost:8000 in your web browser to view!
python manage.py runserver 0.0.0.0:8000
This week at our local Unity user meetup group, I presented (along with a co-organizer of the group) about Scriptable Objects in Unity. You can find that talk here.
This is that same content in text form.
Scriptable objects are a powerful tool in designing and developing games in Unity3D. It took me longer than I’d like to admit to get around to using them, but I’d like to introduce them in such a way that makes it easier for you to just get started using them.
What is a Scriptable Object (SO)?
It is a script that derives from ScriptableObject instead of MonoBehaviour. This script allows the user to create objects either in memory or as .asset files in Unity, which are also referred to as Scriptable Objects. A simple example:
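(The class name, fields, and menu strings here are just placeholders.)
using UnityEngine;

// a Scriptable Object that holds the data for one type of enemy (names are placeholders)
[CreateAssetMenu(fileName = "NewEnemyStats", menuName = "Data/Enemy Stats")]
public class EnemyStats : ScriptableObject
{
    public string enemyName;
    public int maxHealth;
    public float moveSpeed;
}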
The line with ‘CreateAssetMenu’ adds a new line to the ‘Create’ menu in the Project window in Unity. When you click that menu item, it will create a new .asset file that has access to the variables and methods defined in your file.
It does not have access to the standard Update(), Start(), Awake()* methods because those are part of MonoBehaviour. It does derive from Unity’s Object class, so it has access to classes like GameObject, Transform, etc.
*use OnEnable for initialization instead of Start or Awake
SOs can contain data and functions just like a MB, but they can’t be attached to a GameObject in the hierarchy as a component. A SO can be referenced by a MB, though.
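For example, a regular MB can just hold a public field of the SO type from the sketch above and have the .asset file dragged onto it in the Inspector:
using UnityEngine;

// a MonoBehaviour component that reads its data from a ScriptableObject asset
public class Enemy : MonoBehaviour
{
    public EnemyStats stats;   // assign the EnemyStats .asset file in the Inspector

    void Start()
    {
        Debug.Log(stats.enemyName + " spawned with " + stats.maxHealth + " HP");
    }
}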
Two things to differentiate:
A script that creates the SO (same as a MB in Project)
The SO itself, which is a .asset file (it lives in the Project folder, analogous to an instance of a MB in the hierarchy).
SOs aren’t meant to replace MBs everywhere in your project. But there are places where they are a better fit for storing data/functions.
Why use SOs?
1. No need for JSON, XML, Text, which means no need to parse data.
2. Can save large amounts of data and optimize data loading when you need it.
3. They don't get reset when exiting playmode!
4. Since SOs aren't tied to the scene (not in the hierarchy), you can commit changes to source control without impacting a scene another team member may be working on.
5. This allows you to more easily reference data/functions across scenes when using multiple scenes in development, which I highly recommend you do (this could be a whole other blog post).
6. No need to depend on manager classes to hold all of your references.
3 and 4 combine to allow us to store the data and functions for a type of enemy and tweak that inside of play mode, have the changes saved, then share that with a teammate without having to worry about impacting the scene file. We also don’t have to re-instantiate prefabs or change every instance of the enemy in a scene (or multiple scenes).
You may already be doing something similar with prefabs (holding/referencing data and never instantiating that particular prefab). If so, look at SOs! Using prefabs for this purpose is confusing and accident prone (accidentally throw a prefab into a scene, get confused between what is a prefab and what is a data holder).
If you are a less experienced Unity developer and this seems like a lot to consider, don’t worry about digesting all of it. Just think about some piece of your game design and try to make it a SO instead of a MonoBehaviour, such as an enemy's stats or some inventory items.
If you are more experienced, but some of this doesn’t entirely make sense, please try out SOs for some small use cases to see how they differ from MBs. It took me some time to get used to thinking in terms of using SOs, but they are a great tool for a lot of use cases.
A Note on Serialization
This is how Unity reads out the data attached in your scripts. It gets talked about alongside Scriptable Objects sometimes because the serialization that Unity does can sometimes mess up scriptable objects. Unity serializes data when it enters/exits play mode, and some data types don't play nice (polymorphic classes, for example). If you are having issues with data resetting/corrupting under those circumstances, check these out:
Forum post on Serialization and Scriptable Objects
Blog post from Lucas Meijer
Blog post from Tim Cooper
Talk by Richard Fine
In addition to starting this blog, I wanted to build some small projects to get some experience with technologies I am not currently using at work. Since I’m currently using Django at work, I decided to create a small Grocery List application using Flask and DynamoDB. You can find the repo and installation info here.
I ran into two major issues when trying to create the application:
First, I tried to set things up without a virtual environment for Python, which caused errors with libraries not pointing to the correct locations. I thought that since this was the only application I'd have on the server, a virtual environment wouldn't be as important. I realize now there is a big upside to separating your application's Python install from your system Python install. I highly recommend setting up a virtual environment for your Python app, whether that be with virtualenv, conda, or something else, even if you intend it to be the only app on the system.
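For example, setting one up with the built-in venv module is only a couple of commands (the environment name and packages here are placeholders for whatever your app actually needs):
# create and activate a virtual environment for the app
python3 -m venv grocery-env
source grocery-env/bin/activate
# install dependencies into the environment instead of the system Python
pip install flask boto3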
Second, I didn’t understand how to set up virtual hosts with Apache when I started this project. Getting this blog, the front page of the grocery list app, and the Flask API all routed correctly and running simultaneously took me a few hours to figure out. Two things that seemed to be required to get this all running (otherwise, I was getting the top root folder serving up on all subdomains):
'NameVirtualHost *:80' at the top of the file, and the port number (80) in the opening line of each of the host definitions: '<VirtualHost *:80>'. See the link below for that line in context.
After setting up my Raspberry Pi as a NAS, I wanted to set up backups that are easy to run and check on. Initially, I wanted to set them up to run automatically, but another goal of the Pi set up was for me to turn my PC off more often. I’m currently thinking that if I’m going to turn the PC off, I will just run the backups manually since turning my PC on every Saturday night (or whenever I’d set it to run) isn’t really automated. If I go back on this and set up a cron job I’ll be sure to post about that as well.
One problem I haven't been able to solve yet is how to back up Windows itself with this setup. The Windows 7 backup tool fails, and I can't see my network drives with the two free backup applications I tried for Windows. I can include specific folders from that drive in my backup script and/or occasionally switch my external drive to my PC to run that full backup, which is probably what I'll end up doing.
Apart from Windows, I have a few different backups I want to run:
1. The SD card for my Raspberry Pi
2. A large hard drive with a lot of media files (movies, music, pictures, etc.)
3. An SSD that has all of my games on it.
4. Also on that SSD are folders for my personal software and game development projects. I want to also back these up to AWS.
Here is the script as it currently stands. I run it from Windows Subsystem for Linux on Windows 10. I have some notes below to explain the set up and why I chose to use the tools and configurations that I did.
#!/bin/bash
today=`date '+%Y_%m_%d'`;

#backup raspberry pi
ssh username@ipaddress "sudo dd if=/dev/mmcblk0 bs=1M | gzip - | dd of=/media/pi/HDDName/pibackup/pibackup$today.gz" > /mnt/e/rsynclogs/pibackuplog$today.txt
#backing up all of my development work including Unity to S3 for offsite backups. Have to add dates to the log files otherwise it overwrites the file
aws s3 sync /mnt/e/Development s3://developmentFolder/ --delete > /mnt/e/rsynclogs/S3DevOutput$today.txt
aws s3 sync /mnt/e/Unity s3://unityFolder/ --delete > /mnt/e/rsynclogs/S3UnityOutput$today.txt
#backup D drive excluding a few folders, and write logs out.
rsync -avP --delete --size-only --exclude-from '/mnt/d/rsynclogs/exclude.txt' --log-file=/mnt/d/rsynclogs/rsynclog$today.txt /mnt/d/ username@ipaddress:/media/pi/MediaBackup/
#backup E drive excluding a few folders, and write logs out.
rsync -avW --delete --size-only --exclude-from '/mnt/e/rsynclogs/exclude.txt' --log-file=/mnt/e/rsynclogs/rsynclog$today.txt /mnt/e/ username@ipaddress:/media/pi/GamesBackup/
I don't have access to the network drives from the terminal (or at least I don't know how to access them from WSL without ssh'ing in), so I needed the output to be relative to the Pi. The quotation marks enclose the commands that get sent to the Pi, so I had to extend them to include the output location. I also changed 'bs=1m' to 'bs=1M'. I believe the lowercase m is expected on Mac, but the uppercase is required on most flavors of Linux.
In order to run that sudo command from the script, I had to set up my user to not require a password to execute it, which I did by doing the following:
At a terminal on the Pi, enter 'sudo visudo' and change the last line to: 'username ALL = NOPASSWD: ALL', where username is the user you are using to ssh. If you are doing this as the pi user, I don't think this will be necessary. I'd kind of like to limit this to just the 'dd' command instead of ALL; I haven't tested that yet, but there is a rough sketch of what I think it would look like below. I may update this in the future.
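Roughly, assuming dd lives at /bin/dd on Raspbian:
# current line, added via 'sudo visudo': passwordless sudo for the backup user
username ALL = NOPASSWD: ALL
# untested: a more restrictive rule that should limit this to dd only
# username ALL = NOPASSWD: /bin/dd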
By default, drives mount with the 'pi' user. Since I was setting up my backups to work with a different user, rsync was giving me errors about not being able to set the time on the files when I'd run the command. I’m pretty sure this was because the user didn’t have permissions on the drive. By adding the drives to fstab, it mounts them as root instead, which allows the user to access them since it has root permissions. I should have done this when setting up the drive as a NAS, but I only did it for the initial drive I was testing. See here for instructions on adding drives to fstab: https://www.howtogeek.com/139433/how-to-turn-a-raspberry-pi-into-a-low-power-network-storage-device/
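For illustration, a line in /etc/fstab for one of the drives might look something like this (the device, mount point, and filesystem type are placeholders; 'sudo blkid' will show you yours):
# /etc/fstab: mount the media partition at boot instead of letting it auto-mount as 'pi'
/dev/sda1    /media/pi/MediaBackup    auto    defaults    0    0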
For my E drive rsync, I tried initially with the same settings as the D drive, but the backup kept hanging on different files. I saw several recommendations of different flags that people claimed to be the culprit. I tried turning several off and on, but the one that seemed to fix things was swapping -P for -W (suggested here: https://github.com/Microsoft/WSL/issues/2138), which forces entire files to be transferred instead of partial files. I could probably add --progress back in, but -v for verbose gives me enough output to see where issues arise. I'd advise adding --progress back in if you encounter issues and need to check where things are going wrong.
The last thing I added to the script was a variable to grab the current date so I don’t overwrite the pi backup or the log files.
One thing I’d like to add is a way to clean up the pi backups. At ~3GB each, it isn’t a big issue currently, but eventually I’ll want to clean them up.
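When I do get around to it, it will probably just be a one-liner run on the Pi, something like this (the path is from the backup script above; the 90-day retention is arbitrary):
# delete Pi image backups older than roughly 90 days
find /media/pi/HDDName/pibackup -name 'pibackup*.gz' -mtime +90 -delete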
When my external HDD failed, I debated getting a network attached storage device (NAS) before realizing the price wasn’t worth it for me. I don’t have that much data, and really all I wanted was a way to automate backups and have a plex server that requires less power than my PC (so I could turn it off more often).
While I was looking around at options, I found I could do both of those things with a Raspberry Pi. I’d been wanting to get one for a while, but never had a good project to justify picking one up. I ordered a Raspberry Pi 3, a case with a fan and a power supply (that has a power switch), and a 32GB SD card. That is more storage than I need, but a 16GB card wasn’t much cheaper. I also picked up an 8 TB Seagate external hard drive.
While I think anyone can set this up, I will say that I have a decent working knowledge of Linux, which helped getting started and troubleshooting issues I ran into. If you aren’t very familiar with Linux and the terminal, you can still get all of this set up, but debugging issues and working through it all might be a little more difficult.
I didn’t do either of those at first and it became annoying switching back and forth between machines since I only have one keyboard/mouse/display set up. If you have a separate keyboard, mouse, and display for your Pi, it might not be as helpful right away, but I’d still recommend it.
My first step was to try to set up the NAS. I ran through this tutorial:
https://www.howtogeek.com/139433/how-to-turn-a-raspberry-pi-into-a-low-power-network-storage-device/, but I hit a wall near the end. I think this was written for Windows 7 and I’m on Windows 10 where mapping a network drive looks different. I also think I may have skipped the last step of setting the samba PW for my user account, which may have been the bigger problem. Here are some notes about where I did things a little differently:
My external HDD had 4 partitions when I started the process and they automatically mounted, so I skipped the parts about mkdir /media/USBHDD1 and mount...USBHDD1
'security = user' was not already in the samba config file, not even commented out, so I just added it in the authentication section. For the section they tell you to add to the config file, I made four copies of that at the bottom of the file, one for each partition that I have (a rough example of one of those sections is shown after these notes).
For the last part of the tutorial about adding the network drives on my Windows PC, I had to follow different directions: go to Windows Explorer -> This PC -> Map network drive (in the file menu), then in the folder field enter '\\raspberrypi\', then click 'Browse', which let me select the folder (I divided my external HDD into a few partitions). I was also able to manually enter '\\raspberrypi\nameOfFolder' to get it to see a drive. I repeated that for each of my drive partitions.
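Each of those four copied smb.conf sections looks roughly like this (the share name, path, and user are placeholders; double check the exact options against the tutorial):
[MediaBackup]
path = /media/pi/MediaBackup
valid users = username
read only = no
browseable = yes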
One mistake I made was only setting up one of the partitions in fstab. This caused me some serious issues with permissions when trying to set up backups with rsync.
I’m in the process of automating my backups. When I finish testing my backup scripts, I’ll be sure to post about it.
The only problem I ran into was that I had to change permissions on the folder ‘/media/pi’ where it automounted my drives because Plex couldn't access them. The permissions on the drive folders themselves were fine.
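The fix was just loosening the permissions on that parent folder so the user Plex runs as can get into it, something like this (the exact mode may differ for your setup):
# let other users (including the plex user) traverse the auto-mount folder
sudo chmod 755 /media/pi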
After that small adjustment, I was able to stream a 1080p movie with a bitrate of ~10 Mb/s over my local network without any trouble, but when I tried streaming one closer to 25 Mb/s, the Pi definitely couldn’t handle it. I’m not sure where exactly the limit is, but that is something to note.
Finally, I wanted to test how resilient this setup is, so I made sure I could restart while ssh'd in and even shut down the Pi.
To restart from command line:
‘sudo reboot’. This allowed me to log back in after a couple of minutes.
To shutdown then start up the pi:
‘sudo halt’ on the command line. This shuts it down, but the red light stays on (and the fan, so the board is still getting power). I can then use the power button on the AC cable to shut off power, then press it again to turn it back on. When it comes back up, the NAS drives automount and I can see them on my PC and Plex is running.
One last thing I did, for security, was to create a user other than pi, give it sudo permissions ('sudo usermod -a -G sudo USERNAME'), and give the 'pi' user a much more complicated password so it would be more difficult to hack. I saw one tutorial recommend deleting the pi user account, but I decided that was overkill. At the very least, you should change the default password of your default user if you are going to make the Pi visible on a network.
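For completeness, the whole sequence is roughly this (USERNAME being whatever name you pick):
# create the new user, give it sudo, and change the default pi password
sudo adduser USERNAME
sudo usermod -a -G sudo USERNAME
sudo passwd pi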
This week for our local Unity meetup group, I presented an intro to some of the new 2D tools in Unity (there is an intro about more general Unity topics, so for the 2D stuff, skip to 17 minutes in): https://www.youtube.com/watch?v=xopzxmzFJUs
Here are the links to things I mentioned were outside of the scope of that talk but might be interesting to learn: