Programming My Life - python
  1. Lessons Learned Teaching Undergraduate Astronomy with a Video Game - Testing

    This is the fourth and final installment of the series breaking down my talk from DjangoConUS 2022. The first entry covered background information about the project, the second was about using Django Rest Framework, and the third was about infrastructure and deployment.

    Before diving in, I'd like to emphasize that my testing philosophy isn't the end state for a project. It's a guiding principle for getting started and staying motivated with testing. If you have an established application with a large team, you likely already have rules and processes to follow for testing. What I want to discuss here is how I think about testing as I'm building a project. This isn't limited to small/side projects, but it's much more important earlier in a project's lifecycle.

    So what is my philosophy? Test enough to give you confidence to change and deploy your code.

    For me, this does not include test driven development (TDD) or a specific coverage number. Why not test driven development? I've tried it, and it doesn't match my preferred way to work on most projects, which is to develop my code first and then write tests. If you find TDD works better for you, definitely do that! I do some development by writing tests first, but I generally write at least some code before I write my tests. And while I do strive for a high coverage percentage, I use that metric more as a guide to see where I might need more tests than as a bar to meet. That is, I'm more interested in coverage of a given file than in an overall number.

    OK, so why this philosophy? I find that without tests (either at all, or with only limited tests for a specific section of the code), I'm much less confident about changing code. This means that I take a lot longer to write or change code in untested sections of the codebase. Usually, I have to take extra time to think through edge cases, since I can't be confident they were covered previously, and I don't know what they are because they aren't written into any tests.

    There are several terms specific to testing that I'd like to explicitly define since some folks use different definitions and terminology for different classes of tests. I want to make sure I'm clear about what I mean when I'm using these terms.

    First, unit tests test a 'unit' of code, generally meaning as little code as possible (oftentimes a single function) so that you can be sure you are testing each section without depending on other sections, which can introduce complexity into tests.
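
    For example, a unit test in this sense might look like the sketch below (the function and module names are made up for this post, not from our codebase):

    # grades.py (hypothetical)
    def percent_correct(num_correct, num_questions):
        """Return a quiz score as a percentage, guarding against empty quizzes."""
        if num_questions == 0:
            return 0.0
        return 100 * num_correct / num_questions

    # test_grades.py
    from grades import percent_correct

    def test_percent_correct_handles_empty_quiz():
        # Edge case: a quiz with no questions shouldn't divide by zero.
        assert percent_correct(0, 0) == 0.0

    def test_percent_correct_basic():
        assert percent_correct(3, 4) == 75.0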

    Integration tests bring together (integrate) two or more 'units' to test their functionality when combined.

    End to end tests cover the full functionality of some part of the application. For example, testing a student sending gameplay data to an endpoint and receiving a response, or someone purchasing our game.

    Our test suite has a large number of unit tests, almost no integration tests, and a few end to end tests. I'll explain my reasoning for each of these.

    First, unit tests. I really like having a good suite of unit tests for two reasons:

    1. Almost every time I write some code, then write unit tests, I find that I refactor the code as I'm writing the tests. Sometimes this is just a little bit of cleanup. Other times, it's a bigger rewrite. But almost every time, writing unit tests gets me to think about my code a little bit differently, and I uncover something that I want to improve.
    2. When I'm making a change to some code I haven't touched in a while, I know that my unit tests will tell me if I broke something. This gives me more confidence to dive in than if I didn't have them.

    Unit tests take some time and effort to write and maintain, but I'll take that overhead on any project for the confidence that they give. One other great use of unit tests is covering a bugfix. Sometimes, I'll find a bug, write a fix, then write a test (or two) that covers the case for that bug so we can avoid it in the future.

    Why no integration tests?

    I think the functionality is covered better by end to end tests for this project. For other projects I've worked on, I've had a much larger suite of integration tests and fewer end to end tests. This is highly dependent on what your application does. Generally, for larger projects, you'll want more integration tests to ensure parts of your code work together without having to run a larger suite of end to end tests for every change.

    End to end tests can take a long time to write, a long time to run (relative to unit tests), and are more difficult to maintain. That's why I recommend they only cover the most important parts of your code. Even though these were the most difficult to write and maintain, they give me the most confidence when deploying my code. Something can still be wrong when they pass, but I at least know the most important parts of the site are mostly working. I wish I had written these earlier in the project, since my first few deployments were much more stressful without them and required some manual testing.

    For our end to end tests, I use Selenium. I've heard a lot of good things about Playwright, and I'm hoping to have some time to look into it, but I haven't investigated it enough yet to recommend it myself. I also use pytest instead of the built in testing system in Django. I don't have a strong opinion about pytest versus the test runner in Django, but I've used pytest a good bit professionally (with and without Django), so I find it easier to get started with. I also like the features it has for running tests (like flags for rerunning just the tests that failed last time, --lf, or stopping at the first failure, -x) as well as fixtures and parametrized inputs. I'd recommend you give pytest a try if you haven't already. You don't need to worry about the more advanced features when starting with it, and you can build up your knowledge as you go.
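
    As a small illustration of those features, here's a sketch of a parametrized test using a fixture (the names and scoring logic are made up for this example):

    import pytest

    @pytest.fixture
    def answer_key():
        # A fixture provides shared setup to any test that asks for it by name.
        return {"q1": "B", "q2": "D"}

    @pytest.mark.parametrize(
        "submitted, expected_score",
        [
            ({"q1": "B", "q2": "D"}, 2),  # all correct
            ({"q1": "B", "q2": "A"}, 1),  # one correct
            ({}, 0),                      # nothing submitted
        ],
    )
    def test_score_quiz(answer_key, submitted, expected_score):
        score = sum(1 for q, a in submitted.items() if answer_key.get(q) == a)
        assert score == expected_score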

    Speaking of fixtures, I tend to create a factory for each data type that I need (when I need it for a test) with Factory Boy and make that a fixture if I need to use it in multiple tests. I'll move those fixtures out to a conftest file (this is a file particular to pytest) if I find a need for them across multiple test files. That way, I'm not populating conftest with a large number of fixtures that aren't used or are only used in one or two places, making it easier to read. If you'd like me to write more about how I use fixtures, let me know!
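
    If that sounds abstract, here's roughly what the pattern looks like (a sketch with a hypothetical Student model, assuming factory_boy and pytest-django are installed):

    # conftest.py
    import factory
    import pytest

    from myapp.models import Student  # hypothetical app and model

    class StudentFactory(factory.django.DjangoModelFactory):
        class Meta:
            model = Student

        name = factory.Sequence(lambda n: f"student-{n}")
        email = factory.Sequence(lambda n: f"student-{n}@example.com")

    @pytest.fixture
    def student(db):
        # pytest-django's 'db' fixture gives the test access to the database.
        return StudentFactory()

    # test_progress.py -- any test file can now request the 'student' fixture.
    def test_new_student_has_no_progress(student):
        assert student.progress_set.count() == 0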

    Some key advice: write tests early!

    This doesn't mean you need 100% coverage on day 1. Or even 10%. But having a test, any test, makes it much easier to write the next one. So start with something as simple as making sure your pages load or, if your application is an API, that your endpoints return the correct basic response to a good request (see the sketch below). Then, as you are building out your application, keep asking yourself: what sections of the code worry me the most to change, and what do I worry about when we deploy? Write tests in those areas to alleviate your stress. Also, try to get some tests covering your most important code in place as early as you can. In the long run, it'll speed up your development and increase your confidence in deploying your code.
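
    A starter test along those lines can be as small as this sketch (using pytest-django's client fixture; the URL name is hypothetical):

    import pytest
    from django.urls import reverse

    @pytest.mark.django_db
    def test_home_page_loads(client):
        # 'client' is the Django test client provided by pytest-django.
        response = client.get(reverse("home"))  # hypothetical URL name
        assert response.status_code == 200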

  2. Lessons Learned Teaching Undergraduate Astronomy with a Video Game - Infrastructure and Deployment

    This is the third installment of the series breaking down my talk from DjangoConUS 2022. The first entry covered background information about the project and the second was about using Django Rest Framework.

    First, some important context: if you are a devops engineer, or have a lot of experience with AWS/GCP/Azure, this post may not be for you. This post is aimed at folks who would prefer to write Django code than deal with the intricacies of deployment.

    With that being said, this is the section of the talk that made me want to write this series of posts. Specifically, I realized that I focused a lot on the infrastructure setup of this project, which I want to outline here. However, I wish I had spent more time on what I think the goal of any successful deployment strategy should be for a Django project, regardless of the infrastructure:

    Repeatability and confidence in deployments.

    There are a lot of ways to get to this point. And on day 1 (or even 100 if it's a side project), you likely won't be there. But starting with good process documentation, and moving that toward making your deployments consistent and repeatable is a massive boost in confidence that increases the time you can spend working on your application rather than its infrastructure. It can also significantly lower your stress when it comes to deploying. Importantly, this is all independent of what services you use to deploy your application!

    At AstroVenture, I chose Amazon Web Services (AWS) for our infrastructure early on in founding the company because I had experience with it, and we received free credits for a limited time. At that time, I manually created an EC2 instance for the server (and RDS for the database) and manually installed all of the packages I needed to run the server. I manually installed the app, and 'deployments' were done by pulling from GitHub and restarting the gunicorn workers. The 'backup strategy' was the in-depth document I wrote with step-by-step instructions for how I did all of that. I tested it by recreating the server a second time for our production environment and using the first as a test/staging server.

    In the event of a catastrophic server issue, I was likely to be down for several hours, if not an entire day. But having that document gave me confidence that it wouldn't be more than that, and that I wouldn't have to stress about missing a step during the recovery. So if you aren't sure how you would bring your servers back in the event of everything going down, I'd highly recommend going through this exercise for whatever service you are using. It can at least alleviate some of the stress of a deployment going wrong.

    From there, I hired a friend who had more infrastructure experience to write the Packer and Terraform scripts we use now, and to help me make a few architecture decisions that allow us to have ~zero downtime deployments and scaling. I was already using a load balancer, but we added an autoscaling group so that we can spin up new instances when we need to.

    The Packer scripts create the server image with the application and all of its dependencies, so if we ever have an issue where we need to redeploy an old stable version, we can do that directly from the image instead of having to recreate it. Luckily, we haven't had to do that yet. We use the Terraform scripts to provision an entirely new server and wait until it is live before swapping it with the previous server (and then terminating that one). There are other tools that handle automating infrastructure and application building that others might prefer, but these have worked well (in combination with some good old fashioned bash scripts) for us.

    We also have end to end tests (more on this in a post coming soon), which I run after every deployment to make sure that the most important parts of the site are functioning correctly.

    What if you don't have a friend with devops experience that can help you, and you don't have that experience yourself?

    There are a number of Platform as a Service (PaaS) offerings from companies like Render and Fly.io that a lot of folks in the Django community are using. I'm hoping to try these in the near future along with Django Simple Deploy. So while I can't give specific recommendations for these platforms, I can tell you that the goal (repeatability and confidence in deployments) is even easier to achieve on these platforms than it is on AWS (or GCP or Azure). They handle a lot of the work that our Packer and Terraform scripts do so that you can focus on your application code. The tradeoff with these services is that they can be a little bit (to a lot, if you scale very large) more expensive than equivalent 'bare metal' servers from AWS, GCP, or Azure. But they can also be cheaper starting out, and the added price can be worth it while you get started.

    No matter what tools you use for hosting and deploying your code, if you are reluctant to deploy because of something in your process you aren't confident about, I strongly recommend you look into ways to address that issue. I found that it was a big relief to stop worrying about deploying once I was able to address the more manual parts of our process.

    Finally, remember, you don't have to do this all at once, and you don't have to be at the point of continuous integration of your code to feel confident with your deployments. Take small steps and work toward the goal of feeling confident deploying. It'll make coding a lot more fun!

  3. Lessons Learned Teaching Undergraduate Astronomy with a Video Game - Django vs Django Rest Framework (DRF)

    If you've ended up here from somewhere outside of this blog, and are looking for an exhaustive comparison of these two libraries, I regret to inform you, this isn't that. If you're here for the next installment of the series breaking down my talk from DjangoConUS 2022, welcome back!

    This section of the talk outlines my project's usage of Django Rest Framework (DRF). For some added context, I wanted to start this project using DRF, but ran into some difficulties because the data coming from the game was largely string based, so it couldn't be neatly serialized into the types we had in the database. As a result, I ended up using DRF, but I wasn't able to use some of the built in features like viewsets and generics.

    If you are new to Django, or have never used DRF, one major reason to consider DRF is if you are interested in returning JSON from your application rather than integrating with the Django frontend. Vanilla Django does a great job with integrating the frontend (templates) and backend code. But if you need to send data to a frontend that's not written in Django, or you are developing an API, DRF has a lot of tools to help.

    With that out of the way, there is one bit of this section of the talk that I'd like to amend. In it, I say that if the data hitting your endpoints requires a lot of changes before it can reach your database (like ours did), you should consider falling back to vanilla Django (versus powering forward with Django Rest Framework).

    What I should have said is that you can still use DRF: if you feel like you are fighting with the viewsets, mixins, or serializers, you can fall back to the basic APIView, which lets you write much more explicit code than the higher level classes built into DRF, which are powerful but less flexible. If you are more comfortable with vanilla Django, feel free to head back that way, but DRF is capable of handling scenarios where your incoming data don't match your models.
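
    For example, an APIView lets you reshape awkward incoming data explicitly before it ever touches a serializer or model. This is only a sketch with made-up names, not our actual endpoint:

    from rest_framework import status
    from rest_framework.response import Response
    from rest_framework.views import APIView

    class GameplayEventView(APIView):  # hypothetical endpoint
        def post(self, request):
            # The game sends everything as strings, so coerce the values
            # explicitly before validating or saving anything.
            try:
                score = int(request.data.get("score", "0"))
                section = str(request.data.get("section", "")).strip().lower()
            except (TypeError, ValueError):
                return Response({"detail": "Bad gameplay data."}, status=status.HTTP_400_BAD_REQUEST)

            # ...validate and save with a serializer or model here...
            return Response({"section": section, "score": score})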

    If you are familiar with DRF, you may be thinking 'isn't that the point of the serializer?' and you would be correct. But handling too much complexity in the serializer can lead to worse performance. Before I was able to rewrite my incoming data, I tried (with some help from a developer more experienced with DRF) to rewrite our endpoints with DRF and ended up with more queries than I had before! Until the incoming data was updated, it simply took too many queries to gather everything and return it in the shape our game was expecting. I'm hoping to write up more about how I determined the number of queries, and how I keep my codebase from increasing that number. If that's interesting to you, let me know!
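
    In the meantime, one common way to keep an eye on query counts in a test is Django's CaptureQueriesContext; here's a minimal sketch (not necessarily how I do it in our codebase, and the endpoint is hypothetical):

    import pytest
    from django.db import connection
    from django.test.utils import CaptureQueriesContext

    @pytest.mark.django_db
    def test_progress_endpoint_query_count(client):
        # Fail loudly if this endpoint starts issuing more queries than expected.
        with CaptureQueriesContext(connection) as ctx:
            client.get("/api/progress/")  # hypothetical endpoint
        assert len(ctx.captured_queries) <= 5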

    This gets us to the core message I want to convey: Don't feel like you have to choose only one way of doing things in a Django project. Or that you can't evolve your codebase in the future.

    Yes, consistency can help readability in a project. But just because some of your endpoints use viewsets doesn't mean that every endpoint needs to use viewsets. Use the right tool for the job. It's ok to mix DRF generic views with viewsets, or DRF viewsets with Django class based views if that better matches what you are trying to accomplish for a given route. And you can always evolve your code to better match the libraries you are using as you learn more about them.
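
    Concretely, mixing styles can be as simple as routing them side by side in your URL conf (a sketch with hypothetical view names):

    from django.urls import include, path
    from rest_framework.routers import DefaultRouter

    from . import views  # hypothetical module containing the views below

    router = DefaultRouter()
    router.register(r"students", views.StudentViewSet)  # a DRF viewset

    urlpatterns = [
        path("api/", include(router.urls)),
        path("api/progress/", views.ProgressList.as_view()),  # a DRF generic view
        path("reports/", views.ReportView.as_view()),  # a plain Django class based view
    ]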

    One piece of advice from this section of the talk I still find myself coming back to is to really focus on understanding serializers and getting to know the inner workings of DRF (by this I mean what it's doing generally behind the scenes to process requests, not necessarily understanding every line of code in the library). After getting over the initial hump of learning about DRF, it can be easy to write some quick views that work. But it can be harder to modify them if you don't understand some of the basics of what DRF is doing under the hood.

    Working with DRF, when I send in data and get an unexpected error, the issue is very often with (what I implemented in) the serializer, whether that's fields that are missing* (or aren't in the model, or are misspelled) or me trying to do something that isn't possible with a specific serializer class (usually one that is more specific than what I need). Occasionally, I'll run into an error with a misconfigured URL, or a view that's using the wrong viewset or generic view, but most often, it has to do with the serializer.

    *In fact, just today, I hit an error in the admin locally because I forgot to update my serializer after updating a model field's name.
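
    As a concrete (hypothetical) version of that mistake: if a model field gets renamed but the serializer still lists the old name, the serializer will raise an error as soon as it tries to build its fields.

    from rest_framework import serializers

    from myapp.models import QuizResult  # hypothetical model whose 'points' field was renamed to 'score'

    class QuizResultSerializer(serializers.ModelSerializer):
        class Meta:
            model = QuizResult
            # 'points' no longer exists on the model, so using this serializer
            # errors out until the field list is updated to 'score'.
            fields = ["student", "points"]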

    So how do we get to know DRF (and serializers) better? If you don't have an app, build one! Either use the DRF Tutorial or some other tutorial to build an app with DRF. There are many good ones out there (Real Python has quite a few). If you have an app already, try using DRF to rebuild endpoints you already have and see how DRF handles them. Or consider adding a few new endpoints. Think about extra data a user may want, or combining data from multiple tables. Anything different from what the endpoints are currently doing will improve your understanding. Even if these changes aren't optimal, you'll see how changes in the data affect the viewsets/generic views and serializers, which will help your understanding a whole lot more than reading another blog post...

    I recently rewrote the majority of the endpoints in my application in DRF, and I love how little code there is in my views.py file now! It's also great that I have fewer queries and less processing to do. A lot of that has to do with the aforementioned update to the incoming data, but it's still great to see.

    If you use Django and haven't used DRF (or haven't used it in a while), I highly recommend trying it. There is no need to commit to rewriting your whole application. It can be helpful to start with the endpoints that best match DRF's built in generics or viewsets and simplify those. And remember that with APIView you can learn as you go and make things a bit more explicit before diving in entirely with generics and viewsets.

  4. AstroVenture and University of Mars

    In this post, I'd like to provide the context for the project I'm currently writing about on this blog. I'm also using this as an opportunity to start a series of blog posts breaking down my recent talk from DjangoConUS 2022 about what I learned related to Django while building this project.

    I've realized since giving the talk there are a few things I'd like to change in/add to it. So instead of a blog post summarizing the talk, I'm opting to revisit the talk and discuss those changes/additions (as well as highlighting the advice I feel holds up). The beginning of the talk covers some of the content of this post. The other sections of the talk will be covered in dedicated posts in the coming weeks.

    In 2018, I founded AstroVenture with my former colleagues from Penn State University. When I worked there from 2010-2014, we built a video game that teaches introductory astronomy to undergraduate students, which we later named 'University of Mars'. We founded the company in order to sell that game to anyone who wants to learn astronomy. If you'd like to check it out, the first quarter of the game is free as a demo at the link above. If you'd like to play more than that, contact me! There is also a video at the site if you just want to see some gameplay to get a better idea of how it works.

    In the nearly 5 years since we founded the company, this has been a side project that I've worked on actively. It is the project that I really dove into and learned Django with. I'd previously had some professional experience with Django, but I had a lot of other responsibilities on that project, so I didn't have time to dig into Django itself as much as I would have liked. The game was built with Unity3D, which I also learned as I built this project.

    The game uses a story to engage students and set up discussions for each topic. The gameplay loop for each lesson is for the game characters to start discussing the topic of the lesson (ranging from gravity to dark energy), then the student explores some type of interactive minigame on the topic, followed by a quiz. Then we usually lead into the next section's discussion and repeat.

    The servers that I manage for the company handle the website, which has the functionality you would expect from a Django website: login, downloading the game, and viewing content related to the game (mostly in the form of our static 'encyclopedia' pages that accompany the game). Those servers also track student progress as they play through the game so that instructors can give credit to students for finishing the different sections of the game. Currently, instructors can download a CSV file with player progress and quiz scores, but I'm working on integrating with Canvas and Blackboard to make grade syncing more direct.

    The backend code for this player tracking was the first API that I ever wrote, originally written in PHP and hosted on a VPS. When I founded the company, I rewrote the API in Django and moved the hosting to AWS. Since my time was limited, I mostly ported the previous API to Django, faults and all. Fortunately, I had some time to dedicate to this recently, and I've just finished updating it to be more efficient and use more of Django Rest Framework's built in features. As for the infrastructure, I chose AWS because I had some experience with it, and I had access to free credits there for a limited time. I'll cover more about the infrastructure in a future post.

    We use Sentry for error handling and Stripe for payment processing, and I highly recommend both.

    The front end of the website is in a separate repo written in TypeScript[1] with a bit of jQuery sprinkled in. It also uses Bootstrap for CSS. I chose not to use Django for the front end because I thought I might have someone else working on it who wouldn't have familiarity with Django. If I were to start again today, I might do it differently and have everything in one repo. I'm not exactly sure what I'd settle on, though I'd investigate HTMX, Alpine.js, and Tailwind for CSS as I've heard good things about those.

    If any of these technology choices are interesting to you, I'm happy to discuss them more. See the contact links on the about page, or at the bottom of this page.

    In the upcoming posts, I'll talk more about the other sections of the talk: infrastructure, testing, and Django vs DRF.

    1 - This is almost worth a blog post in itself. When I chose to use TypeScript, it was because I was frustrated with JavaScript's lack of typing leading to errors (usually with JSON objects) that I couldn't easily debug in larger codebases. I think TypeScript is good, but it was also difficult to find good resources (responses on Stack Overflow etc.) dealing with TypeScript outside of React or Angular, which made getting started a lot more difficult for me.

  5. Using Docker for Debugging

    My current set of tools for deploying my application to production includes Packer and Terraform. I didn't write all of the code for the deployments, but I've been over most of it now.

    When trying to upgrade my server from Ubuntu 20.04 to 22.04, I ran into some problems with conflicting versions of dependencies (related to postgres, mostly). My first instinct was to create an EC2 instance, then walk through each step manually, but I realized I didn't even have to spin up an instance if I could use Docker.

    I'm not much of a Docker user, but I've used it a few times professionally. Mostly other people have done the hard work of creating a docker image, and I've run it for development. So I thought this was a great opportunity to try using it myself.

    I started by pulling the Docker image for the target Ubuntu version, jammy, and spinning up a container from it:

    docker pull ubuntu:jammy
    docker create -it --name jelly ubuntu:jammy
    docker start jelly
    docker attach jelly
    

    Then I ran through the scripts from my Packer build manually:

    echo "deb http://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main"> /etc/apt/sources.list.d/pgdg.list
    apt-get update
    apt-get install -y wget gnupg
    wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
    
    export DEBIAN_FRONTEND=noninteractive
    apt update && apt upgrade -y && apt auto-remove -y && apt update
    
    apt-get install -y unzip libpq-dev postgresql-client-14 nginx curl awscli logrotate
    

    This is where I got the errors about conflicting versions or versions not being available. Did you catch the problem?

    echo "deb http://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main"> /etc/apt/sources.list.d/pgdg.list
    

    The bionic in there refers to the version of Ubuntu that we are requesting the dependencies for. It should be jammy!
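
    The corrected line:

    echo "deb http://apt.postgresql.org/pub/repos/apt/ jammy-pgdg main"> /etc/apt/sources.list.d/pgdg.list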

    This was a great example where Docker saved me some time (and a few pennies) by letting me skip spinning up a cloud instance. And there really wasn't much to learn about Docker itself to dive in. So my advice to you is to give Docker a try for a simple use case like this. Despite being used in massive (sometimes very complicated) build chains, Docker is a relatively simple technology to get started with if you already have some familiarity with the Linux version you are planning to use in your container.

    Maybe someday I'll start using it for building a small project...

  6. Code Formatting Configs in Django

    There are a few code formatting tools I like to use in just about any Python (and Django) project: black, isort, and flake8. These all serve slightly different purposes, and there are alternatives to each. A full discussion/comparison of code formatting tools in Python is beyond the scope of this post*, but in brief:

    • Black is great because it automatically formats my code with a well recognized style. I have my editor (currently VSCode) set to run black when I save a file. This makes my code consistent, and is great in teams I've worked in because it means we don't have to worry about styling in code reviews.
    • isort is great for sorting imports. Even though I tend to do a bit of this sorting as I go, I inevitably swap an import or leave one out of place. This organizes them so I don't have to.
    • flake8 catches minor (and sometimes not so minor) code issues like unused variables.

    There are a few ways to configure these libraries, but for now, I have them in their own individual config files. These should all be placed at the root of your Django project.

    For black, I use the default config. In some other projects, I've extended the line length, but currently I'm leaving black as default.

    For isort, .isort.cfg is the filename, and mine looks like this:

    [settings]
    skip_glob=env/*
    extend_skip_glob=**/migrations
    profile=black
    

    Where env is the name of your environment. The extended skip for migrations isn't necessary, but I wanted to avoid changing my migration files. The black profile is nice to avoid conflicts between black and isort. Skipping your environment is likely required. When I've left this out (or accidentally run isort without the config), isort runs against the libraries in my environment and ends up breaking them by creating circular dependencies. One way to make sure you aren't about to do this is to run isort --check-only . first to confirm it's only touching your files before actually running it.

    For flake8, .flake8 is the filename, and mine looks like this:

    [flake8]
    exclude =
        migrations
        __pycache__
        manage.py
        settings.py
        env
        .pytest_cache
        .vscode
    

    This excludes everything in the migrations, __pycache__, .pytest_cache, .vscode, and env folders (where env is the name of your environment), along with manage.py and settings.py. You may want to exclude more, or not exclude some of these, but I've found this works for me.

    Finally, on other projects I've used all of these with pre-commit, which is a library that uses git hooks to prevent you from committing code that doesn't use the standards you and your team have set, such as black, isort, and flake8, or some other combination of code formatting tools. It also allows you to use a single command to run all three tools.
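
    For reference, a minimal .pre-commit-config.yaml wiring up these three tools looks roughly like this (double check the hook repos and pin the rev values to current releases before using it):

    repos:
      - repo: https://github.com/psf/black
        rev: 23.1.0  # pin to a current release
        hooks:
          - id: black
      - repo: https://github.com/pycqa/isort
        rev: 5.12.0
        hooks:
          - id: isort
      - repo: https://github.com/pycqa/flake8
        rev: 6.0.0
        hooks:
          - id: flake8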

    Unfortunately, I can't figure out how to use pre-commit in my current setup with my Python projects in WSL and my git client (Fork) in Windows. Perhaps this is a sign I should start using the git CLI more. But if you know of a good way to use pre-commit with this setup, let me know!

    *between writing and publishing this I read James Bennett covering similar ground. He goes a bit deeper on some of the tools I mention (and other related tools), so I wanted to link this for further reading.

  7. Python Reference Talks

    While trying to get more familiar with Django, I started watching talks from DjangoCon from the last few years. I can’t seem to find the talk, but one of them had a list of great Python/Django talks, which inspired me to create my own list (with some definite overlap).

    I have found that revisiting talks like these makes me reconsider some design problems that I have recently worked through, so I want to keep a list and rewatch these periodically. I will likely add to this list in the future.

    These two go together (from DjangoCon 2015):

    • Jane Austen on PEP 8 - Lacey Williams Henschel
    • The Other Hard Problem: Lessons and Advice on Naming Things by Jacob Burch

  8. Setting up Conda, Django, and PostgreSQL in Windows Subsystem for Linux

    Because I feel much more comfortable in a terminal than on the Windows command line (or in PowerShell), I’ve really been enjoying Windows Subsystem for Linux (WSL). In fact, I use it almost exclusively for accessing the server I run this blog from. WSL is essentially a Linux VM in Windows that provides only a terminal shell (no GUI access to Linux) without the lag you get in most VMs.

    When I created my Grocery List Flask App, I began by using WSL. However, I ran into an issue that prevented me from seeing a locally hosted version of the API in Windows, so I switched to the Windows command line for that app.

    Recently, I’ve been developing a Django application (more on that in a future post), and I ran into a similar issue. Between posting the issue with localhost on WSL and starting this new app, someone had posted a response that I had been meaning to check out. I found that for Django and PostgreSQL, making sure everything was running from localhost (or 0.0.0.0) instead of 127.0.0.x seemed to fix any issues I had. PSQL gave me some issues just running within WSL, but I found that I just needed to add '-h localhost' to get it to run.

    Below are the commands I used to get Conda, Django, and PSQL all set up on my PC and then again on my laptop. This works for Django 2.0, PSQL 9.5, and Conda 4.5.9.

    Installation Instructions

    Edit: I originally had installation instructions in here for PSQL 9.5. If you want 9.5 in Ubuntu, good news! You already have it. To install the newest version of PSQL, you should uninstall that version first, then install the new version from here.

    Install Conda (need to restart after installing for it to recognize ‘conda’ commands)

    #create environment
    conda create --name NameOfEnvironment
    #activate environment
    source/conda activate NameOfEnvironment
    #install Django
    conda install -c anaconda django
    #install psycopg2, to interface with PSQL
    conda install -c anaconda psycopg2
    
    If you get permission denied, or it hangs, just rerun the install command that failed. Not sure why, but that fixed things for me.
    
    #Remove PSQL from Ubuntu:
    sudo apt-get --purge remove postgresql\*
    #Then run this to make sure you didn’t miss any:
    dpkg -l | grep postgres
    
    #Install PSQL 10 using instructions here: https://www.postgresql.org/download/linux/ubuntu/ I have copied them here for convenience, but please double check that they have not changed
    #Create the file /etc/apt/sources.list.d/pgdg.list and add the following line:
    #deb http://apt.postgresql.org/pub/repos/apt/ xenial-pgdg main
    
    #Then execute the following three commands
    wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
    sudo apt-get update
    sudo apt-get install postgresql-10
    
    sudo service postgresql start    
    sudo -i -u postgres -h localhost
    createuser --interactive
           psql user: local_user
           y to superuser
    
    #create the database
    createdb local_db
    #log into local_db
    psql -d local_db
    
    #privileges for Django to modify tables.
    GRANT ALL PRIVILEGES ON DATABASE local_db TO local_user;
    
    ALTER USER local_user WITH PASSWORD 'password';
    
    '\q' to quit interactive console.
    'exit' to leave postgres as postgres user.
    
    #one line command to log in as the user to check tables during development.
    psql -h localhost -d local_db -U local_user
    
    python manage.py makemigrations
    python manage.py migrate
    
    Now log back in to PSQL using the line above, then enter '\dt' and you should see tables like django_admin_log, django_content_type, django_migrations, and django_sessions. Your PSQL DB is now connected to your Django app!
    
    #optional for now, but allows you to ensure db connection works by storing credentials for the superuser you create.
    python manage.py createsuperuser
    
    #command to run the server. go to localhost:8000 in your web browser to view!
    python manage.py runserver 0.0.0.0:8000
    

    I used this post for reference
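
    One piece not shown above is the matching DATABASES entry in settings.py. For the names used in these commands, it looks roughly like this:

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': 'local_db',
            'USER': 'local_user',
            'PASSWORD': 'password',
            'HOST': 'localhost',
            'PORT': '5432',
        }
    }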

  9. Grocery List App and Flask Deployment Issues

    In addition to starting this blog, I wanted to build some small projects to get some experience with technologies I am not currently using at work. Since I’m currently using Django at work, I decided to create a small Grocery List application using Flask and DynamoDB. You can find the repo and installation info here.

    I ran into two major issues when trying to create the application:

    First, I tried to set things up without a virtual environment for Python, which caused errors with libraries not pointing to the correct locations. I thought that since this was the only application I’d have on the server, a virtual environment wouldn’t be as important. I realize now there is a big upside to separating your application's Python install from your system Python install. I highly recommend setting up a virtual environment for your Python app, whether it be virtualenv, conda, or something else, even if you intend it to be the only app on the system.

    Second, I didn’t understand how to set up virtual hosts with Apache when I started this project. Getting this blog, the front page of the grocery list app, and the Flask API all routed correctly and running simultaneously took me a few hours to figure out. Two things that seemed to be required to get this all running (otherwise, I was getting the top root folder serving up on all subdomains):

    'NameVirtualHost *:80' at the top of the file and the port number (80) in the '<VirtualHost *:80>' line for each of the host definitions. See link below for that line in context.

    As I was working through the issues, I created this thread for help on reddit: https://www.reddit.com/r/flask/comments/7fbfs1/af_apache_deployment_questions/

    I almost posted to serverfault.com, but I ended up figuring things out as I was creating a post.

    References:

    • http://peatiscoding.me/geek-stuff/mod_wsgi-apache-virtualenv/
    • https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-xvii-deployment-on-linux-even-on-the-raspberry-pi