Programming My Life - Andrew Mshar
  1. Lessons Learned Teaching Undergraduate Astronomy with a Video Game - Testing

    This is the fourth and final installment of the series breaking down my talk from DjangoConUS 2022. The first entry covered background information about the project, the second was about using Django Rest Framework, and the third was about infrastructure and deployment.

    Before diving in, I'd like to emphasize that my testing philosophy isn't the end state for a project. It's a guiding principle for getting started and staying motivated with testing. If you have an established application with a large team, you likely already have rules and processes to follow for testing. What I want to discuss here is how I think about testing as I'm building a project. This isn't limited to small/side projects, but it's much more important earlier in a project's lifecycle.

    So what is my philosophy? Test enough to give you confidence to change and deploy your code.

    For me, this does not include test driven development (TDD) or a specific coverage number. Why not test driven development? I've tried it, and it doesn't match my preferred way to work on most projects, which is to develop my code and then write tests. If you find TDD works better for you, definitely do that! I do some development by writing tests first, but I generally write at least some code before I write my tests. And while I do strive for a high coverage percentage, I use that metric more as a guide to see where I might need more tests than as a bar to meet. That is, I'm more interested in coverage of a given file than I am in an overall number.

    Ok, so why this philosophy? For me, I find that without tests (either at all, or with only limited tests for a specific section of the code), I'm much less confident about changing code. This means that I take a lot longer to write or change code in untested sections of the codebase. Usually, I have to take extra time to think through edge cases, since I can't be confident those were covered previously, and I don't know what they are because they aren't written into any tests.

    There are several terms specific to testing that I'd like to explicitly define since some folks use different definitions and terminology for different classes of tests. I want to make sure I'm clear about what I mean when I'm using these terms.

    First, unit tests test a 'unit' of code, generally meaning as little code as possible (oftentimes a single function) so that you can be sure you are testing each section without depending on other sections, which can introduce complexity into tests.

    Integration tests bring together (integrate) two or more 'units' to test their functionality when combined.

    End to end tests cover the full functionality of some part of the application. For example, testing a student sending gameplay data to an endpoint and receiving a response, or someone purchasing our game.

    Our test suite has a large number of unit tests, almost no integration tests, and a few end to end tests. I'll explain my reasoning for each of these.

    First, unit tests. I really like having a good suite of unit tests for two reasons:

    1. Almost every time I write some code, then write unit tests, I find that I refactor the code as I'm writing the tests. Sometimes this is just a little bit of cleanup. Other times, it's a bigger rewrite. But almost every time, writing unit tests gets me to think about my code a little bit differently, and I uncover something that I want to improve.
    2. When I'm making a change to some code I haven't touched in a while, I know that my unit tests will tell me if I broke something. This gives me more confidence to dive in than if I didn't have them.

    Unit tests take some time and effort to write and maintain, but I'll take that overhead on any project for the confidence that they give. One other great use of unit tests is covering a bugfix. Sometimes, I'll find a bug, write a fix, then write a test (or two) that covers the case for that bug so we can avoid it in the future.

    Why no integration tests?

    I think the functionality is covered better by end to end tests for this project. For other projects I've worked on, I've had a much larger suite of integration tests and fewer end to end tests. This is highly dependent on what your application does. Generally, for larger projects, you'll want more integration tests to ensure parts of your code work together without having to run a larger suite of end to end tests for every change.

    End to end tests can take a long time to write, a long time to run (relative to unit tests), and are more difficult to maintain. That's why I recommend these only cover the most important parts of your code. Even though these were the most difficult to write and maintain, they give me the most confidence when deploying my code. I know that something can still be wrong if these pass, but I at least know the most important parts of the site are mostly working. I wish I had written these earlier in the project, since my first few deployments were much more stressful without them and required some manual testing.

    For our end to end tests, I use Selenium. I've heard a lot of good things about Playwright, and I'm hoping to have some time to look into it, but I haven't investigated it enough yet to recommend it myself. I also use pytest instead of the built in testing system in Django. I don't have a strong opinion about pytest versus the test runner in Django, but I've used pytest a good bit professionally (with and without Django), so I find it easier to get started with. I also like the features it has for running tests (like the --lf flag for running just the last failed tests, -x for stopping at the first failure, etc.) as well as fixtures and parametrized inputs. I'd recommend you give pytest a try if you haven't already. You don't need to worry about the more advanced features when starting with it, and you can build up your knowledge as you go.
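
    To make the fixture and parametrize features concrete, here's a minimal sketch (the function under test and its values are invented for illustration, not taken from the real project):

        # test_grading.py -- minimal pytest sketch; names here are hypothetical
        import pytest


        def passing_grade(score: int) -> bool:
            """Toy function under test: a quiz score of 70 or above passes."""
            return score >= 70


        @pytest.fixture
        def passing_score():
            """A fixture: pytest injects this value into any test that asks for it by name."""
            return 85


        def test_passing_score_passes(passing_score):
            assert passing_grade(passing_score)


        @pytest.mark.parametrize("score,expected", [(0, False), (69, False), (70, True), (100, True)])
        def test_passing_grade_boundaries(score, expected):
            # One test function, four generated test cases.
            assert passing_grade(score) is expected

    With a file like this, pytest --lf re-runs only the last failures, and pytest -x stops at the first failure.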

    Speaking of fixtures, I tend to create a factory for each data type that I need (when I need it for a test) with Factory Boy and make that a fixture if I need to use it in multiple tests. I'll move those fixtures out to a conftest.py file (a file specific to pytest) if I find a need for them across multiple test files. That way, I'm not populating conftest with a large number of fixtures that aren't used or are only used in one or two places, making it easier to read. If you'd like me to write more about how I use fixtures, let me know!
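
    As a rough sketch of that pattern (the app, model, and field names are hypothetical, not the project's real ones), a factory wrapped in a fixture in conftest.py might look like this:

        # conftest.py -- hypothetical sketch: a Factory Boy factory exposed as a pytest fixture
        import factory
        import pytest

        from myapp.models import Student  # hypothetical app and model


        class StudentFactory(factory.django.DjangoModelFactory):
            class Meta:
                model = Student

            name = factory.Sequence(lambda n: f"student-{n}")
            email = factory.Sequence(lambda n: f"student-{n}@example.com")


        @pytest.fixture
        def student(db):
            """A saved Student; `db` is pytest-django's database-access fixture."""
            return StudentFactory()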

    Some key advice: write tests early!

    This doesn't mean you need 100% coverage on day 1. Or even 10%. But having a test, any test, makes it much easier to write the next one. So start with something as simple as making sure your pages load, or, if your application is an API, that your endpoints return the correct basic response to a good request. Then, as you are building out your application, keep asking yourself: which sections of the code worry me the most to change, and what do I worry about when we deploy? Write tests in those areas to alleviate your stress. Also, try to get some tests covering your most important code in place as early as you can. In the long run, it'll speed up your development and increase your confidence in deploying your code.
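
    For example, a first test really can be this small (a sketch assuming pytest-django, with 'home' standing in for one of your own URL names):

        # test_smoke.py -- a minimal "does the page load" check; 'home' is a placeholder URL name
        import pytest
        from django.urls import reverse


        @pytest.mark.django_db
        def test_home_page_loads(client):
            response = client.get(reverse("home"))
            assert response.status_code == 200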

  2. Update for Notarizing Mac Unity Apps

    I recently received an email from Apple stating that they are migrating users from altool to notarytool to notarize applications. Apple says altool will no longer work for notarization starting November 1, 2023, but that applications notarized before then will still behave as normal. I previously wrote about notarizing a Unity app on MacOS, so I wanted to add to that post since the linked scripts will be out of date by the end of the year.

    What I'm about to write about is covered by Apple here, but I ended up submitting (and waiting and submitting and waiting) several times before figuring out the proper incantations, so I'm gathering the information here.

    First, in the script for notarizing your application, you'll need to change --username and --asc-provider to --apple-id and --team-id respectively. Then you'll need to remove --primary-bundle-id (and the value associated with that flag) and --file. For that last one, remove ONLY --file and leave the location of the file you want to notarize. Finally, swap altool --notarize-app for notarytool submit and add --wait (the wait flag is optional, but I'll explain it below).

    So you should go from this in the old script:

        xcrun altool --notarize-app --username "$USERNAME" --password "$PASSWORD" --asc-provider "$TEAM_ID" --primary-bundle-id "$BUNDLE_ID" --file "$SIGNING_FOLDER/$APP_NAME.zip"

    to this:

        xcrun notarytool submit --wait --apple-id "$USERNAME" --password "$PASSWORD" --team-id "$TEAM_ID" "$SIGNING_FOLDER/$APP_NAME.zip"

    The new tool shows nice progress text in the terminal for the upload status that looks like XX% (YY MB of ZZ MB). The --wait flag is nice because it means you can skip the second script (from the link in the previous blog post) for checking the status of the file after you've uploaded it. If you use the --wait flag, after the file finishes uploading, you'll see 'In progress ........' while it waits to be notarized. Much nicer than a hanging terminal session with no info!

  3. Lessons Learned Teaching Undergraduate Astronomy with a Video Game - Infrastructure and Deployment

    This is the third installment of the series breaking down my talk from DjangoConUS 2022. The first entry covered background information about the project and the second was about using Django Rest Framework.

    First, some important context: if you are a devops engineer, or have a lot of experience with AWS/GCP/Azure, this post may not be for you. This post is aimed at folks who would prefer to write Django code than deal with the intricacies of deployment.

    With that being said, this is the section of the talk that made me want to write this series of posts. Specifically, I realized that I focused a lot on the infrastructure setup of this project, which I want to outline here. However, I wish I had spent more time focusing on what I think the goal of any successful deployment strategy should be for a Django project, regardless of the infrastructure:

    Repeatability and confidence in deployments.

    There are a lot of ways to get to this point. And on day 1 (or even 100 if it's a side project), you likely won't be there. But starting with good process documentation, and moving that toward making your deployments consistent and repeatable is a massive boost in confidence that increases the time you can spend working on your application rather than its infrastructure. It can also significantly lower your stress when it comes to deploying. Importantly, this is all independent of what services you use to deploy your application!

    At AstroVenture, I chose Amazon Web Services (AWS) for our infrastructure early on in founding the company because I had experience with it, and we received free credits for a limited time. At that time, I manually created an EC2 instance for the server (and RDS for the database) and manually installed all of the packages I needed to run the server. I manually installed the app, and 'deployments' were done by pulling from GitHub and restarting the gunicorn workers. The 'backup strategy' was the in-depth document I wrote with step by step instructions about how I did all of that. I tested it by recreating the server a second time for our production environment and using the first as a test/staging server.

    In the event of a catastrophic server issue, I was likely to be down for several hours, if not an entire day. But having that document gave me the confidence that it wouldn't be more than that, and that I wouldn't have to stress that I would miss a step during that recovery. So if you aren't sure how you would bring your servers back in the event of everything going down, I'd highly recommend going through this exercise for whatever service you are using. It can at least alleviate some of the stress of a deployment going wrong.

    From there, I hired a friend who had more infrastructure experience to write the Packer and Terraform scripts we use now, and to help me make a few architecture decisions that allow us to have ~zero downtime deployments and scaling. I was already using a load balancer, but we added an autoscaling group so that we can spin up new instances if we need.

    The Packer scripts create the server image with the application and all of its dependencies, so if we ever have an issue where we need to redeploy an old stable version, we can do that directly from the image instead of having to recreate it. Luckily, we haven't had to do that yet. We use the Terraform scripts to provision an entirely new server and wait until it is live before swapping it with the previous server (and then terminating that one). There are other tools that handle automating infrastructure and application building that others might prefer, but these have worked well (in combination with some good old fashioned bash scripts) for us.

    We also have end to end tests (more on this in a post coming soon), which I run after every deployment to make sure that the most important parts of the site are functioning correctly.

    What if you don't have a friend with devops experience that can help you, and you don't have that experience yourself?

    There are a number of Platform as a Service (PaaS) offerings from companies like Render and Fly.io that a lot of folks in the Django community are using. I'm hoping to try these in the near future along with Django Simple Deploy. So while I can't give specific recommendations for these platforms, I can tell you that the goal of repeatability and confidence in deployments is even easier to achieve on these platforms than it is on AWS (or GCP or Azure). They handle a lot of the work that our Packer and Terraform scripts do so that you can focus on your application code. The tradeoff with these services is that they can be a little bit (to a lot, if you scale very large) more expensive than equivalent 'bare metal' servers from AWS, GCP, or Azure. But they can also be cheaper starting out, and the added price can be worth it while you are getting your project off the ground.

    No matter what tools you use for hosting and deploying your code, if you are reluctant to deploy because of something in your process you aren't confident about, I strongly recommend you look into ways to address that issue. I found that it was a big relief to stop worrying about deploying once I was able to address the more manual parts of our process.

    Finally, remember, you don't have to do this all at once, and you don't have to be at the point of continuous integration of your code to feel confident with your deployments. Take small steps and work toward the goal of feeling confident deploying. It'll make coding a lot more fun!

  4. Lessons Learned Teaching Undergraduate Astronomy with a Video Game - Django vs Django Rest Framework (DRF)

    If you've ended up here from somewhere outside of this blog, and are looking for an exhaustive comparison of these two libraries, I regret to inform you, this isn't that. If you're here for the next installment of the series breaking down my talk from DjangoConUS 2022, welcome back!

    This section of the talk outlines my project's usage of Django Rest Framework (DRF). For some added context, I wanted to start this project using DRF, but ran into some difficulties because the data coming from the game was largely string-based, so it couldn't be neatly serialized into the types we had in the database. As a result, I ended up using DRF, but I wasn't able to use some of the built-in features like viewsets and generics.

    If you are new to Django, or have never used DRF, one major reason to consider DRF is if you are interested in returning JSON from your application rather than integrating with the Django frontend. Vanilla Django does a great job with integrating the frontend (templates) and backend code. But if you need to send data to a frontend that's not written in Django, or you are developing an API, DRF has a lot of tools to help.

    With that out of the way, there is one bit of this section of the talk that I'd like to amend. In that part, I say that if your data requires a lot of transformation to get from hitting your endpoints to your database (like ours did), you should consider falling back to vanilla Django (versus powering forward with Django Rest Framework).

    What I should have said is that you can still use DRF, but if you feel like you are fighting with the viewsets or mixins or serializers, you can fall back to the basic APIView, which allows for much more explicit code than the higher-level classes built into DRF, which are powerful but less flexible. If you are more comfortable with vanilla Django, feel free to head back that way, but DRF is capable of handling scenarios where your incoming data don't match your models.
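
    As a rough illustration of what I mean (the endpoint, model, and field names below are invented for the example, not the project's real ones), an APIView lets you handle the incoming payload by hand before it ever touches a model:

        # views.py -- hypothetical sketch: falling back to APIView when the payload doesn't map cleanly to models
        from rest_framework import status
        from rest_framework.response import Response
        from rest_framework.views import APIView

        from myapp.models import ProgressRecord  # hypothetical model


        class GameplayProgressView(APIView):
            def post(self, request):
                # The incoming data is mostly strings, so convert and validate it by hand
                # instead of relying on a ModelSerializer to do it.
                try:
                    score = int(request.data.get("score", ""))
                except ValueError:
                    return Response({"error": "score must be an integer"},
                                    status=status.HTTP_400_BAD_REQUEST)

                record = ProgressRecord.objects.create(
                    player_id=request.data.get("player_id"),
                    score=score,
                )
                return Response({"id": record.id}, status=status.HTTP_201_CREATED)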

    If you are familiar with DRF, you may be thinking 'isn't that the point of the serializer?' and you would be correct. But handling too much complexity in the serializer can lead to worse performance. Before I was able to rewrite my incoming data, I tried (with some help from a developer more experienced with DRF) to rewrite our endpoints to use DRF and ended up with more queries than I had before! I needed to update the incoming data before I could improve performance; otherwise, it took too many queries to gather everything and return it in the format our game was expecting. I'm hoping to write up more about how I determined the number of queries, and how I keep my codebase from increasing that number. If that's interesting to you, let me know!
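
    One lightweight way to keep that number from creeping back up (a sketch assuming pytest-django, whose django_assert_num_queries fixture wraps Django's query-count assertion; the URL name and expected count are placeholders):

        # test_queries.py -- guard against query-count regressions
        import pytest
        from django.urls import reverse


        @pytest.mark.django_db
        def test_progress_endpoint_query_count(client, django_assert_num_queries):
            # Fails if the view runs more queries than expected,
            # so accidental N+1 regressions show up in the test suite.
            with django_assert_num_queries(3):
                client.get(reverse("progress-list"))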

    This gets us to the core message I want to convey: Don't feel like you have to choose only one way of doing things in a Django project. Or that you can't evolve your codebase in the future.

    Yes, consistency can help readability in a project. But just because some of your endpoints use viewsets doesn't mean that every endpoint needs to use viewsets. Use the right tool for the job. It's ok to mix DRF generic views with viewsets, or DRF viewsets with Django class based views if that better matches what you are trying to accomplish for a given route. And you can always evolve your code to better match the libraries you are using as you learn more about them.
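
    In practice, mixing styles can be as simple as the URLconf below (a sketch with made-up view and route names): a router serves the viewset-backed endpoints while a plain path points at a hand-written APIView.

        # urls.py -- hypothetical sketch mixing a DRF viewset with a plain APIView route
        from django.urls import include, path
        from rest_framework.routers import DefaultRouter

        from myapp.views import CourseViewSet, GameplayProgressView  # hypothetical views

        router = DefaultRouter()
        router.register("courses", CourseViewSet, basename="course")  # standard viewset-backed CRUD routes

        urlpatterns = [
            path("api/", include(router.urls)),
            path("api/progress/", GameplayProgressView.as_view()),  # hand-rolled APIView alongside the router
        ]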

    One piece of advice from this section of the talk I still find myself coming back to is to really focus on understanding serializers and getting to know the inner workings of DRF (by this I mean what it's doing generally behind the scenes to process requests, not necessarily understanding every line of code in the library). After getting over the initial hump of learning about DRF, it can be easy to write some quick views that work. But it can be harder to modify them if you don't understand some of the basics of what DRF is doing under the hood.
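
    For instance, a minimal ModelSerializer looks like this (the model and field names are placeholders); every field named here has to line up with the model, which is exactly where the errors described below tend to come from:

        # serializers.py -- hypothetical minimal ModelSerializer
        from rest_framework import serializers

        from myapp.models import ProgressRecord  # hypothetical model


        class ProgressRecordSerializer(serializers.ModelSerializer):
            class Meta:
                model = ProgressRecord
                # Each name here must match a model field (or a declared serializer field);
                # a renamed or misspelled field shows up as an error at request time.
                fields = ["id", "player_id", "score", "created_at"]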

    Working with DRF, when I send in data and get an unexpected error, the issue is very often with (what I implemented in) the serializer, whether that's fields that are missing* (or aren't in the model, or are misspelled) or my trying to do something that isn't possible with a specific serializer class (usually one that is more specialized than what I need). Occasionally, I'll run into an error with a misconfigured URL, or a view that's using the wrong viewset or generic view, but most often, it has to do with the serializer.

    *In fact, just today, I hit an error in the admin locally because I forgot to update my serializer after updating a model field's name.

    So how do we get to know DRF (and serializers) better? If you don't have an app, build one! Either use the DRF Tutorial or some other tutorial to build an app with DRF. There are many good ones out there (Real Python has quite a few). If you have an app already, try using DRF to rebuild endpoints you already have and see how DRF handles them. Or consider adding a few new endpoints. Think about extra data a user may want, or combining data from multiple tables. Anything different from what the endpoints are currently doing will improve your understanding. Even if these changes aren't optimal, you'll see how changes in the data affect the viewsets/generic views and serializers, which will help your understanding a whole lot more than reading another blog post...

    I recently rewrote the majority of the endpoints in my application in DRF, and I love how little code there is in my views.py file now! It's also great that I have fewer queries and less processing to do. A lot of that has to do with the aforementioned update to the incoming data, but it's still great to see.

    If you use Django and haven't used DRF (or haven't used it in a while), I highly recommend trying it. There is no need to commit to rewriting your whole application; it can be helpful to start with the endpoints that best match DRF's built-in generics or viewsets. And remember that with APIView you can learn as you go and make things a bit more explicit before diving in entirely with generics and viewsets.

  5. AstroVenture and University of Mars

    In this post, I'd like to provide the context for the project I'm currently writing about on this blog. I'm also using this as an opportunity to start a series of blog posts breaking down my recent talk from DjangoConUS 2022 about what I learned related to Django while building this project.

    I've realized since giving the talk there are a few things I'd like to change in/add to it. So instead of a blog post summarizing the talk, I'm opting to revisit the talk and discuss those changes/additions (as well as highlighting the advice I feel holds up). The beginning of the talk covers some of the content of this post. The other sections of the talk will be covered in dedicated posts in the coming weeks.

    In 2018, I founded AstroVenture with my former colleagues from Penn State University. When I worked there from 2010-2014, we built a video game that teaches introductory astronomy to undergraduate students; we later named it 'University of Mars'. We founded the company in order to sell that game to anyone who wants to learn astronomy. If you'd like to check it out, the first quarter of the game is free as a demo at the link above. If you'd like to play more than that, contact me! There is also a video at the site if you just want to see some gameplay to get a better idea of how it works.

    In the nearly 5 years since we founded the company, this has been a side project that I've worked on actively. It is the project that I really dove into and learned Django with. I'd previously had some professional experience with Django, but I had a lot of other responsibilities on that project, so I didn't have time to dig into Django itself as much as I would have liked. The game was built with Unity3D, which I also learned as I built this project.

    The game uses a story to engage students and set up discussions for each topic. The gameplay loop for each lesson starts with the game characters discussing the topic of the lesson (ranging from gravity to dark energy); then the student explores some type of interactive minigame on the topic, followed by a quiz. From there, we usually lead into the next section's discussion and repeat.

    The servers that I manage for the company handle the website, which has the functionality you would expect from a Django website: logging in, downloading the game, and viewing content related to the game (mostly in the form of our static 'encyclopedia' pages that accompany the game). Additionally, those servers track student progress as they play through the game so that instructors can give credit to students for finishing the different sections of the game. Currently, instructors can download a CSV file with player progress and quiz scores, but I'm working on integrating with Canvas and Blackboard to make grade syncing more direct.

    The backend code for this player tracking was the first API that I ever wrote; it was originally written in PHP and hosted on a VPS. When I founded the company, I rewrote the API in Django and moved the hosting to AWS. Since my time was limited, I mostly ported the previous API to Django, faults and all. Fortunately, I had some time to dedicate to this recently, and I've just finished updating it to be more efficient and to use more of Django Rest Framework's built-in features. As for the infrastructure, I chose AWS because I had some experience with it, and I had access to free credits there for a limited time. I'll cover more about the infrastructure in a future post.

    We use Sentry for error handling and Stripe for payment processing, and I highly recommend both.

    The front end of the website is in a separate repo written in TypeScript[1] with a bit of jQuery sprinkled in. It also uses Bootstrap for CSS. I chose not to use Django for the front end because I thought I might have someone else working on it who wouldn't have familiarity with Django. If I were to start again today, I might do it differently and have everything in one repo. I'm not exactly sure what I'd settle on, though I'd investigate HTMX, Alpine.js, and Tailwind for CSS, as I've heard good things about those.

    If any of these technology choices are interesting to you, I'm happy to discuss them more. See the contact links on the about page, or at the bottom of this page.

    In the upcoming posts, I'll talk more about the other sections of the talk: infrastructure, testing, and Django vs DRF.

    1 - This is almost worth a blog post in itself. When I chose to use TypeScript, it was because I was frustrated with JavaScript's lack of typing leading to errors (usually with JSON objects) that I couldn't easily debug in larger codebases. I think TypeScript is good, but it was also difficult to find good resources (responses on Stack Overflow, etc.) dealing with TypeScript outside of React or Angular, which made getting started a lot more difficult for me.

  6. [TIL] Django Admin Permissions, etc.

    So far in my main project (more on that to come), the only users of the admin have been myself and my cofounder, so I've made us both superusers. Recently, I had a request to allow an instructor to view and update data from their own courses. This would require me to give them admin access, but with a highly limited view. The good news is that there is a lot of information on how to limit admin access in Django! The bad news is that a lot of it applies to entire models or other use cases that were much less specific than mine. Here are the three major things I learned:

    First, I'm not sure if this is the best or only way to do this, but I created a group (I could not get this to work as permissions for only one user) and limited the permissions to just a few models (and only the required permissions). I'd be interested to know if there is a way to do this per individual user, but I think I'll need the group settings sooner rather than later anyway.
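
    For reference, the same group setup can be scripted instead of clicked through in the admin (a sketch; the group name, model, and permission codenames are placeholders for whatever your models need):

        # A hypothetical one-off snippet (e.g. run from `manage.py shell` or a data migration)
        # that creates a limited admin group; adjust the model and codenames to your app.
        from django.contrib.auth.models import Group, Permission
        from django.contrib.contenttypes.models import ContentType

        from myapp.models import Course  # hypothetical model

        group, _ = Group.objects.get_or_create(name="instructors")

        course_type = ContentType.objects.get_for_model(Course)
        group.permissions.add(*Permission.objects.filter(
            content_type=course_type,
            codename__in=["view_course", "change_course"],  # only the permissions they need
        ))

        # Then add the instructor's account to the group (they also need is_staff=True to log in):
        # instructor.groups.add(group)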

    Second, for the models they are able to use, I needed to limit what they could see to only their own students. I was able to do that by adding this to the relevant ModelAdmin classes in admin.py:

        def get_queryset(self, request):
            qs = super().get_queryset(request)

            if request.user.is_superuser:
                # Superusers still see everything.
                return qs
            else:
                # Everyone else only sees rows for their own institution.
                # `their_institution` is a placeholder; derive it from request.user
                # (or however your models tie users to institutions).
                return qs.filter(institution=their_institution)
    

    This returns all data for superusers, but only returns whatever you filter on to other users. In my case, I only have two levels of admin user, so this simple filter works well. You'll have to fill in the filter depending on your model.

    Third, and this was the most difficult to track down: I needed to make sure they couldn't see data outside of their scope when adding new students to their course. For the users model, I was able to add a filter to the user admin like the one above. But for the Course model, I would still see all courses in the dropdown even when using get_queryset on the CourseAdmin as above. (As best I can tell, that's because get_queryset only filters a model's own admin pages; the choices in a foreign key dropdown on another model's form don't go through it.) To fix this, I had to use:

        def formfield_for_foreignkey(self, db_field, request, **kwargs):
            if not request.user.is_superuser and db_field.name == "course":
                # Limit the dropdown to courses at the requesting user's institution.
                # `institution` is a placeholder; derive it from request.user to match your models.
                kwargs["queryset"] = Course.objects.filter(institution=institution)
            return super().formfield_for_foreignkey(db_field, request, **kwargs)
    

    This overrides the queryset used for the foreign key dropdown on the 'Add' page, and it was exactly what I needed!

  7. Swapping burrs in my Oxo Coffee Grinder

    And now for something completely different...

    You may have noticed my favicon is a mug of coffee. I'm a huge coffee fan, and over the last few years I've gone further down the rabbit hole with third wave/specialty coffee.

    I recently bought a new coffee grinder to upgrade from my ~5 year old Oxo grinder. I try not to be wasteful, so when considering what to do with the old grinder, I decided to search around to see if anyone had recommendations for modifying it. In my (not so exhaustive) search, I found this recommendation for swapping in a better set of burrs, which I decided to try.

    The instructions were pretty good and the most time consuming part was cleaning out all of the grounds from previous usage (perhaps I should have cleaned it more regularly...). But all in all it was quite an easy swap.

    The retention still isn't great, but it's better and the new burrs work well!

    [Photos: the old, dirty burrs and the new burrs installed]

    Since this isn't a coffee blog (yet...) I don't want to get too deep in the weeds, but the old grinder and the new one have different burr types (flat vs. conical) which has allowed me to highlight the difference between them as I try new coffees. It's a lot of fun!

  8. Signing and Notarizing a Unity App for MacOS

    Edit: A lot of the information in this post is still great, but Apple has recently recommended moving from altool (referenced in the linked scripts below) to notarytool, so I have added an update to this post here.

    For the past few years, Apple's Gatekeeper has made it difficult to run apps downloaded from the internet. Since most of the users of the game that I distribute (more on this soon) aren't technical, we get a significant number of folks needing help even with the detailed instructions we supply. I finally had the time to look into code-signing our Unity game for MacOS, which is distributed outside the app store (downloaded from the internet).

    The following two links have almost all of the instructions necessary, but I want to highlight two issues I ran into in case others have a similar problem. And I think I can explain the root issue a little better than I found elsewhere.

    Links: a good example of how to script the process and a walkthrough with much more thorough explanations

    Note that my MacBook is managed by the IT department at my current employer, so this may be an issue only for managed machines.

    First, it isn't specified in either link, but I installed my certificate (the one I created and downloaded from developer.apple.com; see the second link above for more about that) into my 'login' keychain after running into the issue below and reading through some Apple developer forum and Stack Overflow discussions. I'm not sure if that matters, but it seems to be the recommended way.

    I kept getting Warning: unable to build chain to self-signed root for signer when trying to run codesign, and the answer here about the WWDR Intermediate Certificate worked for me initially. I installed the linked cert into my System keychain. But I was using the wrong certificate (Distribution, which is for submitting to the store, I think), so notarization failed.

    After getting the correct certificate (again, from the instructions in the second link above, and installing it into my 'login' keychain), I had to download the intermediate certificate Developer ID - G2 (Expiring 09/17/2031 00:00:00 UTC) from here, and I installed that into my System keychain. This got notarization to work!

    I know it was this one because after I added that, my certificate (the one I created in my developer account, downloaded, and installed to 'login') changed to 'trusted'. I tried a different one first that didn't change the status of my certificate.

    In short, if codesigning is giving you the error above, you are probably missing the intermediate certificate from Apple (this was another answer from the developer forum link above, but I didn't understand it when I read it). What this means is that you should determine which type of certificate you requested from Apple (Apple Distribution, etc.) and find the matching intermediate certificate from Apple. It wasn't immediately clear to me which one matched in my case, but I downloaded two that seemed like they might be correct, and only one of them changed my cert to trusted in Keychain Access. Good luck!

  9. Using Docker for Debugging

    My current set of tools for deploying my application to production includes Packer and Terraform. I didn't write all of the code for the deployments, but I've been over most of it now.

    When trying to upgrade my server from Ubuntu 20.04 to 22.04, I ran into some problems with conflicting versions of dependencies (mostly related to Postgres). My first instinct was to create an EC2 instance, then walk through each step manually, but I realized I didn't even have to spin up an instance if I could use Docker.

    I'm not much of a Docker user, but I've used it a few times professionally. Mostly other people have done the hard work of creating a docker image, and I've run it for development. So I thought this was a great opportunity to try using it myself.

    I started by pulling the Docker image for the target Ubuntu version (codenamed jammy) and starting a container from it:

    docker pull ubuntu:jammy                      # fetch the Ubuntu 22.04 (jammy) base image
    docker create -it --name jelly ubuntu:jammy   # create an interactive container from that image
    docker start jelly                            # start the container
    docker attach jelly                           # attach my terminal to it
    

    Then I ran through the scripts from my Packer build manually:

    echo "deb http://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main"> /etc/apt/sources.list.d/pgdg.list
    apt-get update
    apt-get install -y wget gnupg
    wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
    
    export DEBIAN_FRONTEND=noninteractive
    apt update && apt upgrade -y && apt auto-remove -y && apt update
    
    apt-get install -y unzip libpq-dev postgresql-client-14 nginx curl awscli logrotate
    

    This is where I got the errors about conflicting or unavailable versions. Did you catch it?

    echo "deb http://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main"> /etc/apt/sources.list.d/pgdg.list
    

    The bionic in there refers to the Ubuntu release we are requesting the dependencies for, and bionic is Ubuntu 18.04. It should be jammy!
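
    In other words, the corrected line is the same command with jammy substituted for bionic:

    echo "deb http://apt.postgresql.org/pub/repos/apt/ jammy-pgdg main"> /etc/apt/sources.list.d/pgdg.list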

    This was a great example of Docker saving me some time (and a few pennies) by not having to spin up a cloud instance. And there really wasn't much to learn about Docker itself to dive in. So my advice to you is to give Docker a try for a simple use case like this. Despite being used in massive (sometimes very complicated) build chains, Docker is a relatively simple technology to get started with if you already have some familiarity with the Linux distribution you are planning to use in your container.

    Maybe someday I'll start using it for building a small project...
