So far in my main project (more on that to come), the only users of the admin have been myself and my cofounder, so I've made us both superusers. Recently, I had a request to allow an instructor to view and update data from their own courses. This would require giving them admin access, but with a highly limited view. The good news is that there is a lot of information out there on how to limit admin access in Django! The bad news is that much of it applies to entire models or to use cases far less specific than mine. Here are the three major things I learned:
First, I'm not sure if this is the best or only way to do this, but I created a group (I could not get this to work as permissions assigned directly to a single user) and limited its permissions to just a few models (and only the permissions required on each). I'd be interested to know if there is a way to do this per individual, but I think I'll need the group setup sooner rather than later anyway.
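(In case it's useful, here's a minimal sketch of setting up such a group in code; the courses app, the Course and Student models, and the username are all placeholders for your own project.)

from django.contrib.auth import get_user_model
from django.contrib.auth.models import Group, Permission
from django.contrib.contenttypes.models import ContentType

from courses.models import Course, Student  # placeholder app and models

# Create the group and grant only the permissions instructors actually need.
group, _ = Group.objects.get_or_create(name="Instructors")
for model, actions in [(Course, ["view", "change"]), (Student, ["view", "add", "change"])]:
    content_type = ContentType.objects.get_for_model(model)
    for action in actions:
        group.permissions.add(
            Permission.objects.get(
                codename=f"{action}_{model._meta.model_name}",
                content_type=content_type,
            )
        )

# Put the instructor in the group; is_staff is required to log into the admin at all.
instructor = get_user_model().objects.get(username="instructor")  # placeholder username
instructor.groups.add(group)
instructor.is_staff = True
instructor.save()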
Second, for the models they are able to use, I needed to limit what they could see to only their own students. I was able to do that by overriding get_queryset on the relevant ModelAdmin classes in admin.py:
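(A minimal sketch of the override; Student and the course__instructor filter are placeholders for your own models.)

from django.contrib import admin

from courses.models import Student  # placeholder model


class StudentAdmin(admin.ModelAdmin):
    def get_queryset(self, request):
        queryset = super().get_queryset(request)
        # Superusers see everything; everyone else only sees their own students.
        if request.user.is_superuser:
            return queryset
        return queryset.filter(course__instructor=request.user)


admin.site.register(Student, StudentAdmin)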
This returns all data for superusers, but for any other user it returns only what the filter allows. In my case, I only have two levels of admin user, so this simple filter works well. You'll have to adjust the filter to match your own models.
Third, and this was the most difficult to track down: I needed to make sure they couldn't see data outside of their scope when adding new students to their course. For the User model, I was able to add a filter to the user admin like the one above. But for the Course model, all courses would still show up in the add form even with get_queryset overridden on the CourseAdmin as above. I'm not sure why this is. To fix it, I had to use:
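(A sketch of the usual approach, overriding formfield_for_foreignkey on the same StudentAdmin as above; Course and the instructor field are placeholders for your own models.)

from django.contrib import admin

from courses.models import Course  # placeholder model


class StudentAdmin(admin.ModelAdmin):
    def formfield_for_foreignkey(self, db_field, request, **kwargs):
        # Limit the course dropdown on the student add/change form to the
        # instructor's own courses; superusers still see every course.
        if db_field.name == "course" and not request.user.is_superuser:
            kwargs["queryset"] = Course.objects.filter(instructor=request.user)
        return super().formfield_for_foreignkey(db_field, request, **kwargs)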
You may have noticed my favicon is a mug of coffee. I'm a huge coffee fan, and the last few years I've gone further down the rabbit hole with third wave/specialty coffee.
I recently bought a new coffee grinder to upgrade from my ~5 year old Oxo grinder. I try not to be wasteful, so when considering what to do with the old grinder, I decided to search around to see if anyone had recommendations for modifying it. In my (not so exhaustive) search I found this recommendation for swapping to a better set of burrs which I decided to try.
The instructions were pretty good and the most time consuming part was cleaning out all of the grounds from previous usage (perhaps I should have cleaned it more regularly...). But all in all it was quite an easy swap.
The retention still isn't great, but it's better and the new burrs work well!
Since this isn't a coffee blog (yet...) I don't want to get too deep in the weeds, but the old grinder and the new one have different burr types (flat vs. conical), which has let me compare the two as I try new coffees. It's a lot of fun!
For the past few years, Apple's Gatekeeper has made it difficult to run apps downloaded from the internet. Since most of the users of the game I distribute (more on this soon) aren't technical, we get a significant number of folks needing help even with the detailed instructions we supply. I finally had the time to look into code-signing our Unity game for macOS distributed outside the App Store (i.e., downloaded from the internet).
The following two links have almost all of the instructions necessary, but I want to highlight two issues I ran into in case others have a similar problem. And I think I can explain the root issue a little better than I found elsewhere.
Note that my MacBook is managed by the IT department at my current employer, so this may be an issue only for managed machines.
First, it isn't specified in either link, but I installed my certificate (the one I created and downloaded from developer.apple.com; see the second link above for more about that) into my 'login' keychain after running into the issue below and reading through some Apple developer forum and Stack Overflow discussions. I'm not sure if that matters, but it seems to be the recommended way.
I kept getting 'Warning: unable to build chain to self-signed root for signer' when trying to run codesign, and the answer here about the WWDR intermediate certificate worked for me initially. I installed the linked cert into my System keychain. But I was using the wrong certificate (Distribution, which is for submitting to the App Store, I think), so notarization failed.
After getting the correct certificate (again, from the instructions in the second link above) and installing it into my 'login' keychain, I had to download the intermediate certificate Developer ID - G2 (Expiring 09/17/2031 00:00:00 UTC) from here, and I installed that into my System keychain. This got notarization to work!
I know it was this one because after I added it, my certificate (the one I created in my developer account, downloaded, and installed to 'login') changed to 'trusted'. I tried a different one first that didn't change the status of my certificate.
In short, if codesign is giving you the error above, you are probably missing an intermediate certificate from Apple (this was another answer from the developer forum link above, but I didn't understand it when I first read it). What this means is that you should determine which type of certificate you requested from Apple (Apple Distribution, etc.) and find the matching intermediate certificate from Apple. It wasn't immediately clear to me which intermediate was the match in my case, but I downloaded two that seemed like they might be correct, and only one of them changed my cert to trusted in Keychain Access. Good luck!
My current set of tools for deploying my application to production includes Packer and Terraform. I didn't write all of the code for the deployments, but I've been over most of it now.
When trying to upgrade my server from Ubuntu 20.04 to 22.04, I ran into some problems with conflicting versions of dependencies (related to Postgres, mostly). My first instinct was to create an EC2 instance and walk through each step manually, but I realized I didn't even have to spin up an instance if I used Docker.
I'm not much of a Docker user, but I've used it a few times professionally. Mostly other people have done the hard work of creating a docker image, and I've run it for development. So I thought this was a great opportunity to try using it myself.
I started by spinning up a Docker container for the target Ubuntu version, codenamed jammy:
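(Roughly like the following; the Postgres apt source is shown only as an illustration of the kind of hard-coded release name that caused my conflicts, not my exact provisioning script.)

# Start an interactive shell in a container running the target release.
docker run -it --rm ubuntu:jammy /bin/bash

# Inside the container, replay the provisioning steps one at a time.
# For example, an apt source pinned to an old release looks like this:
echo "deb http://apt.postgresql.org/pub/repos/apt bionic-pgdg main" > /etc/apt/sources.list.d/pgdg.list
apt-get update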
The bionic in there refers to the version of Ubuntu that we are requesting the dependencies for. It should be jammy!
This was a great example of Docker saving me some time (and a few pennies) by not having to spin up a cloud instance. And there really wasn't much to learn about Docker itself to dive in. So my advice is to give Docker a try for a simple use case like this. Despite being used in massive (sometimes very complicated) build chains, Docker is a relatively simple technology to get started with if you already have some familiarity with the Linux version you are planning to use in your container.
Maybe someday I'll start using it for building a small project...
This is a short post to tell you to create more bash aliases.
I have had one for my devops setup for a while because I have to set several environment variables to get just about anything done. More recently, though, I set up a few more for my main Django project and even this blog. For the blog, it's as simple as entering the directory and activating the environment:
alias pmlstart='cd ~/pml_blog && source env/bin/activate'
For the Django project, it took me forever to remember how to start my local postgres instance (is it service start, or start service, or...?), so I finally just made that part of an alias for when I start working on the project:
alias webstart="cd ~/website && source env/bin/activate && sudo service postgresql start && source env_vars.sh"
Since writing this, I added that last piece to load environment variables so I don't have to remember to edit my activate file whenever I recreate my environment locally. Highly recommended!
Are there bash commands you use frequently together? Make an alias!
There are a few code formatting tools I like to use in just about any Python (and Django) project: black, isort, and flake8. These all serve slightly different purposes, and there are alternatives to each. A full discussion/comparison of code formatting tools in Python is beyond the scope of this post*, but in brief:
Black is great because it automatically formats my code with a well recognized style. I have my editor (currently VSCode) set to run black when I save a file. This makes my code consistent, and is great in teams I've worked in because it means we don't have to worry about styling in code reviews.
isort is great for sorting imports. Even though I tend to do a bit of this sorting as I go, I inevitably swap an import or leave one out of place. This organizes them so I don't have to.
flake8 catches minor (and sometimes not so minor) code issues like unused variables.
There are a few ways to configure these libraries, but for now, I have them in their own individual config files. These should all be placed at the root of your Django project.
For black, I use the default config. In some other projects, I've extended the line length, but currently I'm leaving black as default.
For isort, .isort.cfg is the filename, and mine looks like this:
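(Something like the following; adjust the names to your project.)

[settings]
profile = black
extend_skip = migrations
skip = env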
Where env is the name of your environment. The extended skip for migrations isn't necessary, but I wanted to avoid changing my migration files. The black profile is nice to avoid conflicts between black and isort. Skipping your environment is likely required: when I've left this out (or accidentally run isort without the config), isort runs against the libraries in my environment and ends up breaking them by creating circular dependencies. One way to make sure you aren't about to do this is to run isort --check-only . first, to confirm it's only touching your files.
For flake8, .flake8 is the filename, and mine looks like this:
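(Again, something like the following.)

[flake8]
exclude = migrations,__pycache__,env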
This excludes all of the files in the migrations, __pycache__, and env folders (where env is the name of your environment). You may want to exclude more, or not exclude some of these, but I've found this works for me.
Finally, on other projects I've used all of these with pre-commit, a library that uses git hooks to prevent you from committing code that doesn't meet the standards you and your team have set, whether that's black, isort, and flake8 or some other combination of code formatting tools. It also lets you run all three tools with a single command.
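(For reference, a typical .pre-commit-config.yaml wiring up these three tools looks something like this; the rev values are placeholders, so pin the versions you actually use.)

repos:
  - repo: https://github.com/psf/black
    rev: 22.3.0  # placeholder version
    hooks:
      - id: black
  - repo: https://github.com/pycqa/isort
    rev: 5.10.1  # placeholder version
    hooks:
      - id: isort
  - repo: https://github.com/pycqa/flake8
    rev: 4.0.1  # placeholder version
    hooks:
      - id: flake8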
Unfortunately, I can't figure out how to use pre-commit in my current setup, with my Python projects in WSL and my git client (Fork) in Windows. Perhaps this is a sign I should start using the git CLI more. But if you know of a good way to use pre-commit with this setup, let me know!
*between writing and publishing this I read James Bennett covering similar ground. He goes a bit deeper on some of the tools I mention (and other related tools), so I wanted to link this for further reading.
In the time since I've posted on this blog, I've erased the environment I used to set up and run Jekyll. Since I've never been a Ruby programmer and most of the code I write now is in Python, I thought I'd look into a blogging engine in Python. I found Pelican and Nikola to be recommended by a number of folks, so I decided to give Pelican a try, and it's been great so far! Using a Python based site generator feels more comfortable for me since I'm more familiar with pip than gems.
It seems remarkably similar to Jekyll and I only had to make a few small changes to the front matter on a few posts to get all of my content working with Pelican. I've made some tweaks to the default theme to move a few icons around to my liking, but otherwise haven't made major changes. I'll make a follow up post if I make interesting changes that others might want to use.
While I was trying to incorporate Stripe’s JavaScript library into my codebase, I ran into some issues implementing it in TypeScript. I was eventually able to fix those issues, but while figuring that out, I left the Stripe code as JavaScript and stumbled upon an interesting way to interface it with my TypeScript code compiled by webpack.
First, at the bottom of the TypeScript file you want to call from JavaScript:
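(Something along these lines; startCheckout is a stand-in for whatever function you need to expose.)

export function startCheckout(priceId: string): void {
    // Call into Stripe.js here.
}

// Expose the function on window so plain JavaScript (e.g. an inline script
// tag) can call it once webpack has bundled this file.
(window as any).startCheckout = startCheckout;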
I found the documentation for pre-signed URLs on AWS using boto3 to be insufficient, so here is how I formed the request.
I wanted to keep a file private in S3, but expose it to a user for download if they were authorized to access it. I found the easiest way to do that was to leave the object in S3 as private, but use the AWS API to generate a pre-signed URL with a short timeout (a minute or so). Unfortunately, I found the documentation not so straightforward and cobbled together a few Stack Overflow answers to come up with exactly what I needed. In particular, I needed to add ‘ResponseContentType’ to get the file to download correctly, and needed to specify my AWS credentials.
I’m using Django to serve up the download via a get request:
import boto3
from rest_framework.response import Response
from rest_framework.views import APIView


class GetDownloadURL(APIView):
    def get(self, request):
        # Get the service client.
        session = boto3.session.Session(profile_name="AWSUserName")
        s3 = session.client("s3")

        # Generate the URL to get 'key-name' from 'bucket-name'.
        url = s3.generate_presigned_url(
            ClientMethod="get_object",
            Params={
                "Bucket": "your-s3-bucket",
                "Key": "SampleDLZip.zip",
                "ResponseContentType": "application/zip",
            },
            ExpiresIn=100,
        )

        return Response(url)
Notes: You will need to update the Key and Bucket params to match what you have in S3.
Depending how you have set up your AWS credentials, you may be able to omit the ‘profile_name="AWSUserName"’ parameter. I prefer to be explicit in my config because I’ve run into issues when I have used the default in the past.