Django’s FileField and ImageField are good at storing files, but on their own they don’t let us control access. When we’re dealing with public content this isn’t a problem, but sometimes your project needs to store and serve sensitive files, such as private data uploaded by users, personal documents, or content that is for members only.
If you’re storing your files locally you have a few options for keeping them outside a public path and serving them securely, such as writing a view that checks access and returns a FileResponse, or returns a special header (such as nginx’s X-Accel-Redirect) telling your web server to serve the file for you. But if you’re using object storage such as S3, there’s no efficient way to proxy the files - it’s going to be relatively slow, it ties up a worker each time you serve a file, and it’s not going to scale.
Files stored in a public S3 bucket can be served directly from the bucket’s URL. We don’t want to do this with our sensitive data - all an attacker would need to know is the filename, leaving our files vulnerable to guessing or enumeration attacks. However, AWS does give us a way to serve securely from a private bucket: by creating a time-limited signed URL.
AWS S3 pre-signed URLs are temporary URLs that grant time-limited access to private objects. We can generate these URLs from Django when access should be granted, return them to the user’s browser, then S3 will handle the actual file serving.
Set Up Your S3 Bucket
The first thing to do is create an S3 bucket that blocks all public access. In Amazon S3:
- Create a bucket (eg myproject-private-media)
- Check “ACLs disabled”
- Enable “Block all public access”
- Make a note of which region the bucket is in - we’ll need that later
Next we need to create an IAM user or role with minimal permissions. In AWS IAM:
- Create a user
- Select “Attach policies directly”
- Click “Create policy” and paste this JSON (changing the bucket name to match the one you created):
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::myproject-private-media",
                "arn:aws:s3:::myproject-private-media/*"
            ]
        }
    ]
}
```
- Go back to where you’re creating the user, select your new policy, and finish creating the user
- Open the user, click “Create access key”, select your use case and complete creating the key
- Make a note of the access key and secret access key - we’ll need them in the next section
Set up django-storages
The easiest way to work with S3 files is to use django-storages with Boto3, the AWS SDK for Python, just as we would for public S3 buckets.
If you’ve not got that set up, first install the dependencies:
```shell
pip install django-storages boto3
```
then update your Django settings to install storages, and collect the relevant AWS settings:
```python
import os

INSTALLED_APPS = [
    ...
    'storages',
]

AWS_S3_ACCESS_KEY_ID = os.getenv('AWS_ACCESS_KEY_ID')
AWS_S3_SECRET_ACCESS_KEY = os.getenv('AWS_SECRET_ACCESS_KEY')
PRIVATE_BUCKET = os.getenv('PRIVATE_BUCKET')

# and if your bucket is not in the region us-east-1:
AWS_S3_REGION_NAME = os.getenv('AWS_S3_REGION')
AWS_S3_ENDPOINT_URL = f"https://s3.{AWS_S3_REGION_NAME}.amazonaws.com"
```
If you already have S3 set up, everything will probably be the same - you’ll just need to set PRIVATE_BUCKET to the name of the private bucket we created in the previous step. If you have a more complicated setup, we can control things on a per-bucket basis in the next step below.
Do note that if your bucket is in a region other than us-east-1, you’ll need to set AWS_S3_REGION_NAME and AWS_S3_ENDPOINT_URL to prevent signature mismatch errors.
Lastly set the environment variables for Django to pick up - that’s going to depend on how you deploy your site.
Create a Custom Storage Backend
Now we need to tell Django how to access this bucket. Create myapp/storage.py:
```python
from django.conf import settings
from storages.backends.s3boto3 import S3Boto3Storage


class S3PrivateStorage(S3Boto3Storage):
    bucket_name = settings.PRIVATE_BUCKET
    default_acl = None
    querystring_auth = True


private_storage = S3PrivateStorage()
```
By default the signed URLs we generate will be valid for 1 hour, but you can change that by specifying the timeout in seconds with querystring_expire.
If your settings need to be a bit more nuanced - for example, if your bucket is in a different region to the other buckets in the project, or you want to use different AWS credentials for it - you can override the global settings with attributes on this class. See the django-storages docs for more details and options.
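For example, a hypothetical variant that shortens the link lifetime and pins the region and credentials for just this bucket might look like this (the PRIVATE_AWS_* settings are made up for illustration - use whatever names suit your project):

```python
from django.conf import settings
from storages.backends.s3boto3 import S3Boto3Storage


class S3PrivateStorage(S3Boto3Storage):
    bucket_name = settings.PRIVATE_BUCKET
    default_acl = None
    querystring_auth = True
    querystring_expire = 600  # signed URLs last 10 minutes instead of the default hour

    # Hypothetical per-bucket overrides - class attributes take precedence
    # over the global AWS_S3_* settings for this storage only
    region_name = "eu-west-1"
    access_key = settings.PRIVATE_AWS_ACCESS_KEY_ID
    secret_key = settings.PRIVATE_AWS_SECRET_ACCESS_KEY
```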
Use it in models
Once that’s set up, we’re ready to use it in our FileField or ImageField - pass our S3PrivateStorage instance as the storage argument when defining them:
```python
from django.db import models

from myapp.storage import private_storage


class PrivateDocument(models.Model):
    title = models.CharField(max_length=255)
    file = models.FileField(
        storage=private_storage,
    )
```
Control access in views
Now the files are being stored privately, we can choose when to allow access.
Here’s a simple view which only allows access if the user is logged in:
```python
from django.contrib.auth.decorators import login_required
from django.shortcuts import render

from .models import PrivateDocument


@login_required
def private_document_list(request):
    docs = PrivateDocument.objects.all()
    return render(request, 'list.html', {'docs': docs})
```
and now in its corresponding template, we can access the object’s URL like we would for any normal file or image field:
```html
{% for doc in docs %}
  <li>
    <a href="{{ doc.file.url }}">{{ doc.title }}</a>
  </li>
{% endfor %}
```
There’s no difference here to what we’d do with a regular public storage - because our S3PrivateStorage class has querystring_auth = True, when we access doc.file.url django-storages will automatically generate a temporary signed S3 URL for us.
The user’s browser will then get a link directly to S3, and Django won’t be involved in the transfer at all. It’s also worth noting that the signed URL is generated entirely on our server, so we don’t have any backend round trips to AWS to slow things down.
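To see why no round trip is needed: a signed URL is just the request details hashed with your secret key, which can be computed entirely locally - S3 recomputes the same hash to validate it. This is a deliberately simplified sketch of the idea, not AWS’s real Signature Version 4 algorithm (which involves several key-derivation steps):

```python
import hashlib
import hmac
from datetime import datetime, timezone


def sign(secret: str, message: str) -> str:
    # HMAC-SHA256 over the request details - only someone holding the
    # secret key can produce (or verify) this value
    return hmac.new(secret.encode(), message.encode(), hashlib.sha256).hexdigest()


expires = 3600
resource = "/myproject-private-media/docs/report.pdf"
timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

signature = sign("EXAMPLE_SECRET", f"GET\n{resource}\n{timestamp}\n{expires}")
url = (
    f"https://s3.amazonaws.com{resource}"
    f"?X-Amz-Date={timestamp}&X-Amz-Expires={expires}&X-Amz-Signature={signature}"
)
print(url)
```

Because the expiry time is part of the signed message, tampering with it invalidates the signature.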
Although we’re just doing a login check here, the access control logic can be as complicated as you like; for example, we could add an owner field to the document model and filter the queryset to the current user. The important thing is we only generate the doc.file.url for documents that the user has access to.
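A sketch of that owner-based variant might look like this - the owner field and the filtered view are hypothetical additions to the model and view we defined earlier:

```python
from django.contrib.auth.decorators import login_required
from django.db import models
from django.shortcuts import render

from myapp.storage import private_storage


class PrivateDocument(models.Model):
    title = models.CharField(max_length=255)
    # Hypothetical field tying each document to the user allowed to see it
    owner = models.ForeignKey("auth.User", on_delete=models.CASCADE)
    file = models.FileField(storage=private_storage)


@login_required
def private_document_list(request):
    # Signed URLs will only ever be generated for this user's own documents
    docs = PrivateDocument.objects.filter(owner=request.user)
    return render(request, 'list.html', {'docs': docs})
```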
This works in exactly the same way for an ImageField:
```html
<img src="{{ object.image.url }}" alt="{{ object.title }}">
```
A word of warning
There are some things to watch out for with this approach.
Whenever working with your private files, remember that anyone with the URL will be able to access the file, so make sure you don’t accidentally add a link without applying access control first.
Private link expiry may also trip up you or your users. The file is accessible until the link expires, so make sure your timeout is appropriate for your use-case.
If the data is sensitive and your site will trigger a download immediately, setting your timeout to a very short value may make sense. For example, you could set your S3PrivateStorage expiry to as little as a minute, ie querystring_expire=60 - and it won’t matter if it’s a slow download; as long as it starts before the expiry, it will complete.
On the other hand, if you expect your users to want to bookmark or share the file with other people, it may have expired by the time they click it, and because the URL is direct to S3 you won’t be able to control the error message they see. You have a couple of imperfect options, like a “share file” redirect view, or setting a longer expiry - but if you find this really is an issue, signed URLs may not be the best solution for your project.
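One sketch of that “share file” approach: give users a stable Django URL that re-checks access and redirects to a freshly signed S3 URL on every visit. The view name and URL pattern here are hypothetical:

```python
from django.contrib.auth.decorators import login_required
from django.shortcuts import get_object_or_404, redirect
from django.urls import path

from .models import PrivateDocument


@login_required
def document_download(request, pk):
    # Re-check access on every visit, then 302 to a freshly signed URL -
    # bookmarked links to this view never go stale
    doc = get_object_or_404(PrivateDocument, pk=pk)
    return redirect(doc.file.url)


urlpatterns = [
    path("documents/<int:pk>/download/", document_download, name="document-download"),
]
```

The trade-off is that Django is back in the request path for the redirect, though it still doesn’t proxy the file itself.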
Putting it all together
To try this out yourself, save this minimal nanodjango script as private_s3.py, update the settings, and run it with uv run private_s3.py (you’ll need to install uv first):
```python
# /// script
# dependencies = [
#     "nanodjango", "django-style", "django-storages", "boto3",
# ]
# ///
from django.contrib.auth.decorators import login_required
from django.db import models
from django.shortcuts import render
from nanodjango import Django
from storages.backends.s3boto3 import S3Boto3Storage

app = Django(
    EXTRA_APPS=["storages"],
    # --- Update these settings ---
    AWS_S3_ACCESS_KEY_ID="YOUR_KEY",
    AWS_S3_SECRET_ACCESS_KEY="YOUR_SECRET",
    PRIVATE_BUCKET="YOUR_BUCKET",
    AWS_S3_REGION_NAME="YOUR_REGION",
    AWS_S3_ENDPOINT_URL="https://s3.YOUR_REGION.amazonaws.com",
)


class S3PrivateStorage(S3Boto3Storage):
    bucket_name = app.settings.PRIVATE_BUCKET
    default_acl = None
    querystring_auth = True


@app.admin
class PrivateDocument(models.Model):
    title = models.CharField(max_length=255)
    file = models.FileField(storage=S3PrivateStorage())


@app.path("/")
@login_required
def private_document_list(request):
    docs = PrivateDocument.objects.all()
    return render(request, "list.html", {"docs": docs})


app.templates = {
    "list.html": """{% extends "base.html" %}
{% block content %}
{% for doc in docs %}
  <li>
    <a href="{{ doc.file.url }}">{{ doc.title }}</a>
  </li>
{% endfor %}
{% endblock %}
"""
}

if __name__ == "__main__":
    app.run()
```
Log in at http://localhost:8000/admin/, upload a file, and see the generated signed URL at http://localhost:8000/. If you run into any errors, just double check you’ve got AWS configured correctly and that you’ve copied the right values into the settings.
Using signed URLs to serve private files directly from S3 reduces load and bandwidth for your Django process, and scales automatically with your traffic - and once you’ve got the initial setup in place, it works in exactly the same way as Django’s regular file and image fields.