Django 6.0 shipped with one of its most anticipated features: a built-in Tasks framework. For years, Django developers have reached for Celery, RQ, or other third-party solutions for background processing. Now, task definition and queuing can be handled natively.
Here’s the simplest possible example using nanodjango:
```python
# /// script
# dependencies = ["nanodjango"]
# ///
import time

from django.http import HttpResponse
from django.tasks import task

from nanodjango import Django

app = Django()


@task
def long_running_job(duration):
    print(f"--> TASK STARTED: Sleeping for {duration} seconds...")
    time.sleep(duration)
    print("--> TASK FINISHED: Woke up!")


@app.route("/")
def index(request):
    long_running_job.enqueue(duration=5)
    return HttpResponse("Job started! Check your terminal.")


if __name__ == "__main__":
    app.run()
```
The @task decorator marks a function as a background task, and .enqueue() adds it to the queue.
Copy the above code, save it as example.py anywhere, and run with uv run example.py. Hit the endpoint. Can you see the problem?
By default, Django 6 uses the ImmediateBackend task backend. This means task.enqueue() is essentially just a fancy function call: the task runs synchronously on the same thread as your view.
The Built-in Backends Don’t Actually Background
Django 6 includes two backends:
- ImmediateBackend (default): Runs tasks synchronously, blocking until complete
- DummyBackend: Stores tasks without executing them at all
Neither provides actual background execution. With ImmediateBackend, when you call task.enqueue(), your request blocks until the task finishes. A 5-second task means a 5-second response time.
This is by design—Django provides the framework, but expects you to bring your own execution backend.
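To make that choice explicit in your own project, you can name the backends in the TASKS setting. A sketch, assuming the dotted paths follow Django 6's tasks module layout (verify them against the official docs):

```python
# settings.py -- backend paths are assumptions based on Django 6's
# tasks module layout; check the release docs for the exact names
TASKS = {
    "default": {
        # Runs tasks inline, blocking the request (the default behavior)
        "BACKEND": "django.tasks.backends.immediate.ImmediateBackend",
    },
    # A second alias, handy in tests: tasks are recorded but never executed
    "dummy": {
        "BACKEND": "django.tasks.backends.dummy.DummyBackend",
    },
}
```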
Zero-Infrastructure Background Tasks
I wrote django-tasks-local to fill this gap. It runs tasks in a ThreadPoolExecutor, freeing your request thread immediately. No Redis. No Celery. No database. Just install and configure.
Have CPU-heavy tasks? It also comes with a ProcessPoolBackend better suited to those.
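For CPU-bound work, the configuration would look much the same. A sketch, assuming the dotted path mirrors the thread-based backend (check the package docs for the exact name):

```python
# settings.py -- the ProcessPoolBackend path here is an assumption
# mirroring the ThreadPoolBackend naming; verify against the README
TASKS = {
    "default": {
        "BACKEND": "django_tasks_local.ProcessPoolBackend",
        "OPTIONS": {
            "MAX_WORKERS": 4,  # one worker process per CPU-bound task slot
        },
    }
}
```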
```shell
pip install django-tasks-local
```

```python
# settings.py
TASKS = {
    "default": {
        "BACKEND": "django_tasks_local.ThreadPoolBackend",
        "OPTIONS": {
            "MAX_WORKERS": 10,    # Thread pool size
            "MAX_RESULTS": 1000,  # How many results to keep in memory
        },
    }
}
```
That’s it. Your tasks now run in background threads.
Basic Example
Here’s the same example from above, now with true background execution:
```python
# /// script
# dependencies = ["nanodjango", "django-tasks-local"]
# ///
import time

from django.http import HttpResponse
from django.tasks import task

from nanodjango import Django

app = Django(
    TASKS={
        "default": {
            "BACKEND": "django_tasks_local.ThreadPoolBackend",
        }
    }
)


@task
def long_running_job(duration):
    print(f"--> TASK STARTED: Sleeping for {duration} seconds...")
    time.sleep(duration)
    print("--> TASK FINISHED: Woke up!")


@app.route("/")
def index(request):
    long_running_job.enqueue(duration=5)
    return HttpResponse("Job started! Check your terminal.")


if __name__ == "__main__":
    app.run()
```
Same @task decorator, same .enqueue() call as before, but with ThreadPoolBackend the response returns immediately while the task runs in a separate thread.
Real-World Example: Progress Tracking with SSE
The django-tasks-local repo includes a full demo with real-time progress updates streamed via Server-Sent Events. Tasks report progress through Django’s cache, and the frontend displays live progress bars.
You can try it via the magic of uv (and, implicitly, nanodjango) by downloading the example module:
```shell
cd /tmp
wget https://raw.githubusercontent.com/lincolnloop/django-tasks-local/main/example.py
uv run example.py
```
Open localhost:8000 and click “Start Long Job”. Here’s what happens:
- The browser calls /start-job, which enqueues a task and returns immediately with the job ID
- The task starts running in a background thread, writing progress to Django's cache every second
- The browser opens an SSE connection to /events
- A generator function constantly polls the cache and yields progress updates as JSON
- The frontend updates the progress bar as each SSE message arrives
- When there are no more running jobs, the browser closes the SSE connection
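The server side of that loop can be sketched in plain Python. This is a simplified stand-in, not the demo's actual code: the real version reads from Django's cache and streams through a StreamingHttpResponse, while here a plain dict (the hypothetical progress_store) and a generator just show the shape of the SSE payload:

```python
import json

# Hypothetical stand-in for Django's cache: maps job ID -> percent complete.
progress_store = {"job-1": 40, "job-2": 100}


def sse_events(store):
    """Yield one Server-Sent Event per tracked job, then a 'done' event
    once every job has reached 100%."""
    for job_id, percent in sorted(store.items()):
        payload = json.dumps({"job": job_id, "progress": percent})
        # SSE framing: a 'data:' line terminated by a blank line.
        yield f"data: {payload}\n\n"
    if all(p >= 100 for p in store.values()):
        yield 'data: {"done": true}\n\n'


messages = list(sse_events(progress_store))
```

In the real demo the generator loops until no jobs remain; the frontend's EventSource parses each `data:` line as JSON and updates the matching progress bar.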
You can start multiple jobs and watch them all progress concurrently. If you start more than 4, you’ll see the other ones queue up since the example runs with "MAX_WORKERS": 4.
If you’re diving into the code, then you might be nerdy enough to appreciate that django-tasks-local also provides a bonus feature of a current_result_id context variable that lets tasks access their own result ID without needing it passed as a parameter.
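The mechanism behind such a variable is Python's contextvars module. A minimal sketch of the pattern (not django-tasks-local's actual implementation; run_task and my_task are hypothetical names):

```python
from concurrent.futures import ThreadPoolExecutor
from contextvars import ContextVar

# The backend sets this before invoking the task, so task code can read
# its own result ID without it being passed as a parameter.
current_result_id: ContextVar[str] = ContextVar("current_result_id")


def run_task(result_id, func):
    # Played here by the (hypothetical) backend wrapper.
    current_result_id.set(result_id)
    return func()


def my_task():
    # Inside the task body: no ID argument, it comes from context.
    return f"running as {current_result_id.get()}"


with ThreadPoolExecutor(max_workers=1) as pool:
    outcome = pool.submit(run_task, "abc-123", my_task).result()
```

Because each worker thread sets the variable in its own context before calling the task function, concurrent tasks each see their own ID.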
When to Use django-tasks-local
This backend is ideal for:
- Development: Get real background execution without setting up Redis or a database worker
- Low-volume production: Apps where losing a few tasks on deploy is acceptable
- Prototyping: Quickly test task-based architectures before committing to infrastructure
The big caveat to note is that, by design, results are stored in memory only - when your Django server restarts, all pending tasks and results are lost. For persistence, use django-tasks which provides a DatabaseBackend with a worker command.
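Switching to that persistent setup is mostly a settings change. A sketch based on the django-tasks README (verify the app names and worker command against its current docs):

```python
# settings.py -- names taken from the django-tasks README; double-check
# them against the version you install
INSTALLED_APPS = [
    # ...
    "django_tasks",
    "django_tasks.backends.database",
]

TASKS = {
    "default": {
        "BACKEND": "django_tasks.backends.database.DatabaseBackend",
    }
}
```

After running migrations, a separate process executes queued tasks via the package's worker management command (python manage.py db_worker), so tasks survive web-server restarts.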
Why not just use threading.Thread(target=job).start()? Sure, if that floats your boat! But integrating with the official Django Tasks API will allow for easier migration to Celery/DB later, plus you’ll have access to task progress tracking and result storage.
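For a feel of what a thread-based backend does under the hood, here is a toy sketch of the enqueue-and-track pattern. It is an illustration only, not the library's code; the real backend also caps stored results, integrates with Django settings, and returns a richer result object rather than a bare ID:

```python
import uuid
from concurrent.futures import ThreadPoolExecutor


class TinyThreadBackend:
    """Toy illustration: submit work to a pool, hand back an ID,
    keep the Future so the result can be looked up later."""

    def __init__(self, max_workers=10):
        self._pool = ThreadPoolExecutor(max_workers=max_workers)
        self._results = {}  # result ID -> Future

    def enqueue(self, func, *args, **kwargs):
        result_id = str(uuid.uuid4())
        self._results[result_id] = self._pool.submit(func, *args, **kwargs)
        return result_id  # caller gets an ID back immediately

    def result(self, result_id):
        # Blocks until the task finishes, then returns its value.
        return self._results[result_id].result()


backend = TinyThreadBackend(max_workers=2)
job_id = backend.enqueue(lambda x: x * 2, 21)
```

The key difference from a bare threading.Thread is that the Future gives you completion status and a return value for free, which is essentially what result storage in the Tasks API builds on.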
Links
- Official Django Tasks Framework Documentation
- django-tasks-local on PyPI (and the GitHub Repository)
- django-tasks - Database-backed alternative for persistence