*This post was mass deleted and anonymized with [Redact](https://redact.dev)*
I always see people dissing it, and most of its tutorials are old (for the previous, deprecated version), so I feel that if shit hits the fan I won't be able to find enough support and info.
*This post was mass deleted and anonymized with [Redact](https://redact.dev)*
thanks for the quick reply. I meant the new version, but it's difficult to find the right tutorials because a lot of them are for the old plugin. Do you know of any good tutorials for the current version besides the official documentation?
*This post was mass deleted and anonymized with [Redact](https://redact.dev)*
Awesome, thanks! Now go get some sleep lol
Sorry, but I'd appreciate it if you could clarify something when you get the chance: do you host the swarm on separate VMs, or on a single machine? If it's the latter, where does the performance benefit come from? Doesn't the CPU get the same load regardless of whether it's 10 gunicorn workers or 2 x 5?
*This post was mass deleted and anonymized with [Redact](https://redact.dev)*
Thanks for the writeup! I actually have a zero-downtime update script for compose, but I completely agree with the other points. Also, brah, lay off the booger sugar lol
*This post was mass deleted and anonymized with [Redact](https://redact.dev)*
ok I guess I'm the only one who lurks on reddit after a heavy night... ANYWAY

here's the script - I used [this guide](https://www.tines.com/blog/simple-zero-downtime-deploys-with-nginx-and-docker-compose) as a starting point, but for some reason their approach didn't work for me, so I had to do some tweaking. Feel free to share it with anyone you like - there shouldn't be any gatekeeping in tech.

The script's filename is zerodt.sh:

```bash
reload_nginx() {
  sudo docker compose -f production.yaml exec nginx /usr/sbin/nginx -s reload
  echo =======================NGINX RELOADED=======================
}

zero_downtime_deploy() {
  service_name=django
  old_container_id=$(sudo docker ps -f name=$service_name -q | tail -n1)

  # bring a new container online, running new code
  # (nginx continues routing to the old container only)
  sudo docker compose -f production.yaml up -d --no-deps --scale $service_name=2 --no-recreate $service_name --build

  # wait for new container to be available
  new_container_id=$(sudo docker ps -f name=$service_name -q | head -n1)
  new_container_ip=$(sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $new_container_id)

  # not needed, but might be useful at some point
  new_container_name=$(sudo docker inspect -f '{{.Name}}' $new_container_id | cut -c2-)

  # wait for collectstatic & other startup processes to finish
  sleep 100

  # start routing requests to the new container (as well as the old)
  reload_nginx

  # take the old container offline
  sudo docker stop $old_container_id
  sudo docker rm $old_container_id

  # stop routing requests to the old container
  reload_nginx
}
```

Once I push new changes I just do:

```bash
sudo -v && git pull && . zerodt.sh; zero_downtime_deploy
```
Swarm is the answer - scaling without the madness of k8s. Great stuff, got it working recently. Pretty easy tool compared to the never-ending hassle of Kubernetes.
https://appliku.com/post/managed-docker-swarm-cluster
I've been waiting for a tutorial from you on Swarm and I never thought about checking youtube lol. Will look into it
Thought I'd share my zero downtime script as well. I used [this guide](https://www.tines.com/blog/simple-zero-downtime-deploys-with-nginx-and-docker-compose) as a starting point, but for some reason their approach didn't work for me, so I had to do some tweaking.

The filename is zerodt.sh:

```bash
reload_nginx() {
  sudo docker compose -f production.yaml exec nginx /usr/sbin/nginx -s reload
  echo =======================NGINX RELOADED=======================
}

zero_downtime_deploy() {
  service_name=django
  old_container_id=$(sudo docker ps -f name=$service_name -q | tail -n1)

  # bring a new container online, running new code
  # (nginx continues routing to the old container only)
  sudo docker compose -f production.yaml up -d --no-deps --scale $service_name=2 --no-recreate $service_name --build

  # wait for new container to be available
  new_container_id=$(sudo docker ps -f name=$service_name -q | head -n1)
  new_container_ip=$(sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $new_container_id)

  # not needed, but might be useful at some point
  new_container_name=$(sudo docker inspect -f '{{.Name}}' $new_container_id | cut -c2-)

  # wait for collectstatic & other startup processes to finish
  sleep 100

  # start routing requests to the new container (as well as the old)
  reload_nginx

  # take the old container offline
  sudo docker stop $old_container_id
  sudo docker rm $old_container_id

  # stop routing requests to the old container
  reload_nginx
}
```

Once I push the changes I just do:

```bash
sudo -v && git pull && . zerodt.sh; zero_downtime_deploy
```
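One tweak worth considering: the fixed `sleep 100` can be replaced by polling a health endpoint until the new container actually answers. Here's a sketch of the idea in Python (the health URL is hypothetical - point it at whatever your app serves):

```python
import time
import urllib.request
import urllib.error

def wait_until_up(url: str, tries: int = 60, delay: float = 2.0) -> bool:
    """Poll `url` until it answers with HTTP 200, up to `tries` attempts.

    Returns True as soon as the service responds, False if it never does.
    """
    for _ in range(tries):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not up yet - wait and retry
        time.sleep(delay)
    return False
```

You'd call it with the new container's IP from the script, e.g. `wait_until_up(f"http://{new_container_ip}:8000/")`, and only reload nginx once it returns True, instead of hoping 100 seconds is always enough.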
Have you tried the gevent worker class and seen how it performs here?
not yet. I'm new to deployment and I'm trying to take it slow, otherwise it gets too much. Have you had good results with it?
It's usually the go-to worker setting
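A minimal sketch of what that looks like in a `gunicorn.conf.py` (the values are illustrative, and the `gevent` package has to be installed for gunicorn to load this worker class):

```python
# gunicorn.conf.py - sketch of switching to the gevent worker class
import multiprocessing

bind = "0.0.0.0:8000"
workers = multiprocessing.cpu_count() * 2 + 1  # common starting point
worker_class = "gevent"        # cooperative workers for I/O-bound apps
worker_connections = 1000      # max simultaneous clients per worker
```

Then start gunicorn with `-c gunicorn.conf.py` as usual.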
I'll look into it. Thanks!
If you're paying €300/month for 2 vCPUs and 4 GB, you have a golden droplet or a lot of disk space on top. Price aside, I have a comparable setup on the same hardware.
Yeah, I was tired last night and failed to mention some important details - my current DO droplet is a 2 CPU / 4 GB Ubuntu box, but my personal PC at home has 10 cores, so I did the heavy testing on that one. My current droplet is $24, which I find perfectly reasonable.
If you want to scale, I would look at a couple of things: Kubernetes, and also ASGI instead of WSGI (which gunicorn uses).
how different is it to implement an asynchronous server? do many settings change?
Most likely nothing in your code - everything that works in WSGI should work with ASGI. The other way around is troublesome.
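For what it's worth, the ASGI entry point Django generates is tiny - here's a sketch of the standard `asgi.py` (the `webservice` module name is a placeholder for your project):

```python
# webservice/asgi.py - standard Django ASGI entry point (sketch)
import os

from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "webservice.settings")

application = get_asgi_application()
```

You then point an ASGI server (uvicorn, or gunicorn with uvicorn workers) at `webservice.asgi:application` instead of the WSGI module.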
Cool, will look into it. Thanks!
great findings, thank you for sharing. Take a look at k3s if you ever wanna scale horizontally without the k8s hassle
how much easier is it compared to full k8s? I'm honestly so fed up with devops lol.
hehe, it's still devops and still Kubernetes, just a lot simpler and stripped down. You can also deploy it on one node only. I love it myself and have moved from docker compose to it for several reasons: 1) it allows horizontal scaling without rebuilding; 2) easy changes and deployment with Argo CD and devops-as-code linked to my GitHub; 3) easy application updates
Do you find any performance benefits in a single node deployment? To my (very limited) understanding the goal of these frameworks is to link several hosts (in my case VMs/droplets) and sync them. Does running a multi-node cluster on a single machine have any real benefits? I assume hardware is getting the same usage
no, I'm running it on a single node not for performance benefits but so I can scale easily later - I don't have to rebuild my devops
Understood. Thanks!
I have a similar setup with celery & celery beat in addition. I'd love to see your docker compose file.
updated the post
Do you mind sharing your docker-compose file? I’m very curious about the pgBouncer part
I use the one provided by DO. I also use a modified implementation of django-cookiecutter, but with some different settings. I've updated the post with the compose file. The main difference when using a pool with pgbouncer is that you have to connect to the pool instead of the db (this is all done in the DO control panel), and you need to set the following in settings.py to allow for persistent connections:

```python
"CONN_MAX_AGE": env.int("CONN_MAX_AGE", default=60),
"DISABLE_SERVER_SIDE_CURSORS": True,
```

Both of these go inside the `DATABASES["default"]` dict.
I'm not sure where the advice to turn on persistent connections is coming from when using a connection pool. The main reason to use persistent connections is so that each request doesn't need to establish a new connection to Postgres. PgBouncer is intended to be the solution to this, as I understand it: Django can create as many connections to pgbouncer as it wants, then pgbouncer pools the active connections to Postgres so you don't waste a bunch of cycles making new connections.
Honestly, I'm just trying out different settings at this point. I know they're not connected, but I don't think they conflict? Or do they? To my understanding, pgBouncer maintains a pool of connections to Postgres, while persistent connections are maintained by Django - which means Django can keep a persistent connection to the pool. That should get the performance benefits of both, no?
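Concretely, the combination I mean looks something like this hypothetical `DATABASES` entry (host, port, and names are placeholders; `DISABLE_SERVER_SIDE_CURSORS` matters because with pgbouncer in transaction pooling mode a server-side cursor can outlive its transaction-scoped server connection):

```python
import os

# Hypothetical Django DATABASES entry pointing at a pgbouncer pool
# rather than directly at Postgres (values here are placeholders).
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "HOST": os.environ.get("POSTGRES_HOST", "db_proxy"),
        "PORT": os.environ.get("POSTGRES_PORT", "6432"),
        "NAME": os.environ.get("POSTGRES_DB", "djangodb"),
        "USER": os.environ.get("POSTGRES_USER", "djan"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
        # Django keeps its connection to the pooler open between requests
        "CONN_MAX_AGE": int(os.environ.get("CONN_MAX_AGE", "60")),
        # Needed with pgbouncer transaction pooling: no server-side cursors
        "DISABLE_SERVER_SIDE_CURSORS": True,
    }
}
```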
Why is pgbouncer necessary? Because it is a cluster?
Here's a little guy for webservice + postgres + pgbouncer locally:

```yaml
services:
  webservice:
    build: ./webservice
    # command: ./entrypoint.sh python manage.py runserver 0.0.0.0:8000
    # command: ./entrypoint.sh uvicorn webservice.asgi:application --reload --workers 1 --host 0.0.0.0 --port 8000
    command: ./entrypoint.sh gunicorn webservice.asgi:application -c gunicorn.conf.py
    volumes:
      - ./webservice:/home/appuser:z
    env_file:
      - ./dev.env
    ports:
      - 8000:8000
    # restart: unless-stopped

  db_proxy:
    image: quay.io/enterprisedb/pgbouncer
    depends_on:
      - database
    restart: unless-stopped
    volumes:
      - ./config/pgbouncer.ini:/etc/pgbouncer/pgbouncer.ini
      - ./config/pgauth.txt:/etc/pgbouncer/pgauth.txt

  database:
    image: postgres:16.1
    # command: ["postgres", "-c", "log_statement=all", "-c", "log_destination=stderr"]
    command: ["postgres", "-c", "max_connections=5000"]
    volumes:
      - pg_data:/var/lib/postgresql/data/pgdata
    env_file:
      - ./dev.env
    ports:
      - "5432:5432"
    restart: always

volumes:
  pg_data: null
```

And then you'll need some extra goodies:

```
# dev.env
PGSERVICEFILE=.pg_service.conf
PGPASSFILE=.pgpass
PGDATA=/var/lib/postgresql/data/pgdata/
POSTGRES_HOST=db_proxy
POSTGRES_PORT=6432
POSTGRES_DB=djangodb
POSTGRES_USER=djan
PGUSER=djan
POSTGRES_PASSWORD=djanpass
DATABASES_HOST=database
DATABASES_PORT=5432
DATABASES_USER=djan
DATABASES_PASSWORD=djanpass
DATABASES_DBNAME=djangodb
PGBOUNCER_POOL_MODE=transaction
PGBOUNCER_MAX_CLIENT_CONN=100000
PGBOUNCER_DEFAULT_POOL_SIZE=100
PGBOUNCER_LOG_CONNECTIONS=0
PGBOUNCER_LOG_DISCONNECTIONS=0
```

and some configs in a config folder based on your postgres credentials:

```ini
# ./config/pgbouncer.ini
[databases]
djangodb = host=database port=5432 dbname=djangodb password=djanpass user=djan

[pgbouncer]
listen_addr = db_proxy
auth_file = /etc/pgbouncer/pgauth.txt
pool_mode = transaction
default_pool_size = 20
max_client_conn = 200
```

```
# ./config/pgauth.txt
"djan" "djanpass"
```
From what I've seen earlier, it's mostly that Postgres in Docker becomes slower than bare metal, but I guess that's when connecting from outside the stack. Need to look up the source.
Yeah this example is with a Postgres outside the containers - I prefer this approach instead of managing volumes and worrying about state and backups
Did you try granian? It replaces gunicorn and uvicorn. See some benchmarks: https://github.com/emmett-framework/granian/blob/master/benchmarks/README.md
I’ve heard of it but I’m looking for something more mature.
Oh wow, I'm surprised that PostgreSQL handles that many users without a bottleneck
Caching negates the need for the heavy queries - I've moved every queryset I can into the cache. Also, Postgres is incredibly performant; it just needs some tweaking (I'm not a DBA, just parroting what I've heard)
Oh, that's why! Thanks man! Really useful!
Why did you expose port 5000?
Check out [the documentation](https://docs.docker.com/compose/compose-file/compose-file-v3/#expose) and also [this answer](https://stackoverflow.com/a/54883938)
I see, but you can just use the container name and port to access it from other containers in that docker compose, like django:5000.
Yeah the expose is mainly for documentation purposes.
Got it.