Of course! Deploying a Python application on a Linux server is a fundamental skill for any developer. This guide will walk you through the most common and robust methods, from the simple to the professional.

We'll cover:
- The Core Concepts: What you need before you start.
- Method 1: The Simple Way (Using `systemd`): Best for single, simple applications.
- Method 2: The Standard Way (Using Gunicorn + Nginx): The most common and recommended approach for web apps.
- Method 3: The Professional Way (Using Docker): The modern standard for consistency and scalability.
- Method 4: The Scalable Way (Using Process Managers like `systemd` or `supervisord`): For running multiple processes or complex apps.
Prerequisites: The Foundation
Before deploying, you need a Linux server. You can get one from a cloud provider like AWS, DigitalOcean, Linode, or even a VPS from Vultr.
- A Linux Server: A fresh Ubuntu 22.04 server is a great choice for this guide.
- SSH Access: You should be able to connect to your server via `ssh user@your_server_ip`.
- A Python Application: Let's assume you have a simple Flask app for this example. Create a file named `app.py`:

  ```python
  # app.py
  from flask import Flask

  app = Flask(__name__)

  @app.route('/')
  def hello():
      return "Hello from Python deployed on Linux!"

  if __name__ == '__main__':
      # In production, you will NOT use the built-in server.
      # This is just for local testing.
      app.run(host='0.0.0.0', port=5000)
  ```

- Project Files: Your project should have a `requirements.txt` file listing its dependencies. Create `requirements.txt`:

  ```text
  Flask
  gunicorn  # We'll need this for Method 2
  ```
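Before copying anything to a server, it's worth a quick local smoke test to confirm the app runs. A minimal check, assuming Python 3 and a POSIX shell on your local machine, might look like this:

```bash
# On your local machine, in the project directory
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python app.py                  # starts Flask's development server on port 5000

# In a second terminal: you should see the hello message
curl http://localhost:5000/
```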
Method 1: The Simple Way (systemd Service)
This method is great for running a single script or a simple application as a background service that starts automatically on boot.

Steps:
1. Copy your files to the server (create the target directory first so `scp` has somewhere to put them):

   ```bash
   # On your local machine
   ssh user@your_server_ip "mkdir -p /home/user/my_app"
   scp app.py requirements.txt user@your_server_ip:/home/user/my_app/
   ```

2. SSH into your server:

   ```bash
   ssh user@your_server_ip
   ```

3. Set up a Virtual Environment (Highly Recommended):

   ```bash
   cd /home/user/my_app
   python3 -m venv venv
   source venv/bin/activate
   pip install -r requirements.txt
   ```
4. Create a `systemd` service file: This file tells the Linux system how to run your application.

   ```bash
   sudo nano /etc/systemd/system/my_app.service
   ```

   Paste the following configuration, adjusting the paths and user (`user`) to match your setup:

   ```ini
   [Unit]
   Description=My Python App
   After=network.target

   [Service]
   User=user
   Group=user
   WorkingDirectory=/home/user/my_app
   ExecStart=/home/user/my_app/venv/bin/gunicorn --workers 3 --bind unix:/home/user/my_app/my_app.sock app:app
   ExecReload=/bin/kill -s HUP $MAINPID
   Restart=always

   [Install]
   WantedBy=multi-user.target
   ```

   - `User`/`Group`: The user that will run the service.
   - `WorkingDirectory`: The directory where your code is.
   - `ExecStart`: The most important line. We're using Gunicorn here (even in the simple method) because it's robust. It runs your app (`app:app` means the `app` variable in `app.py`) and creates a socket file (`my_app.sock`).

5. Start and Enable the Service:

   ```bash
   sudo systemctl daemon-reload
   sudo systemctl start my_app
   sudo systemctl enable my_app   # Starts on boot
   ```

6. Check the Status:

   ```bash
   sudo systemctl status my_app
   ```

   If it's active (running), great! You can check the logs with `journalctl -u my_app -f`.
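Because Gunicorn is bound to a Unix socket rather than a TCP port, nothing will answer on `localhost:5000` yet. If your server's curl supports the `--unix-socket` flag (curl 7.40+ does), you can talk to the socket directly as a quick sanity check before wiring up a proxy:

```bash
# On the server: request the homepage through Gunicorn's socket
curl --unix-socket /home/user/my_app/my_app.sock http://localhost/
```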
Limitation: This method runs your app, but it doesn't expose it to the internet. You need a reverse proxy like Nginx (see Method 2) for that.
Method 2: The Standard Way (Gunicorn + Nginx)
This is the industry standard for deploying Python web applications. Nginx acts as a "reverse proxy," handling incoming public traffic and forwarding it to your Python application (Gunicorn), which listens on a private Unix socket or local port.
Steps:
1. Follow steps 1-3 from Method 1 to get your code and virtual environment ready on the server.

2. Install Nginx:

   ```bash
   sudo apt update
   sudo apt install nginx
   ```

3. Configure Nginx: Create a new server block configuration file for your app.

   ```bash
   sudo nano /etc/nginx/sites-available/my_app
   ```

   Paste this configuration:

   ```nginx
   server {
       listen 80;
       server_name your_domain.com;  # Or your_server_ip

       location / {
           # This is the magic: proxy to the Gunicorn socket
           proxy_pass http://unix:/home/user/my_app/my_app.sock;
           proxy_set_header Host $host;
           proxy_set_header X-Real-IP $remote_addr;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header X-Forwarded-Proto $scheme;
       }
   }
   ```

4. Enable the Site:

   ```bash
   # Create a symbolic link to enable the site
   sudo ln -s /etc/nginx/sites-available/my_app /etc/nginx/sites-enabled/

   # Remove the default site to avoid conflicts
   sudo rm /etc/nginx/sites-enabled/default

   # Test the Nginx configuration
   sudo nginx -t

   # If the test is successful, reload Nginx
   sudo systemctl reload nginx
   ```

5. Create the `systemd` service for Gunicorn: This is similar to Method 1, but now we're focusing on Gunicorn.

   ```bash
   sudo nano /etc/systemd/system/gunicorn.service
   ```

   Paste this config:

   ```ini
   [Unit]
   Description=Gunicorn instance to serve my_app
   After=network.target

   [Service]
   User=user
   Group=user
   WorkingDirectory=/home/user/my_app
   ExecStart=/home/user/my_app/venv/bin/gunicorn \
       --workers 3 \
       --bind unix:/home/user/my_app/my_app.sock \
       app:app

   [Install]
   WantedBy=multi-user.target
   ```

6. Start and Enable Gunicorn:

   ```bash
   sudo systemctl daemon-reload
   sudo systemctl start gunicorn
   sudo systemctl enable gunicorn
   ```

7. Check Everything:

   - Check Gunicorn status: `sudo systemctl status gunicorn`
   - Check Nginx status: `sudo systemctl status nginx`
   - Visit `http://your_server_ip` or `http://your_domain.com` in your browser. You should see "Hello from Python deployed on Linux!"
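If the browser shows a 502 Bad Gateway or nothing at all, the logs usually tell you why (a common culprit is Nginx's `www-data` user not having permission to traverse `/home/user` to reach the socket). Two places worth watching:

```bash
# Nginx's error log (permission problems reaching the socket show up here)
sudo tail -f /var/log/nginx/error.log

# Gunicorn's output via systemd's journal
sudo journalctl -u gunicorn -f
```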
Method 3: The Professional Way (Docker)
Docker packages your application and all its dependencies into a portable "container." This ensures your app runs the same way on your laptop, in staging, and in production.
Steps:
1. On your local machine, create a `Dockerfile` in your project's root directory (a production-oriented variant of its last lines is sketched after this list):

   ```dockerfile
   # Use an official Python runtime as a parent image
   FROM python:3.9-slim

   # Set the working directory in the container
   WORKDIR /app

   # Copy the dependencies file and install them
   COPY requirements.txt .
   RUN pip install --no-cache-dir -r requirements.txt

   # Copy the rest of the application code into the container
   COPY . .

   # Make port 5000 available to the world outside this container
   EXPOSE 5000

   # Define environment variables
   ENV FLASK_APP=app.py
   ENV FLASK_RUN_HOST=0.0.0.0

   # Run the application
   CMD ["flask", "run"]
   ```

2. Build the Docker image:

   ```bash
   docker build -t my-python-app .
   ```

3. Run the container:

   ```bash
   # Run it in the background (-d), mapping port 5000 on your host to 5000 in the container
   docker run -d -p 5000:5000 --name my-running-app my-python-app
   ```

   Now you can visit `http://localhost:5000` on your local machine.
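One caveat: `flask run` starts Flask's built-in development server, which the earlier `app.py` comment warns against using in production. A minimal variant of the Dockerfile's final lines that serves the app with Gunicorn instead (assuming `gunicorn` is in `requirements.txt`, as in the earlier example) might look like this:

```dockerfile
# Serve with Gunicorn instead of the Flask development server.
# Assumes gunicorn is listed in requirements.txt.
EXPOSE 5000
CMD ["gunicorn", "--workers", "3", "--bind", "0.0.0.0:5000", "app:app"]
```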
Deploying to a Server:
The real power of Docker is deploying the image.
1. Push your image to a registry:

   ```bash
   # Tag the image for a registry like Docker Hub
   docker tag my-python-app your-dockerhub-username/my-python-app:latest

   # Push it to the registry
   docker push your-dockerhub-username/my-python-app:latest
   ```

2. On your server:

   - Install Docker: `curl -fsSL https://get.docker.com -o get-docker.sh && sudo sh get-docker.sh`
   - Pull the image: `docker pull your-dockerhub-username/my-python-app:latest`
   - Run the container: `docker run -d -p 80:5000 --name my-running-app your-dockerhub-username/my-python-app:latest`
With the container mapped straight to port 80, Nginx isn't strictly necessary, but for production you'd still put Nginx in front of Docker as a reverse proxy to handle SSL termination, load balancing, and static files; a sketch of that setup follows.
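As a rough sketch of that layered setup (not part of the steps above): run the container on a loopback-only port, say `-p 127.0.0.1:5000:5000` instead of `-p 80:5000`, and let an Nginx server block like the one from Method 2 own port 80:

```nginx
# /etc/nginx/sites-available/my_app -- sketch; assumes the container was started
# with `-p 127.0.0.1:5000:5000` so Nginx, not Docker, listens on port 80.
server {
    listen 80;
    server_name your_domain.com;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```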
Method 4: The Scalable Way (supervisord or advanced systemd)
For more complex applications, you might need to run multiple processes (e.g., a web server, a worker for background tasks, a scheduler). systemd can handle this, but a dedicated process manager like Supervisor is often easier to configure.
Concept: Supervisor runs a "program" for each part of your application. It automatically restarts them if they crash.
1. Install Supervisor:

   ```bash
   sudo apt install supervisor
   ```

2. Create a configuration file for your app:

   ```bash
   sudo nano /etc/supervisor/conf.d/my_app.conf
   ```

3. Define your programs:

   ```ini
   [program:my_app_web]
   command=/home/user/my_app/venv/bin/gunicorn --workers 3 --bind 127.0.0.1:8000 app:app
   directory=/home/user/my_app
   user=user
   autostart=true
   autorestart=true
   stderr_logfile=/var/log/supervisor/my_app_web_err.log
   stdout_logfile=/var/log/supervisor/my_app_web_out.log

   [program:my_app_worker]
   command=/home/user/my_app/venv/bin/python worker.py
   directory=/home/user/my_app
   user=user
   autostart=true
   autorestart=true
   stderr_logfile=/var/log/supervisor/my_app_worker_err.log
   stdout_logfile=/var/log/supervisor/my_app_worker_out.log
   ```

   (Note: `worker.py` is a hypothetical file for background tasks; a minimal sketch follows this list.)

4. Tell Supervisor to read the new config and start the programs:

   ```bash
   sudo supervisorctl reread
   sudo supervisorctl update
   sudo supervisorctl start my_app_web
   sudo supervisorctl start my_app_worker
   ```
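Purely for illustration, here is a minimal placeholder for that hypothetical `worker.py` — the kind of long-running loop Supervisor would keep alive and restart if it crashed:

```python
# worker.py -- hypothetical background worker used only to illustrate
# a long-running process that Supervisor supervises and restarts.
import time


def process_pending_tasks():
    # Placeholder: a real worker might poll a queue or a database table here.
    print("Checking for background tasks...")


if __name__ == "__main__":
    while True:
        process_pending_tasks()
        time.sleep(10)
```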
Summary: Which Method to Choose?
| Method | Best For | Pros | Cons |
|---|---|---|---|
| `systemd` (Simple) | Simple scripts, single-process apps, quick prototypes. | Built into Linux, simple, reliable, auto-starts on boot. | Not ideal for web apps without a proxy; less flexible for scaling. |
| Gunicorn + Nginx (Standard) | Most Python web applications. The go-to production stack. | Robust, scalable, secure, industry standard, handles static files. | Requires managing two services (Gunicorn, Nginx). |
| Docker (Professional) | Ensuring consistency across environments, microservices, CI/CD pipelines. | Portability, isolation, simplified dependency management. | Can have a steeper learning curve; slightly higher resource overhead. |
| Supervisor / Advanced `systemd` | Applications with multiple processes (web, workers, schedulers). | Manages multiple processes, easy to configure and monitor. | Adds another layer to manage. |
For 95% of new Python web applications, start with Method 2 (Gunicorn + Nginx). It's the perfect blend of power, simplicity, and performance. If you need to manage complex dependencies or ensure your app runs identically everywhere, add Docker on top of that stack.
