How To Serve Flask Applications with uWSGI and Nginx on Ubuntu 16.04

Introduction

In this guide, we will be setting up a simple Python application using the Flask micro-framework on Ubuntu 16.04. The bulk of this article will be about how to set up the uWSGI application server to launch the application and Nginx to act as a front end reverse proxy.

Prerequisites

Before starting on this guide, you should have a non-root user configured on your server. This user needs to have sudo privileges so that it can perform administrative functions. To learn how to set this up, follow our initial server setup guide.

To learn more about uWSGI, our application server, and the WSGI specification, you can read the linked section of this guide. Understanding these concepts will make this guide easier to follow.

When you are ready to continue, read on.

Install the Components from the Ubuntu Repositories

Our first step will be to install all of the pieces that we need from the repositories. We will install pip, the Python package manager, in order to install and manage our Python components. We will also get the Python development files needed to build uWSGI, and we'll install Nginx now as well.

We need to update the local package index and then install the packages. The packages you need depend on whether your project uses Python 2 or Python 3.

If you are using Python 2, type:

sudo apt-get update

sudo apt-get install python-pip python-dev nginx

If, instead, you are using Python 3, type:

sudo apt-get update

sudo apt-get install python3-pip python3-dev nginx

Create a Python Virtual Environment

Next, we'll set up a virtual environment in order to isolate our Flask application from the other Python files on the system.

Start by installing the virtualenv package using pip.

If you are using Python 2, type:

sudo pip install virtualenv

If you are using Python 3, type:

sudo pip3 install virtualenv

Now, we can make a parent directory for our Flask project. Move into the directory after you create it:

mkdir ~/myproject

cd ~/myproject

We can create a virtual environment to store our Flask project's Python requirements by typing:

virtualenv myprojectenv

This will install a local copy of Python and pip into a directory called myprojectenv within your project directory.

Before we install applications within the virtual environment, we need to activate it. You can do so by typing:

source myprojectenv/bin/activate

Your prompt will change to indicate that you are now operating within the virtual environment. It will look something like this: (myprojectenv)user@host:~/myproject$.
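If you would like to double-check which interpreter is active, the short Python sketch below is an optional check that is not part of the original guide. While the environment is activated, both values should point inside ~/myproject/myprojectenv rather than the system-wide installation:

import sys

# With the virtual environment active, these paths point into myprojectenv;
# after you run "deactivate" later on, they point to the system Python again.
print(sys.prefix)
print(sys.executable)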

Set Up a Flask Application

Now that you are in your virtual environment, we can install Flask and uWSGI and get started on designing our application:

Install Flask and uWSGI

We can use the local instance of pip to install Flask and uWSGI. Type the following commands to get these two components:

Note

Regardless of which version of Python you are using, when the virtual environment is activated, you should use the pip command (not pip3).

pip install uwsgi flask

Create a Sample App

Now that we have Flask available, we can create a simple application. Flask is a micro-framework. It does not include many of the tools that more full-featured frameworks might, and exists mainly as a module that you can import into your projects to assist you in initializing a web application.

While your application might be more complex, we'll create our Flask app in a single file, which we will call myproject.py :

nano ~/myproject/myproject.py

Within this file, we'll place our application code. Basically, we need to import Flask and instantiate a Flask object. We can use this to define the functions that should be run when a specific route is requested:

~/myproject/myproject.py

from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello There!"

if __name__ == "__main__":
    app.run(host='0.0.0.0')

This basically defines what content to present when the root domain is accessed. Save and close the file when you're finished.
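If you later want the application to respond on other paths as well, you would add more view functions in the same way. The variant below is purely illustrative and not part of the deployment in this guide; the /about route and its message are hypothetical:

from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello There!"

# Hypothetical additional route: requests to /about are handled by this
# function, while requests to the root path still reach hello() above.
@app.route("/about")
def about():
    return "A minimal Flask application served by uWSGI and Nginx."

if __name__ == "__main__":
    app.run(host='0.0.0.0')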

If you followed the initial server setup guide, you should have a UFW firewall enabled. In order to test our application, we need to allow access to port 5000.

Open up port 5000 by typing:

sudo ufw allow 5000

Now, you can test your Flask app by typing:

python myproject.py

Visit your server's domain name or IP address followed by :5000 in your web browser:

http://server_domain_or_IP:5000

You should see the application's "Hello There!" greeting in your browser.

When you are finished, hit CTRL-C in your terminal window a few times to stop the Flask development server.

Create the WSGI Entry Point

Next, we'll create a file that will serve as the entry point for our application. This will tell our uWSGI server how to interact with the application.

We will call the file wsgi.py :

nano ~/myproject/wsgi.py

The file is incredibly simple; we can simply import the Flask instance from our application and then run it:

~/myproject/wsgi.py

from myproject import app

if __name__ == "__main__":
    app.run()

Save and close the file when you are finished.

Configure uWSGI

Our application is now written and our entry point established. We can now move on to uWSGI.

Testing uWSGI Serving

The first thing we will do is test to make sure that uWSGI can serve our application.

We can do this by simply passing it the name of our entry point. This is constructed by the name of the module (minus the .py extension, as usual) plus the name of the callable within the application. In our case, this would be wsgi:app .
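In other words, uWSGI will import the wsgi module and use its app attribute as the WSGI callable. The rough Python sketch below only mirrors that idea (run from the project directory with the virtual environment active); it is not uWSGI's actual import machinery:

# What "wsgi:app" means, approximately: import the module named on the left
# of the colon and look up the attribute named on the right.
from wsgi import app

# Flask application objects implement the WSGI callable interface,
# so this prints True.
print(callable(app))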

We'll also specify the socket so that it will be started on a publicly available interface and the protocol so that it will use HTTP instead of the uwsgi binary protocol. We'll use the same port number that we opened earlier:

uwsgi --socket 0.0.0.0:5000 --protocol=http -w wsgi:app

Visit your server's domain name or IP address with :5000 appended to the end in your web browser again:

http://server_domain_or_IP:5000

You should see your application's output again.

When you have confirmed that it's functioning properly, press CTRL-C in your terminal window.

We're now done with our virtual environment, so we can deactivate it:

deactivate

Any Python commands will now use the system's Python environment again.

Creating a uWSGI Configuration File

We have tested that uWSGI is able to serve our application, but we want something more robust for long-term usage. We can create a uWSGI configuration file with the options we want.

Let's place that in our project directory and call it myproject.ini :

nano ~/myproject/myproject.ini

Inside, we will start off with the [uwsgi] header so that uWSGI knows to apply the settings. We'll specify the module by referring to our wsgi.py file, minus the extension, and that the callable within the file is called "app":

~/myproject/myproject.ini

[uwsgi]
module = wsgi:app

Next, we'll tell uWSGI to start up in master mode and spawn five worker processes to serve actual requests:

~/myproject/myproject.ini

[uwsgi]
module = wsgi:app

master = true
processes = 5

When we were testing, we exposed uWSGI on a network port. However, we're going to be using Nginx to handle actual client connections, which will then pass requests to uWSGI. Since these components are operating on the same computer, a Unix socket is preferred because it is more secure and faster. We'll call the socket myproject.sock and place it in this directory.

We'll also have to change the permissions on the socket. We'll be giving the Nginx group ownership of the uWSGI process later on, so we need to make sure the group owner of the socket can read information from it and write to it. We will also clean up the socket when the process stops by adding the "vacuum" option:

~/myproject/myproject.ini

[uwsgi]
module = wsgi:app

master = true
processes = 5

socket = myproject.sock
chmod-socket = 660
vacuum = true

The last thing we need to do is set the die-on-term option. This can help ensure that the init system and uWSGI have the same assumptions about what each process signal means. Setting this aligns the two system components, implementing the expected behavior:

~/myproject/myproject.ini

[uwsgi]
module = wsgi:app

master = true
processes = 5

socket = myproject.sock
chmod-socket = 660
vacuum = true

die-on-term = true

You may have noticed that we did not specify a protocol like we did from the command line. That is because by default, uWSGI speaks using the uwsgi protocol, a fast binary protocol designed to communicate with other servers. Nginx can speak this protocol natively, so it's better to use this than to force communication by HTTP.

When you are finished, save and close the file.

Create a systemd Unit File

The next piece we need to take care of is the systemd service unit file. Creating a systemd unit file will allow Ubuntu's init system to automatically start uWSGI and serve our Flask application whenever the server boots.

Create a unit file ending in .service within the /etc/systemd/system directory to begin:

sudo nano /etc/systemd/system/myproject.service

Inside, we'll start with the [Unit] section, which is used to specify metadata and dependencies. We'll put a description of our service here and tell the init system to only start this after the networking target has been reached:

/etc/systemd/system/myproject.service

[Unit]
Description=uWSGI instance to serve myproject
After=network.target

Next, we'll open up the [Service] section. We'll specify the user and group that we want the process to run under. We will give our regular user account ownership of the process since it owns all of the relevant files. We'll give group ownership to the www-data group so that Nginx can communicate easily with the uWSGI processes.

We'll then map out the working directory and set the PATH environment variable so that the init system knows where the executables for the process are located (within our virtual environment). Next, we'll specify the command used to start the service. Systemd requires that we give the full path to the uWSGI executable, which is installed within our virtual environment. We will pass the name of the .ini configuration file we created in our project directory:

/etc/systemd/system/myproject.service

[Unit]
Description=uWSGI instance to serve myproject
After=network.target

[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/myproject
Environment="PATH=/home/sammy/myproject/myprojectenv/bin"
ExecStart=/home/sammy/myproject/myprojectenv/bin/uwsgi --ini myproject.ini

Finally, we'll add an [Install] section. This will tell systemd what to link this service to if we enable it to start at boot. We want this service to start when the regular multi-user system is up and running:

/etc/systemd/system/myproject.service

[Unit]
Description=uWSGI instance to serve myproject
After=network.target

[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/myproject
Environment="PATH=/home/sammy/myproject/myprojectenv/bin"
ExecStart=/home/sammy/myproject/myprojectenv/bin/uwsgi --ini myproject.ini

[Install]
WantedBy=multi-user.target

With that, our systemd service file is complete. Save and close it now.

We can now start the uWSGI service we created and enable it so that it starts at boot:

sudo systemctl start myproject

sudo systemctl enable myproject
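With the service running, the myproject.sock file should now exist in the project directory. If you would like to confirm its ownership and permissions before moving on to Nginx, here is an optional Python sketch. It is not part of the original guide and assumes the sammy user and the paths used in the examples above:

import grp
import os
import pwd
import stat

# Path to the socket created by uWSGI; adjust it if your username or
# project directory differs from the examples in this guide.
sock_path = "/home/sammy/myproject/myproject.sock"

st = os.stat(sock_path)
print("is a socket:", stat.S_ISSOCK(st.st_mode))   # expect True
print("mode:", oct(stat.S_IMODE(st.st_mode)))      # expect 660 (owner and group read/write)
print("owner:", pwd.getpwuid(st.st_uid).pw_name)   # expect your user, e.g. sammy
print("group:", grp.getgrgid(st.st_gid).gr_name)   # expect www-data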

Configuring Nginx to Proxy Requests

Our uWSGI application server should now be up and running, waiting for requests on the socket file in the project directory. We need to configure Nginx to pass web requests to that socket using the uwsgi protocol.

Begin by creating a new server block configuration file in Nginx's sites-available directory. We'll simply call this myproject to keep in line with the rest of the guide:

sudo nano /etc/nginx/sites-available/myproject

Open up a server block and tell Nginx to listen on the default port 80. We also need to tell it to use this block for requests for our server's domain name or IP address:

/etc/nginx/sites-available/myproject

server {
    listen 80;
    server_name server_domain_or_IP;
}

The only other thing that we need to add is a location block that matches every request. Within this block, we'll include the uwsgi_params file that specifies some general uWSGI parameters that need to be set. We'll then pass the requests to the socket we defined using the uwsgi_pass directive:

/etc/nginx/sites-available/myproject

server {
    listen 80;
    server_name server_domain_or_IP;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/sammy/myproject/myproject.sock;
    }
}

That's actually all we need to serve our application. Save and close the file when you're finished.

To enable the Nginx server block configuration we've just created, link the file to the sites-enabled directory:

sudo ln -s /etc/nginx/sites-available/myproject /etc/nginx/sites-enabled

With the file in that directory, we can test for syntax errors by typing:

sudo nginx -t

If this returns without indicating any issues, we can restart the Nginx process to read our new configuration:

sudo systemctl restart nginx

The last thing we need to do is adjust our firewall again. We no longer need access through port 5000, so we can remove that rule. We can then allow access to the Nginx server:

sudo ufw delete allow 5000

sudo ufw allow 'Nginx Full'

You should now be able to go to your server's domain name or IP address in your web browser:

http://server_domain_or_IP

You should see your application's output.
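If you prefer to check from the command line rather than a browser, a small Python smoke test such as the following can fetch the page through Nginx. This is an optional extra that is not part of the original guide; replace server_domain_or_IP with your server's actual domain name or IP address:

# Works on both Python 2 and Python 3.
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

response = urlopen("http://server_domain_or_IP")
print(response.getcode())        # expect 200
print(response.read().decode())  # expect the "Hello There!" message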

Note

After configuring Nginx, the next step should be securing traffic to the server using SSL/TLS. This is important because without it, all information, including passwords, is sent over the network in plain text.

The easiest way to get an SSL certificate to secure your traffic is to use Let's Encrypt. Follow this guide to set up Let's Encrypt with Nginx on Ubuntu 16.04.

Conclusion

In this guide, we've created a simple Flask application within a Python virtual environment. We created a WSGI entry point so that any WSGI-capable application server can interface with it, and then configured the uWSGI app server to provide this function. Afterwards, we created a systemd service file to automatically launch the application server on boot. Finally, we created an Nginx server block that relays external web client traffic to the application server.

Flask is a very simple but extremely flexible framework meant to provide your applications with functionality without being too restrictive about structure and design. You can use the general stack described in this guide to serve the Flask applications that you design.

