Stack on a Box | .NET Core, NGINX, and Flask on Ubuntu
Being able to develop a tidy stack on a small, portable box is very convenient. In this post, I will show how I set up a full stack application on a small Ubuntu box hosted by Digital Ocean. NGINX will host the applications (the front-end .NET Core application and the Flask/Python API) in server blocks. I will also cover the proxy-header configuration needed for OAuth2 (such as AWS Cognito) to work correctly behind NGINX.
Create an Ubuntu Instance on Digital Ocean
I’ve created the smallest droplet instance in Digital Ocean. As of this writing, it costs only $5/month. Droplets are very easy to manage and easy to upgrade. Follow the prompts and set up a project with your droplet.
Create a floating IP. You will use this IP to connect to the droplet via SSH. Out of the box, the SSH server is not installed and port 22 is closed, and since we'll be doing a majority of our work over SSH, we will want to enable that. A floating IP is optional; you can use the droplet's public IP as well.

Create Sudo User
However, before we can do *anything*, let's create our sudo user. Open up your "Console" via the Digital Ocean interface (upper right corner) and log in as root. Use the password that was sent to you on droplet creation, or have a new one sent (alternatively, you can set up SSH keys). Once you're logged in as root, create your user, follow the prompts, and add them to the sudo group.
# adduser jen
# usermod -aG sudo jen
Install SSH
Next we’ll update and install SSH.
$ sudo apt update
$ sudo apt install openssh-server
Once installed, SSH will start automatically. You can check its status with the following command:
sudo systemctl status ssh

Ubuntu comes with a firewall configuration tool called UFW. If the firewall is enabled on your system, make sure to open the SSH port:
sudo ufw allow ssh
With this rule in place, you can now use programs such as ssh, PuTTY, or WinSCP to access your box. Try it out in a terminal.

Install .NET Core Requirements
.NET Core Runtimes
Open a terminal and ssh into your machine. We need to add the Microsoft repository key and feed.
sudo wget https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
Next, we’ll install the .NET Core runtime. Because we are only running the site and not doing any debugging/troubleshooting, we only need this package. You can also install the SDK if you think you’ll do development/debugging on the site.
sudo apt-get update
sudo apt-get install apt-transport-https
sudo apt-get update
sudo apt-get install aspnetcore-runtime-3.1
Application | Include Override Headers Snippet to Startup.cs
Because NGINX is our reverse proxy, it will strip some request headers before they reach the application. We need to add some header overrides to ensure that information makes it through. Failure to do so will result in redirect_uri_mismatch errors with OAuth.
Install Microsoft.AspNetCore.HttpOverrides via NuGet, and update your Startup.cs file.
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});
app.UseAuthentication(); // add before other middleware
Create ASP.NET Core Deployable Project (manual deployment)
Use WinSCP to connect to your server. Because of the permissions at the final deployment location, stage the files in a temporary folder in the user's home directory. This guide uses a user named 'jen', who is part of the sudo group.
On your local desktop, publish your project. In Visual Studio, a folder publish profile with the Release configuration works well; from the command line, dotnet publish -c Release does the same.
Once publishing completes, copy your 'publish' folder over to your user's home folder on the server. You should see MyProject.dll inside the 'publish' folder. We now have our app ready and in a deployable state.
Install NGINX
Next, we are going to get Nginx and ensure it runs on startup.
sudo -s
nginx=stable # use nginx=development for latest development version
add-apt-repository ppa:nginx/$nginx
apt-get update
apt-get install nginx
Because we just installed it, explicitly start the service.
sudo service nginx start
Now we need to configure Nginx. We are assuming that we are hosting one site. The config file we will modify lives here: /etc/nginx/sites-available/default
Open this file in a text editor (nano or vim). Note that the configuration below listens on plain HTTP; we will add SSL later in this post. Replace your server block with the code below:
sudo nano /etc/nginx/sites-available/default
# ---- default -----
server {
    listen 80;
    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The code above has the .NET Core application running on port 5000. If you've changed the port, be sure to change this value. The other parts, such as the header directives, ensure that all header information is passed to the application, allowing OAuth2 to work correctly.
Verify that your config has no syntax errors. If it's OK, reload NGINX to pick up the changes.
$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
$ sudo nginx -s reload
Now move your code over to the location we defined in the default config.
sudo cp -r /home/jen/publish/* /var/www/html
Running the Application Within Session
Navigate to the directory where the files were placed (/var/www/html) and run:
$ dotnet MyProject.dll
If you run into a permission error, update the permissions:
$ chmod u+x MyProject.dll
Running the Application Persistently
Now we will run the application as a service so that it runs independently of the SSH session. Create a service file for the application:
sudo nano /etc/systemd/system/myproject.service
Include the following configuration:
[Unit]
Description=MyProject web application
[Service]
WorkingDirectory=/var/www/html
ExecStart=/usr/bin/dotnet /var/www/html/MyProject.dll
# Restart the service after 10 seconds if the dotnet process crashes
Restart=always
RestartSec=10
SyslogIdentifier=myproject-web-app
Environment=ASPNETCORE_ENVIRONMENT=Production
[Install]
WantedBy=multi-user.target
Then we want to run it:
sudo systemctl enable myproject.service
sudo systemctl start myproject.service
sudo systemctl status myproject.service
If you make any modifications to the .service file, you should stop the service, modify it, and then reload.
$ sudo systemctl stop myproject.service
$ sudo nano /etc/systemd/system/myproject.service
$ sudo systemctl daemon-reload
$ sudo systemctl start myproject.service
It should also be noted that if you re-deploy your .NET Core code, you will need to restart the service so the new build is picked up.
Install NGINX Server Blocks
We are going to need a place to host our API that the front-end will connect to, yet we do not want to leave this box. Thus, we are going to host a private API in a server block. This API will be written in Python using the Flask module.
After the initial ProjectAPI site is up, we will add a single server block for the API. We will leave the current html folder as it is for the front-end (default, residing in the html folder).
Make the api folder in the user's home folder. We will be serving up the API using Gunicorn, and it needs user-level write permissions, so we will leave it there.
sudo mkdir -p ~/api
#Set the permissions
sudo chown -R $USER:$USER ~/api
#Copy over the current configuration to the api configuration file.
sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/api
#Edit the file and modify to add the following:
sudo nano /etc/nginx/sites-available/api
Here is an example configuration for a private-facing API, with options for both a local name and a public subdomain:
server {
    listen 80;
    #CHOOSE ONE
    server_name api.myproject.local; #can be anything
    #server_name subdomain.mysite.com; #if it were public I'd do something like this
    location / {
        proxy_pass http://localhost:5001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Note that the port it proxies to is the port the API will be served on (we will set that up below).
For your .NET application, open up your appsettings.json file and ensure that this local address is what is accessed as your endpoint (note that this file is something you set up for your application):
"Endpoints": {
    "MyProjectAPI": "http://api.myproject.local/"
},
If using a public subdomain, the only difference would be to uncomment the server_name line for the domain instead of the local name. server_name contains the (sub)domain where the project is hosted; in our case we are not binding to a domain and will just use a local address. And, because this API is private, the endpoint in the appsettings.json file is a local address rather than a domain name or public IP address.
Next, open up your /etc/hosts file and add the api.myproject.local address to the list:
sudo nano /etc/hosts
# Your system has configured 'manage_etc_hosts' as True.
# As a result, if you wish for changes to this file to persist
# then you will need to either
# a.) make changes to the master file in /etc/cloud/templates/hosts.debian.tmpl
# b.) change or remove the value of 'manage_etc_hosts' in
# /etc/cloud/cloud.cfg or cloud-config from user-data
#
127.0.1.1 ubuntu-machinename ubuntu-machinename
127.0.0.1 localhost
127.0.0.1 api.myproject.local #<-- add this line
Create a symlink in the sites-enabled folder for nginx to pick up:
sudo ln -s /etc/nginx/sites-available/api /etc/nginx/sites-enabled/
After the symlink has been created, we will modify nginx.conf:
sudo nano /etc/nginx/nginx.conf
Then ensure that this line is left uncommented:
http {
. . .
server_names_hash_bucket_size 64;
. . .
}
Finally verify the syntax of the config file and restart nginx.
sudo nginx -t
sudo systemctl restart nginx
Set Up Flask Python Application
Much like we did above, use WinSCP to move all of your items into your "api" folder. This will include your app.py and any other files your application depends on. Once everything is moved over, ssh in and cd into that folder. Make sure pip3 installs run with user-level permissions (do not install packages using sudo). We also want to install virtualenv using Python 3.
sudo apt-get -y install python3-pip
sudo apt-get install python3.6-venv
python3 -m pip install --user virtualenv
We will start off by creating the virtual environment for this application. Go to the location where the files will live (~/api) and type:
python3 -m venv venv
Head on into your virtual environment:
source venv/bin/activate
Once here, we will install Flask and Gunicorn.
pip3 install flask gunicorn
Install all necessary modules via your requirements file. If such a file does not exist, install the dependencies individually. To create a requirements file after installing all dependencies, you can type $ pip3 freeze > requirements.txt.
pip3 install -r requirements.txt
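This post assumes you already have a working Flask app to deploy. For reference, a minimal myapp.py sketch that matches the port used in the server block above (the /status route is hypothetical; substitute your real endpoints):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/status")
def status():
    # Simple health-check endpoint (hypothetical; use your real routes)
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    # 5001 is the port the API server block proxies to
    app.run(host="127.0.0.1", port=5001)
```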
Run the app with the development server; you should see something like this:
(venv) jen@myprojectmachine:~/api$ python3 myapp.py
* Serving Flask app "myapp" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:5001/ (Press CTRL+C to quit)
Set Up Gunicorn
Flask ships with a development server (see above), but it is not meant for production use. Thus we will use Gunicorn to serve up our application.
Install Gunicorn
Outside of your venv, install the following (or verify that they are installed)
deactivate
sudo apt update
sudo apt install python3-pip python3-dev build-essential libssl-dev libffi-dev python3-setuptools
Then go into your virtual environment and install the following:
source venv/bin/activate
pip3 install wheel
pip3 install gunicorn flask
python3 myapp.py # run to test
If that is working well, then we will create the production-level WSGI entry point. We'll just call it wsgi.py.
nano ~/api/wsgi.py
# ---------------- wsgi.py ----------------
from myapp import app

if __name__ == "__main__":
    app.run()
Configuring Gunicorn
Gunicorn is a Python WSGI HTTP server that runs inside your application's virtual environment; we installed it with pip3 above. Let's try it out.
cd ~/api
gunicorn --bind 0.0.0.0:5001 wsgi:app
Verify that this is working. It should be listening at http://0.0.0.0:5001 without any issue. Once that is working well, CONTROL-C to exit and then deactivate your venv.
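As a side note, the same flags can live in a gunicorn.conf.py file next to wsgi.py, which Gunicorn reads automatically when you run it with no arguments. This is a hypothetical, file-based equivalent of the command line above (the wsgi_app setting requires Gunicorn 20.1 or newer):

```python
# gunicorn.conf.py -- file-based equivalent of the command-line flags
# used in this post (hypothetical layout; adjust names to your project).
wsgi_app = "wsgi:app"      # module:attribute to serve (Gunicorn 20.1+)
bind = "unix:myapp.sock"   # the socket NGINX will proxy to
workers = 3                # worker process count
umask = 0o007              # socket permissions, like -m 007
```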
Next, let's create the systemd service unit file. Creating a systemd unit file will allow Ubuntu's init system to automatically start Gunicorn and serve the Flask application whenever the server boots. To begin, create a unit file ending in .service within the /etc/systemd/system directory.
sudo nano /etc/systemd/system/myapp.service
# ----------------- myapp.service -----------------
[Unit]
Description=Gunicorn instance to serve MyApp API
After=network.target
[Service]
User=jen
Group=www-data
WorkingDirectory=/home/jen/api
Environment="PATH=/home/jen/api/venv/bin"
ExecStart=/home/jen/api/venv/bin/gunicorn --workers 3 --bind unix:myapp.sock -m 007 wsgi:app
[Install]
WantedBy=multi-user.target
With that, our systemd service file is complete. Save and close it now. We can now start the Gunicorn service we created and enable it so that it starts at boot:
sudo systemctl start myapp
sudo systemctl enable myapp
Let’s check the status:
sudo systemctl status myapp
Configuring Nginx to Proxy Requests
We need to update our api nginx config to proxy to Gunicorn through its socket.
sudo nano /etc/nginx/sites-available/api
In your location block, replace all code with the following:
location / {
    include proxy_params;
    proxy_pass http://unix:/home/jen/api/myapp.sock;
}
To enable the Nginx server block configuration you've just created, link the file into the sites-enabled directory (we created this symlink earlier; skip this step if it already exists).
sudo ln -s /etc/nginx/sites-available/api /etc/nginx/sites-enabled
With the file in that directory, you can test for syntax errors. If successful, restart the Nginx process to pick up the changes.
sudo nginx -t
sudo systemctl restart nginx
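One caveat now that the API sits behind a proxy: the X-Forwarded-* headers NGINX passes along are only honored if the app trusts them. In Python, werkzeug's ProxyFix middleware plays the role that UseForwardedHeaders played for the .NET application earlier; a minimal sketch (the /whoami route is hypothetical, added only to show the effect):

```python
from flask import Flask, request
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
# Trust one proxy hop for the client IP, scheme, and host headers
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1)

@app.route("/whoami")
def whoami():
    # request.url reflects X-Forwarded-Proto once ProxyFix is active
    return request.url
```

With this in place, URLs the app generates (redirects, OAuth callbacks) use https when NGINX terminates SSL in front of it.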
If you encounter any errors, try checking the following:
sudo less /var/log/nginx/error.log: checks the Nginx error logs.
sudo less /var/log/nginx/access.log: checks the Nginx access logs.
sudo journalctl -u nginx: checks the Nginx process logs.
sudo journalctl -u myapp: checks your Flask app's Gunicorn logs.
Securing the Application
SSL Certificates
Grab your SSL certificates and move them onto your server into /etc/nginx. You should have a .chained and a .key file. Next we want to create the Diffie-Hellman group, which is used in secure key negotiation with clients. Type the following to create the .pem file:
sudo openssl dhparam -out /etc/nginx/dhparam.pem 4096
Configuring Nginx to Use SSL
Now we just need to modify our Nginx configuration to take advantage of these new SSL settings. We will create an Nginx configuration snippet in the /etc/nginx/snippets directory.
# /etc/nginx/snippets/ssl-params.conf
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/dhparam.pem;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;
ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
ssl_session_timeout 10m;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off; # Requires nginx >= 1.5.9
ssl_stapling on; # Requires nginx >= 1.3.7
ssl_stapling_verify on; # Requires nginx >= 1.3.7
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
# Disable strict transport security for now. You can uncomment the following
# line if you understand the implications.
# add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
Adjusting the NGINX Configuration to Use SSL
Next we will update the front-end application's server block (the default file in sites-available) with the following:
# /etc/nginx/sites-available/default
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name mysub.mysite.com www.mysub.mysite.com;
    # SSL
    ssl_certificate mysub.mysite.com.chained;
    ssl_certificate_key mysub.mysite.com.key;
    # security
    include snippets/ssl-params.conf;
    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
# HTTP redirect
server {
    listen 80;
    listen [::]:80;
    server_name *.mysub.mysite.com mysub.mysite.com;
    return 301 https://mysub.mysite.com$request_uri;
}
With this file we now serve the site from port 443, so the location block moves to the server listening on 443, while the server on port 80 simply redirects to 443. The 443 server block pulls in the ssl-params.conf snippet as well as the chained cert and key.
Adjusting the Firewall
We need to adjust some settings on the UFW firewall. We'll use UFW as it is pretty standard. Let's install it (it may already be present) and then check it out.
sudo apt install ufw
sudo ufw enable
sudo ufw status
It will probably look like this, meaning that only HTTP traffic is allowed to the web server:
Output
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx HTTP ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx HTTP (v6) ALLOW Anywhere (v6)
Now, to additionally let in HTTPS traffic, we can allow the “Nginx Full” profile and then delete the redundant “Nginx HTTP” profile allowance.
sudo ufw allow 'Nginx Full'
sudo ufw delete allow 'Nginx HTTP'
Finally, verify the syntax of your NGINX configuration and restart it.
sudo nginx -t
sudo systemctl restart nginx
Conclusion
If you followed this guide, you should have a nice, compact stack set up on Digital Ocean, secured with SSL. The front-end is a .NET Core application that talks to a private Flask/Python API. We've set up server blocks and have the services running persistently.
In a future post, I’ll discuss how to secure your application using AWS Cognito, both with and without a load balancer (ALB).