Stack on a Box | .Net Core, NGINX, Flask on Ubuntu.
Posted on May 15, 2020
Being able to develop a tidy stack on a small, portable box is very convenient. In this post, I will highlight how I set up a full-stack application on a small Ubuntu box hosted by Digital Ocean. NGINX will be used to host the applications (the front-end .NET Core application and the Flask/Python API) in server blocks. Securing the application with AWS Cognito OAuth2 will be covered in a future post.
Create an Ubuntu Instance on Digital Ocean
I’ve created the smallest droplet instance in Digital Ocean. As of this writing, it costs only $5/month. Droplets are very easy to manage and easy to upgrade. Follow the prompts and set up a project with your droplet.
Create a floating IP. You will use this IP to connect to the droplet via SSH. Out of the box, the SSH server is not installed and port 22 is closed, and since we'll be doing the majority of our work over SSH, we will want to enable that first. A floating IP is optional; I set one up, but you can use the droplet's public IP instead.

Create Sudo User
However, before we can do *anything*, let's create our sudo user. Open up your "Console" via the interface (see above image in the upper right corner) and log in as root. Use the password that was sent to you on droplet creation or have them send you a new one (alternatively you can set up SSH keys). Once you're logged in as root, create your user, follow the prompts, and add them to the sudo group.
# adduser jen
# usermod -aG sudo jen
Install SSH
Next we’ll update and install SSH.
$ sudo apt update
$ sudo apt install openssh-server
Once installed, SSH will start automatically. You can check its status with the following command:
sudo systemctl status ssh

Ubuntu comes with a firewall configuration tool called UFW. If the firewall is enabled on your system, make sure to open the SSH port:
sudo ufw allow ssh
With this done, you can now use programs such as ssh, PuTTY, or WinSCP to access your box. Try it out in a terminal.

Install .NET Core Requirements
.NET Core Runtimes
Open a terminal and ssh into your machine. We need to add the Microsoft repository key and feed.
sudo wget https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
Next, we’ll install the .NET Core runtime. Because we are only running the site and not doing any debugging/troubleshooting, we only need this package. You can also install the SDK if you think you’ll do development/debugging on the site.
sudo apt-get update
sudo apt-get install apt-transport-https
sudo apt-get update
sudo apt-get install aspnetcore-runtime-3.1
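You can confirm the runtime is in place before moving on:
dotnet --list-runtimes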
Application | Include Override Headers Snippet to Startup.cs
Because NGINX is our reverse proxy, some of the original request headers will not reach the application unless they are forwarded. We need to add some header overrides to ensure that information makes it to the application. Failure to do so will result in redirect URI mismatch errors with OAuth.
Install Microsoft.AspNetCore.HttpOverrides via NuGet, and update your Startup.cs file.
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});
app.UseAuthentication(); //add before other middleware
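For this single-box setup, where Nginx and the app share localhost, the snippet above is sufficient because loopback proxies are trusted by default. If your reverse proxy ever lives on a different host, you may also need to clear the known networks/proxies lists; here is a sketch of that variant (use with care, since it trusts forwarded headers from anywhere):
var forwardedHeadersOptions = new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
};
// Only needed when the reverse proxy is not on localhost.
forwardedHeadersOptions.KnownNetworks.Clear();
forwardedHeadersOptions.KnownProxies.Clear();
app.UseForwardedHeaders(forwardedHeadersOptions);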
Create ASP.NET Core Deployable Project (manual deployment)
Use WinSCP to connect to your server. Due to permissions at the location where we will eventually deploy the site, you can stage the files in a temporary folder in the user's home directory. This document uses a user named 'jen', which is part of the sudo group.
On your local desktop, publish your project with the following settings:

Once the project has been published, copy your 'publish' folder over to your user folder on the server. You should see MyProject.dll inside the 'publish' folder. We now have our app ready and in a deployable state.
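If you prefer the command line over the Visual Studio publish dialog, the same 'publish' folder can be produced from the project directory (the output folder name is up to you):
dotnet publish -c Release -o ./publish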
Install NGINX
Next, we are going to get Nginx and ensure it runs on startup.
sudo -s
nginx=stable # use nginx=development for latest development version
add-apt-repository ppa:nginx/$nginx
apt-get update
apt-get install nginx
Because we just installed it, explicitly start the service.
sudo service nginx start
Now we need to configure Nginx. We are assuming that we are hosting one site. You can find the config file we will modify here: /etc/nginx/sites-available/default
Open this file in a text editor (nano or vim). Note that traffic security (SSL) settings are covered later in this post. Replace your "location" block with the code below:
sudo nano /etc/nginx/sites-available/default
# ---- default -----
server {
listen 80;
location / {
proxy_pass http://localhost:5000;
proxy_http_version 1.1;
proxy_cache_bypass $http_secret_header;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
The configuration above assumes the .NET Core application is running on port 5000. If you've changed the port, be sure to change this value. The other parts, such as those for the headers, ensure that all header information is passed to the application, allowing OAuth2 to work correctly.
Verify that your config has no syntax errors. If it's OK, reload NGINX to pick up the changes.
# sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
$ sudo nginx -s reload
Now move your code over to the location we defined in the default config.
sudo cp -r /home/jen/publish/* /var/www/html
Running the Application Within Session
Navigate to the directory where the files were placed (/var/www/html) and run:
dotnet MyProject.dll
If you run into a permission error, update the permissions:
chmod u+x MyProject.dll
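From a second SSH session, you can confirm the app is listening on Kestrel's default port (5000 here):
curl -I http://localhost:5000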
Running the Application Persistently
Now we will run the application as a service so that it can run independent from the ssh session. Create a service file for the application:
sudo nano /etc/systemd/system/myproject.service
Include the following configuration:
[Unit]
Description=MyProject web application
[Service]
WorkingDirectory=/var/www/html
ExecStart=/usr/bin/dotnet /var/www/html/MyProject.dll
Restart=always
RestartSec=10 # Restart service after 10 seconds if dotnet service crashes
SyslogIdentifier=myproject-web-app
Environment=ASPNETCORE_ENVIRONMENT=Production
[Install]
WantedBy=multi-user.target
Then we want to run it:
sudo systemctl enable myproject.service
sudo systemctl start myproject.service
sudo systemctl status myproject.service
If you make any modifications to the .service file, you should stop the service, modify it, and then reload.
$ sudo systemctl stop myproject.service
$ sudo nano /etc/systemd/system/myproject.service
$ sudo systemctl daemon-reload
$ sudo systemctl start myproject.service
It should also be noted that if you re-deploy your .NET Core code, you will need to restart the service so the new build is picked up.
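For example, after copying updated files into /var/www/html:
sudo systemctl restart myproject.service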
Install NGINX Server Blocks
We are going to need a place to host our API that the front-end will connect to, yet we do not want to leave this box. Thus, we are going to host a private API in a server block. This API will be written in Python using the Flask module.
After the initial front-end site is up, we will add a single server block for the API. We will leave the current html folder as it is for the front-end (the default, residing in the folder html).
Make the api folder in the user's home folder. We will be serving up the API using Gunicorn, and it needs user-level write permissions, so we will leave it here.
sudo mkdir -p ~/api
#Set the permissions
sudo chown -R $USER:$USER ~/api
#Copy over the current configuration to the api configuration file.
sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/api
#Edit the file and modify to add the following:
sudo nano /etc/nginx/sites-available/api
Here is an example configuration for a private-facing API, with options for both a local name and a public subdomain:
server {
listen 80;
#CHOOSE ONE
server_name api.myproject.local; #can be anything
#server_name subdomain.mysite.com; #if it were public I'd do something like this
location / {
proxy_pass http://localhost:5001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Note that the port it is being passed to is the port the API is being served on (we will set that up below).
For your .NET application, open up your appsettings.json file and ensure that the local address is what is being accessed as your endpoint (note that this file is something you set up for your application):
"Endpoints": {
"MyProjectAPI": "http://api.myproject.local/"
},
If using a subdomain, the only difference would be to uncomment the public server_name line instead of the local one. The server_name directive contains the (sub)domain where the project is hosted. In our case we are not binding to a domain (nor a subdomain) and will just use a local address.
And, because this API is private, the endpoint in the appsettings.json file is a local address, instead of a domain name or IP address.
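For illustration, here is roughly how the front-end might read that endpoint and call the API (the class, endpoint path, and DI setup are hypothetical; your own application will differ):
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;

public class MyProjectApiClient
{
    private readonly HttpClient _http;
    private readonly string _baseUrl;

    public MyProjectApiClient(HttpClient http, IConfiguration config)
    {
        _http = http;
        // Reads "http://api.myproject.local/" from appsettings.json
        _baseUrl = config["Endpoints:MyProjectAPI"];
    }

    public Task<string> GetStatusAsync()
    {
        // api.myproject.local resolves to 127.0.0.1 (added to /etc/hosts in the next step),
        // and Nginx routes the request to the Flask API by server_name.
        return _http.GetStringAsync(_baseUrl + "status"); // "status" is a hypothetical endpoint
    }
}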
Next, open up your /etc/hosts file and add the api.myproject.local address to the list:
sudo nano /etc/hosts
# Your system has configured 'manage_etc_hosts' as True.
# As a result, if you wish for changes to this file to persist
# then you will need to either
# a.) make changes to the master file in /etc/cloud/templates/hosts.debian.tmpl
# b.) change or remove the value of 'manage_etc_hosts' in
# /etc/cloud/cloud.cfg or cloud-config from user-data
#
127.0.1.1 ubuntu-machinename ubuntu-machinename
127.0.0.1 localhost
127.0.0.1 api.myproject.local #<-- add this line
Create a symlink to the sites-enabled folder for nginx to pickup.
sudo ln -s /etc/nginx/sites-available/api /etc/nginx/sites-enabled/
After the symlink has been created, we will modify nginx.conf:
sudo nano /etc/nginx/nginx.conf
Then ensure that this line is left uncommented:
http {
. . .
server_names_hash_bucket_size 64;
. . .
}
Finally verify the syntax of the config file and restart nginx.
sudo nginx -t
sudo systemctl restart nginx
Set Up Flask Python Application
Much like we did above, use WinSCP to move all of your files into your "api" folder. This will include your application (called myapp.py in the examples below) and any other files it depends on. Once everything is moved over, ssh in and cd into that folder. Make sure pip3 is used with the current user-level permissions (do not install using sudo). We also want to install virtualenv using Python 3.
sudo apt-get -y install python3-pip
sudo apt-get install python3.6-venv
python3 -m pip install --user virtualenv
We will start off by creating the virtual environment for this application. Go to the location where the files will live (~/api) and type:
python3 -m venv venv
Head on into your virtual environment:
source venv/bin/activate
Once here, we will install Flask and Gunicorn.
pip3 install flask gunicorn
Install all necessary modules via your requirements file. If such a file does not exist, install the necessary dependencies individually; to create a new requirements file after installing everything, you can type pip3 freeze > requirements.txt.
pip3 install -r requirements.txt
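If you are following along without an existing application, a minimal myapp.py might look something like this (a sketch only; your real app will have its own routes). Run it with python3 myapp.py.
# myapp.py -- minimal example application
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # simple health-check style response
    return jsonify(status="ok")

if __name__ == "__main__":
    # port 5001 matches the port Nginx proxies to above
    app.run(port=5001)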
You should see something like this:
(venv) jen@myprojectmachine:~/api$ python3 myapp.py
* Serving Flask app "myapp" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:5001/ (Press CTRL+C to quit)
Set Up Gunicorn
Flask ships with a development server (see above), but it is not meant for production use. Thus we will use Gunicorn to serve up our application.
Install Gunicorn
Outside of your venv, install the following (or verify that they are installed):
deactivate
sudo apt update
sudo apt install python3-pip python3-dev build-essential libssl-dev libffi-dev python3-setuptools
Then go into your virtual environment and install the following:
source venv/bin/activate
pip3 install wheel
pip3 install gunicorn flask
python3 myapp.py (to test)
If that is working well, we will want to create the production-level WSGI entry point. We'll just call it wsgi.py.
nano ~/api/wsgi.py
//----------------wsgi.py------------------------------
from myapp import app

if __name__ == "__main__":
    app.run()
Configuring Gunicorn
Gunicorn is a Python WSGI server that will run your application from within the virtual environment; we installed it with pip3 above. Let's try it out.
cd ~/api
gunicorn --bind 0.0.0.0:5001 wsgi:app
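While it is running, a quick sanity check from a second SSH session should return your app's response:
curl http://localhost:5001/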
Verify that this is working. It should be listening at http://0.0.0.0:5001 without any issue. Once that is working well, CONTROL-C to exit and then deactivate your venv.
Next, let's create the systemd service unit file. Creating a systemd unit file will allow Ubuntu's init system to automatically start Gunicorn and serve the Flask application whenever the server boots. Create a unit file ending in .service within the /etc/systemd/system directory to begin.
sudo nano /etc/systemd/system/myapp.service
//-----------------myapp.service----------
[Unit]
Description=Gunicorn instance to serve MyApp API
After=network.target
[Service]
User=jen
Group=www-data
WorkingDirectory=/home/jen/api
Environment="PATH=/home/jen/api/venv/bin"
ExecStart=/home/jen/api/venv/bin/gunicorn --workers 3 --bind unix:myapp.sock -m 007 wsgi:app
[Install]
WantedBy=multi-user.target
With that, our systemd service file is complete. Save and close it now. We can now start the Gunicorn service we created and enable it so that it starts at boot:
sudo systemctl start myapp
sudo systemctl enable myapp
Let’s check the status:
sudo systemctl status myapp
Configuring Nginx to Proxy Requests
We need to go ahead and update our api nginx config to proxy with Gunicorn.
sudo nano /etc/nginx/sites-available/api
In your location block, replace all code with the following:
location / {
include proxy_params;
proxy_pass http://unix:/home/jen/api/myapp.sock;
}
We already linked this server block configuration to the sites-enabled directory earlier. If you skipped that step, create the symlink now:
sudo ln -s /etc/nginx/sites-available/api /etc/nginx/sites-enabled
With the file in that directory, you can test for syntax errors. If successful, restart the Nginx process to pick up the changes.
sudo nginx -t
sudo systemctl restart nginx
If you encounter any errors, try checking the following:
- sudo less /var/log/nginx/error.log: checks the Nginx error logs.
- sudo less /var/log/nginx/access.log: checks the Nginx access logs.
- sudo journalctl -u nginx: checks the Nginx process logs.
- sudo journalctl -u myapp: checks your Flask app's Gunicorn logs.
Securing the Application
SSL Certificates
Grab your SSL certificates and move them onto your server into /etc/nginx. You should have a .chained and a .key file. Next we want to create the Diffie-Hellman group, which is used in negotiating secure connections with clients. Type the following to create the .pem file.
sudo openssl dhparam -out /etc/nginx/dhparam.pem 4096
Configuring Nginx to Use SSL
Now we just need to modify our Nginx configuration to take advantage of these new SSL settings. We will create an Nginx configuration snippet in the /etc/nginx/snippets directory.
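Create the snippet file (the filename should match the include line used in the server block below):
sudo nano /etc/nginx/snippets/ssl-params.conf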
//ssl-params.conf
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/dhparam.pem;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;
ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
ssl_session_timeout 10m;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off; # Requires nginx >= 1.5.9
ssl_stapling on; # Requires nginx >= 1.3.7
ssl_stapling_verify on; # Requires nginx => 1.3.7
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
# Disable strict transport security for now. You can uncomment the following
# line if you understand the implications.
# add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
Adjusting the NGINX Configuration to Use SSL
Next we will update the front-end application's server block (default) with the following:
//etc/nginx/sites-available/default
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name mysub.mysite.com www.mysub.mysite.com;
# SSL
ssl_certificate mysub.mysite.com.chained;
ssl_certificate_key mysub.mysite.com.key;
# security
include snippets/ssl-params.conf;
location / {
proxy_pass http://localhost:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
#HTTP Redirect
server {
listen 80;
listen [::]:80;
server_name *.mysub.mysite.com mysub.mysite.com;
return 301 https://mysub.mysite.com$request_uri;
}
In this file, we now serve the site from port 443, so the location block moves to the server listening on 443. On port 80, we simply redirect to 443. The 443 server block includes the ssl-params.conf snippet as well as the chained certificate and key.
Adjusting the Firewall
We need to adjust some settings on the ufw firewall. We’ll use this as it is pretty standard. Let’s install it and then check it out.
sudo apt install ufw
sudo ufw enable
sudo ufw status
It will probably look like this, meaning that only HTTP traffic is allowed to the web server:
Output
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx HTTP ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx HTTP (v6) ALLOW Anywhere (v6)
Now, to additionally let in HTTPS traffic, we can allow the “Nginx Full” profile and then delete the redundant “Nginx HTTP” profile allowance.
sudo ufw allow 'Nginx Full'
sudo ufw delete allow 'Nginx HTTP'
Finally, verify the syntax of your NGINX configuration and restart it.
sudo nginx -t
sudo systemctl restart nginx
Conclusion
If you followed this guide, you should have a nice, compact stack all set up on Digital Ocean. It should be secured with SSL. The front-end is a .Net Core application that talks to a private Flask/Python API. We’ve set up server blocks and have the services running persistently.
In a future post, I’ll discuss how to secure your application using AWS Cognito, both with and without a load balancer (ALB).
COVID-19
Posted on April 24, 2020
Lately, a lot of COVID-19 data has been made available for analysis. I was interested in doing something other than the time series analyses that everyone is doing.
Correlations are fun, and I was thinking about different hypotheses about how data might be related. For example:
- Might more affluent areas have fewer, or more, cases of COVID-19?
- Could one’s political affiliation be correlated with cases of COVID-19?
One might argue that Republicans are more likely to be affluent, and according to what I’ve seen in the news, Republicans might take social distancing measures less seriously. Republicans are also typically of an older demographic, and being that the 65+ crowd is more vulnerable, perhaps all of these factors contribute to a higher correlation with COVID-19 cases?
First I will play around with correlations and make some nice graphs to get a visual of the data.
DataSets Used:
- COVID-19
- Political Affiliation by County
- MEDIAN INCOME IN THE PAST 12 MONTHS (IN 2018 INFLATION-ADJUSTED DOLLARS) BY GEOGRAPHICAL MOBILITY IN THE PAST YEAR FOR RESIDENCE 1 YEAR AGO IN THE UNITED STATES
I decided to explore Median income as it relates to COVID-19 deaths. I grabbed the total median income and removed any “null” rows so that I was left with only “valid” data. I then did an inner join on this file with the COVID-19 file, on County and State.
Next, it is obvious that larger counties with larger populations will naturally have more COVID cases, so to mitigate this I worked with percentages: the median income and the number of deaths were each divided by the county population N and then multiplied by 100 to get a percentage.
Here is what I found:

I did a correlation for each state (if available) to get a good visual to ascertain whether some states showed this same correlation or not (and to speculate as to why as well).
Looking at these correlations was interesting. Some states, I noted (at least Illinois, California, and New York, which have populous cities and are very Democratic), had positive, significant correlations. Other states did not, notably some Republican states such as Florida and Arizona. These casual correlations had me wondering about political affiliation. Given that the current situation has the country very politically divided, perhaps one's political affiliation (at least at the state level) would play a part in the number of COVID cases as it relates to income.
Just to get an idea of whether there was a basic relationship between the number of COVID cases and party, I graphed just that: I divided the number of COVID cases into two groups, those from Republican states and those from Democratic states.

This really surprised me as I thought for sure those more right-leaning people in the news who are protesting the Shelter In Place orders might have higher levels of infection. But, it should be noted that different states have different capacities for testing, and New York and other “more progressive” states have been much more proactive in getting their population tested. This could explain the correlations in these states, as well as the increased number of recorded cases for democrat states vs. republican states.
Surely there was an interaction of some sort, or multiple factors were in play. Next I wanted to explore whether or not a county’s Party and Median Income played a role in the number of cases. I decided to run a multiple linear regression.
> summary(fit)
Call:
lm(formula = p.cases ~ MedIncomeTotal + IsDemocrat, data = bigDataSet)
Residuals:
Min 1Q Median 3Q Max
-0.24329 -0.05881 -0.02446 0.01095 2.49839
Coefficients: (1 not defined because of singularities)
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.044e-01 2.969e-02 -3.518 0.00046 ***
MedIncomeTotal 5.051e-06 9.664e-07 5.227 2.21e-07 ***
IsDemocrat 6.302e-02 1.295e-02 4.865 1.38e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.1646 on 784 degrees of freedom
Multiple R-squared: 0.08065, Adjusted R-squared: 0.0783
F-statistic: 34.39 on 2 and 784 DF, p-value: 4.839e-15
I found it interesting that all pieces of this regression were meaningful! Time to plot the two equations — one for Democrats and one for Republicans:
Republicans (coded as 0): y = -0.01044 + 0.0000051x
Democrats (coded as 1): y = 0.05262 + 0.0000051x

The data become a bit clearer now. My original hypothesis that Republicans likely suffer more COVID cases seems to be true; it's just that the spread of the cases in Democratic counties is more varied, with some counties having a LOT of cases (especially New York, which is that outlier up there).
Might more affluent areas have fewer, or more, cases of COVID-19?
Overall, the answer to this question is YES, although the correlation is weak. The correlation does not exist in every state, which could be due to differences in testing and death-documentation capacity. Not all states had enough data to get a reliable analysis, either. State-wise, the correlations were all over the place: some not significant, some positive, and some negative.
Could one’s political affiliation be correlated with cases of COVID-19?
Just directly looking at the cases by county, it appears that Democrats are more likely to be infected and die from COVID-19. However, a multiple linear regression that considers one’s party along with income level by county shows that for both parties, as median income increases, so does the number of COVID-19 deaths (which corresponds with the original correlation that was run), but also that this line is higher for Republicans, meaning that for a given Median Income Level, they will likely have more COVID-19 cases than Democrats.
Why is this? Perhaps my original speculation is true. Republicans are typically of an older age-group, and we know that the 65+ crowd is more vulnerable to COVID-19, and so they are more likely to die from it. It could also be that Republicans are practicing less social distancing. But, that is confounded by the fact that cities, which are more populous, are largely Democrat, so one may speculate that the closer confines, despite social distancing measures, will invariably infect more people.
Yet, age also correlates with wealth. Perhaps the 65+ crowd in general accounts for the increase in COVID-19 cases with increases in median income.
The most believable conclusion to me is that age plays a major factor — it can explain both the Party and Median Income variables. Older people are generally more wealthy and more Republican, and the model seems to explain that nicely.
Usings in C#
Posted on January 24, 2020

When starting a new job in a position of leadership, you will often inherit less-than-optimal code. Perhaps some of this code is "legacy", or was written unchecked by previous developers and made its way to production. Depending on the competence of your predecessor, some of this code may be below your standards.
In my experience, I discovered a lot of this sort of code. The code-base I inherited came from a small team of inexperienced developers with no formal engineering education who had nevertheless gotten the startup off the ground and running. The code was neither optimal nor well written, but it got the job done. Now, though, I was tasked with building a proper engineering team, complete with version control and a scalable management style (I adopted agile).
One offense I discovered early on was in a collection of poorly written web services. These were Microsoft Web Services 2.0 (SOAP), deprecated since (2006?), and written in .vb scripts (not even compiled). The code was full of SQL injection vulnerabilities along with a lot of copy-pasted code and poor naming conventions. Much of this had to change, organically.
One easy but large, low-hanging-fruit type of error was the offense of leaving connections to the database open. This would routinely fill the app pool and degrade the user experience, and could even take down the IIS server and the hosting machine! It was unacceptable for a user on an average internet connection to have the site take more than 10 seconds to load.
Imagine opening up the code and seeing a function as this:
[WebMethod]
public DataTable GetData(int Id, string entityName)
{
SqlConnection connection = ConnectDB.connectToDB(Id.ToString());
string sql = @"SELECT * FROM MyTable where name = '" + entityName + "'";
SqlCommand cmd = new SqlCommand(sql, connection);
try
{
connection.Open();
SqlDataAdapter adapter = new SqlDataAdapter(cmd);
DataTable table = new DataTable("Data");
adapter.Fill(table);
return table;
}
catch
{
return new DataTable("NoData");
}
finally
{
if (connection != null)
{
connection.Close();
}
}
}
There is a lot to unpack here. A lot.
First and foremost, we cannot move forward without recognizing the blatant SQL injection vulnerability. The not-so-obfuscated variable "entityName" is concatenated onto the SQL command string. It should instead be added to the SqlCommand as a parameter.
Secondly, the use of try/catch/finally here is not what I would recommend. I try to avoid try/catches where possible, as I find them to be somewhat poor performance-wise.
Lastly, the lack of using IDisposable where available will definitely impact the user experience; not only will it leave stale connections open and occupy the IIS app pool, but it could potentially "take down" a server once IIS begins refusing connections to the database. One would have to tweak IIS to continually refresh the app pool, which is not recommended. It is also not scalable; are you really going to refresh the app pool on potentially every page load? I would hope not!
This pattern was ubiquitous throughout the entire set of services I inherited. The services consisted of approximately 20 separate .asmx services, some of them thousands of lines of code. On top of that, the code in general was not very well organized and had a lot of copy/pasted code (i.e. it was not written in an object-oriented way; in fact, not one model or class existed when I inherited it).
As part of an Epic, I set out to refactor these main issues out of this set of services:
- Use IDisposable where applicable – This meant placing using blocks on SqlConnection, SqlCommand, SqlDataAdapter, DataTable, and readers and writers, to name the most common.
- Get rid of SQL injection vulnerabilities – This means to ensure we add the variables we want to use as parameters to the SQL Command.
- Do not concatenate immutable objects – Strings are immutable! It irks me when they are continually concatenated, as it forces me to sit there and imagine each and every character being re-written to memory on every concatenation. Ugh! Use a StringBuilder, please (see the short sketch after this list)!
- Remove try/catch/finally – the usings will get rid of the finally by closing the connection automatically, and instead of falling into a “catch”, a proper exception should be logged (and thrown if in debug mode).
- Reorganize copy/pasted code – This means creating static classes where applicable, or creating objects where necessary.
- Break apart huge files, and organize according to function – I like to have all CRUD operations for a particular entity together in the file, in the order READ, CREATE, UPDATE, DELETE, and then ordered alphabetically according to the entity upon which they operate. If a file is excessively long, say over 1000 lines, then it needs to be broken apart. For example, a service can (and should!) share the same namespace, but its larger functions can be broken apart into different files. The more files the better, in my opinion, as they are smaller, more modular, and easier to work with.
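As a quick illustration of the StringBuilder point, here is a small sketch (hypothetical data, not from the actual services):
using System.Text;

// Build one string without re-allocating on every concatenation.
var builder = new StringBuilder();
foreach (var tableName in new[] { "orders", "invoices", "customers" })
{
    builder.Append("Processed table: ").AppendLine(tableName);
}
string report = builder.ToString();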
Thus, I re-wrote the set of services within a two-week sprint, along with all appropriate unit tests to ensure I did not break core functionality. Additionally, I created a manual for the team to follow with respect to coding guidelines, so such a mess would be mitigated in the future.
Publishing these services showed a fantastic uptick in performance. With the old configuration, our infrastructure failed at around 600 simultaneous users, with very poor, 1990s-ish performance before that point. The new updates allowed us over 1000 users with no degradation in performance (with default IIS settings); we hit the simultaneous-user ceiling of our test! Huzzah! It was a breath of fresh air, since the band-aid rolled out years prior was to configure IIS to refresh after every 200 connections.
A re-written function above would look like this:
[WebMethod]
public DataTable GetData(int Id, string name)
{
var sql = @"SELECT * FROM MyTable where name = @entityName";
using (var connection = ConnectDB.connectToDB(Id.ToString()))
{
using(var cmd = new SqlCommand(sql, connection))
{
cmd.Parameters.AddWithValue("@entityName", name);
try
{
connection.Open();
using(var adapter = new SqlDataAdapter(cmd))
{
using(var table = new DataTable("Data"))
{
adapter.Fill(table);
return table;
}
}
}
catch (Exception ex)
{
//or log this somewhere
Console.WriteLine(ex.Message);
//rethrow so the caller knows the call failed (and so all code paths return or throw)
throw;
}
}
}
}
The Scientific Mindset
Posted on January 4, 2020

I earned my PhD at the end of 2013 with a thesis topic on low-level vision science. Obtaining a PhD is a long and sometimes arduous task. One cannot be afraid of digging in and “doing it yourself” – it is that attitude along with a large dose of tenacity and caffeine that can get you to the finish line.
However, the main takeaway from the pursuit of a PhD is the training in how to be a scientist. It is surprising how many people I meet day to day who do not think like scientists, so much so that I need to change the way I speak with them.
Since my PhD, I have found myself within the domain of software engineering. I originally had aspirations of “doing something” within the field of computational neuroscience, particularly with respect to the visual system. I wasn’t sure exactly what, but I allowed the cards to fall where they would, and eventually I found myself in a small consulting company that was interested in using AI and looking to expand into the tech industry.
A scientist goes about a problem by following the scientific method. First she gathers all known information and constraints: what do we already KNOW about the problem? Next she forms a hypothesis, an educated guess as to what the problem could be. Then each hypothesis is deconstructed in a very objective way with a series of what-ifs and expected results, without emotional attachment to any outcome. Every hypothesis is controlled very carefully; typically there is one independent and one dependent variable, and in terms of debugging a programmed function, it is a series of very pointed "if/else" questions. Finally, each hypothesis is tested with the utmost rigor. Are we still unable to disprove the null hypothesis?
Background knowledge is essential to being a good debugger. One must fully understand the context of the problem. But even if the entire context is unknown, one is able to find a “hook and line” and follow the problem “backwards” until reaching the point of failure.
Let's take an example of the kind of problem I am often thrown into at work. As with many companies, we have quite a bit of legacy code we are forced to maintain, only because many vital business transactions and processes rely on it. On occasion, an issue will pop up that no one knows anything about.
In my tenure as a software engineer, I have worked with my fair share of non-scientist developers, with the difference being quite notable. These non-scientists typically are unable to formulate a hypothesis and instead opt to simply code around the issue. Many will offer to refactor huge swaths of code, regardless of the technical debt involved. Of course this is challenged by people like me, and when pressed for follow-up as to why a particular solution is chosen by the non-scientist developer, the answer usually falls flat with a weak “I don’t know” or I’ve even had the person become highly critical of past architectural decisions made by the company. Such non-scientists tend to be more defensive about their code-writing as they lack the ability to defend its architecture and motivation for certain decisions.
Personally, I want “the truth” and sometimes the truth hurts. I’m not the best developer nor the best manager, but I recognize this and strive always towards what is better. I live off of constructive feedback because that is the only surefire way to grow. I owe my scientific attitude to the notion of “the more I know the more I realize I don’t know”. I am never convinced that there is a RIGHT answer – I constantly speculate. Could there be a better solution? Maybe that person knows more than I do, let’s have a listen.
It is important to stay humble and to realize that you don’t have all the answers. Only then is there room for growth. Taking a scientific approach to debugging unleashes the power to debug ANYTHING, for you can simply follow the trail of facts until hypotheses and assumptions are no longer being met.
Surprisingly, I enjoy debugging. I like digging into the unknown and figuring out the puzzle. I like to tap into existing patterns and simply inject a solution into current frameworks (regardless of whether they are "right" or not). I pride myself on finding a bug and fixing it with minimal technical debt. Bug squashing can be a tedious job and it's not always as fun as creating something from scratch with all the fancy bells and whistles, but it is inherently satisfying to crack open the code and track down an issue that has eluded others for so long. I can do this because I approach the problem like a scientist, and it is this same attitude that I am always on the lookout for when scoping out others to join the team.
ASPNET AJAX Control Toolkit Tip
Posted on December 10, 2019
In this day and age, async is king. We are used to pages loading asynchronously and deferring the loading of not-as-important scripts or images until later in the page lifecycle. We do it with lazy loading, with AJAX, and with smaller calls to the service layer to populate parts of the view model.
However, such a convention does not necessarily jive with ASP.NET WebForms. A webform has a full posted-back page lifecycle, from page initialization to page load, postback, and unload (there are more stages, but that's the gist). Every action on the page generates a postback, forcing the entire page lifecycle to start again. Depending on your configuration, this can degrade the user's experience.
With webforms, the user’s state was used to carry their selection or preferences across requests. These sorts of things can also be stored in the ViewState and can allow for a page to be functional. Note, however, that ViewStates (client side) and SessionStates (server side) can get bloated and overloaded, so they should be used carefully.
By default, the ViewState on a page is set to True.
I was updating a page to be more dynamic and responsive to user input. At the time, we did not have the luxury of rewriting it to use more asynchronous methods or a JavaScript framework, so I was working with what I had.
I had an Accordion object that held a set of filters. These filters were set up much like you would see on Amazon.com — the filters in a left menu would control data in the main div of the page. The previous iteration had the user select all filters and then “apply” that to the data. The page would refresh and one would see the filtered data set. This approach created performance issues as multiple trips to the database and an entire reloading of the viewstate and session were initiated.
Not only did we want to improve performance, but we wanted to make it more “dynamic”, in that a user could choose a filter and it would not refresh the entire page, but rather would only refresh certain sections. For this we used update panels.
However, despite having the !IsPostBack check in the appropriate places, I ran into an error. The error stated that I had a repeated control in the accordion.
This happened because I generated the accordion, which held the filters, server-side. It is generated once, at page initialization, with a call to the database, which returned filters based on the logged in user’s account permissions. When the user chose a filter, the set of filters was being appended on each postback, generating the error.
If you ever run into such a case, the key for me was simply to set EnableViewState = false on the control itself. You may need to retain the ViewState for other actions on the page, but conveniently enough you can target specific controls for which ViewState is not retained. Once I did this, the new accordion filters were able to load on each postback.
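In my case that meant something along these lines in the code-behind (the control name here is hypothetical):
protected void Page_Init(object sender, EventArgs e)
{
    // Do not persist the accordion's control tree in ViewState;
    // it is rebuilt from the database on every request anyway.
    filterAccordion.EnableViewState = false;
}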
No, this was not the ideal solution, but it was acceptable in order to get the desired behavior. The generation of the accordion and filters was a legacy decision, and the web services that populated that accordion were coupled very tightly with the application code architecture. This is a great example of where you just have to tap into current patterns and “blend it in” and get it working. Technical debt is a real thing and it is important to be cognizant of that. A business must run and a good leader is able to weigh the pros and cons of taking a specific decision for both the short-term interests and long-term successes of the company.
IdentityServer4 – Global Logout
Posted on March 22, 2019
I’ve created a brand new, micro-services oriented platform at my current company. A key feature to this approach is integration of Single Sign-On. For this, I’ve adopted the wonderful, open-source project IdentityServer4.
The abridged version of the architecture is that the company creates multiple apps with api resources (a big inspiration is the whole Google API market). So, one team may be responsible for App A and another for App B, and so on. Each app is registered with the OAuth provider. Single sign-on allows for a user logging into the platform to login once to access all apps. It’s lovely.
However, it was a bit more challenging to get Single Sign-Out (or single log-out / SLO). Our apps are written in C# .NET Core, meaning we use an MVC pattern and are server-based (as opposed to JavaScript-only, browser-based applications). As such, IdentityServer4 supports both Front-Channel Logout and Back-Channel Logout. I decided to move ahead with front-channel logout.
Logging out from a single client was easy, but the challenge was killing the entire session AND telling all other clients who had active sessions that the user had logged out.
Here is how I set up my simple (but working!) solution.
Step 1: In each of your projects ensure that there are two logout functions, one for a UI logout and one for the front-channel logout. The front-channel logout is called by an iframe from IdentityServer4 when it ends the session (endSession endpoint).
In my setup, because all apps belong to the company, I have one single class that every controller inherits. This class pulls information about the user as needed, but also contains my two logout methods:
public async Task<IActionResult> Logout()
{
await HttpContext.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);
var disco = await client.GetDiscoveryDocumentAsync(config.Value.authority);
return Redirect(disco.EndSessionEndpoint);
}
public async Task<IActionResult> FrontChannelLogout(string sid)
{
if (User.Identity.IsAuthenticated)
{
var currentSid = User.FindFirst("sid")?.Value ?? "";
if (string.Equals(currentSid, sid, StringComparison.Ordinal))
{
await HttpContext.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);
}
}
return NoContent();
}
The first logout action does not need any parameter as it is initiated by the user via a button click on the UI. This function removes the local cookie and logs the user out (using Identity) and then redirects the user to be logged out at the IdentityServer. The IdentityServer then takes care of logging the user out of all active sessions, but ONLY if a Front (or back) channel url is configured.
Step 2: Where you configure your clients (whether in a Config.cs or in the database), point each client's FrontChannelLogoutUri to the location of the front-channel logout action, so that IdentityServer can call it with your session id (sid).
So for me, because all of my controllers inherit my OAuth class, I can call any controller for that project and add “frontchannellogout”. The sid is automatically passed by IdentityServer4 (though you can turn that off in a Configuration section).
For example, on the client configuration in the IdentityServer I have something like this:
new Client
{
ClientId = "my-client-id",
ClientName = "Hello world",
AllowedGrantTypes = GrantTypes.HybridAndClientCredentials,
RequireConsent = false,
ClientSecrets =
{
new Secret("mysecret".Sha256())
},
RedirectUris = Endpoints.redirectDomains,
PostLogoutRedirectUris =Endpoints.redirectDomains,
FrontChannelLogoutUri=Endpoints.BaseUrl+"/home/frontchannellogout",
AllowedScopes =
{
IdentityServerConstants.StandardScopes.OpenId,
IdentityServerConstants.StandardScopes.Profile,
"my-client-api"
},
AllowOfflineAccess = true
},
Every client would have this configured specific to itself. Now that the FrontChannelLogoutUri is configured, when the user logs out of IdentityServer, an iframe is created that calls the FrontChannelLogout endpoint on each client application, effectively ending each local session.
The user is now globally logged out of IdentityServer4.
Switching to Google People API from Google +
Posted on March 7, 2019

On March 7th, Google is pulling the plug on Google+ API. Many sites use Google+ API for basic external user authentication.
At our company, we use Google+ API to support Google logins, but it was time to make the switch to another supported API in order to continue coverage for our users.
Our current in-production application is a .NET Framework application which still uses WebForms (legacy, I know; we are in the process of upgrading to .NET Core).
Making the switch was pretty straightforward, and I wanted to share my experience as I didn't see much in terms of documentation.
Current Situation
Currently, Google+ API is enabled. In my startup.auth.cs, I have the following:
app.UseGoogleAuthentication(new GoogleOAuth2AuthenticationOptions()
{
ClientId = AppSettings.GoogleClientId,
ClientSecret = AppSettings.GoogleSecret,
Provider = new GoogleOAuth2AuthenticationProvider()
{
OnAuthenticated = (context) =>
{
context.Identity.AddClaim(new Claim("urn:google:name", context.Identity.FindFirstValue(ClaimTypes.Name)));
context.Identity.AddClaim(new Claim("urn:google:email", context.Identity.FindFirstValue(ClaimTypes.Email)));
context.Identity.AddClaim(new System.Security.Claims.Claim("urn:google:accesstoken", context.AccessToken, ClaimValueTypes.String, "Google"));
return Task.FromResult(0);
}
}
});
My first step was to simply change out the API being used. Instead of the Google+ API, I used the Google People API. To be safe, I set up another project (with new credentials) so that I could play around in my development environment without impacting my customers.
I set up a new project, added the allowed <my-domain>/signin-google URLs, and installed the new ClientId and Secret.
My first pass didn’t work.
var manager = Context.GetOwinContext().GetUserManager<ApplicationUserManager>();
var signInManager = Context.GetOwinContext().Get<ApplicationSignInManager>();
var loginInfo = Context.GetOwinContext().Authentication.GetExternalLoginInfo();
if (loginInfo == null) //login failed
{
RedirectOnFail();
return;
}
loginInfo was returning null.
It turns out (and this might be obvious to many) that the claims returned by the Google People API differ from those of Google+. So all I had to do was remove the claims I was creating on startup. I didn't need those claims for my basic authentication, so this worked for me.
app.UseGoogleAuthentication(new GoogleOAuth2AuthenticationOptions()
{
ClientId = AppSettings.GoogleClientId,
ClientSecret = AppSettings.GoogleSecret,
Provider = new GoogleOAuth2AuthenticationProvider()
{
OnAuthenticated = (context) =>
{
//note that here is where I removed those claims I was creating before.
return Task.FromResult(0);
}
}
});
After I did that, my google authentication worked perfectly!
My SlackBot
Posted on September 28, 2018

Slack is a wonderful collaboration tool for uniting team communication. I loved the idea of using Slack to automate many of my tasks. As a Software Engineering Director of a small company, one is expected to wear many hats, and I wanted to see what of the many patterned, “repeatable” tasks I could automate for my team.
As mentioned previously, I work for a small company. This tends to mean that we do not have the funds to dump into enterprise software, and also that, as employees of the company, we need to get creative. Back in my undergraduate days, I was the president of the Linux Users Group and helped pioneer an open-source FOSS conference called Flourish!. I was very much into the free-use, free-pay model of software. Open source and open use. Tapping into this idea, our group decided to start out with Trello and then move to Taiga for our project management platform.
Taiga is a free project management platform when hosted internally. It has a lovely UI and is like a more basic view of JIRA. It also comes with a nice API. I decided that I would use this API to not only generate my own metrics reports (for another post), but also to create a sort of “Scrum Master” bot that could remind the team of important issues or stale user stories as we neared the end of a sprint.
The technology we use for the "middle layer" of our application stack is ASP.NET C#. I thoroughly enjoy the framework, in particular .NET Core. My SlackBot application is written as an API that runs on a server with a set of "Helper" classes, allowing me to use SlackBot in a variety of migration scripts (to update users when a particular migration has completed, for example).
The use of “async” is a relatively new feature, introduced somewhere around version 5 of C#. I wanted to ensure that I could integrate SlackBot into some legacy code, if need be. The legacy code at the company was very old, much of it not refactored for years (can you say .vb code?), and it was not worth the technical debt to rewrite any of it, since we had a long-term plan to deprecate as much of it as possible, albeit organically and in time. As such, I wrote both asynchronous and synchronous versions in my SlackBot class.
SlackBot is a static class that lives in a referenced library of any project that will use it. The helper project is written in .net standard, allowing for it to be referenced by both .Net Core and ASPNET frameworks.
using Newtonsoft.Json;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
namespace Anderson.Core.MrTechBot
{
public static class SlackClient
{
//use a single, shared HttpClient for all calls (important for performance and connection reuse)
private static readonly HttpClient _httpClient = new HttpClient();
public static async Task<HttpResponseMessage> SendMessageAsync(string webhook, string message)
{
var payload = new
{
text = message
};
var serializedPayload = JsonConvert.SerializeObject(payload);
var response = await _httpClient.PostAsync(webhook,
new StringContent(serializedPayload, Encoding.UTF8, "application/json"));
return response;
}
public static HttpResponseMessage SendMessage(string webhook, string message)
{
var payload = new
{
text = message
};
var serializedPayload = JsonConvert.SerializeObject(payload);
var response = _httpClient.PostAsync(webhook,
new StringContent(serializedPayload, Encoding.UTF8, "application/json")).Result;
return response;
}
public static void SendToMrTechBot(string webhook, string message)
{
var response = SendMessage(webhook, message);
}
public static async Task SendToMrTechBotAsync(string webhook, string message)
{
var response = await SendMessageAsync(webhook, message);
}
}
}
The code above was really all that was needed. Next we move along to the application logic that utilized this class.
The application must first explicitly define the objects used by the API. I defined those with a set of models that map directly onto the Taiga API per the documentation. Using a singleton HttpClient (very important for performance and general connection cleanup), the application polls the Taiga API every few minutes and sends along any updates.
First, I wanted SlackBot to tell all users if any NEW issues had been assigned to them since the previous day. I wanted this to happen only once and in the morning. The user notified would then be prompted to at least address the new issue; the status of the issue should be changed from NEW to ToDo (after it had been scoped out or readied for Planning Poker, or reassigned). To do this, I dedicated a single field in a datatable on a small MariaDB database that kept track of whether or not a ping had been sent for that day. If the flag was true, no matter how many times the endpoint was called, the users would not be notified again.
public static void UpdateSendReportFlag(int bit)
{
using (var _db = new MySqlConnection(CONNECTION_STRING))
{
string sql = @"update MrTechBot set SentToday=@bit";
using (var cmd = new MySqlCommand(sql, _db))
{
_db.Open();
cmd.Parameters.AddWithValue("@bit", bit);
cmd.ExecuteNonQuery();
_db.Close();
}
}
}
There is also a GET block that is called inside a while loop within the main program. The code block above is how I generally approach the service-layer logic (the "repository") for fetching data. I separate these calls into their own services so that I can connect any sort of API or data mocker to them. The above uses raw MySqlCommands, but I have also used Dapper (small and mighty but limited) as well as Entity Framework (which can almost be "too convenient" in that you're not really writing SQL).
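For completeness, a sketch of the corresponding read might look like this (same table and connection string as the write above):
public static int GetSendReportFlag()
{
    using (var _db = new MySqlConnection(CONNECTION_STRING))
    {
        string sql = @"select SentToday from MrTechBot limit 1";
        using (var cmd = new MySqlCommand(sql, _db))
        {
            _db.Open();
            // ExecuteScalar returns the single flag value (0 or 1)
            return Convert.ToInt32(cmd.ExecuteScalar());
        }
    }
}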
The while loop uses the Taiga API to authenticate, grab all issues, user stories, users, and projects, along with the various severity and priority levels, as well as classification (bug, enhancement, etc). This is all compiled into “UserReport” objects that hold all relevant objects pertaining to the user, and because I follow the same convention across projects, I can easily compile a person’s issues and user stories across all simultaneous projects (something the API does not support as of this post).
The UserReports are compiled and then a message composed and sent to the relevant user. Users are stored in a dictionary of TaigaUserName and SlackName. Because this was an internal piece of software, I simply had a two-column table in MariaDB representing my dictionary. The more correct way would be for the SlackBot to authenticate the user to the application and do the linking for the user. It would still be in database, but for my own purpose I inserted these manually.
public int ProcessBot(Taiga t)
{
var mr_techbot_channel = "https://hooks.slack.com/services/<channel_hash>";
userTaigaMap.Add("jenanderson", mr_techbot_channel);
//...
foreach (var user in t.TaigaUsers)
{
//post to private channel
if (userTaigaMap.Keys.Contains(user.Value.UserName.ToLower()))
{
if (userTaigaMap.Any(x => x.Key.Equals(user.Value.UserName.ToLower())))
taigaSlackUsers.Add(new TaigaSlackUser() { Webhook = userTaigaMap[user.Value.UserName.ToLower()], username = user.Value.UserName, Issues = issues });
}
}
Task.WaitAll(IntegrateWithSlackAsync(taigaSlackUsers));
return 1; //return 1 so we know the block has completed
}
Finally, we compose the message:
private static async Task IntegrateWithSlackAsync(List<TaigaSlackUser> summ)
{
TimeSpan start = new TimeSpan(8, 0, 0); //8 o'clock
TimeSpan end = new TimeSpan(10, 0, 0);
TimeSpan now = DateTime.Now.TimeOfDay;
Dictionary<string, string> emojiPriority = new Dictionary<string, string>();
emojiPriority.Add("high", "🔴");
emojiPriority.Add("normal", ":green-circle:");
emojiPriority.Add("low", "⚪");
if ((now > start) && (now < end) && DateTime.Today.DayOfWeek != DayOfWeek.Saturday && DateTime.Today.DayOfWeek != DayOfWeek.Sunday)
{
if (GetSendReportFlag() == 0)
{
foreach (var itm in summ)
{
StringBuilder builder = new StringBuilder();
builder.AppendLine("*Hi <@" + itm.username + ">!* :wave:");
builder.AppendLine("Please take action on the following NEW issues:");
foreach (var i in itm.Issues)
{
builder.AppendLine(emojiPriority[i.Priority.ToLower()] + "*" + i.ProjectName + " | " + i.Subject + "*");
builder.AppendLine("><URL_TO_HOSTED_TAIGA>" + i.Slug + "/issue/" + i.Reference);
}
if (itm.Issues.Count > 0)
{
var response = await SlackClient.SendMessageAsync(itm.Webhook, builder.ToString());
}
}
UpdateSendReportFlag(1);
}
}
else
{
if (GetSendReportFlag() == 1)
{
UpdateSendReportFlag(0);
}
}
}
}
And there is my basic SlackBot!
Many improvements could be made to this code, and it has yet to be refactored. It started as a fun side-project and some shortcuts were taken for the sake of production. I would prefer to get more sophisticated with the polling logic I’ve instituted. This would be a nice opportunity to integrate SignalR and perhaps Redis for the very basic DB functionality I implement. I would also like to create user interactions with the bot allowing the user to ping SlackBot when they wanted to know of any new issues, or they could ask questions regarding a specific User Story or Project.
Leon’s Fastest Triathlon: Race Report
Posted on November 21, 2014
Third Place and my Finisher’s Medal!
I took part in the Leon’s Fastest Triathlon on June 2, 2013. Despite my complete lack of preparation for the race, I still placed on the podium. I got third place for my age group — amazing!
Days prior to the race I was even considering not racing, as I was not feeling well (was it pre-race jitters? Or something else?). In my last race I had gotten some sort of stomach bug and it really messed things up, and I was feeling very similar a few days out, so I was nervous.
Anyway, SF convinced me to go anyway and just “do it to have fun”, and so I did. We woke up at around 4:30am so that we could arrive at the transition area before it closed. Thankfully we had plenty of time.
The sun was shining and it was cool but not cold. It seemed alright. But then, between transition closing and the start of the first few waves, some dark and heavy clouds began rolling in, and right behind them was a crisp and COLD wind. By the time my wave started (the very last wave — 30 minutes after the elite waves went), everyone was shivering.
The jump into the metallic-tasting lake was welcomed by many due to its relative warmth compared to the air. For me, the lake was kind of gross — it smelled bad and had a funny taste to it. It was also pretty murky. Nonetheless, we all jumped in and waited for the start. I got to start with SF, as they grouped all females and males aged 39 and under together.
Sitting in the water, I can honestly say I just wanted to get out. Once the wave officially started, I was feeling very cold and my muscles just did not want to work. However, I persisted and swam. It wasn’t my best swim — I just went — neither fast nor slow. I ended up with a 27-minute swim. Not my best work, but considering the circumstances it was alright.
I was still frozen as I made my way through transition. I saw SF there — we had finished with approximately the same swim. We exchanged words about how cold it was and sort of “took our time”. I think the first transition was around 2 minutes for me, as I couldn’t get my wetsuit off. SF and I started the bike leg at the same time.
The bike was FREEZING. I was soaking wet and now I had to go out for over an hour in 50 degree weather — wearing shorts! SF was about 2 minutes ahead of me the entire time, and I wanted to make sure I kept up with him. I was faithfully putting out 200 watts the entire time, yet because I neglected to use the aero position, my average speed was only 20 mph. Additionally, I was convinced that my third chain-ring was broken because I could not shift into it — it later turned out my hands were just too frozen to apply enough force to switch it!
For the entire bike I was just counting down the miles. I was cold and sniffling the entire time. Also, a misty rain started and obstructed my view. I hate riding in wet conditions. I am always afraid I’ll crash!
Thankfully, I did not crash. I made it to transition — frozen — and thankful that the last leg of the race was here. I threw on my running shoes and, like a robot, started the run. For the first half mile, I could not feel my toes. They were numb. Eventually, the feeling *did* come back, but I was just done. I started my run a little too fast, at around 7:15 a mile; I wanted a 7:30 pace for the race. Starting out too fast was to my detriment because I eventually slowed to a 7:50 pace. I was just cold and tired and still oh-so-done.
The run course is an out and back. I didn’t think that I’d place *at all* considering my lack of preparation, but on the way back, when I stopped seeing girls in my age group, I started to think “hey, I might actually have a chance!” For the last two miles I was just daydreaming about actually being on the podium and being in disbelief about it. I remember telling myself “you haven’t made it yet, so focus on the task at hand”, and I just kept running. I wanted a 2:30 overall time, but I had missed my run goal due to my 7:50 pace and was actually on track for just under 2:40. I didn’t think I’d get a good placement.
Then at the last 100 yards, a girl who looked about my age ZOOMED past me. She crossed the line 2 seconds before me. 2 seconds! Once I saw her overtake me I did put in the sprint rockets, but she just snuck up on me! I thought for sure that she had taken any chance of a podium placement from me and I was a bit disappointed. I later found out that she was in a different age-group than me (yay!) so I actually did place!
But, thankfully she did sprint ahead of me, which subsequently forced me to sprint as well, because the 4th place age grouper was only 10 seconds behind me. If it hadn’t been for our sprinter superstar, I’m sure this other girl would have overtaken me. I was super lucky.
In the end, I grabbed a water and waited for SF. He was only 5 minutes behind me. Then we got some food and found our clothes. Someone went into cardiac arrest just seconds after they crossed the finish line. It was crazy. I had never seen anything like that! I think they were OK, but the person was literally dead for at least a minute. It was quite an adventure.
It was awesome to win the medal. I am still in disbelief. So far I have two podium finishes, which is awesome.
This race also qualified me for the Triathlon National Championships, and I will do them in 2014. Then, if I can get a respectable time, I can be a part of Team USA, which is pretty damn cool. We’ll see though — it’s still a LOOOONG way ahead.
Overall, I was pleasantly surprised.
Lake Zurich Triathlon: Race Report
Posted on July 15, 2013
Yesterday was the Lake Zurich Triathlon (July 14, 2013). Overall, it was a good time and I had a lot of fun. The field was a bit more competitive than last year’s, but that just made it even more fun.
SF and I were able to book a room at the hotel directly across the street, so travel to the bike check-in was quick. We got there at around 5:30 and the parking lot was already starting to fill up. Thankfully we got a spot and started to unpack.
Now, it must have been the morning humidity or *something* because the mosquitos were on the attack. The bugs were EVERYWHERE. I got at least 5 bites while actively trying to swat them away, and SF even got bitten in the middle of his forehead. Jerks. I got bit on my shoulder (twice), arm, leg, and butt! On my butt! Darn mosquitos!
Check-in was uneventful. I was lucky and got the endcap spot — the spot I wanted last year but which was taken by another girl. Everyone seemed friendly. The referee had me get a missing end-cap for my aero bars, but other than that I just set up my stuff and waited. I didn’t want anyone to move my things while I was away.
About 20 minutes before the closing of transition, SF and I made our way to the lake. The water temperature was 78 degrees and wetsuits were (surprisingly) legal; however, we did not swim with wetsuits. After a quick warmup we waited on the beach for our respective waves.
It was a running start across the mat — a little strange but whatever, I went with it. The swim was nice and the water was perfect. Aside from the slow people, it seemed to clear out about a third of the way through, and I was on my own. The water was flat and sighting was easy. I caught up to people who were 3 waves ahead of me!
At the beach, I ran to transition and grabbed the bike, and I was off. The bike course was perfect! So smooth and lovely. I was in aero nearly the entire time (aside from a few sharp turns here and there). I was really feeling it and was pushing around 200 watts. Too bad for me my bike computer was not fastened in the cradle very well and it kept disconnecting and turning off, but from what I could see after I realized it was loose, I was outputting around 200 watts. It’s a bit lower than I wanted, but whatever. Next time I can do better.
There were times where I was pedaling air, as I like to say, meaning that I should have switched to a higher gear but failed to do so. I just never went into my third chain ring. Oh well. I should have in retrospect, but lack of bike training has relegated me to just coasting when I am going too fast. Had I shifted up, I’d have gotten my average past 20 mph, but alas it was a mere 19.8. My bike split was 1:15. Meh.
However, the bike was still nice and I was ready to run. I threw on my shoes and was off (:57 transition). I felt good coming out of the gate — the legs were still fresh, but soon after (around mile 2) the humidity started to affect me. I could *really* feel it and had to slow down. I slowed to about an 8 min/mile pace and was kicking myself for not staying in the 7s like I usually do. I was just overheating. I drank at every water station (which were sparsely placed along the trail) and just pushed through. On occasion, I slowed to an 8:30 min/mile, and dug deep to stay around 8 minutes. It was tough, especially that last 1.2 miles.
The run was where I went from second place to fourth in my age group. First I remember a 25 year old girl zooming past me at around mile 2 or 3. It seemed like the humidity wasn’t bothering her at all! I attempted to pace off of her for a while but only managed 30 seconds. She was clipping along at 7:30 min/mile pace. Good for her! I thought she was going to win.
Then at around mile 4 I was passed by a 27 year old. She wasn’t moving overly fast, but just slightly faster than me. At mile 4 I was dying, so I didn’t try to pass her — though had the weather been nicer or had I been cooler I might have tried. At this point I figured I was at least third place, but maybe fourth, because I never assume I am first for anything.
In the end I finished 4th in my age group with a time of 2:39. Not overly bad, but not my greatest. I have lots of room for improvement. I have 9 more minutes to chop off for an awesome race. One day I will get below 2:30.
After I crossed the finish line I went to the showers immediately. Lake Zurich Paulus Park has showers and they were awesome. This is one minor reason why I love this race! I stood in the cold shower for at least 10 minutes — and still, I was sweating. However, it felt great to get out of my nasty triathlon clothes.
I really like the Lake Zurich Triathlon — it is one of my favorite races. The race is run very well and there is lots of crowd support. It is also a smaller race — maybe 500 participants, which I truly appreciate. It makes the race so much more fun as you’re not battling it out with slow pokes (nothing against slow people, it just detracts from *my* own race experience). Paulus Park and the surrounding area are beautiful — smooth bike course, fresh and clean lake, and a lovely run course around the lake. Lovely. As long as I stay in Chicago I will be doing the Lake Zurich Triathlon.
Next year, I am in the 30-34 age group — one of the most competitive! This is great! Competition will drive me towards excellence.