Technology

Running a dotnet core web application in Ubuntu Core

For a current project I am looking into running a dotnet core web application in Ubuntu Core, which requires some extra work compared with running in Docker or on bare metal. To illustrate the process I will use a simple dotnet webapi and a fresh install of Ubuntu 17.10; the snap will be deployed and used on the same operating system. To ensure consistent builds we will use an LXC container for building, which makes the process less dependent on the host operating system, and the guide should be usable with only minor adjustments on other Linux distros.

Requirements

It is possible to do the entire development process in a container, but for this purpose I prefer to develop on my regular OS and only deploy to a snap. That means that we need to install dotnet core on our system. The process varies slightly depending on your distribution, and I will limit this guide to Ubuntu; if you are using another distro, Microsoft has documentation on how to install on all supported distributions.

Dotnet Core

The following commands should install the dotnet core sdk on a system running Ubuntu 17.04 or 17.10. If you are running another version, the Microsoft documentation has repository links for it.

curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg

sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-zesty-prod zesty main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-get update
sudo apt-get install -y dotnet-sdk-2.0.0

LXC containers

To have a clean build environment we need to install and prepare an LXC container. The following commands should install lxc and create a container with the dotnet core sdk and the snapcraft tools.

sudo apt-get install -y lxc
sudo lxc launch ubuntu:16.04 snapcraft -c security.privileged=true
sudo lxc config device add snapcraft homedir disk source=/home/$USER path=/home/ubuntu
sleep 5 # wait for the container to initialize
sudo lxc exec snapcraft -- apt update 
sudo lxc exec snapcraft -- apt install -y snapcraft curl
sudo lxc exec snapcraft -- sh -c 'curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > /home/ubuntu/microsoft.gpg'
sudo lxc exec snapcraft -- sudo mv /home/ubuntu/microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg 
sudo lxc exec snapcraft -- sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'

sudo lxc exec snapcraft -- apt update 
sudo lxc exec snapcraft -- apt install -y dotnet-sdk-2.1.3

The commands in the rest of this project will assume that you have Visual Studio Code installed on your system. If you prefer another editor you can replace all calls to code with your editor of choice.

Project definition and build config

As previously mentioned, we will create a basic webapi and use it as the example for our snap.

The following commands initialise a basic webapi in the folder dotnetSnap/src. They also create a Makefile and open it in Visual Studio Code.

mkdir -p dotnetSnap/src
cd dotnetSnap/src
dotnet new webapi
touch Makefile
code Makefile
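
For reference, the webapi template scaffolds a single ValuesController roughly like the one below; the exact contents depend on your SDK version, so treat this as a sketch of what to expect rather than the literal template output.

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

namespace src.Controllers
{
    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
        // GET api/values
        [HttpGet]
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }

        // GET api/values/5
        [HttpGet("{id}")]
        public string Get(int id)
        {
            return "value";
        }
    }
}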

Since snapcraft's make plugin calls make all, we need to ensure that make all runs our build step. In this case we want our dotnet project to be published as a self-contained application targeting x64 Ubuntu. After snapcraft runs make all it runs make install, which copies the binaries to a folder under bin in the snap.

all: build

build:
    @dotnet publish --self-contained -o bin/publish \
    -c Release --runtime ubuntu.16.04-x64

install:
    @mkdir -p $(DESTDIR)/bin
    @cp -r bin/publish $(DESTDIR)/bin/

clean:
    @rm -rf bin

With our dotnet project ready for deployment we will shift our focus towards Snapcraft. The setup required here is fairly limited and the following commands should create a new snap project in your dotnetSnap folder.

cd ..
snapcraft init
code snap/snapcraft.yaml

The snapcraft definition needs some minor editing to make it usable for our purpose. Mainly, we need to add an app that runs the published application, which gets the default name src after the project folder, and we need to define a part that includes our build step.

We are using the make plugin which calls:

make all
make install

This will compile and publish our project to the snap's binary folder. We also add libunwind8 and libicu55, which are required for the dotnet application to function properly.

Our snapcraft.yaml will look like this:

name: Dotnet_snap # you probably want to 'snapcraft register <name>'
version: '0.1' # just for humans, typically '1.2+git' or '1.3.2'
summary: A simple example snap for dotnet in snaps
description: A simple example snap for dotnet in snaps
grade: devel # must be 'stable' to release into candidate/stable channels
confinement: devmode # use 'strict' once you have the right plugs and slots

apps:
  testapi:
    command: bin/publish/src

parts:
  testapi:
    plugin: make
    source: src/
    stage-packages:
      - libunwind8
      - libicu55

Building, installing and running

When we have configured our application, we use the following command to build the project.

sudo lxc exec snapcraft -- sh -c "cd /home/ubuntu/dotnetSnap; snapcraft"

This will create Dotnet_snap_0.1_amd64.snap which can be installed on your system using:

sudo snap install Dotnet_snap_0.1_amd64.snap --devmode

We should now be able to run our api using:

sudo Dotnet_snap.testapi

Hopefully this guide has provided some help in running your dotnet core applications in a snap. The code from my project is available on GitHub.

Moving to Squarespace

I do realise that most of my previous posts have been about hosting changes, and to continue that tradition this post will discuss my recent move to Squarespace.

This change is another step in my ongoing process of simplifying my online presence. Squarespace will allow me to use one host for everything apart from source code, where I will continue to use a combination of Bitbucket and GitHub.

The move will replace my current solution, which consists of a static blog hosted on S3 and a photography portfolio on SmugMug. The benefits of that setup are its adaptability and the level of control it offers. However, since I seldom use those benefits, a simple managed solution will lower the threshold for posting.

Another major benefit of moving is having a web frontend for editing and publishing, which further simplifies the process. I believe that I could achieve similar results using a completely self-hosted solution, but I personally don't feel that I have the time or the interest to build one.

Once I had decided on moving to a managed solution I was somewhat spoiled for choice, since there are a lot of alternatives. I ended up with Squarespace for a few reasons: mainly, I found that the service included all the things I required, and it is a reputable host with a lot of support from photographers and influencers that I follow.

So after a few days of testing the Squarespace platform I decided to make the move, which entailed a fairly limited amount of work: mainly moving relevant old posts to the new system and deciding on a good way to display my pictures.
There are two things I will miss after this move: a complete solution for syntax highlighting and a nice way to display my book reviews. I am, however, looking into both to try and find a solution.

What you should be told on the first day of any CS course

I can't count the number of times I've been told that for developers the tools don't matter: just give us a prompt and a basic text editor and away we go.

The idea that the tools you use are inconsequential is, at least as I see it, one of the worst misconceptions about development, especially since it's spread by old developers and university teachers to fresh minds looking to write their first lines of code.

As an example, my current projects include a lot of back and forth between Python, Excel and MSSQL, and as you can imagine this leads to a good deal of semi-manual string manipulation.

The work pattern of copying large amounts of strings around while making minute changes to the order of characters is nothing specific to the projects I am currently working on; I'd say it's one of the cornerstones of any development project.

At the very least you should refactor your code before finishing a project.

These situations don't necessarily require a decent text editor, but I think anyone who has tried refactoring a reasonably sized program in nano would agree that it's not fun.

The same task done in a fully equipped, advanced text editor like Sublime, Emacs, vi or Atom, however, is relatively joyous.

Finally, even though I dislike the notion that advanced tools are unnecessary for developers, I do believe that the choice of those tools is up to each of us.

I, for example, use Sublime for almost all of my development, with the sole exception of C#, which I write in the frankly excellent Visual Studio.

To sum up, the advice I would have liked to be given on my first day of CS is:

"Start this first day with trying out emacs,vi or sublime and use one of them for the next 3 years." 

I believe that the single best thing I've done for my development skills and workplace sanity is to use an advanced text editor.

Task based concurrency in C#


I have recently switched over to a company that mainly works with C# and Python.
One of my first assignments has been to implement a minimal scheduler, so I have been trying to wrap my head around task based concurrency in C#.
This was necessary since I decided upon a "worker -- producer" setup with a producer that fetches "tasks" from a database, and a collection of workers that execute the tasks independently. This provides the benefit of allowing us to scale the number of workers to fit the current system's CPU and memory resources.
This approach is not without limitations, but most of them are basic thread safety concerns.
The main issue, however, has been my somewhat lacking experience using tasks, which has led to this post.
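
As a rough illustration of that scaling point, the workers could be started with something like the snippet below; Worker and its Run loop are placeholders for the worker object discussed further down, not the actual implementation.

// Sketch only: start one worker per CPU core.
// Worker stands in for the worker object described below; requires System.Linq.
var workers = Enumerable.Range(0, Environment.ProcessorCount)
    .Select(_ => Task.Run(() => new Worker().Run()))
    .ToArray();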

Real stuff


Below I will try and explain the basics for my use case:

The goal is to run a specific function in an object as a new task and await the result. The command used will be similar to the following:

// Stand-in for real asynchronous work; an actual implementation would await something.
public async Task<bool> Execute(int n)
{
    return n == 0;
}

And the basic caller function will be similar to the following:

public async void Run()
{
    while(true)
    {
        if(!tasks.Empty)
        {
            Command c = tasks.Next();
            // MAGIC PART
        }
    }
}

The Command class in the example implements my Execute function, and the Run function runs in a worker object.
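
For context, the tasks collection that Run reads from is assumed to look roughly like the interface below; this is a sketch of my assumptions, not code from the actual project.

// Sketch only: the queue shape that Run relies on.
public interface ITaskQueue
{
    bool Empty { get; }   // true when no commands are waiting
    Command Next();       // removes and returns the next command to execute
}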

My initial attempts at "magic" ended up with the task running synchronously, with the calling thread blocking and awaiting the result. This behavior was close to the desired result, however there was no way to specify a wait timeout or to do other processing on the worker thread.

// Initial magic: Execute runs synchronously on the calling thread
Task<bool> task = c.Execute(0);

To achieve my desired result this function call had to be adapted; I ended up creating a "runner" task that handles the waiting for the intended task.

var runner = Task.Factory.StartNew(() =>
{
    // Start Execute on a thread pool thread and hand back its task.
    var runningTask = c.Execute(0);
    return runningTask;
}).Unwrap(); // Unwrap so that runner completes when Execute itself completes.

This "runner" task can then be ignored until the worker is free to await the result. Or as in my case it can be used to create a timeout loop for the task.

while (waited < timeOut)
{
    // Wait up to 500 ms for the task to finish before checking again.
    if (runner.Wait(500) || runner.IsCompleted)
    {
        break;
    }
    Thread.Sleep(1);
    waited += 500;
}

This loop "pools" the task every 500 ms to see if it has completed, and if the timeout value is reached I am free to cancel the task.